Bing-Yuan Cao

Optimal Models and Methods with Fuzzy Quantities

Studies in Fuzziness and Soft Computing, Volume 248

Editor-in-Chief: Prof. Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, ul. Newelska 6, 01-447 Warsaw, Poland. E-mail: [email protected]

Further volumes of this series can be found on our homepage: springer.com

Vol. 231. Michal Baczynski, Balasubramaniam Jayaram: Fuzzy Implications, 2008. ISBN 978-3-540-69080-1
Vol. 232. Eduardo Massad, Neli Regina Siqueira Ortega, Laécio Carvalho de Barros, Claudio José Struchiner: Fuzzy Logic in Action: Applications in Epidemiology and Beyond, 2008. ISBN 978-3-540-69092-4
Vol. 233. Cengiz Kahraman (Ed.): Fuzzy Engineering Economics with Applications, 2008. ISBN 978-3-540-70809-4
Vol. 234. Eyal Kolman, Michael Margaliot: Knowledge-Based Neurocomputing: A Fuzzy Logic Approach, 2009. ISBN 978-3-540-88076-9
Vol. 235. Kofi Kissi Dompere: Fuzzy Rationality, 2009. ISBN 978-3-540-88082-0
Vol. 236. Kofi Kissi Dompere: Epistemic Foundations of Fuzziness, 2009. ISBN 978-3-540-88084-4
Vol. 237. Kofi Kissi Dompere: Fuzziness and Approximate Reasoning, 2009. ISBN 978-3-540-88086-8
Vol. 238. Atanu Sengupta, Tapan Kumar Pal: Fuzzy Preference Ordering of Interval Numbers in Decision Problems, 2009. ISBN 978-3-540-89914-3
Vol. 239. Baoding Liu: Theory and Practice of Uncertain Programming, 2009. ISBN 978-3-540-89483-4
Vol. 240. Asli Celikyilmaz, I. Burhan Türksen: Modeling Uncertainty with Fuzzy Logic, 2009. ISBN 978-3-540-89923-5
Vol. 241. Jacek Kluska: Analytical Methods in Fuzzy Modeling and Control, 2009. ISBN 978-3-540-89926-6
Vol. 242. Yaochu Jin, Lipo Wang: Fuzzy Systems in Bioinformatics and Computational Biology, 2009. ISBN 978-3-540-89967-9
Vol. 243. Rudolf Seising (Ed.): Views on Fuzzy Sets and Systems from Different Perspectives, 2009. ISBN 978-3-540-93801-9
Vol. 244. Xiaodong Liu, Witold Pedrycz: Axiomatic Fuzzy Set Theory and Its Applications, 2009. ISBN 978-3-642-00401-8
Vol. 245. Xuzhu Wang, Da Ruan, Etienne E. Kerre: Mathematics of Fuzziness – Basic Issues, 2009. ISBN 978-3-540-78310-7
Vol. 246. Piedad Brox, Iluminada Castillo, Santiago Sánchez Solano: Fuzzy Logic-Based Algorithms for Video De-Interlacing, 2010. ISBN 978-3-642-10694-1
Vol. 247. Michael Glykas: Fuzzy Cognitive Maps, 2010. ISBN 978-3-642-03219-6
Vol. 248. Bing-Yuan Cao: Optimal Models and Methods with Fuzzy Quantities, 2010. ISBN 978-3-642-10710-8

Bing-Yuan Cao

Optimal Models and Methods with Fuzzy Quantities


Author
Bing-Yuan Cao
Guangzhou University
Guangzhou Higher Education Mega Center
No. 230 Waihuan Xi Road
Guangzhou
People's Republic of China

ISBN 978-3-642-10710-8

e-ISBN 978-3-642-10712-2

DOI 10.1007/978-3-642-10712-2

Studies in Fuzziness and Soft Computing

ISSN 1434-9922

Library of Congress Control Number: 2009939991

© 2010 Springer-Verlag Berlin Heidelberg

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Typeset & Cover Design: Scientific Publishing Services Pvt. Ltd., Chennai, India.

Printed on acid-free paper

springer.com

To my wife Wang Pei-hua

Preface

I submitted a paper titled "Fuzzy Geometric Programming" to the Proceedings of the Second International Fuzzy Systems Association (IFSA) Congress (Tokyo) in 1987, which was later published, after rigorous selection, in Fuzzy Sets and Systems. In 1989, I brought up a "study on a non-distinct self-regression forecast model" for discussion, using Zadeh's theory of fuzzy sets. From then on, I have done research on optimal models with fuzzy information quantities. In this book, I regard the model with fuzzy quantities, including fuzzy coefficients and fuzzy variables, as the main line, introducing the modeling of various problems and their practical examples, completely and clearly, in several fields. Many of my papers are indexed in SCI (Science Citation Index), EI (Engineering Index) and ISTP (Index to Scientific & Technical Proceedings), and reviewed or abstracted in Mathematical Reviews and Zentralblatt MATH.

The research and writing have been funded three times by the National Natural Science Foundation of China (1997, 2003, 2008). They have also been supported by the Science and Technology Project of Hunan Province, the Science Research Foundation of Changsha Electric Power University, the "211 Project" Foundation and the Li Ka-Shing Science Development Foundation of Shantou University, and the Scientific Research Foundation of Guangzhou University. The research project successively won the Third Award of Guangdong Science and Technology from the Government of Guangdong Province (2005) and its Third Award of Excellent Papers in Natural Science (2003).

The book contains ten chapters, as follows:

Chapter 1. Prepare Knowledge;
Chapter 2. Regression and Self-regression Models with Fuzzy Coefficients;
Chapter 3. Regression and Self-regression Models with Fuzzy Variables;
Chapter 4. Fuzzy Input-Output Model;
Chapter 5. Fuzzy Cluster Analysis and Fuzzy Recognition;
Chapter 6. Fuzzy Linear Programming;
Chapter 7. Fuzzy Geometric Programming;
Chapter 8. Fuzzy Relative Equation and Its Optimizing;
Chapter 9. Interval and Fuzzy Differential Equations;
Chapter 10. Interval and Fuzzy Functional and Their Variation.

The book can be used not only as teaching material or a reference book for undergraduates, master's and doctoral students in applied mathematics, computer science, artificial intelligence, fuzzy information processing and automation, operations research, system science and engineering, and the like, but also as a reference book for researchers in these fields, particularly for researchers in soft science.

I appreciate support from the National Natural Science Foundation of China (No. 70771030, No. 70271047 and No. 79670012) and the Science Foundation of Guangzhou University. Some sections of this book have been taken from papers with my doctoral student J.H. Yang (Sections 8.5 and 8.6) and master's students Z.X. Zhu (Section 2.5), M.J. Liu (Section 6.2), Y.F. Tan (Section 6.3), Q.P. Gu (Section 6.7) and X.G. Zhou (Section 8.4), for whose contributions I am grateful. Besides, I thank master's students L.Q. Chen, H.Q. Qiu, X.J. Cui, Y.C. Hou, R.J. Hu, J. Tan, Y.F. Zhang, X.W. Zhou and G.C. Zhu for their earnest proofreading, Associate Professor P.H. Wang for examination and revision of the final proof, and F.H. Cao for the typewriting. My heartfelt thanks also go to Springer for providing a fine platform and to the editors for their hard work.

October 1, 2006

Bing-Yuan Cao
Guangzhou

Contents

1 Prepare Knowledge .......... 1
   1.1 Fuzzy Sets .......... 1
   1.2 Operations in Fuzzy Sets .......... 5
   1.3 α-Cut and Convex Fuzzy Sets .......... 9
   1.4 Fuzzy Relativity and Operator .......... 12
   1.5 Fuzzy Functions .......... 18
   1.6 Three Mainstream Theorems in Fuzzy Mathematics .......... 21
   1.7 Five-Type Fuzzy Numbers .......... 27

2 Regression and Self-regression Models with Fuzzy Coefficients .......... 33
   2.1 Regression Model with Fuzzy Coefficients .......... 33
   2.2 Self-regression Models with (·, c)-Fuzzy Coefficients .......... 39
   2.3 Exponential Model with Fuzzy Parameters .......... 44
   2.4 Regression and Self-regression Models with Flat Fuzzy Coefficients .......... 50
   2.5 Linear Regression with Triangular Fuzzy Numbers .......... 57

3 Regression and Self-regression Models with Fuzzy Variables .......... 63
   3.1 Regression Model with T-Fuzzy Variables .......... 63
   3.2 Self-regression Model with T-Fuzzy Variables .......... 71
   3.3 Regression Model with (·, c) Fuzzy Variables .......... 76
   3.4 Self-regression with (·, c) Fuzzy Variables .......... 78
   3.5 Nonlinear Regression with T-Fuzzy Data to be Linearized .......... 85
   3.6 Regression and Self-regression Models with Flat Fuzzy Variables .......... 91

4 Fuzzy Input-Output Model .......... 95
   4.1 Fuzzy Input-Output Mathematical Model .......... 95
   4.2 Input-Output Model with T-Fuzzy Data .......... 98
   4.3 Input-Output Model with Triangular Fuzzy Data .......... 108

5 Fuzzy Cluster Analysis and Fuzzy Recognition .......... 117
   5.1 Fuzzy Cluster Analysis .......... 117
   5.2 Fuzzy Recognition .......... 127

6 Fuzzy Linear Programming .......... 139
   6.1 Fuzzy Linear Programming and Its Algorithm .......... 139
   6.2 Expansion on Optimal Solution of Fuzzy Linear Programming .......... 146
   6.3 Discussion of Optimal Solution to Fuzzy Constraints Linear Programming .......... 154
   6.4 Relation between Fuzzy Linear Programming and Its Dual One .......... 159
   6.5 Antinomy in Fuzzy Linear Programming .......... 165
   6.6 Fuzzy Linear Programming Based on Fuzzy Numbers Distance .......... 171
   6.7 Linear Programming with L-R Coefficients .......... 177
   6.8 Linear Programming Model with T-Fuzzy Variables .......... 182
   6.9 Multi-Objective Linear Programming with T-Fuzzy Variables .......... 187

7 Fuzzy Geometric Programming .......... 193
   7.1 Introduction of Fuzzy Geometric Programming .......... 193
   7.2 Lagrange Problem in Fuzzy Geometric Programming .......... 201
   7.3 Antinomy in Fuzzy Geometric Programming .......... 206
   7.4 Geometric Programming with Fuzzy Coefficients .......... 214
   7.5 Geometric Programming with (α, c) Coefficients .......... 218
   7.6 Geometric Programming with L-R Coefficients .......... 224
   7.7 Geometric Programming with Flat Coefficients .......... 229
   7.8 Geometric Programming with Fuzzy Variables .......... 235
   7.9 Dual Method of Geometric Programming with Fuzzy Variables .......... 240
   7.10 Multi-Objective Geometric Programming with T-Fuzzy Variables .......... 248

8 Fuzzy Relative Equation and Its Optimizing .......... 255
   8.1 (∨, ∧) Fuzzy Relative Equation .......... 255
   8.2 (∨, ·) Fuzzy Relative Equation .......... 261
   8.3 Algorithm Application and Comparing in (∨, ·) Relative Equations .......... 266
   8.4 Lattice Linear Programming with (∨, ·) Operator .......... 273
   8.5 Fuzzy Relation Geometric Programming with (∨, ∧) Operator .......... 280
   8.6 Fuzzy Relation Geometric Programming with (∨, ·) Operator .......... 286

9 Interval and Fuzzy Differential Equations .......... 293
   9.1 Interval Ordinary Differential Equations .......... 293
   9.2 Fuzzy-Valued Ordinary Differential Equations .......... 299
   9.3 Ordinary Differential Equations with Fuzzy Variables .......... 306
   9.4 Fuzzy Duoma Debted Model .......... 309
   9.5 Model for Fuzzy Solow Growth in Economics .......... 315
   9.6 Application of Fuzzy Economic Model .......... 320

10 Interval and Fuzzy Functional and Their Variation .......... 327
   10.1 Interval Functional and Its Variation .......... 327
   10.2 Fuzzy-Valued Functional and Its Variation .......... 332
   10.3 Convex Interval and Fuzzy Function and Functional .......... 338
   10.4 Convex Fuzzy-Valued Function and Functional .......... 345
   10.5 Variation of Condition Extremum on Interval and Fuzzy-Valued Functional .......... 350
   10.6 Variation of Condition Extremum on Functional with Fuzzy Function .......... 356

References .......... 363
Index .......... 373

1 Prepare Knowledge

This chapter presents definitions of fuzzy sets and their operation properties, α-cut sets and convex fuzzy sets. Besides, based on an elaboration of fuzzy relations, it introduces the fuzzy operators related to this chapter, and it exhibits fuzzy functions. At the same time, it describes the three mainstream theorems of fuzzy mathematics: the expansion principle, the decomposition theorem and the representation theorem. Finally, it inquires into five types of fuzzy numbers and their operations.

1.1 Fuzzy Sets

In order to introduce the concept of fuzzy sets, this chapter first describes a foundational concept of fuzzy set theory: the universe. The so-called universe is the set of all objects under consideration, commonly written with capital letters X, Y, Z, etc. Fuzzy sets differ from classic ones and have a strict mathematical definition, which we give as follows.

Definition 1.1.1. A so-called fuzzy subset Ã of a set X is a set

Ã = {(μÃ(x), x) | x ∈ X},

where μÃ(x) is a real number in the interval [0, 1], called the membership degree of the point x to Ã. The function

μÃ : X → [0, 1], x ↦ μÃ(x)

is called the membership function of the fuzzy set Ã. At the same time, fuzzy subsets are also often called fuzzy sets.

From Definition 1.1.1 of fuzzy sets, the next few conclusions obviously hold:

B.-Y. Cao: Optimal Models & Meth. with Fuzzy Quantities, STUDFUZZ 248, pp. 1–32. © Springer-Verlag Berlin Heidelberg 2010. springerlink.com


(1) The concept of fuzzy sets is an expansion of the concept of classical sets. If F(X) denotes all fuzzy sets on X, i.e.,

F(X) = {Ã | Ã is a fuzzy set on X},

then P(X) ⊂ F(X), where P(X) is the power set of X, i.e.,

P(X) = {A | A is a classic set on X};

that is, if the membership function of a fuzzy set Ã takes only the two values 0 and 1, then Ã reduces to a classic set of X.

(2) The concept of the membership function is an expansion of the concept of the characteristic function. When A ∈ P(X) is an ordinary subset of X, the characteristic function of A is

χA(x) = 1, x ∈ A (membership degree of x for A is 1); χA(x) = 0, x ∉ A (membership degree of x for A is 0).

This means that in fuzzy sets, the nearer the membership degree μÃ(x) is to 1, the greater the degree to which x belongs to Ã; whereas the nearer μÃ(x) is to 0, the smaller the degree to which x belongs to Ã. If the value region of μÃ(x) is {0, 1}, then the fuzzy set Ã is an ordinary set A, and the membership function μÃ(x) is the characteristic function χA(x).

(3) We call fuzzy sets in F(X)\P(X) true fuzzy sets.

Several representation methods for fuzzy sets are shown as follows.

1° Zadeh's representation of a fuzzy set. If X is a finite set, let the universe X = {x1, x2, · · ·, xn}. A fuzzy set is

Ã = μÃ(x1)/x1 + μÃ(x2)/x2 + · · · + μÃ(xn)/xn = Σ_{i=1}^{n} μÃ(xi)/xi,

where the symbol "Σ" is no longer a numerical sum and μÃ(xi)/xi is not a fraction; they have only symbolic meaning, namely that the membership degree of the point xi with respect to the fuzzy set Ã is μÃ(xi). If X is an infinite set, a fuzzy set on X is

Ã = ∫_{x∈X} μÃ(x)/x.

Similarly, the sign "∫" is not an integral any more; it means only an infinite logic sum, and the meaning of μÃ(x)/x is in accordance with the finite case.

2° When the universe X is a finite set, the fuzzy set represented as in Definition 1.1.1 is

Ã = {(μÃ(x1), x1), (μÃ(x2), x2), · · ·, (μÃ(xn), xn)}.

3° When the universe X is a finite set, the fuzzy set represented in vector form is

Ã = (μÃ(x1), μÃ(x2), · · ·, μÃ(xn)).


Remarkably, X and φ can also be seen as fuzzy sets on X: if the membership function is μÃ(x) ≡ 1 or μÃ(x) ≡ 0, then Ã is the complete set X or the empty set φ, respectively. An element whose membership degree is 1 definitely belongs to the fuzzy set; an element whose membership degree is 0 definitely does not belong to it. The elements whose membership values lie in (0, 1) form the non-distinct boundary of the fuzzy set.

When a fuzzy object is described using a fuzzy set, the choice of its membership function is the key. We now give three basic membership functions:

1. Partial small type (decreasing; Figure 1.1.1):

μÃ(x) = [1 + (a(x − c))^b]^(−1), when x ≥ c; μÃ(x) = 1, when x < c,

where c ∈ X is an arbitrary point and a > 0, b > 0 are two parameters.

2. Partial large type (increasing; Figure 1.1.2):

μÃ(x) = 0, when x ≤ c; μÃ(x) = [1 + (a(x − c))^(−b)]^(−1), when x > c,

where c ∈ X is an arbitrary point and a > 0, b > 0 are two parameters.

3. Normal type (middle type, Figure 1.1.3):

μÃ(x) = e^(−((x − a)/b)²),

where a ∈ X is an arbitrary value and b > 0 is a parameter.

[Figures 1.1.1–1.1.3 sketch the three types: Figure 1.1.1 (partial small type, dropping from 1 at x = c), Figure 1.1.2 (partial large type, rising after x = c) and Figure 1.1.3 (normal type, peaking at x = a over the interval [a − b, a + b]).]
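The three basic membership function types translate directly into code. Below is a minimal sketch; the function names are mine and the sample parameter values are illustrative assumptions, while the formulas are those given in the text:

```python
import math

# Sketch of the three basic membership function types of Section 1.1.
# Parameter names a, b, c follow the text; sample values are illustrative.

def partial_small(x, a, b, c):
    """Partial small type: membership 1 below c, decreasing for x >= c."""
    if x < c:
        return 1.0
    return 1.0 / (1.0 + (a * (x - c)) ** b)

def partial_large(x, a, b, c):
    """Partial large type: membership 0 up to c, increasing for x > c."""
    if x <= c:
        return 0.0
    return 1.0 / (1.0 + (a * (x - c)) ** (-b))

def normal_type(x, a, b):
    """Normal (middle) type: peaks at x = a, spread controlled by b."""
    return math.exp(-((x - a) / b) ** 2)

print(partial_small(0, a=1, b=2, c=5))  # 1.0 (below c)
print(partial_large(5, a=1, b=2, c=5))  # 0.0 (at c)
print(normal_type(3, a=3, b=2))         # 1.0 (at the peak)
```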

Obviously, Types 1 and 2 are dual, and their meaning is clear at a glance. Type 3 is a fuzzy set Ã of the numbers "sufficiently near to a"; according to the definition, this membership function of Ã is of the center type.

Example 1.1.1: Let X ⊆ R+ (R+ is the non-negative real number set). Regard age as the universe and take X = [0, 100]. Zadeh gave "oldness" Õ and "youth" Ỹ, whose two membership functions respectively are


μÕ(x) =
  0,                               0 ≤ x ≤ 50,
  [1 + ((x − 50)/5)^(−2)]^(−1),   50 < x ≤ 100,
  1,                               x > 100,

and

μỸ(x) =
  1,                               0 ≤ x ≤ 25,
  [1 + ((x − 25)/5)^2]^(−1),      25 < x ≤ 100,
  0,                               x > 100.

If some person's age is 28, then his membership degrees in "youth" and "oldness" respectively are

[1 + ((28 − 25)/5)^2]^(−1) = 0.735 and 0.

If some person's age is 55, then his membership degrees in "youth" and "oldness" respectively are

[1 + ((55 − 25)/5)^2]^(−1) = 0.027 and [1 + ((55 − 50)/5)^(−2)]^(−1) = 0.5.
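Example 1.1.1 can be checked mechanically. The sketch below transcribes the two piecewise membership functions from the example; only the function names are mine:

```python
# Zadeh's "oldness" and "youth" membership functions of Example 1.1.1.

def mu_old(x):
    if 0 <= x <= 50:
        return 0.0
    if x <= 100:
        return 1.0 / (1.0 + ((x - 50) / 5) ** -2)
    return 1.0

def mu_young(x):
    if 0 <= x <= 25:
        return 1.0
    if x <= 100:
        return 1.0 / (1.0 + ((x - 25) / 5) ** 2)
    return 0.0

print(round(mu_young(28), 3))  # 0.735
print(mu_old(28))              # 0.0
print(round(mu_young(55), 3))  # 0.027
print(mu_old(55))              # 0.5
```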

According to the three types of membership functions mentioned above, we can, to a certain extent, calculate the membership degree of a concrete object x. When high accuracy is not required, for a simple account, we can determine the membership degree by direct evaluation.

Example 1.1.2: Suppose X = {1, 2, 3, 4}, and these four elements constitute a "small number" set. Obviously, element 1 is a standard small number; it should belong to this set, and its membership degree is 1. Element 4 is not a small number; it should not belong to this set, its membership degree being 0. Element 2 still counts as small, say "eighty percent small", its membership degree being 0.8; element 3 is barely small, say "twenty percent small", its membership degree being 0.2. Write the fuzzy set of small numbers as Ã; its elements are still 1, 2, 3, 4, and at the same time the membership degree of each element in Ã is given:

Zadeh's representation is Ã = 1/1 + 0.8/2 + 0.2/3 + 0/4.
The ordered-pair representation is Ã = {(1, 1), (0.8, 2), (0.2, 3), (0, 4)}.
The vector method simply shows Ã = (1, 0.8, 0.2, 0).
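For a finite universe, a dictionary from elements to membership degrees captures a fuzzy set, and the three representations of Example 1.1.2 all follow from it. A small sketch; the variable names are mine:

```python
# The "small number" fuzzy set of Example 1.1.2 as a dict, with its
# Zadeh, ordered-pair and vector representations derived from it.

A = {1: 1.0, 2: 0.8, 3: 0.2, 4: 0.0}

zadeh = " + ".join(f"{mu}/{x}" for x, mu in A.items())
pairs = [(mu, x) for x, mu in A.items()]
vector = tuple(A.values())

print(zadeh)   # 1.0/1 + 0.8/2 + 0.2/3 + 0.0/4
print(pairs)   # [(1.0, 1), (0.8, 2), (0.2, 3), (0.0, 4)]
print(vector)  # (1.0, 0.8, 0.2, 0.0)
```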

1.2 Operations in Fuzzy Sets

Because the value region of the membership function of a fuzzy set, corresponding to the characteristic function of a distinct subset, is extended from {0, 1} to [0, 1], we have the following, similar to the way characteristic functions demonstrate the relations between distinct subsets.

Definition 1.2.1. Let Ã, B̃ ∈ F(X). For arbitrary x ∈ X, define

Inclusion: Ã ⊆ B̃ ⟺ μÃ(x) ≤ μB̃(x).
Equality: Ã = B̃ ⟺ μÃ(x) = μB̃(x).

From Definition 1.2.1, Ã = B̃ ⟺ Ã ⊆ B̃ and B̃ ⊆ Ã. That is to say, the inclusion relation is a binary relation on the fuzzy power set F(X) with the following properties:

(1) Ã ⊆ Ã (reflexivity).
(2) Ã ⊆ B̃ and B̃ ⊆ Ã ⟹ Ã = B̃ (antisymmetry).
(3) Ã ⊆ B̃ and B̃ ⊆ C̃ ⟹ Ã ⊆ C̃ (transitivity).

Since the relation "⊆" constitutes an order relation on F(X), (F(X), ⊆) is a partially ordered set. Again, as φ, X ∈ F(X), F(X) contains the maximum element X and the minimum element φ.

Definition 1.2.2. Let Ã, B̃ ∈ F(X). Then we define

Union: Ã ∪ B̃, whose membership function is μ_{Ã∪B̃}(x) = max{μÃ(x), μB̃(x)}.
Intersection: Ã ∩ B̃, whose membership function is μ_{Ã∩B̃}(x) = min{μÃ(x), μB̃(x)}.
Complement: Ãᶜ, whose membership function is μ_{Ãᶜ}(x) = 1 − μÃ(x).

Their graphs are shown in Figures 1.2.1–1.2.3.

[Figure 1.2.1 (Ã ∪ B̃), Figure 1.2.2 (Ã ∩ B̃) and Figure 1.2.3 (Ãᶜ) sketch the three operations on membership functions.]

Comparing with the operations of union, intersection and complement on distinct sets, we immediately discover that the fuzzy set operations are exactly the parallel definitions of the distinct set operations: Ã ∪ B̃ is the minimum fuzzy set containing both Ã and B̃, and Ã ∩ B̃ is the maximum fuzzy set contained in both Ã and B̃. According to the two kinds of cases, where the universe X is finite or infinite, the calculation formulas of the union, intersection and complement of fuzzy sets Ã and B̃ can be represented, respectively, as follows.

(1) The universe is X = {x1, x2, · · ·, xn}, Ã = Σ_{i=1}^{n} μÃ(xi)/xi and B̃ = Σ_{i=1}^{n} μB̃(xi)/xi; then

Ã ∪ B̃ = Σ_{i=1}^{n} (μÃ(xi) ∨ μB̃(xi))/xi,

Ã ∩ B̃ = Σ_{i=1}^{n} (μÃ(xi) ∧ μB̃(xi))/xi,

Ãᶜ = Σ_{i=1}^{n} (1 − μÃ(xi))/xi.

(2) X is an infinite set, Ã = ∫_{x∈X} μÃ(x)/x and B̃ = ∫_{x∈X} μB̃(x)/x; then

Ã ∪ B̃ = ∫_{x∈X} (μÃ(x) ∨ μB̃(x))/x,

Ã ∩ B̃ = ∫_{x∈X} (μÃ(x) ∧ μB̃(x))/x,

Ãᶜ = ∫_{x∈X} (1 − μÃ(x))/x.

Example 1.2.1: Suppose X = {x1, x2, x3, x4}, Ã = 1/x1 + 0.8/x2 + 0.2/x3 + 0/x4 and B̃ = 0/x1 + 0.2/x2 + 0.8/x3 + 0/x4; then

Ã ∪ B̃ = (1 ∨ 0)/x1 + (0.8 ∨ 0.2)/x2 + (0.2 ∨ 0.8)/x3 + (0 ∨ 0)/x4 = 1/x1 + 0.8/x2 + 0.8/x3 + 0/x4,

Ã ∩ B̃ = (1 ∧ 0)/x1 + (0.8 ∧ 0.2)/x2 + (0.2 ∧ 0.8)/x3 + (0 ∧ 0)/x4 = 0/x1 + 0.2/x2 + 0.2/x3 + 0/x4,

Ãᶜ = (1 − 1)/x1 + (1 − 0.8)/x2 + (1 − 0.2)/x3 + (1 − 0)/x4 = 0/x1 + 0.2/x2 + 0.8/x3 + 1/x4.
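On a finite universe, union, intersection and complement are just pointwise max, min and 1 − μ. The sketch below reproduces Example 1.2.1 (the string labels are mine):

```python
# Pointwise operations of Definition 1.2.2, checked on Example 1.2.1.

A = {"x1": 1.0, "x2": 0.8, "x3": 0.2, "x4": 0.0}
B = {"x1": 0.0, "x2": 0.2, "x3": 0.8, "x4": 0.0}

union = {x: max(A[x], B[x]) for x in A}
inter = {x: min(A[x], B[x]) for x in A}
comp_A = {x: round(1 - A[x], 1) for x in A}  # rounded to tame float noise

print(union)   # {'x1': 1.0, 'x2': 0.8, 'x3': 0.8, 'x4': 0.0}
print(inter)   # {'x1': 0.0, 'x2': 0.2, 'x3': 0.2, 'x4': 0.0}
print(comp_A)  # {'x1': 0.0, 'x2': 0.2, 'x3': 0.8, 'x4': 1.0}
```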


Example 1.2.2: Compute the union, intersection and complement of the fuzzy sets Ỹ and Õ in Example 1.1.1 of Section 1.1.

From the definition, we have

Ỹ ∪ Õ = ∫_{x∈X} (μỸ(x) ∨ μÕ(x))/x
= ∫_{0≤x≤25} 1/x + ∫_{25<x≤x*} [1 + ((x − 25)/5)^2]^(−1)/x + ∫_{x*<x≤100} [1 + ((x − 50)/5)^(−2)]^(−1)/x + ∫_{x>100} 1/x,

where x* ≈ 51;

Ỹ ∩ Õ = ∫_{50<x≤x*} [1 + ((x − 50)/5)^(−2)]^(−1)/x + ∫_{x*<x≤100} [1 + ((x − 25)/5)^2]^(−1)/x;

Ỹᶜ = ∫_{25≤x≤100} (1 − [1 + ((x − 25)/5)^2]^(−1))/x + ∫_{x>100} 1/x;

Õᶜ = ∫_{0≤x≤50} 1/x + ∫_{50<x≤100} (1 − [1 + ((x − 50)/5)^(−2)]^(−1))/x.

The union, intersection and complement operations on fuzzy sets can be extended to several fuzzy sets.

Definition 1.2.3. Suppose T is an index set and Ãt ∈ F(X) (t ∈ T). Then

μ_{∪_{t∈T} Ãt}(x) = sup_{t∈T} μ_{Ãt}(x), x ∈ X,

μ_{∩_{t∈T} Ãt}(x) = inf_{t∈T} μ_{Ãt}(x), x ∈ X.

Obviously, ∪_{t∈T} Ãt, ∩_{t∈T} Ãt ∈ F(X).

In particular, when T is a finite set,

μ_{∪_{t∈T} Ãt}(x) = max_{t∈T} μ_{Ãt}(x), x ∈ X,

μ_{∩_{t∈T} Ãt}(x) = min_{t∈T} μ_{Ãt}(x), x ∈ X.

Theorem 1.2.1. (F(X), ∪, ∩, c) satisfies the following properties:

(1) Idempotent law: Ã ∪ Ã = Ã, Ã ∩ Ã = Ã.
(2) Commutative law: Ã ∪ B̃ = B̃ ∪ Ã, Ã ∩ B̃ = B̃ ∩ Ã.
(3) Associative law: (Ã ∪ B̃) ∪ C̃ = Ã ∪ (B̃ ∪ C̃), (Ã ∩ B̃) ∩ C̃ = Ã ∩ (B̃ ∩ C̃).
(4) Absorptive law: (Ã ∪ B̃) ∩ Ã = Ã, (Ã ∩ B̃) ∪ Ã = Ã.
(5) Distributive law: (Ã ∪ B̃) ∩ C̃ = (Ã ∩ C̃) ∪ (B̃ ∩ C̃), (Ã ∩ B̃) ∪ C̃ = (Ã ∪ C̃) ∩ (B̃ ∪ C̃).
(6) 0-1 law: Ã ∪ φ = Ã, Ã ∩ φ = φ, Ã ∪ X = X, Ã ∩ X = Ã.
(7) Restoration law: (Ãᶜ)ᶜ = Ã.
(8) Dual law: (Ã ∪ B̃)ᶜ = Ãᶜ ∩ B̃ᶜ, (Ã ∩ B̃)ᶜ = Ãᶜ ∪ B̃ᶜ.

Proof: We prove Property (8) by way of example; the rest can be verified directly. For ∀x ∈ X, we have

μ_{(Ã∪B̃)ᶜ}(x) = 1 − μ_{Ã∪B̃}(x)
= 1 − max{μÃ(x), μB̃(x)}
= min{1 − μÃ(x), 1 − μB̃(x)}
= min{μ_{Ãᶜ}(x), μ_{B̃ᶜ}(x)}
= μ_{Ãᶜ∩B̃ᶜ}(x).

Hence (Ã ∪ B̃)ᶜ = Ãᶜ ∩ B̃ᶜ. Similarly, we can prove (Ã ∩ B̃)ᶜ = Ãᶜ ∪ B̃ᶜ.

It should be pointed out that the operations on fuzzy sets no longer satisfy the excluded-middle law. Namely, in general,

Ã ∪ Ãᶜ ≠ X, Ã ∩ Ãᶜ ≠ φ.

But we have

Ã ∪ Ãᶜ ⊇ 1/2, Ã ∩ Ãᶜ ⊆ 1/2,

where 1/2 denotes the constant fuzzy set with membership degree 1/2 everywhere.

Example 1.2.3: If μÃ(x) ≡ 0.5, then μ_{Ãᶜ}(x) ≡ 0.5, and

μ_{Ã∪Ãᶜ}(x) = max{0.5, 0.5} = 0.5 ≠ 1, μ_{Ã∩Ãᶜ}(x) = min{0.5, 0.5} = 0.5 ≠ 0.
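Example 1.2.3 fits in two lines of code, and shows why neither law can hold when every membership degree is 0.5:

```python
# Failure of the excluded-middle law (Example 1.2.3): with mu == 0.5
# everywhere, the union never reaches 1 and the intersection never 0.

mu_A = 0.5
mu_Ac = 1 - mu_A

print(max(mu_A, mu_Ac))  # 0.5, not 1
print(min(mu_A, mu_Ac))  # 0.5, not 0
```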

1.3 α-Cut and Convex Fuzzy Sets

1.3.1 α-Cut Set

Definition 1.3.1. Suppose Ã ∈ F(X). For ∀α ∈ [0, 1], we write

(Ã)α = Aα = {x | μÃ(x) ≥ α};

then Aα is said to be an α-cut set of the fuzzy set Ã, and α is called a confidence level. Again, we write

(Ã)α· = Aα· = {x | μÃ(x) > α};

Aα· is called a strong α-cut set of the fuzzy set Ã, and

(Ã)0· = A0· = {x | μÃ(x) > 0} = supp Ã;

A0· is called the support of the fuzzy set Ã. If the support supp Ã = {x} is a single-point set, then Ã is called a fuzzy point on X.

Intuitively, the meaning of Aα is that if the membership degree of x in Ã attains or exceeds the level α, then x is a qualified member; since all of these qualified members constitute Aα, it is a classical subset of X.

Example 1.3.1: Suppose Ã = 0.5/x1 + 0.7/x2 + 0/x3 + 0.9/x4 + 1/x5; then

at α = 1: A1 = {x5}, A1· = φ;
at α = 0.9: A0.9 = {x4, x5}, A0.9· = {x5};
at α = 0.7: A0.7 = {x2, x4, x5}, A0.7· = {x4, x5};
at α = 0.5: A0.5 = {x1, x2, x4, x5}, A0.5· = {x2, x4, x5};
at α = 0: A0 = X, A0· = {x1, x2, x4, x5}.

α-cut sets have the following properties.

Property 1.3.1
(1) (Ã ∪ B̃)α = Aα ∪ Bα, (Ã ∩ B̃)α = Aα ∩ Bα.
(2) (Ã ∪ B̃)α· = Aα· ∪ Bα·, (Ã ∩ B̃)α· = Aα· ∩ Bα·.
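The cuts of Example 1.3.1 can be recomputed with set comprehensions. A sketch; the function names are mine:

```python
# α-cuts and strong α-cuts of the finite fuzzy set in Example 1.3.1.

A = {"x1": 0.5, "x2": 0.7, "x3": 0.0, "x4": 0.9, "x5": 1.0}

def cut(A, alpha):
    """Ordinary α-cut: elements with membership degree >= alpha."""
    return {x for x, mu in A.items() if mu >= alpha}

def strong_cut(A, alpha):
    """Strong α-cut: elements with membership degree > alpha."""
    return {x for x, mu in A.items() if mu > alpha}

print(sorted(cut(A, 1.0)))         # ['x5']
print(sorted(cut(A, 0.9)))         # ['x4', 'x5']
print(sorted(strong_cut(A, 0.9)))  # ['x5']
print(sorted(strong_cut(A, 0.0)))  # the support: ['x1', 'x2', 'x4', 'x5']
```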


Proof: We prove only the first formula in (1).

(Ã ∪ B̃)α = {x | μ_{Ã∪B̃}(x) ≥ α} = {x | μÃ(x) ∨ μB̃(x) ≥ α} = {x | μÃ(x) ≥ α} ∪ {x | μB̃(x) ≥ α} = Aα ∪ Bα.

The proof of the other formulas is the same.

Property 1.3.2
(1) (∪_{t∈T} Ãt)α ⊇ ∪_{t∈T} (Ãt)α, (∩_{t∈T} Ãt)α = ∩_{t∈T} (Ãt)α, (Ãᶜ)α = (A(1−α)·)ᶜ.
(2) (∪_{t∈T} Ãt)α· = ∪_{t∈T} (Ãt)α·, (∩_{t∈T} Ãt)α· ⊆ ∩_{t∈T} (Ãt)α·, (Ãᶜ)α· = (A1−α)ᶜ.

The proof of Property 1.3.2 is easy; readers can carry it out themselves. It must be pointed out that the first formula in (1) and the second formula in (2) cannot be strengthened to equations.

Example 1.3.2: Let μ_{Ãn}(x) ≡ (1/2)(1 − 1/n), n = 1, 2, · · ·. Then μ_{∪_{n=1}^{∞} Ãn}(x) ≡ 1/2, so that (∪_{n=1}^{∞} Ãn)0.5 = X. But (Ãn)0.5 = φ (n ≥ 1), so that ∪_{n=1}^{∞} (Ãn)0.5 = φ. Therefore

(∪_{n=1}^{∞} Ãn)0.5 ≠ ∪_{n=1}^{∞} (Ãn)0.5.

Similarly, let μ_{B̃n}(x) ≡ (1/2)(1 + 1/n), n = 1, 2, · · ·. We can prove

(∩_{n=1}^{∞} B̃n)0.5· ≠ ∩_{n=1}^{∞} (B̃n)0.5·.

Definition 1.3.2. Suppose Ã ∈ F(X). The set Ker Ã = {x | μÃ(x) = 1} is called the kernel of the fuzzy set Ã, and Ã is a normal fuzzy set if Ker Ã ≠ φ.

1.3.2 Convex Fuzzy Sets

Recall first the concept of ordinary convex sets. Suppose X = Rⁿ is n-dimensional Euclidean space and A is an ordinary subset of X. If ∀x1, x2 ∈ A and ∀λ ∈ [0, 1], we have


λx1 + (1 − λ)x2 ∈ A,

then A is called a convex set. Before introducing the concept of convex fuzzy sets, we first prove the result below.

Theorem 1.3.1. Suppose Ã is a fuzzy set on X. The cuts Aα = {x | μÃ(x) ≥ α}, α ∈ [0, 1], are all convex sets if and only if ∀x1, x2 ∈ X, λ ∈ [0, 1], there is

μÃ(λx1 + (1 − λ)x2) ≥ μÃ(x1) ∧ μÃ(x2).  (1.3.1)

Proof: Suppose all Aα, α ∈ [0, 1], are convex sets. For ∀x1, x2 ∈ X, we may suppose μÃ(x2) ≥ μÃ(x1) = α0; then μÃ(x1) ∧ μÃ(x2) = α0. Because Aα0 is a convex set and x1, x2 ∈ Aα0, for ∀λ ∈ [0, 1] we have

λx1 + (1 − λ)x2 ∈ Aα0,

hence

μÃ(λx1 + (1 − λ)x2) ≥ α0.

Therefore

μÃ(λx1 + (1 − λ)x2) ≥ μÃ(x1) ∧ μÃ(x2).

Conversely, suppose ∀x1, x2 ∈ X, λ ∈ [0, 1], there exists

μÃ(λx1 + (1 − λ)x2) ≥ μÃ(x1) ∧ μÃ(x2).

If α ∈ [0, 1] and x1, x2 ∈ Aα, then μÃ(x1) ≥ α and μÃ(x2) ≥ α, such that

μÃ(x1) ∧ μÃ(x2) ≥ α,

so

μÃ(λx1 + (1 − λ)x2) ≥ μÃ(x1) ∧ μÃ(x2) ≥ α,

hence

λx1 + (1 − λ)x2 ∈ Aα.

Therefore, Aα is a convex set.

Definition 1.3.3. Suppose X = Rⁿ is n-dimensional Euclidean space and Ã is a fuzzy set on X. If ∀α ∈ [0, 1] the Aα are all convex sets, the fuzzy set Ã is called a convex fuzzy set. From Theorem 1.3.1 we know that Ã is a convex fuzzy set if and only if ∀λ ∈ [0, 1], x1, x2 ∈ X, there is

μÃ(λx1 + (1 − λ)x2) ≥ μÃ(x1) ∧ μÃ(x2).

Theorem 1.3.2. If Ã and B̃ are convex fuzzy sets, so is Ã ∩ B̃.

Proof: ∀x1, x2 ∈ X, ∀λ ∈ [0, 1],

μ_{Ã∩B̃}(λx1 + (1 − λ)x2) = μÃ(λx1 + (1 − λ)x2) ∧ μB̃(λx1 + (1 − λ)x2)
≥ (μÃ(x1) ∧ μÃ(x2)) ∧ (μB̃(x1) ∧ μB̃(x2))
= (μÃ(x1) ∧ μB̃(x1)) ∧ (μÃ(x2) ∧ μB̃(x2))
= μ_{Ã∩B̃}(x1) ∧ μ_{Ã∩B̃}(x2).


Therefore, Ã ∩ B̃ is a convex fuzzy set.

Definition 1.3.4. Let Ã, B̃, Λ̃ ∈ F(X) (Λ̃ reconstructed here as the weighting set). Then a convex combination of Ã and B̃ with respect to Λ̃ is a fuzzy set, denoted by (Ã, B̃; Λ̃), with its membership function being

μ_{(Ã,B̃;Λ̃)}(x) = μ_{Λ̃}(x) μÃ(x) + (1 − μ_{Λ̃}(x)) μB̃(x), ∀x ∈ X.

Generally, if Ãi, Λ̃i ∈ F(X) (1 ≤ i ≤ m) and Σ_{i=1}^{m} μ_{Λ̃i}(x) = 1 (∀x ∈ X), then a convex combination of the Ãi with respect to the Λ̃i is written as

μ_{(Ã1,Ã2,··· ,Ãm; Λ̃1,Λ̃2,··· ,Λ̃m)}(x) = Σ_{i=1}^{m} μ_{Λ̃i}(x) μ_{Ãi}(x), ∀x ∈ X.

Definition 1.3.5. Suppose Ã ∈ F(X). If ∀α ∈ [0, 1] the Aα are all bounded sets in X, then Ã is called a bounded fuzzy set in X.

Theorem 1.3.3. Both the union and the intersection of two bounded fuzzy sets are bounded fuzzy sets. This is easy to prove from the properties of α-cut sets and Definition 1.3.5.
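Criterion (1.3.1) can be spot-checked numerically. Below, a triangular membership function on the real line (my own illustrative shape, which is a convex fuzzy set) is tested on a grid; this is a sketch of a sanity check, not a proof:

```python
# Numerical spot-check of the convexity criterion (1.3.1):
# mu(lam*x1 + (1-lam)*x2) >= min(mu(x1), mu(x2)).

def mu_tri(x, left=0.0, peak=1.0, right=3.0):
    """Triangular membership function (illustrative convex fuzzy set)."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

pts = [i * 0.25 for i in range(-4, 17)]  # grid covering [-1, 4]
ok = all(
    mu_tri(lam * x1 + (1 - lam) * x2)
    >= min(mu_tri(x1), mu_tri(x2)) - 1e-12
    for x1 in pts
    for x2 in pts
    for lam in (0.0, 0.25, 0.5, 0.75, 1.0)
)

print(ok)  # True: every grid point satisfies (1.3.1)
```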

1.4 Fuzzy Relativity and Operator

1.4.1 Fuzzy Relations

Definition 1.4.1. Let X × Y be the Cartesian product of X and Y. A fuzzy set R̃ of X × Y, with membership function μ_R̃(x, y) (x ∈ X, y ∈ Y), determines a fuzzy relation R̃ between X and Y (we use the same symbol for both).

Example 1.4.1: Suppose X = {x₁, x₂, x₃, x₄} denotes a set of four factories and Y = {electricity, coal, petroleum} a set of three kinds of energy resources. Table 1.4.1 gives a fuzzy relation R̃ between factories and energy resources, where R̃ᵢⱼ denotes the degree of dependence of factory i on energy resource j.

Table 1.4.1. Fuzzy Relations between Factories and Energy Resources

            Electricity   Coal    Petroleum
Factory 1      R̃₁₁        R̃₁₂      R̃₁₃
Factory 2      R̃₂₁        R̃₂₂      R̃₂₃
Factory 3      R̃₃₁        R̃₃₂      R̃₃₃
Factory 4      R̃₄₁        R̃₄₂      R̃₄₃

Example 1.4.2: Suppose X = Y is the set of real numbers; the Cartesian product X × Y is then the whole plane. R: "x > y" is an ordinary relation, that is, an ordinary set R in the plane. But we consider instead the following relation:


"x ≫ y", that is, "x is much greater than y", is a fuzzy relation; write it R̃ and define its membership function as

μ_R̃(x, y) = 0 for x ≤ y, and μ_R̃(x, y) = [1 + 100/(x − y)²]⁻¹ for x > y.

From this we note the following:

1° A fuzzy relation R̃ from X to Y is a fuzzy set in the Cartesian product X × Y. Because the Cartesian product is order-relevant, i.e., X × Y ≠ Y × X, R̃ is order-relevant as well.

2° If the membership function R̃(x, y) takes only the two values {0, 1}, then R̃ determines an ordinary set in X × Y; the fuzzy relation thus extends the notion of an ordinary relation.

In Example 1.4.2, R̃ is a fuzzy relation on a single universe. Under the condition X = Y, we call R̃ a fuzzy relation in X.

Example 1.4.3: Suppose X = {x₁, x₂, x₃} denotes a set of three persons and R̃ the fuzzy relation of mutual trust among them, i.e.,

R̃ = 1/(x₁,x₁) + 0.6/(x₁,x₂) + 0.9/(x₁,x₃) + 0.1/(x₂,x₁) + 1/(x₂,x₂) + 0.7/(x₂,x₃) + 0.5/(x₃,x₁) + 0.8/(x₃,x₂) + 1/(x₃,x₃).

μ_R̃(xᵢ, xᵢ) = 1 expresses that everybody trusts himself most. μ_R̃(x₂, x₁) = 0.1 indicates that x₂ basically distrusts x₁. Definition 1.4.1 can be extended to fuzzy relations between finitely many, or even infinitely many, universes.

Since a fuzzy relation R̃ is given through a fuzzy set R̃ in the Cartesian product X × X, the operations and properties of fuzzy sets all carry over to fuzzy relations. In addition, fuzzy relations have the following special operations.

Definition 1.4.2. Suppose R̃₁ is a fuzzy relation from X to Y and R̃₂ a fuzzy relation from Y to Z. The composition (synthesis) R̃₁ ∘ R̃₂ of R̃₁ and R̃₂ is the fuzzy relation from X to Z whose membership function is given by: ∀(x, z) ∈ X × Z,

μ_R̃₁∘R̃₂(x, z) = ∨_{y∈Y} [μ_R̃₁(x, y) ∧ μ_R̃₂(y, z)],   (1.4.1)

where x ∈ X, z ∈ Z. If R₁, R₂ are two ordinary relations, their composition is, by the ordinary set method,

R₁ ∘ R₂ = {(x, z) | (x, z) ∈ X × Z, ∃y ∈ Y s.t. (x, y) ∈ R₁, (y, z) ∈ R₂}.   (1.4.2)
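For finite universes, where relations are membership matrices, the max–min composition (1.4.1) is easy to compute. A minimal sketch, using the trust relation of Example 1.4.3 and an illustrative crisp pair:

```python
def compose(R1, R2):
    """Max-min composition (1.4.1): R1 is an |X| x |Y| membership matrix,
    R2 is |Y| x |Z|; the result entry [i][k] is
    max over j of min(R1[i][j], R2[j][k])."""
    return [[max(min(r1j, R2[j][k]) for j, r1j in enumerate(row))
             for k in range(len(R2[0]))]
            for row in R1]

# The trust relation of Example 1.4.3 as a matrix over {x1, x2, x3}.
R = [[1.0, 0.6, 0.9],
     [0.1, 1.0, 0.7],
     [0.5, 0.8, 1.0]]
print(compose(R, R))   # [[1.0, 0.8, 0.9], [0.5, 1.0, 0.7], [0.5, 0.8, 1.0]]

# For {0,1}-valued relations, (1.4.1) reduces to the ordinary composition (1.4.2).
P = [[1, 0], [0, 1]]
Q = [[0, 1], [1, 0]]
print(compose(P, Q))   # [[0, 1], [1, 0]]
```

Since max and min only ever select existing entries, the result contains no new membership grades, and the associativity of Proposition 1.4.1 can be confirmed directly on such matrices.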


From this, for ordinary relations R₁ and R₂, compositions (1.4.1) and (1.4.2) should agree. Indeed, in this case the composition (1.4.1) of R₁ and R₂ also takes only the two values {0, 1}, and it is easy to prove that (1.4.1) is equivalent to (1.4.2).

Example 1.4.4: Suppose R̃₁ is a fuzzy relation between X and Y with membership function μ_R̃₁(x, y) = e^(−k(x−y)²), and R̃₂ a fuzzy relation between Y and Z with membership function μ_R̃₂(y, z) = e^(−k(y−z)²) (k ≥ 1, constant). Then their composition R̃₁ ∘ R̃₂ is a fuzzy relation between X and Z with membership function

μ_R̃₁∘R̃₂(x, z) = ∨_{y∈Y} [e^(−k(x−y)²) ∧ e^(−k(y−z)²)] = e^(−k(x − (x+z)/2)²) = e^(−k((x−z)/2)²),

the supremum being attained at y = (x + z)/2.
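The closed form in Example 1.4.4 can be checked numerically by replacing the supremum over y with a dense grid; k, x and z below are arbitrary test values:

```python
import math

k, x, z = 1.0, 0.3, 2.1
# A dense grid standing in for the supremum over all real y.
ys = [-5 + 12 * i / 100000 for i in range(100001)]
numeric = max(min(math.exp(-k * (x - y) ** 2),
                  math.exp(-k * (y - z) ** 2)) for y in ys)
closed = math.exp(-k * ((x - z) / 2) ** 2)   # attained at y = (x + z) / 2
print(abs(numeric - closed) < 1e-4)          # True
```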

Next we consider a few special fuzzy relations; suppose R̃ is a fuzzy relation in X.

(1) Inverse fuzzy relation. The inverse of a fuzzy relation R̃ is denoted R̃⁻¹; its membership function is

μ_R̃⁻¹(x, y) = μ_R̃(y, x), ∀x, y ∈ X.

Example 1.4.5: In Example 1.4.3, the inverse relation of R̃ is

R̃⁻¹ = 1/(x₁,x₁) + 0.1/(x₁,x₂) + 0.5/(x₁,x₃) + 0.6/(x₂,x₁) + 1/(x₂,x₂) + 0.8/(x₂,x₃) + 0.9/(x₃,x₁) + 0.7/(x₃,x₂) + 1/(x₃,x₃).

(2) Symmetric relation. If a fuzzy relation R̃ satisfies

μ_R̃⁻¹(x, y) = μ_R̃(x, y), ∀x, y ∈ X,

then R̃ is called symmetric.

Example 1.4.6: The "friend" relation is symmetric, while the "paternity" and "consequence" relations are not symmetric.

(3) Identical relation. The fuzzy relation Ĩ on X called the identical relation represents the ordinary relation with membership function

μ_Ĩ(x, y) = 1 if x = y, and 0 if x ≠ y, ∀x, y ∈ X.

(4) The zero relation Õ and the whole relation X̃ are given by

μ_Õ(x, y) = 0, μ_X̃(x, y) = 1, ∀x, y ∈ X.


1.4.2 The Operation Properties of the Fuzzy Relation

Proposition 1.4.1. Composition of fuzzy relations satisfies the associative law

(R̃₁ ∘ R̃₂) ∘ R̃₃ = R̃₁ ∘ (R̃₂ ∘ R̃₃).   (1.4.3)

Proof: Because

μ_(R̃₁∘R̃₂)∘R̃₃(x, w) = ∨_{z∈X} [μ_R̃₁∘R̃₂(x, z) ∧ μ_R̃₃(z, w)]
 = ∨_{z∈X} {∨_{y∈X} [μ_R̃₁(x, y) ∧ μ_R̃₂(y, z)] ∧ μ_R̃₃(z, w)}
 = ∨_{y∈X} ∨_{z∈X} [μ_R̃₁(x, y) ∧ μ_R̃₂(y, z) ∧ μ_R̃₃(z, w)]
 = ∨_{y∈X} {μ_R̃₁(x, y) ∧ ∨_{z∈X} [μ_R̃₂(y, z) ∧ μ_R̃₃(z, w)]}
 = ∨_{y∈X} [μ_R̃₁(x, y) ∧ μ_R̃₂∘R̃₃(y, w)]
 = μ_R̃₁∘(R̃₂∘R̃₃)(x, w),

consequently (1.4.3) holds. If R̃ is a fuzzy relation in X, we stipulate

R̃ ∘ R̃ ∘ ⋯ ∘ R̃ (k factors) = R̃ᵏ.

Proposition 1.4.2. For an arbitrary fuzzy relation R̃ we have

Ĩ ∘ R̃ = R̃ ∘ Ĩ = R̃,  Õ ∘ R̃ = R̃ ∘ Õ = Õ.

Proposition 1.4.3. If S̃ ⊆ T̃, then R̃ ∘ S̃ ⊆ R̃ ∘ T̃ and S̃ ∘ R̃ ⊆ T̃ ∘ R̃.

Proposition 1.4.4. For an arbitrary family of fuzzy relations {R̃ᵢ}_{i∈I} and a fuzzy relation R̃, we have

(1) R̃ ∘ (∪_{i∈I} R̃ᵢ) = ∪_{i∈I} (R̃ ∘ R̃ᵢ);  (2) (∪_{i∈I} R̃ᵢ) ∘ R̃ = ∪_{i∈I} (R̃ᵢ ∘ R̃).

Proof: We prove only (1). ∀(x, z) ∈ X × X,

μ_R̃∘(∪ᵢR̃ᵢ)(x, z) = ∨_{y∈X} {μ_R̃(x, y) ∧ [∨_{i∈I} μ_R̃ᵢ(y, z)]}
 = ∨_{y∈X} ∨_{i∈I} [μ_R̃(x, y) ∧ μ_R̃ᵢ(y, z)]
 = ∨_{i∈I} ∨_{y∈X} [μ_R̃(x, y) ∧ μ_R̃ᵢ(y, z)]
 = ∨_{i∈I} μ_R̃∘R̃ᵢ(x, z) = μ_∪ᵢ(R̃∘R̃ᵢ)(x, z).

Therefore (1) holds.

Proposition 1.4.5. (1) R̃ ∘ (∩_{i∈I} R̃ᵢ) ⊆ ∩_{i∈I} (R̃ ∘ R̃ᵢ);  (2) (∩_{i∈I} R̃ᵢ) ∘ R̃ ⊆ ∩_{i∈I} (R̃ᵢ ∘ R̃).

Proof: We prove only (1). Since ∩_{i∈I} R̃ᵢ ⊆ R̃ᵢ for every i ∈ I, Proposition 1.4.3 gives, ∀(x, z) ∈ X × X and ∀i ∈ I,

μ_[R̃∘(∩ᵢR̃ᵢ)](x, z) ≤ μ_(R̃∘R̃ᵢ)(x, z),


hence

μ_[R̃∘(∩ᵢR̃ᵢ)](x, z) ≤ ∧_{i∈I} μ_(R̃∘R̃ᵢ)(x, z).

Therefore (1) holds.

Proposition 1.4.6. (R̃₁ ∘ R̃₂)⁻¹ = R̃₂⁻¹ ∘ R̃₁⁻¹.

Proof: ∀(x, z) ∈ X × X, we have

μ_(R̃₁∘R̃₂)⁻¹(x, z) = μ_(R̃₁∘R̃₂)(z, x) = ∨_{y∈X} [μ_R̃₁(z, y) ∧ μ_R̃₂(y, x)]
 = ∨_{y∈X} [μ_R̃₁⁻¹(y, z) ∧ μ_R̃₂⁻¹(x, y)]
 = ∨_{y∈X} [μ_R̃₂⁻¹(x, y) ∧ μ_R̃₁⁻¹(y, z)]
 = μ_(R̃₂⁻¹∘R̃₁⁻¹)(x, z).

Hence (R̃₁ ∘ R̃₂)⁻¹ = R̃₂⁻¹ ∘ R̃₁⁻¹.

Proposition 1.4.7. (1) (∪_{i∈I} R̃ᵢ)⁻¹ = ∪_{i∈I} R̃ᵢ⁻¹;  (2) (∩_{i∈I} R̃ᵢ)⁻¹ = ∩_{i∈I} R̃ᵢ⁻¹.

Proof: We prove only (1). ∀(x, y) ∈ X × X,

μ_(∪ᵢR̃ᵢ)⁻¹(x, y) = μ_(∪ᵢR̃ᵢ)(y, x) = ∨_{i∈I} μ_R̃ᵢ(y, x) = ∨_{i∈I} μ_R̃ᵢ⁻¹(x, y) = μ_(∪ᵢR̃ᵢ⁻¹)(x, y).

Therefore (1) holds.

Proposition 1.4.8. (R̃⁻¹)⁻¹ = R̃.

Definition 1.4.3. Suppose R̃ is a fuzzy relation in X. If R̃ satisfies R̃ ∘ R̃ ⊆ R̃, then R̃ is called a transitive fuzzy relation. Notice that if R is an ordinary relation on X, R is transitive if and only if (x, y) ∈ R and (y, z) ∈ R imply (x, z) ∈ R. It is easy to see that when R̃ in Definition 1.4.3 degenerates into an ordinary relation, this transitivity coincides with ordinary transitivity.

Proposition 1.4.9. The union and intersection of symmetric fuzzy relations are still symmetric.

Proposition 1.4.10. The intersection of transitive fuzzy relations is transitive.

Proposition 1.4.11. For an arbitrary fuzzy relation R̃ we have the following:

(1) There exists a least symmetric fuzzy relation containing R̃, called the symmetric closure of R̃ and recorded as S(R̃).


(2) There exists a least transitive fuzzy relation containing R̃, called the transitive closure of R̃ and recorded as T(R̃).

Proof: We prove only (1). Let Q̃ denote the set of all symmetric fuzzy relations containing R̃; it is not empty, because the whole relation X̃ is symmetric on X, i.e., X̃ ∈ Q̃. Let S̃₀ = ∩{S̃ | S̃ ∈ Q̃}. Then, by Proposition 1.4.9, S̃₀ is the least symmetric relation containing R̃.

Proposition 1.4.12. Suppose R̃₁ and R̃₂ are symmetric fuzzy relations. Then R̃₁ ∘ R̃₂ is symmetric ⟺ R̃₁ ∘ R̃₂ = R̃₂ ∘ R̃₁.

Proof: "⟹" Because R̃₁ ∘ R̃₂ is symmetric,

R̃₁ ∘ R̃₂ = (R̃₁ ∘ R̃₂)⁻¹ = R̃₂⁻¹ ∘ R̃₁⁻¹ = R̃₂ ∘ R̃₁.

"⟸" If R̃₁ ∘ R̃₂ = R̃₂ ∘ R̃₁, then

(R̃₁ ∘ R̃₂)⁻¹ = R̃₂⁻¹ ∘ R̃₁⁻¹ = R̃₂ ∘ R̃₁ = R̃₁ ∘ R̃₂.

Therefore R̃₁ ∘ R̃₂ is symmetric.

Proposition 1.4.13. If R̃ is transitive, then R̃⁻¹ is transitive.

Proof: Because R̃ ∘ R̃ ⊆ R̃, from Proposition 1.4.6, ∀(x, y) ∈ X × X,

μ_(R̃⁻¹∘R̃⁻¹)(x, y) = μ_(R̃∘R̃)⁻¹(x, y) = μ_(R̃∘R̃)(y, x) ≤ μ_R̃(y, x) = μ_R̃⁻¹(x, y),

that is, R̃⁻¹ is transitive.

The propositions above all concern fuzzy relations on a single universe X. This restriction can actually be dropped: each proposition holds as long as the compositions involved exist.

1.4.3 Special Fuzzy Operators

Definition 1.4.4. For Ã, B̃ ∈ F(X), the general form of the union and intersection operations is defined as

μ_(Ã ∪* B̃)(x) = μ_Ã(x) ∪* μ_B̃(x),  μ_(Ã ∩* B̃)(x) = μ_Ã(x) ∩* μ_B̃(x).

Here ∪* and ∩* are binary operations on [0, 1], briefly called fuzzy operators. We take them as follows.

I. Max-product operator (∨, ·):
μ_Ã(x) ∨ μ_B̃(x) = max{μ_Ã(x), μ_B̃(x)}; μ_Ã(x) · μ_B̃(x) denotes the ordinary real product.

II. Bounded sum and product operator (⊕, ⊙):
μ_Ã(x) ⊕ μ_B̃(x) = min{μ_Ã(x) + μ_B̃(x), 1},  μ_Ã(x) ⊙ μ_B̃(x) = max{0, μ_Ã(x) + μ_B̃(x) − 1}.


III. Probabilistic sum and product operator (+̂, ·):
μ_Ã(x) +̂ μ_B̃(x) = μ_Ã(x) + μ_B̃(x) − μ_Ã(x)μ_B̃(x).

An elementary calculation verifies that Operator I satisfies the same calculation laws as the operator pair (∨, ∧), while Operators II and III fail the idempotent, absorption and distributive laws.
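The three operator pairs of Definition 1.4.4 act pointwise on membership grades. The sketch below contrasts their behaviour on a single grade; the value 0.6 is an arbitrary example:

```python
def max_prod(a, b):
    # Operator I: max union, ordinary product intersection.
    return max(a, b), a * b

def bounded(a, b):
    # Operator II: bounded sum and bounded product.
    return min(a + b, 1.0), max(0.0, a + b - 1.0)

def prob(a, b):
    # Operator III: probabilistic sum and ordinary product.
    return a + b - a * b, a * b

a = 0.6
print(max_prod(a, a)[0])   # 0.6 -- the max union is idempotent
print(bounded(a, a))       # the bounded sum saturates at 1; neither part is idempotent
print(prob(a, a))          # the probabilistic sum of 0.6 with itself exceeds 0.6
```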

1.5 Fuzzy Functions

A fuzzy function is one of the most important concepts in fuzzy optimization problems. Its discussion divides into two parts [DPr80]. In addition, the kinds of constraint functions used repeatedly in this book are introduced.

1.5.1 Fuzzy Function from Universe X to Another One Y

Definition 1.5.1. Let F(X) and F(Y) represent all fuzzy sets on universes X and Y, respectively. If there exists an ordinary mapping f : F(X) → F(Y), then we call f a fuzzy-valued function from X to Y, writing f̃ : X ∼→ Y.

Definition 1.5.2. Let f̃ : X ∼→ Y and g̃ : Y ∼→ Z be two fuzzy-valued functions. Then g̃ ∘ f̃ : F(X) → F(Z), i.e.,

∀Ã ∈ F(X), (g̃ ∘ f̃)(Ã) ∈ F(Z),

is called the compound fuzzy function of f̃ and g̃.

Proposition 1.5.1. If f : X → Y and g : Y → Z denote two ordinary mappings, two fuzzy functions f̃ : X ∼→ Y and g̃ : Y ∼→ Z can be obtained by means of the extension principle. Their compound under Definition 1.5.2 coincides with the fuzzy function obtained from the compound g ∘ f : X → Z of the ordinary mappings f : X → Y and g : Y → Z by means of the extension principle.

Proof: For ∀Ã ∈ F(X), the image

μ_f̃(Ã)(y) = sup_{x∈f⁻¹(y)} μ_Ã(x) if f⁻¹(y) ≠ φ, and 0 if f⁻¹(y) = φ,

obtained with the fuzzy function f̃ : X ∼→ Y extended from f : X → Y, holds for arbitrary y ∈ Y. For ∀B̃ ∈ F(Y), the image

μ_g̃(B̃)(z) = sup_{y∈g⁻¹(z)} μ_B̃(y) if g⁻¹(z) ≠ φ, and 0 if g⁻¹(z) = φ,


is achieved by the fuzzy function g̃ : Y ∼→ Z for every z ∈ Z, so that their compound is g̃ ∘ f̃ : X ∼→ Z. From Definition 1.5.2, ∀Ã ∈ F(X), ∀z ∈ Z, there exists

μ_g̃∘f̃(Ã)(z) = μ_g̃(f̃(Ã))(z)
 = sup_{y∈g⁻¹(z)} μ_f̃(Ã)(y) if g⁻¹(z) ≠ φ, and 0 if g⁻¹(z) = φ
 = sup_{y∈g⁻¹(z)} sup_{x∈f⁻¹(y)} μ_Ã(x) if f⁻¹(y) ≠ φ and g⁻¹(z) ≠ φ, and 0 if f⁻¹(y) = φ or g⁻¹(z) = φ;

therefore, ∀z ∈ Z,

μ_g̃∘f̃(Ã)(z) = sup_{x∈f⁻¹(g⁻¹(z))} μ_Ã(x) if f⁻¹(g⁻¹(z)) ≠ φ, and 0 if f⁻¹(g⁻¹(z)) = φ
 = sup_{x∈(g∘f)⁻¹(z)} μ_Ã(x) if (g ∘ f)⁻¹(z) ≠ φ, and 0 if (g ∘ f)⁻¹(z) = φ.   (1.5.1)

The right side of Formula (1.5.1) is exactly the fuzzy function gained from the ordinary compound mapping g ∘ f by means of the extension principle.

1.5.2 Fuzzy Functions from Fuzzy Set Ã to Another One B̃

Definition 1.5.3. Let f : X → Y be an ordinary mapping. If fuzzy sets Ã and B̃ are defined on X and Y, respectively, and we have B̃(f(x)) = μ_Ã(x) for ∀x ∈ X, then we call f̃ a fuzzy-valued function from fuzzy set Ã to B̃, writing f̃ : Ã ∼→ B̃.

Let Ã ∈ F(X), B̃ ∈ F(Y), C̃ ∈ F(Z), and let f̃ : Ã ∼→ B̃ and g̃ : B̃ ∼→ C̃. Then the composite mapping g ∘ f : X → Z is a fuzzy function from Ã to C̃, i.e., g̃ ∘ f̃ : Ã ∼→ C̃. In fact, ∀x ∈ X, (g̃ ∘ f̃)(x) = g̃(f̃(x)); hence

μ_C̃∘(g̃∘f̃)(x) = C̃((g̃ ∘ f̃)(x)) = C̃(g̃(f̃(x))) = (C̃ ∘ g̃)(f̃(x)) = B̃(f̃(x)) = μ_Ã(x),

i.e., C̃ ∘ (g̃ ∘ f̃) = (C̃ ∘ g̃) ∘ f̃ = B̃ ∘ f̃ = Ã.

1.5.3 Fuzzy Constrained Function

For the sake of the later discussion we introduce some constantly used fuzzy constraint functions [Cao93a][Cao94b][Cao07][DPr80].

Definition 1.5.4. ∀x ∈ X, let g(x) be a real bounded function defined on X, with its infimum and supremum written as inf(g) and sup(g), respectively. We define

μ_M̃(x) = [(g(x) − inf(g)) / (sup(g) − inf(g))]ⁿ,   (1.5.2)

calling M̃ : X → [0, 1] a maximal set of g, where μ_M̃(x) ≥ 0 and n is a natural number.

Definition 1.5.5. If c¹ᵢ, c²ᵢ are the left and right endpoints of an interval, then, for c̃ᵢ freely fixed in the closed value interval [c¹ᵢ, c²ᵢ], its degree of accomplishment is determined by

μ_φ̃ᵢ(c̃ᵢ) = 0 if cᵢ ≤ c¹ᵢ; [(cᵢ − c¹ᵢ)/(c²ᵢ − c¹ᵢ)]ⁿ if c¹ᵢ < cᵢ ≤ c²ᵢ; 1 if cᵢ > c²ᵢ,   (1.5.3)

where n denotes a natural number.

For fuzzy constraint sets and fuzzy objective sets, we have the following.

Definition 1.5.6. If Ãᵢ = {x ∈ Rᵐ | gᵢ(x) ≲ 1} (1 ≤ i ≤ p) is the fuzzy constraint set corresponding to the fuzzy constraint inequality gᵢ(x) ≲ 1, then the membership functions of Ãᵢ are

μ_Ãᵢ(x) = 0 if gᵢ(x) ≥ 1 + dᵢ; (1 − tᵢ/dᵢ)ⁿ if gᵢ(x) = 1 + tᵢ, 0 ≤ tᵢ ≤ dᵢ; 1 if gᵢ(x) ≤ 1,   (1.5.4)

where dᵢ ∈ R (a real number set) denotes a maximum flexible index of gᵢ(x).

Definition 1.5.7. Regard Ã₀ = {x ∈ Rᵐ | g₀(x) ≳ z₀} as a fuzzy objective set and assume the membership function of Ã₀ as follows:

μ_Ã₀(x) = 0 if g₀(x) ≤ z₀ − d₀; (1 − t₀/d₀)ⁿ if g₀(x) = z₀ − t₀, 0 ≤ t₀ ≤ d₀; 1 if g₀(x) ≥ z₀,   (1.5.5)

where d₀ ≥ 0 is a maximum flexible index of g₀(x) and z₀ an objective value. We define the symbol "≲" as a flexible version of ≤ at a 'certain degree' [Ver84][LL01], or approximately less than or equal to.
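The flexible-constraint membership (1.5.4) of Definition 1.5.6 can be sketched directly; the tolerance d, the exponent n and the test values below are illustrative assumptions:

```python
def mu_constraint(g_x, d, n=1):
    """Membership (1.5.4) of the fuzzy constraint set for g(x) <~ 1:
    full membership while g(x) <= 1, decaying to 0 over the tolerance d."""
    if g_x <= 1:
        return 1.0
    if g_x >= 1 + d:
        return 0.0
    t = g_x - 1            # so g(x) = 1 + t with 0 <= t <= d
    return (1 - t / d) ** n

print(mu_constraint(0.9, d=0.5))    # 1.0 -- the constraint is satisfied
print(mu_constraint(1.25, d=0.5))   # 0.5 -- halfway into the tolerance band
print(mu_constraint(1.6, d=0.5))    # 0.0 -- violated beyond the tolerance
```

The objective membership (1.5.5) is the mirror image, rising from 0 to 1 as g₀(x) approaches z₀ from below.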


Definition 1.5.8. Let fuzzy sets Ãᵢ (0 ≤ i ≤ p) be

Ãᵢ = {x ∈ Rᵐ | gᵢ(x) ≲ 1} (0 ≤ i ≤ p′) and Ãᵢ = {x ∈ Rᵐ | gᵢ(x) ≳ 1} (p′ + 1 ≤ i ≤ p).

Then their membership functions are defined as

μ_Ãᵢ(x) = 1 if gᵢ(x) ≤ 1; e^(−(1/dᵢ)(gᵢ(x)−1)) if 1 < gᵢ(x) ≤ 1 + dᵢ   (1.5.6)

for 0 ≤ i ≤ p′, and

μ_Ãᵢ(x) = 0 if gᵢ(x) ≤ 1; 1 − e^(−(1/dᵢ)(gᵢ(x)−1)) if 1 < gᵢ(x) ≤ 1 + dᵢ   (1.5.7)

for p′ + 1 ≤ i ≤ p, where dᵢ ≥ 0 is a maximum flexible index of the i-th function gᵢ(x).

We introduce the possibility grade of dominance of 1̃ over g̃ᵢ(x), a concept introduced by Dubois and Prade in 1980 which represents the fuzzy extension of gᵢ(x) ≤ 1 [DPr80].

Definition 1.5.9. The degree of possibility of g̃ᵢ(x) ≤ 1̃ is defined as

v(g̃ᵢ(x) ≤ 1̃) = sup_{x,y: x≥y} min(μ_1̃(x), μ_g̃ᵢ(x)(y)).

This formula is an extension of the inequality x ≥ y according to the extension principle. When a pair (x, y) exists such that x ≥ y and μ_1̃(x) = μ_g̃ᵢ(x)(y) = 1, then v(g̃ᵢ(x) ≤ 1̃) = 1. When g̃ᵢ(x) and 1̃ are convex fuzzy numbers, we have v(g̃ᵢ(x) ≤ 1̃) = 1 if and only if gᵢ(x) ≤ 1, and otherwise

v(g̃ᵢ(x) ≤ 1̃) = hgt(g̃ᵢ(x) ∩ 1̃) = μ_1̃(d),

where d is the abscissa of the highest intersection point between μ_1̃(x) and μ_g̃ᵢ(x)(y).
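Definition 1.5.9 can be evaluated on a grid for concrete convex fuzzy numbers. In the sketch below the two triangular numbers are illustrative, not from the text; the computed degree matches the height of the highest intersection point:

```python
def tri(center, spread):
    # Triangular membership function with the given center and spread.
    return lambda t: max(0.0, 1.0 - abs(t - center) / spread)

mu_one = tri(1.0, 0.5)    # the fuzzy number "about 1"
mu_g   = tri(1.4, 0.5)    # a fuzzy value g~(x) lying mostly above 1

grid = [i / 100 for i in range(-100, 401)]
v = max(min(mu_one(x), mu_g(y)) for x in grid for y in grid if x >= y)
print(round(v, 3))        # 0.6 -- the membership value where the two curves cross
```

Here the right branch of μ_1̃ and the left branch of μ_g̃ cross at t = 1.2 with height 0.6, which is exactly the supremum, attained at x = y = 1.2.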

1.6 Three Mainstream Theorems in Fuzzy Mathematics

1.6.1 Decomposition Theorem

Definition 1.6.1. If α ∈ [0, 1] and Ã ∈ F(X), then the product of the number α with the fuzzy set Ã is defined as

μ_(αÃ)(x) = α ∧ μ_Ã(x).


Theorem 1.6.1. (Decomposition Theorem I) For an arbitrary Ã ∈ F(X), we have

Ã = ∪_{α∈[0,1]} αAα,   (1.6.1)

Ã = ∪_{α∈[0,1]} αA_α̣,   (1.6.2)

where A_α̣ denotes the strong α-cut {x | μ_Ã(x) > α}.

Proof: Because

μ_Aα(x) = 1 if x ∈ Aα, and 0 if x ∉ Aα,

then

μ_(∪_{α∈[0,1]} αAα)(x) = sup_{0≤α≤1} [α ∧ μ_Aα(x)] = sup_{α: x∈Aα} α = sup_{α≤μ_Ã(x)} α = μ_Ã(x).

Therefore (1.6.1) is proved. Similarly, we can prove Formula (1.6.2).

Example 1.6.1: Suppose the universe is X = {2, 1, 7, 6, 9}; decompose the fuzzy set

Ã = 0.1/2 + 0.3/1 + 0.5/7 + 0.9/6 + 1/9

by applying the Decomposition Theorem.

Solution: The relevant cut sets of the fuzzy set are

A_0.1 = X, A_0.3 = {1, 7, 6, 9}, A_0.5 = {7, 6, 9}, A_0.9 = {6, 9}, A_1 = {9},

so that

αAα = α(1/2 + 1/1 + 1/7 + 1/6 + 1/9) for 0 < α ≤ 0.1,
 α(1/1 + 1/7 + 1/6 + 1/9) for 0.1 < α ≤ 0.3,
 α(1/7 + 1/6 + 1/9) for 0.3 < α ≤ 0.5,
 α(1/6 + 1/9) for 0.5 < α ≤ 0.9,
 α(1/9) for 0.9 < α ≤ 1,

and taking the union over all α ∈ [0, 1] recovers Ã = 0.1/2 + 0.3/1 + 0.5/7 + 0.9/6 + 1/9.

… > μ_c̃(x₂), which is a contradiction. Therefore L(x) is an increasing function. (3) L(x) is continuous on the right; otherwise there exists x < c₁ and xₙ → x with lim_{xₙ→x} L(xₙ) = α > L(x).
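The Decomposition Theorem is easy to verify on the finite fuzzy set of Example 1.6.1: rebuild each membership grade as the largest α whose cut set still contains the point.

```python
# The fuzzy set of Example 1.6.1 on X = {2, 1, 7, 6, 9}.
A = {2: 0.1, 1: 0.3, 7: 0.5, 6: 0.9, 9: 1.0}

def cut(A, alpha):
    # alpha-cut: the ordinary set {x : mu(x) >= alpha}.
    return {x for x, m in A.items() if m >= alpha}

levels = sorted(set(A.values()))    # scanning the attained levels suffices
rebuilt = {x: max(a for a in levels if x in cut(A, a)) for x in A}

print(sorted(cut(A, 0.5)))   # [6, 7, 9]
print(rebuilt == A)          # True: the union of alpha * A_alpha recovers A~
```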

Since xₙ ∈ c̄_α and c̄_α is closed, x ∈ c̄_α, so that μ_c̃(x) = L(x) ≥ α, a contradiction. For the same reason, μ_c̃(x) = R(x) is a continuous, decreasing function on the left for x > c₂, with 0 ≤ R(x) < 1.

Sufficiency. Let c̃ satisfy the condition in the theorem. Then (1) c̃ is obviously normal. (2) We prove c̄_α = [c₁α, c₂α], ∀α ∈ (0, 1]. Since μ_c̃(x) = L(x) for x < c₁, we select c₁α = min{x | L(x) ≥ α}; since μ_c̃(x) = R(x) for x > c₂, we select c₂α = max{x | R(x) ≥ α}. Obviously c̄_α ⊂ [c₁α, c₂α]. Now we prove [c₁α, c₂α] ⊂ c̄_α; it suffices to prove [c₁α, c₁) ⊂ c̄_α (we can prove (c₂, c₂α] ⊂ c̄_α for the same reason), and by the monotonicity of L(x) it suffices to prove c₁α ∈ c̄_α. Select xₙ → c₁α; then L(c₁α) = lim_{xₙ→c₁α} L(xₙ) ≥ α, so that c₁α ∈ c̄_α.

1.7.2 Type (·, c), T, L-R and Flat Fuzzy Numbers

Definition 1.7.4. c̃ = (α, c) is defined as a (·, c) fuzzy number on a product space α₁ × α₂ × ⋯ × α_J; its membership function is

μ_c̃(a) = min_j [μ_c̃ⱼ(aⱼ)],

μ_c̃ⱼ(aⱼ) = 1 − |αⱼ − aⱼ|/cⱼ for αⱼ − cⱼ ≤ aⱼ ≤ αⱼ + cⱼ, and 0 otherwise,   (1.7.2)


where α = (a₁, a₂, ⋯, a_J)ᵀ, c = (c₁, c₂, ⋯, c_J)ᵀ; α denotes the center of c̃ and c the extension of c̃, with cⱼ > 0. Coming next are special cases.

Definition 1.7.5. L is called a reference function of fuzzy numbers if L satisfies (i) L(x) = L(−x); (ii) L(0) = 1; (iii) L(x) is nonincreasing and piecewise continuous on [0, +∞).

Definition 1.7.6. Let L, R be reference functions of a fuzzy number c̃, called an L-R fuzzy number, if

μ_c̃(x) = L((c − x)/c̲) for x ≤ c, c̲ > 0; R((x − c)/c̄) for x ≥ c, c̄ > 0.   (1.7.3)

We write c̃ = (c, c̲, c̄)_LR, where c is the mean value; c̲ and c̄ are called the left and the right spreads of c̃, respectively. L is called a left reference and R a right reference. If we take c̃ to be a variable x̃, then x̃ = (x, ξ̲, ξ̄)_LR represents an L-R fuzzy variable.

Definition 1.7.7. If L and R are the function

T(x) = 1 − |x| if −1 ≤ x ≤ 1, and 0 otherwise,   (1.7.4)

then we call c̃ = (c, c̲, c̄)_T a T-fuzzy number, with T(R) representing the set of T-fuzzy numbers. If we take c̃ to be a variable x̃, then x̃ = (x, ξ̲, ξ̄)_T represents a T-fuzzy variable.

Definition 1.7.8. Let L, R be reference functions; the quadruple c̃ = (c⁻, c⁺, σc⁻, σc⁺)_LR is called an L-R flat fuzzy number if

μ_c̃(x) = L((c⁻ − x)/σc⁻) for x ≤ c⁻, σc⁻ > 0; R((x − c⁺)/σc⁺) for x ≥ c⁺, σc⁺ > 0; 1 otherwise,   (1.7.5)

satisfying ∃(c⁻, c⁺) ∈ R, c⁻ < c⁺, with μ_c̃(x) = 1.


Especially, c̃ = (c⁻, c⁺, σc⁻, σc⁺) is said to be a flat fuzzy number, where

μ_c̃(x) = 1 − (c⁻ − x)/σc⁻ if c⁻ − σc⁻ ≤ x ≤ c⁻; 1 if c⁻ < x < c⁺; 1 − (x − c⁺)/σc⁺ if c⁺ ≤ x ≤ c⁺ + σc⁺; 0 otherwise.   (1.7.6)

If we take the interval (c⁻, c⁺) to be (x⁻, x⁺), then x̃ = (x⁻, x⁺, ξ̲, ξ̄)_LR and x̃ = (x⁻, x⁺, ξ̲, ξ̄) represent an L-R fuzzy variable and a flat fuzzy one, respectively.

Definition 1.7.9. Suppose that "∗" represents an arbitrary ordinary binary operation in R. For ∀c̃, d̃ ∈ F(R) we define

c̃ ∗ d̃ = ∪_{x,y∈R} (μ_c̃(x) ∧ μ_d̃(y))/(x ∗ y),

i.e., ∀z ∈ R,

μ_(c̃∗d̃)(z) = ∨_{x∗y=z} (μ_c̃(x) ∧ μ_d̃(y)),

where "∗" represents the arithmetic operations +, −, ·, ÷. Accordingly, we can define the operations on Type L-R, T and flat fuzzy numbers.

A. Operation properties of L-R fuzzy numbers

Let c̃ = (c, c̲, c̄)_LR, d̃ = (d, d̲, d̄)_LR and p̃ = (p, p̲, p̄)_RL be L-R fuzzy numbers. Then

1) c̃ + d̃ = (c + d, c̲ + d̲, c̄ + d̄)_LR.
2) k · c̃ = (kc, kc̲, kc̄)_LR when k ≥ 0, and (kc, −kc̄, −kc̲)_RL when k < 0 (k ∈ R). With (−1)c̃ = −c̃ for k = −1, we get −c̃ = (−c, c̄, c̲)_RL.
3) c̃ − p̃ = (c − p, c̲ + p̄, c̄ + p̲)_LR for L = R.
4) c̃ · d̃ ≈ (cd, cd̲ + dc̲, cd̄ + dc̄)_LR.
5) c̃ ÷ p̃ ≈ (c/p, (pc̲ + cp̄)/p², (pc̄ + cp̲)/p²)_LR, p ≠ 0; c̃ and p̃ cannot be divided for L ≠ R.
6) max(c̃, d̃) ≈ (c ∨ d, c̲ ∧ d̲, c̄ ∨ d̄)_LR, min(c̃, d̃) ≈ (c ∧ d, c̲ ∨ d̲, c̄ ∧ d̄)_LR.
7) c̃ ≤ d̃ ⟺ c ≤ d, c̲ ≤ d̲, c̄ ≤ d̄; c̃ ⊆ d̃ ⟺ [c − c̲, c + c̄] ⊆ [d − d̲, d + d̄], or c̃ = d̃.

B. Operation properties of T-fuzzy numbers

If c̃₁ = (c₁, c̲₁, c̄₁)_T and c̃₂ = (c₂, c̲₂, c̄₂)_T, then

(1) c̃₁ + c̃₂ = (c₁ + c₂, c̲₁ + c̲₂, c̄₁ + c̄₂)_T;
(2) c̃₁ − c̃₂ = (c₁ − c₂, c̲₁ + c̄₂, c̄₁ + c̲₂)_T;


(3) λc̃ = λ(c, c̲, c̄)_T = (λc, λc̲, λc̄)_T for ∀λ > 0, and (λc, −λc̄, −λc̲)_T for ∀λ < 0;
(4) c̃⁻¹ = (c, c̲, c̄)_T⁻¹ ≈ (1/c, c̄c⁻², c̲c⁻²)_T.

C. Operation properties of flat fuzzy numbers

Let c̃ = (c⁻, c⁺, σc⁻, σc⁺) and d̃ = (d⁻, d⁺, σd⁻, σd⁺) be flat fuzzy numbers. Then

1) c̃ + d̃ = (c⁻ + d⁻, c⁺ + d⁺, σc⁻ + σd⁻, σc⁺ + σd⁺).
2) k · c̃ = (kc⁻, kc⁺, kσc⁻, kσc⁺) for k > 0, and (kc⁺, kc⁻, −kσc⁺, −kσc⁻) for k ≤ 0.

By the definition of Type L-R, T or flat fuzzy numbers, it is easy to prove their operation properties [Dia87][DPr80]. We can deduce the operation properties of (·, c) fuzzy numbers from those of flat fuzzy ones, which extend them.
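The T-fuzzy operation properties (B) translate directly into code on triples (mean, left spread, right spread); a minimal sketch with illustrative numbers:

```python
# T-fuzzy number arithmetic on tuples (c, left_spread, right_spread).
def t_add(a, b):
    # Property (1): means and corresponding spreads add.
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def t_sub(a, b):
    # Property (2): the spreads add crosswise when subtracting.
    return (a[0] - b[0], a[1] + b[2], a[2] + b[1])

def t_scale(lam, a):
    # Property (3): negative scalars swap (and re-sign) the spreads.
    if lam >= 0:
        return (lam * a[0], lam * a[1], lam * a[2])
    return (lam * a[0], -lam * a[2], -lam * a[1])

c1 = (3.0, 1.0, 2.0)
c2 = (5.0, 0.5, 0.5)
print(t_add(c1, c2))       # (8.0, 1.5, 2.5)
print(t_sub(c1, c2))       # (-2.0, 1.5, 2.5)
print(t_scale(-2.0, c1))   # (-6.0, 4.0, 2.0)
```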

2 Regression and Self-regression Models with Fuzzy Coeﬃcients

Because phenomena in the world are complicated, in statistical forecasting we often meet fuzzy numbers whose center stays constant while the spread changes, and vice versa. For such cases, an analytical problem must be considered for regression and self-regression under a fuzzy environment. In 1980, a regression-analysis formulation was developed in this direction according to a possibilistic linear system [TUA80]. Thereafter regression analysis took various forms by means of fuzzy data analysis and found extensive application [TUA82]. In 1989, based on the theory of Zadeh's fuzzy sets [Zad65a], a self-regression forecast model with fuzzy coefficients was advanced [cao89b][cao90].

This chapter introduces regression and self-regression models containing (·, c) fuzzy coefficients, flat fuzzy coefficients as well as triangular fuzzy coefficients, and reduces the regression analysis to a linear programming problem.

2.1 Regression Model with Fuzzy Coefficients

2.1.1 Introduction

Suppose a classical linear regression model to be

Y = A₁x₁ + A₂x₂ + ⋯ + Aₙxₙ + ε,

where Y is a correlated (dependent) variable, xᵢ and Aᵢ are an independent variable and a parameter, respectively, and ε is an error. Because problems in the real world all contain a great quantity of fuzziness, this section considers the fuzzy model

Ỹ = Ã₁x₁ + Ã₂x₂ + ⋯ + Ãₙxₙ + ε,   (2.1.1)

where Ỹ and Ãⱼ (1 ≤ j ≤ n) are a (·, c) fuzzy correlated variable and fuzzy parameters; x = (x₁, x₂, ⋯, xₙ)ᵀ is an independent variable vector, whose components xⱼ (1 ≤ j ≤ n) vary with the observation period i, with ε being an error. We call (2.1.1) a regression model with fuzzy coefficients.

B.-Y. Cao: Optimal Models & Meth. with Fuzzy Quantities, STUDFUZZ 248, pp. 33–62. © Springer-Verlag Berlin Heidelberg 2010, springerlink.com


2.1.2 Definitions and Concepts of Fuzzy Parameters

Definition 2.1.1. Suppose F(R) is a fuzzy set class, and Ãⱼ ∈ F(R) (j = 1, 2) denotes fuzzy parameters with membership function (1.5.3) [TOA73][TUA82].

Definition 2.1.2. A fuzzy number Ã is a convex normalized fuzzy subset of the real axis satisfying (i) ∃x₀ ∈ R with μ_Ã(x₀) = 1; (ii) μ_Ã is a piecewise continuous function. The α-cut set of Ã is the set Ãα = {x ∈ R | μ_Ã(x) ≥ α}, where α ∈ [0, 1].

Definition 2.1.3. If ∀x, y, z ∈ R with x ≤ y ≤ z we have μ_Ã(y) ≥ μ_Ã(x) ∧ μ_Ã(z), Ã is called a normal fuzzy number. For the definitions and properties of the relevant (a, c) fuzzy numbers, refer to Refs. [TA84] and [Wat87].

Extension Principle: Suppose Ã₁, ⋯, Ãₙ are (a, c) fuzzy numbers and f : Rⁿ → R a mapping, i.e., f(x₁, x₂, ⋯, xₙ) = x₁ ∗ x₂ ∗ ⋯ ∗ xₙ. Expanding the operation '∗' to fuzzy numbers, the rule is

f(Ã₁, Ã₂, ⋯, Ãₙ) = Ã₁ ∗ Ã₂ ∗ ⋯ ∗ Ãₙ = ∪ min{Ã₁(x₁), ⋯, Ãₙ(xₙ)} / f(x₁, ⋯, xₙ),

its membership function meaning

μ_f(Ã₁,⋯,Ãₙ)(y) = sup_{(x₁,⋯,xₙ)∈f⁻¹(y)} min{μ_Ã₁(x₁), ⋯, μ_Ãₙ(xₙ)}.

By using α-cut sets, if B̃ = f(Ã₁, ⋯, Ãₙ) is the image of Ã₁, ⋯, Ãₙ, then

[f(Ã₁, ⋯, Ãₙ)]α = f(A₁α, ⋯, Aₙα) ⟺ ∀y ∈ Y, ∃x̄₁, ⋯, x̄ₙ such that μ_B̃(y) = μ_(Ã₁∗Ã₂∗⋯∗Ãₙ)(x̄₁, x̄₂, ⋯, x̄ₙ).

Definition 2.1.4. Given two sets X and Y, f : X → Y denotes a function y = f(x, a), and f̃ : X → F(Y) denotes a fuzzy function Ỹ = f(x, Ã); then the membership function of the fuzzy set Ỹ is

μ_Ỹ(y) = max_{{a | y = f(x, a)}} μ_Ã(a) when {a | y = f(x, a)} ≠ φ, and 0 otherwise,


where x ∈ R, a is a parameter on the product space a = a₁ × a₂ × ⋯ × aₙ, n being the number of independent variables; Ã is a fuzzy set, Ỹ the mapping of x under Ã, and F(Y) a fuzzy-valued set.

Definition 2.1.5. The fuzzy parameter Ã of fuzzy linear regression is defined on the Cartesian product space Rⁿ as the Cartesian product set Ã = Ã₁ × Ã₂ × ⋯ × Ãₙ, as Figure 2.1.1 shows.

[Fig. 2.1.1. Fuzzy Parameter Ã — triangular membership functions centered at αᵢ = aᵢ and αⱼ = aⱼ with widths cᵢ and cⱼ]

Its membership function is of triangular type, i.e.,

μ_Ã(a) = min_j μ_Ãⱼ(aⱼ),

μ_Ãⱼ(aⱼ) = 1 − |αⱼ − aⱼ|/cⱼ when αⱼ − cⱼ ≤ aⱼ ≤ αⱼ + cⱼ, and 0 otherwise,

where cⱼ > 0 (j = 1, 2, 3, ⋯, n).

Definition 2.1.6. The fuzzy regression parameter Ã defined on the vector space Rⁿ is written in vector form Ã = (α, c), α = (α₁, ⋯, αₙ)ᵀ, c = (c₁, ⋯, cₙ)ᵀ, where "ᵀ" is the transpose sign, α and c are the center and the shape of Ã, respectively, and Ã means "approximately A". In what follows, suppose that Ỹ and Ã are all convex, normalized fuzzy functions and fuzzy numbers.

2.1.3 Establishment of Linear Regression Model

Suppose the linear regression model to be

Ỹ = Ã₁x₁ + Ã₂x₂ + ⋯ + Ãₙxₙ = Ãx = (αᵀx, cᵀx),   (2.1.2)

where the Ãⱼ (j = 1, ⋯, n) are parameters to be estimated.


Proposition 2.1.1. The membership function of (2.1.2) is

μ_Ỹ(y) = 1 − |y − αᵀx|/(cᵀ|x|) if x ≠ 0; 1 if x = 0, y = 0; 0 if x = 0, y ≠ 0,

where |x| = (|x₁|, |x₂|, ⋯, |xₙ|)ᵀ, and μ_Ỹ(y) = 0 when cᵀ|x| ≤ |y − αᵀx|.

In fact, according to Definition 2.1.4 and the stipulation above,

μ_Ỹ(y) = ∨_{{a | aᵀx = y}} μ_Ã(a) if {a | aᵀx = y} ≠ φ; 1 if x = 0, y = 0; 0 if x = 0, y ≠ 0
 = ∨_{{a | aᵀx = y}} min_j μ_Ãⱼ(aⱼ) (in the nontrivial case)
 = ∨_{{a | aᵀx = y}} min_j (1 − |aⱼ − αⱼ|/cⱼ)
 = 1 − |y − αᵀx|/(cᵀ|x|) for x ≠ 0.

Here, when cᵀ|x| < |y − αᵀx|, the deviation between the calculated value of y and the actual value exceeds the fuzzy spread of the calculated values, so μ_Ỹ(y) = 0.

Take a sample (yᵢ; xᵢ₁, xᵢ₂, ⋯, xᵢₙ) of capacity N, where yᵢ is an observed value and ŷᵢ = αᵀxᵢ (i = 1, 2, ⋯, N) an estimated value; their deviation is εᵢ = yᵢ − ŷᵢ. Then Ỹ = (y, ε), with the correlated variable y as center and the deviation ε as shape, is a fuzzy correlated variable, and ε = 0 is the non-fuzzy situation. We aim to determine the fuzzy parameters Âⱼ (j = 1, 2, ⋯, n) from the observed values. But adopting the classical least-squares method runs into the question of whether Âⱼ (j = 1, 2, ⋯, n) is differentiable. Hence we determine Âⱼ (j = 1, 2, ⋯, n) by the method below.

In order to measure the degree of fitting h̄ between the observed data and the estimated one, a decision maker can choose a threshold value H. Here H is selected by experts' experience; the selection of H affects the width cⱼ of the fuzzy parameters.


If we compute max h̄ ≥ H, such that

ŶᵢH = {y | μ_Ŷᵢ(y) ≥ H},   (2.1.3)

then h̄ is an optimal estimation of the correlated variables in (2.1.2). The index of the approximate degree in h̄ is shown in Figure 2.1.2.

[Fig. 2.1.2. The Index of Approximate Degree in h̄ — the triangular membership of Ŷᵢ, centered at αᵀxᵢ with half-width Σⱼ₌₁ⁿ cⱼ|xᵢⱼ|, against the observed yᵢ with deviation εᵢ; h̄ᵢ is the level at which they meet]

n ⎪ ⎪ T ⎪ −α x + (1 − H) cj |xij | −yi + (1 − H)εi , i ⎪ ⎩

(i = 1, 2, · · · , N ).

j=1

Proof: Shown as Figure 2.1.2, ¯ h is derived as follow. By using the similarity of the right triangles, then v 1−h , v = εi (1 − h), = εi 1 k = v + |yi − αT xi |, k = |yi − αT xi | + εi (1 − h). Again by using the similarity of the right triangles, hence 1−h k = , n 1 cj |xij | j=1

1−h |yi − αT xi | + εi (1 − h) = . n 1 cj |xij | j=1

(2.1.4)

38

2 Regression and Self-regression Models with Fuzzy Coeﬃcients

Find equation (2.1.4), then |yi − αT xi | 1−h= , n cj |xij | − εi j=1

i.e., ¯i = 1 − h

|yi − αT xi |

n

.

cj |xij | − εi

j=1

From (2.1.4), at yi − αT xi 0, then −αT xi + (1 − H)

n

cj |xij | −yi + (1 − H)εi ,

j=1

at yi − αT xi 0, we can get the same truth, αT xi + (1 − H)

n

cj |xij | yi + (1 − H)εi (i = 1, 2, · · · , N ).

j=1

Combining two kinds of situations above, the theorem can be certiﬁcated. Deﬁnition 2.1.7. The vagueness of the fuzzy linear model is denoted by n J(c) = cj |xij |, where xij is an observation datum, cj a width in A˜j . j=1

Therefore, fuzzy parameter A˜j (j = 1, · · · , n) certainly is concluded to com˜j = (αj , cj ) in the following linear programputation of an optimal solution Aˆ ming with parameter variables min J(c) =

n

cj |xij |

j=1

s.t. αT xi + (1 − H)

n

cj |xij | yi + (1 − H)εi ,

j=1

− αT xi + (1 − H)

n

(2.1.5)

cj |xij | −yi + (1 − H)εi ,

j=1

c 0, H ∈ [0, 1], (i = 1, · · · , N ). Deﬁnition 2.1.8. Suppose the regression value of model is Y˜ˆi = (yi , εi ), but actually measure value is Yi , then

2.2 Self-regression Models with (·, c)−Fuzzy Coeﬃcients

39

& ' N ' ' (ˆ yi − yi )2 ' ' i=1 RIC = ' ' N ( yi2 i=1

is an accurate level measuring a forecast model, and RIC ∈ [0, +∞). When RIC=0, it is a perfect forecast. ˜j into (2.1.2), that is, a fuzzy linear regression model with fuzzy Put Aˆ coeﬃcients is what we ﬁnd. Obviously, c = 0 is a classical case. The mold steps can be induced as follows. Step 1. Put the collected data (ordinarily real data) into (2.1.3), and according to Theorem 2.1.1, change the solution to parameter A˜j into a solution to a linear programming. Step 2. Find an optimal parameter solution Aˆj (j = 1, 2, · · · , n) to (2.1.1), then we get a regression forecasting model ˆ x + A˜ ˆ x + . . . + A˜ ˆ x (i = 1, 2, · · · , N ). ˜i = A˜ Yˆ 1 i1 2 i2 n in Step 3. Obtain an accurate judgement in forecast model a. The nearer RIC reaches zero, the nearer the value of yˆi approaches yi , which means the higher an accuracy of the forecasting value. b. At RIC=0, this is a perfect forecast, here, yˆi = (yi + εi ) × 0.618 + (yi − εi ) × 0.382. Through judgement, the model passes through examination, then it can be thrown into a forecast. c. The estimation of the forecast value range. n Aˆj xij , then take Suppose yˆi = (yi , εi ) = j=1

yˆi− = yi − (1 − H)εi , yˆi+ = yi + (1 − H)εi , hence [ˆ yi− , ˆi+ t ] is a forecast value range.
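Once $H$ is fixed, problem (2.1.5) is an ordinary linear program. The following is a minimal sketch, not the book's own implementation: it assumes SciPy's `linprog` is available, sums the vagueness objective over all observations $i$ (the usual convention; the book writes the sum for a generic $i$), and the function name is ours.

```python
import numpy as np
from scipy.optimize import linprog

def fit_fuzzy_lp(X, y, eps, H=0.5):
    """Solve (2.1.5): centres alpha (free sign) and widths c >= 0 minimising
    sum_i sum_j c_j |x_ij|, subject to the two inclusion constraints per
    observation.  Decision vector z = [alpha, c]."""
    N, n = X.shape
    A = np.abs(X)
    cost = np.concatenate([np.zeros(n), A.sum(axis=0)])  # only widths are penalised
    #  alpha^T x_i + (1-H) sum_j c_j|x_ij| >=  y_i + (1-H) eps_i
    # -alpha^T x_i + (1-H) sum_j c_j|x_ij| >= -y_i + (1-H) eps_i
    # rewritten as A_ub @ z <= b_ub:
    A_ub = np.vstack([np.hstack([-X, -(1 - H) * A]),
                      np.hstack([X, -(1 - H) * A])])
    b_ub = np.concatenate([-(y + (1 - H) * eps), y - (1 - H) * eps])
    bounds = [(None, None)] * n + [(0, None)] * n
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n], res.x[n:]  # (alpha, c)

# Exactly linear data y = 2x with unit spread: the binding constraint at x = 1
# forces c >= 2|2 - alpha| + 1, so the optimum is alpha = 2, c = 1.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.0, 4.0, 6.0, 8.0])
eps = np.ones(4)
alpha, c = fit_fuzzy_lp(X, y, eps, H=0.5)
```

Any LP solver would serve equally well; the dual simplex remark of Section 2.4 applies here too, since each observation contributes two constraints.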

2.2 Self-regression Models with (·, c)-Fuzzy Coefficients

2.2.1 Introduction

On the foundation of Refs. [Cao90], [Dia87] and [Wat87], we consider another model, the self-regression model with (·, c) fuzzy coefficients.


2 Regression and Self-regression Models with Fuzzy Coeﬃcients

It generalizes a fuzzy least squares system through the special case of T-fuzzy data, and is more widely applicable than the classical one.

2.2.2 Model

Let us consider the classical n-order self-regression forecast model

$$Y_t = A_0 + A_1 Y_{t-1} + \cdots + A_n Y_{t-n} + e. \qquad (2.2.1)$$

Applying fuzzy set theory to expand (2.2.1) gives

$$\tilde{Y}_t = \tilde{A}_0 + \tilde{A}_1 \tilde{Y}_{t-1} + \cdots + \tilde{A}_n \tilde{Y}_{t-n} + e_t, \qquad (2.2.2)$$

calling (2.2.2) a self-regression model with (·, c) fuzzy coefficients, where the parameters $\tilde{A}_j$ $(j = 0, 1, \cdots, n)$ to be estimated and the dependent sequence $\tilde{Y}_t$ are all (·, c) fuzzy numbers, $e_t$ is an error, and $t$ denotes the benchmark time. Assume $\tilde{Y}_t$ and $\tilde{A}_j$ $(j = 0, 1, \cdots, n)$ are all convex and normalized fuzzy numbers. The extension principle and the concept of fuzzy numbers appear in [Cao89b].

Definition 2.2.1. Let $f : R^{n+1} \to F(y)$ be a fuzzy function, $\tilde{Y}_t = f(Y_{t-j}, \tilde{A})$ $(j = 1, 2, \cdots, n)$, where $Y_{t-j} \in R$, $\tilde{A}$ is a fuzzy set and $F(y)$ represents all fuzzy subsets on $R$. The membership function of $\tilde{Y}_t$ is

$$\mu_{\tilde{Y}_t}(y) = \begin{cases} \max_{\{a \mid y = f(y_{t-j}, a)\}} \mu_{\tilde{A}}(a), & \{a \mid y = f(y_{t-j}, a)\} \neq \phi, \\ 0, & \text{otherwise.} \end{cases}$$

Definition 2.2.2. The fuzzy self-regression parameter $\tilde{A}$ is defined by the Cartesian product set

$$\tilde{A} = \tilde{A}_0 \times \tilde{A}_1 \times \cdots \times \tilde{A}_n,$$

which is on the Cartesian product space $R^{n+1}$. The membership function of $\tilde{A}_j$ is

$$\mu_{\tilde{A}_j}(a_j) = \begin{cases} 1 - \dfrac{|\alpha_j - a_j|}{c_j}, & a_j \in [\alpha_j - c_j, \alpha_j + c_j], \\ 0, & \text{otherwise,} \end{cases}$$

where $a = (a_0, a_1, \cdots, a_n)$, $\tilde{A}_j = (\alpha_j, c_j)$ $(j = 0, 1, \cdots, n)$, $\alpha_j$ is the mean value of $\tilde{A}_j$ and $c_j > 0$ is the width of $\tilde{A}_j$.

Proposition 2.2.1. The fuzzy self-regression model is

$$\tilde{Y}_t = \tilde{A}_0 + \sum_{j=1}^{n} \tilde{A}_j Y_{t-j} = \tilde{A} Y = (\alpha^T Y, c^T Y), \qquad (2.2.3)$$

where $Y = (1, Y_{t-1}, \cdots, Y_{t-n})^T$, $\alpha = (\alpha_0, \alpha_1, \cdots, \alpha_n)^T$, $c = (c_0, c_1, \cdots, c_n)^T$, and the membership function of $\tilde{Y}_t$ is

$$\mu_{\tilde{Y}_t}(y) = \begin{cases} 1 - \dfrac{|y - \alpha^T Y|}{c^T |Y|}, & Y \neq 0, \\ 1, & Y = 0,\ y = 0, \\ 0, & Y = 0,\ y \neq 0. \end{cases} \qquad (2.2.4)$$

Proof: Applying Definition 2.2.1 and stipulating

$$\mu_{\tilde{Y}_t}(y) = \begin{cases} \max_{\{a \mid a^T Y = y\}} \mu_{\tilde{A}}(a), & \{a \mid a^T Y = y\} \neq \phi, \\ 0, & \text{otherwise} \end{cases}$$

$$= \begin{cases} \max_{\{a \mid a^T Y = y\}} \prod_{j=1}^{n} \left(1 - \dfrac{|\alpha_j - a_j|}{c_j}\right), & Y \neq 0, \\ 0, & \text{otherwise} \end{cases}$$

$$= (2.2.4).$$

The decision-maker chooses a threshold value $H_0$. If the degree of fitting $H$ between the forecast data and the estimation value satisfies $\max H \geq H_0$, so that

$$Y_t^{*H_0} = \{y \mid \mu_{\tilde{Y}_t^*}(y) \geq H_0\},$$

then we attain the best estimation of the dependent variable of (2.2.3). The approximate indicator of $H$ is illustrated in Figure 2.2.1.

[Fig. 2.2.1. The Approximate Indicator of $H$: the triangular membership function of the observed output, with vertices $A(Y_t - e_t, 0)$ and $B(Y_t, 1)$, intersects that of the estimated output, with vertices $C(\alpha_0 + \alpha_i Y_{t-i} - c_0 - \sum_{j=1}^{n} c_j |Y_{(t-j)i}|, 0)$ and $D(\alpha_0 + \alpha_i Y_{t-i}, 1)$, at height $H$.]

Theorem 2.2.1. Let the fuzzy self-regression model be (2.2.2). Then

$$\max H \geq H_0 \qquad (2.2.5)$$

$$\Longleftrightarrow \begin{cases} -\alpha_0 - \alpha_i Y_{t-i} + (1 - H_0)\Big(c_0 + \sum_{j=1}^{n} c_j |Y_{(t-j)i}|\Big) \geq -Y_t + (1 - H_0)e_t, & (2.2.6) \\ \alpha_0 + \alpha_i Y_{t-i} + (1 - H_0)\Big(c_0 + \sum_{j=1}^{n} c_j |Y_{(t-j)i}|\Big) \geq Y_t + (1 - H_0)e_t. & (2.2.7) \end{cases}$$

Proof: From Figure 2.2.1, when $Y_t - \alpha_i Y_{t-i} \geq 0$, the line segments $AB$ and $CD$ are given respectively by

$$x = e_t (y - 1) + Y_t,$$
$$y = \frac{x - (\alpha_0 + \alpha_i Y_{t-i}) + c_0 + \sum_{j=1}^{n} c_j |Y_{(t-j)i}|}{c_0 + \sum_{j=1}^{n} c_j |Y_{(t-j)i}|},$$

whence

$$H_0 = 1 - \frac{\alpha_0 + \alpha_i Y_{t-i} - Y_t}{c_0 + \sum_{j=1}^{n} c_j |Y_{(t-j)i}| - e_t},$$

where $e_t = y_t - \hat{y}_t$ represents a deviation and $e_t = 0$ is the non-fuzzy state. Combining this with (2.2.5), we obtain (2.2.6). When $Y_t - \alpha_i Y_{t-i} \leq 0$, (2.2.7) is obtained by the same method.

If we define $J = \sum_{j=1}^{n} c_j |Y_{(t-j)i}|$ as the fuzzy degree of (2.2.2), then finding the parameters $\tilde{A}_j$ $(j = 0, 1, \cdots, n)$ changes into solving for the optimal solution of

$$\min \Big\{ J = \sum_{j=1}^{n} c_j |Y_{(t-j)i}| \ \Big|\ (2.2.6), (2.2.7) \Big\}. \qquad (2.2.8)$$

Algorithm Steps

The modeling steps for (2.2.2) are summed up as follows:

Step 1. From the observation data, work out a self-dependent sequence table.

Step 2. Calculate the self-dependent coefficients by

$$r_j = \frac{N \sum_{i=1}^{N} Y_{(t-j)i} Y_{ti} - \sum_{i=1}^{N} Y_{(t-j)i} \sum_{i=1}^{N} Y_{ti}}{\sqrt{\Big[N \sum_{i=1}^{N} Y_{(t-j)i}^2 - \big(\sum_{i=1}^{N} Y_{(t-j)i}\big)^2\Big]\Big[N \sum_{i=1}^{N} Y_{ti}^2 - \big(\sum_{i=1}^{N} Y_{ti}\big)^2\Big]}}$$

$(j = 1, 2, \cdots, n)$, where time moves backward by $i$.


Step 3. Determine $\tilde{Y}_t = \tilde{A}_0 + \sum_{j=1}^{n} \tilde{A}_j Y_{(t-j)i}$ to be the best fuzzy self-regression forecast model by taking $r_\kappa = \max\{r_j \mid j = 1, 2, \cdots, n\}$. Then, according to Theorem 2.2.1, we solve for $\tilde{A}_j$ and obtain the self-regression equation

$$\tilde{y}_t = (y_t, e_t) = \tilde{A}_0 + \sum_{j=1}^{n} \tilde{A}_j Y_{t-j}.$$

Step 4. Decision. Let $y_t = 0.618(y_t + e_t) + 0.382(y_t - e_t)$, and define

$$\mathrm{RIC} = \sqrt{\sum_{i=1}^{N} (y_{ti} - Y_{ti})^2 \Big/ \sum_{i=1}^{N} Y_{ti}^2}, \qquad \mathrm{RIC} \in [0, \infty).$$

The closer RIC approaches zero, the higher the precision of the forecast; $\mathrm{RIC} = 0$ stands for a perfect forecast.

Step 5. Forecast. Let

$$\tilde{Y}_{t+q} = \tilde{A}_0 + \sum_{j=1}^{n} \tilde{A}_j Y_{t-(j+q)}.$$

Then the state at moment $q$ can be forecast, and the range of the forecast value is estimated to be

$$Y_{t+q}^* \in [Y_{t+q} - (1 - H_0)e_{t+q},\ Y_{t+q} + (1 - H_0)e_{t+q}].$$

2.2.3 Conclusion

In 1992, the author of [Yin92] advanced a fuzzy least squares identification method using the models of [Cao89b] and [Cao90], calling it a fuzzy least squares system. In the southern maintenance section of the Zhengzhou Railroad Bureau, we analyzed spectrum data samples of lubricating oil from BJ-type diesel locomotives. Out of 200 BJ-type locomotives, we diagnosed 50 chosen at random using the fuzzy least squares system set up by the author, and found that abnormal wear positions were generally identified accurately, as, on the whole, were the diagnoses of overall state and breakdown positions. Moreover, the rate of correct diagnosis was double that obtained by the critical-value or regression-control methods. Thus breakdowns can be diagnosed without disassembling the diesel engines, and the economic benefit acquired is considerable because of the method's convenience and practicality.


2.3 Exponential Model with Fuzzy Parameters

2.3.1 Introduction

Consider the model of Lenz, Isenson and Hartman, in which the volume of information increases with time and the factors concerned. We turn it into a forecasting-technique function and conclude with the following mathematical model:

$$\dot{Y}_t = k Y_t \quad (k > 0), \qquad (2.3.1)$$

where $Y_t$ is a characteristic parameter, $t$ is time, $k$ a proportionality constant and $\dot{Y}_t$ its rate of increase; the solution of equation (2.3.1) is the exponential $Y_t = Y_0 e^{kt}$. Because the characteristic technology of long-distance telephony follows an exponential regularity, we consider the more general exponential model

$$\hat{Y}(t) = A_1 A_2^t, \qquad (2.3.2)$$

where $A_1, A_2$ are parameters to be estimated and $\hat{Y}(t)$ denotes the estimated telephone amount in year $t$. The telephone amount fluctuates with various indeterminable factors; if we assume that the parameters to be estimated in (2.3.2) are fuzzy numbers, the model will contain more information. Below, we fuzzify the parameters of model (2.3.2) on the basis of Zadeh's fuzzy set theory [Zad65a], establish a forecast model of exponential type with fuzzy parameters, and study the application of this model through a practical example.

2.3.2 Exponential Model with Fuzzy Coefficients

Definition 2.3.1. Suppose $F(R)$ is the set of all fuzzy parameters and $\tilde{A}_i \in F(R)$ $(i = 1, 2)$; then

$$\tilde{Y}(t) = \tilde{A}_1 \tilde{A}_2^t, \qquad (2.3.3)$$

where $\tilde{A}_1, \tilde{A}_2$ are flexibly fixed values in the closed intervals $[A_1^-, A_1^+]$ and $[A_2^-, A_2^+]$ respectively, $A_1^- < A_1^+$, $A_2^- < A_2^+$, all of $A_1^-, A_1^+, A_2^-, A_2^+$ being real numbers; $\tilde{Y}(t)$ denotes the fuzzy telephone amount and $t$ denotes time. We call (2.3.3) an exponential model with fuzzy parameters.

Next, solutions to Model (2.3.3) are introduced.

1° Nonfuzzification

Theorem 2.3.1. If the membership function $\phi : R \to [0, 1]$ is continuous and strictly monotone, then the inverse function $\phi^{-1}$ exists, such that

$$\phi(\tilde{A}_j) \geq \alpha \Rightarrow \tilde{A}_j \geq \phi^{-1}(\alpha), \quad \alpha \in [0, 1]\ (j = 1, 2).$$

Proof: The theorem follows directly from the definition of the $\alpha$-cut set. Let $\phi(\tilde{A}_j)$ be as in (1.5.3). If $\phi(\tilde{A}_j) \geq \alpha$, $\alpha \in [0, 1]$, then

$$\frac{\tilde{A}_j - A_j^-}{A_j^+ - A_j^-} \geq \alpha \Rightarrow \tilde{A}_j - A_j^- \geq \alpha(A_j^+ - A_j^-) \Rightarrow \tilde{A}_j \geq A_j^- + \alpha(A_j^+ - A_j^-) \quad (j = 1, 2).$$


Take

$$\tilde{A}_1 \to A_1^- + \alpha(A_1^+ - A_1^-), \qquad \tilde{A}_2 \to A_2^- + \alpha(A_2^+ - A_2^-).$$

Putting these into (2.3.3), $\tilde{Y}(t) \to Y(t, \alpha)$, and (2.3.3) becomes the crisp model

$$\hat{Y}(t, \alpha) = [A_1^- + \alpha(A_1^+ - A_1^-)][A_2^- + \alpha(A_2^+ - A_2^-)]^t, \quad \alpha \in [0, 1]. \qquad (2.3.4)$$

This proves the claim.

2° Linearizing

Let $A = A_1^- + \alpha(A_1^+ - A_1^-)$ and $B = A_2^- + \alpha(A_2^+ - A_2^-)$. Then (2.3.4) becomes

$$\hat{Y}(t, \alpha) = A B^t. \qquad (2.3.5)$$

Linearize (2.3.5) by taking logarithms:

$$\ln \hat{Y}(t, \alpha) = \ln A + t \ln B. \qquad (2.3.6)$$

3° Estimation of parameters

Next we estimate the parameters $A$ and $B$.

Theorem 2.3.2. For the given sample set $\{Y(t_1, \alpha), Y(t_2, \alpha), \cdots, Y(t_N, \alpha)\}$, $\alpha \in [0, 1]$, the least squares estimators of the parameters $A, B$ with variable $\alpha$ are, writing $\Delta t_k = t_{k+1} - t_k$,

$$\hat{A} = \exp\left\{ \frac{\sum_{k=1}^{N-1} [t_{k+1} \ln Y(t_k, \alpha) - t_k \ln Y(t_{k+1}, \alpha)]\, \Delta t_k}{\sum_{k=1}^{N-1} (\Delta t_k)^2} \right\}, \qquad (2.3.7)$$

$$\hat{B} = \exp\left\{ \frac{\sum_{k=1}^{N} t_k \ln Y(t_k, \alpha) - \ln \hat{A} \sum_{k=1}^{N} t_k}{\sum_{k=1}^{N} t_k^2} \right\}. \qquad (2.3.8)$$
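A direct NumPy transcription of (2.3.7) and (2.3.8) follows (the function name is ours). On exact data $Y(t_k) = A B^{t_k}$ the estimators recover $A$ and $B$, which also serves as a check of the reconstruction:

```python
import numpy as np

def estimate_exponential(t, y):
    """Least squares estimators (2.3.7)-(2.3.8) for Y(t) = A * B**t.

    t : strictly increasing sample times t_1..t_N
    y : positive sample values Y(t_k, alpha) at a fixed alpha
    """
    t, logy = np.asarray(t, float), np.log(np.asarray(y, float))
    dt = np.diff(t)                                   # Delta t_k = t_{k+1} - t_k
    # (2.3.7): ln A from the pairwise eliminations (t_{k+1} - t_k) ln A = ...
    num = (t[1:] * logy[:-1] - t[:-1] * logy[1:]) * dt
    lnA = num.sum() / (dt ** 2).sum()
    # (2.3.8): ln B from the normal equation sum (ln y_k - ln A - t_k ln B) t_k = 0
    lnB = ((t * logy).sum() - lnA * t.sum()) / (t ** 2).sum()
    return np.exp(lnA), np.exp(lnB)

t = np.array([1.0, 2.0, 3.0, 5.0, 8.0])
y = 2.0 * 1.5 ** t                      # exact data with A = 2, B = 1.5
A_hat, B_hat = estimate_exponential(t, y)
```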

Proof: a) Since the sample set $\{Y(t_1, \alpha), \cdots, Y(t_N, \alpha)\} \to \{\ln Y(t_1, \alpha), \cdots, \ln Y(t_N, \alpha)\}$, for the given sample points $\{\ln Y(t_k, \alpha)\}$ $(k = 1, 2, \cdots, N)$, $\alpha \in [0, 1]$, we consider two neighbouring sample points $t_k$ and $t_{k+1}$ $(k = 1, 2, \cdots, N-1)$ in (2.3.6):

$$\ln \hat{Y}(t_k, \alpha) = \ln A + t_k \ln B, \qquad (2.3.9)$$
$$\ln \hat{Y}(t_{k+1}, \alpha) = \ln A + t_{k+1} \ln B. \qquad (2.3.10)$$

Forming $(2.3.9) \times t_{k+1} - (2.3.10) \times t_k$, we obtain

$$(t_{k+1} - t_k) \ln A = t_{k+1} \ln \hat{Y}(t_k, \alpha) - t_k \ln \hat{Y}(t_{k+1}, \alpha). \qquad (2.3.11)$$

b) Applying the least squares method, we build an objective function from (2.3.11):

$$J_1 = \sum_{k=1}^{N-1} [t_{k+1} \ln Y(t_k, \alpha) - t_k \ln Y(t_{k+1}, \alpha) - (t_{k+1} - t_k) \ln A]^2.$$

Combining with (2.3.9), we build another objective function by the least squares method:

$$J_2 = \sum_{k=1}^{N} [\ln Y(t_k, \alpha) - \ln \hat{Y}(t_k, \alpha)]^2 = \sum_{k=1}^{N} [\ln Y(t_k, \alpha) - (\ln A + t_k \ln B)]^2.$$

To minimize $J_1$ and $J_2$, we set $\dfrac{\partial J_1}{\partial \ln A} = 0$ and $\dfrac{\partial J_2}{\partial \ln B} = 0$, and write $\Delta t_k = t_{k+1} - t_k$, obtaining

$$\begin{cases} \sum_{k=1}^{N-1} [t_{k+1} \ln Y(t_k, \alpha) - t_k \ln Y(t_{k+1}, \alpha)]\, \Delta t_k = \sum_{k=1}^{N-1} (\Delta t_k)^2 \ln A, \\ 2 \sum_{k=1}^{N} [\ln Y(t_k, \alpha) - \ln A - t_k \ln B]\, t_k = 0. \end{cases} \qquad (2.3.12)$$

Solving (2.3.12) gives (2.3.7) and (2.3.8). This completes the proof.

4° Test

Obviously, for a given $\alpha$, Model (2.3.5) is determined after the two-step linearization, and so is Model (2.3.3). From this principle, given two values $\alpha_1, \alpha_2 \in [0, 1]$ such that $\hat{Y}(t_k, \alpha_1) \leq \hat{Y}(t_k, \alpha_2)$, we take

$$\hat{Y}(t_k, \alpha) = \hat{Y}(t_k, \alpha_1) + 0.618 \times [\hat{Y}(t_k, \alpha_2) - \hat{Y}(t_k, \alpha_1)]. \qquad (2.3.13)$$

Again, from the formulas

$$S = \sqrt{\frac{\sum_{k=1}^{N} [Y(t_k, \alpha) - \hat{Y}(t_k, \alpha)]^2}{N}}, \qquad (2.3.14)$$

$$E\% = \frac{1}{N} \sum_{k=1}^{N} \left|1 - \frac{Y(t_k, \alpha)}{\hat{Y}(t_k, \alpha)}\right| \times 100\%, \qquad (2.3.15)$$

we find the standard deviation $S$ of the forecasting error and the average relative error percentage $E\%$; the smaller $S$ and $E\%$ are, the better the model fits for forecasting.

5° Model determination

Theorem 2.3.3. Let $\phi : R \to [0, 1]$ be a continuous and strictly monotone membership function. Then $(2.3.3) \Longleftrightarrow (2.3.6)$.

Proof: From the discussion above, the result is obvious. Putting $\hat{A}, \hat{B}$ into (2.3.6), we obtain the crisp model $\ln \hat{Y}(t, \alpha) = \ln \hat{A} + t \ln \hat{B}$. Because $(2.3.6) \Longleftrightarrow (2.3.5)$, we have

$$\hat{Y}(t_k, \alpha) = \hat{A} \hat{B}^t. \qquad (2.3.16)$$

But $(2.3.5) \Longleftrightarrow (2.3.3)$, so that $(2.3.3) \Longleftrightarrow (2.3.6)$. This completes the proof.

Therefore, we can design a controlling forecast system for the telephone amount, something a classical system does not provide. If the above result strays from practice, we can obtain $\hat{Y}(k, \alpha)$ by taking further values of $\alpha$ from $[0, 1]$; but if we do so, we may face infinitely many candidate values, and it is impossible to calculate infinitely many $\hat{Y}$. So we calculate $\hat{Y}(k, 0)$ by choosing $\alpha = 0$ and compare it with $\hat{Y}(k, 1)$: if $\hat{Y}(k, 0)$ is superior to $\hat{Y}(k, 1)$, then $\hat{Y}(k, 0)$ is the goal; otherwise we apply the 0.618 method to search until an optimal value of the problem is found. In particular, when $t_k = k$ $(k = 1, 2, \cdots, N)$ we have $\hat{Y}(t_k, \alpha) = \hat{Y}(k, \alpha)$ with $\Delta t_k = t_{k+1} - t_k = 1$, and (2.3.7) and (2.3.8) become

$$\hat{A} = \exp\left\{ \frac{\sum_{k=1}^{N-1} \ln Y(k, \alpha) - \sum_{k=1}^{N-1} k[\ln Y(k+1, \alpha) - \ln Y(k, \alpha)]}{N-1} \right\} = \exp\left\{ \frac{2 \sum_{k=1}^{N-1} \ln Y(k, \alpha) - (N-1) \ln Y(N, \alpha)}{N-1} \right\}, \qquad (2.3.17)$$

$$\hat{B} = \exp\left\{ \frac{6 \sum_{k=1}^{N} k \ln Y(k, \alpha) - 3N(N+1) \ln \hat{A}}{N(N+1)(2N+1)} \right\}. \qquad (2.3.18)$$
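The 0.618 search over $\alpha$ mentioned above is ordinary golden-section minimisation of a one-dimensional criterion such as $S(\alpha)$ from (2.3.14). A minimal sketch, with the criterion passed in as a placeholder argument (the quadratic used in the example is purely illustrative):

```python
def golden_section_min(f, lo=0.0, hi=1.0, tol=1e-6):
    """Minimise a unimodal f on [lo, hi] by the 0.618 method."""
    while hi - lo > tol:
        a = hi - 0.618 * (hi - lo)   # interior trial points
        b = lo + 0.618 * (hi - lo)
        if f(a) < f(b):
            hi = b                   # minimum lies in [lo, b]
        else:
            lo = a                   # minimum lies in [a, hi]
    return 0.5 * (lo + hi)

# toy criterion with known minimiser alpha = 0.3
alpha_star = golden_section_min(lambda a: (a - 0.3) ** 2)
```

In practice `f` would fit the model for each trial $\alpha$ via (2.3.17)-(2.3.18) and return $S(\alpha)$ or $E\%(\alpha)$.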


The models corresponding to (2.3.16) and (2.3.13) are

$$\hat{Y}(k, \alpha) = \hat{A} \hat{B}^k \qquad (2.3.19)$$

and

$$\hat{Y}(k, \alpha) = \hat{Y}(k, \alpha_1) + 0.618 \times [\hat{Y}(k, \alpha_2) - \hat{Y}(k, \alpha_1)], \qquad (2.3.20)$$

respectively. Because

$$\hat{A} = \hat{A}_1^- + \alpha(\hat{A}_1^+ - \hat{A}_1^-), \qquad \hat{B} = \hat{A}_2^- + \alpha(\hat{A}_2^+ - \hat{A}_2^-), \qquad (2.3.21)$$

computing the simultaneous equations (2.3.21) for chosen values of $\alpha$ determines $\hat{A}_1^-, \hat{A}_1^+, \hat{A}_2^-, \hat{A}_2^+$. We then synthesize an exponential model

$$\hat{Y}(k, \alpha) = \hat{A}_1 \hat{A}_2^k, \qquad (2.3.22)$$

so that an exponential model with fuzzy parameters is obtained:

$$\tilde{Y}(k) = \tilde{A}_1 \tilde{A}_2^k. \qquad (2.3.23)$$

2.3.3 Practical Example

Example 2.3.1: The amount of long-distance telephone traffic in China during 1980-1990 is as follows.

Table 2.3.1. Amount of Long-distance Telephone in China

Year | No. | Practical data
1980 | 1 | [14940, 21404]
1981 | 2 | [18031, 22049]
1982 | 3 | [21760, 23574]
1983 | 4 | [26262, 26556]
1984 | 5 | [31549, 31553]
1985 | 6 | [38250, 38254]
1986 | 7 | [42299, 42303]
1987 | 8 | [51521, 51525]
1988 | 9 | [64615, 64617]
1989 | 10 | [78458, 78462]
1990 | 11 | [97932, 106291]

We forecast the telephone amount by applying the exponential model (2.3.23) with fuzzy parameters, taking $\alpha = 1$, with Formula (2.3.22) correspondingly being

$$\hat{Y}(k, 1) = \hat{A}_1^+ (\hat{A}_2^+)^k.$$

Using (2.3.17) and (2.3.18), we get the parameters $\hat{A}_1^+ = 12380$, $\hat{A}_2^+ = 1.2069$. When $\alpha = \alpha_1 = \alpha_2 = 1$, from (2.3.20),

$$\hat{Y}(k, 1) = 12380 \times 1.2069^k \quad (k = 1, 2, \cdots, 11).$$


Hence, the telephone amount forecast values at $\alpha = 1$ are shown in Table 2.3.2.

Table 2.3.2. Amount of Long-distance Telephone at $\alpha = 1$ in China

Year | No. | Practical data
1980 | 1 | 21404
1981 | 2 | 22049
1982 | 3 | 23574
1983 | 4 | 26556
1984 | 5 | 31553
1985 | 6 | 38254
1986 | 7 | 42303
1987 | 8 | 51525
1988 | 9 | 64617
1989 | 10 | 78462
1990 | 11 | 106291

By the standard deviation formula (2.3.14),

$$S = \sqrt{\frac{\sum_{k=1}^{11} [Y(k, 1) - \hat{Y}(k, 1)]^2}{11}},$$

we obtain $S = 4019$. Again, from the percentage error formula (2.3.15),

$$E\% = \frac{1}{11} \sum_{k=1}^{11} \left|1 - \frac{Y(k, 1)}{\hat{Y}(k, 1)}\right| \times 100\%,$$

we get an average relative error of 8.21%. By comparison, the geometric-average method gives $S = 9405$, $E\% = 19.78\%$, and the average-value exponential curve gives $S = 4811$, $E\% = 9.74\%$; therefore the fuzzy exponential forecast method described here is superior to both [Zhe92]. At the confidence level of 95%, the long-distance telephone amount in China varies in the interval $\hat{Y} \pm 2S$. Hence the forecast amounts for 1980-1990 are shown below.

Table 2.3.3. Forecast Amount of Long-distance Telephone in China

Year | No. | Forecast interval
1980 | 1 | [6910.6, 22972.2]
1981 | 2 | [10002, 26063.6]
1982 | 3 | [13733, 29794.6]
1983 | 4 | [18236, 34297.5]
1984 | 5 | [23670.5, 39732.1]
1985 | 6 | [30229.5, 46291.1]
1986 | 7 | [38145.6, 54207.2]
1987 | 8 | [47699.5, 63761]
1988 | 9 | [59230, 75291.6]
1989 | 10 | [73146.3, 89207.9]
1990 | 11 | [89941.9, 106003.4]
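The example's headline numbers can be checked directly. The snippet below reproduces $S \approx 4019$ for the fitted curve $12380 \times 1.2069^k$ against the $\alpha = 1$ data of Table 2.3.2, and builds the $\pm 2S$ band of Table 2.3.3; small differences from the printed values come from the rounded parameters:

```python
import numpy as np

# alpha = 1 observations from Table 2.3.2 (k = 1..11, years 1980-1990)
Y = np.array([21404, 22049, 23574, 26556, 31553, 38254,
              42303, 51525, 64617, 78462, 106291], dtype=float)
k = np.arange(1, 12)
Y_hat = 12380 * 1.2069 ** k                 # fitted model (2.3.22) at alpha = 1

S = np.sqrt(((Y - Y_hat) ** 2).mean())      # standard deviation, formula (2.3.14)
band = np.c_[Y_hat - 2 * S, Y_hat + 2 * S]  # 95% band of Table 2.3.3
```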

If we select values of $\alpha \in [0, 1]$ by the 0.618 method and search using (2.3.20), we may acquire a still better result.

2.3.4 Conclusion

The method in this section is an extension of the fuzzy exponential forecast model. We can always change it into a series of crisp forecast models for different values of $\alpha \in [0, 1]$, and then obtain a forecast value for each linearized model by the two-step least squares method. Each


forecast value $\hat{Y}$ fluctuates in the band bounded by $\hat{Y}^-$ and $\hat{Y}^+$, which gives us more information when choosing a satisfactory forecast result by the 0.618 method. We point out that the model here can be further expanded to situations with various fuzzy coefficients and even with fuzzy variables [Cao89b][Cao93e][DPr78][TUA82][Wat87][Zad82].

2.4 Regression and Self-regression Models with Flat Fuzzy Coefficients

2.4.1 Basic Properties

Definition 2.4.1. If for all $x, y, z \in R$ with $x \leq y \leq z$ the fuzzy set $\tilde{A}$ satisfies

i) $\mu_{\tilde{A}}(y) \geq \mu_{\tilde{A}}(x) \wedge \mu_{\tilde{A}}(z)$,
ii) $\max_{x \in R} \mu_{\tilde{A}}(x) = 1$,

then $\tilde{A}$ is called a convex normal fuzzy number. We also call $A_\alpha = \{x \mid \mu_{\tilde{A}}(x) \geq \alpha,\ 0 < \alpha \leq 1\}$ a platform of the flat fuzzy number $\tilde{A}$.

Proposition 2.4.1. The flat fuzzy number $\tilde{A}$ is convex $\Leftrightarrow$ every $A_\alpha$ $(0 < \alpha \leq 1)$ is an interval.

Proof: "$\Rightarrow$" If $\tilde{A}$ is convex, then from Definition 2.4.1 i) we know $y \in A_\alpha$; by the arbitrariness of $x, y, z$, $A_\alpha$ is necessarily an interval.

"$\Leftarrow$" Suppose $A_\alpha$ is an interval for every $\alpha \in [0, 1]$. Consider $x, z \in R$ and let $\alpha_0 = \mu_{\tilde{A}}(x) \wedge \mu_{\tilde{A}}(z)$; then $A_{\alpha_0}$ must be an interval. From $x \leq y \leq z$ we get $y \in A_{\alpha_0}$, so $\mu_{\tilde{A}}(y) \geq \alpha_0$; hence $\tilde{A}$ is convex.

Again from Definition 2.4.1 we know that a flat fuzzy number necessarily satisfies $\max_{x \in R} \mu_{\tilde{A}}(x) = \mu_{\tilde{A}_j}(\alpha) = 1$ for $\alpha \in (A_j^-, A_j^+)$, hence it is a convex normal fuzzy number.

2.4.2 Linear Regression Model with Flat Fuzzy Parameters

We always suppose that $\tilde{A}_j$ is a convex and normal fuzzy number, and consider the linear regression model

$$\tilde{Y} = \tilde{A}_1 x_1 + \tilde{A}_2 x_2 + \cdots + \tilde{A}_n x_n = \tilde{A} x, \qquad (2.4.1)$$

where $\tilde{A} = (\tilde{A}_1, \tilde{A}_2, \cdots, \tilde{A}_n)$ and $x = (x_1, x_2, \cdots, x_n)^T$. In the model, we call $y_j^* = (A_j^{*-}, A_j^{*+}) x_j$ $(j = 1, 2, \cdots, n)$ a regression value, $y_j = (A_j^-, A_j^+) x_j$ an observation value, and $y_j - y_j^* = \varepsilon_j$ an observation error, $\varepsilon_j$ being a random variable with zero as its main value, $A_j^{*-} = A_j^- \pm \varepsilon_j$ and $A_j^{*+} = A_j^+ \pm \varepsilon_j$.

Definition 2.4.2. Suppose $f : x \to F(y)$ denotes the fuzzy function $\tilde{Y} = f(x, \tilde{A})$, where $x \in R$ and $F(y)$ is a fuzzy-valued set. The membership function of $\tilde{Y}$ is

$$\mu_{\tilde{Y}}(y) = \begin{cases} \max_{\{a \mid y = f(x,a)\}} \mu_{\tilde{A}}(a), & \{a \mid y = f(x,a)\} \neq \phi, \\ 0, & \text{otherwise.} \end{cases}$$

Definition 2.4.3. Suppose the quadruple parameter $\tilde{A}_j = (A_j^-, A_j^+, \sigma_{A_j}^-, \sigma_{A_j}^+)$ is a flat fuzzy number; then its membership function $\mu_{\tilde{A}_j}(a_j)$ is defined as

$$\mu_{\tilde{A}_j}(a_j) = \begin{cases} 1 - \dfrac{A_j^- - a_j}{\sigma_{A_j}^-}, & A_j^- - \sigma_{A_j}^- \leq a_j < A_j^-, \\ 1, & A_j^- \leq a_j \leq A_j^+, \\ 1 - \dfrac{a_j - A_j^+}{\sigma_{A_j}^+}, & A_j^+ < a_j \leq A_j^+ + \sigma_{A_j}^+, \\ 0, & \text{otherwise.} \end{cases}$$

Proposition 2.4.2. Suppose the regression coefficient $\tilde{A} = (A^-, A^+, \sigma_A^-, \sigma_A^+)$ is a flat fuzzy number; then the membership function in (2.4.1) is

$$\mu_{\tilde{Y}}(y) = \begin{cases} 1 - \dfrac{A^- x^T - y}{\sigma_A^- x^T}, & (A^- - \sigma_A^-) x^T \leq y < A^- x^T, \\ 1, & A^- x^T \leq y \leq A^+ x^T, \\ 1 - \dfrac{y - A^+ x^T}{\sigma_A^+ x^T}, & A^+ x^T < y \leq (A^+ + \sigma_A^+) x^T, \\ 0, & \text{otherwise,} \end{cases} \qquad (2.4.2)$$

where $x = (x_1, x_2, \cdots, x_n)^T$.

Proof:

$$\mu_{\tilde{Y}}(y) = \begin{cases} \max_{\{a \mid a^T x = y\}} \mu_{\tilde{A}}(a), & \{a \mid a^T x = y\} \neq \phi, \\ 0, & \text{otherwise} \end{cases}$$

$$= \begin{cases} \max_{\{a \mid a^T x = y\}} \Big\{ \prod_{j=1}^{n} \mu_{\tilde{A}_j}(a_j) \Big\}, & \{a \mid a^T x = y\} \neq \phi, \\ 0, & \text{otherwise} \end{cases}$$

$$= \begin{cases} \max_{\{a \mid a^T x = y\}} \Big\{ \prod_{j=1}^{n} \Big(1 - \dfrac{A_j^- - a_j}{\sigma_{A_j}^-}\Big) \Big\}, & A_j^- - \sigma_{A_j}^- \leq a_j < A_j^-, \\ 1, & A_j^- \leq a_j \leq A_j^+, \\ \max_{\{a \mid a^T x = y\}} \Big\{ \prod_{j=1}^{n} \Big(1 - \dfrac{a_j - A_j^+}{\sigma_{A_j}^+}\Big) \Big\}, & A_j^+ < a_j \leq A_j^+ + \sigma_{A_j}^+, \\ 0, & \text{otherwise} \end{cases}$$

$$= (2.4.2).$$

The proposition holds.

Suppose the fuzzy linear regression model is $\tilde{Y}_i^* = \tilde{A}_1^* x_{i1} + \tilde{A}_2^* x_{i2} + \cdots + \tilde{A}_n^* x_{in} = \tilde{A}^* x_i$ $(i = 1, 2, \cdots, N)$, where $\tilde{A}^* = (\tilde{A}_1^*, \tilde{A}_2^*, \cdots, \tilde{A}_n^*)$ and $x_i = (x_{i1}, x_{i2}, \cdots, x_{in})^T$. Then the membership function of $\tilde{Y}_i^*$ is given by


$$\mu_{\tilde{Y}_i^*}(y) = 1 - \frac{|y_i - A_i^{*\pm} x_i|}{\sigma_{A_i^{*\pm}} |x_i|};$$

its degree of fitting to the given data $\tilde{Y}_i = (y_i, \varepsilon_i)$ is measured by the following index $h_l$ $(l = 1, 2)$, which maximizes $h$ subject to $Y_i^h \subset Y_i^{*h}$ $(i = 1, 2, \cdots, N)$, where

$$Y_i^h = \{y \mid \mu_{\tilde{Y}_i}(y) \geq h\}, \qquad Y_i^{*h} = \{y \mid \mu_{\tilde{Y}_i^*}(y) \geq h\}$$

¯ is illustrated as Figure 2.4.1: are h−level sets, and index h μ 6 1

h2 h1

0

A D G V E F K C B H I − A− xi − σA xi A− xi

C S C SC SC CS CS C S C S C S C S - y + A+ xi A+ xi − σA xi S

Fig. 2.4.1. Illustration for Membership Function of Regression Coeﬃcient A
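The quadruple $(A^-, A^+, \sigma^-, \sigma^+)$ of Definition 2.4.3 describes a trapezoid, and (2.4.2) is that trapezoid with its corners scaled by $x$. A small evaluator of the membership function (the function name is ours):

```python
def flat_membership(a, A_minus, A_plus, s_minus, s_plus):
    """Membership of a flat (trapezoidal) fuzzy number
    (A_minus, A_plus, s_minus, s_plus): flat top equal to 1 on
    [A_minus, A_plus], linear shoulders of widths s_minus and s_plus."""
    if A_minus <= a <= A_plus:
        return 1.0
    if A_minus - s_minus <= a < A_minus:
        return 1.0 - (A_minus - a) / s_minus      # left shoulder
    if A_plus < a <= A_plus + s_plus:
        return 1.0 - (a - A_plus) / s_plus        # right shoulder
    return 0.0
```

Setting $A^- = A^+$ recovers the triangular (·, c) case of Section 2.2.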

The fitting degree of a fuzzy linear regression model to all the data $Y_1, Y_2, \cdots, Y_N$ is defined by $\min_l \{h_l\}$.

Definition 2.4.4. Use

$$J^{(1)} = \sigma_{A_1}^- x_{i1} + \sigma_{A_2}^- x_{i2} + \cdots + \sigma_{A_n}^- x_{in},$$
$$J^{(2)} = \sigma_{A_1}^+ x_{i1} + \sigma_{A_2}^+ x_{i2} + \cdots + \sigma_{A_n}^+ x_{in} \quad (i = 1, 2, \cdots, N)$$

to denote the fuzzy degree of Model (2.4.1) in its left and right shapes, respectively.

The problem is then explained as obtaining fuzzy parameters $\tilde{A}^*$ which minimize $(J^{(1)}, J^{(2)})$ subject to $\bar{h}_l \geq h_l$ for all $l$, where $h_l = (h_1, h_2)$ is a degree of fitting of the fuzzy linear model chosen by the decision makers.

Theorem 2.4.1. Suppose the model with flat fuzzy data is (2.4.1). Then

$$\bar{h} = (\min h_1, \min h_2)^T \geq (h_1, h_2)^T \qquad (2.4.3)$$

$$\Leftrightarrow \begin{cases} A_i^- x_i + (1 - h_1) \sum_{j=1}^{n} \sigma_{A_j}^- |x_{ij}| \geq y_i^- + (1 - h_1)\varepsilon_i^-, \\ -A_i^- x_i + (1 - h_1) \sum_{j=1}^{n} \sigma_{A_j}^- |x_{ij}| \geq -y_i^- + (1 - h_1)\varepsilon_i^- \end{cases} \qquad (2.4.4)$$

and

$$\begin{cases} A_i^+ x_i + (1 - h_2) \sum_{j=1}^{n} \sigma_{A_j}^+ |x_{ij}| \geq y_i^+ + (1 - h_2)\varepsilon_i^+, \\ -A_i^+ x_i + (1 - h_2) \sum_{j=1}^{n} \sigma_{A_j}^+ |x_{ij}| \geq -y_i^+ + (1 - h_2)\varepsilon_i^+. \end{cases} \qquad (2.4.5)$$

Proof: As shown in Figure 2.4.1, because $\triangle ABH \sim \triangle AEG$, we have

$$\frac{1 - h_1}{1} = \frac{v}{\varepsilon_i^-} \Longrightarrow v = \varepsilon_i^- (1 - h_1).$$

But $k = v + HI = v + |y_i^- - A_j^- x_i|$, and since $\triangle CDI \sim \triangle EDF$,

$$\frac{1 - h_1}{1} = \frac{k}{\sum_{j=1}^{n} \sigma_{A_j}^- |x_{ij}|} \Longrightarrow 1 - h_1 = \frac{\varepsilon_i^- (1 - h_1) + |y_i^- - A_j^- x_{ij}|}{\sum_{j=1}^{n} \sigma_{A_j}^- |x_{ij}|},$$

therefore

$$h_1 = 1 - \frac{|y_i^- - A_j^- x_i|}{\sum_{j=1}^{n} \sigma_{A_j}^- |x_{ij}| - \varepsilon_i^-}. \qquad (2.4.6)$$

In the same way we can get

$$h_2 = 1 - \frac{|y_i^+ - A_j^+ x_i|}{\sum_{j=1}^{n} \sigma_{A_j}^+ |x_{ij}| - \varepsilon_i^+}. \qquad (2.4.7)$$

Combining (2.4.3) with (2.4.6) and (2.4.7), then

$$1 - \frac{|y_i^- - A_j^- x_i|}{\sum_{j=1}^{n} \sigma_{A_j}^- |x_{ij}| - \varepsilon_i^-} \geq h_1, \qquad 1 - \frac{|y_i^+ - A_j^+ x_i|}{\sum_{j=1}^{n} \sigma_{A_j}^+ |x_{ij}| - \varepsilon_i^+} \geq h_2,$$

so that (2.4.4) and (2.4.5) are established, and the theorem is proved.

Our problem is to determine the parameters $\tilde{A}_j^* = (A_j^-, A_j^+, \sigma_{A_j}^-, \sigma_{A_j}^+)$ in (2.4.1), that is, to find the minimum values of $J^{(1)}$ and $J^{(2)}$ under the constraint $\bar{h} \geq (h_1, h_2)^T$, by solving the following classical parametric programs:

$$\min J^{(1)} \quad \begin{cases} \text{s.t. } (2.4.4), \\ \sigma_{A_j}^- \geq 0,\ h_1 \in [0, 1], \\ (j = 1, 2, \cdots, n), \end{cases} \qquad \min J^{(2)} \quad \begin{cases} \text{s.t. } (2.4.5), \\ \sigma_{A_j}^+ \geq 0,\ h_2 \in [0, 1], \\ (j = 1, 2, \cdots, n). \end{cases} \qquad (2.4.8)$$

A simplex or a dual simplex method easily finds their optimal solutions. Note that in (2.4.8) the constraint set of each problem, namely (2.4.4) or (2.4.5), contains $2n$ constraints, a number larger than the number of variables, so it is easier to convert each problem into its dual form before finding the optimal parameter solutions $A_j^-, \sigma_{A_j}^-; A_j^+, \sigma_{A_j}^+$. Synthesizing them in sequence into a flat fuzzy number, recorded as $\tilde{A}_j = (A_j^-, A_j^+, \sigma_{A_j}^-, \sigma_{A_j}^+)$ $(j = 1, 2, \cdots, n)$, the fuzzy parameters of (2.4.1) are acquired.

2.4.3 Precise Examination of the Model and the Modeling Method

For given data, a best fitting model can be obtained by solving the classical parametric program (2.4.8). Below we give a judgement method for the accuracy measurement of the forecast model.

Definition 2.4.5. Suppose the fuzzy regression value of (2.4.1) is $\hat{\tilde{y}}_i^* = (y_i^{-*}, y_i^{+*}, \varepsilon_i^{-*}, \varepsilon_i^{+*})$ and the actual value is denoted by $y_i$; then

$$\mathrm{RIC} = \sqrt{\sum_{i=1}^{N} (y_i^* - y_i)^2 \Big/ \sum_{i=1}^{N} y_i^2} \qquad (2.4.9)$$

is an accuracy measurement level for model (2.4.1), with $\mathrm{RIC} \in [0, \infty)$.

1) At $\mathrm{RIC} = 0$, the forecast is perfect.
2) The closer RIC approaches zero, the nearer $y_i^*$ tends to $y_i$, meaning a higher prediction accuracy.

According to the theories of the optimization method, $y_i^*$ and $y_i$ in (2.4.9) are defined by

$$y_i^* = (y_i^{-*} - \varepsilon_i^{-*}) \times 0.382 + (y_i^{+*} + \varepsilon_i^{+*}) \times 0.618,$$
$$y_i = (y_i^- - \varepsilon_i^-) \times 0.382 + (y_i^+ + \varepsilon_i^+) \times 0.618.$$

After the model passes the prediction examination of (2.4.9), it can be formally put into forecasting. Suppose the regression value acquired by the forecast is $y_{i+p}^* = (y_{i+p}^-, y_{i+p}^+, \varepsilon_{i+p}^-, \varepsilon_{i+p}^+)$; take the threshold value $h_0$ $(h_0 = h_1 \vee h_2)$, and then

$$y_{i+p}^{-*} = y_{i+p}^- - \varepsilon_{i+p}^- (1 - h_0), \qquad y_{i+p}^{+*} = y_{i+p}^+ - \varepsilon_{i+p}^+ (1 - h_0). \qquad (2.4.10)$$

Hence $y_{i+p}^* = [y_{i+p}^{-*}, y_{i+p}^{+*}]$ is the forecast value found for model (2.4.1).

Hereby we can set out the steps of modeling.

I. According to the collected data (ordinarily real data), substitute them into (2.4.1); according to Theorem 2.4.1 and Definition 2.4.4, convert this into the linear programming problem (2.4.8) with parameter variables.

II. Solve the two linear programs with parameter variables in problem (2.4.8) respectively; an optimal parameter solution to (2.4.8) is found, that is, certain fuzzy regression parameters of (2.4.1).


III. Give a series of data, and the best fitting model is confirmed, making a precise examination by (2.4.9).

IV. Forecasting. Let $Y_k = \sum_{i=1}^{N} A_i x_{ik}$. Then we can forecast the status at time $k$; using (2.4.10) again, we can ascertain the range of the forecasting value.

2.4.4 Self-regression Forecasting Model with Flat Fuzzy Parameters

According to the theories of the fuzzy linear regression model in the section above, we can follow Ref. [Cao89b] and induce fuzzy time series models with flat fuzzy numbers:

$$\tilde{Y}_t = \tilde{A}_1 Y_{t-1} + \tilde{A}_2 Y_{t-2} + \cdots + \tilde{A}_n Y_{t-n}. \qquad (2.4.11)$$

Definition 2.4.6. Model (2.4.11) is called an n-order self-regression model with flat fuzzy parameters, where $\tilde{Y}_t = (Y_t^-, Y_t^+, \sigma_t^-, \sigma_t^+)$.

The observation data $Y_{(t-j)i}$ $(i = 1, 2, \cdots, N;\ j = 1, 2, \cdots, n)$ are all ordinarily real numbers. From the formula

$$\gamma_j = \frac{N \sum_{i=1}^{N} Y_{(t-j)i} Y_{ti} - \sum_{i=1}^{N} Y_{(t-j)i} \sum_{i=1}^{N} Y_{ti}}{\sqrt{\Big[N \sum_{i=1}^{N} Y_{(t-j)i}^2 - \big(\sum_{i=1}^{N} Y_{(t-j)i}\big)^2\Big]\Big[N \sum_{i=1}^{N} Y_{ti}^2 - \big(\sum_{i=1}^{N} Y_{ti}\big)^2\Big]}} \qquad (2.4.12)$$

we calculate the self-related coefficient for the backward shift $i$ $(i = 1, 2, \cdots, N)$. If we take $\gamma_q = \max\{\gamma_i \mid i = 1, 2, \cdots, N\}$, then the model confirmed by $\tilde{Y}_t = \sum_{j=1}^{n} \tilde{A}_j Y_{(t-j)q}$ is optimal.

Theorem 2.4.2. Suppose the n-order fuzzy self-regression model is (2.4.11). Then

$$\min H_m \geq \beta_m, \quad \beta_m \in [0, 1]\ (m = 1, 2)$$

$$\Leftrightarrow \begin{cases} A_j^{-T} Y_{t-i} + (1 - \beta_1) \sum_{j=1}^{n} \sigma_{A_j}^- |Y_{(t-j)i}| \geq Y_t^- + (1 - \beta_1) e_t^-, \\ -A_j^{-T} Y_{t-i} + (1 - \beta_1) \sum_{j=1}^{n} \sigma_{A_j}^- |Y_{(t-j)i}| \geq -Y_t^- + (1 - \beta_1) e_t^- \end{cases} \qquad (2.4.13)$$

and

$$\begin{cases} A_j^{+T} Y_{t-i} + (1 - \beta_2) \sum_{j=1}^{n} \sigma_{A_j}^+ |Y_{(t-j)i}| \geq Y_t^+ + (1 - \beta_2) e_t^+, \\ -A_j^{+T} Y_{t-i} + (1 - \beta_2) \sum_{j=1}^{n} \sigma_{A_j}^+ |Y_{(t-j)i}| \geq -Y_t^+ + (1 - \beta_2) e_t^+. \end{cases} \qquad (2.4.14)$$

Proof: In Theorem 2.4.1 we need only change $y_i^-, y_i^+$ into $Y_t^-, Y_t^+$ and $x_i, x_{ij}$ into $Y_{t-i}, Y_{(t-j)i}$, respectively. By a proof similar to that of Theorem 2.4.1, the theorem holds true.


Definition 2.4.7. The fuzzy degrees of the left and right shapes in Model (2.4.11) are denoted by

$$s_1 = \sigma_{A_1}^- Y_{(t-j)1} + \sigma_{A_2}^- Y_{(t-j)2} + \cdots + \sigma_{A_n}^- Y_{(t-j)n},$$
$$s_2 = \sigma_{A_1}^+ Y_{(t-j)1} + \sigma_{A_2}^+ Y_{(t-j)2} + \cdots + \sigma_{A_n}^+ Y_{(t-j)n}.$$

Then determining the self-regression forecasting model (2.4.11) with flat fuzzy parameters comes down to finding, for each $m$, $\min s_m$ $(m = 1, 2)$ under $\bar{h} \geq (\beta_1, \beta_2)^T$, that is, to finding an optimal solution of the ordinary parametric programs

$$\min s_1 \quad \begin{cases} \text{s.t. } (2.4.13), \\ \sigma_{A_i}^- \geq 0,\ \beta_1 \in [0, 1], \\ (i = 1, \cdots, N), \end{cases} \qquad \min s_2 \quad \begin{cases} \text{s.t. } (2.4.14), \\ \sigma_{A_i}^+ \geq 0,\ \beta_2 \in [0, 1], \\ (i = 1, \cdots, N), \end{cases} \qquad (2.4.15)$$

where $\beta_1, \beta_2$ are degrees of fitting of the fuzzy self-regression model for the decision makers to choose.

Obviously, the modeling steps for (2.4.11) can be induced as follows:

I. Program the self-related sequence table according to the collected data.

II. By use of (2.4.12), find the self-related coefficients $\gamma$, and choose the forecast model from $\gamma_q = \max\{\gamma_i \mid i = 1, \cdots, N\}$.

III. Find an optimal parameter solution to (2.4.15), thus determining the fuzzy self-regression parameters.

IV. Given a list of data, the optimally fitting model is confirmed, making the accuracy examination at the same time. Suppose

$$\mathrm{RIC} = \sqrt{\sum_{i=1}^{N} (Y_{ti} - y_{ti})^2 \Big/ \sum_{i=1}^{N} y_{ti}^2},$$

where $Y_{ti} = (Y_{ti}^- - e_{ti}^-) \times 0.382 + (Y_{ti}^+ + e_{ti}^+) \times 0.618$ and $y_{ti} = (y_{ti}^- - e_{ti}^-) \times 0.382 + (y_{ti}^+ + e_{ti}^+) \times 0.618$, is the accuracy measurement level of forecast Model (2.4.11), with $\mathrm{RIC} \in [0, \infty)$. Judge as follows:

1° At $\mathrm{RIC} = 0$, it is a perfect forecast.
2° The more RIC approaches zero, the higher the forecast precision is; otherwise it is lower.

V. Forecast. Let

$$\tilde{y}_{t+q} = \sum_{j=1}^{n} \tilde{A}_j Y_{t-(p_j+q)}.$$

Then we forecast the status at time $q$; its forecasting range is $Y_{t+q}^* = [Y_{t+q}^-, Y_{t+q}^+]$, where $Y_{t+q}^- = y_{t+q}^- - e_{t+q}^- (1 - H_0)$, $Y_{t+q}^+ = y_{t+q}^+ - e_{t+q}^+ (1 - H_0)$, and $H_0 = \beta_1 \vee \beta_2$ is a threshold value.


From (2.4.1) and (2.4.11) we know that, when the spreads $\sigma_A^-, \sigma_A^+$ of the parameters and the spreads $e^-, e^+$ of the related fuzzy variables $\tilde{Y}$ are all zero, (2.4.1) and (2.4.11) reduce to the classical linear regression and self-regression models.
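The 0.382/0.618 defuzzification used in the RIC examination of step IV, and the forecast range (2.4.10), are both one-liners; a short sketch (function names ours):

```python
def defuzzify_flat(y_minus, y_plus, e_minus, e_plus):
    """Collapse a flat fuzzy value (y-, y+, e-, e+) to a point by the
    0.382/0.618 weighting used in the RIC examination."""
    return 0.382 * (y_minus - e_minus) + 0.618 * (y_plus + e_plus)

def forecast_interval(y_minus, y_plus, e_minus, e_plus, h0):
    """Forecast range (2.4.10): endpoints shifted by the spreads, scaled
    by (1 - h0), with threshold h0 = h1 v h2."""
    return (y_minus - e_minus * (1 - h0), y_plus - e_plus * (1 - h0))
```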

2.5 Linear Regression with Triangular Fuzzy Numbers

This section presents a new definition of the distance between two triangular fuzzy numbers with respect to their parameter variables, and provides a new method for fuzzy linear regression problems.

2.5.1 Preliminary

In order to study fuzzy linear regression with triangular fuzzy numbers, we introduce some basic knowledge as follows.

Definition 2.5.1. A fuzzy set $\tilde{A}$ is called a fuzzy number on $R$ if it satisfies the following:
(1) There exists $x_0 \in R$ such that $\mu_{\tilde{A}}(x_0) = 1$;
(2) $\forall \alpha \in [0, 1]$, $A_\alpha = \{x \mid \mu_{\tilde{A}}(x) \geq \alpha\} = [\underline{A_\alpha}, \overline{A_\alpha}]$ is a closed interval on $R$.

Denote by $F(R)$ the set of all fuzzy numbers on $R$; among $F(R)$ we often use triangular fuzzy numbers.

Definition 2.5.2. If $\tilde{A} \in F(R)$ satisfies the conditions
(1) $\forall \alpha \in [0, 1]$, $A_\alpha$ is a convex set on $R$;
(2) its membership function can be expressed as

$$\mu_{\tilde{A}}(x) = \begin{cases} \dfrac{x - A^L}{A^C - A^L}, & A^L \leq x \leq A^C, \\ \dfrac{x - A^R}{A^C - A^R}, & A^C \leq x \leq A^R, \\ 0, & \text{otherwise,} \end{cases}$$

then $\tilde{A}$ is called a triangular fuzzy number, written $\tilde{A} = (A^L, A^C, A^R)$, and $A^K$ $(K = L, C, R)$ are called the three parameter variables of $\tilde{A}$.

Triangular fuzzy numbers satisfy the following properties.

Property 2.5.1. Let $\tilde{A} = (A^L, A^C, A^R)$, $\tilde{B} = (B^L, B^C, B^R)$, $k \in R$. Then
(1) $\tilde{A} + \tilde{B} = (A^L + B^L, A^C + B^C, A^R + B^R)$;
(2) $\tilde{A} - \tilde{B} = (A^L - B^R, A^C - B^C, A^R - B^L)$;
(3) $k\tilde{A} = (kA^L, kA^C, kA^R)$ when $k \geq 0$, and $k\tilde{A} = (kA^R, kA^C, kA^L)$ when $k < 0$.

Besides the properties above, any two triangular fuzzy numbers can be compared with each other, that is:


˜ = (B L , B C , B R ), k ∈ R, we have Deﬁnition 2.5.3. Let A˜ = (AL , AC , AR ), B L L C ˜ ˜ (1) A < B if only if A < B , A < B C , and AR < B R ; ˜ if only if AL = B R , AC = B C , and AR = B L ; (2) A˜ = B (3) A > B if only if AL > B L , AC > B C , and AR > B R . 2.5.2 Distance between Two Triangular Fuzzy Numbers In order to estimate the regression parameters in the fuzzy linear regression models, ﬁrst we introduce a new conception as follows. ˜ = (B L , B C , B R ), k ∈ R. Then Deﬁnition 2.5.4. Let A˜ = (AL , AC , AR ), B we deﬁne ˜ B) ˜ = (AL − B L )2 ; (1) Left distance: dL (A, ˜ ˜ (2) Center distance: dC (A, B) = (AC − B C )2 ; ˜ B) ˜ = (AR − B R )2 . (3) Right distance: dR (A, Obviously, from the deﬁnition above, we know that they are the distance square between the points that the three parameter variables correspond to the rectangular coordinate system in fact, so they are an ordinary distance. Thus follows the next. ˜ = (B L , B C , B R ), k ∈ R. Then Property 2.5.2. Let A˜ = (AL , AC , AR ), B ˜ B) ˜ 0, dC (A, ˜ B) ˜ 0, dR (A, ˜ B) ˜ 0. (1) dL (A, ˜ ˜ ˜ ˜ ˜ ˜ ˜ A), ˜ dR (A, ˜ B) ˜ = dR (B, ˜ A). ˜ (2) dL (A, B) = dL (B, A), dC (A, B) = dC (B, L L 2 ˜ ˜ (Here we deﬁne dL (B, A) = (B − A ) , and the same to the others). ˜ B) ˜ = dC (A, ˜ B) ˜ = dR (A, ˜ B) ˜ = 0. (3) dL (A, 2 ˜ ˜ ˜ ˜ ˜ k B) ˜ = k 2 dC (A, ˜ B), ˜ (4) dL (k A, k B) = k dL (A, B), dC (k A, 2 ˜ ˜ ˜ ˜ dR (k A, k B) = k dR (A, B), k 0. Proof: From the Deﬁnition 2.5.1, (1) and (3) are obviously correct. Now we only prove the left distance in (2) and (4), and the same to the others. Then, we have ˜ B) ˜ = (AL − B L )2 = (B L − AL )2 = dL (B, A). (2) dL (A, ˜ ˜ = (kAL − kB L )2 = (kB L − kAL )2 = dL (kB, kA). (4) dL (k A, k B) 2.5.3 Fuzzy Linear Regression Now we consider the following fuzzy linear regression model y˜ = a ˜0 + a ˜1 x1 + a ˜2 x2 , x1 , x2 0,

(2.5.1)

where ỹ, ã0, ã1 and ã2 are triangular fuzzy numbers with ỹ = (y^L, y^C, y^R), ã0 = (a0^L, a0^C, a0^R), ã1 = (a1^L, a1^C, a1^R) and ã2 = (a2^L, a2^C, a2^R), and all parameter variables are nonnegative real numbers. Suppose x1i, x2i and ỹi (i = 1, 2, · · · , N) to be real input data and fuzzy output data; we now calculate the estimated values of ã0, ã1 and ã2 in Model (2.5.1). In many papers the distance between two fuzzy numbers is mostly adopted from [Xu98], from which the optimal estimated values are obtained. Here we introduce a new method.
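The parameter-wise arithmetic of Property 2.5.1 and the three distances of Definition 2.5.4 are easy to make concrete. The following Python sketch (our own illustration, not part of the book) encodes a triangular fuzzy number as an (A^L, A^C, A^R) triple.

```python
# Triangular fuzzy numbers as (L, C, R) triples: arithmetic per
# Property 2.5.1 and the left/center/right distances of Definition 2.5.4.

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def scale(k, a):
    # Scalar multiplication reverses the endpoints for negative k.
    return tuple(k * x for x in a) if k >= 0 else tuple(k * x for x in a[::-1])

def d_left(a, b):
    return (a[0] - b[0]) ** 2

def d_center(a, b):
    return (a[1] - b[1]) ** 2

def d_right(a, b):
    return (a[2] - b[2]) ** 2

A, B = (1, 2, 4), (2, 3, 5)
print(add(A, B))     # (3, 5, 9)
print(scale(-2, A))  # (-8, -4, -2), per Property 2.5.1 (3)
print(d_left(A, B), d_center(A, B), d_right(A, B))  # 1 1 1
```

Note how `scale` keeps the triple ordered L ≤ C ≤ R even for negative multipliers, exactly as Property 2.5.1 (3) requires.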
2.5 Linear Regression with Triangular Fuzzy Numbers

59

For fuzzy linear regression problems, the most important thing is to make the error between the observed values and the fitted ones as small as possible. In Model (2.5.1) these data are all triangular fuzzy numbers, i.e.,

(yi^L, yi^C, yi^R) and (a0^L + a1^L x1i + a2^L x2i, a0^C + a1^C x1i + a2^C x2i, a0^R + a1^R x1i + a2^R x2i).

According to the previous analysis, we can consider the corresponding parameter variables of the above: the smaller the errors between observed and fitted values, the smaller the total error. So the fuzzy linear regression problem is transformed into

min dL(ã0 + ã1 x1 + ã2 x2, ỹ) = min Σ_{i=1}^{N} (yi^L − a0^L − a1^L x1i − a2^L x2i)²,
min dC(ã0 + ã1 x1 + ã2 x2, ỹ) = min Σ_{i=1}^{N} (yi^C − a0^C − a1^C x1i − a2^C x2i)²,
min dR(ã0 + ã1 x1 + ã2 x2, ỹ) = min Σ_{i=1}^{N} (yi^R − a0^R − a1^R x1i − a2^R x2i)².

According to the least squares method, we set (with x0i = 1)

∂dL/∂al^L = Σ_{i=1}^{N} (yi^L − a0^L − a1^L x1i − a2^L x2i) xli = 0 (l = 0, 1, 2),

∂dC/∂al^C = Σ_{i=1}^{N} (yi^C − a0^C − a1^C x1i − a2^C x2i) xli = 0 (l = 0, 1, 2),    (2.5.2)

∂dR/∂al^R = Σ_{i=1}^{N} (yi^R − a0^R − a1^R x1i − a2^R x2i) xli = 0 (l = 0, 1, 2).
For the first formula of (2.5.2), we have

N a0^L + (Σ x1i) a1^L + (Σ x2i) a2^L = Σ yi^L,
(Σ x1i) a0^L + (Σ x1i²) a1^L + (Σ x1i x2i) a2^L = Σ x1i yi^L,    (2.5.3)
(Σ x2i) a0^L + (Σ x1i x2i) a1^L + (Σ x2i²) a2^L = Σ x2i yi^L,

with all sums taken over i = 1, · · · , N. When

Δ = | N        Σ x1i       Σ x2i     |
    | Σ x1i    Σ x1i²      Σ x1i x2i |
    | Σ x2i    Σ x1i x2i   Σ x2i²    |  ≠ 0,

by the aid of Cramer's rule we have
a0^L = Δ1/Δ,  a1^L = Δ2/Δ,  a2^L = Δ3/Δ,    (2.5.4)

where Δj (j = 1, 2, 3) is obtained from Δ by replacing the elements of its j-th column with the right-hand terms Σ_{i=1}^{N} yi^L, Σ_{i=1}^{N} x1i yi^L, Σ_{i=1}^{N} x2i yi^L, respectively.
Similarly, considering the second and third formulas of (2.5.2), we get

a0^C = Δ1′/Δ, a1^C = Δ2′/Δ, a2^C = Δ3′/Δ;  a0^R = Δ1″/Δ, a1^R = Δ2″/Δ, a2^R = Δ3″/Δ,    (2.5.5)

where Δj′ and Δj″ are obtained from Δ by replacing the elements of its j-th column (j = 1, 2, 3) with the right-hand terms

Σ_{i=1}^{N} yi^C, Σ_{i=1}^{N} x1i yi^C, Σ_{i=1}^{N} x2i yi^C  and  Σ_{i=1}^{N} yi^R, Σ_{i=1}^{N} x1i yi^R, Σ_{i=1}^{N} x2i yi^R,

respectively.
Thus the estimated values of ã0, ã1 and ã2 are

â0 = (a0^L, a0^C, a0^R),  â1 = (a1^L, a1^C, a1^R),  â2 = (a2^L, a2^C, a2^R).    (2.5.6)
Definition 2.5.5. The parameter variables al^L, al^C and al^R (l = 0, 1, 2) are called optimal estimated parameters of Model (2.5.1) if and only if they satisfy (2.5.2), and the corresponding solutions (2.5.6) are called the optimal estimated values in (2.5.1). So the estimated regression equation is

ŷ = â0 + â1 x1 + â2 x2,   x1, x2 ≥ 0.    (2.5.7)
Example 2.5.1 [Xu98]: The sales of a certain product on the market are shown in Table 2.5.1.

Table 2.5.1. Product Sales in Years

Year (xi)   Amount of sales (ỹi) (Unit: 10^4 pieces)
1987        (228, 230, 231)
1988        (233, 236, 238)
1989        (239, 241, 244)

Try to estimate the amount of sales in 1990.
According to formulas (2.5.4) and (2.5.6), we get a1^L = 222.3, a2^L = 5.5; a1^C = 224.7, a2^C = 5.5; a1^R = 224.7, a2^R = 6.5. Therefore the regression equation is

ŷ = (222.3, 224.7, 224.7) + (5.5, 5.5, 6.5)(x − 1986);

at x = 1990, we have ŷ = (244.3, 246.7, 250.7).
2.5.4 Error Analysis

For Model (2.5.1) we obtain the data (ỹi, x1i, x2i) (i = 1, 2, · · · , N) by observation. Then the fitted and observed values of ỹ are ŷi = (a0^L + a1^L x1i + a2^L x2i, a0^C + a1^C x1i + a2^C x2i, a0^R + a1^R x1i + a2^R x2i) and ỹi = (yi^L, yi^C, yi^R), respectively. We have already obtained the estimated values of every parameter variable; now we analyze the left parameter variables, the others being similar. In fact, for Model (2.5.1), by Definition 2.5.3 we have

y^L = a0^L + a1^L x1 + a2^L x2,    (2.5.8)

where y^L, al^L and xj (l = 0, 1, 2; j = 1, 2) are nonnegative real numbers. Obviously Equation (2.5.8) can be regarded as an ordinary linear regression model, so we can estimate the above values as in the ordinary case. Thus, by the properties of ordinary linear regression, the estimated values a0^L, a1^L and a2^L are unbiased estimates, and the variance is likewise the same as in the ordinary case.

2.5.5 Comparison of Two Distance Formulas

For the fuzzy linear regression (2.5.1), most papers use the following definition of the distance between two fuzzy numbers:

D²(Ã, B̃) = ∫_0^1 f(α) d²(Aα, Bα) dα,    (2.5.9)

where d²(Aα, Bα) = (A̲α − B̲α)² + (Āα − B̄α)², Aα = [A̲α, Āα], Bα = [B̲α, B̄α], and f(α) is a monotonically increasing function on [0, 1] with f(0) = 0 and ∫_0^1 f(α) dα = 1/2.

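For triangular fuzzy numbers the α-cuts are linear in α, so the integral in (2.5.9) can be evaluated numerically. The sketch below (our own illustration, with f(α) = α as used later in the text) approximates D²(Ã, B̃) by the midpoint rule; for Ã = B̃ it returns 0, and the value grows with the separation of the parameter variables.

```python
# Numerical evaluation of the integral distance (2.5.9) for triangular
# fuzzy numbers A = (AL, AC, AR), with weight f(alpha) = alpha.

def alpha_cut(tri, a):
    """Alpha-cut [A_low, A_high] of a triangular fuzzy number."""
    lo, c, hi = tri
    return lo + a * (c - lo), hi + a * (c - hi)

def d2_integral(A, B, steps=10000):
    """Midpoint-rule approximation of (2.5.9) with f(alpha) = alpha."""
    total = 0.0
    h = 1.0 / steps
    for k in range(steps):
        a = (k + 0.5) * h
        al, ah = alpha_cut(A, a)
        bl, bh = alpha_cut(B, a)
        total += a * ((al - bl) ** 2 + (ah - bh) ** 2) * h
    return total

print(d2_integral((228, 230, 231), (228, 230, 231)))  # 0.0
```

For two triangular numbers shifted by one unit, the integrand is 2α and the distance integrates to 1, which the midpoint rule reproduces essentially exactly.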
If we use the distance above together with the differential, integral and least-squares method on Model (2.5.1), and take f(α) = α, we can get [Lin01]

Σ_{i=1}^{n} x1i² a1^C + Σ_{i=1}^{n} x1i² a1^R = Σ_{i=1}^{n} x1i (yi^C + yi^R),    (2.5.10)

Σ_{i=1}^{n} x1i² a1^L + Σ_{i=1}^{n} x1i² a1^C = Σ_{i=1}^{n} x1i (yi^L + yi^C),    (2.5.11)

Σ_{i=1}^{n} x1i² a1^L + 6 Σ_{i=1}^{n} x1i² a1^C + Σ_{i=1}^{n} x1i² a1^R = Σ_{i=1}^{n} x1i (yi^L + 6 yi^C + yi^R).    (2.5.12)
Then, according to Cramer's rule, we can get the values of a1^L, a1^C and a1^R, and the same method yields the others. Comparing (2.5.3) with (2.5.10), (2.5.11) and (2.5.12), it is obvious that, in general, different distances lead to different parameter estimates. Moreover, with the distance (2.5.9) the calculation process is more complex, and the properties of the parameter variables are not the same as in this section. Therefore the method of this section is more direct and, above all, of practical value.

3 Regression and Self-regression Models with Fuzzy Variables

In 1989, based on the theory of Zadeh's fuzzy sets [Zad65a], a self-regression forecast model with T-fuzzy variables was advanced [Cao89b], [Cao89c], [Cao90a], and in 1992 a linearizable non-linear regression model with T-fuzzy variables [Cao95c] was developed. The applications appear vastly extensive because the models carry much richer information.
1) Making use of a fuzzy distance, we follow the classical regression analytical method with a straight-line (or curve) fit.
2) We ascertain the regression model with fuzzy variables under a cone and platform index.
Because fuzzy regression analysis is an interval estimation, this kind of analytical method becomes very useful. This chapter introduces T-fuzzy variables, (·, c) fuzzy variables and flat (or trapezoidal) fuzzy variables into regression models, and builds more practical ways to determine the models. Meanwhile, their applications are discussed.

3.1 Regression Model with T-Fuzzy Variables

3.1.1 Basic Property

For the definition and properties of T-fuzzy numbers, see Ref. [TUA82]. It is easy to prove that this kind of fuzzy numbers are regular and convex fuzzy subsets.

Definition 3.1.1. Let x̃ = (m(x), c1), ỹ = (m(y), c2). Then the distance on T(R), the T-fuzzy number set (R is the real number set), is defined as

d(x̃, ỹ)² = D₂(Supp(x̃), Supp(ỹ))² + (m(x̃) − m(ỹ))²,

where Supp(·) denotes the support interval of (·) and m(·) denotes its modal value.

B.-Y. Cao: Optimal Models & Meth. with Fuzzy Quantities, STUDFUZZ 248, pp. 63–94. © Springer-Verlag Berlin Heidelberg 2010. springerlink.com

Lemma 3.1.1. d(ỹi, ỹj)² = 2 d(ỹi, x̃)² + 2 d(x̃, ỹj)² − 4 d(x̃, (ỹi + ỹj)/2)².
Proof: From the parallelogram rule we get

2(yi − x)² + 2(x − yj)² = [(yi − x) − (x − yj)]² + [(yi − x) + (x − yj)]² = (yi − yj)² + [2x − (yi + yj)]².

In addition, write ỹi = (yi, η̲i, η̄i)_T, ỹj = (yj, η̲j, η̄j)_T, x̃ = (x, ξ̲, ξ̄)_T, and let

F = yi − η̲i − (x − ξ̲),  G = x − ξ̲ − (yj − η̲j),  F̄ = yi + η̄i − (x + ξ̄),  Ḡ = x + ξ̄ − (yj + η̄j).

Applying the parallelogram identity 2(F² + G²) = (F − G)² + (F + G)² to the pair F, G and to the pair F̄, Ḡ gives

2[(yi − η̲i) − (x − ξ̲)]² + 2[(x − ξ̲) − (yj − η̲j)]² + 2[(yi + η̄i) − (x + ξ̄)]² + 2[(x + ξ̄) − (yj + η̄j)]²
= [(yi − η̲i) − (yj − η̲j)]² + [2(x − ξ̲) − (yi − η̲i + yj − η̲j)]² + [(yi + η̄i) − (yj + η̄j)]² + [2(x + ξ̄) − (yi + η̄i + yj + η̄j)]²,

that is,

[(yi − η̲i) − (yj − η̲j)]² + [(yi + η̄i) − (yj + η̄j)]²
= 2[(yi − x) − (η̲i − ξ̲)]² + 2[(x − yj) − (ξ̲ − η̲j)]² + 2[(yi − x) + (η̄i − ξ̄)]² + 2[(x − yj) + (ξ̄ − η̄j)]²
− 4[(x − ξ̲) − (yi + yj − η̲i − η̲j)/2]² − 4[(x + ξ̄) − (yi + yj + η̄i + η̄j)/2]²,

i.e.,

D₂(Supp ỹi, Supp ỹj)² = 2 D₂(Supp ỹi, Supp x̃)² + 2 D₂(Supp x̃, Supp ỹj)² − 4 D₂(Supp x̃, Supp (ỹi + ỹj)/2)².

Combining this with the modal-value identity above proves the lemma.

Theorem 3.1.1. Let V be a closed cone in P(R) (a subspace of T(R)). Then for any x̃ in P(R) there exists a unique T-fuzzy number ỹ0 in V such that for all ỹ in V we have d(x̃, ỹ0) ≤ d(x̃, ỹ), and a necessary and sufficient condition for ỹ0 being the unique minimizing fuzzy number in V is that x̃ is ỹ0-orthogonal to V.

Proof: Sufficiency. Because

d(x̃, ỹ)² = [x − y − (ξ̲ − η̲)]² + [x − y + (ξ̄ − η̄)]² + (x − y)²
= [x − y0 − (ξ̲ − η̲0)]² + [x − y0 + (ξ̄ − η̄0)]² + (x − y0)²
+ [y0 − y − (η̲0 − η̲)]² + [y0 − y + (η̄0 − η̄)]² + (y0 − y)²
+ 2[y0 − y − (η̲0 − η̲)][x − y0 − (ξ̲ − η̲0)] + 2[y0 − y + (η̄0 − η̄)][x − y0 + (ξ̄ − η̄0)] + 2(y0 − y)(x − y0)
≥ d(x̃, ỹ0)² + d(ỹ0, ỹ)²

when x̃ is ỹ0-orthogonal to V, and because d(ỹ0, ỹ)² > 0 for ỹ ≠ ỹ0, d(x̃, ỹ)² > d(x̃, ỹ0)² holds true.

Necessity. Suppose that for some ỹ in V and some λ ∈ (0, 1) we have

[y0 − y − (η̲0 − η̲)][x − y0 − (ξ̲ − η̲0)] + [y0 − y + (η̄0 − η̄)][x − y0 + (ξ̄ − η̄0)] + (y0 − y)(x − y0) = −λ.

Suppose d(ỹ, ỹ0) = 1 and, without loss of generality, consider ỹ1 = (1 − λ)ỹ0 + λỹ, which lies in V by convexity. Then

d(x̃, ỹ1)² = d(x̃, ỹ0)² + λ² d(ỹ, ỹ0)² + λ[2(y0 − y − (η̲0 − η̲))(x − y0 − (ξ̲ − η̲0)) + 2(y0 − y + (η̄0 − η̄))(x − y0 + (ξ̄ − η̄0)) + 2(y0 − y)(x − y0)] = d(x̃, ỹ0)² + λ² − 2λ² = d(x̃, ỹ0)² − λ²,

hence ỹ0 is not a minimum element in V, a contradiction. Therefore the necessary and sufficient condition of ỹ0-orthogonality, together with uniqueness, is certified.

We finally prove that ỹ0 exists. If x̃ ∈ V, existence is obvious. If x̃ ∉ V, define δ = inf{d(x̃, ỹ) | ỹ ∈ V}. Let {ỹi} be a fuzzy sequence in V such that d(x̃, ỹi) → δ. From the equality

d(ỹi, ỹj)² = 2 d(ỹi, x̃)² + 2 d(x̃, ỹj)² − 4 d(x̃, (ỹi + ỹj)/2)²

and the convexity of the cone V, (ỹi + ỹj)/2 lies in V for all i, j, hence d(x̃, (ỹi + ỹj)/2) ≥ δ, so that

d(ỹi, ỹj)² ≤ 2 d(ỹi, x̃)² + 2 d(x̃, ỹj)² − 4δ²,

and when i, j → ∞, d(ỹi, ỹj) → 0, i.e., {ỹi} is a Cauchy sequence. Again, because (T(R), d) is complete and V is closed, ỹ0 = lim ỹi lies in V.

Corollary 3.1.1. Let N be a positive integer. If V is a closed cone in P(R)^N, the metric in P(R)^N is represented by dN, which is defined
as

dN(x̃, ỹ)² = Σ_{i=1}^{N} d(x̃i, ỹi)²,

where x̃i, ỹi ∈ P(R) (i = 1, 2, · · · , N) are the components of the N-dimensional fuzzy vectors x̃, ỹ ∈ P(R)^N. Then for arbitrary x̃ in P(R)^N there exists a unique vector ỹ0 in V such that

dN(x̃, ỹ0)² ≤ dN(x̃, ỹ)²

holds for all ỹ in V.

3.1.2 Regression Model with T-Fuzzy Variables

Consider y = β0 + β1 x1 + · · · + βn xn + ε; we call it a regression model, where ε and βp (p = 0, 1, · · · , n) are ordinary real numbers, and xp (p = 1, · · · , n) and y are ordinary real variables.

Definition 3.1.2. If

ỹ = β0 + β1 x̃1 + · · · + βn x̃n + ε,    (3.1.1)

where x̃p (p = 1, 2, · · · , n) is a T-fuzzy variable, ỹ is a T-fuzzy function variable, E is the n-vector represented by e = (1, 0, 0), β0, β1, · · · , βn ∈ R and ε is an error, then we call (3.1.1) a regression model with T-fuzzy variables. The concept of a T-fuzzy variable is given in Section 1.7 of Chapter 1.

Definition 3.1.3. Assume that P(R) is the subspace of T(R) consisting of all elements with non-negative support, i.e., of each (x, ξ̲, ξ̄) ∈ T(R) with x − ξ̲ ≥ 0. Then P(R) is a cone in T(R) and also a closed convex subset of T(R) with respect to the topology induced by d. Here

d(x̃, ỹ)² = ([x − y − (ξ̲ − η̲)]² + [x − y + (ξ̄ − η̄)]² + (x − y)²) / 3,

for x̃, ỹ ∈ P(R), and componentwise for x̃, ỹ ∈ P(R)^N with x̃i, ỹi ∈ P(R).
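The metric in Definition 3.1.3 is straightforward to compute. Below is a small Python helper (our own illustration; a T-fuzzy number is represented as a tuple (m, lower_spread, upper_spread)) implementing d(x̃, ỹ)² as defined above.

```python
# Squared distance between two T-fuzzy numbers per Definition 3.1.3.
# A T-fuzzy number is modeled as (m, lo, hi): modal value m,
# lower spread lo and upper spread hi.

def d_squared(xt, yt):
    x, xlo, xhi = xt
    y, ylo, yhi = yt
    left = (x - y - (xlo - ylo)) ** 2   # distance of left endpoints
    right = (x - y + (xhi - yhi)) ** 2  # distance of right endpoints
    modal = (x - y) ** 2                # distance of modal values
    return (left + right + modal) / 3

# Identical fuzzy numbers are at distance zero.
print(d_squared((8.5, 0.02, 0.01), (8.5, 0.02, 0.01)))  # 0.0
```

Note that for crisp numbers (zero spreads) the formula collapses to the ordinary squared distance (x − y)².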

Assume that the test data sets x̃1i, x̃2i, · · · , x̃ni and ỹi are given by a linear regression equation

ỹi = β0 + β1 x̃1i + · · · + βn x̃ni,    (3.1.2)

with x̃pi = (xpi, ξ̲pi, ξ̄pi) (p = 0, 1, · · · , n; i = 1, · · · , N) a fuzzy independent variable, and ỹi = (yi, η̲i, η̄i) an affine function from P(R)^N to T(R). If again

(M)    r(β0, β) = Σ_{i=1}^{N} d(β0 + β1 x̃1i + · · · + βn x̃ni; ỹi)²,

then βp (p = 0, 1, · · · , n) is determined by applying the least squares method; it is a pity that the resulting βp are all T-fuzzy numbers rather than real numbers,

so that the classical least squares method cannot be applied directly, and a conversion should be made. For this reason we first introduce the following definitions and properties.

Different expressions arise for r(β0, β) according as the βp are positive or negative, because βp x̃p = (βp xp, βp ξ̲p, βp ξ̄p) if βp ≥ 0 and βp x̃p = (βp xp, βp ξ̄p, βp ξ̲p) when βp < 0. So if a negative β appears in (M), "mixed" upper and lower spreads occur in each summand, as can easily be seen from the above form. Consequently, in order to derive analogues of the normal equations, it is necessary to specify certain cones in which to seek the minimizing solution to (M). We therefore define the following.

Definition 3.1.4. Assume that x̃i = (x̃1i, x̃2i, · · · , x̃ni) (i = 1, 2, · · · , N). Partition the set of natural numbers {1, 2, · · · , n} into two exhaustive, mutually exclusive subsets J(−), J(+), one of which may be empty. To each such partition associate a binary multi-index J = (j1, j2, · · · , jn) defined by

jp = 0 if p ∈ J(+),  jp = 1 if p ∈ J(−).

Denote by C(J) the cone in T(R)^n

C(J) = {β0 + β1 x̃1 + · · · + βn x̃n | βp ≥ 0 if jp = 0; βp < 0 if jp = 1}.

We call J a cone index, and C(J) the cone determined by it.

Proposition 3.1.1. For a given cone index J, the problem of minimizing in the cone (M(J))

r(β0(J), β(J)) = Σ_{i=1}^{N} d(β0 + β1 x̃1i + · · · + βn x̃ni, ỹi)²    (3.1.3)

has a unique parameter solution β0(J), β1(J), · · · , βn(J).

Definition 3.1.5. Assume the fuzzy data to be x̃1i, x̃2i, · · · , x̃ni; ỹi, and call S(J) the system consisting of the n + 1 equations

∂r(β0(J), β(J))/∂βp = 0 (p = 0, 1, · · · , n),    (3.1.4)

written as

| N           Σ x1i(J)         · · ·   Σ xni(J)         |   | β0(J) |   | Σ yi(J)         |
| Σ x1i(J)    Σ x1i²(J)        · · ·   Σ x1i(J) xni(J)  | · | β1(J) | = | Σ x1i(J) yi(J)  |
| · · ·       · · ·            · · ·   · · ·            |   |  · · · |   | · · ·          |
| Σ xni(J)    Σ xni(J) x1i(J)  · · ·   Σ xni²(J)        |   | βn(J) |   | Σ xni(J) yi(J)  |

with all sums taken over i = 1, · · · , N.

If S(J) has a solution β0(J), β1(J), · · · , βn(J) such that βp > 0 when jp = 0 and βp < 0 when jp = 1, then we call (3.1.3) J-compatible with the data. If the unconstrained minimization of S(J) is compatible with Ỹi = β0 E + β1 x̃1i + · · · + βn x̃ni lying in C(J), then the model is called compatible.

Theorem 3.1.2. Let the data x̃1i, x̃2i, · · · , x̃ni; ỹi (i = 1, 2, · · · , N) satisfy Equation (3.1.2). For every cone index J there exists a unique solution β0(J), β1(J), · · · , βn(J) of system (3.1.4).

Proof: Catalogue {x̃pi} by subscript. For i = 1, 2, · · · , N, set wi = yi and, for each p, zpi = xpi. For i = N + 1, · · · , 2N, set wi = yi − η̲i and, for each p,

zpi = xpi − ξ̲pi if jp = 0,  zpi = xpi + ξ̄pi if jp = 1.

For i = 2N + 1, · · · , 3N, set wi = yi + η̄i and, for each p,

zpi = xpi + ξ̄pi if jp = 0,  zpi = xpi − ξ̲pi if jp = 1.

Then it is not difficult to see that S(J) is the same system as the crisp normal equations for the least-squares fitting model

w = β0 + β1 z1 + · · · + βn zn    (3.1.5)

to the data wi, z1i, z2i, · · · , zni. By the classical least squares method we easily find the unique optimal solution βp (p = 0, 1, · · · , n) of (3.1.5) with respect to a cone index J.

3.1.3 Regression Model with T-Fuzzy Data

We call (3.1.1) a regression model with T-fuzzy parameters. According to the theory above, the modeling steps for Model (3.1.1) can be summarized as follows:

1° Work out a sequence table from the observation data and classify the data by Definition 3.1.4.

2° Change the observation data x̃pi and the dependent variable ỹi into nonfuzzy form. The fuzzy data are thus changed into ordinary data, and (3.1.1) is changed into a classical linear regression model (3.1.5).
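The defuzzification used in the proof of Theorem 3.1.2 can be sketched directly. The following Python fragment is our own illustration (names are hypothetical, and the endpoint-switching convention is our reading of the garbled source): it expands N T-fuzzy observations into the 3N crisp pairs (z_i, w_i) for a given cone index, after which any ordinary least-squares routine applies.

```python
# Expand T-fuzzy data (x, lo, hi) into 3N crisp rows per Theorem 3.1.2.
# j[p] is the cone-index bit for predictor p (0: beta_p >= 0, 1: beta_p < 0).

def defuzzify(xs, ys, j):
    """xs: list of rows, each a list of (x, lo, hi) per predictor.
    ys: list of (y, lo, hi). Returns crisp rows (z_row, w)."""
    rows = []
    # Block 1: modal values.
    for xrow, (y, _, _) in zip(xs, ys):
        rows.append(([x for x, _, _ in xrow], y))
    # Block 2: left endpoints (switched where the cone-index bit is 1).
    for xrow, (y, ylo, _) in zip(xs, ys):
        z = [x - lo if jp == 0 else x + hi
             for (x, lo, hi), jp in zip(xrow, j)]
        rows.append((z, y - ylo))
    # Block 3: right endpoints (switched the opposite way).
    for xrow, (y, _, yhi) in zip(xs, ys):
        z = [x + hi if jp == 0 else x - lo
             for (x, lo, hi), jp in zip(xrow, j)]
        rows.append((z, y + yhi))
    return rows

rows = defuzzify([[(2.0, 0.5, 0.5)]], [(3.0, 0.2, 0.4)], j=[0])
print(rows)  # [([2.0], 3.0), ([1.5], 2.8), ([2.5], 3.4)]
```

Each fuzzy observation thus contributes three crisp rows: its modal values, its left endpoints, and its right endpoints.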

3° From Theorem 3.1.2 the model has a unique solution β0, β1, · · · , βn; substituting it into (3.1.5), the fit can be tested by a classical determination method. Calculate rp (p = 1, · · · , n) and s:

rp = [N Σ zpi wi − Σ zpi Σ wi] / √{[N Σ zpi² − (Σ zpi)²][N Σ wi² − (Σ wi)²]},    (3.1.6)

s = √{[Σ wi² − (Σ wi)²/N − β̂1 (Σ zpi wi − (Σ zpi)(Σ wi)/N)] / (N − 2)},

with all sums taken over i = 1, · · · , N.

4° Decision. If |rp| > r0.05, then the test goes through.

5° A forecast model is obtained as follows:

w̲ = β̂0 − 2s + β̂1 z,  w̄ = β̂0 + 2s + β̂1 z.

Example 3.1.1: The petroleum needed by a western developed country during 1965–1981 is arranged as follows.

Table 3.1.1. Needed Arrangement of Petroleum in Developed Country

Years         1965                1967                1969
Demand (ktoe) (8.05, 0.02, 0.03)  (8.28, 0.02, 0.02)  (8.5, 0.02, 0.01)
Years         1971                1973                1975
Demand (ktoe) (8.7, 0.01, 0.03)   (8.94, 0.05, 0.03)  (9, 0, 0.01)
Years         1977                1979                1981
Demand (ktoe) (9.04, 0.01, 0.02)  (9.18, 0.02, 0.03)  (9.28, 0.03, 0.04)

Try to forecast the country's petroleum demand in 1998.

From the data in Table 3.1.1 we know that each datum represents a cone, its figure constructed by its top and a linear distribution; therefore, applying the method above, we obtain the following:

1° Divide the T-fuzzy data annually into two sets: one is {65, 69, 73, 75, 81}, denoted J(−), and the other is {67, 71, 77, 79}, denoted J(+).

2° Nonfuzzify. Classify the data into three parts: one part is (8.5, 0.02, 0.01), (9, 0, 0.01), (9.28, 0.03, 0.04), corresponding to the years {69, 75, 81}.

Another part is (8.28, 0.01, 0.02), (8.94, 0.05, 0.03), (9.18, 0.02, 0.03), corresponding to the years {67, 73, 79}; and the last part is (8.05, 0.02, 0.03), (8.7, 0.01, 0.03), (9.04, 0.01, 0.02), corresponding to the years {65, 71, 77}. By the expression for zpi, the needed petroleum in Table 3.1.1 can be turned into

Table 3.1.2. Needed Petroleum Crisp Value

Years         1965  1967  1969  1971  1973  1975  1977  1979  1981
Demand (ktoe) 8.03  8.27  8.5   8.73  8.97  9     9.06  9.16  9.28

3° List the table

Table 3.1.3. Unary Regression Simplified Table

t    0      1      2      3      4      5      6      7      8      Σ = 36
w    8.03   8.27   8.5    8.73   8.97   9      9.06   9.16   9.28   Σ = 79
t²   0      1      4      9      16     25     36     49     64     Σ = 204
w²   64.48  68.39  72.25  76.21  80.46  81     82.08  83.9   86.1   Σ = 694.87
tw   0      8.27   17     26.19  35.88  45     54.36  63.42  74.24  Σ = 324.36

Then t̄ = 36/9 = 4, t̄² = 16, w̄ = 79/9 ≈ 8.778, and we estimate the parameters β̂0, β̂1:

β̂1 = (Σ ti wi − n t̄ w̄) / (Σ ti² − n t̄²) ≈ 0.1392,  β̂0 = w̄ − β̂1 t̄ ≈ 8.2212.

Substituting them into (3.1.5), we get ŵ = 8.2212 + 0.1392t.

4° Test. From (3.1.6) we calculate r = 1.652; at r0.05 = 0.666 we have r > r0.05, so the test goes through.
Again s = √[(1.426 − 0.1392 × 8.352)/7] ≈ 0.194, so

w̲ = β̂01 + 0.1392t = 7.8332 + 0.1392t,  w̄ = β̂02 + 0.1392t = 8.6092 + 0.1392t.

5° Forecast. With 1998 corresponding to t = 16.5,

w̲1998 = 7.8332 + 0.1392 × 16.5 = 10.13,  w̄1998 = 8.6092 + 0.1392 × 16.5 = 10.906,

such that

ỹ = ((w̲1998 + w̄1998)/2, 0.382 × (w̄1998 − w̲1998)/2, 0.618 × (w̄1998 − w̲1998)/2) = (10.518, 0.1482, 0.2398),

i.e., the petroleum needed by the country in 1998 is a bit more than 10.518 ktoe, which tallies with practice.

3.2 Self-regression Model with T-Fuzzy Variables

If we modify (3.1.1) for Yt = β0 + β1 Yt−1 + · · · + βn Yt−n + εt ,

(3.2.1)

then we call (3.2.1) an n-order self-regression model with T-fuzzy variables, where β0, β1, · · · , βn are parameters awaiting evaluation, Ỹt is a fuzzy correlated variable, Ỹt−p = (Yt−p, η̲t−p, η̄t−p) (p = 1, · · · , n) is an independent variable moved backward by p periods, and ε is an error.

Theorem 3.2.1. Assume that the data set Ỹ(t−1)i, · · · , Ỹ(t−n)i and Ỹti is given by the model Ỹti = β0 + β1 Ỹ(t−1)i + · · · + βn Ỹ(t−n)i (i = 1, · · · , 3N). Then the system

∂r[β0(J), β(J)]/∂βp = 0 (p = 0, · · · , n)

has a unique solution β0(J), β1(J), · · · , βn(J) for all cone indices.

Proof: Similar to the proof of Theorem 3.1.2, the formula corresponding to (3.1.5) is Zt = β0 + β1 Zt−1 + β2 Zt−2 + · · · + βn Zt−n, and then

r(β0, β) = Σ_{i=1}^{3N} d[β0 + β1 Z(t−1)i + · · · + βn Z(t−n)i; Zti]².    (3.2.2)

The normal equations S(J) simplify into the classical form

Σ Zti = N β0(J) + (Σ Z(t−1)i) β1(J) + · · · + (Σ Z(t−n)i) βn(J),
(Σ Z(t−1)i) β0(J) + (Σ Z(t−1)i²) β1(J) + · · · + (Σ Z(t−1)i Z(t−n)i) βn(J) = Σ Zti Z(t−1)i,
· · ·
(Σ Z(t−n)i) β0(J) + (Σ Z(t−1)i Z(t−n)i) β1(J) + · · · + (Σ Z(t−n)i²) βn(J) = Σ Zti Z(t−n)i,

with all sums taken over i = 1, · · · , N.

These equations have a unique solution βp (p = 0, 1, · · · , n), and the theorem is certified.
Hereby, the modeling steps for Model (3.2.1) can be summarized as follows.
1° Design a self-dependent sequence table from the tested data Ỹ(t−p)i = (Y(t−p)i, η̲(t−p)i, η̄(t−p)i) and classify the data in the table by means of Definition 3.1.4.

Table 3.2.1. Self-related Sequence Table Q

I II III IV I II III IV

1983 sale

Sequence

Yt

move

backward

Yt

Yt−1

Yt−2

···

Yt−p

(y(t−p)1 , η (t−p) , η (t−p)1 ) 1 yt−p,1 · · · (y(t−p)2 , η (t−p) , η (t−p)2 ) 2 yt−p,2 (y(t−2)1 , η (t−2) , η (t−2)1 ) · · · (y(t−p)3 , η (t−p) , η (t−p)3 ) 1 3 y(t−2) y(t−p) 1 3 (y(t−1)1 , η (t−1) , η (t−1)1 ) (y(t−2)2 , η (t−2) , η (t−2)2 ) · · · (y(t−p)4 , η (t−p) , η (t−p)4 ) 1 2 4 y(t−1)1 y(t−2)2 y(t−p)4 (yt1 , η t , η t1 ) (y(t−1)2 , η (t−1) , η (t−1)2 ) (y(t−2)3 , η (t−2) , η (t−2)3 ) · · · 1 2 3 yt 1 y(t−1) y(t−2) 2 3 (yt2 , η t , η t2 ) (y(t−1)3 , η (t−1) , η (t−1)3 ) (y(t−2)4 , η (t−2) , η (t−2)4 ) 2 3 4 yt2 y(t−1)3 y(t−2)4 (yt3 , η t , η t3 ) (y(t−1)4 , η (t−1) , η (t−1)4 ) 3 4 yt 3 y(t−1) 4 (yt4 , η t , η t4 ) 4 yt4

Q—Quarter.

2° Change the fuzzy Ỹ(t−p)i and the dependent variable Ỹti into nonfuzzy form as in the proof of Theorem 3.1.2.

3° Calculate the self-dependent (self-related) coefficients

γp = [N Σ Z(t−p)i Zti − Σ Z(t−p)i Σ Zti] / √{[N Σ Z(t−p)i² − (Σ Z(t−p)i)²][N Σ Zti² − (Σ Zti)²]},    (3.2.3)

with all sums over i = 1, · · · , N. Calculate the quarterly self-related coefficients by moving backwards p = 1, · · · , n quarters and, taking γK = max{|γp| | p = 1, · · · , n}, it is proper to set up the model on the benchmark time series Zt moved backwards K quarters.

4° βp(J) (p = 0, 1, · · · , n) is determined by S(J) and planted into (3.2.2). Let

IC = √{(1/K) Σ_{i=1}^{K} (Ẑti − Zti)²} / [√{(1/K) Σ_{i=1}^{K} Ẑti²} + √{(1/K) Σ_{i=1}^{K} Zti²}].    (3.2.4)

Then 0 ≤ IC ≤ 1; the forecast is effective when IC → 0, with Ẑti → Zti the perfect case, while at IC = 1 the forecast is least accurate. Therefore, when IC is a small positive number, the fuzzy self-regression forecast model determined by βp(J) (p = 0, 1, · · · , n) can be used in an actual forecast.

Example 3.2.1: The candy sale quantity of 1980–1983 in a certain place is shown below.

Table 3.2.2. Candy Sale of 1980–1983 in a Certain Place (10000/unit)

Quarters  1980            1981            1982            1983
I         (23, 0.1, 0)    (25, 0.1, 0.1)  (26, 0.1, 0.2)  (27, 0.2, 0.1)
II        (11, 0.6, 0.8)  (11, 0.8, 0.5)  (12, 0.3, 1)    (13, 1.2, 0.3)
III       (11, 0.9, 0.4)  (12, 1, 0.5)    (14, 0.7, 0.9)  (15, 1.1, 0.8)
IV        (15, 0.8, 1)    (16, 0.6, 0.3)  (18, 0.1, 0.4)  (20, 0.4, 1)

Try to forecast the candy sale quantity of 1984, Quarters I and II.

1. Choose a 1-order fuzzy self-regression model, and list the table according to the data in Table 3.2.2.
Note. The ordinary real data under the entries are obtained by taking the main value of the fuzzy numbers in the first quarter; at odd numbers Z(t−p)i = Y(t−p)i + η̄(t−p)i is taken and at even numbers Z(t−p)i = Y(t−p)i − η̲(t−p)i is taken on two diagonals of the former; let Z(t−p)i = Y(t−p)i − η̲(t−p)i at odd numbers while Z(t−p)i = Y(t−p)i + η̄(t−p)i at even numbers on the two diagonals of the latter.
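Steps 3° and 4° can be sketched as follows. This Python fragment (our own illustration with a made-up period-4 toy series, not the table above) computes the lag-p coefficient γp of (3.2.3), picks the lag with the largest |γp|, and evaluates the inequality coefficient IC of (3.2.4) for a fitted series.

```python
import math

# Lag-p self-dependent coefficient gamma_p per (3.2.3): the ordinary
# correlation between the series and its p-quarter backward shift.
def gamma(z, p):
    zt, zlag = z[p:], z[:-p]
    n = len(zt)
    num = n * sum(a * b for a, b in zip(zlag, zt)) - sum(zlag) * sum(zt)
    den = math.sqrt((n * sum(a * a for a in zlag) - sum(zlag) ** 2) *
                    (n * sum(b * b for b in zt) - sum(zt) ** 2))
    return num / den

# Inequality coefficient IC per (3.2.4): 0 is a perfect forecast, 1 the worst.
def ic(fitted, actual):
    k = len(actual)
    num = math.sqrt(sum((f - a) ** 2 for f, a in zip(fitted, actual)) / k)
    den = (math.sqrt(sum(f * f for f in fitted) / k) +
           math.sqrt(sum(a * a for a in actual) / k))
    return num / den

# A period-4 toy series: lag 4 should dominate.
z = [23, 11, 11, 15, 25, 11, 12, 16, 26, 12, 14, 18]
best = max(range(1, 5), key=lambda p: abs(gamma(z, p)))
print(best)  # 4

# The fitted/actual 1983 values from the book's testification give IC near 0.022.
print(round(ic([27.6591, 12.3939, 14.1019, 19.5461], [27, 11.8, 13.9, 21]), 3))
```

On the book's 1983 fitted and checked values this IC evaluates to about 0.022, agreeing with the testification in Section 3.2.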

Table 3.2.3. Self-related Sequence Table

[The table arranges the 1980–1983 quarterly sales Ỹt together with the backward-moved sequences Ỹt−1, · · · , Ỹt−12; each entry is a fuzzy triple from Table 3.2.2 with its nonfuzzified crisp value beneath it, e.g. (16, 0.6, 0.3) → 15.4, (27, 0.2, 0.1) → 27, (13, 1.2, 0.3) → 11.8, (20, 0.4, 1) → 21.]

2. By means of (3.2.3), with N = 4,

γp = [4 Σ Z(t−p)i Zti − Σ Z(t−p)i Σ Zti] / √{[4 Σ Z(t−p)i² − (Σ Z(t−p)i)²][4 Σ Zti² − (Σ Zti)²]}  (p = 1, · · · , 12),

with all sums over i = 1, · · · , 4, the self-related coefficients calculated are

R = {γ1, γ2, · · · , γ12} = {−0.378, −0.618, −0.088, −0.990, −0.506, −0.601, −0.015, −0.980, −0.496, −0.595, −0.274, −0.802},

then γ4 = max |R| = 0.990. Therefore the sequence is moved 4 quarters backwards:

Zt = β0 + β1 Zt−4.

Through the normal equations S(J), we can get

β0(J) = [Σ Zti Σ Z(t−4)i² − Σ Z(t−4)i Σ Zti Z(t−4)i] / [4 Σ Z(t−4)i² − (Σ Z(t−4)i)²] ≈ −0.059,

β1(J) = [4 Σ Zti Z(t−4)i − Σ Z(t−4)i Σ Zti] / [4 Σ Z(t−4)i² − (Σ Z(t−4)i)²] ≈ 1.0675,

with all sums taken over i = 1, · · · , 4.

Therefore Zˆti = −0.059 + 1.0675Z(t−4)i .

(3.2.5)

3. Testification. The nonfuzzified sale data of 1983, checked in Table 3.2.2, are Zti = {27, 11.8, 13.9, 21}; substituted into (3.2.5), we obtain Ẑti = {27.6591, 12.3939, 14.1019, 19.5461}, and from (3.2.4)

IC = √{(1/4) Σ_{i=1}^{4} (Ẑti − Zti)²} / [√{(1/4) Σ_{i=1}^{4} Ẑti²} + √{(1/4) Σ_{i=1}^{4} Zti²}] ≈ 0.022,

so the forecast is very accurate. Therefore

Ỹt = β0(J) + β1(J) Ỹt−4 = −0.059E + 1.0675 Ỹt−4

can be used to forecast the sales in Quarters I and II of 1984, that is

ỸI = (28.7266, 0.2135, 0.10675),  ỸII = (13.7816, 1.281, 0.03203).

3.3 Regression Model with (·, c) Fuzzy Variables

3.3.1 Determination of the Model with (·, c) Fuzzy Variables

Consider y˜ = β0 E + β1 x ˜1 + · · · + βn x˜n + ε,

(3.3.1)

where x̃p (p = 1, 2, · · · , n) is a (·, c) fuzzy variable, ỹ is a (·, c) fuzzy function variable, E is the n-vector represented by E = (1, 0), β0, β1, · · · , βn ∈ R and ε is an error.

Definition 3.3.1. We call (3.3.1) a regression model with (·, c) fuzzy variables. If x̃ = (x, c1) and ỹ = (y, c2), then their metric d on T(R) is defined by

d(x̃, ỹ)² = ((x − y − (c1 − c2))² + (x − y + (c1 − c2))² + (x − y)²) / 3.

A certainty path for Model (3.3.1) is researched as follows.

Definition 3.3.2. Suppose that x̃ = (x, c) ∈ P(R) with x ≥ c for each x̃. Then P(R) is a cone in T(R), and a closed convex subset of T(R) with respect to the topology induced by the distance d.

Suppose the test data for Model (3.3.1) to be x̃1i, x̃2i, · · · , x̃ni; ỹi, with βp (p = 1, 2, · · · , n) an ordinary real number, x̃pi a (·, c) fuzzy variable, and ỹi a (·, c) affine function from P(R)^N to T(R), where x̃pi = (xpi, cpi), ỹi = (yi, ci) (i = 1, 2, · · · , N; p = 1, 2, · · · , n). Let

(M)    r(β0, β) = Σ_{i=1}^{N} d(β0 + β1 x̃1i + · · · + βn x̃ni, ỹi)².

Then β (where β = (β1, β2, · · · , βn)) determined by applying the least squares method is a (·, c) fuzzy number rather than a real number. Similarly to the method of Section 3.1, we first introduce the following definitions and properties.

Definition 3.3.3. Suppose x̃i = (x̃1i, x̃2i, · · · , x̃ni) (i = 1, 2, · · · , N). Partition the set of natural numbers {1, 2, · · · , n} into two exhaustive, mutually exclusive subsets J(−), J(+), one of which may be empty, and associate with this division a binary multi-index J = (j1, j2, · · · , jn) defined by jp = 0 if p ∈ J(+) and jp = 1 if p ∈ J(−). In particular, we write J0 = (0, 0, · · · , 0), J1 = (1, 1, · · · , 1).

Definition 3.3.4. Use

C(J) = {β0 E + β1 x̃i1 + · · · + βn x̃in | βp ≥ 0 if jp = 0; βp < 0 if jp = 1}

to represent a cone in T(R)^N, and we call it the cone determined by the cone index J.

Proposition 3.3.1. For a given cone index J, the minimization model

r(β0(J), β(J)) = Σ_{i=1}^{N} d(β0 E + β1 x̃1i + · · · + βn x̃ni, ỹi)²    (3.3.2)

has a unique parameter solution β0(J), β(J) in the cone C(J), where β(J) = (β1(J), β2(J), · · · , βn(J)).

Definition 3.3.5. Suppose the fuzzy data to be x̃1i, x̃2i, · · · , x̃ni; ỹi, and call S(J) the system consisting of the n + 1 equations

∂r(β0(J), β(J))/∂βp = 0 (p = 0, 1, · · · , n).

If S(J) has a solution β0(J), β1(J), · · · , βn(J) such that βp > 0 when jp = 0 and βp < 0 when jp = 1, then we call (3.3.2) J-compatible with the data. A model is J-compatible if the formal equations S(J) for unconstrained minimization are compatible with β0 E + β1 x̃1 + · · · + βn x̃n lying in C(J).

Theorem 3.3.1. Let the data set x̃1i, x̃2i, · · · , x̃ni; ỹi (i = 1, 2, · · · , N) satisfy Equation (3.3.2). For every cone index J there exists a unique solution β0(J), β1(J), · · · , βn(J) of system S(J).

Proof: Catalogue {x̃pi} by subscript: i = 1, 2, · · · , N is one type; i = N + 1, · · · , 3N the other. Hence, when i = 1, 2, · · · , N, wi = yi and, for each p, zpi = xpi. When i = N + 1, · · · , 2N, we have wi = yi − ci and, for each p, zpi = xpi − cpi if jp = 0, zpi = xpi + cpi if jp = 1. When i = 2N + 1, · · · , 3N, we have wi = yi + ci and, for each p, zpi = xpi + cpi if jp = 0, zpi = xpi − cpi if jp = 1. From here we get a classical regression model with cone index J corresponding to (3.3.1), suited to the data wi, zpi (i = 1, 2, · · · , 3N). We mark it

w = β0 + β1 z1 + · · · + βn zn.

(3.3.3)

By using the classical least squares method, it is easy to find a unique optimal solution βp (p = 0, 1, · · · , n) of (3.3.3) with respect to a cone index J. Accordingly, it is of practical value to approach Model (3.3.1) through the crisp model (3.3.3).

3.3.3 Obtaining (·, c) Fuzzy Data

Real data are mostly both random and fuzzy. So-called "precise" data are almost always approximations of a true value. By using fuzzy data we can clearly retain more information about the objects. It is therefore important to obtain fuzzy data, usually by the following methods.


A. Direct obtainment. Record experimental or measurement data as fuzzy numbers according to their character.
B. Fitting. Fit the collected fuzzy data to a distribution function built from known fuzzy numbers; the closest one is what we seek.
C. Assignment of information.
D. Structural methods, etc.

Below, only the construction of (·, c) fuzzy numbers is introduced. Historical data are fuzzy, but for a variety of reasons what we record is a group of real numbers x1, x2, · · ·, xN. A (·, c) fuzzy number can be constructed from this group of "accurate" numbers, and fuzzy time series analysis then applied. The steps are as follows.

1° Suppose that the data at period t are influenced by one (or two) neighboring data points before and after. Let

Mt = max{x_{t−1}, xt, x_{t+1}},  mt = min{x_{t−1}, xt, x_{t+1}}  (so Mt ≥ mt) at t = 2, 3, · · ·, N − 1, and

Mt = max{x1, x2}, mt = min{x1, x2} at t = 1;  Mt = max{x_{N−1}, xN}, mt = min{x_{N−1}, xN} at t = N.

2° Let

μ_{ỹt}(x) = 1 − (1/ct)|x − αt| if x ∈ [mt, Mt];  0 if x ∉ [mt, Mt],

where

αt = (Mt + mt)/2,  ct = (Mt − mt)/2,  t = 1, 2, · · ·, N.

3° ỹt = (αt, ct), composed of αt and ct, is a (·, c) fuzzy number.

In the steps above, ct may also be a fixed positive number. In applications, for any chosen t within the interval [1, N], we select the ct value corresponding to t according to the practical situation. With the method discussed in this section, we can design a series of systems with (·, c) fuzzy variables, such as computer fault diagnosis, future forecasting and pattern identification.
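As a rough illustration, steps 1°–3° can be sketched in Python; the function name and the one-neighbour window are illustrative assumptions, not part of the original text:

```python
# Sketch of steps 1°-3°: build (alpha_t, c_t) fuzzy numbers from a crisp
# series x_1..x_N, assuming each point is influenced by one neighbour on
# each side (edge points use the two available values).
def cc_fuzzify(x):
    n = len(x)
    out = []
    for t in range(n):
        window = x[max(t - 1, 0):min(t + 1, n - 1) + 1]
        M, m = max(window), min(window)
        out.append(((M + m) / 2.0, (M - m) / 2.0))  # (main value, spread)
    return out
```

For instance, `cc_fuzzify([3, 5, 4, 8])` gives (4.0, 1.0) for the second point, since its window is {3, 5, 4}.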

3.4 Self-regression with (·, c) Fuzzy Variables

Consider

Yt = A0 E + A1 Y_{t−1} + · · · + An Y_{t−n} + εt    (3.4.1)

and

ỹt = ft(ỹ_{t−1}, ỹ_{t−2}, · · ·, ỹ_{t−n}) + εt,    (3.4.2)

where the data Y_{t−p}, ỹ_{t−p} (p = 1, · · ·, n) and the dependent sequences Yt, ỹt are all (·, c) fuzzy data; ft is a fuzzy linear function and a fuzzy nonlinear function to be linearized, respectively, and εt is an error. We call (3.4.1) a linear self-regression model with (·, c) fuzzy variables and (3.4.2) a nonlinear self-regression model with (·, c) fuzzy variables.

3.4.1 Linear Model

For the linear self-regression model (3.4.1) with (·, c) fuzzy variables, we discuss its determination.

Definition 3.4.1. Let P(R) be the subspace of T(R) consisting of all non-negative elements, i.e., for each (Y_{t−p}, η_{t−p}) ∈ P(R), Y_{t−p} − η_{t−p} ≥ 0. Then P(R) is a cone of T(R), a closed convex subset with respect to the topology induced by d. When Y_{t−p} = (Y_{t−p}, η_{t−p}) and Yt = (Yt, ηt),

d(Y_{t−p}, Yt)² = {[Y_{t−p} − Yt − (η_{t−p} − ηt)]² + [Y_{t−p} − Yt + (η_{t−p} − ηt)]² + (Y_{t−p} − Yt)²}/3,

Yt, Y_{t−p} ∈ P(R)^N, Y_{ti}, Y_{(t−p)i} ∈ P(R) (p = 1, 2, · · ·, n; i = 1, 2, · · ·, N).

Definition 3.4.2. Let Y_{t−p} = (Y_{t−p,1}, Y_{t−p,2}, · · ·, Y_{t−p,N}). Partition the set of natural numbers {1, 2, · · ·, n} into two exhaustive, mutually exclusive subsets J(−) and J(+), one of which may be empty. Each partition associates a binary multi-index J = (j1, j2, · · ·, jn) defined by jp = 0 if p ∈ J(+), and jp = 1 if p ∈ J(−). In particular, J0 = (0, 0, · · ·, 0), J1 = (1, 1, · · ·, 1). Denote by C(J) the cone in T(R)^n,

C(J) = {A0 E + A1 Y_{t−1} + · · · + An Y_{t−n} | Ap > 0 if jp = 0; Ap < 0 if jp = 1} (p = 1, 2, · · ·, n),

determined by the cone index J.

Proposition 3.4.1. For a given cone index J, the minimization model in the cone C(J),

r(A0(J), A(J)) = Σ_{i=1}^{N} d(A0 + A1 Y_{(t−1)i} + · · · + An Y_{(t−n)i}; Y_{ti})²,    (3.4.3)

has a unique parameter set A0(J), A1(J), · · ·, An(J).


Definition 3.4.3. Let the system

∂r(A0(J), A(J))/∂Ap = 0  (p = 0, 1, · · ·, n)

be written S(J). If S(J) has a solution Ap(J) such that Ap(J) > 0 at jp = 0 and Ap(J) < 0 at jp = 1, then Model (3.4.3) is J-compatible with the data. If the minimization of the unconstrained normal equations S(J) is compatible with A0 E + A1 Y_{t−1} + · · · + An Y_{t−n} lying in C(J), we call the model J-compatible.

Theorem 3.4.1. Suppose that the data set Y_{(t−1)i}, · · ·, Y_{(t−n)i} and Y_{ti} is given by the model Y_{ti} = A0 + Σ_{p=1}^{n} Ap Y_{(t−p)i} (i = 1, 2, · · ·, N). Then S(J) has a unique solution Ap(J) (p = 0, 1, · · ·, n) for all cone indexes.

Proof: Classify the observation data by subscripts; we may let i = 1, 2, · · ·, N correspond to the small-fluctuation data and i = N + 1, · · ·, 3N to the rest. Then W_{ti} = Y_{ti} and, for each p, Z_{(t−p)i} = Y_{(t−p)i} at i = 1, 2, · · ·, N; W_{ti} = Y_{ti} − η_{ti} at i = N + 1, · · ·, 2N; and W_{ti} = Y_{ti} + η_{ti} at i = 2N + 1, · · ·, 3N, with, for each p,

Z_{(t−p)i} = Y_{(t−p)i} − ξ_{(t−p)i} if jp = 0;  Z_{(t−p)i} = Y_{(t−p)i} + ξ_{(t−p)i} if jp = 1.

Hence determining the fuzzy self-regression model is turned into determining the crisp one

W_{ti} = A0 + A1 Z_{(t−1)i} + · · · + An Z_{(t−n)i}.

Let

r(A0, A) = Σ_{i=1}^{3N} d(A0 + Σ_{p=1}^{n} Ap Z_{(t−p)i}; W_{ti})²,

and set ∂r(A0(J), A(J))/∂Ap = 0. Solving these normal equations yields the unique solution Ap(J) (p = 0, 1, 2, · · ·, n).
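One reading of the proof's construction can be sketched with numpy. The function name is illustrative, and the shared sign convention for the two spread-shifted batches is an assumption made for the sketch:

```python
import numpy as np

# Sketch: turn N fuzzy observations (centre, spread) into 3N crisp rows
# under a cone index J, then solve the crisp least-squares problem.
def defuzzified_lstsq(X, xi, y, eta, J):
    # X, xi: (N, n) centres and spreads of the regressors
    # y, eta: (N,) centre and spread of the response
    # J: length-n 0/1 cone index (0 -> subtract the spread, 1 -> add it)
    sign = np.where(np.asarray(J) == 0, -1.0, 1.0)
    Zs = X + sign * xi               # spread-shifted regressors
    Z = np.vstack([X, Zs, Zs])       # rows 1..N, N+1..2N, 2N+1..3N
    W = np.concatenate([y, y - eta, y + eta])
    A = np.column_stack([np.ones(len(W)), Z])
    coef, *_ = np.linalg.lstsq(A, W, rcond=None)
    return coef                      # A0, A1, ..., An
```

With all spreads zero the three batches coincide and the ordinary least-squares fit is recovered.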

The modeling steps can thus be summarized as follows. Step 1. Work out a self-dependent sequence table from the observation data and classify the data by Definition 3.4.2. Step 2. Defuzzify the observation values Y_{(t−p)i} and the dependent variable Y_{ti} following the proof of Theorem 3.4.1.


Step 3. Calculate

rp = [N Σ_{i=1}^{N} Z_{(t−p)i} W_{ti} − Σ_{i=1}^{N} Z_{(t−p)i} Σ_{i=1}^{N} W_{ti}] / √{[N Σ_{i=1}^{N} Z²_{(t−p)i} − (Σ_{i=1}^{N} Z_{(t−p)i})²][N Σ_{i=1}^{N} W²_{ti} − (Σ_{i=1}^{N} W_{ti})²]}

(p = 1, 2, · · ·, n) and take |rK| = max{|rp| | p = 1, 2, · · ·, n}; the best model is then determined as

Ŵt = A0 + Σ_p Ap Z_{t−p}.

Step 4. Decision. Let

IC = √[(1/K) Σ_{i=1}^{N} (Ŵ_{ti} − W_{ti})²] / {√[(1/K) Σ_{i=1}^{N} Ŵ²_{ti}] + √[(1/K) Σ_{i=1}^{N} W²_{ti}]}  (Ŵ²_{ti} + W²_{ti} ≠ 0).

Then the forecast is efficient at IC ∈ (0, 1), perfect at IC = 0 and inefficient at IC = 1. So

Ŷ_{t+q} = A0 + Σ_p Ap Y_{t−(p+q)}

is determined, and the state at moment q can be estimated as

Y*_{t+q} ∈ [Y_{t+q} − 0.382 η_{t+q}, Y_{t+q} + 0.618 η_{t+q}].
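Steps 3 and 4 can be sketched as follows; `best_lag` and `inequality_coefficient` are illustrative names, and `np.corrcoef` stands in for the printed rp formula (the two are algebraically equivalent for sample data):

```python
import numpy as np

# Sketch of Steps 3-4: pick the lag with the largest sample correlation,
# then score a forecast with the inequality coefficient IC.
def best_lag(Z, W):
    # Z: (n_lags, N) defuzzified lagged series, W: (N,) defuzzified target
    corrs = [abs(np.corrcoef(z, W)[0, 1]) for z in Z]
    return int(np.argmax(corrs))

def inequality_coefficient(W_hat, W):
    num = np.sqrt(np.mean((W_hat - W) ** 2))
    den = np.sqrt(np.mean(W_hat ** 2)) + np.sqrt(np.mean(W ** 2))
    return num / den  # 0 = perfect forecast, 1 = ineffective forecast
```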

We were satisfied with the result after forecasting candy sales with Model (3.4.1) in some regions in the first half of 1984. The methods above can be developed into more elaborate computerized ones.

3.4.2 Non-linear Model

In this section, the defuzzification problem of (3.4.2) is resolved under a cone index J, linearizing the model by transformation.

Proposition 3.4.2. For a model like (3.4.2) and a fixed cone index J, the minimization model in the cone C(J),

r(β̃0(J), β̃(J)) = Σ_{i=1}^{N} d(ft(ỹ_{(t−1)i}, · · ·, ỹ_{(t−p)i}), ft(ỹ_{ti}))²,

has a unique parameter solution β̃0(J), β̃1(J), · · ·, β̃n(J).


Definition 3.4.4. As above, write the systems ∂r(β̃0(J), β̃(J))/∂β̃p = 0 and ∂r1(β̃0(J), β̃(J))/∂β̃p = 0 as S(J) and S1(J), respectively.

Theorem 3.4.2. Suppose the data sets ỹ_{(t−1)i}, · · ·, ỹ_{(t−n)i}; ỹ_{ti} are all given by the model ỹt = ft(ỹ_{(t−1)i}, · · ·, ỹ_{(t−p)i}) (i = 1, 2, · · ·, N). Then, for every cone index J, S(J) has a unique solution β̃0(J), β̃1(J), · · ·, β̃n(J).

Proof: The proof proceeds as for Definition 3.1 in [Cao89b]. When the data fluctuate little we take i = 1, 2, · · ·, N; then w_{ti} = y_{ti} and, for each p, z_{(t−p)i} = y_{(t−p)i}. At i = N + 1, · · ·, 2N, w_{ti} = y_{ti} − η_{ti}, and at i = 2N + 1, · · ·, 3N, w_{ti} = y_{ti} + η_{ti}, with, for each p,

z_{(t−p)i} = y_{(t−p)i} − η_{(t−p)i} if jp = 0;  z_{(t−p)i} = y_{(t−p)i} + η_{(t−p)i} if jp = 1.

Therefore a deterministic self-regression model is obtained:

wt = ft(z_{t−1}, z_{t−2}, · · ·, z_{t−n}).

By variable replacement it is linearized, L(wt) = L[ft(z_{t−1}, z_{t−2}, · · ·, z_{t−n})], i.e.,

Ut = β0(J) + Σ_{p=1}^{n} βp(J) z_{t−p}.

The conclusion then follows by applying the least square principle to Ut.

Proposition 3.4.3. For a fixed cone index J, the minimization model in the cone C(J),

r1(β̃0(J), β̃(J)) = Σ_{i=1}^{N} D̂i² d[ft(ỹ_{(t−1)i}, · · ·, ỹ_{(t−p)i}), ft(ỹ_{ti})]²,

has a unique parameter solution β̃0(J), β̃1(J), · · ·, β̃n(J).

Theorem 3.4.3. Suppose a data set ỹ_{(t−1)i}, · · ·, ỹ_{(t−n)i}; ỹ_{ti} is given by Model (3.4.2). Then, for every cone index J, S1(J) has a unique solution β̃0(J), β̃1(J), · · ·, β̃n(J).


Proof: Similarly to the proof of Theorem 3.4.1, we need only note S1(J), i.e.,

\[
\begin{pmatrix}
\sum_{i=1}^{N} D_i & \sum_{i=1}^{N} D_i z_{(t-1)i} & \cdots & \sum_{i=1}^{N} D_i z_{(t-n)i} \\
\sum_{i=1}^{N} D_i z_{(t-1)i} & \sum_{i=1}^{N} D_i z_{(t-1)i}^2 & \cdots & \sum_{i=1}^{N} D_i z_{(t-1)i} z_{(t-n)i} \\
\vdots & \vdots & \cdots & \vdots \\
\sum_{i=1}^{N} D_i z_{(t-n)i} & \sum_{i=1}^{N} D_i z_{(t-1)i} z_{(t-n)i} & \cdots & \sum_{i=1}^{N} D_i z_{(t-n)i}^2
\end{pmatrix}
\begin{pmatrix} \beta_0(J) \\ \beta_1(J) \\ \vdots \\ \beta_n(J) \end{pmatrix}
=
\begin{pmatrix}
\sum_{i=1}^{N} D_i U_{ti} \\ \sum_{i=1}^{N} D_i z_{(t-1)i} U_{ti} \\ \vdots \\ \sum_{i=1}^{N} D_i z_{(t-n)i} U_{ti}
\end{pmatrix}. \quad (3.4.4)
\]

Obviously, (3.4.4) has a unique solution β0(J), β1(J), · · ·, βn(J).

It is also verified that forecasting is more accurate when the fuzzy nonlinear self-regression model is determined by the weighted least square method. The construction of the model proceeds as follows.

1. From the observation data (y_{(t−p)i}, η_{(t−p)i}), compile a fuzzy self-correlation sequence table like Table 3.2.1.
2. Defuzzify (3.4.2) (or apply variable replacement), changing it into a deterministic nonlinear model (or a linear fuzzy model).
3. By variable replacement (or defuzzification), change the corresponding model into the classical self-regression model

Ut = β0(J) + Σ_{p=1}^{n} βp(J) z_{t−p}.

4. Determine rα by consulting a table of critical values of the correlation coefficient. Let

rp = [N Σ_{i=1}^{N} z_{(t−p)i} U_{ti} − Σ_{i=1}^{N} z_{(t−p)i} Σ_{i=1}^{N} U_{ti}] / √{[N Σ_{i=1}^{N} z²_{(t−p)i} − (Σ_{i=1}^{N} z_{(t−p)i})²][N Σ_{i=1}^{N} U²_{ti} − (Σ_{i=1}^{N} U_{ti})²]}

and calculate the autocorrelation coefficient for each backward shift p (p = 1, 2, · · ·, n). If |rp| > rα, a significant linear relation holds between the series shifted p periods backwards and the normal time sequence when building the self-regression model.


Again take |rK| = max{|rp| | p = 1, 2, · · ·, n}; the best model is then the one built on the normal time series Ut shifted K quarters backwards:

Ut(β0, β) = β0 + Σ_{p=1}^{n} βp z_{t−p}.    (3.4.5)

5. The parameters β0(J), β1(J), · · ·, βn(J) in (3.4.5) are determined by the classical least squares method and placed into (3.4.5). This is what we seek.
6. Test. Let

IC = √[(1/K) Σ_{i=1}^{N} (Û_{ti} − U_{ti})²] / {√[(1/K) Σ_{i=1}^{N} Û²_{ti}] + √[(1/K) Σ_{i=1}^{N} U²_{ti}]}  (Û²_{ti} + U²_{ti} ≠ 0).

It is an effective forecast at IC ∈ [0, 1), a perfect forecast at IC = 0 and an ineffective forecast at IC = 1.

7. Regeneration. According to β̃p (p = 1, 2, · · ·, n), determine the coefficients in (3.4.2), supposing the best model solved in the nonlinear regression problem to be

Ût(β̃0, β̃) = ft(ỹ_{t−1}, · · ·, ỹ_{t−n}).

Given

Û_{t+q}(β̃0, β̃) = f_{t+q}(ỹ_{t−(1+q)}, · · ·, ỹ_{t−(n+q)}),

we can forecast the state at moment q.

Example 3.4.1: Suppose U't = β̃0(J) + Σ_{p=1}^{n} β̃p(J) z_{t−p} with U't = ln Ut. Then

Ut = e^{β̃0(J) + Σ_{p=1}^{n} β̃p(J) z_{t−p}}  ⇒  U_{t−(n+q)} = e^{β̃0(J) + Σ_{p=1}^{n} β̃p(J) z_{t−(p+q)}}.

Therefore

U_{t−(n+q)} = e^{β̃0 + Σ_{p=1}^{n} β̃p y_{t−(n+q)}}

is what we seek. If parameters remain in the formula, they must be determined by an optimization method.

8. Determine the forecasting interval [Cao89c]. Since Ũ_{t+q} = (U_{t+q}, θ_{t+q}, θ̄_{t+q})^T, the forecasting interval is

U*_{t+q} ∈ [U_{t+q} − 0.382 θ_{t+q}, U_{t+q} + 0.618 θ̄_{t+q}].

3.5 Nonlinear Regression with T-fuzzy Data to be Linearized

3.5.1 Introduction

Consider the nonlinear model

ỹ = f̃(x̃1, x̃2, · · ·, x̃n) + ε,    (3.5.1)

where f̃ is a fuzzy nonlinear function to be linearized, ε is an error, and ỹ = (y, η, η̄) and x̃p = (xp, ξp, ξ̄p) (p = 1, 2, · · ·, n) denote T-fuzzy dependent and independent variables, respectively. We call (3.5.1) a nonlinear regression model with T-fuzzy variables; a classical model is a special case of it. In this section, the de-T-fuzzification problem of (3.5.1) is resolved under a cone index J, linearizing the model by transformation. Meanwhile, the theory of this model is demonstrated, and a method for the problem is advanced.

3.5.2 Preparatory Theorems and Properties

The definitions and properties relevant to T-fuzzy numbers appear in Section 1.7. It is easy to verify that a T-fuzzy datum x̃ = (x, ξ, ξ̄) is a regular, convex fuzzy subset.

Definition 3.5.1. If x̃ = (m(x̃), L1, R1) and ỹ = (m(ỹ), L2, R2), then the distance on the T-fuzzy number set T(R) is defined by

d(x̃, ỹ)² = D²(Supp(x̃), Supp(ỹ)) + (m(x̃) − m(ỹ))²,

where Supp(·) denotes the support and m(·) the main value. In particular, when x̃ = (x, ξ, ξ̄) and ỹ = (y, η, η̄),

d(x̃, ỹ)² = [(x − y − (ξ − η))² + (x − y + (ξ̄ − η̄))² + (x − y)²]/3.
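For triples (centre, left spread, right spread), this distance can be sketched as:

```python
# Hedged sketch of Definition 3.5.1 for T-fuzzy numbers written as
# (centre, left spread, right spread) triples.
def t_fuzzy_dist_sq(x, y):
    cx, lx, rx = x
    cy, ly, ry = y
    return (((cx - cy) - (lx - ly)) ** 2
            + ((cx - cy) + (rx - ry)) ** 2
            + (cx - cy) ** 2) / 3.0
```

Two crisp numbers (zero spreads) at distance 1 give d² = 1, since all three terms coincide.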

Lemma 3.5.1. d(ỹi, ỹj)² = 2d(ỹi, x̃)² + 2d(x̃, ỹj)² − 4d(x̃, (ỹi + ỹj)/2)².

Proof: Similar to Lemma 3.1.1.

Theorem 3.5.1. Let V be a closed cone in P(R). Then for any x̃ in P(R) there exists a unique T-fuzzy number ỹ0 in V such that d(x̃, ỹ0) ≤ d(x̃, ỹ) for all ỹ in V. A necessary and sufficient condition for ỹ0 to be the unique minimizing fuzzy number in V is that x̃ is ỹ0-orthogonal to V.

Proof: Similar to Theorem 3.1.1.


3.5.3 Two Kinds of De-T-fuzzification Approaches and Their Equivalence

Based on the theory above, taking Model (3.5.1) as an example, we inquire into methods of converting it to a non-fuzzy linear model.

I. De-T-fuzzification before linearizing by variable replacement

Definition 3.5.2. For a given cone index J, the measure between the fuzzy data and the regression curve is defined as

Q(r̃0(J), r̃(J)) = Σ_{i=1}^{N} d(f̃(x̃1i, x̃2i, · · ·, x̃ni); ỹi)².    (3.5.2)

Theorem 3.5.2. Suppose the T-fuzzy data x̃1i, x̃2i, · · ·, x̃ni, ỹi are all given by the model ỹi = f̃(x̃1i, x̃2i, · · ·, x̃ni) (i = 1, 2, · · ·, N). Then, for every cone index J, there exists a unique solution r̃0(J), r̃1(J), · · ·, r̃n(J) of the normal equations

∂Q(r̃0(J), r̃(J))/∂r̃p = 0  (p = 0, 1, · · ·, n).

Proposition 3.5.1. For a given cone index J, the minimization Model (3.5.2) has a unique parameter solution r̃0(J), r̃1(J), · · ·, r̃n(J) in the cone C(J).

In fact, we can take a list of T-fuzzy samples ((x1, ξ1, ξ̄1), (y1, η1, η̄1)), · · ·, ((xn, ξn, ξ̄n), (yn, ηn, η̄n)). For the samples with smaller fluctuation, let wi = yi and, for each p, zpi = xpi, at i = 1, 2, · · ·, N. The remaining samples are handled as follows. On the one hand, let wi = yi − ηi and, for each p,

zpi = xpi − ξpi if jp = 0;  zpi = xpi + ξpi if jp = 1,

at i = N + 1, N + 2, · · ·, 2N. On the other hand, let wi = yi + η̄i and, for each p,

zpi = xpi + ξ̄pi if jp = 0;  zpi = xpi − ξ̄pi if jp = 1,

at i = 2N + 1, 2N + 2, · · ·, 3N. Therefore (3.5.1) can be changed into the classical expression

wi = f(z1i, z2i, · · ·, zni)  (i = 1, 2, · · ·, N).

Through an appropriate linear transformation L, the linear regression model is then acquired:

U = r0(J) + Σ_{p=1}^{n} rp(J) zp.    (3.5.3)

From this, the result in Proposition 3.5.1 (or Theorem 3.5.1) is easily obtained.
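The expansion of N T-fuzzy samples into 3N crisp rows in approach I can be sketched as follows; the names are illustrative, and the sign flip between the two shifted batches follows the case analysis above:

```python
import numpy as np

# Hedged sketch of approach I: expand N T-fuzzy samples (centre, left
# spread, right spread) into 3N crisp observations under cone index J.
def expand_t_fuzzy(x, xl, xr, y, yl, yr, J):
    # x, xl, xr: (N, n) centres / left / right spreads of the regressors
    # y, yl, yr: (N,) centre / left / right spreads of the response
    s = np.where(np.asarray(J) == 0, 1.0, -1.0)
    Z = np.vstack([x, x - s * xl, x + s * xr])
    W = np.concatenate([y, y - yl, y + yr])
    return Z, W
```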


Corollary 3.5.1. Under the conditions of Theorem 3.5.2, for a given cone index J, (3.5.3) has a unique group of coefficients r0(J), r1(J), · · ·, rn(J).

II. Variable replacement before de-T-fuzzification

For ỹ as in (3.5.1), an appropriate variable replacement changes it into a linear function with T-fuzzy variables:

s̃ = r̃0 + Σ_{p=1}^{n} r̃p ũpi.    (3.5.4)

Theorem 3.5.3. Under the conditions of Theorem 3.5.2, for a given cone index J, (3.5.4) has unique coefficients r̃p(J) (p = 0, 1, · · ·, n).

Proof: Because the coefficients r̃p (p = 0, 1, · · ·, n) in (3.5.4) are all determined by the T-fuzzy data ũpi, the theorem holds by Theorem 3.5.2 and the proof of Proposition 3.5.1.

It follows that (3.5.4) can be changed into the deterministic linear model

V = r'0(J) + Σ_{p=1}^{n} r'p(J) z'p.    (3.5.5)

Theorem 3.5.4. Under the conditions of Theorem 3.5.2, for the same fixed cone index J, the determined T-fuzzy data regression Equation (3.5.3) is equivalent to (3.5.5).

Proof: Under the same fixed cone index J, the original T-fuzzy datum x̃p is determined in the cone C(J). Hence, applied to (3.5.1), we may first de-T-fuzzify, N(ỹ), and then linearize, L(W) (N and L denote de-T-fuzzification and linearization, respectively), obtaining zp; or we may first linearize (3.5.1) and then de-T-fuzzify, obtaining z'p. The resulting independent-variable sequences must be equal, i.e., zp = z'p. Moreover, in the cone C(J), the normal equations corresponding to (3.5.3) and (3.5.5),

∂Q(r0(J), r1(J), · · ·, rn(J))/∂rp = 0 and ∂Q(r'0(J), r'1(J), · · ·, r'n(J))/∂r'p = 0  (p = 0, 1, · · ·, n),

each contain a unique parameter solution, rp(J) and r'p(J) respectively. Since zp = z'p, we have rp(J) = r'p(J) (p = 0, 1, · · ·, n). Hence (3.5.3) ⇐⇒ (3.5.5).


3.5.4 Weighted Linearized Nonlinear Regression with T-fuzzy Variables

T-fuzzy data reflect observations at different positions in the whole test more objectively. In a convex cone, the centre value is regarded as the main value, with values distributed on both the left and right sides. Considering the influence degree of a data pair ỹ; x̃1, x̃2, · · ·, x̃n, de-T-fuzzification is effective for handling a linear regression problem with T-fuzzy variables [Cao93e]. But for a nonlinear regression with T-fuzzy variables, it is not necessarily best to apply the two replacements above before determining the regression coefficients by a least squares principle. We therefore need fuzzy weighting of the error term ỹi − ŷ̃i. At different points ỹi (i = 1, · · ·, N), when a similar deviation is transformed back to the original T-fuzzy variables, the transformation makes the deviation proportional to the fuzzy difference quotient (∂ỹ/∂s̃)i or fuzzy derivative (dỹ/ds̃)i. It is known in practice that the weighted model is more accurate than the unweighted one.

Write the fuzzy difference quotient or fuzzy derivative as D̃i = (∂ỹ/∂s̃)i or D̃i = (dỹ/ds̃)i. Let

Q1(r̃0, r̃) = Σ_{i=1}^{N} [(∂ỹ/∂s̃)i (s̃i − ŝ̃i)]²  (or Σ_{i=1}^{N} [(dỹ/ds̃)i (s̃i − ŝ̃i)]²)
           = Σ_{i=1}^{N} [D̃i (s̃i − ŝ̃i)]² = Σ_{i=1}^{N} D̃i² (s̃i − (r̃0 + Σ_{p=1}^{n} r̃p ũpi))².    (3.5.6)

We then proceed by Method II of 3.5.3 (Method I yields a similar result).

Proposition 3.5.2. For the given cone index J, in the cone C(J),

(∂ỹ/∂s̃)i ⇒ (∂y(z(J))/∂s(z(J)))i,  (dỹ/ds̃)i ⇒ (dy(z(J))/ds(z(J)))i,

and (3.5.6) can be changed into

Q1(r̃0, r̃) = Σ_{i=1}^{N} Di² (Vi − (r0 + Σ_{p=1}^{n} rp zpi))²,    (3.5.7)

where Di denotes (∂y(z(J))/∂s(z(J)))i or (dy(z(J))/ds(z(J)))i.


Proposition 3.5.3. For the given cone index J, the minimization model (3.5.6) in C(J) has a unique parameter solution r0(J), r1(J), · · ·, rn(J).

Theorem 3.5.5. Let the T-fuzzy data x̃1i, x̃2i, · · ·, x̃ni; ỹi all be given by the model ỹi = f̃i(x̃) (i = 1, 2, · · ·, N), x̃ = (x̃1, x̃2, · · ·, x̃n). Then, for the given cone index J, the normal equations ∂Q1(r̃0, r̃)/∂r̃p = 0 contain a unique solution r0(J), r1(J), · · ·, rn(J).

Proof: Following the proofs of Proposition 3.5.1 and Theorem 3.5.4, (3.5.6) can be changed into (3.5.7), and the normal equations with respect to (3.5.7) into

\[
\begin{pmatrix}
\sum_{i=1}^{N} D_i & \sum_{i=1}^{N} D_i z_{1i} & \cdots & \sum_{i=1}^{N} D_i z_{ni} \\
\sum_{i=1}^{N} D_i z_{1i} & \sum_{i=1}^{N} D_i z_{1i}^2 & \cdots & \sum_{i=1}^{N} D_i z_{1i} z_{ni} \\
\vdots & \vdots & \cdots & \vdots \\
\sum_{i=1}^{N} D_i z_{ni} & \sum_{i=1}^{N} D_i z_{ni} z_{1i} & \cdots & \sum_{i=1}^{N} D_i z_{ni}^2
\end{pmatrix}
\begin{pmatrix} r_0(J) \\ r_1(J) \\ \vdots \\ r_n(J) \end{pmatrix}
=
\begin{pmatrix}
\sum_{i=1}^{N} D_i V_i \\ \sum_{i=1}^{N} D_i z_{1i} V_i \\ \vdots \\ \sum_{i=1}^{N} D_i z_{ni} V_i
\end{pmatrix}, \quad (3.5.8)
\]

i.e., (D z^T z) r̃(J) = D z^T V. Therefore a unique solution r0(J), r1(J), · · ·, rn(J) exists in (3.5.8).

Correspondingly, we can get a test formula related to the regression equation [Guj86]:

Σ_{i=1}^{N} [Di(Vi − V̂i)]² = Σ_{i=1}^{N} Di²(Vi − V̄)² [1 − Σ_{i=1}^{N} Di²(V̂i − V̄)² / Σ_{i=1}^{N} Di²(Vi − V̄)²].

Obviously,

R² = Σ_{i=1}^{N} Di²(V̂i − V̄)² / Σ_{i=1}^{N} Di²(Vi − V̄)² ≤ 1,

i.e.,

|R| = √[Σ_{i=1}^{N} Di²(V̂i − V̄)² / Σ_{i=1}^{N} Di²(Vi − V̄)²] ≤ 1,    (3.5.9)

calling R the weighted correlation coefficient. As |R| → 1, V and z are more nearly linearly related. If R > Rα (determined by checking a correlation coefficient table), then the linear relation of the regression equation V = r0 + Σ_{p=1}^{n} rp zp is significant. The significance test for the regression coefficients is as follows. Let

G = [Σ_{i=1}^{N} Di²(V̂i − V̄)² / K] / [Σ_{i=1}^{N} Di²(Vi − V̂i)² / (N − K − 1)],    (3.5.10)

F(p) = (r̂p² / cpp) / [Σ_{i=1}^{N} Di²(Vi − V̂i)² / (N − K − 1)]  (p = 1, 2, · · ·, n).

Then reject H0: rp = 0 when F(p) > Fα(1, N − K − 1), where cpp is the p-th element on the main diagonal of the matrix (D z^T z)^{−1}. If there exists some p with F(p) < Fα(1, N − K − 1), then zp influences V little and may be omitted.

Then negate H0 : rj = 0 at F (j) > Fα (1, N − K − 1), where cpp is p-element on the main diagonal of matrix (Dz T z)−1 . If some p exists, such that F (p) < Fα (1, N − K − 1), then it shows that zp inﬂuences V little, omitted here. 3.5.5 Numeric Example Example 3.5.1: Suppose that a non-linear fuzzy regression model as follows: y = A0 + be− x˜ , c

where A0 , b, c are all constants, and then, by its non-T -fuzziﬁcation, we have W = A0 + be− z . c

Besides, z is a geometrical sequence, and suppose Δ = Wk = A0 + be

− zc

k

, Wk+1 = A0 + be

hence Wk+1 = A0 + be

− zc Δ k

= A0 +

which can be turned into v = r0 + r1 u,

−z

zk+1 , then zk c k+1

,

Wk − A0 − Δ1 , b


where v = ln(W_{k+1} − A0), u = ln(Wk − A0), r0 = (1 − 1/Δ) ln|b| and r1 = 1/Δ contain the parameters, which should be evaluated by an optimum-seeking method. Therefore, the modeling steps for (3.5.1) can be summarized as follows:

1) Replacement. Replace variables in (3.5.1) (or de-T-fuzzify), and linearize it (or change it into a deterministic nonlinear type).
2) Change. De-T-fuzzify (or replace variables), changing the problem into the linear deterministic model

V = r0(J) + r1(J) z1 + · · · + rn(J) zn.    (3.5.11)

3) Determination. Determine r0(J), r1(J), · · ·, rn(J) by solving (3.5.8); these are the regression coefficients of (3.5.11).
4) Calculation. Calculate (3.5.9) and (3.5.10), and test (3.5.11) by ordinary methods.
5) Forecast. The coefficients in (3.5.1) are determined from the solution rp(J) (p = 0, 1, · · ·, n), and (3.5.1) can then be used to forecast; the choice of the forecasting interval at moment q is similar to Ref. [Cao89c]. If ỹq = (yq, ηq, η̄q), then

y*q ∈ [yq − 0.382 ηq, yq + 0.618 η̄q].

3.5.6 Conclusion

The method can be programmed for operation on computers; thus the model presented here is more accurate, more effective and more practical than crisp, non-weighted nonlinear models.
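On defuzzified data, the weighted normal equations (3.5.8) amount to a weighted least-squares solve, which can be sketched as follows (illustrative names):

```python
import numpy as np

# Hedged sketch of (3.5.8): solve (z^T D^2 z) r = z^T D^2 V for the
# weighted regression coefficients, with weights D_i^2.
def weighted_fit(z, V, D):
    # z: (N, n) regressors, V: (N,) targets, D: (N,) weights
    A = np.column_stack([np.ones(len(V)), z])
    W2 = D ** 2
    lhs = A.T @ (A * W2[:, None])
    rhs = A.T @ (W2 * V)
    return np.linalg.solve(lhs, rhs)  # r_0, r_1, ..., r_n
```

When the data fit the linear model exactly, the weights do not change the solution, which is a quick sanity check on an implementation.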

3.6 Regression and Self-regression Models with Flat Fuzzy Variables

3.6.1 Introduction

Because (·, c) fuzzy data contain L-R fuzzy variables, T-fuzzy variables and flat (trapezoidal) fuzzy variables, in this section we further apply the flat fuzzy variables x̃*i = (x⁻*i, x⁺*i, ξ*i, ξ̄*i), ỹ* = (y⁻*, y⁺*, η*, η̄*) to the regression and self-regression models.

3.6.2 Determination of the Model with Flat Fuzzy Variables

Definition 3.6.1. Suppose the models are

ỹ = β0 E + β1 x̃1 + · · · + βn x̃n + ε    (3.6.1)

and

ỹt = β0 E + β1 x̃_{t−1} + · · · + βn x̃_{t−n} + εt,    (3.6.2)


where x̃p, x̃_{t−p} (p = 1, 2, . . ., n) are flat fuzzy variables and ỹ, ỹt are flat fuzzy dependent variables. We call (3.6.1) and (3.6.2) a regression model and a self-regression model with flat fuzzy variables, respectively; E is an n-vector with each component equal to the flat fuzzy number (1, 1, 0, 0), ε and εt are errors, and t is time. Because the variables in (3.6.1) and (3.6.2) are fuzzy, it is impossible to obtain a meaningful result by a classical least square method directly. Therefore a path to determining models (3.6.1) and (3.6.2) is researched as follows.

Definition 3.6.2. Let x̃ = (x⁻, x⁺, ξ, ξ̄) ∈ P(R) for each x̃ with x⁻ ≥ ξ, x⁺ ≥ ξ̄. Then P(R) is a platform of T(R) and a convex closed subset of T(R) with respect to the topology induced by the distance d.

Suppose the test data are x̃*1, x̃*2, . . ., x̃*N; ỹ*, where x̃*i = (x⁻*i, x⁺*i, ξ*i, ξ̄*i) (i = 1, 2, . . ., N) and ỹ* = (y⁻*, y⁺*, η*, η̄*); when the model is a regression model with flat fuzzy variables, "∗" stands for p, and when it is a self-regression model with flat fuzzy variables, "∗" stands for t−p. Hence, for models (3.6.1) and (3.6.2), βi (i = 1, 2, . . ., N) is an ordinary real number, x̃*i is a flat fuzzy variable, and ỹ* is a flat affine function from P(R)^N to T(R). Let

r(β0, β) = Σ_{i=1}^{N} di(x̃*i, ỹ*)² = Σ_{i=1}^{N} [ỹ* − (β0 + β1 x̃*1 + . . . + βn x̃*n)]².

Then βp determined by applying the least square method is a flat fuzzy number rather than a real number, where β = (β1, β2, . . ., βn), so that a classical least square method cannot be applied directly and a conversion must be made. Similarly to the method of Section 3.1, we introduce definitions and properties below.

Definition 3.6.3. Assume x̃*i = (x̃*1i, x̃*2i, . . ., x̃*ni) (i = 1, 2, . . ., N). Partition the set of natural numbers {1, 2, · · ·, n} into two exhaustive, mutually exclusive subsets T(−) and T(+), one of which may be the empty set φ. To each such partition associate a binary multi-index T = (T1, T2, . . ., Tn) defined by jp = 0 if p ∈ T(+), and jp = 1 if p ∈ T(−). In particular, we write T0 = (0, 0, · · ·, 0), T1 = (1, 1, · · ·, 1). Use

C(T) = {β0 E + β1 x̃1 + . . . + βn x̃n | βp ≥ 0 if jp = 0; βp ≤ 0 if jp = 1}

to represent a platform in T(R)^N; we call it the platform determined by the platform index T.

Proposition 3.6.1. For a given platform index T, there exists a unique parameter solution β0(T), β1(T), . . ., βn(T) of the minimization model

r(β0(T), β(T)) = Σ_{i=1}^{N} d(β0 + β1 x1i + . . . + βn xni, yi)²    (3.6.3)

in the platform C(T), where β(T) = (β1(T), β2(T), . . ., βn(T)).


Definition 3.6.4. Suppose the data are x̃*1, x̃*2, . . ., x̃*n; ỹ*. We call S(T) the system consisting of the n + 1 equations

∂r(β0(T), β(T))/∂βp = 0  (p = 0, 1, . . ., n).

If S(T) has a solution β0(T), β1(T), . . ., βn(T) such that βp > 0 at jp = 0 and βp < 0 at jp = 1, then we call (3.6.3) T-compatible with the data. If the unconstrained minimum of S(T) is compatible with β0 E + Σ_{p=1}^{n} βp x̃p lying in C(T), then the model is called compatible.

Theorem 3.6.1. Suppose that the flat fuzzy data x̃1i, x̃2i, . . ., x̃ni, ỹi satisfy (3.6.1) and (3.6.2), respectively. Then, for every platform index T, there exists a unique solution β0(T), β1(T), . . ., βn(T) of the system

∂r(β0(T), β(T))/∂βp = 0  (p = 0, 1, . . ., n).

Proof: Suppose the flat fuzzy data are x̃*i = (x⁻*i, x⁺*i, ξ*i, ξ̄*i), ỹ* = (y⁻*, y⁺*, η*, η̄*), with "∗" standing for p or for t−p. Catalogue {x̃*i} by subscript. For i = 1, 2, . . ., N, take

w* = (η*y⁻* + η̄*y⁺* + η*η̄*)/(η* + η̄*) + (η* + η̄*)/2,

and, to each ∗,

z*i = (ξ*i x⁻*i + ξ̄*i x⁺*i)/(ξ*i + ξ̄*i) + (ξ*i + ξ̄*i)/2.

For i = N + 1, . . ., 2N, let w* = y⁻* − η* and, to each ∗,

z*i = (ξ*i x⁻*i + ξ̄*i x⁺*i)/(ξ*i + ξ̄*i) − ξ*i if j* = 0;
z*i = (ξ*i x⁻*i + ξ̄*i x⁺*i)/(ξ*i + ξ̄*i) + ξ*i if j* = 1.

And for i = 2N + 1, . . ., 3N, let w* = y⁺* + η̄* and, to each ∗,

z*i = (ξ*i x⁻*i + ξ̄*i x⁺*i)/(ξ*i + ξ̄*i) − ξ̄*i if j* = 0;
z*i = (ξ*i x⁻*i + ξ̄*i x⁺*i)/(ξ*i + ξ̄*i) + ξ̄*i if j* = 1.
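The spread-weighted crisp value used for the i = 1, . . ., N rows in the w*/z* construction above can be sketched as follows (a hedged sketch; the zero-spread guard is an added assumption):

```python
# Hedged sketch of the flat (trapezoidal) defuzzification step: map a
# flat fuzzy number (x_lo, x_hi, left spread, right spread) to the
# spread-weighted crisp value.
def flat_to_crisp(x_lo, x_hi, lo_spread, hi_spread):
    s = lo_spread + hi_spread
    if s == 0:
        return (x_lo + x_hi) / 2.0  # degenerate case: plain midpoint
    return (lo_spread * x_lo + hi_spread * x_hi) / s + s / 2.0
```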


Under the given platform index T, let z* = (1/3N) Σ_{i=1}^{3N} z*i, and we can change the regression or self-regression model with flat fuzzy variables into a determined one with platform index T:

w = β0 + β1 z1 + . . . + βn zn,    (3.6.4)

wt = β0 + β1 z_{t−1} + . . . + βn z_{t−n}.    (3.6.5)

From here we obtain a classical regression and self-regression model with platform index T corresponding to (3.6.1) and (3.6.2). By the classical least square method it is easy to find the unique optimal solution βp (p = 0, 1, . . ., n) of (3.6.4) or (3.6.5). Accordingly, it is of value to approximate Models (3.6.1) and (3.6.2) by crisp models.

3.6.3 Conclusion

According to paper [Cao93e], the model in this section can be generalized to models of nonlinear regression and time series. If we integrate the models and methods here with data mining, we can more easily acquire fuzzy data for those characteristic problems that are difficult to describe by numerical values. At the same time, we can design a series of systems with (·, c) fuzzy data [YL99], such as computer fault diagnosis, future forecasting and recent identification.

4 Fuzzy Input-Output Model

Focusing on the expansion of a classical input-output model, this chapter first introduces a fuzzy input-output model, then inquires into an input-output model with T-fuzzy data and its application, and finally presents an input-output model with triangular fuzzy data.

4.1 Fuzzy Input-Output Mathematical Model

4.1.1 Introduction

In the real world there exists a close connection between production technology and the economy. The input-output of each department forms a complicated network system reflecting much fuzziness. Derived from reality and historical materials, the obtained data are obviously approximate, estimated valuations. If a fuzzy set method is used on those fuzzy "fluctuating" data instead of forcing a classical mathematical one, whose result would be unreliable, more information is kept than before. With the input-output model with T-fuzzy data raised here, we can better describe the development law of the objective.

The research objects of input-output methods and models are extremely complex social systems. But in input-output models, the input quantity, output quantity, direct consumption coefficients and complete consumption coefficients all demand mathematical precision, which runs into the incompatibility principle: when the complexity of a system increases greatly, the ability to make it precise decreases, and beyond a certain threshold value complexity and precision mutually exclude each other, so parameter fluctuation factors need to be considered in input-output models. Accompanying complexity is inexactness and imprecision, that is, fuzziness. Therefore, complexity in a social economic system can be studied accurately and scientifically by establishing fuzzy input-output models.

B.-Y. Cao: Optimal Models & Meth. with Fuzzy Quantities, STUDFUZZ 248, pp. 95–115. © Springer-Verlag Berlin Heidelberg 2010, springerlink.com


4.1.2 Models

Consider Table 4.1.1.

Table 4.1.1. Fuzzy-valued input-output table

Output →                  Used in middle                Amount      Final products                     Total value
Input ↓                   Dept 1  Dept 2  · · ·  Dept n             (consumption, accumulation, etc.)
Material consumption
  Dept 1                  x̃11    x̃12    · · ·  x̃1n    Σj x̃1j     ỹ11  ỹ12  ỹ13  → ỹ1              x̃1
  Dept 2                  x̃21    x̃22    · · ·  x̃2n    Σj x̃2j     ỹ21  ỹ22  ỹ23  → ỹ2              x̃2
  · · ·                   · · ·   · · ·   · · ·  · · ·   · · ·       · · ·                              · · ·
  Dept n                  x̃n1    x̃n2    · · ·  x̃nn    Σj x̃nj     ỹn1  ỹn2  ỹn3  → ỹn              x̃n
Newly made value
  Pay                     ṽ1     ṽ2     · · ·  ṽn     Ṽ
  Net profit              M̃1     M̃2     · · ·  M̃n     M̃
  Amount                  Ñ1     Ñ2     · · ·  Ñn     Ñ
Total value               x̃1     x̃2     · · ·  x̃n     X̃

where x̃_i (i = 1, ···, n) denotes the fuzzy total value of products made by the i-th material production department in the scheduled time; ỹ_i (i = 1, ···, n) the fuzzy total value of final products made by the i-th material production department; x̃_ij (i, j = 1, ···, n) the fuzzy value of products distributed to the j-th department by the i-th department; ṽ_j (j = 1, ···, n) the fuzzy value made by the necessary labor of laborers in the j-th material production department; and M̃_j (j = 1, ···, n) the fuzzy value made by the j-th material production department in the scheduled time.


Ñ_j (j = 1, ···, n) denotes the fuzzy value newly made by the j-th material production department in the scheduled time.

Suppose that all of them are T-fuzzy numbers; then the data in Table 4.1.1 tally with the following balancing formulas:

  ∑_{j=1}^n x̃_ij + ỹ_i = x̃_i  (i = 1, 2, ···, n),

  ∑_{i=1}^n x̃_ij + Ñ_j = x̃_j  (j = 1, 2, ···, n).

By the fuzzy consumption coefficient formula ã_ij = x̃_ij / x̃_j (i, j = 1, 2, ···, n), we can change the two formulas above into

  ∑_{j=1}^n ã_ij x̃_j + ỹ_i = x̃_i  (i = 1, 2, ···, n)   (4.1.1)

and

  ∑_{i=1}^n ã_ij x̃_j + Ñ_j = x̃_j  (j = 1, 2, ···, n).   (4.1.2)

If we write (4.1.1) and (4.1.2) in the form of fuzzy matrices and vectors, then

  ÃX̃ + Ỹ = X̃  and  C̃X̃ + Ñ = X̃,   (4.1.3)

i.e.,

  (Ĩ − Ã)X̃ = Ỹ   (4.1.4)

and

  (Ĩ − C̃)X̃ = Ñ,   (4.1.5)

where

  Ã = (ã_ij)_{n×n} = | ã_11  ã_12  ···  ã_1n |
                     | ã_21  ã_22  ···  ã_2n |
                     |  ···                  |
                     | ã_n1  ã_n2  ···  ã_nn |

is a fuzzy direct consumption coefficient matrix;

  C̃ = diag( ∑_{i=1}^n ã_i1, ∑_{i=1}^n ã_i2, ···, ∑_{i=1}^n ã_in )   (4.1.6)

is a fuzzy material consumption coefficient matrix, with off-diagonal entries 0̃;

  Ĩ = diag( Ẽ, Ẽ, ···, Ẽ )

is a fuzzy unit matrix, where Ẽ = (1, 0, 0) and 0̃ = (0, 0, 0); and

  X̃ = (x̃_1, x̃_2, ···, x̃_n)ᵀ,  Ỹ = (ỹ_1, ỹ_2, ···, ỹ_n)ᵀ,  Ñ = (Ñ_1, Ñ_2, ···, Ñ_n)ᵀ.   (4.1.7)

A fuzzy complete consumption coefficient matrix B̃ = (b̃_ij)_{n×n} is given by

  Ã + B̃Ã = B̃.
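In the crisp case the identity Ã + B̃Ã = B̃ resolves to B = A(I − A)⁻¹, which a quick numerical check can confirm; the 2×2 matrix below is an illustrative assumption, not data from the text.

```python
import numpy as np

# Complete consumption from direct consumption: B satisfies A + BA = B,
# hence B = A (I - A)^{-1} (crisp analogue of the fuzzy identity above).
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])
B = A @ np.linalg.inv(np.eye(2) - A)

assert np.allclose(A + B @ A, B)   # the defining identity holds
print(B)
```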

Definition 4.1.1. We call (4.1.4) and (4.1.5) a fuzzy Leontief input-output mathematical model.

Note 4.1.1. From the theory of the next section, it matters little whether the result of dividing two fuzzy numbers is a fuzzy one, because ã_ij = x̃_ij / x̃_j can be turned into an ordinary parameter under the cone index J. Therefore, we introduce two kinds of determinacy modeling methods in the following.
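Once a cone index reduces the T-fuzzy entries to crisp numbers (Note 4.1.1), (4.1.4) becomes an ordinary Leontief system; a minimal sketch with an assumed 2×2 coefficient matrix, not data from the text:

```python
import numpy as np

# Solve (I - A) X = Y for total output X given final demand Y.
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])          # direct consumption coefficients (assumed)
Y = np.array([100.0, 200.0])        # final demand (assumed)

X = np.linalg.solve(np.eye(2) - A, Y)
assert np.allclose(A @ X + Y, X)    # balance: intermediate use + final = total
print(X)
```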

4.2 Input-Output Model with T-Fuzzy Data

4.2.1 Introduction

The author aims at extending the determinate classical input-output model to the case with T-fuzzy data, based on the properties of T-fuzzy numbers. He also shows the effectiveness of the input-output model with T-fuzzy data in theory, discusses a nonfuzzified problem of the Leontief synthesized model with T-fuzzy data, and comes up with a new solution to the input-output model with T-fuzzy data under a cone index [Cao93b].

4.2.2 Models and Their Properties

A. Fuzzy Leontief model and its effectiveness

Definition 4.2.1. Suppose Models (4.1.4) and (4.1.5) are given with T-fuzzy data; we call them a Leontief input-output mathematical model with T-fuzzy data.

Definition 4.2.2. For a given cone index J, the matrix Ã(J), or the fuzzy Leontief model corresponding to it, is J-effective if for any final demand vector Ỹ(J) ≥ 0 there exists a solution vector X̃(J) ≥ 0 in (4.1.4).

Definition 4.2.3. For a given cone index J, the n-th order square matrix Ã(J) is called J-separable if, through interchanges of rows and columns, it can be moved into the form

  Ã(J) = | Ã_1(J)   0      |
         | Ã_2(J)   Ã_3(J) |,

where Ã_1(J) and Ã_3(J) are k-th and (n−k)-th order square matrices (k < n), respectively; otherwise Ã(J) is called a J-nonseparable matrix, and λ_{Ã(J)} is defined as a characteristic value of the nonseparable, non-negative square matrix Ã(J).

Theorem 4.2.1. Let the Leontief model (4.1.4) be given through T-fuzzy data. For a given cone index J, the model is J-effective only if λ_{Ã(J)} < 1.

Proof: Let the T-fuzzy data tallying with Model (4.1.4) be (x̃_1j, x̃_1), ···, (x̃_nj, x̃_n). For a given cone index J, it is not difficult to prove that ã_ij, by the T-fuzzy data determinacy method, is determined as

  ã_ij(J) = x̃_ij(J)/x̃_j(J) = z̃_ij/z̃_j = ã_ij,

such that Ã(J) = (ã_ij(J)) = (ã_ij) (i, j = 1, ···, n). Then each Ỹ_i is determined respectively. Hence the fuzzy Leontief model is determined to be the distinct one

  (I − Ã(J))W(J) = Ỹ(J)   (4.2.1)

for the data U_i, z_1i, ···, z_ni (i = 1, ···, 3N), I indicating a unit matrix depending on the cone index J. From the theorem in [Lin85, Chapter 3, Section 3], (4.2.1) is effective only if λ_{Ã(J)} < 1, such that (4.1.4) is J-effective.

Corollary 4.2.1. Under the assumptions of Theorem 4.2.1, if some final demand Ỹ(J) > 0 exists such that (4.2.1) has a non-negative solution, we call Model (4.1.4) J-effective.

We can prove this corollary in a similar way to Theorem 4.2.1 and the Corollary in [Lin85, Chapter 3, Section 3].

When the total amount of material resources is less than or equal to the final demand amount and the total amount L̃ of labor resources is finite, the fuzzy Leontief model (Ĩ − Ã)X̃ = Ỹ has the fuzzy constraint

  (Ĩ, X̃) ≤ L̃,

where (Ĩ, X̃) indicates the inner product of the fuzzy vectors Ĩ and X̃. If M̃ = αX̃ indicates an objective and we take max{M̃ = αX̃}, the problem above can be changed into a fuzzy input-output optimization model built as follows:

  max {M̃ = αX̃}
  s.t. (Ĩ − Ã)X̃ ≥ Ỹ,   (4.2.2)
       X̃ ≤ L̃, X̃ ≥ 0,

where α indicates the tax and the profit of a unit product value.

Theorem 4.2.2. Let (4.2.2) be given entirely by T-fuzzy data. Then for a given cone index J, when Ã is J-effective, an effective solution X̃(J) and α(J) exist in the corresponding model.

Proof: In a similar way to Theorem 4.2.1, we can prove that (4.2.2) can be turned into a determinate linear programming under the given cone index J:

  max M̃(J) = αW(J)   (4.2.3)
  s.t. (Ĩ − Ã(J))W(J) ≥ Ỹ(J),
       W(J) ≤ L̃(J),   (4.2.4)
       W(J) ≥ 0.

From (4.2.3) we know the feasible solution set of W(J) is bounded, and Ỹ(J), the final demand vector, is a determined bounded one; so is the problem's feasible region. At the same time, as Ã is J-effective, Ã(J) is effective, so there exists a vector W(J) ≥ 0 such that

  (Ĩ − Ã(J))W(J) ≥ Ỹ(J),

and since L̃(J) > 0, a λ > 0 must exist such that (λW(J), l) < L̃(J). Take α_1 = λ, w_1(J) = λW(J); then w_1(J), α_1 is a group of feasible solutions to (4.2.3) and (4.2.4).
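The de-fuzzified program (4.2.3)-(4.2.4) is an ordinary linear program. As a minimal sketch under assumed numbers (not from the text): since α ≥ 0 the objective is monotone in W, so whenever the resource bound L itself satisfies the demand constraint, W = L is optimal over the box 0 ≤ W ≤ L.

```python
import numpy as np

# max alpha.W  s.t. (I - A) W >= Y, 0 <= W <= L -- all data assumed.
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])
Y = np.array([100.0, 200.0])
L = np.array([400.0, 500.0])
alpha = np.array([0.12, 0.10])      # per-unit profit-and-tax rates (assumed)

# Check that the bound L itself satisfies the demand constraint; then,
# because the objective is monotone increasing in W, W = L is optimal.
assert np.all((np.eye(2) - A) @ L >= Y)
W = L
value = alpha @ W
print(W, value)
```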

Corollary 4.2.2. Under the conditions of Theorem 4.2.2, the dual problem of (4.2.2) is

  min { Ỹ P̃ + L̃ Q̃ }
  s.t. (Ĩ − Ã)ᵀ P̃ + Q̃ ≥ α,   (4.2.5)
       P̃, Q̃ ≥ 0,

and there exists a finite solution P̃, Q̃.

Proof: Because the given cone index is the same as the determined one in Theorem 4.2.2, under this cone index J we can change (4.2.5) into the determinate model

  min { Ỹ(J)U + L̃(J)V }
  s.t. (Ĩ − Ã(J))ᵀ U + V ≥ α,   (4.2.6)
       U, V ≥ 0.

Obviously, (4.2.6) is the dual programming of (4.2.3) and (4.2.4), such that (4.2.5) is a fuzzy dual of (4.2.2). Again, from the duality theory of linear programming, (4.2.5) has a finite solution P̃, Q̃.

The solution steps of Model (4.2.2) are as follows.
(1) For a given cone index J, (4.2.3) and (4.2.4) result from nonfuzzifying (4.2.2).
(2) Obtain the optimal solution vectors W(J), Ỹ(J) for (4.2.3) and (4.2.4).
(3) Take the maximum value of L̃(J).
(4) Solve a group of solutions w_1*(J) for the different Ỹ_i(J) in Programming (4.2.3) and (4.2.4), respectively.
(5) Compare. Let w_i*(J) ≤ L̃(J) and introduce the maximum solution vector into the objective. Then we obtain M̃^(j) = (M^(j), m^(j), m̄^(j)).
(6) Determine. Let D_j = (4/N) d(M̃^(j), 0̃) (j = 1, ···, N). Then D* = max{D_j | j ∈ N} is what we want to obtain, where N denotes the number of schemes.

B. Fuzzy Leontief synthesized model and the properties of its solution

Definition 4.2.4. If n kinds of products and m kinds (m > n) of technology modes exist in the system, and the total-product fuzzy vector produced under the m technology modes is X̃ = (x̃_1, x̃_2, ···, x̃_m)ᵀ, then

  Ẫ = | ã_11  ã_12  ···  ã_1n |
       | ã_21  ã_22  ···  ã_2n |
       |  ···                  |
       | ã_n1  ã_n2  ···  ã_nn |

and

  Ẫ_σ = | ã_{i_1 1}  ã_{i_1 2}  ···  ã_{i_1 n} |
         | ã_{i_2 1}  ã_{i_2 2}  ···  ã_{i_2 n} |
         |  ···                                 |
         | ã_{i_n 1}  ã_{i_n 2}  ···  ã_{i_n n} |

are defined respectively as a fuzzy technology matrix and as the fuzzy constant technology matrix of an arbitrary fixed technology mode i_k ∈ M_k. We call the m×n matrices

  Î̃ = (ẽ_ij)_{m×n}  and  I = (e_ij)_{m×n},

where ẽ_ij = Ẽ if i ∈ M_j and ẽ_ij = 0̃ otherwise, and e_ij = 1 if i ∈ M_j and e_ij = 0 otherwise, replacement matrices, M_j denoting the set of technology modes by which the j-th kind of product can be made.

Definition 4.2.5. For any Ỹ ≥ 0,

  Î̃X̃ − ẪX̃ = Ỹ   (4.2.7)

is a fuzzy Leontief synthesized model.

Theorem 4.2.3. Let the set of n kinds of technology modes be σ = (i_1, i_2, ···, i_n). If, for a fixed cone index J, the n-th order square matrix Ẫ_σ(J) corresponding to every σ is J-effective, then the fuzzy Leontief model (4.2.7) is effective. The proof is omitted here.

Suppose that Ã* = Î̃ − Ẫ and that the resource consumption vector is l̃ = (l̃_1, l̃_2, ···, l̃_n)ᵀ; then, for a final demand vector Ỹ ≥ 0, we consider the fuzzy linear programming

  min l̃X̃
  s.t. Ã*X̃ ≥ Ỹ,   (4.2.8)
       X̃ ≥ 0

and its dual form

  max P̃Ỹ
  s.t. P̃Ã* ≤ l̃,   (4.2.9)
       P̃ ≥ 0

as well.

Lemma 4.2.1. For a given cone index J, if (4.2.7) is J-effective for a certain fuzzy final-demand vector Ỹ_0 > 0, there is a basic solution to (4.2.8) when Ỹ = Ỹ_0.

Proof: Under the fixed cone index J, (4.2.7) is changed into

  (I − Ẫ(J))Ŵ = Ỹ(J),   (4.2.10)

which is a determinate Leontief synthesized model; (4.2.7) being effective at Ỹ_0 > 0 is equivalent to (4.2.10) being effective at Ỹ_0(J) > 0. From Lemma 1 in [Lin85, Chapter 3, Section 5], we know that

  min l̃(J)Ŵ
  s.t. Ã(J)Ŵ ≥ Ỹ(J),   (4.2.11)
       Ŵ ≥ 0

has a basic solution at Ỹ(J) = Ỹ_0(J), such that the lemma holds.

Lemma 4.2.2. For a given cone index J, when Ỹ = Ỹ_0 in (4.2.8), the components of the basic solution X̃ tally with:
(1) there exists σ = {i_1, i_2, ···, i_n} such that x̃_{i_k} > 0, i_k ∈ M_k (k = 1, 2, ···, n);
(2) x̃_i = 0̃, i ∉ σ.

Proof: Under the fixed cone index, Ã*(J)Ŵ = IŴ − Ẫ(J)Ŵ and Ỹ_0(J) exist in (4.2.8); applying Lemma 2 in [Lin85, Chapter 3, Section 5] at Ỹ(J) = Ỹ_0(J), the components of the basic solution Ŵ tally with:
(1) there exists σ such that ŵ_{i_k} > 0, i_k ∈ M_k (k = 1, 2, ···, n);
(2) ŵ_i = 0, i ∉ σ.
Therefore the lemma holds under the fixed cone index J.

Lemma 4.2.3. Under the conditions of Lemma 4.2.1, for any Ỹ ≥ 0, (4.2.8) has a solution vector X̃ whose components tally with: x̃_{i_k} > 0, i_k ∈ M_k (k = 1, 2, ···, n); x̃_i = 0̃, i ∉ σ, σ = {i_1, i_2, ···, i_n}.

Proof: In a similar way to Lemma 4.2.2, applying Lemma 3 in [Lin85, Chapter 3, Section 5] under the fixed cone index J, there exist X̃ ⟺ Ŵ such that

  x̃_{i_k} > 0 ⟺ ŵ_{i_k} > 0, i_k ∈ M_k (k = 1, 2, ···, n);
  x̃_i = 0̃ ⟺ ŵ_i = 0, i ∉ σ.

Symbolically, Ŵ is a solution to (4.2.11), which is equivalent to X̃ being a solution to (4.2.8).

Theorem 4.2.4. Under the conditions of Lemma 4.2.1, a submatrix Ã_σ exists in the matrix Ã such that, for any Ỹ ≥ 0, (4.2.8) has a solution vector X̃ = (x̃_1, x̃_2, ···, x̃_n)ᵀ tallying with x̃_i = 0 when i ∉ σ and x̃_i > 0 when i ∈ σ, σ = {i_1, i_2, ···, i_n}, and the choice of the set σ bears no relation to the demand Ỹ.

Proof: We can prove the theorem by using Lemmas 4.2.1-4.2.3.

Theorem 4.2.5. Let σ = {i_1, i_2, ···, i_n} and σ′ = {i′_1, i′_2, ···, i′_n}. If, for the same cone index J, the submodel Ẫ_σ corresponding to σ is J-effective with l̃*_σ = (Î̃ − Ẫ_σ)l̃_σ, then l̃*_σ ≤ l̃*_σ′, where l̃*_σ′ = (l̃_{i′_1}, l̃_{i′_2}, ···, l̃_{i′_n})ᵀ indicates the resource-complete-consumption vector corresponding to the set σ′, and i_k ∈ M_k (k = 1, 2, ···, n).

Proof: Observe the programmings as follows.

  min l̃_σ X̃_σ
  s.t. (Î̃ − Ẫ_σ)X̃_σ ≥ Ỹ_0,   (4.2.12)
       X̃_σ ≥ 0

and

  min l̃_σ′ X̃_σ′
  s.t. (Î̃ − Ẫ_σ′)X̃_σ′ ≥ Ỹ_0,   (4.2.13)
       X̃_σ′ ≥ 0.

The dual form of (4.2.12) is

  max P̃ Ỹ_0
  s.t. P̃(Î̃ − Ẫ_σ) ≤ l̃_σ,   (4.2.14)
       P̃ ≥ 0.

Similarly, we can obtain the dual programming of (4.2.13). Under a given cone index J, (4.2.12)-(4.2.14) are equivalent to the following:

  min l̃_σ(J)Ŵ_σ
  s.t. (Î̃ − Ẫ_σ(J))Ŵ_σ ≥ Ỹ_0(J),
       Ŵ_σ ≥ 0,

  min l̃_σ′(J)Ŵ_σ′
  s.t. (Î̃ − Ẫ_σ′(J))Ŵ_σ′ ≥ Ỹ_0(J),
       Ŵ_σ′ ≥ 0

and

  max U Ỹ_0(J)
  s.t. U(Î̃ − Ẫ(J)) ≤ l̃_σ,
       U ≥ 0.

Therefore, for the same σ, σ′ and the same cone index J, Ẫ_σ(J) is effective, and from Theorem 2 in [Lin85, Chapter 3, Section 5] we have l̃*_σ(J) ≤ l̃*_σ′(J). So the theorem holds.

4.2.3 Numerical Example

We give the input-output value table of agriculture, light and heavy industry in our national economy in 1978 in Table 4.2.1; each entry is a T-fuzzy number (center, left spread, right spread).

Table 4.2.1. Input-output Value Table

Middle products and new value:
                Agriculture     Light ind.      Heavy ind.      Amount
Agriculture     (80,5,3)        (200,10,2)      (300,2,3)       (580,17,8)
Light ind.      (120,7,10)      (100,7,6)       (400,4,9)       (620,18,25)
Heavy ind.      (200,2,15)      (400,1,4)       (600,15,7)      (1200,18,26)
Amount          (400,14,28)     (700,18,12)     (1300,21,19)    (2400,53,59)
Pay             (200,16,1)      (380,7,13)      (500,9,10)      (1080,28,26)
Profit          (100,7,0)       (120,10,15)     (200,3,7)       (420,24,20)
Value           (700,37,29)     (1200,35,40)    (2000,33,36)    (3900,105,105)

Final products and total value:
                Final product   Total value
Agriculture     (120,20,21)     (700,37,29)
Light ind.      (580,17,15)     (1200,35,40)
Heavy ind.      (800,15,10)     (2000,33,36)
Amount          (1500,52,46)    (3900,105,105)

The proportion of the three departments in the national economy is 17.95%, 30.77% and 51.28%, respectively. Suppose that the structure is not reasonable and needs an adjustment, and that there are three schemes for it. During the 3-year adjustment, the average annual growth rates of the final products for the three parts are (I) 6%, 10%, 4%; (II) 5%, 15%, 2%; and (III) 10%, 20%, 0%,

respectively. The greatest quota of average annual growth rate in product value is 6% for agriculture, 15% for light industry and 5% for heavy industry. The goal is to maximize profit and tax. Again, we know the profit and tax created per 100 million of product value in the three departments are (0.1429,0.02,0.05), (0.10,0.005,0.03) and (0.1,0.01,0.04) (unit: 100 million), respectively. Considering all this comprehensively, we try to determine an optimized scheme of economic structure during the 3-year adjustment.

Solution: (1) From the problem above, we may build the fuzzy input-output optimization model

  max {(0.1429, 0.02, 0.005)x̃_1 + (0.1, 0.005, 0.03)x̃_2 + (0.1, 0.01, 0.04)x̃_3}
  s.t. (Ĩ − Ã)X̃ ≥ Ỹ_j^(3) (j = 1, 2, 3),   (4.2.15)
       X̃ ≤ L̃, X̃ ≥ 0.

(2) For the fixed cone index J, by the partition method advanced in the proof of Theorem 4.2.1, we change Table 4.2.1 into Table 4.2.2.

Table 4.2.2. Input-output Crisp Value Table

                  Agri    Light ind   Heavy ind   Amount    Final product   Total value
Agriculture        80      202          303         585       148              733
Light ind.        113       93          400         606       634             1240
Heavy ind.        215      402.5        585        1202.5     833.5           2036
Amount            408      679.5       1288        2393.5    1615.5           4009
New value total   325      560.5        748
Total value       733     1240         2036        4009

where Ỹ_i(J) is determined from w_i − ∑_{j=1}^n w_ij, and Ñ_j(J) from w_j − ∑_{i=1}^n w_ij.

(3) Obtain the final product vectors of the three departments in the forecasting period:

Scheme I:   Ỹ_1^(3) = ( 148(1+0.06)³, 634(1+0.1)³, 833.5(1+0.04)³ )ᵀ ≈ ( 176.27, 843.85, 937.57 )ᵀ;
Scheme II:  Ỹ_2^(3) = ( 148(1+0.05)³, 634(1+0.15)³, 833.5(1+0.02)³ )ᵀ ≈ ( 171.33, 964.23, 884.52 )ᵀ;
Scheme III: Ỹ_3^(3) = ( 148(1+0.1)³, 634(1+0.2)³, 833.5(1+0)³ )ᵀ ≈ ( 196.99, 1095.55, 833.5 )ᵀ;

and

  Max L = ( 733(1+0.06)³, 1240(1+0.15)³, 2036(1+0.05)³ )ᵀ ≈ ( 873.01, 1885.89, 2356.92 )ᵀ.

(4) Extract the solution and compare:

  Ã(J) = | 0.109  0.163  0.149 |
         | 0.154  0.075  0.196 |
         | 0.293  0.325  0.287 |,

  (I − Ã(J))⁻¹ = | 0.891  −0.163  −0.149 |⁻¹      | 1.307  0.361  0.372 |
                 | −0.154  0.925  −0.196 |     ≈  | 0.367  1.298  0.434 |.
                 | −0.293 −0.325   0.713 |        | 0.704  0.740  1.753 |

Then for Scheme I we have

  W_1 = (I − Ã(J))⁻¹ Ỹ_1^(3)(J) ≈ ( 884.63, 1566.91, 2392.10 )ᵀ;

comparing it with Max L, we should take x_1 = 884.63, x_2 = 1885.89, x_3 = 2392.10, and M̃^(1) ≈ (554.21, 51.04, 196.49).

For Scheme II we have

  W_2 = (I − Ã(J))⁻¹ Ỹ_2^(3)(J) ≈ ( 902.02, 1698.33, 2390.88 )ᵀ;

comparing it with Max L, we should take x_1 = 902.02, x_2 = 1885.89, x_3 = 2390.88, and M̃^(2) ≈ (556.58, 51.38, 197.31).

For Scheme III we have

  W_3 = (I − Ã(J))⁻¹ Ỹ_3^(3)(J) ≈ ( 964.12, 1856.06, 2410.51 )ᵀ;

comparing it with Max L, we should take x_1 = 964.12, x_2 = 1885.89, x_3 = 2410.51, and M̃^(3) ≈ (567.41, 52.82, 201.20).

(5) Decision: with D_j = (4/3) d(M̃^(j), 0̃) (j = 1, 2, 3), we have D_1 ≈ 612.067, D_2 ≈ 614.643, D_3 ≈ 626.503; obviously D_1 < D_2 < D_3. Considering only the largest profit and tax, we know Scheme III is the best one. But it does not meet the demands of objective practice, as the average annual growth rate of final production in heavy industry is zero. Though the profit and tax of Scheme II are lower than those of Scheme III, its development speeds are proper for the three departments. So, in view of an optimized economic structure, Scheme II is the most satisfactory.
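Step (4) can be reproduced numerically from the crisp Ã(J) and Scheme I's forecast demand given above; a sketch (the result agrees with the text's W_1 up to the rounding of the printed figures):

```python
import numpy as np

# Crisp direct consumption matrix A(J) and Scheme I final demand, as above.
A = np.array([[0.109, 0.163, 0.149],
              [0.154, 0.075, 0.196],
              [0.293, 0.325, 0.287]])
Y1 = np.array([176.27, 843.85, 937.57])

W1 = np.linalg.solve(np.eye(3) - A, Y1)   # total outputs for Scheme I
print(W1)
```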

4.3 Input-Output Model with Triangular Fuzzy Data

Because the direct depletion coefficient ã_ij is a membership function between the departments, for the sake of easy calculation the triangular distribution is adopted, which does not lose the generality of the membership function. Hence, we discuss the input-output model with triangular fuzzy data in this section.

4.3.1 Definitions and Properties

Definition 4.3.1. A fuzzy set on the real axis R is called a fuzzy number, written as

  Ã = ∫ μ_Ã(x)/x, x ∈ R, or Ã ⇔ μ_Ã(x) ∈ [0, 1],

where μ_Ã(x) is the membership function of Ã.

Definition 4.3.2. If for all x, y, z ∈ R with x ≤ y ≤ z there must hold

  μ_Ã(y) ≥ μ_Ã(x) ∧ μ_Ã(z),

we call Ã a convex fuzzy number. If max_{x∈R} μ_Ã(x) = 1, we call Ã a normal fuzzy set.

According to the definitions mentioned above, it is easy to prove the next two theorems.

Theorem 4.3.1. Ã is a convex fuzzy number ⇔ for every α (0 < α ≤ 1), A_α is an interval, written A_α = [α_L(a), α_R(a)], where α_L(a) and α_R(a) represent the left and right endpoints of A_α, respectively; according to whether the endpoints are included in A_α, the interval can be closed or open.

Theorem 4.3.2. Let Ã be a convex fuzzy number and suppose α_1, α_2 satisfy 0 ≤ α_2 ≤ α_1 < 1; then α_L(a_1) ≥ α_L(a_2) and α_R(a_1) ≤ α_R(a_2), and α_L(a), α_R(a) are both monotone functions, α_L(a) non-decreasing and α_R(a) non-increasing.

Definition 4.3.3. Suppose A_0 = (a_L, a_R) is the platform of the convex fuzzy number Ã. If a_L > 0, we call Ã a positive fuzzy number; if a_R < 0, a negative fuzzy number; if a_L < 0 < a_R, a zero fuzzy number.

Extension principle. Suppose Ã, B̃ are fuzzy numbers and the mapping f : R × R → R is f(x, y) = x ∗ y, where "∗" is a binary operation. The operation "∗" is extended to fuzzy numbers by stipulating

  f(Ã, B̃) = Ã ∗ B̃ = ∫_{x∗y=z} (μ_Ã(x) ∧ μ_B̃(y)) / (x ∗ y) = ∫_{x∗y=z} min[μ_Ã(x), μ_B̃(y)] / (x ∗ y),

where f(x, y) = z = x ∗ y; its membership function can be denoted as

  (Ã ∗ B̃)(z) = ⋁_{x∗y=z} (μ_Ã(x) ∧ μ_B̃(y)) = sup_{(x,y) ∈ f⁻¹(z)} min{μ_Ã(x), μ_B̃(y)}.

When the above-mentioned operation "∗" denotes "+" or "·", we obtain the sum and product operations on fuzzy numbers below.

Sum of fuzzy numbers
Suppose Ã, B̃ are two fuzzy numbers; then the sum of Ã, B̃ is defined as

  (Ã + B̃)(z) = ⋁_{z=x+y} (μ_Ã(x) ∧ μ_B̃(y)), ∀z ∈ R.

Making use of the α-cut sets A_α and B_α, this can be rewritten in another form: if A_α = [m_α, n_α] and B_α = [p_α, q_α], then

  Ã = ⋃_α α A_α = ⋃_α α[m_α, n_α],  B̃ = ⋃_α α B_α = ⋃_α α[p_α, q_α];

  Ã + B̃ = ⋃_α α[m_α, n_α] + ⋃_α α[p_α, q_α] = ⋃_α α([m_α, n_α] + [p_α, q_α]).

Because every number m_α ≤ x ≤ n_α must be added to every number p_α ≤ y ≤ q_α in the bracket at the right end of the last line, the resulting interval is [m_α + p_α, n_α + q_α]; therefore

  Ã + B̃ = ⋃_α α[m_α + p_α, n_α + q_α].
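The endpoint rule above, together with the analogous product rule for positive fuzzy numbers given next, reduces α-cut arithmetic to ordinary arithmetic on interval endpoints; a small sketch for triangular fuzzy numbers (x_0; x_L, x_R), with illustrative values:

```python
def alpha_cut(x0, xL, xR, a):
    """Interval [m_a, n_a] of a triangular fuzzy number at level a in (0, 1]."""
    return (xL + a * (x0 - xL), xR - a * (xR - x0))

def cut_add(u, v):
    """Sum of two alpha-cuts: add left and right endpoints."""
    return (u[0] + v[0], u[1] + v[1])

def cut_mul(u, v):
    """Product of two alpha-cuts (valid for positive fuzzy numbers only)."""
    return (u[0] * v[0], u[1] * v[1])

A_half = alpha_cut(3.0, 2.0, 5.0, 0.5)   # -> (2.5, 4.0)
B_half = alpha_cut(1.0, 0.5, 2.0, 0.5)   # -> (0.75, 1.5)
print(cut_add(A_half, B_half))           # (3.25, 5.5)
print(cut_mul(A_half, B_half))           # (1.875, 6.0)
```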

Product of fuzzy numbers
Suppose Ã, B̃ are two fuzzy numbers; then the product of Ã, B̃ is defined as

  (Ã · B̃)(z) = ⋁_{z = x·y, x,y ≥ 0} (μ_Ã(x) ∧ μ_B̃(y)), ∀z ∈ R⁺.

Making use of the α-cut sets A_α, B_α, the formula above can be written as

  Ã · B̃ = ⋃_α α[m_α · p_α, n_α · q_α].

It is not difficult to prove the following theorem.

Theorem 4.3.3. The sum and product of positive convex fuzzy numbers are positive convex fuzzy numbers, respectively, and they satisfy the commutative, associative and distributive laws.

4.3.2 Model

Suppose the fuzzy number ã_ij has a triangular membership function; ã_ij is represented by the three numerals (x_0; x_L, x_R), where x_0 is determined by the formula

  max_{x_0 ∈ R} ã_ij(x_0) = 1,

while (x_L, x_R) is the platform of the fuzzy number ã_ij, i.e., (ã_ij)_0 = (x_L, x_R). Obviously, this kind of fuzzy number is convex. If the membership function of a fuzzy number Ã is a positive convex one, then an interval A_α = [a_αL, a_αR] corresponds to each α-level; conversely, finding the cuts corresponding to the different levels α, we can recover the corresponding membership function. Hence (4.1.1), (4.1.2) can be changed into the ordinary equations

  ∑_{j=1}^n (ã_ij)_α (x̃_j)_α + (ỹ_i)_α = (x̃_i)_α

and

  ∑_{i=1}^n (ã_ij)_α (x̃_j)_α + (Ñ_j)_α = (x̃_j)_α,

where (x̃_j)_α and (ỹ_j)_α represent the α-level sets of the fuzzy numbers corresponding to the fuzzy variables X̃ and Ỹ. Under the assumption that each fuzzy number is positively convex, these α-level sets are all intervals, so the equations above can be stated in interval form for the coefficients and variables. By the definition of fuzzy-number operations, interval arithmetic can be carried out between the cut sets of convex fuzzy numbers; for positive convex fuzzy sets, addition and multiplication of intervals add and multiply the real numbers at the left and right endpoints of the intervals. Then the equations above can be written down again as

  [Ã]_α[X̃]_α + [Ỹ]_α = [X̃]_α


and

  [C̃]_α[X̃]_α + [Ñ]_α = [X̃]_α,

where Ã = [ã_ij] and [ã_ij]_α is an interval [a_ijL, a_ijR]; X̃ = (x̃_1, x̃_2, ···, x̃_n)ᵀ is a vector and [X̃]_α the interval [x_iL, x_iR]_α; Ỹ = (ỹ_1, ỹ_2, ···, ỹ_n)ᵀ is a vector and [Ỹ]_α the interval [y_iL, y_iR]_α. Every equation below is operated at some α-level set, the subscript α no longer being noted. By the interval operation rules in the definitions above, the system simplifies to the form (4.1.3) and, in actual computation, reduces to solving two matrix equations at the left and right endpoints respectively, i.e.,

  A_L X_L + Y_L = X_L,  A_R X_R + Y_R = X_R,

and

  C_L X_L + N_L = X_L,  C_R X_R + N_R = X_R,

where A_L, X_L, N_L and Y_L mean the matrix and vectors constituted by the left-end real numbers of the intervals, and A_R, X_R, N_R and Y_R those constituted by the right-end real numbers.

Given Ã, X̃, we can find Y_L, Y_R, and hence Ỹ, from

  Y_L = (I − A_L)X_L,  Y_R = (I − A_R)X_R;   (4.3.1)

or, given Ã, Ỹ, we can find X_L, X_R, and hence X̃, from

  X_L = (I − A_L)⁻¹Y_L,  X_R = (I − A_R)⁻¹Y_R,

i.e., Ỹ = (I − Ã)X̃, X̃ = (I − Ã)⁻¹Ỹ; similarly we can find X̃ and Ñ from

  Ñ = (I − C̃)X̃,  X̃ = (I − C̃)⁻¹Ñ,

where I is a unit matrix, X̃ and Ñ are as in (4.1.7), and X̃_j, Ñ_j represent fuzzy variables.

Similarly to proving the existence of the inverse matrix in the usual input-output model, the inverse matrices (I − Ã)⁻¹ and (I − C̃)⁻¹ exist. C̃ = (c̃_j), as in (4.1.6), is called a material consumption coefficient matrix; c̃_j = ∑_{i=1}^n ã_ij (j = 1, ···, n), the direct consumption (in value form) of the other departments' products per production unit of department j, is a fuzzy number.

If (4.3.1) is an input-output model with fuzzy quantities of triangular distribution, then for the different α-levels the equations are: at α = 0,

  [Y_L] = [I − A_L][X_L],  [Y_R] = [I − A_R][X_R],

and at α = 1,

  [Y_0] = [I − A_0][X_0].

Using the procedure of a computer program, we can very quickly calculate the following.
(1) The final product amounts Y_L, Y_R, Y_0 of each department.
(2) If we already know the final product amount of each department, we can solve the annual total product amounts X_L, X_R, X_0 of each department.

4.3.3 Numerical Example

Example 4.3.1. According to the input-output value table of fifty-six departments authorized by the Statistics Bureau of Shanxi Province, China, in May 1982, we give the direct depletion coefficients of each department as Table 4.3.1 shows.

Solution: Because

  [I − A_L] = | 1−0.1350   −0.1752    −0.0829    0          −0.0053   |
              | −0.1560    1−0.4349   −0.5873    −0.3250    −0.15310  |
              | 0          0          1−0        0          0         |
              | −0.00703   −0.0076    −0.0432    1−0.0149   −0.0950   |
              | −0.00199   −0.005035  −0.01206   −0.009715  1−0.01054 |,

  [I − A_R] = | 1−0.1492   −0.1936    −0.0917    0          −0.0059   |
              | −0.1724    1−0.4807   −0.6491    −0.3592    −0.1691   |
              | 0          0          1−0        0          0         |
              | −0.0077    −0.0084    −0.0478    1−0.0165   −0.1050   |
              | −0.00221   −0.005565  −0.013335  −0.01128   1−0.01165 |,

Table 4.3.1. Direct Depletion Coefficient in Each Department
(each entry is a triangular fuzzy number: central value, then left and right endpoints)

                        Agriculture                  Industry                      Building indu.
Agriculture             (0.1421; 0.1350, 0.1492)     (0.1844; 0.1752, 0.1936)      (0.0873; 0.0829, 0.0917)
Industry                (0.1642; 0.1562, 0.1724)     (0.4578; 0.4349, 0.4807)      (0.6182; 0.5873, 0.6491)
Building indu.          (0; 0, 0)                    (0; 0, 0)                     (0; 0, 0)
Transport & post elect. (0.0074; 0.00703, 0.00777)   (0.0080; 0.0076, 0.0084)      (0.0455; 0.0432, 0.0478)
Business                (0.0021; 0.00199, 0.00221)   (0.0053; 0.005035, 0.005565)  (0.0127; 0.012065, 0.013335)
Total                   (0.3158; 0.30002, 0.33158)   (0.6555; 0.622735, 0.688215)  (0.7637; 0.72546, 0.80192)
Wages, labor guerdon    (0.5474; 0.52003, 0.57477)   (0.0867; 0.08236, 0.09103)    (0.1636; 0.15542, 0.17178)
Net profit              (0.1368; 0.12996, 0.14364)   (0.2578; 0.24491, 0.27069)    (0.0727; 0.06906, 0.07634)

                        Transport and post elect.    Business
Agriculture             (0; 0, 0)                    (0.0056; 0.0053, 0.0059)
Industry                (0.3421; 0.3250, 0.3592)     (0.1611; 0.1531, 0.1691)
Building indu.          (0; 0, 0)                    (0; 0, 0)
Transport & post elect. (0.0157; 0.0149, 0.0165)     (0.1000; 0.0905, 0.1050)
Business                (0.0105; 0.009715, 0.011285) (0.0111; 0.01054, 0.01165)
Total                   (0.3683; 0.34987, 0.38670)   (0.2778; 0.26394, 0.29165)
Wages, labor guerdon    (0.2362; 0.25044, 0.27676)   (0.2222; 0.21109, 0.23331)
Net profit              (0.3684; 0.34998, 0.38682)   (0.5000; 0.4750, 0.5250)

and

  [I − A_0] = | 1−0.1421  −0.1844  −0.0873   0         −0.0056  |
              | −0.0642   1−0.4578 −0.6182   −0.3421   −0.1611  |
              | 0         0        1−0       0         0        |
              | −0.0074   −0.0084  −0.0455   1−0.0157  −0.1000  |
              | −0.0021   −0.0053  −0.0127   −0.0105   1−0.0111 |.
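With endpoint matrices such as [I − A_L] and [I − A_R] above, the interval model splits into two crisp systems solved independently; a minimal 2×2 sketch with assumed numbers, not the table's:

```python
import numpy as np

# Left- and right-endpoint Leontief systems bracketing the fuzzy output.
A_L = np.array([[0.15, 0.25], [0.10, 0.30]])   # assumed left endpoints
A_R = np.array([[0.20, 0.30], [0.15, 0.35]])   # assumed right endpoints
Y_L = np.array([90.0, 180.0])
Y_R = np.array([110.0, 220.0])

I = np.eye(2)
X_L = np.linalg.solve(I - A_L, Y_L)
X_R = np.linalg.solve(I - A_R, Y_R)
assert np.all(X_L <= X_R)   # the endpoints bracket the interval-valued output
print(X_L, X_R)
```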


Give the total amounts of each department (agriculture, industry, building industry, transport and post electricity (TPE), business) as follows:

  X̃ = (X̃_i) = ( (1900; 1805, 1995), (4500; 4275, 4725), (550; 522.5, 577.5), (190; 180.5, 199.5), (360; 342, 378) )ᵀ;

then, according to the procedure of the computer program, we can compute the final product amount of each department; or, by giving a final amount for each department, we compute the total amounts or total product value (TPV), which is shown in Table 4.3.2.

then according to the procedure of computer program, we can compute ﬁnal product amount in each department; or by giving a ﬁnal amount in each department, we compute the total amounts or total product value (TPV), which is shown in Table 4.3.2. Table 4.3.2 Fuzzy Input-output Table (unit: hundred million dollars) Product Input Source Material product dept

Agriculture x ˜1j Industry x ˜2j Building industry x ˜3j

TPE x ˜4j Business x ˜5j Total consump ˜ U Wages labor guerdon Newly V˜ Net. proﬁt made ˜ M Total value ˜ N Total value ˜ X Material

Product allotment direction Middle product Agriculture Industry Building industry x ˜i1 x ˜i2 x ˜i3 269.99 829.80 48.135 243.67 297.65 748.98 914.76 43.315 52.956 311.98 2060.10 340.01 281.58 343.94 1859.2 2271.3 306.86 374.85 0 0 0

0 0 0

0 0 0

14.06 12.689 15.501

36.00 32.49 39.69

25.025 22.572 27.604

3.99 3.592 4.409 600.02 541.53 661.5

23.85 21.52 26.29 2949.75 2662.19 3252.04

6.985 6.304 7.701 420.16 379.05 463.11

1040.06 1010.85 1066.88

390.15 405.88 370.68

89.98 99.31 79.2

259.92 252.62 266.62

1160.1 1206.9 1102.3

39.99 44.13 35.19

1299.98 1263.62 1333.5 1900 1805 1995

1550.25 1612.81 1472.95 4500 4275 4725

129.904 143.45 114.39 550 522.5 577.5


Table 4.3.2. (continued) Product Input Source Material

product

dept Material consump

Newly made value

Agriculture x ˜1j Industry x ˜2j Building industry x ˜3j TPE x ˜4j Business x ˜5j Total ˜ U Wages labor guerdon V˜ Net. proﬁt ˜ M Total ˜ N

Total value ˜ X Product Input Source Material Agriculture x ˜1j Industry product x ˜2j Building industry dept. x ˜3j TPE Material x ˜4j Business x ˜5j Total consump ˜ U

TPE x ˜i4 0 0 0 65.00 58.67 71.66

Product allotment direction Final product Total Business ˜ x ˜i5 E 2.016 1.817 2.230 57.996 52.36 63.91

1149.94 1037.782 269.6 2835.086 2558.67 3125.66

0 0 0

0 0 0

2.983 2.689 3.291

36.00 32.49 39.69

114.068 102.93 125.78

1.995 1.754 2.251 69.98 63.11 77.2

3.996 3.606 4.406 100.08 90.27 110.26

40.816 36.780 45.065 4139.91 3736.16 4564.11

50.01 48.95 51.03

79.99 77.45 82.38

1650.19 1642.4 1650.2

69.996 68.402 71.323

180 174.28 185.38

1710.2 1642.4 1660.8

120.004 117.349 122.353 190 180.5 199.5

259.92 251.73 267.76 360 342 378

3360.19 3388.8 3310.95 7500 7125 7875

Accumulate Z˜

0 0 0

Product allotment direction Final product Consume ˜ W

Totoal Y˜

TPV ˜ X

50.18 51.4044 48.7371 487.988 503.056 468.767

700.00 715.83 678.68 1176.93 1213.27 1130.57

750.179 767.23 727.42 1664.92 1716.33 1599.34

1900 1805 1995 4500 4275 4725

550 522.5 577.5

0 0 0

550 522.5 577.5

550 522.5 577.5

25.976 26.536 25.222 15.96 15.26 16.65 1130.10 1118.76 1136.87

49.956 51.033 48.505 303.22 289.96 316.29 2230.11 2270.09 2174.04

75.932 77.569 73.727 319.18 305.22 332.94 3360.22 3388.84 3310.93

190 180.5 199.5 360 342 378 7500 7125 7875

5 Fuzzy Cluster Analysis and Fuzzy Recognition

This chapter introduces fuzzy cluster analysis and fuzzy recognition. First, fuzzy cluster analysis with T-fuzzy data is developed; then fuzzy recognition with T-fuzzy data is presented.

5.1 Fuzzy Cluster Analysis

5.1.1 Fuzzy Cluster

5.1.1.1 Introduction

A mathematical method that classifies things according to certain conditions or characters is called cluster analysis. For fuzzy problems, if an equivalence relation can be built on U, then U can be divided into several equivalence classes, forming an equivalent fuzzy matrix. An equivalent Boolean matrix can then be obtained by choosing α ∈ [0, 1] according to different requirements, by which the elements of U are divided into equivalence classes; this is called fuzzy cluster analysis.

B.-Y. Cao: Optimal Models & Meth. with Fuzzy Quantities, STUDFUZZ 248, pp. 117-137. © Springer-Verlag Berlin Heidelberg 2010, springerlink.com

5.1.1.2 Model

Let U = {u_1, u_2, ···, u_n} be a set consisting of n objects to be clustered. Each classification object u_i is represented by a group of data x_i1, x_i2, ···, x_in. The modeling steps are as follows.

Step 1. Data standardization

The variables in the clustering process may differ in units and orders of magnitude. Even when some variable measures are similar, the absolute values of the variables differ in size. Calculating directly from the original data would make variables of great absolute value dominate and greatly reduce the role of those with small absolute values. Meanwhile, fuzzy operations require compressing the data into [0, 1], so the originally collected data should


be standardized. Several common methods are introduced below.

(1) Standard deviation standardization

Standardizing the i-th variable means changing $x_{ij}$ into $x'_{ij}$:

$$x'_{ij} = \frac{x_{ij} - \bar{x}_i}{S_i} \quad (1 \le j \le m), \tag{5.1.1}$$

where $x_{ij}$ is the actual measured value of the variable,

$$\bar{x}_i = \frac{1}{m}\sum_{j=1}^{m} x_{ij}$$

is the sample mean, and

$$S_i = \sqrt{\frac{1}{m-1}\sum_{j=1}^{m}\left(x_{ij} - \bar{x}_i\right)^2}$$

is the sample standard deviation.

(2) Pole (range) regularization and standardization

The range regularization form is

$$x'_{ij} = \frac{x_{ij} - \min_j\{x_{ij}\}}{\max_j\{x_{ij}\} - \min_j\{x_{ij}\}}, \tag{5.1.2}$$

and the range standardization form is

$$x'_{ij} = \frac{x_{ij} - \bar{x}_i}{\max_j\{x_{ij}\} - \min_j\{x_{ij}\}}, \tag{5.1.3}$$

where $x_{ij}$ is the actually measured value of a certain factor, and $\max_j\{x_{ij}\}$ (or $\min_j\{x_{ij}\}$) is the maximum (or minimum) measured value of the same factor.

Step 2. Mark settlement

The so-called mark settlement means calculating a similarity coefficient $r_{ij}$ scaling the classified objects, thereby determining a fuzzy similarity relation R̃ on the universe U. The commonly used methods are as follows.

(1) Correlation coefficient method

$$r_{ij} = \frac{\sum_{k=1}^{m}\left|x_{ik}-\bar{x}_i\right|\left|x_{jk}-\bar{x}_j\right|}{\sqrt{\sum_{k=1}^{m}\left(x_{ik}-\bar{x}_i\right)^2}\,\sqrt{\sum_{k=1}^{m}\left(x_{jk}-\bar{x}_j\right)^2}}, \tag{5.1.4}$$

where

$$\bar{x}_i = \frac{1}{m}\sum_{k=1}^{m} x_{ik}, \qquad \bar{x}_j = \frac{1}{m}\sum_{k=1}^{m} x_{jk} \quad (1 \le i \le n;\ 1 \le j \le m).$$
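The standardization schemes (5.1.1)–(5.1.2) and the correlation coefficient (5.1.4) can be sketched in Python. This is a minimal sketch; the function names are ours, not the book's:

```python
import math

def std_standardize(col):
    # (5.1.1): x' = (x - mean) / S, with sample standard deviation S
    m = len(col)
    mean = sum(col) / m
    s = math.sqrt(sum((x - mean) ** 2 for x in col) / (m - 1))
    return [(x - mean) / s for x in col]

def range_regularize(col):
    # (5.1.2): x' = (x - min) / (max - min), compressing into [0, 1]
    lo, hi = min(col), max(col)
    return [(x - lo) / (hi - lo) for x in col]

def correlation_coefficient(xi, xj):
    # (5.1.4): similarity coefficient r_ij built from absolute deviations
    mi, mj = sum(xi) / len(xi), sum(xj) / len(xj)
    num = sum(abs(a - mi) * abs(b - mj) for a, b in zip(xi, xj))
    den = (math.sqrt(sum((a - mi) ** 2 for a in xi))
           * math.sqrt(sum((b - mj) ** 2 for b in xj)))
    return num / den

print(std_standardize([1.0, 2.0, 3.0]))   # [-1.0, 0.0, 1.0]
print(range_regularize([1.0, 2.0, 3.0]))  # [0.0, 0.5, 1.0]
```

Identical columns give $r_{ij} = 1$, the largest similarity, as expected of a similarity coefficient.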


(2) Vector angle cosine method

Let $\alpha_r = (x_{r1}, x_{r2}, \cdots, x_{rn})$ be called the mathematical character vector of object $x_r$ $(r = i, j)$. Then take

$$r_{ij} = \left|\cos(\alpha_i, \alpha_j)\right| = \frac{\left|\sum_{k=1}^{m} x_{ik}x_{jk}\right|}{\sqrt{\sum_{k=1}^{m} x_{ik}^2}\,\sqrt{\sum_{k=1}^{m} x_{jk}^2}}. \tag{5.1.5}$$

(3) Maximum and minimum method

$$r_{ij} = \frac{\sum_{k=1}^{m}\min(x_{ik}, x_{jk})}{\sum_{k=1}^{m}\max(x_{ik}, x_{jk})} \quad (i, j \le n). \tag{5.1.6}$$
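The two remaining mark-settlement methods, (5.1.5) and (5.1.6), can be sketched likewise (a small sketch with our own function names; (5.1.6) assumes nonnegative data):

```python
import math

def cosine_similarity(xi, xj):
    # (5.1.5): |cos(alpha_i, alpha_j)| of the character vectors
    num = abs(sum(a * b for a, b in zip(xi, xj)))
    den = (math.sqrt(sum(a * a for a in xi))
           * math.sqrt(sum(b * b for b in xj)))
    return num / den

def min_max_similarity(xi, xj):
    # (5.1.6): sum of pairwise minima over sum of pairwise maxima
    return (sum(min(a, b) for a, b in zip(xi, xj))
            / sum(max(a, b) for a, b in zip(xi, xj)))

print(cosine_similarity([1.0, 2.0], [2.0, 4.0]))   # ~1.0 (parallel vectors)
print(min_max_similarity([1.0, 2.0], [2.0, 4.0]))  # 0.5
```

Parallel vectors score 1 under (5.1.5), while (5.1.6) also penalizes differences in magnitude.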

Step 3. Cluster

The transitive closure t(R) of a similarity matrix R is computed by the square method. Clustering directly with a similarity matrix is sometimes inaccurate, so the similarity matrix R must be transformed into an equivalent matrix; the transitive closure is exactly a fuzzy equivalent matrix.

Step 4. Draw a dynamic clustering figure of the equivalent matrix t(R).

5.1.2 Cluster Analysis Model with T-Fuzzy Data

5.1.2.1 Introduction

Cluster analysis classifies samples rationally by their own characteristics, without any model to follow. Cluster analysis must determine how close or distant the relations between samples are before clustering. This needs not only mathematical knowledge but also experience and professional knowledge. A classical method throws away much information. To overcome this weakness, Ruspini [Rus69] presented a fuzzy cluster approach. Thereafter, Gitman et al. developed a single-peak fuzzy sets method for clustering in 1970 [GL70]. This section obtains a satisfactory result by plugging T-fuzzy data [Cao89b][Dia87] into a classification model. It puts forward a shortcut method for computing a fuzzy transitive closure, and verifies the modeling steps and some meaningful results by examples.

5.1.2.2 Basic Property

Relevant definitions and properties of T-fuzzy data are given in Chapters 1 and 2.

Definition 5.1.1. Suppose that T(R) represents the whole T-fuzzy point sets defined on the ordinary set R, and $\tilde{x} = (x, \underline{\xi}, \overline{\xi})$, $\tilde{y} = (y, \underline{\eta}, \overline{\eta})$, $x, y \in R$; then


$$d(\tilde{x}, \tilde{y}) = \sqrt{\frac{\left(x - y - (\underline{\xi} - \underline{\eta})\right)^2 + \left(x - y + (\overline{\xi} - \overline{\eta})\right)^2 + (x - y)^2}{3}} \tag{5.1.7}$$

is a distance between x̃ and ỹ.

Definition 5.1.2. Suppose that $d_{ij}$ is the general measured value of various samples corresponding to (5.1.7); then define a standardized formula as

$$\tilde{t}_{ij} = \frac{d_{ij} - m}{M - m}, \tag{5.1.8}$$

where $M = \max\{d_{ij}\}$, $m_0 = \min\{d_{ij}\}$, $m \in [m_0, \bar{d}_{ij}]$, and $\bar{d}_{ij} = \frac{1}{n}\sum_{j=1}^{n} d_{ij}$ $(1 \le i \le n)$. By choosing the value of m in this way we avoid m ≡ 0 and allow a flexible, user-chosen description. The fuzzy matrix constructed with the $\tilde{t}_{ij}$ as elements is written $T = (\tilde{t}_{ij})$, with its dual matrix written $\bar{T} = (\tilde{r}_{ij}) = (1 - \tilde{t}_{ij})$.
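The distance (5.1.7) and the standardization (5.1.8) can be checked numerically. The sketch below assumes the reconstruction of (5.1.7) shown above, with a T-fuzzy point coded as a triple (center, lower spread, upper spread); the sample points are the x̃1, x̃2 of Example 5.1 later in this section:

```python
import math

def t_fuzzy_distance(xt, yt):
    # (5.1.7): distance between T-fuzzy points (x, lo, hi) and (y, lo, hi)
    (x, xlo, xhi), (y, ylo, yhi) = xt, yt
    return math.sqrt(((x - y - (xlo - ylo)) ** 2
                      + (x - y + (xhi - yhi)) ** 2
                      + (x - y) ** 2) / 3)

def standardize(d, big_m, small_m):
    # (5.1.8): t~ = (d - m) / (M - m)
    return (d - small_m) / (big_m - small_m)

d12 = t_fuzzy_distance((1, 0.5, 0.1), (2, 0.7, 0.3))
print(round(d12, 3))  # 1.013, the value listed later in Table 5.2
```

Reproducing the tabulated value 1.013 is a useful sanity check on the reconstructed formula.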

5.1.2.3 Algorithm

Suppose U = (u1, u2, ..., un) is an object set waiting for clustering, and x̃ = (x̃1, x̃2, ..., x̃n) is the T-fuzzy datum depicting object ui (i = 1, 2, ..., n). We use Formula (5.1.8) to convert the T-fuzzy data into a fuzzy matrix before clustering, and first prove a few useful results below.

Definition 5.1.3. If for all $(\tilde{t}_{ij}, \tilde{t}_{jk}), (\tilde{t}_{jk}, \tilde{t}_{kl}) \in U \times U$,

$$\bigwedge_{jk}\left(\mu_{\tilde{T}}(\tilde{t}_{ij}, \tilde{t}_{jk}) \vee \mu_{\tilde{T}}(\tilde{t}_{jk}, \tilde{t}_{kl})\right) \ge \mu_{\tilde{T}}(\tilde{t}_{ij}, \tilde{t}_{kl}) \tag{5.1.9}$$

or

$$\mu_{\tilde{T}}(\tilde{t}_{ij}, \tilde{t}_{kl}) \ge \bigvee_{jk}\left(\mu_{\tilde{T}}(\tilde{t}_{ij}, \tilde{t}_{jk}) \wedge \mu_{\tilde{T}}(\tilde{t}_{jk}, \tilde{t}_{kl})\right),$$

then T̃ is called a Min-Max (resp. Max-Min) transitive relation in U.

Theorem 5.1.1. For an arbitrarily given set of sample values of T-fuzzy data x̃ = (x̃1, x̃2, ..., x̃n), an anti-reflexive fuzzy matrix T can be constructed under the metric Formulas (5.1.7) and (5.1.8).

Proof: The T-fuzzy datum x̃ = (x̃1, x̃2, ..., x̃n) is passed through (5.1.7) and then (5.1.8) to obtain the fuzzy matrix T. It is easy to verify that the elements of T satisfy $\tilde{t}_{ij} = \tilde{t}_{ji}$ $(i \ne j)$ and $\tilde{t}_{ii} = 0$.

Definition 5.1.4. If a fuzzy relation or fuzzy matrix satisfies i) anti-reflexivity (resp. reflexivity); ii) symmetry; iii) Min-Max (resp. Max-Min) transitivity, then we call it a fuzzy distance (resp. equivalence) relation or matrix.
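Under the reconstruction of (5.1.9) assumed here, Min-Max transitivity of a matrix amounts to T ≤ T ∘ T elementwise under the Min-Max composition. A small check, with helper names of our own:

```python
def min_max_compose(A, B):
    # (A o B)_ij = min over k of max(A_ik, B_kj)
    n = len(A)
    return [[min(max(A[i][k], B[k][j]) for k in range(n))
             for j in range(n)] for i in range(n)]

def is_min_max_transitive(T):
    # Min-Max transitive: every entry of T is <= the matching entry of T o T
    T2 = min_max_compose(T, T)
    n = len(T)
    return all(T[i][j] <= T2[i][j] + 1e-12 for i in range(n) for j in range(n))

ultra = [[0, 0.1, 0.3], [0.1, 0, 0.3], [0.3, 0.3, 0]]   # ultrametric-like
loose = [[0, 0.1, 0.5], [0.1, 0, 0.1], [0.5, 0.1, 0]]   # violates transitivity
print(is_min_max_transitive(ultra))  # True
print(is_min_max_transitive(loose))  # False
```

The second matrix fails because the entry 0.5 exceeds min(max(0, 0.5), max(0.1, 0.1), max(0.5, 0)) = 0.1.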


Definition 5.1.5. For all α ∈ [0, 1], call

$$\mu_{\tilde{T}_\alpha} = \begin{cases} 1, & \text{if } \mu_{\tilde{T}} \le \alpha, \\ 0, & \text{if } \mu_{\tilde{T}} > \alpha, \end{cases}$$

and

$$t^{\alpha}_{ij} = \begin{cases} 1, & \text{if } \tilde{t}_{ij} \le \alpha, \\ 0, & \text{if } \tilde{t}_{ij} > \alpha, \end{cases}$$


the α-cut relation and α-cut matrix of the fuzzy relation T̃ and the fuzzy matrix $T = (\tilde{t}_{ij})$, respectively.

Theorem 5.1.2. The necessary and sufficient condition for T̃ to be Min-Max transitive is that for all α ∈ [0, 1],

$$\mu_{\tilde{T}}(\tilde{t}_{ij}, \tilde{t}_{jk}) \le \alpha \ \text{ and } \ \mu_{\tilde{T}}(\tilde{t}_{jk}, \tilde{t}_{kl}) \le \alpha \ \Longrightarrow\ \mu_{\tilde{T}}(\tilde{t}_{ij}, \tilde{t}_{kl}) \le \alpha. \tag{5.1.10}$$

Proof: If (5.1.9) holds, then when $\mu_{\tilde{T}}(\tilde{t}_{ij}, \tilde{t}_{jk}) \le \alpha$ and $\mu_{\tilde{T}}(\tilde{t}_{jk}, \tilde{t}_{kl}) \le \alpha$ there must hold $\mu_{\tilde{T}}(\tilde{t}_{ij}, \tilde{t}_{jk}) \vee \mu_{\tilde{T}}(\tilde{t}_{jk}, \tilde{t}_{kl}) \le \alpha$. Hence

$$\mu_{\tilde{T}}(\tilde{t}_{ij}, \tilde{t}_{kl}) \le \mu_{\tilde{T}}(\tilde{t}_{ij}, \tilde{t}_{jk}) \vee \mu_{\tilde{T}}(\tilde{t}_{jk}, \tilde{t}_{kl}) \le \alpha.$$

Conversely, suppose (5.1.10) holds. Take an arbitrary $\tilde{t}_{j_0k_0} \in U$ and set $\alpha = \mu_{\tilde{T}}(\tilde{t}_{ij}, \tilde{t}_{j_0k_0}) \vee \mu_{\tilde{T}}(\tilde{t}_{j_0k_0}, \tilde{t}_{kl})$. Obviously

$$\mu_{\tilde{T}}(\tilde{t}_{ij}, \tilde{t}_{j_0k_0}) \le \alpha \quad \text{and} \quad \mu_{\tilde{T}}(\tilde{t}_{j_0k_0}, \tilde{t}_{kl}) \le \alpha,$$

so there must hold

$$\mu_{\tilde{T}}(\tilde{t}_{ij}, \tilde{t}_{kl}) \le \mu_{\tilde{T}}(\tilde{t}_{ij}, \tilde{t}_{j_0k_0}) \vee \mu_{\tilde{T}}(\tilde{t}_{j_0k_0}, \tilde{t}_{kl}),$$

and since $\tilde{t}_{j_0k_0}$ is arbitrary, (5.1.9) is true. Therefore the theorem holds.

Theorem 5.1.3. The necessary and sufficient condition for T to be a fuzzy distance matrix is that for all α ∈ [0, 1] the α-cut matrices Tα are all distance Boolean matrices.

Proof: i) T is anti-reflexive if and only if Tα is anti-reflexive, obviously.
ii) T symmetric ⟺ Tα symmetric. If $\tilde{t}_{ij} \ne \tilde{t}_{ji}$, we may assume $\tilde{t}_{ij} < \tilde{t}_{ji}$; choosing α with $\tilde{t}_{ij} \le \alpha < \tilde{t}_{ji}$ gives $t^\alpha_{ij} = 1$ and $t^\alpha_{ji} = 0$, so that $t^\alpha_{ij} \ne t^\alpha_{ji}$. Hence Tα


is symmetric ⇒ T is symmetric; conversely, obviously, T symmetric ⇒ Tα symmetric.
iii) T is Min-Max (resp. Max-Min) transitive ⟺ Tα is Min-Max (resp. Max-Min) transitive. This follows directly from Theorem 5.1.2.

Theorem 5.1.4. If T is a fuzzy distance matrix, then for any 0 ≤ λ < α ≤ 1, every class of the partition by Tλ is contained in some class of the partition by Tα.

Proof: $t^\lambda_{ij} = 1 \Longleftrightarrow \tilde{t}_{ij} \le \lambda \Longrightarrow \tilde{t}_{ij} \le \alpha \Longleftrightarrow t^\alpha_{ij} = 1$. So if i, j fall in the same class under Tλ, they also do under Tα.

Theorem 5.1.5. If $T \in T_{n\times n}$ is anti-reflexive and symmetric, then S(T) = d(T), where S(T) is the Min-Max (resp. Max-Min) transitive closure of T, and d(T) is the distance closure, i.e., the biggest distance matrix contained in T, which encloses every distance matrix contained in T.

Proof: Since S(T) is generated by the powers $T^k$ of T:
1) If T is anti-reflexive, then S(T) is anti-reflexive.
2) If T is symmetric, then S(T) is symmetric.
Obviously, S(T) is an anti-reflexive and symmetric matrix. Since S(T) also has Min-Max (resp. Max-Min) transitivity, S(T) is a distance matrix. Suppose M is any distance matrix contained in T; then it is a Min-Max (resp. Max-Min) transitive matrix contained in T. But S(T) is the Min-Max (resp. Max-Min) transitive closure of T, so by the definition of the Min-Max (resp. Max-Min) transitive closure [Wang83] we know M ⊆ S(T).

Since computing the Min-Max transitive closure is heavy, the author puts forward a new algorithm by which the transitive closure can be computed in just two steps.

Algorithm. If T satisfies anti-reflexivity and symmetry, first perform one Min-Max composition on T (forming T² = T ∘ T), and take the matrix element

$$\tilde{t}^{*}_{i_0,i_0+1} = \max\{\tilde{t}^{2}_{i,i+1} \mid 1 \le i \le n-1\}, \quad \text{where } \tilde{t}^{2}_{i,i+1} = \bigwedge_{j=1}^{n}\left(\tilde{t}_{ij} \vee \tilde{t}_{j,i+1}\right).$$

Then set $\tilde{t}^{2}_{ij} = \tilde{t}^{*}_{i_0,i_0+1}$ for every off-diagonal element with $\tilde{t}^{2}_{ij} \ge \tilde{t}^{*}_{i_0,i_0+1}$ (i ≠ j, the subscripts arranged arbitrarily). This constitutes the first half of the matrix; constitute the second half by symmetry, and write the resulting matrix as T*.

Theorem 5.1.6. Suppose $T = (\tilde{t}_{ij}) \in T_{n\times n}$ is an anti-reflexive, symmetric matrix such that, for i ≤ j, $\tilde{t}_{ii} \le \tilde{t}_{i,i+1} \le \cdots \le \tilde{t}_{in}$ $(1 \le i \le n-1)$, and reform T into T* by the algorithm. Then T*² = T*, i.e., T* is the Min-Max transitive closure S(T).

Proof: 1) If T is anti-reflexive, i.e., $\tilde{t}_{ii} = 0$ $(1 \le i \le n)$, then the composite expression for $\tilde{t}^{*2}_{ii}$ contains the term $\tilde{t}^{*}_{ii} \vee \tilde{t}^{*}_{ii}$, which is necessarily zero. Hence $\tilde{t}^{*2}_{ii} = 0$ under the Zadeh operator "∧", and the anti-reflexivity of T* is certified.
2) If T is symmetric, the symmetry of T* follows easily from the algorithm.
3) Min-Max transitivity. Since

$$\tilde{t}^{*2}_{ij} = \bigwedge_{k=1}^{n}\left(\tilde{t}^{*}_{ik} \vee \tilde{t}^{*}_{kj}\right), \tag{5.1.11}$$

at i = j, taking k = i and using 1), we obviously have $\tilde{t}^{*2}_{ii} = 0$ $(1 \le i \le n)$ by (5.1.11). At i ≠ j, since $\tilde{t}^{*}_{i_0,j_0} = \max\{\tilde{t}^{2}_{i,i+1} \mid 1 \le i \le n-1\}$:

i) If every term in (5.1.11) equals $\tilde{t}^{*}_{i_0,j_0}$ under the Zadeh operator "∨", i.e., $\tilde{t}^{*}_{ik} \vee \tilde{t}^{*}_{kj} = \tilde{t}^{*}_{i_0,j_0}$, then from (5.1.11) we know $\tilde{t}^{*2}_{ij} = \tilde{t}^{*}_{i_0,j_0}$ for all $i_0, j_0$.

ii) If only part of the terms in (5.1.11) contain $\tilde{t}^{*}_{i_0,j_0}$ and the others do not (by the construction, for i ≤ j the terms with index larger than j contain it, or for i ≥ j the terms with index less than j do), then the minimum splits as

$$\tilde{t}^{*2}_{ij} = \left[\bigwedge_{k=1}^{j}\left(\tilde{t}^{*}_{ik} \vee \tilde{t}^{*}_{kj}\right)\right] \wedge \left[\bigwedge_{k=j+1}^{n}\left(\tilde{t}^{*}_{ik} \vee \tilde{t}^{*}_{kj}\right)\right] = \left[\bigwedge_{k=1}^{j}\left(\tilde{t}^{*}_{ik} \vee \tilde{t}^{*}_{kj}\right)\right] \wedge \tilde{t}^{*}_{i_0 j_0} = \bigwedge_{k=1}^{j}\left(\tilde{t}^{*}_{ik} \vee \tilde{t}^{*}_{kj}\right) \tag{5.1.12}$$

or

$$\tilde{t}^{*2}_{ij} = \tilde{t}^{*}_{i_0 j_0} \wedge \left[\bigwedge_{k=j}^{n}\left(\tilde{t}^{*}_{ik} \vee \tilde{t}^{*}_{kj}\right)\right] = \bigwedge_{k=j}^{n}\left(\tilde{t}^{*}_{ik} \vee \tilde{t}^{*}_{kj}\right).$$

Again, by the hypothesis,

$$\tilde{t}^{*}_{ii} \le \tilde{t}^{*}_{i,i+1} \le \cdots \le \tilde{t}^{*}_{in} \ (i \le j) \quad \text{or} \quad \tilde{t}^{*}_{i1} \ge \tilde{t}^{*}_{i2} \ge \cdots \ge \tilde{t}^{*}_{ii} \ (i \ge j) \quad (1 \le i \le n-1), \tag{5.1.13}$$

so (5.1.12) gives

$$\tilde{t}^{*2}_{ij} = \left(\tilde{t}^{*}_{i1} \vee \tilde{t}^{*}_{1j}\right) \wedge \cdots \wedge \left(\tilde{t}^{*}_{ij} \vee \tilde{t}^{*}_{jj}\right) = \tilde{t}^{*}_{ij},$$

since the term with k = j equals $\tilde{t}^{*}_{ij} \vee 0 = \tilde{t}^{*}_{ij}$ and, by (5.1.13), every other term is not less than $\tilde{t}^{*}_{ij}$. The homologous conclusion is obtained from (5.1.13) in the other case; using the symmetry again, $\tilde{t}^{*2}_{ij} = \tilde{t}^{*}_{ij}$ for all i, j $(1 \le i, j \le n)$. The theorem is certified.

Theorem 5.1.7. Suppose T satisfies the conditions of Theorem 5.1.6. Then T can be reformed into the biggest distance matrix T* contained in T, which encloses every distance matrix contained in T; and T* is obtained by the two reformation steps above.

Because the fuzzy distance matrix is mutually dual with the fuzzy equivalence matrix, all results on fuzzy distance matrices can be transplanted to fuzzy equivalence matrices; the details are omitted here.

5.1.2.4 Modeling

Suppose that the classification object set is U = (u1, u2, ..., um) and its fuzzy-set characteristic value is X̃ = (x̃1, x̃2, ..., x̃n)ᵀ. We introduce the modeling steps as follows.

1. Obtain T-fuzzy data. The biggest obstacle in applying the model of this section is how to obtain the T-fuzzy data. The usual methods can be seen in Chapter 3, Section 3.1.4.

2. Turn the T-fuzzy data into non-T-fuzzy data. To convert a T-fuzzy data sample value into non-T-fuzzy data we have two methods:

a) Distance method. Let x̃ = (x̃1, x̃2, ..., x̃m)ᵀ be a column of T-fuzzy data. Then we can immediately compute the distance $d_{ij}(\tilde{x}_i, \tilde{x}_j)$ between the T-fuzzy data, which is a non-fuzzy number, by Formula (5.1.7).

b) Non-T-fuzzifying method. Let x̃ = (x̃1, x̃2, ..., x̃n)ᵀ be a column of T-fuzzy data. Then we classify the data x̃i by subscripts:

for i = 1, ..., N and each l, $U_{li} = x_i + \dfrac{\overline{\xi}_{li} - \underline{\xi}_{li}}{2}$;

for i = N+1, ..., 2N and each l, $U_{li} = \begin{cases} x_i - \underline{\xi}_{li}, & j_l = 0, \\ x_i + \overline{\xi}_{li}, & j_l = 1; \end{cases}$

for i = 2N+1, ..., 3N and each l, $U_{li} = \begin{cases} x_l + \overline{\xi}_{li}, & j_l = 0, \\ x_l - \underline{\xi}_{li}, & j_l = 1. \end{cases}$

Hence, under a given cone index J, x̃i is changed into real data.


Then, from Method a) or b), we calculate the metric value $d_{ij}$ between the T-fuzzy sample data x̃i and x̃j, listed in the united table, Table 5.1, below:

Table 5.1. The United Table

       x̃1    x̃2    ···   x̃n
x̃1     0
x̃2    d21     0
...    ...    ...   ...
x̃n    dn1    dn2    ···    0

From this we order the anti-reflexive and symmetric common matrix

$$D = \begin{pmatrix} 0 & & & \\ d_{21} & 0 & & \\ \cdots & \cdots & 0 & \\ d_{n1} & d_{n2} & \cdots & 0 \end{pmatrix}.$$

D can be classified not only by a classical method but also by a fuzzy method. Now we calculate the classification by the fuzzy-set method as follows.

3. Fuzzify data. Apply (5.1.8) to the numbers in Table 5.1 or in the matrix D, obtaining a standardized matrix $T = (t_{ij})_n$; it is a fuzzy matrix.

4. Compute the transitive closure. If T is already a fuzzy distance matrix, go to Step 5. Otherwise, by the Min-Max or Max-Min operation '∘', compute the Min-Max (resp. Max-Min) transitive closure S(T) of the standardized matrix $T = (t_{ij})_n$ by the shortcut method, reforming the fuzzy matrix T into the fuzzy distance matrix T*.

5. Classification. According to the element sizes in the matrix T*, in a sequence of α from small to big, take the α-cut matrix (T*)α: at $\tilde{t}_{ij} > \alpha$, $t^\alpha_{ij} = 1$; at $\tilde{t}_{ij} \le \alpha$, $t^\alpha_{ij} = 0$. Then (T*)α is a Boolean matrix. Let α ∈ [0, 1]. Classify U.

5.1.2.5 Application Example

Example 5.1: Five samples u1, u2, u3, u4, u5 each have only one characteristic, whose T-fuzzy data are depicted as

x̃1 = (1, 0.5, 0.1), x̃2 = (2, 0.7, 0.3), x̃3 = (4.5, 0.8, 0.5), x̃4 = (6, 0.9, 0.6), x̃5 = (8, 1, 0.8).

Try to find their classification.


Applying the modeling steps of Section 5.1.2.4, we operate as follows:

1) Processing by non-T-fuzzified data. Compute the measure value between x̃i and x̃j by (5.1.7), listing the united table as Table 5.2:

Table 5.2. Measure Value between x̃i and x̃j

                 (1, 0.5, 0.1)  (2, 0.7, 0.3)  (4.5, 0.8, 0.5)  (6, 0.9, 0.6)  (8, 1, 0.8)
(1, 0.5, 0.1)         0
(2, 0.7, 0.3)       1.013             0
(4.5, 0.8, 0.5)     3.545           2.536             0
(6, 0.9, 0.6)       5.047           4.039           1.502             0
(8, 1, 0.8)         7.084           6.076           3.539           2.037            0

2) Fuzzifying data. Fuzzify the data in Table 5.2 by (5.1.8) (for simplicity of record we take m = 0), obtaining

$$T = \begin{pmatrix} 0 & 0.14 & 0.50 & 0.72 & 1 \\ 0.14 & 0 & 0.36 & 0.52 & 0.86 \\ 0.50 & 0.36 & 0 & 0.19 & 0.50 \\ 0.72 & 0.52 & 0.19 & 0 & 0.29 \\ 1 & 0.86 & 0.50 & 0.29 & 0 \end{pmatrix}.$$

3) Computing a fuzzy distance matrix. Taking a Min-Max operation '∘', we have

$$T^2 = T \circ T = \begin{pmatrix} 0 & 0.14 & 0.36 & 0.5 & 0.5 \\ 0.14 & 0 & 0.36 & 0.36 & 0.5 \\ 0.36 & 0.36 & 0 & 0.19 & 0.29 \\ 0.5 & 0.36 & 0.19 & 0 & 0.29 \\ 0.5 & 0.5 & 0.29 & 0.29 & 0 \end{pmatrix};$$

then we use the shortcut method to obtain T*. Taking

$$\tilde{t}^{*}_{i_0 j_0} = \max\{\tilde{t}^{2}_{i,i+1} \mid i = 1, 2, 3, 4\} = \max\{0.14, 0.36, 0.19, 0.29\} = 0.36,$$

i.e., the greatest element of the diagonal just above the main one in T²: every element of T² larger than 0.36 becomes 0.36, and every element less than or equal to 0.36 keeps its original value. Therefore T* is derived as follows:

$$T^* = \begin{pmatrix} 0 & 0.14 & 0.36 & 0.36 & 0.36 \\ 0.14 & 0 & 0.36 & 0.36 & 0.36 \\ 0.36 & 0.36 & 0 & 0.19 & 0.29 \\ 0.36 & 0.36 & 0.19 & 0 & 0.29 \\ 0.36 & 0.36 & 0.29 & 0.29 & 0 \end{pmatrix}.$$
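The shortcut computation of T* in step 3) can be reproduced programmatically. A sketch with our own helper names, using the matrices of this example:

```python
def min_max_compose(T):
    # (T o T)_ij = min over k of max(T_ik, T_kj)
    n = len(T)
    return [[min(max(T[i][k], T[k][j]) for k in range(n))
             for j in range(n)] for i in range(n)]

T = [[0.00, 0.14, 0.50, 0.72, 1.00],
     [0.14, 0.00, 0.36, 0.52, 0.86],
     [0.50, 0.36, 0.00, 0.19, 0.50],
     [0.72, 0.52, 0.19, 0.00, 0.29],
     [1.00, 0.86, 0.50, 0.29, 0.00]]

T2 = min_max_compose(T)                                # T o T
t_star = max(T2[i][i + 1] for i in range(len(T) - 1))  # greatest superdiagonal element
# truncate: every element exceeding t_star is replaced by t_star
Tstar = [[min(v, t_star) for v in row] for row in T2]

print(t_star)    # 0.36
print(Tstar[0])  # [0.0, 0.14, 0.36, 0.36, 0.36]
```

The result reproduces the T² and T* of the text exactly, confirming the two-step shortcut.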


4) Cluster. According to the element sizes in the matrix T*, the sequence goes from small to big. Take the α-cut matrix (T*)α and classify U.

At α = 0, the cut matrix is

$$(T^*)_0 = \begin{pmatrix} 0 & 1 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 & 1 \\ 1 & 1 & 0 & 1 & 1 \\ 1 & 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 1 & 0 \end{pmatrix},$$

divided into five types: {u1}, {u2}, {u3}, {u4}, {u5}. Analogously:

At α = 0.14, the cut matrix is (T*)₀.₁₄, divided into four types: {u1, u2}, {u3}, {u4}, {u5}.
At α = 0.19, the cut matrix is (T*)₀.₁₉, divided into three types: {u1, u2}, {u3, u4}, {u5}.
At α = 0.29, the cut matrix is (T*)₀.₂₉, divided into two types: {u1, u2}, {u3, u4, u5}.
At α = 0.36, the cut matrix is (T*)₀.₃₆, divided into one type: {u1, u2, u3, u4, u5}.

This coincides with the result obtained from the united table values by a system cluster method [Fang89].

5.1.2.6 Conclusion

A corresponding classification can also be obtained for fuzzy similarity matrices. The result of this section can be used for pattern recognition and for more complicated systems and engineering [Cao07b].
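The classification in step 4) can be sketched as a merging procedure, assuming (as in the classes listed above) that two samples fall into one class when their truncated distance satisfies t*_ij ≤ α; the union-find helper is ours:

```python
def classify(Tstar, alpha):
    """Put i and j into one class when the truncated distance t*_ij <= alpha."""
    n = len(Tstar)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if Tstar[i][j] <= alpha:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i + 1)  # 1-based labels u1..un
    return sorted(groups.values())

Tstar = [[0.00, 0.14, 0.36, 0.36, 0.36],
         [0.14, 0.00, 0.36, 0.36, 0.36],
         [0.36, 0.36, 0.00, 0.19, 0.29],
         [0.36, 0.36, 0.19, 0.00, 0.29],
         [0.36, 0.36, 0.29, 0.29, 0.00]]

for a in (0, 0.14, 0.19, 0.29, 0.36):
    print(a, classify(Tstar, a))
# 0    -> five singleton classes
# 0.14 -> [[1, 2], [3], [4], [5]]
# 0.19 -> [[1, 2], [3, 4], [5]]
# 0.29 -> [[1, 2], [3, 4, 5]]
# 0.36 -> one class containing all five
```

Raising α merges classes monotonically, which is exactly the nesting guaranteed by Theorem 5.1.4.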

5.2 Fuzzy Recognition

Pattern recognition means recognizing where a given object belongs on the basis of knowledge of various patterns; it is a problem of pattern clustering. It is difficult for us to give an exact description of a sample of the complicated phenomena in the real world, and the data obtained by quantification are approximate values to a certain degree. It is well known that the concept of a fuzzy set originated from the study of problems related to pattern classification, and many recognition methods have appeared under the inspiration of fuzzy sets [Zad76]; thus many authors [Cao96c,07b][CL91] have developed fuzzy pattern matching and various distance models among fuzzy sets (including fuzzy data).


5.2.1 Common Methods in Pattern Recognition

For a pattern recognition model we have methods such as statistics, language, and fuzzy sets. We introduce the fuzzy-set methods as follows.

a. The method for individual model recognition

Let Ã1, Ã2, ..., Ãn be n fuzzy sets of X and x0 ∈ X. If

$$\tilde{A}_k(x_0) = \max_{1 \le i \le n}\{\tilde{A}_i(x_0)\},$$

then x0 is regarded as relatively belonging to the fuzzy set Ãk. This is the so-called maximum membership degree principle.

Assume there exist n models representing n fuzzy sets Ã1, Ã2, ..., Ãn of X, and we have an object x0 ∈ X to identify. Which model x0 belongs to is judged by the maximum membership degree principle: the fuzzy set whose membership function value is greatest determines the model it belongs to. This direct method is the first recognition method for the model.

The method above can be modified as follows: stipulate a threshold value λ ∈ [0, 1] before judging by the maximum membership degree principle, writing α = max{Ã1(x0), Ã2(x0), ..., Ãn(x0)}. If α < λ, identification fails and we have to analyze the object another way. If α ≥ λ, identification succeeds and we judge by the maximum membership degree principle.

b. The method for group model recognition

Definition 5.2.1. Let Ã, B̃ ∈ F(X). Then we call

$$\sigma(\tilde{A}, \tilde{B}) = \frac{1}{2}\left[\tilde{A} \otimes \tilde{B} + \left(1 - \tilde{A} \odot \tilde{B}\right)\right]$$

an approaching degree of Ã and B̃, where

$$\tilde{A} \otimes \tilde{B} = \bigvee_{x \in X}\left[\mu_{\tilde{A}}(x) \wedge \mu_{\tilde{B}}(x)\right]$$

and

$$\tilde{A} \odot \tilde{B} = \bigwedge_{x \in X}\left[\mu_{\tilde{A}}(x) \vee \mu_{\tilde{B}}(x)\right]$$

are called the inner product and the outer product of Ã and B̃, respectively. Here Ã ≠ ∅, B̃ ≠ ∅, supp Ã ≠ X, supp B̃ ≠ X.

The principle of nearest choice. Let Ãi, B̃ ∈ F(X) (1 ≤ i ≤ n). If Ãi0 exists such that

$$\sigma(\tilde{A}_{i_0}, \tilde{B}) = \max\{\sigma(\tilde{A}_1, \tilde{B}), \sigma(\tilde{A}_2, \tilde{B}), \cdots, \sigma(\tilde{A}_n, \tilde{B})\},$$


then we call B̃ closest to Ãi0 and judge that B̃ should belong to the class Ãi0.

The recognition model with fuzzy data will have vast practical use. Here, however, the author puts forward a new recognition model with T-fuzzy data, different from the traditional and fuzzy recognition models.

Definition 5.2.2. Let the measure of the T-fuzzy data x̃i, ỹk be $d_i(\tilde{x}_i, \tilde{y}_k)$. Then there are

$$D^- = \inf_{\tilde{x}_i \in \tilde{T}(x_i)} d_i(\tilde{x}_i, \tilde{y}_k), \qquad D^+ = \sup_{\tilde{x}_i \in \tilde{T}(x_i)} d_i(\tilde{x}_i, \tilde{y}_k),$$

$$\tilde{Y} = 1 - \frac{d_i(\tilde{x}_i, \tilde{y}_k) - D^-}{D^+ - D^-}, \tag{5.2.1}$$

where T̃(xi) is the whole set of T-fuzzy numbers in U.

Next we introduce the pattern recognition method with T-fuzzy data.

5.2.2 Pattern Recognition Model with T-Fuzzy Data

5.2.2.1 Introduction

This section introduces T-fuzzy data [Cao90][Dia87] into a pattern recognition model, developing a new model different from those obtained by other methods: a pattern recognition model with T-fuzzy data. The model is applied to recognition of environmental quality, human fossils, and children's healthy growth; it is effective in handling pattern recognition problems with T-fuzzy data and gets a good result in a numerical example.

5.2.2.2 Model

For building a pattern recognition model we have fuzzy-set methods such as threshold value, experience, and the dual comparative function [Wang83]. The author once advanced another recognition method [Cao96d,07b] as follows.

Let ui ∈ U (1 ≤ i ≤ m) be an object awaiting recognition, with its feature described by the T-fuzzy data $\tilde{x}_i = (x_i, \underline{\xi}_i, \overline{\xi}_i)$, and suppose U has a standard object whose feature is assigned the value $\tilde{y} = (y, \underline{\eta}, \overline{\eta})$.

1) Methods of threshold or experience value

Non-fuzzification of the samples x̃ and ỹ is carried out by Formula (5.1.7), and a threshold value λ ∈ [0, 1] is properly selected (or an experience value α ∈ R is determined). If di ≥ λ (or di ≤ α), recognition is accepted; if di < λ (or di > α), recognition is refused.

2) Method of a dual opposite comparison function

Non-fuzzification of the samples x̃i and ỹj is carried out by Formula (5.1.7); then let


$$\mu_{v_0}(u_{ij}) = \frac{d_{\tilde{x}_{ij}}(\tilde{y}_0)}{d_{\tilde{x}_{ij}}(\tilde{y}_0) + d_{\tilde{y}_0}(\tilde{x}_{ij})} \quad (i \ne j)$$

or

$$\mu_{v_0}(u_{ij}) = \frac{d_{\tilde{x}_{ij}}(\tilde{y}_0)}{d_{\tilde{x}_{ij}}(\tilde{y}_0) \vee d_{\tilde{y}_0}(\tilde{x}_{ij})} \quad (i \ne j),$$

where $\mu_{v_0}(u_{ij}) \in [0, 1]$ $(1 \le i \le m;\ 1 \le j \le n)$. List the relation orders and take

$$\theta = \mu_{v_0}(u_{k_0}) = \max\{\mu_{v_0}(u_{i1}), \mu_{v_0}(u_{i2}), \cdots, \mu_{v_0}(u_{in})\} \quad (1 \le i \le m).$$

Therefore we can decide that the k0-th sample $u_{k_0}$ is most similar to the standard sample v0.

If there exist m features influencing ui corresponding to v, with the feature values of ui and vi being x̃ij and ỹ respectively, then the metric between x̃ij and ỹ shall be weighted, i.e.,

$$d_{ij} = \begin{cases} \sum_{j=1}^{n} k_{ij}\, d(\tilde{x}_{ij}, \tilde{y}), & i \ne j, \\ 0, & i = j, \end{cases}$$

where $k_{ij} \ge 0$ and $\sum_{j=1}^{n} k_{ij} = 1$ $(1 \le i \le m)$.

The method mentioned above has been applied to recognition of the human fossil [Cao96c] and of children's healthy growth [CL91], respectively, with satisfactory results. But here we develop another method, different from the earlier one [DPr80], based on the author's own model.

3) Concrete pattern classification

Obviously, we consider and use a variety of information; the steps are as follows.

1° Feature collection. For the object ui = {uij} ∈ U (1 ≤ i ≤ m; 1 ≤ j ≤ n), we collect the collective peculiarities concerned and test the data in its feature description: $\tilde{x}_{ij} = (x_{ij}, \underline{\xi}_{ij}, \overline{\xi}_{ij})$, where the extension can be taken as in the function max{0, 1 − |xij|}, etc.

2° Variation pattern. Change ui into the T-fuzzy number pattern $p(u_i) = (u_i^1, u_i^2, \cdots, u_i^m)$; meanwhile, give a standard object v0 and determine its pattern, the vector $p(v_0) = (v_0^1, v_0^2, \cdots, v_0^m)$, whose assigned value for the corresponding feature is $\tilde{y}_0 = (y_0, \underline{\eta}_0, \overline{\eta}_0)$.

3° Non-fuzzification. With the aid of the distance Formula (5.1.7), we calculate the metric value d̃ij between x̃ij (1 ≤ i ≤ m, 1 ≤ j ≤ n) and ỹ0; then

$$(\tilde{d}_{ij})_{m\times n} = \begin{pmatrix} d(\tilde{x}_{11}, \tilde{y}_0) & d(\tilde{x}_{12}, \tilde{y}_0) & \cdots & d(\tilde{x}_{1n}, \tilde{y}_0) \\ d(\tilde{x}_{21}, \tilde{y}_0) & d(\tilde{x}_{22}, \tilde{y}_0) & \cdots & d(\tilde{x}_{2n}, \tilde{y}_0) \\ \cdots & \cdots & \cdots & \cdots \\ d(\tilde{x}_{m1}, \tilde{y}_0) & d(\tilde{x}_{m2}, \tilde{y}_0) & \cdots & d(\tilde{x}_{mn}, \tilde{y}_0) \end{pmatrix}, \tag{5.2.2}$$

where $\tilde{d}_{ij} = d(\tilde{x}_{ij}, \tilde{y}_0)$.

4° Optimum decision [Yage80]. Take the maximum, minimum, and average value of $d(\tilde{x}_{ij}, \tilde{y}_0)$ in (5.2.2) row by row:

$$D_i^+ = \sup_j d_i(\tilde{x}_{ij}, \tilde{y}_0), \quad D_i^- = \inf_j d_i(\tilde{x}_{ij}, \tilde{y}_0), \quad \bar{D}_i = \frac{1}{n}\sum_{j=1}^{n} d_i(\tilde{x}_{ij}, \tilde{y}_0) \quad (1 \le i \le m),$$

then an assessable matrix will be obtained from these values:

$$\begin{matrix} \mathrm{I} \\ \mathrm{II} \\ \vdots \\ \mathrm{m} \end{matrix}\begin{pmatrix} D_1^+ & D_1^- & \bar{D}_1 \\ D_2^+ & D_2^- & \bar{D}_2 \\ \vdots & \vdots & \vdots \\ D_m^+ & D_m^- & \bar{D}_m \end{pmatrix}.$$

Again, let

$$f(D_i^+, D_i^-, \bar{D}_i) = k_1 D_i^+ + k_2 D_i^- + k_3 \bar{D}_i.$$

Here ki ∈ [0, 1] (i = 1, 2, 3) represents a weight obtained by the Analytic Hierarchy Process or the Delphi method. Finally, we take min f(·) = f_k, where "·" denotes $(D_i^+, D_i^-, \bar{D}_i)$ $(1 \le i \le m)$.

5° Determination. f_k being minimal, the corresponding object u_k is most similar to v0.

5.2.2.3 Practical Example

In environmental protection we must often distinguish the grade of environmental quality [Lao90]. Now we divide environmental quality into grades I–V:

U = {clean (I), less clean (II), less polluted (III), more polluted (IV), most polluted (V)},

where U is a universe. We choose atmosphere, water on the earth, and water under the earth as the environmental factors u1, u2, and u3, respectively, according to monitoring data at 10 observation points in some city; their target sets are

u1 = {SO2, NOx, TSP},
u2 = {COD, NH3-N, DO, NO3-N, Cr6+, CN},
u3 = {SO4^2-, Cl-, NO3-N, hardness, Cr6+, CN},

respectively.


1° Ref. [Lao90] gives standard values for distinguishing the grade of environmental quality, as in Table 5.2.1.

Table 5.2.1. Standard Values Distinguishing the Grade of Environmental Quality

Factors       Index     I       II            III           IV            V
Atmosphere    SO2       ≤0.05   >0.05~0.15    >0.15~0.25    >0.25~0.50    >0.50
              NOx       ≤0.05   >0.05~0.10    >0.10~0.15    >0.15~0.30    >0.30
              TSP       ≤0.15   >0.15~0.30    >0.30~0.50    >0.50~0.75    >0.75
Water on      COD       ≤2      >2~6          >6~12         >12~25        >25
the earth     NH3-N     ≤0.25   >0.25~0.50    >0.50~1       >1~3          >3
              DO        ≥8      4~8           3~4           2~3           <2
              NO3-N     ≤10     >10~20        >20~40        >40~80        >80
              Cr6+      ≤0.01   >0.01~0.05    >0.05~0.10    >0.10~0.25    >0.25
              CN        ≤0.01   >0.01~0.05    >0.05~0.10    >0.10~0.25    >0.25
Water under   SO4^2-    ≤120    >120~250      >250~750      >750~1000     >1000
the earth     Cl-       ≤120    >120~250      >250~750      >750~1000     >1000
              NO3-N     ≤10     >10~20        >20~40        >40~80        >80
              hardness  ≤250    >250~450      >450~650      >650~900      >900
              Cr6+      ≤0.01   >0.01~0.05    >0.05~0.10    >0.10~0.25    >0.25
              CN        ≤0.01   >0.01~0.05    >0.05~0.10    >0.10~0.25    >0.25

Let A = {u1, u2, u3}. The T-fuzzy number sets corresponding to the five standard grades in U are as follows (units: atmosphere mg/Nm³, water mg/l, hardness mg/l):

AI = {(0.05, 0, 0)T, (0.05, 0, 0)T, (0.15, 0, 0)T; (2, 0, 0)T, (0.25, 0, 0)T, (8, 0, 0)T, (10, 0, 0)T, (0.01, 0, 0)T, (0.01, 0, 0)T; (120, 0, 0)T, (120, 0, 0)T, (10, 0, 0)T, (250, 0, 0)T, (0.01, 0, 0)T, (0.01, 0, 0)T}.

AII = {(0.1, 0.04, 0.05)T, (0.08, 0.02, 0.01)T, (0.2, 0.02, 0.1)T; (4, 1.8, 2)T, (0.35, 0.1, 0.05)T, (6, 2, 1.5)T, (15, 4.5, 5)T, (0.03, 0.02, 0.01)T, (0.03, 0.02, 0.01)T; (190, 65, 60)T, (190, 65, 60)T, (15, 4.5, 5)T, (360, 110, 90)T, (0.03, 0.02, 0.01)T, (0.03, 0.02, 0.01)T}.

AIII = {(0.2, 0.01, 0.05)T, (0.12, 0.02, 0.01)T, (0.4, 0.1, 0.05)T; (9, 2, 3)T, (0.7, 0.05, 0.2)T, (3.5, 0.5, 0.3)T, (30, 9, 10)T, (0.08, 0.01, 0.01)T, (0.08, 0.02, 0.02)T; (510, 260, 240)T, (510, 260, 240)T, (30, 9, 10)T, (560, 110, 90)T, (0.08, 0.01, 0.01)T, (0.08, 0.02, 0.02)T}.

AIV = {(0.38, 0.12, 0.1)T, (0.22, 0.05, 0.08)T, (0.62, 0.10, 0.12)T; (19, 5, 6)T, (2, 0.8, 1)T, (2.5, 0.5, 0.1)T, (60, 18, 20)T, (0.18, 0.07, 0.05)T, (0.18, 0.07, 0.05)T; (880, 130, 120)T, (880, 130, 120)T, (60, 18, 20)T, (780, 130, 120)T, (0.18, 0.07, 0.05)T, (0.18, 0.07, 0.05)T}.

AV = {(0.5, 0, 0)T, (0.3, 0, 0)T, (0.75, 0, 0)T; (25, 0, 0)T, (3, 0, 0)T, (2, 0, 0)T, (80, 0, 0)T, (0.25, 0, 0)T, (0.25, 0, 0)T; (1000, 0, 0)T, (1000, 0, 0)T, (80, 0, 0)T, (900, 0, 0)T, (0.25, 0, 0)T, (0.25, 0, 0)T}.


Now, according to Table 5.2.1, we have tested the basic index feature values of environmental quality in a city, given in Table 5.2.2.

Table 5.2.2. Basic Index Feature Values

Factors        Atmosphere              Water on the earth
Index          SO2   NOx   TSP    COD    NH3-N   DO    NO3-N   Cr6+   CN
Feature value  0.07  0.05  0.6    19.2   1.5     5.6   10      0.01   0.01

Factors        Water under the earth
Index          SO4^2-   Cl-   NO3-N   hardness   Cr6+   CN
Feature value  612      625   14      290        0.01   0.01

2° The tested values are fitted by the T-fuzzy data

A0 = {u01, u02, u03} = {(0.06, 0.005, 0.015)T, (0.05, 0.01, 0.02)T, (0.5, 0.05, 0.15)T; (19, 0.05, 0.4)T, (2, 0.5, 0.1)T, (5.5, 0.3, 0.4)T, (9, 1, 2)T, (0.01, 0.005, 0.001)T, (0.02, 0.001, 0.005)T; (611, 0.5, 1.5)T, (624, 1, 2)T, (15, 1, 0)T, (290, 1, 2)T, (0.01, 0.005, 0.001)T, (0.02, 0.001, 0.005)T}.

3° The 5 × 15 matrix (dij)₅ₓ₁₅ is obtained by calculating the distance between each component of Aj (j = I, II, III, IV, V) and the corresponding component of A0 with Formula (5.1.7):

(dij)₅ₓ₁₅ = (columns 1–8)

0.0110   0.0129   0.3279   16.8845   1.9015   2.5495   1.8257    490.6674
0.0492   0.0238   0.3084   15.1120   1.5465   1.1902   6.4096    425.9724
0.1511   0.0635   0.1555    9.9593   1.1332   2.1016   21.9924   230.2662
0.3207   0.1814   0.0956    4.3152   0.5477   3.1691   53.2854   284.0056
0.4367   0.2470   0.2327    5.8868   1.1633   3.5449   70.6777   388.6676

(columns 9–15)

503.6682   5.3541    39.6863    0.0029   0.0091   0.0029   0.0091
438.8379   3.5237    102.2823   0.0205   0.0016   0.0205   0.0016
236.2915   17.3109   275.0667   0.0716   0.0590   0.0716   0.0603
271.7480   48.4218   496.6840   0.1712   0.1591   0.1712   0.1591
375.6687   65.3350   609.6679   0.1712   0.1591   0.2413   0.2287

4° An assessable matrix is derived by Method 3) and Step 4° of 5.2.2.2:

$$\begin{matrix} \mathrm{I} \\ \mathrm{II} \\ \mathrm{III} \\ \mathrm{IV} \\ \mathrm{V} \end{matrix}\begin{pmatrix} D_1^+ & D_1^- & \bar{D}_1 \\ D_2^+ & D_2^- & \bar{D}_2 \\ D_3^+ & D_3^- & \bar{D}_3 \\ D_4^+ & D_4^- & \bar{D}_4 \\ D_5^+ & D_5^- & \bar{D}_5 \end{pmatrix} = \begin{pmatrix} 503.6682 & 0.0029 & 70.8609 \\ 438.8379 & 0.0116 & 66.3501 \\ 236.2915 & 0.0590 & 52.9836 \\ 496.6840 & 0.0956 & 77.5623 \\ 609.6679 & 0.1591 & 101.4886 \end{pmatrix}.$$


We might as well let k1 = k2 = k3 = 1/3; then

fI = (1/3)(503.6682 + 0.0029 + 70.8609) ≈ 191.5107,
fII ≈ 168.40, fIII ≈ 96.445, fIV ≈ 191.45, fV ≈ 237.1052.
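The decision of steps 4°–5° can be reproduced directly from the assessable matrix (a sketch; the dictionary layout and names are ours):

```python
# (D_i^+, D_i^-, D_i_bar) rows of the assessable matrix above
rows = {
    "I":   (503.6682, 0.0029, 70.8609),
    "II":  (438.8379, 0.0116, 66.3501),
    "III": (236.2915, 0.0590, 52.9836),
    "IV":  (496.6840, 0.0956, 77.5623),
    "V":   (609.6679, 0.1591, 101.4886),
}
k1 = k2 = k3 = 1 / 3  # equal weights, as in the text
f = {g: k1 * dp + k2 * dm + k3 * db for g, (dp, dm, db) in rows.items()}
best = min(f, key=f.get)
print(best, round(f[best], 4))  # III 96.4447
```

The minimum falls on grade III, reproducing the conclusion of step 5°.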

5° fIII = 96.4447 is the smallest, so the city's environmental quality is closest to grade III, i.e., less polluted, which agrees with practice.

5.2.2.4 Conclusion

In application it is difficult to obtain T-fuzzy data, so an approximate value obtained from tests or measurements is regarded as a fitting of the T-fuzzy data. If the historical data on record are incomplete or inexact, T-fuzzy numbers can be constructed similarly to Section 3.1.4. The result coincides with the systematic fuzzy evaluation method in Ref. [Chen94] and the 2-order fuzzy synthetic evaluation model in Ref. [Zad76]. But the method of this section is superior to the above two:
1) Its content carries wider information, owing to the use of fuzzy data.
2) Its result contains numbers that are not compressed into [0, 1]; with finer division the smaller weights would make each single-factor evaluation meaningless under Zadeh's "∨" operator, in which case much information is lost.

5.2.3 Application of the Recognition Model with T-Fuzzy Data

5.2.3.1 Application in Identification of the Human Fossil

We introduce practical applications of T-fuzzy data below, since they have a vast foreground of application in every field. In animal fossil identification we usually meet cases involving big differences between fossil and specimen, unclear boundary standards, fuzzy boundaries, etc. This brings certain difficulties to identification. Now we treat a problem of rat fossil identification concerning the indexes at the belt, knucklebone bows, toes, and back of big gerbils, meridian gerbils, and gerbils with long claws.

Let big gerbils u1, meridian gerbils u2, and gerbils with long claws u3 have the indexes at their belt uj1, knucklebone bows uj2, toes uj3, and back uj4. Through our measurements, these three gerbils own four kinds of indexes, respectively:

P(uj) = {P(uj1), P(uj2), P(uj3), P(uj4)},


i.e.,

P(u1) = {(45.2, 8.5, 11.3), (100, 18.3, 29.2), (87.5, 28.4, 34.7), (47.8, 9.4, 11.6)},
P(u2) = {(52, 6.5, 15.5), (80, 24.2, 24.8), (125, 24.8, 24.6), (53.8, 15, 13.3)},
P(u3) = {(50, 5.4, 9.4), (76.2, 18.7, 28.4), (105.3, 22.8, 30), (66.7, 36, 17.8)}.

Now the measured data of a gerbil fossil, measured as u*ᵢ (i = 1, 2, 3, 4), are

P(u*) = {(47, 7.2, 13.5), (78, 20, 15), (110, 19, 33), (45, 29, 18)},

which is a standard mode. Try to determine which gerbil it belongs to.

1° By (5.1.7), calculate the distances between u* and uⱼ:

d(u*₁, u₁₁) = √(9.61 + 16 + 3.24) ≈ 5.37, d(u*₂, u₁₂) ≈ 42.74, d(u*₃, u₁₃) ≈ 44.23, d(u*₄, u₁₄) ≈ 22.57,

and

d(u*₁, u₂₁) ≈ 10.32, d(u*₂, u₂₂) ≈ 12.17, d(u*₃, u₂₃) ≈ 18.8, d(u*₄, u₂₄) ≈ 24.78,
d(u*₁, u₃₁) ≈ 5.77, d(u*₂, u₃₂) ≈ 21.91, d(u*₃, u₃₃) ≈ 12.39, d(u*₄, u₃₄) ≈ 33.9.

2° Make Dⱼ = k₁D₁ⱼ + k₂D₂ⱼ + ··· + kₙDₙⱼ, where kᵢ ≥ 0 and $\sum_{i=1}^{n} k_i = 1$. Take n = 4, j = 1, 2, 3, kᵢ = 1/4; then

D₁ = (1/4)[d(u*₁, u₁₁) + d(u*₂, u₁₂) + d(u*₃, u₁₃) + d(u*₄, u₁₄)] ≈ 28.73.

Similarly, D2 ≈ 16.52, D3 ≈ 18.49.

3° Determination. Comparing the Dj and taking D1 = max{Dj | j = 1, 2, 3} = 28.73 gives what we want to find. Therefore this ancient rodent is most alike to a big gerbil, belonging to big gerbils with membership degree 0.45. In practical applications we can assign different weight coefficients according to the concrete circumstances, adopting methods such as the analytic hierarchy process, the Delphi method, etc.

5.2.3.2 Application to Young Children's Body Growth

Young children are the hope of the future; they are being brought up in gaining knowledge, cultivating their sentiments and developing their bodies, so
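The 2°–3° computation above is a plain weighted sum and can be checked mechanically. Below is a minimal Python sketch using the step-1° distance degrees quoted in the text; formula (5.1.7) itself is not reproduced, its values are taken as given:

```python
# Weighted distance degrees D_j for the gerbil-fossil example (step 2).
# The d-values are the step-1 results quoted in the text.
d = {
    1: [5.32, 42.74, 44.23, 22.57],   # distances of u* to big gerbil u1
    2: [10.32, 12.17, 18.8, 24.78],   # to meridian gerbil u2
    3: [5.77, 21.91, 12.39, 33.9],    # to long-clawed gerbil u3
}
k = [0.25] * 4                        # equal weights, sum k_i = 1

D = {j: sum(ki * di for ki, di in zip(k, ds)) for j, ds in d.items()}
best = max(D, key=D.get)              # the text selects the maximal D_j
# D[1] ~ 28.72, D[2] ~ 16.52, D[3] ~ 18.49, best = 1
```

The step-3° determination then just selects the extremal Dj, exactly as in the text.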


prompt, scientific and accurate analysis of their growth, urging their full development into reliable successors, has an extremely significant strategic sense. However, because many factors influence human body growth and there are no clear boundaries between the various divisions of human bodies, assessing body type and growth brings many difficulties. Applying fuzzy set theory, this section establishes a simple recognition and judgment model of young children's growth according to the method mentioned above; the modeling steps are shown with an example.

Suppose the universe of human body types is U = {non-force type, positive-force type, super-force type}, with characteristic P(u) = {stature, weight, chest circumference} = (u1, u2, u3), depicted by T-fuzzy data written x̃i = (xi, ξi, ηi). If x̃1, x̃2, x̃3 are as above, then P(x̃) = (x̃1, x̃2, x̃3) denotes a standard mode. According to Ref. [ISTI82], for 18–25-year-old men in the city, over the non-force, positive-force and super-force types the ranges of the three body measurements are height (u1): [155, 186], weight (u2): [42, 75] and chest circumference (u3): [74, 98]. The standard evaluation of the non-force type is

x̃1^(1) = (170, 3, 4),  x̃2^(1) = (48, 6, 7),  x̃3^(1) = (79, 5, 5);

the positive-force type is

x̃1^(2) = (170, 3, 4),  x̃2^(2) = (58, 3, 4),  x̃3^(2) = (86, 2, 2);

the super-force type is

x̃1^(3) = (170, 3, 4),  x̃2^(3) = (68, 6, 7),  x̃3^(3) = (93, 5, 5).

Now some student P0, measured on the ui (i = 1, 2, 3), has the corresponding evaluation

x̃0^(1) = (170, 1, 5),  x̃0^(2) = (59, 2, 6),  x̃0^(3) = (86, 3, 2).

Try to judge with what degree P0 belongs to P(uj) (j = 1, 2, 3).

i) Computation of the distances between P0 and P(uj), j = 1, 2, 3 (by use of Formula (5.1.7)), writing D_i^j = d_i(x̃0^(i), x̃_i^(j)).

Compute the distance between P0 and P(u1):

D_1^1 = d1(x̃0^(1), x̃1^(1)) = (4 + 1 + 0)^{1/2} ≈ 2.24,
D_2^1 = d2(x̃0^(2), x̃2^(1)) ≈ 21.12,
D_3^1 = d3(x̃0^(3), x̃3^(1)) ≈ 12.08.


Compute the distance between P0 and P(u2):

D_1^2 = d1(x̃0^(1), x̃1^(2)) ≈ 2.24,
D_2^2 = d2(x̃0^(2), x̃2^(2)) ≈ 3.74,
D_3^2 = d3(x̃0^(3), x̃3^(2)) ≈ 1.

Compute the distance between P0 and P(u3):

D_1^3 = d1(x̃0^(1), x̃1^(3)) ≈ 2.24,
D_2^3 = d2(x̃0^(2), x̃2^(3)) ≈ 14.35,
D_3^3 = d3(x̃0^(3), x̃3^(3)) ≈ 13.19.

ii) The initial judgment. Let

D1 = (1/3)(D_1^1 + D_2^1 + D_3^1) ≈ 11.81,
D2 = (1/3)(D_1^2 + D_2^2 + D_3^2) ≈ 2.33,
D3 = (1/3)(D_1^3 + D_2^3 + D_3^3) ≈ 9.93.

Since D1 > D3 > D2, P0 is closest to the positive-force type.

iii) Further judgment. Compare the D_i^j in size. Let D+ = sup D_i^j = 21.12 and D− = inf D_i^j = 1. Then

P(uj)(P0) = 1 − (Dj − D−)/(D+ − D−),

hence

P(u1)(P0) = 1 − 0.537 = 0.463,
P(u2)(P0) = 1 − 0.066 = 0.934,
P(u3)(P0) = 1 − 0.444 = 0.556.

Therefore the student P0 belongs to the positive-force type with membership degree 0.934.
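Steps ii)–iii) can likewise be verified numerically. The following Python sketch takes the quoted distances D_i^j as given and recomputes the initial judgment and the membership degrees from the formula P(uj)(P0) = 1 − (Dj − D−)/(D+ − D−):

```python
# Membership degrees for the body-type example (steps ii-iii).
# D_table[j] holds the quoted distances D_i^j of P0 to P(u_j), i = 1..3.
D_table = {
    1: [2.24, 21.12, 12.08],   # to non-force type P(u1)
    2: [2.24, 3.74, 1.0],      # to positive-force type P(u2)
    3: [2.24, 14.35, 13.19],   # to super-force type P(u3)
}
D = {j: sum(ds) / 3 for j, ds in D_table.items()}        # initial judgement
all_d = [x for ds in D_table.values() for x in ds]
d_plus, d_minus = max(all_d), min(all_d)                 # D+ = 21.12, D- = 1
mu = {j: 1 - (Dj - d_minus) / (d_plus - d_minus) for j, Dj in D.items()}
best = max(mu, key=mu.get)   # type 2 (positive-force), mu[2] ~ 0.934
```

The maximal membership degree selects the positive-force type, matching the conclusion above.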

6 Fuzzy Linear Programming

In this chapter, starting from general fuzzy linear programming, we first discuss how to solve the optimal-judgment problem in Zimmermann's algorithm; then we put forward "the more-for-less paradox" of fuzzy linear programming, inquire into programming with various fuzzy coefficients, and study a new linear programming model with T-fuzzy variables. Finally we make some extensions to fuzzy linear programming.

6.1 Fuzzy Linear Programming and Its Algorithm

Suppose that x = (x1, x2, ···, xn)^T is an n-dimensional decision vector, c = (c1, c2, ···, cn) an n-dimensional objective coefficient vector, A = (aij) (1 ≤ i ≤ m; 1 ≤ j ≤ n) an m × n constraint coefficient matrix, and b = (b1, b2, ···, bm)^T an m-dimensional constant vector. Fuzzifying the objective and constraint functions of the ordinary linear programming, we obtain

m̃ax (or m̃in) z = cx
s.t. Ax ≲ b, x ≥ 0,        (6.1.1)

which we call a fuzzy linear programming. Let rank(A) = m. Here "≲" denotes the fuzzy version of "≤" with the linguistic interpretation "essentially smaller than or equal to" [Zim76][LL01], and m̃ax represents fuzzy maximizing. We write cx = Σ_{j=1}^n cj xj and Ax = (Σ_{j=1}^n aij xj)_{1≤i≤m}.

B.-Y. Cao: Optimal Models & Meth. with Fuzzy Quantities, STUDFUZZ 248, pp. 139–191. c Springer-Verlag Berlin Heidelberg 2010 springerlink.com


The membership function of the fuzzy objective g̃(x) is

μG̃(x) = g̃(Σ_{j=1}^n cj xj) =
  { 0,                               when Σ_{j=1}^n cj xj ≤ z0,
    (1/d0)(Σ_{j=1}^n cj xj − z0),    when z0 < Σ_{j=1}^n cj xj ≤ z0 + d0,        (6.1.2)
    1,                               when Σ_{j=1}^n cj xj > z0 + d0.

Writing t0 = Σ_{j=1}^n cj xj, the image of g̃(t0) is shown in Figure 6.1.1.

Fig. 6.1.1. Image of g̃(t0): μ rises linearly from 0 at t0 = z0 to 1 at t0 = z0 + d0.

Fig. 6.1.2. Image of f̃(ti): μ falls linearly from 1 at ti = bi to 0 at ti = bi + di.

The membership functions of the fuzzy constraints f̃(x) are

μS̃i(x) = f̃(Σ_{j=1}^n aij xj) =
  { 1,                                  when Σ_{j=1}^n aij xj ≤ bi,
    1 − (1/di)(Σ_{j=1}^n aij xj − bi),  when bi < Σ_{j=1}^n aij xj ≤ bi + di,        (6.1.3)
    0,                                  when Σ_{j=1}^n aij xj > bi + di.

Writing ti = Σ_{j=1}^n aij xj, the image of f̃(ti) is shown in Figure 6.1.2, where di ≥ 0 (0 ≤ i ≤ m) is a flexibility index fixed by an appropriate choice.

Consider a symmetric-form fuzzy linear programming (6.1.1), written μS̃ = S̃f and μG̃ = M̃f; we call them the conditional and unconditional fuzzy superiority sets of f concerning constraint S̃, respectively.


6.1.1 Replacement Solution Method in Fuzzy Linear Programming

Theorem 6.1.1. For a symmetric-type programming, we have

max_{x∈X} μD̃(x) = max_{α∈[0,1]} (α ∧ max_{x∈Sα} μG̃(x)).        (6.1.4)

Proof: By the Decomposition Theorem we write the fuzzy constraint S̃ as μS̃(x) = ∨_{α∈[0,1]} (α ∧ Sα(x)); then

μD̃(x) = μG̃(x) ∧ μS̃(x) = μG̃(x) ∧ [∨_{α∈[0,1]} (α ∧ Sα(x))] = ∨_{α∈[0,1]} [μG̃(x) ∧ (α ∧ Sα(x))],

where Sα(x) = 1 if x ∈ Sα and Sα(x) = 0 if x ∉ Sα. Hence

max_{x∈X} μD̃(x) = ∨_{x∈X} ∨_{α∈[0,1]} [μG̃(x) ∧ (α ∧ Sα(x))] = ∨_{α∈[0,1]} {α ∧ [∨_{x∈X} (μG̃(x) ∧ Sα(x))]},

while

∨_{x∈X} [μG̃(x) ∧ Sα(x)] = {∨_{x∈Sα} [μG̃(x) ∧ Sα(x)]} ∨ {∨_{x∉Sα} [μG̃(x) ∧ Sα(x)]} = ∨_{x∈Sα} μG̃(x).

Therefore (6.1.4) is certified.

For the sake of convenience, let
(1) ϕ : [0,1] → [0,1], ϕ(α) = max_{x∈Sα} μG̃(x);
(2) ψ : [0,1] → [0,1], ψ(α) = α ∧ ϕ(α).

Obviously ϕ has the properties:
1° ϕ(0) = max_{x∈X} μG̃(x);
2° ϕ is a gradually decreasing function.

Asai, Tanaka et al. have given a sufficient condition for the continuity of ϕ [TOA73]: if the fuzzy constraint S̃ is a strictly convex fuzzy set, then the function ϕ is continuous on [0,1].

Theorem 6.1.2. If ϕ is continuous on [0,1], then ϕ has a unique fixed point.

Proof: Let f(α) = α − ϕ(α). Since ϕ(α) is a continuous function on [0,1], f(α) is also continuous on [0,1].


Because f(1) = 1 − ϕ(1) ≥ 0, which comes from the value region of ϕ(α) lying in [0,1], and similarly f(0) = 0 − ϕ(0) < 0, a point α* exists for the continuous function f(α) on [0,1] such that f(α*) = 0, i.e., α* = ϕ(α*).

Now we prove uniqueness. Suppose to the contrary that α1* and α2* both satisfy ϕ(α1*) = α1*, ϕ(α2*) = α2* with α1* < α2*. Since ϕ is decreasing, ϕ(α1*) ≥ ϕ(α2*), i.e., α1* ≥ α2*, which is impossible; hence α1* = α2*.

Theorem 6.1.3. The fixed point α* of the continuous function ϕ(α) is also a fixed point of the function ψ(α), i.e., ψ(α*) = α*.

Proof: ψ(α*) = α* ∧ ϕ(α*) = α* ∧ α* = α*.

Theorem 6.1.4. If ϕ is continuous, then max_{x∈X} μD̃(x) = ψ(α*) = α* for the fuzzy decision μD̃(x), where α* is the fixed point of ϕ.

Proof: Because max_{x∈X} μD̃(x) = max_{α∈[0,1]} ψ(α) and ψ(α*) = α* ∧ ϕ(α*) = α*, it only remains to prove max_{α∈[0,1]} ψ(α) = ψ(α*).

(1) When α ≤ α*: ϕ(α) ≥ ϕ(α*) = α* ≥ α, so ψ(α) = α ∧ ϕ(α) = α ≤ α* = ψ(α*).
(2) When α ≥ α*: ϕ(α) ≤ ϕ(α*) = α* ≤ α, so ψ(α) = α ∧ ϕ(α) = ϕ(α) ≤ ϕ(α*) = ψ(α*).

Therefore ∀α ∈ [0,1], ψ(α) ≤ ψ(α*), i.e., ψ(α*) = max_{α∈[0,1]} ψ(α).

Theorem 6.1.5. If α* is a fixed point of the continuous function ψ(α), then α* = max_{x∈X} μD̃(x); that is, the fixed point α* of ψ(α) determines the optimal judgment value x*.

From Theorem 6.1.4 one easily gets α* = max_{x∈Sα} μÃ0(x) = max_{x∈X} μD̃(x).

Thus we convert the fuzzy linear programming into a process of solving an ordinary linear programming. In (6.1.1) we only discuss the maximum problem for the objective function f(x) (to find a fuzzy minimum, convert it into finding a fuzzy maximum of −f(x)).
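The fixed-point machinery above can be illustrated with a toy computation. The sketch below assumes a hypothetical continuous decreasing ϕ (standing in for α ↦ max_{x∈Sα} μG̃(x), which in practice requires solving a linear programming for each α) and runs the replacement iteration α_{k+1} = α_k + γ_k ε_k described in the concrete steps that follow:

```python
# Fixed-point iteration for alpha* = phi(alpha*), cf. Theorems 6.1.2-6.1.5.
# phi is a hypothetical continuous, decreasing map [0,1] -> [0,1] used only
# for illustration; in the text phi(alpha) = max over S_alpha of mu_G(x).
def phi(alpha: float) -> float:
    return 1.0 - 0.6 * alpha      # toy function: phi(0) = 1, phi(1) = 0.4

def replacement_iteration(phi, alpha=0.5, gamma=0.8, eps=1e-10, max_iter=1000):
    for _ in range(max_iter):
        err = phi(alpha) - alpha              # epsilon_k = g_k - alpha_k
        if abs(err) < eps:
            break
        alpha = min(1.0, max(0.0, alpha + gamma * err))
    return alpha

alpha_star = replacement_iteration(phi)
# unique fixed point of 1 - 0.6*alpha is alpha* = 1/1.6 = 0.625
```

Because ϕ is decreasing, the update is a contraction for suitable γ, so the error ε_k shrinks geometrically — the same behavior the replacement accuracy ε exploits in Steps 2°–3° below.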

6.1 Fuzzy Linear Programming and Its Algorithm

143

The concrete steps of the solution to (6.1.1) are as follows.

1° Solve the two linear programmings

(I) min cx, s.t. Ax ≤ b, x ≥ 0;    (II) max cx, s.t. Ax ≤ b, x ≥ 0,

obtaining the minimum m = min cx and the maximum M = max cx, respectively. If zero lies in the feasible region of Problem (I) and no coefficient of c is negative, then m = 0 can be obtained directly.

2° Determine a replacement accuracy ε > 0. According to Theorem 6.1.1, take α1 ∈ (0,1), set k = 1, and change the problem into the linear programming

max μÃ0(x)
s.t. Ax ≤ b_{αk}, x ≥ 0,

where μÃ0(x) = (cx − m)/(M − m) and b_{αk} = ((1 − αk)p1 + b1, (1 − αk)p2 + b2, ···, (1 − αk)pm + bm). We get a maximum gk = max_{x∈S_{αk}} μÃ0(x).

3° Calculate the error εk = gk − αk. If |εk| < ε, go to Step 4°. Otherwise set α_{k+1} = αk + γk εk, where γk is a replacement modifying coefficient to be selected appropriately so that 0 ≤ α_{k+1} ≤ 1; then change k into k + 1 and turn to Step 2°.

4° Let α* = αk and solve the linear programming

max μÃ0(x)
s.t. Ax ≤ b_{α*}, x ≥ 0.

By Theorem 6.1.5, the obtained optimal solution set is an optimal solution to (6.1.1) (determination judgment).

Theoretically there are uncountably many values of α in [0,1] at Step 3°, which cannot be compared by one-by-one calculation. To overcome this we apply the concept and theory of a fixed point.

6.1.2 Zimmermann Algorithm for Fuzzy Linear Programming

Reconsider problem (6.1.1). In order to find an optimal solution of the fuzzy objective function under the fuzzy constraint, we can convert the fuzzy objective function into a fuzzy constraint condition cx ≳ z0. Correspondingly, there is a fuzzy set G̃ ∈ F(X) (the fuzzy objective set) in X whose membership function is (6.1.2), and to every constraint condition Σ_{j=1}^n aij xj ≲ bi there corresponds a fuzzy set S̃i in X whose membership function is (6.1.3). Let S̃ = S̃1 ∩ S̃2 ∩ ··· ∩ S̃m ∈ F(X); we call it the fuzzy constraint set corresponding to the constraint condition Ax ≲ b, x ≥ 0. When di = 0 (1 ≤ i ≤ m), S̃ becomes an ordinary constraint set S, and "≲" becomes "≤" in the constraint equations.

Definition 6.1.1. Suppose μG̃(x), μS̃i(x) are in turn the membership functions of the fuzzy objective and the i-th fuzzy constraint. Then the fuzzy set D̃ satisfying μD̃(x) = μG̃(x) ∧ (∧_{i=1}^m μS̃i(x)), x ≥ 0, is the fuzzy decision of (6.1.1), and a point x* satisfying μD̃(x*) = max_{x∈X} μD̃(x) is an optimal solution of (6.1.1).

The fuzzy programming (6.1.1) can be written as

−cx ≲ −z0,
Ax ≲ b,        (6.1.5)
x ≥ 0,

where z0 is an expected value of the objective and is a constant. It is easy to see that at μS̃(x) = 1 we may have μG̃(x) = 0; we hope to make the objective value bigger than z0, but this must be weighed against μS̃(x). Caring for the fuzzy constraint set S̃ and the fuzzy objective set G̃ on both sides, by the definition we use the fuzzy judgment D̃ = G̃ ∩ S̃, i.e.,

μD̃(x) = μG̃(x) ∧ μS̃(x) = μG̃(x) ∧ [∧_{i=1}^m μS̃i(x)] = ∧_{i=0}^m μS̃i((Bx)i) = min_{0≤i≤m} ((bi − (Bx)i)/di),        (6.1.6)

where (Bx)i denotes the element of the matrix Bx in the i-th row, B = (−c, A)^T, b = (−z0, b)^T.

Let α = min_{0≤i≤m} ((bi − (Bx)i)/di); then μD̃(x) = α, and we can get the following.

Theorem 6.1.6. Maximizing μD̃(x) is equivalent to the linear programming

max G = α
s.t. 1 − (1/di)(Σ_{j=1}^n aij xj − bi) ≥ α (1 ≤ i ≤ m),
     (1/d0)(Σ_{j=1}^n cj xj − z0) ≥ α,        (6.1.7)
     0 ≤ α ≤ 1, x1, ···, xn ≥ 0.

Again according to Definition 6.1.1 and Theorem 6.1.6, we obviously have:

Theorem 6.1.7. Suppose x̄* = (x1*, x2*, ···, xn*; α*)^T is an optimal solution of (6.1.7); then x* = (x1*, x2*, ···, xn*)^T is an optimal solution of (6.1.1), and they have constraint and optimization level α*.

Zimmermann initiated an algorithm for Problem (6.1.1) [Zim78]. We introduce its solution as follows:

1° First solve the ordinary linear programmings

max z = cx, s.t. Ax ≤ b, x ≥ 0    and    max z = cx, s.t. Ax ≤ b + d, x ≥ 0,

obtaining the maximum values z0 and z0 + d0, where b + d = (b1 + d1, ···, bm + dm)^T. Here z0 is the objective maximum under the strictly obeyed constraint condition Ax ≤ b (at this time the membership degree is μS̃(x) = 1), and z0 + d0 is the objective maximum when the constraint condition is relaxed to Ax ≤ b + d (at this time μS̃(x) = 0). The values z0 and z0 + d0 correspond to the two extreme cases μS̃(x) = 1 and μS̃(x) = 0; lowering the membership degree μS̃(x) adequately lets the optimal value improve, lying between z0 and z0 + d0.

2° Construct a fuzzy objective set G̃ ∈ F(X) whose membership function is like (6.1.2); hence the fuzzy judgment of (6.1.5) is that of (6.1.6). Then find the optimal point x* such that

μD̃(x*) = μG̃(x*) ∧ μS̃(x*) = ∨_{x∈X} (μG̃(x) ∧ μS̃(x)).

3° Let

G̃ ∘ S̃ = ∨_{x∈X} (μG̃(x) ∧ μS̃(x))
      = ∨_{x∈X} {α | μG̃(x) ≥ α, μS̃(x) ≥ α, 0 ≤ α ≤ 1}
      = ∨_{x∈X} {α | μG̃(x) ≥ α; μS̃1(x) ≥ α, ···, μS̃m(x) ≥ α, 0 ≤ α ≤ 1}.

According to Theorem 6.1.6, this is the ordinary linear programming with the parameter

max G = α
s.t. Σ_{j=1}^n aij xj + di α ≤ bi + di (1 ≤ i ≤ m),
     Σ_{j=1}^n cj xj − d0 α ≥ z0,
     0 ≤ α ≤ 1, x1, ···, xn ≥ 0.

We find its optimal solution x̄* = (x1*, x2*, ···, xn*; α*)^T by the simplex method; thus the optimal point x* = (x1*, x2*, ···, xn*)^T of (6.1.1) is obtained by Theorem 6.1.7. Correspondingly, the objective function value is z* = cx*, and the optimal level is μD̃(x*) = α*.
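The parametric program of Step 3° is an ordinary LP and can be handed to any solver. Below is a minimal SciPy sketch on a small made-up instance — max x1 + x2 with x1 + 2x2 ≲ 10 (d1 = 2) and 3x1 + x2 ≲ 12 (d2 = 3), for which Step 1° gives z0 = 6.4 (strict) and z0 + d0 = 7.8 (relaxed); none of these numbers come from the book:

```python
from scipy.optimize import linprog

# Zimmermann step-3 parametric LP:  max alpha
# s.t. a_i.x + d_i*alpha <= b_i + d_i   and   c.x - d0*alpha >= z0.
z0, d0 = 6.4, 1.4
res = linprog(
    c=[0, 0, -1],                      # variables (x1, x2, alpha); max alpha
    A_ub=[[1, 2, 2],                   # x1 + 2*x2 + d1*alpha <= b1 + d1 = 12
          [3, 1, 3],                   # 3*x1 + x2 + d2*alpha <= b2 + d2 = 15
          [-1, -1, d0]],               # -(x1 + x2) + d0*alpha <= -z0
    b_ub=[12, 15, -z0],
    bounds=[(0, None), (0, None), (0, 1)],
)
x1, x2, alpha = res.x                  # alpha = 0.5, x = (3.2, 3.9), z = 7.1
```

That the solver returns α = 0.5 here is consistent with Section 6.2's observation that the fuzzy decision is 0.5 whenever one optimal basis serves both the strict and the relaxed programming.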

6.2 Expansion on the Optimal Solution of Fuzzy Linear Programming

6.2.1 Introduction

We consider the form of linear programming with fuzzy constraint in (6.1.1):

(L̃P)  max z = cx
      s.t. Ax ≲ b, x ≥ 0;

its corresponding parametric linear programming is

(LPα)  max c^T x
      s.t. Ax ≤ b + (1 − α)d, x ≥ 0,

where α ∈ [0,1]. Let x^(α), α ∈ [0,1], denote an optimal solution to the linear programming (LPα), Bα an optimal basis, and zα an optimal value. After Zimmermann's algorithm [Zim78] was given, people usually simplify it and obtain the optimal value at α* = 0.5 [Fu90], [Pan87], [LL97]. However, this result holds when the optimal basis of (LP0) is identical to that of (LP1). If the optimal basis of (LP0) is not identical to that of (LP1), what is the value of α* when the optimal solution is obtained?

6.2.2 Relevant Theorems of the Parametric Linear Programming (LPα)

Lemma 6.2.1. Assume x^(1) = (x1^(1), ···, xm^(1), 0, ···, 0)^T, whose corresponding optimal basis B1 consists of the first m columns of A. If B1^{-1}(b + d) ≥ 0 does not hold, then

x^(0) ≠ [B1 N; 0 I]^{-1} [b + d; 0].


Corollary 6.2.1. Suppose 0 ≤ α1 < α2 ≤ 1 and, without loss of generality, let x^(α2) = (x1^(α2), ···, xm^(α2), 0, ···, 0)^T, whose corresponding optimal basis Bα2 consists of the first m columns of A. If Bα2^{-1}(b + (1 − α1)d) ≥ 0 does not hold, then

x^(α1) ≠ [Bα2 N; 0 I]^{-1} [b + (1 − α1)d; 0].

Theorem 6.2.1. Let 0 ≤ α1 < α2 ≤ 1. Suppose that the optimal solution to the linear programming (LPα2) is x^(α2) = (x1^(α2), ···, xm^(α2), 0, ···, 0)^T and the corresponding optimal basis is Bα2.

(1) If Bα2^{-1}(b + (1 − α1)d) ≥ 0, then x = [Bα2^{-1}(b + (1 − α1)d); 0] is the optimal solution to (LPα1).
(2) If Bα2^{-1}(b + (1 − α1)d) ≥ 0 does not hold, then c^T [Bα2^{-1}(b + (1 − α1)d); 0] > zα1.

Proof: (1) It can be immediately proved by the simplex method of linear programming.

(2) Without loss of generality we only consider α1 = 0, α2 = 1; that is, we only prove that if B1^{-1}(b + d) ≥ 0 does not hold, then c^T [B1^{-1}(b + d); 0] > z0 = c^T x^(0).

Transform x by

ξ = [B1 N; 0 I] x,  i.e.  ξi = Σ_{j=1}^n aij xj (1 ≤ i ≤ m),  ξi = xi (m + 1 ≤ i ≤ n).

Since B1 is a feasible basis of (LP1), this transformation is of full rank. Therefore the objective function can be transformed into a function of ξ:

f(x) = c^T x = c^T [B1^{-1} −B1^{-1}N; 0 I] ξ = c̄1 ξ1 + ··· + c̄n ξn.

According to Theorem 2 of [Fu90], we have c̄i ≥ 0 (1 ≤ i ≤ m) and c̄i ≤ 0 (m + 1 ≤ i ≤ n).

Now consider the linear programming (LP0). By Lemma 6.2.1 we obtain

x^(0) ≠ [B1 N; 0 I]^{-1} [b + d; 0].

Since x^(0) is the optimal solution to (LP0), Ax^(0) ≤ b + d.


We obtain that at least one of the following inequalities holds:

ξi^(0) = Σ_{j=1}^n aij xj^(0) < bi + di (1 ≤ i ≤ m);
ξi^(0) = xi^(0) > 0 (m + 1 ≤ i ≤ n),

such that

f(x^(0)) = Σ_{i=1}^n c̄i ξi^(0) < Σ_{i=1}^m c̄i (bi + di) = c^T [B1^{-1}(b + d); 0],

which proves (2).

(zα* − z1)/d0 ≥ α*, (c^T x* − z1)/d0 > α*. Let α2 = (zα* − z1)/d0 and take ᾱ = min(α1, α2). Then

Ax^(α1) = b + (1 − α1)d ≤ b + (1 − ᾱ)d

and

(c^T x^(α1) − z1)/d0 ≥ ᾱ > α*,

i.e., Ax^(α1) + ᾱd ≤ b + d and c^T x^(α1) − d0 ᾱ ≥ z1. So (ᾱ, x^(α1)) is a feasible solution, but ᾱ > α*, which contradicts the conditions in this theorem; this completes the proof.

Therefore, for the fuzzy linear programming we only consider the optimal solution and the optimal value zα of the linear programming (LPα); that is, we only consider the function zα. Moreover, the membership function of the fuzzy objective set is defined as Cα: zα = z1 + d0 α, where d0 = z0 − z1; it is a simple fuzzy number whose image is a straight line. Therefore, for the fuzzy linear programming with the "intersection" operation as the fuzzy decision, the optimal solution equals the intersection point of Figure 6.2.1 (or Figure 6.2.2) with this straight line: the intersection of the objective line Sα: zα = z1 + d0 α with the constraint function zα = cBα^T Bα^{-1}(b + (1 − α)d).

From the above results, we have the following conclusions.

Theorem 6.2.4. Suppose that B0 and B1 are optimal bases of (LP0) and (LP1), respectively.

1) If B0^{-1} b ≥ 0 or B1^{-1}(b + d) ≥ 0, then the fuzzy decision of (L̃P) is α = 0.5.
2) If neither B0^{-1} b ≥ 0 nor B1^{-1}(b + d) ≥ 0 holds, then the fuzzy decision of (L̃P) satisfies α > 0.5.

Proof: 1) When B0^{-1} b ≥ 0, from Theorem 6.2.2 the relation between zα and α in the linear programming (LPα) is

zα = cBα^T Bα^{-1}(b + d) − α cBα^T Bα^{-1} d = cB1^T B1^{-1}(b + d) − α cB1^T B1^{-1} d.


Its intersection with the objective set Sα: zα = z1 + d0 α = cB1^T B1^{-1} b + α cB1^T B1^{-1} d is α = 0.5, zα = cB1^T B1^{-1}(b + 0.5d). By a similar argument we can prove that the optimal solution is α = 0.5 when B1^{-1}(b + d) ≥ 0.

2) When the two conditions in 1) are not satisfied, from Theorem 6.2.2 the function zα = cBα^T Bα^{-1}(b + d) − α cBα^T Bα^{-1} d is a broken line whose slope increases as α decreases. Moreover, it is a concave function, and the membership function of the fuzzy objective set is a monotonically increasing line segment of slope d0, i.e., the segment AB in Figure 6.2.3. The intersection of zα with the membership function of the objective set is shown in Figure 6.2.3.

Fig. 6.2.3. The case where neither B1^{-1}(b + d) ≥ 0 nor B0^{-1} b ≥ 0 holds

Link (0, z0) and (1, z1); the resulting line segment CD stands under the broken line C̃D. Obviously CD intersects AB at the point E(0.5, (z0 + z1)/2). Therefore, when α ≤ 0.5, AB and the broken line C̃D have no intersection point; otherwise the concavity of zα would be contradicted. It follows that their intersection point satisfies α > 0.5, i.e., the fuzzy decision α > 0.5. So the proof is complete.

It is easy to see from Theorem 6.2.1 and Theorem 6.2.2 that the function zα = cBα^T Bα^{-1}(b + d) − α cBα^T Bα^{-1} d is a piecewise function, expressed as

zα = cB1^T B1^{-1}(b + d) − α cB1^T B1^{-1} d        (6.2.1)

and

zα = cB0^T B0^{-1}(b + d) − α cB0^T B0^{-1} d,        (6.2.2)

when the function zα passes through (1, z1) and (0, z0), respectively. The intersection point of the two straight lines above is


α' = [cB1^T B1^{-1}(b + d) − cB0^T B0^{-1}(b + d)] / [cB1^T B1^{-1} d − cB0^T B0^{-1} d].

Suppose that B0^{-1}(b + (1 − α')d) ≥ 0; then we also have B1^{-1}(b + (1 − α')d) ≥ 0. In fact, if B1^{-1}(b + (1 − α')d) ≥ 0 did not hold, then since B0^{-1}(b + (1 − α')d) ≥ 0, zα' is an optimal value of (LPα'); by Theorem 6.2.1 and Theorem 6.2.2 we obtain c^T x̄ > zα', i.e., zα' = cB1^T B1^{-1}(b + (1 − α')d) > zα', which is self-contradictory; so B1^{-1}(b + (1 − α')d) ≥ 0. Therefore the function zα is the piecewise function

zα = { cB0^T B0^{-1}(b + d) − α cB0^T B0^{-1} d,  0 ≤ α ≤ α';
       cB1^T B1^{-1}(b + d) − α cB1^T B1^{-1} d,  α' ≤ α ≤ 1.

It follows from the above theorems that the optimal solution is the intersection point of Sα: zα = z1 + d0 α and zα.

However, when B0^{-1}(b + (1 − α')d) ≥ 0 does not hold, this method for (L̃P) becomes very complicated; we can then obtain the optimal solution of the fuzzy linear programming by solving the corresponding linear programming (L̃P) directly.

We suggest the following algorithm for (L̃P):

1° Obtain the optimal solutions x^(0), x^(1) of the linear programmings (LP0), (LP1). Denote their corresponding optimal bases by B0, B1, the corresponding objective coefficients by cB0, cB1 and the optimal values by z0, z1, respectively.

2° Compute the intersection point α' of the two straight lines zα crossing through (0, z0) and (1, z1), respectively.

3° Determination. If B0^{-1}(b + (1 − α')d) ≥ 0, go to 4°; otherwise, go to 7°.

4° Compute the intersection point (α1, zα1) of the function zα and Sα. If α' ≥ α1, go to 5°; if α' < α1, go to 6°.

5° Write α = (z0 − z1)/(z0 − z1 + cB0^T B0^{-1} d); the optimal solution to (L̃P) is x = [B0^{-1}(b + (1 − α)d); 0], with optimal value zα1 = cB0^T B0^{-1}(b + (1 − α)d). It ends.

6° Write α = cB1^T B1^{-1} d / (z0 − z1 + cB1^T B1^{-1} d); the optimal solution to (L̃P) is x = [B1^{-1}(b + (1 − α)d); 0], with optimal value zα1 = cB1^T B1^{-1}(b + (1 − α)d). It ends.


%) and we obtain optimal solution x of the 70 Solve linear programming (LP % (LP ) and the optimal value z. It ends. If the intersection point α satisﬁes the condition B0−1 (b + (1 − α )d) 0, it is easy to get a conclusion as follows. Theorem 6.2.5. Assumption condition of 2) holds in Theorem 6.2.4, α is an intersection point of (6.2.1) and (6.2.2). If B0−1 (b + (1 − α )d) 0, then the linear programming (LPα ) is degenerative [Cao91c], and that is the basic variable B0−1 (b + (1 − α )d) in optimal basic solution with a zero value. 6.2.4 Example Example 6.2.1: Find max x1 + x2 s.t. x1 + 2x2 100, x1 50,

(6.2.3)

x2 20, x1 0, x2 0. Step 1. Let ⎛ d = (0, 5, 5)⎞T . Then optimal basic of corresponding (LP1 ) is B1 , 1 −1 −2 and B1−1 = ⎝ 0 1 0 ⎠ , cB −1 = (0, 1, 1). 1 0 0 1 Solve linear programming max x1 + x2 s.t. x1 + 2x2 100 x1 55 x2 25 x1 0, x2 0 (1)

(2)

and we get an optimal solution x(1) = (x1 , x2 , y3 ) = (55, 22.5, 2.5) as well as an optimal value z1 = 77.5. Let d = (0, 2, 2)T . Then optimal basic of corresponding (LP0 ) is B0 , and ⎛ ⎞ 0.5 −0.5 0 B0−1 = ⎝ 0 1 0 ⎠ , cB −1 = (1, 1, 0), 0 0.5 0.5 1 and optimal value is z0 = 70. Step 2. Solve α1 = we get α1 =

1 . 3

cTB1 B1−1 (b + d) − cTB0 B0−1 (b + d) cTB0 B0−1 d − cTB1 B1−1 d

,


Step 3. Since

B0^{-1}(b + (1 − α')d) = [0.5 −0.5 0; 0 1 0; −0.5 0.5 1] (100, 50 + 10/3, 20 + 10/3)^T = (70/3, 160/3, 0)^T ≥ 0,

we turn to Step 4.

Step 4. The optimal value zα of the parametric linear programming (LPα), as a function of the parameter α, is

zα = { 80 − 10α,     1/3 ≤ α ≤ 1;
       77.5 − 2.5α,  0 ≤ α ≤ 1/3,

and the objective function is Sα = 70 + 7.5α. Solving for the intersection of the two functions, we obtain the intersection point (4/7, 520/7). Since 4/7 ≥ 1/3, we have

xB1 = B1^{-1}(b + (1 − α)d) = (25/7, 365/7, 155/7)^T.

Therefore the optimal solution of the fuzzy linear programming (6.2.3) is obtained as α = 4/7, x1 = 365/7, x2 = 155/7, and the optimal value is z = 520/7.

6.2.5 Conclusion

From the relation between the linear programming (L̃P) and the parameter α, we know that the optimal problem of (L̃P) can be transformed into solving the intersection of two linear functions. The fuzzy optimal solution is obtained directly from the optimal solutions x^(1), x^(0) of the programmings (LP1) and (LP0) and the optimal bases B1 and B0, so it is unnecessary to solve a linear programming more complex than (LP1) and (LP0).
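The Step 4 arithmetic of Example 6.2.1 can be checked with exact rational arithmetic; a small Python verification (the zα branch, Sα and B1^{-1} are taken from the example):

```python
from fractions import Fraction as F

# Example 6.2.1: intersect S_alpha = z1 + d0*alpha with the active branch
# z_alpha = 80 - 10*alpha of the piecewise optimal-value function.
z0, z1 = F(155, 2), F(70)             # 77.5 and 70
d0 = z0 - z1                          # 7.5
# branch through (1, z1) with slope -c_B1 B1^{-1} d = -10:
alpha = (F(80) - z1) / (F(10) + d0)   # solves 80 - 10a = 70 + 7.5a
z = z1 + d0 * alpha

# basic variables x_B1 = B1^{-1}(b + (1 - alpha)d), basis (y1, x1, x2)
rhs = [F(100), F(50) + (1 - alpha) * 5, F(20) + (1 - alpha) * 5]
y1 = rhs[0] - rhs[1] - 2 * rhs[2]     # first row of B1^{-1} is (1, -1, -2)
x1, x2 = rhs[1], rhs[2]
assert (y1, x1, x2) == (F(25, 7), F(365, 7), F(155, 7))
assert alpha == F(4, 7) and z == x1 + x2 == F(520, 7)
```

Exact fractions reproduce α = 4/7, x1 = 365/7, x2 = 155/7 and z = 520/7 with no rounding.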

6.3 Discussion of the Optimal Solution to Fuzzy-Constraint Linear Programming

6.3.1 Introduction

In this section we focus on the fuzzy-constraint linear programming. First we discuss the properties of the optimal solution vector and the optimal value of the corresponding parametric programming, and propose a method for the critical values. Then we present a new algorithm for the fuzzy-constraint linear programming by associating the objective function with the optimal value of the parametric programming.

The normal form of a linear programming with fuzzy constraint is, as in Section 6.2.1,

(L̃P)  max z = cx
      s.t. Ax ≲ b, x ≥ 0;

the representative method for (L̃P) is to turn it into a classical linear programming [Cao02a]. We will try to explain the number 0.5 that researchers usually find for its fuzzy decision [Cao02a][Fu90][LC02][Pan87]. Here we shall propose another algorithm for (L̃P).

6.3.2 Analysis of Fuzzy Linear Programming

Suppose xα denotes an optimal solution to (LPα), and Bα and zα an optimal basis matrix and an optimal value of (LPα), respectively; then we consider

max z = cT x s.t. Ax b + (1 − α)d, x 0,

where α is a parameter on the interval [0,1], d ≥ 0, and b + (1 − α)d varies with the parameter α. The optimal solution is Bα^{-1}(b + (1 − α)d). If we solve (LPα) by the simplex method, the discriminant numbers σ = cN − cB B^{-1}N bear no relationship to the parameter α, so the variation of the optimal basis matrix is decided only by xα.

6.3.2.1 Properties of the Parametric Linear Programming

Definition 6.3.1. Let B be one of the optimal basis matrices of (LPα). If an interval [α1, α2] exists such that B is an optimal basis matrix of (LPα) for every α ∈ [α1, α2] while B is not optimal for any α ∉ [α1, α2], we call α1 and α2 critical values of (LPα) and [α1, α2] a characteristic interval.

Theorem 6.3.1. (LPα) has finitely many characteristic intervals on the interval [0,1].

Proof: Assume B is an optimal basis matrix of (LPα) and there are two characteristic intervals [α_{i−1}, α_i] and [α_{i+1}, α_{i+2}] (α_i < α_{i+1}) corresponding to B. The optimal solution to (LPα) is (xB, xN)^T, where xB = B^{-1}(b + (1 − α)d) ≥ 0 for α ∈ [α_{i−1}, α_i] ∪ [α_{i+1}, α_{i+2}] and xN = 0. Since B^{-1}(b + (1 − α)d) is linear in α and nonnegative at α_i and at α_{i+1}, also xB = B^{-1}(b + (1 − α)d) ≥ 0 when α ∈ [α_i, α_{i+1}]; this means the optimal matrix of (LPα) is also B on [α_i, α_{i+1}], so the characteristic interval on which the optimal matrix keeps invariant is [α_{i−1}, α_{i+2}]. Hence each optimal matrix has only one corresponding characteristic interval. Because the coefficient matrix of (LPα) keeps invariant on the interval [0,1], and an


optimal matrix is finite, the number of characteristic intervals is finite. This means (LPα) has finitely many characteristic intervals on the interval [0,1].

Theorem 6.3.2. Let B be an optimal basis matrix of (LPα) on a characteristic interval [α1, α2]. If (B^{-1}b)i ≠ 0 (1 ≤ i ≤ m), then

α1 = max{ [B^{-1}(b + d)]i / (B^{-1}d)i, 0 | (B^{-1}d)i < 0 (1 ≤ i ≤ m) },        (6.3.1)

α2 = min{ [B^{-1}(b + d)]i / (B^{-1}d)i, 1 | (B^{-1}d)i > 0 (1 ≤ i ≤ m) }        (6.3.2)

is derived, where (B^{-1}(b + d))i and (B^{-1}d)i are the i-th components of B^{-1}(b + d) and B^{-1}d, respectively.

Proof: We can use partitioned matrices to represent the simplex method for a linear programming [LL97]:

[B N b + (1 − α)d; cB cN z]
  ⇒ [I B^{-1}N B^{-1}(b + (1 − α)d); cB cN z]
  ⇒ [I B^{-1}N B^{-1}(b + (1 − α)d); 0 cN − cB B^{-1}N z − cB B^{-1}(b + (1 − α)d)],

where N is the non-basis matrix corresponding to B; there is no relationship between the variable α and the discriminant numbers. Only B^{-1}(b + (1 − α)d) ≥ 0 is required in order to keep the optimal matrix of (LPα) invariant. This means ∀i, [B^{-1}(b + (1 − α)d)]i ≥ 0, i.e., ∀i, [B^{-1}(b + d)]i − α(B^{-1}d)i ≥ 0. By solving this inequality we obtain α ∈ [α1, α2], where α1 and α2 are represented by (6.3.1) and (6.3.2). Obviously the optimal matrix of (LPα) will change at α > α2 or α < α1; therefore the characteristic interval corresponding to the optimal basis matrix B is [α1, α2].

Based on the above conclusion, we can easily get the properties of the optimal value function zα as follows.

Property 6.3.1. Let B be an optimal matrix of (LPα) on the characteristic interval [αi, αj]. Then xα = B^{-1}(b + (1 − α)d) (αi ≤ α ≤ αj) is a linear vector function of the variable α, and the optimal value function zα = cB B^{-1}(b + (1 − α)d) is a linear function of α that decreases as α increases.

Property 6.3.2. The optimal value function zα of (LPα) is continuous on the interval [0,1].


6.3.2.2 Optimal Solution to Fuzzy Linear Programming

Theorem 6.3.3. Let S̃ be the fuzzy constraint and G̃ the fuzzy objective function on domain X; then the optimal solution x* to the fuzzy optimal set D̃ = G̃ ∧ S̃ satisfies

μD̃(x*) = max_{x∈X} μD̃(x) = max_{0≤α≤1} {α ∧ max_{x∈Sα} μG̃(x)},

where Sα = {x | x ∈ X, μS̃(x) ≥ α} [Cao02a].

The fuzzy objective function can be defined as Gα: zα = z1 + d0 α; we can use the intersection of the fuzzy objective function Gα: zα = z1 + d0 α and the fuzzy constraint curve Sα: zα = cBα Bα^{-1}(b + (1 − α)d) to find an optimal decision of (L̃P), as shown in Figure 6.3.1.

Fig. 6.3.1. The intersection of Gα and Sα

6.3.3 Algorithm to Fuzzy Linear Programming

Let z1 be the optimal value of (LP1), z0 the optimal value of (LP0), and d0 = z0 − z1 > 0. Based on the above conclusions, we give a new algorithm for fuzzy linear programming as follows.

Step 1. Solve the linear programmings (LP0) and (LP1). Let the optimal solutions be x0, x1, the optimal values z0, z1, and the optimal matrix of (LP0) be B0.

Step 2. Solve

[B0^{-1}(b + (1 − α)d)]i = 0.

Assume the solutions are α1, ···, α_{n−1} (0 < α1 < ··· < α_{n−1} < 1). Let α0 = 0, αn = 1, α = α1, k = 1.

Step 3. Solve (LPα) and let the optimal value be zα. If zα ≤ z1 + d0 α, turn to Step 4; otherwise let k = k + 1, α = αk, and turn to Step 3.


Step 4. Solve the optimal decision

α* = (z1 αk − z1 α_{k−1} − z_{αk−1} αk + z_{αk} α_{k−1}) / (z_{αk} − z_{αk−1} − αk d0 + α_{k−1} d0).

Step 5. Solve linear programming (LPα∗ ), and we can obtain an optimal solution xα∗ and an optimal value zα∗ . Example 6.3.1: Calculate max 3x1 + 5x2 s.t. 7x1 + 2x2 66, 5x1 + 3x2 61, x1 + x2 16, x1 8, x2 5, xi 0(i = 1, 2),

(6.3.3)

where d1 = d2 = d3 = d4 = 0, d5 = 7 is a ﬂexible value of a object and constraint function, respectively. We obtain z0 = 72, z1 = 49, d0 = 23 by calculating (LP0 ) and (LP1 ) corresponding to (6.3.3), respectively. The inverse matrix of the optimal matrix in (LP0 ) is B0−1 = (b1 , · · · , b5 ), where b1 = (1, 0, 0, 0, 0)T , b2 = (0, 1, 0, 0, 0)T , b3 = (−7, −5, 1, −1, 0)T , b4 = (0, 0, 0, 1, 0)T , b5 = (5, 2, −1, 1, 1)T . By calculating the equations [B0−1 (66, 61, 16, 8, 12 − 7α)T ]i = 0, (i = 1, · · · , 5), 5 2 4 , α2 = , α3 = . respectively, we obtain α1 = 14 5 7 Assume α0 = 0, α4 = 1, and we use Lindo software to solve the linear 5 =67. programming (LPα1 ) before we obtain an optimal value zα1 = z 14 5 5 > Z1 + Because Z 14 14 d0 , we must continue to solve the linear programming (LPα2 ). By calculating linear programming (LPα2 ), we obtain an optimal value zα2 = z 25 =66.04. Because z 52 > z1 + 25 d0 , we must continue to solve the linear programming (LPα3 ). By calculating linear programming (LPα3 ), we gain an access to an optimal value zα3 = z 74 =61.429. Because z 47 < z1 + 47 d0 , the optimal decision is α∗ = 0.557. By calculating (LP0.557 ), we obtain x0.557 = (6.8606, 8.8990)T


and z0.557 = 65.0768. So the optimal solution to the example is x* = (6.8606, 8.8990)T and the optimal value is z* = 65.0768.

6.3.4 Conclusion

We have seen that the optimal decision of the fuzzy constraint linear programming does not necessarily equal 0.5, and that the optimal value function of the fuzzy programming is not necessarily a single line segment. Based on the properties of the optimal value function, we have proposed a new algorithm for fuzzy constraint linear programming.
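The endpoint programs of Example 6.3.1 are small enough to check by brute force. The sketch below is an illustrative stand-in for the Lindo runs mentioned above: since the example has only two variables, it enumerates the vertices of the feasible polygon instead of running a simplex code, and reproduces z0 = 72, z1 = 49, d0 = 23 together with the breakpoint values zα.

```python
from itertools import combinations

def lp_max_2d(c, A, b):
    """Maximize c.x subject to A x <= b and x >= 0 for two variables,
    by enumerating intersections of constraint lines (vertices)."""
    rows = [list(r) for r in A] + [[-1.0, 0.0], [0.0, -1.0]]  # add x1 >= 0, x2 >= 0
    rhs = list(b) + [0.0, 0.0]
    best = None
    for i, j in combinations(range(len(rows)), 2):
        a11, a12 = rows[i]
        a21, a22 = rows[j]
        det = a11 * a22 - a12 * a21
        if abs(det) < 1e-12:
            continue  # parallel lines, no vertex
        x1 = (rhs[i] * a22 - a12 * rhs[j]) / det
        x2 = (a11 * rhs[j] - rhs[i] * a21) / det
        # keep the vertex only if it satisfies every constraint
        if all(r[0] * x1 + r[1] * x2 <= s + 1e-7 for r, s in zip(rows, rhs)):
            z = c[0] * x1 + c[1] * x2
            if best is None or z > best[0]:
                best = (z, x1, x2)
    return best

c = (3.0, 5.0)
A = [[7, 2], [5, 3], [1, 1], [1, 0], [0, 1]]
z1, _, _ = lp_max_2d(c, A, [66, 61, 16, 8, 5])    # (LP1): right-hand side b
z0, _, _ = lp_max_2d(c, A, [66, 61, 16, 8, 12])   # (LP0): b + d, with d5 = 7
print(z0, z1, z0 - z1)                            # 72.0 49.0 23.0
# optimal values at the breakpoints alpha_1, alpha_2, alpha_3 (b5 = 12 - 7*alpha)
for alpha in (5 / 14, 2 / 5, 4 / 7):
    z, _, _ = lp_max_2d(c, A, [66, 61, 16, 8, 12 - 7 * alpha])
    print(round(z, 3))                            # 67.0, 66.04, 61.429
```

Vertex enumeration is exponential in the number of constraints, so it is only a checking device for textbook-sized instances, not a replacement for the simplex method.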

6.4

Relation between Fuzzy Linear Programming and Its Dual One

6.4.1 Introduction

Consider a primal linear programming problem

min z = cx
s.t. Ax = b,
x ≥ 0,

(6.4.1)

while

max yb
s.t. yA ≤ c,
y ≥ 0

(6.4.2)

is the dual linear programming of (6.4.1) [Dan63], where x = (x1, x2, · · · , xn)T and y = (y1, y2, · · · , ym) are variable vectors, c = (c1, c2, · · · , cn) and b = (b1, b2, · · · , bm)T are constant vectors, and A = (aij)m×n is an m × n matrix. We discuss the relation between them as follows.

6.4.2 Case with Fuzzy Coefficients

Consider a linear programming with fuzzy coefficients:

min z̃ = c̃x
s.t. Ax = b,
x ≥ 0,

(6.4.3)

where c̃ is a fuzzy coefficient vector; its dual form is

max w̃ = ỹb
s.t. ỹA ≤ c̃,
ỹ ≥ 0,

(6.4.4)

where ỹ denotes a fuzzy variable vector.


Lemma 6.4.1. The dual form of (6.4.3) is (6.4.4). If there exists an optimal solution to one of them, then there exists an optimal solution to the other, and (6.4.3) and (6.4.4) have the same fuzzy optimal value for a continuous and strictly monotone function φ̃.

Proof: According to formula (1.5.3) in Section 1.5, (6.4.3) is turned into the following problem:

min cx
s.t. μφ̃(c) ≥ 1 − α, α ∈ [0, 1],
Ax = b, c ∈ R^n, x ≥ 0.

Following [Ver84], define, for all c = (c1, c2, · · · , cn) ∈ R^n, μφ̃(c) = inf_j μφ̃j(cj) (1 ≤ j ≤ l, l ≤ n). If μφ̃(c) ≥ 1 − α, then

inf_j μφ̃j(cj) ≥ 1 − α ⇐⇒ μφ̃j(cj) ≥ 1 − α (1 ≤ j ≤ l)
⇐⇒ cj ≥ μφ̃j^{-1}(1 − α).

Therefore, we have

min Σ_{j=1}^{n} cj xj
s.t. cj ≥ μφ̃j^{-1}(1 − α) (1 ≤ j ≤ n),
Ax = b, α ∈ [0, 1], x ≥ 0.

This problem is equivalent to

min Σ_{j=1}^{n} cj xj
s.t. cj = μφ̃j^{-1}(1 − α),
Ax = b, α ∈ [0, 1], x ≥ 0

⇐⇒ min μφ̃^{-1}(β)x
s.t. Ax = b, β ∈ [0, 1], x ≥ 0,

(6.4.5)

where β = 1 − α. The dual form of (6.4.5) is

max yb
s.t. yA = μφ̃^{-1}(β), β ∈ [0, 1],
y ≥ 0

(6.4.6)


⇐⇒ max yb
s.t. μφ̃(c) ≥ β,
yA = c,
y ≥ 0, β ∈ [0, 1]
⇐⇒ (6.4.4).

The lemma holds because (6.4.6) has the same parameter solutions as (6.4.4), by the equivalence of (6.4.5) with (6.4.3) and of (6.4.6) with (6.4.4), and because (6.4.3) and (6.4.4) are mutually dual problems.

6.4.3 Case with Fuzzy Variables

Consider the fuzzified linear programming

min z̃ = cx̃
s.t. Ax̃ ≥ b̃,
x̃ ≥ 0,

(6.4.7)

called a linear programming with fuzzy variables [AMA93], where x̃ = (x̃1, x̃2, · · · , x̃n)T is an n-dimensional fuzzy variable vector, 0 ≤ c ∈ R^n, b̃ ∈ (F(R))^m is a fuzzy vector, and A ∈ R^{m×n} is an m × n matrix. The dual problem of (6.4.7) is

max w̃ = yb̃
s.t. yA ≤ c, y ≥ 0,

(6.4.8)

where c ∈ R^n, A ∈ R^{m×n}, y ∈ R^m, b̃ ∈ (F(R))^m. x̃ is said to be a fuzzy feasible solution to (6.4.7) if and only if x̃ satisfies the constraints of the problem. By an optimal fuzzy solution to (6.4.7) we mean a fuzzy feasible solution x̃0 such that cx̃0 ≤ cx̃ for all x̃ in the set of fuzzy feasible solutions to (6.4.7). The relation between the fuzzy linear programming (6.4.7) and its dual programming (6.4.8) is as follows. In order to solve programming (6.4.7), we find an optimal solution to problem (6.4.8). However, (6.4.8) is in fact a linear programming with fuzzy coefficients, and we already know how to solve it. We therefore discuss the relationships between the primal and dual programmings.

Lemma 6.4.2. If x̃ is any fuzzy feasible solution to (6.4.7) and y is any feasible one to (6.4.8), then yb̃ ≤ cx̃.

Proof: Straightforward.

Lemma 6.4.3. If x̃0 is a fuzzy feasible solution to (6.4.7) and y0 is a feasible one to (6.4.8) such that y0 b̃ = cx̃0, then y0 is an optimal solution to (6.4.8) and x̃0 is a fuzzy optimal one to (6.4.7).


Proof: Straightforward.

Theorem 6.4.1. If the dual problem (6.4.8) has an optimal solution, then problem (6.4.7) has a fuzzy optimal solution.

Proof: We first transform (6.4.8) into the form

max w̃ = yb̃
s.t. yA + ys I = c, y, ys ≥ 0,

(6.4.9)

where w̃ = (w̃1, w̃2, · · · , w̃n), ys represents a slack variable and I is a unit matrix. Let A′ = (A, I)T, y′ = (y, ys), c′ = (c, 0)T. Formula (6.4.9) is simplified as follows:

max w̃ = y′b̃
s.t. y′A′ = c′, y′ ≥ 0.    (6.4.10)

Let y′B be an optimal basic solution to (6.4.10), such that w̃j − b̃j ≤ 0 for all j; thus b̃B B^{-1} A′ ≥ b̃, where B is a basic matrix corresponding to A′. If we write x̃ = b̃B B^{-1}, we can see that x̃ is a fuzzy feasible solution to (6.4.7). On the other hand, we have

z̃ = cx̃ = b̃B B^{-1} c = yB b̃B = w̃.

Hence, x̃ is an optimal solution to (6.4.7).

Lemma 6.4.4. If problem (6.4.8) has an unbounded solution, then problem (6.4.7) has no fuzzy feasible solution.

Proof: Straightforward.

We conclude that, in order to solve a linear programming with fuzzy variables, it is sufficient to solve its dual problem. We can then obtain the fuzzy optimal solution to our problem by using the theorem and lemmas of this section, and vice versa.

Let μφ̃ be (1.5.3) in Section 1.5. If a fuzzified form of (6.4.1) is (6.4.3), its primal programming with parameter is

min (m + βdn−1)x
s.t. Ax = b, β ∈ [0, 1], x ≥ 0,

(6.4.11)

where m, n are real numbers, with c ≥ m + βdn−1 ⇐⇒ c = m + βdn−1, d denoting a flexible index, β = 1 − α, and c freely fixed in the value interval [m, n]; the dual problem of (6.4.11) is

max yb
s.t. yA = m + βdn−1, β ∈ [0, 1],
y ≥ 0.

(6.4.12)


Theorem 6.4.2. Let μφ̃ : R → [0, 1] be a continuous and strictly monotone membership function. x0 is a unique solution to (6.4.1) if and only if x0 remains a parameter solution to (6.4.5) for all β ∈ [0, 1] (β = 1 − α).

Proof: Similar to the proof of Ref. [Man79], x0 is a unique solution to (6.4.1)

⇐⇒ ∀dn−1 ∈ R^n, ∃x, y, α ∈ R^{n+m+1}:
Ax − bα ≥ 0, −yA + cα = 0, y ≥ 0,
yb − cx ≥ 0, −dn−1 x + dn−1 x0 α > 0, α > 0

⇐⇒ ∀dn−1 ∈ R^n, ∃x, u, y, κ, r ∈ R^{n+m+k+2}:
−Ax + bκ + u = 0,
yA − (κc + βdn−1) = 0, −yb + cx + dn−1 x0 β + r = 0,
u, κ, r ≥ 0, β ∈ [0, 1], β + r > 0

⇐⇒ ∀dn−1 ∈ R^n, ∃x, y, κ ∈ R^{n+m+1}:
Ax ≥ bκ, cx = κcx0, κ ≥ 0,
yA = κc + βdn−1, yb ≤ (κc + βdn−1)x0, β ∈ [0, 1]

⇐⇒ ∀dn−1 ∈ R^n, ∃y, κ ∈ R^{m+1}:
yA = κc + βdn−1, yb = (κc + βdn−1)x0,
κ ≥ 0, β ∈ [0, 1]
(since cx + βdn−1 x0 ≤ yb ≤ yAx0 = (κc + βdn−1)x0)

⇐⇒ ∀dn−1 ∈ R^n, ∃y ∈ R^m:
yA = c + βdn−1, yb = (c + βdn−1)x0, β ∈ [0, 1] (letting κ = 1)

⇐⇒ ∀dn−1 ∈ R^n, ∃β ∈ [0, 1] such that a solution to (6.4.5) is x0.

Because x0 is a feasible solution to (6.4.5), ȳ is a feasible one to the dual problem (6.4.6) coming from (6.4.5), with ȳb = (m + βdn−1)x0, where ȳ = y(β). But the fuzzy solution to (6.4.7) is given by an optimal solution to the parametric linear problem [Ver84]; therefore the theorem holds.

Similar to a corollary in Ref. [Man79], we can confirm the following.

Corollary 6.4.1. The dual optimal solution y to (6.4.2) associated with a primal optimal solution x0 to (6.4.1) is unique if and only if, for a continuous and strictly monotone membership function μφ̃ : R → [0, 1] and for all β ∈ [0, 1], y remains a dual optimal parameter solution to the perturbed linear programming (6.4.12).

Theorem 6.4.3. Let μφ̃ : R → [0, 1] be a continuous and strictly monotone membership function. A solution x0 is unique to the linear programming (6.4.1) if and only if x0 is still a fuzzy optimal solution to the fuzzy linear programming (6.4.7).
Proof: From Lemma 6.4.1, we know (6.4.7) ⇐⇒ (6.4.5), so, from the result where Theorem 6.4.2 is applied to (6.4.5), x0 is a unique solution to (6.4.1) if and only if x0 remains a parameter optimal solution to (6.4.5). But the


minimization in (6.4.7) is equivalent to that in (6.4.5), and x0 is a parameter optimal solution to (6.4.5) if and only if x0 is a fuzzy optimal solution to (6.4.7).

Corollary 6.4.2. The dual optimal solution y to (6.4.2) corresponding to the primal optimal solution x0 to (6.4.1) is unique if and only if, for a continuous and strictly monotone membership function μφ̃ : R → [0, 1], ỹ is still a dual optimal solution to the programming (6.4.3).

Proof: Let μφ̃ be Formula (1.5.3) in Section 1.5. Then we have

(6.4.3) ⇐⇒ min z = cx
s.t. Ax = b, μφ̃(c) ≥ β, β ∈ [0, 1], x ≥ 0
⇐⇒ (6.4.11)

by Ref. [Ver84]. Applying Corollary 6.4.1 to (6.4.11), the conclusion holds.

Definition 6.4.1. Let μÃ0(x), μF̃(x) be the membership functions of the fuzzy objective and the fuzzy constraint. Then we call a fuzzy set D̃ satisfying μD̃(x) = μÃ0(x) ∧ μF̃(x), x ≥ 0, a fuzzy decision for the programming

c̃x ≥ b0
s.t. Ax ≤ b, x ≥ 0,

(6.4.13)

while we call a point x* satisfying μD̃(x*) = max_{x≥0} {(1 − μÃ0(x)) ∧ μF̃(x)} an

optimal solution to (6.4.13).

Theorem 6.4.4. The maximization of μD̃(x) is equivalent to the linear programming

min (m + βM0 n−1)x
s.t. Ax ≤ b1 + Bβb2−1 + dα, α, β ∈ [0, 1], x ≥ 0,    (6.4.14)

d denoting a flexible index, and M0 and B representing the lengths of the intervals [m, n] and [b1, b2], respectively.

Proof: From Formulas (1.5.3), (1.5.4) and (1.5.5) in Section 1.5, we have

max μD̃(x) ⇐⇒ max (−c̃)x
s.t. Ax ≤ b̃, x ≥ 0
⇐⇒ min c̃x
s.t. μφ̃(c) ≥ β,
Ax ≤ b + dα, μφ̃(b) ≥ β, β ∈ [0, 1], x ≥ 0
⇐⇒ (6.4.14),


where c̃, b̃ can be freely fixed in the closed intervals [m, n] and [b1, b2], and the degree of accomplishment is determined by Formula (1.5.3).

6.4.4 Conclusion

The method in this chapter indicates that a linear programming with fuzzy variables can be changed into a dual programming with fuzzy coefficients for solution, so that the problem is solved easily.
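The strong duality behind the parametric pair (6.4.11)/(6.4.12) can be illustrated on a toy instance. The data below (m = (1, 2), flexible index d = (1, 0), single constraint x1 + x2 = 1) are invented for illustration and do not come from the text; for each fixed β the primal minimum and the dual maximum coincide, which is the fact the section's equivalences rest on.

```python
# Toy parametric LP: min (m + beta*d).x  s.t.  x1 + x2 = 1, x >= 0,
# with m = (1, 2), d = (1, 0), so c(beta) = (1 + beta, 2).
def primal_value(beta):
    """Minimum over the simplex: put all weight on the cheaper coefficient."""
    c = (1.0 + beta, 2.0)
    return min(c)

def dual_value(beta):
    """Dual: max y * 1  s.t.  y <= c_j(beta) for every j."""
    c = (1.0 + beta, 2.0)
    return min(c)  # the largest y allowed by both dual constraints

for k in range(5):
    beta = k / 4
    assert abs(primal_value(beta) - dual_value(beta)) < 1e-12
print("primal and dual optimal values agree for every beta on the grid")
```

For this one-constraint instance both optima have the closed form min_j c_j(β), so the equality is transparent; for general data the same identity is exactly LP strong duality applied pointwise in β.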

6.5 Antinomy in Fuzzy Linear Programming

6.5.1 Introduction

In 1971, Charnes and Klingman initiated the more-for-less paradox, or less-for-more paradox, of the allotment model [Ck71]. If a constant b in (6.4.1) increases by d (> 0), then the objective value z decreases instead; if a constant c in (6.4.2) decreases by d (> 0), then the objective value yb increases instead. Such a strange phenomenon is called "antinomy" in mathematics. In 1986, Lin discussed the antinomy of general linear programming [Lin86] by taking its expansion. In 1987, Charnes, Duffuaa and Ryan also discussed the "more-for-less paradox" of general linear programming [CDR87]. In 1991, Yang and Jing put forward another sufficient and necessary condition for antinomy of general linear programming, and a condition for nonlinear programming where antinomy appears [YJ91]. In 1991, the author first used the method of fuzzy sets to study the antinomy of linear programming [Cao91c]. We introduce the antinomy problem in fuzzy linear programming, and present a fuzzy set method for its investigation.

6.5.2 Reason for Antinomy Emergence

Definition 6.5.1. Suppose x0 is a basic feasible solution to (6.4.1). If its basic variable values are all positive, then we call x0 a nondegenerate basic feasible solution; if some basic variable values equal zero, we call x0 a degenerate basic feasible solution. If all basic feasible solutions of the linear programming are nondegenerate, we call it nondegenerate.

Example 6.5.1: Consider finding

min z = 2x1 + 3x2 + x3 + 2x4
s.t. x1 + x2 + x3 + x4 = b′1,
4x2 + 2x3 + 6x4 = b′2,
5x1 + 6x2 + 5x3 + 4x4 = b′3,
x1, x2, x3, x4 ≥ 0,

where b′i = bi + di (i = 1, 2, 3).


If we take x1, x2, x3 as basic variables, the basis matrix B and its inverse B^{-1} are

B = [1 1 1; 0 4 2; 5 6 5],  B^{-1} = [−4 −1/2 1; −5 0 1; 10 1/2 −2].

When the assignment volume of the three products increases from b = (9, 12, 46)T to b′ = (10, 18, 50)T, the minimum cost z = cB xB decreases from z = 15 to z′ = 11. Why? If the problem is nondegenerate and a negative component exists in y = cB B^{-1}, or a certain evaluation coefficient zs < 0, then the objective function satisfies

cB B^{-1} b′ = yb′ = yb + yd < yb = cx*  (here yd = −4)

or

cB x′B = cB B^{-1} b′ = cB B^{-1} b + δ cB B^{-1} Ps = cx* + δzs < cx*,

so antinomy appears. Therefore, we have the following discussion [CDR87][Lin86]:

Corollary 6.5.1. Let a basic solution x* = (xB, xN) of (6.4.1) be a nondegenerate optimal solution. If ∃j0: zj0 < 0, then antinomy takes shape in (6.4.1).

Proposition 6.5.1. Let a basic solution x* = (xB, xN) of (6.4.1) be a nondegenerate optimal solution. Antinomy arises if and only if a negative component exists in y = cB B^{-1}.

Does the conclusion above hold if programming (6.4.1) degenerates?

Proposition 6.5.2. If (6.4.1) is a degenerate linear programming, then

min{cx | Ax = b + Σ_{j=1}^{n} εj Pj = b(ε), x ≥ 0, ε > 0 sufficiently small}    (6.5.1)

is a nondegenerate linear programming.

Theorem 6.5.1. If ε is set to 0 in any basic feasible solution to (6.5.1) with ε sufficiently small, a basic feasible solution to the degenerate linear programming (6.4.1) is obtained.


Proposition 6.5.3. If a basic solution x*(0) = (xB(0), xN) is a degenerate optimal solution to the linear programming (6.4.1), then antinomy exists if and only if a negative component exists in y = cB B^{-1}.

Proof: Consider a basic solution x*(ε) = (xB(ε), xN) of (6.5.1), where

xB(ε) = B^{-1} b0 + εB + B^{-1} N εN − B^{-1} N xN = B^{-1} b(ε) − B^{-1} N xN

is nondegenerate. According to Proposition 6.5.1, if xB(ε) is a nondegenerate optimal solution, then antinomy appears if and only if a negative component exists in y = cB B^{-1}. When ε is sufficiently small, setting ε = 0 in any basic feasible solution to (6.5.1) yields a basic feasible solution to (6.4.1). If (6.5.1) is solved with ε sufficiently small, we get a list of basic feasible solutions x(ε) = {x0(ε), x1(ε), · · · } until we reach an optimal solution x*(ε). For ε = 0, we likewise have a list of basic feasible solutions x(0) = {x0(0), x1(0), · · · } of (6.4.1). Since the coefficient matrices and the objective functions of (6.5.1) and (6.4.1) are equal, the test numbers of the basic feasible solutions xi(ε) and xi(0) are identical. Therefore, x*(0) is also an optimal solution to (6.4.1). This demonstrates that x*(ε) being a nondegenerate optimal solution to (6.5.1) is equivalent to x*(0) being a degenerate optimal one to (6.4.1). At this time, the objective function satisfies

cB B^{-1} b′(ε) = yb′(ε) − yN xN = yb0(ε) + δyT − yN xN < yb0(ε) = cx*(ε)

when a negative component exists in y, and cB B^{-1} b′(0) < cx*(0) for ε = 0. Therefore the proposition holds.

Corollary 6.5.2. Antinomy arises in (6.4.1) under the condition of Proposition 6.5.3 in the event that ∃j0 : zj0 < 0.
Proof: Because (6.4.1) and (6.5.1) have identical coefficient matrices and objective functions, and hence the same test numbers in their basic feasible solutions xi(ε) and xi(0), a negative component must exist in y in the event that zj0 = cB B^{-1} Pj0 = yPj0 < 0, with

cB xB(ε) = cB B^{-1} b0(ε) + δ cB B^{-1} Ps − cB B^{-1} N xN = cx*(ε) + δzs < cx*(ε)

(δ > 0 sufficiently small); so cB xB(0) < cx*(0) for ε = 0. Therefore the corollary holds.

In conclusion, antinomy can come into being whether or not a classical linear programming degenerates. To keep antinomy from arising, we need only change the equality sign in the constraint conditions into an inequality sign.

Proposition 6.5.4. Let μφ̃ be a continuous and strictly monotone function. If a basic solution x* = (xB, xN)T is nondegenerate in the fuzzy linear programming

min z̃ = c̃x
s.t. Ax = b, x ≥ 0,

(6.5.2)

then antinomy arises if and only if a negative component exists in the fuzzy shadow price ỹ = c̃B B^{-1}.

Proof: Necessity. Since (6.5.2) ⇐⇒ (6.4.5) and (6.4.4) ⇐⇒ (6.4.6), if ỹ = c̃B B^{-1} ≥ 0̃ ⇐⇒ y(β) = (μφ̃^{-1}(β))B B^{-1} ≥ 0, then for any T ≥ 0, when b → b + T, for any feasible solution of this problem with a soft constraint we know

μD̃(x) = μφ̃^{-1}(β)x ≥ y(b + T) ≥ yb = μφ̃^{-1}(β)x* = μD̃(x*)

from the duality theorem of ordinary parametric linear programming, such that c̃x ≥ ỹb = c̃x*.

Sufficiency. If there exists a negative component in ỹ, then ∃T ≥ 0, Σ_{i=1}^{m} ti > 0, such that ỹT < 0̃ (with b → b + δT, δ > 0). Then the basic solution of the problem with a soft constraint in basis B is

x′B = B^{-1} b′ = B^{-1} b + δB^{-1} T, xN = 0.

Since x* is nondegenerate by the assumption of the proposition, x′B ≥ 0 gives a basic feasible solution with unchanged test numbers when δ > 0 is sufficiently small. Therefore it is also an optimal solution with soft constraints, and ỹ = c̃B B^{-1} is still a fuzzy optimal solution to the dual problem. But the objective value, for all β ∈ [0, 1], satisfies

(μφ̃^{-1}(β))B B^{-1} b′ = y(β)b′ = y(β)b + δy(β)T < y(β)b = cx*
⇐⇒ c̃B B^{-1} b′ = ỹb′ = ỹb + δỹT < ỹb = c̃x*,

such that antinomy arises in (6.5.2).


Corollary 6.5.3. Let μφ̃ be a continuous and strictly monotone function. If a basic solution x* = (xB, xN) of (6.5.2) is a nondegenerate optimal solution, then the condition under which antinomy arises is ∃j0 such that z̃j0 < 0̃.

Proof: From the proof of Proposition 6.5.4, we know

sup_{Ax=b} μφ̃0(x) = sup μφ̃0(xB)
= sup_{β∈[0,1]} (μφ̃^{-1}(β))B xB
= sup_{β∈[0,1]} {(μφ̃^{-1}(β))B B^{-1} b + δ(μφ̃^{-1}(β))B B^{-1} Pj T}
< sup_{β∈[0,1]} (μφ̃^{-1}(β))B B^{-1} b.

(Because zj = (μφ̃^{-1}(β))B B^{-1} Pj < 0, where xB = B^{-1} b + δB^{-1} T, xN = 0, there must exist a negative component in y(β) = (μφ̃^{-1}(β))B B^{-1}.) Equivalently, there must be a fuzzy negative component in ỹ, i.e., z̃j = c̃B B^{-1} Pj = ỹPj < 0̃, such that

c̃B xB = c̃x* + δz̃j < c̃x*.

Proposition 6.5.5. If (6.5.2) is a degenerate fuzzy linear programming, then

min z̃ = c̃x
s.t. Ax = b + Σ_{j=1}^{n} εj Pj = b(ε),    (6.5.3)
x ≥ 0

is a nondegenerate fuzzy linear programming, where εj is a sufficiently small positive number.

Proposition 6.5.6. Let μφ̃ be a continuous and strictly monotone function. If a basic solution x*(0) = (xB(0), xN) of (6.5.2) is a degenerate optimal solution, then antinomy appears if and only if a fuzzy negative component exists in ỹ = c̃B B^{-1}.

Corollary 6.5.4. Let μφ̃ be a continuous and strictly monotone function. Suppose a basic solution x*(0) = (xB(0), xN) of (6.5.2) is a degenerate optimal solution; then the condition under which antinomy arises is ∃j0 such that z̃j0 < 0̃.

In fact, because

min c̃x
s.t. Ax = b(ε), x ≥ 0    (6.5.4)

is equivalent to

min μφ̃^{-1}(β)x
s.t. Ax = b(ε), β ∈ [0, 1], x ≥ 0,    (6.5.5)


the dual form of (6.5.5) is

max yb(ε)
s.t. yA = μφ̃^{-1}(β), β ∈ [0, 1], y ≥ 0.

Therefore, (6.5.4) is equivalent to

max ỹb(ε)
s.t. ỹA = c̃, ỹ ≥ 0̃.

From Proposition 6.5.3 and Corollary 6.5.1, we know the following properties:
a. If there exists a degenerate optimal basic solution x*(0, β) in the classical linear programming (6.5.5) with parameter variable β, then antinomy appears if and only if a negative component exists in ỹ = c̃B B^{-1}.
b. Under the condition of a, if ∃j0 : z̃j0 < 0̃, then antinomy appears in (6.5.5).

6.5.3 Example

Example 6.5.2: The fuzzy linear programming corresponding to Example 6.5.1 in this section is

min z = 2x1 + 3x2 + x3 + 2x4
s.t. x1 + x2 + x3 + x4 ≲ 9,
4x2 + 2x3 + 6x4 ≲ 12,
5x1 + 6x2 + 5x3 + 4x4 ≲ 46,
xi ≥ 0 (i = 1, · · · , 4).

(6.5.6)

Assume d1 = 1, d2 = 6, d3 = 4 and make a parametric programming; then (6.5.6) is turned into

min z = 2x1 + 3x2 + x3 + 2x4
s.t. x1 + x2 + x3 + x4 + x5 = 10,
4x2 + 2x3 + 6x4 + x6 = 18,
5x1 + 6x2 + 5x3 + 4x4 + x7 = 50,
xi ≥ 0 (i = 1, · · · , 7).

Under the condition that the basis matrix B is unchanged, the parametric optimal solution for b(α) = b + αd is

x = B^{-1} b(α) = (4 − 3α, 1 − α, 4 + 5α)T,

and the optimal value is

z = 15 − 4α.


When b increases from b = (9, 12, 46)T to b′ = (10, 18, 50)T, z decreases from z = 15 to z′ = 11. Antinomy comes into being because negative components exist in the shadow price vector y = cB B^{-1} = (−13, −1/2, 3).

6.5.4 Conclusion

On the whole, antinomy can appear in (6.5.2) whether it is a degenerate or a nondegenerate fuzzy linear programming. To prevent antinomy in fuzzy linear programming, the equality constraints can be turned into soft constraints. Overall, if the problem is changed into formula (6.4.12) for solution, the antinomy of a fuzzy linear programming no longer arises. If the optimal solution of the primal linear programming is unique, then antinomy does not exist; the same conclusion holds for solutions of the fuzzy linear programming (6.5.2). In light of Theorem 6.4.4, an ordinary linear programming is only a particular case of the fuzzy linear programming (6.4.12) with di = 0. Therefore, whether for the antinomy of linear programming or of fuzzy linear programming, it can be handled by the fuzzy set method.
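The more-for-less computation of Example 6.5.1 can be replayed exactly with rational arithmetic. The sketch below solves B xB = b and B^T y = cB by Gauss-Jordan elimination (a generic stand-in for reading off B^{-1}), confirming the cost drop 15 → 11 and the negative components of the shadow price y.

```python
from fractions import Fraction as F

def solve3(M, v):
    """Solve the 3x3 system M x = v exactly by Gauss-Jordan elimination."""
    M = [[F(e) for e in row] for row in M]
    v = [F(e) for e in v]
    for i in range(3):
        p = next(r for r in range(i, 3) if M[r][i] != 0)  # pivot row
        M[i], M[p], v[i], v[p] = M[p], M[i], v[p], v[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [M[r][k] - f * M[i][k] for k in range(3)]
                v[r] -= f * v[i]
    return [v[i] / M[i][i] for i in range(3)]

B = [[1, 1, 1], [0, 4, 2], [5, 6, 5]]   # basis columns of x1, x2, x3
cB = [2, 3, 1]

xB_old = solve3(B, [9, 12, 46])         # basic solution for b
xB_new = solve3(B, [10, 18, 50])        # basic solution for b' = b + d
z_old = sum(c * x for c, x in zip(cB, xB_old))
z_new = sum(c * x for c, x in zip(cB, xB_new))
print(z_old, z_new)                     # 15 11 -- cost drops although b grows

# shadow prices y = cB B^{-1}, obtained from B^T y = cB
Bt = [[B[r][c] for r in range(3)] for c in range(3)]
y = solve3(Bt, cB)
print(y)                                # values -13, -1/2, 3: negative components
d = [1, 6, 4]
print(sum(yi * di for yi, di in zip(y, d)))  # y.d = -4, so z' = 15 - 4 = 11
```

The negative entries of y are exactly the certificate of antinomy given by Proposition 6.5.1, and y·d = −4 accounts for the full decrease of the objective.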

6.6

Fuzzy Linear Programming Based on Fuzzy Numbers Distance

6.6.1 Introduction

In this section, we discuss linear programming with fuzzy coefficients in the constraint conditions, whose standard form is

max z = cx
s.t. Ãx ≤ b̃,
x ≥ 0,

where c = (c1, c2, · · · , cn) is an n-dimensional crisp row vector, Ã = (ãij) is an m × n fuzzy number matrix, b̃ = (b̃1, b̃2, · · · , b̃m)T is an m-dimensional fuzzy column vector, and x = (x1, x2, · · · , xn)T is a decision vector. Solving this kind of fuzzy linear programming is based on an order relation between fuzzy numbers, by which we can transform the fuzzy linear programming into a crisp linear programming.

6.6.2 Distance

A. Distance between Interval Numbers

Assume a = [a1, a2] and b = [b1, b2] are two interval numbers; then a = b ⇐⇒ a1 = b1 and a2 = b2.


Similarly to Ref. [LiuH04], we also consider the difference between corresponding points of the two intervals, giving a new definition of the distance between interval numbers.

Definition 6.6.1. Let a = [a1, a2] and b = [b1, b2] be two interval numbers. Then define

d(a, b) = ∫_{-1/2}^{1/2} | [(a1 + a2)/2 + x(a2 − a1)] − [(b1 + b2)/2 + x(b2 − b1)] | dx    (6.6.1)

as the distance between a and b.

To verify that d(a, b) satisfies the three conditions of a distance, let

f(x) = | [(a1 + a2)/2 + x(a2 − a1)] − [(b1 + b2)/2 + x(b2 − b1)] |.

Since f(x) is a simple function of x, f(x) is continuous, so d(a, b) is integrable.
(1) Since f(x) ≥ 0, by the continuity and integrability of f(x) we have d(a, b) ≥ 0. If d(a, b) = 0, then f(x) = 0. When f(x) = 0, we have [(a1 + a2)/2 + x(a2 − a1)] − [(b1 + b2)/2 + x(b2 − b1)] = 0, i.e., ((a1 + a2)/2 − (b1 + b2)/2) + x[(a2 − a1) − (b2 − b1)] = 0 for all x ∈ [−1/2, 1/2], which forces a1 = b1, a2 = b2, hence a = b. Conversely, when a = b, i.e., a1 = b1, a2 = b2, we have f(x) = 0, thus d(a, b) = ∫_{-1/2}^{1/2} f(x) dx = 0.
(2) d(a, b) = d(b, a) holds obviously.
(3) For any interval number c = [c1, c2], denote ax = (a1 + a2)/2 + x(a2 − a1), bx = (b1 + b2)/2 + x(b2 − b1), cx = (c1 + c2)/2 + x(c2 − c1). Then 0 ≤ |ax − bx| ≤ |ax − cx| + |cx − bx|, so

∫_{-1/2}^{1/2} |ax − bx| dx ≤ ∫_{-1/2}^{1/2} |ax − cx| dx + ∫_{-1/2}^{1/2} |cx − bx| dx.

It follows that d(a, b) ≤ d(a, c) + d(c, b).

In the distance formula, the integrand f(x) is the distance between corresponding points of the two intervals. At x = −1/2, f(−1/2) is the distance between the left endpoints of the two interval numbers; at x = 1/2, f(1/2) is the distance between the right endpoints.

B. Distance between Fuzzy Numbers

Definition 6.6.2 [TD02]. A fuzzy set Ã in the real number set is called an L-R fuzzy number if its membership function is

μÃ(x) = L((a2 − x)/(a2 − a1)) if a1 ≤ x ≤ a2; 1 if a2 ≤ x ≤ a3; R((x − a3)/(a4 − a3)) if a3 ≤ x ≤ a4; 0 if x < a1 or x > a4,    (6.6.2)

where L, R are strictly decreasing functions on [0, 1] satisfying L(x) = R(x) = 1 (x ≤ 0) and L(x) = R(x) = 0 (x ≥ 1). The fuzzy number is denoted by Ã = (a1, a2, a3, a4)LR.

In particular, when L(x) = R(x) = 1 − x, the fuzzy number defined in (6.6.2) is a trapezoidal fuzzy number, denoted by Ã = (a1, a2, a3, a4); when L(x) = R(x) = 1 − x and a2 = a3, it is a triangular fuzzy number, denoted by Ã = (a1, a2, a3).

For every α ∈ [0, 1], the α-level cut of the fuzzy number Ã is an interval number:

Ãα = [AL(α), AR(α)],

where AL(α) = a2 − (a2 − a1)L_A^{-1}(α) and AR(α) = a3 + (a4 − a3)R_A^{-1}(α). Using the distance between interval numbers, we define the distance between fuzzy numbers as follows.

Definition 6.6.3. Let Ã and B̃ be two fuzzy numbers, and

Ãα = [AL(α), AR(α)] = [a2 − (a2 − a1)L_A^{-1}(α), a3 + (a4 − a3)R_A^{-1}(α)],
B̃α = [BL(α), BR(α)] = [b2 − (b2 − b1)L_B^{-1}(α), b3 + (b4 − b3)R_B^{-1}(α)].

Then we define the distance between Ã and B̃ by

D(Ã, B̃) = ∫_0^1 d(Ãα, B̃α) dα,

where

d(Ãα, B̃α) = ∫_{-1/2}^{1/2} | [(AL(α) + AR(α))/2 + x(AR(α) − AL(α))] − [(BL(α) + BR(α))/2 + x(BR(α) − BL(α))] | dx.    (6.6.3)


In fact, let

f(x, α) = | [(AL(α) + AR(α))/2 + x(AR(α) − AL(α))] − [(BL(α) + BR(α))/2 + x(BR(α) − BL(α))] |.

Since f(x, α) is a simple function of x, f(x, α) is continuous; hence d(Ãα, B̃α) is also continuous, so D(Ã, B̃) is integrable.
(1) Since d(Ãα, B̃α) ≥ 0, by the continuity and integrability of d(Ãα, B̃α) we have D(Ã, B̃) ≥ 0. If D(Ã, B̃) = 0, then d(Ãα, B̃α) = 0. When d(Ãα, B̃α) = 0, by the distance definition between interval numbers we know AL(α) = BL(α), AR(α) = BR(α), hence Ã = B̃. Conversely, when Ã = B̃, i.e., AL(α) = BL(α), AR(α) = BR(α), we clearly have D(Ã, B̃) = ∫_0^1 d(Ãα, B̃α) dα = 0.
(2) D(Ã, B̃) = D(B̃, Ã) holds obviously.
(3) For any fuzzy number C̃, with C̃α = [CL(α), CR(α)] = [c2 − (c2 − c1)L_C^{-1}(α), c3 + (c4 − c3)R_C^{-1}(α)], by the distance definition between interval numbers,

0 ≤ d(Ãα, B̃α) ≤ d(Ãα, C̃α) + d(C̃α, B̃α),

so

∫_0^1 d(Ãα, B̃α) dα ≤ ∫_0^1 d(Ãα, C̃α) dα + ∫_0^1 d(C̃α, B̃α) dα

holds.

6.6.3 Ranking Fuzzy Numbers

Here we present a ranking idea for fuzzy numbers: before ranking, we fix a real number M as a reference object (M is the supremum of the union of the support sets of Ã and B̃). The nearer a fuzzy number is to M, the larger it is; that is, the smaller its distance to M, the larger the fuzzy number.

Definition 6.6.4. If M = sup(s(Ã) ∪ s(B̃)), we call M the supremum of Ã and B̃, where s(Ã) and s(B̃) are the support sets of Ã and B̃, respectively.

By Definition 6.6.3, we can obtain the distance from the fuzzy number Ã to M:

holds. 6.6.3 Ranking Fuzzy Numbers Here, we present a ranking idea about fuzzy numbers: before ranking fuzzy numbers, we ﬁx a real number M as refereing object (M is supremum about and support set of B). The nearer a fuzzy number to M , the support set of A larger it is; that is, the smaller the distance to M , the larger a fuzzy number is. Deﬁnition 6.6.4. If M = sup(s(A) ∪ s(B)), we call M the supremum of A and B, where s(A) and s(B) are the support sets of A and B, respectively. to M: By Deﬁnition 6.6.3, we can obtain the distance from fuzzy number A M) = D(A,

0

1

{

1 2

− 12

{M − [

aL (α) + aR (α) + x(aR (α) − aL (α))]}dx}dα, 2

∀α ∈ [0, 1]. Coordinate: M ) = M − a2 + a 3 + a 2 − a 1 D(A, 2 2

0

1

L−1 A (α)−

a4 − a3 2

0

1

−1 RA (α). (6.6.4)


Similarly,

D(B̃, M) = M − (b2 + b3)/2 + ((b2 − b1)/2) ∫_0^1 L_B^{-1}(α) dα − ((b4 − b3)/2) ∫_0^1 R_B^{-1}(α) dα.    (6.6.5)

Thus, in light of this, we obtain the following definition of ranking fuzzy numbers.

Definition 6.6.5. Let Ã and B̃ be two fuzzy numbers, and M be the supremum of Ã and B̃. Then
(1) when D(Ã, M) < D(B̃, M), we call Ã > B̃;
(2) when D(Ã, M) = D(B̃, M), we call Ã = B̃;
(3) when D(Ã, M) > D(B̃, M), we call Ã < B̃.

In particular, when Ã and B̃ are trapezoidal or triangular fuzzy numbers, we get concrete expressions:

1° When Ã and B̃ are trapezoidal fuzzy numbers Ã = (a1, a2, a3, a4), B̃ = (b1, b2, b3, b4), then

D(Ã, M) = M − (a1 + a2 + a3 + a4)/4;  D(B̃, M) = M − (b1 + b2 + b3 + b4)/4.

By Definition 6.6.5:
(1) a1 + a2 + a3 + a4 > b1 + b2 + b3 + b4 ⇔ Ã > B̃;
(2) a1 + a2 + a3 + a4 = b1 + b2 + b3 + b4 ⇔ Ã = B̃;
(3) a1 + a2 + a3 + a4 < b1 + b2 + b3 + b4 ⇔ Ã < B̃.

2° When Ã and B̃ are triangular fuzzy numbers Ã = (a1, a2, a3), B̃ = (b1, b2, b3), then

D(Ã, M) = M − (a1 + 2a2 + a3)/4;  D(B̃, M) = M − (b1 + 2b2 + b3)/4.

By Definition 6.6.5:
(1) a1 + 2a2 + a3 > b1 + 2b2 + b3 ⇔ Ã > B̃;
(2) a1 + 2a2 + a3 = b1 + 2b2 + b3 ⇔ Ã = B̃;
(3) a1 + 2a2 + a3 < b1 + 2b2 + b3 ⇔ Ã < B̃.

6.6.4 Linear Programming in Constraints with Fuzzy Coefficients

Assume that the linear programming with fuzzy coefficients in the constraints is defined as follows:

max z = cx
s.t. Ãx ≤ b̃,    (6.6.6)
x ≥ 0,


denoted as:

max z = c1x1 + c2x2 + · · · + cnxn
s.t. ãi1x1 + ãi2x2 + · · · + ãinxn ≤ b̃i, i = 1, 2, · · · , m,
x1, x2, · · · , xn ≥ 0,

(6.6.7)

where the fuzzy numbers are triangular, i.e., ãi1 = (ai11, ai12, ai13), ãi2 = (ai21, ai22, ai23), · · · , ãin = (ain1, ain2, ain3); b̃i = (bi1, bi2, bi3). By Zadeh's extension principle, the sum of triangular fuzzy numbers is still triangular. Formula (6.6.7) is therefore equivalent to

max z = c1x1 + c2x2 + · · · + cnxn
s.t. (ai11x1 + ai21x2 + · · · + ain1xn, ai12x1 + ai22x2 + · · · + ain2xn, ai13x1 + ai23x2 + · · · + ain3xn) ≤ (bi1, bi2, bi3),    (6.6.8)
x1, x2, · · · , xn ≥ 0, i = 1, 2, · · · , m.

By Definition 6.6.5 and the ranking method for triangular fuzzy numbers, we transform Formula (6.6.8) into the linear programming

max z = c1x1 + c2x2 + · · · + cnxn
s.t. (ai11x1 + ai21x2 + · · · + ain1xn) + 2(ai12x1 + ai22x2 + · · · + ain2xn) + (ai13x1 + ai23x2 + · · · + ain3xn) ≤ bi1 + 2bi2 + bi3,
x1, x2, · · · , xn ≥ 0, i = 1, 2, · · · , m.

(6.6.9)
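The passage to (6.6.9) only needs the ranking weight a1 + 2a2 + a3 of each triangular number, applied coefficient-wise and to the right-hand side. A minimal sketch (the function names are illustrative, and the sample data come from Example 6.6.1 below):

```python
def rank_key(t):
    """Ranking weight of a triangular fuzzy number t = (a1, a2, a3):
    by (6.6.4), D(t, M) = M - (a1 + 2*a2 + a3)/4, so a larger key means
    a larger fuzzy number.  The common factor 1/4 cannot change comparisons."""
    a1, a2, a3 = t
    return a1 + 2 * a2 + a3

def crisp_constraint(fuzzy_row, fuzzy_rhs):
    """Turn one fuzzy constraint of (6.6.7) into the crisp constraint of (6.6.9)."""
    return [rank_key(t) for t in fuzzy_row], rank_key(fuzzy_rhs)

# first constraint of Example 6.6.1
row, rhs = crisp_constraint([(3, 4, 4), (20, 20, 21)], (4500, 4600, 4800))
print(row, rhs)   # [15, 81] 18500
```

Because the weight is linear, applying it term by term to the triangular sums in (6.6.8) is the same as comparing the summed triangular numbers directly, which is what justifies the single inequality per constraint in (6.6.9).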

6.6.5 Numerical Example

Example 6.6.1: Solve the linear programming with fuzzy coefficients in the constraints:

max z = 3x1 + 4x2
s.t. ã11x1 + ã12x2 ≤ b̃1,
ã21x1 + ã22x2 ≤ b̃2,
x1, x2 ≥ 0,

where ã11 = (3, 4, 4), ã12 = (20, 20, 21), ã21 = (11, 12, 13), ã22 = (5.4, 6.4, 7.4), b̃1 = (4500, 4600, 4800), b̃2 = (4600, 4800, 5250).

Solution: By Formula (6.6.9), transform the fuzzy linear programming into the crisp linear programming

max z = 3x1 + 4x2
s.t. (3x1 + 20x2) + 2(4x1 + 20x2) + (4x1 + 21x2) ≤ 4500 + 2 × 4600 + 4800,
(11x1 + 5.4x2) + 2(12x1 + 6.4x2) + (13x1 + 7.4x2) ≤ 4600 + 2 × 4800 + 5250,
x1, x2 ≥ 0,

and we obtain x*1 = 315, x*2 = 170, z*1 = 1625.

(6.6.10)


By the ranking idea from Ref. [LiR02], we transform the fuzzy linear programming into a crisp linear programming as follows:

max z = 3x1 + 4x2
s.t. 3x1 + 20x2 ≤ 4500,
4x1 + 20x2 ≤ 4600,
4x1 + 21x2 ≤ 4800,
11x1 + 5.4x2 ≤ 4600,
12x1 + 6.4x2 ≤ 4800,
13x1 + 7.4x2 ≤ 5250,
x1, x2 ≥ 0,

(6.6.11)

and we obtain x*1 = 308, x*2 = 168, z*2 = 1598. Obviously z*1 > z*2; at the same time, (6.6.10) has four fewer constraints than (6.6.11), which indicates that the ranking rule of this section is superior to the ranking rule in Ref. [LiR02]; consequently we gain a better optimal value for the linear programming with fuzzy coefficients in the constraints.

6.6.6 Conclusion

We have proposed a new distance between fuzzy numbers based on the distance between interval numbers. From the ranking idea we derive a ranking rule for fuzzy numbers, and on this basis we obtain a new approach to linear programming with triangular fuzzy coefficients, exploiting the simplicity of triangular fuzzy numbers in solving the problem. Linear programming with general fuzzy coefficients remains to be researched.
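The crisp program (6.6.10) can be checked with a small vertex-enumeration solver, used here as an illustrative stand-in for an LP package and valid only for two variables. Note that the continuous optimum lies at about x = (314.5, 170.2) with z ≈ 1624, the vertex whose rounding gives the integer solution reported in the text.

```python
from itertools import combinations

def lp_max_2d(c, A, b):
    """Maximize c.x s.t. A x <= b, x >= 0 (two variables) by vertex enumeration."""
    rows = [list(r) for r in A] + [[-1.0, 0.0], [0.0, -1.0]]
    rhs = list(b) + [0.0, 0.0]
    best = None
    for i, j in combinations(range(len(rows)), 2):
        a11, a12 = rows[i]
        a21, a22 = rows[j]
        det = a11 * a22 - a12 * a21
        if abs(det) < 1e-12:
            continue  # parallel constraint lines
        x1 = (rhs[i] * a22 - a12 * rhs[j]) / det
        x2 = (a11 * rhs[j] - rhs[i] * a21) / det
        if all(r[0] * x1 + r[1] * x2 <= s + 1e-6 for r, s in zip(rows, rhs)):
            z = c[0] * x1 + c[1] * x2
            if best is None or z > best[0]:
                best = (z, x1, x2)
    return best

# (6.6.10): each coefficient is a1 + 2*a2 + a3 of the corresponding triangular number
A = [[3 + 8 + 4, 20 + 40 + 21],           # -> [15, 81]
     [11 + 24 + 13, 5.4 + 12.8 + 7.4]]    # -> [48, 25.6]
b = [4500 + 9200 + 4800, 4600 + 9600 + 5250]  # -> [18500, 19450]
z, x1, x2 = lp_max_2d((3, 4), A, b)
print(round(x1, 1), round(x2, 1), round(z, 1))   # about 314.5 170.2 1624.0
```

Both aggregated constraints are binding at this vertex, so the optimum is determined by the two ranking-weighted inequalities alone.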

6.7

Linear Programming with L-R Coeﬃcients

6.7.1 Introduction

Consider the linear programming

max z̃ = c̃x
s.t. Ãx ≤ b̃,

(6.7.1)

x ≥ 0, where c̃ = (c̃1, · · · , c̃n), b̃ = (b̃1, · · · , b̃m)T are L-R vectors, Ã = (ãij)m×n is an L-R matrix, c̃j = (cj, c̲j, c̄j)LR, b̃i = (bi, b̲i, b̄i)LR and ãij = (aij, a̲ij, āij)LR are L-R numbers, and x = (x1, x2, · · · , xn)T is an ordinary variable vector. Two kinds of situations are now discussed respectively.


6 Fuzzy Linear Programming

6.7.2 Linear Programming in Constraints with L-R Coefficients

Consider

max z = cx
s.t. Ãx ≤ b̃,     (6.7.2)
     x ≥ 0.

Because ãij, b̃i (1 ≤ i ≤ m, 1 ≤ j ≤ n) are all L-R numbers and xj ≥ 0,

Σ_{j=1}^n ãij xj = ( Σ_{j=1}^n aij xj , Σ_{j=1}^n a̲ij xj , Σ_{j=1}^n āij xj )LR

is still an L-R number; hence

Σ_{j=1}^n ãij xj ≤ b̃i  ⟺  Σ_{j=1}^n aij xj ≤ bi ,
                            Σ_{j=1}^n a̲ij xj ≥ b̲i ,     (6.7.3)
                            Σ_{j=1}^n āij xj ≤ b̄i .

Writing A = (aij), A̲ = (a̲ij), Ā = (āij), b = (b1, b2, ..., bm)ᵀ, b̲ = (b̲1, b̲2, ..., b̲m)ᵀ, b̄ = (b̄1, b̄2, ..., b̄m)ᵀ, (6.7.2) can be rewritten as the following ordinary linear programming with 3m linear inequality constraints:

max z = cx
s.t. Ax ≤ b,
     A̲x ≥ b̲,     (6.7.4)
     Āx ≤ b̄,
     x ≥ 0.

It is worth pointing out that turning (6.7.2) into (6.7.4) does not depend on the concrete choice of the reference functions L and R of the L-R numbers. We consider only a two-variable linear programming and illustrate a graphical method for it (a simplex method may also be used).

Example 6.7.1: A person on a business trip needs to take two kinds of goods. Each package of Goods A weighs "6 kg, possibly more" (denoted


as 6̃ = (6, 0, 1)LR) and is worth 20 dollars. Each package of Goods B weighs "2 kg or so" (denoted as 2̃ = (2, 1, 1)LR) and is worth 10 dollars. The person wishes to take "about 21 kg" at most (denoted as 2̃1 = (21, 1, 5)LR), hoping that the total value of the goods taken is the greatest.

Solution: Suppose he takes x1 packages of Goods A and x2 packages of Goods B. Then the problem involves finding a solution to a linear programming with fuzzy coefficients in the constraints:

max z = 20x1 + 10x2
s.t. 6̃x1 + 2̃x2 ≤ 2̃1,     (6.7.5)
     x1 ≥ 0, x2 ≥ 0.

It is equivalent to a solution of the ordinary linear programming

max z = 20x1 + 10x2
s.t. 6x1 + 2x2 ≤ 21,
     x2 ≥ 1,
     x1 + x2 ≤ 5,
     x1 ≥ 0, x2 ≥ 0.

Using the graphical method (see Figure 6.7.1), the optimal solution is x1* = 11/4, x2* = 9/4, and the optimal value is z* = 310/4 = 77.5.

[Figure 6.7.1. Graphical method for (6.7.5): the lines 6x1 + 2x2 = 21, x1 + x2 = 5 and 20x1 + 10x2 = 77.5 meet at the optimum (11/4, 9/4).]

If the goods may be split open, he takes 2 3/4 packages of Goods A and 2 1/4 packages of Goods B, with total worth 77.5 dollars. If the goods must be taken in whole packages, the restriction that x1, x2 be integers is added; that is, the problem is solved by an integer programming method. The result is that he takes 2 packages of Goods A and 3 of Goods B (or 3 of A and 1 of B), the total value amounting to 70 dollars.
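The crisp relaxation of Example 6.7.1 can also be solved with an LP solver instead of graphically. A minimal sketch (not from the book), reading the three crisp constraints as 6x1 + 2x2 ≤ 21, x2 ≥ 1, x1 + x2 ≤ 5, which is consistent with the reported optimum (11/4, 9/4):

```python
# Sketch only: the crisp counterpart of (6.7.5) via scipy.
from scipy.optimize import linprog

res = linprog(
    c=[-20, -10],            # maximise 20*x1 + 10*x2
    A_ub=[[6, 2],            # centre comparison: 6x1 + 2x2 <= 21
          [0, -1],           # left spreads, negated: x2 >= 1
          [1, 1]],           # right spreads: x1 + x2 <= 5
    b_ub=[21, -1, 5],
    bounds=[(0, None), (0, None)],
)
print(res.x, -res.fun)       # x1* = 2.75, x2* = 2.25, z* = 77.5
```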


6.7.3 Linear Programming in Objective with L-R Coefficients

Consider the problem

max̃ z̃ = c̃x
s.t. Ax ≤ b,     (6.7.6)
     x ≥ 0.

Because

c̃ = (c̃1, ..., c̃n), c̃j = (cj, c̲j, c̄j)LR,
z̃ = (z, z̲, z̄)LR = ( Σ_{j=1}^n cj xj , Σ_{j=1}^n c̲j xj , Σ_{j=1}^n c̄j xj )LR

are all L-R numbers, according to the approximation formula of max̃, (6.7.6) is approximately equivalent to a linear programming with 3 objectives:

max z = Σ_{j=1}^n cj xj = cx,
min z̲ = Σ_{j=1}^n c̲j xj = c̲x,
max z̄ = Σ_{j=1}^n c̄j xj = c̄x,
s.t. Ax ≤ b, x ≥ 0.

Example 6.7.2: Solve the fuzzy linear programming

max̃ z̃ = 2̃0 x1 + 1̃0 x2
s.t. 6x1 + 2x2 ≤ 21,
     x1 ≥ 0, x2 ≥ 0,

where 2̃0 = (20, 3, 4)LR, 1̃0 = (10, 2, 1)LR. This problem is approximately equivalent to

max z = 20x1 + 10x2,
min z̲ = 3x1 + 2x2,
max z̄ = 4x1 + x2,
s.t. 6x1 + 2x2 ≤ 21,
     x1 ≥ 0, x2 ≥ 0.


Find an optimal solution for each objective separately:

(1) x1⁽¹⁾ = 0, x2⁽¹⁾ = 10.5, Z⁽¹⁾ = 105; at this point Z̲⁽¹⁾ = 21, Z̄⁽¹⁾ = 10.5.
(2) x1⁽²⁾ = 0, x2⁽²⁾ = 0, Z̲⁽²⁾ = 0; at this point Z⁽²⁾ = Z̄⁽²⁾ = 0.
(3) x1⁽³⁾ = 3.5, x2⁽³⁾ = 0, Z̄⁽³⁾ = 14; at this point Z⁽³⁾ = 70, Z̲⁽³⁾ = 10.5.

Subjectively give flexible indices d1 = 5, d2 = 20, d3 = 4, and construct three fuzzy objective sets M̃1, M̃2, M̃3:

μ_M̃1(x) = f1(20x1 + 10x2) =
  0,                               if 20x1 + 10x2 < 100,
  1 − (1/5)(105 − 20x1 − 10x2),    if 100 ≤ 20x1 + 10x2 < 105,
  1,                               if 20x1 + 10x2 ≥ 105;

μ_M̃2(x) = f2(3x1 + 2x2) =
  0,                               if 3x1 + 2x2 ≥ 20,
  1 − (1/20)(3x1 + 2x2),           if 0 ≤ 3x1 + 2x2 < 20;

μ_M̃3(x) = f3(4x1 + x2) =
  0,                               if 4x1 + x2 < 10,
  1 − (1/4)(14 − 4x1 − x2),        if 10 ≤ 4x1 + x2 < 14,
  1,                               if 4x1 + x2 ≥ 14.

Let M̃ = M̃1 ∩ M̃2 ∩ M̃3. Then the problem is changed into an ordinary linear programming

max α
s.t. 1 − (1/5)(105 − 20x1 − 10x2) ≥ α,
     1 − (1/20)(3x1 + 2x2) ≥ α,
     1 − (1/4)(14 − 4x1 − x2) ≥ α,
     6x1 + 2x2 ≤ 21,
     0 ≤ α ≤ 1, x1 ≥ 0, x2 ≥ 0,


i.e.,

max α
s.t. 20x1 + 10x2 − 5α ≥ 100,
     3x1 + 2x2 + 20α ≤ 20,
     4x1 + x2 − 4α ≥ 10,
     6x1 + 2x2 ≤ 21,
     0 ≤ α ≤ 1, x1, x2 ≥ 0.

The optimal solution x1* = 0.488, x2* = 9.035, α* = 0.022 is obtained; correspondingly, z* = 100.11, z̲* = 19.534, z̄* = 10.987, and the approximate fuzzy optimal value is z̃* = (100.11, 19.534, 10.987)LR.
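The max-α program can be handed to an LP solver directly. A minimal sketch (not from the book) with variables ordered (x1, x2, α) and the "≥" rows negated into "≤" form; the solver returns α* = 1/43 ≈ 0.023 with x1* ≈ 0.488, x2* ≈ 9.035, agreeing with the reported solution up to rounding:

```python
# Sketch only: the max-alpha LP via scipy.
from scipy.optimize import linprog

res = linprog(
    c=[0, 0, -1],                 # maximise alpha
    A_ub=[[-20, -10, 5],          # 20x1 + 10x2 - 5a >= 100
          [3, 2, 20],             # 3x1 + 2x2 + 20a <= 20
          [-4, -1, 4],            # 4x1 + x2 - 4a >= 10
          [6, 2, 0]],             # 6x1 + 2x2 <= 21
    b_ub=[-100, 20, -10, 21],
    bounds=[(0, None), (0, None), (0, 1)],
)
x1, x2, alpha = res.x
print(round(x1, 3), round(x2, 3), round(alpha, 3))
```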

6.7.4 Conclusion

When both the objective and the constraints of a linear programming have L-R coefficients, the two methods above can be combined, turning the problem into finding a fuzzy optimal solution to a multi-objective linear programming. Meanwhile, once the flexible indexes d1, d2, d3 are given subjectively, the constraint field of the linear programming may become empty; in that case the problem has no optimal solution, and the flexible indexes need to be adjusted appropriately in order to guarantee the existence of an optimal solution.

6.8 Linear Programming Model with T-Fuzzy Variables

6.8.1 Introduction

Theoretically, we build a new linear programming model on the basis of T fuzzy numbers, study its dual form, nonfuzzify it under a cone index J , and turn a linear programming with T -fuzzy variables into a linear programming depending on a cone index J . In such a theoretical framework, we can transplant many results of the linear programming into a linear programming with T -fuzzy variables [Cao96a].


6.8.2 Linear Programming with T-Fuzzy Variables

Definition 6.8.1. Let the fuzzy linear programming be

(L̃P)  miñ c x̃
       s.t. A x̃ ≥ b̃,     (6.8.1)
            x̃ ≥ 0,

where c is a real 1×n matrix, A a real m×n matrix, x̃ a real n-dimensional T-fuzzy variable vector, and b̃ = (b̃1, b̃2, ..., b̃m)ᵀ a real m-dimensional T-fuzzy vector. If x̃ and b̃ are T-fuzzy data defined as in Refs. [Cao89b,c], [Dia87] and [DPr80], i.e., x̃ = (x̃1, x̃2, ..., x̃n)ᵀ, where x̃l = (xl, ξ̲l, ξ̄l) (1 ≤ l ≤ n) and 1̃ = (1, 1, 1), then (6.8.1) is called a linear programming with T-fuzzy variables. We call

(LP(J))  min Σ_{l=1}^n cl Ul
         s.t. Σ_{l=1}^n ail Ul ≥ bi(J) (1 ≤ i ≤ m),
              U ≥ 0

a linear programming depending on a cone index J, where U = (U1, U2, ..., Un)ᵀ is an n-dimensional vector, Ul = Σ_{i=1}^{3M} Uil / 3M, and bi(J) is a number depending on the cone index J.

Theorem 6.8.1. Let the linear programming (L̃P) be given by T-fuzzy variables. Then (L̃P) is equivalent to (LP(J)) for a given cone index J, and (LP(J)) has an optimal solution depending on the cone index J, equivalent to (L̃P) having a T-fuzzy optimal one.

Proof: Let {x̃il} be a column of T-fuzzy variables satisfying (L̃P), where x̃il = (xl, ξ̲il, ξ̄il) (1 ≤ i ≤ m; 1 ≤ l ≤ n). We classify the vectors of the column by subscripts, and may let l = 1, ..., N correspond to the smaller fluctuating variables and the other variables correspond to l = N+1, ..., 3N. Then

for i = 1, ..., M and each l:      Uil = xl + (ξ̲il + ξ̄il)/2;
for i = M+1, ..., 2M and each l:   Uil = xl − ξ̲il if jl = 0, and Uil = xl + ξ̄il if jl = 1;
for i = 2M+1, ..., 3M and each l:  Uil = xl + ξ̄il if jl = 0, and Uil = xl − ξ̲il if jl = 1.

So, under a given cone index J, (L̃P) is changed into (LP(J)).


From the equivalence of (L̃P) and (LP(J)), we know that (LP(J)) has an optimal solution depending on the cone index J, which is equivalent to (L̃P) having an optimal T-fuzzy solution. Therefore, the theorem holds.

Theorem 6.8.1 shows us that (L̃P) can be turned into an ordinary parametric linear programming (LP(J)) depending on a cone index J; many methods exist for (LP(J)), and an optimal solution to it can be found in any literature on linear programming.

6.8.3 Dual Problem

For the linear programming with T-fuzzy variables, there always exists a dual linear programming with T-fuzzy parameters corresponding to it.

Let Ul = xl + Σ_{i=1}^{3M} ξil / 3M. Then

(LP(J)) ⇔  min Σ_{l=1}^n cl ( xl + Σ_{i=1}^{3M} ξil / 3M )
           s.t. Σ_{l=1}^n ail ( xl + Σ_{i=1}^{3M} ξil / 3M ) ≥ bi(J),     (6.8.2)
                xl ≥ 0 (1 ≤ i ≤ m; 1 ≤ l ≤ n),

where ξil is ξ̲il (resp. −ξ̲il) or ξ̄il (resp. −ξ̄il). Substitute x′l = xl + Σ_{i=1}^{3M} ξil / 3M, still writing it as xl, and turn (6.8.2) into

min Σ_{l=1}^n cl xl
s.t. Σ_{l=1}^n ail xl ≥ bi(J),     (6.8.3)
     xl ≥ 0 (1 ≤ i ≤ m; 1 ≤ l ≤ n),

i.e.,

min cx
s.t. Ax ≥ b(J),
     x ≥ 0,

while the dual form of (6.8.3) is

max y b(J)
s.t. Aᵀy ≤ c,     (6.8.4)
     y ≥ 0.
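The primal-dual pair (6.8.3)/(6.8.4) obeys ordinary LP strong duality for any fixed cone index. A small numerical check (the data A, b, c below are made up for illustration, not from the book):

```python
# Sketch only: primal (6.8.3) and dual (6.8.4) attain the same optimal value.
from scipy.optimize import linprog

A = [[2.0, 1.0],
     [1.0, 3.0]]
b = [4.0, 6.0]        # plays the role of b(J) for one fixed cone index
c = [3.0, 5.0]

# Primal: min c x  s.t.  A x >= b, x >= 0  (">=" rows negated for linprog).
primal = linprog(c=c,
                 A_ub=[[-v for v in row] for row in A],
                 b_ub=[-v for v in b],
                 bounds=[(0, None)] * 2)

# Dual: max y b  s.t.  A^T y <= c, y >= 0.
AT = [list(col) for col in zip(*A)]
dual = linprog(c=[-v for v in b], A_ub=AT, b_ub=c, bounds=[(0, None)] * 2)

print(primal.fun, -dual.fun)   # equal optimal values (strong duality)
```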

Theorem 6.8.2. Suppose the linear programming (L̃P) is deduced from T-fuzzy variables. Its dual form is

max̃ y b̃
s.t. Aᵀy ≤ c,     (6.8.5)
     y ≥ 0,

and (L̃P) has an optimal T-fuzzy solution equivalent to (6.8.5) having an optimal solution, and (L̃P) has the same optimal T-fuzzy value as (6.8.5).

Proof: As (L̃P) can be changed into (LP(J)) under the above cone index J, and the dual form of (LP(J)) is equivalent to (6.8.4), (6.8.5) can be changed into (6.8.4) under the same cone index J. Again, (L̃P) is known to be mutually dual with (6.8.5), due to the equivalence of (L̃P) with (LP(J)) and of (6.8.5) with (6.8.4), and the mutual duality of (LP(J)) and (6.8.4). Moreover, (LP(J)) and (6.8.4) are, respectively, an ordinary primal linear programming and its dual depending on the same cone index J. Applying Theorem 2 in Section 4.2 of Ref. [GZ83] to (LP(J)) and (6.8.4), we know that if one of them has an optimal solution, so has the other, and they have the same optimal value. Therefore the theorem holds, from the arbitrariness of the cone index J.

Theorem 6.8.3. Suppose that (L̃P) is deduced from T-fuzzy variables. Then the dual pair (L̃P) and (6.8.5) have an optimal T-fuzzy solution and an optimal solution, respectively, if and only if they have a T-fuzzy feasible solution and a feasible solution, respectively, at the same time.

Proof: Necessity is apparent, and sufficiency is proved as follows. (L̃P) can be changed into (LP(J)) and (6.8.5) into (6.8.4) under the given cone index J; meanwhile (LP(J)) and (6.8.4) are mutually dual under the same cone index J. In a similar way to the proof of Theorem 1 in Section 4.2 of Ref. [GZ83], we can prove that (LP(J)) and (6.8.4) have feasible solutions depending on the cone index J if and only if they have optimal solutions depending on the cone index J. The theorem then holds because of the equivalence of (L̃P) and (LP(J)), of (6.8.5) and (6.8.4), and the duality of (L̃P) and (6.8.5).

Corollary 6.8.1. If x̃⁰ is a feasible T-fuzzy solution to (L̃P) and y⁰ is a feasible solution to (6.8.5), with c x̃⁰ = y⁰ b̃, then x̃⁰ is an optimal T-fuzzy solution to (L̃P) and y⁰ is an optimal solution to (6.8.5).

Proof: Straightforward.

6.8.4 Numerical Example

Example 6.8.1: Find

max̃ (3x̃1 − x̃2)
s.t. 2x̃1 − x̃2 ≤ 2̃,
     x̃1 ≤ 4̃,
     x̃1, x̃2 ≥ 0̃,

where 2̃ = (2, 0, 0), 4̃ = (4, 0, 0), 0̃ = (0, 0, 0),


and give a column of T-fuzzy data:

x̃1:  1. (x1, 0.5, 1.2),  2. (x1, 0.8, 1),  3. (x1, 1, 1.4);
x̃2:  4. (x2, 0, 0.4),  5. (x2, 0.6, 1),  6. (x2, 1.5, 0.9).

Solution:
(i) Number the data 1-6 and group them into three parts by Definition 3.1.4: I, No. 1, 4; II, No. 2, 5, with j2 = 0, j5 = 1; and III, No. 3, 6, with j3 = 1, j6 = 0; here jl = 1 for odd numbers and jl = 0 for even numbers.
(ii) Nonfuzzification. Let x1, x2 be replaced by

[(x1 + 0.85) + (x1 − 0.8) + (x1 + 1.4)] / 3 = x1 + 0.483,
[(x2 + 0.2) + (x2 + 1) + (x2 − 1.5)] / 3 = x2 − 0.1.

(iii) Obtain the linear programming corresponding to (6.8.2):

max (3x1 − x2 + 1.55)
s.t. 2x1 − x2 + 1.07 ≤ 2,
     x1 + 0.483 ≤ 4,
     x1, x2 ≥ 0
⇒
max (3x1 − x2 + 1.55)
s.t. 2x1 − x2 ≤ 0.93,
     x1 ≤ 3.52,
     x1, x2 ≥ 0.

The optimal solution depending on the cone index J is x1 = 3.52, x2 = 6.11, and the optimal value is 6.00. If x1 stands for an expensive resource, then x2 stands for a cheap one: decreasing x1 and increasing x2 properly, we obtain the same optimal value as in the crisp case, obviously at a lower cost.
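Step (iii) can be verified numerically. A minimal sketch (not from the book) of the nonfuzzified program max 3x1 − x2 + 1.55 subject to 2x1 − x2 ≤ 0.93, x1 ≤ 3.52:

```python
# Sketch only: the nonfuzzified LP of Example 6.8.1 via scipy.
from scipy.optimize import linprog

res = linprog(
    c=[-3, 1],                   # maximise 3x1 - x2 (constant 1.55 added below)
    A_ub=[[2, -1], [1, 0]],
    b_ub=[0.93, 3.52],
    bounds=[(0, None), (0, None)],
)
x1, x2 = res.x
z = 3 * x1 - x2 + 1.55
print(round(x1, 2), round(x2, 2), round(z, 2))   # x1 = 3.52, x2 = 6.11, z = 6.00
```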

6.9 Multi-Objective Linear Programming with T-Fuzzy Variables

6.9.1 Introduction

There are a lot of fuzzy and undetermined phenomena in the real world. If we describe such phenomena with T-fuzzy numbers [Cao90][Dia87], we can get more information. Here we extend the model in [Cao96a] to a multi-objective linear programming with T-fuzzy variables and discuss its algorithm, testing the effectiveness of the model and method by a numerical example.

6.9.2 Building of Model

Consider an ordinary multi-objective linear programming:

V-max c⁽ʲ⁾x (1 ≤ j ≤ r)
s.t. Ax ≤ b,     (6.9.1)
     x > 0,

where x = (x1, x2, ..., xn)ᵀ is an n-dimensional vector, b = (b1, b2, ..., bm)ᵀ an m-dimensional constant vector, and c⁽ʲ⁾ and A denote r×n and m×n matrices, respectively. Motivated by practical problems, we extend (6.9.1) to a linear programming with T-fuzzy variables. Introducing T-fuzzy data into (6.9.1) gives

V-max c⁽ʲ⁾x̃ (1 ≤ j ≤ r)
s.t. A x̃ ≤ b̃,     (6.9.2)
     x̃ ≥ 0.

We call (6.9.2) a multi-objective linear programming model with T-fuzzy variables, where x̃ = (x̃1, x̃2, ..., x̃n)ᵀ is an n-dimensional T-fuzzy vector, b̃ = (b̃1, b̃2, ..., b̃m)ᵀ an m-dimensional T-fuzzy constant vector, x̃l = (xl, ξ̲l, ξ̄l) a T-fuzzy variable, and b̃i = (bi, b̲i, b̄i) a T-fuzzy number.

6.9.3 Nonfuzzification of Model

Theorem 6.9.1. If (6.9.2) is given by T-fuzzy variables, then, for the given cone index J, (6.9.2) can be turned into

V-max c⁽ʲ⁾U(J) (1 ≤ j ≤ r)
s.t. A U(J) ≤ b(J),     (6.9.3)
     U(J) > 0,

188

6 Fuzzy Linear Programming

where

c⁽ʲ⁾U(J) = Σ_{l=1}^n c_l⁽ʲ⁾ Ul (1 ≤ j ≤ r);
A U(J) = Σ_{l=1}^n ail Ul (1 ≤ i ≤ m);
U(J) = (U1(J), U2(J), ..., Un(J))ᵀ and Ul(J) = Σ_{i=1}^{3M} Uil(J) / 3M

are a vector and a variable with cone index J, respectively, and b(J) = (b1(J), b2(J), ..., bm(J))ᵀ and bi(J) are a constant vector and a constant with cone index J. And (6.9.3) has a satisfactory solution depending on the cone index J, which is equivalent to (6.9.2) having a fuzzy satisfactory one.

Proof: Let {x̃il} be a column of T-fuzzy variables tallying with (6.9.2), where x̃il = (xil, ξ̲il, ξ̄il) (1 ≤ i ≤ m; 1 ≤ l ≤ n). We classify the vectors of the column by subscripts, and may let l = 1, 2, ..., N correspond to the smaller fluctuating variables, while the other variables correspond to l = N+1, ..., 3N. Then

for i = 1, 2, ..., M and each l:    Uil = xil + (ξ̲il + ξ̄il)/2;
for i = M+1, ..., 2M and each l:    Uil = xil + ξ̄il if jl = 0, and Uil = xil − ξ̲il if jl = 1;
for i = 2M+1, ..., 3M and each l:   Uil = xil − ξ̲il if jl = 0, and Uil = xil + ξ̄il if jl = 1.

Then, under the given cone index J, (6.9.2) is turned into (6.9.3), such that (6.9.3) can be found out. Since (6.9.2) is equivalent to (6.9.3), a parametric optimal solution of (6.9.3) depending on the cone index J is equivalent to an optimal T-fuzzy one of (6.9.2).

We summarize the solution of Model (6.9.2) as follows.

1° For the given T-fuzzy variables x̃l, partition the natural number set {1, 2, ..., n} into three parts by subscript:
I:   Uil = xil + (ξ̲il + ξ̄il)/2,  i = 1, 2, ..., M and each l;
II:  Uil = xil − ξ̲il if jl = 0, Uil = xil + ξ̄il if jl = 1,  i = M+1, ..., 2M and each l;
III: Uil = xil + ξ̄il if jl = 0, Uil = xil − ξ̲il if jl = 1,  i = 2M+1, ..., 3M and each l.

2° Nonfuzzify x̃l. We take Ul = xil + Σ_{i=1}^{3N} ξil* / 3N, where ξil* is (ξ̲il + ξ̄il)/2, or ±ξ̲il, or ±ξ̄il.

3° Substitute Uil for x̃l in (6.9.2), and we get (6.9.3).

4° Determine a satisfactory (or efficient) solution to problem (6.9.3) with the aid of the solution of an ordinary multi-objective linear programming, and we get a fuzzy satisfactory solution to (6.9.2).

There are many methods for finding satisfactory (efficient) solutions to programming (6.9.3). Here we advance two ways of nonfuzzifying (6.9.2):

1) Nonfuzzification before the weighted method. Turn (6.9.2) into a linear programming (6.9.3) with cone index J. Give weights to the r objective functions

fj(U) = Σ_{l=1}^n c_l⁽ʲ⁾ ( Σ_{i=1}^{3M} Uil(J) / 3M ),

so that f(U) = γ1 f1(U) + γ2 f2(U) + ... + γr fr(U), where the γj (j = 1, ..., r) are weight factors satisfying 0 ≤ γj ≤ 1 and γ1 + γ2 + ... + γr = 1. Turn (6.9.3) into the single-objective linear programming

max f(U(J))
s.t. A U(J) ≤ b(J),     (6.9.4)
     U(J) ≥ 0.

2) The weighted method before nonfuzzification. Consider (6.9.2), and weight its r fuzzy objective functions:

f(x̃) = γ1 f1(x̃) + γ2 f2(x̃) + ... + γr fr(x̃).

Programming (6.9.2) is changed into

max f(x̃)
s.t. A x̃ ≤ b̃,     (6.9.5)
     x̃ ≥ 0.

Now nonfuzzify (6.9.5) by the method mentioned, and we obtain (6.9.4).


6.9.4 Finding a Solution

Many algorithms are available, such as genetic and simulated annealing algorithms (the algorithmic details are omitted), by which we can finally get a satisfactory solution of practical value to the single-objective linear programmings (6.9.4) and (6.9.5). We are still searching for a better algorithm, since a single algorithm code cannot by itself exhibit the overall optimum or ensure convergence to the optimal solution. Assuming that computer programs exist for solving (6.9.2) or (6.9.3), so that the search can be discussed theoretically (omitted here), we consider the following example.

Example 6.9.1: Find

max (z̃1, z̃2),
z̃1 = 5x̃1 + x̃2,
z̃2 = x̃1 + 4x̃2,
s.t. x̃1 + x̃2 ≤ 6̃,
     0̃ ≤ x̃1 ≤ 5̃,
     x̃2 ≥ 0,

where 5̃ = (5, 0, 0), 6̃ = (6, 0, 0). We take T-fuzzy variables as follows:

x̃1:  1. (x1, 1, 0),  2. (x1, 0, 1),  3. (x1, 2, 1).
x̃2:  4. (x2, 0, 1),  5. (x2, 1, 0),  6. (x2, 2, 2).

Now divide the data into three groups: No. 1, 4; No. 2, 5; and No. 3, 6. For data No. 1, 4, we get a value by Formula I. For the rest, we use the formulas corresponding to jl = 1 and jl = 0 in Formulas II and III when odd and even numbers appear, respectively. So we can nonfuzzify x̃1, x̃2:

x̃1: [(x1 + 0.5) + (x1 − 0) + (x1 + 1)]/3 = x1 + 0.5,
x̃2: [(x2 + 0.5) + (x2 − 0) + (x2 − 2)]/3 = x2 − 0.5,

and f(x̃) = γ1 z̃1 + γ2 z̃2 gives f(U(J)) = 6x1 + 5x2 when γ1 = γ2 = 1. Thus a linear programming corresponding to (6.9.5) appears as follows:

max z = 6x1 + 5x2
s.t. x1 + x2 ≤ 6,
     0 ≤ x1 ≤ 5,
     x2 ≥ 0.

Its corresponding superior solution is x1 = 5, x2 = 1, with z1 = 26, z2 = 9.
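The final crisp weighted problem is easily checked with a solver. A minimal sketch (not from the book); the objective forms z1 = 5x1 + x2 and z2 = x1 + 4x2 used below are an assumption chosen because they make the reported values z1 = 26, z2 = 9 and the weighted objective 6x1 + 5x2 mutually consistent:

```python
# Sketch only: max 6x1 + 5x2  s.t.  x1 + x2 <= 6, 0 <= x1 <= 5, x2 >= 0.
from scipy.optimize import linprog

res = linprog(c=[-6, -5], A_ub=[[1, 1]], b_ub=[6],
              bounds=[(0, 5), (0, None)])
x1, x2 = res.x
print(x1, x2)            # superior solution x1 = 5, x2 = 1

z1 = 5 * x1 + x2         # first objective at the superior solution
z2 = x1 + 4 * x2         # second objective (assumed form, see lead-in)
print(z1, z2)            # z1 = 26, z2 = 9
```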


6.9.5 Conclusion

We have shown that (6.9.2) can be turned into an ordinary multi-objective parametric linear programming (6.9.3) depending on the cone index J. For (6.9.3) we adopt the standard methods of multi-objective programming, such as those that change a multi-objective optimization problem into a single-objective one or a series of single-objective ones.

7 Fuzzy Geometric Programming

We often meet the following problem in economic management. Suppose we manufacture a case for transporting cotton. The case is V m³ in volume, with a bottom but without a cover; its bottom and two sides are made from C m² of a special flexible material with negligible cost, the material for the other two sides costs more than A yuan/m² (yuan means RMB), and transportation of the case costs about k yuan. What is the least cost to ship one case of cotton? Such a problem can be posed as a geometric programming. Since classical geometric programming cannot account well for the problem, nor obtain a practical solution to it, the author initially proposed a fuzzy geometric programming theory for the problem at IFSA (1987) [Cao87a]. This chapter first introduces progress in fuzzy geometric programming and puts forward the Lagrange and antinomy problems in it. Besides, it studies geometric programming with fuzzy coefficients and fuzzy variables. Finally, it discusses its expansion.

7.1 Introduction of Fuzzy Geometric Programming

7.1.1 Fuzzy Posynomial Geometric Programming

Definition 7.1.1. Call

(P̃)  miñ g0(x)
      s.t. gi(x) ≲ 1 (1 ≤ i ≤ p),     (7.1.1)
           x > 0

the fuzzy posynomial geometric programming, where x = (x1, x2, ..., xm)ᵀ is an m-dimensional variable vector, 'T' represents the transpose symbol, and all

gi(x) = Σ_{k=1}^{Ji} vik(x) = Σ_{k=1}^{Ji} cik Π_{l=1}^m xl^γikl (0 ≤ i ≤ p)

are fuzzy posynomials of x, miñ g0(x) ⟵ g0(x) ≲ z0, cik ≥ 0 a constant, and γikl an arbitrary real number.

B.-Y. Cao: Optimal Models & Meth. with Fuzzy Quantities, STUDFUZZ 248, pp. 193–253.
© Springer-Verlag Berlin Heidelberg 2010    springerlink.com


that is, the objective function g0(x) might have to be written as a minimizing goal in order to consider z0 as an upper bound; z0 is an expectation value of the objective function g0(x), "≲" denotes the fuzzified version of "≤" with the linguistic interpretation "essentially smaller than or equal to", and di ≥ 0 denotes a flexible index of gi(x) (0 ≤ i ≤ p). The membership functions of the fuzzy objective g0(x) and the fuzzy constraints gi(x) are (1.5.6) and (1.5.7), respectively. (7.1.1) can be changed into

g0(x) ≲ z0,
gi(x) ≲ 1 (1 ≤ i ≤ p),     (7.1.2)
x > 0.

Especially, we have

miñ g0(x)
s.t. gi(x) = 1 (1 ≤ i ≤ p),     (7.1.3)
     x > 0,

and

miñ g0(x)
s.t. gi(x) ≲ 1 (1 ≤ i ≤ p),     (7.1.4)
     x > 0,

as well as

miñ g0(x)
s.t. gi(x) = 1 + ti (1 ≤ i ≤ p),     (7.1.5)
     x > 0.

In fact, we change the equations in (7.1.5) into inequalities [Wei87] by setting ti = −di log α (1 ≤ i ≤ p); because α ∈ [0, 1] and di > 0, ti ≥ 0 is obvious. Then gi(x) = 1 + ti can be changed into gi(x) ≤ 1 − di log α, which is exactly the fuzzy constraint gi(x) ≲ 1 after converting (1.5.6), (1.5.7) into a crisp expression. Therefore, (7.1.3) can be changed into (7.1.1), that is, (7.1.2).

Definition 7.1.2. Let Ã0 be a fuzzy set defined on X ⊂ Rᵐ and B̃0 a fuzzy-valued set of g0(x). If μ_Ã0(x) = B̃0(g0(x)), then g0(x) is a fuzzy objective function with respect to Ã0.

Definition 7.1.3. Let F̃i (1 ≤ i ≤ p) be fuzzy sets defined on X ⊂ Rᵐ and B̃i fuzzy-valued sets of gi*(x), where gi*(x) = gi(x) − 1. If μ_F̃i(x) = B̃i(gi*(x)), then the gi*(x) are fuzzy constraint functions with respect to F̃i.

Definition 7.1.4. Let F̃ be a fuzzy set defined on X ⊂ Rᵐ and gi*(x) fuzzy constraint functions with respect to F̃i (1 ≤ i ≤ p). If

μ_F̃(x) = min_{1≤i≤p} μ_F̃i(x),

then F̃ is a fuzzy feasible solution set with respect to F̃i.


Definition 7.1.5. Let H̃ be a fuzzy set defined on X ⊂ Rᵐ and F̃ a fuzzy feasible solution set with respect to F̃i. If there exists a fuzzy optimal point set Ã0* of g0(x) such that

H̃(x) = μ_Ã0*(x) ∧ μ_F̃(x) = min{ μ_Ã0*(x), min_{1≤i≤p} μ_F̃i(x) }
      = min{ B̃0(g0(x)), min_{1≤i≤p} B̃i(gi*(x)) },     (7.1.6)

then max_{x>0} H̃(x) is said to be a fuzzy posynomial geometric programming with respect to H̃ of g0(x).

Definition 7.1.6. If there is a point x* such that the fuzzy posynomial geometric programming satisfies H̃(x*) = max_{x>0} H̃(x), then x* is said to be an optimal solution with value H̃(x*), and the fuzzy set H̃ satisfying (7.1.6) is a fuzzy decision for (7.1.2).

Theorem 7.1.1. The maximization of H̃(x) is equivalent to the programming

max α
s.t. g0(x) ≤ z0 − d0 log α,
     gi(x) ≤ 1 − di log α (1 ≤ i ≤ p),     (7.1.7)
     α ∈ [0, 1], x > 0,

where di > 0 (0 ≤ i ≤ p) denote constants.

Proof: It is known by Definition 7.1.6 that x* satisfying (7.1.6) is called an optimal solution to (7.1.2). Again, x* bears a similar level for constraint and optimization. In particular, x* is a solution to the fuzzy posynomial geometric programming (7.1.1) at H̃(x*) = 1. Hence, when g0(x) = z0 − t0 and gi(x) = 1 + ti, there exists

H̃(x) = μ_Ã0*(x) ∧ min_{1≤i≤p} μ_F̃i(x);

by Formulas (1.5.6) and (1.5.7),

H̃(x) = e^{−( Σ_{k=1}^{J0} c0k Π_{l=1}^m xl^γ0kl − z0 )/d0} ∧ min_{1≤i≤p} e^{−( Σ_{k=1}^{Ji} cik Π_{l=1}^m xl^γikl − 1 )/di}.

Given α = H̃(x), for ∀α ∈ [0, 1], H̃(x) ≥ α is equivalent to

e^{−( Σ_{k=1}^{J0} c0k Π_{l=1}^m xl^γ0kl − z0 )/d0} ≥ α,   e^{−( Σ_{k=1}^{Ji} cik Π_{l=1}^m xl^γikl − 1 )/di} ≥ α (1 ≤ i ≤ p),

i.e.,

g0(x) = Σ_{k=1}^{J0} c0k Π_{l=1}^m xl^γ0kl ≤ z0 − d0 log α,
gi(x) = Σ_{k=1}^{Ji} cik Π_{l=1}^m xl^γikl ≤ 1 − di log α (1 ≤ i ≤ p).


Therefore, the maximization of H̃(x) is equivalent to (7.1.7) for arbitrary α ∈ [0, 1], and the theorem holds.

From the above, we know that di > 0 (0 ≤ i ≤ p) are admissible violations of the constraints, chosen by decision makers according to actual circumstances. When z0 = z0⁽¹⁾ − z0⁽²⁾, the values of z0⁽¹⁾ and z0⁽²⁾ are initially determined. Therefore, we consider two crisp posynomial geometric programmings

min g0(x)
s.t. gi(x) ≤ 1 (1 ≤ i ≤ p),     (7.1.8)
     x > 0,

and

min g0(x)
s.t. gi(x) ≤ 1 − di log α (1 ≤ i ≤ p),     (7.1.9)
     α ∈ [0, 1], x > 0,

to which a solution is given, respectively; the optimal values z0⁽¹⁾ and z0⁽²⁾ of (7.1.8) and (7.1.9) are what is obtained. From here it is known that solving the fuzzy posynomial geometric programming (7.1.1) involves solving the three crisp posynomial geometric programmings (7.1.8), (7.1.9) and (7.1.7), respectively. Equivalent forms of the fuzzy posynomial geometric programming are considered below; its properties are first introduced as follows.

Theorem 7.1.2 [Cao93a]. If Gi(z) (1 ≤ i ≤ p) denotes a convex function for ∀i, then the fuzzy geometric programming

G0(z) ≲ G0,
Gi(z) ≲ 1 (1 ≤ i ≤ p)

(7.1.10)

is a fuzzy convex programming, and a strictly local minimal solution to (7.1.10) is its global minimal solution, where G0 is an expectation value of the objective function G0(z).

Proof: Change (7.1.10) into a crisp programming by (1.5.6) and (1.5.7) [Zim00]; it is then easy to prove that the theorem holds, in a similar way to Theorem 1.2.1 in Ref. [WY82].
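The α-program (7.1.7) of Theorem 7.1.1 can be solved with a general nonlinear solver. Below is a minimal numerical sketch (the data are made up, not from the book): g0(x) = 1/(x1·x2) with target z0 = 3.8 and flexibility d0 = 0.5, and a single fuzzy constraint g1(x) = x1 + x2 ≲ 1 with flexibility d1 = 0.1; we maximize the common satisfaction degree α.

```python
# Sketch only: solving the alpha-program (7.1.7) for a toy fuzzy GP with SLSQP.
import math
from scipy.optimize import minimize

z0, d0, d1 = 3.8, 0.5, 0.1

def neg_alpha(v):
    # v = (x1, x2, alpha); maximise alpha by minimising -alpha.
    return -v[2]

cons = [
    # g0(x) <= z0 - d0*log(alpha)
    {"type": "ineq",
     "fun": lambda v: z0 - d0 * math.log(v[2]) - 1.0 / (v[0] * v[1])},
    # g1(x) <= 1 - d1*log(alpha)
    {"type": "ineq",
     "fun": lambda v: 1.0 - d1 * math.log(v[2]) - (v[0] + v[1])},
]

res = minimize(neg_alpha, x0=[0.5, 0.5, 0.5], method="SLSQP",
               bounds=[(1e-6, None), (1e-6, None), (1e-6, 1.0)],
               constraints=cons)
x1, x2, alpha = res.x
print(round(alpha, 3))   # common satisfaction degree, roughly 0.85 for this data
```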

Ji k=1

cik

m : l=1

γikl

x

=

Ji

m

cik el=1

γikl zl

= Gi (z) (0 i p).

k=1

Thereby (P˜ ) is turned into (7.1.10). From Theorem 7.1.2 we know the theorem holds.


Theorem 7.1.4. The programming (P̃) is equivalent to

miñ ḡ0(x)
s.t. ḡi(x) ≲ 1 (1 ≤ i ≤ p),
     x > 0,

where

ḡi(x, x^(k−1)) = Π_{k=1}^{Ji} (cik/εik)^εik · Π_{l=1}^m xl^{ Σ_{k=1}^{Ji} γikl εik }  (0 ≤ i ≤ p)

is a monomial posynomial.

Proof: For ∀x^(k−1) > 0, by using a fuzzy geometric inequality from Ref. [Cao93a], we have ḡi(x, x^(k−1)) ≤ gi(x), where

ḡi(x, x^(k−1)) = Π_{k=1}^{Ji} ( cik Π_{l=1}^m xl^γikl / εik )^εik = c̄i Π_{l=1}^m xl^γ̄il,
gi(x) = Σ_{k=1}^{Ji} cik Π_{l=1}^m xl^γikl;

here c̄i = Π_{k=1}^{Ji} (cik/εik)^εik, γ̄il = Σ_{k=1}^{Ji} γikl εik, and εik = vik(x^(k−1)) / gi(x^(k−1)) (1 ≤ k ≤ Ji, 1 ≤ i ≤ p). It is easy to prove that the theorem holds.

Theorem 7.1.5. Let A˜i be a continuous and strictly monotone fuzzy-value function. The dual programming of (P˜ ) is [Cao89a] w00 p J wik p ) )i ) wi0 c˜ik a ˜00 ˜ ˜ wi0 (D) max d(w) = w00 i=0 k=1 ai wik i=1 s.t. w00 = 1, Γ T w = 0, w 0, where

⎛

γ011 · · · γ01l · · · ⎜ .. .. ⎜ . . ⎜ ⎜ γ0J 1 · · · γ0J l · · · 0 0 ⎜ Γ = ⎜ ··· ··· ⎜ ⎜ γp11 · · · γp1l · · · ⎜ ⎝ ··· ··· γpJp 1 · · · γpJp l · · ·

⎞ γ01m .. ⎟ . ⎟ ⎟ γ0J0 m ⎟ ⎟ ··· ⎟ ⎟ γp1m ⎟ ⎟ ··· ⎠ γpJp m

(7.1.11)

denotes structure of exponent to apiece term of variable xl corresponding to an objective function g0 (x) and each constraint function gi (x)(1 i p), called exponent matrix. It contains J = (J0 + J1 + · · · + Jp ) row and m column, J is the sum of apiece term in gi (x)(0 i p), and w = (w01 , · · · , w0J0 , · · · , wp1 , · · · , wpJp )T is a J−dimensional variable vector, wi0 =


wik = wi0 + wi1 + · · · + wiJi (0 i p) is the sum of each dual variables

k=1

corresponding to an objective function g0 (x)(i = 0) or constraint function % cik gi (x)(i = 0); a ˜ik = (0 k Ji , 0 i p) is freely ﬁxed in a closed value ai interval [a1ik , a2ik ], and the degree of accomplishment is determined by formula like (1.5.3). ˜ In order to assume the deﬁned continuity of d(w), we stipulate (wik )wik |wik =0 = 1. Proof: Because (P˜ ) can be turned into (7.1.1), follow Ref. [Cao93a] and it ˜ Now the theorem is proved. can be proved that the dual form in (7.1.1) is (D). From Theorem 7.1.4 it is known that any fuzzy posynomial geometric programming can be turned into a monomial fuzzy posynomial geometric programming. Thereby, only monomial fuzzy posynomial geometric programming is considered: % g0 (x) = c0 min s.t. gi (x)) = ci

m ) l=1 m ) l=1

xγl 0l (7.1.12)

xγl il bi (1 i p),

x > 0, its dual form means max

p w w0 ) i c˜ i c˜ 0 i=1

s.t. w0 = 1, p γ il wi = 0 (1 l m),

(7.1.13)

i=0

where c˜ 0 =

c˜0 z0

w 0, < c˜ i . bi

;

, c˜ i =

Theorem 7.1.6. Given monomial fuzzy posynomial geometric programming like (7.1.12), then it can be turned into a fuzzy linear programming % min s.t.

m

γ0l zl

l=1 m

(7.1.14)

γil zl + ln ci ln bi (1 i p),

l=1

x > 0, with fuzzy optimal solution to (7.1.14) being that of (7.1.12). m

Proof: Let zl = ln xl (1 l m). Then gi (x) = ci e that (7.1.12) can be turned into

l=1

γil zl

(0 i p), such

7.1 Introduction of Fuzzy Geometric Programming m

% c0 el=1 min m

γ0l zl

γil zl

s.t. ci el=1 x > 0,

199

bi (1 i p),

equivalent to (7.1.14). Hence, the ﬁrst conclusion of the theorem holds. Again (7.1.12) is a fuzzy convex programming, such that the second conclusion of the theorem holds from Theorem 7.1.2. 7.1.2 Extension in Fuzzy Geometric Programming Fuzzy geometric programming can be extended into two cases, that is general fuzzy posynomial geometric programming and general reversed one. a) General fuzzy posynomial geometric programming 7 writing % in (P˜ ) for inf, Deﬁnition 7.1.7. Replace min (P˜1 )

7 g0 (x) inf s.t. gi (x) 1 (1 i p), x > 0,

calling (P˜1 ) a general fuzzy posynomial geometric programming. ˜ into s% Change max % in (D) up, that is w00 p J wik p ) )i ) wi0 a ˜00 c˜ik ˜ ˜ (D1 ) s% up d(w) = wi0 w00 i=0 k=1 wik i=1 s.t. w00 = 1, Γ T w = 0, w 0, ˜ 1 ) the dual programming. Γ is an exponent matrices as (7.1.11). calling (D 7 inf denotes fuzzy inﬁmum, s% up denotes fuzzy supremum. When (P˜1 ) and ˜ (D1 ) are fuzzy consistent, we denote MP˜1 and MD˜ 1 a fuzzy constraint inﬁmum ˜ 1 ), respectively. Obviously, M ˜ of (P˜1 ) and a fuzzy constraint supremum of (D P1 is a ﬁnite fuzzy number. b) General fuzzy reversed posynomial geometric programming Deﬁnition 7.1.8. Calling the general form 72 ) (P

7 g0 (x) % (or inf) min s.t. gi (x) 1, (1 i p ) gi (x) 1, (p + 1 i p) x>0

(7.1.15)

200

7 Fuzzy Geometric Programming

a fuzzy reversed posynomial geometric programming [Cao02a][Cao07a]. Here Ji vik (x) (0 i p) are posynomial functions of x, where all gi (x) = k=1

vik (x) =

⎧ m ) ⎪ ⎪ xγl ikl , (1 k Ji ; 0 i p ) ⎨ cik l=1

m ) ⎪ ikl ⎪ x−γ , (1 k Ji ; p + 1 i p) ⎩ cik l l=1

are monomial of x. The membership functions of objective g0 (x) and constraint functions gi (x)(1 i p) are deﬁned by (1.5.6) and (1.5.7), respectively. 72 ) is Dual programming in (P w00 p J wik p Ji ) )i ) ) a ˜0k cik ˜ ˜ (D2 ) max % (or sup) d(w) = w00 ˜ik wik i=0 k=1 a i=p +1 k=1 −wik p p ) wi0 ) cik a ˜ik −wi0 wi0 wi0 wik i=1 i=p +1 J0 s.t. w00 = w0k = 1, k=1

Γ T w = 0, w 0,

where Γ still represents a fuzzy exponent matrix, i.e., ⎞ ⎛ γ011 · · · γ01l · · · γ01m ⎟ ⎜ ··· ··· ··· ⎟ ⎜ ⎟ ⎜ γ0J0 l · · · γ0J0 m γ0J0 1 · · · ⎟ ⎜ ⎟ ⎜ ··· ··· ··· ⎟ ⎜ −1 −1 −1 . Γ =⎜ γp Jp 1 · · · γp Jp l · · · γp Jp m ⎟ ⎟ ⎜ ⎟ ⎜ −1 −1 −1 ⎜ −γp +1Jp +1 1 · · · −γp +1Jp +1 l · · · −γp +1Jp +1 m ⎟ ⎟ ⎜ ⎠ ⎝ ··· ··· ··· −γpJp 1 · · · −γpJp l · · · −γpJp m Here w = (w00 , w01 , . . . , w0J0 , wp1 , . . . , wpJp )T is a J −dimensional variable vector (J = 1 + J0 + · · · + Jp ), and wi0 = wi1 + wi2 + · · · + wiJi ; −wik and −wi0 denote a reversed direction inequality gi (x)1 corresponding to cik −wik −wi0 factors ( ) and wi0 in the upper-right-corner exponent; a ˜0k is a wik fuzzy number. c) Other case When the above objective and constraint functions in geometric programming contain a fuzzy relative operators and fuzzy coeﬃcients, we call them a geometric programmings with fuzzy relative and fuzzy coeﬃcients. The researches concerned can be seen in Section 7.8, 7.9, 8.5 and 8.6.

7.2 Lagrange Problem in Fuzzy Geometric Programming

7.2.1 Introduction

The author advanced fuzzy reversed posynomial geometric programming on the basis of fuzzy posynomial geometric programming [Cao87a][Cao93a] and Zadeh's fuzzy set theory [Zad65], giving its Lagrange problem and a direct algorithm, which can be widely applied in optimization and classification.

7.2.2 Fuzzy Reversed Posynomial Geometric Programming Model

We expand the reversed posynomial geometric programming [WY82] into a fuzzy reversed posynomial geometric programming model.

Definition 7.2.1. Let

$$(\tilde P)\qquad \begin{aligned}\widetilde{\min}\quad & \tilde g_0(x)\\ \text{s.t.}\quad & \tilde g_i(x)\le 1\quad(1\le i\le p'),\\ & \tilde g_i(x)\ge 1\quad(p'+1\le i\le p),\\ & x>0.\end{aligned}\tag{7.2.1}$$

Then $(\tilde P)$ is called a fuzzy reversed posynomial geometric programming, where $x=(x_1,x_2,\cdots,x_m)^T$ is an $m$-dimensional variable vector, all $\tilde g_i(x)=\sum_{k=1}^{J_i}\tilde v_{ik}(x)$ $(0\le i\le p)$ are fuzzy posynomial functions of $x$, here

$$\tilde v_{ik}(x)=\begin{cases}\tilde c_{ik}\prod_{l=1}^{m}x_l^{\tilde\gamma_{ikl}}, & (1\le k\le J_i;\ 0\le i\le p'),\\[4pt] \tilde c_{ik}\prod_{l=1}^{m}x_l^{-\tilde\gamma_{ikl}}, & (1\le k\le J_i;\ p'+1\le i\le p)\end{cases}$$

are fuzzy monomials of $x$. For each item $\tilde v_{ik}(x)$ $(1\le k\le J_i;\ p'+1\le i\le p)$ in a reversed inequality $\tilde g_i(x)\ge 1$, the variable $x_l$ carries the exponent $-\tilde\gamma_{ikl}$ instead of $\tilde\gamma_{ikl}$.

Fuzzy coefficients $\tilde c_{ik}\ge 0$ and exponents $\tilde\gamma_{ikl}$ take values freely in the closed intervals $[c^-_{ik},c^+_{ik}]$ and $[\gamma^-_{ikl},\gamma^+_{ikl}]$, respectively. When $\tilde a$ stands for $\tilde c_{ik}$ or $\tilde\gamma_{ikl}$, and $a,a^-,a^+,r$ are real numbers, the degree of accomplishment of $\tilde a$ is determined [Cao93a] by

$$\mu_{\tilde a}(a)=\begin{cases}0, & a<a^-,\\[2pt] \Big(\dfrac{a-a^-}{a^+-a^-}\Big)^{r}, & a^-\le a\le a^+,\\[2pt] 1, & a>a^+.\end{cases}\tag{7.2.2}$$

Under (7.2.2), change $\tilde g_i(x)$ to $\bar g_i(x)$ [Cao02a]. Then the membership functions of the objective $\bar g_0(x)$ and of the constraints $\bar g_i(x)$ $(1\le i\le p)$ are defined by
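The degree of accomplishment (7.2.2) can be evaluated directly. A minimal Python sketch (function and parameter names are ours):

```python
def mu(a, a_minus, a_plus, r=1.0):
    """Degree of accomplishment of a value `a` for a coefficient that may
    range freely over the closed interval [a_minus, a_plus], as in (7.2.2):
    0 below the interval, 1 above it, and a ratio raised to the power r
    in between."""
    if a < a_minus:
        return 0.0
    if a > a_plus:
        return 1.0
    return ((a - a_minus) / (a_plus - a_minus)) ** r

print(mu(1.5, 1.0, 2.0))        # midpoint of [1, 2] -> 0.5
```

With $r>1$ the curve bends downward (e.g., the midpoint drops to $0.25$ for $r=2$), which lets a modeler penalize values far from $a^+$ more strongly.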


$$\mu_{\tilde A_i}(x)=\tilde B_i(\bar g_i(x))=\begin{cases}1, & \bar g_i(x)\le b_i,\\[2pt] 1-\dfrac{t_i}{d_i}, & \bar g_i(x)=b_i+t_i\ \ (0\le t_i\le d_i),\\[2pt] 0, & \bar g_i(x)\ge b_i+d_i,\end{cases}\quad(0\le i\le p'),\tag{7.2.3}$$

where

$$b_i=\begin{cases}z_0, & i=0,\\ 1, & 1\le i\le p',\end{cases}$$

$z_0$ is an aspiration level of the objective function $\tilde g_0(x)$, and

$$\mu_{\tilde A_i}(x)=\tilde B_i(\bar g_i(x))=\begin{cases}1, & \bar g_i(x)\ge 1,\\[2pt] 1-\dfrac{t_i}{d_i}, & \bar g_i(x)=1-t_i\ \ (0\le t_i\le d_i),\\[2pt] 0, & \bar g_i(x)\le 1-d_i,\end{cases}\quad(p'+1\le i\le p),\tag{7.2.4}$$

here $d_i\ge 0$ $(0\le i\le p)$ are flexible indexes of the $i$-th fuzzy function $\bar g_i(x)$, and

$$\bar g_i(x)=\sum_{k=1}^{J_i}\tilde c_{ik}^{-1}(\beta)\prod_{l=1}^{m}x_l^{\tilde\gamma_{ikl}^{-1}(\beta)},\qquad \beta\in[0,1],\ (0\le i\le p).$$

If the objective function in (7.2.1) is written as a minimizing goal so that $z_0$ can be considered an upper bound, then (7.2.1) can be rewritten as

$$\begin{cases}\tilde g_0(x)\le z_0,\\ \tilde g_i(x)\le 1\quad(1\le i\le p'),\\ \tilde g_i(x)\ge 1\quad(p'+1\le i\le p),\\ x>0.\end{cases}\tag{7.2.5}$$

Definition 7.2.2. Let $\tilde A_i=\{x\in R^m\mid\tilde g_i(x)\le 1,\ x>0\}$ $(1\le i\le p')$ and $\tilde A_i=\{x\in R^m\mid\tilde g_i(x)\ge 1,\ x>0\}$ $(p'+1\le i\le p)$ be the fuzzy feasible solution sets corresponding to $\tilde g_i(x)\le 1$ and $\tilde g_i(x)\ge 1$, respectively. Then

$$\tilde Y=\tilde A_0\cap\Big(\bigcap_{1\le i\le p'}\tilde A_i\Big)\cap\Big(\bigcap_{p'+1\le i\le p}\tilde A_i\Big)$$

is called the fuzzy decision for (7.2.5), and likewise for (7.2.1), satisfying

$$\mu_{\tilde Y}(x)=\mu_{\tilde A_0}(x)\wedge\min_{1\le i\le p'}\mu_{\tilde A_i}(x)\wedge\min_{p'+1\le i\le p}\mu_{\tilde A_i}(x),\qquad x>0,\tag{7.2.6}$$

while $x^*$ is called a fuzzy optimal solution to (7.2.5), and likewise to (7.2.1), satisfying

$$\mu_{\tilde Y}(x^*)=\max_{x>0}\Big\{\min\Big\{\mu_{\tilde A_0}(x),\ \min_{1\le i\le p'}\mu_{\tilde A_i}(x),\ \min_{p'+1\le i\le p}\mu_{\tilde A_i}(x)\Big\}\Big\}.\tag{7.2.7}$$


If there exists a fuzzy optimal point set $\tilde A_0$ of $\tilde g_0(x)$ and (7.2.4) holds, (7.2.5) is called a fuzzy reversed posynomial geometric programming for $\tilde g_0(x)$ with respect to $\tilde Y$. Substituting (7.2.2), (7.2.3) and (7.2.4) into (7.2.6) and rearranging [Zim76], we get

$$\mu_{\tilde Y}(x)=\Big(1-\frac{\bar g_0(x)-z_0}{d_0}\Big)\wedge\min_{1\le i\le p'}\Big(1-\frac{\bar g_i(x)-1}{d_i}\Big)\wedge\min_{p'+1\le i\le p}\Big(1+\frac{\bar g_i(x)-1}{d_i}\Big).$$

By introducing a new variable $\alpha\in[0,1]$, the maximizing decision of (7.2.1) turns into finding a solution $x(>0)$ that maximizes $\mu_{\tilde Y}(x)$; let $\alpha=\mu_{\tilde Y}(x)$. By (7.2.2) and (7.1.8), the level condition $\mu_{\tilde A_0}(x)\ge\alpha$ becomes $\bar g_0(x)\le z_0+(1-\alpha)d_0$, and each $\mu_{\tilde A_i}(x)\ge\alpha$ becomes the corresponding bound on $\bar g_i(x)$. So we have the following.

Theorem 7.2.1. Maximizing $\mu_{\tilde Y}(x)$ is equivalent to maximizing $\alpha$ subject to $\mu_{\tilde Y}(x)\ge\alpha$; applied to (7.2.1), this yields

$$\begin{aligned}\max\ &\alpha\\ \text{s.t. }&\bar g_0(x)\le z_0+(1-\alpha)d_0,\\ &\bar g_i(x)\le 1+(1-\alpha)d_i\quad(1\le i\le p'),\\ &\bar g_i(x)\ge 1+(\alpha-1)d_i\quad(p'+1\le i\le p),\\ &x>0,\ \alpha,\beta\in[0,1].\end{aligned}\tag{7.2.8}$$
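For a concrete one-variable instance, (7.2.8) reduces to intersecting two intervals that move with $\alpha$, so the maximal $\alpha$ can be found by bisection. The sketch below is our own illustration, not from the text: it takes $g_0(x)=x$ with aspiration level $z_0$ and a single fuzzy constraint $2/x\le 1$, with flexible indexes $d_0,d_1$.

```python
def max_alpha(z0, d0, d1, tol=1e-9):
    """Bisect on alpha for a one-variable instance of (7.2.8).
    Feasible at level alpha iff some x > 0 satisfies
        2/(1 + (1-alpha)*d1) <= x <= z0 + (1-alpha)*d0."""
    def feasible(alpha):
        return 2.0 / (1.0 + (1.0 - alpha) * d1) <= z0 + (1.0 - alpha) * d0

    if not feasible(0.0):
        return None                  # even the fully relaxed problem is infeasible
    if feasible(1.0):
        return 1.0                   # the crisp problem is already feasible
    lo, hi = 0.0, 1.0                # invariant: feasible(lo), not feasible(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo

print(round(max_alpha(1.5, 1.0, 0.5), 4))   # -> 0.7344
```

Here the optimum level solves $2/(1+(1-\alpha)d_1)=z_0+(1-\alpha)d_0$, a quadratic in $1-\alpha$, and bisection recovers it without any calculus.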

Proof: Similar to Theorem 7.1.1; the theorem is not difficult to prove.

7.2.3 Fuzzy Lagrange Problem and Algorithm

Definition 7.2.3. Let $w=(w_{01},\ldots,w_{0J_0},\ldots,w_{p1},\ldots,w_{pJ_p})^{T}$. Write $I=\{(i,k)\mid \Gamma^{T}w=0,\ w\ge 0$ has a solution $w$ with $w_{ik}>0\}$; then $I$ is called an unreduced subscript set.

Definition 7.2.4. If the unreduced set $I=\{(i,k)\mid 1\le k\le J_i,\ 0\le i\le p\}$, i.e., $I$ includes all subscript pairs $(i,k)$, the primal fuzzy posynomial geometric programming $(\tilde P)$ and the dual programming $(\tilde D)$ are said to be canonical. Otherwise $(\tilde P)$ and $(\tilde D)$ are degenerate. More specifically, if $I$ fails to contain any $(0,k)$ $(1\le k\le J_0)$, $(\tilde P)$ and $(\tilde D)$ are called totally degenerate.

Definition 7.2.5. Assume that $\tilde g_i(x)$ is an $m$-dimensional fuzzy differentiable function; its gradient is defined as

$$\nabla_x\tilde g_i(x)=\Big(\frac{\partial}{\partial x_1}\tilde g_i(x),\frac{\partial}{\partial x_2}\tilde g_i(x),\cdots,\frac{\partial}{\partial x_m}\tilde g_i(x)\Big)^{T},$$


then it is easy to change it into

$$\nabla_x\bar g_i(x)=\Big(\frac{\partial}{\partial x_1}\bar g_i(x),\frac{\partial}{\partial x_2}\bar g_i(x),\cdots,\frac{\partial}{\partial x_m}\bar g_i(x)\Big)^{T}.$$

Definition 7.2.6. Find a fuzzy feasible solution $x^*$ to (7.2.1) and $\lambda^*=(\lambda_1^*,\lambda_2^*,\cdots,\lambda_p^*)^{T}\ge 0$ satisfying $\lambda_i^*(\tilde g_i(x^*)-1)=0$ $(1\le i\le p)$, where $\tilde g_i(x^*)=1$ is a fuzzy equality whose membership degree $\tilde B_i(\tilde g_i(x^*)-1)$ is 1, such that the fuzzy Lagrange function

$$\tilde L(x,\lambda)=\tilde g_0(x)+\sum_{i=1}^{p'}\lambda_i\big(\tilde g_i(x)-1\big)+\sum_{i=p'+1}^{p}\lambda_i\big(1-\tilde g_i(x)\big)$$

satisfies $\nabla_x\tilde L(x^*,\lambda^*)=0$; this is called a Lagrange problem in (7.2.1).

Theorem 7.2.2. Let $x^*$ be a fuzzy feasible solution to (7.2.1), write $E=\{i\mid\tilde g_i(x^*)=1\ (1\le i\le p)\}$ for the subscript set of fuzzy effective constraints at $x^*$, and let $\mu_{\tilde A_i}(\cdot)$ $(0\le i\le p)$ be continuous and strictly monotone fuzzy functions. Then there exists $\lambda^*$ making $(x^*,\lambda^*)$ a fuzzy solution of the Lagrange problem if and only if all variable vectors $x(>0)$ satisfy

$$\sum_{l=1}^{m}\tilde\Gamma_{il}\big(\ln x_l-\ln x_l^*\big)\ge 0\qquad(i\in E),\tag{7.2.9}$$

and then

$$\tilde g_0(x^*)\le\tilde g_0(x),\tag{7.2.10}$$

where $\tilde\Gamma_{il}=\sum_{k=1}^{J_i}\tilde\gamma_{ikl}\tilde v_{ik}(x^*)$ $(i\in E,\ 1\le l\le m)$.

Proof: Let $\mu_{\tilde A_i}(\cdot)$ $(0\le i\le p)$ be continuous and strictly monotone fuzzy functions; then (7.2.1) is equivalent to

$$\begin{aligned}\min\ &\mu_{\tilde A_0}(\tilde g_0(x))\\ \text{s.t. }&\mu_{\tilde A_i}(\tilde g_i(x)-1)\ge\alpha\quad(1\le i\le p'),\\ &\mu_{\tilde A_i}(1-\tilde g_i(x))\ge\alpha\quad(p'+1\le i\le p),\\ &\alpha\in[0,1],\ x>0,\end{aligned}\tag{7.2.11}$$

while

$$E\Longleftrightarrow E'=\{i\mid\mu_{\tilde A_i}(\tilde g_i(x^*)-1)=0\ (1\le i\le p)\},$$

with (7.2.9) equivalent to

$$\mu_{\tilde A_i}\big[\tilde\Gamma_{il}(\ln x_i-\ln x_i^*)\big]\ge\alpha\qquad(i\in E'),\tag{7.2.12}$$


and then (7.2.10) is equivalent to

$$\mu_{\tilde A_0}\big[\tilde g_0(x^*)-\tilde g_0(x)\big]\ge\alpha.\tag{7.2.13}$$

From the condition it is known that $x^*$ is a fuzzy feasible solution to (7.2.1), which is equivalent to $(x^*,\alpha)$ being a parametric feasible solution to (7.2.11) [Cao93a]. Therefore, for $E'$ and any $\alpha\in[0,1]$ there exists $\lambda^*$ making $(x^*,\lambda^*,\alpha)$ a Lagrange-problem solution with parameter $\alpha$ if and only if all variable vectors $x>0$ tally with (7.1.7); and (7.1.9) holds from Theorem 4.4.1 of Ref. [WY82]. Hence the theorem holds by the arbitrariness of $\alpha\in[0,1]$.

Proposition 7.2.1. Let $\mu_{\tilde A_i}(x)$ $(0\le i\le p)$ be continuous and monotone fuzzy functions. Under the assumption of a constraint complete lattice, a local optimal solution to (7.2.1) must be part of a fuzzy solution to the Lagrange problem.

Proposition 7.2.2 (converse proposition). Let $\mu_{\tilde A_i}(x)$ $(0\le i\le p)$ be continuous and strictly monotone fuzzy functions and let $x^*$ be part of a fuzzy solution to the Lagrange problem. Then $x^*$ is a global fuzzy optimal solution to (7.2.1) if (7.2.1) is fuzzy convex or $p'=p$; if $p'\ne p$, $x^*$ is not necessarily a global fuzzy optimal solution to (7.2.1), nor even a local one.

Direct Algorithm [Asa82]. If $\eta$ is a continuous function on $[0,1]$, there exists a unique fixed point $\bar\alpha=\eta(\alpha)$. Let $\tilde g_i(x)$ be differentiable. The steps of the direct algorithm are as follows.

1° Let $k=1$, and determine $\alpha_1$ as well as $h$ by means of $1-hd=\alpha_1$.

2° Calculate $\eta^{(k)}=\sup_{x\in A_{\alpha_k}}\big|\mu_{\tilde A_0}(\tilde g_0(x))\big|$ and
$$\tilde M^{(k)}(x)=\frac{1}{\eta^{(k)}}\,\tilde g_0(x)\in[0,1].$$

3° Calculate $\varepsilon_k=\alpha_k-\tilde M^{(k)}(x)$. If $|\varepsilon_k|>\varepsilon$, go to 4°; otherwise go to 5°.

4° Select $r_k\in[0,1]$ properly, set $\alpha_{k+1}=\alpha_k-r_k\varepsilon_k$, let $k$ be $k+1$, and go to 2°.

5° Calculate $\tilde M^{(k)}(x^*)$ when $\alpha=\alpha_k$; then $x^*$ is an optimal solution to $(\tilde P)$.

Note. It is proper to take $\alpha_1\in[0.9,1]$ when $\tilde g_0(x)$ is strictly monotonically increasing, and $\alpha_1\in[0.75,0.9]$ otherwise. If $b(>0)$ is very large, larger, smaller, or very small, it is proper to take $h$ as 0.02, 0.2, 2 and 20, respectively. As for the selection of $r_k$: when $\varepsilon_1\approx\varepsilon_2$, $r_k=0.5$ may be chosen; if $\varepsilon_1$ and $\varepsilon_2$ change little, $r_k\in[0.618,1]$ can be taken, and if $\varepsilon_1\gg\varepsilon_2$, $r_k\in[0.382,0.4]$; otherwise a contradiction may appear.
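Viewed abstractly, the five steps of the direct algorithm form a damped fixed-point iteration on the level $\alpha$. The following Python sketch assumes two user-supplied callbacks, `solve_crisp` (an optimal point of the crisp subproblem at level $\alpha$) and `eta` (the normalizing supremum of step 2°); both names, and the toy callbacks used below, are our own, not from the text.

```python
def direct_algorithm(solve_crisp, g0, eta, alpha1=0.95, eps=1e-6, r=0.5, max_iter=100):
    """Damped fixed-point sketch of the direct algorithm:
    alpha_{k+1} = alpha_k - r * (alpha_k - M(x_k)),
    where M(x) = g0(x) / eta(alpha) is the normalized objective."""
    alpha = alpha1
    x = solve_crisp(alpha)
    for _ in range(max_iter):
        x = solve_crisp(alpha)          # step 2: solve the crisp subproblem
        M = g0(x) / eta(alpha)          # normalized objective in [0, 1]
        eps_k = alpha - M               # step 3
        if abs(eps_k) <= eps:
            return x, alpha             # step 5: fixed point reached
        alpha -= r * eps_k              # step 4: damped update
    return x, alpha

# Toy check with made-up callbacks: the fixed point of alpha = 1 - alpha is 0.5.
x, a = direct_algorithm(lambda a: max(0.2, 1.0 - a), lambda x: x, lambda a: 1.0)
print(round(a, 6))   # -> 0.5
```

The damping factor plays the role of $r_k$; with $r=0.5$ the toy iteration lands on the fixed point in a few steps.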


Example 7.2.1: Find

$$\widetilde{\min}\ 2x_1+3x_2\qquad\text{s.t. }x_1^2+x_2^2\ge 1,\ x_1,x_2>0.$$

Since $\eta^{(1)}=\sqrt{13}$, we suppose a fuzzy constraint membership function

$$\mu_1(d_1)=\begin{cases}1-0.2h, & 0\le d_1<0.25,\\ 0, & \text{otherwise}.\end{cases}$$

After two steps we find

$$x_1^*=0.915683,\qquad x_2^*=0.555,\qquad \tilde M^{(2)}=0.969717,$$

and the objective value is $S\approx 3.496$, with constraint infimum $M_{\bar P}=2.1415$, where $x^{(0)*}=(1.07075,0)$ is a fuzzy minimum solution. $x^{(2)}=(x_1^*,x_2^*)=(0.915683,0.555)$ is not a global fuzzy optimum point of the problem, nor a local one; so Proposition 7.2.2 is confirmed. But $x^*$ is still a fuzzy optimal point over all $x$ satisfying (7.2.9). Since $\Gamma_{11}=-2(x_1^*)^2\approx-1.677$ and $\Gamma_{12}=-2(x_2^*)^2\approx-0.616$, all $x_1,x_2$ of the problem satisfy

$$-1.677(\ln x_1-\ln x_1^*)-0.616(\ln x_2-\ln x_2^*)\ge 0\ \Longrightarrow\ x_1^{-1.677}x_2^{-0.616}\ge 0.44572,$$

whence $2x_1+3x_2\ge 3.4963$, i.e., $(x_1^*,x_2^*)$ is a fuzzy optimal point of the problem within a certain range; so Theorem 7.2.2 is confirmed. This property is called tangential optimality of fuzziness.

7.2.4 Conclusion

A direct algorithm is given for the Lagrange problem of fuzzy reversed posynomial geometric programming; its dual programming can be built by fuzzy dual theory [Cao02a]. Because fuzzy posynomial geometric programming is a special case of the fuzzy reversed one, the idea and method above also suit fuzzy posynomial geometric programming.

7.3 Antinomy in Fuzzy Geometric Programming

7.3.1 Antinomy in Fuzzy Posynomial Geometric Programming

Definition 7.3.1. Suppose that a fuzzy optimal solution exists in fuzzy posynomial geometric programming (7.1.3). If $\exists t_i>0$ with $\sum_{i=1}^{p}t_i>0$, a fuzzy optimal solution to (7.1.5) exists. Again, if the optimal value $g_0^{(1)}$ of (7.1.3) is larger than the optimal value $g_0^{(2)}$ of (7.1.5), then we say that antinomy appears in (7.1.3).

What is the reason for such a strange phenomenon? The following is a sufficient and necessary condition for antinomy to appear in a general non-degenerate fuzzy posynomial geometric programming.

Theorem 7.3.1. Antinomy appears in (7.1.3) if and only if the optimal value of (7.1.3) does not equal that of (7.1.1).

Proof: Let $x^{(1)}$ and $x^{(2)}$ be fuzzy optimal solutions to (7.1.3) and (7.1.1), respectively.

Sufficiency. If $g_0(x^{(1)})\ne g_0(x^{(2)})$, then, since $x^{(1)}$ is a fuzzy feasible solution to (7.1.1), $g_0(x^{(1)})>g_0(x^{(2)})$. Now we build $d_i=1-g_i(x^{(2)})$ $(1\le i\le p)$. Obviously, $d=(d_1,d_2,\cdots,d_p)^{T}\ge 0$ and $\sum_{i=1}^{p}d_i>0$. Then (7.1.5) is constructed with $x^{(2)}$ a fuzzy feasible solution to (7.1.5), and (7.1.1) is obtained from (7.1.5); hence the fuzzy optimal value of (7.1.5) is larger than or equal to that of (7.1.1). Let $x^{(3)}$ be a fuzzy optimal solution to (7.1.5), so that $g_0(x^{(2)})\ge g_0(x^{(3)})$, while $g_0(x^{(1)})>g_0(x^{(2)})$. Therefore $g_0(x^{(1)})>g_0(x^{(3)})$, and antinomy appears in (7.1.3).

Necessity. If antinomy appears in (7.1.3), i.e., $\exists d_i\ge 0$ with $\sum_{i=1}^{p}d_i>0$ such that a fuzzy optimal solution exists in (7.1.5) with $g_0(x^{(1)})>g_0(x^{(3)})$, then for $\forall i\in\{1,2,\cdots,p\}$ we have $d_i=1-g_i(x^{(3)})\ge 0$, i.e., $x^{(3)}$ is a fuzzy feasible solution to (7.1.1); therefore $g_0(x^{(2)})\le g_0(x^{(3)})$. We get $g_0(x^{(2)})\le g_0(x^{(3)})<g_0(x^{(1)})$, i.e., the fuzzy optimal value of (7.1.3) is not equal to that of (7.1.1).

Expanding convexity, we have the following.

Definition 7.3.2. Let $X\subset R^m$ be a convex set. If $g_0(x)$ is a fuzzy convex function (resp. a strongly fuzzy convex one) with respect to $\tilde A_0$, and $g_i^*(x)$ $(1\le i\le p)$ are fuzzy convex functions (resp. strongly fuzzy convex ones) with respect to $\tilde F_i$, then we call (7.1.3) a fuzzy convex (resp. strongly fuzzy convex) programming with respect to $g_0(x)$.

Theorem 7.3.2. Let $x^*$ be an optimal solution to fuzzy posynomial geometric programming $(\tilde P)$. If $g_0(x),g_i(x)$ $(1\le i\le p)$ are differentiable, $g_0(x)$ is pseudoconvex and $g_i(x)$ is quasiconvex at $x^*$, and $\nabla g_i(x^*)$ $(0\le i\le p)$ are linearly independent, then antinomy appears in $(\tilde P)$ $\iff$ the combination coefficients of $\nabla g_i(x)$ in the Kuhn-Tucker condition at $x^*$ contain a negative component, i.e., $\exists\lambda_i$ $(1\le i\le p)$ such that

$$\nabla g_0(x^*)-\sum_{i=1}^{p}\lambda_i\nabla g_i(x^*)=0\tag{7.3.1}$$

with at least one negative component $\lambda_i<0$.


Proof: Since (7.1.3) can be turned into a determined posynomial geometric programming [Cao93a]

$$\begin{aligned}\max\ &\alpha\\ \text{s.t. }&g_0(x)\le z_0-d_0\log\alpha,\\ &g_i(x)=1\quad(1\le i\le p),\\ &\alpha\in[0,1],\ x>0,\end{aligned}\tag{7.3.2}$$

(7.3.2) can be changed into the determined posynomial geometric programming

$$\begin{aligned}\max\ &\alpha\\ \text{s.t. }&g_0(x)\le z_0-d_0\log\alpha,\\ &g_i(x)\le 1\quad(1\le i\le p),\\ &\alpha\in[0,1],\ x>0,\end{aligned}\tag{7.3.3}$$

and (7.1.1) can be converted into the determined posynomial geometric programming (7.1.7); then we have the following.

Necessity. Suppose to the contrary that $\lambda_i\ge 0$ $(1\le i\le p)$. If $x^*$ is an optimal solution to (7.1.3), we can show, using convexity and the K-T optimality condition under the assumptions on $g_i(x)$ $(0\le i\le p)$ in this theorem, that the optimal solution $x^*$ of (7.1.3) is still an optimal solution to (7.1.1). In fact, the fuzzy posynomial geometric programmings (7.1.3) and (7.1.1) can be changed into the determined posynomial geometric programmings (7.3.3) and (7.1.7), respectively, and it is easy to see that an optimal solution $\bar x^*$ to (7.3.2) is still one to (7.3.3). From there we can prove that an optimal solution to (7.1.3) is still an optimal one to (7.1.1). Again, since (7.1.3) and (7.1.1) share the same optimal solution, their optimal values are equal as well. This contradicts the appearance of antinomy in (7.1.3). Therefore at least one component of $\lambda$ is negative.

Sufficiency. If $\lambda$ contains at least one negative component, then $A^{T}P=0$, $P\ge 0$, $P\ne 0$ has no solution, where $A=(\nabla g_0(x^*),-\nabla g_1(x^*),\cdots,-\nabla g_p(x^*))^{T}$. Otherwise, if it had a solution $P=(\lambda_0,\lambda_1,\cdots,\lambda_p)$, i.e.,

$$\lambda_0\nabla g_0(x^*)-\sum_{i=1}^{p}\lambda_i\nabla g_i(x^*)=0,$$

then $\lambda_0\ne 0$ (otherwise $\nabla g_i(x)$ $(1\le i\le p)$ would be linearly dependent, contradicting the assumption), such that

$$\nabla g_0(x^*)-\sum_{i=1}^{p}\frac{\lambda_i}{\lambda_0}\nabla g_i(x^*)=0$$

with all $\lambda_i/\lambda_0\ge 0$, contradicting that $\lambda$ contains a negative component. Therefore $A^{T}d<0$ has a solution by the Gordan theorem [BS79]; that is, a vector $d$ exists such that


$$\nabla g_0(x^*)^{T}d<0,\qquad \nabla g_i(x^*)^{T}d>0\quad(1\le i\le p).$$

As far as (7.3.2) is concerned, we can prove, similarly to Ref. [BS79], that $d$ is a descent feasible direction at $\bar x^*$, and so is $d$ at $x^*$ for (7.1.1). That is to say, another feasible solution $\hat x$ to (7.1.1) can certainly be found in that direction such that $g_0(\hat x)<g_0(x^*)$. In this way, the optimal value of (7.1.1) is smaller than that of (7.1.3). Therefore antinomy appears in (7.1.3), and the theorem is true.

Any fuzzy posynomial geometric programming is equivalent to a fuzzy linear programming by Section 7.1. Therefore, the condition for antinomy to appear in (7.1.3) is equivalently obtained from the condition for antinomy to appear in a fuzzy linear programming. The following results are obtained under non-degeneration.

Theorem 7.3.3. Let a non-degenerate fuzzy optimal solution exist in fuzzy posynomial geometric programming (7.1.3). Then the appearance of antinomy in (7.1.3) is equivalent to that in the corresponding fuzzy linear programming (7.1.14): when a basic solution $z^*=(z_B,z_N)$ corresponding to a basis $B$ is a non-degenerate optimal solution, a negative component exists in the dual basic solution $w=C_BB^{-1}$.

Proof: From the discussion of Theorems 7.1.4 and 7.1.6, any fuzzy posynomial geometric programming (7.1.3) can be changed into a fuzzy linear programming (7.1.14), so that antinomy appearing in (7.1.3) is equivalent to antinomy appearing in (7.1.14). It is proved in [Cao91c] that, when a basic solution is a non-degenerate optimal solution with respect to a basis $B$, the appearance of antinomy in (7.1.14) means that a negative component exists in its dual basic solution. Therefore the theorem holds.

Corollary 7.3.1.
Suppose that (7.1.3) has a non-degenerate fuzzy optimal solution and that, for the non-degenerate basic optimal solution $z^*=(z_B,z_N)$ of the corresponding fuzzy linear programming (7.1.14), $\exists j_0$ such that a certain determined checking coefficient of (7.1.14) satisfies $\sigma_{j_0}<0$; then antinomy appears in (7.1.3).

Proof: In fact, since (7.1.3) can be turned into (7.1.14), and the dual programming of (7.1.14) is (7.1.13), a negative component must exist in $w$ of (7.1.13) from $\sigma_j=wP_j<0$ [Cao91c]. This corollary holds from Theorem 7.1.6.

7.3.2 Example of Antinomy

Example 7.3.1: A precision-instrument factory needs $b_1$, $b_2$ and $b_3$ kg of three kinds of metal $A_1$, $A_2$ and $A_3$, respectively, smelted from four different kinds of ore $B_1$, $B_2$, $B_3$ and $B_4$. The metal content of each ore (in percentage) and its unit price (yuan/kg) are listed in Table 7.3.1.

Table 7.3.1. The Ore Element and Unit Price

  Ore            Metal A1   Metal A2   Metal A3   Unit price (yuan/kg)
  B1                1          0          5                2
  B2                1          4          6                3
  B3                1          2          5                1
  B4                1          6          4                2
  Required (kg)     b1         b2         b3

How should the ore be purchased to make the cost lowest? This question amounts to building a monomial fuzzy posynomial geometric programming:

$$\begin{aligned}\widetilde{\min}\ &g_0(x)=x_1^2x_2^3x_3x_4^2\\ \text{s.t. }&g_1(x)=x_1x_2x_3x_4=\tilde b_1,\\ &g_2(x)=x_2^4x_3^2x_4^6=\tilde b_2,\\ &g_3(x)=x_1^5x_2^6x_3^5x_4^4=\tilde b_3,\\ &x_1,x_2,x_3,x_4>0,\end{aligned}\tag{7.3.4}$$

where $\tilde b_i$ $(i=1,2,3)$ takes values freely in a certain closed real interval $[b_i^-,b_i^+]$; its degree of accomplishment is determined by (7.2.2). Setting $z_l=\ln x_l$ $(l=1,2,3,4)$, then from Theorem 7.1.6, (7.3.4) is changed into the fuzzy linear programming

$$\begin{aligned}\widetilde{\min}\ &f_0(z)=2z_1+3z_2+z_3+2z_4\\ \text{s.t. }&f_1(z)=z_1+z_2+z_3+z_4=\ln\tilde b_1,\\ &f_2(z)=4z_2+2z_3+6z_4=\ln\tilde b_2,\\ &f_3(z)=5z_1+6z_2+5z_3+4z_4=\ln\tilde b_3.\end{aligned}\tag{7.3.5}$$

If $z_1,z_2,z_3$ are taken as basis variables, the corresponding basis matrix $B$ and basis inverse matrix $B^{-1}$ are

$$B=\begin{pmatrix}1&1&1\\0&4&2\\5&6&5\end{pmatrix},\qquad B^{-1}=\begin{pmatrix}-4&-\tfrac12&1\\-5&0&1\\10&\tfrac12&-2\end{pmatrix}.$$

When we take $\tilde b_1=e^{9}$, $\tilde b_2=e^{12}$, $\tilde b_3=e^{46}$, the basic feasible solution $z_1=4,z_2=1,z_3=4,z_4=0$ is optimal with minimum $f_{01}=C_Bz_B=15$; when we take $\tilde b_1=e^{10}$, $\tilde b_2=e^{18}$, $\tilde b_3=e^{50}$, another basic feasible solution $z_1=1,z_2=0,z_3=9,z_4=0$ is optimal with minimum $f_{02}=11$. That is, with the other conditions unchanged, the constraint levels (resp. task quantities) in (7.3.4) are increased from $e^{9},e^{12},e^{46}$ to $e^{10},e^{18},e^{50}$, while the objective function (the cost) decreases by $f_0=15-11=4$ units. The reason is that there exists a negative component $-13$ in

$$w=C_BB^{-1}=(2,3,1)\begin{pmatrix}-4&-\tfrac12&1\\-5&0&1\\10&\tfrac12&-2\end{pmatrix}=\Big(-13,\,-\tfrac12,\,3\Big),$$

where $C_B=(2,3,1)$ are the objective coefficients of the basis variables in $f_0$; indeed $w\cdot(9,12,46)=15=f_{01}$,
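The basis computations of Example 7.3.1 can be checked mechanically. The sketch below (pure Python; helper names are ours) verifies $BB^{-1}=I$ and recovers both basic solutions $z=B^{-1}\ln b$.

```python
def mat_vec(M, v):
    """Matrix-vector product."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def mat_mul(A, B):
    """Square matrix product."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

B = [[1, 1, 1], [0, 4, 2], [5, 6, 5]]
B_inv = [[-4.0, -0.5, 1.0], [-5.0, 0.0, 1.0], [10.0, 0.5, -2.0]]

I = mat_mul(B, B_inv)                       # should be the identity
print([[round(x, 9) for x in row] for row in I])

print(mat_vec(B_inv, [9, 12, 46]))          # basic solution for b = (e^9, e^12, e^46)
print(mat_vec(B_inv, [10, 18, 50]))         # basic solution for the enlarged b
```

The two recovered basic solutions are $(4,1,4)$ and $(1,0,9)$, matching the optimal values $f_{01}=15$ and $f_{02}=11$ quoted above.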

testifying Theorem 7.3.2. For fuzzy posynomial geometric programming with negative exponents, the above-mentioned phenomenon of antinomy will also appear.

Example 7.3.2: Suppose a fuzzy posynomial geometric programming as follows:

$$\begin{aligned}\widetilde{\min}\ &g_0(x)=x_1x_2x_3^6x_4^3\\ \text{s.t. }&g_1(x)=x_1x_2^{-1}x_3^{-1}x_4^2=\tilde b_1,\\ &g_2(x)=x_1^2x_2x_3^2x_4^{-1}=\tilde b_2,\\ &g_3(x)=x_1x_2x_3^2x_4^3=\tilde b_3,\\ &x_1,x_2,x_3,x_4>0.\end{aligned}\tag{7.3.6}$$

Setting $z_l=\ln x_l$ $(l=1,2,3,4)$, (7.3.6) is changed into

$$\begin{aligned}\widetilde{\min}\ &f_0(z)=z_1+z_2+6z_3+3z_4\\ \text{s.t. }&f_1(z)=z_1-z_2-z_3+2z_4=\ln\tilde b_1,\\ &f_2(z)=2z_1+z_2+2z_3-z_4=\ln\tilde b_2,\\ &f_3(z)=z_1+z_2+2z_3+3z_4=\ln\tilde b_3.\end{aligned}\tag{7.3.7}$$

If $z_1,z_2,z_3$ are taken as basis variables, then when $\tilde b_1=e$, $\tilde b_2=e^{6}$, $\tilde b_3=e^{4}$ are taken, an optimal solution is $z_1=2,z_2=0,z_3=1$ with minimum $f_{01}=C_Bz_B=8$; when $\tilde b_1=e^{2}$, $\tilde b_2=e^{7}$, $\tilde b_3=e^{4}$ are taken, an optimal solution is $z_1=3,z_2=1,z_3=0$ with minimum $f_{02}=4$. That is, with the other conditions unchanged, the constraint levels in (7.3.6) are increased from $e^{1},e^{6},e^{4}$ to $e^{2},e^{7},e^{4}$, while the objective function decreases by $f_0=8-4=4$ units, again because a negative component $-8$ exists in

$$w=C_BB^{-1}=(1,1,6)\begin{pmatrix}0&1&-1\\-2&3&-4\\1&-2&3\end{pmatrix}=(4,-8,13),$$

which testifies Theorem 7.3.2. In fact, when $\tilde b_1=e^{1}$, $\tilde b_2=e^{6}$, $\tilde b_3=e^{4}$ are taken, the crisp programming corresponding to (7.3.7) is

$$\begin{aligned}\max\ &S=\alpha\\ \text{s.t. }&z_1-z_2-z_3+2z_4+\alpha\le 2,\\ &2z_1+z_2+2z_3-z_4+\alpha\le 7,\\ &z_1+z_2+2z_3+3z_4\ge 4,\\ &-z_1-z_2-6z_3-3z_4-4\alpha\ge-8.\end{aligned}$$

It is solved by the simplex method in five steps; when $\alpha=1$, an optimal parametric solution to the problem is

$$Z^*=(z_1,z_2,z_3,z_4,\alpha)=\Big(\frac{30}{13},\frac{19}{13},0,\frac{1}{13},1\Big).$$
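The dual basic solution of Example 7.3.2 and the two optimal values can be reproduced in a few lines (a sketch of ours, using the $B^{-1}$ given above; the negative component is exactly what signals the antinomy).

```python
def row_times_matrix(v, M):
    """Row vector v times matrix M."""
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

C_B = [1, 1, 6]                                   # objective coefficients of z1, z2, z3
B_inv = [[0, 1, -1], [-2, 3, -4], [1, -2, 3]]     # basis inverse from Example 7.3.2

w = row_times_matrix(C_B, B_inv)
print(w)                                          # dual basic solution -> [4, -8, 13]

# Optimal values follow from f0 = w . ln(b); the negative component w[1]
# means enlarging b2 lowers the cost -- the antinomy.
f01 = sum(wi * bi for wi, bi in zip(w, [1, 6, 4]))
f02 = sum(wi * bi for wi, bi in zip(w, [2, 7, 4]))
print(f01, f02)                                   # -> 8 4
```

Since $w_2=-8<0$, raising $\ln b_2$ from 6 to 7 alone already accounts for the 4-unit drop in cost, consistent with Theorem 7.3.3.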


Let $z^*=(z_1,z_2,z_3,z_4)$. Then it is an optimal solution to (7.3.7) with optimal value $f_0(z^*)=4$, such that an optimal solution to the primal fuzzy geometric programming (7.3.6) is

$$X^*=e^{Z^*}=(e^{z_1},e^{z_2},e^{z_3},e^{z_4},e^{1})=(x_1^*,x_2^*,x_3^*,x_4^*,1)=\Big(e^{\frac{30}{13}},e^{\frac{19}{13}},e^{0},e^{\frac{1}{13}},e^{1}\Big),$$

with optimal value $e^{4}$. When $\alpha\in[0,1]$, an optimal solution to the primal fuzzy geometric programming (7.3.6) is

$$X^*=e^{Z^*}=(e^{z_1},e^{z_2},e^{z_3},e^{z_4},e^{\alpha})=(x_1^*,x_2^*,x_3^*,x_4^*,\alpha)=\Big(e^{\frac{30}{13}},e^{\frac{19}{13}},e^{0},e^{\frac{1}{13}},e^{\alpha}\Big),$$

7.3 Antinomy in Fuzzy Geometric Programming

213

of λ2 < 0 satisﬁes suﬃcient condition in Theorem 7.3.2. Therefore, antinomy appears in the above fuzzy posynomial geometric programming. 7.3.3 Extension Based on a strong dual theory, there exist the following results. Theorem 7.3.4 [Cao93d]. Let (P˜1 ) denote fuzzy super-consistence, with ˜ 1 ) must be fuzzy consistence with the MP˜1 > 0, then a dual programming (D next representing: 10 There exists of an optimal solution; ˜ ∗ ) = M ˜ , where M ˜ and M ˜ represents the fuzzy 20 MD˜ 1 = d(w P1 P1 D1 ˜ 1 ), constraint inﬁmum of (P˜1 ) and the fuzzy constraint supremum of (D respectively. Theorem 7.3.5. On the assumption of Theorem 7.3.2, antinomy appears in (P˜1 ) if the fuzzy strong duality holds if and only if at least a component is ˜ 1 ), or if and only if no negative in solution w∗ = (w1∗ , w2∗ , · · · , wp∗ )T to (D ˜ 1 ). feasible solution exists in (D Proof: Since (P˜1 ) can be changed into a fuzzy linear programming, the theorem holds from the suﬃcient and necessary condition that antinomy appears in fuzzy linear programming. Corollary 7.3.2. The antinomy appears when (P˜1 ) is changed into fuzzy linear programming without nonnegative constraint if and only if there exists at least a negative component in w. Example 7.3.4: Consider programming corresponding to (7.3.4) 7 g0 (x) = x2 x3 x3 x2 inf 1 2 4 ˜ s.t. g1 (x) = x1 x2 x3 x4 = e9 , ˜ 12 4 2 6 g2 (x) = x2 x3 x4 = e , ˜ g3 (x) = x51 x62 x53 x44 = e46 , x1 , x2 , x3 , x4 > 0.

(7.3.7)

Let xl = ezl (l = 1, 2, 3, 4). Then (7.3.7) is changed by writing it down as 7 f0 (z) = 2z1 + 3z2 + z3 + 2z4 inf s.t. f1 (z) = z1 + z2 + z3 + z4 = ˜ 9 = b1 + b1 , ˜ = b2 + b2 , f2 (z) = 4z2 + 2z3 + 6z4 = 12 ˜ = b3 + b3 , f3 (z) = 5z1 + 6z2 + 5z3 + 4z4 = 46 x1 , x2 , x3 , x4 > 0.

(7.3.8)

In (7.3.8), an optimal solution is x=(4,1,4,0) and optimal value is f0 =15 for −2 1 , ], b2 = b3 = 0, f0 = b1 = b2 = b3 = 0. But when b1 ∈ [ 5 5

214

7 Fuzzy Geometric Programming

1 15 − 13 b1, or b2 ∈ [−8, 8], b1 = b3 = 0, f0 = 15 − b2 , the antinomy 2 takes place in (7.3.8), i.e., antinomy takes place in (7.3.7). The reason is that,when we solve the constraint equations in dual programming in (7.3.8)

w0 w1 w2 w3 1 1 1 1 s% up (w1 + w2 + w3 )w1 +w2 +w3 ˜ ˜ 2 ˜ 3 w0 9w1 12w 46w s.t. w0 = w1 + w2 + w3 = 1, 2w0 + w1 + 5w3 = 0, 3w0 + w1 + 4w2 + 6w3 = 0, w0 + w1 + 2w2 + 5w3 = 0, 2w0 + w1 + 6w2 + 4w3 = 0, w0 , w1 , w2 , w3 0, we ﬁnd 4-th equation 0 = d4 appears, a contradictory equation, and w3 = −3 is negative. Hence there is no feasible solution to the dual programming. Therefore it is clear that we can change the fuzzy posynomial geomet˜ for soluric programming (7.1.3) into the fuzzy dual programming (D) tion. For multi-objective fuzzy geometric programming, we can discuss it similarly. The discussion of the antinomy problem helps us not only to make rational use of resources, but also to diagnose a variety of systems in order to better the systems, which can be better used to disentomb the system potentially, so that business management can be eﬀective by antinomy.

7.4

Geometric Programming with Fuzzy Coeﬃcients

Suppose that x = (x1 , x2 , · · · , xm )T is an m−dimensional variable vector, c˜ik (i i p, 1 k J0 ) are an interval fuzzy numbers, then programming min s.t.

J0

c˜0k

k=1 Ji

c˜ik

k=1

m ) l=1 m ) l=1

xγl 0kl xγl ikl ˜ 1 (1 i p)

(7.4.1)

x>0 is called a geometric programming with fuzzy where c˜ik and 1˜ to coeﬃcients, + be interval fuzzy numbers, writing c˜ik = α[c− , c ikα ikα ](1 k Ji , 0 α∈[0,1] − + − + + α[1− i p), ˜ 1= α , 1α ]; [cikα , cikα ] is the interval numbers, cikα and cikα is α∈[0,1]

left and right endpoints in the interval, respectively, which are real numbers, γikl an arbitrary real number.

7.4 Geometric Programming with Fuzzy Coeﬃcients

215

7.4.1 Constraint Function with Fuzzy Coeﬃcients In geometric programming, if coeﬃcients in constraint are fuzzy numbers, i.e., min

J0

c0k

k=1

s.t.

m )

xγl 0kl

l=1 Ji

Ji m m ) γikl + ) γikl α c− x , c xl ikα ikα l k=1 l=1 α∈[0,1] k=1 − +l=1 ⊆ α[1α , 1α ] (1 i p)

(7.4.2)

α∈[0,1]

x>0 called a geometric programming with fuzzy-valued coeﬃcients in constraint conditions. Theorem 7.4.1. ∀α ∈ [0, 1], Problem (7.4.2) is equivalent to J0 m ) c0k xγl 0kl min s.t.

k=1 Ji

k=1 Ji k=1

l=1 m )

c− ikα c+ ikα

l=1 m ) l=1

xγl ikl 1− α, (7.4.3) xγl ikl

1+ α

(1 i p),

α ∈ [0, 1], x > 0.

˜= If x ¯α is an optimal solution to (7.4.3), then x

α¯ xα is a fuzzy optimal

α∈[0,1]

solution to (7.4.2). Proof: It is easy to prove by means of properties of an interval number. Because " ! Ji Ji m m : : γikl γikl + c− x , c x ⊆ [1− , 1+ ] ik ik l l k=1

is equivalent to

Ji k=1

c− ik

l=1 m :

xγl ikl

l=1

k=1

−

1 ,

l=1 Ji k=1

c+ ik

m :

xγl ikl 1+ ,

l=1

for α ∈ [0, 1], it is not diﬃcult to prove the equivalence of (7.4.2) and (7.4.3) by α¯ xα the α-cut set properties of a fuzzy number operation, such that x ˜= α∈[0,1]

means a fuzzy optimal solution to (7.4.2). 7.4.2

Objective Function with Fuzzy Coeﬃcients

Let an objective coeﬃcient c˜0k of programming (7.4.1) be a fuzzy number, i.e., + c˜0k = α[c− ikα , cikα ](1 k J0 ). α∈[0,1]

216

7 Fuzzy Geometric Programming

Then (7.4.1) can be denoted by min{˜ g0 (x) =

J0

c˜0k

m )

xγl 0kl }

k=1 l=1 Ji m ) γikl cik xl 1 k=1 l=1

s.t.

(1 i p),

(7.4.4)

x > 0. Theorem 7.4.2. Programming (7.4.4) is equivalent to ﬁnding J0

min s.t.

k=1 Ji

c− ikα cik

k=1

m ) l=1 m )

xγl 0kl

xγl ikl 1 (1 i p)

l=1

(7.4.5)

x>0 and min s.t.

J0 k=1 Ji

c+ ikα cik

k=1

m ) l=1

m )

l=1

xγl 0kl

xγl ikl 1 (1 i p),

(7.4.6)

x > 0. − − + + − + + ∀α ∈ [0, 1]. If x− α = (x1α , x2α , · · · , xmα ), xα = (x1α , x2α , · · · , xmα ) represents optimal solutions to (7.4.5) and to (7.4.6), respectively, then a fuzzy optimal + α[x− solution to (7.4.4) is x ˜= α , xα ]. α∈[0,1]

Proof: By means of α-cut set operation properties of fuzzy numbers, then (7.4.4) ⇔ min s.t.

J0 J0 m m ) γ0kl + ) γ0kl α c− x , c xl ikα ikα l

k=1

α∈[0,1] Ji

m )

k=1

l=1

cik

xγl ikl

l=1

k=1

l=1

(7.4.7)

1 (1 i p),

x > 0. Similar to Theorem 7.4.1, for a certain α ∈ [0, 1], ﬁnding an optimal solution to Programming (7.4.5) and (7.4.6) means getting an optimal solution to (7.4.7). + optimal solutions to (7.4.5) and to (7.4.6), respectively, then If x− α and xα are + x ˜= α[x− α , xα ] is an optimal solution to (7.4.7). α∈[0,1]

Now the theorem holds by the arbitrariness of α ∈ [0, 1]. 7.4.3

Mixed with Fuzzy Coeﬃcients in Objective and Constraints

When the coefficients $\tilde c_{ik}$ $(1\le k\le J_i,\ 0\le i\le p)$ in the objective and constraints are all fuzzy numbers, (7.4.1) is written as

$$\begin{aligned}\min\ &\Big\{\tilde g_0(x)=\sum_{k=1}^{J_0}\tilde c_{0k}\prod_{l=1}^{m}x_l^{\gamma_{0kl}}\Big\}\\ \text{s.t. }&\sum_{k=1}^{J_i}\tilde c_{ik}\prod_{l=1}^{m}x_l^{\gamma_{ikl}}\subseteq\tilde 1\quad(1\le i\le p),\\ &x>0.\end{aligned}\tag{7.4.8}$$

Theorem 7.4.3. Solving programming (7.4.8) is equivalent, $\forall\alpha\in[0,1]$, to solving

$$\begin{aligned}\min\ &\sum_{k=1}^{J_0}c_{0k\alpha}^-\prod_{l=1}^{m}x_l^{\gamma_{0kl}}\\ \text{s.t. }&\sum_{k=1}^{J_i}c_{ik\alpha}^-\prod_{l=1}^{m}x_l^{\gamma_{ikl}}\ge 1_\alpha^-,\\ &\sum_{k=1}^{J_i}c_{ik\alpha}^+\prod_{l=1}^{m}x_l^{\gamma_{ikl}}\le 1_\alpha^+\quad(1\le i\le p),\\ &x>0\end{aligned}\tag{7.4.9}$$

and

$$\begin{aligned}\min\ &\sum_{k=1}^{J_0}c_{0k\alpha}^+\prod_{l=1}^{m}x_l^{\gamma_{0kl}}\\ \text{s.t. }&\sum_{k=1}^{J_i}c_{ik\alpha}^-\prod_{l=1}^{m}x_l^{\gamma_{ikl}}\ge 1_\alpha^-,\\ &\sum_{k=1}^{J_i}c_{ik\alpha}^+\prod_{l=1}^{m}x_l^{\gamma_{ikl}}\le 1_\alpha^+\quad(1\le i\le p),\\ &x>0.\end{aligned}\tag{7.4.10}$$

If $x_\alpha^-$ and $x_\alpha^+$ represent optimal solutions to (7.4.9) and (7.4.10), respectively, then $\tilde x=\bigcup_{\alpha\in[0,1]}\alpha x_\alpha=\bigcup_{\alpha\in[0,1]}\alpha[x_\alpha^-,x_\alpha^+]$ represents an optimal solution

to (7.4.8).

Proof: By the operation properties of fuzzy number cut sets, we know (7.4.8) is equivalent to

$$\begin{aligned}\min\ &\bigcup_{\alpha\in[0,1]}\alpha\Big[\sum_{k=1}^{J_0}c_{0k\alpha}^-\prod_{l=1}^{m}x_l^{\gamma_{0kl}},\ \sum_{k=1}^{J_0}c_{0k\alpha}^+\prod_{l=1}^{m}x_l^{\gamma_{0kl}}\Big]\\ \text{s.t. }&\bigcup_{\alpha\in[0,1]}\alpha\Big[\sum_{k=1}^{J_i}c_{ik\alpha}^-\prod_{l=1}^{m}x_l^{\gamma_{ikl}},\ \sum_{k=1}^{J_i}c_{ik\alpha}^+\prod_{l=1}^{m}x_l^{\gamma_{ikl}}\Big]\subseteq\bigcup_{\alpha\in[0,1]}\alpha[1_\alpha^-,1_\alpha^+]\quad(1\le i\le p),\\ &x>0.\end{aligned}\tag{7.4.11}$$

Again, according to Theorems 7.4.1 and 7.4.2, solving (7.4.11) is equivalent to solving (7.4.9) and (7.4.10) $\forall\alpha\in[0,1]$. Therefore the theorem holds from the arbitrariness of $\alpha\in[0,1]$.
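The two-problem recipe of Theorems 7.4.1–7.4.3 is easy to mechanize for toy problems whose crisp $\alpha$-cut subproblems are solvable in closed form. The sketch below is our own illustration (not from the text): a one-variable problem $\min\ \tilde c_0 x$ s.t. $\tilde c_1 x^{-1}\subseteq\tilde 1$ with triangular fuzzy numbers, so each $\alpha$-cut is an interval and the feasible $x$ at each level is itself an interval.

```python
def cut(tri, alpha):
    """Alpha-cut [lo, hi] of a triangular fuzzy number tri = (a, b, c)."""
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

def solve_level(c0, c1, one, alpha):
    """Crisp pair (7.4.9)/(7.4.10) for the toy problem
        min c0 * x   s.t.  c1 * x**-1  'subset of'  one,
    all coefficients triangular fuzzy numbers.  Returns the interval
    [f-, f+] of optimal objective values at this level, or None if the
    alpha-feasible set is empty."""
    c0m, c0p = cut(c0, alpha)
    c1m, c1p = cut(c1, alpha)
    om, op = cut(one, alpha)
    x_lo = c1p / op        # from  c1+ * x**-1 <= 1+
    x_hi = c1m / om        # from  c1- * x**-1 >= 1-
    if x_lo > x_hi:
        return None
    return (c0m * x_lo, c0p * x_lo)   # min of c*x attained at smallest feasible x

c0 = (1.0, 2.0, 3.0)      # fuzzy objective coefficient
c1 = (1.9, 2.0, 2.1)      # fuzzy constraint coefficient
one = (0.5, 1.0, 1.5)     # fuzzy right-hand side "1"
for alpha in (0.0, 0.5, 1.0):
    print(alpha, solve_level(c0, c1, one, alpha))
```

At $\alpha=1$ both subproblems collapse to the crisp problem ($x=2$, objective 4), while lower levels return a widening objective interval, which is exactly the fuzzy optimal value $\tilde x$ assembled over all cuts.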


7.5 Geometric Programming with (α, c) Coefficients

7.5.1 Introduction

Consider

$$\begin{aligned}\widetilde{\min}\ &\tilde g_0(x)\\ \text{s.t. }&\tilde g_i(x)\le\tilde 1\quad(1\le i\le p'),\\ &\tilde g_i(x)\ge\tilde 1\quad(p'+1\le i\le p),\\ &x>0,\end{aligned}\tag{7.5.1}$$

where $\tilde g_i(x)=\sum_{k=1}^{J_i}\tilde c_{ik}\prod_{l=1}^{m}x_l^{\gamma_{ikl}}$ $(0\le i\le p)$ are $(\alpha,c)$ fuzzy functions of $x$,

Nonfuzziﬁcation Model

Deﬁnition 7.5.1. g˜i∗ 0 represents ‘almost positive’, which can be deﬁned by applying equivalent form g˜i∗ (0) 1 − h, α∗T i Gi 0, where

$ g˜i∗ =

g˜i , −˜ gi ,

0 i p , p + 1 i p,

h standing for the degree of g˜i∗ 0. The larger h is, the stronger the meaning of ‘almost positive’ is. If an objective expectation value ˜b0 is presented by decision makers, then (7.5.1) can be turned into the following form: objection

constraint

where

J0 c˜0k G0k (x) 0 g˜0 = ˜b0 G00 − k=1 ⎧ Ji ⎪ ⎪ ⎪ g˜i = G00 − c˜ik Gik (x) 0 ⎪ ⎪ ⎨ k=1 Ji −˜ g = −G + c˜ik Gik (x) 0 ⎪ i 00 ⎪ ⎪ k=1 ⎪ ⎪ ⎩x > 0 )m G00 = 1, Gik (x) = l=1 xγl ikl

(1 i p ), (p + 1 i p), (0 i p).

Theorem 7.5.1. Given that fuzzy coeﬃcients are denoted by c˜ = (α, c), where α = (ai1 , ai2 , · · · , aiJi )T , c = (ci1 , ci2 , · · · , ciJi )T (0 i p), and fuzzy functions are denoted by

$$
\tilde g_i(x)=\tilde c_{i1}\prod_{l=1}^{m}x_l^{\gamma_{i1l}}+\tilde c_{i2}\prod_{l=1}^{m}x_l^{\gamma_{i2l}}+\cdots+\tilde c_{iJ_i}\prod_{l=1}^{m}x_l^{\gamma_{iJ_il}}=\tilde cG=(\alpha^TG_i,\ c^TG_i),
$$

where $G_i=\bigl(\prod_{l=1}^{m}x_l^{\gamma_{i1l}},\ \prod_{l=1}^{m}x_l^{\gamma_{i2l}},\cdots,\prod_{l=1}^{m}x_l^{\gamma_{iJ_il}}\bigr)^T$, then its membership function is

$$
\tilde g_i(g_i)=
\begin{cases}
1-\dfrac{|g_i-\alpha^TG_i|}{c^T|G_i|}, & G_i\ne 0,\\
1, & G_i=0,\ g_i=0,\\
0, & G_i=0,\ g_i\ne 0,
\end{cases}
$$

where $|G_i|=(|G_{i1}|,\cdots,|G_{iJ_i}|)^T$ and $\tilde g_i(g_i)=0$ for $c^T|G_i|\le|g_i-\alpha^TG_i|$.

Proof: We only prove the case $G_i\ne 0$; the other cases are self-evident. Because $\triangle ABD\sim\triangle AEF$ (shown in Figure 7.5.1), $\dfrac{V}{c^T_{ik_i}}=\dfrac{1-h}{1}\Rightarrow V=c^T_{ik_i}(1-h)$, while


Fig. 7.5.1. Illustration of Expectation Value and Fuzzy Constraint Function

$$
\begin{aligned}
K&=-\bigl[g_i-\Sigma\alpha^T_{i(k_i-1)}G_{i(k_i-1)}\bigr]-\bigl[\alpha^T_{ik_i}-V\bigr]G_{ik_i}\\
&=\frac{\bigl|g_i-\Sigma\alpha^T_{i(k_i-1)}G_{i(k_i-1)}-\alpha^T_{ik_i}G_{ik_i}\bigr|-c^T_{ik_i}(1-h)\bigl|G_{ik_i}\bigr|}{|G_{ik_i}|}\\
&=\frac{\bigl|g_i-\Sigma\alpha^T_{ik_i}G_{ik_i}\bigr|-c^T_{ik_i}|G_{ik_i}|(1-h)}{|G_{ik_i}|}.
\end{aligned}
$$

Applying the similarity of right triangles, we have

$$
1-h=\frac{K}{\Sigma c^T_{i(k_i-1)}|G_{i(k_i-1)}|}
\ \Rightarrow\ 1-h=\frac{\bigl|g_i-\Sigma\alpha^T_{ik_i}G_{ik_i}\bigr|-c^T_{ik_i}|G_{ik_i}|(1-h)}{\Sigma c^T_{i(k_i-1)}|G_{i(k_i-1)}|}
$$
$$
\Rightarrow\ (1-h)\Sigma c^T_{i(k_i-1)}|G_{i(k_i-1)}|+(1-h)c^T_{ik_i}|G_{ik_i}|=\bigl|g_i-\Sigma\alpha^T_{ik_i}G_{ik_i}\bigr|
$$
$$
\Rightarrow\ 1-h=\frac{|g_i-\alpha^TG_i|}{\Sigma c^T_{ik_i}|G_{ik_i}|}
\ \Rightarrow\ h=1-\frac{|g_i-\alpha^TG_i|}{c^T_{ik}|G_i|}\quad(c_{ik}>0).
$$

Theorem 7.5.2. If a fuzzy coefficient is known to be $\tilde c=(\alpha,c)$, with $\alpha=(\alpha_{i1},\cdots,\alpha_{iJ_i})^T$, $c=(c_{i1},\cdots,c_{iJ_i})^T$, then

$$
\tilde g_i^*\gtrsim 0\ (1\le i\le p)\iff(\alpha^*_{ik}-hc^*_{ik})^TG_i\ge 0\ (1\le k\le J_i,\ 0\le i\le p).\tag{7.5.2}
$$

Proof: From Definition 7.5.1, we know

$$
\tilde g_0^*\gtrsim 0\iff\tilde g_0^*(0)=1-\frac{\alpha^{*T}_{0k}G_i}{c^{*T}_{0k}G_i}\le 1-h,\quad\alpha^{*T}_{0k}G_i\ge 0,
$$
$$
\vdots
$$
$$
\tilde g_i^*\gtrsim 0\iff\tilde g_i^*(0)=1-\frac{\alpha^{*T}_{ik}G_i}{c^{*T}_{ik}G_i}\le 1-h,\quad\alpha^{*T}_{ik}G_i\ge 0,
$$

where $G_i>0$; then

$$
c^{*T}_{0k}G_i-\alpha^{*T}_{0k}G_i\le(1-h)c^{*T}_{0k}G_i\Longrightarrow(\alpha^{*T}_{0k}-hc^{*T}_{0k})G_i\ge 0,
$$
$$
\vdots
$$
$$
c^{*T}_{ik}G_i-\alpha^{*T}_{ik}G_i\le(1-h)c^{*T}_{ik}G_i\Longrightarrow(\alpha^{*T}_{ik}-hc^{*T}_{ik})G_i\ge 0.
$$

Hence the theorem holds.

Theorem 7.5.3.

$$
(7.5.1)\iff
\begin{aligned}
\max\ & h\\
\text{s.t.}\ & (\alpha^*_{ik}-hc^*_{ik})^TG_i(x)\ge 0,\ h\in[0,1]\ (1\le k\le J_i,\ 0\le i\le p),\\
& x>0,
\end{aligned}\tag{7.5.3}
$$

where $(\alpha^*_{ik}-hc^*_{ik})^TG_i(x)=g_i^*(x,h)$.

Proof: According to Refs. [Cao87a] and [Cao87b], what we want is to let

$$
\max_x\mu_{\tilde D}(x)=\max_x\min\Bigl\{\mu_{\tilde g_0^*}(x),\ \min_{1\le i\le p}\mu_{\tilde g_i^*}(x)\Bigr\}.
$$

Here, $\mu_{\tilde g_0^*}(x)$ and $\mu_{\tilde g_i^*}(x)$ represent the fuzzy objective and fuzzy constraint functions of (7.5.1), respectively; this is equivalent to making the height of the membership intersection between objective and constraints as high as possible. Therefore, the theorem is proved from Theorem 7.5.2.

Theorem 7.5.4. Let $X_h$ be the feasible solution set of (7.5.2) at level $h$. Then $h_1<h_2\Longrightarrow X^*_{h_1}\supset X^*_{h_2}$.

The theorem is proved by means of (7.5.2) without difficulty. According to this theorem, we can choose a better constraint under the level $h+r$ ($r>0$ a small increment) by means of (7.5.2); we may suppose it to be the $i$-th constraint, whose left-hand side is regarded as a new objective function of the problem, such that (7.5.3) can be changed into finding

$$
\begin{aligned}
\max\ & \{\alpha^*_{i0}G_0(x)+\alpha^*_{i1}G_1(x)+\cdots+\alpha^*_{iJ_0}G_{J_0}(x)\}\\
\text{s.t.}\ & (\alpha^*_{ik}-(h+r)c^*_{ik})^TG_i(x)\ge 0,\ h\in(0,1)\ (1\le k\le J_i,\ 0\le i\le p),\\
& x>0,
\end{aligned}
$$

whose solution $x^*$ denotes an approximate solution to (7.5.3).

7.5.3 Algorithm and Numerical Example

Based on the theory mentioned above, we build the algorithms for (7.5.1). Because (7.5.1) $\iff$ (7.5.3), we have the following.

Algorithm I. Choose the $i$-th constraint inequality in (7.5.2), solve for $h$ and substitute it into the objective function and the remaining constraints in (7.5.3), obtaining a determined geometric programming. Then find its optimal solution by a direct algorithm in Refs. [Cao87a], [Cao87b] and [Cao93a].

Algorithm II. Turn (7.5.2) into (7.5.3) and write down the dual form of (7.5.3); solve the optimal solution of its dual problem by a dual algorithm in Refs. [Cao89a] and [Cao93a], such that we get an optimal solution to (7.5.2).

Algorithm III. [Cao92a][RT91] We have the following.
1° Define the lower and the upper bounds for $h$; we suppose $h_0^-=0$, $h_0^+=1$ for $\theta=0$ in (7.5.3).
2° Fix $h_{\theta+1}$ and let $h_{\theta+1}=\text{small end}+(\text{big end}-\text{small end})\times 0.618$.

The small and big ends mean the left and right endpoint values of the interval we refer to. If $|h^+_\theta-h^-_\theta|<\varepsilon$ ($\varepsilon$ a sufficiently small positive number), then we take $h^*=h_{\theta+1}$ and stop; otherwise, we go on to 3°.
3° If there exists a feasible solution set $X$ for $h=h_{\theta+1}$, then move ahead to 4°. Otherwise, go back to 2°, and let $h^+_{\theta+1}=h_\theta$, $h^-_{\theta+1}=h^-_\theta$.
4° Let $x^*\in X$. We define

$$
\bar h_n=\min\{g_0^*(x,h),\ \max_i g_i^*(x,h)\},
$$

take $h^-_{\theta+1}=\bar h_n$, $h^+_{\theta+1}=h_\theta$, and turn back to 2°. Continuing in this way, we can find an approximate optimal solution to (7.5.3). It is easy to compose an approximate fuzzy optimal value for (7.5.1) after the three algorithms mentioned above converge to an optimal solution of (7.5.3).
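Algorithm III is essentially a golden-section (0.618) search on the level $h$. A minimal sketch follows, with a stand-in feasibility oracle (the single inequality $a-hc\ge 0$ with assumed data $a=0.7$, $c=1$) in place of the real test on the constraint system (7.5.2):

```python
def feasible(h, a=0.7, c=1.0):
    # hypothetical feasibility oracle standing in for (7.5.2):
    # (a - h*c) >= 0, so levels up to a/c = 0.7 are feasible
    return a - h * c >= 0

def golden_level_search(eps=1e-6):
    lo, hi = 0.0, 1.0                  # step 1: bounds for the level h
    while hi - lo >= eps:              # step 2: stop when the interval is tiny
        h = lo + (hi - lo) * 0.618     # trial level at the 0.618 point
        if feasible(h):                # step 3/4: raise the lower bound
            lo = h
        else:                          # infeasible: shrink from the right
            hi = h
    return lo

h_star = golden_level_search()
```

Because feasibility is monotone in $h$ here, the search converges to the largest feasible level.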

Example 7.5.1: Solve the fuzzy posynomial geometric programming

$$
\begin{aligned}
\widetilde{\min}\ & \widetilde{37}x_1^{-1}x_2^{-1}x_3^{-1}+\widetilde{38.5}x_2^{-1}x_3^{-1}\\
\text{s.t.}\ & \widetilde{1.5}x_1x_3+\widetilde{0.9}x_1x_2\lesssim\widetilde{4.5},\\
& x_1,x_2,x_3>0,
\end{aligned}
$$

where $\widetilde{37}=(37,6)$, $\widetilde{38.5}=(38.5,3)$, $\widetilde{1.5}=(1.5,1)$, $\widetilde{0.9}=(0.9,0.2)$, $\widetilde{4.5}=(4.5,1)$, and suppose the expected objective value to be $\widetilde{64}=(64,8)$.

Solution: Turn the objective and constraint of the problem into

$$
\begin{aligned}
& \widetilde{64}-\widetilde{37}x_1^{-1}x_2^{-1}x_3^{-1}-\widetilde{38.5}x_2^{-1}x_3^{-1}\gtrsim 0,\\
& \widetilde{4.5}-\widetilde{1.5}x_1x_3-\widetilde{0.9}x_1x_2\gtrsim 0,\\
& x_1,x_2,x_3>0;
\end{aligned}\tag{7.5.4}
$$

here, we suppose the fuzzy sets $\tilde c_i$ $(i=0,1)$ to be

$$
\tilde c_0=\{\alpha_0^*=(64,-37,-38.5)^T,\ c_0^*=(8,6,4)^T\},\qquad
\tilde c_1=\{\alpha_1^*=(4.5,-1.5,-0.9)^T,\ c_1^*=(1,1,0.2)^T\}.
$$

According to Formula (7.5.3) in Theorem 7.5.3, (7.5.4) can be changed into

$$
\begin{aligned}
\max\ & h\\
\text{s.t.}\ & 64-8h+(-37-6h)x_1^{-1}x_2^{-1}x_3^{-1}+(-38.5-3h)x_2^{-1}x_3^{-1}\ge 0,\\
& 4.5-h+(-1.5-h)x_1x_3+(-0.9-0.2h)x_1x_2\ge 0,\\
& x_1,x_2,x_3>0,\ h\in[0,1],
\end{aligned}
$$

and different optimal solutions can be obtained for different levels $h$. A decision maker may select $k$ levels and compare the $k$ groups of optimal solutions obtained, among which the best is the most satisfactory solution. In the example, if we choose $h=0.5$, we can obtain, by applying a dual algorithm, a unique feasible solution meaning a dual optimal solution

$$
W^*=(w_{01},w_{02},w_{11},w_{12};h)^T=\Bigl(\frac23,\frac13,\frac13,\frac13;0.5\Bigr)^T,
$$

such that a unique feasible solution, i.e., an optimal solution, can be obtained corresponding to the primal problem:

$$
X^*=(x_1,x_2,x_3;h)^T=\Bigl(2,1,\frac12;\frac12\Bigr)^T,
$$

and the optimal value is 60.

7.5.4 Extension

The geometric programming with fuzzy parametric $(\alpha,c)$ coefficients can be extended to

$$
\begin{aligned}
\min\ & \sum_{k=1}^{J_0}\tilde c_{0k}\prod_{l=1}^{m}x_l^{\tilde\gamma_{0kl}}\\
\text{s.t.}\ & \sum_{k=1}^{J_i}\tilde c_{ik}\prod_{l=1}^{m}x_l^{\tilde\gamma_{ikl}}\otimes\tilde 1\ (1\le i\le p),\\
& x>0;
\end{aligned}\tag{7.5.5}
$$

then it can be changed into a geometric programming with $h$ and $\beta$ as parameters, as follows. If "$\otimes$" is taken to be "$\le$", we have

$$
\begin{aligned}
\min\ & g_0(x,h,\beta)\\
\text{s.t.}\ & g_i(x,h,\beta)\le 1\ (1\le i\le p),\\
& x>0,\ h,\beta\in[0,1].
\end{aligned}
$$

If "$\otimes$" is taken to be "$\le$" for $1\le i\le p'$ and to be "$\ge$" for $p'+1\le i\le p$, we have

$$
\begin{aligned}
\min\ & g_0(x,h,\beta)\\
\text{s.t.}\ & g_i(x,h,\beta)\le 1\ (1\le i\le p'),\\
& g_i(x,h,\beta)\ge 1\ (p'+1\le i\le p),\\
& x>0,\ h,\beta\in[0,1],
\end{aligned}
$$

where $g_i(x,h,\beta)=\sum_{k=1}^{J_i}\tilde c_{ik}^{-1}(h)\prod_{l=1}^{m}x_l^{\tilde\gamma_{ikl}^{-1}(\beta)}$ $(0\le i\le p)$. The most satisfactory solution can be found by the methods mentioned above.

7.5.5 Conclusion

A series of results can be concluded from the discussion above.
i) Any fuzzy geometric programming (7.5.5) with $(\alpha,c)$ coefficients can be completely turned into an ordinary geometric programming (7.5.3) with a parameter $h$.
ii) Programming problem (7.5.5) has the same degree of difficulty as (7.5.3).
iii) When an exponent in (7.5.5) stands for a fuzzy number, it can be handled as in formula (1.5.3) of Section 1.5.3 in Chapter 1.


7.6


Geometric Programming with L-R Coeﬃcients

Consider a posynomial geometric programming like (7.5.1) [Cao94a]:

$$
\begin{aligned}
\widetilde{\min}\ & \tilde g_0(x)\\
\text{s.t.}\ & \tilde g_i(x)\lesssim\tilde 1\ (1\le i\le p),\\
& x>0,
\end{aligned}\tag{7.6.1}
$$

where $x=(x_1,x_2,\cdots,x_m)^T$ is an $m$-dimensional variable vector,

$$
\tilde g_i(x)=\sum_{k=1}^{J_i}\tilde c_{ik}\prod_{l=1}^{m}x_l^{\gamma_{ikl}}
=\Bigl(\sum_{k=1}^{J_i}c_{ik}\prod_{l=1}^{m}x_l^{\gamma_{ikl}},\ \sum_{k=1}^{J_i}\underline c_{ik}\prod_{l=1}^{m}x_l^{\gamma_{ikl}},\ \sum_{k=1}^{J_i}\bar c_{ik}\prod_{l=1}^{m}x_l^{\gamma_{ikl}}\Bigr)_{LR}
$$

$(0\le i\le p)$ is a posynomial; $\tilde 1=(1,1,1)_{LR}$ and $\tilde c_{ik}=(c_{ik},\underline c_{ik},\bar c_{ik})_{LR}$ are all L-R fuzzy numbers, and $\gamma_{ikl}$ is an arbitrary real number. We call (7.6.1) a geometric programming with L-R fuzzy coefficients.

7.6.1 Properties

Definition 7.6.1. Define $\widetilde{\log}\,c=(\log c,\log\underline c,\log\bar c)_{LR}$ and $e^{\tilde c}=(e^c,e^{\underline c},e^{\bar c})_{LR}$ as an L-R logarithm and an L-R exponent, respectively, and we can define $\min\tilde f\iff(\min f,\max\underline f,\min\bar f)$ (resp. $\max\tilde f\iff(\max f,\min\underline f,\max\bar f)$) and $\tilde f\lesssim\tilde b\iff f\le b,\ \underline f\ge\underline b,\ \bar f\le\bar b$.

Because $\widetilde{\min}\,\tilde g_0(x)$ is equivalent to $\min g_0(x)$, $\max\underline g_0(x)$ and $\min\bar g_0(x)$, while $\tilde g_i(x)\lesssim\tilde 1$ is equivalent to $g_i(x)\le 1$, $\underline g_i(x)\ge 1$ and $\bar g_i(x)\le 1$, and noting $\min f=\max(-f)$, the nonfuzzification form of (7.6.1) means

$$
\begin{aligned}
\min\ & g_0(x)=\sum_{k=1}^{J_0}c_{0k}\prod_{l=1}^{m}x_l^{\gamma_{0kl}},\\
\max\ & \underline g_0(x)=\sum_{k=1}^{J_0}\underline c_{0k}\prod_{l=1}^{m}x_l^{\gamma_{0kl}},\\
\min\ & \bar g_0(x)=\sum_{k=1}^{J_0}\bar c_{0k}\prod_{l=1}^{m}x_l^{\gamma_{0kl}},\\
\text{s.t.}\ & g_i(x)\le 1,\ \underline g_i(x)\ge 1,\ \bar g_i(x)\le 1\ (1\le i\le p),\\
& x>0.
\end{aligned}
$$

Let $x_l=e^{z_l}$ $(1\le l\le m)$. Then $\tilde g_i(x)$ is deformed into

$$
\tilde G_i(z)=\sum_{k=1}^{J_i}\tilde c_{ik}\exp\Bigl(\sum_{l=1}^{m}\gamma_{ikl}z_l\Bigr)\ (0\le i\le p),
$$

such that (7.6.1) can be changed into

$$
\begin{aligned}
\min\ & \tilde G_0(z)\\
\text{s.t.}\ & \tilde G_i(z)\lesssim\tilde 1\ (1\le i\le p).
\end{aligned}\tag{7.6.2}
$$

It is easy to prove the following theorems and corollaries from the definition of L-R numbers.

Theorem 7.6.1. $\tilde G_i(z)$ serves as a fuzzy convex function for all $i$ $(0\le i\le p)$, so the deformed posynomial geometric programming (7.6.2) with L-R coefficients is a fuzzy convex programming, and its fuzzy local minimum solution is its fuzzy global minimum one.

Corollary 7.6.1. Any strict fuzzy local minimum solution to (7.6.2) is its fuzzy global minimum one.

Theorem 7.6.2. Let (7.6.2) be a strongly fuzzy convex programming problem. Then its fuzzy local minimum solution is its unique fuzzy global minimum one.

Theorem 7.6.3. Given $\tilde c_k>\tilde 0$ $(1\le k\le J)$,

$$
\log G(z)=\log\sum_{k=1}^{J}\tilde c_k\exp\Bigl\{\sum_{l=1}^{m}\gamma_{kl}z_l\Bigr\}
$$

is a fuzzy convex function of $z$.

Proof: From Definition 7.6.1 and Theorem 7.6.1, we can prove the result as we did for Theorem 2.3 in Refs. [Cao87a] and [Cao87b].

7.6.2 Fuzzy Model

Suppose that only the constraints of (7.6.1) have L-R number coefficients. Obviously, (7.6.1) is equivalent to

$$
\begin{aligned}
\min\ & x_0\\
\text{s.t.}\ & x_0^{-1}g_0(x)\le 1,\\
& \tilde g_i(x)\lesssim\tilde 1\ (1\le i\le p),\\
& x_0>0,\ x>0,
\end{aligned}
$$

which still represents a posynomial geometric programming containing L-R coefficients, with a very simple objective function. Now we may suppose $g_0(x)=x_1$. Practically, we can estimate the range of the fuzzy optimal solutions, such that we can consider the fuzzy posynomial geometric programming with variables subject to the lower and upper bounds below:

$$
\begin{aligned}
\min\ & x_1\\
\text{s.t.}\ & \tilde g_i(x)\lesssim\tilde 1\ (1\le i\le p),\\
& 0<x^L\le x\le x^U.
\end{aligned}\tag{7.6.3}
$$

Let $\varepsilon_{ik}=\dfrac{c^*_{ik}\prod_{l=1}^{m}(x^0_l)^{\gamma_{ikl}}}{g^*_i(x^0)}$ $(1\le k\le J_i;\ 1\le i\le p)$ for any $x^0>0$. Then $\sum_{k=1}^{J_i}\varepsilon_{ik}=1$, where

$$
c^*_{ik}=c_{ik}+\frac{\bar c_{ik}+\underline c_{ik}}{2},\qquad
g^*_i(x^0)=g_i(x^0)+\frac{\underline g_i(x^0)+\bar g_i(x^0)}{2},
$$

such that we have the following lemma.

Lemma 7.6.1. Let $\tilde c^*_i>\tilde 0$, $x_l>0$, $\gamma^*_{il}>0$. Then

$$
\tilde g_i^*(x,x^0)=\tilde c^*_i\prod_{l=1}^{m}x_l^{\gamma^*_{il}}\le\tilde c^*_i\sum_{l=1}^{m}\gamma^*_{il}x_l=\tilde g_i^*(x),\tag{7.6.4}
$$

where

$$
\tilde c^*_i=\prod_{k=1}^{J_i}\Bigl(\frac{\tilde c^*_{ik}}{\varepsilon_{ik}}\Bigr)^{\varepsilon_{ik}},\quad
\gamma^*_{il}=\sum_{k=1}^{J_i}\gamma_{ikl}\varepsilon_{ik},\quad
\sum_{k=1}^{J_i}\varepsilon_{ik}=1\ (0\le i\le p,\ 1\le l\le m).
$$

Proof: Since

$$
\tilde c^*_i\prod_{l=1}^{m}x_l^{\gamma^*_{il}}-\tilde c^*_i\sum_{l=1}^{m}\gamma^*_{il}x_l=\tilde c^*_i\Bigl(\prod_{l=1}^{m}x_l^{\gamma^*_{il}}-\sum_{l=1}^{m}\gamma^*_{il}x_l\Bigr),
$$

and when $\gamma^*_{il}>0$, $x_l>0$, by the ordinary geometric inequality

$$
\prod_{l=1}^{m}x_l^{\gamma^*_{il}}-\sum_{l=1}^{m}\gamma^*_{il}x_l\le 0
$$

holds. But $\tilde c^*_i>0$, and from the definition of L-R numbers and their operation properties, we have

$$
\tilde c^*_i\Bigl(\prod_{l=1}^{m}x_l^{\gamma^*_{il}}-\sum_{l=1}^{m}\gamma^*_{il}x_l\Bigr)\le 0.
$$

Therefore, (7.6.4) holds.

Since (7.6.4) holds, (7.6.3) is equivalent to a monomial posynomial geometric programming with L-R coefficients whose variables are limited by lower and upper bounds:

$$
\begin{aligned}
\min\ & x_1\\
\text{s.t.}\ & x\in\tilde F^0,\quad
\tilde F^0=\{x\mid\tilde g_i^*(x,x^0)\lesssim\tilde 1\ (1\le i\le p),\ x^L\le x\le x^U\}.
\end{aligned}\tag{7.6.5}
$$
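The inequality behind Lemma 7.6.1 is the classical posynomial-to-monomial condensation (weighted arithmetic-geometric mean inequality): for weights $\varepsilon_k>0$ summing to 1, a posynomial $\sum_k u_k(x)$ dominates the monomial $\prod_k(u_k(x)/\varepsilon_k)^{\varepsilon_k}$. This can be checked numerically; the two-term posynomial and the weights below are illustrative assumptions, not taken from the book.

```python
def posynomial(x1, x2):
    # illustrative two-term posynomial u1 + u2
    return 2.0 * x1 * x2 + 3.0 * x1**-1 * x2**0.5

def condensed(x1, x2, eps=(0.5, 0.5)):
    # monomial condensation prod_k (u_k / eps_k)**eps_k with equal weights
    u1 = 2.0 * x1 * x2
    u2 = 3.0 * x1**-1 * x2**0.5
    return (u1 / eps[0]) ** eps[0] * (u2 / eps[1]) ** eps[1]

# the monomial never exceeds the posynomial ...
for x1, x2 in [(0.5, 1.0), (1.0, 2.0), (3.0, 0.25)]:
    assert condensed(x1, x2) <= posynomial(x1, x2) + 1e-12
# ... with equality exactly where u1/eps1 == u2/eps2 (here x1 = sqrt(1.5), x2 = 1)
```

With the weights chosen at a fixed point $x^0$ as in the definition of $\varepsilon_{ik}$, the bound is tight at $x^0$, which is what makes the condensed monomial a useful local surrogate.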

Theorem 7.6.4. If there exists a minimum solution $x^k$ to (7.6.5) for a determinate $k$, then $x^k$ must denote a fuzzy optimal solution to (7.6.3) after finitely many steps; otherwise, any limit point of $\{x^k\}$ is a fuzzy optimal solution to (7.6.3).

Proof: The fuzzy posynomial geometric programming (7.6.3) is equivalent to

$$
\begin{aligned}
\min\ & x_1\\
\text{s.t.}\ & x\in F_j\ (j=1,2,3),
\end{aligned}\tag{7.6.6}
$$

where

$$
\begin{aligned}
F_1&=\{x\mid g_i(x)\le 1\ (1\le i\le p),\ 0<x^L\le x\le x^U\},\\
F_2&=\{x\mid\underline g_i(x)\ge 1\ (1\le i\le p),\ 0<x^L\le x\le x^U\},\\
F_3&=\{x\mid\bar g_i(x)\le 1\ (1\le i\le p),\ 0<x^L\le x\le x^U\}.
\end{aligned}
$$

But (7.6.6) is equivalent, by means of (7.6.4) [WY82], to a monomial fuzzy posynomial geometric programming:

$$
\begin{aligned}
\min\ & x_1\\
\text{s.t.}\ & x\in F_j^0\ (j=1,2,3),
\end{aligned}
$$

$$
\begin{aligned}
F_1^0&=\{x\mid g_i^*(x,x^0)\le 1\ (1\le i\le p),\ 0<x^L\le x\le x^U\},\\
F_2^0&=\{x\mid\underline g_i^*(x,x^0)\ge 1\ (1\le i\le p),\ 0<x^L\le x\le x^U\},\\
F_3^0&=\{x\mid\bar g_i^*(x,x^0)\le 1\ (1\le i\le p),\ 0<x^L\le x\le x^U\},
\end{aligned}
$$

and its optimal solution $x^k$ must be an optimal solution to (7.6.6); otherwise, any limit point of the sequence $\{x^k\}$ must be an optimal solution to (7.6.6) [Shi81], and the theorem is true.

This indicates that any multinomial posynomial geometric programming with L-R coefficients can be turned into a monomial one. Now we consider a monomial posynomial geometric programming with only the constraints containing L-R coefficients:

$$
\begin{aligned}
\min\ & g_0(x)=c_0\prod_{l=1}^{m}x_l^{\gamma_{0l}}\\
\text{s.t.}\ & \tilde g_i(x)=\tilde c_i\prod_{l=1}^{m}x_l^{\gamma_{il}}\lesssim\tilde 1\ (1\le i\le p),\\
& x>0.
\end{aligned}\tag{7.6.7}
$$

Substitute $z_l=\log x_l$, such that (7.6.7) is turned into

$$
\begin{aligned}
\min\ & \sum_{l=1}^{m}\gamma_{0l}z_l\\
\text{s.t.}\ & \sum_{l=1}^{m}\gamma_{il}z_l+\log\tilde c_i\lesssim 0\ (1\le i\le p).
\end{aligned}\tag{7.6.8}
$$

Theorem 7.6.5. The fuzzy optimal solution of programming (7.6.7) is also that of (7.6.8).

Proof: Because

$$
\tilde c_i\prod_{l=1}^{m}x_l^{\gamma_{il}}\lesssim\tilde 1\iff
c_i\prod_{l=1}^{m}x_l^{\gamma_{il}}\le 1,\quad
\underline c_i\prod_{l=1}^{m}x_l^{\gamma_{il}}\ge 1,\quad
\bar c_i\prod_{l=1}^{m}x_l^{\gamma_{il}}\le 1,
$$

taking logarithms of both sides of each of the formulas above, we have

$$
\sum_{l=1}^{m}\gamma_{il}z_l+\log c_i\le 0,\quad
\sum_{l=1}^{m}\gamma_{il}z_l+\log\underline c_i\ge 0,\quad
\sum_{l=1}^{m}\gamma_{il}z_l+\log\bar c_i\le 0,
$$

which is equivalent to the constraint of (7.6.8); then the theorem holds.

7.6.3 Numerical Example

Example 7.6.1: Suppose we make a box of about 400 m³, having a bottom but no cover, which is used to transport chemical raw materials. The bottom and both sides of the box are made of 4 m² of special material with negligible cost. The material of both ends costs 20/m², and the transporting expense of each box is a little more than 0.1. How much does it cost to transport about 400 m³ of chemical raw materials?

Solution: Let $x_1,x_2,x_3$ represent the length, width and height of the box, respectively. Then its cost is equal to the total cost of transportation and material for the sides of the box, i.e.,

$$
\tilde g_0(x)=\widetilde{40}x_1^{-1}x_2^{-1}x_3^{-1}+40x_2^{-1}x_3^{-1}.
$$

Again, because the area sum of the bottom and both sides is less than 4 m², we have

$$
g_1(x)=2x_1x_3+x_1x_2\le 4.
$$

This problem can be formulated as the fuzzy posynomial geometric programming

$$
\begin{aligned}
\widetilde{\min}\ & (\widetilde{40}x_1^{-1}x_2^{-1}x_3^{-1}+40x_2^{-1}x_3^{-1})\\
\text{s.t.}\ & 2x_1x_3+x_1x_2\le 4,\\
& x_1,x_2,x_3>0,
\end{aligned}\tag{7.6.9}
$$

where $\widetilde{40}=(40,1,9)$. Let $w_{01}=\frac23$, $w_{02}=\frac13$, $w_{11}=w_{12}=\frac12$. Then (7.6.9) can be turned into the next form by means of (7.6.4):

$$
\begin{aligned}
\widetilde{\min}\ & \Bigl(\frac{\widetilde{40}x_1^{-1}x_2^{-1}x_3^{-1}}{2/3}\Bigr)^{2/3}\Bigl(\frac{40x_2^{-1}x_3^{-1}}{1/3}\Bigr)^{1/3}\\
\text{s.t.}\ & \Bigl(\frac{2x_1x_3}{1/2}\Bigr)^{1/2}\Bigl(\frac{x_1x_2}{1/2}\Bigr)^{1/2}\le 4,\\
& x_1,x_2,x_3>0,
\end{aligned}
$$

i.e.,

$$
\begin{aligned}
\widetilde{\min}\ & (\widetilde{40})^{2/3}(270)^{1/3}x_1^{-2/3}x_2^{-1/3}x_3^{-1/3}\\
\text{s.t.}\ & x_1x_2^{1/2}x_3^{1/2}\le\sqrt2,\\
& x_1,x_2,x_3>0,
\end{aligned}
$$

which, with $z_i=\log x_i$, is equivalent to

$$
\begin{aligned}
\widetilde{\min}\ & \Bigl\{\frac23\log\widetilde{40}+\frac13\log 270-\frac23z_1-\frac13z_2-\frac13z_3\Bigr\}\\
\text{s.t.}\ & z_1+\frac12z_2+\frac12z_3\le\log\sqrt2.
\end{aligned}
$$

We obtain $z_1=\log\sqrt2$, $z_2=z_3=0$, or $z_1=\log2$, $z_2=0$, $z_3=\log\frac12$; then $x_1=\sqrt2$, $x_2=x_3=1$, or $x_1=2$, $x_2=1$, $x_3=\frac12$ are optimal solutions, and the fuzzy optimal value is $\tilde g_0^*=(\widetilde{40})^{2/3}\cdot(270)^{1/3}\cdot 0.7937$.

If we change $\widetilde{40}$ into 39, then $g_0^*=58.996$, and if we change $\widetilde{40}$ into 49, then $g_0^*=68.692$, so it will cost between 58.996 and 68.692 to transport the roughly 400 m³ of materials. If the weight $\varepsilon$ is properly chosen, a superior budget expense is obtained.
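The two endpoints of the fuzzy optimal value can be reproduced directly from the monomial objective $g_0^*=c^{2/3}\cdot 270^{1/3}\cdot 2^{-1/3}$ (the factor $0.7937\approx 2^{-1/3}$), evaluated at the left and right ends $40-1=39$ and $40+9=49$ of the L-R coefficient $(40,1,9)$:

```python
def g0_star(c):
    # monomial optimal value of Example 7.6.1 as a function of the coefficient c
    return c ** (2.0 / 3.0) * 270.0 ** (1.0 / 3.0) * 2.0 ** (-1.0 / 3.0)

lo, hi = g0_star(39.0), g0_star(49.0)   # the interval quoted in the text
```

This matches the quoted budget interval of roughly 58.996 to 68.692.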

7.7

Geometric Programming with Flat Coeﬃcients

Consider a general fuzzy geometric programming [Cao95a][Cao00a]:

$$
\begin{aligned}
\widetilde{\min}\ & \tilde g_0(x)\\
\text{s.t.}\ & \tilde g_i(x)\otimes\tilde 1\ (1\le i\le p),\\
& x>0,
\end{aligned}\tag{7.7.1}
$$

where $\tilde g_i(x)=\sum_{k=1}^{J_i}\tilde c_{ik}\prod_{l=1}^{m}x_l^{\gamma_{ikl}}$ $(1\le i\le p)$ is a flat function of $x$, $x=(x_1,x_2,\cdots,x_m)^T$ is an $m$-dimensional variable vector, $\tilde c_{ik}>0$ and $\tilde 1$ are flat numbers, and $\gamma_{ikl}$ is an arbitrary real number. The sign "$\otimes$" is the aggregation of "$\lesssim$" or "$\gtrsim$": it is taken to be "$\lesssim$" for $1\le i\le p'$ and to be "$\gtrsim$" for $p'+1\le i\le p$, respectively. We call (7.7.1) a fuzzy geometric programming with flat coefficients.

7.7.1

Change of Fuzzy Objective Function

In (7.7.1), the objective function is denoted by

$$
\begin{aligned}
\tilde g_0(x)&=\sum_{k=1}^{J_0}\tilde c_{0k}\prod_{l=1}^{m}x_l^{\gamma_{0kl}}
=\bigl(g_0^-(x),\ g_0^+(x),\ \sigma_{g_0(x)}^-,\ \sigma_{g_0(x)}^+\bigr)\\
&=\Bigl(\sum_{k=1}^{J_0}c_{0k}^-\prod_{l=1}^{m}x_l^{\gamma_{0kl}},\ \sum_{k=1}^{J_0}c_{0k}^+\prod_{l=1}^{m}x_l^{\gamma_{0kl}},\ \sum_{k=1}^{J_0}\sigma_{c_{0k}}^-\prod_{l=1}^{m}x_l^{\gamma_{0kl}},\ \sum_{k=1}^{J_0}\sigma_{0k}^+\prod_{l=1}^{m}x_l^{\gamma_{0kl}}\Bigr).
\end{aligned}\tag{7.7.2}
$$

An expected object is written as $\tilde F=(F,F,0,\sigma_F^+)$; the intersection of $\tilde g_0(x)$ with the expected objective function $\tilde F$ is pictured in the following figure:

Fig. 7.7.1. Relationship between $\tilde g_0(x)$ and $\tilde F$


Theorem 7.7.1. Given that $\tilde g_0(x)$ is as in (7.7.2) and intersects an expected object $\tilde F=(F,F,0,\sigma_F^+)$, then $\widetilde{\min}\,\tilde g_0(x)$ is equivalent to

$$
\min\ \frac{\sum_{k=1}^{J_0}c_{0k}^-\prod_{l=1}^{m}x_l^{\gamma_{0kl}}-F}{\sum_{k=1}^{J_0}\sigma_{c_{0k}}^-\prod_{l=1}^{m}x_l^{\gamma_{0kl}}+\sigma_F^+}.\tag{7.7.3}
$$

Proof: Let the equations of AB and CD (shown in Figure 7.7.1) be

$$
h_0-1=-\frac{1}{\sigma_F^+}(x-F)\tag{7.7.4}
$$

and

$$
h_0-1=\frac{x-\sum_{k=1}^{J_0}c_{0k}^-\prod_{l=1}^{m}x_l^{\gamma_{0kl}}}{\sum_{k=1}^{J_0}\sigma_{0k}^-\prod_{l=1}^{m}x_l^{\gamma_{0kl}}},\tag{7.7.5}
$$

respectively. Solving (7.7.4) and (7.7.5) together gives

$$
h_k=1-\frac{\sum_{k=1}^{J_0}c_{0k}^-\prod_{l=1}^{m}x_l^{\gamma_{0kl}}-F}{\sum_{k=1}^{J_0}\sigma_{c_{0k}}^-\prod_{l=1}^{m}x_l^{\gamma_{0kl}}+\sigma_F^+}.
$$

Since

$$
\mathrm{PD}(\tilde F,\tilde g_0)=\max_{\{(x,y)\mid x>y\}}\min\{\tilde F(x),\tilde g_0(y)\}=\min\{1,\ \mathrm{hgt}(\inf\tilde g_0\cap\sup\tilde F)\},
$$

where $\mathrm{hgt}(\inf\tilde g_0\cap\sup\tilde F)$ stands for the nonnegative height of the intersection of the decreasing right-end side of $\mu_{\tilde F}(x)$ and the increasing left-end side of $\tilde g_0(x)$, we have $\mathrm{PD}(\tilde F,\tilde g_0)=h_0$. According to the judgment criterion, $\widetilde{\min}\,\tilde g_0(x)$ means making $h_k$ as high as possible, i.e., $\max h_k$, which is equivalent to the truth of (7.7.3).

7.7.2 Determination of Fuzzy Constraints

Given $\tilde g_i(x)=(g_i^-(x),g_i^+(x),\sigma_{g_i(x)}^-,\sigma_{g_i(x)}^+)$ and $\tilde 1=(1^-,1^+,\sigma_1^-,\sigma_1^+)$, according to the method in Refs. [Dia87], [Cao89b], [Cao00a] and [RT91], we may prove:

$$
\tilde g_i(x)\subseteq\tilde 1\iff
\begin{cases}
g_i^+(x)\le 1^+,\\
g_i^+(x)+\sigma_{g_i(x)}^+\le 1^++\sigma_1^+,\\
g_i^-(x)-\sigma_{g_i(x)}^-\ge 1^--\sigma_1^-,\\
g_i^-(x)\ge 1^-.
\end{cases}\tag{7.7.6}
$$
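The four containment conditions in (7.7.6) are easy to check mechanically once a flat number is stored as a tuple $(g^-, g^+, \sigma^-, \sigma^+)$. The following sketch does exactly that; the sample numbers are illustrative assumptions, not from the text:

```python
def contained(g, one):
    """The four flat-number containment conditions of (7.7.6)."""
    glo, ghi, gslo, gshi = g        # (g-, g+, sigma-, sigma+)
    olo, ohi, oslo, oshi = one      # (1-, 1+, sigma1-, sigma1+)
    return (ghi <= ohi
            and ghi + gshi <= ohi + oshi
            and glo - gslo >= olo - oslo
            and glo >= olo)

one = (0.9, 1.1, 0.1, 0.1)                            # a flat number playing 1~
inside = contained((0.95, 1.05, 0.05, 0.05), one)     # strictly inside: True
outside = contained((0.95, 1.2, 0.05, 0.05), one)     # upper end sticks out: False
```

In the algorithms that follow, a test of this form serves as the feasibility oracle for a candidate $x$.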


Definition 7.7.1. $I(\tilde g_i(x),\alpha)=\{x\mid\mu_{\tilde g_i(x)}\ge\alpha,\ \alpha\in[0,1]\}$ is called an $\alpha$-level set of $\tilde g_i(x)$.

The level set in Definition 7.7.1 denotes an open interval on the real axis, embodied by an initial end $\inf_x I(\tilde g_i(x),\alpha)$ and a final end $\sup_x I(\tilde g_i(x),\alpha)$. By means of the monotonicity of $\mu_{\tilde g_i(x)}$ and $\mu_{\tilde 1}(x)$, (7.7.6) is equivalent to

$$
\tilde g_i(x)\subseteq\tilde 1\iff
\begin{cases}
\sup_x I(\tilde g_i(x),\alpha)\le\sup_x I(\tilde 1,\alpha), & \forall\alpha\in[0,1],\\
\inf_x I(\tilde g_i(x),\alpha)\ge\inf_x I(\tilde 1,\alpha), & \forall\alpha\in[0,1].
\end{cases}
$$

If the height $h_k=\mathrm{hgt}(\inf_x\tilde 1\cap\sup_x\tilde g_i(x))>0$, where the left-end side of $\mu_{\tilde 1}(x)$ increases whereas the right-end side of $\mu_{\tilde g_i(x)}$ decreases, then

$$
\mathrm{hgt}(\inf\tilde 1\cap\sup\tilde g_i(x))=\max\Bigl\{\frac{g_i^+(x)-1^-}{\sigma_{g_i(x)}^++\sigma_1^-}+1,\ 0\Bigr\}
=\begin{cases}
1, & \text{if } g_i^+(x)\ge 1^-,\\
<1, & \text{if } g_i^+(x)<1^-,
\end{cases}\tag{7.7.7}
$$

such that the degree of possibility of $\tilde g_i(x)$ being superior to $\tilde 1$, introduced by Dubois and Prade [DPr80], which represents the fuzzy extension of $\tilde g_i(x)>\tilde 1$, is

$$
\mathrm{PD}(\tilde g_i(x),\tilde 1)=\max_{\{(x,y)\mid x>y\}}\min\{\mu_{\tilde g_i(x)}(x),\mu_{\tilde 1}(y)\}=\min\{1,\ \mathrm{hgt}(\inf\tilde 1\cap\sup\tilde g_i(x))\}.
$$

Definition 7.7.2. Let $\theta\in[0,1]$ be an expected level. Then $\tilde g_i(x)\gtrsim_\theta\tilde 1$ iff $g_i^+(x)\ge 1^--(1-\theta)\sigma_1^-$, and

$$
\tilde g_i(x)\approx_\theta\tilde 1\ \text{iff}\
\begin{cases}
g_i^-(x)-(1-\theta)\sigma_{g_i(x)}^-\le 1^++(1-\theta)\sigma_1^+,\\
g_i^+(x)+(1-\theta)\sigma_{g_i(x)}^+\ge 1^--(1-\theta)\sigma_1^-.
\end{cases}\tag{7.7.8}
$$

Proof: From (7.7.8), (7.7.7) and Dubois' proof [DPr80], we have

$$
\tilde g_i(x)\lesssim_\theta\tilde 1\ \text{iff}\ \frac{1^+-g_i^-(x)}{\sigma_1^++\sigma_{g_i(x)}^-}+1\ge\theta
\ \text{iff}\ g_i^-(x)-(1-\theta)\sigma_{g_i(x)}^-\le 1^++(1-\theta)\sigma_1^+,\tag{7.7.9}
$$

$$
\tilde g_i(x)>_\theta\tilde 1\ \text{iff}\ \frac{g_i^+(x)-1^-}{\sigma_{g_i(x)}^++\sigma_{\tilde 1}^+}+1>\theta
\ \text{iff}\ 1^--(1-\theta)\sigma_1^-<g_i^+(x)+(1-\theta)\sigma_{g_i(x)}^+.\tag{7.7.10}
$$

A. When "$\otimes$" in (7.7.1) is selected as "$\lesssim$" for all $i$, then (7.7.1) is turned into

$$
\begin{aligned}
\min\ & \frac{\sum_{k=1}^{J_0}c_{0k}^-\prod_{l=1}^{m}x_l^{\gamma_{0kl}}-F}{\sum_{k=1}^{J_0}\sigma_{c_{0k}}^-\prod_{l=1}^{m}x_l^{\gamma_{0kl}}+\sigma_F^+}\\
\text{s.t.}\ & \sum_{k=1}^{J_i}c_{ik}^+\prod_{l=1}^{m}x_l^{\gamma_{ikl}}\le 1^++(1-\theta)\sigma_1^+\ (1\le i\le p),\\
& x>0.
\end{aligned}\tag{7.7.11}
$$

But (7.7.11) is equivalent to

$$
\max\ \Bigl\{1-\frac{\sum_{k=1}^{J_0}c_{0k}^-\prod_{l=1}^{m}x_l^{\gamma_{0kl}}-F}{\sum_{k=1}^{J_0}\sigma_{c_{0k}}^-\prod_{l=1}^{m}x_l^{\gamma_{0kl}}+\sigma_F^+}\Bigr\};\tag{7.7.12}
$$

hence (7.7.11) and (7.7.12) are equivalent to

$$
\begin{aligned}
\max\ & \theta\\
\text{s.t.}\ & \sum_{k=1}^{J_0}[c_{0k}^--(1-\theta)\sigma_{c_{0k}}^-]\prod_{l=1}^{m}x_l^{\gamma_{0kl}}\le F+(1-\theta)\sigma_F^+,\\
& \text{(7.7.12)},\ \theta\in[0,1].
\end{aligned}\tag{7.7.13}
$$

B. When "$\otimes$" in (7.7.1) is selected as "$\lesssim$" for $1\le i\le p'$ and "$\gtrsim$" for $p'+1\le i\le p$, then from (7.7.7) and (7.7.8) we know that (7.7.1) iff

$$
\begin{aligned}
\max\ & \mathrm{PD}(\tilde F,\tilde g_0(x))\\
\text{s.t.}\ & \mathrm{PD}(\tilde 1,\tilde g_i(x))\ge\theta\ (1\le i\le p'),\\
& \mathrm{PD}(\tilde g_i(x),\tilde 1)\ge\theta\ (p'+1\le i\le p),\\
& \theta\in[0,1],\ x>0
\end{aligned}
$$

$$
\iff
\begin{aligned}
\min\ & \text{the objective of (7.7.11)}\\
\text{s.t.}\ & \sum_{k=1}^{J_i}[c_{ik}^--(1-\theta)\sigma_{c_{ik}}^-]\prod_{l=1}^{m}x_l^{\gamma_{ikl}}\le 1^++(1-\theta)\sigma_1^+\ (1\le i\le p'),\\
& \sum_{k=1}^{J_i}[c_{ik}^++(1-\theta)\sigma_{c_{ik}}^+]\prod_{l=1}^{m}x_l^{\gamma_{ikl}}\ge 1^--(1-\theta)\sigma_1^-\ (p'+1\le i\le p),\\
& \theta\in[0,1],\ x_1,x_2,\cdots,x_m>0
\end{aligned}\tag{7.7.14}
$$

$$
\iff
\begin{aligned}
\max\ & \theta\\
\text{s.t.}\ & \sum_{k=1}^{J_0}[c_{0k}^--(1-\theta)\sigma_{c_{0k}}^-]\prod_{l=1}^{m}x_l^{\gamma_{0kl}}\le F+(1-\theta)\sigma_F^+,\\
& \text{(7.7.14)}.
\end{aligned}\tag{7.7.15}
$$

Comparing A to B, (7.7.13) contains only $3p'$ constraints more than (7.7.15). Accordingly, we take only (7.7.15) into consideration. In order to handle a negative term in the constraints, we introduce a sign

$$
\delta_{ik}=\begin{cases}\delta_i, & 1\le k\le S_i,\\ -\delta_i, & S_i+1\le k\le J_i,\end{cases}\quad(1\le k\le J_i,\ 0\le i\le p),
$$

where $S_i$ counts all items with the same sign as the constraint-function sign $\delta_i$. If the $i$-th constraint is the one with negative items, the constraint can be uniquely written as

$$
\begin{aligned}
g_i(x)&=\delta_i\Bigl[1^++(1-\theta)\sigma_1^+-\delta_i\sum_{k=1}^{J_i}\delta_{ik}\bigl[c_{ik}^--(1-\theta)\sigma_{c_{ik}}^-\bigr]\prod_{l=1}^{m}x_l^{\gamma_{ikl}}\Bigr]\\
&=\delta_i\bigl[1^++(1-\theta)\sigma_1^+-P_i+N_i\bigr]\ge 0\\
&\iff
\begin{cases}
\delta_i\bigl[1^++(1-\theta)\sigma_1^++x_{i0}^{-1}P_i\bigr]\ge 0,\\
\delta_i\bigl[1^++(1-\theta)\sigma_1^+-x_{i0}^{-1}-x_{i0}^{-1}N_i\bigr]\ge 0,
\end{cases}
\end{aligned}
$$

where

$$
P_i=\sum_{k=1}^{S_i}\bigl[c_{ik}^--(1-\theta)\sigma_{c_{ik}}^-\bigr]\prod_{l=1}^{m}x_l^{\gamma_{ikl}}>0,\qquad
N_i=\sum_{k=S_i+1}^{J_i}\bigl[c_{ik}^--(1-\theta)\sigma_{c_{ik}}^-\bigr]\prod_{l=1}^{m}x_l^{\gamma_{ikl}}>0,
$$

and $x_{i0}$ is a new non-tentative-value variable, such that an inequality-constraint polynomial with arbitrary-sign coefficients can be turned into a monomial one. In (7.7.15), let

$$
b_0=F+(1-\theta)\sigma_F^+>0,\quad b_1=1^++(1-\theta)\sigma_1^+>0,\quad b_2=1^--(1-\theta)\sigma_1^->0.
$$

Then (7.7.15) can be turned into an ordinary reverse posynomial geometric programming

$$
\begin{aligned}
\max\ \theta=\min\ & (-\theta)\\
\text{s.t.}\ & \frac{1}{b_0}\sum_{k=1}^{J_0}\bigl[c_{0k}^--(1-\theta)\sigma_{c_{0k}}^-\bigr]\prod_{l=1}^{m}x_l^{\gamma_{0kl}}\le 1,\\
& \frac{1}{b_1}\sum_{k=1}^{J_i}\bigl[c_{ik}^--(1-\theta)\sigma_{c_{ik}}^-\bigr]\prod_{l=1}^{m}x_l^{\gamma_{ikl}}\le 1\ (1\le i\le p'),\\
& \frac{1}{b_2}\sum_{k=1}^{J_i}\bigl[c_{ik}^++(1-\theta)\sigma_{c_{ik}}^+\bigr]\prod_{l=1}^{m}x_l^{\gamma_{ikl}}\ge 1\ (p'+1\le i\le p),\\
& \theta\in[0,1],\ x_1,x_2,\cdots,x_m>0,
\end{aligned}\tag{7.7.16}
$$

such that we can obtain the following.

Theorem 7.7.3. There exists a fuzzy optimal solution to the fuzzy posynomial geometric programming (7.7.1), which is equivalent to the existence of a parameter optimal solution to the reverse posynomial geometric programming (7.7.16) with parameter $\theta$.

Algorithm

There are several ways to solve (7.7.16); for example, we can turn (7.7.1) into (7.7.16) for solution. But here we only introduce a new algorithm, with steps as follows:
1° Define the lower and the upper bounds for $\theta$; we suppose $\theta_0^-=0$, $\theta_0^+=1$ for $l=0$.

2° Fix $\theta_{l+1}$ and let $\theta_{l+1}=\text{small end}+(\text{big end}-\text{small end})\times 0.618$, where the small and big ends mean the left and right endpoint values of the interval we refer to. If $|\theta_l^+-\theta_l^-|<\varepsilon$ ($\varepsilon$ a sufficiently small positive number), then we take $\theta^*=\theta_{l+1}$ and stop; otherwise, go on to 3°.
3° If there exists a feasible solution set $X$ for $\theta=\theta_{l+1}$, then we move on to 4°; otherwise, turn back to 2°, and let $\theta_{l+1}^+=\theta_l$, $\theta_{l+1}^-=\theta_l^-$.
4° Let $x^*\in X$. We define

$$
\theta_0=\min\Bigl\{\mathrm{PD}(\tilde F,\tilde g_0(x)),\ \min_{1\le i\le p'}\mathrm{PD}(\tilde g_i(x),\tilde 1),\ \min_{p'+1\le i\le p}\mathrm{PD}(\tilde 1,\tilde g_i(x))\Bigr\}.
$$

Then we fix $\theta_{l+1}^-=\theta_0$, $\theta_{l+1}^+=\theta_l^+$ and turn back to 2°. Continuing like this, we can find an approximate optimal solution to (7.7.15), and hence an approximate fuzzy optimal value for (7.7.1). Finally, we point out that after (7.7.1) is turned into (7.7.11), (7.7.12) or (7.7.16), we can solve it by the primal or the dual algorithm.

7.8 Geometric Programming with Fuzzy Variables

7.8.1 Introduction

A posynomial geometric programming whose variables are fuzzy variables of some kind is called a general posynomial geometric programming with fuzzy variables. The geometric programming models with T-fuzzy variables and with trapezoidal fuzzy variables are built in this section, respectively, and efficient algorithms are advanced.

7.8.2 Primal Geometric Programming with T-Fuzzy Variables

The definition of a primal geometric programming with T-fuzzy variables is given first.

Definition 7.8.1. A geometric programming given by T-fuzzy data is said to be a primal geometric programming with T-fuzzy variables; its mathematical formula is

$$
\begin{aligned}
\widetilde{\min}\ & g_0(\tilde x)\\
\text{s.t.}\ & g_i(\tilde x)\lesssim\tilde\sigma_i\ (1\le i\le p),\\
& \tilde x>0,
\end{aligned}\tag{7.8.1}
$$

where $\tilde x=(\tilde x_1,\tilde x_2,\cdots,\tilde x_m)^T$ stands for an $m$-dimensional T-fuzzy variable vector, $\tilde x_i=(x_i,\underline\xi_i,\bar\xi_i)$ is a T-fuzzy variable, $\tilde\sigma_i=\sigma_i\times\tilde 1$ and $\tilde 1=(1,1,1)$ are T-fuzzy numbers [Cao89b,c][Cao90], and

$$
g_i(\tilde x)=\sum_{k=1}^{J_i}v_{ik}(\tilde x)=\sum_{k=1}^{J_i}\sigma_{ik}c_{ik}\prod_{l=1}^{m}\tilde x_l^{\gamma_{ikl}}\ (0\le i\le p)
$$

are fuzzy polynomials of $\tilde x$ (each is a fuzzy signomial function), where $\sigma_i,\sigma_{ik}=\pm 1$ and $\gamma_{ikl}$ is an arbitrary real number [WB67]. When $\tilde\sigma_i$ is taken as $\tilde 1$, (7.8.1) turns into

$$
\begin{aligned}
\min\ & g_0(\tilde x)\\
\text{s.t.}\ & g_i(\tilde x)\lesssim\tilde 1\ (1\le i\le p),\\
& \tilde x>0.
\end{aligned}
$$

Theorem 7.8.1. Let a geometric programming model given by T-fuzzy data $\tilde x$ be denoted as (7.8.1). Then, for a given cone index $J$, it is equivalent to

$$
\begin{aligned}
\min\ & g_0(z(J))\\
\text{s.t.}\ & g_i(z(J))\le\sigma_i\ (1\le i\le p),\\
& z(J)>0,
\end{aligned}\tag{7.8.2}
$$

and an optimal solution depending on the cone index $J$ in (7.8.2) is equivalent to an optimal T-fuzzy solution to (7.8.1), where $g_i(z(J))=\sum_{k=1}^{J_i}\sigma_{ik}c_{ik}\prod_{l=1}^{m}(z_l(J))^{\gamma_{ikl}}$ $(0\le i\le p)$.

Proof: Similarly to the method for the disposal of T-fuzzy data in Section 3.3, under the given cone index $J$, (7.8.1) can be turned into (7.8.2). Therefore, the theorem holds. This shows that a geometric programming with T-fuzzy variables can be changed into an ordinary geometric programming depending on a cone index $J$ for solution.

Numerical Example

Example 7.8.1: Find

$$
\begin{aligned}
\min\ & 2\tilde x_1^2\tilde x_3+\tilde x_2\tilde x_3\\
\text{s.t.}\ & 2\tilde x_1^{-2}+\tilde x_2^{-1}\tilde x_3^{-1}\lesssim\tilde 2,\\
& \tilde x_1,\tilde x_2,\tilde x_3>0,
\end{aligned}\tag{7.8.3}
$$

where $\tilde 2=(2,0,0)$, and the T-fuzzy data of the columns are given as:

$\tilde x_1$: 1. $(x_1,0.4,0.2)$; 2. $(x_1,1,0.7)$; 3. $(x_1,1.2,1.5)$;
$\tilde x_2$: 4. $(x_2,1,0.2)$; 5. $(x_2,1.4,1.2)$; 6. $(x_2,0.5,0.2)$;
$\tilde x_3$: 7. $(x_3,0.8,1)$; 8. $(x_3,1.6,1.2)$; 9. $(x_3,0.2,0.4)$.

The solving process is as follows.
1° Number the data from 1 to 9 and classify them into three types by Definition 3.3.2: I. No. 1, 6, 9; II. No. 2, 5, 8, with $j_2=0$, $j_5=1$, $j_8=0$; III. No. 3, 4, 7, with $j_3=1$, $j_4=0$, $j_7=1$. Here $j_l=1$ stands for odd numbers and $j_l=0$ for even ones.

2° Nonfuzzify:

$$
\tilde x_1\to\frac{(x_1+0.2)+(x_1-1)+(x_1+1.5)}{3}=x_1+0.27,
$$
$$
\tilde x_2\to\frac{(x_2+0.4)+(x_2+1.2)+(x_2-1)}{3}=x_2+0.2,
$$
$$
\tilde x_3\to\frac{(x_3+0.3)+(x_3-1.6)+(x_3+1)}{3}=x_3-0.1.
$$

3° Obtain a programming corresponding to (7.8.3) as follows:

$$
\begin{aligned}
\min\ & g_0(x)=2(x_1+0.27)^2(x_3-0.1)+(x_2+0.2)(x_3-0.1)\\
\text{s.t.}\ & 2(x_1+0.27)^{-2}+(x_2+0.2)^{-1}(x_3-0.1)^{-1}\le 2,\\
& x_1,x_2,x_3>0.
\end{aligned}
$$

Substituting $u_1=x_1+0.27$, $u_2=x_2+0.2$, $u_3=x_3-0.1$, we obtain

$$
\begin{aligned}
\min\ & \{2u_1^2u_3+u_2u_3\}\\
\text{s.t.}\ & 2u_1^{-2}+u_2^{-1}u_3^{-1}\le 2,\\
& u_1,u_2,u_3>0;
\end{aligned}\tag{7.8.4}
$$

we can obtain an optimal solution to (7.8.4): $u_1=\sqrt{3/2}$, $u_2=\frac32$, $u_3=1$, and then a $J$-optimal solution to the primal problem: $x_1=\sqrt{3/2}-0.27$, $x_2=\frac32-0.2$, $x_3=1.1$, and the optimal value is $g_0(x)=\frac92$.
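The stated solution of (7.8.4) can be checked directly: it satisfies the constraint with equality and attains the objective value $\frac92$ (this verifies feasibility and the value, not global optimality):

```python
# Check of the solution u = (sqrt(3/2), 3/2, 1) to (7.8.4)
u1, u2, u3 = (3.0 / 2.0) ** 0.5, 3.0 / 2.0, 1.0
constraint = 2.0 * u1**-2 + u2**-1 * u3**-1    # 4/3 + 2/3, i.e. tight at 2
objective = 2.0 * u1**2 * u3 + u2 * u3         # 3 + 3/2, i.e. 9/2
```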

7.8.3 Primal Geometric Programming with Trapezoidal Fuzzy Variables

Similarly to the method for the geometric programming model with T-fuzzy variables, we can deal with the model in this section.

Definition 7.8.2. If (7.8.1) is given by trapezoidal fuzzy variables, i.e.,

$$
\begin{aligned}
\widetilde{\min}\ & g_0(\tilde x)\\
\text{s.t.}\ & g_i(\tilde x)\lesssim\tilde\sigma_i\ (1\le i\le p),\\
& \tilde x>0,
\end{aligned}\tag{7.8.5}
$$

where $\tilde x=(\tilde x_1,\tilde x_2,\cdots,\tilde x_m)^T$ is an $m$-dimensional trapezoidal fuzzy variable vector, $\tilde x_l=(x_l^-,x_l^+,\underline\xi_l,\bar\xi_l)$ is trapezoidal fuzzy data, and $\tilde 1=(1^-,1^+,1,1)$ is a trapezoidal fuzzy number, then it is a posynomial geometric programming with trapezoidal fuzzy variables.

Theorem 7.8.2. If the posynomial geometric programming with trapezoidal fuzzy variables is shown as (7.8.5), then, for a fixed platform index $T$, (7.8.5) is changed into a posynomial geometric programming depending on the platform index $T$:

$$
\begin{aligned}
\min\ & g_0(z(T))\\
\text{s.t.}\ & g_i(z(T))\le 1\ (1\le i\le p),\\
& z(T)>0,
\end{aligned}\tag{7.8.6}
$$

and the optimum solution with platform index $T$ in (7.8.6) also means a trapezoidal fuzzy one in (7.8.5), where $g_i(z(T))=\sum_{k=1}^{J_i}c_{ik}\prod_{l=1}^{m}(z_l(T))^{\gamma_{ikl}}$ $(0\le i\le p)$.

Proof: Let $\tilde x_l=(\tilde x_{l1},\tilde x_{l2},\cdots,\tilde x_{lp})^T$ be a trapezoidal fuzzy variable satisfying (7.8.5), where $\tilde x_{li}=(x_{li}^-,x_{li}^+,\underline\xi_{li},\bar\xi_{li})$ $(1\le l\le m;\ 1\le i\le p)$. Because $x$ is freely fixed in the closed value interval $[x_{li}^-,x_{li}^+]$, we choose the degree of accomplishment in the light of a membership function like (1.5.3); then we deduce

$$
x_{li}-x_{li}^-\ge\sqrt[n]{\alpha}\,(x_{li}^+-x_{li}^-)\ \Rightarrow\ x_{li}=x_{li}^-+\sqrt[n]{\alpha}\,(x_{li}^+-x_{li}^-)
$$

by $\varphi_{\tilde l}(x_{li})\ge\alpha$. We classify the variables of the column by subscripts, and may let $l=1,2,\cdots,M$ correspond to the smaller fluctuating variables while the other variables correspond to $l=M+1,\cdots,3M$. Then:

1° For $l=1,2,\cdots,M$ and each $i$,
$$
\tilde x_{li}\to x_{li}^-+\sqrt[n]{\alpha}\,(x_{li}^+-x_{li}^-)+\frac{\underline\xi_{li}+\bar\xi_{li}}{2};
$$
2° For $l=M+1,\cdots,2M$ and each $i$,
$$
\tilde x_{li}\to
\begin{cases}
x_{li}^-+\sqrt[n]{\alpha}\,(x_{li}^+-x_{li}^-)+\bar\xi_{li}, & j_l=0,\\
x_{li}^-+\sqrt[n]{\alpha}\,(x_{li}^+-x_{li}^-)-\underline\xi_{li}, & j_l=1;
\end{cases}
$$
3° For $l=2M+1,\cdots,3M$ and each $i$,
$$
\tilde x_{li}\to
\begin{cases}
x_{li}^-+\sqrt[n]{\alpha}\,(x_{li}^+-x_{li}^-)-\underline\xi_{li}, & j_l=0,\\
x_{li}^-+\sqrt[n]{\alpha}\,(x_{li}^+-x_{li}^-)+\bar\xi_{li}, & j_l=1.
\end{cases}
$$

Therefore, under the same given platform index $T$, let $z_{li}=x_{li}^-+\sqrt[n]{\alpha}\,(x_{li}^+-x_{li}^-)+\xi_{li}^*$, where $\xi_{li}^*$ is $\frac{\underline\xi_{li}+\bar\xi_{li}}{2}$, $\pm\underline\xi_{li}$, or $\pm\bar\xi_{li}$, and form $z_l(T)$ from the $z_{li}$ accordingly. Substituting $z_l(T)$ for $\tilde x_l$ in (7.8.5), we turn (7.8.5) into (7.8.6). So, under the given platform index $T$ mentioned above, (7.8.5) can be turned into (7.8.6), and an optimal solution to (7.8.6) depending on platform index $T$ is equivalent to a trapezoidal fuzzy optimal one to (7.8.5).

Illustrative Examples

Now, the variables $\tilde x_l=(x_l^-,x_l^+,\underline\xi_l,\bar\xi_l)$ are divided into two cases:
I. Variable type, where the mean value and the spreads are all decision variables.
II. Data type, where the mean value and the spreads are all real numbers.

Example 7.8.2: Find

$$
\begin{aligned}
\min\ & g_0(\tilde x)=\sqrt{\frac{1}{\tilde x_1-1}}\cdot\sqrt[3]{\frac{1}{\tilde x_2-\frac14}}\\
\text{s.t.}\ & (\tilde x_1-1)\Bigl(\tilde x_2-\frac14\Bigr)\gtrsim\tilde 1,\\
& 0<\tilde x_1\lesssim\tilde 2,\ 0<\tilde x_2\lesssim\widetilde{\tfrac54},
\end{aligned}\tag{7.8.7}
$$

where $\tilde 1=(1,1,0,0)$, $\tilde 2=(2,2,0,0)$, $\widetilde{\frac54}=(\frac54,\frac54,0,0)$ are special trapezoidal fuzzy numbers.

i) Take the trapezoidal fuzzy data of $\tilde x_1$ and $\tilde x_2$ as:

$\tilde x_1$: 1) $(x_1^-,x_1^+,1,0)$; 3) $(x_1^-,x_1^+,2,1)$; 5) $(x_1^-,x_1^+,1.5,1)$; 7) $(x_1^-,x_1^+,\frac12,\frac12)$; 9) $(x_1^-,x_1^+,2,2)$; 11) $(x_1^-,x_1^+,1,2)$.
$\tilde x_2$: 2) $(x_2^-,x_2^+,1.5,1.2)$; 4) $(x_2^-,x_2^+,0,1)$; 6) $(x_2^-,x_2^+,2,1)$; 8) $(x_2^-,x_2^+,1,1.5)$; 10) $(x_2^-,x_2^+,0,0)$; 12) $(x_2^-,x_2^+,2,1)$.

ii) $\tilde x_1,\tilde x_2$ may be freely fixed in the closed value intervals $[1,2]$ and $[\frac14,1\frac14]$, respectively; for an interval with endpoints $x^L$ and $x^U$, formula (1.5.3) in Section 1.5 of Chapter 1 applies. Let $n=1$. Then

$$
\varphi_1(x_1)=\frac{x_1-1}{2-1}\ge\alpha_1,\qquad
\varphi_2(x_2)=\frac{x_2-\frac14}{1\frac14-\frac14}\ge\alpha_2
\ \Rightarrow\
\begin{cases}
x_1\ge\alpha_1+1,\\[2pt]
x_2\ge\alpha_2+\frac14,
\end{cases}
$$

which is equivalent to $\tilde x_1$: $x_1=\alpha_1+1$; $\tilde x_2$: $x_2=\alpha_2+\frac14$, $\alpha_1,\alpha_2\in[0,1]$. So the data of the twelve groups mentioned above become:

1) $(\alpha_1+1,1,0)$; 3) $(\alpha_1+1,2,1)$; 5) $(\alpha_1+1,1.5,1)$; 7) $(\alpha_1+1,\frac12,\frac12)$; 9) $(\alpha_1+1,2,2)$; 11) $(\alpha_1+1,1,2)$;
2) $(\alpha_2+\frac14,1.5,1.2)$; 4) $(\alpha_2+\frac14,0,1)$; 6) $(\alpha_2+\frac14,2,1)$; 8) $(\alpha_2+\frac14,1,1.5)$; 10) $(\alpha_2+\frac14,0,0)$; 12) $(\alpha_2+\frac14,2,1)$.

iii) Partition them into three groups I, II, and III by applying the proof of Theorem 7.3.1 in Section 7.3.


7 Fuzzy Geometric Programming

I. Numbers 1, 4, 7 and 10, whose data correspond to
$$\alpha_1+\tfrac32,\quad \alpha_2+\tfrac34,\quad \alpha_1+\tfrac32,\quad \alpha_2+\tfrac14.$$
II. Numbers 2, 5, 8 and 11, whose data correspond to
$$\alpha_2-\tfrac54,\quad \alpha_1+2,\quad \alpha_2-\tfrac34,\quad \alpha_1+3.$$
III. Numbers 3, 6, 9 and 12, whose data correspond to
$$\alpha_1-1,\quad \alpha_2+\tfrac54,\quad \alpha_1-1,\quad \alpha_2+\tfrac54.$$
Adding the terms containing $\alpha_j$ $(j=1,2)$ gives
$$\Bigl(\alpha_1+\tfrac32\Bigr)+\Bigl(\alpha_1+\tfrac32\Bigr)+(\alpha_1+2)+(\alpha_1+3)+(\alpha_1-1)+(\alpha_1-1)=6\alpha_1+6,$$
$$\Bigl(\alpha_2+\tfrac34\Bigr)+\Bigl(\alpha_2+\tfrac14\Bigr)+\Bigl(\alpha_2-\tfrac54\Bigr)+\Bigl(\alpha_2-\tfrac34\Bigr)+\Bigl(\alpha_2+\tfrac54\Bigr)+\Bigl(\alpha_2+\tfrac54\Bigr)=6\alpha_2+\tfrac64.$$

iv) Substitute $\alpha_1+1$ and $\alpha_2+\frac14$ for $\tilde x_1$ and $\tilde x_2$ in (7.8.7), respectively, and change (7.8.7) into the equivalent problem
$$\begin{aligned}
\min\ g_0(\alpha)&=\alpha_1^{-\frac12}\alpha_2^{-\frac13}\\
\text{s.t. }&\alpha_1\alpha_2\le 1,
\end{aligned}\qquad(7.8.8)$$

$$0<\alpha_1\le 1,\quad 0<\alpha_2\le 1.$$
An optimal solution to (7.8.8) is $\alpha=(1,1)^T$, with optimal value $g_0(\alpha)=1$. A parameter optimal solution to (7.8.7) is therefore $x_1=1+1=2$, $x_2=1+\frac14=\frac54$, with parameter optimal value $g_0(x)=1$.

7.8.4 Conclusion

It should be pointed out that the method of this section also applies to posynomial geometric programming models with other types of fuzzy variables. In applications, it is usually simpler to find a parameter optimal solution and a parameter optimal value.
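As a sanity check on Example 7.8.2, the parametric problem (7.8.8) can be solved numerically. The sketch below (plain Python; the grid step is an arbitrary choice) searches the feasible box and confirms the optimum $\alpha=(1,1)$ with value $g_0=1$:

```python
def g0(a1, a2):
    # Objective of (7.8.8): g0(alpha) = alpha1^(-1/2) * alpha2^(-1/3)
    return a1 ** -0.5 * a2 ** (-1.0 / 3.0)

def solve_788(steps=100):
    """Grid search over 0 < alpha1, alpha2 <= 1 with alpha1*alpha2 <= 1."""
    best = None
    for i in range(1, steps + 1):
        for j in range(1, steps + 1):
            a1, a2 = i / steps, j / steps
            if a1 * a2 <= 1:                      # constraint of (7.8.8)
                val = g0(a1, a2)
                if best is None or val < best[0]:
                    best = (val, a1, a2)
    return best

val, a1, a2 = solve_788()
print(val, a1, a2)        # minimum attained at alpha = (1, 1) with g0 = 1
# corresponding parameter optimal solution of (7.8.7):
x1, x2 = a1 + 1, a2 + 0.25
```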

7.9 Dual Method of Geometric Programming with Fuzzy Variables

7.9.1 Introduction

Posynomial geometric programming with fuzzy variables was advanced in the preceding section. Here it is turned into a dual programming with fuzzy coefficients by


a fuzzy dual theorem [MTM00], before its solution is found by the methods mentioned in Sections 7.4–7.7, i.e., by a dual method [Cha83]. An introduction to dual geometric programming with fuzzy variables follows.

7.9.2 Dual Geometric Programming with T-Fuzzy Variables

Suppose a geometric programming with T-fuzzy variables is given by (7.8.1); then its dual programming is defined as follows.

Definition 7.9.1. Let (7.8.1) be a primal geometric programming with T-fuzzy variables. Then we call
$$(\tilde D)\quad \max\ \tilde d(w)=\sigma\Biggl[\prod_{i=0}^{p}\prod_{k=1}^{J_i}\Bigl(\frac{\tilde c_{ik}}{w_{ik}}\Bigr)^{\sigma_i w_{ik}}\prod_{i=1}^{p}\Bigl(\sum_{k=1}^{J_i}w_{ik}\Bigr)^{\sigma_i\sum_{k=1}^{J_i}w_{ik}}\Biggr]^{\sigma}\qquad(7.9.1)$$
$$\text{s.t. } w_{00}=1,\quad \Gamma^T w=0,\quad w\ge 0$$
a dual programming of (7.8.1), where $w=(w_{01},\cdots,w_{0J_0},\cdots,w_{p1},\cdots,w_{pJ_p})^T$ is a $J$-dimensional variable vector ($J=J_0+J_1+\cdots+J_p$), $\tilde c_{ik}$ is a T-fuzzy number, $\sigma=\pm1$, and
$$\Gamma=\begin{pmatrix}
\sigma_{01}\gamma_{011}&\cdots&\sigma_{01}\gamma_{01l}&\cdots&\sigma_{01}\gamma_{01m}\\
\cdots&&\cdots&&\cdots\\
\sigma_{0J_0}\gamma_{0J_01}&\cdots&\sigma_{0J_0}\gamma_{0J_0l}&\cdots&\sigma_{0J_0}\gamma_{0J_0m}\\
\cdots&&\cdots&&\cdots\\
\sigma_{p1}\gamma_{p11}&\cdots&\sigma_{p1}\gamma_{p1l}&\cdots&\sigma_{p1}\gamma_{p1m}\\
\cdots&&\cdots&&\cdots\\
\sigma_{pJ_p}\gamma_{pJ_p1}&\cdots&\sigma_{pJ_p}\gamma_{pJ_pl}&\cdots&\sigma_{pJ_p}\gamma_{pJ_pm}
\end{pmatrix}$$
is an exponent matrix. We stipulate $(w_{ik})^{w_{ik}}|_{w_{ik}=0}=1$.

When each $\sigma_i$ is taken as 1, (7.9.1) turns into
$$\max\ \tilde d(w)=\prod_{i=0}^{p}\prod_{k=1}^{J_i}\Bigl(\frac{\tilde c_{ik}}{w_{ik}}\Bigr)^{w_{ik}}\prod_{i=1}^{p}w_{i0}^{\,w_{i0}}$$
$$\text{s.t. } w_{00}=1,\quad \Gamma^T w=0,\quad w\ge 0,$$
where $w_{i0}=\sum_{k=1}^{J_i}w_{ik}$, $\tilde c_{ik}=c_{ik}\cdot\tilde 1$ $(1\le i\le p)$ are fuzzy numbers, $\Gamma$ is an exponent matrix, and $w$ is a $J$-dimensional variable vector.

Disposal of Nonfuzzification in the Problem:

Theorem 7.9.1. If the problem (7.8.1) is deduced from the T-fuzzy variable $\tilde x_l=(\tilde x_{l1},\tilde x_{l2},\cdots,\tilde x_{lp})^T$, then the dual form of (7.8.1) is (7.9.1).


Proof: Since (7.8.1) is equivalent to (7.8.2) by Theorem 7.8.1, the dual of the parameter-containing programming (7.8.2) is obviously
$$\max\ d(w(J))=\sigma\Biggl[\prod_{i=0}^{p}\prod_{k=1}^{J_i}\Bigl(\frac{c_{ik}}{w_{ik}(J)}\Bigr)^{\sigma_i w_{ik}(J)}\prod_{i=1}^{p}\Bigl(\sum_{k=1}^{J_i}w_{ik}(J)\Bigr)^{\sigma_i\sum_{k=1}^{J_i}w_{ik}(J)}\Biggr]^{\sigma}\qquad(7.9.2)$$
$$\text{s.t. } w_{00}(J)=1,\quad \Gamma^T w(J)=0,\quad w(J)\ge 0,$$

where $w(J)=(w_{01}(J),\cdots,w_{0J_0}(J),\cdots,w_{p1}(J),\cdots,w_{pJ_p}(J))^T$ is a $J$-dimensional variable vector depending on a cone index $J$, and we correspondingly stipulate $(w_{ik}(J))^{w_{ik}(J)}|_{w_{ik}(J)=0}=1$. We can also prove (7.9.2) equivalent to (7.9.1) under the above cone index $J$, while (7.8.2) and (7.9.2) are mutually dual under the cone index $J$, so that (7.8.1) and (7.9.1) are mutually dual problems; hence the theorem holds.

Obviously, (7.8.1) can be changed into a dual programming (7.9.1) with fuzzy coefficients, and (7.9.1) is easier to solve than (7.8.1), since its variables are nonfuzzy and its optimal solution can be obtained by the methods of the previous chapter. Next we discuss what condition is needed for the existence of a fuzzy optimal solution of (7.8.1).

Definition 7.9.2. If $\tilde x>0$ satisfies $g_i(\tilde x)\le\tilde 1$ $(<\tilde 1)$, $1\le i\le p$, then the primal posynomial geometric programming $(\tilde P)$ with T-fuzzy variables is called fuzzy consistent (resp. fuzzy super-consistent). If there exists $z(J)>0$ such that $g_i(z(J))\le 1$ $(<1)$, $1\le i\le p$, then the primal posynomial geometric programming depending on the cone index $J$ is called consistent (resp. super-consistent).

Lemma 7.9.1 (Basic lemma). For any T-fuzzy feasible solution $\tilde x$ of the primal posynomial geometric programming (7.8.1) with T-fuzzy variables and any feasible solution $w$ of the dual programming (7.9.1) with fuzzy coefficients, we have
$$g_0(\tilde x)\ \ge\ g_0(\tilde x)\prod_{i=1}^{p}(g_i(\tilde x))^{w_{i0}}\ \ge\ \tilde d(w),$$
and
$$g_0(\tilde x)=\tilde d(w)\iff
w_{ik}=\begin{cases}
\dfrac{v_{0k}(\tilde x)}{g_0(\tilde x)}, & i=0;\ 1\le k\le J_0,\\[2mm]
w_{i0}\,v_{ik}(\tilde x), & i\ne 0;\ 1\le k\le J_i
\end{cases}$$
holds, in which case $\tilde x$ and $w$ are a T-fuzzy optimal solution to (7.8.1) and an optimal solution to (7.9.1), respectively.

Proof: From Theorem 7.9.1 we know (7.8.1) $\iff$ (7.8.2). Similarly, we can prove (7.9.1) equivalent to (7.9.2).


But (7.9.2) denotes a common programming depending on the cone index $J$; under the same given cone index $J$, $(P(J))$ and $(D(J))$ are mutually dual with respect to $J$. From Lemma 1.5.3 in Ref. [WY82], any feasible solutions $x(J)$ of $(P(J))$ and $w(J)$ of $(D(J))$ satisfy
$$g_0(x(J))\ \ge\ g_0(x(J))\prod_{i=1}^{p}(g_i(x(J)))^{w_{i0}}\ \ge\ d(w(J)),$$
with $g_0(x(J))=d(w(J))$ iff
$$w_{ik}=\begin{cases}
v_{0k}(x(J))/g_0(x(J)), & i=0;\ 1\le k\le J_0,\\
w_{i0}(J)\,v_{ik}(x(J)), & i\ne 0;\ 1\le k\le J_i
\end{cases}$$
holds, $x(J)$ and $w(J)$ then denoting optimal solutions to $(P(J))$ and $(D(J))$, respectively.
Again, by the equivalence of $(\tilde P)$ and $(P(J))$ as well as of $(\tilde D)$ and $(D(J))$, we obtain that of $(\tilde P)$ and $(\tilde D)$; therefore the lemma holds.

Theorem 7.9.2 (First fuzzy dual theorem). Let the primal posynomial geometric programming (7.8.1) be deduced from the T-fuzzy variable $\tilde x_l=(\tilde x_{l1},\tilde x_{l2},\cdots,\tilde x_{lp})^T$. If it is fuzzy super-consistent with a T-fuzzy optimal solution $\tilde x^*$, then there must exist a Lagrange multiplier $\lambda^*=(\lambda_1^*,\lambda_2^*,\cdots,\lambda_p^*)^T\ge 0$ such that
$$\nabla g_0(\tilde x^*)+\sum_{i=1}^{p}\lambda_i^*\nabla g_i(\tilde x^*)=0,\qquad(7.9.3)$$
$$\lambda_i^*(g_i(\tilde x^*)-1)=0\quad(1\le i\le p),\qquad(7.9.4)$$

while $w^*$ defined by
$$w_{ik}^*=\begin{cases}
\dfrac{v_{0k}(\tilde x^*)}{g_0(\tilde x^*)}, & i=0;\ 1\le k\le J_0,\\[2mm]
\dfrac{\lambda_i^*\,v_{ik}(\tilde x^*)}{g_0(\tilde x^*)}, & i\ne 0;\ 1\le k\le J_i
\end{cases}\qquad(7.9.5)$$
is an optimal solution of the dual programming (7.9.1), with
$$g_0(\tilde x^*)=\tilde d(w^*).\qquad(7.9.6)$$

Proof: For a given cone index $J$, it may be proved, similarly to Theorem 7.8.1, that the conditions of the theorem are equivalent to (7.8.2) being $J$-super-consistent with an optimal solution $z^*(J)$ depending on the cone index $J$. But under the condition that (7.8.2) is $J$-super-consistent with optimal solution $z^*(J)$, it can be proved, similarly to Theorem 1.6.3 in Ref. [WY82], that there must be a Lagrange multiplier $\lambda^*=(\lambda_1^*,\lambda_2^*,\cdots,\lambda_p^*)^T\ge 0$ such that
$$\nabla g_0(z^*(J))+\sum_{i=1}^{p}\lambda_i^*\nabla g_i(z^*(J))=0,\qquad(7.9.7)$$
$$\lambda_i^*(g_i(z^*(J))-1)=0,\qquad(7.9.8)$$


while $w^*(J)$ defined by
$$w_{ik}^*(J)=\begin{cases}
\dfrac{v_{0k}(z^*(J))}{g_0(z^*(J))}, & i=0;\ 1\le k\le J_0,\\[2mm]
\dfrac{\lambda_i^*\,v_{ik}(z^*(J))}{g_0(z^*(J))}, & i\ne 0;\ 1\le k\le J_i
\end{cases}\qquad(7.9.9)$$

is an optimal solution, depending on the cone index $J$, of the dual programming (7.9.2), with
$$g_0(z^*(J))=d(w^*(J)).\qquad(7.9.10)$$
Under the above cone index, (7.9.3)–(7.9.6) are equivalent to (7.9.7)–(7.9.10), respectively; therefore the theorem holds.

Theorem 7.9.3 (Second fuzzy dual theorem). Let the primal posynomial geometric programming (7.8.1) be deduced from a T-fuzzy variable. If (7.8.1) is fuzzy consistent and the dual problem (7.9.1) has a feasible solution with all components positive, then (7.8.1) has an optimal T-fuzzy solution.

Proof: Under a given cone index $J$, the condition of this theorem is equivalent to the following: if the primal problem (7.8.2) is $J$-compatible and its dual problem (7.9.2) has a $J$-feasible solution with all components positive, then, as may be proved similarly to Theorem 1.8.1 in Ref. [WY82], (7.8.2) has a $J$-optimal solution. This is equivalent to the truth of the theorem.

Theorem 7.9.2 shows that if a primal problem (7.8.1) with T-fuzzy variables is fuzzy super-consistent with optimal T-fuzzy solution $\tilde x^*$, then the dual problem (7.9.1) has an optimal solution $w^*$, and (7.9.1) and (7.8.1) have the same optimal T-fuzzy value. Theorem 7.9.3 further gives a sufficient condition under which (7.8.1) has an optimal T-fuzzy solution.

Example 7.9.1: Consider Example 7.8.1. Substituting $u_1=x_1+0.27$, $u_2=x_2+0.2$, $u_3=x_3-0.1$, with (7.8.3) changed into (7.8.4), its dual programming is
$$\max\ \Bigl(\frac{2}{w_{01}}\Bigr)^{w_{01}}\Bigl(\frac{1}{w_{02}}\Bigr)^{w_{02}}\Bigl(\frac{1}{w_{11}}\Bigr)^{w_{11}}\Bigl(\frac{1}{2w_{12}}\Bigr)^{w_{12}}(w_{11}+w_{12})^{w_{11}+w_{12}}$$
$$\text{s.t. } w_{01}+w_{02}=1;\quad 2w_{01}-2w_{12}=0;\quad w_{01}-w_{12}=0;\quad w\ge 0.$$
We obtain a unique dual feasible solution $w_{01}=\frac23$, $w_{02}=\frac13$, $w_{11}=\frac23$, $w_{12}=\frac13$, which is accordingly an optimal solution, and the dual optimal value is $M_D=d(w)=\frac92$. Again from (7.8.1), combining the substitutions, we can obtain a

$J$-optimal solution to the primal problem: $x_1=\sqrt{\tfrac32}-0.27$, $x_2=\sqrt{\tfrac32}-0.2$, $x_3=1.1$, and the optimal value is $g_0(x)=\frac92$. Therefore, $d(w)=g_0(x)$.

7.9.3 Dual Geometric Programming with Trapezoidal Fuzzy Variables
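Taking the reported dual solution of Example 7.9.1 at face value, the dual objective can be evaluated mechanically; the sketch below (plain Python) reproduces the optimal value $M_D=\frac92$:

```python
# Dual objective of Example 7.9.1:
# d(w) = (2/w01)^w01 (1/w02)^w02 (1/w11)^w11 (1/(2 w12))^w12 (w11+w12)^(w11+w12)
def d(w01, w02, w11, w12):
    return ((2 / w01) ** w01 * (1 / w02) ** w02
            * (1 / w11) ** w11 * (1 / (2 * w12)) ** w12
            * (w11 + w12) ** (w11 + w12))

value = d(2/3, 1/3, 2/3, 1/3)   # the unique dual feasible solution
print(value)                    # 4.5, i.e. M_D = 9/2
```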

Similarly, we discuss dual geometric programming with trapezoidal fuzzy variables. Considering (7.8.5), its dual programming, analogous to (7.9.1), is
$$\max\ \tilde d(w)=\sigma\Biggl[\prod_{i=0}^{p}\prod_{k=1}^{J_i}\Bigl(\frac{\tilde c_{ik}}{w_{ik}}\Bigr)^{\sigma_i w_{ik}}\prod_{i=1}^{p}\Bigl(\sum_{k=1}^{J_i}w_{ik}\Bigr)^{\sigma_i\sum_{k=1}^{J_i}w_{ik}}\Biggr]^{\sigma}\qquad(7.9.11)$$
$$\text{s.t. } w_{00}=1,\quad \Gamma^T w=0,\quad w\ge 0,$$
where $\tilde c_{ik}$ is a trapezoidal fuzzy number.

Theorem 7.9.4 (First fuzzy dual theorem). Let the primal posynomial geometric programming (7.8.5) with trapezoidal fuzzy variables be fuzzy super-consistent with fuzzy optimal solution $\tilde x^*$. Then there must exist a Lagrange multiplier $\lambda^*=(\lambda_1^*,\lambda_2^*,\cdots,\lambda_p^*)^T\ge 0$ such that
$$\nabla g_0(\tilde x^*)+\sum_{i=1}^{p}\lambda_i^*\nabla g_i(\tilde x^*)=0,\qquad \lambda_i^*(g_i(\tilde x^*)-\tilde 1)=0\quad(1\le i\le p),$$
while $w^*$ defined by
$$w_{ik}^*=\begin{cases}
\dfrac{v_{0k}(\tilde x^*)}{g_0(\tilde x^*)}, & i=0;\ 1\le k\le J_0,\\[2mm]
\dfrac{\lambda_i^*\,v_{ik}(\tilde x^*)}{g_0(\tilde x^*)}, & i\ne 0;\ 1\le k\le J_i
\end{cases}$$

is an optimal solution of the dual programming (7.9.11), with $g_0(\tilde x^*)=\tilde d(w^*)$.

Theorem 7.9.5 (Second fuzzy dual theorem). Let the primal posynomial geometric programming (7.8.5) be deduced from a trapezoidal fuzzy variable. If (7.8.5) is fuzzy consistent and the dual problem (7.9.11) has a feasible solution with all components positive, then (7.8.5) has a fuzzy optimal solution.

Example 7.9.2: Find the dual programming of Example 7.8.2. Since (7.8.7) can be turned into (7.8.8), the dual problem of (7.8.8) is


$$\max\ \Bigl(\frac{1}{w_0}\Bigr)^{w_0}\Bigl(\frac{1}{w_1}\Bigr)^{w_1}\Bigl(\frac{1}{w_2}\Bigr)^{w_2}\Bigl(\frac{1}{w_3}\Bigr)^{w_3}w_1^{w_1}w_2^{w_2}w_3^{w_3}$$
$$\text{s.t. } w_0=1,\quad -\tfrac12 w_0+w_1+w_2=0,\quad -\tfrac13 w_0+w_1+w_3=0,\quad w_0,w_1,w_2,w_3\ge 0.$$
An optimal solution is $w=(1,\frac13,\frac16,0)^T$ with optimal value $d=1$, from which we can get an approximate optimal solution of the primal problem, namely $x=(2,\frac54)^T$, with optimal value 1.

7.9.4 Disposal of Nonfuzzification in a Fuzzy Number

Proposition 7.9.1. Let $\tilde x=(x,\underline\xi,\overline\xi)_T$ and $\tilde y=(y,\underline\eta,\overline\eta)_T$ be T-fuzzy variables with reference functions $(L_{\tilde x},R_{\tilde x})$ and $(L_{\tilde y},R_{\tilde y})$, all invertible. Then $\tilde x\le\tilde y$ if and only if
$$\sup\tilde x_{\alpha_{\tilde x,R}}+\sup\tilde x_{\alpha_{\tilde x,L}}\ \le\ \sup\tilde y_{\alpha_{\tilde y,R}}+\sup\tilde y_{\alpha_{\tilde y,L}},$$
where, for $k=\tilde x,\tilde y$, $\alpha_{k,R}=R_k\bigl(\int_0^1 R_k^{-1}(\alpha)\,d\alpha\bigr)$ and $\alpha_{k,L}=L_k\bigl(\int_0^1 L_k^{-1}(\alpha)\,d\alpha\bigr)$.

Proof: According to the definition given by Roubens, and similarly to the proof in Ref. [Rou91], the proposition is easily proved.

For T-fuzzy variables $\tilde x=(x,\underline\xi,\overline\xi)_T$ and $\tilde y=(y,\underline\eta,\overline\eta)_T$, we thus have $\tilde x\le\tilde y$ if and only if $x+\frac12(\overline\xi-\underline\xi)\le y+\frac12(\overline\eta-\underline\eta)$. Therefore the real variable $\bar x$ corresponding to the T-fuzzy variable $\tilde x=(x,\underline\xi,\overline\xi)_T$ is
$$\bar x=x+\frac12(\overline\xi-\underline\xi).$$
Especially, if $\tilde a=(a,\underline\alpha,\overline\alpha)_T$ and $\tilde b=(b,\underline\beta,\overline\beta)_T$ are T-fuzzy numbers, then $\tilde a\le\tilde b$ if and only if $a+\frac12(\overline\alpha-\underline\alpha)\le b+\frac12(\overline\beta-\underline\beta)$, and the real datum $\bar a$ corresponding to the T-fuzzy datum $\tilde a=(a,\underline\alpha,\overline\alpha)_T$ is
$$\bar a=a+\frac12(\overline\alpha-\underline\alpha).\qquad(7.9.12)$$
Generally, the real variable $\bar x$ corresponding to the trapezoidal fuzzy variable $\tilde x=(x^L,x^U,\underline\xi,\overline\xi)$ is
$$\bar x=\frac12\bigl[x^L+x^U+(\overline\xi-\underline\xi)\bigr],$$
and the real datum $\bar a$ corresponding to the trapezoidal fuzzy datum $\tilde a=(a^L,a^U,\underline\alpha,\overline\alpha)$ is
$$\bar a=\frac12\bigl[a^L+a^U+(\overline\alpha-\underline\alpha)\bigr].\qquad(7.9.13)$$
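These correspondence formulas are directly computable; a small sketch (plain Python; the tuple conventions follow the readings of (7.9.12) and (7.9.13) above and should be treated as assumptions):

```python
def defuzz_T(a, alpha_lower, alpha_upper):
    # (7.9.12): real datum for the T-fuzzy datum (a, alpha_lower, alpha_upper)
    return a + 0.5 * (alpha_upper - alpha_lower)

def defuzz_trapezoid(aL, aU, alpha_lower, alpha_upper):
    # (7.9.13): real datum for the trapezoidal datum (aL, aU, alpha_lower, alpha_upper)
    return 0.5 * (aL + aU + (alpha_upper - alpha_lower))

t = defuzz_T(8, 0.3, 0.5)                 # 8.1, as for 8~ = (8, 0.3, 0.5) in Example 7.9.3
z = defuzz_trapezoid(7.5, 8.5, 0.3, 0.5)  # 8.1
print(t, z)
```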


Example 7.9.3: Find
$$\min\ g_0(\tilde x)=20\tilde x_1\tilde x_3+40\tilde x_2\tilde x_3+80\tilde x_1\tilde x_2$$
$$\text{s.t. } g_1(\tilde x)=\tilde 8\,\tilde x_1^{-1}\tilde x_2^{-1}\tilde x_3^{-1}\le\tilde 1,\quad \tilde x>0,$$
where the $\tilde x_i$ are special trapezoidal fuzzy variables and $\tilde 1=(1,1,0,0)$ is a special trapezoidal fuzzy number. Its dual programming is
$$\max\ \tilde d(w)=\Bigl(\frac{20}{w_{01}}\Bigr)^{w_{01}}\Bigl(\frac{40}{w_{02}}\Bigr)^{w_{02}}\Bigl(\frac{80}{w_{03}}\Bigr)^{w_{03}}\Bigl(\frac{\tilde 8}{w_{11}}\Bigr)^{w_{11}}w_{11}^{\,w_{11}}$$
$$\text{s.t. } w_{01}+w_{02}+w_{03}=1,\quad w_{01}+w_{03}-w_{11}=0,\quad w_{02}+w_{03}-w_{11}=0,\quad w_{01}+w_{02}-w_{11}=0,\quad w\ge 0,$$
where $w=(w_{01},w_{02},w_{03},w_{11})^T$ is a 4-dimensional vector and $\tilde 8$ is a fuzzy number. Solving the constraint equation group gives the feasible solution $w_{01}=\frac13$, $w_{02}=\frac13$, $w_{03}=\frac13$, $w_{11}=\frac23$. Since it is the unique feasible solution, it is an optimal solution, and the optimal value is
$$\tilde d(w)=\Bigl(\frac{20}{\frac13}\Bigr)^{\frac13}\Bigl(\frac{40}{\frac13}\Bigr)^{\frac13}\Bigl(\frac{80}{\frac13}\Bigr)^{\frac13}\Bigl(\frac{\tilde 8}{\frac23}\Bigr)^{\frac23}\Bigl(\frac23\Bigr)^{\frac23}=(1728000)^{\frac13}\,\tilde 8^{\frac23}.$$
When $\tilde 8\in[7.5,8.5]$ and $\bar 8=7.8+0.7\alpha$ is defined by (1.5.1) ($n=1$), $\tilde 8\to 7.8+0.7\alpha$, and the optimal value is $\tilde d(w)=(1728000)^{\frac13}(7.8+0.7\alpha)^{\frac23}$.
When $\tilde 8=(7.5,8.5,0.3,0.5)$ is a trapezoidal fuzzy number, $\bar 8=\frac12[7.5+8.5+(0.5-0.3)]=8.1$ by (7.9.13), and the optimal value is $\tilde d(w)=(1728000)^{\frac13}(8.1)^{\frac23}$.
When $\tilde 8=(8,0.3,0.5)$ is a T-fuzzy number, $\bar 8=8+\frac{0.5-0.3}{2}=8.1$ by (7.9.12), and the optimal value is $\tilde d(w)=(1728000)^{\frac13}(8.1)^{\frac23}$.
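With the fuzzy coefficient $\tilde 8$ replaced by the crisp value 8 (an illustrative simplification, not the book's fuzzy treatment), the zero-degree-of-difficulty dual of Example 7.9.3 can be checked end to end; a sketch in plain Python evaluating the dual value and recovering a primal optimum from the dual weights:

```python
# Dual value of Example 7.9.3 with crisp coefficient 8:
# d(w) = (20/w01)^w01 (40/w02)^w02 (80/w03)^w03 (8/w11)^w11 * w11^w11
w01 = w02 = w03 = 1/3
w11 = 2/3
d = ((20/w01)**w01 * (40/w02)**w02 * (80/w03)**w03
     * (8/w11)**w11 * w11**w11)
print(round(d, 6))   # 480.0 = 1728000^(1/3) * 8^(2/3)

# At the primal optimum each objective term equals w0k * d:
# 20*x1*x3 = 40*x2*x3 = 80*x1*x2 = 160, giving x = (2, 1, 4),
# which meets the constraint x1*x2*x3 >= 8 with equality.
x1, x2, x3 = 2, 1, 4
print(20*x1*x3 + 40*x2*x3 + 80*x1*x2)   # 480
```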

7.9.5 Conclusions

This section has given methods for nonfuzzifying a programming problem with fuzzy variables and determining an optimal solution. As long as the primal programming problem has a solution, an analytic solution to it can be acquired by solving its dual problem.


7.10 Multi-Objective Geometric Programming with T-Fuzzy Variables

7.10.1 Modeling

Definition 7.10.1 [Cao96a]. Let

$$\min\ g_0^{(j)}(\tilde x)\quad(1\le j\le n)$$
$$\text{s.t. } g_i(\tilde x)\le\tilde\sigma\quad(1\le i\le p),\qquad \tilde x>0.\qquad(7.10.1)$$
If $\tilde x=(\tilde x_1,\tilde x_2,\cdots,\tilde x_m)^T$ stands for an $m$-dimensional T-fuzzy variable vector, we call (7.10.1) a multi-objective geometric programming with T-fuzzy variables. Here $\tilde x_i=(x_i,\underline\xi_i,\overline\xi_i)$ and $\tilde\sigma=\sigma\times\tilde 1$, $\tilde 1=(1,1,1)$ are T-fuzzy numbers,

where $\sigma_{ik},\sigma=\pm1$, and
$$g_0^{(j)}(\tilde x)=\sum_{k=1}^{J_0}\sigma_{0k}c_{0k}^{(j)}\prod_{l=1}^{m}\tilde x_l^{\gamma_{0kl}^{(j)}}\quad(1\le j\le n),\qquad g_i(\tilde x)=\sum_{k=1}^{J_i}\sigma_{ik}c_{ik}\prod_{l=1}^{m}\tilde x_l^{\gamma_{ikl}}\quad(1\le i\le p)$$
are fuzzy polynomials of $\tilde x$, and $\gamma_{ikl}$ is an arbitrary real number.

Theorem 7.10.1. Let the multi-objective geometric programming model with T-fuzzy variables be as in (7.10.1). Then it can be converted into a multi-objective geometric programming with a cone index $J$:

$$\min\ g_0^{(j)}(z(J))\quad(1\le j\le n)$$
$$\text{s.t. } g_i(z(J))\le\sigma\quad(1\le i\le p),\qquad z(J)>0,\qquad(7.10.2)$$
where
$$g_0^{(j)}(z(J))=\sum_{k=1}^{J_0}\sigma_{0k}c_{0k}^{(j)}\prod_{l=1}^{m}z_l^{\gamma_{0kl}^{(j)}}(J),\qquad g_i(z(J))=\sum_{k=1}^{J_i}\sigma_{ik}c_{ik}\prod_{l=1}^{m}z_l^{\gamma_{ikl}}(J),$$

and (7.10.1) has an optimal solution with T-fuzzy variables, which is equivalent to (7.10.2) having an optimal solution depending on a cone index $J$.

Proof: Similarly to the proof of Theorem 7.8.1, (7.10.1) is turned into
$$\min\ \sum_{k=1}^{J_0}c_{0k}^{(j)}\prod_{l=1}^{m}(z_l(J))^{\gamma_{0kl}}\quad(1\le j\le n)$$
$$\text{s.t. } \sum_{k=1}^{J_i}\sigma_{ik}c_{ik}\prod_{l=1}^{m}(z_l(J))^{\gamma_{ikl}}\le\sigma\quad(1\le i\le p),\qquad z_l(J)>0\quad(1\le l\le m),$$
such that (7.10.2) can be found.


Since (7.10.1) is equivalent to (7.10.2), a parameter optimal solution to (7.10.2) depending on a cone index $J$ is equivalent to an optimal T-fuzzy solution to (7.10.1).

Corollary 7.10.1. If $\tilde 1$ is taken for $\tilde\sigma_i$ in (7.10.1), then (7.10.1) becomes
$$\min\ g_0^{(j)}(\tilde x)\quad(1\le j\le n)\qquad \text{s.t. } g_i(\tilde x)\le\tilde 1\quad(1\le i\le p),\quad \tilde x>0,$$
and if 1 is taken for $\sigma$ in (7.10.2), then (7.10.2) becomes
$$\min\ g_0^{(j)}(z(J))\quad(1\le j\le n)\qquad \text{s.t. } g_i(z(J))\le 1\quad(1\le i\le p),\quad z(J)>0,$$
so the conclusion corresponding to Theorem 7.10.1 still holds.

Algorithm

For a multi-objective geometric programming (7.10.1) with T-fuzzy variables, the objective functions can either be weighted before nonfuzzification or be nonfuzzified before being weighted. Two algorithms are advanced, on the assumption that (7.10.1), having a solution, is nonfuzzified into (7.10.2).

A. Nonfuzzification steps

The nonfuzzification steps for Model (7.10.1) are as follows.

Step 1. For the given T-fuzzy variable $\tilde x_l$, the natural number set $\{1,2,\cdots,n\}$ is partitioned into three parts by subscripts:
$$\text{I: } z_{li}=x_{li}+\frac{\underline\xi_{li}+\overline\xi_{li}}{2}\ \text{ for } 1\le i\le N \text{ and each } l;$$
$$\text{II: } z_{li}=\begin{cases}x_{li}+\overline\xi_{li},& j_l=0,\\ x_{li}-\underline\xi_{li},& j_l=1\end{cases}\ \text{ for } N+1\le i\le 2N \text{ and each } l;$$
$$\text{III: } z_{li}=\begin{cases}x_{li}-\overline\xi_{li},& j_l=0,\\ x_{li}+\underline\xi_{li},& j_l=1\end{cases}\ \text{ for } 2N+1\le i\le 3N \text{ and each } l.$$

Step 2. Nonfuzzify the variable $\tilde x_l$. Select
$$z_{li}=x_{li}+\frac{1}{3N}\sum_{i=1}^{3N}\xi_{li}^{*},$$
where $\xi_{li}^{*}$ is $\frac{\underline\xi_{li}+\overline\xi_{li}}{2}$ (in case I), $\pm\overline\xi_{li}$ (in case II), or $\pm\underline\xi_{li}$ (in case III).

Step 3. Substitute $z_{li}$ for $\tilde x_l$; the T-fuzzy variable is thus turned into a determined variable, so that (7.10.1) is changed into the determined geometric programming (7.10.2).


Step 4. Determine a satisfactory (resp. effective) solution to problem (7.10.2) by geometric programming, from which a fuzzy satisfactory (resp. effective) solution to (7.10.1) can be composed.

B. Direct primal algorithm

For a multi-objective geometric programming (7.10.1) with T-fuzzy variables, two ways of nonfuzzifying (7.10.1) are advanced.

Algorithm I: Nonfuzzify (7.10.1) into (7.10.2) before weighting the objective functions in (7.10.2). Give the $j$-th objective function the weight $\kappa_j$ $(1\le j\le n)$. The $n$ objective functions in (7.10.2) are then weighted into $g_0^*(z(J))=\kappa_1 g_0^{(1)}(z(J))+\kappa_2 g_0^{(2)}(z(J))+\cdots+\kappa_n g_0^{(n)}(z(J))$, where $\kappa_j$ is a weighting factor satisfying $0\le\kappa_j\le 1$ $(1\le j\le n)$ and $\kappa_1+\kappa_2+\cdots+\kappa_n=1$. Substituting $g_0^*(z(J))$ for the $n$ objective functions turns (7.10.2) into the single-objective parameter geometric programming
$$\min\ g_0^*(z(J))\qquad \text{s.t. } g_i(z(J))\le\sigma\quad(1\le i\le p),\quad z(J)>0,\qquad(7.10.3)$$

which is then solved.

Algorithm II: Apply the weighting method to the objective functions in (7.10.1) before nonfuzzifying it. Consider (7.10.1); its $n$ objective functions are weighted into $g_0^*(\tilde x)=\kappa_1 g_0^{(1)}(\tilde x)+\kappa_2 g_0^{(2)}(\tilde x)+\cdots+\kappa_n g_0^{(n)}(\tilde x)$. Substituting $g_0^*(\tilde x)$ for the $n$ objective functions changes (7.10.1) into
$$\min\ g_0^*(\tilde x)\qquad \text{s.t. } g_i(\tilde x)\le\tilde\sigma\quad(1\le i\le p),\quad \tilde x>0.\qquad(7.10.4)$$

Nonfuzzifying (7.10.4) in the above way, a determined geometric programming like (7.10.3) can be obtained.

Note: The above weight vector $\kappa_j$ can be determined by an Analytic Hierarchy Process or by expert evaluation. Several methods exist for solving an ordinary geometric programming like (7.10.3). The primal algorithm can be adopted to find an approximate satisfactory (or effective) solution, from which a fuzzy satisfactory solution to (7.10.1) can be found; the latter behaves as a single-objective geometric programming problem with respect to a cone index $J$. There obviously exist many direct solution methods for (7.10.3) [Cao89c][TA84][WB67][WY82].
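The weighting step common to Algorithms I and II is a plain convex combination of objectives; a minimal sketch (plain Python; the two component objectives are hypothetical placeholders, not from the text):

```python
def weighted_objective(objectives, weights):
    """Combine objective functions g0^(j) into g0* = sum_j kappa_j * g0^(j).
    Weights must be nonnegative and sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-12 and all(k >= 0 for k in weights)
    def g0_star(*x):
        return sum(k * g(*x) for k, g in zip(weights, objectives))
    return g0_star

# Hypothetical two-objective illustration:
g1 = lambda u1, u2: u1
g2 = lambda u1, u2: u2 ** 2
g = weighted_objective([g1, g2], [0.5, 0.5])
print(g(2, 2))   # 0.5*2 + 0.5*4 = 3.0
```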

7.10.2 Fuzzy Dual Problem

Definition 7.10.2. Given
$$\max\ \tilde d^{(j)}(w)=\sigma\Biggl[\prod_{k=1}^{J_0}\Bigl(\frac{\tilde c_{0k}^{(j)}}{w_{0k}}\Bigr)^{\sigma_{0k}w_{0k}}\prod_{i=1}^{p}\prod_{k=1}^{J_i}\Bigl(\frac{\tilde c_{ik}}{w_{ik}}\Bigr)^{\sigma_{ik}w_{ik}}\Bigl(\sum_{k=1}^{J_i}w_{ik}\Bigr)^{\sum_{k=1}^{J_i}w_{ik}}\Biggr]^{\sigma}\qquad(7.10.5)$$
$$\text{s.t. } w_{00}=1,\quad \Gamma^{(j)T}w=0,\quad w\ge 0,$$
it is called a multi-objective dual geometric programming with T-fuzzy variables corresponding to (7.10.1), where
$$\Gamma^{(j)}=\begin{pmatrix}
\sigma_{01}\gamma_{011}^{(j)}&\cdots&\sigma_{01}\gamma_{01l}^{(j)}&\cdots&\sigma_{01}\gamma_{01m}^{(j)}\\
\cdots&&\cdots&&\cdots\\
\sigma_{0J_0}\gamma_{0J_01}^{(j)}&\cdots&\sigma_{0J_0}\gamma_{0J_0l}^{(j)}&\cdots&\sigma_{0J_0}\gamma_{0J_0m}^{(j)}\\
\sigma_{11}\gamma_{111}&\cdots&\sigma_{11}\gamma_{11l}&\cdots&\sigma_{11}\gamma_{11m}\\
\cdots&&\cdots&&\cdots\\
\sigma_{pJ_p}\gamma_{pJ_p1}&\cdots&\sigma_{pJ_p}\gamma_{pJ_pl}&\cdots&\sigma_{pJ_p}\gamma_{pJ_pm}
\end{pmatrix}\quad(1\le j\le n)$$
denotes the $j$-th exponent matrix. Its $l$-th column is composed of the exponents of the variable $\tilde x_l$ in each term of the $j$-th objective function $g_0^{(j)}(\tilde x)$ $(1\le j\le n)$ and of the constraint functions $g_i(\tilde x)$ $(1\le i\le p)$; $w=(w_{01},\cdots,w_{0J_0},\cdots,w_{p1},\cdots,w_{pJ_p})^T$ represents a $J$-dimensional variable vector, and $(w_{ik})^{w_{ik}}|_{w_{ik}=0}=1$ is stipulated.

Theorem 7.10.2. If the problem (7.10.1) is deduced from the T-fuzzy variable $\tilde x_l=(\tilde x_{l1},\tilde x_{l2},\cdots,\tilde x_{lp})^T$, then the dual form of (7.10.1) is (7.10.5).

Proof: As (7.10.1) is equivalent to (7.10.2) by Theorem 7.10.1, the dual form of (7.10.2) is obviously
$$\max\ d(w(J))\qquad \text{s.t. } w_{00}(J)=1,\quad \bar\Gamma^T w(J)=0,\quad w(J)\ge 0,\qquad(7.10.6)$$
where
$$d(w(J))=\sigma\Biggl[\prod_{k=1}^{J_0}\Bigl(\frac{c_{0k}^{(j)}}{w_{0k}(J)}\Bigr)^{\sigma_{0k}w_{0k}(J)}\prod_{i=1}^{p}\prod_{k=1}^{J_i}\Bigl(\frac{c_{ik}}{w_{ik}(J)}\Bigr)^{\sigma_{ik}w_{ik}(J)}\Bigl(\sum_{k=1}^{J_i}w_{ik}(J)\Bigr)^{\sum_{k=1}^{J_i}w_{ik}(J)}\Biggr]^{\sigma},$$

w(J ) = (w01 (J ), · · · , w0J0 (J ), · · · , wp1 (J ), · · · , wpJp (J ))T is a J− dimensional variable vector depending on a cone index J , and Γ¯ is an exponent matrix. Now let us stipulate (wik (J ))wik (J ) |wik (J )=0 = 1 correspondingly.

252

7 Fuzzy Geometric Programming

(7.10.6) can be also proved to be equivalent to (7.10.5) under the above cone index J , while (7.10.2) and (7.10.6) are mutually dual under the cone index J . Hence (7.10.1) and (7.10.5) are mutually dual problems, so the theorem holds. Dual algorithm Algorithm III: A single objective geometric programming is obtained, for a certain j, over (7.10.1), and its corresponding dual programming is (7.10.6). n groups of optimal solutions, the worst value Uj and the best value Lj are obtained by solving n dual geometric programming in terms of j(1 j n) respectively. Thereafter a single objective fuzzy geometric programming is obtained below: max xm+1 s.t. gi (x) 1 (1 i p), (j) gp+j (x) = U1j g0 (x) + (1 − x1 , x2 , · · · , xm+1 > 0,

Lj Uj )xm+1

1 (1 j n),

(7.10.7)

where gi (x)(1 i p + n) are posynomial when coeﬃcients are positive. Finally, the optimal compromise solution to problem (7.10.2) is acquired respectively by adopting a primal algorithm [Biw92][WB67][WY82], such that a fuzzy optimal compromise solution comes out. Algorithm IV: Change (7.10.1) into an ordinary single objective geometric programming like (7.10.3) by means of the two nonfuzziﬁcation methods above, and deduce its dual programming by a dual theory. Again a dual parameter solution to the dual problem can be obtained by using a dual algorithm, so can a fuzzy optimal compromise solution. 7.10.3 Numerical Example Let us consider a multi-objective geometric programming with T -fuzzy variables as follows: (1)

(2)

x) = x ˜1 , min g0 (˜ x) = x˜22 } min {g0 (˜ −1 −1 ˜ s.t. 4˜ x1 x˜2 1, x ˜2 ˜ 1, 1 + x˜21 ˜2 > 0, x˜1 , x

(7.10.8)

˜2 = (x2 , ξ 2 , ξ 2 ) be T -fuzzy variable, and ˜1 = where x˜1 = (x1 , ξ 1 , ξ 1 ), and x (1, 0, 0) be special T - fuzzy number. Here we might as well let T -fuzzy variables are adopted as follows: x ˜1 : x ˜2 :

1. (x1 , 1, 0), 4. (x2 , 0, 1),

2. (x1 , 0, 1), 5. (x2 , 1, 0),

3. (x1 , 2, 1); 6. (x2 , 2, 2).

7.10 Multi-Objective Geometric Programming with T -Fuzzy Variables

253

Now, let us divide the data into three groups including No.1, 4; No.2, 5 and No.3, 6. As for data No.1, 4, a value is got by Formula I in (1) Algorithm A. For the rest, the formulas is used corresponding to jp = 1 and jp = 0 in Formula II and III when odd numbers and even numbers appear, respectively. ˜2 can be nonfuzziﬁed So, x ˜1 , x (x1 + 0.5) + (x1 − 0) + (x1 + 1) = x1 + 0.5, 3 (x2 + 0.5) + (x2 − 0) + (x3 − 2) = x2 − 0.5. x ˜2 : 3

x ˜1 :

The geometric programming corresponding to (7.10.8) is therefore
$$\min\ \{(x_1+0.5),\ (x_2-0.5)^2\}$$
$$\text{s.t. } 4(x_1+0.5)^{-1}(x_2-0.5)^{-1}\le 1,\qquad \frac{x_2-0.5}{1+\frac{x_1+0.5}{2}}\ge 1,\qquad x_1,x_2>0.$$

(7.10.9)

This is a multi-objective geometric programming problem concerning the cone index $J$, and a variety of direct solution methods exist for (7.10.9). Adopting an objective-weighting method, the objective function is changed into $g_0(\tilde x)=\kappa_1 g_0^{(1)}(\tilde x)+\kappa_2 g_0^{(2)}(\tilde x)$, so that $g_0(x)=\kappa_1(x_1+0.5)+\kappa_2(x_2-0.5)^2$ is obtained. Taking $\kappa_1=\kappa_2=\frac12$ and letting $u_1=x_1+0.5$, $u_2=x_2-0.5$, (7.10.9) is turned into
$$\min\ \tfrac12 u_1+\tfrac12 u_2^2\qquad \text{s.t. } 4u_1^{-1}u_2^{-1}\le 1,\quad \frac{u_2}{1+\frac{u_1}{2}}\ge 1,\quad u_1,u_2>0.$$

(7.10.10)

By solving (7.10.10), its optimal solution is acquired as $(u_1,u_2)=(2,2)^T$, so by the substitutions $x_1=u_1-0.5$, $x_2=u_2+0.5$, the optimal solution to (7.10.9) is $(x_1,x_2)=(1.5,2.5)^T$. Certainly, a T-fuzzy optimal solution to (7.10.8) could be synthesized from it, but this is usually unnecessary in practice; the solution obtained is regarded as an approximate T-fuzzy optimal solution of (7.10.8).
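A numerical check of (7.10.10) by grid search (plain Python; the constraint reading $u_2\ge 1+u_1/2$ follows the reconstruction above and should be treated as an assumption):

```python
def solve_71010(steps=240, hi=6.0):
    """Grid search for min 0.5*u1 + 0.5*u2**2
    s.t. u1*u2 >= 4 and u2 >= 1 + u1/2, with u1, u2 > 0."""
    best = None
    for i in range(1, steps + 1):
        for j in range(1, steps + 1):
            u1, u2 = hi * i / steps, hi * j / steps
            if u1 * u2 >= 4 and u2 >= 1 + u1 / 2:
                val = 0.5 * u1 + 0.5 * u2 ** 2
                if best is None or val < best[0]:
                    best = (val, u1, u2)
    return best

val, u1, u2 = solve_71010()
print(val, u1, u2)        # minimum 3.0 at (u1, u2) = (2, 2)
x1, x2 = u1 - 0.5, u2 + 0.5   # back-substitution: (x1, x2) = (1.5, 2.5)
```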

8 Fuzzy Relation Equation and Its Optimizing

Since Sanchez, a famous fuzzy biomathematician, put forward the fuzzy relation equation in 1976 in an innovative investigation, scholars at home and abroad have developed many characteristic methods for it. In this chapter, solution-finding is introduced for the $(\vee,\wedge)$ and $(\vee,\cdot)$ fuzzy relation equations, together with their applications in business management. Meanwhile, a recently arising optimization problem in fuzzy relation equations is discussed, with fuzzy relation linear programming and fuzzy relation geometric programming put forward.

8.1 $(\vee,\wedge)$ Fuzzy Relation Equation

Suppose that $X,Y,W$ are finite sets, $A\in\mathcal M_{m\times n}$ is a fuzzy matrix, $b\in\mathcal M_{m\times 1}$, and the fuzzy variable $x\in\mathcal M_{n\times 1}$ is sought such that
$$A\circ x=b\qquad(8.1.1)$$
is satisfied, where "$\circ$" represents the composition operator $(\vee,\wedge)$. Record its solution set as $X(A,b)=\{x=(x_1,x_2,\cdots,x_n)^T\in R^n\,|\,A\circ x=b,\ x_i\in[0,1],\ i\in I\}$. It is easy to prove the following properties.

Proposition 8.1.1. If $x^i\in X$ for $i\in I$ ($I$ a non-empty index set), then $\bigvee_{i\in I}x^i\in X$.

Proposition 8.1.2. If $x^1\subseteq x^2\subseteq x^3$ and $x^1,x^3\in X$, then $x^2\in X$.

From Proposition 8.1.1 we know that if $X\ne\emptyset$, then a greatest element must exist in $X(A,b)$ (i.e., a greatest solution must exist for Equation (8.1.1)); it is obtained by taking the union of all elements of $X(A,b)$. For the sake of finding the greatest element of $X(A,b)$, Sanchez defined an operation:

B.-Y. Cao: Optimal Models & Meth. with Fuzzy Quantities, STUDFUZZ 248, pp. 255–292. © Springer-Verlag Berlin Heidelberg 2010, springerlink.com


$$a_{ij}\,\alpha\,b_i=\begin{cases}1,&\text{when } a_{ij}\le b_i,\\ b_i,&\text{when } a_{ij}>b_i,\end{cases}\qquad\forall a_{ij},b_i\in[0,1],\qquad(8.1.2)$$
and $\hat x=A^T\alpha\,b$ is $\hat x=(\hat x_1,\hat x_2,\cdots,\hat x_n)^T$, where
$$\hat x_j=\bigwedge_{i=1}^{m}(a_{ij}\,\alpha\,b_i)\quad(1\le j\le n).$$

Proposition 8.1.3 (Sanchez E.). $X\ne\emptyset\iff A\circ\hat x=b$; if $X\ne\emptyset$, then $\hat x$ is the greatest element of $X(A,b)$.

To calculate $\hat x$, the fuzzy extended matrix $(A|b)$ is listed and processed as follows:
$$(A|b)=\begin{pmatrix}
a_{11}&a_{12}&\cdots&a_{1n}&b_1\\
a_{21}&a_{22}&\cdots&a_{2n}&b_2\\
\vdots&\vdots&&\vdots&\vdots\\
a_{m1}&a_{m2}&\cdots&a_{mn}&b_m
\end{pmatrix}
\xrightarrow{\ a_{ij}\,\alpha\,b_i\ }
\begin{pmatrix}
a_{11}\,\alpha\,b_1&a_{12}\,\alpha\,b_1&\cdots&a_{1n}\,\alpha\,b_1\\
a_{21}\,\alpha\,b_2&a_{22}\,\alpha\,b_2&\cdots&a_{2n}\,\alpha\,b_2\\
\vdots&\vdots&&\vdots\\
a_{m1}\,\alpha\,b_m&a_{m2}\,\alpha\,b_m&\cdots&a_{mn}\,\alpha\,b_m
\end{pmatrix},$$
$$\hat x^T=\Bigl(\bigwedge_{i=1}^m(a_{i1}\,\alpha\,b_i)\ \ \bigwedge_{i=1}^m(a_{i2}\,\alpha\,b_i)\ \ \cdots\ \ \bigwedge_{i=1}^m(a_{in}\,\alpha\,b_i)\Bigr).$$

Here, $\longrightarrow$ denotes applying the "$\alpha$" operation to $a_{ij}$ with $b_i$.

Definition 8.1.1. If there exists $\hat x\in X(A,b)$ such that $x\le\hat x$ for all $x\in X(A,b)$, then $\hat x$ is called the greatest solution to (8.1.1). If there exists $\breve x\in X(A,b)$ such that $\breve x\le x$ for all $x\in X(A,b)$, then $\breve x$ is called the least solution to (8.1.1). And if there exists $\breve x\in X(A,b)$ such that $x\in X(A,b)$ and $x\le\breve x$ imply $x=\breve x$, then $\breve x$ is called a minimum solution to (8.1.1).

Example 8.1.1: Find the greatest solution to the fuzzy relation equations
$$\begin{pmatrix}
0.3&0.2&0.7&0.8\\
0.5&0.4&0.4&0.9\\
0.7&0.3&0.2&0.7\\
0.9&0.6&0.1&0.2\\
0.8&0.5&0.6&0.4
\end{pmatrix}\circ
\begin{pmatrix}x_1\\x_2\\x_3\\x_4\end{pmatrix}=
\begin{pmatrix}0.7\\0.4\\0.4\\0.3\\0.6\end{pmatrix}.\qquad(8.1.3)$$
Solution: Because
$$\begin{pmatrix}
0.3&0.2&0.7&0.8&\big|&0.7\\
0.5&0.4&0.4&0.9&\big|&0.4\\
0.7&0.3&0.2&0.7&\big|&0.4\\
0.9&0.6&0.1&0.2&\big|&0.3\\
0.8&0.5&0.6&0.4&\big|&0.6
\end{pmatrix}\longrightarrow
\begin{pmatrix}
1&1&1&0.7\\
0.4&1&1&0.4\\
0.4&1&1&0.4\\
0.3&0.3&1&1\\
0.6&1&1&1
\end{pmatrix},\qquad \hat x^T=(0.3,\,0.3,\,1,\,0.4),$$


it follows easily that $\hat x=(0.3,0.3,1,0.4)^T$ is a solution to (8.1.3), and it is the greatest solution.

Example 8.1.2: Judge whether the fuzzy relation equations
$$\begin{pmatrix}0.1&0.4\\0.2&0.1\end{pmatrix}\circ\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}0.2\\0.3\end{pmatrix}\qquad(8.1.4)$$
have a solution or not.

Solution:
$$\begin{pmatrix}0.1&0.4&\big|&0.2\\0.2&0.1&\big|&0.3\end{pmatrix}\longrightarrow\begin{pmatrix}1&0.2\\1&1\end{pmatrix},\qquad \hat x^T=(1,\,0.2).$$

Obviously, $\hat x=(1,0.2)^T$ does not satisfy (8.1.4), so (8.1.4) has no solution.

Proposition 8.1.3 determines whether the fuzzy relation equation (8.1.1) has solutions and how to find the greatest solution when it does. Generally, if (8.1.1) has a solution then, although a least solution need not exist, minimum solutions — minimal elements of $X(A,b)$ — certainly exist.

Definition 8.1.2. Suppose $\check x_0$ exists such that $x\in X(A,b)$ and $x\le\check x_0$ imply $x=\check x_0$; then $\check x_0$ is called a minimal element of $X(A,b)$.

Proposition 8.1.4 (Czogala E. et al.). If $x\in X(A,b)$, then $X(A,b)$ certainly contains a minimal element $\check x_0$ such that $\check x_0\le x\le\hat x$.

From Proposition 8.1.4 we know that once all minimum solutions to (8.1.1) are found, all solutions to (8.1.1) are determined. If $b=(0,0,\cdots,0)^T$, then (8.1.1) has the unique minimum solution $x=(0,0,\cdots,0)^T$, which is also the least solution. In what follows we suppose $b\ne(0,0,\cdots,0)^T$.

Definition 8.1.3. Suppose $\hat x=A^T\alpha\,b=(\hat x_1,\hat x_2,\cdots,\hat x_n)^T$, and define the $m\times n$ fuzzy matrix $D=(d_{ij})$ by
$$d_{ij}=\begin{cases}b_i,&\text{when } a_{ij}\wedge\hat x_j\ge b_i,\\ 0,&\text{otherwise}\end{cases}\qquad(1\le i\le m,\ 1\le j\le n);$$
then $D$ is called a distinguishing matrix of Equation (8.1.1), and $D\circ x=b$ a distinguishing equation of (8.1.1). We define another operator $\beta$:
$$a_{ij}\,\beta\,b_i=\begin{cases}b_i,&\text{when } a_{ij}\ge b_i,\\ 0,&\text{when } a_{ij}<b_i;\end{cases}$$
obviously $d_{ij}=(a_{ij}\wedge\hat x_j)\,\beta\,b_i$. Construction of the distinguishing matrix can also be carried out by arranging a matrix table:


$$(A|b)=\begin{pmatrix}
a_{11}&a_{12}&\cdots&a_{1n}&b_1\\
a_{21}&a_{22}&\cdots&a_{2n}&b_2\\
\cdots&\cdots&\cdots&\cdots&\cdots\\
a_{m1}&a_{m2}&\cdots&a_{mn}&b_m
\end{pmatrix}
\xrightarrow{\ (a_{ij}\wedge\hat x_j)\beta b_i\ }
\begin{pmatrix}
(a_{11}\wedge\hat x_1)\beta b_1&\cdots&(a_{1n}\wedge\hat x_n)\beta b_1\\
(a_{21}\wedge\hat x_1)\beta b_2&\cdots&(a_{2n}\wedge\hat x_n)\beta b_2\\
\cdots&\cdots&\cdots\\
(a_{m1}\wedge\hat x_1)\beta b_m&\cdots&(a_{mn}\wedge\hat x_n)\beta b_m
\end{pmatrix}=D,$$
where the columns of the first block are headed by $x_1,x_2,\cdots,x_n$.

Proposition 8.1.5. $X\ne\emptyset\iff$ each row of the distinguishing matrix $D$ has a nonzero element.

For Example 8.1.1, calculation gives
$$D=\begin{pmatrix}
0&0&0.7&0\\
0&0&0.4&0.4\\
0&0&0&0.4\\
0.3&0.3&0&0\\
0&0&0.6&0
\end{pmatrix}.$$
Obviously, each row of $D$ has a nonzero element, so by Proposition 8.1.5 a solution to (8.1.3) exists.

Proposition 8.1.6. Suppose $\check X_0(A,b)$ is the set of all minimal elements of $X(A,b)$; if $X^*(A,b)=\{x\,|\,D\circ x=b\}$ and $\check X_0^*(A,b)$ is the set of all minimal elements of $X^*(A,b)$, then $X^*(A,b)\supseteq X(A,b)$ and $\check X_0^*(A,b)=\check X_0(A,b)$.

By Proposition 8.1.6, to find the minimum solutions of $A\circ x=b$ we need only find those of the distinguishing equation $D\circ x=b$; since the nonzero elements of row $i$ of $D$ all equal $b_i$, this greatly simplifies the computation. Take one nonzero element in each row of $D$ and set zero in the remaining positions to obtain a transition matrix $D^{(i)}$. Take the maximum (supremum) of each column of this matrix to get an $n$-dimensional vector, whose transpose is called a quasi-minimum solution. Deleting repetitions and non-minimal ones among the quasi-minimum solutions, we obtain all the minimum solutions.

Four transition matrices $D^{(1)},D^{(2)},D^{(3)},D^{(4)}$ exist for the distinguishing matrix $D$ of Example 8.1.1:
$$D^{(1)}=\begin{pmatrix}
0&0&0.7&0\\
0&0&0.4&0\\
0&0&0&0.4\\
0.3&0&0&0\\
0&0&0.6&0
\end{pmatrix},\qquad
D^{(2)}=\begin{pmatrix}
0&0&0.7&0\\
0&0&0&0.4\\
0&0&0&0.4\\
0.3&0&0&0\\
0&0&0.6&0
\end{pmatrix},$$
with column maxima $(0.3,0,0.7,0.4)$ in both cases;


$$D^{(3)}=\begin{pmatrix}
0&0&0.7&0\\
0&0&0.4&0\\
0&0&0&0.4\\
0&0.3&0&0\\
0&0&0.6&0
\end{pmatrix},\qquad
D^{(4)}=\begin{pmatrix}
0&0&0.7&0\\
0&0&0&0.4\\
0&0&0&0.4\\
0&0.3&0&0\\
0&0&0.6&0
\end{pmatrix},$$
with column maxima $(0,0.3,0.7,0.4)$ in both cases.

The quasiminimum solutions to (8.1.2) are therefore

x1 = (0.3, 0, 0.7, 0.4)^T,   x2 = (0.3, 0, 0.7, 0.4)^T,
x3 = (0, 0.3, 0.7, 0.4)^T,   x4 = (0, 0.3, 0.7, 0.4)^T,

where x2 and x4 are repetitions and should be deleted, so x1 and x3 are the minimum solutions to Equation (8.1.3). In general, if the i-th row of D has n_i nonzero elements, we get n1 · n2 ··· nm transition matrices and hence n1 · n2 ··· nm quasiminimum solutions, of which the genuine minimum solutions are only a part. To avoid such invalid labor, we put forward the following effective method for finding the minimum solutions.

On a universe Y, an alternation of some nonempty fuzzy sets is called a fuzzy set chain, a chain for short, written Ã1 ∗ Ã2 ∗ ··· ∗ Ãp, where Ãi ∈ F(Y), Ãi ≠ φ (1 ≤ i ≤ p). Every Ãi is called an item of the chain, and the order of the items is unimportant; that is, if (i1, i2, ..., ip) is an arrangement of (1, 2, ..., p), then Ãi1 ∗ Ãi2 ∗ ··· ∗ Ãip = Ã1 ∗ Ã2 ∗ ··· ∗ Ãp.

If there exists Ãi (1 ≤ i ≤ p − 1) in the chain Ã1 ∗ Ã2 ∗ ··· ∗ Ãp such that Ãi ⊆ Ãp, then Ãp can be eliminated from the chain, i.e., Ã1 ∗ Ã2 ∗ ··· ∗ Ãp = Ã1 ∗ Ã2 ∗ ··· ∗ Ãp−1; we call this the elimination principle. Continuous application of this expunction rule changes the chain Ã1 ∗ Ã2 ∗ ··· ∗ Ãp into a chain in which no item contains another, called a reduced chain. At p = 1, the fuzzy set Ã1 itself is a chain with only one item, and obviously a reduced chain.

Suppose Ã1 ∗ Ã2 ∗ ··· ∗ Ãp and B̃1 ∗ B̃2 ∗ ··· ∗ B̃q are any two chains; then their "union" operation is defined as

(Ã1 ∗ Ã2 ∗ ··· ∗ Ãp) ∪ (B̃1 ∗ B̃2 ∗ ··· ∗ B̃q)
= (Ã1 ∪ B̃1) ∗ (Ã2 ∪ B̃1) ∗ ··· ∗ (Ãp ∪ B̃1)
∗ (Ã1 ∪ B̃2) ∗ (Ã2 ∪ B̃2) ∗ ··· ∗ (Ãp ∪ B̃2) ∗ ···
∗ (Ã1 ∪ B̃q) ∗ (Ã2 ∪ B̃q) ∗ ··· ∗ (Ãp ∪ B̃q).

Thus the union of two chains is still a chain; when p = q = 1, the "union" of chains degenerates into the "union" of fuzzy sets.

It is easily verified that the "union" of chains satisfies the commutative, associative and idempotent laws, and it can be expanded to unions of several chains.

In order to apply the concept of a fuzzy chain to the minimum solutions of the distinguishing equation D ∘ x = b, we consider any row (d_i1, d_i2, ..., d_in) (1 ≤ i ≤ m) of the distinguishing matrix D. If all of its nonzero elements are d_ij1, d_ij2, ..., d_ijk, then there exists a unique chain

d_ij1/x_j1 ∗ d_ij2/x_j2 ∗ ··· ∗ d_ijk/x_jk

corresponding to it; each item of this chain is a single-point fuzzy set. We call it the concomitant chain of the i-th row of D. The following can be proved.

Proposition 8.1.7. The items of the reduced chain of the union of the concomitant chains of all rows of the distinguishing matrix D are exactly all the minimum solutions to Equation (8.1.1).

Find the concomitant chain of each row of D in Example 8.1.1:

row (0, 0, 0.7, 0)    →  0.7/x3 = Ã1,
row (0, 0, 0.4, 0.4)  →  0.4/x3 ∗ 0.4/x4 = Ã2,
row (0, 0, 0, 0.4)    →  0.4/x4 = Ã3,
row (0.3, 0.3, 0, 0)  →  0.3/x1 ∗ 0.3/x2 = Ã4,
row (0, 0, 0.6, 0)    →  0.6/x3 = Ã5.

Take the union of the concomitant chains, then change it into a reduced chain according to the elimination principle:

P = Ã1 ∪ Ã2 ∪ Ã3 ∪ Ã4 ∪ Ã5
  = (0.7/x3) ∪ (0.4/x3 ∗ 0.4/x4) ∪ (0.4/x4) ∪ (0.3/x1 ∗ 0.3/x2) ∪ (0.6/x3).
       I            II                III          IV                 V

Since 0.4/x3 ⊆ 0.7/x3 and 0.6/x3 ⊆ 0.7/x3, the union of I, II, III and V reduces to the single-item chain (0.7/x3 ∪ 0.4/x4); x3 and x4 do not appear in IV, so

P = (0.7/x3 ∪ 0.4/x4) ∪ (0.3/x1 ∗ 0.3/x2)
  = (0.3/x1 ∪ 0.7/x3 ∪ 0.4/x4) ∗ (0.3/x2 ∪ 0.7/x3 ∪ 0.4/x4).

This reduced chain has two items: one is the minimum solution (0.3, 0, 0.7, 0.4)^T, and the other is the minimum solution (0, 0.3, 0.7, 0.4)^T. The greatest solution, however, is (0.3, 0.3, 1, 0.4)^T.

In synthesis, the solution set of Equation (8.1.2) is

X(A, b) = { (0.3, [0, 0.3], [0.7, 1], 0.4)^T } ∪ { ([0, 0.3], 0.3, [0.7, 1], 0.4)^T }.

A fuzzy relation equation A ∘ x = b abstracted from an actual problem may have no solution. In that case we can regard x̂ = A^T α b as its approximate solution: when the equation does have a solution, x̂ is the greatest solution to A ∘ x = b, so taking x = x̂ gives the best approximation from above to the equation A ∘ x = b. Even when the equation has a solution, it generally contains many solutions. How to choose an appropriate one from the numerous solutions must be decided according to the demands of the actual problem and to experience, and remains to be studied in the future.
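The greatest-solution computation and the solvability check of this section can be sketched in a few lines of Python. This is an illustration, not the book's program, and the 2×2 system below is hypothetical (it is not Example 8.1.1):

```python
# Sketch (not the book's code) of the greatest solution of the max-min
# equation A o x = b, where (A o x)_i = max_j min(a_ij, x_j), using the
# standard alpha operator: a alpha b = 1 if a <= b, else b.

def alpha(a, b):
    return 1.0 if a <= b else b

def greatest_solution(A, b):
    m, n = len(A), len(A[0])
    # x_hat_j = min_i (a_ij alpha b_i)
    return [min(alpha(A[i][j], b[i]) for i in range(m)) for j in range(n)]

def compose(A, x):                     # max-min composition A o x
    return [max(min(a, v) for a, v in zip(row, x)) for row in A]

# Hypothetical 2x2 system (illustrative only):
A = [[0.8, 0.3],
     [0.4, 0.6]]
b = [0.5, 0.4]
x_hat = greatest_solution(A, b)
print(x_hat)                   # [0.5, 0.4]
print(compose(A, x_hat) == b)  # True, so the system is solvable
```

When `compose(A, x_hat)` fails to reproduce b, no solution exists at all, which is exactly the criterion used throughout this section.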

8.2 (∨, ·) Fuzzy Relative Equation

8.2.1 Introduction

Let X = {x1, x2, ..., xp} and Y = {y1, y2, ..., yq} (p ≤ n, q ≤ m) be finite universes, and let A ∈ M_{m×n}, x ∈ M_{n×1}, b ∈ M_{m×1} be fuzzy matrices. Consider the generalized fuzzy relative equation

A ∘ x = b,   (8.2.1)

which we call a (∨, ·) fuzzy relative equation, where "∘" represents the max-product operation, that is, the operator (∨, ·),

    ( Ã1^(1)(x1)  Ã1^(2)(x1)  ···  Ã1^(n)(x1) )
A = (    ···          ···     ···      ···    )   (8.2.2)
    ( Ãm^(1)(xm)  Ãm^(2)(xm)  ···  Ãm^(n)(xm) )

x = (x1, x2, ..., xn)^T, b = (b1, b2, ..., bm)^T, and "T" represents transpose. In this section we first study the solutions of (8.2.1) theoretically; then, through practical examples, we obtain the degree to which each factor influences the economic benefits of commercial enterprises, which supplies a practical background for the application of Equation (8.2.1).

8.2.2 Solubility of (∨, ·) Fuzzy Relative Equations and a Theorem for the Greatest Solution

For convenience of discussion, write the matrix elements in (8.2.2) as

Ãi^(j)(xi) = a_ij  (1 ≤ i ≤ m, 1 ≤ j ≤ n).


Then we need only discuss fuzzy relation equations of the form

( a11  a12  ···  a1n )   ( x1 )   ( b1 )
( ···  ···  ···  ··· ) ∘ ( ·· ) = ( ·· )   (8.2.3)
( am1  am2  ···  amn )   ( xn )   ( bm )

where the compound operation "∘" is the (∨, ·) composition, i.e.,

∨_{1≤j≤n} (a_ij · x_j) = b_i  (i ≤ m),

and we record its solution set as X(A, b) = {x = (x1, x2, ..., xn)^T ∈ R^n | A ∘ x = b}.

Definition 8.2.1. Define

a_ij −1 b_i = b_i / a_ij, if a_ij > b_i;  a_ij −1 b_i = 1, if a_ij ≤ b_i,  ∀a_ij, b_i ∈ [0, 1],   (8.2.4)

where "−1" is an operator defined on [0, 1]. And let

K_j = ∧_{i=1}^{m} (a_ij −1 b_i)  (j ≤ n).   (8.2.5)

Then x̂ = (K1, K2, ..., Kn)^T is the greatest element in X(A, b).

Proposition 8.2.1. If a, b, c ∈ [0, 1], then b ≤ c ⇒ a −1 b ≤ a −1 c.

Proof: If a ≤ b, then a ≤ c, so from (8.2.4), a −1 b = a −1 c = 1. If b < a ≤ c, then a −1 b = b/a ≤ 1 = a −1 c. If a > c, then a > b as well, hence a −1 b = b/a ≤ c/a = a −1 c. In every case a −1 b ≤ a −1 c.

Corollary 8.2.1. a −1 (b ∨ c) ≥ a −1 c.

Proposition 8.2.2. a · (a −1 b) = a ∧ b and a −1 (a · b) ≥ b.

Proof: 1° If a > b, then a −1 b = b/a, so a · (a −1 b) = b; if a ≤ b, then a −1 b = 1, so a · (a −1 b) = a · 1 = a. So

a · (a −1 b) = a ∧ b.

2° When a > ab, a −1 (a · b) = ab/a = b; when a ≤ ab, a −1 (a · b) = 1 ≥ b. Then

a −1 (a · b) ≥ b.
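Definition 8.2.1 can be checked numerically. The sketch below is an assumed implementation, not the book's code; it computes x̂ = (K1, ..., Kn) for a small hypothetical max-product system and verifies A ∘ x̂ = b:

```python
# Sketch (not the book's code) of Definition 8.2.1: the greatest element
# x_hat = (K_1, ..., K_n) of the max-product equation A o x = b, where
# (A o x)_i = max_j (a_ij * x_j).

def inv(a, b):                     # the "-1" operator of (8.2.4)
    return b / a if a > b else 1.0

def greatest(A, b):
    m, n = len(A), len(A[0])
    # K_j = min_i (a_ij -1 b_i), per (8.2.5)
    return [min(inv(A[i][j], b[i]) for i in range(m)) for j in range(n)]

def compose(A, x):                 # max-product composition
    return [max(a * v for a, v in zip(row, x)) for row in A]

# Hypothetical 2x2 system (illustrative, not from the text):
A = [[0.8, 0.4],
     [0.5, 1.0]]
b = [0.4, 0.5]
K = greatest(A, b)
print(K)                   # [0.5, 0.5]
print(compose(A, K) == b)  # True
```

As Proposition 8.2.2 guarantees, a_ij · (a_ij −1 b_i) never exceeds b_i, which is why the composed value can only fall short of b when the system is unsolvable.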

Theorem 8.2.1. The fuzzy relative Equations (8.2.3) have a solution x = (x1, x2, ..., xn)^T if and only if a_ij · x_j ≤ b_i (i ≤ m, j ≤ n) and, for each i, there exists j_i such that a_{i j_i} · x_{j_i} = b_i.

Proof: Sufficiency is evident, since then ∨_j (a_ij · x_j) = b_i for each i. Now we prove the necessity. If x = (x1, ..., xn)^T is a solution to (8.2.3), then

a_ij · x_j ≤ b_i  (i ≤ m, j ≤ n).   (8.2.6)

Otherwise, if there exist i, j such that a_ij · x_j > b_i, then (a_i1 · x_1) ∨ ··· ∨ (a_ij · x_j) ∨ ··· ∨ (a_in · x_n) > b_i, a contradiction; therefore (8.2.6) holds. At the same time, for each i ≤ m there must exist j_i such that a_{i j_i} · x_{j_i} = b_i; otherwise, by (8.2.6), a_ij · x_j < b_i for every j, so (a_i1 · x_1) ∨ (a_i2 · x_2) ∨ ··· ∨ (a_in · x_n) < b_i, contradicting the fact that x solves (8.2.3).

In practical applications (8.2.3) may have no solution, but a small alteration ε of A or δ of b can always be made so that

A_ε ∘ x = b  or  A ∘ x = b_δ

has a solution. So in the following, (8.2.3) is always assumed to have a solution.

If b in (8.2.3) is arranged in standardized order, then b'_1 ≥ b'_2 ≥ ··· ≥ b'_m (or b'_1 ≤ b'_2 ≤ ··· ≤ b'_m). For short, we let b_i still stand for b'_i and (a_ij) for (a'_ij) correspondingly.

Theorem 8.2.2. If (8.2.3) has a solution, i.e., X(A, b) ≠ φ, then x̂ is its greatest solution.


Proof: Since X(A, b) ≠ φ, take any x = (x1, x2, ..., xn)^T ∈ X(A, b).

First, x ≤ x̂. For each j, b_i = ∨_{k=1}^{n} (a_ik · x_k) ≥ a_ij · x_j, so by Corollary 8.2.1 and Proposition 8.2.2,

K_j = ∧_{i=1}^{m} (a_ij −1 b_i) ≥ ∧_{i=1}^{m} [a_ij −1 (a_ij · x_j)] ≥ x_j  (1 ≤ j ≤ n).

Second, x̂ ∈ X(A, b). For each i, by Proposition 8.2.2,

∨_{j=1}^{n} (a_ij · K_j) ≤ ∨_{j=1}^{n} [a_ij · (a_ij −1 b_i)] = ∨_{j=1}^{n} (a_ij ∧ b_i) ≤ b_i,

while, since x ≤ x̂, ∨_{j=1}^{n} (a_ij · K_j) ≥ ∨_{j=1}^{n} (a_ij · x_j) = b_i. Hence A ∘ x̂ = b, and x ≤ x̂ for every x ∈ X(A, b), i.e., x̂ is the greatest solution.

Corollary 8.2.2. If x ∘ R = b has a solution, then x̂^T is its greatest solution.

Proof: Because x ∘ R = b ⟺ R^T ∘ x^T = b^T, Theorem 8.2.2 gives x^T ≤ R^T −1 b^T = x̂ and R^T ∘ x̂ = b^T, i.e., x ≤ x̂^T and x̂^T ∘ R = b.

So the solution introduced above is also suitable for the inverse problem of comprehensive decision.

Definition 8.2.2. Stipulate

a*_ij = a_ij β^{-1} b_i = K_j, if a_ij · K_j = b_i;  a*_ij = 0, otherwise.   (8.2.7)

If K_j is defined by Definition 8.2.1, it is impossible that a_ij · K_j > b_i. Then we can give a definition equivalent to Definition 8.2.2.

Definition 8.2.3. Stipulate

a*_ij = a_ij β^{-1} b_i = K_j, if a_ij · K_j = b_i;  a*_ij = 0, if a_ij · K_j < b_i.   (8.2.8)

Definition 8.2.4. The matrix A* = (a*_ij), whose nonzero elements are elements of the greatest solution x̂, is called the matrix of chosen solutions of (8.2.3), and the set of the elements of each row of A* is called a row element set, written

A*_i = (a*_i1/x1 + a*_i2/x2 + ··· + a*_in/xn)  (1 ≤ i ≤ m).

Definition 8.2.5. Stipulate an operator P:

P((r_i/x_i) · (r_j/x_j)) = r_i/x_i, if ∃ i = j; otherwise, all multiplications of the sums are expanded.

Proposition 8.2.3. P ∏_{1≤i≤m} A*_i ⟺ x*, where x* = ∪_i x*_i = Σ_i (r_i/x_i), and each r_i is one of the K_j (1 ≤ j ≤ n).

Proof: From Definition 8.2.5 and by the laws of set operations, it is easy to obtain

P ∏_{1≤i≤m} A*_i ⟺ Σ_i (r_i/x_i).

The elements a*_ij = 0 are omitted in the course of the P operation, and the repeated removable nonzero elements a*_ij are rejected in the application of the absorptive law and the like; hence every reserved r_i is one of the K_j. Obviously, over the a*_ij > 0 we have x̂ = ∪_{1≤i≤m} A*_i; rejecting the repeated removable elements in x̂, we obtain x*_j.

Theorem 8.2.3. Let X(A, b) ≠ φ. Then the vector x̌*_j ∈ X(A, b) formed with components K_j where a_ij · K_j = b_i, and 0 elsewhere, is a minimum solution to (8.2.3).

Proof: As X(A, b) ≠ φ, we have {j | a_ij · K_j = b_i} ≠ φ (i ≤ m). Hence, at A ∘ x* = (b_i; 1 ≤ i ≤ m), we know from Definition 8.2.5 that

b_i = ∨_{j=1}^{n} (a_ij · r_j) ⟺_P ∨_{j=1}^{n} (a_ij · K_j) = b_i  (i ≤ m),

where ⟺_P denotes equivalence under the operator P. So x* is a solution to (8.2.3); we show it is a minimum solution. Otherwise, there is another x*' ∈ X(A, b) with x*' ≤ x* and some (i0, j0) such that r'_{j0} < r_{j0}; then

∨_{j=1}^{n} (a_{i0 j} · r'_j) = a_{i0 j0} · r'_{j0} < a_{i0 j0} · r_{j0} = ∨_{j=1}^{n} (a_{i0 j} · r_j) = b_{i0},

a contradiction. Hence x̌*_j is a minimum solution to (8.2.3).


Theorem 8.2.4. X(A, b) ≠ φ ⟺ each row of A* has at least one nonzero element.

Proof: "⇒" If X(A, b) ≠ φ, then from Theorem 8.2.1, a_ij · x_j ≤ b_i (1 ≤ i ≤ m; 1 ≤ j ≤ n), and for each i there exists j_i such that a_{i j_i} · x_{j_i} = b_i. Since x_{j_i} ≤ K_{j_i}, we get b_i = a_{i j_i} · x_{j_i} ≤ a_{i j_i} · K_{j_i} ≤ b_i, so that a_{i j_i} · K_{j_i} = b_i (1 ≤ i ≤ m). From Definition 8.2.3, each row of A* then has at least one nonzero element.

"⇐" If there exists at least one nonzero element a*_ij = K_j in each row of A*, we might as well let a*_{i j0} = K_{j0} ≠ 0 (1 ≤ i ≤ m), while the other a*_ij = 0. From Definition 8.2.3, for each i, a_{i j0} · K_{j0} = b_i and a_ij · K_j < b_i (j ≠ j0). Obviously this satisfies the sufficient condition in Theorem 8.2.1, hence (8.2.3) has a solution.

If we represent the minimum solutions and the greatest solution by x̌*_j (1 ≤ j ≤ n) and x̂, respectively, then the general solution of (8.2.3) is

X(A, b) = ∪_j {x | x̌*_j ≤ x ≤ x̂}.

Obviously x̂ is unique, but x̌*_j may not be.

8.2.3 Conclusion

We have studied the existence of solutions to fuzzy relation equations in (∨, ·) and the theorems for the greatest and minimum solutions, and have given a short cut to the solution. At the same time, with these relation equations we solve for the factors influencing the economic benefits of an enterprise of commerce, the result of which tallies with practice basically.
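The machinery of Definitions 8.2.1-8.2.5 can be sketched in code. The following is an assumed implementation, not the book's program; brute-force domination filtering stands in for the absorptive-law reduction of the P operator, and the sketch is run on the store system solved in Section 8.3.1 below:

```python
# Sketch (assumed implementation, not the book's code) of the scheme of
# Definitions 8.2.1-8.2.5: greatest solution K, matrix of chosen
# solutions A*, solvability test (Theorem 8.2.4), and minimum solutions
# obtained by choosing one nonzero entry per row of A* and discarding
# dominated candidates.
from itertools import product

def inv(a, b):                       # the "-1" operator of (8.2.4)
    return b / a if a > b else 1.0

def min_solutions(A, b, tol=1e-9):
    m, n = len(A), len(A[0])
    K = [min(inv(A[i][j], b[i]) for i in range(m)) for j in range(n)]
    # row i of A* keeps K_j exactly where a_ij * K_j = b_i  (Def. 8.2.3)
    rows = [[j for j in range(n) if abs(A[i][j] * K[j] - b[i]) < tol]
            for i in range(m)]
    if any(not r for r in rows):     # Theorem 8.2.4: no solution
        return K, []
    cands = set()
    for choice in product(*rows):    # one chosen column index per row
        x = [0.0] * n
        for j in choice:
            x[j] = K[j]
        cands.add(tuple(x))
    # keep only the non-dominated (minimum) candidates
    mins = [x for x in cands
            if not any(y != x and all(u <= v for u, v in zip(y, x))
                       for y in cands)]
    return K, mins

# System solved in Section 8.3.1:
A = [[0.6,  1,    0.93, 0.85,   0.12],
     [0.3,  0.54, 0.22, 4e-6,   0   ],
     [0.56, 1,    0.78, 0.61,   0.77],
     [0,    0.14, 0.24, 2.3e-7, 0   ],
     [0.99, 0.7,  0.27, 1.2e-4, 0.38]]
b = [0.782, 0.378, 0.7, 0.2, 0.49]
K, mins = min_solutions(A, b)
print([round(k, 2) for k in K])   # [0.49, 0.7, 0.83, 0.92, 0.91]
print([tuple(round(v, 2) for v in x) for x in mins])
```

Rounded to two decimals this reproduces the greatest solution x̂ = (0.49, 0.7, 0.83, 0.92, 0.91) and the unique minimum solution (0, 0.7, 0.83, 0.92, 0) found in Section 8.3.1.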

8.3 Algorithm Application and Comparing in (∨, ·) Relative Equations

In this section, the (∨, ·) fuzzy relative equations are applied to the analysis of a shop's profit, and after solving practical examples the algorithm above is compared with the one in [LF99], obtaining a better result.

8.3.1 Application in Business Management

Section 8.2 gives a practical way to solve (8.2.1); its calculation steps are now explored by an example. Table 8.3.1 shows the commodities bought and sold in 1984 by five stores in the suburb of a city in China:


Table 8.3.1. The Commodity Bought-Sold Table

        x1     x2     x3     x4    x5   x3/x2    x4/x2     R1
  y1   1285   550    25      20    65   0.045    0.036    490
  y2    600   250    14     -0.8   91   0.056   -0.0032   262
  y3    680   408    17      10    82   0.042    0.025    401
  y4    472   438.6  21.5   -3.1  106   0.049   -0.0071   480
  y5    660   367.5  19.8    0.8   72   0.054    0.002    378

By the statistical material, the evaluation items are x1 for purchase, x2 for sale, x3 for expense, x4 for benefit (unit: ten thousand yuan), and x5 for fund turnover (unit: days), and the membership functions of the economic-benefit distributions are as follows.

(1) Let Ã1 be a fuzzy subset representing the benefit of commodity purchase on the universe U = R⁺ = [0, +∞). Its membership function is

μÃ1(x1) = 0,                                 0 ≤ x1 < h3·x2,
μÃ1(x1) = (x1 − h3·x2) / ((h1 − h3)·x2),     h3·x2 ≤ x1 ≤ h1·x2,
μÃ1(x1) = 1,                                 h1·x2 < x1 < h2·x2,
μÃ1(x1) = (x1 − h4·x2) / ((h2 − h4)·x2),     h2·x2 ≤ x1 ≤ h4·x2,
μÃ1(x1) = 0,                                 h4·x2 < x1 < +∞.

Fig. 8.3.1. Figure of Membership Function in Ã1 (a trapezoid rising on [h3·x2, h1·x2] and falling on [h2·x2, h4·x2])

where h1, h2, h3 and h4 are constants.

(2) Let Ã2 be a fuzzy subset representing commodity sale profits on the universe U = R⁺. Its membership function is

μÃ2(x2) = 1,                       R1 < x2 < +∞,
μÃ2(x2) = (x2 − R2) / (R1 − R2),   R2 ≤ x2 ≤ R1,
μÃ2(x2) = 0,                       0 < x2 < R2.

Fig. 8.3.2. Figure of Membership Function in Ã2

Here R2 and R1 represent the upper limits of the poorest sales and the best ones, respectively.

(3) Let Ã3 be a fuzzy subset representing purchase expense on the universe U = R⁺. Its membership function is

μÃ3(x3) = 1 / (1 + 1000·(x3/x2 − h·x4/x2)²),  x3 ∈ R⁺.

Fig. 8.3.3. Figure of Membership Function in Ã3 (peak value 1 at x3 = h·x4)

Here, h denotes the ratio of retail profit to purchase expense at the best benefit.

(4) Let Ã4 be a fuzzy subset representing commodity retail profits on the universe U = R⁺. Its membership function is

μÃ4(x4) = e^{−(x4/x2 · 100% − m)²},  x4 ∈ R⁺.

Fig. 8.3.4. Figure of Membership Function in Ã4 (peak value 1 at the profit rate x4/x2 = m%)

where the constant m is the profit rate at the best retail benefit.

(5) Let Ã5 be a fuzzy subset representing the fund-turnover benefit on the universe U = (0, 365). Its membership function is

μÃ5(x5) = 1,                       0 ≤ x5 < n1,
μÃ5(x5) = (x5 − n1) / (n2 − n1),   n1 ≤ x5 ≤ n2,
μÃ5(x5) = 0,                       n2 < x5 ≤ 365.

Fig. 8.3.5. Figure of Membership Function in Ã5

where n1 and n2 denote the upper limit of turnover days at good benefit and the lower limit at bad benefit, respectively. According to the statistical material, it is proper to select h1 = 1.8, h2 = 2.2, h3 = 1.5, h4 = 2.5, h = 1, m = 3.2, n1 = 62, n2 = 88, R2 = 90% · R1 (R1 being the 1983 sales volume). The evaluation by experts of the economic benefits of the five stores is

b = (0.782, 0.378, 0.7, 0.2, 0.49)^T,

the components corresponding to y1, ..., y5.
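The five membership functions translate directly into code. The sketch below is illustrative, not the book's program; the squared deviation in μÃ3 and the percent scaling in μÃ4 are reconstructions from the formulas and figures, and the parameters are the values selected above:

```python
# Illustrative sketch of the membership functions of Section 8.3.1
# (not the book's program). The square in mu_A3 and the percent scaling
# in mu_A4 are reconstructions; parameters are those selected in the text.
import math

h1, h2, h3, h4 = 1.8, 2.2, 1.5, 2.5
h, m = 1.0, 3.2
n1, n2 = 62, 88

def mu_A1(x1, x2):
    """Purchase benefit: trapezoid in the ratio x1/x2."""
    if x1 < h3 * x2:
        return 0.0
    if x1 <= h1 * x2:
        return (x1 - h3 * x2) / ((h1 - h3) * x2)
    if x1 < h2 * x2:
        return 1.0
    if x1 <= h4 * x2:
        return (x1 - h4 * x2) / ((h2 - h4) * x2)
    return 0.0

def mu_A2(x2, R1):
    """Sales benefit; R2 = 90% of the previous year's sales R1."""
    R2 = 0.9 * R1
    if x2 > R1:
        return 1.0
    if x2 < R2:
        return 0.0
    return (x2 - R2) / (R1 - R2)

def mu_A3(x3, x2, x4):
    """Expense benefit, peaking where x3 = h * x4 (square assumed)."""
    return 1.0 / (1.0 + 1000.0 * (x3 / x2 - h * x4 / x2) ** 2)

def mu_A4(x4, x2):
    """Retail-profit benefit, peaking at a profit rate of m percent."""
    return math.exp(-(100.0 * x4 / x2 - m) ** 2)

def mu_A5(x5):
    """Fund-turnover benefit, as printed in the text."""
    if x5 < n1:
        return 1.0
    if x5 > n2:
        return 0.0
    return (x5 - n1) / (n2 - n1)

print(mu_A1(2.0, 1.0))   # 1.0 (ratio 2.0 lies in the plateau (h1, h2))
print(mu_A5(75))         # 0.5 (halfway between n1 = 62 and n2 = 88)
```

Evaluating these functions at the data of Table 8.3.1 yields, up to rounding, the coefficient matrix A of the fuzzy relative equations below.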


Now we try to determine the influencing degree of each individual index on the whole economic benefit. Let the influencing factor be x = (x1, x2, x3, x4, x5)^T, and substitute the data of Table 8.3.1 and the parameters above into (1)-(5) separately. Then calculate A; the fuzzy relative equations corresponding to (8.2.1) are as follows:

( 0.6   1     0.93  0.85       0.12 )   ( x1 )   ( 0.782 )
( 0.3   0.54  0.22  4 × 10⁻⁶   0    )   ( x2 )   ( 0.378 )
( 0.56  1     0.78  0.61       0.77 ) ∘ ( x3 ) = ( 0.7   )
( 0     0.14  0.24  2.3 × 10⁻⁷ 0    )   ( x4 )   ( 0.2   )
( 0.99  0.7   0.27  1.2 × 10⁻⁴ 0.38 )   ( x5 )   ( 0.49  )

1° Make the augmented matrix (A|b), and arrange it in standardized (descending) order of b:

( 0.6   1     0.93  0.85       0.12 | 0.782 )
( 0.56  1     0.78  0.61       0.77 | 0.7   )
( 0.99  0.7   0.27  1.2 × 10⁻⁴ 0.38 | 0.49  )
( 0.3   0.54  0.22  4 × 10⁻⁶   0    | 0.378 )
( 0     0.14  0.24  2.3 × 10⁻⁷ 0    | 0.2   )

2° From (8.2.4) and (8.2.5) we obtain the matrix (a_ij −1 b_i), with the column minima K_j below:

( 1     0.782  0.84  0.92  1    | 0.782 )
( 1     0.7    0.9   1     0.91 | 0.7   )
( 0.49  0.7    1     1     1    | 0.49  )
( 1     0.7    1     1     1    | 0.378 )
( 1     1      0.83  1     1    | 0.2   )
K_j:  0.49   0.7   0.83  0.92  0.91

3° From (8.2.8), A* is obtained as follows:

       x1    x2   x3    x4    x5
     ( 0     0    0     0.92  0    )
     ( 0     0.7  0     0     0.91 )
A* = ( 0.49  0.7  0     0     0    )
     ( 0     0.7  0     0     0    )
     ( 0     0    0.83  0     0    )

By the criterion of Theorem 8.2.4, the equations have a solution.

4° Calculate

Ã*1 Ã*2 Ã*3 Ã*4 Ã*5
= (0.92/x4) · (0.7/x2 + 0.91/x5) · (0.49/x1 + 0.7/x2) · (0.7/x2) · (0.83/x3)
→P (0.92/x4) · (0.7/x2 + 0.91/x5) · (0.7/x2) · (0.83/x3)
= (0.92/x4) · (0.7/x2) · (0.83/x3).


Therefore the relative equations have a unique minimum solution, which is thus also the least solution, x̌*1 = (0, 0.7, 0.83, 0.92, 0)^T, while the greatest solution is x̂ = (0.49, 0.7, 0.83, 0.92, 0.91)^T, and the solution set is

X(A, b) = ([0, 0.49], 0.7, 0.83, 0.92, [0, 0.91])^T.   (8.3.1)

Note 8.3.1. If we come across r_i = r_j in the application of the absorptive law, we have r_i/x1 + (r_j/x2) · (r_i/x1) = r_i/x1.

From (8.3.1), profit influences the economic benefit of the stores most strongly: its relative degree is the highest and its flexible room is very small. Sale and expense also correlate closely with economic benefit. Though large sales and low expense are needed, attention must be paid to a suitable balance of purchase and sale and to an appropriate ratio of expense to profit. Purchase and fund turnover can vary freely in [0, 0.49] and [0, 0.91], respectively. In fact, the larger the sale, the larger the profit; but enlarging purchase occupies running funds, which slows fund turnover and in turn affects sale; and if expense is too low, regular purchase and sale are affected, sale decreases, and profit is cut down. Besides, the influence of purchase and fund turnover on the stores' economic benefit is not noticeable, which shows that the calculation tallies theoretically with practical regularity. If a unique influence factor is wanted, the middle points of the interval numbers in (8.3.1) may be taken, giving x = (0.25, 0.7, 0.83, 0.92, 0.45)^T.

8.3.2 Comparison in Algorithm

In Section 8.2 we gave a short-cut algorithm for (∨, ·) fuzzy relation equations. We now solve the example of [LF99] by the method above: x^T ∘ A = b, with x = (x1, ..., x8)^T,

    ( 0.8  0.6  0.5  0.2  0.6  0.9 )
    ( 0.6  0.3  0.8  0.4  0.2  0.9 )
    ( 0.2  0.7  0.7  0.5  0.5  0.8 )
A = ( 0.4  0.6  0.4  0.1  0.5  0.2 )
    ( 0.2  0.1  0.7  0.3  0.1  0.8 )
    ( 0.7  0.3  0.8  0.5  0.4  0.6 )
    ( 0.7  0.5  0.3  0.8  0.7  0.1 )
    ( 0.5  0.3  0.8  0.4  0.2  0.4 ),

b = (0.56, 0.42, 0.64, 0.4, 0.42, 0.72)^T.   (8.3.2)

Step 1. Arrange the extended matrix (x | A) of the equation in standardized order, from large to small in b_j, applying the operations (8.2.4) and (8.2.5); we obtain

(x | A) → (x | A'), where the columns are arranged by b' = (0.72, 0.64, 0.56, 0.42, 0.42, 0.4), the entries are a'_ij −1 b'_j, and K_i = ∧_j (a'_ij −1 b'_j) is in the last column:

( 0.8  1     0.7   0.7   0.7   1   | 0.7 )
( 0.8  0.8   0.93  1     1     1   | 0.8 )
( 0.9  0.91  1     0.6   0.84  0.8 | 0.6 )
( 1    1     1     0.7   0.84  1   | 0.7 )
( 0.9  0.91  1     1     1     1   | 0.9 )
( 1    0.8   0.8   1     1     0.8 | 0.8 )
( 1    1     0.8   0.84  0.6   0.5 | 0.5 )
( 1    0.8   1     1     1     1   | 0.8 )      (8.3.3)

This yields the greatest solution to (8.3.2): x̂ = (0.7, 0.8, 0.6, 0.7, 0.9, 0.8, 0.5, 0.8).

Step 2. Transform (8.3.3) by means of (8.2.7):

     x1 ( 0    0    0.7  0.7  0.7  0   )
     x2 ( 0.8  0.8  0    0    0    0   )
     x3 ( 0    0    0    0.6  0    0   )
A* = x4 ( 0    0    0    0.7  0    0   )
     x5 ( 0.9  0    0    0    0    0   )
     x6 ( 0    0.8  0.8  0    0    0.8 )
     x7 ( 0    0    0    0    0    0.5 )
     x8 ( 0    0.8  0    0    0    0   )

Deciding with Theorem 8.2.4 on the matrix above, we know (8.3.2) has a solution.

Step 3. Calculate

P ∏_{j=1}^{6} A*_j
= (0.8/x2 + 0.9/x5) · (0.8/x2 + 0.8/x6 + 0.8/x8) · (0.7/x1 + 0.8/x6) · (0.7/x1 + 0.6/x3 + 0.7/x4) · (0.7/x1) · (0.8/x6 + 0.5/x7)
→P (0.8/x2 + 0.9/x5) · (0.8/x2 + 0.8/x6 + 0.8/x8) · (0.7/x1) · (0.8/x6 + 0.5/x7)
= (0.7/x1)(0.8/x2)(0.8/x6) + (0.7/x1)(0.8/x2)(0.5/x7) + (0.7/x1)(0.9/x5)(0.8/x6)
+ (0.7/x1)(0.8/x2)(0.9/x5)(0.5/x7) + (0.7/x1)(0.9/x5)(0.8/x6)(0.5/x7) + (0.7/x1)(0.9/x5)(0.5/x7)(0.8/x8),

where P denotes the operator of Definition 8.2.5, applied with the absorptive law and the like, the elements a*_ij = 0 in A* being omitted. We thus obtain 6 minimum solutions, arranged in Table 8.3.2:


Table 8.3.2. Complete Set of Minimal Solutions

Minimal solution   Value
x*1   (0.7, 0.8, 0, 0, 0, 0.8, 0, 0)
x*2   (0.7, 0.8, 0, 0, 0, 0, 0.5, 0)
x*3   (0.7, 0, 0, 0, 0.9, 0.8, 0, 0)
x*4   (0.7, 0.8, 0, 0, 0.9, 0, 0.5, 0)
x*5   (0.7, 0, 0, 0, 0.9, 0.8, 0.5, 0)
x*6   (0.7, 0, 0, 0, 0.9, 0, 0.5, 0.8)

Comparison of algorithms.
1. Solutions. Solving the example of [LF99] by the calculation of [Cao87b], we obtain all of the minimum solutions to Equations (8.3.2), two more than in [LF99], namely x*4 and x*5, which indeed solve (8.3.2) and are also minimum solutions.
2. Simplification. Only three steps are required in [Cao87b] instead of four, which is simpler than the calculation in [LF99], where six steps are needed.

8.4 Lattice Linear Programming with (∨, ·) Operator

8.4.1 Introduction

Compared with regular programming problems, optimization subject to fuzzy relations in (∨, ·), i.e., max-product composition, together with an objective function of the same kind, has a very different nature. According to Refs. [BF98][LF01a,b][WZSL91] and [HK84], when the solution set of the fuzzy relation equations is not empty, it is completely determined by a unique maximum solution and a finite number of minimal solutions. Because the solution set is non-convex, traditional programming methods, such as the simplex algorithm, become useless. In this section we study the characteristics of the optimal solutions of the optimization problems

max Z = c ∘ x^T
s.t. x ∘ A = b, 0 ≤ x_i ≤ 1 (1 ≤ i ≤ m),   (8.4.1)

and

min Z = c ∘ x^T
s.t. x ∘ A = b, 0 ≤ x_i ≤ 1 (1 ≤ i ≤ m),   (8.4.2)

where "∘" denotes the (∨, ·) composition, A = (a_ij) (0 ≤ a_ij ≤ 1, 1 ≤ i ≤ m, 1 ≤ j ≤ n) is an (m × n)-dimensional fuzzy matrix, b = (b1, b2, ..., bn) (0 ≤ b_j ≤ 1) is an n-dimensional constant vector, and c = (c1, c2, ..., cm) (0 ≤ c_i ≤ 1) is


an m-dimensional constant vector, x = (x1, x2, ..., xm) is an m-dimensional variable vector, i ∈ I = {1, 2, ..., m} and j ∈ J = {1, 2, ..., n}. We call (8.4.1) and (8.4.2) fuzzy relation linear programming with the (∨, ·) operator. We first build the min-max method, then give a step-by-step algorithm for solving (8.4.2), and finally present an example to illustrate the algorithm.

8.4.2 Characteristic of Optimal Solutions

The feasible domains of Problems (8.4.1) and (8.4.2) are the solution set of fuzzy relation equations. We consider the following fuzzy relation equation:

x ∘ A = b,

(8.4.3)

that is, we try to find a solution vector x = (x1, ..., xm), with 0 ≤ x_i ≤ 1, such that

∨_{i=1}^{m} (x_i · a_ij) = b_j  (1 ≤ j ≤ n).   (8.4.4)

X(A, b) = {x = (x1, x2, ..., xm) ∈ R^m | x ∘ A = b, x_i ∈ [0, 1], i ∈ I} denotes the solution set of (8.4.3), following Ref. [LF01a,b]. Let X = {x ∈ R^m | x = (x1, x2, ..., xm), 0 ≤ x_i ≤ 1, ∀i ∈ I}. We say x¹ ≤ x² if and only if x¹_i ≤ x²_i, ∀i ∈ I, for x¹, x² ∈ X. In this way the relation "≤" forms a partial order on X, and (X, ≤) becomes a lattice.

Definition 8.4.1. If ∃ x̂ ∈ X(A, b) such that x ≤ x̂, ∀x ∈ X(A, b), then x̂ is called a greatest solution to (8.4.3). If ∃ x̆ ∈ X(A, b) such that x̆ ≤ x, ∀x ∈ X(A, b), then x̆ is called a minimal solution to (8.4.3). And if ∃ x̆ ∈ X(A, b) such that x ≤ x̆ implies x = x̆, then x̆ is called a minimum solution to (8.4.3).

If X(A, b) ≠ φ, it can be completely determined by a unique maximum solution and a finite number of minimum solutions [BF98][LF01a,b]. The maximum solution can be obtained by applying the operation

x̄ = A −1 b = [∧_{j=1}^{n} (a_ij −1 b_j)]_{i∈I},   (8.4.5)

where

a_ij −1 b_j = 1, if a_ij ≤ b_j;  a_ij −1 b_j = b_j / a_ij, if a_ij > b_j.   (8.4.6)

We denote the set of all minimum solutions by X̌(A, b); then the solution set of (8.4.3) is obtained as

X(A, b) = ∪_{x̌ ∈ X̌(A, b)} {x ∈ X | x̌ ≤ x ≤ x̄}.   (8.4.7)


Theorem 8.4.1. If X(A, b) ≠ φ, then x̄ is an optimal solution to (8.4.1).

Proof: When X(A, b) ≠ φ, note that 0 ≤ x ≤ x̄, ∀x ∈ X(A, b), i.e., 0 ≤ x_i ≤ x̄_i, ∀i ∈ I. Therefore 0 ≤ c_i x_i ≤ c_i x̄_i, ∀i ∈ I, then 0 ≤ c ∘ x^T ≤ c ∘ x̄^T, and x̄ is an optimal solution to (8.4.1).

Theorem 8.4.2. If X(A, b) ≠ φ, then one of the minimum solutions is an optimal solution to (8.4.2).

Proof: According to (8.4.7), for every x ∈ X(A, b) there exists x̌0 ∈ X̌(A, b) such that x̌0 ≤ x ≤ x̄; hence x̌0i ≤ x_i ≤ x̄_i, ∀i ∈ I, so c_i x̌0i ≤ c_i x_i ≤ c_i x̄_i, ∀i ∈ I, and therefore c ∘ x̌0^T ≤ c ∘ x^T ≤ c ∘ x̄^T.

We choose x̌* such that c ∘ x̌*^T = min{c ∘ x̌^T | x̌ ∈ X̌(A, b)}; then c ∘ x̌*^T ≤ c ∘ x^T ≤ c ∘ x̄^T, ∀x ∈ X(A, b). So the minimum solution x̌* ∈ X̌(A, b) is an optimal solution to (8.4.2).

8.4.3 Method to the Optimal Solution

According to Theorem 8.4.1 and (8.4.5), generating an optimal solution to (8.4.1) is not a problem. Since a fuzzy relation equation has a finite number of minimum solutions and the procedure for finding them is not easy, solving (8.4.2) is very difficult. Here we build a min-max method to find an optimal solution to (8.4.2).

A. Characterization of the feasible domain [LF01a,b]

Lemma 8.4.1. If x ∈ X(A, b), then for each j ∈ J there exists i0 ∈ I such that x_{i0} a_{i0 j} = b_j, and x_i a_ij ≤ b_j, ∀i ∈ I.

When X(A, b) ≠ φ, we define

and

Ij = {i ∈ I | x̄i · aij = bj }, ∀j ∈ J

(8.4.8)

Λ = I1 × I2 × ··· × In.

(8.4.9)

Hence I_j is an index set, and f = (f1, f2, ..., fn) ∈ Λ if and only if f_j ∈ I_j, ∀j ∈ J. By the definition of I_j and Lemma 8.4.1, we can easily see the following results.

Lemma 8.4.2. If X(A, b) ≠ φ, then I_j ≠ φ, ∀j ∈ J.

Lemma 8.4.3. If X(A, b) ≠ φ, then Λ ≠ φ.

In order to study X(A, b) in terms of the elements f ∈ Λ, we define

J_f^i = {j ∈ J | f_j = i}, i ∈ I

(8.4.10)


and F: Λ → R^m such that

F_i(f) = max_{j ∈ J_f^i} (b_j / a_ij), if J_f^i ≠ φ;  F_i(f) = 0, if J_f^i = φ,   ∀i ∈ I.   (8.4.11)

Then we can see the relationship between X(A, b) and F(Λ) = {F(f) | f ∈ Λ}.

Theorem 8.4.3. Given that X(A, b) ≠ φ:
(1) If f ∈ Λ, then F(f) ∈ X(A, b).
(2) For any x ∈ X(A, b), there exists f ∈ Λ such that F(f) ≤ x. [BF98]

Corollary 8.4.1. X̌(A, b) ⊆ F(Λ) ⊆ X(A, b).

B. Min-max method

According to Corollary 8.4.1, F(Λ) ⊆ X(A, b): every group of values in F(Λ) belongs to the solution set of the fuzzy relation equation. On the other hand, X̌(A, b) ⊆ F(Λ), hence every minimum solution of the fuzzy relation equation corresponds to a group of values of F(Λ). Therefore, solving (8.4.2) becomes equivalent to finding an f* ∈ Λ such that

∨_{i=1}^{m} (c_i F_i(f*)) = min_{f ∈ Λ} { ∨_{i=1}^{m} (c_i F_i(f)) }.   (8.4.12)

The min-max method is given below.

Step 1. Choose f_j ∈ I_j, ∀j ∈ J, such that

c_{f_j} · b_j / a_{f_j j} = min_{f_j ∈ I_j} c_{f_j} · b_j / a_{f_j j}.   (8.4.13)

We define I'_j as the index set of all the corresponding f_j satisfying (8.4.13); I'_j includes only the indexes satisfying (8.4.13), and obviously I'_j ⊆ I_j.

Step 2. Let Λ1 = I'1 × ··· × I'n. Obviously, Λ1 ⊆ Λ.

Step 3. Choose f_j ∈ I'_j so that f = (f1, ..., fn) ∈ Λ1; then, according to (8.4.11), we can construct a solution:

x*_i = max_{f_j = i} (b_j / a_{f_j j}), if ∃ j with f_j = i;  x*_i = 0, if f_j ≠ i for all j   (1 ≤ j ≤ n).   (8.4.14)

Theorem 8.4.4. Let f = (f1, ..., fn) and f' = (f'1, ..., f'n) ∈ Λ1. According to (8.4.14), let x* and x*' be computed from f and f', and let Z* and Z*' be the objective values corresponding to x* and x*', respectively. Then Z* = Z*'.

8.4 Lattice Linear Programming with ( , ·) Operator

Suppose

277

Z ∗ = ci0 x∗i0 , Z ∗ = ci1 x∗i1 ,

where x∗i0 = max

fj =i0

bj bj bj0 bj1 = (fj = i0 ), x∗i1 = max = (fj1 = afj j afj0 j0 0 fj =i1 af j a fj1 j1 j

i1 ). bj0

If Z ∗ < Z ∗ , i.e., cfj

0

fj1 , fj1 fj

∈

Ij1

and cfj

afj

0

bj1

j0

< cfj

1

bj1 afj j1

bj1

, for j1 , there exists fj1 , such that

1

= cfj . 1 a afj j1 fj j1 1 1 = i2 (1 i2 m). If there does not exist fj (j = j1 ), such that 1

Suppose fj1 = i2 , then

bj0

Z ∗ = cfj

0

a

fj j0 0

cfj

1

bj1 a

fj j1 1

> cfj

0

bj0 afj

0

= Z ∗ ,

j0

contradiction. bj bj1 If there exists fj (j = j1 ), such that fj = i2 , then x∗i2 = max . fj =i2 af j afj j1 j 1 Therefore, Z ∗ = cfj

0

bj0 bj1 bj0 cfj > cfj = Z ∗ , 1 a 0 a afj j0 fj j1 fj j0 0

1

0

contradiction. So, Z ∗ Z ∗ . By a similar argument, we can show that Z ∗ Z ∗ . Then, Z ∗ = Z ∗ and the proof is complete. Theorem 8.4.5. If X(A, b) = φ and x∗ is deﬁned according to (8.4.14), then x∗ is an optimal solution to (8.4.2). Proof: For any f = (f1 , · · · , fn ) ∈ Λ, f = (f1 , · · · , fn ) ∈ Λ1 such that fj , fj ∈ Ij , ∀j ∈ J. Let feasible solutions x1 and x∗ correspond to f and f , values of objective Z 1 and Z ∗ correspond to x1 and x∗ , respectively. Based on min-max method, we have cfj

bj bj cfj (1 j n). afj j afj j

Suppose Z ∗ = ci0 x∗i0 , Z 1 = ci1 x1i1 , where x∗i0 = max

fj =i0

bj bj1 = (fj = i1 ). fj =i1 afj j afj1 j1 1 bj0 bj1 If Z ∗ > Z 1 , then cfj > cfj1 . 0 a afj1 j1 fj j0

i0 ), x1i1 = max

0

bj bj0 = (f = afj j afj j0 j0 0

278

8 Fuzzy Relative Equation and Its Optimizing

bj0 bj0 cfj0 . afj j0 afj0 j0 0 Let fj0 = i2 (1 i2 m). If there does not exist fj (j = j0 ) such that fj = i2 , then For j0 , there exists fj0 such that fj0 , fj0 ∈ Ij0 and cfj

0

Z 1 cfj0

bj0 bj0 bj1 cfj > cfj1 = Z 1, 0 a afj0 j0 a fj1 j1 fj j0 0

contradiction. bj bj0 . If there exists fj (j = j1 ) such that fj = i2 , then x1i2 = max fj =i2 afj j afj0 j0 Therefore, Z 1 cfj0

bj0 bj0 bj1 cfj > cfj1 = Z 1, 0 a afj0 j0 a fj1 j1 fj j0 0

contradiction. So, Z ∗ Z 1 and the proof is complete. Based on Corollary 8.4.1, because f ∈ Λ1 ⊂ Λ, then x∗ = (x∗1 , · · · , x∗m ) is a feasible solution to (8.4.2). Theorem 8.4.4 and Theorem 8.4.5 show that x∗ is an optimal solution to (8.4.2). This method is called a min-max method. C. Algorithm Based on mini-max method, we advance an algorithm to an optimal solution to (8.4.2). Step the greatest solution to (8.4.3), i.e., compute x = A −1 n 1. Compute −1 b = [ j=1 (aij bj )]i∈I , according to (8.4.5). Step 2. Check feasibility. If x ◦ A = b, continue. Otherwise, stop. Step 3. Compute index sets Ij , ∀j ∈ J, according to (8.4.8). Step 4. Compute index sets Ij , ∀j ∈ J, according to (8.4.13). Step 5. Deﬁne Λ1 = I1 × · · · × In . Step 6. We choose any f ∈ Λ1 , then, compute an optimal solution x∗ according to (8.4.14), and obtain an optimal value Z ∗ . 8.4.4 Numerical Example Consider the following optimization problem: min Z = 0.4x1 ∨ 0.5x2 ∨ 0.3x3 ∨ 0.6x4 ∨ 0.8x5 ∨ 0.6x6 ∨ 0.7x7 ∨ 0.9x8 ∨ 0.5x9 ∨ 0.7x10 s.t. x ◦ A = b, 0 xi 1(i = 1, · · · , 10),

8.4 Lattice Linear Programming with (∨, ·) Operator

where

        ⎛ 0.6 0.2 0.5 0.3 0.7 0.5 0.2 0.8 ⎞
        ⎜ 0.5 0.6 0.9 0.5 0.8 0.9 0.3 0.8 ⎟
        ⎜ 0.1 0.9 0.4 0.7 0.5 0.7 0.4 0.7 ⎟
        ⎜ 0.1 0.6 0.2 0.5 0.4 0.1 0.7 0.5 ⎟
    A = ⎜ 0.3 0.8 0.8 0.8 0.8 0.5 0.5 0.8 ⎟ ,
        ⎜ 0.8 0.4 0.1 0.1 0.2 0.8 0.8 0.3 ⎟
        ⎜ 0.4 0.5 0.4 0.8 0.4 0.7 0.3 0.4 ⎟
        ⎜ 0.6 0.3 0.4 0.3 0.1 0.2 0.5 0.7 ⎟
        ⎜ 0.2 0.5 0.7 0.4 0.9 0.9 0.7 0.2 ⎟
        ⎝ 0.1 0.3 0.6 0.6 0.6 0.4 0.4 0.8 ⎠

b = (0.48, 0.56, 0.72, 0.56, 0.64, 0.72, 0.42, 0.64), x = (x_1, x_2, · · · , x_{10}).

Solution: Because I = {1, · · · , 10} and J = {1, · · · , 8}:

Step 1. The greatest solution to the problem is x̂ = A⁻¹ ∘ b = (0.8, 0.8, 0.622, 0.6, 0.7, 0.525, 0.7, 0.8, 0.6, 0.8).
Step 2. Since x̂ ∘ A = b, we know X(A, b) ≠ φ.
Step 3. Compute the index sets: I_1 = {1, 8}, I_2 = {3, 5}, I_3 = {2}, I_4 = {5, 7}, I_5 = {2}, I_6 = {2}, I_7 = {4, 6, 9}, I_8 = {1, 2, 10}.
Step 4. Because I_3, I_5, I_6 each have only one element, I'_3 = I_3 = {2}, I'_5 = I_5 = {2}, I'_6 = I_6 = {2}. Therefore we compute the index sets I'_1, I'_2, I'_4, I'_7, I'_8.
For I_1 = {1, 8}, according to (8.4.13), min(c_1 b_1/a_{11}, c_8 b_1/a_{81}) = min(0.4 × 0.48/0.6, 0.9 × 0.48/0.6) = min(0.32, 0.72) = 0.32 = c_1 b_1/a_{11}. Therefore I'_1 = {1}.
For I_2 = {3, 5}, according to (8.4.13), min(c_3 b_2/a_{32}, c_5 b_2/a_{52}) = min(0.3 × 0.56/0.9, 0.8 × 0.56/0.8) = 0.3 × 0.56/0.9 = c_3 b_2/a_{32}. Therefore I'_2 = {3}.
By a similar method, we can compute I'_4 = {7}, I'_7 = {9}, I'_8 = {1}.
Step 5. Λ_1 = I'_1 × · · · × I'_8 = {1} × {3} × {2} × {7} × {2} × {2} × {9} × {1}.
Step 6. According to Λ_1 and (8.4.14), f = (1, 3, 2, 7, 2, 2, 9, 1). Because there does not exist j ∈ J such that f_j = 4, 5, 6, 8 or 10, we have x*_4 = x*_5 = x*_6 = x*_8 = x*_{10} = 0. Since f_1 = f_8 = 1, x*_1 = max{b_1/a_{11}, b_8/a_{18}} = max{0.48/0.6, 0.64/0.8} = 0.8. By a similar method, we can compute x*_2 = 0.8, x*_3 = 0.622, x*_7 = 0.7, x*_9 = 0.6.



Therefore, an optimal solution is x* = (0.8, 0.8, 0.622, 0, 0, 0, 0.7, 0, 0.6, 0) and the optimal value is Z* = 0.49.

8.4.5 Conclusion

In this section, we have built a min-max method for finding an optimal solution to latticized linear programming based on (∨, ·) composition.
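The six steps above, run on this example, fit in a short verification script (a sketch assumed by this edition, not part of the original text; indices are 0-based):

```python
# Min-max method for  min Z = max_i c_i*x_i  s.t. x ∘ A = b  with (∨, ·)
# composition, on the data of the numerical example above.
A = [[0.6,0.2,0.5,0.3,0.7,0.5,0.2,0.8],
     [0.5,0.6,0.9,0.5,0.8,0.9,0.3,0.8],
     [0.1,0.9,0.4,0.7,0.5,0.7,0.4,0.7],
     [0.1,0.6,0.2,0.5,0.4,0.1,0.7,0.5],
     [0.3,0.8,0.8,0.8,0.8,0.5,0.5,0.8],
     [0.8,0.4,0.1,0.1,0.2,0.8,0.8,0.3],
     [0.4,0.5,0.4,0.8,0.4,0.7,0.3,0.4],
     [0.6,0.3,0.4,0.3,0.1,0.2,0.5,0.7],
     [0.2,0.5,0.7,0.4,0.9,0.9,0.7,0.2],
     [0.1,0.3,0.6,0.6,0.6,0.4,0.4,0.8]]
b = [0.48,0.56,0.72,0.56,0.64,0.72,0.42,0.64]
c = [0.4,0.5,0.3,0.6,0.8,0.6,0.7,0.9,0.5,0.7]
m, n = len(A), len(b)
EPS = 1e-9

def compose(x):          # (x ∘ A)_j = max_i x_i * a_ij
    return [max(x[i]*A[i][j] for i in range(m)) for j in range(n)]

# Step 1: greatest solution, with a⁻¹b = b/a if a > b, else 1
x_hat = [min(b[j]/A[i][j] if A[i][j] > b[j] else 1.0 for j in range(n))
         for i in range(m)]
# Step 2: feasibility check
assert all(abs(u - v) < EPS for u, v in zip(compose(x_hat), b))
# Steps 3-4: I_j = {i : x̂_i·a_ij = b_j}; keep a minimizer of c_i·b_j/a_ij
I = [[i for i in range(m) if abs(x_hat[i]*A[i][j] - b[j]) < EPS]
     for j in range(n)]
f = [min(I[j], key=lambda i, j=j: c[i]*b[j]/A[i][j]) for j in range(n)]
# Steps 5-6: build x* from f and evaluate Z*
x_star = [0.0]*m
for j in range(n):
    x_star[f[j]] = max(x_star[f[j]], b[j]/A[f[j]][j])
Z_star = max(c[i]*x_star[i] for i in range(m))
print(x_star, Z_star)    # matches x* and Z* = 0.49 above
```

The same script doubles as a feasibility check for x*, since compose(x_star) must reproduce b.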

8.5

Fuzzy Relation Geometric Programming with (∨, ∧) Operator

8.5.1 Introduction

We call

    min f(x) = (c_1 ∧ x_1^{γ_1}) ∨ (c_2 ∧ x_2^{γ_2}) ∨ · · · ∨ (c_m ∧ x_m^{γ_m})   (8.5.1)
    s.t. x ∘ A = b,   (a)
         0 ≤ x_i ≤ 1 (1 ≤ i ≤ m)

a (∨, ∧) (max-min) fuzzy relation geometric programming, where A = (a_ij) (0 ≤ a_ij ≤ 1, 1 ≤ i ≤ m, 1 ≤ j ≤ n) is an (m × n)-dimensional fuzzy matrix, x = (x_1, x_2, · · · , x_m) an m-dimensional variable vector, c = (c_1, c_2, · · · , c_m) (c_i ≥ 0) an m-dimensional constant vector, b = (b_1, b_2, · · · , b_n) (0 ≤ b_j ≤ 1) an n-dimensional constant vector, γ_i an arbitrary real number, and the composition operator "∘" is (∨, ∧), i.e.,

    ∨_{i=1}^{m} (x_i ∧ a_ij) = b_j  (1 ≤ j ≤ n).

Without loss of generality, suppose 1 ≥ b_1 > b_2 > · · · > b_n > 0. Since fuzzy relation geometric programming is widely applied in engineering optimization design, modernization of management, and technological economic analysis, it is significant to solve such a programming.

8.5.2 Structure of Solution Set on Model

Since the feasible domain of (8.5.1) is the solution set of (8.5.1)(a), solving (8.5.1)(a) is very important for the optimization of (8.5.1). We now explain the structure of the solution set of (8.5.1)(a).

Definition 8.5.1. [Luo89] If there exists a solution to (8.5.1)(a), it is called compatible.

Suppose that X(A, b) = {(x_1, x_2, · · · , x_m) ∈ R^m | x ∘ A = b, 0 ≤ x_i ≤ 1} is the whole solution set of (8.5.1)(a). ∀x¹, x² ∈ X(A, b), we define x¹ ≤ x² ⇔ x¹_i ≤ x²_i (1 ≤ i ≤ m); such a "≤" is a partial order relation on X(A, b).

Definition 8.5.2. Similarly to Definition 8.4.1, we define x̂ ∈ X(A, b) to be a greatest solution, x̌ a minimal solution, and x̆ a minimum solution to (8.5.1)(a).



Let

    x̂_i = ∧{b_j | b_j < a_ij}  (1 ≤ i ≤ m, 1 ≤ j ≤ n),   (8.5.2)

where we stipulate that ∧Ø = 1. If x̂ = (x̂_1, x̂_2, · · · , x̂_m) is a solution to (8.5.1)(a), we can easily prove that x̂ must be a greatest solution to (8.5.1)(a). For the greatest solution to (8.5.1)(a), we have the following lemma.

Lemma 8.5.1. [Pre81] x ∘ A = b is compatible if and only if x̂ ∘ A = b, and then x̂ is a greatest solution.

Proof: The sufficiency is evident, and now we prove the necessity. If x is a solution to x ∘ A = b, then

    ∨_{i=1}^{m} (x_i ∧ a_ij) = b_j  (1 ≤ j ≤ n),

so ∀k, j, we have x_k ∧ a_kj ≤ b_j. Let k be fixed. If a_kj ≤ b_j, then 0 ≤ x_k ≤ 1; if a_kj > b_j, then 0 ≤ x_k ≤ b_j. According to the stipulation ∧Ø = 1, we have

    x_k ≤ ∧{b_j | b_j < a_kj} = x̂_k,

i.e., x ≤ x̂. Further, suppose b_j < a_kj; since x̂_k = ∧{b_j | b_j < a_kj} ≤ b_j, hence x̂_k ∧ a_kj ≤ b_j. Suppose b_j ≥ a_kj; then x̂_k ∧ a_kj ≤ a_kj ≤ b_j. So we have

    ∨_{i=1}^{m} (x̂_i ∧ a_ij) ≤ b_j,

i.e., x̂ ∘ A ≤ b. Since x ≤ x̂, then

    b = x ∘ A ≤ x̂ ∘ A ≤ b.

Hence, x̂ ∘ A = b.

Corollary 8.5.1. [San76] If X(A, b) ≠ Ø, then x̂ ∈ X(A, b).

In terms of the minimal solution to (8.5.1)(a), Ref. [San76] has provided a sufficient and necessary condition, but it is difficult to satisfy, so, generally speaking, a minimal solution need not exist in X(A, b). This increases the difficulty of solving (8.5.1)(a). Since X(A, b) is a partially ordered set under "≤", its minimum elements exist. Because the minimum solution is what is usually sought in practical problems, we pay more attention to the minimum solution to (8.5.1)(a). For a minimum element of X(A, b), we have the following lemma.

Lemma 8.5.2. If X(A, b) ≠ Ø, then a minimum element must exist in X(A, b). If X(A, b) has a minimum element, its numbers usually are not



unique. If we denote the set of all minimum elements by X̆(A, b), then the solution set of (8.5.1)(a) can be expressed as

    X(A, b) = ∪_{x̆ ∈ X̆(A,b)} {x | x̆ ≤ x ≤ x̂}.   (8.5.3)

We can clearly see from Formula (8.5.3) that the structure of the solution set of (8.5.1)(a) is determined by X̆(A, b), so solving (8.5.1)(a) amounts to finding X̆(A, b). Now we introduce the method by which a minimum solution is found via a conservative path.

Definition 8.5.3. Matrix C = (c_ij)_{m×n} is called a characteristic matrix of A, where c_ij = 1 if b_j ≤ a_ij and c_ij = 0 if b_j > a_ij.

Obviously, the characteristic matrix is a Boolean one. Let G_j = {i | c_ij = 1, 1 ≤ i ≤ m} (1 ≤ j ≤ n), and G = G_1 × G_2 × · · · × G_n. If x^g_i = ∨{b_j | K_j = i} (1 ≤ i ≤ m) for g = (K_1, K_2, · · · , K_n) ∈ G, where we stipulate ∨Ø = 0, then x^g = (x^g_1, x^g_2, · · · , x^g_m) is a solution to (8.5.1)(a); x^g is called a quasi-minimum solution to (8.5.1)(a). We denote the set of all quasi-minimum solutions of (8.5.1)(a) by X̃(A, b). Now we introduce how to choose X̆(A, b) from X̃(A, b).

Definition 8.5.4. Let C be a Boolean matrix; a sequence p = (p(1), p(2), · · · , p(n)) ∈ G is called a path of C.

Definition 8.5.5. [WZSL91] p_C = (p(1), p(2), · · · , p(n)) ∈ G is called a conservative path of C if, for every k ∈ {2, 3, · · · , n} with {p(1), p(2), · · · , p(k − 1)} ∩ G_k ≠ Ø, we have p(k) = p(i), where p(i) is the element among p(1), · · · , p(k − 1) that first comes into G_k. At n = 1, every path of C is a conservative one.

We denote the set of all conservative paths of C by W_C(C); then we have the following lemma.

Lemma 8.5.3. (1) The minimum solutions to x ∘ A = b are in one-to-one correspondence with the elements of W_C(C). (2) x ∘ A = b is compatible ⇔ G ≠ Ø.

For the proof, see [WZSL91]. By Lemma 8.5.2 and Lemma 8.5.3,

    X(A, b) = ∪_{p_C ∈ W_C(C)} {x | x^{p_C} ≤ x ≤ x̂},

where x^{p_C} is the minimum solution corresponding to the conservative path p_C.
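Formula (8.5.2) and the compatibility criterion of Lemma 8.5.1 are easy to machine-check; the sketch below uses a small illustrative A and b assumed for this sketch (not data from the text):

```python
# Greatest solution x̂_i = ∧{b_j : b_j < a_ij} (with ∧Ø = 1) for x ∘ A = b
# under (∨, ∧) composition; by Lemma 8.5.1 the equation is compatible
# if and only if x̂ ∘ A = b.
A = [[0.5, 0.7],      # small illustrative data
     [0.9, 0.3]]
b = [0.9, 0.5]
m, n = len(A), len(b)

def greatest_solution(A, b):
    return [min([b[j] for j in range(n) if b[j] < A[i][j]], default=1.0)
            for i in range(m)]

def compose(x, A):    # (x ∘ A)_j = ∨_i (x_i ∧ a_ij)
    return [max(min(x[i], A[i][j]) for i in range(m)) for j in range(n)]

x_hat = greatest_solution(A, b)
compatible = compose(x_hat, A) == b
print(x_hat, compatible)   # [0.5, 1.0] True
```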



According to Definition 8.5.5 and Lemma 8.5.3, we can get the following filtration rule for conservative paths.

Rule 8.5.1. (Filtration rule of conservative paths) Let j_0 (1 ≤ j_0 ≤ n) be fixed.
1) For j_0 = 1, K_1 is taken to be each element of G_1 in turn.
2) For j < j_0, suppose that K_j has been selected from G_j; then K_{j_0} is chosen as follows.
   1° If G*_{j_0} = {K_1, · · · , K_{j_0−1}} ∩ G_{j_0} ≠ Ø, then K_{j_0} is the K_j among K_1, · · · , K_{j_0−1} that first comes into G*_{j_0}.
   2° If G*_{j_0} = Ø, then K_{j_0} is taken to be each element of G_{j_0} in turn.
3) Each g = (K_1, K_2, · · · , K_n) selected according to 1) and 2) is a conservative path, and then x^g must be a minimum solution.

8.5.3 Solution on Model

Let us consider the objective function

    f(x) = (c_1 ∧ x_1^{γ_1}) ∨ (c_2 ∧ x_2^{γ_2}) ∨ · · · ∨ (c_m ∧ x_m^{γ_m}).   (8.5.4)

The optimum value of f(x) is closely related to the exponent γ_i of each item x_i (1 ≤ i ≤ m). We now discuss (8.5.1) through the following three cases.

Lemma 8.5.4. If γ_i < 0 (1 ≤ i ≤ m), then the greatest solution x̂ to (8.5.1)(a) is an optimum solution to (8.5.1).

Proof: Since γ_i < 0 (1 ≤ i ≤ m), then

    d(x_i^{γ_i})/dx_i = γ_i x_i^{γ_i − 1} ≤ 0

for each x_i with 0 ≤ x_i ≤ 1. Hence x_i^{γ_i} is a monotone decreasing function of x_i, and so is c_i ∧ x_i^{γ_i}. Moreover, ∀x ∈ X(A, b), since x ≤ x̂, we have

    c_i ∧ x_i^{γ_i} ≥ c_i ∧ x̂_i^{γ_i}  (1 ≤ i ≤ m),

such that f(x) ≥ f(x̂), so x̂ is an optimum solution to (8.5.1).

Lemma 8.5.5. If γ_i ≥ 0 (1 ≤ i ≤ m), then a certain minimum solution x̆ to (8.5.1)(a) is an optimum solution to (8.5.1).

Proof: Since γ_i ≥ 0 (1 ≤ i ≤ m), then

    d(x_i^{γ_i})/dx_i = γ_i x_i^{γ_i − 1} ≥ 0



for each x_i with 0 ≤ x_i ≤ 1. Therefore x_i^{γ_i} is a monotone increasing function of x_i, and so is c_i ∧ x_i^{γ_i}. Moreover, ∀x ∈ X(A, b), according to (8.5.3) there exists x̆ ∈ X̆(A, b) such that x ≥ x̆, i.e., x_i ≥ x̆_i, so

    c_i ∧ x_i^{γ_i} ≥ c_i ∧ x̆_i^{γ_i}  (1 ≤ i ≤ m),

and then f(x) ≥ f(x̆); i.e., an optimum solution to (8.5.1) must exist in X̆(A, b). Let f(x̆*) = min{f(x̆) | x̆ ∈ X̆(A, b)}. Then ∀x ∈ X(A, b), f(x) ≥ f(x̆*), so x̆* is an optimum solution to (8.5.1). Here x̆* ∈ X̆(A, b).

As for the general situation, in (8.5.4) the exponent γ_i (1 ≤ i ≤ m) of each item x_i may be either negative or nonnegative. Let R_1 = {i | γ_i < 0, 1 ≤ i ≤ m}, R_2 = {i | γ_i ≥ 0, 1 ≤ i ≤ m}. Then R_1 ∩ R_2 = Ø and R_1 ∪ R_2 = I, where I = {1, 2, · · · , m}. Let

    f_1(x) = ∨_{i∈R_1} (c_i ∧ x_i^{γ_i}),  f_2(x) = ∨_{i∈R_2} (c_i ∧ x_i^{γ_i}).

Then f(x) = f_1(x) ∨ f_2(x). Therefore, we have the following two optimization models:

    min f_1(x)
    s.t. x ∘ A = b,   (8.5.5)
         0 ≤ x_i ≤ 1 (1 ≤ i ≤ m)

and

    min f_2(x)
    s.t. x ∘ A = b,   (8.5.6)
         0 ≤ x_i ≤ 1 (1 ≤ i ≤ m).

By Lemma 8.5.4, x̂ is an optimum solution to (8.5.5). By Lemma 8.5.5, there exists x̆* ∈ X̆(A, b) that is an optimum solution to (8.5.6). Let

    x*_i = x̂_i for i ∈ R_1,  x*_i = x̆*_i for i ∈ R_2.

Then we have the theorem as follows.

Theorem 8.5.1. If each exponent γ_i (1 ≤ i ≤ m) of x_i is either a negative number or a nonnegative one, then x* is an optimum solution to (8.5.1).

Proof: ∀x ∈ X(A, b), according to (8.5.3) there exists x̆ ∈ X̆(A, b) such that x̆ ≤ x ≤ x̂. By Lemma 8.5.4 and Lemma 8.5.5, we have

    f(x) = f_1(x) ∨ f_2(x) ≥ f_1(x̂) ∨ f_2(x̆) ≥ f_1(x̂) ∨ f_2(x̆*) = f(x*).

So x* is an optimum solution to (8.5.1).

8.5.4 Model Algorithm

Algorithm 8.5.1
Step 1. Rearrange the components of b from large to small, and adjust A, x and f(x) correspondingly.



Step 2. By Formula (8.5.2), solve for x̂. If x̂ is not a solution to (8.5.1)(a), then turn to Step 10. Otherwise, turn to Step 3.
Step 3. Check the sign of γ_i (1 ≤ i ≤ m). If γ_i < 0 (1 ≤ i ≤ m), then turn to Step 9. Otherwise, turn to Step 4.
Step 4. Compute the characteristic matrix C of A and G_j (1 ≤ j ≤ n), and find the minimum solution set X̆(A, b) of (8.5.1)(a) by Rule 8.5.1.
Step 5. If γ_i ≥ 0 (1 ≤ i ≤ m), obtain x̆* by Lemma 8.5.5 and turn to Step 8. Otherwise, turn to Step 6.
Step 6. Gain x* by Theorem 8.5.1.
Step 7. Print f(x*), stop.
Step 8. Print f(x̆*), stop.
Step 9. Print f(x̂), stop.
Step 10. Print "no solution", stop.

8.5.5 Examples

Example 8.5.1: We consider the following fuzzy relation geometric programming:

    min f(x) = (3 ∧ x_1^{−2}) ∨ (2 ∧ x_2^{−1}) ∨ (1.5 ∧ x_3^{−1/2}) ∨ (2.5 ∧ x_4^{−2}) ∨ (0.5 ∧ x_5^{−5/2}) ∨ (4 ∧ x_6^{−1})
    s.t. x ∘ A = b,
         0 ≤ x_i ≤ 1 (1 ≤ i ≤ 6),

where b = (0.85, 0.6, 0.5, 0.1),

        ⎛ 0.5  0.2  0.8 0.1 ⎞
        ⎜ 0.8  0.2  0.8 0.1 ⎟
    A = ⎜ 0.9  0.1  0.4 0.1 ⎟ .
        ⎜ 0.3  0.95 0.1 0.1 ⎟
        ⎜ 0.85 0.1  0.1 0.1 ⎟
        ⎝ 0.4  0.8  0.1 0   ⎠

By Formula (8.5.2), we can solve x̂ = (0.5, 0.5, 0.85, 0.6, 1, 0.6). Since x̂ ∘ A = b, x̂ is a greatest solution to x ∘ A = b. It is easy to see that γ_i < 0 (1 ≤ i ≤ 6). By Lemma 8.5.4, x̂ is an optimum solution to Example 8.5.1, and the optimum value is f(x̂) = 3.

Example 8.5.2: Consider finding

    min f(x) = (1.5 ∧ x_1^{1/2}) ∨ (2 ∧ x_2) ∨ (0.8 ∧ x_3^{−1/2}) ∨ (0.9 ∧ x_4^{−2}) ∨ (0.7 ∧ x_5^{−4}) ∨ (1 ∧ x_6^{−1})
    s.t. x ∘ A = b,
         0 ≤ x_i ≤ 1 (1 ≤ i ≤ 6),

where A, b are the same as in Example 8.5.1.



Since the exponents γ_i are of mixed signs, we solve the characteristic matrix C of A by Algorithm 8.5.1:

        ⎛ 0 0 1 1 ⎞
        ⎜ 0 0 1 1 ⎟
    C = ⎜ 1 0 0 1 ⎟ ,  G_1 = {3, 5}, G_2 = {4, 6}, G_3 = {1, 2}, G_4 = {1, 2, 3, 4, 5}.
        ⎜ 0 1 0 1 ⎟
        ⎜ 1 0 0 1 ⎟
        ⎝ 0 1 0 0 ⎠

Eight conservative paths of C can be obtained by Rule 8.5.1:

    p_C^1 = (3, 4, 1, 3),  p_C^2 = (3, 4, 2, 3),  p_C^3 = (3, 6, 1, 3),  p_C^4 = (3, 6, 2, 3),
    p_C^5 = (5, 4, 1, 5),  p_C^6 = (5, 4, 2, 5),  p_C^7 = (5, 6, 1, 5),  p_C^8 = (5, 6, 2, 5).

For these paths, the corresponding minimum solutions are

    x̆^1 = (0.5, 0, 0.85, 0.6, 0, 0),   x̆^2 = (0, 0.5, 0.85, 0.6, 0, 0),
    x̆^3 = (0.5, 0, 0.85, 0, 0, 0.6),   x̆^4 = (0, 0.5, 0.85, 0, 0, 0.6),
    x̆^5 = (0.5, 0, 0, 0.6, 0.85, 0),   x̆^6 = (0, 0.5, 0, 0.6, 0.85, 0),
    x̆^7 = (0.5, 0, 0, 0, 0.85, 0.6),   x̆^8 = (0, 0.5, 0, 0, 0.85, 0.6).
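These eight minimum solutions can be cross-checked by brute force: enumerate every g ∈ G_1 × G_2 × G_3 × G_4, form the quasi-minimum solution x^g, and keep the pointwise-minimal results (a verification sketch assumed by this edition, not the conservative-path algorithm itself):

```python
from itertools import product

# Brute-force cross-check of the eight minimum solutions of Example 8.5.2.
A = [[0.5,0.2,0.8,0.1],[0.8,0.2,0.8,0.1],[0.9,0.1,0.4,0.1],
     [0.3,0.95,0.1,0.1],[0.85,0.1,0.1,0.1],[0.4,0.8,0.1,0.0]]
b = [0.85,0.6,0.5,0.1]
m, n = len(A), len(b)

G = [[i for i in range(m) if b[j] <= A[i][j]] for j in range(n)]  # from C

sols = set()
for g in product(*G):                                 # g = (K_1, ..., K_n)
    x = tuple(max([b[j] for j in range(n) if g[j] == i], default=0.0)
              for i in range(m))                      # x^g_i = ∨{b_j : K_j = i}
    sols.add(x)

# keep only the pointwise-minimal quasi-minimum solutions
minimal = [x for x in sols
           if not any(y != x and all(u <= v for u, v in zip(y, x))
                      for y in sols)]
print(len(minimal))   # 8, matching the list above
```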

Let f_1(x) = (0.8 ∧ x_3^{−1/2}) ∨ (0.9 ∧ x_4^{−2}) ∨ (0.7 ∧ x_5^{−4}) ∨ (1 ∧ x_6^{−1}) and f_2(x) = (1.5 ∧ x_1^{1/2}) ∨ (2 ∧ x_2). By Lemma 8.5.4, x̂ is an optimum solution for f_1(x). By Lemma 8.5.5, x̆^2, x̆^4, x̆^6, x̆^8 (those with x_1 = 0, x_2 = 0.5, giving f_2 = 0.5) are optimum solutions for f_2(x). By Theorem 8.5.1, x* = (0, 0.5, 0.85, 0.6, 1, 0.6) is an optimum solution to f(x), and the optimum value is f(x*) = 1.

The method presented here can be applied to both engineering optimization design and technological economic analysis, and is of practical use in research on environmental protection and pollution disposal as well.

8.5.6 Conclusion

In research on fuzzy relation geometric programming, when the variable scale is not very large, we can smoothly reach the optimum point by applying this algorithm. However, when the variable scale is very large, the number of elements in the minimum solution set X̆(A, b) of (8.5.1)(a) increases significantly.

8.6

Fuzzy Relation Geometric Programming with (∨, ·) Operator

8.6.1 Introduction

We call

    min f(x) = (c_1 · x_1^{γ_1}) ∨ (c_2 · x_2^{γ_2}) ∨ · · · ∨ (c_n · x_n^{γ_n})   (8.6.1)
    s.t. A ∘ x = b,   (a)
         0 ≤ x_j ≤ 1 (1 ≤ j ≤ n)



a (∨, ·) (max-product) fuzzy relation geometric programming, where A = (a_ij) (1 ≤ i ≤ m, 1 ≤ j ≤ n) is an (m × n)-dimensional fuzzy matrix, x = (x_1, x_2, · · · , x_n)^T an n-dimensional variable vector, b = (b_1, b_2, · · · , b_m)^T (0 ≤ b_i ≤ 1) an m-dimensional constant vector, c = (c_1, c_2, · · · , c_n)^T (c_j ≥ 0) an n-dimensional constant vector, γ_j an arbitrary real number, and the composition operator "∘" is (∨, ·), i.e.,

    ∨_{j=1}^{n} (a_ij · x_j) = b_i  (1 ≤ i ≤ m).
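The (∨, ·) composition itself is one line of code; a minimal sketch (the 2 × 2 data here is illustrative, assumed for this sketch only):

```python
def max_product(A, x):
    """(A ∘ x)_i = ∨_j (a_ij · x_j): the (∨, ·) composition of (8.6.1)(a)."""
    return [max(aij * xj for aij, xj in zip(row, x)) for row in A]

A = [[0.5, 0.8],
     [0.9, 0.2]]          # illustrative fuzzy matrix
x = [0.5, 0.5]
print(max_product(A, x))  # [0.4, 0.45]
```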

For some problems, the operator (∨, ·) can overcome shortcomings of the operator (∨, ∧), and it is irreplaceable in solving some practical problems. In this section, we propose such a fuzzy relation geometric programming model.

8.6.2 Structure of Solution Set on Equation

Since the feasible domain of (8.6.1) is the solution set of (8.6.1)(a), solving Equation (8.6.1)(a) is very important for optimizing Model (8.6.1), so we give some exposition of the structure of the solution set of (8.6.1)(a) as follows [Pre81].

Definition 8.6.1. If there exists a solution to (8.6.1)(a), it is called compatible.

Suppose X(A, b) = {x = (x_1, x_2, · · · , x_n)^T ∈ R^n | A ∘ x = b, 0 ≤ x_j ≤ 1} is the solution set of (8.6.1)(a). We define, ∀x¹, x² ∈ X(A, b), x¹ ≤ x² ⇔ x¹_j ≤ x²_j (1 ≤ j ≤ n); such a "≤" is a partial order relation on X(A, b).

Definition 8.6.2. Similarly to Definition 8.1.1, we define x̂ ∈ X(A, b) to be a greatest solution, x̌ a minimal solution, and x̆ a minimum solution to (8.6.1)(a).

Let

    x̂_j = ∧_{i=1}^{m} (a_ij⁻¹ b_i)  (1 ≤ j ≤ n),   (8.6.2)

where a_ij⁻¹ b_i is as in (8.2.4). If x̂ = (x̂_1, x̂_2, · · · , x̂_n)^T is a solution to (8.6.1)(a), we can easily prove that x̂ must be a greatest solution to (8.6.1)(a). For the greatest solution to (8.6.1)(a), we have

Lemma 8.6.1. [San76] A ∘ x = b is compatible if and only if A ∘ x̂ = b, and then x̂ is the greatest solution.

Proof: The sufficiency is evident, and now we prove the necessity. If x is a solution to A ∘ x = b, then

    ∨_{j=1}^{n} (a_ij · x_j) = b_i  (1 ≤ i ≤ m),

so ∀i, j, we have a_ij · x_j ≤ b_i. Let j be fixed. If a_ij ≤ b_i, then 0 ≤ x_j ≤ 1; if a_ij > b_i, then 0 ≤ x_j ≤ b_i/a_ij. Hence we have

    x_j ≤ ∧_{i=1}^{m} (a_ij⁻¹ b_i) = x̂_j,



i.e., x ≤ x̂. Further, suppose b_i < a_ij; since x̂_j = ∧_{i=1}^{m} (a_ij⁻¹ b_i) ≤ b_i/a_ij, then a_ij · x̂_j ≤ b_i. Suppose b_i ≥ a_ij; then a_ij · x̂_j ≤ a_ij ≤ b_i. So we have

    ∨_{j=1}^{n} (a_ij · x̂_j) ≤ b_i,

i.e., A ∘ x̂ ≤ b. Since x ≤ x̂, then

    b = A ∘ x ≤ A ∘ x̂ ≤ b.

Hence A ∘ x̂ = b, and x̂ is the greatest solution.

Corollary 8.6.1. If X(A, b) ≠ Ø, then x̂ ∈ X(A, b).

Similarly to Section 8.5, Ref. [ZW91] has provided a sufficient and necessary condition for a minimal element of Equation (8.6.1)(a) to exist, but a minimal element of (8.6.1)(a) ordinarily need not exist in X(A, b). For a minimum element of X(A, b), we have the following.

Lemma 8.6.2. [ZW91] If X(A, b) ≠ Ø, then a minimum element must exist in X(A, b). If X(A, b) has minimum elements, their number usually is not unique. [Wangx02]

If we denote the set of all minimum elements by X̆(A, b), then the solution set of (8.6.1)(a) can be expressed as follows:

    X(A, b) = ∪_{x̆ ∈ X̆(A,b)} {x | x̆ ≤ x ≤ x̂}.   (8.6.3)

We can clearly see from (8.6.3) that the structure of the solution set of (8.6.1)(a) is determined by X̆(A, b); solving (8.6.1)(a) involves finding X̆(A, b). Now we introduce the method for finding the minimum solutions of a (∨, ·) fuzzy relation equation.

Definition 8.6.3. Matrix D = (d_ij)_{m×n} is called a discriminate matrix of A, where d_ij = a_ij if a_ij · x̂_j = b_i, and d_ij = 0 if a_ij · x̂_j ≠ b_i.

We can easily prove by Definition 8.6.3 that (8.6.1)(a) has a solution if and only if the discriminate matrix D of A contains at least one nonzero entry in each row.

Definition 8.6.4. Matrix G = (g_ij)_{m×n} is called a simplification matrix of A, where g_ij = x̂_j if a_ij · x̂_j = b_i, and g_ij = 0 if a_ij · x̂_j ≠ b_i.

Based on matrix G, X̆(A, b) can be filtrated as follows.



Rule 8.6.1. (Filtration rule of minimum solutions)
1) If b_i = 0, then delete the i-th row of G.
2) If b_i > 0 and there exists k ∈ {1, 2, · · · , m} with k > i such that, for all j = 1, 2, · · · , n, g_kj ≠ 0 implies g_ij ≠ 0, then delete the i-th row of G.
3) Denote the matrix obtained by 1) and 2) by G̃. For each row of G̃, select one nonzero entry and regard all remaining entries of that row as zero; denote all the matrices obtained in this way by G̃_1, G̃_2, · · · , G̃_p. For each G̃_k (1 ≤ k ≤ p), take the maximum of each column; a quasi-minimum solution x is obtained by this method. The set composed of all such x is called the quasi-minimum solution set, and it includes all minimum solutions to (8.6.1)(a). After repeated solutions are deleted, all minimum solutions X̆(A, b) can be obtained by filtration according to Definition 8.6.2 [WZSL91][HK84][Zim91].

8.6.3 Solving Solution on Model

Let us consider the objective function as follows:

    f(x) = (c_1 · x_1^{γ_1}) ∨ (c_2 · x_2^{γ_2}) ∨ · · · ∨ (c_n · x_n^{γ_n}).   (8.6.4)

The optimal value of f(x) is related to the exponent γ_j of each item x_j (1 ≤ j ≤ n). We now discuss (8.6.1) through the following three cases.

Lemma 8.6.3. If γ_j < 0 (1 ≤ j ≤ n), then the greatest solution x̂ to Equation (8.6.1)(a) is an optimal solution to Model (8.6.1).

Proof: Since γ_j < 0 (1 ≤ j ≤ n), then

    d(x_j^{γ_j})/dx_j = γ_j x_j^{γ_j − 1} ≤ 0

for each x_j with 0 ≤ x_j ≤ 1, so x_j^{γ_j} is a monotone decreasing function of x_j, and so is c_j · x_j^{γ_j}. Therefore, ∀x ∈ X(A, b), since x ≤ x̂,

    c_j · x_j^{γ_j} ≥ c_j · x̂_j^{γ_j}  (1 ≤ j ≤ n),

such that f(x) ≥ f(x̂), so x̂ is an optimal solution to (8.6.1).

Lemma 8.6.4. If γ_j ≥ 0 (1 ≤ j ≤ n), then a certain minimum solution x̆ to (8.6.1)(a) is an optimal solution to (8.6.1).

Proof: Since γ_j ≥ 0 (1 ≤ j ≤ n), then

    d(x_j^{γ_j})/dx_j = γ_j x_j^{γ_j − 1} ≥ 0

for each x_j with 0 ≤ x_j ≤ 1, hence x_j^{γ_j} is a monotone increasing function of x_j, and so is c_j · x_j^{γ_j}.



So, ∀x ∈ X(A, b), according to Formula (8.6.3) there exists x̆ ∈ X̆(A, b) such that x ≥ x̆, that is, x_j ≥ x̆_j. Therefore,

    c_j · x_j^{γ_j} ≥ c_j · x̆_j^{γ_j}  (1 ≤ j ≤ n),

and then f(x) ≥ f(x̆); that is, an optimal solution to (8.6.1) must exist in X̆(A, b). Let

    f(x̆*) = min{f(x̆) | x̆ ∈ X̆(A, b)}.

Then ∀x ∈ X(A, b), f(x) ≥ f(x̆*), so x̆* is an optimal solution to (8.6.1), where x̆* ∈ X̆(A, b).

As for the general situation, in (8.6.4) the exponent γ_j (1 ≤ j ≤ n) of each item x_j may be either negative or nonnegative. Let R_1 = {j | γ_j < 0, 1 ≤ j ≤ n}, R_2 = {j | γ_j ≥ 0, 1 ≤ j ≤ n}. Then R_1 ∩ R_2 = Ø and R_1 ∪ R_2 = J, where J = {1, 2, · · · , n}. Let

    f_1(x) = ∨_{j∈R_1} (c_j · x_j^{γ_j}),  f_2(x) = ∨_{j∈R_2} (c_j · x_j^{γ_j}).

Then f(x) = f_1(x) ∨ f_2(x). Therefore, we have the next two optimization models:

    min f_1(x)
    s.t. A ∘ x = b,   (8.6.5)
         0 ≤ x_j ≤ 1 (1 ≤ j ≤ n)

and

    min f_2(x)
    s.t. A ∘ x = b,   (8.6.6)
         0 ≤ x_j ≤ 1 (1 ≤ j ≤ n).

By Lemma 8.6.3, x̂ is an optimal solution to (8.6.5). By Lemma 8.6.4, there exists x̆* ∈ X̆(A, b) that is an optimal solution to (8.6.6). Let

    x*_j = x̂_j for j ∈ R_1,  x*_j = x̆*_j for j ∈ R_2.

We have the following theorem.

Theorem 8.6.1. If each exponent γ_j (1 ≤ j ≤ n) of x_j is either a negative number or a nonnegative one, then x* is an optimal solution to (8.6.1).

Proof: ∀x ∈ X(A, b). According to (8.6.3), there exists x̆ ∈ X̆(A, b) such that x̆ ≤ x ≤ x̂. By Lemma 8.6.3 and Lemma 8.6.4, we have

    f(x) = f_1(x) ∨ f_2(x) ≥ f_1(x̂) ∨ f_2(x̆) ≥ f_1(x̂) ∨ f_2(x̆*) = f(x*).

So x* is an optimal solution to (8.6.1).



8.6.4 Algorithm to Model

A. Algorithm

Algorithm 8.6.1
Step 1. Find x̂ by (8.6.2). If x̂ is not a solution to (8.6.1)(a), then turn to Step 9. Otherwise, turn to Step 2.
Step 2. Check the sign of γ_j (1 ≤ j ≤ n). If γ_j < 0 (1 ≤ j ≤ n), then turn to Step 8. Otherwise, turn to Step 3.
Step 3. Compute the discriminate matrix D and the simplification matrix G of A. The minimum solution set X̆(A, b) of (8.6.1)(a) is filtrated by Rule 8.6.1.
Step 4. If γ_j ≥ 0 (1 ≤ j ≤ n), obtain x̆* by Lemma 8.6.4 and turn to Step 7. Otherwise, turn to Step 5.
Step 5. Gain x* by Theorem 8.6.1.
Step 6. Print f(x*), stop.
Step 7. Print f(x̆*), stop.
Step 8. Print f(x̂), stop.
Step 9. Print "no solution", stop.

B. Example

Example 8.6.1: We now consider the following (∨, ·) fuzzy relation geometric programming:


    min f(x) = (0.3 · x_1^{−2}) ∨ (1.8 · x_2^{−1/3}) ∨ (1.5 · x_3^{−1/2}) ∨ (0.45 · x_4^{−2})
    s.t. A ∘ x = b,
         0 ≤ x_j ≤ 1 (1 ≤ j ≤ 4),

where b = (0.4, 0.2, 0.2)^T,

        ⎛ 0.5 0   0.6 0.8 ⎞
    A = ⎜ 0.5 0.2 0   0.4 ⎟ .
        ⎝ 0.2 0.1 0.3 0.2 ⎠

By Formula (8.6.2), we can solve x̂ = (0.4, 1, 2/3, 0.5)^T. Since A ∘ x̂ = b, a solution to A ∘ x = b exists and x̂ is a greatest solution to A ∘ x = b. It is easy to see that γ_j < 0 (1 ≤ j ≤ 4), so x̂ is an optimal solution by Lemma 8.6.3, and the optimal value is f(x̂) = 1.875.

Example 8.6.2: Find



    min f(x) = (0.4 · x_1^{−1/2}) ∨ (0.7 · x_2^{1/2}) ∨ (0.6 · x_3^{3/2}) ∨ (0.2 · x_4^{−2})
    s.t. A ∘ x = b,
         0 ≤ x_j ≤ 1 (1 ≤ j ≤ 4),

where A, b are the same as in Example 8.6.1.



The discriminate matrix of A is

        ⎛ 0   0   0.6 0.8 ⎞
    D = ⎜ 0.5 0.2 0   0.4 ⎟ .
        ⎝ 0   0   0.3 0   ⎠

Since each row of D contains at least one nonzero entry, a solution exists for the (∨, ·) fuzzy relation equation A ∘ x = b; this outcome is consistent with Example 8.6.1. Because the exponents γ_j are of mixed signs, we compute the simplification matrix G of A by Algorithm 8.6.1:

        ⎛ 0   0 2/3 0.5 ⎞
    G = ⎜ 0.4 1 0   0.5 ⎟ .
        ⎝ 0   0 2/3 0   ⎠

Processing G by Rule 8.6.1 (the first row is deleted by item 2), using k = 3), we get

    G̃ = ( 0.4 1 0   0.5 )
        ( 0   0 2/3 0   ) .

Therefore, we have

    G̃_1 = ( 0.4 0 0   0 ) ,  G̃_2 = ( 0 1 0   0 ) ,  G̃_3 = ( 0 0 0   0.5 ) .
          ( 0   0 2/3 0 )          ( 0 0 2/3 0 )          ( 0 0 2/3 0   )

So all the minimum solutions to A ∘ x = b are

    x̆^(1) = (0.4, 0, 2/3, 0)^T,  x̆^(2) = (0, 1, 2/3, 0)^T,  x̆^(3) = (0, 0, 2/3, 0.5)^T.



Notice that f_1(x) = (0.4 · x_1^{−1/2}) ∨ (0.2 · x_4^{−2}) and f_2(x) = (0.7 · x_2^{1/2}) ∨ (0.6 · x_3^{3/2}). From Lemma 8.6.3, we know x̂ is an optimal solution for f_1(x). From Lemma 8.6.4, we know x̆^(1) and x̆^(3) are optimal solutions for f_2(x). By Theorem 8.6.1, clearly x* = (0.4, 0, 2/3, 0.5)^T is an optimal solution to f(x), and the optimal value is f(x*) = 0.8.

8.6.5 Conclusion

Relation programming with the (∨, ∧) and (∨, ·) operators has recently been receiving more attention, while fuzzy relation geometric programming has been developing very slowly all along. The reason is that it is difficult to get an ideal result by traditional nonlinear optimization methods, since the feasible domain of this kind of programming is in general nonconvex [BS79][Kel71]. Besides, owing to the nonlinear objective function, it is very difficult to provide a general algorithm for this kind of optimization problem; corresponding discussion can only be made for some concrete nonlinear objectives. Ref. [LF01b] has provided a solution method for such a problem by a genetic algorithm. However, when the scale of variables grows, it is difficult to overcome its premature convergence.

9 Interval and Fuzzy Diﬀerential Equations

In this chapter, we put forward the concepts of ordinary differential equations in interval functions (i.e., interval-valued functions) and fuzzy(-valued) functions, discuss the existence and uniqueness of solutions to interval ordinary differential equations, study the existence and uniqueness of solutions to the former equation at ordinary points and fuzzy points by using a decomposition theorem of fuzzy sets, and obtain a kind of solution to this equation. At the same time, we study the Solow economic growth model and the Domar debt model, both very influential in economics, by applying a fuzzy set-valued mapping method to the extension of the differential equation.

9.1

Interval Ordinary Diﬀerential Equations

Definition 9.1.1. If we use R to denote the real number set, then we call the closed real interval x̄ = [x⁻, x⁺] = {x | x⁻ ≤ x ≤ x⁺; x⁻, x⁺ ∈ R} an interval number, while the degenerate closed interval [x, x] is identified with the real number x itself (in particular, x = 0).

Definition 9.1.2. Suppose x̄_1 = [x_1⁻, x_1⁺] and x̄_2 = [x_2⁻, x_2⁺]; below, "∗" denotes one of the arithmetic operations "+, −, ×, ÷" on real numbers. By the classical extension principle, we have

    x̄_1 ∗ x̄_2 = {z | ∃(x_1, x_2) ∈ [x_1⁻, x_1⁺] × [x_2⁻, x_2⁺], z = x_1 ∗ x_2}.

The outcome is again a closed interval number, and the formula above defines the corresponding operation on intervals over R. When "∗" denotes division, 0 ∈ x̄_2 is excluded.

Definition 9.1.3. Suppose F̄ : [a, b] → I_R, where I_R = {[x_1, x_2] | x_1 ≤ x_2; x_1, x_2 ∈ R} and x ↦ [x_1, x_2];

B.-Y. Cao: Optimal Models & Meth. with Fuzzy Quantities, STUDFUZZ 248, pp. 293–326. c Springer-Verlag Berlin Heidelberg 2010 springerlink.com



then x ↦ F̄(x) = [F⁻(·), F⁺(··)] is an interval function, where x_1 = F⁻(·) and x_2 = F⁺(··); here "(·)" denotes (x, f⁻(x), df⁻(x)/dx, · · · , dⁿf⁻(x)/dxⁿ) and "(··)" denotes (x, f⁺(x), df⁺(x)/dx, · · · , dⁿf⁺(x)/dxⁿ), and F⁻, F⁺ together with f⁻, f⁺ are all ordinary functions on [a, b]; hence ∀x ∈ [a, b], f⁻(x) ≤ f⁺(x) and F⁻(·) ≤ F⁺(··). If f⁻(x), f⁺(x) are such that F⁻(·), F⁺(··) are continuous on [a, b], then we call F̄(x) continuous on [a, b]. The relevant definitions of continuity and differentiability of y = f(x) on [a, b] can be found in [Cen87].

Definition 9.1.4. Suppose f̄(x) is a function defined on [a, b] and, at x_0 ∈ [a, b], the ordinary derivatives df⁻(x_0)/dx and df⁺(x_0)/dx exist. Then we say the interval function f̄(x) is derivable at x_0, and

    [min{df⁻(x_0)/dx, df⁺(x_0)/dx}, max{df⁻(x_0)/dx, df⁺(x_0)/dx}]

is the interval derivative of f̄(x) at x_0. When df⁻(x_0)/dx ≤ df⁺(x_0)/dx, [df⁻(x_0)/dx, df⁺(x_0)/dx] is an interval same-order derivative of f̄(x) at x_0. Otherwise, [df⁺(x_0)/dx, df⁻(x_0)/dx] is an interval antitone one of f̄(x) at x_0.

Definition 9.1.5. Suppose f̄(x) is a function defined on the interval [a, b] and, for ∀x ∈ [a, b], the derived functions df⁻(x)/dx and df⁺(x)/dx exist. Then

    [min{df⁻(x)/dx, df⁺(x)/dx}, max{df⁻(x)/dx, df⁺(x)/dx}]

is called the interval derived function of f̄(x) on [a, b], briefly written as df̄(x)/dx = [y⁻(x), y⁺(x)], and f̄(x) is called a primal function of ȳ(x) on the interval [a, b]. If df⁻(x)/dx ≤ df⁺(x)/dx, ∀x ∈ [a, b], then we call f̄(x) same-order derivable on [a, b]; ȳ(x) = [df⁻(x)/dx, df⁺(x)/dx] is the same-order derived function of f̄(x) on [a, b], and f̄(x) is the same-order primal function of ȳ(x). Otherwise, we call f̄(x) antitone derivable, ȳ(x) = [df⁺(x)/dx, df⁻(x)/dx] an antitone derived function of f̄(x) on [a, b], and f̄(x) an antitone primal function of ȳ(x).
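Definition 9.1.4 is mechanical to apply once the endpoint functions are fixed; a small sketch with the assumed pair f⁻(x) = x², f⁺(x) = x on [0, 1] (not an example from the text):

```python
# Interval derivative (Definition 9.1.4) of f̄(x) = [x², x] on [0, 1]:
# [min(df⁻/dx, df⁺/dx), max(df⁻/dx, df⁺/dx)] at a point x0.
df_minus = lambda x: 2.0*x   # derivative of f⁻(x) = x²
df_plus  = lambda x: 1.0     # derivative of f⁺(x) = x

def interval_derivative(x0):
    lo, hi = df_minus(x0), df_plus(x0)
    return [min(lo, hi), max(lo, hi)]

# same-order where 2x0 ≤ 1, antitone where 2x0 > 1
print(interval_derivative(0.25), interval_derivative(0.75))
```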



Similarly, if f̄(x) is defined on the interval [a, b] and there exist dⁿf⁻(x_0)/dxⁿ and dⁿf⁺(x_0)/dxⁿ at a point x_0 ∈ [a, b], we call

    [min(dⁿf⁻(x_0)/dxⁿ, dⁿf⁺(x_0)/dxⁿ), max(dⁿf⁻(x_0)/dxⁿ, dⁿf⁺(x_0)/dxⁿ)]

an nth derivative of f̄(x) at x_0, written as dⁿf̄(x_0)/dxⁿ; as a function of x it is an nth derived function of f̄(x).

Definition 9.1.6. An equation containing an unknown interval derivative is called an interval differential equation; such an equation containing one unknown function of one variable is called an interval ordinary differential equation. Namely,

    df̄(x)/dx = f̄(x), i.e., [df⁻(x)/dx, df⁺(x)/dx] = [f⁻(x), f⁺(x)],   (9.1.1)

is called a 1st-order interval ordinary differential equation, while

    dⁿf̄(x)/dxⁿ + · · · + a_{n−1}(x) df̄(x)/dx + a_n(x) f̄(x) = 0, i.e.,
    [dⁿf⁻(x)/dxⁿ, dⁿf⁺(x)/dxⁿ] + · · · + a_{n−1}(x)[df⁻(x)/dx, df⁺(x)/dx] + a_n(x)[f⁻(x), f⁺(x)] = [0, 0],   (9.1.2)

is called an nth-order interval ordinary differential equation, where a_1(x), · · · , a_{n−1}(x), a_n(x) are known functions. The functions discussed below are all supposed to be same-order derivable [Cen87] (the antitone derivable case can be discussed similarly).

Definition 9.1.7. If a function φ̄(x) substituted into (9.1.1) or (9.1.2) makes it an identity, then φ̄(x) is called an interval solution to it. Seeking the interval solutions to (9.1.1) or (9.1.2) means solving the interval differential equation. Here φ̄(x) = [φ⁻(x), φ⁺(x)], where φ⁻, φ⁺ are ordinary functions.

Definition 9.1.8. The fixed solutions problem is

    F̄(x, f̄(x), df̄(x)/dx, d²f̄(x)/dx², · · · , dⁿf̄(x)/dxⁿ) = 0,   (9.1.3)
    f̄(x_0) = f̄_0, df̄(x_0)/dx = f̄_0^{(1)}, · · · , d^{n−1}f̄(x_0)/dx^{n−1} = f̄_0^{(n−1)},   (9.1.4)

that is,

    F̄([x, x], [f⁻(x), f⁺(x)], [df⁻(x)/dx, df⁺(x)/dx], · · · , [dⁿf⁻(x)/dxⁿ, dⁿf⁺(x)/dxⁿ]) = [0, 0],
    [f⁻(x_0), f⁺(x_0)] = [f_0⁻, f_0⁺], [df⁻(x_0)/dx, df⁺(x_0)/dx] = [f_0^{(1)−}, f_0^{(1)+}], · · · ,
    [d^{n−1}f⁻(x_0)/dx^{n−1}, d^{n−1}f⁺(x_0)/dx^{n−1}] = [f_0^{−(n−1)}, f_0^{+(n−1)}].



A solution satisfying (9.1.3) and (9.1.4) is called an interval special solution to the fixed-solution problem.

Definition 9.1.9. An expression of the solution containing $n$ arbitrary interval constants, for an $n$th-order interval equation of the form (9.1.3),
\[
\bar f(x) = \bar\varphi(x, \bar C_0, \bar C_1, \cdots, \bar C_{n-1}),
\]
is called an interval general solution if, for every initial value condition given arbitrarily in a certain scope, specific fixed values of the arbitrary constants $\bar C_0, \bar C_1, \cdots, \bar C_{n-1}$ can be found, at least in
\[
\begin{cases}
\dfrac{d\bar f(x)}{dx} = \dfrac{\partial\bar\varphi}{\partial x}(x, \bar C_0, \bar C_1, \cdots, \bar C_{n-1}),\\
\quad\vdots\\
\dfrac{d^n\bar f(x)}{dx^n} = \dfrac{\partial^n\bar\varphi}{\partial x^n}(x, \bar C_0, \bar C_1, \cdots, \bar C_{n-1}),
\end{cases}
\]
such that the corresponding solution satisfies this condition.

Note: $\dfrac{\partial^k\bar\varphi(\cdot)}{\partial x^k} \triangleq \Big[\min\Big(\dfrac{\partial^k\varphi^-(\cdot)}{\partial x^k}, \dfrac{\partial^k\varphi^+(\cdot)}{\partial x^k}\Big), \max\Big(\dfrac{\partial^k\varphi^-(\cdot)}{\partial x^k}, \dfrac{\partial^k\varphi^+(\cdot)}{\partial x^k}\Big)\Big]$, and under the precondition that $\bar f$ is same-order derivable, there is
\[
\frac{\partial^k\bar\varphi(\cdot)}{\partial x^k} = \Big[\frac{\partial^k\varphi^-(\cdot)}{\partial x^k}, \frac{\partial^k\varphi^+(\cdot)}{\partial x^k}\Big].
\]

Theorem 9.1.1 (Existence theorem on implicit functions). Suppose $\bar F(\bar x_0, \bar x_1, \cdots, \bar x_n)$ and $\dfrac{\partial\bar F}{\partial x_i}(\bar x_0, \bar x_1, \cdots, \bar x_n)$ $(i = 0, 1, \cdots, n)$ are defined and continuous in some neighborhood $\sigma$ of $(\bar x_0^0, \bar x_1^0, \cdots, \bar x_n^0)$, with $\bar F(\bar x_0^0, \bar x_1^0, \cdots, \bar x_n^0) = 0$ and $\dfrac{\partial\bar F}{\partial x_n}(\bar x_0^0, \bar x_1^0, \cdots, \bar x_n^0) \neq 0$. Then the equation $\bar F(\bar x_0, \bar x_1, \cdots, \bar x_n) = 0$ has a unique interval solution $\bar x_n = \bar f(\bar x_0, \bar x_1, \cdots, \bar x_{n-1})$ in a certain neighborhood $\sigma' \subseteq \sigma$ of the point $(\bar x_0^0, \bar x_1^0, \cdots, \bar x_n^0)$.

Proof: Because
\[
\frac{\partial\bar F}{\partial x_n}(\bar x_0^0, \bar x_1^0, \cdots, \bar x_n^0) \neq 0 \iff \Big[\frac{\partial F^-}{\partial x_n}(x_0^{-0}, x_1^{-0}, \cdots, x_n^{-0}), \frac{\partial F^+}{\partial x_n}(x_0^{+0}, x_1^{+0}, \cdots, x_n^{+0})\Big] \neq [0,0],
\]
so that
\[
\frac{\partial F^-}{\partial x_n}(x_0^{-0}, x_1^{-0}, \cdots, x_n^{-0}) \neq 0, \qquad \frac{\partial F^+}{\partial x_n}(x_0^{+0}, x_1^{+0}, \cdots, x_n^{+0}) \neq 0,
\]
the theorem holds according to the definition of an interval and the existence theorem for classical implicit functions.
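The endpoint-wise interval derivative defined at the start of this section can be made concrete with a small sketch. This is a minimal illustration, assuming intervals are stored as `(lo, hi)` pairs and the endpoint derivative functions are supplied by the caller; the function name is illustrative, not from the text:

```python
# Interval derivative of f_bar(x) = [f_minus(x), f_plus(x)]:
# differentiate each endpoint, then reorder with min/max as in the definition.

def interval_derivative(df_minus, df_plus, x0):
    """Return [min(df-, df+), max(df-, df+)] evaluated at x0."""
    lo, hi = df_minus(x0), df_plus(x0)
    return (min(lo, hi), max(lo, hi))

# Example: f_bar(x) = [x**2, x**2 + 1]; both endpoints have derivative 2x.
d = interval_derivative(lambda x: 2 * x, lambda x: 2 * x, 3.0)
print(d)  # (6.0, 6.0)
```

Note that the min/max reordering matters when the endpoint derivatives cross: for instance, endpoint derivatives $-1$ and $1$ yield the interval $(-1, 1)$ regardless of which endpoint they came from.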

9.1 Interval Ordinary Diﬀerential Equations


Theorem 9.1.2. Consider an interval implicit function equation of the form (9.1.3). If, in the considered region, $\dfrac{\partial\bar F}{\partial\bar f^{(n)}(x)} \neq 0$, then a normal type of interval differential equation is attained as follows:
\[
\frac{d^n\bar f(x)}{dx^n} = \bar\varphi\Big(x, \bar f(x), \frac{d\bar f(x)}{dx}, \frac{d^2\bar f(x)}{dx^2}, \cdots, \frac{d^{n-1}\bar f(x)}{dx^{n-1}}\Big), \tag{9.1.5}
\]
where $\bar\varphi$ is a known interval function of $n+1$ variables.

Proof: Because $\dfrac{\partial\bar F}{\partial\bar f^{(n)}(x)} \neq 0$, there exist
\[
\frac{\partial F^-}{\partial f^{-(n)}(x)} \neq 0, \qquad \frac{\partial F^+}{\partial f^{+(n)}(x)} \neq 0.
\]
From the existence theorem for interval implicit functions, it is known that
\[
\frac{d^n f^-(x)}{dx^n} = \varphi^-\Big(x, f^-(x), \frac{df^-(x)}{dx}, \cdots, \frac{d^{n-1}f^-(x)}{dx^{n-1}}\Big),
\qquad
\frac{d^n f^+(x)}{dx^n} = \varphi^+\Big(x, f^+(x), \frac{df^+(x)}{dx}, \cdots, \frac{d^{n-1}f^+(x)}{dx^{n-1}}\Big).
\]
Therefore, the theorem holds.

Theorem 9.1.3. Any normal type of interval differential equation (or system) can be turned into a first-order one.

Proof: Because
\[
\frac{d^n\bar f(x)}{dx^n} = \bar\varphi\Big(x, \bar f(x), \frac{d\bar f(x)}{dx}, \cdots, \frac{d^{n-1}\bar f(x)}{dx^{n-1}}\Big)
\iff \frac{d\bar f(x)}{dx} = \bar f_1,\ \frac{d\bar f_1(x)}{dx} = \bar f_2, \cdots, \frac{d\bar f_{n-1}(x)}{dx} = \bar\varphi(x, \bar f, \bar f_1, \cdots, \bar f_{n-1}),
\]
where $\bar f_i = [f_i^-, f_i^+]$ $(i = 1, 2, \cdots, n-1)$ are unknown interval functions, the theorem holds.

Corollary 9.1.1. The initial value problem for (9.1.5) is equivalent to the initial value problem for a first-order normal system of interval differential equations.

Let $\bar y = (\bar y_1, \bar y_2, \cdots, \bar y_n)^T$, $\bar f = (\bar f_1, \bar f_2, \cdots, \bar f_n)^T$, $\dfrac{d\bar y}{dx} = \Big(\dfrac{d\bar y_1}{dx}, \dfrac{d\bar y_2}{dx}, \cdots, \dfrac{d\bar y_n}{dx}\Big)^T$. Then equations of the form $\dfrac{d\bar y_i}{dx} = \bar f_i(x, \bar y_1, \bar y_2, \cdots, \bar y_n)$ can be written down as
\[
\frac{d\bar y}{dx} = \bar f(x, \bar y), \tag{9.1.6}
\]
where $\bar y_i = [y_i^-, y_i^+]$, $\bar f_i = [f_i^-, f_i^+]$, $\dfrac{d\bar y_i}{dx} = \Big[\dfrac{dy_i^-}{dx}, \dfrac{dy_i^+}{dx}\Big]$ $(i = 1, 2, \cdots, n)$.


Therefore, only (9.1.6) needs to be discussed.

Theorem 9.1.4 (Existence theorem for solutions). Given the interval differential equation (9.1.6) and an initial value $(x_0, \bar y_0)$, suppose $\bar f(x, \bar y)$ is continuous on the closed region æ: $d(x, x_0) \leq a$, $d(\bar y, \bar y_0) \subseteq [b^-, b^+]$ $(a > 0,\ b^+ > b^- > 0)$, where
\[
d(x, \bar Y) = d_H([x,x], [Y^-, Y^+]) = \max(|x - Y^-|, |x - Y^+|)
\]
and $d_H$ is the Hausdorff measure. Then (9.1.6) has at least one interval solution, taking the value $\bar y_0$ at $x = x_0$; moreover, it is determined and continuous on a certain interval containing $x_0$.

Proof: (9.1.6) has at least one determined, continuous interval solution on a certain interval containing $x_0$ if and only if each of
\[
\frac{dy^-}{dx} = f^-(x, y^-), \qquad \frac{dy^+}{dx} = f^+(x, y^+) \tag{9.1.7}
\]
has at least one determined continuous solution through $y_0^-$ and $y_0^+$, respectively, on a certain interval containing $x_0$. Therefore $\bar y = [y^-, y^+]$ is a determined continuous solution of (9.1.6) on a certain interval containing $x_0$.

Theorem 9.1.5 (Uniqueness theorem for solutions). Under the conditions of Theorem 9.1.4, if in æ the variable $\bar y$ also satisfies a Lipschitz condition, i.e., $\exists N > 0$ such that any two values $\bar y_1, \bar y_2$ in æ imply
\[
|\bar f(x, \bar y_1) - \bar f(x, \bar y_2)| \subseteq N|\bar y_1 - \bar y_2|, \tag{9.1.8}
\]
then
\[
(9.1.6) \ \text{with}\ \bar f(x, \bar y)|_{x=x_0,\ \bar y=\bar y_0} = \bar f_0 \tag{9.1.9}
\]
has a unique determined continuous interval solution.

Proof: From the definition of the Hausdorff measure, we know
\[
(9.1.8) \iff \max\{|f^-(x, y_1^-) - f^-(x, y_2^-)|,\ |f^+(x, y_1^+) - f^+(x, y_2^+)|\} \leq N\max\{|y_1^- - y_2^-|,\ |y_1^+ - y_2^+|\}
\]
\[
\iff |f^-(x, y_1^-) - f^-(x, y_2^-)| \leq N|y_1^- - y_2^-| \tag{9.1.10}
\]
or
\[
|f^+(x, y_1^+) - f^+(x, y_2^+)| \leq N|y_1^+ - y_2^+|. \tag{9.1.11}
\]
Because (9.1.6) satisfying the conditions of Theorem 9.1.4 is equivalent to (9.1.7) with $f^-(x, y^-)$ and $f^+(x, y^+)$ continuous on the closed regions:


\[
\bar{\text{æ}}_1: |x - x_0| \leq a,\ |y^- - y_0^-| \leq b^-; \qquad \bar{\text{æ}}_2: |x - x_0| \leq a,\ |y^+ - y_0^+| \leq b^+,
\]
and satisfying (9.1.10) and (9.1.11), it is known, by the uniqueness theorem for solutions of classical ordinary differential equations, that (9.1.7) has a unique solution through $(x_0, y_0^-)$ and $(x_0, y_0^+)$, respectively; hence through $(x_0, \bar y_0)$, and that is the unique interval solution to (9.1.9).

Theorem 9.1.6. Let $\bar f(x, \bar y)$ be same-order derivable [Cen87]. Then (9.1.6) has a solution $\bar y = \bar\varphi(x) + \bar c$.

Proof: Because
\[
\frac{d\bar y}{dx} = \bar f(x, \bar y) \iff \Big[\frac{dy^-}{dx}, \frac{dy^+}{dx}\Big] = [f^-(x, y^-), f^+(x, y^+)]
\Longrightarrow \frac{dy^-}{dx} = f^-(x, y^-),\ \frac{dy^+}{dx} = f^+(x, y^+)
\]
($\bar f(\cdot)$ has a same-order primitive function $\bar\varphi(\cdot)$)
\[
\Longrightarrow y^- = \varphi^-(x) + c^-,\ y^+ = \varphi^+(x) + c^+,
\]
hence $\bar y = \bar\varphi(x) + \bar c$.
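Theorems 9.1.4–9.1.6 reduce the interval problem (9.1.6) to the two endpoint equations (9.1.7), which can be integrated separately by any classical method. A numerical sketch of that reduction using forward Euler; the step count and the sample right-hand side are illustrative assumptions, not from the text:

```python
import math

def euler(f, x0, y0, x_end, n=1000):
    """Forward Euler for a scalar ODE dy/dx = f(x, y)."""
    h = (x_end - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

def solve_interval_ode(f_minus, f_plus, x0, y0_interval, x_end):
    """Integrate the endpoint equations (9.1.7) separately."""
    lo = euler(f_minus, x0, y0_interval[0], x_end)
    hi = euler(f_plus, x0, y0_interval[1], x_end)
    return (lo, hi)

# Example: d y_bar / dx = y_bar with y_bar(0) = [1, 2]; exact solution [e^x, 2 e^x].
lo, hi = solve_interval_ode(lambda x, y: y, lambda x, y: y, 0.0, (1.0, 2.0), 1.0)
print(abs(lo - math.e) < 0.01, abs(hi - 2 * math.e) < 0.01)  # True True
```

The design choice mirrors the proof of Theorem 9.1.4: the interval solution is obtained exactly when each endpoint problem has a solution, so no genuinely new integrator is needed.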

9.2 Fuzzy-Valued Ordinary Differential Equations

Definition 9.2.1. Let $\tilde A \in F(R)$ be a fuzzy subset of $R$. If, for $\forall\alpha \in [0,1]$, $A_\alpha = [A_\alpha^-, A_\alpha^+]$ with $A_1 \neq \emptyset$, then $\tilde A$ is called a fuzzy number; the set of all fuzzy numbers is written $F(R)$.

Definition 9.2.2. If
1) there exists a unique $y_0 \in R$ such that $\mu_{\tilde A}(y_0) = 1$;
2) $\mu_{\tilde A}(y)$ is continuous with respect to $y$;
3) $\exists[y_1, y_2]$ such that $S(\tilde A) \subseteq [y_1, y_2]$;
4) $\forall s$, $\forall t > s$, $\forall y \in (s, t)$, there is $\mu_{\tilde A}(y) > \min(\mu_{\tilde A}(s), \mu_{\tilde A}(t))$,
then $\tilde A \in F(R)$ is called a convex normal fuzzy number.

Definition 9.2.3. Let $\tilde f: [a,b] \to F(R)$, $x \mapsto \tilde f(x)$. Then $\tilde f$ is called a fuzzy-valued function defined on $[a,b]$; when every $\tilde f(x)$ is a convex normal fuzzy number, $\tilde f$ is called a convex normal fuzzy function.

Definition 9.2.4. Let $\bar f_\alpha: [a,b] \to I_R$, $x \mapsto \bar f_\alpha(x) \triangleq [\tilde f(x)]_\alpha$. Then $\bar f_\alpha$ is called an $\alpha$-cut function of $\tilde f$; $\tilde f$ is continuous if and only if $\bar f_\alpha$ is continuous for $\forall\alpha \in (0,1]$.

The operations on fuzzy numbers are defined by the fuzzy extension principle [WL85]:
\[
f(\tilde A^{(1)}, \tilde A^{(2)}, \cdots, \tilde A^{(m)}) = \bigcup_{\alpha\in(0,1]}\alpha f(A_\alpha^{(1)}, A_\alpha^{(2)}, \cdots, A_\alpha^{(m)}).
\]
In particular, if $\tilde A, \tilde B \in F(R)$, then


1) $(\tilde A \pm \tilde B)_\alpha = A_\alpha \pm B_\alpha$;
2) $(k\tilde A)_\alpha = kA_\alpha$.

Definition 9.2.5. Let $\tilde f(x)$ be defined on $[a,b]$ and $\bar f_\alpha(x)$ be differentiable for $\forall\alpha \in (0,1]$. Then
\[
\frac{d\tilde f(x)}{dx} = \bigcup_{\alpha\in(0,1]}\alpha\frac{d\bar f_\alpha(x)}{dx}
\]
is called a fuzzy-valued derivative at an ordinary point $x$. In the following, it is supposed that $\tilde f(x)$ is same-order derivable on $[a,b]$ (the antitone-derivable case is discussed similarly); the fuzzy-valued derivative can then be simply expressed as
\[
\frac{d\tilde f(x)}{dx} = \bigcup_{\alpha\in(0,1]}\alpha\Big\{\frac{df_\alpha^-(x)}{dx}, \frac{df_\alpha^+(x)}{dx}\Big\}.
\]

Definition 9.2.6. Let $\tilde f: [a,b]\times[c,d] \to F(R)$, $(x,y) \mapsto \tilde f(x,y)$, be a binary fuzzy-valued function defined on $[a,b]\times[c,d]$; its $\alpha$-cut function is $\bar f_\alpha: (x,y) \mapsto \bar f_\alpha(x,y) \triangleq [\tilde f(x,y)]_\alpha = [f_\alpha^-(x,y), f_\alpha^+(x,y)]$. If, for $\forall\alpha \in (0,1]$, $f_\alpha^-$ and $f_\alpha^+$ are differentiable at $(x,y)$, then the partial derivatives of $\tilde f$ at $(x,y)$ are defined as:
\[
\frac{\partial\tilde f(x,y)}{\partial x} = \bigcup_{\alpha\in(0,1]}\alpha\Big\{\frac{\partial f_\alpha^-(x,y)}{\partial x}, \frac{\partial f_\alpha^+(x,y)}{\partial x}\Big\},
\qquad
\frac{\partial\tilde f(x,y)}{\partial y} = \bigcup_{\alpha\in(0,1]}\alpha\Big\{\frac{\partial f_\alpha^-(x,y)}{\partial y}, \frac{\partial f_\alpha^+(x,y)}{\partial y}\Big\}.
\]
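Definitions 9.2.1–9.2.4 and the extension principle become concrete once a fuzzy number is stored through its $\alpha$-cuts; for a triangular fuzzy number the cuts have a closed form. A minimal sketch, assuming the standard triple $(a, b, c)$ parametrization of a triangular fuzzy number (an assumption of this example, not a definition from the text):

```python
def tri_cut(a, b, c, alpha):
    """alpha-cut [A_alpha^-, A_alpha^+] of the triangular fuzzy number (a, b, c)."""
    return (a + alpha * (b - a), c - alpha * (c - b))

def cut_add(u, v):
    """Extension-principle addition acts cut-wise: (A + B)_alpha = A_alpha + B_alpha."""
    return (u[0] + v[0], u[1] + v[1])

def cut_scale(k, u):
    """(kA)_alpha = k A_alpha (k >= 0 keeps the endpoint order)."""
    return (k * u[0], k * u[1])

A = tri_cut(1, 2, 3, 0.5)   # (1.5, 2.5)
B = tri_cut(0, 1, 2, 0.5)   # (0.5, 1.5)
print(cut_add(A, B))        # (2.0, 4.0)
print(cut_scale(2, A))      # (3.0, 5.0)
```

Working cut-by-cut in this way is exactly the level-wise reading of items 1) and 2) above.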

Theorem 9.2.1. If $\tilde f_1(x), \tilde f_2(x)$ are same-order derivable and $\dfrac{d\tilde f_1(x)}{dx}, \dfrac{d\tilde f_2(x)}{dx}$ are normal convex, then
1) $\dfrac{d}{dx}(\tilde f_1(x) \pm \tilde f_2(x)) = \dfrac{d\tilde f_1(x)}{dx} \pm \dfrac{d\tilde f_2(x)}{dx}$;
2) $\dfrac{d}{dx}(k\tilde f(x)) = k\dfrac{d\tilde f(x)}{dx}$.

Definition 9.2.7. An equation containing an unknown fuzzy-valued derivative is called a fuzzy-valued differential equation:
\[
\frac{d\tilde f(x)}{dx} = \tilde f(x) \iff \bigcup_{\alpha\in(0,1]}\alpha\frac{d\bar f_\alpha(x)}{dx} = \bigcup_{\alpha\in(0,1]}\alpha\bar f_\alpha(x) \tag{9.2.1}
\]


is called a first-order fuzzy-valued differential equation, and
\[
\frac{d^n\tilde f(x)}{dx^n} + \cdots + a_{n-1}(x)\frac{d\tilde f(x)}{dx} + a_n(x)\tilde f(x) = 0,
\]
i.e.,
\[
\bigcup_{\alpha\in(0,1]}\alpha\frac{d^n\bar f_\alpha(x)}{dx^n} + \cdots + a_{n-1}(x)\bigcup_{\alpha\in(0,1]}\alpha\frac{d\bar f_\alpha(x)}{dx} + a_n(x)\bigcup_{\alpha\in(0,1]}\alpha\bar f_\alpha(x) = 0, \tag{9.2.2}
\]
is called an $n$th-order fuzzy-valued differential equation, where $a_i(x)$ $(1 \leq i \leq n)$ are known ordinary functions (possibly fuzzy-valued functions), with $a_n(x) \neq 0$.

Definition 9.2.8. If a fuzzy-valued function $\tilde f(x)$, substituted into (9.2.1) or (9.2.2), turns it into an identity, then $\tilde f(x)$ is a solution to it. The process of finding solutions to (9.2.1) or (9.2.2) is called solving the fuzzy-valued differential equation.

Definition 9.2.9. Let a fuzzy-valued fixed-solution problem be
\[
\begin{cases}
\tilde F\Big(x, \tilde f(x), \dfrac{d\tilde f(x)}{dx}, \dfrac{d^2\tilde f(x)}{dx^2}, \cdots, \dfrac{d^n\tilde f(x)}{dx^n}\Big) = 0,\\[2mm]
\tilde f(x_0) = \tilde f_0,\ \dfrac{d\tilde f(x_0)}{dx} = \tilde f_0^{(1)}, \cdots, \dfrac{d^{n-1}\tilde f(x_0)}{dx^{n-1}} = \tilde f_0^{(n-1)}. \quad (a)
\end{cases}
\tag{9.2.3}
\]
Then a solution satisfying (9.2.3) is called a special solution, while an expression of the solution to (9.2.3) containing arbitrary fuzzy constants,
\[
\tilde f(x) = \tilde\varphi(x, \tilde C_0, \tilde C_1, \cdots, \tilde C_{n-1}),
\]
is called a fuzzy-valued general solution to the problem. Here (9.2.3) means
\[
\begin{cases}
\bigcup_{\alpha\in(0,1]}\alpha\bar F_\alpha\Big(x, \bar f_\alpha(x), \dfrac{d\bar f_\alpha(x)}{dx}, \cdots, \dfrac{d^n\bar f_\alpha(x)}{dx^n}\Big) = 0,\\
\bigcup_{\alpha\in(0,1]}\alpha\bar f_\alpha(x_0) = \bigcup_{\alpha\in(0,1]}\alpha\bar f_{0\alpha},\\
\bigcup_{\alpha\in(0,1]}\alpha\dfrac{d\bar f_\alpha(x_0)}{dx} = \bigcup_{\alpha\in(0,1]}\alpha\bar f_{0\alpha}^{(1)},\\
\quad\cdots\cdots\\
\bigcup_{\alpha\in(0,1]}\alpha\dfrac{d^{(n-1)}\bar f_\alpha(x_0)}{dx^{(n-1)}} = \bigcup_{\alpha\in(0,1]}\alpha\bar f_{0\alpha}^{(n-1)},
\end{cases}
\qquad
\tilde f(x) \triangleq \bigcup_{\alpha\in(0,1]}\alpha\bar\varphi_\alpha(x, \bar C_{0\alpha}, \bar C_{1\alpha}, \cdots, \bar C_{(n-1)\alpha}).
\]
The expression is a general solution if, for every initial value condition (9.2.3)(a) given arbitrarily in a certain range, specific fixed values of the arbitrary fuzzy numbers $\tilde C_0, \tilde C_1, \cdots, \tilde C_{n-1}$ can all be found such that the corresponding solution satisfies this condition.

Theorem 9.2.2 (Existence theorem on implicit functions). Suppose that, in some neighborhood of the point $(x_1^0, x_2^0, \cdots, x_n^0, \tilde u_0)$:


1) $\tilde F(x_1, x_2, \cdots, x_n, \tilde u)$ is a continuous convex normal fuzzy-valued function, with $\tilde F(x_1^0, x_2^0, \cdots, x_n^0, \tilde u_0) = 0$;
2) $\dfrac{\partial\tilde F}{\partial u}(x_1^0, x_2^0, \cdots, x_n^0, \tilde u_0)$ is a same-order continuous fuzzy-valued partial derivative, with $\dfrac{\partial\tilde F}{\partial u}(x_1^0, x_2^0, \cdots, x_n^0, \tilde u_0) \neq 0$.
Then, in a neighborhood of this point, $\tilde F(x_1, x_2, \cdots, x_n, \tilde u) = 0$ has a unique fuzzy-valued solution $\tilde u = \tilde\varphi(x_1, x_2, \cdots, x_n)$.

Proof: Because
\[
\tilde F(x_1, x_2, \cdots, x_n, \tilde u) = \bigcup_{\alpha\in(0,1]}\alpha\bar F_\alpha(x_1, x_2, \cdots, x_n, \bar u_\alpha),
\qquad
\frac{\partial\tilde F}{\partial u}(x_1, x_2, \cdots, x_n, \tilde u) = \bigcup_{\alpha\in(0,1]}\alpha\frac{\partial\bar F_\alpha}{\partial u}(x_1, x_2, \cdots, x_n, \bar u_\alpha),
\]
and again $\dfrac{\partial\tilde F}{\partial u}(x_1^0, x_2^0, \cdots, x_n^0, \tilde u_0) \neq 0$, we know the following according to the assumption:
\[
\bigcup_{\alpha\in(0,1]}\alpha\Big[\frac{\partial F_\alpha^-}{\partial u}(x_1^0, x_2^0, \cdots, x_n^0, u_{0\alpha}^-), \frac{\partial F_\alpha^+}{\partial u}(x_1^0, x_2^0, \cdots, x_n^0, u_{0\alpha}^+)\Big] \neq [0,0],
\]
i.e.,
\[
\bigcup_{\alpha\in(0,1]}\alpha\frac{\partial F_\alpha^-}{\partial u}(x_1^0, x_2^0, \cdots, x_n^0, u_{0\alpha}^-) \neq 0,
\qquad
\bigcup_{\alpha\in(0,1]}\alpha\frac{\partial F_\alpha^+}{\partial u}(x_1^0, x_2^0, \cdots, x_n^0, u_{0\alpha}^+) \neq 0,
\]
with, for $\forall\alpha \in (0,1]$,
\[
\bar F_\alpha(x_1^0, x_2^0, \cdots, x_n^0, \bar u_{0\alpha}) = 0, \qquad \frac{\partial\bar F_\alpha}{\partial u}(x_1^0, x_2^0, \cdots, x_n^0, \bar u_{0\alpha}) \neq 0,
\]
and $\bar F_\alpha$ continuous near $(x_1^0, x_2^0, \cdots, x_n^0, \bar u_{0\alpha})$, such that a unique interval solution $\bar u_\alpha = \bar\varphi_\alpha(x_1, x_2, \cdots, x_n)$ of $\bar F_\alpha(x_1, x_2, \cdots, x_n, \bar u_\alpha) = 0$ exists at this point. Therefore, at this point, $\tilde F(x_1, x_2, \cdots, x_n, \tilde u) = 0$ has the unique fuzzy-valued solution $\tilde u = \bigcup_{\alpha\in(0,1]}\alpha\bar\varphi_\alpha(x_1, x_2, \cdots, x_n)$.

If we solve for $\dfrac{d^n\tilde f(x)}{dx^n}$ from relation (9.2.3), then we obtain an equation
\[
\frac{d^n\tilde y(x)}{dx^n} = \tilde f\Big(x, \tilde y(x), \frac{d\tilde y(x)}{dx}, \cdots, \frac{d^{n-1}\tilde y(x)}{dx^{n-1}}\Big), \tag{9.2.4}
\]


where $\tilde f$ is a known fuzzy-valued function of the $n+1$ variables; (9.2.4) is called a normal type fuzzy-valued differential equation.

Theorem 9.2.3. If, in the considered region, $\dfrac{\partial\tilde F}{\partial\tilde f^{(n)}(x)} \neq 0$, then (9.2.3) yields a normal type fuzzy-valued differential equation (9.2.4).

Proof: Because, in the considered region,
\[
\frac{\partial\tilde F}{\partial\tilde f^{(n)}(x)} \neq 0 \iff \bigcup_{\alpha\in(0,1]}\alpha\frac{\partial\bar F_\alpha}{\partial\bar f_\alpha^{(n)}(x)} \neq 0,
\]
the existence theorem for fuzzy-valued implicit functions applies, and the theorem holds.

Theorem 9.2.4. Any $n$th-order normal type fuzzy-valued differential equation (9.2.4) is equivalent to a first-order system
\[
\begin{cases}
\dfrac{d\tilde y(x)}{dx} = \tilde y_1(x),\\
\dfrac{d\tilde y_1(x)}{dx} = \tilde y_2(x),\\
\quad\vdots\\
\dfrac{d\tilde y_{n-1}(x)}{dx} = \tilde f(x; \tilde y(x), \tilde y_1(x), \cdots, \tilde y_{n-1}(x)).
\end{cases}
\tag{9.2.5}
\]

Proof: Because
\[
(9.2.4) \iff \bigcup_{\alpha\in(0,1]}\alpha\frac{d^n\bar y_\alpha(x)}{dx^n} = \bigcup_{\alpha\in(0,1]}\alpha\bar f_\alpha\Big(x, \bar y_\alpha(x), \frac{d\bar y_\alpha(x)}{dx}, \cdots, \frac{d^{n-1}\bar y_\alpha(x)}{dx^{n-1}}\Big).
\]
Suppose $\tilde y(x) = \tilde\varphi(x)$ is a solution to (9.2.4) on the interval $I = [a,b]$, and let
\[
\tilde\varphi_1(x) = \frac{d\tilde\varphi(x)}{dx}, \cdots, \tilde\varphi_{n-1}(x) = \frac{d^{n-1}\tilde\varphi(x)}{dx^{n-1}}.
\]
Then this is equivalent to
\[
\begin{cases}
\bigcup_{\alpha\in(0,1]}\alpha\dfrac{d\bar\varphi_\alpha(x)}{dx} = \bigcup_{\alpha\in(0,1]}\alpha\bar\varphi_{1\alpha}(x),\\
\bigcup_{\alpha\in(0,1]}\alpha\dfrac{d\bar\varphi_{1\alpha}(x)}{dx} = \bigcup_{\alpha\in(0,1]}\alpha\bar\varphi_{2\alpha}(x),\\
\quad\vdots\\
\bigcup_{\alpha\in(0,1]}\alpha\dfrac{d\bar\varphi_{(n-1)\alpha}(x)}{dx} = \bigcup_{\alpha\in(0,1]}\alpha\bar f_\alpha(x, \bar\varphi_\alpha(x), \bar\varphi_{1\alpha}(x), \cdots, \bar\varphi_{(n-1)\alpha}(x)),
\end{cases}
\]


i.e.,
\[
\begin{cases}
\dfrac{d\tilde\varphi(x)}{dx} = \tilde\varphi_1(x),\\
\dfrac{d\tilde\varphi_1(x)}{dx} = \tilde\varphi_2(x),\\
\quad\vdots\\
\dfrac{d\tilde\varphi_{n-1}(x)}{dx} = \tilde f(x, \tilde\varphi(x), \tilde\varphi_1(x), \cdots, \tilde\varphi_{n-1}(x)).
\end{cases}
\]
Because $\tilde\varphi_i(x) = \bigcup_{\alpha\in(0,1]}\alpha\bar\varphi_{i\alpha}(x)$ $(1 \leq i \leq n-1)$ are unknown fuzzy-valued functions, this shows that, for $\forall\alpha$,
\[
\bar y_\alpha(x) = \bar\varphi_\alpha(x),\ \bar y_{1\alpha}(x) = \frac{d\bar\varphi_\alpha(x)}{dx}, \cdots, \bar y_{(n-1)\alpha}(x) = \frac{d^{n-1}\bar\varphi_\alpha(x)}{dx^{n-1}}
\]
is a solution of the corresponding classical problem on the interval $I = [a,b]$. Thereby $\tilde y(x) = \tilde\varphi(x)$, $\tilde y_1(x) = \dfrac{d\tilde\varphi(x)}{dx}, \cdots, \tilde y_{n-1}(x) = \dfrac{d^{n-1}\tilde\varphi(x)}{dx^{n-1}}$ is a solution to (9.2.5) on the interval $I = [a,b]$. Therefore, the theorem is proved.

Corollary 9.2.1. An arbitrary initial value problem for (9.2.4) is equivalent to the initial value problem for a first-order normal system of fuzzy-valued differential equations.

Only the first-order case is discussed below, because similar conclusions can be obtained for the others.

Definition 9.2.10. $d(\tilde x, \tilde y) = d_H(\tilde x, \tilde y)$ is the Hausdorff measure induced by the metric $d$, defined as
\[
d_H(\tilde x, \tilde y) = \begin{cases}
\max\big(\sup\{d(x, \tilde y) \mid x \in \tilde x\},\ \sup\{d(y, \tilde x) \mid y \in \tilde y\}\big), & \tilde x \neq \emptyset,\ \tilde y \neq \emptyset,\\
0, & \tilde x = \tilde y = \emptyset,\\
\infty, & \text{exactly one of } \tilde x, \tilde y \text{ empty}.
\end{cases}
\]
When $\tilde x, \tilde y$ are non-empty closed sets of a closed region æ, we have
\[
d_H(\tilde x, \tilde y) = \max\big\{\sup[d(x, \tilde y) \mid x \in \tilde x],\ \sup[d(y, \tilde x) \mid y \in \tilde y]\big\}.
\]

Theorem 9.2.5 (Existence theorem of the solution). Given $\dfrac{d\tilde y}{dx} = \tilde f(x, \tilde y)$ and the initial value $(x_0, \tilde y_0)$, with $\tilde f(x, \tilde y)$ a convex normal fuzzy-valued function continuous on the closed region æ: $|x - x_0| \leq a$, $d(\tilde y, \tilde y_0) \subseteq [b^-, b^+]$ $(a > 0,\ 0 < b^- < b^+)$, then $\dfrac{d\tilde y}{dx} = \tilde f(x, \tilde y)$ has at least one fuzzy-valued solution taking the value $\tilde y_0$ at $x = x_0$, determined and continuous on a certain interval containing $x_0$.
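For closed intervals on the real line, the Hausdorff measure of Definition 9.2.10 reduces to the familiar formula $\max(|x^- - y^-|, |x^+ - y^+|)$. A sketch checking that reduction numerically; discretizing the two intervals for the brute-force comparison is an assumption made only for this illustration:

```python
def hausdorff_interval(x, y):
    """Hausdorff distance between closed intervals x = [x-, x+] and y = [y-, y+]."""
    return max(abs(x[0] - y[0]), abs(x[1] - y[1]))

def hausdorff_brute(x, y, n=2000):
    """Brute-force d_H over dense samples of the two intervals."""
    xs = [x[0] + i * (x[1] - x[0]) / n for i in range(n + 1)]
    ys = [y[0] + i * (y[1] - y[0]) / n for i in range(n + 1)]
    dist = lambda p, s: min(abs(p - q) for q in s)
    return max(max(dist(p, ys) for p in xs), max(dist(q, xs) for q in ys))

a, b = (0.0, 1.0), (0.5, 3.0)
print(hausdorff_interval(a, b))  # 2.0
print(abs(hausdorff_interval(a, b) - hausdorff_brute(a, b)) < 1e-2)  # True
```

This is the metric under which the Lipschitz condition (9.2.6) below is stated.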


Proof: From the existence theorem for interval solutions, $\bigcup_{\alpha\in(0,1]}\alpha\dfrac{d\bar y_\alpha}{dx} = \bigcup_{\alpha\in(0,1]}\alpha\bar f_\alpha(x, \bar y_\alpha)$ has at least one solution through $\bar y_0$, determined and continuous on a certain interval containing $x_0$. Again,
\[
\bigcup_{\alpha\in(0,1]}\alpha\frac{d\bar y_\alpha}{dx} = \bigcup_{\alpha\in(0,1]}\alpha\bar f_\alpha(x, \bar y_\alpha) \iff \frac{d\tilde y}{dx} = \tilde f(x, \tilde y),
\]
hence the conclusion of the theorem holds.

Theorem 9.2.6 (Uniqueness theorem of the solution). Under the conditions of Theorem 9.2.5, if the fuzzy variable $\tilde y$ also satisfies a Lipschitz condition in the non-empty closed region æ, i.e., $\exists N > 0$ such that any two values $\tilde y_1, \tilde y_2$ in æ always satisfy
\[
d_H[\tilde f(x, \tilde y_1), \tilde f(x, \tilde y_2)] \leq N d_H(\tilde y_1, \tilde y_2), \tag{9.2.6}
\]
then a unique determined continuous fuzzy-valued solution exists for
\[
\begin{cases}
\dfrac{d\tilde y}{dx} = \tilde f(x, \tilde y),\\
\tilde f(x, \tilde y)|_{x=x_0,\ \tilde y=\tilde y_0} = \tilde f_0.
\end{cases}
\tag{9.2.7}
\]

Proof: We know from the proof of Theorem 9.2.5 that, for arbitrary $\alpha \in (0,1]$, $\dfrac{d\bar y_\alpha}{dx} = \bar f_\alpha(x, \bar y_\alpha)$ is continuous on æ: $|x - x_0| \leq a$, $|\bar y_\alpha - \bar y_{0\alpha}| \subseteq [b^-, b^+]$. Again,
\[
(9.2.6) \iff \bigcup_{\alpha\in(0,1]}\alpha d_H[\bar f_\alpha(x, \bar y_{1\alpha}), \bar f_\alpha(x, \bar y_{2\alpha})] \leq N\bigcup_{\alpha\in(0,1]}\alpha d_H(\bar y_{1\alpha}, \bar y_{2\alpha}),
\]
so for each such $\alpha$,
\[
d_H[\bar f_\alpha(x, \bar y_{1\alpha}), \bar f_\alpha(x, \bar y_{2\alpha})] \leq N d_H(\bar y_{1\alpha}, \bar y_{2\alpha}).
\]
From Theorem 9.2.5,
\[
\begin{cases}
\dfrac{d\bar y_\alpha}{dx} = \bar f_\alpha(x, \bar y_\alpha),\\
\bar f_\alpha(x, \bar y_\alpha)|_{x=x_0,\ \bar y=\bar y_{0\alpha}} = \bar f_{0\alpha}
\end{cases}
\]
has a unique determined continuous interval solution $\bar\varphi_\alpha(x, \bar y_\alpha)$. Because of the arbitrariness of $\alpha$ in $(0,1]$, $\tilde f = \bigcup_{\alpha\in(0,1]}\alpha\bar\varphi_\alpha(x, \bar y_\alpha)$ is the unique determined continuous fuzzy-valued solution to (9.2.7) on æ.

Theorem 9.2.7. Let $\tilde f(x, \tilde y)$ be a convex normal fuzzy-valued function, same-order integrable. Then $\dfrac{d\tilde y}{dx} = \tilde f(x, \tilde y)$ has a fuzzy-valued solution $\tilde y = \tilde\varphi(x) + \tilde c$, where $\tilde c$ is a fuzzy constant.


Proof: Since
\[
\int\frac{d\tilde y}{dx}dx = \int\tilde f(x, \tilde y)dx \iff \bigcup_{\alpha\in(0,1]}\alpha\int\frac{d\bar y_\alpha}{dx}dx = \bigcup_{\alpha\in(0,1]}\alpha\int\bar f_\alpha(x, \bar y_\alpha)dx,
\]
for $\forall\alpha \in (0,1]$ there exists $\bar y_\alpha = \int\bar f_\alpha(x, \bar y_\alpha)dx$. When $\bar f_\alpha(x, \bar y_\alpha)$ is same-order integrable with same-order primitive function $\bar\varphi_\alpha(x)$, there exists
\[
\bar y_\alpha = \bar\varphi_\alpha(x) + \bar c_\alpha \iff \bigcup_{\alpha\in(0,1]}\alpha\bar y_\alpha = \bigcup_{\alpha\in(0,1]}\alpha(\bar\varphi_\alpha(x) + \bar c_\alpha).
\]
Therefore, $\tilde y(x) = \tilde\varphi(x) + \tilde c$.
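Theorems 9.2.5–9.2.7 solve a fuzzy-valued ODE level-wise: for each $\alpha$, the $\alpha$-cut obeys an interval equation whose endpoints obey classical ODEs. A sketch for $\dfrac{d\tilde y}{dx} = \tilde y$ with a triangular fuzzy initial value; the Euler discretization and the finite $\alpha$-grid are assumptions of this illustration:

```python
import math

def euler(f, x0, y0, x_end, n=1000):
    """Forward Euler for a scalar ODE dy/dx = f(x, y)."""
    h = (x_end - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

def fuzzy_ode_cuts(tri, x0, x_end, alphas=(0.0, 0.5, 1.0)):
    """Solve dy/dx = y level-wise: each alpha-cut endpoint evolves classically."""
    a, b, c = tri
    out = {}
    for al in alphas:
        lo0, hi0 = a + al * (b - a), c - al * (c - b)
        out[al] = (euler(lambda x, y: y, x0, lo0, x_end),
                   euler(lambda x, y: y, x0, hi0, x_end))
    return out

cuts = fuzzy_ode_cuts((0.5, 1.0, 1.5), 0.0, 1.0)
print(cuts[1.0])  # core cut: both endpoints close to e = 2.718...
```

The exact $\alpha$-cut of the solution at $x = 1$ is the initial cut scaled by $e$, so the computed cuts can be checked endpoint by endpoint.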

9.3 Ordinary Differential Equations with Fuzzy Variables

Let $\tilde f(x)$ be a fuzzy-valued function as defined in Section 9.2; its derivative function $\dfrac{d\tilde f(x)}{dx}$ is a mapping from $R$ to $F(R)$. With the aid of the extension principle, suppose $\tilde x$ is a fuzzy point with support $S(\tilde x) \subseteq$ æ (æ a closed region). A general fuzzy variable can be represented by a nest of intervals, i.e., we can determine a unique fuzzy variable by
\[
\{(x_\lambda^-, x_\lambda^+) \mid \lambda \in [0,1]\}, \quad (x_\lambda^-, x_\lambda^+) \neq \emptyset, \quad \lambda_1 < \lambda_2 \Rightarrow (x_{\lambda_1}^-, x_{\lambda_1}^+) \supseteq (x_{\lambda_2}^-, x_{\lambda_2}^+),
\]
written down as
\[
\tilde x \triangleq \bigcup_{\lambda\in[0,1]}\lambda(x_\lambda^-, x_\lambda^+).
\]
But when the limits $x_\lambda^-, x_\lambda^+$ are attained, $(x_\lambda^-, x_\lambda^+)$ is taken to be the closed interval $[x_\lambda^-, x_\lambda^+]$, and we have
\[
\tilde x = \bigcup_{\lambda\in[0,1]}\lambda[x_\lambda^-, x_\lambda^+] = \bigcup_{\lambda\in[0,1]}\lambda\bar x_\lambda.
\]
Similarly, a fuzzy variable $\tilde y$ is defined by $\tilde y = \bigcup_{\lambda\in[0,1]}\lambda[y_\lambda^-, y_\lambda^+] = \bigcup_{\lambda\in[0,1]}\lambda\bar y_\lambda$.

Definition 9.3.1.
\[
\frac{d\tilde f(\tilde x)}{dx} \triangleq \frac{d}{dx}\tilde f\Big(\bigcup_{\lambda\in(0,1]}\lambda\bar x_\lambda\Big) \in F(F(R)),
\]
\[
\frac{\partial\tilde f(\tilde x, \tilde y)}{\partial x} \triangleq \frac{\partial}{\partial x}\tilde f\Big(\bigcup_{\lambda\in(0,1]}\lambda\bar x_\lambda, \bigcup_{\alpha\in(0,1]}\alpha\bar y_\alpha\Big) \in F(F(R)),
\]
\[
\frac{\partial\tilde f(\tilde x, \tilde y)}{\partial y} \triangleq \frac{\partial}{\partial y}\tilde f\Big(\bigcup_{\lambda\in(0,1]}\lambda\bar x_\lambda, \bigcup_{\alpha\in(0,1]}\alpha\bar y_\alpha\Big) \in F(F(R)).
\]


Because every $\dfrac{d\tilde f(\bar x_\lambda)}{dx}$, $\dfrac{\partial\tilde f(\bar x_\lambda, \bar y_\alpha)}{\partial x}$, $\dfrac{\partial\tilde f(\bar x_\lambda, \bar y_\alpha)}{\partial y}$ in the definition is an induced derivative or partial derivative of a fuzzy-valued function at ordinary points, Definition 9.3.1 can be further specified as follows.

Definition 9.3.2.
\[
\frac{d\tilde f(\tilde x)}{dx} \triangleq \bigcup_{\alpha\in(0,1]}\alpha\frac{d\bar f_\alpha}{dx}\Big(\bigcup_{\lambda\in(0,1]}\lambda\bar x_\lambda\Big),
\]
\[
\frac{\partial\tilde f(\tilde x, \tilde y)}{\partial x} \triangleq \bigcup_{\alpha\in(0,1]}\alpha\frac{\partial\bar f_\alpha}{\partial x}\Big(\bigcup_{\lambda\in(0,1]}\lambda\bar x_\lambda, \bigcup_{\alpha\in(0,1]}\alpha\bar y_\alpha\Big),
\]
\[
\frac{\partial\tilde f(\tilde x, \tilde y)}{\partial y} \triangleq \bigcup_{\alpha\in(0,1]}\alpha\frac{\partial\bar f_\alpha}{\partial y}\Big(\bigcup_{\lambda\in(0,1]}\lambda\bar x_\lambda, \bigcup_{\alpha\in(0,1]}\alpha\bar y_\alpha\Big).
\]
The results of Section 9.2 can then be extended to the case of differential equations of fuzzy-valued functions at a fuzzy point. The main results are stated below.

Theorem 9.3.1. Any normal type fuzzy-valued differential equation with fuzzy point $\tilde x$ can be changed into a first-order normal type equation:
\[
\frac{d\tilde y(\tilde x)}{dx} = \tilde f(\tilde x, \tilde y). \tag{9.3.1}
\]

Theorem 9.3.2 (Existence theorem of solution). Given (9.3.1) and the initial fuzzy value $(\tilde x_0, \tilde y_0)$, with $\tilde f(\tilde x, \tilde y)$ a continuous convex normal fuzzy-valued function on the closed region æ: $d(\tilde x, \tilde x_0) \subseteq [a^-, a^+]$, $d(\tilde y, \tilde y_0) \subseteq [b^-, b^+]$ $(a^+ > a^- > 0,\ b^+ > b^- > 0)$, there exists at least one fuzzy-valued solution to (9.3.1) taking the value $\tilde y_0$ at $\tilde x = \tilde x_0$, determined and continuous on a certain interval containing $\tilde x_0$, where $d(\tilde x, \tilde y) = d_H(\tilde x, \tilde y)$ is the Hausdorff measure defined by Definition 9.2.10.

Proof: Changing the formulas in the proof of Theorem 9.2.5 into
\[
\frac{dy_\alpha^-(x_\lambda^-)}{dx} = f_\alpha^-(x_\lambda^-, y_\alpha^-), \qquad \frac{dy_\alpha^+(x_\lambda^+)}{dx} = f_\alpha^+(x_\lambda^+, y_\alpha^+),
\]
each of $f_\alpha^-, f_\alpha^+$ yields at least one determined continuous solution at $x_0^-$ and $x_0^+$, respectively, for $\forall\lambda \in (0,1]$, $\alpha \in [0,1]$. Similarly to the proof of Theorem 9.2.5, the theorem can be certified.

Theorem 9.3.3 (Uniqueness theorem of the solution). Under the conditions of Theorem 9.3.2, if the fuzzy variable $\tilde y$ also satisfies a Lipschitz condition in the non-empty closed region æ, i.e., $\exists N > 0$ such that, for any two values $\tilde y_1$ and $\tilde y_2$ in æ, there always holds


\[
d_H[\tilde f(\tilde x, \tilde y_1), \tilde f(\tilde x, \tilde y_2)] \leq N d_H(\tilde y_1, \tilde y_2), \tag{9.3.2}
\]
then
\[
\begin{cases}
\dfrac{d\tilde y}{dx} = \tilde f(\tilde x, \tilde y),\\
\tilde f(\tilde x, \tilde y)|_{\tilde x=\tilde x_0,\ \tilde y=\tilde y_0} = \tilde f_0
\end{cases}
\tag{9.3.3}
\]
has a unique determined continuous fuzzy-valued solution.

Proof: Because
\[
\frac{d\tilde y(x)}{dx} = \tilde f(\tilde x, \tilde y) \iff \bigcup_{\alpha\in(0,1]}\alpha\frac{d\bar y_\alpha}{dx}\Big(\bigcup_{\lambda\in(0,1]}\lambda\bar x_\lambda\Big) = \bigcup_{\alpha\in(0,1]}\alpha\bar f_\alpha\Big(\bigcup_{\lambda\in(0,1]}\lambda(\bar x_\lambda, \bar y_\alpha)\Big),
\]
\[
(9.3.2) \iff \bigcup_{\alpha\in(0,1]}\alpha d_H\Big[\bar f_\alpha\Big(\bigcup_{\lambda\in(0,1]}\lambda(\bar x_\lambda, \bar y_{1\alpha})\Big), \bar f_\alpha\Big(\bigcup_{\lambda\in(0,1]}\lambda(\bar x_\lambda, \bar y_{2\alpha})\Big)\Big] \leq N\bigcup_{\alpha\in(0,1]}\alpha d_H(\bar y_{1\alpha}, \bar y_{2\alpha}),
\]
for $\lambda \in (0,1]$, $\alpha \in [0,1]$ there is
\[
\frac{d\bar y_\alpha(\bar x_\lambda)}{dx} = \bar f_\alpha(\bar x_\lambda, \bar y_\alpha), \qquad d_H[\bar f_\alpha(\bar x_\lambda, \bar y_{1\alpha}), \bar f_\alpha(\bar x_\lambda, \bar y_{2\alpha})] \leq N d_H(\bar y_{1\alpha}, \bar y_{2\alpha}).
\]
Similarly to the proof of Theorem 9.2.6, the theorem is certified.

Theorem 9.3.4. Let $\tilde f(\tilde x, \tilde y)$ be a continuous convex normal fuzzy-valued function. Then the fuzzy-valued differential equation $\dfrac{d\tilde y}{dx} = \tilde f(\tilde x, \tilde y)$ with a fuzzy point has a fuzzy-valued solution $\tilde y = \tilde\varphi(\tilde x) + \tilde c$, where $\tilde c$ is a fuzzy constant.

Proof: Because
\[
\int\frac{d\tilde y(\tilde x)}{dx}dx = \int\tilde f(\tilde x, \tilde y)dx \iff \bigcup_{\alpha\in(0,1]}\alpha\int\frac{d\bar y_\alpha}{dx}\Big(\bigcup_{\lambda\in(0,1]}\lambda\bar x_\lambda\Big)dx = \bigcup_{\alpha\in(0,1]}\alpha\int\bar f_\alpha\Big(\bigcup_{\lambda\in(0,1]}\lambda(\bar x_\lambda, \bar y_\alpha)\Big)dx,
\]
for $\forall\lambda \in (0,1]$, $\alpha \in [0,1]$ there is $\bar y_\alpha(\bar x_\lambda) = \int\bar f_\alpha(\bar x_\lambda, \bar y_\alpha)dx$. Similarly to the proof of Theorem 9.2.7, the theorem is certified.

9.4 Fuzzy Duoma Debt Model

A variety of fuzzy phenomena exist in the economic systems of the real world, so it is of great significance to build models with fuzzy quantities to handle them. However, a function of fuzzy numbers need not be differentiable, and the traditional operation rules cannot be used to solve fuzzy quadratic equalities. In this section, the debt model is generalized, and a widely applicable fuzzy debt model is built by means of the concept of the inverse image defined by fuzzy functions.

9.4.1 Building the Model

Let $g_i$ be real functions on $T \times R$. When the national income increases at a constant relative rate, its curve is $\dfrac{dY(t)}{dt} = \gamma_1 Y(t)$ $(0 < \gamma_1 < 1)$, and the model of the debt $D(t)$ is given by
\[
\begin{aligned}
g_1&: \frac{dD(t)}{dt} - \nu_1 Y(t) = 0 \quad (0 < \nu_1 < 1),\\
g_2&: \frac{dY(t)}{dt} - \gamma_1 Y(t) = 0 \quad (0 < \gamma_1 < 1),\\
Q_1&: D(0) = D_0, \qquad Q_2: Y(0) = Y_0,
\end{aligned}
\tag{9.4.1}
\]
together with the function it determines,
\[
B(t) = \frac{iD(t)}{Y(t)}. \tag{9.4.2}
\]
(9.4.1) and (9.4.2) are called the Duoma (Domar) debt model [Duo44], where $i$ (a constant) is the interest rate and $\gamma_1, \nu_1$ are parameters.

Suppose the functions being discussed keep good properties, such as convexity and continuous differentiability, in a fuzzy environment. It is vital for us to fuzzify the Duoma model (9.4.1) and (9.4.2) in order to solve the operational and differential problems of fuzzy functions [Hei87][Zad75a]. The Duoma debt model is generalized in the following discussion; first comes a definition.

Definition 9.4.1. Let $T$ be a closed space on the real axis $R$, and $C(T)$ the cluster of continuous functions from $T$ to $R$. Again, suppose $\tilde G$ is a fuzzy function from $T \times R^2$ to $\tilde W(R)$ and $\tilde D, \tilde Y$ are approximate quantities of $D, Y$, where $\tilde W(R)$ denotes a subclass of fuzzy sets. Then we define
\[
\begin{aligned}
\tilde{\bar G}_1&: T \times C_1(T) \to \tilde W_1(R), \quad \tilde{\bar G}_1 = \Big(t, \tilde Y(t), \frac{d\tilde D(t)}{dt}\Big),\\
\tilde{\bar G}_2&: T \times C_2(T) \to \tilde W_2(R), \quad \tilde{\bar G}_2 = \Big(t, \tilde Y(t), \frac{d\tilde Y(t)}{dt}\Big),\\
Q_1&: C_1(T) \to R, \quad Q_1(\tilde D) = \tilde D_0,\\
Q_2&: C_2(T) \to R, \quad Q_2(\tilde Y) = \tilde Y_0,
\end{aligned}
\]
where $\tilde D, \tilde Y \in C(T)$, $t \in T$, and $\tilde D_0, \tilde Y_0$ are fuzzy numbers.
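Before fuzzification, the crisp Duoma (Domar) model (9.4.1)–(9.4.2) can be solved in closed form: $Y(t) = Y_0 e^{\gamma_1 t}$, $D(t) = D_0 + \dfrac{\nu_1 Y_0}{\gamma_1}(e^{\gamma_1 t} - 1)$, and the debt ratio $B(t) = \dfrac{iD(t)}{Y(t)}$ tends to $\dfrac{i\nu_1}{\gamma_1}$ as $t \to \infty$. A sketch checking this limit numerically; the parameter values here are illustrative assumptions:

```python
import math

def domar(t, D0=1.0, Y0=10.0, gamma1=0.04, nu1=0.1, i=0.05):
    """Closed-form solution of (9.4.1) and the debt ratio (9.4.2)."""
    Y = Y0 * math.exp(gamma1 * t)
    D = D0 + (nu1 * Y0 / gamma1) * (math.exp(gamma1 * t) - 1.0)
    return D, Y, i * D / Y

_, _, B = domar(500.0)
print(abs(B - 0.05 * 0.1 / 0.04) < 1e-4)  # True: B(t) -> i * nu1 / gamma1
```

The bounded limit of $B(t)$ is the classical observation that the debt burden stabilizes when income grows exponentially, which is the quantity the fuzzy model below keeps track of.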


Supposing the fuzzy derivative $\dfrac{d\tilde D(t)}{dt}$ to be the approximate quantity of the total net loan rate, and $\tilde Y(t)$ that of the flow of national income, the fuzzy differential equations
\[
\frac{d\tilde D(t)}{dt} = \gamma\tilde Y(t) + \tilde U_1 \quad (0 < \gamma < 1),
\qquad
\frac{d\tilde Y(t)}{dt} = \nu\tilde Y(t) + \tilde U_2 \quad (0 < \nu < 1)
\]
reflect the proportion between the total loan rate and the national income, and between the rate of change of national income and the national income, where $\gamma, \nu$ are parameters and $\tilde U_1, \tilde U_2$ are fuzzy numbers.

Definition 9.4.2. From the fixed-solution problem
\[
\begin{aligned}
\tilde{\bar G}_1&: \frac{d\tilde D(t)}{dt} - \gamma\tilde Y(t) = \tilde U_1 \quad (0 < \gamma < 1),\\
\tilde{\bar G}_2&: \frac{d\tilde Y(t)}{dt} - \nu\tilde Y(t) = \tilde U_2 \quad (0 < \nu < 1),\\
Q_1&: \tilde D(0) = \tilde D_0, \qquad Q_2: \tilde Y(0) = \tilde Y_0,
\end{aligned}
\tag{9.4.3}
\]
the function determined by a fuzzy solution,
\[
\tilde B(t) \equiv \frac{i\tilde D(t)}{\tilde Y(t)}, \tag{9.4.4}
\]
is called a fuzzy Duoma debt function, while the model determined by (9.4.3) and (9.4.4) is called the fuzzy Duoma debt model.

The economy needs to develop greatly in order to keep the debt within an allowable level, reflected in the following mathematical model:
\[
\max\tilde B(t) = \frac{i\min_j\tilde D_j(t)}{\max_j\tilde Y_j(t)}, \tag{9.4.5}
\]
where $\tilde D_j(t), \tilde Y_j(t)$ are fuzzy solution classes of (9.4.3).

9.4.2 Solution and Its Properties in the Fuzzy Duoma Debt Model

Definition 9.4.3. (9.4.3) is called a system of non-homogeneous fuzzy differential equations with initial conditions $\tilde D_0, \tilde Y_0$, while the fuzzy subset of the space $C(T)$
\[
\tilde B = (R(\tilde{\bar G}, \tilde U))_{C(T)} \cap R(\tilde Q, \tilde I)
\]


is a fuzzy solution to the problem, where $\tilde I$ stands for an initial value. Here $(R(\tilde{\bar G}, \tilde U))_{C(T)}$ means mapping the fuzzy subset $R(\tilde{\bar G}, \tilde U) \in F(T \times C(T))$ into a strong mapping of $C(T)$ [Hei83][Hei87].

Coming next is a discussion of the homogeneous system. Obviously
\[
\tilde B = (R(\tilde{\bar G}, 0))_{C(T)} \cap R(\tilde Q, \tilde I) \tag{9.4.6}
\]
is a fuzzy solution to the homogeneous problem corresponding to (9.4.3).

Since $\tilde D, \tilde Y$ are continuous on $T$, their membership degrees can be respectively calculated as follows:
\[
\begin{aligned}
\tilde B(\tilde D) &= (R(\tilde{\bar G}_1, 0))_{C(T)}(\tilde D) \wedge R(Q_1, \tilde I_1)\\
&= \inf_{t\in T}\Big\{R(\tilde{\bar G}_1, 0)\Big(t, \tilde Y, \frac{d\tilde D}{dt}\Big) \wedge \tilde D(0)\Big\}\\
&= \inf_{t\in T}\Big\{\tilde G_1\Big(t, \tilde Y(t), \frac{d\tilde D(t)}{dt}\Big)(0) \wedge \tilde D(0)\Big\}.
\end{aligned}
\]
Similarly,
\[
\tilde B(\tilde Y) = \inf_{t\in T}\Big\{\tilde G_2\Big(t, \tilde Y(t), \frac{d\tilde Y(t)}{dt}\Big)(0) \wedge \tilde Y(0)\Big\}.
\]

Definition 9.4.4. We call the fuzzy functions
\[
\begin{cases}
X_{\tilde D_0}(t)(y) = \sup\limits_{\{\tilde D\in E(T):\ \tilde D(t)=y\}}\tilde B(\tilde D),\\[2mm]
X_{\tilde Y_0}(t)(y) = \sup\limits_{\{\tilde Y\in E(T):\ \tilde Y(t)=y\}}\tilde B(\tilde Y)
\end{cases}
\]
trajectories of the problems
\[
\begin{cases}
(\tilde G_1, 0),\\
(\tilde Q_1, \tilde I_1)
\end{cases}
\tag{9.4.7}
\]
and
\[
\begin{cases}
(\tilde G_2, 0),\\
(\tilde Q_2, \tilde I_2),
\end{cases}
\tag{9.4.8}
\]
respectively, where $E(T)$ is a subclass of $C(T)$ as well as the domain of the fuzzy trajectory $X$.

Theorem 9.4.1. Given that $g$ is a function on $T \times R^2$ such that
\[
g(t, Y, y) \subset \tilde G(t, Y, y) \tag{9.4.9}
\]
for each $t \in T$ and $Y, y, D_0, Y_0 \in R$: if $D_0$ and $Y_0$ are solutions to (9.4.1), then $D_0(t) \subset X(t)$, $Y_0(t) \subset X(t)$ holds, where $X(t) = X_{\tilde D_0}(t) \cup X_{\tilde Y_0}(t)$ is the union of the fuzzy trajectories of Equations (9.4.7) and (9.4.8).


Proof: By using (9.4.6) and (9.4.9) we obtain
\[
\begin{aligned}
\tilde B(Y_0) &= \inf_{t\in T}\Big\{\Big(\frac{d\tilde Y(t)}{dt} - \nu\tilde Y(t)\Big)(0) \wedge \tilde Y(0)\Big\}\\
&= \inf_{t\in T}\Big\{\Big(\frac{d\tilde Y(t)}{dt} - \nu\tilde Y(t)\Big)\Big(\frac{dY(t)}{dt} - \nu_1 Y(t)\Big) \wedge \tilde Y_0\Big\} = 1 \quad \text{(from (9.4.1))},
\end{aligned}
\]
i.e., the function $Y_0$ satisfies (9.4.7) with membership degree equal to 1:
\[
\tilde Y(t)(Y_0(t)) = \sup_{\{Y\in E(T):\ Y(t)=Y_0(t)\}}\tilde B(Y) \geq \tilde B(Y_0) = 1,
\]
and it follows that $Y_0 \subset \tilde Y(t) = X_{\tilde Y_0}(t)$. Similarly we can prove $D_0 \subset \tilde D(t) = X_{\tilde D_0}(t)$. Hence $Y_0 \subset X(t)$, $D_0 \subset X(t)$.

Let $D_0 = p_1(t, \gamma_1, D_0)$, $Y_0 = p_2(t, \nu_1, Y_0)$ be a singular solution to (9.4.1), with $p_j$ $(j = 1, 2)$ mapped from $T \times R \times R^n$ to $R$. If $\gamma_1 = (\gamma_{11}, \ldots, \gamma_{1n})$, $\nu_1 = (\nu_{11}, \ldots, \nu_{1n}) \in T$ and $D_0, Y_0 \in R$ are given small variations, respectively, i.e.,
\[
\gamma_1 \to \gamma \in \tilde W(T),\ \nu_1 \to \nu \in \tilde W(T),\ D_0 \to \tilde D \in \tilde W(R),\ Y_0 \to \tilde Y \in \tilde W(R),
\]
where $\gamma, \nu, \tilde D, \tilde Y$ are approximate quantities of $\gamma_1, \nu_1, D_0, Y_0$, then fuzzy differential equations are obtained as follows:
\[
\begin{cases}
(\tilde{\bar G}_1, 0),\\
(p_1, \tilde I_1)
\end{cases}
\tag{9.4.10}
\]
and
\[
\begin{cases}
(\tilde{\bar G}_2, 0),\\
(p_2, \tilde I_2).
\end{cases}
\tag{9.4.11}
\]
Their approximate solutions and fuzzy trajectories are $p_1(t, \gamma, \tilde D_0)$, $p_2(t, \nu, \tilde Y_0)$ and $X(t)$, respectively.

Theorem 9.4.2. Let $D_0, Y_0$ be a solution to Model (9.4.1). Then
\[
\tilde D = p_1(t, \gamma, \tilde D) \subset X_{\tilde D}(t) \subset X(t),
\qquad
\tilde Y = p_2(t, \nu, \tilde Y) \subset X_{\tilde Y}(t) \subset X(t),
\]
where $X(t) = X_{\tilde D}(t) \cup X_{\tilde Y}(t)$.


Proof: Because $D_0, Y_0$ is a solution to (9.4.1), it must be a solution to Equations (9.4.10) and (9.4.11) by Theorem 9.4.1. Then the membership degree of the function $Y_0$ in the fuzzy solution $\tilde B$ is
\[
\begin{aligned}
\tilde B(Y_0) &= \inf_{t\in T}\Big\{\Big(\frac{dY_0(t)}{dt} - \nu_1 Y_0(t)\Big)(0)\Big\}\\
&= \inf_{t\in T}\Big\{\sup_{\{\nu_1\in R:\ \nu_1 Y_0(t) = \frac{dY_0(t)}{dt}\}} C(\nu_1) \wedge \tilde I_2(0)\Big\}\\
&\geq C(\nu_1) \wedge Y_0.
\end{aligned}
\tag{9.4.12}
\]
Similarly,
\[
\tilde B(D_0) \geq C(\gamma_1) \wedge D_0. \tag{9.4.13}
\]
If
\[
\sup_{\{\nu_1\in R:\ \nu_1 Y_0(t) = \frac{dY_0(t)}{dt}\}} C(\nu_1) = C(\nu_1)
\quad\text{and}\quad
\sup_{\{\gamma_1\in R:\ \gamma_1 D_0(t) = \frac{dD_0(t)}{dt}\}} C(\gamma_1) = C(\gamma_1),
\]
then the above formulas (9.4.12) and (9.4.13) become equalities. Therefore, for $y \in R$,
\[
\begin{aligned}
X_{\tilde Y_0}(t)(y) &= \sup_{\{Y\in E(T):\ Y(t)=y\}}\tilde B(Y)\\
&\geq \sup_{\{\nu_1, Y_0:\ p_2(t,\nu_1,Y_0)=y\}}\tilde B(p)\\
&\geq \sup_{\{\nu_1, Y_0:\ p_2(t,\nu_1,Y_0)=y\}}\{C(\nu_1) \wedge Y_0\}\\
&= p_2(t, \nu, \tilde Y)(y).
\end{aligned}
\]
Here $E(T) = \{Y \in C(T): Y_0(t) = p_2(t, \nu_1, Y_0),\ \nu_1 \in R^n,\ Y_0 \in R\}$ is a subclass of solutions to Equations (9.4.10) and (9.4.11). So $p_2(t, \nu, \tilde Y) \subset X_{\tilde Y}(t)$. Similarly, we can prove
\[
X_{\tilde D}(t)(y) = \sup_{\{D\in E(T):\ D(t)=y\}}\tilde B(\tilde D) \geq p_1(t, \gamma, \tilde D)(y),
\]


i.e., $p_1(t, \gamma, \tilde D) \subset X_{\tilde D}(t)$; but $X(t) = X_{\tilde D}(t) \cup X_{\tilde Y}(t)$, therefore
\[
p_1(t, \gamma, \tilde D) \subset X(t), \qquad p_2(t, \nu, \tilde Y) \subset X(t).
\]

Corollary 9.4.1. When $\tilde B(D_0) = C(\gamma_1) \wedge D_0$, $\tilde B(Y_0) = C(\nu_1) \wedge Y_0$, then $X_{\tilde D}(t) = p_1(t, \gamma, \tilde D)$, $X_{\tilde Y}(t) = p_2(t, \nu, \tilde Y)$.

The above fuzzy trajectories $X_{\tilde D}(t), X_{\tilde Y}(t)$ are equal to the direct image of the fuzzy subset $C \times D$ under the usual solutions $p_1$ and $p_2$.

Consider the particular non-homogeneous linear differential equations
\[
\begin{cases}
\dfrac{dD(t)}{dt} - \gamma_1 D(t) = U_1,\\
D(0) = D_0
\end{cases}
\tag{9.4.14}
\]
and
\[
\begin{cases}
\dfrac{dY(t)}{dt} - \nu_1 Y(t) = U_2,\\
Y(0) = Y_0.
\end{cases}
\tag{9.4.15}
\]
Suppose $U_1 \to \tilde U_1$, $D_0 \to \tilde D_0$ and $U_2 \to \tilde U_2$, $Y_0 \to \tilde Y_0$; then the problem above is changed into the fuzzy non-homogeneous linear differential equations
\[
\begin{cases}
\dfrac{d\tilde D(t)}{dt} - \gamma\tilde D(t) = \tilde U_1,\\
\tilde D(0) = \tilde D_0
\end{cases}
\tag{9.4.16}
\]
and
\[
\begin{cases}
\dfrac{d\tilde Y(t)}{dt} - \nu\tilde Y(t) = \tilde U_2,\\
\tilde Y(0) = \tilde Y_0.
\end{cases}
\tag{9.4.17}
\]
Let
\[
X_{\tilde D}(t) = p_1(t, \gamma, \tilde U_1, \tilde D_0) = e^{\gamma t}\tilde D_0 - \frac{(1 - e^{\gamma t})}{\gamma}\tilde U_1,
\qquad
X_{\tilde Y}(t) = p_2(t, \nu, \tilde U_2, \tilde Y_0) = e^{\nu t}\tilde Y_0 - \frac{(1 - e^{\nu t})}{\nu}\tilde U_2
\tag{9.4.18}
\]
be the fuzzy trajectories of Equations (9.4.16) and (9.4.17). This is the direct image of the fuzzy subsets $\tilde U_1 \times \tilde D_0$, $\tilde U_2 \times \tilde Y_0$ under the solutions to (9.4.14) and (9.4.15).
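At a fixed $t > 0$, the trajectory $X_{\tilde D}(t)$ in (9.4.18) can be evaluated cut-wise with interval arithmetic: since $e^{\gamma t} > 0$ and $-(1 - e^{\gamma t})/\gamma > 0$, both coefficients are positive and the $\alpha$-cut endpoints combine monotonically. A sketch; the particular $\alpha$-cut inputs and parameter values are illustrative assumptions:

```python
import math

def X_D_cut(t, gamma, D0_cut, U1_cut):
    """alpha-cut of X_D(t) = e^{gamma t} D0 - ((1 - e^{gamma t})/gamma) U1, t >= 0."""
    e = math.exp(gamma * t)
    c = -(1.0 - e) / gamma          # positive coefficient on U1 for t > 0
    lo = e * D0_cut[0] + c * U1_cut[0]
    hi = e * D0_cut[1] + c * U1_cut[1]
    return (lo, hi)

# D0 with cut [0.9, 1.1], U1 with cut [0.1, 0.3], gamma = 0.05, t = 10:
print(X_D_cut(10.0, 0.05, (0.9, 1.1), (0.1, 0.3)))
```

Because both coefficients stay positive, the resulting cut is again a well-ordered interval, with no min/max reordering needed at any $\alpha$.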


Again, supposing
\[
W_{\tilde D}(t) = \Big(\tilde D_0 + \frac{\tilde U_1}{\gamma}\Big)e^{\gamma t} - \frac{\tilde U_1}{\gamma},
\qquad
W_{\tilde Y}(t) = \Big(\tilde Y_0 + \frac{\tilde U_2}{\nu}\Big)e^{\nu t} - \frac{\tilde U_2}{\nu}
\tag{9.4.19}
\]
to be fuzzy solution sets of (9.4.10) and (9.4.11), it is easy to prove the following lemma.

Lemma 9.4.1. XD˜ (t) ⊂ WD˜ (t), XY˜ (t) ⊂ WY˜ (t).

9.5 Model for Fuzzy Solow Growth in Economics

9.5.1 Introduction

Solow, an American professor and economist, established his determinate economic growth model as early as 1956 [Sol56]; because of his great contribution to the development of economic growth theories, he won the Nobel prize in economics in 1987. However, owing to the participation of human consciousness, a large quantity of indetermination exists in the economic systems of the real world. This kind of indetermination is not only random but also fuzzy. The author has researched randomness with a fuzzy Solow economic growth model and obtained some very good results; only the fuzzy situation is discussed below. Progress, however, has been surprisingly slow, because the problems run into non-differentiability, non-integrability, and the question of whether fuzzy function equations are closed under arithmetic. In order to crack this hard nut, the author discusses the building, determination and solution of the fuzzy Solow economic growth model, as well as the solution of the model with fuzzy coefficients, by using fuzzy mapping theories.

9.5.2 Building of the Fuzzy Solow Economic Growth Model

Suppose the following.
1) The production function is $Y = f(K, L)$ $(K, L > 0)$, where $Y$ is the output (not including depreciation), $K$ capital, and $L$ the labor force. Under complete competition, there exists the following:
   i) Returns to scale are constant, that is, $Y = Lf\Big(\dfrac{K}{L}, 1\Big) = L\varphi(K^*)$, where $K^* = \dfrac{K}{L}$.
   ii) Marginal productivity gradually decreases: $\dfrac{\partial f}{\partial K} > 0$, $\dfrac{\partial f}{\partial L} > 0$, $\dfrac{\partial^2 f}{\partial K^2} < 0$, $\dfrac{\partial^2 f}{\partial L^2} < 0$.
   iii) The output–capital ratio $q = \dfrac{Y}{K}$ is variable.


2) Equilibrium condition: the planned saving rate $S$ is equal to the planned investment rate $\dfrac{dK^*(t)}{dt}$.
3) The labor force goes up exponentially: $L = L_0 e^{\lambda t}$ $(\lambda > 0)$.

Hence, with the capital–labor ratio $K^*(t) \equiv \dfrac{K(t)}{L(t)}$, we have
\[
\frac{dK^*(t)}{dt} = S\varphi(K^*(t)) - \lambda K^*(t), \tag{9.5.1}
\]
\[
K^*(t_0) = K_0^*, \tag{9.5.2}
\]
called the Solow economic growth model, where (9.5.1) is the main equation and (9.5.2) the initial condition, $K_0^* = \dfrac{K_0}{L_0}$ (constant).

We expand the classic Solow model to the fuzzy situation in this section, introducing the concepts of fuzzy function and mapping first.

Definition 9.5.1. Let $V$ be an arbitrary linear space. A fuzzy set $\tilde A$ on $V$ is a function taking values in the range $[0,1]$ on $V$, while the value $\tilde A(x)$ is called the membership degree of $x$ with respect to $\tilde A$.

Definition 9.5.2. Let $\tau$ be a closed interval on the real line $R$, and $c(\tau)$ the cluster of all continuous functions from $\tau$ to $R$. Again, let $\tilde G$ be a fuzzy function from $\tau \times R^2$ to the space $\tilde W(R)$ and $\tilde K_0^*$ an approximate value of $K_0^*$. Then a fuzzy function is defined as
\[
\tilde G: \tau \times C(\tau) \to \tilde W(R), \quad \tilde G(t, \tilde K^*) = \tilde G\Big(t, \tilde K^*, \frac{d\tilde K^*}{dt}\Big);
\qquad
\tilde g: C(\tau) \to R, \quad \tilde g(\tilde K^*) = \tilde K^*(t_0) \in \tilde W(R),
\]
where $\tilde K^* \in C(\tau)$, $t_0, t \in \tau$.

Imitating this, we can give the definition of the fuzzy Solow model.

Definition 9.5.3. Suppose:
(1) the non-distinct production function is of the form $\tilde Y = \tilde f(\tilde K, \tilde L)$ $\Big(\dfrac{\tilde K(t)}{\tilde L(t)} > 0\Big)$;
(2) the fuzzy equilibrium condition $\tilde S = \dfrac{d\tilde K^*(t)}{dt}$ is satisfied;
(3) the fuzzy labor force increases exponentially: $\tilde L = \tilde L_0 e^{\lambda t}$ $(\lambda > 0)$.
Under complete competition, there exists the following:
a. invariance of returns to scale, that is, $\tilde Y = \tilde L\tilde f\Big(\dfrac{\tilde K}{\tilde L}, 1\Big) = \tilde L\tilde\varphi(\tilde K^*)$;
b. gradually decreasing marginal production;
c. a changeable fuzzy output–capital ratio $\tilde q = \dfrac{\tilde Y}{\tilde K}$.
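For the Cobb–Douglas case $\varphi(K^*) = (K^*)^a$, the crisp main equation (9.5.1) has the steady state $K^*_\infty = (S/\lambda)^{1/(1-a)}$, and forward iteration converges to it. A sketch; the parameter values and the Euler discretization are illustrative assumptions, not part of the model above:

```python
def solow_path(K0, S=0.3, lam=0.05, a=0.5, dt=0.1, steps=8000):
    """Euler iteration of dK*/dt = S * (K*)**a - lam * K* (equation (9.5.1))."""
    K = K0
    for _ in range(steps):
        K += dt * (S * K**a - lam * K)
    return K

K_inf = (0.3 / 0.05) ** (1 / (1 - 0.5))     # (S/lam)^(1/(1-a)) = 36.0
print(abs(solow_path(1.0) - K_inf) < 1e-3)  # True
```

The fixed point of the Euler map coincides with the steady state $S\varphi(K^*) = \lambda K^*$, so the discretization introduces no bias in the limit.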

9.5 Model for Fuzzy Solow Growth in Economics


The properties when the capital-labor ratio K̃*(t) ≡ K̃(t)/L̃(t) reaches equilibrium are fuzzy, so that we have the system

(G̃, 0̃), (g̃, K̃*(t0)),   (9.5.3)

which is called a fuzzy Solow economic growth model. The special form of (9.5.3) is

dK̃*(t)/dt = S̃φ(K̃*(t)) − λK̃*(t),   K̃*(t0) = K̃0*,   (9.5.4)

where S̃, K̃0*, K̃*(t), dK̃*(t)/dt ∈ W̃(R).

9.5.3 Solution and Its Properties of the Model

Definition 9.5.4. Suppose (X, d) is a complete metric space. We define

δ(X̃, Ỹ) = sup_{x∈X̃, y∈Ỹ} d(x, y),   ∀ X̃, Ỹ ∈ CB(X),

where CB(X) denotes the cluster of all nonempty closed bounded sets in X [Hei83][Hei87].

Definition 9.5.5. E(τ) is a sub-cluster of C(τ), and it is a domain of the fuzzy trajectory K̃ in equation (9.5.3).

Definition 9.5.6. We call the fuzzy function

K̃(t)(y) = sup_{K*∈E(τ): K*(t)=y} Ã(K*)   (9.5.5)

a fuzzy solution set of system (9.5.3), and E(τ) the domain of the fuzzy solution set K̃.

Definition 9.5.7. We call

Ã = R(G̃, 0̃) ∧_{C(τ)} R(g̃, K̃0*)

a solution to the fuzzy Solow economic growth model (9.5.3). Here R(G̃, 0̃)|_{C(τ)} represents a strong mapping of the fuzzy subset R(G̃, 0̃) ∈ F(τ × C(τ)) to C(τ), and the membership function of the solution is

Ã(K̃*) = R(G̃, 0̃)(K̃*) ∧_{C(τ)} R(g̃, K̃0*)(K̃*),   (9.5.6)

i.e.,

Ã(K̃*) = R(dK̃*(t)/dt − S̃φ(K̃*(t)) + λK̃*(t), 0̃) ∧ inf_{t∈τ} {R(g̃(K̃*(0)), K̃0*)}
       = inf_{t∈τ} {(dK*(t)/dt − S̃φ(K*(t)) + λK*(t))(0̃) ∧ K̃0*(K*(t0))},

where t0, t ∈ τ.

Theorem 9.5.1. Suppose G̃ is a fuzzy mapping of the first kind from [0, τ] × E(τ) ⊂ C(τ) to W̃(R), and there exists q ∈ (0, 1) such that for all t ∈ [0, τ] and K1*(t), K2*(t) ∈ E(τ) ⊂ C(τ),

δ[f(t, K1*(t)), f(t, K2*(t))] ≤ q max{d[K1*(t), K2*(t)], δ[K1*(t), f(t, K1*(t))], δ[K2*(t), f(t, K2*(t))], δ[K1*(t), f(t, K2*(t))], δ[K2*(t), f(t, K1*(t))]}.

Then a solution K̃* of (9.5.4) exists in E(τ) ⊂ C(τ).

Proof: For arbitrarily given β ∈ (0, 1), since q ∈ (0, 1), we have q^β, q^{1−β} ∈ (0, 1). For arbitrary K*(t) ∈ E(τ), define

T(K*(t)) = K̃0* + ∫_0^t f(τ, K*(τ)) dτ,

where f is a multi-valued mapping satisfying

d(K*, TK*) ≤ q^β δ(K*, K̃0* + ∫_0^t f(τ, K*) dτ);

then T is a single-valued mapping of E(τ) → E(τ). From the assumption and the fundamental theorem of integral calculus, we have

d(TK1*(t), TK2*(t))
≤ δ(K̃0* + ∫_0^t f(τ, K1*(τ)) dτ, K̃0* + ∫_0^t f(τ, K2*(τ)) dτ)
≤ q max{d(K1*, K2*), δ(K1*(t), K̃0* + ∫_0^t f(τ, K1*(τ)) dτ), δ(K2*(t), K̃0* + ∫_0^t f(τ, K2*(τ)) dτ), δ(K1*(t), K̃0* + ∫_0^t f(τ, K2*(τ)) dτ), δ(K2*(t), K̃0* + ∫_0^t f(τ, K1*(τ)) dτ)}
= q · q^{−β} max{q^β d(K1*, K2*), q^β δ(K1*(t), K̃0* + ∫_0^t f(τ, K1*(τ)) dτ), q^β δ(K2*(t), K̃0* + ∫_0^t f(τ, K2*(τ)) dτ), q^β δ(K1*(t), K̃0* + ∫_0^t f(τ, K2*(τ)) dτ), q^β δ(K2*(t), K̃0* + ∫_0^t f(τ, K1*(τ)) dτ)}
≤ q^{1−β} max{d(K1*, K2*), d(K1*, TK1*), d(K2*, TK2*), d(K1*, TK2*), d(K2*, TK1*)},

and this holds for all K1*, K2* ∈ E(τ).

Therefore, for every K1*, K2*, T is a Ciric-type contraction mapping, so T has a unique fixed point K̄* in E(τ) ⊂ C(τ), and the iterative sequence {Kn* = T^n K0*}_{n=1}^∞ converges to K̄* for any K0* ∈ E(τ). Because K̄* = TK̄* with TK̄* ∈ C(τ), we have K̄* ∈ C(τ); hence (9.5.4) has a fuzzy solution K̃* in E(τ).

Theorem 9.5.2. Suppose

g(t, K*(t), dK*(t)/dt) = dK*(t)/dt − Sφ(K*(t)) + λ1 K*(t)

is a function on τ × R², with

g(t, K*(t), dK*(t)/dt) ⊂ G̃(t, K̃*(t), dK̃*(t)/dt).

If K0*(t) is a solution to the ordinary system

g(t, K*(t), dK*(t)/dt) = 0,   K*(t0) = K0*,

then K0*(t) ⊂ K̃(t) holds for every t ∈ τ, dK*(t)/dt ∈ E(τ).

Proof: By using (9.5.5) and (9.5.6), we obtain

Ã(K0*) = inf_{t∈τ} {(dK0*(t)/dt − Sφ(K0*(t)) + λK0*(t))(0̃) ∧ K̃0*(K0*(t0))} = 1,   (9.5.7)

since g(t, K0*(t), dK0*(t)/dt) = 0 and K0*(t0) = K0*; i.e., the function K0* satisfies (9.5.3) with grade of membership equal to 1:

K̃(t)(K0*(t)) = sup_{K*∈E(τ): K*(t)=K0*(t)} Ã(K*) ≥ Ã(K0*) = 1,

and it follows that

K̃(t) ⊃ K0*(t).
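The fixed-point machinery behind Theorem 9.5.1 can be illustrated numerically: Picard iteration T(K)(t) = K0 + ∫_0^t f(s, K(s)) ds contracts the sup-distance between successive iterates. A discrete sketch for the crisp right-hand side f(K) = S·K^0.7 − λK on a short time grid, with illustrative parameters:

```python
# Picard iteration for dK/dt = f(K), f(K) = S*K**0.7 - lam*K, on a grid.
# Each pass applies T(K)(t) = K0 + cumulative integral of f(K(s)).
def picard_iterate(K, f, K0, dt):
    out = [K0]
    acc = 0.0
    for i in range(len(K) - 1):
        acc += 0.5 * dt * (f(K[i]) + f(K[i + 1]))  # trapezoid rule
        out.append(K0 + acc)
    return out

S, lam, K0, dt, n = 0.3, 0.1, 1.0, 0.01, 200
f = lambda K: S * K**0.7 - lam * K
K = [K0] * n                       # initial guess: constant function
gaps = []                          # sup-distance between successive iterates
for _ in range(30):
    K_new = picard_iterate(K, f, K0, dt)
    gaps.append(max(abs(a - b) for a, b in zip(K, K_new)))
    K = K_new
print(gaps[0], gaps[-1])
```

The gap sequence shrinks rapidly toward zero, mirroring the convergence of {Kn* = T^n K0*} in the proof.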

Theorem 9.5.3. If (9.5.7) has a singular solution

K* = p(t, S, K0*) ⊂ K̃(t),   S, K0* ∈ R,

and it is changed into K̃* = p(t, S̃, K̃0*) under the mapping ∧_{C(τ)}, where S̃, K̃0* ∈ W̃(R) are approximate quantities of S, K0* ∈ R, then

K̃*(t) ⊂ K̃(t).

If Ã(K̃0*) = S̃(S0) ∧ K̃0*(K0*), then equality holds.

Proof: Let y ∈ R. Then

K̃(t)(y) = sup_{K*∈E(τ): K*(t)=y} Ã(K*)
= sup_{S, K0: p(t,S,K0)=K*} Ã(p)
= sup_{S, K0: p(t,S,K0)=K*} {S̃(S) ∧ K̃*(t0)}
= p(t, S̃, K̃0*)(y).

Thus the approximate solution K̃*(t) to the fuzzy system (9.5.3) is more accurate than the fuzzy trajectory K̃(t), which contains the accurate solution K* = p(t, S, K0) of the system.

9.5.4 Conclusion

A fuzzy economic model has been built by using fuzzy mapping theory and by generalizing the definite economic model to the fuzzy case. The fuzzy economic model adopted here contains more information than a crisp one, which coincides better with practice. In the next section we demonstrate the feasibility of the generalization by a numerical example.

9.6 Application of Fuzzy Economic Model

9.6.1 Application of Fuzzy Duoma Debt Model

Definition 9.6.1. A function N(a, m, b) is called a triangular function [Cao90][Zim91] if its membership function satisfies

μN(t) = ((t − a)/(m − a))²,  a ≤ t < m;
μN(t) = 1,  t = m;
μN(t) = ((t − b)/(b − m))²,  m < t ≤ b,

where m is the mean value of N, a its left and b its right endpoint, with a ≤ m ≤ b and t, a, m, b ∈ R.

Let us first define the following operation laws on triangular functions N in order to solve (9.4.3) and (9.4.5).

Definition 9.6.2. We define
1) N(a, m, b) + N(c, n, d) = N(a + c, m + n, b + d);
2) kN(a, m, b) = N(ka, km, kb) for k ≥ 0, and N(kb, km, ka) for k < 0;
3) N(a, m, b) − N(c, n, d) = N(a, m, b) + N(−d, −n, −c) = N(a − d, m − n, b − c);
4) Suppose

W_D̃0 = N(W_D0^(1), W_D0^(2), W_D0^(3)),   W_Ỹ0(t) = N(W_Y0^(1), W_Y0^(2), W_Y0^(3)),
p̃1 = N(p1^(1), p1^(2), p1^(3)),   p̃2 = N(p2^(1), p2^(2), p2^(3)),

where W_D0^(2), W_Y0^(2), p1^(2), p2^(2) are mean values, W_D0^(1), W_Y0^(1), p1^(1), p2^(1) stand for left spreads, and W_D0^(3), W_Y0^(3), p1^(3), p2^(3) for right spreads. Then

min(W_D̃0, p̃1) = N(W_D0^(1) ∧ p1^(1), W_D0^(2) ∧ p1^(2), W_D0^(3) ∧ p1^(3)) = p̃1,
max(W_Ỹ0, p̃2) = N(W_Y0^(1) ∨ p2^(1), W_Y0^(2) ∨ p2^(2), W_Y0^(3) ∨ p2^(3)) = W_Ỹ0.
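The operation laws of Definition 9.6.2 translate directly into code. A sketch with a hypothetical helper class Tri for N(a, m, b); subtraction is implemented as addition of (−1)·N(c, n, d), which gives N(a − d, m − n, b − c):

```python
# Triangular-function arithmetic per Definition 9.6.2. "Tri" is a
# hypothetical helper name, not from the text.
from dataclasses import dataclass

@dataclass
class Tri:
    a: float  # left endpoint
    m: float  # mean value
    b: float  # right endpoint

    def __add__(self, other):
        # 1) N(a,m,b) + N(c,n,d) = N(a+c, m+n, b+d)
        return Tri(self.a + other.a, self.m + other.m, self.b + other.b)

    def __rmul__(self, k):
        # 2) kN(a,m,b) = N(ka,km,kb) for k >= 0, else N(kb,km,ka)
        if k >= 0:
            return Tri(k * self.a, k * self.m, k * self.b)
        return Tri(k * self.b, k * self.m, k * self.a)

    def __sub__(self, other):
        # 3) subtraction via addition of (-1)*N(c,n,d)
        return self + (-1.0) * other

x = Tri(1.0, 2.0, 4.0)
y = Tri(0.0, 1.0, 3.0)
print(x + y, 2.0 * x, x - y)
```

Note that subtraction widens the support (x − y ranges over [a − d, b − c]), exactly as the endpoint formula in rule 3) predicts.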

Consider the solution and stability of the fuzzy Duoma debt model (9.4.3) and (9.4.4) under the assumption that the fuzzy functions are

Ũ1 = N(1, 0, −2),   Ũ2 = N(−2, 0, 1),
D̃0 = N(D0^(1), D0^(2), D0^(3)),   Ỹ0 = N(Y0^(1), Y0^(2), Y0^(3)).   (9.6.1)

The national economy must develop substantially to keep the debt within an allowable level; a model reflecting this mathematically is

B̃(t) = i(W_D̃0 ∧ p̃1) / (W_Ỹ0 ∨ p̃2).   (9.6.2)

Therefore, substituting U2, Y0 of (9.6.1) into (9.4.17), and again from (9.4.18) and (9.4.19), we obtain


W_Ỹ0(t) = N(W_Y0^(1), W_Y0^(2), W_Y0^(3))
= e^{νt}[N(Y0^(1), Y0^(2), Y0^(3)) + N(−2, 0, 1)/ν] − N(−2, 0, 1)/ν
= e^{νt} N(Y0^(1) − 2/ν, Y0^(2), Y0^(3) + 1/ν) + N(−1/ν, 0, 2/ν)
= N[−1/ν + e^{νt}(Y0^(1) − 2/ν), Y0^(2) e^{νt}, 2/ν + (Y0^(3) + 1/ν) e^{νt}];

p̃2(t, ν, Ũ2, Ỹ0) = N(p2^(1), p2^(2), p2^(3))
= e^{νt} N(Y0^(1), Y0^(2), Y0^(3)) − ((1 − e^{νt})/ν) N(−2, 0, 1)
= N(Y0^(1) e^{νt}, Y0^(2) e^{νt}, Y0^(3) e^{νt}) + N[2(1/ν − e^{νt}/ν), 0, −1/ν + e^{νt}/ν]
= N[(Y0^(1) − 2/ν) e^{νt} + 2/ν, Y0^(2) e^{νt}, (Y0^(3) + 1/ν) e^{νt} − 1/ν].

Problem (9.4.3) reduces to the solution of

dD̃(t)/dt = Ũ1 − γ W_Ỹ0(t),   D̃(0) = D̃0,

i.e.,

dD̃(t)/dt = N[−γ/ν + γ e^{νt}(Y0^(1) − 2/ν) + 1, γ Y0^(2) e^{νt}, 2γ/ν + γ(Y0^(3) + 1/ν) e^{νt} − 2],
D̃(0) = N(D0^(1), D0^(2), D0^(3)),   (9.6.3)

and of

dD̃(t)/dt − γ p̃2 = Ũ1,   D̃(0) = D̃0,

i.e.,

dD̃(t)/dt = N[γ e^{νt}(Y0^(1) − 2/ν) + 2γ/ν + 1, γ Y0^(2) e^{νt}, γ e^{νt}(Y0^(3) + 1/ν) − γ/ν − 2],
D̃(0) = N(D0^(1), D0^(2), D0^(3)).   (9.6.4)

By using an extended concept in [GV86], solutions to (9.6.3) and (9.6.4) are obtained respectively:

W_D̃0(t) = N[(1 − γ/ν)t + (γ/ν)(Y0^(1) − 2/ν)(e^{νt} − 1) + D0^(1),
(γ/ν) Y0^(2) (e^{νt} − 1) + D0^(2),
(2γ/ν − 2)t + (γ/ν)(Y0^(3) + 1/ν)(e^{νt} − 1) + D0^(3)],


and

p̃1(t, γ, Ũ1, D̃0) = N[(1 + 2γ/ν)t + (γ/ν)(Y0^(1) − 2/ν)(e^{νt} − 1) + D0^(1),
(γ/ν) Y0^(2) (e^{νt} − 1) + D0^(2),
−(2 + γ/ν)t + (γ/ν)(Y0^(3) + 1/ν)(e^{νt} − 1) + D0^(3)].

Because of (9.6.2) and the min/max identities of Definition 9.6.2, hence

B̃(t) = i p̃1(t, γ, Ũ1, D̃0) / W_Ỹ0(t)
= i · N[(1 + 2γ/ν)t + (γ/ν)(Y0^(1) − 2/ν)(e^{νt} − 1) + D0^(1), (γ/ν)Y0^(2)(e^{νt} − 1) + D0^(2), −(2 + γ/ν)t + (γ/ν)(Y0^(3) + 1/ν)(e^{νt} − 1) + D0^(3)]
 / N[−1/ν + e^{νt}(Y0^(1) − 2/ν), Y0^(2) e^{νt}, 2/ν + (Y0^(3) + 1/ν)e^{νt}].

It is easy to get the following for t → +∞:

B̃(t) → B̃0 = i N[(γ/ν)(Y0^(1) − 2/ν), (γ/ν)Y0^(2), (γ/ν)(Y0^(3) + 1/ν)] / N(Y0^(1) − 2/ν, Y0^(2), Y0^(3) + 1/ν).

Obviously, B̃(t) approaches a fuzzy value B̃0 related to the initial value, γ, ν and i, of which an ordinary real number is a special case. As long as the national income increases at an unchanged related rate, it is not bad for the government to issue bonds for years on end. Whether the debt stays within an allowable level depends continuously on the parameters γ, ν, the initial value and the interest rate i, because the debt increases without bound.
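The componentwise solutions above admit a direct spot-check: each component of W_D̃0(t) should satisfy the corresponding component equation of (9.6.3). A sketch for the left component only, with illustrative values of γ, ν, Y0^(1), D0^(1):

```python
# Check: the left component of W_D0(t),
#   w(t) = (1 - gam/nu)*t + (gam/nu)*(Y1 - 2/nu)*(exp(nu*t) - 1) + D1,
# should have derivative equal to the left component of (9.6.3),
#   w'(t) = -gam/nu + gam*exp(nu*t)*(Y1 - 2/nu) + 1.
# gam, nu, Y1, D1 are illustrative values.
import math

gam, nu, Y1, D1 = 0.4, 0.2, 5.0, 1.0

def w(t):
    return (1 - gam / nu) * t + (gam / nu) * (Y1 - 2 / nu) * (math.exp(nu * t) - 1) + D1

def rhs(t):
    return -gam / nu + gam * math.exp(nu * t) * (Y1 - 2 / nu) + 1

t, h = 1.5, 1e-6
num_deriv = (w(t + h) - w(t - h)) / (2 * h)   # central difference
print(num_deriv, rhs(t))
```

The same check applies verbatim to the mean-value and right components, and to p̃1 against (9.6.4).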

9.6.2 Application of Fuzzy Solow Economic Growth Model

Definition 9.6.3. We call a vector with quarter variables f̃(t) = m(f1(t), f2(t); f3(t), f4(t)) a fuzzy function.

Definition 9.6.4. We call a number with quarter parameters C̃ = m(c−, c+; a, b) a fuzzy number, and its membership function μC̃ is defined as

μC̃(t) = 0,  if t ≤ a or b ≤ t;
μC̃(t) = (t − a)/(c− − a),  if a < t < c−;
μC̃(t) = 1,  if c− ≤ t ≤ c+;
μC̃(t) = (b − t)/(b − c+),  if c+ < t < b,

where (c−, c+) is the left-right main value of C̃ and a, b the left-right distribution of C̃, respectively, with a ≤ c− ≤ c+ ≤ b.
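Definition 9.6.4's membership function is piecewise linear with a flat top; a direct transcription (parameter values illustrative):

```python
# Membership function of a quarter-parameter fuzzy number
# m(c_minus, c_plus; a, b) from Definition 9.6.4: linear flanks and
# a flat top on [c_minus, c_plus].
def mu(t, cm, cp, a, b):
    if t <= a or t >= b:
        return 0.0
    if a < t < cm:
        return (t - a) / (cm - a)
    if cm <= t <= cp:
        return 1.0
    return (b - t) / (b - cp)   # case cp < t < b

cm, cp, a, b = 1.0, 2.0, 0.0, 4.0   # e.g. the K0* = m(1, 2; 0, 4) used below
print([mu(t, cm, cp, a, b) for t in (0.0, 0.5, 1.5, 3.0, 4.0)])
```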


Consider a fuzzy Cobb-Douglas function

Ỹ = γ̃ L̃ (K̃/L̃)^{0.7},

where φ(K̃*) = f(K̃/L̃, 1) = γ̃ K̃*^{0.7}. Putting it into (9.5.3), then

dK̃*(t)/dt = γ̃ S̃ K̃*^{0.7}(t) − λK̃*(t),   K̃*(t0) = K̃0*   (γ > 0 a constant).   (9.6.5)

We stipulate

(K̃^{(n)*})^β = m(K1^{(n)*β}, K2^{(n)*β}; K3^{(n)*β}, K4^{(n)*β})   (n = 0, 1),

and an ordinary real number is c = m(c, c; 0, 0) (c a constant). Consider

dK*(t)/dt = γSK*^{0.7}(t) − λK*(t),   K*(t0) = K0*;   (9.6.6)

its solution W(t) can be mapped into a solution W̃(t) of (9.6.5), while the trajectory

K*(t) = [K0*^{0.3} e^{−0.3λt} + (γS/λ)(1 − e^{−0.3λt})]^{1/0.3}

of (9.6.6) maps directly into the fuzzy trajectory of (9.6.5):

K̃*(t) = [K̃0*^{0.3} e^{−0.3λt} + (γ̃S̃/λ)(1 − e^{−0.3λt})]^{1/0.3}.   (9.6.7)

The special form of (9.5.3) is

dK̃*(t)/dt = S̃K̃*(t) + Ũ,   K̃*(t0) = K̃0*,   (9.6.8)

and its ordinary linear differential equation and initial condition are

dK*(t)/dt = SK*(t) + u,   K*(t0) = K0*.

Corollary 9.6.1. Suppose that K̃* = p(t, h, Ũ, K̃) and W̃(t) are the fuzzy trajectory and the fuzzy solution of system (9.6.8), respectively; then W̃(t) ⊃ p(t, h, Ũ, K̃).


Proof: Let y ∈ R. Then

p(t, h, Ũ, K̃)(y) = sup_{u,k: (k + u/h)e^{ht} − u/h = y} {Ũ(u) ∧ K̃(k)}
= sup_{u,k: (k + u)e^{ht} − u = y} {Ũ(hu) ∧ K̃(k)}
= sup_{u,k: k e^{ht} − u(1 − e^{ht}) = y} {Ũ(hu) ∧ K̃(k)}
= ((e^{ht}/h)Ũ − Ũ/h + e^{ht}K̃)(y)
= (e^{ht}(Ũ/h + K̃) − Ũ/h)(y)
= W̃(t)(y).

The corollary is proved from the definition of the direct image of fuzzy sets and the properties of approximate quantities.

Even though the approximate solution p(t, h, Ũ, K̃0) to the fuzzy system (9.6.8) is more accurate than the fuzzy solution W̃(t), we often have to resort to W̃(t), because W̃(t) is easier to obtain.

Example 9.6.1: If

K̃0* = m(1, 2; 0, 4),   S̃ = m(0, 1; −3, 4),

we solve

dK̃*(t)/dt = γ m(0, 1; −3, 4) K̃*^{0.7}(t) − λK̃*(t),   K̃*(0) = m(1, 2; 0, 4).

From the practical sense of the problem, there must be t ≥ 0. By using Corollary 9.6.1, we can obtain a fuzzy trajectory of the problem:

K̃*(t) = m[e^{−λt}, (γ/λ + (2 − γ/λ)e^{−0.3λt})^{1/0.3};
(−3γ/λ + (3γ/λ)e^{−0.3λt})^{1/0.3}, (4γ/λ + (4 − 4γ/λ)e^{−0.3λt})^{1/0.3}].

And by using (9.5.4), we can get a fuzzy solution of the problem:

W̃(t) = m[(γ/λ + (1 − γ/λ)e^{−0.3λt})^{1/0.3}, (γ/λ + 2e^{−0.3λt})^{1/0.3};
(−3γ/λ − (4γ/λ)e^{−0.3λt})^{1/0.3}, (4γ/λ + (4 + 3γ/λ)e^{−0.3λt})^{1/0.3}].

Therefore, p(t, h, Ũ, K̃) ⊂ W̃(t) is testified. When t → +∞, K̃*(t) and W̃(t) tend to the average value

m[0, (γ/λ)^{1/0.3}; (−3γ/λ)^{1/0.3}, (4γ/λ)^{1/0.3}].

Because m[0, (γ/λ)^{1/0.3}; (−3γ/λ)^{1/0.3}, (4γ/λ)^{1/0.3}] is not a fixed number, (0, (γ/λ)^{1/0.3}) is regarded as the left-right main value, and (−3γ/λ)^{1/0.3} and (4γ/λ)^{1/0.3} are called the left-right spreads of C̃, respectively. Hence, decision makers can choose the most satisfactory value according to their practical requirements.
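The closed-form trajectory (9.6.7) used throughout this example is the Bernoulli-equation solution of (9.6.6): substituting u = K^{0.3} linearizes the equation. It can be verified numerically under illustrative parameter values:

```python
# Check that the trajectory (9.6.7),
#   K(t) = [K0**0.3*exp(-0.3*lam*t) + (gam*S/lam)*(1 - exp(-0.3*lam*t))]**(1/0.3),
# solves the crisp equation (9.6.6): dK/dt = gam*S*K**0.7 - lam*K.
# Parameter values are illustrative.
import math

gam, S, lam, K0 = 1.2, 0.3, 0.1, 2.0

def K(t):
    u = K0**0.3 * math.exp(-0.3 * lam * t) + (gam * S / lam) * (1 - math.exp(-0.3 * lam * t))
    return u ** (1 / 0.3)

t, h = 2.0, 1e-6
lhs = (K(t + h) - K(t - h)) / (2 * h)          # numerical dK/dt
rhs = gam * S * K(t) ** 0.7 - lam * K(t)       # right-hand side of (9.6.6)
print(lhs, rhs)
```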

10 Interval and Fuzzy Functional and their Variation

The author put forward the concept of interval and fuzzy (valued) functional variation on the basis of classical function and functional variation in 1991 [Cao91a]. In 1992 he extended the study of convex functions and convex functionals to the interval and fuzzy setting. Later he studied the conditional-extremum variational problem for interval and fuzzy-valued functionals [Cao01e] and the variation of functionals with fuzzy functions [Cao99a]. In this chapter, interval and fuzzy-valued functional variation are discussed as follows:
Section 1, Interval functional and its variation;
Section 2, Fuzzy-valued functional and its variation;
Section 3, Convex interval and fuzzy function and functional;
Section 4, Convex fuzzy-valued function and functional;
Section 5, Variation of interval and fuzzy-valued functional conditional extremum;
Section 6, Variation of conditional extremum on functionals with fuzzy functions.

10.1 Interval Functional and Its Variation

In Chapter 1 we can find the definition of interval numbers, their operations, and the interval functional. In this section we discuss some properties of the interval functional and its variation, and introduce extreme-value conditions for the interval functional.

B.-Y. Cao: Optimal Models & Meth. with Fuzzy Quantities, STUDFUZZ 248, pp. 327-361. © Springer-Verlag Berlin Heidelberg 2010, springerlink.com

Definition 10.1.1. If to each interval function ȳ(x) = [y−(x), y+(x)] there corresponds some interval value Π̄, then the interval variable is called a functional dependent on the function ȳ(x), written Π̄ = Π[y−(x), y+(x)].

Definition 10.1.2. Suppose that the interval functional Π(ȳ(x)) is defined on the interval [a, b], and for a point x ∈ [a, b], δy− = y−(x) − y1−(x) and δy+ =

y+(x) − y1+(x) exist, where y1−(x), y1+(x) and y−(x), y+(x) are ordinary functions belonging to the domain of the functional; then the functional Π[ȳ(x)] is called interval-model-variable variationable, and

δȳ ≜ [min(δy−, δy+), max(δy−, δy+)]

is called the variation of ȳ at x. If δy−(x0) ≤ δy+(x0) (or δy−(x0) ≥ δy+(x0)), ȳ is same-order (or antitone) variationable at x0. For ∀x ∈ [a, b], δȳ = [δy−, δy+] (or δȳ = [δy+, δy−]) is called the same-order (or antitone) variation on [a, b].

Definition 10.1.3. If for ∀ε > 0 there exists δ > 0 such that when dH(ȳ^(k)(x), ȳ0^(k)(x)) ⊂ (−δ, δ) we have dH(Πȳ(x), Πȳ0(x)) ⊂ (−ε, ε), the functional Πȳ(x) is called a kth-approaching continuous interval functional at ȳ0(x), where dH denotes the Hausdorff metric.

Definition 10.1.4. For ∀x ∈ [a, b],

y1−(x) ≤ y1+(x), y−(x) ≤ y+(x), y−'(x) ≤ y+'(x), ..., y−^(n)(x) ≤ y+^(n)(x);

if and only if the functionals Π(y−(x)) and Π(y+(x)) are kth-approaching continuous at y−(x) = y1−(x) and y+(x) = y1+(x) [Ail52], respectively, Πȳ(x) is called a kth-approaching continuous interval functional at ȳ = ȳ1.

Definition 10.1.5. For the interval functional Π(ȳ(x)), ∂/∂ϑ Π(ȳ + ϑδȳ)|ϑ=0 is called the 1st variation of the interval functional and, writing it δΠ̄, we deduce

δΠ̄ ≜ [min{∂/∂ϑ Π(y− + ϑδy−)|ϑ=0, ∂/∂ϑ Π(y+ + ϑδy+)|ϑ=0}, max{∂/∂ϑ Π(y− + ϑδy−)|ϑ=0, ∂/∂ϑ Π(y+ + ϑδy+)|ϑ=0}].   (10.1.1)

If ∂/∂ϑ Π(y− + ϑδy−)|ϑ=0 ≤ ∂/∂ϑ Π(y+ + ϑδy+)|ϑ=0, the functional Π̄ is same-order variationable, and

δΠ̄ = [∂/∂ϑ Π(y− + ϑδy−)|ϑ=0, ∂/∂ϑ Π(y+ + ϑδy+)|ϑ=0]

represents the same-order variation of the functional. If ∂/∂ϑ Π(y+ + ϑδy+)|ϑ=0 ≤ ∂/∂ϑ Π(y− + ϑδy−)|ϑ=0, the functional Π̄ is antitone variationable, and

δΠ̄ = [∂/∂ϑ Π(y+ + ϑδy+)|ϑ=0, ∂/∂ϑ Π(y− + ϑδy−)|ϑ=0]

represents the antitone variation of the functional.

In what follows we consider only the same-order variation, since the antitone variation can be handled in the same way. Therefore (10.1.1) simplifies to

δΠ̄ ≜ [∂/∂ϑ Π(y− + ϑδy−)|ϑ=0, ∂/∂ϑ Π(y+ + ϑδy+)|ϑ=0].
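Definition 10.1.5 can be made concrete for a specific interval functional. The sketch below uses the illustrative choices Π[y] = ∫_0^1 y(x)² dx, interval function ȳ = [x, x + 1], and variation δy− = δy+ = sin(πx); it evaluates the endpoint derivatives ∂/∂ϑ Π(y± + ϑδy±)|ϑ=0 by central difference, whose analytic values 2∫ y δy dx are 2/π and 6/π:

```python
# Interval first variation (Definition 10.1.5) for Pi[y] = int_0^1 y^2 dx,
# computed numerically at both endpoints of the interval function [x, x+1].
import math

N = 2000
xs = [i / N for i in range(N + 1)]

def integral(f):
    # trapezoid rule on [0, 1]
    return sum(0.5 * (f(xs[i]) + f(xs[i + 1])) / N for i in range(N))

def Pi(y):
    return integral(lambda x: y(x) ** 2)

def dPi(y, dy, h=1e-5):
    # d/dtheta Pi(y + theta*dy) at theta = 0, central difference
    return (Pi(lambda x: y(x) + h * dy(x)) - Pi(lambda x: y(x) - h * dy(x))) / (2 * h)

y_minus = lambda x: x          # lower endpoint function
y_plus = lambda x: x + 1.0     # upper endpoint function
dy = lambda x: math.sin(math.pi * x)  # same variation chosen at both endpoints

lo = dPi(y_minus, dy)
hi = dPi(y_plus, dy)
print([min(lo, hi), max(lo, hi)])   # the interval variation of (10.1.1)
```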

Definition 10.1.6. For the interval functional Π̄ = Π(ȳ(x)), ∂ⁿ/∂ϑⁿ Π(ȳ + ϑδȳ)|ϑ=0 is called the nth variation of the interval functional, written δⁿΠ̄ = ∂ⁿ/∂ϑⁿ Π(ȳ + ϑδȳ)|ϑ=0. We call δⁿΠ̄ (n ≥ 2) a higher variation.

Definition 10.1.7. We call F(x, ȳ(x), ȳ'(x)) an interval-compound function, writing

F(x, ȳ(x), ȳ'(x)) ≜ [F−(x, y−(x), y−'(x)), F+(x, y+(x), y+'(x))].

Definition 10.1.8. For the interval-compound function F(x, ȳ(x), ȳ'(x)), fixing the variable x, we define

δF̄ = ∂/∂ϑ F(x, ȳ + ϑδȳ, ȳ' + ϑ(δȳ)')|ϑ=0

as the 1st interval variation of the interval-compound function F̄, and

δⁿF̄ = ∂ⁿ/∂ϑⁿ F(x, ȳ + ϑδȳ, ȳ' + ϑ(δȳ)')|ϑ=0

is called its nth interval variation. Similarly, we can define the variation of F[x, ȳ(x), ȳ'(x), ..., ȳ^(n)(x)] and φ[x, ȳ(x), z̄(x)].

Theorem 10.1.1. For interval model variation, we have
(1) (δȳ)' = δȳ', (δȳ)^(n) = δȳ^(n);
(2) δ(δȳ) = 0.

Proof: (1) Let the interval-compound function be F̄ = F[x, ȳ(x), ȳ'(x)] = ȳ'; then F[x, ȳ + ϑδȳ, ȳ' + ϑ(δȳ)'] = ȳ' + ϑ(δȳ)'. In the sense of same-order variationability, we have

∂/∂ϑ F(x, ȳ + ϑδȳ, ȳ' + ϑ(δȳ)') = (δȳ)'.

Therefore we obtain

δF̄ = δȳ' = ∂/∂ϑ F(x, ȳ + ϑδȳ, ȳ' + ϑ(δȳ)')|ϑ=0 = (δȳ)'.

Similarly, we can prove that the formula holds in the sense of antitone variationability, hence (δȳ)' = δȳ'. In the same way we can prove (δȳ)^(n) = δȳ^(n).

(2) Let the interval-compound function be F(x, ȳ(x)) = δȳ, F(x, ȳ + ϑδȳ) = δȳ. Then

∂/∂ϑ F(x, ȳ + ϑδȳ) = 0 ⟹ δF̄ = ∂/∂ϑ F(x, ȳ + ϑδȳ)|ϑ=0 = 0.

So the theorem holds.

Theorem 10.1.2. Let F̄, F̄1, F̄2 be interval-compound functions, same-order variationable. Then
(1) δ(F̄1 ± F̄2) = δF̄1 ± δF̄2;
(2) δ(F̄1 · F̄2) = F̄1 δF̄2 + F̄2 δF̄1;
(3) δ(k · F̄) = k δF̄;
(4) δ(F̄1/F̄2) = (F̄2 δF̄1 − F̄1 δF̄2)/F̄2²   (F̄2 ≠ 0);
(5) δF̄ⁿ = n F̄ⁿ⁻¹ δF̄;
(6) δ ∫_a^b F̄ dx = ∫_a^b δF̄ dx.

Proof: We only prove (2) and (6).
(2) Let F(x, ȳ(x), ȳ'(x)) = F1(x, ȳ(x), ȳ'(x)) · F2(x, ȳ(x), ȳ'(x)). Then

∂/∂ϑ F(x, ȳ + ϑδȳ, ȳ' + ϑδȳ')
= {∂/∂ϑ F1(x, ȳ + ϑδȳ, ȳ' + ϑδȳ')} F2(x, ȳ + ϑδȳ, ȳ' + ϑδȳ')
+ F1(x, ȳ + ϑδȳ, ȳ' + ϑδȳ') ∂/∂ϑ F2(x, ȳ + ϑδȳ, ȳ' + ϑδȳ').   (10.1.2)

From this, we obtain δF̄ = F̄1 δF̄2 + F̄2 δF̄1.

(6) Let F̄ = F(x, ȳ(x), ȳ'(x)). Then

δ ∫_a^b F(x, ȳ(x), ȳ'(x)) dx = ∂/∂ϑ ∫_a^b F(x, ȳ + ϑδȳ, ȳ' + ϑδȳ')|ϑ=0 dx
= ∫_a^b ∂/∂ϑ F(x, ȳ + ϑδȳ, ȳ' + ϑδȳ')|ϑ=0 dx
= ∫_a^b δF(x, ȳ(x), ȳ'(x)) dx.
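Property (2) of Theorem 10.1.2 mirrors the ordinary product rule for the ϑ-derivative, which can be spot-checked numerically at a fixed x on crisp (endpoint) functions. All functions below are illustrative choices:

```python
# Numerical spot-check of delta(F1*F2) = F1*delta(F2) + F2*delta(F1)
# at a fixed x, with illustrative y, delta-y, F1, F2.
import math

x = 0.7
y = lambda x: x**2           # y(x)
dy = lambda x: math.cos(x)   # delta-y(x)

F1 = lambda x, y: x + y**2
F2 = lambda x, y: math.exp(y)

def var(F, h=1e-6):
    # d/dtheta F(x, y + theta*dy) at theta = 0, central difference
    return (F(x, y(x) + h * dy(x)) - F(x, y(x) - h * dy(x))) / (2 * h)

left = var(lambda x_, y_: F1(x_, y_) * F2(x_, y_))
right = F1(x, y(x)) * var(F2) + F2(x, y(x)) * var(F1)
print(left, right)
```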

Lemma I (Basic interval variation lemma). Suppose the interval function φ̄(x) is continuous on (a, b), and an arbitrary function η(x) satisfies the following:
(1) η(x) has a kth continuous derivative on (a, b);
(2) η(a) = η(b);
(3) |η(x)| < ε, |η^(1)(x)| < ε, ..., |η^(k)(x)| < ε, where ε is an arbitrarily small positive number.
If ∫_a^b φ̄(x)η(x) dx = 0, then φ̄(x) ≡ 0 on [a, b].

Proof: Because

∫_a^b φ̄(x)η(x) dx = [∫_a^b φ−(x)η(x) dx, ∫_a^b φ+(x)η(x) dx] = 0

by the condition, we note φ̄(x) = [φ−(x), φ+(x)]. Applying the variation lemma [Ail52] to φ−(x), ∫_a^b φ−(x)η(x) dx and to φ+(x), ∫_a^b φ+(x)η(x) dx, we have φ−(x) = 0 and φ+(x) = 0, respectively. Hence the lemma holds.

Next we consider the extreme value of the interval functional.

Definition 10.1.9. If the value of the interval functional Π(ȳ(x)) on any curve approaching ȳ = ȳ0(x) is not greater than Π(ȳ0(x)), i.e., if ΔΠ̄ = Π(ȳ(x)) − Π(ȳ0(x)) ⊂ 0 (or ≠ 0), the functional Π(ȳ(x)) reaches a maximum (or a strict one) at ȳ = ȳ0. The minimum (or a strict one) of Π(ȳ(x)) can be defined by imitation, and a maximum (or minimum) of Π(ȳ(x)) is called an extreme value.

Theorem 10.1.3. If the interval functional Π(ȳ(x)) with variation reaches a maximum (or minimum) at ȳ = ȳ0(x), then at ȳ = ȳ0(x) there holds δΠ̄ = 0.

Proof: Consider

φ̄(ϑ) = Π(ȳ0(x) + ϑδȳ) ⟺ [φ−(ϑ), φ+(ϑ)] = [Π(y0−(x) + ϑδy−), Π(y0+(x) + ϑδy+)];

when y0−(x) and δy−, y0+(x) and δy+ are fixed, respectively, φ̄(ϑ) is an interval function of ϑ. By assumption, φ̄(0) takes an extreme value ⟺ φ−(0) and φ+(0) do. Therefore φ−'(0) = 0, φ+'(0) = 0, i.e.,

φ̄'(0) = 0 ⟹ δΠ(ȳ0(x)) = 0.

It is not difficult to extend the above results to the interval functional dependent on multiple model variables Π(ȳ1(x), ȳ2(x), ..., ȳn(x)), and to the interval functional of several variables Π(ȳ(x1, x2, ..., xn)),

or Π(z̄1(x1, x2, ..., xn), z̄2(x1, x2, ..., xn), ..., z̄n(x1, x2, ..., xn)).

Theorem 10.1.4. If the interval functional Π(ȳ(x)) has 1st and 2nd interval variations δΠ̄ and δ²Π̄, and when ȳ = ȳ0(x), δΠ(ȳ0(x)) = 0 and δ²Π(ȳ0(x)) ≠ 0 hold, then an extreme value is taken by the functional Π(ȳ(x)) at ȳ = ȳ0(x). When δ²Π(ȳ0(x)) ⊂ 0 a maximum exists, and when δ²Π(ȳ0(x)) ⊃ 0 a minimum exists.

Proof: Let the interval functional be φ̄(ϑ) = Π(ȳ0(x) + ϑδȳ). If δΠ(ȳ0(x)) = 0 and δ²Π(ȳ0(x)) ⊂ 0, then

φ̄'(0) = 0, φ̄''(0) ⊂ 0 ⟹ φ−'(0) = 0, φ−''(0) < 0 and φ+'(0) = 0, φ+''(0) < 0
⟹ φ−(0) and φ+(0) take maximal values ⟺ φ̄(0) takes a maximal value,

i.e., Π(ȳ0(x) + ϑδȳ) ≤ Π(ȳ0(x)). Therefore a maximal value is taken by Π(ȳ0(x)). Similarly, we can prove the case of the minimum.
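Theorems 10.1.3 and 10.1.4 reduce, at each endpoint, to the classical conditions φ'(0) = 0 and φ''(0) > 0 (for a minimum). A crisp discrete illustration with the Dirichlet-type functional Π[y] = Σ(y_{i+1} − y_i)², whose minimizer under fixed endpoints is the linear interpolant (all choices illustrative):

```python
# phi(theta) = Pi(y0 + theta*eta) with Pi[y] = sum (y_{i+1}-y_i)^2 and
# eta vanishing at the endpoints; at the linear minimizer y0 we expect
# phi'(0) = 0 (Theorem 10.1.3) and phi''(0) > 0 (Theorem 10.1.4).
import random

n = 50
y0 = [i / n for i in range(n + 1)]                        # linear, y(0)=0, y(1)=1
random.seed(0)
eta = [0.0] + [random.uniform(-1, 1) for _ in range(n - 1)] + [0.0]

def Pi(y):
    return sum((y[i + 1] - y[i]) ** 2 for i in range(n))

def phi(theta):
    return Pi([a + theta * b for a, b in zip(y0, eta)])

h = 1e-5
d1 = (phi(h) - phi(-h)) / (2 * h)               # first variation
d2 = (phi(h) - 2 * phi(0.0) + phi(-h)) / h**2   # second variation
print(d1, d2)
```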

10.2 Fuzzy-Valued Functional and Its Variation

10.2.1 Introduction

We aim at extending the conception of functional variation in the interval sense to the fuzzy setting, having put forward the conception of fuzzy variation. In this section we discuss some properties of the fuzzy-valued functional and its variation, and deduce extreme-value conditions for the fuzzy-valued functional.

10.2.2 Variation of Fuzzy-Valued Functional at Ordinary Points

In [DPr78] and [Luo84a,b] we can find definitions of fuzzy numbers, their operations, and fuzzy-valued functions.

Definition 10.2.1. Let
(1) ỹ : [a, b] → R, x ↦ ỹ(x); ỹ(x) is a fuzzy-valued function defined on [a, b];
(2) ȳα : [a, b] → E_R = {[e, f] | e ≤ f; e, f ∈ R}, x ↦ ȳα(x) ≜ (ỹ(x))α = [yα−(x), yα+(x)].

Then ȳα is called an α-cut function of ỹ, which is an interval function defined on [a, b].

Definition 10.2.2. If for a kind of fuzzy-valued function ỹ(x), each function ỹ(x) has some fuzzy number Π(ỹ(x)) corresponding to it, then Π(ỹ(x)) is called a fuzzy-valued functional of such a function ỹ(x), and we write it as Π̃ = Π(ỹ(x)).

Definition 10.2.3. Let the fuzzy-valued functional be defined on [a, b]. If for ∀α ∈ (0, 1] there exists δȳα = ȳα(x) − ȳ1α(x) such that

∪_{α∈(0,1]} α δȳα = ∪_{α∈(0,1]} α(ȳα(x) − ȳ1α(x)),

then it is called a fuzzy-model-variable variation of the functional Π(ỹ(x)), written δỹ = ỹ(x) − ỹ1(x).

Definition 10.2.4. Let ỹ(x) be defined on [a, b]. If for ∀α ∈ (0, 1], ȳα(x) is same-order (or antitone) variationable, then δỹ(x) = ∪_{α∈(0,1]} α δȳα is called the same-order (or antitone) variation of ỹ(x).

Definition 10.2.5. Let

ȳα : [a, b] → E_R,   x ↦ ȳα(x),
Π̄α : E_R → [g, h],   ȳα ↦ Π(ȳα(x)).

Then Π(ȳα) is called an α-cut functional of Πỹ; if and only if, for ∀α ∈ (0, 1], Π(ȳα) is kth-approaching continuous at ȳα = ȳ0α(x), the fuzzy-valued functional Πỹ is called kth-approaching continuous at ỹ = ỹ0(x).

Definition 10.2.6. For the fuzzy-valued functional Π(ỹ(x)), we call ∂/∂ϑ Π(ỹ + ϑδỹ)|ϑ=0 the 1st variation of the fuzzy-valued functional, written δΠ̃; then

δΠ̃ ≜ ∪_{α∈(0,1]} α ∂/∂ϑ Π(ȳα + ϑδȳα)|ϑ=0.

We call ∂²/∂ϑ² Π(ỹ + ϑδỹ)|ϑ=0 the 2nd variation of the fuzzy-valued functional, written δ²Π̃; then

δ²Π̃ ≜ ∪_{α∈(0,1]} α ∂²/∂ϑ² Π(ȳα + ϑδȳα)|ϑ=0.

Definition 10.2.7. For a fuzzy-valued functional of type Π̃ = Π(ỹ(x), z̃(x)) or Π̃ = Π(ũ(x, y)), the 1st variation is

δΠ̃ = ∪_{α∈(0,1]} α ∂/∂ϑ Π(ȳα + ϑδȳα, z̄α + ϑδz̄α)|ϑ=0,

δΠ̃ = ∪_{α∈(0,1]} α ∂/∂ϑ Π(ūα + ϑδūα)|ϑ=0,

respectively.

Similarly, we can define the 2nd variation.

Definition 10.2.8. We call a function like F(x, ỹ(x), ỹ'(x)) a fuzzy-valued compound function, with

F(x, ỹ(x), ỹ'(x)) ≜ ∪_{α∈(0,1]} α F(x, ȳα(x), ȳα'(x)).

Definition 10.2.9. Fixing the variable x for a fuzzy-valued compound function F(x, ỹ(x), ỹ'(x)), we define

δF̃ = ∂/∂ϑ F(x, ỹ + ϑδỹ, ỹ' + ϑ(δỹ)')|ϑ=0 ≜ ∪_{α∈(0,1]} α δF̄α = ∪_{α∈(0,1]} α ∂/∂ϑ F(x, ȳα + ϑδȳα, ȳα' + ϑδȳα')|ϑ=0,

which is called the 1st fuzzy-valued variation. And then

δⁿF̃ = ∂ⁿ/∂ϑⁿ F(x, ỹ + ϑδỹ, ỹ' + ϑ(δỹ)')|ϑ=0 ≜ ∪_{α∈(0,1]} α δⁿF̄α = ∪_{α∈(0,1]} α ∂ⁿ/∂ϑⁿ F(x, ȳα + ϑδȳα, ȳα' + ϑδȳα')|ϑ=0,

which is called the nth fuzzy-valued variation. In the same way, we can define the fuzzy-valued variation of

F(x, ỹ(x), ỹ'(x), ..., ỹ^(n)(x));   φ(x, ỹ(x), z̃(x)).

Theorem 10.2.1. For fuzzy-valued-model-variable variation, we have

(1) (δỹ)' = δỹ', (δỹ)^(n) = δỹ^(n);   (10.2.1)
(2) δ(δỹ) = 0.   (10.2.2)

Proof: a) Let the fuzzy-valued compound function be

F̃ = F(x, ỹ(x), ỹ'(x)) = ỹ';   F(x, ỹ + ϑδỹ, ỹ' + ϑ(δỹ)') = ỹ' + ϑ(δỹ)'.


In the same-order variationable sense, we have

∂/∂ϑ F(x, ỹ + ϑδỹ, ỹ' + ϑ(δỹ)') ≜ ∪_{α∈(0,1]} α ∂/∂ϑ F(x, ȳα + ϑδȳα, ȳα' + ϑδȳα') = ∪_{α∈(0,1]} α(δȳα)' ≜ (δỹ)'.

Therefore we obtain

δF̃ = δỹ' = ∂/∂ϑ F(x, ỹ + ϑδỹ, ỹ' + ϑ(δỹ)')|ϑ=0 = (δỹ)'.

We can prove similarly that the formula holds in the antitone variationable sense, so (δỹ)' = δỹ'. By mathematical induction we can prove (δỹ)^(n) = δỹ^(n) in proper order. So (10.2.1) holds.

b) Let the fuzzy-valued compound function be F(x, ỹ(x)) = δỹ, F(x, ỹ + ϑδỹ) = δỹ. Then

∂/∂ϑ F(x, ỹ + ϑδỹ) ≜ ∪_{α∈(0,1]} α ∂/∂ϑ F(x, ȳα + ϑδȳα) = 0.

Therefore

δF̃ = ∂/∂ϑ F(x, ỹ + ϑδỹ)|ϑ=0 = 0,

i.e., (10.2.2) holds.

Theorem 10.2.2. Let F̃, F̃1, F̃2 be fuzzy-valued compound functions, same-order variationable. Then
(1) δ(F̃1 ± F̃2) = δF̃1 ± δF̃2;
(2) δ(F̃1 · F̃2) = F̃1 δF̃2 + F̃2 δF̃1;
(3) δ(k · F̃) = k δF̃;
(4) δ(F̃1/F̃2) = (F̃2 δF̃1 − F̃1 δF̃2)/F̃2²   (F̃2 ≠ 0);
(5) δF̃ⁿ = n F̃ⁿ⁻¹ δF̃;
(6) δ ∫_a^b F̃ dx = ∫_a^b δF̃ dx.

Proof: Only (2) and (6) are proved; the others can be proved similarly.
(2) Let F(x, ỹ(x), ỹ'(x)) = F1(x, ỹ(x), ỹ'(x)) · F2(x, ỹ(x), ỹ'(x)). Then


∂/∂ϑ F(x, ỹ + ϑδỹ, ỹ' + ϑδỹ')
= ∂/∂ϑ {F1(x, ỹ + ϑδỹ, ỹ' + ϑδỹ') F2(x, ỹ + ϑδỹ, ỹ' + ϑδỹ')}
⟺ ∪_{α∈(0,1]} α ∂/∂ϑ F(x, ȳα + ϑδȳα, ȳα' + ϑδȳα')
= ∪_{α∈(0,1]} α [{∂/∂ϑ F1(x, ȳα + ϑδȳα, ȳα' + ϑδȳα')} F2(x, ȳα + ϑδȳα, ȳα' + ϑδȳα') + F1(x, ȳα + ϑδȳα, ȳα' + ϑδȳα') ∂/∂ϑ F2(x, ȳα + ϑδȳα, ȳα' + ϑδȳα')].

(6) The conclusion holds from the proof of (6) in Theorem 10.1.2 and from the representation theorem of fuzzy numbers.

Lemma II (Basic fuzzy variation lemma). Suppose the fuzzy-valued function φ̃(x) is continuous on (a, b), and an arbitrary function η(x) satisfies the classical variation lemma conditions [Ail52], i.e.,
1° η(x) has a kth continuous derivative on (a, b);
2° η(a) = η(b);
3° |η(x)| < ε, |η^(1)(x)| < ε, ..., |η^(k)(x)| < ε, where ε is an arbitrarily small positive number.
If ∫_a^b φ̃(x)η(x) dx = 0, then φ̃(x) = 0 on [a, b].

Proof: Since ∫_a^b φ̃(x)η(x) dx ≜ ∪_{α∈(0,1]} α ∫_a^b φ̄α(x)η(x) dx, for an arbitrary α ∈ (0, 1] we have φ̃(x) = 0, x ∈ [a, b], from variation Lemma I and the representation theorem of fuzzy numbers.

The extreme value of the fuzzy-valued functional is as follows.

Definition 10.2.10. If the fuzzy-valued functional Π(ỹ(x)) is not greater than Π(ỹ0(x)) on an arbitrary curve near ỹ = ỹ0, i.e., if ΔΠ̃ = Π(ỹ(x)) − Π(ỹ0(x)) ⊂ 0 (or ≠ 0), the functional Π(ỹ(x)) is said to reach the maximum (or a strict maximum) on the curve ỹ = ỹ0(x). The minimum-valued curve can be defined similarly.

Theorem 10.2.3. If the fuzzy-valued functional Π(ỹ(x)) with variation reaches a maximum (or minimum) at ỹ = ỹ0(x), then δΠ̃ = 0 at ỹ = ỹ0(x).

Proof: As

Π(ỹ0(x) + ϑδỹ) = φ̃(ϑ) ⟺ ∪_{α∈(0,1]} α Π(ȳ0α(x) + ϑδȳα) = ∪_{α∈(0,1]} α φ̄α(ϑ)


holds for an arbitrary α ∈ (0, 1], the conclusion follows from Theorem 10.1.3.

It is not difficult to extend the above results to fuzzy-valued functionals of other types.

Theorem 10.2.4. If the fuzzy-valued functional Π(ỹ(x)) has 1st and 2nd fuzzy-valued variations δΠ̃ and δ²Π̃, and at ỹ = ỹ0(x)

δΠ(ỹ0(x)) = 0,   δ²Π(ỹ0(x)) ≠ 0

holds, then the extreme value is taken by the fuzzy-valued functional Π(ỹ(x)) at ỹ = ỹ0(x). At δ²Π(ỹ0(x)) ⊂ 0 a maximum exists, and at δ²Π(ỹ0(x)) ⊃ 0 a minimum exists.

Proof: Let the fuzzy-valued functional be φ̃(ϑ) = Π(ỹ0(x) + ϑδỹ). While

φ̃(ϑ) = Π(ỹ0(x) + ϑδỹ) ≜ ∪_{α∈(0,1]} α φ̄α(ϑ) = ∪_{α∈(0,1]} α Π(ȳ0α(x) + ϑδȳα)

for an arbitrary α ∈ (0, 1], the conclusion holds from Theorem 10.1.4.

With the L-R fuzzy functional variation discussed, we can obtain the same conclusions corresponding to the results above.

10.2.3 Variation of Ordinary or Fuzzy-Valued Functional at Fuzzy Points

Let Πy be a variationable ordinary functional on [a, b] and δΠy be the variation of Πy. Suppose that X̃ is a fuzzy point, i.e., a convex fuzzy set on R, and a support of X̃ is

s(X̃) = {x ∈ R | μX̃(x) > 0} ⊂ [a, b].

Since δΠy is also a function on [a, b], by using the one-place extension principle, we have the following.

Definition 10.2.11. Suppose that δΠy(X̃) = ∪_{α∈(0,1]} α δΠy(Xα) is the 1st variation of the ordinary functional at the fuzzy point X̃, where δΠy(Xα) = {z | ∃x ∈ Xα; δΠy(x) = z}, and its membership function is

μ_{δΠy(X̃)}(z) = ∨_{δΠy(x)=z} μX̃(x).
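Definition 10.2.11 is the one-place extension principle applied to x ↦ δΠy(x). On a grid this becomes a sup over preimages; the sketch below uses an illustrative triangular fuzzy point and an illustrative monotone function g standing in for δΠy:

```python
# Extension principle on a grid: mu(z) = sup{ mu_X(x) : g(x) = z },
# with a triangular fuzzy point mu_X and a monotone g (both illustrative).
def mu_X(x, a=0.0, m=1.0, b=2.0):
    # triangular fuzzy point on [a, b] with peak at m
    if x <= a or x >= b:
        return 0.0
    return (x - a) / (m - a) if x <= m else (b - x) / (b - m)

def extend(g, xs):
    """Image fuzzy set of X~ under g on the discrete grid xs."""
    out = {}
    for x in xs:
        z = round(g(x), 6)
        out[z] = max(out.get(z, 0.0), mu_X(x))
    return out

xs = [i / 100 for i in range(-50, 251)]
image = extend(lambda x: 2 * x + 1, xs)   # monotone g for clarity
peak = max(image, key=image.get)
print(peak, image[peak])
```

The peak of the image sits at g(1) with membership 1, as the sup-over-preimages formula predicts for a monotone g.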

We call ∂²/∂ϑ² Π(y(X̃) + ϑδy(X̃))|ϑ=0 the 2nd variation of the ordinary functional at fuzzy points, writing δ²Πy(X̃), i.e.,


δ²(Πy(X̃)) ≜ ∪_{α∈(0,1]} α ∂²/∂ϑ² Π(y(Xα) + ϑδy(Xα))|ϑ=0.

The variation properties of an ordinary functional at ordinary points can be extended to the case of ordinary functional variation at fuzzy points by Definition 10.2.11.

Definition 10.2.12. Let Πỹ be a one-place fuzzy-valued functional, variationable on [a, b], where the variation δΠỹ is a mapping from [a, b] to F(R). By the extension principle, let X̃ be a fuzzy point and S(X̃) ⊂ [a, b] a support. Then the variation of Πỹ at the fuzzy point X̃ can be defined by

δΠỹ(X̃) = ∪_{α∈(0,1]} α δΠỹ(Xα) ∈ F(F(R)),

where δΠỹ(Xα) = {γ̃ ∈ F(R) | ∃x ∈ Xα; δΠỹ(x) = γ̃}, and its membership function is

μ_{δΠỹ(X̃)}(γ̃) = ∨_{δΠỹ(x)=γ̃} μX̃(x).

We call ∂²/∂ϑ² Π(ỹ(X̃) + ϑδỹ(X̃))|ϑ=0 the 2nd variation of the fuzzy-valued functional at fuzzy points, writing δ²Π̃ỹ(X̃) as

δ²Π̃ỹ(X̃) ≜ ∪_{α∈(0,1]} α ∂²/∂ϑ² Π(ỹ(Xα) + ϑδỹ(Xα))|ϑ=0.

The corresponding results of Section 10.1 and Section 10.2.2 carry over to ordinary or fuzzy-valued functionals with fuzzy-point variation, which is omitted here.

10.2.4 Conclusion

The author has put forward the basic conception and properties of variation for interval and fuzzy functionals in this section, and discussed further results which will be widely used in fuzzy physics, engineering theory and approximate calculation. The variational calculus on the boundary and direct algorithms for variational problems under fuzzy environments will be discussed next.

10.3

Convex Interval and Fuzzy Function and Functional

10.3.1 Introduction

On the foundation of interval and fuzzy functions, we introduce the concept of convex interval and convex fuzzy functions with functionals, give the definitions of a convex function and convex functional for an interval function and for an ordinary function at fuzzy points, and establish conditions for judging their convexity.

10.3.2 Convex Interval Function with Functional

1. Convex interval function

See Ref. [Cen87] for the definition of an interval function.

Definition 10.3.1. Let J̄(y) = [J−(y), J+(y)] (J−(y) ≤ J+(y)) be an interval function defined on [a, b] ⊂ D ⊂ R (D a convex region and R the real field). If for ∀λ ∈ [0, 1] and y, z ∈ D there always hold

J−(λy + (1 − λ)z) ≤ λJ−(y) + (1 − λ)J−(z)

and

J+(λy + (1 − λ)z) ≤ λJ+(y) + (1 − λ)J+(z),

i.e.,

J̄(λy + (1 − λ)z) ⊆ λJ̄(y) + (1 − λ)J̄(z),   (10.3.1)

we call J̄(y) a convex interval function.

For the interval function J̄(y), if J̄ is convex, then −J̄(y) = [−J+(y), −J−(y)] is a concave function.

Definition 10.3.2. Suppose J̄(y) to be an interval function. If at y0 ∈ [a, b] there exist the common nth derivatives J−(n)(y0) and J+(n)(y0) (n = 1, 2), we say that J̄(y) is nth derivable at y0, and

[min{J−(n)(y0), J+(n)(y0)}, max{J−(n)(y0), J+(n)(y0)}]

is the nth interval derivative of J̄(y) at y0. When J−(n)(y0) ≤ J+(n)(y0), [J−(n)(y0), J+(n)(y0)] is the nth interval same-order derivative of J̄(y) at y0; otherwise, [J+(n)(y0), J−(n)(y0)] is the nth interval antitone derivative of J̄(y) at y0. In this book we assume the functions to be all same-order derivable.

In the binary situation (the n(≥3)-variate case is discussed similarly), we call

∂²J̄(yi, yk)/∂yi∂yk = {∂²J−(yi, yk)/∂yi∂yk, ∂²J+(yi, yk)/∂yi∂yk}

the 2nd partial derivative of the binary interval function J̄. It is not difficult to obtain the definition of an interval matrix and the interval Taylor theorem [JM61] by using the definition of interval function.

Theorem 10.3.1. If J̄(y) is a 2nd differentiable interval function with interval matrix (∂²J̄/∂yi∂yk) ⊇ 0, then J̄ is a convex interval function.
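Definition 10.3.1 reduces convexity of J̄ to the two endpoint inequalities for J− and J+, which is what its "⊆" abbreviates. Those inequalities can be brute-force checked numerically; the example interval function J̄(y) = [y², y² + 1] below is our assumption (both endpoint functions are convex, so the check should pass):

```python
# Sketch: numerically testing the endpoint inequalities of Definition 10.3.1
# for an assumed interval function J(y) = [y**2, y**2 + 1].

def J(y):
    return (y * y, y * y + 1.0)   # (J^-(y), J^+(y)); an assumed example

def endpointwise_leq(A, B, eps=1e-12):
    """The two inequalities of Definition 10.3.1 for intervals A, B."""
    return A[0] <= B[0] + eps and A[1] <= B[1] + eps

def convex_10_3_1(J, ys, zs, lams):
    for y in ys:
        for z in zs:
            for lam in lams:
                comb = tuple(lam * a + (1 - lam) * b for a, b in zip(J(y), J(z)))
                if not endpointwise_leq(J(lam * y + (1 - lam) * z), comb):
                    return False
    return True

grid = [i / 4 - 2 for i in range(17)]                 # y, z in [-2, 2]
print(convex_10_3_1(J, grid, grid, [0.0, 0.25, 0.5, 0.75, 1.0]))  # True
```

Replacing J with an interval function whose endpoints are concave (e.g. [−y², −y² + 1]) makes the check fail, as Definition 10.3.1 requires.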


Proof: Following the proof in Ref. [JM61], suppose f̄(t) = J̄(ty + (1 − t)z). Since

f̄''(t) = Σ_{i,k} (yi − zi)(yk − zk)(∂²J̄/∂yi∂yk)|_{ty+(1−t)z},

the right side is non-negative, so that f̄''(t) ≥ 0. Applying the Taylor theorem [JM61] to the functions f̄−(t) and f̄+(t) respectively, we get

f̄(1) − f̄(λ) = (1 − λ)f̄'(λ) + (1/2)(1 − λ)²f̄''(λ') ⊇ (1 − λ)f̄'(λ),   (10.3.2)

where λ' is a number between 1 and λ. Similarly,

f̄(0) − f̄(λ) ⊇ −λf̄'(λ).   (10.3.3)

Computing λ × (10.3.2) + (1 − λ) × (10.3.3) gives λf̄(1) + (1 − λ)f̄(0) − f̄(λ) ⊇ 0, which is (10.3.1), so J̄ is a convex function by Definition 10.3.1. The theorem is certified.

Note 10.3.1. The interval function derivative is not necessarily an interval number [WL85].

2. Convex interval functional

Definition 10.3.3. Let

Π̄(y, y') = ∫_{λ0}^{λ1} F̄(x, y, y')dx = [Π−(y, y'), Π+(y, y')] = [∫_{λ0}^{λ1} F−(x, y, y')dx, ∫_{λ0}^{λ1} F+(x, y, y')dx].   (10.3.4)

Then we call (10.3.4) an interval functional, where F̄ is an interval function.

Definition 10.3.4. Let Π̄ be an interval functional defined in a convex region D. If for 0 ≤ λ ≤ 1; y, y'; z, z' ∈ D, we always have

Π̄[λy + (1 − λ)z, λy' + (1 − λ)z'] ⊆ λΠ̄(y, y') + (1 − λ)Π̄(z, z'),   (10.3.5)

we call the interval functional Π̄ convex in D.

If Π̄(y, y') is a convex interval functional, then −Π̄(y, y') = [−Π+(y, y'), −Π−(y, y')] is a concave one.

Theorem 10.3.2. Let F̄y'y' ⊇ 0 and F̄yyF̄y'y' − (F̄yy')² ⊇ 0. Then F̄(x, y, y') is a convex interval function concerning the two variables y(x), y'(x). If


y(x), y'(x) are regarded as two independent functions, then Π̄(y, y') of Definition 10.3.3 is a convex interval functional.

Proof: Similarly to Formula (10.3.1), for 0 ≤ λ ≤ 1; y, y'; z, z' ∈ D, (10.3.5) always holds. As in the proof of Theorem 10.3.1, we only need to prove

(∂²/∂t²)Π̄(ty + (1 − t)z, ty' + (1 − t)z') ⊇ 0.   (10.3.6)

From Formula (10.3.4) in Definition 10.3.3, the left side of (10.3.6) is

∫[(F̄yy)(y − z)² + 2(F̄yy')(y − z)(y' − z') + (F̄y'y')(y' − z')²]dx,   (10.3.7)

where (F̄yy), etc., represent F̄yy(x, ty + (1 − t)z, ty' + (1 − t)z'), etc. By assumption we know

(F−yy)(y − z)² + 2(F−yy')(y − z)(y' − z') + (F−y'y')(y' − z')² ≥ 0,
(F+yy)(y − z)² + 2(F+yy')(y − z)(y' − z') + (F+y'y')(y' − z')² ≥ 0.

Therefore (F̄yy)(y − z)² + 2(F̄yy')(y − z)(y' − z') + (F̄y'y')(y' − z')² ⊇ 0, i.e., (10.3.7) ⊇ 0, such that (10.3.6) holds.

10.3.3 Convex Function with Functional at Fuzzy Points

1. Convex function at fuzzy points

Suppose J is an ordinary differentiable function defined on [a, b], and x̃ a fuzzy point (i.e., a convex fuzzy set on R) with support S(x̃) = {x ∈ R | μx̃(x) > 0} ⊆ [a, b]. Suppose again that y(x̃) is also a fuzzy point, with support S(y(x̃)) = {y(x) ∈ R | μ_{y(x̃)}(y(x)) > 0} ⊆ [c, d]. Then we have the following by the extension principle. Suppose J to be a one-place function defined on [a, b]; if S(y(x̃)) ⊂ [c, d], then we define

J(y(x̃)) = ⋃_{α∈(0,1]} αJ(y(x̄α)).

Definition 10.3.5. Let J(y(x̃)) be an ordinary function defined on [a, b]. Then we call J(y(x̃)) a convex function at the fuzzy point x̃ if for ∀λ, α ∈ [0, 1] and y(x̃), z(x̃) ∈ R, we have

J(λy(x̃) + (1 − λ)z(x̃)) ⊆ λJ(y(x̃)) + (1 − λ)J(z(x̃)),

i.e.,

⋃_{α∈(0,1]} α{J(λy(x̄α) + (1 − λ)z(x̄α))} ⊆ ⋃_{α∈(0,1]} α{λJ(y(x̄α)) + (1 − λ)J(z(x̄α))}.   (10.3.8)
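Condition (10.3.8) is a level-by-level statement, so it can be sketched numerically one α-cut at a time, reading "⊆" as the endpointwise inequalities used for intervals earlier in this section. The choice of J = exp and the sample α-cut intervals below are assumptions for illustration:

```python
# Sketch: testing (10.3.8) level by level for an assumed ordinary convex
# function J = exp and assumed alpha-cut intervals of y(x~) and z(x~).
import math

def img(f, lo, hi, n=100):
    """Image interval of f over [lo, hi], approximated by sampling."""
    vals = [f(lo + (hi - lo) * i / n) for i in range(n + 1)]
    return (min(vals), max(vals))

def comb(lam, A, B):
    """lam*A + (1 - lam)*B for intervals A, B."""
    return (lam * A[0] + (1 - lam) * B[0], lam * A[1] + (1 - lam) * B[1])

def holds_10_3_8(f, Y, Z, lam, eps=1e-9):
    left = img(f, *comb(lam, Y, Z))            # J(lam*y + (1-lam)*z) on this cut
    right = comb(lam, img(f, *Y), img(f, *Z))  # lam*J(y) + (1-lam)*J(z)
    return left[0] <= right[0] + eps and left[1] <= right[1] + eps

# Two alpha-levels of y(x~) and z(x~), modelled by their cut intervals.
cuts = {0.5: ((0.0, 1.0), (1.0, 2.0)), 1.0: ((0.4, 0.6), (1.4, 1.6))}
print(all(holds_10_3_8(math.exp, Y, Z, 0.3) for Y, Z in cuts.values()))  # True
```

Swapping in a concave f (e.g. f(t) = −t²) makes the level condition fail, matching the intent of Definition 10.3.5.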

Definition 10.3.6. Let J(y(x̃)) be an ordinary function defined on [a, b]. If for ∀α ∈ (0, 1] the derivative J(n)(y(x̄0α)) (n = 1, 2) exists at the point y(x̄0α) ∈ R, then we say the nth derivative of J(y(x̃)) exists at the fuzzy point y(x̃0), written as

J(n)(y(x̃0)) = ⋃_{α∈(0,1]} αJ(n)(y(x̄0α)),

where J(n)(y(x̄0α)) = {γ | ∃y(x0) ∈ y(x̄0α), J(n)(y(x0)) = γ}, its membership function being

μ_{J(n)(y(x̃0))}(γ) = ⋁_{J(n)(y(x0))=γ} μ_{y(x̃0)}(y(x0)).

In the binary situation (the n(≥3)-variate case is discussed similarly), we call

∂²J(yi(x̃), yk(x̃))/∂yi∂yk = ⋃_{α∈(0,1]} α ∂²J(yi(x̄α), yk(x̄α))/∂yi∂yk

the 2nd partial derivative of a binary ordinary function at fuzzy points, and its membership function is

μ_{∂²J(yi(x̃),yk(x̃))/∂yi∂yk}(γ) = ⋁_{∂²J(yi(x),yk(x))/∂yi∂yk=γ} {μ_{yi(x̃)}(yi) ∧ μ_{yk(x̃)}(yk)}.

Theorem 10.3.3. Let y(x̃) be a fuzzy point. If J is a 2nd differentiable ordinary function with matrix (∂²J/∂yi∂yk) ⊇ 0, then J(y(x̃)) is a convex function at fuzzy points.

Proof: According to the assumption and the definition of fuzzy numbers, let f(t) = J(ty(x̃) + (1 − t)z(x̃)) be a function of t alone. Then

f''(t) = Σ_{i,k} (yi(x̃) − zi(x̃))(yk(x̃) − zk(x̃))(∂²J/∂yi∂yk)|_{ty(x̃)+(1−t)z(x̃)},


and the right end is not negative, because the right end of

Σ_{i,k} (yi(x̃) − zi(x̃))(yk(x̃) − zk(x̃))(∂²J/∂yi∂yk)|_{ty(x̃)+(1−t)z(x̃)} = ⋃_{α∈(0,1]} α{Σ_{i,k} (yi(x̄α) − zi(x̄α))(yk(x̄α) − zk(x̄α))(∂²J/∂yi∂yk)|_{ty(x̃)+(1−t)z(x̃)}}

is obviously not negative; hence f''(t) ≥ 0. From the extension principle and by applying the Taylor theorem, we get

f(1) − f(λ) = (1 − λ)f'(λ) + (1/2)(1 − λ)²f''(λ') ⊇ (1 − λ)f'(λ),   (10.3.9)

where λ' is a number between 1 and λ. Similarly,

f(0) − f(λ) ⊇ −λf'(λ).   (10.3.10)

Computing λ × (10.3.9) + (1 − λ) × (10.3.10) gives λf(1) + (1 − λ)f(0) − f(λ) ⊇ 0, i.e., (10.3.8). Hence J(y(x̃)) is a convex function at fuzzy points by Definition 10.3.5 and the theorem holds.

2. Convex functional at fuzzy points

Definition 10.3.7. Suppose Π to be an ordinary functional and x̃ a fuzzy point in R. Then we call

Π(y(x̃), y'(x̃)) = ∫_{λ0}^{λ1} F(x̃, y(x̃), y'(x̃))dx = ⋃_{α∈(0,1]} αΠ(y(x̄α), y'(x̄α)) = ⋃_{α∈(0,1]} α∫_{λ0}^{λ1} F(x̄α, y(x̄α), y'(x̄α))dx   (10.3.11)

a functional at fuzzy points, where F is an ordinary function.

Definition 10.3.8. Let Π be an ordinary functional defined in a convex region D. If for fuzzy points y(x̃), z(x̃) ∈ R and arbitrary λ ∈ [0, 1], there is

Π(λy(x̃) + (1 − λ)z(x̃), λy'(x̃) + (1 − λ)z'(x̃)) ⊆ λΠ(y(x̃), y'(x̃)) + (1 − λ)Π(z(x̃), z'(x̃)),

i.e.,

⋃_{α∈(0,1]} α{Π(λy(x̄α) + (1 − λ)z(x̄α), λy'(x̄α) + (1 − λ)z'(x̄α))} ⊆ ⋃_{α∈(0,1]} α{λΠ(y(x̄α), y'(x̄α)) + (1 − λ)Π(z(x̄α), z'(x̄α))},   (10.3.12)

then Π is called a convex functional at fuzzy points in D.
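Definition 10.3.7 evaluates an ordinary integral functional along the α-cuts of a fuzzy point, producing an interval value per level. A numerical sketch, where the integrand F = y² + y'², the family y_c(x) = c·sin(πx), and the triangular fuzzy parameter are all our assumptions:

```python
# Sketch: per-alpha-cut evaluation of Pi(y, y') = \int_0^1 (y^2 + y'^2) dx
# on the assumed family y_c(x) = c*sin(pi*x); c ranges over an alpha-cut
# of an assumed triangular fuzzy parameter c~ = (1, 2, 3).
import math

def Pi(c, n=1000):
    """Trapezoid quadrature of \int_0^1 (y^2 + y'^2) dx for y = c*sin(pi*x)."""
    h = 1.0 / n
    def f(x):
        y = c * math.sin(math.pi * x)
        dy = c * math.pi * math.cos(math.pi * x)
        return y * y + dy * dy
    s = 0.5 * (f(0.0) + f(1.0)) + sum(f(i * h) for i in range(1, n))
    return s * h

# Pi(c) = c^2 * (1 + pi^2) / 2 analytically; it is increasing in c > 0,
# so each alpha-cut [1 + alpha, 3 - alpha] maps to an interval of values.
for alpha in (0.5, 1.0):
    lo, hi = 1 + alpha, 3 - alpha
    print(alpha, (Pi(lo), Pi(hi)))
```

Each printed pair is one α-level of the fuzzy-valued result in (10.3.11); the α = 1 level collapses toward a single ordinary value.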


Theorem 10.3.4. Let Fy'y' ⊃ 0 and FyyFy'y' − (Fyy')² ⊇ 0. Then F(x̃, y(x̃), y'(x̃)) is a convex function concerning the two fuzzy variables y(x̃) and y'(x̃). If y(x̃) and y'(x̃) are regarded as two independent fuzzy functions, then Π(y(x̃), y'(x̃)) defined by (10.3.11) is a convex functional at fuzzy points.

Proof: Similarly to Formula (10.3.8), for 0 ≤ λ ≤ 1, (10.3.12) always holds. As in the proof of Theorem 10.3.3, we only need to prove

(∂²/∂t²)Π(ty(x̃) + (1 − t)z(x̃), ty'(x̃) + (1 − t)z'(x̃)) ⊇ 0.   (10.3.13)

From Formula (10.3.11) in Definition 10.3.7, the left end of (10.3.13) is

∫[(Fyy)(y(x̃) − z(x̃))² + 2(Fyy')(y(x̃) − z(x̃))(y'(x̃) − z'(x̃)) + (Fy'y')(y'(x̃) − z'(x̃))²]dx,   (10.3.14)

where (Fyy), etc., represent Fyy(x̃, ty(x̃) + (1 − t)z(x̃), ty'(x̃) + (1 − t)z'(x̃)), etc. By assumption we know

(Fyy)(y(x−) − z(x−))² + 2(Fyy')(y(x−) − z(x−))(y'(x−) − z'(x−)) + (Fy'y')(y'(x−) − z'(x−))² ≥ 0,
(Fyy)(y(x+) − z(x+))² + 2(Fyy')(y(x+) − z(x+))(y'(x+) − z'(x+)) + (Fy'y')(y'(x+) − z'(x+))² ≥ 0.

Therefore

⋃_{α∈(0,1]} α{(Fyy)(y(x̄α) − z(x̄α))² + 2(Fyy')(y(x̄α) − z(x̄α))(y'(x̄α) − z'(x̄α)) + (Fy'y')(y'(x̄α) − z'(x̄α))²} ⊇ 0
⇒ (Fyy)(y(x̃) − z(x̃))² + 2(Fyy')(y(x̃) − z(x̃))(y'(x̃) − z'(x̃)) + (Fy'y')(y'(x̃) − z'(x̃))² ⊇ 0,

i.e., (10.3.14) ⊇ 0, such that (10.3.13) holds.

10.3.4 Conclusion

In this section we have expanded the classical concept of convexity and established the theoretical frame of convex interval and fuzzy functions with convex functionals. In the next section we advance to convex fuzzy-valued functions and functionals. Under this frame, much research can be developed on optimization problems for static and dynamic cases under interval and fuzzy environments. Work on this aspect will be continued.

10.4

Convex Fuzzy-Valued Function and Functional

In this section, on the foundation of fuzzy-valued function and functional variation, we put forward the following [Cao09]:
(1) developing a concept of convex fuzzy-valued function with functional;
(2) discussing the convexity of fuzzy-valued functions and functionals at ordinary and fuzzy points, respectively.

10.4.1 Convex Fuzzy-Valued Function and Functional at Ordinary Points

1. Convex fuzzy-valued function at ordinary points

A fuzzy-valued function with functional can be defined similarly to the above section.

Definition 10.4.1. Suppose J̃(y) to be a fuzzy-valued function defined on [a, b], with

J̃(y) = ⋃_{α∈(0,1]} αJ̄α(y) = ⋃_{α∈(0,1]} α[Jα−(y), Jα+(y)].

If for ∀λ ∈ [0, 1] and y, z ∈ R, we have

J̃(λy + (1 − λ)z) ⊆ λJ̃(y) + (1 − λ)J̃(z),   (10.4.1)

then we call J̃(y) a convex fuzzy-valued function. Here (10.4.1) means

⋃_{α∈(0,1]} α{J̄α(λy + (1 − λ)z)} ⊆ ⋃_{α∈(0,1]} α{λJ̄α(y) + (1 − λ)J̄α(z)}

⇐⇒ ⋃_{α∈(0,1]} α{Jα−(λy + (1 − λ)z)} ≤ ⋃_{α∈(0,1]} α{λJα−(y) + (1 − λ)Jα−(z)},
⋃_{α∈(0,1]} α{Jα+(λy + (1 − λ)z)} ≤ ⋃_{α∈(0,1]} α{λJα+(y) + (1 − λ)Jα+(z)}.

If J̃(y) is a convex fuzzy-valued function, then −J̃(y) = ⋃_{α∈(0,1]} α[−Jα+(y), −Jα−(y)] is a concave one.

Definition 10.4.2. Let J̃(y) be a fuzzy-valued function defined on the interval [a, b]. If at some point y0 ∈ (a, b] there exists the nth interval derivative J̄α(n)(y0) (n = 1, 2) for ∀α ∈ (0, 1], then we say that the nth fuzzy-valued derivative of J̃(y) exists at y0, written down as

J̃(n)(y0) = ⋃_{α∈(0,1]} αJ̄α(n)(y0) = ⋃_{α∈(0,1]} α[Jα−(n)(y0), Jα+(n)(y0)],

its membership function being

μ_{J̃(n)(y0)}(γ) = ⋁{α | Jα−(n)(y0) = γ, or Jα+(n)(y0) = γ}.

As for the binary situation (the n(≥3)-variate case is discussed similarly), we call

∂²J̃(yi, yk)/∂yi∂yk = ⋃_{α∈(0,1]} α (∂/∂yk)(∂J̄α(yi, yk)/∂yi) = ⋃_{α∈(0,1]} α[∂²Jα−(yi, yk)/∂yi∂yk, ∂²Jα+(yi, yk)/∂yi∂yk]

the 2nd partial derivative of the binary fuzzy-valued function J̃, its membership function being

μ_{∂²J̃(yi,yk)/∂yi∂yk}(γ) = ⋁{α | ∂²Jα−(yi, yk)/∂yi∂yk = γ, or ∂²Jα+(yi, yk)/∂yi∂yk = γ}.

Theorem 10.4.1. If J̃(y) is a 2nd differentiable fuzzy-valued function with fuzzy-valued matrix (∂²J̃/∂yi∂yk) ⊇ 0, then J̃ is a convex fuzzy-valued function.

Proof: According to the assumption and the definition of a fuzzy-valued function, let f̃(t) = J̃(ty + (1 − t)z). Because the right side of

f̃''(t) = Σ_{i,k} (yi − zi)(yk − zk)(∂²J̃/∂yi∂yk)|_{ty+(1−t)z}

is not negative, f̃''(t) ≥ 0; from the extension principle and by applying the Taylor theorem, we get

f̃(1) − f̃(λ) = (1 − λ)f̃'(λ) + (1/2)(1 − λ)²f̃''(λ') ⊇ (1 − λ)f̃'(λ),   (10.4.2)

where λ' is a number between 1 and λ. Similarly,

f̃(0) − f̃(λ) ⊇ −λf̃'(λ).   (10.4.3)

Computing λ × (10.4.2) + (1 − λ) × (10.4.3) gives λf̃(1) + (1 − λ)f̃(0) − f̃(λ) ⊇ 0, which is (10.4.1). Hence J̃ is a convex fuzzy-valued function by Definition 10.4.1 and the theorem is certified.
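Theorem 10.4.1 can be read numerically: at every level α, both endpoint functions of J̄α must have nonnegative second derivative. A sketch with an assumed one-variable fuzzy-valued function whose α-cuts are [y² − w, y² + w]:

```python
# Sketch: a numerical, levelwise reading of Theorem 10.4.1 for an assumed
# fuzzy-valued J~ with alpha-cuts J_alpha(y) = [y**2 - w, y**2 + w].

def second_diff(f, y, h=1e-3):
    """Central second difference approximating f''(y)."""
    return (f(y + h) - 2.0 * f(y) + f(y - h)) / (h * h)

def J_alpha(y, alpha):
    """Alpha-cut of the assumed J~(y); width shrinks as alpha grows."""
    w = 0.5 * (1.0 - alpha)
    return (y * y - w, y * y + w)

def convex_levelwise(J, alphas, ys, tol=1e-6):
    """Nonnegative second derivative at both endpoints of every level."""
    for a in alphas:
        for y in ys:
            lo = second_diff(lambda t: J(t, a)[0], y)
            hi = second_diff(lambda t: J(t, a)[1], y)
            if lo < -tol or hi < -tol:
                return False
    return True

print(convex_levelwise(J_alpha, [0.25, 0.5, 1.0], [-1.0, 0.0, 2.0]))  # True
```

A fuzzy-valued function with concave endpoint functions (e.g. cuts [−y² − w, −y² + w]) fails the same check, as Theorem 10.4.1 would predict.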


Note 10.1. The derivative of a fuzzy-valued function is not necessarily a fuzzy number [WL85].

2. Convex fuzzy-valued functional at ordinary points

Definition 10.4.3. We call the formula

Π̃(y, y') = ∫_{λ0}^{λ1} F̃(x, y, y')dx = ⋃_{α∈(0,1]} αΠ̄α(y, y') = ⋃_{α∈(0,1]} α[Πα−(y, y'), Πα+(y, y')] = ⋃_{α∈(0,1]} α∫_{λ0}^{λ1} F̄α(x, y, y')dx   (10.4.4)

a fuzzy-valued functional, where F̃ is a fuzzy-valued function.

Definition 10.4.4. Let Π̃(y, y') be a fuzzy-valued functional defined in a convex region D. If for ∀λ ∈ [0, 1]; y, y'; z, z' ∈ D, we always have

Π̃(λy + (1 − λ)z, λy' + (1 − λ)z') ⊆ λΠ̃(y, y') + (1 − λ)Π̃(z, z'),

i.e.,

⋃_{α∈(0,1]} αΠ̄α(λy + (1 − λ)z, λy' + (1 − λ)z') ⊆ ⋃_{α∈(0,1]} α{λΠ̄α(y, y') + (1 − λ)Π̄α(z, z')},   (10.4.5)

we call the fuzzy-valued functional Π̃(y, y') convex in D.

If Π̃(y, y') is a convex fuzzy-valued functional, then −Π̃(y, y') = ⋃_{α∈(0,1]} α[−Πα+(y, y'), −Πα−(y, y')] is a concave one.

Theorem 10.4.2. Let F̃y'y' ⊃ 0 and F̃yyF̃y'y' − (F̃yy')² ⊇ 0. Then F̃(x, y, y') is a convex fuzzy-valued function concerning the two variables y(x) and y'(x). If y(x) and y'(x) are regarded as two independent functions, then Π̃(y, y') of Definition 10.4.3 is a convex fuzzy-valued functional.

Proof: Similarly to Formula (10.4.1), for 0 ≤ λ ≤ 1; y, y'; z, z' ∈ D, (10.4.5) always holds. As in the proof of Theorem 10.4.1, we only need to prove

(∂²/∂t²)Π̃(ty + (1 − t)z, ty' + (1 − t)z') ⊇ 0.   (10.4.6)

From Formula (10.4.4) in Definition 10.4.3, the left side of (10.4.6) is

∫[(F̃yy)(y − z)² + 2(F̃yy')(y − z)(y' − z') + (F̃y'y')(y' − z')²]dx,   (10.4.7)


where (F̃yy), etc., represent F̃yy(x, ty + (1 − t)z, ty' + (1 − t)z'), etc. By assumption we know

(F̄α)yy(y − z)² + 2(F̄α)yy'(y − z)(y' − z') + (F̄α)y'y'(y' − z')² ⊇ 0,

therefore

⋃_{α∈(0,1]} α{(F̄α)yy(y − z)² + 2(F̄α)yy'(y − z)(y' − z') + (F̄α)y'y'(y' − z')²} ⊇ 0
⇒ (F̃yy)(y − z)² + 2(F̃yy')(y − z)(y' − z') + (F̃y'y')(y' − z')² ⊇ 0,

i.e., (10.4.7) ⊇ 0, such that (10.4.6) holds.

10.4.2 Convex Fuzzy-Valued Function and Functional at Fuzzy Points

1. Convex fuzzy-valued function at fuzzy points

Suppose that J̃ is a one-place fuzzy-valued function defined on [a, b]. By the extension principle, if y(x̃) is a fuzzy point and its support is S(y(x̃)) ⊂ [c, d], then

J̃(y(x̃)) = ⋃_{α∈(0,1]} αJ̃(y(x̄α)) ∈ F(F(R))

is a fuzzy-valued function defined at the fuzzy points, where J̃(y(x̄α)) = {γ̃ ∈ F(R) | ∃y(x) ∈ y(x̄α), J̃(y(x)) = γ̃}, its membership function being

μ_{J̃(y(x̃))}(γ̃) = ⋁_{J̃(y(x))=γ̃} μ_{y(x̃)}(y(x)).

Definition 10.4.5. If ∀λ ∈ [0, 1] and fuzzy points y(x̃), z(x̃) ∈ R, there is

J̃[λy(x̃) + (1 − λ)z(x̃)] ⊆ λJ̃(y(x̃)) + (1 − λ)J̃(z(x̃)),

i.e.,

⋃_{α∈(0,1]} α{J̃[λy(x̄α) + (1 − λ)z(x̄α)]} ⊆ ⋃_{α∈(0,1]} α{λJ̃(y(x̄α)) + (1 − λ)J̃(z(x̄α))},

then we call J̃ a convex fuzzy-valued function at fuzzy points.

Definition 10.4.6. Suppose J̃(y(x)) to be a fuzzy-valued function defined on the interval [a, b]. If for ∀α ∈ (0, 1] the derivatives J̃(n)(y(x̄0α)) (n = 1, 2) exist at a certain point y(x̄0α) ∈ R, then we say the nth derivative of J̃(y(x)) exists at the fuzzy point y(x̃0), written down as

J̃(n)(y(x̃0)) = ⋃_{α∈(0,1]} αJ̃(n)(y(x̄0α)) ∈ F(F(R)),

where J̃(n)(y(x̄0α)) = {γ̃ ∈ F(R) | ∃y(x0) ∈ y(x̄0α), J̃(n)(y(x0)) = γ̃}, its membership function being

μ_{J̃(n)(y(x̃0))}(γ̃) = ⋁_{J̃(n)(y(x0))=γ̃} μ_{y(x̃0)}(y(x0)).

In the binary situation (the n(≥3)-variate case is discussed similarly), we call

∂²J̃(yi(x̃), yk(x̃))/∂yi∂yk = ⋃_{α∈(0,1]} α ∂²J̃(yi(x̄α), yk(x̄α))/∂yi∂yk ∈ F(F(R))

the 2nd partial derivative of a binary fuzzy-valued function at fuzzy points, where

∂²J̃(yi(x̄α), yk(x̄α))/∂yi∂yk = {γ̃ | ∃(yi(x), yk(x)) ∈ yi(x̄α) × yk(x̄α), ∂²J̃(yi(x), yk(x))/∂yi∂yk = γ̃},

its membership function being

μ_{∂²J̃(yi(x̃),yk(x̃))/∂yi∂yk}(γ̃) = ⋁_{∂²J̃(yi(x),yk(x))/∂yi∂yk=γ̃} {μ_{yi(x̃)}(yi(x)) ∧ μ_{yk(x̃)}(yk(x))}.

Theorem 10.4.3. Let y(x̃) be a fuzzy point. If J̃ is a 2nd differentiable fuzzy-valued function with fuzzy-valued matrix (∂²J̃/∂yi∂yk) ⊇ 0, then J̃ is a convex fuzzy-valued function at fuzzy points.

Combining Theorem 10.3.1 with Theorem 10.3.3, we can immediately get a proof of this theorem.

2. Convex fuzzy-valued functional at fuzzy points

Definition 10.4.7. Suppose Π̃ to be a fuzzy-valued functional and x̃ a fuzzy point in R. Then we call

Π̃(y(x̃), y'(x̃)) = ∫_{λ0}^{λ1} F̃(x̃, y(x̃), y'(x̃))dx = ⋃_{α∈(0,1]} αΠ̃(y(x̄α), y'(x̄α)) = ⋃_{α∈(0,1]} α∫_{λ0}^{λ1} F̃(x̄α, y(x̄α), y'(x̄α))dx

a fuzzy-valued functional at fuzzy points.

Definition 10.4.8. Let Π̃ be a fuzzy-valued functional defined in a convex region D. If at the fuzzy point x̃ ∈ R for arbitrary λ ∈ [0, 1] there is

Π̃(λy(x̃) + (1 − λ)z(x̃), λy'(x̃) + (1 − λ)z'(x̃)) ⊆ λΠ̃(y(x̃), y'(x̃)) + (1 − λ)Π̃(z(x̃), z'(x̃)),

i.e.,

⋃_{α∈(0,1]} α{Π̃(λy(x̄α) + (1 − λ)z(x̄α), λy'(x̄α) + (1 − λ)z'(x̄α))} ⊆ ⋃_{α∈(0,1]} α{λΠ̃(y(x̄α), y'(x̄α)) + (1 − λ)Π̃(z(x̄α), z'(x̄α))},

then we call Π̃ a convex fuzzy-valued functional at fuzzy points in D.

Theorem 10.4.4. Let F̃y'y' ⊃ 0 and F̃yyF̃y'y' − (F̃yy')² ⊇ 0. Then F̃(x̃, y(x̃), y'(x̃)) is a convex fuzzy-valued function concerning the two fuzzy variables y(x̃) and y'(x̃). If y(x̃) and y'(x̃) are regarded as two independent fuzzy functions, then Π̃(y(x̃), y'(x̃)) of Definition 10.4.7 is a convex fuzzy-valued functional at fuzzy points.

Combining Theorem 10.3.2 with Theorem 10.3.4, we can immediately get a proof of this theorem.

10.5

Variation of Condition Extremum on Interval and Fuzzy-Valued Functional

10.5.1 Introduction

In this section the interval and fuzzy-valued variation is extended to a functional condition extremum, developing that of an interval and fuzzy-valued functional and verifying the effectiveness of the extension with a numerical example.

10.5.2 Variation of Condition Extremum in Interval Functional

Definition 10.5.1. We call

Π̄ = ∫_{x0}^{x1} F̄(x, y; y')dx = [∫_{x0}^{x1} F−(x, y; y')dx, ∫_{x0}^{x1} F+(x, y; y')dx]   (10.5.1)

an interval functional dependent on n unknown functions, where y = y1, y2, ..., yn; y' = y1', y2', ..., yn'. In [Cao91a], [Cao01e] and [Luo84a,b] you can find the definition of interval value and its functional variation.

Theorem 10.5.1. Suppose that functions y1, y2, ..., yn enable an extremum to exist in the interval functional (10.5.1) under the condition

ϕ̄i(x, y) = 0 (i = 1, 2, ..., m; m < n)   (10.5.2)

with (10.5.2) independent, i.e., some m-order interval function determinant is not zero:

D(ϕ̄1, ϕ̄2, ..., ϕ̄m)/D(y1, y2, ..., ym) ≠ 0.   (10.5.3)

Then properly chosen factors λ̄i(x) and yj (i = 1, ..., m; j = 1, 2, ..., n) tally with the Euler equations determined by the interval functional

Π̄* = ∫_{x0}^{x1} (F̄ + Σ_{i=1}^{m} λ̄i(x)ϕ̄i)dx = ∫_{x0}^{x1} F̄* dx,   (10.5.4)

while the functions λ̄i(x) and yj(x) (i = 1, 2, ..., m; j = 1, 2, ..., n) are determined by the interval Euler equations and the interval ones

F̄*yj − (d/dx)F̄*y'j = 0 (j = 1, 2, ..., n),   (10.5.5)

ϕ̄i = 0 (i = 1, 2, ..., m)   (10.5.6)

respectively. If y1, y2, ..., yn and λ̄1(x), λ̄2(x), ..., λ̄m(x) are all regarded as model-variables of the interval functional Π̄*, then (10.5.6) can be considered as Euler equations of the interval functional Π̄*, where (10.5.3) means

D(ϕ1−, ..., ϕm−)/D(y1, ..., ym) ≠ 0,  D(ϕ1+, ..., ϕm+)/D(y1, ..., ym) ≠ 0.
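The independence condition (10.5.3) splits into nonzero Jacobian determinants for the lower- and upper-endpoint constraint families. A finite-difference sketch for m = 2; all constraint functions and the evaluation point are assumed examples:

```python
# Sketch: checking the endpoint determinants behind (10.5.3) numerically.
# phi*_lo and phi*_hi are assumed lower/upper endpoint constraint families.

def jac_det_2x2(phi1, phi2, y1, y2, h=1e-6):
    """2x2 Jacobian determinant D(phi1, phi2)/D(y1, y2), via central differences."""
    def d(f, i):
        if i == 0:
            return (f(y1 + h, y2) - f(y1 - h, y2)) / (2 * h)
        return (f(y1, y2 + h) - f(y1, y2 - h)) / (2 * h)
    return d(phi1, 0) * d(phi2, 1) - d(phi1, 1) * d(phi2, 0)

phi1_lo = lambda y1, y2: y1 + y2 - 1.0
phi2_lo = lambda y1, y2: y1 - y2
phi1_hi = lambda y1, y2: y1 + 2.0 * y2 - 1.0
phi2_hi = lambda y1, y2: y1 - 0.5 * y2

for p1, p2 in ((phi1_lo, phi2_lo), (phi1_hi, phi2_hi)):
    det = jac_det_2x2(p1, p2, 0.5, 0.5)
    print(abs(det) > 1e-9)   # independence holds at (0.5, 0.5)
```

When both determinants are nonzero, the interval linear system (10.5.7) can be solved endpointwise for the multipliers λ̄i(x), which is the step the proof below relies on.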

Proof: According to the interval definition [Cao01e] and the basic condition of extremum ([Cao91a], Theorem 1.1), we have

δΠ̄* = 0 ⇔ ∫_{x0}^{x1} Σ_{j=1}^{n} [∂F̄/∂yj + Σ_{i=1}^{m} λ̄i(x)∂ϕ̄i/∂yj − (d/dx)∂F̄/∂y'j]δyj dx = 0
⇒ ∫_{x0}^{x1} Σ_{j=1}^{n} (F̄*yj − (d/dx)F̄*y'j)δyj dx = 0
⇒ F̄*yj − (d/dx)F̄*y'j = 0 (j = 1, 2, ..., m),   (10.5.7)

where F̄* = F̄ + Σ_{i=1}^{m} λ̄i(x)ϕ̄i. Besides, since (10.5.7) represents an interval linear group with respect to λ̄i, when (10.5.3) holds we have the solution λ̄i(x) = [λi−(x), λi+(x)] (i = 1, 2, ..., m). For such λ̄i(x), the necessary condition of the extremum ∫_{x0}^{x1} Σ_{j=1}^{n} (F̄*yj − (d/dx)F̄*y'j)δyj dx = 0 can be changed


into ∫_{x0}^{x1} Σ_{j=m+1}^{n} (F̄*yj − (d/dx)F̄*y'j)δyj dx = 0. Because of the arbitrariness of δyj (j = m + 1, ..., n), all items can be made zero except one of them in turn, and by applying the basic variation Lemma I in Section 10.1, we have

F̄*yj − (d/dx)F̄*y'j = 0 (j = m + 1, ..., n).   (10.5.8)

By combining (10.5.7) and (10.5.8), the condition extremum function of the functional required by Π̄ and the factors λ̄i(x) all tally with (10.5.5) and (10.5.6).

Theorem 10.5.2. Suppose that functions y1, y2, ..., yn enable an extremum to exist in the interval functional (10.5.1) under the condition

ψ̄i(x, y; y') = 0 (i = 1, 2, ..., m; m < n)   (10.5.9)

with (10.5.9) independent, i.e., there exists a nonzero m-order interval function determinant

D(ψ̄1, ψ̄2, ..., ψ̄m)/D(y1', y2', ..., ym') ≠ 0.   (10.5.10)

Then properly chosen factors λ̄i(x) and yj (i = 1, ..., m; j = 1, 2, ..., n) enable the interval functional in (10.5.1) to reach the condition extremum curve, i.e., its extremum curve satisfies

Π̄* = ∫_{x0}^{x1} (F̄ + Σ_{i=1}^{m} λ̄i(x)ψ̄i)dx = ∫_{x0}^{x1} F̄1* dx,

where F̄1* = F̄ + Σ_{i=1}^{m} λ̄i(x)ψ̄i, and (10.5.10) means

D(ψ1−, ψ2−, ..., ψm−)/D(y1', y2', ..., ym') ≠ 0,  D(ψ1+, ψ2+, ..., ψm+)/D(y1', y2', ..., ym') ≠ 0.

Proof: The theorem can be proved as Theorem 10.5.1.

10.5.3 Variation on Fuzzy-Valued Functional Condition Extremum at Ordinary Points

Definition 10.5.2. Call

Π̃ = ∫_{x0}^{x1} F̃(x, y; y')dx = ⋃_{α∈(0,1]} α∫_{x0}^{x1} F̄α(x, y; y')dx   (10.5.11)

a fuzzy-valued functional depending upon n unknown functions, where F̄α(x, y; y') = [Fα−(x, y; y'), Fα+(x, y; y')].


The definition of fuzzy value and its functional variation can be found in [Cao91a], [Cao01e] and [Luo84a,b].

Theorem 10.5.3. Suppose that the functions yj (j = 1, 2, ..., n) make an extremum exist in (10.5.11) under the condition

ϕ̃i(x, y1, ..., yn) = 0 (i = 1, 2, ..., m; m < n)   (10.5.12)

with (10.5.12) independent, i.e., there exists a nonzero m-order fuzzy-valued function determinant

D(ϕ̃1, ϕ̃2, ..., ϕ̃m)/D(y1, y2, ..., ym) ≠ 0.   (10.5.13)

Then the properly chosen factors λ̃i(x) and yj (i = 1, 2, ..., m; j = 1, 2, ..., n) satisfy the Euler equations determined by the fuzzy-valued functional

Π̃* = ∫_{x0}^{x1} (F̃ + Σ_{i=1}^{m} λ̃i(x)ϕ̃i)dx = ∫_{x0}^{x1} F̃* dx,   (10.5.14)

while the functions λ̃i(x) and yj(x) are determined by the fuzzy-valued Euler equations and the fuzzy-valued ones

F̃*yj − (d/dx)F̃*y'j = 0 (j = 1, 2, ..., n),   (10.5.15)

ϕ̃i = 0 (i = 1, 2, ..., m),   (10.5.16)

respectively. If we regard λ̃i(x) and yj (i = 1, ..., m; j = 1, ..., n) as the variables of the fuzzy-valued functional Π̃*, we can regard (10.5.16) as Euler equations of the fuzzy functional Π̃*, where (10.5.13) means

⋃_{α∈(0,1]} α D(ϕ̄1α, ϕ̄2α, ..., ϕ̄mα)/D(y1, y2, ..., ym) ≠ 0.

Proof: According to the fuzzy-valued (or fuzzy-valued functional) definition [Cao01e] and its basic condition of extremum ([Cao91a], Theorem 2.1), we have

δΠ̃* = 0 ⇔ ⋃_{α∈(0,1]} α∫_{x0}^{x1} Σ_{j=1}^{n} (F̄*yjα − (d/dx)F̄*y'jα)δyj dx = 0
⇒ ⋃_{α∈(0,1]} α(F̄*yjα − (d/dx)F̄*y'jα) = 0 (j = 1, 2, ..., n),   (10.5.17)

where F̄α* = F̄α + Σ_{i=1}^{m} λ̄i(x)ϕ̄iα. Besides, (10.5.17) is a fuzzy-valued linear group with respect to λ̄iα. When (10.5.13) holds, for a certain α we can get the solution λ̄iα(x) = [λiα−(x), λiα+(x)] (i = 1, 2, ..., m) by the proof in Theorem 10.5.1. For such λ̄iα(x), the necessary condition of the fuzzy extremum

⋃_{α∈(0,1]} α∫_{x0}^{x1} Σ_{j=1}^{n} (F̄*yjα − (d/dx)F̄*y'jα)δyj dx = 0

is turned into

⋃_{α∈(0,1]} α∫_{x0}^{x1} Σ_{j=m+1}^{n} (F̄*yjα − (d/dx)F̄*y'jα)δyj dx = 0.

Because δyj is arbitrary, all items can be made zero except one of them in turn and, by the application of the basic variation Lemma II in Section 10.2, we have

⋃_{α∈(0,1]} α(F̄*yjα − (d/dx)F̄*y'jα) = 0 (j = m + 1, ..., n).   (10.5.18)

By combining (10.5.17) and (10.5.18), for arbitrary α ∈ (0, 1] the condition extremum obtained by the fuzzy-valued functional Π̃ and the factors λ̄i(x) all meet (10.5.15) and (10.5.16) from Theorem 10.5.1. Now the theorem holds.

Theorem 10.5.4. Suppose that functions yj (j = 1, 2, ..., n) enable an extremum to exist in (10.5.11) under the condition

ψ̃i(x, y; y') = 0 (i = 1, 2, ..., m; m < n)   (10.5.19)

with (10.5.19) independent, i.e., there exists a nonzero m-order fuzzy-valued function determinant

D(ψ̃1, ψ̃2, ..., ψ̃m)/D(y1', y2', ..., ym') ≠ 0.   (10.5.20)

Then properly chosen factors λ̃i(x) and yj (i = 1, ..., m; j = 1, 2, ..., n) enable the fuzzy-valued functional in (10.5.11) to reach the condition extremum curve, i.e., its extremum curve satisfies

Π̃* = ∫_{x0}^{x1} (F̃ + Σ_{i=1}^{m} λ̃i(x)ψ̃i)dx = ∫_{x0}^{x1} F̃1* dx,

where F̃1* = F̃ + Σ_{i=1}^{m} λ̃i(x)ψ̃i, and (10.5.20) means

⋃_{α∈(0,1]} α D(ψ1α−, ψ2α−, ..., ψmα−)/D(y1', y2', ..., ym') ≠ 0,  ⋃_{α∈(0,1]} α D(ψ1α+, ψ2α+, ..., ψmα+)/D(y1', y2', ..., ym') ≠ 0.


Proof: The theorem can be proved like Theorem 10.5.3.

10.5.4 Numerical Example

Example 10.5.1: Find the fuzzy functional extremum S̃ = ∫_{x0}^{x1} ỹ dx under the equal-circumference condition ∫_{x0}^{x1} √(1 + y'²) dx = l̃.

Make a supplementary functional Π̃* = ∫_{x0}^{x1} (ỹ + λ̃√(1 + y'²))dx, whose fuzzy Euler equation is F̃ − y'F̃y' = C̃1, i.e.,

ỹ + λ̃√(1 + y'²) − λ̃y'²/√(1 + y'²) = C̃1.   (10.5.21)

For a certain determined α, (10.5.21) is

⋃_{α∈(0,1]} α{ȳα + λ̄α√(1 + ȳ'α²) − λ̄αȳ'α²/√(1 + ȳ'α²)} = ⋃_{α∈(0,1]} α{C̄1α}.

We first find

ȳα − C̄1α = −λ̄α/√(1 + ȳ'α²)

by introducing a parameter t such that ȳ' = tan t; then

ȳα − C̄1α = −λ̄α cos t;  dȳα = λ̄α sin t dt,

and dx̄α = dȳα/tan t = λ̄α cos t dt, therefore

x̄α = C̄2α + λ̄α sin t.

When the extremal equation is represented in parameter form, we have

x̄α − C̄2α = λ̄α sin t,  ȳα − C̄1α = −λ̄α cos t,

and by canceling t,

(x̄α − C̄2α)² + (ȳα − C̄1α)² = λ̄α²,

such that

(x̃ − C̃2)² + (ỹ − C̃1)² = λ̃².

This is the curve variety of the functional extremum we sought, where Ciα−, Ciα+, λα−, λα+ (i = 1, 2) are constants and parameters; C̄iα, λ̄α (i = 1, 2) are interval constants and parameters; C̃i, λ̃ are fuzzy constants and parameters.
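The first integral (10.5.21) can be verified numerically along the circular extremal: with y = C1 − λ cos t and y' = tan t, the quantity F − y'F_{y'} collapses to y + λ cos t = C1 exactly. A sketch, where the two λ values stand in for the endpoints of one α-cut of λ̃ and the constants are assumed:

```python
# Sketch: verifying F - y'*F_{y'} = C1 with F = y + lam*sqrt(1 + y'^2)
# along the circle x = C2 + lam*sin t, y = C1 - lam*cos t (so y' = tan t).
# C1, C2 and the two lam values (one alpha-cut's endpoints) are assumptions.
import math

def first_integral(y, dy, lam):
    s = math.sqrt(1 + dy * dy)
    return y + lam * s - dy * lam * dy / s   # equals y + lam/sqrt(1 + y'^2)

C1, C2 = 0.3, 0.0
for lam in (1.0, 1.5):                       # alpha-cut endpoints of lam~
    vals = []
    for k in range(-8, 9):
        t = k * math.pi / 20                 # stay away from t = +-pi/2
        y, dy = C1 - lam * math.cos(t), math.tan(t)
        vals.append(first_integral(y, dy, lam))
    print(lam, max(abs(v - C1) for v in vals) < 1e-9)
```

The residual is at floating-point level for every sampled t and both endpoint radii, which is the numerical counterpart of the α-cut family of circles derived above.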

10.5.5 Conclusion

The functional condition extremum problem discussed in this section contains more information than the classical one. We notice that it is difficult to find all the extremal curves for every α ∈ (0, 1]; in practical application, however, a solution is found for some α (or finitely many α) according to the requirement. It is worth mentioning that a more satisfactory result can be obtained by the 0.618 search method. The results discussed here can easily be extended to a condition extremum variation of ordinary or fuzzy-valued functionals with fuzzy functions ỹj (j = 1, ..., n).
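The "0.618 search" mentioned above is golden-section search. A minimal sketch; the objective g is an assumed stand-in for an α-parameterized quantity one might tune:

```python
# Sketch: golden-section (0.618) search minimizing a unimodal g on [a, b].
# The quadratic objective is an assumed placeholder.

def golden_section(g, a, b, tol=1e-8):
    r = (5 ** 0.5 - 1) / 2                   # 0.6180..., the golden ratio minus 1
    x1, x2 = b - r * (b - a), a + r * (b - a)
    f1, f2 = g(x1), g(x2)
    while b - a > tol:
        if f1 < f2:                          # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - r * (b - a)
            f1 = g(x1)
        else:                                # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + r * (b - a)
            f2 = g(x2)
    return (a + b) / 2

print(round(golden_section(lambda t: (t - 0.3) ** 2, 0.0, 1.0), 6))  # ~0.3
```

Each iteration reuses one of the two interior evaluations, so only one new g evaluation is needed per shrink, which is what makes the 0.618 scheme economical.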

10.6

Variation of Condition Extremum on Functional with Fuzzy Function

10.6.1 Introduction

By the definition of a "nest of sets", the condition extremum variation problem of ordinary and fuzzy functionals on ordinary functions is extended to the state where the function is fuzzy. In this section, we first discuss the condition extremum variation of a functional with fuzzy functions, and then extend it to the variation of a fuzzy-valued functional condition extremum with fuzzy functions.

10.6.2 Condition Extremum Variation of Functional with Fuzzy Function

Let F be an ordinary differentiable functional defined on [x0, x1] ⊆ R, let ỹj (j = 1, 2, ..., n) be fuzzy functions (i.e., convex fuzzy sets on R), and let the support of ỹj be s(ỹj) = {x ∈ R | μỹj(x) > 0} ⊆ [aj, bj]. By the extension principle, we have the following.

Definition 10.6.1. Let us call

Π̃ = ∫_{x0}^{x1} F(x, ỹ; ỹ')dx = ∫_{x0}^{x1} ⋃_{α∈[0,1]} αF(x, ȳα; ȳ'α)dx   (10.6.1)

an ordinary functional depending on n fuzzy functions, where ȳjα = [yjα−, yjα+], ỹ'jα = [min(y'jα−, y'jα+), max(y'jα−, y'jα+)], and ỹ'j, ȳ'jα, y'jα−, y'jα+ denote the fuzzy derivative, the interval-valued one and the interval-valued left and right ones of ỹj, respectively, and

F(x, ỹ, ỹ') = ⋃_{α∈[0,1]} αF(x, ȳα, ȳ'α),

its membership function being

μ_{F(x,ỹ,ỹ')}(γ) = ⋁_{F(x,y,y')=γ} {μỹ(y) ∧ μỹ'(y')},

where F(x, ȳα, ȳ'α) = {γ | ∃(x, y, y') ∈ X × Ȳα × Ȳ'α, F(x, y, y') = γ},

ỹ = (ỹ1, ỹ2, ..., ỹn),  ỹ' = (ỹ1', ỹ2', ..., ỹn');
ȳα = (ȳ1α, ȳ2α, ..., ȳnα),  ȳ'α = (ȳ'1α, ȳ'2α, ..., ȳ'nα).

From Definition 10.6.1 we know that an ordinary functional dependent on n fuzzy functions can be changed into an interval one for a certain determined α value. Therefore it is easy to find definitions of the functional variation according to Refs. [Cao91a], [Ail52] and [Cao91b].

Definition 10.6.2. Let us call

Fyj(x, ỹ, ỹ') = ⋃_{α∈[0,1]} αFyj(x, ȳα, ȳ'α),
Fy'j(x, ỹ, ỹ') = ⋃_{α∈[0,1]} αFy'j(x, ȳα, ȳ'α)  (j = 1, 2, ..., n)

the partial derivatives of the ordinary functional F at the fuzzy points (x, ỹ; ỹ') with respect to ỹj and ỹj' (j = 1, 2, ..., n), respectively, whose membership functions are respectively

μ_{Fyj(x,ỹ,ỹ')}(γ) = ⋁_{Fyj(x,y,y')=γ} [μỹ(y) ∧ μỹ'(y')] = ⋁_{Fyj(x,y1,...,yn;y1',...,yn')=γ} [(μỹ1(y1) ∧ ··· ∧ μỹn(yn)) ∧ (μỹ1'(y1') ∧ ··· ∧ μỹn'(yn'))],

μ_{Fy'j(x,ỹ,ỹ')}(γ) = ⋁_{Fy'j(x,y,y')=γ} [μỹ(y) ∧ μỹ'(y')] = ⋁_{Fy'j(x,y1,...,yn;y1',...,yn')=γ} [(μỹ1(y1) ∧ ··· ∧ μỹn(yn)) ∧ (μỹ1'(y1') ∧ ··· ∧ μỹn'(yn'))],

where x, γ ∈ R, and y = (y1, y2, ..., yn) and y' = (y1', y2', ..., yn') are real function vectors on the real region R.

Theorem 10.6.1. Suppose that the fuzzy functions ỹ1, ỹ2, ..., ỹn enable the ordinary functional (10.6.1) under the conditions

$$
\varphi_i(x, \tilde y) = 0 \quad (i = 1, 2, \ldots, m;\ m < n),
\qquad (10.6.2)
$$
and that the conditions (10.6.2) are independent, i.e., there is a nonvanishing $m$-order functional determinant with fuzzy functions
$$
\frac{D(\varphi_1(x,\tilde y), \varphi_2(x,\tilde y), \ldots, \varphi_m(x,\tilde y))}{D(\tilde y_1, \tilde y_2, \ldots, \tilde y_m)} \neq 0.
\qquad (10.6.3)
$$
Then properly chosen factors $k_i(x)$ and functions $\tilde y_j$ $(i = 1, 2, \ldots, m;\ j = 1, 2, \ldots, n)$ satisfy the Euler equations given by the ordinary functional with fuzzy functions
$$
\Pi^* = \int_{x_0}^{x_1} \Big[ F(x,\tilde y,\tilde y') + \sum_{i=1}^m k_i(x)\varphi_i(x,\tilde y) \Big] dx
      = \int_{x_0}^{x_1} F^*(x,\tilde y,\tilde y')\,dx,
\qquad (10.6.4)
$$
while the functions $k_i(x)$ and $\tilde y_j(x)$ are determined by the Euler equations with fuzzy functions
$$
F^*_{y_j}(x,\tilde y,\tilde y') - \frac{d}{dx} F^*_{y'_j}(x,\tilde y,\tilde y') = 0 \quad (j = 1, 2, \ldots, n)
\qquad (10.6.5)
$$
and by the equations with fuzzy functions
$$
\varphi_i(x,\tilde y) = 0 \quad (i = 1, 2, \ldots, m).
\qquad (10.6.6)
$$

If $\tilde y_j$ and $k_i(x)$ $(j = 1, 2, \ldots, n;\ i = 1, 2, \ldots, m)$ are regarded as fuzzy model variables of the functional $\Pi^*$, then (10.6.5) and (10.6.6) together are regarded as the Euler equations of $\Pi^*$ with fuzzy functions.

Proof: For an arbitrary $\alpha \in [0,1]$, the basic condition of an extremum is
$$
\delta\Pi^* = 0 \iff \bigcup_{\alpha\in[0,1]} \alpha\,\delta\bar\Pi^*_\alpha = 0
\iff \bigcup_{\alpha\in[0,1]} \alpha \int_{x_0}^{x_1} \sum_{j=1}^n
\Big[ F_{y_j}(x,\bar y_\alpha,\bar y'_\alpha)\,\delta y_j + F_{y'_j}(x,\bar y_\alpha,\bar y'_\alpha)\,\delta y'_j \Big] dx = 0.
$$
Integrating the second term in each bracket by parts, and using the definition of an interval value (or function) in Ref. [Cao93c] together with the basic extremum condition of Theorem 1.1 in Ref. [Cao91a], we obtain
$$
\bigcup_{\alpha\in[0,1]} \alpha \int_{x_0}^{x_1} \sum_{j=1}^n
\Big[ F_{y_j}(x,\bar y_\alpha,\bar y'_\alpha) - \frac{d}{dx} F_{y'_j}(x,\bar y_\alpha,\bar y'_\alpha) \Big] \delta y_j\,dx = 0,
\qquad (10.6.7)
$$
where $\bar y_\alpha = (\bar y_{1\alpha}, \bar y_{2\alpha}, \ldots, \bar y_{n\alpha})$ obeys the $m$ independent constraints
$$
\bigcup_{\alpha\in[0,1]} \alpha\,\varphi_i(x,\bar y_\alpha) = 0 \quad (i = 1, 2, \ldots, m).
$$

Taking the variation of these constraints, we find
$$
\bigcup_{\alpha\in[0,1]} \alpha \sum_{j=1}^n \frac{\partial \varphi_i(x,\bar y_\alpha)}{\partial y_j}\,\delta y_j = 0 \quad (i = 1, 2, \ldots, m),
$$
in which $n - m$ of the variations $\delta y_j$ are arbitrary, say $\delta y_{m+1}, \ldots, \delta y_n$. Multiplying by factors $k_i(x)$ and integrating, we get
$$
\int_{x_0}^{x_1} k_i(x) \Big[ \bigcup_{\alpha\in[0,1]} \alpha \sum_{j=1}^n \frac{\partial \varphi_i(x,\bar y_\alpha)}{\partial y_j}\,\delta y_j \Big] dx = 0 \quad (i = 1, 2, \ldots, m).
$$
Adding these, respectively, to Equation (10.6.7), which the admissible variations $\delta y_j$ satisfy, yields
$$
\bigcup_{\alpha\in[0,1]} \alpha \int_{x_0}^{x_1} \sum_{j=1}^n
\Big[ \frac{\partial F(x,\bar y_\alpha,\bar y'_\alpha)}{\partial y_j}
 + \sum_{i=1}^m k_i(x)\,\frac{\partial \varphi_i(x,\bar y_\alpha)}{\partial y_j}
 - \frac{d}{dx}\frac{\partial F(x,\bar y_\alpha,\bar y'_\alpha)}{\partial y'_j} \Big] \delta y_j\,dx = 0,
$$
which, on setting $F^*(x,\bar y_\alpha,\bar y'_\alpha) = F(x,\bar y_\alpha,\bar y'_\alpha) + \sum_{i=1}^m k_i(x)\varphi_i(x,\bar y_\alpha)$, becomes
$$
\bigcup_{\alpha\in[0,1]} \alpha \int_{x_0}^{x_1} \sum_{j=1}^n
\Big[ F^*_{y_j}(x,\bar y_\alpha,\bar y'_\alpha) - \frac{d}{dx} F^*_{y'_j}(x,\bar y_\alpha,\bar y'_\alpha) \Big] \delta y_j\,dx = 0,
$$
and then we change it into
$$
\bigcup_{\alpha\in[0,1]} \alpha \int_{x_0}^{x_1} \sum_{j=m+1}^n
\Big[ F^*_{y_j}(x,\bar y_\alpha,\bar y'_\alpha) - \frac{d}{dx} F^*_{y'_j}(x,\bar y_\alpha,\bar y'_\alpha) \Big] \delta y_j\,dx = 0.
\qquad (10.6.8)
$$
Again, because $\delta y_j$ $(j = m+1, m+2, \ldots, n)$ are arbitrary, we can make all of the bracketed expressions above vanish except one of them in turn. For every $\alpha \in [0,1]$, applying the basic Lemma I of the variation in Ref. [Cao91a], we have
$$
\bigcup_{\alpha\in[0,1]} \alpha \Big[ F^*_{y_j}(x,\bar y_\alpha,\bar y'_\alpha) - \frac{d}{dx} F^*_{y'_j}(x,\bar y_\alpha,\bar y'_\alpha) \Big] = 0
\quad (j = m+1, m+2, \ldots, n).
\qquad (10.6.9)
$$
Combining (10.6.8) and (10.6.9), the functions realizing the conditional extremum of the functional $\Pi^*$ and the factors $k_i(x)$ all satisfy Equations (10.6.5) and (10.6.6); hence Theorem 10.6.1 holds.

Theorem 10.6.2. If we change (10.6.2) in Theorem 10.6.1 into the differential equations with fuzzy functions $\varphi_i(x,\tilde y,\tilde y') = 0$ $(i = 1, 2, \ldots, m;\ m < n)$, while the other conditions remain unchanged, the conclusion is still true.
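For orientation, in the crisp limit — every $\alpha$-cut reduced to a point — Theorem 10.6.1 is the classical Lagrange rule for holonomic constraints. The following worked example (the cylinder constraint is an illustration added here, not taken from the text) shows the shape of equations (10.6.4)–(10.6.6):

```latex
% Crisp case of (10.6.4)--(10.6.6): arc length on the cylinder y^2 + z^2 = 1,
%   F(x, y, z; y', z') = \sqrt{1 + y'^2 + z'^2}, \quad \varphi = y^2 + z^2 - 1.
\[
  \Pi^* = \int_{x_0}^{x_1}\Bigl[\sqrt{1 + y'^2 + z'^2}
          + k(x)\bigl(y^2 + z^2 - 1\bigr)\Bigr]\,dx.
\]
% The Euler equations (10.6.5), one for y and one for z:
\[
  2k(x)\,y - \frac{d}{dx}\,\frac{y'}{\sqrt{1 + y'^2 + z'^2}} = 0, \qquad
  2k(x)\,z - \frac{d}{dx}\,\frac{z'}{\sqrt{1 + y'^2 + z'^2}} = 0,
\]
% which, together with (10.6.6) \varphi = 0, are solved by the helices
%   y = \cos(\omega x + c), \quad z = \sin(\omega x + c),
% with k(x) = -\omega^2 / (2\sqrt{1 + \omega^2}) constant.
```

In the fuzzy setting the same computation is carried out on the endpoint functions of each $\alpha$-cut, and the results are united over $\alpha \in [0,1]$.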

10.6.3 Variation of Fuzzy-Valued Functional Condition Extremum with Fuzzy Function

Definition 10.6.3. We call
$$
\tilde\Pi^* = \int_{x_0}^{x_1} \tilde F(x,\tilde y;\tilde y')\,dx
            = \bigcup_{\alpha\in[0,1]} \alpha \int_{x_0}^{x_1} \tilde F(x,\bar y_\alpha,\bar y'_\alpha)\,dx
            \in \mathcal F(\mathcal F(R))
\qquad (10.6.10)
$$
a fuzzy-valued functional depending on $n$ fuzzy functions, where $\bar y_\alpha$ and $\bar y'_\alpha$ are defined as in Definition 10.6.1, with
$$
\tilde F(x,\tilde y,\tilde y') = \bigcup_{\alpha\in[0,1]} \alpha \tilde F(x,\bar y_\alpha,\bar y'_\alpha),
$$

$$
\tilde F(x,\bar y_\alpha,\bar y'_\alpha) = \{\tilde\gamma \in \mathcal F(R) \mid \exists (x,y,y') \in X \times \bar Y_\alpha \times \bar Y'_\alpha,\ \tilde F^*(x,y,y') = \tilde\gamma\},
$$
and its membership function is
$$
\mu_{\tilde F(x,\tilde y,\tilde y')}(\tilde\gamma) = \bigvee_{\tilde F(x,y,y')=\tilde\gamma} \{\mu_{\tilde y}(y) \wedge \mu_{\tilde y'}(y')\}.
$$
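All of the membership formulas in this section are instances of the sup–min extension principle. The following sketch computes such a membership function numerically in the simplest real-valued case; the triangular memberships, the grid, the tolerance on the level set, and the choice $F(y, y') = y + y'$ are all illustrative assumptions, not from the text.

```python
import numpy as np

def tri(y, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((y - a) / (b - a), (c - y) / (c - b)), 0.0)

# Grid over which both arguments range; mu1, mu2 are the (illustrative)
# memberships of the fuzzy quantities y~ and y~'.
ys = np.linspace(0.0, 4.0, 401)
mu1 = tri(ys, 0.0, 1.0, 2.0)
mu2 = tri(ys, 1.0, 2.0, 3.0)

def mu_F(gamma, tol=0.01):
    """Sup-min extension: mu_F(gamma) = sup over {(y, y') : F(y, y') = gamma}
    of min(mu1(y), mu2(y')), for F(y, y') = y + y', discretized with a
    tolerance tol on the level set."""
    Y, YP = np.meshgrid(ys, ys, indexing="ij")   # Y[i, j] = ys[i], YP[i, j] = ys[j]
    M = np.minimum(mu1[:, None], mu2[None, :])   # min of the two memberships
    mask = np.abs(Y + YP - gamma) <= tol         # discretized level set F = gamma
    return float(M[mask].max()) if mask.any() else 0.0

# Sum of the triangular numbers (0, 1, 2) and (1, 2, 3) is (1, 3, 5):
print(mu_F(3.0))  # near 1.0 (the peak)
print(mu_F(2.0))  # near 0.5
```

For triangular operands the computed values match the known closed form for the sum of triangular fuzzy numbers, which makes this a convenient sanity check on the discretization.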

Definition 10.6.4. Let us call
$$
\tilde F_{y_j}(x,\tilde y;\tilde y') = \bigcup_{\alpha\in[0,1]} \alpha \tilde F^*_{y_j}(x,\bar y_\alpha;\bar y'_\alpha) \in \mathcal F(\mathcal F(R)),
$$
$$
\tilde F_{y'_j}(x,\tilde y;\tilde y') = \bigcup_{\alpha\in[0,1]} \alpha \tilde F^*_{y'_j}(x,\bar y_\alpha;\bar y'_\alpha) \in \mathcal F(\mathcal F(R)) \quad (j = 1, 2, \ldots, n)
$$
the partial derivatives of the fuzzy-valued functional $\tilde F$ at the fuzzy point $(x,\tilde y;\tilde y')$ with respect to $\tilde y_j$ and $\tilde y'_j$, respectively, where
$$
\tilde F^*_{y_j}(x,\bar y_\alpha;\bar y'_\alpha) = \{\tilde\gamma \mid \exists (x,y;y') \in X \times \bar Y_\alpha \times \bar Y'_\alpha,\ \tilde F^*_{y_j}(x,y;y') = \tilde\gamma\},
$$
$$
\tilde F^*_{y'_j}(x,\bar y_\alpha;\bar y'_\alpha) = \{\tilde\gamma \mid \exists (x,y;y') \in X \times \bar Y_\alpha \times \bar Y'_\alpha,\ \tilde F^*_{y'_j}(x,y;y') = \tilde\gamma\},
$$
and their membership functions are
$$
\mu_{\tilde F_{y_j}(x,\tilde y;\tilde y')}(\tilde\gamma) = \bigvee_{\tilde F_{y_j}(x,y;y')=\tilde\gamma} \big(\mu_{\tilde y}(y) \wedge \mu_{\tilde y'}(y')\big),
\qquad
\mu_{\tilde F_{y'_j}(x,\tilde y;\tilde y')}(\tilde\gamma) = \bigvee_{\tilde F_{y'_j}(x,y;y')=\tilde\gamma} \big(\mu_{\tilde y}(y) \wedge \mu_{\tilde y'}(y')\big).
$$

Theorem 10.6.3. Suppose that the fuzzy functions $\tilde y_j$ $(j = 1, 2, \ldots, n)$ make the fuzzy-valued functional (10.6.10) with fuzzy functions reach an extremum under the conditions
$$
\varphi_i(x,\tilde y) = 0 \quad (i = 1, 2, \ldots, m;\ m < n),
\qquad (10.6.11)
$$
and that the conditions (10.6.11) are independent, i.e., there is a nonvanishing $m$-order fuzzy-valued functional determinant with fuzzy functions
$$
\frac{D(\varphi_1(x,\tilde y), \varphi_2(x,\tilde y), \ldots, \varphi_m(x,\tilde y))}{D(\tilde y)} \neq 0.
\qquad (10.6.12)
$$
Then properly chosen factors $k_i(x)$ $(i = 1, 2, \ldots, m)$ and functions $\tilde y_j$ $(j = 1, 2, \ldots, n)$ satisfy the Euler equations obtained from the fuzzy functional with fuzzy functions
$$
\tilde\Pi^* = \int_{x_0}^{x_1} \Big[ \tilde F(x,\tilde y,\tilde y') + \sum_{i=1}^m k_i(x)\varphi_i(x,\tilde y) \Big] dx
           = \int_{x_0}^{x_1} \tilde F^*(x,\tilde y,\tilde y')\,dx,
$$
while the functions $k_i(x)$ and $\tilde y_j(x)$ are determined by the fuzzy-valued Euler equations
$$
\tilde F^*_{y_j}(x,\tilde y,\tilde y') - \frac{d}{dx}\tilde F^*_{y'_j}(x,\tilde y,\tilde y') = 0 \quad (j = 1, 2, \ldots, n)
\qquad (10.6.13)
$$
and by the fuzzy-valued equations
$$
\varphi_i(x,\tilde y) = 0 \quad (i = 1, 2, \ldots, m).
$$
If we regard $k_i(x)$ and $\tilde y_j$ $(i = 1, 2, \ldots, m;\ j = 1, 2, \ldots, n)$ as the model variables of the fuzzy-valued functional $\tilde\Pi^*$, we can write (10.6.13) as the Euler equations of the fuzzy-valued functional $\tilde\Pi^*$, where (10.6.12) means
$$
\bigcup_{\alpha\in[0,1]} \alpha\,
\frac{D(\bar\varphi_{1\alpha}(x,\bar y_\alpha), \bar\varphi_{2\alpha}(x,\bar y_\alpha), \ldots, \bar\varphi_{m\alpha}(x,\bar y_\alpha))}{D(\bar y_\alpha)} \neq 0.
$$

Theorem 10.6.4. If we change (10.6.11) in Theorem 10.6.3 into the fuzzy differential equations with fuzzy functions $\varphi_i(x,\tilde y;\tilde y') = 0$ $(i = 1, 2, \ldots, m;\ m < n)$, with the other conditions unchanged, the conclusion holds.

10.6.4 Conclusion

This section has advanced the condition extremum problem for functionals with fuzzy functions, ordinary as well as fuzzy-valued, and has obtained a method of solving it by the aid of variational methods. Such a model contains more information than a classical one and should be of extensive use in engineering fields; application examples remain to be completed by the reader.
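As a numerical companion to Definition 10.6.1: for each fixed $\alpha$ the fuzzy functional reduces to an interval, obtained here from the $\alpha$-cut endpoint functions. The particular fuzzy function, the integrand $F(x, y, y') = y^2 + y'^2$, and the trapezoid discretization are illustrative assumptions added here, not from the text (the monotonicity of $F$ in $y$ on the chosen domain is what lets the two endpoint functions alone determine the interval).

```python
import numpy as np

def F(x, y, yp):
    """Illustrative integrand, increasing in y for y >= 0."""
    return y**2 + yp**2

def trapz(f, x):
    """Plain trapezoid rule (avoids NumPy-version differences)."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

def alpha_cut_functional(alpha, n=1001, x0=0.0, x1=1.0):
    """Interval value [Pi_lo, Pi_hi] of the functional at level alpha for the
    illustrative fuzzy function with alpha-cut [x + 1 - s, x + 1 + s],
    s = 0.5 * (1 - alpha).  Both endpoint functions have slope 1, and F is
    increasing in y here, so the alpha-cut of the functional is
    [int F(x, y_lo, y_lo'), int F(x, y_hi, y_hi')]."""
    x = np.linspace(x0, x1, n)
    s = 0.5 * (1.0 - alpha)
    yp = np.ones_like(x)                       # derivative of both endpoints
    lo = trapz(F(x, x + 1.0 - s, yp), x)
    hi = trapz(F(x, x + 1.0 + s, yp), x)
    return lo, hi

# The alpha-cuts nest, and at alpha = 1 they collapse to the crisp value
# of the functional, int_0^1 ((x + 1)^2 + 1) dx = 10/3.
for a in (0.0, 0.5, 1.0):
    lo, hi = alpha_cut_functional(a)
    print(f"alpha = {a:.1f}: [{lo:.4f}, {hi:.4f}]")
```

The printed intervals shrink as $\alpha$ increases, which mirrors the nested representation $\tilde\Pi = \bigcup_\alpha \alpha \bar\Pi_\alpha$ used throughout the section.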

References

[Ail52] Ailisgerzi: Calculus of Variations. Soviet Union National Technology Theory Works Publishing House (1952) (Chinese translation)
[AG01] Arikan, F., Güngör, Z.: An application of fuzzy goal programming to a multiobjective project network problem. Fuzzy Sets and Systems 119, 49–58 (2001)
[AMA93] Aliev, P., Mamedova, G., Aliev, R.: Fuzzy Sets Theory and Its Application. Talriz University Press (1993)
[AP93] Admopoulos, G.I., Pappis, C.P.: Some results on the resolution of fuzzy relation equations. Fuzzy Sets and Systems 60, 83–88 (1993)
[Asa82] Asai, K. (writing), Zhao, R.H. (translation): An Introduction to the Theory of Fuzzy Systems. Peking Normal University Press, Peking (1982)
[Avr76] Avriel, M.: Nonlinear Programming: Analysis and Methods. Prentice Hall, Englewood Cliffs (1976)
[AW70] Avriel, M., Williams, A.C.: Complementary geometric programming. SIAM J. Appl. Math. 19, 125–141 (1970)
[BF98] Bourke, M.M., Fisher, D.G.: Solution algorithms for fuzzy relational equations with max-product composition. Fuzzy Sets and Systems 94, 61–69 (1998)
[Biw92] Biwal, M.P.: Fuzzy programming technique to solve multi-objective geometric programming problems. Fuzzy Sets and Systems 51, 67–71 (1992)
[BMS96] Burnwal, A.P., Mukherjee, S.N., Singh, D.: Fuzzy geometric programming with nonequivalent objectives. Ranchi University Mathematical J. 27, 53–58 (1996)
[BP76] Beightler, C.S., Phillips, D.T.: Applied Geometric Programming. John Wiley and Sons, New York (1976)
[BS79] Bazaraa, M.S., Shetty, C.M.: Nonlinear Programming—Theory and Algorithms. John Wiley and Sons, New York (1979)
[Cao87a] Cao, B.Y.: Solution and theory of question for a kind of fuzzy positive geometric program. In: Proc. of 2nd IFSA Congress, Tokyo, July 20-25, vol. I, pp. 205–208 (1987); also: J. of Changsha Norm. Univ. of Water Resources and Electric Power, Natural Sci. Ed. 2(4), 51–61 (1987)
[Cao87b] Cao, B.Y.: The theory and practice of solution for fuzzy relative equations in max-product. J. of Hunan Univ. of Science and Technology 3(2), 57–65 (1987)


[Cao89a] Cao, B.Y.: Study of fuzzy positive geometric programming dual form. In: Proc. 3rd IFSA Congress, Seattle, August 6-11, pp. 775–778 (1989) [Cao89b] Cao, B.Y.: Study on non-distinct self-regression forecast model. Kexue Tongbao 34(17), 1291–1294 (1989) [Cao89c] Cao, B.Y.: Study for a kind of regression forecasting model with fuzzy datums. J. of Mathematical Statistics and Applied Probability 4(2), 182– 189 (1989) [Cao90] Cao, B.Y.: Study on non-distinct self-regression forecast model. Chinese Sci. Bull. 35(13), 1057–1062 (1990) [Cao91a] Cao, B.Y.: Variation of interval-valued and fuzzy functional. In: Proc. of 4th IFSA Congress, Math., pp. 21–24 (1991) [Cao91b] Cao, B.Y.: Ordinary diﬀerential equations of interval-valued and fuzzyvalued functions. J. of Changsha Norm. Univ. of Water Resources and Electric Power 6(1), 26–38 (1991) [Cao91c] Cao, B.Y.: A method of fuzzy set for studying linear programming “Contrary Theory”. J. Hunan Educational Institute 9(2), 17–22 (1991) [Cao92a] Cao, B.Y.: Further study of posynomial geometric programming with fuzzy coeﬃcients. Mathematics Applicata 5(4), 119–120 (1992) [Cao92b] Cao, B.Y.: Another proof of fuzzy posynomial geometric programming dual theorem. BUSEFAL 66, 43–47 (1996) [Cao92c] Cao, B.Y.: Interval-valued and fuzzy convex function and convex functional research. J. of Fuzzy Systems and Math. (Special issue), 300–303 (1992) [Cao92d] Cao, B.Y. (ed.): Proceedings of the Results Congress on Fuzzy Sets and Systems. Hunan Science Technology Press, Changsha (1992) [Cao93a] Cao, B.Y.: Fuzzy geometric programming(I). Fuzzy Sets and Systems 53, 135–153 (1993) [Cao93b] Cao, B.Y.: Input-output mathematical model with T-fuzzy data. Fuzzy Sets and Systems 59, 15–23 (1993) [Cao93c] Cao, B.Y.: Extended fuzzy geometric programming. J. of Fuzzy Mathematics 2, 285–293 (1993) [Cao93d] Cao, B.Y.: Fuzzy strong dual results for fuzzy posynomial geometric programming. In: Proc.of 5th IFSA Congress, Seoul, July 4-9, pp. 
588–591 (1993) [Cao93e] Cao, B.Y.: Nonlinear regression forecasting model with T-fuzzy Data to be linearized. J. of Fuzzy Systems and Mathematics 7(2), 43–53 (1993) [Cao94a] Cao, B.Y.: Posynomial geometric programming with L-R fuzzy coeﬃcients. Fuzzy Sets and Systems 64, 267–276 (1994) [Cao94b] Cao, B.Y.: Lecture in Economic Mathematics—Linear Programming and Fuzzy Mathematics. Tiangjing Translating Press of Science and Technology, Tiangjing (1994) [Cao95a] Cao, B.Y.: The study of geometric programming with (·, c)-fuzzy parameters. J. of Changsha Univ. of Electric Power (Natural Sci. Ed.) 1, 15–21 (1995) [Cao95b] Cao, B.Y.: Fuzzy geometric programming optimum seeking of scheme for waste water disposal in power plant. In: Proc. of FUZZY-IEEE/IFES 1995, Yokohama, August 22-25, pp. 793–798 (1995) [Cao95c] Cao, B.Y.: Types of non-distinct multi-objective geometric programming. Hunan Annals of Mathematics 15(1), 99–106 (1995)


[Cao95d] Cao, B.Y.: Study of fuzzy positive geometric programming dual form. J. of Changsha Univ. of Electric Power (Natural Sci. Ed.) 10(4), 343–351 (1995) [Cao95e] Cao, B.Y.: Classiﬁcation of fuzzy posynomial geometric programming and corresponding class properties. J. of Fuzzy Systems and Mathematics 9(4), 60–64 (1995) [Cao96a] Cao, B.Y.: New model with T-fuzzy variable in linear programming. Fuzzy Sets and Systems 78, 289–292 (1996) [Cao96b] Cao, B.Y.: Fuzzy geometric programming (II)—Fuzzy strong dual results for fuzzy posynomial geometric programming. J. of Fuzzy Mathematics 4(1), 119–129 (1996) [Cao96c] Cao, B.Y.: Cluster and recognition model with T −fuzzy data. Mathematical Statistics and Applied Probability 11(4), 317–325 (1996) [Cao97a] Cao, B.Y.: Research for a geometric programming model with T-fuzzy variable. J. of Fuzzy Mathematics 5(3), 625–632 (1997) [Cao97b] Cao, B.Y.: Fuzzy geometric programming optimum seeking of scheme for waste-water disposal in power plant. Systems Engineering—Theory & Practice 5, 140–144 (1997) [Cao98] Cao, B.Y.: Further research of solution to fuzzy posynomial geometric programming. Academic Periodical Abstracts of China 4(12), 1435–1437 (1998); Also to see: Popular Works by Centuries’ World Celebrities. In: Mah, Z.X. (ed.) U. S. World Celebrity Books LLC, pp. 15–20. World Science Press, California (1998) [Cao99a] Cao, B.Y.: Variation of functional condition extremum with fuzzy variable. J. of Fuzzy Mathematics 7(3), 559–564 (1999) [Cao99b] Cao, B.Y.: Fuzzy geometric programming optimum seeking in power supply radius of transformer substation. In: 1999 IEEE Int. Fuzzy Systems Conference Proceedings, Korea, July 25-29, vol. 3, pp. III–1749–III–1753 (1999) [Cao00a] Cao, B.Y.: Research of posynomial geometric programming with ﬂat fuzzy coeﬃcients. J. of Shantou University (Natural Sci. Ed.) 15(1), 13–19 (2000) [Cao00b] Cao, B.Y.: Parameterized solution to a fractional geometric programming. In: Proc. 
of the sixth National Conference of Operations Research Society of China, pp. 362–366. Global-Link Publishing Company, Hong Kong (2000) [Cao01a] Cao, B.Y.: Primal algorithm of fuzzy posynomial geometric programming. In: Joint 9th IFSA World Congress and 20th NAFIPS International Conference Proceedings, Vancouver, July 25-28, pp. 31–34 (2001). Also to see: Direct algorithm of fuzzy posynomial geometric programming. Fuzzy Systems and Mathematics 15(4), 81–86 (2001) [Cao01b] Cao, B.Y.: Model of fuzzy geometric programming in economical power supply radius and optimum seeking method. Engineer Sciences 3, 52–55 (2001) [Cao01c] Cao, B.Y.: Application of geometric programming and fuzzy geometric one with fuzzy coeﬃcients in seeking power supply radius transformer substation. Systems Engineering—Theory & Practice 21(7), 92–95 (2001) [Cao01d] Cao, B.Y.: Extension posynomial geometric programming. J. of Guangdong University of Technology 18(1), 61–64 (2001)


[Cao01e] Cao, B.Y.: Variation of condition extremum in interval and fuzzy valued functional. Fuzzy Mathematics 9(4), 845–852 (2001) [Cao02a] Cao, B.Y.: Fuzzy Geometric Programming. Kluwer Academic Publishers, Dordrecht (2002) [Cao02b] Cao, B.Y.: Variation of interval-valued and fuzzy functional. Fuzzy Mathematics 10(4), 797–808 (2002) [Cao04] Cao, B.Y.: Antinomy in posynomial geometric programming. Advances in Systems Science and Applications 1, 7–12 (2004) [Cao05] Cao, B.Y.: Application of Fuzzy Mathematics and Systems. Science Press, Peking (2005) [Cao07a] Cao, B.Y.: Fuzzy reversed posynomial geometric programming and its dual form. In: Melin, P., Castillo, O., Aguilar, L.T., Kacprzyk, J., Pedrycz, W. (eds.) IFSA 2007. LNCS (LNAI), vol. 4529, pp. 553–562. Springer, Heidelberg (2007) [Cao07b] Cao, B.Y.: Pattern classiﬁcation model with T-fuzzy data. In: Advance in Soft Computing, pp. 793–802. Springer, Heidelberg (2007) [Cao08] Cao, B.Y.: Cluster model with T -fuzzy data. Fuzzy Optimization and Decision Making 7(4), 317–329 (2008) [Cao09] Cao, B.Y.: Convexity study of interval functions and functionals & fuzzy functions and functionals. Fuzzy Information and Engineering 1(4), 421– 434 (2009) [CDR87] Charnes, A., Duﬀuaa, S., Ryan, M.: The more-for-less paradox in linear programming. European Journal of Operational Research 31, 194–197 (1987) [Cen87] Cen, Y.T.: Newton-Leibniz formulas of interval-valued function and fuzzy-valued function. Fuzzy Mathematics (3-4), 13–18 (1987) [Cha83] Chants, S.: The use of parametric programming in fuzzy linear programming. Fuzzy Sets and Systems 11, 243–251 (1983) [Chen94] Chen, S.Y.: Theory and application system fuzzy decision. Dalian Technology Publishing House, Dalian (1994) [CK71] Charnes, A., Klingman, D.: The more-for-less paradox in the distribution model. Cahiers de Center d’Etudes Recharche Operationelle 13(1), 11–22 (1971) [CL91] Cao, B.Y., Liu, G.H.: A new fuzzy recognition model for children’s health growth. 
In: Cao, B.Y. (ed.) Proceedings of Result Congress on Fuzzy Sets and Systems, pp. 198–200. Hunan Press of Science and Technology, Changsha China (1991) [Dan63] Dantzing, G.B.: Linear Programming and Extensions. Princeton U.P., Princeton (1963) [Dia87] Diamond, P.: Fuzzy least squares. Inform. Sci. 46, 141–157 (1988). In: Proc. of IFSA Congress, vol. I, Tokyo, July 20-25, pp. 329–332 (1987) [DPe66] Duﬃn, R.J., Peterson, E.L.: Duality theory for geometric programming. SIAM J. Appl. Math. 14, 1307–1349 (1966) [DPe72] Duﬃn, R.J., Peterson, E.L.: Reversed geometric programming treated by harmonic means. Indiana Univ. Math. J. 22, 531–550 (1972) [DPe73] Duﬃn, R.J., Peterson, E.L.: Geometric programming with signomials. J. Optimization Theory Appl. 11, 3–35 (1973) [DPr78] Dubois, D., Prade, H.: Operations on fuzzy number. Int. J. Systems Sciences 9(6), 613–626 (1978)

[DPr80]
Dubois, D., Prade, H.: Fuzzy Sets and Systems—Theory and Applications. Academic Press, New York (1980) [DPZ67] Duﬃn, R.J., Peterson, E.L., Zener, C.: Geometric Programming: Theory and Applications. John Wiley and Sons, New York (1967) [Duf62a] Duﬃn, R.J.: Dual programs and minimum cost. SIAM J.Appl. Math. 10, 119–123 (1962) [Duf62b] Duﬃn, R.J.: Cost minimization problems treated by geometric means. Operations Res. 10, 668–675 (1962) [Duf70] Duﬃn, R.J.: Linearizing geometric programs. SIAM. Review 12, 211–227 (1970) [Duo44] Duoma, E.D.: Debt and national income. America Economic Review 10, 798–827 (1944) [Eck80] Ecker, J.G.: Geometric programming. SIAM Review 22(3), 338–362 (1980) [EK87] Eckerand, J.G., Kupferschmid, M.: An ellipsoid algorithm for non-linear programming. Mathematical Programming 27, 83–106 (1987) [Fang89] Fang, K.T.: Multivariate Statistical Analysis. East China Normal University Publisher, Shanghai (1989) [Fin77] Finke, G.: Auniﬁed approach to reshipment, overshipmemt and postoptimization problems. In: Proceedings of the 8th IFIP Conference on Optimization Techniques, part 2, pp. 201–208 (1977) [FL99] Fang, S.-C., Li, G.: Solving fuzzy relation equations with a linear objective function. Fuzzy Sets and Systems 103, 107–113 (1999) [Fu90] Fu, G.Y.: On optimal solution of fuzzy linear programming. Fuzzy Systems and Mathematics 4(1), 65–72 (1990) [GL70] Gitman, I., Levine, M.D.: An algorithm for detecting Unimodal fuzzy sets and its application as a clustering technique. IEEE Trans. on Comput. 19, 583–593 (1970) [Guj86] Gujarati, D., Hao, P., et al ( translate): The foundation econometrics. Science and Technology Literature Press Chong Qing Branch, Chong Qing (1986) [GV86] Goetschcl, R., Voxman, D.W.: Elementary fuzzy calculus. Fuzzy Sets and Systems 18, 31–42 (1986) [GZ83] Guan, M.G., Zheng, H.D.: Linear Programming. Shandong Science Technology Press, Jinan (1983) [Hei83] Heilpern, S.: Fuzzy mappings. 
Matematyka Stosowana XXII, 179–197 (1983) [Hei87] Heilpern, S.: Fuzzy equations. Fuzzy Mathematics 2, 77–84 (1987) [HK84] Higashi, M., Klir, G.J.: Resolution of ﬁnite fuzzy relation equations. Fuzzy Sets and Systems 13, 65–82 (1984) [ISTI82] Institute of Scientiﬁc and Technical Information. A new fuzzy recognization model for children’s health growth. Scientiﬁc and Technological Literature Publishing House, Peking (1982) [JM61] Minfu, J.: Variational method and its application (Iwanami Shoten). Science Technique Publisher of Shanghai, Shanghai (1961) [JS78] Jeﬀerson, T.R., Scott, C.H.: Avenues of geometric programming. New Zealand Operational Res. 6, 109–136 (1978) [Kel71] Klee, V.: What is a convex set? The American Mathematical Monthly 78(6), 616–631 (1971)


[Lao90] Lao, Q.T.: Fuzzy comprehensive evaluation to city environmental quality. Chinese Environmental Science 2 (1990)
[LC02] Lui, M.J., Cao, B.Y.: The research and expansion on optimal solution of fuzzy LP. In: Proc. of 1st Int. Conf. on FSKD, Singapore, vol. 2, pp. 539–543 (2002)
[LF99] Loetamonphong, J., Fang, S.C.: An efficient solution procedure for fuzzy relation equations with max-product composition. IEEE Trans. on Fuzzy Systems 7, 441–445 (1999)
[LF01a] Loetamonphong, J., Fang, S.Z.: Optimization of fuzzy relation equations with max-product composition. Fuzzy Sets and Systems 118, 509–517 (2001)
[LF01b] Loetamonphong, J., Fang, S.Z.: Solving nonlinear optimization problems with fuzzy relation equation constraints. Fuzzy Sets and Systems 119, 1–20 (2001)
[Lin85] Lin, E.W.: The Mathematical Method for Macroeconomics Model, pp. 78–94. Fujian Press, Fujian (1985)
[Lin86] Lin, Y.: On "Contrary Theory" in general linear programming. Chinese J. of Operations Research 5(1), 79–81 (1986)
[LiR02] Li, R.J.: Analysis of possibilistic linear programming based on comparison of fuzzy numbers—discussed with author. Fuzzy Sets and Systems 16(4), 107–109 (2002)
[LiuDa98] Liu, X.W., Da, Q.L.: The solution for fuzzy linear programming with constraint satisfactory function. Journal of Systems Engineering 13(3), 36–40 (1998)
[LiuH04] Liu, H.W.: Comparison of fuzzy numbers based on a fuzzy distance measure. Shandong University Transaction 39(2), 31–36 (2004)
[LiuS04] Liu, S.T.: Fuzzy geometric programming approach to a fuzzy machining economics model. International Journal of Production Research 42(16), 3253–3269 (2004)
[LiuT00] Liu, T.F.: The solution to the problem of parametric linear programming by lumped matrix. Journal of Electric Power 15(1), 22–25 (2000)
[LiuX01] Liu, X.W.: Measuring the satisfaction of constraints in fuzzy linear programming. Fuzzy Sets and Systems 122, 263–275 (2001)
[LL97] Liu, W.Q., Luo, C.Z.: A few notes on fuzzy linear programming with elastic constraints. Mathematica Applicata 10(2), 105–109 (1997)
[LL01] León, T., Liern, V.: A fuzzy method to repair infeasibility in linearly constrained problems. Fuzzy Sets and Systems 122, 237–243 (2001)
[Luo84a] Luo, C.Z.: The extension principle and fuzzy numbers (I). Fuzzy Mathematics 4(3), 109–116 (1984)
[Luo84b] Luo, C.Z.: The extension principle and fuzzy numbers (II). Fuzzy Mathematics 4(4), 105–114 (1984)
[Luo89] Luo, C.Z.: The Theory of Fuzzy Sets. The Publishing House of Beijing Normal University, Peking (1989)
[LW92] Lu, M.G., Wu, W.M.: Interval value and derivate of fuzzy-valued function. J. Fuzzy Systems and Mathematics 6(Special issue), 182–184 (1992)
[LZ98] Liu, B.D., Zhao, R.Q.: Stochastic Programming and Fuzzy Programming. Tsinghua University Press, Peking (1998)
[Man79] Mangasarian, O.L.: Uniqueness of solution in linear programming. Linear Algebra and Its Applications 25, 151–162 (1979)


[MRM05] Mandal, N.K., Roy, T.K., Maiti, M.: Multi-objective fuzzy inventory model with three constraints: a geometric programming approach. Fuzzy Sets and Systems 150(1), 87–106 (2005) [MTM00] Maleki, H.R., Tata, M., Mashinchi, M.: Linear programming with fuzzy variable. Fuzzy Sets and Systems 109, 21–33 (2000) [NS76] Negoita, C.V., Sularia, M.: Fuzzy linear programming and tolerance in planning. Econ. Comp. Econ. Cybernetic Stud. Res. 1, 613–615 (1976) [Obr86] Obrad, M.M.: Mathematical dynamic model for long-term distribution system planning. IEEE Transaction on Power Systems 1, 34–41 (1986) [Pan87] Pan, R.F.: A simple solution for fuzzy linear programming. Journal of Xiangtan University (Natural Science) 3, 29–36 (1987) [PB71] Pascual, L.D., Ben-Israel, A.: Vector-valued criteria in geometric programming. Operations Res. 19, 98–104 (1971) [Pet78] Peterson, E.L.: Geometric programming. SIAM Review 18, 1–51 (1978) [Pet01] Peterson, E.L.: The Origins of Geometric Programming. Annals of Operations Research 105, 15–19 (2001) [Pre81] Prevot, M.: Algorithm for the solution of fuzzy relations equations. Fuzzy Sets and Systems 5, 319–322 (1981) [Rij74] Rijckaert, M.J.: Survey of programs in geometric programming. C.C.E.R.O. 16, 369–382 (1974) [Rou91] Roubens, M.: Inequality constraints between fuzzy numbers and their use in mathematical programming. In: Slowinski, R., Teghem, J. (eds.) Stochastic Versus Fuzzy Approaches to Multiobjective Mathematical Programming Under Uncertainty, pp. 321–330. Kluwer Academic Publishers, Dordrecht (1991) [RT91] Roubens, M., Teghem, J.J.: Comparison of methodologies for fuzzy and stochastic multi-objective programming. Fuzzy Sets and Systems 42, 119– 132 (1991) [Rus69] Ruspini, E.H.: A new approach to clustering. Information Control 15, 22–32 (1969) [SahK06] Sahidul, I., Kumar, R.T.: A new fuzzy multi-objective programming: entropy based geometric programming and its application of transportation problems. 
European Journal of Operational Research 173(2), 387–404 (2006) [San76] Sanchez, E.: Resolution of composite fuzzy relation equations. Inform. and Control 30, 38–48 (1976) [Shi81] Shi, G.Y.: Algorithm and convergence about a general geometric programming. J. Dalian Institute Technol. 20, 19–25 (1981) [Shim73] Shimura, M.: Fuzzy sets concept in rank-ordering objects. J. Math. Anal. & Appl. 43, 717–733 (1973) [Sol56] Solow, M.R.: Contribution economy increase theories. Economics Quarterly, 65–94 (1956) [TA84] Tanaka, H., Asai, K.: Fuzzy linear programming problem with fuzzy number. Fuzzy Sets and Systems 13, 1–10 (1984) [TD02] Tran, L., Duckstein, L.: Comparison of fuzzy numbers using a fuzzy distance measure. Fuzzy Set and Systems 130, 331–341 (2002) [TOA73] Tanaka, H., Okuda, T., Asai, K.: On fuzzy mathematical programming. J. Cybern. 3(4), 37–46 (1973)


[TUA80] Tanaka, H., Uejima, S., Asai, K.: Fuzzy linear regression model. In: Int. Congress on Applied Systems Research and Cybernetics, Acapulco, Mexico, December 1980, pp. 12–16 (1980) [TUA82] Tanaka, H., Uejima, S., Asai, K.: Linear regression analysis with fuzzy model. IEEE Transactions on Systems, Man and Cybernetics, SMC 12(6), 903–907 (1982) [Ver84] Verdegay, J.L.: A dual approach to solve the fuzzy linear programming problem. Fuzzy Sets and Systems 14, 131–140 (1984) [Ver90] Verma, R.K.: Fuzzy geometric programming with several objective functions. Fuzzy Sets and Systems 35, 115–120 (1990) [Wang83] Wang, P.Z.: Fuzzy Sets and Its Application. Shanghai Scientiﬁc and Technical Publishers, Shanghai (1983) [Wangx02] Wang, X.P.: Conditions under which a fuzzy relational equation has minimal solutions in a complete Brouwerian lattice. Advances in Mathematics 31(3), 220–228 (2002) [Wat87] Watada, J., Chen, G.F. (translated), et al.: Theories and Its Application of Fuzzy Multianalysis, pp. 6–17. Chongqing Branch of Scientiﬁc and Technological Literature Publishing House, Chongqing (1987) [WB67] Wilde, D.J., Beightler, C.S.: Foundations of Optimization, pp. 76–109. Prentice Hall Co. Inc., Englewood Cliﬀs (1967) [Wei87] Wei, H.P.: Application of Optimization Techniques. Tongji University Press (1987) [WL85] Wang, D.M., Lou, C.Z.: Extension of fuzzy diﬀerenial calculus. Fuzzy Mathematics 1, 75–80 (1985) [WX99] Xing, W.X., Xie, J.X.: Modern Optimization for Calculation Method. Tsinghua Press, Peking (1999) [WY82] Wu, F., Yuan, Y.Y.: Geometric programming. Math. in Practice and Theory (1-2), 46–63, 61–72, 60–80, 68–81 (1982) [WZSL91] Wang, P.Z., Zhang, D.Z., Sanchez, E., Lee, E.S.: Latticized linear programming and fuzzy relation inequalities. J. Math. Anal. and Appl. 159, 72–87 (1991) [Xu98] Xu, R.N.: The linear regression models with fuzzy regression parameters. 
Fuzzy Systems and Mathematics 2 (1998) [XuL01] Xu, R.N., Li, C.L.: Multidimensional least-squares ﬁtting with a fuzzy model. Fuzzy Sets and Systems 119, 215–233 (2001) [XuR89] Xu, R.Z.: Optimal Methods of Mathematics Programming in Economic Management. Sichuan Science Technique Publisher House, Chengdu (1989) [Yage80] Yager, R.: Fuzzy sets, Probabilities and Decision. J. Cybern. 10, 1–18 (1980) [YC05] Yang, J.H., Cao, B.Y.: Geometric Programming with Fuzzy Relation Equation Constraints. In: 2005 IEEE International Fuzzy Systems Conference Proceedings, Reno, Nevada, May 22-25, pp. 557–560 (2005) [YC06] Yang, J.H., Cao, B.Y.: The origin and its application of geometric programming. In: Proc.of the Eighth National Conference of Operations Research Society of China, pp. 358–363. Global-Link Publishing Company, Hong Kong, ISBN: 962-8286- 09-9 [YCL95] Yu, Y.Y., Cao, B.Y., Li, X.R.: The application of geometric and fuzzy geometric programming in option of economic supply radius of transformer substations. In: Zhou, K.Q. (ed.) Proceedings of Int. Conference on Inform and Knowledge Engineering, August 21-25, pp. 245–249. Dalian Maritime University Publishing House, Dalian (1995)


[YGR99] Yen, K.K., Ghoshrya, S., Roig, G.: A linear regression model using triangular fuzzy number coeﬃcient. Fuzzy Sets and Systems 106, 167–177 (1999) [YL99] Yang, M.S., Liu, H.H.: Fuzzy clustering procedures for conical fuzzy vector data. Fuzzy Sets and Systems 106, 189–200 (1999) [Ying92] Ying, L.J.: The study of fuzzy information—processing methods and their applications in fault diagnosis. Changsha Railway University Doctorate Dissertation (1992) [YJ91] Yang, C.E., Jin, D.Y.: The more-for-less paradox in linear programming and nonlinear programming. Systems Engineering 9(2), 62–68 (1991) [YWY91] Yu, Y.Y., Wang, X.Z., Yang, Y.W.: Optimizational selection for substation feel economic radius. J. of Changsha Normal University of Water Resources and Electric Power 6(1), 118–124 (1991) [YZY87] Yang, Q.Y., Zhang, Z.L., Yang, M.Z.: Transformer substation capacity dynamic optimizing in city power network planning. In: Proc. of Colleges and Univ. Speciality of Power System and Its Automation. The Third a Academic Annual Conference, pp. 7–11. Xian Jiao Tong Univ. Press, Xian (1987) [Zad65a] Zadeh, L.A.: Fuzzy sets. Inform. and Control 8, 338–353 (1965) [Zad65b] Zadeh, L.A.: Fuzzy sets and systems. In: Proc. of the Symposium on Systems Theory. Polytechnic Press of Polytechnic Institute of Brooklyn, NY (1965) [Zad75a] Zadeh, L.A.: Calculus of fuzzy restriction. In: Zadeh, L.A., Fu, K.S., Tanaka, K., Shimura, M. (eds.) Fuzzy Sets and Their Applications to Cognitive and Decision Processes. Academy Press, New York (1975) [Zad75b] Zadeh, L.A.: The concept of a linguistic variable and its application to approximate reasoning, Part 1. Information Science 8, 199–249 (1975) [Zad76] Zadeh, L.A.: Fuzzy Sets and their application to pattern classiﬁcation and cluster analysis. Multivariate Analysis, 113–161 (1976) [Zad82] Zadeh, L.A., Chen, G.Q. (Translation): Fuzzy Sets, Language variable and Fuzzy Logics. 
Science in china Press, Peking (1982) [Zen61] Zener, C.: A mathematical aid in optimizing engineering design. Proc. Nat. Acad. Sci. USA 47, 537–539 (1961) [Zhang97] Zhang, Z.K.: The application of fuzzy mathematics in roboticized technology. Tsinghua University Press, Beijing (1997) [Zhe92] Zheng, X.C.: Prospect forecast in the amount of long distance telephone in China. Forecast, 3 (1992) [Zim76] Zimmermann, H.-J.: Description and optimization of fuzzy systems. Internat. J. General Systems 2, 209–215 (1976) [Zim78] Zimmermann, H.-J.: Fuzzy programming and linear programming with several objective functions. Fuzzy Sets and Systems 1, 45–55 (1978) [Zim91] Zimmermann, H.-J.: Fuzzy Sets Theory and Its Application. Kluwer Academic Publishers, Boston (1991) [Zim00] Zimmermann, H.-J.: Fuzzy Sets and Operations Research for Decision Support. Beijing Normal University Press, Beijing (2000) [ZW91] Zhang, W.X., Wang, G.J.: Introduction to Fuzzy Sets. Xi’an Jiaotong University Publishers, Xi’an (1991)

Index

A Absorptive law 8 admissible 196 aﬃne function 66 Algorithm 42 α-cut 1 α-level 110 Analytic Hierarchy Process 131 analytic solution 247 Antinomy 165 antitone 294 approximate indicator 41 appropriate linear transformation 86 approximate quantities 309 approximately fuzzy optimal 182 approximately less than or equal to 20 arithmetic operations 31 Associative law 8 axis 34 B basic feasible solution 165 basic fuzzy variation lemma 336 basic interval variation lemma 331 basic solution 103 binary 5 Boolean matrix 117 bounded 12 Business Management 214

C canonical types 203 capacity 36 Cartesian product 12 Cauchy sequence 65 Center distance 58 characteristic interval 155 characteristic matrix 282 Ciric-type compacted mapping 319 classiﬁcation 117 close cone 65 close interval 293 closed region 304 closure 16 Cluster Analysis 117 Cobb-Douglas function 324 collection sleeve 22 combination law 15 Commutative 8 comparison 61 compatible 68 compatibleness 68 complement operation 7 Complete set 3 components being positive 244 composition operator 280 compound 18 comprehensive decision 264 compromise solution 252 concomitant chain 260 condition extremum 327 cone index 67


confidence level 9 conservative path sets 282 consistent 199 constraint 18 constraint complete lattice 205 constraint inequality 221 consume coefficient 95 Continuity 141 continuous and strictly monotone concave function 339 convex combination 12 Convex function 196 convex functional 327 Convex fuzzy set 11 convex normal 50 convex region 339 correlated variable 33 Cramer rule 59 crisp model 45 critical value 43 curve 49

D
Data Mining 94 degree-of-difficulty 223 decomposition theorem 1 degenerate 203 degeneration optimum solution 167 degree of possibility 21 degree of the fitting 52 Delphi method 131 derivable 294 derivatives 294 determinant 351 diagnostics 43 differentiable 36 Differential Equations 293 direct algorithm 201 direct image 314 direct proportion 88 discretion 258 discrimination matrix 291 distance 44 distance closure 122 distinct 3 distinguishing 132 distributing function 78 distribution 69 Distributive law 8 dual algorithm 221 dual method 240 Dual law 8 dual simplex method 54 dual theorem 168 duality 185 dual variables 198 dynamic clustering 119

E
economical benefits 261 elements 4 elimination principle 260 embodied 6 empty set 3 energy resource 12 environment protection 286 equality 5 equivalence 25 Error Analysis 61 essentially smaller than or equal to 139 Establishment 35 excluded-middle law 8 exhaustive 67 existence theorem 296 Expansion 1 expectation value 194 expected level 231 expert evaluation method 250 experts' experience 36 explored 266 Exponential Model 44 exponential regularity 44 expression 70 extension principle 18 extensively 309 extract 46 extreme value 331 Euclidean space 10 Euler equation 351

F
feasible direction 209 feasible domain 274 feasible solution 100 feature 129

filtration rule 283 finite field 261 fitting 36 fixed point 141 five-type fuzzy numbers 1 flat fuzzy coefficients 33 flat fuzzy numbers 29 flexible index 20 flow 269 fluctuating variables 188 Forecast 33 freely 20 function equation 296 Functional 327 functional condition extremum 327 Fuzzification 87 fuzziness 95 fuzzy coefficient 159 fuzzy convex 28 fuzzy convex function 207 fuzzy convex programming 196 fuzzy distance 63 fuzzy dual programming 214 Fuzzy Duoma debt model 310 fuzzy environment 33 fuzzy exponent matrix 200 fuzzy extended matrix 256 fuzzy function 1 fuzzy function determinant 358 fuzzy geometric inequality 197 fuzzy Lagrange function 204 Fuzzy linear programming 102 fuzzy linear regression 35 fuzzy matrices 97 fuzzy maximum 142 Fuzzy Numbers Distance 171 fuzzy objective 20 fuzzy optimal solution 154 Fuzzy posynomial geometric programming 193 Fuzzy Quantities 1 fuzzy regression 35 fuzzy relation 1 fuzzy relation equations 262 fuzzy relation geometric programming 255 fuzzy reversed posynomial geometric programming 199 fuzzy satisfactory 188 fuzzy sets 1 fuzzy set chain 259 fuzzy set-value mapping 293 fuzzy solution 161 fuzzy subset 1 fuzzy super-consistent 242 fuzzy time series analysis 78 fuzzy trajectory 311 fuzzy-valued 18 fuzzy-valued differential 300 fuzzy-valued fixed solution 301 fuzzy-valued functional 327 fuzzy variable 30 fuzzy vector 66

G
generalized 94 genetic algorithm 292 geometric 49 geometric inequality 197 geometric programming 193 goal 47 global fuzzy optimum solution 205 global minimum 225 grade of membership 320 gradient 203 greatest solution 179 group 78

H
Hausdorff measure 304 height 136 homogeneous system 311

I
Idempotent law 8 image 18 imitation 63 immemorial rat 135 implicit function 296 independent variable 33 index 7 ineffective forecast 84 infimum 20 infinite 2 infinite logic sum 2 influence factor 266

initial value 296 Input-Output 95 integrable 172 integral calculus 318 intersection 5 interval 1 interval derivative 294 interval differential equation 295 interval nest 306 Interval number 27 interval same order 294 Interval-valued 293 inverse 14

J
J-compatible 68 J-dimensional 197 J-effective 99 J-nonseparable 99 J-optimal solution 237 judgement method 54

K
k-th approaching 328 Kuhn-Tucker condition 207

L
Lagrange 193 Lagrange multiplier 243 Lattice 205 Lattice Linear Programming 273 least solution 256 least square method 36 Left distance 58 Leontief synthesized model 98 level set 231 limit point 226 line segments 42 linear independence 207 linear programming 33 linearized 46 Lipschitz condition 298 local fuzzy optimum 205 local minimum solution 225 lower and upper bounds 225 L-R fuzzy number 30

M
main diagonal 90 mapping 18 mathematical induction 335 maximizing 139 maximum membership degree principle 128 maximum shortage 261 maximum solution 101 Max method 275 mean value 30 measure 36 membership degree 1 membership function 1 Min-max methods 274 minimization 68 minimum element 5 mixed 67 model 33 model-variable 328 modernization 280 monomial 197 monotone increasing function 284 most satisfactory solution 222 multi-index 67 multi-objective 187 multiplication 110 multi-valued 318 mutually exclusive 67

N
n-dimensional 11 necessary and sufficient condition 64 neighborhood 296 nest of set 356 net 96 Non-distinct 316 nonempty 259 nonfuzzification 44 Nonfuzzify 69 nonfuzzy 242 Nonlinear regression 85 non-linearity type 91 non-negative elements 66 non-tentative-value 234 non-degeneration 207 normal form 154 normal fuzzy set 10 norm time sequence 83 Normal type 3

O
objective function 46 operation properties 1 operator 1 optimal 1 optimization designment 280 optimal estimated values 58 optimal level 146 optimal matrix 155 Optimal Models 1 optimal solution 38 optimal value 47 optimize 287 optimization method 54 Optimizing 255 order relation 5 ordinary 2 ordinary differential equations 293 orthogonality 64

P
parallelogram rule 64 parameter geometric programming 250 parameter variable 61 partial derivative 300 partial difference rate 88 Partial large-scale 3 Partial minitype 3 partial order set 281 pattern recognition 127 perfect forecast 39 piecewise continuous 30 platform 50 platform index 63 point range 227 Pole difference regularity 118 polynomial 234 posynomial 193 precision 43 primal function 294 primal problem 159 properties 1 pseudoconvex 207

Q
quadruple 30 quantities 309 quasiconvex 207 quasi-minimum solution 259

R
randomness 50 ranking 174 real bounded function 20 real line 316 reduced chain 259 reference function 30 reflection 5 regression forecast model 33 regular 63 relation 1 Representation Theorem 25 Restore original law 8 reverse posynomial 234 Right distance 58 right triangles 37

S
satisfactory solution 188 self-dependent sequence 42 Self-regression 33 set value mapping 23 shape 35 shortcut method 119 single objective 189 single-valued 318 simplex method 54 simulated annealing algorithm 190 simultaneous equations 48 singular solution 312 soft constraint 168 Solow model 316 solution sets 143 spread 57 standard deviation standardization 118 strong α-cut set 9 strong dual theory 213 strong mapping 311 structure of solution 280 sub-cluster 317 subscript set 203 subsets 1 subspace 64 super-consistence 213 support 9 supremum 20 symmetry 5 synthesizes operator 255 system cluster method 127

T
tangentially optimal 206 Taylor theorem 339 technological economic analysis 280 Telephone Amount 44 term 60 T-fuzzy data 85 T-fuzzy number 30 T-fuzzy variable 30 theoretical framework 182 theory 1 Three Mainstream Theorems 21 threshold value 36 topology induced 66 totally degenerate 203 traditional operation rules 309 trajectory 311 transformer substation 365 transitive closure 17 transitive relation 120 transplant 182 trapeziform fuzzy number 173 trapezoidal fuzzy variable 237 triangle distributing 110 triangular fuzzy numbers 57 Tucker 207 type (·, c) 29

U
unbiased estimations 61 unconstrained minimization 77 union 5 unique 64 uniqueness 142 uniqueness theorem 298 unit 60 united table 125 unreduced 203 upper bound 194 upper-right-corner 200

V
vagueness 38 variation 130 variationableness 328 vector space 35 (∨, ·) composition 262 (∨, ·) Fuzzy Relative Equation 261 (∨, ∧) Fuzzy Relative Equation 255

W
weight 83 weighted related-coefficient 90 width 36

Z
Zadeh fuzzy sets 33 zero fuzzy number 108 zero relation 14 Zimmermann algorithm 143

0
0-1 law 8 0.618 method 47


Vol. 237. Kofi Kissi Dompere Fuzziness and Approximate Reasoning, 2009 ISBN 978-3-540-88086-8

Vol. 238. Atanu Sengupta, Tapan Kumar Pal Fuzzy Preference Ordering of Interval Numbers in Decision Problems, 2009 ISBN 978-3-540-89914-3

Vol. 239. Baoding Liu Theory and Practice of Uncertain Programming, 2009 ISBN 978-3-540-89483-4

Vol. 246. Piedad Brox, Iluminada Castillo, Santiago Sánchez Solano Fuzzy Logic-Based Algorithms for Video De-Interlacing, 2010 ISBN 978-3-642-10694-1

Vol. 247. Michael Glykas Fuzzy Cognitive Maps, 2010 ISBN 978-3-642-03219-6

Vol. 248. Bing-Yuan Cao Optimal Models and Methods with Fuzzy Quantities, 2010 ISBN 978-3-642-10710-8

Bing-Yuan Cao

Optimal Models and Methods with Fuzzy Quantities


Author: Bing-Yuan Cao, Guangzhou University, Guangzhou Higher Education Mega Center, No. 230 Waihuan Xi Road, Guangzhou, People's Republic of China

ISBN 978-3-642-10710-8

e-ISBN 978-3-642-10712-2

DOI 10.1007/978-3-642-10712-2 Studies in Fuzziness and Soft Computing

ISSN 1434-9922

Library of Congress Control Number: 2009939991
© 2010 Springer-Verlag Berlin Heidelberg
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.
The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
Typeset & Cover Design: Scientific Publishing Services Pvt. Ltd., Chennai, India.
Printed on acid-free paper
springer.com

To my wife Wang Pei-hua

Preface

I first submitted a paper titled "Fuzzy Geometric Programming" to the Proceedings of the Second International Fuzzy Systems Association (IFSA) Congress (Tokyo, 1987); after rigorous selection, it was later published in Fuzzy Sets and Systems. In 1989, I proposed the "Study on non-distinct self-regression forecast model", built on Zadeh's theory of fuzzy sets. Since then, I have pursued research on optimal models with fuzzy information quantities. In this book, I take models with fuzzy quantities, including fuzzy coefficients and fuzzy variables, as the main line, introducing the modeling of various problems together with practical examples, completely and clearly, across several fields. Many of my papers are indexed in SCI (Science Citation Index), EI (Engineering Index) and ISTP (Index to Scientific & Technical Proceedings), and have been reviewed or abstracted in Mathematical Reviews and Zentralblatt Math.

The research and writing have been funded three times by the National Natural Science Foundation of China (1997, 2003, 2008). They have also been supported by the Science and Technology Project of Hunan Province, the Science Research Foundation of Changsha Electric Power University, the "211 Project" Foundation and the Li Ka-Shing Science Development Foundation of Shantou University, and the Scientific Research Foundation of Guangzhou University. The research project successively won the Third Award of Guangdong Science and Technology from the Government of Guangdong Province (2005) and its Third Award of Excellent Papers in Natural Science (2003).

The book contains ten chapters, as follows: Chapter 1. Prepare Knowledge; Chapter 2. Regression and Self-regression Models with Fuzzy Coefficients; Chapter 3. Regression and Self-regression Models with Fuzzy Variables; Chapter 4. Fuzzy Input-Output Model; Chapter 5. Fuzzy Cluster Analysis and Fuzzy Recognition; Chapter 6. Fuzzy Linear Programming; Chapter 7. Fuzzy Geometric Programming; Chapter 8. Fuzzy Relative Equation and Its Optimizing; Chapter 9. Interval and Fuzzy Differential Equations; Chapter 10. Interval and Fuzzy Functional and Their Variation.

The book can be used as teaching material or a reference for undergraduates, master's students and doctoral students in applied mathematics, computer science, artificial intelligence, fuzzy information processing and automation, operations research, system science and engineering, and the like; it also serves as a reference for researchers in these fields, particularly in soft science. I appreciate the support of the National Natural Science Foundation of China (No. 70771030, No. 70271047 and No. 79670012) and the Science Foundation of Guangzhou University. Some material in this book has been taken from my doctoral student J.H. Yang (Sections 8.5 and 8.6) and my master's students Z.X. Zhu (Section 2.5), M.J. Liu (Section 6.2), Y.F. Tan (Section 6.3), Q.P. Gu (Section 6.7) and X.G. Zhou (Section 8.4), for whose contributions I am grateful. I also thank master's students L.Q. Chen, H.Q. Qiu, X.J. Cui, Y.C. Hou, R.J. Hu, J. Tan, Y.F. Zhang, X.W. Zhou and G.C. Zhu for their careful proofreading, Associate Professor P.H. Wang for examining and revising the final proof, and F.H. Cao for the typesetting. My heartfelt thanks also go to Springer for providing me a fine platform, and to the editors for their hard work. October 1, 2006

Bing-yuan Cao Guangzhou

Contents

1 Prepare Knowledge 1
 1.1 Fuzzy Sets 1
 1.2 Operations in Fuzzy Sets 5
 1.3 α-Cut and Convex Fuzzy Sets 9
 1.4 Fuzzy Relativity and Operator 12
 1.5 Fuzzy Functions 18
 1.6 Three Mainstream Theorems in Fuzzy Mathematics 21
 1.7 Five-Type Fuzzy Numbers 27

2 Regression and Self-regression Models with Fuzzy Coefficients 33
 2.1 Regression Model with Fuzzy Coefficients 33
 2.2 Self-regression Models with (·, c)-Fuzzy Coefficients 39
 2.3 Exponential Model with Fuzzy Parameters 44
 2.4 Regression and Self-regression Models with Flat Fuzzy Coefficients 50
 2.5 Linear Regression with Triangular Fuzzy Numbers 57

3 Regression and Self-regression Models with Fuzzy Variables 63
 3.1 Regression Model with T-Fuzzy Variables 63
 3.2 Self-regression Model with T-Fuzzy Variables 71
 3.3 Regression Model with (·, c) Fuzzy Variables 76
 3.4 Self-regression with (·, c) Fuzzy Variables 78
 3.5 Nonlinear Regression with T-Fuzzy Data to be Linearized 85
 3.6 Regression and Self-regression Models with Flat Fuzzy Variables 91

4 Fuzzy Input-Output Model 95
 4.1 Fuzzy Input-Output Mathematical Model 95
 4.2 Input-Output Model with T-Fuzzy Data 98
 4.3 Input-Output Model with Triangular Fuzzy Data 108

5 Fuzzy Cluster Analysis and Fuzzy Recognition 117
 5.1 Fuzzy Cluster Analysis 117
 5.2 Fuzzy Recognition 127

6 Fuzzy Linear Programming 139
 6.1 Fuzzy Linear Programming and Its Algorithm 139
 6.2 Expansion on Optimal Solution of Fuzzy Linear Programming 146
 6.3 Discussion of Optimal Solution to Fuzzy Constraints Linear Programming 154
 6.4 Relation between Fuzzy Linear Programming and Its Dual One 159
 6.5 Antinomy in Fuzzy Linear Programming 165
 6.6 Fuzzy Linear Programming Based on Fuzzy Numbers Distance 171
 6.7 Linear Programming with L-R Coefficients 177
 6.8 Linear Programming Model with T-Fuzzy Variables 182
 6.9 Multi-Objective Linear Programming with T-Fuzzy Variables 187

7 Fuzzy Geometric Programming 193
 7.1 Introduction of Fuzzy Geometric Programming 193
 7.2 Lagrange Problem in Fuzzy Geometric Programming 201
 7.3 Antinomy in Fuzzy Geometric Programming 206
 7.4 Geometric Programming with Fuzzy Coefficients 214
 7.5 Geometric Programming with (α, c) Coefficients 218
 7.6 Geometric Programming with L-R Coefficients 224
 7.7 Geometric Programming with Flat Coefficients 229
 7.8 Geometric Programming with Fuzzy Variables 235
 7.9 Dual Method of Geometric Programming with Fuzzy Variables 240
 7.10 Multi-Objective Geometric Programming with T-Fuzzy Variables 248

8 Fuzzy Relative Equation and Its Optimizing 255
 8.1 (∨, ∧) Fuzzy Relative Equation 255
 8.2 (∨, ·) Fuzzy Relative Equation 261
 8.3 Algorithm Application and Comparing in (∨, ·) Relative Equations 266
 8.4 Lattice Linear Programming with (∨, ·) Operator 273
 8.5 Fuzzy Relation Geometric Programming with (∨, ∧) Operator 280
 8.6 Fuzzy Relation Geometric Programming with (∨, ·) Operator 286

9 Interval and Fuzzy Differential Equations 293
 9.1 Interval Ordinary Differential Equations 293
 9.2 Fuzzy-Valued Ordinary Differential Equations 299
 9.3 Ordinary Differential Equations with Fuzzy Variables 306
 9.4 Fuzzy Duoma Debt Model 309
 9.5 Model for Fuzzy Solow Growth in Economics 315
 9.6 Application of Fuzzy Economic Model 320

10 Interval and Fuzzy Functional and Their Variation 327
 10.1 Interval Functional and Its Variation 327
 10.2 Fuzzy-Valued Functional and Its Variation 332
 10.3 Convex Interval and Fuzzy Function and Functional 338
 10.4 Convex Fuzzy-Valued Function and Functional 345
 10.5 Variation of Condition Extremum on Interval and Fuzzy-Valued Functional 350
 10.6 Variation of Condition Extremum on Functional with Fuzzy Function 356

References 363
Index 373

1 Prepare Knowledge

This chapter presents the definitions of fuzzy sets and their operation properties, α-cut sets and convex fuzzy sets. Building on a detailed treatment of fuzzy relations, it introduces the fuzzy operators relevant to this chapter and describes fuzzy functions. It then covers the three mainstream theorems of fuzzy mathematics: the extension principle, the decomposition theorem and the representation theorem. Finally, it examines the five types of fuzzy numbers and their operations.

1.1 Fuzzy Sets

To introduce the concept of fuzzy sets, we first describe a foundational notion of fuzzy set theory: the universe. The universe is the set of all objects under discussion; it is usually denoted by capital letters X, Y, Z, and so on. Fuzzy sets differ from classical sets, which have a strict membership criterion; we give their mathematical description as follows.

Definition 1.1.1. A fuzzy subset Ã of a set X is a set

Ã = {(μ_Ã(x), x) | x ∈ X},

where μ_Ã(x) is a real number in the interval [0, 1], called the membership degree of the point x in Ã. The function

μ_Ã : X → [0, 1], x ↦ μ_Ã(x)

is called the membership function of the fuzzy set Ã. Fuzzy subsets are also often simply called fuzzy sets.

From Definition 1.1.1 of fuzzy sets, the following conclusions are evident:

B.-Y. Cao: Optimal Models & Meth. with Fuzzy Quantities, STUDFUZZ 248, pp. 1–32. © Springer-Verlag Berlin Heidelberg 2010, springerlink.com


(1) The concept of a fuzzy set extends that of a classical set. If F(X) denotes the collection of all fuzzy sets on X, i.e.,

F(X) = {Ã | Ã is a fuzzy set on X},

then P(X) ⊂ F(X), where P(X) is the power set of X, i.e.,

P(X) = {A | A is a classical subset of X}.

That is, if the membership function of a fuzzy set Ã takes only the two values 0 and 1, then Ã reduces to a classical subset of X.

(2) The concept of the membership function extends that of the characteristic function. When A ∈ P(X) is an ordinary subset of X, its characteristic function is

χ_A(x) = 1 if x ∈ A (the membership degree of x in A is 1), and χ_A(x) = 0 if x ∉ A (the membership degree of x in A is 0).

For a fuzzy set, the nearer the membership degree μ_Ã(x) is to 1, the greater the degree to which x belongs to Ã; conversely, the nearer μ_Ã(x) is to 0, the smaller that degree. If the value range of μ_Ã(x) is {0, 1}, then the fuzzy set Ã is an ordinary set A, and the membership function μ_Ã(x) is a characteristic function χ_A(x).

(3) The fuzzy sets in F(X) \ P(X) are called true fuzzy sets.

Several representation methods for fuzzy sets follow.

1° Zadeh's representation. If X is a finite set, say X = {x1, x2, ..., xn}, the fuzzy set is written

Ã = μ_Ã(x1)/x1 + μ_Ã(x2)/x2 + ... + μ_Ã(xn)/xn = Σ_{i=1}^{n} μ_Ã(xi)/xi.

Here the symbol "Σ" is no longer a numerical sum, and μ_Ã(xi)/xi is not a fraction; the notation only records that the membership degree of the point xi with respect to the fuzzy set Ã is μ_Ã(xi). If X is an infinite set, a fuzzy set on X is written

Ã = ∫_{x∈X} μ_Ã(x)/x.

Similarly, the sign "∫" is no longer an integral; it only denotes an infinite logical sum, and the meaning of μ_Ã(x)/x is the same as in the finite case.

2° When the universe X is a finite set, the fuzzy set of Definition 1.1.1 can be represented by ordered pairs:

Ã = {(μ_Ã(x1), x1), (μ_Ã(x2), x2), ..., (μ_Ã(xn), xn)}.

3° When the universe X is a finite set, the fuzzy set can be represented in vector form:

Ã = (μ_Ã(x1), μ_Ã(x2), ..., μ_Ã(xn)).
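These three representations are easy to mirror in code. The sketch below is illustrative only (not from the book); the universe {x1, ..., x4} and its membership degrees are hypothetical values chosen for demonstration.

```python
# A finite fuzzy set stored as a mapping from elements of the universe
# to membership degrees in [0, 1]; the degrees here are hypothetical.
A = {"x1": 1.0, "x2": 0.7, "x3": 0.3, "x4": 0.0}

# Zadeh representation: a formal sum mu/x1 + mu/x2 + ... (not arithmetic).
zadeh = " + ".join(f"{mu}/{x}" for x, mu in A.items())

# Ordered-pair representation: {(mu(x1), x1), ...}.
pairs = [(mu, x) for x, mu in A.items()]

# Vector representation: (mu(x1), ..., mu(xn)).
vector = tuple(A.values())

print(zadeh)   # 1.0/x1 + 0.7/x2 + 0.3/x3 + 0.0/x4
print(vector)  # (1.0, 0.7, 0.3, 0.0)
```

All three forms carry exactly the same information; only the notation differs, which is the point made in 1°-3° above.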


Remarkably, X and φ can also be regarded as fuzzy sets on X: if the membership function satisfies μ_Ã(x) ≡ 1, then Ã is the complete set X; if μ_Ã(x) ≡ 0, then Ã is the empty set φ. An element with membership degree 1 definitely belongs to the fuzzy set, and an element with membership degree 0 definitely does not; elements whose membership values lie strictly in (0, 1) belong to it only to a degree. X and φ themselves are distinct (crisp) subsets among the fuzzy sets.

When a fuzzy object is described by a fuzzy set, the choice of its membership function is the key. Three basic membership functions are:

1. Partial minitype (falling type, Figure 1.1.1):

μ_Ã(x) = [1 + (a(x − c))^b]^(−1) when x ≥ c, and μ_Ã(x) = 1 when x < c,

where c ∈ X is an arbitrary point and a > 0, b > 0 are two parameters.

2. Partial large-scale (rising type, Figure 1.1.2):

μ_Ã(x) = 0 when x ≤ c, and μ_Ã(x) = [1 + (a(x − c))^(−b)]^(−1) when x > c,

where c ∈ X is an arbitrary point and a > 0, b > 0 are two parameters.

3. Normal type (middle type, Figure 1.1.3):

μ_Ã(x) = e^(−((x − a)/b)²),

where a ∈ X is an arbitrary value and b > 0 is a parameter.

(Figures 1.1.1–1.1.3 plot μ_Ã(x) against x for these three functions: Figure 1.1.1 falls from 1 after x = c, Figure 1.1.2 rises toward 1 after x = c, and Figure 1.1.3 is a bell curve centered at x = a, with ticks at a − b, a, a + b.)
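The three basic shapes can be written directly as functions. This is an illustrative sketch; the function names and the sample parameter values a, b, c are not from the book.

```python
import math

def partial_minitype(x, a, b, c):
    """Smaller-type: membership 1 below c, decaying for x >= c."""
    return 1.0 if x < c else 1.0 / (1.0 + (a * (x - c)) ** b)

def partial_large_scale(x, a, b, c):
    """Larger-type: membership 0 up to c, rising toward 1 for x > c."""
    return 0.0 if x <= c else 1.0 / (1.0 + (a * (x - c)) ** (-b))

def normal_type(x, a, b):
    """Middle-type: bell-shaped curve centred at a with width parameter b."""
    return math.exp(-(((x - a) / b) ** 2))

# The two one-sided types are dual, and normal_type peaks at x = a.
print(partial_minitype(0, 1, 2, 5))     # 1.0
print(partial_large_scale(0, 1, 2, 5))  # 0.0
print(normal_type(5, 5, 2))             # 1.0
```

With a = 1, b = 2, c = 5, both one-sided functions take the value 0.5 at x = 6, illustrating how the parameters control where the transition happens.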

Obviously, Types 1 and 2 are dual to each other, and their meaning is clear at a glance. Type 3 describes a fuzzy set Ã of numbers "sufficiently near to a"; by definition its membership function is of the centered type.

Example 1.1.1: Let X ⊆ R⁺ (R⁺ is the set of non-negative real numbers). Take age as the universe, with X = [0, 100]. Zadeh defined "oldness" Õ and "youth" Ỹ by the membership functions

μ_Õ(x) = 0 for 0 ≤ x ≤ 50; μ_Õ(x) = [1 + ((x − 50)/5)^(−2)]^(−1) for 50 < x ≤ 100; μ_Õ(x) = 1 for x > 100,

and

μ_Ỹ(x) = 1 for 0 ≤ x ≤ 25; μ_Ỹ(x) = [1 + ((x − 25)/5)²]^(−1) for 25 < x ≤ 100; μ_Ỹ(x) = 0 for x > 100.

If a person's age is 28, his membership degrees in "youth" and "oldness" are, respectively,

[1 + ((28 − 25)/5)²]^(−1) = 0.735 and 0.

If a person's age is 55, his membership degrees in "youth" and "oldness" are, respectively,

[1 + ((55 − 25)/5)²]^(−1) = 0.027 and [1 + ((55 − 50)/5)^(−2)]^(−1) = 0.5.

According to the three types of membership functions above, we can, to a certain extent, calculate the membership degree of a concrete object x. When high accuracy is not required, the membership degree can, for simplicity, be determined by direct evaluation.

Example 1.1.2: Suppose X = {1, 2, 3, 4}, and these four elements constitute a set of small numbers. Obviously, element 1 is a typically small number; it should belong to this set with membership degree 1. Element 4 is not a small number; it should not belong to this set, so its membership degree is 0. Element 2 is "rather small", say "eighty percent small", with membership degree 0.8; element 3 is at best "barely small", say "twenty percent small", with membership degree 0.2. Writing the fuzzy set of small numbers as Ã, its elements are still 1, 2, 3, 4, and each is assigned a membership degree in Ã:

Zadeh's representation: Ã = 1/1 + 0.8/2 + 0.2/3 + 0/4.
An ordered-pair representation: Ã = {(1, 1), (0.8, 2), (0.2, 3), (0, 4)}.
The vector method simply gives Ã = (1, 0.8, 0.2, 0).
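The degrees quoted in Example 1.1.1 can be checked mechanically. The sketch below transcribes the two membership functions as written above; only the rounding convention is added.

```python
def mu_old(x):
    """Zadeh's "oldness" membership function on the age universe [0, 100]."""
    if x <= 50:
        return 0.0
    if x <= 100:
        return 1.0 / (1.0 + ((x - 50) / 5.0) ** -2)
    return 1.0

def mu_young(x):
    """Zadeh's "youth" membership function on the age universe [0, 100]."""
    if x <= 25:
        return 1.0
    if x <= 100:
        return 1.0 / (1.0 + ((x - 25) / 5.0) ** 2)
    return 0.0

# A 28-year-old: youth 0.735, oldness 0; a 55-year-old: youth 0.027, oldness 0.5.
print(round(mu_young(28), 3), mu_old(28))  # 0.735 0.0
print(round(mu_young(55), 3), mu_old(55))  # 0.027 0.5
```

The values agree with the hand computation in the example to three decimal places.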

1.2 Operations in Fuzzy Sets

The value range of the membership function of a fuzzy set, corresponding to the characteristic function of a crisp subset, is extended from {0, 1} to [0, 1]. In analogy with the way characteristic functions express relations between distinct subsets, we have the following.

Definition 1.2.1. Let Ã, B̃ ∈ F(X). For arbitrary x ∈ X, we define:

Inclusion: Ã ⊆ B̃ ⟺ μ_Ã(x) ≤ μ_B̃(x).
Equality: Ã = B̃ ⟺ μ_Ã(x) = μ_B̃(x).

From Definition 1.2.1, Ã = B̃ ⟺ Ã ⊆ B̃ and B̃ ⊆ Ã. That is to say, the inclusion relation is a binary relation on the fuzzy power set F(X) with the following properties:

(1) Ã ⊆ Ã (reflexivity).
(2) Ã ⊆ B̃ and B̃ ⊆ Ã ⟹ Ã = B̃ (antisymmetry).
(3) Ã ⊆ B̃ and B̃ ⊆ C̃ ⟹ Ã ⊆ C̃ (transitivity).

Since the relation "⊆" constitutes an order relation on F(X), (F(X), ⊆) is a partially ordered set. Moreover, since φ, X ∈ F(X), F(X) contains the maximum element X and the minimum element φ.

Definition 1.2.2. Let Ã, B̃ ∈ F(X). Then we define:

Union: Ã ∪ B̃, whose membership function is μ_{Ã∪B̃}(x) = max{μ_Ã(x), μ_B̃(x)}.
Intersection: Ã ∩ B̃, whose membership function is μ_{Ã∩B̃}(x) = min{μ_Ã(x), μ_B̃(x)}.
Complement: Ã^c, whose membership function is μ_{Ã^c}(x) = 1 − μ_Ã(x).

Their graphs are shown in Figures 1.2.1–1.2.3.

Figure 1.2.1 Ã ∪ B̃
Figure 1.2.2 Ã ∩ B̃
Figure 1.2.3 Ã^c

Comparing with the union, intersection and complement operations on ordinary sets, we discover immediately that the fuzzy-set operations are exact parallels of the ordinary set operations.


1 Prepare Knowledge

Ã ∪ B̃ is the smallest fuzzy set containing both Ã and B̃, and Ã ∩ B̃ is the largest fuzzy set contained in both Ã and B̃. According to the two cases where the universe X is finite or infinite, the union, intersection and complement of fuzzy sets Ã and B̃ can be calculated as follows.

(1) The universe is X = {x1, x2, ···, xn}, and Ã = Σ_{i=1}^n μ_Ã(xi)/xi, B̃ = Σ_{i=1}^n μ_B̃(xi)/xi; then

Ã ∪ B̃ = Σ_{i=1}^n (μ_Ã(xi) ∨ μ_B̃(xi))/xi,
Ã ∩ B̃ = Σ_{i=1}^n (μ_Ã(xi) ∧ μ_B̃(xi))/xi,
Ã^c = Σ_{i=1}^n (1 − μ_Ã(xi))/xi.

(2) X is an infinite set, and Ã = ∫_{x∈X} μ_Ã(x)/x, B̃ = ∫_{x∈X} μ_B̃(x)/x; then

Ã ∪ B̃ = ∫_{x∈X} (μ_Ã(x) ∨ μ_B̃(x))/x,
Ã ∩ B̃ = ∫_{x∈X} (μ_Ã(x) ∧ μ_B̃(x))/x,
Ã^c = ∫_{x∈X} (1 − μ_Ã(x))/x.

Example 1.2.1: Suppose X = {x1, x2, x3, x4}; Ã = 1/x1 + 0.8/x2 + 0.2/x3 + 0/x4; B̃ = 0/x1 + 0.2/x2 + 0.8/x3 + 0/x4. Then

Ã ∪ B̃ = (1∨0)/x1 + (0.8∨0.2)/x2 + (0.2∨0.8)/x3 + (0∨0)/x4 = 1/x1 + 0.8/x2 + 0.8/x3 + 0/x4,
Ã ∩ B̃ = (1∧0)/x1 + (0.8∧0.2)/x2 + (0.2∧0.8)/x3 + (0∧0)/x4 = 0/x1 + 0.2/x2 + 0.2/x3 + 0/x4,
Ã^c = (1−1)/x1 + (1−0.8)/x2 + (1−0.2)/x3 + (1−0)/x4 = 0/x1 + 0.2/x2 + 0.8/x3 + 1/x4.
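The pointwise max/min/complement formulas above translate directly into code. A minimal sketch reproducing Example 1.2.1 (the function names are our own):

```python
# Pointwise fuzzy-set operations on a finite universe, as in Example 1.2.1.
A = {"x1": 1.0, "x2": 0.8, "x3": 0.2, "x4": 0.0}
B = {"x1": 0.0, "x2": 0.2, "x3": 0.8, "x4": 0.0}

def f_union(A, B):        return {x: max(A[x], B[x]) for x in A}
def f_intersection(A, B): return {x: min(A[x], B[x]) for x in A}
def f_complement(A):      return {x: round(1 - A[x], 10) for x in A}

print(f_union(A, B))         # {'x1': 1.0, 'x2': 0.8, 'x3': 0.8, 'x4': 0.0}
print(f_intersection(A, B))  # {'x1': 0.0, 'x2': 0.2, 'x3': 0.2, 'x4': 0.0}
print(f_complement(A))       # {'x1': 0.0, 'x2': 0.2, 'x3': 0.8, 'x4': 1.0}
```

The `round(…, 10)` only absorbs binary floating-point noise in `1 - 0.8`; the mathematics is exactly the 1 − μ of the definition.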


Example 1.2.2: Compute the union, intersection and complement of the fuzzy sets Ỹ and Õ in Example 1.1.1 of Section 1.1.

From the definition, we have

Ỹ ∪ Õ = ∫_{x∈X} (μ_Ỹ(x) ∨ μ_Õ(x))/x
= ∫_{0≤x≤25} 1/x + ∫_{25<x≤x*} [1 + ((x−25)/5)²]^{−1}/x + ∫_{x*<x≤100} [1 + ((x−50)/5)^{−2}]^{−1}/x + ∫_{x>100} 1/x,

where x* ≈ 51;

Ỹ ∩ Õ = ∫_{50<x≤x*} [1 + ((x−50)/5)^{−2}]^{−1}/x + ∫_{x*<x≤100} [1 + ((x−25)/5)²]^{−1}/x;

Õ^c = ∫_{0≤x≤50} 1/x + ∫_{50<x≤100} (1 − [1 + ((x−50)/5)^{−2}]^{−1})/x;

Ỹ^c = ∫_{25<x≤100} (1 − [1 + ((x−25)/5)²]^{−1})/x + ∫_{x>100} 1/x.

The union, intersection and complement operations on fuzzy sets can be extended to arbitrary families of fuzzy sets.

Definition 1.2.3. Suppose T is an index set and Ã_t ∈ F(X) (t ∈ T). Then

μ_{∪_{t∈T} Ã_t}(x) = sup_{t∈T} μ_{Ã_t}(x), x ∈ X,
μ_{∩_{t∈T} Ã_t}(x) = inf_{t∈T} μ_{Ã_t}(x), x ∈ X.

Obviously, ∪_{t∈T} Ã_t, ∩_{t∈T} Ã_t ∈ F(X).

In particular, when T is a finite set,

μ_{∪_{t∈T} Ã_t}(x) = max_{t∈T} μ_{Ã_t}(x), x ∈ X,
μ_{∩_{t∈T} Ã_t}(x) = min_{t∈T} μ_{Ã_t}(x), x ∈ X.

Theorem 1.2.1. (F(X), ∪, ∩, c) satisfies the following properties:
(1) Idempotent law: Ã ∪ Ã = Ã, Ã ∩ Ã = Ã.
(2) Commutative law: Ã ∪ B̃ = B̃ ∪ Ã, Ã ∩ B̃ = B̃ ∩ Ã.
(3) Associative law: (Ã ∪ B̃) ∪ C̃ = Ã ∪ (B̃ ∪ C̃), (Ã ∩ B̃) ∩ C̃ = Ã ∩ (B̃ ∩ C̃).
(4) Absorptive law: (Ã ∪ B̃) ∩ Ã = Ã, (Ã ∩ B̃) ∪ Ã = Ã.
(5) Distributive law: (Ã ∪ B̃) ∩ C̃ = (Ã ∩ C̃) ∪ (B̃ ∩ C̃), (Ã ∩ B̃) ∪ C̃ = (Ã ∪ C̃) ∩ (B̃ ∪ C̃).
(6) 0-1 law: Ã ∪ X = X, Ã ∪ φ = Ã; Ã ∩ X = Ã, Ã ∩ φ = φ.
(7) Restoration (involution) law: (Ã^c)^c = Ã.
(8) Dual (De Morgan) law: (Ã ∪ B̃)^c = Ã^c ∩ B̃^c, (Ã ∩ B̃)^c = Ã^c ∪ B̃^c.

Proof: We prove Property (8) as an example; the rest can be verified directly. For ∀x ∈ X, we have

μ_{(Ã∪B̃)^c}(x) = 1 − μ_{Ã∪B̃}(x)
= 1 − max{μ_Ã(x), μ_B̃(x)}
= min{1 − μ_Ã(x), 1 − μ_B̃(x)}
= min{μ_{Ã^c}(x), μ_{B̃^c}(x)} = μ_{Ã^c∩B̃^c}(x).

Hence (Ã ∪ B̃)^c = Ã^c ∩ B̃^c. Similarly, we can prove (Ã ∩ B̃)^c = Ã^c ∪ B̃^c.

It should be pointed out that the operations on fuzzy sets no longer satisfy the excluded-middle law. Namely, in general we have

Ã ∪ Ã^c ≠ X, Ã ∩ Ã^c ≠ φ.

But we do have

Ã ∪ Ã^c ⊇ ½, Ã ∩ Ã^c ⊆ ½,

where ½ denotes the fuzzy set with constant membership degree 1/2.

Example 1.2.3: If μ_Ã(x) ≡ 0.5, then μ_{Ã^c}(x) ≡ 0.5, so that

μ_{Ã∪Ã^c}(x) = max{0.5, 0.5} = 0.5 ≠ 1,
μ_{Ã∩Ã^c}(x) = min{0.5, 0.5} = 0.5 ≠ 0.
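Theorem 1.2.1(8) and Example 1.2.3 can be checked numerically. A sketch on a one-point universe (the data are our own):

```python
# De Morgan duality holds for fuzzy sets, while the excluded-middle law fails.
comp  = lambda S: {k: 1.0 - v for k, v in S.items()}
union = lambda S, T: {k: max(S[k], T[k]) for k in S}
inter = lambda S, T: {k: min(S[k], T[k]) for k in S}

A = {"x": 0.5}
B = {"x": 0.25}

# Dual law (8): (A ∪ B)^c = A^c ∩ B^c
print(comp(union(A, B)) == inter(comp(A), comp(B)))  # True
# Excluded middle fails (Example 1.2.3): both stay at 0.5
print(union(A, comp(A)))  # {'x': 0.5}, not the whole set X
print(inter(A, comp(A)))  # {'x': 0.5}, not the empty set
```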

1.3 α-Cut and Convex Fuzzy Sets

1.3.1 α-Cut Set

Definition 1.3.1. Suppose Ã ∈ F(X). For ∀α ∈ [0, 1], we write

(Ã)_α = A_α = {x | μ_Ã(x) ≥ α};

then A_α is said to be an α-cut set of the fuzzy set Ã, and α a confidence level. Again, we write

(Ã)_α· = A_α· = {x | μ_Ã(x) > α},

and A_α· is called a strong α-cut set of the fuzzy set Ã. In particular,

(Ã)_0· = A_0· = {x | μ_Ã(x) > 0} = supp Ã,

and A_0· is called the support of the fuzzy set Ã. If the support supp Ã = {x} is a single-point set, then Ã is called a fuzzy point on X.

Intuitively, the meaning of A_α is that if the membership degree of x in Ã attains or exceeds the level α, then x is a qualified member at that level; all of these qualified members constitute A_α, which is a classical subset of X.

Example 1.3.1: Suppose Ã = 0.5/x1 + 0.7/x2 + 0/x3 + 0.9/x4 + 1/x5; then
at α = 1: A_1 = {x5}, A_1· = φ;
at α = 0.9: A_0.9 = {x4, x5}, A_0.9· = {x5};
at α = 0.7: A_0.7 = {x2, x4, x5}, A_0.7· = {x4, x5};
at α = 0.5: A_0.5 = {x1, x2, x4, x5}, A_0.5· = {x2, x4, x5};
at α = 0: A_0 = X, A_0· = {x1, x2, x4, x5}.

α-cut sets have the following properties.

Property 1.3.1
(1) (Ã ∪ B̃)_α = A_α ∪ B_α, (Ã ∩ B̃)_α = A_α ∩ B_α.
(2) (Ã ∪ B̃)_α· = A_α· ∪ B_α·, (Ã ∩ B̃)_α· = A_α· ∩ B_α·.
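On a finite universe, α-cuts are one-line set comprehensions. A sketch using the memberships of Example 1.3.1 (as recovered from the cut sets listed there):

```python
# α-cuts and strong α-cuts (Definition 1.3.1) for the fuzzy set
# of Example 1.3.1, stored as element -> membership degree.
A = {"x1": 0.5, "x2": 0.7, "x3": 0.0, "x4": 0.9, "x5": 1.0}

def cut(A, alpha):         # A_α = {x | μ(x) ≥ α}
    return {x for x, mu in A.items() if mu >= alpha}

def strong_cut(A, alpha):  # A_α· = {x | μ(x) > α}
    return {x for x, mu in A.items() if mu > alpha}

print(sorted(cut(A, 0.7)))         # ['x2', 'x4', 'x5']
print(sorted(strong_cut(A, 0.9)))  # ['x5']
print(sorted(strong_cut(A, 0.0)))  # support: ['x1', 'x2', 'x4', 'x5']
```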


Proof: We prove only the first formula in (1).

(Ã ∪ B̃)_α = {x | μ_{Ã∪B̃}(x) ≥ α} = {x | μ_Ã(x) ∨ μ_B̃(x) ≥ α}
= {x | μ_Ã(x) ≥ α} ∪ {x | μ_B̃(x) ≥ α} = A_α ∪ B_α.

The proofs of the other formulas are the same.

Property 1.3.2
(1) (∪_{t∈T} Ã_t)_α ⊇ ∪_{t∈T} (Ã_t)_α, (∩_{t∈T} Ã_t)_α = ∩_{t∈T} (Ã_t)_α, (Ã^c)_α = (A_{(1−α)·})^c.
(2) (∪_{t∈T} Ã_t)_α· = ∪_{t∈T} (Ã_t)_α·, (∩_{t∈T} Ã_t)_α· ⊆ ∩_{t∈T} (Ã_t)_α·, (Ã^c)_α· = (A_{1−α})^c.

The proof of Property 1.3.2 is easy, and readers can supply it themselves. It must be pointed out that the first formula in (1) and the second formula in (2) cannot be strengthened to equations.

Example 1.3.2: Let μ_{Ã_n}(x) ≡ (1/2)(1 − 1/n), n = 1, 2, ···. Then μ_{∪_{n=1}^∞ Ã_n}(x) ≡ 1/2, so that (∪_{n=1}^∞ Ã_n)_{0.5} = X. But (Ã_n)_{0.5} = φ (n ≥ 1), so that ∪_{n=1}^∞ (Ã_n)_{0.5} = φ. Therefore

(∪_{n=1}^∞ Ã_n)_{0.5} ≠ ∪_{n=1}^∞ (Ã_n)_{0.5}.

Similarly, let μ_{B̃_n}(x) ≡ (1/2)(1 + 1/n), n = 1, 2, ···. We can prove

(∩_{n=1}^∞ B̃_n)_{0.5·} ≠ ∩_{n=1}^∞ (B̃_n)_{0.5·}.

Definition 1.3.2. Suppose Ã ∈ F(X). The set Ker Ã = {x | μ_Ã(x) = 1} is called the kernel of the fuzzy set Ã, and Ã is a normal fuzzy set if Ker Ã ≠ φ.

1.3.2 Convex Fuzzy Sets

Recall first the concept of ordinary convex sets. Suppose X = R^n is the n-dimensional Euclidean space and A is an ordinary subset of X. If ∀x1, x2 ∈ A and ∀λ ∈ [0, 1] we have

λx1 + (1 − λ)x2 ∈ A,

then A is called a convex set. Before introducing the concept of convex fuzzy sets, we first prove the result below.

Theorem 1.3.1. Suppose Ã is a fuzzy set in X. The sets A_α = {x | μ_Ã(x) ≥ α}, α ∈ [0, 1], are all convex if and only if ∀x1, x2 ∈ X, λ ∈ [0, 1], there is

μ_Ã(λx1 + (1 − λ)x2) ≥ μ_Ã(x1) ∧ μ_Ã(x2).   (1.3.1)

Proof: Suppose the A_α, α ∈ [0, 1], are all convex sets. For ∀x1, x2 ∈ X we may assume μ_Ã(x2) ≥ μ_Ã(x1) = α0; then μ_Ã(x1) ∧ μ_Ã(x2) = α0. Because A_{α0} is a convex set and x1, x2 ∈ A_{α0}, for ∀λ ∈ [0, 1] we have

λx1 + (1 − λ)x2 ∈ A_{α0},

hence

μ_Ã(λx1 + (1 − λ)x2) ≥ α0.

Therefore

μ_Ã(λx1 + (1 − λ)x2) ≥ μ_Ã(x1) ∧ μ_Ã(x2).

Conversely, suppose that ∀x1, x2 ∈ X, λ ∈ [0, 1], there holds

μ_Ã(λx1 + (1 − λ)x2) ≥ μ_Ã(x1) ∧ μ_Ã(x2).

If α ∈ [0, 1] and x1, x2 ∈ A_α, then μ_Ã(x1) ≥ α and μ_Ã(x2) ≥ α, such that

μ_Ã(x1) ∧ μ_Ã(x2) ≥ α,

so

μ_Ã(λx1 + (1 − λ)x2) ≥ μ_Ã(x1) ∧ μ_Ã(x2) ≥ α,

hence

λx1 + (1 − λ)x2 ∈ A_α.

Therefore, A_α is a convex set.

Definition 1.3.3. Suppose X = R^n is the n-dimensional Euclidean space and Ã is a fuzzy set in X. If ∀α ∈ [0, 1] the A_α are all convex sets, the fuzzy set Ã is called a convex fuzzy set. From Theorem 1.3.1 we know that Ã is a convex fuzzy set if and only if ∀λ ∈ [0, 1], x1, x2 ∈ X, there is μ_Ã(λx1 + (1 − λ)x2) ≥ μ_Ã(x1) ∧ μ_Ã(x2).

Theorem 1.3.2. If Ã and B̃ are convex fuzzy sets, so is Ã ∩ B̃.

Proof: ∀x1, x2 ∈ X, ∀λ ∈ [0, 1],

μ_{Ã∩B̃}(λx1 + (1 − λ)x2) = μ_Ã(λx1 + (1 − λ)x2) ∧ μ_B̃(λx1 + (1 − λ)x2)
≥ (μ_Ã(x1) ∧ μ_Ã(x2)) ∧ (μ_B̃(x1) ∧ μ_B̃(x2))
= (μ_Ã(x1) ∧ μ_B̃(x1)) ∧ (μ_Ã(x2) ∧ μ_B̃(x2))
= μ_{Ã∩B̃}(x1) ∧ μ_{Ã∩B̃}(x2).

Therefore, Ã ∩ B̃ is a convex fuzzy set.

Definition 1.3.4. Let Ã, B̃, η̃ ∈ F(X). Then a convex combination of Ã and B̃ with respect to η̃ is a fuzzy set, denoted by (Ã, B̃; η̃), with membership function

μ_{(Ã,B̃;η̃)}(x) = η̃(x)μ_Ã(x) + (1 − η̃(x))μ_B̃(x), ∀x ∈ X.

Generally, if Ã_i, η̃_i ∈ F(X) (1 ≤ i ≤ m) and Σ_{i=1}^m η̃_i(x) = 1 (∀x ∈ X), then a convex combination of the Ã_i with respect to the η̃_i is written as

μ_{(Ã_1,Ã_2,···,Ã_m; η̃_1,η̃_2,···,η̃_m)}(x) = Σ_{i=1}^m η̃_i(x)μ_{Ã_i}(x), ∀x ∈ X.

Definition 1.3.5. Suppose Ã ∈ F(X). If ∀α ∈ (0, 1] the A_α are all bounded sets in X, then Ã is called a bounded fuzzy set in X.

Theorem 1.3.3. Both the union and the intersection of two bounded fuzzy sets are bounded fuzzy sets. This is easy to prove from Definition 1.3.5 and the properties of α-cut sets.
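Condition (1.3.1) of Theorem 1.3.1 (quasi-concavity of the membership function) can be spot-checked numerically. A sketch for a triangular, hence convex, membership function of our own choosing:

```python
# Spot-check of Theorem 1.3.1: for a convex fuzzy set,
# μ(λx1 + (1-λ)x2) ≥ μ(x1) ∧ μ(x2) for all x1, x2, λ.
def mu(x):  # triangular membership centered at 2 with spread 1 (illustrative)
    return max(0.0, 1.0 - abs(x - 2.0))

pts = [i * 0.25 for i in range(17)]   # grid on [0, 4]
lams = [i * 0.1 for i in range(11)]   # λ grid on [0, 1]
ok = all(mu(l * a + (1 - l) * b) >= min(mu(a), mu(b)) - 1e-12
         for a in pts for b in pts for l in lams)
print(ok)  # True
```

The `1e-12` tolerance only guards against floating-point round-off in the convex combination; the inequality itself is exact for this membership function.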

1.4 Fuzzy Relativity and Operator

1.4.1 Fuzzy Relations

Definition 1.4.1. Suppose X × Y is the Cartesian product of X and Y. A fuzzy set R̃ of X × Y, with membership function μ_R̃(x, y) (x ∈ X, y ∈ Y), determines a fuzzy relation R̃ between X and Y (we use the same mark for both).

Example 1.4.1: Suppose X = {x1, x2, x3, x4} denotes a set of four factories and Y = {electricity, coal, petroleum} denotes a set of three kinds of energy resources. Table 1.4.1 shows the fuzzy relation R̃ between the factories and the energy resources, where R̃_ij denotes the degree of dependence of factory i on energy resource j.

Table 1.4.1. Fuzzy Relations between Factories and Energy Resources

            Electricity   Coal    Petroleum
Factory 1   R̃_11          R̃_12    R̃_13
Factory 2   R̃_21          R̃_22    R̃_23
Factory 3   R̃_31          R̃_32    R̃_33
Factory 4   R̃_41          R̃_42    R̃_43

Example 1.4.2: Suppose X = Y is the real number set; the Cartesian product X × Y is the whole plane. "x > y" is an ordinary relation, that is, an ordinary set R in the plane. But consider instead the relation "x ≫ y", that is, "x is much greater than y", which is a fuzzy relation; write it R̃, and define its membership function as

μ_R̃(x, y) = 0 for x ≤ y; μ_R̃(x, y) = [1 + 100/(x − y)²]^{−1} for x > y.

From here we can see the following:

1° A fuzzy relation R̃ from X to Y is a fuzzy set in the Cartesian product X × Y. Because the Cartesian product is order-relevant, i.e., X × Y ≠ Y × X, R̃ is also order-relevant.

2° If the membership function R̃(x, y) of a fuzzy relation takes only the two values {0, 1}, then R̃ determines an ordinary set in X × Y; thus fuzzy relations extend ordinary relations.

In Example 1.4.2, R̃ is a fuzzy relation on one and the same universe. Under the condition X = Y, we call R̃ a fuzzy relation on X.

Example 1.4.3: Suppose X = {x1, x2, x3} denotes a set of three persons, and R̃ denotes the fuzzy relation of mutual trust among the three persons, i.e.,

R̃ = 1/(x1,x1) + 0.6/(x1,x2) + 0.9/(x1,x3) + 0.1/(x2,x1) + 1/(x2,x2) + 0.7/(x2,x3) + 0.5/(x3,x1) + 0.8/(x3,x2) + 1/(x3,x3).

μ_R̃(xi, xi) = 1 expresses that everybody trusts himself most; μ_R̃(x2, x1) = 0.1 indicates that x2 "basically distrusts" x1.

Definition 1.4.1 can be expanded to fuzzy relations among finitely many, or even infinitely many, universes. Since a fuzzy relation R̃ is given through a fuzzy set R̃ in the Cartesian product, the operations and properties of fuzzy relations are all those of fuzzy sets. In addition, fuzzy relations have the following special operation.

Definition 1.4.2. Suppose R̃1 is a fuzzy relation from X to Y and R̃2 is a fuzzy relation from Y to Z. Then the composition R̃1 ∘ R̃2 of R̃1 and R̃2 is a fuzzy relation from X to Z; its membership function is given, for ∀(x, z) ∈ X × Z, by

μ_{R̃1∘R̃2}(x, z) = ⋁_{y∈Y} [μ_{R̃1}(x, y) ∧ μ_{R̃2}(y, z)],   (1.4.1)

where x ∈ X, z ∈ Z. If R1, R2 are two ordinary relations, then according to the method for ordinary sets, their composition is

R1 ∘ R2 = {(x, z) | (x, z) ∈ X × Z, ∃y ∈ Y, s.t. (x, y) ∈ R1, (y, z) ∈ R2}.   (1.4.2)
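On finite universes, formula (1.4.1) is a "max-min" matrix product: replace + by max and × by min in the ordinary matrix product. A minimal sketch with illustrative matrices of our own:

```python
# Max-min composition of fuzzy relations on finite universes (formula 1.4.1):
# μ_{R1∘R2}(x, z) = max_y min(μ_{R1}(x, y), μ_{R2}(y, z)).
def compose(R1, R2):
    n, m, p = len(R1), len(R2), len(R2[0])
    return [[max(min(R1[i][k], R2[k][j]) for k in range(m))
             for j in range(p)] for i in range(n)]

R1 = [[1.0, 0.6],
      [0.2, 0.9]]
R2 = [[0.5, 0.0],
      [0.7, 1.0]]
print(compose(R1, R2))  # [[0.6, 0.6], [0.7, 0.9]]
```

Like the ordinary matrix product, this composition is associative (Proposition 1.4.1 below proves this in general).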


For ordinary relations R1 and R2, compositions (1.4.1) and (1.4.2) are accordant: in this case the membership function in (1.4.1) also takes only the two values {0, 1}, and it is easy to prove that (1.4.1) is equivalent to (1.4.2).

Example 1.4.4: Suppose R̃1 is a fuzzy relation between X and Y with membership function μ_{R̃1}(x, y) = e^{−k(x−y)²}, and R̃2 is a fuzzy relation between Y and Z with membership function μ_{R̃2}(y, z) = e^{−k(y−z)²} (k ≥ 1 a constant). Then the composition R̃1 ∘ R̃2 is a fuzzy relation between X and Z with membership function

μ_{R̃1∘R̃2}(x, z) = ⋁_{y∈Y} [e^{−k(x−y)²} ∧ e^{−k(y−z)²}] = e^{−k(x − (x+z)/2)²} = e^{−k((x−z)/2)²},

the supremum being attained at y = (x + z)/2.

Next, a few special fuzzy relations; suppose R̃ is a fuzzy relation on X.

(1) Inverse fuzzy relation. The inverse of the fuzzy relation R̃ is denoted R̃^{−1}, with membership function

μ_{R̃^{−1}}(x, y) = μ_R̃(y, x), ∀x, y ∈ X.

Example 1.4.5: In Example 1.4.3, the inverse relation of R̃ is

R̃^{−1} = 1/(x1,x1) + 0.1/(x1,x2) + 0.5/(x1,x3) + 0.6/(x2,x1) + 1/(x2,x2) + 0.8/(x2,x3) + 0.9/(x3,x1) + 0.7/(x3,x2) + 1/(x3,x3).

(2) Symmetric relation. If a fuzzy relation R̃ satisfies

μ_{R̃^{−1}}(x, y) = μ_R̃(x, y), ∀x, y ∈ X,

then R̃ is called symmetric.

Example 1.4.6: The "friend" relation is symmetric, while the "paternity" and "consequence" relations are not symmetric.

(3) Identical relation. The fuzzy relation Ĩ on X called the identical relation is in fact an ordinary relation, with membership function

μ_Ĩ(x, y) = 1 if x = y, and 0 if x ≠ y, ∀x, y ∈ X.

(4) The zero relation Õ and the whole relation X̃ are given by

μ_Õ(x, y) = 0, μ_X̃(x, y) = 1, ∀x, y ∈ X.


1.4.2 The Operation Properties of the Fuzzy Relation

Proposition 1.4.1. Composition of fuzzy relations satisfies the associative law

(R̃1 ∘ R̃2) ∘ R̃3 = R̃1 ∘ (R̃2 ∘ R̃3).   (1.4.3)

Proof: Because

μ_{(R̃1∘R̃2)∘R̃3}(x, w) = ⋁_{z∈X} [μ_{R̃1∘R̃2}(x, z) ∧ μ_{R̃3}(z, w)]
= ⋁_{z∈X} {⋁_{y∈X} [μ_{R̃1}(x, y) ∧ μ_{R̃2}(y, z)] ∧ μ_{R̃3}(z, w)}
= ⋁_{y∈X} [⋁_{z∈X} (μ_{R̃1}(x, y) ∧ μ_{R̃2}(y, z) ∧ μ_{R̃3}(z, w))]
= ⋁_{y∈X} {μ_{R̃1}(x, y) ∧ [⋁_{z∈X} (μ_{R̃2}(y, z) ∧ μ_{R̃3}(z, w))]}
= ⋁_{y∈X} [μ_{R̃1}(x, y) ∧ μ_{R̃2∘R̃3}(y, w)]
= μ_{R̃1∘(R̃2∘R̃3)}(x, w),

consequently (1.4.3) holds. If R̃ is a fuzzy relation on X, we stipulate

R̃ ∘ R̃ ∘ ··· ∘ R̃ (k factors) = R̃^k.

Proposition 1.4.2. For an arbitrary fuzzy relation R̃, we have

Ĩ ∘ R̃ = R̃ ∘ Ĩ = R̃, Õ ∘ R̃ = R̃ ∘ Õ = Õ.

Proposition 1.4.3. If S̃ ⊆ T̃, then R̃ ∘ S̃ ⊆ R̃ ∘ T̃ and S̃ ∘ R̃ ⊆ T̃ ∘ R̃.

Proposition 1.4.4. For an arbitrary family of fuzzy relations {R̃_i}_{i∈I} and a fuzzy relation R̃, we have

(1) R̃ ∘ (∪_{i∈I} R̃_i) = ∪_{i∈I} (R̃ ∘ R̃_i); (2) (∪_{i∈I} R̃_i) ∘ R̃ = ∪_{i∈I} (R̃_i ∘ R̃).

Proof: We prove only (1). ∀(x, z) ∈ X × X,

μ_{R̃∘(∪R̃_i)}(x, z) = ⋁_{y∈X} {μ_R̃(x, y) ∧ [⋁_{i∈I} μ_{R̃_i}(y, z)]}
= ⋁_{y∈X} ⋁_{i∈I} [μ_R̃(x, y) ∧ μ_{R̃_i}(y, z)]
= ⋁_{i∈I} ⋁_{y∈X} [μ_R̃(x, y) ∧ μ_{R̃_i}(y, z)]
= ⋁_{i∈I} μ_{R̃∘R̃_i}(x, z) = μ_{∪(R̃∘R̃_i)}(x, z).

Therefore (1) holds.

Proposition 1.4.5. (1) R̃ ∘ (∩_{i∈I} R̃_i) ⊆ ∩_{i∈I} (R̃ ∘ R̃_i); (2) (∩_{i∈I} R̃_i) ∘ R̃ ⊆ ∩_{i∈I} (R̃_i ∘ R̃).

Proof: We prove only (1). Since ∀i ∈ I, ∩_{i∈I} R̃_i ⊆ R̃_i, from Proposition 1.4.3 we get, ∀(x, z) ∈ X × X and ∀i ∈ I,

μ_{R̃∘(∩R̃_i)}(x, z) ≤ μ_{R̃∘R̃_i}(x, z),

hence

μ_{R̃∘(∩R̃_i)}(x, z) ≤ ⋀_{i∈I} μ_{R̃∘R̃_i}(x, z).

Therefore (1) holds.

Proposition 1.4.6. (R̃1 ∘ R̃2)^{−1} = R̃2^{−1} ∘ R̃1^{−1}.

Proof: ∀(x, z) ∈ X × X, we have

μ_{(R̃1∘R̃2)^{−1}}(x, z) = μ_{R̃1∘R̃2}(z, x) = ⋁_{y∈X} [μ_{R̃1}(z, y) ∧ μ_{R̃2}(y, x)]
= ⋁_{y∈X} [μ_{R̃1^{−1}}(y, z) ∧ μ_{R̃2^{−1}}(x, y)]
= ⋁_{y∈X} [μ_{R̃2^{−1}}(x, y) ∧ μ_{R̃1^{−1}}(y, z)]
= μ_{R̃2^{−1}∘R̃1^{−1}}(x, z).

Hence (R̃1 ∘ R̃2)^{−1} = R̃2^{−1} ∘ R̃1^{−1}.

Proposition 1.4.7. (1) (∪_{i∈I} R̃_i)^{−1} = ∪_{i∈I} R̃_i^{−1}; (2) (∩_{i∈I} R̃_i)^{−1} = ∩_{i∈I} R̃_i^{−1}.

Proof: We prove only (1). ∀(x, y) ∈ X × X,

μ_{(∪R̃_i)^{−1}}(x, y) = μ_{∪R̃_i}(y, x) = ⋁_{i∈I} μ_{R̃_i}(y, x) = ⋁_{i∈I} μ_{R̃_i^{−1}}(x, y) = μ_{∪R̃_i^{−1}}(x, y).

Therefore (1) holds.

Proposition 1.4.8. (R̃^{−1})^{−1} = R̃.

Definition 1.4.3. Suppose R̃ is a fuzzy relation on X. If R̃ satisfies R̃ ∘ R̃ ⊆ R̃, then R̃ is called a transitive fuzzy relation. Notice that if R is an ordinary relation on X, R is transitive if and only if (x, y) ∈ R and (y, z) ∈ R imply (x, z) ∈ R. It is easy to see that when R̃ in Definition 1.4.3 degenerates into an ordinary relation, this transitivity coincides with ordinary transitivity.

Proposition 1.4.9. The union and the intersection of symmetric fuzzy relations are still symmetric.

Proposition 1.4.10. The intersection of transitive fuzzy relations is transitive.

Proposition 1.4.11. For an arbitrary fuzzy relation R̃, we have the following:
(1) There exists a least symmetric fuzzy relation containing R̃, namely the symmetric closure of R̃, recorded as S(R̃).
(2) There exists a least transitive fuzzy relation containing R̃, namely the transitive closure of R̃, recorded as T(R̃).

Proof: We only prove (1). Let Q̃ denote the set of all symmetric fuzzy relations containing R̃. Because the whole relation X̃ is symmetric on X, i.e., X̃ ∈ Q̃, Q̃ is not empty. Let S̃0 = ∩{S̃ | S̃ ∈ Q̃}; by Proposition 1.4.9, S̃0 is the least symmetric relation containing R̃.

Proposition 1.4.12. Suppose R̃1 and R̃2 are symmetric fuzzy relations; then R̃1 ∘ R̃2 is symmetric ⟺ R̃1 ∘ R̃2 = R̃2 ∘ R̃1.

Proof: "⟹" Because R̃1 ∘ R̃2 is symmetric,

R̃1 ∘ R̃2 = (R̃1 ∘ R̃2)^{−1} = R̃2^{−1} ∘ R̃1^{−1} = R̃2 ∘ R̃1.

"⟸" If R̃1 ∘ R̃2 = R̃2 ∘ R̃1, then

(R̃1 ∘ R̃2)^{−1} = R̃2^{−1} ∘ R̃1^{−1} = R̃2 ∘ R̃1 = R̃1 ∘ R̃2.

Therefore R̃1 ∘ R̃2 is symmetric.

Proposition 1.4.13. If R̃ is transitive, then R̃^{−1} is transitive.

Proof: Because R̃ ∘ R̃ ⊆ R̃, from Proposition 1.4.6 we have, ∀(x, y) ∈ X × X,

μ_{R̃^{−1}∘R̃^{−1}}(x, y) = μ_{(R̃∘R̃)^{−1}}(x, y) = μ_{R̃∘R̃}(y, x) ≤ μ_R̃(y, x) = μ_{R̃^{−1}}(x, y),

that is, R̃^{−1} is transitive.

The above propositions concerning fuzzy relations were all stated for fuzzy relations on X. This restraint can actually be dropped: the above propositions hold as long as the compositions involved exist.

1.4.3 Special Fuzzy Operators

Definition 1.4.4. For Ã, B̃ ∈ F(X), the general form of the union and intersection operations is defined as

μ_{Ã∪B̃}(x) = μ_Ã(x) ∨* μ_B̃(x), μ_{Ã∩B̃}(x) = μ_Ã(x) ∧* μ_B̃(x).

Here ∨*, ∧* are binary operations on [0, 1], briefly called fuzzy operators.

We take them as follows.

I. Max-product operator (∨, ·): μ_Ã(x) ∨ μ_B̃(x) = max{μ_Ã(x), μ_B̃(x)}; μ_Ã(x) · μ_B̃(x) denotes the ordinary real-number product.

II. Bounded sum and product operator (⊕, ⊙): μ_Ã(x) ⊕ μ_B̃(x) = min{μ_Ã(x) + μ_B̃(x), 1}; μ_Ã(x) ⊙ μ_B̃(x) = max{0, μ_Ã(x) + μ_B̃(x) − 1}.


III. Probabilistic sum and product operator (+̂, ·): μ_Ã(x) +̂ μ_B̃(x) = μ_Ã(x) + μ_B̃(x) − μ_Ã(x)μ_B̃(x).

It can be verified by elementary calculation that Operator I satisfies the calculation laws of the operator pair (∨, ∧), but Operators II and III do not satisfy the idempotent, absorptive and distributive laws.
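The failure of idempotency for Operators II and III is easy to see numerically. A sketch of the three operator pairs (function names are our own):

```python
# The operator pairs of Section 1.4.3, applied pointwise to membership grades.
max_or   = lambda a, b: max(a, b)                 # Operator I: max
prod_and = lambda a, b: a * b                     # Operator I: product
bnd_sum  = lambda a, b: min(a + b, 1.0)           # Operator II: bounded sum
bnd_prod = lambda a, b: max(0.0, a + b - 1.0)     # Operator II: bounded product
prob_sum = lambda a, b: a + b - a * b             # Operator III: probabilistic sum

a = 0.5
print(max_or(a, a))    # 0.5  (idempotent)
print(bnd_sum(a, a))   # 1.0  (not idempotent)
print(prob_sum(a, a))  # 0.75 (not idempotent)
```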

1.5 Fuzzy Functions

A fuzzy function is one of the most important concepts in fuzzy optimization problems. Its discussion here is divided into two parts [DPr80]. Besides, the kinds of constraint functions repeatedly used in this book are introduced.

1.5.1 Fuzzy Function from Universe X to Another One Y

Definition 1.5.1. Let F(X) and F(Y) represent all fuzzy sets on the universes X and Y, respectively. If there exists an ordinary mapping f: F(X) → F(Y), then we call f a fuzzy-valued function from X to Y, writing f̃: X ∼→ Y.

Definition 1.5.2. Let f̃: X ∼→ Y, g̃: Y ∼→ Z be two fuzzy-valued functions. Then g̃ ∘ f̃: F(X) → F(Z), i.e.,

∀Ã ∈ F(X), (g̃ ∘ f̃)(Ã) ∈ F(Z),

is called the compound fuzzy function of f̃ and g̃.

Proposition 1.5.1. If f: X → Y, g: Y → Z denote two ordinary mappings, two fuzzy functions f̃: X ∼→ Y and g̃: Y ∼→ Z can be obtained by means of the extension principle. Their compound under Definition 1.5.2 coincides with the fuzzy function obtained from the compound g ∘ f: X → Z of the ordinary mappings f and g by means of the extension principle.

Proof: For ∀Ã ∈ F(X), the image obtained with the fuzzy function f̃: X ∼→ Y extended from f: X → Y is, for arbitrary y ∈ Y,

μ_{f̃(Ã)}(y) = sup_{x∈f^{−1}(y)} μ_Ã(x) if f^{−1}(y) ≠ φ, and 0 if f^{−1}(y) = φ.

For ∀B̃ ∈ F(Y), the image achieved by the fuzzy function g̃: Y ∼→ Z is, for ∀z ∈ Z,

μ_{g̃(B̃)}(z) = sup_{y∈g^{−1}(z)} μ_B̃(y) if g^{−1}(z) ≠ φ, and 0 if g^{−1}(z) = φ,

such that their compound is g̃ ∘ f̃: X ∼→ Z. From Definition 1.5.2, for ∀Ã ∈ F(X), ∀z ∈ Z, there exists

μ_{g̃∘f̃(Ã)}(z) = μ_{g̃(f̃(Ã))}(z)
= sup_{y∈g^{−1}(z)} μ_{f̃(Ã)}(y) if g^{−1}(z) ≠ φ, and 0 if g^{−1}(z) = φ
= sup_{y∈g^{−1}(z)} sup_{x∈f^{−1}(y)} μ_Ã(x) if f^{−1}(y) ≠ φ and g^{−1}(z) ≠ φ, and 0 if f^{−1}(y) = φ or g^{−1}(z) = φ;

therefore, ∀z ∈ Z,

μ_{g̃∘f̃(Ã)}(z) = sup_{x∈f^{−1}(g^{−1}(z))} μ_Ã(x) if f^{−1}(g^{−1}(z)) ≠ φ, and 0 if f^{−1}(g^{−1}(z)) = φ
= sup_{x∈(g∘f)^{−1}(z)} μ_Ã(x) if (g ∘ f)^{−1}(z) ≠ φ, and 0 if (g ∘ f)^{−1}(z) = φ.   (1.5.1)

The right-hand side of Formula (1.5.1) is exactly the fuzzy function gained from the ordinary compound mapping g ∘ f by means of the extension principle.

1.5.2 Fuzzy Functions from Fuzzy Set Ã to Another One B̃

Definition 1.5.3. Let f: X → Y be an ordinary mapping, and let fuzzy sets Ã and B̃ be defined on X and Y, respectively. If B̃(f(x)) = μ_Ã(x) for ∀x ∈ X, then we call f̃ a fuzzy-valued function from the fuzzy set Ã to B̃, writing f̃: Ã ∼→ B̃.

Let Ã ∈ F(X), B̃ ∈ F(Y), C̃ ∈ F(Z), and let f̃: Ã ∼→ B̃ and g̃: B̃ ∼→ C̃. Then the composite mapping g ∘ f: X → Z is a fuzzy function from Ã to C̃, i.e., g̃ ∘ f̃: Ã ∼→ C̃. In fact, ∀x ∈ X, (g̃ ∘ f̃)(x) = g̃(f̃(x)), hence

μ_{C̃∘(g̃∘f̃)}(x) = C̃((g̃ ∘ f̃)(x)) = C̃(g̃(f̃(x))) = (C̃ ∘ g̃)(f̃(x)) = B̃(f̃(x)) = μ_Ã(x),

i.e.,

C̃ ∘ (g̃ ∘ f̃) = (C̃ ∘ g̃) ∘ f̃ = B̃ ∘ f̃ = Ã.

1.5.3 Fuzzy Constrained Function

For the sake of discussion, we introduce some constantly used fuzzy constrained functions [Cao93a][Cao94b][Cao07][DPr80].

Definition 1.5.4. ∀x ∈ X, let g(x) be a real bounded function defined on X with infimum inf(g) and supremum sup(g), sup(g) ≠ inf(g). Then we define

μ_M̃(x) = [(g(x) − inf(g))/(sup(g) − inf(g))]^n,   (1.5.2)

calling M̃: X → [0, 1] a maximal set of g, where n is a natural number.

Definition 1.5.5. If c_i^1, c_i^2 are the left and right endpoints of an interval, then, for c̃_i freely fixed in the closed value interval [c_i^1, c_i^2], its degree of accomplishment is determined by

μ_{φ̃_i}(c̃_i) = 0 if c_i ≤ c_i^1; [(c_i − c_i^1)/(c_i^2 − c_i^1)]^n if c_i^1 < c_i ≤ c_i^2; 1 if c_i > c_i^2,   (1.5.3)

where n denotes a natural number.

For fuzzy constraint sets and fuzzy objective sets, we have the following.

Definition 1.5.6. If Ã_i = {x ∈ R^m | g_i(x) ≲ 1} (1 ≤ i ≤ p) is a fuzzy constraint set corresponding to the fuzzy constraint inequalities g_i(x) ≲ 1, then the membership functions of the Ã_i are

μ_{Ã_i}(x) = 0 if g_i(x) ≥ 1 + d_i; (1 − t_i/d_i)^n if g_i(x) = 1 + t_i, 0 ≤ t_i ≤ d_i; 1 if g_i(x) ≤ 1,   (1.5.4)

where d_i ∈ R (a real number) denotes a maximum flexible index of g_i(x).

Definition 1.5.7. Regard Ã_0 = {x ∈ R^m | g_0(x) ≳ z_0} as a fuzzy objective set and assume the membership function of Ã_0 as follows:

μ_{Ã_0}(x) = 0 if g_0(x) ≤ z_0 − d_0; (1 − t_0/d_0)^n if g_0(x) = z_0 − t_0, 0 ≤ t_0 ≤ d_0; 1 if g_0(x) ≥ z_0,   (1.5.5)

where d_0 ≥ 0 is a maximum flexible index of g_0(x) and z_0 an objective value. We define the symbol "≲" as a flexible version of "≤" at a certain degree [Ver84][LL01], i.e., approximately less than or equal to.


Definition 1.5.8. Let the fuzzy sets Ã_i (0 ≤ i ≤ p) be

Ã_i = {x ∈ R^m | g_i(x) ≲ 1} (0 ≤ i ≤ p′)

and

Ã_i = {x ∈ R^m | g_i(x) ≳ 1} (p′ + 1 ≤ i ≤ p).

Then their membership functions are defined as

μ_{Ã_i}(x) = 1 if g_i(x) ≤ 1; e^{−(1/d_i)(g_i(x)−1)} if 1 < g_i(x) ≤ 1 + d_i   (1.5.6)

for 0 ≤ i ≤ p′, and

μ_{Ã_i}(x) = 0 if g_i(x) ≤ 1; 1 − e^{−(1/d_i)(g_i(x)−1)} if 1 < g_i(x) ≤ 1 + d_i   (1.5.7)

for p′ + 1 ≤ i ≤ p, where d_i ≥ 0 is a maximum flexible index of the i-th function g_i(x).

We introduce the possibility grade of dominance of 1̃ over g̃_i(x), a concept introduced by Dubois and Prade in 1980, which represents the fuzzy extension of g_i(x) ≤ 1 [DPr80].

Definition 1.5.9. The degree of possibility of g̃_i(x) ≲ 1̃ is defined as

v(g̃_i(x) ≲ 1̃) = sup_{x,y: x≥y} min(μ_1̃(x), μ_{g̃_i(x)}(y)).

This formula is an extension of the inequality x ≥ y according to the extension principle. When a pair (x, y) exists such that x ≥ y and μ_1̃(x) = μ_{g̃_i(x)}(y) = 1, then v(g̃_i(x) ≲ 1̃) = 1. When g̃_i(x) and 1̃ are convex fuzzy numbers, we have

v(g̃_i(x) ≲ 1̃) = 1 if and only if g_i(x) ≤ 1;
v(g̃_i(x) ≲ 1̃) = hgt(g̃_i(x) ∩ 1̃) = μ_1̃(d) otherwise,

where d is the abscissa of the highest intersection point between μ_1̃ and μ_{g̃_i(x)}.

1.6 Three Mainstream Theorems in Fuzzy Mathematics

1.6.1 Decomposition Theorem

Definition 1.6.1. If α ∈ [0, 1] and Ã ∈ F(X), then the product of the number α with the fuzzy set Ã is defined by

μ_{αÃ}(x) = α ∧ μ_Ã(x).


Theorem 1.6.1. (Decomposition Theorem I) For an arbitrary Ã ∈ F(X), we have

Ã = ∪_{α∈[0,1]} αA_α,   (1.6.1)
Ã = ∪_{α∈[0,1]} αA_α·.   (1.6.2)

Proof: Because

μ_{A_α}(x) = 1 if x ∈ A_α, and 0 if x ∉ A_α,

then

μ_{∪αA_α}(x) = sup_{0≤α≤1} [α ∧ μ_{A_α}(x)] = sup_{α: x∈A_α} α = sup_{α ≤ μ_Ã(x)} α = μ_Ã(x).

Therefore, (1.6.1) is proved. Similarly, we can prove Formula (1.6.2).

Example 1.6.1: Suppose the universe is X = {2, 1, 7, 6, 9}; try to decompose the fuzzy set

Ã = 0.1/2 + 0.3/1 + 0.5/7 + 0.9/6 + 1/9

by applying the Decomposition Theorem.

Solution: The relevant cut sets of the fuzzy set are

A_α = A_0.1 = X for 0 < α ≤ 0.1,
A_α = A_0.3 = {1, 7, 6, 9} for 0.1 < α ≤ 0.3,
A_α = A_0.5 = {7, 6, 9} for 0.3 < α ≤ 0.5,
A_α = A_0.9 = {6, 9} for 0.5 < α ≤ 0.9,
A_α = A_1 = {9} for 0.9 < α ≤ 1.

Hence

Ã = ∪_{α∈[0,1]} αA_α
= 0.1(1/2 + 1/1 + 1/7 + 1/6 + 1/9) ∪ 0.3(1/1 + 1/7 + 1/6 + 1/9) ∪ 0.5(1/7 + 1/6 + 1/9) ∪ 0.9(1/6 + 1/9) ∪ 1(1/9)
= 0.1/2 + 0.3/1 + 0.5/7 + 0.9/6 + 1/9.

… ≥ μ_c̃(x2), which is a contradiction. Therefore, L(x) is an increasing function.

(3) L(x) is continuous on the right; otherwise there exist x < c1 and x_n → x with lim_{x_n→x} L(x_n) = α > L(x). Since x_n ∈ c̄_α and c̄_α is closed, x ∈ c̄_α, such that μ_c̃(x) = L(x) ≥ α, a contradiction. For the same reason, μ_c̃(x) = R(x) is a continuous, decreasing function on the left for x > c2, with 0 ≤ R(x) < 1.

Sufficiency. Let c̃ satisfy the condition in the theorem. Then
(1) c̃ is obviously normal.
(2) We prove c̄_α = [c_{1α}, c_{2α}], ∀α ∈ (0, 1]. μ_c̃(x) = L(x) for x < c1, so we select c_{1α} = min{x | L(x) ≥ α}; μ_c̃(x) = R(x) for x > c2, so we select c_{2α} = max{x | R(x) ≥ α}. Obviously, c̄_α ⊂ [c_{1α}, c_{2α}]. Now we prove [c_{1α}, c_{2α}] ⊂ c̄_α; we prove only [c_{1α}, c1) ⊂ c̄_α (since (c2, c_{2α}] ⊂ c̄_α can be proved for the same reason). Again, by the monotonicity of L(x), we need only prove c_{1α} ∈ c̄_α. Select x_n → c_{1α}; then L(c_{1α}) = lim_{x_n→c_{1α}} L(x_n) ≥ α, such that c_{1α} ∈ c̄_α.
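The decomposition of Example 1.6.1 can be checked mechanically: every membership degree is recovered as the largest level α whose cut still contains the element. A small sketch:

```python
# Rebuild the fuzzy set of Example 1.6.1 from its α-cuts (Theorem 1.6.1):
# μ_Ã(x) = sup{α : x ∈ A_α}.
A = {2: 0.1, 1: 0.3, 7: 0.5, 6: 0.9, 9: 1.0}
levels = sorted(set(A.values()))
cuts = {a: {x for x, mu in A.items() if mu >= a} for a in levels}

rebuilt = {x: max(a for a in levels if x in cuts[a]) for x in A}
print(rebuilt == A)  # True
```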

1.7.2 Type (·, c), T, L-R and Flat Fuzzy Numbers

Definition 1.7.4. c̃ = (α, c) is defined as a (·, c) fuzzy number on the product space α_1 × α_2 × ··· × α_J; its membership function is

μ_c̃(a) = min_j [μ_{c̃_j}(a_j)],

μ_{c̃_j}(a_j) = 1 − |α_j − a_j|/c_j if α_j − c_j ≤ a_j ≤ α_j + c_j, and 0 otherwise,   (1.7.2)


where α = (a_1, a_2, ···, a_J)^T, c = (c_1, c_2, ···, c_J)^T; α denotes the center of c̃ and c the extension of c̃, with c_j > 0. Coming next are special cases.

Definition 1.7.5. L is called a reference function of fuzzy numbers if L satisfies (i) L(x) = L(−x); (ii) L(0) = 1; (iii) L(x) is nonincreasing and piecewise continuous on [0, +∞).

Definition 1.7.6. Let L, R be reference functions of a fuzzy number c̃, called an L-R fuzzy number, if

μ_c̃(x) = L((c − x)/c̲) for x ≤ c, c̲ > 0; μ_c̃(x) = R((x − c)/c̄) for x ≥ c, c̄ > 0.   (1.7.3)

We write c̃ = (c, c̲, c̄)_LR, where c is the mean value, and c̲ and c̄ are called the left and right spreads of c̃, respectively; L is called the left reference and R the right reference. If we take c̃ to be a variable x̃, then x̃ = (x, ξ̲, ξ̄)_LR represents an L-R fuzzy variable.

Definition 1.7.7. If L and R are functions satisfying

T(x) = 1 − |x| if −1 ≤ x ≤ 1, and 0 otherwise,   (1.7.4)

then we call c̃ = (c, c̲, c̄)_T a T-fuzzy number, with T(R) representing the set of T-fuzzy numbers. If we take c̃ to be a variable x̃, then x̃ = (x, ξ̲, ξ̄)_T represents a T-fuzzy variable.

Definition 1.7.8. Let L, R be reference functions; the quadruple c̃ = (c^−, c^+, σ_c^−, σ_c^+)_LR is called an L-R flat fuzzy number if

μ_c̃(x) = L((c^− − x)/σ_c^−) for x ≤ c^−, σ_c^− > 0; R((x − c^+)/σ_c^+) for x ≥ c^+, σ_c^+ > 0; 1 otherwise,   (1.7.5)

satisfying ∃(c^−, c^+), c^− < c^+, with μ_c̃(x) = 1.

1.7 Five-Type Fuzzy Numbers


Especially, c̃ = (c^−, c^+, σ_c^−, σ_c^+) is said to be a flat fuzzy number, where

μ_c̃(x) = 1 − (c^− − x)/σ_c^− if c^− − σ_c^− ≤ x ≤ c^−;
μ_c̃(x) = 1 if c^− < x < c^+;
μ_c̃(x) = 1 − (x − c^+)/σ_c^+ if c^+ ≤ x ≤ c^+ + σ_c^+;
μ_c̃(x) = 0 otherwise.   (1.7.6)

If we take the interval (c^−, c^+) to be (x^−, x^+), then x̃ = (x^−, x^+, ξ̲, ξ̄)_LR and x̃ = (x^−, x^+, ξ̲, ξ̄) represent an L-R flat fuzzy variable and a flat fuzzy one, respectively.

Definition 1.7.9. Suppose "∗" represents an arbitrary ordinary binary operation on R. For ∀c̃, d̃ ∈ F(R) we define

c̃ ∗ d̃ = ∪_{x,y∈R} (μ_c̃(x) ∧ μ_d̃(y))/(x ∗ y),

i.e., ∀z ∈ R,

μ_{c̃∗d̃}(z) = ⋁_{x∗y=z} (μ_c̃(x) ∧ μ_d̃(y)),

where "∗" represents the arithmetic operations +, −, ·, ÷. Accordingly, we can define the operations on Type L-R, T and flat fuzzy numbers.

A. Operation properties of L-R fuzzy numbers

Let c̃ = (c, c̲, c̄)_LR, d̃ = (d, d̲, d̄)_LR and p̃ = (p, p̲, p̄)_RL be L-R fuzzy numbers. Then
1) c̃ + d̃ = (c + d, c̲ + d̲, c̄ + d̄)_LR.
2) k · c̃ = (kc, kc̲, kc̄)_LR when k ≥ 0, and (kc, −kc̄, −kc̲)_RL when k < 0 (k ∈ R). Letting (−1)c̃ = −c̃ for k = −1, we get −c̃ = (−c, c̄, c̲)_RL.
3) c̃ − p̃ = (c − p, c̲ + p̄, c̄ + p̲)_LR for L = R.
4) c̃ · d̃ ≈ (cd, cd̲ + dc̲, cd̄ + dc̄)_LR.
5) c̃ ÷ p̃ ≈ (c/p, (pc̲ + cp̄)/p², (pc̄ + cp̲)/p²)_LR, p ≠ 0; c̃ and p̃ cannot be divided for L ≠ R.
6) max(c̃, d̃) ≈ (c ∨ d, c̲ ∧ d̲, c̄ ∨ d̄)_LR, min(c̃, d̃) ≈ (c ∧ d, c̲ ∨ d̲, c̄ ∧ d̄)_LR.
7) c̃ ≤ d̃ ⟺ c ≤ d, c̲ ≥ d̲, c̄ ≤ d̄; c̃ ⊆ d̃ ⟺ c + c̄ ≤ d + d̄ and c − c̲ ≥ d − d̲, or c̃ = d̃.

B. Operation properties of T-fuzzy numbers

If c̃_1 = (c_1, c̲_1, c̄_1)_T and c̃_2 = (c_2, c̲_2, c̄_2)_T, then
(1) c̃_1 + c̃_2 = (c_1 + c_2, c̲_1 + c̲_2, c̄_1 + c̄_2)_T;
(2) c̃_1 − c̃_2 = (c_1 − c_2, c̲_1 + c̄_2, c̄_1 + c̲_2)_T;


(3) λc̃ = λ(c, c̲, c̄)_T = (λc, λc̲, λc̄)_T for ∀λ > 0, and (λc, −λc̄, −λc̲)_T for ∀λ < 0;
(4) c̃^{−1} = (c, c̲, c̄)_T^{−1} ≈ (1/c, c̄c^{−2}, c̲c^{−2})_T.

C. Operation properties of flat fuzzy numbers

Let c̃ = (c^−, c^+, σ_c^−, σ_c^+) and d̃ = (d^−, d^+, σ_d^−, σ_d^+) be flat fuzzy numbers. Then
1) c̃ + d̃ = (c^− + d^−, c^+ + d^+, σ_c^− + σ_d^−, σ_c^+ + σ_d^+);
2) k · c̃ = (kc^−, kc^+, kσ_c^−, kσ_c^+) for k > 0, and (kc^+, kc^−, −kσ_c^+, −kσ_c^−) for k ≤ 0.

By the definition of Type L-R, T or flat fuzzy numbers, it is easy to prove these operation properties [Dia87][DPr80]. We can deduce the operation properties of (·, c) fuzzy numbers, since they extend those of flat fuzzy ones.
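The T-fuzzy rules (1)–(3) act componentwise on (center, left spread, right spread) triples. A sketch (the helper names are ours):

```python
# T-fuzzy number arithmetic, rules (1)-(3) of Section 1.7.2 B.
# A number c̃ = (c, c̲, c̄)_T is stored as (center, left_spread, right_spread).
def t_add(a, b):
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def t_sub(a, b):  # spreads add crosswise: subtraction widens uncertainty
    return (a[0] - b[0], a[1] + b[2], a[2] + b[1])

def t_scale(lam, a):
    if lam >= 0:
        return (lam * a[0], lam * a[1], lam * a[2])
    return (lam * a[0], -lam * a[2], -lam * a[1])  # negative λ swaps spreads

c1 = (3.0, 1.0, 0.5)
c2 = (1.0, 0.2, 0.4)
print(t_add(c1, c2))    # (4.0, 1.2, 0.9)
print(t_sub(c1, c2))    # (2.0, 1.4, 0.7)
print(t_scale(-2, c1))  # (-6.0, 1.0, 2.0)
```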

2 Regression and Self-regression Models with Fuzzy Coefficients

Because phenomena in the world are complicated, when carrying out statistical forecasts we usually meet a type of fuzzy number whose center point is constant while its spread changes, and vice versa. For such cases, an analytical problem needs to be considered in regression and self-regression under a fuzzy environment. In 1980, a regression analysis formulation was developed in this direction according to a possibilistic linear system [TUA80]. Thereafter, the regression analysis was formulated in various ways by means of fuzzy data analysis and found extensive application [TUA82]. In 1989, based on the theory of Zadeh fuzzy sets [Zad65a], a self-regression forecast model with fuzzy coefficients was advanced [cao89b][cao90].

This chapter introduces regression and self-regression models containing (·, c) fuzzy coefficients, flat fuzzy coefficients as well as triangular fuzzy coefficients, and reduces the regression analysis to linear programming.

2.1 Regression Model with Fuzzy Coefficients

2.1.1 Introduction

Suppose a classical linear regression model is

Y = A_1x_1 + A_2x_2 + ··· + A_nx_n + ε,

where Y is a correlated (dependent) variable, x_i and A_i are an independent variable and a parameter, respectively, and ε is an error. Because problems in the real world all contain a great quantity of fuzziness, this section considers the following fuzzy model:

Ỹ = Ã_1x_1 + Ã_2x_2 + ··· + Ã_nx_n + ε,   (2.1.1)

where Ỹ and the Ã_j (1 ≤ j ≤ n) are a (·, c) fuzzy correlated variable and fuzzy parameters, x = (x_1, x_2, ···, x_n)^T is an independent variable vector whose components x_j (1 ≤ j ≤ n) are observed period by period, and ε is an error. We call (2.1.1) a regression model with fuzzy coefficients.

B.-Y. Cao: Optimal Models & Meth. with Fuzzy Quantities, STUDFUZZ 248, pp. 33–62. © Springer-Verlag Berlin Heidelberg 2010. springerlink.com


2.1.2 Definitions and Concepts of Fuzzy Parameters

Definition 2.1.1. Suppose $F(R)$ is a family of fuzzy sets, and $\tilde A_j \in F(R)$ denotes a fuzzy parameter with membership function (1.5.3) [TOA73][TUA82].

Definition 2.1.2. A fuzzy number $\tilde A$ is a convex normalized fuzzy subset of the real axis satisfying (i) $\exists x_0 \in R$ with $\mu_{\tilde A}(x_0) = 1$; (ii) $\mu_{\tilde A}$ is piecewise continuous. The $\alpha$-cut set of $\tilde A$ is $\tilde A_\alpha = \{x \in R : \mu_{\tilde A}(x) \ge \alpha\}$, where $\alpha \in [0, 1]$.

Definition 2.1.3. If $\forall x, y, z \in R$ with $x \le y \le z$ we have $\mu_{\tilde A}(y) \ge \mu_{\tilde A}(x) \wedge \mu_{\tilde A}(z)$, we call $\tilde A$ a normal (convex) fuzzy number. For the definitions and properties of the relevant $(a, c)$ fuzzy numbers, refer to [TA84] and [Wat87].

Extension Principle: Suppose $\tilde A_1, \dots, \tilde A_n$ are $(a, c)$ fuzzy numbers and $f: R^n \to R$, $f(x_1, x_2, \dots, x_n) = x_1 * x_2 * \cdots * x_n$. Extending the operation "$*$" to fuzzy numbers, the rule $f(\tilde A_1, \tilde A_2, \dots, \tilde A_n) = \tilde A_1 * \tilde A_2 * \cdots * \tilde A_n$ has membership function
$$\mu_{f(\tilde A_1, \dots, \tilde A_n)}(y) = \sup_{(x_1, \dots, x_n) \in f^{-1}(y)} \min\{\mu_{\tilde A_1}(x_1), \dots, \mu_{\tilde A_n}(x_n)\}.$$
In terms of $\alpha$-cut sets, if $\tilde B = f(\tilde A_1, \dots, \tilde A_n)$ is the image of $\tilde A_1, \dots, \tilde A_n$, then $[f(\tilde A_1, \dots, \tilde A_n)]_\alpha = f(A_{1\alpha}, \dots, A_{n\alpha}) \iff \forall y \in Y,\ \exists \bar x_1, \dots, \bar x_n$ such that $\mu_{\tilde B}(y) = \mu_{\tilde A_1 * \tilde A_2 * \cdots * \tilde A_n}(\bar x_1, \bar x_2, \dots, \bar x_n)$.

Definition 2.1.4. Given two sets $X$ and $Y$, $f: X \to Y$ denotes a function $y = f(x, a)$, and $f: X \to F(y)$ denotes a fuzzy function $\tilde Y = f(x, \tilde A)$; the membership function of the fuzzy set $\tilde Y$ is
$$\mu_{\tilde Y}(y) = \begin{cases} \max_{\{a \mid y = f(x, a)\}} \mu_{\tilde A}(a), & \{a \mid y = f(x, a)\} \ne \emptyset, \\ 0, & \text{otherwise}, \end{cases}$$


where $x \in R$ and $a$ is a parameter on the product space $a = a_1 \times a_2 \times \cdots \times a_n$, $n$ being the number of independent variables; $\tilde A$ is a fuzzy set, $\tilde Y$ the image of $x$ under $\tilde A$, and $F(y)$ a fuzzy-valued set.

Definition 2.1.5. The fuzzy parameter $\tilde A$ of fuzzy linear regression is defined on the Cartesian product space $R^n$ as the Cartesian product set $\tilde A = \tilde A_1 \times \tilde A_2 \times \cdots \times \tilde A_n$, as Figure 2.1.1 shows.

[Figure: triangular membership functions with centers $\alpha_i = a_i$, $\alpha_j = a_j$ and widths $c_i$, $c_j$.]
Fig. 2.1.1. Fuzzy Parameter $\tilde A$

Its membership function is of triangular type, i.e., $\mu_{\tilde A}(a) = \min_j \mu_{\tilde A_j}(a_j)$,
$$\mu_{\tilde A_j}(a_j) = \begin{cases} 1 - \dfrac{|\alpha_j - a_j|}{c_j}, & \alpha_j - c_j \le a_j \le \alpha_j + c_j, \\ 0, & \text{otherwise}, \end{cases}$$
where $c_j > 0\ (j = 1, 2, \dots, n)$.

Definition 2.1.6. The fuzzy regression parameter $\tilde A$ defined on the vector space $R^n$ is written in vector form $\tilde A = (\alpha, c)$, $\alpha = (\alpha_1, \dots, \alpha_n)^T$, $c = (c_1, \dots, c_n)^T$, where "$T$" is the transpose sign, $\alpha$ and $c$ are the center and the shape of $\tilde A$, respectively, and $\tilde A$ means "approximately $A$". Below we suppose that $\tilde Y$ and $\tilde A$ are all convex, normalized fuzzy functions and fuzzy numbers.

2.1.3 Establishment of Linear Regression Model

Suppose the linear regression model is
$$\tilde Y = \tilde A_1x_1 + \tilde A_2x_2 + \cdots + \tilde A_nx_n = \tilde Ax = (\alpha^Tx,\ c^Tx), \qquad (2.1.2)$$
where $\tilde A_j\ (j = 1, \dots, n)$ are the parameters to be estimated.


Proposition 2.1.1. The membership function of (2.1.2) is
$$\mu_{\tilde Y}(y) = \begin{cases} 1 - \dfrac{|y - \alpha^Tx|}{c^T|x|}, & x \ne 0, \\ 1, & x = 0,\ y = 0, \\ 0, & x = 0,\ y \ne 0, \end{cases}$$
where $|x| = (|x_1|, |x_2|, \dots, |x_n|)^T$, and $\mu_{\tilde Y}(y) = 0$ when $c^T|x| \le |y - \alpha^Tx|$.

In fact, applying Definition 2.1.4 and the stipulation above, for $x \ne 0$,
$$\mu_{\tilde Y}(y) = \max_{\{\alpha \mid \alpha^Tx = y\}} \mu_{\tilde A}(\alpha) = \max_{\{\alpha \mid \alpha^Tx = y\}} \min_{1 \le j \le n} \mu_{\tilde A_j}(\alpha_j) = \max_{\{\alpha \mid \alpha^Tx = y\}} \min_{1 \le j \le n}\Big(1 - \frac{|\alpha_j - a_j|}{c_j}\Big) = 1 - \frac{|y - \alpha^Tx|}{c^T|x|},$$
with $\mu_{\tilde Y}(y) = 1$ for $x = 0,\ y = 0$, and $\mu_{\tilde Y}(y) = 0$ for $x = 0,\ y \ne 0$. Here, when $c^T|x| < |y - \alpha^Tx|$, the deviation between the calculated value of $y$ and the actual value is bigger than the fuzzy shape of the calculated values, so $\mu_{\tilde Y}(y) = 0$.

Take a sample $(y_i; x_{i1}, x_{i2}, \dots, x_{in})$ of capacity $n$, where $y_i = \alpha^Tx_i\ (i = 1, 2, \dots, n)$ is an observed value and $\hat y_i$ an estimated value; their deviation is $\varepsilon_i = y_i - \hat y_i$. Then $\tilde Y = (y, \varepsilon)$, with the dependent variable $y$ as center and the deviation $\varepsilon$ as shape, is a fuzzy dependent variable, and $\varepsilon = 0$ is the non-fuzzy situation. We aim to determine the fuzzy parameters $\hat{\tilde A}_j\ (j = 1, 2, \dots, n)$ from the observed values. But adopting the classical least-squares method runs into the question of whether $\hat{\tilde A}_j$ is differentiable; hence we determine $\hat{\tilde A}_j$ by the method below.

To measure the degree of fitting $\bar h$ between the observed data and the estimated data, a decision maker chooses a threshold value $H$, selected from experts' experience; the selection of $H$ affects the width $c_j$ of the fuzzy parameters.


If we compute $\max \bar h \ge H$ such that
$$\hat Y_{iH} = \{y \mid \mu_{\hat Y_i}(y) \ge H\}, \qquad (2.1.3)$$
then $\bar h$ is an optimal estimation of the dependent variable in (2.1.2). The index of the approximate degree $\bar h$ is shown in Figure 2.1.2.

[Figure: triangular membership of height 1 centered at $\alpha^Tx_i$ with base width $\sum_{j=1}^n c_j|x_{ij}|$, the observation $y_i$ with deviation $e_i$, and the level $\bar h_i$ marked.]
Fig. 2.1.2. The Index of the Approximate Degree $\bar h$

Theorem 2.1.1. Assume a fuzzy linear regression model as in (2.1.2). Then $\max \bar h \ge H$
$$\iff \begin{cases} \alpha^Tx_i + (1 - H)\sum_{j=1}^n c_j|x_{ij}| \ge y_i + (1 - H)\varepsilon_i, \\ -\alpha^Tx_i + (1 - H)\sum_{j=1}^n c_j|x_{ij}| \ge -y_i + (1 - H)\varepsilon_i, \end{cases} \quad (i = 1, 2, \dots, N).$$

Proof: As shown in Figure 2.1.2, $\bar h$ is derived as follows. By the similarity of the right triangles,
$$\frac{v}{\varepsilon_i} = \frac{1 - h}{1}, \quad v = \varepsilon_i(1 - h), \qquad k = v + |y_i - \alpha^Tx_i| = |y_i - \alpha^Tx_i| + \varepsilon_i(1 - h).$$
Again by the similarity of the right triangles,
$$\frac{k}{\sum_{j=1}^n c_j|x_{ij}|} = \frac{1 - h}{1}, \qquad \frac{|y_i - \alpha^Tx_i| + \varepsilon_i(1 - h)}{\sum_{j=1}^n c_j|x_{ij}|} = 1 - h. \qquad (2.1.4)$$


Solving equation (2.1.4),
$$1 - h = \frac{|y_i - \alpha^Tx_i|}{\sum_{j=1}^n c_j|x_{ij}| - \varepsilon_i}, \quad \text{i.e.,} \quad \bar h_i = 1 - \frac{|y_i - \alpha^Tx_i|}{\sum_{j=1}^n c_j|x_{ij}| - \varepsilon_i}.$$
From (2.1.4), at $y_i - \alpha^Tx_i \le 0$,
$$-\alpha^Tx_i + (1 - H)\sum_{j=1}^n c_j|x_{ij}| \ge -y_i + (1 - H)\varepsilon_i;$$
at $y_i - \alpha^Tx_i \ge 0$, we get likewise
$$\alpha^Tx_i + (1 - H)\sum_{j=1}^n c_j|x_{ij}| \ge y_i + (1 - H)\varepsilon_i \quad (i = 1, 2, \dots, N).$$
Combining the two situations above, the theorem is certificated.

Definition 2.1.7. The vagueness of the fuzzy linear model is denoted by $J(c) = \sum_{j=1}^n c_j|x_{ij}|$, where $x_{ij}$ is an observation datum and $c_j$ the width of $\tilde A_j$.

Therefore, determining the fuzzy parameters $\tilde A_j\ (j = 1, \dots, n)$ is concluded to the computation of an optimal solution $\hat{\tilde A}_j = (\alpha_j, c_j)$ of the following linear programming with parameter variables:
$$\begin{aligned} \min\ & J(c) = \sum_{j=1}^n c_j|x_{ij}| \\ \text{s.t.}\ & \alpha^Tx_i + (1 - H)\sum_{j=1}^n c_j|x_{ij}| \ge y_i + (1 - H)\varepsilon_i, \\ & -\alpha^Tx_i + (1 - H)\sum_{j=1}^n c_j|x_{ij}| \ge -y_i + (1 - H)\varepsilon_i, \\ & c \ge 0,\ H \in [0, 1]\quad (i = 1, \dots, N). \end{aligned} \qquad (2.1.5)$$

Definition 2.1.8. Suppose the regression value of the model is $\hat{\tilde Y}_i = (y_i, \varepsilon_i)$ and the actually measured value is $Y_i$; then


$$\mathrm{RIC} = \sqrt{\frac{\sum_{i=1}^N (\hat y_i - y_i)^2}{\sum_{i=1}^N y_i^2}}$$
is an accuracy level measuring a forecast model, with $\mathrm{RIC} \in [0, +\infty)$. When $\mathrm{RIC} = 0$, the forecast is perfect.

Substituting $\hat{\tilde A}_j$ into (2.1.2) gives the fuzzy linear regression model with fuzzy coefficients that we seek; obviously $c = 0$ is the classical case. The modeling steps can be summarized as follows.

Step 1. Put the collected data (ordinarily real data) into (2.1.3) and, according to Theorem 2.1.1, change the solution for the parameters $\tilde A_j$ into the solution of a linear programming.

Step 2. Find an optimal parameter solution $\hat{\tilde A}_j\ (j = 1, 2, \dots, n)$ of (2.1.1); we then get the regression forecasting model
$$\hat{\tilde Y}_i = \hat{\tilde A}_1x_{i1} + \hat{\tilde A}_2x_{i2} + \dots + \hat{\tilde A}_nx_{in} \quad (i = 1, 2, \dots, N).$$

Step 3. Judge the accuracy of the forecast model:
a. The nearer RIC is to zero, the nearer the value of $\hat y_i$ approaches $y_i$, which means a higher accuracy of the forecasting value.
b. At $\mathrm{RIC} = 0$ the forecast is perfect; here $\hat y_i = (y_i + \varepsilon_i) \times 0.618 + (y_i - \varepsilon_i) \times 0.382$. Once the model passes this examination, it can be put into forecasting.
c. Estimation of the forecast value range: supposing $\hat y_i = (y_i, \varepsilon_i) = \sum_{j=1}^n \hat A_jx_{ij}$, take $\hat y_i^- = y_i - (1 - H)\varepsilon_i$, $\hat y_i^+ = y_i + (1 - H)\varepsilon_i$; then $[\hat y_i^-, \hat y_i^+]$ is the forecast value range.
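Step 1's linear programming (2.1.5) can be solved numerically. The following is only a sketch under the assumption that SciPy is available; the function name `fuzzy_regression` and the default threshold `H=0.5` are mine, not the book's. The decision vector stacks the free centers $\alpha$ and the nonnegative widths $c$, and the two inequality families of Theorem 2.1.1 are rewritten in the `A_ub z <= b_ub` form that `linprog` expects.

```python
import numpy as np
from scipy.optimize import linprog  # assumed available

def fuzzy_regression(X, y, eps, H=0.5):
    """Solve LP (2.1.5): min sum_j c_j |x_ij| subject to the
    Theorem 2.1.1 inequalities. Returns centers alpha and widths c."""
    N, n = X.shape
    A = np.abs(X)
    # objective: only the widths c contribute to the vagueness J(c)
    obj = np.concatenate([np.zeros(n), A.sum(axis=0)])
    # constraints as A_ub @ z <= b_ub with z = [alpha; c]
    A_ub = np.vstack([
        np.hstack([-X, -(1 - H) * A]),  # alpha^T x_i + (1-H) sum >= y_i + (1-H) eps_i
        np.hstack([ X, -(1 - H) * A]),  # -alpha^T x_i + (1-H) sum >= -y_i + (1-H) eps_i
    ])
    b_ub = np.concatenate([-(y + (1 - H) * eps), y - (1 - H) * eps])
    bounds = [(None, None)] * n + [(0, None)] * n  # alpha free, c >= 0
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n], res.x[n:]

# with exact linear data and zero deviations, the widths shrink to zero
alpha, c = fuzzy_regression(np.array([[1.0], [2.0], [3.0]]),
                            np.array([2.0, 4.0, 6.0]), np.zeros(3))
```

With noisy data and nonzero $\varepsilon_i$, the optimal widths $c_j$ widen just enough to cover every observation at level $H$.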

2.2 Self-regression Models with $(\cdot, c)$ Fuzzy Coefficients

2.2.1 Introduction

On the foundation of Refs. [Cao90], [Dia87], and [Wat87], we put another model into consideration: the self-regression model with $(\cdot, c)$ fuzzy coefficients.


It is used to generalize a fuzzy least-squares system through a special example of T-fuzzy data, which is more extensive in application than the classical one.

2.2.2 Model

Consider the classical $n$-order self-regression forecast model
$$Y_t = A_0 + A_1Y_{t-1} + \cdots + A_nY_{t-n} + e. \qquad (2.2.1)$$
Applying fuzzy set theory to the expansion of (2.2.1) gives
$$\tilde Y_t = \tilde A_0 + \tilde A_1Y_{t-1} + \cdots + \tilde A_nY_{t-n} + e_t, \qquad (2.2.2)$$
calling (2.2.2) a self-regression model with $(\cdot, c)$ fuzzy coefficients, where the parameters $\tilde A_j\ (j = 0, 1, \dots, n)$ to be estimated and the dependent sequence $\tilde Y_t$ are all $(\cdot, c)$ fuzzy numbers, $e_t$ is an error, and $t$ denotes the benchmark time. Assume $\tilde Y_t$ and $\tilde A_j\ (j = 0, 1, \dots, n)$ are all convex, normalized fuzzy numbers. The expansion principle and the conception of fuzzy numbers appear in [Cao89b].

Definition 2.2.1. Let $f: R^{n+1} \to F(y)$ be a fuzzy function $\tilde Y_t = f(Y_{t-j}, \tilde A)\ (j = 1, 2, \dots, n)$, where $Y_{t-j} \in R$, $\tilde A$ is a fuzzy set, and $F(y)$ represents all fuzzy subsets on $R$; the membership function of $\tilde Y_t$ is
$$\mu_{\tilde Y_t}(y) = \begin{cases} \max_{\{a \mid y = f(y_{t-j}, a)\}} \mu_{\tilde A}(a), & \{a \mid y = f(y_{t-j}, a)\} \ne \emptyset, \\ 0, & \text{otherwise}. \end{cases}$$

Definition 2.2.2. The fuzzy self-regression parameter $\tilde A$ is defined by the Cartesian product set
$$\tilde A = \tilde A_0 \times \tilde A_1 \times \cdots \times \tilde A_n$$
on the Cartesian product space $R^{n+1}$. The membership function of $\tilde A_j$ is
$$\mu_{\tilde A_j}(a_j) = \begin{cases} 1 - \dfrac{|\alpha_j - a_j|}{c_j}, & a_j \in [\alpha_j - c_j, \alpha_j + c_j], \\ 0, & \text{otherwise}, \end{cases}$$
where $a = \prod_{j=0}^n a_j$, $\tilde A_j = (\alpha_j, c_j)\ (j = 0, 1, \dots, n)$, $\alpha_j$ is the mean value of $\tilde A_j$, and $c_j > 0$ is the width of $\tilde A_j$.

Proposition 2.2.1. The fuzzy self-regression model is
$$\tilde Y_t = \tilde A_0 + \sum_{j=1}^n \tilde A_jY_{t-j} = \tilde AY = (\alpha^TY, c^TY), \qquad (2.2.3)$$
where $Y = (1, Y_{t-1}, \dots, Y_{t-n})^T$, $\alpha = (\alpha_0, \alpha_1, \dots, \alpha_n)^T$, $c = (c_0, c_1, \dots, c_n)^T$, and the membership function of $\tilde Y_t$ is

$$\mu_{\tilde Y_t}(y) = \begin{cases} 1 - \dfrac{|y - \alpha^TY|}{c^T|Y|}, & Y \ne 0, \\ 0, & \text{otherwise}. \end{cases} \qquad (2.2.4)$$

Proof: Applying Definition 2.2.1 and stipulating,
$$\mu_{\tilde Y_t}(y) = \begin{cases} \max_{\{a \mid a^TY = y\}} \mu_{\tilde A}(a), & \{a \mid a^TY = y\} \ne \emptyset, \\ 0, & \text{otherwise} \end{cases} = \begin{cases} \max_{\{a \mid a^TY = y\}} \min_{j}\Big(1 - \dfrac{|\alpha_j - a_j|}{c_j}\Big), & Y \ne 0, \\ 0, & \text{otherwise} \end{cases} = (2.2.4).$$

A decision maker chooses a threshold value $H_0$. If the degree of fitting $H$ between the forecast data and the estimation value tallies with $\max H \ge H_0$ such that
$$Y_{tH_0}^* = \{y \mid \mu_{Y_t^*}(y) \ge H_0\},$$
then we attain the best estimation of the dependent variable of (2.2.3). The approximate indicator of $H$ is shown in Figure 2.2.1.

[Figure: membership triangle through $A(Y_t - e_t, 0)$, $B(Y_t, 1)$, $C(\alpha_0 + \alpha_iY_{t-i} - c_0 - \sum_{j=1}^n c_j|Y_{(t-j)i}|, 0)$, and $D(\alpha_0 + \alpha_iY_{t-i}, 1)$, with the level $H$ marked.]
Fig. 2.2.1. The Approximate Indicator of H

Theorem 2.2.1. Let the fuzzy self-regression model be (2.2.2). Then
$$\max H \ge H_0 \qquad (2.2.5)$$


$$\iff \begin{cases} -\alpha_0 - \alpha_iY_{t-i} + (1 - H_0)\Big(c_0 + \sum_{j=1}^n c_j|Y_{(t-j)i}|\Big) \ge -Y_t + (1 - H_0)e_t, & (2.2.6) \\[4pt] \alpha_0 + \alpha_iY_{t-i} + (1 - H_0)\Big(c_0 + \sum_{j=1}^n c_j|Y_{(t-j)i}|\Big) \ge Y_t + (1 - H_0)e_t. & (2.2.7) \end{cases}$$

Proof: From Figure 2.2.1, at $Y_t - \alpha_iY_{t-i} \le 0$ the line segments $AB$ and $CD$ are, respectively,
$$\begin{cases} x = e_t(y - 1) + Y_t, \\[4pt] y = \dfrac{x - (\alpha_0 + \alpha_iY_{t-i}) + c_0 + \sum_{j=1}^n c_j|Y_{(t-j)i}|}{c_0 + \sum_{j=1}^n c_j|Y_{(t-j)i}|}, \end{cases} \iff H_0 = 1 - \frac{\alpha_0 + \alpha_iY_{t-i} - Y_t}{c_0 + \sum_{j=1}^n c_j|Y_{(t-j)i}| - e_t},$$
where $e_t = y_t - \hat y_t$ represents a deviation and $e_t = 0$ is the non-fuzzy state. Combining with (2.2.5), we obtain (2.2.6). When $y_t - \alpha_iY_{t-i} \ge 0$, (2.2.7) follows by the same method.

If we define $J = \sum_{j=1}^n c_j|Y_{(t-j)i}|$ as the fuzzy degree of (2.2.2), finding the parameters $\tilde A_j\ (j = 0, 1, \dots, n)$ is changed into solving the optimal solution of
$$\min\Big\{J = \sum_{j=1}^n c_j|Y_{(t-j)i}| \ \Big|\ (2.2.6), (2.2.7)\Big\}. \qquad (2.2.8)$$

Algorithm Steps
The modeling steps for (2.2.2) are summed up as follows.

Step 1. From the observation data, work out a self-dependent (autocorrelation) sequence table.

Step 2. By
$$r_j = \frac{N\sum_{i=1}^N Y_{(t-j)i}Y_{ti} - \sum_{i=1}^N Y_{(t-j)i}\sum_{i=1}^N Y_{ti}}{\sqrt{\Big[N\sum_{i=1}^N Y_{(t-j)i}^2 - \big(\sum_{i=1}^N Y_{(t-j)i}\big)^2\Big]\Big[N\sum_{i=1}^N Y_{ti}^2 - \big(\sum_{i=1}^N Y_{ti}\big)^2\Big]}} \quad (j = 1, 2, \dots, n),$$
calculate the self-dependent coefficients, where time moves backward by $i$.
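The coefficient $r_j$ in Step 2 is simply the sample correlation between the series and its $j$-lagged copy. A minimal sketch (the function name is mine):

```python
import math

def lag_correlation(series, j):
    """Self-dependent coefficient r_j of Step 2: the sample correlation
    between Y_t and its j-step-lagged copy Y_{t-j}."""
    y  = series[j:]    # Y_t
    yl = series[:-j]   # Y_{t-j}
    N = len(y)
    sy, syl = sum(y), sum(yl)
    num = N * sum(a * b for a, b in zip(yl, y)) - syl * sy
    den = math.sqrt((N * sum(a * a for a in yl) - syl ** 2)
                    * (N * sum(b * b for b in y) - sy ** 2))
    return num / den

data = [1.0, 2.1, 2.9, 4.2, 5.1, 5.9, 7.2]
r1 = lag_correlation(data, 1)  # close to 1 for a near-linear trend
```

Step 3 then picks the lag $\kappa$ with the largest $r_j$ as the dominant term of the model.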


Step 3. Determine $\tilde Y_t = \tilde A_0 + \sum_{j=1}^n \tilde A_jY_{(t-j)i}$ to be the best fuzzy self-regression forecast model by taking $r_\kappa = \max\{r_j \mid j = 1, 2, \dots, n\}$. Then, according to Theorem 2.2.1, solve for $\tilde A_j$ and obtain the self-regression equation
$$\hat y_t = (y_t, e_t) = \tilde A_0 + \sum_{j=1}^n \tilde A_jY_{t-j}.$$

Step 4. Decision. Let $\hat y_t = 0.618(y_t + e_t) + 0.382(y_t - e_t)$, and define
$$\mathrm{RIC} = \sqrt{\frac{\sum_{i=1}^N (\hat y_{ti} - Y_{ti})^2}{\sum_{i=1}^N Y_{ti}^2}}, \quad \mathrm{RIC} \in [0, \infty).$$
The closer RIC approaches zero, the higher the precision of the forecast; $\mathrm{RIC} = 0$ stands for a perfect forecast.

Step 5. Forecast. Let
$$\tilde Y_{t+q} = \tilde A_0 + \sum_{j=1}^n \tilde A_jY_{t+q-j}.$$
Then the state $q$ moments ahead can be forecasted, and the range of the forecast value is estimated to be
$$Y_{t+q}^* \in [Y_{t+q} - (1 - H_0)e_{t+q},\ Y_{t+q} + (1 - H_0)e_{t+q}].$$
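Steps 4 and 5 can be sketched as two small helpers (a sketch only; the names are mine):

```python
import math

def ric(y_hat, y):
    """RIC accuracy index of Step 4: sqrt(sum (y_hat - y)^2 / sum y^2)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y_hat, y))
                     / sum(b * b for b in y))

def forecast_interval(y_center, e, H0):
    """Step 5 forecast range [y - (1-H0) e, y + (1-H0) e]."""
    return (y_center - (1 - H0) * e, y_center + (1 - H0) * e)

perfect = ric([2.0, 4.0], [2.0, 4.0])     # 0.0: a perfect forecast
band = forecast_interval(10.0, 2.0, 0.5)  # (9.0, 11.0)
```

A higher threshold $H_0$ narrows the band, reflecting a stricter fitting requirement.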

2.2.3 Conclusion

In 1992, the author of [Yin92] advanced a fuzzy least-squares identification method using the models of [Cao89b][Cao90], called fuzzy least-squares systems. At the southern maintenance section of the Zhengzhou Railroad Bureau, we analyzed spectrum data samples of lubricating oil from BJ-type diesel locomotives. Of 200 BJ-type locomotives, 50 sets were diagnosed randomly by the fuzzy least-squares system set up by the author: the abnormal wear positions identified were generally exact, as were the diagnoses of overall state and breakdown positions. Moreover, the correct rate doubled that of the critical-value or regression-control methods. From this point, breakdowns can be diagnosed without disassembling the diesel engines, so the economic benefit acquired is beyond estimation because of the method's convenience and practicality.


2.3 Exponential Model with Fuzzy Parameters

2.3.1 Introduction

Consider the model by Lenz, Isenson, and Hartman, in which the volume of information increases with time and related factors; changed into a forecasting-technique function, it is concluded as the mathematical model
$$\dot Y_t = kY_t \quad (k > 0), \qquad (2.3.1)$$
where $Y_t$ is a characteristic parameter, $t$ is time, $k$ a proportionality constant, and $\dot Y_t$ a relative increase rate; the solution of equation (2.3.1) is the exponential $Y_t = Y_0e^{kt}$. Because the characteristic technology of long-distance telephony tallies with an exponential regularity, we consider the more general exponential model
$$\hat Y(t) = A_1A_2^t, \qquad (2.3.2)$$
where $A_1, A_2$ are parameters to be estimated and $\hat Y(t)$ denotes the estimated telephone volume in year $t$. Telephone volume fluctuates with various indeterminable factors; if we assume that the parameters awaiting estimation in (2.3.2) are fuzzy numbers, the model will contain more information. Below, we fuzzify the parameters of model (2.3.2) based on Zadeh's fuzzy set theory [Zad65a], establish a forecast model of exponential type with fuzzy parameters, and study the application of this model through a practical example.

2.3.2 Exponential Model with Fuzzy Coefficients

Definition 2.3.1. Suppose $F(R)$ is the set of all fuzzy parameters and $\tilde A_i \in F(R)\ (i = 1, 2)$. We have
$$\tilde Y(t) = \tilde A_1\tilde A_2^t, \qquad (2.3.3)$$
where $\tilde A_1, \tilde A_2$ take flexibly fixed values in the closed intervals $[A_1^-, A_1^+]$ and $[A_2^-, A_2^+]$, respectively, with $A_1^- < A_1^+$, $A_2^- < A_2^+$, all real numbers; $\tilde Y(t)$ denotes the fuzzy telephone volume and $t$ denotes time. We call (2.3.3) an exponential model with fuzzy parameters. Next, solutions to Model (2.3.3) are introduced.

1° Nonfuzzification
Theorem 2.3.1. If the membership function $\varphi: R \to [0, 1]$ is continuous and strictly monotone, then the inverse function $\varphi^{-1}$ exists, such that $\varphi(\tilde A_j) \ge \alpha \Rightarrow \tilde A_j \ge \varphi^{-1}(\alpha)$, $\alpha \in [0, 1]\ (j = 1, 2)$.
Proof: From the definition of the $\alpha$-cut set, the theorem appears obviously. Let $\varphi(\tilde A_j)$ be as in (1.5.3). If $\varphi(\tilde A_j) \ge \alpha$, $\alpha \in [0, 1]$, then
$$\frac{\tilde A_j - A_j^-}{A_j^+ - A_j^-} \ge \alpha \Rightarrow \tilde A_j - A_j^- \ge \alpha(A_j^+ - A_j^-) \Rightarrow \tilde A_j \ge A_j^- + \alpha(A_j^+ - A_j^-) \quad (j = 1, 2).$$


Take
$$\tilde A_1 \to A_1^- + \alpha(A_1^+ - A_1^-), \qquad \tilde A_2 \to A_2^- + \alpha(A_2^+ - A_2^-).$$
Putting them into (2.3.3), $\tilde Y(t) \to Y(t, \alpha)$, and (2.3.3) becomes the crisp model
$$\hat Y(t, \alpha) = [A_1^- + \alpha(A_1^+ - A_1^-)][A_2^- + \alpha(A_2^+ - A_2^-)]^t, \quad \alpha \in [0, 1]. \qquad (2.3.4)$$
It is testified.

2° Linearizing
Let $A = A_1^- + \alpha(A_1^+ - A_1^-)$, $B = A_2^- + \alpha(A_2^+ - A_2^-)$. Then (2.3.4) changes into
$$\hat Y(t, \alpha) = AB^t. \qquad (2.3.5)$$
Linearizing (2.3.5) by taking logarithms, we get
$$\ln\hat Y(t, \alpha) = \ln A + t\ln B. \qquad (2.3.6)$$

3° Estimating the parameters
Next we estimate the parameters $A$ and $B$.
Theorem 2.3.2. For the given sample set $\{Y(t_1, \alpha), Y(t_2, \alpha), \dots, Y(t_N, \alpha)\}$, $\alpha \in [0, 1]$, the least-squares estimators of the parameters $A, B$ with variable $\alpha$ are
$$\hat A = \exp\Big\{\frac{\sum_{k=1}^{N-1}[t_{k+1}\ln Y(t_k, \alpha) - t_k\ln Y(t_{k+1}, \alpha)]\,\Delta t_k}{\sum_{k=1}^{N-1}\Delta t_k^2}\Big\}, \qquad (2.3.7)$$
$$\hat B = \exp\Big\{\frac{\sum_{k=1}^{N} t_k\ln Y(t_k, \alpha) - \ln\hat A\sum_{k=1}^{N} t_k}{\sum_{k=1}^{N} t_k^2}\Big\}, \qquad (2.3.8)$$
where $\Delta t_k = t_{k+1} - t_k$.

Proof: a) Because the sample set $\{Y(t_1, \alpha), \dots, Y(t_N, \alpha)\}$ maps to $\{\ln Y(t_1, \alpha), \dots, \ln Y(t_N, \alpha)\}$, for the given sample points $\{\ln Y(t_k, \alpha)\}\ (k = 1, 2, \dots, N)$, $\alpha \in [0, 1]$, we take two arbitrary neighboring sample points $t_k$ and $t_{k+1}\ (k = 1, 2, \dots, N - 1)$ into consideration in (2.3.6):
$$\ln\hat Y(t_k, \alpha) = \ln A + t_k\ln B, \qquad (2.3.9)$$
$$\ln\hat Y(t_{k+1}, \alpha) = \ln A + t_{k+1}\ln B. \qquad (2.3.10)$$


Computing $(2.3.9) \times t_{k+1} - (2.3.10) \times t_k$, we obtain
$$(t_{k+1} - t_k)\ln A = t_{k+1}\ln\hat Y(t_k, \alpha) - t_k\ln\hat Y(t_{k+1}, \alpha). \qquad (2.3.11)$$
b) Applying the least-squares method, we build an objective function from (2.3.11):
$$J_1 = \sum_{k=1}^{N-1}[t_{k+1}\ln Y(t_k, \alpha) - t_k\ln Y(t_{k+1}, \alpha) - (t_{k+1} - t_k)\ln A]^2.$$
Combining with (2.3.9), we build another objective function by the least-squares method:
$$J_2 = \sum_{k=1}^{N}[\ln Y(t_k, \alpha) - \ln\hat Y(t_k, \alpha)]^2 = \sum_{k=1}^{N}[\ln Y(t_k, \alpha) - (\ln A + t_k\ln B)]^2.$$
To extract the minima of $J_1$ and $J_2$, we set $\dfrac{\partial J_1}{\partial\ln A} = 0$, $\dfrac{\partial J_2}{\partial\ln B} = 0$, and write $\Delta t_k = t_{k+1} - t_k$, obtaining
$$\begin{cases} \sum_{k=1}^{N-1}[t_{k+1}\ln Y(t_k, \alpha) - t_k\ln Y(t_{k+1}, \alpha)]\,\Delta t_k = \sum_{k=1}^{N-1}\Delta t_k^2\,\ln A, \\[4pt] 2\sum_{k=1}^{N}[\ln Y(t_k, \alpha) - \ln A - t_k\ln B]\,t_k = 0. \end{cases} \qquad (2.3.12)$$
Solving (2.3.12) gives (2.3.7) and (2.3.8). It is certificated.

4° Test

Obviously, for a certain determined $\alpha$, Model (2.3.5) is determined after the two-step linearization, and so is Model (2.3.3). From this principle, we take two determined values $\alpha_1, \alpha_2 \in [0, 1]$ such that $\hat Y(t_k, \alpha_1) \le \hat Y(t_k, \alpha_2)$, and get
$$\hat Y(t_k, \alpha) = \hat Y(t_k, \alpha_1) + 0.618 \times [\hat Y(t_k, \alpha_2) - \hat Y(t_k, \alpha_1)]. \qquad (2.3.13)$$
Again, from the formulas
$$S = \sqrt{\frac{\sum_{k=1}^N [Y(t_k, \alpha) - \hat Y(t_k, \alpha)]^2}{N}}, \qquad (2.3.14)$$
$$E\% = \frac{1}{N}\sum_{k=1}^N \Big|1 - \frac{Y(t_k, \alpha)}{\hat Y(t_k, \alpha)}\Big| \times 100\%. \qquad (2.3.15)$$


After finding the standard deviation $S$ of the forecasting error and the average relative error percentage $E\%$, we judge the fitting best for forecasting when $S$ and $E\%$ are smaller.

5° Model determination
Theorem 2.3.3. Let $\varphi: R \to [0, 1]$ be a continuous and strictly monotone membership function. Then $(2.3.3) \iff (2.3.6)$.
Proof: From the discussion above, the result is obvious. Substituting $\hat A, \hat B$ into (2.3.6), we obtain the crisp model $\ln\hat Y(t, \alpha) = \ln\hat A + t\ln\hat B$. Because $(2.3.6) \iff (2.3.5)$, hence
$$\hat Y(t_k, \alpha) = \hat A\hat B^{t_k}. \qquad (2.3.16)$$
But $(2.3.5) \iff (2.3.3)$, so that $(2.3.3) \iff (2.3.6)$. It is certificated.

Therefore, we can design a controlling forecast system for telephone volume, with a classical system as a special exception. If the above result strays away from practice, we can obtain $\hat Y(k, \alpha)$ by taking other values of $\alpha$ from $[0, 1]$. But doing so may yield infinitely many values, and it is impossible to calculate an infinitude of $\hat Y$; so we calculate $\hat Y(k, 0)$ by choosing $\alpha = 0$ and compare it with $\hat Y(k, 1)$. If $\hat Y(k, 0)$ is superior to $\hat Y(k, 1)$, then $\hat Y(k, 0)$ is the goal; otherwise we apply the 0.618 method for search until an optimal value of the problem is found. Especially, when $t_k = k\ (k = 1, 2, \dots, N)$, we have $\hat Y(t_k, \alpha) = \hat Y(k, \alpha)$ with $\Delta t_k = t_{k+1} - t_k = 1$, and at this time (2.3.7) and (2.3.8) change into
$$\hat A = \exp\Big\{\frac{\sum_{k=1}^{N-1}\{\ln Y(k, \alpha) - k[\ln Y(k+1, \alpha) - \ln Y(k, \alpha)]\}}{N - 1}\Big\} = \exp\Big\{\frac{2\sum_{k=1}^{N-1}\ln Y(k, \alpha) - (N - 1)\ln Y(N, \alpha)}{N - 1}\Big\}, \qquad (2.3.17)$$
$$\hat B = \exp\Big\{\frac{6\sum_{k=1}^{N} k\ln Y(k, \alpha) - 3N(N + 1)\ln\hat A}{N(N + 1)(2N + 1)}\Big\}. \qquad (2.3.18)$$


The models corresponding to (2.3.16) and (2.3.13) become
$$\hat Y(k, \alpha) = \hat A\hat B^k \qquad (2.3.19)$$
and
$$\hat Y(k, \alpha) = \hat Y(k, \alpha_1) + 0.618 \times [\hat Y(k, \alpha_2) - \hat Y(k, \alpha_1)], \qquad (2.3.20)$$
respectively. Because
$$\hat A = \hat A_1^- + \alpha(\hat A_1^+ - \hat A_1^-), \qquad \hat B = \hat A_2^- + \alpha(\hat A_2^+ - \hat A_2^-), \qquad (2.3.21)$$
we compute the simultaneous equations (2.3.21) with determined $\alpha$; then $\hat A_1^-, \hat A_1^+, \hat A_2^-, \hat A_2^+$ are determined. Now we synthesize again an exponential model
$$\hat Y(k, \alpha) = \hat A_1\hat A_2^k, \qquad (2.3.22)$$
such that an exponential model with fuzzy parameters is obtained:
$$\tilde Y(k) = \tilde A_1\tilde A_2^k. \qquad (2.3.23)$$

2.3.3 Practical Example

Example 2.3.1: The amount of long-distance telephone traffic in China during 1980-1990 is as follows.

Table 2.3.1. Amount of Long-distance Telephone in China

Year | No | Practical data
1980 | 1 | [14940, 21404]
1981 | 2 | [18031, 22049]
1982 | 3 | [21760, 23574]
1983 | 4 | [26262, 26556]
1984 | 5 | [31549, 31553]
1985 | 6 | [38250, 38254]
1986 | 7 | [42299, 42303]
1987 | 8 | [51521, 51525]
1988 | 9 | [64615, 64617]
1989 | 10 | [78458, 78462]
1990 | 11 | [97932, 106291]

We forecast the telephone volume by applying the exponential Model (2.3.23) with fuzzy parameters, taking $\alpha = 1$, with Formula (2.3.22) correspondingly being $\hat Y(k, 1) = \hat A_1^+(\hat A_2^+)^k$. Using (2.3.17) and (2.3.18), we get the parameters $\hat A_1^+ = 12380$, $\hat A_2^+ = 1.2069$. When $\alpha = \alpha_1 = \alpha_2 = 1$, from (2.3.20),
$$\hat Y(k, 1) = 12380 \times 1.2069^k \quad (k = 1, 2, \dots, 11).$$
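The closed-form estimators (2.3.17) and (2.3.18) for equally spaced years $t_k = k$ can be sketched directly; on exactly exponential data $Y(k) = A\,B^k$ they recover $A$ and $B$ up to floating-point error. This is a sketch of the formulas, not the book's code, and the synthetic data below are illustrative:

```python
import math

def fit_exponential(ys):
    """Estimators (2.3.17)/(2.3.18) for Y(k) = A * B**k, k = 1..N."""
    N = len(ys)
    logs = [math.log(v) for v in ys]
    # (2.3.17): ln A from the pairwise-elimination least squares
    ln_a = (2 * sum(logs[:-1]) - (N - 1) * logs[-1]) / (N - 1)
    # (2.3.18): ln B given ln A
    ln_b = (6 * sum(k * logs[k - 1] for k in range(1, N + 1))
            - 3 * N * (N + 1) * ln_a) / (N * (N + 1) * (2 * N + 1))
    return math.exp(ln_a), math.exp(ln_b)

# exactly exponential synthetic data with the fitted values quoted above
data = [12380 * 1.2069 ** k for k in range(1, 12)]
A, B = fit_exponential(data)   # A ~ 12380, B ~ 1.2069
```

On the real interval data of Table 2.3.1 the same routine would be applied to the $\alpha$-cut endpoints, yielding the $\hat A_1^+$, $\hat A_2^+$ reported above.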


Hence, the telephone volume forecast at $\alpha = 1$ is shown in Table 2.3.2.

Table 2.3.2. Amount of Long-distance Telephone at α = 1 in China

Year | No | Practical data
1980 | 1 | 21404
1981 | 2 | 22049
1982 | 3 | 23574
1983 | 4 | 26556
1984 | 5 | 31553
1985 | 6 | 38254
1986 | 7 | 42303
1987 | 8 | 51525
1988 | 9 | 64617
1989 | 10 | 78462
1990 | 11 | 106291

By the standard deviation formula (2.3.14),
$$S = \sqrt{\frac{\sum_{k=1}^{11}[Y(k, 1) - \hat Y(k, 1)]^2}{11}},$$
we obtain $S = 4019$. Again, from the percentage-error formula (2.3.15),
$$E\% = \frac{1}{11}\sum_{k=1}^{11}\Big|1 - \frac{Y(k, 1)}{\hat Y(k, 1)}\Big| \times 100\%,$$
we get an average relative error of 8.21%. By comparison, the geometric average gives $S = 9405$, $E\% = 19.78\%$, and the average-value exponential curve gives $S = 4811$, $E\% = 9.74\%$; the fuzzy exponential forecast method mentioned here is therefore superior to both [Zhe92]. Under a confidence level of 95%, the long-distance telephone volume in China varies within the interval $\hat Y \pm 2S$; the forecast intervals for 1980-1990 are shown below.

Table 2.3.3. Forecast Amount of Long-distance Telephone in China

Year | No | Forecast interval
1980 | 1 | [6910.6, 22972.2]
1981 | 2 | [10002, 26063.6]
1982 | 3 | [13733, 29794.6]
1983 | 4 | [18236, 34297.5]
1984 | 5 | [23670.5, 39732.1]
1985 | 6 | [30229.5, 46291.1]
1986 | 7 | [38145.6, 54207.2]
1987 | 8 | [47699.5, 63761]
1988 | 9 | [59230, 75291.6]
1989 | 10 | [73146.3, 89207.9]
1990 | 11 | [89941.9, 106003.4]

If we select $\alpha \in [0, 1]$ by the 0.618 method and search with (2.3.20), we may acquire a still better result.

2.3.4 Conclusion

The method in this section is an extension of the fuzzy exponential forecast model. We can always change it into a series of determined forecast models for different values $\alpha \in [0, 1]$, and then obtain a forecast value for each linearized model by adopting the two-step least-squares method. Each


forecast value $\hat Y$ fluctuates in the band region bounded by $\hat Y^-$ and $\hat Y^+$, which presents more information when we choose a satisfactory forecast result by the 0.618 method. It is pointed out that the model here can still be expanded to situations with various fuzzy coefficients and even with fuzzy variables [Cao89b][Cao93e][DPr78][TUA82][Wat87][Zad82].
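The 0.618 method used throughout this section is golden-section search over $\alpha$. A generic sketch (the function name and the sample objective are mine; in practice $f$ would be a forecast-error criterion such as $S(\alpha)$):

```python
def golden_search(f, lo=0.0, hi=1.0, tol=1e-6):
    """0.618 (golden-section) search for the minimum of a unimodal f on [lo, hi]."""
    r = 0.618
    x1 = hi - r * (hi - lo)
    x2 = lo + r * (hi - lo)
    f1, f2 = f(x1), f(x2)
    while hi - lo > tol:
        if f1 < f2:
            # minimum lies in [lo, x2]; reuse x1 as the new right probe
            hi, x2, f2 = x2, x1, f1
            x1 = hi - r * (hi - lo)
            f1 = f(x1)
        else:
            # minimum lies in [x1, hi]; reuse x2 as the new left probe
            lo, x1, f1 = x1, x2, f2
            x2 = lo + r * (hi - lo)
            f2 = f(x2)
    return 0.5 * (lo + hi)

alpha_best = golden_search(lambda a: (a - 0.3) ** 2)  # minimum near 0.3
```

Each iteration shrinks the interval by the factor 0.618 while reusing one of the two previous function evaluations, which is why the method is economical for expensive forecast criteria.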

2.4 Regression and Self-regression Models with Flat Fuzzy Coefficients

2.4.1 Basic Properties

Definition 2.4.1. If $\forall x, y, z \in R$ with $x \le y \le z$ satisfy
i) $\mu_{\tilde A}(y) \ge \mu_{\tilde A}(x) \wedge \mu_{\tilde A}(z)$,
ii) $\max_{x \in R} \mu_{\tilde A}(x) = 1$,
then $\tilde A$ is called a convex normal fuzzy number. We also call $A_\alpha = \{x \mid \mu_{\tilde A}(x) \ge \alpha,\ 0 < \alpha \le 1\}$ a platform of the flat fuzzy number $\tilde A$.

Proposition 2.4.1. The flat fuzzy number $\tilde A$ is convex $\iff$ every $A_\alpha\ (0 < \alpha \le 1)$ is an interval.
Proof: "$\Rightarrow$" If $\tilde A$ is convex, then from Definition 2.4.1, $y \in A_\alpha$ whenever $x, z \in A_\alpha$ and $x \le y \le z$; by the arbitrariness of $x, y, z$, $A_\alpha$ is necessarily an interval.
"$\Leftarrow$" If $\forall \alpha \in [0, 1]$, $A_\alpha$ is an interval: consider $x, z \in R$ and let $\alpha_0 = \mu_{\tilde A}(x) \wedge \mu_{\tilde A}(z)$; then $A_{\alpha_0}$ is an interval containing $x$ and $z$. From $x \le y \le z$, $y \in A_{\alpha_0}$, so $\mu_{\tilde A}(y) \ge \alpha_0$; hence $\tilde A$ is convex.
Again from Definition 2.4.1, a flat fuzzy number necessarily satisfies $\max_{x \in R}\mu_{\tilde A}(x) = \mu_{\tilde A}(a) = 1$ for $a \in (A_j^-, A_j^+)$, hence it is a convex normal fuzzy number.

2.4.2 Linear Regression Model with Flat Fuzzy Parameters

We always suppose that $\tilde A_j$ is a convex normal fuzzy number. Consider
$$\tilde Y = \tilde A_1x_1 + \tilde A_2x_2 + \cdots + \tilde A_nx_n = \tilde Ax, \qquad (2.4.1)$$
where $\tilde A = (\tilde A_1, \tilde A_2, \dots, \tilde A_n)$ and $x = (x_1, x_2, \dots, x_n)^T$, which is a linear regression model. In the model, we call $y_j^* = (A_j^{*-}, A_j^{*+})x_j\ (j = 1, 2, \dots, n)$ a regression value, $y_j = (A_j^-, A_j^+)x_j$ an observation value, and $y_j - y_j^* = \varepsilon_j$ an observation error, a random variable with zero as main value, with $A_j^{*-} = A_j^- \pm \varepsilon_j$ and $A_j^{*+} = A_j^+ \pm \varepsilon_j$.

Definition 2.4.2. Suppose $f: x \to F(y)$ denotes the fuzzy function $\tilde Y = f(x, \tilde A)$, where $x \in R$ and $F(y)$ is a fuzzy-valued set; the membership function of $\tilde Y$ is

$$\mu_{\tilde Y}(y) = \begin{cases} \max_{\{a \mid y = f(x, a)\}} \mu_{\tilde A}(a), & \{a \mid y = f(x, a)\} \ne \emptyset, \\ 0, & \text{otherwise}. \end{cases}$$

Definition 2.4.3. Suppose the quadruple parameter is a flat fuzzy number $\tilde A_j = (A_j^-, A_j^+, \sigma_{A_j}^-, \sigma_{A_j}^+)$; then its membership function $\mu_{\tilde A_j}(a_j)$ is defined as
$$\mu_{\tilde A_j}(a_j) = \begin{cases} 1 - \dfrac{A_j^- - a_j}{\sigma_{A_j}^-}, & A_j^- - \sigma_{A_j}^- \le a_j < A_j^-, \\[4pt] 1, & A_j^- \le a_j \le A_j^+, \\[4pt] 1 - \dfrac{a_j - A_j^+}{\sigma_{A_j}^+}, & A_j^+ < a_j \le A_j^+ + \sigma_{A_j}^+, \\[4pt] 0, & \text{otherwise}. \end{cases}$$

Proposition 2.4.2. Suppose the regression coefficient $\tilde A = (A^-, A^+, \sigma_A^-, \sigma_A^+)$ is a flat fuzzy number; then the membership function of (2.4.1) is
$$\mu_{\tilde Y}(y) = \begin{cases} 1 - \dfrac{A^-x - y}{\sigma_A^-x}, & (A^- - \sigma_A^-)x \le y < A^-x, \\[4pt] 1, & A^-x \le y \le A^+x, \\[4pt] 1 - \dfrac{y - A^+x}{\sigma_A^+x}, & A^+x < y \le (A^+ + \sigma_A^+)x, \\[4pt] 0, & \text{otherwise}, \end{cases} \qquad (2.4.2)$$
where $x = (x_1, x_2, \dots, x_n)^T$. Proof:

$$\mu_{\tilde Y}(y) = \begin{cases} \max_{\{a \mid a^Tx = y\}} \mu_{\tilde A}(a), & \{a \mid a^Tx = y\} \ne \emptyset, \\ 0, & \text{otherwise} \end{cases} = \begin{cases} \max_{\{a \mid a^Tx = y\}} \min_{1 \le j \le n} \mu_{\tilde A_j}(a_j), & \{a \mid a^Tx = y\} \ne \emptyset, \\ 0, & \text{otherwise} \end{cases} = (2.4.2).$$
The proposition holds.

Suppose the fuzzy linear regression model $\tilde Y_i^* = \tilde A_1^*x_{i1} + \tilde A_2^*x_{i2} + \dots + \tilde A_n^*x_{in} = \tilde A^*x_i\ (i = 1, 2, \dots, N)$, where $\tilde A^* = (\tilde A_1^*, \tilde A_2^*, \dots, \tilde A_n^*)$ and $x_i = (x_{i1}, x_{i2}, \dots, x_{in})^T$. Then the membership function of $\tilde Y_i^*$ is given by


$$\mu_{\tilde Y_i^*}(y) = 1 - \frac{|y_i - A_i^{*\pm}x_i|}{\sigma_{A_i^{*\pm}}|x_i|};$$
its degree of fitting to the given data $Y_i = (y_i, \varepsilon_i)$ is measured by the following index $h_l\ (l = 1, 2)$, which maximizes $h$ subject to $Y_{ih} \subset Y_{ih}^*\ (i = 1, 2, \dots, N)$, where
$$Y_{ih} = \{y \mid \mu_{\tilde Y_i}(y) \ge h\}, \qquad Y_{ih}^* = \{y \mid \mu_{\tilde Y_i^*}(y) \ge h\}$$
are $h$-level sets.

The index $\bar h$ is illustrated in Figure 2.4.1.

[Figure: trapezoidal membership with core $[A^-x_i, A^+x_i]$, spreads reaching $A^-x_i - \sigma_A^-x_i$ and $A^+x_i + \sigma_A^+x_i$, and the levels $h_1$, $h_2$ marked.]
Fig. 2.4.1. Illustration for Membership Function of Regression Coefficient $\tilde A$
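The trapezoidal membership of Figure 2.4.1 (core $[A^-, A^+]$ with left and right spreads $\sigma^-$, $\sigma^+$, as in the quadruple definition above) can be sketched as follows; the function and parameter names are mine:

```python
def flat_membership(a, lo, hi, sl, sr):
    """Membership of the flat fuzzy number (A-, A+, sigma-, sigma+):
    1 on the core [lo, hi], linear on the spreads, 0 outside."""
    if lo <= a <= hi:
        return 1.0
    if lo - sl <= a < lo:
        return 1.0 - (lo - a) / sl
    if hi < a <= hi + sr:
        return 1.0 - (a - hi) / sr
    return 0.0

# core [2, 4], left spread 1, right spread 2
vals = [flat_membership(a, 2.0, 4.0, 1.0, 2.0) for a in (1.0, 1.5, 3.0, 5.0, 7.0)]
# -> [0.0, 0.5, 1.0, 0.5, 0.0]
```

The $h$-level set of such a number is always an interval, which is exactly the platform property of Proposition 2.4.1.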

The fitting degree of a fuzzy linear regression model to all the data $Y_1, Y_2, \dots, Y_N$ is defined by $\min_l\{h_l\}$.

Definition 2.4.4. Use
$$J^{(1)} = \sigma_{A_1}^-x_{i1} + \sigma_{A_2}^-x_{i2} + \dots + \sigma_{A_n}^-x_{in}, \qquad J^{(2)} = \sigma_{A_1}^+x_{i1} + \sigma_{A_2}^+x_{i2} + \dots + \sigma_{A_n}^+x_{in} \quad (i = 1, 2, \dots, N)$$
to denote the fuzzy degrees of Model (2.4.1) in the left and right shapes, respectively. The problem is explained as obtaining fuzzy parameters $\tilde A^*$ that minimize $(J^{(1)}, J^{(2)})$ subject to $\bar h_l \ge h$ for all $l$, where $h = (h_1, h_2)$ is a degree of fitting of the fuzzy linear model chosen by decision makers. Theorem 2.4.1. Suppose the model with flat fuzzy data is (2.4.1); then

$$h = (\min h_1, \min h_2)^T \ge (h_1, h_2)^T \qquad (2.4.3)$$
$$\iff \begin{cases} A^-x_i + (1 - h_1)\sum_{j=1}^n \sigma_{A_j}^-|x_{ij}| \ge y_i^- + (1 - h_1)\varepsilon_i^-, \\[4pt] -A^-x_i + (1 - h_1)\sum_{j=1}^n \sigma_{A_j}^-|x_{ij}| \ge -y_i^- + (1 - h_1)\varepsilon_i^- \end{cases} \qquad (2.4.4)$$

and

$$\begin{cases} A^+x_i + (1 - h_2)\sum_{j=1}^n \sigma_{A_j}^+|x_{ij}| \ge y_i^+ + (1 - h_2)\varepsilon_i^+, \\[4pt] -A^+x_i + (1 - h_2)\sum_{j=1}^n \sigma_{A_j}^+|x_{ij}| \ge -y_i^+ + (1 - h_2)\varepsilon_i^+. \end{cases} \qquad (2.4.5)$$

Proof: As shown in Figure 2.4.1, because $\triangle ABH \sim \triangle AEG$,
$$\frac{v}{\varepsilon_i^-} = \frac{1 - h_1}{1} \implies v = \varepsilon_i^-(1 - h_1).$$
But $k = v + HI = v + |y_i^- - A^-x_i|$; again $\triangle CDI \sim \triangle EDF$, hence
$$\frac{k}{\sum_{j=1}^n \sigma_{A_j}^-|x_{ij}|} = \frac{1 - h_1}{1} \implies 1 - h_1 = \frac{\varepsilon_i^-(1 - h_1) + |y_i^- - A^-x_i|}{\sum_{j=1}^n \sigma_{A_j}^-|x_{ij}|},$$
therefore
$$h_1 = 1 - \frac{|y_i^- - A^-x_i|}{\sum_{j=1}^n \sigma_{A_j}^-|x_{ij}| - \varepsilon_i^-}. \qquad (2.4.6)$$
In the same way we get
$$h_2 = 1 - \frac{|y_i^+ - A^+x_i|}{\sum_{j=1}^n \sigma_{A_j}^+|x_{ij}| - \varepsilon_i^+}. \qquad (2.4.7)$$
Combining (2.4.3) with (2.4.6) and (2.4.7),
$$1 - \frac{|y_i^- - A^-x_i|}{\sum_{j=1}^n \sigma_{A_j}^-|x_{ij}| - \varepsilon_i^-} \ge h_1, \qquad 1 - \frac{|y_i^+ - A^+x_i|}{\sum_{j=1}^n \sigma_{A_j}^+|x_{ij}| - \varepsilon_i^+} \ge h_2,$$
so (2.4.4) and (2.4.5) are established, and the theorem is certificated.

Our problem is to determine the parameters $\tilde A_j^* = (A_j^-, A_j^+, \sigma_{A_j}^-, \sigma_{A_j}^+)$ of (2.4.1), that is, to find the minimum values of $J^{(1)}$ and $J^{(2)}$ under the constraint $h \ge (h_1, h_2)^T$, by solving the classical parameter programming as follows:

$$\min J^{(1)} \quad \begin{cases} \text{s.t. } (2.4.4), \\ \sigma_{A_j}^- \ge 0,\ h_1 \in [0, 1], \\ (j = 1, 2, \dots, n), \end{cases} \qquad \text{and} \qquad \min J^{(2)} \quad \begin{cases} \text{s.t. } (2.4.5), \\ \sigma_{A_j}^+ \ge 0,\ h_2 \in [0, 1], \\ (j = 1, 2, \dots, n). \end{cases} \qquad (2.4.8)$$


A simplex method or a dual simplex method solves their optimal solutions easily. Obviously, in (2.4.8) the constraint condition of each problem, that is, (2.4.4) or (2.4.5), contains $2n$ constraints, a number larger than the number of variables; so it is much easier to change them into the dual form and then find the optimal parameter solutions $A_j^-, \sigma_{A_j}^-;\ A_j^+, \sigma_{A_j}^+$. Synthesizing them in sequence into a flat fuzzy number, recorded as $\tilde A_j = (A_j^-, A_j^+, \sigma_{A_j}^-, \sigma_{A_j}^+)\ (j = 1, 2, \dots, n)$, the fuzzy parameters of (2.4.1) are thus acquired.

2.4.3 Precise Examination of Model and Modeling Method

For given data, a best-fitting model can be obtained by solving the classical parameter programming (2.4.8). Below we determine a judgement method for measuring the accuracy of the forecast model.

Definition 2.4.5. Suppose the fuzzy regression value of (2.4.1) is $\hat{\tilde y}_i^* = (y_i^{-*}, y_i^{+*}, \varepsilon_i^{-*}, \varepsilon_i^{+*})$ and the actually measured value is denoted by $y_i$; then
$$\mathrm{RIC} = \sqrt{\frac{\sum_{i=1}^N (y_i^* - y_i)^2}{\sum_{i=1}^N y_i^2}} \qquad (2.4.9)$$
is an accuracy-degree measure level for model (2.4.1), with $\mathrm{RIC} \in [0, \infty)$.
1) At $\mathrm{RIC} = 0$, the forecast is perfect.
2) The more RIC approaches zero, the nearer $y_i^*$ tends to $y_i$; it means a better prediction.
According to the theories of the optimization method, $y_i^*$ and $y_i$ in (2.4.9) are defined by
$$y_i^* = (y_i^{-*} - \varepsilon_i^{-*}) \times 0.382 + (y_i^{+*} + \varepsilon_i^{+*}) \times 0.618, \qquad y_i = (y_i^- - \varepsilon_i^-) \times 0.382 + (y_i^+ + \varepsilon_i^+) \times 0.618.$$
After the model passes the prediction examination of (2.4.9), it can be put into forecasting formally. Suppose the forecast regression value acquired is $y_{i+p}^* = (y_{i+p}^-, y_{i+p}^+, \varepsilon_{i+p}^-, \varepsilon_{i+p}^+)$; take the threshold value $h_0\ (h_0 = h_1 \vee h_2)$, and then
$$y_{i+p}^{-*} = y_{i+p}^- - \varepsilon_{i+p}^-(1 - h_0), \qquad y_{i+p}^{+*} = y_{i+p}^+ + \varepsilon_{i+p}^+(1 - h_0). \qquad (2.4.10)$$
Hence $y_{i+p}^* = [y_{i+p}^{-*}, y_{i+p}^{+*}]$ is the found forecast value range of model (2.4.1).

Hereby we can acquire the steps of modeling.
I. Substitute the collected data (ordinarily real data) into (2.4.1); according to Theorem 2.4.1 and Definition 2.4.4, convert again into the ordinary linear programming (2.4.8) with parameter variables.
II. Solve the two linear programs with parameter variables in problem (2.4.8), respectively; an optimal parameter solution of (2.4.8) is found, that is, the fuzzy regression parameters of (2.4.1) are determined.

2.4 Regression and Self-regression Models with Flat Fuzzy Coeﬃcients


III. Give a series of data; the best fitting model is confirmed, making the precise examination by (2.4.9).
IV. Forecast.
Let $Y_k = \sum_{i=1}^N \tilde A_i x_{ik}$. Then we can forecast the status at time $k$. Using (2.4.10) again, we can ascertain the range of the forecast value.

2.4.4 Self-regression Forecasting Model with Flat Fuzzy Parameters

According to the theory of the fuzzy linear regression model in the section above, we can follow Ref. [Cao89b] and induce a fuzzy time-series model with flat fuzzy numbers:
\[ \tilde Y_t = \tilde A_1 Y_{t-1} + \tilde A_2 Y_{t-2} + \cdots + \tilde A_n Y_{t-n}. \tag{2.4.11} \]

Definition 2.4.6. Model (2.4.11) is called an n-order self-regression model with flat fuzzy parameters, where $\tilde Y_t = (Y_t^-, Y_t^+, \sigma_t^-, \sigma_t^+)$.

According to the observation data $Y_{(t-j)i}$ $(i = 1, 2, \dots, N;\ j = 1, 2, \dots, n)$, all ordinary real numbers, compute from the formula
\[ \gamma_j = \frac{N\sum_{i=1}^N Y_{(t-j)i} Y_{ti} - \sum_{i=1}^N Y_{(t-j)i} \sum_{i=1}^N Y_{ti}}{\sqrt{\Big[N\sum_{i=1}^N Y_{(t-j)i}^2 - \big(\sum_{i=1}^N Y_{(t-j)i}\big)^2\Big]\Big[N\sum_{i=1}^N Y_{ti}^2 - \big(\sum_{i=1}^N Y_{ti}\big)^2\Big]}} \tag{2.4.12} \]
the self-related coefficient for each backward shift of $j$ quarters. If we take $\gamma_q = \max\{\gamma_j \mid j = 1, 2, \dots, n\}$, then the model confirmed by $\tilde Y_t = \sum_{j=1}^n \tilde A_j Y_{(t-j)}$ with lag $q$ is optimal.
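The lag-selection coefficient (2.4.12) is an ordinary Pearson correlation between the series and its backward-shifted copy, maximized over the shift. A small stdlib sketch (the quarterly figures below are made up for illustration; this is our code, not the book's):

```python
from math import sqrt

def self_corr(series, lag):
    """Pearson correlation between Y_t and Y_{t-lag}, as in (2.4.12)."""
    y = series[lag:]            # Y_t
    x = series[:-lag]           # Y_{t-lag}
    n = len(y)
    sx, sy = sum(x), sum(y)
    num = n * sum(a * b for a, b in zip(x, y)) - sx * sy
    den = sqrt((n * sum(a * a for a in x) - sx ** 2) *
               (n * sum(b * b for b in y) - sy ** 2))
    return num / den

# Made-up quarterly sales with a clear seasonal (period-4) pattern:
sales = [23, 11, 11, 15, 25, 11, 12, 16, 26, 12, 14, 18]
gammas = {j: self_corr(sales, j) for j in (1, 2, 3, 4)}
best = max(gammas, key=lambda j: gammas[j])
print(best)  # the lag with the largest self-correlation
```

For this seasonal series the largest coefficient occurs at the 4-quarter shift, so a model built on a 4-quarter backward shift would be chosen.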

Theorem 2.4.2. Suppose the n-order fuzzy self-regression model is (2.4.11). Then $\min H_m$, $\beta_m \in [0,1]$ $(m = 1, 2)$, is equivalent to
\[ \begin{cases} A_j^{-T} Y_{t-i} + (1-\beta_1)\sum_{j=1}^n \sigma_{A_j}^- |Y_{(t-j)i}| \ge Y_t^- + (1-\beta_1)e_t^-, \\[2pt] -A_j^{-T} Y_{t-i} + (1-\beta_1)\sum_{j=1}^n \sigma_{A_j}^- |Y_{(t-j)i}| \ge -Y_t^- + (1-\beta_1)e_t^- \end{cases} \tag{2.4.13} \]
and
\[ \begin{cases} A_j^{+T} Y_{t-i} + (1-\beta_2)\sum_{j=1}^n \sigma_{A_j}^+ |Y_{(t-j)i}| \ge Y_t^+ + (1-\beta_2)e_t^+, \\[2pt] -A_j^{+T} Y_{t-i} + (1-\beta_2)\sum_{j=1}^n \sigma_{A_j}^+ |Y_{(t-j)i}| \ge -Y_t^+ + (1-\beta_2)e_t^+. \end{cases} \tag{2.4.14} \]

Proof: In Theorem 2.4.1 one needs only to change $y_i^-, y_i^+$ into $Y_t^-, Y_t^+$ and $x_i, x_{ij}$ into $Y_{t-i}, Y_{(t-j)i}$, respectively. By a proof similar to that of Theorem 2.4.1, the theorem holds.


Definition 2.4.7. The fuzzy degrees of the left and right shapes in Model (2.4.11) are denoted by
\[ s_1 = \sigma_{A_1}^- Y_{(t-j)_1} + \sigma_{A_2}^- Y_{(t-j)_2} + \cdots + \sigma_{A_n}^- Y_{(t-j)_n}, \qquad s_2 = \sigma_{A_1}^+ Y_{(t-j)_1} + \sigma_{A_2}^+ Y_{(t-j)_2} + \cdots + \sigma_{A_n}^+ Y_{(t-j)_n}. \]
Then the determination of the self-regression forecasting model (2.4.11) with flat fuzzy parameters comes to finding $\min s_m$ $(m = 1, 2)$ for arbitrary $m$ under $(\beta_1, \beta_2)^T$, that is, to finding an optimal solution to the ordinary parametric programs
\[ \min s_1 \quad \text{s.t. } (2.4.13),\; \sigma_{A_i}^- \ge 0,\; \beta_1 \in [0,1] \; (i = 1, \dots, N), \]
\[ \min s_2 \quad \text{s.t. } (2.4.14),\; \sigma_{A_i}^+ \ge 0,\; \beta_2 \in [0,1] \; (i = 1, \dots, N), \tag{2.4.15} \]
where $\beta_1, \beta_2$ are the fitting degrees of the fuzzy self-regression model, for the decision maker to choose.

Obviously, the modeling steps for (2.4.11) can be summarized as follows:
I. Program the self-related sequence table from the collected data.
II. Using (2.4.12), find the self-related coefficients $\gamma$ and choose the forecast model (2.4.11) from $\gamma_q = \max\{\gamma_i \mid i = 1, \dots, N\}$.
III. Find an optimal parameter solution to (2.4.15), thus determining the fuzzy self-regression parameters.
IV. Give a list of data; the optimally fitting model is confirmed, making the accuracy examination at the same time. Suppose
\[ RIC = \sqrt{\frac{\sum_{i=1}^N (Y_{ti} - y_{ti})^2}{\sum_{i=1}^N y_{ti}^2}}, \]
where $Y_{ti} = (Y_{ti}^- - e_{ti}^-) \times 0.382 + (Y_{ti}^+ + e_{ti}^+) \times 0.618$ and $y_{ti} = (y_{ti}^- - e_{ti}^-) \times 0.382 + (y_{ti}^+ + e_{ti}^+) \times 0.618$. This $RIC$ measures the accuracy of forecast Model (2.4.11), with $RIC \in [0, \infty)$. Judge as follows:
1° At $RIC = 0$, the forecast is perfect.
2° The closer $RIC$ is to zero, the higher the forecast precision; otherwise, the lower.
V. Forecast.
Let $\tilde y_{t+q} = \sum_{j=1}^n \tilde A_j Y_{t-(p_j+q)}$. Then we forecast the status at time $q$; its forecasting range is $Y_{t+q}^* = [Y_{t+q}^-, Y_{t+q}^+]$, where $Y_{t+q}^- = y_{t+q}^- - e_{t+q}^-(1 - H_0)$, $Y_{t+q}^+ = y_{t+q}^+ - e_{t+q}^+(1 - H_0)$, and $H_0 = \beta_1 \vee \beta_2$ is a threshold value.
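The RIC examination used in the steps above, together with the 0.382/0.618 defuzzification of flat fuzzy values, can be sketched in a few lines (the fuzzy values below are made up for illustration; this is our sketch, not the book's code):

```python
from math import sqrt

def defuzz(lo, hi, e_lo, e_hi):
    """Point value of a flat fuzzy number by the 0.382/0.618 weighting."""
    return (lo - e_lo) * 0.382 + (hi + e_hi) * 0.618

def ric(fitted, actual):
    """RIC of (2.4.9): root of sum of squared errors over sum of squared actuals."""
    num = sum((f - a) ** 2 for f, a in zip(fitted, actual))
    den = sum(a ** 2 for a in actual)
    return sqrt(num / den)

# Hypothetical fitted fuzzy values (y^-, y^+, eps^-, eps^+) and crisp observations:
fitted = [defuzz(9.8, 10.2, 0.1, 0.1), defuzz(10.7, 11.3, 0.2, 0.1)]
actual = [defuzz(10.0, 10.0, 0.0, 0.0), defuzz(11.0, 11.0, 0.0, 0.0)]
print(ric(fitted, actual))  # small value -> high prediction accuracy
```

A value near zero signals a usable forecast model, matching judgments 1° and 2° above.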


From (2.4.1) and (2.4.11) we know that when the parameter spreads $\sigma_A^-, \sigma_A^+$ and the spreads $e^-, e^+$ of the related fuzzy variables $\tilde Y$ are all zero, (2.4.1) and (2.4.11) degenerate into the classical linear regression and self-regression models.

2.5 Linear Regression with Triangular Fuzzy Numbers

This section presents a new definition of the distance between two triangular fuzzy numbers with respect to their parameter variables, and provides a new method for fuzzy linear regression problems.

2.5.1 Preliminary

In order to study fuzzy linear regression with triangular fuzzy numbers, we introduce some basic knowledge as follows.

Definition 2.5.1. A fuzzy set $\tilde A$ is called a fuzzy number on $R$ if it satisfies the following:
(1) there exists $x_0 \in R$ such that $\mu_{\tilde A}(x_0) = 1$;
(2) $\forall \alpha \in [0,1]$, $A_\alpha = \{x \mid \mu_{\tilde A}(x) \ge \alpha\} = [\underline A_\alpha, \overline A_\alpha]$ is a closed interval on $R$.
Denote by $F(R)$ the set of all fuzzy numbers on $R$; among $F(R)$ we often use triangular fuzzy numbers.

Definition 2.5.2. If $\tilde A \in F(R)$ satisfies the conditions
(1) $\forall \alpha \in [0,1]$, $A_\alpha$ is a convex set on $R$;
(2) its membership function can be expressed as
\[ \mu_{\tilde A}(x) = \begin{cases} \dfrac{x - A^L}{A^C - A^L}, & \text{when } A^L \le x \le A^C, \\[4pt] \dfrac{x - A^R}{A^C - A^R}, & \text{when } A^C \le x \le A^R, \\[4pt] 0, & \text{otherwise}, \end{cases} \]
then $\tilde A$ is called a triangular fuzzy number, written $\tilde A = (A^L, A^C, A^R)$, and $A^K$ $(K = L, C, R)$ are called the three parameter variables of $\tilde A$.

Triangular fuzzy numbers satisfy the following properties.

Property 2.5.1. Let $\tilde A = (A^L, A^C, A^R)$, $\tilde B = (B^L, B^C, B^R)$, $k \in R$. Then
(1) $\tilde A + \tilde B = (A^L + B^L,\ A^C + B^C,\ A^R + B^R)$;
(2) $\tilde A - \tilde B = (A^L - B^R,\ A^C - B^C,\ A^R - B^L)$;
(3) $k\tilde A = (kA^L, kA^C, kA^R)$ when $k \ge 0$, and $k\tilde A = (kA^R, kA^C, kA^L)$ when $k < 0$.

Besides the properties above, any two triangular fuzzy numbers can be compared with each other, that is:
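Property 2.5.1's componentwise arithmetic is easy to sketch directly (triples are $(L, C, R)$; the function names are ours, not from the book):

```python
# Triangular fuzzy numbers as (L, C, R) triples; the operations mirror
# Property 2.5.1: addition, subtraction, and scalar multiplication.
def tfn_add(a, b):
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def tfn_sub(a, b):
    # cross terms: lower end minus b's upper end, and vice versa
    return (a[0] - b[2], a[1] - b[1], a[2] - b[0])

def tfn_scale(k, a):
    # a negative scalar flips the left and right ends
    return (k * a[0], k * a[1], k * a[2]) if k >= 0 else (k * a[2], k * a[1], k * a[0])

A, B = (1, 2, 4), (0, 1, 2)
print(tfn_add(A, B))      # (1, 3, 6)
print(tfn_sub(A, B))      # (-1, 1, 4)
print(tfn_scale(-2, A))   # (-8, -4, -2)
```

Note how subtraction and negative scaling keep the triple ordered $L \le C \le R$, which is why the endpoints cross.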


Definition 2.5.3. Let $\tilde A = (A^L, A^C, A^R)$, $\tilde B = (B^L, B^C, B^R)$. Then
(1) $\tilde A < \tilde B$ if and only if $A^L < B^L$, $A^C < B^C$, and $A^R < B^R$;
(2) $\tilde A = \tilde B$ if and only if $A^L = B^L$, $A^C = B^C$, and $A^R = B^R$;
(3) $\tilde A > \tilde B$ if and only if $A^L > B^L$, $A^C > B^C$, and $A^R > B^R$.

2.5.2 Distance between Two Triangular Fuzzy Numbers

In order to estimate the regression parameters in fuzzy linear regression models, we first introduce a new concept.

Definition 2.5.4. Let $\tilde A = (A^L, A^C, A^R)$, $\tilde B = (B^L, B^C, B^R)$. Then we define
(1) the left distance: $d_L(\tilde A, \tilde B) = (A^L - B^L)^2$;
(2) the center distance: $d_C(\tilde A, \tilde B) = (A^C - B^C)^2$;
(3) the right distance: $d_R(\tilde A, \tilde B) = (A^R - B^R)^2$.

Obviously, from the definition above, these are in fact the squared distances between the points to which the three parameter variables correspond in the rectangular coordinate system, so they are ordinary distances. The following properties hold.

Property 2.5.2. Let $\tilde A = (A^L, A^C, A^R)$, $\tilde B = (B^L, B^C, B^R)$, $k \in R$. Then
(1) $d_L(\tilde A, \tilde B) \ge 0$, $d_C(\tilde A, \tilde B) \ge 0$, $d_R(\tilde A, \tilde B) \ge 0$.
(2) $d_L(\tilde A, \tilde B) = d_L(\tilde B, \tilde A)$, $d_C(\tilde A, \tilde B) = d_C(\tilde B, \tilde A)$, $d_R(\tilde A, \tilde B) = d_R(\tilde B, \tilde A)$. (Here we define $d_L(\tilde B, \tilde A) = (B^L - A^L)^2$, and similarly for the others.)
(3) $d_L(\tilde A, \tilde B) = d_C(\tilde A, \tilde B) = d_R(\tilde A, \tilde B) = 0$ when $\tilde A = \tilde B$.
(4) $d_L(k\tilde A, k\tilde B) = k^2 d_L(\tilde A, \tilde B)$, $d_C(k\tilde A, k\tilde B) = k^2 d_C(\tilde A, \tilde B)$, $d_R(k\tilde A, k\tilde B) = k^2 d_R(\tilde A, \tilde B)$, $k \ge 0$.

Proof: From Definition 2.5.4, (1) and (3) are obviously correct. We prove only the left distance in (2) and (4); the others are analogous:
(2) $d_L(\tilde A, \tilde B) = (A^L - B^L)^2 = (B^L - A^L)^2 = d_L(\tilde B, \tilde A)$.
(4) $d_L(k\tilde A, k\tilde B) = (kA^L - kB^L)^2 = k^2 (A^L - B^L)^2 = k^2 d_L(\tilde A, \tilde B)$.

2.5.3 Fuzzy Linear Regression

Now we consider the fuzzy linear regression model
\[ \tilde y = \tilde a_0 + \tilde a_1 x_1 + \tilde a_2 x_2, \qquad x_1, x_2 \ge 0, \tag{2.5.1} \]
where $\tilde y, \tilde a_0, \tilde a_1$ and $\tilde a_2$ are triangular fuzzy numbers with $\tilde y = (y^L, y^C, y^R)$, $\tilde a_0 = (a_0^L, a_0^C, a_0^R)$, $\tilde a_1 = (a_1^L, a_1^C, a_1^R)$ and $\tilde a_2 = (a_2^L, a_2^C, a_2^R)$, and all parameter variables are nonnegative real numbers. Suppose $x_{1i}, x_{2i}$ and $\tilde y_i$ $(i = 1, 2, \dots, N)$ are real input data and fuzzy output data; we now calculate the estimated values of $\tilde a_0, \tilde a_1$ and $\tilde a_2$ in Model (2.5.1). In many papers the distance between two fuzzy numbers is mostly the one adopted by [Xu98], from which the optimal estimates are obtained. Here we introduce a new method.


For fuzzy linear regression problems, what matters most, as we all know, is to make the error between the observed values and the fitted ones as small as possible. In Model (2.5.1) these data are all triangular fuzzy numbers, i.e.,
\[ (y_i^L, y_i^C, y_i^R) \quad \text{and} \quad (a_0^L + a_1^L x_{1i} + a_2^L x_{2i},\; a_0^C + a_1^C x_{1i} + a_2^C x_{2i},\; a_0^R + a_1^R x_{1i} + a_2^R x_{2i}). \]
According to the previous analysis, we can consider the corresponding parameter variables of the two. The smaller the errors between observed and fitted values are, the smaller the total error is. So the fuzzy linear regression problem is transformed into
\[ \begin{cases} \min d_L(\tilde a_0 + \tilde a_1 x_1 + \tilde a_2 x_2, \tilde y) = \min \sum_{i=1}^N (y_i^L - a_0^L - a_1^L x_{1i} - a_2^L x_{2i})^2, \\[2pt] \min d_C(\tilde a_0 + \tilde a_1 x_1 + \tilde a_2 x_2, \tilde y) = \min \sum_{i=1}^N (y_i^C - a_0^C - a_1^C x_{1i} - a_2^C x_{2i})^2, \\[2pt] \min d_R(\tilde a_0 + \tilde a_1 x_1 + \tilde a_2 x_2, \tilde y) = \min \sum_{i=1}^N (y_i^R - a_0^R - a_1^R x_{1i} - a_2^R x_{2i})^2. \end{cases} \]
According to the least square method, set
\[ \frac{\partial d_L}{\partial a_l^L} = -2\sum_{i=1}^N x_{li}\,(y_i^L - a_0^L - a_1^L x_{1i} - a_2^L x_{2i}) = 0 \quad (l = 0, 1, 2;\ x_{0i} \equiv 1), \tag{2.5.2} \]
and similarly $\partial d_C / \partial a_l^C = 0$ and $\partial d_R / \partial a_l^R = 0$ for the center and right parameters.

From the first group of equations in (2.5.2) we obtain
\[ \begin{cases} N a_0^L + \sum_{i=1}^N x_{1i}\, a_1^L + \sum_{i=1}^N x_{2i}\, a_2^L = \sum_{i=1}^N y_i^L, \\[2pt] \sum_{i=1}^N x_{1i}\, a_0^L + \sum_{i=1}^N x_{1i}^2\, a_1^L + \sum_{i=1}^N x_{1i} x_{2i}\, a_2^L = \sum_{i=1}^N x_{1i} y_i^L, \\[2pt] \sum_{i=1}^N x_{2i}\, a_0^L + \sum_{i=1}^N x_{1i} x_{2i}\, a_1^L + \sum_{i=1}^N x_{2i}^2\, a_2^L = \sum_{i=1}^N x_{2i} y_i^L. \end{cases} \tag{2.5.3} \]
When
\[ \Delta = \begin{vmatrix} N & \sum_{i=1}^N x_{1i} & \sum_{i=1}^N x_{2i} \\ \sum_{i=1}^N x_{1i} & \sum_{i=1}^N x_{1i}^2 & \sum_{i=1}^N x_{1i} x_{2i} \\ \sum_{i=1}^N x_{2i} & \sum_{i=1}^N x_{1i} x_{2i} & \sum_{i=1}^N x_{2i}^2 \end{vmatrix} \ne 0, \]
then by Cramer's rule we have
\[ a_0^L = \frac{\Delta_1}{\Delta}, \quad a_1^L = \frac{\Delta_2}{\Delta}, \quad a_2^L = \frac{\Delta_3}{\Delta}, \tag{2.5.4} \]
where $\Delta_j$ $(j = 1, 2, 3)$ is obtained from $\Delta$ by replacing its $j$-th column with the right-hand terms $\sum_{i=1}^N y_i^L$, $\sum_{i=1}^N x_{1i} y_i^L$, $\sum_{i=1}^N x_{2i} y_i^L$, respectively.
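A minimal sketch of solving the left normal equations (2.5.3) by Cramer's rule (2.5.4), on made-up data chosen so the fit is exact (`det3` and `solve_left` are our names, not from the book):

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve_left(x1, x2, yL):
    """Solve (2.5.3) for a0^L, a1^L, a2^L via Cramer's rule (2.5.4)."""
    N = len(yL)
    S = sum
    M = [[N,     S(x1),                           S(x2)],
         [S(x1), S(a * a for a in x1),            S(a * b for a, b in zip(x1, x2))],
         [S(x2), S(a * b for a, b in zip(x1, x2)), S(b * b for b in x2)]]
    rhs = [S(yL), S(a * y for a, y in zip(x1, yL)), S(b * y for b, y in zip(x2, yL))]
    d = det3(M)
    sols = []
    for j in range(3):                 # Delta_j: replace column j by rhs
        Mj = [row[:] for row in M]
        for r in range(3):
            Mj[r][j] = rhs[r]
        sols.append(det3(Mj) / d)
    return sols                        # [a0^L, a1^L, a2^L]

# Made-up data generated from y^L = 1 + 2*x1 + 3*x2, so the fit is exact:
x1 = [1, 2, 3, 4]
x2 = [2, 1, 4, 3]
yL = [1 + 2 * a + 3 * b for a, b in zip(x1, x2)]
print([round(v, 6) for v in solve_left(x1, x2, yL)])  # [1.0, 2.0, 3.0]
```

The same routine applies verbatim to the center and right components, with $y^C$ or $y^R$ in place of $y^L$.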

Similarly, considering the second and third groups of (2.5.2), we get
\[ a_0^C = \frac{\Delta_1'}{\Delta},\ a_1^C = \frac{\Delta_2'}{\Delta},\ a_2^C = \frac{\Delta_3'}{\Delta}; \qquad a_0^R = \frac{\Delta_1''}{\Delta},\ a_1^R = \frac{\Delta_2''}{\Delta},\ a_2^R = \frac{\Delta_3''}{\Delta}, \tag{2.5.5} \]
where $\Delta_j'$ and $\Delta_j''$ are obtained by replacing the $j$-th column of $\Delta$ $(j = 1, 2, 3)$ with the terms
\[ \sum_{i=1}^N y_i^C, \ \sum_{i=1}^N x_{1i} y_i^C, \ \sum_{i=1}^N x_{2i} y_i^C \qquad \text{and} \qquad \sum_{i=1}^N y_i^R, \ \sum_{i=1}^N x_{1i} y_i^R, \ \sum_{i=1}^N x_{2i} y_i^R, \]
respectively. Thus the estimated values of $\tilde a_0, \tilde a_1$ and $\tilde a_2$ are
\[ \hat{\tilde a}_0 = (a_0^L, a_0^C, a_0^R), \quad \hat{\tilde a}_1 = (a_1^L, a_1^C, a_1^R), \quad \hat{\tilde a}_2 = (a_2^L, a_2^C, a_2^R). \tag{2.5.6} \]

Definition 2.5.5. The parameter variables $a_l^L, a_l^C$ and $a_l^R$ $(l = 0, 1, 2)$ are called optimal estimated parameters of Model (2.5.1) if and only if they satisfy (2.5.2), and the corresponding solutions (2.5.6) are called the optimal estimated values in (2.5.1). So the estimated regression equation is
\[ \hat{\tilde y} = \hat{\tilde a}_0 + \hat{\tilde a}_1 x_1 + \hat{\tilde a}_2 x_2, \qquad x_1, x_2 \ge 0. \tag{2.5.7} \]

Example 2.5.1 [Xu98]: The sales of a certain product on the market are shown in Table 2.5.1.

Table 2.5.1. Product Sales in Years

Year (x_i)   Amount of sales (ỹ_i) (unit: 10^4 pieces)
1987         (228, 230, 231)
1988         (233, 236, 238)
1989         (239, 241, 244)

Try to estimate the amount of sales in 1990.
According to formulas (2.5.4) and (2.5.6), we get $a_1^L = 222.3$, $a_2^L = 5.5$; $a_1^C = 224.7$, $a_2^C = 5.5$; $a_1^R = 224.7$, $a_2^R = 6.5$. Therefore the regression equation is
\[ \hat{\tilde y} = (222.3, 224.7, 224.7) + (5.5, 5.5, 6.5)(x - 1986); \]
at $x = 1990$ we have $\hat{\tilde y} = (244.3, 246.7, 250.7)$.
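As an arithmetic check, Example 2.5.1 can be reproduced componentwise with an ordinary unary least-squares fit (a sketch in plain Python, not the book's code):

```python
def linfit(x, y):
    """Least-squares line y = a + b*x (ordinary unary regression)."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    b = (n * sum(a * c for a, c in zip(x, y)) - sx * sy) / (n * sum(a * a for a in x) - sx ** 2)
    a = (sy - b * sx) / n
    return a, b

# x = year - 1986; the L, C, R components are each fitted separately.
x = [1, 2, 3]
comps = {"L": [228, 233, 239], "C": [230, 236, 241], "R": [231, 238, 244]}
coef = {k: tuple(round(v, 1) for v in linfit(x, y)) for k, y in comps.items()}
print(coef)  # {'L': (222.3, 5.5), 'C': (224.7, 5.5), 'R': (224.7, 6.5)}

# Forecast for 1990, i.e. x = 4:
print(tuple(round(coef[k][0] + coef[k][1] * 4, 1) for k in "LCR"))  # (244.3, 246.7, 250.7)
```

The three fits recover exactly the triangular coefficients and the 1990 forecast quoted in the example.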


2.5.4 Error Analysis

For Model (2.5.1), we obtain the data $(\tilde y_i, x_{1i}, x_{2i})$ $(i = 1, 2, \dots, N)$ by observation. Then the fitted and observed values of $\tilde y$ are
\[ \hat{\tilde y}_i = (a_0^L + a_1^L x_{1i} + a_2^L x_{2i},\; a_0^C + a_1^C x_{1i} + a_2^C x_{2i},\; a_0^R + a_1^R x_{1i} + a_2^R x_{2i}) \quad \text{and} \quad \tilde y_i = (y_i^L, y_i^C, y_i^R), \]
respectively. We have already obtained the estimated values of every parameter variable; now we analyze the left parameter variables, the others being similar. In fact, for Model (2.5.1), by Definition 2.5.3 we have
\[ y^L = a_0^L + a_1^L x_1 + a_2^L x_2, \tag{2.5.8} \]
where $y^L$, $a_l^L$ and $x_j$ $(l = 0, 1, 2;\ j = 1, 2)$ are nonnegative real numbers. Obviously Equation (2.5.8) can be regarded as an ordinary linear regression model, so we can estimate the above values as in the ordinary case. Thus, by the properties of ordinary linear regression, the estimated values $a_0^L, a_1^L$ and $a_2^L$ are unbiased estimates, and the variance is the same as in the ordinary case.

2.5.5 Comparison of the Two Distance Formulas

For the fuzzy linear regression (2.5.1), most papers use the following definition of the distance between two fuzzy numbers:
\[ D_2(\tilde A, \tilde B)^2 = \int_0^1 f(\alpha)\, d_2(A_\alpha, B_\alpha)\, d\alpha, \tag{2.5.9} \]
where $d_2(A_\alpha, B_\alpha) = (\underline A_\alpha - \underline B_\alpha)^2 + (\overline A_\alpha - \overline B_\alpha)^2$, $A_\alpha = [\underline A_\alpha, \overline A_\alpha]$, $B_\alpha = [\underline B_\alpha, \overline B_\alpha]$, and $f(\alpha)$ is a monotonically increasing function on $[0, 1]$ with $f(0) = 0$ and $\int_0^1 f(\alpha)\, d\alpha = \frac12$.

If we use the distance above together with differentiation, integration and the least-square method on Model (2.5.1), and take $f(\alpha) = \alpha$, we get [Lin01]
\[ \sum_{i=1}^n x_{1i}^2\, a_1^C + \sum_{i=1}^n x_{1i}^2\, a_1^R = \sum_{i=1}^n x_{1i}\,(y_i^C + y_i^R), \tag{2.5.10} \]
\[ \sum_{i=1}^n x_{1i}^2\, a_1^L + \sum_{i=1}^n x_{1i}^2\, a_1^C = \sum_{i=1}^n x_{1i}\,(y_i^L + y_i^C), \tag{2.5.11} \]
\[ \sum_{i=1}^n x_{1i}^2\, a_1^L + 6\sum_{i=1}^n x_{1i}^2\, a_1^C + \sum_{i=1}^n x_{1i}^2\, a_1^R = \sum_{i=1}^n x_{1i}\,(y_i^L + 6 y_i^C + y_i^R). \tag{2.5.12} \]


Then, by Cramer's rule, we can get the values of $a_1^L, a_1^C$ and $a_1^R$, and the other parameters can be calculated by the same method. Comparing (2.5.3) with (2.5.10), (2.5.11) and (2.5.12), it is obvious that, in general, different distances yield different parameter estimates. With the distance (2.5.9) the process of calculation is more complex, and the properties of the parameter variables are not the same as in this section. Therefore, the method of this section is more direct and, above all, of practical value.

3 Regression and Self-regression Models with Fuzzy Variables

In 1989, based on Zadeh's theory of fuzzy sets [Zad65a], a self-regression forecast model with T-fuzzy variables was advanced [Cao89b], [Cao89c], [Cao90a], and in 1992 a linearizable non-linear regression model with T-fuzzy variables [Cao95c] was developed. The range of application is vastly extended because the models carry much wider information.
1) Making use of a fuzzy distance, follow the classical regression analytical method with a straight-line (or curve) fit.
2) Ascertain the regression model with fuzzy variables under a cone and platform index.
Because fuzzy regression analysis is an interval estimation, this kind of analytical method becomes very useful. This chapter introduces T-fuzzy variables, (·, c) fuzzy variables and flat (or trapezoidal) fuzzy variables into regression models, and builds more practical ways of determining the models. Meanwhile, their applications are discussed.

3.1 Regression Model with T-Fuzzy Variables

3.1.1 Basic Property

As for the definition and properties of T-fuzzy numbers, see Ref. [TUA82]. It is easy to prove that fuzzy numbers of this kind are regular, convex fuzzy subsets.

Definition 3.1.1. Let $\tilde x = (m(x), c_1)$, $\tilde y = (m(y), c_2)$. Then the distance on $T(R)$, the set of T-fuzzy numbers ($R$ is the set of real numbers), is defined as
\[ d(\tilde x, \tilde y)^2 = D_2(\mathrm{Supp}(\tilde x), \mathrm{Supp}(\tilde y))^2 + (m(\tilde x) - m(\tilde y))^2, \]
where $\mathrm{Supp}(\cdot)$ denotes the support interval of $(\cdot)$ and $m(\cdot)$ denotes its modal value.

B.-Y. Cao: Optimal Models & Meth. with Fuzzy Quantities, STUDFUZZ 248, pp. 63–94. © Springer-Verlag Berlin Heidelberg 2010, springerlink.com


Lemma 3.1.1.
\[ d(\tilde y_i, \tilde y_j)^2 = 2\,d(\tilde y_i, \tilde x)^2 + 2\,d(\tilde x, \tilde y_j)^2 - 4\,d\!\left(\tilde x, \frac{\tilde y_i + \tilde y_j}{2}\right)^{\!2}. \]

Proof: From the parallelogram rule we get
\[ 2(y_i - x)^2 + 2(x - y_j)^2 = [(y_i - x) - (x - y_j)]^2 + [(y_i - x) + (x - y_j)]^2 = (y_i - y_j)^2 + [2x - (y_i + y_j)]^2. \]
In addition, set $\tilde y_i = (y_i, \underline\eta_i, \overline\eta_i)_T$, $\tilde y_j = (y_j, \underline\eta_j, \overline\eta_j)_T$, $\tilde x = (x, \underline\xi, \overline\xi)_T$, and let
\[ F = y_i - \underline\eta_i - (x - \underline\xi), \quad G = x - \underline\xi - (y_j - \underline\eta_j), \quad \overline F = y_i + \overline\eta_i - (x + \overline\xi), \quad \overline G = x + \overline\xi - (y_j + \overline\eta_j). \]
Because
\[ \begin{aligned} 2(F^2 + G^2) + 2(\overline F^2 + \overline G^2) &= (F - G)^2 + (F + G)^2 + (\overline F - \overline G)^2 + (\overline F + \overline G)^2 \\ &= 2[(y_i - \underline\eta_i) - (x - \underline\xi)]^2 + 2[(x - \underline\xi) - (y_j - \underline\eta_j)]^2 \\ &\quad + 2[(y_i + \overline\eta_i) - (x + \overline\xi)]^2 + 2[(x + \overline\xi) - (y_j + \overline\eta_j)]^2 \\ &= [(y_i - \underline\eta_i) - (y_j - \underline\eta_j)]^2 + \big[2(x - \underline\xi) - (y_i - \underline\eta_i + y_j - \underline\eta_j)\big]^2 \\ &\quad + [(y_i + \overline\eta_i) - (y_j + \overline\eta_j)]^2 + \big[2(x + \overline\xi) - (y_i + \overline\eta_i + y_j + \overline\eta_j)\big]^2, \end{aligned} \]
it follows that
\[ D_2(\mathrm{Supp}\,\tilde y_i, \mathrm{Supp}\,\tilde y_j)^2 = 2 D_2(\mathrm{Supp}\,\tilde y_i, \mathrm{Supp}\,\tilde x)^2 + 2 D_2(\mathrm{Supp}\,\tilde x, \mathrm{Supp}\,\tilde y_j)^2 - 4 D_2\Big(\mathrm{Supp}\,\tilde x, \mathrm{Supp}\,\frac{\tilde y_i + \tilde y_j}{2}\Big)^{\!2}. \]
Combining this with the crisp identity above for the modal values proves the lemma.
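Lemma 3.1.1 is a parallelogram identity for the quadratic distance of Definition 3.1.3, so it can be checked numerically on arbitrary triples (the triples below are made up; `d2` is our name for the squared distance):

```python
def d2(a, b):
    """Squared distance of Definition 3.1.3 between T-fuzzy numbers
    given as (value, lower spread, upper spread)."""
    (x, xl, xu), (y, yl, yu) = a, b
    return (((x - y) - (xl - yl)) ** 2
            + ((x - y) + (xu - yu)) ** 2
            + (x - y) ** 2) / 3

def mid(a, b):
    """Componentwise midpoint (y_i + y_j) / 2."""
    return tuple((u + v) / 2 for u, v in zip(a, b))

yi, yj, x = (5.0, 1.0, 0.5), (2.0, 0.3, 0.7), (3.0, 0.4, 0.4)
lhs = d2(yi, yj)
rhs = 2 * d2(yi, x) + 2 * d2(x, yj) - 4 * d2(x, mid(yi, yj))
print(abs(lhs - rhs) < 1e-9)  # True
```

The identity holds exactly because `d2` is a quadratic form in the componentwise differences, so each linear term obeys the scalar parallelogram law used in the proof.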

Theorem 3.1.1. Let $V$ be a closed cone in $P(R)$ (a subspace of $T(R)$). Then for any $\tilde x$ in $P(R)$ there exists a unique T-fuzzy number $\tilde y_0$ in $V$ such that $d(\tilde x, \tilde y_0) \le d(\tilde x, \tilde y)$ for all $\tilde y$ in $V$, and a necessary and sufficient condition for $\tilde y_0$ to be the unique minimizing fuzzy number in $V$ is that $\tilde x$ is $\tilde y_0$-orthogonal to $V$.


Proof: Sufficiency. Because
\[ \begin{aligned} d(\tilde x, \tilde y)^2 &= [x - y - (\underline\xi - \underline\eta)]^2 + [x - y + (\overline\xi - \overline\eta)]^2 + (x - y)^2 \\ &= [x - y_0 - (\underline\xi - \underline\eta_0)]^2 + [x - y_0 + (\overline\xi - \overline\eta_0)]^2 + (x - y_0)^2 \\ &\quad + [y_0 - y - (\underline\eta_0 - \underline\eta)]^2 + [y_0 - y + (\overline\eta_0 - \overline\eta)]^2 + (y_0 - y)^2 \\ &\quad + 2[y_0 - y - (\underline\eta_0 - \underline\eta)][x - y_0 - (\underline\xi - \underline\eta_0)] + 2[y_0 - y + (\overline\eta_0 - \overline\eta)][x - y_0 + (\overline\xi - \overline\eta_0)] + 2(y_0 - y)(x - y_0) \\ &\ge d(\tilde x, \tilde y_0)^2 + d(\tilde y_0, \tilde y)^2, \end{aligned} \]
and because $d(\tilde y_0, \tilde y)^2 > 0$ for $\tilde y \ne \tilde y_0$, we have $d(\tilde x, \tilde y)^2 > d(\tilde x, \tilde y_0)^2$.

Necessity. If for some $\tilde y$ in $V$ and some $\lambda \in (0, 1)$ we have
\[ [y_0 - y - (\underline\eta_0 - \underline\eta)][x - y_0 - (\underline\xi - \underline\eta_0)] + [y_0 - y + (\overline\eta_0 - \overline\eta)][x - y_0 + (\overline\xi - \overline\eta_0)] + (y_0 - y)(x - y_0) = -\lambda, \]
suppose $d(\tilde y, \tilde y_0) = 1$ and, without loss of generality, consider $\tilde y_1 = (1 - \lambda)\tilde y_0 + \lambda \tilde y$, which lies in $V$ by convexity. Then
\[ d(\tilde x, \tilde y_1)^2 = d(\tilde x, \tilde y_0)^2 + \lambda^2 d(\tilde y, \tilde y_0)^2 + 2\lambda\big[\cdots\big] = d(\tilde x, \tilde y_0)^2 - \lambda^2, \]
hence $\tilde y_0$ is not a minimum element of $V$ — a contradiction. Therefore uniqueness and the necessary and sufficient $\tilde y_0$-orthogonality condition are certified.

It remains to prove that $\tilde y_0$ exists. If $\tilde x \in V$, existence is trivial. If $\tilde x \notin V$, define $\delta = \inf\{d(\tilde x, \tilde y) \mid \tilde y \in V\}$ and let $\{\tilde y_i\}$ be a sequence in $V$ with $d(\tilde x, \tilde y_i) \to \delta$. From the equality
\[ d(\tilde y_i, \tilde y_j)^2 = 2 d(\tilde y_i, \tilde x)^2 + 2 d(\tilde x, \tilde y_j)^2 - 4 d\Big(\tilde x, \frac{\tilde y_i + \tilde y_j}{2}\Big)^{\!2}, \]
and since $\frac{\tilde y_i + \tilde y_j}{2} \in V$ for all $i, j$ (the cone $V$ is convex), we have $d\big(\tilde x, \frac{\tilde y_i + \tilde y_j}{2}\big) \ge \delta$, so
\[ d(\tilde y_i, \tilde y_j)^2 \le 2 d(\tilde y_i, \tilde x)^2 + 2 d(\tilde x, \tilde y_j)^2 - 4\delta^2, \]
and as $i, j \to \infty$, $d(\tilde y_i, \tilde y_j) \to 0$, i.e., $\{\tilde y_i\}$ is a Cauchy sequence. Since $(T(R), d)$ is complete and $V$ is closed, $\tilde y_0 = \lim \tilde y_i$ lies in $V$.

Corollary 3.1.1. Let $N$ be a positive integer. If $V$ is a closed cone in $P(R)^N$, with the metric on $P(R)^N$ denoted $d_N$, defined


as $d_N(\tilde x, \tilde y)^2 = \sum_{i=1}^N d(\tilde x_i, \tilde y_i)^2$, where $\tilde x_i, \tilde y_i \in P(R)$ $(i = 1, 2, \dots, N)$ are the components of the $N$-dimensional fuzzy vectors $\tilde x, \tilde y \in P(R)^N$, then for arbitrary $\tilde x$ in $P(R)^N$ a unique vector $\tilde y_0$ exists in $V$ such that
\[ d_N(\tilde x, \tilde y_0)^2 \le d_N(\tilde x, \tilde y)^2 \]
holds for all $\tilde y$ in $V$.

3.1.2 Regression Model with T-Fuzzy Variables

Consider $y = \beta_0 + \beta_1 x_1 + \cdots + \beta_n x_n + \varepsilon$; we call it a regression model, where $\varepsilon$ and $\beta_p$ $(p = 0, 1, \dots, n)$ are ordinary real numbers, and $x_p$ $(p = 1, \dots, n)$ and $y$ are ordinary real variables.

Definition 3.1.2. If
\[ \tilde y = \beta_0 + \beta_1 \tilde x_1 + \cdots + \beta_n \tilde x_n + \varepsilon, \tag{3.1.1} \]
where $\tilde x_p$ $(p = 1, 2, \dots, n)$ are T-fuzzy variables, $\tilde y$ is a T-fuzzy function variable, $e = (1, 0, 0)$, $\beta_0, \beta_1, \dots, \beta_n \in R$ and $\varepsilon$ is an error, we call (3.1.1) a regression model with T-fuzzy variables. The concept of a T-fuzzy variable is given in Section 1.7 of Chapter 1.

Definition 3.1.3. Assume that $P(R)$ is the subspace of $T(R)$ consisting of all elements with non-negative support: for each $(x, \underline\xi, \overline\xi) \in P(R)$, $x - \underline\xi \ge 0$. $P(R)$ is a cone in $T(R)$ and also a closed convex subset of $T(R)$ with respect to the topology induced by $d$. Here
\[ d(\tilde x, \tilde y)^2 = \frac{[x - y - (\underline\xi - \underline\eta)]^2 + [x - y + (\overline\xi - \overline\eta)]^2 + (x - y)^2}{3}, \qquad \tilde x, \tilde y \in P(R)^N, \ \tilde x_i, \tilde y_i \in P(R). \]

Assume that the test data sets $\tilde x_{1i}, \tilde x_{2i}, \dots, \tilde x_{ni}$ and $\tilde y_i$ are given by a linear regression equation
\[ \tilde y_i = \beta_0 + \beta_1 \tilde x_{1i} + \cdots + \beta_n \tilde x_{ni}, \tag{3.1.2} \]
with $\tilde x_{pi} = (x_{pi}, \underline\xi_{pi}, \overline\xi_{pi})$ $(p = 0, 1, \dots, n;\ i = 1, \dots, N)$ fuzzy independent variables and $\tilde y_i = (y_i, \underline\eta_i, \overline\eta_i)$, an affine function from $P(R)^N$ to $T(R)$. If again
\[ (M)\qquad r(\beta_0, \beta) = \sum_{i=1}^N d(\beta_0 + \beta_1 \tilde x_{1i} + \cdots + \beta_n \tilde x_{ni};\ \tilde y_i)^2, \]
then $\beta_p$ $(p = 0, 1, \dots, n)$ is determined by applying the least square method. Unfortunately, the resulting $\beta_p$ would be T-fuzzy numbers rather than real numbers,


so the classical least square method cannot be directly applied, and a conversion must be made. For this reason we first introduce the following definitions and properties.

Different expressions arise for $r(\beta_0, \beta)$ according as the $\beta_p$ are positive or negative, because $\beta_p \tilde x_p = (\beta_p x_p, \beta_p \underline\xi_p, \beta_p \overline\xi_p)$ if $\beta_p \ge 0$ and $\beta_p \tilde x_p = (\beta_p x_p, \beta_p \overline\xi_p, \beta_p \underline\xi_p)$ when $\beta_p < 0$. So if negative $\beta$ appear in $(M)$, "mixed" upper and lower spreads occur in each summand, as can easily be seen from the above form. Consequently, in order to derive analogues of the normal equations, it is necessary to specify certain cones in which to seek the minimizing solution of $(M)$. We therefore define the following.

Definition 3.1.4. Assume that $\tilde x_i = (\tilde x_{1i}, \tilde x_{2i}, \dots, \tilde x_{ni})$ $(i = 1, 2, \dots, N)$. Partition the set of natural numbers $\{1, 2, \dots, n\}$ into two exhaustive, mutually exclusive subsets $J(-)$, $J(+)$, one of which may be empty. To each such partition associate a binary multi-index $J = (j_1, j_2, \dots, j_n)$ defined by
\[ j_p = \begin{cases} 0, & \text{if } p \in J(+), \\ 1, & \text{if } p \in J(-). \end{cases} \]
Denote by $C(J)$ the cone in $T(R)^n$
\[ C(J) = \{\beta_0 + \beta_1 \tilde x_1 + \cdots + \beta_n \tilde x_n \mid \beta_p \ge 0 \text{ if } j_p = 0;\ \beta_p < 0 \text{ if } j_p = 1\}. \]
We call $J$ a cone index, and $C(J)$ the cone determined by it.

Proposition 3.1.1. For a given cone index $J$, the problem of minimizing in the cone
\[ (M(J))\qquad r(\beta_0(J), \beta(J)) = \sum_{i=1}^N d(\beta_0 + \beta_1 \tilde x_{1i} + \cdots + \beta_n \tilde x_{ni},\ \tilde y_i)^2 \tag{3.1.3} \]
has a unique parameter solution $\beta_0(J), \beta_1(J), \dots, \beta_n(J)$.

Definition 3.1.5. Assume the fuzzy data to be $\tilde x_{1i}, \tilde x_{2i}, \dots, \tilde x_{ni};\ \tilde y_i$. We call $S(J)$ the system consisting of the $n+1$ equations
\[ \frac{\partial r(\beta_0(J), \beta(J))}{\partial \beta_p} = 0 \quad (p = 0, 1, \dots, n), \tag{3.1.4} \]
and write it as
\[ \begin{pmatrix} N & \sum_{i=1}^N x_{1i}(J) & \cdots & \sum_{i=1}^N x_{ni}(J) \\ \sum_{i=1}^N x_{1i}(J) & \sum_{i=1}^N x_{1i}^2(J) & \cdots & \sum_{i=1}^N x_{1i}(J)\, x_{ni}(J) \\ \vdots & \vdots & \cdots & \vdots \\ \sum_{i=1}^N x_{ni}(J) & \sum_{i=1}^N x_{ni}(J)\, x_{1i}(J) & \cdots & \sum_{i=1}^N x_{ni}^2(J) \end{pmatrix} \cdot \begin{pmatrix} \beta_0(J) \\ \beta_1(J) \\ \vdots \\ \beta_n(J) \end{pmatrix} = \begin{pmatrix} \sum_{i=1}^N y_i(J) \\ \sum_{i=1}^N x_{1i}(J)\, y_i(J) \\ \vdots \\ \sum_{i=1}^N x_{ni}(J)\, y_i(J) \end{pmatrix}. \]

If $S(J)$ has a solution $\beta_0(J), \beta_1(J), \dots, \beta_n(J)$ such that $\beta_p > 0$ at $j_p = 0$ and $\beta_p < 0$ at $j_p = 1$, then we call (3.1.3) $J$-compatible with the data. If the unconstrained minimization of $S(J)$ is compatible with $\tilde Y_i = \beta_0 e + \beta_1 \tilde x_{1i} + \cdots + \beta_n \tilde x_{ni}$ in $C(J)$, then the model is called compatible.

Theorem 3.1.2. Let the data $\tilde x_{1i}, \tilde x_{2i}, \dots, \tilde x_{ni};\ \tilde y_i$ $(i = 1, 2, \dots, N)$ satisfy Equation (3.1.2). For every cone index $J$ there exists a unique solution $\beta_0(J), \beta_1(J), \dots, \beta_n(J)$ of system (3.1.4).

Proof: Catalogue $\{\tilde x_{pi}\}$ by subscript. For $i = 1, 2, \dots, N$, set $w_i = y_i$ and, for each $p$, $z_{pi} = x_{pi}$. For $i = N+1, \dots, 2N$, set $w_i = y_i - \underline\eta_i$ and, for each $p$,
\[ z_{pi} = \begin{cases} x_{pi} - \underline\xi_{pi}, & \text{if } j_p = 0, \\ x_{pi} + \underline\xi_{pi}, & \text{if } j_p = 1. \end{cases} \]
For $i = 2N+1, \dots, 3N$, set $w_i = y_i + \overline\eta_i$ and, for each $p$,
\[ z_{pi} = \begin{cases} x_{pi} + \overline\xi_{pi}, & \text{if } j_p = 0, \\ x_{pi} - \overline\xi_{pi}, & \text{if } j_p = 1. \end{cases} \]
Then it is not difficult to see that $S(J)$ is the same system as the crisp normal equations for the least-squares fitting model
\[ w = \beta_0 + \beta_1 z_1 + \cdots + \beta_n z_n \tag{3.1.5} \]
for the data $w_i, z_{1i}, z_{2i}, \dots, z_{ni}$. Using the classical least square method, it is easy to find the unique optimal solution $\beta_p$ $(p = 0, 1, \dots, n)$ of (3.1.5) with respect to a cone index $J$.

3.1.3 Regression Model with T-Fuzzy Data

We call (3.1.1) a regression model with T-fuzzy parameters. According to the theory above, the modeling steps for Model (3.1.1) can be summarized as follows:
1° Work out a sequence table from the observation data and classify the data by Definition 3.1.4.
2° Change the observation data $\tilde x_{pi}$ and the dependent variable $\tilde y_i$ into non-fuzzy form. The fuzzy data are then changed into ordinary data, and (3.1.1) becomes a classical linear regression model (3.1.5).


3° By Theorem 3.1.2, the model has a unique solution $\beta_0, \beta_1, \dots, \beta_n$; substituted into (3.1.5), it can be tested by a classical determination method. Calculate $r_p$ $(p = 1, \dots, n)$ and $s$:
\[ r_p = \frac{N\sum_{i=1}^N z_{pi} w_i - \sum_{i=1}^N z_{pi} \sum_{i=1}^N w_i}{\sqrt{\Big[N\sum_{i=1}^N z_{pi}^2 - \big(\sum_{i=1}^N z_{pi}\big)^2\Big]\Big[N\sum_{i=1}^N w_i^2 - \big(\sum_{i=1}^N w_i\big)^2\Big]}}, \tag{3.1.6} \]
\[ s = \sqrt{\frac{\sum_{i=1}^N w_i^2 - \big(\sum_{i=1}^N w_i\big)^2/N - \hat\beta_1\Big(\sum_{i=1}^N z_{pi} w_i - \big(\sum_{i=1}^N z_{pi}\big)\big(\sum_{i=1}^N w_i\big)/N\Big)}{N - 2}} \quad (p = 0, 1, \dots, n). \]
4° Decision. If $|r_p| > r_{0.05}$, the test passes.
5° The forecast band model is obtained as
\[ \underline w = \hat\beta_0 - 2s + \hat\beta_1 z, \qquad \overline w = \hat\beta_0 + 2s + \hat\beta_1 z. \]

Example 3.1.1: The petroleum demand of a western developed country during 1965–1981 is arranged as follows.

Table 3.1.1. Needed Arrangement of Petroleum in Developed Country

Year  Demand (ktoe)        Year  Demand (ktoe)        Year  Demand (ktoe)
1965  (8.05, 0.02, 0.03)   1967  (8.28, 0.02, 0.02)   1969  (8.5, 0.02, 0.01)
1971  (8.7, 0.01, 0.03)    1973  (8.94, 0.05, 0.03)   1975  (9, 0, 0.01)
1977  (9.04, 0.01, 0.02)   1979  (9.18, 0.02, 0.03)   1981  (9.28, 0.03, 0.04)

Try to forecast the country's petroleum demand in 1998.
From the data in Table 3.1.1 we know that each datum represents a cone, its figure constructed from its top and a linear distribution; therefore, applying the method above, we obtain the following:
1° Divide the T-fuzzy data by year into two sets: {65, 69, 73, 75, 81}, denoted J(−), and {67, 71, 77, 79}, denoted J(+).
2° Nonfuzzify. Classify the data into three parts. One is (8.5, 0.02, 0.01), (9, 0, 0.01), (9.28, 0.03, 0.04), corresponding to the years {69, 75, 81}.


Another is (8.28, 0.01, 0.02), (8.94, 0.05, 0.03), (9.18, 0.02, 0.03), corresponding to the years {67, 73, 79}; and the other is (8.05, 0.02, 0.03), (8.7, 0.01, 0.03), (9.04, 0.01, 0.02), corresponding to the years {65, 71, 77}. By the expression for $z_{pi}$, the petroleum demand in Table 3.1.1 is turned into

Table 3.1.2. Needed Petroleum Crisp Value

Year            1965  1967  1969  1971  1973  1975  1977  1979  1981
Demand (ktoe)   8.03  8.27  8.5   8.73  8.97  9     9.06  9.16  9.28

3° List the table.

Table 3.1.3. Unary Regression Simplified Table

t     0      1      2      3      4      5     6      7      8      Σt = 36
w     8.03   8.27   8.5    8.73   8.97   9     9.06   9.16   9.28   Σw = 79
t²    0      1      4      9      16     25    36     49     64     Σt² = 204
w²    64.48  68.39  72.25  76.21  80.46  81    82.08  83.9   86.1   Σw² = 694.87
tw    0      8.27   17     26.19  35.88  45    54.36  63.42  74.24  Σtw = 324.36

with $\bar t = 36/9 = 4$, $\bar t^2 = 16$, $\bar w = 79/9 \approx 8.778$. Estimate the parameters $\hat\beta_0, \hat\beta_1$:
\[ \hat\beta_1 = \frac{\sum t_i w_i - N \bar t \bar w}{\sum t_i^2 - N \bar t^2} \approx 0.1392, \qquad \hat\beta_0 = \bar w - \hat\beta_1 \bar t \approx 8.2212. \]
Substituting into (3.1.5), $\hat w = 8.2212 + 0.1392 t$.

4° Test. From (3.1.6) we calculate $r = 1.652$; at $r_{0.05} = 0.666$ we have $r > r_{0.05}$, so the test passes. Again
\[ s = \sqrt{\frac{1.426 - 0.1392 \times 8.352}{7}} \approx 0.194, \]
then
\[ \underline w = 8.2212 - 2s + 0.1392 t = 7.8332 + 0.1392 t, \qquad \overline w = 8.2212 + 2s + 0.1392 t = 8.6092 + 0.1392 t. \]


5° Forecast:
\[ \underline w_{1998} = 7.8332 + 0.1392 \times 16.5 = 10.13, \qquad \overline w_{1998} = 8.6092 + 0.1392 \times 16.5 = 10.906, \]
so that
\[ \tilde y = \Big(\frac{\underline w_{1998} + \overline w_{1998}}{2},\ 0.382 \times \frac{\overline w_{1998} - \underline w_{1998}}{2},\ 0.618 \times \frac{\overline w_{1998} - \underline w_{1998}}{2}\Big) = (10.518, 0.1482, 0.2398), \]
i.e., the petroleum needed by the country in 1998 is a bit more than 10.518 (ktoe), which tallies with practice.

3.2 Self-regression Model with T-Fuzzy Variables

If we modify (3.1.1) to
\[ \tilde Y_t = \beta_0 + \beta_1 \tilde Y_{t-1} + \cdots + \beta_n \tilde Y_{t-n} + \varepsilon_t, \tag{3.2.1} \]
we call (3.2.1) an n-order self-regression model with T-fuzzy variables, where $\beta_0, \beta_1, \dots, \beta_n$ are parameters awaiting evaluation, $\tilde Y_t$ is a fuzzy correlated variable, $\tilde Y_{t-p} = (Y_{t-p}, \underline\eta_{t-p}, \overline\eta_{t-p})$ $(p = 1, \dots, n)$ is the independent variable shifted backward $p$ periods, and $\varepsilon_t$ is an error.

Theorem 3.2.1. Assume that the data set $\tilde Y_{(t-1)i}, \dots, \tilde Y_{(t-n)i}$ and $\tilde Y_{ti}$ is given by the model $\tilde Y_{ti} = \beta_0 + \beta_1 \tilde Y_{(t-1)i} + \cdots + \beta_n \tilde Y_{(t-n)i}$ $(i = 1, \dots, 3N)$. Then the system
\[ \frac{\partial r[\beta_0(J), \beta(J)]}{\partial \beta_p} = 0 \quad (p = 0, \dots, n) \]
has a unique solution $\beta_0(J), \beta_1(J), \dots, \beta_n(J)$ for every cone index.

Proof: Similarly to the proof of Theorem 3.1.2, the formula corresponding to (3.1.5) is $Z_t = \beta_0 + \beta_1 Z_{t-1} + \beta_2 Z_{t-2} + \cdots + \beta_n Z_{t-n}$, and then
\[ r(\beta_0, \beta) = \sum_{i=1}^{3N} d[\beta_0 + \beta_1 Z_{(t-1)i} + \cdots + \beta_n Z_{(t-n)i};\ Z_{ti}]^2. \tag{3.2.2} \]


The normal system $S(J)$ simplifies to the classical form
\[ \begin{cases} \sum_{i=1}^N Z_{ti} = N\beta_0(J) + \sum_{i=1}^N Z_{(t-1)i}\,\beta_1(J) + \cdots + \sum_{i=1}^N Z_{(t-n)i}\,\beta_n(J), \\[2pt] \sum_{i=1}^N Z_{(t-1)i}\,\beta_0(J) + \sum_{i=1}^N Z_{(t-1)i}^2\,\beta_1(J) + \cdots + \sum_{i=1}^N Z_{(t-1)i} Z_{(t-n)i}\,\beta_n(J) = \sum_{i=1}^N Z_{ti} Z_{(t-1)i}, \\[2pt] \qquad \cdots \\[2pt] \sum_{i=1}^N Z_{(t-n)i}\,\beta_0(J) + \sum_{i=1}^N Z_{(t-1)i} Z_{(t-n)i}\,\beta_1(J) + \cdots + \sum_{i=1}^N Z_{(t-n)i}^2\,\beta_n(J) = \sum_{i=1}^N Z_{ti} Z_{(t-n)i}. \end{cases} \]
These equations have a unique solution $\beta_p$ $(p = 0, 1, \dots, n)$, and the theorem is certified.

Hereby, the modeling steps for Model (3.2.1) can be summarized as follows.
1° Design a self-dependent sequence table from the tested data $\tilde Y_{(t-p)i} = (Y_{(t-p)i}, \underline\eta_{(t-p)i}, \overline\eta_{(t-p)i})$ and classify the data in the table by means of Definition 3.1.4.

Table 3.2.1. Self-related Sequence Table

[The table lists, by quarter Q (I-IV, repeated over the years), the 1983 sales sequence $\tilde{Y}_t$ and the back-shifted sequences $\tilde{Y}_{t-1}, \tilde{Y}_{t-2}, \cdots, \tilde{Y}_{t-p}$; each cell holds a T-fuzzy datum $(y_{(t-p)i}, \underline{\eta}_{(t-p)i}, \overline{\eta}_{(t-p)i})$ together with its defuzzified value $y_{t-p,i}$.]

Q—Quarter.

2° Change the fuzzy $\tilde{Y}_{(t-p)i}$ and the dependent variable $\tilde{Y}_{ti}$ into non-fuzzy values by the proof of Theorem 3.1.2.

3.2 Self-regression Model with T -Fuzzy Variables


3° Calculate the self-dependent coefficients: let

$$\gamma_p = \frac{N\sum\limits_{i=1}^{N} Z_{(t-p)i}Z_{ti} - \sum\limits_{i=1}^{N} Z_{(t-p)i}\sum\limits_{i=1}^{N} Z_{ti}}{\sqrt{\Big[N\sum\limits_{i=1}^{N} Z_{(t-p)i}^2 - \Big(\sum\limits_{i=1}^{N} Z_{(t-p)i}\Big)^2\Big]\Big[N\sum\limits_{i=1}^{N} Z_{ti}^2 - \Big(\sum\limits_{i=1}^{N} Z_{ti}\Big)^2\Big]}}. \qquad (3.2.3)$$

Calculate the quarterly self-related coefficients by moving backwards $p\ (p = 1, \cdots, n)$, and, taking $\gamma_K = \max\{\gamma_p \mid p = 1, \cdots, n\}$, it is proper to build the model on the benchmark time series $Z_t$ moved backwards $K$ quarters.

4° $\beta_p(J)\ (p = 0, 1, \cdots, n)$ is determined by $S(J)$ and substituted into (3.2.2). Let

$$IC = \frac{\sqrt{\frac{1}{N}\sum\limits_{i=1}^{N}(\hat{Z}_{ti} - Z_{ti})^2}}{\sqrt{\frac{1}{N}\sum\limits_{i=1}^{N}\hat{Z}_{ti}^2} + \sqrt{\frac{1}{N}\sum\limits_{i=1}^{N}Z_{ti}^2}} \quad (\hat{Z}_{ti}^2 + Z_{ti}^2 \neq 0). \qquad (3.2.4)$$

If $0 \leq IC \leq 1$, the forecast is effective: $IC \to 0$ (i.e., $\hat{Z}_{ti} \to Z_{ti}$) is the perfect case, while at $IC = 1$ the forecast is least accurate. Therefore, when $IC$ is a small positive number, the fuzzy self-regression forecast model determined by $\beta_p(J)\ (p = 0, 1, \cdots, n)$ can be used for an actual forecast.

Example 3.2.1: The candy sales of 1980-1983 in a certain place are shown in Table 3.2.2. Table 3.2.2. Candy Sales of 1980-1983 in a Certain Place (10,000 units)

Quarter   1980            1981            1982            1983
I         (23, 0.1, 0)    (25, 0.1, 0.1)  (26, 0.1, 0.2)  (27, 0.2, 0.1)
II        (11, 0.6, 0.8)  (11, 0.8, 0.5)  (12, 0.3, 1)    (13, 1.2, 0.3)
III       (11, 0.9, 0.4)  (12, 1, 0.5)    (14, 0.7, 0.9)  (15, 1.1, 0.8)
IV        (15, 0.8, 1)    (16, 0.6, 0.3)  (18, 0.1, 0.4)  (20, 0.4, 1)

Try to forecast the candy sales of Quarters I and II of 1984.

1. Choose a 1-order fuzzy self-regression model and list the self-related sequence table from the data in Table 3.2.2.

Note. The ordinary real data under the entries are obtained by taking the main value of the fuzzy numbers in the first quarter; on the two diagonals of the former half, $Z_{(t-p)i} = Y_{(t-p)i} + \overline{\eta}_{(t-p)i}$ is taken at odd positions and $Z_{(t-p)i} = Y_{(t-p)i} - \underline{\eta}_{(t-p)i}$ at even positions, while on the two diagonals of the latter half, $Z_{(t-p)i} = Y_{(t-p)i} - \underline{\eta}_{(t-p)i}$ is taken at odd positions and $Z_{(t-p)i} = Y_{(t-p)i} + \overline{\eta}_{(t-p)i}$ at even positions.

Table 3.2.3. Self-related Sequence Table

[The table pairs the 1983 sales $\tilde{Y}_t$ with the back-shifted sequences $\tilde{Y}_{t-1}$ through $\tilde{Y}_{t-12}$, quarter by quarter; each fuzzy entry $(y, \underline{\eta}, \overline{\eta})$ from Table 3.2.2 appears with its defuzzified value obtained by the rule in the Note above, e.g. $(16, 0.6, 0.3)$ with $15.4$, $(27, 0.2, 0.1)$ with $27$, $(13, 1.2, 0.3)$ with $11.8$, and $(20, 0.4, 1)$ with $21$.]


2. By means of (3.2.3),

$$\gamma_p = \frac{4\sum\limits_{i=1}^{4} Z_{(t-p)i}Z_{ti} - \sum\limits_{i=1}^{4} Z_{(t-p)i}\sum\limits_{i=1}^{4} Z_{ti}}{\sqrt{\Big[4\sum\limits_{i=1}^{4} Z_{(t-p)i}^2 - \Big(\sum\limits_{i=1}^{4} Z_{(t-p)i}\Big)^2\Big]\Big[4\sum\limits_{i=1}^{4} Z_{ti}^2 - \Big(\sum\limits_{i=1}^{4} Z_{ti}\Big)^2\Big]}} \quad (p = 1, \cdots, 12),$$

the self-related coefficients calculated are

$$R = \{\gamma_1, \gamma_2, \cdots, \gamma_{12}\} = \{-0.378, -0.618, -0.088, -0.990, -0.506, -0.601, -0.015, -0.980, -0.496, -0.595, -0.274, -0.802\},$$

so $\gamma_4 = \max |R| = 0.990$. Therefore the sequence is moved 4 quarters backwards:

$$Z_t = \beta_0 + \beta_1 Z_{t-4}.$$

Through the normal equations $S(J)$ we get

$$\beta_0(J) = \frac{\sum\limits_{i=1}^{4} Z_{ti}\sum\limits_{i=1}^{4} Z_{(t-4)i}^2 - \sum\limits_{i=1}^{4} Z_{(t-4)i}\sum\limits_{i=1}^{4} Z_{ti}Z_{(t-4)i}}{4\sum\limits_{i=1}^{4} Z_{(t-4)i}^2 - \Big(\sum\limits_{i=1}^{4} Z_{(t-4)i}\Big)^2} \approx -0.0959,$$

$$\beta_1(J) = \frac{4\sum\limits_{i=1}^{4} Z_{ti}Z_{(t-4)i} - \sum\limits_{i=1}^{4} Z_{(t-4)i}\sum\limits_{i=1}^{4} Z_{ti}}{4\sum\limits_{i=1}^{4} Z_{(t-4)i}^2 - \Big(\sum\limits_{i=1}^{4} Z_{(t-4)i}\Big)^2} \approx 1.0675.$$

Therefore

$$\hat{Z}_{ti} = -0.0959 + 1.0675\,Z_{(t-4)i}. \qquad (3.2.5)$$

3. Verification. The defuzzified 1983 sales data $Z_{ti} = \{27, 11.8, 13.9, 21\}$ read from Table 3.2.2 are substituted into (3.2.5), giving $\hat{Z}_{ti} = \{27.6591, 12.3939, 14.1019, 19.5461\}$; from (3.2.4),

$$IC = \frac{\sqrt{\frac{1}{4}\sum\limits_{i=1}^{4}(\hat{Z}_{ti} - Z_{ti})^2}}{\sqrt{\frac{1}{4}\sum\limits_{i=1}^{4}\hat{Z}_{ti}^2} + \sqrt{\frac{1}{4}\sum\limits_{i=1}^{4}Z_{ti}^2}} \approx 0.022,$$

so the forecast is very accurate. Therefore

$$\tilde{Y}_t = \beta_0(J)E + \beta_1(J)\tilde{Y}_{t-4} = -0.0959E + 1.0675\,\tilde{Y}_{t-4}$$

can be used to forecast the sales of Quarters I and II of 1984, that is,

$$\tilde{Y}_{t+1} = (28.7266, 0.2135, 0.10675), \quad \tilde{Y}_{t+2} = (13.7816, 1.281, 0.3203).$$
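The arithmetic of Example 3.2.1 can be re-run directly. The defuzzified lag-4 series below (the 1982 quarters) is my reading of Table 3.2.3, so treat it as an assumption; on these inputs the slope and inequality coefficient agree with the values derived above, and the intercept comes out as approximately −0.0959.

```python
import math

def gamma(z_lag, z):
    # self-correlation coefficient of (3.2.3)
    n = len(z)
    num = n * sum(a * b for a, b in zip(z_lag, z)) - sum(z_lag) * sum(z)
    den = math.sqrt((n * sum(a * a for a in z_lag) - sum(z_lag) ** 2)
                    * (n * sum(b * b for b in z) - sum(z) ** 2))
    return num / den

def fit(z_lag, z):
    # least-squares beta0, beta1 of Z_t = beta0 + beta1 * Z_{t-4}, as in S(J)
    n = len(z)
    b1 = (n * sum(a * b for a, b in zip(z_lag, z)) - sum(z_lag) * sum(z)) \
         / (n * sum(a * a for a in z_lag) - sum(z_lag) ** 2)
    b0 = (sum(z) - b1 * sum(z_lag)) / n
    return b0, b1

def ic(z_hat, z):
    # inequality coefficient of (3.2.4): 0 = perfect, 1 = useless
    n = len(z)
    return (math.sqrt(sum((a - b) ** 2 for a, b in zip(z_hat, z)) / n)
            / (math.sqrt(sum(a * a for a in z_hat) / n)
               + math.sqrt(sum(b * b for b in z) / n)))

z_1982 = [26.0, 11.7, 13.3, 18.4]   # defuzzified 1982 quarters (lag 4), my reading
z_1983 = [27.0, 11.8, 13.9, 21.0]   # defuzzified 1983 quarters

b0, b1 = fit(z_1982, z_1983)
z_hat = [b0 + b1 * a for a in z_1982]
print(round(b0, 4), round(b1, 4))   # → -0.0959 1.0675
print(round(ic(z_hat, z_1983), 3))  # → 0.022
```

The strong lag-4 correlation (about 0.99 in magnitude) is what singles out the 4-quarter seasonal model.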


3.3 Regression Model with (·, c) Fuzzy Variables

3.3.1 Determination of the Model with (·, c) Fuzzy Variables

Consider

$$\tilde{y} = \beta_0 E + \beta_1 \tilde{x}_1 + \cdots + \beta_n \tilde{x}_n + \varepsilon, \qquad (3.3.1)$$

where $\tilde{x}_p\ (p = 1, 2, \cdots, n)$ are (·, c) fuzzy variables, $\tilde{y}$ is a (·, c) fuzzy dependent variable, $E$ is the unit element $E = (1, 0)$, $\beta_0, \beta_1, \cdots, \beta_n \in \mathbb{R}$, and $\varepsilon$ is an error.

Definition 3.3.1. We call (3.3.1) a regression model with (·, c) fuzzy variables. If $\tilde{x} = (x, c_1)$ and $\tilde{y} = (y, c_2)$, then their metric $d$ on $T(\mathbb{R})$ is defined by

$$d(\tilde{x}, \tilde{y})^2 = \frac{(x - y - (c_1 - c_2))^2 + (x - y + (c_1 - c_2))^2 + (x - y)^2}{3}.$$

A deterministic approach to Model (3.3.1) is developed as follows.

Definition 3.3.2. Suppose $\tilde{x} = (x, c) \in P(\mathbb{R})$ with $x \geq c$ for each $\tilde{x}$. Then $P(\mathbb{R})$ is a cone of $T(\mathbb{R})$ and a closed convex subset of $T(\mathbb{R})$ with respect to the topology induced by the distance $d$.

Suppose the test data for Model (3.3.1) are $\tilde{x}_{1i}, \tilde{x}_{2i}, \cdots, \tilde{x}_{ni}; \tilde{y}_i$, with $\beta_p\ (p = 1, 2, \cdots, n)$ ordinary real numbers, $\tilde{x}_{pi}$ a (·, c) fuzzy variable, and $\tilde{y}_i$ a (·, c) affine function from $P(\mathbb{R})^N$ to $T(\mathbb{R})$, where $\tilde{x}_{pi} = (x_{pi}, c_{pi})$, $\tilde{y}_i = (y_i, c_i)\ (i = 1, 2, \cdots, N;\ p = 1, 2, \cdots, n)$. Let

$$r(\beta_0, \beta) = \sum_{i=1}^{N} d(\beta_0 + \beta_1 \tilde{x}_{1i} + \cdots + \beta_n \tilde{x}_{ni},\ \tilde{y}_i)^2.$$

Then $\beta_i$ (where $\beta = (\beta_1, \beta_2, \cdots, \beta_n)$) determined by the least-squares method is a (·, c) fuzzy number rather than a real number. Similarly to the method of Section 3.1, we first introduce definitions and properties.

Definition 3.3.3. Suppose $\tilde{x}_i = (\tilde{x}_{1i}, \tilde{x}_{2i}, \cdots, \tilde{x}_{ni})\ (i = 1, 2, \cdots, N)$. Partition the set of natural numbers $\{1, 2, \cdots, n\}$ into two exhaustive, mutually exclusive subsets $J(-)$ and $J(+)$, one of which may be empty; each such division is associated with a binary multi-index $J = (j_1, j_2, \cdots, j_n)$ defined by $j_i = 0$ if $i \in J(+)$ and $j_i = 1$ if $i \in J(-)$. In particular, we write $J_0 = (0, 0, \cdots, 0)$, $J_1 = (1, 1, \cdots, 1)$.

Definition 3.3.4. Use $C(J) = \{\beta_0 E + \beta_1 \tilde{x}_{i1} + \cdots + \beta_n \tilde{x}_{in} \mid \beta_p \geq 0 \text{ if } j_p = 0;\ \beta_p < 0 \text{ if } j_p = 1\}$ to represent a cone in $T(\mathbb{R})^N$, called the cone determined by the cone index $J$.
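As a quick check of Definition 3.3.1, the metric can be coded directly; the sketch below represents a (·, c) fuzzy number as a plain `(x, c)` tuple, which is my own encoding rather than the book's notation.

```python
import math

def dist_c(a, b):
    # metric of Definition 3.3.1 on (., c) fuzzy numbers a = (x, c1), b = (y, c2)
    (x, c1), (y, c2) = a, b
    return math.sqrt(((x - y - (c1 - c2)) ** 2
                      + (x - y + (c1 - c2)) ** 2
                      + (x - y) ** 2) / 3.0)

# identical fuzzy numbers are at distance 0; a spread difference alone
# already separates two numbers with the same main value
print(dist_c((5.0, 1.0), (5.0, 1.0)))                 # → 0.0
print(round(dist_c((5.0, 1.0), (5.0, 0.5)), 4))       # → 0.4082
```

Note that for equal spreads the metric collapses to the usual absolute difference of the main values, which is why crisp data are a special case.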


Proposition 3.3.1. For a given cone index $J$, the minimization model

$$r(\beta_0(J), \beta(J)) = \sum_{i=1}^{N} d(\beta_0 E + \beta_1 \tilde{x}_{1i} + \cdots + \beta_n \tilde{x}_{ni},\ \tilde{y}_i)^2 \qquad (3.3.2)$$

has a unique parameter solution $\beta_0(J), \beta(J)$ in the cone $C(J)$, where $\beta(J) = (\beta_1(J), \beta_2(J), \cdots, \beta_n(J))$.

Definition 3.3.5. Suppose the fuzzy data are $\tilde{x}_{1i}, \tilde{x}_{2i}, \cdots, \tilde{x}_{ni}; \tilde{y}_i$, and call $S(J)$ the system of $n + 1$ equations

$$\frac{\partial r(\beta_0(J), \beta(J))}{\partial \beta_p} = 0 \quad (p = 0, 1, \cdots, n).$$

If $S(J)$ has a solution $\beta_0(J), \beta_1(J), \cdots, \beta_n(J)$ such that $\beta_p > 0$ when $j_p = 0$ and $\beta_p < 0$ when $j_p = 1$, we call (3.3.2) $J$-compatible with the data. Thus a model is $J$-compatible if the normal equations $S(J)$ of the unconstrained minimization are compatible with $\beta_0 E + \beta_1 \tilde{x}_1 + \cdots + \beta_n \tilde{x}_n$ lying in $C(J)$.

Theorem 3.3.1. Let the data set $\tilde{x}_{1i}, \tilde{x}_{2i}, \cdots, \tilde{x}_{ni}; \tilde{y}_i\ (i = 1, 2, \cdots, N)$ satisfy Equation (3.3.2). Then for every cone index $J$ there exists a unique solution $\beta_0(J), \beta_1(J), \cdots, \beta_n(J)$ of the system $S(J)$.

Proof: Catalogue $\{\tilde{x}_{pi}\}$ by subscript: $i = 1, 2, \cdots, N$ is one type, $i = N + 1, \cdots, 3N$ the other. When $i = 1, 2, \cdots, N$, set $w_i = y_i$ and, for each $p$, $z_{pi} = x_{pi}$. When $i = N + 1, \cdots, 2N$, set $w_i = y_i - c_i$ and

$$z_{pi} = \begin{cases} x_{pi} - c_{pi}, & \text{if } j_p = 0,\\ x_{pi} + c_{pi}, & \text{if } j_p = 1; \end{cases}$$

when $i = 2N + 1, \cdots, 3N$, set $w_i = y_i + c_i$ and

$$z_{pi} = \begin{cases} x_{pi} + c_{pi}, & \text{if } j_p = 0,\\ x_{pi} - c_{pi}, & \text{if } j_p = 1. \end{cases}$$

From here we obtain a classical regression model with cone index $J$ corresponding to (3.3.1), suited to the data $w_i, z_{pi}\ (i = 1, 2, \cdots, 3N)$. Write it

$$w = \beta_0 + \beta_1 z_1 + \cdots + \beta_n z_n. \qquad (3.3.3)$$
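The classification in the proof can be sketched as follows: each fuzzy sample is expanded into three crisp rows, the sign pattern of the cone index $J$ deciding which support endpoint accompanies the shifted response. Function and variable names here are my own, not the book's.

```python
def defuzzify(xs, ys, J):
    """Expand (., c) samples into 3N crisp rows (w_i; z_pi) under cone index J.

    xs: list of samples, each a list of (x_p, c_p) pairs;
    ys: list of (y, c) responses; J: tuple of 0/1 indices, one per regressor.
    """
    w, z = [], []
    for row, (y, cy) in zip(xs, ys):            # i = 1..N: main values
        w.append(y)
        z.append([x for x, _ in row])
    for row, (y, cy) in zip(xs, ys):            # i = N+1..2N: lower branch
        w.append(y - cy)
        z.append([x - c if j == 0 else x + c for (x, c), j in zip(row, J)])
    for row, (y, cy) in zip(xs, ys):            # i = 2N+1..3N: upper branch
        w.append(y + cy)
        z.append([x + c if j == 0 else x - c for (x, c), j in zip(row, J)])
    return w, z

w, z = defuzzify([[(2.0, 0.5)], [(4.0, 1.0)]], [(3.0, 0.2), (6.0, 0.4)], (0,))
print(len(w))   # → 6, i.e. 3N rows for N = 2
```

Ordinary least squares on the expanded rows then yields the crisp model (3.3.3).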

By the classical least-squares method it is easy to find the unique optimal solution $\beta_p\ (p = 0, 1, \cdots, n)$ of (3.3.3) for a cone index $J$. Accordingly, it is of practical value to approximate Model (3.3.1) by the crisp model (3.3.3).

3.3.3 Obtaining (·, c) Fuzzy Data

In practice, data are mostly random and fuzzy; so-called "precise" data are almost always approximations of a true value. By using fuzzy data we obviously obtain more information about the objects. Therefore it is important to obtain fuzzy data, usually by the following methods.


A. Direct obtainment. Record experimental or measured data as fuzzy numbers according to their character.
B. Fitting. Fit the collected fuzzy data to a distribution function of known fuzzy numbers; the closest one is what we seek.
C. Assignment of information.
D. Structure methods, etc.

Only the construction of the (·, c) fuzzy number is introduced below. Historical data are fuzzy, but for a variety of reasons what we record is a group of real numbers $x_1, x_2, \cdots, x_N$; a (·, c) fuzzy number can be constructed from this group of "accurate" numbers before carrying out fuzzy time-series analysis. The steps are as follows.

1° Let $M_t = \max\{x_{t-1}, x_t, x_{t+1}\}$, $m_t = \min\{x_{t-1}, x_t, x_{t+1}\}$ at $t = 2, 3, \cdots, N - 1$, supposing each datum is influenced by the data one period (or two periods) before and after it, so that $M_t \geq m_t$, and

$$M_t = \max\{x_1, x_2\},\quad m_t = \min\{x_1, x_2\},\ \text{at } t = 1; \qquad M_t = \max\{x_{N-1}, x_N\},\quad m_t = \min\{x_{N-1}, x_N\},\ \text{at } t = N.$$

2° Let

$$\mu_{\tilde{y}_t}(x) = \begin{cases} 1 - \dfrac{1}{c_t}|x - \alpha_t|, & x \in [m_t, M_t],\\ 0, & x \notin [m_t, M_t], \end{cases}$$

where

$$\alpha_t = \frac{1}{2}(M_t + m_t), \quad c_t = \frac{1}{2}(M_t - m_t), \quad t = 1, 2, \cdots, N.$$

3° $\tilde{y}_t = (\alpha_t, c_t)$, the composition of $\alpha_t$ with $c_t$, is a (·, c) fuzzy number.

In the steps above, $c_t$ may also be a fixed positive number. In application, for a freely chosen $t$ within the interval $[1, N]$, the point is to choose the $c_t$ value corresponding to $t$ according to the practical situation. With the method discussed in this section we can design a series of systems with (·, c) fuzzy variables, such as computer breakdown diagnosis, future forecasting and identification.
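Steps 1°-3° can be sketched directly; the endpoint handling below follows the $t = 1$ and $t = N$ rules above, and the sample series is invented for illustration.

```python
def to_c_fuzzy(xs):
    # build (alpha_t, c_t) fuzzy numbers from a crisp series by steps 1-3 above
    out = []
    n = len(xs)
    for t in range(n):
        lo = max(t - 1, 0)
        hi = min(t + 2, n)       # neighbours x_{t-1}, x_t, x_{t+1}, clipped at the ends
        window = xs[lo:hi]
        M, m = max(window), min(window)
        out.append(((M + m) / 2.0, (M - m) / 2.0))
    return out

series = [23.0, 11.0, 11.0, 15.0, 25.0]
print(to_c_fuzzy(series))
# → [(17.0, 6.0), (17.0, 6.0), (13.0, 2.0), (18.0, 7.0), (20.0, 5.0)]
```

By construction the membership function of each $(\alpha_t, c_t)$ equals 1 at $\alpha_t$ and falls to 0 at $m_t$ and $M_t$.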

3.4 Self-regression with (·, c) Fuzzy Variables

Consider

$$\tilde{Y}_t = \tilde{A}_0 E + \tilde{A}_1 \tilde{Y}_{t-1} + \cdots + \tilde{A}_n \tilde{Y}_{t-n} + \varepsilon_t \qquad (3.4.1)$$

and

$$\tilde{y}_t = f_t(\tilde{y}_{t-1}, \tilde{y}_{t-2}, \cdots, \tilde{y}_{t-n}) + \varepsilon_t, \qquad (3.4.2)$$

where the data $\tilde{Y}_{t-p}, \tilde{y}_{t-p}\ (p = 1, \cdots, n)$ and the dependent sequences $\tilde{Y}_t, \tilde{y}_t$ are all (·, c) fuzzy data; $f_t$ is a fuzzy linear function in (3.4.1) and a fuzzy nonlinear function to be linearized in (3.4.2); and $\varepsilon_t$ is an error. We call (3.4.1) a linear self-regression model with (·, c) fuzzy variables and (3.4.2) a nonlinear self-regression model with (·, c) fuzzy variables.

3.4.1 Linear Model

For the linear self-regression model (3.4.1) with (·, c) fuzzy variables, we discuss its determination.

Definition 3.4.1. Let $P(\mathbb{R})$ be the subspace of $T(\mathbb{R})$ consisting of all non-negative elements. For each $(Y_{t-p}, \eta_{t-p}) \in P(\mathbb{R})$, $Y_{t-p} - \eta_{t-p} \geq 0$; $P(\mathbb{R})$ is a cone of $T(\mathbb{R})$, a closed convex subset with respect to the topology induced by $d$. When $\tilde{Y}_{t-p} = (Y_{t-p}, \eta_{t-p})$, $\tilde{Y}_t = (Y_t, \eta_t)$,

$$d(\tilde{Y}_{t-p}, \tilde{Y}_t)^2 = \frac{[Y_{t-p} - Y_t - (\eta_{t-p} - \eta_t)]^2 + [Y_{t-p} - Y_t + (\eta_{t-p} - \eta_t)]^2 + (Y_{t-p} - Y_t)^2}{3},$$

with $\tilde{Y}_t, \tilde{Y}_{t-p} \in P(\mathbb{R})^N$ and $\tilde{Y}_{ti}, \tilde{Y}_{(t-p)i} \in P(\mathbb{R})\ (p = 1, 2, \cdots, n;\ i = 1, 2, \cdots, N)$.

Definition 3.4.2. Let $\tilde{Y}_{t-p} = (\tilde{Y}_{t-p,1}, \tilde{Y}_{t-p,2}, \cdots, \tilde{Y}_{t-p,N})$. Partition the set of natural numbers $\{1, 2, \cdots, n\}$ into two exhaustive, mutually exclusive subsets $J(-)$ and $J(+)$, one of which may be empty. Each partition is associated with a binary multi-index $J = (j_1, j_2, \cdots, j_n)$ defined by $j_p = 0$ if $p \in J(+)$ and $j_p = 1$ if $p \in J(-)$. In particular, $J_0 = (0, 0, \cdots, 0)$, $J_1 = (1, 1, \cdots, 1)$. Denote by $C(J) = \{A_0 E + A_1 \tilde{Y}_{t-1} + \cdots + A_n \tilde{Y}_{t-n} \mid A_p > 0 \text{ if } j_p = 0;\ A_p < 0 \text{ if } j_p = 1\}\ (p = 1, 2, \cdots, n)$ the cone of $T(\mathbb{R})^n$ determined by the cone index $J$.

Proposition 3.4.1. For a given cone index $J$, the minimization model in the cone $C(J)$,

$$r(A_0(J), A(J)) = \sum_{i=1}^{N} d(A_0 + A_1 Y_{(t-1)i} + \cdots + A_n Y_{(t-n)i};\ Y_{ti})^2, \qquad (3.4.3)$$

has a unique parameter set $A_0(J), A_1(J), \cdots, A_n(J)$.


Definition 3.4.3. Let the system

$$\frac{\partial r(A_0(J), A(J))}{\partial A_p} = 0 \quad (p = 0, 1, \cdots, n)$$

be written $S(J)$. If $S(J)$ has a solution $A_p$ such that $A_p(J) > 0$ at $j_p = 0$ and $A_p(J) < 0$ at $j_p = 1$, then Model (3.4.3) is $J$-compatible with the data. If the minimization of the unconstrained normal equations $S(J)$ is compatible with $A_0 E + A_1 \tilde{Y}_{t-1} + \cdots + A_n \tilde{Y}_{t-n}$ lying in $C(J)$, we call the model $J$-compatible.

Theorem 3.4.1. Suppose the data set $Y_{(t-1)i}, \cdots, Y_{(t-n)i}$ and $Y_{ti}$ is given by the model $Y_{ti} = A_0 + \sum_{p=1}^{n} A_p Y_{(t-p)i}\ (i = 1, 2, \cdots, N)$. Then $S(J)$ has a unique solution $A_p(J)\ (p = 0, 1, \cdots, n)$ for all cone indices.

Proof: Classify the observation data by subscripts, letting $i = 1, 2, \cdots, N$ correspond to the data with small fluctuation and $i = N + 1, \cdots, 3N$ to the rest. Then $W_{ti} = Y_{ti}$ and, for each $p$, $Z_{(t-p)i} = Y_{(t-p)i}$ at $i = 1, 2, \cdots, N$; $W_{ti} = Y_{ti} - \eta_{ti}$ at $i = N + 1, \cdots, 2N$; and $W_{ti} = Y_{ti} + \eta_{ti}$ at $i = 2N + 1, \cdots, 3N$, with, for each $p$,

$$Z_{(t-p)i} = \begin{cases} Y_{(t-p)i} - \xi_{(t-p)i}, & \text{if } j_p = 0,\\ Y_{(t-p)i} + \xi_{(t-p)i}, & \text{if } j_p = 1. \end{cases}$$

Hence determining the self-regression model reduces to determining

$$W_{ti} = A_0 + A_1 Z_{(t-1)i} + \cdots + A_n Z_{(t-n)i}.$$

Let

$$r(A_0, A) = \sum_{i=1}^{3N} d\Big(A_0 + \sum_{p=1}^{n} A_p Z_{(t-p)i};\ W_{ti}\Big)^2$$

and $\dfrac{\partial r(A_0(J), A(J))}{\partial A_p} = 0$. Solving these normal equations yields the unique solution $A_p(J)\ (p = 0, 1, 2, \cdots, n)$.

So the modeling steps can be summarized as follows.
Step 1. Work out a self-dependent sequence table from the observation data and classify the data by Definition 3.4.2.
Step 2. Change the observation values $Y_{(t-p)i}$ and the dependent variable $Y_{ti}$ into non-fuzzy values by the proof of Theorem 3.4.1.


Step 3. Calculate

$$r_p = \frac{N\sum\limits_{i=1}^{N} Z_{(t-p)i}W_{ti} - \sum\limits_{i=1}^{N} Z_{(t-p)i}\sum\limits_{i=1}^{N} W_{ti}}{\sqrt{\Big[N\sum\limits_{i=1}^{N} Z_{(t-p)i}^2 - \Big(\sum\limits_{i=1}^{N} Z_{(t-p)i}\Big)^2\Big]\Big[N\sum\limits_{i=1}^{N} W_{ti}^2 - \Big(\sum\limits_{i=1}^{N} W_{ti}\Big)^2\Big]}} \quad (p = 1, 2, \cdots, n)$$

and take $|r_K| = \max\{|r_p| \mid p = 1, 2, \cdots, n\}$, so that the best model is determined as

$$\hat{W}_t = A_0 + \sum_{p} A_p Z_{t-p}.$$

Step 4. Decision. Let

$$IC = \frac{\sqrt{\frac{1}{N}\sum\limits_{i=1}^{N}(\hat{W}_{ti} - W_{ti})^2}}{\sqrt{\frac{1}{N}\sum\limits_{i=1}^{N}\hat{W}_{ti}^2} + \sqrt{\frac{1}{N}\sum\limits_{i=1}^{N}W_{ti}^2}} \quad (\hat{W}_{ti}^2 + W_{ti}^2 \neq 0).$$

The forecast is efficient at $IC \in (0, 1)$, perfect at $IC = 0$ and inefficient at $IC = 1$. So $\hat{Y}_{t+q} = A_0 + \sum_{p} A_p Y_{t-(p+q)}$ is determined, and the state at moment $q$ can be estimated as

$$Y_{t+q}^{*} \in [Y_{t+q} - 0.382\eta_{t+q},\ Y_{t+q} + 0.618\eta_{t+q}].$$
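The decision quantity of Step 4 and the interval estimate can be sketched as below; the 0.382/0.618 split of the spread $\eta$ follows the golden-section rule used in the text, and the sample numbers are invented.

```python
import math

def inequality_coefficient(w_hat, w):
    # IC of Step 4: 0 is a perfect forecast, values near 1 are useless
    n = len(w)
    num = math.sqrt(sum((a - b) ** 2 for a, b in zip(w_hat, w)) / n)
    den = (math.sqrt(sum(a * a for a in w_hat) / n)
           + math.sqrt(sum(b * b for b in w) / n))
    return num / den

def forecast_interval(y, eta):
    # state estimate Y* in [Y - 0.382*eta, Y + 0.618*eta]
    return (y - 0.382 * eta, y + 0.618 * eta)

print(round(inequality_coefficient([10.1, 12.2], [10.0, 12.0]), 3))
print(tuple(round(v, 3) for v in forecast_interval(20.0, 1.0)))  # → (19.618, 20.618)
```

A small IC on held-back data is what licenses using the fitted model for the next quarters.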

We were satisfied with the result after forecasting candy sales in some places with Model (3.4.1) for the first half of 1984. The methods above can be developed into more elaborate computer implementations.

3.4.2 Non-linear Model

In this section the defuzzification problem of (3.4.2) is resolved under a cone index $J$, the model being linearized by transformation.

Proposition 3.4.2. For a fixed cone index $J$, the minimization model in the cone $C(J)$,

$$r(\tilde{\beta}_0(J), \tilde{\beta}(J)) = \sum_{i=1}^{N} d(f_t(\tilde{y}_{(t-1)i}, \cdots, \tilde{y}_{(t-p)i}),\ f_t(\tilde{y}_{ti}))^2,$$

has a unique parameter solution $\tilde{\beta}_0(J), \tilde{\beta}_1(J), \cdots, \tilde{\beta}_n(J)$.


Definition 3.4.4. The systems

$$\frac{\partial r(\tilde{\beta}_0(J), \tilde{\beta}(J))}{\partial \tilde{\beta}_p} = 0 \quad \text{and} \quad \frac{\partial r_1(\tilde{\beta}_0(J), \tilde{\beta}(J))}{\partial \tilde{\beta}_p} = 0$$

are written $S(J)$ and $S_1(J)$, respectively.

Theorem 3.4.2. Suppose the data sets $\tilde{y}_{(t-1)i}, \cdots, \tilde{y}_{(t-n)i}; \tilde{y}_{ti}$ are all given by the model $\tilde{y}_t = f_t(y_{(t-1)i}, \cdots, y_{(t-p)i})\ (i = 1, 2, \cdots, N)$. Then for every cone index $J$, $S(J)$ has a unique solution $\tilde{\beta}_0(J), \tilde{\beta}_1(J), \cdots, \tilde{\beta}_n(J)$.

Proof: Proceed as for Definition 3.1 in [Cao89b]. When the data fluctuate little, we take $i = 1, 2, \cdots, N$; then $w_{ti} = y_{ti}$ and, for each $p$, $z_{(t-p)i} = y_{(t-p)i}$. At $i = N + 1, \cdots, 2N$, $w_{ti} = y_{ti} - \eta_{ti}$; at $i = 2N + 1, \cdots, 3N$, $w_{ti} = y_{ti} + \eta_{ti}$, with, for each $p$,

$$z_{(t-p)i} = \begin{cases} y_{(t-p)i} - \eta_{(t-p)i}, & \text{if } j_p = 0,\\ y_{(t-p)i} + \eta_{(t-p)i}, & \text{if } j_p = 1. \end{cases}$$

Therefore a deterministic self-regression model is obtained:

$$w_t = f_t(z_{t-1}, z_{t-2}, \cdots, z_{t-n}).$$

By variable replacement it is linearized, $L(w_t) = L[f_t(z_{t-1}, z_{t-2}, \cdots, z_{t-n})]$, i.e.,

$$U_t = \beta_0(J) + \sum_{p=1}^{n} \beta_p(J) z_{t-p}.$$

It is not difficult to draw the conclusion by applying the least-squares principle to $U_t$.

Proposition 3.4.3. For a fixed cone index $J$, the minimization model in the cone $C(J)$,

$$r_1(\tilde{\beta}_0(J), \tilde{\beta}(J)) = \sum_{i=1}^{N} \tilde{D}_i^2\, d[f_t(\tilde{y}_{(t-1)i}, \cdots, \tilde{y}_{(t-p)i}),\ f_t(\tilde{y}_{ti})]^2,$$

has a unique parameter solution $\tilde{\beta}_0(J), \tilde{\beta}_1(J), \cdots, \tilde{\beta}_n(J)$.

Theorem 3.4.3. Suppose a data set $\tilde{y}_{(t-1)i}, \cdots, \tilde{y}_{(t-n)i}; \tilde{y}_{ti}$ is given by Model (3.4.2). Then for all cone indices $J$, $S_1(J)$ has a unique solution $\tilde{\beta}_0(J), \tilde{\beta}_1(J), \cdots, \tilde{\beta}_n(J)$.


Proof: Similarly to the proof of Theorem 3.4.1, we only note that $S_1(J)$ reads

$$\begin{pmatrix}
\sum\limits_{i=1}^{N} D_i & \sum\limits_{i=1}^{N} D_i z_{(t-1)i} & \cdots & \sum\limits_{i=1}^{N} D_i z_{(t-p)i}\\
\sum\limits_{i=1}^{N} D_i z_{(t-1)i} & \sum\limits_{i=1}^{N} D_i z_{(t-1)i}^2 & \cdots & \sum\limits_{i=1}^{N} D_i z_{(t-1)i} z_{(t-p)i}\\
\vdots & \vdots & \cdots & \vdots\\
\sum\limits_{i=1}^{N} D_i z_{(t-p)i} & \sum\limits_{i=1}^{N} D_i z_{(t-1)i} z_{(t-p)i} & \cdots & \sum\limits_{i=1}^{N} D_i z_{(t-p)i}^2
\end{pmatrix}
\begin{pmatrix} \beta_0(J)\\ \beta_1(J)\\ \vdots\\ \beta_n(J) \end{pmatrix}
=
\begin{pmatrix}
\sum\limits_{i=1}^{N} D_i U_{ti}\\
\sum\limits_{i=1}^{N} D_i z_{(t-1)i} U_{ti}\\
\vdots\\
\sum\limits_{i=1}^{N} D_i z_{(t-p)i} U_{ti}
\end{pmatrix}. \qquad (3.4.4)$$
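The system above is simply a weighted least-squares normal system; the sketch below builds and solves it for a one-regressor toy case (data and names are mine, not the book's).

```python
def weighted_normal_equations(Z, U, D):
    # build the system of (3.4.4): rows/cols over [1, z_{t-1}, ..., z_{t-n}],
    # each sample i weighted by D_i
    rows = [[1.0] + z for z in Z]
    k = len(rows[0])
    A = [[sum(d * r[a] * r[b] for d, r in zip(D, rows)) for b in range(k)]
         for a in range(k)]
    v = [sum(d * r[a] * u for d, r, u in zip(D, rows, U)) for a in range(k)]
    return A, v

def solve(A, v):
    # Gauss-Jordan elimination with partial pivoting (enough for this sketch)
    n = len(v)
    M = [row[:] + [vi] for row, vi in zip(A, v)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# with unit weights this reduces to ordinary least squares on the line U = 1 + 2z
Z = [[0.0], [1.0], [2.0], [3.0]]
U = [1.0, 3.0, 5.0, 7.0]
beta = solve(*weighted_normal_equations(Z, U, [1.0, 1.0, 1.0, 1.0]))
print([round(b, 6) for b in beta])   # → [1.0, 2.0]
```

With non-constant weights $D_i$ the same code realizes the weighted fit discussed next.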

Obviously, (3.4.4) has a unique solution $\tilde{\beta}_0(J), \tilde{\beta}_1(J), \cdots, \tilde{\beta}_n(J)$. It is also verified in practice that determining the fuzzy nonlinear self-regression model by the weighted least-squares method makes the forecast more accurate. The construction of the model proceeds as follows.

1. From the observation data $(y_{(t-p)i}, \eta_{(t-p)i})$, build a fuzzy self-related sequence table like Table 3.2.1.
2. Defuzzify (3.4.2) (or apply variable replacement), changing it into a deterministic nonlinear model (or a linear fuzzy model).
3. By variable replacement (or defuzzification), change the corresponding model into a classical self-regression model

$$U_t = \beta_0(J) + \sum_{p=1}^{n} \beta_p(J) z_{t-p}.$$

4. Determine $r_\alpha$ from a table of critical values of the correlation coefficient. Let

$$r_p = \frac{N\sum\limits_{i=1}^{N} z_{(t-p)i}U_{ti} - \sum\limits_{i=1}^{N} z_{(t-p)i}\sum\limits_{i=1}^{N} U_{ti}}{\sqrt{\Big[N\sum\limits_{i=1}^{N} z_{(t-p)i}^2 - \Big(\sum\limits_{i=1}^{N} z_{(t-p)i}\Big)^2\Big]\Big[N\sum\limits_{i=1}^{N} U_{ti}^2 - \Big(\sum\limits_{i=1}^{N} U_{ti}\Big)^2\Big]}}$$

and calculate the quarterly self-related coefficient by moving backwards $p\ (p = 1, 2, \cdots, n)$. If $|r_p| > r_\alpha$, a significant linear relation holds between the series moved $p$ periods backwards and the benchmark time series in building the self-regression model.


Again take $|r_K| = \max\{|r_p| \mid p = 1, 2, \cdots, n\}$; the best model is then built on the benchmark time series $U_t$ moved $K$ quarters backwards:

$$U_t(\beta_0, \beta) = \beta_0 + \sum_{p=1}^{n} \beta_p z_{t-p}. \qquad (3.4.5)$$

5. Determine the parameters $\beta_0(J), \beta_1(J), \cdots, \beta_n(J)$ of (3.4.5) by the classical least-squares method and substitute them into (3.4.5). This is what we seek.
6. Verification. Let

$$IC = \frac{\sqrt{\frac{1}{N}\sum\limits_{i=1}^{N}(\hat{U}_{ti} - U_{ti})^2}}{\sqrt{\frac{1}{N}\sum\limits_{i=1}^{N}\hat{U}_{ti}^2} + \sqrt{\frac{1}{N}\sum\limits_{i=1}^{N}U_{ti}^2}} \quad (\hat{U}_{ti}^2 + U_{ti}^2 \neq 0).$$

The forecast is effective at $IC \in [0, 1)$, perfect at $IC = 0$ and ineffective at $IC = 1$.
7. Regeneration. According to $\beta_p\ (p = 1, 2, \cdots, n)$, determine the coefficients in (3.4.2), supposing the best model solved in the nonlinear self-regression problem to be

$$\hat{U}_t(\tilde{\beta}_0, \tilde{\beta}) = f_t(\tilde{y}_{t-1}, \cdots, \tilde{y}_{t-n}).$$

Given

$$\hat{U}_{t+q}(\tilde{\beta}_0, \tilde{\beta}) = f_{t+q}(\tilde{y}_{t-(1+q)}, \cdots, \tilde{y}_{t-(n+q)}),$$

we can forecast the state at moment $q$.

Example 3.4.1: Suppose $U_t = \beta_0(J) + \sum_{p=1}^{n} \beta_p(J) z_{t-p}$ and let $U_t = \ln \hat{U}_t$. Then

$$\hat{U}_t = e^{\tilde{\beta}_0(J) + \sum_{p=1}^{n} \tilde{\beta}_p(J) z_{t-p}} \ \Rightarrow\ \hat{U}_{t-(n+q)} = e^{\tilde{\beta}_0(J) + \sum_{p=1}^{n} \tilde{\beta}_p(J) z_{t-(n+q)}}.$$

Therefore

$$\hat{U}_{t-(n+q)} = e^{\tilde{\beta}_0 + \sum_{p=1}^{n} \tilde{\beta}_p \tilde{y}_{t-(n+q)}}$$

is what we seek. If parameters remain in the formula, they must be determined by an optimization method.

8. Determine the region of the forecast value [Cao89c]. Since $\tilde{U}_{t+q} = (U_{t+q}, \underline{\theta}_{t+q}, \overline{\theta}_{t+q})_T$, the forecasting region is

$$U_{t+q}^{*} \in [U_{t+q} - 0.382\underline{\theta}_{t+q},\ U_{t+q} + 0.618\overline{\theta}_{t+q}].$$
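The log-transform of Example 3.4.1 can be sketched end-to-end: fit the linear model to $\ln U$, then exponentiate the forecast. The data are made up to be exactly exponential, so the back-transformed forecast is exact up to floating point.

```python
import math

def fit_ols(x, y):
    # ordinary least squares for y = b0 + b1*x
    n = len(x)
    b1 = (n * sum(a * b for a, b in zip(x, y)) - sum(x) * sum(y)) \
         / (n * sum(a * a for a in x) - sum(x) ** 2)
    b0 = (sum(y) - b1 * sum(x)) / n
    return b0, b1

# exactly exponential data U = 2 * e^{0.5 z}: the log transform makes it linear
z = [0.0, 1.0, 2.0, 3.0]
U = [2.0 * math.exp(0.5 * v) for v in z]
b0, b1 = fit_ols(z, [math.log(u) for u in U])
forecast = math.exp(b0 + b1 * 4.0)      # back-transform at z = 4
print(round(forecast, 4))               # → 14.7781, i.e. 2*e^2
```

Any remaining structural parameters (such as $A_0$ in the next section's example) would still require a direct optimization step, as the text notes.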

3.5 Nonlinear Regression with T-fuzzy Data to be Linearized

3.5.1 Introduction

Consider the nonlinear model

$$\tilde{y} = f(\tilde{x}_1, \tilde{x}_2, \cdots, \tilde{x}_n) + \varepsilon, \qquad (3.5.1)$$

where $f$ is a fuzzy nonlinear function to be linearized, $\varepsilon$ is an error, and $\tilde{y} = (y, \underline{\eta}, \overline{\eta})$ and $\tilde{x}_p = (x_p, \underline{\xi}_p, \overline{\xi}_p)\ (p = 1, 2, \cdots, n)$ denote the T-fuzzy dependent and independent variables, respectively. We call (3.5.1) a nonlinear regression model with T-fuzzy variables; a classical model is a special case of it. In this section the de-T-fuzzification problem of (3.5.1) is resolved under a cone index $J$, the model being linearized by transformation. Meanwhile, the underlying theory of the model is demonstrated, and a method for the problem is advanced.

3.5.2 Preliminary Theorem and Properties

The definitions and properties relevant to T-fuzzy numbers are found in Section 1.7. It is easy to verify that a T-fuzzy datum $\tilde{x} = (x, \underline{\xi}, \overline{\xi})$ is a regular, convex fuzzy subset.

Definition 3.5.1. If $\tilde{x} = (m(x), L_1, R_1)$, $\tilde{y} = (m(y), L_2, R_2)$, then the distance on the T-fuzzy number set $T(\mathbb{R})$ is defined by

$$d(\tilde{x}, \tilde{y})^2 = D^2(\mathrm{Supp}(\tilde{x}), \mathrm{Supp}(\tilde{y})) + (m(\tilde{x}) - m(\tilde{y}))^2,$$

where $\mathrm{Supp}(\cdot)$ denotes the support and $m(\cdot)$ the main value. In particular, when $\tilde{x} = (x, \underline{\xi}, \overline{\xi})$, $\tilde{y} = (y, \underline{\eta}, \overline{\eta})$,

$$d(\tilde{x}, \tilde{y})^2 = \frac{(x - y - (\underline{\xi} - \underline{\eta}))^2 + (x - y + (\overline{\xi} - \overline{\eta}))^2 + (x - y)^2}{3}.$$

Lemma 3.5.1. $d(\tilde{y}_i, \tilde{y}_j)^2 = 2d(\tilde{y}_i, \tilde{x})^2 + 2d(\tilde{x}, \tilde{y}_j)^2 - 4d\Big(\tilde{x}, \dfrac{\tilde{y}_i + \tilde{y}_j}{2}\Big)^2$.

Proof: Similar to Lemma 3.1.1.

Theorem 3.5.1. Let $V$ be a closed cone in $P(\mathbb{R})$. Then for any $\tilde{x}$ in $P(\mathbb{R})$ there exists a unique T-fuzzy number $\tilde{y}_0$ in $V$ such that $d(\tilde{x}, \tilde{y}_0) \leq d(\tilde{x}, \tilde{y})$ for all $\tilde{y}$ in $V$. A necessary and sufficient condition for $\tilde{y}_0$ to be the unique minimizing fuzzy number in $V$ is that $\tilde{x}$ is $\tilde{y}_0$-orthogonal to $V$.

Proof: Similar to Theorem 3.1.1.


3.5.3 Two Kinds of De-T-Fuzzification and Their Equivalence

Based on the above theory, we take Model (3.5.1) as an example and inquire into methods of converting it to a non-fuzzy linear model.

I. De-T-fuzzifying before linearizing by variable replacement

Definition 3.5.2. For a given cone index $J$, the measurement between the fuzzy data and the regression curve is defined as

$$Q(\tilde{r}_0(J), \tilde{r}(J)) = \sum_{i=1}^{N} d(f(\tilde{x}_{1i}, \tilde{x}_{2i}, \cdots, \tilde{x}_{ni});\ \tilde{y}_i)^2. \qquad (3.5.2)$$

Theorem 3.5.2. Suppose the T-fuzzy data $\tilde{x}_{1i}, \tilde{x}_{2i}, \cdots, \tilde{x}_{ni}, \tilde{y}_i$ are all given from the model $\tilde{y}_i = f(\tilde{x}_{1i}, \tilde{x}_{2i}, \cdots, \tilde{x}_{ni})\ (i = 1, 2, \cdots, N)$. Then for every cone index $J$ there exists a unique solution $\tilde{r}_0(J), \tilde{r}_1(J), \cdots, \tilde{r}_n(J)$ of the normal equations

$$\frac{\partial Q(\tilde{r}_0(J), \tilde{r}(J))}{\partial \tilde{r}_p} = 0 \quad (p = 0, 1, \cdots, n).$$

Proposition 3.5.1. For a given cone index $J$, the minimization Model (3.5.2) has a unique parameter solution $\tilde{r}_0(J), \tilde{r}_1(J), \cdots, \tilde{r}_n(J)$ in the cone $C(J)$.

In fact, we can take a list of T-fuzzy samples $((x_1, \underline{\xi}_1, \overline{\xi}_1), (y_1, \underline{\eta}_1, \overline{\eta}_1)), \cdots, ((x_n, \underline{\xi}_n, \overline{\xi}_n), (y_n, \underline{\eta}_n, \overline{\eta}_n))$. For the samples with smaller fluctuation, let $w_i = y_i$ and, for each $p$, $z_{pi} = x_{pi}$, at $i = 1, 2, \cdots, N$; the remaining samples are handled as follows. On the one hand, let $w_i = y_i - \overline{\eta}_i$ and, for each $p$,

$$z_{pi} = \begin{cases} x_{pi} - \overline{\xi}_{pi}, & \text{if } j_p = 0,\\ x_{pi} + \overline{\xi}_{pi}, & \text{if } j_p = 1, \end{cases}$$

at $i = N + 1, N + 2, \cdots, 2N$. On the other hand, let $w_i = y_i + \underline{\eta}_i$ and, for each $p$,

$$z_{pi} = \begin{cases} x_{pi} + \underline{\xi}_{pi}, & \text{if } j_p = 0,\\ x_{pi} - \underline{\xi}_{pi}, & \text{if } j_p = 1, \end{cases}$$

at $i = 2N + 1, 2N + 2, \cdots, 3N$. Therefore (3.5.1) can be changed into the classical form

$$w_i = f(z_{1i}, z_{2i}, \cdots, z_{ni}) \quad (i = 1, 2, \cdots, N).$$

Through an appropriate linear transformation $L$, the linear regression model is then acquired:

$$U = r_0(J) + \sum_{p=1}^{n} r_p(J) z_p. \qquad (3.5.3)$$

Thereupon it is easy to obtain the result in Proposition 3.5.1 (or in Theorem 3.5.1).


Corollary 3.5.1. Under the conditions of Theorem 3.5.2, for a given cone index $J$, (3.5.3) has a unique group of coefficients $r_0(J), r_1(J), \cdots, r_n(J)$.

II. Variable replacement before de-T-fuzzification

Given $\tilde{y}$ as in (3.5.1), it can be changed into a linear function with T-fuzzy variables through an appropriate variable replacement:

$$\tilde{s} = \tilde{r}_0 + \sum_{p=1}^{n} \tilde{r}_p \tilde{u}_{pi}. \qquad (3.5.4)$$

Theorem 3.5.3. Under the conditions of Theorem 3.5.2, for a given cone index $J$, (3.5.4) has unique coefficients $\tilde{r}_p(J)\ (p = 0, 1, \cdots, n)$.

Proof: Because the coefficients $\tilde{r}_p\ (p = 0, 1, \cdots, n)$ in (3.5.4) are all confirmed by the T-fuzzy data $\tilde{u}_{pi}$, the theorem follows from Theorem 3.5.2 and the proof of Proposition 3.5.1.

It follows that (3.5.4) can be changed into a deterministic linear model

$$V = r_0'(J) + \sum_{p=1}^{n} r_p'(J) z_p'. \qquad (3.5.5)$$

Theorem 3.5.4. Under the conditions of Theorem 3.5.2, for the same fixed cone index $J$, the determined T-fuzzy data regression Equation (3.5.3) is equivalent to (3.5.5).

Proof: Under the same fixed cone index $J$, the original T-fuzzy datum $\tilde{x}_p$ is determined in the cone $C(J)$. For (3.5.1) we may first carry out de-T-fuzzification $N(\tilde{y})$ and then linearization $L(W)$ ($N$, $L$ denoting de-T-fuzzification and linearization, respectively), obtaining $z_p$; or we may linearize (3.5.1) first and then de-T-fuzzify, obtaining $z_p'$. The acquired independent-variable sequences are accordingly equal, i.e., $z_p = z_p'$. Moreover, in the cone $C(J)$ the normal equations corresponding to (3.5.3) and (3.5.5),

$$\frac{\partial Q(\tilde{r}_0(J), \tilde{r}_1(J), \cdots, \tilde{r}_n(J))}{\partial \tilde{r}_p} = 0 \quad \text{and} \quad \frac{\partial Q(\tilde{r}_0'(J), \tilde{r}_1'(J), \cdots, \tilde{r}_n'(J))}{\partial \tilde{r}_p'} = 0 \quad (p = 0, 1, \cdots, n),$$

each contain a unique parameter solution, $r_p(J)$ and $r_p'(J)$ respectively, and since $z_p = z_p'$ we have $r_p(J) = r_p'(J)\ (p = 0, 1, \cdots, n)$. Hence (3.5.3) $\Longleftrightarrow$ (3.5.5).


3.5.4 Weighting the Linearized Nonlinear Regression with T-Fuzzy Variables

T-fuzzy data reflect the observations at different positions in the whole test more objectively. In a convex cone the center value is regarded as the main value, with values distributed on both the left and right sides. Considering the influence degree of a data pair $\tilde{y}; \tilde{x}_1, \tilde{x}_2, \cdots, \tilde{x}_n$, it is effective to handle a linear regression problem with T-fuzzy variables by de-T-fuzzification [Cao93e]. But it is not necessarily best to handle a nonlinear regression with T-fuzzy variables by the above two replacements before determining the regression coefficients by the least-squares principle. Therefore we need fuzzy weight processing for the error term $\tilde{y}_i - \hat{\tilde{y}}_i$: at different points $\tilde{y}_i\ (i = 1, \cdots, N)$, when a similar deviation is transformed back to the original T-fuzzy variables, the transformation makes the partial difference rate directly proportional to the fuzzy difference quotient $\big(\frac{\tilde{\Delta} y}{\tilde{\Delta} s}\big)_i$ or the fuzzy derivative $\big(\frac{\tilde{d} y}{\tilde{d} s}\big)_i$. It is known that in practical operation the weighted model is more accurate than the unweighted one.

Write the fuzzy difference quotient or fuzzy derivative as $\tilde{D}_i = \big(\frac{\tilde{\Delta} y}{\tilde{\Delta} s}\big)_i$ or $\tilde{D}_i = \big(\frac{\tilde{d} y}{\tilde{d} s}\big)_i$, and let

$$Q_1(\tilde{r}_0, \tilde{r}) = \sum_{i=1}^{N} \Big[\Big(\frac{\tilde{\Delta} y}{\tilde{\Delta} s}\Big)_i (\tilde{s}_i - \hat{\tilde{s}}_i)\Big]^2 \ \Big(\text{or } \sum_{i=1}^{N} \Big[\Big(\frac{\tilde{d} y}{\tilde{d} s}\Big)_i (\tilde{s}_i - \hat{\tilde{s}}_i)\Big]^2\Big) = \sum_{i=1}^{N} [\tilde{D}_i (\tilde{s}_i - \hat{\tilde{s}}_i)]^2 = \sum_{i=1}^{N} \tilde{D}_i^2 \Big(\tilde{s}_i - \Big(\tilde{r}_0 + \sum_{p=1}^{n} \tilde{r}_p \tilde{u}_{pi}\Big)\Big)^2. \qquad (3.5.6)$$

We now discuss the problem by Method II of Section 3.5.3 (Method I gives a similar result).

Proposition 3.5.2. For a given cone index $J$, in the cone $C(J)$,

$$\Big(\frac{\tilde{\Delta} y}{\tilde{\Delta} s}\Big)_i \Rightarrow \Big(\frac{\Delta y(z(J))}{\Delta s(z(J))}\Big)_i, \qquad \Big(\frac{\tilde{d} y}{\tilde{d} s}\Big)_i \Rightarrow \Big(\frac{d y(z(J))}{d s(z(J))}\Big)_i,$$

and (3.5.6) can be changed into

$$Q_1(r_0, r) = \sum_{i=1}^{N} D_i^2 \Big(V_i - \Big(r_0 + \sum_{p=1}^{n} r_p z_{pi}\Big)\Big)^2, \qquad (3.5.7)$$

where $D_i$ denotes $\Big(\dfrac{\Delta y(z(J))}{\Delta s(z(J))}\Big)_i$ or $\Big(\dfrac{d y(z(J))}{d s(z(J))}\Big)_i$.


Proposition 3.5.3. For a given cone index $J$, the minimized model (3.5.6) in $C(J)$ has a unique parameter solution $\tilde{r}_0(J), \tilde{r}_1(J), \cdots, \tilde{r}_n(J)$.

Theorem 3.5.5. Let the T-fuzzy data $\tilde{x}_{1i}, \tilde{x}_{2i}, \cdots, \tilde{x}_{ni}; \tilde{y}_i$ be all given by the model $\tilde{y}_i = f_i(\tilde{x})\ (i = 1, 2, \cdots, N)$, $\tilde{x} = (\tilde{x}_1, \tilde{x}_2, \cdots, \tilde{x}_n)$. Then for the given cone index $J$, the normal equations $\dfrac{\partial Q_1(\tilde{r}_0, \tilde{r})}{\partial \tilde{r}_p} = 0$ contain a unique solution $\tilde{r}_0(J), \tilde{r}_1(J), \cdots, \tilde{r}_n(J)$.

Proof: Following the proofs of Proposition 3.5.1 and Theorem 3.5.4, (3.5.6) can be changed into (3.5.7), and the normal equations of (3.5.7) into

$$\begin{pmatrix}
\sum\limits_{i=1}^{N} D_i & \sum\limits_{i=1}^{N} D_i z_{1i} & \cdots & \sum\limits_{i=1}^{N} D_i z_{ni}\\
\sum\limits_{i=1}^{N} D_i z_{1i} & \sum\limits_{i=1}^{N} D_i z_{1i}^2 & \cdots & \sum\limits_{i=1}^{N} D_i z_{1i} z_{ni}\\
\vdots & \vdots & \cdots & \vdots\\
\sum\limits_{i=1}^{N} D_i z_{ni} & \sum\limits_{i=1}^{N} D_i z_{ni} z_{1i} & \cdots & \sum\limits_{i=1}^{N} D_i z_{ni}^2
\end{pmatrix}
\begin{pmatrix} r_0(J)\\ r_1(J)\\ \vdots\\ r_n(J) \end{pmatrix}
=
\begin{pmatrix}
\sum\limits_{i=1}^{N} D_i V_i\\
\sum\limits_{i=1}^{N} D_i z_{1i} V_i\\
\vdots\\
\sum\limits_{i=1}^{N} D_i z_{ni} V_i
\end{pmatrix}, \qquad (3.5.8)$$

i.e., $(Dz^T z)\,\tilde{r}(J) = Dz^T V$. Therefore a unique solution $r_0(J), r_1(J), \cdots, r_n(J)$ exists in (3.5.8).

Correspondingly, we can get a verification formula related to the regression equation [Guj86]:

$$\sum_{i=1}^{N} [D_i(V_i - \hat{V}_i)]^2 = \sum_{i=1}^{N} D_i^2 (V_i - \bar{V})^2 \Bigg[1 - \frac{\sum\limits_{i=1}^{N} D_i^2 (\hat{V}_i - \bar{V})^2}{\sum\limits_{i=1}^{N} D_i^2 (V_i - \bar{V})^2}\Bigg].$$

Obviously,

$$R^2 = \frac{\sum\limits_{i=1}^{N} D_i^2 (\hat{V}_i - \bar{V})^2}{\sum\limits_{i=1}^{N} D_i^2 (V_i - \bar{V})^2} \leq 1,$$

i.e.,

$$|R| = \sqrt{\frac{\sum\limits_{i=1}^{N} D_i^2 (\hat{V}_i - \bar{V})^2}{\sum\limits_{i=1}^{N} D_i^2 (V_i - \bar{V})^2}} \leq 1, \qquad (3.5.9)$$

i=1

calling R a weighted related-coeﬃcient. At |R| → 1, it represents more linear related between V and z. If R > Rα (determined by checking related coeﬃn cient table), then the linear relation of regression equation V = r0 + rp zp p=1

is signiﬁcant. The test of signiﬁcance in regression coeﬃcients is shown as follows. Let N i=1

G=

K N i=1

F =

Di2 (Vi −V )2

,

Di2 (Vi −Vi )2 N −K−1

(3.5.10)

R2p cpp N i=1

Di2 (Vi −Vi )2

(p = 1, 2, · · · , n).

N −K−1

Then negate H0 : rj = 0 at F (j) > Fα (1, N − K − 1), where cpp is p-element on the main diagonal of matrix (Dz T z)−1 . If some p exists, such that F (p) < Fα (1, N − K − 1), then it shows that zp inﬂuences V little, omitted here. 3.5.5 Numeric Example Example 3.5.1: Suppose that a non-linear fuzzy regression model as follows: y = A0 + be− x˜ , c

where A0 , b, c are all constants, and then, by its non-T -fuzziﬁcation, we have W = A0 + be− z . c

Besides, z is a geometrical sequence, and suppose Δ = Wk = A0 + be

− zc

k

, Wk+1 = A0 + be

hence Wk+1 = A0 + be

− zc Δ k

= A0 +

which can be turned into v = r0 + r1 u,

−z

zk+1 , then zk c k+1

,

Wk − A0 − Δ1 , b

3.6 Regression and Self-regression Models with Flat Fuzzy Variables

91

1 1 where v = ln(Wk+1 −A0 ), u = ln(Wk −A0 ), r0 = Δ ln |b| and r1 = − Δ contain parameters, which should be evaluated by an optimum seeking method. Therefore, The modeling steps of (3.5.1) should be concluded as follows:

1) Replacement. (3.5.1) is replaced variably (or dealing by non-T -fuzziﬁcation), and it is linearized (or changed into deterministic non-linearity type). 2) Change. Non-T -fuzzify (or variable replacement), and the problem is changed into a linear deterministic model: V = r0 (J ) + r1 (J )z1 + · · · + rn (J )zn .

(3.5.11)

3) Determination. Determine r0 (J ), r1 (J ), · · · , rn (J ) by solving (3.5.8), i.e., it is a regression coeﬃcient of (3.5.11). 4) Calculation. Calculate (3.5.9) and (3.5.10), and testify (3.5.11) by an ordinary method. 5) Forecast. Coeﬃcient in (3.5.1) is determined by its solution rp (J )(p = 0, 1 · ··, n), and then (3.5.1) can be used to forecast, the choice of q moment in forecasting region is similar to Ref.[Cao89c]. If yq = (yq , η q , η q ), then yq∗ ∈ [yq − 0.328ηq , yq + 0.618ηq ]. 3.5.6

Conclusion

The method can be programmed for operation on computers; thus the model presented here is more accurate, more effective and more practical than crisp, non-weighted nonlinear models.
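The remark on programmability can be made concrete. Below is a minimal sketch, with hypothetical parameter values, of the linearization used in Example 3.5.1 and of the forecast interval from step 5; it illustrates the technique and is not the author's original program.

```python
import numpy as np

# Linearization of Example 3.5.1 (hypothetical data): W_k = A0 + b*exp(-c/z_k),
# with z_k geometric of ratio delta, gives v = r0 + r1*u where
# v = ln(W_{k+1} - A0), u = ln(W_k - A0), r1 = 1/delta, r0 = (1 - 1/delta)*ln|b|.
A0, b, c, delta = 1.0, 2.0, 3.0, 2.0
z = 1.5 * delta ** np.arange(8)          # geometric sequence z_k
W = A0 + b * np.exp(-c / z)

u = np.log(W[:-1] - A0)
v = np.log(W[1:] - A0)
r1, r0 = np.polyfit(u, v, 1)             # classical least squares on the line
print(round(r1, 6), round(r0, 6))        # recovers 1/delta and (1 - 1/delta)*ln b

# Forecast interval from step 5: y_q* in [y_q - 0.328*eta, y_q + 0.618*eta_bar]
y_q, eta, eta_bar = 100.0, 5.0, 4.0
interval = (y_q - 0.328 * eta, y_q + 0.618 * eta_bar)
print(interval)
```

Because the logarithm turns the geometric recursion into an exact straight line, the optimum seeking for r0, r1 reduces to ordinary least squares.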

3.6

Regression and Self-regression Models with Flat Fuzzy Variables

3.6.1 Introduction

Because (·, c) fuzzy data contain L-R fuzzy variables, T-fuzzy variables and flat fuzzy variables (or trapezoid fuzzy variables), we can furthermore apply flat fuzzy variables x̃*i = (x⁻*i, x⁺*i, ξ*i, ξ̄*i), ỹ* = (y⁻*, y⁺*, η*, η̄*) to the regression and self-regression models in this section.

3.6.2 Determination of the Model with Flat Fuzzy Variables

Definition 3.6.1. Suppose that the models are

\[ \tilde{y} = \beta_0 E + \beta_1 \tilde{x}_1 + \cdots + \beta_n \tilde{x}_n + \varepsilon \]    (3.6.1)

and

\[ \tilde{y}_t = \beta_0 E + \beta_1 \tilde{x}_{t-1} + \cdots + \beta_n \tilde{x}_{t-n} + \varepsilon_t, \]    (3.6.2)


where x̃p, x̃_{t−p} (p = 1, 2, . . . , n) are flat fuzzy variables, and ỹ, ỹt are flat fuzzy function variables. We call (3.6.1) and (3.6.2) a regression model and a self-regression one with flat fuzzy variables, respectively; E is an n-vector whose components are all E = (1, 1, 0, 0), ε, εt are errors, and t is time. Because the variables in (3.6.1) and (3.6.2) are fuzzy, it is impossible to obtain a meaningful result by the classical least square method. Therefore, a determination path for modeling (3.6.1) and (3.6.2) is researched as follows.

Definition 3.6.2. Let x̃ = (x⁻, x⁺, ξ, ξ̄) ∈ P(R) for each x̃, with x⁻ ≥ ξ, x⁺ ≥ ξ̄. Then P(R) is one of the platforms of T(R), and is a convex closed subset of T(R) with respect to the topology induced by the distance d.

Suppose the test data to be x̃*1, x̃*2, . . . , x̃*N; ỹ*, where x̃*i = (x⁻*i, x⁺*i, ξ*i, ξ̄*i) (i = 1, 2, . . . , N), ỹ* = (y⁻*, y⁺*, η*, η̄*). When the model is a regression model with flat fuzzy variables, "∗" is taken to be p; when the model is a self-regression model with flat fuzzy variables, "∗" is taken to be t−p. Hence, for the models (3.6.1) and (3.6.2), βi (i = 0, 1, . . . , n) is an ordinary real number, x̃*i is a flat fuzzy variable, and ỹ* is a flat affine function from P(R)^N to T(R). Let

\[ r(\beta_0, \beta) = \sum_{i=1}^{N} d_i(\tilde{x}_{*i}, \tilde{y}_*)^2 = \sum_{i=1}^{N} [\tilde{y}_* - (\beta_0 + \beta_1 \tilde{x}_{*1} + \cdots + \beta_n \tilde{x}_{*n})]^2. \]

Then βp determined by applying the least square method is a flat fuzzy number rather than a real number, where β = (β1, β2, . . . , βn), so the classical least square method cannot be applied directly and a conversion has to be made. Similarly to the method of Section 3.1, we introduce the definitions and properties below.

Definition 3.6.3. Assume the data to be x̃*1, x̃*2, . . . , x̃*N. Partition the set of natural numbers {1, 2, · · · , n} into two exhaustive, mutually exclusive subsets T(−), T(+), one of which may be the empty set φ. To each such partition associate a binary multi-index T = (T1, T2, . . . , Tn) defined by Ti = 0 if i ∈ T(+), Ti = 1 if i ∈ T(−). In particular, we write T0 = (0, 0, · · · , 0), T1 = (1, 1, · · · , 1). Use

\[ C(T) = \{\beta_0 E + \beta_1 \tilde{x}_1 + \cdots + \beta_n \tilde{x}_n \mid \beta_p \geq 0 \text{ if } j_p = 0; \ \beta_p \leq 0 \text{ if } j_p = 1\} \]

to represent a platform in T(R)^N; we call it the platform determined by the platform index T.

Proposition 3.6.1. For a given platform index T, there exists a unique parameter solution β0(T), β1(T), . . . , βn(T) minimizing

\[ r(\beta_0(T), \beta(T)) = \sum_{i=1}^{N} [d(\beta_0 + \beta_1 x_{1i} + \cdots + \beta_n x_{ni}, y_i)]^2 \]    (3.6.3)

in the platform C(T), where β(T) = (β1(T), β2(T), . . . , βn(T)).


Definition 3.6.4. Suppose the data to be x̃*1, x̃*2, . . . , x̃*n; ỹ*. We call S(T) the system consisting of the n + 1 equations

\[ \frac{\partial r(\beta_0(T), \beta(T))}{\partial \beta_p} = 0 \quad (p = 0, 1, \ldots, n). \]

If S(T) has a solution β0(T), β1(T), . . . , βn(T) such that βp > 0 at jp = 0 and βp < 0 at jp = 1, then we call (3.6.3) T-compatible with the data. If the unconstrained least value of S(T) is compatible with β0E + Σ_{p=1}^{n} βp x̃p in C(T), then the model is called compatible.

Theorem 3.6.1. Suppose that the flat fuzzy data x̃1i, x̃2i, . . . , x̃ni, ỹi satisfy (3.6.1) and (3.6.2), respectively. Then, for every platform index T, there exists a unique solution β0(T), β1(T), . . . , βn(T) of the system

\[ \frac{\partial r(\beta_0(T), \beta(T))}{\partial \beta_p} = 0 \quad (p = 0, 1, \ldots, n). \]

Proof: Suppose that the flat fuzzy data are x̃*i = (x⁻*i, x⁺*i, ξ*i, ξ̄*i), ỹ* = (y⁻*, y⁺*, η*, η̄*), with "∗" taken to be p or t−p. Catalogue {x̃*i} by subscript. For i = 1, 2, . . . , N, take

\[ w_* = \frac{\eta_* y_*^- + \bar{\eta}_* y_*^+ + \eta_* \bar{\eta}_*}{\eta_* + \bar{\eta}_*} + \frac{\eta_* + \bar{\eta}_*}{2}, \]

and, to each ∗,

\[ z_{*i} = \frac{\xi_{*i} x_{*i}^- + \bar{\xi}_{*i} x_{*i}^+}{\xi_{*i} + \bar{\xi}_{*i}} + \frac{\xi_{*i} + \bar{\xi}_{*i}}{2}; \]

for i = N + 1, . . . , 2N, let w* = y⁻* − η*, and to each ∗,

\[ z_{*i} = \begin{cases} \dfrac{\xi_{*i} x_{*i}^- + \bar{\xi}_{*i} x_{*i}^+}{\xi_{*i} + \bar{\xi}_{*i}} - \xi_{*i}, & j_* = 0, \\[2mm] \dfrac{\xi_{*i} x_{*i}^- + \bar{\xi}_{*i} x_{*i}^+}{\xi_{*i} + \bar{\xi}_{*i}} + \xi_{*i}, & j_* = 1; \end{cases} \]

and for i = 2N + 1, . . . , 3N, let w* = y⁺* − η̄*, and to each ∗,

\[ z_{*i} = \begin{cases} \dfrac{\xi_{*i} x_{*i}^- + \bar{\xi}_{*i} x_{*i}^+}{\xi_{*i} + \bar{\xi}_{*i}} - \bar{\xi}_{*i}, & j_* = 0, \\[2mm] \dfrac{\xi_{*i} x_{*i}^- + \bar{\xi}_{*i} x_{*i}^+}{\xi_{*i} + \bar{\xi}_{*i}} + \bar{\xi}_{*i}, & j_* = 1. \end{cases} \]

With the 3N pairs (z*i, w*) so obtained, under the given platform index T we can change the regression or self-regression model with flat fuzzy variables into a determined one with platform index T:

\[ w = \beta_0 + \beta_1 z_1 + \cdots + \beta_n z_n, \]    (3.6.4)

\[ w_t = \beta_0 + \beta_1 z_{t-1} + \cdots + \beta_n z_{t-n}. \]    (3.6.5)
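As an illustration, the central-value part (i = 1, . . . , N) of the conversion can be sketched as follows. The formulas follow the reconstruction given in the proof above, and the data are hypothetical.

```python
import numpy as np

# Crisp representatives of flat fuzzy data (formulas as reconstructed above).
def w_center(y_minus, y_plus, eta, eta_bar):
    """w for a flat fuzzy output (y-, y+, eta, eta_bar), case i = 1..N."""
    return ((eta * y_minus + eta_bar * y_plus + eta * eta_bar)
            / (eta + eta_bar) + (eta + eta_bar) / 2)

def z_center(x_minus, x_plus, xi, xi_bar):
    """z for a flat fuzzy input (x-, x+, xi, xi_bar), case i = 1..N."""
    return ((xi * x_minus + xi_bar * x_plus) / (xi + xi_bar)
            + (xi + xi_bar) / 2)

# After the conversion, (3.6.4) is an ordinary linear model w = b0 + b1*z,
# so the classical least square method applies directly.
z = np.array([z_center(x, x + 1.0, 0.5, 0.5) for x in (1.0, 2.0, 3.0, 4.0)])
w = 2.0 + 0.5 * z                    # synthetic crisp responses on a line
b1, b0 = np.polyfit(z, w, 1)
print(round(b0, 6), round(b1, 6))
```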

From here we obtain a classical regression and self-regression model with platform index T corresponding to (3.6.1) and (3.6.2). By the classical least square method it is easy to find the optimal solution, the unique βp (p = 0, 1, . . . , n), of (3.6.4) or (3.6.5). Accordingly, it is of value to approach Models (3.6.1) and (3.6.2) by crisp models.

3.6.3

Conclusion

According to paper [Cao93e], the model in this section can be generalized to a model of nonlinear regression and time series. If we integrate the model and method here with data mining, we can more easily acquire fuzzy data in those characteristic problems which are difficult to describe by numerical values. At the same time, we can design a series of systems with (·, c) fuzzy data [YL99], such as computer fault diagnosis, future forecasting and identification.

4 Fuzzy Input-Output Model

Focusing on the expansion of a classical input-output model, this chapter first introduces a fuzzy input-output model, then inquires into an input-output model with T-fuzzy data and its application, and finally presents an input-output model with triangular fuzzy data.

4.1 Fuzzy Input-Output Mathematical Model

4.1.1 Introduction

In the real world there exists a close connection between production technology and the economy. The input-output of each department forms a complicated network system reflecting much fuzziness. Derived from reality and historical materials, the obtained data are obviously approximate, estimated valuations. If a fuzzy set method is used on those fuzzy "fluctuating" data, instead of forcing a classical mathematical one whose result would be unreliable, more information is kept than before. With the input-output model with T-fuzzy data raised here, we can describe the development law of the objective better. The research objects of input-output methods and models are extremely complex social systems. But in input-output models, the input quantity, the output quantity, the direct consumption coefficients and the complete consumption coefficients all demand mathematical precision, which conflicts with the fuzziness of the data. When the complexity of a system increases greatly, the ability to make it precise decreases; beyond a certain threshold, complexity and precision mutually exclude each other, and the fluctuation of parameters needs to be considered in input-output models. Accompanying complexity are inaccuracy and imprecision, that is, fuzziness. Therefore, complexity in a social economic system can be studied accurately and scientifically by establishing fuzzy input-output models. B.-Y. Cao: Optimal Models & Meth. with Fuzzy Quantities, STUDFUZZ 248, pp. 95–115. © Springer-Verlag Berlin Heidelberg 2010, springerlink.com


4.1.2

Models

Consider Table 4.1.1.

Table 4.1.1. Fuzzy Valued Input-Output Table

Input \ Output        Used in middle                                     Final products                          Total
                      Dept 1   Dept 2   · · ·  Dept n   Amount          Consumption  Accumulation  Amount       value
Material consumption
  Dept 1              x̃11     x̃12     · · ·  x̃1n     Σ_j x̃1j        ỹ11         ỹ12          ỹ1          x̃1
  Dept 2              x̃21     x̃22     · · ·  x̃2n     Σ_j x̃2j        ỹ21         ỹ22          ỹ2          x̃2
  · · ·               · · ·    · · ·    · · ·  · · ·    · · ·           · · ·        · · ·         · · ·        · · ·
  Dept n              x̃n1     x̃n2     · · ·  x̃nn     Σ_j x̃nj        ỹn1         ỹn2          ỹn          x̃n
Newly made value
  Pay                 ṽ1      ṽ2      · · ·  ṽn      Ṽ
  Net profit          M̃1      M̃2      · · ·  M̃n      M̃
  Amount              Ñ1      Ñ2      · · ·  Ñn      Ñ
Total value           x̃1      x̃2      · · ·  x̃n      X̃

where
x̃i (i = 1, · · · , n) — fuzzy total values of products made by the i-th material production department in the scheduled time;
ỹi (i = 1, · · · , n) — fuzzy total values of final products made by the i-th material production department;
x̃ij (i, j = 1, · · · , n) — fuzzy values of products distributed to the j-th department by the i-th department;
ṽj (j = 1, · · · , n) — fuzzy values made by the necessary labor of workers in the j-th material production department;
M̃j (j = 1, · · · , n) — fuzzy values made by the j-th material production department in the scheduled time;


Ñj (j = 1, · · · , n) — fuzzy values newly made by the j-th material production department in the scheduled time.

Suppose that all of them are T-fuzzy numbers. The data in Table 4.1.1 satisfy the following balancing formulas:

\[ \sum_{j=1}^{n} \tilde{x}_{ij} + \tilde{y}_i = \tilde{x}_i \quad (i = 1, 2, \cdots, n), \]

\[ \sum_{i=1}^{n} \tilde{x}_{ij} + \tilde{N}_j = \tilde{x}_j \quad (j = 1, 2, \cdots, n). \]

By the fuzzy consumption coefficient formula ãij = x̃ij / x̃j (i, j = 1, 2, · · · , n), we can change the two formulas above into

\[ \sum_{j=1}^{n} \tilde{a}_{ij} \tilde{x}_j + \tilde{y}_i = \tilde{x}_i \quad (i = 1, 2, \cdots, n) \]    (4.1.1)

and

\[ \sum_{i=1}^{n} \tilde{a}_{ij} \tilde{x}_j + \tilde{N}_j = \tilde{x}_j \quad (j = 1, 2, \cdots, n). \]    (4.1.2)

If we write (4.1.1) and (4.1.2) in the form of fuzzy matrices and vectors, then

\[ \tilde{A}\tilde{X} + \tilde{Y} = \tilde{X} \quad \text{and} \quad \tilde{C}\tilde{X} + \tilde{N} = \tilde{X}, \]    (4.1.3)

i.e.,

\[ (\tilde{I} - \tilde{A})\tilde{X} = \tilde{Y} \]    (4.1.4)

and

\[ (\tilde{I} - \tilde{C})\tilde{X} = \tilde{N}, \]    (4.1.5)

where

\[ \tilde{A} = \begin{pmatrix} \tilde{a}_{11} & \tilde{a}_{12} & \cdots & \tilde{a}_{1n} \\ \tilde{a}_{21} & \tilde{a}_{22} & \cdots & \tilde{a}_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \tilde{a}_{n1} & \tilde{a}_{n2} & \cdots & \tilde{a}_{nn} \end{pmatrix} \]

is a fuzzy direct consumption coefficient matrix;

\[ \tilde{C} = \begin{pmatrix} \sum_{i=1}^{n} \tilde{a}_{i1} & 0 & \cdots & 0 \\ 0 & \sum_{i=1}^{n} \tilde{a}_{i2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sum_{i=1}^{n} \tilde{a}_{in} \end{pmatrix} \]    (4.1.6)

is a fuzzy material consumption coefficient matrix, and

\[ \tilde{I} = \begin{pmatrix} \tilde{E} & \tilde{0} & \cdots & \tilde{0} \\ \tilde{0} & \tilde{E} & \cdots & \tilde{0} \\ \vdots & \vdots & \ddots & \vdots \\ \tilde{0} & \tilde{0} & \cdots & \tilde{E} \end{pmatrix} \]

is a fuzzy unit matrix, where Ẽ = (1, 0, 0), 0̃ = (0, 0, 0), and

\[ \tilde{X} = \begin{pmatrix} \tilde{x}_1 \\ \tilde{x}_2 \\ \vdots \\ \tilde{x}_n \end{pmatrix}, \quad \tilde{Y} = \begin{pmatrix} \tilde{y}_1 \\ \tilde{y}_2 \\ \vdots \\ \tilde{y}_n \end{pmatrix}, \quad \tilde{N} = \begin{pmatrix} \tilde{N}_1 \\ \tilde{N}_2 \\ \vdots \\ \tilde{N}_n \end{pmatrix}. \]    (4.1.7)

A fuzzy complete consumption coefficient matrix B̃ is given by

\[ \tilde{A} + \tilde{B}\tilde{A} = \tilde{B}, \quad \text{where} \quad \tilde{B} = \begin{pmatrix} \tilde{b}_{11} & \tilde{b}_{12} & \cdots & \tilde{b}_{1n} \\ \tilde{b}_{21} & \tilde{b}_{22} & \cdots & \tilde{b}_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \tilde{b}_{n1} & \tilde{b}_{n2} & \cdots & \tilde{b}_{nn} \end{pmatrix}. \]

Definition 4.1.1. We call (4.1.4) and (4.1.5) a fuzzy Leontief input-output mathematical model.

Note 4.1.1. From the theory of the next section, we can show that it matters little whether the result of dividing two fuzzy numbers is a fuzzy one, because ãij = x̃ij/x̃j can be turned into an ordinary parameter under the cone index J. Therefore, we introduce two kinds of determinacy modeling methods in the following.
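Once the fuzzy coefficients are reduced to ordinary parameters under a cone index J, model (4.1.4) becomes the classical Leontief system. A minimal sketch with hypothetical crisp data:

```python
import numpy as np

# Crisp Leontief computation underlying (4.1.4): solve (I - A) X = Y.
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])          # direct consumption coefficients a_ij
Y = np.array([100.0, 200.0])        # final demand

X = np.linalg.solve(np.eye(2) - A, Y)   # total output X = (I - A)^{-1} Y
print(np.round(X, 3))

# Balance check, as in (4.1.1): sum_j a_ij x_j + y_i = x_i
assert np.allclose(A @ X + Y, X)
```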

4.2 Input-Output Model with T-Fuzzy Data

4.2.1 Introduction

The author aims at extending a determinate classical input-output model to the case with T-fuzzy data, based on the properties of T-fuzzy numbers. At the same time, he shows the effectiveness in theory of an input-output model with T-fuzzy data, discusses the non-fuzzification problem of the Leontief synthesized model with T-fuzzy data, and comes up with a new solution to an input-output model with T-fuzzy data under a cone index [Cao93b].

4.2.2 Models and Their Properties

A. Fuzzy Leontief model and its effectiveness

Definition 4.2.1. Suppose Models (4.1.4) and (4.1.5) are given with T-fuzzy data; we call them a Leontief input-output mathematical model with T-fuzzy data.

Definition 4.2.2. For a given cone index J, the matrix Ã(J), or the fuzzy Leontief model corresponding to it, is J-effective if there exists a solution vector X̃(J) ≥ 0 for any final demand vector Ỹ(J) ≥ 0 in (4.1.4).

Definition 4.2.3. For a given cone index J, the n-th order square matrix Ã(J) is called J-separable if, through an interchange of rows and columns, it can be moved into the form

\[ \tilde{A}(J) = \begin{pmatrix} \tilde{A}_1(J) & 0 \\ \tilde{A}_2(J) & \tilde{A}_3(J) \end{pmatrix}, \]

where Ã1(J) and Ã3(J) are k-th and (n − k)-th order square matrices (k < n), respectively; otherwise Ã(J) is called a J-nonseparable matrix, and λ_{Ã(J)} is defined as the characteristic value of the nonseparable, non-negative square matrix Ã(J).

Theorem 4.2.1. Let a Leontief model (4.1.4) be given through T-fuzzy data. For a given cone index J, the model is J-effective only if λ_{Ã(J)} < 1.

Proof: Let the T-fuzzy data satisfying Model (4.1.4) be (x̃1j, x̃1), · · · , (x̃nj, x̃n). For a given cone index J, it is not difficult to prove that ãij determined by the T-fuzzy-data determinacy method is

\[ \tilde{a}_{ij}(J) = \frac{\tilde{x}_{ij}(J)}{\tilde{x}_j(J)} = \frac{\tilde{z}_{ij}}{\tilde{z}_j} = a_{ij}, \]

so that Ã(J) = (ãij(J)) = (aij) (i, j = 1, · · · , n). Then each Ỹi is determined respectively. Hence the fuzzy Leontief model is determined to be a distinct one,

\[ (I - \tilde{A}(J))W(J) = \tilde{Y}(J) \]    (4.2.1)

on the data Ui, z1i, · · · , zni (i = 1, · · · , 3N), with I indicating a unit matrix depending on the cone index J. From the theorem in [Lin85, Chapter 3, Section 3], (4.2.1) is effective only if λ_{Ã(J)} < 1, so that (4.1.4) is J-effective.

Corollary 4.2.1. Under the assumption of Theorem 4.2.1, if some final demand Ỹ(J) > 0 exists such that (4.2.1) has a non-negative solution, we call Model (4.1.4) J-effective.
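The effectiveness condition of Theorem 4.2.1 is easy to check numerically once A(J) is crisp. A sketch (the matrix values are those appearing later in Section 4.2.3):

```python
import numpy as np

# Productivity check from Theorem 4.2.1: the crisp model is effective only if
# the dominant eigenvalue (spectral radius) of A(J) is below 1.
A = np.array([[0.109, 0.163, 0.149],
              [0.154, 0.075, 0.196],
              [0.293, 0.325, 0.287]])

lam = max(abs(np.linalg.eigvals(A)))    # spectral radius lambda_A(J)
print(lam < 1.0)                        # for a non-negative A with lam < 1,
                                        # (I - A)^{-1} exists and is non-negative
```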


We can prove this corollary in a way similar to Theorem 4.2.1 and the corollary in [Lin85, Chapter 3, Section 3].

When the total amount of material resources is less than or equal to the fuzzy final demand amount and the total amount L̃ of labor resources is finite, the fuzzy Leontief model

\[ (\tilde{I} - \tilde{A})\tilde{X} = \tilde{Y} \]

has the fuzzy constraint

\[ (\tilde{I}, \tilde{X}) \leq \tilde{L}, \]

where (Ĩ, X̃) indicates the inner product of the fuzzy vectors Ĩ and X̃. If M̃ = αX̃ indicates an objective and we seek Max{M̃ = αX̃}, the problem above can be changed into a fuzzy input-output optimization model built as follows:

\[ \begin{aligned} \max \ & \{\tilde{M} = \alpha \tilde{X}\} \\ \text{s.t.} \ & (\tilde{I} - \tilde{A})\tilde{X} \geq \tilde{Y}, \\ & \tilde{X} \leq \tilde{L}, \quad \tilde{X} \geq 0, \end{aligned} \]    (4.2.2)

where α indicates the tax and profit of a unit product value.

Theorem 4.2.2. Let (4.2.2) be given entirely by T-fuzzy data. Then, for a given cone index J, when Ã is J-effective, an effective solution X̃(J) and α(J) exist in the corresponding model.

Proof: In a way similar to Theorem 4.2.1, we can prove that (4.2.2) can be turned into a determinate linear programming problem under a given cone index J:

\[ \begin{aligned} \max \ & \tilde{M}(J) = \alpha W(J) \\ \text{s.t.} \ & (\tilde{I} - \tilde{A}(J))W(J) \geq \tilde{Y}(J), \end{aligned} \]    (4.2.3)

\[ W(J) \leq \tilde{L}(J), \quad W(J) \geq 0. \]    (4.2.4)

From (4.2.3) we know the feasible solution set of W(J) is bounded, and Ỹ(J), a final demand vector, is a determined bounded one; so is the problem's feasible solution field. At the same time, since Ã is J-effective, Ã(J) is effective, so there exists a vector W(J) ≥ 0 such that

\[ (\tilde{I} - \tilde{A}(J))W(J) \geq \tilde{Y}(J), \]

and since L̃(J) > 0, a λ > 0 must exist such that

\[ (\lambda W(J), l) < \tilde{L}(J). \]

Take α1 = λ, w1(J) = λW(J); then w1(J), α1 is a group of feasible solutions to (4.2.3) and (4.2.4).
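The determinate program (4.2.3)-(4.2.4) is an ordinary linear program. A minimal sketch with hypothetical two-sector data, using SciPy's `linprog` (the data and problem size are assumptions for the demo):

```python
import numpy as np
from scipy.optimize import linprog

# Sketch of (4.2.3)-(4.2.4):  max alpha.W  s.t. (I - A)W >= Y, 0 <= W <= L.
# linprog minimizes, so the objective is negated.
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])
Y = np.array([100.0, 200.0])
L = np.array([400.0, 500.0])
alpha = np.array([0.14, 0.10])

res = linprog(c=-alpha,
              A_ub=-(np.eye(2) - A),   # (I - A)W >= Y  ->  -(I - A)W <= -Y
              b_ub=-Y,
              bounds=[(0.0, L[0]), (0.0, L[1])])
print(res.success, np.round(res.x, 3))
```

Since both objective coefficients are positive and the upper corner W = L remains feasible here, the optimum sits at the resource bounds.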


Corollary 4.2.2. Under the conditions of Theorem 4.2.2, the dual problem of (4.2.2) is

\[ \begin{aligned} \min \ & \{\tilde{Y}\tilde{P} + \tilde{L}\tilde{Q}\} \\ \text{s.t.} \ & (\tilde{I} - \tilde{A})^T \tilde{P} + \tilde{Q} \geq \alpha, \\ & \tilde{P}, \tilde{Q} \geq \tilde{0}, \end{aligned} \]    (4.2.5)

and there exists a finite solution P̃, Q̃.

Proof: Because the cone index given in Theorem 4.2.2 coincides with the determined one, under this cone index J we can change (4.2.5) into a determinate model:

\[ \begin{aligned} \min \ & \{\tilde{Y}(J)U + \tilde{L}(J)V\} \\ \text{s.t.} \ & (\tilde{I} - \tilde{A}(J))^T U + V \geq \alpha, \\ & U, V \geq 0. \end{aligned} \]    (4.2.6)

Obviously, (4.2.6) is the dual programming of (4.2.3) and (4.2.4), so that (4.2.5) is the fuzzy dual of (4.2.2). Again, from the duality theory of linear programming, (4.2.5) has a finite solution P̃, Q̃.

Then the solution steps for Model (4.2.2) are as follows.

(1) For a given cone index J, (4.2.3) and (4.2.4) result from non-fuzzifying (4.2.2).
(2) Obtain the optimal solution vectors W(J), Ỹ(J) of (4.2.3) and (4.2.4).
(3) Take the maximum value of L̃(J).
(4) Solve a group of solutions w1*(J) for the different Ỹⁱ(J) in Programming (4.2.3) and (4.2.4), respectively.
(5) Compare. Let wi*(J) ≤ L̃(J) and introduce the maximum solution vector into the objective. Then we obtain M̃⁽ʲ⁾ = (M⁽ʲ⁾, m⁽ʲ⁾, m̄⁽ʲ⁾).
(6) Determine. Let Dj = (4/N) d(M̃⁽ʲ⁾, 0̃)² (j = 1, · · · , N). Then D* = Max{Dj | j ∈ N} is what we want to obtain, where N denotes the number of schemes.

B. Fuzzy Leontief synthesized model and its properties of solution

Definition 4.2.4. If n kinds of products and m kinds (m > n) of technology modes exist in the system, and if the total-product fuzzy vector produced under the m kinds of technology is X̃ = (x̃1, x̃2, · · · , x̃m)ᵀ, then

\[ \hat{\tilde{A}} = \begin{pmatrix} \tilde{a}_{11} & \tilde{a}_{12} & \cdots & \tilde{a}_{1n} \\ \tilde{a}_{21} & \tilde{a}_{22} & \cdots & \tilde{a}_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \tilde{a}_{n1} & \tilde{a}_{n2} & \cdots & \tilde{a}_{nn} \end{pmatrix} \quad \text{and} \quad \hat{\tilde{A}}_\sigma = \begin{pmatrix} \tilde{a}_{i_1 1} & \tilde{a}_{i_1 2} & \cdots & \tilde{a}_{i_1 n} \\ \tilde{a}_{i_2 1} & \tilde{a}_{i_2 2} & \cdots & \tilde{a}_{i_2 n} \\ \vdots & \vdots & \ddots & \vdots \\ \tilde{a}_{i_n 1} & \tilde{a}_{i_n 2} & \cdots & \tilde{a}_{i_n n} \end{pmatrix} \]

are defined respectively as a fuzzy technology matrix and as a fuzzy constant technology matrix of an arbitrary fixed technology mode i_k ∈ M_k. We call

\[ \hat{\tilde{I}} = (\tilde{e}_{ij})_{m \times n} \quad \text{and} \quad I = (e_{ij})_{m \times n}, \quad \text{with} \quad \tilde{e}_{ij} = \begin{cases} \tilde{E}, & i \in M_j, \\ \tilde{0}, & i \notin M_j, \end{cases} \qquad e_{ij} = \begin{cases} 1, & i \in M_j, \\ 0, & i \notin M_j, \end{cases} \]

replacement matrices, where M_j denotes the set of technology modes in which the j-th kind of product can be made.

Definition 4.2.5. For any Ỹ ≥ 0,

Mi denoting a technology mode set, in which we can make i−kinds of products. Deﬁnition 4.2.5. For any Y˜ 0, ˜X ˜ − Aˆ ˜X ˜ = Y˜ Iˆ

(4.2.7)

is a fuzzy Leontief synthesized model.

Theorem 4.2.3. Let the set of n kinds of technology modes be σ = (i1, i2, · · · , in). If, for a fixed cone index J, the n-th order square matrix Ẫσ(J) corresponding to every σ is J-effective, then the fuzzy Leontief model (4.2.7) is effective. The proof is omitted here.

Suppose that Ã* = Î̃ − Ẫ and that a resource consumption vector is L̃ = (l̃1, l̃2, · · · , l̃n)ᵀ. Then, for a final demand vector Ỹ ≥ 0, we consider the fuzzy linear programming problem

\[ \begin{aligned} \min \ & \tilde{l}\tilde{X} \\ \text{s.t.} \ & \tilde{A}^*\tilde{X} \geq \tilde{Y}, \\ & \tilde{X} \geq 0, \end{aligned} \]    (4.2.8)

and its dual form

\[ \begin{aligned} \max \ & \tilde{P}\tilde{Y} \\ \text{s.t.} \ & \tilde{P}\tilde{A}^* \leq \tilde{l}, \\ & \tilde{P} \geq 0, \end{aligned} \]    (4.2.9)

as well.

Lemma 4.2.1. For a given cone index J, if (4.2.7) is J-effective for a certain fuzzy final-demand vector Ỹ0 > 0, there is a basic solution to (4.2.8) when Ỹ = Ỹ0.

Proof: Under the fixed cone index J, (4.2.7) is changed into

\[ (I - \hat{\tilde{A}}(J))\hat{W} = \tilde{Y}(J), \]    (4.2.10)

which is a determinate Leontief synthesized model; (4.2.7) being effective at Ỹ0 > 0 is equivalent to (4.2.10) being effective at Ỹ0(J) > 0. From Lemma 1 in [Lin85, Chapter 3, Section 5], we know that

\[ \begin{aligned} \min \ & \tilde{l}(J)\hat{W} \\ \text{s.t.} \ & \tilde{A}(J)\hat{W} \geq \tilde{Y}(J), \\ & \hat{W} \geq 0 \end{aligned} \]    (4.2.11)

has a basic solution at Ỹ(J) = Ỹ0(J), so that the lemma holds.

Lemma 4.2.2. For a given cone index J, when Ỹ = Ỹ0 in (4.2.8), the components of the basic solution X̃̄ satisfy:
(1) σ = {i1, i2, · · · , in} exists such that x̃̄ik > 0, ik ∈ Mk (k = 1, 2, · · · , n);
(2) x̃̄i = 0̃, i ∉ σ.

Proof: Under a fixed cone index, Ã*(J)Ŵ = IŴ − Ẫ(J)Ŵ and Ỹ0(J) exist in (4.2.8), and by applying Lemma 2 in [Lin85, Chapter 3, Section 5] at Ỹ(J) = Ỹ0(J), the components of the basic solution Ŵ satisfy:
(1) there exists σ such that Ŵik > 0, ik ∈ Mk (k = 1, 2, · · · , n);
(2) Ŵi = 0, i ∉ σ.
Therefore, the lemma holds under the fixed cone index J.

Lemma 4.2.3. Under the conditions of Lemma 4.2.1, for any Ỹ ≥ 0, (4.2.8) has a solution vector X̃ whose components satisfy: x̃ik > 0, ik ∈ Mk (k = 1, 2, · · · , n); x̃i = 0̃, i ∉ σ, σ = {i1, i2, · · · , in}.

Proof: In a way similar to Lemma 4.2.2 and by applying Lemma 3 in [Lin85, Chapter 3, Section 5], under the fixed cone index J there exist X̃ ⇐⇒ Ŵ such that


x̃ik > 0 ⇐⇒ wik > 0, ik ∈ Mk (k = 1, 2, · · · , n); x̃i = 0̃ ⇐⇒ wi = 0, i ∉ σ.

Symbolically, Ŵ is a solution to (4.2.11), which is equivalent to X̃ being a solution to (4.2.8).

Theorem 4.2.4. Under the conditions of Lemma 4.2.1, a submatrix Ãσ exists in matrix Ã such that, for any Ỹ ≥ 0, (4.2.8) has a solution vector X̃ = (x̃1, x̃2, · · · , x̃n)ᵀ satisfying x̃i = 0̃ when i ∉ σ, x̃i > 0 when i ∈ σ, σ = {i1, i2, · · · , in}, and the choice of the set σ bears no relation to the demand Ỹ.

Proof: We can prove the theorem by using Lemmas 4.2.1–4.2.3.

Theorem 4.2.5. Let σ = {i1, i2, · · · , in} and σ′ = {i1′, i2′, · · · , in′}. If, for the same cone index J, the submodel Ẫσ corresponding to σ is J-effective with l̃σ* = (Î̃ − Ẫσ)l̃σ, then l̃σ* ≤ l̃σ′*, where l̃σ* = (l̃i1, l̃i2, · · · , l̃in)ᵀ indicates the resource-complete-consumption vector corresponding to the set σ, and ik ∈ Mk (k = 1, 2, · · · , n).

Proof: Observe the following programming problems:

\[ \begin{aligned} \min \ & \tilde{l}_\sigma \tilde{X}_\sigma \\ \text{s.t.} \ & (\hat{\tilde{I}} - \hat{\tilde{A}}_\sigma)\tilde{X}_\sigma \geq \tilde{Y}_0, \\ & \tilde{X}_\sigma \geq 0, \end{aligned} \]    (4.2.12)

and

\[ \begin{aligned} \min \ & \tilde{l}_{\sigma'} \tilde{X}_{\sigma'} \\ \text{s.t.} \ & (\hat{\tilde{I}} - \hat{\tilde{A}}_{\sigma'})\tilde{X}_{\sigma'} \geq \tilde{Y}_0, \\ & \tilde{X}_{\sigma'} \geq 0. \end{aligned} \]    (4.2.13)

The dual form of (4.2.12) is

\[ \begin{aligned} \max \ & \tilde{P}\tilde{Y}_0 \\ \text{s.t.} \ & \tilde{P}(\hat{\tilde{I}} - \hat{\tilde{A}}_\sigma) \leq \tilde{l}_\sigma, \\ & \tilde{P} \geq 0. \end{aligned} \]    (4.2.14)

Similarly, we can obtain the dual programming of (4.2.13). Under a given cone index J, (4.2.12)–(4.2.14) are equivalent to the following:

\[ \begin{aligned} \min \ & \tilde{l}_\sigma(J)\hat{W}_\sigma \\ \text{s.t.} \ & (\hat{\tilde{I}} - \hat{\tilde{A}}_\sigma(J))\hat{W}_\sigma \geq \tilde{Y}_0(J), \\ & \hat{W}_\sigma \geq 0, \end{aligned} \qquad \begin{aligned} \min \ & \tilde{l}_{\sigma'}(J)\hat{W}_{\sigma'} \\ \text{s.t.} \ & (\hat{\tilde{I}} - \hat{\tilde{A}}_{\sigma'}(J))\hat{W}_{\sigma'} \geq \tilde{Y}_0(J), \\ & \hat{W}_{\sigma'} \geq 0, \end{aligned} \]

and

\[ \begin{aligned} \max \ & U\tilde{Y}_0(J) \\ \text{s.t.} \ & U(\hat{\tilde{I}} - \hat{\tilde{A}}(J)) \leq \tilde{l}_\sigma, \\ & U \geq 0. \end{aligned} \]

Therefore, for the same σ, σ′ and the same cone index J, Ẫσ(J) is effective, and from Theorem 2 in [Lin85, Chapter 3, Section 5] we have l̃σ*(J) ≤ l̃σ′*(J). So the theorem holds.

4.2.3

Numerical Example

We give an input-output value table of agriculture, light and heavy industries in our national economy in 1978 in Table 4.2.1.

Table 4.2.1. Input-Output Value Table

Input \ Output   Agriculture    Light ind.     Heavy ind.     Amount          Final products   Total value
Agriculture      (80,5,3)       (200,10,2)     (300,2,3)      (580,17,8)      (120,20,21)      (700,37,29)
Light ind.       (120,7,10)     (100,7,6)      (400,4,9)      (620,18,25)     (580,17,15)      (1200,35,40)
Heavy ind.       (200,2,15)     (400,1,4)      (600,15,7)     (1200,18,26)    (800,15,10)      (2000,33,36)
Amount           (400,14,28)    (700,18,12)    (1300,21,19)   (2400,53,59)    (1500,52,46)     (3900,105,105)
Pay              (200,16,1)     (380,7,13)     (500,9,10)     (1080,28,26)
Profit           (100,7,0)      (120,10,15)    (200,3,7)      (420,24,20)
Total value      (700,37,29)    (1200,35,40)   (2000,33,36)   (3900,105,105)

The proportions of the three departments in the national economy are 17.95%, 30.77% and 51.28%, respectively. Suppose that this structure is not reasonable and needs an adjustment, for which there are three schemes. During the 3-year adjustment, the average annual growth rates of the final products for the three parts are (I) 6%, 10%, 4%; (II) 5%, 15%, 2%; and (III) 10%, 20% and 0%,


respectively. The greatest quota of average annual growth rate in product value is 6% for agriculture, 15% for light industry and 5% for heavy industry. The goal is to make profit and tax largest. Again, the profit and tax created per 100 million of product value for the three departments are (0.1429, 0.02, 0.05), (0.10, 0.005, 0.03) and (0.1, 0.01, 0.04) (the unit is 100 million), respectively. Now, considering all this comprehensively, we try to determine an optimized scheme of the economic structure during the 3-year adjustment.

Solution: (1) From the above problem, we may build a fuzzy input-output optimization model

\[ \begin{aligned} \max \ & \{(0.1429, 0.02, 0.05)\tilde{x}_1 + (0.1, 0.005, 0.03)\tilde{x}_2 + (0.1, 0.01, 0.04)\tilde{x}_3\} \\ \text{s.t.} \ & (\tilde{I} - \tilde{A})\tilde{X} \geq \tilde{Y}_j^{(3)} \quad (j = 1, 2, 3), \\ & \tilde{X} \leq \tilde{L}, \quad \tilde{X} \geq 0. \end{aligned} \]    (4.2.15)

(2) For the fixed cone index J, by the partition method advanced in the proof of Theorem 4.2.1, we change Table 4.2.1 into Table 4.2.2.

Table 4.2.2. Input-Output Crisp Value Table

Input \ Output     Agri    Light ind   Heavy ind   Amount    Final products   Total value
Agriculture        80      202         303         585       148              733
Light ind          113     93          400         606       634              1240
Heavy ind          215     402.5       585         1202.5    833.5            2036
Amount             408     679.5       1288        2393.5    1615.5           4009
Newly made value   325     560.5       748         1615.5
Total value        733     1240        2036        4009

where Ỹi(J) is determined from wi − Σ_{j=1}^{n} wij, and Ñj(J) from wj − Σ_{i=1}^{n} wij.

(3) Obtain the final product vectors of the three departments in the forecasting period:

Scheme I: \[ Y_1^{(3)} = \begin{pmatrix} 148(1+0.06)^3 \\ 634(1+0.1)^3 \\ 833.5(1+0.04)^3 \end{pmatrix} \approx \begin{pmatrix} 176.27 \\ 843.85 \\ 937.57 \end{pmatrix}; \]

Scheme II: \[ Y_2^{(3)} = \begin{pmatrix} 148(1+0.05)^3 \\ 634(1+0.15)^3 \\ 833.5(1+0.02)^3 \end{pmatrix} \approx \begin{pmatrix} 171.33 \\ 964.23 \\ 884.52 \end{pmatrix}; \]

Scheme III: \[ Y_3^{(3)} = \begin{pmatrix} 148(1+0.1)^3 \\ 634(1+0.2)^3 \\ 833.5(1+0)^3 \end{pmatrix} \approx \begin{pmatrix} 196.99 \\ 1095.55 \\ 833.5 \end{pmatrix}; \]

\[ \text{Max } L = \begin{pmatrix} 733(1+0.06)^3 \\ 1240(1+0.15)^3 \\ 2036(1+0.05)^3 \end{pmatrix} \approx \begin{pmatrix} 873.01 \\ 1885.89 \\ 2356.92 \end{pmatrix}. \]

(4) Extract the solution and compare:

\[ \tilde{A}(J) = \begin{pmatrix} 0.109 & 0.163 & 0.149 \\ 0.154 & 0.075 & 0.196 \\ 0.293 & 0.325 & 0.287 \end{pmatrix}, \]

\[ (I - \tilde{A}(J))^{-1} = \begin{pmatrix} 0.891 & -0.163 & -0.149 \\ -0.154 & 0.925 & -0.196 \\ -0.293 & -0.325 & 0.713 \end{pmatrix}^{-1} \approx \begin{pmatrix} 1.307 & 0.361 & 0.372 \\ 0.367 & 1.298 & 0.434 \\ 0.704 & 0.740 & 1.753 \end{pmatrix}. \]

Then for Scheme I we have

\[ W = (I - \tilde{A}(J))^{-1} \tilde{Y}_1^{(3)}(J) \approx \begin{pmatrix} 884.63 \\ 1566.91 \\ 2392.10 \end{pmatrix}; \]

comparing it with Max L, we should take x1 = 884.63, x2 = 1885.89, x3 = 2392.10, M̃⁽¹⁾ ≈ (554.21, 51.04, 196.49).

For Scheme II we have

\[ W = (I - \tilde{A}(J))^{-1} \tilde{Y}_2^{(3)}(J) \approx \begin{pmatrix} 902.02 \\ 1698.33 \\ 2390.88 \end{pmatrix}; \]

comparing it with Max L, we should take x1 = 902.02, x2 = 1885.89, x3 = 2390.88, M̃⁽²⁾ ≈ (556.58, 51.38, 197.31).

For Scheme III we have

\[ W = (I - \tilde{A}(J))^{-1} \tilde{Y}_3^{(3)}(J) \approx \begin{pmatrix} 964.12 \\ 1856.06 \\ 2410.51 \end{pmatrix}; \]

comparing it with Max L, we should take x1 = 964.12, x2 = 1885.89, x3 = 2410.51, M̃⁽³⁾ ≈ (567.41, 52.82, 201.20).

(5) Decision: Dj = (4/3) d(M̃⁽ʲ⁾, 0̃) (j = 1, 2, 3); we have D1 ≈ 612.067, D2 ≈ 614.643, D3 ≈ 626.503. Obviously, D1 < D2 < D3. Considering only the largest profit and tax, Scheme III would be the best one. But it does not meet the demands of objective practice, as the final annual average growth rate of production in heavy industry is zero. Though the profit and tax in Scheme II are lower than in Scheme III, the developing speed is proper for the three departments. So, in view of an optimized economic structure, Scheme II is the most satisfactory.
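Step (4) for Scheme I can be re-computed directly from A(J) and Y₁⁽³⁾; the sketch below reproduces the figures of the text up to rounding of the three-decimal coefficients.

```python
import numpy as np

# Re-computation of step (4), Scheme I, Section 4.2.3, from the crisp A(J).
A = np.array([[0.109, 0.163, 0.149],
              [0.154, 0.075, 0.196],
              [0.293, 0.325, 0.287]])
Y1 = np.array([148 * 1.06**3, 634 * 1.10**3, 833.5 * 1.04**3])

W = np.linalg.solve(np.eye(3) - A, Y1)   # W = (I - A)^{-1} Y1
print(np.round(Y1, 2))
print(np.round(W, 2))
```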

4.3 Input-Output Model with Triangular Fuzzy Data

Because the direct depletion coefficient ãij is a membership function between departments, for the sake of easy calculation the triangular distribution is adopted, without losing the generality of the membership function. Hence we discuss the input-output model with triangular fuzzy data in this section.

4.3.1 Definitions and Properties

Definition 4.3.1. Call a fuzzy set on the real axis R a fuzzy number, written as

\[ \tilde{A} = \int_{x \in R} \frac{\mu_{\tilde{A}}(x)}{x} \quad \text{or} \quad \tilde{A} \Leftrightarrow \mu_{\tilde{A}}(x) \in [0, 1], \]

where μ_Ã(x) is the membership function of Ã.

Definition 4.3.2. If for all x, y, z ∈ R with x ≤ y ≤ z there must hold μ_Ã(y) ≥ μ_Ã(x) ∧ μ_Ã(z), we call Ã a convex fuzzy number. If max_{x∈R} μ_Ã(x) = 1, we call Ã a normal fuzzy set.

According to the definitions mentioned above, it is easy to prove the next two theorems.

Theorem 4.3.1. Ã is a convex fuzzy number ⇔ for all α (0 < α ≤ 1), Aα is an interval, written as Aα = [αL(a), αR(a)], where αL(a) and αR(a) represent the left and right endpoints of Aα, respectively; according to whether the endpoints are included in Aα, the interval may be closed or open.

Theorem 4.3.2. Let Ã be a convex fuzzy number and suppose α1, α2 satisfy 0 ≤ α2 ≤ α1 < 1. Then αL(a1) ≥ αL(a2), αR(a1) ≤ αR(a2), and αL(a), αR(a) are both monotonic functions: αL(a) is non-decreasing and αR(a) is non-increasing.

Definition 4.3.3. Suppose A0 = (aL, aR) is the platform of the convex fuzzy number Ã. If aL > 0, we call Ã a positive fuzzy number; if aR < 0, a negative fuzzy number; if aL < 0 < aR, a zero fuzzy number.

Extension principle. Suppose Ã, B̃ are fuzzy numbers and the mapping f: R × R → R is f(x, y) = x ∗ y, where "∗" is a binary operation. The operation "∗" extends to fuzzy numbers as stipulated below:


\[ f(x, y) = \tilde{A} * \tilde{B} = \int_{x * y = z} \frac{\mu_{\tilde{A}}(x) \wedge \mu_{\tilde{B}}(y)}{x * y} = \int_{x * y = z} \frac{\min[\mu_{\tilde{A}}(x), \mu_{\tilde{B}}(y)]}{x * y}, \]

where f(x, y) = z = x ∗ y; its membership function can be denoted as

\[ f(x, y)(z) = (\tilde{A} * \tilde{B})(z) = \bigvee_{x * y = z} (\mu_{\tilde{A}}(x) \wedge \mu_{\tilde{B}}(y)) = \sup_{x, y \in f^{-1}(z)} \min\{\mu_{\tilde{A}}(x), \mu_{\tilde{B}}(y)\}. \]
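A discretized sketch of the extension principle for "∗" = "+": the sup-min is taken over a finite grid. The triangular memberships and the grid are hypothetical.

```python
import numpy as np

# Extension principle for the sum: (A+B)(z) = sup over x+y=z of min(A(x), B(y)).
def tri(x, a, m, b):
    """Triangular membership with support [a, b] and peak at m."""
    return np.maximum(np.minimum((x - a) / (m - a), (b - x) / (b - m)), 0.0)

xs = np.linspace(0.0, 10.0, 1001)      # grid for x (hypothetical)
A = tri(xs, 1.0, 2.0, 3.0)             # A, "about 2"

def sum_membership(z):
    """sup_{x+y=z} min(A(x), B(z-x)) over the grid, with B "about 3"."""
    return float(np.max(np.minimum(A, tri(z - xs, 2.0, 3.0, 4.0))))

print(round(sum_membership(5.0), 3))   # the peak of A+B sits at 2 + 3 = 5
```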

When the above operation "∗" denotes "+" or "·", we obtain the sum and product of fuzzy numbers below.

Sum of fuzzy numbers. Suppose Ã, B̃ are two fuzzy numbers; then the sum of Ã, B̃ is defined as

\[ (\tilde{A} + \tilde{B})(z) = \bigvee_{z = x + y} (\mu_{\tilde{A}}(x) \wedge \mu_{\tilde{B}}(y)), \quad \forall z \in R. \]

Making use of the α-cut sets Aα and Bα, this can be rewritten in another form. If Aα = [mα, nα], Bα = [pα, qα], then

\[ \tilde{A} = \bigcup_{\alpha} \alpha A_\alpha = \bigcup_{\alpha} \alpha [m_\alpha, n_\alpha], \qquad \tilde{B} = \bigcup_{\alpha} \alpha B_\alpha = \bigcup_{\alpha} \alpha [p_\alpha, q_\alpha]; \]

\[ \tilde{A} + \tilde{B} = \bigcup_{\alpha} \alpha [m_\alpha, n_\alpha] + \bigcup_{\alpha} \alpha [p_\alpha, q_\alpha] = \bigcup_{\alpha} \alpha ([m_\alpha, n_\alpha] + [p_\alpha, q_\alpha]). \]

Because every number mα ≤ x ≤ nα and every number pα ≤ y ≤ qα add mutually in the bracket on the right of the last expression, the interval is [mα + pα, nα + qα]; therefore

\[ \tilde{A} + \tilde{B} = \bigcup_{\alpha} \alpha [m_\alpha + p_\alpha, n_\alpha + q_\alpha]. \]

Product of fuzzy numbers. Suppose Ã, B̃ are two fuzzy numbers; the product of Ã, B̃ is defined as

\[ (\tilde{A} \cdot \tilde{B})(z) = \bigvee_{z = x \cdot y, \ x, y \geq 0} (\mu_{\tilde{A}}(x) \wedge \mu_{\tilde{B}}(y)), \quad \forall z \in R^+. \]


Making use of the α-cut sets Aα, Bα, the formula above can be written as

\[ \tilde{A} \cdot \tilde{B} = \bigcup_{\alpha} \alpha [m_\alpha \cdot p_\alpha, n_\alpha \cdot q_\alpha]. \]
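The α-cut arithmetic above can be sketched for triangular fuzzy numbers. The (m, l, r) parameterization — peak m, left spread l, right spread r, support [m − l, m + r] — is an assumption for this demo.

```python
# Alpha-cut arithmetic for triangular fuzzy numbers (m, l, r); hypothetical data.
def cut(m, l, r, alpha):
    """alpha-cut [m - (1-alpha)*l, m + (1-alpha)*r] of a triangular number."""
    return (m - (1.0 - alpha) * l, m + (1.0 - alpha) * r)

def add_cut(a, b, alpha):
    """[m_a + p_a, n_a + q_a]: endpoint-wise interval addition."""
    (m, n), (p, q) = cut(*a, alpha), cut(*b, alpha)
    return (m + p, n + q)

def mul_cut(a, b, alpha):
    """[m_a * p_a, n_a * q_a]: valid for positive fuzzy numbers."""
    (m, n), (p, q) = cut(*a, alpha), cut(*b, alpha)
    return (m * p, n * q)

A, B = (2.0, 1.0, 1.0), (3.0, 1.0, 2.0)
print(add_cut(A, B, 0.0))   # support of A+B: (3.0, 8.0)
print(mul_cut(A, B, 1.0))   # peak of A*B:    (6.0, 6.0)
```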

It is not difficult to prove the following theorem.

Theorem 4.3.3. The sum and product of positive convex fuzzy numbers are positive convex fuzzy numbers, respectively, and they satisfy the commutative, associative and distributive laws.

4.3.2 Model

Suppose the fuzzy number ãij has a triangular membership function; ãij is represented by the three numerals (x0; xL, xR), where x0 is determined by

\[ \max_{x_0 \in R} \tilde{a}_{ij}(x_0) = 1, \]

and (xL, xR) is the platform of the fuzzy number ãij, i.e., (ãij)0 = (xL, xR). Obviously, this kind of fuzzy number is convex.

If the membership function of the fuzzy number Ã is a positive convex one, then an interval Aα = [aαL, aαR] corresponds to each α-level. Conversely, finding the cuts corresponding to the different levels α, we can construct the corresponding membership function. Hence (4.1.1), (4.1.2) can be changed into the ordinary equations

\[ \sum_{j=1}^{n} (\tilde{a}_{ij})_\alpha (\tilde{x}_j)_\alpha + (\tilde{y}_i)_\alpha = (\tilde{x}_i)_\alpha \]

and

\[ \sum_{i=1}^{n} (\tilde{c}_{ij})_\alpha (\tilde{x}_j)_\alpha + (\tilde{N}_j)_\alpha = (\tilde{x}_j)_\alpha, \]

where (x̃j)α and (ỹj)α represent the α-level sets of the fuzzy numbers corresponding to the fuzzy variables X and Y. Under the assumption that each fuzzy number is positively convex, these α-level sets are all intervals. Therefore, the above equations can be written in interval form for coefficients and variables. According to the definition of fuzzy-number operations, interval arithmetic can be carried out between the cut sets of convex fuzzy numbers. For positive convex fuzzy sets, the interval addition and multiplication operate on the real numbers at the left and right endpoints of the intervals; thus the equations above can be rewritten as

yj )α all represent α−level sets of fuzzy numbers correspondwhere (˜ xj )α and (˜ ing to the fuzzy variables X and Y . Under assumption that each fuzzy number denotes positively convex, these α−level sets are all interval. Therefore, the above-mentioned equations can denote the form for interval as coeﬃcients and variables. According to the deﬁnition of fuzzy number operation, we know that the interval number operation can be carried on between the cut sets of convex fuzzy numbers. But to positive convex fuzzy sets, the operation on addition and multiplication in interval can add and multiply real numbers at left and right endpoints in the interval, then the equations above can be written down again as ˜ α [X] ˜ α + [Y˜ ]α = [X] ˜α [A]

4.3 Input-Output Model with Triangular Fuzzy Data


and
[C̃]_α [X̃]_α + [Ñ]_α = [X̃]_α,
where Ã = (ã_ij) and [ã_ij]_α is an interval [a_ijL, a_ijR]; X̃ = (x̃_1, x̃_2, ···, x̃_n)^T is a column vector and [X̃]_α is an interval vector with components [x_iL, x_iR]_α; Ỹ = (ỹ_1, ỹ_2, ···, ỹ_n)^T is a column vector and [Ỹ]_α is an interval vector with components [y_iL, y_iR]_α.
Every equation below is understood at some fixed α-level, so the subscript α is no longer written. By the interval operation rules in the definition above, the system simplifies to the form (4.1.3); in actual computation it splits into two matrix equations, at the left and right endpoints respectively, i.e.,
A_L X_L + Y_L = X_L,  A_R X_R + Y_R = X_R,
and
C_L X_L + N_L = X_L,  C_R X_R + N_R = X_R,
where A_L, X_L, N_L and Y_L denote the matrix and vectors formed by the left-endpoint real numbers of the intervals, and A_R, X_R, N_R and Y_R those formed by the right-endpoint real numbers.
Given Ã and X̃, we find Ỹ by computing Y_L, Y_R:
Y_L = (I − A_L) X_L,  Y_R = (I − A_R) X_R,   (4.3.1)
or, given Ã and Ỹ, we find X̃ by computing
X_L = (I − A_L)⁻¹ Y_L,  X_R = (I − A_R)⁻¹ Y_R,
i.e.,
Ỹ = (I − Ã) X̃,  X̃ = (I − Ã)⁻¹ Ỹ;
similarly, X̃ and Ñ are related by
Ñ = (I − C̃) X̃,  X̃ = (I − C̃)⁻¹ Ñ,
where I is the identity matrix, X̃ and Ñ are as in (4.1.7), and X̃_j, Ñ_j represent fuzzy variables. Similarly to certifying the existence of the inverse matrix in a usual input-output model, the inverse matrices (I − Ã)⁻¹ and (I − C̃)⁻¹ exist. C̃ = (c̃_j), as in (4.1.6),


is called the material consumption coefficient matrix, where
c̃_j = Σ_{i=1}^{n} ã_ij  (j = 1, ···, n)
is the direct consumption, in value form, of the products of the other sections per production unit of department j; it is a fuzzy number.
If (4.3.1) is an input-output model whose fuzzy quantities have triangular distributions, then for the different α-levels the equations are, at α = 0,
[Y_L] = [I − A_L][X_L],  [Y_R] = [I − A_R][X_R],
and at α = 1,
[Y_0] = [I − A_0][X_0].
With a computer program we can quickly calculate the following:
1° the final product amounts Y_L, Y_R, Y_0 of each department;
2° given the final product amount of each department, the annual total product amounts X_L, X_R, X_0 of each department.
4.3.3 Numerical Example
Example 4.3.1: According to the input-output value table for fifty-six departments published by the Statistics Bureau of Shanxi Province, China, in May 1982, the direct depletion coefficients of each department are as shown in Table 4.3.1.
Solution: Because

[I − A_L] =
( 1−0.1350   −0.1752    −0.0829    0          −0.0053   )
( −0.1560    1−0.4349   −0.5873    −0.3250    −0.1531   )
( 0          0          1−0        0          0         )
( −0.00703   −0.0076    −0.0432    1−0.0149   −0.0950   )
( −0.00199   −0.005035  −0.012065  −0.009715  1−0.01054 ),

[I − A_R] =
( 1−0.1492   −0.1936    −0.0917    0          −0.0059   )
( −0.1724    1−0.4807   −0.6491    −0.3592    −0.1691   )
( 0          0          1−0        0          0         )
( −0.0077    −0.0084    −0.0478    1−0.0165   −0.1050   )
( −0.00221   −0.005565  −0.013335  −0.011285  1−0.01165 ),


Table 4.3.1. Direct Depletion Coefficient in Each Department
Each entry is a triangular fuzzy number written x0 (xL, xR).

                          Agriculture                Industry                     Building indu.               Transport & post elect.      Business
Agriculture               0.1421 (0.1350, 0.1492)    0.1844 (0.1752, 0.1936)      0.0873 (0.0829, 0.0917)      0 (0, 0)                     0.0056 (0.0053, 0.0059)
Industry                  0.1642 (0.1560, 0.1724)    0.4578 (0.4349, 0.4807)      0.6182 (0.5873, 0.6491)      0.3421 (0.3250, 0.3592)      0.1611 (0.1531, 0.1691)
Building indu.            0 (0, 0)                   0 (0, 0)                     0 (0, 0)                     0 (0, 0)                     0 (0, 0)
Transport & post elect.   0.0074 (0.00703, 0.00777)  0.0080 (0.0076, 0.0084)      0.0455 (0.0432, 0.0478)      0.0157 (0.0149, 0.0165)      0.1000 (0.0950, 0.1050)
Business                  0.0021 (0.00199, 0.00221)  0.0053 (0.005035, 0.005565)  0.0127 (0.012065, 0.013335)  0.0105 (0.009715, 0.011285)  0.0111 (0.01054, 0.01165)
Total                     0.3158 (0.30002, 0.33158)  0.6555 (0.622735, 0.688215)  0.7637 (0.72546, 0.80192)    0.3683 (0.34987, 0.38670)    0.2778 (0.26394, 0.29165)
Wages (labor guerdon)     0.5474 (0.52003, 0.57477)  0.0867 (0.08236, 0.09103)    0.1636 (0.15542, 0.17178)    0.2636 (0.25044, 0.27676)    0.2222 (0.21109, 0.23331)
Net profit                0.1368 (0.12996, 0.14364)  0.2578 (0.24491, 0.27069)    0.0727 (0.06906, 0.07634)    0.3684 (0.34998, 0.38682)    0.5000 (0.4750, 0.5250)

[I − A_0] =
( 1−0.1421  −0.1844   −0.0873   0         −0.0056  )
( −0.1642   1−0.4578  −0.6182   −0.3421   −0.1611  )
( 0         0         1−0       0         0        )
( −0.0074   −0.0080   −0.0455   1−0.0157  −0.1000  )
( −0.0021   −0.0053   −0.0127   −0.0105   1−0.0111 ).


Give the total amounts of each department (agriculture, industry, building industry, transport and post electricity (TPE), business) as follows, each component being a triangular fuzzy number x0 (xL, xR):
X̃ = (X̃_i) = ( 1900 (1805, 1995), 4500 (4275, 4725), 550 (522.5, 577.5), 190 (180.5, 199.5), 360 (342, 378) )^T,
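A sketch of the α = 1 and left-endpoint computations of this example with NumPy (coefficients transcribed from Table 4.3.1; the printed vectors agree with the Ỹ columns of Table 4.3.2 to rounding):

```python
import numpy as np

# direct depletion coefficients of Example 4.3.1: centres (alpha = 1)
# and left endpoints, transcribed from Table 4.3.1
A0 = np.array([[0.1421, 0.1844, 0.0873, 0.0,    0.0056],
               [0.1642, 0.4578, 0.6182, 0.3421, 0.1611],
               [0.0,    0.0,    0.0,    0.0,    0.0   ],
               [0.0074, 0.0080, 0.0455, 0.0157, 0.1000],
               [0.0021, 0.0053, 0.0127, 0.0105, 0.0111]])
AL = np.array([[0.1350,  0.1752,   0.0829,   0.0,      0.0053 ],
               [0.1560,  0.4349,   0.5873,   0.3250,   0.1531 ],
               [0.0,     0.0,      0.0,      0.0,      0.0    ],
               [0.00703, 0.0076,   0.0432,   0.0149,   0.0950 ],
               [0.00199, 0.005035, 0.012065, 0.009715, 0.01054]])
x0 = np.array([1900.0, 4500.0, 550.0, 190.0, 360.0])   # centre totals
xL = np.array([1805.0, 4275.0, 522.5, 180.5, 342.0])   # left-endpoint totals

y0 = (np.eye(5) - A0) @ x0   # [Y_0] = [I - A_0][X_0]
yL = (np.eye(5) - AL) @ xL   # Y_L = (I - A_L) X_L, formula (4.3.1)
print(np.round(y0, 2))  # matches the Y~ centre column of Table 4.3.2
print(np.round(yL, 2))  # matches its left-endpoint column
```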

then, by the computer program, we can compute the final product amount of each department; or, given the final amount of each department, we compute the total amounts or the total product value (TPV), which are shown in Table 4.3.2.

Table 4.3.2. Fuzzy Input-Output Table (unit: hundred million dollars)
Each cell lists a triangular fuzzy value as x0 (value at left endpoints, value at right endpoints); for the value-added rows the first parenthesized number may exceed the centre.

Product allotment direction: middle product

Input source               Agriculture x̃_i1            Industry x̃_i2                Building industry x̃_i3      TPE x̃_i4                    Business x̃_i5            Total Ẽ
Agriculture x̃_1j          269.99 (243.67, 297.65)     829.80 (748.98, 914.76)      48.135 (43.315, 52.956)     0 (0, 0)                    2.016 (1.817, 2.230)     1149.94 (1037.782, 1267.6)
Industry x̃_2j             311.98 (281.58, 343.94)     2060.10 (1859.2, 2271.3)     340.01 (306.86, 374.85)     65.00 (58.67, 71.66)        57.996 (52.36, 63.91)    2835.086 (2558.67, 3125.66)
Building industry x̃_3j    0 (0, 0)                    0 (0, 0)                     0 (0, 0)                    0 (0, 0)                    0 (0, 0)                 0 (0, 0)
TPE x̃_4j                  14.06 (12.689, 15.501)      36.00 (32.49, 39.69)         25.025 (22.572, 27.604)     2.983 (2.689, 3.291)        36.00 (32.49, 39.69)     114.068 (102.93, 125.78)
Business x̃_5j             3.99 (3.592, 4.409)         23.85 (21.52, 26.29)         6.985 (6.304, 7.701)        1.995 (1.754, 2.251)        3.996 (3.606, 4.406)     40.816 (36.780, 45.065)
Total consump. Ũ          600.02 (541.53, 661.5)      2949.75 (2662.19, 3252.04)   420.16 (379.05, 463.11)     69.98 (63.11, 77.2)         100.08 (90.27, 110.26)   4139.91 (3736.16, 4564.11)
Wages (labor guerdon) Ṽ   1040.06 (1010.85, 1066.88)  390.15 (405.88, 370.68)      89.98 (99.31, 79.2)         50.01 (48.95, 51.03)        79.99 (77.45, 82.38)     1650.19 (1642.4, 1650.2)
Net profit M̃              259.92 (252.62, 266.62)     1160.1 (1206.9, 1102.3)      39.99 (44.13, 35.19)        69.996 (68.402, 71.323)     180 (174.28, 185.38)     1710.2 (1642.4, 1660.8)
Newly made total Ñ        1299.98 (1263.62, 1333.5)   1550.25 (1612.81, 1472.95)   129.904 (143.45, 114.39)    120.004 (117.349, 122.353)  259.92 (251.73, 267.76)  3360.19 (3388.8, 3310.95)
Total value X̃             1900 (1805, 1995)           4500 (4275, 4725)            550 (522.5, 577.5)          190 (180.5, 199.5)          360 (342, 378)           7500 (7125, 7875)

Product allotment direction: final product

Input source               Accumulate Z̃                Consume W̃                    Total Ỹ                     TPV X̃
Agriculture                50.18 (51.4044, 48.7371)    700.00 (715.83, 678.68)      750.179 (767.23, 727.42)    1900 (1805, 1995)
Industry                   487.988 (503.056, 468.767)  1176.93 (1213.27, 1130.57)   1664.92 (1716.33, 1599.34)  4500 (4275, 4725)
Building industry          550 (522.5, 577.5)          0 (0, 0)                     550 (522.5, 577.5)          550 (522.5, 577.5)
TPE                        25.976 (26.536, 25.222)     49.956 (51.033, 48.505)      75.932 (77.569, 73.727)     190 (180.5, 199.5)
Business                   15.96 (15.26, 16.65)        303.22 (289.96, 316.29)      319.18 (305.22, 332.94)     360 (342, 378)
Total                      1130.10 (1118.76, 1136.87)  2230.11 (2270.09, 2174.04)   3360.22 (3388.84, 3310.93)  7500 (7125, 7875)

5 Fuzzy Cluster Analysis and Fuzzy Recognition

Fuzzy cluster analysis and fuzzy recognition are introduced in this chapter. First, fuzzy cluster analysis with T-fuzzy data is developed; then fuzzy recognition with T-fuzzy data is presented.

5.1 Fuzzy Cluster Analysis

5.1.1 Fuzzy Cluster
5.1.1.1 Introduction
A mathematical method that classifies things according to certain conditions or characters is called cluster analysis. For fuzzy problems, if an equivalence relation can be built on U, then U can be divided into several equivalence classes, forming an equivalent fuzzy matrix. By choosing α ∈ [0, 1] according to different requirements, an equivalent Boolean matrix is obtained, by which the elements of U are divided into equivalence classes; this is called fuzzy cluster analysis.
5.1.1.2 Model
Let U = {u1, u2, ···, un} be a set of n objects to be clustered. Each classification object ui is represented by a group of data xi1, xi2, ···, xin. The modeling steps are as follows.
Step 1. Data standardization
The variables used in clustering may differ in units and in orders of magnitude. Even when the measures are similar, the absolute values of the variables differ in size. Calculating directly from the original data would exaggerate the influence of the variables with large absolute values and greatly reduce that of the small ones. Meanwhile, fuzzy operations require the data to be compressed into [0, 1], so the originally collected data should
B.-Y. Cao: Optimal Models & Meth. with Fuzzy Quantities, STUDFUZZ 248, pp. 117–137. © Springer-Verlag Berlin Heidelberg 2010, springerlink.com


be standardized. Several common methods are introduced below.
(1) Standard deviation standardization
Standardizing the i-th variable, namely changing x_ij into x'_ij:
x'_ij = (x_ij − x̄_i) / S_i   (1 ≤ j ≤ m),   (5.1.1)
where x_ij is the actual measured value of the variable,
x̄_i = (1/m) Σ_{j=1}^{m} x_ij
is the sample average value, and
S_i = sqrt{ (1/(m − 1)) Σ_{j=1}^{m} (x_ij − x̄_i)² }
is the sample standard deviation.
(2) Pole difference (range) regularization and standardization
The range regularization formula is
x'_ij = (x_ij − min{x_ij}) / (max{x_ij} − min{x_ij}),   (5.1.2)
and the range standardization formula is
x'_ij = (x_ij − x̄_i) / (max{x_ij} − min{x_ij}),   (5.1.3)
where x_ij is the actually measured value of a certain factor, and max{x_ij} (resp. min{x_ij}) represents the maximum (resp. minimum) of the actually measured values of the same factor.
Step 2. Mark settlement
The so-called mark settlement means calculating a similitude coefficient r_ij scaling the degree of similarity of the classified objects, thereby confirming a fuzzy similitude relation R̃ on the universe U. The methods in common use are as follows.
(1) Correlation coefficient method
r_ij = Σ_{k=1}^{m} |x_ik − x̄_i| |x_jk − x̄_j| / [ sqrt{ Σ_{k=1}^{m} (x_ik − x̄_i)² } · sqrt{ Σ_{k=1}^{m} (x_jk − x̄_j)² } ],   (5.1.4)
where x̄_i = (1/m) Σ_{k=1}^{m} x_ik, x̄_j = (1/m) Σ_{k=1}^{m} x_jk (1 ≤ i ≤ n; 1 ≤ j ≤ m).
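The two standardizations of Step 1 can be sketched as follows (pure Python; the function names and sample data are illustrative):

```python
import math

def std_standardize(row):
    """(5.1.1): x'_ij = (x_ij - mean_i) / S_i with the sample standard deviation."""
    m = len(row)
    mean = sum(row) / m
    s = math.sqrt(sum((x - mean) ** 2 for x in row) / (m - 1))
    return [(x - mean) / s for x in row]

def range_regularize(row):
    """(5.1.2): compress the data into [0, 1] by the range (pole difference)."""
    lo, hi = min(row), max(row)
    return [(x - lo) / (hi - lo) for x in row]

data = [2.0, 4.0, 6.0, 8.0]
print(range_regularize(data))   # [0.0, 0.333..., 0.666..., 1.0]
```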


(2) Vectorial angle cosine method (mathematical character)
Let α_r = (x_r1, x_r2, ···, x_rn) be called the mathematical character vector of the object x_r (r = i, j). Then take
r_ij = |cos(α_i, α_j)| = | Σ_{k=1}^{m} x_ik x_jk | / [ sqrt{ Σ_{k=1}^{m} x_ik² } · sqrt{ Σ_{k=1}^{m} x_jk² } ].   (5.1.5)
(3) Maximum and minimum method
r_ij = Σ_{k=1}^{m} min(x_ik, x_jk) / Σ_{k=1}^{m} max(x_ik, x_jk)   (i, j ≤ n).   (5.1.6)
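Sketches of the cosine method (5.1.5) and the maximum-minimum method (5.1.6) for nonnegative data; the function names are assumptions:

```python
import math

def cosine_similarity(xi, xj):
    """(5.1.5): |cos(alpha_i, alpha_j)| between two character vectors."""
    num = abs(sum(a * b for a, b in zip(xi, xj)))
    den = (math.sqrt(sum(a * a for a in xi))
           * math.sqrt(sum(b * b for b in xj)))
    return num / den

def max_min_similarity(xi, xj):
    """(5.1.6): sum_k min(x_ik, x_jk) / sum_k max(x_ik, x_jk)."""
    num = sum(min(a, b) for a, b in zip(xi, xj))
    den = sum(max(a, b) for a, b in zip(xi, xj))
    return num / den

print(max_min_similarity([1.0, 2.0], [2.0, 2.0]))  # 0.75
```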

Step 3. Cluster
The transitive closure t(R) of the similitude matrix R is computed by the square method. Clustering directly with a similitude matrix sometimes gives inaccurate results, so the similitude matrix R must be reformed into an equivalent one; the transitive closure is precisely such a fuzzy equivalent matrix.
Step 4. Draw a dynamic clustering figure of the equivalent matrix t(R).
5.1.2 Cluster Analysis Model with T-Fuzzy Data
5.1.2.1 Introduction
Cluster analysis classifies samples rationally according to their own characteristics, without any model to follow; it establishes how close or distant the samples are before clustering. This needs not only mathematical knowledge but also experience and professional knowledge. A classical method throws away much information. To overcome this weakness, Ruspini [Rus69] presented a fuzzy cluster approach; thereafter, Gitman et al. developed a single-peak fuzzy set clustering method in 1970 [GL70]. This section obtains a satisfactory result by plugging T-fuzzy data [Cao89b] [Dia87] into a classification model. It puts forward a shortcut method of computing a fuzzy transitive closure, and verifies the modeling steps and some meaningful results by examples.
5.1.2.2 Basic Property
The relevant definitions and properties of T-fuzzy data are given in Chapters 1 and 2.
Definition 5.1.1. Suppose that T(R) represents the whole of the T-fuzzy point sets defined on the ordinary set R, and x̃ = (x, ξ, ξ̄), ỹ = (y, η, η̄), x, y ∈ R. Then


d(x̃, ỹ) = sqrt{ [ (x − y − (ξ − η))² + (x − y + (ξ̄ − η̄))² + (x − y)² ] / 3 }   (5.1.7)
is a distance between x̃ and ỹ.
Definition 5.1.2. Suppose that d_ij is the general measure value between the various samples corresponding to (5.1.7); then define the standardized formula
t̃_ij = (d_ij − m) / (M − m),   (5.1.8)
where M = max{d_ij}, m0 = min{d_ij}, m ∈ [m0, d̄_ij], and d̄_ij = (1/n) Σ_{j=1}^{n} d_ij (1 ≤ i ≤ n). By taking the value of m in this way we avoid m ≡ 0 and allow a flexible description to be decided firmly. The fuzzy matrix constructed with the t̃_ij as elements is recorded as T = (t̃_ij), with its dual matrix written as T̄ = (r̃_ij) = (1 − t̃_ij).
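A sketch of (5.1.7) in Python, with T-fuzzy data written (x, lower spread, upper spread) as in Definition 5.1.1; the printed values reproduce entries of Table 5.2 in the worked example later in this section:

```python
import math

def t_distance(xt, yt):
    """Distance (5.1.7) between T-fuzzy data (x, lower spread, upper spread)."""
    x, xl, xu = xt
    y, yl, yu = yt
    return math.sqrt(((x - y - (xl - yl)) ** 2
                      + (x - y + (xu - yu)) ** 2
                      + (x - y) ** 2) / 3)

print(round(t_distance((1.0, 0.5, 0.1), (2.0, 0.7, 0.3)), 3))   # 1.013
print(round(t_distance((1.0, 0.5, 0.1), (4.5, 0.8, 0.5)), 3))   # 3.545
```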

5.1.2.3 Algorithm
Suppose U = (u1, u2, ···, un) is an object set waiting for clustering and x̃ = (x̃1, x̃2, ···, x̃n) are the T-fuzzy data depicting the objects ui (i = 1, 2, ···, n). We use Formula (5.1.8) to convert the T-fuzzy data into a fuzzy matrix before clustering, and we first prove a few useful results.
Definition 5.1.3. If ∀(t̃_ij, t̃_jk), (t̃_jk, t̃_kl) ∈ U × U,
⋀_{jk} ( μ_T̃(t̃_ij, t̃_jk) ∨ μ_T̃(t̃_jk, t̃_kl) ) ≥ μ_T̃(t̃_ij, t̃_kl)   (5.1.9)
or
μ_T̃(t̃_ij, t̃_kl) ≥ ⋁_{jk} ( μ_T̃(t̃_ij, t̃_jk) ∧ μ_T̃(t̃_jk, t̃_kl) ),
then T̃ is called a Min-Max (resp. Max-Min) transitive relation on U.
Theorem 5.1.1. Given an arbitrary set of sample values of T-fuzzy data x̃ = (x̃1, x̃2, ···, x̃n), an anti-reflexive fuzzy matrix T can be constructed under the metric Formulas (5.1.7) and (5.1.8).
Proof: The T-fuzzy data x̃ = (x̃1, x̃2, ···, x̃n) are put through (5.1.7) and then (5.1.8) to get the fuzzy matrix T. It is easy to verify that the elements of T satisfy t̃_ij = t̃_ji (i ≠ j) and t̃_ii = 0.
Definition 5.1.4. If a fuzzy relation or fuzzy matrix satisfies
i) anti-reflexivity (resp. reflexivity);
ii) symmetry;
iii) Min-Max (resp. Max-Min) transitivity,
then we call it a fuzzy distance (resp. equivalence) relation or matrix.
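Definition 5.1.3's Min-Max condition can be checked mechanically; a small Python sketch (the matrix entries are illustrative):

```python
def is_min_max_transitive(t):
    """(5.1.9) in matrix form: t_il <= min_k max(t_ik, t_kl) for all i, l."""
    n = len(t)
    return all(t[i][l] <= min(max(t[i][k], t[k][l]) for k in range(n))
               for i in range(n) for l in range(n))

# an ultrametric-like distance matrix passes; an arbitrary symmetric one need not
print(is_min_max_transitive([[0, 0.2, 0.2], [0.2, 0, 0.1], [0.2, 0.1, 0]]))  # True
print(is_min_max_transitive([[0, 0.9, 0.1], [0.9, 0, 0.1], [0.1, 0.1, 0]]))  # False
```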


Definition 5.1.5. ∀α ∈ [0, 1], call
μ^α_T̃ = 1 if μ_T̃ ≤ α, and μ^α_T̃ = 0 if μ_T̃ > α,
and
t^α_ij = 1 if t̃_ij ≤ α, and t^α_ij = 0 if t̃_ij > α,
the α-cut relation and the α-cut matrix of the fuzzy relation T̃ and the fuzzy matrix T = (t̃_ij), respectively.
Theorem 5.1.2. T̃ is Min-Max transitive if and only if, ∀α ∈ [0, 1],
μ_T̃(t̃_ij, t̃_jk) ≤ α and μ_T̃(t̃_jk, t̃_kl) ≤ α ⇒ μ_T̃(t̃_ij, t̃_kl) ≤ α.   (5.1.10)
Proof: If (5.1.9) holds and μ_T̃(t̃_ij, t̃_jk) ≤ α, μ_T̃(t̃_jk, t̃_kl) ≤ α, then μ_T̃(t̃_ij, t̃_jk) ∨ μ_T̃(t̃_jk, t̃_kl) ≤ α. Hence
μ_T̃(t̃_ij, t̃_kl) ≤ μ_T̃(t̃_ij, t̃_jk) ∨ μ_T̃(t̃_jk, t̃_kl) ≤ α.
Conversely, suppose (5.1.10) holds. Given an arbitrary t̃_{j0k0} ∈ U, take
α = μ_T̃(t̃_ij, t̃_{j0k0}) ∨ μ_T̃(t̃_{j0k0}, t̃_kl).
Obviously
μ_T̃(t̃_ij, t̃_{j0k0}) ≤ α and μ_T̃(t̃_{j0k0}, t̃_kl) ≤ α,
so there must hold
μ_T̃(t̃_ij, t̃_kl) ≤ α = μ_T̃(t̃_ij, t̃_{j0k0}) ∨ μ_T̃(t̃_{j0k0}, t̃_kl),
and (5.1.9) follows from the arbitrariness of t̃_{j0k0}. Therefore the theorem holds.
Theorem 5.1.3. T is a fuzzy distance matrix if and only if, ∀α ∈ [0, 1], the α-cut matrices T_α are all distance Boolean matrices.
Proof: i) T is anti-reflexive if and only if T_α is anti-reflexive, obviously.
ii) T is symmetric ⟺ T_α is symmetric. If t̃_ij ≠ t̃_ji, we may as well assume t̃_ij < t̃_ji; choosing α with t̃_ij ≤ α < t̃_ji gives t^α_ij = 1, t^α_ji = 0, so that t^α_ij ≠ t^α_ji. Hence T_α


is symmetric ⇒ T is symmetric. Conversely, obviously, T is symmetric ⇒ T_α is symmetric.
iii) T is Min-Max (resp. Max-Min) transitive ⟺ T_α is Min-Max (resp. Max-Min) transitive. This can be deduced directly from Theorem 5.1.2.
Theorem 5.1.4. If T is a fuzzy distance matrix, then for any 0 ≤ λ < α ≤ 1, each class of the partition by T_λ is contained in some class of the partition by T_α.
Proof: t^λ_ij = 1 ⟺ t̃_ij ≤ λ ⇒ t̃_ij ≤ α ⟺ t^α_ij = 1. So if i, j lie in the same class under T_λ, they do under T_α.
Theorem 5.1.5. If T ∈ T_{n×n} is anti-reflexive and symmetric, then S(T) = d(T), where S(T) is the Min-Max (resp. Max-Min) transitive closure of T and d(T) is the distance closure, i.e., the biggest distance matrix contained in T, which contains every distance matrix contained in T.
Proof: Because S(T) = ⋃ T^k:
1) if T is anti-reflexive, then S(T) is anti-reflexive;
2) if T is symmetric, then S(T) is symmetric.
So S(T) is an anti-reflexive, symmetric matrix. Moreover S(T) has Min-Max (resp. Max-Min) transitivity, so S(T) is a distance matrix.
Suppose M is any distance matrix contained in T. Then M is a Min-Max (resp. Max-Min) transitive matrix contained in T. But S(T) is the Min-Max (resp. Max-Min) transitive closure of T, so by the definition of the Min-Max (resp. Max-Min) transitive closure [Wang83] we know M ⊆ S(T).
Since the Min-Max transitive closure bears heavy computation, the author puts forward a new algorithm, so that the transitive closure can be computed immediately in two steps.
Algorithm. If T is anti-reflexive and symmetric, first apply one Min-Max composition to T, and then take the matrix element
t̃*_{i0,i0+1} = max{ t̃²_{i,i+1} | 1 ≤ i ≤ n − 1 },  where t̃²_{i,i+1} = ⋀_{j=1}^{n} ( t̃_ij ∨ t̃_{j,i+1} ).
Set t̃²_ij := t̃*_{i0,i0+1} for every off-diagonal entry with t̃²_ij ≥ t̃*_{i0,i0+1} (i ≠ j, the subscripts i, j being arranged arbitrarily). Constitute the first half of the matrix in this way and the second half by symmetry, writing the resulting matrix as T*.
Theorem 5.1.6. Suppose T = (t̃_ij) ∈ T_{n×n} is an anti-reflexive, symmetric matrix with t̃_ij ≤ t̃_{i,i+1} ≤ ··· ≤ t̃_in when i ≤ j (1 ≤ i ≤ n − 1), and T is reformed


into T* by the algorithm. Then T*² = T*, where T* is the Min-Max transitive closure S(T).
Proof: 1) If T is anti-reflexive, i.e., t̃_ii = 0 (1 ≤ i ≤ n), then the composite calculation formula for t̃*²_ii must contain the term t̃*_ii ∨ t̃*_ii, which is necessarily zero; here t̃_ii and t̃*_ii are the elements in row i and column i of the fuzzy matrices T and T*, respectively. Hence under the Zadeh operator '∧' there must hold t̃*²_ii = 0, and the anti-reflexivity of T* is certified.
2) If T is symmetric, it is easy to certify from the algorithm that T* is symmetric.
3) Min-Max transitivity. Because
t̃*²_ij = ⋀_{k=1}^{n} ( t̃*_ik ∨ t̃*_kj ),   (5.1.11)
at i = j, taking k = i and using 1), we obviously have t̃*²_ii = 0 (1 ≤ i ≤ n) by (5.1.11); at i ≠ j, since t̃*_{i0,j0} = max{ t̃²_{i,i+1} | 1 ≤ i ≤ n − 1 }, then:
i) If every term of (5.1.11) equals t̃*_{i0,j0} under the Zadeh operators, i.e., t̃*_ik ∨ t̃*_kj = t̃*_{i0,j0}, then from (5.1.11) we know that t̃*²_ij = t̃*_{i0,j0} holds for all such i0, j0.
ii) If part of the terms of (5.1.11) contain t̃*_{i0,j0} and the rest fail to contain it, then by the construction, when i ≤ j, t̃*_{i0,j0} is contained in the terms with index larger than j (or, when i ≥ j, in the terms with index less than j), so
t̃*²_ij = [ ⋀_{k=1}^{j} ( t̃*_ik ∨ t̃*_kj ) ] ∧ [ ⋀_{k=j+1}^{n} ( t̃*_ik ∨ t̃*_kj ) ]
       = [ ⋀_{k=1}^{j} ( t̃*_ik ∨ t̃*_kj ) ] ∧ t̃*_{i0j0}   (5.1.12)
       = ⋀_{k=1}^{j} ( t̃*_ik ∨ t̃*_kj )
or
t̃*²_ij = t̃*_{i0j0} ∧ [ ⋀_{k=j}^{n} ( t̃*_ik ∨ t̃*_kj ) ] = ⋀_{k=j}^{n} ( t̃*_ik ∨ t̃*_kj ).
Again, from the hypothesis we know that
t̃*_ii ≤ t̃*_{i,i+1} ≤ ··· ≤ t̃*_in (i ≤ j) or t̃*_i1 ≥ t̃*_i2 ≥ ··· ≥ t̃*_ii (i ≥ j) (1 ≤ i ≤ n − 1),   (5.1.13)
so (5.1.12) becomes
t̃*²_ij = t̃*_i1 ∧ t̃*_i2 ∧ ··· ∧ t̃*_ij ∧ t̃*_{i,j+1} ∧ ··· ∧ t̃*_in = t̃*_ij.
Notice that
t̃*_i1 = t̃*_i2 = ··· = t̃*_{i,j−1} ≥ t̃*_ij.
The homologous conclusion may also be got from (5.1.13); making use of the symmetry again, for all i, j (1 ≤ i, j ≤ n) we have t̃*²_ij = t̃*_ij. The theorem is certified.
Theorem 5.1.7. Suppose T satisfies the conditions of Theorem 5.1.6. Then it can be reformed into the biggest distance matrix T* contained in T, which contains every distance matrix contained in T; and T* can be obtained in two steps of reformation.
Because the fuzzy distance matrix is mutually dual with the fuzzy equivalence matrix, the results on fuzzy distance matrices can all be transplanted to fuzzy equivalence matrices; this is omitted here.
5.1.2.4 Modeling
Suppose the classification object set is U = (u1, u2, ···, um) and its fuzzy-set characteristic value is X̃ = (x̃1, x̃2, ···, x̃n)^T. We introduce the steps of the modeling method as follows.
1. Obtain T-fuzzy data. The biggest obstacle in applying the model of this section is how to obtain the T-fuzzy data. The usual methods can be seen in Chapter 3, Section 3.1.4.
2. Turn the T-fuzzy data into non-T-fuzzy data. To convert a sample of T-fuzzy data into non-T-fuzzy data we have two methods:
a) Distance method. Let x̃ = (x̃1, x̃2, ···, x̃m)^T be a column of T-fuzzy data. Then by Formula (5.1.7) we can immediately compute the distance d_ij(x̃_i, x̃_j) between the T-fuzzy data, which is a non-fuzzy number.
b) Non-T-fuzzifying method. Let x̃ = (x̃1, x̃2, ···, x̃n)^T be a column of T-fuzzy data. Then we classify the data x̃_i by subscripts:
for i = 1, ···, N and each l, U_li = x_i + (ξ_li + ξ̄_li)/2;
for i = N + 1, ···, 2N and each l, U_li = x_i − ξ_li at j_l = 0, and U_li = x_i + ξ̄_li at j_l = 1;
for i = 2N + 1, ···, 3N and each l, U_li = x_l + ξ̄_li at j_l = 0, and U_li = x_l − ξ_li at j_l = 1.
Hence, under a given cone index J, x̃_i is changed into real data.
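The two-step shortcut of the algorithm above (one Min-Max composition, then clipping at the largest superdiagonal element of T²) can be sketched in Python; the function names are illustrative:

```python
def min_max_compose(t):
    """One Min-Max composition: (T o T)_ij = min_k max(t_ik, t_kj)."""
    n = len(t)
    return [[min(max(t[i][k], t[k][j]) for k in range(n)) for j in range(n)]
            for i in range(n)]

def shortcut_closure(t):
    """Two-step shortcut (assumes t is anti-reflexive and symmetric):
    square T once, take the largest superdiagonal element of T^2,
    and clip every larger off-diagonal entry down to it."""
    t2 = min_max_compose(t)
    n = len(t)
    cap = max(t2[i][i + 1] for i in range(n - 1))
    return [[0.0 if i == j else min(t2[i][j], cap) for j in range(n)]
            for i in range(n)]

# the fuzzy matrix T of Example 5.1 below
T = [[0.0, 0.14, 0.50, 0.72, 1.0],
     [0.14, 0.0, 0.36, 0.52, 0.86],
     [0.50, 0.36, 0.0, 0.19, 0.50],
     [0.72, 0.52, 0.19, 0.0, 0.29],
     [1.0, 0.86, 0.50, 0.29, 0.0]]
print(shortcut_closure(T)[0])  # [0.0, 0.14, 0.36, 0.36, 0.36]
```

Applying `min_max_compose` to the result leaves it unchanged, which is the idempotence T*² = T* asserted by Theorem 5.1.6.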


Again, from Method a) or b), we calculate the metric value d_ij between the T-fuzzy sample data x̃_i and x̃_j, listing the united table as Table 5.1 below:

Table 5.1. The United Table

       x̃1    x̃2    ···   x̃n
x̃1    0
x̃2    d21   0
 ⋮     ⋮     ⋮     ⋱
x̃n    dn1   dn2   ···   0

From this we form the anti-reflexive, symmetric common matrix

D =
( 0                 )
( d21  0            )
( ···  ···  ⋱       )
( dn1  dn2  ···  0  ).

D can be classified not only by a classic method but also by a fuzzy method. Here we carry out the classification by the fuzzy-set method as follows.
3. Fuzzify the data. Apply (5.1.8) to the numbers of Table 5.1 (or of the matrix D) to obtain a standardized matrix T = (t_ij)_n; it is a fuzzy matrix.
4. Compute the transitive closure. If T is already a fuzzy distance matrix, go to Step 5. Otherwise, by the Min-Max or Max-Min operation '◦', compute the Min-Max (resp. Max-Min) transitive closure S(T) of the standardized matrix T = (t_ij)_n by the shortcut method, reforming the fuzzy matrix T into the fuzzy distance matrix T*.
5. Classification. Order the elements of the matrix T* from small to large; take the α-cut matrix (T*)_α, where t^α_ij = 1 at t̃_ij > α and t^α_ij = 0 at t̃_ij ≤ α; then (T*)_α is a Boolean matrix. Let α ∈ [0, 1] and classify U.
5.1.2.5 Application Example
Example 5.1: Five samples u1, u2, u3, u4, u5 each have only one characteristic, whose T-fuzzy data are depicted as
x̃1 = (1, 0.5, 0.1), x̃2 = (2, 0.7, 0.3), x̃3 = (4.5, 0.8, 0.5), x̃4 = (6, 0.9, 0.6), x̃5 = (8, 1, 0.8).
Find their classification.


By applying the modeling steps of Section 5.1.2.4, we operate on the above as follows:
1) Processing by non-T-fuzzifying data. Compute the measure value between x̃_i and x̃_j by (5.1.7), listing the united table as Table 5.2:

Table 5.2. Measure Value between x̃_i and x̃_j

                  (1, 0.5, 0.1)  (2, 0.7, 0.3)  (4.5, 0.8, 0.5)  (6, 0.9, 0.6)  (8, 1, 0.8)
(1, 0.5, 0.1)     0
(2, 0.7, 0.3)     1.013          0
(4.5, 0.8, 0.5)   3.545          2.536          0
(6, 0.9, 0.6)     5.047          4.039          1.502            0
(8, 1, 0.8)       7.084          6.076          3.539            2.037          0

2) Fuzzifying the data. Fuzzify the data of Table 5.2 by (5.1.8) (for simplicity of record we here take m = 0), obtaining

T =
( 0     0.14  0.50  0.72  1    )
( 0.14  0     0.36  0.52  0.86 )
( 0.50  0.36  0     0.19  0.50 )
( 0.72  0.52  0.19  0     0.29 )
( 1     0.86  0.50  0.29  0    ).

3) Computing the fuzzy distance matrix. Taking the Min-Max operation '◦', we have

T² = T ◦ T =
( 0     0.14  0.36  0.5   0.5  )
( 0.14  0     0.36  0.36  0.5  )
( 0.36  0.36  0     0.19  0.29 )
( 0.5   0.36  0.19  0     0.29 )
( 0.5   0.5   0.29  0.29  0    ).

Then we use the shortcut method to obtain T*. Taking
t̃*_{i0j0} = max{ t̃²_{i,i+1} | i = 1, 2, 3, 4 } = max{0.14, 0.36, 0.19, 0.29} = 0.36,
i.e., the greatest element of the diagonal just above the main diagonal of T². Every element of T² larger than 0.36 is replaced by 0.36; elements less than or equal to 0.36 keep their original values. Therefore T* is derived as follows:

T* =
( 0     0.14  0.36  0.36  0.36 )
( 0.14  0     0.36  0.36  0.36 )
( 0.36  0.36  0     0.19  0.29 )
( 0.36  0.36  0.19  0     0.29 )
( 0.36  0.36  0.29  0.29  0    ).


4) Cluster. Order the elements of the matrix T* from small to large, take the α-cut matrices (T*)_α and classify U.
At α = 0, the cut matrix is

(T*)_0 =
( 0 1 1 1 1 )
( 1 0 1 1 1 )
( 1 1 0 1 1 )
( 1 1 1 0 1 )
( 1 1 1 1 0 ),

dividing U into five types: {u1}, {u2}, {u3}, {u4}, {u5}. Analogously:
At α = 0.14, the cut matrix is (T*)_{0.14}, dividing U into four types: {u1, u2}, {u3}, {u4}, {u5}.
At α = 0.19, the cut matrix is (T*)_{0.19}, dividing U into three types: {u1, u2}, {u3, u4}, {u5}.
At α = 0.29, the cut matrix is (T*)_{0.29}, dividing U into two types: {u1, u2}, {u3, u4, u5}.
At α = 0.36, the cut matrix is (T*)_{0.36}, dividing U into one type: {u1, u2, u3, u4, u5}.
This coincides with the result obtained from the united table values by a system cluster method [Fang89].
5.1.2.6 Conclusion
For fuzzy similarity matrices the corresponding classification can be obtained as well. The results of this section can be used for pattern recognition and for more complicated systems and engineering [Cao07b].
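Given the fuzzy distance matrix T* just derived, the level-wise classification can be sketched in Python; since T* is Min-Max transitive, the relation "distance ≤ α" is an equivalence, so one row scan per unassigned element suffices (indices 0–4 stand for u1–u5):

```python
TSTAR = [
    [0.0, 0.14, 0.36, 0.36, 0.36],
    [0.14, 0.0, 0.36, 0.36, 0.36],
    [0.36, 0.36, 0.0, 0.19, 0.29],
    [0.36, 0.36, 0.19, 0.0, 0.29],
    [0.36, 0.36, 0.29, 0.29, 0.0],
]

def classes(tstar, alpha):
    """Partition at level alpha: j joins i's class when t*_ij <= alpha
    (valid because Min-Max transitivity makes this relation transitive)."""
    n = len(tstar)
    seen, groups = set(), []
    for i in range(n):
        if i in seen:
            continue
        g = [j for j in range(n) if tstar[i][j] <= alpha]
        seen.update(g)
        groups.append(g)
    return groups

for a in (0.0, 0.14, 0.19, 0.29, 0.36):
    print(a, classes(TSTAR, a))  # five classes shrink to one as alpha grows
```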

5.2 Fuzzy Recognition

Pattern recognition means recognizing where a given object belongs on the basis of knowledge of various patterns; it is a problem of pattern clustering. It is difficult for us to give an exact description of a sample of the complicated phenomena of the real world, and the data obtained by quantification are approximate to a certain degree. It is well known that the concept of a fuzzy set originated from the study of problems related to pattern classification, and many recognition methods have appeared under the inspiration of fuzzy sets [Zad76]; accordingly, many authors [Cao96c,07b] [CL91] have developed fuzzy pattern matching and various distance models among fuzzy sets (including fuzzy data).
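One quantitative tool used below is the approaching degree between fuzzy sets (Definition 5.2.1), built from a max-min inner product and a min-max outer product; on a finite universe it can be sketched as follows (the membership lists are illustrative assumptions):

```python
def approach_degree(a, b):
    """sigma(A, B) = 0.5 * (inner + (1 - outer)) on a common finite universe;
    a and b are lists of membership grades."""
    inner = max(min(x, y) for x, y in zip(a, b))   # sup of pointwise min
    outer = min(max(x, y) for x, y in zip(a, b))   # inf of pointwise max
    return 0.5 * (inner + (1.0 - outer))

a = [0.1, 0.6, 1.0, 0.4]
b = [0.2, 0.7, 0.9, 0.3]
print(approach_degree(a, a))  # a normal fuzzy set scores high against itself
print(approach_degree(a, b))
```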


5.2.1 Common Methods in Pattern Recognition
For a pattern recognition model we have methods such as statistics, language and fuzzy sets. We introduce the fuzzy set methods as follows.
a. The method of individual model recognition
Let Ã1, Ã2, ···, Ãn be n fuzzy sets of X and x0 ∈ X. If
Ãk(x0) = max_{1≤i≤n} { Ãi(x0) },
then x0 is regarded as relatively belonging to the fuzzy set Ãk. This is the so-called maximum membership degree principle.
Assume there exist n models represented by the n fuzzy sets Ã1, Ã2, ···, Ãn of X, and we have an object x0 ∈ X to identify. Which model x0 belongs to is judged by the maximum membership degree principle: the fuzzy set whose membership function value is the greatest determines the model it belongs to. This direct method is the first recognition method for the model.
The method above can be modified as follows: stipulate a threshold value λ ∈ [0, 1] before judging by the maximum membership degree principle, writing
α = max{ Ã1(x0), Ã2(x0), ···, Ãn(x0) }.
If α < λ, identification fails and we have to analyze the object another way. If α ≥ λ, identification is possible and is judged by the maximum membership degree principle.
b. The method of group model recognition
Definition 5.2.1. Let Ã, B̃ ∈ F(X). Then we call
σ(Ã, B̃) = ½ [ Ã ⊗ B̃ + (1 − Ã ⊙ B̃) ]
an approaching degree of Ã and B̃, where
Ã ⊗ B̃ = ⋁_{x∈X} [ μ_Ã(x) ∧ μ_B̃(x) ]
and
Ã ⊙ B̃ = ⋀_{x∈X} [ μ_Ã(x) ∨ μ_B̃(x) ]
are called the inner product and the outer product of Ã and B̃, respectively. Here Ã1 ≠ ∅, B̃1 ≠ ∅, supp Ã ≠ X, supp B̃ ≠ X.
The principle of choosing the nearest
Let Ãi, B̃ ∈ F(X) (1 ≤ i ≤ n). If Ãi0 exists such that
σ(Ãi0, B̃) = max{ σ(Ã1, B̃), σ(Ã2, B̃), ···, σ(Ãn, B̃) },


then we call B̃ the closest to Ãi0 and judge that B̃ should belong to the class of Ãi0.
The recognition model with fuzzy data has vast practical use. But here the author puts forward a new recognition model with T-fuzzy data, different from the traditional and fuzzy recognition models.
Definition 5.2.2. Let the measure of the T-fuzzy data x̃i, ỹk be di(x̃i, ỹk). Then there is
D⁻ = inf_{x̃i∈T̃(xi)} di(x̃i, ỹk),  D⁺ = sup_{x̃i∈T̃(xi)} di(x̃i, ỹk),
Ỹ = 1 − ( di(x̃i, ỹk) − D⁻ ) / ( D⁺ − D⁻ ),   (5.2.1)
where T̃(xi) is the whole of the T-fuzzy numbers in U.
Next we introduce a pattern recognition method with T-fuzzy data.
5.2.2 Pattern Recognition Model with T-Fuzzy Data
5.2.2.1 Introduction
This section introduces T-fuzzy data [Cao90] [Dia87] into a pattern recognition model, developing a new model different from those gained by other methods: a pattern recognition model with T-fuzzy data. The model is applied to the recognition of environmental quality, human fossils and children's healthy growth; it is effective in the disposal of pattern recognition problems with T-fuzzy data and gets good results in a numerical example.
5.2.2.2 Model

As for the building of a pattern recognition model, we have fuzzy set methods such as threshold value, experience value and the dual opposite comparison function [Wang83]. The author once advanced another recognition method [Cao96d,07b] as follows.
Let ui ∈ U (1 ≤ i ≤ m) be an object awaiting recognition, with its feature described by the T-fuzzy data x̃i = (xi, ξi, ξ̄i), and suppose U has a standard object whose feature is assigned the value ỹ = (y, η, η̄).
1) Methods of threshold or experience value
The samples x̃ and ỹ are defuzzified by Formula (5.1.7), and a threshold value λ ∈ [0, 1] (or an experience value α ∈ R) is properly selected. If di ≥ λ (or di ≤ α), recognition is accepted; at di < λ (or at di > α), recognition is refused.
2) Method of the dual opposite comparison function
The samples x̃i and ỹj are defuzzified by Formula (5.1.7); then let
μ_v0(uij) = d_x̃ij(ỹ0) / ( d_x̃ij(ỹ0) + d_ỹ0(x̃ij) )   (i ≠ j)
or
μ_v0(uij) = d_x̃ij(ỹ0) / ( d_x̃ij(ỹ0) ∨ d_ỹ0(x̃ij) )   (i ≠ j),
where μ_v0(uij) ∈ [0, 1] (1 ≤ i ≤ m; 1 ≤ j ≤ n). List the relation orders and take
θ = μ_v0(uk0) = max{ μ_v0(ui1), μ_v0(ui2), ···, μ_v0(uin) }  (1 ≤ i ≤ m).
Therefore we can decide that the k0-th sample uk0 is most similar to the standard sample v0.
If there are n features influencing ui corresponding to v, with feature values x̃ij of ui and ỹ of v, then the metric between x̃ij and ỹ shall be weighted, i.e.,
dij = Σ_{j=1}^{n} kij d(x̃ij, ỹ) at i ≠ j, and dij = 0 at i = j,
where kij ≥ 0 and Σ_{j=1}^{n} kij = 1 (1 ≤ i ≤ m).
The methods mentioned above have been applied to recognition of the human fossil [Cao96c] and children's healthy growth [CL91], respectively, with satisfactory results. But here we develop another method, different from the earlier one [DPr80], based on the author's own model.
3) Concrete pattern classification
Obviously we consider and use a variety of information; the steps are as follows.
1° Feature collection
For the objects ui = {uij} ∈ U (1 ≤ i ≤ m; 1 ≤ j ≤ n), we collect the collective peculiarities concerned and test the data of the feature description
x̃ij = (xij, ξij, ξ̄ij),
where the extension can be taken as in the function max{0, 1 − |xij|}, etc.
2° Variation pattern
Change ui into the T-fuzzy number pattern p(ui) = (u¹i, u²i, ···, uᵐi); meanwhile, give a standard object v0 and determine its pattern, the vector p(v0) = (v0¹, v0², ···, v0ᵐ) with respect to v0, whose assigned value of the corresponding feature is ỹ0 = (y0, η0, η̄0).
3° Non-fuzzification
By the aid of the distance Formula (5.1.7) we calculate the metric value d̃ij between x̃ij (1 ≤ i ≤ m, 1 ≤ j ≤ n) and ỹ0; then

5.2 Fuzzy Recognition

(d̃_ij)_{m×n} =
⎛ d(x̃_11, ỹ_0)  d(x̃_12, ỹ_0)  ···  d(x̃_1n, ỹ_0) ⎞
⎜ d(x̃_21, ỹ_0)  d(x̃_22, ỹ_0)  ···  d(x̃_2n, ỹ_0) ⎟      (5.2.2)
⎜ ···            ···            ···  ···           ⎟
⎝ d(x̃_m1, ỹ_0)  d(x̃_m2, ỹ_0)  ···  d(x̃_mn, ỹ_0) ⎠

where d̃_ij = d(x̃_ij, ỹ_0).

4° Optimum decision [Yage80]. Take the maximum, minimum, and average values of d(x̃_ij, ỹ_0) in (5.2.2) row by row:

D_i^+ = sup_j d(x̃_ij, ỹ_0),  D_i^− = inf_j d(x̃_ij, ỹ_0),  D̄_i = (1/n) Σ_{j=1}^n d(x̃_ij, ỹ_0)  (1 ≤ i ≤ m);

then the following assessment matrix is obtained:

I   ( D_1^+  D_1^−  D̄_1 )
II  ( D_2^+  D_2^−  D̄_2 )
···
m   ( D_m^+  D_m^−  D̄_m )

Again, let

f(D_i^+, D_i^−, D̄_i) = k_1 D_i^+ + k_2 D_i^− + k_3 D̄_i.

Here k_i ∈ [0, 1] (i = 1, 2, 3) is a weight obtained by the Analytic Hierarchy Process or the Delphi method. Finally, we take min_i f(D_i^+, D_i^−, D̄_i) = f_k (1 ≤ i ≤ m).

5° Determination. f_k being minimal means the object u_k is most similar to v_0.

5.2.2.3 Practical Example

In environmental protection we must often distinguish the grade of environmental quality [Lao90]. We divide environmental quality into grades I-V as follows:

U = {clean (I), less clean (II), less polluted (III), more polluted (IV), most polluted (V)},

where U is a universe. According to monitoring data at 10 observation points in a city, we choose atmosphere, surface water, and groundwater as the environmental factors u_1, u_2, and u_3, respectively; their index sets are

u_1 = {SO2, NOx, TSP},
u_2 = {COD, NH3-N, DO, NO3-N, Cr^{6+}, CN},
u_3 = {SO4^{2−}, Cl^−, NO3-N, hardness, Cr^{6+}, CN},

respectively.


1° Ref. [Lao90] gives standard values for distinguishing the grade of environmental quality, as shown in Table 5.2.1.

Table 5.2.1. Standards for Distinguishing the Grade of Environmental Quality

Factors       | Index    | I      | II           | III          | IV           | V
Atmosphere    | SO2      | ≤0.05  | >0.05~0.15   | >0.15~0.25   | >0.25~0.50   | >0.50
              | NOx      | ≤0.05  | >0.05~0.10   | >0.10~0.15   | >0.15~0.30   | >0.30
              | TSP      | ≤0.15  | >0.15~0.30   | >0.30~0.50   | >0.50~0.75   | >0.75
Surface       | COD      | ≤2     | >2~6         | >6~12        | >12~25       | >25
water         | NH3-N    | ≤0.25  | >0.25~0.50   | >0.50~1      | >1~3         | >3
              | DO       | ≥8     | 4~8          | 3~4          | 2~3          | <2
              | NO3-N    | ≤10    | >10~20       | >20~40       | >40~80       | >80
              | Cr^{6+}  | ≤0.01  | >0.01~0.05   | >0.05~0.10   | >0.10~0.25   | >0.25
              | CN       | ≤0.01  | >0.01~0.05   | >0.05~0.10   | >0.10~0.25   | >0.25
Ground-       | SO4^{2−} | ≤120   | >120~250     | >250~750     | >750~1000    | >1000
water         | Cl^−     | ≤120   | >120~250     | >250~750     | >750~1000    | >1000
              | NO3-N    | ≤10    | 10~20        | 20~40        | 40~80        | >80
              | hardness | ≤250   | >250~450     | >450~650     | >650~900     | >900
              | Cr^{6+}  | ≤0.01  | >0.01~0.05   | >0.05~0.10   | >0.10~0.25   | >0.25
              | CN       | ≤0.01  | >0.01~0.05   | >0.05~0.10   | >0.10~0.25   | >0.25

Let A = {u_1, u_2, u_3}. The T-fuzzy number sets corresponding to the five standard grades in U are as follows (units: atmosphere mg/Nm³, water mg/l, hardness mg/l):

AI = {(0.05, 0, 0)T, (0.05, 0, 0)T, (0.15, 0, 0)T; (2, 0, 0)T, (0.25, 0, 0)T, (8, 0, 0)T, (10, 0, 0)T, (0.01, 0, 0)T, (0.01, 0, 0)T; (120, 0, 0)T, (120, 0, 0)T, (10, 0, 0)T, (250, 0, 0)T, (0.01, 0, 0)T, (0.01, 0, 0)T}.

AII = {(0.1, 0.04, 0.05)T, (0.08, 0.02, 0.01)T, (0.2, 0.02, 0.1)T; (4, 1.8, 2)T, (0.35, 0.1, 0.05)T, (6, 2, 1.5)T, (15, 4.5, 5)T, (0.03, 0.02, 0.01)T, (0.03, 0.02, 0.01)T; (190, 65, 60)T, (190, 65, 60)T, (15, 4.5, 5)T, (360, 110, 90)T, (0.03, 0.02, 0.01)T, (0.03, 0.02, 0.01)T}.

AIII = {(0.2, 0.01, 0.05)T, (0.12, 0.02, 0.01)T, (0.4, 0.1, 0.05)T; (9, 2, 3)T, (0.7, 0.05, 0.2)T, (3.5, 0.5, 0.3)T, (30, 9, 10)T, (0.08, 0.01, 0.01)T, (0.08, 0.02, 0.02)T; (510, 260, 240)T, (510, 260, 240)T, (30, 9, 10)T, (560, 110, 90)T, (0.08, 0.01, 0.01)T, (0.08, 0.02, 0.02)T}.

AIV = {(0.38, 0.12, 0.1)T, (0.22, 0.05, 0.08)T, (0.62, 0.10, 0.12)T; (19, 5, 6)T, (2, 0.8, 1)T, (2.5, 0.5, 0.1)T, (60, 18, 20)T, (0.18, 0.07, 0.05)T, (0.18, 0.07, 0.05)T; (880, 130, 120)T, (880, 130, 120)T, (60, 18, 20)T, (780, 130, 120)T, (0.18, 0.07, 0.05)T, (0.18, 0.07, 0.05)T}.

AV = {(0.5, 0, 0)T, (0.3, 0, 0)T, (0.75, 0, 0)T; (25, 0, 0)T, (3, 0, 0)T, (2, 0, 0)T, (80, 0, 0)T, (0.25, 0, 0)T, (0.25, 0, 0)T; (1000, 0, 0)T, (1000, 0, 0)T, (80, 0, 0)T, (900, 0, 0)T, (0.25, 0, 0)T, (0.25, 0, 0)T}.


Now, according to Table 5.2.1, the basic index feature values of environmental quality in the city have been tested, as shown in Table 5.2.2.

Table 5.2.2. Basic Index Feature Values

Factors       | Atmosphere       | Surface water
Index         | SO2   NOx   TSP  | COD   NH3-N  DO   NO3-N  Cr^{6+}  CN
Feature value | 0.07  0.05  0.6  | 19.2  1.5    5.6  10     0.01     0.01

Factors       | Groundwater
Index         | SO4^{2−}  Cl^−  NO3-N  hardness  Cr^{6+}  CN
Feature value | 612       625   14     290       0.01     0.01

2° Fitting the measured data with T-fuzzy numbers gives

A0 = {u_01, u_02, u_03} = {(0.06, 0.005, 0.015)T, (0.05, 0.01, 0.02)T, (0.5, 0.05, 0.15)T; (19, 0.05, 0.4)T, (2, 0.5, 0.1)T, (5.5, 0.3, 0.4)T, (9, 1, 2)T, (0.01, 0.005, 0.001)T, (0.02, 0.001, 0.005)T; (611, 0.5, 1.5)T, (624, 1, 2)T, (15, 1, 0)T, (290, 1, 2)T, (0.01, 0.005, 0.001)T, (0.02, 0.001, 0.005)T}.

3° The 5 × 15 matrix (d_ij)_{5×15} is obtained by calculating, with Formula (5.1.7), the distance between each component of A_j (j = I, II, III, IV, V) and the corresponding component of A_0:

(d_ij)_{5×15} =
⎛ 0.0110 0.0129 0.3279 16.8845 1.9015 2.5495  1.8257 490.6674 503.6682  5.3541  39.6863 0.0029 0.0091 0.0029 0.0091 ⎞
⎜ 0.0492 0.0238 0.3084 15.1120 1.5465 1.1902  6.4096 425.9724 438.8379  3.5237 102.2823 0.0205 0.0016 0.0205 0.0016 ⎟
⎜ 0.1511 0.0635 0.1555  9.9593 1.1332 2.1016 21.9924 230.2662 236.2915 17.3109 275.0667 0.0716 0.0590 0.0716 0.0603 ⎟
⎜ 0.3207 0.1814 0.0956  4.3152 0.5477 3.1691 53.2854 284.0056 271.748  48.4218 496.6840 0.1712 0.1591 0.1712 0.1591 ⎟
⎝ 0.4367 0.2470 0.2327  5.8868 1.1633 3.5449 70.6777 388.6676 375.6687 65.3350 609.6679 0.1712 0.1591 0.2413 0.2287 ⎠

4° The assessment matrix (D_i^+, D_i^−, D̄_i) is derived by Step 4° of Section 5.2.2.2:

I   ( 503.6682  0.0029  70.8609 )
II  ( 438.8379  0.0016  66.3501 )
III ( 236.2915  0.0590  52.9836 )
IV  ( 496.6840  0.0956  77.5623 )
V   ( 609.6679  0.1591  101.4886 )


We may as well let k_1 = k_2 = k_3 = 1/3; then

f_I = (1/3)(503.6682 + 0.0029 + 70.8609) ≈ 191.5107,
f_II ≈ 168.4,  f_III ≈ 96.445,  f_IV ≈ 191.45,  f_V ≈ 237.1052.

5° f_III ≈ 96.4447 is the smallest, so the city's environmental quality is closest to grade III, i.e., less polluted, which accords with practice.

5.2.2.4 Conclusion

In application it is difficult to obtain T-fuzzy data directly, so approximate values obtained from tests or measurements are regarded as fittings of T-fuzzy data. If the recorded historical data are incomplete or inexact, T-fuzzy numbers can be constructed as in Section 3.1.4. The result coincides with the systematic fuzzy evaluation method of Ref. [Chen94] and the 2-order fuzzy synthetic evaluation model of Ref. [Zad76]. But the method of this section is superior to those two:
1) The content carries wider information, because fuzzy data are used.
2) Its result contains numbers not compressed into [0, 1]; with many factors the weights become small, which would make each single-factor evaluation meaningless under Zadeh's "∨" operator, losing much information.

5.2.3 Application of a Recognition Model with T-Fuzzy Data

5.2.3.1 Application to Identification of a Fossil

We introduce practical applications of T-fuzzy data below, since they have broad prospects in every field. In animal fossil identification we often meet cases with large differences between fossil and specimen, unclear boundary standards, and fuzzy boundaries, which brings certain difficulties to identification. We now consider a rat fossil identification problem concerning indexes at the belt, knucklebone bows, toes, and back of big gerbils, meridian gerbils, and long-clawed gerbils. Let big gerbils be u_1, meridian gerbils u_2, and long-clawed gerbils u_3, with indexes at belt u_j1, knucklebone bows u_j2, toes u_j3, and back u_j4. By measurement, these three gerbils have four kinds of indexes, respectively:

P(u_j) = {P(u_j1), P(u_j2), P(u_j3), P(u_j4)},


i.e.,

P(u_1) = {(45.2, 8.5, 11.3), (100, 18.3, 29.2), (87.5, 28.4, 34.7), (47.8, 9.4, 11.6)},
P(u_2) = {(52, 6.5, 15.5), (80, 24.2, 24.8), (125, 24.8, 24.6), (53.8, 15, 13.3)},
P(u_3) = {(50, 5.4, 9.4), (76.2, 18.7, 28.4), (105.3, 22.8, 30), (66.7, 36, 17.8)}.

Now a gerbil fossil has been measured, giving the data u_i* (i = 1, 2, 3, 4):

P(u*) = {(47, 7.2, 13.5), (78, 20, 15), (110, 19, 33), (45, 29, 18)},

which is taken as the standard mode. Try to determine which gerbil it belongs to.

1° By use of (5.1.7), calculate the distances between u* and u_j:

d(u_1*, u_11) = √(9.61 + 16 + 3.24) ≈ 5.37,
d(u_2*, u_12) ≈ 42.74, d(u_3*, u_13) ≈ 44.23, d(u_4*, u_14) ≈ 22.57;
d(u_1*, u_21) ≈ 10.32, d(u_2*, u_22) ≈ 12.17, d(u_3*, u_23) ≈ 18.8, d(u_4*, u_24) ≈ 24.78;
d(u_1*, u_31) ≈ 5.77, d(u_2*, u_32) ≈ 21.91, d(u_3*, u_33) ≈ 12.39, d(u_4*, u_34) ≈ 33.9.

2° Make D_j = k_1 D_1^j + k_2 D_2^j + ··· + k_n D_n^j, where k_i ≥ 0 and

Σ_{i=1}^n k_i = 1. Take n = 4, j = 1, 2, 3, and k_i = 1/4; then

D_1 = (1/4)[d(u_1*, u_11) + d(u_2*, u_12) + d(u_3*, u_13) + d(u_4*, u_14)] ≈ 28.73.

Similarly, D_2 ≈ 16.52, D_3 ≈ 18.49.

3° Determine. Comparing the D_j, we take D_1 = max{D_j | j = 1, 2, 3} = 28.73 as what we want to find. Therefore this ancient rat is most alike to a big gerbil, belonging to big gerbils with membership degree 0.45. In practical application, different weight coefficients can be assigned according to concrete circumstances, adopting methods such as the Analytic Hierarchy Process, the Delphi method, etc.

5.2.3.2 Application to Young Children's Body Growth

Young children are the future's hope; they are being brought up to gain knowledge, cultivate their sentiments, and develop their bodies, so


prompt, scientific, and accurate analysis of their growth, and the urging of their full development into reliable successors, has extremely significant strategic sense. However, because many factors influence the human body's growth and there are no clear boundaries between the various divisions of human bodies, assessing body type and growth brings many difficulties. This section establishes a simple recognition and judgment model of young children's growth by applying fuzzy set theory according to the method above; the modeling steps are shown with an example.

Suppose the universe of human body types is U = {non-force type, positive-force type, super-force type}, with characteristic P(u) = {stature, weight, chest circumference} = (u_1, u_2, u_3), depicted by T-fuzzy data written x̃_i = (x_i, ξ_i, η_i); if x̃_1, x̃_2, x̃_3 are as above, then P(x̃) = (x̃_1, x̃_2, x̃_3) denotes a standard mode. According to Ref. [ISTI82], for 18-25 year old men in the city, the ranges of stature, weight, and chest circumference across the non-force, positive-force, and super-force types are: stature (u_1): [155, 186], weight (u_2): [42, 75], and chest circumference (u_3): [74, 98]. The standard evaluation of the non-force type is

x̃_1^(1) = (170, 3, 4), x̃_2^(1) = (48, 6, 7), x̃_3^(1) = (79, 5, 5);

that of the positive-force type is

x̃_1^(2) = (170, 3, 4), x̃_2^(2) = (58, 3, 4), x̃_3^(2) = (86, 2, 2);

and that of the super-force type is

x̃_1^(3) = (170, 3, 4), x̃_2^(3) = (68, 6, 7), x̃_3^(3) = (93, 5, 5).

Now a student P_0 has been measured on u_i (i = 1, 2, 3), with corresponding evaluation

x̃_0^(1) = (170, 1, 5), x̃_0^(2) = (59, 2, 6), x̃_0^(3) = (86, 3, 2).

Try to judge with what degree P_0 belongs to P(u_j) (j = 1, 2, 3).

i) Computation of the distances between P_0 and P(u_j), j = 1, 2, 3 (by Formula (5.1.7)).

The distance between P_0 and P(u_1):

D_1^1 = d_1(x̃_0^(1), x̃_1^(1)) = (4 + 1 + 0)^{1/2} ≈ 2.24,
D_2^1 = d_2(x̃_0^(2), x̃_2^(1)) ≈ 21.12,
D_3^1 = d_3(x̃_0^(3), x̃_3^(1)) ≈ 12.08.


The distance between P_0 and P(u_2):

D_1^2 = d_1(x̃_0^(1), x̃_1^(2)) ≈ 2.24,
D_2^2 = d_2(x̃_0^(2), x̃_2^(2)) ≈ 3.74,
D_3^2 = d_3(x̃_0^(3), x̃_3^(2)) ≈ 1.

The distance between P_0 and P(u_3):

D_1^3 = d_1(x̃_0^(1), x̃_1^(3)) ≈ 2.24,
D_2^3 = d_2(x̃_0^(2), x̃_2^(3)) ≈ 14.35,
D_3^3 = d_3(x̃_0^(3), x̃_3^(3)) ≈ 13.19.

ii) The initial judgement. Let

D_1 = (1/3)(D_1^1 + D_2^1 + D_3^1) ≈ 11.81,
D_2 = (1/3)(D_1^2 + D_2^2 + D_3^2) ≈ 2.33,
D_3 = (1/3)(D_1^3 + D_2^3 + D_3^3) ≈ 9.93.

Then D_1 > D_3 > D_2, so P_0 is closer to the positive-force type.

iii) Further judgement. Compare the D_j^i (i, j = 1, 2, 3) in size. Let D^+ = sup D_j^i = 21.12 and D^− = inf D_j^i = 1. Then

P(u_j)(P_0) = 1 − (D_j − D^−)/(D^+ − D^−),

hence

P(u_1)(P_0) = 1 − 0.537 = 0.463,
P(u_2)(P_0) = 1 − 0.066 = 0.934,
P(u_3)(P_0) = 1 − 0.444 = 0.556.

Therefore the student P_0 belongs to the positive-force type with membership degree 0.934.

6 Fuzzy Linear Programming

In this chapter, based on general fuzzy linear programming, we first discuss how to solve an optimal-judgement problem with Zimmermann's algorithm; then we put forward "the more-for-less paradox" of fuzzy linear programming, inquire into programming with various fuzzy coefficients, and study a new linear programming model with T-fuzzy variables. Finally, we make some extensions to fuzzy linear programming.

6.1 Fuzzy Linear Programming and Its Algorithm

Suppose that x = (x_1, x_2, ···, x_n)^T is an n-dimensional decision vector, c = (c_1, c_2, ···, c_n) an n-dimensional objective coefficient vector, A = (a_ij) (1 ≤ i ≤ m; 1 ≤ j ≤ n) an m × n constraint coefficient matrix, and b = (b_1, b_2, ···, b_m)^T an m-dimensional constant vector. Fuzzifying the objective and constraint functions of the ordinary linear programming gives

max̃ (or miñ) z = cx
s.t. Ax ≲ b, x ≥ 0,        (6.1.1)

which we call a fuzzy linear programming. Let rank(A) = m. "≲" denotes the fuzzy version of "≤", with the linguistic interpretation "essentially smaller than or equal to" [Zim76][LL01]; max̃ represents fuzzy maximization. Here cx = Σ_{j=1}^n c_j x_j and Ax = (Σ_{j=1}^n a_ij x_j) (1 ≤ i ≤ m).

B.-Y. Cao: Optimal Models & Meth. with Fuzzy Quantities, STUDFUZZ 248, pp. 139–191. c Springer-Verlag Berlin Heidelberg 2010 springerlink.com


The membership function of the fuzzy objective g̃(x) is

μ_G̃(x) = g̃(Σ_{j=1}^n c_j x_j) =
  { 0,                               when Σ_j c_j x_j ≤ z_0,
    (1/d_0)(Σ_j c_j x_j − z_0),      when z_0 < Σ_j c_j x_j ≤ z_0 + d_0,      (6.1.2)
    1,                               when Σ_j c_j x_j > z_0 + d_0.

Writing t_0 = Σ_{j=1}^n c_j x_j, the graph of g̃(t_0) is shown in Figure 6.1.1.

[Fig. 6.1.1. Image of g̃(t_0): a ramp rising from 0 at t_0 = z_0 to 1 at t_0 = z_0 + d_0.]

[Fig. 6.1.2. Image of f̃(t_i): a ramp falling from 1 at t_i = b_i to 0 at t_i = b_i + d_i.]

The membership functions of the fuzzy constraints f̃(x) are

μ_S̃i(x) = f̃(Σ_{j=1}^n a_ij x_j) =
  { 1,                               when Σ_j a_ij x_j ≤ b_i,
    1 − (1/d_i)(Σ_j a_ij x_j − b_i), when b_i < Σ_j a_ij x_j ≤ b_i + d_i,     (6.1.3)
    0,                               when Σ_j a_ij x_j > b_i + d_i.

Writing t_i = Σ_{j=1}^n a_ij x_j, the graph of f̃(t_i) is shown in Figure 6.1.2, where d_i ≥ 0 (0 ≤ i ≤ m) is a flexible index chosen appropriately.

Consider the symmetric-form fuzzy linear programming (6.1.1). We write μ_S̃ = S̃_f and μ_G̃ = M̃_f, and call them the conditional and unconditional fuzzy superiority sets of f concerning the constraint S̃, respectively.
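A minimal sketch of the two membership functions (6.1.2) and (6.1.3), written pointwise in t_0 and t_i:

```python
def mu_objective(t0, z0, d0):
    # (6.1.2): 0 below z0, a linear ramp on (z0, z0 + d0], and 1 above.
    if t0 <= z0:
        return 0.0
    if t0 <= z0 + d0:
        return (t0 - z0) / d0
    return 1.0

def mu_constraint(ti, bi, di):
    # (6.1.3): 1 up to bi, a falling ramp on (bi, bi + di], and 0 beyond.
    if ti <= bi:
        return 1.0
    if ti <= bi + di:
        return 1 - (ti - bi) / di
    return 0.0
```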


6.1.1 Replacement Solution Method in Fuzzy Linear Programming

Theorem 6.1.1. For a symmetric-type programming, we have

max_{x∈X} μ_D̃(x) = max_{α∈[0,1]} (α ∧ max_{x∈Sα} μ_G̃(x)).   (6.1.4)

Proof: By the decomposition theorem, the fuzzy constraint S̃ can be written as μ_S̃(x) = ∨_{α∈[0,1]} (α ∧ Sα(x)), where

Sα(x) = { 1, x ∈ Sα; 0, x ∉ Sα.

Then

μ_D̃(x) = μ_G̃(x) ∧ μ_S̃(x) = μ_G̃(x) ∧ [∨_{α∈[0,1]} (α ∧ Sα(x))] = ∨_{α∈[0,1]} [μ_G̃(x) ∧ α ∧ Sα(x)].

Hence

max_{x∈X} μ_D̃(x) = ∨_{x∈X} ∨_{α∈[0,1]} [μ_G̃(x) ∧ α ∧ Sα(x)] = ∨_{α∈[0,1]} {α ∧ [∨_{x∈X} (μ_G̃(x) ∧ Sα(x))]},

while

∨_{x∈X} [μ_G̃(x) ∧ Sα(x)] = {∨_{x∈Sα} [μ_G̃(x) ∧ Sα(x)]} ∨ {∨_{x∉Sα} [μ_G̃(x) ∧ Sα(x)]} = ∨_{x∈Sα} μ_G̃(x).

Therefore (6.1.4) is certified.

For convenience, let
(1) φ: [0,1] → [0,1], φ(α) = max_{x∈Sα} μ_G̃(x);
(2) ψ: [0,1] → [0,1], ψ(α) = α ∧ φ(α).

Obviously, φ has the following properties:

1° φ(0) = max_{x∈X} μ_G̃(x);
2° φ is a non-increasing function.

Asai, Tanaka et al. gave a sufficient condition for the continuity of φ [TOA73]: if the fuzzy constraint S̃ is a strictly convex fuzzy set, then φ is continuous on [0,1].

Theorem 6.1.2. If φ is continuous on [0,1], then φ has a unique fixed point.

Proof: Let f(α) = α − φ(α); since φ(α) is continuous on [0,1], f(α) is also continuous on [0,1].


Because the values of φ lie in [0,1], f(1) = 1 − φ(1) ≥ 0; similarly, f(0) = 0 − φ(0) < 0. Therefore, by continuity, there exists at least one point α* in [0,1] such that f(α*) = 0, i.e., α* = φ(α*).

Now we prove uniqueness. Suppose to the contrary that α_1* < α_2* both satisfy φ(α_1*) = α_1* and φ(α_2*) = α_2*. Since φ is non-increasing, α_1* < α_2* implies φ(α_1*) ≥ φ(α_2*), i.e., α_1* ≥ α_2*, a contradiction; hence α_1* = α_2*.

Theorem 6.1.3. The fixed point α* of the continuous function φ(α) is also a fixed point of the function ψ(α), i.e., ψ(α*) = α*.

Proof: ψ(α*) = α* ∧ φ(α*) = α* ∧ α* = α*.

Theorem 6.1.4. If φ is continuous, then max_{x∈X} μ_D̃(x) = ψ(α*) = α* for the fuzzy judgement μ_D̃(x), where α* is the fixed point of φ.

Proof: Because max_{x∈X} μ_D̃(x) = max_{α∈[0,1]} ψ(α) and ψ(α*) = α* ∧ φ(α*) = α*, it only remains to prove max_{α∈[0,1]} ψ(α) = ψ(α*).
(1) When α ≤ α*, φ(α) ≥ φ(α*) = α* ≥ α, so ψ(α) = α ∧ φ(α) = α ≤ α* = ψ(α*).
(2) When α ≥ α*, φ(α) ≤ φ(α*) = α* ≤ α, so ψ(α) = α ∧ φ(α) = φ(α) ≤ α* = ψ(α*).
Therefore ∀α ∈ [0,1], ψ(α) ≤ ψ(α*), i.e., ψ(α*) = max_{α∈[0,1]} ψ(α).

Theorem 6.1.5. If α* is a fixed point of the continuous function ψ(α), then α* = max_{x∈X} μ_D̃(x); that is, the fixed point α* of ψ(α) determines an optimal judgement value x*.

From Theorem 6.1.4 one easily gets α* = max_{x∈S_{α*}} μ_Ã0(x) = max_{x∈X} μ_D̃(x).

Thus we convert the fuzzy linear programming into a process of solving an ordinary linear programming. In (6.1.1) we only discuss maximizing the objective function (a fuzzy minimum problem can be converted into finding a fuzzy maximum of −f(x)).


The concrete steps for solving (6.1.1) are as follows.

1° Solve the two linear programmings

(I) min cx s.t. Ax ≤ b, x ≥ 0;    (II) max cx s.t. Ax ≤ b, x ≥ 0,

obtaining the minimum m = min cx and the maximum M = max cx, respectively. If zero lies in the feasible region of Problem (I) and no coefficient of c is negative, then m = 0 can be taken directly.

2° Determine a replacement accuracy ε > 0. According to Theorem 6.1.1, take α_1 ∈ (0,1), set k = 1, and change the problem into the linear programming

max μ_Ã0(x) s.t. Ax ≤ b_{α_k}, x ≥ 0,

where μ_Ã0(x) = (cx − m)/(M − m) and b_{α_k} = ((1 − α_k)p_1 + b_1, (1 − α_k)p_2 + b_2, ···, (1 − α_k)p_m + b_m). We get the maximum g_k = max_{x∈S_{α_k}} μ_Ã0(x).

3° Calculate the error ε_k = g_k − α_k. If |ε_k| < ε, go to Step 4°. Otherwise set α_{k+1} = α_k + γ_k ε_k, where γ_k is a replacement modifying coefficient chosen appropriately so that 0 ≤ α_{k+1} ≤ 1; then change k into k + 1 and return to Step 2°.

4° Let α* = α_k and solve the linear programming

max μ_Ã0(x) s.t. Ax ≤ b_{α*}, x ≥ 0.

By Theorem 6.1.5, the optimal solution obtained is an optimal solution (determination judgement) of (6.1.1).

Theoretically there are uncountably many values of α in [0,1] at Step 3°; they cannot be compared by one-by-one calculation. To resolve this, we apply the concept and theory of a fixed point.
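The replacement method can be illustrated on a one-variable toy instance (an assumption for illustration, not the book's example: a single constraint x ≲ b with tolerance p, and μ_Ã0 normalized by m = 0, M = c(b + p)). Here φ is continuous and decreasing, so the unique fixed point α* = φ(α*) of Theorem 6.1.2 can be found by bisection on f(α) = α − φ(α):

```python
def solve_fixed_point(c=1.0, b=10.0, p=2.0, tol=1e-10):
    # Toy instance of Section 6.1.1: max c*x s.t. x <~ b (tolerance p), x >= 0.
    # phi(alpha) = max of mu over S_alpha = {x : 0 <= x <= b + (1-alpha)*p},
    # with mu(x) = (c*x - m)/(M - m), m = 0, M = c*(b + p).
    m, M = 0.0, c * (b + p)
    phi = lambda a: (c * (b + (1 - a) * p) - m) / (M - m)
    lo, hi = 0.0, 1.0          # f(a) = a - phi(a): f(0) < 0, f(1) > 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid - phi(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

alpha_star = solve_fixed_point()   # for this instance phi(a) = 1 - a/6, so a* = 6/7
```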

6.1.2 Zimmermann's Algorithm for Fuzzy Linear Programming

Reconsider problem (6.1.1). To find an optimal solution of the fuzzy objective function under the fuzzy constraint, we can convert the fuzzy objective function into the fuzzy constraint condition cx ≳ z_0. Correspondingly there is a fuzzy set G̃ ∈ F(X) (the fuzzy objective set) in X with membership function (6.1.2); and to every constraint condition Σ_{j=1}^n a_ij x_j ≲ b_i there corresponds a fuzzy set S̃_i in X with membership function (6.1.3). Let S̃ = S̃_1 ∩ S̃_2 ∩ ··· ∩ S̃_m ∈ F(X); we call it the fuzzy constraint set corresponding to the constraint condition Ax ≲ b, x ≥ 0. When d_i = 0 (1 ≤ i ≤ m), S̃ becomes the ordinary constraint set S, and "≲" becomes "≤" in the constraint equations.

Definition 6.1.1. Suppose μ_G̃(x), μ_S̃i(x) are in turn the membership functions of the fuzzy objective and the i-th fuzzy constraint. Then the fuzzy set D̃ satisfying μ_D̃(x) = μ_G̃(x) ∧ (∧_{i=1}^m μ_S̃i(x)), x ≥ 0, is the fuzzy decision of (6.1.1), and a point x* satisfying μ_D̃(x*) = max_{x∈X} μ_D̃(x) is an optimal solution of (6.1.1).

Fuzzy programming (6.1.1) can be written as

−cx ≲ −z_0,  Ax ≲ b,  x ≥ 0,        (6.1.5)

where z_0 is an expected (constant) value of the objective. It is easy to see that at μ_S̃(x) = 1, μ_G̃(x) = 0; we hope to make the objective value bigger than z_0, yet this must be weighed against the fuzzy constraint set S̃ and the fuzzy objective set G̃ on both sides. According to the definition we can use the fuzzy judgement D̃ = G̃ ∩ S̃, i.e.,

μ_D̃(x) = μ_G̃(x) ∧ μ_S̃(x) = μ_G̃(x) ∧ [∧_{i=1}^m μ_S̃i(x)] = ∧_{i=0}^m μ_S̃i((Bx)_i) = min_{0≤i≤m} (b̄_i − (Bx)_i)/d_i,   (6.1.6)

where (Bx)_i denotes the element of Bx in the i-th row, B = (−c, A)^T, and b̄ = (−z_0, b)^T.

Let α = min_{0≤i≤m} (b̄_i − (Bx)_i)/d_i; then μ_D̃(x) = α, and hence we get the following.

Theorem 6.1.6. Maximization of μ_D̃(x) is equivalent to the linear programming

max G = α
s.t. 1 − (1/d_i)(Σ_{j=1}^n a_ij x_j − b_i) ≥ α  (1 ≤ i ≤ m),
     (1/d_0)(Σ_{j=1}^n c_j x_j − z_0) ≥ α,
     0 ≤ α ≤ 1,  x_1, ···, x_n ≥ 0.        (6.1.7)


Again, from Definition 6.1.1 and Theorem 6.1.6, the following is obvious.

Theorem 6.1.7. Suppose x̄* = (x_1*, x_2*, ···, x_n*; α*)^T is an optimal solution of (6.1.7); then x* = (x_1*, x_2*, ···, x_n*)^T is an optimal solution of (6.1.1), and the constraints and objective are satisfied at level α*.

Zimmermann initiated an algorithm for Problem (6.1.1) [Zim78]. We introduce its solution as follows.

1° First solve the ordinary linear programmings

max z = cx s.t. Ax ≤ b, x ≥ 0    and    max z = cx s.t. Ax ≤ b + d, x ≥ 0,

obtaining the maximum values z_0 and z_0 + d_0, where b + d = (b_1 + d_1, ···, b_m + d_m)^T. Here z_0 is the objective maximum under the strictly obeyed constraint Ax ≤ b (membership degree μ_S̃(x) = 1 at this time), and z_0 + d_0 is the objective maximum when the constraint is relaxed to Ax ≤ b + d (membership degree μ_S̃(x) = 0 at this time). These correspond to the two extreme cases μ_S̃(x) = 1 and μ_S̃(x) = 0; lowering the membership degree μ_S̃(x) appropriately allows the optimal value to be improved, lying between z_0 and z_0 + d_0.

2° Construct a fuzzy objective set G̃ ∈ F(X) with membership function (6.1.2); hence the fuzzy judgement of (6.1.5) is that of (6.1.6). Then find the optimal point x* such that

μ_D̃(x*) = μ_G̃(x*) ∧ μ_S̃(x*) = ∨_{x∈X} (μ_G̃(x) ∧ μ_S̃(x)).

3° Let

G̃ ∘ S̃ = ∨_{x∈X} (μ_G̃(x) ∧ μ_S̃(x))
       = ∨_{x∈X} ∨ {α | μ_G̃(x) ≥ α, μ_S̃(x) ≥ α, 0 ≤ α ≤ 1}
       = ∨_{x∈X} ∨ {α | μ_G̃(x) ≥ α; μ_S̃1(x) ≥ α, ···, μ_S̃m(x) ≥ α, 0 ≤ α ≤ 1}.

According to Theorem 6.1.6, this is the ordinary linear programming with parameter


max G = α
s.t. Σ_{j=1}^n a_ij x_j + d_i α ≤ b_i + d_i  (1 ≤ i ≤ m),
     Σ_{j=1}^n c_j x_j − d_0 α ≥ z_0,
     0 ≤ α ≤ 1,  x_1, ···, x_n ≥ 0.

We find its optimal solution x̄* = (x_1*, x_2*, ···, x_n*; α*)^T by the simplex method; thus the optimal point x* = (x_1*, x_2*, ···, x_n*)^T of (6.1.1) is obtained by Theorem 6.1.7. Correspondingly, the objective function value is z* = cx* and the optimal level is μ_D̃(x*) = α*.
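As a small illustration of Theorem 6.1.6 (an invented toy, not the book's example): for max x s.t. x ≲ 10 with d_1 = 2, Step 1° gives z_0 = 10 and z_0 + d_0 = 12, so the crisp program is max α s.t. x + 2α ≤ 12, x − 2α ≥ 10, 0 ≤ α ≤ 1. For fixed α it is feasible iff 10 + 2α ≤ 12 − 2α, i.e. α ≤ 0.5, which a simple scan confirms:

```python
# Scan alpha on a grid; for each alpha the crisp constraints of Theorem 6.1.6
# reduce to a feasibility interval lo <= x <= hi for this toy problem.
best = None
n = 10000
for k in range(n + 1):
    alpha = k / n
    lo, hi = 10 + 2 * alpha, 12 - 2 * alpha   # objective cut and relaxed constraint
    if lo <= hi:
        best = (alpha, lo)                    # keep the largest feasible alpha
alpha_star, x_star = best                     # alpha* = 0.5, x* = 11
```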

6.2 Expansion on the Optimal Solution of Fuzzy Linear Programming

6.2.1 Introduction

We consider the form of linear programming with fuzzy constraints in (6.1.1):

(L̃P)  max z = cx  s.t. Ax ≲ b, x ≥ 0.

Its corresponding parametric linear programming is

(LPα)  max c^T x  s.t. Ax ≤ b + (1 − α)d, x ≥ 0,

where α ∈ [0, 1]. Let x^(α), α ∈ [0, 1], denote an optimal solution of (LPα), Bα an optimal basis, and zα the optimal value. After Zimmermann's algorithm [Zim78] was given, people often simplified it and obtained the optimal value at α* = 0.5 [Fu90], [Pan87], [LL97]. However, this holds only when the optimal basis of (LP0) is identical to that of (LP1). If the optimal basis of (LP0) is not identical to that of (LP1), what is the value of α* at which the optimal solution is obtained?

6.2.2 Relevant Theorems of the Parametric Linear Programming (LPα)

Lemma 6.2.1. Assume x^(1) = (x_1^(1), ···, x_m^(1), 0, ···, 0)^T, whose corresponding optimal basis B_1 consists of the first m columns of A. If B_1^{-1}(b + d) ≥ 0 does not hold, then

x^(0) ≠ ⎛ B_1  N ⎞⁻¹ ⎛ b + d ⎞
        ⎝ 0    I ⎠   ⎝ 0     ⎠ .


Corollary 6.2.1. Suppose 0 ≤ α_1 < α_2 ≤ 1 and, without loss of generality, let x^(α_2) = (x_1^(α_2), ···, x_m^(α_2), 0, ···, 0)^T, with corresponding optimal basis B_{α_2} consisting of the first m columns of A. If B_{α_2}^{-1}(b + (1 − α_1)d) ≥ 0 does not hold, then

x^(α_1) ≠ ⎛ B_{α_2}  N ⎞⁻¹ ⎛ b + (1 − α_1)d ⎞
          ⎝ 0        I ⎠   ⎝ 0              ⎠ .

Theorem 6.2.1. Let 0 ≤ α_1 < α_2 ≤ 1, and suppose the optimal solution of (LP_{α_2}) is x^(α_2) = (x_1^(α_2), ···, x_m^(α_2), 0, ···, 0)^T with corresponding optimal basis B_{α_2}.
(1) If B_{α_2}^{-1}(b + (1 − α_1)d) ≥ 0, then x̄ = (B_{α_2}^{-1}(b + (1 − α_1)d); 0)^T is the optimal solution of (LP_{α_1}).
(2) If B_{α_2}^{-1}(b + (1 − α_1)d) ≥ 0 does not hold, then c^T x̄ > z_{α_1}.

Proof: (1) follows immediately from the simplex method of linear programming.
(2) Without loss of generality we consider only α_1 = 0, α_2 = 1; that is, we only prove that if B_1^{-1}(b + d) ≥ 0 does not hold, then

c^T (B_1^{-1}(b + d); 0)^T > z_0 = c^T x^(0).

Transform x by

ξ = ⎛ B_1  N ⎞ x,  i.e.,  ξ_i = { Σ_{j=1}^n a_ij x_j, 1 ≤ i ≤ m;
    ⎝ 0    I ⎠                    x_i,               m + 1 ≤ i ≤ n.

Since B_1 is a feasible basis of (LP1), this transformation is of full rank. Therefore the objective function can be transformed into a function of ξ:

f(x) = c^T x = c^T ⎛ B_1^{-1}  −B_1^{-1}N ⎞ ξ = c̄_1 ξ_1 + ··· + c̄_n ξ_n.
                   ⎝ 0         I          ⎠

By Theorem 2 of [Fu90], c̄_i ≥ 0 (1 ≤ i ≤ m) and c̄_i ≤ 0 (m + 1 ≤ i ≤ n).

Now consider the linear programming (LP0). By Lemma 6.2.1,

x^(0) ≠ ⎛ B_1  N ⎞⁻¹ ⎛ b + d ⎞
        ⎝ 0    I ⎠   ⎝ 0     ⎠ .

Since x^(0) is the optimal solution of (LP0), Ax^(0) ≤ b + d.


Hence at least one of the following inequalities holds strictly:

ξ_i^(0) = Σ_{j=1}^n a_ij x_j^(0) < b_i + d_i  (1 ≤ i ≤ m);
ξ_i^(0) = x_i^(0) > 0  (m + 1 ≤ i ≤ n).

So we have

f(x^(0)) = Σ_{i=1}^n c̄_i ξ_i^(0) < Σ_{i=1}^m c̄_i (b_i + d_i) = c^T (B_1^{-1}(b + d); 0)^T,

i.e., c^T (B_1^{-1}(b + d); 0)^T > z_0, which completes the proof.

(c^T x^(α_1) − z_1)/d_0 ≥ α_1 > α*,  (z_{α*} − z_1)/d_0 > α*.

Let α_2 = (z_{α*} − z_1)/d_0 and take ᾱ = min(α_1, α_2). Then

Ax^(α_1) = b + (1 − α_1)d ≤ b + (1 − ᾱ)d  and  (c^T x^(α_1) − z_1)/d_0 ≥ ᾱ > α*,

i.e., Ax^(α_1) + ᾱd ≤ b + d and c^T x^(α_1) − d_0 ᾱ ≥ z_1. So (ᾱ, x^(α_1)) is a feasible solution, but ᾱ > α*, which contradicts the conditions of the theorem; this completes the proof.

Therefore, for the fuzzy linear programming we need only consider the optimal solution and optimal value zα of the linear programming (LPα); that is, we need only consider the function zα. Moreover, the membership function of the fuzzy objective set is defined by Sα: zα = z_1 + d_0 α, where d_0 = z_0 − z_1; this is a simple fuzzy number whose image is a straight line. Therefore, for the fuzzy linear programming, when the "intersection" operation denotes the fuzzy decision, the optimal solution equals the intersection point of the curve of Figure 6.2.1 (or Figure 6.2.2) with this straight line: the intersection of the objective line Sα: zα = z_1 + d_0 α and the constraint function zα = c_{Bα}^T Bα^{-1}(b + (1 − α)d).

From the above results we have the following conclusions.

Theorem 6.2.4. Suppose B_0 and B_1 are the optimal bases of (LP0) and (LP1), respectively.
1) If B_0^{-1}b ≥ 0 or B_1^{-1}(b + d) ≥ 0, then the fuzzy decision of (L̃P) is α = 0.5.
2) If neither B_0^{-1}b ≥ 0 nor B_1^{-1}(b + d) ≥ 0 holds, then the fuzzy decision of (L̃P) satisfies α > 0.5.

Proof: 1) When B_0^{-1}b ≥ 0, by Theorem 6.2.2 the relation between zα and α in the linear programming (LPα) is

zα = c_{Bα}^T Bα^{-1}(b + d) − α c_{Bα}^T Bα^{-1}d = c_{B1}^T B_1^{-1}(b + d) − α c_{B1}^T B_1^{-1}d.


Its intersection with the objective set Sα: zα = z_1 + d_0 α is at α = 0.5, with zα = c_{B1}^T B_1^{-1}(b + 0.5d). By a similar argument, the optimal solution is α = 0.5 when B_1^{-1}(b + d) ≥ 0.

2) When the two conditions in 1) are both dissatisfied, by Theorem 6.2.2 the function zα = c_{Bα}^T Bα^{-1}(b + d) − α c_{Bα}^T Bα^{-1}d is a broken line whose slope increases as α decreases; moreover it is concave, while the membership function of the fuzzy objective set is a monotonically increasing line segment of slope d_0, i.e., the segment AB in Figure 6.2.3. The intersection of zα with the membership function of the objective set is therefore as shown in Figure 6.2.3.

[Fig. 6.2.3. The case where neither B_1^{-1}(b + d) ≥ 0 nor B_0^{-1}b ≥ 0 holds: the concave broken line zα and the rising segment AB.]

Link (0, z_0) and (1, z_1); the resulting segment CD lies under the broken line C̃D. Obviously CD intersects AB at the point E(0.5, (z_0 + z_1)/2). Therefore, when α ≤ 0.5, AB and the broken line C̃D have no intersection point; otherwise the concavity of zα would be contradicted. It follows that their intersection point satisfies α > 0.5, i.e., the fuzzy decision satisfies α > 0.5. The proof is complete.

It is easy to see from Theorems 6.2.1 and 6.2.2 that zα = c_{Bα}^T Bα^{-1}(b + d) − α c_{Bα}^T Bα^{-1}d is a piecewise function, which can be expressed as

zα = c_{B1}^T B_1^{-1}(b + d) − α c_{B1}^T B_1^{-1}d   (6.2.1)

and

zα = c_{B0}^T B_0^{-1}(b + d) − α c_{B0}^T B_0^{-1}d,   (6.2.2)

when zα passes through (1, z_1) and (0, z_0), respectively. The intersection point of the two straight lines is


α' = (c_{B0}^T B_0^{-1}(b + d) − c_{B1}^T B_1^{-1}(b + d)) / (c_{B0}^T B_0^{-1}d − c_{B1}^T B_1^{-1}d).

Suppose B_0^{-1}(b + (1 − α')d) ≥ 0; then B_1^{-1}(b + (1 − α')d) ≥ 0. In fact, if B_1^{-1}(b + (1 − α')d) ≥ 0 failed while B_0^{-1}(b + (1 − α')d) ≥ 0, then zα' would be the optimal value of (LPα'), and by Theorems 6.2.1 and 6.2.2 we would obtain c^T x̄ > zα', i.e., zα' = c_{B1}^T B_1^{-1}(b + (1 − α')d) > zα', a self-contradiction; so B_1^{-1}(b + (1 − α')d) ≥ 0. Therefore zα is the piecewise function

zα = { c_{B0}^T B_0^{-1}(b + d) − α c_{B0}^T B_0^{-1}d,  0 ≤ α ≤ α';
       c_{B1}^T B_1^{-1}(b + d) − α c_{B1}^T B_1^{-1}d,  α' ≤ α ≤ 1.

It follows from the above theorems that the optimal solution is the intersection point of Sα: zα = z_1 + d_0 α and zα.

However, when B_0^{-1}(b + (1 − α')d) ≥ 0 does not hold, the treatment of (L̃P) becomes very complicated; we then obtain the optimal solution of the fuzzy linear programming by solving the corresponding linear programming (LP) directly.

We suggest the following algorithm for (L̃P):

1° Obtain the optimal solutions x^(0), x^(1) of the linear programmings (LP0), (LP1). Denote their corresponding optimal bases by B_0, B_1, the corresponding objective coefficient vectors by c_{B0}, c_{B1}, and the optimal values by z_0, z_1, respectively.

2° Compute the intersection point α' of the two straight lines zα passing through (0, z_0) and (1, z_1), respectively.

3° Determination. If B_0^{-1}(b + (1 − α')d) ≥ 0, go to 4°; otherwise, go to 7°.

4° Compute the intersection point (α_1, z_{α_1}) of the function zα and Sα. If α_1 ≤ α', go to 5°; if α_1 > α', go to 6°.

5° Write α = (z_0 − z_1)/(z_0 − z_1 + c_{B0}^T B_0^{-1}d); the optimal solution of (L̃P) is x = (B_0^{-1}(b + (1 − α)d); 0)^T, and the optimal value equals z_{α_1} = c_{B0}^T B_0^{-1}(b + (1 − α)d). End.

6° Write α = c_{B1}^T B_1^{-1}d / (z_0 − z_1 + c_{B1}^T B_1^{-1}d); the optimal solution of (L̃P) is x = (B_1^{-1}(b + (1 − α)d); 0)^T, and the optimal value is z_{α_1} = c_{B1}^T B_1^{-1}(b + (1 − α)d). End.

6.2 Expansion on Optimal Solution of Fuzzy Linear Programming


7° Solve the linear programming $(LP')$ and obtain the optimal solution $x$ of $(\widetilde{LP})$ and the optimal value $z$. End.

If the intersection point $\alpha^{*}$ satisfies the condition $B_0^{-1}(b+(1-\alpha^{*})d) \ge 0$, it is easy to reach the following conclusion.

Theorem 6.2.5. Assume the condition of 2) in Theorem 6.2.4 holds and $\alpha^{*}$ is an intersection point of (6.2.1) and (6.2.2). If $B_0^{-1}(b+(1-\alpha^{*})d) \ge 0$, then the linear programming $(LP_{\alpha^{*}})$ is degenerate [Cao91c]; that is, the basic variable vector $B_0^{-1}(b+(1-\alpha^{*})d)$ in the optimal basic solution has a zero-valued component.

6.2.4 Example

Example 6.2.1: Find

$$\max\ x_1 + x_2 \quad \text{s.t. } x_1 + 2x_2 \le 100,\; x_1 \le 50,\; x_2 \le 20,\; x_1 \ge 0,\; x_2 \ge 0. \eqno(6.2.3)$$

Step 1. Let $d = (0, 5, 5)^{T}$. Then the optimal basis of the corresponding $(LP_1)$ is $B_1$, with

$$B_1^{-1} = \begin{pmatrix} 1 & -1 & -2\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix}, \qquad c_{B_1} = (0, 1, 1).$$

Solve the linear programming

$$\max\ x_1 + x_2 \quad \text{s.t. } x_1 + 2x_2 \le 100,\; x_1 \le 55,\; x_2 \le 25,\; x_1 \ge 0,\; x_2 \ge 0,$$

and we get the optimal solution $x^{(1)} = (x_1, x_2, y_3) = (55, 22.5, 2.5)$ and the optimal value $z_1 = 77.5$.

Let $d = (0, 2, 2)^{T}$. Then the optimal basis of the corresponding $(LP_0)$ is $B_0$, with

$$B_0^{-1} = \begin{pmatrix} 0.5 & -0.5 & 0\\ 0 & 1 & 0\\ -0.5 & 0.5 & 1 \end{pmatrix}, \qquad c_{B_0} = (1, 1, 0),$$

and the optimal value is $z_0 = 70$.

Step 2. Solve

$$\alpha_1 = \frac{c_{B_1}^{T}B_1^{-1}(b+d) - c_{B_0}^{T}B_0^{-1}(b+d)}{c_{B_0}^{T}B_0^{-1}d - c_{B_1}^{T}B_1^{-1}d},$$

and we get $\alpha_1 = \frac{1}{3}$.

Step 3. Since

$$B_1^{-1}(b+(1-\alpha_1)d) = \begin{pmatrix} 1 & -1 & -2\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 100\\ 50 + \frac{10}{3}\\ 20 + \frac{10}{3} \end{pmatrix} = \begin{pmatrix} 0\\ 50 + \frac{10}{3}\\ 20 + \frac{10}{3} \end{pmatrix} \ge 0,$$

we turn to Step 4.

Step 4. The optimal value $z_\alpha$ of the parametric linear programming $(LP_\alpha)$, as a function of the parameter $\alpha$, is

$$z_\alpha = \begin{cases} 80 - 10\alpha, & \frac{1}{3} \le \alpha \le 1,\\ 77.5 - 2.5\alpha, & 0 \le \alpha \le \frac{1}{3}, \end{cases}$$

and the objective function is $S_\alpha = 70 + 7.5\alpha$. Solving for the intersection of the two functions, we obtain the intersection point $\left(\frac{4}{7}, \frac{520}{7}\right)$. Since $\frac{4}{7} \ge \frac{1}{3}$, we have

$$x_{B_1} = B_1^{-1}(b+(1-\alpha)d) = \left(\frac{365}{7}, \frac{155}{7}, \frac{25}{7}\right).$$

Therefore, the optimal solution of the fuzzy linear programming (6.2.3) is $\alpha = \frac{4}{7}$, $x_1 = \frac{365}{7}$, $x_2 = \frac{155}{7}$, and the optimal value is $z = \frac{520}{7}$.

6.2.5 Conclusion

From the relation between the linear programming $(\widetilde{LP})$ and the parameter $\alpha$, we know that the optimal problem $(\widetilde{LP})$ can be transformed into finding the intersection of two linear functions. The fuzzy optimal solution is obtained directly from the optimal solutions $x^{(1)}, x^{(0)}$ of the programmings $(LP_1)$ and $(LP_0)$ and the optimal bases $B_1$ and $B_0$, so it is unnecessary to solve any linear programming more complex than $(LP_1)$ and $(LP_0)$.
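The numbers of Example 6.2.1 can be verified directly. A sketch (Python with exact rationals; variable names are ours) that intersects the branch $z_\alpha = 80 - 10\alpha$ with $S_\alpha = 70 + 7.5\alpha$ and then recomputes the basic solution with the $B_1^{-1}$ given above — note the components come out in the basis order (slack of the first constraint, $x_1$, $x_2$):

```python
from fractions import Fraction as F

# Intersection of z = 80 - 10a with S = 70 + (15/2)a
a_star = F(80 - 70) / (10 + F(15, 2))      # = 4/7
z_star = F(70) + F(15, 2) * a_star         # = 520/7

# Basic solution x_B = B1^{-1} (b + (1 - a*)d) with d = (0, 5, 5)^T
B1_inv = [[1, -1, -2], [0, 1, 0], [0, 0, 1]]
v = [F(100), F(50) + 5 * (1 - a_star), F(20) + 5 * (1 - a_star)]
xB = [sum(B1_inv[i][j] * v[j] for j in range(3)) for i in range(3)]
```

With these data, `xB` equals $(25/7,\ 365/7,\ 155/7)$, i.e., $x_1 = 365/7$, $x_2 = 155/7$, and $x_1 + x_2 = 520/7$, matching the text.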

6.3 Discussion of Optimal Solution to Fuzzy Constraint Linear Programming

6.3.1 Introduction

In this section we focus on fuzzy constraint linear programming. First we discuss the properties of the optimal solution vector and the optimal value of the corresponding parametric programming, and propose a method for finding the critical values. Then we present a new algorithm for fuzzy constraint linear programming by associating the objective function with the optimal value of the parametric programming.

The normal form of a linear programming with fuzzy constraints is the $(\widetilde{LP})$ of Section 6.2.1:

$$(\widetilde{LP})\quad \max z = cx \quad \text{s.t. } Ax \lesssim b,\; x \ge 0;$$

the representative method for $(\widetilde{LP})$ is to turn it into a classical linear programming [Cao02a]. We shall try to explain why the fuzzy decision found by researchers [Cao02a][Fu90][LC02][Pan87] is usually the number 0.5. Here we propose another algorithm for $(\widetilde{LP})$.

6.3.2 Analysis of Fuzzy Linear Programming

Suppose $x_\alpha$ denotes an optimal solution to $(LP_\alpha)$, and $B_\alpha$ and $z_\alpha$ an optimal basis matrix and an optimal value of $(LP_\alpha)$, respectively; we then consider

$$(LP_\alpha)\quad \max z = c^{T}x \quad \text{s.t. } Ax \le b + (1-\alpha)d,\; x \ge 0,$$

where $\alpha$ is a parameter on the interval [0,1], $d \ge 0$, and $b + (1-\alpha)d$ varies with the parameter $\alpha$. Its optimal solution is $B_\alpha^{-1}(b+(1-\alpha)d)$. If we solve $(LP_\alpha)$ by the simplex method, the discriminant numbers $\sigma = c_N - c_B B^{-1}N$ do not involve the parameter $\alpha$, so the variation of the optimal basis matrix is decided only by $x_\alpha$.

6.3.2.1 Properties of the Parametric Linear Programming

Definition 6.3.1. Let $B$ be one of the optimal basis matrices of $(LP_\alpha)$. If there exists an interval $[\alpha_1, \alpha_2]$ such that $B$ is an optimal basis matrix of $(LP_\alpha)$ for every $\alpha \in [\alpha_1, \alpha_2]$, while $B$ is not an optimal matrix for any $\alpha \notin [\alpha_1, \alpha_2]$, we call $\alpha_1$ and $\alpha_2$ critical values of $(LP_\alpha)$ and $[\alpha_1, \alpha_2]$ a characteristic interval.

Theorem 6.3.1. $(LP_\alpha)$ has finitely many characteristic intervals on the interval [0,1].

Proof: Assume $B$ is an optimal basis matrix of $(LP_\alpha)$ with two characteristic intervals $[\alpha_{i-1}, \alpha_i]$ and $[\alpha_{i+1}, \alpha_{i+2}]$ ($\alpha_i < \alpha_{i+1}$). The optimal solution to $(LP_\alpha)$ is $(x_B, x_N)^{T}$, where $x_B = B^{-1}(b+(1-\alpha)d) \ge 0$ for $\alpha \in [\alpha_{i-1}, \alpha_i] \cup [\alpha_{i+1}, \alpha_{i+2}]$ and $x_N = 0$. Since $x_B$ is linear in $\alpha$, $x_B = B^{-1}(b+(1-\alpha)d) \ge 0$ also when $\alpha \in [\alpha_i, \alpha_{i+1}]$; this means $B$ is an optimal matrix of $(LP_\alpha)$ on $[\alpha_i, \alpha_{i+1}]$ as well. Therefore the characteristic interval on which the optimal matrix stays invariant is $[\alpha_{i-1}, \alpha_{i+2}]$, so each optimal matrix has only one corresponding characteristic interval. Because the coefficient matrix of $(LP_\alpha)$ keeps invariant on the interval [0,1], and an

optimal matrix is finite in number, the number of characteristic intervals is finite. This means $(LP_\alpha)$ has finitely many characteristic intervals on the interval [0,1].

Theorem 6.3.2. Let $B$ be an optimal basis matrix of $(LP_\alpha)$ on a characteristic interval $[\alpha_1, \alpha_2]$. If $(B^{-1}b)_i \ne 0$ ($1 \le i \le m$), then

$$\alpha_1 = \max\left\{\frac{[B^{-1}(b+d)]_i}{(B^{-1}d)_i},\; 0 \;\middle|\; (B^{-1}d)_i < 0\ (1 \le i \le m)\right\}, \eqno(6.3.1)$$

$$\alpha_2 = \min\left\{\frac{[B^{-1}(b+d)]_i}{(B^{-1}d)_i},\; 1 \;\middle|\; (B^{-1}d)_i > 0\ (1 \le i \le m)\right\} \eqno(6.3.2)$$

is derived, where $(B^{-1}(b+d))_i$ and $(B^{-1}d)_i$ are the $i$-th components of $B^{-1}(b+d)$ and $B^{-1}d$, respectively.

Proof: We can represent the simplex method for a linear programming with partitioned matrices [LL97]:

$$\begin{pmatrix} B & N & b+(1-\alpha)d\\ c_B & c_N & z \end{pmatrix} \Longrightarrow \begin{pmatrix} I & B^{-1}N & B^{-1}(b+(1-\alpha)d)\\ c_B & c_N & z \end{pmatrix} \Longrightarrow \begin{pmatrix} I & B^{-1}N & B^{-1}(b+(1-\alpha)d)\\ 0 & c_N - c_B B^{-1}N & z - c_B B^{-1}(b+(1-\alpha)d) \end{pmatrix},$$

where $N$ is the non-basis matrix corresponding to $B$; the discriminant numbers do not involve the parameter $\alpha$. To keep the optimal matrix of $(LP_\alpha)$ invariant, only $B^{-1}(b+(1-\alpha)d) \ge 0$ is required. This means, for all $i$, $[B^{-1}(b+(1-\alpha)d)]_i \ge 0$, i.e., $[B^{-1}(b+d)]_i - \alpha(B^{-1}d)_i \ge 0$. Solving this inequality, we obtain $\alpha \in [\alpha_1, \alpha_2]$, where $\alpha_1$ and $\alpha_2$ are given by (6.3.1) and (6.3.2). Obviously the optimal matrix of $(LP_\alpha)$ changes at $\alpha > \alpha_2$ or $\alpha < \alpha_1$; therefore the characteristic interval corresponding to the optimal basis matrix $B$ is $[\alpha_1, \alpha_2]$.

Based on the above conclusion, we can easily obtain the following properties of the optimal value function $z_\alpha$.

Property 6.3.1. Let $B$ be an optimal matrix of $(LP_\alpha)$ on the characteristic interval $[\alpha_i, \alpha_j]$. Then $x_\alpha = B^{-1}(b+(1-\alpha)d)$ ($\alpha_i \le \alpha \le \alpha_j$) is a linear vector function of $\alpha$, and the optimal value function $z_\alpha = c_B B^{-1}(b+(1-\alpha)d)$ is a linear function of $\alpha$ that decreases as $\alpha$ increases.

Property 6.3.2. The optimal value function $z_\alpha$ of $(LP_\alpha)$ is continuous on the interval [0,1].
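Formulas (6.3.1)–(6.3.2) translate directly into code. A small sketch (Python with exact rationals; the function name is our own), assuming the caller supplies the componentwise vectors $B^{-1}(b+d)$ and $B^{-1}d$:

```python
from fractions import Fraction as F

def characteristic_interval(Binv_bd, Binv_d):
    """Critical values (6.3.1)-(6.3.2): the alphas in [0,1] for which every
    component of B^{-1}(b+d) - alpha * B^{-1}d stays nonnegative."""
    lows  = [p / q for p, q in zip(Binv_bd, Binv_d) if q < 0]  # alpha >= ratio
    highs = [p / q for p, q in zip(Binv_bd, Binv_d) if q > 0]  # alpha <= ratio
    return max(lows + [F(0)]), min(highs + [F(1)])
```

For instance, with $B^{-1}(b+d) = (3, 5)$ and $B^{-1}d = (6, -2)$ the constraints are $3 - 6\alpha \ge 0$ and $5 + 2\alpha \ge 0$, so the characteristic interval is $[0, 1/2]$.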

6.3.2.2 Optimal Solution to Fuzzy Linear Programming

Theorem 6.3.3. Let $\tilde{S}$ be the fuzzy constraint and $\tilde{G}$ the fuzzy objective function on domain $X$. Then the optimal solution $x^{*}$ to the fuzzy optimal set $\tilde{D} = \tilde{G} \wedge \tilde{S}$ satisfies

$$\mu_{\tilde{D}}(x^{*}) = \max_{x \in X} \mu_{\tilde{D}}(x) = \max_{0 \le \alpha \le 1}\Bigl\{\alpha \wedge \max_{x \in S_\alpha} \mu_{\tilde{S}}(x)\Bigr\},$$

where $S_\alpha = \{x \mid x \in X,\ \mu_{\tilde{G}}(x) \ge \alpha\}$ [Cao02a].

The fuzzy objective function can be defined as $G_\alpha : z_\alpha = z_1 + d_0\alpha$; we can use the intersection of the fuzzy objective function $G_\alpha$ and the fuzzy constraint $S_\alpha : z_\alpha = c_{B_\alpha}B_\alpha^{-1}(b+(1-\alpha)d)$ to find an optimal decision of $(\widetilde{LP})$, as shown in Figure 6.3.1.

Fig. 6.3.1. The Intersection of $G_\alpha$ and $S_\alpha$ (figure omitted: $S_\alpha$ decreases from $z_0$ at $\alpha = 0$, $G_\alpha$ increases from $z_1$, and the two lines cross at the optimal decision).

6.3.3 Algorithm to Fuzzy Linear Programming

Let $z_1$ be the optimal value of $(LP_1)$, $z_0$ the optimal value of $(LP_0)$, and $d_0 = z_0 - z_1 > 0$. Based on the above conclusions, we give a new algorithm for fuzzy linear programming as follows.

Step 1. Solve the linear programmings $(LP_0)$ and $(LP_1)$. Let the optimal solutions be $x_0, x_1$, the optimal values $z_0, z_1$, and the optimal matrix of $(LP_0)$ be $B_0$.

Step 2. Solve

$$[B_0^{-1}(b+(1-\alpha)d)]_i = 0.$$

Denote the solutions by $\alpha_1, \dots, \alpha_{n-1}$ ($0 < \alpha_1 < \dots < \alpha_{n-1} < 1$). Let $\alpha_0 = 0$, $\alpha_n = 1$, $\alpha = \alpha_1$, $k = 1$.

Step 3. Solve $(LP_\alpha)$; let the optimal value be $z_\alpha$. If $z_\alpha \le z_1 + d_0\alpha$, turn to Step 4; otherwise let $k = k+1$, $\alpha = \alpha_k$, and repeat Step 3.

Step 4. Solve for the optimal decision

$$\alpha^{*} = \frac{z_1\alpha_k - z_1\alpha_{k-1} - z_{\alpha_{k-1}}\alpha_k + z_{\alpha_k}\alpha_{k-1}}{z_{\alpha_k} - z_{\alpha_{k-1}} - \alpha_k d_0 + \alpha_{k-1} d_0}.$$

Step 5. Solve the linear programming $(LP_{\alpha^{*}})$, obtaining an optimal solution $x_{\alpha^{*}}$ and an optimal value $z_{\alpha^{*}}$.

Example 6.3.1: Calculate

$$\max\ 3x_1 + 5x_2 \quad \text{s.t. } 7x_1 + 2x_2 \le 66,\; 5x_1 + 3x_2 \le 61,\; x_1 + x_2 \le 16,\; x_1 \le 8,\; x_2 \lesssim 5,\; x_i \ge 0\ (i = 1, 2), \eqno(6.3.3)$$

where $d_1 = d_2 = d_3 = d_4 = 0$ and $d_5 = 7$ are the flexible values of the constraint functions.

We obtain $z_0 = 72$, $z_1 = 49$, $d_0 = 23$ by solving the $(LP_0)$ and $(LP_1)$ corresponding to (6.3.3), respectively. The inverse of the optimal matrix of $(LP_0)$ is $B_0^{-1} = (b_1, \dots, b_5)$, where $b_1 = (1,0,0,0,0)^{T}$, $b_2 = (0,1,0,0,0)^{T}$, $b_3 = (-7,-5,1,-1,0)^{T}$, $b_4 = (0,0,0,1,0)^{T}$, $b_5 = (5,2,-1,1,1)^{T}$.

By solving the equations $[B_0^{-1}(66, 61, 16, 8, 12-7\alpha)^{T}]_i = 0$ ($i = 1, \dots, 5$), we obtain $\alpha_1 = \frac{5}{14}$, $\alpha_2 = \frac{2}{5}$, $\alpha_3 = \frac{4}{7}$.

Let $\alpha_0 = 0$, $\alpha_4 = 1$. Using the Lindo software to solve the linear programming $(LP_{\alpha_1})$, we obtain the optimal value $z_{\alpha_1} = z_{5/14} = 67$. Because $z_{5/14} > z_1 + \frac{5}{14}d_0$, we must continue with $(LP_{\alpha_2})$; solving it gives $z_{\alpha_2} = z_{2/5} = 66.04$. Because $z_{2/5} > z_1 + \frac{2}{5}d_0$, we continue with $(LP_{\alpha_3})$, obtaining $z_{\alpha_3} = z_{4/7} = 61.429$. Because $z_{4/7} < z_1 + \frac{4}{7}d_0$, the optimal decision is $\alpha^{*} = 0.557$. By solving $(LP_{0.557})$, we obtain $x_{0.557} = (6.8606, 8.8990)^{T}$

and $z_{0.557} = 65.0768$. So the optimal solution to the example is $x^{*} = (6.8606, 8.8990)^{T}$ and the optimal value is $z^{*} = 65.0768$.

6.3.4 Conclusion

We have seen that the optimal decision of a fuzzy constraint linear programming does not necessarily equal 0.5, and that the graph of the optimal value function of $(\widetilde{LP})$ is not necessarily a single segment. Based on the properties of the optimal value function, we have proposed a new algorithm for fuzzy constraint linear programming.
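The critical values and the optimal decision of Example 6.3.1 can be reproduced numerically. A sketch (Python; variable names are ours, and the values $z_{2/5} = 66.04$, $z_{4/7} = 61.429$ are taken from the text as given, not recomputed):

```python
from fractions import Fraction as F

# Columns b1..b5 of B0^{-1} as listed in Example 6.3.1
cols = [(1, 0, 0, 0, 0), (0, 1, 0, 0, 0), (-7, -5, 1, -1, 0),
        (0, 0, 0, 1, 0), (5, 2, -1, 1, 1)]
Binv = [[F(cols[j][i]) for j in range(5)] for i in range(5)]

def basic_solution(alpha):
    """x(alpha) = B0^{-1} (66, 61, 16, 8, 12 - 7*alpha)^T."""
    b = [F(66), F(61), F(16), F(8), F(12) - 7 * alpha]
    return [sum(Binv[i][j] * b[j] for j in range(5)) for i in range(5)]

# Each component is linear in alpha, so its root is a candidate critical value.
xa0, xa1 = basic_solution(F(0)), basic_solution(F(1))
roots = sorted(-xa0[i] / (xa1[i] - xa0[i]) for i in range(5)
               if xa1[i] != xa0[i] and 0 < -xa0[i] / (xa1[i] - xa0[i]) < 1)

# Step 4 of the algorithm with the reported optimal values
z1, d0 = 49, 23
ak1, ak = F(2, 5), F(4, 7)            # alpha_{k-1}, alpha_k
zk1, zk = 66.04, 61.429               # z_{2/5}, z_{4/7} from the text
alpha_star = (z1*ak - z1*ak1 - zk1*ak + zk*ak1) / (zk - zk1 - ak*d0 + ak1*d0)
```

This yields the critical values $5/14$, $2/5$, $4/7$ and $\alpha^{*} \approx 0.557$, as stated.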

6.4 Relation between Fuzzy Linear Programming and Its Dual One

6.4.1 Introduction

Consider a linear programming primal problem

$$\min z = cx \quad \text{s.t. } Ax = b,\; x \ge 0, \eqno(6.4.1)$$

while

$$\max yb \quad \text{s.t. } yA \le c,\; y \ge 0 \eqno(6.4.2)$$

is the dual linear programming of (6.4.1) [Dan63], where $x = (x_1, x_2, \dots, x_n)^{T}$ and $y = (y_1, y_2, \dots, y_m)$ are the variable vectors, $c = (c_1, c_2, \dots, c_n)$ and $b = (b_1, b_2, \dots, b_m)^{T}$ are the constant vectors, and $A = (a_{ij})_{m \times n}$ is an $m \times n$ matrix. We discuss the relation between them as follows.

6.4.2 Case with Fuzzy Coefficients

Consider a linear programming with fuzzy coefficients

$$\min \tilde{z} = \tilde{c}x \quad \text{s.t. } Ax = b,\; x \ge 0, \eqno(6.4.3)$$

where $\tilde{c}$ is a fuzzy coefficient vector; its dual form is

$$\max w = \tilde{y}b \quad \text{s.t. } \tilde{y}A \le \tilde{c},\; \tilde{y} \ge 0, \eqno(6.4.4)$$

where $\tilde{y}$ denotes a fuzzy variable vector.
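The crisp weak-duality inequality behind the pair (6.4.1)–(6.4.2) — $yb \le cx$ for any primal-feasible $x$ and dual-feasible $y$ — underlies the fuzzy lemmas that follow. A minimal numeric check (Python; the data are illustrative only, not from the text):

```python
from fractions import Fraction as F

# min cx s.t. Ax = b, x >= 0   and its dual   max yb s.t. yA <= c
A = [[1, 1, 1], [0, 4, 2]]
b = [F(9), F(12)]
c = [F(2), F(3), F(1)]
x = [F(4), F(1), F(4)]      # primal feasible: Ax = b, x >= 0
y = [F(1), F(0)]            # dual feasible:   yA = (1,1,1) <= c

Ax = [sum(A[i][j] * x[j] for j in range(3)) for i in range(2)]
yA = [sum(y[i] * A[i][j] for i in range(2)) for j in range(3)]
cx = sum(c[j] * x[j] for j in range(3))
yb = sum(y[i] * b[i] for i in range(2))
```

Here $yb = 9 \le 15 = cx$, as weak duality demands.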

Lemma 6.4.1. The dual form of (6.4.3) is (6.4.4). If there exists an optimum solution to one of them, then there exists an optimum solution to the other, and (6.4.3) and (6.4.4) have the same fuzzy optimum value for a continuous and strictly monotone function $\tilde{\varphi}$.

Proof: According to formula (1.5.3) in Section 1.5, (6.4.3) is turned into the following problem for solution:

$$\min cx \quad \text{s.t. } \mu_{\tilde{\varphi}}(c) \ge 1 - \alpha,\; \alpha \in [0,1],\; Ax = b,\; c \in R^{n},\; x \ge 0.$$

If we define [Ver84], for all $c \in R^{n}$, $\mu_{\tilde{\varphi}}(c) = \inf_j \mu_{\tilde{\varphi}_j}(c_j)$ ($1 \le j \le l$, $l \le n$), $c = (c_1, c_2, \dots, c_n)$, then $\mu_{\tilde{\varphi}}(c) \ge 1 - \alpha$ gives

$$\inf_j \mu_{\tilde{\varphi}_j}(c_j) \ge 1 - \alpha \Longleftrightarrow \mu_{\tilde{\varphi}_j}(c_j) \ge 1 - \alpha\ (1 \le j \le l) \Longleftrightarrow c_j \ge \mu_{\tilde{\varphi}_j}^{-1}(1 - \alpha).$$

Therefore, we have

$$\min \sum_{j=1}^{n} c_j x_j \quad \text{s.t. } c_j \ge \mu_{\tilde{\varphi}_j}^{-1}(1 - \alpha)\ (1 \le j \le n),\; Ax = b,\; \alpha \in [0,1],\; x \ge 0.$$

This problem is equivalent to

$$\min \sum_{j=1}^{n} c_j x_j \quad \text{s.t. } c_j = \mu_{\tilde{\varphi}_j}^{-1}(1 - \alpha),\; Ax = b,\; \alpha \in [0,1],\; x \ge 0$$

$$\Longleftrightarrow\quad \min \mu_{\tilde{\varphi}}^{-1}(\beta)x \quad \text{s.t. } Ax = b,\; \beta \in [0,1],\; x \ge 0, \eqno(6.4.5)$$

where $\beta = 1 - \alpha$; the dual form of (6.4.5) is

$$\max yb \quad \text{s.t. } yA = \mu_{\tilde{\varphi}}^{-1}(\beta),\; \beta \in [0,1],\; y \ge 0 \eqno(6.4.6)$$

$$\Longleftrightarrow\quad \max yb \quad \text{s.t. } \mu_{\tilde{\varphi}}(c) \ge \beta,\; yA = c,\; y \ge 0,\; \beta \in [0,1] \quad \Longleftrightarrow\quad (6.4.4).$$

The lemma holds because (6.4.4) and (6.4.6) have the same parameter solutions, by the equivalence of (6.4.5) with (6.4.3) and of (6.4.6) with (6.4.4), and because (6.4.3) and (6.4.4) are mutually dual problems.

6.4.3 Case with Fuzzy Variables

Consider the fuzzified linear programming

$$\min \tilde{z} = c\tilde{x} \quad \text{s.t. } A\tilde{x} \ge \tilde{b},\; \tilde{x} \ge 0, \eqno(6.4.7)$$

called a linear programming with fuzzy variables [AMA93], where $\tilde{x} = (\tilde{x}_1, \tilde{x}_2, \dots, \tilde{x}_n)^{T}$ is an $n$-dimensional fuzzy variable vector, $0 \le c \in R^{n}$, $\tilde{b} \in (F(R))^{m}$ is a fuzzy vector, and $A \in R^{m \times n}$ is an $m \times n$ matrix. The dual problem of (6.4.7) is denoted by

$$\max \tilde{w} = y\tilde{b} \quad \text{s.t. } yA \le c,\; y \ge 0, \eqno(6.4.8)$$

where $c \in R^{n}$, $A \in R^{m \times n}$, $y \in R^{m}$, $\tilde{b} \in (F(R))^{m}$.

$\tilde{x}$ is said to be a fuzzy feasible solution to (6.4.7) if and only if $\tilde{x}$ satisfies the constraints of the problem. By an optimal fuzzy solution to (6.4.7) we mean a fuzzy feasible solution, say $\tilde{x}^{0}$, such that $c\tilde{x}^{0} \le c\tilde{x}$ for all $\tilde{x}$ belonging to the set of all fuzzy feasible solutions to (6.4.7).

The relation between the fuzzy linear programming (6.4.7) and its dual programming (6.4.8) is as follows. In order to solve programming (6.4.7), we shall find an optimal solution to problem (6.4.8). However, (6.4.8) is in fact a linear programming with fuzzy coefficients, and we already know how to solve this. It follows that we shall discuss the relationships between the primal and dual programmings.

Lemma 6.4.2. If $\tilde{x}$ is any fuzzy feasible solution to (6.4.7) and $y$ is any feasible one to (6.4.8), then $y\tilde{b} \le c\tilde{x}$.

Proof: Straightforward.

Lemma 6.4.3. If $\tilde{x}^{0}$ is a fuzzy feasible solution to (6.4.7) and $y^{0}$ is a feasible one to (6.4.8) such that $y^{0}\tilde{b} = c\tilde{x}^{0}$, then $y^{0}$ is an optimal solution to (6.4.8) and $\tilde{x}^{0}$ is a fuzzy optimal one to (6.4.7).

Proof: Straightforward.

Theorem 6.4.1. If the dual problem (6.4.8) has an optimal solution, then problem (6.4.7) has a fuzzy optimal solution.

Proof: We first transform (6.4.8) into the form

$$\max \tilde{w} = y\tilde{b} \quad \text{s.t. } yA + y_s I = c,\; y, y_s \ge 0, \eqno(6.4.9)$$

where $\tilde{w} = (\tilde{w}_1, \tilde{w}_2, \dots, \tilde{w}_n)$, $y_s$ represents a slack variable and $I$ is a unit matrix. Let $A' = (A, I)^{T}$, $y' = (y, y_s)$, $c' = (c, 0)^{T}$. Formula (6.4.9) is simplified as follows:

$$\max \tilde{w} = y'\tilde{b} \quad \text{s.t. } y'A' = c',\; y' \ge 0. \eqno(6.4.10)$$

Let $y'_B$ be an optimal basic solution to (6.4.10), such that $\tilde{w}_j - \tilde{b}_j \ge 0$ for all $j$; thus $\tilde{b}_B B^{-1}A' \ge \tilde{b}$, where $B$ is the basis matrix corresponding to $A'$. If we write $\tilde{x} = \tilde{b}_B B^{-1}$, we can see that $\tilde{x}$ is a fuzzy feasible solution to (6.4.7). On the other hand, we have

$$\tilde{z} = c\tilde{x} = \tilde{b}_B B^{-1}c = y_B \tilde{b}_B = \tilde{w}.$$

Hence, $\tilde{x}$ is an optimal solution to (6.4.7).

Lemma 6.4.4. If problem (6.4.8) has an unbounded solution, then problem (6.4.7) has no fuzzy feasible solution.

Proof: Straightforward.

We conclude that, in order to solve a linear programming with fuzzy variables, it is sufficient to solve its dual problem. We can then obtain the fuzzy optimal solution to our problem by using the theorem and lemmas of this section, and vice versa.

Let $\mu_{\tilde{\varphi}}$ be (1.5.3) in Section 1.5. If the fuzzified form of (6.4.1) is (6.4.3), its primal programming with parameter is

$$\min (m + \beta dn^{-1})x \quad \text{s.t. } Ax = b,\; \beta \in [0,1],\; x \ge 0,$$

(6.4.11)

where $m, n$ are real numbers, with $c \le m + \beta dn^{-1} \Longleftrightarrow c = m + \beta dn^{-1}$, $d$ denoting a flexible index, $\beta = 1 - \alpha$; $c$ can be freely fixed in the value interval $[m, n]$. The dual problem of (6.4.11) is

$$\max yb \quad \text{s.t. } yA = m + \beta dn^{-1},\; \beta \in [0,1],\; y \ge 0.$$

(6.4.12)

Theorem 6.4.2. Let $\mu_{\tilde{\varphi}} : R \to [0,1]$ be a continuous and strictly monotone membership function. $x^{0}$ is a unique solution to (6.4.1) if and only if $x^{0}$ remains a parameter solution to (6.4.5) for all $\beta \in [0,1]$ ($\beta = 1 - \alpha$).

Proof: Similarly to the proof of Ref. [Man79], $x^{0}$ is a unique solution to (6.4.1)

$\Longleftrightarrow \forall dn^{-1} \in R^{n},\ \exists x, y, \alpha \in R^{n+m+1}$: $Ax - b\alpha \ge 0$, $-yA + c\alpha = 0$, $y \ge 0$, $yb - cx \ge 0$, $-dn^{-1}x + dn^{-1}x^{0}\alpha > 0$, $\alpha > 0$

$\Longleftrightarrow \forall dn^{-1} \in R^{n},\ \exists x, u, y, \kappa, r \in R^{n+m+k+2}$: $-Ax + b\kappa + u = 0$, $yA - (\kappa c + \beta dn^{-1}) = 0$, $-yb + cx + dn^{-1}x^{0}\beta + r = 0$, $u, \kappa, r \ge 0$, $\beta \in [0,1]$, $\beta + r > 0$

$\Longleftrightarrow \forall dn^{-1} \in R^{n},\ \exists x, y, \kappa \in R^{n+m+1}$: $Ax \ge b\kappa$, $cx = \kappa cx^{0}$, $\kappa \ge 0$, $yA = \kappa c + \beta dn^{-1}$, $yb \ge (\kappa c + \beta dn^{-1})x^{0}$, $\beta \in [0,1]$

$\Longleftrightarrow \forall dn^{-1} \in R^{n},\ \exists y, \kappa \in R^{m+1}$: $yA = \kappa c + \beta dn^{-1}$, $yb = (\kappa c + \beta dn^{-1})x^{0}$, $\kappa \ge 0$, $\beta \in [0,1]$ (since $cx + \beta dn^{-1}x^{0} \ge yb \ge yAx^{0} = (\kappa c + \beta dn^{-1})x^{0}$)

$\Longleftrightarrow \forall dn^{-1} \in R^{n},\ \exists y \in R^{m}$: $yA = c + \beta dn^{-1}$, $yb = (c + \beta dn^{-1})x^{0}$, $\beta \in [0,1]$ (letting $\kappa = 1$)

$\Longleftrightarrow \forall dn^{-1} \in R^{n},\ \exists \beta \in [0,1]$ such that a solution to (6.4.5) is found to be $x^{0}$.

Because $x^{0}$ is a feasible solution to (6.4.5), $\bar{y}$ is a feasible one to the dual problem (6.4.6) coming from (6.4.5), with $\bar{y}b = (m + \beta dn^{-1})x^{0}$, where $\bar{y} = y(\beta)$. But the fuzzy solution to (6.4.7) is given by an optimal solution to the parametric linear problem [Ver84]; therefore the theorem holds.

Similarly to a corollary in Ref. [Man79], we can confirm the following.

Corollary 6.4.1. The dual optimal solution $y$ to (6.4.2) associated with a primal optimal solution $x^{0}$ to (6.4.1) is unique if and only if, for a continuous and strictly monotone membership function $\mu_{\tilde{\varphi}} : R \to [0,1]$ and for all $\beta \in [0,1]$, $y$ remains a dual optimal parameter solution to the perturbed linear programming (6.4.12).

Theorem 6.4.3. Let $\mu_{\tilde{\varphi}} : R \to [0,1]$ be a continuous and strictly monotone membership function. A solution $x^{0}$ is unique to the linear programming (6.4.1) if and only if $x^{0}$ is still a fuzzy optimal solution to the fuzzy linear programming (6.4.7).
Proof: From Lemma 6.4.1 we know (6.4.7) $\Longleftrightarrow$ (6.4.5); so, applying Theorem 6.4.2 to (6.4.5), $x^{0}$ is a unique solution to (6.4.1) if and only if $x^{0}$ remains a parameter optimal solution to (6.4.5). But the

minimization in (6.4.7) is equivalent to that in (6.4.5), and $x^{0}$ is a parameter optimum solution to (6.4.5) if and only if $x^{0}$ is a fuzzy optimal solution to (6.4.4).

Corollary 6.4.2. The dual optimal solution $y$ to (6.4.2) corresponding to the primal optimum solution $x^{0}$ to (6.4.1) is unique if and only if, for a continuous and strictly monotone membership function $\mu_{\tilde{\varphi}} : R \to [0,1]$, $\tilde{y}$ is still a dual optimum solution to the programming (6.4.3).

Proof: Let $\mu_{\tilde{\varphi}}$ be Formula (1.5.3) in Section 1.5. Then we have

$$(6.4.3) \Longleftrightarrow \min z = cx \quad \text{s.t. } Ax = b,\; \mu_{\tilde{\varphi}}(c) \ge \beta,\; \beta \in [0,1],\; x \ge 0 \Longleftrightarrow (6.4.11)$$

by Ref. [Ver84]. Apply Corollary 6.4.1 to (6.4.11) and the conclusion holds.

Definition 6.4.1. Let $\mu_{\tilde{A}_0}(x), \mu_{\tilde{F}}(x)$ be the membership functions of the fuzzy objective and the fuzzy constraint. Then we call a fuzzy set $\tilde{D}$ satisfying $\mu_{\tilde{D}}(x) = \mu_{\tilde{A}_0}(x) \wedge \mu_{\tilde{F}}(x)$, $x \ge 0$, a fuzzy decision for the programming

$$\tilde{c}x \lesssim b_0 \quad \text{s.t. } Ax \lesssim b,\; x \ge 0, \eqno(6.4.13)$$

while we call a point $x^{*}$ satisfying $\mu_{\tilde{D}}(x^{*}) = \max_{x \ge 0}\{(1 - \mu_{\tilde{A}_0}(x)) \wedge \mu_{\tilde{F}}(x)\}$ an optimal solution to (6.4.13).

Theorem 6.4.4. The maximization of $\mu_{\tilde{D}}(x)$ is equivalent to the linear programming

$$\min (m + \beta M_0 n^{-1})x \quad \text{s.t. } Ax \le b_1 + B\beta b_2^{-1} + d\alpha,\; \alpha, \beta \in [0,1],\; x \ge 0, \eqno(6.4.14)$$

$d$ denoting a flexible index, and $M_0$ and $B$ representing the lengths of the intervals $[m, n]$ and $[b_1, b_2]$, respectively.

Proof: From Formulas (1.5.3), (1.5.4) and (1.5.5) in Section 1.5, we have

$$\max \mu_{\tilde{D}}(x) \Longleftrightarrow \max (-\tilde{c})x \quad \text{s.t. } Ax \lesssim \tilde{b},\; x \ge 0$$

$$\Longleftrightarrow \min \tilde{c}x \quad \text{s.t. } \mu_{\tilde{\varphi}}(c) \ge \beta,\; Ax \le b + d\alpha,\; \mu_{\tilde{\varphi}}(b) \ge \beta,\; \beta \in [0,1],\; x \ge 0 \Longleftrightarrow (6.4.14),$$

where $\tilde{c}, \tilde{b}$ can be freely fixed in the closed value intervals $[m, n]$ and $[b_1, b_2]$, their degree of accomplishment being determined by Formula (1.5.3).

6.4.4 Conclusion

The method in this chapter indicates that we can change a linear programming with fuzzy variables into a dual programming with fuzzy coefficients for solution, so that the problem is solved easily.

6.5 Antinomy in Fuzzy Linear Programming

6.5.1 Introduction

In 1971, Charnes and Klingman initiated the more-for-less (or less-for-more) paradox of the allotment model [Ck71]: if a constant $b$ in (6.4.1) increases by $d\ (>0)$, the objective value $z$ may decrease instead; if a constant $c$ in (6.4.2) decreases by $d\ (>0)$, the objective value $yb$ may increase instead. Such a strange phenomenon is called "antinomy" in mathematics. In 1986, Lin discussed the antinomy of general linear programming [Lin86] by taking its expansion. In 1987, Charnes, Duffuaa and Ryan also discussed the "more-for-less paradox" of general linear programming [CDR87]. In 1991, Yang and Jing put forward another sufficient and necessary condition for antinomy in general linear programming, and a condition for nonlinear programming under which antinomy appears [YJ91]. In 1991, the author first used fuzzy set methods to study the antinomy of linear programming [Cao91c]. We introduce the antinomy problem in fuzzy linear programming and present a fuzzy set method for its investigation.

Reason for Antinomy Emergence

Definition 6.5.1. Suppose $x^{0}$ is a basic feasible solution to (6.4.1). If its basic variable values are all positive, we call $x^{0}$ a nondegenerate basic feasible solution; if some basic variable values equal zero, we call $x^{0}$ a degenerate basic feasible solution. If all basic feasible solutions of the linear programming are nondegenerate, we call the programming nondegenerate.

Example 6.5.1: Consider finding

$$\min z = 2x_1 + 3x_2 + x_3 + 2x_4 \quad \text{s.t. } x_1 + x_2 + x_3 + x_4 = b'_1,\; 4x_2 + 2x_3 + 6x_4 = b'_2,\; 5x_1 + 6x_2 + 5x_3 + 4x_4 = b'_3,\; x_1, x_2, x_3, x_4 \ge 0,$$

where $b'_i = b_i + d_i$ ($i = 1, 2, 3$).

If we take $x_1, x_2, x_3$ as basic variables, the corresponding basis matrix $B$ and its inverse $B^{-1}$ are

$$B = \begin{pmatrix} 1 & 1 & 1\\ 0 & 4 & 2\\ 5 & 6 & 5 \end{pmatrix}, \qquad B^{-1} = \begin{pmatrix} -4 & -\frac{1}{2} & 1\\ -5 & 0 & 1\\ 10 & \frac{1}{2} & -2 \end{pmatrix}.$$

When the assignment volume of the three products is increased from $b = (9, 12, 46)^{T}$ to $b' = (10, 18, 50)^{T}$, the minimum cost $z = c_B x_B$ decreases from $z = 15$ to $z' = 11$. Why? If the problem is nondegenerate, and a negative component exists in $y = c_B B^{-1}$ or a certain evaluation coefficient $z_s < 0$, then the objective function satisfies

$$c_B B^{-1}b' = yb' = yb + yd < yb = cx^{*}$$

or

$$c_B x_B = c_B B^{-1}b' = c_B B^{-1}b + \delta c_B B^{-1}P_s = cx^{*} + \delta z_s < cx^{*},$$

so that antinomy appears. Therefore, we have the following discussion [CDR87][Lin86]:

Corollary 6.5.1. Let a basic solution $x^{*} = (x_B, x_N)$ of (6.4.1) be a nondegenerate optimum solution. If $\exists j_0 : z_{j_0} < 0$, then antinomy takes shape in (6.4.1).

Proposition 6.5.1. Let a basic solution $x^{*} = (x_B, x_N)$ of (6.4.1) be a nondegenerate optimum solution. Antinomy arises if and only if a negative component exists in $y = c_B B^{-1}$.

Does the conclusion above hold if programming (6.4.1) degenerates?

Proposition 6.5.2. If (6.4.1) is a degenerate linear programming, then

$$\min\left\{cx \;\middle|\; Ax = b + \sum_{j=1}^{n} \varepsilon_j P_j = b(\varepsilon),\; x \ge 0,\; \varepsilon > 0 \text{ sufficiently small}\right\} \eqno(6.5.1)$$

is a nondegenerate linear programming.

Theorem 6.5.1. If $\varepsilon = 0$ is set in any basic feasible solution to (6.5.1) when $\varepsilon$ is sufficiently small, a basic feasible solution to the degenerate linear programming (6.4.1) is obtained.
Proposition 6.5.3. If a basic solution $x^{*}(0) = (x_B(0), x_N)$ is a degenerate optimum solution to the linear programming (6.4.1), then antinomy arises if and only if a negative component exists in $y = c_B B^{-1}$.

Proof: Consider the basic solution $x^{*}(\varepsilon) = (x_B(\varepsilon), x_N)$ of (6.5.1), where $x_B(\varepsilon) = B^{-1}b(\varepsilon) - B^{-1}Nx_N$ is nondegenerate. According to Proposition 6.5.1, if $x_B(\varepsilon)$ is a nondegenerate optimum solution, antinomy appears if and only if a negative component exists in $y = c_B B^{-1}$. When $\varepsilon$ is sufficiently small, setting $\varepsilon = 0$ in any basic feasible solution to (6.5.1) yields a basic feasible solution to (6.4.1). If (6.5.1) is solved with $\varepsilon$ sufficiently small, we get a list of basic feasible solutions $x(\varepsilon) = \{x^{0}(\varepsilon), x^{1}(\varepsilon), \dots\}$ ending at an optimal solution $x^{*}(\varepsilon)$. If $\varepsilon = 0$, we likewise have a list of basic feasible solutions $x(0) = \{x^{0}(0), x^{1}(0), \dots\}$ for (6.4.1). Since the coefficient matrices and objective functions of (6.5.1) and (6.4.1) are all equal, the test numbers of the basic feasible solutions $x^{i}(\varepsilon)$ and $x^{i}(0)$ are identical. Therefore $x^{*}(0)$ is also an optimal solution to (6.4.1). This demonstrates that $x^{*}(\varepsilon)$ being a nondegenerate optimum solution to (6.5.1) is equivalent to $x^{*}(0)$ being a degenerate optimum one to (6.4.1). At this point the objective function

$$c_B B^{-1}b'(\varepsilon) = yb'(\varepsilon) - yNx_N = yb_0(\varepsilon) + \delta yT - yNx_N < yb_0(\varepsilon) = cx^{*}(\varepsilon)$$

holds when a negative component exists in $y$, and $c_B B^{-1}b'(0) < cx^{*}(0)$ for $\varepsilon = 0$. Therefore the proposition holds.

Corollary 6.5.2. Antinomy arises in (6.4.1) under the condition of Proposition 6.5.3 in the event of $\exists j_0 : z_{j_0} < 0$.
Proof: Because (6.4.1) and (6.5.1) have identical coefficient matrices and objective functions, the test numbers of their basic feasible solutions $x^{i}(\varepsilon)$ and $x^{i}(0)$ coincide, so a negative component must exist in $y$ whenever $z_{j_0} = c_B B^{-1}P_{j_0} = yP_{j_0} < 0$, with

$$c_B x_B(\varepsilon) = c_B B^{-1}b_0(\varepsilon) + \delta c_B B^{-1}P_s - c_B B^{-1}Nx_N = cx^{*}(\varepsilon) + \delta z_s < cx^{*}(\varepsilon)$$

($\delta > 0$ sufficiently small); hence $c_B x_B(0) < cx^{*}(0)$ for $\varepsilon = 0$. Therefore the corollary holds.

In conclusion, whether a classical linear programming degenerates or not, antinomy can come into being. If we want to keep antinomy from arising, we need only change the equality sign in the constraint condition into an inequality sign.

Proposition 6.5.4. Let $\mu_{\tilde{\varphi}}$ be a continuous and strictly monotone function. If a basic solution $x^{*} = (x_B, x_N)^{T}$ is nondegenerate in the fuzzy linear programming

$$\min \tilde{z} = \tilde{c}x \quad \text{s.t. } Ax = b,\; x \ge 0, \eqno(6.5.2)$$

then antinomy arises if and only if a negative component exists in the fuzzy shadow price $\tilde{y} = \tilde{c}_B B^{-1}$.

Proof: Necessity. Since (6.5.2) $\Longleftrightarrow$ (6.4.5) and (6.4.4) $\Longleftrightarrow$ (6.4.6), if $\tilde{y} = \tilde{c}_B B^{-1} \ge \tilde{0} \Longleftrightarrow y(\beta) = (\mu_{\tilde{\varphi}}^{-1}(\beta))_B B^{-1} \ge 0$, then for every $T \ge 0$, when $b \to b + T$, for any feasible solution of this problem with a soft constraint we know, from the dual theorem of ordinary parametric linear programming,

$$\mu_{\tilde{D}}(x) = \mu_{\tilde{\varphi}}^{-1}(\beta)x \ge y(b + T) \ge yb = \mu_{\tilde{\varphi}}^{-1}(\beta)x^{*} = \mu_{\tilde{D}}(x^{*}),$$

such that $\tilde{c}x \ge \tilde{y}b = \tilde{c}x^{*}$.

Sufficiency. If there exists a negative component in $\tilde{y}$, then there exists $T \ge 0$ with $\sum_{i=1}^{m} t_i > 0$ such that $\tilde{y}T < \tilde{0}$ ($\delta > 0$). Then the problem with a soft constraint has, for the basis $B$, the basic solution $x_B = B^{-1}b' = B^{-1}b + \delta B^{-1}T$, $x_N = 0$. Since $x^{*}$ is nondegenerate by the assumption of the proposition, $x_B \ge 0$ and the test numbers remain unchanged when $\delta > 0$ is sufficiently small. Therefore it is still an optimal solution of the soft-constrained problem, and $\tilde{y} = \tilde{c}_B B^{-1}$ is still a fuzzy optimal solution to the dual problem. But the objective value, for all $\beta \in [0,1]$, satisfies

$$(\mu_{\tilde{\varphi}}^{-1}(\beta))_B B^{-1}b' = y(\beta)b' = y(\beta)b + \delta y(\beta)T < y(\beta)b = cx^{*} \Longleftrightarrow \tilde{c}_B B^{-1}b' = \tilde{y}b' = \tilde{y}b + \delta\tilde{y}T < \tilde{y}b = \tilde{c}x^{*},$$

such that antinomy arises in (6.5.2).

Corollary 6.5.3. Let $\mu_{\tilde{\varphi}}$ be a continuous and strictly monotone function. If a basic solution $x^{*} = (x_B, x_N)$ to (6.5.2) is a nondegenerate optimum solution, then antinomy arises under the condition $\exists j_0$ such that $\tilde{z}_{j_0} < \tilde{0}$.

Proof: From the proof of Proposition 6.5.4 we know

$$\sup_{Ax=b} \mu_{\tilde{\varphi}_0}(x) = \sup \mu_{\tilde{\varphi}_0}(x_B) = \sup_{\beta \in [0,1]} (\mu_{\tilde{\varphi}}^{-1}(\beta))_B x_B = \sup_{\beta \in [0,1]} \bigl\{(\mu_{\tilde{\varphi}}^{-1}(\beta))_B B^{-1}b + \delta(\mu_{\tilde{\varphi}}^{-1}(\beta))_B B^{-1}P_j\bigr\} < \sup_{\beta \in [0,1]} (\mu_{\tilde{\varphi}}^{-1}(\beta))_B B^{-1}b.$$

(Because $z_j = (\mu_{\tilde{\varphi}}^{-1}(\beta))_B B^{-1}P_j < 0$, where $x_B = B^{-1}b + \delta B^{-1}T$, $x_N = 0$, there must exist a negative component in $y(\beta) = (\mu_{\tilde{\varphi}}^{-1}(\beta))_B B^{-1}$.) Equivalently, there must be a fuzzy negative component in $\tilde{y}$, so that $\tilde{z}_j = \tilde{c}_B B^{-1}P_j = \tilde{y}P_j < \tilde{0}$ and

$$\tilde{c}_B x_B = \tilde{c}x^{*} + \delta\tilde{z}_j < \tilde{c}x^{*}.$$

Proposition 6.5.5. If (6.5.2) is a degenerate fuzzy linear programming, then

$$\min \tilde{z} = \tilde{c}x \quad \text{s.t. } Ax = b + \sum_{j=1}^{n} \varepsilon_j P_j = b(\varepsilon),\; x \ge 0 \eqno(6.5.3)$$

is a nondegenerate fuzzy linear programming, where $\varepsilon_j$ is a sufficiently small positive number.

Proposition 6.5.6. Let $\mu_{\tilde{\varphi}}$ be a continuous and strictly monotone function. If a basic solution $x^{*}(0) = (x_B(0), x_N)$ to (6.5.2) is a degenerate optimum solution, then antinomy appears if and only if a fuzzy negative component exists in $\tilde{y} = \tilde{c}_B B^{-1}$.

Corollary 6.5.4. Let $\mu_{\tilde{\varphi}}$ be a continuous and strictly monotone function. Suppose a basic solution $x^{*}(0) = (x_B(0), x_N)$ to (6.5.2) is a degenerate optimum solution; then antinomy arises under the condition $\exists j_0$ such that $\tilde{z}_{j_0} < \tilde{0}$.

In fact, because

$$\min \tilde{c}x \quad \text{s.t. } Ax = b(\varepsilon),\; x \ge 0 \eqno(6.5.4)$$

is equivalent to

$$\min \mu_{\tilde{\varphi}}^{-1}(\beta)x \quad \text{s.t. } Ax = b(\varepsilon),\; \beta \in [0,1],\; x \ge 0, \eqno(6.5.5)$$

the dual form of (6.5.5) is

$$\max yb(\varepsilon) \quad \text{s.t. } yA = \mu_{\tilde{\varphi}}^{-1}(\beta),\; \beta \in [0,1],\; y \ge 0.$$

Therefore, (6.5.4) is equivalent to

$$\max \tilde{y}b(\varepsilon) \quad \text{s.t. } \tilde{y}A = \tilde{c},\; \tilde{y} \ge \tilde{0}.$$

From Proposition 6.5.3 and Corollary 6.5.1, we know the following properties:

a. If there exists a degenerate optimum basic solution $x^{*}(0, \beta)$ of the classical linear programming (6.5.5) with parameter variable $\beta$, then antinomy appears if and only if a negative component exists in $\tilde{y} = \tilde{c}_B B^{-1}$.

b. Under the condition of a, if $\exists j_0 : z_{j_0} < 0$, then antinomy appears in (6.5.5).

6.5.3 Example

Example 6.5.2: The fuzzy linear programming corresponding to Example 6.5.1 in this section is

$$\min z = 2x_1 + 3x_2 + x_3 + 2x_4 \quad \text{s.t. } x_1 + x_2 + x_3 + x_4 \gtrsim 9,\; 4x_2 + 2x_3 + 6x_4 \gtrsim 12,\; 5x_1 + 6x_2 + 5x_3 + 4x_4 \gtrsim 46,\; x_i \ge 0\ (i = 1, \dots, 4). \eqno(6.5.6)$$

Assume $d_1 = 1$, $d_2 = 6$, $d_3 = 4$ and form the parametric programming; then (6.5.6) is turned into

$$\min z = 2x_1 + 3x_2 + x_3 + 2x_4 \quad \text{s.t. } x_1 + x_2 + x_3 + x_4 + x_5 = 10,\; 4x_2 + 2x_3 + 6x_4 + x_6 = 18,\; 5x_1 + 6x_2 + 5x_3 + 4x_4 + x_7 = 50,\; x_i \ge 0\ (i = 1, \dots, 7).$$

With the basis matrix $B$ unchanged, the optimal parametric solution is

$$x = B^{-1}b(\alpha) = \left(4 - \tfrac{7}{2}\alpha,\; 1 - 4\alpha,\; 4 + \tfrac{17}{2}\alpha\right)^{T},$$

and the optimal value is $z = 15 - \tfrac{21}{2}\alpha$.

When $b$ is increased from $b = (9, 12, 46)^{T}$ to $b' = (10, 18, 50)^{T}$, $z$ decreases from $z = 15$ to $11$. Therefore, antinomy comes into being because negative components exist in the solution vector for $\alpha > \frac{1}{4}$.

6.5.4 Conclusion

On the whole, no matter whether (6.5.2) is a degenerate or nondegenerate fuzzy linear programming, antinomy can appear in both cases. To prevent antinomy in fuzzy linear programming, the constraint equality sign can be turned into a soft constraint. Overall, if the problem is changed into formula (6.4.12) for solution, the antinomy of the fuzzy linear programming does not arise. If the optimal solution of the primal linear programming is unique, then antinomy does not exist, which carries over to solutions of the fuzzy linear programming (6.5.2). In the light of Theorem 6.4.4, an ordinary linear programming is only a particular case of the fuzzy linear programming (6.4.12) with $d_i = 0$. Therefore, whether it is the antinomy of a linear programming or of a fuzzy linear programming, it can be handled by finding solutions with a fuzzy set method.
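The parametric solution reported in Example 6.5.2 can be sanity-checked for internal consistency: $c_B x(\alpha)$ must equal $z(\alpha) = 15 - \frac{21}{2}\alpha$, and the second basic variable goes negative for $\alpha > \frac{1}{4}$. A sketch (Python; the function names are ours):

```python
from fractions import Fraction as F

cB = [F(2), F(3), F(1)]

def x_of(alpha):
    """Parametric basic solution reported in Example 6.5.2."""
    return [4 - F(7, 2) * alpha, 1 - 4 * alpha, 4 + F(17, 2) * alpha]

def z_of(alpha):
    """Reported optimal value z(alpha) = 15 - (21/2) alpha."""
    return 15 - F(21, 2) * alpha
```

For every sampled $\alpha$, $c_B \cdot x(\alpha) = z(\alpha)$, and $x_2(\alpha) = 1 - 4\alpha < 0$ exactly when $\alpha > \frac{1}{4}$, matching the antinomy claim.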

6.6 Fuzzy Linear Programming Based on Fuzzy Numbers Distance

6.6.1 Introduction

In this section, we discuss constraint conditions with fuzzy coefficients, whose standard form is

$$\max z = cx \quad \text{s.t. } \tilde{A}x \lesssim \tilde{b},\; x \ge 0,$$

where $c = (c_1, c_2, \dots, c_n)$ is an $n$-dimensional crisp row vector, $\tilde{A} = (\tilde{a}_{ij})$ is an $m \times n$ fuzzy number matrix, $\tilde{b} = (\tilde{b}_1, \tilde{b}_2, \dots, \tilde{b}_m)^{T}$ is an $m$-dimensional fuzzy number column vector, and $x = (x_1, x_2, \dots, x_n)^{T}$ is the decision vector. Solving this kind of fuzzy linear programming is based on an order relation between fuzzy numbers, by which we can transform the fuzzy linear programming into a crisp linear programming.

6.6.2 Distance

A. Distance between Interval Numbers

Let a = [a1, a2] and b = [b1, b2] be two interval numbers; then a = b ⟺ a1 = b1 and a2 = b2.


6 Fuzzy Linear Programming

Similarly to Ref. [LiuH04], we also consider the difference between corresponding points of the two intervals, giving a new definition of the distance between interval numbers.

Definition 6.6.1. Let a = [a1, a2] and b = [b1, b2] be two interval numbers. Then define

d(a, b) = ∫_{−1/2}^{1/2} | [ (a1 + a2)/2 + x(a2 − a1) ] − [ (b1 + b2)/2 + x(b2 − b1) ] | dx   (6.6.1)

as the distance between a and b.

We can verify that d(a, b) satisfies the three conditions of a distance. In fact, let

f(x) = | [ (a1 + a2)/2 + x(a2 − a1) ] − [ (b1 + b2)/2 + x(b2 − b1) ] |.

Since f(x) is the absolute value of a function affine in x, f(x) is continuous, so d(a, b) is well defined as an integral.

(1) Since f(x) ≥ 0, by the continuity and integrability of f(x) we have d(a, b) ≥ 0. If d(a, b) = 0, then f(x) ≡ 0, i.e.,

( (a1 + a2)/2 − (b1 + b2)/2 ) + x[ (a2 − a1) − (b2 − b1) ] = 0 (∀x ∈ [−1/2, 1/2]),

which forces a1 = b1, a2 = b2, hence a = b. Conversely, when a = b, i.e., a1 = b1, a2 = b2, we have f(x) ≡ 0, thus d(a, b) = ∫_{−1/2}^{1/2} f(x) dx = 0.

(2) d(a, b) = d(b, a) holds obviously.

(3) For any interval number c = [c1, c2], denote a_x = (a1 + a2)/2 + x(a2 − a1), b_x = (b1 + b2)/2 + x(b2 − b1), c_x = (c1 + c2)/2 + x(c2 − c1). Then 0 ≤ |a_x − b_x| ≤ |a_x − c_x| + |c_x − b_x|, so

∫_{−1/2}^{1/2} |a_x − b_x| dx ≤ ∫_{−1/2}^{1/2} |a_x − c_x| dx + ∫_{−1/2}^{1/2} |c_x − b_x| dx.

It follows that d(a, b) ≤ d(a, c) + d(c, b).
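As a quick numerical illustration of Definition 6.6.1 (a sketch in our own notation, not part of the book's text), note that the integrand is the absolute value of an affine function of x, so the integral can be evaluated exactly by splitting at the sign change:

```python
def interval_distance(a, b):
    """d(a, b) of Definition 6.6.1 for interval numbers a = [a1, a2], b = [b1, b2].

    The integrand is |c + m*x| with c = midpoint difference, m = width
    difference; it is piecewise affine, so the integral over [-1/2, 1/2]
    can be computed exactly from the antiderivative.
    """
    c = (a[0] + a[1]) / 2 - (b[0] + b[1]) / 2      # midpoint difference
    m = (a[1] - a[0]) - (b[1] - b[0])              # width difference
    F = lambda x: c * x + m * x * x / 2            # antiderivative of c + m*x
    if m != 0 and -0.5 < -c / m < 0.5:             # sign change inside [-1/2, 1/2]
        r = -c / m
        return abs(F(r) - F(-0.5)) + abs(F(0.5) - F(r))
    return abs(F(0.5) - F(-0.5))                   # constant sign

# Metric axioms on sample intervals:
a, b, c3 = [0, 2], [3, 5], [1, 4]
assert interval_distance(a, a) == 0                                   # identity
assert interval_distance(a, b) == interval_distance(b, a)             # symmetry
assert interval_distance(a, b) <= interval_distance(a, c3) + interval_distance(c3, b) + 1e-12
```

When the two intervals have equal widths the distance reduces to the distance between midpoints, e.g. interval_distance([0, 2], [3, 5]) = 3.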

In the distance formula, the integrand f(x) = |[(a1 + a2)/2 + x(a2 − a1)] − [(b1 + b2)/2 + x(b2 − b1)]| is the distance between corresponding points of the two intervals. At x = −1/2, f(−1/2) is the distance between the left endpoints of the two interval numbers; at x = 1/2, f(1/2) is the distance between the right endpoints.

B. Distance between Fuzzy Numbers

Definition 6.6.2 [TD02]. A fuzzy set Ã in the real number set is called an L-R fuzzy number if its membership function is

μÃ(x) = L((a2 − x)/(a2 − a1)) for a1 ≤ x ≤ a2; 1 for a2 ≤ x ≤ a3; R((x − a3)/(a4 − a3)) for a3 ≤ x ≤ a4; 0 for x < a1 or x > a4,   (6.6.2)

where L, R are strictly decreasing functions on [0, 1] satisfying L(x) = R(x) = 1 (x ≤ 0) and L(x) = R(x) = 0 (x ≥ 1). The fuzzy number is denoted by Ã = (a1, a2, a3, a4)LR.

Especially, when L(x) = R(x) = 1 − x, the fuzzy number defined in (6.6.2) is a trapeziform fuzzy number, denoted by Ã = (a1, a2, a3, a4); when L(x) = R(x) = 1 − x and a2 = a3, it is a triangular fuzzy number, denoted by Ã = (a1, a2, a3).

For every α ∈ [0, 1], the α-level cut of a fuzzy number Ã is an interval number:

Ã_α = [A_L(α), A_R(α)],

where A_L(α) = a2 − (a2 − a1) L_A⁻¹(α) and A_R(α) = a3 + (a4 − a3) R_A⁻¹(α). By means of the distance between interval numbers, we define the distance between fuzzy numbers as follows.

Definition 6.6.3. Let Ã and B̃ be two fuzzy numbers, with

Ã_α = [A_L(α), A_R(α)] = [a2 − (a2 − a1) L_A⁻¹(α), a3 + (a4 − a3) R_A⁻¹(α)],
B̃_α = [B_L(α), B_R(α)] = [b2 − (b2 − b1) L_B⁻¹(α), b3 + (b4 − b3) R_B⁻¹(α)].

Then we define the distance between Ã and B̃ by

D(Ã, B̃) = ∫₀¹ d(Ã_α, B̃_α) dα,

where

d(Ã_α, B̃_α) = ∫_{−1/2}^{1/2} | [ (A_L(α) + A_R(α))/2 + x(A_R(α) − A_L(α)) ] − [ (B_L(α) + B_R(α))/2 + x(B_R(α) − B_L(α)) ] | dx.   (6.6.3)
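For trapeziform numbers (L(x) = R(x) = 1 − x, so L⁻¹(α) = 1 − α) the α-cuts are in closed form, and D(Ã, B̃) of Definition 6.6.3 can be approximated by integrating the exact interval distance over α. A small self-contained sketch (our code, not the book's):

```python
def interval_distance(a, b):
    # exact integral of |c + m*x| over [-1/2, 1/2]  (Definition 6.6.1)
    c = (a[0] + a[1]) / 2 - (b[0] + b[1]) / 2
    m = (a[1] - a[0]) - (b[1] - b[0])
    F = lambda x: c * x + m * x * x / 2
    if m != 0 and -0.5 < -c / m < 0.5:
        r = -c / m
        return abs(F(r) - F(-0.5)) + abs(F(0.5) - F(r))
    return abs(F(0.5) - F(-0.5))

def alpha_cut(t, alpha):
    # alpha-cut of a trapeziform number t = (a1, a2, a3, a4): L^{-1}(alpha) = 1 - alpha
    a1, a2, a3, a4 = t
    return [a2 - (a2 - a1) * (1 - alpha), a3 + (a4 - a3) * (1 - alpha)]

def D(t1, t2, n=1000):
    # midpoint-rule approximation of the alpha-integral in Definition 6.6.3
    h = 1.0 / n
    return sum(interval_distance(alpha_cut(t1, (k + 0.5) * h),
                                 alpha_cut(t2, (k + 0.5) * h)) for k in range(n)) * h
```

Two crisp numbers recover the ordinary distance, e.g. D((2, 2, 2, 2), (5, 5, 5, 5)) ≈ 3, and translating a trapezoid by 1 gives distance 1.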


In fact, let

f(x, α) = | [ (A_L(α) + A_R(α))/2 + x(A_R(α) − A_L(α)) ] − [ (B_L(α) + B_R(α))/2 + x(B_R(α) − B_L(α)) ] |.

Since f(x, α) is the absolute value of a function affine in x, f(x, α) is continuous, hence d(Ã_α, B̃_α) is also continuous, so D(Ã, B̃) is well defined as an integral.

(1) Since d(Ã_α, B̃_α) ≥ 0, by the continuity and integrability of d(Ã_α, B̃_α) we have D(Ã, B̃) ≥ 0. If D(Ã, B̃) = 0, then d(Ã_α, B̃_α) = 0 for all α; by the definition of the distance between interval numbers, A_L(α) = B_L(α) and A_R(α) = B_R(α), hence Ã = B̃. Conversely, when Ã = B̃, i.e., A_L(α) = B_L(α), A_R(α) = B_R(α), we have d(Ã_α, B̃_α) = 0, so D(Ã, B̃) = ∫₀¹ d(Ã_α, B̃_α) dα = 0.

(2) D(Ã, B̃) = D(B̃, Ã) holds obviously.

(3) For any fuzzy number C̃ with C̃_α = [C_L(α), C_R(α)] = [c2 − (c2 − c1) L_C⁻¹(α), c3 + (c4 − c3) R_C⁻¹(α)], the distance between interval numbers gives 0 ≤ d(Ã_α, B̃_α) ≤ d(Ã_α, C̃_α) + d(C̃_α, B̃_α), so

∫₀¹ d(Ã_α, B̃_α) dα ≤ ∫₀¹ d(Ã_α, C̃_α) dα + ∫₀¹ d(C̃_α, B̃_α) dα

holds.

6.6.3 Ranking Fuzzy Numbers

Here we present a ranking idea for fuzzy numbers: before ranking, we fix a real number M as a reference object (M is the supremum over the support sets of Ã and B̃). The nearer a fuzzy number is to M, the larger it is; that is, the smaller its distance to M, the larger the fuzzy number.

Definition 6.6.4. If M = sup(s(Ã) ∪ s(B̃)), we call M the supremum of Ã and B̃, where s(Ã) and s(B̃) are the support sets of Ã and B̃, respectively.

By Definition 6.6.3, we can obtain the distance from a fuzzy number Ã to M:

D(Ã, M) = ∫₀¹ { ∫_{−1/2}^{1/2} { M − [ (A_L(α) + A_R(α))/2 + x(A_R(α) − A_L(α)) ] } dx } dα, ∀α ∈ [0, 1].

Computing the integrals,

D(Ã, M) = M − (a2 + a3)/2 + ((a2 − a1)/2) ∫₀¹ L_A⁻¹(α) dα − ((a4 − a3)/2) ∫₀¹ R_A⁻¹(α) dα.   (6.6.4)


Similarly,

D(B̃, M) = M − (b2 + b3)/2 + ((b2 − b1)/2) ∫₀¹ L_B⁻¹(α) dα − ((b4 − b3)/2) ∫₀¹ R_B⁻¹(α) dα.   (6.6.5)
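For trapeziform numbers with L⁻¹(α) = R⁻¹(α) = 1 − α, the integrals in (6.6.4)–(6.6.5) evaluate to ∫₀¹ (1 − α) dα = 1/2, so the distance collapses to D(Ã, M) = M − (a1 + a2 + a3 + a4)/4. A quick numerical check of this simplification (a sketch, not the book's code):

```python
def D_to_M(t, M, n=4000):
    """Numerical evaluation of (6.6.4) for a trapeziform number t = (a1, a2, a3, a4).

    D(A, M) = integral over alpha of (M - midpoint of the alpha-cut),
    with L^{-1}(alpha) = 1 - alpha.  The integrand is affine in alpha,
    so the midpoint rule is exact up to floating-point rounding.
    """
    a1, a2, a3, a4 = t
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        alpha = (k + 0.5) * h
        lo = a2 - (a2 - a1) * (1 - alpha)   # left end of the alpha-cut
        hi = a3 + (a4 - a3) * (1 - alpha)   # right end of the alpha-cut
        total += (M - (lo + hi) / 2) * h
    return total

# Closed form: M - (a1 + a2 + a3 + a4) / 4
t = (1.0, 2.0, 3.0, 4.0)
assert abs(D_to_M(t, M=10.0) - (10.0 - sum(t) / 4)) < 1e-9
```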

Thus we obtain the following definition of ranking fuzzy numbers.

Definition 6.6.5. Let Ã and B̃ be two fuzzy numbers, and M the supremum of Ã and B̃. Then:
(1) when D(Ã, M) < D(B̃, M), we call Ã > B̃;
(2) when D(Ã, M) = D(B̃, M), we call Ã = B̃;
(3) when D(Ã, M) > D(B̃, M), we call Ã < B̃.

Especially, when Ã and B̃ are trapeziform or triangular fuzzy numbers, we can get concrete expressions.

1° When Ã and B̃ are trapeziform fuzzy numbers Ã = (a1, a2, a3, a4), B̃ = (b1, b2, b3, b4), then

D(Ã, M) = M − (a1 + a2 + a3 + a4)/4;  D(B̃, M) = M − (b1 + b2 + b3 + b4)/4.

By Definition 6.6.5:
(1) a1 + a2 + a3 + a4 > b1 + b2 + b3 + b4 ⟺ Ã > B̃;
(2) a1 + a2 + a3 + a4 = b1 + b2 + b3 + b4 ⟺ Ã = B̃;
(3) a1 + a2 + a3 + a4 < b1 + b2 + b3 + b4 ⟺ Ã < B̃.

2° When Ã and B̃ are triangular fuzzy numbers Ã = (a1, a2, a3), B̃ = (b1, b2, b3), then

D(Ã, M) = M − (a1 + 2a2 + a3)/4;  D(B̃, M) = M − (b1 + 2b2 + b3)/4.

By Definition 6.6.5:
(1) a1 + 2a2 + a3 > b1 + 2b2 + b3 ⟺ Ã > B̃;
(2) a1 + 2a2 + a3 = b1 + 2b2 + b3 ⟺ Ã = B̃;
(3) a1 + 2a2 + a3 < b1 + 2b2 + b3 ⟺ Ã < B̃.

6.6.4 Linear Programming in Constraints with Fuzzy Coefficients

Assume that the linear programming in constraints with fuzzy coefficients is defined as follows:

max z = cx
s.t. Ãx ≤ b̃,   (6.6.6)
x ≥ 0,

176

6 Fuzzy Linear Programming

denoted as:

max z = c1 x1 + c2 x2 + ... + cn xn
s.t. ãi1 x1 + ãi2 x2 + ... + ãin xn ≤ b̃i, i = 1, 2, ..., m,   (6.6.7)
x1, x2, ..., xn ≥ 0,

where the fuzzy numbers are triangular ones, i.e., ãi1 = (ai11, ai12, ai13), ãi2 = (ai21, ai22, ai23), ..., ãin = (ain1, ain2, ain3); b̃i = (bi1, bi2, bi3). By Zadeh's extension principle, the sum of triangular fuzzy numbers is still triangular. Formula (6.6.7) is therefore equivalent to the following format:

max z = c1 x1 + c2 x2 + ... + cn xn
s.t. (ai11 x1 + ai21 x2 + ... + ain1 xn, ai12 x1 + ai22 x2 + ... + ain2 xn, ai13 x1 + ai23 x2 + ... + ain3 xn) ≤ (bi1, bi2, bi3),   (6.6.8)
x1, x2, ..., xn ≥ 0, i = 1, 2, ..., m.

By Definition 6.6.5 and the method of ranking triangular fuzzy numbers, we transform Formula (6.6.8) into a linear programming as follows:

max z = c1 x1 + c2 x2 + ... + cn xn
s.t. (ai11 x1 + ai21 x2 + ... + ain1 xn) + 2(ai12 x1 + ai22 x2 + ... + ain2 xn) + (ai13 x1 + ai23 x2 + ... + ain3 xn) ≤ bi1 + 2bi2 + bi3,
x1, x2, ..., xn ≥ 0, i = 1, 2, ..., m.

(6.6.9)

6.6.5 Numerical Example

Example 6.6.1: Solve the linear programming in constraints with fuzzy coefficients:

max z = 3x1 + 4x2
s.t. ã11 x1 + ã12 x2 ≤ b̃1,
ã21 x1 + ã22 x2 ≤ b̃2,
x1, x2 ≥ 0,

where ã11 = (3, 4, 4), ã12 = (20, 20, 21), ã21 = (11, 12, 13), ã22 = (5.4, 6.4, 7.4), b̃1 = (4500, 4600, 4800), b̃2 = (4600, 4800, 5250).

Solution: By Formula (6.6.9), transform the fuzzy linear programming into the crisp linear programming

max z = 3x1 + 4x2
s.t. (3x1 + 20x2) + 2(4x1 + 20x2) + (4x1 + 21x2) ≤ 4500 + 2 × 4600 + 4800,   (6.6.10)
(11x1 + 5.4x2) + 2(12x1 + 6.4x2) + (13x1 + 7.4x2) ≤ 4600 + 2 × 4800 + 5250,
x1, x2 ≥ 0,

and we obtain x1* = 315, x2* = 170, z1* = 1625.
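The transformed problem (6.6.10) collapses to two crisp constraints, 15x1 + 81x2 ≤ 18500 and 48x1 + 25.6x2 ≤ 19450, and can be checked by enumerating the vertices of the feasible region (a sketch, not the book's procedure; the exact vertex optimum is x1 ≈ 314.5, x2 ≈ 170.2, z ≈ 1624, which the text reports in rounded form as 315, 170 and 1625):

```python
# Constraints of (6.6.10) after summing the coefficient groups:
#   (3+2*4+4)  x1 + (20+2*20+21)    x2 <= 4500+2*4600+4800  ->  15 x1 + 81   x2 <= 18500
#   (11+2*12+13) x1 + (5.4+2*6.4+7.4) x2 <= 4600+2*4800+5250 -> 48 x1 + 25.6 x2 <= 19450
A = [(15.0, 81.0, 18500.0), (48.0, 25.6, 19450.0)]

def feasible(x1, x2, eps=1e-7):
    return x1 >= -eps and x2 >= -eps and all(a*x1 + b*x2 <= r + eps for a, b, r in A)

# Candidate vertices: origin, axis intercepts, and the intersection of the two lines.
(a1, b1, r1), (a2, b2, r2) = A
det = a1*b2 - a2*b1
cands = [(0.0, 0.0), (r1/a1, 0.0), (r2/a2, 0.0), (0.0, r1/b1), (0.0, r2/b2),
         ((r1*b2 - r2*b1)/det, (a1*r2 - a2*r1)/det)]
z, x1, x2 = max((3*x1 + 4*x2, x1, x2) for x1, x2 in cands if feasible(x1, x2))
```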


By the ranking idea of Ref. [LiR02], we transform the fuzzy linear programming into the crisp linear programming

max z = 3x1 + 4x2
s.t. 3x1 + 20x2 ≤ 4500,
4x1 + 20x2 ≤ 4600,
4x1 + 21x2 ≤ 4800,
11x1 + 5.4x2 ≤ 4600,
12x1 + 6.4x2 ≤ 4800,
13x1 + 7.4x2 ≤ 5250,
x1, x2 ≥ 0,

(6.6.11)

and we obtain x1* = 308, x2* = 168, z2* = 1598. Obviously z1* > z2*; at the same time, (6.6.10) has four fewer constraints than (6.6.11), which indicates that the ranking rule proposed here is superior to the ranking rule of Ref. [LiR02]; consequently we gain a better optimal value for the linear programming in constraints with fuzzy coefficients.

6.6.6 Conclusion

We propose a new distance between fuzzy numbers based on the distance between interval numbers. From this ranking idea we obtain a ranking rule for fuzzy numbers, and on its basis a new approach to linear programming with triangular fuzzy coefficients, exploiting the simplicity of triangular fuzzy numbers. Linear programming with general fuzzy coefficients remains to be researched.

6.7 Linear Programming with L-R Coefficients

6.7.1 Introduction

Consider the linear programming

m̃ax z̃ = c̃x
s.t. Ãx ≤ b̃,

(6.7.1)

x ≥ 0,

where c̃ = (c̃1, ..., c̃n) and b̃ = (b̃1, ..., b̃m)^T are L-R vectors, Ã = (ãij)m×n an L-R matrix, c̃j = (cj, c′j, c″j)LR, b̃i = (bi, b′i, b″i)LR and ãij = (aij, a′ij, a″ij)LR L-R numbers (with a′ the left spread and a″ the right spread), and x = (x1, x2, ..., xn)^T an ordinary variable vector. Two kinds of situations are now discussed respectively.


6.7.2 Linear Programming in Constraints with L-R Coefficients

Consider

max z = cx
s.t. Ãx ≤ b̃,   (6.7.2)
x ≥ 0.

Because the ãij and b̃i (1 ≤ i ≤ m, 1 ≤ j ≤ n) are all L-R numbers and xj ≥ 0,

Σ_{j=1}^n ãij xj = ( Σ_{j=1}^n aij xj, Σ_{j=1}^n a′ij xj, Σ_{j=1}^n a″ij xj )LR

is still an L-R number; hence

Σ_{j=1}^n ãij xj ≤ b̃i ⟺ Σ_{j=1}^n aij xj ≤ bi, Σ_{j=1}^n a′ij xj ≥ b′i, Σ_{j=1}^n a″ij xj ≤ b″i,   (6.7.3)

written down as A = (aij), A′ = (a′ij), A″ = (a″ij), b = (b1, b2, ..., bm)^T, b′ = (b′1, b′2, ..., b′m)^T, b″ = (b″1, b″2, ..., b″m)^T. Therefore (6.7.2) can be rewritten as the following ordinary linear programming with 3m linear inequality constraints:

max z = cx
s.t. Ax ≤ b,
A′x ≥ b′,   (6.7.4)
A″x ≤ b″,
x ≥ 0.

It is worth pointing out that turning (6.7.2) into (6.7.4) is independent of the concrete choice of the reference functions L and R in the L-R numbers. We consider only a two-variable linear programming and illustrate a graphical method for it (a simplex method may also be used).

Example 6.7.1: A person on a business trip needs to take two kinds of goods. Each package of Goods A weighs "6 kg, possibly more" (denoted


as 6̃ = (6, 0, 1)LR) and is worth 20 dollars; each package of Goods B weighs "2 kg or so" (denoted as 2̃ = (2, 1, 1)LR) and is worth 10 dollars. This person wishes to take "about 21 kg" at most (denoted as 2̃1 = (21, 1, 5)LR), hoping the total value of the goods taken is greatest.

Solution: Suppose he takes x1 packages of Goods A and x2 packages of Goods B; the problem is then the following linear programming in constraints with fuzzy coefficients:

max z = 20x1 + 10x2
s.t. 6̃x1 + 2̃x2 ≤ 2̃1,   (6.7.5)
x1 ≥ 0, x2 ≥ 0.

It is equivalent to the ordinary linear programming

max z = 20x1 + 10x2
s.t. 6x1 + 2x2 ≤ 21,
x2 ≥ 1,
x1 + x2 ≤ 5,
x1 ≥ 0, x2 ≥ 0.

Solving graphically (see Figure 6.7.1), the optimal solution is x1* = 11/4, x2* = 9/4, with optimal value z* = 310/4 = 77.5.

[Fig. 6.7.1. Illustrating method for (6.7.5): the feasible region is bounded by 6x1 + 2x2 = 21, x1 + x2 = 5 and x2 = 1, with the optimum (11/4, 9/4) lying on the objective line 20x1 + 10x2 = 77.5.]

If the goods may be split, he takes 2¾ packages of Goods A and 2¼ packages of B, worth 77.5 dollars in total. If the goods must be taken in whole packages, x1, x2 must be integers, and the problem is solved by integer programming. The result is that he takes 2 packages of Goods A and 3 of B (or 3 of A and 1 of B), the total value amounting to 70 dollars.
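Both the continuous and the whole-package answers of Example 6.7.1 can be checked by enumeration (a sketch in our own notation, not the book's program):

```python
def feasible(x1, x2, eps=1e-9):
    # crisp constraints derived from (6.7.5)
    return (x1 >= -eps and x2 >= 1 - eps and
            6*x1 + 2*x2 <= 21 + eps and x1 + x2 <= 5 + eps)

value = lambda x1, x2: 20*x1 + 10*x2

# Continuous optimum: the vertex where 6x1+2x2 = 21 and x1+x2 = 5 intersect.
x1, x2 = 11/4, 9/4
assert feasible(x1, x2) and value(x1, x2) == 77.5

# Whole-package optimum: brute force over integer points.
best = max((value(i, j), i, j) for i in range(6) for j in range(6) if feasible(i, j))
# best value 70 is attained at (2, 3) and at (3, 1); max() returns the latter
```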


6.7.3 Linear Programming in Objective with L-R Coefficients

Consider the problem as follows:

m̃ax z̃ = c̃x
s.t. Ax ≤ b,   (6.7.6)
x ≥ 0.

Because c̃ = (c̃1, ..., c̃n) with c̃j = (cj, c′j, c″j)LR, and

z̃ = (z, z′, z″)LR = ( Σ_{j=1}^n cj xj, Σ_{j=1}^n c′j xj, Σ_{j=1}^n c″j xj )LR

are all L-R numbers, according to the approximate formula for m̃ax, (6.7.6) is approximately equivalent to a linear programming with three objectives:

max ( z = Σ_{j=1}^n cj xj = cx ),
min ( z′ = Σ_{j=1}^n c′j xj = c′x ),
max ( z″ = Σ_{j=1}^n c″j xj = c″x )
s.t. Ax ≤ b, x ≥ 0.

Example 6.7.2: Solve the fuzzy linear programming

m̃ax z̃ = c̃1 x1 + c̃2 x2
s.t. 6x1 + 2x2 ≤ 21, x1 ≥ 0, x2 ≥ 0,

where c̃1 = (20, 3, 4)LR and c̃2 = (10, 2, 1)LR. This problem is approximately equivalent to

max z = 20x1 + 10x2,
min z′ = 3x1 + 2x2,
max z″ = 4x1 + x2
s.t. 6x1 + 2x2 ≤ 21, x1 ≥ 0, x2 ≥ 0.


Find an optimal solution for each objective respectively:

(1) x1^(1) = 0, x2^(1) = 10.5, z^(1) = 105, at which z′^(1) = 21, z″^(1) = 10.5;
(2) x1^(2) = 0, x2^(2) = 0, z′^(2) = 0, at which z^(2) = z″^(2) = 0;
(3) x1^(3) = 3.5, x2^(3) = 0, z″^(3) = 14, at which z^(3) = 70, z′^(3) = 10.5.

Subjectively give flexible indices d1 = 5, d2 = 20, d3 = 4, and construct three fuzzy objective sets M̃1, M̃2, M̃3 with membership functions:

μM̃1(x) = f1(20x1 + 10x2) = 0 if 20x1 + 10x2 < 100; 1 − (1/5)(105 − 20x1 − 10x2) if 100 ≤ 20x1 + 10x2 < 105; 1 if 20x1 + 10x2 ≥ 105;

μM̃2(x) = f2(3x1 + 2x2) = 0 if 3x1 + 2x2 ≥ 20; 1 − (1/20)(3x1 + 2x2) if 0 ≤ 3x1 + 2x2 < 20;

μM̃3(x) = f3(4x1 + x2) = 0 if 4x1 + x2 < 10; 1 − (1/4)(14 − 4x1 − x2) if 10 ≤ 4x1 + x2 < 14; 1 if 4x1 + x2 ≥ 14.

Let M̃ = M̃1 ∩ M̃2 ∩ M̃3. Then the problem is changed into an ordinary linear programming:

max α
s.t. 1 − (1/5)(105 − 20x1 − 10x2) ≥ α,
1 − (1/20)(3x1 + 2x2) ≥ α,
1 − (1/4)(14 − 4x1 − x2) ≥ α,
6x1 + 2x2 ≤ 21,
0 ≤ α ≤ 1, x1 ≥ 0, x2 ≥ 0,


i.e.,

max α
s.t. 20x1 + 10x2 − 5α ≥ 100,
3x1 + 2x2 + 20α ≤ 20,
4x1 + x2 − 4α ≥ 10,
6x1 + 2x2 ≤ 21,
0 ≤ α ≤ 1, x1, x2 ≥ 0.

The optimal solution x1* = 0.488, x2* = 9.035, α* = 0.022 is obtained; correspondingly, z* = 100.11, z′* = 19.534, z″* = 10.987, and the approximate fuzzy optimal value is z̃* = (100.11, 19.534, 10.987)LR.
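This three-variable max-α program can be checked by enumerating intersections of constraint planes (a sketch, not the book's method). The exact optimal vertex is α = 1/43 ≈ 0.0233 with x1 ≈ 0.488, x2 ≈ 9.035, close to the printed values (0.488, 9.035, 0.022):

```python
from itertools import combinations

# Constraints in the form a*x1 + b*x2 + c*alpha <= r
cons = [(-20, -10, 5, -100),   # 20x1 + 10x2 - 5a >= 100
        (3, 2, 20, 20),        # 3x1 + 2x2 + 20a <= 20
        (-4, -1, 4, -10),      # 4x1 + x2 - 4a >= 10
        (6, 2, 0, 21),         # 6x1 + 2x2 <= 21
        (0, 0, 1, 1),          # a <= 1
        (0, 0, -1, 0),         # a >= 0
        (-1, 0, 0, 0),         # x1 >= 0
        (0, -1, 0, 0)]         # x2 >= 0

def solve3(rows):
    # Cramer's rule for a 3x3 linear system built from three tight constraints
    det = lambda m: (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
                     - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
                     + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))
    M = [list(r[:3]) for r in rows]
    rhs = [r[3] for r in rows]
    d = det(M)
    if abs(d) < 1e-12:
        return None
    col = lambda k: [[rhs[i] if j == k else M[i][j] for j in range(3)] for i in range(3)]
    return [det(col(k)) / d for k in range(3)]

best = None
for rows in combinations(cons, 3):
    p = solve3(rows)
    if p and all(a*p[0] + b*p[1] + c*p[2] <= r + 1e-9 for a, b, c, r in cons):
        if best is None or p[2] > best[2]:
            best = p
x1, x2, alpha = best
```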

6.7.4 Conclusion

When both the objective and the constraints have L-R coefficients, the two methods above can be combined, changing the problem into finding a fuzzy optimal solution to a multi-objective linear programming. Meanwhile, after the flexible indices d1, d2, d3 are given subjectively, the constraint field of the linear programming may turn out to be empty; in that case the problem has no optimal solution, and the flexible indices need to be adjusted appropriately to guarantee the existence of an optimal solution.

6.8 Linear Programming Model with T-Fuzzy Variables

6.8.1 Introduction

Theoretically, we build a new linear programming model on the basis of T-fuzzy numbers, study its dual form, nonfuzzify it under a cone index J, and turn a linear programming with T-fuzzy variables into a linear programming depending on a cone index J. In such a theoretical framework, we can transplant many results of linear programming into linear programming with T-fuzzy variables [Cao96a].


6.8.2 Linear Programming with T-Fuzzy Variables

Definition 6.8.1. Let the fuzzy linear programming be

(L̃P) m̃in c x̃
s.t. A x̃ ≥ b̃,   (6.8.1)
x̃ ≥ 0,

where c is a real 1 × n matrix, A a real m × n matrix, x̃ a real n-dimensional T-fuzzy variable vector, and b̃ = (b̃1, b̃2, ..., b̃m)^T a real m-dimensional T-fuzzy vector. If x̃ and b̃ are T-fuzzy data defined as in Refs. [Cao89b,c], [Dia87] and [DPr80], i.e., x̃ = (x̃1, x̃2, ..., x̃n)^T with x̃l = (xl, ξ′l, ξ″l) (1 ≤ l ≤ n) and 1̃ = (1, 1, 1), then (6.8.1) is called a linear programming with T-fuzzy variables. We call

(LP(J)) min Σ_{l=1}^n cl Ul
s.t. Σ_{l=1}^n ail Ul ≥ bi(J) (1 ≤ i ≤ m),
U ≥ 0

a linear programming depending on a cone index J, where U = (U1, U2, ..., Un)^T is an n-dimensional vector, Ul = Σ_{i=1}^{3M} Uil / 3M, and bi(J) is a number depending on the cone index J.

Theorem 6.8.1. Let the linear programming (L̃P) be given by T-fuzzy variables. Then (L̃P) is equivalent to (LP(J)) for a given cone index J, and (LP(J)) has an optimal solution depending on the cone index J, equivalent to (L̃P) having a T-fuzzy optimal one.

Proof: Let {x̃il} be a column of T-fuzzy variables satisfying (L̃P), where x̃il = (xl, ξ′il, ξ″il)^T (1 ≤ i ≤ m; 1 ≤ l ≤ n). We classify the column vectors by subscripts, and may let l = 1, ..., N correspond to the variables with smaller fluctuation and l = N + 1, ..., 3N to the others. Then for i = 1, ..., M and each l, Uil = xl + (ξ′il + ξ″il)/2; for i = M + 1, ..., 2M and each l, Uil = xl − ξ′il if jl = 0, and xl + ξ″il if jl = 1; for i = 2M + 1, ..., 3M and each l, Uil = xl + ξ″il if jl = 0, and xl − ξ′il if jl = 1. So, under a given cone index J, (L̃P) is changed into (LP(J)).


From the equivalence of (L̃P) and (LP(J)), we know that (LP(J)) has an optimal solution depending on the cone index J, which is equivalent to (L̃P) having an optimal T-fuzzy solution. Therefore the theorem holds.

Theorem 6.8.1 shows that (L̃P) can be turned into an ordinary parametric linear programming (LP(J)) depending on a cone index J; many methods exist for (LP(J)), and an optimal solution to it can be found in any literature on linear programming.

6.8.3 Dual Problem

For the linear programming with T-fuzzy variables there always exists a dual linear programming with T-fuzzy parameters corresponding to it.

Let Ul = xl + Σ_{i=1}^{3M} ξil / 3M. Then

(LP(J)) ⟺ min Σ_{l=1}^n cl ( xl + Σ_{i=1}^{3M} ξil / 3M )
s.t. Σ_{l=1}^n ail ( xl + Σ_{i=1}^{3M} ξil / 3M ) ≥ bi(J),   (6.8.2)
xl ≥ 0 (1 ≤ i ≤ m; 1 ≤ l ≤ n),

where ξil is ξ′il (resp. −ξ′il) or ξ″il (resp. −ξ″il). Substitute Xl = xl + Σ_{i=1}^{3M} ξil / 3M and, renaming Xl as xl, turn (6.8.2) into

min Σ_{l=1}^n cl xl
s.t. Σ_{l=1}^n ail xl ≥ bi(J),   (6.8.3)
xl ≥ 0 (1 ≤ i ≤ m; 1 ≤ l ≤ n),

i.e.,

min cx
s.t. Ax ≥ b(J),
x ≥ 0,

while the dual form of (6.8.3) is

max y b(J)
s.t. A^T y ≤ c,   (6.8.4)
y ≥ 0.

Theorem 6.8.2. Suppose linear programming (L̃P) is deduced from T-fuzzy variables. Its dual form is

m̃ax y b̃
s.t. A^T y ≤ c,   (6.8.5)
y ≥ 0,

and (L̃P) has an optimal T-fuzzy solution equivalent to (6.8.5) having an optimal solution, and (L̃P) has the same optimal T-fuzzy values as (6.8.5).

Proof: As (L̃P) can be changed into (LP(J)) under the above cone index J, and the dual form of (LP(J)) is equivalent to (6.8.4), (6.8.5) can be changed into (6.8.4) under the same cone index J. Again, (L̃P) is known to be mutually dual with (6.8.5), owing to the equivalence of (L̃P) with (LP(J)) and of (6.8.5) with (6.8.4), and the mutual duality of (LP(J)) and (6.8.4). Moreover, (LP(J)) and (6.8.4) are, respectively, an ordinary primal linear programming and a dual linear programming depending on the same cone index J. Applying Theorem 2 in Section 4.2 of Ref. [GZ83] to (LP(J)) and (6.8.4), if one of them has an optimal solution, then so does the other, and they attain the same optimal value; therefore the theorem holds by the arbitrariness of the cone index J.

Theorem 6.8.3. Suppose that (L̃P) is deduced from T-fuzzy variables. Then the dual pair (L̃P) and (6.8.5) have optimal T-fuzzy solutions and optimal solutions, respectively, if and only if they have T-fuzzy feasible solutions and feasible solutions, respectively, at the same time.

Proof: Necessity is apparent; sufficiency is proved as follows. (L̃P) can be changed into (LP(J)) and (6.8.5) into (6.8.4) under the given cone index J; meanwhile (LP(J)) and (6.8.4) are mutually dual under the same cone index J. In a similar way to the proof of Theorem 1 in Section 4.2 of Ref. [GZ83], we can prove that (LP(J)) and (6.8.4) have feasible solutions depending on a cone index J if and only if they have optimal solutions depending on that cone index. The theorem then holds by the equivalence of (L̃P) with (LP(J)) and of (6.8.5) with (6.8.4), and the duality of (L̃P) and (6.8.5).

Corollary 6.8.1. If x̃0 is a feasible T-fuzzy solution to (L̃P) and y0 is a feasible solution to (6.8.5), with c x̃0 = y0 b̃, then x̃0 is an optimal T-fuzzy solution to (L̃P) and y0 is an optimal solution to (6.8.5).

Proof: Straightforward.

6.8.4 Numerical Example

Example 6.8.1: Find

max (3x̃1 − x̃2)
s.t. 2x̃1 − x̃2 ≤ 2̃,
x̃1 ≤ 4̃,
x̃1, x̃2 ≥ 0̃,

where ˜2 = (2, 0, 0), where 4˜ = (4, 0, 0), where 0 = (0, 0, 0),

186

6 Fuzzy Linear Programming

and give a column of T -fuzzy data: x ˜1 : 1. (x1 , 0.5, 1.2),

2. (x1 , 0.8, 1), 3. (x1 , 1, 1.4);

x ˜2 : 4. (x2 , 0, 0.4),

5. (x2 , 0.6, 1), 6. (x2 , 1.5, 0.9).

Solution: (i) Number the data 1–6 and group them into three parts by Definition 3.1.4: I: Nos. 1, 4; II: Nos. 2, 5, with j2 = 0, j5 = 1; III: Nos. 3, 6, with j3 = 1, j6 = 0. Here jl = 1 for odd numbers and jl = 0 for even numbers.

(ii) Nonfuzzification. Replace x1, x2 by

[(x1 + 0.85) + (x1 − 0.8) + (x1 + 1.4)]/3 = x1 + 0.483,
[(x2 + 0.2) + (x2 + 1) + (x2 − 1.5)]/3 = x2 − 0.1.

(iii) Obtain the linear programming corresponding to (6.8.2):

max (3x1 − x2 + 1.55)
s.t. 2x1 − x2 + 1.07 ≤ 2,
x1 + 0.483 ≤ 4,
x1, x2 ≥ 0
⇒
max (3x1 − x2 + 1.55)
s.t. 2x1 − x2 ≤ 0.93,
x1 ≤ 3.52,
x1, x2 ≥ 0.

The optimal solution depending on the cone index J is x1 = 3.52, x2 = 6.11, and the optimal value is 6.00. If x1 stands for an expensive resource, then x2 stands for a cheap one; decreasing x1 and increasing x2 appropriately, we obtain the same optimal value as in the crisp case while obviously decreasing the cost.

6.8.5 Conclusion

A linear programming with T-fuzzy variables can always be turned into a parametric programming for solution, called the primal problem of the fuzzy linear programming. Since a close connection exists between the primal problem and the dual one, we can often find an answer to the latter more easily than to the former.
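The nonfuzzification arithmetic and the reduced LP of Example 6.8.1 can be reproduced directly (a sketch under the grouping used above; the exact optimum x1 = 4 − 0.483 ≈ 3.517 with z = 6.00 matches the rounded values in the text):

```python
# T-fuzzy data (left spread, right spread), numbered 1-6 as in the text
x1_data = [(0.5, 1.2), (0.8, 1.0), (1.0, 1.4)]   # Nos. 1, 2, 3
x2_data = [(0.0, 0.4), (0.6, 1.0), (1.5, 0.9)]   # Nos. 4, 5, 6

def offset(data, js):
    # Group I: midpoint of the two spreads; Groups II/III: -left (j=0) or +right (j=1)
    (l1, r1), (l2, r2), (l3, r3) = data
    terms = [(l1 + r1) / 2,
             (r2 if js[0] else -l2),
             (r3 if js[1] else -l3)]
    return sum(terms) / 3

off1 = offset(x1_data, js=(0, 1))   # j2 = 0, j3 = 1  ->  x1 + 0.483
off2 = offset(x2_data, js=(1, 0))   # j5 = 1, j6 = 0  ->  x2 - 0.1

# Reduced LP: max 3x1 - x2 + (3*off1 - off2)
#   s.t. 2x1 - x2 <= 2 - 2*off1 + off2,  x1 <= 4 - off1,  x1, x2 >= 0.
x1 = 4 - off1                          # push x1 to its upper bound
x2 = 2 * x1 - (2 - 2 * off1 + off2)    # the first constraint is then binding
z = 3 * x1 - x2 + (3 * off1 - off2)
```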


6.9 Multi-Objective Linear Programming with T-Fuzzy Variables

6.9.1 Introduction

There are many fuzzy and undetermined phenomena in the real world. If we describe such phenomena with T-fuzzy numbers [Cao90][Dia87], we can obtain more information. Here we extend the model in [Cao96a] to a multi-objective linear programming with T-fuzzy variables and discuss its algorithm; a numerical example tests the effectiveness of the model and method.

6.9.2 Building of the Model

Consider an ordinary multi-objective linear programming:

V-max c^(j) x (1 ≤ j ≤ r)
s.t. Ax ≤ b,   (6.9.1)
x > 0,

where x = (x1, x2, ..., xn)^T is an n-dimensional vector, b = (b1, b2, ..., bm)^T an m-dimensional constant vector, and c^(j) and A denote r × n and m × n matrices, respectively. Motivated by practical problems, we extend (6.9.1) to a linear programming with T-fuzzy variables. Introducing T-fuzzy data into (6.9.1) gives

V-max c^(j) x̃ (1 ≤ j ≤ r)
s.t. Ax̃ ≤ b̃,   (6.9.2)
x̃ ≥ 0.

We call (6.9.2) a multi-objective linear programming model with T-fuzzy variables, where x̃ = (x̃1, x̃2, ..., x̃n)^T is an n-dimensional T-fuzzy vector, b̃ = (b̃1, b̃2, ..., b̃m)^T an m-dimensional T-fuzzy constant vector, x̃l = (xl, ξ′l, ξ″l) a T-fuzzy variable, and b̃i = (bi, b′i, b″i) a T-fuzzy number.

6.9.3 Nonfuzzification of the Model

Theorem 6.9.1. If (6.9.2) is given by T-fuzzy variables, then, for the given cone index J, (6.9.2) can be turned into

V-max c^(j) U(J) (1 ≤ j ≤ r)
s.t. AU(J) ≤ b(J),
U(J) > 0,

(6.9.3)


where

c^(j) U(J) = Σ_{l=1}^n c^(j)_l Ul (1 ≤ j ≤ r);
AU(J) = Σ_{l=1}^n ail Ul (1 ≤ i ≤ m);

U(J) = (U1(J), U2(J), ..., Un(J))^T and Ul(J) = Σ_{i=1}^{3M} Uil(J) / 3M are a vector and a variable with cone index J, respectively; b(J) = (b1(J), b2(J), ..., bm(J))^T and bi(J) are a constant vector and a constant with cone index J. Moreover, (6.9.3) has a satisfactory solution depending on cone index J, which is equivalent to (6.9.2) having a fuzzy satisfactory one.

Proof: Let {x̃il} be a column of T-fuzzy variables tallying with (6.9.2), where x̃il = (xil, ξ′il, ξ″il) (1 ≤ i ≤ m; 1 ≤ l ≤ n). We classify the column vectors by subscripts, and may let l = 1, 2, ..., N correspond to the smaller fluctuating variables, while the other variables correspond to l = N + 1, ..., 3N. Then:

for i = 1, 2, ..., M and each l, Uil = xil + (ξ′il + ξ″il)/2;
for i = M + 1, ..., 2M and each l, Uil = xil − ξ′il if jl = 0, and xil + ξ″il if jl = 1;
for i = 2M + 1, ..., 3M and each l, Uil = xil + ξ″il if jl = 0, and xil − ξ′il if jl = 1.

Then, under the given cone index J, (6.9.2) is turned into (6.9.3). Since (6.9.2) is equivalent to (6.9.3), a parametric optimal solution of (6.9.3) depending on cone index J is equivalent to an optimal T-fuzzy one of (6.9.2).

We summarize the solution of Model (6.9.2) as follows.

1° For the given T-fuzzy variables x̃l, partition the natural number set {1, 2, ..., n} into three parts by subscripts:
I: Uil = xil + (ξ′il + ξ″il)/2, i = 1, 2, ..., M and each l;
II: Uil = xil − ξ′il if jl = 0, xil + ξ″il if jl = 1, i = M + 1, ..., 2M and each l;
III: Uil = xil + ξ″il if jl = 0, xil − ξ′il if jl = 1, i = 2M + 1, ..., 3M and each l.

2° Nonfuzzify x̃l: take Ul = xl + Σ_{i=1}^{3N} ξ*il / 3N, where ξ*il is (ξ′il + ξ″il)/2, or ±ξ″il, or ±ξ′il.

3° Substitute Uil for x̃l in (6.9.2) and we get (6.9.3).

4° Determine a satisfactory (or efficient) solution to problem (6.9.3) with the aid of solutions to ordinary multi-objective linear programming; this gives a fuzzy satisfactory solution to (6.9.2).

There are many methods for finding satisfactory (efficient) solutions to programming (6.9.3). Here we advance two ways to nonfuzzify (6.9.2):

1) Nonfuzzification before a weighted method. Turn (6.9.2) into the linear programming (6.9.3) with cone index J. Give weights to the r objective functions

fj(U) = Σ_{l=1}^n c^(j)_l ( Σ_{i=1}^{3M} Uil(J) / 3M ),

so that f(U) = γ1 f1(U) + γ2 f2(U) + ... + γr fr(U), where the weight factors γj (j = 1, ..., r) satisfy 0 ≤ γj ≤ 1 and γ1 + γ2 + ... + γr = 1. Then (6.9.3) turns into a single-objective linear programming:

max f(U(J))
s.t. AU(J) ≤ b(J),
U(J) ≥ 0.

(6.9.4)

2) Weighting before nonfuzzification. Consider (6.9.2) and weight its r fuzzy objective functions:

f(x̃) = γ1 f1(x̃) + γ2 f2(x̃) + ... + γr fr(x̃).

Programming (6.9.2) is changed into

max f(x̃)
s.t. Ax̃ ≤ b̃,

(6.9.5)

x̃ ≥ 0. Now nonfuzzify (6.9.5) by the method mentioned above, and we obtain (6.9.4).


6.9.4 Finding a Solution

For the single-objective linear programmings (6.9.4) and (6.9.5) we have many algorithms, such as genetic and simulated-annealing algorithms (procedures omitted), by which we can finally get a satisfactory solution of practical value. We are still searching for a better algorithm, since a single algorithm code neither exhibits the constrained global optimum nor ensures convergence to the optimal solution. Assuming that computer programs exist for solving (6.9.2) or (6.9.3), we consider the following example.

Example 6.9.1: Find

max (z̃1, z̃2),
z̃1 = 5x̃1 + x̃2,
z̃2 = x̃1 + x̃2
s.t. x̃1 + x̃2 ≤ 6̃,
0̃ ≤ x̃1 ≤ 5̃, x̃2 ≥ 0,

where 5̃ = (5, 0, 0), 6̃ = (6, 0, 0). We take T-fuzzy variables as follows:

x̃1: 1. (x1, 1, 0), 2. (x1, 0, 1), 3. (x1, 2, 1);
x̃2: 4. (x2, 0, 1), 5. (x2, 1, 0), 6. (x2, 2, 2).

Now divide the data into three groups: Nos. 1, 4; Nos. 2, 5; and Nos. 3, 6. For data Nos. 1, 4 we get a value by Formula I. For the rest we use the formulas corresponding to jl = 1 and jl = 0 in Formulas II and III when odd and even numbers appear, respectively. So we can nonfuzzify x̃1, x̃2:

x̃1: [(x1 + 0.5) + (x1 − 0) + (x1 + 1)]/3 = x1 + 0.5,
x̃2: [(x2 + 0.5) + (x2 − 0) + (x2 − 2)]/3 = x2 − 0.5,

and f(x̃) = γ1 z̃1 + γ2 z̃2 gives f(U(J)) = 6x1 + 5x2 when γ1 = γ2 = 1. Thus a linear programming corresponding to (6.9.5) appears as follows:

max z = 6x1 + 5x2
s.t. x1 + x2 ≤ 6,
0 ≤ x1 ≤ 5, x2 ≥ 0.

Its corresponding superior solution is x1 = 5, x2 = 1, with z1 = 26, z2 = 9.
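The final crisp LP of Example 6.9.1 is small enough to check by enumerating its vertices (a sketch; we verify the reported superior solution and z1 = 5·5 + 1 = 26):

```python
def feasible(x1, x2, eps=1e-9):
    return -eps <= x1 <= 5 + eps and x2 >= -eps and x1 + x2 <= 6 + eps

# Vertices of { x1 + x2 <= 6, 0 <= x1 <= 5, x2 >= 0 }
verts = [(0, 0), (5, 0), (5, 1), (0, 6)]
assert all(feasible(*v) for v in verts)

best = max(verts, key=lambda v: 6*v[0] + 5*v[1])   # weighted objective 6x1 + 5x2
x1, x2 = best
z1 = 5*x1 + x2        # value of the first objective at the superior solution
```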


6.9.5 Conclusion

We can therefore turn (6.9.2) into an ordinary multi-objective parametric linear programming (6.9.3) depending on cone index J. To (6.9.3) we then apply the methods of multi-objective programming, such as those that change a multi-objective optimization problem into a single one or into a series of single ones.

7 Fuzzy Geometric Programming

We often meet the following problem in economic management. Suppose we manufacture a case for transporting cotton. The case has volume V m³, with a bottom but without a cover; its bottom and two of its sides are made from C m² of a special flexible material with negligible cost. The material for the other two sides costs more than A yuan/m² (yuan means RMB), and transportation of one case costs about k yuan. What is the least cost to ship one case of cotton? Such a problem can be posed as a geometric programming. Since classical geometric programming cannot model the problem well or obtain a practical solution to it, the author initially proposed a fuzzy geometric programming theory for such problems at the IFSA congress (1987) [Cao87a]. This chapter first reviews progress in fuzzy geometric programming and puts forward the Lagrange and antinomy problems in it. Besides, it studies geometric programming with fuzzy coefficients and fuzzy variables. Finally, it discusses its expansion.

7.1 Introduction of Fuzzy Geometric Programming

7.1.1 Fuzzy Posynomial Geometric Programming

Definition 7.1.1. Call

(P̃) m̃in g0(x)
s.t. gi(x) ≲ 1 (1 ≤ i ≤ p),   (7.1.1)
x > 0

the fuzzy posynomial geometric programming, where x = (x1, x2, ..., xm)^T is an m-dimensional variable vector ('T' represents a transpose symbol), and all

gi(x) = Σ_{k=1}^{Ji} vik(x) = Σ_{k=1}^{Ji} cik Π_{l=1}^m x_l^{γikl} (0 ≤ i ≤ p)

are fuzzy posynomials of x; m̃in g0(x) ⟵ g0(x) ≲ z0, cik ≥ 0 is a constant, and γikl an arbitrary real number;

B.-Y. Cao: Optimal Models & Meth. with Fuzzy Quantities, STUDFUZZ 248, pp. 193–253. © Springer-Verlag Berlin Heidelberg 2010, springerlink.com


that is, the objective function g0(x) might have to be written as a minimizing goal in order to consider z0 as an upper bound; z0 is an expectation value of the objective function g0(x); "≲" denotes the fuzzified version of "≤" with the linguistic interpretation "essentially smaller than or equal to"; and di ≥ 0 denotes a flexible index of gi(x) (0 ≤ i ≤ p). The membership functions of the fuzzy objective g0(x) and the fuzzy constraints gi(x) are (1.5.6) and (1.5.7), respectively. (7.1.1) can be changed into

g0(x) ≲ z0,
gi(x) ≲ 1 (1 ≤ i ≤ p),

(7.1.2)

x > 0. Especially, we have % g0 (x) min s.t. gi (x) = 1 (1 i p),

(7.1.3)

x > 0,

and

% g0 (x) min s.t. gi (x) 1 (1 i p) x>0

(7.1.4)

% g0 (x) min s.t. gi (x) = 1 + ti (1 i p), x > 0.

(7.1.5)
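Since the programs above all operate on posynomials of the form introduced in Definition 7.1.1, it may help to see one evaluated numerically. A minimal sketch (function and variable names are illustrative, not from the text):

```python
import numpy as np

def posynomial(c, gamma):
    """Return g(x) = sum_k c[k] * prod_l x[l]**gamma[k][l],
    the (crisp) posynomial form used throughout Definition 7.1.1."""
    c = np.asarray(c, dtype=float)
    gamma = np.asarray(gamma, dtype=float)
    def g(x):
        x = np.asarray(x, dtype=float)
        return float(np.sum(c * np.prod(x ** gamma, axis=1)))
    return g

# g(x1, x2) = 2*x1*x2**2 + 0.5/x1
g = posynomial([2.0, 0.5], [[1, 2], [-1, 0]])
print(g([1.0, 1.0]))  # -> 2.5
```

Each row of `gamma` holds the exponents of one term, so the evaluation is a single broadcasted product followed by a sum.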

In fact, we change the equations in (7.1.5) into inequations [Wei87] and set $t_i=-d_i\log\alpha\ (1\le i\le p)$; because $\alpha\in[0,1]$ and $d_i>0$, $t_i\ge 0$ is obvious. Then $g_i(x)=1+t_i$ can be changed into $g_i(x)\le 1-d_i\log\alpha$, which is exactly the fuzzy constraint $g_i(x)\lesssim 1$ with (1.5.6) and (1.5.7) converted into a crisp expression. Therefore (7.1.3) can be changed into (7.1.1), that is, into (7.1.2).

Definition 7.1.2. Let $\widetilde{A}_0$ be a fuzzy set defined on $X\subset R^m$ and $\widetilde{B}_0$ a fuzzy-valued set of $g_0(x)$. If $\mu_{\widetilde{A}_0}(x)=\widetilde{B}_0(g_0(x))$, then $g_0(x)$ is a fuzzy objective function with respect to $\widetilde{A}_0$.

Definition 7.1.3. Let $\widetilde{F}_i\ (1\le i\le p)$ be fuzzy sets defined on $X\subset R^m$ and $\widetilde{B}_i$ a fuzzy-valued set of $g_i^*(x)$. If $\mu_{\widetilde{F}_i}(x)=\widetilde{B}_i(g_i^*(x))$ (where $g_i^*(x)=g_i(x)-1$), then the $g_i^*(x)$ are fuzzy constraint functions with respect to $\widetilde{F}_i$.

Definition 7.1.4. Let $\widetilde{F}$ be a fuzzy set defined on $X\subset R^m$ and $g_i^*(x)$ fuzzy constraint functions with respect to $\widetilde{F}_i\ (1\le i\le p)$. If

\[
\mu_{\widetilde{F}}(x)=\min_{1\le i\le p}\mu_{\widetilde{F}_i}(x),
\]

then $\widetilde{F}$ is a fuzzy feasible solution set with respect to $\widetilde{F}_i$.


Definition 7.1.5. Let $\widetilde{H}$ be a fuzzy set defined on $X\subset R^m$ and $\widetilde{F}$ a fuzzy feasible solution set with respect to $\widetilde{F}_i$. If there exists a fuzzy optimal point set $\widetilde{A}_0^*$ of $g_0(x)$ such that

\[
\widetilde{H}(x)=\mu_{\widetilde{A}_0^*}(x)\wedge\mu_{\widetilde{F}}(x)=\min\{\mu_{\widetilde{A}_0^*}(x),\min_{1\le i\le p}\mu_{\widetilde{F}_i}(x)\}
=\min\{\widetilde{B}_0(g_0(x)),\min_{1\le i\le p}\widetilde{B}_i(g_i^*(x))\},
\tag{7.1.6}
\]

then $\max_{x>0}\widetilde{H}(x)$ is said to be a fuzzy posynomial geometric programming with respect to $\widetilde{H}$ of $g_0(x)$.

Definition 7.1.6. If there is a point $x^*$ such that $\widetilde{H}(x^*)=\max_{x>0}\widetilde{H}(x)$, then $x^*$ is said to be an optimal solution to $\widetilde{H}(x^*)$, and a fuzzy set $\widetilde{H}$ satisfying (7.1.6) is a fuzzy decision in (7.1.2).

Theorem 7.1.1. The maximization of $\widetilde{H}(x)$ is equivalent to the programming

\[
\max\ \alpha\quad \text{s.t.}\ g_0(x)\le z_0-d_0\log\alpha,\ g_i(x)\le 1-d_i\log\alpha\ (1\le i\le p),\ \alpha\in[0,1],\ x>0,
\tag{7.1.7}
\]

where $d_i>0\ (0\le i\le p)$ denote constants.

Proof: It is known by Definition 7.1.6 that an $x^*$ satisfying (7.1.6) is called an optimal solution to (7.1.2). Again, $x^*$ bears the same level for constraint and optimization. In particular, $x^*$ is a solution to the fuzzy posynomial geometric programming (7.1.1) at $\widetilde{H}(x^*)=1$. Hence, when $g_0(x)=z_0-t_0$ and $g_i(x)=1+t_i$, there exists

\[
\widetilde{H}(x)=\mu_{\widetilde{A}_0^*}(x)\wedge\min_{1\le i\le p}\mu_{\widetilde{F}_i}(x);
\]

by Formulas (1.5.6) and (1.5.7),

\[
\widetilde{H}(x)=e^{-\frac{1}{d_0}\left(\sum_{k=1}^{J_0}c_{0k}\prod_{l=1}^{m}x_l^{\gamma_{0kl}}-z_0\right)}\wedge
\min_{1\le i\le p}e^{-\frac{1}{d_i}\left(\sum_{k=1}^{J_i}c_{ik}\prod_{l=1}^{m}x_l^{\gamma_{ikl}}-1\right)}.
\]

Given $\alpha=\widetilde{H}(x)$. For all $\alpha\in[0,1]$, $\widetilde{H}(x)\ge\alpha$ is equivalent to

\[
e^{-\frac{1}{d_0}\left(\sum_{k=1}^{J_0}c_{0k}\prod_{l=1}^{m}x_l^{\gamma_{0kl}}-z_0\right)}\ge\alpha,\qquad
e^{-\frac{1}{d_i}\left(\sum_{k=1}^{J_i}c_{ik}\prod_{l=1}^{m}x_l^{\gamma_{ikl}}-1\right)}\ge\alpha\ (1\le i\le p),
\]

i.e.,

\[
g_0(x)=\sum_{k=1}^{J_0}c_{0k}\prod_{l=1}^{m}x_l^{\gamma_{0kl}}\le z_0-d_0\log\alpha,\qquad
g_i(x)=\sum_{k=1}^{J_i}c_{ik}\prod_{l=1}^{m}x_l^{\gamma_{ikl}}\le 1-d_i\log\alpha\ (1\le i\le p).
\]

Therefore, the maximization of $\widetilde{H}(x)$ is equivalent to (7.1.7) for arbitrary $\alpha\in[0,1]$, and the theorem holds.

From the above we know that $d_i>0\ (0\le i\le p)$ are admissible violations of the constraints. They are chosen by decision makers according to the actual circumstances. When $z_0=z_0^{(1)}-z_0^{(2)}$, the values of $z_0^{(1)}$ and $z_0^{(2)}$ are initially determined. Therefore, we consider two crisp posynomial geometric programmings,

\[
\min\ g_0(x)\quad \text{s.t.}\ g_i(x)\le 1\ (1\le i\le p),\ x>0,
\tag{7.1.8}
\]

and

\[
\min\ g_0(x)\quad \text{s.t.}\ g_i(x)\le 1-d_i\log\alpha\ (1\le i\le p),\ \alpha\in[0,1],\ x>0,
\tag{7.1.9}
\]

to which a solution is given, respectively; the optimal values $z_0^{(1)}$ and $z_0^{(2)}$ in (7.1.8) and (7.1.9) are what is obtained. From here it is known that solving the fuzzy posynomial geometric programming (7.1.1) involves solving three crisp posynomial geometric programmings, (7.1.8), (7.1.9) and (7.1.7), respectively.

An equivalent form of the fuzzy posynomial geometric programming is considered below; its properties are therefore first introduced as follows.

Theorem 7.1.2 [Cao93a]. If $G_i(z)\ (1\le i\le p)$ denotes a convex function for each $i$, then the fuzzy geometric programming

\[
G_0(z)\lesssim G_0,\quad G_i(z)\lesssim 1\ (1\le i\le p)
\tag{7.1.10}
\]

is a fuzzy convex programming, and a strictly local minimal solution to (7.1.10) is its global minimal solution, where $G_0$ is an expectation value of the objective function $G_0(z)$.

Proof: Change (7.1.10) into a crisp programming by (1.5.6) and (1.5.7) [Zim00]; it is easy to prove that the theorem holds in a similar way to Theorem 1.2.1 in Ref. [WY82].

Theorem 7.1.3. Any fuzzy posynomial geometric programming $(\widetilde{P})$ can be turned into a fuzzy convex programming.

Proof: Substitute $x_l=e^{z_l}\ (1\le l\le m)$ into $g_i(x)$; then

\[
g_i(x)=\sum_{k=1}^{J_i}c_{ik}\prod_{l=1}^{m}x_l^{\gamma_{ikl}}
=\sum_{k=1}^{J_i}c_{ik}e^{\sum_{l=1}^{m}\gamma_{ikl}z_l}=G_i(z)\quad (0\le i\le p).
\]

Thereby $(\widetilde{P})$ is turned into (7.1.10). From Theorem 7.1.2 we know the theorem holds.
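The substitution of Theorem 7.1.3 can be checked numerically: under $x_l=e^{z_l}$, each $G_i(z)$ is a sum of exponentials of affine functions and hence convex. A small sketch (names are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def G(c, gamma, z):
    """G(z) = sum_k c_k * exp(gamma_k . z): the image of a posynomial
    under the substitution x_l = exp(z_l) of Theorem 7.1.3."""
    return float(np.sum(np.asarray(c) * np.exp(np.asarray(gamma) @ np.asarray(z))))

# g(x) = 2*x1*x2**2 + 0.5/x1  ->  G(z) = 2*exp(z1 + 2*z2) + 0.5*exp(-z1)
c, gamma = [2.0, 0.5], [[1.0, 2.0], [-1.0, 0.0]]

# midpoint-convexity spot check: G((u+v)/2) <= (G(u) + G(v))/2
ok = all(
    G(c, gamma, (u + v) / 2) <= (G(c, gamma, u) + G(c, gamma, v)) / 2 + 1e-12
    for u, v in ((rng.normal(size=2), rng.normal(size=2)) for _ in range(1000))
)
print(ok)  # -> True
```

A random spot check is of course not a proof; convexity follows analytically because each term $c_{ik}e^{\gamma_{ik}\cdot z}$ is convex in $z$.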


Theorem 7.1.4. The programming $(\widetilde{P})$ is equivalent to

\[
\widetilde{\min}\ \bar g_0(x)\quad \text{s.t.}\ \bar g_i(x)\lesssim 1\ (1\le i\le p),\ x>0,
\]

where $\bar g_i(x,x^{k-1})=\prod_{k=1}^{J_i}\Big(\dfrac{c_{ik}}{\varepsilon_{ik}}\Big)^{\varepsilon_{ik}}\prod_{l=1}^{m}x_l^{\sum_{k=1}^{J_i}\gamma_{ikl}\varepsilon_{ik}}\ (0\le i\le p)$ is a monomial posynomial.

Proof: For all $x^{k-1}>0$, by using a fuzzy geometric inequality in Ref. [Cao93a], we have $\bar g_i(x,x^{k-1})\le g_i(x)$, where

\[
\bar g_i(x,x^{k-1})=\prod_{k=1}^{J_i}\Big(\frac{c_{ik}\prod_{l=1}^{m}x_l^{\gamma_{ikl}}}{\varepsilon_{ik}}\Big)^{\varepsilon_{ik}}=\bar c_i\prod_{l=1}^{m}x_l^{\bar\gamma_{il}},\qquad
g_i(x)=\sum_{k=1}^{J_i}c_{ik}\prod_{l=1}^{m}x_l^{\gamma_{ikl}};
\]

here $\bar c_i=\prod_{k=1}^{J_i}\Big(\dfrac{c_{ik}}{\varepsilon_{ik}}\Big)^{\varepsilon_{ik}}$, $\varepsilon_{ik}=\dfrac{v_{ik}(x^{k-1})}{g_i(x^{k-1})}\ (1\le k\le J_i,\ 1\le i\le p)$, and $\bar\gamma_{il}=\sum_{k=1}^{J_i}\gamma_{ikl}\varepsilon_{ik}$. It is easy to prove that the theorem holds.
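The condensation of Theorem 7.1.4 can be sketched directly: with weights $\varepsilon_{ik}=v_{ik}(x^{k-1})/g_i(x^{k-1})$, the weighted arithmetic–geometric inequality makes the condensed monomial a lower bound on the posynomial that touches it at the anchor point. A hedged sketch (the helper names are mine, not the book's):

```python
import numpy as np

def condense(c, gamma, x0):
    """Condense the posynomial sum_k c_k prod_l x_l^gamma_kl at the anchor x0
    into the monomial prod_k (v_k(x)/eps_k)^eps_k with eps_k = v_k(x0)/g(x0)
    (Theorem 7.1.4).  Returns the monomial's coefficient and exponent vector."""
    c, gamma, x0 = (np.asarray(a, dtype=float) for a in (c, gamma, x0))
    v0 = c * np.prod(x0 ** gamma, axis=1)     # terms v_k(x0)
    eps = v0 / v0.sum()                       # weights, sum to 1
    coef = float(np.prod((c / eps) ** eps))   # prod_k (c_k / eps_k)^eps_k
    expo = eps @ gamma                        # bar-gamma_l = sum_k eps_k gamma_kl
    return coef, expo

c, gamma = [2.0, 0.5], [[1.0, 2.0], [-1.0, 0.0]]
g = lambda x: float(np.sum(np.asarray(c) * np.prod(np.asarray(x, float) ** np.asarray(gamma), axis=1)))
coef, expo = condense(c, gamma, [1.0, 1.0])
mono = lambda x: coef * float(np.prod(np.asarray(x, float) ** expo))

print(abs(mono([1.0, 1.0]) - g([1.0, 1.0])) < 1e-9)                  # touches g at x0 -> True
print(all(mono(x) <= g(x) + 1e-9 for x in ([0.3, 1.7], [1.5, 1.5])))  # lower bound -> True
```

This is the standard mechanism by which each fuzzy posynomial constraint is replaced by a monomial one in the theorem.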

Theorem 7.1.5. Let $\widetilde{A}_i$ be a continuous and strictly monotone fuzzy-valued function. The dual programming of $(\widetilde{P})$ is [Cao89a]

\[
(\widetilde{D})\quad \widetilde{\max}\ \tilde d(w)=\Big(\frac{\tilde a_{00}}{w_{00}}\Big)^{w_{00}}\prod_{i=0}^{p}\prod_{k=1}^{J_i}\Big(\frac{\tilde c_{ik}}{a_iw_{ik}}\Big)^{w_{ik}}\prod_{i=1}^{p}w_{i0}^{w_{i0}}
\quad \text{s.t.}\ w_{00}=1,\ \Gamma^Tw=0,\ w\ge 0,
\]

where

\[
\Gamma=\begin{pmatrix}
\gamma_{011} & \cdots & \gamma_{01l} & \cdots & \gamma_{01m}\\
\vdots & & \vdots & & \vdots\\
\gamma_{0J_01} & \cdots & \gamma_{0J_0l} & \cdots & \gamma_{0J_0m}\\
\cdots & & \cdots & & \cdots\\
\gamma_{p11} & \cdots & \gamma_{p1l} & \cdots & \gamma_{p1m}\\
\cdots & & \cdots & & \cdots\\
\gamma_{pJ_p1} & \cdots & \gamma_{pJ_pl} & \cdots & \gamma_{pJ_pm}
\end{pmatrix}
\tag{7.1.11}
\]

denotes the structure of the exponents of each term in the variables $x_l$ corresponding to the objective function $g_0(x)$ and each constraint function $g_i(x)\ (1\le i\le p)$, called the exponent matrix. It contains $J=J_0+J_1+\cdots+J_p$ rows and $m$ columns, $J$ being the total number of terms in the $g_i(x)\ (0\le i\le p)$; $w=(w_{01},\cdots,w_{0J_0},\cdots,w_{p1},\cdots,w_{pJ_p})^T$ is a $J$-dimensional variable vector; $w_{i0}=\sum_{k=1}^{J_i}w_{ik}=w_{i1}+w_{i2}+\cdots+w_{iJ_i}\ (0\le i\le p)$ is the sum of the dual variables corresponding to the objective function $g_0(x)\ (i=0)$ or the constraint function $g_i(x)\ (i\ne 0)$; $\tilde a_{ik}=\dfrac{\tilde c_{ik}}{a_i}\ (0\le k\le J_i,\ 0\le i\le p)$ is freely fixed in a closed interval $[a_{ik}^1,a_{ik}^2]$, and its degree of accomplishment is determined by a formula like (1.5.3). In order to ensure the continuity of $\tilde d(w)$, we stipulate $(w_{ik})^{w_{ik}}|_{w_{ik}=0}=1$.

Proof: Because $(\widetilde{P})$ can be turned into (7.1.1), following Ref. [Cao93a] it can be proved that the dual form of (7.1.1) is $(\widetilde{D})$. Now the theorem is proved.

From Theorem 7.1.4 it is known that any fuzzy posynomial geometric programming can be turned into a monomial fuzzy posynomial geometric programming. Thereby, only the monomial fuzzy posynomial geometric programming is considered:

\[
\widetilde{\min}\ g_0(x)=c_0\prod_{l=1}^{m}x_l^{\gamma_{0l}}\quad \text{s.t.}\ g_i(x)=c_i\prod_{l=1}^{m}x_l^{\gamma_{il}}\lesssim b_i\ (1\le i\le p),\ x>0;
\tag{7.1.12}
\]

its dual form means

\[
\max\ \tilde c_0'^{\,w_0}\prod_{i=1}^{p}\tilde c_i'^{\,w_i}\quad \text{s.t.}\ w_0=1,\ \sum_{i=0}^{p}\gamma_{il}w_i=0\ (1\le l\le m),\ w\ge 0,
\tag{7.1.13}
\]

where $\tilde c_0'=\dfrac{\tilde c_0}{z_0}$ and $\tilde c_i'=\dfrac{\tilde c_i}{b_i}$.

Theorem 7.1.6. Given a monomial fuzzy posynomial geometric programming like (7.1.12), it can be turned into the fuzzy linear programming

\[
\widetilde{\min}\ \sum_{l=1}^{m}\gamma_{0l}z_l\quad \text{s.t.}\ \sum_{l=1}^{m}\gamma_{il}z_l+\ln c_i\lesssim\ln b_i\ (1\le i\le p),\ x>0,
\tag{7.1.14}
\]

with the fuzzy optimal solution to (7.1.14) being that of (7.1.12).

Proof: Let $z_l=\ln x_l\ (1\le l\le m)$. Then $g_i(x)=c_ie^{\sum_{l=1}^{m}\gamma_{il}z_l}\ (0\le i\le p)$, such that (7.1.12) can be turned into

\[
\widetilde{\min}\ c_0e^{\sum_{l=1}^{m}\gamma_{0l}z_l}\quad \text{s.t.}\ c_ie^{\sum_{l=1}^{m}\gamma_{il}z_l}\lesssim b_i\ (1\le i\le p),\ x>0,
\]

equivalent to (7.1.14). Hence the first conclusion of the theorem holds. Again, (7.1.12) is a fuzzy convex programming, such that the second conclusion of the theorem holds from Theorem 7.1.2.

7.1.2 Extension in Fuzzy Geometric Programming

Fuzzy geometric programming can be extended to two cases, the general fuzzy posynomial geometric programming and the general reversed one.

a) General fuzzy posynomial geometric programming

Definition 7.1.7. Replace $\widetilde{\min}$ in $(\widetilde{P})$ by $\widetilde{\inf}$, writing

\[
(\widetilde{P}_1)\quad \widetilde{\inf}\ g_0(x)\quad \text{s.t.}\ g_i(x)\lesssim 1\ (1\le i\le p),\ x>0,
\]

and calling $(\widetilde{P}_1)$ a general fuzzy posynomial geometric programming.

Change $\widetilde{\max}$ in $(\widetilde{D})$ into $\widetilde{\sup}$, that is,

\[
(\widetilde{D}_1)\quad \widetilde{\sup}\ \tilde d(w)=\Big(\frac{\tilde a_{00}}{w_{00}}\Big)^{w_{00}}\prod_{i=0}^{p}\prod_{k=1}^{J_i}\Big(\frac{\tilde c_{ik}}{w_{ik}}\Big)^{w_{ik}}\prod_{i=1}^{p}w_{i0}^{w_{i0}}
\quad \text{s.t.}\ w_{00}=1,\ \Gamma^Tw=0,\ w\ge 0,
\]

calling $(\widetilde{D}_1)$ the dual programming, where $\Gamma$ is an exponent matrix as in (7.1.11). Here $\widetilde{\inf}$ denotes a fuzzy infimum and $\widetilde{\sup}$ a fuzzy supremum. When $(\widetilde{P}_1)$ and $(\widetilde{D}_1)$ are fuzzy consistent, we denote by $M_{\widetilde{P}_1}$ and $M_{\widetilde{D}_1}$ a fuzzy constraint infimum of $(\widetilde{P}_1)$ and a fuzzy constraint supremum of $(\widetilde{D}_1)$, respectively. Obviously, $M_{\widetilde{P}_1}$ is a finite fuzzy number.

b) General fuzzy reversed posynomial geometric programming

Definition 7.1.8. Calling the general form

7 g0 (x) % (or inf) min s.t. gi (x) 1, (1 i p ) gi (x) 1, (p + 1 i p) x>0

(7.1.15)

200

7 Fuzzy Geometric Programming

a fuzzy reversed posynomial geometric programming [Cao02a][Cao07a]. Here Ji vik (x) (0 i p) are posynomial functions of x, where all gi (x) = k=1

vik (x) =

⎧ m ) ⎪ ⎪ xγl ikl , (1 k Ji ; 0 i p ) ⎨ cik l=1

m ) ⎪ ikl ⎪ x−γ , (1 k Ji ; p + 1 i p) ⎩ cik l l=1

are monomial of x. The membership functions of objective g0 (x) and constraint functions gi (x)(1 i p) are deﬁned by (1.5.6) and (1.5.7), respectively. 72 ) is Dual programming in (P w00 p J wik p Ji ) )i ) ) a ˜0k cik ˜ ˜ (D2 ) max % (or sup) d(w) = w00 ˜ik wik i=0 k=1 a i=p +1 k=1 −wik p p ) wi0 ) cik a ˜ik −wi0 wi0 wi0 wik i=1 i=p +1 J0 s.t. w00 = w0k = 1, k=1

Γ T w = 0, w 0,

where Γ still represents a fuzzy exponent matrix, i.e., ⎞ ⎛ γ011 · · · γ01l · · · γ01m ⎟ ⎜ ··· ··· ··· ⎟ ⎜ ⎟ ⎜ γ0J0 l · · · γ0J0 m γ0J0 1 · · · ⎟ ⎜ ⎟ ⎜ ··· ··· ··· ⎟ ⎜ −1 −1 −1 . Γ =⎜ γp Jp 1 · · · γp Jp l · · · γp Jp m ⎟ ⎟ ⎜ ⎟ ⎜ −1 −1 −1 ⎜ −γp +1Jp +1 1 · · · −γp +1Jp +1 l · · · −γp +1Jp +1 m ⎟ ⎟ ⎜ ⎠ ⎝ ··· ··· ··· −γpJp 1 · · · −γpJp l · · · −γpJp m Here w = (w00 , w01 , . . . , w0J0 , wp1 , . . . , wpJp )T is a J −dimensional variable vector (J = 1 + J0 + · · · + Jp ), and wi0 = wi1 + wi2 + · · · + wiJi ; −wik and −wi0 denote a reversed direction inequality gi (x)1 corresponding to cik −wik −wi0 factors ( ) and wi0 in the upper-right-corner exponent; a ˜0k is a wik fuzzy number. c) Other case When the above objective and constraint functions in geometric programming contain a fuzzy relative operators and fuzzy coeﬃcients, we call them a geometric programmings with fuzzy relative and fuzzy coeﬃcients. The researches concerned can be seen in Section 7.8, 7.9, 8.5 and 8.6.
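Assembling the exponent matrix $\Gamma$ of (7.1.11), with the rows of reversed constraints negated as in $(\widetilde{D}_2)$, is mechanical. A sketch (the function name is mine, not the book's):

```python
import numpy as np

def exponent_matrix(gammas, reversed_from=None):
    """Stack the exponent rows of g_0, ..., g_p into the J x m matrix of
    (7.1.11).  Rows belonging to constraints with index >= reversed_from
    (the reversed inequalities g_i >~ 1) enter with negated sign."""
    rows = []
    for i, g in enumerate(gammas):            # g has shape (J_i, m)
        sign = -1.0 if reversed_from is not None and i >= reversed_from else 1.0
        rows.append(sign * np.asarray(g, dtype=float))
    return np.vstack(rows)

# g0 with two terms, one ordinary constraint g1, one reversed constraint g2
Gamma = exponent_matrix(
    [[[1, 2], [-1, 0]],   # g0
     [[2, 1]],            # g1 (ordinary, <~ 1)
     [[1, 3]]],           # g2 (reversed, >~ 1)
    reversed_from=2)
print(Gamma.shape)        # -> (4, 2); the last row enters negated as [-1, -3]
```

The dual feasibility condition $\Gamma^T w = 0$ can then be checked or imposed directly on this array.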

7.2 Lagrange Problem in Fuzzy Geometric Programming

7.2.1 Introduction

The author advanced fuzzy reversed posynomial geometric programming based on fuzzy posynomial geometric programming [Cao87a][Cao93a] and Zadeh's fuzzy set theory [Zad65]; he gives its Lagrange problem and a direct algorithm, which will be widely applied in optimization and classification.

7.2.2 Fuzzy Reversed Posynomial Geometric Programming Model

We try to expand the reversed posynomial geometric programming [WY82] into a fuzzy reversed posynomial geometric programming model.

Definition 7.2.1. Let

\[
(P)\quad \widetilde{\min}\ \tilde g_0(x)\quad \text{s.t.}\ \tilde g_i(x)\lesssim 1\ (1\le i\le p'),\ \tilde g_i(x)\gtrsim 1\ (p'+1\le i\le p),\ x>0.
\tag{7.2.1}
\]

Then (P) is called a fuzzy reversed posynomial geometric programming, where $x=(x_1,x_2,\cdots,x_m)^T$ is an $m$-dimensional variable vector, all $\tilde g_i(x)=\sum_{k=1}^{J_i}\tilde v_{ik}(x)=\sum_{k=1}^{J_i}\tilde c_{ik}\prod_{l=1}^{m}x_l^{\tilde\gamma_{ikl}}\ (0\le i\le p)$ are fuzzy posynomial functions of $x$, and here

\[
\tilde v_{ik}(x)=\begin{cases}
\tilde c_{ik}\prod_{l=1}^{m}x_l^{\tilde\gamma_{ikl}}, & (1\le k\le J_i;\ 0\le i\le p'),\\[4pt]
\tilde c_{ik}\prod_{l=1}^{m}x_l^{-\tilde\gamma_{ikl}}, & (1\le k\le J_i;\ p'+1\le i\le p)
\end{cases}
\]

are fuzzy monomials of $x$. For each item $v_{ik}(x)\ (1\le k\le J_i;\ p'+1\le i\le p)$ in a reversed inequality $\tilde g_i(x)\gtrsim 1$, $x_l$ carries the exponent $-\tilde\gamma_{ikl}$ instead of $\tilde\gamma_{ikl}$.

The fuzzy coefficients $\tilde c_{ik}\ge 0$ and exponents $\tilde\gamma_{ikl}$ are all freely fixed in the closed intervals $[c_{ik}^-,c_{ik}^+]$ and $[\gamma_{ikl}^-,\gamma_{ikl}^+]$, respectively. When $\tilde a$ is taken as $\tilde c_{ik}$ or $\tilde\gamma_{ikl}$, respectively, and $a,a^-,a^+,r$ are all real numbers, the degree of accomplishment of $\tilde a$ is determined [Cao93a] by

\[
\mu_{\tilde a}(a)=\begin{cases}
0, & \text{if } a<a^-,\\[2pt]
\Big(\dfrac{a-a^-}{a^+-a^-}\Big)^{r}, & \text{if } a^-\le a\le a^+,\\[6pt]
1, & \text{if } a>a^+.
\end{cases}
\tag{7.2.2}
\]

Under (7.2.2), change $\tilde g_i(x)$ to $\bar g_i(x)$ [Cao02a]. Then the membership functions of the objective $\bar g_0(x)$ and the constraints $\bar g_i(x)\ (1\le i\le p)$ are defined by

\[
\mu_{\widetilde{A}_i}(x)=\widetilde{B}_i(\bar g_i(x))=\begin{cases}
1, & \text{if } \bar g_i(x)\le b_i,\\[2pt]
1-\dfrac{t_i}{d_i}, & \text{if } \bar g_i(x)=b_i+t_i\ (0\le t_i\le d_i),\\[6pt]
0, & \text{if } \bar g_i(x)\ge b_i+d_i\quad (0\le i\le p'),
\end{cases}
\tag{7.2.3}
\]

where

\[
b_i=\begin{cases}z_0, & i=0,\\ 1, & 1\le i\le p',\end{cases}
\]

$z_0$ is an aspiration level of the objective function $\tilde g_0(x)$, and

\[
\mu_{\widetilde{A}_i}(x)=\widetilde{B}_i(\bar g_i(x))=\begin{cases}
1, & \text{if } \bar g_i(x)\ge 1,\\[2pt]
1-\dfrac{t_i}{d_i}, & \text{if } \bar g_i(x)=1-t_i\ (0\le t_i\le d_i),\\[6pt]
0, & \text{if } \bar g_i(x)\le 1-d_i\quad (p'+1\le i\le p);
\end{cases}
\tag{7.2.4}
\]

here $d_i\ge 0\ (0\le i\le p)$ are flexible indexes of the $i$-th fuzzy function $\bar g_i(x)$, and

\[
\bar g_i(x)=\sum_{k=1}^{J_i}\tilde c_{ik}^{-1}(\beta)\prod_{l=1}^{m}x_l^{\tilde\gamma_{ikl}^{-1}(\beta)},\quad \beta\in[0,1]\ (0\le i\le p).
\]
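The piecewise-linear memberships (7.2.3) and (7.2.4) are easy to state in code. A minimal sketch of both directions (the function names are mine):

```python
def mu_le(g, b, d):
    """Membership of the fuzzified 'g <~ b' with tolerance d, after (7.2.3):
    1 at or below b, linear on [b, b + d], 0 at or above b + d."""
    return 1.0 if g <= b else 0.0 if g >= b + d else 1.0 - (g - b) / d

def mu_ge(g, b, d):
    """Membership of the fuzzified 'g >~ b' with tolerance d, after (7.2.4):
    1 at or above b, linear on [b - d, b], 0 at or below b - d."""
    return 1.0 if g >= b else 0.0 if g <= b - d else 1.0 - (b - g) / d

print(mu_le(1.25, 1.0, 0.5))  # -> 0.5
print(mu_ge(0.75, 1.0, 0.5))  # -> 0.5
```

With $b=1$ these are exactly the constraint memberships; with $b=z_0$ and $d=d_0$, `mu_le` plays the role of the objective membership.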

If the objective function in (7.2.1) is written as a minimizing goal in order to consider $z_0$ as an upper bound, then (7.2.1) can be rewritten as

\[
\tilde g_0(x)\lesssim z_0,\quad \tilde g_i(x)\lesssim 1\ (1\le i\le p'),\quad \tilde g_i(x)\gtrsim 1\ (p'+1\le i\le p),\quad x>0.
\tag{7.2.5}
\]

Definition 7.2.2. Let $\widetilde{A}_i=\{x\in R^m\,|\,\tilde g_i(x)\lesssim 1,\ x>0\}\ (1\le i\le p')$ and $\widetilde{A}_i'=\{x\in R^m\,|\,\tilde g_i(x)\gtrsim 1,\ x>0\}\ (p'+1\le i\le p)$ be fuzzy feasible solution sets corresponding to $\tilde g_i(x)\lesssim 1$ and $\tilde g_i(x)\gtrsim 1$, respectively. Then

\[
\widetilde{Y}=\widetilde{A}_0\cap\Big(\bigcap_{1\le i\le p'}\widetilde{A}_i\Big)\cap\Big(\bigcap_{p'+1\le i\le p}\widetilde{A}_i'\Big)
\]

is called the fuzzy decision for (7.2.5), and so for (7.2.1), satisfying

\[
\mu_{\widetilde{Y}}(x)=\mu_{\widetilde{A}_0}(x)\wedge\min_{1\le i\le p'}\mu_{\widetilde{A}_i}(x)\wedge\min_{p'+1\le i\le p}\mu_{\widetilde{A}_i'}(x),\quad x>0,
\tag{7.2.6}
\]

while $x^*$ is called a fuzzy optimal solution to (7.2.5), and so to (7.2.1), satisfying

\[
\mu_{\widetilde{Y}}(x^*)=\max_{x>0}\Big\{\mu_{\widetilde{Y}}(x)=\min\big\{\mu_{\widetilde{A}_0}(x),\min_{1\le i\le p'}\mu_{\widetilde{A}_i}(x),\min_{p'+1\le i\le p}\mu_{\widetilde{A}_i'}(x)\big\}\Big\}.
\tag{7.2.7}
\]

If there exists a fuzzy optimal point set $\widetilde{A}_0$ of $\tilde g_0(x)$ such that (7.2.7) holds, then (7.2.5) is called a fuzzy reversed posynomial geometric programming for $\tilde g_0(x)$ with respect to $\widetilde{Y}$.

Substituting (7.2.2), (7.2.3) and (7.2.4) into (7.2.6), after some rearrangements [Zim76],

\[
\mu_{\widetilde{Y}}(x)=\Big(1-\frac{\bar g_0(x)-z_0}{d_0}\Big)\wedge\min_{1\le i\le p'}\Big(1-\frac{\bar g_i(x)-1}{d_i}\Big)\wedge\min_{p'+1\le i\le p}\Big(1+\frac{\bar g_i(x)-1}{d_i}\Big).
\]

By introducing a new variable $\alpha\in[0,1]$, the maximizing decision of (7.2.1) can be turned into finding a solution $x(>0)$ maximizing $\mu_{\widetilde{Y}}(x)$, that is, $x^*$. Let $\alpha=\mu_{\widetilde{Y}}(x)$; by (7.2.3) and (7.2.4), $\mu_{\widetilde{A}_0}(x)\ge\alpha$ when $\bar g_0(x)\le z_0+(1-\alpha)d_0$, and $\mu_{\widetilde{A}_i}(x)\ge\alpha$ when $\bar g_i(x)\le 1+(1-\alpha)d_i$ or $\bar g_i(x)\ge 1+(\alpha-1)d_i$. So we have the following.

Theorem 7.2.1. The maximizing of $\mu_{\widetilde{Y}}(x)$ is equivalent to $\mu_{\widetilde{Y}}(x)\ge\alpha$, so from (7.2.1) we arrive at

\[
\max\ \alpha\quad \text{s.t.}\ \bar g_0(x)\le z_0+(1-\alpha)d_0,\ \bar g_i(x)\le 1+(1-\alpha)d_i\ (1\le i\le p'),\ \bar g_i(x)\ge 1+(\alpha-1)d_i\ (p'+1\le i\le p),\ x>0,\ \alpha,\beta\in[0,1].
\tag{7.2.8}
\]

Proof: Similar to Theorem 7.1.1; it is not difficult to prove the truth of the theorem.

7.2.3 Fuzzy Lagrange Problem and Algorithm

Definition 7.2.3. Let $w=(w_{01},\ldots,w_{0J_0},\ldots,w_{p1},\ldots,w_{pJ_p})^T$. Write down $I=\{(i,k)\,|\,\Gamma^Tw=0,\ w\ge 0$ has a solution $w$ satisfying $w_{ik}>0\}$; then $I$ is called an unreduced subscript set.

Definition 7.2.4. If an unreduced set $I=\{(i,k)\,|\,1\le k\le J_i,\ 0\le i\le p\}$, i.e., if $I$ includes all subscript pairs $(i,k)$, the primal fuzzy posynomial geometric programming $(\widetilde{P})$ and the dual programming $(\widetilde{D})$ are said to be of canonical type. Otherwise, $(\widetilde{P})$ and $(\widetilde{D})$ degenerate. More specifically, if $I$ fails to contain any $(0,k)\ (1\le k\le J_0)$, $(\widetilde{P})$ and $(\widetilde{D})$ are called totally degenerate types.

Definition 7.2.5. Assume that $\tilde g_i(x)$ is an $m$-dimensional fuzzy differentiable function; its gradient is defined as

\[
\nabla_x\tilde g_i(x)=\Big(\frac{\partial}{\partial x_1}\tilde g_i(x),\frac{\partial}{\partial x_2}\tilde g_i(x),\cdots,\frac{\partial}{\partial x_m}\tilde g_i(x)\Big)^T,
\]

and then it is easy to change it into

\[
\nabla_x\bar g_i(x)=\Big(\frac{\partial}{\partial x_1}\bar g_i(x),\frac{\partial}{\partial x_2}\bar g_i(x),\cdots,\frac{\partial}{\partial x_m}\bar g_i(x)\Big)^T.
\]

Definition 7.2.6. Find a fuzzy feasible solution $x^*$ to (7.2.1) and $\lambda^*=(\lambda_1^*,\lambda_2^*,\cdots,\lambda_p^*)^T\ge 0$ satisfying $\lambda_i^*(\tilde g_i(x^*)-1)=0\ (1\le i\le p)$ (where $\tilde g_i(x^*)=1$ is a fuzzy equality and its membership degree $\widetilde{B}_i(\tilde g_i(x^*)-1)$ is 1), such that the fuzzy Lagrange function

\[
\widetilde{L}(x,\lambda)=\tilde g_0(x)+\sum_{i=1}^{p'}\lambda_i\big(\tilde g_i(x)-1\big)+\sum_{i=p'+1}^{p}\lambda_i\big(1-\tilde g_i(x)\big)
\]

satisfies $\nabla_x\widetilde{L}(x^*,\lambda^*)=0$; this is called a Lagrange problem in (7.2.1).

Theorem 7.2.2. Let $x^*$ be a fuzzy feasible solution to (7.2.1), write $E=\{i\,|\,\tilde g_i(x^*)=1\ (1\le i\le p)\}$ for the subscript set of fuzzy effective constraints at $x^*$, and let $\mu_{\widetilde{A}_i}(\cdot)\ (0\le i\le p)$ be continuous and strictly monotone fuzzy functions. Then there exists $\lambda^*$ enabling $(x^*,\lambda^*)$ to be a fuzzy solution of the Lagrange problem if and only if all variable vectors $x(>0)$ satisfy

\[
\sum_{l=1}^{m}\widetilde{\Gamma}_{il}(\ln x_l-\ln x_l^*)\ge 0\quad (i\in E),
\tag{7.2.9}
\]

and then

\[
\tilde g_0(x^*)\le\tilde g_0(x),
\tag{7.2.10}
\]

where $\widetilde{\Gamma}_{il}=\sum_{k=1}^{J_i}\tilde\gamma_{ikl}\tilde v_{ik}(x^*)\ (i\in E,\ 1\le l\le m)$.

Proof: Let $\mu_{\widetilde{A}_i}(\cdot)\ (0\le i\le p)$ be continuous and strictly monotone fuzzy functions; then (7.2.1) is equivalent to

\[
\min\ \mu_{\widetilde{A}_0}(\tilde g_0(x))\quad \text{s.t.}\ \mu_{\widetilde{A}_i}(\tilde g_i(x)-1)\ge\alpha\ (1\le i\le p'),\ \mu_{\widetilde{A}_i}(1-\tilde g_i(x))\ge\alpha\ (p'+1\le i\le p),\ \alpha\in[0,1],\ x>0,
\tag{7.2.11}
\]

while

\[
E\Longleftrightarrow E'=\{i\,|\,\mu_{\widetilde{A}_i}(\tilde g_i(x^*)-1)=0\ (1\le i\le p)\},
\]

with (7.2.9) equivalent to

\[
\mu_{\widetilde{A}_i}\Big[\sum_{l=1}^{m}\widetilde{\Gamma}_{il}(\ln x_l-\ln x_l^*)\Big]\ge\alpha\quad (i\in E'),
\tag{7.2.12}
\]

and then (7.2.10) is equivalent to

\[
\mu_{\widetilde{A}_0}\big[\tilde g_0(x^*)-\tilde g_0(x)\big]\ge\alpha.
\tag{7.2.13}
\]

From the condition it is known that $x^*$ is a fuzzy feasible solution to (7.2.1), which is equivalent to $(x^*,\alpha)$ being a parametric feasible solution to (7.2.11) [Cao93a]. Therefore, for $E'$ and any $\alpha\in[0,1]$, there exists $\lambda^*$ enabling $(x^*,\lambda^*,\alpha)$ to be a Lagrange problem solution with parameter $\alpha$ if and only if all variable vectors $x>0$ tally with (7.2.12), and (7.2.13) holds from the knowledge of Theorem 4.4.1 in Ref. [WY82]. Hence the theorem holds from the arbitrariness of $\alpha\in[0,1]$.

Proposition 7.2.1. Let $\mu_{\widetilde{A}_i}(x)\ (0\le i\le p)$ be continuous and monotone fuzzy functions. On the assumption of a constraint complete lattice, a local optimum solution to (7.2.1) must be a part of a fuzzy solution to the Lagrange problem.

Proposition 7.2.2 (Converse proposition). Let $\mu_{\widetilde{A}_i}(x)\ (0\le i\le p)$ be continuous and strictly monotone fuzzy functions and let $x^*$ be a part of a fuzzy solution to the Lagrange problem. Then $x^*$ is a global fuzzy optimum solution to (7.2.1) if (7.2.1) is fuzzy convex or $p'=p$; $x^*$ is not necessarily a global fuzzy optimum solution to (7.2.1) if $p'\ne p$, nor even a local fuzzy optimum one.

Direct Algorithm [Asa82]. If $\eta$ is a continuous function on $[0,1]$, there exists a unique fixed point $\bar\alpha=\eta(\alpha)$. Let $\tilde g_i(x)$ be differentiable. The steps of a direct algorithm are listed as follows.

1° Let $k=1$, and determine $\alpha_1$ as well as $h$ by means of $1-hd=\alpha_1$.

2° Calculate $\eta^{(k)}=\sup_{x\in A_{\alpha_k}}|\mu_{\widetilde{A}_0}(\tilde g_0(x))|$ and $\widetilde{M}^{(k)}(x)=\dfrac{1}{\eta^{(k)}}\tilde g_0(x)\in[0,1]$.

3° Calculate $\varepsilon_k=\alpha_k-\widetilde{M}^{(k)}(x)$. If $|\varepsilon_k|>\varepsilon$, go to 4°; otherwise go to 5°.

4° Select $r_k\in[0,1]$ properly, let $\alpha_{k+1}=\alpha_k-r_k\varepsilon_k$, and let $k$ be $k+1$. Go to 2°.

5° Calculate $\widetilde{M}^{(k)}(x^*)$ when $\alpha=\alpha_k$; then $x^*$ is an optimal solution to $\widetilde{P}$.

Note. It is proper to take $\alpha_1\in[0.9,1]$ when $\tilde g_0(x)$ increases strictly monotonically; otherwise take $\alpha_1\in[0.75,0.9]$. If $b(>0)$ is very large, larger, smaller or very small, it is proper to take $h$ as 0.02, 0.2, 2 and 20, respectively. As for the selection of $r_k$: when $\varepsilon_1\approx\varepsilon_2$, $r_k=0.5$ may be chosen; if $\varepsilon_1\approx\varepsilon_2$ changes a little, or if $\varepsilon_1\gg\varepsilon_2$, then $r_k\in[0.618,1]$ and $r_k\in[0.382,0.4]$ can properly be taken, respectively. Otherwise, a contradiction may appear.
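The update loop in steps 2°–4° is an ordinary damped fixed-point iteration on $\alpha$. A schematic skeleton, with the level-set subproblem abstracted into a caller-supplied function `M` (the whole sketch is illustrative, not the book's code):

```python
def direct_algorithm(M, alpha1=0.95, r=0.5, eps=1e-6, max_iter=200):
    """Skeleton of the direct algorithm: iterate
    alpha_{k+1} = alpha_k - r * (alpha_k - M(alpha_k)) until the residual
    epsilon_k = alpha_k - M(alpha_k) satisfies |epsilon_k| <= eps.
    M(alpha) stands for the normalized value M^(k)(x) returned by the
    level-set subproblem, which is problem-specific."""
    alpha = alpha1
    for _ in range(max_iter):
        residual = alpha - M(alpha)
        if abs(residual) <= eps:
            break
        alpha -= r * residual
    return alpha

# toy subproblem with fixed point alpha* = 0.8 (illustrative only)
alpha_star = direct_algorithm(lambda a: 0.5 * a + 0.4)
print(round(alpha_star, 4))  # -> 0.8
```

In the book's setting `M` would solve the $\alpha$-level geometric subproblem; here a simple affine map stands in for it merely to show convergence of the damping scheme.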

Example 7.2.1: Find

\[
\widetilde{\min}\ 2x_1+3x_2\quad \text{s.t.}\ x_1^2+x_2^2\gtrsim 1,\ x_1,x_2>0.
\]

Since $\eta^{(1)}=\sqrt{13}$, we suppose a fuzzy constraint membership function

\[
\mu_1(d_1)=\begin{cases}1-0.2h, & 0\le d_1<0.25,\\ 0, & \text{otherwise}.\end{cases}
\]

After two steps, we can find

\[
x_1^*=0.915683,\quad x_2^*=0.555,\quad \widetilde{M}^{(2)}=0.969717,
\]

and the objective function gives $S\approx 3.496$, with its constraint infimum being $M_{\bar P}=2.1415$, where $x^{(0)*}=(1.07075,0)$ is a fuzzy minimum solution. $x^{(2)}=(x_1^*,x_2^*)=(0.915683,0.555)$ is not a global fuzzy optimum point of the problem, nor a local one; so Proposition 7.2.2 is confirmed. But $x^*$ is still a fuzzy optimal point for all $x$ satisfying (7.2.9). Since $\Gamma_{11}=-2(x_1^*)^2\approx-1.677$ and $\Gamma_{12}=-2(x_2^*)^2\approx-0.616$, all $x_1$ and $x_2$ satisfying the problem are such that

\[
-1.677(\ln x_1-\ln x_1^*)-0.616(\ln x_2-\ln x_2^*)\ge 0\ \Longrightarrow\ x_1^{-1.677}x_2^{-0.616}\ge 0.44572,
\]

and then $2x_1+3x_2\ge 3.4963$, i.e., $(x_1^*,x_2^*)$ is a fuzzy optimal point of the problem within a certain range; so Theorem 7.2.2 is confirmed. The property is called the tangential optimality of fuzziness.

7.2.4 Conclusion

A direct algorithm is given for the Lagrange problem of fuzzy reversed posynomial geometric programming. As for its dual programming, it can be built by fuzzy dual theory [Cao02a]. Because a fuzzy reversed posynomial geometric programming is a special case of fuzzy posynomial geometric programming, the idea and method mentioned above also suit fuzzy posynomial geometric programming.
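Before moving on, the arithmetic reported in Example 7.2.1 above can be spot-checked in a few lines (the point and target values come from the example; the check itself is mine):

```python
# point reported in Example 7.2.1
x1, x2 = 0.915683, 0.555

S = 2 * x1 + 3 * x2      # objective value 2*x1 + 3*x2
g11 = -2 * x1 ** 2       # Gamma_11 = -2 (x1*)^2
g12 = -2 * x2 ** 2       # Gamma_12 = -2 (x2*)^2

print(round(S, 3), round(g11, 3), round(g12, 3))  # -> 3.496 -1.677 -0.616
```

The three rounded values agree with $S\approx 3.496$, $\Gamma_{11}\approx-1.677$ and $\Gamma_{12}\approx-0.616$ as printed.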

7.3 Antinomy in Fuzzy Geometric Programming

7.3.1 Antinomy in Fuzzy Posynomial Geometric Programming

Definition 7.3.1. Suppose that a fuzzy optimal solution exists in the fuzzy posynomial geometric programming (7.1.3). If there exist $t_i\ge 0$ with $\sum_{i=1}^{p}t_i>0$ such that a fuzzy optimal solution to (7.1.5) exists, and if the optimal value $g_0^{(1)}$ in (7.1.3) is larger than the optimal value $g_0^{(2)}$ in (7.1.5), then there appears antinomy in (7.1.3).

What is the reason for such a strange phenomenon? The following is a sufficient and necessary condition for antinomy to appear in a fuzzy posynomial geometric programming of general non-degeneration.

Theorem 7.3.1. The sufficient and necessary condition for antinomy to appear in (7.1.3) is that the optimal value of (7.1.3) does not equal that of (7.1.1).

Proof: Let $x^{(1)}$ and $x^{(2)}$ be fuzzy optimal solutions to (7.1.3) and (7.1.1), respectively.

Sufficiency. If $g_0(x^{(1)})\ne g_0(x^{(2)})$, then, since $x^{(1)}$ is a fuzzy feasible solution to (7.1.1), $g_0(x^{(1)})>g_0(x^{(2)})$. Now we build $d_i=1-g_i(x^{(2)})\ (1\le i\le p)$. Obviously $d=(d_1,d_2,\cdots,d_p)^T\ge 0$ and $\sum_{i=1}^{p}d_i>0$. Then (7.1.5) is constructed with $x^{(2)}$ being a fuzzy feasible solution to (7.1.5), and (7.1.1) is obtained from (7.1.5); hence the fuzzy optimal value of (7.1.5) is larger than or equal to that of (7.1.1). Let $x^{(3)}$ be a fuzzy optimal solution to (7.1.5), with $g_0(x^{(2)})\ge g_0(x^{(3)})$, but $g_0(x^{(1)})>g_0(x^{(2)})$. Therefore $g_0(x^{(1)})>g_0(x^{(3)})$, and the antinomy appears in (7.1.3).

Necessity. If the antinomy appears in (7.1.3), i.e., there exist $d_i\ge 0$ with $\sum_{i=1}^{p}d_i>0$ such that a fuzzy optimal solution $x^{(3)}$ exists in (7.1.5) with $g_0(x^{(1)})>g_0(x^{(3)})$, then for all $i\in\{1,2,\cdots,p\}$ we have $d_i=1-g_i(x^{(3)})\ge 0$, i.e., $x^{(3)}$ is a fuzzy feasible solution to (7.1.1); therefore $g_0(x^{(2)})\le g_0(x^{(3)})$. We have $g_0(x^{(2)})\le g_0(x^{(3)})<g_0(x^{(1)})$, i.e., the fuzzy optimal value of (7.1.3) is not equal to that of (7.1.1).

Expanding the convexity, we have the following.

Definition 7.3.2. Let $X\subset R^m$ be a convex set. If $g_0(x)$ is a fuzzy convex function (resp. a strongly fuzzy convex one) with respect to $\widetilde{A}_0$, and the $g_i^*(x)\ (1\le i\le p)$ are fuzzy convex functions (resp. strongly fuzzy convex ones) with respect to $\widetilde{F}_i$, then we call (7.1.3) a fuzzy convex (resp. strongly fuzzy convex) programming with respect to $g_0(x)$.

Theorem 7.3.2. Let $x^*$ be an optimal solution to the fuzzy posynomial geometric programming $(\widetilde{P})$. If $g_0(x),g_i(x)\ (1\le i\le p)$ are differentiable, $g_0(x)$ is pseudoconvex and the $g_i(x)$ are quasiconvex at $x^*$, and the $\nabla g_i(x^*)\ (0\le i\le p)$ are linearly independent, then antinomy appears in $(\widetilde{P})$ $\Longleftrightarrow$ the combination coefficients of the $\nabla g_i(x)$ in the Kuhn-Tucker condition at $x^*$ contain a negative component, i.e., there exist $\lambda_i\ (1\le i\le p)$ such that

\[
\nabla g_0(x^*)-\sum_{i=1}^{p}\lambda_i\nabla g_i(x^*)=0
\tag{7.3.1}
\]

contains at least one negative component $\lambda_i<0$.

Proof: Since (7.1.3) can be turned into the determined posynomial geometric programming [Cao93a]

\[
\max\ \alpha\quad \text{s.t.}\ g_0(x)\le z_0-d_0\log\alpha,\ g_i(x)=1\ (1\le i\le p),\ \alpha\in[0,1],\ x>0,
\tag{7.3.2}
\]

(7.3.2) can be changed into the determined posynomial geometric programming

\[
\max\ \alpha\quad \text{s.t.}\ g_0(x)\le z_0-d_0\log\alpha,\ g_i(x)\le 1\ (1\le i\le p),\ \alpha\in[0,1],\ x>0,
\tag{7.3.3}
\]

and (7.1.1) can be converted into the determined posynomial geometric programming (7.1.7). Then we have the following.

Necessity. Suppose, to the contrary, that $\lambda_i>0\ (1\le i\le p)$. If $x^*$ is an optimal solution to (7.1.3), we can prove that the convexity with the K-T optimality condition satisfies the assumptions on $g_i(x)\ (0\le i\le p)$ of this theorem, with the optimal solution $x^*$ of (7.1.3) still being an optimal solution to (7.1.1). In fact, the fuzzy posynomial geometric programmings (7.1.3) and (7.1.1) can be changed into the determined posynomial geometric programmings (7.3.3) and (7.1.7), respectively. It is easy to see that an optimal solution $\bar x^*$ to (7.3.2) is still one to (7.3.3). From there we can prove that an optimal solution to (7.1.3) is still an optimal one to (7.1.1). Again, since (7.1.3) and (7.1.1) share the same optimal solution, we know that their optimal values are equal as well. This contradicts the appearance of antinomy in (7.1.3). Therefore at least one negative component exists among the $\lambda_i$.

Sufficiency. If there is at least one negative component among the $\lambda_i$, then $A^TP=0,\ P\ge 0,\ P\ne 0$ has no solution, where $A=(\nabla g_0(x^*),-\nabla g_1(x^*),\cdots,-\nabla g_p(x^*))^T$. Otherwise, if it has a solution $P=(\lambda_0,\lambda_1,\cdots,\lambda_p)$, i.e.,

\[
\lambda_0\nabla g_0(x^*)-\sum_{i=1}^{p}\lambda_i\nabla g_i(x^*)=0,
\]

then $\lambda_0\ne 0$ (otherwise the $\nabla g_i(x)\ (1\le i\le p)$ would be linearly dependent, contradicting the assumption), such that

\[
\nabla g_0(x^*)-\sum_{i=1}^{p}\frac{\lambda_i}{\lambda_0}\nabla g_i(x^*)=0.
\]

And all $\dfrac{\lambda_i}{\lambda_0}\ge 0$, which contradicts the $\lambda_i$ containing negative components. Therefore $A^Td<0$ has a solution by the Gordan Theorem [BS79]; that is, a vector $d$ exists such that

\[
\nabla g_0(x^*)^Td<0,\qquad \nabla g_i(x^*)^Td>0\ (1\le i\le p).
\]

As far as (7.3.2) is concerned, we can prove, similarly to Ref. [BS79], that $d$ is a descent feasible direction at $\bar x^*$, and so is $d$ at $x^*$ for (7.1.1). That is to say, another feasible solution $\hat x$ to (7.1.1) can certainly be found in that direction such that $g_0(\hat x)<g_0(x^*)$. By doing so, the optimal value of (7.1.1) is smaller than that of (7.1.3). Therefore antinomy appears in (7.1.3), and the theorem is true.

Any fuzzy posynomial geometric programming is equivalent to a fuzzy linear programming by Section 7.1. Therefore, the condition for antinomy to appear in (7.1.3) is equivalently obtained by using the condition for antinomy to appear in the fuzzy linear programming. The following results are obtained by means of non-degeneration.

Theorem 7.3.3. Let a non-degenerate fuzzy optimal solution exist in the fuzzy posynomial geometric programming (7.1.3). Then the appearance of antinomy in (7.1.3) is equivalent to that in the corresponding fuzzy linear programming (7.1.14): when a basic solution $z^*=(z_B,z_N)$ corresponding to a basis $B$ denotes a non-degenerate optimal solution, a negative component exists in the dual basic solution $w=C_BB^{-1}$.

Proof: From the discussion of Theorems 7.1.4 and 7.1.6 it is known that any fuzzy posynomial geometric programming (7.1.3) can be changed into a fuzzy linear programming (7.1.14), such that antinomy appearance in (7.1.3) is equivalent to that in (7.1.14). It is proved in [Cao91c] that, when a basic solution is a non-degenerate optimal solution with respect to a basis $B$, the appearance of antinomy in (7.1.14) means that a negative component exists in its dual basic solution. Therefore the theorem holds.

Corollary 7.3.1. Suppose that (7.1.3) has a non-degenerate fuzzy optimal solution, and for its non-degenerate basic optimal solution $z^*=(z_B,z_N)$ corresponding to the fuzzy linear programming (7.1.14) there exists $j_0$ such that a certain determined coefficient $\sigma_{j_0}<0$ appears in (7.1.14); then antinomy appears in (7.1.3).

Proof: In fact, since (7.1.3) can be turned into (7.1.14), and the dual programming of (7.1.14) is (7.1.13), a negative component must exist in $w$ of (7.1.13) from the knowledge of $\sigma_j=wP_j<0$ [Cao91c]. This corollary holds from Theorem 7.1.6.

7.3.2 Example of Antinomy

Example 7.3.1: A precision-instrument factory needs $b_1$, $b_2$ and $b_3$ kg of three kinds of metal $A_1$, $A_2$ and $A_3$, respectively, to be smelted by using four different kinds of ore $B_1$, $B_2$, $B_3$ and $B_4$. The metal content of each ore (i.e., the percentage) and its unit price (yuan/kg) are listed in Table 7.3.1 as follows.

Table 7.3.1. The Ore Element and Unit Price

    Metal \ Ore                   B1    B2    B3    B4    Required (kg)
    A1                             1     1     1     1    b1
    A2                             0     4     2     6    b2
    A3                             5     6     5     4    b3
    Ore unit price (yuan/kg)       2     3     1     2

How should each ore be purchased in order to make the cost lowest? This question comes down to building a monomial fuzzy posynomial geometric programming as follows:

\[
\widetilde{\min}\ g_0(x)=x_1^2x_2^3x_3x_4^2\quad \text{s.t.}\ g_1(x)=x_1x_2x_3x_4=\tilde b_1,\ g_2(x)=x_2^4x_3^2x_4^6=\tilde b_2,\ g_3(x)=x_1^5x_2^6x_3^5x_4^4=\tilde b_3,\ x_1,x_2,x_3,x_4>0,
\tag{7.3.4}
\]

where $\tilde b_i\ (i=1,2,3)$ takes values freely fixed in a certain closed real-number interval $[b_i^-,b_i^+]$; its degree of accomplishment is determined by (7.2.2). Given $z_l=\ln x_l\ (l=1,2,3,4)$, then from Theorem 7.1.6, (7.3.4) is changed into the fuzzy linear programming

\[
\widetilde{\min}\ f_0(z)=2z_1+3z_2+z_3+2z_4\quad \text{s.t.}\ f_1(z)=z_1+z_2+z_3+z_4=\ln\tilde b_1,\ f_2(z)=4z_2+2z_3+6z_4=\ln\tilde b_2,\ f_3(z)=5z_1+6z_2+5z_3+4z_4=\ln\tilde b_3.
\tag{7.3.5}
\]

If $z_1,z_2,z_3$ are taken as basis variables, the corresponding basis matrix $B$ and basis inverse matrix $B^{-1}$ are

\[
B=\begin{pmatrix}1&1&1\\0&4&2\\5&6&5\end{pmatrix},\qquad
B^{-1}=\begin{pmatrix}-4&-\frac12&1\\-5&0&1\\10&\frac12&-2\end{pmatrix}.
\]

When we take $\tilde b_1=e^9,\ \tilde b_2=e^{12},\ \tilde b_3=e^{46}$, the basic feasible solution $z_1=4,\ z_2=1,\ z_3=4,\ z_4=0$ is an optimal one and the minimum is $f_0^1=C_Bz_B=15$; when we take $\tilde b_1=e^{10},\ \tilde b_2=e^{18},\ \tilde b_3=e^{50}$, another basic feasible solution $z_1=1,\ z_2=0,\ z_3=9,\ z_4=0$ is an optimal one and the minimum is $f_0^2=11$. That is, with other conditions unchanged, the constraint conditions (resp. task quantities) in (7.3.4) are increased from $e^9,e^{12},e^{46}$ to $e^{10},e^{18},e^{50}$, while the objective function (the cost) decreases by $f_0=15-11=4$ units. The reason is that there exists a negative component $-13$ in

\[
w=C_BB^{-1}=(2,3,1)\begin{pmatrix}-4&-\frac12&1\\-5&0&1\\10&\frac12&-2\end{pmatrix}=\big(-13,-\tfrac12,3\big),
\]

testifying Theorem 7.3.2.

For fuzzy posynomial geometric programming with negative exponents, the above-mentioned phenomenon of antinomy will also appear.

Example 7.3.2: Suppose a fuzzy posynomial geometric programming as follows:

\[
\widetilde{\min}\ g_0(x)=x_1x_2x_3^6x_4^3\quad \text{s.t.}\ g_1(x)=x_1x_2^{-1}x_3^{-1}x_4^2=\tilde b_1,\ g_2(x)=x_1^2x_2x_3^2x_4^{-1}=\tilde b_2,\ g_3(x)=x_1x_2x_3^2x_4^3=\tilde b_3,\ x_1,x_2,x_3,x_4>0.
\tag{7.3.6}
\]

Given $z_l=\ln x_l\ (l=1,2,3,4)$, (7.3.6) is changed into

\[
\widetilde{\min}\ f_0(z)=z_1+z_2+6z_3+3z_4\quad \text{s.t.}\ f_1(z)=z_1-z_2-z_3+2z_4=\ln\tilde b_1,\ f_2(z)=2z_1+z_2+2z_3-z_4=\ln\tilde b_2,\ f_3(z)=z_1+z_2+2z_3+3z_4=\ln\tilde b_3.
\tag{7.3.7}
\]

If $z_1,z_2,z_3$ are taken as basis variables: when $\tilde b_1=e,\ \tilde b_2=e^6,\ \tilde b_3=e^4$ are taken, an optimal solution is $z_1=2,\ z_2=0,\ z_3=1$ and the minimum is $f_0^1=C_Bz_B=8$; when $\tilde b_1=e^2,\ \tilde b_2=e^7,\ \tilde b_3=e^4$ are taken, an optimal solution is $z_1=3,\ z_2=1,\ z_3=0$ and the minimum is $f_0^2=4$. That is, with other conditions unchanged, the constraint conditions in (7.3.6) are increased from $e^1,e^6,e^4$ to $e^2,e^7,e^4$, while the objective function decreases by $f_0=8-4=4$ units, again because a negative component $-8$ exists in

\[
w=C_BB^{-1}=(1,1,6)\begin{pmatrix}0&1&-1\\-2&3&-4\\1&-2&3\end{pmatrix}=(4,-8,13);
\]

it testifies Theorem 7.3.2.

In fact, when $\tilde b_1=e^1,\ \tilde b_2=e^6,\ \tilde b_3=e^4$ are taken, a crisp programming corresponding to (7.3.7) denotes

\[
\max\ S=\alpha\quad \text{s.t.}\ z_1-z_2-z_3+2z_4+\alpha\le 2,\ 2z_1+z_2+2z_3-z_4+\alpha\le 7,\ z_1+z_2+2z_3+3z_4\le 4,\ -z_1-z_2-6z_3-3z_4-4\alpha\ge -8.
\]

It is solved by the simplex method in five steps; when $\alpha=1$, an optimal parametric solution to the problem is

\[
Z^*=(z_1,z_2,z_3,z_4,\alpha)=\Big(\frac{30}{13},\frac{19}{13},0,\frac{1}{13},1\Big).
\]
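Under the stated bases, the key numbers of Examples 7.3.1 and 7.3.2 can be reproduced with a few lines of linear algebra (a check added here; it is not part of the original text):

```python
import numpy as np

# Example 7.3.1: constraint rows of (7.3.5) restricted to the basis z1, z2, z3
# (z4 = 0 in both reported basic solutions); objective f0 = 2z1 + 3z2 + z3 + 2z4
A1 = np.array([[1.0, 1.0, 1.0], [0.0, 4.0, 2.0], [5.0, 6.0, 5.0]])
c1 = np.array([2.0, 3.0, 1.0])
for rhs, expected in (([9.0, 12.0, 46.0], 15.0), ([10.0, 18.0, 50.0], 11.0)):
    z = np.linalg.solve(A1, rhs)
    assert abs(c1 @ z - expected) < 1e-9   # cost drops from 15 to 11: antinomy

# Example 7.3.2: dual basic solution w = C_B B^{-1} for the basis z1, z2, z3 of (7.3.7)
B = np.array([[1.0, -1.0, -1.0], [2.0, 1.0, 2.0], [1.0, 1.0, 2.0]])
cB = np.array([1.0, 1.0, 6.0])
w = cB @ np.linalg.inv(B)
print(np.round(w, 6))  # -> [ 4. -8. 13.]  (the negative component signals antinomy)
```

The first loop shows the paradox numerically: the right-hand sides grow while the optimal cost falls; the second reproduces the dual vector whose negative component Theorem 7.3.2 predicts.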


Let z* = (z1, z2, z3, z4). Then z* is an optimal solution to (7.3.7) with optimal value f0(z*) = 4, such that an optimal solution to the prime fuzzy geometric programming (7.3.6) is

X* = e^{Z*} = (e^{z1}, e^{z2}, e^{z3}, e^{z4}, e^1) = (x1*, x2*, x3*, x4*, 1) = (e^{30/13}, e^{19/13}, e^0, e^{1/13}, e^1),

with optimal value e^4. When α ∈ [0,1], an optimal solution to the prime fuzzy geometric programming (7.3.6) is

X* = e^{Z*} = (e^{z1}, e^{z2}, e^{z3}, e^{z4}, e^α) = (x1*, x2*, x3*, x4*, α) = (e^{30/13}, e^{19/13}, e^0, e^{1/13}, e^α),

and the optimal parameter value is g0(X*) = e^{30/13} · e^{19/13} · e^{3/13} · e^α = e^{4+α}; but for α > 0 the antinomy no longer takes place in this case.

In summary, if the sign "=" in (7.1.3) is changed into "≤", an antinomy can be kept from appearing in (7.1.3). From the dual theory of fuzzy geometric programming, to prevent antinomy from appearing in the programming, it suffices to let the objective coefficients ci of the dual programming of (7.1.3) fluctuate; this is not re-discussed here. The remaining problem is only to keep the prime problem (7.1.3) free of antinomy. At the same time, Theorem 7.3.2 above can be verified directly.

Example 7.3.3: Consider Example 7.3.2: when e, e^6, e^4 are chosen for the b̃i respectively, the basis feasible solution z1 = 2, z2 = 0, z3 = 1, z4 = 0 is optimal, so an optimal solution to (7.3.6) is x* = (e^2, 1, e, 1)ᵀ. Meanwhile

∇g0(x*) = (e^6, e^8, 6e^7, 3e^8)ᵀ, ∇g1(x*) = (e⁻¹, −e, −1, 2e)ᵀ,
∇g2(x*) = (2e^4, e^6, 2e^5, −e^6)ᵀ, ∇g3(x*) = (e^2, e^4, 2e^3, 3e^4)ᵀ.

Let ∇g0(x*) − λ1∇g1(x*) − λ2∇g2(x*) − λ3∇g3(x*) = 0. Then

λ1 + 2e^5 λ2 + e^3 λ3 = e^7,
−λ1 + e^5 λ2 + e^3 λ3 = e^7,
−λ1 + 2e^5 λ2 + 2e^3 λ3 = 6e^7,
2λ1 − e^5 λ2 + 3e^3 λ3 = 3e^7.

If z1 = 2, z2 = 0, z3 = 1 is taken as a feasible basis with z4 = 0 nonbasic, the variable z4 is inactive, i.e., the variable x4 is inactive, so the last equation can be deleted from the system above; the remaining equations give the basis feasible solution λ1 = 4e^7, λ2 = −8e^2, λ3 = 13e^4, so that at least one


multiplier (λ2 < 0) satisfies the sufficient condition in Theorem 7.3.2. Therefore, antinomy appears in the above fuzzy posynomial geometric programming.

7.3.3 Extension

Based on a strong dual theory, the following results hold.

Theorem 7.3.4 [Cao93d]. Let (P̃1) be fuzzy super-consistent with M_{P̃1} > 0. Then the dual programming (D̃1) must be fuzzy consistent, and the following hold:
1° there exists an optimal solution;
2° M_{D̃1} = d(w*) = M_{P̃1}, where M_{P̃1} and M_{D̃1} represent the fuzzy constraint infimum of (P̃1) and the fuzzy constraint supremum of (D̃1), respectively.

Theorem 7.3.5. Under the assumptions of Theorem 7.3.2, antinomy appears in (P̃1) when the fuzzy strong duality holds if and only if at least one component of the solution w* = (w1*, w2*, ..., wp*)ᵀ to (D̃1) is negative, or if and only if no feasible solution exists for (D̃1).

Proof: Since (P̃1) can be changed into a fuzzy linear programming, the theorem follows from the necessary and sufficient condition for antinomy to appear in fuzzy linear programming.

Corollary 7.3.2. When (P̃1) is changed into a fuzzy linear programming without nonnegativity constraints, the antinomy appears if and only if there exists at least one negative component in w.

Example 7.3.4: Consider the programming corresponding to (7.3.4)

inf g0(x) = x1^2 x2^3 x3 x4^2
s.t. g1(x) = x1 x2 x3 x4 = ẽ^9,
     g2(x) = x2^4 x3^2 x4^6 = ẽ^12,
     g3(x) = x1^5 x2^6 x3^5 x4^4 = ẽ^46,
     x1, x2, x3, x4 > 0.

(7.3.7)

Let xl = e^{zl} (l = 1, 2, 3, 4). Then (7.3.7) is rewritten as

inf f0(z) = 2z1 + 3z2 + z3 + 2z4
s.t. f1(z) = z1 + z2 + z3 + z4 = 9̃ = b1 + Δb1,
     f2(z) = 4z2 + 2z3 + 6z4 = 1̃2 = b2 + Δb2,
     f3(z) = 5z1 + 6z2 + 5z3 + 4z4 = 4̃6 = b3 + Δb3,
     x1, x2, x3, x4 > 0.

(7.3.8)

In (7.3.8), an optimal solution is z = (4, 1, 4, 0) and the optimal value is f0 = 15 for Δb1 = Δb2 = Δb3 = 0. But when Δb1 ∈ [−2/5, 1/5], Δb2 = Δb3 = 0, f0 = 15 − 13Δb1, or when Δb2 ∈ [−8, 8], Δb1 = Δb3 = 0, f0 = 15 − (1/2)Δb2, the antinomy takes place in (7.3.8), i.e., antinomy takes place in (7.3.7). The reason is that, when we solve the constraint equations of the dual programming of (7.3.8)

sup (1/w0)^{w0} (1/(9̃w1))^{w1} (1/(1̃2w2))^{w2} (1/(4̃6w3))^{w3} (w1 + w2 + w3)^{w1+w2+w3}
s.t. w0 = 1,
     2w0 + w1 + 5w3 = 0,
     3w0 + w1 + 4w2 + 6w3 = 0,
     w0 + w1 + 2w2 + 5w3 = 0,
     2w0 + w1 + 6w2 + 4w3 = 0,
     w0, w1, w2, w3 ≥ 0,

we find that the fourth orthogonality equation reduces to a contradictory equation 0 = d4, and that w3 = −3 is negative. Hence there is no feasible solution to the dual programming.

It is therefore clear that the fuzzy posynomial geometric programming (7.1.3) can be changed into the fuzzy dual programming (D̃) for solution. Multi-objective fuzzy geometric programming can be discussed similarly.

The discussion of the antinomy problem helps us not only to make rational use of resources, but also to diagnose a variety of systems in order to improve them, so that their hidden potential can be tapped and business management made more effective.
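The infeasibility is easy to confirm by hand. The short check below (our own illustration) fixes w0 = 1, solves three of the orthogonality equations exactly, and shows that the remaining one is violated while w3 is negative:

```python
from fractions import Fraction as F

# Orthogonality equations with w0 = 1 (one per variable x1, x2, x3, x4):
#   x1:  w1          + 5*w3 = -2
#   x2:  w1 + 4*w2 + 6*w3 = -3
#   x3:  w1 + 2*w2 + 5*w3 = -1
#   x4:  w1 + 6*w2 + 4*w3 = -2
w2 = F(1, 2)                # eq(x3) - eq(x1):  2*w2 = 1
w3 = -1 - 4 * w2            # eq(x2) - eq(x1):  4*w2 + w3 = -1  ->  w3 = -3
w1 = -2 - 5 * w3            # eq(x1)            ->  w1 = 13
residual = w1 + 6 * w2 + 4 * w3 - (-2)      # eq(x4) is violated by 6
print(w1, w2, w3, residual)
```

Both failure modes from Theorem 7.3.5 appear at once: the system is inconsistent (residual ≠ 0) and w3 < 0.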

7.4 Geometric Programming with Fuzzy Coefficients

Suppose that x = (x1, x2, ..., xm)ᵀ is an m-dimensional variable vector and c̃_{ik} (0 ≤ i ≤ p, 1 ≤ k ≤ Ji) are interval fuzzy numbers. Then the programming

min Σ_{k=1}^{J0} c̃_{0k} Π_{l=1}^{m} x_l^{γ_{0kl}}
s.t. Σ_{k=1}^{Ji} c̃_{ik} Π_{l=1}^{m} x_l^{γ_{ikl}} ⊆ 1̃ (1 ≤ i ≤ p),        (7.4.1)
     x > 0

is called a geometric programming with fuzzy coefficients, where c̃_{ik} and 1̃ are interval fuzzy numbers, written c̃_{ik} = ∪_{α∈[0,1]} α[c−_{ikα}, c+_{ikα}] (1 ≤ k ≤ Ji, 0 ≤ i ≤ p) and 1̃ = ∪_{α∈[0,1]} α[1−_α, 1+_α]; [c−_{ikα}, c+_{ikα}] is an interval number whose left and right endpoints c−_{ikα} and c+_{ikα} are real numbers, and γ_{ikl} is an arbitrary real number.


7.4.1 Constraint Function with Fuzzy Coefficients

In geometric programming, if the coefficients in the constraints are fuzzy numbers, i.e.,

min Σ_{k=1}^{J0} c_{0k} Π_{l=1}^{m} x_l^{γ_{0kl}}
s.t. ∪_{α∈[0,1]} α[Σ_{k=1}^{Ji} c−_{ikα} Π_{l=1}^{m} x_l^{γ_{ikl}}, Σ_{k=1}^{Ji} c+_{ikα} Π_{l=1}^{m} x_l^{γ_{ikl}}] ⊆ ∪_{α∈[0,1]} α[1−_α, 1+_α] (1 ≤ i ≤ p),        (7.4.2)
     x > 0,

it is called a geometric programming with fuzzy-valued coefficients in the constraint conditions.

Theorem 7.4.1. ∀α ∈ [0,1], Problem (7.4.2) is equivalent to

min Σ_{k=1}^{J0} c_{0k} Π_{l=1}^{m} x_l^{γ_{0kl}}
s.t. Σ_{k=1}^{Ji} c−_{ikα} Π_{l=1}^{m} x_l^{γ_{ikl}} ≥ 1−_α,        (7.4.3)
     Σ_{k=1}^{Ji} c+_{ikα} Π_{l=1}^{m} x_l^{γ_{ikl}} ≤ 1+_α (1 ≤ i ≤ p),
     α ∈ [0,1], x > 0.

If x̄_α is an optimal solution to (7.4.3), then x̃ = ∪_{α∈[0,1]} αx̄_α is a fuzzy optimal solution to (7.4.2).

Proof: It is easy to prove by means of the properties of an interval number. Because

[Σ_{k=1}^{Ji} c−_{ik} Π_{l=1}^{m} x_l^{γ_{ikl}}, Σ_{k=1}^{Ji} c+_{ik} Π_{l=1}^{m} x_l^{γ_{ikl}}] ⊆ [1−, 1+]

is equivalent to

Σ_{k=1}^{Ji} c−_{ik} Π_{l=1}^{m} x_l^{γ_{ikl}} ≥ 1−,  Σ_{k=1}^{Ji} c+_{ik} Π_{l=1}^{m} x_l^{γ_{ikl}} ≤ 1+

for α ∈ [0,1], it is not difficult to prove the equivalence of (7.4.2) and (7.4.3) by the α-cut properties of fuzzy number operations, such that x̃ = ∪_{α∈[0,1]} αx̄_α is a fuzzy optimal solution to (7.4.2).

7.4.2 Objective Function with Fuzzy Coefficients

Let the objective coefficients c̃_{0k} of programming (7.4.1) be fuzzy numbers, i.e., c̃_{0k} = ∪_{α∈[0,1]} α[c−_{0kα}, c+_{0kα}] (1 ≤ k ≤ J0). Then (7.4.1) can be denoted by

min{g̃0(x) = Σ_{k=1}^{J0} c̃_{0k} Π_{l=1}^{m} x_l^{γ_{0kl}}}
s.t. Σ_{k=1}^{Ji} c_{ik} Π_{l=1}^{m} x_l^{γ_{ikl}} ≤ 1 (1 ≤ i ≤ p),        (7.4.4)
     x > 0.

Theorem 7.4.2. Programming (7.4.4) is equivalent to finding

min Σ_{k=1}^{J0} c−_{0kα} Π_{l=1}^{m} x_l^{γ_{0kl}}
s.t. Σ_{k=1}^{Ji} c_{ik} Π_{l=1}^{m} x_l^{γ_{ikl}} ≤ 1 (1 ≤ i ≤ p),        (7.4.5)
     x > 0

and

min Σ_{k=1}^{J0} c+_{0kα} Π_{l=1}^{m} x_l^{γ_{0kl}}
s.t. Σ_{k=1}^{Ji} c_{ik} Π_{l=1}^{m} x_l^{γ_{ikl}} ≤ 1 (1 ≤ i ≤ p),        (7.4.6)
     x > 0,

∀α ∈ [0,1]. If x−_α = (x−_{1α}, x−_{2α}, ..., x−_{mα}) and x+_α = (x+_{1α}, x+_{2α}, ..., x+_{mα}) represent optimal solutions to (7.4.5) and (7.4.6), respectively, then a fuzzy optimal solution to (7.4.4) is x̃ = ∪_{α∈[0,1]} α[x−_α, x+_α].

Proof: By the α-cut operation properties of fuzzy numbers,

(7.4.4) ⟺ min ∪_{α∈[0,1]} α[Σ_{k=1}^{J0} c−_{0kα} Π_{l=1}^{m} x_l^{γ_{0kl}}, Σ_{k=1}^{J0} c+_{0kα} Π_{l=1}^{m} x_l^{γ_{0kl}}]
s.t. Σ_{k=1}^{Ji} c_{ik} Π_{l=1}^{m} x_l^{γ_{ikl}} ≤ 1 (1 ≤ i ≤ p),        (7.4.7)
     x > 0.

Similar to Theorem 7.4.1, for a given α ∈ [0,1], finding optimal solutions to programmings (7.4.5) and (7.4.6) means getting an optimal solution to (7.4.7). If x−_α and x+_α are optimal solutions to (7.4.5) and (7.4.6), respectively, then x̃ = ∪_{α∈[0,1]} α[x−_α, x+_α] is an optimal solution to (7.4.7). Now the theorem holds by the arbitrariness of α ∈ [0,1].

7.4.3 Mixed with Fuzzy Coefficients in Objective and Constraints

When the coefficients c̃_{ik} (1 ≤ k ≤ Ji, 0 ≤ i ≤ p) in the objective and constraints are all fuzzy numbers, (7.4.1) is written as

min{g̃0(x) = Σ_{k=1}^{J0} c̃_{0k} Π_{l=1}^{m} x_l^{γ_{0kl}}}
s.t. Σ_{k=1}^{Ji} c̃_{ik} Π_{l=1}^{m} x_l^{γ_{ikl}} ⊆ 1̃ (1 ≤ i ≤ p),        (7.4.8)
     x > 0.

Theorem 7.4.3. Programming (7.4.8) is equivalent to, ∀α ∈ [0,1],

min Σ_{k=1}^{J0} c−_{0kα} Π_{l=1}^{m} x_l^{γ_{0kl}}
s.t. Σ_{k=1}^{Ji} c−_{ikα} Π_{l=1}^{m} x_l^{γ_{ikl}} ≥ 1−_α,        (7.4.9)
     Σ_{k=1}^{Ji} c+_{ikα} Π_{l=1}^{m} x_l^{γ_{ikl}} ≤ 1+_α (1 ≤ i ≤ p),
     x > 0

and

min Σ_{k=1}^{J0} c+_{0kα} Π_{l=1}^{m} x_l^{γ_{0kl}}
s.t. Σ_{k=1}^{Ji} c−_{ikα} Π_{l=1}^{m} x_l^{γ_{ikl}} ≥ 1−_α,        (7.4.10)
     Σ_{k=1}^{Ji} c+_{ikα} Π_{l=1}^{m} x_l^{γ_{ikl}} ≤ 1+_α (1 ≤ i ≤ p),
     x > 0.

If x−_α and x+_α represent optimal solutions to (7.4.9) and (7.4.10), respectively, then x̃ = ∪_{α∈[0,1]} αx_α = ∪_{α∈[0,1]} α[x−_α, x+_α] represents an optimal solution

to (7.4.8).

Proof: By the operation properties of fuzzy number cut sets, we know

(7.4.8) ⟺ min ∪_{α∈[0,1]} α[Σ_{k=1}^{J0} c−_{0kα} Π_{l=1}^{m} x_l^{γ_{0kl}}, Σ_{k=1}^{J0} c+_{0kα} Π_{l=1}^{m} x_l^{γ_{0kl}}]
s.t. ∪_{α∈[0,1]} α[Σ_{k=1}^{Ji} c−_{ikα} Π_{l=1}^{m} x_l^{γ_{ikl}}, Σ_{k=1}^{Ji} c+_{ikα} Π_{l=1}^{m} x_l^{γ_{ikl}}] ⊆ ∪_{α∈[0,1]} α[1−_α, 1+_α] (1 ≤ i ≤ p),        (7.4.11)
     x > 0.

Again, according to Theorem 7.4.1 and Theorem 7.4.2, finding (7.4.11) is equivalent to finding (7.4.9) and (7.4.10) ∀α ∈ [0,1]. Therefore, this theorem holds from the arbitrariness of α ∈ [0,1].
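Theorems 7.4.2 and 7.4.3 suggest a simple computational pattern: discretize α, solve the lower- and upper-coefficient crisp programs at each level, and assemble the α-cuts of the fuzzy optimum. The sketch below is our own illustration with a stand-in one-variable "solver"; the function and data are hypothetical, not from the text.

```python
import math

def fuzzy_solve(solve_crisp, levels):
    """Collect alpha-cut interval solutions from a crisp lower/upper-program solver."""
    cuts = {}
    for a in levels:
        x_lo = solve_crisp(which="lower", alpha=a)   # program of type (7.4.9)
        x_hi = solve_crisp(which="upper", alpha=a)   # program of type (7.4.10)
        cuts[a] = (x_lo, x_hi)
    return cuts

def toy_solver(which, alpha):
    # Toy problem: min c/x + x over x > 0 has the closed form x* = sqrt(c);
    # here the interval coefficient is c in [1 + alpha, 4 - 2*alpha].
    c = 1 + alpha if which == "lower" else 4 - 2 * alpha
    return math.sqrt(c)

cuts = fuzzy_solve(toy_solver, [0.0, 0.5, 1.0])
print(cuts)   # at alpha = 1 the two endpoints coincide (the core of the fuzzy optimum)
```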


7.5 Geometric Programming with (α, c) Coefficients

7.5.1 Introduction

Consider

min g̃0(x)
s.t. g̃i(x) ≤ 1̃ (1 ≤ i ≤ p′),        (7.5.1)
     g̃i(x) ≥ 1̃ (p′ + 1 ≤ i ≤ p),
     x > 0,

where g̃i(x) = Σ_{k=1}^{Ji} c̃_{ik} Π_{l=1}^{m} x_l^{γ_{ikl}} (0 ≤ i ≤ p) are (α, c) fuzzy functions of x, x = (x1, x2, ..., xm)ᵀ is an m-dimensional variable vector, c̃_{ik} (> 0) and 1̃ are all (α, c) fuzzy numbers, and γ_{ikl} is an arbitrary real number. We call (7.5.1) a fuzzy geometric programming with (α, c) fuzzy coefficients.

7.5.2 Nonfuzzification Model

Definition 7.5.1. g̃i* ≳ 0 represents "almost positive", defined by the equivalent form g̃i*(0) ≤ 1 − h, α*ᵀ_i G_i ≥ 0, where

g̃i* = g̃i for 0 ≤ i ≤ p′, and g̃i* = −g̃i for p′ + 1 ≤ i ≤ p,

with h standing for the degree of g̃i* ≳ 0: the larger h is, the stronger the meaning of "almost positive".

If an objective expectation value b̃0 is given by the decision makers, then (7.5.1) can be turned into the following form:

objective:  g̃0 = b̃0 G_{00} − Σ_{k=1}^{J0} c̃_{0k} G_{0k}(x) ≳ 0,
constraints:  g̃i = G_{00} − Σ_{k=1}^{Ji} c̃_{ik} G_{ik}(x) ≳ 0 (1 ≤ i ≤ p′),
              −g̃i = −G_{00} + Σ_{k=1}^{Ji} c̃_{ik} G_{ik}(x) ≳ 0 (p′ + 1 ≤ i ≤ p),
              x > 0,

where G_{00} = 1 and G_{ik}(x) = Π_{l=1}^{m} x_l^{γ_{ikl}} (0 ≤ i ≤ p).

Theorem 7.5.1. Given fuzzy coefficients c̃ = (α, c), where α = (α_{i1}, α_{i2}, ..., α_{iJi})ᵀ and c = (c_{i1}, c_{i2}, ..., c_{iJi})ᵀ (0 ≤ i ≤ p), and the fuzzy functions are denoted by


g̃i(x) = c̃_{i1} Π_{l=1}^{m} x_l^{γ_{i1l}} + c̃_{i2} Π_{l=1}^{m} x_l^{γ_{i2l}} + ··· + c̃_{iJi} Π_{l=1}^{m} x_l^{γ_{iJil}} = c̃G = (αᵀG_i, cᵀG_i);

here G_i = (Π_{l=1}^{m} x_l^{γ_{i1l}}, Π_{l=1}^{m} x_l^{γ_{i2l}}, ..., Π_{l=1}^{m} x_l^{γ_{iJil}})ᵀ, and its membership function is

μ_{g̃i}(g_i) = 1 − |g_i − αᵀG_i| / (cᵀ|G_i|), if G_i ≠ 0;
μ_{g̃i}(g_i) = 1, if G_i = 0 and g_i = 0;
μ_{g̃i}(g_i) = 0, if G_i = 0 and g_i ≠ 0,

where |G_i| = (|G_{i1}|, ..., |G_{iJi}|)ᵀ and μ_{g̃i}(g_i) = 0 whenever cᵀ|G_i| ≤ |g_i − αᵀG_i|.

Proof: We prove only the nontrivial case G_i ≠ 0; the other cases are self-evident. Because △ABD ∼ △AEF (shown in Figure 7.5.1), V/cᵀ_{ik_i} = (1 − h)/1, i.e., V = cᵀ_{ik_i}(1 − h), while

Fig. 7.5.1. Illustration of Expectation Value and Fuzzy Constraint Function

K = |g_i − Σα^T_{i(k_i−1)}G_{i(k_i−1)} − (α^T_{ik_i} − V)G_{ik_i}| / |G_{ik_i}|
  = (|g_i − Σα^T_{ik_i}G_{ik_i}| − c^T_{ik_i}(1 − h)|G_{ik_i}|) / |G_{ik_i}|.

Applying the similarity of right triangles, we have

1 − h = K / (Σc^T_{i(k_i−1)}|G_{i(k_i−1)}|)
⇒ 1 − h = (|g_i − Σα^T_{ik_i}G_{ik_i}| − c^T_{ik_i}(1 − h)|G_{ik_i}|) / (Σc^T_{i(k_i−1)}|G_{i(k_i−1)}|)
⇒ (1 − h)Σc^T_{i(k_i−1)}|G_{i(k_i−1)}| + (1 − h)c^T_{ik_i}|G_{ik_i}| = |g_i − Σα^T_{ik_i}G_{ik_i}|
⇒ 1 − h = |g_i − αᵀG_i| / (Σc^T_{ik_i}|G_{ik_i}|)
⇒ h = 1 − |g_i − αᵀG_i| / (cᵀ_{ik}|G_i|)  (c_{ik} > 0).
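The membership function in Theorem 7.5.1 is straightforward to implement. The sketch below is our own illustration; the sample α, c, G values are made up.

```python
def membership(gi, alpha, c, G):
    """Membership degree of a value gi under the (alpha, c) fuzzy function
    g~_i = (alpha^T G_i, c^T G_i) of Theorem 7.5.1 (illustrative sketch)."""
    if all(g == 0 for g in G):
        return 1.0 if gi == 0 else 0.0
    center = sum(a * g for a, g in zip(alpha, G))          # alpha^T G_i
    spread = sum(ci * abs(g) for ci, g in zip(c, G))       # c^T |G_i|
    return max(0.0, 1 - abs(gi - center) / spread)

# One term, alpha = (2,), c = (1,), G_i = (3,): center 6, spread 3.
print(membership(6, (2,), (1,), (3,)))    # 1.0 at the center
print(membership(7.5, (2,), (1,), (3,)))  # 0.5 halfway down the slope
print(membership(10, (2,), (1,), (3,)))   # 0.0 outside the support
```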

1−h=

Theorem 7.5.2. If a fuzzy coefficient is known to be c̃ = (α, c), with α = (α_{i1}, ..., α_{iJi})ᵀ and c = (c_{i1}, ..., c_{iJi})ᵀ, then

g̃i* ≳ 0 (1 ≤ i ≤ p) ⟺ (α*_{ik} − hc*_{ik})ᵀG_i ≥ 0 (1 ≤ k ≤ Ji, 0 ≤ i ≤ p).        (7.5.2)

Proof: From Definition 7.5.1, we know

g̃i* ≳ 0 ⟺ μ_{g̃i*}(0) = 1 − (α*ᵀ_{ik}G_i)/(c*ᵀ_{ik}G_i) ≤ 1 − h, α*ᵀ_{ik}G_i ≥ 0 (0 ≤ i ≤ p),

where G_i > 0; then

c*ᵀ_{ik}G_i − α*ᵀ_{ik}G_i ≤ (1 − h)c*ᵀ_{ik}G_i ⟹ (α*ᵀ_{ik} − hc*ᵀ_{ik})G_i ≥ 0.

Hence the theorem holds.

Theorem 7.5.3.

(7.5.1) ⟺ max h
s.t. (α*_{ik} − hc*_{ik})ᵀG_i(x) ≥ 0, h ∈ [0,1] (1 ≤ k ≤ Ji, 0 ≤ i ≤ p),        (7.5.3)
     x > 0,

where (α*_{ik} − hc*_{ik})ᵀG_i(x) = g_i*(x, h).


Proof: According to Refs. [Cao87a] and [Cao87b], what we want is to find

max_x μ_{D̃}(x), where μ_{D̃}(x) = min{μ_{g̃0*}(x), min_{1≤i≤p} μ_{g̃i*}(x)}.

Here, μ_{g̃0*}(x) and μ_{g̃i*}(x) represent the fuzzy objective and fuzzy constraint functions of (7.5.1), respectively; this is equivalent to making the height of the intersection of the objective and constraint memberships as large as possible. Therefore the theorem is proved from Theorem 7.5.2.

Theorem 7.5.4. Let X_h be the feasible solution set of (7.5.2) at level h. Then h1 < h2 ⟹ X*_{h1} ⊃ X*_{h2}.

The theorem is proved by means of (7.5.2) without difficulty. According to this theorem, we can choose a better constraint under the level h + r (r > 0 a small increment) by means of (7.5.2); we may suppose it to be the i-th constraint, whose left-hand side is regarded as a new objective function of the problem, such that (7.5.3) can be changed into finding

max{α*_{i0} G_0(x) + α*_{i1} G_1(x) + ··· + α*_{iJ0} G_{J0}(x)}
s.t. (α*_{ik} − (h + r)c*_{ik})ᵀG_i(x) ≥ 0, h ∈ (0,1) (1 ≤ k ≤ Ji, 0 ≤ i ≤ p),
     x > 0,

whose solution x* denotes an approximate solution to (7.5.3).

7.5.3 Algorithm and Numerical Example

Based on the theory above, we build algorithms for (7.5.1). Because (7.5.1) ⟺ (7.5.3), we have the following.

Algorithm I. Choose the i-th constraint inequality in (7.5.2), solve for h and substitute it into the objective function and the remaining constraints of (7.5.3), obtaining a determined geometric programming. Then find its optimal solution by the direct algorithm in Refs. [Cao87a], [Cao87b] and [Cao93a].

Algorithm II. Turn (7.5.2) into (7.5.3) and write down the dual form of (7.5.3); solve its dual problem by the dual algorithm in Refs. [Cao89a] and [Cao93a], so obtaining an optimal solution to (7.5.2).

Algorithm III. [Cao92a][RT91] We have the following.
1° Define the lower and upper bounds for h; we suppose h−_0 = 0, h+_0 = 1 for θ = 0 in (7.5.3).
2° Fix h_{θ+1}, letting h_{θ+1} = small end + (big end − small end) × 0.618.


Here "small end" and "big end" mean the left and right endpoint values of the current interval. If |h+_θ − h−_θ| < ε (ε a sufficiently small positive number), then we take h* = h_{θ+1} and stop; otherwise we go on to 3°.
3° If there exists a feasible solution set X for h = h_{θ+1}, move ahead to 4°; otherwise, go back to 2°, letting h+_{θ+1} = h_θ, h−_{θ+1} = h−_θ.
4° Let x* ∈ X. We define h̄_n = min{g_0*(x, h), max_i g_i*(x, h)}, take h−_{θ+1} = h̄_n, h+_{θ+1} = h+_θ, and turn back to 2°. Continuing in this way, we can find an approximate optimal solution to (7.5.3).
It is easy to compose an approximate fuzzy optimal value for (7.5.1) once the optimal solution of (7.5.3) has converged, by any of the three algorithms mentioned above.
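Steps 1°-4° amount to shrinking the interval [h−, h+] by the 0.618 rule while keeping h feasible. A minimal sketch (our own, with a stand-in feasibility oracle in place of actually solving (7.5.3)):

```python
def search_h(feasible, eps=1e-4):
    """0.618-rule shrinkage of [h-, h+] in the spirit of Algorithm III."""
    lo, hi = 0.0, 1.0                     # step 1: bounds for h
    while hi - lo >= eps:
        h = lo + (hi - lo) * 0.618        # step 2: trial level
        if feasible(h):
            lo = h                        # step 4: feasible, raise the lower bound
        else:
            hi = h                        # step 3: infeasible, lower the upper bound
    return lo

# Toy oracle: pretend every level up to 0.7 is feasible for (7.5.3).
h_star = search_h(lambda h: h <= 0.7)
print(h_star)   # close to 0.7
```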

Example 7.5.1: Solve the fuzzy posynomial geometric programming

min 3̃7 x1⁻¹x2⁻¹x3⁻¹ + 3̃8.5 x2⁻¹x3⁻¹
s.t. 1̃.5 x1x3 + 0̃.9 x1x2 ≤ 4̃.5,
     x1, x2, x3 > 0,

where 3̃7 = (37, 6), 3̃8.5 = (38.5, 3), 1̃.5 = (1.5, 1), 0̃.9 = (0.9, 0.2), 4̃.5 = (4.5, 1), and suppose the expected objective value to be 6̃4 = (64, 8).

Solution: Turn the objective and constraint of the problem into

6̃4 − 3̃7 x1⁻¹x2⁻¹x3⁻¹ − 3̃8.5 x2⁻¹x3⁻¹ ≳ 0,        (7.5.4)
4̃.5 − 1̃.5 x1x3 − 0̃.9 x1x2 ≳ 0,
x1, x2, x3 > 0;

here we suppose the fuzzy sets c̃i (i = 0, 1) to be

c̃0 = {α*_0 = (64, −37, −38.5)ᵀ, c*_0 = (8, 6, 3)ᵀ},
c̃1 = {α*_1 = (4.5, −1.5, −0.9)ᵀ, c*_1 = (1, 1, 0.2)ᵀ}.

According to formula (7.5.3) in Theorem 7.5.3, (7.5.4) can be changed into

max h
s.t. 64 − 8h + (−37 − 6h)x1⁻¹x2⁻¹x3⁻¹ + (−38.5 − 3h)x2⁻¹x3⁻¹ ≥ 0,
     4.5 − h + (−1.5 − h)x1x3 + (−0.9 − 0.2h)x1x2 ≥ 0,
     x1, x2, x3 > 0, h ∈ [0,1],

and different optimal solutions can be obtained for different levels h. A decision maker may select k levels and compare the k groups of optimal solutions obtained, among which the best is the most satisfactory solution. In this example, if we choose h = 0.5, applying a dual algorithm we obtain a unique feasible solution, i.e., a dual optimal solution


W* = (w01, w02, w11, w12; h)ᵀ = (2/3, 1/3, 1/3, 1/3; 0.5)ᵀ,

such that a unique feasible solution, i.e., an optimal solution, is obtained for the primal problem:

X* = (x1, x2, x3; h)ᵀ = (2, 1, 1/2; 1/2)ᵀ,

and the optimal value is 60.

7.5.4 Extension

The geometric programming with fuzzy parametric (α, c) coefficients can be extended to

min Σ_{k=1}^{J0} c̃_{0k} Π_{l=1}^{m} x_l^{γ̃_{0kl}}
s.t. Σ_{k=1}^{Ji} c̃_{ik} Π_{l=1}^{m} x_l^{γ̃_{ikl}} ⊗ 1̃ (1 ≤ i ≤ p),        (7.5.5)
     x > 0;

then it can be changed into a geometric programming with h and β as parameters, as follows. If "⊗" is taken to be "≤", we have

min g0(x, h, β)
s.t. g_i(x, h, β) ≤ 1 (1 ≤ i ≤ p),
     x > 0, h, β ∈ [0,1].

If "⊗" is taken to be "≤" for 1 ≤ i ≤ p′ and "≥" for p′ + 1 ≤ i ≤ p, we have

min g0(x, h, β)
s.t. g_i(x, h, β) ≤ 1 (1 ≤ i ≤ p′),
     g_i(x, h, β) ≥ 1 (p′ + 1 ≤ i ≤ p),
     x > 0, h, β ∈ [0,1],

where g_i(x, h, β) = Σ_{k=1}^{Ji} c̃⁻¹_{ik}(h) Π_{l=1}^{m} x_l^{γ̃⁻¹_{ikl}(β)} (0 ≤ i ≤ p). The most satisfactory solution can be found by the methods mentioned above.

7.5.5 Conclusion

A series of results can be concluded from the discussion above.
i) Any fuzzy geometric programming (7.5.5) with (α, c) coefficients can be completely turned into an ordinary geometric programming (7.5.3) with a parameter h.
ii) Programming problem (7.5.5) has the same degree of difficulty as (7.5.3).
iii) The same holds when an exponent in (7.5.5) is a fuzzy number as in formula (1.5.3) of Section 1.5.3 in Chapter 1.

7.6 Geometric Programming with L-R Coefficients

Consider a posynomial geometric programming like (7.5.1) [Cao94a]:

min g̃0(x)
s.t. g̃i(x) ≤ 1̃ (1 ≤ i ≤ p),        (7.6.1)
     x > 0,

where x = (x1, x2, ..., xm)ᵀ is an m-dimensional variable vector,

g̃i(x) = Σ_{k=1}^{Ji} c̃_{ik} Π_{l=1}^{m} x_l^{γ_{ikl}} = (Σ_{k=1}^{Ji} c_{ik} Π_{l=1}^{m} x_l^{γ_{ikl}}, Σ_{k=1}^{Ji} c̲_{ik} Π_{l=1}^{m} x_l^{γ_{ikl}}, Σ_{k=1}^{Ji} c̄_{ik} Π_{l=1}^{m} x_l^{γ_{ikl}})_{LR}

(0 ≤ i ≤ p) is a posynomial; 1̃ = (1, 1, 1)_{LR} and c̃_{ik} = (c_{ik}, c̲_{ik}, c̄_{ik})_{LR} are all L-R fuzzy numbers, and γ_{ikl} is an arbitrary real number. We call (7.6.1) a geometric programming with L-R fuzzy coefficients.

7.6.1 Properties

Definition 7.6.1. Define l̃og c = (log c, log c̲, log c̄)_{LR} and ẽ^c = (e^c, e^{c̲}, e^{c̄})_{LR} as the L-R logarithm and the L-R exponent, respectively, and define

min f̃ ⟺ (min f, max f̲, min f̄) (resp. max f̃ ⟺ (max f, min f̲, max f̄)), and f̃ ≤ b̃ ⟺ f ≤ b, f̲ ≥ b̲, f̄ ≤ b̄.

Because min g̃0(x) is equivalent to min g0(x), max g̲0(x) and min ḡ0(x), while g̃i(x) ≤ 1̃ is equivalent to g_i(x) ≤ 1, g̲_i(x) ≥ 1 and ḡ_i(x) ≤ 1, and noting that min f̲ = max(−f̲), the nonfuzzification form of (7.6.1) means

min g0(x) = Σ_{k=1}^{J0} c_{0k} Π_{l=1}^{m} x_l^{γ_{0kl}},
max g̲0(x) = Σ_{k=1}^{J0} c̲_{0k} Π_{l=1}^{m} x_l^{γ_{0kl}},
min ḡ0(x) = Σ_{k=1}^{J0} c̄_{0k} Π_{l=1}^{m} x_l^{γ_{0kl}}
s.t. g_i(x) ≤ 1, g̲_i(x) ≥ 1, ḡ_i(x) ≤ 1 (1 ≤ i ≤ p),
     x > 0.


Let xl = e^{zl} (1 ≤ l ≤ m). Then g̃i(x) is deformed into

G̃i(z) = Σ_{k=1}^{Ji} c̃_{ik} exp(Σ_{l=1}^{m} γ_{ikl} z_l) (0 ≤ i ≤ p),

such that (7.6.1) can be changed into

min G̃0(z)
s.t. G̃i(z) ≤ 1̃ (1 ≤ i ≤ p).        (7.6.2)

It is easy to prove the following theorems and corollaries from the definition of L-R numbers.

Theorem 7.6.1. G̃i(z) is a fuzzy convex function for all i (0 ≤ i ≤ p), so the deformed posynomial geometric programming (7.6.2) with L-R coefficients is a fuzzy convex programming whose fuzzy local minimum solution is its fuzzy global minimum.

Corollary 7.6.1. Any strict fuzzy local minimum solution to (7.6.2) is its fuzzy global minimum.

Theorem 7.6.2. Let (7.6.2) be a strongly fuzzy convex programming problem. Then its fuzzy local minimum solution is its unique fuzzy global minimum.

Theorem 7.6.3. Given c̃_k > 0̃ (1 ≤ k ≤ J), then

l̃og G̃(z) = l̃og Σ_{k=1}^{J} c̃_k exp{Σ_{l=1}^{m} γ_{kl} z_l}

is a fuzzy convex function of z.

Proof: From Definition 7.6.1 and Theorem 7.6.1, the result can be proved as in Theorem 2.3 of Refs. [Cao87a] and [Cao87b].

7.6.2 Fuzzy Model

Suppose only the constraints of (7.6.1) have L-R number coefficients. Obviously, (7.6.1) is equivalent to

min x0
s.t. x0⁻¹ g0(x) ≤ 1,
     g̃i(x) ≤ 1̃ (1 ≤ i ≤ p),
     x0 > 0, x > 0,

which still represents a posynomial geometric programming containing L-R coefficients, with a very simple objective function. Now we may as well suppose g0(x) = x1. In practice we can estimate the range of the fuzzy optimal solutions, so we consider the fuzzy posynomial geometric programming with variables subject to lower and upper bounds below:


min x1
s.t. g̃i(x) ≤ 1̃ (1 ≤ i ≤ p),        (7.6.3)
     0 < x_L ≤ x ≤ x_U.

Let ε_{ik} = c*_{ik} Π_{l=1}^{m} (x0_l)^{γ_{ikl}} / g_i*(x0) (1 ≤ k ≤ Ji; 1 ≤ i ≤ p) for all x0 > 0. Then Σ_{k=1}^{Ji} ε_{ik} = 1, where c*_{ik} = c_{ik} + (c̲_{ik} + c̄_{ik})/2 and g_i*(x0) = g_i(x0) + (g̲_i(x0) + ḡ_i(x0))/2. Such that we have the following lemma.

Lemma 7.6.1. Let c̃*_i > 0̃, x_l > 0, γ*_{il} > 0. Then

g̃i*(x, x0) = c̃*_i Π_{l=1}^{m} x_l^{γ*_{il}} ≤ c̃*_i Σ_{l=1}^{m} γ*_{il} x_l = g̃i*(x),        (7.6.4)

where c̃*_i = Π_{k=1}^{Ji} (c̃*_{ik}/ε_{ik})^{ε_{ik}}, γ*_{il} = Σ_{k=1}^{Ji} γ_{ikl} ε_{ik}, Σ_{k=1}^{Ji} ε_{ik} = 1 (0 ≤ i ≤ p, 1 ≤ l ≤ m).

Proof: Since

c̃*_i Π_{l=1}^{m} x_l^{γ*_{il}} − c̃*_i Σ_{l=1}^{m} γ*_{il} x_l = c̃*_i (Π_{l=1}^{m} x_l^{γ*_{il}} − Σ_{l=1}^{m} γ*_{il} x_l),

and since for γ*_{il} > 0, x_l > 0 the ordinary geometric inequality gives Π_{l=1}^{m} x_l^{γ*_{il}} − Σ_{l=1}^{m} γ*_{il} x_l ≤ 0, while c̃*_i > 0, the definition of L-R numbers and their operation properties give

c̃*_i (Π_{l=1}^{m} x_l^{γ*_{il}} − Σ_{l=1}^{m} γ*_{il} x_l) ≤ 0.

Therefore, (7.6.4) holds. Since (7.6.4) holds, (7.6.3) is equivalent to a monomial posynomial geometric programming with L-R coefficients whose variables are limited by lower and upper bounds:

min x1
s.t. x ∈ F̃0, where F̃0 = {x | g̃i*(x, x0) ≤ 1̃ (1 ≤ i ≤ p), x_L ≤ x ≤ x_U}.

(7.6.5)
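The "ordinary geometric inequality" invoked in the proof of Lemma 7.6.1 is the weighted arithmetic-geometric mean inequality: for weights γ_l ≥ 0 with Σγ_l = 1, Π x_l^{γ_l} ≤ Σ γ_l x_l. A quick numerical spot-check (our own illustration):

```python
import math
import random

random.seed(0)
for _ in range(1000):
    x = [random.uniform(0.1, 10.0) for _ in range(4)]
    w = [random.random() for _ in range(4)]
    s = sum(w)
    w = [wi / s for wi in w]                 # normalize so the weights sum to 1
    geometric = math.prod(xl ** wl for xl, wl in zip(x, w))
    arithmetic = sum(wl * xl for wl, xl in zip(w, x))
    assert geometric <= arithmetic + 1e-9    # weighted AM-GM
print("ok")
```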

Theorem 7.6.4. If there exists a minimum solution x^k to (7.6.5) for a determinate k, then after finitely many steps x^k must denote a fuzzy optimal solution to (7.6.3); otherwise any limit point of {x^k} is a fuzzy optimal solution to (7.6.3).

Proof: The fuzzy posynomial geometric programming (7.6.3) is equivalent to

min x1
s.t. x ∈ F_j (j = 1, 2, 3),        (7.6.6)

where

F1 = {x | g_i(x) ≤ 1 (1 ≤ i ≤ p), 0 < x_L ≤ x ≤ x_U},
F2 = {x | g̲_i(x) ≥ 1 (1 ≤ i ≤ p), 0 < x_L ≤ x ≤ x_U},
F3 = {x | ḡ_i(x) ≤ 1 (1 ≤ i ≤ p), 0 < x_L ≤ x ≤ x_U}.

But (7.6.6) is equivalent to a monomial fuzzy posynomial geometric programming by means of (7.6.4) [WY82] as follows:

min x1
s.t. x ∈ F0_j (j = 1, 2, 3),
F0_1 = {x | g_i*(x, x0) ≤ 1 (1 ≤ i ≤ p), 0 < x_L ≤ x ≤ x_U},
F0_2 = {x | g̲_i*(x, x0) ≥ 1 (1 ≤ i ≤ p), 0 < x_L ≤ x ≤ x_U},
F0_3 = {x | ḡ_i*(x, x0) ≤ 1 (1 ≤ i ≤ p), 0 < x_L ≤ x ≤ x_U},

and its optimal solution x^k must be an optimal solution to (7.6.6); otherwise, any limit point of the sequence {x^k} must be an optimal solution to (7.6.6) [Shi81], and the theorem is true. This indicates that any multinomial posynomial geometric programming with L-R coefficients can be turned into a monomial one.

Now, we consider a monomial posynomial geometric programming with L-R coefficients only in the constraints:

min g0(x) = c0 Π_{l=1}^{m} x_l^{γ_{0l}}
s.t. g̃i(x) = c̃_i Π_{l=1}^{m} x_l^{γ_{il}} ≤ 1̃ (1 ≤ i ≤ p),        (7.6.7)
     x > 0.

Substituting z_l = log x_l, (7.6.7) is turned into

min Σ_{l=1}^{m} γ_{0l} z_l

s.t. Σ_{l=1}^{m} γ_{il} z_l + log c̃_i ≤ 0 (1 ≤ i ≤ p).        (7.6.8)

Theorem 7.6.5. The fuzzy optimal solution of programming (7.6.7) is also that of (7.6.8).

Proof: Because

c̃_i Π_{l=1}^{m} x_l^{γ_{il}} ≤ 1̃ ⟺ c_i Π_{l=1}^{m} x_l^{γ_{il}} ≤ 1, c̲_i Π_{l=1}^{m} x_l^{γ_{il}} ≥ 1, c̄_i Π_{l=1}^{m} x_l^{γ_{il}} ≤ 1,

taking logarithms on both sides of each formula above gives

Σ_{l=1}^{m} γ_{il} z_l + log c_i ≤ 0,  Σ_{l=1}^{m} γ_{il} z_l + log c̲_i ≥ 0,  Σ_{l=1}^{m} γ_{il} z_l + log c̄_i ≤ 0,

which is equivalent to Σ_{l=1}^{m} γ_{il} z_l + log c̃_i ≤ 0; then the theorem holds.
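Componentwise, the monomial constraint of (7.6.7) becomes the linear constraints of (7.6.8) under z = log x. The small numerical illustration below is our own; the coefficient components and exponents are made up.

```python
import math

def lr_constraint_ok(c, c_lo, c_hi, gamma, x):
    """Theorem 7.6.5 componentwise check of c~ * prod(x_l^g_l) <= 1~ in log form:
         sum(g_l * z_l) + log(c)    <= 0,
         sum(g_l * z_l) + log(c_lo) >= 0,
         sum(g_l * z_l) + log(c_hi) <= 0,   with z_l = log(x_l)."""
    s = sum(g * math.log(v) for g, v in zip(gamma, x))
    return (s + math.log(c) <= 1e-12
            and s + math.log(c_lo) >= -1e-12
            and s + math.log(c_hi) <= 1e-12)

# Hypothetical coefficient components (c, c_lo, c_hi) = (0.5, 2.0, 0.6)
# for the monomial g(x) = c~ * x1 * x2**(-1):
print(lr_constraint_ok(0.5, 2.0, 0.6, (1, -1), (1.0, 2.0)))   # True
print(lr_constraint_ok(0.5, 2.0, 0.6, (1, -1), (4.0, 1.0)))   # False
```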

7.6.3 Numerical Example

Example 7.6.1: Suppose we make a box of about 400 m³, with a bottom but without a cover, used to transport chemical raw materials. The bottom and both sides of the box are made of 4 m² of special material with negligible cost. The material of both ends costs 20 per m², and the transporting expense of each box is a little more than 0.1. How much does it cost to transport about 400 m³ of chemical raw materials?

Solution: Let x1, x2, x3 represent the length, width and height of the box, respectively. Then the cost equals the total cost of transportation plus the material for the ends of the box, i.e.,

g̃0(x) = 4̃0 x1⁻¹x2⁻¹x3⁻¹ + 40 x2x3.

Again, because the area of the bottom plus both sides is at most 4 m², we have g1(x) = 2x1x3 + x1x2 ≤ 4. The problem reduces to the fuzzy posynomial geometric programming

min (4̃0 x1⁻¹x2⁻¹x3⁻¹ + 40 x2x3)
s.t. 2x1x3 + x1x2 ≤ 4,        (7.6.9)
     x1, x2, x3 > 0,

where 4̃0 = (40, 1, 9). Let w01 = 2/3, w02 = 1/3, w11 = w12 = 1/2. Then (7.6.9) can be turned into the following form by means of (7.6.4):

where 7 40 = (40, 1, 9). Let w01 =

1 7 −1 −1 −1 2 % 40x1 x2 x3 3 40x2 x3 3 min 2 1 3

s.t.

2x1 x3 12 x1 x2 12 1 2

1 2

3

4,

x1 , x2 , x3 > 0, i.e.,

2

zi =log xi

⇐⇒

1

1

3 −3 −3 % 40) ˜ 23 (270) 31 x− min( 1 x2 x3 1 1 √ s.t. x1 x22 x33 2 x1 , x2 , x3 > 0

% 2 log7 40 + 13 log270 − 23 z1 − 13 z2 − 13 z3 } min{ 3 √ 1 1 s.t. z1 + z2 + z3 log 2. 2 2 √ 1 We obtain z1 = log 2, z2 = z3 = 0, or z1 = log 2, z2 = 0, z3 = log , and 2 1 then x1 = 2, x2 = x3 = 1, or x1 = 2, x2 = 1, x3 = are optimal solutions and 2 2 1 a fuzzy optimal value is g˜0∗ = (7 40) 3 · (270) 3 · 0.7937.


If we change 4̃0 into 39, then g0* = 58.996, and if we change 4̃0 into 49, then g0* = 68.692; so transporting roughly 400 m³ of materials will cost between 58.996 and 68.692. If the weight ε is properly chosen, a superior budget expense is obtained.
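The endpoint values can be reproduced directly from the condensed monomial (our own check of the arithmetic):

```python
import math

def g0_star(c):
    # Condensed optimal value from Example 7.6.1:
    #   g0* = c**(2/3) * 270**(1/3) * x1**(-2/3) * x2**(-1/3) * x3**(-1/3)
    # at the optimal point x = (sqrt(2), 1, 1); c runs over the components
    # 40, 39, 49 of the L-R coefficient (40, 1, 9).
    x1, x2, x3 = math.sqrt(2), 1.0, 1.0
    return c ** (2 / 3) * 270 ** (1 / 3) * x1 ** (-2 / 3) * x2 ** (-1 / 3) * x3 ** (-1 / 3)

print(round(g0_star(40), 3))   # 60.0 (exactly 60: the cube root of 216000)
print(round(g0_star(39), 3))   # 58.996, the book's lower endpoint
print(round(g0_star(49), 3))   # 68.692, the book's upper endpoint
```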

7.7 Geometric Programming with Flat Coefficients

Consider a general fuzzy geometric programming [Cao95a][Cao00a]

min g̃0(x)
s.t. g̃i(x) ⊗ 1̃ (1 ≤ i ≤ p),        (7.7.1)
     x > 0,

where g̃i(x) = Σ_{k=1}^{Ji} c̃_{ik} Π_{l=1}^{m} x_l^{γ_{ikl}} (1 ≤ i ≤ p) is a flat function of x, x = (x1, x2, ..., xm)ᵀ is an m-dimensional variable vector, c̃_{ik} > 0 and 1̃ are flat numbers, and γ_{ikl} is an arbitrary real number. The sign "⊗" is an aggregation of "≤" and "≥": it is taken to be "≤" for 1 ≤ i ≤ p′ and "≥" for p′ + 1 ≤ i ≤ p, respectively. We call (7.7.1) a fuzzy geometric programming with flat coefficients.

7.7.1 Change of Fuzzy Objective Function

In (7.7.1), the objective function is denoted by

g̃0(x) = Σ_{k=1}^{J0} c̃_{0k} Π_{l=1}^{m} x_l^{γ_{0kl}} = (g0−(x), g0+(x), σ−_{g0(x)}, σ+_{g0(x)})
= (Σ_{k=1}^{J0} c−_{0k} Π_{l=1}^{m} x_l^{γ_{0kl}}, Σ_{k=1}^{J0} c+_{0k} Π_{l=1}^{m} x_l^{γ_{0kl}}, Σ_{k=1}^{J0} σ−_{c0k} Π_{l=1}^{m} x_l^{γ_{0kl}}, Σ_{k=1}^{J0} σ+_{c0k} Π_{l=1}^{m} x_l^{γ_{0kl}}).        (7.7.2)

An expected objective is written as F̃ = (F̲, F̄, 0, σ+_F); the intersection of g̃0(x) with the expected objective function F̃ is pictured in the following figure:

¼ ¼

¼

¼ ¼ ¼ Fig. 7.7.1. Relationship between g˜0 (x) and F˜

230

7 Fuzzy Geometric Programming

Theorem 7.7.1. Given that g˜0 (x) is like (7.7.2), it intersects an expected object F˜ = (F , F , 0, σF+ ), then min g˜0 (x) is equivalent to J0

min

k=1 J0

c− 0k

σc−0k

k=1

m )

xγl 0kl − F

l=1 m )

(7.7.3)

. xγl 0kl + σF+

l=1

Proof: Let the equation of AB, CD denote (shown in Figure 7.7.1) h0 − 1 = −

1 (x − F ) σF+

(7.7.4)

and h0 − 1 =

J0

c− 0k

m )

xγl 0kl l=1 J0 m − ) γ0kl σ0k xl k=1 l=1

x−

k=1

(7.7.5)

,

respectively. Find (7.7.4) and (7.7.5), and then J0

hk = 1 −

k=1 J0

c− 0k

σc−0k

k=1

m ) l=1 m ) l=1

xγl 0kl − F . xγl 0kl

+

σF+

Since PD(F˜ , g˜0 )

max

min{F (x), g˜0 (y)} = min{1, hgt(inf g˜0 sup F˜ )}, {(x,y)|x>y}

where, hgt(inf g˜0 sup F˜ ) stands for the nonnegative height in the intersection of a decrease at right end side for μF˜ (x) and an increase at left one for g˜0 (x), we have PD(F˜ , g˜0 ) = h0 . According to the judgment criterion, min g˜0 (x) means making hk as high as possible, i.e., max hk , which is equivalent to the truth of (7.7.3). 7.7.2 Determination of Fuzzy Constraints ˜ = (1− , 1+ , σ1− , σ1+ ), according to Given g˜i (x) = (gi− (x), gi+ (x), σg−i (x) , σg+i (x) ), 1 the method in Ref. [Dia87],[Cao89b],[Cao00a] and [RT91], we may prove: ⎧ + + ⎪ ⎪ ⎪ gi (x) 1 , ⎪ ⎨ g + (x) + σ + 1+ + σ + , 1 i gi (x) g˜i (x) ⊆ ˜ 1 ⇐⇒ (7.7.6) ⎪ gi− (x) − σg− (x) 1− − σ1− , ⎪ i ⎪ ⎪ ⎩ g − (x) 1− . i

7.7 Geometric Programming with Flat Coeﬃcients

231

Deﬁnition 7.7.1. I(˜ gi (x), α) = {x|μg˜i (x) α, α ∈ [0, 1]} is called an α− level set for g˜i (x). The level set in Deﬁnition 7.7.1 denotes an open interval where on a gi (x), α) and a ﬁnal end real axis are embodied an initial end inf I(˜ x

sup I(˜ gi (x), α). By means of monotonicity of μg˜i (x) and μ˜1 (x), (7.7.6) is equivx

alent to

g˜i (x) ⊆ ˜ 1 ⇐⇒

⎧ ⎨ sup I(˜ gi (x), α) sup I(˜1, α), ∀α ∈ [0, 1], x

x

⎩ inf I(˜ gi (x), α) inf I(˜1, α), ∀α ∈ [0, 1]. x

If height hk = hgt(inf ˜ 1 x

x

sup g˜i (x)) > 0, where the left end side for μ˜1 (x) x

increases, whereas the right end side for μg˜i (x) decreases, then hgt(inf ˜ 1 x

g + − 1− + 1, 0} sup g˜i (x)) = max{ + i σgi (x) (x) + σ1− x $ 1, if gi+ (x) 1− , = < 1, if gi+ (x) < 1− ,

(7.7.7)

such that the degree of possibility of g˜i (x) superior to ˜1 is introduced by Dubois and Prade [DPr80] which represents the fuzzy extension for 1 g˜i (x) > ˜ PD(˜ gi (x), ˜ 1) =

max

min{μg˜i (x) (x), μ˜1 (y)} = min{1, hgt(inf ˜1 sup g˜i (x))}. {(x,y)|x>y}

Deﬁnition 7.7.2. Let θ ∈ [0, 1] be an expected level. Then g˜i (x) θ ˜ 1 iﬀ no g˜i (x) 1− − (1 − θ)σ1− , $ gi− (x) − (1 − θ)σg−i (x) 1+ + (1 − θ)σ1+ , ˜ iﬀ g˜i (x) ≈θ 1 gi+ (x) + (1 − θ)σg+i (x) 1− − (1 − θ)σ1− . Proof: From (7.7.8), (7.7.7) and Dubois’ proof [DPr80], we have 1+ − gi− (x) g˜i (x) θ ˜ 1 iﬀ + +1θ σ1 + σg−i (x) iﬀ

gi− (x)

− (1 −

θ)σg−i (x)

(7.7.9) +

1 + (1 −

θ)σ1+ ,

g + (x) − 1− g˜i (x) >θ ˜ 1 iﬀ i+ +1>θ σgi (x) + σ˜1+ −

iﬀ 1 − (1 −

θ)σ1−

0. Ji

c+ ik

m )

xγl ikl

+

But (7.7.11) is equivalent to $ max 1 −

J0 k=1 J0 k=1

c− 0k

σc−0k

m ) l=1 m ) l=1

xγl 0kl − F * , xγl 0kl + σF+

(7.7.12)

7.7 Geometric Programming with Flat Coeﬃcients

233

hence (7.7.11) and (7.7.12) are equivalent to max θ s.t.

J0

− [c− 0k − (1 − θ)σc0k ]

k=1

m :

xγl ikl F + (1 − θ)σF+ ,

(7.7.13)

l=1

together with the constraints of (7.7.12) and $\theta\in[0,1]$.

B. When "⊗" in (7.7.1) is selected as "≲" for $1\le i\le p'$ and "≳" for $p'+1\le i\le p$, then from (7.7.7) and (7.7.8) we know that (7.7.1) holds iff
\[
\begin{aligned}
\max\;&PD(\tilde F,\tilde g_0(x))\\
\text{s.t. }&PD(\tilde 1,\tilde g_i(x))\ge\theta\quad(1\le i\le p'),\\
&PD(\tilde g_i(x),\tilde 1)\ge\theta\quad(p'+1\le i\le p),\\
&\theta\in[0,1],\;x>0,
\end{aligned}
\]
which, treating the objective as in (7.7.11), is equivalent to
\[
\left.
\begin{aligned}
&\sum_{k=1}^{J_i}\bigl[c_{ik}^- -(1-\theta)\sigma_{c_{ik}}^-\bigr]\prod_{l=1}^{m}x_l^{\gamma_{ikl}}\le 1^+ +(1-\theta)\sigma_1^+ &&(1\le i\le p'),\\
&\sum_{k=1}^{J_i}\bigl[c_{ik}^+ +(1-\theta)\sigma_{c_{ik}}^+\bigr]\prod_{l=1}^{m}x_l^{\gamma_{ikl}}\ge 1^- -(1-\theta)\sigma_1^- &&(p'+1\le i\le p),\\
&\theta\in[0,1],\;x_1,x_2,\cdots,x_m>0
\end{aligned}
\right\}
\tag{7.7.14}
\]

\[
\iff\quad
\begin{aligned}
\max\;&\theta\\
\text{s.t. }&\sum_{k=1}^{J_0}\bigl[c_{0k}^- -(1-\theta)\sigma_{c_{0k}}^-\bigr]\prod_{l=1}^{m}x_l^{\gamma_{0kl}}\le F+(1-\theta)\sigma_F^+,
\end{aligned}
\tag{7.7.15}
\]
together with the constraints of (7.7.14). Comparing A with B, (7.7.13) contains $3p$ more constraints than (7.7.15). Accordingly, we take only (7.7.15) into consideration. In order to handle negative terms in the constraints, we introduce a sign
\[
\delta_{ik}=
\begin{cases}
\delta_i, & 1\le k\le S_i,\\
-\delta_i, & S_i+1\le k\le J_i,
\end{cases}
\qquad(1\le k\le J_i,\;0\le i\le p),
\]
where $S_i$ is the number of terms whose sign agrees with the sign $\delta_i$ of the $i$-th constraint function. If the $i$-th constraint contains negative terms, it can be written uniquely as


\[
\bar g_i(x)=\delta_i\Bigl[1^+ +(1-\theta)\sigma_1^+ -\delta_i\sum_{k=1}^{J_i}\delta_{ik}\bigl[c_{ik}^- -(1-\theta)\sigma_{c_{ik}}^-\bigr]\prod_{l=1}^{m}x_l^{\gamma_{ikl}}\Bigr]
=\delta_i\bigl[1^+ +(1-\theta)\sigma_1^+ -P_i+N_i\bigr]\ge 0
\]
\[
\iff
\begin{cases}
\delta_i\bigl[1^+ +(1-\theta)\sigma_1^+ +x_{i0}^{-1}P_i\bigr]\ge 0,\\
\delta_i\bigl[1^+ +(1-\theta)\sigma_1^+ -x_{i0}^{-1}-x_{i0}^{-1}N_i\bigr]\ge 0,
\end{cases}
\]
where
\[
P_i=\sum_{k=1}^{S_i}\bigl[c_{ik}^- -(1-\theta)\sigma_{c_{ik}}^-\bigr]\prod_{l=1}^{m}x_l^{\gamma_{ikl}}>0,\qquad
N_i=\sum_{k=S_i+1}^{J_i}\bigl[c_{ik}^- -(1-\theta)\sigma_{c_{ik}}^-\bigr]\prod_{l=1}^{m}x_l^{\gamma_{ikl}}>0,
\]

Here $x_{i0}$ is a new auxiliary variable, introduced so that an inequality-constraint polynomial with arbitrary coefficient signs can be turned into monomial form. In (7.7.15), let $b_0=F+(1-\theta)\sigma_F^+>0$, $b_1=1^+ +(1-\theta)\sigma_1^+>0$, $b_2=1^- -(1-\theta)\sigma_1^->0$. Then (7.7.15) can be turned into an ordinary reverse posynomial geometric programming
\[
\begin{aligned}
\max\;\theta&=\min(-\theta)\\
\text{s.t. }&\frac{1}{b_0}\sum_{k=1}^{J_0}\bigl[c_{0k}^- -(1-\theta)\sigma_{c_{0k}}^-\bigr]\prod_{l=1}^{m}x_l^{\gamma_{0kl}}\le 1,\\
&\frac{1}{b_1}\sum_{k=1}^{J_i}\bigl[c_{ik}^- -(1-\theta)\sigma_{c_{ik}}^-\bigr]\prod_{l=1}^{m}x_l^{\gamma_{ikl}}\le 1\quad(1\le i\le p'),\\
&\frac{1}{b_2}\sum_{k=1}^{J_i}\bigl[c_{ik}^+ +(1-\theta)\sigma_{c_{ik}}^+\bigr]\prod_{l=1}^{m}x_l^{\gamma_{ikl}}\ge 1\quad(p'+1\le i\le p),\\
&\theta\in[0,1],\;x_1,x_2,\cdots,x_m>0,
\end{aligned}
\tag{7.7.16}
\]
such that we can obtain the following.

Theorem 7.7.3. The fuzzy posynomial geometric programming (7.7.1) has a fuzzy optimal solution if and only if the reverse posynomial geometric programming (7.7.16) with parameter $\theta$ has a parameter optimal solution.

Algorithm. Problem (7.7.16) can be solved in several ways; for example, we can turn (7.7.1) into (7.7.16) and solve the latter directly. Here we introduce a new algorithm, with the following steps:

1° Define the lower and the upper bounds for $\theta$: set $\theta_0^-=0$, $\theta_0^+=1$ for $l=0$.


2° Fix $\theta_{l+1}$ by $\theta_{l+1}=\theta_l^- +(\theta_l^+-\theta_l^-)\times 0.618$, where $\theta_l^-$ and $\theta_l^+$ are the left and right endpoint values of the current interval. If $|\theta_l^+-\theta_l^-|<\varepsilon$ ($\varepsilon$ a sufficiently small positive number), take $\theta^*=\theta_{l+1}$ and stop; otherwise go on to 3°.

3° If there exists a feasible solution set $X$ for $\theta=\theta_{l+1}$, move on to 4°; otherwise set $\theta_{l+1}^+=\theta_{l+1}$, $\theta_{l+1}^-=\theta_l^-$ and turn back to 2°.

4° Let $x^*\in X$. Define
\[
\theta_0=\min\Bigl\{PD(\tilde F,\tilde g_0(x)),\;\min_{1\le i\le p'}PD(\tilde g_i(x),\tilde 1),\;\min_{p'+1\le i\le p}PD(\tilde 1,\tilde g_i(x))\Bigr\},
\]
set $\theta_{l+1}^-=\theta_0$, $\theta_{l+1}^+=\theta_l^+$, and turn back to 2°.

Continuing in this way we find an approximate optimal solution to (7.7.15), and from it an approximate fuzzy optimal value for (7.7.1). Finally, we point out that after (7.7.1) is turned into (7.7.11), (7.7.12) or (7.7.16), it can be solved by the primal or the dual algorithm.
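The 0.618-search of steps 1°–4° can be sketched as follows. This is a minimal Python illustration under simplifying assumptions: `is_feasible` stands in for an external check (e.g., a GP solver run on (7.7.16) at a fixed θ), and a plain lower-bound update replaces the θ₀ refinement of step 4°.

```python
def max_feasible_theta(is_feasible, eps=1e-4):
    """Search [0, 1] for (approximately) the largest expected level theta
    at which the parametric program stays feasible; `is_feasible(theta)`
    is a user-supplied oracle."""
    lo, hi = 0.0, 1.0
    while hi - lo >= eps:
        theta = lo + (hi - lo) * 0.618   # step 2 of the algorithm
        if is_feasible(theta):
            lo = theta                    # feasible: raise the lower bound
        else:
            hi = theta                    # infeasible: lower the upper bound
    return lo

# toy oracle: feasible exactly when theta <= 0.7
print(round(max_feasible_theta(lambda t: t <= 0.7), 3))
```

The search terminates because the bracketing interval shrinks by a factor of at most 0.618 per iteration.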

7.8 Geometric Programming with Fuzzy Variables

7.8.1 Introduction

A posynomial geometric programming whose variables are fuzzy variables of some kind is called a general posynomial geometric programming with fuzzy variables. In this section, geometric programming models with T-fuzzy variables and with trapezoidal fuzzy variables are built, and an efficient algorithm is advanced.

7.8.2 Primal Geometric Programming with T-Fuzzy Variables

We first define primal geometric programming with T-fuzzy variables.

Definition 7.8.1. A geometric programming given by T-fuzzy data is said to be a primal geometric programming with T-fuzzy variables; its mathematical formula is
\[
\begin{aligned}
\min\;&g_0(\tilde x)\\
\text{s.t. }&g_i(\tilde x)\lesssim\tilde\sigma_i\quad(1\le i\le p),\\
&\tilde x>0,
\end{aligned}
\tag{7.8.1}
\]
where $\tilde x=(\tilde x_1,\tilde x_2,\cdots,\tilde x_m)^T$ stands for an $m$-dimensional T-fuzzy variable vector, $\tilde x_i=(x_i,\underline\xi_i,\overline\xi_i)$ is a T-fuzzy variable, $\tilde\sigma_i=\sigma_i\times\tilde 1$ and $\tilde 1=(1,1,1)$ are T-fuzzy numbers [Cao89b,c][Cao90], and
\[
g_i(\tilde x)=\sum_{k=1}^{J_i}v_{ik}(\tilde x)=\sum_{k=1}^{J_i}\sigma_{ik}c_{ik}\prod_{l=1}^{m}\tilde x_l^{\gamma_{ikl}}\quad(0\le i\le p)
\]


are fuzzy polynomials of $\tilde x$ — that is, fuzzy signomial functions — where $\sigma_i,\sigma_{ik}=\pm 1$ and $\gamma_{ikl}$ is an arbitrary real number [WB67]. When $\tilde\sigma_i$ is taken as $\tilde 1$, (7.8.1) turns into $\min g_0(\tilde x)$ s.t. $g_i(\tilde x)\lesssim\tilde 1$ $(1\le i\le p)$, $\tilde x>0$.

Theorem 7.8.1. Let a geometric programming model given by T-fuzzy data $\tilde x$ be denoted as (7.8.1). Then, for a given cone index $\mathcal J$, it is equivalent to
\[
\begin{aligned}
\min\;&g_0(z(\mathcal J))\\
\text{s.t. }&g_i(z(\mathcal J))\le\sigma_i\quad(1\le i\le p),\\
&z(\mathcal J)>0,
\end{aligned}
\tag{7.8.2}
\]
and an optimal solution of (7.8.2) depending on the cone index $\mathcal J$ is equivalent to an optimal T-fuzzy solution to (7.8.1), where $g_i(z(\mathcal J))=\sum_{k=1}^{J_i}\sigma_{ik}c_{ik}\prod_{l=1}^{m}(z_l(\mathcal J))^{\gamma_{ikl}}$ $(0\le i\le p)$.

Proof: Similarly to the method of handling T-fuzzy data in Section 3.3, under the given cone index $\mathcal J$, (7.8.1) can be turned into (7.8.2). Therefore the theorem holds. This shows that a geometric programming with T-fuzzy variables can be changed into an ordinary geometric programming depending on a cone index $\mathcal J$ for solution.

Numerical Example

Example 7.8.1: Find
\[
\begin{aligned}
\min\;&2\tilde x_1^2\tilde x_3+\tilde x_2\tilde x_3^{-1}\\
\text{s.t. }&2\tilde x_1^{-2}+\tilde x_2^{-1}\tilde x_3^{-1}\lesssim\tilde 2,\\
&\tilde x_1,\tilde x_2,\tilde x_3>0,
\end{aligned}
\tag{7.8.3}
\]
where $\tilde 2=(2,0,0)$, and the T-fuzzy data are given column-wise:

$\tilde x_1$: 1. $(x_1,0.4,0.2)$; 2. $(x_1,1,0.7)$; 3. $(x_1,1.2,1.5)$;
$\tilde x_2$: 4. $(x_2,1,0.2)$; 5. $(x_2,1.4,1.2)$; 6. $(x_2,0.5,0.2)$;
$\tilde x_3$: 7. $(x_3,0.8,1)$; 8. $(x_3,1.6,1.2)$; 9. $(x_3,0.2,0.4)$.

The solution process is as follows.

1° Number the data from 1 to 9 and classify them into three types by Definition 3.3.2: I. Nos. 1, 6, 9; II. Nos. 2, 5, 8, with $j_2=0$, $j_5=1$, $j_8=0$; III. Nos. 3, 4, 7, with $j_3=1$, $j_4=0$, $j_7=1$. Here $j_l=1$ stands for the odd cases and $j_l=0$ for the even ones.


2° Nonfuzzify:
\[
\tilde x_1\longrightarrow\frac{(x_1+0.3)+(x_1-1)+(x_1+1.5)}{3}= x_1+0.27,
\]
\[
\tilde x_2\longrightarrow\frac{(x_2+0.4)+(x_2+1.2)+(x_2-1)}{3}= x_2+0.2,
\]
\[
\tilde x_3\longrightarrow\frac{(x_3+0.3)+(x_3-1.6)+(x_3+1)}{3}= x_3-0.1.
\]

3° Obtain the programming corresponding to (7.8.3):
\[
\begin{aligned}
\min\;&g_0(x)=2(x_1+0.27)^2(x_3-0.1)+(x_2+0.2)(x_3-0.1)^{-1}\\
\text{s.t. }&2(x_1+0.27)^{-2}+(x_2+0.2)^{-1}(x_3-0.1)^{-1}\le 2,\\
&x_1,x_2,x_3>0.
\end{aligned}
\]
Substituting $u_1=x_1+0.27$, $u_2=x_2+0.2$, $u_3=x_3-0.1$ gives
\[
\begin{aligned}
\min\;&2u_1^2u_3+u_2u_3^{-1}\\
\text{s.t. }&2u_1^{-2}+u_2^{-1}u_3^{-1}\le 2,\quad u_1,u_2,u_3>0,
\end{aligned}
\tag{7.8.4}
\]
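The candidate optimum can be checked numerically before reading it off. A small sketch; it takes the second objective term as $u_2u_3^{-1}$, the reading consistent with the dual solution of Example 7.9.1:

```python
import math

# Numerical check of the claimed optimum of (7.8.4):
#   min 2*u1^2*u3 + u2/u3   s.t.  2/u1^2 + 1/(u2*u3) <= 2
u1, u2, u3 = math.sqrt(3 / 2), 3 / 2, 1.0

g0 = 2 * u1 ** 2 * u3 + u2 / u3   # objective, should be about 9/2
g1 = 2 / u1 ** 2 + 1 / (u2 * u3)  # constraint, about 2 (active)

print(g0, g1)
```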

We obtain an optimal solution to (7.8.4):
\[
u_1=\sqrt{3/2},\quad u_2=\frac32,\quad u_3=1,
\]
and then a $\mathcal J$-optimal solution to the primal problem:
\[
x_1=\sqrt{3/2}-0.27,\quad x_2=\frac32-0.2,\quad x_3=1.1,
\]
with optimal value $g_0(x)=\frac92$.

7.8.3 Primal Geometric Programming with Trapezoidal Fuzzy Variables

The model of this section can be treated similarly to the geometric programming model with T-fuzzy variables.

Definition 7.8.2. If (7.8.1) is given by trapezoidal fuzzy variables, i.e.,
\[
\begin{aligned}
\min\;&g_0(\tilde x)\\
\text{s.t. }&g_i(\tilde x)\lesssim\tilde\sigma_i\quad(1\le i\le p),\\
&\tilde x>0,
\end{aligned}
\tag{7.8.5}
\]
where $\tilde x=(\tilde x_1,\tilde x_2,\cdots,\tilde x_m)^T$ is an $m$-dimensional trapezoidal fuzzy variable vector, $\tilde x_l=(x_l^-,x_l^+,\underline\xi_l,\overline\xi_l)$ is trapezoidal fuzzy data, and $\tilde 1=(1^-,1^+,1,1)$ is a


trapezoidal fuzzy number, then it is a posynomial geometric programming with trapezoidal fuzzy variables.

Theorem 7.8.2. Let the posynomial geometric programming with trapezoidal fuzzy variables be given as (7.8.5). For a fixed platform index $\mathcal T$, (7.8.5) is changed into the posynomial geometric programming depending on $\mathcal T$
\[
\begin{aligned}
\min\;&g_0(z(\mathcal T))\\
\text{s.t. }&g_i(z(\mathcal T))\le 1\quad(1\le i\le p),\\
&z(\mathcal T)>0,
\end{aligned}
\tag{7.8.6}
\]
and an optimal solution of (7.8.6) with platform index $\mathcal T$ yields a trapezoidal fuzzy optimal solution of (7.8.5), where $g_i(z(\mathcal T))=\sum_{k=1}^{J_i}c_{ik}\prod_{l=1}^{m}(z_l(\mathcal T))^{\gamma_{ikl}}$ $(0\le i\le p)$.

Proof: Let $\tilde x_l=(\tilde x_{l1},\tilde x_{l2},\cdots,\tilde x_{lp})^T$ be a trapezoidal fuzzy variable satisfying (7.8.5), where $\tilde x_{li}=(x_{li}^-,x_{li}^+,\underline\xi_{li},\overline\xi_{li})$ $(1\le l\le m;\ 1\le i\le p)$. Because $x$ may be fixed freely in the closed interval $[x_{li}^-,x_{li}^+]$, we choose the degree of accomplishment according to a membership function like (1.5.3); from $\varphi_{\tilde l}(x_{li})\ge\alpha$ we deduce
\[
\frac{x_{li}-x_{li}^-}{x_{li}^+-x_{li}^-}\ge\sqrt[n]{\alpha}\;\Rightarrow\; x_{li}=x_{li}^-+\sqrt[n]{\alpha}\,(x_{li}^+-x_{li}^-).
\]
We classify the variables of the column by subscripts, letting $l=1,2,\cdots,M$ correspond to the variables with smaller fluctuation and $l=M+1,\cdots,3M$ to the others. Then:

1° For $l=1,2,\cdots,M$ and each $i$,
\[
\tilde x_{li}\to x_{li}^-+\sqrt[n]{\alpha}\,(x_{li}^+-x_{li}^-)+\frac{\underline\xi_{li}+\overline\xi_{li}}{2};
\]
2° For $l=M+1,\cdots,2M$ and each $i$,
\[
\tilde x_{li}\to
\begin{cases}
x_{li}^-+\sqrt[n]{\alpha}\,(x_{li}^+-x_{li}^-)+\underline\xi_{li}, & j_l=0,\\
x_{li}^-+\sqrt[n]{\alpha}\,(x_{li}^+-x_{li}^-)-\underline\xi_{li}, & j_l=1;
\end{cases}
\]
3° For $l=2M+1,\cdots,3M$ and each $i$,
\[
\tilde x_{li}\to
\begin{cases}
x_{li}^-+\sqrt[n]{\alpha}\,(x_{li}^+-x_{li}^-)-\overline\xi_{li}, & j_l=0,\\
x_{li}^-+\sqrt[n]{\alpha}\,(x_{li}^+-x_{li}^-)+\overline\xi_{li}, & j_l=1.
\end{cases}
\]
Therefore, under the same given platform index $\mathcal T$, let $z_{li}=x_{li}^-+\sqrt[n]{\alpha}\,(x_{li}^+-x_{li}^-)+\xi_{li}^*$, where $\xi_{li}^*$ is $\frac{\underline\xi_{li}+\overline\xi_{li}}{2}$, $\pm\underline\xi_{li}$, or $\pm\overline\xi_{li}$, and let $z_l(\mathcal T)$ be the corresponding average over $i=1,\cdots,3M$. Substituting $z_l(\mathcal T)$ for $\tilde x_l$ in (7.8.5) turns (7.8.5) into (7.8.6). So, under the given platform index $\mathcal T$, (7.8.5) can be turned into (7.8.6), and an optimal solution to (7.8.6) depending on $\mathcal T$ is equivalent to a trapezoidal fuzzy optimal one of (7.8.5).
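The pointwise defuzzification used in the proof can be sketched as follows; `trapezoid_point` is an illustrative name, only the class-I shift is shown, and the grouping into classes and the platform index are simplified away.

```python
# A trapezoidal variable (xL, xU, under_xi, over_xi) is replaced by a point
# chosen in [xL, xU] at membership level alpha, then shifted by a spread term.
def trapezoid_point(xL, xU, under_xi, over_xi, alpha, n=1, shift=0.0):
    return xL + alpha ** (1 / n) * (xU - xL) + shift

# class I uses the averaged spread (under + over)/2 as the shift:
print(trapezoid_point(1.0, 2.0, 0.4, 0.6, alpha=0.25, shift=(0.4 + 0.6) / 2))  # 1.75
```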


Illustrative Examples

The variables $\tilde x_l=(x_l^-,x_l^+,\underline\xi_l,\overline\xi_l)$ fall into two cases:
I. Variable type, where the mean value and the spreads are all decision variables.
II. Data type, where the mean value and the spreads are all real numbers.

Example 7.8.2: Find
\[
\begin{aligned}
\min\;&g_0(\tilde x)=\frac{1}{\sqrt{\tilde x_1-1}}\cdot\frac{1}{\sqrt[3]{\tilde x_2-\frac14}}\\
\text{s.t. }&(\tilde x_1-1)\bigl(\tilde x_2-\tfrac14\bigr)\lesssim\tilde 1,\\
&0<\tilde x_1\lesssim\tilde 2,\quad 0<\tilde x_2\lesssim\widetilde{\tfrac54},
\end{aligned}
\tag{7.8.7}
\]
where $\tilde 1=(1,1,0,0)$, $\tilde 2=(2,2,0,0)$, $\widetilde{\frac54}=(\frac54,\frac54,0,0)$ are special trapezoidal fuzzy numbers.

i) Take the trapezoidal fuzzy data of $\tilde x_1$ and $\tilde x_2$ as:
$\tilde x_1$: 1) $(x_1^-,x_1^+,1,0)$; 3) $(x_1^-,x_1^+,2,1)$; 5) $(x_1^-,x_1^+,1.5,1)$; 7) $(x_1^-,x_1^+,\frac12,\frac12)$; 9) $(x_1^-,x_1^+,2,2)$; 11) $(x_1^-,x_1^+,1,2)$.
$\tilde x_2$: 2) $(x_2^-,x_2^+,1.5,1.2)$; 4) $(x_2^-,x_2^+,0,1)$; 6) $(x_2^-,x_2^+,2,1)$; 8) $(x_2^-,x_2^+,1,1.5)$; 10) $(x_2^-,x_2^+,0,0)$; 12) $(x_2^-,x_2^+,2,1)$.

ii) $\tilde x_1$, $\tilde x_2$ may be fixed freely in the closed intervals $[1,2]$ and $[\frac14,1\frac14]$, respectively. For an interval $[m,M]$, formula (1.5.3) of Section 1.5 in Chapter 1 applies, with $x^L$ and $x^U$ the left and right interval endpoints. Let $n=1$. Then
\[
\varphi_1(x_1)=\frac{x_1-1}{2-1}\ge\alpha_1,\qquad
\varphi_2(x_2)=\frac{x_2-\frac14}{1\frac14-\frac14}\ge\alpha_2
\;\Rightarrow\;
x_1\ge\alpha_1+1,\quad x_2\ge\alpha_2+\tfrac14,
\]
which is equivalent to taking $\tilde x_1$: $x_1=\alpha_1+1$; $\tilde x_2$: $x_2=\alpha_2+\frac14$, with $\alpha_1,\alpha_2\in[0,1]$. So the twelve data groups above become:
1) $(\alpha_1+1,1,0)$; 3) $(\alpha_1+1,2,1)$; 5) $(\alpha_1+1,1.5,1)$; 7) $(\alpha_1+1,\frac12,\frac12)$; 9) $(\alpha_1+1,2,2)$; 11) $(\alpha_1+1,1,2)$;
2) $(\alpha_2+\frac14,1.5,1.2)$; 4) $(\alpha_2+\frac14,0,1)$; 6) $(\alpha_2+\frac14,2,1)$; 8) $(\alpha_2+\frac14,1,1.5)$; 10) $(\alpha_2+\frac14,0,0)$; 12) $(\alpha_2+\frac14,2,1)$.

iii) Partition them into three groups I, II and III by applying the proof of Theorem 7.3.1 in Section 7.3.


I. Numbers 1, 4, 7 and 10, whose data correspond to
\[
\alpha_1+\tfrac32,\quad \alpha_2+\tfrac34,\quad \alpha_1+\tfrac32,\quad \alpha_2+\tfrac14.
\]
II. Numbers 2, 5, 8 and 11, whose data correspond to
\[
\alpha_2-\tfrac54,\quad \alpha_1+2,\quad \alpha_2-\tfrac34,\quad \alpha_1+3.
\]
III. Numbers 3, 6, 9 and 12, whose data correspond to
\[
\alpha_1-1,\quad \alpha_2+\tfrac54,\quad \alpha_1-1,\quad \alpha_2+\tfrac54.
\]
Adding the terms in $\alpha_j$ $(j=1,2)$ gives
\[
\Bigl(\alpha_1+\tfrac32\Bigr)+\Bigl(\alpha_1+\tfrac32\Bigr)+(\alpha_1+2)+(\alpha_1+3)+(\alpha_1-1)+(\alpha_1-1)=6\alpha_1+6,
\]
\[
\Bigl(\alpha_2+\tfrac34\Bigr)+\Bigl(\alpha_2+\tfrac14\Bigr)+\Bigl(\alpha_2-\tfrac54\Bigr)+\Bigl(\alpha_2-\tfrac34\Bigr)+\Bigl(\alpha_2+\tfrac54\Bigr)+\Bigl(\alpha_2+\tfrac54\Bigr)=6\alpha_2+\tfrac64.
\]

iv) Substitute $\alpha_1+1$ and $\alpha_2+\frac14$ for $\tilde x_1$ and $\tilde x_2$ in (7.8.7), respectively, changing (7.8.7) into the equivalent problem
\[
\begin{aligned}
\min\;&g_0(\alpha)=\alpha_1^{-\frac12}\alpha_2^{-\frac13}\\
\text{s.t. }&\alpha_1\alpha_2\le 1,\\
&0<\alpha_1\le 1,\quad 0<\alpha_2\le 1.
\end{aligned}
\tag{7.8.8}
\]
An optimal solution to (7.8.8) is $\alpha=(1,1)^T$, with optimal value $g_0(\alpha)=1$. A parameter optimal solution to (7.8.7) is then $x_1=1+1=2$, $x_2=1+\frac14=1\frac14$, with parameter optimal value $g_0(x)=1$.
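The optimality of $\alpha=(1,1)$ can be confirmed by a crude grid search over the feasible box (a sanity check, not part of the book's method):

```python
import itertools

# objective of (7.8.8)
def g0(a1, a2):
    return a1 ** -0.5 * a2 ** (-1 / 3)

# scan a 100 x 100 grid of feasible points 0 < alpha_i <= 1
best = min(
    (g0(a1, a2), a1, a2)
    for a1, a2 in itertools.product([i / 100 for i in range(1, 101)], repeat=2)
    if a1 * a2 <= 1
)
print(best)  # (1.0, 1.0, 1.0): minimum value 1 at alpha1 = alpha2 = 1
```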

7.8.4 Conclusion

The method of this section applies equally to posynomial geometric programming models with other types of fuzzy variables. In applications it is usually simple to find a parameter optimal solution and a parameter optimal value.

7.9 Dual Method of Geometric Programming with Fuzzy Variables

7.9.1 Introduction

The posynomial geometric programming with fuzzy variables was advanced in the preceding section; here it is turned into a dual one with fuzzy coefficients by


a fuzzy dual theorem [MTM00], before its solution is found by the methods of Sections 7.4–7.7, i.e., a dual method [Cha83]. Next follows the dual geometric programming with fuzzy variables.

7.9.2 Dual Geometric Programming with T-Fuzzy Variables

Suppose a geometric programming with T-fuzzy variables is given by (7.8.1); its dual programming is then defined as follows.

Definition 7.9.1. Let (7.8.1) be a primal geometric programming with T-fuzzy variables. Then we call
\[
(\tilde D)\quad
\begin{aligned}
\max\;&\tilde d(w)=\sigma\Bigl[\prod_{i=0}^{p}\prod_{k=1}^{J_i}\Bigl(\frac{\tilde c_{ik}}{w_{ik}}\Bigr)^{\sigma_i w_{ik}}\Bigl(\sum_{k=1}^{J_i}w_{ik}\Bigr)^{\sigma_i w_{ik}}\Bigr]^{\sigma}\\
\text{s.t. }&w_{00}=1,\\
&\Gamma^T w=0,\\
&w\ge 0
\end{aligned}
\tag{7.9.1}
\]
a dual programming of (7.8.1), where $w=(w_{01},\cdots,w_{0J_0},\cdots,w_{p1},\cdots,w_{pJ_p})^T$ is a $J$-dimensional variable vector ($J=J_0+J_1+\cdots+J_p$), $\tilde c_{ik}$ is a T-fuzzy number, $\sigma=\pm 1$, and
\[
\Gamma=
\begin{pmatrix}
\sigma_{01}\gamma_{011} & \cdots & \sigma_{01}\gamma_{01l} & \cdots & \sigma_{01}\gamma_{01m}\\
\vdots & & \vdots & & \vdots\\
\sigma_{0J_0}\gamma_{0J_01} & \cdots & \sigma_{0J_0}\gamma_{0J_0l} & \cdots & \sigma_{0J_0}\gamma_{0J_0m}\\
\vdots & & \vdots & & \vdots\\
\sigma_{p1}\gamma_{p11} & \cdots & \sigma_{p1}\gamma_{p1l} & \cdots & \sigma_{p1}\gamma_{p1m}\\
\vdots & & \vdots & & \vdots\\
\sigma_{pJ_p}\gamma_{pJ_p1} & \cdots & \sigma_{pJ_p}\gamma_{pJ_pl} & \cdots & \sigma_{pJ_p}\gamma_{pJ_pm}
\end{pmatrix}
\]
is an exponent matrix. We stipulate $(w_{ik})^{w_{ik}}|_{w_{ik}=0}=1$. When $\sigma_i$ is taken as $1$, (7.9.1) turns into
\[
\begin{aligned}
\max\;&\tilde d(w)=\prod_{i=0}^{p}\prod_{k=1}^{J_i}\Bigl(\frac{\tilde c_{ik}}{w_{ik}}\Bigr)^{w_{ik}}\prod_{i=1}^{p}w_{i0}^{w_{i0}}\\
\text{s.t. }&w_{00}=1,\quad \Gamma^T w=0,\quad w\ge 0,
\end{aligned}
\]
where $\tilde c_{ik}=c_{ik}\cdot\tilde 1$ $(1\le i\le p)$ are fuzzy numbers, $\Gamma$ is an exponent matrix, and $w$ is a $J$-dimensional variable vector.

Disposal of nonfuzzification in the problem:

Theorem 7.9.1. If problem (7.8.1) is deduced from the T-fuzzy variable $\tilde x_l=(\tilde x_{l1},\tilde x_{l2},\cdots,\tilde x_{lp})^T$, then the dual form of (7.8.1) is (7.9.1).


Proof: As (7.8.1) is equivalent to (7.8.2) by Theorem 7.8.1, the dual of programming (7.8.2) containing the parameter is obviously
\[
\begin{aligned}
\max\;&d(w(\mathcal J))=\sigma\Bigl[\prod_{i=0}^{p}\prod_{k=1}^{J_i}\Bigl(\frac{c_{ik}}{w_{ik}(\mathcal J)}\Bigr)^{\sigma_i w_{ik}(\mathcal J)}\Bigl(\sum_{k=1}^{J_i}w_{ik}(\mathcal J)\Bigr)^{\sigma_i w_{ik}(\mathcal J)}\Bigr]^{\sigma}\\
\text{s.t. }&w_{00}(\mathcal J)=1,\quad \Gamma^T w(\mathcal J)=0,\quad w(\mathcal J)\ge 0,
\end{aligned}
\tag{7.9.2}
\]

where $w(\mathcal J)=(w_{01}(\mathcal J),\cdots,w_{0J_0}(\mathcal J),\cdots,w_{p1}(\mathcal J),\cdots,w_{pJ_p}(\mathcal J))^T$ is a $J$-dimensional variable vector depending on the cone index $\mathcal J$, and we stipulate $(w_{ik}(\mathcal J))^{w_{ik}(\mathcal J)}|_{w_{ik}(\mathcal J)=0}=1$ correspondingly. We can also prove (7.9.2) equivalent to (7.9.1) under the above cone index $\mathcal J$; since (7.8.2) and (7.9.2) are mutually dual under $\mathcal J$, it follows that (7.8.1) and (7.9.1) are mutually dual problems, so the theorem holds.

Obviously, (7.8.1) can be changed into a dual programming (7.9.1) with fuzzy coefficients, and (7.9.1) is easier to solve than (7.8.1) since its variables are nonfuzzy; its optimal solution can be obtained by the methods of the previous chapter. Next we discuss what condition is needed for the existence of a fuzzy optimal solution of (7.8.1).

Definition 7.9.2. If $\tilde x>0$ satisfies $g_i(\tilde x)\lesssim\tilde 1\;(\prec\tilde 1)$, $1\le i\le p$, then the primal posynomial geometric programming $(\tilde P)$ with T-fuzzy variables is called fuzzy consistent (or fuzzy super-consistent). If $z(\mathcal J)>0$ is such that $g_i(z(\mathcal J))\le 1\;(<1)$, $1\le i\le p$, then the primal posynomial geometric programming depending on the cone index $\mathcal J$ is called consistent (or super-consistent).

Lemma 7.9.1 (Basic lemma). For any T-fuzzy feasible solution $\tilde x$ of the primal posynomial geometric programming (7.8.1) with T-fuzzy variables, and any feasible solution $w$ of the dual programming (7.9.1) with fuzzy coefficients, we have
\[
g_0(\tilde x)\ge g_0(\tilde x)\prod_{i=1}^{p}(g_i(\tilde x))^{w_{i0}}\ge\tilde d(w),
\]
and
\[
g_0(\tilde x)=\tilde d(w)\iff
w_{ik}=
\begin{cases}
\dfrac{v_{0k}(\tilde x)}{g_0(\tilde x)}, & i=0;\ 1\le k\le J_0,\\[2mm]
w_{i0}\,v_{ik}(\tilde x), & i\ne 0;\ 1\le k\le J_i,
\end{cases}
\]
in which case $\tilde x$ and $w$ are a T-fuzzy optimal solution to (7.8.1) and an optimal solution to (7.9.1), respectively.

Proof: From Theorem 7.9.1, (7.8.1) $\iff$ (7.8.2). Similarly, we can prove (7.9.1) equivalent to (7.9.2).


But (7.9.2) is an ordinary programming depending on the cone index $\mathcal J$, and under the same given cone index $\mathcal J$, $(P(\mathcal J))$ and $(D(\mathcal J))$ are mutually dual. From Lemma 1.5.3 in Ref. [WY82], any feasible solutions $x(\mathcal J)$ of $(P(\mathcal J))$ and $w(\mathcal J)$ of $(D(\mathcal J))$ satisfy
\[
g_0(x(\mathcal J))\ge g_0(x(\mathcal J))\prod_{i=1}^{p}(g_i(x(\mathcal J)))^{w_{i0}}\ge d(w(\mathcal J)),
\]
with $g_0(x(\mathcal J))=d(w(\mathcal J))$ iff
\[
w_{ik}=
\begin{cases}
v_{0k}(x(\mathcal J))/g_0(x(\mathcal J)), & i=0,\ 1\le k\le J_0,\\
w_{i0}(\mathcal J)\,v_{ik}(x(\mathcal J)), & i\ne 0,\ 1\le k\le J_i,
\end{cases}
\]
in which case $x(\mathcal J)$ and $w(\mathcal J)$ are optimal solutions to $(P(\mathcal J))$ and $(D(\mathcal J))$, respectively. Again, by the equivalence of $(\tilde P)$ and $(P(\mathcal J))$ as well as of $(\tilde D)$ and $(D(\mathcal J))$, the same holds for $(\tilde P)$ and $(\tilde D)$; therefore the lemma holds.

Theorem 7.9.2 (First fuzzy dual theorem). Let the primal posynomial geometric programming (7.8.1) be deduced from the T-fuzzy variable $\tilde x_l=(\tilde x_{l1},\tilde x_{l2},\cdots,\tilde x_{lp})^T$. If it is fuzzy super-consistent with a T-fuzzy optimal solution $\tilde x^*$, then there must exist a Lagrange multiplier $\lambda^*=(\lambda_1^*,\lambda_2^*,\cdots,\lambda_p^*)^T\ge 0$ such that
\[
\nabla g_0(\tilde x^*)+\sum_{i=1}^{p}\lambda_i^*\nabla g_i(\tilde x^*)=0,
\tag{7.9.3}
\]
\[
\lambda_i^*(g_i(\tilde x^*)-1)=0\quad(1\le i\le p),
\tag{7.9.4}
\]
while $w^*$ defined by
\[
w_{ik}^*=
\begin{cases}
\dfrac{v_{0k}(\tilde x^*)}{g_0(\tilde x^*)}, & i=0;\ 1\le k\le J_0,\\[2mm]
\dfrac{\lambda_i^* v_{ik}(\tilde x^*)}{g_0(\tilde x^*)}, & i\ne 0;\ 1\le k\le J_i,
\end{cases}
\tag{7.9.5}
\]
is an optimal solution of the dual programming (7.9.1), with
\[
g_0(\tilde x^*)=\tilde d(w^*).
\tag{7.9.6}
\]
Proof: For a given cone index $\mathcal J$ it may be proved, similarly to Theorem 7.8.1, that the conditions of the theorem are equivalent to (7.8.2) being $\mathcal J$-super-consistent with an optimal solution $z^*(\mathcal J)$ depending on the cone index $\mathcal J$. Under this condition it can be proved, similarly to Theorem 1.6.3 in Ref. [WY82], that there must be a Lagrange multiplier $\lambda^*=(\lambda_1^*,\lambda_2^*,\cdots,\lambda_p^*)^T\ge 0$ such that
\[
\nabla g_0(z^*(\mathcal J))+\sum_{i=1}^{p}\lambda_i^*\nabla g_i(z^*(\mathcal J))=0,
\tag{7.9.7}
\]
\[
\lambda_i^*(g_i(z^*(\mathcal J))-1)=0,
\tag{7.9.8}
\]


while $w^*(\mathcal J)$ defined by
\[
w_{ik}^*(\mathcal J)=
\begin{cases}
\dfrac{v_{0k}(z^*(\mathcal J))}{g_0(z^*(\mathcal J))}, & i=0;\ 1\le k\le J_0,\\[2mm]
\dfrac{\lambda_i^* v_{ik}(z^*(\mathcal J))}{g_0(z^*(\mathcal J))}, & i\ne 0;\ 1\le k\le J_i,
\end{cases}
\tag{7.9.9}
\]

is an optimal solution, depending on the cone index $\mathcal J$, of the dual programming (7.9.2), with
\[
g_0(z^*(\mathcal J))=d(w^*(\mathcal J)).
\tag{7.9.10}
\]
Under the above cone index, (7.9.3)–(7.9.6) are equivalent to (7.9.7)–(7.9.10), respectively; therefore the theorem holds.

Theorem 7.9.3 (Second fuzzy dual theorem). Let the primal posynomial geometric programming (7.8.1) be deduced from a T-fuzzy variable. If (7.8.1) is fuzzy consistent and the dual problem (7.9.1) has a feasible solution with all components positive, then (7.8.1) has an optimal T-fuzzy solution.

Proof: Under a given cone index $\mathcal J$, the condition of this theorem is equivalent to the following: the primal problem (7.8.2) is $\mathcal J$-consistent and its dual problem (7.9.2) has a $\mathcal J$-feasible solution with all components positive. Under this condition it may be proved, similarly to Theorem 1.8.1 in Ref. [WY82], that (7.8.2) has a $\mathcal J$-optimal solution. This is equivalent to the assertion of the theorem.

Theorem 7.9.2 shows that if the primal problem (7.8.1) with T-fuzzy variables is fuzzy super-consistent, with optimal T-fuzzy solution $\tilde x^*$, then the dual problem (7.9.1) has an optimal solution $w^*$, and (7.9.1) and (7.8.1) have the same optimal T-fuzzy value. Theorem 7.9.3 further gives a sufficient condition for (7.8.1) to have an optimal T-fuzzy solution.

Example 7.9.1: Consider Example 7.8.1. Substituting $u_1=x_1+0.27$, $u_2=x_2+0.2$, $u_3=x_3-0.1$, with (7.8.3) changed into (7.8.4), its dual programming is
\[
\begin{aligned}
\max\;&\Bigl(\frac{2}{w_{01}}\Bigr)^{w_{01}}\Bigl(\frac{1}{w_{02}}\Bigr)^{w_{02}}\Bigl(\frac{1}{w_{11}}\Bigr)^{w_{11}}\Bigl(\frac{1}{2w_{12}}\Bigr)^{w_{12}}(w_{11}+w_{12})^{w_{11}+w_{12}}\\
\text{s.t. }&w_{01}+w_{02}=1;\quad 2w_{01}-2w_{11}=0;\\
&w_{02}-w_{12}=0;\quad w_{01}-w_{02}-w_{12}=0;\quad w\ge 0.
\end{aligned}
\]
We obtain a unique dual feasible solution $w_{01}=\frac23$, $w_{02}=\frac13$, $w_{11}=\frac23$, $w_{12}=\frac13$, which is therefore the optimal solution, with dual optimal value $M_D=d(w)=\frac92$. Again from (7.8.1), combining the substitutions, we obtain a $\mathcal J$-optimal solution to the primal problem: $x_1=\sqrt{3/2}-0.27$, $x_2=\frac32-0.2$, $x_3=1.1$, with optimal value $g_0(x)=\frac92$. Therefore, $d(w)=g_0(x)$.
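The stated dual solution can be verified mechanically: normality, the orthogonality conditions, and the dual value $\frac92$. A sketch; the orthogonality rows below encode the exponent matrix of (7.8.4) as reconstructed here.

```python
from fractions import Fraction as F

# stated unique dual feasible point of Example 7.9.1
w01, w02, w11, w12 = F(2, 3), F(1, 3), F(2, 3), F(1, 3)

# normality and orthogonality (one row per variable u1, u2, u3)
assert w01 + w02 == 1
assert 2 * w01 - 2 * w11 == 0      # u1
assert w02 - w12 == 0              # u2
assert w01 - w02 - w12 == 0        # u3

# dual objective:
# (2/w01)^w01 (1/w02)^w02 (1/w11)^w11 (1/(2 w12))^w12 (w11+w12)^(w11+w12)
d = ((2 / w01) ** float(w01) * (1 / w02) ** float(w02)
     * (1 / w11) ** float(w11) * (1 / (2 * w12)) ** float(w12)
     * float(w11 + w12) ** float(w11 + w12))
print(d)  # about 4.5 = 9/2
```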

7.9.3 Dual Geometric Programming with Trapezoidal Fuzzy Variables

Dual geometric programming with trapezoidal fuzzy variables is discussed similarly. Consider (7.8.5); its dual programming, like (7.9.1), is
\[
\begin{aligned}
\max\;&\tilde d(w)=\sigma\Bigl[\prod_{i=0}^{p}\prod_{k=1}^{J_i}\Bigl(\frac{\tilde c_{ik}}{w_{ik}}\Bigr)^{\sigma_i w_{ik}}\Bigl(\sum_{k=1}^{J_i}w_{ik}\Bigr)^{\sigma_i w_{ik}}\Bigr]^{\sigma}\\
\text{s.t. }&w_{00}=1,\quad \Gamma^T w=0,\quad w\ge 0,
\end{aligned}
\tag{7.9.11}
\]
where $\tilde c_{ik}$ is a trapezoidal fuzzy number.

Theorem 7.9.4 (First fuzzy dual theorem). Let the primal posynomial geometric programming (7.8.5) with trapezoidal fuzzy variables be fuzzy super-consistent, with fuzzy optimal solution $\tilde x^*$. Then there must exist a Lagrange multiplier $\lambda^*=(\lambda_1^*,\lambda_2^*,\cdots,\lambda_p^*)^T\ge 0$ such that
\[
\nabla g_0(\tilde x^*)+\sum_{i=1}^{p}\lambda_i^*\nabla g_i(\tilde x^*)=0,\qquad
\lambda_i^*(g_i(\tilde x^*)-\tilde 1)=0\quad(1\le i\le p),
\]
while $w^*$ defined by
\[
w_{ik}^*=
\begin{cases}
\dfrac{v_{0k}(\tilde x^*)}{g_0(\tilde x^*)}, & i=0;\ 1\le k\le J_0,\\[2mm]
\dfrac{\lambda_i^* v_{ik}(\tilde x^*)}{g_0(\tilde x^*)}, & i\ne 0;\ 1\le k\le J_i,
\end{cases}
\]
is an optimal solution of the dual programming (7.9.11), with $g_0(\tilde x^*)=\tilde d(w^*)$.

Theorem 7.9.5 (Second fuzzy dual theorem). Let the primal posynomial geometric programming (7.8.5) be deduced from trapezoidal fuzzy variables. If (7.8.5) is fuzzy consistent and the dual problem (7.9.11) has a feasible solution with all components positive, then (7.8.5) has a fuzzy optimal solution.

Example 7.9.2: Find the dual programming of Example 7.8.2. Since (7.8.7) can be turned into (7.8.8), the dual problem of (7.8.8) is


\[
\begin{aligned}
\max\;&\Bigl(\frac{1}{w_0}\Bigr)^{w_0}\Bigl(\frac{1}{w_1}\Bigr)^{w_1}\Bigl(\frac{1}{w_2}\Bigr)^{w_2}\Bigl(\frac{1}{w_3}\Bigr)^{w_3}w_1^{w_1}w_2^{w_2}w_3^{w_3}\\
\text{s.t. }&w_0=1,\\
&-\tfrac12 w_0+w_1+w_2=0,\\
&-\tfrac13 w_0+w_1+w_3=0,\\
&w_0,w_1,w_2,w_3\ge 0.
\end{aligned}
\]
Its optimal solution is $w=(1,\frac13,\frac16,0)^T$, with optimal value $d=1$, from which we get an approximately optimal solution of the primal problem, $x=(2,1\frac14)^T$, with optimal value $1$.

7.9.4 Disposal of Nonfuzzification in Fuzzy Number

Proposition 7.9.1. Let $\tilde x$ and $\tilde y$ be T-fuzzy variables $(x,\underline\xi,\overline\xi)_T$ and $(y,\underline\eta,\overline\eta)_T$ with reference functions $(L_{\tilde x},R_{\tilde x})$ and $(L_{\tilde y},R_{\tilde y})$, all invertible. Then $\tilde x\lesssim\tilde y$ if and only if
\[
\sup \tilde x_{\alpha_{\tilde x,R}}+\sup \tilde x_{\alpha_{\tilde x,L}}
\le \sup \tilde y_{\alpha_{\tilde y,R}}+\sup \tilde y_{\alpha_{\tilde y,L}},
\]
where, for $k=\tilde x,\tilde y$, $\alpha_{k,R}=R_k\bigl(\int_0^1 R_k^{-1}(\alpha)\,d\alpha\bigr)$ and $\alpha_{k,L}=L_k\bigl(\int_0^1 L_k^{-1}(\alpha)\,d\alpha\bigr)$.

Proof: According to the definition proved by Roubens, and similarly to the proof in Ref. [Rou91], the proposition is easily proved.

For T-fuzzy variables $\tilde x=(x,\underline\xi,\overline\xi)_T$ and $\tilde y=(y,\underline\eta,\overline\eta)_T$, we have $\tilde x\lesssim\tilde y$ if and only if $x+\frac12(\overline\xi-\underline\xi)\le y+\frac12(\overline\eta-\underline\eta)$. Therefore the real variable $\bar x$ corresponding to the T-fuzzy variable $\tilde x=(x,\underline\xi,\overline\xi)_T$ is
\[
\bar x=x+\tfrac12(\overline\xi-\underline\xi).
\]
Especially, when $\tilde a=(a,\underline\alpha,\overline\alpha)_T$ and $\tilde b=(b,\underline\beta,\overline\beta)_T$ are T-fuzzy numbers, $\tilde a\lesssim\tilde b$ if and only if $a+\frac12(\overline\alpha-\underline\alpha)\le b+\frac12(\overline\beta-\underline\beta)$, and the real datum $\bar a$ corresponding to the T-fuzzy datum $\tilde a=(a,\underline\alpha,\overline\alpha)_T$ is
\[
\bar a=a+\tfrac12(\overline\alpha-\underline\alpha).
\tag{7.9.12}
\]
Generally, the real variable $\bar x$ corresponding to the trapezoidal fuzzy variable $\tilde x=(x^L,x^U,\underline\xi,\overline\xi)$ is
\[
\bar x=\tfrac12\bigl[x^L+x^U+(\overline\xi-\underline\xi)\bigr],
\]
and the real datum $\bar a$ corresponding to the trapezoidal fuzzy datum $\tilde a=(a^L,a^U,\underline\alpha,\overline\alpha)$ is
\[
\bar a=\tfrac12\bigl[a^L+a^U+(\overline\alpha-\underline\alpha)\bigr].
\tag{7.9.13}
\]
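Formulas (7.9.12) and (7.9.13) are one-liners in code; the function names here are illustrative:

```python
# Defuzzification of Section 7.9.4:
#   T-fuzzy  (a, under, over)          ->  a + (over - under)/2             (7.9.12)
#   trapezoid (aL, aU, under, over)    ->  (aL + aU + (over - under))/2     (7.9.13)
def defuzz_T(a, under, over):
    return a + (over - under) / 2

def defuzz_trap(aL, aU, under, over):
    return (aL + aU + (over - under)) / 2

print(defuzz_T(8, 0.3, 0.5))            # about 8.1, as in Example 7.9.3
print(defuzz_trap(7.5, 8.5, 0.3, 0.5))  # about 8.1
```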


Example 7.9.3: Find
\[
\begin{aligned}
\min\;&g_0(\tilde x)=20\tilde x_1\tilde x_3+40\tilde x_2\tilde x_3+80\tilde x_1\tilde x_2\\
\text{s.t. }&g_1(\tilde x)=8\tilde x_1^{-1}\tilde x_2^{-1}\tilde x_3^{-1}\lesssim\tilde 1,\quad \tilde x>0,
\end{aligned}
\]
where the $\tilde x_i$ are special trapezoidal fuzzy variables and $\tilde 1=(1,1,0,0)$ is a special trapezoidal fuzzy number. Its dual programming is
\[
\begin{aligned}
\max\;&\tilde d(w)=\Bigl(\frac{20}{w_{01}}\Bigr)^{w_{01}}\Bigl(\frac{40}{w_{02}}\Bigr)^{w_{02}}\Bigl(\frac{80}{w_{03}}\Bigr)^{w_{03}}\Bigl(\frac{\tilde 8}{w_{11}}\Bigr)^{w_{11}}w_{11}^{w_{11}}\\
\text{s.t. }&w_{01}+w_{02}+w_{03}=1,\\
&w_{01}+w_{03}-w_{11}=0,\\
&w_{02}+w_{03}-w_{11}=0,\\
&w_{01}+w_{02}-w_{11}=0,\\
&w\ge 0,
\end{aligned}
\]
where $w=(w_{01},w_{02},w_{03},w_{11})^T$ is a 4-dimensional vector and $\tilde 8$ is a fuzzy number. Of the four equations in the constraint group, three are independent; the feasible solution is $w_{01}=\frac13$, $w_{02}=\frac13$, $w_{03}=\frac13$, $w_{11}=\frac23$. It is the unique feasible solution, hence an optimal solution, and the optimal value is
\[
\tilde d(w)=\Bigl(\frac{20}{\frac13}\Bigr)^{\frac13}\Bigl(\frac{40}{\frac13}\Bigr)^{\frac13}\Bigl(\frac{80}{\frac13}\Bigr)^{\frac13}\Bigl(\frac{3\tilde 8}{2}\Bigr)^{\frac23}\Bigl(\frac23\Bigr)^{\frac23}
=(1728000)^{\frac13}\,\tilde 8^{\frac23}.
\]
When $\tilde 8\in[7.5,8.5]$ and $\bar 8=7.8+0.7\alpha$ is defined by (1.5.1) ($n=1$), $\tilde 8\to 7.8+0.7\alpha$, and the optimal value is $\tilde d(w)=(1728000)^{\frac13}(7.8+0.7\alpha)^{\frac23}$.
When $\tilde 8=(7.5,8.5,0.3,0.5)$ is a trapezoidal fuzzy number, $\bar 8=\frac12[7.5+8.5+(0.5-0.3)]=8.1$ by (7.9.13), and the optimal value is $\tilde d(w)=(1728000)^{\frac13}(8.1)^{\frac23}$.
When $\tilde 8=(8,0.3,0.5)$ is a T-fuzzy number, $\bar 8=8+\frac{0.5-0.3}{2}=8.1$ by (7.9.12), and the optimal value is $\tilde d(w)=(1728000)^{\frac13}(8.1)^{\frac23}$.
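The dual value for a defuzzified coefficient $\bar 8$ can be evaluated directly. A sketch; the default weight vector is the unique dual solution found above, and the closed form it reproduces is $(1728000)^{1/3}\,\bar 8^{\,2/3}$:

```python
# Dual objective of Example 7.9.3 evaluated at a defuzzified coefficient c8
def dual_value(c8, w=(1 / 3, 1 / 3, 1 / 3, 2 / 3)):
    w01, w02, w03, w11 = w
    return ((20 / w01) ** w01 * (40 / w02) ** w02 * (80 / w03) ** w03
            * (c8 / w11) ** w11 * w11 ** w11)

print(dual_value(8.0))  # about 480 = 1728000**(1/3) * 8**(2/3) = 120 * 4
```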

7.9.5 Conclusions

This section gives methods for solving a geometric programming with fuzzy variables and determining an optimal solution. As long as the primal programming problem has a solution, an analytic solution to it can be acquired by solving its dual problem.


7.10 Multi-Objective Geometric Programming with T-Fuzzy Variables

7.10.1 Modeling

Definition 7.10.1 [Cao96a]. Let
\[
\begin{aligned}
\min\;&g_0^{(j)}(\tilde x)\quad(1\le j\le n)\\
\text{s.t. }&g_i(\tilde x)\lesssim\tilde\sigma\quad(1\le i\le p),\\
&\tilde x>0.
\end{aligned}
\tag{7.10.1}
\]
If $\tilde x=(\tilde x_1,\tilde x_2,\cdots,\tilde x_m)^T$ stands for an $m$-dimensional T-fuzzy variable vector, we call (7.10.1) a multi-objective geometric programming with T-fuzzy variables. Here $\tilde x_i=(x_i,\underline\xi_i,\overline\xi_i)$, and $\tilde\sigma=\sigma\times\tilde 1$, $\tilde 1=(1,1,1)$ are T-fuzzy numbers, where $\sigma_{ik},\sigma=\pm 1$,
\[
g_0^{(j)}(\tilde x)=\sum_{k=1}^{J_0}\sigma_{0k}c_{0k}^{(j)}\prod_{l=1}^{m}\tilde x_l^{\gamma_{0kl}^{(j)}}\quad(1\le j\le n),\qquad
g_i(\tilde x)=\sum_{k=1}^{J_i}v_{ik}(\tilde x)=\sum_{k=1}^{J_i}\sigma_{ik}c_{ik}\prod_{l=1}^{m}\tilde x_l^{\gamma_{ikl}}\quad(1\le i\le p)
\]
are fuzzy polynomials of $\tilde x$, and $\gamma_{ikl}$ is an arbitrary real number.

Theorem 7.10.1. Let the multi-objective geometric programming model with T-fuzzy variables be as in (7.10.1). Then it can be converted into a multi-objective geometric programming with a cone index $\mathcal J$:

\[
\begin{aligned}
\min\;&g_0^{(j)}(z(\mathcal J))\quad(1\le j\le n)\\
\text{s.t. }&g_i(z(\mathcal J))\le\sigma\quad(1\le i\le p),\\
&z(\mathcal J)>0,
\end{aligned}
\tag{7.10.2}
\]
where $g_0^{(j)}(z(\mathcal J))=\sum_{k=1}^{J_0}\sigma_{0k}c_{0k}^{(j)}\prod_{l=1}^{m}(z_l(\mathcal J))^{\gamma_{0kl}^{(j)}}$ and $g_i(z(\mathcal J))=\sum_{k=1}^{J_i}\sigma_{ik}c_{ik}\prod_{l=1}^{m}(z_l(\mathcal J))^{\gamma_{ikl}}$,

and (7.10.1) has an optimal solution with T-fuzzy variables if and only if (7.10.2) has an optimal solution depending on the cone index $\mathcal J$.

Proof: Similarly to the proof of Theorem 7.8.1, (7.10.1) is turned into
\[
\begin{aligned}
\min\;&\sum_{k=1}^{J_0}c_{0k}^{(j)}\prod_{l=1}^{m}(z_l(\mathcal J))^{\gamma_{0kl}}\quad(1\le j\le n)\\
\text{s.t. }&\sum_{k=1}^{J_i}\sigma_{ik}c_{ik}\prod_{l=1}^{m}(z_l(\mathcal J))^{\gamma_{ikl}}\le\sigma\quad(1\le i\le p),\\
&z_l(\mathcal J)>0\quad(1\le l\le m),
\end{aligned}
\]
such that (7.10.2) can be found.


Since (7.10.1) is equivalent to (7.10.2), a parameter optimal solution to (7.10.2) depending on a cone index $\mathcal J$ is equivalent to an optimal T-fuzzy one of (7.10.1).

Corollary 7.10.1. If $\tilde 1$ is taken for $\tilde\sigma_i$ in (7.10.1), then (7.10.1) is
\[
\begin{aligned}
\min\;&g_0^{(j)}(\tilde x)\quad(1\le j\le n)\\
\text{s.t. }&g_i(\tilde x)\lesssim\tilde 1\quad(1\le i\le p),\\
&\tilde x>0,
\end{aligned}
\]
and if $1$ is taken for $\sigma$ in (7.10.2), then (7.10.2) is
\[
\begin{aligned}
\min\;&g_0^{(j)}(z(\mathcal J))\quad(1\le j\le n)\\
\text{s.t. }&g_i(z(\mathcal J))\le 1\quad(1\le i\le p),\\
&z(\mathcal J)>0,
\end{aligned}
\]
so the conclusion corresponding to Theorem 7.10.1 still holds.

Algorithm

For a multi-objective geometric programming (7.10.1) with T-fuzzy variables, the objective functions can either be weighted before nonfuzzification or be nonfuzzified before being weighted. Two algorithms are advanced on the assumption that (7.10.1) has a solution and is nonfuzzified into (7.10.2).

A. Nonfuzzification steps

The steps for nonfuzzifying Model (7.10.1) are as follows.

Step 1. For the given T-fuzzy variable $\tilde x_l$, the index set $\{1,2,\cdots,3N\}$ is partitioned into three parts by subscripts:
\[
\text{I}:\quad z_{li}=x_{li}+\frac{\underline\xi_{li}+\overline\xi_{li}}{2}\quad\text{for }1\le i\le N\text{ and each }l;
\]
\[
\text{II}:\quad z_{li}=
\begin{cases}
x_{li}+\underline\xi_{li}, & j_l=0,\\
x_{li}-\underline\xi_{li}, & j_l=1,
\end{cases}
\quad\text{for }N+1\le i\le 2N\text{ and each }l;
\]
\[
\text{III}:\quad z_{li}=
\begin{cases}
x_{li}-\overline\xi_{li}, & j_l=0,\\
x_{li}+\overline\xi_{li}, & j_l=1,
\end{cases}
\quad\text{for }2N+1\le i\le 3N\text{ and each }l.
\]

Step 2. Nonfuzzify the variable $\tilde x_l$: select
\[
z_l=x_l+\frac{1}{3N}\sum_{i=1}^{3N}\xi_{li}^*,
\]
where $\xi_{li}^*$ is $\frac{\underline\xi_{li}+\overline\xi_{li}}{2}$ (in case I), $\pm\underline\xi_{li}$ (in II), or $\pm\overline\xi_{li}$ (in III).

Step 3. Substitute $z_l$ for $\tilde x_l$; each T-fuzzy variable is thereby turned into a determined variable, so that (7.10.1) is changed into a determined geometric programming (7.10.2).


Step 4. Determine a satisfactory (resp. efficient) solution to problem (7.10.2) by geometric programming, from which a fuzzy satisfactory (resp. efficient) solution to (7.10.1) can be composed.

B. Direct primal algorithm

For a multi-objective geometric programming (7.10.1) with T-fuzzy variables, two ways of nonfuzzifying (7.10.1) are advanced.

Algorithm I: Nonfuzzify (7.10.1) into (7.10.2) before weighting the objective functions of (7.10.2). Give the $n$ objective functions weights $\kappa_j$ $(1\le j\le n)$; the weighted objective is then
\[
g_0^*(z(\mathcal J))=\kappa_1 g_0^{(1)}(z(\mathcal J))+\kappa_2 g_0^{(2)}(z(\mathcal J))+\cdots+\kappa_n g_0^{(n)}(z(\mathcal J)),
\]
where each $\kappa_j$ is a weighting factor satisfying $0\le\kappa_j\le 1$ $(1\le j\le n)$ and $\kappa_1+\kappa_2+\cdots+\kappa_n=1$. Substituting $g_0^*(z(\mathcal J))$ for the $n$ objective functions in (7.10.2) turns (7.10.2) into the single-objective parameter geometric programming
\[
\begin{aligned}
\min\;&g_0^*(z(\mathcal J))\\
\text{s.t. }&g_i(z(\mathcal J))\le\sigma\quad(1\le i\le p),\\
&z(\mathcal J)>0,
\end{aligned}
\tag{7.10.3}
\]
which is then solved.

Algorithm II: Weight the objective functions of (7.10.1) before nonfuzzifying it. Consider problem (7.10.1) and weight its $n$ objective functions as $g_0^*(\tilde x)=\kappa_1 g_0^{(1)}(\tilde x)+\kappa_2 g_0^{(2)}(\tilde x)+\cdots+\kappa_n g_0^{(n)}(\tilde x)$. Substituting $g_0^*(\tilde x)$ for the $n$ objective functions changes (7.10.1) into
\[
\begin{aligned}
\min\;&g_0^*(\tilde x)\\
\text{s.t. }&g_i(\tilde x)\lesssim\tilde\sigma\quad(1\le i\le p),\\
&\tilde x>0.
\end{aligned}
\tag{7.10.4}
\]
Nonfuzzifying (7.10.4) as above yields a determined geometric programming like (7.10.3).

Note: The weight vector $\kappa_j$ above can be determined by an Analytic Hierarchy Process method or by expert evaluation. Several methods exist for solving an ordinary geometric programming like (7.10.3). The primal algorithm can be adopted to find an approximate satisfactory (or efficient) solution, so that a fuzzy satisfactory solution of (7.10.1) can be found; with respect to a cone index $\mathcal J$, (7.10.3) behaves as a single-objective geometric programming problem, to which many direct solution methods apply [Cao89c][TA84][WB67][WY82].
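Algorithm I's weighting step can be sketched generically; `weighted_objective` is an illustrative helper, with objectives given as plain callables of the defuzzified variables:

```python
# Scalarize n objectives with weights kappa_j (kappa_j >= 0, sum = 1)
# into the single objective g0* of (7.10.3).
def weighted_objective(objectives, kappas):
    assert abs(sum(kappas) - 1.0) < 1e-12 and all(k >= 0 for k in kappas)
    def g0_star(x):
        return sum(k * g(x) for k, g in zip(kappas, objectives))
    return g0_star

# the two objectives of the numerical example (7.10.8): g0_1 = x1, g0_2 = x2^2
g = weighted_objective([lambda x: x[0], lambda x: x[1] ** 2], [0.5, 0.5])
print(g((2.0, 3.0)))  # 0.5*2 + 0.5*9 = 5.5
```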

7.10.2 Fuzzy Dual Problem

Definition 7.10.2. The problem
\[
\begin{aligned}
\max\;&\tilde d^{(j)}(w)=\sigma\Bigl[\prod_{k=1}^{J_0}\Bigl(\frac{\tilde c^{(j)}_{0k}}{w_{0k}}\Bigr)^{\sigma_{0k}w_{0k}}\Bigl(\sum_{k=1}^{J_0}w_{0k}\Bigr)^{\sigma_{0k}w_{0k}}\prod_{i=1}^{p}\prod_{k=1}^{J_i}\Bigl(\frac{\tilde c_{ik}}{w_{ik}}\Bigr)^{\sigma_{ik}w_{ik}}\Bigl(\sum_{k=1}^{J_i}w_{ik}\Bigr)^{\sigma_{ik}w_{ik}}\Bigr]^{\sigma}\\
\text{s.t. }&w_{00}=1,\\
&\Gamma^{(j)T}w=0,\\
&w\ge 0
\end{aligned}
\tag{7.10.5}
\]
is called the multi-objective dual geometric programming with T-fuzzy variables corresponding to (7.10.1), where
\[
\Gamma^{(j)}=
\begin{pmatrix}
\sigma_{01}\gamma^{(j)}_{011} & \cdots & \sigma_{01}\gamma^{(j)}_{01l} & \cdots & \sigma_{01}\gamma^{(j)}_{01m}\\
\vdots & & \vdots & & \vdots\\
\sigma_{0J_0}\gamma^{(j)}_{0J_01} & \cdots & \sigma_{0J_0}\gamma^{(j)}_{0J_0l} & \cdots & \sigma_{0J_0}\gamma^{(j)}_{0J_0m}\\
\sigma_{11}\gamma_{111} & \cdots & \sigma_{11}\gamma_{11l} & \cdots & \sigma_{11}\gamma_{11m}\\
\vdots & & \vdots & & \vdots\\
\sigma_{pJ_p}\gamma_{pJ_p1} & \cdots & \sigma_{pJ_p}\gamma_{pJ_pl} & \cdots & \sigma_{pJ_p}\gamma_{pJ_pm}
\end{pmatrix}
\quad(1\le j\le n)
\]

denotes the $j$-th exponent matrix. Its $l$-th column is composed of the exponents of the variable $\tilde x_l$ in each term of the $j$-th objective function $g_0^{(j)}(\tilde x)$ $(1\le j\le n)$ and of the constraint functions $g_i(\tilde x)$ $(1\le i\le p)$; $w=(w_{01},\cdots,w_{0J_0},\cdots,w_{p1},\cdots,w_{pJ_p})^T$ represents a $J$-dimensional variable vector, and $(w_{ik})^{w_{ik}}|_{w_{ik}=0}=1$ is stipulated.

Theorem 7.10.2. If problem (7.10.1) is deduced from the T-fuzzy variable $\tilde x_l=(\tilde x_{l1},\tilde x_{l2},\cdots,\tilde x_{lp})^T$, then the dual form of (7.10.1) is (7.10.5).

Proof: As (7.10.1) is equivalent to (7.10.2) by Theorem 7.10.1, the dual form of (7.10.2) is obviously
\[
\begin{aligned}
\max\;&d(w(\mathcal J))\\
\text{s.t. }&w_{00}(\mathcal J)=1,\quad \bar\Gamma^T w(\mathcal J)=0,\quad w(\mathcal J)\ge 0,
\end{aligned}
\tag{7.10.6}
\]
where
\[
d(w(\mathcal J))=\sigma\Bigl[\prod_{k=1}^{J_0}\Bigl(\frac{c^{(j)}_{0k}}{w_{0k}(\mathcal J)}\Bigr)^{\sigma_{0k}w_{0k}(\mathcal J)}\Bigl(\sum_{k=1}^{J_0}w_{0k}(\mathcal J)\Bigr)^{\sigma_{0k}w_{0k}(\mathcal J)}\prod_{i=1}^{p}\prod_{k=1}^{J_i}\Bigl(\frac{c_{ik}}{w_{ik}(\mathcal J)}\Bigr)^{\sigma_{ik}w_{ik}(\mathcal J)}\Bigl(\sum_{k=1}^{J_i}w_{ik}(\mathcal J)\Bigr)^{\sigma_{ik}w_{ik}(\mathcal J)}\Bigr]^{\sigma},
\]
$w(\mathcal J)=(w_{01}(\mathcal J),\cdots,w_{0J_0}(\mathcal J),\cdots,w_{p1}(\mathcal J),\cdots,w_{pJ_p}(\mathcal J))^T$ is a $J$-dimensional variable vector depending on the cone index $\mathcal J$, and $\bar\Gamma$ is an exponent matrix. We stipulate $(w_{ik}(\mathcal J))^{w_{ik}(\mathcal J)}|_{w_{ik}(\mathcal J)=0}=1$ correspondingly.


(7.10.6) can also be proved equivalent to (7.10.5) under the above cone index J, while (7.10.2) and (7.10.6) are mutually dual under the cone index J. Hence (7.10.1) and (7.10.5) are mutually dual problems, and the theorem holds.

Dual algorithms

Algorithm III: For a certain j, a single-objective geometric programming problem is obtained from (7.10.1), and its corresponding dual programming is (7.10.6). By solving the n dual geometric programming problems for j (1 ≤ j ≤ n) respectively, n groups of optimal solutions, the worst values U_j and the best values L_j are obtained. Thereafter a single-objective fuzzy geometric programming problem is obtained:

max x_{m+1}
s.t. g_i(x) ≤ 1 (1 ≤ i ≤ p),
g_{p+j}(x) = (1/U_j) g_0^(j)(x) + (1 − L_j/U_j) x_{m+1} ≤ 1 (1 ≤ j ≤ n),     (7.10.7)
x_1, x_2, ···, x_{m+1} > 0,

where the g_i(x) (1 ≤ i ≤ p + n) are posynomials when the coefficients are positive. Finally, an optimal compromise solution to problem (7.10.2) is acquired by adopting a primal algorithm [Biw92][WB67][WY82], so that a fuzzy optimal compromise solution comes out.

Algorithm IV: Change (7.10.1) into an ordinary single-objective geometric programming problem like (7.10.3) by means of the two nonfuzzification methods above, and deduce its dual programming by duality theory. A dual parameter solution to the dual problem can then be obtained by a dual algorithm, and so can a fuzzy optimal compromise solution.

7.10.3 Numerical Example

Let us consider a multi-objective geometric programming problem with T-fuzzy variables:

min { g_0^(1)(x̃) = x̃_1, g_0^(2)(x̃) = x̃_2² }
s.t. 4 x̃_1⁻¹ x̃_2⁻¹ ≲ 1̃,
x̃_2 / (1̃ + x̃_1/2) ≳ 1̃,     (7.10.8)
x̃_1, x̃_2 > 0,

where x̃_1 = (x_1, ξ_1, ξ̄_1) and x̃_2 = (x_2, ξ_2, ξ̄_2) are T-fuzzy variables, and 1̃ = (1, 0, 0) is a special T-fuzzy number. Here we might as well adopt the following T-fuzzy variables:

x̃_1: 1. (x_1, 1, 0), 2. (x_1, 0, 1), 3. (x_1, 2, 1);
x̃_2: 4. (x_2, 0, 1), 5. (x_2, 1, 0), 6. (x_2, 2, 2).


Now let us divide the data into three groups: No. 1, 4; No. 2, 5; and No. 3, 6. For data No. 1, 4, a value is obtained by Formula I in Algorithm A. For the rest, the formulas corresponding to j_p = 1 and j_p = 0 in Formulas II and III are used when odd and even numbers appear, respectively. Thus x̃_1, x̃_2 can be nonfuzzified as

x̃_1: [ (x_1 + 0.5) + (x_1 − 0) + (x_1 + 1) ] / 3 = x_1 + 0.5,
x̃_2: [ (x_2 + 0.5) + (x_2 − 0) + (x_2 − 2) ] / 3 = x_2 − 0.5.

Thus the geometric programming problem corresponding to (7.10.8) is

min { (x_1 + 0.5), (x_2 − 0.5)² }
s.t. 4 (x_1 + 0.5)⁻¹ (x_2 − 0.5)⁻¹ ≤ 1,
(x_2 − 0.5) / (1 + (x_1 + 0.5)/2) ≥ 1,
x_1, x_2 > 0.

(7.10.9)

This is a multi-objective geometric programming problem concerning the cone index J. Obviously, a variety of direct solution methods exists for (7.10.9). Adopting an objective-weighted method, the objective function is changed into g_0(x̃) = κ_1 g_0^(1)(x̃) + κ_2 g_0^(2)(x̃), so that g_0(x) = κ_1 (x_1 + 0.5) + κ_2 (x_2 − 0.5)² is obtained. We might as well take κ_1 = κ_2 = 1/2 and let u_1 = x_1 + 0.5, u_2 = x_2 − 0.5; then (7.10.9) turns into

min (1/2) u_1 + (1/2) u_2²
s.t. 4 u_1⁻¹ u_2⁻¹ ≤ 1,
u_2 / (1 + u_1/2) ≥ 1,
u_1, u_2 > 0.

(7.10.10)

By solving (7.10.10), its optimal solution is acquired as (u_1, u_2)^T = (2, 2)^T; hence the optimal solution to (7.10.9) is (x_1, x_2)^T = (1.5, 2.5)^T. Certainly, a T-fuzzy optimal solution to (7.10.8) can be synthesized, but it is practically unnecessary to do so. Therefore this solution is regarded as an approximate T-fuzzy optimal solution to (7.10.8).
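The stated optimum can be checked numerically. The sketch below brute-forces (7.10.10) on a grid, assuming the constraints read 4u₁⁻¹u₂⁻¹ ≤ 1 and u₂/(1 + u₁/2) ≥ 1; it is a sanity check under that reading, not part of the book's algorithm.

```python
# Brute-force check of (7.10.10): min (1/2)u1 + (1/2)u2^2
# subject to u1*u2 >= 4 and u2 >= 1 + u1/2 (assumed constraint forms).

def feasible(u1, u2):
    return u1 * u2 >= 4.0 and u2 >= 1.0 + u1 / 2.0

best = None
for i in range(451):              # u1 on a grid over [0.5, 5]
    u1 = 0.5 + 0.01 * i
    for j in range(451):          # u2 on the same grid
        u2 = 0.5 + 0.01 * j
        if feasible(u1, u2):
            val = 0.5 * u1 + 0.5 * u2 ** 2
            if best is None or val < best[0]:
                best = (val, u1, u2)

print(best)                       # optimum approx (3.0, 2.0, 2.0)
```

The grid minimum sits at (u_1, u_2) ≈ (2, 2) with objective value 3, agreeing with the solution quoted in the text.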

8 Fuzzy Relation Equation and Its Optimization

Since Sanchez, the noted biomathematician, put forward fuzzy relation equations in 1976 in an innovative investigation, scholars at home and abroad have developed many characteristic solution methods for them. In this chapter, solution methods for (∨, ∧) and (∨, ·) fuzzy relation equations are introduced, together with their application in business management. Meanwhile, a recently arising optimization problem over fuzzy relation equations is discussed, and fuzzy relation linear programming and fuzzy relation geometric programming are put forward.

8.1 (∨, ∧) Fuzzy Relation Equation

Suppose that X, Y, W are finite sets, with fuzzy matrix A ∈ M_{m×n} and b ∈ M_{m×1} given; the fuzzy variable x ∈ M_{n×1} is sought such that

A ∘ x = b

(8.1.1)

is satisfied, where "∘" represents the (∨, ∧) composition operator, and the solution set is recorded as X(A, b) = {x = (x_1, x_2, ···, x_n)^T ∈ R^n | A ∘ x = b, x_i ∈ [0, 1], i ∈ I}. The following properties are easy to prove.

Proposition 8.1.1. If x_i ∈ X(A, b), i ∈ I (I a non-empty index set), then ∨_{i∈I} x_i ∈ X(A, b).

Proposition 8.1.2. If x_1 ⊆ x_2 ⊆ x_3 and x_1, x_3 ∈ X(A, b), then x_2 ∈ X(A, b).

From Proposition 8.1.1 we know that if X(A, b) ≠ ∅, then a greatest element must exist in X(A, b) (i.e., the greatest solution to Equation (8.1.1) must exist); it is obtained by taking the union of all elements of X(A, b). For the sake of finding the greatest element of X(A, b), Sanchez defined an operation:

B.-Y. Cao: Optimal Models & Meth. with Fuzzy Quantities, STUDFUZZ 248, pp. 255–292. springerlink.com © Springer-Verlag Berlin Heidelberg 2010


a_ij α b_i = { 1, when a_ij ≤ b_i; b_i, when a_ij > b_i },  ∀ a_ij, b_i ∈ [0, 1],     (8.1.2)

and x̂ = A^T α b is x̂ = (x̂_1, x̂_2, ···, x̂_n)^T, where x̂_j = ∧_{i=1}^m (a_ij α b_i) (1 ≤ j ≤ n).
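Sanchez's construction can be sketched in a few lines. The data below are those of Example 8.1.1 further on, used here only to exercise the operator; this is an illustrative sketch, not the book's own algorithmic notation.

```python
# Greatest-solution construction x_hat = A^T alpha b for a max-min
# fuzzy relation equation A o x = b (data of Example 8.1.1 below).

def alpha(a, b):
    """Sanchez's alpha operator on [0, 1]."""
    return 1.0 if a <= b else b

A = [[0.3, 0.2, 0.7, 0.8],
     [0.5, 0.4, 0.4, 0.9],
     [0.7, 0.3, 0.2, 0.7],
     [0.9, 0.6, 0.1, 0.2],
     [0.8, 0.5, 0.6, 0.4]]
b = [0.7, 0.4, 0.4, 0.3, 0.6]

# x_hat_j = min_i (a_ij alpha b_i)
x_hat = [min(alpha(row[j], bi) for row, bi in zip(A, b))
         for j in range(len(A[0]))]
print(x_hat)                       # [0.3, 0.3, 1.0, 0.4]

# Verify A o x_hat = b under the max-min composition
compose = [max(min(aij, xj) for aij, xj in zip(row, x_hat)) for row in A]
print(compose == b)                # True
```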

Proposition 8.1.3 (Sanchez E.). X(A, b) ≠ ∅ ⇐⇒ A ∘ x̂ = b; if X(A, b) ≠ ∅, then x̂ is the greatest element in X(A, b).

To calculate x̂, a matrix table is listed, i.e., the fuzzy extended matrix (A|b); the computation proceeds by applying the "α" operation to each a_ij with b_i:

(A|b) =
a_11 a_12 ··· a_1n | b_1
a_21 a_22 ··· a_2n | b_2
···
a_m1 a_m2 ··· a_mn | b_m

→

a_11 α b_1  a_12 α b_1  ···  a_1n α b_1
a_21 α b_2  a_22 α b_2  ···  a_2n α b_2
···
a_m1 α b_m  a_m2 α b_m  ···  a_mn α b_m,

x̂^T = ( ∧_{i=1}^m (a_i1 α b_i), ∧_{i=1}^m (a_i2 α b_i), ···, ∧_{i=1}^m (a_in α b_i) ).

Here → denotes applying the "α" operation to a_ij with b_i.

Deﬁnition 8.1.1. If ∃ x̂ ∈ X(A, b) such that x ≤ x̂, ∀x ∈ X(A, b), then x̂ is called the greatest solution to (8.1.1). If ∃ x̆ ∈ X(A, b) such that x̆ ≤ x, ∀x ∈ X(A, b), then x̆ is called the least solution to (8.1.1). And if ∃ x̆ ∈ X(A, b) such that x ≤ x̆ implies x = x̆, then x̆ is called a minimum solution to (8.1.1).

Example 8.1.1: Find the greatest solution to the fuzzy relation equations

0.3 0.2 0.7 0.8
0.5 0.4 0.4 0.9
0.7 0.3 0.2 0.7   ∘ (x_1, x_2, x_3, x_4)^T = (0.7, 0.4, 0.4, 0.3, 0.6)^T.     (8.1.3)
0.9 0.6 0.1 0.2
0.8 0.5 0.6 0.4

Solution: Because

0.3 0.2 0.7 0.8 | 0.7        1   1   1   0.7
0.5 0.4 0.4 0.9 | 0.4        0.4 1   1   0.4
0.7 0.3 0.2 0.7 | 0.4   →    0.4 1   1   0.4   ,   x̂^T = (0.3, 0.3, 1, 0.4),
0.9 0.6 0.1 0.2 | 0.3        0.3 0.3 1   1
0.8 0.5 0.6 0.4 | 0.6        0.6 1   1   1


it is easy to see that x̂ = (0.3, 0.3, 1, 0.4)^T is a solution to (8.1.3), and it is the greatest solution.

Example 8.1.2: Judge whether the fuzzy relation equations

( 0.2 )   ( 0.1 0.4 )   ( x_1 )
( 0.3 ) = ( 0.2 0.1 ) ∘ ( x_2 )     (8.1.4)

have a solution or not.

Solution:

0.1 0.4 | 0.2   →   1 0.2   ,   x̂^T = (1, 0.2).
0.2 0.1 | 0.3       1 1

Obviously, x̂ = (1, 0.2)^T does not satisfy (8.1.4), so (8.1.4) has no solution.

Proposition 8.1.3 determines whether the fuzzy relation equation (8.1.1) has solutions and how to find the greatest solution when it does. Generally, if (8.1.1) has a solution, then although a least solution need not exist, minimum solutions — minimum elements of X(A, b) — certainly do.

Deﬁnition 8.1.2. Suppose ∃ x̌_0 such that x ∈ X(A, b) and x ≤ x̌_0 imply x = x̌_0; then x̌_0 is called a minimum element of X(A, b).

Proposition 8.1.4 (Czogala E. et al.). If x ∈ X(A, b), then X(A, b) certainly contains a minimum element x̌_0 such that x̌_0 ≤ x ≤ x̂.

From Proposition 8.1.4 we know that once all minimum solutions to (8.1.1) are found, all solutions to (8.1.1) are obtained. If b = (0, 0, ···, 0)^T, then (8.1.1) has the unique minimum solution x = (0, 0, ···, 0)^T, which is the least solution. In what follows, b ≠ (0, 0, ···, 0)^T is always assumed.

Deﬁnition 8.1.3. Suppose x̂ = A^T α b = (x̂_1, x̂_2, ···, x̂_n)^T, and the m × n fuzzy matrix D = (d_ij) is defined as

d_ij = { b_i, when a_ij ∧ x̂_j ≥ b_i; 0, otherwise } (1 ≤ i ≤ m, 1 ≤ j ≤ n);

then D is called the distinguishing matrix of Equation (8.1.1), and D ∘ x = b the distinguishing equation of (8.1.1). We define another operator β:

a_ij β b_i = { b_i, when a_ij ≥ b_i; 0, when a_ij < b_i };

obviously d_ij = (a_ij ∧ x̂_j) β b_i. The distinguishing matrix can also be constructed by ranking a matrix table:


(A|b), with column heads x_1, x_2, ···, x_n:

a_11 a_12 ··· a_1n | b_1
a_21 a_22 ··· a_2n | b_2
···
a_m1 a_m2 ··· a_mn | b_m

→ (applying (a_ij ∧ x̂_j) β b_i entrywise)

(a_11 ∧ x̂_1)βb_1 ··· (a_1n ∧ x̂_n)βb_1
···
(a_m1 ∧ x̂_1)βb_m ··· (a_mn ∧ x̂_n)βb_m   = D.

Proposition 8.1.5. X(A, b) ≠ ∅ ⇐⇒ every row of the distinguishing matrix D has a nonzero element.

For Example 8.1.1 the calculation gives

D =
0   0   0.7 0
0   0   0.4 0.4
0   0   0   0.4
0.3 0.3 0   0
0   0   0.6 0

Obviously every row of D has a nonzero element, so by Proposition 8.1.5, (8.1.3) has a solution.

Proposition 8.1.6. Let X̌_0(A, b) be the set of all minimum elements of X(A, b); if X*(A, b) = {x | D ∘ x = b} and X̌*_0(A, b) is the set of all minimum elements of X*(A, b), then X*(A, b) ⊇ X(A, b) and X̌*_0(A, b) = X̌_0(A, b).

By Proposition 8.1.6, to find the minimum solutions of A ∘ x = b we have only to find those of the distinguishing equation D ∘ x = b, whose i-th row elements are b_i instead of the full a_ij; this greatly simplifies the operation. Take one nonzero element in each row of D, with zero in the remaining positions, to obtain a transition matrix D^(i); the column-wise maximum (supremum) of this matrix gives an n-dimensional vector whose transpose is called a quasi-minimum solution. Deleting repetitions and non-minimal vectors among the quasi-minimum solutions yields all minimum solutions. Four transition matrices D^(1), D^(2), D^(3), D^(4) exist for the distinguishing matrix of Example 8.1.1:

D^(1) =
0   0   0.7 0
0   0   0.4 0
0   0   0   0.4        column maxima: (0.3, 0, 0.7, 0.4);
0.3 0   0   0
0   0   0.6 0

D^(2) =
0   0   0.7 0
0   0   0   0.4
0   0   0   0.4        column maxima: (0.3, 0, 0.7, 0.4);
0.3 0   0   0
0   0   0.6 0

D^(3) =
0   0   0.7 0
0   0   0.4 0
0   0   0   0.4        column maxima: (0, 0.3, 0.7, 0.4);
0   0.3 0   0
0   0   0.6 0

D^(4) =
0   0   0.7 0
0   0   0   0.4
0   0   0   0.4        column maxima: (0, 0.3, 0.7, 0.4).
0   0.3 0   0
0   0   0.6 0

The quasi-minimum solutions to (8.1.3) are thus

x^1 = (0.3, 0, 0.7, 0.4)^T,  x^2 = (0.3, 0, 0.7, 0.4)^T,  x^3 = (0, 0.3, 0.7, 0.4)^T,  x^4 = (0, 0.3, 0.7, 0.4)^T,

where x^2 and x^4 are repetitions and should be deleted, so x^1, x^3 are the minimum solutions to Equation (8.1.3). In general, if the i-th row of D has n_i nonzero elements, then n_1 n_2 ··· n_m transition matrices, and hence n_1 n_2 ··· n_m quasi-minimum solutions, are obtained, of which the genuine minimum solutions are only a part. To avoid this invalid labor, we put forward the following effective method for finding minimum solutions.

On a universe Y, an alternation of some nonempty fuzzy sets is called a fuzzy set chain, a chain for short, recorded as Ã_1 ∗ Ã_2 ∗ ··· ∗ Ã_p, where Ã_i ∈ F(Y), Ã_i ≠ ∅ (1 ≤ i ≤ p). Every Ã_i is called an item of the chain, with the order of items immaterial; that is, if (i_1, i_2, ···, i_p) is an arrangement of (1, 2, ···, p), then Ã_{i_1} ∗ Ã_{i_2} ∗ ··· ∗ Ã_{i_p} = Ã_1 ∗ Ã_2 ∗ ··· ∗ Ã_p. If there exists Ã_i (1 ≤ i ≤ p − 1) in the chain such that Ã_i ⊆ Ã_p, then Ã_p can be eliminated from the chain, i.e., Ã_1 ∗ ··· ∗ Ã_p = Ã_1 ∗ ··· ∗ Ã_{p−1}; this is called the elimination principle. Continuously applying this expunction rule changes the chain into one in which no item contains another, called a reduced chain. At p = 1 the fuzzy set Ã_1 itself is a one-item chain, obviously reduced.

Suppose Ã_1 ∗ ··· ∗ Ã_p and B̃_1 ∗ ··· ∗ B̃_q are any two chains; their "union" operation is defined as

(Ã_1 ∗ ··· ∗ Ã_p) ∨ (B̃_1 ∗ ··· ∗ B̃_q) = (Ã_1 ∨ B̃_1) ∗ ··· ∗ (Ã_p ∨ B̃_1) ∗ (Ã_1 ∨ B̃_2) ∗ ··· ∗ (Ã_p ∨ B̃_2) ∗ ··· ∗ (Ã_1 ∨ B̃_q) ∗ ··· ∗ (Ã_p ∨ B̃_q).

Thus the union of two chains is still one chain, and if and only if p = q = 1 does the "union" of chains degenerate into the "union" of fuzzy sets.


It is easily verified that the "union" of chains satisfies the commutative, associative and idempotent laws, and extends to unions of several chains. To apply the chain concept to the minimum solutions of the distinguishing equation D ∘ x = b, consider any row (d_{i1}, d_{i2}, ···, d_{in}) (1 ≤ i ≤ m) of D. If its nonzero elements are d_{ij_1}, d_{ij_2}, ···, d_{ij_k}, then there is a unique chain

d_{ij_1}/x_{j_1} ∗ d_{ij_2}/x_{j_2} ∗ ··· ∗ d_{ij_k}/x_{j_k}

corresponding to it; each item of this chain is a single-point fuzzy set, called the concomitant chain of row i of D. The following can be proved.

Proposition 8.1.7. The items of the reduced chain of the union of the concomitant chains of the rows of the distinguishing matrix D are exactly all minimum solutions to Equation (8.1.1).

For Example 8.1.1, the concomitant chains of the rows of D are

row 1: 0.7/x_3 = Ã_1;  row 2: 0.4/x_3 ∗ 0.4/x_4 = Ã_2;  row 3: 0.4/x_4 = Ã_3;  row 4: 0.3/x_1 ∗ 0.3/x_2 = Ã_4;  row 5: 0.6/x_3 = Ã_5.

Taking the union of the concomitant chains and reducing by the elimination principle (Ã_2, Ã_3 and Ã_5 are absorbed into the union of Ã_1 and Ã_3's positions):

P = Ã_1 ∨ Ã_2 ∨ Ã_3 ∨ Ã_4 ∨ Ã_5
  = (0.7/x_3 + 0.4/x_4) ∨ (0.3/x_1 ∗ 0.3/x_2)
  = (0.3/x_1 + 0.7/x_3 + 0.4/x_4) ∗ (0.3/x_2 + 0.7/x_3 + 0.4/x_4).

This reduced chain has two items: one is the minimum solution (0.3, 0, 0.7, 0.4)^T, the other the minimum solution (0, 0.3, 0.7, 0.4)^T, while the greatest solution is (0.3, 0.3, 1, 0.4)^T.


In synthesis, the solution set of Equation (8.1.3) is

X(A, b) = { (0.3, [0, 0.3], [0.7, 1], 0.4)^T } ∪ { ([0, 0.3], 0.3, [0.7, 1], 0.4)^T }.

A fuzzy relation equation A ∘ x = b abstracted from an actual problem may have no solution. In that case we can take x̂ = A^T α b as an approximate solution: when the equation is solvable, x̂ is its greatest solution, so x̂ serves as a maximum-shortage approximate solution to A ∘ x = b. Even when the equation has a solution, it generally has many. How to choose an appropriate one among them depends on the demands of the actual problem and on experience, and remains to be studied in the future.
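The whole procedure of this section — greatest solution, distinguishing matrix, transition-matrix choices, minimality filtering — can be sketched in brute-force form on the data of Example 8.1.1; a plain minimality filter replaces the chain machinery here, so this is an illustrative sketch rather than the book's chain algorithm.

```python
from itertools import product

# Section 8.1 end-to-end on Example 8.1.1: greatest solution, distinguishing
# matrix D, quasi-minimum solutions from one nonzero choice per row, and a
# minimality filter in place of the fuzzy-chain reduction.

def alpha(a, b):
    return 1.0 if a <= b else b

A = [[0.3, 0.2, 0.7, 0.8],
     [0.5, 0.4, 0.4, 0.9],
     [0.7, 0.3, 0.2, 0.7],
     [0.9, 0.6, 0.1, 0.2],
     [0.8, 0.5, 0.6, 0.4]]
b = [0.7, 0.4, 0.4, 0.3, 0.6]
m, n = len(A), len(A[0])

x_hat = [min(alpha(A[i][j], b[i]) for i in range(m)) for j in range(n)]

# d_ij = b_i where min(a_ij, x_hat_j) >= b_i, else 0   (Definition 8.1.3)
D = [[b[i] if min(A[i][j], x_hat[j]) >= b[i] else 0.0 for j in range(n)]
     for i in range(m)]
assert all(any(row) for row in D)        # solvable by Proposition 8.1.5

choices = [[j for j in range(n) if row[j] > 0] for row in D]
quasi = set()
for pick in product(*choices):           # one nonzero position per row
    x = [0.0] * n
    for i, j in enumerate(pick):
        x[j] = max(x[j], D[i][j])        # column-wise supremum
    quasi.add(tuple(x))
minimal = [x for x in quasi
           if not any(y != x and all(u <= v for u, v in zip(y, x))
                      for y in quasi)]
print(x_hat)           # [0.3, 0.3, 1.0, 0.4]
print(sorted(minimal)) # [(0.0, 0.3, 0.7, 0.4), (0.3, 0.0, 0.7, 0.4)]
```

The two minimum solutions and the greatest solution reproduce the interval description of X(A, b) given above.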

8.2 (∨, ·) Fuzzy Relation Equation

8.2.1 Introduction

Let x = {x_1, x_2, ···, x_p}, y = {y_1, y_2, ···, y_q} (p ≤ n, q ≤ m) be finite sets, and let fuzzy matrix A ∈ M_{m×n}, x ∈ M_{n×1}, b ∈ M_{m×1}. Consider the generalized fuzzy relation equations

A ∘ x = b,

(8.2.1)

we call (8.2.1) a (∨, ·) fuzzy relation equation, where "∘" represents the max-product operation, i.e., the operator (∨, ·),

A =
A_1^(1)(x_1)  A_1^(2)(x_1)  ···  A_1^(n)(x_1)
···           ···           ···  ···                    (8.2.2)
A_m^(1)(x_m)  A_m^(2)(x_m)  ···  A_m^(n)(x_m),

x = (x_1, x_2, ···, x_n)^T, b = (b_1, b_2, ···, b_m)^T, and "T" represents transpose. In this section we first study solutions to (8.2.1) theoretically; then, through practical examples, we obtain the degree to which individual factors influence the economic benefits of commercial enterprises, which provides a practical background for the application of Equations (8.2.1).

8.2.2 Solvability of (∨, ·) Fuzzy Relation Equations and the Greatest-Solution Theorem

For convenience of discussion, let the matrix elements in (8.2.2) be

Ã_i^(j)(x_i) = a_ij (1 ≤ i ≤ m, 1 ≤ j ≤ n).


Then we only discuss fuzzy relation equations of the form

a_11 a_12 ··· a_1n
···                  ∘ (x_1, ···, x_n)^T = (b_1, ···, b_m)^T,     (8.2.3)
a_m1 a_m2 ··· a_mn

where the compound operation "∘" is the (∨, ·) composition, i.e.,

∨_{1≤j≤n} (a_ij · x_j) = b_i (i ≤ m),

and the solution set is recorded as X(A, b) = {x = (x_1, x_2, ···, x_n)^T ∈ R^n | A ∘ x = b}.

Deﬁnition 8.2.1.

a_ij ⊘ b_i = { b_i / a_ij, if a_ij > b_i; 1, if a_ij ≤ b_i },  ∀ a_ij, b_i ∈ [0, 1],     (8.2.4)

where "⊘" is an operator defined on [0, 1]. And let

K_j = ∧_{i=1}^m (a_ij ⊘ b_i) (j ≤ n).     (8.2.5)

Then x̂ = (K_1, K_2, ···, K_n)^T is the greatest element in X(A, b).

Proposition 8.2.1. If a, b, c ∈ [0, 1], then b ≤ c ⇒ a ⊘ b ≤ a ⊘ c.

Proof: If a > b, then a ⊘ b = b/a; when a > c as well, a ⊘ c = c/a ≥ b/a, and when a ≤ c, a ⊘ c = 1 ≥ a ⊘ b. If a ≤ b, then a ≤ c too, so a ⊘ b = 1 = a ⊘ c. Hence a ⊘ b ≤ a ⊘ c.

Corollary 8.2.1. a ⊘ (b ∨ c) ≥ a ⊘ c.

Proposition 8.2.2. a · (a ⊘ b) = a ∧ b; a ⊘ (a · b) ≥ b.

Proof: 1° a > b ⇒ a ⊘ b = b/a ⇒ a · (a ⊘ b) = b; a ≤ b ⇒ a ⊘ b = 1 ⇒ a · (a ⊘ b) = a · 1 = a.

So a · (a ⊘ b) = a ∧ b.

2° When a > a·b, a ⊘ (a·b) = (a·b)/a = b; when a ≤ a·b, a ⊘ (a·b) = 1 ≥ b. Then a ⊘ (a·b) ≥ b.
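A small grid check of Proposition 8.2.2 (the operator "⊘" is written `odiv` below); this is a numerical sanity check, not part of the proof.

```python
# Grid check of Proposition 8.2.2: a·(a⊘b) = a∧b and a⊘(a·b) >= b on [0,1].

def odiv(a, b):
    """The operator of Definition 8.2.1: b/a if a > b, else 1."""
    return b / a if a > b else 1.0

vals = [i / 20.0 for i in range(21)]     # 0.0, 0.05, ..., 1.0
for a in vals:
    for b in vals:
        assert abs(a * odiv(a, b) - min(a, b)) < 1e-12   # a·(a⊘b) = a∧b
        assert odiv(a, a * b) >= b - 1e-12               # a⊘(a·b) ≥ b
print("Proposition 8.2.2 holds on the grid")
```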

Theorem 8.2.1. x = (x_1, x_2, ···, x_n)^T is a solution to the fuzzy relation equations (8.2.3) if and only if a_ij · x_j ≤ b_i (i ≤ m, j ≤ n) and, for each i, there exists j_i such that a_{ij_i} · x_{j_i} = b_i.

Proof: Sufficiency is evident. Now let us prove the necessity. If x = (x_1, x_2, ···, x_n)^T is a solution to (8.2.3), then

a_ij · x_j ≤ b_i (i ≤ m, j ≤ n).

(8.2.6)

Otherwise, if there existed i, j such that a_ij · x_j > b_i, then (a_{i1} · x_1) ∨ ··· ∨ (a_ij · x_j) ∨ ··· ∨ (a_{in} · x_n) > b_i, a contradiction; therefore (8.2.6) holds. At the same time, given (8.2.6), there must exist j_i for each i ≤ m such that a_{ij_i} · x_{j_i} = b_i; otherwise (a_{i1} · x_1) ∨ (a_{i2} · x_2) ∨ ··· ∨ (a_{in} · x_n) < b_i for some i, in contradiction with x being a solution to (8.2.3).

In practical application, (8.2.3) may have no solution, but a small alteration A_ε of A, or b_δ of b, can always be made so that A_ε ∘ x = b or A ∘ x = b_δ has a solution. So (8.2.3) is always assumed solvable in the following. If b in (8.2.3) is arranged in standardized order, then b_1 ≥ b_2 ≥ ··· ≥ b_m (or b_1 ≤ b_2 ≤ ··· ≤ b_m). For short, let b_i still stand for the rearranged components and (a_ij) for the correspondingly rearranged matrix.

Theorem 8.2.2. If (8.2.3) has a solution, i.e., X(A, b) ≠ ∅, then x̂ is its greatest solution.


Proof: Because X(A, b) ≠ ∅, for any solution x and each j_0, by Corollary 8.2.1 and Proposition 8.2.2,

K_{j_0} = ∧_{i=1}^m (a_{ij_0} ⊘ b_i) = ∧_{i=1}^m [ a_{ij_0} ⊘ ( ∨_{j=1}^n a_ij · x_j ) ] ≥ ∧_{i=1}^m [ a_{ij_0} ⊘ (a_{ij_0} · x_{j_0}) ] ≥ x_{j_0},

hence x ≤ x̂. Moreover, a_ij · (a_ij ⊘ b_i) = a_ij ∧ b_i ≤ b_i by Proposition 8.2.2, so (A ∘ x̂)_i = ∨_{j=1}^n a_ij · K_j ≤ ∨_{j=1}^n a_ij · (a_ij ⊘ b_i) ≤ b_i, while x ≤ x̂ gives (A ∘ x̂)_i ≥ (A ∘ x)_i = b_i. Hence A ∘ x̂ = b and x̂ is the greatest solution.

Corollary 8.2.2. If x ∘ R = b has a solution, then x̂^T is its greatest solution, where x̂ is the greatest solution of R^T ∘ x^T = b^T.

Proof: Because x ∘ R = b ⇔ R^T ∘ x^T = b^T, Theorem 8.2.2 gives x^T ≤ x̂ and R^T ∘ x̂ = b^T; hence x ≤ x̂^T and x̂^T ∘ R = b.

So the solution introduced is suitable for the inverse problem of comprehensive decision.

Deﬁnition 8.2.2. Stipulate

a*_ij = a_ij β⊘ b_i = { K_j, if a_ij · K_j = b_i; 0, otherwise }.

(8.2.7)

If K_j is as defined in Definition 8.2.1, a_ij · K_j > b_i is impossible, so the following definition is equivalent to Definition 8.2.2.

Deﬁnition 8.2.3. Stipulate

a*_ij = a_ij β⊘ b_i = { K_j, if a_ij · K_j = b_i; 0, if a_ij · K_j < b_i }.

(8.2.8)

Deﬁnition 8.2.4. The matrix A* = (a*_ij), whose nonzero elements are components of the greatest solution x̂, is called the solution-choosing matrix of (8.2.3), and the set of the elements of each row of A* is called a row element set, written

A*_i = ( a*_{i1}/x_1 + a*_{i2}/x_2 + ··· + a*_{in}/x_n ) (1 ≤ i ≤ m).

Deﬁnition 8.2.5. Stipulate an operator P acting on the product of the row element sets: P expands all multiplications of the sums, merging items (r_i/x_i)(r_j/x_i) on the same variable x_i into a single item; here each r_i is one of the K_j (1 ≤ j ≤ n).

Proposition 8.2.3.

P( ∧_{1≤i≤m} A*_i ) ⇐⇒ x*, where x* = ∨_i (r_i/x_i) and each r_i is one of the K_j (1 ≤ j ≤ n).

Proof: From Deﬁnition 8.2.5 and the laws of set operations it is easy to obtain ∧_{1≤i≤m} A*_i ⇐⇒ ∨_i (r_i/x_i). Elements a*_ij = 0 are omitted in the course of the P operation, and repeated removable nonzero elements a*_ij are rejected by the absorptive law and the like; hence every reserved r_i is one of the K_j. Obviously, over a*_ij > 0, x̂ = ∨_{1≤i≤m} A*_i; rejecting the repeated removable elements in x̂, x* is obtained.

Theorem 8.2.3. Let X(A, b) ≠ ∅. Then each x̌* obtained from the operator P, with a_ij · K_j = b_i at the reserved positions, belongs to X(A, b) and is a minimum solution to (8.2.3).

Proof: As X(A, b) ≠ ∅, {j | a_ij · K_j = b_i} ≠ ∅ for every i. Hence, from Deﬁnition 8.2.5, at A ∘ x̌* = (b_i; 1 ≤ i ≤ m) we have

b_i = P( ∨_{j=1}^n (a_ij · r_j) ) ⇐⇒ ∨_{j=1}^n (a_ij · K_j) = b_i (i ≤ m),

where ⇐⇒ denotes equivalence under the operator P, so x̌* is a solution to (8.2.3). It is a minimum one: otherwise there would be another solution x' ∈ X(A, b) with x' ≤ x̌* and some (i_0, j_0) with r'_{j_0} < r_{j_0}, whence

∨_{j=1}^n (a_{i_0 j} · r'_j) = a_{i_0 j_0} · r'_{j_0} < a_{i_0 j_0} · r_{j_0} = ∨_{j=1}^n (a_{i_0 j} · r_j) = b_{i_0},

a contradiction. Hence x̌* is a minimum solution to (8.2.3).


Theorem 8.2.4. X(A, b) ≠ ∅ ⇐⇒ each row of A* has at least one nonzero element.

Proof: "⇒" If X(A, b) ≠ ∅, then from Theorem 8.2.1, a_ij · x_j ≤ b_i (1 ≤ i ≤ m; 1 ≤ j ≤ n) and for each i there exists j_i such that a_{ij_i} · x_{j_i} = b_i; then K_{j_i} ≥ x_{j_i}, so that a_{ij_i} · K_{j_i} = b_i (1 ≤ i ≤ m). From Deﬁnition 8.2.3, each row of A* has at least one nonzero element.

"⇐" If each row of A* has at least one nonzero element a*_ij = K_j, we might as well let a*_{ij_0} = K_{j_0} ≠ 0 (1 ≤ i ≤ m) while the other a*_ij = 0. From Deﬁnition 8.2.3, for each i, a_{ij_0} · K_{j_0} = b_i and a_ij · K_j < b_i (j ≠ j_0). This satisfies the sufficient condition of Theorem 8.2.1, hence (8.2.3) has a solution.

If the minimum solutions and the greatest solution are represented by x̌*_j (1 ≤ j ≤ n) and x̂, respectively, then the general solution of (8.2.3) is

X(A, b) = ( ∪_j x̌*_j ) ∧ x̂ = x̌* ∧ x̂.
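The construction of this subsection — greatest solution via the K_j, candidate matrix A*, one chosen column per row, minimality filtering — can be sketched in brute-force form. The 2×2 data here are hypothetical, chosen only so that the solution set has a free component.

```python
from itertools import product

# Minimal solutions of a max-product system A o x = b via the A* construction;
# the 2x2 data are a hypothetical illustration, not from the text.

def odiv(a, b):                       # the "⊘" operator of Definition 8.2.1
    return b / a if a > b else 1.0

A = [[0.8, 0.8],
     [0.5, 0.4]]
b = [0.4, 0.25]
m, n = len(A), len(A[0])

K = [min(odiv(A[i][j], b[i]) for i in range(m)) for j in range(n)]
print(K)                              # greatest solution: [0.5, 0.5]

# Row-wise active columns a_ij * K_j = b_i (the nonzero entries of A*)
active = [[j for j in range(n) if abs(A[i][j] * K[j] - b[i]) < 1e-9]
          for i in range(m)]
assert all(active)                    # solvable by Theorem 8.2.4

cands = set()
for pick in product(*active):         # one chosen column per row
    x = [0.0] * n
    for j in pick:
        x[j] = K[j]
    cands.add(tuple(x))
minimal = [x for x in cands
           if not any(y != x and all(u <= v for u, v in zip(y, x))
                      for y in cands)]
print(minimal)                        # [(0.5, 0.0)]
```

Here the greatest solution is (0.5, 0.5) and the unique minimum solution is (0.5, 0), so X(A, b) = {(0.5, t) : t ∈ [0, 0.5]}, matching the general-solution formula above.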

Obviously x̂ is unique, but the x̌*_j may not be.

8.2.3 Conclusion

We have studied the existence of solutions to (∨, ·) fuzzy relation equations and the theorems for the greatest and minimum solutions, and given a shortcut for their solution. At the same time, with these relation equations we determine the factors influencing economic benefits in a commercial enterprise, the results of which tally basically with practice.

8.3 Algorithm Application and Comparison for (∨, ·) Relation Equations

In this section, a good result is obtained by applying the (∨, ·) fuzzy relation equations to the analysis of shop profits, and the algorithm above is compared, through practical examples, with the one in [LF99].

8.3.1 Application in Business Management

Section 8.2 gives a solution method for (8.2.1); its calculation steps are now explored through an example. Table 8.3.1 shows the commodities bought and sold by five stores in the suburb of a city in China in 1984:


Table 8.3.1. The Commodity Bought-Sold Table

       x1      x2     x3     x4     x5    x3/x2    x4/x2     R1
y1    1285    550    25     20     65    0.045    0.036     490
y2    600     250    14    −0.8    91    0.056   −0.0032    262
y3    680     408    17     10     82    0.042    0.025     401
y4    472    438.6  21.5   −3.1   106    0.049   −0.0071    480
y5    660    367.5  19.8    0.8    72    0.054    0.002     378

By the statistical material, the evaluation items are x1 for purchase, x2 for sale, x3 for expense, x4 for benefit (ten thousand yuan as unit), and x5 for fund turnover (days as unit), and the membership functions over economical benefits are as follows.

(1) Let Ã1 be a fuzzy subset representing the benefit of commodity purchase on the universe U = R+ = [0, +∞). Its membership function is

μ_Ã1(x1) =
0,                                if 0 ≤ x1 < h3·x2;
(x1 − h3·x2)/((h1 − h3)·x2),      if h3·x2 ≤ x1 ≤ h1·x2;
1,                                if h1·x2 < x1 < h2·x2;
(x1 − h4·x2)/((h2 − h4)·x2),      if h2·x2 ≤ x1 ≤ h4·x2;
0,                                if h4·x2 < x1 < +∞,

a trapezoidal function rising on [h3·x2, h1·x2], equal to 1 on (h1·x2, h2·x2), and falling on [h2·x2, h4·x2] (Fig. 8.3.1. Membership Function of Ã1), where h1, h2, h3 and h4 are constants.

(2) Let Ã2 be a fuzzy subset representing commodity sale profits on the universe U = R+. Its membership function is

μ_Ã2(x2) =
1,                      if R1 < x2 < +∞;
(x2 − R2)/(R1 − R2),    if R2 ≤ x2 ≤ R1;
0,                      if 0 < x2 < R2.


Fig. 8.3.2. Membership Function of Ã2

Here R2 and R1 represent the upper limits of the poorest and the best sales, respectively.

(3) Let Ã3 be a fuzzy subset representing purchase cost on the region U = R+. Its membership function is

μ_Ã3(x3) = 1 / [ 1 + 1000 ( x3/x2 − h·x4/x2 )² ],  x3 ∈ R+.

The function is bell-shaped, peaking at x3 = h·x4 (Fig. 8.3.3. Membership Function of Ã3).

Here h denotes the ratio of retail profit to purchase cost at the best benefit.

(4) Let Ã4 be a fuzzy subset representing commodity retail profits on the region U = R+. Its membership function is

μ_Ã4(x4) = exp[ −( x4/x2 − m )² ],  x4 ∈ R+.


The function peaks at x4 = m·x2 (Fig. 8.3.4. Membership Function of Ã4),

where the constant m is the profit rate at the best retail benefit.

(5) Let Ã5 be a fuzzy subset representing the fund-turnover benefit on the region U = (0, 365). Its membership function is

μ_Ã5(x5) =
1,                      if 0 ≤ x5 < n1;
(n2 − x5)/(n2 − n1),    if n1 ≤ x5 ≤ n2;      x5 ∈ [0, 365],
0,                      if n2 < x5 ≤ 365,

a decreasing piecewise-linear function (Fig. 8.3.5. Membership Function of Ã5), where n1 and n2 denote the upper limit of turnover days at good benefit and its lower limit at bad benefit, respectively.

According to the statistical material it is proper to select h1 = 1.8, h2 = 2.2, h3 = 1.5, h4 = 2.5, h = 1, m = 3.2, n1 = 62, n2 = 88, R2 = 90% · R1 (R1 = the 1983 sales volume). The evaluation by experts of the five stores' economical benefits is

b = (0.782, 0.378, 0.7, 0.2, 0.49)^T  (components corresponding to y1, ···, y5).
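As a quick numerical sketch, the five membership functions can be coded with the stated parameters. The squared deviation in μ_Ã3 and the decreasing branch of μ_Ã5 are read off the figures, so treat those two forms as reconstructions rather than the book's exact formulas.

```python
import math

# The five membership functions of Section 8.3.1 with the stated parameters.
# mu3's squared deviation and mu5's decreasing branch are reconstructions.
h1, h2, h3, h4, h, m_, n1, n2 = 1.8, 2.2, 1.5, 2.5, 1.0, 3.2, 62.0, 88.0

def mu1(x1, x2):                 # purchase benefit: trapezoid in t = x1/x2
    t = x1 / x2
    if t < h3 or t > h4:
        return 0.0
    if t <= h1:
        return (t - h3) / (h1 - h3)
    if t < h2:
        return 1.0
    return (t - h4) / (h2 - h4)

def mu2(x2, R1):                 # sale profit; R2 = 90% of R1
    R2 = 0.9 * R1
    if x2 > R1:
        return 1.0
    if x2 < R2:
        return 0.0
    return (x2 - R2) / (R1 - R2)

def mu3(x3, x2, x4):             # purchase cost: bell shape peaking at h*x4
    return 1.0 / (1.0 + 1000.0 * (x3 / x2 - h * x4 / x2) ** 2)

def mu4(x4, x2):                 # retail profit
    return math.exp(-(x4 / x2 - m_) ** 2)

def mu5(x5):                     # fund turnover: decreasing in days
    if x5 < n1:
        return 1.0
    if x5 > n2:
        return 0.0
    return (n2 - x5) / (n2 - n1)

print(mu1(200.0, 100.0))                 # on the plateau: 1.0
print(mu5(62.0), mu5(88.0))              # 1.0 0.0
print(round(mu3(25.0, 550.0, 20.0), 2))  # store y1's expense grade, approx 0.92
```

The last value is close to the 0.93 entry of the matrix A computed below, which supports (without proving) the reconstructed form of μ_Ã3.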


Now try to determine the influencing degree of each individual quota on the whole economical benefit. Let the influencing factor be x = (x1, x2, x3, x4, x5)^T, substitute the data of Table 8.3.1 and the parameters above into (1)–(5), and calculate A; hence the fuzzy relation equations corresponding to (8.2.1) are

0.6   1     0.93  0.85      0.12
0.3   0.54  0.22  4×10⁻⁶    0
0.56  1     0.78  0.61      0.77    ∘ (x1, x2, x3, x4, x5)^T = (0.782, 0.378, 0.7, 0.2, 0.49)^T.
0     0.14  0.24  2.3×10⁻⁷  0
0.99  0.7   0.27  1.2×10⁻⁴  0.38

1° Form the augmented matrix (A|b) and arrange it in standardized order:

0.6   1     0.93  0.85      0.12 | 0.782
0.56  1     0.78  0.61      0.77 | 0.7
0.99  0.7   0.27  1.2×10⁻⁴  0.38 | 0.49
0.3   0.54  0.22  4×10⁻⁶    0    | 0.378
0     0.14  0.24  2.3×10⁻⁷  0    | 0.2

2° From (8.2.4) and (8.2.5) we obtain

1     0.782 0.84  0.92  1    | 0.782
1     0.7   0.9   1     0.91 | 0.7
0.49  0.7   1     1     1    | 0.49
1     0.7   1     1     1    | 0.378
1     1     0.83  1     1    | 0.2

with column minima K_j = (0.49, 0.7, 0.83, 0.92, 0.91).

3° From (8.2.8), A* is obtained as follows (columns x1, ···, x5):

0     0    0     0.92  0
0     0.7  0     0     0.91
0.49  0.7  0     0     0
0     0.7  0     0     0
0     0    0.83  0     0

By Theorem 8.2.4, the equations have a solution.

4° Calculate:

A*_1 ∧ A*_2 ∧ A*_3 ∧ A*_4 ∧ A*_5 = (0.92/x4) · (0.7/x2 + 0.91/x5) · (0.49/x1 + 0.7/x2) · (0.7/x2) · (0.83/x3)
  →(P)  (0.92/x4) · (0.7/x2) · (0.83/x3).


Therefore the relation equations have a unique minimum solution, which is thus the least solution: x̌*_1 = (0, 0.7, 0.83, 0.92, 0)^T; their greatest solution is x̂ = (0.49, 0.7, 0.83, 0.92, 0.91)^T, and the solution set is

X(A, b) = ( [0, 0.49], 0.7, 0.83, 0.92, [0, 0.91] )^T.

(8.3.1)
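The store example can be re-checked numerically. The sketch below recomputes the K_j from the unrounded data and verifies that both boundary vectors of (8.3.1) solve the system; the book's 0.83, 0.92 and 0.91 are roundings of 5/6, 0.782/0.85 and 0.7/0.77.

```python
# Numerical check of the store example: greatest solution of the max-product
# system and the boundary solutions of (8.3.1).

def odiv(a, b):
    return b / a if a > b else 1.0

A = [[0.6, 1.0, 0.93, 0.85, 0.12],
     [0.3, 0.54, 0.22, 4e-6, 0.0],
     [0.56, 1.0, 0.78, 0.61, 0.77],
     [0.0, 0.14, 0.24, 2.3e-7, 0.0],
     [0.99, 0.7, 0.27, 1.2e-4, 0.38]]
b = [0.782, 0.378, 0.7, 0.2, 0.49]

K = [min(odiv(A[i][j], b[i]) for i in range(5)) for j in range(5)]
print([round(k, 2) for k in K])       # [0.49, 0.7, 0.83, 0.92, 0.91]

def compose(x):                       # (A o x)_i = max_j a_ij * x_j
    return [max(aij * xj for aij, xj in zip(row, x)) for row in A]

x_min = [0.0, K[1], K[2], K[3], 0.0]  # minimum solution of (8.3.1)
for x in (K, x_min):
    assert all(abs(u - v) < 1e-9 for u, v in zip(compose(x), b))
print("both boundary vectors solve the system")
```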

Note 8.3.1. If we come across r_i = r_j in the application of the absorptive law, we have r_i/x_1 + (r_i/x_2) · (r_j/x_1) = r_i/x_1.

From (8.3.1), profit influences the stores' economical benefit most strongly, with the highest degree of relation and very little flexible room. Sale and expense also correlate closely with economical benefit. Although large sales and low expenses are desirable, attention must be paid to a suitable balance of purchase and sale and to an appropriate ratio of expense to profit. Purchase and fund turnover can be changed freely within [0, 0.49] and [0, 0.91], respectively. In fact, more sales bring more profit, but enlarging purchases ties up running funds and slows fund turnover, which in turn affects sales; if expenses are kept too low, regular purchase and sale are affected, sales decrease, and profit is cut down. Besides, the influence of purchase and fund turnover on the stores' economical benefit is not remarkable, which shows that the theoretical calculation tallies with practical regularity. If a unique influence factor is wanted, the middle point of each interval in (8.3.1) is conveniently taken, giving x = (0.25, 0.7, 0.83, 0.92, 0.45)^T.

8.3.2 Comparison in Algorithm

In Section 8.2 a shortcut algorithm for (∨, ·) fuzzy relation equations was given. Now we solve the example of [LF99] by this method: x^T ∘ A = b, where

A =
0.8 0.6 0.5 0.2 0.6 0.9
0.6 0.3 0.8 0.4 0.2 0.9
0.2 0.7 0.7 0.5 0.5 0.8
0.4 0.6 0.4 0.1 0.5 0.2
0.2 0.1 0.7 0.3 0.1 0.8
0.7 0.3 0.8 0.5 0.4 0.6
0.7 0.5 0.3 0.8 0.7 0.1
0.5 0.3 0.8 0.4 0.2 0.4,

x = (x1, ···, x8)^T,  b = (0.56, 0.42, 0.64, 0.4, 0.42, 0.72)^T.

(8.3.2)

Step 1. Arrange the extended matrix (x over A) of the equation in standardized order, from large to small in b_i, and apply the operations (8.2.4) and (8.2.5) to obtain


(x over A), with the standardized b-row (0.72, 0.64, 0.56, 0.42, 0.42, 0.4) on top and K_i in the last column:

0.8  1     0.7   0.7   0.7   1    | 0.7
0.8  0.8   0.93  1     1     1    | 0.8
0.9  0.91  1     0.6   0.84  0.8  | 0.6
1    1     1     0.7   0.84  1    | 0.7
0.9  0.91  1     1     1     1    | 0.9
1    0.8   0.8   1     1     0.8  | 0.8
1    1     0.8   0.84  0.6   0.5  | 0.5
1    0.8   1     1     1     1    | 0.8

(8.3.3)

The greatest solution to (8.3.2) is found: x̂ = (0.7, 0.8, 0.6, 0.7, 0.9, 0.8, 0.5, 0.8)^T.

Step 2. Transform (8.3.3) by means of (8.2.7), with rows x1, ···, x8 and columns in the standardized order of b:

x1:  0    0    0.7  0.7  0.7  0
x2:  0.8  0.8  0    0    0    0
x3:  0    0    0    0.6  0    0
x4:  0    0    0    0.7  0    0
x5:  0.9  0    0    0    0    0
x6:  0    0.8  0.8  0    0    0.8
x7:  0    0    0    0    0    0.5
x8:  0    0.8  0    0    0    0

Deciding the matrix above with Theorem 8.2.4, we know that (8.3.2) has a solution.

Step 3. Calculate the product of the column element sets under the operator P (applying the absorptive law and the like, with the elements a*_ij = 0 omitted):

(0.8/x2 + 0.9/x5)(0.8/x2 + 0.8/x6 + 0.8/x8)(0.7/x1 + 0.8/x6)(0.7/x1 + 0.6/x3 + 0.7/x4)(0.7/x1)(0.8/x6 + 0.5/x7).

After reduction, 6 minimum solutions are obtained, arranged in Table 8.3.2:

8.4 Lattice Linear Programming with ( , ·) Operator

273

Table 8.3.2. Complete set of minimal solutions

  Minimal solution    Value
  x*1                 (0.7, 0.8, 0, 0, 0,   0.8, 0,   0)
  x*2                 (0.7, 0.8, 0, 0, 0,   0,   0.5, 0)
  x*3                 (0.7, 0,   0, 0, 0.9, 0.8, 0,   0)
  x*4                 (0.7, 0.8, 0, 0, 0.9, 0,   0.5, 0)
  x*5                 (0.7, 0,   0, 0, 0.9, 0.8, 0.5, 0)
  x*6                 (0.7, 0,   0, 0, 0.9, 0,   0.5, 0.8)

Comparison of algorithms. 1. Solutions. If we solve the example of [LoF99] by the calculation of [Cao87b], we obtain all of the minimum solutions to Equation (8.3.2) — two solutions more than in [LoF99], namely x*4 and x*5, which indeed solve (8.3.2) and are minimum solutions. 2. Simplification. In fact, only three steps are required in [Cao87b] instead of four, which is simpler than the procedure of [LoF99], where six steps are needed in the calculation.
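The worked example can be checked mechanically. The sketch below (helper names are ours, not the book's notation) computes the greatest solution of the (∨, ·) equation (8.3.2) and verifies that every vector of Table 8.3.2 actually solves it:

```python
# Check example (8.3.2): greatest solution of the max-product equation
# x o A = b and verification of the solutions listed in Table 8.3.2.

def compose(x, A):
    # (x o A)_j = max_i x_i * a_ij  -- the (v, .) composition.
    return [max(x[i] * A[i][j] for i in range(len(A)))
            for j in range(len(A[0]))]

def greatest_solution(A, b):
    # x^_i = min_j (a_ij^-1 b_j), where a^-1 b = 1 if a <= b, else b / a.
    return [min(1.0 if A[i][j] <= b[j] else b[j] / A[i][j]
                for j in range(len(b)))
            for i in range(len(A))]

A = [[0.8, 0.6, 0.5, 0.2, 0.6, 0.9],
     [0.6, 0.3, 0.8, 0.4, 0.2, 0.9],
     [0.2, 0.7, 0.7, 0.5, 0.5, 0.8],
     [0.4, 0.6, 0.4, 0.1, 0.5, 0.2],
     [0.2, 0.1, 0.7, 0.3, 0.1, 0.8],
     [0.7, 0.3, 0.8, 0.5, 0.4, 0.6],
     [0.7, 0.5, 0.3, 0.8, 0.7, 0.1],
     [0.5, 0.3, 0.8, 0.4, 0.2, 0.4]]
b = [0.56, 0.42, 0.64, 0.4, 0.42, 0.72]

x_hat = greatest_solution(A, b)
# x_hat comes out as (0.7, 0.8, 0.6, 0.7, 0.9, 0.8, 0.5, 0.8),
# and x_hat o A = b, so (8.3.2) is solvable.

table = [(0.7, 0.8, 0, 0, 0,   0.8, 0,   0),
         (0.7, 0.8, 0, 0, 0,   0,   0.5, 0),
         (0.7, 0,   0, 0, 0.9, 0.8, 0,   0),
         (0.7, 0.8, 0, 0, 0.9, 0,   0.5, 0),
         (0.7, 0,   0, 0, 0.9, 0.8, 0.5, 0),
         (0.7, 0,   0, 0, 0.9, 0,   0.5, 0.8)]
for x in table:
    assert all(abs(u - v) < 1e-9 for u, v in zip(compose(x, A), b))
```

Every row of Table 8.3.2 passes the check, confirming the shortcut computation above.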

8.4 Lattice Linear Programming with (∨, ·) Operator

8.4.1 Introduction

Compared with regular programming problems, optimization subject to fuzzy relation equations with the (∨, ·) (max-product) composition, together with an objective function of the same type, has a very different nature. According to [BF98][LF01a,b][WZSL91] and [HK84], when the solution set of a system of fuzzy relation equations is not empty, it is completely determined by a unique maximum solution and a finite number of minimal solutions. Because the solution set is non-convex, traditional programming methods, such as the simplex algorithm, become useless. In this section, we study the characteristics of the optimal solutions of the optimization problems

   max Z = c ∘ x^T
   s.t. x ∘ A = b, 0 ≤ xi ≤ 1 (1 ≤ i ≤ m),     (8.4.1)

and

   min Z = c ∘ x^T
   s.t. x ∘ A = b, 0 ≤ xi ≤ 1 (1 ≤ i ≤ m),     (8.4.2)

where "∘" denotes the (∨, ·) composition, A = (aij) (0 ≤ aij ≤ 1, 1 ≤ i ≤ m, 1 ≤ j ≤ n) is an (m × n)-dimensional fuzzy matrix, b = (b1, b2, · · · , bn) (0 ≤ bj ≤ 1) is an n-dimensional constant vector, and c = (c1, c2, · · · , cm) (0 ≤ ci ≤ 1) is


an m-dimensional constant vector, x = (x1, x2, · · · , xm) is an m-dimensional variable vector, i ∈ I = {1, 2, · · · , m} and j ∈ J = {1, 2, · · · , n}. We call (8.4.1) and (8.4.2) fuzzy relation linear programming with the (∨, ·) operator. We first characterize the optimal solutions, then build a min-max method and a step-by-step algorithm for solving (8.4.2), and finally give an example illustrating the algorithm.

8.4.2 Characteristic of Optimal Solution

The feasible domain of Problems (8.4.1) and (8.4.2) is the solution set of a system of fuzzy relation equations. We consider the fuzzy relation equation

   x ∘ A = b,     (8.4.3)

that is, we try to find a solution vector x = (x1, · · · , xm), with 0 ≤ xi ≤ 1, such that

   ⋁_{i=1}^{m} (xi · aij) = bj (1 ≤ j ≤ n).     (8.4.4)

Let X(A, b) = {x = (x1, x2, · · · , xm) ∈ R^m | x ∘ A = b, xi ∈ [0, 1], i ∈ I} denote the solution set of (8.4.3), following [LF01a,b], and let X = {x ∈ R^m | x = (x1, x2, · · · , xm), 0 ≤ xi ≤ 1, ∀i ∈ I}. We say x1 ≤ x2 if and only if x1_i ≤ x2_i, ∀i ∈ I, for x1, x2 ∈ X. In this way, "≤" forms a partial order relation on X and (X, ≤) becomes a lattice.

Definition 8.4.1. If ∃ x̂ ∈ X(A, b) such that x ≤ x̂, ∀x ∈ X(A, b), then x̂ is called a greatest solution to (8.4.3). If ∃ x̆ ∈ X(A, b) such that x̆ ≤ x, ∀x ∈ X(A, b), then x̆ is called a minimal solution to (8.4.3). And if ∃ x̆ ∈ X(A, b) such that x ≤ x̆ implies x = x̆, then x̆ is called a minimum solution to (8.4.3).

If X(A, b) ≠ φ, it can be completely determined by a unique maximum solution and a finite number of minimum solutions [BF98][LF01a,b]. The maximum solution can be obtained by the operation

   x̂ = A⁻¹ b = [ ⋀_{j=1}^{n} (aij ⁻¹ bj) ]_{i∈I},     (8.4.5)

where

   aij ⁻¹ bj = { 1,         if aij ≤ bj,
               { bj / aij,  if aij > bj.     (8.4.6)

We denote the set of all minimum solutions by X̆(A, b); then the solution set of (8.4.3) is obtained as

   X(A, b) = ⋃_{x̆ ∈ X̆(A,b)} {x ∈ X | x̆ ≤ x ≤ x̂}.     (8.4.7)
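The maximum-solution formula translates directly into code. The sketch below implements the operations described in this subsection under our own helper name; the 2 × 2 matrix is a small hypothetical illustration, not data from the text:

```python
# Maximum solution of x o A = b under (v, .) composition, following the
# operation described above. max_solution is our name; the data below is
# a hypothetical 2x2 example for illustration only.

def max_solution(A, b):
    # x^_i = min over j of (a_ij^-1 b_j),
    # where a^-1 b = 1 if a <= b, else b / a.
    return [min(1.0 if a <= bj else bj / a for a, bj in zip(row, b))
            for row in A]

A = [[0.9, 0.3],    # hypothetical data
     [0.5, 0.8]]
b = [0.45, 0.4]

x_hat = max_solution(A, b)   # both components come out as 0.5
```

For this small system one can check by hand that x̂ ∘ A = b, so the equation is solvable and x̂ is its greatest solution.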


Theorem 8.4.1. If X(A, b) ≠ φ, then x̂ is an optimal solution to (8.4.1).
Proof: When X(A, b) ≠ φ, note that 0 ≤ x ≤ x̂, ∀x ∈ X(A, b), i.e., 0 ≤ xi ≤ x̂i, ∀i ∈ I. Therefore 0 ≤ ci·xi ≤ ci·x̂i, ∀i ∈ I, hence 0 ≤ c ∘ x^T ≤ c ∘ x̂^T, and x̂ is an optimal solution of (8.4.1).

Theorem 8.4.2. If X(A, b) ≠ φ, then one of the minimum solutions is an optimal solution to (8.4.2).
Proof: According to (8.4.7), ∀x ∈ X(A, b) there exists x̆0 ∈ X̆(A, b) such that x̆0 ≤ x ≤ x̂, hence x̆0i ≤ xi ≤ x̂i, ∀i ∈ I, so ci·x̆0i ≤ ci·xi ≤ ci·x̂i, ∀i ∈ I, and therefore c ∘ x̆0^T ≤ c ∘ x^T ≤ c ∘ x̂^T. We choose x̆* such that c ∘ x̆*^T = min{c ∘ x̆^T | x̆ ∈ X̆(A, b)}; then c ∘ x̆*^T ≤ c ∘ x^T ≤ c ∘ x̂^T, ∀x ∈ X(A, b). So the minimum solution x̆* ∈ X̆(A, b) is an optimal solution to (8.4.2).

8.4.3 Method to Optimal Solution

According to Theorem 8.4.1 and (8.4.5), generating an optimal solution to (8.4.1) is not a problem. Since a fuzzy relation equation has a finite number of minimum solutions, and the procedure for finding the minimum solutions is not easy, solving (8.4.2) is difficult. Here we build a min-max method to find an optimal solution to (8.4.2).

A. Characterization of feasible domain [LF01a,b]

Lemma 8.4.1. If x ∈ X(A, b), then for each j ∈ J there exists i0 ∈ I such that x_{i0}·a_{i0 j} = bj and xi·aij ≤ bj, ∀i ∈ I.

When X(A, b) ≠ φ, we define

   Ij = {i ∈ I | x̂i·aij = bj}, ∀j ∈ J     (8.4.8)

and

   Λ = I1 × I2 × · · · × In.     (8.4.9)

Hence Ij is an index set, and f = (f1, f2, · · · , fn) ∈ Λ if and only if fj ∈ Ij, ∀j ∈ J. By the definition of Ij and Lemma 8.4.1, we can easily see the following results.

Lemma 8.4.2. If X(A, b) ≠ φ, then Ij ≠ φ, ∀j ∈ J.

Lemma 8.4.3. If X(A, b) ≠ φ, then Λ ≠ φ.

In order to study X(A, b) in terms of the elements f ∈ Λ, we define

   Jf^i = {j ∈ J | fj = i}, i ∈ I     (8.4.10)


and F : Λ → R^m such that

   Fi(f) = { max_{j ∈ Jf^i} (bj / aij),  if Jf^i ≠ φ,
           { 0,                          if Jf^i = φ,     ∀i ∈ I.     (8.4.11)

Then we see the relationship between X(A, b) and F(Λ) = {F(f) | f ∈ Λ}.

Theorem 8.4.3. Given that X(A, b) ≠ φ,
(1) If f ∈ Λ, then F(f) ∈ X(A, b).
(2) For any x ∈ X(A, b), there exists f ∈ Λ such that F(f) ≤ x. [BF98]

Corollary 8.4.1. X̆(A, b) ⊂ F(Λ) ⊂ X(A, b).

B. Min-max method

According to Corollary 8.4.1, F(Λ) ⊂ X(A, b), so every group of values in F(Λ) belongs to the solution set of the fuzzy relation equation. On the other hand, X̆(A, b) ⊂ F(Λ), hence every minimum solution of the fuzzy relation equation corresponds to a group of values of F(Λ). Therefore, solving (8.4.2) becomes equivalent to finding an f* ∈ Λ such that

   ⋁_{i=1}^{m} (ci·Fi(f*)) = min_{f ∈ Λ} { ⋁_{i=1}^{m} (ci·Fi(f)) }.     (8.4.12)

The min-max method is given below.
Step 1. Choose f′j ∈ Ij, ∀j ∈ J, such that

   c_{f′j} · bj / a_{f′j j} = min_{fj ∈ Ij} c_{fj} · bj / a_{fj j}.     (8.4.13)

We define I′j as the index set of all fj satisfying (8.4.13); I′j includes only the indices attaining the minimum in (8.4.13). Obviously, I′j ⊂ Ij.
Step 2. Let Λ1 = I′1 × · · · × I′n. Obviously, Λ1 ⊂ Λ.
Step 3. Choosing fj ∈ I′j so that f = (f1, · · · , fn) ∈ Λ1, we can construct a solution:

   x*i = { max_{j: fj = i} bj / a_{fj j},  if ∃ j with fj = i,
         { 0,                              otherwise,     (1 ≤ i ≤ m).     (8.4.14)

Theorem 8.4.4. Let f = (f1, · · · , fn) ∈ Λ1 and f′ = (f′1, · · · , f′n) ∈ Λ1, let x* and x*′ be computed from f and f′ respectively by (8.4.14), and let Z* and Z*′ be the objective values corresponding to x* and x*′ respectively. Then Z* = Z*′.
Proof: The definition of Λ1 implies that fj, f′j ∈ I′j ⊂ Ij, ∀j ∈ J = {1, 2, · · · , n}. Hence, according to (8.4.13),

   c_{fj} · bj / a_{fj j} = c_{f′j} · bj / a_{f′j j}, ∀j ∈ J.


Suppose

   Z* = c_{i0}·x*_{i0},   Z*′ = c_{i1}·x*′_{i1},

where

   x*_{i0} = max_{j: fj = i0} bj / a_{fj j} = b_{j0} / a_{f_{j0} j0}   (f_{j0} = i0),
   x*′_{i1} = max_{j: f′j = i1} bj / a_{f′j j} = b_{j1} / a_{f′_{j1} j1}   (f′_{j1} = i1).

If Z* < Z*′, i.e., c_{f_{j0}} · b_{j0}/a_{f_{j0} j0} < c_{f′_{j1}} · b_{j1}/a_{f′_{j1} j1}, then for j1 there exists f_{j1} such that f_{j1}, f′_{j1} ∈ I′_{j1} and

   c_{f_{j1}} · b_{j1}/a_{f_{j1} j1} = c_{f′_{j1}} · b_{j1}/a_{f′_{j1} j1}.

Suppose f_{j1} = i2 (1 ≤ i2 ≤ m). If there does not exist fj (j ≠ j1) such that fj = i2, then x*_{i2} = b_{j1}/a_{f_{j1} j1}, so

   Z* ≥ c_{f_{j1}} · b_{j1}/a_{f_{j1} j1} = c_{f′_{j1}} · b_{j1}/a_{f′_{j1} j1} > c_{f_{j0}} · b_{j0}/a_{f_{j0} j0} = Z*,

a contradiction. If there exists fj (j ≠ j1) such that fj = i2, then x*_{i2} = max_{j: fj = i2} bj/a_{fj j} ≥ b_{j1}/a_{f_{j1} j1}. Therefore

   Z* ≥ c_{f_{j1}} · x*_{i2} ≥ c_{f_{j1}} · b_{j1}/a_{f_{j1} j1} > c_{f_{j0}} · b_{j0}/a_{f_{j0} j0} = Z*,

a contradiction. So Z* ≥ Z*′. By a similar argument, we can show that Z*′ ≥ Z*. Then Z* = Z*′ and the proof is complete.

Theorem 8.4.5. If X(A, b) ≠ φ and x* is defined according to (8.4.14), then x* is an optimal solution to (8.4.2).
Proof: For any f = (f1, · · · , fn) ∈ Λ, take f′ = (f′1, · · · , f′n) ∈ Λ1 such that fj, f′j ∈ Ij, ∀j ∈ J. Let the feasible solutions x1 and x* correspond to f and f′, and the objective values Z1 and Z* correspond to x1 and x*, respectively. Based on the min-max method, we have

   c_{f′j} · bj/a_{f′j j} ≤ c_{fj} · bj/a_{fj j}   (1 ≤ j ≤ n).

Suppose Z* = c_{i0}·x*_{i0} and Z1 = c_{i1}·x1_{i1}, where

   x*_{i0} = max_{j: f′j = i0} bj/a_{f′j j} = b_{j0}/a_{f′_{j0} j0}   (f′_{j0} = i0),
   x1_{i1} = max_{j: fj = i1} bj/a_{fj j} = b_{j1}/a_{f_{j1} j1}   (f_{j1} = i1).

If Z* > Z1, then c_{f′_{j0}} · b_{j0}/a_{f′_{j0} j0} > c_{f_{j1}} · b_{j1}/a_{f_{j1} j1}.
For j0, there exists f_{j0} such that f_{j0}, f′_{j0} ∈ I_{j0} and c_{f_{j0}} · b_{j0}/a_{f_{j0} j0} ≥ c_{f′_{j0}} · b_{j0}/a_{f′_{j0} j0}. Let f_{j0} = i2 (1 ≤ i2 ≤ m). If there does not exist fj (j ≠ j0) such that fj = i2, then x1_{i2} = b_{j0}/a_{f_{j0} j0}, so

   Z1 ≥ c_{f_{j0}} · b_{j0}/a_{f_{j0} j0} ≥ c_{f′_{j0}} · b_{j0}/a_{f′_{j0} j0} > c_{f_{j1}} · b_{j1}/a_{f_{j1} j1} = Z1,

a contradiction. If there exists fj (j ≠ j0) such that fj = i2, then x1_{i2} = max_{j: fj = i2} bj/a_{fj j} ≥ b_{j0}/a_{f_{j0} j0}. Therefore

   Z1 ≥ c_{f_{j0}} · b_{j0}/a_{f_{j0} j0} ≥ c_{f′_{j0}} · b_{j0}/a_{f′_{j0} j0} > c_{f_{j1}} · b_{j1}/a_{f_{j1} j1} = Z1,

a contradiction. So Z* ≤ Z1 and the proof is complete.
By Corollary 8.4.1, since f′ ∈ Λ1 ⊂ Λ, x* = (x*1, · · · , x*m) is a feasible solution to (8.4.2). Theorems 8.4.4 and 8.4.5 show that x* is an optimal solution to (8.4.2). This method is called the min-max method.

C. Algorithm

Based on the min-max method, we give an algorithm for finding an optimal solution to (8.4.2).
Step 1. Compute the greatest solution to (8.4.3), i.e., compute x̂ = A⁻¹ b = [⋀_{j=1}^{n} (aij ⁻¹ bj)]_{i∈I} according to (8.4.5).
Step 2. Check feasibility. If x̂ ∘ A = b, continue; otherwise, stop.
Step 3. Compute the index sets Ij, ∀j ∈ J, according to (8.4.8).
Step 4. Compute the index sets I′j, ∀j ∈ J, according to (8.4.13).
Step 5. Define Λ1 = I′1 × · · · × I′n.
Step 6. Choose any f ∈ Λ1, compute an optimal solution x* according to (8.4.14), and obtain the optimal value Z*.

8.4.4 Numerical Example

Consider the following optimization problem:

   min Z = 0.4x1 ∨ 0.5x2 ∨ 0.3x3 ∨ 0.6x4 ∨ 0.8x5 ∨ 0.6x6 ∨ 0.7x7 ∨ 0.9x8 ∨ 0.5x9 ∨ 0.7x10
   s.t. x ∘ A = b, 0 ≤ xi ≤ 1 (i = 1, · · · , 10),


where


       ⎡ 0.6  0.2  0.5  0.3  0.7  0.5  0.2  0.8 ⎤
       ⎢ 0.5  0.6  0.9  0.5  0.8  0.9  0.3  0.8 ⎥
       ⎢ 0.1  0.9  0.4  0.7  0.5  0.7  0.4  0.7 ⎥
       ⎢ 0.1  0.6  0.2  0.5  0.4  0.1  0.7  0.5 ⎥
   A = ⎢ 0.3  0.8  0.8  0.8  0.8  0.5  0.5  0.8 ⎥,
       ⎢ 0.8  0.4  0.1  0.1  0.2  0.8  0.8  0.3 ⎥
       ⎢ 0.4  0.5  0.4  0.8  0.4  0.7  0.3  0.4 ⎥
       ⎢ 0.6  0.3  0.4  0.3  0.1  0.2  0.5  0.7 ⎥
       ⎢ 0.2  0.5  0.7  0.4  0.9  0.9  0.7  0.2 ⎥
       ⎣ 0.1  0.3  0.6  0.6  0.6  0.4  0.4  0.8 ⎦

   b = (0.48, 0.56, 0.72, 0.56, 0.64, 0.72, 0.42, 0.64),   x = (x1, x2, · · · , x10).

Solution: Here I = {1, · · · , 10} and J = {1, · · · , 8}.
Step 1. The greatest solution to the problem is x̂ = A⁻¹ b = (0.8, 0.8, 0.622, 0.6, 0.7, 0.525, 0.7, 0.8, 0.6, 0.8).
Step 2. Since x̂ ∘ A = b, we know X(A, b) ≠ φ.
Step 3. Compute the index sets: I1 = {1, 8}, I2 = {3, 5}, I3 = {2}, I4 = {5, 7}, I5 = {2}, I6 = {2}, I7 = {4, 6, 9}, I8 = {1, 2, 10}.
Step 4. Because I3, I5, I6 each have only one element, I′3 = I3 = {2}, I′5 = I5 = {2}, I′6 = I6 = {2}. It remains to compute I′1, I′2, I′4, I′7, I′8.
For I1 = {1, 8}, according to (8.4.13), min(c1·b1/a11, c8·b1/a81) = min(0.4 × 0.48/0.6, 0.9 × 0.48/0.6) = min(0.32, 0.72) = 0.32 = c1·b1/a11; therefore I′1 = {1}.
For I2 = {3, 5}, according to (8.4.13), min(c3·b2/a32, c5·b2/a52) = min(0.3 × 0.56/0.9, 0.8 × 0.56/0.8) = c3·b2/a32; therefore I′2 = {3}.
By a similar method, we compute I′4 = {7}, I′7 = {9}, I′8 = {1}.
Step 5. Λ1 = I′1 × · · · × I′8 = {1} × {3} × {2} × {7} × {2} × {2} × {9} × {1}.
Step 6. According to Λ1 and (8.4.14), f = (1, 3, 2, 7, 2, 2, 9, 1). Because there does not exist j ∈ J such that fj ∈ {4, 5, 6, 8, 10}, we get x*4 = x*5 = x*6 = x*8 = x*10 = 0.
Since f1 = f8 = 1, x*1 = max{b1/a11, b8/a18} = max{0.48/0.6, 0.64/0.8} = 0.8.
By a similar method, we compute x*2 = 0.8, x*3 = 0.622, x*7 = 0.7, x*9 = 0.6.


Therefore, an optimal solution is x* = (0.8, 0.8, 0.622, 0, 0, 0, 0.7, 0, 0.6, 0), and the optimal value is Z* = 0.49.

8.4.5 Conclusion

In this section, we have built a min-max method for finding an optimal solution to lattice linear programming based on the (∨, ·) composition.
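The six steps of the algorithm can be sketched compactly. The code below (our own function names, not the book's) runs the min-max method on the numerical example of Section 8.4.4 and reproduces Z* = 0.49:

```python
# Sketch of the min-max algorithm of Section 8.4 for
# min Z = max_i c_i * x_i  s.t. x o A = b under (v, .) composition.

def solve_min(A, b, c, eps=1e-9):
    m, n = len(A), len(b)
    # Step 1: greatest solution x^_i = min_j (a_ij^-1 b_j).
    x_hat = [min(1.0 if A[i][j] <= b[j] else b[j] / A[i][j] for j in range(n))
             for i in range(m)]
    # Step 2: feasibility check x^ o A = b.
    if any(abs(max(x_hat[i] * A[i][j] for i in range(m)) - b[j]) > eps
           for j in range(n)):
        return None, None
    # Step 3: I_j = {i | x^_i * a_ij = b_j}.
    I = [[i for i in range(m) if abs(x_hat[i] * A[i][j] - b[j]) <= eps]
         for j in range(n)]
    # Steps 4-5: for each column keep one index minimising c_i * b_j / a_ij
    # (any element of I'_j gives the same objective, by Theorem 8.4.4).
    f = [min(I[j], key=lambda i: c[i] * b[j] / A[i][j]) for j in range(n)]
    # Step 6 (formula (8.4.14)): x*_i = max over columns assigned to i.
    x = [0.0] * m
    for j, i in enumerate(f):
        x[i] = max(x[i], b[j] / A[i][j])
    return x, max(c[i] * x[i] for i in range(m))

A = [[0.6, 0.2, 0.5, 0.3, 0.7, 0.5, 0.2, 0.8],
     [0.5, 0.6, 0.9, 0.5, 0.8, 0.9, 0.3, 0.8],
     [0.1, 0.9, 0.4, 0.7, 0.5, 0.7, 0.4, 0.7],
     [0.1, 0.6, 0.2, 0.5, 0.4, 0.1, 0.7, 0.5],
     [0.3, 0.8, 0.8, 0.8, 0.8, 0.5, 0.5, 0.8],
     [0.8, 0.4, 0.1, 0.1, 0.2, 0.8, 0.8, 0.3],
     [0.4, 0.5, 0.4, 0.8, 0.4, 0.7, 0.3, 0.4],
     [0.6, 0.3, 0.4, 0.3, 0.1, 0.2, 0.5, 0.7],
     [0.2, 0.5, 0.7, 0.4, 0.9, 0.9, 0.7, 0.2],
     [0.1, 0.3, 0.6, 0.6, 0.6, 0.4, 0.4, 0.8]]
b = [0.48, 0.56, 0.72, 0.56, 0.64, 0.72, 0.42, 0.64]
c = [0.4, 0.5, 0.3, 0.6, 0.8, 0.6, 0.7, 0.9, 0.5, 0.7]

x_opt, z_opt = solve_min(A, b, c)
# z_opt is 0.49, with x_opt close to (0.8, 0.8, 0.622, 0, 0, 0, 0.7, 0, 0.6, 0).
```

Step 4 keeps a single minimiser per column rather than the whole set I′j; by Theorem 8.4.4 every choice from Λ1 yields the same objective value, so this loses nothing.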

8.5 Fuzzy Relation Geometric Programming with (∨, ∧) Operator

8.5.1 Introduction

We call

   min f(x) = (c1 ∧ x1^γ1) ∨ (c2 ∧ x2^γ2) ∨ · · · ∨ (cm ∧ xm^γm)
   s.t. x ∘ A = b,     (a)     (8.5.1)
        0 ≤ xi ≤ 1 (1 ≤ i ≤ m)

a (∨, ∧) (max-min) fuzzy relation geometric programming, where A = (aij) (0 ≤ aij ≤ 1, 1 ≤ i ≤ m, 1 ≤ j ≤ n) is an (m × n)-dimensional fuzzy matrix, x = (x1, x2, · · · , xm) an m-dimensional variable vector, c = (c1, c2, · · · , cm) (ci ≥ 0) an m-dimensional constant vector, b = (b1, b2, · · · , bn) (0 ≤ bj ≤ 1) an n-dimensional constant vector, γi an arbitrary real number, and the composition operator "∘" is (∨, ∧), i.e.,

   ⋁_{i=1}^{m} (xi ∧ aij) = bj (1 ≤ j ≤ n).

Without loss of generality, suppose 1 ≥ b1 > b2 > · · · > bn > 0. Since fuzzy relation geometric programming is widely applied in engineering optimization design, modernization of management, and technological-economic analysis, it is significant to solve such a program.

8.5.2 Structure of Solution Set on Model

Since the feasible domain of (8.5.1) is the solution set of (8.5.1)(a), solving (8.5.1)(a) is very important for the optimization of (8.5.1). We now explain the structure of the solution set of (8.5.1)(a).

Definition 8.5.1. [Luo89] If there exists a solution to (8.5.1)(a), the system is called compatible.

Suppose X(A, b) = {(x1, x2, · · · , xm) ∈ R^m | x ∘ A = b, 0 ≤ xi ≤ 1} is the whole solution set of (8.5.1)(a). For x1, x2 ∈ X(A, b) we define x1 ≤ x2 ⇔ x1_i ≤ x2_i (1 ≤ i ≤ m); this "≤" is a partial order relation on X(A, b).

Definition 8.5.2. Similarly to Definition 8.4.1, we define x̂ ∈ X(A, b) as a greatest solution, x̆ as a minimal solution and x̆ as a minimum solution to (8.5.1)(a).


Let

   x̂i = ∧{bj | bj < aij}   (1 ≤ i ≤ m).     (8.5.2)

Stipulate that ∧Ø = 1. If x̂ = (x̂1, x̂2, · · · , x̂m) is a solution to (8.5.1)(a), we can easily prove that x̂ must be a greatest solution to (8.5.1)(a). For the greatest solution to (8.5.1)(a), we have the following lemma.

Lemma 8.5.1. [Pre81] x ∘ A = b is compatible if and only if x̂ ∘ A = b, and x̂ is a greatest solution.
Proof: The sufficiency is evident; we now prove the necessity. If x is a solution to x ∘ A = b, then

   ⋁_{i=1}^{m} (xi ∧ aij) = bj   (1 ≤ j ≤ n),

so ∀k, j we have xk ∧ akj ≤ bj. Let k be fixed. If akj ≤ bj, then 0 ≤ xk ≤ 1; if akj > bj, then 0 ≤ xk ≤ bj. Since ∧Ø = 1, we obtain

   xk ≤ ∧{bj | bj < akj} = x̂k,

i.e., x ≤ x̂. Going further, if bj < akj, then x̂k = ∧{bj′ | bj′ < akj} ≤ bj, hence x̂k ∧ akj ≤ bj; if bj ≥ akj, then x̂k ∧ akj ≤ akj ≤ bj. So

   ⋁_{i=1}^{m} (x̂i ∧ aij) ≤ bj,

i.e., x̂ ∘ A ≤ b. Since x ≤ x̂, we have b = x ∘ A ≤ x̂ ∘ A ≤ b. Hence x̂ ∘ A = b.

Corollary 8.5.1. [San76] If X(A, b) ≠ Ø, then x̂ ∈ X(A, b).

For the minimal solution to (8.5.1)(a), Ref. [San76] has provided a sufficient and necessary condition, but it is difficult to satisfy, so, generally speaking, a minimal solution does not exist in X(A, b). This enlarges the difficulty of solving (8.5.1)(a). Since X(A, b) is partially ordered by "≤", its minimum elements exist; and since the minimum solution is what is wanted in many practical problems, we usually pay more attention to the minimum solutions of (8.5.1)(a). For the minimum elements of X(A, b), we have the following lemma.

Lemma 8.5.2. If X(A, b) ≠ Ø, then a minimum element must exist in X(A, b); minimum elements are usually not unique. If we denote the set of all minimum elements by X̆(A, b), then the solution set of (8.5.1)(a) can be written as

   X(A, b) = ⋃_{x̆ ∈ X̆(A,b)} {x | x̆ ≤ x ≤ x̂, x ∈ X}.     (8.5.3)

We can clearly see by Formula (8.5.3) that the solution set structure of (8.5.1)(a) is determined by X̆(A, b): solving X(A, b) amounts to finding X̆(A, b). We now introduce the method by which a minimum solution is found through a conservative path.

Definition 8.5.3. Matrix C = (cij)_{m×n} is called a characteristic matrix of A, where

   cij = { 1,  bj ≤ aij,
         { 0,  bj > aij.

Obviously, the characteristic matrix is a Boolean one. Let Gj = {i | cij = 1, 1 ≤ i ≤ m} (1 ≤ j ≤ n), and G = G1 × G2 × · · · × Gn. If x^g_i = ∨{bj | Kj = i} (1 ≤ i ≤ m), ∀g = (K1, K2, · · · , Kn) ∈ G, stipulating ∨Ø = 0, then x^g = (x^g_1, x^g_2, · · · , x^g_m) is a solution to (8.5.1)(a); x^g is called a quasi-minimum solution to (8.5.1)(a). We denote the set of all quasi-minimum solutions of (8.5.1)(a) by X̃(A, b). We now introduce how to choose X̆(A, b) from X̃(A, b).

Definition 8.5.4. Let C be a Boolean matrix; a sequence p = (p(1), p(2), · · · , p(n)) ∈ G is called a path of C.

Definition 8.5.5. [WZSL91] p^C = (p(1), p(2), · · · , p(n)) ∈ G is called a conservative path of C when, for every k ∈ {2, 3, · · · , n} with {p(1), p(2), · · · , p(k − 1)} ∩ Gk ≠ Ø, if p(i) is the element among {p(1), · · · , p(k − 1)} that first comes into Gk, then p(k) = p(i). At n = 1, every path of C is a conservative one.

We denote the set of all conservative paths of C by W^C(C); then we have the following lemma.

Lemma 8.5.3. (1) The minimum solutions to x ∘ A = b are in one-to-one correspondence with the elements of W^C(C). (2) x ∘ A = b is compatible ⇔ G ≠ Ø.
For the proof see [WZSL91]. By Lemmas 8.5.2 and 8.5.3,

   X(A, b) = ⋃_{p^C ∈ W^C(C)} {x | x^{p^C} ≤ x ≤ x̂},

where x^{p^C} is the minimum solution corresponding to the conservative path p^C.


According to Definition 8.5.5 and Lemma 8.5.3, we can get the following filtration rule for conservative paths.

Rule 8.5.1. (Filtration rule of conservative paths) Let j0 (1 ≤ j0 ≤ n) be fixed.
1) For j0 = 1, K1 is selected freely from G1.
2) For j < j0, suppose Kj has been selected from Gj; then Kj0 is chosen by the following methods.
   1° If G*_{j0} = {K1, · · · , K_{j0−1}} ∩ G_{j0} ≠ Ø, then K_{j0} is the Kj among {K1, · · · , K_{j0−1}} that first comes into G*_{j0}.
   2° If G*_{j0} = Ø, then K_{j0} is selected freely from the elements of G_{j0}.
3) g = (K1, K2, · · · , Kn), selected according to 1) and 2), is a conservative path, and x^g must be a minimum solution.

8.5.3 Solution on Model

Let us consider the objective function

   f(x) = (c1 ∧ x1^γ1) ∨ (c2 ∧ x2^γ2) ∨ · · · ∨ (cm ∧ xm^γm).

(8.5.4)

The optimum value of f(x) is closely related to the exponent γi of each item xi (1 ≤ i ≤ m). We now discuss (8.5.1) in the following three cases.

Lemma 8.5.4. If γi < 0 (1 ≤ i ≤ m), then the greatest solution x̂ to (8.5.1)(a) is an optimum one to (8.5.1).
Proof: Since γi < 0 (1 ≤ i ≤ m), then

   d(xi^γi)/dxi = γi · xi^{γi−1} ≤ 0

for each xi with 0 ≤ xi ≤ 1. Hence xi^γi is a monotone decreasing function of xi, and so, evidently, is ci ∧ xi^γi. Moreover, ∀x ∈ X(A, b), since x ≤ x̂, we have ci ∧ xi^γi ≥ ci ∧ x̂i^γi (1 ≤ i ≤ m), such that f(x) ≥ f(x̂); so x̂ is an optimum solution to (8.5.1).

Lemma 8.5.5. If γi ≥ 0 (1 ≤ i ≤ m), then a certain minimum solution x̆ to (8.5.1)(a) is an optimum one to (8.5.1).
Proof: Since γi ≥ 0 (1 ≤ i ≤ m), then

   d(xi^γi)/dxi = γi · xi^{γi−1} ≥ 0,


for each xi with 0 ≤ xi ≤ 1. Therefore xi^γi is a monotone increasing function of xi, and so is ci ∧ xi^γi. Moreover, ∀x ∈ X(A, b), according to (8.5.3), there exists x̆ ∈ X̆(A, b) such that x ≥ x̆, i.e., xi ≥ x̆i, so

   ci ∧ xi^γi ≥ ci ∧ x̆i^γi   (1 ≤ i ≤ m),

then f(x) ≥ f(x̆), i.e., an optimum solution to (8.5.1) must exist in X̆(A, b). Let f(x̆*) = min{f(x̆) | x̆ ∈ X̆(A, b)}. Then ∀x ∈ X(A, b) we have f(x) ≥ f(x̆*), so x̆* is an optimum solution to (8.5.1); here x̆* ∈ X̆(A, b).

As for the general situation, in (8.5.4) each item exponent γi (1 ≤ i ≤ m) of xi is either negative or non-negative. Let R1 = {i | γi < 0, 1 ≤ i ≤ m}, R2 = {i | γi ≥ 0, 1 ≤ i ≤ m}. Then R1 ∩ R2 = Ø and R1 ∪ R2 = I, where I = {1, 2, · · · , m}. Let

   f1(x) = ⋁_{i∈R1} (ci ∧ xi^γi),   f2(x) = ⋁_{i∈R2} (ci ∧ xi^γi).

Then f(x) = f1(x) ∨ f2(x). Therefore, we have the following two optimization models based on the above:

   min f1(x)   s.t. x ∘ A = b, 0 ≤ xi ≤ 1 (1 ≤ i ≤ m)     (8.5.5)

and

   min f2(x)   s.t. x ∘ A = b, 0 ≤ xi ≤ 1 (1 ≤ i ≤ m).     (8.5.6)

By Lemma 8.5.4, x̂ is an optimum solution to (8.5.5). By Lemma 8.5.5, ∃ x̆* ∈ X̆(A, b) such that x̆* is an optimum one to (8.5.6). Let

   x*_i = { x̂i,    i ∈ R1,
          { x̆*_i,  i ∈ R2.

Then we have the theorem as follows.

Theorem 8.5.1. If each item exponent γi (1 ≤ i ≤ m) of xi is either negative or non-negative, then x* is an optimum solution to (8.5.1).
Proof: ∀x ∈ X(A, b), according to (8.5.3), ∃ x̆ ∈ X̆(A, b) such that x̆ ≤ x ≤ x̂. By Lemmas 8.5.4 and 8.5.5, we have

   f(x) = f1(x) ∨ f2(x) ≥ f1(x̂) ∨ f2(x̆) ≥ f1(x̂) ∨ f2(x̆*) = f(x*).

So x* is an optimum solution to (8.5.1).

8.5.4 Model Algorithm

Algorithm 8.5.1
Step 1. Rearrange b in decreasing order of its components, and adjust A, x and f(x) correspondingly.


Step 2. Solve for x̂ by Formula (8.5.2). If x̂ is not a solution to (8.5.1)(a), turn to Step 10; otherwise, turn to Step 3.
Step 3. Check the signs of γi (1 ≤ i ≤ m). If γi < 0 (1 ≤ i ≤ m), turn to Step 9; otherwise, turn to Step 4.
Step 4. Compute the characteristic matrix C of A and Gj (1 ≤ j ≤ n), and find the minimum solution set X̆(A, b) of (8.5.1)(a) by Rule 8.5.1.
Step 5. If γi ≥ 0 (1 ≤ i ≤ m), obtain x̆* by Lemma 8.5.5 and turn to Step 8; otherwise, turn to Step 6.
Step 6. Gain x* by Theorem 8.5.1.
Step 7. Print f(x*); stop.
Step 8. Print f(x̆*); stop.
Step 9. Print f(x̂); stop.
Step 10. Print "no solution"; stop.

8.5.5 Examples

Example 8.5.1: We consider the following fuzzy relation geometric programming:

   min f(x) = (3 ∧ x1^−2) ∨ (2 ∧ x2^−1) ∨ (1.5 ∧ x3^{−1/2}) ∨ (2.5 ∧ x4^−2) ∨ (0.5 ∧ x5^{−5/2}) ∨ (4 ∧ x6^−1)
   s.t. x ∘ A = b, 0 ≤ xi ≤ 1 (1 ≤ i ≤ 6),

where b = (0.85, 0.6, 0.5, 0.1),

       ⎡ 0.5   0.2   0.8  0.1 ⎤
       ⎢ 0.8   0.2   0.8  0.1 ⎥
   A = ⎢ 0.9   0.1   0.4  0.1 ⎥.
       ⎢ 0.3   0.95  0.1  0.1 ⎥
       ⎢ 0.85  0.1   0.1  0.1 ⎥
       ⎣ 0.4   0.8   0.1  0   ⎦

By Formula (8.5.2), we solve x̂ = (0.5, 0.5, 0.85, 0.6, 1, 0.6). Since x̂ ∘ A = b, x̂ is a greatest solution to x ∘ A = b. It is easy to see that γi < 0 (1 ≤ i ≤ 6). By Lemma 8.5.4, x̂ is an optimum solution to Example 8.5.1, and the optimum value is f(x̂) = 3.

Example 8.5.2: Consider finding

   min f(x) = (1.5 ∧ x1^{1/2}) ∨ (2 ∧ x2) ∨ (0.8 ∧ x3^{−1/2}) ∨ (0.9 ∧ x4^−2) ∨ (0.7 ∧ x5^−4) ∨ (1 ∧ x6^−1)
   s.t. x ∘ A = b, 0 ≤ xi ≤ 1 (1 ≤ i ≤ 6),

where A and b are the same as in Example 8.5.1.


Since the exponents γi are of mixed sign, we solve the characteristic matrix C of A by Algorithm 8.5.1:

       ⎡ 0 0 1 1 ⎤
       ⎢ 0 0 1 1 ⎥
   C = ⎢ 1 0 0 1 ⎥,   G1 = {3, 5}, G2 = {4, 6}, G3 = {1, 2}, G4 = {1, 2, 3, 4, 5}.
       ⎢ 0 1 0 1 ⎥
       ⎢ 1 0 0 1 ⎥
       ⎣ 0 1 0 0 ⎦

Eight conservative paths of C can be obtained by Rule 8.5.1:

   p^C_1 = (3413),  p^C_2 = (3423),  p^C_3 = (3613),  p^C_4 = (3623),
   p^C_5 = (5415),  p^C_6 = (5425),  p^C_7 = (5615),  p^C_8 = (5625).

For the paths above, the corresponding minimum solutions are

   x̆1 = (0.5, 0, 0.85, 0.6, 0, 0),   x̆2 = (0, 0.5, 0.85, 0.6, 0, 0),
   x̆3 = (0.5, 0, 0.85, 0, 0, 0.6),   x̆4 = (0, 0.5, 0.85, 0, 0, 0.6),
   x̆5 = (0.5, 0, 0, 0.6, 0.85, 0),   x̆6 = (0, 0.5, 0, 0.6, 0.85, 0),
   x̆7 = (0.5, 0, 0, 0, 0.85, 0.6),   x̆8 = (0, 0.5, 0, 0, 0.85, 0.6).

Let f1(x) = (0.8 ∧ x3^{−1/2}) ∨ (0.9 ∧ x4^−2) ∨ (0.7 ∧ x5^−4) ∨ (1 ∧ x6^−1) and f2(x) = (1.5 ∧ x1^{1/2}) ∨ (2 ∧ x2). By Lemma 8.5.4, x̂ is an optimum solution for f1(x). By Lemma 8.5.5, an optimum solution for f2(x) is found among the minimum solutions above. By Theorem 8.5.1, x* = (0.5, 0, 0.85, 0.6, 1, 0.6) is an optimum solution for f(x), and the optimum value is f(x*) = 1. The method given here can be applied to both project optimization design and technological-economic analysis, and is of practical use in research on environmental protection and pollution disposal as well.

8.5.6 Conclusion

In research on fuzzy relation geometric programming, when the variable scale is not very large, we can smoothly reach the optimum point by applying this algorithm. However, when the variable scale is very large, the number of elements in the minimum solution set X̆(A, b) of (8.5.1)(a) increases significantly.
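The conservative-path construction of Section 8.5.2 can be checked on Example 8.5.1. The sketch below (our own helper names; it assumes b is already sorted in decreasing order, as Step 1 of Algorithm 8.5.1 requires) computes the greatest solution, enumerates the conservative paths, and verifies that each resulting minimum solution solves the max-min equation:

```python
# Sketch of the conservative-path method for the max-min equation
# x o A = b of Example 8.5.1; assumes b1 > b2 > ... > bn.

def max_min_compose(x, A):
    # (x o A)_j = max_i min(x_i, a_ij)  -- the (v, ^) composition.
    return [max(min(x[i], row[j]) for i, row in enumerate(A))
            for j in range(len(A[0]))]

def greatest_solution(A, b):
    # x^_i = min{b_j : b_j < a_ij}, with min over the empty set taken as 1.
    return [min([bj for bj, aij in zip(b, row) if bj < aij], default=1.0)
            for row in A]

def conservative_paths(A, b):
    # G_j = {i : b_j <= a_ij}; extend each partial path, reusing the
    # earliest already-chosen index that lies in G_j (Rule 8.5.1).
    G = [[i for i in range(len(A)) if b[j] <= A[i][j]] for j in range(len(b))]
    paths = [[]]
    for j in range(len(b)):
        new = []
        for p in paths:
            reused = [i for i in p if i in G[j]]
            new += [p + [reused[0]]] if reused else [p + [i] for i in G[j]]
        paths = new
    return paths

def path_solution(p, b, m):
    # x^g_i = max{b_j : K_j = i}, max over the empty set taken as 0.
    return [max([b[j] for j in range(len(b)) if p[j] == i], default=0.0)
            for i in range(m)]

A = [[0.5, 0.2, 0.8, 0.1], [0.8, 0.2, 0.8, 0.1], [0.9, 0.1, 0.4, 0.1],
     [0.3, 0.95, 0.1, 0.1], [0.85, 0.1, 0.1, 0.1], [0.4, 0.8, 0.1, 0.0]]
b = [0.85, 0.6, 0.5, 0.1]

x_hat = greatest_solution(A, b)          # (0.5, 0.5, 0.85, 0.6, 1, 0.6)
paths = conservative_paths(A, b)         # the 8 conservative paths
mins = [path_solution(p, b, len(A)) for p in paths]
```

Each of the 8 paths yields one of the minimum solutions x̆1, …, x̆8 listed above (indices here are 0-based, so path (3413) appears as [2, 3, 0, 2]).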

8.6 Fuzzy Relation Geometric Programming with (∨, ·) Operator

8.6.1 Introduction

We call

   min f(x) = (c1 · x1^γ1) ∨ (c2 · x2^γ2) ∨ · · · ∨ (cn · xn^γn)
   s.t. A ∘ x = b,     (a)
        0 ≤ xj ≤ 1 (1 ≤ j ≤ n)

(8.6.1)


a (∨, ·) (max-product) fuzzy relation geometric programming, where A = (aij) (1 ≤ i ≤ m, 1 ≤ j ≤ n) is an (m × n)-dimensional fuzzy matrix, x = (x1, x2, · · · , xn)^T an n-dimensional variable vector, b = (b1, b2, · · · , bm)^T (0 ≤ bi ≤ 1) an m-dimensional constant vector, c = (c1, c2, · · · , cn)^T (cj ≥ 0) an n-dimensional constant vector, γj an arbitrary real number, and the composition operator "∘" is (∨, ·), i.e., ⋁_{j=1}^{n} (aij · xj) = bi.

For some problems, the (∨, ·) operator can overcome shortcomings of the (∨, ∧) operator, and it is irreplaceable in solving certain practical problems. In this section, we propose such a fuzzy relation geometric programming model.

8.6.2 Structure of Solution Set on Equation

Since the feasible domain of (8.6.1) is the solution set of (8.6.1)(a), solving Equation (8.6.1)(a) is very important in order to optimize Model (8.6.1), so we make some exposition of the structure of the solution set of (8.6.1)(a) as follows [Pre81].

Definition 8.6.1. If there exists a solution to (8.6.1)(a), the system is called compatible.

Suppose X(A, b) = {x = (x1, x2, · · · , xn)^T ∈ R^n | A ∘ x = b, 0 ≤ xj ≤ 1} is the solution set of (8.6.1)(a). For x1, x2 ∈ X(A, b) we define x1 ≤ x2 ⇔ x1_j ≤ x2_j (1 ≤ j ≤ n); this "≤" is a partial order relation on X(A, b).

Definition 8.6.2. Similarly to Definition 8.4.1, we define x̂ ∈ X(A, b) as a greatest solution, x̆ as a minimal solution and x̆ as a minimum solution to (8.6.1)(a).

Let

   x̂j = ⋀_{i=1}^{m} (aij ⁻¹ bi)   (1 ≤ j ≤ n).     (8.6.2)

If x̂ = (x̂1, x̂2, · · · , x̂n)^T is a solution to (8.6.1)(a), we can easily prove that x̂ must be a greatest solution to (8.6.1)(a), where aij ⁻¹ bi is as in (8.2.4). For the greatest solution to (8.6.1)(a), we have

Lemma 8.6.1. [San76] A ∘ x = b is compatible if and only if A ∘ x̂ = b, and x̂ is the greatest solution.
Proof: The sufficiency is evident; we now prove the necessity. If x is a solution to A ∘ x = b, then ⋁_{j=1}^{n} (aij · xj) = bi (1 ≤ i ≤ m), so ∀i, j we have aij · xj ≤ bi. Let j be fixed. If aij ≤ bi, then 0 ≤ xj ≤ 1; if aij > bi, then 0 ≤ xj ≤ bi/aij. Hence we have

   xj ≤ ⋀_{i=1}^{m} (aij ⁻¹ bi) = x̂j,


i.e., x ≤ x̂. Going further, if bi < aij, then since x̂j = ⋀_{i=1}^{m} (aij ⁻¹ bi), we have aij · x̂j ≤ bi; if bi ≥ aij, then aij · x̂j ≤ aij ≤ bi. So

   ⋁_{j=1}^{n} (aij · x̂j) ≤ bi,

i.e., A ∘ x̂ ≤ b. Since x ≤ x̂, we have b = A ∘ x ≤ A ∘ x̂ ≤ b. Hence A ∘ x̂ = b, and x̂ is the greatest solution.

Corollary 8.6.1. If X(A, b) ≠ Ø, then x̂ ∈ X(A, b).

Similarly to Section 8.5, Ref. [ZW91] has provided a sufficient and necessary condition for a minimal element of Equation (8.6.1)(a) to exist, but ordinarily a minimal element of (8.6.1)(a) may not exist in X(A, b). For the minimum elements of X(A, b), we have the following.

Lemma 8.6.2. [ZW91] If X(A, b) ≠ Ø, then a minimum element must exist in X(A, b); minimum elements are usually not unique.

We can clearly see by (8.6.3) that the solution set structure of (8.6.1)(a) can ˘ ˘ be obtained by X(A, b), solving X(A, b) involves solving X(A, b). Now we introduce the method to ﬁnd the minimum solution to a (∨, ·) fuzzy relation equation. Deﬁnition 8.6.3. Matrix D = (dij )m×n is called a discriminate matrix of A, where aij , aij · x ˆj = bi , dij = 0, aij · x ˆj = bi . We can easily prove that by Deﬁnition 8.6.3, (8.6.1)(a) has a solution if and only if discriminate matrix D of A contains at least a nonzero entry in each row. Deﬁnition 8.6.4. Matrix G = (gij )m×n is called a simpliﬁcation matrix of A, where x ˆj , aij · xˆj = bi , gij = 0, aij · xˆj = bi . ˘ Based on matrix G, X(A, b) can be ﬁltrated as follows.

8.6 Fuzzy Relation Geometric Programming with ( , ·) Operator

289

Rule 8.6.1. (Filtration rule of minimum solution) 1) If bi = 0, then delete the i−th row of G. 2) If bi > 0, and ∃ k ∈ {1, 2, · · · , n}, such that k > i, ∀j = 1, 2, · · · , n, ckj = 0 ⇐⇒ cij = 0, then delete i-th row of G. ˜ To each row 3) The matrix gained by 1) and 2) can be denoted by G. ˜ of G, the only nonzero value is selected in every row with all entries of the ˜2, · · · , G ˜ p . To ˜1, G rest seen as zero, perhaps all of matrices are denoted by G ˜ each column of Gk (1 k p), the maximum is selected, a quasi-minimum solution xj can be obtained through such a method. The set composed of all xj is called a quasi-minimum solution one, and it includes all minimum solution to (8.6.1)(a). If repeat solution is deleted, and according to Deﬁnition 8.6.2, all ˘ minimum solutions X(A, b) can be got by ﬁltration [WZSL91][HK84][Zim91]. 8.6.3 Solving Solution on Model Let us consider the objective function as follows: f (x) = (c1 · xγ11 ) ∨ (c2 · xγ22 ) ∨ · · · ∨ (cn · xγnn ),

(8.6.4)

the optimal value of f (x) is related to exponent γj of every item xj (1 j n). Now we discuss (8.6.1) through the following three cases. Lemma 8.6.3. If γj < 0 (1 j n), then greatest solution xˆ to Equation (8.6.1)(a) is an optimal one to Model (8.6.1). Proof: Since γj < 0 (1 j n), then γ

d(xj j ) γ −1 = γj xj j 0 dxj γ

for each xj with 0 xj 1, so xj j is a monotone decreasing function about γ xj . It is easy to know that cj xj j is also a monotone decreasing function about xj . Therefore, ∀x ∈ X(A, b), when x xˆ, then γ

ˆγj (1 j n), cj · xj j cj · x such that f (x) f (ˆ x), so x ˆ is an optimal solution to (8.6.1). Lemma 8.6.4. If γj 0(1 j n), then a certain minimum solution x ˘ to (8.6.1)(a) is an optimal one to (8.6.1). Proof: Since γj 0(1 j n), then γ

d(xj j ) γ −1 = γj xj j 0 dxj γ

for each xj with 0 xj 1, hence xj j is a monotone increasing function with γ respect to xj , so is cj xj j with respect to xj .


So, ∀x ∈ X(A, b), according to Formula (8.6.3), there exists x̆ ∈ X̆(A, b) such that x ≥ x̆, that is, xj ≥ x̆j. Therefore

   cj · xj^γj ≥ cj · x̆j^γj   (1 ≤ j ≤ n),

then f(x) ≥ f(x̆); that is, an optimal solution to (8.6.1) must exist in X̆(A, b). Let

   f(x̆*) = min{f(x̆) | x̆ ∈ X̆(A, b)}.

Then ∀x ∈ X(A, b) we have f(x) ≥ f(x̆*), so x̆* is an optimal solution to (8.6.1); here x̆* ∈ X̆(A, b).

As for the general situation, in function (8.6.4) the exponent γj (1 ≤ j ≤ n) of each item xj is either negative or non-negative. Let R1 = {j | γj < 0, 1 ≤ j ≤ n}, R2 = {j | γj ≥ 0, 1 ≤ j ≤ n}. Then R1 ∩ R2 = Ø and R1 ∪ R2 = J, where J = {1, 2, · · · , n}. Let

   f1(x) = ⋁_{j∈R1} (cj · xj^γj),   f2(x) = ⋁_{j∈R2} (cj · xj^γj).

Then f(x) = f1(x) ∨ f2(x). Therefore, we have the next two optimization models based on the above:

   min f1(x)   s.t. A ∘ x = b, 0 ≤ xj ≤ 1 (1 ≤ j ≤ n),     (8.6.5)

and

   min f2(x)   s.t. A ∘ x = b, 0 ≤ xj ≤ 1 (1 ≤ j ≤ n).     (8.6.6)

By Lemma 8.6.3, x̂ is an optimal solution to (8.6.5). By Lemma 8.6.4, ∃ x̆* ∈ X̆(A, b) such that x̆* is an optimal solution to (8.6.6). Let

   x*_j = { x̂j,    j ∈ R1,
          { x̆*_j,  j ∈ R2.

We have the following theorem.

Theorem 8.6.1. If the exponent γj (1 ≤ j ≤ n) of each item xj is either negative or non-negative, then x* is an optimal solution to (8.6.1).
Proof: ∀x ∈ X(A, b), according to (8.6.3), ∃ x̆ ∈ X̆(A, b) such that x̆ ≤ x ≤ x̂. By Lemmas 8.6.3 and 8.6.4, we have

   f(x) = f1(x) ∨ f2(x) ≥ f1(x̂) ∨ f2(x̆) ≥ f1(x̂) ∨ f2(x̆*) = f(x*).

So x* is an optimal solution to (8.6.1).

8.6 Fuzzy Relation Geometric Programming with (∨, ·) Operator


8.6.4 Algorithm to Model

A. Algorithm

Algorithm 8.6.1
Step 1. Find x̂ by (8.6.2). If x̂ is not a solution to (8.6.1)(a), then turn to Step 9. Otherwise, turn to Step 2.
Step 2. Check the sign of γ_j (1 ≤ j ≤ n). If γ_j < 0 for all j, then turn to Step 8. Otherwise, turn to Step 3.
Step 3. Solve the discrimination matrix D and the simplification matrix G of A. The minimum solution set X̆(A, b) of (8.6.1)(a) is filtered out by Rule 8.6.1.
Step 4. If γ_j ≥ 0 (1 ≤ j ≤ n), obtain x̆* by Lemma 8.6.4 and turn to Step 7. Otherwise, turn to Step 5.
Step 5. Obtain x* by Theorem 8.6.1.
Step 6. Print f(x*), stop.
Step 7. Print f(x̆*), stop.
Step 8. Print f(x̂), stop.
Step 9. Print "no solution", stop.

B. Example

Example 8.6.1: We now consider the following (∨, ·) fuzzy relation geometric programming problem:

    min f(x) = (0.3 · x_1^{−2}) ∨ (1.8 · x_2^{−1/3}) ∨ (1.5 · x_3^{−1/2}) ∨ (0.45 · x_4^{−2})
    s.t. A ◦ x = b, 0 ≤ x_j ≤ 1 (1 ≤ j ≤ 4),

where b = (0.4, 0.2, 0.2)^T and

    A = ( 0.5  0    0.6  0.8
          0.5  0.2  0    0.4
          0.2  0.1  0.3  0.2 ).

By Formula (8.6.2), we can solve x̂ = (0.4, 1, 2/3, 0.5)^T. Since A ◦ x̂ = b, a solution to A ◦ x = b exists and x̂ is the greatest solution to A ◦ x = b. It is easy to see that γ_j < 0 (1 ≤ j ≤ 4), so x̂ is an optimal solution by Lemma 8.6.3, and the optimal value is f(x̂) = 1.875.

Example 8.6.2: Find

    min f(x) = (0.4 · x_1^{−1/2}) ∨ (0.7 · x_2^{3/2}) ∨ (0.6 · x_3^{1/2}) ∨ (0.2 · x_4^{−2})
    s.t. A ◦ x = b, 0 ≤ x_j ≤ 1 (1 ≤ j ≤ 4),

where A, b are the same as in Example 8.6.1.
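The greatest solution of Formula (8.6.2) and the optimum of Example 8.6.1 can be checked numerically. The sketch below is our own, not the book's (function names are ours); it assumes, as in Section 8.6, that A ◦ x is the max–product composition (A ◦ x)_i = max_j a_ij · x_j:

```python
def greatest_solution(A, b):
    # Greatest solution x̂ of the (∨, ·) relation equation A ◦ x = b:
    # each constraint max_j a_ij · x_j = b_i forces x_j <= b_i / a_ij
    # whenever a_ij > b_i, so take the componentwise minimum of those bounds.
    n = len(A[0])
    x = [1.0] * n
    for row, bi in zip(A, b):
        for j, aij in enumerate(row):
            if aij > bi:
                x[j] = min(x[j], bi / aij)
    return x

def compose(A, x):
    # (∨, ·) composition: (A ◦ x)_i = max_j a_ij · x_j.
    return [max(aij * xj for aij, xj in zip(row, x)) for row in A]

A = [[0.5, 0.0, 0.6, 0.8],
     [0.5, 0.2, 0.0, 0.4],
     [0.2, 0.1, 0.3, 0.2]]
b = [0.4, 0.2, 0.2]

x_hat = greatest_solution(A, b)          # (0.4, 1, 2/3, 0.5), as in Example 8.6.1
assert all(abs(u - v) < 1e-12 for u, v in zip(compose(A, x_hat), b))

c     = [0.3, 1.8, 1.5, 0.45]            # coefficients of Example 8.6.1
gamma = [-2.0, -1/3, -1/2, -2.0]         # all negative, so x̂ is optimal (Lemma 8.6.3)
f_hat = max(cj * xj ** gj for cj, xj, gj in zip(c, x_hat, gamma))
print(f_hat)                             # ≈ 1.875
```

If the feasibility assertion fails, A ◦ x = b has no solution, which is Step 9 of Algorithm 8.6.1.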


The discrimination matrix of A is

    D = ( 0    0    0.6  0.8
          0.5  0.2  0    0.4
          0    0    0.3  0   ).

Since each row of D contains at least one nonzero entry, a solution exists to the (∨, ·) fuzzy relation equation A ◦ x = b; this outcome is consistent with Example 8.6.1. Because the exponents γ_j are of mixed sign, we solve the simplification matrix G of A by Algorithm 8.6.1:

    G = ( 0    0  2/3  0.5
          0.4  1  0    0.5
          0    0  2/3  0   ).

After G is dealt with by Rule 8.2.1, we get

    G̃ = ( 0.4  1  0    0.5
          0    0  2/3  0   ).

Therefore, we have

    G̃_1 = ( 0.4  0  0    0      G̃_2 = ( 0  1  0    0      G̃_3 = ( 0  0  0    0.5
            0    0  2/3  0 ),           0  0  2/3  0 ),           0  0  2/3  0   ).

So all the minimum solutions to A ◦ x = b are

    x̆⁽¹⁾ = (0.4, 0, 2/3, 0)^T,  x̆⁽²⁾ = (0, 1, 2/3, 0)^T,  x̆⁽³⁾ = (0, 0, 2/3, 0.5)^T.

Notice that f_1(x) = (0.4 · x_1^{−1/2}) ∨ (0.2 · x_4^{−2}) and f_2(x) = (0.7 · x_2^{3/2}) ∨ (0.6 · x_3^{1/2}). From Lemma 8.6.3, we know x̂ is an optimal solution for f_1(x). From Lemma 8.6.4, we know x̆⁽¹⁾ and x̆⁽³⁾ are optimal solutions for f_2(x). By Theorem 8.6.1, x* = (0.4, 0, 2/3, 0.5)^T is clearly an optimal solution to f(x), and the optimal value is f(x*) = 0.8.

8.6.5 Conclusion

Relation programming with the (∨, ∧) and (∨, ·) operators has recently been attracting more attention, while fuzzy relation geometric programming has been developing very slowly. The reason is that it is difficult to get an ideal result by traditional nonlinear optimization methods, since the feasible domain of this kind of programming is in general nonconvex [BS79][Kel71]. Besides, owing to the nonlinear objective function, it is very difficult to provide a general algorithm for this kind of optimization problem; we can only discuss particular concrete nonlinear objectives. Ref. [LF01b] has provided a solution method for such problems by a genetic algorithm; however, when the number of variables grows, its premature convergence problem is difficult to overcome.
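Looking back at Example 8.6.2, the combination step of Theorem 8.6.1 can also be checked numerically. The following sketch is our own illustration (with the example's data hard-coded); it picks x̂_j on R_1 = {j : γ_j < 0} and a minimum solution on R_2 = {j : γ_j ≥ 0}:

```python
def f(x, c, gamma):
    # Objective of the (∨, ·) programming: max_j c_j · x_j^{γ_j}
    # (terms with x_j = 0 and γ_j > 0 contribute 0; γ_j < 0 needs x_j > 0).
    return max(cj * xj ** gj for cj, xj, gj in zip(c, x, gamma))

c     = [0.4, 0.7, 0.6, 0.2]
gamma = [-0.5, 1.5, 0.5, -2.0]

x_hat = [0.4, 1.0, 2/3, 0.5]          # greatest solution, from Example 8.6.1
minimal = [
    [0.4, 0.0, 2/3, 0.0],             # x̆(1)
    [0.0, 1.0, 2/3, 0.0],             # x̆(2)
    [0.0, 0.0, 2/3, 0.5],             # x̆(3)
]

# Theorem 8.6.1: take x̂_j where γ_j < 0 and x̆_j where γ_j >= 0,
# trying every minimum solution and keeping the best combination.
best = None
for xm in minimal:
    x_star = [x_hat[j] if gamma[j] < 0 else xm[j] for j in range(4)]
    val = f(x_star, c, gamma)
    if best is None or val < best:
        best = val
print(best)   # ≈ 0.8, the optimal value reported for Example 8.6.2
```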

9 Interval and Fuzzy Diﬀerential Equations

In this chapter, we put forward the concepts of ordinary differential equations in interval-valued functions and in fuzzy-valued functions, discuss existence and uniqueness of solutions to interval ordinary differential equations, study the existence and uniqueness of solutions of the fuzzy-valued equations at ordinary points and at fuzzy points by using a decomposition theorem for fuzzy sets, and obtain a class of solutions to these equations. At the same time, we study the Solow economic growth model and the Duoma debt model, both very influential in economics, by applying a fuzzy set-valued mapping method to the extension of the differential equations.

9.1 Interval Ordinary Differential Equations

Definition 9.1.1. If we use R to denote the set of real numbers, then we call the closed real interval x̄ = [x−, x+] = { x | x− ≤ x ≤ x+, x−, x+ ∈ R } an interval number, while the degenerate closed interval [x, x] is regarded as the real number x itself (x = 0 is a special example).

Definition 9.1.2. Suppose x̄_1 = [x_1−, x_1+], x̄_2 = [x_2−, x_2+], and let "∗" denote one of the arithmetic operations "+, −, ×, ÷" on real numbers. By the classical extension principle, we have

    x̄_1 ∗ x̄_2 = { z | ∃(x_1, x_2) ∈ [x_1−, x_1+] × [x_2−, x_2+], z = x_1 ∗ x_2 }.

If the outcome is still a closed interval number, we say the formula above defines an operation on interval numbers. When "∗" denotes division, 0 ∈ x̄_2 is excluded.

Definition 9.1.3. Suppose F̄ : [a, b] → I_R, I_R = { [x_1, x_2] | x_1 ≤ x_2, x_1, x_2 ∈ R }, x ↦ [x_1−, x_2+];

B.-Y. Cao: Optimal Models & Meth. with Fuzzy Quantities, STUDFUZZ 248, pp. 293–326. c Springer-Verlag Berlin Heidelberg 2010 springerlink.com


then x ↦ F̄(x) = [F−(·), F+(··)] is an interval function, where x_1 = F−(·), x_2 = F+(··); here "(·)" denotes the argument list (x, f−(x), df−(x)/dx, · · · , dⁿf−(x)/dxⁿ) and "(··)" denotes (x, f+(x), df+(x)/dx, · · · , dⁿf+(x)/dxⁿ), and F−, F+ together with f−, f+ are all ordinary functions on [a, b]; hence, for all x ∈ [a, b], f−(x) ≤ f+(x) and F−(·) ≤ F+(··). If F−(·) and F+(··) are continuous on [a, b], then F̄(x) is called continuous on [a, b]. The relevant definitions of continuity and differentiability of y = f(x) on [a, b] can be found in [Cen87].

Definition 9.1.4. Suppose f̄(x) is an interval function defined on [a, b], and at x_0 ∈ [a, b] the ordinary derivatives df−(x_0)/dx and df+(x_0)/dx exist. Then we say the interval function f̄(x) is derivable at x_0, and

    [min{df−(x_0)/dx, df+(x_0)/dx}, max{df−(x_0)/dx, df+(x_0)/dx}]

is the interval derivative of f̄(x) at x_0. When df−(x_0)/dx ≤ df+(x_0)/dx, [df−(x_0)/dx, df+(x_0)/dx] is a same-order interval derivative of f̄(x) at x_0; otherwise, [df+(x_0)/dx, df−(x_0)/dx] is an antitone one of f̄(x) at x_0.

Definition 9.1.5. Suppose f̄(x) is defined on the interval [a, b] and, for all x ∈ [a, b], the derived functions df−(x)/dx and df+(x)/dx exist. Then

    [min{df−(x)/dx, df+(x)/dx}, max{df−(x)/dx, df+(x)/dx}]

is called the interval derived function of f̄(x) on [a, b], briefly written df̄(x)/dx = [y−(x), y+(x)], and f̄(x) is called a primal function of ȳ(x) on the interval [a, b].

If df−(x)/dx ≤ df+(x)/dx for all x ∈ [a, b], then we call f̄(x) same-order derivable on [a, b], ȳ(x) = [df−(x)/dx, df+(x)/dx] the same-order derived function of f̄(x) on [a, b], and f̄(x) the same-order primal function of ȳ(x). Otherwise, we call f̄(x) antitone derivable, ȳ(x) = [df+(x)/dx, df−(x)/dx] the antitone derived function of f̄(x) on [a, b], and f̄(x) an antitone primal function of ȳ(x).
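Definitions 9.1.2 and 9.1.4 are easy to mirror in code. A small sketch (our own, not the book's): `mul` enumerates endpoint products as in the classical extension principle, and `interval_derivative` orders the two endpoint derivatives:

```python
def add(x, y):
    # x̄1 + x̄2 = [x1− + x2−, x1+ + x2+] (Definition 9.1.2).
    return (x[0] + y[0], x[1] + y[1])

def mul(x, y):
    # Products of the endpoints bound x̄1 · x̄2; min/max picks the closed interval.
    p = [x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1]]
    return (min(p), max(p))

def interval_derivative(df_lo, df_hi, x0):
    # Definition 9.1.4: [min(df−(x0)/dx, df+(x0)/dx), max(df−(x0)/dx, df+(x0)/dx)].
    a, b = df_lo(x0), df_hi(x0)
    return (min(a, b), max(a, b))

print(add((1, 2), (-1, 3)))     # (0, 5)
print(mul((-1, 2), (3, 4)))     # (-4, 8)
# f̄(x) = [x, x²] on [1, 2]: derivatives 1 and 2x are already ordered (same order).
print(interval_derivative(lambda x: 1.0, lambda x: 2 * x, 1.5))   # (1.0, 3.0)
```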


Similarly, if f̄(x) is defined on the interval [a, b] and the ordinary nth derivatives dⁿf−(x_0)/dxⁿ and dⁿf+(x_0)/dxⁿ exist at a point x_0 ∈ [a, b], we call

    [min(dⁿf−(x_0)/dxⁿ, dⁿf+(x_0)/dxⁿ), max(dⁿf−(x_0)/dxⁿ, dⁿf+(x_0)/dxⁿ)]

the nth interval derivative of f̄(x) at x_0, written dⁿf̄(x_0)/dxⁿ; as x_0 varies it is the nth derivative function of f̄(x).

Definition 9.1.6. An equation containing an unknown interval derivative is called an interval differential equation; a differential equation containing an unknown interval function of one variable is called an interval ordinary differential equation, i.e.,

    df̄(x)/dx = f̄(x)  ⟺  [df−(x)/dx, df+(x)/dx] = [f−(x), f+(x)],      (9.1.1)

called a 1st-order interval ordinary differential equation, while

    dⁿf̄(x)/dxⁿ + · · · + a_{n−1}(x) df̄(x)/dx + a_n(x) f̄(x) = 0
    ⟺ [dⁿf−(x)/dxⁿ, dⁿf+(x)/dxⁿ] + · · · + a_{n−1}(x)[df−(x)/dx, df+(x)/dx] + a_n(x)[f−(x), f+(x)] = [0, 0]      (9.1.2)

is called an nth-order interval ordinary differential equation, where a_1(x), · · · , a_{n−1}(x), a_n(x) are known functions. The functions discussed below are all supposed to be same-order derivable [Cen87] (the antitone derivable case can be discussed similarly).

Definition 9.1.7. If substituting a function φ̄(x) into (9.1.1) or (9.1.2) turns it into an identity, then φ̄(x) is called an interval solution to it. Seeking the interval solutions to (9.1.1) or (9.1.2) means solving the interval differential equation. Here φ̄(x) = [φ−(x), φ+(x)], where φ−, φ+ are ordinary functions.

Definition 9.1.8. The fixed solutions problem is

    F̄(x, f̄(x), df̄(x)/dx, d²f̄(x)/dx², · · · , dⁿf̄(x)/dxⁿ) = 0,      (9.1.3)
    f̄(x_0) = f̄_0, df̄(x_0)/dx = f̄_0⁽¹⁾, · · · , d^{n−1}f̄(x_0)/dx^{n−1} = f̄_0⁽ⁿ⁻¹⁾,      (9.1.4)

that is,

    F̄([x, x], [f−(x), f+(x)], [df−(x)/dx, df+(x)/dx], · · · , [dⁿf−(x)/dxⁿ, dⁿf+(x)/dxⁿ]) = [0, 0],
    [f−(x_0), f+(x_0)] = [f_0−, f_0+], [df−(x_0)/dx, df+(x_0)/dx] = [f_0⁽¹⁾−, f_0⁽¹⁾+], · · · ,
    [d^{n−1}f−(x_0)/dx^{n−1}, d^{n−1}f+(x_0)/dx^{n−1}] = [f_0^{−(n−1)}, f_0^{+(n−1)}];

a solution satisfying (9.1.3) and (9.1.4) is called an interval special solution to the fixed solutions problem.

Definition 9.1.9. An expression of solution containing n arbitrary interval constants for an nth-order interval function equation of the form (9.1.3),

    f̄(x) = φ̄(x, C̄_0, C̄_1, · · · , C̄_{n−1}),

is called an interval general solution if, for any initial value condition given arbitrarily in a certain range, special fixed values of the arbitrary constants C̄_0, C̄_1, · · · , C̄_{n−1} can be found, at least from

    df̄(x)/dx = ∂φ̄/∂x (x, C̄_0, C̄_1, · · · , C̄_{n−1}),
    · · ·
    dⁿf̄(x)/dxⁿ = ∂ⁿφ̄/∂xⁿ (x, C̄_0, C̄_1, · · · , C̄_{n−1}),

such that the corresponding solution satisfies this condition.

Note: ∂ᵏφ̄(·)/∂xᵏ = [min(∂ᵏφ−(·)/∂xᵏ, ∂ᵏφ+(·)/∂xᵏ), max(∂ᵏφ−(·)/∂xᵏ, ∂ᵏφ+(·)/∂xᵏ)], and under the precondition that f̄ is same-order derivable, there is ∂ᵏφ̄(·)/∂xᵏ = [∂ᵏφ−(·)/∂xᵏ, ∂ᵏφ+(·)/∂xᵏ].

Theorem 9.1.1. (Existence theorem on implicit functions) Suppose F̄(x̄_0, x̄_1, · · · , x̄_n) and ∂F̄/∂x̄_i (x̄_0, x̄_1, · · · , x̄_n) (i = 0, 1, · · · , n) are defined and continuous in some neighborhood σ of (x̄_0⁰, x̄_1⁰, · · · , x̄_n⁰), with F̄(x̄_0⁰, x̄_1⁰, · · · , x̄_n⁰) = 0 and ∂F̄/∂x̄_n (x̄_0⁰, x̄_1⁰, · · · , x̄_n⁰) ≠ 0. Then the equation F̄(x̄_0, x̄_1, · · · , x̄_n) = 0 has a unique interval solution x̄_n = f(x̄_0, x̄_1, · · · , x̄_{n−1}) in a certain neighborhood σ′ ⊆ σ of the point (x̄_0⁰, x̄_1⁰, · · · , x̄_n⁰).

Proof: Because

    ∂F̄/∂x̄_n (x̄_0⁰, · · · , x̄_n⁰) ≠ 0  ⟺  [∂F−/∂x_n (x_0^{−0}, · · · , x_n^{−0}), ∂F+/∂x_n (x_0^{+0}, · · · , x_n^{+0})] ≠ [0, 0],

we have ∂F−/∂x_n (x_0^{−0}, · · · , x_n^{−0}) ≠ 0 and ∂F+/∂x_n (x_0^{+0}, · · · , x_n^{+0}) ≠ 0. The theorem then holds by the definition of interval numbers and the classical implicit function existence theorem.


Theorem 9.1.2. Consider an interval implicit function equation of the form (9.1.3). If, in the region considered, ∂F̄/∂f̄⁽ⁿ⁾(x) ≠ 0, then a normal type of interval differential equation is obtained as follows:

    dⁿf̄(x)/dxⁿ = φ̄(x, f̄(x), df̄(x)/dx, d²f̄(x)/dx², · · · , d^{n−1}f̄(x)/dx^{n−1}),      (9.1.5)

where φ̄ is a known interval function of the n + 1 variables.

Proof: Because ∂F̄/∂f̄⁽ⁿ⁾(x) ≠ 0, there exist

    ∂F−/∂f^{−(n)}(x) ≠ 0,  ∂F+/∂f^{+(n)}(x) ≠ 0.

From the existence theorem for interval implicit functions, it is known that

    dⁿf−(x)/dxⁿ = φ−(x, f−(x), df−(x)/dx, · · · , d^{n−1}f−(x)/dx^{n−1}),
    dⁿf+(x)/dxⁿ = φ+(x, f+(x), df+(x)/dx, · · · , d^{n−1}f+(x)/dx^{n−1}).

Therefore, the theorem holds.

Theorem 9.1.3. Any normal type of interval differential equation (or system) can be turned into a 1st-order one.

Proof: Because

    dⁿf̄(x)/dxⁿ = φ̄(x, f̄(x), df̄(x)/dx, · · · , d^{n−1}f̄(x)/dx^{n−1})
    ⟺ df̄(x)/dx = f̄_1, df̄_1(x)/dx = f̄_2, · · · , df̄_{n−1}(x)/dx = φ̄(x, f̄, f̄_1, · · · , f̄_{n−1}),

where f̄_i = [f_i−, f_i+] (i = 1, 2, · · · , n − 1) are unknown interval functions, the theorem holds.

Corollary 9.1.1. The initial value problem for (9.1.5) is equivalent to an initial value problem for a system of 1st-order normal type interval differential equations.

Let ȳ = (ȳ_1, ȳ_2, · · · , ȳ_n)^T, f̄ = (f̄_1, f̄_2, · · · , f̄_n)^T and dȳ/dx = (dȳ_1/dx, dȳ_2/dx, · · · , dȳ_n/dx)^T. Then the equations dȳ_i/dx = f̄_i(x, ȳ_1, ȳ_2, · · · , ȳ_n) can be written as

    dȳ/dx = f̄(x, ȳ),      (9.1.6)

where ȳ_i = [y_i−, y_i+], f̄_i = [f_i−, f_i+], dȳ_i/dx = [dy_i−/dx, dy_i+/dx] (i = 1, 2, · · · , n).


Therefore, only (9.1.6) needs to be discussed.

Theorem 9.1.4. (Existence theorem for solutions) Given the interval differential equation (9.1.6) and the initial value (x_0, ȳ_0), suppose f̄(x, ȳ) is continuous on the closed region æ : |x − x_0| ≤ a, d(ȳ, ȳ_0) ⊆ [b−, b+] (a > 0, b+ > b− > 0), where

    d(x, Ȳ) = dH([x, x], [Y−, Y+]) = max(|x − Y−|, |x − Y+|)

and dH is the Hausdorff measurement. Then at least one interval solution to (9.1.6) exists, taking the value ȳ_0 at x = x_0; meanwhile it is determined and continuous on a certain interval containing x_0.

Proof: (9.1.6) has at least one determined continuous interval solution on a certain interval containing x_0 if and only if each of

    dy−/dx = f−(x, y−),  dy+/dx = f+(x, y+)      (9.1.7)

has at least one determined continuous solution through y_0− and y_0+, respectively, on a certain interval containing x_0. Then ȳ = [y−, y+] is a determined continuous solution of (9.1.6) on a certain interval containing x_0.

Theorem 9.1.5. (Uniqueness theorem for solutions) Under the conditions of Theorem 9.1.4, if on æ the variable ȳ also satisfies the Lipschitz condition, i.e., there exists N > 0 such that any two values ȳ_1, ȳ_2 in æ imply

    |f̄(x, ȳ_1) − f̄(x, ȳ_2)| ⊆ N|ȳ_1 − ȳ_2|,      (9.1.8)

then

    (9.1.6) with f̄(x, ȳ)|_{x=x_0, ȳ=ȳ_0} = f̄_0      (9.1.9)

has a unique determined continuous interval solution.

Proof: From the definition of the Hausdorff measurement, we know

    (9.1.8) ⟺ max{|f−(x, y_1−) − f−(x, y_2−)|, |f+(x, y_1+) − f+(x, y_2+)|} ≤ N max{|y_1− − y_2−|, |y_1+ − y_2+|}
    ⟺ |f−(x, y_1−) − f−(x, y_2−)| ≤ N|y_1− − y_2−|      (9.1.10)
    or |f+(x, y_1+) − f+(x, y_2+)| ≤ N|y_1+ − y_2+|.      (9.1.11)

Because (9.1.6) satisfying the conditions of Theorem 9.1.4 is equivalent to (9.1.7) with f−(x, y−) and f+(x, y+) continuous on the closed regions:


    ǣ_1 : |x − x_0| ≤ a, |y− − y_0−| ≤ b−;  ǣ_2 : |x − x_0| ≤ a, |y+ − y_0+| ≤ b+,

and satisfying (9.1.10) and (9.1.11), it is known, by the uniqueness theorem for solutions of classical ordinary differential equations, that (9.1.7) has a unique solution through (x_0, y_0−) and through (x_0, y_0+), respectively, hence through (x_0, ȳ_0); and that is a unique interval solution to (9.1.9).

Theorem 9.1.6. Let f̄(x, ȳ) be same-order derivable [Cen87]. Then (9.1.6) has a solution of the form ȳ = φ̄(x) + c̄.

Proof: Because

    dȳ/dx = f̄(x, ȳ) ⟺ [dy−/dx, dy+/dx] = [f−(x, y−), f+(x, y+)]
    ⟹ dy−/dx = f−(x, y−),  dy+/dx = f+(x, y+)

and f̄(·) has a same-order primal function φ̄(·), we get y− = φ−(x) + c−, y+ = φ+(x) + c+; hence ȳ = φ̄(x) + c̄.
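The constructive content of Theorems 9.1.4–9.1.6 — solve the two endpoint equations (9.1.7) separately and recombine — can be sketched numerically. This is our own illustration (explicit Euler steps, not a method from the book):

```python
def solve_interval_ode(f_lo, f_hi, y0, x0, x1, steps=100_000):
    # Endpoint decomposition from the proof of Theorem 9.1.4: the interval
    # equation dȳ/dx = f̄(x, ȳ) splits into the two ordinary ODEs (9.1.7),
    # integrated here independently with explicit Euler steps.
    lo, hi = y0
    h = (x1 - x0) / steps
    x = x0
    for _ in range(steps):
        lo += h * f_lo(x, lo)
        hi += h * f_hi(x, hi)
        x += h
    return lo, hi

# dȳ/dx = ȳ with ȳ(0) = [0.9, 1.1]; the exact solution is [0.9·e^x, 1.1·e^x].
lo, hi = solve_interval_ode(lambda x, y: y, lambda x, y: y, (0.9, 1.1), 0.0, 1.0)
print(lo, hi)   # ≈ 2.4465 and ≈ 2.9901, i.e. 0.9·e and 1.1·e
```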

9.2 Fuzzy-Valued Ordinary Differential Equations

Definition 9.2.1. Let Ã ∈ F(R) be a fuzzy subset on R. If, for all α ∈ [0, 1], Aα = [Aα−, Aα+] with A_1 ≠ Ø, then Ã is called a fuzzy number; the whole set of fuzzy numbers is written F(R).

Definition 9.2.2. If
1) there exists a unique y_0 ∈ R such that μ_Ã(y_0) = 1;
2) μ_Ã(y) is continuous with respect to y;
3) there exists [y_1, y_2] such that S(Ã) ⊆ [y_1, y_2];
4) for all s, all t > s and all y ∈ (s, t), we have μ_Ã(y) > min(μ_Ã(s), μ_Ã(t)),
then Ã ∈ F(R) is called a convex normal fuzzy number.

Definition 9.2.3. Let f̃ : [a, b] → F(R), x ↦ f̃(x). Then f̃ is called a fuzzy-valued function defined on [a, b], and when every f̃(x) is a convex normal fuzzy number, f̃ is called a convex normal fuzzy function.

Definition 9.2.4. Let f̄α : [a, b] → I_R, x ↦ f̄α(x) = [f̃(x)]α. Then f̄α is called an α-cut function of f̃; f̃ is continuous if and only if f̄α is continuous for all α ∈ (0, 1].

The operations on fuzzy numbers are defined by the fuzzy extension principle [WL85]: f(Ã⁽¹⁾, Ã⁽²⁾, · · · , Ã⁽ᵐ⁾) = ∪_{α∈(0,1]} α f(Aα⁽¹⁾, Aα⁽²⁾, · · · , Aα⁽ᵐ⁾). Thus, for Ã, B̃ ∈ F(R),


1) (Ã ± B̃)α = Aα ± Bα;
2) (kÃ)α = kAα.

Definition 9.2.5. Let f̃(x) be defined on [a, b] and f̄α(x) be differentiable for all α ∈ (0, 1]. Then

    df̃(x)/dx = ∪_{α∈(0,1]} α · df̄α(x)/dx

is called the fuzzy-valued derivative at the ordinary point x. In the following it is supposed that f̃(x) is same-order derivable on [a, b] (the antitone derivable case admits a similar discussion); then the fuzzy-valued derivative can be simply expressed as

    df̃(x)/dx = ∪_{α∈(0,1]} α [dfα−(x)/dx, dfα+(x)/dx].

Definition 9.2.6. Let f̃ : [a, b] × [c, d] → F(R), (x, y) ↦ f̃(x, y) be a binary fuzzy-valued function defined on [a, b] × [c, d]; its α-cut function is f̄α : (x, y) ↦ f̄α(x, y) = [f̃(x, y)]α = [fα−(x, y), fα+(x, y)]. If for all α ∈ (0, 1] both fα− and fα+ are differentiable at (x, y), then the partial derivatives of f̃ at (x, y) are defined as:

    ∂f̃(x, y)/∂x = ∪_{α∈(0,1]} α [∂fα−(x, y)/∂x, ∂fα+(x, y)/∂x],
    ∂f̃(x, y)/∂y = ∪_{α∈(0,1]} α [∂fα−(x, y)/∂y, ∂fα+(x, y)/∂y].
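Rules 1) and 2) act cut-wise, which is easy to mirror in code. A small sketch (our own representation, not the book's): a fuzzy number is modeled by its α-cut function, here for triangular fuzzy numbers:

```python
def tri(a, b, c):
    # α-cut of the triangular fuzzy number (a, b, c):
    # [a + α(b − a), c − α(c − b)], a nested family of closed intervals.
    return lambda alpha: (a + alpha * (b - a), c - alpha * (c - b))

def add(A, B):
    # Rule 1): (Ã + B̃)α = Aα + Bα, endpoint-wise interval addition.
    return lambda alpha: (A(alpha)[0] + B(alpha)[0], A(alpha)[1] + B(alpha)[1])

def scale(k, A):
    # Rule 2): (kÃ)α = kAα (illustrated for k >= 0, which keeps endpoint order).
    return lambda alpha: (k * A(alpha)[0], k * A(alpha)[1])

A = tri(1.0, 2.0, 3.0)
B = tri(0.0, 1.0, 2.0)
S = add(A, B)
T = scale(2.0, A)
print(S(1.0))   # (3.0, 3.0): the peaks add
print(S(0.0))   # (1.0, 5.0): the supports add
print(T(0.5))   # (3.0, 5.0)
```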

Theorem 9.2.1. If f̃_1(x), f̃_2(x) are same-order derivable and df̃_1(x)/dx, df̃_2(x)/dx are normal convex, then
1) d(f̃_1(x) ± f̃_2(x))/dx = df̃_1(x)/dx ± df̃_2(x)/dx;
2) d(kf̃(x))/dx = k · df̃(x)/dx.

Definition 9.2.7. An equation containing an unknown fuzzy-valued derivative is called a fuzzy-valued differential equation;

    df̃(x)/dx = f̃(x)  ⟺  ∪_{α∈(0,1]} α df̄α(x)/dx = ∪_{α∈(0,1]} α f̄α(x)      (9.2.1)


is called a 1st-order fuzzy-valued differential equation, and

    dⁿf̃(x)/dxⁿ + · · · + a_{n−1}(x) df̃(x)/dx + a_n(x) f̃(x) = 0
    ⟺ ∪_{α∈(0,1]} α dⁿf̄α(x)/dxⁿ + · · · + a_{n−1}(x) ∪_{α∈(0,1]} α df̄α(x)/dx + a_n(x) ∪_{α∈(0,1]} α f̄α(x) = 0      (9.2.2)

is called an nth-order fuzzy-valued differential equation, where each a_i(x) (1 ≤ i ≤ n) is a known ordinary function (possibly a fuzzy-valued function), with a_n(x) ≠ 0.

Definition 9.2.8. If substituting a fuzzy-valued function f̃(x) into (9.2.1) or (9.2.2) turns it into an identity, then f̃(x) is a solution to it. The process of finding solutions to (9.2.1) or (9.2.2) is called solving the fuzzy-valued differential equation.

Definition 9.2.9. Let a fuzzy-valued fixed solution problem be

    F̃(x, f̃(x), df̃(x)/dx, d²f̃(x)/dx², · · · , dⁿf̃(x)/dxⁿ) = 0,      (9.2.3)
    f̃(x_0) = f̃_0, df̃(x_0)/dx = f̃_0⁽¹⁾, · · · , d^{n−1}f̃(x_0)/dx^{n−1} = f̃_0⁽ⁿ⁻¹⁾.      (9.2.3)(a)

Then a solution satisfying (9.2.3) is called a special solution, while an expression of the solution to (9.2.3) containing n arbitrary fuzzy constants,

    f̃(x) = φ̃(x, C̃_0, C̃_1, · · · , C̃_{n−1}),

is called a fuzzy-valued general solution. Here (9.2.3) decomposes α-cut-wise into

    F̄α(x, f̄α(x), df̄α(x)/dx, · · · , dⁿf̄α(x)/dxⁿ) = 0,
    f̄α(x_0) = f̄_0α, df̄α(x_0)/dx = f̄_0α⁽¹⁾, · · · , d^{n−1}f̄α(x_0)/dx^{n−1} = f̄_0α⁽ⁿ⁻¹⁾  (α ∈ (0, 1]),

with f̃(x) = ∪_{α∈(0,1]} α φ̄α(x, C̄_0α, C̄_1α, · · · , C̄_{(n−1)α}). The expression is a general solution if, for any initial value condition (9.2.3)(a) given arbitrarily in a certain range, specific fixed values of the arbitrary fuzzy constants C̃_0, C̃_1, · · · , C̃_{n−1} can all be found such that the corresponding solution satisfies this condition.

Theorem 9.2.2. (Existence theorem on implicit functions) Suppose that, in some neighborhood of the point (x_1⁰, x_2⁰, · · · , x_n⁰, ũ_0),


1) F̃(x_1, x_2, · · · , x_n, ũ) is a continuous convex normal fuzzy-valued function, with F̃(x_1⁰, x_2⁰, · · · , x_n⁰, ũ_0) = 0;
2) ∂F̃/∂u (x_1⁰, x_2⁰, · · · , x_n⁰, ũ_0) is a same-order fuzzy-valued partial derivative, continuous, with ∂F̃/∂u (x_1⁰, x_2⁰, · · · , x_n⁰, ũ_0) ≠ 0.

Then, in a neighborhood of this point, F̃(x_1, x_2, · · · , x_n, ũ) = 0 has a unique fuzzy-valued solution ũ = φ̃(x_1, x_2, · · · , x_n).

Proof: Because

    F̃(x_1, x_2, · · · , x_n, ũ) = ∪_{α∈(0,1]} α F̄α(x_1, x_2, · · · , x_n, ūα),
    ∂F̃/∂u (x_1, x_2, · · · , x_n, ũ) = ∪_{α∈(0,1]} α ∂F̄α/∂u (x_1, x_2, · · · , x_n, ūα),

and ∂F̃/∂u (x_1⁰, x_2⁰, · · · , x_n⁰, ũ_0) ≠ 0, we know from the assumption that

    ∪_{α∈(0,1]} α [∂F̄α−/∂u (x_1⁰, · · · , x_n⁰, ū_0α−), ∂F̄α+/∂u (x_1⁰, · · · , x_n⁰, ū_0α+)] ≠ [0, 0],

i.e.,

    ∪_{α∈(0,1]} α ∂Fα−/∂u (x_1⁰, · · · , x_n⁰, u_0α−) ≠ 0,  ∪_{α∈(0,1]} α ∂Fα+/∂u (x_1⁰, · · · , x_n⁰, u_0α+) ≠ 0,

so that, for every α ∈ (0, 1],

    F̄α(x_1⁰, · · · , x_n⁰, ū_0α) = 0,  (∂F̄α/∂u)(x_1⁰, · · · , x_n⁰, ū_0α) ≠ 0,

with continuity near (x_1⁰, · · · , x_n⁰, ū_0α), so that a unique interval solution ūα = φ̄α(x_1, · · · , x_n) of F̄α(x_1, · · · , x_n, ūα) = 0 exists near this point. Therefore, near this point, F̃(x_1, · · · , x_n, ũ) = 0 has a unique fuzzy-valued solution ũ = ∪_{α∈(0,1]} α φ̄α(x_1, x_2, · · · , x_n).

If we solve (9.2.3) for dⁿf̃(x)/dxⁿ, we obtain an equation of the form

    dⁿỹ(x)/dxⁿ = f̃(x, ỹ(x), dỹ(x)/dx, · · · , d^{n−1}ỹ(x)/dx^{n−1}),      (9.2.4)


where f̃ is a known fuzzy-valued function of the n + 1 variables; (9.2.4) is called a normal type fuzzy-valued differential equation.

Theorem 9.2.3. If, in the region considered, ∂F̃/∂f̃⁽ⁿ⁾(x) ≠ 0, then (9.2.3) yields a normal type fuzzy-valued differential equation (9.2.4).

Proof: In the region considered,

    ∂F̃/∂f̃⁽ⁿ⁾(x) ≠ 0  ⟺  ∪_{α∈(0,1]} α ∂F̄α/∂f̄α⁽ⁿ⁾(x) ≠ 0,

and according to the existence theorem for fuzzy-valued implicit functions, the theorem holds.

Theorem 9.2.4. Any nth-order normal type fuzzy-valued differential equation (9.2.4) is equivalent to a 1st-order system

    dỹ(x)/dx = ỹ_1(x),
    dỹ_1(x)/dx = ỹ_2(x),
    · · ·
    dỹ_{n−1}(x)/dx = f̃(x; ỹ(x), ỹ_1(x), · · · , ỹ_{n−1}(x)).      (9.2.5)

Proof: Because

    (9.2.4) ⟺ ∪_{α∈(0,1]} α dⁿȳα(x)/dxⁿ = ∪_{α∈(0,1]} α f̄α(x, ȳα(x), dȳα(x)/dx, · · · , d^{n−1}ȳα(x)/dx^{n−1}).

Suppose ỹ(x) = φ̃(x) is a solution to (9.2.4) on the interval I = [a, b], and let

    φ̃_1(x) = dφ̃(x)/dx, · · · , φ̃_{n−1}(x) = d^{n−1}φ̃(x)/dx^{n−1}.

Then, α-cut-wise, this is equivalent to

    dφ̄α(x)/dx = φ̄_1α(x), dφ̄_1α(x)/dx = φ̄_2α(x), · · · ,
    dφ̄_{(n−1)α}(x)/dx = f̄α(x, φ̄α(x), φ̄_1α(x), · · · , φ̄_{(n−1)α}(x))  (α ∈ (0, 1]),


i.e.,

    dφ̃(x)/dx = φ̃_1(x), dφ̃_1(x)/dx = φ̃_2(x), · · · , dφ̃_{n−1}(x)/dx = f̃(x, φ̃(x), φ̃_1(x), · · · , φ̃_{n−1}(x)).

Because each φ̃_i(x) = ∪_{α∈(0,1]} α φ̄_iα(x) (1 ≤ i ≤ n − 1) is an unknown fuzzy-valued function, it follows that, for every α,

    ȳα(x) = φ̄α(x), ȳ_1α(x) = dφ̄α(x)/dx, · · · , ȳ_{(n−1)α}(x) = d^{n−1}φ̄α(x)/dx^{n−1}

is a solution of the corresponding classical problem on the interval I = [a, b]. Thereby

    ỹ(x) = φ̃(x), ỹ_1(x) = dφ̃(x)/dx, · · · , ỹ_{n−1}(x) = d^{n−1}φ̃(x)/dx^{n−1}

is a solution to (9.2.5) on I = [a, b]. Therefore, the theorem is proved.

Corollary 9.2.1. An arbitrary initial value problem for (9.2.4) is equivalent to an initial value problem for a system of 1st-order normal fuzzy-valued differential equations.

Only the 1st-order case is discussed below, because similar conclusions hold for the others.

Definition 9.2.10. d(x̃, ỹ) = dH(x̃, ỹ) is the Hausdorff measure induced by the metric d, defined as

    dH(x̃, ỹ) = max(sup{ d(x, ỹ) | x ∈ x̃ }, sup{ d(y, x̃) | y ∈ ỹ }) if x̃, ỹ ≠ Ø;
    dH(x̃, ỹ) = 0 if x̃ = ỹ = Ø;  dH(x̃, ỹ) = ∞ if exactly one of x̃, ỹ is Ø.

When x̃, ỹ are non-empty closed subsets of a closed region æ, we have dH(x̃, ỹ) = max{ sup[ d(x, ỹ) | x ∈ x̃ ], sup[ d(y, x̃) | y ∈ ỹ ] }.

Theorem 9.2.5. (Existence theorem of the solution) Given dỹ/dx = f̃(x, ỹ) and the initial value (x_0, ỹ_0), with f̃(x, ỹ) a convex normal fuzzy-valued function continuous on the closed region æ : |x − x_0| ≤ a, d(ỹ, ỹ_0) ⊆ [b−, b+] (a > 0, 0 < b− < b+), the equation dỹ/dx = f̃(x, ỹ) has at least one fuzzy-valued solution taking the value ỹ_0 at x = x_0, determined and continuous on a certain interval containing x_0.


Proof: From the existence theorem for solutions in the interval case, ∪_{α∈(0,1]} α dȳα/dx = ∪_{α∈(0,1]} α f̄α(x, ȳα) has at least one solution through ȳ_0, determined and continuous on a certain interval containing x_0. Again,

    ∪_{α∈(0,1]} α dȳα/dx = ∪_{α∈(0,1]} α f̄α(x, ȳα)  ⟺  dỹ/dx = f̃(x, ỹ),

hence the conclusion of the theorem holds.

Theorem 9.2.6. (Uniqueness theorem of the solution) Under the conditions of Theorem 9.2.5, if the fuzzy variable ỹ also satisfies the Lipschitz condition in the non-empty closed region æ, i.e., there exists N > 0 such that any two values ỹ_1, ỹ_2 in æ always satisfy

    dH[f̃(x, ỹ_1), f̃(x, ỹ_2)] ⊆ N dH(ỹ_1, ỹ_2),      (9.2.6)

then a unique determined and continuous fuzzy-valued solution exists for

    dỹ/dx = f̃(x, ỹ),  f̃(x, ỹ)|_{x=x_0, ỹ=ỹ_0} = f̃_0.      (9.2.7)

Proof: We know from the proof of Theorem 9.2.5 that, for arbitrary α ∈ (0, 1], dȳα/dx = f̄α(x, ȳα) is continuous on æ : |x − x_0| ≤ a, |ȳα − ȳ_0α| ⊆ [b−, b+]. Again,

    (9.2.6) ⟺ ∪_{α∈(0,1]} α dH[f̄α(x, ȳ_1α), f̄α(x, ȳ_2α)] ⊆ N ∪_{α∈(0,1]} α dH(ȳ_1α, ȳ_2α),

so for each such α, dH[f̄α(x, ȳ_1α), f̄α(x, ȳ_2α)] ⊆ N dH(ȳ_1α, ȳ_2α). From Theorem 9.2.5, we know that

    dȳα/dx = f̄α(x, ȳα),  f̄α(x, ȳα)|_{x=x_0, ȳ=ȳ_0α} = f̄_0α

has a unique determined and continuous interval solution φ̄α(x, ȳα). Because of the arbitrariness of α in (0, 1], f̃ = ∪_{α∈(0,1]} α φ̄α(x, ȳα) is the unique determined and continuous fuzzy-valued solution to (9.2.7) on æ.

Theorem 9.2.7. Let f̃(x, ỹ) be a convex normal fuzzy-valued function, same-order integrable. Then dỹ/dx = f̃(x, ỹ) has a fuzzy-valued solution ỹ = φ̃(x) + c̃, where c̃ is a fuzzy constant.


Proof: Since

    ∫ (dỹ/dx) dx = ∫ f̃(x, ỹ) dx  ⟺  ∫ d(∪_{α∈(0,1]} α ȳα)/dx dx = ∫ ∪_{α∈(0,1]} α f̄α(x, ȳα) dx,

for all α ∈ (0, 1] there exists ȳα = ∫ f̄α(x, ȳα) dx. When f̄α(x, ȳα) is same-order integrable with same-order primal function φ̄α(x), there exists

    ȳα = φ̄α(x) + c̄α  ⟺  ∪_{α∈(0,1]} α ȳα = ∪_{α∈(0,1]} α (φ̄α(x) + c̄α).

Therefore, ỹ(x) = φ̃(x) + c̃.
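The α-cut decomposition behind Theorems 9.2.5–9.2.7 can be sketched numerically: represent the fuzzy initial value by a few α-cuts and propagate each cut through the endpoint equations. The example below is our own illustration, using the linear equation dỹ/dx = ỹ, whose endpoint solutions are exponentials:

```python
import math

def tri_cut(a, b, c, alpha):
    # α-cut [yα−, yα+] of a triangular fuzzy number (a, b, c).
    return (a + alpha * (b - a), c - alpha * (c - b))

def propagate(cuts, x0, x1):
    # dỹ/dx = ỹ: each α-cut evolves by the exact endpoint solutions
    # yα±(x1) = yα±(x0) · e^{x1 − x0}; a linear equation keeps the cuts nested.
    g = math.exp(x1 - x0)
    return {alpha: (lo * g, hi * g) for alpha, (lo, hi) in cuts.items()}

cuts0 = {a: tri_cut(0.9, 1.0, 1.1, a) for a in (0.0, 0.5, 1.0)}
cuts1 = propagate(cuts0, 0.0, 1.0)
print(cuts1[1.0])   # degenerate top cut: both endpoints ≈ e ≈ 2.71828
print(cuts1[0.0])   # support cut: ≈ (0.9·e, 1.1·e)
```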

9.3 Ordinary Differential Equations with Fuzzy Variables

f̃(x) is the fuzzy-valued function defined in Section 9.2, and its derivative function df̃(x)/dx is a mapping from R to F(R). With the aid of the extension principle, suppose x̃ is a fuzzy point with support S(x̃) ⊆ æ (æ a closed region). A general fuzzy variable can be represented by an interval nest, i.e., we can determine a unique fuzzy variable from

    { (xλ−, xλ+) | λ ∈ [0, 1], (xλ−, xλ+) ≠ Ø, λ_1 < λ_2 ⇒ (xλ_1−, xλ_1+) ⊇ (xλ_2−, xλ_2+) },

written x̃ = ∪_{λ∈[0,1]} λ (xλ−, xλ+). But in the limit, (xλ−, xλ+) is taken to be the closed interval [xλ−, xλ+], and

    x̃ = ∪_{λ∈[0,1]} λ [xλ−, xλ+] = ∪_{λ∈[0,1]} λ x̄λ.

Similarly, a fuzzy function ỹ is defined by ỹ = ∪_{λ∈[0,1]} λ [yλ−, yλ+] = ∪_{λ∈[0,1]} λ ȳλ.

Definition 9.3.1.

    df̃(x̃)/dx = d/dx f̃(∪_{λ∈(0,1]} λ x̄λ) ∈ F(F(R)),
    ∂f̃(x̃, ỹ)/∂x = ∂/∂x f̃(∪_{λ∈(0,1]} λ x̄λ, ∪_{α∈(0,1]} α ȳα) ∈ F(F(R)),
    ∂f̃(x̃, ỹ)/∂y = ∂/∂y f̃(∪_{λ∈(0,1]} λ x̄λ, ∪_{α∈(0,1]} α ȳα) ∈ F(F(R)).


Because each of df̃(x̄λ)/dx, ∂f̃(x̄λ, ȳα)/∂x, ∂f̃(x̄λ, ȳα)/∂y in the definition is an induced derivative or partial derivative of a fuzzy-valued function at ordinary points, Definition 9.3.1 can be further written as follows.

Definition 9.3.2.

    df̃(x̃)/dx = ∪_{α∈(0,1]} α (df̄α/dx)(∪_{λ∈(0,1]} λ x̄λ),
    ∂f̃(x̃, ỹ)/∂x = ∪_{α∈(0,1]} α (∂f̄α/∂x)(∪_{λ∈(0,1]} λ x̄λ, ∪_{α∈(0,1]} α ȳα),
    ∂f̃(x̃, ỹ)/∂y = ∪_{α∈(0,1]} α (∂f̄α/∂y)(∪_{λ∈(0,1]} λ x̄λ, ∪_{α∈(0,1]} α ȳα).

The results of Section 9.2 can then be extended to differential equations of fuzzy-valued functions at a fuzzy point. The main results follow.

Theorem 9.3.1. Any normal type fuzzy-valued differential equation with fuzzy point x̃ can be changed into a 1st-order normal type equation:

    dỹ(x̃)/dx = f̃(x̃, ỹ).      (9.3.1)

Theorem 9.3.2. (Existence theorem of the solution) Given (9.3.1) and the initial fuzzy value (x̃_0, ỹ_0), with f̃(x̃, ỹ) a continuous convex normal fuzzy-valued function on a closed region æ : d(x̃, x̃_0) ⊆ [a−, a+], d(ỹ, ỹ_0) ⊆ [b−, b+] (a+ > a− > 0, b+ > b− > 0), there exists at least one fuzzy-valued solution to (9.3.1); it takes the value ỹ_0 at x̃ = x̃_0 and is determined and continuous on a certain interval containing x̃_0, where d(x̃, ỹ) = dH(x̃, ỹ) is the Hausdorff measurement defined in Definition 9.2.10.

Proof: Change the formulas in the proof of Theorem 9.2.5 into

    dfα−(x̄λ−, ȳα−)/dx = fα−(x̄λ−, ȳα−),  dfα+(x̄λ+, ȳα+)/dx = fα+(x̄λ+, ȳα+);

then fα−, fα+ have, at x_0− and x_0+ respectively, at least one determined continuous solution for all λ ∈ (0, 1], α ∈ [0, 1]. Similarly to the proof of Theorem 9.2.5, the theorem is proved.

Theorem 9.3.3. (Uniqueness theorem of the solution) Under the conditions of Theorem 9.3.2, if the fuzzy variable ỹ also satisfies the Lipschitz condition in the non-empty closed region æ, i.e., there exists N > 0 such that, for any two values ỹ_1 and ỹ_2 in æ, there always holds


    dH[f̃(x̃, ỹ_1), f̃(x̃, ỹ_2)] ⊆ N dH(ỹ_1, ỹ_2),      (9.3.2)

then

    dỹ/dx = f̃(x̃, ỹ),  f̃(x̃, ỹ)|_{x̃=x̃_0, ỹ=ỹ_0} = f̃_0      (9.3.3)

has a unique determined continuous fuzzy-valued solution.

Proof: Because

    dỹ(x̃)/dx = f̃(x̃, ỹ) ⟺ ∪_{α∈(0,1]} α (dȳα/dx)(∪_{λ∈(0,1]} λ x̄λ) = ∪_{α∈(0,1]} α f̄α(∪_{λ∈(0,1]} λ (x̄λ, ȳα)),

and

    (9.3.2) ⟺ ∪_{α∈(0,1]} α dH[f̄α(∪_{λ∈(0,1]} λ (x̄λ, ȳ_1α)), f̄α(∪_{λ∈(0,1]} λ (x̄λ, ȳ_2α))] ⊆ N ∪_{α∈(0,1]} α dH(ȳ_1α, ȳ_2α),

for λ ∈ (0, 1], α ∈ [0, 1] there is

    dȳα(x̄λ)/dx = f̄α(x̄λ, ȳα),  dH[f̄α(x̄λ, ȳ_1α), f̄α(x̄λ, ȳ_2α)] ⊆ N dH(ȳ_1α, ȳ_2α).

Similarly to the proof of Theorem 9.2.6, the theorem is proved.

Theorem 9.3.4. Let f̃(x̃, ỹ) be a continuous convex normal fuzzy-valued function. Then the fuzzy-valued differential equation dỹ/dx = f̃(x̃, ỹ) with fuzzy point has a fuzzy-valued solution ỹ = φ̃(x̃) + c̃, where c̃ is a fuzzy constant.

Proof: Because

    ∫ (dỹ(x̃)/dx) dx = ∫ f̃(x̃, ỹ) dx ⟺ ∫ d/dx (∪_{α∈(0,1]} α ȳα(∪_{λ∈(0,1]} λ x̄λ)) dx = ∫ ∪_{α∈(0,1]} α f̄α(∪_{λ∈(0,1]} λ (x̄λ, ȳα)) dx,

for all λ ∈ (0, 1], α ∈ [0, 1] there is ȳα(x̄λ) = ∫ f̄α(x̄λ, ȳα) dx. Similarly to the proof of Theorem 9.2.7, the theorem is proved.

9.4 Fuzzy Duoma Debt Model

A variety of fuzzy phenomena exist in economic systems in the real world, so it is of great significance to build models with fuzzy numbers to handle them. However, a function of fuzzy numbers need not be differentiable; moreover, the traditional operation rules cannot be used to solve a fuzzy quadratic equality. In this section, the debt model is generalized, and a widely applicable fuzzy debt model is built through the concept of the inverse image defined by fuzzy functions.

9.4.1 Building the Model

Let $g_i$ be real functions on $T\times R$. When the national income $Y(t)$ increases at a constant relative rate, its curve is $\frac{dY(t)}{dt}=\gamma_1 Y(t)$ $(0<\gamma_1<1)$, and the debt $D(t)$ is modeled by
$$g_1:\ \frac{dD(t)}{dt}-\nu_1 Y(t)=0\ (0<\nu_1<1),\qquad g_2:\ \frac{dY(t)}{dt}-\gamma_1 Y(t)=0\ (0<\gamma_1<1),\tag{9.4.1}$$
$$Q_1:\ D(0)=D_0,\qquad Q_2:\ Y(0)=Y_0,$$
which determines the function
$$B(t)=\frac{iD(t)}{Y(t)}.\tag{9.4.2}$$
(9.4.1) and (9.4.2) are called the Duoma (Domar) debt model [Duo44], where $i$ (a constant) is the interest rate and $\gamma_1,\nu_1$ are parameters. Suppose the functions under discussion keep good properties, such as convexity and continuous differentiability, in a fuzzy environment. It is vital to fuzzify the Duoma model (9.4.1)-(9.4.2) so as to solve the operational and differential problems in fuzzy functions [Hei87][Zad75a]. The Duoma debt model is generalized in the following discussion; the definition comes first.

Definition 9.4.1. Let $T$ be a closed space on the real axis $R$ and $C(T)$ the cluster of continuous functions from $T$ to $R$. Again suppose $\tilde G$ is a fuzzy function from $T\times R^2$ to $\tilde W(R)$, and $\tilde D,\tilde Y$ are approximate quantities of $D,Y$, where $\tilde W(R)$ denotes a subclass of fuzzy sets. Then we define
$$\bar G_1:\ T\times C_1(t)\to\tilde W_1(R),\quad \bar G_1=\Big(t,\tilde Y(t),\frac{d\tilde D(t)}{dt}\Big),$$
$$\bar G_2:\ T\times C_2(t)\to\tilde W_2(R),\quad \bar G_2=\Big(t,\tilde Y(t),\frac{d\tilde Y(t)}{dt}\Big),$$
$$Q_1:\ C_1(T)\to R,\quad Q_1(\tilde D)=\tilde D_0,\qquad Q_2:\ C_2(T)\to R,\quad Q_2(\tilde Y)=\tilde Y_0,$$
where $\tilde D,\tilde Y$ $(\in C(T),\,t\in T)$ and $\tilde D_0,\tilde Y_0$ are fuzzy numbers.
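The crisp model (9.4.1)-(9.4.2) solves in closed form, which makes the long-run behavior of $B(t)$ easy to check numerically. The sketch below is illustrative only; the parameter values ($i$, $\gamma_1$, $\nu_1$, $D_0$, $Y_0$) are hypothetical.

```python
import math

def domar_debt_ratio(t, i=0.05, gamma1=0.04, nu1=0.03, D0=1.0, Y0=10.0):
    """Crisp Duoma (Domar) model (9.4.1)-(9.4.2):
    dY/dt = gamma1*Y  =>  Y(t) = Y0*exp(gamma1*t);
    dD/dt = nu1*Y     =>  D(t) = D0 + (nu1/gamma1)*Y0*(exp(gamma1*t) - 1);
    debt ratio B(t) = i*D(t)/Y(t)."""
    Y = Y0 * math.exp(gamma1 * t)
    D = D0 + (nu1 / gamma1) * Y0 * (math.exp(gamma1 * t) - 1.0)
    return i * D / Y
```

As $t$ grows, $B(t)$ tends to $i\nu_1/\gamma_1$, the crisp analogue of the fuzzy limit $\tilde B_0$ derived in Section 9.6.1.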


9 Interval and Fuzzy Diﬀerential Equations

Suppose the fuzzy derivative $\frac{d\tilde D(t)}{dt}$ is the approximate quantity of the total net loan rate and $\tilde Y(t)$ that of the national income flow. Then the fuzzy differential equations
$$\frac{d\tilde D(t)}{dt}=\gamma\tilde Y(t)+\tilde U_1\ (0<\gamma<1),\qquad \frac{d\tilde Y(t)}{dt}=\nu\tilde Y(t)+\tilde U_2\ (0<\nu<1)$$
reflect the proportions between the total loan rate and the national income, and between the rate of change of the national income and the national income, where $\gamma,\nu$ are parameters and $\tilde U_1,\tilde U_2$ are fuzzy numbers.

Definition 9.4.2. From the fixed-solution problem
$$\bar G_1:\ \frac{d\tilde D(t)}{dt}-\gamma\tilde Y(t)=\tilde U_1\ (0<\gamma<1),\qquad \bar G_2:\ \frac{d\tilde Y(t)}{dt}-\nu\tilde Y(t)=\tilde U_2\ (0<\nu<1),$$
$$Q_1:\ \tilde D(0)=\tilde D_0,\qquad Q_2:\ \tilde Y(0)=\tilde Y_0,\tag{9.4.3}$$
the function determined by a fuzzy solution
$$\tilde B(t)\equiv\frac{i\tilde D(t)}{\tilde Y(t)}\tag{9.4.4}$$
is called a fuzzy Duoma debt function, while the model determined by (9.4.3) and (9.4.4) is called the fuzzy Duoma debt model. The economy needs to develop greatly to keep debt within an allowable level, reflected in the mathematical model
$$\tilde B(t)=\frac{i\,\widehat{\min}_j\,\tilde D_j(t)}{\widehat{\max}_j\,\tilde Y_j(t)},\tag{9.4.5}$$
where $\tilde D_j(t),\tilde Y_j(t)$ are fuzzy solution classes of (9.4.3).

9.4.2 Solution and Its Properties in the Fuzzy Duoma Debt Model

Definition 9.4.3. (9.4.3) is called a system of non-homogeneous fuzzy differential equations with initial conditions $\tilde D_0,\tilde Y_0$, while the fuzzy subset of the space $C(T)$
$$\tilde B=(R(\bar G,\tilde U))_{C(T)}\wedge R(\tilde Q,\tilde I)$$


is a fuzzy solution to the problem, where $\tilde I$ stands for an initial value. Here $R(\bar G,\tilde U)_{C(T)}$ means mapping the fuzzy subset $R(\bar G,\tilde U)\in F(T\times C(T))$ into a strong mapping of $C(T)$ [Hei83][Hei87].

Next we discuss the homogeneous system. Obviously
$$\tilde B=(R(\bar G,0))_{C(T)}\wedge R(\tilde Q,\tilde I)\tag{9.4.6}$$
is a fuzzy solution to the homogeneous problem corresponding to (9.4.3).

Since $\tilde D,\tilde Y$ are continuous on $T$, their membership degrees can be calculated respectively as
$$\tilde B(\tilde D)=(R(\bar G_1,0))_{C(T)}(\tilde D)\wedge R(Q_1,\tilde I_1)=\inf_{t\in T}\Big\{R(\bar G_1,0)\Big(t,\tilde Y,\frac{d\tilde D}{dt}\Big)\wedge\tilde D(0)\Big\}=\inf_{t\in T}\Big\{\tilde G_1\Big(t,\tilde Y(t),\frac{d\tilde D(t)}{dt}\Big)(0)\wedge\tilde D(0)\Big\}.$$
Similarly,
$$\tilde B(\tilde Y)=\inf_{t\in T}\Big\{\tilde G_2\Big(t,\tilde Y(t),\frac{d\tilde Y(t)}{dt}\Big)(0)\wedge\tilde Y(0)\Big\}.$$

Definition 9.4.4. We call the fuzzy functions
$$X_{\tilde D_0}(t)(y)=\sup_{\{\tilde D\in E(T):\ \tilde D(t)=y\}}\tilde B(\tilde D),\qquad X_{\tilde Y_0}(t)(y)=\sup_{\{\tilde Y\in E(T):\ \tilde Y(t)=y\}}\tilde B(\tilde Y)$$
trajectories of the problems
$$\{(\tilde G_1,0),\ (\tilde Q_1,\tilde I_1)\}\tag{9.4.7}$$
and
$$\{(\tilde G_2,0),\ (\tilde Q_2,\tilde I_2)\},\tag{9.4.8}$$
respectively, where $E(T)$ is a subclass of $C(T)$ as well as the domain of the fuzzy trajectory $X$.

Theorem 9.4.1. Let $g$ be a function on $T\times R^2$ such that
$$g(t,Y,y)\subset\tilde G(t,Y,y)\tag{9.4.9}$$
for each $t\in T$ and $Y,y,D_0,Y_0\in R$. If $D_0$ and $Y_0$ are solutions to (9.4.1), then $D_0(t)\subset X(t)$, $Y_0(t)\subset X(t)$, where $X(t)=X_{\tilde D_0}(t)\cup X_{\tilde Y_0}(t)$ is the union of the fuzzy trajectories of (9.4.7) and (9.4.8).


Proof: By using (9.4.6) and (9.4.9) we obtain
$$\tilde B(Y_0)=\inf_{t\in T}\Big\{\Big(\frac{d\tilde Y(t)}{dt}-\nu\tilde Y(t)\Big)(0)\wedge\tilde Y(0)\Big\}=\inf_{t\in T}\Big\{\Big(\frac{d\tilde Y(t)}{dt}-\nu\tilde Y(t)\Big)\Big(\frac{dY(t)}{dt}-\nu_1 Y(t)\Big)\wedge\tilde Y_0\Big\}=1\quad(\text{from }(9.4.1)),$$
i.e., the function $Y_0$ satisfies (9.4.7) with membership degree equal to 1:
$$\tilde Y(t)(Y_0(t))=\sup_{\{Y\in E(T):\ Y(t)=Y_0(t)\}}\tilde B(Y)\ge\tilde B(Y_0)=1;$$
it follows that $Y_0\subset\tilde Y(t)=X_{\tilde Y_0}(t)$. Similarly we can prove $D_0\subset\tilde D(t)=X_{\tilde D_0}(t)$. Hence $Y_0\subset X(t)$, $D_0\subset X(t)$.

Let $D_0=p_1(t,\gamma_1,D_0)$, $Y_0=p_2(t,\nu_1,Y_0)$ be a singular solution to (9.4.1), and let $p_j$ $(j=1,2)$ map $T\times R\times R^n$ to $R$. If $\gamma_1=(\gamma_{11},\dots,\gamma_{1n})$, $\nu_1=(\nu_{11},\dots,\nu_{1n})\in T$ and $D_0,Y_0\in R$ are given small variations respectively, i.e.,
$$\gamma_1\to\gamma\in\tilde W(T),\quad \nu_1\to\nu\in\tilde W(T),\quad D_0\to\tilde D\in\tilde W(R),\quad Y_0\to\tilde Y\in\tilde W(R),$$
where $\gamma,\nu,\tilde D,\tilde Y$ are approximate quantities of $\gamma_1,\nu_1,D_0,Y_0$, then the fuzzy differential equations
$$\{(\bar G_1,0),\ (p_1,\tilde I_1)\}\tag{9.4.10}$$
and
$$\{(\bar G_2,0),\ (p_2,\tilde I_2)\}\tag{9.4.11}$$
are obtained. Their approximate solutions and fuzzy trajectories are represented by $p_1(t,\gamma,\tilde D_0)$, $p_2(t,\nu,\tilde Y_0)$ and $X(t)$, respectively.

Theorem 9.4.2. Let $D_0,Y_0$ be a solution to Model (9.4.1). Then
$$\tilde D=p_1(t,\gamma,\tilde D)\subset X_{\tilde D}(t)\subset X(t),\qquad \tilde Y=p_2(t,\nu,\tilde Y)\subset X_{\tilde Y}(t)\subset X(t),$$
where $X(t)=X_{\tilde D}(t)\cup X_{\tilde Y}(t)$.


Proof: Because $D_0,Y_0$ is a solution to (9.4.1), it must be a solution to Equations (9.4.10) and (9.4.11) by Theorem 9.4.1. Then the membership degree of the function $Y_0$ at the fuzzy solution $\tilde B$ satisfies
$$\tilde B(Y_0)=\inf_{t\in T}\Big\{\Big(\frac{dY_0(t)}{dt}-\nu_1 Y_0(t)\Big)(0)\wedge\tilde I_2(0)\Big\}\le\sup_{\{\nu_1\in R:\ \nu_1 Y_0(t)=\frac{dY_0(t)}{dt}\}}C(\nu_1)\wedge Y_0.\tag{9.4.12}$$
Similarly,
$$\tilde B(D_0)\le\sup_{\{\gamma_1\in R:\ \gamma_1 D_0(t)=\frac{dD_0(t)}{dt}\}}C(\gamma_1)\wedge D_0.\tag{9.4.13}$$
If
$$\sup_{\{\nu_1\in R:\ \nu_1 Y_0(t)=\frac{dY_0(t)}{dt}\}}C(\nu_1)=C(\nu_1)\quad\text{and}\quad \sup_{\{\gamma_1\in R:\ \gamma_1 D_0(t)=\frac{dD_0(t)}{dt}\}}C(\gamma_1)=C(\gamma_1),$$
then the formulas (9.4.12) and (9.4.13) above become equalities. Therefore, for $y\in R$,
$$X_{\tilde Y_0}(t)(y)=\sup_{\{Y\in E(T):\ Y(t)=y\}}\tilde B(Y)=\sup_{\{\nu_1,Y_0:\ p_2(t,\nu_1,Y_0)=y\}}\tilde B(p)=\sup_{\{\nu_1,Y_0:\ p_2(t,\nu_1,Y_0)=y\}}\{C(\nu_1)\wedge Y_0\}=p_2(t,\nu,\tilde Y).$$
Here $E(T)=\{Y\in C(T):\ Y_0(t)=p_2(t,\nu_1,Y_0),\ \nu_1\in R^n,\ Y_0\in R\}$ is the solution subclass of Equations (9.4.10) and (9.4.11). So $p_2(t,\nu,\tilde Y)\subset X_{\tilde Y}(t)(y)$. Similarly we can prove
$$X_{\tilde D}(t)(y)=\sup_{\{D\in E(T):\ D(t)=y\}}\tilde B(D)\supset p_1(t,\gamma,\tilde D)(y),$$


i.e., $p_1(t,\gamma,\tilde D)\subset X_{\tilde D}(t)(y)$; but $X(t)=X_{\tilde D}(t)\cup X_{\tilde Y}(t)$, therefore
$$p_1(t,\gamma,\tilde D)\subset X(t),\qquad p_2(t,\nu,\tilde Y)\subset X(t).$$

Corollary 9.4.1. When $\tilde B(D_0)=C(\gamma_1)\wedge D_0$ and $\tilde B(Y_0)=C(\nu_1)\wedge Y_0$, then
$$X_{\tilde D}(t)=p_1(t,\gamma,\tilde D),\qquad X_{\tilde Y}(t)=p_2(t,\nu,\tilde Y).$$
The above fuzzy trajectories $X_{\tilde D}(t),X_{\tilde Y}(t)$ are equal to the direct image of the fuzzy subset $C\times D$ under the usual solutions $p_1$ and $p_2$.

Consider the particular non-homogeneous linear differential equations
$$\begin{cases}\dfrac{dD(t)}{dt}-\gamma_1 D(t)=U_1,\\ D(0)=D_0\end{cases}\tag{9.4.14}$$
and
$$\begin{cases}\dfrac{dY(t)}{dt}-\nu_1 Y(t)=U_2,\\ Y(0)=Y_0.\end{cases}\tag{9.4.15}$$
Suppose $U_1\to\tilde U_1$, $D_0\to\tilde D_0$ and $U_2\to\tilde U_2$, $Y_0\to\tilde Y_0$; then the problem above is changed into the fuzzy non-homogeneous linear differential equations
$$\begin{cases}\dfrac{d\tilde D(t)}{dt}-\gamma\tilde D(t)=\tilde U_1,\\ \tilde D(0)=\tilde D_0\end{cases}\tag{9.4.16}$$
and
$$\begin{cases}\dfrac{d\tilde Y(t)}{dt}-\nu\tilde Y(t)=\tilde U_2,\\ \tilde Y(0)=\tilde Y_0.\end{cases}\tag{9.4.17}$$
Let
$$X_{\tilde D}(t)=p_1(t,\gamma,\tilde U_1,\tilde D_0)=e^{\gamma t}\tilde D_0-\frac{1-e^{\gamma t}}{\gamma}\tilde U_1,\qquad X_{\tilde Y}(t)=p_2(t,\nu,\tilde U_2,\tilde Y_0)=e^{\nu t}\tilde Y_0-\frac{1-e^{\nu t}}{\nu}\tilde U_2\tag{9.4.18}$$
be fuzzy trajectories of Equations (9.4.16) and (9.4.17). They are the direct images of the fuzzy subsets $\tilde U_1\times\tilde D_0$, $\tilde U_2\times\tilde Y_0$ under the solutions to (9.4.14) and (9.4.15).


Again, supposing
$$W_{\tilde D}(t)=\Big(\tilde D_0+\frac{\tilde U_1}{\gamma}\Big)e^{\gamma t}-\frac{\tilde U_1}{\gamma},\qquad W_{\tilde Y}(t)=\Big(\tilde Y_0+\frac{\tilde U_2}{\nu}\Big)e^{\nu t}-\frac{\tilde U_2}{\nu}\tag{9.4.19}$$
to be fuzzy solution sets to (9.4.10) and (9.4.11), it is easy to prove the following lemma.

Lemma 9.4.1. $X_{\tilde D}(t)\subset W_{\tilde D}(t)$, $X_{\tilde Y}(t)\subset W_{\tilde Y}(t)$.
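The crisp kernel behind the trajectory and solution formulas (9.4.18)-(9.4.19) is the classical solution of a linear first-order ODE; a quick finite-difference check of it (with hypothetical parameter values) is:

```python
import math

def nonhom_solution(t, Y0=10.0, nu=0.04, U2=1.0):
    """Crisp counterpart of W_Y in (9.4.19): the solution of
    dY/dt - nu*Y = U2, Y(0) = Y0, namely Y(t) = (Y0 + U2/nu)*e^{nu*t} - U2/nu."""
    return (Y0 + U2 / nu) * math.exp(nu * t) - U2 / nu

def ode_residual(t, h=1e-5, nu=0.04, U2=1.0):
    """Central-difference check that dY/dt - nu*Y - U2 is (numerically) zero."""
    dY = (nonhom_solution(t + h, nu=nu, U2=U2) - nonhom_solution(t - h, nu=nu, U2=U2)) / (2 * h)
    return dY - nu * nonhom_solution(t, nu=nu, U2=U2) - U2
```

The fuzzy formulas apply this same expression to fuzzy quantities $\tilde Y_0,\tilde U_2$ in place of the crisp ones.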

9.5 Model for Fuzzy Solow Growth in Economics

9.5.1 Introduction

Solow, an American professor and computational economist, established his deterministic economic growth model as early as 1956 [Sol56]; for his great contribution to the development of economic growth theory, he won the Nobel Prize in Economics in 1987. However, because human consciousness participates in it, the economic system of the real world contains a large amount of indetermination, which is not only random but also fuzzy. The author has studied randomness with a fuzzy Solow economic growth model and obtained very good results; below only the fuzzy situation is discussed. Progress, however, has been surprisingly slow, because the problems run into non-differentiability, non-integrability, and the question of whether fuzzy function equations are closed under arithmetic. To crack this hard nut, the author discusses the building, determination and solution of the fuzzy Solow economic growth model, as well as the solution of the model with fuzzy coefficients, by using fuzzy mapping theory.

9.5.2 Building the Fuzzy Solow Economic Growth Model

Suppose the following:
1) The production function is $Y=f(K,L)$ $(KL>0)$, where $Y$ is the output (not including depreciation), $K$ the capital, and $L$ the labor force. Under complete competition:
i) Returns to scale are constant, that is, $Y=Lf\big(\frac{K}{L},1\big)=L\varphi(K^*)$, where $K^*=\frac{K}{L}$.
ii) Marginal productivity gradually decreases: $\frac{\partial f}{\partial K}>0$, $\frac{\partial f}{\partial L}>0$, $\frac{\partial^2 f}{\partial K^2}<0$, $\frac{\partial^2 f}{\partial L^2}<0$.
iii) The output-capital ratio $q=\frac{Y}{K}$ is variable.


2) Equilibrium condition: the planned saving rate $S$ is equal to the planned investment rate $\frac{dK^*(t)}{dt}$.
3) The labor force grows exponentially: $L=L_0e^{\lambda t}$ $(\lambda>0)$.
Hence, with the capital-labor ratio $K^*(t)\equiv\frac{K(t)}{L(t)}$, we have
$$\frac{dK^*(t)}{dt}=S\varphi(K^*(t))-\lambda K^*(t),\tag{9.5.1}$$
$$K^*(t_0)=K_0^*,\tag{9.5.2}$$
called the Solow economic growth model, where (9.5.1) is the main equation and (9.5.2) the initial condition, with $K_0^*=\frac{K_0}{L_0}$ (a constant).

We extend the classic Solow model to the fuzzy situation in this section, first introducing the concepts of fuzzy function and fuzzy mapping.

Definition 9.5.1. Let $V$ be an arbitrary linear space. A fuzzy set $\tilde A$ on $V$ is a function from $V$ to the range $[0,1]$; the value $\tilde A(x)$ is called the membership degree of $x$ with respect to $\tilde A$.

Definition 9.5.2. Let $\tau$ be a closed interval of the real line $R$ and $C(\tau)$ the cluster of all continuous functions from $\tau$ to $R$. Again let $\tilde G$ be a fuzzy function from $\tau\times R^2$ to the space $\tilde W(R)$, and $\tilde K_0^*$ an approximate value of $K_0^*$. Then the fuzzy functions are defined as
$$\tilde G:\ \tau\times C(\tau)\to\tilde W(R),\quad \tilde G(t,K^*)=\tilde G\Big(t,K^*,\frac{dK^*}{dt}\Big);\qquad \tilde g:\ C(\tau)\to R,\quad \tilde g(K^*)=\tilde K^*(t_0)\in\tilde W(R),$$
where $K^*\in C(\tau)$, $t_0,t\in\tau$.
Imitating this, we can give the definition of the fuzzy Solow model.

Definition 9.5.3. Suppose
(1) the non-distinct production function is $\tilde Y=f(\tilde K,\tilde L)$ $\big(\frac{\tilde K(t)}{\tilde L(t)}>0\big)$;
(2) the fuzzy equilibrium condition $\tilde S=\frac{d\tilde K^*(t)}{dt}$ is satisfied;
(3) the fuzzy labor force increases exponentially, $\tilde L=\tilde L_0e^{\lambda t}$ $(\lambda>0)$.
Under complete competition there exists the following:
a. Invariance of returns to scale, that is, $\tilde Y=\tilde Lf\big(\frac{\tilde K}{\tilde L},1\big)=\tilde L\tilde\varphi(\tilde K^*)$.
b. Marginal production gradually decreases.
c. The fuzzy output-capital ratio $\tilde q=\frac{\tilde Y}{\tilde K}$ is changeable.
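For the concrete Cobb-Douglas case $\varphi(K^*)=\gamma K^{*0.7}$ (used later in Section 9.6.2), (9.5.1) is a Bernoulli equation with an explicit solution; the sketch below checks it against a forward-Euler integration. All parameter values are hypothetical.

```python
import math

def solow_closed_form(t, K0=1.0, S=0.2, gamma=1.0, lam=0.05):
    """Bernoulli solution of (9.5.1) with phi(K) = gamma*K**0.7:
    substituting u = K**0.3 linearizes the equation, giving
    K(t) = [K0**0.3*e^{-0.3*lam*t} + (gamma*S/lam)*(1 - e^{-0.3*lam*t})]**(1/0.3)."""
    u = K0 ** 0.3 * math.exp(-0.3 * lam * t) + (gamma * S / lam) * (1 - math.exp(-0.3 * lam * t))
    return u ** (1 / 0.3)

def solow_euler(t, K0=1.0, S=0.2, gamma=1.0, lam=0.05, n=200_000):
    """Forward-Euler integration of dK/dt = S*gamma*K**0.7 - lam*K."""
    h = t / n
    K = K0
    for _ in range(n):
        K += h * (S * gamma * K ** 0.7 - lam * K)
    return K
```

The steady state is $K^*=(\gamma S/\lambda)^{1/0.3}$, the crisp analogue of the fuzzy limit obtained in Section 9.6.2.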


The properties when the capital-labor ratio $\tilde K^*(t)\equiv\frac{\tilde K(t)}{\tilde L(t)}$ reaches equilibrium are fuzzy, so that we have the system
$$\{(\tilde G,0),\ (\tilde g,\tilde K^*(t_0))\},\tag{9.5.3}$$
which is called a fuzzy Solow economic growth model; the special form of (9.5.3) is
$$\begin{cases}\dfrac{d\tilde K^*(t)}{dt}=\tilde S\tilde\varphi(\tilde K^*(t))-\lambda\tilde K^*(t),\\ \tilde K^*(t_0)=\tilde K_0^*,\end{cases}\tag{9.5.4}$$
where $\tilde S,\tilde K_0^*,\tilde K^*(t),\frac{d\tilde K^*(t)}{dt}\in\tilde W(R)$.

9.5.3 Solution and Its Properties of the Model

Definition 9.5.4. Supposing $(X,d)$ to be a complete metric space, we define
$$\delta(\tilde X,\tilde Y)=\sup_{x\in\tilde X,\,y\in\tilde Y}d(x,y),\qquad \forall\tilde X,\tilde Y\in CB(X),$$
where $CB(X)$ represents the cluster of all nonempty closed bounded sets in $X$ [Hei83][Hei87].

Definition 9.5.5. $E(\tau)$ is a sub-cluster of $C(\tau)$, and it is the domain of the fuzzy trajectory $\tilde K$ in equation (9.5.3).

Definition 9.5.6. We call the fuzzy function
$$\tilde K(t)(y)=\sup_{\{K^*\in E(\tau):\ K^*(t)=y\}}\tilde A(K^*)\tag{9.5.5}$$
a fuzzy solution set of system (9.5.3), and $E(\tau)$ the domain of the fuzzy solution set $\tilde K$.

Definition 9.5.7. We call
$$\tilde A=R(\tilde G,0)_{C(\tau)}\wedge R(\tilde g,\tilde K_0^*)$$
a solution to the fuzzy Solow economic growth model (9.5.3). Here $R(\tilde G,0)_{C(\tau)}$ represents a strong mapping of the fuzzy subset $R(\tilde G,0)\in F(\tau\times C(\tau))$ into $C(\tau)$, and the membership function of the solution is


$$\tilde A(\tilde K^*)=R(\tilde G,0)_{C(\tau)}(\tilde K^*)\wedge R(\tilde g,\tilde K_0^*),\tag{9.5.6}$$
i.e.,
$$\tilde A(\tilde K^*)=R\Big(\frac{d\tilde K^*(t)}{dt}-\tilde S\tilde\varphi(\tilde K^*(t))+\lambda\tilde K^*(t)=0\Big)_{C(\tau)}\wedge\inf_{t\in\tau}\big\{R(\tilde g(\tilde K_0^*))(t,\tilde K^*(t))\wedge\tilde K^*(0)\big\}=\inf_{t\in\tau}\Big\{\Big(\frac{d\tilde K^*(t)}{dt}-\tilde S\tilde\varphi(\tilde K^*(t))+\lambda\tilde K^*(t)\Big)(0)\wedge\tilde K_0^*(t_0)\Big\},$$
where $t_0,t\in\tau$.
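Definition 9.5.4's $\delta$ is the sup-sup distance used in Theorem 9.5.1 below; specialized to finite point sets on the real line with $d(x,y)=|x-y|$ it reduces to a two-line computation (a toy sketch, not the book's general metric-space setting):

```python
def delta(X, Y):
    """delta(X, Y) = sup over x in X, y in Y of d(x, y) (Definition 9.5.4),
    specialized to finite subsets of R with d(x, y) = |x - y|."""
    return max(abs(x - y) for x in X for y in Y)
```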

Theorem 9.5.1. Suppose $\tilde G$ to be the first kind of fuzzy mapping of $[0,\tau]\times E(\tau)\subset C(\tau)\to\tilde W(R)$, with $\exists q\in(0,1)$ such that for all $t\in[0,\tau]$, $\tilde K^*(t)\in C(\tau)$, there exists
$$\delta[f(t,\tilde K_1^*(t)),f(t,\tilde K_2^*(t))]\le q\max\{d[\tilde K_1^*(t),\tilde K_2^*(t)],\ \delta[\tilde K_1^*(t),f(t,\tilde K_1^*(t))],\ \delta[\tilde K_2^*(t),f(t,\tilde K_2^*(t))],\ \delta[\tilde K_1^*(t),f(t,\tilde K_2^*(t))],\ \delta[\tilde K_2^*(t),f(t,\tilde K_1^*(t))]\}.$$
Then a solution $\tilde K_0^*$ of (9.5.4) exists in $E(\tau)\subset C(\tau)$.

Proof: For an arbitrarily given $\beta\in(0,1)$, since $q\in(0,1)$, we have $q^\beta,q^{1-\beta}\in(0,1)$. For arbitrary $\tilde K^*(t)\in E(\tau)$ define
$$T(\tilde K^*(t))=\tilde K_0^*+\int_0^t f(\tau,\tilde K^*(\tau))\,d\tau,$$
where $f$ is a multi-valued mapping satisfying
$$d(\tilde K^*,T\tilde K^*)\le q^\beta\,\delta\Big(\tilde K^*,\ \tilde K_0^*+\int_0^t f(\tau,\tilde K^*)\,d\tau\Big);$$
then $T$ denotes a single-valued mapping $E(\tau)\to E(\tau)$. From the assumption and the fundamental theorem of calculus, we have
$$d(T\tilde K_1^*(t),T\tilde K_2^*(t))\le\delta\Big(\tilde K_0^*+\int_0^t f(\tau,\tilde K_1^*(\tau))d\tau,\ \tilde K_0^*+\int_0^t f(\tau,\tilde K_2^*(\tau))d\tau\Big)$$
$$\le q\max\Big\{d(\tilde K_1^*,\tilde K_2^*),\ \delta\Big(\tilde K_1^*(t),\tilde K_0^*+\int_0^t f(\tau,\tilde K_1^*(\tau))d\tau\Big),\ \delta\Big(\tilde K_2^*(t),\tilde K_0^*+\int_0^t f(\tau,\tilde K_2^*(\tau))d\tau\Big),\ \delta\Big(\tilde K_1^*(t),\tilde K_0^*+\int_0^t f(\tau,\tilde K_2^*(\tau))d\tau\Big),\ \delta\Big(\tilde K_2^*(t),\tilde K_0^*+\int_0^t f(\tau,\tilde K_1^*(\tau))d\tau\Big)\Big\}$$
$$=q\cdot q^{-\beta}\max\Big\{q^\beta d(\tilde K_1^*,\tilde K_2^*),\ q^\beta\delta\Big(\tilde K_1^*(t),\tilde K_0^*+\int_0^t f(\tau,\tilde K_1^*(\tau))d\tau\Big),\ \dots\Big\}$$
$$\le q^{1-\beta}\max\{d(\tilde K_1^*,\tilde K_2^*),\ d(\tilde K_1^*,T\tilde K_1^*),\ d(\tilde K_2^*,T\tilde K_2^*),\ d(\tilde K_1^*,T\tilde K_2^*),\ d(\tilde K_2^*,T\tilde K_1^*)\}$$
for all $\tilde K_1^*,\tilde K_2^*\in E(\tau)$.

Therefore $T$ is a Ciric-type contraction mapping, so $T$ has a unique fixed point $\bar K^*$ in $E(\tau)\subset C(\tau)$, and by iteration the sequence $\{\tilde K_n^*=T^nK_0^*\}_{n=1}^\infty$ converges to $\bar K^*$ for any $K_0^*\in E(\tau)$. Because $\bar K^*=T\bar K^*$ with $T\bar K^*\in C(\tau)$, we have $\bar K^*\in C(\tau)$; hence (9.5.4) has a fuzzy solution $\tilde K^*$ in $E(\tau)$.

Theorem 9.5.2. Suppose
$$g\Big(t,K^*(t),\frac{dK^*(t)}{dt}\Big)=\frac{dK^*(t)}{dt}-S\varphi(K^*(t))+\lambda_1 K^*(t)$$
is a function on $\tau\times R^2$, with
$$g\Big(t,K^*(t),\frac{dK^*(t)}{dt}\Big)\subset\tilde G\Big(t,K^*(t),\frac{dK^*(t)}{dt}\Big).$$
If $K_0^*(t)$ is a solution to the ordinary system
$$\begin{cases}g\Big(t,K^*(t),\dfrac{dK^*(t)}{dt}\Big)=0,\\ K^*(t_0)=K_0^*,\end{cases}\tag{9.5.7}$$
then $K^*(t)\subset\tilde K(t)$ holds for every $t\in\tau$, $\frac{dK^*(t)}{dt}\in E(\tau)$.

Proof: By using (9.5.5) and (9.5.6), we obtain
$$\tilde A(K_0^*(t))=\inf_{t\in\tau}\Big\{\Big(\frac{dK^*(t)}{dt}-S\varphi(K^*(t))+\lambda K^*(t)\Big)(0)\wedge K^*(t_0)\Big\}=\inf_{t\in\tau}\Big\{\Big(\frac{dK^*(t)}{dt}-S\varphi(K^*(t))+\lambda K^*(t)\Big)\Big(\frac{dK^*(t)}{dt}-S\varphi(K^*(t))+\lambda_1 K^*(t)\Big)\wedge K_0^*\Big\}=1,$$


i.e., the function $K_0^*$ satisfies (9.5.3) with grade of membership equal to 1:
$$\tilde K(t)(K_0^*(t))=\sup_{\{K^*\in E(\tau):\ K^*(t)=K_0^*(t)\}}\tilde A(K^*)\ge\tilde A(K^*(t_0))=1;$$
it follows that $\tilde K(t)\supset K_0^*(t)$.
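In the crisp, single-valued case, Theorem 9.5.1's operator $T(K)(t)=K_0^*+\int_0^t f(s,K(s))\,ds$ is the classical Picard operator, and the iteration $\{K_n^*=T^nK_0^*\}$ can be reproduced numerically. A minimal sketch (trapezoidal quadrature, hypothetical test equation $f(t,K)=-K$):

```python
def picard_iterate(f, K0, t_grid, n_iter=40):
    """Successive approximations K_{n+1} = T(K_n), with
    T(K)(t) = K0 + integral from 0 to t of f(s, K(s)) ds
    (the operator of Theorem 9.5.1, crisp single-valued case);
    the integral is approximated by the trapezoidal rule on t_grid."""
    K = [K0 for _ in t_grid]
    for _ in range(n_iter):
        newK = [K0]
        acc = 0.0
        for i in range(1, len(t_grid)):
            h = t_grid[i] - t_grid[i - 1]
            acc += 0.5 * h * (f(t_grid[i - 1], K[i - 1]) + f(t_grid[i], K[i]))
            newK.append(K0 + acc)
        K = newK
    return K
```

For $f(t,K)=-K$, $K_0=1$ on $[0,1]$, the iteration converges to (a discretization of) $e^{-t}$.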

Theorem 9.5.3. If (9.5.7) has a singular solution $K^*=p(t,S,K_0^*)\subset K(t)$, $S,K_0^*\in R$, and it is changed into $\tilde K^*=p(t,\tilde S,\tilde K_0^*)$ under the mapping $(\cdot)_{C(\tau)}$, where $\tilde S,\tilde K_0^*\in\tilde W(R)$ are approximate quantities of $S,K_0^*\in R$, then
$$\tilde K^*(t)\subset\tilde K(t).$$
If $\tilde A(K_0^*)=\tilde S(S_0)\wedge\tilde K^*(K_0^*)$, then the equality holds.

Proof: Let $y\in R$. Then
$$\tilde K(t)(y)=\sup_{\{K^*\in E(\tau):\ K^*(t)=y\}}\tilde A(K^*)=\sup_{\{S,a:\ p(t,S,K_0)=K^*\}}\tilde A(p)\ge\sup_{\{S,a:\ p(t,S,K_0)=K^*\}}\{\tilde S(S)\wedge\tilde K^*(t_0)\}=p(t,\tilde S,\tilde K)(y).$$
Thus an approximate solution $\tilde K^*(t)$ to the fuzzy system (9.5.3) is more accurate than the fuzzy trajectory $\tilde K(t)$, containing the accurate solution $K^*=p(t,S,K_0)$ to this system.

9.5.4 Conclusion

A fuzzy economic model has been built by using fuzzy mapping theory and by generalizing the deterministic economic model to the fuzzy case. The fuzzy economic model adopted here contains more information than a crisp one, which accords better with practice. In the next section, we prove the feasibility of the generalization by numerical examples.

9.6 Application of Fuzzy Economic Model

9.6.1 Application of Fuzzy Duoma Debt Model

Deﬁnition 9.6.1. Function N (a, m, b) is called a triangular function [Cao90][Zim91] with its membership function satisfying

$$\mu_N(t)=\begin{cases}\Big(\dfrac{t-a}{m-a}\Big)^2, & a\le t<m,\\[4pt] 1, & t=m,\\[2pt] \Big(\dfrac{t-b}{b-m}\Big)^2, & m<t\le b,\end{cases}$$

where $m$ is the mean value of $N$, $a$ is the left and $b$ the right spread, with $a\le m\le b$ and $t,a,m,b\in R$.

Let us first define the following operation laws on triangular functions $N$ in order to solve (9.4.3) and (9.4.5).

Definition 9.6.2. We define
1) $N(a,m,b)+N(c,n,d)=N(a+c,m+n,b+d)$;
2) $kN(a,m,b)=\begin{cases}N(ka,km,kb), & k\ge 0,\\ N(kb,km,ka), & k<0;\end{cases}$
3) $N(a,m,b)-N(c,n,d)=N(a,m,b)+N(-d,-n,-c)=N(a-d,m-n,b-c)$;
4) Suppose
$$W_{\tilde D_0}=N(W_{D_0}^{(1)},W_{D_0}^{(2)},W_{D_0}^{(3)}),\quad W_{\tilde Y_0}(t)=N(W_{Y_0}^{(1)},W_{Y_0}^{(2)},W_{Y_0}^{(3)}),$$
$$p_1=N(p_1^{(1)},p_1^{(2)},p_1^{(3)}),\quad p_2=N(p_2^{(1)},p_2^{(2)},p_2^{(3)}),$$
where $W_{D_0}^{(2)},W_{Y_0}^{(2)},p_1^{(2)},p_2^{(2)}$ are mean values, $W_{D_0}^{(1)},W_{Y_0}^{(1)},p_1^{(1)},p_2^{(1)}$ stand for left spreads and $W_{D_0}^{(3)},W_{Y_0}^{(3)},p_1^{(3)},p_2^{(3)}$ for right spreads. Then
$$\widehat{\min}(W_{\tilde D_0},p_1)=N\big(W_{D_0}^{(1)}\wedge p_1^{(1)},\ W_{D_0}^{(2)}\wedge p_1^{(2)},\ W_{D_0}^{(3)}\wedge p_1^{(3)}\big)=p_1,$$
$$\widehat{\max}(W_{\tilde Y_0},p_2)=N\big(W_{Y_0}^{(1)}\vee p_2^{(1)},\ W_{Y_0}^{(2)}\vee p_2^{(2)},\ W_{Y_0}^{(3)}\vee p_2^{(3)}\big)=W_{\tilde Y_0}.$$
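The arithmetic of Definition 9.6.2 is componentwise on the triple $(a,m,b)$; a minimal sketch (subtraction here follows rule 2) with $k=-1$):

```python
def tri_add(x, y):
    """Rule 1): N(a,m,b) + N(c,n,d) = N(a+c, m+n, b+d), componentwise."""
    return tuple(xi + yi for xi, yi in zip(x, y))

def tri_scale(k, x):
    """Rule 2): kN(a,m,b) = N(ka,km,kb) for k >= 0, N(kb,km,ka) for k < 0."""
    a, m, b = x
    return (k * a, k * m, k * b) if k >= 0 else (k * b, k * m, k * a)

def tri_sub(x, y):
    """Rule 3): N(a,m,b) - N(c,n,d) = N(a,m,b) + (-1)N(c,n,d)."""
    return tri_add(x, tri_scale(-1.0, y))
```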

Consider the solution and stability of the fuzzy Duoma debt model (9.4.3) and (9.4.4) under the assumption that the fuzzy quantities are
$$\tilde U_1=N(1,0,-2),\quad \tilde U_2=N(-2,0,1),\quad \tilde D_0=N(D_0^{(1)},D_0^{(2)},D_0^{(3)}),\quad \tilde Y_0=N(Y_0^{(1)},Y_0^{(2)},Y_0^{(3)}).\tag{9.6.1}$$
The national economy must be greatly developed to maintain the debt within an allowable level; the corresponding mathematical model is
$$\tilde B(t)=\frac{i\,(W_{\tilde D_0}\wedge p_1)}{W_{\tilde Y_0}\vee p_2}.\tag{9.6.2}$$
Therefore, substituting $\tilde U_2,\tilde Y_0$ of (9.6.1) into (9.4.17), and again from (9.4.18) and (9.4.19), we obtain


$$W_{\tilde Y_0}(t)=N(W_{Y_0}^{(1)},W_{Y_0}^{(2)},W_{Y_0}^{(3)})=e^{\nu t}\Big[N(Y_0^{(1)},Y_0^{(2)},Y_0^{(3)})+\frac{N(-2,0,1)}{\nu}\Big]-\frac{N(-2,0,1)}{\nu}$$
$$=e^{\nu t}N\Big(Y_0^{(1)}-\frac{2}{\nu},\,Y_0^{(2)},\,Y_0^{(3)}+\frac{1}{\nu}\Big)+N\Big(-\frac{1}{\nu},0,\frac{2}{\nu}\Big)=N\Big[-\frac{1}{\nu}+e^{\nu t}\Big(Y_0^{(1)}-\frac{2}{\nu}\Big),\ Y_0^{(2)}e^{\nu t},\ \frac{2}{\nu}+\Big(Y_0^{(3)}+\frac{1}{\nu}\Big)e^{\nu t}\Big];$$
$$p_2(t,\nu,\tilde U_2,\tilde Y_0)=N(p_2^{(1)},p_2^{(2)},p_2^{(3)})=e^{\nu t}N(Y_0^{(1)},Y_0^{(2)},Y_0^{(3)})-\frac{1-e^{\nu t}}{\nu}N(-2,0,1)$$
$$=N(Y_0^{(1)}e^{\nu t},Y_0^{(2)}e^{\nu t},Y_0^{(3)}e^{\nu t})+N\Big[2\Big(\frac{1}{\nu}-\frac{e^{\nu t}}{\nu}\Big),\,0,\,-\frac{1}{\nu}+\frac{e^{\nu t}}{\nu}\Big]=N\Big[\Big(Y_0^{(1)}-\frac{2}{\nu}\Big)e^{\nu t}+\frac{2}{\nu},\ Y_0^{(2)}e^{\nu t},\ \Big(Y_0^{(3)}+\frac{1}{\nu}\Big)e^{\nu t}-\frac{1}{\nu}\Big].$$
Problem (9.4.3) is summed up as the solution to
$$\begin{cases}\dfrac{d\tilde D(t)}{dt}-\gamma W_{\tilde Y_0}(t)=\tilde U_1,\\ \tilde D(0)=\tilde D_0\end{cases}\qquad\text{and}\qquad\begin{cases}\dfrac{d\tilde D(t)}{dt}-\gamma p_2=\tilde U_1,\\ \tilde D(0)=\tilde D_0,\end{cases}$$
i.e.,
$$\begin{cases}\dfrac{d\tilde D(t)}{dt}=N\Big[-\dfrac{\gamma}{\nu}+\gamma e^{\nu t}\Big(Y_0^{(1)}-\dfrac{2}{\nu}\Big)+1,\ \gamma Y_0^{(2)}e^{\nu t},\ \dfrac{2\gamma}{\nu}+\gamma\Big(Y_0^{(3)}+\dfrac{1}{\nu}\Big)e^{\nu t}-2\Big],\\ \tilde D(0)=N(D_0^{(1)},D_0^{(2)},D_0^{(3)})\end{cases}\tag{9.6.3}$$
and
$$\begin{cases}\dfrac{d\tilde D(t)}{dt}=N\Big[\gamma e^{\nu t}\Big(Y_0^{(1)}-\dfrac{2}{\nu}\Big)+\dfrac{2\gamma}{\nu}+1,\ \gamma Y_0^{(2)}e^{\nu t},\ \gamma e^{\nu t}\Big(Y_0^{(3)}+\dfrac{1}{\nu}\Big)-\dfrac{\gamma}{\nu}-2\Big],\\ \tilde D(0)=N(D_0^{(1)},D_0^{(2)},D_0^{(3)}).\end{cases}\tag{9.6.4}$$
By using an extended concept in [GV86], solutions to (9.6.3) and (9.6.4) are obtained respectively:
$$W_{\tilde D_0}(t)=N\Big[\Big(1-\frac{\gamma}{\nu}\Big)t+\frac{\gamma}{\nu}\Big(Y_0^{(1)}-\frac{2}{\nu}\Big)(e^{\nu t}-1)+D_0^{(1)},\ \frac{\gamma}{\nu}Y_0^{(2)}(e^{\nu t}-1)+D_0^{(2)},\ \Big(\frac{2\gamma}{\nu}-2\Big)t+\frac{\gamma}{\nu}\Big(Y_0^{(3)}+\frac{1}{\nu}\Big)(e^{\nu t}-1)+D_0^{(3)}\Big]$$


and
$$p_1(t,\gamma,\tilde U_1,\tilde D_0)=N\Big[\Big(1+\frac{2\gamma}{\nu}\Big)t+\frac{\gamma}{\nu}\Big(Y_0^{(1)}-\frac{2}{\nu}\Big)(e^{\nu t}-1)+D_0^{(1)},\ \frac{\gamma}{\nu}Y_0^{(2)}(e^{\nu t}-1)+D_0^{(2)},\ -\Big(2+\frac{\gamma}{\nu}\Big)t+\frac{\gamma}{\nu}\Big(Y_0^{(3)}+\frac{1}{\nu}\Big)(e^{\nu t}-1)+D_0^{(3)}\Big].$$
Because of (9.6.2), hence
$$\tilde B(t)=i\cdot N\Big[\Big(1+\frac{2\gamma}{\nu}\Big)t+\frac{\gamma}{\nu}\Big(Y_0^{(1)}-\frac{2}{\nu}\Big)(e^{\nu t}-1)+D_0^{(1)},\ \frac{\gamma}{\nu}Y_0^{(2)}(e^{\nu t}-1)+D_0^{(2)},\ -\Big(2+\frac{\gamma}{\nu}\Big)t+\frac{\gamma}{\nu}\Big(Y_0^{(3)}+\frac{1}{\nu}\Big)(e^{\nu t}-1)+D_0^{(3)}\Big]\Big/ N\Big[-\frac{1}{\nu}+e^{\nu t}\Big(Y_0^{(1)}-\frac{2}{\nu}\Big),\ Y_0^{(2)}e^{\nu t},\ \frac{2}{\nu}+\Big(Y_0^{(3)}+\frac{1}{\nu}\Big)e^{\nu t}\Big].$$
It is easy to get the following for $t\to+\infty$:
$$\tilde B(t)\to\tilde B_0=\frac{i\,N\Big[\frac{\gamma}{\nu}\Big(Y_0^{(1)}-\frac{2}{\nu}\Big),\ \frac{\gamma}{\nu}Y_0^{(2)},\ \frac{\gamma}{\nu}\Big(Y_0^{(3)}+\frac{1}{\nu}\Big)\Big]}{N\Big(Y_0^{(1)}-\frac{2}{\nu},\ Y_0^{(2)},\ Y_0^{(3)}+\frac{1}{\nu}\Big)}.$$
Obviously, $\tilde B(t)$ approaches a fuzzy value $\tilde B_0$ related to the initial values, $\gamma,\nu$ and $i$; an ordinary real number appears only as a special case of this value. As long as the national income increases at an unchangeable relative rate, it is not bad for the government to issue bonds for years on end: the debt stays at an allowable level, depending continuously on the parameters $\gamma,\nu$, the initial values and the interest rate $i$, because the debt does not increase without limit.

9.6.2 Application of Fuzzy Solow Economic Growth Model

Definition 9.6.3. We call a vector with four variables $\tilde f(t)=m(f_1(t),f_2(t);f_3(t),f_4(t))$ a fuzzy function.

Definition 9.6.4. We call a number with four parameters $\tilde C=m(c^-,c^+;a,b)$ a fuzzy number, and its membership function $\mu_{\tilde C}$ is defined as
$$\mu_{\tilde C}(t)=\begin{cases}0, & t\le a\ \text{or}\ b\le t,\\[2pt] \dfrac{t-a}{c^--a}, & a<t<c^-,\\[4pt] 1, & c^-\le t\le c^+,\\[2pt] \dfrac{b-t}{b-c^+}, & c^+<t<b,\end{cases}$$
where $(c^-,c^+)$ is the left-right main value of $\tilde C$, and $a,b$ are the left-right bounds of the distribution of $\tilde C$, respectively, with $a\le c^-\le c^+\le b$.
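The membership function of Definition 9.6.4 can be written out directly (a sketch; the four parameters are positional, as in $m(c^-,c^+;a,b)$):

```python
def mu_trap(t, cm, cp, a, b):
    """Membership degree of the four-parameter fuzzy number m(c-, c+; a, b)
    of Definition 9.6.4: 0 outside (a, b), linear ramps on both sides,
    1 on the main-value interval [c-, c+]."""
    if t <= a or t >= b:
        return 0.0
    if t < cm:
        return (t - a) / (cm - a)
    if t <= cp:
        return 1.0
    return (b - t) / (b - cp)
```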


Consider a fuzzy Cobb-Douglas function
$$\tilde Y=\tilde\gamma\tilde L\tilde K^{*0.7},$$
where $\tilde\varphi(\tilde K^*)=f\big(\frac{\tilde K}{\tilde L},1\big)=\tilde\gamma\tilde K^{*0.7}$. Putting it into (9.5.3), then
$$\begin{cases}\dfrac{d\tilde K^*(t)}{dt}=\tilde\gamma\tilde S\tilde K^{*0.7}(t)-\lambda\tilde K^*(t),\\ \tilde K^*(t_0)=\tilde K_0^*\quad(\gamma>0\ \text{a constant}).\end{cases}\tag{9.6.5}$$
We stipulate
$$(\tilde K^{(n)*})^\beta=m\big(K_1^{(n)*\beta},K_2^{(n)*\beta};K_3^{(n)*\beta},K_4^{(n)*\beta}\big)\quad(n=0,1),$$
and an ordinary real number is $c=m(c,c;0,0)$ ($c$ a constant). Consider
$$\begin{cases}\dfrac{dK^*(t)}{dt}=\gamma SK^{*0.7}(t)-\lambda K^*(t),\\ K^*(t_0)=K_0^*;\end{cases}\tag{9.6.6}$$
its solution $W(t)$ can be mapped into the solution $\tilde W(t)$ to (9.6.5), while the trajectory
$$K^*(t)=\Big[K_0^{*0.3}e^{-0.3\lambda t}+\frac{\gamma S}{\lambda}\big(1-e^{-0.3\lambda t}\big)\Big]^{\frac{1}{0.3}}$$
of (9.6.6) directly maps into the fuzzy trajectory of (9.6.5):
$$\tilde K^*(t)=\Big[\tilde K_0^{*0.3}e^{-0.3\lambda t}+\frac{\tilde\gamma\tilde S}{\lambda}\big(1-e^{-0.3\lambda t}\big)\Big]^{\frac{1}{0.3}}.\tag{9.6.7}$$
The special form of (9.5.3) is
$$\begin{cases}\dfrac{d\tilde K^*(t)}{dt}=\tilde S\tilde K^*(t)+\tilde U,\\ \tilde K^*(t_0)=\tilde K_0^*,\end{cases}\tag{9.6.8}$$
and its ordinary linear differential equation with initial condition is
$$\begin{cases}\dfrac{dK^*(t)}{dt}=SK^*(t)+u,\\ K^*(t_0)=K_0^*.\end{cases}$$

Corollary 9.6.1. Suppose $\tilde K^*=p(t,h,\tilde U,\tilde K)$ and $\tilde W(t)$ are the fuzzy trajectory and the fuzzy solution of System (9.6.8), respectively; then $\tilde W(t)\supset p(t,h,\tilde U,\tilde K)$.


Proof: Let $y\in R$. Then
$$p(t,h,\tilde U,\tilde K)(y)=\sup_{\{u,k:\ (k+\frac{u}{h})e^{ht}-\frac{u}{h}=y\}}\{\tilde U(u)\wedge\tilde K(k)\}=\sup_{\{u,k:\ (k+u)e^{ht}-u=y\}}\{\tilde U(hu)\wedge\tilde K(k)\}$$
$$\le\Big(e^{ht}\tilde K+\frac{e^{ht}-1}{h}\tilde U\Big)(y)=\tilde W(t)(y).$$
The corollary is proved from the definition of the direct image of fuzzy sets and the properties of approximate quantities.

Even though the approximate solution $p(t,h,\tilde U,\tilde K_0)$ to the fuzzy system (9.6.8) is more accurate than the fuzzy solution $\tilde W(t)$, we still resort to $\tilde W(t)$ because it is easier to obtain.

Example 9.6.1. If
$$\tilde K_0^*=m(1,2;0,4),\qquad \tilde S=m(0,1;-3,4),$$
we solve
$$\begin{cases}\dfrac{d\tilde K^*(t)}{dt}=\tilde\gamma\,m(0,1;-3,4)\,\tilde K^{*0.7}(t)-\lambda\tilde K^*(t),\\ \tilde K^*(0)=m(1,2;0,4).\end{cases}$$
From the practical sense of the problem there must be $t\ge 0$. By using Corollary 9.6.1, we obtain a fuzzy trajectory of the problem:
$$\tilde K^*(t)=m\Big[e^{-\lambda t},\ \Big(\frac{\gamma}{\lambda}+\Big(2-\frac{\gamma}{\lambda}\Big)e^{-0.3\lambda t}\Big)^{\frac{1}{0.3}};\ \Big(-\frac{3\gamma}{\lambda}+\frac{3\gamma}{\lambda}e^{-0.3\lambda t}\Big)^{\frac{1}{0.3}},\ \Big(\frac{4\gamma}{\lambda}+\Big(4-\frac{4\gamma}{\lambda}\Big)e^{-0.3\lambda t}\Big)^{\frac{1}{0.3}}\Big].$$
And by using (9.5.4), we can get a fuzzy solution to the problem:
$$\tilde W(t)=m\Big[\Big(\Big(1-\frac{\gamma}{\lambda}\Big)e^{-0.3\lambda t}\Big)^{\frac{1}{0.3}},\ \Big(\frac{\gamma}{\lambda}+2e^{-0.3\lambda t}\Big)^{\frac{1}{0.3}};\ \Big(-\frac{3\gamma}{\lambda}-\frac{4\gamma}{\lambda}e^{-0.3\lambda t}\Big)^{\frac{1}{0.3}},\ \Big(\frac{4\gamma}{\lambda}+\Big(4+\frac{3\gamma}{\lambda}\Big)e^{-0.3\lambda t}\Big)^{\frac{1}{0.3}}\Big].$$


Therefore $p(t,h,\tilde U,\tilde K)\subset\tilde W(t)$ is testified. When $t\to+\infty$, $\tilde K^*(t)$ and $\tilde W(t)$ tend to the average value
$$m\Big[0,\Big(\frac{\gamma}{\lambda}\Big)^{\frac{1}{0.3}};\ \Big(-\frac{3\gamma}{\lambda}\Big)^{\frac{1}{0.3}},\ \Big(\frac{4\gamma}{\lambda}\Big)^{\frac{1}{0.3}}\Big].$$
Because this is not a fixed number, $\big(0,(\frac{\gamma}{\lambda})^{\frac{1}{0.3}}\big)$ is regarded as the left-right main value, and $(-\frac{3\gamma}{\lambda})^{\frac{1}{0.3}}$ and $(\frac{4\gamma}{\lambda})^{\frac{1}{0.3}}$ are called the left-right spreads of $\tilde C$, respectively. Hence decision makers can choose the most satisfactory value according to their practical requirements.

10 Interval and Fuzzy Functional and their Variation

The writer put forward the concepts of interval and fuzzy (valued) functional variation on the basis of classic function and functional variation in 1991 [Cao91a]. In 1992 he extended the research on convex functions and convex functionals to the interval and fuzzy environment. Later he carried out research on the conditional-extremum variation problem for interval and fuzzy-valued functionals [Cao01e] and on functional variation with fuzzy functions [Cao99a]. In this chapter, interval and fuzzy-valued functional variation are discussed as follows:
Section 1, Interval functional and its variation;
Section 2, Fuzzy-valued functional and its variation;
Section 3, Convex interval and fuzzy function and functional;
Section 4, Convex fuzzy-valued function and functional;
Section 5, Variation of interval and fuzzy-valued functional conditional extremum;
Section 6, Variation of conditional extremum on functionals with fuzzy functions.

10.1 Interval Functional and Its Variation

In Chapter 1 we can find the definition of interval numbers, their operations and interval functionals. In this section we discuss some properties of interval functionals and their variation, and introduce the extreme-value condition for interval functionals.

Definition 10.1.1. If to each function $\bar y(x)$, a certain interval function $\bar y(x)=[y^-(x),y^+(x)]$, there corresponds some interval value $\bar\Pi$, the interval variable is called a functional dependent on the function $\bar y(x)$, written as $\bar\Pi=\Pi[y^-(x),y^+(x)]$.

Definition 10.1.2. Suppose the interval functional $\Pi(\bar y(x))$ is defined on the interval $[a,b]$, and for a point $x\in[a,b]$, $\delta y^-=y^-(x)-y_1^-(x)$ and $\delta y^+=y^+(x)-y_1^+(x)$ exist, where $y_1^-(x),y_1^+(x)$ and $y^-(x),y^+(x)$ are ordinary functions belonging to the domain of the functional. Then the functional $\Pi[\bar y(x)]$ is called interval-model-variable variationable, and
$$\delta\bar y\triangleq[\min(\delta y^-,\delta y^+),\ \max(\delta y^-,\delta y^+)]$$
is called the variation of $\bar y$ at $x$. When $\delta y^-(x_0)\le\delta y^+(x_0)$ (or $\delta y^-(x_0)\ge\delta y^+(x_0)$), $\bar y$ is same-order (or antitone) variationable at $x_0$. For $\forall x\in[a,b]$, $\delta\bar y=[\delta y^-,\delta y^+]$ (or $\delta\bar y=[\delta y^+,\delta y^-]$) is called the same-order (or antitone) variation on $[a,b]$.

B.-Y. Cao: Optimal Models & Meth. with Fuzzy Quantities, STUDFUZZ 248, pp. 327-361. © Springer-Verlag Berlin Heidelberg 2010, springerlink.com
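Definition 10.1.2's variation can be computed pointwise from the endpoint functions (a minimal numeric sketch):

```python
def interval_variation(y_minus, y_plus, y1_minus, y1_plus):
    """delta ybar = [min(dy-, dy+), max(dy-, dy+)] of Definition 10.1.2,
    with dy- = y-(x) - y1-(x) and dy+ = y+(x) - y1+(x) at a fixed x."""
    d_minus = y_minus - y1_minus
    d_plus = y_plus - y1_plus
    return (min(d_minus, d_plus), max(d_minus, d_plus))
```

The first test below is a same-order case ($\delta y^-\le\delta y^+$), the second an antitone one.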

Definition 10.1.3. If for $\forall\varepsilon>0$ there exists $\delta>0$ such that whenever $d_H(\bar y^{(k)}(x),\bar y_0^{(k)}(x))\subset(-\delta,\delta)$ we have $d_H(\Pi\bar y(x),\Pi\bar y_0(x))\subset(-\varepsilon,\varepsilon)$, the functional $\Pi\bar y(x)$ is called a $k$-th approaching continual interval functional at $\bar y_0(x)$, where $d_H$ denotes the Hausdorff metric.

Definition 10.1.4. For $\forall x\in[a,b]$,
$$y_1^-(x)\le y_1^+(x),\quad y^-(x)\le y^+(x),\quad y^{-\prime}(x)\le y^{+\prime}(x),\ \dots,\ y^{-(n)}(x)\le y^{+(n)}(x)$$
if and only if the functionals $\Pi(y^-(x))$ and $\Pi(y^+(x))$ are $k$-th approaching continual at $y^-(x)=y_1^-(x)$ and $y^+(x)=y_1^+(x)$ [Ail52], respectively; then $\Pi\bar y(x)$ is called a $k$-th approaching continual interval functional at $\bar y=\bar y_1$.

Definition 10.1.5. For the interval functional $\Pi(\bar y(x))$, $\frac{\partial}{\partial\vartheta}\Pi(\bar y+\vartheta\delta\bar y)\big|_{\vartheta=0}$ is called the 1st variation of the interval functional and, writing it $\delta\bar\Pi$, we deduce
$$\delta\bar\Pi\triangleq\Big[\min\Big\{\frac{\partial}{\partial\vartheta}\Pi(y^-+\vartheta\delta y^-)\Big|_{\vartheta=0},\frac{\partial}{\partial\vartheta}\Pi(y^++\vartheta\delta y^+)\Big|_{\vartheta=0}\Big\},\ \max\Big\{\frac{\partial}{\partial\vartheta}\Pi(y^-+\vartheta\delta y^-)\Big|_{\vartheta=0},\frac{\partial}{\partial\vartheta}\Pi(y^++\vartheta\delta y^+)\Big|_{\vartheta=0}\Big\}\Big].\tag{10.1.1}$$
When $\frac{\partial}{\partial\vartheta}\Pi(y^-+\vartheta\delta y^-)|_{\vartheta=0}\le\frac{\partial}{\partial\vartheta}\Pi(y^++\vartheta\delta y^+)|_{\vartheta=0}$, the functional $\bar\Pi$ is same-order variationable, and
$$\delta\bar\Pi=\Big[\frac{\partial}{\partial\vartheta}\Pi(y^-+\vartheta\delta y^-)\Big|_{\vartheta=0},\ \frac{\partial}{\partial\vartheta}\Pi(y^++\vartheta\delta y^+)\Big|_{\vartheta=0}\Big]$$
represents the same-order variation of the functional. When $\frac{\partial}{\partial\vartheta}\Pi(y^++\vartheta\delta y^+)|_{\vartheta=0}\le\frac{\partial}{\partial\vartheta}\Pi(y^-+\vartheta\delta y^-)|_{\vartheta=0}$, the functional $\bar\Pi$ is antitone variationable, and
$$\delta\bar\Pi=\Big[\frac{\partial}{\partial\vartheta}\Pi(y^++\vartheta\delta y^+)\Big|_{\vartheta=0},\ \frac{\partial}{\partial\vartheta}\Pi(y^-+\vartheta\delta y^-)\Big|_{\vartheta=0}\Big]$$
represents the antitone variation of the functional.


In what follows we consider only the same-order variation, since the antitone variation can be handled in the same way. Therefore (10.1.1) simplifies to
$$\delta\bar\Pi\triangleq\Big[\frac{\partial}{\partial\vartheta}\Pi(y^-+\vartheta\delta y^-)\Big|_{\vartheta=0},\ \frac{\partial}{\partial\vartheta}\Pi(y^++\vartheta\delta y^+)\Big|_{\vartheta=0}\Big].$$

Definition 10.1.6. For the interval functional $\bar\Pi=\Pi(\bar y(x))$, $\frac{\partial^n}{\partial\vartheta^n}\Pi(\bar y+\vartheta\delta\bar y)\big|_{\vartheta=0}$ is called the $n$-th variation of the interval functional; writing $\delta^n\bar\Pi=\frac{\partial^n}{\partial\vartheta^n}\Pi(\bar y+\vartheta\delta\bar y)\big|_{\vartheta=0}$, we call $\delta^n\bar\Pi$ $(n\ge 2)$ a higher variation.

Definition 10.1.7. We call $F(x,\bar y(x),\bar y'(x))$ an interval-compound function, writing
$$F(x,\bar y(x),\bar y'(x))\triangleq[F^-(x,y^-(x),y^{-\prime}(x)),\ F^+(x,y^+(x),y^{+\prime}(x))].$$

Definition 10.1.8. For the interval-compound function $F(x,\bar y(x),\bar y'(x))$, fixing the variable $x$, we define
$$\delta\bar F=\frac{\partial F}{\partial\vartheta}(x,\bar y+\vartheta\delta\bar y,\bar y'+\vartheta(\delta\bar y)')\Big|_{\vartheta=0}$$
as the 1st interval variation of the interval-compound function $\bar F$, and
$$\delta^n\bar F=\frac{\partial^n F}{\partial\vartheta^n}(x,\bar y+\vartheta\delta\bar y,\bar y'+\vartheta(\delta\bar y)')\Big|_{\vartheta=0}$$
is called its $n$-th interval variation. Similarly we can define the variation of $F[x,\bar y(x),\bar y'(x),\dots,\bar y^{(n)}(x)]$ and $\phi[x,\bar y(x),\bar z(x)]$.

Theorem 10.1.1. For interval-model variation, we have
(1) $(\delta\bar y)'=\delta\bar y'$, $(\delta\bar y)^{(n)}=\delta\bar y^{(n)}$;
(2) $\delta(\delta\bar y)=0$.

Proof: (1) Let the interval-compound function be $\bar F=F[x,\bar y(x),\bar y'(x)]=\bar y'$; then $F[x,\bar y+\vartheta\delta\bar y,\bar y'+\vartheta(\delta\bar y)']=\bar y'+\vartheta(\delta\bar y)'$. Under the meaning of same-order variationability, we have
$$\frac{\partial F}{\partial\vartheta}(x,\bar y+\vartheta\delta\bar y,\bar y'+\vartheta(\delta\bar y)')=(\delta\bar y)'.$$
Therefore we obtain
$$\delta\bar F=\delta\bar y'=\frac{\partial F}{\partial\vartheta}(x,\bar y+\vartheta\delta\bar y,\bar y'+\vartheta(\delta\bar y)')\Big|_{\vartheta=0}=(\delta\bar y)'.$$


Similarly, we can prove that the formula holds under the meaning of antitone variationability; hence $(\delta\bar y)'=\delta\bar y'$. In the same way we can prove $(\delta\bar y)^{(n)}=\delta\bar y^{(n)}$.
(2) Let the interval-compound function be $F(x,\bar y(x))=\delta\bar y$, $F(x,\bar y+\vartheta\delta\bar y)=\delta\bar y$. Then
$$\frac{\partial F}{\partial\vartheta}(x,\bar y+\vartheta\delta\bar y)=0\ \Longrightarrow\ \delta\bar F=\frac{\partial F}{\partial\vartheta}(x,\bar y+\vartheta\delta\bar y)\Big|_{\vartheta=0}=0.$$
So the theorem holds.

Theorem 10.1.2. Let $\bar F,\bar F_1,\bar F_2$ be interval-compound functions variationable in the same order. Then
(1) $\delta(\bar F_1\pm\bar F_2)=\delta\bar F_1\pm\delta\bar F_2$;
(2) $\delta(\bar F_1\cdot\bar F_2)=\bar F_1\delta\bar F_2+\bar F_2\delta\bar F_1$;
(3) $\delta(k\cdot\bar F)=k\,\delta\bar F$;
(4) $\delta\Big(\dfrac{\bar F_1}{\bar F_2}\Big)=\dfrac{\bar F_2\delta\bar F_1-\bar F_1\delta\bar F_2}{\bar F_2^2}$ $(\bar F_2\ne 0)$;
(5) $\delta\bar F^n=n\bar F^{n-1}\delta\bar F$;
(6) $\delta\displaystyle\int_a^b\bar F\,dx=\displaystyle\int_a^b\delta\bar F\,dx$.

Proof: We only prove (2) and (6).
(2) Let $F(x,\bar y(x),\bar y'(x))=F_1(x,\bar y(x),\bar y'(x))\cdot F_2(x,\bar y(x),\bar y'(x))$. Then
$$\frac{\partial F}{\partial\vartheta}(x,\bar y+\vartheta\delta\bar y,\bar y'+\vartheta\delta\bar y')=\Big\{\frac{\partial}{\partial\vartheta}F_1(x,\bar y+\vartheta\delta\bar y,\bar y'+\vartheta\delta\bar y')\Big\}F_2(x,\bar y+\vartheta\delta\bar y,\bar y'+\vartheta\delta\bar y')+F_1(x,\bar y+\vartheta\delta\bar y,\bar y'+\vartheta\delta\bar y')\frac{\partial}{\partial\vartheta}F_2(x,\bar y+\vartheta\delta\bar y,\bar y'+\vartheta\delta\bar y').\tag{10.1.2}$$
From this we obtain $\delta\bar F=\bar F_1\delta\bar F_2+\bar F_2\delta\bar F_1$.
(6) Let $\bar F=F(x,\bar y(x),\bar y'(x))$. Then
$$\delta\int_a^b F(x,\bar y(x),\bar y'(x))\,dx=\frac{\partial}{\partial\vartheta}\int_a^b F(x,\bar y+\vartheta\delta\bar y,\bar y'+\vartheta\delta\bar y')\,dx\Big|_{\vartheta=0}=\int_a^b\frac{\partial}{\partial\vartheta}F(x,\bar y+\vartheta\delta\bar y,\bar y'+\vartheta\delta\bar y')\Big|_{\vartheta=0}dx=\int_a^b\delta F(x,\bar y(x),\bar y'(x))\,dx.$$
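Since $\delta\bar F$ is defined endpointwise through the one-parameter derivative at $\vartheta=0$, a rule like (2) of Theorem 10.1.2 can be checked numerically with a central difference. A sketch for ordinary (non-interval) endpoint functions; $F_1$, $F_2$ here are hypothetical examples:

```python
def first_variation(F, y, dy, h=1e-6):
    """delta F = d/d(theta) of F(y + theta*dy) at theta = 0
    (Definition 10.1.8, endpointwise), via a central difference."""
    return (F(y + h * dy) - F(y - h * dy)) / (2 * h)

# Product rule (2) of Theorem 10.1.2: delta(F1*F2) = F1*delta F2 + F2*delta F1.
F1 = lambda y: y ** 2
F2 = lambda y: y ** 3
y0, dy0 = 2.0, 1.0
lhs = first_variation(lambda y: F1(y) * F2(y), y0, dy0)
rhs = F1(y0) * first_variation(F2, y0, dy0) + F2(y0) * first_variation(F1, y0, dy0)
```

Here $F_1F_2=y^5$, so both sides should be $5y_0^4\,\delta y=80$.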


Lemma I (Basic interval variation lemma). Let the interval function $\bar\phi(x)$ be continuous on $(a,b)$, and let an arbitrary function $\eta(x)$ satisfy the following:
(1) $\eta(x)$ has a $k$-th continuous derivative on $(a,b)$;
(2) $\eta(a)=\eta(b)$;
(3) $|\eta(x)|<\epsilon$, $|\eta^{(1)}(x)|<\epsilon$, ..., $|\eta^{(k)}(x)|<\epsilon$, where $\epsilon$ is an arbitrarily small positive number.
If $\int_a^b\bar\phi(x)\eta(x)\,dx=0$, then $\bar\phi(x)\equiv 0$ on $[a,b]$.

Proof: Because, by the condition,
$$\int_a^b\bar\phi(x)\eta(x)\,dx=\Big[\int_a^b\phi^-(x)\eta(x)\,dx,\ \int_a^b\phi^+(x)\eta(x)\,dx\Big]=0,$$
where we note $\bar\phi(x)=[\phi^-(x),\phi^+(x)]$, applying the variation lemma [Ail52] to $\phi^-(x)$, $\int_a^b\phi^-(x)\eta(x)\,dx$ and to $\phi^+(x)$, $\int_a^b\phi^+(x)\eta(x)\,dx$, we have $\phi^-(x)=0$ and $\phi^+(x)=0$, respectively. Hence the lemma holds.

The extreme value of the interval functional.

Definition 10.1.9. If the value of the interval functional $\Pi(\bar y(x))$ on any curve approaching $\bar y=\bar y_0(x)$ is smaller than $\Pi(\bar y_0(x))$, i.e., if $\Delta\bar\Pi=\Pi(\bar y(x))-\Pi(\bar y_0(x))\subset 0$ (or $=0$), the functional $\Pi(\bar y(x))$ reaches a maximum (or a strict one) at $\bar y=\bar y_0$. The minimum (or a strict one) of $\Pi(\bar y(x))$ can be defined by imitation, and a maximum (or minimum) of $\Pi(\bar y(x))$ is called an extreme value.

Theorem 10.1.3. If the interval functional $\Pi(\bar y(x))$ with variation reaches a maximum (or minimum) at $\bar y=\bar y_0(x)$, then at $\bar y=\bar y_0(x)$ there holds $\delta\bar\Pi=0$.

Proof: Consider
$$\bar\Pi(\bar y_0(x)+\vartheta\delta\bar y)=\bar\phi(\vartheta)\iff[\Pi(y_0^-(x)+\vartheta\delta y^-),\ \Pi(y_0^+(x)+\vartheta\delta y^+)]=[\phi^-(\vartheta),\phi^+(\vartheta)];$$
when $y_0^-(x),\delta y^-$ and $y_0^+(x),\delta y^+$ are fixed, respectively, $\bar\phi(\vartheta)$ is an interval function of $\vartheta$. By assumption, $\bar\phi(0)$ takes an extreme value $\iff\phi^-(0),\phi^+(0)$ do. Therefore $\phi^{-\prime}(0)=0$, $\phi^{+\prime}(0)=0$, i.e.,
$$\bar\phi'(0)=0\ \Longrightarrow\ \delta\Pi(\bar y_0(x))=0.$$
It is not difficult to extend the above results to the interval functional dependent on multiple model variables $\Pi(\bar y_1(x),\bar y_2(x),\dots,\bar y_n(x))$, to a model interval functional of several variables $\Pi(\bar y(x_1,x_2,\dots,x_n))$,

¯ by condition, we notice φ(x) = [φ (x), φ (x)]. b If by applying the variation lemma [Ail52] into φ− (x), a φ− (x)η(x)dx and b φ+ (x), a φ+ (x)η(x)dx, we have φ− (x) = 0 and φ+ (x) = 0, respectively. Hence, the lemma holds. The extreme value of the interval functional. Deﬁnition 10.1.9. If the value of interval functional Π(¯ y (x)) in any curve ¯ = Π(¯ approaching to y¯ = y¯0 (x) is smaller than Π(¯ y0 (x)), i.e., if ΔΠ y (x)) − y0 (x)) reaches the maximum (or a strict Π(¯ y0 (x)) ⊂ 0(or = 0), functional Π(¯ one) on y¯ = y¯0 . The minimum (or a strict one) of Π(¯ y (x)) can be deﬁned by imitation, and maximum (or minimum) of Π(¯ y(x)) is called an extreme value. Theorem 10.1.3. If the interval functional Π(¯ y (x)) with variation reaches max¯ = 0. imum (or minimum) on y¯ = y¯0 (x), then, on y¯ = y¯0 (x) there exists δ Π Proof: Consider ¯ Π(¯ y0 (x) + ϑδ y¯) = φ(ϑ) − − ⇐⇒ [Π(y0 (x) + ϑδy ), Π(y0+ (x) + ϑδy + )] = [φ− (ϑ), φ+ (ϑ)], ¯ is an inwhen y0− (x) and δy − , y0+ (x) and δy + are ﬁxed, respectively, φ(ϑ) ¯ terval function of ϑ. By assumption, φ(0) is taken for extreme value ⇐⇒ φ− (0), φ+ (0) is. Therefore, φ− (0) = 0, φ+ (0) = 0, i.e., y0 (x)) = 0. φ¯ (0) = 0 =⇒ δΠ(¯ It is not diﬃcult to extend the results above into the interval functional dependent upon multi-model-variable Π(¯ y1 (x), y¯2 (x), · · · , y¯n (x)) and upon a model interval functional of multi-variable or upon its variety of model interval functionals Π(¯ y (x1 , x2 , · · · , xn ));


10 Interval and Fuzzy Functional and their Variation

or Π(z̄₁(x₁, · · · , xₙ), z̄₂(x₁, · · · , xₙ), · · · , z̄ₙ(x₁, · · · , xₙ)).
Theorem 10.1.4. If the interval functional Π(ȳ(x)) has 1st and 2nd interval variations δΠ̄ and δ²Π̄, and on ȳ = ȳ₀(x) we have δΠ(ȳ₀(x)) = 0 and δ²Π(ȳ₀(x)) ≠ 0, then the functional Π(ȳ(x)) attains an extreme value on ȳ = ȳ₀(x). When δ²Π(ȳ₀(x)) ⊂ 0 a maximum exists, and when δ²Π(ȳ₀(x)) ⊃ 0 a minimum exists.
Proof: Let the interval function be φ̄(ϑ) = Π(ȳ₀(x) + ϑδȳ). If δΠ(ȳ₀(x)) = 0 and δ²Π(ȳ₀(x)) ⊂ 0, then
φ̄′(0) = 0, φ̄″(0) ⊂ 0 ⟹ φ⁻′(0) = 0, φ⁻″(0) < 0 and φ⁺′(0) = 0, φ⁺″(0) < 0
⟹ φ⁻ and φ⁺ each take a maximal value at 0
⟺ φ̄(0) is a maximal value, i.e., Π(ȳ₀(x) + ϑδȳ) ⊆ Π(ȳ₀(x)).
Therefore a maximal value is taken by Π at ȳ₀(x). The minimum case is proved similarly.
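The vanishing of the first variation at an extremum (Theorem 10.1.3) can be illustrated numerically by treating the interval functional endpoint-wise. The following sketch is my own illustration, not from the book: it uses a hypothetical interval integrand [1, 2]·y′(x)², whose endpoint functionals are both minimized by the straight line y₀(x) = x under the boundary conditions y(0) = 0, y(1) = 1, and approximates the first variation in an admissible direction η by a central finite difference.

```python
import numpy as np

# Interval functional Pi(y) = [Pi-, Pi+] with hypothetical endpoint weights
# c- = 1, c+ = 2, i.e. Pi±(y) = c± * ∫ y'(x)^2 dx.
def interval_functional(y, x, c_lo=1.0, c_hi=2.0):
    dy = np.gradient(y, x)
    val = np.trapz(dy**2, x)
    return np.array([c_lo * val, c_hi * val])

# First variation in direction eta (eta vanishes at both ends),
# approximated by a central difference in the variation parameter.
def first_variation(y, eta, x, h=1e-6):
    return (interval_functional(y + h*eta, x)
            - interval_functional(y - h*eta, x)) / (2*h)

x = np.linspace(0.0, 1.0, 2001)
y0 = x                       # extremal of ∫ y'^2 dx with y(0)=0, y(1)=1
eta = np.sin(np.pi * x)      # admissible variation, eta(0) = eta(1) = 0

# At the extremal, both endpoint variations vanish (Theorem 10.1.3);
# at a non-extremal curve such as y = x^2 they do not.
assert np.allclose(first_variation(y0, eta, x), 0.0, atol=1e-6)
assert abs(first_variation(x**2, eta, x)[0]) > 1e-2
```

Because the interval functional is handled endpoint-wise, δΠ̄ = 0 here simply means that both endpoint functionals are stationary, matching the proof of Theorem 10.1.3.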

10.2 Fuzzy-Valued Functional and Its Variation

10.2.1 Introduction
We aim to extend the concept of functional variation from the interval setting to the fuzzy setting, putting forward the concept of fuzzy variation. In this section, we discuss some properties of the fuzzy-valued functional and its variation, and derive extreme-value conditions for such functionals.
10.2.2 Variation of a Fuzzy-Valued Functional at an Ordinary Point
In [DPr78] and [Luo84a,b], we can find definitions of a fuzzy number, its operations, and of the fuzzy-valued function.
Definition 10.2.1. Let
(1) ỹ : [a, b] → F(R), x ↦ ỹ(x), be a fuzzy-valued function defined on [a, b];
(2) ȳα : [a, b] → E_R = {[e, f] | e ≤ f; e, f ∈ R}, x ↦ ȳα(x) = (ỹ(x))α = [yα⁻(x), yα⁺(x)].
Then ȳα is called an α-cut function of ỹ; it is an interval function defined on [a, b].
Definition 10.2.2. If to each fuzzy-valued function ỹ(x) of a given class there corresponds a fuzzy number Π(ỹ(x)), then Π(ỹ(x)) is called a fuzzy-valued functional of such functions ỹ(x), written Π̃ = Π(ỹ(x)).
Definition 10.2.3. Let the fuzzy-valued functional be defined on [a, b]. If for every α ∈ (0, 1] there exists δȳα = ȳα(x) − ȳ₁α(x) such that
⋃_{α∈(0,1]} α δȳα = ⋃_{α∈(0,1]} α(ȳα(x) − ȳ₁α(x)),
then it is called a fuzzy-model-variable variation in the functional Π(ỹ(x)), written δỹ = ỹ(x) − ỹ₁(x).
Definition 10.2.4. Let ỹ(x) be defined on [a, b]. If for every α ∈ (0, 1] the ȳα(x) are same-order (or antitone) variationable, then δỹ(x) = ⋃_{α∈(0,1]} α δȳα is called the same-order (or antitone) variation of ỹ(x).
Definition 10.2.5. Let ȳα : [a, b] → E_R, x ↦ ȳα(x), and Π̄α : E_R → [g, h], ȳα ↦ Π(ȳα(x)). Then Π(ȳα) is called an α-cut functional of Πỹ; if and only if, for every α ∈ (0, 1], Π(ȳα) is kth-order continuous near ȳα = ȳ₀α(x), the fuzzy-valued functional Πỹ is said to be kth-order continuous near ỹ = ỹ₀(x).
Definition 10.2.6. For the fuzzy-valued functional Π(ỹ(x)), we call (∂/∂ϑ)Π(ỹ + ϑδỹ)|_{ϑ=0} the 1st variation of the fuzzy-valued functional, denoted δΠ̃; then
δΠ̃ ≜ ⋃_{α∈(0,1]} α (∂/∂ϑ)Π(ȳα + ϑδȳα)|_{ϑ=0}.
We call (∂²/∂ϑ²)Π(ỹ + ϑδỹ)|_{ϑ=0} the 2nd variation of the fuzzy-valued functional, denoted δ²Π̃; then
δ²Π̃ ≜ ⋃_{α∈(0,1]} α (∂²/∂ϑ²)Π(ȳα + ϑδȳα)|_{ϑ=0}.
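The α-cut decomposition of Definition 10.2.1 is easy to compute for concrete fuzzy-valued functions. The sketch below is my own illustration, not from the book: it assumes ỹ(x) takes, at each x, a symmetric triangular fuzzy number with a hypothetical center m(x) and spread s(x), so that the α-cut is the interval [m(x) − (1 − α)s(x), m(x) + (1 − α)s(x)].

```python
# Hypothetical center and spread functions (my own choice for illustration).
def alpha_cut(x, alpha, m=lambda x: x**2, s=lambda x: 0.1 + 0.05*x):
    """alpha-cut function y_alpha(x) = [y_alpha^-(x), y_alpha^+(x)]."""
    if not 0 < alpha <= 1:
        raise ValueError("alpha must lie in (0, 1]")
    lo = m(x) - (1 - alpha) * s(x)
    hi = m(x) + (1 - alpha) * s(x)
    return lo, hi  # an interval, lo <= hi

# alpha-cuts are nested: a larger alpha gives a smaller interval,
# and at alpha = 1 the cut collapses to the center value m(x).
lo1, hi1 = alpha_cut(2.0, 0.2)
lo2, hi2 = alpha_cut(2.0, 0.8)
assert lo1 <= lo2 <= hi2 <= hi1
lo, hi = alpha_cut(2.0, 1.0)
assert lo == hi == 4.0
```

Each α-cut is exactly the interval function ȳα of Definition 10.2.1, so the interval results of Section 10.1 apply level-wise.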

Definition 10.2.7. For a fuzzy-valued functional of the type Π̃ = Π(ỹ(x), z̃(x)) or Π̃ = Π(ũ(x, y)), the 1st variation is
δΠ̃ = ⋃_{α∈(0,1]} α (∂/∂ϑ)Π(ȳα + ϑδȳα, z̄α + ϑδz̄α)|_{ϑ=0},
δΠ̃ = ⋃_{α∈(0,1]} α (∂/∂ϑ)Π(ūα + ϑδūα)|_{ϑ=0},
respectively. Similarly, we can define the 2nd variation.
Definition 10.2.8. We call a function of the form F(x, ỹ(x), ỹ′(x)) a fuzzy-valued compound function, with
F(x, ỹ(x), ỹ′(x)) ≜ ⋃_{α∈(0,1]} αF(x, ȳα(x), ȳ′α(x)).

Definition 10.2.9. Fixing the variable x in a fuzzy-valued compound function F(x, ỹ(x), ỹ′(x)), we define
δF̃ = (∂/∂ϑ)F(x, ỹ + ϑδỹ, ỹ′ + ϑ(δỹ)′)|_{ϑ=0} ≜ ⋃_{α∈(0,1]} αδF̄α = ⋃_{α∈(0,1]} α (∂/∂ϑ)F(x, ȳα + ϑδȳα, ȳ′α + ϑδȳ′α)|_{ϑ=0},
which is called the 1st fuzzy-valued variation, and
δⁿF̃ = (∂ⁿ/∂ϑⁿ)F(x, ỹ + ϑδỹ, ỹ′ + ϑ(δỹ)′)|_{ϑ=0} ≜ ⋃_{α∈(0,1]} αδⁿF̄α = ⋃_{α∈(0,1]} α (∂ⁿ/∂ϑⁿ)F(x, ȳα + ϑδȳα, ȳ′α + ϑδȳ′α)|_{ϑ=0},
which is called an nth fuzzy-valued variation. In the same way, we can define the fuzzy-valued variation of
F(x, ỹ(x), ỹ′(x), · · · , ỹ⁽ⁿ⁾(x)) and φ(x, ỹ(x), z̃(x)).
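At each α-cut endpoint, the variation of Definition 10.2.9 is an ordinary derivative in ϑ at ϑ = 0 and can be approximated by a finite difference. The sketch below is my own illustration under stated assumptions: the compound function F, the endpoint values, and the variation δy, δy′ are all hypothetical choices, not from the book.

```python
def F(x, y, yp):                        # hypothetical compound function F(x, y, y')
    return x * y + yp**2

def dF_dtheta(x, y, yp, dy, dyp, h=1e-6):
    # central difference in theta at theta = 0 (Definition 10.2.9, one endpoint)
    return (F(x, y + h*dy, yp + h*dyp) - F(x, y - h*dy, yp - h*dyp)) / (2*h)

# endpoint data of one alpha-cut at a fixed x (illustrative numbers)
x, alpha = 1.5, 0.5
y_lo, y_hi = 2.0 - (1 - alpha)*0.3, 2.0 + (1 - alpha)*0.3
yp_lo, yp_hi = 1.0, 1.4
dy, dyp = 0.2, -0.1                     # a fixed variation and its derivative

for y, yp in ((y_lo, yp_lo), (y_hi, yp_hi)):
    num = dF_dtheta(x, y, yp, dy, dyp)
    exact = x*dy + 2*yp*dyp             # ordinary chain rule: F_y*dy + F_y'*dy'
    assert abs(num - exact) < 1e-6
```

Collecting these endpoint derivatives over all α ∈ (0, 1] reproduces the union δF̃ = ⋃ αδF̄α of the definition.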

Theorem 10.2.1. For a fuzzy valued-model-variable variation, we have
(1) (δỹ)′ = δỹ′, (δỹ)⁽ⁿ⁾ = δỹ⁽ⁿ⁾; (10.2.1)
(2) δ(δỹ) = 0. (10.2.2)
Proof: a) Let the fuzzy-valued compound function be F̃ = F(x, ỹ(x), ỹ′(x)) = ỹ′; then
F(x, ỹ + ϑδỹ, ỹ′ + ϑ(δỹ)′) = ỹ′ + ϑ(δỹ)′.


Under the same-order variationable meaning, we have
(∂/∂ϑ)F(x, ỹ + ϑδỹ, ỹ′ + ϑ(δỹ)′) ≜ ⋃_{α∈(0,1]} α (∂/∂ϑ)F(x, ȳα + ϑδȳα, ȳ′α + ϑδȳ′α) = ⋃_{α∈(0,1]} α(δȳα)′ ≜ (δỹ)′.
Therefore, we obtain
δF̃ = δỹ′ = (∂/∂ϑ)F(x, ỹ + ϑδỹ, ỹ′ + ϑ(δỹ)′)|_{ϑ=0} = (δỹ)′.
We can prove similarly that the formula holds under the antitone variationable meaning, so (δỹ)′ = δỹ′. By mathematical induction, (δỹ)⁽ⁿ⁾ = δỹ⁽ⁿ⁾ follows in proper order. So (10.2.1) holds.
b) Let the fuzzy-valued compound function be F(x, ỹ(x)) = δỹ; then
F(x, ỹ + ϑδỹ) = δỹ,
so that
(∂/∂ϑ)F(x, ỹ + ϑδỹ) ≜ ⋃_{α∈(0,1]} α (∂/∂ϑ)F(x, ȳα + ϑδȳα) = 0.
Therefore
δF̃ = (∂/∂ϑ)F(x, ỹ + ϑδỹ)|_{ϑ=0} = 0,
i.e., (10.2.2) holds.
Theorem 10.2.2. Let F̃, F̃₁, F̃₂ be fuzzy-valued compound functions with the same-order variationableness. Then
(1) δ(F̃₁ ± F̃₂) = δF̃₁ ± δF̃₂;
(2) δ(F̃₁ · F̃₂) = F̃₁δF̃₂ + F̃₂δF̃₁;
(3) δ(k · F̃) = kδF̃;
(4) δ(F̃₁/F̃₂) = (F̃₂δF̃₁ − F̃₁δF̃₂)/F̃₂² (F̃₂ ≠ 0);
(5) δF̃ⁿ = nF̃ⁿ⁻¹δF̃;
(6) δ∫_a^b F̃dx = ∫_a^b δF̃dx.
Proof: Only (2) and (6) are proved; the others can be proved similarly.
(2) Let F(x, ỹ(x), ỹ′(x)) = F₁(x, ỹ(x), ỹ′(x)) · F₂(x, ỹ(x), ỹ′(x)). Then


(∂/∂ϑ)F(x, ỹ + ϑδỹ, ỹ′ + ϑδỹ′)
= (∂/∂ϑ){F₁(x, ỹ + ϑδỹ, ỹ′ + ϑδỹ′)F₂(x, ỹ + ϑδỹ, ỹ′ + ϑδỹ′)}
⟺ ⋃_{α∈(0,1]} α (∂/∂ϑ)F(x, ȳα + ϑδȳα, ȳ′α + ϑδȳ′α)
= ⋃_{α∈(0,1]} α [(∂F₁/∂ϑ)(x, ȳα + ϑδȳα, ȳ′α + ϑδȳ′α)F₂(x, ȳα + ϑδȳα, ȳ′α + ϑδȳ′α) + F₁(x, ȳα + ϑδȳα, ȳ′α + ϑδȳ′α)(∂F₂/∂ϑ)(x, ȳα + ϑδȳα, ȳ′α + ϑδȳ′α)].
Setting ϑ = 0 gives δ(F̃₁ · F̃₂) = F̃₁δF̃₂ + F̃₂δF̃₁.
(6) The conclusion follows from the proof of (6) in Theorem 10.1.2 and from the representation theorem of fuzzy numbers.
Lemma II. (Basic fuzzy variation lemma) Let the fuzzy-valued function φ̃(x) be continuous on (a, b), and let an arbitrary function η(x) satisfy the classical variation lemma conditions [Ail52], i.e., 1° η(x) has a kth continuous derivative on (a, b); 2° η(a) = η(b); 3° |η(x)| < ε, |η⁽¹⁾(x)| < ε, · · · , |η⁽ᵏ⁾(x)| < ε, where ε is an arbitrarily small positive number. If ∫_a^b φ̃(x)η(x)dx = 0, then φ̃(x) = 0 on [a, b].
Proof: Since

∫_a^b φ̃(x)η(x)dx ≜ ⋃_{α∈(0,1]} α ∫_a^b φ̄α(x)η(x)dx,
for an arbitrary α ∈ (0, 1] we have φ̄α(x) = 0, x ∈ [a, b], from variation Lemma I, and hence φ̃(x) = 0 on [a, b] by the representation theorem of fuzzy numbers.
The extreme value of the fuzzy-valued functional is given below.
Definition 10.2.10. If the fuzzy-valued functional Π(ỹ(x)) is smaller than Π(ỹ₀(x)) on an arbitrary curve near ỹ = ỹ₀, i.e., if ΔΠ̃ = Π(ỹ(x)) − Π(ỹ₀(x)) ⊂ 0 (or = 0), the functional Π(ỹ(x)) is said to reach the maximum (or a strict maximum) on the curve ỹ = ỹ₀(x). The minimum-valued curve can be defined similarly.
Theorem 10.2.3. If the fuzzy-valued functional Π(ỹ(x)) with variation reaches a maximum (or minimum) on ỹ = ỹ₀(x), then δΠ̃ = 0 on ỹ = ỹ₀(x).
Proof: As
Π(ỹ₀(x) + ϑδỹ) = φ̃(ϑ) ⟺ ⋃_{α∈(0,1]} αΠ(ȳ₀α(x) + ϑδȳα) = ⋃_{α∈(0,1]} αφ̄α(ϑ)


holds for an arbitrary α ∈ (0, 1], the conclusion follows from Theorem 10.1.3.
It is not difficult to extend the results above to fuzzy-valued functionals of other types.
Theorem 10.2.4. If the fuzzy-valued functional Π(ỹ(x)) has the 1st and 2nd fuzzy-valued variations δΠ̃ and δ²Π̃ and, at ỹ = ỹ₀(x),
δΠ(ỹ₀(x)) = 0, δ²Π(ỹ₀(x)) ≠ 0
hold, then an extreme value is taken by the fuzzy-valued functional Π(ỹ(x)) on ỹ = ỹ₀(x). At δ²Π(ỹ₀(x)) ⊂ 0 a maximum exists, and at δ²Π(ỹ₀(x)) ⊃ 0 a minimum exists.
Proof: Let the fuzzy-valued functional be φ̃(ϑ) = Π(ỹ₀(x) + ϑδỹ). Since
φ̃(ϑ) = Π(ỹ₀(x) + ϑδỹ) ≜ ⋃_{α∈(0,1]} αφ̄α(ϑ) = ⋃_{α∈(0,1]} αΠ(ȳ₀α(x) + ϑδȳα)
for an arbitrary α ∈ (0, 1], the conclusion follows from Theorem 10.1.4.
With the L-R fuzzy functional variation discussed, we can obtain conclusions corresponding to the results above.
10.2.3 Variation of an Ordinary or Fuzzy-Valued Functional at Fuzzy Points
Let Πy be a variationable ordinary functional on [a, b] and δΠy the variation of Πy. Suppose that X̃ is a fuzzy point, i.e., a convex fuzzy set on R, whose support is
s(X̃) = {x ∈ R | μ_X̃(x) > 0} ⊂ [a, b].
Since δΠy is also a function on [a, b], by using the one-place extension principle we have the following.
Definition 10.2.11. Suppose that δΠy(X̃) = ⋃_{α∈(0,1]} αδΠy(Xα) is the 1st

variation of the ordinary functional at the fuzzy point X̃, where δΠy(Xα) = {z | ∃x ∈ Xα; δΠy(x) = z}, and its membership function is
μ_{δΠy(X̃)}(z) = ⋁_{δΠy(x)=z} μ_X̃(x);
and we call (∂²/∂ϑ²)Π(y(X̃) + ϑδy(X̃))|_{ϑ=0} the 2nd variation of the ordinary functional at fuzzy points, writing δ²Πy(X̃), i.e.,
δ²(Πy(X̃)) ≜ ⋃_{α∈(0,1]} α (∂²/∂ϑ²)Π(y(Xα) + ϑδy(Xα))|_{ϑ=0}.
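The extension-principle construction of Definition 10.2.11 becomes concrete once the fuzzy point and the ordinary variation are specified. The sketch below is my own illustration: it assumes a hypothetical triangular fuzzy point X̃ and a hypothetical variation function δΠy(x) that is continuous and increasing, so the image of each α-cut [lo, hi] is simply [δΠy(lo), δΠy(hi)].

```python
# Hypothetical triangular fuzzy point X~ centred at 1.0 with spread 0.4.
def X_cut(alpha, center=1.0, spread=0.4):
    return center - (1 - alpha)*spread, center + (1 - alpha)*spread

def dPi(x):                  # hypothetical ordinary variation, increasing in x
    return 2.0*x + 1.0

def dPi_at_fuzzy_point(alpha):
    lo, hi = X_cut(alpha)
    # extension principle for a monotone map: the alpha-cut of dPi(X~)
    # is the image of the alpha-cut of X~
    return dPi(lo), dPi(hi)

cuts = {a: dPi_at_fuzzy_point(a) for a in (0.25, 0.5, 0.75, 1.0)}
# at alpha = 1 the fuzzy point collapses to its core, so does the image
assert cuts[1.0] == (3.0, 3.0)
# nestedness: a larger alpha gives a smaller image interval
assert cuts[0.25][0] <= cuts[0.5][0] <= cuts[0.5][1] <= cuts[0.25][1]
```

For non-monotone δΠy the image α-cut would instead be the full range of δΠy over the cut, which is what the supremum in the membership formula encodes.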

The variation properties of an ordinary functional at ordinary points can be extended to the variation of an ordinary functional at fuzzy points by Definition 10.2.11.
Definition 10.2.12. Let Πỹ be a one-place fuzzy-valued functional, variationable on [a, b], whose variation δΠỹ is a mapping from [a, b] to F(R). By the extension principle, let X̃ be a fuzzy point with support S(X̃) ⊂ [a, b]. Then the variation of Πỹ at the fuzzy point X̃ can be defined by
δΠỹ(X̃) = ⋃_{α∈(0,1]} αδΠỹ(Xα) ∈ F(F(R)),
where δΠỹ(Xα) = {γ̃ ∈ F(R) | ∃x ∈ Xα; δΠỹ(x) = γ̃}, its membership function being
μ_{δΠỹ(X̃)}(γ̃) = ⋁_{δΠỹ(x)=γ̃} μ_X̃(x);
we call (∂²/∂ϑ²)Π(ỹ(X̃) + ϑδỹ(X̃))|_{ϑ=0} the 2nd variation of the fuzzy-valued functional at fuzzy points, writing δ²Πỹ(X̃) as
δ²Πỹ(X̃) ≜ ⋃_{α∈(0,1]} α (∂²/∂ϑ²)Π(ỹ(Xα) + ϑδỹ(Xα))|_{ϑ=0}.

The corresponding results of Section 10.1 and Section 10.2.2 carry over to the variation of ordinary or fuzzy-valued functionals at fuzzy points; the details are omitted here.
10.2.4 Conclusion
In this section we have put forward the basic concepts and properties of variation for interval and fuzzy functionals, and discussed further results which can be widely used in fuzzy physics, engineering theory, and approximate calculation. The variational calculus on the boundary and direct algorithms for variational problems under a fuzzy environment will be discussed next.

10.3 Convex Interval and Fuzzy Function and Functional

10.3.1 Introduction
On the foundation of interval and fuzzy functions, we introduce the concepts of convex interval and convex fuzzy functions and functionals, give the definitions of a convex function and convex functional for an interval and an ordinary function at fuzzy points, and establish conditions for judging their convexity.
10.3.2 Convex Interval Function with Functional
1. Convex interval function
See Ref. [Cen87] for the definition of an interval function.
Definition 10.3.1. Let J̄(y) = [J⁻(y), J⁺(y)] (J⁻(y) ≤ J⁺(y)) be an interval function defined on [a, b] ⊂ D ⊂ R (D a convex region and R the real field). If for all λ ∈ [0, 1] and y, z ∈ D there always hold
J⁻(λy + (1 − λ)z) ≤ λJ⁻(y) + (1 − λ)J⁻(z) and J⁺(λy + (1 − λ)z) ≤ λJ⁺(y) + (1 − λ)J⁺(z),
i.e.,
J̄(λy + (1 − λ)z) ⊆ λJ̄(y) + (1 − λ)J̄(z), (10.3.1)
we call J̄(y) a convex interval function.
For an interval function J̄(y), if J̄ is convex, then −J̄(y) ≜ [−J⁺(y), −J⁻(y)] is a concave one.
Definition 10.3.2. Suppose J̄(y) is an interval function and at y₀ ∈ [a, b] the ordinary nth derivatives J⁻⁽ⁿ⁾(y₀) and J⁺⁽ⁿ⁾(y₀) (n = 1, 2) exist; then J̄(y) is said to be nth derivable at y₀, and
[min{J⁻⁽ⁿ⁾(y₀), J⁺⁽ⁿ⁾(y₀)}, max{J⁻⁽ⁿ⁾(y₀), J⁺⁽ⁿ⁾(y₀)}]
is the nth interval derivative of J̄(y) at y₀.
When J⁻⁽ⁿ⁾(y₀) ≤ J⁺⁽ⁿ⁾(y₀), [J⁻⁽ⁿ⁾(y₀), J⁺⁽ⁿ⁾(y₀)] is the nth same-order interval derivative of J̄(y) at y₀; otherwise, [J⁺⁽ⁿ⁾(y₀), J⁻⁽ⁿ⁾(y₀)] is the nth antitone interval derivative of J̄(y) at y₀. In this book we assume the functions are all same-order derivable.
In the binary situation (the n(≥3)-variate circumstance is discussed similarly), we call
∂²J̄(yᵢ, yₖ)/∂yᵢ∂yₖ = {∂²J⁻(yᵢ, yₖ)/∂yᵢ∂yₖ, ∂²J⁺(yᵢ, yₖ)/∂yᵢ∂yₖ}
the 2nd partial derivative of the binary interval function J̄. It is not difficult to obtain the definition of an interval matrix and the interval Taylor theorem [JM61] by using the definition of an interval function.
Theorem 10.3.1. If J̄(y) is a 2nd differentiable interval function whose interval matrix satisfies (∂²J̄/∂yᵢ∂yₖ) ⊇ 0, then J̄ is a convex interval function.


Proof: Following the proof in Ref. [JM61], set
f̄(t) = J̄(ty + (1 − t)z).
Since
f̄″(t) = Σ_{i,k} (yᵢ − zᵢ)(yₖ − zₖ)(∂²J̄/∂yᵢ∂yₖ)|_{ty+(1−t)z},
the right side is non-negative, so that f̄″(t) ⊇ 0. Applying the Taylor theorem [JM61] to the functions f̄⁻(t) and f̄⁺(t), respectively, we get
f̄(1) − f̄(λ) = (1 − λ)f̄′(λ) + ½(1 − λ)²f̄″(λ′) ⊇ (1 − λ)f̄′(λ), (10.3.2)
where λ′ is a number between 1 and λ. Similarly,
f̄(0) − f̄(λ) ⊇ −λf̄′(λ). (10.3.3)
Taking λ × (10.3.2) + (1 − λ) × (10.3.3) gives
λf̄(1) + (1 − λ)f̄(0) − f̄(λ) ⊇ 0,
which is (10.3.1), J̄ being a convex function by Definition 10.3.1. The theorem is proved.
Note 10.3.1. The derivative of an interval function is not necessarily an interval number [WL85].
2. Convex interval functional
Definition 10.3.3. Let
Π̄(y, y′) = ∫_{λ₀}^{λ₁} F̄(x, y, y′)dx (10.3.4)

= [∫_{λ₀}^{λ₁} F⁻(x, y, y′)dx, ∫_{λ₀}^{λ₁} F⁺(x, y, y′)dx] = [Π⁻(y, y′), Π⁺(y, y′)].
Then we call (10.3.4) an interval functional, where F̄ is an interval function.
Definition 10.3.4. Let Π̄ be an interval functional defined in a convex region D. If for 0 ≤ λ ≤ 1 and y, y′; z, z′ ∈ D we always have
Π̄[λy + (1 − λ)z, λy′ + (1 − λ)z′] ⊆ λΠ̄(y, y′) + (1 − λ)Π̄(z, z′), (10.3.5)
we call the interval functional Π̄ convex in D.
If Π̄(y, y′) is a convex interval functional, then −Π̄(y, y′) ≜ [−Π⁺(y, y′), −Π⁻(y, y′)] is a concave one.
Theorem 10.3.2. Let F̄y′y′ ⊇ 0 and F̄yyF̄y′y′ − (F̄yy′)² ⊇ 0. Then F̄(x, y, y′) is a convex interval function of the two variables y(x), y′(x). If


y(x) and y′(x) are regarded as two independent functions, then Π̄(y, y′) is a convex interval functional in the sense of Definition 10.3.3.
Proof: As with Formula (10.3.1), for 0 ≤ λ ≤ 1 and y, y′; z, z′ ∈ D, (10.3.5) always holds. Similarly to the proof of Theorem 10.3.1, we only need to prove
(∂²/∂t²)Π̄(ty + (1 − t)z, ty′ + (1 − t)z′) ⊇ 0. (10.3.6)
From Formula (10.3.4) in Definition 10.3.3, the left side of Formula (10.3.6) is
∫[(F̄yy)(y − z)² + 2(F̄yy′)(y − z)(y′ − z′) + (F̄y′y′)(y′ − z′)²]dx, (10.3.7)
where (F̄yy), etc., stands for F̄yy(x, ty + (1 − t)z, ty′ + (1 − t)z′), etc. By assumption, we know
(F⁻yy)(y − z)² + 2(F⁻yy′)(y − z)(y′ − z′) + (F⁻y′y′)(y′ − z′)² ≥ 0,
(F⁺yy)(y − z)² + 2(F⁺yy′)(y − z)(y′ − z′) + (F⁺y′y′)(y′ − z′)² ≥ 0.

Therefore
(F̄yy)(y − z)² + 2(F̄yy′)(y − z)(y′ − z′) + (F̄y′y′)(y′ − z′)² ⊇ 0,
i.e., (10.3.7) ⊇ 0, so that (10.3.6) holds.
10.3.3 Convex Function with Functional at Fuzzy Points
1. Convex function at fuzzy points
Suppose J is an ordinary differentiable function defined on [a, b] and x̃ a fuzzy point (i.e., a convex fuzzy set on R) with support
S(x̃) = {x ∈ R | μ_x̃(x) > 0} ⊆ [a, b].
Suppose again that y(x̃) is also a fuzzy point, with support S(y(x̃)) = {y(x) ∈ R | μ_{y(x̃)}(y(x)) > 0} ⊆ [c, d]. Then we have the following by the extension principle: if J is a one-place function defined on [a, b] and S(y(x̃)) ⊂ [c, d], we define
J(y(x̃)) ≜ ⋃_{α∈(0,1]} αJ(y(x̄α)).

Definition 10.3.5. Let J(y(x̃)) be an ordinary function defined on [a, b]. We call J(y(x̃)) a convex function at the fuzzy point x̃ if for all λ, α ∈ [0, 1] and y(x̃), z(x̃) ∈ R we have


J(λy(x̃) + (1 − λ)z(x̃)) ⊆ λJ(y(x̃)) + (1 − λ)J(z(x̃))
⟺ ⋃_{α∈(0,1]} α{J(λy(x̄α) + (1 − λ)z(x̄α))} ⊆ ⋃_{α∈(0,1]} α{λJ(y(x̄α)) + (1 − λ)J(z(x̄α))}. (10.3.8)

Definition 10.3.6. Let J(y(x̃)) be an ordinary function defined on [a, b]. If for every α ∈ (0, 1] the derivative J⁽ⁿ⁾(y(x̄₀α)) (n = 1, 2) exists at the point y(x̄₀α) ∈ R, then the nth derivative of J(y(x̃)) is said to exist at the fuzzy point y(x̃₀), written as
J⁽ⁿ⁾(y(x̃₀)) = ⋃_{α∈(0,1]} αJ⁽ⁿ⁾(y(x̄₀α)),
where J⁽ⁿ⁾(y(x̄₀α)) = {γ | ∃y(x₀) ∈ y(x̄₀α), J⁽ⁿ⁾(y(x₀)) = γ}, its membership function being
μ_{J⁽ⁿ⁾(y(x̃₀))}(γ) = ⋁_{J⁽ⁿ⁾(y(x₀))=γ} μ_{y(x̃₀)}(y(x₀)).
In the binary situation (the n(≥3)-variate circumstance is discussed similarly), we call
∂²J(yᵢ(x̃), yₖ(x̃))/∂yᵢ∂yₖ = ⋃_{α∈(0,1]} α ∂²J(yᵢ(x̄α), yₖ(x̄α))/∂yᵢ∂yₖ
the 2nd partial derivative of a binary ordinary function at fuzzy points, with membership function
μ_{∂²J(yᵢ(x̃),yₖ(x̃))/∂yᵢ∂yₖ}(γ) = ⋁_{∂²J(yᵢ(x),yₖ(x))/∂yᵢ∂yₖ=γ} {μ_{yᵢ(x̃)}(yᵢ) ∧ μ_{yₖ(x̃)}(yₖ)}.

Theorem 10.3.3. Let y(x̃) be a fuzzy point. If J is a 2nd differentiable ordinary function with matrix (∂²J/∂yᵢ∂yₖ) ⊇ 0, then J(y(x̃)) is a convex function at fuzzy points.
Proof: According to the assumption and the definition of fuzzy numbers, let
f(t) = J(ty(x̃) + (1 − t)z(x̃))
be a function of t only. Then
f″(t) = Σ_{i,k} (yᵢ(x̃) − zᵢ(x̃))(yₖ(x̃) − zₖ(x̃))(∂²J/∂yᵢ∂yₖ)|_{ty(x̃)+(1−t)z(x̃)},


and the right end is non-negative, because
Σ_{i,k} (yᵢ(x̃) − zᵢ(x̃))(yₖ(x̃) − zₖ(x̃))(∂²J/∂yᵢ∂yₖ)|_{ty(x̃)+(1−t)z(x̃)} = ⋃_{α∈(0,1]} α{Σ_{i,k} (yᵢ(x̄α) − zᵢ(x̄α))(yₖ(x̄α) − zₖ(x̄α))(∂²J/∂yᵢ∂yₖ)|_{ty(x̃)+(1−t)z(x̃)}}
is obviously non-negative; hence f″(t) ≥ 0. From the extension principle and by applying the Taylor theorem, we get
f(1) − f(λ) = (1 − λ)f′(λ) + ½(1 − λ)²f″(λ′) ⊇ (1 − λ)f′(λ), (10.3.9)
where λ′ is a number between 1 and λ. Similarly,
f(0) − f(λ) ⊇ −λf′(λ). (10.3.10)
Taking λ × (10.3.9) + (1 − λ) × (10.3.10) gives λf(1) + (1 − λ)f(0) − f(λ) ⊇ 0, i.e., (10.3.8). Hence J(y(x̃)) is a convex function at fuzzy points in the sense of Definition 10.3.5, and the theorem holds.
2. Convex functional at fuzzy points
Definition 10.3.7. Suppose Π is an ordinary functional and x̃ a fuzzy point in R. Then we call
Π(y(x̃), y′(x̃)) = ∫_{λ₀}^{λ₁} F(x̃, y(x̃), y′(x̃))dx

≜ ⋃_{α∈(0,1]} αΠ(y(x̄α), y′(x̄α)) = ⋃_{α∈(0,1]} α ∫_{λ₀}^{λ₁} F(x̄α, y(x̄α), y′(x̄α))dx (10.3.11)

a functional at fuzzy points, where F is an ordinary function.
Definition 10.3.8. Let Π be an ordinary functional defined in a convex region D. If at fuzzy points y(x̃), z(x̃) ∈ R, for arbitrary λ ∈ [0, 1], there is
Π(λy(x̃) + (1 − λ)z(x̃), λy′(x̃) + (1 − λ)z′(x̃)) ⊆ λΠ(y(x̃), y′(x̃)) + (1 − λ)Π(z(x̃), z′(x̃))
⟺ ⋃_{α∈(0,1]} α{Π(λy(x̄α) + (1 − λ)z(x̄α), λy′(x̄α) + (1 − λ)z′(x̄α))} ⊆ ⋃_{α∈(0,1]} α{λΠ(y(x̄α), y′(x̄α)) + (1 − λ)Π(z(x̄α), z′(x̄α))}, (10.3.12)
then Π is called a convex functional at fuzzy points in D.

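The pointwise convexity criterion of Theorem 10.3.2 (F_{y′y′} ≥ 0 together with F_{yy}F_{y′y′} − (F_{yy′})² ≥ 0 at every point, applied to each interval endpoint) can be checked numerically. The sketch below is my own illustration for the hypothetical integrand F(y, y′) = y² + y·y′ + y′², whose Hessian is the constant matrix [[2, 1], [1, 2]]; the second partials are approximated by central finite differences.

```python
def F(y, yp):                         # hypothetical integrand F(y, y')
    return y*y + y*yp + yp*yp

def second_partials(y, yp, h=1e-4):
    # central finite differences for F_yy, F_yy', F_y'y'
    f_yy = (F(y + h, yp) - 2*F(y, yp) + F(y - h, yp)) / h**2
    f_pp = (F(y, yp + h) - 2*F(y, yp) + F(y, yp - h)) / h**2
    f_yp = (F(y + h, yp + h) - F(y + h, yp - h)
            - F(y - h, yp + h) + F(y - h, yp - h)) / (4*h**2)
    return f_yy, f_yp, f_pp

# the criterion holds at every sample point: F_{y'y'} >= 0 and
# F_{yy} F_{y'y'} - (F_{yy'})^2 >= 0 (here approximately 2*2 - 1^2 = 3)
for y in (-1.0, 0.0, 2.0):
    for yp in (-0.5, 1.5):
        f_yy, f_yp, f_pp = second_partials(y, yp)
        assert f_pp >= 0 and f_yy*f_pp - f_yp**2 >= -1e-6
```

For an interval integrand F̄ the same check would be run on each endpoint function F⁻ and F⁺ separately, matching the endpoint-wise inequalities in the proof of Theorem 10.3.2.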

Theorem 10.3.4. Let Fy′y′ ⊃ 0 and FyyFy′y′ − (Fyy′)² ⊇ 0. Then F(x̃, y(x̃), y′(x̃)) is a convex function of the two fuzzy variables y(x̃) and y′(x̃). If y(x̃) and y′(x̃) are regarded as two independent fuzzy functions, then Π(y(x̃), y′(x̃)) is a convex functional at fuzzy points as defined by (10.3.11).
Proof: As with Formula (10.3.8), for 0 ≤ λ ≤ 1, (10.3.12) always holds. Similarly to the proof of Theorem 10.3.3, we only need to prove
(∂²/∂t²)Π(ty(x̃) + (1 − t)z(x̃), ty′(x̃) + (1 − t)z′(x̃)) ⊇ 0. (10.3.13)
From Formula (10.3.11) in Definition 10.3.7, the left end of Formula (10.3.13) is
∫[(Fyy)(y(x̃) − z(x̃))² + 2(Fyy′)(y(x̃) − z(x̃))(y′(x̃) − z′(x̃)) + (Fy′y′)(y′(x̃) − z′(x̃))²]dx, (10.3.14)
where (Fyy), etc., stands for Fyy(x̃, ty(x̃) + (1 − t)z(x̃), ty′(x̃) + (1 − t)z′(x̃)), etc. By assumption, we know
(Fyy)(y(x⁻) − z(x⁻))² + 2(Fyy′)(y(x⁻) − z(x⁻))(y′(x⁻) − z′(x⁻)) + (Fy′y′)(y′(x⁻) − z′(x⁻))² ≥ 0,
(Fyy)(y(x⁺) − z(x⁺))² + 2(Fyy′)(y(x⁺) − z(x⁺))(y′(x⁺) − z′(x⁺)) + (Fy′y′)(y′(x⁺) − z′(x⁺))² ≥ 0.
Therefore,
⋃_{α∈(0,1]} α[(Fyy)(y(x̄α) − z(x̄α))² + 2(Fyy′)(y(x̄α) − z(x̄α))(y′(x̄α) − z′(x̄α)) + (Fy′y′)(y′(x̄α) − z′(x̄α))²] ⊇ 0
⟹ (Fyy)(y(x̃) − z(x̃))² + 2(Fyy′)(y(x̃) − z(x̃))(y′(x̃) − z′(x̃)) + (Fy′y′)(y′(x̃) − z′(x̃))² ⊇ 0,
i.e., (10.3.14) ⊇ 0, so that (10.3.13) holds.
10.3.4 Conclusion
In this section, we have expanded the classical concept of convexity and established a theoretical frame for convex interval and fuzzy functions and convex functionals. In the next section we advance to convex fuzzy-valued functions and functionals. Within this frame, many optimization problems concerning static and dynamic cases under interval and fuzzy environments can be studied; work on this aspect will be continued.

10.4 Convex Fuzzy-Valued Function and Functional

In this section, on the foundation of fuzzy-valued function and functional variation, we put forward the following [Cao09]:
(1) developing a concept of convex fuzzy-valued functions and functionals;
(2) discussing the convexity of fuzzy-valued functions and functionals at ordinary and fuzzy points, respectively.
10.4.1 Convex Fuzzy-Valued Function and Functional at Ordinary Points
1. Convex fuzzy-valued function at ordinary points
Fuzzy-valued functions and functionals can be defined similarly to the previous section.
Definition 10.4.1. Suppose J̃(y) is a fuzzy-valued function defined on [a, b], with
J̃(y) ≜ ⋃_{α∈(0,1]} αJ̄α(y) = ⋃_{α∈(0,1]} α[Jα⁻(y), Jα⁺(y)].
If for all λ ∈ [0, 1] and y, z ∈ R we have
J̃(λy + (1 − λ)z) ⊆ λJ̃(y) + (1 − λ)J̃(z), (10.4.1)
then we call J̃(y) a convex fuzzy-valued function. Here
(10.4.1) ⟺ ⋃_{α∈(0,1]} α{J̄α(λy + (1 − λ)z)} ⊆ ⋃_{α∈(0,1]} α{λJ̄α(y) + (1 − λ)J̄α(z)}
⟺ ⋃_{α∈(0,1]} α{Jα⁻(λy + (1 − λ)z)} ≤ ⋃_{α∈(0,1]} α{λJα⁻(y) + (1 − λ)Jα⁻(z)} and ⋃_{α∈(0,1]} α{Jα⁺(λy + (1 − λ)z)} ≤ ⋃_{α∈(0,1]} α{λJα⁺(y) + (1 − λ)Jα⁺(z)}.
If J̃(y) is a convex fuzzy-valued function, then −J̃(y) = ⋃_{α∈(0,1]} α[−Jα⁺(y), −Jα⁻(y)] is a concave one.

Definition 10.4.2. Let J̃(y) be a fuzzy-valued function defined on the interval [a, b]. If at some point y₀ ∈ (a, b] the nth interval derivative J̄α⁽ⁿ⁾(y₀) (n = 1, 2) exists for every α ∈ (0, 1], then the nth fuzzy-valued derivative of J̃(y) is said to exist at y₀, written as
J̃⁽ⁿ⁾(y₀) = ⋃_{α∈(0,1]} αJ̄α⁽ⁿ⁾(y₀) = ⋃_{α∈(0,1]} α[Jα⁻⁽ⁿ⁾(y₀), Jα⁺⁽ⁿ⁾(y₀)],
its membership function being
μ_{J̃⁽ⁿ⁾(y₀)}(γ) = ⋁{α | Jα⁻⁽ⁿ⁾(y₀) = γ, or Jα⁺⁽ⁿ⁾(y₀) = γ}.
In the binary situation (the n(≥3)-variate circumstance is discussed similarly), we call
∂²J̃(yᵢ, yₖ)/∂yᵢ∂yₖ = ⋃_{α∈(0,1]} α (∂/∂yₖ)(∂/∂yᵢ)J̄α(yᵢ, yₖ) = {⋃_{α∈(0,1]} α{∂²Jα⁻(yᵢ, yₖ)/∂yᵢ∂yₖ}, ⋃_{α∈(0,1]} α{∂²Jα⁺(yᵢ, yₖ)/∂yᵢ∂yₖ}}
the 2nd partial derivative of the binary fuzzy-valued function J̃, its membership function being
μ_{∂²J̃(yᵢ,yₖ)/∂yᵢ∂yₖ}(γ) = ⋁{α | ∂²Jα⁻(yᵢ, yₖ)/∂yᵢ∂yₖ = γ, or ∂²Jα⁺(yᵢ, yₖ)/∂yᵢ∂yₖ = γ}.

Theorem 10.4.1. If J̃(y) is a 2nd differentiable fuzzy-valued function whose fuzzy-valued matrix satisfies (∂²J̃/∂yᵢ∂yₖ) ⊇ 0, then J̃ is a convex fuzzy-valued function.
Proof: According to the assumption and the definition of a fuzzy-valued function, let
f̃(t) = J̃(ty + (1 − t)z).
Because the right side of f̃″(t) = Σ_{i,k}(yᵢ − zᵢ)(yₖ − zₖ)(∂²J̃/∂yᵢ∂yₖ)|_{ty+(1−t)z} is non-negative, we have f̃″(t) ⊇ 0, and from the extension principle and by applying the Taylor theorem we get
f̃(1) − f̃(λ) = (1 − λ)f̃′(λ) + ½(1 − λ)²f̃″(λ′) ⊇ (1 − λ)f̃′(λ), (10.4.2)
where λ′ is a number between 1 and λ. Similarly,
f̃(0) − f̃(λ) ⊇ −λf̃′(λ). (10.4.3)
Taking λ × (10.4.2) + (1 − λ) × (10.4.3) gives λf̃(1) + (1 − λ)f̃(0) − f̃(λ) ⊇ 0, which is (10.4.1). Hence J̃ is a convex fuzzy-valued function by Definition 10.4.1, and the theorem is proved.
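Convexity of a fuzzy-valued function in the sense of Definition 10.4.1 reduces to ordinary convexity of each α-cut endpoint function. The sketch below is my own illustration: it assumes a hypothetical fuzzy-valued function whose α-cuts are J̄α(y) = [y² − (1 − α), y² + (1 − α)]; both endpoints are y² shifted by a constant, hence convex, so the endpoint inequalities in the expansion of (10.4.1) should hold for every α and λ.

```python
def J_cut(y, alpha):
    # hypothetical alpha-cut [J_alpha^-(y), J_alpha^+(y)] of J~(y)
    return y*y - (1 - alpha), y*y + (1 - alpha)

def convex_at(y, z, lam, alpha):
    mid_lo, mid_hi = J_cut(lam*y + (1 - lam)*z, alpha)
    lo = lam*J_cut(y, alpha)[0] + (1 - lam)*J_cut(z, alpha)[0]
    hi = lam*J_cut(y, alpha)[1] + (1 - lam)*J_cut(z, alpha)[1]
    # the expansion of (10.4.1) asks, endpoint-wise,
    # J_alpha±(lam*y + (1-lam)*z) <= lam*J_alpha±(y) + (1-lam)*J_alpha±(z)
    return mid_lo <= lo + 1e-12 and mid_hi <= hi + 1e-12

for alpha in (0.2, 0.6, 1.0):
    for lam in (0.0, 0.3, 0.7, 1.0):
        assert convex_at(-1.0, 2.0, lam, alpha)
```

Any level-wise family of convex endpoint functions would pass the same check, which is exactly the content of the equivalence stated after Definition 10.4.1.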


Note 10.1. The derivative of a fuzzy-valued function is not necessarily a fuzzy number [WL85].
2. Convex fuzzy-valued functional at ordinary points
Definition 10.4.3. We call the form
Π̃(y, y′) = ∫_{λ₀}^{λ₁} F̃(x, y, y′)dx ≜ ⋃_{α∈(0,1]} αΠ̄α(y, y′) = ⋃_{α∈(0,1]} α[Πα⁻(y, y′), Πα⁺(y, y′)] = ⋃_{α∈(0,1]} α ∫_{λ₀}^{λ₁} F̄α(x, y, y′)dx (10.4.4)

a fuzzy-valued functional, where F̃ is a fuzzy-valued function.
Definition 10.4.4. Let Π̃(y, y′) be a fuzzy-valued functional defined in a convex region D. If for all λ ∈ [0, 1] and y, y′; z, z′ ∈ D we always have
Π̃(λy + (1 − λ)z, λy′ + (1 − λ)z′) ⊆ λΠ̃(y, y′) + (1 − λ)Π̃(z, z′)
⟺ ⋃_{α∈(0,1]} αΠ̄α(λy + (1 − λ)z, λy′ + (1 − λ)z′) ⊆ ⋃_{α∈(0,1]} α{λΠ̄α(y, y′) + (1 − λ)Π̄α(z, z′)}, (10.4.5)
we call the fuzzy-valued functional Π̃(y, y′) convex in D.
If Π̃(y, y′) is a convex fuzzy-valued functional, then −Π̃(y, y′) = ⋃_{α∈(0,1]} α[−Πα⁺(y, y′), −Πα⁻(y, y′)] is a concave one.

Theorem 10.4.2. Let F̃y′y′ ⊃ 0 and F̃yyF̃y′y′ − (F̃yy′)² ⊇ 0. Then F̃(x, y, y′) is a convex fuzzy-valued function of the two variables y(x) and y′(x). If y(x) and y′(x) are regarded as two independent functions, then Π̃(y, y′) is a convex fuzzy-valued functional by Definition 10.4.3.
Proof: As with Formula (10.4.1), for 0 ≤ λ ≤ 1 and y, y′; z, z′ ∈ D, (10.4.5) always holds. Similarly to the proof of Theorem 10.4.1, we only need to prove
(∂²/∂t²)Π̃(ty + (1 − t)z, ty′ + (1 − t)z′) ⊇ 0. (10.4.6)
From Formula (10.4.4) in Definition 10.4.3, the left side of Formula (10.4.6) is
∫[(F̃yy)(y − z)² + 2(F̃yy′)(y − z)(y′ − z′) + (F̃y′y′)(y′ − z′)²]dx, (10.4.7)


where (F̃yy), etc., stands for F̃yy(x, ty + (1 − t)z, ty′ + (1 − t)z′), etc. By assumption, we know
(F̄α)yy(y − z)² + 2(F̄α)yy′(y − z)(y′ − z′) + (F̄α)y′y′(y′ − z′)² ⊇ 0;
therefore
⋃_{α∈(0,1]} α{(F̄α)yy(y − z)² + 2(F̄α)yy′(y − z)(y′ − z′) + (F̄α)y′y′(y′ − z′)²} ⊇ 0
⟹ (F̃yy)(y − z)² + 2(F̃yy′)(y − z)(y′ − z′) + (F̃y′y′)(y′ − z′)² ⊇ 0,
i.e., (10.4.7) ⊇ 0, so that (10.4.6) holds.
10.4.2 Convex Fuzzy-Valued Function and Functional at Fuzzy Points
1. Convex fuzzy-valued function at fuzzy points
Suppose that J̃ is a one-place fuzzy-valued function defined on [a, b]. By the extension principle, if y(x̃) is a fuzzy point with support S(y(x̃)) ⊂ [c, d], then
J̃(y(x̃)) ≜ ⋃_{α∈(0,1]} αJ̃(y(x̄α)) ∈ F(F(R))
is a fuzzy-valued function defined at the fuzzy points, where J̃(y(x̄α)) = {γ̃ ∈ F(R) | ∃y(x) ∈ y(x̄α), J̃(y(x)) = γ̃}, its membership function being
μ_{J̃(y(x̃))}(γ̃) = ⋁_{J̃(y(x))=γ̃} μ_{y(x̃)}(y(x)).

Definition 10.4.5. If for all λ ∈ [0, 1] and fuzzy points y(x̃), z(x̃) ∈ R there is
J̃[λy(x̃) + (1 − λ)z(x̃)] ⊆ λJ̃(y(x̃)) + (1 − λ)J̃(z(x̃))
⟺ ⋃_{α∈(0,1]} α{J̃[λy(x̄α) + (1 − λ)z(x̄α)]} ⊆ ⋃_{α∈(0,1]} α{λJ̃(y(x̄α)) + (1 − λ)J̃(z(x̄α))},
then we call J̃ a convex fuzzy-valued function at fuzzy points.
Definition 10.4.6. Suppose J̃(y(x)) is a fuzzy-valued function defined on the interval [a, b]. If for every α ∈ (0, 1], J̃⁽ⁿ⁾(y(x̄₀α)) (n = 1, 2) exists at a certain point y(x̄₀α) ∈ R, then the nth derivative of J̃(y(x)) is said to exist at the fuzzy point y(x̃₀), written as
J̃⁽ⁿ⁾(y(x̃₀)) = ⋃_{α∈(0,1]} αJ̃⁽ⁿ⁾(y(x̄₀α)) ∈ F(F(R)),


where J̃⁽ⁿ⁾(y(x̄₀α)) = {γ̃ ∈ F(R) | ∃y(x₀) ∈ y(x̄₀α), J̃⁽ⁿ⁾(y(x₀)) = γ̃}, its membership function being
μ_{J̃⁽ⁿ⁾(y(x̃₀))}(γ̃) = ⋁_{J̃⁽ⁿ⁾(y(x₀))=γ̃} μ_{y(x̃₀)}(y(x₀)).
In the binary situation (the n(≥3)-variate circumstance is discussed similarly), we call
∂²J̃(yᵢ(x̃), yₖ(x̃))/∂yᵢ∂yₖ = ⋃_{α∈(0,1]} α ∂²J̃(yᵢ(x̄α), yₖ(x̄α))/∂yᵢ∂yₖ ∈ F(F(R))
the 2nd partial derivative of a binary fuzzy-valued function at fuzzy points, where
∂²J̃(yᵢ(x̄α), yₖ(x̄α))/∂yᵢ∂yₖ = {γ̃ | ∃(yᵢ(x), yₖ(x)) ∈ yᵢ(x̄α) × yₖ(x̄α), ∂²J̃(yᵢ(x), yₖ(x))/∂yᵢ∂yₖ = γ̃},
its membership function being
μ_{∂²J̃(yᵢ(x̃),yₖ(x̃))/∂yᵢ∂yₖ}(γ̃) = ⋁_{∂²J̃(yᵢ(x),yₖ(x))/∂yᵢ∂yₖ=γ̃} {μ_{yᵢ(x̃)}(yᵢ(x)) ∧ μ_{yₖ(x̃)}(yₖ(x))}.

Theorem 10.4.3. Let y(x̃) be a fuzzy point. If J̃ is a 2nd differentiable fuzzy-valued function whose fuzzy-valued matrix satisfies (∂²J̃/∂yᵢ∂yₖ) ⊇ 0, then J̃ is a convex fuzzy-valued function at fuzzy points.
Combining Theorem 10.3.1 with Theorem 10.3.3, we obtain a proof of this theorem immediately.
2. Convex fuzzy-valued functional at fuzzy points
Definition 10.4.7. Suppose Π̃ is a fuzzy-valued functional and x̃ a fuzzy point in R. Then we call
Π̃(y(x̃), y′(x̃)) = ∫_{λ₀}^{λ₁} F̃(x̃, y(x̃), y′(x̃))dx ≜ ⋃_{α∈(0,1]} αΠ̃(y(x̄α), y′(x̄α)) = ⋃_{α∈(0,1]} α ∫_{λ₀}^{λ₁} F̃(x̄α, y(x̄α), y′(x̄α))dx
a fuzzy-valued functional at fuzzy points.
Definition 10.4.8. Let Π̃ be a fuzzy-valued functional defined in a convex region D. If at a fuzzy point x̃ ∈ R, for arbitrary λ ∈ [0, 1], there is


Π̃(λy(x̃) + (1 − λ)z(x̃), λy′(x̃) + (1 − λ)z′(x̃)) ⊆ λΠ̃(y(x̃), y′(x̃)) + (1 − λ)Π̃(z(x̃), z′(x̃))
⟺ ⋃_{α∈(0,1]} α{Π̃(λy(x̄α) + (1 − λ)z(x̄α), λy′(x̄α) + (1 − λ)z′(x̄α))} ⊆ ⋃_{α∈(0,1]} α{λΠ̃(y(x̄α), y′(x̄α)) + (1 − λ)Π̃(z(x̄α), z′(x̄α))},
then we call Π̃ a convex fuzzy-valued functional at fuzzy points in D.
Theorem 10.4.4. Let F̃y′y′ ⊃ 0 and F̃yyF̃y′y′ − (F̃yy′)² ⊇ 0. Then F̃(x̃, y(x̃), y′(x̃)) is a convex fuzzy-valued function of the two fuzzy variables y(x̃) and y′(x̃). If y(x̃) and y′(x̃) are regarded as two independent fuzzy functions, then Π̃(y(x̃), y′(x̃)) in Definition 10.4.7 is a convex fuzzy-valued functional at fuzzy points.
Combining Theorem 10.3.2 with Theorem 10.3.4, we obtain a proof of this theorem immediately.

10.5 Variation of Condition Extremum on Interval and Fuzzy-Valued Functional
10.5.1 Introduction
In this section, the interval and fuzzy-valued variation is extended to a functional condition extremum, developing that of an interval and fuzzy-valued functional and verifying the effectiveness of the extension with a numerical example.
10.5.2 Variation of Condition Extremum in an Interval Functional

Definition 10.5.1. We call
Π̄ = ∫_{x₀}^{x₁} F̄(x, y; y′)dx = [∫_{x₀}^{x₁} F⁻(x, y; y′)dx, ∫_{x₀}^{x₁} F⁺(x, y; y′)dx] (10.5.1)
an interval functional dependent on n unknown functions, where y = y₁, y₂, · · · , yₙ and y′ = y′₁, y′₂, · · · , y′ₙ. In [Cao91a], [Cao01e] and [Luo84a,b] the definitions of an interval value and its functional variation can be found.
Theorem 10.5.1. Suppose that the functions y₁, y₂, · · · , yₙ furnish an extremum of the interval functional (10.5.1) under the condition

ϕ̄ᵢ(x, y) = 0 (i = 1, 2, · · · , m; m < n), (10.5.2)
with the constraints (10.5.2) independent, i.e., among the m-order interval function determinants at least one is not zero,
D(ϕ̄₁, ϕ̄₂, · · · , ϕ̄ₘ)/D(y₁, y₂, · · · , yₘ) ≠ 0. (10.5.3)

Then properly chosen factors λ̄ᵢ(x) and yⱼ (i = 1, · · · , m; j = 1, 2, · · · , n) satisfy the Euler equation determined by the interval functional
Π̄* = ∫_{x₀}^{x₁} (F̄ + Σ_{i=1}^{m} λ̄ᵢ(x)ϕ̄ᵢ)dx = ∫_{x₀}^{x₁} F̄*dx, (10.5.4)

¯ i (x) and yj (x)(i = 1, 2, · · · , m; j = 1, 2, · · · , n) are deterwhile functions λ mined by the interval Euler equations and interval ones d ¯∗ F , = 0 (j = 1, 2, · · · , n), F¯y∗j − dx yj

(10.5.5)

ϕ¯i = 0 (i = 1, 2, · · · , m)

(10.5.6)

¯ 1 (x), λ ¯2 (x), · · · , λ ¯ m (x) are all regarded as respectively. If y1 , y2 , · · · , yn and λ ∗ ¯ model-variable of interval functional Π , then (10.5.6) can be considered as ¯ ∗ , where Euler equations of internal functional Π (10.5.3)

− + D(ϕ− D(ϕ+ 1 , · · · , ϕm ) 1 , · · · , ϕm )

= 0,

= 0. D(y1 , · · · , ym ) D(y1 , · · · , ym )
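Equation (10.5.5) applies the classical Euler residual $F^*_{y_j} - \frac{d}{dx}F^*_{y'_j}$ endpoint-wise to $F^-$ and $F^+$. A minimal crisp sketch of evaluating that residual numerically (the integrand and candidate curves below are assumed examples, not from the book):

```python
# Assumed example: checking the Euler equation F_y - d/dx F_{y'} = 0 along a
# candidate curve by finite differences.  The interval version applies the
# same residual separately to the endpoint integrands F^- and F^+.
import math

def euler_residual(F, y, x, h=1e-5):
    """Residual F_y - d/dx F_{y'} at x, for a scalar integrand F(x, y, y')."""
    yp = lambda t: (y(t + h) - y(t - h)) / (2 * h)           # y'(t)
    F_y = lambda t: (F(t, y(t) + h, yp(t)) - F(t, y(t) - h, yp(t))) / (2 * h)
    F_p = lambda t: (F(t, y(t), yp(t) + h) - F(t, y(t), yp(t) - h)) / (2 * h)
    dFp = (F_p(x + h) - F_p(x - h)) / (2 * h)                # d/dx F_{y'}
    return F_y(x) - dFp

# F = y'^2 - y^2 has extremals y = sin x, so the residual vanishes there:
F = lambda x, y, yp: yp**2 - y**2
assert abs(euler_residual(F, math.sin, 1.0)) < 1e-3
# A non-extremal curve leaves a visible residual:
assert abs(euler_residual(F, lambda x: x**2, 1.0)) > 1e-2
```

For $F = y'^2 - y^2$ the residual is $-2y - 2y''$, which is identically zero along $y = \sin x$.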

Proof: According to the interval definition [Cao01e] and the basic condition of extremum ([Cao91a], Theorem 1.1), we have

$$\delta\bar{\Pi}^* = 0 \iff \int_{x_0}^{x_1}\sum_{j=1}^{n}\Big[\frac{\partial\bar{F}}{\partial y_j} + \sum_{i=1}^{m}\bar{\lambda}_i(x)\frac{\partial\bar{\varphi}_i}{\partial y_j} - \frac{d}{dx}\frac{\partial\bar{F}}{\partial y'_j}\Big]\delta y_j\,dx = 0$$

$$\Rightarrow \int_{x_0}^{x_1}\sum_{j=1}^{m}\Big(\bar{F}^*_{y_j} - \frac{d}{dx}\bar{F}^*_{y'_j}\Big)\delta y_j\,dx = 0 \Rightarrow \bar{F}^*_{y_j} - \frac{d}{dx}\bar{F}^*_{y'_j} = 0 \quad (j = 1, 2, \cdots, m), \qquad (10.5.7)$$

where $\bar{F}^* = \bar{F} + \sum_{i=1}^{m}\bar{\lambda}_i(x)\bar{\varphi}_i$. Besides, since (10.5.7) represents an interval linear group with respect to $\bar{\lambda}_i$, when (10.5.3) holds we have the solution $\bar{\lambda}_i(x) = [\lambda^-_i(x), \lambda^+_i(x)]$ ($i = 1, 2, \cdots, m$). For such $\bar{\lambda}_i(x)$, the necessary condition of the extremum, $\int_{x_0}^{x_1}\sum_{j=1}^{n}\big(\bar{F}^*_{y_j} - \frac{d}{dx}\bar{F}^*_{y'_j}\big)\delta y_j\,dx = 0$, can be changed into $\int_{x_0}^{x_1}\sum_{j=m+1}^{n}\big(\bar{F}^*_{y_j} - \frac{d}{dx}\bar{F}^*_{y'_j}\big)\delta y_j\,dx = 0$. Because of the arbitrariness of $\delta y_j$ ($j = m+1, \cdots, n$), all items are made zero, except one of them, by turns, and by applying basic variation Lemma I in Section 10.1, we have

$$\bar{F}^*_{y_j} - \frac{d}{dx}\bar{F}^*_{y'_j} = 0 \quad (j = m+1, \cdots, n). \qquad (10.5.8)$$

By combination of (10.5.7) and (10.5.8), the condition extremum function required of the functional $\bar{\Pi}^*$ and the factors $\bar{\lambda}_i(x)$ all tally with (10.5.5) and (10.5.6).

Theorem 10.5.2. Suppose that functions $y_1, y_2, \cdots, y_n$ enable an extremum to exist in the interval functional (10.5.1) under the condition

$$\bar{\psi}_i(x, y; y') = 0 \quad (i = 1, 2, \cdots, m;\ m < n) \qquad (10.5.9)$$

with (10.5.9) independent, i.e., there exists an $m$-order interval function determinant

$$\frac{D(\bar{\psi}_1, \bar{\psi}_2, \cdots, \bar{\psi}_m)}{D(y'_1, y'_2, \cdots, y'_m)} \neq 0, \qquad (10.5.10)$$

then properly chosen factors $\bar{\lambda}_i(x)$ and $y_j$ ($i = 1, \cdots, m$; $j = 1, 2, \cdots, n$) enable the interval functional in (10.5.1) to reach the condition extremum curve, i.e., its extremum curve is determined by

$$\bar{\Pi}^* = \int_{x_0}^{x_1}\Big(\bar{F} + \sum_{i=1}^{m}\bar{\lambda}_i(x)\bar{\psi}_i\Big)dx = \int_{x_0}^{x_1}\bar{F}^*_1\,dx,$$

where $\bar{F}^*_1 = \bar{F} + \sum_{i=1}^{m}\bar{\lambda}_i(x)\bar{\psi}_i$, and (10.5.10) means

$$\frac{D(\psi^-_1, \psi^-_2, \cdots, \psi^-_m)}{D(y'_1, y'_2, \cdots, y'_m)} \neq 0, \qquad \frac{D(\psi^+_1, \psi^+_2, \cdots, \psi^+_m)}{D(y'_1, y'_2, \cdots, y'_m)} \neq 0.$$

Proof: The theorem can be proved as Theorem 10.5.1.

10.5.3 Variation on Fuzzy-Valued Functional Condition Extremum at Ordinary Points

Definition 10.5.2. We call

$$\tilde{\Pi} = \int_{x_0}^{x_1}\tilde{F}(x, y; y')\,dx = \bigcup_{\alpha\in(0,1]}\alpha\int_{x_0}^{x_1}\bar{F}_\alpha(x, y; y')\,dx \qquad (10.5.11)$$

a fuzzy-valued functional depending upon $n$ unknown functions, where $\bar{F}_\alpha(x, y; y') = [F^-_\alpha(x, y; y'), F^+_\alpha(x, y; y')]$.
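Definition 10.5.2 reduces a fuzzy-valued functional to a family of interval functionals, one per $\alpha$-cut. A hypothetical Python illustration (the triangular fuzzy coefficient $\tilde{a} = (1, 2, 3)$ and the integrand are assumed for the example, not taken from the book):

```python
# Hypothetical illustration of Definition 10.5.2: with a triangular fuzzy
# coefficient a~ = (1, 2, 3), the alpha-cut integrands are
# F_alpha^- = (1 + alpha) y'^2 and F_alpha^+ = (3 - alpha) y'^2, so each
# alpha-cut of Pi~ is an ordinary interval functional.

def integrate(f, x0, x1, n=1000):
    """Composite trapezoid rule."""
    h = (x1 - x0) / n
    s = 0.5 * (f(x0) + f(x1)) + sum(f(x0 + i * h) for i in range(1, n))
    return s * h

def functional_cut(alpha, y_prime, x0=0.0, x1=1.0):
    """[lower, upper] value of the functional at level alpha."""
    lo = integrate(lambda x: (1 + alpha) * y_prime(x)**2, x0, x1)
    hi = integrate(lambda x: (3 - alpha) * y_prime(x)**2, x0, x1)
    return lo, hi

# Along y(x) = x (so y' = 1), the alpha-cut is [1 + alpha, 3 - alpha]:
lo, hi = functional_cut(0.5, lambda x: 1.0)
assert abs(lo - 1.5) < 1e-9 and abs(hi - 2.5) < 1e-9
# At alpha = 1 the functional collapses to the crisp value 2:
lo, hi = functional_cut(1.0, lambda x: 1.0)
assert abs(lo - 2.0) < 1e-9 and abs(hi - 2.0) < 1e-9
```

The nested intervals over $\alpha \in (0, 1]$ are exactly the level sets that the decomposition in (10.5.11) unites.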


The definitions of fuzzy value and its functional variation can be found in [Cao91a], [Cao01e] and [Luo84a,b].

Theorem 10.5.3. Suppose that functions $y_j$ ($j = 1, 2, \cdots, n$) make an extremum exist in (10.5.11) under the condition

$$\tilde{\varphi}_i(x, y_1, \cdots, y_n) = 0 \quad (i = 1, 2, \cdots, m;\ m < n) \qquad (10.5.12)$$

with (10.5.12) independent, i.e., there exists an $m$-order fuzzy-valued function determinant

$$\frac{D(\tilde{\varphi}_1, \tilde{\varphi}_2, \cdots, \tilde{\varphi}_m)}{D(y_1, y_2, \cdots, y_m)} \neq 0, \qquad (10.5.13)$$

then the properly chosen factors $\tilde{\lambda}_i(x)$ and $y_j$ ($i = 1, 2, \cdots, m$; $j = 1, 2, \cdots, n$) satisfy the Euler equation determined by the fuzzy-valued functional

$$\tilde{\Pi}^* = \int_{x_0}^{x_1}\Big(\tilde{F} + \sum_{i=1}^{m}\tilde{\lambda}_i(x)\tilde{\varphi}_i\Big)dx = \int_{x_0}^{x_1}\tilde{F}^*\,dx, \qquad (10.5.14)$$

while the functions $\tilde{\lambda}_i(x)$ and $y_j(x)$ are determined by the fuzzy-valued Euler equations and the fuzzy-valued constraint equations

$$\tilde{F}^*_{y_j} - \frac{d}{dx}\tilde{F}^*_{y'_j} = 0 \quad (j = 1, 2, \cdots, n), \qquad (10.5.15)$$

$$\tilde{\varphi}_i = 0 \quad (i = 1, 2, \cdots, m), \qquad (10.5.16)$$

respectively. If we regard $\tilde{\lambda}_i(x)$ and $y_j$ ($i = 1, \cdots, m$; $j = 1, \cdots, n$) as the variables of the fuzzy-valued functional $\tilde{\Pi}^*$, we can regard (10.5.16) as Euler equations of the fuzzy functional $\tilde{\Pi}^*$, where (10.5.13) means

$$\bigcup_{\alpha\in(0,1]}\alpha\,\frac{D(\bar{\varphi}_{1\alpha}, \bar{\varphi}_{2\alpha}, \cdots, \bar{\varphi}_{m\alpha})}{D(y_1, y_2, \cdots, y_m)} \neq 0.$$

Proof: According to the fuzzy-value (or fuzzy-valued functional) definition [Cao01e] and its basic condition of extremum ([Cao91a], Theorem 2.1), we have

$$\delta\tilde{\Pi}^* = 0 \iff \bigcup_{\alpha\in(0,1]}\alpha\int_{x_0}^{x_1}\sum_{j=1}^{n}\Big(\bar{F}^*_{y_j\alpha} - \frac{d}{dx}\bar{F}^*_{y'_j\alpha}\Big)\delta y_j\,dx = 0$$

$$\Rightarrow \bigcup_{\alpha\in(0,1]}\alpha\Big(\bar{F}^*_{y_j\alpha} - \frac{d}{dx}\bar{F}^*_{y'_j\alpha}\Big) = 0 \quad (j = 1, 2, \cdots, m), \qquad (10.5.17)$$

where $\bar{F}^*_\alpha = \bar{F}_\alpha + \sum_{i=1}^{m}\bar{\lambda}_i(x)\bar{\varphi}_{i\alpha}$. Besides, (10.5.17) is a fuzzy-valued linear group with respect to $\bar{\lambda}_{i\alpha}$. When (10.5.13) holds, for a certain $\alpha$ we can get the solution $\bar{\lambda}_{i\alpha}(x) = [\lambda^-_{i\alpha}(x), \lambda^+_{i\alpha}(x)]$ ($i = 1, 2, \cdots, m$) by the proof in Theorem 10.5.1. For such $\bar{\lambda}_{i\alpha}(x)$, the necessary condition of the fuzzy extremum,

$$\bigcup_{\alpha\in(0,1]}\alpha\int_{x_0}^{x_1}\sum_{j=1}^{n}\Big(\bar{F}^*_{y_j\alpha} - \frac{d}{dx}\bar{F}^*_{y'_j\alpha}\Big)\delta y_j\,dx = 0,$$

is turned into

$$\bigcup_{\alpha\in(0,1]}\alpha\int_{x_0}^{x_1}\sum_{j=m+1}^{n}\Big(\bar{F}^*_{y_j\alpha} - \frac{d}{dx}\bar{F}^*_{y'_j\alpha}\Big)\delta y_j\,dx = 0.$$

Because $\delta y_j$ is arbitrary, all items are made zero, except one of them, by turns and, by the application of basic variation Lemma II in Section 10.2, we have

$$\bigcup_{\alpha\in(0,1]}\alpha\Big(\bar{F}^*_{y_j\alpha} - \frac{d}{dx}\bar{F}^*_{y'_j\alpha}\Big) = 0 \quad (j = m+1, \cdots, n). \qquad (10.5.18)$$

By combining (10.5.17) and (10.5.18), for arbitrary $\alpha \in (0, 1]$ the condition extremum obtained by the fuzzy-valued functional $\tilde{\Pi}$ and the factors $\bar{\lambda}_i(x)$ all meet (10.5.15) and (10.5.16) by Theorem 10.5.1. Now the theorem holds.

Theorem 10.5.4. Suppose that functions $y_j$ ($j = 1, 2, \cdots, n$) enable an extremum to exist in (10.5.11) under the condition

$$\tilde{\psi}_i(x, y; y') = 0 \quad (i = 1, 2, \cdots, m;\ m < n) \qquad (10.5.19)$$

with (10.5.19) independent, i.e., there exists an $m$-order fuzzy-valued function determinant

$$\frac{D(\tilde{\psi}_1, \tilde{\psi}_2, \cdots, \tilde{\psi}_m)}{D(y'_1, y'_2, \cdots, y'_m)} \neq 0, \qquad (10.5.20)$$

then properly chosen factors $\tilde{\lambda}_i(x)$ and $y_j$ ($i = 1, \cdots, m$; $j = 1, 2, \cdots, n$) enable the fuzzy-valued functional in (10.5.11) to reach the condition extremum curve, i.e., its extremum curve is determined by

$$\tilde{\Pi}^* = \int_{x_0}^{x_1}\Big(\tilde{F} + \sum_{i=1}^{m}\tilde{\lambda}_i(x)\tilde{\psi}_i\Big)dx = \int_{x_0}^{x_1}\tilde{F}^*_1\,dx,$$

where $\tilde{F}^*_1 = \tilde{F} + \sum_{i=1}^{m}\tilde{\lambda}_i(x)\tilde{\psi}_i$, and (10.5.20) means

$$\bigcup_{\alpha\in(0,1]}\alpha\,\frac{D(\psi^-_{1\alpha}, \psi^-_{2\alpha}, \cdots, \psi^-_{m\alpha})}{D(y'_1, y'_2, \cdots, y'_m)} \neq 0, \qquad \bigcup_{\alpha\in(0,1]}\alpha\,\frac{D(\psi^+_{1\alpha}, \psi^+_{2\alpha}, \cdots, \psi^+_{m\alpha})}{D(y'_1, y'_2, \cdots, y'_m)} \neq 0.$$


Proof: The theorem can be proved like Theorem 10.5.3.

10.5.4 Numerical Example

Example 10.5.1: Find the extremum of the fuzzy functional $\tilde{S} = \int_{x_0}^{x_1}\tilde{y}\,dx$ under the equal-circumference condition $\int_{x_0}^{x_1}\sqrt{1 + y'^2}\,dx = \tilde{l}$.

Make a supplementary functional $\tilde{\Pi}^* = \int_{x_0}^{x_1}\big(\tilde{y} + \tilde{\lambda}\sqrt{1 + y'^2}\big)dx$; the fuzzy Euler equation is $\tilde{F} - y'\tilde{F}_{y'} = \tilde{C}_1$, i.e.,

$$\tilde{y} + \tilde{\lambda}\sqrt{1 + y'^2} - \frac{\tilde{\lambda}y'^2}{\sqrt{1 + y'^2}} = \tilde{C}_1. \qquad (10.5.21)$$

For a certain determined $\alpha$, (10.5.21) is

$$\bigcup_{\alpha\in(0,1]}\alpha\Big\{\bar{y}_\alpha + \bar{\lambda}_\alpha\sqrt{1 + \bar{y}'^2_\alpha} - \frac{\bar{\lambda}_\alpha\bar{y}'^2_\alpha}{\sqrt{1 + \bar{y}'^2_\alpha}}\Big\} = \bigcup_{\alpha\in(0,1]}\alpha\{\bar{C}_{1\alpha}\}.$$

We first find

$$\bar{y}_\alpha - \bar{C}_{1\alpha} = -\frac{\bar{\lambda}_\alpha}{\sqrt{1 + \bar{y}'^2_\alpha}}$$

by introducing a parameter $t$ such that $\bar{y}' = \tan t$; then

$$\bar{y}_\alpha - \bar{C}_{1\alpha} = -\bar{\lambda}_\alpha\cos t.$$

From $d\bar{y}_\alpha = \bar{\lambda}_\alpha\sin t\,dt$ and $\dfrac{d\bar{y}_\alpha}{d\bar{x}_\alpha} = \tan t$, we get $d\bar{x}_\alpha = \dfrac{d\bar{y}_\alpha}{\tan t} = \bar{\lambda}_\alpha\cos t\,dt$; therefore

$$\bar{x}_\alpha = \bar{C}_{2\alpha} + \bar{\lambda}_\alpha\sin t.$$

Then, when the extremal equation is represented in parameter form, we have

$$\begin{cases}\bar{x}_\alpha - \bar{C}_{2\alpha} = \bar{\lambda}_\alpha\sin t,\\ \bar{y}_\alpha - \bar{C}_{1\alpha} = -\bar{\lambda}_\alpha\cos t,\end{cases}$$

and by canceling $t$,

$$(\bar{x}_\alpha - \bar{C}_{2\alpha})^2 + (\bar{y}_\alpha - \bar{C}_{1\alpha})^2 = \bar{\lambda}^2_\alpha,$$

such that

$$(\tilde{x} - \tilde{C}_2)^2 + (\tilde{y} - \tilde{C}_1)^2 = \tilde{\lambda}^2.$$


This is the family of extremal curves of the functional we sought, where $C^-_{i\alpha}, C^+_{i\alpha}, \lambda^-_\alpha, \lambda^+_\alpha$ ($i = 1, 2$) are constants and parameters; $\bar{C}_{i\alpha}, \bar{\lambda}_\alpha$ ($i = 1, 2$) are interval constants and parameters; and $\tilde{C}_i, \tilde{\lambda}$ are fuzzy constants and parameters.
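The extremal family of circles can be illustrated numerically. In the sketch below, the triangular fuzzy circumference $\tilde{l} = (5, 6, 7)$ is an assumed example, not from the book: a closed extremal circle of circumference $l$ has radius $\lambda = l/(2\pi)$, so each $\alpha$-cut of $\tilde{l}$ yields an interval of radii.

```python
# Assumed example: alpha-cuts of the extremal radius for a closed circular
# extremal with triangular fuzzy circumference l~ = (5, 6, 7), using
# lambda = l / (2*pi).
import math

def radius_cut(alpha, left=5.0, peak=6.0, right=7.0):
    """[lower, upper] extremal radius at membership level alpha."""
    l_lo = left + alpha * (peak - left)     # alpha-cut of triangular l~
    l_hi = right - alpha * (right - peak)
    return l_lo / (2 * math.pi), l_hi / (2 * math.pi)

r_lo, r_hi = radius_cut(1.0)
assert abs(r_lo - r_hi) < 1e-12             # alpha = 1: a single crisp circle
for a in (0.0, 0.25, 0.5, 0.75):
    lo, hi = radius_cut(a)
    assert lo < hi                          # lower alpha: wider radius interval
    # the circumference is recovered from the radius endpoint
    assert abs(2 * math.pi * lo - (5 + a)) < 1e-12
```

Lower membership levels give wider nested radius intervals, matching the nested extremal circles of the fuzzy solution.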

10.5.5 Conclusion

The functional condition extremum problem mentioned in this section contains more information than a classical one. We notice that it is difficult to find the extremal curves for all $\alpha \in (0, 1]$; in practical applications, however, we find a solution for some $\alpha$ (or finitely many $\alpha$) according to the requirement. It is worth mentioning that a more satisfactory result can be obtained by the 0.618 (golden-section) search. The results discussed here can easily be extended to a condition extremum variation of an ordinary or fuzzy-valued functional with fuzzy functions $\tilde{y}_j$ ($j = 1, \cdots, n$).
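The "0.618 searching way" mentioned above is golden-section search. A minimal sketch follows; the unimodal score function is a made-up stand-in for whatever scalar merit one attaches to an $\alpha$-cut solution, and is not from the book.

```python
# Sketch of the 0.618 (golden-section) search: locate the alpha at which a
# unimodal scalar score of the alpha-cut solution is maximal, without
# evaluating every alpha.

def golden_max(f, a, b, tol=1e-6):
    """Golden-section search for the maximizer of a unimodal f on [a, b]."""
    r = 0.6180339887498949
    x1, x2 = b - r * (b - a), a + r * (b - a)
    while b - a > tol:
        if f(x1) < f(x2):       # maximum lies in [x1, b]
            a, x1 = x1, x2
            x2 = a + r * (b - a)
        else:                   # maximum lies in [a, x2]
            b, x2 = x2, x1
            x1 = b - r * (b - a)
    return 0.5 * (a + b)

# Made-up unimodal score peaking at alpha = 0.7:
score = lambda alpha: -(alpha - 0.7)**2
assert abs(golden_max(score, 0.0, 1.0) - 0.7) < 1e-4
```

Each iteration shrinks the bracket by the factor 0.618, reusing one interior point, which is why the method needs only one new evaluation per step when evaluations are cached.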

10.6 Variation of Condition Extremum on Functional with Fuzzy Function

10.6.1 Introduction

By the definition of a "nest of sets", the condition extremum variation problem of ordinary and fuzzy functionals on ordinary functions is extended to the case where the functions are fuzzy. In this section, we first discuss the condition extremum variation of a functional with fuzzy functions, and then extend it to the variation of a fuzzy-valued functional condition extremum with fuzzy functions.

10.6.2 Condition Extremum Variation of Functional with Fuzzy Function

Let $F$ be an ordinary differentiable functional defined on $[x_0, x_1] \subseteq R$, let $\tilde{y}_j$ ($j = 1, 2, \cdots, n$) be fuzzy functions (i.e., convex fuzzy sets on $R$), and let the support of $\tilde{y}_j$ be $s(\tilde{y}_j) = \{x \in R \mid \mu_{\tilde{y}_j}(x) > 0\} \subseteq [a_j, b_j]$. By the extension principle, we have the following.

Definition 10.6.1. Let us call

$$\tilde{\Pi} = \int_{x_0}^{x_1} F(x, \tilde{y}; \tilde{y}')\,dx = \int_{x_0}^{x_1}\bigcup_{\alpha\in[0,1]}\alpha F(x, \bar{y}_\alpha; \bar{y}'_\alpha)\,dx \qquad (10.6.1)$$

an ordinary functional depending on $n$ fuzzy functions, where $\bar{y}_{j\alpha} = [y^-_{j\alpha}, y^+_{j\alpha}]$, $\tilde{y}'_{j\alpha} = [\min(y'^-_{j\alpha}, y'^+_{j\alpha}), \max(y'^-_{j\alpha}, y'^+_{j\alpha})]$, and $\tilde{y}'_{j\alpha}$, $\bar{y}_{j\alpha}$, $y^-_{j\alpha}$, $y^+_{j\alpha}$ denote a fuzzy derivative, an interval value, and the interval left and right endpoints of $\tilde{y}_j$, respectively, and

$$F(x, \tilde{y}, \tilde{y}') = \bigcup_{\alpha\in[0,1]}\alpha F(x, \bar{y}_\alpha, \bar{y}'_\alpha),$$

whose membership function is

$$\mu_{F(x,\tilde{y},\tilde{y}')}(\gamma) = \bigvee_{F(x,y,y')=\gamma}\{\mu_{\tilde{y}}(y) \wedge \mu_{\tilde{y}'}(y')\},$$

where $F(x, \bar{y}_\alpha, \bar{y}'_\alpha) = \{\gamma \mid \exists(x, y, y') \in X \times \bar{Y}_\alpha \times \bar{Y}'_\alpha,\ F(x, y, y') = \gamma\}$, $\tilde{y} = (\tilde{y}_1, \tilde{y}_2, \cdots, \tilde{y}_n)$, $\tilde{y}' = (\tilde{y}'_1, \tilde{y}'_2, \cdots, \tilde{y}'_n)$; $\bar{y}_\alpha = (\bar{y}_{1\alpha}, \bar{y}_{2\alpha}, \cdots, \bar{y}_{n\alpha})$, $\bar{y}'_\alpha = (\bar{y}'_{1\alpha}, \bar{y}'_{2\alpha}, \cdots, \bar{y}'_{n\alpha})$.

From Definition 10.6.1, we know that an ordinary functional dependent on $n$ fuzzy functions can be changed into an interval one for a certain determined $\alpha$ value. Therefore, it is easy to find definitions of the functional variation according to Refs. [Cao91a], [Ail52] and [Cao91b].
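For a fixed $\alpha$, Definition 10.6.1 replaces each fuzzy argument by its $\alpha$-cut interval, so a crisp integrand maps an interval of arguments to the interval of attained values. The sketch below is illustrative only (not the book's construction); it approximates that image by dense sampling, whereas exact endpoints would require a monotonicity analysis.

```python
# Illustrative sketch of the alpha-cut reduction behind Definition 10.6.1:
# the image of an interval [lo, hi] under a crisp map f, approximated by
# dense sampling.

def interval_image(f, lo, hi, n=2001):
    """Approximate [min f, max f] over the interval [lo, hi]."""
    vals = [f(lo + (hi - lo) * i / (n - 1)) for i in range(n)]
    return min(vals), max(vals)

# Monotone integrand: endpoints map to endpoints.
assert interval_image(lambda y: 2 * y + 1, 0.0, 1.0) == (1.0, 3.0)
# Non-monotone integrand: the image is NOT just {f(lo), f(hi)}.
lo, hi = interval_image(lambda y: (y - 0.5)**2, 0.0, 1.0)
assert abs(lo - 0.0) < 1e-6 and abs(hi - 0.25) < 1e-9
```

The non-monotone case shows why interval extensions cannot, in general, be computed from the two endpoints alone.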

Definition 10.6.2. Let us call

$$F_{y_j}(x, \tilde{y}, \tilde{y}') = \bigcup_{\alpha\in[0,1]}\alpha F_{y_j}(x, \bar{y}_\alpha, \bar{y}'_\alpha),$$

$$F_{y'_j}(x, \tilde{y}, \tilde{y}') = \bigcup_{\alpha\in[0,1]}\alpha F_{y'_j}(x, \bar{y}_\alpha, \bar{y}'_\alpha) \quad (j = 1, 2, \cdots, n)$$

the partial derivatives of the ordinary functional $F$ at the fuzzy point $(x, \tilde{y}; \tilde{y}')$ with respect to $\tilde{y}_j$ and $\tilde{y}'_j$ ($j = 1, 2, \cdots, n$), respectively, whose membership functions are, respectively,

$$\mu_{F_{y_j}(x,\tilde{y},\tilde{y}')}(\gamma) = \bigvee_{F_{y_j}(x,y,y')=\gamma}[\mu_{\tilde{y}}(y) \wedge \mu_{\tilde{y}'}(y')] = \bigvee_{F_{y_j}(x,y_1,\cdots,y_n;y'_1,\cdots,y'_n)=\gamma}\big[(\mu_{\tilde{y}_1}(y_1) \wedge\cdots\wedge \mu_{\tilde{y}_n}(y_n)) \wedge (\mu_{\tilde{y}'_1}(y'_1) \wedge\cdots\wedge \mu_{\tilde{y}'_n}(y'_n))\big],$$

$$\mu_{F_{y'_j}(x,\tilde{y},\tilde{y}')}(\gamma) = \bigvee_{F_{y'_j}(x,y,y')=\gamma}[\mu_{\tilde{y}}(y) \wedge \mu_{\tilde{y}'}(y')] = \bigvee_{F_{y'_j}(x,y_1,\cdots,y_n;y'_1,\cdots,y'_n)=\gamma}\big[(\mu_{\tilde{y}_1}(y_1) \wedge\cdots\wedge \mu_{\tilde{y}_n}(y_n)) \wedge (\mu_{\tilde{y}'_1}(y'_1) \wedge\cdots\wedge \mu_{\tilde{y}'_n}(y'_n))\big],$$

where $x, \gamma \in R$, and $y = (y_1, y_2, \cdots, y_n)$ and $y' = (y'_1, y'_2, \cdots, y'_n)$ are real function vectors on the real region $R$.

Theorem 10.6.1. Suppose that fuzzy functions $\tilde{y}_1, \tilde{y}_2, \cdots, \tilde{y}_n$ enable the ordinary functional (10.6.1), under the conditions


$$\phi_i(x, \tilde{y}) = 0 \quad (i = 1, 2, \cdots, m;\ m < n), \qquad (10.6.2)$$

to realize an extremum, and that (10.6.2) are independent, i.e., there is an $m$-order fuzzy function determinant with fuzzy functions

$$\frac{D(\phi_1(x, \tilde{y}), \phi_2(x, \tilde{y}), \cdots, \phi_m(x, \tilde{y}))}{D(\tilde{y}_1, \tilde{y}_2, \cdots, \tilde{y}_m)} \neq 0, \qquad (10.6.3)$$

then the properly chosen factors $k_i(x)$ and $\tilde{y}_j$ ($i = 1, 2, \cdots, m$; $j = 1, 2, \cdots, n$) satisfy the Euler equation given by an ordinary functional with fuzzy functions

$$\Pi^* = \int_{x_0}^{x_1}\Big[F(x, \tilde{y}, \tilde{y}') + \sum_{i=1}^{m}k_i(x)\phi_i(x, \tilde{y})\Big]dx = \int_{x_0}^{x_1}F^*(x, \tilde{y}, \tilde{y}')\,dx, \qquad (10.6.4)$$

while the functions $k_i(x)$ and $\tilde{y}_j(x)$ are determined by the Euler equations with fuzzy functions

$$F^*_{y_j}(x, \tilde{y}, \tilde{y}') - \frac{d}{dx}F^*_{y'_j}(x, \tilde{y}, \tilde{y}') = 0 \quad (j = 1, 2, \cdots, n) \qquad (10.6.5)$$

and by the equations with fuzzy functions

$$\phi_i(x, \tilde{y}) = 0 \quad (i = 1, 2, \cdots, m). \qquad (10.6.6)$$

If $\tilde{y}_j$ and $k_i(x)$ ($j = 1, 2, \cdots, n$; $i = 1, 2, \cdots, m$) are regarded as fuzzy model-variables of the functional $\Pi^*$, then (10.6.6) is regarded as the Euler equations of $\Pi^*$ with fuzzy functions.

Proof: For an arbitrary $\alpha \in [0, 1]$, the basic condition of extremum is

$$\delta\Pi^* = 0 \iff \bigcup_{\alpha\in[0,1]}\alpha\,\delta\bar{\Pi}^*_\alpha = 0 \iff \bigcup_{\alpha\in[0,1]}\alpha\int_{x_0}^{x_1}\sum_{j=1}^{n}\Big[F_{y_j}(x, \bar{y}_\alpha, \bar{y}'_\alpha)\,\delta y_j + F_{y'_j}(x, \bar{y}_\alpha, \bar{y}'_\alpha)\,\delta y'_j\Big]dx = 0.$$

Integrating the second item in each bracket by parts, and using the definition of the interval value (or function) in Ref. [Cao93c] and the basic condition of extremum in Theorem 1.1 of Ref. [Cao91a], we have

$$\bigcup_{\alpha\in[0,1]}\alpha\int_{x_0}^{x_1}\sum_{j=1}^{n}\Big[F_{y_j}(x, \bar{y}_\alpha, \bar{y}'_\alpha) - \frac{d}{dx}F_{y'_j}(x, \bar{y}_\alpha, \bar{y}'_\alpha)\Big]\delta y_j\,dx = 0, \qquad (10.6.7)$$

where $\bar{y}_\alpha = (\bar{y}_{1\alpha}, \bar{y}_{2\alpha}, \cdots, \bar{y}_{n\alpha})$ obeys the $m$ independent constraints

$$\bigcup_{\alpha\in[0,1]}\alpha\,\phi_i(x, \bar{y}_\alpha) = 0 \quad (i = 1, 2, \cdots, m).$$

Taking the variation of this formula gives

$$\bigcup_{\alpha\in[0,1]}\alpha\sum_{j=1}^{n}\frac{\partial\phi_i(x, \bar{y}_\alpha)}{\partial y_j}\,\delta y_j = 0 \quad (i = 1, 2, \cdots, m),$$

where only $n - m$ of the variations $\delta y_j$ are arbitrary, say $\delta y_{m+1}, \cdots, \delta y_n$; then

$$\int_{x_0}^{x_1}k_i(x)\Big[\bigcup_{\alpha\in[0,1]}\alpha\sum_{j=1}^{n}\frac{\partial\phi_i(x, \bar{y}_\alpha)}{\partial y_j}\,\delta y_j\Big]dx = 0 \quad (i = 1, 2, \cdots, m).$$

Adding these, respectively, to equation (10.6.7) satisfied by the admitted variations $\delta y_j$, we get

$$\bigcup_{\alpha\in[0,1]}\alpha\int_{x_0}^{x_1}\sum_{j=1}^{n}\Big[\frac{\partial F(x, \bar{y}_\alpha, \bar{y}'_\alpha)}{\partial y_j} + \sum_{i=1}^{m}k_i(x)\frac{\partial\phi_i(x, \bar{y}_\alpha)}{\partial y_j} - \frac{d}{dx}\frac{\partial F(x, \bar{y}_\alpha, \bar{y}'_\alpha)}{\partial y'_j}\Big]\delta y_j\,dx = 0,$$

which, with $F^*(x, \bar{y}_\alpha, \bar{y}'_\alpha) = F(x, \bar{y}_\alpha, \bar{y}'_\alpha) + \sum_{i=1}^{m}k_{i\alpha}(x)\phi_i(x, \bar{y}_\alpha)$, becomes

$$\bigcup_{\alpha\in[0,1]}\alpha\int_{x_0}^{x_1}\sum_{j=1}^{n}\Big[F^*_{y_j}(x, \bar{y}_\alpha, \bar{y}'_\alpha) - \frac{d}{dx}F^*_{y'_j}(x, \bar{y}_\alpha, \bar{y}'_\alpha)\Big]\delta y_j\,dx = 0,$$

and then we change it into

$$\bigcup_{\alpha\in[0,1]}\alpha\int_{x_0}^{x_1}\sum_{j=m+1}^{n}\Big[F^*_{y_j}(x, \bar{y}_\alpha, \bar{y}'_\alpha) - \frac{d}{dx}F^*_{y'_j}(x, \bar{y}_\alpha, \bar{y}'_\alpha)\Big]\delta y_j\,dx = 0. \qquad (10.6.8)$$

Again, because $\delta y_j$ ($j = m+1, m+2, \cdots, n$) is arbitrary, we make all of the above bracketed terms zero, except one of them, by turns. For every $\alpha \in [0, 1]$, by applying the basic variation Lemma I of Ref. [Cao91a], we have

$$\bigcup_{\alpha\in[0,1]}\alpha\Big[F^*_{y_j}(x, \bar{y}_\alpha, \bar{y}'_\alpha) - \frac{d}{dx}F^*_{y'_j}(x, \bar{y}_\alpha, \bar{y}'_\alpha)\Big] = 0 \quad (j = m+1, m+2, \cdots, n). \qquad (10.6.9)$$

Combining (10.6.8) and (10.6.9), the function of the conditional extremum realized by the functional $\Pi^*$ and the factors $k_i(x)$ all satisfy equations (10.6.5) and (10.6.6), so Theorem 10.6.1 holds.

Theorem 10.6.2. If we change (10.6.2) in Theorem 10.6.1 into differential equations with fuzzy functions $\phi_i(x, \tilde{y}, \tilde{y}') = 0$ ($i = 1, \cdots, m$; $m < n$), while the other conditions are unchanged, the conclusion is still true.


10.6.3 Variation of Fuzzy-Valued Functional Condition Extremum with Fuzzy Function

Definition 10.6.3. We call

$$\tilde{\Pi}^* = \int_{x_0}^{x_1}\tilde{F}(x, \tilde{y}; \tilde{y}')\,dx = \bigcup_{\alpha\in[0,1]}\alpha\int_{x_0}^{x_1}\tilde{F}(x, \bar{y}_\alpha, \bar{y}'_\alpha)\,dx \in \mathcal{F}(\mathcal{F}(R)) \qquad (10.6.10)$$

a fuzzy-valued functional dependent on $n$ fuzzy functions, where $\bar{y}_\alpha$ and $\bar{y}'_\alpha$ are defined by Definition 10.6.1, with

$$\tilde{F}(x, \tilde{y}, \tilde{y}') = \bigcup_{\alpha\in[0,1]}\alpha\tilde{F}(x, \bar{y}_\alpha, \bar{y}'_\alpha),$$

$$\tilde{F}(x, \bar{y}_\alpha, \bar{y}'_\alpha) = \{\tilde{\gamma} \in \mathcal{F}(R) \mid \exists(x, y, y') \in X \times \bar{Y}_\alpha \times \bar{Y}'_\alpha,\ \tilde{F}(x, y, y') = \tilde{\gamma}\},$$

and its membership function is

$$\mu_{\tilde{F}(x,\tilde{y},\tilde{y}')}(\tilde{\gamma}) = \bigvee_{\tilde{F}(x,y,y')=\tilde{\gamma}}\{\mu_{\tilde{y}}(y) \wedge \mu_{\tilde{y}'}(y')\}.$$

Definition 10.6.4. Let us call

$$\tilde{F}_{y_j}(x, \tilde{y}; \tilde{y}') = \bigcup_{\alpha\in[0,1]}\alpha\tilde{F}^*_{y_j}(x, \bar{y}_\alpha; \bar{y}'_\alpha) \in \mathcal{F}(\mathcal{F}(R)),$$

$$\tilde{F}_{y'_j}(x, \tilde{y}; \tilde{y}') = \bigcup_{\alpha\in[0,1]}\alpha\tilde{F}^*_{y'_j}(x, \bar{y}_\alpha; \bar{y}'_\alpha) \in \mathcal{F}(\mathcal{F}(R)) \quad (j = 1, 2, \cdots, n)$$

the partial derivatives of the fuzzy-valued functional $\tilde{F}$ at the fuzzy point $(x, \tilde{y}; \tilde{y}')$ with respect to $\tilde{y}$ and $\tilde{y}'$, respectively, where

$$\tilde{F}^*_{y_j}(x, \bar{y}_\alpha; \bar{y}'_\alpha) = \{\tilde{\gamma} \mid \exists(x, y; y') \in X \times \bar{Y}_\alpha \times \bar{Y}'_\alpha,\ \tilde{F}^*_{y_j}(x, y; y') = \tilde{\gamma}\},$$

$$\tilde{F}^*_{y'_j}(x, \bar{y}_\alpha; \bar{y}'_\alpha) = \{\tilde{\gamma} \mid \exists(x, y; y') \in X \times \bar{Y}_\alpha \times \bar{Y}'_\alpha,\ \tilde{F}^*_{y'_j}(x, y; y') = \tilde{\gamma}\};$$

their membership functions are

$$\mu_{\tilde{F}_{y_j}(x,\tilde{y};\tilde{y}')}(\tilde{\gamma}) = \bigvee_{\tilde{F}_{y_j}(x,y;y')=\tilde{\gamma}}(\mu_{\tilde{y}}(y) \wedge \mu_{\tilde{y}'}(y')),$$

$$\mu_{\tilde{F}_{y'_j}(x,\tilde{y};\tilde{y}')}(\tilde{\gamma}) = \bigvee_{\tilde{F}_{y'_j}(x,y;y')=\tilde{\gamma}}(\mu_{\tilde{y}}(y) \wedge \mu_{\tilde{y}'}(y')).$$

Theorem 10.6.3. Suppose that fuzzy functions $\tilde{y}_j$ ($j = 1, 2, \cdots, n$) make the fuzzy-valued functional (10.6.10) with fuzzy functions reach an extremum under the condition

$$\phi_i(x, \tilde{y}) = 0 \quad (i = 1, 2, \cdots, m;\ m < n), \qquad (10.6.11)$$

and that (10.6.11) is independent, i.e., there is an $m$-order fuzzy-valued function determinant with fuzzy functions

$$\frac{D(\phi_1(x, \tilde{y}), \phi_2(x, \tilde{y}), \cdots, \phi_m(x, \tilde{y}))}{D(\tilde{y})} \neq 0. \qquad (10.6.12)$$

Then the properly chosen factors $k_i(x)$ ($i = 1, 2, \cdots, m$) and $\tilde{y}_j$ ($j = 1, 2, \cdots, n$) satisfy the Euler equation obtained by the fuzzy functional with fuzzy functions

$$\tilde{\Pi}^* = \int_{x_0}^{x_1}\Big[\tilde{F}(x, \tilde{y}, \tilde{y}') + \sum_{i=1}^{m}k_i(x)\phi_i(x, \tilde{y})\Big]dx = \int_{x_0}^{x_1}\tilde{F}^*(x, \tilde{y}, \tilde{y}')\,dx,$$

while the functions $k_i(x)$ and $\tilde{y}_j(x)$ are determined by the fuzzy-valued Euler equations

$$\tilde{F}^*_{y_j}(x, \tilde{y}, \tilde{y}') - \frac{d}{dx}\tilde{F}^*_{y'_j}(x, \tilde{y}, \tilde{y}') = 0 \quad (j = 1, 2, \cdots, n)$$

and by the fuzzy-valued equations

$$\phi_i(x, \tilde{y}) = 0 \quad (i = 1, 2, \cdots, m). \qquad (10.6.13)$$

If we regard $k_i(x)$ and $\tilde{y}_j$ ($i = 1, 2, \cdots, m$; $j = 1, 2, \cdots, n$) as the model variables of the fuzzy-valued functional $\tilde{\Pi}^*$, we can write (10.6.13) as Euler equations of the fuzzy-valued functional $\tilde{\Pi}^*$, where (10.6.12) means

$$\bigcup_{\alpha\in[0,1]}\alpha\,\frac{D(\bar{\phi}_{1\alpha}(x, \bar{y}_\alpha), \bar{\phi}_{2\alpha}(x, \bar{y}_\alpha), \cdots, \bar{\phi}_{m\alpha}(x, \bar{y}_\alpha))}{D(\bar{y}_\alpha)} \neq 0.$$

Theorem 10.6.4. If we change (10.6.11) in Theorem 10.6.3 into fuzzy differential equations with fuzzy functions $\phi_i(x, \tilde{y}; \tilde{y}') = 0$ ($i = 1, 2, \cdots, m$; $m < n$) with the other conditions unchanged, the conclusion holds.

10.6.4 Conclusion

The condition extremum problem of a functional, ordinary or fuzzy-valued, with fuzzy functions has been advanced, and a method for it has been obtained with the aid of variational methods. This model contains more information than a classical one and will be of extensive use in engineering fields; application examples remain to be completed by readers.

References

[Ail52] Ailisgerzi: Calculus of Variations. Soviet Union National Technology Theory Works Publishing House (1952) (Chinese translation)
[AG01] Arikan, F., Güngör, Z.: An application of fuzzy goal programming to a multiobjective project network problem. Fuzzy Sets and Systems 119, 49–58 (2001)
[AMA93] Aliev, P., Mamedova, G., Aliev, R.: Fuzzy Sets Theory and Its Application. Talriz University Press (1993)
[AP93] Admopoulos, G.I., Pappis, C.P.: Some results on the resolution of fuzzy relation equations. Fuzzy Sets and Systems 60, 83–88 (1993)
[Asa82] Asai, K. (writing), Zhao, R.H. (translation): An Introduction to the Theory of Fuzzy Systems. Peking Norm. University Press, Peking (1982)
[Avr76] Avriel, M.: Nonlinear Programming: Analysis and Methods. Prentice-Hall Inc., Englewood Cliffs (1976)
[AW70] Avriel, M., Williams, A.C.: Complementary geometric programming. SIAM J. Appl. Math. 19, 125–141 (1970)
[BF98] Bourke, M.M., Fisher, D.G.: Solution algorithms for fuzzy relational equations with max-product composition. Fuzzy Sets and Systems 94, 61–69 (1998)
[Biw92] Biwal, M.P.: Fuzzy programming technique to solve multi-objective geometric programming problems. Fuzzy Sets and Systems 51, 67–71 (1992)
[BMS96] Burnwal, A.P., Mukherjee, S.N., Singh, D.: Fuzzy geometric programming with nonequivalent objectives. Ranchi University Mathematical J. 27, 53–58 (1996)
[BP76] Beightler, C.S., Phillips, D.T.: Applied Geometric Programming. John Wiley and Sons, New York (1976)
[BS79] Bazaraa, M.S., Shetty, C.M.: Nonlinear Programming—Theory and Algorithms. John Wiley and Sons, New York (1979)
[Cao87a] Cao, B.Y.: Solution and theory of question for a kind of fuzzy positive geometric program. In: Proc. of 2nd IFSA Congress, Tokyo, July 20-25, vol. I, pp. 205–208 (1987); also: J. of Changsha Norm. Univ. of Water Resources and Electric Power, Natural Sci. Ed. 2(4), 51–61 (1987)
[Cao87b] Cao, B.Y.: The theory and practice of solution for fuzzy relative equations in max-product. J. of Hunan Univ. of Science and Technology 3(2), 57–65 (1987)


[Cao89a] Cao, B.Y.: Study of fuzzy positive geometric programming dual form. In: Proc. 3rd IFSA Congress, Seattle, August 6-11, pp. 775–778 (1989) [Cao89b] Cao, B.Y.: Study on non-distinct self-regression forecast model. Kexue Tongbao 34(17), 1291–1294 (1989) [Cao89c] Cao, B.Y.: Study for a kind of regression forecasting model with fuzzy datums. J. of Mathematical Statistics and Applied Probability 4(2), 182– 189 (1989) [Cao90] Cao, B.Y.: Study on non-distinct self-regression forecast model. Chinese Sci. Bull. 35(13), 1057–1062 (1990) [Cao91a] Cao, B.Y.: Variation of interval-valued and fuzzy functional. In: Proc. of 4th IFSA Congress, Math., pp. 21–24 (1991) [Cao91b] Cao, B.Y.: Ordinary diﬀerential equations of interval-valued and fuzzyvalued functions. J. of Changsha Norm. Univ. of Water Resources and Electric Power 6(1), 26–38 (1991) [Cao91c] Cao, B.Y.: A method of fuzzy set for studying linear programming “Contrary Theory”. J. Hunan Educational Institute 9(2), 17–22 (1991) [Cao92a] Cao, B.Y.: Further study of posynomial geometric programming with fuzzy coeﬃcients. Mathematics Applicata 5(4), 119–120 (1992) [Cao92b] Cao, B.Y.: Another proof of fuzzy posynomial geometric programming dual theorem. BUSEFAL 66, 43–47 (1996) [Cao92c] Cao, B.Y.: Interval-valued and fuzzy convex function and convex functional research. J. of Fuzzy Systems and Math. (Special issue), 300–303 (1992) [Cao92d] Cao, B.Y. (ed.): Proceedings of the Results Congress on Fuzzy Sets and Systems. Hunan Science Technology Press, Changsha (1992) [Cao93a] Cao, B.Y.: Fuzzy geometric programming(I). Fuzzy Sets and Systems 53, 135–153 (1993) [Cao93b] Cao, B.Y.: Input-output mathematical model with T-fuzzy data. Fuzzy Sets and Systems 59, 15–23 (1993) [Cao93c] Cao, B.Y.: Extended fuzzy geometric programming. J. of Fuzzy Mathematics 2, 285–293 (1993) [Cao93d] Cao, B.Y.: Fuzzy strong dual results for fuzzy posynomial geometric programming. In: Proc.of 5th IFSA Congress, Seoul, July 4-9, pp. 
588–591 (1993) [Cao93e] Cao, B.Y.: Nonlinear regression forecasting model with T-fuzzy Data to be linearized. J. of Fuzzy Systems and Mathematics 7(2), 43–53 (1993) [Cao94a] Cao, B.Y.: Posynomial geometric programming with L-R fuzzy coeﬃcients. Fuzzy Sets and Systems 64, 267–276 (1994) [Cao94b] Cao, B.Y.: Lecture in Economic Mathematics—Linear Programming and Fuzzy Mathematics. Tiangjing Translating Press of Science and Technology, Tiangjing (1994) [Cao95a] Cao, B.Y.: The study of geometric programming with (·, c)-fuzzy parameters. J. of Changsha Univ. of Electric Power (Natural Sci. Ed.) 1, 15–21 (1995) [Cao95b] Cao, B.Y.: Fuzzy geometric programming optimum seeking of scheme for waste water disposal in power plant. In: Proc. of FUZZY-IEEE/IFES 1995, Yokohama, August 22-25, pp. 793–798 (1995) [Cao95c] Cao, B.Y.: Types of non-distinct multi-objective geometric programming. Hunan Annals of Mathematics 15(1), 99–106 (1995)


[Cao95d] Cao, B.Y.: Study of fuzzy positive geometric programming dual form. J. of Changsha Univ. of Electric Power (Natural Sci. Ed.) 10(4), 343–351 (1995) [Cao95e] Cao, B.Y.: Classiﬁcation of fuzzy posynomial geometric programming and corresponding class properties. J. of Fuzzy Systems and Mathematics 9(4), 60–64 (1995) [Cao96a] Cao, B.Y.: New model with T-fuzzy variable in linear programming. Fuzzy Sets and Systems 78, 289–292 (1996) [Cao96b] Cao, B.Y.: Fuzzy geometric programming (II)—Fuzzy strong dual results for fuzzy posynomial geometric programming. J. of Fuzzy Mathematics 4(1), 119–129 (1996) [Cao96c] Cao, B.Y.: Cluster and recognition model with T −fuzzy data. Mathematical Statistics and Applied Probability 11(4), 317–325 (1996) [Cao97a] Cao, B.Y.: Research for a geometric programming model with T-fuzzy variable. J. of Fuzzy Mathematics 5(3), 625–632 (1997) [Cao97b] Cao, B.Y.: Fuzzy geometric programming optimum seeking of scheme for waste-water disposal in power plant. Systems Engineering—Theory & Practice 5, 140–144 (1997) [Cao98] Cao, B.Y.: Further research of solution to fuzzy posynomial geometric programming. Academic Periodical Abstracts of China 4(12), 1435–1437 (1998); Also to see: Popular Works by Centuries’ World Celebrities. In: Mah, Z.X. (ed.) U. S. World Celebrity Books LLC, pp. 15–20. World Science Press, California (1998) [Cao99a] Cao, B.Y.: Variation of functional condition extremum with fuzzy variable. J. of Fuzzy Mathematics 7(3), 559–564 (1999) [Cao99b] Cao, B.Y.: Fuzzy geometric programming optimum seeking in power supply radius of transformer substation. In: 1999 IEEE Int. Fuzzy Systems Conference Proceedings, Korea, July 25-29, vol. 3, pp. III–1749–III–1753 (1999) [Cao00a] Cao, B.Y.: Research of posynomial geometric programming with ﬂat fuzzy coeﬃcients. J. of Shantou University (Natural Sci. Ed.) 15(1), 13–19 (2000) [Cao00b] Cao, B.Y.: Parameterized solution to a fractional geometric programming. In: Proc. 
of the sixth National Conference of Operations Research Society of China, pp. 362–366. Global-Link Publishing Company, Hong Kong (2000) [Cao01a] Cao, B.Y.: Primal algorithm of fuzzy posynomial geometric programming. In: Joint 9th IFSA World Congress and 20th NAFIPS International Conference Proceedings, Vancouver, July 25-28, pp. 31–34 (2001). Also to see: Direct algorithm of fuzzy posynomial geometric programming. Fuzzy Systems and Mathematics 15(4), 81–86 (2001) [Cao01b] Cao, B.Y.: Model of fuzzy geometric programming in economical power supply radius and optimum seeking method. Engineer Sciences 3, 52–55 (2001) [Cao01c] Cao, B.Y.: Application of geometric programming and fuzzy geometric one with fuzzy coeﬃcients in seeking power supply radius transformer substation. Systems Engineering—Theory & Practice 21(7), 92–95 (2001) [Cao01d] Cao, B.Y.: Extension posynomial geometric programming. J. of Guangdong University of Technology 18(1), 61–64 (2001)


[Cao01e] Cao, B.Y.: Variation of condition extremum in interval and fuzzy valued functional. Fuzzy Mathematics 9(4), 845–852 (2001) [Cao02a] Cao, B.Y.: Fuzzy Geometric Programming. Kluwer Academic Publishers, Dordrecht (2002) [Cao02b] Cao, B.Y.: Variation of interval-valued and fuzzy functional. Fuzzy Mathematics 10(4), 797–808 (2002) [Cao04] Cao, B.Y.: Antinomy in posynomial geometric programming. Advances in Systems Science and Applications 1, 7–12 (2004) [Cao05] Cao, B.Y.: Application of Fuzzy Mathematics and Systems. Science Press, Peking (2005) [Cao07a] Cao, B.Y.: Fuzzy reversed posynomial geometric programming and its dual form. In: Melin, P., Castillo, O., Aguilar, L.T., Kacprzyk, J., Pedrycz, W. (eds.) IFSA 2007. LNCS (LNAI), vol. 4529, pp. 553–562. Springer, Heidelberg (2007) [Cao07b] Cao, B.Y.: Pattern classiﬁcation model with T-fuzzy data. In: Advance in Soft Computing, pp. 793–802. Springer, Heidelberg (2007) [Cao08] Cao, B.Y.: Cluster model with T -fuzzy data. Fuzzy Optimization and Decision Making 7(4), 317–329 (2008) [Cao09] Cao, B.Y.: Convexity study of interval functions and functionals & fuzzy functions and functionals. Fuzzy Information and Engineering 1(4), 421– 434 (2009) [CDR87] Charnes, A., Duﬀuaa, S., Ryan, M.: The more-for-less paradox in linear programming. European Journal of Operational Research 31, 194–197 (1987) [Cen87] Cen, Y.T.: Newton-Leibniz formulas of interval-valued function and fuzzy-valued function. Fuzzy Mathematics (3-4), 13–18 (1987) [Cha83] Chants, S.: The use of parametric programming in fuzzy linear programming. Fuzzy Sets and Systems 11, 243–251 (1983) [Chen94] Chen, S.Y.: Theory and application system fuzzy decision. Dalian Technology Publishing House, Dalian (1994) [CK71] Charnes, A., Klingman, D.: The more-for-less paradox in the distribution model. Cahiers de Center d’Etudes Recharche Operationelle 13(1), 11–22 (1971) [CL91] Cao, B.Y., Liu, G.H.: A new fuzzy recognition model for children’s health growth. 
In: Cao, B.Y. (ed.) Proceedings of Result Congress on Fuzzy Sets and Systems, pp. 198–200. Hunan Press of Science and Technology, Changsha China (1991) [Dan63] Dantzing, G.B.: Linear Programming and Extensions. Princeton U.P., Princeton (1963) [Dia87] Diamond, P.: Fuzzy least squares. Inform. Sci. 46, 141–157 (1988). In: Proc. of IFSA Congress, vol. I, Tokyo, July 20-25, pp. 329–332 (1987) [DPe66] Duﬃn, R.J., Peterson, E.L.: Duality theory for geometric programming. SIAM J. Appl. Math. 14, 1307–1349 (1966) [DPe72] Duﬃn, R.J., Peterson, E.L.: Reversed geometric programming treated by harmonic means. Indiana Univ. Math. J. 22, 531–550 (1972) [DPe73] Duﬃn, R.J., Peterson, E.L.: Geometric programming with signomials. J. Optimization Theory Appl. 11, 3–35 (1973) [DPr78] Dubois, D., Prade, H.: Operations on fuzzy number. Int. J. Systems Sciences 9(6), 613–626 (1978)

[DPr80]
Dubois, D., Prade, H.: Fuzzy Sets and Systems—Theory and Applications. Academic Press, New York (1980) [DPZ67] Duﬃn, R.J., Peterson, E.L., Zener, C.: Geometric Programming: Theory and Applications. John Wiley and Sons, New York (1967) [Duf62a] Duﬃn, R.J.: Dual programs and minimum cost. SIAM J.Appl. Math. 10, 119–123 (1962) [Duf62b] Duﬃn, R.J.: Cost minimization problems treated by geometric means. Operations Res. 10, 668–675 (1962) [Duf70] Duﬃn, R.J.: Linearizing geometric programs. SIAM. Review 12, 211–227 (1970) [Duo44] Duoma, E.D.: Debt and national income. America Economic Review 10, 798–827 (1944) [Eck80] Ecker, J.G.: Geometric programming. SIAM Review 22(3), 338–362 (1980) [EK87] Eckerand, J.G., Kupferschmid, M.: An ellipsoid algorithm for non-linear programming. Mathematical Programming 27, 83–106 (1987) [Fang89] Fang, K.T.: Multivariate Statistical Analysis. East China Normal University Publisher, Shanghai (1989) [Fin77] Finke, G.: Auniﬁed approach to reshipment, overshipmemt and postoptimization problems. In: Proceedings of the 8th IFIP Conference on Optimization Techniques, part 2, pp. 201–208 (1977) [FL99] Fang, S.-C., Li, G.: Solving fuzzy relation equations with a linear objective function. Fuzzy Sets and Systems 103, 107–113 (1999) [Fu90] Fu, G.Y.: On optimal solution of fuzzy linear programming. Fuzzy Systems and Mathematics 4(1), 65–72 (1990) [GL70] Gitman, I., Levine, M.D.: An algorithm for detecting Unimodal fuzzy sets and its application as a clustering technique. IEEE Trans. on Comput. 19, 583–593 (1970) [Guj86] Gujarati, D., Hao, P., et al ( translate): The foundation econometrics. Science and Technology Literature Press Chong Qing Branch, Chong Qing (1986) [GV86] Goetschcl, R., Voxman, D.W.: Elementary fuzzy calculus. Fuzzy Sets and Systems 18, 31–42 (1986) [GZ83] Guan, M.G., Zheng, H.D.: Linear Programming. Shandong Science Technology Press, Jinan (1983) [Hei83] Heilpern, S.: Fuzzy mappings. 
Matematyka Stosowana XXII, 179–197 (1983) [Hei87] Heilpern, S.: Fuzzy equations. Fuzzy Mathematics 2, 77–84 (1987) [HK84] Higashi, M., Klir, G.J.: Resolution of ﬁnite fuzzy relation equations. Fuzzy Sets and Systems 13, 65–82 (1984) [ISTI82] Institute of Scientiﬁc and Technical Information. A new fuzzy recognization model for children’s health growth. Scientiﬁc and Technological Literature Publishing House, Peking (1982) [JM61] Minfu, J.: Variational method and its application (Iwanami Shoten). Science Technique Publisher of Shanghai, Shanghai (1961) [JS78] Jeﬀerson, T.R., Scott, C.H.: Avenues of geometric programming. New Zealand Operational Res. 6, 109–136 (1978) [Kel71] Klee, V.: What is a convex set? The American Mathematical Monthly 78(6), 616–631 (1971)


[Lao90] Lao, Q.T.: Fuzzy comprehensive evaluation to city environmental quality. Chinese Environmental Science 2 (1990)
[LC02] Lui, M.J., Cao, B.Y.: The research and expansion on optimal solution of fuzzy LP. In: Proc. of 1st Int. Conf. on FSKD, Singapore, vol. 2, pp. 539–543 (2002)
[LF99] Loetamonphong, J., Fang, S.C.: An efficient solution procedure for fuzzy relation equations with max-product composition. IEEE Tran. on Fuzzy Systems 7, 441–445 (1999)
[LF01a] Loetamonphong, J., Fang, S.Z.: Optimization of fuzzy relation equations with max-product composition. Fuzzy Sets and Systems 118, 509–517 (2001)
[LF01b] Loetamonphong, J., Fang, S.Z.: Solving nonlinear optimization problems with fuzzy relation equation constraints. Fuzzy Sets and Systems 119, 1–20 (2001)
[Lin85] Lin, E.W.: The mathematical method for macroeconomics model, pp. 78–94. Fujian Press, Fujian (1985)
[Lin86] Lin, Y.: On "Contrary Theory" in general linear programming. Chinese J. of Operations Research 5(1), 79–81 (1986)
[LiR02] Li, R.J.: Analysis of possibilistic linear programming based on comparison of fuzzy numbers—discussed with author. Fuzzy Sets and Systems 16(4), 107–109 (2002)
[LiuDa98] Liu, X.W., Da, Q.-l.: The solution for fuzzy linear programming with constraint satisfactory function. Journal of Systems Engineering 13(3), 36–40 (1998)
[LiuH04] Liu, H.W.: Comparison of fuzzy numbers based on a fuzzy distance measure. Shandong University Transaction 39(2), 31–36 (2004)
[LiuS04] Liu, S.T.: Fuzzy geometric programming approach to a fuzzy machining economics model. International Journal of Production Research 42(16), 3253–3269 (2004)
[LiuT00] Liu, T.F.: The solution to the problem of parametric linear programming by lumped matrix. Journal of Electric Power 15(1), 22–25 (2000)
[LiuX01] Liu, X.W.: Measuring the satisfaction of constraints in fuzzy linear programming. Fuzzy Sets and Systems 122, 263–275 (2001)
[LL97] Liu, W.Q., Luo, C.Z.: A few notes on fuzzy linear programming with elastic constraints. Mathematica Applicata 10(2), 105–109 (1997)
[LL01] León, T., Liern, V.: A fuzzy method to repair infeasibility in linearly constrained problems. Fuzzy Sets and Systems 122, 237–243 (2001)
[Luo84a] Luo, C.Z.: The extension principle and fuzzy numbers (I). Fuzzy Mathematics 4(3), 109–116 (1984)
[Luo84b] Luo, C.Z.: The extension principle and fuzzy numbers (II). Fuzzy Mathematics 4(4), 105–114 (1984)
[Luo89] Luo, C.Z.: The Theory of Fuzzy Sets. The Publishing House of Beijing Normal University, Peking (1989)
[LW92] Lu, M.G., Wu, W.M.: Interval value and derivative of fuzzy-valued function. J. Fuzzy Systems and Mathematics 6(Special issue), 182–184 (1992)
[LZ98] Liu, B.D., Zhao, R.Q.: Stochastic Programming and Fuzzy Programming. Tsinghua University Press, Peking (1998)
[Man79] Mangasarian, O.L.: Uniqueness of solution in linear programming. Linear Algebra and Its Applications 25, 151–162 (1979)

References


[MRM05] Mandal, N.K., Roy, T.K., Maiti, M.: Multi-objective fuzzy inventory model with three constraints: a geometric programming approach. Fuzzy Sets and Systems 150(1), 87–106 (2005)
[MTM00] Maleki, H.R., Tata, M., Mashinchi, M.: Linear programming with fuzzy variable. Fuzzy Sets and Systems 109, 21–33 (2000)
[NS76] Negoita, C.V., Sularia, M.: Fuzzy linear programming and tolerance in planning. Econ. Comp. Econ. Cybernetic Stud. Res. 1, 613–615 (1976)
[Obr86] Obrad, M.M.: Mathematical dynamic model for long-term distribution system planning. IEEE Transactions on Power Systems 1, 34–41 (1986)
[Pan87] Pan, R.F.: A simple solution for fuzzy linear programming. Journal of Xiangtan University (Natural Science) 3, 29–36 (1987)
[PB71] Pascual, L.D., Ben-Israel, A.: Vector-valued criteria in geometric programming. Operations Res. 19, 98–104 (1971)
[Pet78] Peterson, E.L.: Geometric programming. SIAM Review 18, 1–51 (1978)
[Pet01] Peterson, E.L.: The origins of geometric programming. Annals of Operations Research 105, 15–19 (2001)
[Pre81] Prevot, M.: Algorithm for the solution of fuzzy relation equations. Fuzzy Sets and Systems 5, 319–322 (1981)
[Rij74] Rijckaert, M.J.: Survey of programs in geometric programming. C.C.E.R.O. 16, 369–382 (1974)
[Rou91] Roubens, M.: Inequality constraints between fuzzy numbers and their use in mathematical programming. In: Slowinski, R., Teghem, J. (eds.) Stochastic Versus Fuzzy Approaches to Multiobjective Mathematical Programming Under Uncertainty, pp. 321–330. Kluwer Academic Publishers, Dordrecht (1991)
[RT91] Roubens, M., Teghem, J.: Comparison of methodologies for fuzzy and stochastic multi-objective programming. Fuzzy Sets and Systems 42, 119–132 (1991)
[Rus69] Ruspini, E.H.: A new approach to clustering. Information Control 15, 22–32 (1969)
[SahK06] Sahidul, I., Kumar, R.T.: A new fuzzy multi-objective programming: entropy based geometric programming and its application of transportation problems. European Journal of Operational Research 173(2), 387–404 (2006)
[San76] Sanchez, E.: Resolution of composite fuzzy relation equations. Inform. and Control 30, 38–48 (1976)
[Shi81] Shi, G.Y.: Algorithm and convergence about a general geometric programming. J. Dalian Institute Technol. 20, 19–25 (1981)
[Shim73] Shimura, M.: Fuzzy sets concept in rank-ordering objects. J. Math. Anal. & Appl. 43, 717–733 (1973)
[Sol56] Solow, R.M.: A contribution to the theory of economic growth. Quarterly Journal of Economics 70, 65–94 (1956)
[TA84] Tanaka, H., Asai, K.: Fuzzy linear programming problem with fuzzy number. Fuzzy Sets and Systems 13, 1–10 (1984)
[TD02] Tran, L., Duckstein, L.: Comparison of fuzzy numbers using a fuzzy distance measure. Fuzzy Sets and Systems 130, 331–341 (2002)
[TOA73] Tanaka, H., Okuda, T., Asai, K.: On fuzzy mathematical programming. J. Cybern. 3(4), 37–46 (1973)


[TUA80] Tanaka, H., Uejima, S., Asai, K.: Fuzzy linear regression model. In: Int. Congress on Applied Systems Research and Cybernetics, Acapulco, Mexico, December 1980, pp. 12–16 (1980)
[TUA82] Tanaka, H., Uejima, S., Asai, K.: Linear regression analysis with fuzzy model. IEEE Transactions on Systems, Man and Cybernetics SMC-12(6), 903–907 (1982)
[Ver84] Verdegay, J.L.: A dual approach to solve the fuzzy linear programming problem. Fuzzy Sets and Systems 14, 131–140 (1984)
[Ver90] Verma, R.K.: Fuzzy geometric programming with several objective functions. Fuzzy Sets and Systems 35, 115–120 (1990)
[Wang83] Wang, P.Z.: Fuzzy Sets and Its Application. Shanghai Scientific and Technical Publishers, Shanghai (1983)
[Wangx02] Wang, X.P.: Conditions under which a fuzzy relational equation has minimal solutions in a complete Brouwerian lattice. Advances in Mathematics 31(3), 220–228 (2002)
[Wat87] Watada, J. (Chen, G.F. et al., trans.): Theories and Its Application of Fuzzy Multianalysis, pp. 6–17. Chongqing Branch of Scientific and Technological Literature Publishing House, Chongqing (1987)
[WB67] Wilde, D.J., Beightler, C.S.: Foundations of Optimization, pp. 76–109. Prentice-Hall, Englewood Cliffs (1967)
[Wei87] Wei, H.P.: Application of Optimization Techniques. Tongji University Press (1987)
[WL85] Wang, D.M., Lou, C.Z.: Extension of fuzzy differential calculus. Fuzzy Mathematics 1, 75–80 (1985)
[WX99] Xing, W.X., Xie, J.X.: Modern Optimization for Calculation Method. Tsinghua Press, Peking (1999)
[WY82] Wu, F., Yuan, Y.Y.: Geometric programming. Math. in Practice and Theory (1–2), 46–63, 61–72, 60–80, 68–81 (1982)
[WZSL91] Wang, P.Z., Zhang, D.Z., Sanchez, E., Lee, E.S.: Latticized linear programming and fuzzy relation inequalities. J. Math. Anal. and Appl. 159, 72–87 (1991)
[Xu98] Xu, R.N.: The linear regression models with fuzzy regression parameters. Fuzzy Systems and Mathematics 2 (1998)
[XuL01] Xu, R.N., Li, C.L.: Multidimensional least-squares fitting with a fuzzy model. Fuzzy Sets and Systems 119, 215–233 (2001)
[XuR89] Xu, R.Z.: Optimal Methods of Mathematics Programming in Economic Management. Sichuan Science Technique Publisher House, Chengdu (1989)
[Yage80] Yager, R.: Fuzzy sets, probabilities and decision. J. Cybern. 10, 1–18 (1980)
[YC05] Yang, J.H., Cao, B.Y.: Geometric programming with fuzzy relation equation constraints. In: 2005 IEEE International Fuzzy Systems Conference Proceedings, Reno, Nevada, May 22–25, pp. 557–560 (2005)
[YC06] Yang, J.H., Cao, B.Y.: The origin and its application of geometric programming. In: Proc. of the Eighth National Conference of Operations Research Society of China, pp. 358–363. Global-Link Publishing Company, Hong Kong, ISBN 962-8286-09-9
[YCL95] Yu, Y.Y., Cao, B.Y., Li, X.R.: The application of geometric and fuzzy geometric programming in option of economic supply radius of transformer substations. In: Zhou, K.Q. (ed.) Proceedings of Int. Conference on Inform and Knowledge Engineering, August 21–25, pp. 245–249. Dalian Maritime University Publishing House, Dalian (1995)


[YGR99] Yen, K.K., Ghoshrya, S., Roig, G.: A linear regression model using triangular fuzzy number coefficients. Fuzzy Sets and Systems 106, 167–177 (1999)
[YL99] Yang, M.S., Liu, H.H.: Fuzzy clustering procedures for conical fuzzy vector data. Fuzzy Sets and Systems 106, 189–200 (1999)
[Ying92] Ying, L.J.: The study of fuzzy information-processing methods and their applications in fault diagnosis. Doctoral dissertation, Changsha Railway University (1992)
[YJ91] Yang, C.E., Jin, D.Y.: The more-for-less paradox in linear programming and nonlinear programming. Systems Engineering 9(2), 62–68 (1991)
[YWY91] Yu, Y.Y., Wang, X.Z., Yang, Y.W.: Optimizational selection for substation feel economic radius. J. of Changsha Normal University of Water Resources and Electric Power 6(1), 118–124 (1991)
[YZY87] Yang, Q.Y., Zhang, Z.L., Yang, M.Z.: Transformer substation capacity dynamic optimizing in city power network planning. In: Proc. of Colleges and Univ. Speciality of Power System and Its Automation, The Third Academic Annual Conference, pp. 7–11. Xi'an Jiaotong Univ. Press, Xi'an (1987)
[Zad65a] Zadeh, L.A.: Fuzzy sets. Inform. and Control 8, 338–353 (1965)
[Zad65b] Zadeh, L.A.: Fuzzy sets and systems. In: Proc. of the Symposium on Systems Theory. Polytechnic Press of Polytechnic Institute of Brooklyn, NY (1965)
[Zad75a] Zadeh, L.A.: Calculus of fuzzy restrictions. In: Zadeh, L.A., Fu, K.S., Tanaka, K., Shimura, M. (eds.) Fuzzy Sets and Their Applications to Cognitive and Decision Processes. Academic Press, New York (1975)
[Zad75b] Zadeh, L.A.: The concept of a linguistic variable and its application to approximate reasoning, Part 1. Information Science 8, 199–249 (1975)
[Zad76] Zadeh, L.A.: Fuzzy sets and their application to pattern classification and cluster analysis. Multivariate Analysis, 113–161 (1976)
[Zad82] Zadeh, L.A. (Chen, G.Q., trans.): Fuzzy Sets, Linguistic Variables and Fuzzy Logics. Science in China Press, Peking (1982)
[Zen61] Zener, C.: A mathematical aid in optimizing engineering design. Proc. Nat. Acad. Sci. USA 47, 537–539 (1961)
[Zhang97] Zhang, Z.K.: The Application of Fuzzy Mathematics in Roboticized Technology. Tsinghua University Press, Beijing (1997)
[Zhe92] Zheng, X.C.: Prospect forecast in the amount of long distance telephone in China. Forecast 3 (1992)
[Zim76] Zimmermann, H.-J.: Description and optimization of fuzzy systems. Internat. J. General Systems 2, 209–215 (1976)
[Zim78] Zimmermann, H.-J.: Fuzzy programming and linear programming with several objective functions. Fuzzy Sets and Systems 1, 45–55 (1978)
[Zim91] Zimmermann, H.-J.: Fuzzy Set Theory and Its Applications. Kluwer Academic Publishers, Boston (1991)
[Zim00] Zimmermann, H.-J.: Fuzzy Sets and Operations Research for Decision Support. Beijing Normal University Press, Beijing (2000)
[ZW91] Zhang, W.X., Wang, G.J.: Introduction to Fuzzy Sets. Xi'an Jiaotong University Publishers, Xi'an (1991)

Index

A
Absorptive law 8; admissible 196; affine function 66; Algorithm 42; α-cut 1; α-level 110; Analytic Hierarchy Process 131; analytic solution 247; Antinomy 165; antitone 294; approximate indicator 41; appropriate linear transformation 86; approximate quantities 309; approximately fuzzy optimal 182; approximately less than or equal to 20; arithmetic operations 31; Associative law 8; axis 34

B
basic feasible solution 165; basic fuzzy variation lemma 336; basic interval variation lemma 331; basic solution 103; binary 5; Boolean matrix 117; bounded 12; Business Management 214

C
canonical types 203; capacity 36; Cartesian product 12; Cauchy sequence 65; Center distance 58; characteristic interval 155; characteristic matrix 282; Ciric-type compacted mapping 319; classification 117; close cone 65; close interval 293; closed region 304; closure 16; Cluster Analysis 117; Cobb-Douglas function 324; collection sleeve 22; combination law 15; Commutative 8; comparison 61; compatible 68; compatibleness 68; complement operation 7; Complete set 3; components being positive 244; composition operator 280; compound 18; comprehensive decision 264; compromise solution 252; concomitant chain 260; condition extremum 327; cone index 67; confidence level 9; conservative path sets 282; consistent 199; constraint 18; constraint complete lattice 205; constraint inequality 221; consume coefficient 95; Continuity 141; continuous and strictly monotone concave function 339; convex combination 12; Convex function 196; convex functional 327; Convex fuzzy set 11; convex normal 50; convex region 339; correlated variable 33; Cramer rule 59; crisp model 45; critical value 43; curve 49

D
Data Mining 94; degree-of-difficulty 223; decomposition theorem 1; degenerate 203; degeneration optimum solution 167; degree of possibility 21; degree of the fitting 52; Delphi method 131; derivable 294; derivatives 294; determinant 351; diagnostics 43; differentiable 36; Differential Equations 293; direct algorithm 201; direct image 314; direct proportion 88; discretion 258; discrimination matrix 291; distance 44; distance closure 122; distinct 3; distinguishing 132; distributing function 78; distribution 69; Distributive law 8; dual algorithm 221; dual method 240; Dual law 8; dual simplex method 54; dual theorem 168; duality 185; dual variables 198; dynamic clustering 119

E
economical benefits 261; elements 4; elimination principle 260; embodied 6; empty set 3; energy resource 12; environment protection 286; equality 5; equivalence 25; Error Analysis 61; essentially smaller than or equal to 139; Establishment 35; excluded-middle law 8; exhaustive 67; existence theorem 296; Expansion 1; expectation value 194; expected level 231; expert evaluation method 250; experts' experience 36; explored 266; Exponential Model 44; exponential regularity 44; expression 70; extension principle 18; extensively 309; extract 46; extreme value 331; Euclidean space 10; Euler equation 351

F
feasible direction 209; feasible domain 274; feasible solution 100; feature 129; filtration rule 283; finite field 261; fitting 36; fixed point 141; five-type fuzzy numbers 1; flat fuzzy coefficients 33; flat fuzzy numbers 29; flexible index 20; flow 269; fluctuating variables 188; Forecast 33; freely 20; function equation 296; Functional 327; functional condition extremum 327; Fuzzification 87; fuzziness 95; fuzzy coefficient 159; fuzzy convex 28; fuzzy convex function 207; fuzzy convex programming 196; fuzzy distance 63; fuzzy dual programming 214; Fuzzy Duoma debt model 310; fuzzy environment 33; fuzzy exponent matrix 200; fuzzy extended matrix 256; fuzzy function 1; fuzzy function determinant 358; fuzzy geometric inequality 197; fuzzy Lagrange function 204; Fuzzy linear programming 102; fuzzy linear regression 35; fuzzy matrices 97; fuzzy maximum 142; Fuzzy Numbers Distance 171; fuzzy objective 20; fuzzy optimal solution 154; Fuzzy posynomial geometric programming 193; Fuzzy Quantities 1; fuzzy regression 35; fuzzy relation 1; fuzzy relation equations 262; fuzzy relation geometric programming 255; fuzzy reversed posynomial geometric programming 199; fuzzy satisfactory 188; fuzzy sets 1; fuzzy set chain 259; fuzzy set-value mapping 293; fuzzy solution 161; fuzzy subset 1; fuzzy super-consistent 242; fuzzy time series analysis 78; fuzzy trajectory 311; fuzzy-valued 18; fuzzy-valued differential 300; fuzzy-valued fixed solution 301; fuzzy-valued functional 327; fuzzy variable 30; fuzzy vector 66

G
generalized 94; genetic algorithm 292; geometric 49; geometric inequality 197; geometric programming 193; goal 47; global fuzzy optimum solution 205; global minimum 225; grade of membership 320; gradient 203; greatest solution 179; group 78

H
Hausdorff measure 304; height 136; homogeneous system 311

I
Idempotent law 8; image 18; imitation 63; immemorial rat 135; implicit function 296; independent variable 33; index 7; ineffective forecast 84; infimum 20; infinite 2; infinite logic sum 2; influence factor 266; initial value 296; Input-Output 95; integrable 172; integral calculus 318; intersection 5; interval 1; interval derivative 294; interval differential equation 295; interval nest 306; Interval number 27; interval same order 294; Interval-valued 293; inverse 14

J
J-compatible 68; J-dimensional 197; J-effective 99; J-nonseparable 99; J-optimal solution 237; judgement method 54

K
k-th approaching 328; Kuhn-Tucker condition 207

L
Lagrange 193; Lagrange multiplier 243; Lattice 205; Lattice Linear Programming 273; least solution 256; least square method 36; Left distance 58; Leontief synthesized model 98; level set 231; limit point 226; line segments 42; linear independence 207; linear programming 33; linearized 46; Lipschitz condition 298; local fuzzy optimum 205; local minimum solution 225; lower and upper bounds 225; L–R fuzzy number 30

M
main diagonal 90; mapping 18; mathematical induction 335; maximizing 139; maximum membership degree principle 128; maximum shortage 261; maximum solution 101; Max method 275; mean value 30; measure 36; membership degree 1; membership function 1; Min-max methods 274; minimization 68; minimum element 5; mixed 67; model 33; model-variable 328; modernization 280; monomial 197; monotone increasing function 284; most satisfactory solution 222; multi-index 67; multi-objective 187; multiplication 110; multi-valued 318; mutually exclusive 67

N
n-dimensional 11; necessary and sufficient condition 64; neighborhood 296; nest of set 356; net 96; Non-distinct 316; nonempty 259; nonfuzzification 44; Nonfuzzify 69; nonfuzzy 242; Nonlinear regression 85; non-linearity type 91; non-negative elements 66; non-tentative-value 234; non-degeneration 207; normal form 154; normal fuzzy set 10; norm time sequence 83; Normal type 3

O
objective function 46; operation properties 1; operator 1; optimal 1; optimization designment 280; optimal estimated values 58; optimal level 146; optimal matrix 155; Optimal Models 1; optimal solution 38; optimal value 47; optimize 287; optimization method 54; Optimizing 255; order relation 5; ordinary 2; ordinary differential equations 293; orthogonality 64

P
parallelogram rule 64; parameter geometric programming 250; parameter variable 61; partial derivative 300; partial difference rate 88; Partial large-scale 3; Partial minitype 3; partial order set 281; pattern recognition 127; perfect forecast 39; piecewise continuous 30; platform 50; platform index 63; point range 227; Pole difference regularity 118; polynomial 234; posynomial 193; precision 43; primal function 294; primal problem 159; properties 1; pseudoconvex 207

Q
quadruple 30; quantities 309; quasiconvex 207; quasi-minimum solution 259

R
randomness 50; ranking 174; real bounded function 20; real line 316; reduced chain 259; reference function 30; reflection 5; regression forecast model 33; regular 63; relation 1; Representation Theorem 25; Restore original law 8; reverse posynomial 234; Right distance 58; right triangles 37

S
satisfactory solution 188; self-dependent sequence 42; Self-regression 33; set value mapping 23; shape 35; shortcut method 119; single objective 189; single-valued 318; simplex method 54; simulated annealing algorithm 190; simultaneous equations 48; singular solution 312; soft constraint 168; Solow model 316; solution sets 143; spread 57; standard deviation; standardization 118; strong α-cut set 9; strong dual theory 213; strong mapping 311; structure of solution 280; sub-cluster 317; subscript set 203; subsets 1; subspace 64; super-consistence 213; support 9; supremum 20; symmetry 5; synthesizes operator 255; system cluster method 127

T
tangentially optimal 206; Taylor theorem 339; technological economic analysis 280; Telephone Amount 44; term 60; T-fuzzy data 85; T-fuzzy number 30; T-fuzzy variable 30; theoretical framework 182; theory 1; Three Mainstream Theorems 21; threshold value 36; topology induced 66; totally degenerate 203; traditional operation rules 309; trajectory 311; transformer substation 365; transitive closure 17; transitive relation 120; transplant 182; trapeziform fuzzy number 173; trapezoidal fuzzy variable 237; triangle distributing 110; triangular fuzzy numbers 57; Tucker 207; type (·, c) 29

U
unbiased estimations 61; unconstrained minimization 77; union 5; unique 64; uniqueness 142; uniqueness theorem 298; unit 60; united table 125; unreduced 203; upper bound 194; upper-right-corner 200

V
vagueness 38; variation 130; variationableness 328; vector space 35; (∨, ·) composition 262; (∨, ·) Fuzzy Relative Equation 261; (∨, ∧) Fuzzy Relative Equation 255

W
weight 83; weighted related-coefficient 90; width 36

Z
Zadeh fuzzy sets 33; zero fuzzy number 108; zero relation 14; Zimmermann algorithm 143

0
0-1 law 8; 0.618 method 47
