Lecture Notes in Computer Science
Commenced Publication in 1973
Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board
David Hutchison, Lancaster University, UK
Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler, University of Surrey, Guildford, UK
Jon M. Kleinberg, Cornell University, Ithaca, NY, USA
Alfred Kobsa, University of California, Irvine, CA, USA
Friedemann Mattern, ETH Zurich, Switzerland
John C. Mitchell, Stanford University, CA, USA
Moni Naor, Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz, University of Bern, Switzerland
C. Pandu Rangan, Indian Institute of Technology, Madras, India
Bernhard Steffen, University of Dortmund, Germany
Madhu Sudan, Microsoft Research, Cambridge, MA, USA
Demetri Terzopoulos, University of California, Los Angeles, CA, USA
Doug Tygar, University of California, Berkeley, CA, USA
Gerhard Weikum, Max-Planck Institute of Computer Science, Saarbruecken, Germany
5656
James F. Peters Andrzej Skowron Marcin Wolski Mihir K. Chakraborty Wei-Zhi Wu (Eds.)
Transactions on Rough Sets X
Editors-in-Chief
James F. Peters, University of Manitoba, Winnipeg, Manitoba, Canada
E-mail: [email protected]
Andrzej Skowron, Warsaw University, Warsaw, Poland
E-mail: [email protected]

Guest Editors
Marcin Wolski, Maria Curie-Skłodowska University, Lublin, Poland
E-mail: [email protected]
Mihir K. Chakraborty, University of Calcutta, Kolkata, India
E-mail: [email protected]
Wei-Zhi Wu, Zhejiang Ocean University, Zhejiang, P.R. China
E-mail: [email protected]

Library of Congress Control Number: 2009931639
CR Subject Classification (1998): F.4.1, F.1.1, H.2.8, I.5, I.4, I.2
ISSN 0302-9743 (Lecture Notes in Computer Science)
ISSN 1861-2059 (Transactions on Rough Sets)
ISBN-10 3-642-03280-X Springer Berlin Heidelberg New York
ISBN-13 978-3-642-03280-6 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. springer.com © Springer-Verlag Berlin Heidelberg 2009 Printed in Germany Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper SPIN: 12720033 06/3180 543210
Preface
Volume X of the Transactions on Rough Sets (TRS) provides evidence of further growth in the rough set landscape, both in terms of its foundations and its applications. This volume of the TRS reflects a number of research streams that were either directly or indirectly begun by the seminal work on rough sets by Zdzislaw Pawlak (1926–2006)¹, starting with his early-1970s work on knowledge description systems prior to his discovery of rough sets in the early 1980s. Evidence of the growth of the various rough set-based research streams can be found in the rough set database². This volume includes articles that are part of a special issue on "Foundations of Rough Sets" originally proposed by Mihir Chakraborty. In addition to research on the foundations of rough sets, this volume of the TRS also presents papers that reflect the profound influence of a number of other research initiatives by Zdzislaw Pawlak. In particular, this volume introduces a number of new advances in the foundations of rough sets. These advances have significant implications in a number of research areas such as entailment and approximation operators, extensions of information systems, information entropy and granulation, lattices, multicriteria attractiveness evaluation of decision and association rules, ontological systems, rough approximation, and rough geometry in image analysis.

This volume of the TRS has been made possible thanks to the laudable efforts of a great many generous persons and organizations. We extend our thanks to the following reviewers: Cheng Ching-Hsue, Martine De Cock, Ivo Düntsch, Jianwen Fang, Anna Gomolińska, Salvatore Greco, Jerzy W. Grzymala-Busse, Masahiro Inuiguchi, Szymon Jaroszewicz, Jouni Järvinen, Piero Pagliani, Sankar Kumar Pal, Lech Polkowski, Yuhua Qian, Jaroslaw Stepaniuk, Wojciech Ziarko and Yiyu Yao. The editors and authors of this volume also extend their gratitude to Alfred Hofmann, Ursula Barth, Christine Reiss and the LNCS staff at Springer for their support in making this volume of the TRS possible. In addition, the editors extend their thanks to Marcin Szczuka for his consummate skill and care in the compilation of this volume.
¹ See, e.g., Peters, J.F., Skowron, A.: Zdzislaw Pawlak: Life and Work, Transactions on Rough Sets V (2006) 1–24; Pawlak, Z.: A Treatise on Rough Sets, Transactions on Rough Sets IV (2006) 1–17. See also Pawlak, Z., Skowron, A.: Rudiments of rough sets, Information Sciences 177 (2007) 3–27; Pawlak, Z., Skowron, A.: Rough sets: Some extensions, Information Sciences 177 (2007) 28–40; Pawlak, Z., Skowron, A.: Rough sets and Boolean reasoning, Information Sciences 177 (2007) 41–73.
² http://rsds.wsiz.rzeszow.pl/rsds.php
The editors of this volume have been supported by the Ministry of Scientific Research and Higher Education of the Republic of Poland, research grant No. N N516 368334, the Natural Sciences and Engineering Research Council of Canada (NSERC) research grant 185986 and a Canadian Arthritis grant SRI-BIO-05.

May 2009
Mihir Chakraborty
Marcin Wolski
Wei-Zhi Wu
James F. Peters
Andrzej Skowron
LNCS Transactions on Rough Sets
The Transactions on Rough Sets series has as its principal aim the fostering of professional exchanges between scientists and practitioners who are interested in the foundations and applications of rough sets. Topics include foundations and applications of rough sets as well as foundations and applications of hybrid methods combining rough sets with other approaches important for the development of intelligent systems. The journal includes high-quality research articles accepted for publication on the basis of thorough peer reviews. Dissertations and monographs of up to 250 pages that include new research results can also be considered as regular papers. Extended and revised versions of selected papers from conferences can also be included in regular or special issues of the journal.

Editors-in-Chief: James F. Peters, Andrzej Skowron
Managing Editor: Sheela Ramanna
Technical Editor: Marcin Szczuka
Editorial Board
M. Beynon, G. Cattaneo, M.K. Chakraborty, A. Czyżewski, J.S. Deogun, D. Dubois, I. Düntsch, S. Greco, J.W. Grzymala-Busse, M. Inuiguchi, J. Järvinen, D. Kim, J. Komorowski, C.J. Liau, T.Y. Lin, E. Menasalvas, M. Moshkov, T. Murai, M. do C. Nicoletti, H.S. Nguyen, S.K. Pal, L. Polkowski, H. Prade, S. Ramanna, R. Słowiński, J. Stefanowski, J. Stepaniuk, Z. Suraj, R. Świniarski, M. Szczuka, S. Tsumoto, G. Wang, Y. Yao, N. Zhong, W. Ziarko
Table of Contents
Rough Set Theory: Ontological Systems, Entailment Relations and Approximation Operators
  Marcin Wolski .......................................................... 1

Information Entropy and Granulation Co-Entropy of Partitions and Coverings: A Summary
  Daniela Bianucci and Gianpiero Cattaneo ............................... 15

Lattices with Interior and Closure Operators and Abstract Approximation Spaces
  Gianpiero Cattaneo and Davide Ciucci .................................. 67

Rough Approximation Based on Weak q-RIFs
  Anna Gomolińska ....................................................... 117

Rough Geometry and Its Applications in Character Recognition
  Xiaodong Yue and Duoqian Miao ......................................... 136

Extensions of Information Systems: The Rough Set Perspective
  Krzysztof Pancerz ..................................................... 157

Intangible Assets in a Polish Telecommunication Sector – Rough Sets Approach
  Agnieszka Maciocha and Jerzy Kisielnicki .............................. 169

Multicriteria Attractiveness Evaluation of Decision and Association Rules
  Izabela Szczęch ....................................................... 197

Author Index ............................................................ 275
Rough Set Theory: Ontological Systems, Entailment Relations and Approximation Operators

Marcin Wolski

Department of Logic and Methodology of Science, Maria Curie-Sklodowska University, Poland
[email protected]

Abstract. The paper exhibits some new connections between ontological information systems equipped with approximation operators and entailment relations. The study is based on the conceptual framework of information quanta [4,5], where each system is defined as a relational structure and all approximation operators are defined in terms of Galois connections. We start our investigation with Scott systems, that is, sets equipped with Scott entailment relations. Following the research by Vakarelov [2,11,12,13], we consider Scott systems induced by property systems and provide their characterisation in terms of Galois connections. We also recall how such connections allow one to define approximation operators from rough set theory (RST) [6,7] and derivation operators from formal concept analysis (FCA) [14,15]. Since we would like to have a uniform representation for both complete and incomplete Pawlak information systems, our attention is drawn to topological property systems, which additionally allow for a natural Galois-based generalisation of approximation operators. By considering more specific Scott systems, such as Tarski systems and standard Tarski systems, we obtain stronger connections to approximation operators from RST and FCA. Eventually, on the basis of these operators one can define Scott information systems with the trivial consistency predicate.
1 Introduction
Entailment relations, introduced by Scott [9] as abstract generalisations of Gentzen's multi-conclusion sequent calculus, are naturally connected to a number of mathematical structures, e.g. continuous linear forms, commutative rings, spaces of valuations or distributive lattices [1]. In this paper we are interested in Scott entailment relations and information structures from rough set theory (RST) [6,7]. A number of contributions to this topic have recently been made by Vakarelov [2,11,12,13]. He regards basic structures from RST as ontological information systems, where the given information is represented by means of ontological concepts such as objects, attributes and properties. On the other hand, by logical information systems he understands systems where the information is represented in terms of sentences and elaborated by means of a logical inference,
that is, a consequence relation. Special emphasis is put on the Scott entailment relation. Vakarelov shows that there exists a mutual interpretability between ontological systems from RST, such as Pawlak information systems and property systems, and Scott systems, that is, sets equipped with Scott entailment relations. He also proves that the category of all distributive lattices and all lattice homomorphisms is isomorphic to a reflective full subcategory of the category of all Scott systems.

The motivation for this paper also comes from the conceptual framework of information quanta (IQ), which was explicitly introduced and explored by Pagliani and Chakraborty [4,5]. In this framework, information relational structures are considered together with approximation operators induced by means of Galois connections over the underlying binary relations. IQ distinguishes two basic types of information systems: property systems (PSs) and information quantum relational systems (IQRSs). PSs from IQ coincide with the finite property systems introduced by Vakarelov [13]; however, they have a different form of representation. Operators which can be defined over both PSs and IQRSs are called quantum operators. In this paper we consider both quantum and non-quantum operators defined over PSs. We take (finite) topological spaces (regarded as PSs) and preordered sets (regarded as IQRSs) as working examples of information systems from IQ. As said earlier, we confine our attention only to PSs and, as a consequence, to finite topological spaces. The structures from RST which correspond to PSs and IQRSs are given by approximation topological spaces [8] and approximation spaces, respectively. It is worth emphasising that Alexandroff topological spaces also allow us to generalise approximation operators from RST while preserving their Galois-based definitions. Furthermore, in the case of finite sets these spaces provide a PS-driven representation of Pawlak incomplete information systems.

Generally speaking, the paper is concerned with approximation operators, information structures from IQ (ontological systems), and Scott entailment relations. We shall discuss all basic ontological systems and explain how they are related. The paper also explains how quantum operators give rise to approximation operators from RST, and recalls how non-quantum operators define derivation operators from formal concept analysis (FCA) [14,15]. Thus, to some extent, we study how ontologies based on FCA and RST are mutually interrelated and how they are connected to the concept of a Scott system. We also consider two special types of Scott systems: Tarski systems and standard Tarski systems. These systems have a nice characterisation in terms of approximation operators and, in consequence, in terms of RST and FCA ontologies. Furthermore, they allow one to define Scott information systems. Thus, eventually we show how ontological systems induce Scott information systems by means of approximation operators.
2 Ontological Information Systems
In this section we recall basic concepts from the framework of information quanta (IQ) [4,5]. Then we give a simple example from topology, which will be our 'leitmotif' throughout this paper. Next, we introduce basic concepts from rough set theory (RST) [6,7] and interpret them in the framework of IQ. Following ideas by Vakarelov [2,11,12,13], we regard all structures at issue as ontological information systems, where the given information is represented by means of ontological concepts such as objects, attributes and properties. However, for completeness of presentation we shall also introduce the so-called information quantum relational systems, where the given information about objects is reduced to a binary relation defined over them. In order to make the paper self-contained, we also recall results proved in [16]. For clarity, this section is divided into two subsections.

2.1 Information Quanta and Approximation Operators
As said above, this subsection is concerned with information systems from the conceptual framework of IQ. We also discuss in some detail their specific exemplifications, namely preordered sets and Alexandroff topologies, which serve us as a leading motive throughout this paper.

Definition 1 (Property System). A triple P = (U, M, |=), where U and M are finite sets of objects and properties, respectively, and |= ⊆ U × M is a relation such that for all a ∈ U there exists m ∈ M such that a |= m, and for all m ∈ M there exists a ∈ U such that a |= m, is called a context or property system (PS).

The concept of PS was investigated earlier by Vakarelov [13]; however, instead of the incidence relation |= he used an information function f such that for all a ∈ U, f(a) ⊆ M. Needless to say, both definitions are mutually interpretable, and depending on the theoretical context we shall feel free to use the former or the latter. To be more specific, a finite property system P = (U, M, f) is the same as P = (U, M, |=), where m ∈ f(a) iff a |= m. It is worth emphasising that IQ draws attention to a different aspect of a PS, namely to the underlying relation |= and its approximation operators.

It often happens that the set of properties is not sufficiently rich to distinguish any two given elements. In this case information must be encoded globally as a relation which glues objects into information granules.

Definition 2 (IQRS). Let P = (U, M, |=) be a PS and let Qa = {a′ ∈ U : (∀m ∈ M)(a |= m ⇒ a′ |= m)}, for all a ∈ U. Then the relation R ⊆ U × U such that (a, a′) ∈ R iff a′ ∈ Qa is called an information quantum relation, and the pair (U, R) is called an information quantum relational system (IQRS).

As said above, apart from information systems, IQ also focuses on approximation operators induced by binary relations. These operators are defined in terms of Galois connections; our presentation of Galois connections is based on [3].

Definition 3 (Galois Connection). Let U = (U, ≤) and V = (V, ⊑) be partially ordered sets (posets). If π_* : U → V and π^* : V → U are functions such that for all a ∈ U and b ∈ V, a ≤ π^*(b) iff π_*(a) ⊑ b, then the quadruple π = ⟨U, π_*, π^*, V⟩ is called a Galois connection, and π_* and π^* are called the coadjoint and adjoint part of π, respectively.

Proposition 1 (Properties of Galois Connections). The following statements hold for any Galois connection ⟨U, π_*, π^*, V⟩:
1. Both π_* and π^* are order-preserving.
2. π_* and π^* are mutual quasi-inverses, i.e. π_* π^* π_* = π_* and π^* π_* π^* = π^*.
3. π^* π_* is a closure operator on U and π_* π^* is an interior operator on V.
4. π_*(a) = inf{b ∈ V : a ≤ π^*(b)} and π^*(b) = sup{a ∈ U : π_*(a) ⊑ b}.
5. π_* preserves joins (i.e. suprema) and π^* preserves meets (i.e. infima).
Now we discuss a special type of Galois connections, namely connections induced by binary relations [3]. As said above, such connections are of special importance in IQ. In what follows, P(U) denotes the powerset of U.

Proposition 2 (Polarity). Any relation R ⊆ U × V induces a Galois connection called a polarity, R^+_+ = ⟨(P(U), ⊆), R^+, R_+, (P(V), ⊆)⟩, where R^+ and R_+ are defined as follows: for any A ⊆ U and B ⊆ V,

R^+(A) = {b ∈ V : (∀a ∈ A) ⟨a, b⟩ ∈ R},
R_+(B) = {a ∈ U : (∀b ∈ B) ⟨a, b⟩ ∈ R}.

Proposition 3 (Axiality). Any relation R ⊆ U × V induces a Galois connection (adjunction) R^∃_∀ = ⟨(P(U), ⊆), R∃, R∀, (P(V), ⊆)⟩ called an axiality, where R∃ and R∀ are defined as follows: for any A ⊆ U and B ⊆ V,

R∃(A) = {b ∈ V : (∃a ∈ U)((a, b) ∈ R & a ∈ A)},
R∀(B) = {a ∈ U : (∀b ∈ V)((a, b) ∈ R ⇒ b ∈ B)}.

R⁻¹ denotes the converse relation of R, that is, bR⁻¹a iff aRb. The theoretical dual of R^∃_∀, namely the axiality (R⁻¹)^∃_∀ with components (R⁻¹)∃ and (R⁻¹)∀, is also an axiality, but from (P(V), ⊆) to (P(U), ⊆).

Let us now recall that for any topological space (U, τ) we can convert the relation of set inclusion on τ into a preorder defined on the elements of U, called the specialisation preorder: a ≼ b iff Cl({a}) ⊆ Cl({b}). For an arbitrary preordered set (U, ≼) there is always a topology τ whose specialisation preorder is ≼, and in general there are many of them.

Definition 4 (Specialisation Topology). Let U = (U, ≼) be a preordered set. A specialisation topology on U is a topology τ with specialisation preorder ≼ such that every automorphism of U is a homeomorphism of (U, τ).
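Since polarities and axialities do all the representational work in what follows, a small computational sketch may help fix intuitions. The following Python fragment is ours, not from the paper; all function names are illustrative.

```python
# Sketch (ours; names illustrative): the polarity of Proposition 2 and the
# axiality of Proposition 3 for a finite relation R ⊆ U × V given as pairs.

def pol_up(R, A, V):      # R^+(A): properties shared by every object of A
    return {b for b in V if all((a, b) in R for a in A)}

def pol_down(R, B, U):    # R_+(B): objects bearing every property of B
    return {a for a in U if all((a, b) in R for b in B)}

def ax_exists(R, A):      # R∃(A): properties of at least one object of A
    return {b for (a, b) in R if a in A}

def ax_forall(R, B, U, V):  # R∀(B): objects whose properties all lie in B
    return {a for a in U if all(b in B for b in V if (a, b) in R)}

# By Proposition 1(3), pol_down(R, pol_up(R, A, V), U) is a closure of A,
# and ax_exists(R, ax_forall(R, B, U, V)) an interior of B.
```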
The topology τ_E induced by an equivalence relation E is an example of a specialisation topology. In order to obtain a stronger relationship between preorders and topologies we need the concept of an Alexandroff topology.

Definition 5 (Alexandroff Space). A topological space (U, τ) whose topology τ is closed under arbitrary intersections and arbitrary unions is called an Alexandroff space. In such a case, each a ∈ U has a smallest neighbourhood, defined as follows: ∇(a) = ⋂{A ∈ τ : a ∈ A}.

The Alexandroff topology is actually the largest specialisation topology induced by ≼. In this topology the sets ∇_≼(a) = {b ∈ U : a ≼ b}, for all a ∈ U, form a subbasis. Moreover, one can prove that ∇(a) = ∇_≼(a) for any a.

Proposition 4 (Correspondence). There exists a one-to-one correspondence between Alexandroff topologies on a set U and preorders on U.

Actually, Alexandroff spaces and preordered sets regarded as categories are dually isomorphic, and we may identify them. Let us stress that any finite topological space is an Alexandroff space. It is easy to observe that any finite topological space (U, τ) may be regarded as a PS P = (U, τ, |=), where open sets serve as properties and |= is the standard set-theoretic membership relation ∈.

Proposition 5. Let (U, τ) be a (finite) topological space, ≼ its specialisation preorder, and (U, R) its IQRS. Then R = ≼. In other words, for any finite topological space its specialisation preorder coincides with the information quantum relation R induced by its PS.

Proposition 6. Let (U, τ) be a finite topological space and P = (U, τ, |=) its property system. Then, for all A ⊆ U:
1. |=∀ |=∃ (A) = Cl(A),
2. |=∃ |=∀ (A) = Int(A) (here the composition is taken along the converse relation |=⁻¹, as the typing requires),
3. |=_+ |=^+ (A) = ∇(A).

Similar results as for approximation topological spaces can also be proved for the corresponding IQRSs, that is, (finite) preordered sets whose preorders are specialisation preorders.

Proposition 7. Let (U, τ) be a finite topological space, (U, R) its corresponding IQRS, and R^∃_∀ the corresponding axiality. Then, for all A ⊆ U:
1. R∀ R∃ (A) = ∇(A),
2. (R⁻¹)∀ (R⁻¹)∃ (A) = Cl(A),
3. R∃ R∀ (A) = Int(A),
4. (R⁻¹)∃ (R⁻¹)∀ (A) = ⋃{Cl({x}) : x ∈ U, Cl({x}) ⊆ A}.

Please observe that operators induced by polarities cannot be defined over IQRSs. Therefore approximation operators induced by axialities, in contrast to operators induced by polarities, are also called quantum operators [4,5].
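For a concrete feel of Propositions 5–7, the operators can be computed directly from the smallest neighbourhoods of a finite topology. A minimal sketch (ours; names and the data layout are assumptions):

```python
# Sketch (ours): smallest neighbourhoods and the operators of Propositions
# 6-7 in a finite (hence Alexandroff) topology; tau must contain U itself.

def nabla(a, tau):
    """∇(a): intersection of all open sets containing a (Definition 5)."""
    return set.intersection(*[set(O) for O in tau if a in O])

def interior(A, U, tau):
    """Int(A) = {a : ∇(a) ⊆ A}; the composition R∃R∀ of Proposition 7."""
    return {a for a in U if nabla(a, tau) <= set(A)}

def closure(A, U, tau):
    """Cl(A) = {a : ∇(a) ∩ A ≠ ∅}, ∇(a) being a's smallest neighbourhood."""
    return {a for a in U if nabla(a, tau) & set(A)}

U = {1, 2, 3}
tau = [frozenset({1}), frozenset({1, 2}), frozenset(U)]
closure({1}, U, tau)      # -> {1, 2, 3}: every ∇(a) meets {1}
interior({2, 3}, U, tau)  # -> set(): no ∇(a) fits inside {2, 3}
```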
2.2 Rough Set Theory
In this subsection we briefly recall basic notions from RST [6,7]. We start by introducing the concept of an information system, and then discuss the concepts of approximation space and approximation topological space. We view all these structures as examples of information structures from IQ. This subsection also recalls results concerned with a Galois-based representation of approximation operators from RST [16].

Definition 6 (Information System). A quadruple I = (U, Att, Val, f) is called an information system, where:
– U is a non-empty finite set of objects;
– Att is a non-empty finite set of attributes;
– Val = ⋃_{A∈Att} Val_A, where Val_A is the value-domain of the attribute A, and Val_A ∩ Val_B = ∅ for all distinct A, B ∈ Att;
– f : U × Att → Val is an information function such that for all A ∈ Att and a ∈ U it holds that f(a, A) ∈ Val_A.

If f(a, A) ≠ ∅ for all a ∈ U and A ∈ Att, then the information system I is called complete.

As one can easily observe, each information system I = (U, Att, Val, f) gives rise to a PS P_I = (U, M, f), where M = ⋃_{A∈Att} Val_A and f(a) = {f(a, A) : A ∈ Att}. The concept of information system leads to another very basic information structure from RST. Each subset of attributes S ⊆ Att determines an equivalence relation IND(S) ⊆ U × U defined as follows:

IND(S) = {(a, b) : (∀A ∈ S) f(a, A) = f(b, A)}.

As usual, IND(S) is called the indiscernibility relation induced by S, the partition induced by the relation IND(S) is denoted by U/IND(S), and [a]_S denotes the equivalence class of IND(S) defined by a ∈ U. Obviously, U/IND(Att) refines every other partition U/IND(S), where S ⊆ Att. So one can start with a pair (U, E) and assume that E = IND(Att) for some I = (U, Att, Val, f). A simple generalisation of this observation is given by:

Definition 7 (Approximation Space). The pair (U, E), where U is a non-empty set and E is an equivalence relation on U, is called an approximation space. A subset A ⊆ U is called definable if A = ⋃B for some B ⊆ U/E, where U/E is the family of equivalence classes of E.
Definition 8 (Approximation Operators). Let (U, E) be an approximation space. For every concept A ⊆ U, its E-lower and E-upper approximations, denoted A̲ and A̅ respectively, are defined as follows:

A̲ = {a ∈ U : [a]_E ⊆ A},   A̅ = {a ∈ U : [a]_E ∩ A ≠ ∅}.

The chief idea of RST is to approximate any set A by means of two definable sets: A̲ and A̅. The lower approximation A̲ consists of objects which necessarily belong to A, whereas the upper approximation A̅ consists of objects which possibly belong to A. For any A ⊆ U, the pair (A̲, A̅) is called a rough set. Observe that for any definable set A it holds that A̲ = A̅.

Of course, an approximation space is also an IQRS in which the quantum information relation is an equivalence relation. An approximation space (U, E) may be converted into a topological space (U, τ_E) called an approximation topological space [8]. Customarily, Int and Cl will denote the interior and closure operators, respectively.

Definition 9 (Approximation Topological Space). A topological space (U, τ_E), where U/E, the family of all equivalence classes of E, is the minimal basis of τ_E and Int is given by Int(A) = ⋃{[a]_E ∈ U/E : a ∈ U & [a]_E ⊆ A}, is called an approximation topological space.

On this view, a set A ⊆ U is definable only if A ∈ τ_E. It is worth emphasising that every approximation topological space satisfies the following clopen set property: every closed set is open and every open set is closed [8]. In topology such sets are called clopen. Obviously, any finite approximation topological space (U, τ_E) can be represented as a PS P = (U, τ_E, |=), where a |= A iff a ∈ A, for all A ∈ τ_E.

Now we explain the relationships among all the notions introduced above. Firstly, from the perspective of approximation spaces, subsets of the universe U are regarded as concepts. Therefore a PS P_I = (U, M, f) induced by some information system I = (U, Att, Val, f) must be interpreted extensionally, that is, each property from M must be defined as a subset of U. To do so we introduce a kind of inverse of Val_A:

Val⁻¹_{A,X} = {a ∈ U : f(a, A) = X & X ∈ Val_A}.

In this way we get an extensional PS P_I^ext = (U, M, |=), where M = {Val⁻¹_{A,X} : A ∈ Att & X ∈ Val_A} and a |= X iff a ∈ Val⁻¹_{A,X}.
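Definition 8 is easy to make executable on a toy complete information system. A minimal sketch, with our own (hypothetical) data layout and names:

```python
# Sketch: indiscernibility classes U/IND(S) and the lower/upper
# approximations of Definition 8 for a toy information system.

def ind_classes(f, S):
    """Group objects by their value vector on the attributes in S."""
    buckets = {}
    for x in f:
        buckets.setdefault(tuple(f[x][a] for a in S), set()).add(x)
    return list(buckets.values())

def lower(A, classes):
    return set().union(*(c for c in classes if c <= A))

def upper(A, classes):
    return set().union(*(c for c in classes if c & A))

f = {"a": {"colour": "red", "size": 1},
     "b": {"colour": "red", "size": 1},
     "c": {"colour": "blue", "size": 2}}
cls = ind_classes(f, ["colour", "size"])        # [{'a','b'}, {'c'}]
lower({"a", "c"}, cls), upper({"a", "c"}, cls)  # ({'c'}, {'a','b','c'})
```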
Proposition 8. Let I = (U, Att, Val, f) be a complete information system, P_I^ext = (U, M, |=) its extensional PS, (U, E) its approximation space, i.e. E = IND(Att), and (U, τ_E) the corresponding approximation topological space. Then M is a subbasis of τ_E.
Proof. Since U/IND(Att) is the minimal basis for τ_E, it suffices to show that the set ∇_Val(a) = ⋂{Val⁻¹_{A,X} : a ∈ Val⁻¹_{A,X}} is equal to [a]_E. By completeness of I = (U, Att, Val, f) we have that b ∈ ∇_Val(a) iff for all A ∈ Att and X ∈ Val_A it holds that f(b, A) = X iff f(a, A) = X. It means that f(b, A) = f(a, A) for all A ∈ Att, so (b, a) ∈ IND(Att), which is equivalent to b ∈ [a]_E.

By Proposition 5 we obtain that:

Corollary 1. Let (U, E) be a finite approximation space and (U, τ_E) its approximation topological space. Then (U, E) is the IQRS induced by the PS (U, τ_E, |=).

Please observe that we can also easily connect information systems with the topologies considered in Section 2.1.

Proposition 9. Let I = (U, Att, Val, f) be an incomplete information system and P_I^ext = (U, M, |=) its extensional PS. Then M is a subbasis of an Alexandroff topology τ_I on U. To be more precise, this topology is finite and, in consequence, Alexandroff.

Summing up, given an information system I = (U, Att, Val, f) we can obtain a finite (Alexandroff) topology τ_I. For a complete information system I = (U, Att, Val, f) we get τ_I = τ_E, where E = IND(Att). There is a very important difference between τ_I and τ_E: the latter is always symmetric, i.e. a ∈ ∇(b) iff b ∈ ∇(a). In other words, one cannot have better knowledge about a than about b.

Definition 10 (Topological Property System). Let I = (U, Att, Val, f) be an information system. Then the corresponding PS P = (U, τ_I, |=) will be called the topological property system induced by I.

This approach to PSs based on finite topological spaces allows us to incorporate all the results about approximation operators proved in Propositions 6 and 7; as a special case we obtain that:

Corollary 2. Let P_I = (U, τ_E, |=) be a topological PS induced by a complete information system I = (U, Att, Val, f), that is, E = IND(Att). Then, for all A ⊆ U:
1. |=∀ |=∃ (A) = A̅ = E∀ E∃ (A),
2. |=∃ |=∀ (A) = A̲ = E∃ E∀ (A).

Another theory of data analysis which is closely associated with IQ is formal concept analysis (FCA) [14,15]. In contrast to RST, FCA is based on polarities and is therefore meaningful only for PSs. The main emphasis of FCA is put on a lattice of concepts which provides us with hierarchical knowledge about a given PS.

Definition 11 (Concept). A concept of a given PS P = (U, M, |=) is a pair (A, B), where A ⊆ U and B ⊆ M, such that A = |=_+(B) and B = |=^+(A). A set A ⊆ U is called an extent concept if A = |=_+ |=^+ (A). Similarly, if B ⊆ M is such that B = |=^+ |=_+ (B), then B is called an intent concept.
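On toy property systems the extent concepts of Definition 11 can be enumerated by brute force. A sketch with our own helper names:

```python
# Sketch: extent concepts A = |=_+ |=^+ (A) of a property system,
# enumerated over all subsets of U (fine only for toy sizes).

from itertools import combinations

def up(A, R, M):        # |=^+ : properties shared by all of A
    return {m for m in M if all((a, m) in R for a in A)}

def down(B, R, U):      # |=_+ : objects carrying all of B
    return {a for a in U if all((a, m) in R for m in B)}

def extents(U, M, R):
    out = []
    for r in range(len(U) + 1):
        for T in combinations(sorted(U), r):
            A = set(T)
            if down(up(A, R, M), R, U) == A:
                out.append(A)
    return out  # under ⊆ these form the complete lattice of Proposition 10
```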
In the traditional FCA community both extent and intent concepts are denoted by A′. In what follows we are interested in extent concepts rather than intent concepts, since in RST a concept is any subset of objects from a given domain U. On the other hand, most recent papers about FCA are concerned with intent concepts. Anyway, both types of concepts form closure systems which are dually isomorphic, and from a mathematical point of view there is no difference between them.

Proposition 10 (Wille). The set of all extent concepts induced by a PS P = (U, M, |=) is a complete lattice under subset inclusion ⊆.

This complete lattice will be called the FCA ontology of P = (U, M, |=) and denoted by O_P^FCA. Similarly, the set of all concepts A ⊆ U such that A̲ = A̅ will be called the RST ontology of a topological PS P_I = (U, τ_I, |=) and denoted by O_P^RST.

Corollary 3. For a topological PS P = (U, τ_E, |=) induced by a complete information system I = (U, Att, Val, f) it holds that O_P^FCA = O_P^RST.

In the next section we focus upon relationships between topological property systems and Scott systems defined in terms of approximation operators.
3 Logical Information Systems
In this section we analyse various entailment relations originating from the Scott consequence relation [9]. One key point here is that Scott entailment relations are naturally connected to many basic mathematical concepts, e.g. continuous linear forms or distributive lattices [1]. It is thus also natural, in the context of RST, to ask how this consequence relation is connected to concepts such as a Pawlak information system or a property system. Recently, Vakarelov has made a number of contributions to this topic [2,11,12,13]. Starting from his research, we aim to provide some new relationships between Scott entailment relations and the basic information systems and approximation operators from IQ. In order to get a stronger relationship between Scott entailments and approximation operators, we also discuss more specific Scott systems, namely Tarski systems and standard Tarski systems. Eventually, following Scott, e.g. [10], we examine the concept of a Scott information system.

Definition 12 (Entailment Relation). An entailment relation ⊢ defined on a set S is a relation between finite subsets of S satisfying the following conditions (where, as usual, A, A′ abbreviates A ∪ A′):

A ⊢ B if A ∩ B ≠ ∅ (1)

if A ⊢ B, then A, A′ ⊢ B, B′ (2)

if A ⊢ {s} ∪ B and A ∪ {s} ⊢ B, then A ⊢ B (3)
Such a concept of entailment, as suggested by Scott, may be seen as a generalisation of Gentzen's multi-conclusion sequent calculus. A pair S = (S, ⊢) is called a Scott system. As said earlier, a PS P = (U, M, |=) can be represented as P = (U, M, f), where f : U → P(M) is an information function such that m ∈ f(a) iff a |= m.

Proposition 11 (Vakarelov). Let P = (U, M, f) be a PS and define

A ⊢_P B iff ⋂_{a∈A} f(a) ⊆ ⋃_{b∈B} f(b),

for all finite sets A, B ⊆ U. Then S = (U, ⊢_P) is a Scott system over P = (U, M, f), called the canonical Scott system of P.

Proposition 12 (Vakarelov). Let S = (S, ⊢) be a Scott system. Then there exists a PS P_S = (U, M, f) such that S = (S, ⊢) = (U, ⊢_{P_S}).

As said earlier, for a given information system I = (U, Att, Val, f) we are especially interested in its topological PS P_I = (U, τ_I, |=) and the corresponding Galois connections, namely its axiality and polarity.

Corollary 4. Let P_I = (U, τ_I, |=) be a topological PS induced by an information system I = (U, Att, Val, f), and define A ⊢_P B iff |=^+(A) ⊆ |=∃(B), for all A, B ⊆ U. Then S = (U, ⊢_P) is the canonical Scott system over P_I.

Proof. It is easy to observe that |=^+(A) = ⋂_{a∈A} f(a) and |=∃(B) = ⋃_{b∈B} f(b), where f is the information function induced by |=.

Let us recall that any two adjoint functions are mutual quasi-inverses, and thus:

|=^+ |=_+ |=^+ (A) = |=^+ (A),   |=∃ |=∀ |=∃ (B) = |=∃ (B).

In consequence, we get that:

Corollary 5. Let P_I = (U, τ_E, |=) be a topological PS induced by a complete information system I = (U, Att, Val, f). For the canonical Scott system S = (U, ⊢_P) it holds that:

A ⊢_P B iff A″ ⊢_P B, and A ⊢_P B iff A ⊢_P B̅,

for all A, B ⊆ U (here A″ = |=_+ |=^+ (A) and B̅ = |=∀ |=∃ (B)).
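Proposition 11 and Corollary 4 amount to a single containment test. A sketch (ours), with the empty-premise intersection defaulting to M:

```python
# Sketch of Proposition 11 / Corollary 4: A ⊢_P B iff the properties shared
# by all of A, i.e. |=^+(A), lie inside |=_∃(B), the union over B.

def entails(f, M, A, B):
    shared = set.intersection(*(set(f[a]) for a in A)) if A else set(M)
    return shared <= set().union(*(set(f[b]) for b in B))

f = {"x": {"m1", "m2"}, "y": {"m2", "m3"}, "z": {"m3"}}
entails(f, {"m1", "m2", "m3"}, {"x", "y"}, {"z"})  # {m2} ⊄ {m3} -> False
entails(f, {"m1", "m2", "m3"}, {"x", "y"}, {"y"})  # {m2} ⊆ {m2,m3} -> True
```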
Thus, in actual fact, the Scott system S = (U, ⊢_P) induced by an approximation (topological) space is reduced, in terms of the information gathered about U, to O_P^RST = O_P^FCA. In other words, we do not actually infer from A to B, but from the upper approximation A̅ = A″ of A to the upper approximation B̅ = B″ of B. When one generalises the above observations about RST to Alexandroff topological spaces, which preserve most formal properties of approximation spaces, something interesting happens.

Corollary 6. Let P_I = (U, τ_I, |=) be a topological PS induced by an information system I = (U, Att, Val, f). Define A ⊢_P B iff |=^+(A) ⊆ |=∃(B), for all A, B ⊆ U. Then S = (U, ⊢_P) is the canonical Scott system over P_I.

Corollary 7. Let P_I = (U, τ_I, |=) be a topological PS induced by some information system I = (U, Att, Val, f) and let S = (U, ⊢_P) be the corresponding canonical Scott system. Then it holds that:

A ⊢_P B iff A″ ⊢_P B̅,

for all finite A, B ⊆ U.

Thus, in the case of incomplete information we actually reason using both the O_P^FCA and the O_P^RST ontologies, which of course differ. Now let us consider a special type of Scott systems, called Tarski systems:
Definition 13 (Tarski System). Let S = (S, ⊢) be a Scott system. Then ⊢ is a Tarski consequence relation on S, and S is a Tarski system, if the following condition is satisfied for any A, B ⊆ S: if A ⊢ B, then there exist a finite A′ ⊆ A and a b ∈ B such that A′ ⊢ {b}.

Thus, for a Tarski system S = (U, ⊢_P) over a property system P = (U, M, f), it holds that: if A ⊢_P B, then |=^+(A) ⊆ |=∃({b}) for some b ∈ B. This observation allows us to prove that:

Proposition 13. Let S = (U, ⊢_P) be a Tarski system over a topological PS P = (U, τ_I, |=), which, in turn, is induced by some information system I = (U, Att, Val, f). Then for any A, B ⊆ U it holds that: if A ⊢_P B, then b ∈ |=_+ |=^+ (A) for some b ∈ B, and, in consequence,

if A ⊢_P B, then b ∈ A″,

for some b ∈ B.
Proof. Firstly, observe that starting from an information system means that we deal only with finite sets. Thus, if A ⊢_P B, then |=^+(A) ⊆ |=∃({b}) for some b ∈ B. Furthermore, for any b ∈ B it holds that |=∃({b}) = |=^+({b}). Thus |=^+(A) ⊆ |=^+({b}) iff |=_+ |=^+ ({b}) ⊆ |=_+ |=^+ (A) iff b ∈ |=_+ |=^+ (A).

Thus, for Tarski systems induced by (incomplete) information systems the underlying ontology is O_P^FCA. However, in the case of complete information systems the ontologies O_P^FCA and O_P^RST coincide, and thus we obtain:

Corollary 8. Let S = (U, ⊢_P) be a Tarski system over P_I = (U, τ_E, |=) induced by some complete information system I = (U, Att, Val, f). Then for any A, B ⊆ U one obtains that: if A ⊢_P B, then b ∈ A̅, for some b ∈ B.

On the basis of the above observations, let us consider Scott systems with the standard Tarski consequence relation ⊢_T ⊆ P(S) × S.

Definition 14 (Standard Tarski System). Let S = (S, ⊢) be a Scott system. Then the standard Tarski system over S is T_S = (S, ⊢_T), where ⊢_T ⊆ P(S) × S is defined by: A ⊢_T a iff A ⊢ {a}. A standard Tarski system T_S = (S, ⊢_T) over a Scott system S = (S, ⊢_P), which in turn is induced by some PS P = (U, M, f), will be denoted by T_P.

Proposition 14. Let T_{P_I} = (U, ⊢_T) be a standard Tarski system over a topological PS P_I = (U, τ_I, |=), induced by some information system I = (U, Att, Val, f). Then A is closed under ⊢_T iff A = A″, for all A ⊆ U.

Proof. A set A is closed under ⊢_T if A ⊢_T a implies a ∈ A. Assume that A is closed under ⊢_T. By Proposition 13, A ⊢_T a means that a ∈ A″, and thus if A is closed under ⊢_T then A = A″. Now assume that A = A″ and A ⊢ a. Then by Proposition 13 a ∈ A″, and by the assumption one obtains a ∈ A. Thus A is closed under ⊢_T.

Corollary 9. Let T_{P_I} = (U, ⊢_T) be a standard Tarski system over P_I = (U, τ_E, |=), induced by some complete information system I = (U, Att, Val, f). Then A is closed under ⊢_T iff A = A̅, for all A ⊆ U.

Given a standard Tarski entailment relation, we can consider other structures based on similar concepts.

Definition 15 (Scott Information System). A Scott information system is a structure SI = (S, Con, ⊢), where
– S is a nonempty set of objects or propositions,
– Con is a set of finite subsets of S (the consistent sets of objects), with ∅ ∈ Con, and
– ⊢ ⊆ Con × S is a binary relation (the entailment relation for objects),
such that the following conditions are satisfied for any A, B ⊆ S and a, b, c ∈ S:
1. if A ⊆ B and B ∈ Con, then A ∈ Con;
2. if a ∈ S, then {a} ∈ Con;
3. if A ⊢ a and A ∈ Con, then A ∪ {a} ∈ Con;
4. if a ∈ A and A ∈ Con, then A ⊢ a;
5. if A ⊢ b for all b ∈ B, and B ⊢ c, then A ⊢ c.
Proposition 15. Let T_{P_I} = (U, ⊢_T) be a standard Tarski system over P_I = (U, τ_I, |=), induced by some information system I = (U, Att, Val, f), and let Con = P(U). Then (U, Con, ⊢_T) is a Scott information system.

Proof. Since Con = P(U), it suffices to show that (a) if a ∈ A, then A ⊢_T a, and (b) if A ⊢_T b for all b ∈ B, and B ⊢_T c, then A ⊢_T c. (a) Suppose a ∈ A; then a ∈ A″, and thus, by Proposition 13, A ⊢_T a. (b) First, if A ⊢_T b for all b ∈ B, then b ∈ A″ for all b ∈ B, and therefore B ⊆ A″. Now suppose that B ⊢_T c. It means that c ∈ B″, and by the previous observation c ∈ A″. In consequence, A ⊢_T c.

The above proposition allows for a simple definition of Scott information systems in terms of approximation operators from FCA and RST:

Corollary 10. Let P_I = (U, τ_I, |=) be a topological PS, induced by some information system I = (U, Att, Val, f), and let Con = P(U). Define A ⊢_FCA a iff a ∈ |=_+ |=^+ (A). Then (U, Con, ⊢_FCA) is a Scott information system.

Corollary 11. Let P_I = (U, τ_I, |=) be a topological PS, induced by some information system I = (U, Att, Val, f), and let Con = P(U). Define A ⊢_RST a iff a ∈ |=∀ |=∃ (A). Then (U, Con, ⊢_RST) is a Scott information system.

Observe that the two above corollaries hold for any information system.
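Corollary 10 can be made executable under our own naming: entailment is membership in the polarity closure, with Con = P(U) left implicit. A sketch:

```python
# Sketch of Corollary 10: A ⊢_FCA a iff a ∈ |=_+ |=^+ (A). Condition 4 of
# Definition 15 holds since A ⊆ closure(A); condition 5 follows from the
# monotonicity and idempotence of the closure.

def fca_closure(A, R, U, M):
    B = {m for m in M if all((a, m) in R for a in A)}     # |=^+(A)
    return {u for u in U if all((u, m) in R for m in B)}  # |=_+(B) = A''

def entails_fca(A, a, R, U, M):
    return a in fca_closure(A, R, U, M)
```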
4 Summary
In the paper we have explored some new connections between ontological information systems equipped with approximation operators and Scott entailment relations. There exists a correspondence, preserving the Galois-based definitions of approximation operators, among Pawlak complete information systems, approximation spaces and approximation topological spaces. This correspondence can be generalised to Pawlak incomplete information systems, preordered sets and Alexandroff topologies. Therefore special emphasis has been put on topological property systems induced by Pawlak information systems. We have shown how Galois-based approximation operators are related to Scott systems, Tarski systems and standard Tarski systems: the more specific the Scott system, the stronger the relationship with approximation operators. At the end, following Scott's ideas, we have demonstrated how Galois-based approximation operators induced by topological property systems give rise to Scott information systems.
Acknowledgements. The research has been supported by grant N N516 368334 from the Ministry of Science and Higher Education of the Republic of Poland.
References

1. Cederquist, J., Coquand, T.: Entailment relations and distributive lattices. In: Buss, S., Hájek, P., Pudlák, P. (eds.) Logic Colloquium 1998. Lecture Notes in Logic, vol. 13, pp. 127–139. Association for Symbolic Logic (1999)
2. Dimov, G., Vakarelov, D.: On Scott consequence systems. Fundamenta Informaticae 33(1), 43–70 (1998)
3. Erné, M., Klossowski, E., Melton, A., Strecker, G.E.: A primer on Galois connections. In: Proceedings of the 1991 Summer Conference on General Topology and Applications in Honour of Mary Ellen Rudin and Her Work. Annals of the New York Academy of Sciences, vol. 704, pp. 103–125 (1993)
4. Pagliani, P., Chakraborty, M.: Information quanta and approximation spaces. I: Non-classical approximation operators. In: Hu, X., Liu, Q., Skowron, A., Lin, T.S., Yager, R.R., Zhang, E.B. (eds.) Proceedings of the IEEE International Conference on Granular Computing, vol. 2, pp. 605–610. IEEE, Los Alamitos (2005)
5. Pagliani, P., Chakraborty, M.: Information quanta and approximation spaces. II: Generalised approximation space. In: Hu, X., Liu, Q., Skowron, A., Lin, T.S., Yager, R.R., Zhang, E.B. (eds.) Proceedings of the IEEE International Conference on Granular Computing, vol. 2, pp. 611–616. IEEE, Los Alamitos (2005)
6. Pawlak, Z.: Rough sets. Int. J. Computer and Information Sci. 11, 341–356 (1982)
7. Pawlak, Z.: Rough Sets: Theoretical Aspects of Reasoning about Data. Kluwer Academic Publishers, Dordrecht (1991)
8. Rasiowa, H.: Algebraic Models of Logics. University of Warsaw (2001)
9. Scott, D.: Completeness and axiomatizability. In: Proceedings of the Tarski Symposium, pp. 411–435 (1974)
10. Scott, D.: Domains for denotational semantics. In: Nielsen, M., Schmidt, E.M. (eds.) ICALP 1982. LNCS, vol. 140, pp. 577–613. Springer, Heidelberg (1982)
11. Vakarelov, D.: Consequence relations and information systems. In: Słowiński, R. (ed.) Intelligent Decision Support. Handbook of Applications and Advances in Rough Sets Theory, pp. 391–400. Kluwer Academic Publishers, Dordrecht (1992)
12. Vakarelov, D.: A duality between Pawlak's knowledge representation systems and BI-consequence systems. Studia Logica 55(1), 205–228 (1995)
13. Vakarelov, D.: Information systems, similarity and modal logics. In: Orlowska, E. (ed.) Incomplete Information: Rough Set Analysis, pp. 492–550. Physica-Verlag, Heidelberg (1998)
14. Wille, R.: Restructuring lattice theory: an approach based on hierarchies of concepts. In: Rival, I. (ed.) Ordered Sets, pp. 445–470. Reidel, Dordrecht (1982)
15. Wille, R.: Concept lattices and conceptual knowledge systems. Computers & Mathematics with Applications 23, 493–515 (1992)
16. Wolski, M.: Information quanta and approximation operators: once more around the track. In: Peters, J.F., Skowron, A. (eds.) Transactions on Rough Sets VIII. LNCS, vol. 5084, pp. 237–250. Springer, Heidelberg (2008)
Information Entropy and Granulation Co-Entropy of Partitions and Coverings: A Summary

Daniela Bianucci and Gianpiero Cattaneo

Dipartimento di Informatica, Sistemistica e Comunicazione, Università di Milano–Bicocca, Viale Sarca 336/U14, I–20126 Milano, Italia
{bianucci,cattang}@disco.unimib.it
Abstract. Some approaches to covering information entropy and some definitions of orderings and quasi-orderings of coverings are described, generalizing the case of partition entropy and ordering. The aim is to extend to coverings the general result of anti-tonicity (strictly decreasing monotonicity) of the partition entropy. In particular, an entropy for incomplete information systems is discussed, with the expected anti-tonicity result, making use of a partial partition strategy in which the missing information is treated as a peculiar value of the system. On the other hand, an approach to generating a partition from a covering is illustrated. In particular, if a covering γ is coarser than another covering δ with respect to a certain quasi-order relation on coverings, the induced partition π(γ) turns out to be coarser than π(δ) with respect to the standard partial ordering on partitions. Thus, one can compare the two coverings through the entropies of the induced partitions.

Keywords: Measure distributions, probability distributions, partitions, partial partitions, coverings, partial ordering, quasi-ordering, entropy, co-entropy, isotonicity, anti-tonicity.
1 Introduction
Recently in the literature [20,21,15] there has been great interest in generalizing to the case of coverings the notion of entropy, as a measure of average information, extensively studied in the partition context of information theory by Shannon [26] (see the textbooks [16,1,25] as interesting introductions to this subject). The essential behavior we want to generalize is the isotonicity (i.e., strictly increasing monotonicity) of this information measure with respect to the natural partial order relation usually adopted on the family of all possible partitions of a given universe.

The aim is to provide a strictly isotonic evaluation of the approximation of a given set in the context of rough set theory. In [22,23] Pawlak introduced roughness as a measure which quantitatively evaluates the boundary region of a set relative to its upper approximation in a given partition context. But, as shown by the examples in Section 2.8, the boundary region may remain invariant even when the partition changes. In this way a strict isotonicity of this roughness measure of a set is not guaranteed. Since this evaluation is based on the partition of the universe (which is independent of the set) and on the boundary of the set under consideration, a solution for obtaining a strictly isotonic measure of the roughness of a set is to multiply the strictly isotonic measure of the partition granularity by the roughness measure of the set [9,5]. The strictly isotonic granularity measure considered in this work is the co-entropy of partitions.

One of the problems that arises in extending the partition approach to the covering context is that mutually equivalent formulations of the partial order relation on partitions yield different orderings and quasi-orderings on coverings. This leads to the fact that, if one wants to capture the fundamental property of isotonicity of an entropy in the covering context, the selection of the right (quasi) order relation becomes a crucial choice. In recent years we have explored different (quasi) partial orderings on coverings together with the most natural extensions of the partition entropy, often with negative results [5,2,4,6,12]. But as a final outcome of these negative attempts, we presented a new partial order relation on coverings which allows one to obtain the required isotonicity of the entropy, not directly but indirectly [3]; to be precise, on the partition properly induced from a covering by a well defined procedure. Since these results appeared only in papers published in various contexts, often with only brief descriptions for lack of space (especially of proofs), we think it is now necessary to provide a unified view of these investigations and of the obtained results (see Sections 3, 4.2 and 4.4).

* The author's work has been supported by the MIUR\PRIN project "Automata and Formal Languages: mathematical and application driven studies" and by "Funds of Sovvenzione Globale INGENIO allocated by Fondo Sociale Europeo, Ministero del Lavoro e della Previdenza Sociale, and Regione Lombardia".

1.1 Entropy of Abstract Discrete Probability Distributions
In this subsection we discuss the abstract approach to information theory, abstract in the sense that it does not refer to a concrete universe X of objects, with the associated power set P(X) as the collection of all its subsets, but only to suitable finite sequences of numbers from the real unit interval [0, 1], each of which can be interpreted as a probability of occurrence of something. The main reason for this introduction is that both the case of partitions and that of coverings can be discussed as particular applications of this unified abstract framework.

First of all, let us introduce as the information function (also called the Hartley measure of uncertainty, see [14]) the mapping I : (0, 1] → R assigning to any probability value p ∈ (0, 1] the real number

$$I(p) := -\log(p) \tag{1}$$
interpreted as the uncertainty associated with an event whose occurrence probability is p. This is the unique function, up to an arbitrary positive constant multiplier, satisfying the following conditions: (F-1) it is non-negative; (F-2) it satisfies the so-called Cauchy functional condition I(p1 · p2) = I(p1) + I(p2); (F-3) it is continuous; (F-4) it is non-trivial (∃p0 ∈ (0, 1] s.t. I(p0) ≠ 0). The information function is considered as a measure of the uncertainty due to the knowledge of a probability, since if the probability is 1, then there is no uncertainty and so its corresponding measure is 0. Moreover, any probability different from 1 (and 0) is linked to some uncertainty whose measure is greater than 0, in such a way that the lower the probability, the greater the corresponding uncertainty (strictly decreasing monotonicity of uncertainty information): 0 < p1 ≤ p2 implies 0 ≤ I(p2) ≤ I(p1).

Let us now introduce the two crucial notions of finite probability distribution and random variable. A length-N probability distribution is a vector p = (p1, p2, ..., pN) satisfying the following conditions: (pd-1) pi ≥ 0 for every i; (pd-2) $\sum_{i=1}^{N} p_i = 1$. Trivially, from (pd-1) and (pd-2) it immediately follows that 0 ≤ pi ≤ 1 for every i. In this abstract context, a length-N random variable is a vector a = (a1, a2, ..., aN) in which each component is a real number: ai ∈ R for any i. For a fixed length-N random variable a and a length-N probability distribution p, the numbers ai are interpreted as possible values of the random variable a and the quantities pi as the probabilities of occurrence of the events "a = ai" (thus, pi can be considered as a simplified notation for p(ai), itself a simplification of the standard notation p(a = ai)). The pair (p, a), consisting of a length-N probability distribution and a length-N random variable, constitutes a statistical scheme, which in our finite case can be represented by the associated statistical matrix:

$$(p, a) = \begin{pmatrix} p_1 & \dots & p_i & \dots & p_N \\ a_1 & \dots & a_i & \dots & a_N \end{pmatrix} \tag{2}$$

Hence, the average (or mean, or expectation) value of the random variable a with respect to the probability distribution p is given by the quantity

$$Av(a, p) = \sum_{i=1}^{N} a_i \cdot p_i$$
In particular, to any probability distribution p = (p1, p2, ..., pN) it is possible to associate the uncertainty (information) random variable I[p] = (I(p1), I(p2), ..., I(pN)), according to the statistical matrix

$$(p, I[p]) = \begin{pmatrix} p_1 & \dots & p_i & \dots & p_N \\ I(p_1) & \dots & I(p_i) & \dots & I(p_N) \end{pmatrix} \tag{3}$$
whose average with respect to the probability distribution p is $Av(p, I[p]) = \sum_{i=1}^{N} I(p_i) \cdot p_i$. This uncertainty average is called, according to Shannon [26], the information entropy of the probability distribution, and is simply denoted by H(p) = Av(p, I[p]). Thus, taking into account (1), the entropy of the probability distribution p is explicitly expressed by the formula (with the convention 0 log 0 = 0):

$$H(p) = -\sum_{i=1}^{N} p_i \log p_i \tag{4}$$
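Formula (4) is one line of code. A sketch (ours); the base-2 logarithm is an assumption, since the text leaves the base of log generic:

```python
# Sketch (ours) of formula (4), with the convention 0·log 0 = 0.
from math import log2

def entropy(p):
    assert all(x >= 0 for x in p) and abs(sum(p) - 1.0) < 1e-9
    return -sum(x * log2(x) for x in p if x > 0)

entropy([1, 0, 0, 0])  # -> 0.0, the certain case H(p_k)
entropy([0.25] * 4)    # -> 2.0 = log 4, the uniform case H(p_u)
```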
Since the information I(p) of a probability value p has been interpreted as a measure of the uncertainty due to the knowledge of this probability, the information entropy of a probability distribution p can be considered as a quantity which reasonably measures the average uncertainty associated with this distribution, expressed as the mean value of the corresponding information random variable I[p]. Indeed, given a probability distribution p = (p1, p2, ..., pN), its entropy H(p) = 0 iff one of the numbers p1, p2, ..., pN is one and all the others are zero, and this is just the case where the result of the experiment can be predicted beforehand with complete certainty, so that there is no uncertainty as to its outcome. These probability distributions will be denoted by the conventional symbol pk = (δki)_{i=1,2,...,N}, where δki is the Kronecker delta centered at k. On the other hand, given a probability distribution p = (p1, ..., pN), the entropy H(p) = log N iff pi = 1/N for all i = 1, ..., N, and this maximum of uncertainty corresponds to the uniform probability distribution pu = (1/N, 1/N, ..., 1/N). In all other cases the entropy is a (strongly) positive number upper bounded by log N. In conclusion, the following order chain holds for any probability distribution p:

0 = H(pk) ≤ H(p) ≤ H(pu) = log N

Measure Distributions and Probability Distributions. In investigating questions about information entropy, one more often has to do with so-called measure distributions, i.e., real vectors of the kind m = (m1, m2, ..., mN) under the conditions: (md-1) mi ≥ 0 for every i; (md-2) ∃j0 such that mj0 ≠ 0.
The total measure of a measure distribution m is the quantity $M(m) := \sum_{i=1}^{N} m_i$, with M(m) ≠ 0, which depends on the particular measure distribution m. For any measure distribution m it is possible to construct the corresponding probability distribution, which depends on m:

$$p(m) = \left( \frac{m_1}{M(m)}, \frac{m_2}{M(m)}, \dots, \frac{m_N}{M(m)} \right)$$

which turns out to be the normalization of the measure distribution m with respect to its total measure M(m), i.e., p(m) = (1/M(m)) m. The entropy of p(m),
denoted by H(m) instead of H(p(m)) in order to stress its dependence on the original measure distribution m, is the sum of two terms:

$$H(m) = \log M(m) - \frac{1}{M(m)} \sum_{i=1}^{N} m_i \log m_i \tag{5}$$
If one defines as co-entropy the quantity (also depending on the measure distribution m)

$$E(m) = \frac{1}{M(m)} \sum_{i=1}^{N} m_i \log m_i \tag{6}$$

we have the following identity, which holds for any arbitrary measure distribution:

$$H(m) + E(m) = \log M(m) \tag{7}$$
The name co-entropy assigned to the quantity E(m) arises from the fact that it "complements" the entropy H(m) with respect to the value log M(m), which depends on the distribution m. Of course, in the equivalence class of all measure distributions of identical total measure (m1 and m2 are equivalent iff M(m1) = M(m2)) this value is constant, whatever their length N.

From the above definition it is clear that the terms mi in (6) give a negative contribution if 0 < mi < 1, and so the co-entropy could be a negative quantity. There is nothing wrong with this result, but in some applications we shall interpret the co-entropy as a measure of average granulation, and in this case it is interesting to express this quantity as a non-negative number. Therefore, in order to avoid this drawback of a negative co-entropy, it is possible to consider the quantity q(m) = min{mi ≠ 0 : i = 1, 2, ..., N} > 0 and to construct the associated measure distribution obtained by a normalization of the original distribution m according to:

$$m_q := \left( \frac{m_1}{q(m)}, \frac{m_2}{q(m)}, \dots, \frac{m_N}{q(m)} \right)$$

with mi/q(m) equal to 0 if mi = 0 and greater than or equal to 1 if mi ≠ 0. This measure distribution has total measure $M(m_q) = \sum_{i=1}^{N} \frac{m_i}{q(m)} = \frac{M(m)}{q(m)}$, and the associated co-entropy has the form

$$E(m_q) = \frac{1}{M(m)} \sum_{i=1}^{N} m_i \log m_i - \log q(m)$$
obtaining as a final result the relationship E(mq) = E(m) − log q(m). In particular, taking into account that E(mq) ≥ 0, we have the inequality log q(m) ≤ E(m); that is, the original co-entropy E(m) may be a negative quantity, but it is lower bounded by log q(m).
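Formulas (5)–(7) and the normalised co-entropy can be sanity-checked numerically. A sketch (ours), again with base-2 logarithms by assumption:

```python
# Sketch (ours) of formulas (5)-(7) and of E(m_q) = E(m) - log q(m).
from math import log2

def H(m):
    M = sum(m)
    return log2(M) - sum(x * log2(x) for x in m if x > 0) / M

def E(m):
    return sum(x * log2(x) for x in m if x > 0) / sum(m)

def E_q(m):
    return E(m) - log2(min(x for x in m if x > 0))

m = [0.5, 2, 2.5]
assert abs(H(m) + E(m) - log2(sum(m))) < 1e-9  # identity (7)
assert E_q(m) >= 0 and log2(0.5) <= E(m)       # lower bound on E(m)
```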
From the point of view of the induced probability distributions, there is no change (they are invariant) with respect to the original distribution:

p(mq) = ( mi / (q(m) · M(mq)) )_{i=1,...,N} = ( mi / M(m) )_{i=1,...,N} = p(m)

and consequently also the entropies are invariant: H(mq) = H(m). The above relationship between entropy and co–entropy expressed by equation (7) now assumes the form:

H(m) + E(mq) = log ( M(m) / q(m) )
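As a quick illustration of these identities, the following self–contained Python sketch (ours, not part of the original treatment; base–2 logarithms are assumed throughout) computes H(m), E(m) and the normalized distribution mq for a small measure distribution:

```python
# Sketch: entropy H(m) and co-entropy E(m) of a measure distribution m,
# together with the normalization m_q that forces a non-negative co-entropy.
from math import log2

def total(m):
    return sum(m)

def entropy(m):                      # H(m) = log M(m) - (1/M) sum m_i log m_i
    M = total(m)
    return log2(M) - sum(x * log2(x) for x in m if x > 0) / M

def co_entropy(m):                   # E(m) = (1/M) sum m_i log m_i
    M = total(m)
    return sum(x * log2(x) for x in m if x > 0) / M

m = (0.5, 2.0, 3.5)                  # a component < 1 may push E(m) negative
q = min(x for x in m if x > 0)       # q(m): least non-zero component
m_q = tuple(x / q for x in m)        # normalized distribution, entries 0 or >= 1

print(entropy(m) + co_entropy(m), log2(total(m)))   # identity (7)
print(co_entropy(m_q), co_entropy(m) - log2(q))     # E(m_q) = E(m) - log q(m)
print(entropy(m_q), entropy(m))                     # entropies are invariant
```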
2 Partitions
We treat now the role of entropy and co–entropy, as measures of average uncertainty and of granulation respectively, in the concrete case of partitions of a fixed universe. First of all let us consider the case of partitions generated by information systems according to the Pawlak approach [22,24,17].

2.1 The Information System Approach to Rough Set Theory by Partitions
There is a natural way to induce partitions from (complete) information systems (IS), formalized by a triple IS := ⟨X, Att, F⟩ consisting of a nonempty finite set X of objects, the universe of discourse, a nonempty finite set Att of attributes about the objects of the universe, and a mapping F : X × Att → val which assigns to any object x ∈ X the value F(x, a) ∈ val assumed by the attribute a ∈ Att. Indeed, in this IS case the partition generated by a set of attributes A, denoted by π(A), consists of the equivalence classes of indistinguishable objects with respect to the equivalence relation RA involving pairs of objects x, y ∈ X:

(In)   (x, y) ∈ RA   iff   ∀ a ∈ A, F(x, a) = F(y, a).
The equivalence class generated by the object x ∈ X relatively to the set of attributes A is the granule of knowledge grA(x) := {y ∈ X : (x, y) ∈ RA}, characterized by an invariant set of values assumed by any object of the class. We will assume that an IS satisfies the following conditions, called in [10] coherence conditions: (co1) The mapping F must be surjective; this means that if there existed a value v ∈ val which is not the result of the application of the information map F to some pair (x, a) ∈ X × Att, then this value would have no interest with respect to the knowledge stored in the information system.
(co2) For any attribute a ∈ Att there exist at least two objects x1 and x2 such that F(x1, a) ≠ F(x2, a); otherwise this attribute does not supply any knowledge and can be suppressed.

Example 1. Let us imagine the following situation. Say that you are a physician and that you want to start collecting information about the health of some of your patients. The symptoms you are interested in are: the presence of fever, a sense of dizziness, blood pressure, headache and chest pain. You are not interested in, for example, allergies. So, when organizing the data in your possession, you will consider just the first five attributes and omit the allergy attribute. The result is a situation similar to the one presented in Table 1, where the set of objects is X = {p1, p2, p3, p4, p5, p6, p7, p8, p9, p10}, the family of attributes is Att = {Fever, Headache, Dizziness, Blood Pressure, Chest Pain} and the set of all possible values is val = {very high, high, low, normal, yes, no}.

Table 1. Medical complete information system

Patient  Fever      Headache  Dizziness  Blood Pressure  Chest Pain
p1       no         yes       yes        normal          yes
p2       high       no        yes        low             yes
p3       very high  no        no         low             no
p4       low        no        yes        low             yes
p5       low        yes       no         low             no
p6       high       no        yes        low             yes
p7       very high  no        yes        normal          no
p8       no         yes       yes        normal          yes
p9       no         yes       yes        low             yes
p10      no         yes       no         high            yes
If one considers the collection Att of all attributes, the universe turns out to be partitioned into the following equivalence classes:

π(Att) = { {p1, p8}, {p2, p6}, {p3}, {p4}, {p5}, {p7}, {p9}, {p10} }

The granule {p2, p6} can be considered as the support of the invariant knowledge: “The patient presents high fever and low blood pressure, but he/she has no headache; he/she says to feel dizzy and to have chest pain.” Similarly, if one considers the subfamily of attributes A = {Fever, Headache, Chest Pain}, the resulting partition of the universe under examination consists of the equivalence classes:

π(A) = { {p1, p8, p9, p10}, {p2, p6}, {p3, p7}, {p4}, {p5} }

where for instance the granule {p3, p7} is the support of the knowledge “The patient has very high fever, but he/she does not have headache, nor chest pain.”, invariant for any object of this equivalence class.
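The induction of π(A) from the table can be made concrete with a short Python sketch (ours, not from the paper): objects fall into the same class iff they agree on every attribute of A. The data below transcribe Table 1 restricted to A = {Fever, Headache, Chest Pain}.

```python
# Sketch: the partition pi(A) induced by a family A of attributes; two objects
# are equivalent iff they take the same values on all attributes of A.
from collections import defaultdict

F = {  # F(x, a) for a in A = (Fever, Headache, Chest Pain), from Table 1
    "p1": ("no", "yes", "yes"),        "p2": ("high", "no", "yes"),
    "p3": ("very high", "no", "no"),   "p4": ("low", "no", "yes"),
    "p5": ("low", "yes", "no"),        "p6": ("high", "no", "yes"),
    "p7": ("very high", "no", "no"),   "p8": ("no", "yes", "yes"),
    "p9": ("no", "yes", "yes"),        "p10": ("no", "yes", "yes"),
}

def partition(values):
    classes = defaultdict(set)
    for x, v in values.items():
        classes[v].add(x)        # a granule collects objects with equal values
    return sorted(classes.values(), key=min)

print(partition(F))
# [{p1,p10,p8,p9}, {p2,p6}, {p3,p7}, {p4}, {p5}]  --  the partition pi(A)
```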
In any IS, given an attribute a ∈ Att, one can define the set val(a) := {α : ∃ x ∈ X s.t. F(x, a) = α} containing all the possible values of a, so that the observation of this attribute a on an object x ∈ X yields the value F(x, a) ∈ val(a). To this fixed attribute a it is possible to assign a (surjective) mapping fa : X → val(a) defined by the correspondence x → fa(x) := F(x, a). Noting that the global set of possible values of the information system is related to the “single” val(a) by the relation val = ∪_{a∈Att} val(a), each attribute a can be identified with the mapping fa ∈ val^X; introducing the collection of all such mappings Att(X) := {fa ∈ val^X : a ∈ Att}, in which Att plays the role of index set, an information system can also be formalized as a structure ⟨X, Att(X)⟩. Thus, any fixed attribute a ∈ Att, with set of values val(a) = {α1, α2, ..., αN}, generates a partition of the universe of objects

π(a) = { fa^{-1}(α1), fa^{-1}(α2), ..., fa^{-1}(αN) }

where the generic elementary set of π(a) is fa^{-1}(αi) := {x ∈ X : fa(x) = αi}, i.e., the collection of all objects on which the attribute a assumes the fixed value αi. The pair (a, αi) is interpreted as the elementary proposition “the attribute a has the value αi” and Ai := fa^{-1}(αi) as the elementary event which tests the proposition (a, αi), in the sense that it is constituted by all objects on which the proposition (a, αi) is “true” (x ∈ Ai iff fa(x) = αi). The event Ai is then the equivalence class (also denoted by [a, αi]) of all objects on which the attribute a assumes the value αi. If we consider a set A consisting of two attributes a ∈ Att and b ∈ Att, with corresponding sets of values val(a) = {α1, α2, ..., αN} and val(b) = {β1, β2, ..., βM}, then it is possible to define the mapping fa,b : X → val(a, b), with val(a, b) := {(α, β) ∈ val(a) × val(b) : ∃ x ∈ X s.t. fa,b(x) = (α, β)} ⊆ val(a) × val(b), which assigns to any object x the “value” fa,b(x) := (fa(x), fb(x)). In this case we can consider the pair (a, b) ∈ Att² as a single attribute of the new information system ⟨X, Att², {fa,b : (a, b) ∈ Att²}⟩, always based on the original universe X. The partition generated by the attribute (a, b) is then the collection of all nonempty subsets of X of the form π(a, b) = {fa,b^{-1}(αi, βj) ≠ ∅ : αi ∈ val(a) and βj ∈ val(b)}. The elementary events of the partition π(a, b) are thus the subsets of the universe of the following form, under the condition of being nonempty:

fa,b^{-1}(αi, βj) := {x ∈ X : fa,b(x) = (αi, βj)} = fa^{-1}(αi) ∩ fb^{-1}(βj)   (8)
Example 2. Making reference to Example 1 above, let us consider the two attributes F = Fever, with set of values val(F) = {very high, high, low, no}, and BP = Blood Pressure, with set of values val(BP) = {high, normal, low}. The corresponding two partitions are

π(F) = { [F = very high] = {p3, p7}, [F = high] = {p2, p6}, [F = low] = {p4, p5}, [F = no] = {p1, p8, p9, p10} }

π(BP) = { [BP = high] = {p10}, [BP = normal] = {p1, p7, p8}, [BP = low] = {p2, p3, p4, p5, p6, p9} }

The set of potential values of the pair of attributes A = {F, BP} is val(F) × val(BP) = {(very high, high), (very high, normal), (very high, low), (high, high), (high, normal), (high, low), (low, high), (low, normal), (low, low), (no, high), (no, normal), (no, low)}, with corresponding classes (v h = very high):

fF,BP^{-1}(v h, high) = ∅       fF,BP^{-1}(v h, normal) = {p7}     fF,BP^{-1}(v h, low) = {p3}
fF,BP^{-1}(high, high) = ∅      fF,BP^{-1}(high, normal) = ∅       fF,BP^{-1}(high, low) = {p2, p6}
fF,BP^{-1}(low, high) = ∅       fF,BP^{-1}(low, normal) = ∅        fF,BP^{-1}(low, low) = {p4, p5}
fF,BP^{-1}(no, high) = {p10}    fF,BP^{-1}(no, normal) = {p1, p8}  fF,BP^{-1}(no, low) = {p9}

where in particular fF,BP^{-1}(very high, high), fF,BP^{-1}(high, high), fF,BP^{-1}(high, normal), fF,BP^{-1}(low, high) and fF,BP^{-1}(low, normal) are empty, so they are not possible equivalence classes of a partition. Thus, we can erase the pairs (very high, high), (high, high), (high, normal), (low, high) and (low, normal) as possible values, obtaining val(F, BP) = {(very high, normal), (very high, low), (high, low), (low, low), (no, high), (no, normal), (no, low)} ⊂ val(F) × val(BP), in order to regain the coherence condition of surjectivity (co1).
Hence, adopting the notation [(a, αi) & (b, βj)] to denote the elementary event fa,b^{-1}(αi, βj) ≠ ∅, we have that [(a, αi) & (b, βj)] = [a, αi] ∩ [b, βj]. If (a, αi) & (b, βj) is interpreted as the conjunction “the attribute a has the value αi and the attribute b has the value βj” (i.e., & represents the logical connective “and” between propositions), then this result says that the set of objects on which this proposition is verified is just the set of objects on which simultaneously “a has the value αi” and “b has the value βj”. On the other hand, making use of the notations Ci,j := fa,b^{-1}(αi, βj), Ai = fa^{-1}(αi) and Bj = fb^{-1}(βj), we can reformulate the previous result as Ci,j = Ai ∩ Bj. In other words, elementary events from the partition π(a, b) are obtained as nonempty set theoretic intersections of elementary events from the partitions π(a) and π(b). This fact is denoted by π(a, b) = π(a) · π(b). The generalization of this procedure to any family of attributes is straightforward. Indeed, let A = {a1, a2, ..., ak} be such a family of attributes from an information system. Then it is possible to define the partition π(A) = π(a1, a2, ..., ak) = π(a1) · π(a2) · ... · π(ak) := {Ai ∩ Bj ∩ ... ∩ Kp ≠ ∅ : Ai ∈ π(a1), Bj ∈ π(a2), ..., Kp ∈ π(ak)}. If now one considers another family of attributes B = {b1, b2, ..., bh}, then

π(A ∪ B) := π(a1, ..., ak, b1, ..., bh) = π(A) · π(B)   (9)

2.2 The Partition Approach to Rough Set Theory
So the usual approach to rough set theory as introduced by Pawlak turns out to be a particular case of a more general approach formally (and essentially) based on a concrete partition space, that is, a pair (X, π) consisting of a nonempty (not necessarily finite) set X, the universe, with corresponding power set P(X) forming the collection of sets which can be approximated, and a partition π :=
{Ai ∈ P(X) : i ∈ I} of X (indexed by the index set I), whose elements are the elementary sets. The partition π can be characterized by the induced equivalence relation Rπ ⊆ X × X, defined as

(x, y) ∈ Rπ   iff   ∃ Aj ∈ π s.t. x, y ∈ Aj   (10)
In this case x, y are said to be indistinguishable with respect to Rπ, and the equivalence relation Rπ is called the indistinguishability relation induced by the partition π. In this indistinguishability context the partition π is considered as the support of some knowledge available on the objects of the universe, and so any equivalence class (i.e., elementary set) is interpreted as a granule (or atom) of knowledge contained in (or supported by) π. For any object x ∈ X we shall denote by grπ(x), called the granule generated by x relatively to π, the (unique) equivalence class from π which contains x (if x ∈ Ai, then grπ(x) = Ai). A crisp set (we prefer, also for the forthcoming probability considerations, event) is any subset of X obtained as the set theoretic union of elementary subsets: EJ = ∪{Aj ∈ π : j ∈ J ⊆ I}. The collection of all such crisp sets, plus the empty set ∅, will be denoted by Eπ(X), and it turns out to be a Boolean algebra ⟨Eπ(X), ∩, ∪, ^c, ∅, X⟩ with respect to set theoretic intersection, union, and complementation. This Boolean algebra is atomic, and its atoms are just the elementary sets from the partition π. From the topological point of view, Eπ(X) contains both the empty set and the whole space; moreover it is closed with respect to arbitrary set theoretic unions and intersections, i.e., it is a family of clopen subsets for a topology on X. In this way we can construct the concrete approximation space Rπ := ⟨P(X), Eπ(X), lπ, uπ⟩, consisting of: (1) the Boolean (complete) atomic lattice P(X) of all approximable subsets of the universe X, whose atoms are the singletons; (2) the Boolean (complete) atomic lattice Eπ(X) of all definable subsets of X, whose atoms are the equivalence classes of the partition π; (3) the lower approximation map lπ : P(X) → Eπ(X) associating with any subset Y of X its lower approximation defined by the (clopen) crisp set

lπ(Y) := ∪{E ∈ Eπ(X) : E ⊆ Y} = ∪{A ∈ π : A ⊆ Y}

(4) the upper approximation map uπ : P(X) → Eπ(X) associating with any subset Y of X its upper approximation defined by the (clopen) crisp set

uπ(Y) := ∩{F ∈ Eπ(X) : Y ⊆ F} = ∪{B ∈ π : Y ∩ B ≠ ∅}

The rough approximation of a subset Y of X is then the clopen pair rπ(Y) := ⟨lπ(Y), uπ(Y)⟩, with lπ(Y) ⊆ Y ⊆ uπ(Y), which is the image of the subset Y under the rough approximation mapping rπ : P(X) → Eπ(X) × Eπ(X) described by the following diagram:
(Diagram: Y ∈ P(X) is sent by lπ to lπ(Y) ∈ Eπ(X) and by uπ to uπ(Y) ∈ Eπ(X); both compose into rπ(Y) = ⟨lπ(Y), uπ(Y)⟩.)
The boundary of Y is defined as the set bπ(Y) := uπ(Y) \ lπ(Y), whereas its exterior is eπ(Y) = uπ(Y)^c. Trivially, for any Y the collection π(Y) := {lπ(Y), bπ(Y), eπ(Y)} is a new partition of X generated by Y inside the original partition π. The elements E from Eπ(X) have been called crisp since their rough representation is of the form rπ(E) = (E, E), with empty boundary.
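The lower and upper approximation maps admit a direct implementation; the following sketch (ours, for a finite universe with granules represented as Python sets) computes lπ(Y), uπ(Y) and the boundary bπ(Y):

```python
# Sketch: rough approximation of Y determined by a partition pi of a finite X.
def lower(pi, Y):
    # union of the granules entirely contained in Y
    return {x for A in pi if A <= Y for x in A}

def upper(pi, Y):
    # union of the granules having nonempty intersection with Y
    return {x for A in pi if A & Y for x in A}

pi = [{1, 2}, {3}, {4, 5, 6}]
Y = {1, 2, 4, 6}
l, u = lower(pi, Y), upper(pi, Y)
print(l, u, u - l)     # lower and upper approximations, boundary b_pi(Y)
```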
2.3 Measures and Partitions
Let us recall that, from the general theory of measure and integration on a (not necessarily finite) universe X (see [27]), a measurable space is a pair ⟨X, E(X)⟩, where E(X) is a σ–algebra of measurable sets, i.e., a collection of subsets of X satisfying the conditions: (ms1) ∅ ∈ E(X); (ms2) E ∈ E(X) implies E^c = X \ E ∈ E(X); (ms3) ∪_{n∈N} En ∈ E(X) for any countable family of measurable subsets En ∈ E(X). A measure on a measurable space is a mapping m : E(X) → R⁺ satisfying the conditions: (m1) m(∅) = 0; and the additivity condition (m2): for every countable family {An ∈ E(X) : n ∈ N} of pairwise disjoint subsets of X (Ai ∩ Aj = ∅ for i ≠ j) it is m(∪_n An) = Σ_n m(An). We are particularly interested in nontrivial and finite measures, i.e., those measures satisfying the finiteness condition (m3) m(X) < ∞, in which case the structure ⟨X, E(X), m⟩ is a finite measure space, and the non–triviality condition (m4) m(X) ≠ 0. Any nontrivial finite measure induces a probability measure p : E(X) → [0, 1] defined for any measurable set E ∈ E(X) as p(E) = m(E)/m(X), obtaining in this way the probability space ⟨X, E(X), p⟩. In this probability context the measurable sets from E(X) are also called events.

The Finite Universe Case. Let us assume that the universe of discourse is finite (|X| < ∞). The set Eπ(X) of all crisp elements induced by a (necessarily finite) partition π = {A1, A2, ..., AN} of the universe X has the structure of an algebra of sets (the closure condition (ms3) is necessarily applied to finite families only) for a measurable space ⟨X, Eπ(X)⟩. In this finite universe context, and looking at the possible probabilistic applications, the elements from Eπ(X) are the events generated by the partition π, and the ones from the original partition π are also said to be the elementary events. Since we are particularly interested in the
class Π(X) of all possible partitions of a finite universe X, with associated measure and probability considerations, we must take into account the two peculiar partitions which can be introduced whatever the universe X: the trivial partition πg = {X} and the discrete partition πd consisting of all singletons {x}, for x ranging in X. Of course, if on the same universe one considers two different partitions, then the corresponding families of events differ.

Example 3. Let us consider the universe X = {1, 2, 3, 4} and the two partitions π1 = {{1, 2, 3}, {4}} and π2 = {{1, 3}, {2, 4}}. The corresponding algebras of events are Eπ1(X) = {∅, {1, 2, 3}, {4}, X} and Eπ2(X) = {∅, {1, 3}, {2, 4}, X}, trivially different from each other.

In order to have a unique algebra of sets containing the algebras of sets induced by all possible partitions, we assume the power set P(X) of X as the common algebra of sets, obtaining the measurable space (X, P(X)). If we consider a nontrivial finite measure m : P(X) → R⁺ on this measurable space, then for any measurable subset (event) A = ∪_{a∈A} {a} of X (with the involved singletons {a} pairwise disjoint and finite in number) it is possible to apply the additivity condition (m2), obtaining

m(A) = m(∪_{a∈A} {a}) = Σ_{a∈A} m(a)   (11)

where for simplicity we have set m(a) instead of m({a}). This means that, if X = {x1, x2, ..., xN}, in order to construct the measure space under study it is sufficient to know the vector

m(πd) = (m(x1), m(x2), ..., m(xN))

characterized by the two conditions of being a measure distribution according to the notion introduced in subsection 1.1. An interesting example of this kind of measure is the so–called counting measure, assigning to any event E ∈ P(X) the measure mc(E) = |E|, i.e., the cardinality of the measurable set (event) under examination, which is obtained by the uniform measure distribution ∀ x ∈ X, mc(x) = 1.

2.4 Entropy (As Measure of Average Uncertainty) and Co–Entropy (As Measure of Average Granularity) of Partitions
From now on, even if not explicitly stated, the measure space ⟨X, P(X), m⟩ we take into account is finite (|X| < ∞), and we will also assume that the measure m is not degenerate, in the sense that the following condition of strict positivity is satisfied:

(m4a)   ∀ x ∈ X, m(x) > 0.
Thus, from any partition π = {A1 , A2 , . . . , AN }, where each event Ai represents a granule of knowledge supported by π, the following two N –component vectors can be constructed:
(md) the measure distribution m(π) = (m(A1), m(A2), ..., m(AN)). The quantity m(Ai) > 0 expresses the measure of the granule Ai, and the total sum Σ_{i=1}^N m(Ai) = m(X) is constant with respect to the variation of the partition π in Π(X);

(pd) the probability distribution p(π) = (p(A1), p(A2), ..., p(AN)), with p(Ai) = m(Ai)/m(X). The quantity p(Ai) ∈ (0, 1] describes the probability of the event Ai, and p(π) is a finite collection of non–negative real numbers (∀ i, p(Ai) ≥ 0) whose sum is one (Σ_{i=1}^N p(Ai) = 1).

One must not confuse the measure m(Ai) of the “granule” Ai with the occurrence probability p(Ai) of the “event” Ai: they are two very different semantical concepts. Of course, both these distributions depend on the choice of the partition π, and if one changes the partition π inside the collection Π(X) of all possible partitions of X, then different distributions m(π) and p(π) are obtained. Once the partition π is fixed, on the basis of these two distributions it is possible to introduce two really different discrete random variables:

(RV-G) The granularity random variable

G(π) := (log m(A1), log m(A2), ..., log m(AN))

where the real quantity G(Ai) := log m(Ai) represents the measure of the granularity associated with the knowledge supported by the “granule” Ai of the partition π. Some of these measures could be negative but, as stressed at the end of subsection 1.1, it is possible to “normalize” this measure distribution, without affecting the entropy, in such a way that every granule turns out to have a non–negative granularity measure. From now on, even if not explicitly stated, we shall assume that all the involved measure distributions satisfy this condition.

(RV-U) The uncertainty random variable

I(π) := (− log p(A1), − log p(A2), ..., − log p(AN))

where the non–negative real quantity I(Ai) := − log p(Ai) is interpreted (see [14], and also [1,25]) as a measure of the uncertainty related to the probability of occurrence of the “event” Ai of the partition π.

Also in the case of these two discrete random variables, their semantical/terminological confusion should be avoided.
Fig. 1. Graphs of the granularity G(m) and the uncertainty I(p) measures in the “positivity” domains m ∈ [1, M] and p = m/M ∈ [1/M, 1], with M = m(X)
Indeed, G(Ai) involves the measure m(Ai) (the granularity measure of the “granule” Ai), contrary to I(Ai), which involves the probability p(Ai) of occurrence of Ai (the uncertainty measure of the “event” Ai). Note that, under the assumption that the measure of the event Ai ∈ π satisfies m(Ai) ≥ 1, the corresponding granularity and uncertainty measures are both non–negative (see Fig. 1). Moreover, they are mutually “complementary” with respect to the fixed quantity log m(X), invariant with respect to the choice of the partition π:

G(Ai) + I(Ai) = log m(X)   (12)
The granularity measure G is strictly isotonic (monotonically increasing) with respect to set theoretic inclusion: A ⊂ B implies G(A) < G(B). On the contrary, the uncertainty measure I is strictly anti–tonic (monotonically decreasing): A ⊂ B implies I(B) < I(A). As happens for any discrete random variable, it is possible to calculate its average with respect to the fixed probability distribution p(π), obtaining the two results:

(GA) the granularity average with respect to p(π), expressed by the quantity

Av(G(π), p(π)) := Σ_{i=1}^N G(Ai) · p(Ai) = (1/m(X)) Σ_{i=1}^N m(Ai) · log m(Ai)   (13)

which in the sequel will simply be denoted by E(π);

(UA) the uncertainty average with respect to p(π), expressed by the quantity

Av(I(π), p(π)) := Σ_{i=1}^N I(Ai) · p(Ai) = − (1/m(X)) Σ_{i=1}^N m(Ai) · log ( m(Ai)/m(X) )   (14)

which is the information entropy H(π) of the partition π according to the Shannon approach to information theory [26] (see also [16,1] for introductory treatments).

Thus, the quantity E(π) furnishes a measure of the average granularity carried by the partition π as a whole, whereas the entropy H(π) furnishes the measure of the
average uncertainty associated with the same partition. In conclusion, also in this case the average granularity must not be confused with the average uncertainty supported by π. Analogously to (12), related to a single event Ai, these averages satisfy the following identity, which holds for any arbitrary partition π of the universe X:

H(π) + E(π) = log m(X)   (15)

Also in this case the two measures complement each other with respect to the constant quantity log m(X), which is invariant with respect to the choice of the partition π of X.

Remark 1. Let us recall that in [28] Wierman has interpreted the entropy H(π) of the partition π as a granularity measure, defined as the quantity which “measures the uncertainty (in bits) associated with the prediction of outcomes where elements of each partition sets Ai are indistinguishable.” On the contrary, we prefer to distinguish the uncertainty measure of the partition π, given by H(π), from the granularity measure of the same partition, described by E(π). Note that in [19] it is remarked that the Wierman “granularity measure” coincides with the Shannon entropy H(π), more correctly interpreted as the “information measure of knowledge” furnished by the partition π.

The co–entropy (average granularity) E(π) ranges in the real closed interval [0, log m(X)], with the minimum obtained by the discrete partition πd = {{x1}, {x2}, ..., {x|X|}}, the collection of all singletons from X, and the maximum obtained by the trivial partition πg = {X}, consisting of the unique element X: that is,

∀ π ∈ Π(X), 0 = E(πd) ≤ E(π) ≤ E(πg) = log m(X).

From the point of view of rough set theory, the discrete partition is the one which generates the “best” sharpness of any subset Y of the universe X (∀ Y ∈ P(X), rπd(Y) = ⟨Y, Y⟩), formalized by the fact that the boundary of any Y is bπd(Y) = uπd(Y) \ lπd(Y) = ∅ (i.e., any subset is sharp). On the contrary, the trivial partition is the one which generates the “worst” sharpness of any subset Y of X (∀ Y ∈ P(X) \ {∅, X}, rπg(Y) = ⟨∅, X⟩, with ∅ and X the unique crisp sets since rπg(∅) = ⟨∅, ∅⟩ and rπg(X) = ⟨X, X⟩), formalized by the fact that the boundary of any nontrivial subset Y (≠ ∅, X) is the whole universe: bπg(Y) = X. For these reasons, the interval [0, log m(X)] is assumed as the reference scale for measuring roughness (or sharpness): the smaller the value, the lower the roughness (and the better the sharpness).

0 (maximum sharpness, minimum roughness) ———— log m(X) (minimum sharpness, maximum roughness)
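A small sketch (ours; counting measure m(A) = |A| and base–2 logarithms) makes the identity (15) concrete:

```python
# Sketch: average uncertainty H(pi) and average granularity E(pi) of a
# partition under the counting measure, checking H(pi) + E(pi) = log m(X).
from math import log2

def co_entropy(pi):                       # E(pi), equation (13)
    M = sum(len(A) for A in pi)
    return sum(len(A) * log2(len(A)) for A in pi) / M

def entropy(pi):                          # H(pi), equation (14)
    M = sum(len(A) for A in pi)
    return -sum(len(A) / M * log2(len(A) / M) for A in pi)

pi = [{1, 2}, {3, 4, 5, 6}, {7, 8, 9, 10}]
print(entropy(pi), co_entropy(pi))              # 1.52193..., 1.8
print(entropy(pi) + co_entropy(pi), log2(10))   # both 3.32193...: identity (15)
```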
2.5 Ordering on Partitions
On the family Π(X) of all partitions of the finite universe X, equipped with a nondegenerate measure m (∀ x ∈ X, m(x) > 0) on the set of events P(X), one can introduce some binary relations according to the following definition.
Definition 1. Let us consider a universe X and two partitions π1, π2 ∈ Π(X) of X. We introduce the following four binary relations on Π(X):

(por1) π1 ⪯ π2 iff ∀ A ∈ π1, ∃ B ∈ π2 s.t. A ⊆ B;
(por2) π1 ⊑ π2 iff ∀ B ∈ π2, ∃ {Ai1, Ai2, ..., Aih} ⊆ π1 s.t. B = Ai1 ∪ Ai2 ∪ ... ∪ Aih;
(por3) π1 ≼ π2 iff ∀ x ∈ X, grπ1(x) ⊆ grπ2(x);
(por4) π1 ≤W π2 iff ∀ Ai ∈ π1, ∀ Bj ∈ π2, Ai ∩ Bj ≠ ∅ implies Ai ⊆ Bj.

As a first result we have that the binary relations on Π(X) just introduced are mutually equivalent, and so they define the same binary relation, which turns out to be a partial order relation:

π1 ⪯ π2   iff   π1 ⊑ π2   iff   π1 ≼ π2   iff   π1 ≤W π2   (16)
Remark 2. Let us stress that the introduction on Π(X) of these partial order binary relations ⪯, ⊑, ≼ and ≤W might seem a little redundant, but the reason for listing them in this partition context is essentially that, in the case of coverings of X, they give rise to different relations, as we will see in the covering section.

Partition Lattice. Given a finite universe X, we want to investigate the structure of all its partitions Π(X) from the point of view of the ordering ⪯, with particular regard to the possible existence of some lattice structure. In words, the meet of two partitions π1 = {A1, A2, ..., AM} and π2 = {B1, B2, ..., BN} in Π(X) with respect to ⪯ is a partition such that each of its elementary sets (or blocks, for simplicity) is contained both in some block of π1 and in some block of π2, and such that there is no larger block sharing the same property. In particular, let us observe that the partition π1 ∧ π2 can be realized by taking into account all the possible intersections of a granule from π1 and a granule from π2, where some of the possible intersections Ai ∩ Bj, for Ai ∈ π1 and Bj ∈ π2, could be empty, against the standard requirement of a partition; in this case the intersection is trivially not considered. Hence, we can state the following result with respect to the meet of two partitions.

Proposition 1 (Meet). Given two partitions π1 = {A1, A2, ..., AM} and π2 = {B1, B2, ..., BN} in Π(X), the lattice meet of π1 and π2 with respect to the partial ordering ⪯ is given by

π1 ∧ π2 := {Ai ∩ Bj ≠ ∅ : Ai ∈ π1 and Bj ∈ π2}.

This result can be extended to an arbitrary (necessarily finite) number of partitions {πj : j ∈ J}, in order to have that ∧_{j∈J} πj is the corresponding meet.
A join of two partitions π1 and π2 in Π(X) is a partition π such that each block of π1 and of π2 is contained in some block of π, and such that no other partition with smaller blocks shares the same property. The formal result is the following.

Proposition 2 (Join). Given two partitions π1 = {A1, A2, ..., AM} and π2 = {B1, B2, ..., BN} in Π(X), the join of π1 and π2 is given by

π1 ∨ π2 = ∧{π ∈ Π(X) : π1 ⪯ π and π2 ⪯ π}.

Example 4. In the universe X = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, let us consider the two partitions π1 = {{1, 2, 3}, {4, 5, 6}, {7}, {8}, {9, 10}} and π2 = {{1, 2}, {3, 4}, {5, 6, 7}, {8, 9, 10}}. Then their lattice meet is the new partition π1 ∧ π2 = {{1, 2}, {3}, {4}, {5, 6}, {7}, {8}, {9, 10}} and their lattice join is the partition π1 ∨ π2 = {{1, 2, 3, 4, 5, 6, 7}, {8, 9, 10}}.

The extension of Proposition 2 to any family of partitions is now straightforward. As an important result we have the following.

Theorem 1 (Partition lattice). The structure ⟨Π(X), ⪯⟩ is a poset with respect to the partial ordering ⪯, bounded by the least partition πd, the discrete one, and the greatest partition πg, the trivial one. Formally, ∀ π ∈ Π(X), πd ⪯ π ⪯ πg. This poset is a (complete) lattice with respect to the lattice meet π1 ∧ π2 and the lattice join π1 ∨ π2 introduced above.

As usual in a poset, the induced strict ordering on partitions, denoted by π1 ≺ π2, is defined as π1 ⪯ π2 and π1 ≠ π2. This means that there must exist at least one equivalence class Bi ∈ π2 whose partition with respect to π1 consists of at least two subsets, i.e., ∃ {Ai1, Ai2, ..., Aip} ⊆ π1, with p ≥ 2, s.t. Bi = Ai1 ∪ Ai2 ∪ ... ∪ Aip.
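Both lattice operations are easy to compute on a finite universe; the following sketch (ours) realizes the meet as in Proposition 1 and the join by chaining overlapping blocks into connected components, a standard way to obtain the coarsest partition characterized in Proposition 2 without scanning the whole of Π(X). It reproduces Example 4.

```python
# Sketch: lattice meet and join of two partitions of a finite universe.
def meet(pi1, pi2):
    # Proposition 1: non-empty pairwise intersections of blocks
    return [A & B for A in pi1 for B in pi2 if A & B]

def join(pi1, pi2):
    # merge blocks of pi1 and pi2 that overlap, until components stabilize
    blocks = [set(B) for B in list(pi1) + list(pi2)]
    out = []
    while blocks:
        comp, changed = blocks.pop(), True
        while changed:
            changed = False
            for B in blocks[:]:
                if comp & B:
                    comp |= B
                    blocks.remove(B)
                    changed = True
        out.append(comp)
    return out

pi1 = [{1, 2, 3}, {4, 5, 6}, {7}, {8}, {9, 10}]
pi2 = [{1, 2}, {3, 4}, {5, 6, 7}, {8, 9, 10}]
print(meet(pi1, pi2))   # {1,2},{3},{4},{5,6},{7},{8},{9,10}, as in Example 4
print(join(pi1, pi2))   # {1,2,3,4,5,6,7},{8,9,10}
```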
2.6 Isotonic Behavior of Entropies and Co–Entropies of Partitions
Let us now consider two partitions π1 = {A1, A2, ..., AM} and π2 = {B1, B2, ..., BN}, with corresponding probability distributions giving the two finite probability schemes

π1 = ( A1, A2, ..., AM ; p(A1), p(A2), ..., p(AM) )    π2 = ( B1, B2, ..., BN ; p(B1), p(B2), ..., p(BN) )

According to (4), the entropies of π1 and π2 are, respectively,

H(π1) = − Σ_{l=1}^M p(Al) log p(Al)    H(π2) = − Σ_{k=1}^N p(Bk) log p(Bk).
If we consider the probability of an event Bk of π2, we have the following conditional probabilities for the events Al of π1:

p(Al | Bk) = p(Al ∩ Bk) / p(Bk) = m(Al ∩ Bk) / m(Bk)
Let us recall that these quantities represent “the probability that the event Al of the scheme π1 occurs, given that the event Bk of the scheme π2 occurred” [16]. From the fact that π2 is a partition we have that Al = Al ∩ (∪k Bk) = ∪k (Al ∩ Bk), with these latter pairwise disjoint, and so we get that p(Al) = Σ_k p(Al ∩ Bk), leading to the result

p(Al) = Σ_k p(Al | Bk) p(Bk)   (17)
which can be expressed in the matrix form:

( p(A1), ..., p(Al), ..., p(AM) )^T = ( p(Al | Bk) )_{l=1,...,M; k=1,...,N} · ( p(B1), ..., p(Bk), ..., p(BN) )^T
Taking inspiration from [13], we can interpret this result as describing a channel of a system which takes in input the events Bk with a given probability and produces as output an event Al characterized by some probability. Indeed, “a channel is described by a set of conditional probability p(Al | Bk), which are the probability that an input Bk [...] will appear as some Al [...]. In this model a channel is completely described by the matrix of conditional probabilities [...]. A row contains all the probabilities that a particular input Bk becomes the output Al.” Trivially, the following conditions are also satisfied:

(1) ∀ k, l, p(Al | Bk) ≥ 0;
(2) the sum of the elements of a column is always 1:

∀ k, Σ_{l=1}^M p(Al | Bk) = 1

“this merely means that for each input Bk we are certain that something will come out, and the P(Al | Bk) [for k fixed and l varying] give the distribution of these probabilities.” [13].
(3) if p(Al) is the probability of the output Al occurring, then

Σ_{k=1}^N Σ_{l=1}^M p(Al | Bk) p(Bk) = 1

“This means that when something is put into the system, then certainly something comes out.” [13].
From condition (2) we also have that, for any fixed Bk ∈ π2, the vector of conditional probabilities generated by the probability distribution π1 given the occurrence of the event Bk,

p(π1 | Bk) := ( p(A1 | Bk), ..., p(Al | Bk), ..., p(AM | Bk) )   (18)

is a probability distribution. Hence, we have the following entropy of π1 conditioned by Bk (or k–conditional entropy of π1):

H(π1 | Bk) = − Σ_{l=1}^M p(Al | Bk) log p(Al | Bk).   (19)

This is a particular case of a more general result. Indeed, let π = {A1, A2, ..., AN} be a partition of a universe X, and let C be a nonempty subset of X. Let p(π | C) = (p(A1 | C), ..., p(AN | C)) be the vector whose elements are (∀ i = 1, ..., N):

p(Ai | C) = p(Ai ∩ C) / p(C) = m(Ai ∩ C) / m(C)

(i.e., each p(Ai | C) represents the probability of Ai conditioned by C). Then p(π | C) is a probability distribution whose entropy, called the entropy of π conditioned by C, according to the general definition (4), is

H(π | C) = − Σ_{Ai∈π} p(Ai | C) log p(Ai | C)   (20)
Conditioned Entropy of Partitions. Given two partitions π1 = {Al}_{l=1,...,M} and π2 = {Bk}_{k=1,...,N} of the universe X, let us consider their meet partition

π1 ∧ π2 = { Al ∩ Bk }_{l=1,...,M; k=1,...,N}

where some of the events Al ∩ Bk could be empty, against the usual definition of partition. Without any loss of generality, we assume here a weaker position in which some set of a partition can be empty, with the corresponding probability equal to 0. With respect to the meet partition π1 ∧ π2 we have the “probability distribution of the joint occurrence p(Al ∩ Bk) of the events Al and Bk” [16]:

p(π1 ∧ π2) = ( p(Al ∩ Bk) = p(Bk) · p(Al | Bk) )_{l=1,...,M; k=1,...,N}   (21)

“Then the set of [meet] events Al ∩ Bk (1 ≤ l ≤ M, 1 ≤ k ≤ N), with the probabilities qlk := p(Al ∩ Bk) [of the joint occurrence of the events Al and Bk], represents another finite scheme, which we call the product of the schemas π1 and π2.” [16]. We can now consider two discrete random variables.
(RV-1) The uncertainty random variable of the partition π1 ∧ π2:

I(π1 ∧ π2) = ( − log p(Al ∩ Bk) )_{l=1,...,M; k=1,...,N}

(RV-2) The uncertainty random variable of the partition π1 conditioned by the partition π2:

I(π1 | π2) = ( − log p(Al | Bk) )_{l=1,...,M; k=1,...,N}

The uncertainty of the partition π1 ∧ π2, as the average of the random variable (RV-1) with respect to the probability distribution p(π1 ∧ π2), is thus expressed by the meet entropy

H(π1 ∧ π2) = − Σ_{l,k} p(Al ∩ Bk) log p(Al ∩ Bk)   (22)

whereas we define as entropy of π1 conditioned by π2 the average of the discrete random variable (RV-2) with respect to the probability distribution p(π1 ∧ π2), expressed by the non–negative quantity

H(π1 | π2) := − Σ_{l,k} p(Al ∩ Bk) log p(Al | Bk)   (23)

In the case of the meet partition π1 ∧ π2 of the two partitions π1 and π2, with associated probability distribution p(π1 ∧ π2) of (21), and according to the general definition of entropy (4), after some easy calculations we obtain the following result about the entropy of the meet, in which the conditioned entropy (19) is involved:

H(π1 ∧ π2) = H(π2) + Σ_{k=1}^N p(Bk) · H(π1 | Bk)   (24)
On the other hand, the following result about the conditioned entropy holds:

H(π1 | π2) = Σ_{k=1}^N p(Bk) · H(π1 | Bk)   (25)

As a consequence, the following interesting relationship between the meet entropy and the conditioned entropy is stated by the identity:

H(π1 ∧ π2) = H(π2) + H(π1 | π2) = H(π1) + H(π2 | π1)   (26)
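Identity (26) can be checked numerically; the following sketch (ours; counting measure and base–2 logarithms) computes both sides for a small pair of partitions:

```python
# Sketch: numerical check of H(pi1 ^ pi2) = H(pi2) + H(pi1|pi2), identity (26).
from math import log2

def H(pi, M):
    return -sum(len(A) / M * log2(len(A) / M) for A in pi if A)

def H_cond(pi1, pi2, M):
    # equation (23): - sum over l,k of p(A_l & B_k) log p(A_l | B_k)
    return -sum(len(A & B) / M * log2(len(A & B) / len(B))
                for B in pi2 for A in pi1 if A & B)

pi1 = [{1, 2}, {3, 4}, {5, 6, 7, 8}]
pi2 = [{1, 2, 3, 4}, {5, 6, 7, 8}]
meet = [A & B for A in pi1 for B in pi2 if A & B]
M = 8
print(H(meet, M), H(pi2, M) + H_cond(pi1, pi2, M))   # both sides of (26): 1.5
```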
If one takes into account that the condition π1 ⪯ π2 is equivalent to π1 ∧ π2 = π1, one has both the following results:

π1 ⪯ π2   implies   H(π1) = H(π2) + Σ_{k=1}^N p(Bk) · H(π1 | Bk)

π1 ⪯ π2   implies   H(π1) = H(π2) + H(π1 | π2) = H(π1) + H(π2 | π1)
From this it immediately follows that H(π1) ≥ H(π2). Moreover, the condition of strict ordering π1 ≺ π2 implies that at least one of the addends p(Bk) · H(π1 | Bk) must be different from 0, obtaining the strict anti–isotonicity condition

π1 ≺ π2   implies   H(π2) < H(π1)

Hence, taking into account relationship (15), we have that the co–entropy is a strictly isotonic mapping with respect to the partition ordering, i.e.,

π1 ≺ π2   implies   E(π1) < E(π2)
Trivially, the probability distribution (21) leads to the relationships

Σ_{k=1}^N p(Al ∩ Bk) = p(Al)   and   Σ_{l=1}^M p(Al ∩ Bk) = p(Bk)

which lead to the result

p(π1) = ( Σ_{k=1}^N p(A1 ∩ Bk), Σ_{k=1}^N p(A2 ∩ Bk), ..., Σ_{k=1}^N p(AM ∩ Bk) )

p(π2) = ( Σ_{l=1}^M p(Al ∩ B1), Σ_{l=1}^M p(Al ∩ B2), ..., Σ_{l=1}^M p(Al ∩ BN) )

from which the following result follows:

Proposition 3. Let π1 and π2 be two partitions of the same universe X. Then

H(π1 ∧ π2) ≤ H(π1) + H(π2)   (27)

The equality holds iff, whatever l and k, it is p(Al ∩ Bk) = ( Σ_{k=1}^N p(Al ∩ Bk) ) · ( Σ_{l=1}^M p(Al ∩ Bk) ) = p(Al) · p(Bk), the so–called condition of (mutual) independence of all the events Al and Bk. In this case we will say that the two partitions π1 and π2 are independent (to be precise, two partitions π1, π2 ∈ Π(X) are said to be independent iff for any A ∈ π1, B ∈ π2 we have p(A ∩ B) = p(A) · p(B)).

Let us now give a direct proof for the case of two independent partitions.
Proof. We have:

H(π1 ∧ π2) = − Σ_{A∈π1} Σ_{B∈π2} p(A ∩ B) · log p(A ∩ B).

Under the hypothesis that π1 and π2 are independent partitions, we obtain

H(π1 ∧ π2) = − Σ_{A∈π1} Σ_{B∈π2} [p(A) · p(B)] · log[p(A) · p(B)]
           = − ( Σ_{A∈π1} p(A) ) · Σ_{B∈π2} p(B) · log p(B) − ( Σ_{B∈π2} p(B) ) · Σ_{A∈π1} p(A) · log p(A)

Since we have that

Σ_{A∈π1} p(A) = Σ_{B∈π2} p(B) = 1
then we get the required result.

Comparing (26) with (27), we obtain the further inequality H(π1 | π2) ≤ H(π1), with equality iff the two partitions π1 and π2 are independent.

Conditioned Co–Entropy of Partitions. As a first result about the meet co–entropy, according to the general relationship (7) applied to (22), we have that the following holds:

E(π1 ∧ π2) = (1/m(X)) Σ_{l,k} m(Al ∩ Bk) · log m(Al ∩ Bk) = E(π2) − H(π1 | π2)   (28)

Moreover, having introduced the co–entropy of the partition π1 conditioned by the partition π2 as the quantity

E(π1 | π2) := (1/m(X)) Σ_{l,k} m(Al ∩ Bk) · log ( m(X) · m(Al ∩ Bk) / m(Bk) )
            = (1/m(X)) Σ_{l,k} [ m(Bk) · p(Al | Bk) ] log [ m(X) · p(Al | Bk) ]

it is easy to show that E(π1 | π2) ≥ 0. Furthermore, the expected relationship holds:

H(π1 | π2) + E(π1 | π2) = log m(X)

Note that from (28) it follows that

π1 ⪯ π2   implies   E(π2) = E(π1) + H(π1 | π2)   (29)
2.7 A Pointwise Approach to the Entropy and Co–Entropy of Partitions
In order to better understand the application to coverings of the Liang–Xu (LX) approach to quantifying information in the case of incomplete systems [21], let us now introduce (and compare with (13)) a new form of pseudo co–entropy related to a partition π = {A1, ..., AN} by the following definition, in which the sum involves the “local” information given by all the equivalence classes grπ(x) for the point x ranging over the universe X:

ELX(π) := (1/m(X)) Σ_{x∈X} m(grπ(x)) · log m(grπ(x))   (30)
Trivially, ∀ π ∈ Π(X), 0 = ELX(πd) ≤ ELX(π) ≤ ELX(πg) = m(X) · log m(X). Moreover, it is easy to prove the following result, which shows that, at least at the level of partitions, this “local” notion of entropy is a little “pathological”:

ELX(π) = (1/m(X)) Σ_{i=1}^N m(Ai)² · log m(Ai)   (31)

and so, from the fact that 1 ≤ m(Ai) ≤ m(Ai)², it follows that ∀ π, 0 ≤ E(π) ≤ ELX(π). The comparison between (13) and (31) puts in evidence the very profound difference between these definitions, and in some sense the “uselessness” of this latter notion. With the aim of capturing some relationship with respect to a pseudo–entropy, for any partition π let us consider the vector

μπ := ( μπ(x) := m(grπ(x)) / m(X) : x ∈ X )   (32)

which is a pseudo–probability distribution, since ∀ x, 0 ≤ μπ(x) ≤ 1, but μ(π) := Σ_{x∈X} μπ(x) = Σ_{i=1}^N m(Ai)² / m(X) ≥ 1; this latter quantity is equal to 1 when Σ_{i=1}^N m(Ai)² = m(X), in which case the vector μπ defines a real probability distribution. Moreover, for any partition it is μ(π) ≤ m(X), with μ(πg) = m(X). Applying in a purely formal way the formula (4) to this pseudo–distribution, one obtains

HLX(π) = − Σ_{x∈X} μπ(x) · log μπ(x) = − Σ_{i=1}^N ( m(Ai)² / m(X) ) · log ( m(Ai) / m(X) )   (33)

from which it follows that (compare with (15)):

HLX(π) + ELX(π) = log m(X) · μ(π)

Hence, ELX(π) is complementary to the “pseudo–entropy” HLX(π) with respect to the quantity log m(X) · μ(π), which is not invariant but depends on the partition π through its “pseudo–measure” μ(π). For instance, in the case of the trivial partition it is HLX(πg) = 0 and ELX(πg) = m(X) · log m(X), with HLX(πg) + ELX(πg) = m(X) · log m(X). On the other hand, in the case of the discrete partition it is HLX(πd) = log m(X) and ELX(πd) = 0, with HLX(πd) + ELX(πd) = log m(X). Of course, the measure distribution (32) can be normalized by the quantity μ(π), obtaining a real probability distribution

μ^(n)(π) = ( μπ^(n)(x) = m(grπ(x)) / Σ_{i=1}^N m(Ai)² : x ∈ X )

But in this case the real entropy HLX^(n)(π) = − Σ_{x∈X} μπ^(n)(x) · log μπ^(n)(x) is linked to the above pseudo co–entropy (30) by the relationship HLX^(n)(π) + (1/μ(π)) ELX(π) = log[μ(π) · m(X)], in which the dependence on the partition π through the “measure” μ(π) is very hard to handle in applications.
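A short sketch (ours; counting measure m(A) = |A|) contrasts the co–entropy E(π) with the pointwise pseudo co–entropy ELX(π) and checks the complementarity relation above:

```python
# Sketch: co-entropy E(pi) versus the Liang-Xu pseudo co-entropy E_LX(pi),
# equations (13) and (30)-(31), with the counting measure m(A) = |A|.
from math import log2

def E(pi, M):
    return sum(len(A) * log2(len(A)) for A in pi) / M

def E_LX(pi, M):        # pointwise sum (30); equals (1/M) sum |A|^2 log |A|
    return sum(len(A) * len(A) * log2(len(A)) for A in pi) / M

def H_LX(pi, M):        # pseudo-entropy (33)
    return -sum(len(A) ** 2 / M * log2(len(A) / M) for A in pi)

pi = [{1, 2}, {3, 4, 5}, {6}]
M = 6
mu = sum(len(A) ** 2 for A in pi) / M                # pseudo-measure mu(pi)
print(E(pi, M) <= E_LX(pi, M))                       # True: 0 <= E <= E_LX
print(H_LX(pi, M) + E_LX(pi, M), mu * log2(M))       # both equal mu(pi) log m(X)
```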
2.8 Local Rough Granularity Measure in the Case of Partitions
From the point of view of the rough approximations of subsets Y of the universe X (considered as a measure space ⟨X, P(X), m⟩) with respect to its partitions π, we shall now consider the situation in which, during the time evolution t1 → t2, one tries to relate the corresponding variation of partitions πt1 → πt2 with, for instance, the corresponding boundary modification bt1(Y) → bt2(Y) (see also Fig. 2). Let us note that the rough approximations with respect to πi, i = 1, 2, are such that

if π1 ⪯ π2, then lπ2(Y) ⊆ lπ1(Y) ⊆ Y ⊆ uπ1(Y) ⊆ uπ2(Y)

i.e., the rough approximation of Y with respect to the partition π1, rπ1(Y) = (lπ1(Y), uπ1(Y)), is better than the rough approximation of the same subset with respect to π2, rπ2(Y) = (lπ2(Y), uπ2(Y)). This fact can be denoted by the binary relation of partial ordering on subsets rπ1(Y) ⊑ rπ2(Y). This leads to a first, but only qualitative, valuation of the roughness, expressed by the following general law involving the boundaries of Y relative to the two partitions:

π1 ⪯ π2   implies that   ∀ Y, bπ1(Y) ⊆ bπ2(Y)

Fig. 2. Qualitative variation of boundaries with variation of partitions
The delicate point is that the condition of strict ordering π1 ≺ π2 does not assure that the corresponding strict ordering ∀ Y, bπ1(Y) ⊂ bπ2(Y) holds. It is possible to give some very simple counter–examples (see for instance Example 5) in which, notwithstanding π1 ≺ π2, one has that ∃ Y0 : bπ1(Y0) = bπ2(Y0) [9,5], and this is not a desirable behavior of such a qualitative valuation of roughness.

Example 5. In the universe X = {1, 2, 3, 4, 5, 6}, let us consider the two partitions π1 = {{1}, {2}, {3}, {4, 5, 6}} and π2 = {{1, 2}, {3}, {4, 5, 6}}, with respect to which π1 ≺ π2. The subset Y0 = {1, 2, 4, 6} is such that lπ1(Y0) = lπ2(Y0) = {1, 2} and uπ1(Y0) = uπ2(Y0) = {1, 2, 4, 5, 6}. This result implies that bπ1(Y0) = bπ2(Y0) = {4, 5, 6}.

On the other hand, in many practical applications (for instance in the attribute reduction procedure), it is interesting not only to have a possible qualitative
valuation of the roughness of a generic subset Y, but also a quantitative valuation formalized by a mapping E : Π(X) × 2^X → [0, K] (with K a suitable non–negative real number), assumed to satisfy (at least) the following two minimal requirements:

(re1) The strict monotonicity condition: for any Y ∈ 2^X, π1 ≺ π2 implies Eπ1(Y) < Eπ2(Y).
(re2) The boundary conditions: ∀ Y ∈ 2^X, Eπd(Y) = 0 and Eπg(Y) = 1.

In the sequel we will sometimes use Eπ : 2^X → [0, K] to denote the above mapping when the partition π ∈ Π(X) is considered fixed once and for all. The interpretation of condition (re2) is possible under the assumption that a quantitative valuation of the roughness Eπ(Y) should be directly related to its boundary through the measure m(bπ(Y)). From this point of view, the value 0 corresponds to the discrete partition, for which the boundary of any subset Y is empty, so that its rough approximation is rπd(Y) = (Y, Y) with m(bπd(Y)) = 0, i.e., a crisp situation. On the other hand, the trivial partition is such that the boundary of any nontrivial subset Y (≠ ∅, X) is the whole universe, so that its rough approximation is rπg(Y) = (∅, X) with m(bπg(Y)) = m(X). For all other partitions π we must recall that πd ⪯ π ≺ πg and 0 = m(bπd(Y)) ≤ m(bπ(Y)) ≤ m(bπg(Y)) = m(X), i.e., the maximum of roughness (or minimum of sharpness) valuation is reached by the trivial partition πg. This being stated, in the literature one can find many quantitative measures of roughness of Y relative to a given partition π ∈ Π(X), formalized as mappings ρπ : 2^X → [0, 1] such that:

(rm1) the monotonicity condition holds: π1 ⪯ π2 implies that ∀ Y ∈ 2^X, ρπ1(Y) ≤ ρπ2(Y);
(rm2) ∀ Y ∈ 2^X, ρπd(Y) = 0 and ρπg(Y) = 1.

The accuracy of the set Y with respect to the partition π is then defined as απ(Y) = 1 − ρπ(Y). The interpretation of condition (rm2) is that, in general, a roughness measure directly depends on a valuation of the measure of the boundary bπ(Y) of Y relative to π. Two of the more interesting roughness measures are

ρπ^(P)(Y) := m(bπ(Y)) / m(uπ(Y))   and   ρπ^(C)(Y) := m(bπ(Y)) / m(X)

with the latter (considered in [5]) producing a better description than the former (introduced by Pawlak in [24]) with respect to the absolute scale of sharpness previously introduced, since whatever the subset Y it is ρπ^(C)(Y) ≤ ρπ^(P)(Y). These roughness measures satisfy the above “boundary” condition (re2), but their drawback is that the strict condition on partitions π1 ≺ π2 does not assure a corresponding strict behavior ∀ Y, bπ1(Y) ⊂ bπ2(Y), and so the strict correlation ρπ1(Y) < ρπ2(Y) cannot be inferred. It might happen that, notwithstanding the strict partition order π1 ≺ π2, the two corresponding roughness measures for a certain subset Y0 turn out to be equal, ρπ1(Y0) = ρπ2(Y0), as illustrated in the following example.
Example 6. Making reference to Example 5, we have that although π1 ≺ π2, for the subset Y0 we get ρπ1(Y0) = ρπ2(Y0) (for both roughness measures ρπ^(P)(Y0) and ρπ^(C)(Y0)).

Summarizing, we can only state the following monotonicity with respect to the partition ordering:

π1 ≺ π2   implies   ∀ Y ⊆ X, ρπ1(Y) ≤ ρπ2(Y)
Taking inspiration from [9], a local co–entropy measure of Y, in the sense of a “co–entropy” assigned not to the whole universe X but to any possible subset Y of it, is then defined as the product of the above (local) roughness measure times the (global) co–entropy:

Eπ(Y) := ρπ(Y) · E(π)   (34)

For a fixed partition π of X this quantity also ranges in the closed real interval [0, log m(X)], whatever the subset Y, with the extreme values reached as Eπd(Y) = 0 and Eπg(Y) = log m(X), i.e., ∀ Y ⊆ X it is

0 = Eπd(Y) ≤ Eπ(Y) ≤ Eπg(Y) = log m(X)

Moreover, for any fixed subset Y this local co–entropy is strictly monotonic with respect to partitions:

π1 ≺ π2   implies   ∀ Y ⊆ X, Eπ1(Y) < Eπ2(Y)   (35)
Making use of the above interpretation (see the end of section 2.3) of the real interval [0, log m(X)] as an absolute scale of sharpness, from this result we have that, according to our intuition, the finer the partition, the better the sharpness of the rough approximation of Y; i.e., Eπ : Y ∈ P(X) → Eπ(Y) ∈ [0, log m(X)] can be considered as a (local) rough granularity mapping.

Example 7. Let us consider the universe X = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11}, its subset Y = {2, 3, 5, 8, 9, 10, 11}, and the following three different partitions of the universe X by granules:

π1 = {{2, 3, 5, 8, 9}, {1, 4}, {6, 7, 10, 11}},
π2 = {{2, 3}, {5, 8, 9}, {1, 4}, {6, 7, 10, 11}},
π3 = {{2, 3}, {5, 8, 9}, {1, 4}, {7, 10}, {6, 11}}

with π3 ≺ π2 ≺ π1. The lower and upper approximations of Y with respect to π1, π2 and π3 are equal and given, respectively, by lπk(Y) = {2, 3, 5, 8, 9} and uπk(Y) = {2, 3, 5, 6, 7, 8, 9, 10, 11}, for k = 1, 2, 3. Note that necessarily eπ1(Y) = eπ2(Y) = eπ3(Y) = {1, 4}. Therefore, the corresponding roughness measures are exactly the same: ρπ1(Y) = ρπ2(Y) = ρπ3(Y), even though from the point of view of granularity knowledge we know that the lower approximations of Y are obtained by different collections of granules:
gri_{π2}(Y) = {{2, 3}, {5, 8, 9}} = gri_{π3}(Y), a collection of two granules, is finer than gri_{π1}(Y) = {{2, 3, 5, 8, 9}}, a single granule; this fact is formally written as gri_{π2}(Y) = gri_{π3}(Y) ≺ gri_{π1}(Y). Similarly, always from the granule knowledge point of view, we can see that the best partitioning for the upper approximation of Y is obtained with π3, since gro_{π1}(Y) = {{2, 3, 5, 8, 9}, {6, 7, 10, 11}}, gro_{π2}(Y) = {{2, 3}, {5, 8, 9}, {6, 7, 10, 11}}, and gro_{π3}(Y) = {{2, 3}, {5, 8, 9}, {7, 10}, {6, 11}}, and thus gro_{π3}(Y) ≺ gro_{π2}(Y) ≺ gro_{π1}(Y). It is clear that the roughness measure ρπ(Y) is not enough when we want to catch any possible advantage, in terms of granularity knowledge, given by a different partitioning, even when the new partitioning does not increase the cardinality of the interior and closure approximation sets. On the contrary, this difference is measured by the local co–entropy (34) since, according to (35) and recalling that π3 ≺ π2 ≺ π1, we have the following strict monotonicity: Eπ3(Y) < Eπ2(Y) < Eπ1(Y).
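The numbers behind Example 7 can be reproduced with a short sketch (ours; counting measure and the Pawlak roughness ρ^(P)):

```python
# Sketch: local rough granularity measure (34), E_pi(Y) = rho_pi(Y) * E(pi),
# on the data of Example 7; the values strictly increase from pi3 to pi1.
from math import log2

def lower(pi, Y): return {x for A in pi if A <= Y for x in A}
def upper(pi, Y): return {x for A in pi if A & Y for x in A}

def E(pi, M):
    return sum(len(A) * log2(len(A)) for A in pi) / M

def local_E(pi, Y, M):
    u, l = upper(pi, Y), lower(pi, Y)
    rho = len(u - l) / len(u)          # Pawlak roughness rho^(P)
    return rho * E(pi, M)

Y = {2, 3, 5, 8, 9, 10, 11}
pi1 = [{2, 3, 5, 8, 9}, {1, 4}, {6, 7, 10, 11}]
pi2 = [{2, 3}, {5, 8, 9}, {1, 4}, {6, 7, 10, 11}]
pi3 = [{2, 3}, {5, 8, 9}, {1, 4}, {7, 10}, {6, 11}]
for pi in (pi3, pi2, pi1):
    print(local_E(pi, Y, 11))          # strictly increasing: pi3 < pi2 < pi1
```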
2.9 Application to Complete Information Systems: The Case of Fixed Universe
These considerations can be applied to the case of a complete Information System (IS) on a finite universe. Let us stress that in this subsection the universe is considered fixed, whereas it is the collection of attributes applied to X which changes. Indeed, in many applications it is of interest to analyze the variations occurring between two information systems labelled with two parameters t1 and t2, each of which is based on the same universe X. In particular, one has to do mainly with the following two cases:

(1) dynamics (see [11]), in which ISt1 = (X, Att1, F1) and ISt2 = (X, Att2, F2) are under the conditions that Att1 ⊂ Att2 and ∀ x ∈ X, ∀ a1 ∈ Att1, F2(x, a1) = F1(x, a1). This situation corresponds to a dynamical increase of knowledge (t1 and t2 are considered as time parameters, with t1 < t2), for instance in a medical database in which one fixed decision attribute d ∈ Att1 ∩ Att2 is selected to state a certain disease related to all the remaining condition attributes (i.e., symptoms) Ci = Atti \ {d}. In this case the increase Att1 \ {d} ⊆ Att2 \ {d} corresponds to the fact that, during the research on the disease, some symptoms which had been neglected at time t1 become relevant at time t2 under some new investigations.

(2) reduct, in which ISt1 = (X, Att1, F1) and ISt2 = (X, Att2, F2) are under the conditions that Att2 ⊂ Att1 and ∀ x ∈ X, ∀ a2 ∈ Att2, F2(x, a2) = F1(x, a2). In this case it is of interest to verify whether the corresponding partitions are invariant, πAtt2(ISt2) = πAtt1(ISt1), or not. In the former case one can consider ISt2 as the result of the reduction of the initial attributes Att1, obtained by the suppression from ISt1 of the superfluous attributes Att1 \ Att2.

From a general point of view, a reduction procedure can be formalized by a (strictly) monotonically decreasing sequence of attribute families RP := {At ⊆ Att s.t. t ∈ N and At ⊃ At+1}, with A0 = Att. In this case the following diagram holds, linking the family At with the generated partition π(At), whose co–entropy is E(At):
A0 = Att ⊃ A1 ⊃ ... ⊃ At ⊃ At+1 ⊃ ... ⊃ AT = ∅
   ↓         ↓          ↓       ↓             ↓
π(A0)     π(A1)  ...  π(At)  π(At+1)  ...   {X}
   ↓         ↓          ↓       ↓             ↓
E(A0)  ≤  E(A1)  ≤ ... ≤ E(At) ≤ E(At+1) ≤ ... ≤ log m(X)

The first row constitutes the attribute channel, the second row the partition channel (measured by the corresponding co–entropy, whose upper bound corresponds to the trivial partition πg = {X}), and the last row the granularity channel (whose upper bound corresponds to the maximum of roughness log m(X)) of the reduction procedure. After the finite number of steps T = |Att|, one reaches the empty set AT = ∅, with corresponding π(AT) = πg = {X}, the trivial partition, and E(AT) = log m(X). In this reduction context, the link between the situation at step t and the corresponding one at t + 1, relative to the co–entropy, is given by equation (29), which now assumes the form:

E(At+1) = E(At) + H(At | At+1)   (36)

From a general point of view, a practical reduction procedure consists of starting from an initial attribute family A0 and, according to some algorithmic criterion Alg, step by step “constructing” the sequence of the At, each a subset of the previous At−1. It is possible to fix a priori a suitable approximation value ε and then to stop the procedure at the first step t0 such that log m(X) − E(At0) ≤ ε. This assures that for any further step t > t0 it is also log m(X) − E(At) ≤ ε. The family of attributes At0 is the ε–approximate reduct with respect to the procedure Alg. Note that in terms of approximation the following order chain holds: ∀ t > t0, E(At) − E(At0) ≤ log m(X) − E(At0) ≤ ε. On the other hand, for any triplet of steps t0 < t1 < t2 it is

H(At1 | At2) = E(At2) − E(At1) ≤ log m(X) − E(At0) ≤ ε

Example 8. Consider the complete information system illustrated in Table 2.

Table 2. Flats complete information system

Flat  Price  Rooms  Down-Town  Furniture  Floor  Lift
1     high   3      yes        yes        3      yes
2     high   3      yes        yes        3      no
3     high   2      no         no         1      no
4     high   2      no         no         1      yes
5     high   2      yes        no         2      no
6     high   2      yes        no         2      yes
7     low    1      no         no         2      yes
8     low    1      no         yes        3      yes
9     low    1      no         no         2      no
10    low    1      yes        yes        1      yes
Let us consider the following five (decreasing) families of attributes:

A0 = Att = {Price, Rooms, Down-Town, Furniture, Floor, Lift} ⊃
A1 = {Price, Rooms, Down-Town, Furniture, Floor} ⊃
A2 = {Price, Rooms, Down-Town, Furniture} ⊃
A3 = {Price, Rooms, Down-Town} ⊃
A4 = {Price, Rooms} ⊃
A5 = {Price}

The corresponding partitions are π(A1) = π(A2) = {{1, 2}, {3, 4}, {5, 6}, {7, 9}, {8}, {10}}, π(A3) = {{1, 2}, {3, 4}, {5, 6}, {7, 8, 9}, {10}}, π(A4) = {{1, 2}, {3, 4, 5, 6}, {7, 8, 9, 10}}, and π(A5) = {{1, 2, 3, 4, 5, 6}, {7, 8, 9, 10}}. Note that π(A0) corresponds to the discrete partition πd. We can easily observe that π(A0) ≺ π(A1) = π(A2) ≺ π(A3) ≺ π(A4) ≺ π(A5) and that E(A0) = 0.00000 < E(A1) = 0.80000 = E(A2) < 1.07549 = E(A3) < 1.80000 = E(A4) < 2.35098 = E(A5) < log m(X) = 3.32193. Moreover, taking for instance E(A3) and E(A4), according to (36) we have H(A3 | A4) = E(A4) − E(A3) = 0.72451.

A0 = Att ⊃ A1 ⊃ A2 ⊃ A3 ⊃ A4 ⊃ A5 ⊃ AT = ∅
   ↓        ↓      ↓      ↓      ↓      ↓        ↓
π(A0) ≺ π(A1) = π(A2) ≺ π(A3) ≺ π(A4) ≺ π(A5) ≺ {X}
   ↓        ↓      ↓      ↓      ↓      ↓        ↓
E(A0) < E(A1) = E(A2) < E(A3) < E(A4) < E(A5) < log m(X)
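The co–entropy chain of Example 8 can be recomputed from the block sizes of the partitions π(At); the sketch below (ours; counting measure, base–2 logarithms) also checks equation (36) for the pair A3, A4:

```python
# Sketch: co-entropies E(A_t) of Example 8 from the block sizes of pi(A_t),
# and the conditional entropy H(A_3|A_4) = E(A_4) - E(A_3) of equation (36).
from math import log2

def E(block_sizes, M=10):
    return sum(n * log2(n) for n in block_sizes) / M

chain = {
    "A0": [1] * 10,              # discrete partition pi_d
    "A1=A2": [2, 2, 2, 2, 1, 1],
    "A3": [2, 2, 2, 3, 1],
    "A4": [2, 4, 4],
    "A5": [6, 4],
}
for name, sizes in chain.items():
    print(name, round(E(sizes), 5))       # 0.0, 0.8, 1.07549, 1.8, 2.35098
print("H(A3|A4) =", round(E(chain["A4"]) - E(chain["A3"]), 5))   # 0.72451
```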
2.10 Application to Complete Information Systems: The Case of Fixed Attributes
We have just studied complete information systems whose universe X is fixed, taking into account the possible variation of the set of attributes, for instance when their collection increases. Let us now consider the other point of view, in which the set of attributes is fixed whereas it is the universe of objects which increases in time. This approach can describe the situation of a hospital, specialized in some disease whose symptomatology has been characterized by a well recognized set of symptoms described as attributes. In this case the information table has a fixed set of attributes, and it is the set of patients which varies in time: ISt0 = ⟨Xt0, {fa : Xt0 → val(a) | a ∈ Att}⟩ and ISt1 = ⟨Xt1, {fa : Xt1 → val(a) | a ∈ Att}⟩. For a fixed information system IS = ⟨X, {fa : X → val(a) | a ∈ Att}⟩, with X = {x1, x2, ..., xN}, the finite set of attributes Att = {a1, a2, ..., aH} gives rise to the collection of attribute values V := val(a1) × val(a2) × ... × val(aH), which can be considered as an alphabet. Let us consider the information function fAtt : X → V which associates with any object x ∈ X the corresponding value

fAtt(x) = (fa1(x), fa2(x), ..., faH(x)) ∈ V

Then the information table generates the string of N letters in the alphabet V

x ≡ ( fAtt(x1), fAtt(x2), ..., fAtt(xN) ) ∈ V^N
which describes a microstate of length N, which can be represented on a grid of N cells in such a way that the site j of the grid is labelled by the letter fAtt(xj) ∈ V. The set V^N is then the phase space of the IS. If we denote by V = {v1, v2, ..., vL} the collection of all the letters of the alphabet V, then we can calculate the number ni(x) of cells of the microstate x carrying the same letter vi, obtaining in this way the new string, called a configuration,

n(x) ≡ (n1(x), n2(x), ..., nL(x)) ∈ N^L

Since the cardinality N = |X| of the universe X is related to this configuration by the identity N = Σi ni(x), we also refer to n(x) as a configuration involving N objects. Clearly, two microstates x and y must be considered as belonging to the same macrostate if for any i = 1, 2, ..., L they have the same number of cells with the letter vi, i.e., ni(x) = ni(y). This is of course an equivalence relation on the family of all possible microstates of length N. The total number W(n) of microstates from the phase space V^N which are characterized by the macrostate n = (n1, n2, ..., nL) ∈ N^L takes the following form:

W(n) = N! / (n1! n2! ... nL!)

where ni represents the number of cells carrying the letter vi of the alphabet. If we define as entropy of the configuration n the quantity h(n) = log W(n), then, making use of the Stirling formula and setting pi = ni/N, one obtains the approximation

h(n) = log W(n) ≈ −N Σ_{i=1}^{L} pi log pi
from which the average entropy is defined as

H(n) = h(n)/N ≈ − Σ_{i=1}^{L} pi log pi
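The quality of the Stirling approximation can be checked numerically; the following short Python sketch (ours, with an arbitrarily chosen three-letter configuration) compares the exact value log W(n) with the estimate N·H(p) as N grows.

from math import factorial, log2

def log_W(counts):
    """Exact log of the number W(n) of microstates with configuration n."""
    w = factorial(sum(counts))
    for n_i in counts:
        w //= factorial(n_i)   # multinomial coefficient, computed exactly
    return log2(w)

def stirling_estimate(counts):
    """N * H(p) with p_i = n_i / N, the Stirling approximation of log W(n)."""
    N = sum(counts)
    return -N * sum(n / N * log2(n / N) for n in counts if n > 0)

# The relative error shrinks as N grows (three letters v1, v2, v3).
for scale in (1, 10, 100):
    n = (3 * scale, 5 * scale, 2 * scale)
    print(sum(n), round(log_W(n), 2), round(stirling_estimate(n), 2))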
3 Partial Partitions
Real information systems are often incomplete, i.e., they suffer from missing information. An incomplete information system is formalized as a triple IIS := ⟨X, Att, F⟩. Differently from the case of a complete information system, F is a mapping only partially defined, on a subset D(F) of X × Att. In this way also the mapping representation of an attribute a is partially defined, on a subset Xa of X. We call the subset Xa := {x ∈ X : (x, a) ∈ D(F)} the definition domain of the attribute a. In order to extend to an incomplete information system the previously described properties and considerations about the entropy of partitions, we have at least two different possibilities.
(i) First of all (see also [4]), let a, b be two attributes. Then it is possible to define the (non–surjective) mapping fa,b : Xa ∪ Xb → val(a) × val(b) as

fa,b(x) := (fa(x), fb(x)) if x ∈ Xa ∩ Xb;  (fa(x), ∗) if x ∈ Xa ∩ (Xb)^c;  (∗, fb(x)) if x ∈ (Xa)^c ∩ Xb.

The generalization to any subset A of attributes is now straightforward, obtaining a mapping fA : XA → val(A), with XA = ∪_{a∈A} Xa and val(A) = Π_{a∈A} val(a). Now, for any possible "value" α = (αi) ∈ val(A), one can construct the granule fA^{−1}(α) = {x ∈ XA : fA(x) = α} of X labelled by α, also denoted by [A, α]. The family of granules gr(A) = {[A, α] : α ∈ val(A)}, plus the null granule [A, ∗] = X \ XA (i.e., the collection of the objects on which all the attributes are unknown), constitutes a partition of the universe X, in which gr(A) is a partition of the subset XA of X (and can therefore be considered a "partial" partition of X).

(ii) Another possibility is the following. We can consider the covering generated by a similarity relation. In the case of incompleteness, the similarity relation described in [18] is often used, according to which two objects x, y ∈ X are said to be similar if and only if

∀ a ∈ A ⊆ Att, either fa(x) = fa(y) or fa(x) = ∗ or fa(y) = ∗
We will start investigating the first of these two options, leaving the treatment of the second one to a further section (Section 4) dedicated to coverings.

According to the considerations just discussed at point (i), any incomplete information system on the universe X can be formalized by a collection of surjective mappings fa : Xa → val(a), for a ∈ Att, with Att an index set of attributes, each of which is partially defined on a subset Xa of X. The attributes which are of certain interest for the objects of the universe are then identified with the corresponding mappings. Adding to val(a) the further null value ∗, we can extend the partially defined mapping fa to a globally defined one, denoted by fa∗ : X → val∗(a), which assigns to any object x ∈ X the value fa∗(x) = fa(x) if x ∈ Xa, and the value fa∗(x) = ∗ otherwise. For any family of attributes A one can construct the "common" definition domain XA = ∪_{a∈A} Xa, and then it is possible to consider the multi–attribute mapping fA assigning to any object x ∈ XA the corresponding collection of values fA(x) = (fa∗(x))_{a∈A}. Note that for x ∈ XA at least one of the fa∗(x) ≠ ∗. Formally, we can state that x ∉ XA iff ∀ a ∈ A, fa∗(x) = ∗.

Let us denote by VA∗ the range of the multi–attribute mapping fA; then on the subset XA of the universe it generates a family of granules fA^{−1}(α) = {x ∈ XA : fA(x) = α} labelled by the multi–values α ∈ VA∗. Let us denote by Aα = fA^{−1}(α) the generic one of the above granules; then in general ∪_{α∈VA∗} Aα = XA ⊆ X, and so, even if their collection consists of nonempty and pairwise disjoint subsets of X, they form a partition of the domain XA, but not a partition of the universe X. Recalling that XA∗ = X \ XA, we can define the measures mA(Aα) = |Aα| and mA(XA∗) = 0, and so
mA(X) = mA(∪α Aα ∪ XA∗) = Σα mA(Aα) + mA(XA∗) = |XA|, with the natural extension to the σ–algebra of events EA(X) on X generated by the elementary events {Aα : α ∈ VA∗} ∪ {XA∗}, plus all the subsets (of measure 0) of XA∗, obtaining in this way a complete measure depending on the set of attributes A. In particular, the measure of the whole universe changes with the choice of A. The corresponding (globally normalized) probabilities are then p∗(Aα) = |Aα|/|X| and p∗(XA∗) = 0. According to a widely used terminology, the collection π(A) = {Aα : α ∈ VA∗} is a pseudo probability partition, because p∗(∪α Aα) = |XA|/|X| ≤ 1, i.e., we do not always have equality to 1. It is possible to define the entropy and related co–entropy of the pseudo probability partition generated by A as follows:

H̃(A) = (|XA|/|X|) · log |X| − (1/|X|) Σ_{α∈VA∗} |Aα| log |Aα| = − Σ_{α∈VA∗} p∗(Aα) log p∗(Aα)   (37a)

Ẽ(A) = ((|X| − |XA|)/|X|) · log |X| + (1/|X|) Σ_{α∈VA∗} |Aα| log |Aα|   (37b)
Also in this case we have that

H̃(A) + Ẽ(A) = log |X|.   (38)
Remark 3. In the context of partial partitions generated by families of attributes from an incomplete information system, let us consider two families of attributes A and B, with A ⊆ B. The mapping fB is defined on XB, which contains the definition domain XA of fA: XA ⊆ XB; the two corresponding σ–algebras are related in the same way: EA(X) ⊆ EB(X). This latter result assures that π(B) ⪯ π(A) according to (1) (por2), generalized to the present case, but in general it is not assured that π(B) ≺ π(A).

The following important result about isotonicity holds.

Theorem 2. Given an incomplete information system, let A ⊆ B be two collections of attributes, and π(B) and π(A) the corresponding pseudo–probability partitions. Then we have

H̃(A) ≤ H̃(B).

Moreover, under the condition |XB| > |XA| the following strict isotonicity holds: A ⊂ B implies H̃(A) < H̃(B).

Proof. Since A ⊆ B, following the discussion of Remark 3, in the present case one obtains that, according to (1) (por2), π(B) ⪯ π(A), whereas π(B) ≺ π(A) is not assured in general. Thus, there exists at least one Ah ∈ π(A) for which there exists {Bh^1, ..., Bh^m} ⊆ π(B) s.t. Ah = Bh^1 ∪ ... ∪ Bh^m. From the point of view of probabilities one has that

p∗(Ah) = |Ah|/|X| = Σ_{i=1}^{m} p∗(Bh^i)   (39)
If one follows the proof of the property of additivity of the entropy of partitions in [25, p. 84], taking into account (39), it is possible to prove that

p∗(Ah) log p∗(Ah) − Σ_{k=1}^{m} p∗(Bh^k) log p∗(Bh^k)
= (Σ_{i=1}^{m} p∗(Bh^i)) log p∗(Ah) − Σ_{k=1}^{m} p∗(Bh^k) log p∗(Bh^k)
= p∗(Ah) · H̃( p∗(Bh^1)/p∗(Ah), ..., p∗(Bh^m)/p∗(Ah) ).
From this result one obtains that

H̃(B) = − Σ_{α∈VB∗} (|Bα|/|X|) log (|Bα|/|X|) = (|XB|/|X|) log |X| − (1/|X|) Σ_{α∈VB∗} |Bα| log |Bα|
= H̃( p∗(A1), ..., p∗(Bh^1), ..., p∗(Bh^m), ..., p∗(AN), p∗(Bz^1), ..., p∗(Bz^k) )
= H̃( p∗(A1), ..., p∗(Ah), ..., p∗(AN) ) + p∗(Ah) · H̃( p∗(Bh^1)/p∗(Ah), ..., p∗(Bh^m)/p∗(Ah) ) + H̃( p∗(Bz^1), ..., p∗(Bz^k) )

Let us denote the last term H̃(p∗(Bz^1), ..., p∗(Bz^k)) by H̃(B \ A). This term represents the part of the entropy H̃(B) regarding the classes Bz^1, ..., Bz^k of the partition π(B) which are not contained in any class of π(A). Thus we have

H̃(B) = H̃(A) + p∗(Ah) · H̃( p∗(Bh^1)/p∗(Ah), ..., p∗(Bh^m)/p∗(Ah) ) + H̃(B \ A)

This result means that we always have H̃(B) ≥ H̃(A), and thus we can write

A ⊆ B implies H̃(A) ≤ H̃(B)   (40)
In particular, if |XB| > |XA| then we have that H̃(B \ A) > 0, and from this H̃(A) < H̃(B) trivially follows.

As a direct consequence of Theorem 2, making use of (40) and (38), we have the following corollary regarding the co–entropy Ẽ(A).

Corollary 1. Let A ⊆ B be two collections of attributes. Then we have Ẽ(B) ≤ Ẽ(A).

Remark 4. Let us stress that the newly defined co–entropy (37b) is a generalization of the co–entropy (13) defined for complete information systems. In fact, in the particular case of a complete information system we have XA = X, and thus (37b) reduces to (13).
3.1 Local Rough Granularity Measure in the Case of Partial Partitions
Let us consider a subset Y ⊆ X. Relative to a definition domain XA, we may distinguish a subset YA = Y ∩ XA and YA∗ = Y \ YA, thus obtaining Y = YA ∪ YA∗. Similarly to what we have discussed in subsection 2.8, one can introduce two new notions of accuracy and roughness of Y ⊆ X relative to the probability partition πA, as follows:

Definition 2. Given an incomplete information system, let us take Y ⊆ X, and let XA be the definition domain corresponding to A ⊆ Att. We can define the following accuracy and roughness coefficients relative to the rough approximation of Y:

α̃πA(Y) := (|lπA(Y)| + |eπA(Y)|) / |XA|   (41a)
ρ̃πA(Y) := 1 − α̃πA(Y) = |bπA(Y)| / |XA|   (41b)
The following holds.

Proposition 4. Given Y ⊆ X, let A ⊆ B be two collections of attributes of an incomplete information system, and π(B) and π(A) the corresponding probability partitions. Then, Y ⊆ XA implies ρ̃πB(Y) ≤ ρ̃πA(Y).

Proof. Since A ⊆ B, we know that π(B) ⪯ π(A) and that XA ⊆ XB. Thus |XA| ≤ |XB| and |bπB(Y)| ≤ |bπA(Y)|. Hence we obtain

∀ Y ⊆ XA, A ⊆ B implies ρ̃πB(Y) ≤ ρ̃πA(Y).   (42)
In analogy with the rough entropy defined from a probability partition of a complete information system, we can define a rough entropy of Y relative to the probability partition πA of an incomplete information system as follows.

Definition 3. Given an incomplete information system, let us consider Y ⊆ X and A ⊆ Att. We can define the following rough entropy of Y relative to the probability partition πA:

ẼπA(Y) = ρ̃πA(Y) · Ẽ(A).   (43)
We have the following:

Proposition 5. Let us consider an incomplete information system. Given Y ⊆ X, let A ⊆ B be two collections of attributes, XA ⊆ XB the corresponding definition domains, and π(B) and π(A) the corresponding probability partitions. If Y ⊆ XA then we have

ẼπB(Y) ≤ ẼπA(Y)
Proof. As previously shown, Corollary 1 holds and, under the condition Y ⊆ XA, ρ̃πB(Y) ≤ ρ̃πA(Y) holds too. Hence, trivially,

∀ Y ⊆ XA, A ⊆ B implies ẼπB(Y) ≤ ẼπA(Y).

If Y ⊄ XA, we have in general that

A ⊆ B does not imply ρ̃πB(Y) ≤ ρ̃πA(Y)   (44)

as illustrated in Example 9.
Example 9. Let us consider the incomplete information system illustrated in Table 3, with universe X = {1, 2, ..., 14}.

Table 3. Flats incomplete information system

Flat  Price  Rooms  Down-Town  Furniture
1     high   2      *          no
2     high   2      *          *
3     high   2      *          no
4     low    3      yes        no
5     low    3      yes        *
6     low    3      yes        no
7     low    1      no         no
8     low    1      *          *
9     *      *      yes        yes
10    *      *      yes        *
11    *      *      *          *
12    *      *      *          no
13    *      *      *          no
14    *      *      *          *
Let us choose the set Y = {1, 2, 7, 8, 9} ⊆ X (we can think of having a decision attribute, for instance "Close to the railway station", which partitions X into the flats close to the railway station ({1, 2, 7, 8, 9}) and the flats considered far ({3, 4, 5, 6, 10, 11, 12, 13, 14})). Now let us consider two subsets of attributes A, B ⊆ Att, with A = {Price, Rooms} and B = {Price, Rooms, Down-Town}. We thus obtain the two definition domains XA = {1, 2, 3, 4, 5, 6, 7, 8} ⊆ XB = {1, 2, ..., 10}, and the corresponding probability partitions πB = {{1,2,3}, {4,5,6}, {7}, {8}, {9,10}} and πA = {{1,2,3}, {4,5,6}, {7,8}}, with respect to which πB ⪯ πA. The subset Y is such that lπA(Y) = lπB(Y) = {7,8}, uπA(Y) = {1,2,3,7,8}, uπB(Y) = {1,2,3,7,8,9,10}, and eπA(Y) = eπB(Y) = {4,5,6}. This implies that α̃πB(Y) = 5/10 < α̃πA(Y) = 5/8 and ρ̃πA(Y) = 3/8 < ρ̃πB(Y) = 5/10. Moreover, as illustrated in Table 4, the entropies and co–entropies in the case presented turn out to be respectively anti–tone and isotone, as expected, whereas the local rough entropies do not respect the desired order.

Table 4. Entropies with different attribute families

     H̃      Ẽ      Ẽπ(Y)
A    1.35    2.45   0.92
B    1.89    1.91   0.95

Hence, we have a local rough entropy (43) which behaves isotonically only under the restriction Y ⊆ XA. Since this restriction is due to the definition we gave of ρ̃πA(Y), we propose the following possible solution: we consider the whole
set of attributes Att for the accuracy and roughness measures, thus obtaining constant values (for a given incomplete information system), independent of the relationship between Y and the definition domains:

α̃πAtt(Y) := (|lπAtt(Y)| + |eπAtt(Y)|) / |XAtt|   (45a)
ρ̃πAtt(Y) := 1 − α̃πAtt(Y) = |bπAtt(Y)| / |XAtt|   (45b)
In this way, we can define the following measure of rough entropy of Y relative to the probability partition πA of an incomplete information system:

Ẽ^Att_πA(Y) = ρ̃πAtt(Y) · Ẽ(A).   (46)
This new local rough entropy behaves isotonically, with respect to the partition order, for any Y ⊆ X, as trivially shown by the following proposition.

Proposition 6. Let us consider an incomplete information system. Given Y ⊆ X, let A ⊆ B be two collections of attributes, XA ⊆ XB the corresponding definition domains, and π(B) and π(A) the corresponding probability partitions. Then we have

Ẽ^Att_πB(Y) ≤ Ẽ^Att_πA(Y).

Proof. Given Att, (45) is a constant value; thus from Corollary 1 we trivially obtain

∀ Y ⊆ X, A ⊆ B implies Ẽ^Att_πB(Y) ≤ Ẽ^Att_πA(Y).   (47)

Example 10. Let us consider again Example 9. If we take the whole set of attributes Att = {Price, Rooms, Down-Town, Furniture}, the corresponding partition is π(Att) = {{1,3}, {2}, {4,6}, {5}, {7}, {8}, {9}, {10}, {12,13}}. With respect to the set Y, the lower approximation is lπAtt(Y) = {2,7,8,9}, the upper one is uπAtt(Y) = {1,2,3,7,8,9}, and the external region is eπAtt(Y) = {4,5,6,10,12,13}. From this we have α̃πAtt(Y) = 10/12 and ρ̃πAtt(Y) = 1/6. Thus we obtain that, in the present case, the local rough entropies (46) for π(A) and π(B) are respectively Ẽ^Att_πA(Y) = 0.40898 > Ẽ^Att_πB(Y) = 0.31832, as expected.
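The computations of Examples 9 and 10 can be reproduced with a few lines of Python; the sketch below (our illustration, with '*' encoding missing values) builds the pseudo-probability partition of an attribute family over its definition domain and evaluates the entropy (37a) and co-entropy (37b), matching the H̃ and Ẽ columns of Table 4.

from collections import defaultdict
from math import log2

ROWS = {  # Table 3: (Price, Rooms, Down-Town, Furniture)
    1: ("high", "2", "*", "no"), 2: ("high", "2", "*", "*"),
    3: ("high", "2", "*", "no"), 4: ("low", "3", "yes", "no"),
    5: ("low", "3", "yes", "*"), 6: ("low", "3", "yes", "no"),
    7: ("low", "1", "no", "no"), 8: ("low", "1", "*", "*"),
    9: ("*", "*", "yes", "yes"), 10: ("*", "*", "yes", "*"),
    11: ("*", "*", "*", "*"),    12: ("*", "*", "*", "no"),
    13: ("*", "*", "*", "no"),   14: ("*", "*", "*", "*"),
}
ATT = ("Price", "Rooms", "Down-Town", "Furniture")

def granules(attrs):
    """Pseudo-probability partition pi(A), living on the domain X_A."""
    idx = [ATT.index(a) for a in attrs]
    cls = defaultdict(set)
    for x, row in ROWS.items():
        key = tuple(row[i] for i in idx)
        if any(v != "*" for v in key):   # x belongs to X_A
            cls[key].add(x)
    return list(cls.values())

def entropies(attrs):
    """(H~(A), E~(A)) of (37a)-(37b); by (38) they sum to log |X|."""
    n, g = len(ROWS), granules(attrs)
    s = sum(len(c) * log2(len(c)) for c in g)
    x_a = sum(len(c) for c in g)
    h = x_a / n * log2(n) - s / n
    return h, log2(n) - h

for A in (("Price", "Rooms"), ("Price", "Rooms", "Down-Town")):
    print(A, [round(v, 2) for v in entropies(A)])
# As in Table 4: A -> [1.35, 2.45] and B -> [1.89, 1.91].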
4 Coverings
In this section we will treat the approach of extracting coverings from incomplete information systems by using the so–called similarity relation introduced in [18]
and previously illustrated. We will also illustrate an approach that allows one to extract a collection of "similarity" classes from coverings that are not induced by a similarity relation. If one wants to use entropy or rough entropy (for example, in order to reduce the information system), it is necessary to find an entropy that behaves "well" in the context of coverings. That is why in this section we summarize the main approaches, and attempts at approaches, to entropies for coverings.

4.1 Genuine Coverings
It is interesting to observe that in the case of a covering γ = {B1, B2, ..., BN} of the universe X it may happen that some of its elements are in some sense redundant. In particular, if for two of its elements Bi ∈ γ and Bj ∈ γ it happens that Bi ⊆ Bj, then from the covering point of view the subset Bi can be considered "irrelevant", since its covering role is performed by the larger set Bj. It is thus interesting to select those particular "genuine" coverings in which this redundancy is not involved. To this purpose we introduce the following definition: a covering γ = {B1, B2, ..., BN} is said to be genuine iff no element Bi ∈ γ is equal to the whole X and the following condition is satisfied:

∀ Bi ∈ γ, ∀ Bj ∈ γ, Bi = Bi ∩ Bj implies Bi = Bj

or, equivalently,

∀ Bi ∈ γ, ∀ Bj ∈ γ, Bi ⊆ Bj implies Bi = Bj.

The collection of all genuine coverings of X will be denoted by Γg(X) in the sequel. A canonical procedure to obtain a genuine covering γg from a generic covering γ = {B1, B2, ..., BN} of X is the following:

(G1) construct all the maximal chains from γ with respect to the set theoretic inclusion: Ci(γ) = {Bi1, Bi2, ..., BiM}, Cj(γ) = {Bj1, Bj2, ..., BjM}, ..., Cz(γ) = {Bz1, Bz2, ..., BzM};
(G2) collect all the maximal elements {BiM, BjM, ..., BzM}; this is a genuine covering of X induced by γ, denoted by γg.

Trivially, γg ⊆ γ, from which the following follow:

∀ Bi ∈ γ, ∃ BjM ∈ γg s.t. Bi ⊆ BjM   (48a)
∀ BjM ∈ γg, ∃ Bi ∈ γ s.t. Bi ⊆ BjM   (48b)
Example 11. Let us consider the universe X = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10} and the covering γ = {B1 = {1, 2, 3, 4, 5, 6}, B2 = {3}, B3 = {2, 3}, B4 = {3, 4, 6}, B5 = {4, 5, 6, 7, 8, 9, 10}, B6 = {7, 8, 9}, B7 = {7, 9}}. Then there are 3 maximal chains C1 = {B2 = {3}, B3 = {2, 3}, B1 = {1, 2, 3, 4, 5, 6}}, C2 = {B2 = {3}, B4 = {3, 4, 6}, B1 = {1, 2, 3, 4, 5, 6}}, and C3 = {B7 = {7, 9}, B6 = {7, 8, 9}, B5 = {4, 5, 6, 7, 8, 9, 10}} and so the genuine covering induced by γ is γg = {B1 = {1, 2, 3, 4, 5, 6}, B5 = {4, 5, 6, 7, 8, 9, 10}}.
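Since the maximal elements of the chains of (G1) are exactly the inclusion-maximal blocks, the extraction of γg admits a direct implementation, as in the following Python sketch (ours; for simplicity it does not check the additional condition Bi ≠ X).

def genuine(covering):
    """(G1)-(G2): keep the inclusion-maximal blocks, dropping duplicates."""
    blocks = [set(b) for b in covering]
    maximal = []
    for b in blocks:
        if not any(b < c for c in blocks) and b not in maximal:
            maximal.append(b)
    return maximal

# Example 11: only B1 and B5 survive.
gamma = [{1, 2, 3, 4, 5, 6}, {3}, {2, 3}, {3, 4, 6},
         {4, 5, 6, 7, 8, 9, 10}, {7, 8, 9}, {7, 9}]
print(genuine(gamma))  # [{1, 2, 3, 4, 5, 6}, {4, 5, 6, 7, 8, 9, 10}]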
Let us observe that a genuine covering is not always a minimal covering, where a minimal covering γm is a subcovering of γ consisting of the smallest possible number of sets.

Example 12. Let us consider the universe X = {1, 2, ..., 10} and the covering γ = {B1 = {2,3}, B2 = {3,4,6}, B3 = {1,4,5,6,7}, B4 = {5,7,8,9}, B5 = {7,9}, B6 = {6,7,8,9,10}}. The minimal covering of γ is γm = {B1, B3, B6}, whereas the genuine covering is γg = {B1, B2, B3, B4, B6}. In fact, according to the above definition, in the genuine covering we have to keep both sets B2 and B4, since they are not absorbed by any other set, i.e., there are no other Bi, Bj ∈ γ such that B2 ⊆ Bi and B4 ⊆ Bj.

4.2 Orderings and Quasi–Orderings on Coverings
Since an entropy behaves "well" when it is isotonic or anti–tonic with respect to a certain (quasi) order relation, we now introduce some definitions of orderings and quasi–orderings for coverings. In [5] and [3] one can find the definitions of some quasi–orderings (i.e., reflexive and transitive, but in general non anti–symmetric, relations [8, p. 20]) and of an ordering for generic coverings, as extensions to this context of the formulations (por1)–(por4) of the previously discussed ordering on partitions, with the first two and the fourth of the "global" kind and the third one of the "pointwise" kind.

"Global" Orderings and Quasi–Orderings on Coverings. In the present section we take into account the generalization of only the first two global cases. The first quasi–ordering is the extension of (por1), given by the following binary relation for γ, δ ∈ Γ(X):

γ ⪯ δ   iff   ∀ Ci ∈ γ, ∃ Dj ∈ δ s.t. Ci ⊆ Dj   (49)
The corresponding strict quasi–order relation is γ ≺ δ iff γ ⪯ δ and γ ≠ δ. Let us observe that on the class of all genuine coverings Γg(X) the binary relation ⪯ is an ordering [5]. In fact, let γ, δ be two genuine coverings of X such that γ ⪯ δ and δ ⪯ γ. Then for every C ∈ γ, using γ ⪯ δ, we have that there exists D ∈ δ such that C ⊆ D; but from δ ⪯ γ it follows that there is also a C′ ∈ γ such that D ⊆ C′, so C ⊆ D ⊆ C′, and by the genuineness of γ necessarily C = D. Vice versa, for every D ∈ δ there exists C ∈ γ such that D = C.

Another quasi–ordering on Γ(X), which generalizes (por2) to coverings, is defined by the following binary relation:

γ ⊑ δ   iff   ∀ D ∈ δ, ∃ {C1, C2, ..., Cp} ⊆ γ s.t. D = C1 ∪ C2 ∪ ... ∪ Cp   (50)

In the covering context there is no general relationship between (49) and (50), since it is possible to give an example of two (genuine) coverings γ, δ for which γ ⪯ δ but γ ⋢ δ, and of two other (genuine) coverings η, ξ for which η ⊑ ξ but η ⋠ ξ.
Let us now illustrate a binary relation introduced by Wierman (in the covering context, in an unpublished work which he kindly sent to us; see also [3]):

γ ⪯W δ   iff   ∀ Ci ∈ γ, ∀ Dj ∈ δ, Ci ∩ Dj ≠ ∅ implies Ci ⊆ Dj   (51)
This binary relation, which corresponds to (1) (por4) on Π(X), has the advantage of being anti–symmetric on the whole Γ(X); but it presents the drawback (as explained by Wierman himself) of not being reflexive in the covering context, as illustrated in the following example.

Example 13. Let us consider the universe X = {1, 2, 3, 4, 5} and the covering γ = {C1 = {1,2,3}, C2 = {3,4}, C3 = {2,5}}. For the reflexivity of (51) we should have that ∀ δ ∈ Γ(X), δ ⪯W δ. But from this simple example we can see that this is not true. In fact γ ⋠W γ since, for instance, C1 ∩ C2 ≠ ∅ but we have neither C1 ⊆ C2 nor C2 ⊆ C1. For this reason, in order to define an ordering on coverings, Wierman added the further condition γ = δ, in the following way:

γ ≤W δ   iff   γ = δ or γ ⪯W δ   (52)
So we can see that in the covering case it is difficult to maintain the three properties of reflexivity, transitivity and anti–symmetry at the same time, unless one adds further conditions to the definition of the binary relation, or restricts its applicability to a subclass of coverings, such as the class of all genuine ones. Another advantage of (52) (as illustrated by Wierman himself) is that the pair (Γ(X), ≤W) is a poset, lower bounded by the discrete partition πd = {{x1}, {x2}, ..., {xm(X)}}, which is the least element, and upper bounded by the trivial partition πg = {X}, which is the greatest element. Moreover, it is a lattice.

Let us now illustrate how one can extract a partition π(γ) from a covering γ. We devised a method consisting of two main steps (see [7]): first we create the covering completion γc, which consists of all the sets Ci of γ and of all their complements Ci^c; then, for each x ∈ X, we generate the granule gr(x) = ∩{C ∈ γc : x ∈ C}. The collection of all granules gr(x) is a partition. The Wierman approach to generating a partition from a covering presents a different formulation, i.e., gr(x) = ∩_{x∈C} C \ ∪_{x∉C} C, which is equal to the just introduced granule. The following important proposition holds:

Proposition 7. Given two coverings γ and δ, one has that

γ ≤W δ implies π(γ) ⪯ π(δ)
This property (not described by Wierman) is very important since it allows us to compare two coverings through the entropies of their induced partitions, which behave anti–tonically with respect to the standard order relation (1) on Π(X). Hence we obtain the following important result:

Proposition 8. Given two coverings γ and δ, the following holds:

γ ≤W δ implies H(π(δ)) ≤ H(π(γ)) and E(π(γ)) ≤ E(π(δ))
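The completion procedure for extracting π(γ) admits a direct implementation; the following Python sketch (ours) computes the granules gr(x) = ∩{C ∈ γc : x ∈ C} and reproduces, for instance, the partition π(γ3) of Example 20 below.

def induced_partition(universe, covering):
    """Partition pi(gamma) via the completion: blocks plus their complements."""
    completion = [set(c) for c in covering] + [universe - set(c) for c in covering]
    granules = []
    for x in universe:
        g = set(universe)
        for c in completion:
            if x in c:
                g &= c   # gr(x): intersect all completion members containing x
        if g not in granules:
            granules.append(g)
    return granules

X = set(range(1, 26))
gamma3 = [set(range(1, 3)) | set(range(6, 12)) | set(range(16, 24)),
          {13, 14, 15, 24}, {12, 13, 24}, {3, 4, 5, 25}]
print(induced_partition(X, gamma3))
# Granules: {1,2,6,...,11,16,...,23}, {3,4,5,25}, {12}, {13,24}, {14,15}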
"Pointwise" Quasi–Orderings on Coverings. Let us now consider a covering γ = {C1, C2, ..., CN} of X and let us see whether one can construct a collection of |X| similarity classes, where each class is generated by an object x ∈ X. The aim is to obtain a new covering induced by the original one, i.e., a new covering which expresses the original one via a collection of some kind of similarity classes. In [5] one can find the description of two possible kinds of similarity classes induced by an object x of the universe X: the lower granule γl(x) := ∩{Ci ∈ γ : x ∈ Ci} and the upper granule γu(x) := ∪{Cj ∈ γ : x ∈ Cj} generated by x. Of course, in the case of a trivial covering the upper granule of any point x is the whole universe X, and so this notion turns out to be "significant" only in the case of non-trivial coverings. Thus, given a covering γ of a universe X, for any x ∈ X we can define the granular rough approximation of x induced by γ as the pair rγ(x) := ⟨γl(x), γu(x)⟩, where x ∈ γl(x) ⊆ γu(x). The collections γu := {γu(x) : x ∈ X} and γl := {γl(x) : x ∈ X} of all such granules are both coverings of X, called the upper covering and the lower covering generated by γ. In particular, we obtain that for any covering γ of X the following hold: γl ⪯ γ ⪯ γu and γl ⊑ γ ⊑ γu. We can now introduce two more quasi–order relations on Γ(X), defined by the following binary relations:

γ ⪯u δ   iff   ∀ x ∈ X, γu(x) ⊆ δu(x)
γ ⪯l δ   iff   ∀ x ∈ X, γl(x) ⊆ δl(x)
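The lower and upper granules, and hence the two pointwise quasi-orderings just defined, are straightforward to compute; the following Python sketch (ours, on an arbitrary toy covering; the function names are our own) shows a case where the upper-granule condition holds while the lower-granule one fails.

def lower_granule(x, covering):
    """gamma_l(x): intersection of all blocks containing x."""
    blocks = [set(c) for c in covering if x in c]
    g = blocks[0]
    for c in blocks[1:]:
        g = g & c
    return g

def upper_granule(x, covering):
    """gamma_u(x): union of all blocks containing x."""
    g = set()
    for c in covering:
        if x in c:
            g |= set(c)
    return g

def leq_u(universe, gamma, delta):
    return all(upper_granule(x, gamma) <= upper_granule(x, delta) for x in universe)

def leq_l(universe, gamma, delta):
    return all(lower_granule(x, gamma) <= lower_granule(x, delta) for x in universe)

X = {1, 2, 3, 4, 5}
gamma = [{1, 2}, {2, 3}, {4, 5}]
delta = [{1, 2, 3}, {3, 4, 5}]
print(leq_u(X, gamma, delta))  # True: gamma is below delta pointwise on upper granules
print(leq_l(X, delta, gamma))  # False: the lower-granule condition of (53) fails here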
In [5] we have shown that γ ⪯ δ implies γ ⪯u δ, but it is possible to give an example of two coverings γ, δ such that γ ⪯ δ and for which γ ⪯l δ does not hold. So it is important to consider a further quasi–ordering on coverings, defined as

γ ⊴ δ   iff   δ ⪯l γ and γ ⪯u δ   (53)

which can be equivalently formulated as:

γ ⊴ δ   iff   ∀ x ∈ X, δl(x) ⊆ γl(x) ⊆ (???) ⊆ γu(x) ⊆ δu(x)

where the question marks represent an intermediate covering granule γ(x), which is something "hidden" in the involved structure. This pointwise behavior can be formally denoted by ∀ x ∈ X, rγ(x) := ⟨γl(x), γu(x)⟩ ⊴ ⟨δl(x), δu(x)⟩ =: rδ(x). In other words, ⊴ means that for any point x ∈ X the local approximation rγ(x) given by the covering γ is better than the local approximation rδ(x) given by the covering δ. So equation (53) can be summarized by: γ ⊴ δ iff ∀ x ∈ X, rγ(x) ⊴ rδ(x) (the latter written in the more compact form rγ ⊴ rδ).

Orderings on Coverings in the Case of Incomplete Information Systems. Let us now consider which (quasi) order relations can be defined in the case of incomplete information systems IIS = ⟨X, Att, F⟩. Let us start by describing how one can extract coverings from an incomplete information system. For any family A of attributes it is possible to define on the objects of X the similarity relation SA introduced in [18]:
x SA y   iff   ∀ a ∈ A, either fa(x) = fa(y) or fa(x) = ∗ or fa(y) = ∗.   (54)
This relation generates a covering of the universe X through the granules of information (also called similarity classes) sA(x) = {y ∈ X : (x, y) ∈ SA}, since X = ∪{sA(x) : x ∈ X} and x ∈ sA(x) ≠ ∅. In the sequel this kind of covering will be denoted by γ(A) := {sA(x) : x ∈ X}, and their collection by Γ(IS) := {γ(A) ∈ Γ(X) : A ⊆ Att}. Let us observe that, if in an incomplete information system ⟨X, Att, F⟩ we consider two subfamilies of attributes A, B ⊆ Att, with the induced coverings of X denoted by γ(A) and γ(B), the following holds:

B ⊆ A implies γ(A) ⪯ γ(B)   (55)
Unfortunately, in general B ⊆ A does not imply γ(A) ⊑ γ(B), as illustrated in the following example.

Example 14. Let us consider the incomplete information system represented in Table 5.

Table 5. Flats incomplete information system

Flat  Price  Rooms  Down-Town  Furniture
f1    high   2      yes        *
f2    high   *      yes        no
f3    *      2      yes        no
f4    low    *      no         no
f5    low    1      *          no
f6    *      1      yes        *
If one considers the set of all attributes (i.e., A = Att(X)) and the induced similarity relation, we obtain the following covering:

γ(A) = {sA(f1) = sA(f3) = {f1, f2, f3}, sA(f2) = {f1, f2, f3, f6}, sA(f4) = {f4, f5}, sA(f5) = {f4, f5, f6}, sA(f6) = {f2, f5, f6}}

This covering is not genuine since, for instance, sA(f1) = sA(f3) ⊂ sA(f2). Let us now take the subfamily of attributes D = {Price, Rooms} and the induced similarity relation. The resulting covering of X is

γ(D) = {sD(f1) = {f1, f2, f3}, sD(f2) = {f1, f2, f3, f6}, sD(f3) = {f1, f2, f3, f4}, sD(f4) = {f3, f4, f5, f6}, sD(f5) = {f4, f5, f6}, sD(f6) = {f2, f4, f5, f6}}.

Also this covering is not genuine since, for instance, sD(f1) ⊂ sD(f2). It is easy to see that γ(A) ⪯ γ(D), but it is not true that γ(A) ⊑ γ(D): in fact, there is no collection of subsets sA(fi) ∈ γ(A) whose set union is sD(f4) = {f3, f4, f5, f6} ∈ γ(D).

On the coverings generated from an incomplete information system we can use the following pointwise binary relation [21], which corresponds to the generalization to incomplete information systems of the formulation (por3) (1) of the standard order relation on partitions; for A, B ⊆ Att we define:

γ(A) ≤s γ(B)   iff   ∀ x ∈ X, sA(x) ⊆ sB(x)   (56)
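The similarity classes sA(x) underlying both γ(A) and the relation ≤s can be computed mechanically; the following Python sketch (ours) does so for Table 5 and reproduces the covering γ(A) of Example 14.

ROWS = {  # Table 5: (Price, Rooms, Down-Town, Furniture), '*' = missing
    "f1": ("high", "2", "yes", "*"), "f2": ("high", "*", "yes", "no"),
    "f3": ("*", "2", "yes", "no"),   "f4": ("low", "*", "no", "no"),
    "f5": ("low", "1", "*", "no"),   "f6": ("*", "1", "yes", "*"),
}

def similar(x, y, attrs):
    """x S_A y iff on every chosen attribute the values agree or one is '*'."""
    return all(
        ROWS[x][i] == ROWS[y][i] or ROWS[x][i] == "*" or ROWS[y][i] == "*"
        for i in attrs)

def similarity_class(x, attrs):
    return {y for y in ROWS if similar(x, y, attrs)}

all_attrs = (0, 1, 2, 3)   # A = Att(X)
for x in sorted(ROWS):
    print(x, sorted(similarity_class(x, all_attrs)))
# e.g. s_A(f1) = [f1, f2, f3] and s_A(f2) = [f1, f2, f3, f6], as in the example.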
This is a partial order relation, and we have that

B ⊆ A implies γ(A) ≤s γ(B),   (57)
but in general γ(A) ≤s γ(B) does not imply π(γ(A)) ⪯ π(γ(B)), as illustrated in the following example.

Example 15. Let us consider the incomplete information system of Table 5 from the previous Example 14. Let us consider the whole collection of attributes A = Att(X) and its induced covering γ(A), previously illustrated. As for the granules generated by the completion γc(A) of γ(A), according to the procedure described above, in the present case we have π(γ(A)) = {grA(f1) = grA(f3) = {f1, f3}, grA(f2) = {f2}, grA(f4) = {f4}, grA(f5) = {f5}, grA(f6) = {f6}}. Let us now consider the subfamily of attributes B = {Price, Down-Town, Furniture}. As for A = Att(X), let us consider the induced similarity relation and the resulting covering of X: γ(B) = {sB(f1) = sB(f2) = {f1, f2, f3, f6}, sB(f3) = sB(f6) = {f1, f2, f3, f5, f6}, sB(f4) = {f4, f5}, sB(f5) = {f3, f4, f5, f6}}. Trivially we have γ(A) ≤s γ(B). The partition generated by the completion γc(B) of γ(B), according to the same procedure, is in this example π(γ(B)) = {grB(f1) = grB(f2) = {f1, f2}, grB(f3) = grB(f6) = {f3, f6}, grB(f4) = {f4}, grB(f5) = {f5}}. Let us observe that there is no ordering relation of any kind between the two partitions π(γ(A)) = {grA(fi)} and π(γ(B)) = {grB(fj)}, although these partitions have been generated from the same information system starting from two families of attributes B and A = Att with B ⊂ A = Att. The same happens when considering the quasi–orderings (49) and (50): for instance, neither γ(A) ⪯ γ(B) nor γ(A) ⊑ γ(B) implies π(γ(A)) ⪯ π(γ(B)).

Let us now see what happens when considering the partial order relation ≤W of equation (52) in the case of coverings induced from an incomplete information system. Let us start from an example:

Example 16. Let us consider the incomplete information system illustrated in Table 6. Let us take the whole set of attributes (i.e., A = Att(X)) and the induced similarity relation. The resulting covering is γ(A) = {sA(p1) = {p1, p4, p9, p10}, sA(p2) = {p2, p9}, sA(p3) = {p3}, sA(p4) = {p4, p1, p9}, sA(p5) = {p5}, sA(p6) = {p6, p10}, sA(p7) = {p7}, sA(p8) = {p8}, sA(p9) = {p9, p1, p2, p4}, sA(p10) = {p10, p1, p6}}. Let us now consider the subset of attributes B = {Fever, Headache, Dizziness, BloodPressure} and the induced similarity relation. In this case the covering is γ(B) = {sB(p1) = {p1, p4, p8, p9, p10}, sB(p2) = {p2, p9}, sB(p3) = {p3}, sB(p4) = {p4, p1, p7, p9}, sB(p5) = {p5}, sB(p6) = {p6, p10}, sB(p7) = {p7, p2, p4, p9, p10}, sB(p8) = {p8, p1, p10}, sB(p9) = {p9, p1, p2, p4, p7}, sB(p10) = {p10, p1, p6, p7, p8}}. Thus we have B ⊆ A and γ(A) ⪯ γ(B), but we do not have γ(A) ≤W γ(B). In fact we can see that, for instance, sA(p9) ∩ sB(p1) ≠ ∅, but sA(p9) ⊈ sB(p1).
Table 6. Medical incomplete information system

Patient  Fever  Headache  Dizziness  Blood Pressure  Chest Pain
p1       low    yes       yes        *               yes
p2       high   *         yes        low             yes
p3       *      no        no         low             no
p4       low    *         yes        low             yes
p5       low    yes       no         low             no
p6       high   yes       no         *               yes
p7       *      no        yes        *               no
p8       *      yes       yes        high            no
p9       *      *         yes        low             yes
p10      *      *         *          high            yes
This means that, given two families of attribute sets A and B from an incomplete information system, with corresponding induced coverings γ(A) and γ(B) respectively, the condition B ⊆ A unfortunately does not imply γ(A) ≤W γ(B).

4.3 Entropies and Co–Entropies of Coverings: The "Global" Approach
In [5] one can find the following definitions of entropies, with corresponding co–entropies, for coverings, whose restrictions to partitions induce the standard entropy and co–entropy. Let us consider a covering γ = {B1, B2, ..., BN} of the universe X. Let us start from an entropy based on a probability distribution in which the probability of an elementary event is represented by p(Bi) = m(Bi)/|X|, where

m(Bi) = Σ_{x∈X} χ_{Bi}(x) / ( Σ_{j=1}^{N} χ_{Bj}(x) )

(χ_{Bi} being the characteristic functional of the set Bi for any point x ∈ X; in other words, every point distributes its unit weight uniformly among the blocks which contain it; see also [5,2,6,12]). The resulting entropy is then:

H(γ) = − Σ_{i=1}^{N} p(Bi) log p(Bi)   (58)
The corresponding co–entropy, which complements the entropy with respect to the quantity log |X|, is

E(γ) = (1/|X|) Σ_{i=1}^{N} m(Bi) log m(Bi)   (59)
A drawback of this co–entropy is that it can assume negative values, due to the fact that the quantities m(Bi) can also lie in the real interval [0, 1] (see [5,6]). Let us now describe a second approach to entropy and co–entropy for coverings. We can define the total outer measure of X induced from γ as m∗(γ) := Σ_{i=1}^{N} |Bi| ≥ |X| = mc(X) > 0. An alternative probability of occurrence of the elementary event Bi from the covering γ can be described as p∗(Bi) := |Bi|/m∗(γ), obtaining
in this way the vector p∗(γ) := ⟨p∗(B1), p∗(B2), ..., p∗(BN)⟩. p∗(γ) is a probability distribution since, trivially: (1) every p∗(Bi) ≥ 0; (2) Σ_{i=1}^{N} p∗(Bi) = 1. Hence we can define a second entropy of a covering as

H∗(γ) = log m∗(γ) − (1/m∗(γ)) Σ_{i=1}^{N} |Bi| log |Bi|   (60)

and the corresponding co–entropy as

E∗(γ) := (1/m∗(γ)) Σ_{i=1}^{N} |Bi| log |Bi|   (61)

This co–entropy complements the entropy (60) with respect to the quantity log m∗(γ), which depends on the choice of the covering γ. This fact represents a potential disadvantage when one studies the behavior of the entropy and co–entropy with respect to some quasi–ordering of coverings: showing an anti–monotonic behavior for the entropy would not automatically lead to the desired behavior of the co–entropy. Thus we now introduce a new definition of co–entropy, starting from the requirement that it complement the entropy (60) with respect to the fixed quantity log |X|:

Ec∗(γ) := log |X| − log m∗(γ) + (1/m∗(γ)) Σ_{i=1}^{N} |Bi| log |Bi|   (62)
Let us now see a third approach to entropy and co–entropy for coverings, based on the probability of the elementary event Bi defined as pLX(Bi) := |Bi|/|X| (see [5,2]). In this case one can observe that the probability vector pLX(γ) := (pLX(B1), pLX(B2), ..., pLX(BN)) does not define a probability distribution, since in general Σ_{i=1}^{N} pLX(Bi) ≥ 1. Keeping in mind this characteristic, we can define the following pseudo–entropy (originally denoted by H^(g)_LX):

H^(g)(γ) = m∗(γ) · (log |X| / |X|) − (1/|X|) Σ_{i=1}^{N} |Bi| log |Bi|   (63)

The following quantity (originally denoted by E^(g)_LX) was firstly introduced as co–entropy:

E^(g)(γ) := (1/|X|) Σ_{i=1}^{N} |Bi| log |Bi|   (64)

But in this case also we have the drawback of a co–entropy that complements the entropy with respect to a quantity, m∗(γ) · log |X| / |X|, which depends on the covering γ. Again, in order to avoid this unpleasant situation, let us define another co–entropy, such that it complements the entropy (63) with respect to log |X|:

Ec^(g)(γ) := ((|X| − m∗(γ)) / |X|) · log |X| + (1/|X|) Σ_{i=1}^{N} |Bi| log |Bi|   (65)
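The three families of global (co-)entropies can be checked numerically; the following Python sketch (ours) implements the measure m(Bi), the co-entropy (59), and the entropies (60) and (63), and reproduces the value E(γ) = 2.05838 of Example 17 below.

from math import log2

def measures(covering):
    """m(B_i): each point distributes weight 1/(number of blocks containing it)."""
    mult = {}
    for b in covering:
        for x in b:
            mult[x] = mult.get(x, 0) + 1
    return [sum(1 / mult[x] for x in b) for b in covering]

def E(universe, covering):            # co-entropy (59)
    return sum(m * log2(m) for m in measures(covering)) / len(universe)

def H_star(covering):                 # entropy (60)
    m_star = sum(len(b) for b in covering)
    return log2(m_star) - sum(len(b) * log2(len(b)) for b in covering) / m_star

def H_g(universe, covering):          # pseudo-entropy (63)
    n = len(universe)
    m_star = sum(len(b) for b in covering)
    return m_star * log2(n) / n - sum(len(b) * log2(len(b)) for b in covering) / n

X = set(range(1, 16))
gamma = [{1, 4, 5}, {2, 4, 5}, {3, 4, 5}, {14, 15}, set(range(4, 14))]
print(round(E(X, gamma), 5))          # 2.05838, as in Example 17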
Isotonic Behavior of Global Entropies and Co–Entropies of Coverings. The following example illustrates the non-isotonicity of the co–entropy E of equation (59) with respect to both quasi–orderings ⪯ and ⊑ of equations (49) and (50).

Example 17. In the universe X = {1, 2, ..., 15}, let us consider the two genuine coverings γ = {C1 = {1,4,5}, C2 = {2,4,5}, C3 = {3,4,5}, C4 = {14,15}, C5 = {4,5,...,13}} and δ1 = {D1 = {1,4,5} = C1, D2 = {2,4,5} = C2, D3 = {3,4,...,13} = C3 ∪ C5, D4 = {4,5,...,14,15} = C4 ∪ C5}. Trivially, γ ≺ δ1 and γ ⊑ δ1. In this case E(γ) = 2.05838 < 2.18897 = E(δ1), as desired. In the same universe, let us now take the genuine covering δ2 = {F1 = {1,4,5,...,13} = C1 ∪ C5, F2 = {2,4,5,...,13} = C2 ∪ C5, F3 = {3,4,...,13} = C3 ∪ C5, F4 = {4,5,...,14,15} = C4 ∪ C5}. Trivially, γ ≺ δ2 and γ ⊑ δ2. Unfortunately, in this case we obtain E(γ) = 2.05838 > 1.91613 = E(δ2).

As for the behavior of the entropies H, H∗ and H^(g) of equations (58), (60) and (63), and of the co–entropies E, E∗ and E^(g) of equations (59), (61) and (64), with respect to the quasi–orderings (49) and (50) for coverings, the reader can find a deep analysis and various examples in [5,2]. We here only recall that the entropies H, H∗ and H^(g) for coverings behave neither isotonically nor anti–tonically with respect to these two quasi–order relations, even in the more favorable context of genuine coverings; the same unfortunately happens for the co–entropies E, E∗ and E^(g). Let us now observe what happens with the order relation ≤W described in (52) on Γ(X), starting from two examples.

Example 18. In the universe X = {1, 2, 3, ..., 25}, let us consider the two genuine coverings γ1 = {{1}, {2}, {5}, {3,4,24,25}, {6,7,...,13,15,16,...,23}, {14}, {24,25}} and δ1 = {{1,6,7,...,13,15,16,...,23}, {2,3,4,24,25}, {3,4,5,24,25}, {6,7,...,23}}. Trivially, γ1 ≤W δ1. From the results illustrated in Table 7, we can observe that in this case the co–entropy E behaves anti–tonically, whereas the other co–entropies behave isotonically.

Table 7. Entropies and co–entropies for γ1 and δ1, with γ1 ≤W δ1
      E        H        E∗       Ec∗      H∗       E^(g)    Ec^(g)   H^(g)
γ1    2.96967  1.67419  2.94396  2.83293  1.81093  3.17947  2.80796  1.83589
δ1    2.84882  1.79504  3.76819  2.88848  1.75538  6.93346  3.03262  1.61123
Example 19. Let us now consider the universe X = {1, 2, 3, ..., 30} and the genuine coverings γ2 = {{1,4}, {1,5}, {6}, {7,...,14}, {2,3,15,16,...,28,29}, {15,16,...,29,30}} and δ2 = {{1,4,6}, {1,5}, {6,7,...,13,14}, {2,3,15,16,...,29,30}}. Trivially, γ2 ≤W δ2. Except for the co–entropies Ec∗ and Ec^(g), in this example we observe the opposite behavior for the other co–entropies: in fact
we now have that the co–entropy E behaves isotonically, while the other co–entropies behave anti–tonically, as illustrated in Table 8.

Table 8. Entropies and co–entropies for γ2 and δ2, with γ2 ≤W δ2

      E        H        E∗       Ec∗      H∗       E^(g)    Ec^(g)   H^(g)
γ2    2.76179  2.14510  3.51058  2.89391  2.01298  5.38290  2.76589  2.14100
δ2    3.47265  1.43424  3.44821  3.35511  1.55179  3.67810  3.35097  1.55592
Let us also observe that the entropies H∗ and H^(g) behave anti–tonically in both situations.

Example 20. In the universe X = {1, 2, ..., 25}, let us consider the covering γ3 = {{1,2,6,7,8,9,10,11,16,17,18,19,20,21,22,23}, {13,14,15,24}, {12,13,24}, {3,4,5,25}} and the induced partition π(γ3) = {{1,2,6,7,8,9,10,11,16,17,18,19,20,21,22,23}, {13,24}, {14,15}, {12}, {3,4,5,25}}. Trivially we have π(γ3) ≤W γ3 (this happens in general, as shown by Wierman). The entropies are: H∗(γ3) = 1.61582 > 1.60386 = H∗(π(γ3)); H^(g)(γ3) = 1.62517 > 1.60386 = H^(g)(π(γ3)). Thus, contrary to the previous examples, in this case we obtain an isotonic behavior for these two entropies.

From these examples we can make some observations. The entropy (58) and co–entropy (59), although very interesting for the way the measure m(Bi) describes each set of a covering, unfortunately behave neither isotonically nor anti–tonically with respect to any of the considered ordering (52) and quasi–orderings (49) and (50) (see the examples in [5,2]). The other two entropies (60) and (63) behave neither isotonically nor anti–tonically with respect to the quasi–orderings (49) and (50), as can be seen from the examples in [5,2]; as far as the ordering (52) is concerned, we found examples in which these entropies behave anti–tonically, and a simple counterexample in which they behave isotonically. In this simple counterexample we compared a covering with its induced partition and, although the generated partition is clearly finer than the covering, the entropies H∗ and H^(g) of the partition π(γ3), according to (60) and (63), are smaller than the entropies H∗ and H^(g) of the covering γ3 itself.

4.4 Pointwise Approaches to the Entropy and Co–Entropy of Coverings
In the section dedicated to partitions we have described a pointwise approach to the entropy and co–entropy of partitions. We have anticipated that the aim was to better understand the Liang–Xu (LX) approach to entropy in the case of incomplete information systems [21]. In the following subsections we will describe and analyze the Liang–Xu approach and some variations of it, including an approach described in [20].
Pointwise Lower and Upper Entropy and Co–Entropy of Coverings. Making use of the lower granules γl(x) and the upper granules γu(x), for x ranging over the space X, of a given covering γ, it is possible to introduce two (pointwise defined) LX entropies (resp., co–entropies), named the lower and upper LX entropies (resp., co–entropies) respectively (LX since we generalize to the covering context the Liang–Xu approach to quantifying information in the case of incomplete information systems; see [21]), according to the following:

HLX(γj) := − Σ_{x∈X} (|γj(x)|/|X|) log (|γj(x)|/|X|)   for j = l, u   (66a)
ELX(γj) := (1/|X|) Σ_{x∈X} |γj(x)| log |γj(x)|   for j = l, u   (66b)
with the relationship (compare with the case of partitions (15)):

HLX(γj) + ELX(γj) = (Σ_{x∈X} |γj(x)| / |X|) · log |X|

Since for every point x ∈ X the set theoretic inclusion γl(x) ⊆ γu(x) holds, with 1 ≤ |γl(x)| ≤ |γu(x)| ≤ |X|, it is possible to introduce the rough co–entropy approximation of the covering γ as the ordered pair of non–negative numbers rE(γ) := (ELX(γl), ELX(γu)), with 0 ≤ ELX(γl) ≤ ELX(γu) ≤ |X| · log |X|. For any pair of coverings γ and δ of X such that γ ⊴ δ, one has that ELX(δl) ≤ ELX(γl) ≤ (???) ≤ ELX(γu) ≤ ELX(δu), and so γ ⊴ δ implies rE(γ) ⊴ rE(δ), which expresses a condition of isotonicity of lower–upper pairs of co–entropies relative to the quasi–ordering ⊴ on coverings [5,2]. As a final remark, recalling that in the rough approximation space of coverings the partitions are the crisp sets, since πl = π = πu for any π ∈ Π(X), the pointwise entropies (66a) and co–entropies (66b) collapse into the pointwise entropy and co–entropy for partitions described in subsection 2.7.

Pointwise Entropy and Co–Entropy of Coverings in the Case of Incomplete Information Systems. We here illustrate two pointwise entropies, with corresponding co–entropies, of coverings generated by a similarity relation (54). We start from the entropy and co–entropy defined in analogy with (66) (see also [21]):

HLX(γ(A)) := − Σ_{x∈X} (|sA(x)|/|X|) log (|sA(x)|/|X|)   (67a)
ELX(γ(A)) := (1/|X|) Σ_{x∈X} |sA(x)| log |sA(x)|   (67b)

with the following relationship:

HLX(γ(A)) + ELX(γ(A)) = (Σ_{x∈X} |sA(x)| / |X|) · log |X|
These just introduced entropy and co–entropy, when applied to complete information systems, reduce to the pointwise entropy and co–entropy of partitions of equations (33) and (31), but not to the standard partition entropy and co–entropy expressed by equations (14) and (13). Another pointwise entropy has been introduced in [20]; it is described by the following equation:

HLSLW(γ(A)) := − (1/|X|) Σ_{x∈X} log (|sA(x)|/|X|)   (68)

Since we are also interested in co–entropies as measures of granularity, we here introduce the co–entropy corresponding to (68):

ELSLW(γ(A)) := (1/|X|) Σ_{x∈X} log |sA(x)|   (69)

Moreover, we obtain:

HLSLW(γ(A)) + ELSLW(γ(A)) = log |X|   (70)

This entropy and the corresponding co–entropy, in the complete case, reduce to the standard entropy and co–entropy of partitions of equations (14) and (13).
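Both pointwise families are immediate to implement once the similarity classes are available; the Python sketch below (ours) evaluates (67a)-(67b) and (68)-(69) on the covering γ(A) of Example 14 (objects f1, ..., f6 encoded as 1, ..., 6) and checks the identity (70) numerically.

from math import log2

def lx(classes, n):
    """(H_LX, E_LX) of (67a)-(67b); classes = [s_A(x) for x in X], n = |X|."""
    h = -sum(len(s) / n * log2(len(s) / n) for s in classes)
    e = sum(len(s) * log2(len(s)) for s in classes) / n
    return h, e

def lslw(classes, n):
    """(H_LSLW, E_LSLW) of (68)-(69)."""
    h = -sum(log2(len(s) / n) for s in classes) / n
    e = sum(log2(len(s)) for s in classes) / n
    return h, e

# Similarity classes s_A(f1), ..., s_A(f6) of gamma(A) from Example 14:
classes = [{1, 2, 3}, {1, 2, 3, 6}, {1, 2, 3}, {4, 5}, {4, 5, 6}, {2, 5, 6}]
h, e = lslw(classes, 6)
print(abs(h + e - log2(6)) < 1e-12)   # True: identity (70)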
These entropy and corresponding co–entropy in the complete case reduce to the standard entropy and co–entropy of partitions of equations (14) and (13). Isotonic Behavior of Pointwise Entropy and Co–Entropy in the Case of Incomplete Information Systems. In the following propositions it will be shown that the co–entropy (67b) behaves anti–tonically with respect to set inclusion of subfamilies of attributes. Moreover, from (55) and (57) we obtain that this co–entropy is isotonic with respect to the quasi–ordering on coverings of equation (49) and to the order relation ≤s of equation (56). A further result is that in this context of incomplete information systems, the co–entropy (67b) is also isotonic with respect to the quasi ordering of equation (50) [2]. Proposition 9. [21] Let X, Att, F be an incomplete information system and let A, B ⊆ Att be two families of attributes with corresponding coverings of X be γ(A) and γ(B). Then γ(A) <s γ(B)
implies
ELX (γ(A)) ≤ ELX (γ(B)).
As an immediate consequence we have:

Corollary 2. [21] In an incomplete information system ⟨X, Att, F⟩ let us consider two families of attributes A, B ⊆ Att, and let the induced coverings of X be γ(A) and γ(B). The following holds:

B ⊆ A implies ELX(γ(A)) ≤ ELX(γ(B)).

From these two properties we obtain the following further result, related to the quasi–order relation ⊑ on Γ(X).
Proposition 10. Given an incomplete information system ⟨X, Att, F⟩ and two similarity relations defined on the objects of X for the two subfamilies of attributes A and B (A, B ⊆ Att), let the induced coverings of X be respectively γ(A) and γ(B). For any A, B ⊆ Att the following holds:

B ⊆ A and γ(A) ⊑ γ(B) implies ELX(γ(A)) ≤ ELX(γ(B)).   (71)
Moreover, if ∃ sB(xk) = sA(xk1) ∪ sA(xk2) ∪ ... ∪ sA(xkp) with p ≥ 2, the strict anti–tonicity ELX(γ(A)) < ELX(γ(B)) holds.

Proof. Let the two coverings be respectively γ(A) = {sA(x1), sA(x2), ..., sA(xN)} and γ(B) = {sB(x1), sB(x2), ..., sB(xN)}. We have γ(A) ⊑ γ(B); hence ∀ sB(xi) ∈ γ(B), ∃ {sA(xi1), sA(xi2), ..., sA(xip)} ⊆ γ(A) s.t. sB(xi) = sA(xi1) ∪ sA(xi2) ∪ ... ∪ sA(xip). Let us consider the simple case in which sB(xk) = sA(xk1) ∪ sA(xk2) ∪ ... ∪ sA(xkp) with p ≥ 2, and sB(xj) = sA(xj) for any j ≠ k. Then we have that

ELX(γ(B)) = (1/|X|) Σ_{xi∈X} |sB(xi)| · log |sB(xi)|
= (1/|X|) [ Σ_{xj∈X, j≠k} |sA(xj)| · log |sA(xj)| + |sA(xk1) ∪ ... ∪ sA(xkp)| · log |sA(xk1) ∪ ... ∪ sA(xkp)| ]

Since |sA(xk1) ∪ ... ∪ sA(xkp)| · log |sA(xk1) ∪ ... ∪ sA(xkp)| > |sA(xk)| log |sA(xk)|, we obtain that ELX(γ(A)) < ELX(γ(B)). Hence in general, given two coverings γ(A) and γ(B) induced by two similarity relations based respectively on the subfamilies of attributes A and B, with B ⊆ A, we have ELX(γ(A)) ≤ ELX(γ(B)). In particular, when p ≥ 2 the strict anti–tonicity ELX(γ(A)) < ELX(γ(B)) holds.

Similarly, for the entropy (68) the following holds [20]:

Proposition 11. Given an incomplete information system ⟨X, Att, F⟩, two subfamilies of attributes A and B (A, B ⊆ Att) and the induced coverings of X, respectively γ(A) and γ(B), the following holds:
γ(A) ≤s γ(B) implies HLSLW(γ(B)) ≤ HLSLW(γ(A)).   (72)

Moreover, if ∃ k ∈ {1, 2, ..., |X|} such that sA(xk) ⊂ sB(xk), the strict anti–tonicity HLSLW(γ(B)) < HLSLW(γ(A)) holds. This result implies that:

Corollary 3. Let ⟨X, Att, F⟩ be an incomplete information system, and let A, B ⊆ Att be two subfamilies of attributes with B ⊆ A. Then it follows that:

B ⊆ A implies HLSLW(γ(B)) ≤ HLSLW(γ(A)).
In analogy with Proposition 9, Corollary 2 and Proposition 10, and thanks to the relationship (70), one can easily prove the following three properties of the co–entropy (69):

γ(A) ≤s γ(B) implies ELSLW(γ(A)) ≤ ELSLW(γ(B))
B ⊆ A implies ELSLW(γ(A)) ≤ ELSLW(γ(B))
B ⊆ A and γ(A) ⊑ γ(B) implies ELSLW(γ(A)) ≤ ELSLW(γ(B))
5 Conclusions
We have analyzed the approach to information entropy in the case of abstract discrete probability distributions, and then moved to the entropy of partitions of a universe X. We have then illustrated a first extension of information entropies to the case of incomplete information systems, through the so-called "partial" partitions. Finally, we extended the approach of the entropy of partitions to the covering case. As concerns the study of entropies (resp., co–entropies) for coverings, we have shown the important result that allows one to compare two coverings which are in the ≤W order relation (52) through the entropy and co–entropy of the induced partitions. On the other hand, the global entropies and co–entropies for coverings behave neither isotonically nor anti–tonically with respect to the (quasi) orderings illustrated here. But in the context of incomplete information systems one can use one of the two pointwise entropies, with corresponding co–entropies, which behave well with respect to all the (quasi) order relations on coverings.
References

1. Ash, R.B.: Information theory. Dover Publications, New York (1990) (originally published by John Wiley & Sons, New York (1965))
2. Bianucci, D., Cattaneo, G.: Monotonic behavior of entropies and co-entropies for coverings with respect to different quasi-orderings. In: Kryszkiewicz, M., Peters, J.F., Rybiński, H., Skowron, A. (eds.) RSEISP 2007. LNCS (LNAI), vol. 4585, pp. 584–593. Springer, Heidelberg (2007)
3. Bianucci, D., Cattaneo, G.: On non-pointwise entropies of coverings: Relationship with anti-monotonicity. In: Wang, G., Li, T., Grzymala-Busse, J.W., Miao, D., Skowron, A., Yao, Y. (eds.) RSKT 2008. LNCS (LNAI), vol. 5009, pp. 387–394. Springer, Heidelberg (2008)
4. Bianucci, D., Cattaneo, G., Ciucci, D.: Entropies and co–entropies for incomplete information systems. In: Yao, J., Lingras, P., Wu, W.-Z., Szczuka, M.S., Cercone, N.J., Ślęzak, D. (eds.) RSKT 2007. LNCS (LNAI), vol. 4481, pp. 84–92. Springer, Heidelberg (2007)
5. Bianucci, D., Cattaneo, G., Ciucci, D.: Entropies and co–entropies of coverings with application to incomplete information systems. Fundamenta Informaticae 75, 77–105 (2007)
6. Bianucci, D., Cattaneo, G., Ciucci, D.: Information entropy and co–entropy of crisp and fuzzy granulations. In: Masulli, F., Mitra, S., Pasi, G. (eds.) WILF 2007. LNCS (LNAI), vol. 4578, pp. 9–19. Springer, Heidelberg (2007)
7. Bianucci, D.: Rough entropies for complete and incomplete information systems. Ph.D. thesis (2007)
8. Birkhoff, G.: Lattice theory, 3rd edn. American Mathematical Society Colloquium Publications, vol. XXV. American Mathematical Society, Providence (1967)
9. Beaubouef, T., Petry, F.E., Arora, G.: Information–theoretic measures of uncertainty for rough sets and rough relational databases. Journal of Information Sciences 109, 185–195 (1998)
10. Cattaneo, G., Ciucci, D.: Algebraic structures for rough sets. In: Dubois, D., Grzymala-Busse, J.W., Inuiguchi, M., Polkowski, L. (eds.) Transactions on Rough Sets II. LNCS, vol. 3135, pp. 208–252. Springer, Heidelberg (2004)
11. Cattaneo, G., Ciucci, D.: Investigation about time monotonicity of similarity and preclusive rough approximations in incomplete information systems. In: Tsumoto, S., Słowiński, R., Komorowski, J., Grzymala-Busse, J.W. (eds.) RSCTC 2004. LNCS (LNAI), vol. 3066, pp. 38–48. Springer, Heidelberg (2004)
12. Cattaneo, G., Ciucci, D., Bianucci, D.: Entropy and co-entropy of partitions and coverings with applications to roughness theory. In: Bello, R., Falcon, R., Pedrycz, W., Kacprzyk, J. (eds.) Granular Computing: at the Junction of Fuzzy Sets and Rough Sets. Studies in Fuzziness and Soft Computing, vol. 224, pp. 55–77. Springer, Heidelberg (2008)
13. Hamming, R.W.: Coding and information theory. Prentice–Hall, New Jersey (1986) (second edition of the 1980 first edition)
14. Hartley, R.V.L.: Transmission of information. The Bell System Technical Journal 7, 535–563 (1928)
15. Huang, B., He, X., Zhong, X.Z.: Rough entropy based on generalized rough sets covering reduction. Journal of Software 15, 215–220 (2004)
16. Khinchin, A.I.: Mathematical foundations of information theory. Dover Publications, New York (1957); translation of two papers which appeared in Russian in Uspekhi Matematicheskikh Nauk 3, 3–20 (1953); 1, 17–75 (1965)
17. Komorowski, J., Pawlak, Z., Polkowski, L., Skowron, A.: Rough sets: A tutorial. In: Pal, S., Skowron, A. (eds.) Rough Fuzzy Hybridization, pp. 3–98. Springer, Singapore (1999)
18. Kryszkiewicz, M.: Rough set approach to incomplete information systems. Information Sciences 112, 39–49 (1998)
19. Liang, J., Shi, Z.: The information entropy, rough entropy and knowledge granulation in rough set theory. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 12, 37–46 (2004)
20. Liang, J., Shi, Z., Li, D., Wierman, M.J.: Information entropy, rough entropy and knowledge granulation in incomplete information systems. International Journal of General Systems 35(6), 641–654 (2006)
21. Liang, J., Xu, Z.: Uncertainty measure of randomness of knowledge and rough sets in incomplete information systems. In: Proc. of the 3rd World Congress on Intelligent Control and Automata, vol. 4, pp. 2526–2529 (2000)
22. Pawlak, Z.: Information systems - theoretical foundations. Information Systems 6, 205–218 (1981)
23. Pawlak, Z.: Rough sets. Int. J. Inform. Comput. Sci. 11, 341–356 (1982)
24. Pawlak, Z.: Rough sets: Theoretical aspects of reasoning about data. Kluwer Academic Publishers, Dordrecht (1991)
25. Reza, F.M.: An introduction to information theory. Dover Publications, New York (1994) (originally published by McGraw-Hill, New York (1961))
26. Shannon, C.E.: A mathematical theory of communication. The Bell System Technical Journal 27, 379–423, 623–656 (1948)
27. Taylor, A.E.: General theory of functions and integration. Dover Publications, New York (1985)
28. Wierman, M.J.: Measuring uncertainty in rough set theory. International Journal of General Systems 28, 283–297 (1999)
29. Zhao, Y., Luo, F., Wong, S.K.M., Yao, Y.Y.: A general definition of an attribute reduct. In: Yao, J., Lingras, P., Wu, W.-Z., Szczuka, M.S., Cercone, N.J., Ślęzak, D. (eds.) RSKT 2007. LNCS, vol. 4481, pp. 101–108. Springer, Heidelberg (2007)
Lattices with Interior and Closure Operators and Abstract Approximation Spaces

Gianpiero Cattaneo and Davide Ciucci

Dipartimento di Informatica, Sistemistica e Comunicazione, Università di Milano – Bicocca, Viale Sarca 336–U14, I–20126 Milano, Italia
{cattang,ciucci}@disco.unimib.it
Abstract. The non–equational notion of abstract approximation space for roughness theory is introduced, and its relationship with the equational definition of lattice with Tarski interior and closure operations is studied. Their categorical isomorphism is proved, and the role of the Tarski interior and closure in an algebraic semantics of an S4–like model of modal logic is widely investigated. A hierarchy of three particular models of this approach to roughness, based on a concrete universe, is described, listed from the strongest model to the weakest one: (1) the partition spaces, (2) the topological spaces by open basis, and (3) the covering spaces.
1 Introduction
The notion of approximation space in the context of roughness theory has been introduced in [5] (see also [8]) in order to describe imprecise, uncertain concepts which can be approximated from the bottom and from the top by crisp (sharp) ones. The aim was to give an abstract axiomatization of the Pawlak rough set theory based on a concrete set as the universe of discourse [43,44,45]. On the other hand, another theoretical approach to rough sets is the topological one (see for instance [67,49,59,60,61,48]). A closure (resp., interior) operator gives an upper (resp., lower) approximation of a generic element through a closed (resp., open) one. In particular, here we consider Tarski closure and interior operators defined on de Morgan lattices (and hence also on Boolean algebras). We prove that abstract approximation spaces are equivalent to de Morgan lattices with a Tarski closure operation [10,11]. Once the theoretical framework has been set, we can then analyze the concrete models. In the present work, some models based on partitions and coverings arising from information systems are examined, and it is shown how they give rise to approximation spaces. In these concrete cases a further step can be performed, introducing a particular Galois connection from the power set of objects of an information system to itself. It is well known that any Galois connection generates a closure operation.
68
G. Cattaneo and D. Ciucci
Here we relate this closure operation to a preclusive (i.e., irreflexive and symmetric) binary relation # on a universe X, as treated in [4,7,8,6], proving that it is equivalent to the Tarski closure obtained by the induced similarity relation R = ¬#. We underline that this work is mainly a collection of results spread in several works, in general discussed without proofs owing to the lack of space characterizing the contributions to meetings. Here a unified treatment is presented with the complete proofs of the main results.
2
Abstract Approximation Spaces
In this section we introduce the notion of Approximation Space as an abstraction of the Pawlak rough sets concrete approach. In the description of this abstract formalism, imprecise concepts, with their associated lower and upper knowledge, are mathematically realized by points of an abstract set, contrary to their concrete models in which they are realized by subsets of a given universe. In this abstract context some criteria must be given in order to “approximate” any vague concept by a pair consisting of a unique lower crisp concept and a unique upper crisp concept. Since we want that these approximations are the best possible inside the classes of corresponding crisp concepts, it is necessary to have also a criterium to state how an approximation is sufficiently good. This is realized by a partial order relation ≤ on the set of all approximable elements which mathematically describes the fact that an element a (resp. b) is a lower (resp., an upper ) approximation of the element b (resp., a), written a ≤ b. This partial ordering must be such that an algebraic structure of lattice is induced by it. Then, a “lower approximation map” and an “upper approximation map” furnish, respectively, the best approximations from the bottom and from the top of any element with respect to the involved partial ordering and according to the following definition. Definition 1. An abstract approximation space (according to [5], [8]) is a system R := Σ, L(Σ), U(Σ), where: (1) The set Σ has a structure Σ, ∧, ∨, 0, 1 of a lattice with respect to the partial order relation a ≤ b iff a = a ∧ b (or, equivalently, b = a ∨ b). This lattice is bounded by the least element 0 (∀a ∈ Σ, 0 ≤ a) and the greatest element 1 (∀a ∈ Σ, a ≤ 1). Elements from Σ are interpreted as concepts, data, etc., and are called the elements which can be approximated ( approximable elements); (2) L(Σ) is a sub-poset of Σ (i.e., a subset of Σ equipped with the partial order relation ≤ of Σ) containing the element 1 ∈ L(Σ), whose elements are said to be lower (also, inner) crisp; (3) U(Σ) is a sub-poset of Σ (i.e., a subset of Σ equipped with the partial order relation ≤ of Σ) containing the element 0 ∈ U(Σ), whose elements are said to be upper (also, closure) crisp.
Lattices with Interior and Closure Operators
69
The structure satisfies the following conditions. (Ax1) For any element a ∈ Σ which can be approximated, there exists (at least) one element l(a), called the lower (also inner) approximation of a, such that: (In1) l(a) ≤ a; (In2) l(a) ∈ L(Σ); (In3) ∀β ∈ L(Σ), β ≤ a ⇒ β ≤ l(a). (Ax2) For any element a ∈ Σ which can be approximated, there exists (at least) one element u(a), called the upper (also closure) approximation of a, such that: (Up1) a ≤ u(a); (Up2) u(a) ∈ U(Σ); (Up3) ∀γ ∈ U(Σ), a ≤ γ ⇒ u(a) ≤ γ. Let us stress that L(Σ) is the collection of lower crisp elements and U(Σ) the collection of upper crisp elements of an approximation space based on the set of approximable elements described by the lattice Σ. In general these two subposets of Σ are different between them and the meaning of (Ax1) (resp., (Ax2)) is that l(a) (resp., u(a)) furnishes the best approximation of the “imprecise” approximable element a from the bottom (resp., top) by lower (resp., upper) crisp elements. Remark 1. In the standard literature on rough sets, it is usually preferred the term definable element instead of the here adopted term of crisp element. However, the term definable has not a clear semantic and it refers to the ideas of intension and extension which are not relevant here (see [63] for a discussion about this argument). On the other hand, the term “crisp” is more coherent with our approach, where the lower and upper approximations give a valuation inside two “a priori” singled out classes of elements which are interpreted as support of some exact knowledge. In any approximation space it is trivial to prove that for any approximable element a ∈ Σ, the lower l(a) ∈ L(Σ) and the upper u(a) ∈ U(Σ) crisp elements, whose existence is assured by (Ax1) and (Ax2), are unique. Thus, it is possible to introduce in an equivalent way an approximation space as a structure R = Σ, L(Σ), U(Σ), l, u, consisting of a bounded distributive lattice Σ and two of its sub-posets L(Σ) and U(Σ), under the assumption of the existence of a lower approximation mapping l : Σ
→ L(Σ) and an upper approximation mapping u:Σ
→ U(Σ), given for any arbitrary a ∈ Σ respectively by the laws: l(a) : = max{β ∈ L(Σ) : β ≤ a}
(1a)
u(a) : = min{γ ∈ U(Σ) : a ≤ γ}.
(1b)
A brief comment about this formulation, taking into account the first condition (1a) (the second one can be discussed by a dual approach). Of course, for any element a ∈ Σ it is possible to collect all its crisp lower bounds L(a) := {β ∈ L(Σ) : β ≤ a}, but the condition that L(Σ) is a sub-poset of Σ does not assure that the greatest lower bound (g.l.b.) of this family exists. The meaning of (Ax1)
70
G. Cattaneo and D. Ciucci
is that in order to have an approximation space it is required that the system R must be such that for any element of Σ this g.l.b. exists and, further, that it is an element of L(Σ). Let us note that in presence of a complete approximation space, as we are going to define now, the existence of these two mappings according to equations (1) is assured, without any further requirement. Definition 2. An approximation space is complete iff Σ is a complete lattice and L(Σ), U(Σ) are complete join, meet sub semi–lattices of Σ, in the sense that L(Σ) (resp., U(Σ)) is closed with respect to arbitrary joins (resp., meets). Complete approximation spaces will be our frame of study when discussing of concrete examples based on the power set of a universe. In any approximation space, complete or not, the rough approximation of any element a ∈ Σ is then the lower–upper crisp pair r(a) := (l(a), u(a)) ∈ L(Σ) × U(Σ), with l(a) ≤ a ≤ u(a)
(2)
which is the image of the element a under the rough approximation mapping r:Σ
→ L(Σ) × U(Σ) described by the diagram: a ∈ ΣP PPP PPPu PPP oo o PP' o woo r l(a) ∈ L(Σ) u(a) ∈ U(Σ) OOO oo OOO o o OOO oo o o OO' wooo (l(a), u(a)) oo
l oooo
It is possible to characterize lower crisp and upper crisp elements according to the following result. Proposition 1. Let R be an approximation space. Then, (i) the element a ∈ Σ is lower crisp, i.e., a ∈ L(Σ), iff a = l(a); (ii) the element a ∈ Σ is upper crisp, i.e., a ∈ L(Σ), iff a = u(a). Proof. (i) If a = l(a) from (In2) it follows that a ∈ L(Σ). Vice versa, if a ∈ L(Σ) then, applying the (In3) to the element β = a ∈ L(Σ), the relationship a ≤ a implies a ≤ l(a); but from the (In1) it follows that a = l(a). The point (ii) is proved by duality. As shown later (see corollary 2) in any approximation space the sub-poset L(Σ) (resp., U(Σ)) of all lower (resp., upper) crisp elements turns out to be a join (resp., meet) sub semi-lattice of Σ; that is, for all a, b ∈ L(Σ) (resp., a, b ∈ U(Σ)) the corresponding lattice join (resp., meet) a ∨ b (resp., a ∧ b) in Σ is such that a ∨ b ∈ L(Σ) (resp., a ∧ b ∈ U(Σ)).
Lattices with Interior and Closure Operators
71
Following [5], an element e of Σ is said to be crisp (also exact , sharp) if and only if its lower and upper approximations coincide: l(e) = u(e), equivalently, iff its rough approximation is the trivial one r(e) = (e, e). Therefore, the collection of crisp elements from Σ consists of all elements which are simultaneously inner– upper crisp, Σc := L(Σ) ∩ U(Σ) (sometimes denoted also by LU(Σ)), which is not empty owing to the following result. Proposition 2. In any approximation space R the following holds: 0, 1 ∈ LU(Σ) (i.e., they are crisp elements). In particular, 0 ∈ L(Σ) and 1 ∈ U(Σ). Proof. The (In1) applied to the element 0 gives l(0) ≤ 0, and so necessarily l(0) = 0, but (In2) says that 0 = l(0) ∈ L(Σ). Applying the (In3) to the element a = 1 we get that for every β ∈ L(Σ) the condition β ≤ 1 implies β ≤ l(1); so, since for point (2) of definition 1 the element 1 ∈ L(Σ), we have in particular that 1 ≤ 1 implies 1 ≤ l(1), i.e., 1 = l(1) ∈ L(Σ). In a dual way we also obtain that 0, 1 ∈ U(Σ) using the three conditions (Up1)–(Up3). Let us stress that from a formal point of view, an approximation space R can be considered as the “merge” of the following two substructures. (LAS) The lower approximation space LR = Σ, L(Σ), l, consisting of the lattice Σ of approximable elements, the join sub semi-lattice L(Σ) of lower crisp elements, and the lower approximation mapping (1a). (UAS) The upper approximation space UR = Σ, U(Σ), u, consisting of the lattice Σ of approximable elements, the meet sub semi-lattice U(Σ) of upper crisp elements, and the upper approximation mapping (1b). Of course in some approximation space it may happen that the two sets of lower crisp and upper crisp sets coincide, formally, L(Σ) = U(Σ). In this case, this common lattice is just the collection of crisp elements Σc , and the corresponding approximation space will be simply denoted by the structure R = Σ, Σc , l, u, with l : Σ
→ Σc and u : Σ
→ Σc defined according to the above equations (1). In this case the lower approximation space is the triple LR = Σ, Σc , l and the upper approximation space is UR = Σ, Σc , u. A model of abstract approximation space according to definition 1, based on a nonempty set X, considered as the universe of the discourse, is described in the forthcoming subsection. In this model the elements of the abstract lattice a ∈ Σ are realized by concrete subsets A of the universe X, i.e., by A ∈ P(X) where P(X) is the power set of X, collection of all its possible subsets. Formally, Concretely a ∈ Σ −−−−−−−−→ A ∈ P(X) Realized as
The operations of the lattice structure Σ are realized by the usual set theoretic operations according to the following table:
72
G. Cattaneo and D. Ciucci
ABSTRACT
CONCRETE
Universe X Concretely
a∈Σ
−−−−−−−→
A ∈ P(X)
a≤b
−−−−−−−→
Concretely
A⊆B
a∧b
−−−−−−−→
Concretely
A∩B
a∨b
−−−−−−−→
Concretely
A∪B
a
Realized as
Realized as
Realized as
Realized as Concretely
−−−−−−−→ Ac = X \ A Realized as
Let us recall that in general a structure P(X), . . . based on the power set of a concrete universe is a model of the abstract theory Σ , . . . iff – any Axiom of the abstract theory Σ, . . . turns out to be a Theorem in the concrete model P(X), . . . . 2.1
The Partition Approach to Rough Set Theory
The usual Pawlak approach to rough set theory is formally based on a concrete partition space, that is a pair (X, π) consisting of a nonempty set X, the universe of discourse, with corresponding collection P(X) of all subsets of X, which are the approximable sets, and a partition π := {Gi ∈ P(X) : i ∈ I} of X (indexed by the index set I) whose elements are the elementary sets. Let us stress that from the theoretical point of view no requirement of finiteness about the universe X is done. All the results are true in the general context. First of all the partition π can be characterized by the induced equivalence relation Rπ ⊆ X × X, defined as (x, y) ∈ Rπ
iff
∃G ∈ π s.t. x, y ∈ G
(3)
In this case the two objects x, y are said to be indistinguishable with respect to π and the equivalence relation Rπ is called an indistinguishability relation. In this indistinguishability context the partition π is considered as the support of some knowledge available on the objects of the universe and so any equivalence
Lattices with Interior and Closure Operators
73
class (i.e., elementary set) G is interpreted as a granule (or atom) of knowledge contained in (or supported by) π. For any object x ∈ X we shall denote by gr(x) the (unique) equivalence class which contains x (gr(x) = G iff x ∈ G), called the granule generated by x. A crisp set is any subset of X obtained as set theoretic union of elementary sets (granules): EJ = ∪{Gj ∈ π : j ∈ J ⊆ I}; in particular any granule Gi of the partition π is crisp. The collection of all such crisp sets plus the empty set ∅ will be denoted by Eπ (X). The following hold: (i) The collection of all crisp sets is a Boolean algebra Eπ (X), ∩, ∪, c , ∅, X with respect to set theoretic intersection, union, and complementation. This Boolean algebra is atomic whose atoms are just the elementary sets (granules) G from the partition π. (ii) From the topological point of view Eπ (X) contains both the empty set and the whole space, moreover it is closed with respect to any arbitrary set theoretic union and intersection, i.e., it is a family of clopen (simultaneously closed and open) subsets for a topology on X. The following is now straightforward. Theorem 1. The concrete approximation space generated by the partition π of the universe X consists of the system Rπ := P(X), Eπ (X), lπ , uπ where: (1) the set of approximable elements Σ is the Boolean atomic (complete) lattice P(X), ∩, ∪, c , ∅, X of all approximable subsets A of the universe X; (2) the two sets of lower crisp elements L(Σ) and upper crisp elements U(Σ) are the same and coincide with the Boolean atomic (complete) lattice Eπ (X). Formally, in this model L(Σ) = U(Σ) = Eπ (X) and so the set Σc of crisp elements is just Eπ (X); (3) the map lπ : P(X)
→ Eπ (X) associating with any subset A of X its lower approximation defined by the (clopen) crisp set lπ (A) := ∪{O ∈ Eπ (X) : O ⊆ A}
(4)
satisfies the conditions (In1)–(In3) of Ax 1, i.e., it is a lower approximation map; (4) the map uπ : P(X)
→ Eπ (X) associating with any subset A of X its upper approximation defined by the (clopen) crisp set uπ (A) := ∩{C ∈ Eπ (X) : A ⊆ C}
(5)
satisfies the conditions (Up1)–(Up3) of Ax 2, i.e., it is an upper approximation map.
74
G. Cattaneo and D. Ciucci
The rough approximation of a subset A of X is then the clopen pair rπ (A) := (lπ (A), uπ (A)),
with lπ (A) ⊆ A ⊆ uπ (A)
(6)
Let us stress the following result, useful for a successive discussion. Proposition 3. Let (X, π) be a partition space, with associated approximation space Rπ . Then, making reference to the granules from π instead of the collection of crisp sets Eπ (X), the inner and the upper approximations of a subset A of X can be equivalently expressed as lπ (A) = ∪{G ∈ π : G ⊆ A}
(7a)
uπ (A) = ∪{H ∈ π : H ∩ A = ∅}
(7b)
Note that for any subset A, the universe turns out to be the set theoretic union of the mutually disjoint sets X = lπ (A) ∪ bπ (A) ∪ eπ (A), where bπ (A) := uπ (A) \ lπ (A) is the boundary and eπ (A) := X \ uπ (A) is the exterior of A. The triplet of subsets π(A) := {lπ (A), bπ (A), eπ (A)} is a new (local, i.e., depending from A) partition of X induced by A from the original partition π. 2.2
Partition Approximation Spaces from Information Systems
In this section we show as the usual Pawlak approach to roughness by information system (IS) gives rise to a partition approximation space (we recall the already quoted seminal papers [43,44,45]). Definition 3. A structure IF(X) = X, Att, V al, F , is an Information System iff: – X (called the universe) is a non empty set of objects (situations, entities, states); – Att is a non empty set of attributes, which assume values for the objects belonging to the set X; – V al is the set of all possible values that can be assigned to objects from X for attributes ranging on Att; – F (called the information function) is a mapping F : X × Att → V al which associates with any pair (x, a), consisting of an object x ∈ X and an attribute a ∈ Att, the corresponding value F (x, a) ∈ V al. In general in the sequel we assume that an information system satisfies the following conditions of coherence: 1. The information mapping F must be surjective; this means that if there exists a value v ∈ V al which is not the result of the application of the information map F to at least a pair (x, a) ∈ X × Att, then this value has no interest with respect to the knowledge stored in the information system and can be eliminated.
Lattices with Interior and Closure Operators
75
2. For any attribute a ∈ Att at least two different objects x1 = x2 must exist such that F (x1 , a) = F (x2 , a), otherwise this attribute does not supply any knowledge and can be suppressed by a reduct operation. The indiscernibility relation that may hold between the objects from the universe under investigation represents the mathematical basis for the rough set approach. Suppose an information system IS(X) = X, Att, V al, F , and let A be any family of attributes (a convenient subset of Att). Then, A determines a binary relation IA on the set of objects X, called the indiscernibility relation generated by A, and defined as follows: (x, y) ∈ IA
iff
∀a ∈ A, F (x, a) = F (y, a)
(8)
In other words, two objects x and y are IA -indiscernible iff they assume the same values for all the attributes belonging to the family A. The relation IA turns out to be an equivalence relation whose equivalence classes are the granules of knowledge grA (x) := {y ∈ X : (x, y) ∈ IA }, i.e., grA (x) consisting of all the objects y ∈ X that share with x the same values for all the attributes belonging to A (see the equation (8)). We shall denote by πA the collection of all such granules (equivalence classes) and by EA (X) the induced collection of crisp sets. On the basis of the partition space (X, πA ) generated by the family of attributes A, all the results analyzed in section 2.1 can be applied. In this information system context the lower approximation of a subset A of the universe will be denoted by lA (A) and the corresponding upper approximation by uA (A). The rough approximation of A is then the pair rA (A) = (lA (A), uA (A))
with lA (A) ⊆ A ⊆ uA (A)
Example 1. Let us consider an information system represented by the information table 1 involving a universe X of 5 flats with attributes “P=Price”, “R=Rooms”, “F=Furniture”, and “PF=Prestigious Flat”. Table 1. A simple information table involving a universe X of 5 flats X/Att Price Rooms Furniture Prestigious Flat x1 x2 x3 x4 x5
high high high high low
1 3 1 2 2
no yes no no yes
yes yes no yes no
The partition generated by the set of attributes A3 ={Price, Rooms, Furniture} is the following: πA3 = {{x1 , x3 }, {x2 }, {x4 }, {x5 }}.
76
G. Cattaneo and D. Ciucci
With respect to the attribute “PF=Prestigious flat”, interpreted as a decision attribute, we can single out two decision classes: [P F = y] consisting of all prestigious flats, and [P F = n] consisting of the not prestigious flats: [P F = y] = {x1 , x2 , x4 } and [P F = n] = {x3 , x5 }. The collection πP F = {[P F = y], [P F = n]} gives rise to a different partition of the universe X, called the decision-partition induced by the decision attribute Prestigious flat P F . The collection of prestigious flats [P F = y], considered as a subset of the universe X, is lower and upper approximated relatively to the collection of attributes A3 in the following way lA3 [P F = y] = {x2 , x4 } = [P = h, R = 3, F = y] ∪ [P = h, R = 2, F = n] uA3 [P F = y] = {x1 , x2 , x3 , x4 } = [P = h, R = 3, F = y] ∪ [P = h, R = 2, F = n] ∪ [P = h, R = 1, F = n]
That is, in this information system 1) a flat is necessarily prestigious iff its price is high and it consists of exactly 2 or 3 rooms; the role of furniture is irrelevant; 2) a flat is possibly prestigious iff the price is high; in this case not only the role of furniture continues to be irrelevant, but also irrelevant is the number of rooms.
3
Lower and Upper Approximation Spaces: Their Duality in the Context of de Morgan Lattices
A particular case of abstract approximation space is the one in which the lattice of approximable elements Σ is equipped with a de Morgan complementation mapping. Definition 4. A de Morgan lattice is a system Σ, ∧, ∨, , 0, 1 where 1. the substructure Σ, ∧, ∨, 0, 1 is a lattice, bounded by the least element 0 and the greatest element 1; 2. the mapping : Σ
→ Σ is a unary operation on Σ, called de Morgan complementation, assigning to any element a ∈ Σ the corresponding complement a ∈ Σ under the conditions that for arbitrary a, b ∈ Σ the following hold: (dM1) a = a (double negation law) (dM2) (a ∨ b) = a ∧ b (de Morgan law) Remark 2. The notion of de Morgan lattice just given is based on a non necessarily distributive lattice. This choice is made also by Kalman in [32] where de Morgan lattices are called i–lattices, as distinguished by distributive i–lattices. Quoting [14]: “This notion [of de Morgan distributive lattice] has been introduced by Gr. Moisil [37, p. 91] and studied by J. Kalman [32] under the name of distributive i–lattice.”
Lattices with Interior and Closure Operators
77
On the contrary, some other authors consider de Morgan lattice as a distributive lattice equipped with a de Morgan complementation. From the terminological point of view we can quote [14]: “If Σ has the last element 1, we shall say that Σ is a de Morgan Algebra. This notion has been studied by A. Bialynicki– Birula and H. Rasiowa [1,2] under the name of quasi–Boolean algebras. In this case, 0 = 1 is the first element of Σ.” For some more recent contributions see for instance [15,21]. Our choice of considering non distributive lattices is due to the desire of being more general as possible, since there exist de Morgan lattices which are not distributive. For example, let us consider the de Morgan lattice of Figure 1. 1 = 0 o • OO
OOO oo OOO ooo o o OOO o o o OOO o o o • b = b • a = a • OOO oo c = c OOO o o OOO oo OOO ooo o o OO ooo • 0 = 1
Fig. 1. An example of de Morgan lattice which is not distributive
This lattice is not distributive, since a ∨ (b ∧ c) = a = 1 = (a ∨ b) ∧ (a ∨ c). Note that in any de Morgan lattice one has that 1 = 0 and 0 = 1 , since for instance 0 ≤ 1 which, by contraposition and double negation law, implies 1 = 1 ≤ 0 . Similarly, from 0 ≤ 1 we get 1 ≤ 0 = 0. Moreover the following interesting result holds. Lemma 1. Under condition (dM1), the following properties are mutually equivalent among them: (dM2) (dM2a) (dM2b) (dM2c)
(a ∨ b) = a ∧ b (first de Morgan law) (a ∧ b) = a ∨ b (second de Morgan law) a ≤ b implies b ≤ a (contraposition law) b ≤ a implies a ≤ b (dual contraposition law)
Proof. For the first three statements see proposition 3.1 of [17]. Property (dM2c) is clearly equivalent to (dM2b) due to double negation (dM1). As an immediate consequence of this result we have the following. Corollary 1. Let us denote the de Morgan complementation mapping in the form φ : Σ
→ Σ, defined by the correspondence a ∈ Σ → φ(a) := a , then this map is a poset anti–isomorphism, i.e., the following hold:
78
G. Cattaneo and D. Ciucci
(aiso-1) φ is a bijective (one-to-one and onto) mapping of Σ into itself; (aiso-2) a ≤ b implies φ(b) ≤ φ(a) (first poset anti–morphism condition) (aiso-3) φ(b) ≤ φ(a) implies a ≤ b (second poset anti–morphism condition) Proof. The condition φ(a) = φ(b) means that a = b , from which a = b follows, and in virtue of (dM1) one gets a = b, i.e., the injectivity of φ. On the other hand, for every b ∈ Σ, there exists b ∈ Σ such that φ(b ) = b = b, i.e., the surjective condition is verified. (aiso-2) and (aiso-3) are nothing else than the (dM2b) and (dM2c). In general for the de Morgan negation neither the noncontradiction law, ∀a ∈ Σ, a ∧ a = 0, nor the excluded middle law, ∀a ∈ Σ, a ∨ a = 1 are required to hold, but trivially any Boolean lattice is a de Morgan lattice too. A strengthening of de Morgan lattices, intermediate with respect to Boolean lattices, are Kleene lattices, i.e., de Morgan lattices for which the further Kleene condition holds (distributive Kleene lattices are called by Kalman in [32] normal i–lattices): (K)
for any pair of elements a, b ∈ Σ, it is a ∧ a ≤ b ∨ b .
Proposition 4. Under conditions (dM1) and (dM2), property (K) is equivalent to the following (pure poset, i.e., non–equational) condition: a ≤ a
and
b ≤ b
implies
a ≤ b.
(9)
Proof. Indeed, let us suppose that (K) holds; if a ≤ a and b ≤ b then a = a∧a ≤ b ∨ b = b. On the other hand, for all a, b it holds a ∧ a ≤ a ∨ a = (a ∧ a) and b ∧ b = (b ∨ b ) ≤ b ∨ b . Thus by equation (9), a ∨ a ≤ b ∨ b . The two categories of de Morgan and Kleene lattice are distinct between them since there exist de Morgan lattices which are not Kleene, as can be seen in the following example. Example 2. The de Morgan lattice of figure 2, denoted by T in [15], is the smallest de Morgan lattice which is not Kleene since a ∧ a = a ≤ b = b ∨ b . 1 = 0 •?
??? ?? ?? • • b = b a =a ? ?? ?? ?? ? • 0 = 1
Fig. 2. An example of de Morgan lattice which is not a Kleene lattice
Lattices with Interior and Closure Operators
79
Any Boolean lattice is a Kleene lattice in which both the non–contradiction and the excluded middle laws hold. These laws substitute (and imply) the weaker condition (K). Coming back to the more general context, we have the following result. Proposition 5. Let Σ be a de Morgan lattice. (D1) Given a lower approximation space LR = Σ, L(Σ), l and introduced the collection Ud (Σ) := {γ ∈ Σ : ∃β ∈ L(Σ) s.t. γ = β } and the mapping defined for any a ∈ Σ by the law ud (a) := (l(a ))
(10)
then the triplet URd = Σ, Ud (Σ), ud is an upper approximation space, dual of the original lower approximation space Σ, L(Σ), l, in the sense that Ud (Σ) is naturally a lattice of upper crisp elements, the dual of L(Σ), and ud : Σ
→ U(Σ) is an upper approximation map, the dual of l (i.e., it satisfies all the above conditions (Up1)–(Up3)) . Similarly, (D2) Given an upper approximation space UR = Σ, U(Σ), u and introduced the collection Ld (Σ) := {β ∈ Σ : ∃γ ∈ U(Σ) s.t. β = γ }
(11)
and the mapping defined for any a ∈ Σ by the law ld (a) := (u(a ))
(12)
then the triplet URd = Σ, Ld (Σ), ld is a lower approximation space, dual of the original upper approximation space Σ, U(Σ), u, in the sense that Ld (Σ) is naturally a lattice of upper crisp elements, the dual of U(Σ), and ld : Σ
→ L(Σ) is a lower approximation map, the dual of u (i.e., it satisfies all the above conditions (In1)–(In3)). Proof. We shall prove the point (D1) only, since the other can be proved in a similar way by duality. First of all, for any pair of elements γ1 , γ2 ∈ Ud (Σ) we have that there exist two corresponding elements β1 , β2 ∈ L(Σ) such that γ1 = (β1 ) and γ2 = (β2 ) . Moreover β1 ∧ β2 ∈ L(Σ), since L(Σ) is a meet semi-lattice, from which it follows γ1 ∨ γ2 = (β1 ) ∨ (β2 ) = (dM2a) = (β1 ∧ β2 ) , with (β1 ∧ β2 ) ∈ Ud (Σ), and so Ud (Σ) is a join sub semi-lattice of Σ. On the other hand, from (In1) we have that for any a in Σ one has l(a) ≤ a, which applied to a leads to l(a ) ≤ a ; from this result, using (dM1) and (dM2b), one obtains a ≤ (l(a )) = ud (a). Moreover, for any a of Σ its lower approximation l(a) ∈ L(Σ), and applying this property to the particular case of a , as element of Σ, we have that also
80
G. Cattaneo and D. Ciucci
l(a ) ∈ L(Σ); from this it follows that ud (a) = (l(a )) ∈ Ud (Σ), since there exists the element β = l(a ) ∈ L(Σ). Finally, let an element γ ∈ Ud (Σ) (i.e., for which there exists β ∈ L(Σ) such that γ = β ) be such that a ≤ γ, then by (dM2b) we get β = (dM1) = β = γ ≤ a ; making use of (In3) applied to the element a we have that β ≤ l(a ) and so, by (dM2b), (l(a )) ≤ β = γ, i.e., we have proved the (Up3). In terms of the involutive anti–morphism φ : Σ
→ Σ introduced in corollary 1, recalling the self–invertibility condition φ−1 = φ, the two situations described by (D1) and (D2) in proposition 5 can be depicted by the two commutative diagrams which describe their interconnections by de Morgan complementation: ΣO φ
Σ
l
/Σ
≡
ud
ΣO
φ
φ
/Σ
Σ
u
/Σ
≡
ld
φ
/Σ
Fig. 3. Lower and upper approximations interconnection
Hence the de Morgan complementation determines a duality between the lower and upper approximations of any element a according to equations (10) and (12). Such a relationship can be represented by the diagram described in Figure 4, where for simplicity we continue to use the symbol l(a) (resp., u(a)) instead of ld (a) (resp., ud (a)), if no confusion is likely. (l(a) )
l(a) h
(
u(a)
(u(a ))
Fig. 4. de Morgan duality between lower and upper approximations
In this de Morgan context, two new notions involving the elements of Σ can be now introduced: given a ∈ Σ its exterior is the lower crisp element e(a) := u(a) ∈ L(Σ), i.e., the complement of its upper approximation; moreover, its boundary is the upper crisp element b(a) := u(a)∧l(a) ∈ U(Σ), i.e., the relative complement of its interior with respect to the closure. For any approximable element a ∈ Σ the triplet π(a) := {l(a), b(a), e(a)} consists of mutually orthogonal elements (two elements a, b of a complemented lattice Σ are said to be orthogonal, written a ⊥ b, iff a ≤ b or equivalently iff b ≤ a ). Moreover, if the de Morgan lattice Σ is distributive they orthogonally decompose the whole Σ in the sense that l(a) ∨ b(a) ∨ e(a) = 1.
Lattices with Interior and Closure Operators
81
Making use of the exterior of a generic element, we can consider the ortho– rough approximation of any element a ∈ Σ as the lower–exterior pair r⊥ (a) := (l(a), e(a)) ∈ L(Σ) × L(Σ), with l(a) ⊥ e(a)
(13)
which is the image of the element a under the ortho–rough approximation mapping r⊥ : Σ
→ L(Σ) × L(Σ) described by the diagram: a ∈ ΣO OOO oo OOOu OOO oo o OO' o wo r l(a) ∈ L(Σ) e(a) ∈ L(Σ) ⊥ OOO oo OOO o o o OOO o oo OO' wooo (l(a), e(a)) loooo
In the sequel we also (or almost always, if not explicitly declared) consider approximation spaces according to the following definition. Definition 5. A bi–approximation space is an approximation R = Σ, Ld (Σ), U(Σ), ld , u, where
space
(b-RAS1) The substructure UR = Σ, U(Σ), u is an upper approximation space, according to the (UAS) of section 2, based on a de Morgan lattice Σ, ∧, ∨, , 0, 1; (b-RAS2) Ld (Σ) is the dual lattice of U(Σ) consisting of the lower crisp elements according to equation (11); (b-RAS3) ld : Σ
→ Ld (Σ) is the dual lower approximation map relatively to u:Σ
→ U(Σ) according to equation (12). Let us observe that in the abstract treatment of approximation spaces, and in particular in the use of one of the two dualities proved in proposition 5, the de Morgan conditions (dM1) and (dM2) are the only necessary conditions, i.e., the minimal requirements on the complementation associated to a generic lattice in order to be able to prove all the required results. Of course, this does not exclude that in some particular applicative situation the involved lattice satisfies the stronger condition of being Kleene or Boolean complemented, as in the case of the examples treated in the forthcoming section, but from a pure theoretical point of view the only de Morgan conditions are sufficient to start from a lower (resp., upper) approximation space in order to construct by duality the corresponding upper (resp., lower) of a global bi–approximation space. Clearly, this holds in the present formalization of abstract approximation spaces. Things could be very different if weaker properties than the ones axiomatized in definition 1 are taken into account (see for instance [16]). But in this case we think that the reasons of these weakening must be clearly declared and semantically explained.
82
4
G. Cattaneo and D. Ciucci
Some Concrete Models: Partition, Topological, and Covering Approximation Spaces
After the previous discussion about the abstract duality between lower approximation spaces and upper approximation spaces under a general structure of de Morgan lattice, we treat now some concrete models which clarify this abstract environment and furnish some examples of possible applications. 4.1
Partition Approximation Spaces Revisited
In subsection 2.1 we have introduced the usual rough set theory based on a concrete universe X and its power set P(X) as the Boolean lattice of all approximable subsets of X (we stress again that in this concrete approach the role of the lattice Σ is played by P(X)). This concrete situation furnished a possible model of the abstract approach to roughness theory in which it is used the following two steps procedure to obtain the crisp elements of an approximation space: (Pr1) it is given a partition of X, which is a (non necessarily finite) collection π = {Gi : i ∈ I} of nonempty subsets of X such that the following hold: (C) covering condition: X = ∪{Gi ∈ π : i ∈ I} (D) disjointness condition: for any pair Gh = Gk of different subsets of the partition π, it is Gh ∩ Gk = ∅. (Pr2) The set of crisp elements Eπ (X) is obtained as set theoretic union of elements from the partition π: E ∈ Eπ (X) iff ∃{Gih ∈ π : h ∈ H} s.t. E = Gih h∈H
plus the empty set ∅. Let us stress that the development of this partition approach has been widely discussed in subsection 2.1. The main results from the rough-theory standpoint are theorem 1 and proposition 3. 4.2
Topological Approximation Spaces
Another concrete model of approximation space is based on the notion of topological space, in this context called topological approximation space. To this purpose, and generalizing what we have seen in the case of the partition spaces, let us recall that a topological space can be defined as a pair τ = (X, B) consisting of a set X and a base for a topology (simply open base). An open base is a collection B = {Bi ∈ P(X) : i ∈ I} of subsets of X, each of which is an open granule or (adopting the topological terminology, see [51, p. 99]) a basic open set, such that the following hold (and compare with the conditions (C) and (D) of the above partition case):
Lattices with Interior and Closure Operators
83
(C) covering condition: X = ∪{Bi ∈ B : i ∈ I} (OB) open base condition: for any pair Bi , Bj of subsets of the base B for which i ∈ B : h ∈ H} of elements from the base Bi ∩ Bj = ∅ a collection {B h exists such that i ∈ B : h ∈ H} Bi ∩ Bj = ∪{B h This constitutes a generalization of the step (Pr1) of the procedure relative to partitions since any partition π of X trivially is also a base for a topology of X. The second step (Pr2) is now formalized by (Pr2) The set of open sets O(X) is obtained as set theoretic union of elements from the open base B plus the empty set ∅. Hence, for a non–empty set O ∈ O(X) iff ∃{Bik ∈ B : k ∈ K} s.t. O = Bik k∈K
In this way we have induced a topological space as the pair (X, O(X)) consisting of a nonempty set X equipped with a family of open subsets O(X), satisfying the following conditions: (O1) (O2) (O3)
the empty set ∅ and the whole space X are open; the family O(X) is closed with respect to arbitrary unions; the family O(X) is closed with respect to finite intersections.
In particular any element of the open basis is an open set, that is B ⊆ O(X). In this topological framework one can consider the structure of complete lower approximation space induced from the topological space by an open base τ = (X, B) as follows LRτ = P(X), O(X), l where as usual (LT1) the role of the lattice of approximable elements Σ is played by the power set P(X) of the universe X, which is a complete lattice. (LT2) from the above outlined properties of open sets it immediately follows that O(X) plays the role of lattice of lower crisp elements, i.e., L(Σ) = O(X), since in particular (O2) assures that it is a complete meet sub semi-lattice of P(X); (LT3) as usual in any complete meet semi–lattice, for any subset A of X it is possible to introduce, according to the (1a), its lower approximation: l(A) := ∪{O ∈ O(X) : O ⊆ A}
(14)
which exists and is an open set: l(A) ∈ O(X). In other words, the lower approximation of the subset A (the approximable element) is its topological interior, i.e., the greatest open set contained in A, usually denoted by Ao := l(A).
84
G. Cattaneo and D. Ciucci
Since the power set P(X) of X is a Boolean lattice, and so in particular a de Morgan lattice in which the complementation mapping is the set theoretic complementation A ∈ P(X) → Ac = X \ A ∈ P(X), we can construct the dual base for a topology of closed granules as the collection K := {K ∈ P(X) : ∃B ∈ B s.t. K = B c }. This dual base can be characterized by the following. Proposition 6. The dual base K := {K ∈ P(X) : ∃B ∈ B s.t. K = B c } of an open base for a topology B = {Bi ∈ P(X) : i ∈ I} is a base for a topology of closed sets (simply, closed base for a topology, see [51, p. 112]), in the sense that the following hold: (DB) disjointness condition ∅ = ∩{K ∈ P(X) : K ∈ K} (CB) closed base condition: for any pair Ki , Kj of subsets of the closed base iv ∈ K : v ∈ V } of elements K for which Ki ∪ Kj = X a collection {K from the closed base exists such that iv ∈ K : v ∈ V } Ki ∪ Kj = ∩{K A closed set is either the set theoretic intersection of elements from the closed base or the whole space X. The collection of all closed sets is denoted by C(X); hence, for a non-trivial set C = X, C ∈ C(X) iff ∃{Kiw ∈ K : w ∈ W } s.t. C = Kiw w∈W
Trivially, a subset of X is closed iff it is the set theoretic complement of an open set: C ∈ C(X) iff ∃O ∈ O(X) s.t. C = Oc Therefore, the collection C(X) of all closed subsets of X satisfies the following conditions: (C1) both ∅ and X are closed; (C2) the family C(X) is closed with respect to finite unions; (C3) the family C(X) is closed with respect to arbitrary intersections. All the elements of the closed base are also closed, that is K ⊆ C(X). In this topological framework of closed sets one can introduce the structure of complete upper approximation space induced by the topological space τ = (X, B) URτ = P(X), C(X), u where (UT1) the role of the lattice of approximable elements Σ is played by the power set P(X) of the universe X, which is a complete lattice.
Lattices with Interior and Closure Operators
85
(UT2) from the above outlined properties of closed sets it immediately follows that C(X) plays the role of lattice of upper crisp elements, i.e., U(Σ) = C(X), since in particular (C3) assures that it is a complete meet sub semi-lattice of P(X); (UT3) as usual in any complete meet semi–lattice, for any subset A of X it is possible to introduce, according to the (1b), its upper approximation: u(A) := ∩{C ∈ C(X) : A ⊆ C}
(15)
which exists and is a closed set: u(A) ∈ C(X). In other words, the upper approximation of the subset A (the approximable element) is its topological closure, i.e., the smallest closed set containing A, usually denoted by A∗ := u(A). The (complete) topological approximation space is then the structure RT = P(X), O(X), C(X), l, u The topological rough approximation mapping rT : P(X)
→ O(X) × C(X) is the mapping which assigns to any subset A of the topological space X the pair rT (A) = (Ao , A∗ ), with Ao ⊆ A ⊆ A∗ , consisting of its interior (open subset Ao ∈ O(X)) and its closure (closed subset A∗ ∈ C(X)). Trivially, a subset E of a topological space X is crisp iff it is clopen (in particular the empty set and the whole space are clopen, and so crisp). Of course, a partition space (X, π) generates a topological space whose open base is just the family π, i.e., B = π; the corresponding closed base is defined in the usual way: a subset of X belongs to the closed base K iff it is the set complement of an equivalence class, in particular a set union of equivalence classes. But it is not true that a generic union of equivalence classes generates an element of K, see the example 1 whose corresponding closed base is K = {{x2 , x4 , x5 }, {x2 , x3 , x4 , x5 }, {x1 , x2 , x3 , x5 }, {x1 , x2 , x3 , x4 }} which for instance does not contain the set {x2 , x4 }, union of two open granules. If as usual in the topological context we construct an open set as union of elementary sets (i.e., equivalence classes from the partition π) according to E ∈ Oπ (X) iff ∃{Giw ∈ π : w ∈ W } s.t. E = w∈W Giw , then the set complement E c of any open set is also a set theoretic union of elementary sets from π, i.e., an open set. This means that E = (E c )c as set theoretic complement of an open set is, from the topological point of view, also closed, i.e., a clopen set. From this it follows that a partition gives rise to a topology of clopen: formally Oπ (X) = Cπ (X). In subsection 2.1 this unique family has been denoted by Eπ (X), formally O(X) = C(X) = Eπ (X), the collection of all crisp sets. Let us stress that in the case of a topological approximation space, as generalization of the partition approximation space, the equations (14) and (15) (in which are involved open sets from O(X) and closed sets from C(X) respectively) are generalizations of the corresponding partition versions (4) and (5), in which are involved crisp sets from Eπ (X). Then, one can try to generalize to the topological case also the equivalent formulation (7a) and (7b) expressed in
86
G. Cattaneo and D. Ciucci
terms of the partition π. Relatively to the lower approximation, we obtain the same set as l(A), that is l(A) = ∪{B ∈ B : B ⊆ A}. On the other hand, the upper approximation is different: u(op) (A) := ∪{B ∈ B : B ∩ A = ∅} = u(A)
(16)
Let us note that u(op) (A) ∈ O(X) is an open set (as set theoretic union of open sets), whereas u(A) = A∗ ∈ C(X) is a closed set (as set theoretic intersection of closed sets). From a terminological point of view, u(A) can be called the closed upper approximation, and u(op) (A) the open upper approximation of A. Moreover, if we construct the dual operator of u(op) we obtain a lower approximation by closed elements: l(cl) (A) := [u(op) (Ac )]c = ∩{D ∈ K : D ∪ A = X}
(17)
Dually to the upper approximation case, l(A) can be called the open lower approximation and l(cl) (A) the closed lower approximation of A and in general they give different results. Example 3. Let us consider the universe X = {1, 2, 3, 4, 5, 6} and the open base B = {{2}, {4}, {3, 4}, {1, 3, 4}, {4, 5, 6}, {3, 4, 5, 6}} , which trivially is not a partition of the universe X. These are the open granule of the topology on X given by the following open sets O(X) = {∅, {2}, {4}, {2, 4}, {3, 4}, {1, 3, 4}, {2, 3, 4}, {4, 5, 6}, {1, 2, 3, 4}, {2, 4, 5, 6}, {3, 4, 5, 6}, {1, 3, 4, 5, 6}, {2, 3, 4, 5, 6}, X} The corresponding dual base of closed granules is K = {{1, 2}, {1, 2, 3}, {2, 5, 6}, {1, 2, 5, 6}, {1, 3, 4, 5, 6}, {1, 2, 3, 5, 6}} . Note that in particular, {2, 5, 6} ∩ {1, 2, 3} ∩ {1, 3, 4, 5, 6} = ∅, but there is no pair of different closed granules Ki = Kj such that Ki ∩ Kj = ∅. The induced topology of closed sets is C(X) = {∅, {1}, {2}, {1, 2}, {1, 3}, {5, 6}, {1, 2, 3}, {1, 5, 6}, {2, 5, 6}, {1, 2, 5, 6}, {1, 3, 5, 6}, {1, 3, 4, 5, 6}, {1, 2, 3, 5, 6}, X} Let us consider now the approximable subset A = {1, 2, 5}, then l(A) = Ao = {2} and u(A) = A∗ = {1, 2, 5, 6} On the contrary, the open upper approximation of A is u(op) ({1, 2, 5}) = {1, 3, 4} ∪ {3, 4, 5, 6} ∪ {4, 5, 6} ∪ {2} = X On the other hand, computing the approximations of the set B = {1, 2, 4} we have l(B) = {2, 4} and u(B) = X, whereas the closed lower approximation of B is: l(cl) (B) = {1, 2} ∩ {1, 2, 3} ∩ {2, 5, 6} ∪ {1, 2, 5, 6} = {2} Let us remark that in this example we have seen a particular subset A of a concrete topological space such that u(A) ⊂ u(op) (A) and a particular subset B such that l(cl) (B) ⊂ l(B). These are indeed two particular cases of a general property, as we are going to show in the next section.
Lattices with Interior and Closure Operators
4.3
87
Covering Approximation Spaces
In the previous subsections 4.1 and 4.2 we have described two concrete models of abstract approximation spaces: the first is based on a partition space, and the second on a topological space by an open base. We have also stressed that the topological model is a generalization of the partition model. In this subsection we introduce a further model which turns out to be a generalization of the topological one, obtaining in this way a hierarchy of models of rough approximation spaces. Formally, a covering of the universe X is a family γ = {Ci ∈ P(X) : i ∈ I} of nonempty subsets of X which satisfies the unique (C) covering condition: X = ∪{Ci ∈ γ : i ∈ I} A lower pseudo–open crisp set is defined as the set theoretic union of subsets from the covering plus the empty set ∅. Denoting by Lγ (X) their collection, we have that for any nonempty subset L = ∅ of X L ∈ Lγ (X) iff ∃{Cir ∈ γ : r ∈ R} s. t. L = Cir r∈R
Of course, γ ⊆ Lγ (X) and so this latter is also a covering of X, i.e., X = ∪{L : L ∈ Lγ (X)}. Moreover, the following properties of a pseudo–open family hold: (PO1) (PO2)
Lγ (X) contains either the empty set and the whole universe, and it is closed with respect to arbitrary set theoretic union.
The lower pseudo–open approximation with respect to the covering γ of any subset A of the universe X is then lγ (A) = ∪{L ∈ Lγ (X) : L ⊆ A} obtaining in this way the lower approximation space induced from the covering LRγ = P(X), Lγ (X), lγ The dual of the covering γ = {Ci : i ∈ I} is the family γd := {D ∈ P(X) : ∃C ∈ γ s.t D = C c }, which satisfies the only (DB) disjointness condition: ∅ = ∩{D : D ∈ γd } An upper pseudo–closed crisp subset is then the set theoretic intersection of elements from the dual covering. The collection of all upper pseudo–closed crisp sets will be denoted by Uγ (X) and so also in this case for any nontrivial subset U =X U ∈ Uγ (X) iff ∃{Ds ∈ γd : s ∈ S} s.t. U = Ds s∈S
with U ∈ Uγ (X) iff ∃L ∈ Lγ (X) s.t. U = Lc Of course, the following properties of a pseudo–closed family holds:
88
G. Cattaneo and D. Ciucci
(PC1) (PC2)
Uγ (X) contains either the empty set and the whole universe, and it is closed with respect to arbitrary set theoretic intersection.
These properties qualify Uγ (X) as a particular Moore family of subsets of X (see [3, p. 111], and the footnote in that page). The corresponding dual upper pseudo–closed approximation space with respect to the covering γ is then URγ = P(X), Uγ (X), uγ , where for any subset A of X the upper pseudo–closed approximation is uγ (A) = ∩{U ∈ Uγ (X) : A ⊆ U } In analogy with the topological case, also in the present covering context we can introduce the lower pseudo–closed approximation and the upper pseudo–open approximation of A as the following generalizations of (17) and (16) respectively lγ(cl) (A) = ∩{U ∈ Uγ (X) : U ∪ A = X} u(op) (A) = ∪{L ∈ Lγ (X) : A ∩ L = ∅} γ Example 4. Let us consider the finite universe X = {1, 2, 3, 4, 5, 6}, with associated lower (pseudo–open) base γ = {{1, 6}, {2, 3}, {1, 2, 3, 4}, {3, 4, 5, 6}}. The corresponding collection of lower (pseudo–open) crisp sets is Lγ (X) = {∅, {1, 6}, {2, 3},{1, 2, 3, 4}, {3, 4, 5, 6}, {1, 2, 3, 6}, {1, 2, 3, 4, 6}, {1, 3, 4, 5, 6}, X} The dual upper (pseudo–closed) base is γd = {{1, 2}, {5, 6}, {2, 3, 4, 5}, {1, 4, 5, 6}}, with collection of upper (pseudo–closed) crisp sets Uγ (X) = {∅, {1, 2}, {5, 6}, {1, 4, 5, 6}, {2, 3, 4, 5}{1}, {2}, {5}, {4, 5}, X} (cl)
Now, the set A = {2, 3, 4} is such that lγ (A) = ∅ ⊂ {2, 3} = lγ (A) , and (op) uγ (A) = {2, 3, 4, 5} ⊂ X = uγ (A), so its rough approximation is the pseudoopen—pseudo-closed pair rγ ({2, 3, 4}) = ({2, 3}, {2, 3, 4, 5}). In all the examples discussed up to now it happens that the pseudo–open upper approximation is always greater than or equal to the corresponding pseudo– closed lower version and, dually, the pseudo–closed upper approximation is always less or equal than the corresponding pseudo–open upper version. This indeed is a general behavior according to the following result. Proposition 7. In any covering (and so in particular in any topological) space the following inequalities between approximations hold: ∀A ∈ P(X),
lγ(cl) A ⊆ lγ (A) ⊆ A ⊆ uγ (A) ⊆ u(op) (A) γ
Proof. We prove the inequality for the upper approximations, the one for the lower approximation can be derived by duality. Let x ∈ uγ (A). Then, ∀U ∈ U(X), [A ⊆ U → x ∈ U ]. Since U = Lc for some L ∈ L(X), we have ∀L ∈
Lattices with Interior and Closure Operators
89
L(X), [L ⊆ Ac → x ∈ Lc ], that is ∀L ∈ L(X), [L ∩ A = ∅ → x ∈ L] or ∀L ∈ L(X), [¬(L ∩ A = ∅) ∨ x ∈ L] and so ∀L ∈ L(X), [(x ∈ L) ∨ (L ∩ A = ∅)]. Since, by the covering property, ∃L0 : x ∈ L0 this last result assures that (op) L0 ∩ A = ∅, i.e., x ∈ uγ (A). So the “pseudo-open—pseudo-closed” rough approximation of any subset A, i.e., the pair rγ (A) = (lγ (A), uγ (A)), is “better ” than the “pseud-closed—pseudo (cl) (co) open” rough approximation of the same A, defined as rγ (A) := lγ (A), (op) uγ (A) . This can be formally written as rγ (A) = (lγ (A), uγ (A)) (lγ(cl) (A), u(op) (A)) = rγ(co) (A) γ where we have denoted by the partial order relation on the collection of all pairs of subsets from X, (P(X) × P(X))⊆ := {(A1 , Ap ) ∈ P(X) × P(X) : A1 ⊆ Ap }, defined as (A1 , Ap ) (B1 , Bp ) iff B1 ⊆ A1 ⊆ Ap ⊆ Bp Remark 3. This definition is similar to the one which formalizes the approximation by real numbers. In this numerical case, the pair of real numbers (rl , ru ), with rl ≤ ru , furnishes a better approximation with respect to the pair (sl , su ), with sl ≤ su , iff the following chain of inequalities is verified: sl ≤ rl ≤ ru ≤ su . This condition is denoted by√(rl , ru ) (sl , su ). For instance, in approximating the (irrational) real number 2 it is trivial that the pair (1.414, 1.415) furnishes a better approximation with respect to the pair (1, 3) owing to the fact that √ 1 < 1.414 < 2 < 1.415 < 3. (op)
Let us note that the pseudo-open–pseudo-open approximation rγ (A) = (op) (lγ (A), uγ (A)) of the generic subset A has been studied also by other authors, for instance in [49] for the case of coverings. Of course, also in this case (op) rγ (A) rγ (A). In particular, Pomykala in [49] addresses the problem of studying under which conditions the two above approximations, based on coverings, are Kuratowski topological interior and closure. In the next section, we will shaw that they are indeed a Tarski interior and closure, which is not (Kuratowski) topological. Further, Yao in [62] gives another possibility for the case of neighborhood systems (at least under the reflexivity condition of a binary relation on a universe), which we call “local ” since it involves the neighborhood of a generic point x ∈ X relatively to the covering γ as the pseudo–open set γ(x) := ∪{C ∈ γ : x ∈ C} ∈ Lγ (X). Using this notion, the lower and upper approximations of a subset A are defined as: lγ(l) (A) = {x ∈ X : γ(x) ⊆ A}
and u(l) = ∅} γ (A) = {x ∈ X : γ(x) ∩ A
(18)
Trivially, these two maps give rise to a different approximation of the generic approximable subset A of the universe X according to the inequalities: lγ(l) (A) ⊆ lγ (A) ⊆ A ⊆ uγ (A) ⊆ u(l) γ (A)
90
G. Cattaneo and D. Ciucci
with the “global” rough approximation rγ (A) := (lγ (A), uγ (A)) better than the (l) (l) (l) (l) “local” one rγ (A) := lγ (A), uγ (A) , i.e., rγ (A) rγ (A). Let us illustrate this situation with one example. Example 5. In the universe X = {1, 2, 3, 4, 5} let us consider the covering γ = {{1, 2}, {2, 3, 4}, {4, 5}}. The corresponding family of pseudo–open sets is Lγ (X) = {∅, {1, 2}, {2, 3, 4}, {4, 5}, {1, 2, 4, 5}, {1, 2, 3, 4}, {2, 3, 4, 5}, X}. The pseudo–open neighborhood of the points of the universe are then: γ(1) = {1, 2} γ(2) = {1, 2, 3, 4} γ(3) = {2, 3, 4} γ(4) = {2, 3, 4, 5} γ(5) = {4, 5} Notice that the family of pseud–closed sets is Uγ (X) = {∅, {3, 4, 5}, {1, 5}, {1, 2, 3}, {3}, {5}, {1}, X}. Let us consider the subset A = {3, 4, 5} of the universe, then lγ(l) (A) = {5} and lγ (A) = {4, 5} with lγ(l) (A) ⊂ lγ (A) (l) uγ (A) = {3, 4, 5} and u(l) γ (A) = {2, 3, 4, 5} with uγ (A) ⊂ uγ (A)
5
The Tarski Closure and Interior Abstract Approach to Approximation Spaces
Summarizing what previously discussed, the abstract approach to roughness introduced in section 2 is based on a family of “approximable” concepts, with associated two well defined subfamilies describing “lower” and “upper” crisp concepts respectively, with respect to which it is possible to construct “inner” and “outer” approximations. In order to introduce the equivalence (isomorphism) between two abstract approaches to roughness theory, let us give the following definition. Definition 6. A structure Σ, ∧, ∨, ∗ , 0, 1 is a closure lattice iff Σ, ∧, ∨, 0, 1 is a bounded lattice and the mapping ∗ : Σ → Σ, that associates with any element a from Σ its closure a∗ ∈ Σ, is a closure operation, that is, it satisfies the properties: (C1)
0∗ = 0
(normalized)
(C2) (C3)
a ≤ a∗ a∗ = a∗∗
(increasing) (idempotent)
(C4)
a∗ ∨ b∗ ≤ (a ∨ b)∗
(sub–additive)
Note that the just given definition of closure operation is equational. The first interesting result is that, under conditions (C1)–(C3), the sub–additive property (C4) is equivalent to the following (non–equational) monotonicity condition, which allows one to introduce a notion of closure in the more general context of a poset, which is not necessarily a lattice (see for instance [38]).
Lattices with Interior and Closure Operators
91
Lemma 2. Under conditions (C1)–(C3) the sub–additive property (C4) is equivalent to the monotonicity condition: (C4a)
a≤b
implies
a∗ ≤ b ∗
Proof. Let (C4) be true. Then, from a ≤ b we get b∗ = (a ∨ b)∗ ≥ a∗ ∨ b∗ ≥ a∗ . Vice versa, since in a lattice we have a, b ≤ a ∨ b, by the monotonicity condition(C4a) it follows that a∗ , b∗ ≤ (a ∨ b)∗ , and so the l.u.b. condition of the lattice implies a∗ ∨ b∗ ≤ (a ∨ b)∗ . In a closure lattice, owing to the condition (C2), it is possible to select the subset of closed elements defined as the collection of elements which are equal to their closure. Formally, C(Σ) := {a ∈ Σ : a = a∗ } The set C(Σ) is not empty since the element 0 is closed as consequence of (C1). Further, in general it does not coincide with the set of all elements Σ, i.e., there may exist elements which are not closed. Condition (C3) says that for any element a ∈ Σ the corresponding a∗ is closed. This notion has been developed by Tarski during the ’30s in his study of consequences in logic [55], but it can be considered as the most fascinating applications of closure operator on the power set of a given universe. Also if Tarski did not point out the importance of his approach in relation with the closure theory developed in the same period by others, the co–paternity of the notion of closure system can been attributed, besides to Tarski, also to Monteiro–Ribeiro [38] (in the abstract poset context), Ward [58] (in the abstract lattice context), Ore [39] (in the concrete power set context) and, in connection with Galois theory, Ore [40] and Everett [22] (both these latter with a poset version of the closure). For this reason, in the sequel we also refer to this notion with the term of Tarski closure operation. Proposition 8. The set C(Σ) of all closed element of a Tarski closure lattice satisfies the following (finite) Moore conditions: (M1) the greatest element is closed: 1 ∈ C(Σ); (M2) for any pair of closed elements a, b ∈ C(Σ), their meet is closed too: a∧b ∈ C(Σ). Proof. Applying (C2) to the element 1 we get 1 ≤ 1∗ , and so 1 = 1∗ . Moreover, from a ∧ b ≤ {a, b} and the monotonicity (C4a) we obtain (a ∧ b)∗ ≤ {a∗ , b∗ } = {a, b}, where the last equality follows from the closure of a and b. So (a ∧ b)∗ is a lower bound of the pair a, b and from this we get that (a ∧ b)∗ ≤ a ∧ b; but from (C2) it is a ∧ b ≤ (a ∧ b)∗ . These considerations can be extended to any arbitrary family if Σ is a complete lattice, with the following characterization.
92
G. Cattaneo and D. Ciucci
Proposition 9. Let Σ be a complete lattice. Then, (i) Any Tarski closure operator on Σ determines the closure system C(Σ) satisfying the Moore conditions: (M1) the greatest element is closed: 1 ∈ C(Σ); (M2) for any collection of closed elements {cα } ⊆ C(Σ) their meet is closed too: ∧cα ∈ C(Σ). (ii) Conversely, every closure system C(Σ) (also Moore family according to [3, p. 111]) satisfying conditions (M1)–(M2) on the complete lattice Σ determines an operator ∗ : Σ
→ Σ defined as follows: a∗ := ∧{c ∈ C(Σ) : a ≤ c}
(19)
satisfying properties (C2)–(C4), but in general not the (C1). Proof. The part (i) can be proved with a slight modification of the proof of the previous proposition 8. For the part (ii), if C(Σ) is a Moore family then for any element a ∈ Σ let us consider the lattice meet a∗ := ∧{c ∈ C(Σ) : a ≤ c}, which exists owing to the completeness of the lattice Σ and the fact that the involved family is not empty since a ≤ 1 with 1 ∈ C(Σ). Trivially, a ≤ a∗ since any c of the family is such that a ≤ c, and so the (C2) is true. Applying the just proved (C2) to the element a∗ we get a∗ ≤ (a∗ )∗ ; moreover, from the particular case a∗ ≤ a∗ it follows that (a∗ )∗ = ∧{d ∈ C(Σ) : a∗ ≤ d} ≤ a∗ , i.e., the (C3). Finally, if a ≤ b then {c ∈ C(Σ) : b ≤ c} ⊆ {d ∈ C(Σ) : a ≤ d}, which trivially leads to the monotonicity condition a∗ ≤ b∗ equivalent to (C4). As stated in the previous proposition, a Moore family generates an operator which satisfies condition (C2)–(C4). In general, the normalized condition (C1) does not hold, as can be seen in the following example. Example 6. Let us consider the set {0, n1 , n2 , . . . n−1 n , 1} with the induced complete lattice structure given by the usual order relation on numbers. Then, the subset { n1 , n2 , . . . n−1 n , 1} is a Moore family, but according to equation (19), 0∗ = n1 . Another important notion is the one of interior lattice. Definition 7. An interior lattice is a system Σ, ∧, ∨, o , 0, 1 where the mapping : Σ → Σ, that associates to any element a from Σ its interior ao ∈ Σ, is an interior operation, i.e., it satisfies the followings:
Another important notion is that of an interior lattice.

Definition 7. An interior lattice is a system ⟨Σ, ∧, ∨, °, 0, 1⟩ where the mapping ° : Σ → Σ, that associates with any element a from Σ its interior a° ∈ Σ, is an interior operation, i.e., it satisfies the following:

(I1) 1° = 1   (normalized)
(I2) a° ≤ a   (decreasing)
(I3) a° = a°°   (idempotent)
(I4) (a ∧ b)° ≤ a° ∧ b°   (sub–multiplicative)
Also in this interior case the following (non–equational and pure poset) result holds.
Lemma 3. Under conditions (I1)–(I3) the sub–multiplicative property (I4) is equivalent to the monotonicity condition:

a ≤ b implies a° ≤ b°
Given an interior operator, the subset of open elements is defined as the collection of elements which are equal to their interior. Formally, O(Σ) = {a ∈ Σ : a = a°}. Also the set O(Σ) is not empty, since 1 is open as a consequence of (I1); moreover, dually to the proof of condition (M1) in proposition 8, one has that 0 is also open. Further, in general O(Σ) does not coincide with the set of all elements Σ, since there may exist elements which are not open. Condition (I3) says that for every a ∈ Σ the element a° is open.

In general, the two subsets of open and closed elements do not coincide, and neither is a subset of the other. Their intersection is the set of clopen elements CO(Σ) := C(Σ) ∩ O(Σ), which is not empty because it contains the least element 0 and the greatest element 1 of the lattice Σ. The equivalence between rough approximation spaces and lattices with interior and closure is given by the following result.

Theorem 2
(i) Suppose an approximation space A = ⟨Σ, L(Σ), U(Σ)⟩ and for arbitrary a ∈ Σ define a° := l(a) and a∗ := u(a). Then ⟨Σ, °, ∗⟩ is a lattice equipped with a Tarski interior operation ° and a Tarski closure operation ∗ such that O(Σ) = L(Σ) and C(Σ) = U(Σ).
(ii) Suppose a lattice equipped with a Tarski interior and a Tarski closure operation A = ⟨Σ, °, ∗⟩ and define L(Σ) := O(Σ) and U(Σ) := C(Σ). Then ⟨Σ, L(Σ), U(Σ)⟩ is a rough approximation space in which for arbitrary a ∈ Σ it is l(a) := a° and u(a) := a∗.
(iii) Let A = ⟨Σ, L(Σ), U(Σ)⟩ be a rough approximation space. Then applying (i) and then (ii) one recovers A.
(iv) Let A = ⟨Σ, °, ∗⟩ be a lattice equipped with a Tarski interior and a Tarski closure operator. Then applying (ii) and then (i) one recovers A.
Proof. (i) We treat the upper/closure case (the lower/interior case can be proved in a dual way).
(C1) From (Up1) we have that 0 ≤ u(0) = 0∗; but owing to condition (3) in definition 1 it is 0 ∈ U(Σ), and so by (Up3) applied to γ = 0 one has that condition 0 ≤ 0 implies 0∗ = u(0) ≤ 0, i.e., 0∗ = 0.
(C2) is nothing else than (Up1) using a∗ = u(a).
(C3) Applying (Up1) to a∗ we get a∗ ≤ (a∗)∗. On the other hand, from (Up2), it is a∗ = u(a) ∈ U(Σ); so this condition and a∗ ≤ a∗, owing to (Up3), lead to (a∗)∗ = u(a∗) ≤ a∗.
(C4) By (Up1), a ∨ b ≤ (a ∨ b)∗, with (a ∨ b)∗ ∈ U(Σ) as a consequence of (Up2). Moreover, the order chain a, b ≤ a ∨ b ≤ (a ∨ b)∗ and (Up3) applied to γ = (a ∨ b)∗ imply a∗, b∗ ≤ (a ∨ b)∗, and so by the definition of l.u.b., a∗ ∨ b∗ ≤ (a ∨ b)∗. The identity C(Σ) = U(Σ) is nothing else than proposition 1.
(ii) Also in this case we prove the closure/upper case, since the interior/lower one is dual. Then, considering the set C(Σ) as the collection U(Σ) of upper crisp elements and setting u(a) := a∗, (Up1) is a reformulation of (C2). Moreover, (C3) implies that a∗ ∈ U(Σ), i.e., (Up2): u(a) ∈ U(Σ). Finally, applying (C4a) to the case b = γ ∈ U(Σ) we get that a ≤ γ implies a∗ ≤ γ∗ = γ, i.e., (Up3).
Points (iii) and (iv) are trivial to prove.

In this way we have shown the isomorphism between the structure ⟨Σ, L(Σ), U(Σ)⟩ of an approximation space based on the lattice Σ and satisfying axioms (Ax1) and (Ax2), and the structure ⟨Σ, °, ∗⟩ based on the same lattice Σ and equipped with an interior and a closure operation satisfying conditions (I1)–(I4) and (C1)–(C4), respectively. Clearly, the set of crisp elements Σc = LU(Σ) of a rough approximation space coincides with the set of clopen elements CO(Σ). Let us note that as a consequence of point (i) of this theorem, in any approximation space the lower and upper approximation mappings l and u can be considered as Tarski interior and closure operations, and so the results proved about these operations can be applied. In particular, point (M2) of proposition 8 leads to the following.

Corollary 2. The sub-poset L(Σ) (resp., U(Σ)) of any approximation space R is a join (resp., meet) sub-semilattice of the lattice Σ.

Definition 8. A structure ⟨Σ, ∧, ∨, ′, ∗, 0, 1⟩ is a Tarski closure lattice (short for: a de Morgan lattice with a Tarski closure operation) iff the following hold:
(CdM1) The substructure ⟨Σ, ∧, ∨, ′, 0, 1⟩ is a de Morgan lattice.
(CdM2) The mapping ∗ : Σ → Σ is a Tarski closure operation that associates with any element a from Σ its closure a∗ ∈ Σ.

A dual notion is the one of interior de Morgan lattice.

Definition 9. A Tarski interior lattice is a system ⟨Σ, ∧, ∨, ′, °, 0, 1⟩ where:
(IdM1) The substructure ⟨Σ, ∧, ∨, ′, 0, 1⟩ is a de Morgan lattice.
(IdM2) The mapping ° : Σ → Σ is a Tarski interior operation that associates with any element a from Σ its interior a° ∈ Σ.

Clearly, the notions of Tarski interior lattice and Tarski closure lattice are dual, in the sense that in any Tarski interior lattice it is possible to define a Tarski closure operator and vice versa, obtaining an equivalence between the two structures.

Theorem 3
(i) Suppose a Tarski closure lattice T = ⟨Σ, ∧, ∨, ′, ∗, 0, 1⟩. Define the map ° : Σ → Σ by the law: ∀a ∈ Σ, a° := ((a′)∗)′. Such a map is a Tarski interior operator and T^i = ⟨Σ, ∧, ∨, ′, °, 0, 1⟩ is a Tarski interior lattice.
(ii) Suppose a Tarski interior lattice T = ⟨Σ, ∧, ∨, ′, °, 0, 1⟩. Define the map ∗ : Σ → Σ by the law: ∀a ∈ Σ, a∗ := ((a′)°)′. Such a map is a Tarski closure operator and T^c = ⟨Σ, ∧, ∨, ′, ∗, 0, 1⟩ is a Tarski closure lattice.
(iii) Let T = ⟨Σ, ∧, ∨, ′, ∗, 0, 1⟩ be a Tarski closure lattice. Then (T^i)^c = T.
(iv) Let T = ⟨Σ, ∧, ∨, ′, °, 0, 1⟩ be a Tarski interior lattice. Then (T^c)^i = T.

Proof. (i) We have to prove that the map ° satisfies conditions (I1)–(I4) of Definition 7.
(I1) Since 1′ = 0, we obtain, by definition, 1° = (0∗)′. Thus, by the normalization (C1) and double negation, we can conclude 1° = 0′ = 1.
(I2) By the increasing property (C2) applied to a′ we have a′ ≤ (a′)∗. Thus, by contraposition and double negation: a° := ((a′)∗)′ ≤ a′′ = a.
(I3) By definition and double negation, a°° = (((a′)∗)∗)′. Thus, by the idempotency property (C3), we obtain a°° = ((a′)∗)′ = a°.
(I4) Suppose a ≤ b. By contraposition and monotonicity (C4a), (b′)∗ ≤ (a′)∗. Thus, again by contraposition, a° := ((a′)∗)′ ≤ ((b′)∗)′ = b°.
(ii) By duality. Finally, (iii) and (iv) are trivial.

Hence the de Morgan complementation determines a duality relation between the Tarski closure and interior of any element a according to:

(a°)′ = (a′)∗ and (a∗)′ = (a′)°
Such a relationship can be represented by the diagram of figure 5.

[Fig. 5. de Morgan duality between Tarski interior and closure]
If in the same structure we are able to define both the interior and the closure operators, it is meaningful and interesting to consider the relation between closed and open elements. It is trivial to prove that an element a is closed iff a′ is open, and vice versa an element a is open iff a′ is closed. In this way there is a duality between open and closed elements determined by the de Morgan negation: O(Σ) = {e ∈ Σ : e′ ∈ C(Σ)} and C(Σ) = {f ∈ Σ : f′ ∈ O(Σ)}. In this abstract context we are also interested in the so–called exterior of an element a ∈ Σ, defined as the open element e(a) := (a∗)′ ∈ O(Σ).

Recalling that a rough approximation space R = ⟨Σ, L(Σ), U(Σ)⟩ can be considered as the "pasting" of a lower approximation space LR := ⟨Σ, L(Σ)⟩ and an upper approximation space UR := ⟨Σ, U(Σ)⟩, we can restate theorem 2 in the following more precise way.
(1) Any lower approximation space ⟨Σ, L(Σ)⟩ satisfying (Ax1) is equivalent to a Tarski interior space ⟨Σ, °⟩ satisfying conditions (I1)–(I4).
(2) Any upper approximation space ⟨Σ, U(Σ)⟩ satisfying (Ax2) is equivalent to a Tarski closure space ⟨Σ, ∗⟩ satisfying conditions (C1)–(C4).

Some remarks on the equivalence between rough approximation spaces and Tarski closure (interior) lattices are in order.
(i) The non–equational equivalent conditions expressed in definition 1 capture the intuitive aspects of an expected approximation space. Indeed, as stressed, the semantical content of conditions (In1)–(In3) (resp., (Up1)–(Up3)) is that the lower (resp., upper) approximation l(a) (resp., u(a)) is the best approximation of the approximable element a from the bottom (resp., top) by open (resp., closed) elements.
(ii) Theorem 2 can be summarized by the statement that the category of de Morgan lattices with a closure (and induced interior) operator and that of de Morgan lattices with an upper (and induced lower) approximation map are equivalent (isomorphic) to each other, according to general category theory [35]. Precisely, starting with a Tarski closure lattice, forming its upper approximation space version and then coming back to the corresponding Tarski closure structure, one recovers the original structure. Vice versa, starting from an upper approximation space, forming its Tarski closure lattice version and then coming back to the corresponding upper approximation space structure, one recovers the original structure.
(iii) This means that if one works in the context of a generalization of the closure operator conditions, for instance weakening (or suppressing) some of the above conditions (C1)–(C4) of definition 6, then some of the conditions expressed by (Up1)–(Up3) are correspondingly lost, or modified. Such a choice, as stressed before, needs a justification from the interpretative point of view.

In section 4.3 we have seen that for any covering space (X, γ) the structure Rγ = ⟨P(X), Lγ(X), Uγ(X), lγ, uγ⟩ is a concrete model of an approximation space.
Making use of theorems 2 and 3 and the specific properties of the involved models, one obtains the following.

Proposition 10. Let (X, γ) be a covering space. Then, on the Boolean lattice ⟨P(X), ∩, ∪, c, ∅, X⟩, one has that:
(Cov1) the mapping ∗ : P(X) → P(X) associating with any subset A of X its upper pseudo–closed approximation A∗ = uγ(A) = ∩{U ∈ Uγ(X) : A ⊆ U} is a Tarski closure operation;
(Cov2) the mapping ° : P(X) → P(X) associating with any subset A of X its lower pseudo–open approximation A° = lγ(A) = ∪{L ∈ Lγ(X) : L ⊆ A} is a Tarski interior operation.

Example 7. In the concrete covering discussed in example 4, the two subsets A = {4} and B = {6} are such that {4}∗ ∪ {6}∗ = {4, 5, 6} ⊂ {1, 4, 5, 6} = {4, 6}∗. So it is a concrete Tarski closure lattice. Similarly, the two sets C = {1, 2, 3, 5, 6} and D = {1, 2, 3, 4, 5} are such that the corresponding interiors satisfy the strict inclusion (C ∩ D)° = {1, 2, 3} ⊂ {1, 2, 3, 4} = C° ∩ D°. That is, contrary to what is asserted by equation (6) in [67], this is an example of a Tarski interior generated by a covering which is not topological, in the sense that the stronger conditions (a ∨ b)∗ = a∗ ∨ b∗ and (a ∧ b)° = a° ∧ b°, which characterize the Kuratowski (see [12]) topological closure and interior, respectively, with respect to the Tarski ones, do not hold.

A last remark: since any partition and any topology given by an open base are in particular coverings, the above results can also be applied in these two cases. In a forthcoming paper we will study the stronger properties of the interior and closure operators characterizing these two situations.
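The covering of example 4 is not reproduced in this excerpt, so the short Python sketch below uses a stand-in covering (names and data are illustrative assumptions) to show how the pseudo-open family Lγ(X) and the Tarski interior of proposition 10 can be computed, and how (C ∩ D)° can be strictly smaller than C° ∩ D°:

```python
from itertools import combinations

def pseudo_opens(covering):
    """The family L_gamma(X): the empty set plus all unions of granules."""
    opens = {frozenset()}
    for k in range(1, len(covering) + 1):
        for combo in combinations(covering, k):
            opens.add(frozenset.union(*combo))
    return opens

def interior(a, opens):
    """Lower pseudo-open approximation: the largest open set inside a."""
    result = frozenset()
    for o in opens:
        if o <= a:
            result |= o
    return result

# Stand-in covering (NOT the one of example 4) on the universe {1, 2, 3, 4}.
cov = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})]
opens = pseudo_opens(cov)
C, D = frozenset({1, 2, 4}), frozenset({2, 3, 4})
print(interior(C, opens) & interior(D, opens))  # frozenset({2})
print(interior(C & D, opens))                   # frozenset(): strictly smaller
```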
5.1 The Lattice–Algebraic Semantics of S4–Like Modal Logic Induced by Tarski Closure
In this section we want to analyze, in the Tarski lattice context, the algebraic semantics of some modal logical systems. We denote by ¬a the de Morgan complement a′ and by i(a) (resp., c(a)) the Tarski interior a° (resp., closure a∗) of a, in order to capture in a modal algebra the formalism of the usual propositional logic of a modal language based on the negation Not ¬ and the standard connectives And ∧ and Or ∨. The interior i is interpreted as a necessity operator and the closure c as a possibility operator (L and M in a standard modal language L whose sentences are denoted by Greek letters α, β, γ, . . .). Let us discuss now how the Tarski interior (and dually the closure) operator satisfies the algebraic versions of axioms and rules of some well known modal
principles. As usual, "when interpreting [complemented lattice] structures from the viewpoint of logic, it is customary to interpret the lattice elements [a, b, . . .] as propositions, the meet operation [∧] as conjunction [and], the join operation [∨] as disjunction [or], and the orthocomplement [¬] as negation [not]" [28]. To this purpose, let us quote [13, p. 212]: "intuitively, the points in Σ may be thought of as propositions – including 'truth' and 'falsity' (1 and 0) – closed under propositional analogues of negation, conjunction, disjunction, and necessitation (¬, ∧, ∨, and i). [...] It should be emphasized in this connection that the points in an algebraic model are not possible worlds, nor even, necessarily, sets of possible worlds." In this lattice context, "it is generally agreed that the partial ordering ≤ of a lattice [...] can be logically interpreted as a relation of implication, more specifically semantic entailment (see van Frassen [57]). [...] The implication relation, which is fundamental to logic, must not however be confused with an implication or conditional operation, which is a logical connective which, like conjunction and disjunction, forms propositions out of propositions, remaining at the same linguistic level" [27]. In this context, the partial order relation a ≤ b in Σ is rather the algebraic counterpart of the statement "α ⊃ β is true" with respect to some implication connective ⊃ involving sentences α and β of the object language L. In this sense the implication relation a ≤ b in the lattice Σ corresponds to semantic entailment among formulas of the language L. Hence, the statement "α semantically entails β" is not a formula of the object language, but occurs in the metalanguage. Thus, in order to express the lattice conditions in a formal way nearer to the usual logical formalism, a binary operation → on the lattice Σ is required, playing the role of the algebraic counterpart of the logical implication connective ⊃ of the language L, and assigning to each pair of lattice elements a, b ∈ Σ another lattice element a → b ∈ Σ. "Since not just any binary lattice operation should qualify as a material implication, we must determine what criteria must (should) be satisfied by a lattice operation in order to be regarded as a material implication. First, it seems plausible to require that every implication operation → be related to the implication relation (≤) in such a way that if a proposition a implies [is less or equal to] a proposition b, then the conditional proposition a → b is universally true, and conversely. [...] Translating this into the general lattice context, we obtain

a → b = 1 iff a ≤ b.   (20)

Here 1 is the lattice unit element, which corresponds to the universally true proposition" [28]. In this just quoted paper, this condition is assumed as one of the minimal implicative conditions. Let us enter into some formal detail, supposing that in our Tarski lattice structure there exists an implication operator → on Σ for which at least the minimal condition (20) is required to hold. Let us define as a truth functional tautology (also: universally valid sentence) any well formed formula, based on a Tarski interior lattice, which is equal to the universally true element 1. It is now easy to
prove that the i and c operators satisfy the algebraic versions of axioms and rules of some celebrated modal principles, once i is interpreted as a necessity operator and c as a possibility operator (L and M in the modal language L with implication connective ⊃). First of all, from the logic–algebraic semantic point of view, the Tarski interior lattice rules can be interpreted as follows:

(mod–1n) i(1) = 1. That is: if a sentence is true, then its necessitation is also true (the necessitation rule or, according to [13, p. 20], the N principle).
(mod–2n) i(a) ≤ a. In other words: necessity implies actuality (the characteristic T principle: whatever is necessary is so, L(α) ⊃ α; in other words, if necessarily α, then α [13, p. 6]).
(mod–3n) a ≤ b implies i(a) ≤ i(b). If a conditional and its antecedent are both necessary, then so is the consequent (which is the characteristic K principle of modal logic, L(α ⊃ β) ⊃ (L(α) ⊃ L(β)), see [13, p. 7]).
(mod–4n) i(a) = i(i(a)). Whatever is necessary is necessarily necessary (the characteristic 4–principle L(α) ↔ LL(α), see [13, p. 18]).
(mod–5n) c(a) = ¬(i(¬a)), which "embodies the idea that what is possible is just what is not–necessarily–not. [...] possibility is always expressible in terms of necessity and negation, and so is theoretically superfluous" (the Df♦ principle M(α) ↔ ¬L¬(α), see [13, p. 7]).

Let us enter into some details.

(N) The above algebraic version of the N principle can also assume the form "a = 1 implies i(a) = 1", corresponding at the level of the language L to the necessitation transformation rule of [29, p. 4]: "⊢ α implies ⊢ L(α)", i.e., if α is true so is L(α). In other words, according to [13, p. 7], "this corresponds to a rule of inference (RN, the rule of necessitation). It states that the necessitation α of a valid sentence is itself always valid." In symbols: from ⊢ α infer ⊢ L(α).
(T) According to the above minimal requirement (20) about an implication connective, one has that (mod-2n) can be expressed as i(a) → a = 1, the algebraic version of the T modal tautology L(α) ⊃ α.
(K) As a consequence of "a ≤ b implies i(a) ≤ i(b)", under the axiom (mod-2n) we now prove that "if the conditional and its antecedent are both necessarily true, then the consequent is necessarily true" [13, p. 7], i.e., formally, that

i(a → b) = 1 and i(a) = 1 imply i(b) = 1.   (21)

Indeed, the condition i(a → b) = 1 together with (mod-2n), in the form i(a → b) ≤ a → b, implies a → b = 1, from which a ≤ b follows by (20). So, if also i(a) = 1 (i.e., if the conditional and its antecedent are both necessarily true), then from the (mod-3n) principle of the necessity operator applied to a ≤ b one obtains that also i(b) = 1 (i.e., the consequent is necessarily true). Let us show that property (21) is equivalent to

i(a → b) → (i(a) → i(b)) = 1.   (22)
Let (21) be true. Under the hypothesis i(a → b) = 1, i.e., a ≤ b, the fact that "i(a) = 1 implies i(b) = 1" leads to the identity i(a) = i(b) = 1, written in particular as i(a) ≤ i(b), which by (20) means that i(a) → i(b) = 1. Therefore, both conditions i(a → b) = 1 and i(a) = 1 imply that also i(a) → i(b) = 1, and so under these conditions one gets i(a → b) = i(a) → i(b) = 1, which in particular can be written as i(a → b) ≤ i(a) → i(b), i.e., i(a → b) → (i(a) → i(b)) = 1. Conversely, let i(a → b) → (i(a) → i(b)) = 1, i.e., i(a → b) ≤ i(a) → i(b). If i(a → b) = 1, then also i(a) → i(b) = 1, i.e., i(a) ≤ i(b), and so the condition i(a) = 1 implies i(b) = 1. Finally, let us prove that (22) implies condition (mod-3n). Under the hypothesis a ≤ b, i.e., a → b = 1, equation (22) (taking into account (mod-1n)) assumes the form 1 → (i(a) → i(b)) = 1, i.e., 1 ≤ i(a) → i(b), which leads to i(a) → i(b) = 1, i.e., i(a) ≤ i(b). Thus, we have proved the equivalence of (mod-3n) with (22), the algebraic version of the characteristic K principle formalized in a modal language L as the tautology L(α ⊃ β) ⊃ (L(α) ⊃ L(β)).
(5) This is nothing else than point (ii) of theorem 3.

A particular consequence of lemma 3 is that the monotonicity condition of the interior is equivalent to the submultiplicative rule i(a ∧ b) ≤ i(a) ∧ i(b), which can be written as the "tautology" i(a ∧ b) → (i(a) ∧ i(b)) = 1, the algebraic version of the modal principle M expressed as L(α ∧ β) ⊃ (Lα ∧ Lβ) (see [13, p. 20]). That is, under T the algebraic version of the modal principle K is equivalent to the algebraic version of the modal principle M:

i(a ∧ b) ≤ i(a) ∧ i(b)   (M principle)

In general the following property of modalities is not valid:

i(a) ∧ i(b) ≤ i(a ∧ b)   (C principle)

as shown in the following example.

Example 8. Let us consider the Tarski distributive lattice given in the Hasse diagram of figure 6. Then, we have that i(b ∧ c) = i(a) = 0 < a = i(b) ∧ i(c).
[Fig. 6. Hasse diagram of the six-element distributive lattice with bottom 0 = ¬1 = i(a), then a = ¬d, the incomparable pair b = ¬b = i(b) and c = ¬c = i(c), then d = ¬a = i(d), and top 1 = ¬0]
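Since the lattice of figure 6 is finite, the failure of C can be checked mechanically. The following Python sketch (not from the paper) encodes a hypothetical reconstruction of the diagram, with 0 < a < b, c < d < 1 and b, c incomparable, as read off from Example 8, and verifies that M holds while C fails:

```python
# Hypothetical encoding of the lattice of Fig. 6 (assumed structure).
elems = ['0', 'a', 'b', 'c', 'd', '1']
below = {  # x -> set of elements <= x
    '0': {'0'}, 'a': {'0', 'a'},
    'b': {'0', 'a', 'b'}, 'c': {'0', 'a', 'c'},
    'd': {'0', 'a', 'b', 'c', 'd'}, '1': set(elems),
}
def leq(x, y): return x in below[y]
def meet(x, y):
    lower = below[x] & below[y]          # common lower bounds
    return max(lower, key=lambda z: len(below[z]))  # their greatest element
i = {'0': '0', 'a': '0', 'b': 'b', 'c': 'c', 'd': 'd', '1': '1'}

# M principle i(x ∧ y) <= i(x) ∧ i(y): holds for all pairs.
print(all(leq(i[meet(x, y)], meet(i[x], i[y])) for x in elems for y in elems))
# C principle fails at x = b, y = c: i(b ∧ c) = 0 < a = i(b) ∧ i(c).
print(i[meet('b', 'c')], meet(i['b'], i['c']))  # prints: 0 a
```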
These results can be compared with the axiomatization of system S4 in [13], where of course the underlying basic structure of any algebraic model is the Boolean one. But what we have proved here is that, in order to obtain the algebraic versions of the characteristic T, K, 4, and Df♦ axioms plus the N rule of inference, it is sufficient to consider a de Morgan lattice. To be precise, we can state the following.

(AM1) Let ⟨Σ, ∧, ∨, ¬, i, 0, 1⟩ be a Tarski interior lattice. Then this structure is a modal algebra of an S4–like logical system with respect to the necessity connective i, where the term "like" stresses that in general:
(a) it is based on a de Morgan lattice instead of a Boolean algebra, as assumed in any modal algebra (see [13, p. 212]);
(b) the modal principles N, T, K, 4, and Df♦ are satisfied, and so also the modal principle M, equivalent to K, but not the "dual" modal principle C (which is on the contrary accepted in the standard S4 Lewis modal system);
(c) from Df♦ and T there follows the further property i(a) ≤ c(a), i.e., according to (20), that i(a) → c(a) = 1, the algebraic version of the modal principle D that "whatever is necessary is possible", L(α) ⊃ M(α) (see [13, p. 16]).

So, summarizing, the in-general non-Boolean structure of the lattice and the non-validity of principle C are the main differences between the standard S4 modal system and our S4–like Tarski modal algebra (as widely discussed in the forthcoming subsection 5.2). In general, besides the rejected modal principle C, also the following properties of modalities are not valid:
a ≤ i(c(a))   (B principle)

i(a) = c(i(a)), c(a) = i(c(a))   (5 principle)
This is shown by the following example.

Example 9. Let us consider the Tarski non-distributive (and so non–Boolean, but Kleene) lattice given in the Hasse diagram of figure 7.

[Fig. 7. Hasse diagram of the six-element Kleene lattice with bottom 0 = ¬1 = i(a) = i(b), elements a = ¬d = c(a), b = ¬c = c(b), c = ¬b = i(c), d = ¬a = i(d), and top 1 = ¬0 = c(c) = c(d)]
Then, we have that b is a closed element, i.e., b = c(b), and i(c(b)) = i(b) = 0. So, c(b) ≠ i(c(b)) and i(c(b)) < b. Further, c is an open element, i.e., i(c) = c, and c(i(c)) = 1. So i(c) ≠ c(i(c)). On the contrary, in this lattice the C principle holds.

5.2 A Comparison among the Algebraic-Semantic Tarski Closure Lattice, the Kripke Standard Model Semantics and the Syntactic Normal Systems of Modal Logic
The discussion about the S4–like model performed in the previous section 5.1 has been carried out in the so–called context of algebraic semantics, in which modal connectives are interpreted as suitable operators of a de Morgan lattice structure ⟨Σ, ∧, ∨, ′, 0, 1⟩. This kind of approach is based on the so–called modal algebra structures and gives rise to the study of modal systems by algebraic techniques. On the other hand, "over a period of three decades or so from the early 1930s there evolved [another kind] of mathematical semantic for modal logic: [...] Relational semantic uses relational structures, often called Kripke models, whose elements are thought of variously as being possible worlds, moments of time, evidential situations, or states of a computer" [23]. An important point must be discussed here. The universally adopted Kripke models of modal logics [33] are all based on the so–called standard models, i.e., pairs (X, R) consisting of a
nonempty universe of possible worlds X and a binary relation R on X, formally R ⊆ X × X [13, p. 63]. Let us stress that, according to this definition of standard model, no particular requirement is imposed on R, which is a generic binary relation. In the context of a Kripke model, based on a nonempty set X as universe of possible worlds, the complemented lattice structure taken into consideration is that of the power set P(X) of all possible subsets of X, equipped with the usual set-theoretic intersection, union, and complementation operations, which allows one to consider the Boolean algebra ⟨P(X), ∩, ∪, c, ∅, X⟩. As usual, the subsets A, B, . . . ∈ P(X) are interpreted as propositions of the formal language α, β, . . . ∈ L. In particular, the generic subset A of the universe X interpreting the sentence α of the language L can be considered just as the collection of those possible worlds at which the corresponding sentence is true, the set intersection A ∩ B as the language conjunction α ∧ β, the set union A ∪ B as the language disjunction α ∨ β, and the set complement Ac = X \ A as the language negation ¬α. Differently from the Tarski algebraic model, based on an abstract de Morgan lattice, any standard Kripke model is based on a Boolean algebra of sets with additional operations interpreted as necessity and possibility. In the context of the possible worlds universe, an interesting "account of necessity and possibility [...] dates to the philosopher Leibniz: a proposition is necessary if it holds at all possible worlds, possible if it holds at some. The idea is that different things may be true at different possible worlds, but whatever holds true at every possible world is necessary, while that which holds at at least one possible world is possible" [13, p. 1]. "This Leibniz conception of necessity and possibility [can be modified] by introducing an element of relative possibility. The result is a much more supple notion of validity, one that greatly reduces the stock of principles that are bound to hold" [13, p. 67]. "The interpretation of the relation R in a standard model [...] may be thought of as a relative possibility or, perhaps better, relevance. We shall write xRy to mean that the world y is possible relative to – or is relevant to – the world x. [...] The truth conditions of modal sentences use the relation R of relative possibility as follows: (Mo-1) the necessity of A is true at a world x if and only if A is true at every world z that is possible relative to x. And (Mo-2) the possibility of A is true at x if and only if A is true at some world z possible relative to x. [...] Instead of truth at every or some possible world, we now have truth at every or some world possible relative to the given one." [13, p. 68]. Denoting by lR(A) (resp., uR(A)) the representation of the necessity (resp., possibility) of the sentence represented by the subset A in a Kripke standard model (X, R), all this can be formalized as follows:

lR(A) := {x ∈ X : ∀z (x R z ⇒ z ∈ A)}   (23a)
uR(A) := {x ∈ X : ∃z (x R z and z ∈ A)}   (23b)
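For a finite standard model the truth conditions (23a)–(23b) can be computed directly. A minimal Python sketch (not from the paper; the model below is a made-up example) also illustrates that the C principle holds in Kripke models, unlike in the Tarski algebraic model:

```python
def l_R(A, X, R):
    """Necessity (23a): worlds whose every R-successor lies in A."""
    return {x for x in X if all(z in A for z in X if (x, z) in R)}

def u_R(A, X, R):
    """Possibility (23b): worlds with at least one R-successor in A."""
    return {x for x in X if any(z in A for z in X if (x, z) in R)}

X = {1, 2, 3}
R = {(1, 1), (1, 2), (2, 2), (3, 3)}   # reflexive, hence a normal model
A, B = {1, 2}, {2, 3}
# In standard models the box distributes over intersections (C principle):
print(l_R(A, X, R) & l_R(B, X, R) == l_R(A & B, X, R))  # True
```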
It is an easy result that in any class of standard models the schemas N, M, and C are valid (see [13, p. 72]), and that none of the schemas D, T, B, 4, and 5 is valid in the class of all standard models [13, theorem 3.4, p. 76]. Of course, a standard model can be specialized by adding some further conditions on the behavior of the involved binary relation. Making reference to Kripke [33], "a normal model structure (nms) is an ordered [pair] (X, R), where X is a non empty set, and R a reflexive relation defined on X. If R is transitive, we call the nms a S4 model structure; if R is symmetric, we call it a Brouwersche model structure; if R is an equivalence relation, we call it a S5 model structure." Thus, from the Kripke point of view a normal model is a standard model with the further requirement of reflexivity of the binary relation, and in this case both the schemas D and T are valid. In order to have the validity of all the principles N, M, C, D, T and the further principle 4, the Kripke model must be based on a quasi–ordering, i.e., reflexive and transitive, relation [13, p. 83]. Contrary to this result, the previously discussed algebraic model of modal logic based on a Tarski closure lattice satisfies only the principles N and M (besides D, T, and 4), whereas the further principle C (besides B and 5) is not valid. In other words, the difference of the Kripke (Boolean power set) model with respect to the Tarski algebraic (de Morgan lattice) model concerns the single principle C, true in any quasi–ordering standard model and in general not valid in the algebraic Tarski one. This difference leads also to the fact that in this Tarski closure lattice model the following rules of inference, as consequences of N and of (mod-3n) written in the equivalent formulation (21), are valid:

(RN) a = 1 implies i(a) = 1
(RM) a → b = 1 implies i(a) → i(b) = 1
Note that at the algebraic-semantic level of the Tarski closure lattice structures, the following rule of inference is not valid:

(RR) (a ∧ b) → c = 1 implies (i(a) ∧ i(b)) → i(c) = 1
Indeed, (RR) can be written as "(a ∧ b) ≤ c implies i(a) ∧ i(b) ≤ i(c)", and in the particular case c = a ∧ b this latter would lead to the non-valid C principle "i(a) ∧ i(b) ≤ i(a ∧ b)". From the syntactic point of view, the above results can be reformulated as the rules of inference, accepted in any normal system of modal logic:

RN. from α infer L(α)
RM. from α ⊃ β infer L(α) ⊃ L(β)

whereas the inference rule

RR. from (α ∧ β) ⊃ γ infer (L(α) ∧ L(β)) ⊃ L(γ)

is not valid.
The inference rule RR is on the contrary valid in any normal system of modal logic, see [13, theorem 4.2, p. 114], where indeed the larger set of rules of inference

RK. from (α1 ∧ . . . ∧ αn) ⊃ α infer (L(α1) ∧ . . . ∧ L(αn)) ⊃ L(α)   (n ≥ 0)

is assumed, of which RN and RM are the particular cases n = 0 and n = 1, whereas RR is the particular case n = 2. As proved in the above quoted theorem 4.2 of [13], every normal system of modal logic has the rules of inference RN, RM, RR, and a further RE, and the principles N, M, C, R, K, which together define the smallest normal system of modal logic, called K. The extensions of system K are obtained by adding as theorems the schemas D, T, B, 4, and 5, in all possible combinations, to obtain fifteen distinct systems [13, p. 131]. One of these important systems is KT4, well known as the Lewis system S4 (in which schema D is true). Thus, a syntactic formalization of a Tarski modal system differs from the Lewis S4 system at least by the non-accepted inference rule RR and principle C. This is the reason for calling the above-discussed Tarski closure lattice structure a modal algebra of an S4–like modal system. For an analysis of the role of binary relations on a universe X as a possible environment of rough set theory, see for instance [41,65,64,66], [42,20].

5.3 The Tarski Closure Induced from an Incomplete Information System
In the comparison between the Tarski algebraic model of modal logic, given by a closure operation on a de Morgan lattice, and the Kripke model on the Boolean algebra of the power set ⟨P(X), ∩, ∪, c, ∅, X⟩ of a universe X, we have seen that whatever binary relation is adopted in the latter, the property C is always validated. In this section we expose a Tarski algebraic model based on the above Boolean algebra of subsets which does not satisfy condition C. This model refers to the so–called incomplete Information Systems (IS), consisting of a structure ⟨X, Att, Val, F⟩, similar to the one described in section 2.2 characterizing complete IS, but in which the information mapping F is partially defined on a subset (X × Att)p of the set X × Att under the consistency condition:

(con) for any object x there must exist at least one attribute ax such that (x, ax) ∈ (X × Att)p, i.e., no object is redundant with respect to the IS, in the sense that there must exist at least one piece of information inside the IS which can be obtained about it (otherwise this object could be suppressed from the IS without any loss of information).
In the sequel we use the null symbol ∗ to denote the fact that the value possessed by an object x with respect to the attribute a is unknown (missing). Formally, we set F(x, a) = ∗ when (x, a) ∉ (X × Att)p. Let us consider an incomplete IS and a family A of attributes from Att (A ⊆ Att); then a binary relation of A–indiscernibility IA ⊆ X × X about objects from X can be introduced according to:

(ind) let x, y ∈ X; then (x, y) ∈ IA (also written x IA y) iff ∀a ∈ A, either F(x, a) = F(y, a), or F(x, a) = ∗, or F(y, a) = ∗
This binary relation IA on the set of objects X is reflexive and symmetric, but in general not transitive; it is called a similarity relation after Poincaré [46] and (as seen before) used in the context of Kripke semantics of modal logic, or a tolerance relation after Zeeman [68], and it is adopted in the context of incomplete IS [34,50]. For any point x of the universe, let us construct the open (or indiscernibility) granule γA(x) := {y ∈ X : x IA y}, whose collection γA = {γA(x) : x ∈ X} is trivially a covering of X (as a consequence of the reflexivity condition x ∈ γA(x) for every point x). Now, we can apply to the covering γA the results of section 4.3. That is, we can construct the collection LA(X) of all open sets of the covering as all set-theoretic unions of open granules (plus the empty set): formally, L ∈ LA(X) iff ∃Y ⊆ X s.t. L = ∪{γA(y) : y ∈ Y}. Dually, a closed granule is any set-theoretic complement of an open granule, i.e., any ωA(x) := (γA(x))c, and a closed set is any set-theoretic intersection of closed granules, whose collection (plus the whole universe) is denoted by UA(X). In particular, as a consequence of the fact that LA(X) is closed with respect to arbitrary unions and UA(X) with respect to arbitrary intersections, for any approximable subset A of X its lower (open) approximation and upper (closed) approximation are defined as

lA(A) := ∪{L ∈ LA(X) : L ⊆ A} and uA(A) := ∩{U ∈ UA(X) : A ⊆ U}   (24)

Recall the following result, which is a corollary of proposition 10:

Proposition 11. In the context of an incomplete IS, for any family of attributes A ⊆ Att, the correspondence A → lA(A) defines a Tarski interior operator and the correspondence A → uA(A) defines a Tarski closure operator. For any subset A of X, the open–closed pair rA(A) = (lA(A), uA(A)) is the rough approximation of A relative to the family A of attributes.

We give now an example of the two global approximations by closed and open elements, and of the local approximations.

Example 10. Table 2 gives the representation of an incomplete IS based on a set X of 6 objects describing flats and involving 4 possible attributes of flats.

Table 2. Flats incomplete information system

Flat | Price | Rooms | Down-Town | Furniture
x1   | high  | 2     | yes       | *
x2   | high  | *     | yes       | no
x3   | *     | 2     | yes       | no
x4   | low   | *     | no        | no
x5   | low   | 1     | *         | no
x6   | *     | 1     | yes       | *
If one considers the set of all attributes (i.e., A = Att) and the induced similarity relation, we have the following open granules (where we omit for simplicity the subscript A):

γ(x1) = {x1, x2, x3}    γ(x2) = {x1, x2, x3, x6}    γ(x3) = {x1, x2, x3}
γ(x4) = {x4, x5}        γ(x5) = {x4, x5, x6}        γ(x6) = {x2, x5, x6}
whose collection γ constitutes a covering of X, which trivially is neither a partition (γ(x1) ∩ γ(x6) ≠ ∅) nor an open base for a topology (γ(x1) ∩ γ(x6) = {x2} ∉ γ). The corresponding family of pseudo–open sets is

LA(X) = {∅, {x4, x5}, {x1, x2, x3}, {x2, x5, x6}, {x4, x5, x6}, {x1, x2, x3, x6}, {x2, x4, x5, x6}, {x1, x2, x3, x4, x5}, {x1, x2, x3, x5, x6}, X}

which is not a real topology. The collection of pseudo–closed granules induced by Att is then the family

ω = {{x4, x5}, {x4, x5, x6}, {x1, x2, x3}, {x1, x3, x4}, {x1, x2, x3, x6}}

with the corresponding family of pseudo–closed sets:

UA(X) = {∅, {x4}, {x6}, {x4, x5}, {x1, x3}, {x1, x2, x3}, {x4, x5, x6}, {x1, x3, x4}, {x1, x2, x3, x6}, X}

Let us consider the subset of flats A = {x1, x2, x3, x4}. The two corresponding Tarski lower (interior) and upper (closure) approximations are

l({x1, x2, x3, x4}) = {x1, x2, x3} and u({x1, x2, x3, x4}) = X   (25)

whereas the local lower and upper approximations are l(l)(A) = {x1, x3} and u(l)(A) = X = u(A), with the strict inclusion l(l)(A) ⊂ l(A). In this way, the pseudo-open/pseudo-closed rough approximation r(A) = ({x1, x2, x3}, X) is evidently better than the local rough approximation r(l)(A) = ({x1, x3}, X). On the other hand, with respect to the set B = {x1, x4}, the lower approximations are the open and the local ones l(B) = l(l)(B) = ∅ and the closed one l(cl)(B) = ∅. The upper approximations are the closed one u(B) = {x1, x3, x4}, the local one u(l)(B) = {x1, x2, x3, x4, x5}, and the open one u(op)(B) = X, with the strict inclusions u(B) ⊂ u(l)(B) ⊂ u(op)(B). In this way, also in this case the pseudo-open/pseudo-closed rough approximation r(B) = (∅, {x1, x3, x4}) is better than the local one r(l)(B) = (∅, {x1, x2, x3, x4, x5}), which in its turn is better than the pseudo-closed/pseudo-open one r(op)(B) = (∅, X). Finally, let us see that from a modal standpoint the operators l and u constitute a genuine model of an S4–like system, since they do not satisfy conditions C, B and 5. Indeed, if we consider the set of objects H = {x2, x3, x5, x6}, we have l(A) ∩ l(H) = {x1, x2, x3} ∩ {x2, x5, x6} = {x2} ⊈ ∅ = l({x2, x3}) = l(A ∩ H), i.e., the condition C does not hold. Then, considering the set B, whose approximations are computed above, we have B ⊈ ∅ = l(c(B)) and u(B) = {x1, x3, x4} ≠ ∅ = l(c(B)); that is, both principles B and 5 are not satisfied.
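The granules of Example 10 can be recomputed mechanically from Table 2 via condition (ind). A minimal Python sketch (illustrative names, not from the paper):

```python
STAR = '*'
table = {  # Table 2: Price, Rooms, Down-Town, Furniture
    'x1': ['high', '2', 'yes', STAR],
    'x2': ['high', STAR, 'yes', 'no'],
    'x3': [STAR, '2', 'yes', 'no'],
    'x4': ['low', STAR, 'no', 'no'],
    'x5': ['low', '1', STAR, 'no'],
    'x6': [STAR, '1', 'yes', STAR],
}

def similar(x, y):
    """Condition (ind): on every attribute, equal values or a missing one."""
    return all(a == b or a == STAR or b == STAR
               for a, b in zip(table[x], table[y]))

granule = {x: {y for y in table if similar(x, y)} for x in table}
print(sorted(granule['x1']))   # ['x1', 'x2', 'x3']
print(sorted(granule['x6']))   # ['x2', 'x5', 'x6']
```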
For more details about tolerance rough sets, see [52,54]. Incomplete information systems have been widely studied by Grzymala-Busse, for instance [26,24,25]; see also [53,36].

5.4 The Similarity Kripke and the Preclusive Galois Connection Models: A Comparison
In section 5.3 we have seen that the binary relation of indistinguishability between objects of the universe X of an incomplete IS, relative to a family A of attributes, is a similarity (reflexive and symmetric) relation IA. On the other hand, in section 5.2 we discussed the role of binary relations R on a universe of possible worlds X as Kripke standard models of modal logic in the context of the Boolean algebra ⟨P(X), ∩, ∪, c, ∅, X⟩. In particular we stressed that in the context of standard models the schemas N, M and C are valid. The further condition of reflexivity of R gives rise to the Kripke normal model, in which schema T (and so a fortiori schema D) is also valid. Similarity relations in this Kripke environment produce the Brouwersche model structure, in which the further

A ⊆ lR(uR(A))   (B principle)

is valid. The above discussed approach to similarity (reflexive, symmetric, but in general non transitive) relations can be considered as a particular case of the more general theory of Galois connections [22,40], whose fundamental points are now outlined with reference to the book [3]. Precisely, let G be a binary relation on the cartesian product X × V of two nonempty sets, i.e., G ⊆ X × V, with the notation xGv denoting the fact that (x, v) ∈ G. For any subsets A ⊆ X and Z ⊆ V, define the polar A# of A and the polar Z† of Z as follows:

A# := {v ∈ V : ∀x ∈ A, xGv} and Z† := {x ∈ X : ∀z ∈ Z, xGz}   (26)
So, A# ⊆ V is a subset of V whose elements v ∈ V can be characterized by the property written in the compact form AGv, whereas Z† ⊆ X is a subset of X whose elements x ∈ X are characterized by the property written in the compact form xGZ. The two so-introduced mappings # : P(X) → P(V) and † : P(V) → P(X) define a Galois connection between the Boolean algebras of subsets P(X) and P(V), since they satisfy the conditions:

(GC1) A ⊆ B implies B# ⊆ A#
(GC2) Z ⊆ Y implies Y† ⊆ Z†
(GC3) A ⊆ (A#)† and Z ⊆ (Z†)#
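The polars (26) and the closure A∗ = (A#)† of the forthcoming theorem 4 are easy to compute on finite sets. A minimal Python sketch (the relation G below is a made-up example, not from the paper):

```python
def polar(A, V, G):
    """A#: the elements of V related by G to every element of A (eq. (26))."""
    return {v for v in V if all((x, v) in G for x in A)}

def polar_dual(Z, X, G):
    """Z†: the elements of X related by G to every element of Z."""
    return {x for x in X if all((x, z) in G for z in Z)}

X, V = {1, 2, 3}, {'a', 'b'}
G = {(1, 'a'), (2, 'a'), (2, 'b'), (3, 'b')}
A = {1}
A_sharp = polar(A, V, G)            # {'a'}
A_star = polar_dual(A_sharp, X, G)  # {1, 2}: note A is contained in A*, as in (GC3)
print(A_sharp, A_star)
```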
Let us note that in the context of the modal approach to approximation operators, the mappings # and † are also called sufficiency operators [19]. The polar A#, as a subset of V, gives rise to the new subset (A#)† ⊆ X, and the polar Z†, as a subset of X, gives rise to the new subset (Z†)# ⊆ V. In
this way we can introduce two mappings: the first is a transformation of the power set P(X) into itself, ∗ : P(X) → P(X), defined by the correspondence A → A∗ := (A#)†, and the second a transformation of the power set P(V) into itself, defined by the correspondence Z → (Z†)#. The main result is summarized by the following.

Theorem 4. The transformations A → A∗ := (A#)† on P(X) and Z → (Z†)# on P(V) are both Tarski closure operators, the first on the Boolean algebra P(X) and the second on the Boolean algebra P(V), such that the following hold:
(i) The collections of closed sets relative to these closures are C(X) := {C ∈ P(X) : C = C∗} = {C ∈ P(X) : C = C#†} and C(V) := {K ∈ P(V) : K = K†#}.
(ii) Moreover, according to the general theory, the transformation A → A° := Ac∗c = Ac#†c is a Tarski interior operation on P(X), and the transformation Z → Zc†#c is a Tarski interior operation on P(V).
(iii) Finally, the pair rX(A) = (A°, A∗) is the rough approximation of A in P(X), and rV(Z) = (Zc†#c, Z†#) is the rough approximation of Z in P(V).

An important particular case is the one in which X = V, and so G is a binary relation on X × X, here denoted by #. In this case the following can be proved.

Proposition 12. If X = V and the binary relation # is symmetric (x#y implies y#x), then for any subset A of X it is A# = A† = {x ∈ X : ∀a ∈ A, x#a}, and so A∗ = A## and A° = Ac##c. Moreover, the mapping # : P(X) → P(X), A → A#, satisfies the following:

(B1) A ⊆ A## (weak double negation law);
(B2) A ⊆ B implies B# ⊆ A# (contraposition law, equivalent under the weak condition (B1) to the de Morgan law A# ∩ B# = (A ∪ B)#, but the dual de Morgan law in general does not hold).
Further, if the binary relation #, besides being symmetric, is also anti–reflexive (i.e., there is no x ∈ X for which x#x), then this mapping is a Brouwer complementation, i.e., besides the above (B1) and (B2) the following holds:

(B3) A ∩ A# = ∅ (noncontradiction law).

In this way, in the case of a generic symmetric and anti–reflexive binary relation #, i.e., a preclusive relation according to [4], the structure ⟨P(X), ∩, ∪, c, #, ∅, X⟩ is a Boolean (with respect to the standard set-theoretic Boolean complementation Ac = X \ A) and Brouwer (with respect to the complementation #) complete lattice, also called a quasi BB–lattice, since the following weak interconnection rule between the two complementations holds:

(win) A# ⊆ Ac.
110
G. Cattaneo and D. Ciucci
Let us now analyze the main properties of a preclusive relation on a universe X. First of all, and recalling that we have set A∗ = A## , it is possible to introduce the mappings on the power set P(X) defined for any subset A of X as l# (A) := Ac # # c = Ac ∗ c
and
u# (A) := A## = A∗
(27)
which, by theorem 4 and duality, are Tarski interior and closure operators, respectively. This result can, also, be directly deduced (see [4]) as a consequence from the conditions which define a BB lattice structure, assumed as primitive notion. The following theorem proves the equality between Tarski approximations and preclusive approximations for any subset A of the universe X of an incomplete IS. Theorem 5. Let IS = X, Att, V al, F be an incomplete IS and A a subset of attributes. Then, the Tarski approximations are equal to the preclusive approximations, that is, for any A ⊆ X: l# (A) = lA (A)
and
u# (A) = uA (A)
Proof. In order to prove it, let us analyze how we construct the preclusive approximation. Let A = {a1 , . . . , an } be the set we want to approximate. The process consists of two steps:
n 1. Computing A# = {a1 , . . . , an }# = i=1 {ai }# = {y1 , . . . , ym }. That is we perform a similar process as when we construct the set of closed elements from an anti base ω but considering only the elements {a}# for a ∈ A. Further, yj are the only and all elements which are preclusive to all elements in A: for any j, yj #A and there is no other z such that z#A. m 2. Computing A## = {y1 , . . . , ym }# = j=1 {yj }# . Now, by definition the Tarski upper approximation of the same A is uA (A) = ∩{U ∈ CA (X) : A ⊆ U }, where any U , as closed set, is the set complement of an open set: U = Oc . Now, any open set is constructed as union of subset from the open base: O = ∪k {γ(uk )} with γ(uk ) = {w ∈ X : w I uk }. So, U = [∪k {γ(uk )}]c = ∩k {γ(uk )c }, with γ(uk )c = {h ∈ X : ¬(h I uk )} = {h ∈ X : h # uk } = {uk }# , obtaining that U = ∩k {uk }# . Now, recalling that A ⊆ U , i.e., {a1 , . . . , an } ⊆ ∩k {uk }# , we have that ∀i, ai ∈ ∩k {uk }# , i.e., ∀i, ∀k, ai ∈ {uk }# , from which it follows ∀k, [∀i, ai ∈ {uk }# ], i.e., ∀k, uk #A, and so there must exists one of the yjk such that uk = yjk . Thus, A ⊆ U = ∩k {yjk }# and uA (A) as intersection of all such closed U is a fortiori uA (A) = ∩h {yjh }# , for a suitable family of such {yjh }# . But since A## is the intersection of all these kind of set we have that A## ⊆ uA (A).
From the weak double negation law (B1) we have that A ⊆ A## = j {yj }# , c and so a fortiori for any j also A ⊆ {yj }# . Moreover, {yj }# = {w ∈ X : ¬(w#yj )} = {w ∈ X : w I yj } = γ(yj ), with the granule γ(yi ) open, and so {yj }# is closed. In this way we have that A ⊆ {yj }# , with {yj }# closed and so uA (A), as intersection of all closed set which contain A, is such that uA (A) ⊆ ∩j {yj }# = A## .
Lattices with Interior and Closure Operators
111
Since both the lower approximations can be obtained by duality, also the lower approximations coincide. A standard way to obtain a preclusive relation is given by the negation of a similarity (or tolerance) reflexive and symmetric relation R on X, by x#R y iff ¬(xR y). Let us recall that the Kripke semantic based on the pair (X, R) is a model of the modal B system [13]. In this Kripke model, for any subset A of worlds the accessibility relation R generates the following two subsets lR (A) : = {x ∈ X : ∀z (xR z ⇒ z ∈ A)} uR (A) : = {x ∈ X : ∃z (xR z and x ∈ A} which can also be expressed making use of the above Brouwer complementation # = ¬ R as lR (A) = Ac# and uR (A) = A#c . With respect to the approximations (27), the following order chain holds for any subset A of X: lR (A) ⊆ l# (A) ⊆ A ⊆ u# (A) ⊆ uR (A) .
(28)
Thus, the preclusive rough approximation of any subset A of X, r# (A) := (l# (A), u# (A)) is better than the similarity rough approximation of the same set, rR (A) := (lR (A), uR (A)). Example 11. In standard rough set theory based on complete IS two objects are equivalent if for all selected attributes they have the same values. If we relax this requirement we can say that two objects are similar if they have the same values only for some attribute. Formally, let D ⊆ Att(X), then x is similar to y with respect to D and , with ∈ [0, 1], and write xRD, y, iff |{ai ∈ D : F (ai , x) = F (ai , y)}| ≥
|D|
(29)
This relation says that two objects are similar if they have at least |D| attributes with the same value. It can be easily shown that this is a reflexive and symmetric, but non transitive, relation. This approach is very near to the original one of Poincar´e in his introduction to similarity relations just by “distances” (in the present case d(x, y) := |{ai ∈ D : F (ai , x) = F (ai , y)}|). With respect to the same set of attributes D ⊆ Att(X), the corresponding preclusive relation can be formalized as follows: the objects x and y are mutually preclusive with respect to D and , written x#D, y, iff |{ai ∈ D : F (ai , x) = F (ai , y)}|
0 and κs,t (X, Z) < 1, we have s < κ(X, Y ) ≤ κ(X, Z) < t by (a1). Then, κκs,t (X, Y ) = (κ(X, Y ) − s)/(t − s) ≤ (κ(X, Z) − s)/(t − s) = κκs,t (X, Z) as required. In the sequel, let f = κπ1 and consider any r, r , r ⊆ U1 × U2 . To prove rif 0 (κπ1 ) assume r ⊆ r . If r = ∅, then κπ1 (r, r ) = 1 directly by the definition of κπ1 . In the remaining case, r ∩ r = r by the assumption. Hence, (r ∩ r )← U2 = r← U2 . As a consequence, κπ1 (r, r ) = 1 by the definition of κπ1 . To show rif 2 (κπ1 ) assume r ⊆ r . If r = ∅, then κπ1 (r, r ) = κπ1 (r, r ) = 1 directly by the definition of κπ1 . For the remaining case note that r ∩ r ⊆ r ∩ r by the assumption. Hence, (r ∩ r )← U2 ⊆ (r ∩ r )← U2 by (2). Immediately, κπ1 (r, r ) ≤ κπ1 (r, r ) by the definition of κπ1 . The proof for κπ2 proceeds similarly. For (g) assume rif 6 (κ). Consider any X, Y ⊆ U where X = ∅. By the assumption, κ(X, Y )+κ(X, U −Y ) = 1. Hence, (g1) κ(X, U −Y ) = 1−κ(X, Y ). Observe
Rough Approximation Based on Weak q-RIFs
that κs,1−s (X, Y ) = 0 if and only if κ(X, Y ) ≤ s if and only if 1−κ(X, U −Y ) ≤ s if and only if κ(X, U − Y ) ≥ 1 − s if and only if κs,1−s (X, U − Y ) = 1 due to (g1). On the same grounds, κs,1−s (X, Y ) = 1 if and only if κs,1−s (X, U − Y ) = 0 since Y = U − (U − Y ). Therefore, κs,1−s (X, Y ) + κs,1−s (X, U − Y ) = 1 if κs,1−s (X, Y ) = 0 or κs,1−s (X, Y ) = 1. Now, suppose that 0 < κs,1−s (X, Y ) < 1 (and, hence, 0 < κs,1−s (X, U − Y ) < 1). We have κs,1−s (X, Y ) + κs,1−s (X, U − Y ) = (κ(X, Y ) − s)/(1 − 2s) + (κ(X, U − Y ) − s)/(1 − 2s) = (κ(X, Y ) − s + 1 − κ(X, Y ) − s)/(1 − 2s) = (1 − 2s)/(1 − 2s) = 1 by (g1). Thus, κπ1 , κπ2 , and κs,t are weak q-RIFs in virtue of (a). However, except for κs,1 which is a RIF due to (b), they are even not q-RIFs in general. ∗ Example 1. First, we show that rif −1 0 (κs,t ) and rif 2 (κs,t ) need not hold if t < 1. Furthermore, we give an argument against rif 6 (κs,t ) where s + t = 1. Let U = {0, . . . , 9}, κ£ be the underlying RIF, t = 0.6, and s = 0.3. Consider X = {0, 1, 2, 5}, Y = {0, . . . , 4}, and Z = {2, 3, 4, 8, 9}. Then, κ£ (X, Y ) = 0.75, κ£ (X, Z) = 0.25, κ£ (Y, Z) = 0.6, and κ£ (Y, U − Z) = 0.4. By the definition of κs,t , κs,t (X, Y ) = κs,t (Y, Z) = 1, κs,t (X, Z) = 0, and κs,t (Y, U − Z) = 1/3. Thus, rif −1 ⊆ Y. 0 (κs,t ) is not satisfied because both κs,t (X, Y ) = 1 and X The condition rif ∗2 (κs,t ) does not hold true either because κs,t (Y, Z) = 1 and κs,t (X, Z) < κs,t (X, Y ). Finally, rif 6 (κs,t ) is not satisfied because κs,t (Y, Z) + κs,t (Y, U − Z) = 1 + 1/3 = 4/3.
Example 2. Consider U1 = U2 = {0, 1}. We first show that rif_0^{-1}(κπ1) is not true. To see this, let r = {(0, 0)}, r′ = {(0, 0), (0, 1)}, and r′′ = {(0, 1), (1, 1)}. Hence, κπ1(r, r′) = 1 because r ⊆ r′, and κπ1(r, r′′) = 0 because r ∩ r′′ = ∅. Moreover, r′ ∩ r′′ = {(0, 1)} and, subsequently, (r′ ∩ r′′)←U2 = r′←U2 = {0}. As a result, κπ1(r′, r′′) = 1. From the latter and the fact that r′ ⊈ r′′ we can infer that rif_0^{-1}(κπ1) does not hold. Furthermore, rif_2^{*}(κπ1) is not satisfied either, because κπ1(r′, r′′) = 1 and κπ1(r, r′) > κπ1(r, r′′). That is, κπ1 is not a q-RIF. In the sequel, we justify the statement that rif_6(κπ1) need not hold. Now, let r = {(0, 0), (1, 0), (1, 1)} and r′ = {(0, 0), (0, 1), (1, 0)}. By −r′ we denote the set-theoretical complement of r′. Then, r ∩ r′ = {(0, 0), (1, 0)} and r ∩ −r′ = {(1, 1)}. Hence, (r ∩ r′)←U2 = r←U2 = {0, 1} and (r ∩ −r′)←U2 = {1}. As a consequence, κπ1(r, r′) + κπ1(r, −r′) = 1 + 0.5 = 1.5. The case of κπ2 is similar.
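The definition of κπ1 is given earlier in the paper and is not reproduced in this excerpt; assuming it compares the domain (the preimage r←U2) of r ∩ r′ with that of r, which is consistent with the computations of Example 2, the failure of rif_6 can be reproduced as follows:

```python
def dom(rel):
    """The preimage r<-U2: the set of first coordinates of the pairs in r."""
    return {x for (x, _) in rel}

def kappa_pi1(r, rp):
    """Assumed form of kappa_pi1: ratio of the domains (1 if r is empty)."""
    if not r:
        return 1.0
    return len(dom(r & rp)) / len(dom(r))

r  = {(0, 0), (1, 0), (1, 1)}
rp = {(0, 0), (0, 1), (1, 0)}
comp_rp = {(x, y) for x in (0, 1) for y in (0, 1)} - rp
print(kappa_pi1(r, rp) + kappa_pi1(r, comp_rp))  # 1.5, so rif_6 fails
```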
4 Rough Approximation Spaces Generalized
Rough approximation spaces (or, simply, approximation spaces) are relational structures within which one can approximate sets of objects of a universe by means of certain approximation operators. The universe of such a space is granulated, i.e., covered by clumps of objects called information granules (infogranules) [13]. Taking granularity of the universe into account, we can distinguish two classes of sets of objects from the standpoint of the rough set theory (RST): exact sets and rough sets. The latter ones need to be approximated by some exact sets of objects.
Rough approximation spaces were introduced by Pawlak in the 80's of the XX century [18] (see also [19,20,21]). They were initially defined as pairs of the form (U, ϱ) where U is a non-empty set of objects of some sort and ϱ is an equivalence relation on U, understood as a relation of indiscernibility of objects. Since that time, the original notion of approximation space has been subject to extension, generalization, or refinement [22,23,24,25,9,10,26,27,28,29,30,31,32,11,33,34]. In general, approximation operators and approximation spaces have attracted much attention among researchers and have been intensively studied (see, e.g., [35,36,37,38,39,40,41,42,43,44,45,46]). Let us only mention two generalizations of Pawlak's rough set model: the parameterized approximation spaces introduced by Skowron and Stepaniuk [9,10] and the variable-precision rough set model proposed by Ziarko [11,12], which will play an important role in our considerations. The latter model may be viewed as a special case of even more general approaches: the decision-theoretic rough set model and the probabilistic rough set model [29,30,31,33,34].

By a (rough) approximation space we mean a triple M = (U, ϱ, κ) where U is a non-empty set of objects as earlier, ϱ is a similarity relation on U, and κ is a weak q-RIF upon U. Thus, our approach is more general than the early Ziarko proposal [11,12] and – neglecting lists of parameters – the model introduced by Skowron and Stepaniuk in [9,10], where RIFs are used to measure the degree of inclusion of a set of objects in a set of objects. The lower and upper S-approximation operators (we use 'P' to refer to Pawlak's model and 'S' when thinking of Skowron and Stepaniuk's approach), lowS, uppS : ℘U → ℘U, are defined along the standard lines. The lower and upper P-approximation operators, lowP, uppP : ℘U → ℘U, and the t-positive and s-negative region operators, post, negs : ℘U → ℘U where 0 ≤ s < t ≤ 1, are slightly modified versions of the original definitions [18,19,11,12], but they are known from the literature as well. Thus, for any X ⊆ U, let

lowP X := {u ∈ U | ϱ←{u} ⊆ X},
uppP X := {u ∈ U | ϱ←{u} ∩ X ≠ ∅},
lowS X := {u ∈ U | κ(ϱ←{u}, X) = 1},
uppS X := {u ∈ U | κ(ϱ←{u}, X) > 0},
post X := {u ∈ U | κ(ϱ←{u}, X) ≥ t},
negs X := {u ∈ U | κ(ϱ←{u}, X) ≤ s}.   (9)
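To make definitions (9) concrete, here is a minimal Python sketch (not from the paper; names are illustrative, infogranules are given as a map from objects to sets, and κ£ stands in for the weak q-RIF). The final check uses the infogranules of Example 3 below:

```python
def kappa_L(A, B):
    """The standard rough inclusion function, used here as kappa."""
    return len(A & B) / len(A) if A else 1.0

def low_P(X, g):  return {u for u in g if g[u] <= X}
def upp_P(X, g):  return {u for u in g if g[u] & X}
def low_S(X, g):  return {u for u in g if kappa_L(g[u], X) == 1}
def upp_S(X, g):  return {u for u in g if kappa_L(g[u], X) > 0}
def pos(X, g, t): return {u for u in g if kappa_L(g[u], X) >= t}
def neg(X, g, s): return {u for u in g if kappa_L(g[u], X) <= s}

g = {'u0': {'u0', 'u1'}, 'u1': {'u1'}, 'u2': {'u2', 'u4'},
     'u3': {'u0', 'u3'}, 'u4': {'u1', 'u4'}}
print(sorted(low_P({'u0', 'u1'}, g)))  # ['u0', 'u1']
print(sorted(upp_P({'u0', 'u1'}, g)))  # ['u0', 'u1', 'u3', 'u4'] = U - {u2}
```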
Similar notions can be obtained starting with elementary infogranules of the form ϱ→{u}. Both approaches coincide if ϱ is symmetric. Obviously, κ is dispensable for the P-approximation of sets. Namely, the lower P-approximation of X, lowP X, consists of all objects u whose elementary infogranules ϱ←{u} are actually included in X. On the other hand, the upper P-approximation of X, uppP X, consists of all objects u such that
the infogranules ϱ←{u} overlap with X. Similar ideas underlie the S-approximation operators, where the role of ⊆ is played by a weak q-RIF κ. Indeed, the lower S-approximation of X, lowS X, is the set of all objects u whose infogranules ϱ←{u} are included to the highest degree in X, whereas the upper S-approximation of X, uppS X, consists of all objects u such that ϱ←{u} is included to some positive degree in X. In the sequel, the t-positive region and the s-negative region of X, post X and negs X, consist of all objects u whose elementary infogranules ϱ←{u} are included to the degree at least t and at most s in X, respectively. Along the standard lines, we will say that X is P-exact (resp., S-exact and (s, t)-exact) if uppP X − lowP X = ∅ (resp., uppS X − lowS X = ∅ and post X ∪ negs X = U); otherwise, X is P-rough (resp., S-rough and (s, t)-rough).

Example 3. Consider an approximation space M = (U, ϱ, κ£) where U = {u0, . . . , u4}. The similarity relation ϱ is given by the infogranules ϱ←{u} for u ∈ U in Table 1. In the same table we also give the lower and upper P-approximations of these infogranules. It turns out that all the infogranules are P-rough.

Table 1. Lower and upper P-approximations of the elementary infogranules of objects of M

u  | ϱ←{u}     | lowP(ϱ←{u}) | uppP(ϱ←{u})
u0 | {u0, u1}  | {u0, u1}    | U − {u2}
u1 | {u1}      | {u1}        | {u0, u1, u4}
u2 | {u2, u4}  | {u2}        | {u2, u4}
u3 | {u0, u3}  | {u3}        | {u0, u3}
u4 | {u1, u4}  | {u1, u4}    | U − {u3}

5 Properties of Approximation Operators
In this section we discuss general properties of low$, upp$, post, and negs, where $ ∈ {P, S}, 0 ≤ s < t ≤ 1, ϱ is a reflexive relation on U, and κ is a weak q-RIF upon U. Analogous properties can be proved for the approximation operators based on elementary infogranules of the form ϱ→{u} (considerations of such operators make sense only if ϱ is not symmetric). The Pawlak operators are considered for the sake of comparison only because, obviously, their properties do not depend on κ. At the beginning we compare the Pawlak, the Skowron–Stepaniuk, and the Ziarko approximation operators.

Proposition 4. We have that:
(a) lowP ⊑ lowS = pos1,
P S (b) rif −1 0 (κ) ⇒ low = low , (c) uppS = negc0 ,
4
Considerations of such operators will make sense only if is not symmetric.
126
A. Gomoli´ nska
(d) rif 4 (κ) ⇒ uppP uppS , S P (e) rif −1 4 (κ) ⇒ upp upp ,
(f) rif_5(κ) ⇒ uppP = uppS.

Proof. The proof is easy, so we show only (c) by way of illustration. Consider any X ⊆ U and u ∈ U. Note that u ∈ (neg_0)^c X if and only if u ∉ neg_0 X if and only if κ(ϱ←{u}, X) > 0 if and only if u ∈ uppS X.

Example 4. We show that '⊑' cannot be strengthened to '=' in (a) if the κ underlying lowS is not a RIF. To this end, consider the infosystem from Example 3, a weak q-RIF κs,0.5, where s is arbitrary and κ£ is the underlying RIF, and X = {u0, u2, u3}. It is easy to see that κ£(ϱ←{u1}, X) = κ£(ϱ←{u4}, X) = 0, κ£(ϱ←{u0}, X) = κ£(ϱ←{u2}, X) = 0.5, and κ£(ϱ←{u3}, X) = 1. By the definition of κs,0.5, u ∈ lowS X if and only if κ£(ϱ←{u}, X) ≥ 0.5. Hence, lowS X = {u0, u2, u3} = X. On the other hand, lowP X = {u3}.

Basic observations about the approximation of the empty set and the universe, the relationships between the lower and upper approximation operators, and the relationship between the operators of variable-precision positive and negative regions are collected below.

Proposition 5. For any X ⊆ U and 0 ≤ s < t ≤ 1,
(a) lowP ⊑ id_℘U ⊑ uppP,
(b) lowS ⊑ uppS,
(c) rif_0^{-1}(κ) ⇒ lowS ⊑ id_℘U,
(d) rif_4(κ) ⇒ id_℘U ⊑ uppS,
(e) lowP ∅ = uppP ∅ = ∅,
(f) rif_3(κ) ⇒ pos_t ∅ = uppS ∅ = ∅ & neg_s ∅ = U,
(g) lowP U = uppP U = uppS U = pos_t U = U & neg_s U = ∅,
(h) uppP X = U − lowP(U − X),
(i) rif_0^{-1}(κ) & rif_5(κ) ⇒ uppS X = U − lowS(U − X),
(j) rif_6(κ) ⇒ neg_s X = pos_{1−s}(U − X).
Proof. We prove (g) only. First, note that (g1) ϱ←{u} ⊆ U for every u ∈ U. Hence, lowP U = U by the definition of lowP. Due to this fact and (a), uppP U = U as well. Next, (g2) κ(ϱ←{u}, U) = 1 in virtue of (g1) and rif_0(κ). The rest follows from (g2) and the definitions of uppS, pos_t, and neg_s.

Example 5. This example shows that lowS ⊑ id_℘U need not hold if the κ underlying lowS is not a RIF (cf. (c)). Consider the infosystem from Example 3, κs,0.5 from Example 4, and X = {u1, u2, u4}. In this case we have κ£(ϱ←{u3}, X) = 0, κ£(ϱ←{u0}, X) = 0.5, and κ£(ϱ←{u1}, X) = κ£(ϱ←{u2}, X) = κ£(ϱ←{u4}, X) = 1. Hence, lowS X = U − {u3}. Obviously, lowS X ⊈ X.
Example 6. Here we show that neg_s X ≠ pos_{1−s}(U − X) can happen unless the underlying weak q-RIF satisfies rif_6(κ) (cf. (j)). To this end, consider an approximation space (U, ϱ, κπ2), where U = U1 × U2, U1 = {0, 1}, U2 = {0, 1, 2}, u = (0, 0), and ϱ←{u} = {(0, 0), (0, 1), (1, 0), (1, 2)}. One can check that κπ2 does not satisfy rif_6(κ) (cf. Example 2). Let s = 0.3 and X = {(0, 0), (1, 1)}. Then, π2(ϱ←{u} ∩ X) = {0} and π2(ϱ←{u}) = π2(ϱ←{u} ∩ (U − X)) = U2. Hence, κπ2(ϱ←{u}, X) = 1/3 and κπ2(ϱ←{u}, U − X) = 1. As a result, u ∈ pos_{0.7}(U − X) − neg_{0.3} X.

Next, we focus upon monotonicity and co-monotonicity of approximation operators, and consequences of these properties. The proof, omitted here, proceeds along the standard lines.

Proposition 6. For any X, Y ⊆ U, 0 ≤ s ≤ s′ < t′ ≤ t ≤ 1, and f ∈ {lowP, uppP, uppS, pos_t},
(a) pos_t ⊑ pos_{t′} & neg_s ⊑ neg_{s′},
(b) f ∈ MON(℘U) & neg_s ∈ co-MON(℘U),
(c) f(X ∩ Y) ⊆ f X ∩ f Y ⊆ f X ∪ f Y ⊆ f(X ∪ Y),
(d) neg_s(X ∪ Y) ⊆ neg_s X ∩ neg_s Y ⊆ neg_s X ∪ neg_s Y ⊆ neg_s(X ∩ Y).

As is well known, property (c) can be strengthened for lowP and uppP as follows:

Proposition 7. For any X, Y ⊆ U,
(a) lowP(X ∩ Y) = lowP X ∩ lowP Y,
(b) uppP(X ∪ Y) = uppP X ∪ uppP Y.

Example 7. We provide evidence that neither lowS X ∩ lowS Y ⊆ lowS(X ∩ Y) nor uppS(X ∪ Y) ⊆ uppS X ∪ uppS Y holds in general. To this end, consider the infosystem from Example 3, X = {u0, u2}, and Y = {u1, u2}. Recall that ϱ←{u0} = {u0, u1}. Hence, κ£(ϱ←{u0}, X) = κ£(ϱ←{u0}, Y) = 0.5, κ£(ϱ←{u0}, X ∩ Y) = 0, and κ£(ϱ←{u0}, X ∪ Y) = 1. First, suppose that the underlying weak q-RIF is κs,0.5 (s arbitrary) based on κ£. In this case we have κs,0.5(ϱ←{u0}, X) = κs,0.5(ϱ←{u0}, Y) = 1 and κs,0.5(ϱ←{u0}, X ∩ Y) = 0. Hence, u0 ∈ (lowS X ∩ lowS Y) − lowS(X ∩ Y). Next, let the underlying weak q-RIF be κ0.5,t (t arbitrary) based on κ£. Then, κ0.5,t(ϱ←{u0}, X) = κ0.5,t(ϱ←{u0}, Y) = 0 and κ0.5,t(ϱ←{u0}, X ∪ Y) = 1. As a consequence, u0 ∈ uppS(X ∪ Y) − (uppS X ∪ uppS Y).

In this article we do not discuss the problem of composition of approximation operators in detail. Let us only recall the following properties of composition of the P-approximation operators.

Proposition 8. We have that:
(a) lowP ∘ lowP ⊑ lowP ⊑ uppP ∘ lowP ⊑ uppP ⊑ uppP ∘ uppP,
(b) lowP ⊑ lowP ∘ uppP ⊑ uppP.
Additionally, if ϱ is transitive, then lowP ∘ lowP = lowP & uppP ∘ uppP = uppP. The "inclusions" in (a), (b) cannot be reversed in general. In particular, uppP is not idempotent, i.e., uppP ∘ uppP ⊑ uppP need not hold. Recall that C : ℘U → ℘U is a topological closure operator if it complies with the following conditions, for any sets X, Y ⊆ U:
(cl-1) C∅ = ∅,
(cl-2) id_℘U ⊑ C,
(cl-3) C ∘ C ⊑ C (idempotence),
(cl-4) C(X ∪ Y) = CX ∪ CY.
A weaker notion of a Čech closure operator is characterized by (cl-1), (cl-2), and the condition (cl-5) below:
(cl-5) C ∈ MON(℘U) (monotonicity).
Since uppP is not idempotent in our case, it is merely a Čech closure operator due to Proposition 5e,a and Proposition 6b. It follows from Proposition 5f,d and Proposition 6b that uppS will also be a Čech closure operator if rif_3(κ) and rif_4(κ) hold.
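Continuing the running sketch (granule and upp_P as defined above for Example 3), the failure of idempotence can be watched directly; this is what makes uppP a Čech rather than a topological closure operator here.

```python
X = {"u1"}
once = upp_P(granule, X)
twice = upp_P(granule, once)
print(sorted(once))    # ['u0', 'u1', 'u4']
print(sorted(twice))   # ['u0', 'u1', 'u2', 'u3', 'u4'] -- strictly larger than once
```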
6 Approximation of Sets by Means of Operators Based on Special Weak q-RIFs
In this section we present selected properties of approximation operators based on the particular weak q-RIFs, including RIFs, considered in the article. First, define two new inclusion measures, κl^κ, κup^κ : ℘U × ℘U → [0, 1], based on a RIF κ upon U. For any X, Y ⊆ U, let

κl^κ(X, Y) = κ(lowP X, lowP Y),
κup^κ(X, Y) = κ(uppP X, uppP Y).   (10)

As earlier, the reference to κ will be omitted unless necessary. Let us note that

κl(X, Y) = 1 ⇔ lowP X ⊆ lowP Y,
κup(X, Y) = 1 ⇔ uppP X ⊆ uppP Y.   (11)

Basic properties of κl and κup are the following:

Proposition 9. Let f ∈ {κl, κup}. We have:
(a) rif_0(f) & rif_2^*(f),
(b) rif_3(κ) ⇒ rif_3(f),
(c) rif_4(κ) ⇒ rif_4(κup),
(d) rif_4^{-1}(κ) ⇒ rif_4^{-1}(κl).
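In terms of the running sketch (low_P, upp_P, kappa_L and granule from the earlier snippets), κl and κup are one-liners, and the first claim of Example 8 below can be reproduced directly.

```python
def kappa_l(granule, kappa, X, Y):
    # inclusion degree measured between the lower P-approximations
    return kappa(low_P(granule, X), low_P(granule, Y))

def kappa_up(granule, kappa, X, Y):
    # inclusion degree measured between the upper P-approximations
    return kappa(upp_P(granule, X), upp_P(granule, Y))

# X = {u0, u2} has empty lower approximation, so kappa_l(X, Y) = 1 although X is not a subset of Y
print(kappa_l(granule, kappa_L, {"u0", "u2"}, {"u0", "u1"}))  # 1.0
```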
Proof. We only prove the second part of (a) by way of example. Consider any X, Y, Z ⊆ U. In the case f = κl, assume κl(Y, Z) = 1. Hence, κ(lowP Y, lowP Z) = 1 by the definition of κl. Thus, κ(lowP X, lowP Y) ≤ κ(lowP X, lowP Z) since κ is a RIF. As a result, κl(X, Y) ≤ κl(X, Z) by the definition of κl. Summarizing, rif_2^*(κl). The proof for κup is analogous.

Thus, both κl and κup are truly q-RIFs due to (a).

Example 8. Consider the infosystem from Example 3 and κl based on the standard RIF κ£. First, we show that rif_0^{-1}(κl) does not hold, i.e., κl is not a RIF. Let X = {u0, u2} and Y = {u0, u1}. We have lowP X = ∅ and lowP Y = Y. Since lowP X ⊆ lowP Y, κl(X, Y) = 1. On the other hand, X ⊈ Y. Despite the fact that rif_4(κ£) holds, the condition rif_4(κl) is not fulfilled either. To see this, consider X = {u0, u1} and Y = {u0, u3}. Then, lowP X = X and lowP Y = {u3}. Thus, κl(X, Y) = 0 because lowP X ∩ lowP Y = ∅. On the other hand, X ∩ Y ≠ ∅. One can also see that rif_6(κl) does not hold, in spite of the fact that rif_6(κ£) does. To this end, let X = {u0, u2}. Then, κl(X, Y) = 1 for every Y ⊆ U (see the first part of this example). Thus, κl(X, Y) + κl(X, U − Y) = 2.

Example 9. Consider again the infosystem from Example 3 and κup based on the standard RIF κ£. In this example we first show that κup is not a RIF because rif_0^{-1}(κup) does not hold. For X = {u0, u3} and Y = {u0, u1}, we obtain uppP X = X and uppP Y = U − {u2}. Obviously, X ⊈ Y. However, κup(X, Y) = 1 since uppP X ⊆ uppP Y. In this case, rif_4^{-1}(κup) does not hold either. Consider X = {u0, u3} as above and Y = {u1}. We have uppP Y = {u0, u1, u4} and uppP(U − Y) = U − {u1}. Clearly, X ∩ Y = ∅. On the other hand, κup(X, Y) = 0.5 due to the fact that uppP X ∩ uppP Y = {u0}. Observe also that rif_6(κup) is not true. Indeed, κup(X, Y) + κup(X, U − Y) = 0.5 + 1 = 1.5.

Let κ be a RIF upon U. Now, we formulate certain properties of approximation operators based on the standard RIF κ£, the RIFs κ1, κ2 given by (6), the weak q-RIFs κπ1, κπ2 defined by (8), the weak q-RIFs κs,t (0 ≤ s < t ≤ 1) based on κ and given by (7), and the q-RIFs κl, κup induced by κ. In what follows we explicitly refer to the weak q-RIFs underlying the approximation operators under consideration.

Proposition 10. Let U be finite in the cases (a)–(c) and let U = U1 × U2 for some finite sets U1, U2 in the case (g). For any X, Y ⊆ U and 0 ≤ s0 < t0 ≤ 1, we have:
(a) lowS_{κ£} = lowS_{κ1} = lowS_{κ2} = lowP,
(b) uppS_{κ£} = uppP,
(c) uppS_{κi} X = U if X ≠ ∅; uppS_{κi} X = ∅ if X = ∅ and i = 1; uppS_{κi} X = {u | ϱ←{u} = U} if X = ∅ and i = 2,
(d) lowS_{κs,t} = pos_{t,κ} & uppS_{κs,t} = (neg_{s,κ})^c,
(e) pos_{t0,κs,t} = pos_{s+t0(t−s),κ} & neg_{s0,κs,t} = neg_{s+s0(t−s),κ},
(f) rif_5(κ) ⇒ uppS_{κ0,t} = uppP,
(g) uppS_{κπ1} = uppS_{κπ2} = uppP,
(h) lowS_{κl} = lowP,
(i) rif_4^{-1}(κ) ⇒ uppS_{κl} ⊑ uppP & uppS_{κl} X ⊆ U − lowS_{κl}(U − X),
(j) rif_4(κ) ⇒ uppP ⊑ uppS_{κup} & U − lowS_{κup}(U − X) ⊆ uppS_{κup} X,
(k) rif_5(κ) ⇒ uppS_{κup}(X ∪ Y) ⊆ uppS_{κup} X ∪ uppS_{κup} Y,
(l) rif_6(κ) ⇒ pos_{1−s,κl}(U − X) ⊆ neg_{s,κl} X & neg_{s,κup} X ⊆ pos_{1−s,κup}(U − X).

Proof. We only prove (c)–(e), (h), and (j). Consider any X ⊆ U, u ∈ U, and let 0 ≤ s0 < t0 ≤ 1.

For (c) note that u ∈ uppS_{κ1} X if and only if κ1(ϱ←{u}, X) > 0 if and only if #X/#(ϱ←{u} ∪ X) > 0 if and only if X ≠ ∅. Moreover, u ∈ uppS_{κ2} X if and only if κ2(ϱ←{u}, X) > 0 if and only if #((U − ϱ←{u}) ∪ X)/#U > 0 if and only if (U − ϱ←{u}) ∪ X ≠ ∅ if and only if ϱ←{u} ≠ U or X ≠ ∅.

For the first part of (d) notice that u ∈ lowS_{κs,t} X if and only if κs,t(ϱ←{u}, X) = 1 if and only if κ(ϱ←{u}, X) ≥ t if and only if u ∈ pos_{t,κ} X. Next, we have u ∈ uppS_{κs,t} X if and only if κs,t(ϱ←{u}, X) > 0 if and only if κ(ϱ←{u}, X) > s if and only if u ∉ neg_{s,κ} X if and only if u ∈ (neg_{s,κ})^c X.

For (e) consider three cases: (e1) κ(ϱ←{u}, X) ≤ s, (e2) s < κ(ϱ←{u}, X) < t, and (e3) κ(ϱ←{u}, X) ≥ t. The proofs of both parts are similar, so we only show that (∗) u ∈ pos_{t0,κs,t} X ⇔ u ∈ pos_{s+t0(t−s),κ} X. In the case (e1), neither u ∈ pos_{t0,κs,t} X nor u ∈ pos_{s+t0(t−s),κ} X, so (∗) holds trivially. Now, consider (e2). By the definitions,

u ∈ pos_{t0,κs,t} X ⇔ κs,t(ϱ←{u}, X) ≥ t0 ⇔ (κ(ϱ←{u}, X) − s)/(t − s) ≥ t0 ⇔ κ(ϱ←{u}, X) ≥ s + t0(t − s) ⇔ u ∈ pos_{s+t0(t−s),κ} X.

Finally, in the case (e3), u ∈ pos_{t0,κs,t} X ∩ pos_{s+t0(t−s),κ} X by the observation s + t0(t − s) ≤ s + 1·(t − s) = t and the definitions. Thus, (∗) holds trivially.

In the case (h), to prove lowS_{κl} ⊑ lowP, assume u ∈ lowS_{κl} X. Hence, κl(ϱ←{u}, X) = 1 by the definition of lowS_{κl}. Thus, (h1) lowP(ϱ←{u}) ⊆ lowP X by (11). It follows directly from the definition of lowP that u ∈ lowP(ϱ←{u}). Hence, u ∈ lowP X due to (h1). The "inclusion" lowP ⊑ lowS_{κl} holds by Proposition 4a.

For (j) assume rif_4(κ). Hence, we obtain uppP ⊑ uppS_{κup} by Proposition 4d and Proposition 9c. For the remaining part suppose u ∈ U − lowS_{κup}(U − X). Hence, κup(ϱ←{u}, U − X) < 1 by the definition of lowS_{κup} and, subsequently, κ(uppP(ϱ←{u}), uppP(U − X)) < 1 by the definition of κup. Since κ is a RIF,
we immediately obtain uppP(ϱ←{u}) ⊈ uppP(U − X), i.e., uppP(ϱ←{u}) ∩ (U − uppP(U − X)) ≠ ∅. Hence, uppP(ϱ←{u}) ∩ lowP X ≠ ∅ due to Proposition 5h. As a consequence, uppP(ϱ←{u}) ∩ uppP X ≠ ∅ by Proposition 5a. Hence, κ(uppP(ϱ←{u}), uppP X) > 0 by the assumption and, subsequently, κup(ϱ←{u}, X) > 0 by the definition of κup. Finally, u ∈ uppS_{κup} X by the definition of uppS_{κup}.

Some comments may be useful here. Let i = 1, 2. Due to (a), the lower S-approximation operators based on κ£ and κi are equal to the lower P-approximation operator. In the case of κ£, the upper S-approximation operator is also equal to the upper P-approximation operator by (b). As follows from (c), the upper S-approximation, induced by κi, of any non-empty set is the whole universe. For i = 1, the upper S-approximation of the empty set is empty, whereas for i = 2 it consists of the objects to which every object is similar. Hence, for instance, one can prove that

uppS_{κ1} ∘ uppS_{κ1} ⊑ uppS_{κ1},   uppS_{κ1}(X ∪ Y) = uppS_{κ1} X ∪ uppS_{κ1} Y,   (12)

where X, Y are arbitrary subsets of U. In virtue of (d), the lower S-approximation operator based on κs,t is equal to the operator of the t-positive region induced by κ. On the other hand, the upper S-approximation operator based on κs,t is complementary to the operator of the s-negative region induced by κ. Hence, a set X is S-rough in the sense of κs,t if and only if X is (s, t)-rough in the sense of κ. This justifies our earlier remark that κs,t is inspired by the variable-precision rough set model. According to (e), the variable-precision positive (resp., negative) region operators based on κs,t are equal to certain variable-precision positive (negative) region operators induced by κ. In virtue of (f), the upper S-approximation operator based on κs,t will be equal to the upper P-approximation operator provided that s = 0 and rif_5(κ) holds. Unfortunately, the property need not hold in general (see Example 10). Due to (g), the upper S-approximation operators defined in (U, ϱ, κπi), where U = U1 × U2, ϱ is a similarity relation on U, and i = 1, 2, are equal to the upper P-approximation operator. Property (h) says that the lower S-approximation operator based on κl is equal to the lower P-approximation operator, which is quite unexpected in the light of the fact that κl is merely a q-RIF. The first parts of (i), (j) resemble Proposition 4e,d, respectively. The remaining parts, being weaker versions of Proposition 5i, say that the lower and upper S-approximation operators, based on κl and κup, respectively, are merely half-dual to each other. Property (k), a counterpart of Proposition 7b, states that the upper S-approximation, based on κup, of a union of two sets will be a union of their upper S-approximations if rif_5(κ) holds true of the RIF κ underlying κup. Finally, (l) gives us a weaker version of Proposition 5j for κl, κup.

Example 10. We show that uppS_f and uppP can differ for f ∈ {κs,t, κl, κup} (cf. (f), (i), and (j)). As the underlying RIF we take κ£, which satisfies rif_5(κ). Consider the infosystem from Example 3 as earlier. First, let
X = {u0, u1}, s = 0.5, and t be arbitrary. Since κ£(ϱ←{u3}, X) = 0.5, we have κ0.5,t(ϱ←{u3}, X) = 0. Hence, u3 ∉ uppS_{κ0.5,t} X. On the other hand, u3 ∈ uppP X. Next, let X = {u0, u3}. Note that κl(ϱ←{u0}, X) = κ£(lowP(ϱ←{u0}), lowP X) = κ£({u0, u1}, {u3}) = 0. That is, u0 ∉ uppS_{κl} X. On the other hand, u0 ∈ uppP X. Finally, let X = {u1}. We have κup(ϱ←{u3}, X) = κ£(uppP(ϱ←{u3}), uppP X) = κ£({u0, u3}, {u0, u1, u4}) = 0.5, so u3 ∈ uppS_{κup} X. At the same time, u3 ∉ uppP X since ϱ←{u3} ∩ X = ∅.

Example 11. Here we argue that the inclusions in (l) cannot be reversed in general. Consider the infosystem from Example 3. Let κ£, satisfying rif_6(κ), be the RIF underlying κl and κup as earlier. First, let X = {u0, u3} and 0 ≤ s < 0.5. Since κl(ϱ←{u0}, X) = 0 (see the second part of the preceding example), we have u0 ∈ neg_{s,κl} X. On the other hand, κl(ϱ←{u0}, U − X) = κ£(lowP(ϱ←{u0}), lowP(U − X)) = κ£({u0, u1}, {u1, u2, u4}) = 0.5 < 1 − s, so u0 ∉ pos_{1−s,κl}(U − X). For the second part consider X = {u0, u1} and 0.5 ≤ s < 1. Then, κup(ϱ←{u3}, U − X) = κ£(uppP(ϱ←{u3}), uppP(U − X)) = κ£({u0, u3}, {u2, u3, u4}) = 0.5 ≥ 1 − s. Hence, u3 ∈ pos_{1−s,κup}(U − X). On the other hand, κup(ϱ←{u3}, X) = κ£(uppP(ϱ←{u3}), uppP X) = κ£({u0, u3}, U − {u2}) = 1, so u3 ∉ neg_{s,κup} X.

In the light of our earlier remarks on closure operators, uppS_{κ1} is a topological closure operator in virtue of (12) and Proposition 1a,c. Furthermore, the operators uppS based on κ£ and κπi (i = 1, 2) are Čech closure operators due to Proposition 10b,g. Last but not least, uppS_{κ0,t} (0 < t ≤ 1) and uppS_{κup} are also Čech closure operators provided that rif_3(κ) and rif_4(κ) are satisfied by the RIF κ underlying κ0,t and κup (see Proposition 3c,d and Proposition 9b,c).
7 Final Remarks
In this paper we generalized the notion of a rough approximation space by allowing rough inclusion measures to comply only partially with the rough mereological axioms. More precisely, we replaced a rough inclusion function by a weak quasi-rough inclusion function (weak q-RIF) in the definition of such a space. Apart from weak q-RIFs we also defined their stronger versions, referred to as quasi-rough inclusion functions (q-RIFs). We presented three examples of proper weak q-RIFs, known from the literature [5], and two examples of proper q-RIFs, not studied earlier (at least, to the author's knowledge), and we investigated their properties. However, we mainly focused on the approximation of concepts in approximation spaces based on weak q-RIFs. First, we defined lower and upper rough approximation operators in line with Skowron and Stepaniuk (S-approximation operators), and variable-precision positive and negative region operators in the early Ziarko style. Pawlak's lower and upper rough approximation operators were also considered for the sake of comparison. Next, we formulated and examined a number of properties of these operators. Properties of approximation operators,
induced by special cases of weak q-RIFs, were also investigated. As expected, the relaxation of the postulates for rough inclusion measures had an impact mainly on the properties of the lower S-approximation. An observation which can be drawn is that, in general, the results of approximation of concepts by means of operators based on weak q-RIFs do not differ much from those obtained in the case of RIFs. Nevertheless, the results may differ dramatically depending on the particular instances of the weak q-RIFs used.
References

1. Polkowski, L., Skowron, A.: Rough mereology. In: Raś, Z.W., Zemankova, M. (eds.) ISMIS 1994. LNCS (LNAI), vol. 869, pp. 85–94. Springer, Heidelberg (1994)
2. Polkowski, L., Skowron, A.: Rough mereology: A new paradigm for approximate reasoning. Int. J. Approximate Reasoning 15, 333–365 (1996)
3. Polkowski, L., Skowron, A.: Rough mereology in information systems. A case study: Qualitative spatial reasoning. In: [47], pp. 89–135 (2001)
4. Leśniewski, S.: Foundations of the General Set Theory 1 (in Polish). Works of the Polish Scientific Circle, vol. 2. Moscow (1916); also in [48], pp. 128–173
5. Stepaniuk, J.: Knowledge discovery by application of rough set models. In: [47], pp. 137–233 (2001)
6. Xu, Z.B., Liang, J.Y., Dang, C.Y., Chin, K.S.: Inclusion degree: A perspective on measures for rough set data analysis. Information Sciences 141, 227–236 (2002)
7. Zhang, W.X., Leung, Y.: Theory of including degrees and its applications to uncertainty inference. In: Proc. of 1996 Asian Fuzzy System Symposium, pp. 496–501 (1996)
8. Pawlak, Z., Skowron, A.: Rough membership functions. In: Fedrizzi, M., Kacprzyk, J., Yager, R.R. (eds.) Advances in the Dempster–Shafer Theory of Evidence, pp. 251–271. John Wiley & Sons, New York (1994)
9. Skowron, A., Stepaniuk, J.: Generalized approximation spaces. In: Lin, T.Y., Wildberger, A.M. (eds.) Soft Computing, pp. 18–21. Simulation Councils, San Diego (1995)
10. Skowron, A., Stepaniuk, J.: Tolerance approximation spaces. Fundamenta Informaticae 27, 245–253 (1996)
11. Ziarko, W.: Variable precision rough set model. J. Computer and System Sciences 46, 39–59 (1993)
12. Ziarko, W.: Probabilistic decision tables in the variable precision rough set model. Computational Intelligence 17, 593–603 (2001)
13. Zadeh, L.A.: Outline of a new approach to the analysis of complex system and decision processes. IEEE Trans. on Systems, Man, and Cybernetics 3, 28–44 (1973)
14. Łukasiewicz, J.: Die logischen Grundlagen der Wahrscheinlichkeitsrechnung. Cracow (1913); English trans. in [49], pp. 16–63
15. Gomolińska, A.: On three closely related rough inclusion functions. In: Kryszkiewicz, M., Peters, J.F., Rybiński, H., Skowron, A. (eds.) RSEISP 2007. LNCS (LNAI), vol. 4585, pp. 142–151. Springer, Heidelberg (2007)
16. Gomolińska, A.: On certain rough inclusion functions. In: Peters, J.F., Skowron, A., Rybiński, H. (eds.) Transactions on Rough Sets IX. LNCS, vol. 5390, pp. 35–55. Springer, Heidelberg (2008)
17. Drwal, G., Mrózek, A.: System RClass – software implementation of a rough classifier. In: Klopotek, M.A., Michalewicz, M., Raś, Z.W. (eds.) Proc. 7th Int. Symp. Intelligent Information Systems (IIS 1998), Malbork, Poland, June 1998, pp. 392–395 (1998)
18. Pawlak, Z.: Rough sets. Int. J. Computer and Information Sciences 11, 341–356 (1982)
19. Pawlak, Z.: Rough Sets. Theoretical Aspects of Reasoning About Data. Kluwer, Dordrecht (1991)
20. Pawlak, Z.: Rough set elements. In: Polkowski, L., Skowron, A. (eds.) Rough Sets in Knowledge Discovery, vol. 1, pp. 10–30. Physica, Heidelberg (1998)
21. Pawlak, Z.: A treatise on rough sets. In: Peters, J.F., Skowron, A. (eds.) Transactions on Rough Sets IV. LNCS, vol. 3700, pp. 1–17. Springer, Heidelberg (2005)
22. Gomolińska, A.: Variable-precision compatibility spaces. Electronic Notes in Theoretical Computer Science 82, 1–12 (2003), http://www.elsevier.nl/locate/entcs/volume82.html
23. Gomolińska, A.: Approximation spaces based on relations of similarity and dissimilarity of objects. Fundamenta Informaticae 79, 319–333 (2007)
24. Inuiguchi, M., Tanino, T.: Two directions toward generalization of rough sets. In: [50], pp. 47–57 (2003)
25. Peters, J.F., Skowron, A., Stepaniuk, J.: Nearness of objects: Extension of approximation space model. Fundamenta Informaticae 79, 497–512 (2007)
26. Skowron, A., Stepaniuk, J., Peters, J.F., Swiniarski, R.: Calculi of approximation spaces. Fundamenta Informaticae 72, 363–378 (2006)
27. Słowiński, R., Vanderpooten, D.: Similarity relation as a basis for rough approximations. In: Wang, P.P. (ed.) Advances in Machine Intelligence and Soft Computing, vol. 4, pp. 17–33. Duke University Press (1997)
28. Yao, Y.Y.: Generalized rough set models. In: [51], pp. 286–318 (1998)
29. Yao, Y.Y.: Decision-theoretic rough set models. In: Yao, J., Lingras, P., Wu, W.-Z., Szczuka, M.S., Cercone, N.J., Ślęzak, D. (eds.) RSKT 2007. LNCS (LNAI), vol. 4481, pp. 1–12. Springer, Heidelberg (2007)
30. Yao, Y.Y.: Probabilistic rough set approximations. Int. J. of Approximate Reasoning (2007) (in press), doi:10.1016/j.ijar.2007.05.019
31. Yao, Y.Y., Wong, S.K.M.: A decision theoretic framework for approximating concepts. Int. J. of Man–Machine Studies 37, 793–809 (1992)
32. Yao, Y.Y., Wong, S.K.M., Lin, T.Y.: A review of rough set models. In: Lin, T.Y., Cercone, N. (eds.) Rough Sets and Data Mining: Analysis of Imprecise Data, pp. 47–75. Kluwer, Boston (1997)
33. Ziarko, W.: Probabilistic rough sets. In: Ślęzak, D., Wang, G., Szczuka, M.S., Düntsch, I., Yao, Y. (eds.) RSFDGrC 2005. LNCS (LNAI), vol. 3641, pp. 283–293. Springer, Heidelberg (2005)
34. Ziarko, W.: Stochastic approach to rough set theory. In: Greco, S., Hata, Y., Hirano, S., Inuiguchi, M., Miyamoto, S., Nguyen, H.S., Słowiński, R. (eds.) RSCTC 2006. LNCS (LNAI), vol. 4259, pp. 38–48. Springer, Heidelberg (2006)
35. Bazan, J.G., Skowron, A., Swiniarski, R.: Rough sets and vague concept approximation: From sample approximation to adaptive learning. In: Peters, J.F., Skowron, A. (eds.) Transactions on Rough Sets V. LNCS, vol. 4100, pp. 39–63. Springer, Heidelberg (2006)
36. Cattaneo, G.: Abstract approximation spaces for rough theories. In: [51], pp. 59–98 (1998)
37. Doherty, P., Szalas, A.: On the correspondence between approximations and similarity. In: Tsumoto, S., Słowiński, R., Komorowski, J., Grzymala-Busse, J.W. (eds.) RSCTC 2004. LNCS (LNAI), vol. 3066, pp. 143–152. Springer, Heidelberg (2004)
38. Gomolińska, A.: A comparison of Pawlak's and Skowron–Stepaniuk's approximation of concepts. In: Peters, J.F., Skowron, A., Düntsch, I., Grzymala-Busse, J.W., Orlowska, E., Polkowski, L. (eds.) Transactions on Rough Sets VI. LNCS, vol. 4374, pp. 64–82. Springer, Heidelberg (2007)
39. Pagliani, P., Chakraborty, M.K.: Formal topology and information systems. In: Peters, J.F., Skowron, A., Düntsch, I., Grzymala-Busse, J.W., Orlowska, E., Polkowski, L. (eds.) Transactions on Rough Sets VI. LNCS, vol. 4374, pp. 253–297. Springer, Heidelberg (2007)
40. Peters, J.F.: Approximation spaces for hierarchical intelligent behavioral system models. In: Dunin-Kęplicz, B., Jankowski, A., Skowron, A., Szczuka, M. (eds.) Monitoring, Security, and Rescue Techniques in Multiagent Systems, pp. 13–30. Springer, Heidelberg (2005)
41. Pomykala, J.A.: Approximation operations in approximation space. Bull. Polish Acad. Sci. Math. 35, 653–662 (1987)
42. Skowron, A.: Approximation spaces in rough neurocomputing. In: [50], pp. 13–22 (2003)
43. Skowron, A., Swiniarski, R., Synak, P.: Approximation spaces and information granulation. In: Peters, J.F., Skowron, A. (eds.) Transactions on Rough Sets III. LNCS, vol. 3400, pp. 175–189. Springer, Heidelberg (2005)
44. Wolski, M.: Approximation spaces and nearness type structures. Fundamenta Informaticae 79, 567–577 (2007)
45. Wybraniec-Skardowska, U.: On a generalization of approximation space. Bull. Polish Acad. Sci. Math. 37, 51–62 (1989)
46. Żakowski, W.: Approximations in the space (U, Π). Demonstratio Mathematica 16, 761–769 (1983)
47. Polkowski, L., Tsumoto, S., Lin, T.Y. (eds.): Rough Set Methods and Applications: New Developments in Knowledge Discovery in Information Systems. Physica, Heidelberg (2001)
48. Surma, S.J., Srzednicki, J.T., Barnett, J.D. (eds.): Stanisław Leśniewski Collected Works. Kluwer/Polish Scientific Publ., Dordrecht/Warsaw (1992)
49. Borkowski, L. (ed.): Jan Łukasiewicz – Selected Works. North Holland/Polish Scientific Publ., Amsterdam/Warsaw (1970)
50. Inuiguchi, M., Hirano, S., Tsumoto, S. (eds.): Rough Set Theory and Granular Computing. Springer, Heidelberg (2003)
51. Polkowski, L., Skowron, A. (eds.): Rough Sets in Knowledge Discovery, vol. 1. Physica, Heidelberg (1998)
Rough Geometry and Its Applications in Character Recognition

Xiaodong Yue and Duoqian Miao

Department of Computer Science & Technology, Tongji University, Shanghai, 201804, PR China
[email protected], [email protected]

Abstract. The absolutely abstract and accurate geometric elements defined in Euclidean geometry always acquire lengths or sizes in reality, so the figures in the real world should be viewed as approximate descriptions of traditional geometric elements at a rougher granular level. How can we generate and recognize the geometric features of configurations in this novel space? Motivated by this question, rough geometry is proposed as the result of applying rough set theory to traditional geometry. In the new theory, a geometric configuration can be constructed by its upper approximation at different levels of granularity, and the properties of the rough geometric elements offer us a new perspective from which to observe figures. In this paper, we focus on the foundation of the theory and observe the topologic features of approximate configurations at multiple granular levels in rough space. We also attempt to apply the research results to problems in different areas for novel solutions, such as the application of rough geometry to a traditional geometric problem (the question whether there exists a convex shape with two distinct equichordal points) and to recognition with principal curves. Finally, we describe the questions raised by our exploratory research and discuss future work.

Keywords: Rough sets, rough geometry, geometric invariants, equichordal points, principal curves.
1 Introduction
Following Zadeh's belief that information granularity exists in many areas in different forms (see, e.g., [27,28]), human problem solving always involves the ability of perception, abstraction, representation and understanding of real-world problems at different levels of granularity. Through research on the basic issues of "Granular Computing" (see, e.g., [8,9,10,11]), complex or uncertain problems are transformed among different granular spaces in search of proper solutions (see, e.g., [21,22]). Like the research focus of granular computing theory, multilevel methods of analyzing image content in both the spatial and the frequency domain have been hotspots in the research area of pattern recognition (see, e.g., [1,7]). The
analysis of the objects contained in digital images from different levels or views can not only improve recognition efficiency, but also help to understand the images' content more effectively, and this recognition process may be more consistent with human intelligence. In recent years, rough set theory has been applied to the research of image analysis and processing as a granular computing model that provides hierarchical methods (see, e.g., [2,15,16]). In particular, some popular issues in the research area of image analysis, such as recognition methods for off-line handwritten characters, usually pay attention to the geometric features of the objects (see, e.g., [14,29]), but the geometric elements analyzed in practical applications often have representations different from those in traditional geometry. In other words, the figures in the real world should be viewed as approximate descriptions of traditional geometric elements at a rougher granular level. How can we generate and recognize the geometric features of configurations in this novel space? Euclidean geometry has been the most popular measurement tool for thousands of years, but the absolutely abstract and accurate geometric elements it defines always acquire lengths or sizes in reality. For example, the points and straight lines in a digital image are sized rather than abstract, and the sizes of these geometric elements depend on the resolution of the digital image. From another view, the Euclidean points lying in the region of a pixel are indiscernible and equivalent in the digital image; the partition of the Euclidean points can then be obtained from this equivalence relation, and the pixels can be considered as the equivalence classes of the Euclidean points. So rough geometry is proposed as the result of applying the granular computing scheme and rough set theory to traditional geometry [12]. Fig. 1.1 shows a Euclidean straight line, and Fig. 1.2 indicates that the representation of this line in digital space is formed by the pixel set that covers it. The pixel set can also be viewed as the union of the equivalence classes which have non-empty intersections with the straight line; namely, the pixel set constitutes the upper approximation of the Euclidean straight line under the partition of digital space. As a matter of fact, most geometric configurations in the real world are approximate representations of Euclidean geometric elements, and the geometric properties of these approximate configurations are often different from those of the corresponding Euclidean geometric elements. The new properties of the
Fig. 1. A Euclidean Straight Line in Digital Space (1.1: Euclidean Line; 1.2: Digital Line)
geometric configuration constructed by upper approximation at different granular levels should offer us a new perspective from which to observe the geometric elements. The rest of this paper is organized as follows. Section 2 focuses on the theoretical foundation of rough geometry; based on that, we study the geometric properties and observe the variation of the topologic features of approximate configurations at different granular levels in rough space. In Section 3, we attempt to apply the research results to problems in different areas for novel solutions; mainly, the application of rough geometry to a traditional geometric problem (the question whether there exists a convex shape with two distinct equichordal points) and to recognition with principal curves will be introduced. Finally, we describe the questions raised by our exploratory research and discuss future work in Section 4.
2 Rough Geometry

2.1 Rough Sets
The rough geometric space is built on the foundations of rough set theory (see, e.g., [17,18,19]), so we first recall the notions of rough sets related to rough geometry.

An information system is a pair S = (U, A), where U is a non-empty finite set of objects and A is a non-empty finite set of attributes. Each subset of attributes B ⊆ A determines a binary indiscernibility relation IND_S(B) = {(x, y) ∈ U × U | ∀a ∈ B, a(x) = a(y)}. Because the binary relation IND_S(B) is reflexive, symmetric and transitive, IND_S(B) is an equivalence relation and defines a partition of the universe U. Given an equivalence relation R, the equivalence class of an element x ∈ U under the partition induced from R consists of all objects y ∈ U such that xRy, defined as [x]_R = {y | y ∈ U ∧ xRy}; the objects in an equivalence class are indiscernible from each other. The equivalence class of any object x under the partition formed by the indiscernibility relation IND_S(B) (B ⊆ A) is usually denoted by [x]_B for simplicity. In an information system S = (U, A), a subset of objects X ⊆ U can be described by the attribute subset B ⊆ A, i.e. X can be approximated using only the information contained in B by constructing the B-lower and B-upper approximations, denoted by B̲X and B̄X respectively, where B̲X = {x | [x]_B ⊆ X} and B̄X = {x | [x]_B ∩ X ≠ ∅}. B̲X and B̄X can also be viewed as the intension and extension of the concept represented by X.

2.2 Rough Space and Rough Configuration
In rough geometry, rough set theory is combined with traditional geometry, and figures are represented by equivalence classes and set approximations. These approximate representations are new geometric elements with distinct features in different spaces. In the following paragraphs, the fundamental concepts
about the new space and configuration approximation defined in rough geometry will be introduced.

Let ϕ be a mapping from the real number field ℝ to a subset ℝ′ of the real numbers. For all x, y ∈ ℝ, if x ≤ y ⇒ ϕ(x) ≤ ϕ(y), then ϕ is called a monotone increasing mapping from ℝ to ℝ′. The binary relation induced from the mapping ϕ, namely Eϕ = {(x, y) ∈ ℝ × ℝ | ϕ(x) = ϕ(y)}, is obviously an equivalence relation on ℝ. Let ℝⁿ be an n-dimensional Euclidean space; the indiscernibility relation "≈ϕ" defined by Eϕ in ℝⁿ is as follows: two n-dimensional points (x1, ..., xn) and (y1, ..., yn) are indiscernible iff (x1, y1) ∈ Eϕ, (x2, y2) ∈ Eϕ, ..., (xn, yn) ∈ Eϕ, i.e. (x1, ..., xn) ≈ϕ (y1, ..., yn) ⇔ (xi, yi) ∈ Eϕ (i = 1, 2, ..., n). The relation ≈ϕ is reflexive, symmetric and transitive, thus it is an equivalence relation and determines a partition of ℝⁿ; in other words, the points of an n-dimensional space can be divided into corresponding regions through the mapping ϕ.

Definition 1. Rough Space. Let ≈ϕ be an equivalence relation in the n-dimensional space ℝⁿ. The set of all equivalence classes formed by the partition induced from ≈ϕ, denoted by ℝⁿ/≈ϕ, is called a rough space.

Definition 2. Rough Point. Let ℝⁿ/≈ϕ be a rough space induced from the equivalence relation ≈ϕ in an n-dimensional space ℝⁿ. An element of the rough space, i.e. an equivalence class under the partition of the relation ≈ϕ, is called a rough point in the space ℝⁿ/≈ϕ.

Definition 3. Rough Configuration. Let ℝⁿ/≈ϕ be a rough space induced from the equivalence relation ≈ϕ in an n-dimensional space ℝⁿ. A subset of rough points in the rough space is called a rough configuration in the space ℝⁿ/≈ϕ.
Fig. 2. Upper Approximations of a Straight Line in Different Rough Spaces (2.1: S in ℝⁿ/=; 2.2: U≈1(S) in ℝⁿ/≈1; 2.3: U≈2(S) in ℝⁿ/≈2)
Definition 4. Rough Subspace. Let ℝⁿ/≈1 and ℝⁿ/≈2 be two rough spaces. If every rough point of ℝⁿ/≈1 is contained in a rough point of ℝⁿ/≈2, the space ℝⁿ/≈1 is called a rough subspace of ℝⁿ/≈2; i.e. ℝⁿ/≈1 is a rough subspace induced from ℝⁿ/≈2, and ℝⁿ/≈2 is called the upper space of ℝⁿ/≈1. The relation between two such spaces is denoted by ℝⁿ/≈1 ≤ ℝⁿ/≈2. Furthermore, a special rough space ℝⁿ/= is defined as {{(x1, ..., xn)} | (x1, ..., xn) ∈ ℝⁿ}, i.e. every equivalence class in the space ℝⁿ/= contains only one Euclidean point. If the difference between a Euclidean point and the set containing only this point is ignored, the rough space ℝⁿ/= is just the n-dimensional
Euclidean space ℝⁿ. Obviously, ℝⁿ/= is a rough subspace of any rough space
ℝⁿ/≈ϕ.

Definition 5. Transformation of Upper and Lower Approximation. Let S be a rough configuration in the space ℝⁿ/≈1. The upper approximation and the lower approximation of S in another rough space ℝⁿ/≈2 are denoted by U≈2(S) and L≈2(S) respectively, and defined as U≈2(S) = {P ∈ ℝⁿ/≈2 | P ∩ ∪S ≠ ∅} and L≈2(S) = {P ∈ ℝⁿ/≈2 | P ⊆ ∪S}, where ∪S is the union of all elements of the configuration S in the space ℝⁿ/≈1. Fig. 2 indicates the transformation of the upper approximation of a Euclidean straight line in the rough spaces ℝⁿ/≈1 and ℝⁿ/≈2, where ℝⁿ/≈1 ≤ ℝⁿ/≈2.

2.3 Geometric Invariants of Upper Approximation Transformation
The German mathematician Felix Klein gave the most general definition of "geometry" as the study of geometric invariants under a group of transformations; for example, projective geometry focuses on geometric invariants under projective transformations. In rough geometry, we pay attention to the geometric invariants of configurations under upper approximation in rough spaces, i.e. the invariant properties of approximate configurations at different granular levels. In the following paragraphs, some geometric invariants of the upper approximation transformation in rough spaces will be introduced, and concepts such as "rough line segment", "rough convexity" and "equal distance" will be further developed in the proofs of the corresponding properties. The monotone mapping ϕδ : ℝ → ℤ, x ↦ ⌊x/δ⌋, where δ ∈ ℝ⁺, will be adopted to construct the rough spaces in this section; ℝ, ℝ⁺ and ℤ denote the real numbers, the positive real numbers and the integers respectively, and ⌊x⌋ is the operator returning the greatest integer less than or equal to the real number x (see Fig. 3).
Fig. 3. Monotone Mapping ϕδ : R → Z
The mapping ϕδ divides ℝ into a sequence of intervals ..., [−2δ, −δ), [−δ, 0), [0, δ), [δ, 2δ), ... The rough space ℝⁿ/≈ϕδ induced from the indiscernibility relation ≈ϕδ, which is defined by the equivalence relation Eϕδ, is denoted by SPACE(δ). We can see that SPACE(δ1) ≤ SPACE(δ2) iff δ2 is a multiple of δ1; moreover, ℝⁿ/= is denoted by SPACE(0), and every real number is considered to be a multiple of 0, so that SPACE(0) ≤ SPACE(δ) for every δ. Furthermore, the upper approximation of a configuration S in SPACE(δ) will be denoted by Uδ(S) rather than U≈ϕδ(S) for simplicity.

Theorem 1. Let SPACE(δ) ≤ SPACE(δ1). A rough point in SPACE(δ) is still a rough point after the transformation of upper approximation from SPACE(δ) to SPACE(δ1).
Rough Geometry and Its Applications in Character Recognition
141
Proof. Let SPACE(δ) ≤ SPACE(δ1) and let P be a rough point in SPACE(δ). Because δ1 is divisible by δ, the upper approximation Uδ1(P) of this point is still a rough point in the space SPACE(δ1); see Fig. 4.
Fig. 4. Rough Points in SPACE(0.5) and SPACE(1.5) (4.1: SPACE(0.5); 4.2: SPACE(1.5))
Theorem 2. Let SPACE(δ) ≤ SPACE(δ1). The relative location of two rough points in the rough space SPACE(δ) is invariant in the space SPACE(δ1) under the proper transformation of upper approximation.

Proof. Let SPACE(δ) ≤ SPACE(δ1), let P(i1, j1) and Q(i2, j2) be two rough points in SPACE(δ), and suppose i1 ≤ i2 (likewise for i1 ≥ i2, j1 ≤ j2 or j1 ≥ j2). From Def. 2, the upper approximations of the two points in SPACE(δ1), i.e. Uδ1(P) = (l1, t1) and Uδ1(Q) = (l2, t2), also satisfy l1 ≤ l2 (respectively l1 ≥ l2, t1 ≤ t2 or t1 ≥ t2).

Definition 6. Rough Line Segment. Let S be a subset of rough points in SPACE(δ). If there exists at least one Euclidean line segment l such that S = Uδ(l), then S is called a rough line segment in SPACE(δ).

Theorem 3. Let SPACE(δ) ≤ SPACE(δ1). A rough line segment in SPACE(δ) is still a rough line segment after the proper transformation of upper approximation from SPACE(δ) to SPACE(δ1).

Proof. Let SPACE(δ) ≤ SPACE(δ1), let S be a rough line segment in the space SPACE(δ), and let S′ = Uδ1(S) be the upper approximation of S in SPACE(δ1). According to Def. 1 and Def. 6, any Euclidean line segment l with S = Uδ(l) also satisfies S′ = Uδ1(l), so there must exist a Euclidean line segment whose upper approximation is S′ in SPACE(δ1).

Definition 7. Rough Convexity. Let S be a rough configuration in SPACE(δ). If (i0, j0) ∈ S, let ST(i0) = max{j | (i0, j) ∈ S}, SR(j0) = max{i | (i, j0) ∈ S}, SB(i0) = min{j | (i0, j) ∈ S}, SL(j0) = min{i | (i, j0) ∈ S}. As illustrated in Fig. 5.1, for i0 = 3 and j0 = 2 we have ST(i0) = 3, SB(i0) = 0, SR(j0) = 6, SL(j0) = 1. A rough configuration S is called upper-convex if for any pair of points P = (iP, jP) and Q = (iQ, jQ) in S (suppose iP ≤ iQ), there exists at least one line
segment L in SPACE(δ) passing through P and Q such that ST(i) ≥ LT(i) for any iP ≤ i ≤ iQ; see Fig. 5.2. As shown in Fig. 5.3, a rough configuration S is called lower-convex if for any pair of points P = (iP, jP) and Q = (iQ, jQ) in S (suppose iP ≤ iQ), there exists at least one line segment L in SPACE(δ) passing through P and Q that satisfies SB(i) ≤ LB(i) for any iP ≤ i ≤ iQ. Similarly, a rough configuration S is called right-convex if for any pair of points P = (iP, jP) and Q = (iQ, jQ) in S (suppose jP ≤ jQ), there exists at least one line segment L in SPACE(δ) passing through P and Q that satisfies SR(j) ≥ LR(j) for any jP ≤ j ≤ jQ; and S is called left-convex if for any such pair there exists at least one line segment L in SPACE(δ) passing through P and Q that satisfies SL(j) ≤ LL(j) for any jP ≤ j ≤ jQ.
Fig. 5. Convex Configurations in SPACE(0.5) (5.1: S in SPACE(0.5); 5.2: Upper-Convexity; 5.3: Lower-Convexity)
Theorem 4. An upper-convex rough configuration in a rough subspace SPACE(δ) is still upper-convex in the upper space SPACE(δ1) of SPACE(δ) after the proper transformation of upper approximation; similar results hold for lower-convex, left-convex and right-convex configurations.

Proof. Let S be an upper-convex configuration in SPACE(δ). From Def. 7, for any pair of points P = (iP, jP) and Q = (iQ, jQ) in S (suppose iP ≤ iQ), there exists a line segment L : PQ in SPACE(δ) such that ST(i) ≥ LT(i) for iP ≤ i ≤ iQ. Thus for any i (iP ≤ i ≤ iQ), there must exist a point Z = (i, jZ) satisfying

Z ∈ S ∧ jZ ≥ LT(i).   (1)
Let SPACE(δ) ≤ SPACE(δ1), and let S′ = Uδ1(S) be the upper approximation of S in SPACE(δ1); likewise write L′ = Uδ1(L), P′ = Uδ1(P), Q′ = Uδ1(Q) and Z′ = Uδ1(Z). Let ϕδ′(x) : x ↦ ⌊x/δ′⌋ be the mapping from SPACE(δ) to SPACE(δ1), in which

δ′ = δ1/δ if δ > 0, and δ′ = δ1 if δ = 0.   (2)

Because ϕδ′(x) is monotone increasing, we have
i′P = ϕδ′(iP) ≤ i′ = ϕδ′(i) ≤ i′Q = ϕδ′(iQ),   (3)
Rough Geometry and Its Applications in Character Recognition
j′Z = ϕδ′(jZ) ≥ ϕδ′(LT(i)) = L′T(i′).   (4)
From (3) and (4), we know that for any i′ with i′P ≤ i′ ≤ i′Q, there is a point Z′ = (i′, j′Z) such that

Z′ ∈ S′ ∧ j′Z ≥ L′T(i′).   (5)
As mentioned above, it can be inferred that for the pair of points P′ and Q′ in S′, there exists a line segment L′ : P′Q′ in SPACE(δ1) such that S′T(i′) ≥ L′T(i′) for any i′ (i′P ≤ i′ ≤ i′Q). So S′ is still upper-convex in SPACE(δ1). By similar proofs, the results for lower-convex, left-convex and right-convex configurations can also be obtained.

Definition 8. Configuration Intersection. Let S1 and S2 be two rough configurations in SPACE(δ). If S1 ∩ S2 ≠ ∅, we say that S1 and S2 intersect in SPACE(δ). Let P be a rough point and S a rough configuration in SPACE(δ); if P ∈ S, we say that S passes the point P.

Theorem 5. Let SPACE(δ) ≤ SPACE(δ1), and let S1 and S2 be two rough configurations in SPACE(δ). If S1 and S2 intersect in SPACE(δ), then Uδ1(S1) and Uδ1(S2) must intersect in the upper space SPACE(δ1). Suppose S is a rough configuration in SPACE(δ); if S passes the rough point P in SPACE(δ), then Uδ1(S) must pass the point Uδ1(P) in SPACE(δ1).

Theorem 6. Let SPACE(δ) ≤ SPACE(δ1). If configurations S1 and S2 are symmetric about the origin or a coordinate axis in SPACE(δ), their upper approximations Uδ1(S1) and Uδ1(S2) in SPACE(δ1) are still symmetric about the same element under the proper transformation.

Definition 9. Rough Distance. Let P, Q ∈ SPACE(δ) be two rough points. The upper approximation of the distance between P and Q is defined as

Uδ(P, Q) = ϕδ(max{|AB| : A ∈ P, B ∈ Q}) × δ + δ,   (6)

and the lower approximation of the distance is correspondingly defined as

Lδ(P, Q) = ϕδ(min{|AB| : A ∈ P, B ∈ Q}) × δ,   (7)
i.e. the distance approximations between the two points in rough space are constructed from the maximum and the minimum distances between the Euclidean points contained in the rough points. The closed interval dδ(P, Q) = [Lδ(P, Q), Uδ(P, Q)] is considered the roughness range of the distance between the points P and Q in SPACE(δ). As shown in Fig. 6, the maximal and the minimal Euclidean distances between the two rough points P = (1, 1) and Q = (5, 3) in SPACE(0.5) are the distances between the Euclidean points C, D and E, F respectively: U0.5(P, Q) = ϕ0.5(max{|AB| : A ∈ P, B ∈ Q}) × 0.5 + 0.5 = ϕ0.5(|CD|) × 0.5 + 0.5 = 3 and L0.5(P, Q) = ϕ0.5(min{|AB| : A ∈ P, B ∈ Q}) × 0.5 = ϕ0.5(|EF|) × 0.5 = 1.5; thus the roughness range of the distance between P and Q is d0.5(P, Q) = [1.5, 3].
Fig. 6. Rough Distance Between P and Q in SPACE(0.5) (6.1: U0.5(P, Q); 6.2: L0.5(P, Q))
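Definition 9 is also straightforward to compute. In the sketch below (names ours), a 2-D rough point (i, j) of SPACE(δ) is the cell [iδ, (i+1)δ) × [jδ, (j+1)δ); for grid cells the extreme distances are attained at cell corners, and we use the closure of each cell, which matches the worked example with the points C, D, E, F above.

```python
import math

def corners(cell, delta):
    # the four corners of the closed cell [i*delta, (i+1)*delta] x [j*delta, (j+1)*delta]
    i, j = cell
    xs = (i * delta, (i + 1) * delta)
    ys = (j * delta, (j + 1) * delta)
    return [(x, y) for x in xs for y in ys]

def rough_distance(p, q, delta):
    # the roughness range d_delta(P, Q) = [L_delta(P, Q), U_delta(P, Q)] of Definition 9
    dists = [math.dist(a, b) for a in corners(p, delta) for b in corners(q, delta)]
    upper = math.floor(max(dists) / delta) * delta + delta
    lower = math.floor(min(dists) / delta) * delta
    return lower, upper

print(rough_distance((1, 1), (5, 3), 0.5))   # (1.5, 3.0), as in Fig. 6
```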
Definition 10. Equal Distance. Let T be an index set and {(Pt, Qt) | t ∈ T} a set of rough point pairs in SPACE(δ); the distance of a pair (Pt, Qt) is the rough distance between Pt and Qt. The distances of the pairs in the set {(Pt, Qt) | t ∈ T} are considered equal iff ⋂_{t∈T} dδ(Pt, Qt) ≠ ∅.

Theorem 7. Let SPACE(δ) ≤ SPACE(δ1), let T be an index set, let {(Pt, Qt) | t ∈ T} be a set of rough point pairs in SPACE(δ), and let Uδ1(Pt) and Uδ1(Qt) be the upper approximations of Pt and Qt (t ∈ T) in SPACE(δ1). If the distances of all rough point pairs in {(Pt, Qt) | t ∈ T} are equal, then the distances of the pairs in the set {(Uδ1(Pt), Uδ1(Qt)) | t ∈ T} are still equal in SPACE(δ1) under the proper transformation.

Proof. Let SPACE(δ) ≤ SPACE(δ1) and let (P, Q) be a rough point pair in SPACE(δ). We have
(8)
min{|A, B||A ∈ P, B ∈ Q} ≥ min{|A, B||A ∈ Uδ1 (P ), B ∈ Uδ1 (Q)}
(9)
Because ϕδ(x) = ⌊x/δ⌋ is monotonically increasing,

ϕδ(max{|AB| : A ∈ P, B ∈ Q}) × δ + δ ≤ ϕδ(max{|AB| : A ∈ Uδ1(P), B ∈ Uδ1(Q)}) × δ + δ.   (10)
As SPACE(δ) ≤ SPACE(δ1) and δ1 is divisible by δ,

ϕδ(max{|AB| : A ∈ Uδ1(P), B ∈ Uδ1(Q)}) × δ + δ ≤ ϕδ1(max{|AB| : A ∈ Uδ1(P), B ∈ Uδ1(Q)}) × δ1 + δ1.   (11)
From (10) and (11), we have Uδ(P, Q) ≤ Uδ1(Uδ1(P), Uδ1(Q)). Similarly,

ϕδ(min{|AB| : A ∈ P, B ∈ Q}) × δ ≥ ϕδ(min{|AB| : A ∈ Uδ1(P), B ∈ Uδ1(Q)}) × δ ≥ ϕδ1(min{|AB| : A ∈ Uδ1(P), B ∈ Uδ1(Q)}) × δ1.   (12)
Rough Geometry and Its Applications in Character Recognition
145
So Lδ(P, Q) ≥ Lδ1(Uδ1(P), Uδ1(Q)), and from the formulas above we can infer that dδ(P, Q) ⊆ dδ1(Uδ1(P), Uδ1(Q)). Given a set of rough point pairs {(Pt, Qt) | t ∈ T} in SPACE(δ) whose distances are all equal, we have ⋂_{t∈T} dδ(Pt, Qt) ≠ ∅ according to Def. 10. Because dδ(Pt, Qt) ⊆ dδ1(Uδ1(Pt), Uδ1(Qt)) for every t ∈ T, it follows that ⋂_{t∈T} dδ1(Uδ1(Pt), Uδ1(Qt)) ⊇ ⋂_{t∈T} dδ(Pt, Qt) ≠ ∅. Hence the distances between the upper approximations Uδ1(Pt) and Uδ1(Qt) (t ∈ T) are equal in SPACE(δ1).

2.4 Problems and Possible Improvement
Mapping to Construct Rough Space. In this section, we supposed that the rough space is constructed by the very simple mapping ϕδ : ℝ → ℤ, x ↦ ⌊x/δ⌋, whose induced equivalence relation leads to a regular partition of the n-dimensional Euclidean space ℝⁿ. Although more complex mappings can be used to construct rough spaces, and the approximate configurations formed by irregular partitions of ℝⁿ may be more appropriate for some practical problems, the analysis of geometric properties in such spaces becomes very difficult, and most of the useful transformation principles mentioned above are lost.

Proper Transformation. In addition, we must note that some propositions and definitions introduced in this section hold only under the condition of proper transformation. In other words, some principles may break down in extreme situations. For example, a rough line segment turns into a rough point when transformed into an upper space that is rough enough to cover the segment with only one equivalence class. For some configurations, especially digital characters, an improper transformation into rougher spaces cannot guarantee that important topological features, such as connectivity and curvature, remain invariant, which will lead to recognition errors. Furthermore, choosing the proper transformation space actually belongs to the issue of seeking the proper granular level for a solution, and it should be considered depending on the specific problem. In the following sections, methods of computing the proper roughness of the upper space in a transformation will be introduced for the specific applications.

Extension to n-Dimensional Space. The research in this paper focuses on introducing rough geometry and its application to digital character recognition; thus the definitions and properties given above are mainly considered in 2D spaces. The generalization of this theory from 2D to nD will be future work driven by specific applications, and the novel properties discovered will be compared with the similar research in [7].
3 Application of Rough Geometry

3.1 Application in Equichordal Point Problem
In Euclidean geometry, the Equichordal Point Problem can be formulated in simple geometric terms. If C is a Jordan curve in the plane and P, Q ∈ C, then the line segment PQ is called a chord of the curve C. A point inside the curve is called equichordal if every two chords through this point have the same length. For example, it is a well-known fact that a circle has exactly one equichordal point, namely its center. But can a convex shape have more than one equichordal point? This question was posed by Fujiwara in 1916 and independently by Blaschke, Rothe and Weitzenböck in 1917. Since then, the problem whether there exists a closed convex curve with two equichordal points had been a classic issue in traditional geometry until it was resolved by M. R. Rychlik in 1997 [20]. He proved that there exists no closed convex curve with more than one equichordal point. The Euclidean curve with two equichordal points and the analysis of its features are introduced in the related research (see, e.g., [12,20]). Although Rychlik's result is of significant theoretical value, a closed convex curve with more than one equichordal point can exist in spaces rougher than the Euclidean space. Because the representations of shapes in the real world are always approximations of Euclidean geometric elements rather than absolutely accurate and abstract, results from analyzing this traditional geometric problem in rough space may be useful in some specific applications. In the following paragraphs, we introduce how to construct a proper rough space to represent a convex shape with two equichordal points. Fig. 10.5 represents the closed convex curve in the rough space SPACE(b/600), where a is the distance between the two equichordal points, b is the length of the common chord, and a = 0.3b in Euclidean space. The partition is so fine that we can denote a and b in the rough space instead of the distance approximations for simplicity. The curve with two equichordal points in Euclidean space can be constructed as follows (see Fig. 7): the two equichordal points are laid on the horizontal axis symmetrically, the distance between them is a, and the length of the common chord is b. Given a point in the plane, denoted by the number 0, as the initial point, a line segment of length b through the left equichordal point is drawn from point 0, and the other end point of this segment is denoted as point 1; from point 1 a second line segment of length b is drawn through the right equichordal point, and the new end point is marked as number 2. In this process, line segments passing alternately through the left and right equichordal points are created repeatedly, and the coordinates of the (n+1)th point can be computed from the nth point according to the iterative formula in the polar coordinate system and complex space [12]. The related research has proven that for any initial point in the plane, the iteration converges to a pair of conjugate points: all the points denoted by even numbers, which converge to the right equichordal point, form a continuous curve, and similarly all the points denoted by odd numbers, which converge to the
Fig. 7. Curve of a = 0.5b
left equichordal point, create another continuous curve; these two curves will be named the even curve and the odd curve respectively in the next paragraphs. The shape formed by the continuous curves that converge to the conjugate points on the horizontal axis is shown in Fig. 8.1: the segment of the even curve in the first quadrant and the part of the odd curve in the third quadrant are convex, but the corresponding portions of the curves in the second and fourth quadrants are not convex and have pulsation. The even curve and the odd curve are not connected, but when a proper initialization makes the two continuous curves close enough on the horizontal axis, we can consider the shape approximately as a closed curve. In this closed curve, a chord is a line segment from an even-number point to an odd-number point. The approximate closed curve constructed by passing through the left equichordal point first, as mentioned above, is called the right curve (see Fig. 8.1), and the similar approximate construction through the right equichordal point first is called the left curve (see Fig. 8.2). We can also infer that for any pair of equichordal points on the horizontal axis, left and right curves that are symmetric with respect to the vertical axis can be constructed by adopting symmetric initial points. Based on the above introduction, the closed curve with two equichordal points can be represented in different rough spaces. Let the distance between the two equichordal points a be 0.5 and the length of the common chord b be 1. Given a rough space SPACE(δ), when δ = 1/20 the equivalence classes in the space are too small to shield the pulsation, so the approximation of the closed curve is not convex in SPACE(1/20); see Fig. 9.1. When δ = 1/9, the left and right curves have a common upper approximation that is convex in SPACE(1/9); see Fig. 9.2.
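The iterative construction just described is easy to sketch in code. The following is our own illustrative reading with hypothetical names, not the iterative formula of [12]: from the current point, the next point is the far endpoint of the length-b segment drawn through the alternating equichordal point, assuming the current point lies within distance b of that focus.

```python
import math

def next_point(p, focus, b):
    # far endpoint of the length-b segment drawn from p through the focus
    dx, dy = focus[0] - p[0], focus[1] - p[1]
    r = math.hypot(dx, dy)           # assumed 0 < r <= b, so the focus lies on the chord
    return (p[0] + b * dx / r, p[1] + b * dy / r)

def equichordal_points(p0, a, b, steps):
    # alternate the left focus (-a/2, 0) and the right focus (a/2, 0), as in the text
    left, right = (-a / 2.0, 0.0), (a / 2.0, 0.0)
    pts, p = [p0], p0
    for k in range(steps):
        p = next_point(p, left if k % 2 == 0 else right, b)
        pts.append(p)
    return pts  # even-indexed points sample the even curve, odd-indexed the odd curve

pts = equichordal_points((0.0, 0.4), a=0.5, b=1.0, steps=200)
```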
Fig. 8. Right and Left Curves (8.1: Right Curve; 8.2: Left Curve)
Fig. 9. Closed Curve in Different Spaces (9.1: SPACE(1/20); 9.2: SPACE(1/9))
Fig. 10. Right Curves in SPACE(b/600) (10.1: a = 0.7b; 10.2: a = 0.6b; 10.3: a = 0.5b; 10.4: a = 0.4b; 10.5: a = 0.3b)
Given a rough space SPACE(δ), where δ = b/600 and the common chord length b is fixed, the shapes of the right curve for varying distance a are shown in Fig. 10.
It is apparent that the pulsation of the closed curve gradually becomes weaker as the distance a decreases. As illustrated in Figs. 10.1-10.4, the approximation of the right curve is neither convex nor symmetric about the vertical axis. We can establish the convexity of the approximation of the right curve in the rough space by means of its symmetric left curve as follows. If the upper approximation of the right curve also covers the left curve, this approximation is the common representation of both curves with the two equichordal points in the rough space. Because the two curves are symmetric with respect to the vertical axis, the common approximation is symmetric about the vertical axis according to Theorem 6. As introduced above, the closed curve is convex in the first and third quadrants; according to Theorem 4, the approximation remains convex in these quadrants, and owing to this symmetry the approximate shape is convex in all quadrants. Furthermore, from Theorem 7 we obtain that the length approximations of all chords in the rough space are equal. In this way, a convex closed curve with two equichordal points can be obtained in a rough space. As shown in Fig. 10.5, when the ratio of a to b is 0.3, the pulsation is completely covered by the convex and symmetric approximation of the closed curves in SPACE(b/600). From the paragraphs above, we have learned the important factors that influence the shape of the closed curve with two equichordal points. It can be inferred that, for any common chord length b and distance a between the two equichordal points, and for any initial point in the plane, there must exist a rough space in which the left and right curves have a common upper approximation, and this approximation is a convex shape with two equichordal points. In particular, when the corresponding partition of the space is fine enough, the rough configuration may appear to the eye as a genuinely closed convex curve. Thus the closed convex curve with two equichordal points can be constructed in a rough space. As introduced in this section, the application of rough geometry to the Equichordal Point Problem indicates that rough configurations in approximate spaces have properties of their own, different from those of shapes in Euclidean space, and novel results may be obtained by observing traditional geometric problems in a rough space.
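To make the construction concrete, the chord iteration that generates the even and odd curves can be sketched in a few lines of Python. This is only an illustrative sketch: the parameter values a = 0.3, b = 1, the initial point and the iteration count are our own assumptions, not values prescribed by the paper, and the paper's polar/complex-plane formula [12] is replaced by a direct Cartesian transcription of the geometric step described above.

    import math

    def chord_iteration(a=0.3, b=1.0, p0=(0.0, 0.35), n=2000):
        left, right = (-a / 2.0, 0.0), (a / 2.0, 0.0)
        pts = [p0]
        p = p0
        for k in range(n):
            e = left if k % 2 == 0 else right   # pass through the left point first
            dx, dy = e[0] - p[0], e[1] - p[1]
            d = math.hypot(dx, dy)              # distance from the current point to e
            # the next endpoint lies on the ray from p through e, at chord length b
            p = (p[0] + b * dx / d, p[1] + b * dy / d)
            pts.append(p)
        return pts[0::2], pts[1::2]             # even-indexed and odd-indexed points

    even_curve, odd_curve = chord_iteration()

Passing through the left equichordal point first yields the right curve; swapping left and right in the first step yields the left curve.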
3.2 Application in Principal Curves
The term "principal curves" was first proposed by Hastie and Stuetzle in 1984, and principal curves are usually defined as "self-consistent" smooth curves which pass through the "middle" of an n-dimensional probability distribution (see, e.g., [3,4]). Principal curves can provide a nonlinear summary of the data by reflecting the data distribution in a low-dimensional space, and the curves' shape is suggested by the data. From another viewpoint, the principal curves are the skeleton of a data set and the data set is the "cloud" around the curves. They constitute a one-dimensional manifold of the data in a high-dimensional space and can be viewed as the nonlinear generalization of principal component analysis (PCA). Because principal curves preserve most of the information about the data distribution, they usually serve as an efficient feature extraction tool. The field
11.1: SP ACE(1)
11.2: SP ACE(8)
Fig. 11. Principal Curves of ’0’ in Different Spaces
has been very active since Hastie and Stuetzle's groundbreaking work, and numerous alternative methods for estimating principal curves have been proposed and analyzed (see, e.g., [5,6,23,24]). The applications of this theory in various fields such as image analysis, feature extraction, and speech processing have demonstrated that principal curves are not only of theoretical interest, but also have a legitimate place in the family of practical unsupervised learning techniques (see, e.g., [13,14,29]). According to the definition of principal curves given by Hastie and Stuetzle, self-consistency means that each point of the curve is the average of all data points that project there. Thus, the complexity of the algorithms producing principal curves is always closely related to the scale of the data set. In some practical problems, however, it may not be necessary to traverse all initial data points to produce the skeleton of the data distribution. In fact, approximate representations of the curves that capture the most important topological features of the distribution are sufficient for some recognition tasks, and these approximations can be obtained from only the rough points that preserve the object's primary structure. For example, existing recognition methods for off-line handwritten characters usually generate features from all pixels contained in the configurations, but the objects can be viewed at a rougher granular level to obtain the same results (see Fig. 11). Relying on the invariants of the transformation among rough spaces introduced in Section 2, such as the invariance of convexity, we can use principal curves to extract the geometric features of the characters in spaces rougher than the original images. This process brings several benefits to the recognition work: first, the efficiency of the algorithms for generating the skeletons is improved, as the scale of the original data set is greatly reduced; second, the detrimental effects of trivial details produced by redundant data in the character figures are weakened at the rougher level; third, as a consequence, the classification rules are simplified. As mentioned above, our exploratory work attempts to apply rough geometry to character recognition. In our experiments, the polygonal line algorithm for principal curves (see, e.g., [5,6]) is adopted to produce the skeletons of the off-line handwritten digits.
Fig. 12. Process of Off-line Character Recognition
Furthermore, the proper rough spaces for the upper transformation can be chosen according to the thickness of the characters, and the classifier is constructed based on rough set methods. The flow diagram of the system is shown in Fig. 12. As illustrated in Table 1, the skeletons and the recognition results of the sample digit can be obtained in different rough spaces. One pixel of the original digital image is considered as the smallest equivalence class in the rough spaces, with the corresponding δ = 1, so the finest rough space constructed from the pixels is denoted by SPACE(1). The polygonal line algorithm is used to extract the skeletons for generating the geometric features of the characters at five different granular levels. The sample figures, extracted skeletons and recognition results of two persons' handwriting in different rough spaces are displayed in the table, in which N is the scale of the digital image, P is the number of points in the skeleton, and K is the number of components of the principal curves extracted from the character. In the experiment, we discovered that the upper approximation transformation causes little damage to the characters' geometric features that we are interested in, and the skeletons obtained from the rough spaces are good enough for the recognition work. This observation is consistent with the invariants of the transformation introduced in Section 2. From the analysis of the experimental data, we can see that the scale of the original data, the size of the skeletons and the number of iterations in the process of producing the principal curves are all greatly reduced as the rough space is transformed. It should also be noticed that a false recognition result caused by trivial details can be rectified through the transformation (see Table 1). Accordingly, by training with the features generated in the proper rough spaces, the classifier is further simplified. As mentioned above, the efficiency of the recognition algorithm based on principal curves and rough sets can be effectively improved by applying rough geometry as a preprocessing step in feature generation.
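As an illustration of this preprocessing step, the upper-approximation transformation of a binary character image into SPACE(δ) can be sketched as follows. This is a minimal sketch under our own assumptions, suggested by Section 2: the input is a NumPy 0/1 array, the equivalence classes of SPACE(δ) are δ × δ pixel blocks, and a block belongs to the rough configuration as soon as it meets at least one foreground pixel. The image and the stroke drawn into it are purely illustrative.

    import numpy as np

    def upper_transform(img, delta):
        # pad the binary image so that its sides are multiples of delta
        h, w = img.shape
        H, W = -(-h // delta), -(-w // delta)        # ceiling division
        padded = np.zeros((H * delta, W * delta), dtype=img.dtype)
        padded[:h, :w] = img
        # a delta x delta block is 1 iff it intersects the original figure
        blocks = padded.reshape(H, delta, W, delta)
        return (blocks.max(axis=(1, 3)) > 0).astype(np.uint8)

    img = np.zeros((500, 500), dtype=np.uint8)       # e.g. a scanned digit in SPACE(1)
    img[100:400, 240:260] = 1                        # a crude vertical stroke
    coarse = upper_transform(img, 4)                 # its 125 x 125 image in SPACE(4)

The number of cells to be processed drops by a factor of δ², which is the source of the reduction in data scale reported in Table 1.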
Table 1. Recognition Results of Figure '9' in Rough Spaces

                     SPACE(1)    SPACE(4)    SPACE(8)   SPACE(12)   SPACE(20)
Writer 1   N        500 × 500   125 × 125    62 × 62     41 × 41     25 × 25
           P              159          60         46          32          24
           K                3           2          2           2           2
           Result           5           9          9           9           9
Writer 2   N        500 × 500   125 × 125    62 × 62     41 × 41     25 × 25
           P              150          56         45          32          22
           K                3           3          2           2           2
           Result           9           9          9           9           9
4 Conclusion and Prospect
In traditional geometry, geometric elements are defined as absolutely abstract and accurate, but the configurations we see in the real world always have lengths or sizes. Rough geometry attempts to combine rough set theory with geometric methods, and to generate and analyze graphics at rougher granular levels through the approximation transformation. The aim of the investigation is to construct geometric spaces better suited to problem solving. The motivation of the research and some principles of rough geometry have been introduced, and we have also presented applications of this theory to a traditional geometry problem and to character recognition. Although the new geometry is
expected to be an effective tool for measuring configurations in approximate spaces, at present it is based only on personal views and immature, perhaps controversial, ideas. There is still a long way to go before it becomes an integrated system. In our future work, the improvement and enrichment of the theory will be continued, and the applications of the theory will be studied further as well. In the next paragraphs, we describe the questions raised by our exploratory research, organized according to the basic issues of granular computing (see, e.g., [25,26]), and indicate possible solutions to these problems for future work.

Granulation
How can we construct the optimal approximate space and representation of the configuration for a specific application? This question refers to the construction of the basic components of granular computing: granules, granulated views and hierarchies, which correspond to the equivalence classes, rough configurations and rough spaces, respectively, in rough geometry. Under the existing definitions of rough geometry, although the upper approximations can preserve some geometric features of the original graphics, they may also lose some important information. For example, the upper approximation can damage the property of connectivity and increase the number of loops in the graphics, and such changes of the topological features have undesirable effects. As the existing method for constructing the approximations of objects needs to be further improved, the following ideas may be helpful in the future. First, other approximation forms can be adopted in the same rough space defined in Section 2, such as the lower approximation; by combining the information obtained from different approximations, we can maintain most features of the original objects in the approximate representations. In other words, we can observe the graphics from multiple profiles to obtain sufficient information about the features. The second suggestion for improving the approximation is to define different approximate spaces so as to capture the most important geometric features under transformation. However, inducing irregular partitions of the space requires a more complex mapping than the simple construction of the rough space. Although this may be difficult, we should notice that the semantics of the objects are usually ignored in the construction of the approximate space; it may be possible to form optimal irregular rough spaces, based on the objects' contents, that preserve the most features under transformation.

Computing with Granules
How can we construct proper mappings between multi-level approximate spaces that preserve most properties of the objects? How can we decide the optimal granular level for problem solving? The key points in the issue of computing with granules are mappings between different levels of granulation, granularity conversion and property preservation,
and they are also the essential targets of the transformation in rough geometry. In this paper, we define the mappings between different spaces, from fine to rough, like the upper approximation in rough sets, which leads to a simple transformation. This transformation makes it easy to seek the rules of property preservation, but it can also damage important geometric features, as discussed above. Therefore a more appropriate transformation should be defined according to the specific problem; for example, an irregular transformation may be defined based on the characters' structure to capture more geometric properties across changing spaces. Furthermore, the upper approximation transformation is a bottom-up way of constructing the hierarchy, while the inverse, top-down approaches may also be useful for feature preservation; for example, the analysis of important details in local areas of the whole rough configuration can compensate for the loss of properties in transformation. In practical problems, people usually tend to choose a proper granular level for the solution, and the ability to move freely among different levels of granularity is in fact an embodiment of human intelligence. This issue is also a research hotspot in granular computing. In a pattern recognition process, an optimal granular level is always sought in order to improve the recognition results; at this level, trivial details are neglected while the important features are preserved and made even more distinct. Choosing the proper granular level for problem solving in granular computing corresponds, in rough geometry, to computing the roughness of the space. We suppose that a proper rough space in rough geometry may be constructed in the following two ways. The first is to formulate the topological changes of the rough configurations under the regular transformation, but this may be difficult. The other way is to construct the proper space according to given roughness parameters based on the specific application; these parameters can be obtained from data training or from empirical knowledge. For off-line handwriting recognition, the proper rough spaces can be constructed according to the average thickness of the characters as prior knowledge. Moreover, the optimal roughness can also be obtained from data training, and this method usually requires an evaluation criterion for the topological variation. Incidentally, it is usually believed that the values of geometric properties, such as the area of a closed configuration, become more and more accurate as the approximate space is transformed from rough to fine. But the properties do not always behave like this; for example, the relative deviation of the perimeter of a digitized polygon converges to a fixed value as the image resolution grows, where the relative deviation is computed from the absolute difference between the property value for the approximation of the graphics and that for the same graphics in Euclidean space [7]. Thus choosing the proper rough level to capture the geometric features in the approximate space is one of the most important issues in rough geometry research. In this section, we have discussed the existing problems and the future work on rough geometry in terms of the basic issues of granular computing. The new theory is so immature that it needs further development in many aspects,
but it will provide a new perspective for observing geometric elements in reality and encourage us to analyze objects at multiple levels and from multiple views. Furthermore, the applications of rough geometry to practical problems are also given importance in the related work, so the research subject will be valuable in both theory and application.
Acknowledgements This work was supported by National Natural Science Foundation of China (Serial No. 60475019, 60775036) and The Research Fund for the Doctoral Program of Higher Education (Serial No. 20060247039).
References
1. Gonzalez, R.C., Woods, R.E.: Digital Image Processing, 2nd edn. Publishing House of Electronics Industry, Beijing (2006)
2. Hassanien, A.: Fuzzy rough sets hybrid scheme for breast cancer detection. Image and Vision Computing 25(2), 172–183 (2007)
3. Hastie, T.: Principal Curves and Surfaces. Unpublished doctoral dissertation, Stanford University, USA (1984)
4. Hastie, T., Stuetzle, W.: Principal curves. Journal of the American Statistical Association 84(406), 502–516 (1988)
5. Kégl, B.: Principal curves: learning, design, and applications. Unpublished doctoral dissertation, Concordia University, Canada (1999)
6. Kégl, B., Krzyzak, A.: Learning and design of principal curves. IEEE Transactions on Pattern Analysis and Machine Intelligence 22(3), 281–297 (2000)
7. Klette, R., Rosenfeld, A.: Digital Geometry: Geometric Methods for Digital Image Analysis. Beijing World Publishing Corporation, Beijing (2006)
8. Lin, T.Y.: Granular Computing on Binary Relations I: Data Mining and Neighborhood Systems. In: [19], pp. 107–121 (1998)
9. Lin, T.Y.: Granular Computing on Binary Relations II: Rough Set Representations and Belief Functions. In: [19], pp. 121–140 (1998)
10. Lin, T.Y.: Granular Computing: Fuzzy Logic and Rough Sets. In: Skowron, A., Polkowski, L. (eds.) Computing with Words in Information/Intelligent Systems, pp. 183–200. Physica-Verlag, Heidelberg (1999)
11. Lin, T.Y.: Granular computing: rough set perspective. The Newsletter of the IEEE Computational Intelligence Society 2(4), 1543–4281 (2005)
12. Ma, Y.: Rough Geometry. Computer Science 33(11A), 8 (2006) (in Chinese)
13. Miao, D.Q., Tang, Q.S., Fu, W.J.: Fingerprint Minutiae Extraction Based on Principal Curves. Pattern Recognition Letters 28, 2184–2189 (2007)
14. Miao, D.Q., Zhang, H.Y.: Off-Line Handwritten Digit Recognition Based on Principal Curves. Acta Electronica Sinica 33(9), 1639–1644 (2005) (in Chinese)
15. Mushrif, M.M., Ray, A.K.: Color image segmentation: Rough-set theoretic approach. Pattern Recognition Letters 29, 483 (2008)
16. Pal, S.K., Mitra, P.: Multispectral image segmentation using the rough-set-initialized EM algorithm. IEEE Transactions on Geoscience and Remote Sensing 40(11), 2495–2501 (2002)
17. Pawlak, Z.: Rough sets. International Journal of Computer and Information Sciences 11, 341–356 (1982)
18. Pawlak, Z.: Rough Sets: Theoretical Aspects of Reasoning about Data. Kluwer Academic Publishers, Dordrecht (1991)
19. Polkowski, L., Skowron, A. (eds.): Rough Sets in Knowledge Discovery. Physica-Verlag, Heidelberg (1998)
20. Rychlik, M.R.: A complete solution to the equichordal point problem of Fujiwara, Blaschke, Rothe and Weitzenböck. Inventiones Mathematicae 129, 141–212 (1997)
21. Skowron, A.: Toward intelligent systems: calculi of information granules. Bulletin of International Rough Set Society 5, 9–30 (2001)
22. Skowron, A., Stepaniuk, J.: Information Granules: Towards Foundations of Granular Computing. International Journal of Intelligent Systems 16, 57–85 (2001)
23. Tibshirani, R.: Principal curves revisited. Statistics and Computing 2, 183–190 (1992)
24. Verbeek, J.J., Vlassis, N., Kröse, B.: A k-segments algorithm for finding principal curves. Pattern Recognition Letters 23, 1009–1017 (2002)
25. Yao, Y.Y.: Information granulation and rough set approximation. International Journal of Intelligent Systems 16(1), 87–104 (2001)
26. Yao, Y.Y.: A partition model of granular computing. In: Peters, J.F., Skowron, A., Grzymala-Busse, J.W., Kostek, B., Świniarski, R.W., Szczuka, M.S. (eds.) Transactions on Rough Sets I. LNCS, vol. 3100, pp. 232–253. Springer, Heidelberg (2004)
27. Zadeh, L.A.: Fuzzy sets and information granularity. In: Advances in Fuzzy Set Theory and Applications. North-Holland, Amsterdam (1979)
28. Zadeh, L.A.: Towards a theory of fuzzy information granulation and its centrality in human reasoning and fuzzy logic. Fuzzy Sets and Systems 90, 111–127 (1997)
29. Zhang, H.Y., Miao, D.Q.: Analysis and Extraction of Structural Features of Off-Line Handwritten Digits Based on Principal Curves. Journal of Computer Research and Development 42(8), 1344–1349 (2005) (in Chinese)
Extensions of Information Systems: The Rough Set Perspective

Krzysztof Pancerz

University of Information Technology and Management, Sucharskiego Str. 2, 35-225 Rzeszów, Poland
[email protected]
Zamość University of Management and Administration, Akademicka Str. 4, 22-400 Zamość, Poland
Abstract. In the paper, we consider extensions of information systems from the rough set perspective. A consistent extension of a given information system includes only objects corresponding to known attribute values which satisfy all minimal rules extracted from the original information system. A partially consistent extension of a given information system includes objects corresponding to known attribute values which are consistent, to a certain degree, with the knowledge represented by all minimal rules extracted from the original information system. Using suitably defined lower approximations of sets of objects in the original information system, we can test the membership of a new object added to the system in its consistent extension without computing any rules. If a given object does not belong to a consistent extension, then we can calculate the degree of its partial membership in the consistent extension. This degree is expressed by the so-called consistency factor. Consistent and partially consistent extensions of information systems are useful in discovering or predicting new states of concurrent systems described by information systems. Keywords: rough set, extension of information system, rule.
1 Introduction
Extensions of information systems have been considered earlier in the literature (e.g., [3,9,10,11,12,13,14,15]). Any extension S∗ of a given information system S is created by adding to the system S new objects whose signatures contain only values of attributes that appeared in S. An important role among extensions of a given information system S is played by the so-called consistent extensions of S. Such extensions are obtained if all of the new objects added to the system S satisfy every minimal rule true in S. The approach based on consistent extensions of information systems can be generalized using the so-called partially consistent extensions. In the case of a partially consistent extension S∗ of a given information system S, we allow a situation in which new objects added to S satisfy only some of the minimal rules true in S. Then, an essential thing is to determine
the consistency factor of a new object added to S with respect to the knowledge included in S and expressed by the set of all minimal rules true in S. A value of the consistency factor lies between 0 and 1: 0 for full inconsistency and 1 for full consistency. Minimal rules true in S are understood from the rough set point of view. Two important questions arise in the field of consistent and partially consistent extensions of a given information system S. The first question (Q1) is: "Does a new object added to S belong to any consistent extension of S?". The second one (Q2) is: "What is the consistency factor of a new object added to S with the knowledge included in S?". In the standard approach, answering these questions requires computing the set of all minimal rules true in S. Such a problem is NP-hard. In this paper, we answer questions Q1 and Q2 using a crucial notion of the theory of rough sets, namely, the lower approximation of a set of objects in the information system S. All methods for computing consistent or partially consistent extensions of information systems presented earlier have involved computing all minimal rules, or at least some of them, in the information systems (see [3,10,11,12,13,15]). Such methods are very costly, since information systems can have exponentially many minimal rules with respect to the number of attributes and objects. An improved algorithm for testing the membership of a new object added to the system S in its consistent extension has been proposed in [2]. The new algorithm does not require computing any rules from the original information system S. For each object u added to S, we only check which objects from S generate rules that are not satisfied by u, without computing such rules. In the process of checking, we take advantage of the appropriate theorem given in [2]. The approach presented here is based on this theorem, but now we express the theorem directly in terms of rough sets. We define appropriate lower approximations of sets of objects which are used in testing membership in consistent extensions of S. We can also use such lower approximations in computing the consistency factor of an object belonging to a partially consistent extension of S. Consistent extensions and partially consistent extensions of information systems play an important role when information systems are used to represent knowledge of the behavior of concurrent systems (see [3,11,13]). We assume that a given information system S includes only a part of the possible global states of a given concurrent system (the so-called Open World Assumption). The remaining knowledge can be discovered from consistent or partially consistent extensions computed for S. The new knowledge concerns new global states of a system which have not been observed earlier. The extensions of information systems can also be applied in state prediction problems (see [4,15]). An information system can describe the states of processes observed in a given concurrent system. If we extend a given information system by adding some new states which have not been observed yet, then we are interested in the degrees of consistency of the added states with the knowledge of state coexistence included in the original information system. Such information can be helpful in predicting the possibility of given states appearing in the examined system in the future. The experiments show that the states from extensions of
original information systems, having greater values of consistency factors, appear significantly more often in the future. The rest of the paper is organized as follows. In Section 2, a brief overview of rough set rudiments is given. Precisely, we recall basic notions concerning information systems, approximations of sets and the decision language and rules. In Section 3, basic definitions concerning extensions of information systems are presented. Section 4 makes up the central part of the paper. In this section, we express the theorem given in [2] in terms of rough sets and, next, show how to use it in calculating consistent and partially consistent extensions of information systems. Finally, Section 5 consists of some conclusions.
2 Rough Set Rudiments
First, we recall the basic concepts of rough set theory (cf. [6], [8]) used in the paper and fix the notation.

2.1 Information Systems
A concept of an information system is one of the basic concepts of rough set theory. Information systems are used to represent some knowledge of elements of a universe of discourse. An information system is a pair S = (U, A), where U is a set of objects and A is a set of attributes, i.e., each a ∈ A is a function a : U → Va, where Va is called the value set of a. A decision system is a pair S = (U, A), where A = C ∪ D, C ∩ D = ∅, C is a set of condition attributes and D is a set of decision attributes. Any information (decision) system can be represented as a data table whose columns are labeled with attributes, whose rows are labeled with objects, and whose entries are attribute values. For each object u ∈ U in the information or decision system S = (U, A), we define a signature of u by infS(u) = {(a, a(u)) : a ∈ A}.

2.2 Approximation of Sets
Let S = (U, A) be an information system. Each subset B ⊆ A of attributes determines an equivalence relation on U, called an indiscernibility relation Ind(B), defined as Ind(B) = {(u, v) ∈ U × U : a(u) = a(v) for all a ∈ B}. The equivalence class containing u ∈ U will be denoted by [u]B. Let X ⊆ U and B ⊆ A. The B-lower approximation BX of X and the B-upper approximation B̄X of X are defined as BX = {u ∈ U : [u]B ⊆ X} and B̄X = {u ∈ U : [u]B ∩ X ≠ ∅}, respectively.
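As a quick illustration, both approximations can be computed directly from the indiscernibility classes. The following Python sketch uses a tiny hypothetical data table; the objects and attribute names are illustrative and not taken from the paper.

    from collections import defaultdict

    def approximations(table, B, X):
        # table: {object: {attribute: value}}, B: tuple of attributes, X: set of objects
        classes = defaultdict(set)
        for u, row in table.items():
            classes[tuple(row[a] for a in B)].add(u)         # the class [u]_B
        key = lambda u: tuple(table[u][a] for a in B)
        lower = {u for u in table if classes[key(u)] <= X}   # [u]_B contained in X
        upper = {u for u in table if classes[key(u)] & X}    # [u]_B meets X
        return lower, upper

    table = {"u1": {"a1": 0, "a2": 1},
             "u2": {"a1": 0, "a2": 1},    # indiscernible from u1
             "u3": {"a1": 1, "a2": 0}}
    low, up = approximations(table, ("a1", "a2"), {"u1", "u3"})
    # low == {"u3"}; up == {"u1", "u2", "u3"}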
2.3 Decision Language and Rules
With every information system S = (U, A) we associate a formal language L(S). Formulas of L(S) are built from atomic formulas of the form (a, v), where a ∈ A and v ∈ Va, by means of the propositional connectives: negation (¬), disjunction (∨), conjunction (∧), implication (⇒) and equivalence (⇔), in the standard way. The object u ∈ U satisfies a formula φ of L(S), denoted by u ⊨S φ (in short, u ⊨ φ), if and only if the following conditions are satisfied:
1. u ⊨ (a, v) iff a(u) = v,
2. u ⊨ ¬φ iff not u ⊨ φ,
3. u ⊨ φ ∨ ψ iff u ⊨ φ or u ⊨ ψ,
4. u ⊨ φ ∧ ψ iff u ⊨ φ and u ⊨ ψ.
As a corollary from the above conditions we get:
1. u ⊨ φ ⇒ ψ iff u ⊨ ¬φ ∨ ψ,
2. u ⊨ φ ⇔ ψ iff u ⊨ φ ⇒ ψ and u ⊨ ψ ⇒ φ.
If φ is a formula of L(S), then the set |φ|S = {u ∈ U : u ⊨ φ} is called the meaning of the formula φ in S. A rule in the information system S is a formula of the form φ ⇒ ψ, where φ and ψ are referred to as the predecessor and the successor of the rule, respectively. The rule φ ⇒ ψ is true in S if |φ|S ⊆ |ψ|S. In our approach, we consider rules of the form φ ⇒ ψ, where φ is a conjunction of atomic formulas of L(S) and ψ is an atomic formula of L(S). If u ⊨ (φ ∧ ψ), then we say that the object u supports the rule φ ⇒ ψ, or that the object generates the rule φ ⇒ ψ. We say that the rule φ ⇒ ψ is satisfied by the object u ∈ U (or that the object u ∈ U satisfies the rule φ ⇒ ψ) if and only if u ⊨ (φ ⇒ ψ). A rule is called minimal in S if and only if removing any atomic formula from φ results in a rule that is not true in S. The set of all minimal rules true and realizable (i.e., such rules φ ⇒ ψ that |φ ∧ ψ|S ≠ ∅) in S will be denoted by Rul(S). By Rula(S) we will denote the set of all rules from Rul(S) having an atomic formula containing the attribute a in their successors.
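For concreteness, the truth of such a rule can be checked through meanings. A minimal Python sketch follows; the toy table is a hypothetical illustration, and meaning() evaluates a conjunction of atomic formulas (a, v).

    def meaning(table, atoms):
        # |phi|_S for phi = (a1, v1) and ... and (ak, vk)
        return {u for u, row in table.items() if all(row[a] == v for a, v in atoms)}

    def rule_true(table, predecessor, successor):
        # phi => psi is true in S iff |phi|_S is contained in |psi|_S
        return meaning(table, predecessor) <= meaning(table, [successor])

    table = {"u1": {"a1": 0, "a2": 1},
             "u2": {"a1": 0, "a2": 1},
             "u3": {"a1": 1, "a2": 0}}
    print(rule_true(table, [("a1", 0)], ("a2", 1)))   # True: (a1, 0) => (a2, 1)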
3 Consistent and Partially Consistent Extensions of Information Systems
Extensions of information systems have been considered, among others, in [9] and [12]. Any extension S ∗ of a given information system S is created by adding to the system S new objects whose signatures contain only values of attributes that have appeared in S. Definition 1 (Extension). Let S = (U, A) be an information system. An information system S ∗ = (U ∗ , A∗ ) is called an extension of S if and only if the following conditions are satisfied:
– U ⊆ U∗,
– card(A) = card(A∗),
– for each a ∈ A, there exists a∗ ∈ A∗ such that the function a∗ : U∗ → Va is an extension of the function a : U → Va to U∗.
Each extension S∗ of a given information system S includes the same number of attributes and only such objects whose attribute values have already appeared in the original table representing S. Moreover, the data table representing S is a part of the data table representing S∗, i.e., all objects which appear in S also appear in S∗.
Definition 2 (Cartesian extension). Let S = (U, A) be an information system, {Va}a∈A the family of value sets of attributes from A, and Ū a universe including U. An information system S^MAX = (U^MAX, A∗) such that:
1. U^MAX = {u ∈ Ū : a∗(u) ∈ Va for all a∗ ∈ A∗},
2. for each a ∈ A, there exists a∗ ∈ A∗ such that the function a∗ : U^MAX → Va is an extension of the function a : U → Va to U^MAX,
is called a Cartesian extension of S. Objects and tuples of values of attributes on these objects are considered as identical. It is worth mentioning that the Cartesian extension of a given information system is the maximal (with respect to the number of objects) extension of this system. Obviously, for the Cartesian extension S^MAX = (U^MAX, A∗) of S, we have card(U^MAX) = ∏a∈A card(Va).
Let S = (U, A) be an information system, S^MAX = (U^MAX, A∗) its Cartesian extension and u∗ ∈ U^MAX. By Rulu∗(S) we denote the set of all rules from Rul(S) which are not satisfied by the object u∗, i.e., Rulu∗(S) = {(φ ⇒ ψ) ∈ Rul(S) : u∗ ⊨ φ and not u∗ ⊨ ψ}. A strength of the set Rulu∗(S) of rules in S is computed as

str(Rulu∗(S)) = supp(Rulu∗(S)) / card(U),

where

supp(Rulu∗(S)) = card( ⋃(φ⇒ψ)∈Rulu∗(S) |φ ∧ ψ|S )

is the support of the set Rulu∗(S) of rules. It is easy to see that this coefficient determines the relative share of objects supporting rules not satisfied by a new object in the set of all objects of the information system. With every object u∗ from the Cartesian extension S^MAX of S we associate a consistency factor of u∗ with the knowledge included in S (expressed by rules from Rul(S)) [13].
Definition 3 (Consistency factor). Let S = (U, A) be an information system, Rul(S) a set of all minimal rules true and realizable in S, S^MAX = (U^MAX, A∗) the Cartesian extension of S and u∗ ∈ U^MAX. The consistency factor of u∗ with the knowledge included in S (expressed by rules from Rul(S)) is defined as ξS(u∗) = 1 − str(Rulu∗(S)). We have 0 ≤ ξS(u∗) ≤ 1 for each u∗ ∈ U^MAX. It is obvious that if u∗ ∈ U, then ξS(u∗) = 1, because Rulu∗(S) = ∅. Having determined a consistency factor for each object of any extension of a given information system S, we can talk about a consistent or partially consistent extension of S.
Definition 4 (Consistent extension and partially consistent extension). Let S = (U, A) be an information system and S∗ = (U∗, A∗) its extension. S∗ is called a consistent extension of S if and only if ξS(u∗) = 1 for all u∗ ∈ U∗. Otherwise, i.e., if there exists u∗ ∈ U∗ such that ξS(u∗) < 1, S∗ is called a partially consistent extension of S.
4 Rough Set Approach to Computing Extensions
In this section, we reformulate the theorem given in [2]. Now, it will be expressed in terms of rough sets. Next, we show how to use it in calculating consistent and partially consistent extensions of information systems. Let S = (U, A) be an information system, S∗ = (U∗, A∗) its extension, and u∗ ∈ U∗ a new object from the extension S∗. For each attribute a ∈ A and u∗ as above, we can translate the information system S into an information system Sa,u∗ = (Ua, Ca ∪ {a}) with irrelevant values of attributes in the following way. Each attribute c′ ∈ Ca corresponds exactly to one attribute c ∈ A∗ − {a∗}. Each object u′ ∈ Ua corresponds exactly to one object u ∈ U and, moreover,

c′(u′) = c(u) if c(u) = c(u∗), and c′(u′) = ∗ otherwise,

for each c′ ∈ Ca, and a(u′) = a(u). This means that we create a new information system for which the appropriate sets of attribute values are extended by the value ∗. The symbol ∗ means that a given value of the attribute is not relevant.
Remark 1. For simplicity, the attribute c ∈ A − {a} in S and the attribute c′ in Sa,u∗ corresponding to c will be marked with the same symbol, i.e., c′ will be marked in Sa,u∗ with c.
The system Sa,u∗ can be treated as a decision system with the condition attributes constituting the set Ca and the decision attribute a.
Example 1. Information systems can be used to represent the knowledge of the behavior of concurrent systems [7]. In this approach, an information system represented by a data table includes the knowledge of the global states of a
given concurrent system CS. The columns of a table are labeled with names of attributes (treated as processes of CS). Each row, labeled with an object (treated as a global state of CS), includes a record of attribute values (treated as local states of processes). We assume that a given information system S includes only some of the possible global states of CS (the so-called Open World Assumption). In the approach proposed here, the remaining knowledge can be discovered from consistent or partially consistent extensions computed for S. The new knowledge concerns new global states of a system which have not been observed earlier. The maximal consistent extension of S represents the largest set of global states of CS satisfying all true minimal rules extracted from S. Let an information system S = (U, A) describe some genetic system consisting of three genes marked with g1, g2 and g3. The elements of U can be interpreted as global states of this system. Each attribute from A corresponds to one gene of the genetic system. Each gene can have one of the three values: Ad (Adenine), Cy (Cytosine) and Gu (Guanine). Let us assume that we have observed eleven global states of our genetic system. Here, global states can be interpreted as chromosomes. All of them are collected in Table 1a representing the information system S. Formally, for S we have: the set of objects U = {u1, u2, . . . , u11}, the set of attributes A = {g1, g2, g3}, the sets of attribute values Vg1 = Vg2 = Vg3 = {Ad, Cy, Gu}.

Table 1. a) An original information system S describing a genetic system, b) new objects added to S

a)  U/A   g1   g2   g3        b)  U/A∗  g1∗  g2∗  g3∗
    u1    Ad   Gu   Cy            u12   Ad   Cy   Cy
    u2    Cy   Ad   Gu            u13   Gu   Ad   Gu
    u3    Cy   Gu   Ad
    u4    Cy   Cy   Cy
    u5    Ad   Gu   Ad
    u6    Cy   Gu   Cy
    u7    Ad   Cy   Ad
    u8    Ad   Ad   Cy
    u9    Ad   Cy   Gu
    u10   Gu   Cy   Ad
    u11   Gu   Cy   Cy
Let us assume that we have obtained new global states (shown in Table 1b) of our genetic system. For the object u12 , we obtain the information systems Sa,u12 with irrelevant values of attributes as shown in Table 2. For the object u13 , we obtain the information systems Sa,u13 with irrelevant values of attributes as shown in Table 3. For the information system Sa,u∗ = (Ua , Ca ∪ {a}), we define a characteristic relation R(Ca ) similarly to the definition of a characteristic relation in information systems with missing attribute values (cf. [1]). R(Ca ) is a binary relation
Table 2. Information systems with irrelevant values of attributes: a) Sg1,u12, b) Sg2,u12, c) Sg3,u12

        a) Sg1,u12        b) Sg2,u12        c) Sg3,u12
        g1   g2   g3      g1   g2   g3      g1   g2   g3
u1      Ad   ∗    Cy      Ad   Gu   Cy      Ad   ∗    Cy
u2      Cy   ∗    ∗       ∗    Ad   ∗       ∗    ∗    Gu
u3      Cy   ∗    ∗       ∗    Gu   ∗       ∗    ∗    Ad
u4      Cy   Cy   Cy      ∗    Cy   Cy      ∗    Cy   Cy
u5      Ad   ∗    ∗       Ad   Gu   ∗       Ad   ∗    Ad
u6      Cy   ∗    Cy      ∗    Gu   Cy      ∗    ∗    Cy
u7      Ad   Cy   ∗       Ad   Cy   ∗       Ad   Cy   Ad
u8      Ad   ∗    Cy      Ad   Ad   Cy      Ad   ∗    Cy
u9      Ad   Cy   ∗       Ad   Cy   ∗       Ad   Cy   Gu
u10     Gu   Cy   ∗       ∗    Cy   ∗       ∗    Cy   Ad
u11     Gu   Cy   Cy      ∗    Cy   Cy      ∗    Cy   Cy

(In each subtable the decision attribute is unmasked: g1 in a), g2 in b), g3 in c).)
Table 3. Information systems with irrelevant values of attributes: a) Sg1,u13, b) Sg2,u13, c) Sg3,u13

        a) Sg1,u13        b) Sg2,u13        c) Sg3,u13
        g1   g2   g3      g1   g2   g3      g1   g2   g3
u1      Ad   ∗    ∗       ∗    Gu   ∗       ∗    ∗    Cy
u2      Cy   Ad   Gu      ∗    Ad   Gu      ∗    Ad   Gu
u3      Cy   ∗    ∗       ∗    Gu   ∗       ∗    ∗    Ad
u4      Cy   ∗    ∗       ∗    Cy   ∗       ∗    ∗    Cy
u5      Ad   ∗    ∗       ∗    Gu   ∗       ∗    ∗    Ad
u6      Cy   ∗    ∗       ∗    Gu   ∗       ∗    ∗    Cy
u7      Ad   ∗    ∗       ∗    Cy   ∗       ∗    ∗    Ad
u8      Ad   Ad   ∗       ∗    Ad   ∗       ∗    Ad   Cy
u9      Ad   ∗    Gu      ∗    Cy   Gu      ∗    ∗    Gu
u10     Gu   ∗    ∗       Gu   Cy   ∗       Gu   ∗    Ad
u11     Gu   ∗    ∗       Gu   Cy   ∗       Gu   ∗    Cy
on Ua defined as follows: R(Ca) = {(u, v) ∈ Ua × Ua : ∃c∈Ca c(u) ≠ ∗ and ∀c∈Ca (c(u) ≠ ∗) ⇒ (c(u) = c(v))}. For each u ∈ Ua, a characteristic set KCa(u) has the form KCa(u) = {v ∈ Ua : (u, v) ∈ R(Ca)}. Let X ⊆ Ua. The Ca-lower approximation of X is determined as: CaX = {u ∈ Ua : KCa(u) ≠ ∅ and KCa(u) ⊆ X}. Let S = (U, A) be an information system, a ∈ A, and va ∈ Va. By Xa^va we denote the subset of U such that Xa^va = {u ∈ U : a(u) = va}.
Theorem 1. Let S = (U, A) be an information system, S∗ = (U∗, A∗) its extension, u∗ ∈ U∗ a new object from the extension S∗, and a ∈ A. The object u∗ satisfies each rule r ∈ Rula(S) if and only if, for each va ∈ Va, if CaXa^va ≠ ∅, then a(u∗) = va.
Proof. (1) For a given a ∈ A and va ∈ Va in the information system S = (U, A), let us consider a rule r ∈ Rula(S) of the form

(ci1, vi1) ∧ (ci2, vi2) ∧ . . . ∧ (cik, vik) ⇒ (a, va),    (1)

where ci1, ci2, . . . , cik ∈ Ca. By Cr we denote the set of all attributes appearing in atomic formulas in the predecessor of r, i.e., Cr = {ci1, ci2, . . . , cik}. For each u ∈ U, we define a set Mu = {c ∈ Ca : c(u) ≠ ∗}. We are interested only in minimal rules such that Cr ⊆ Mu. If Cr ⊄ Mu, then u∗ does not satisfy the predecessor of r, i.e., u∗ ⊭ [(ci1, vi1) ∧ (ci2, vi2) ∧ . . . ∧ (cik, vik)], and then it is not important whether u∗ ⊨ (a, va) or u∗ ⊭ (a, va).
(2) Let CaXa^va ≠ ∅. Each u ∈ CaXa^va generates a rule q of the form (ci1, ci1(u)) ∧ (ci2, ci2(u)) ∧ . . . ∧ (cik, cik(u)) ⇒ (a, va), where Mu = {ci1, ci2, . . . , cik}, which is true in the information system S. If q is not minimal, then there exists at least one minimal rule, arising from q by removing some atomic formulas from its predecessor, which is true in S. If a(u∗) ≠ va, then the rule q becomes not true in the information system S∗. Each minimal rule true in S arising from q by removing some atomic formulas from the predecessor of q also becomes not true in S∗. Hence, the rule q (and each minimal rule arising from q by removing some atomic formulas from its predecessor) remains true in the information system S∗ if a(u∗) = va. If u ∉ CaXa^va, then there is no rule of the form (1) in Rula(S) such that u ⊨ (ci1, vi1) ∧ (ci2, vi2) ∧ . . . ∧ (cik, vik), i.e., the object u does not generate any minimal rule true in S. Taking (1) into consideration, we obtain from (2) that each rule generated by u ∈ CaXa^va remains true after adding u∗ if and only if a(u∗) = va.
The following corollary stems from Theorem 1.
Corollary 1. Let S = (U, A) be an information system, S∗ = (U∗, A∗) its extension, u∗ ∈ U∗ a new object from the extension S∗, and a ∈ A. If CaXa^va = ∅ for every va ∈ Va, then u∗ satisfies each rule r ∈ Rula(S).
On the basis of Theorem 1, we can provide a definition of the consistency factor (see Definition 3) in terms of appropriate lower approximations.
Definition 5. Let S = (U, A) be an information system, Rul(S) a set of all minimal rules true and realizable in S, S^MAX = (U^MAX, A∗) the Cartesian
extension of S, and u∗ ∈ U^MAX. The consistency factor of u∗ with the knowledge included in S (expressed by rules from Rul(S)) is defined as follows:

ξS(u∗) = 1 − card(Ũ) / card(U),

where Ũ is the union, over all a ∈ A and va ∈ Va, of those lower approximations CaXa^va for which CaXa^va ≠ ∅ and a(u∗) ≠ va:

Ũ = ⋃a∈A ⋃va∈Va {CaXa^va : CaXa^va ≠ ∅ ∧ a(u∗) ≠ va}.
Using Theorem 1, Corollary 1 and Definition 5, it is easy to calculate the consistent and partially consistent extensions of a given information system S = (U, A). For example, we can determine the Cartesian extension S^MAX = (U^MAX, A∗) of S and next test the membership of each object u ∈ U^MAX − U in the consistent extension of S. If u does not belong to a consistent extension, then we can calculate the consistency factor of u with the knowledge included in S (expressed by rules from Rul(S)).
Example 2. Let us continue Example 1. We need to answer the following questions:
– Does u12 added to S belong to the consistent extension of S?
– Does u13 added to S belong to the consistent extension of S?
For the object u12, we obtain the following lower approximations: Cg1 Xg1^Ad = Cg1 Xg1^Cy = Cg1 Xg1^Gu = Cg2 Xg2^Ad = Cg2 Xg2^Cy = Cg2 Xg2^Gu = Cg3 Xg3^Ad = Cg3 Xg3^Cy = Cg3 Xg3^Gu = ∅, where Cg1 = {g2, g3}, Cg2 = {g1, g3}, and Cg3 = {g1, g2}. It is easy to see that each lower approximation is an empty set. Therefore, u12 belongs to the consistent extension of S. For the object u13, we obtain the following lower approximations:
– Cg1 Xg1^Ad = Cg1 Xg1^Gu = Cg2 Xg2^Ad = Cg2 Xg2^Gu = Cg3 Xg3^Ad = Cg3 Xg3^Cy = Cg3 Xg3^Gu = ∅,
– Cg1 Xg1^Cy = {u2},
– Cg2 Xg2^Cy = {u10, u11},
where Cg1 = {g2, g3}, Cg2 = {g1, g3}, and Cg3 = {g1, g2}. It is easy to see that Cg1 Xg1^Cy ≠ ∅ and g1(u13) ≠ Cy. Hence, there exists at least one rule from Rulg1(S) which is not satisfied by the object u13. Analogously, Cg2 Xg2^Cy ≠ ∅ and g2(u13) ≠ Cy. Hence, there exists at least one rule from Rulg2(S) which is not satisfied by the object u13. Therefore u13 does not belong to the consistent extension of S.
Example 3. Let us consider the information system S from Example 2 with the new objects u12 and u13 added to S. Now, we need to answer the following questions:
– What is the consistency factor of the object u12 added to S with the knowledge included in S?
– What is the consistency factor of the object u13 added to S with the knowledge included in S?
The consistency factor ξS(u12) equals 1, because u12 belongs to the consistent extension of S. Therefore, we can say that the object u12 is consistent to the degree 1 (or, in short, consistent) with the knowledge included in the original information system S. For the object u13 we obtain that the set Ũ, determined according to Definition 5, has the form Ũ = {u2, u10, u11}. Hence, ξS(u13) = 1 − 3/11 ≈ 0.7273. According to our approach, we can say that the object u13 is consistent to the degree 0.7273 with the knowledge included in the original information system S. It is easy to see that the information system S∗ = (U∗, A), where U∗ = {u1, u2, . . . , u13}, is a partially consistent extension of the information system S, because for u13 we have ξS(u13) < 1.
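The whole computation of Examples 2 and 3 can be transcribed almost literally into Python. The sketch below encodes Table 1 as a dictionary (the encoding is our own) and applies the masking construction of Sa,u∗, the characteristic sets and Definition 5; it reproduces ξS(u12) = 1 and ξS(u13) ≈ 0.7273.

    U = {f"u{i+1}": dict(zip(("g1", "g2", "g3"), row)) for i, row in enumerate([
        ("Ad", "Gu", "Cy"), ("Cy", "Ad", "Gu"), ("Cy", "Gu", "Ad"),
        ("Cy", "Cy", "Cy"), ("Ad", "Gu", "Ad"), ("Cy", "Gu", "Cy"),
        ("Ad", "Cy", "Ad"), ("Ad", "Ad", "Cy"), ("Ad", "Cy", "Gu"),
        ("Gu", "Cy", "Ad"), ("Gu", "Cy", "Cy")])}
    ATTRS = ("g1", "g2", "g3")

    def consistency_factor(star):
        covered = set()                              # the set U~ of Definition 5
        for a in ATTRS:
            cond = [c for c in ATTRS if c != a]
            # S_{a,u*}: keep a condition value only where it agrees with star
            masked = {u: {c: (row[c] if row[c] == star[c] else "*") for c in cond}
                      for u, row in U.items()}
            for u, mrow in masked.items():
                rel = [c for c in cond if mrow[c] != "*"]
                if not rel:
                    continue                         # K_Ca(u) is empty
                K = [v for v in masked if all(masked[v][c] == mrow[c] for c in rel)]
                va = U[u][a]                         # u itself lies in K_Ca(u)
                if all(U[v][a] == va for v in K) and star[a] != va:
                    covered.add(u)                   # u lies in Ca Xa^va with a(u*) != va
        return 1 - len(covered) / len(U)

    print(consistency_factor({"g1": "Ad", "g2": "Cy", "g3": "Cy"}))  # u12 -> 1.0
    print(consistency_factor({"g1": "Gu", "g2": "Ad", "g3": "Gu"}))  # u13 -> 0.7272...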
5 Concluding Remarks
We have shown how to calculate the consistent and partially consistent extensions of a given information system in an efficient way (without computing any rules). An appropriate theorem, expressed directly in terms of rough sets, enables us to perform such a calculation. In future work, we will also consider extensions of dynamic information systems (see [5], [14]) from the rough set perspective.
Acknowledgments. This paper has been partially supported by the grant from the University of Information Technology and Management in Rzeszów, Poland. The author is greatly indebted to anonymous reviewers for helpful remarks.
References
1. Grzymala-Busse, J.W.: Data with Missing Attribute Values: Generalization of Indiscernibility Relation and Rule Induction. In: Peters, J.F., et al. (eds.) Transactions on Rough Sets I. LNCS, vol. 3100, pp. 78–95. Springer, Heidelberg (2004)
2. Moshkov, M., Skowron, A., Suraj, Z.: On Testing Membership to Maximal Consistent Extensions of Information Systems. In: Greco, S., et al. (eds.) RSCTC 2006. LNCS (LNAI), vol. 4259, pp. 85–90. Springer, Heidelberg (2006)
3. Pancerz, K., Suraj, Z.: Synthesis of Petri Net Models: A Rough Set Approach. Fundamenta Informaticae 55(2), 149–165 (2003)
4. Pancerz, K.: Consistency-Based Prediction Using Extensions of Information Systems - an Experimental Study. In: Proceedings of the HSI 2008, pp. 591–596. IEEE, Los Alamitos (2008)
5. Pancerz, K.: Extensions of Dynamic Information Systems in State Prediction Problems: the First Study. In: Magdalena, L., et al. (eds.) Proceedings of the IPMU 2008, Malaga, Spain, pp. 101–108 (2008) (on CD)
6. Pawlak, Z.: Rough Sets - Theoretical Aspects of Reasoning About Data. Kluwer Academic Publishers, Dordrecht (1991)
7. Pawlak, Z.: Concurrent Versus Sequential: the Rough Sets Perspective. Bulletin of the EATCS 48, 178–190 (1992)
8. Pawlak, Z.: Some Issues on Rough Sets. In: Peters, J.F., et al. (eds.) Transactions on Rough Sets I. LNCS, vol. 3100, pp. 1–58. Springer, Heidelberg (2004)
9. Rząsa, W., Suraj, Z.: A New Method for Determining of Extensions and Restrictions of Information Systems. In: Alpigini, J.J., et al. (eds.) RSCTC 2002. LNCS (LNAI), vol. 2475, pp. 197–204. Springer, Heidelberg (2002)
10. Skowron, A., Suraj, Z.: Rough Sets and Concurrency. Bulletin of the Polish Academy of Sciences 41(3), 237–254 (1993)
11. Suraj, Z.: Rough Set Methods for the Synthesis and Analysis of Concurrent Processes. In: Polkowski, L., Tsumoto, S., Lin, T.Y. (eds.) Rough Set Methods and Applications. Studies in Fuzziness and Soft Computing, vol. 56, pp. 379–488. Physica-Verlag, Berlin (2000)
12. Suraj, Z.: Some Remarks on Extensions and Restrictions of Information Systems. In: Ziarko, W., Yao, Y. (eds.) RSCTC 2000. LNCS (LNAI), vol. 2005, pp. 204–211. Springer, Heidelberg (2001)
13. Suraj, Z., Pancerz, K., Owsiany, G.: On Consistent and Partially Consistent Extensions of Information Systems. In: Ślęzak, D., et al. (eds.) RSFDGrC 2005, Part I. LNCS (LNAI), vol. 3641, pp. 224–233. Springer, Heidelberg (2005)
14. Suraj, Z., Pancerz, K.: Some Remarks on Computing Consistent Extensions of Dynamic Information Systems. In: Proceedings of the ISDA 2005, Wroclaw, Poland, pp. 420–425 (2005)
15. Suraj, Z., Pancerz, K.: A New Method for Computing Partially Consistent Extensions of Information Systems: A Rough Set Approach. In: Proceedings of the IPMU 2006, vol. III, pp. 2618–2625. Editions EDK, Paris (2006)
Intangible Assets in a Polish Telecommunication Sector – Rough Sets Approach

Agnieszka Maciocha and Jerzy Kisielnicki

Dublin Institute of Technology, Aungier Street, Dublin, Ireland
The University of Warsaw, Szturmowa 1/3, 02-678 Warszawa, Poland
Abstract. In this article, we present the results of a follow-up study concentrating on intangible assets and their role in the value creation process. As a result of an earlier investigation [47], we obtained two different groups of telecommunication companies. In the first group, we observed stable and continuous growth of corporate value, while in the second group this growth was insignificant and continually fluctuating. In order to identify the factors causing this discrepancy, we decided to pursue further analysis. The present study used data acquired through employee questionnaires sent to the analyzed companies. Using these data, we constructed an information system according to Pawlak and Slowinski [62], [50]. Next, we applied rough sets in order to single out those intangible assets (IA) which caused the aforementioned difference. Specifically, we focused on the theoretical concepts of data reduction, quality of classification, accuracy of approximation, and the core of the set. Consequently, we obtained a set of intangible assets which had the biggest impact on the value of the analysed companies. Keywords: Rough Sets, Intangible Assets, Intellectual Capital.
1 Introduction
The rising interest in intangibles stems from two recent phenomena which constitute the premises of this research:
– the change in the foundation of corporate value creation: intangible assets have become acknowledged as the major drivers of the value creation process;
– the obsolescence of the traditional accounting system with regard to the recognition, valuation and presentation of these drivers.
As a consequence of these joint phenomena, a wide disparity in financial markets between the market value and the book value of company equity has been observed since the second half of the 1990s [22]. On the other hand, in the 1990s we witnessed the so-called 'Internet boom', which occurred as a result of the widespread application of information and telecommunication technologies (ICT) in the business milieu.
This caused significant and irreversible changes in the value creation process. The new prospects and opportunities provided by ICT were immediately grasped and utilized by a rapidly growing new type of organization: the dot-com. Dot-coms created their own rules and theories applying to their corporate milieu. The network effect constituted the foundation of their new business model: it is assumed that the value of a product depends on the number of customers who have already purchased it. Consequently, it was also believed that, despite operating at a loss, these companies were able to build brand awareness quickly enough to allow themselves to charge for their service or product later. Their survival was tied to the rapid growth of their customer base, regardless of the losses already incurred. Due to easy access to capital, these firms were growing in large numbers. This situation had a tremendous impact on the telecommunication sector's development, as these companies were the providers of the aforementioned technologies. The growing potential for ICT applications was reflected in their rapidly rising profits. As a result, investments were made without any reference to current or future cash flow. The rising profits of this sector excited the appetite of the investment community [42], which in turn led to flows of equity issuance, debt flotation and bank credit. Awash with cash, companies could afford to finance large-scale investment projects, notably the construction of vast fibre-optic cable networks, and to pay high prices for the rights to use third-generation wireless spectrum. In anticipation of even more profits, Europe saw vast amounts of money, raised in various forms of debt, spent by mobile phone operators on 3G licences. This enormous spending gave rise to excessive expectations of future revenues and earnings, boosting share prices and allowing unprecedented levels of borrowing. Unfortunately, the expectation of a bright future for the industry did not come true. Though Internet traffic was growing quickly (doubling each year), its expansion was much slower than predicted. The double-digit increase in telecommunication profits did not happen either. As the growth of profits slowed, the business plans of the telecommunications operators had to be adjusted and made more realistic. When creditors and investors became aware of the degradation of net margins and the increase in debt, they revised their expectations of earnings growth towards more sensible levels. Many companies needed to write down the value of assets acquired earlier. This in turn raised further fear in the stock market, which pushed telecommunications equity prices down. Inappropriate accountancy and governance practices prevailed in the sector. The whole situation resulted in numerous bankruptcies all over the world. As a result, accounting bodies became compelled to analyse corporate value creation more precisely and to revalue the role of intangible assets in this process. Consequently, the notions of market and book value were separated. It is essential to note that the ratio between these two values had floated around 1.0 during the 45 years from 1945 to 1990. Thus, the market capitalization of companies during that span of time was approximately the same as the book value of their assets [22].
This situation changed considerably in the late 1990s. In the so-called New Economy, drastic differences between market and book value brought the analysed ratio to 3.0 or more [22]. At present, the market value is usually perceived to be higher than the book value; however, the opposite situation can also be found. This disparity stems from the fact that the book value, due to very restrictive accounting policy, is not able to recognize, and thus value, some qualitative aspects of company assets which are in turn recognized by the market. Thus we can present the company value as follows:

company value =
  book value (value of net assets = assets − debt) (cf. [5])
  + the value of external capital: relations with customers, suppliers (contracts, loyalty programs, cooperation contracts) and shareholders, plus such elements as brands and image
  + the value of internal capital: organizational structure, efficiency of processes, newly implemented technological solutions, strategy realization and competitive advantage, and the value of human resources (employees' creativity and efficiency, and managerial skills).

From the above equation, one can see that the company value consists of both material elements, which are presented in the financial statements, and intangible elements, which are mostly ignored by accounting practices [29]. The value of intangible elements is only acknowledged when there is a trade; it is then presented in aggregate form, as 'goodwill'. As intangible assets became a predominant factor of competitive advantage, they attracted considerable interest from both the scientific and corporate worlds. Much research has been, and still is being, conducted on intangibles. Theoreticians have tried to define, and thus measure, intangible assets. Numerous approaches, concepts and theories have been proposed. As a result, many definitions as well as measurement models have been recommended. Nevertheless, due to the sophisticated nature of these elements, none of them has been universally accepted and widely promoted. This paper presents a new approach that applies rough set theory to the analysis of intangible assets. Due to the constraints related to the measurement of intangible assets, we decided to investigate this issue from a totally different angle: analyzing the relationship between intangible assets and corporate value. In our previous research [47] we distinguished two types of companies: one where the difference between market and book value was significant and constantly growing, and another where the difference was insignificant and continually fluctuating. Accordingly, we acknowledged that the first group of companies possessed and utilized some of the intangibles which were not present in the second group. Thus, in this investigation, our goal was to analyse these factors. We wanted to find out which of the intangible assets created the disparity and had the biggest impact in the process. Using data collected from the questionnaires distributed to the employees of these two groups of companies, we built an information system according to Pawlak [60]. We decided to utilize rough set theory, as it appears to be an
effective tool for the analysis of information systems representing knowledge gained by experience [70]. This methodology has already been applied with success in many research studies related to business, though not exclusively to such issues. We believe that this study will produce information of importance for companies competing in the global economy. By knowing which intangibles have the biggest impact on corporate value, companies will be able to utilize them more effectively and thus succeed. This study, using rough set methodology, suggests a new research direction in the field of intangible assets.
2 Intangible Assets
In order to analyze the function as well as the position of intangible assets in the company value creation process, it is essential first to decompose and scrutinize all the corporate assets. In the literature there are many different but semantically similar propositions [20], [33], [72], [40], [7]; it is possible, however, to categorize organizational resources into three main classes:

– tangible
– financial
– intangible

While there is general clarity about what is meant by tangible and financial assets, there is continued confusion in relation to the intangible assets of an organization. The existing ambiguity stems from the character of these assets themselves: intangible, impalpable, untouchable, hard to see or describe [16], [63], [48]. Teece [79] proposed five criteria - depreciation, possibility of simultaneous usage, transfer costs, property rights, and the possibility to enforce property rights - in order to distinguish the tangible assets of a company from its intangible ones. This categorization is presented in Table 1. Consequently, despite growing and significant interest in the subject, with numerous definitions proposed by different authors, there is still a lack of one universally accepted definition. In part, this is a corollary of the fact that the constituents of what could be regarded as intangibles depend on the purpose of dealing with the concept, e.g., accounting, statistics related to national accounts, or management control. Moreover, the significant variety of management theories has also impinged on the way in which the distinctiveness, and thus the definitions, of intangibles originated [38]. Another substantial problem is that intangibles can be considered simultaneously as assets and as generators of assets. In addition, they are created as a result of both physical and non-physical assets, and they are often entrenched in physical assets, which makes it even trickier to identify them [11]. This situation has produced consequences in the literature, which is riddled with different terms and idioms, including 'intangible assets', 'intellectual capital' or 'knowledge assets', used to describe similar ideas - a particular claim to probable future benefits. Of course, there are also propositions used to distinguish these notions; however, they are neither unquestionable nor unchangeable.
Table 1. Differences between Intangible and Tangible Assets [79]

Criterion | Intangible assets | Physical (tangible) assets
Publicness | Use by one party need not prevent use by another | Use by one party prevents simultaneous use by another
Depreciation | Does not 'wear out', but usually depreciates rapidly | Wears out; may depreciate quickly or slowly
Transfer costs | Hard to calibrate (increase with the tacit portion) | Easier to calibrate (depends on transportation and related costs)
Property rights | Limited (patents, trade secrets, copyright, trademarks, etc.) and fuzzy, even in developed countries | Generally comprehensive and clearer, at least in developed countries
Enforcement of property rights | Relatively difficult | Relatively easy
In general, analyzing all the available definitions [43,31,21,14], it is possible to distinguish two main groups into which any particular definition can be classified:

– the management perspective
– the accounting perspective

The management approach is characterised as open and flexible rather than narrow and meticulous. In this group we find two types of definitions recognized by Stolowy [74]:

– definitions by opposition - "fixed assets other than tangible or financial" is the catch-all phrase used by all approaches belonging to this category
– definitions by tautology - words to the effect that intangible assets are characterized by their lack of physical substance

To this group belongs, among others, the definition proposed by Lev [43] in his book "Intangibles: Management, Measurement and Reporting": intangible assets are "a claim to future benefits that does not have a physical or financial (a stock or a bond) embodiment". Another is the definition suggested by the Brookings Institution [31]: "non-physical factors that contribute to or are used in producing goods or providing services, or that are expected to generate future productive benefits for the individuals or firms that control the use of those factors". In contrast to the previous outlook, the accounting perspective represents a very strict and detailed approach towards the definition of intangible assets. Stolowy calls this group of definitions real definitions: they attempt to identify more precisely what an intangible asset is, while more often than not simultaneously using one or another of the above approaches. To this group belong the definitions recommended by the main accounting bodies. Here intangible assets are characterized as non-physical and
non-monetary sources of probable future economic profits accruing to the firm as a result of past events or transactions [34]. To this group also belongs the following very comprehensive and detailed definition: intangible assets are identifiable (separable) non-monetary sources of probable future economic benefits to an entity that lack physical substance, have been acquired or developed internally from identifiable costs, have a finite life, have market value apart from the entity, and are owned or controlled by the firm as a result of past transactions or events [80]. Despite this definition being the most precise, no definition elaborated for accounting purposes has been universally accepted either. While such definitions serve the needs of measurement and valuation in accounting systems, they do not reflect, and thus do not embrace, those assets which are recognized by the market. Furthermore, this type of definition is too restrictive; this is especially visible when a company does not possess sufficient control over prospective future benefits resulting from the skills of its employees or from the training provided. As a result there is a significant difference between these two outlooks, which in turn is reflected in the disparities between the market and book values of companies. Since the accounting viewpoint focuses on the valuation and measurement of the organizational assets and liabilities presented in the financial statements, it has to define its subject precisely. That is why its definitions are so meticulous and precise, leaving no room for doubt; this enables valid and reliable estimation. Elements which are impossible to define precisely are disclosed, in some special cases, only in aggregate, as goodwill [46]. They are, however, a subject of interest for the management viewpoint. Consequently, since there is much ambiguity and vagueness concerning their description, there are various problems and difficulties in their measurement. Therefore, despite the numerous models proposed for the measurement and valuation of intangibles [12], [6], [76], [3], [13], [81], [18], [52], [56], none of them has been commonly accepted. However, the goal of this paper is not to measure or value intangible assets, but to distinguish which of them had the most significant impact on the value creation process. In our research we defined intangible assets using the proposition of Lev [43]: "a claim to future benefits that does not have a physical or financial (a stock or a bond) embodiment". However, in order to clearly understand which assets were the subject of our investigation, it is necessary to take a quick look at the taxonomy of intangible assets. In the literature a number of different classifications have been proposed [4], [65], [30], [32], [80], [85]. Some of them identify broad categories, while other classifications tend to be more articulated and specific. In general, however, they fit into one of two main classes. In the first, authors enumerate a list of different intangible assets regardless of any criteria or of their association with other organizational resources (some of them can be clustered around organizational processes); this type of taxonomy is proposed by a number of authors [43], [83], [66], [14]. In the second approach, authors usually divide all organizational resources into two groups (usually tangible and intangible), and then analyze in a logical way the part which concerns particular aspects of intangible resources [45], [27], [9].
An example of such a taxonomy is proposed by the Brookings Institution's task force report, which distinguishes three classes of intangibles [9]:

1. At level one, intangibles that can be sold. They are quite easy to define and describe. Brands, copyrights, patents and trademarks are regarded as being on this level.
2. At level two, intangibles that cannot be sold but which are in a certain way controlled by firms. These categories cannot be separated from the other intangibles in order to measure or value them. Reputation and business processes are included here.
3. At level three, intangibles that can neither be sold nor controlled by firms. These types of assets are impossible to separate from the others. Human capital and organizational capital are enumerated here.

Another example of this category of taxonomy is proposed by Eustace [22]. He distinguishes two groups of company resources, namely tangible and intangible. As he suggests, the principal intangible constituents of the corporate asset base can be further divided into 'intangible goods' and 'intangible competencies'. The former is made up of two sub-classes, namely intangible commodities and intellectual properties, while the latter incorporates assets that are generally bundled together and inter-reliant to such an extent that it is difficult to isolate, and thus value, them. This taxonomy is reflected in the classification used in the accounting field: while intangible goods can be perceived as those intangible assets recognized by accounting standards, intangible competencies form the elements of so-called 'goodwill'. In this case intangible competencies are vitally important in differentiating the market offers of particular companies. In our research we decided to use the classification proposed by Contractor [17] (and especially one group he distinguished). Taking the description and definition of a particular element of intangible assets as a criterion, he distinguished the following three groups of organizational intangibles:

– formally registered Intellectual Property Rights
– more broadly defined intangible assets - these embrace formally registered Intellectual Property Rights plus unregistered organizational knowledge codified in the form of drawings, software, databases, reports and formulas, as well as written-down trade secrets
– uncodified human and organizational capital

It is common for the last category to also include such elements as reputation, customer loyalty, network links, and other 'goodwill'-type items [17]. The gradation from category I to category II is analogous to the gradation from information to knowledge. Information or data alone can be perceived as an intangible asset with value within the company or for sale outside the organization. However, information does not automatically become knowledge unless it is systematized into a codified and functional form [10]. That is the difference between a simple patent or formula and a well-organized idea with manufacturing potential. In this case
even the possession of knowledge does not guarantee success. In order to be successful, the company needs to ensure that this knowledge is assimilated by its staff. The difficulty of measuring organizational knowledge, and thus intangible assets, increases when moving from a lower to a higher level. This is because the conversion of data into knowledge involves increasing difficulty in describing the separability (the possibility of identifying and depicting discrete bits of information) and the formalization (degree of codification) of particular knowledge [17]. In our research we defined intangible assets as a claim to future benefits that does not have a physical or financial (a stock or a bond) embodiment [43], and in more precise terms we decided to focus only on the third level of intangible assets (Intellectual Capital - Human Capital) proposed by Contractor [17].
3 Basic Concepts of Rough Sets

3.1 Concept 1: Rough Set Theory and the Indiscernibility of Objects
Rough set theory provides a relatively new technique of reasoning from vague and imprecise data [24]. It involves methods for knowledge discovery and data mining [8], and is founded on the assumption that with every object of the universe of discourse there is associated some information (data, knowledge). Objects characterised by the same information are indiscernible (similar) in view of the available information about them [70]. The indiscernibility relation created in this way constitutes the mathematical foundation of rough set theory. This indiscernibility relation enables one to characterize, in terms of a lower and an upper approximation, a collection of objects which in general cannot be accurately described by means of the values of their sets of attributes [8]. As a result, we can define a rough set as an approximate representation of a given crisp set in terms of two subsets (the lower and upper approximations) derived from a crisp partition defined on the universal set involved [8], [35]. In other words, a rough set is basically any set containing objects that are elements of its upper approximation but not of its lower approximation [41]. By the lower approximation of X we mean the set of all elements that are certainly in X, while the elements in the upper approximation can possibly be classified as belonging to X [68]. The boundary region (BND) of a particular set constitutes the difference between its upper and lower approximations. If the boundary region of X with respect to a set of attributes A is not empty (i.e. if the upper and lower approximations are not identical: $BND_A(X) \neq \emptyset$), then the set X is referred to as rough with respect to A; otherwise, it is called crisp [59].

3.2 Concept 2: Information System
The first step in a rough set analysis is to select data on the attributes of predefined objects [64]. The information obtained in this way is then transformed into a coded information table. This table constitutes a convenient tool for the description of objects in terms of their attribute values. Consequently, it is also
called an attribute-value system, information system or data table, and is described as a collection U of objects that are described by a finite set Q of attributes. One attribute in Q is designated as the decision attribute, and the rest of the attributes are called condition attributes. Rows of this table correspond to objects (actions, alternatives, candidates, patients, etc.) and columns correspond to attributes. To each pair (object, attribute) there is assigned a value, called a descriptor. The descriptors placed in each row of the table correspond to the information about the corresponding object of a given decision situation. Using the formal definition [60], we can present an information system as a 4-tuple

$SI = \langle U, Q, V, f \rangle$

where $U = \{x_1, x_2, \ldots, x_n\}$ is a finite set of objects, $Q$ is a finite set of attributes, $V = \bigcup_{q \in Q} V_q$ is the set of attribute values, with $V_q$ the set of values of attribute $q$, and $f : U \times Q \rightarrow V$ is an information function such that $f(x, q) \in V_q$ for every $q \in Q$, $x \in U$. In short, an information system is a pair $(U, P)$ where $U$ is a non-empty, finite set of objects called the universe and $P$ is a non-empty, finite set of attributes such that $p : U \rightarrow V_p$ for any $p \in P$, where $V_p$ is called the domain of $p$ [54]. In the case when the set of attributes is separated into two subsets (condition attributes, describing criteria, tests or symptoms, and decision attributes, depicting decisions, classifications or taxonomies), the information system is acknowledged as a decision table.
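To make this construction concrete, the following minimal sketch (illustrative Python written for these notes, not the authors' code; the object names, scores and attribute labels are invented) represents a small decision table and computes its indiscernibility classes:

    # A miniature decision table: objects are questionnaire responses,
    # condition attributes A1, A2 take scores 1-5, and the decision
    # attribute D takes the class labels 'G' or 'B'.
    table = {
        "x1": {"A1": 4, "A2": 5, "D": "G"},
        "x2": {"A1": 4, "A2": 5, "D": "G"},
        "x3": {"A1": 2, "A2": 1, "D": "B"},
        "x4": {"A1": 4, "A2": 5, "D": "B"},
    }

    def ind_classes(table, attrs):
        """Partition the universe U into indiscernibility (equivalence)
        classes: objects with identical values on attrs share a class."""
        classes = {}
        for x, row in table.items():
            classes.setdefault(tuple(row[a] for a in attrs), set()).add(x)
        return list(classes.values())

    print(ind_classes(table, ["A1", "A2"]))
    # prints the two classes {x1, x2, x4} and {x3}: x4 is indiscernible
    # from x1 and x2 on the condition attributes, although its decision
    # differs.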
3.3 Concept 3: Approximation of Sets
The indiscernibility relation is used to define the basic operations in rough sets. Let $P \subseteq Q$ and $Y \subseteq U$. The P-lower approximation of Y, $\underline{P}(Y)$, and the P-upper approximation of Y, $\overline{P}(Y)$, are defined as follows:

$\underline{P}Y = \{x \in U : I_P(x) \subseteq Y\}$

$\overline{P}Y = \bigcup_{x \in Y} I_P(x)$

Given a particular subset Y representing a rough set, the approximation divides the universe into three distinct classification regions [44]:

– the positive region
– the boundary region
– the negative region

The positive region is exactly equal to the lower approximation and is defined as [44]:

$POS_P(Y) = \underline{P}(Y)$

The notation $POS_P(Y)$ says that, employing the knowledge P, the set POS is the set of all elements of U which can be certainly classified as elements of Y. Those objects that certainly do not belong to Y constitute the negative region. The upper approximation, in turn, is the set of elements of U which can possibly be classified as elements of Y, using the set of attributes P
[70]. The negative region of the set Y in P is defined as the remaining elementary sets of the universe, i.e., what is left after subtracting the upper approximation from the universe [44]:

$NEG_P(Y) = U - \overline{P}(Y)$

If the information we have is not sufficient to classify an object, then such an object belongs to the boundary region [67].
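A short sketch of these operations (illustrative Python, reusing the toy table and the ind_classes helper from the previous snippet; not the software used in the study):

    def lower_upper(table, attrs, Y):
        """P-lower and P-upper approximations of a set of objects Y."""
        lower, upper = set(), set()
        for cls in ind_classes(table, attrs):
            if cls <= Y:     # class entirely inside Y: certainly in Y
                lower |= cls
            if cls & Y:      # class overlaps Y: possibly in Y
                upper |= cls
        return lower, upper

    U = set(table)
    Y = {x for x in table if table[x]["D"] == "G"}  # decision class 'G'
    low, up = lower_upper(table, ["A1", "A2"], Y)
    positive = low           # POS_P(Y): certainly in Y
    boundary = up - low      # BND_P(Y): undecidable from these attributes
    negative = U - up        # NEG_P(Y): certainly outside Y
    # Here low == set() and up == {'x1', 'x2', 'x4'}, so Y is rough.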
3.4 Concept 4: Accuracy and Quality of Approximation
Using the upper and lower approximations it is possible to define the accuracy as well as the quality of approximation. Both numbers lie in the interval [0, 1] and express how precisely the examined set of objects can be described using the available information [62]. The accuracy of approximation is closely related to the inexactness of a class, which is caused by the occurrence of a boundary region of a set: the lower the accuracy of a set, the larger its boundary region [24]. By definition, the accuracy of approximation of Y is equal to the ratio of the number of objects belonging to the lower approximation of Y to the number of objects belonging to the upper approximation of Y [2]. It expresses the proportion of possibly correct decisions when classifying objects employing the attributes of P. This ratio is defined as follows [25]:

$\alpha_P(Y) = \frac{card(\underline{P}Y)}{card(\overline{P}Y)}$

where 'card' means the cardinality of the particular set. The quality of approximation is the ratio of all P-correctly sorted objects to all objects of the system [51]. In other words, it expresses the percentage of objects which can be correctly classified to a particular class of the classification y employing attributes from the set P. The following indicator describes the quality of approximation of a classification y by using the attributes P [78], [70], [84]:

$\gamma_P(y) = \frac{\sum_{i} card(\underline{P}Y_i)}{card(U)}$

where $P \subseteq Q$ and Q is the finite set of attributes; $Y_i \subseteq U$, with U a finite set of objects; $\underline{P}Y_i$ is the P-lower approximation of $Y_i$; the subsets $Y_i$, i = 1, ..., n, are the classes of the classification y; and card(x) is the cardinality of a set x. When the value of this ratio equals 1, the result of the classification is fully satisfactory: all elements of the set have been unambiguously classified to the positive region using the set of attributes P [53].
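Continuing the sketch (illustrative Python building on the helpers above), both coefficients can be computed directly:

    def accuracy(table, attrs, Y):
        """alpha_P(Y): |lower| / |upper|, for a non-empty upper approximation."""
        low, up = lower_upper(table, attrs, Y)
        return len(low) / len(up)

    def quality(table, attrs, decision="D"):
        """gamma_P(y): fraction of objects falling into the lower
        approximation of some decision class, i.e. objects that are
        unambiguously classifiable from the attributes in attrs."""
        labels = {row[decision] for row in table.values()}
        sorted_ok = set()
        for v in labels:
            Y = {x for x in table if table[x][decision] == v}
            low, _ = lower_upper(table, attrs, Y)
            sorted_ok |= low
        return len(sorted_ok) / len(table)

    # On the toy table: quality(table, ["A1", "A2"]) == 0.25, because
    # only x3 lies in the lower approximation of its decision class.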
3.5 Concept 5: Dependency of Attributes
Analysis of the dependencies between attributes is of crucial importance in the rough set approach to knowledge examination [62]. Investigation of these dependencies aims at verifying whether one needs to know the values of all attributes
in order to unambiguously characterize the elements (objects) of the universe U. A set of attributes $R \subseteq Q$ depends on a set of attributes $P \subseteq Q$ in S (denoted $P \rightarrow R$) if $I_P \subseteq I_R$ [62]. Let P and R be subsets of attributes. In order to analyze the dependency of attributes we can calculate the degree of dependency $\gamma(P, R)$ of a set P of attributes with respect to a set R of class labels, defined as [61]:

$\gamma(P, R) = \frac{card(POS_P(R))}{card(U)}$

where $POS_P(R) = \bigcup_{Y \in U/R} \underline{P}Y$ (the positive region of the partition U/R with respect to P) is the set of all elements of U that can be uniquely classified to blocks of the partition U/R by means of P. The degree of dependency provides a measure of how important P is in mapping the data set examples into Q. We can distinguish three situations [55]:

– $\gamma(P, Q) = 1$: Q is completely dependent on P, hence the attributes are indispensable
– $0 < \gamma(P, Q) < 1$: partial dependency, i.e., some of the attributes are useful
– $\gamma(P, Q) = 0$: the classification Q is independent of the attributes in P, hence these attributes are of no use for this classification
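In the running sketch, the degree of dependency is the quality of approximation computed against the partition induced by R (illustrative Python, reusing the helpers above):

    def dependency(table, P, R):
        """gamma(P, R): fraction of objects uniquely classifiable to
        blocks of the partition U/R using only the attributes in P."""
        pos = set()
        for block in ind_classes(table, R):        # partition U/R
            low, _ = lower_upper(table, P, block)  # P-lower approximation
            pos |= low                             # accumulate POS_P(R)
        return len(pos) / len(table)

    # dependency(table, ["A1", "A2"], ["D"]) reproduces the quality of
    # classification, since here R consists of the decision attribute alone.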
3.6 Concept 6: Significance of Attributes
The significance of an attribute is closely connected with the relationship between the values of the individual condition attributes and the values of the decision attributes. In other words, it describes the consequences of removing a particular condition attribute from the decision table. The significance of a particular attribute is expressed by calculating the change of the dependency (relationship) between attributes caused by the elimination of the attribute in question from the set of considered condition attributes. Given two sets P and Q and an attribute $a \in P$, the significance of a with respect to Q is defined by [58]:

$\sigma_P(Q, a) = \gamma_P(Q) - \gamma_{P-\{a\}}(Q)$

where $\gamma_{P-\{a\}}(Q)$ is the 'complement dependency' of a with respect to P [49]. The more important a particular attribute is, the bigger the change in the calculated dependency. By employing both these values one can calculate a normalized indicator of the significance of a particular condition attribute $a \in P$, which is depicted as [55]:

$\sigma_{(P,D)}(a) = \frac{\gamma_P(D^*) - \gamma_{P-\{a\}}(D^*)}{\gamma_P(D^*)}$
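Both the raw and the normalized indicators follow directly from the dependency function of the previous sketch (illustrative Python):

    def significance(table, P, R, a):
        """sigma_P(R, a): drop in the dependency degree caused by removing
        attribute a from P; a larger drop means a more important attribute."""
        rest = [p for p in P if p != a]
        return dependency(table, P, R) - dependency(table, rest, R)

    def significance_normalized(table, P, R, a):
        """The same drop expressed as a fraction of gamma(P, R)."""
        g = dependency(table, P, R)
        return significance(table, P, R, a) / g if g else 0.0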
3.7 Concept 7: Attribute Reduction
Another central matter in research on rough set theory is knowledge reduction [37]. It is the process of finding the minimum number of indicators
(attributes) that are important within a database. The set of reduced indicators obtained is known as a 'reduct' [1]. A reduct is defined as a subset R of the set of condition attributes C such that [61], [58]:

$\gamma_C(Q) = \gamma_R(Q)$

Less formally, a reduced set of attributes R, $R \subseteq Q$, provides the same quality of classification as the original set of attributes Q [19]. The collection of the most important attributes in the system is called the core of the set. The core is the most essential part of the set P; it cannot be eliminated without disturbing the ability to approximate the decision [62]. In other words, the core is the intersection of all reducts of the set [24], [78].
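For small attribute sets, reducts and the core can be enumerated by brute force, as in the following sketch (illustrative Python on top of the helpers above; real tools, such as the software used later in this study, rely on far more efficient algorithms):

    from itertools import combinations

    def all_reducts(table, conds, decision=("D",)):
        """Minimal subsets R of the condition attributes that preserve the
        quality of classification: gamma(R, D) == gamma(C, D).
        Exponential in len(conds): suitable for small examples only."""
        target = dependency(table, list(conds), list(decision))
        reducts = []
        for k in range(1, len(conds) + 1):
            for subset in combinations(conds, k):
                if any(set(r) <= set(subset) for r in reducts):
                    continue  # a proper subset is already a reduct
                if dependency(table, list(subset), list(decision)) == target:
                    reducts.append(subset)
        return reducts

    def core(reducts):
        """The core is the intersection of all reducts."""
        return set.intersection(*map(set, reducts)) if reducts else set()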
4 Application of Rough Sets

4.1 Research - Goals and Assumptions
This paper addresses the issue of intangible assets and their role in value creation. It aims at identifying those intangible assets which had the biggest impact on the market value of companies in the Polish telecommunication sector. As we illustrated in the previous section, it is possible to analyse intangible assets from two different perspectives: accounting and management [46]. The difference between these two approaches results in the disparity between book and market value. In other words, the management approach acknowledges some of the intangible assets (evaluated by the market in terms of market value) which are not recognized as such by accounting standards. Consequently, the financial statements contain only some of the intangibles accepted by the management stance, resulting in the divergence between market and book values. Intangibles which are identified by the management perspective but not included in the accounting approach are the subject of our investigation. However, as there are serious problems regarding the definitions, and consequently the measurement and valuation, of intangible assets (IA), we approached this problem from a different perspective. Instead of trying to develop another method of measuring IA, we chose to examine the relationship between some of the intangibles presented in the literature and corporate value. In other words, our goal was not to measure the value of intangible assets but to investigate which of them has the biggest impact on creating company value. Having analyzed the Polish telecommunication sector in a previous study, we distinguished two types of companies. The first was characterized by positive and constant growth of corporate value, while the second was found to be unstable and fluctuating. We also realized that such differences between these two types of companies were potentially caused by intangible assets. Thus our objective was to determine which intangibles caused the difference. Due to the specific and sophisticated nature of IA, it was necessary to choose a tool able to handle data characterized by vagueness and uncertainty. We applied rough set theory, as this technique has proved useful for analyzing data
of this nature [70], [62], [26], and performs better than conventional statistical approaches [69]. Furthermore, we also considered the fact that, in contrast to other intelligent methods such as fuzzy set theory, Dempster-Shafer theory, or statistical methods, rough set theory does not require any external parameters, utilizing only the information available in the given data [87]. Due to its variety of applications, it has become very popular among scientists, and as a result it is one of the most rapidly developing intelligent data analysis techniques [28].
4.2 Methodology and Data
In order to distinguish the different types of intangibles which should play a role in value creation, we analysed a number of different models used to measure and manage intangible assets [12], [6], [76], [3], [13], [81], [18], [52], [56]. As most of them emphasize the role of human resources, we decided to focus on this aspect of the companies. This approach was also determined by other factors, such as access to information as well as the validity and reliability of the acquired data. After the literature review we distinguished the following five areas of interest describing human capital in the analysed organizations:

– Communication
– Competencies
– Organizational Culture
– Training
– Motivation
Next, each of the aforementioned areas needed to be described by a set of indicators (intangible assets). While constructing these measures we focused on the best-known and most widely accepted propositions concerning the measurement and valuation of IA:

– Balanced Scorecard, Kaplan and Norton [39]
– Intangible Assets Monitor, Sveiby [76]
– Skandia Navigator, Edvinsson [21]
– Value Creation Index (VCI) [52]
As a result, we obtained a model consisting of five areas (Communication, Competencies, Organizational Culture, Motivation, and Training), each of them described by a set of indicators (from 5 to 10). This is illustrated in Tables 2, 6, 10, 15, and 17. On the whole, the constructed model comprised 43 indicators representing particular types of intangible assets. Next, on the basis of the model, a questionnaire was constructed. The questionnaire was sent to the employees of two opposite groups of companies (one where the difference between market and book values was growing, and the other where it was fluctuating). Due to constraints related to the availability of data (a relatively short presence on the Warsaw Stock Exchange, which is itself a rather young market), the study was confined to 3 out of the 6 Polish telecommunication sector companies. These companies were selected because they were the largest and longest-listed companies on the WSE.
Two of these companies were classified into the first group, characterized by positive and growing value, and the third was categorized into the group with fluctuating and unstable value. During the pilot study (10 questionnaires), one of the companies in the first group withdrew its participation in the research due to time constraints. Next, 120 questionnaires were sent to the remaining two companies, split equally between the two. The questionnaires were disseminated to employees occupying different positions within the two companies. In order to obtain perspectives as diversified as possible, the questionnaires were distributed to branches located in different Polish cities:

– Warsaw (33 questionnaires)
– Lodz (11 questionnaires)
– Poznan (22 questionnaires)
– Wroclaw (11 questionnaires)
Altogether we collected 80 questionnaires: 55 from the company representing the positive case versus 25 from the company representing the fluctuating case. The received data was organized in the form of the information system proposed by Pawlak [60]. Subsequently, we employed the Rose2Little software [71] in order to analyse the received records by means of the rough set methodology. Due to the limitations of our software, every area was analysed separately. However, despite this approach, in each area we had two types of attributes: condition attributes (specific to each area) and one decision attribute (the same for all the areas). The decision attribute was related to the category of the company (positive versus fluctuating). As we explained earlier, there were two groups of firms: one with positive and growing value and the fluctuating one. The first was classified as 'G' (good) and the second as 'B' (bad). Subsequently, the decision attribute induced a partition of the 80 cases (objects) into two decision classes: 'bad' or 'good' company, regardless of the type of analysed area.
4.3 Results
Communication. The area of communication was organized in a decision table. The rows of this table represent objects (in this case questionnaires, i.e., the opinions of the employees representing the two different types of firms) described by 10 condition attributes and by a single decision attribute, whose value was G for the good group of companies and B for the other group. The decision attribute thus induced a partition of the 80 cases (objects) into two decision classes. Table 2 presents the particular attributes as well as their values. According to the results obtained through the Rose2Little software, the two classes were perfectly approximated by the whole set of attributes: the accuracy of approximation was equal to one. The quality of approximation of the classification was also equal to one, which means that the employed attributes granted satisfactory discrimination between the two aforementioned classes. Since we did not find any single attribute which would be necessary for the approximation of the decision classes (the core of the set was empty), we next decided to calculate the reducts of the information table.
Table 2. Communication - Attributes

Attribute No. | Attribute/Indicator | Value
A1 | clear communication of decisions provided by managers | 1-5
A2 | information provided by supervisor enables employees to work more quickly | 1-5
A3 | employees' knowledge about the differences between their company and its competitors | 1-5
A4 | level of efficiency of communication between management and employees | 1-5
A5 | level of efficiency of communication between coworkers | 1-5
A6 | level of efficiency of communication between different departments | 1-5
A7 | duties are clearly defined | 1-5
A8 | employees' familiarity with the long-term organizational goals and strategy | 1-5
A9 | communication of the strengths and weaknesses of employees by supervisor | 1-5
A10 | frequency of assessment of the employee's work | 1-5
D | type of group the company belongs to | G, B
As a result we obtained 38 reducts (Table 3). In order to find the most satisfactory reduct, i.e., one consisting of the minimum number of attributes, we decided to use the approach proposed in [70]. It starts with the single attribute characterized by the greatest quality of classification. Next, to the selected attribute is added the one that provides the greatest quality of classification for the pair of attributes. The procedure repeats until the required quality is reached by the set of attributes. This procedure for the area of communication is presented in Table 5. According to the procedure, we started with the attribute providing the highest quality of approximation, A9 (quality of approximation 0.412). In the next iteration, attribute A2 provided the highest score, 0.325; the pair (A9 + A2) was characterised by a quality of approximation equal to 0.738. In the next step there are two attributes with the highest score (0.212): A8 and A10. Looking at the frequencies of the particular attributes in the reducts (Table 4), we decided to choose A8, which has a higher frequency than A10 (the path obtained by choosing A10 is also shown in the table). The triple (A9 + A2 + A8) provides a quality of approximation equal to 0.950. In the next iteration we could again choose between two attributes (quality of approximation gain equal to 0.050): A5 or A6. In this case we looked again at the frequencies and picked the attribute with the higher score, A5. In this way we obtained the most satisfactory reduct, consisting of the minimum number of attributes: A9 + A2 + A8 + A5. These attributes are also characterised by the highest scores in terms of their frequencies of presence in the reducts.
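This forward-selection procedure can be sketched as follows (illustrative Python reusing the dependency helper from Sect. 3; note that ties here are broken arbitrarily, whereas the study breaks them by attribute frequency in the reducts):

    def greedy_reduct(table, conds, decision=("D",)):
        """Start from the single attribute with the highest quality of
        classification and keep adding the attribute that yields the
        greatest quality for the enlarged set, until the quality of the
        full attribute set is reached."""
        target = dependency(table, list(conds), list(decision))
        chosen = []
        while dependency(table, chosen, list(decision)) < target:
            best = max((a for a in conds if a not in chosen),
                       key=lambda a: dependency(table, chosen + [a],
                                                list(decision)))
            chosen.append(best)
        return chosen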
Table 3. Communication - Reducts

Reduct No. | Reduct | Length
1 | A1, A2, A7, A8, A10 | 5
2 | A2, A4, A7, A8, A10 | 5
3 | A2, A3, A4, A5, A9 | 5
4 | A1, A2, A3, A5, A7, A8 | 6
5 | A1, A2, A3, A4, A5, A8 | 6
6 | A1, A5, A6, A8, A10 | 5
7 | A1, A5, A6, A8, A10 | 5
8 | A1, A3, A5, A8, A10 | 5
9 | A2, A5, A8, A10 | 4
10 | A2, A3, A4, A6, A9 | 5
11 | A3, A4, A5, A6, A9 | 5
12 | A1, A2, A4, A6, A8 | 5
13 | A1, A3, A5, A7, A9, A10 | 6
14 | A2, A3, A7, A9, A10 | 5
15 | A2, A3, A4, A6, A8 | 5
16 | A1, A2, A3, A6, A8 | 5
17 | A1, A2, A5, A6, A7, A8 | 6
18 | A2, A4, A5, A6, A8 | 5
19 | A2, A4, A5, A7, A8 | 5
20 | A3, A4, A5, A6, A8 | 5
21 | A4, A5, A6, A8, A10 | 5
22 | A2, A6, A8, A10 | 4
23 | A4, A5, A7, A8, A10 | 5
24 | A2, A4, A5, A7, A9 | 5
25 | A2, A5, A8, A9 | 4
26 | A1, A4, A5, A7, A9, A10 | 6
27 | A2, A4, A9, A10 | 4
28 | A2, A6, A8, A9 | 4
29 | A1, A5, A6, A7, A9, A10 | 6
30 | A2, A6, A7, A9 | 4
31 | A4, A5, A6, A7, A9 | 5
32 | A3, A5, A6, A7, A9 | 5
33 | A4, A5, A6, A9, A10 | 5
34 | A1, A5, A8, A9, A10 | 5
35 | A3, A5, A8, A9, A10 | 5
36 | A4, A5, A8, A9, A10 | 5
37 | A2, A8, A9, A10 | 4
38 | A5, A6, A8, A9 | 4
Table 4. Communication area: Frequency

Attribute | Frequency | %Frequency
A5 | 26 | 68.42
A8 | 25 | 65.79
A2 | 21 | 55.26
A9 | 20 | 52.63
A10 | 19 | 50.00
A4 | 18 | 47.37
A6 | 18 | 47.37
A7 | 15 | 39.47
A1 | 13 | 34.21
A3 | 13 | 34.21
As a result of our procedure we obtained the following attributes, which satisfy the criterion of quality of approximation for the whole set of attributes (A9 + A2 + A8 + A5):

– employees' familiarity with the long-term organizational goals and strategy
– communication of the strengths and weaknesses of employees by supervisor
– information provided by supervisor enables employees to work more quickly
– level of efficiency of communication between coworkers
Competencies. The area of competencies was described by ten attributes (Table 6), nine of which constituted condition attributes, plus one decision attribute (the type
of group to which the company belonged). As mentioned earlier (Sect. 3.4, Concept 4: Accuracy and quality of approximation), the accuracy of approximation is applied in order to describe the degree of completeness of knowledge about the decision attribute that can be obtained using the information on its determinants, the condition attributes [24].

Table 5. Increase of quality of classification by successive augmentation of the subset of attributes

Step | Quality of the set | A1 | A2 | A3 | A4 | A5 | A6 | A7 | A8 | A9 | A10
1 | single attribute | 0.125 | 0.150 | 0.013 | 0.112 | 0.038 | 0.112 | 0.150 | 0.188 | 0.412 | 0.013
2 | A9 = 0.412 | 0.188 | 0.325 | 0.237 | 0.188 | 0.150 | 0.188 | 0.138 | 0.175 | X | 0.200
3 | A9 + A2 = 0.738 | 0.062 | X | 0.188 | 0.162 | 0.137 | 0.188 | 0.150 | 0.212 | X | 0.212
4 | A9 + A2 + A8 = 0.950 | 0.025 | X | 0.025 | 0.025 | 0.050 | 0.050 | 0.025 | X | X | 0.050
5 | A9 + A2 + A8 + A5 = 1.00
3a | A9 + A2 + A10 = 0.95 | 0.000 | X | 0.025 | 0.050 | 0.000 | 0.025 | 0.025 | 0.050 | X | X
Table 6. Competencies: Attributes

Attribute No. | Attribute/Indicator | Value
A1 | managers' competencies in general | 1-5
A2 | level of managers' competencies in their position | 1-5
A3 | supervisor's ability to motivate subordinates | 1-5
A4 | supervisor's ability to create an environment of growth and development for the employees | 1-5
A5 | supervisors' competencies in team building | 1-5
A6 | approachability of managers to discuss job issues | 1-5
A7 | coworkers' competencies in general | 1-5
A8 | coworkers' competencies in terms of the position | 1-5
A9 | level of cooperation between employees | 1-5
D | type of group the company belongs to | G, B
The accuracy of approximation for class G was equal to 0.9718, and for class B to 0.8182. This may suggest that the nine employed condition attributes were slightly better at predicting those companies with a significant and increasing difference between market and book value than those companies with the opposite characteristic. While the accuracy of classification communicates the percentage of possibly correct decisions when predicting companies with good
characteristics of the market/book value ratio by utilizing the information provided by the nine attributes, the quality of approximation expresses the percentage of correctly classified cases among all the cases (objects) in the system [51]. Here the quality of approximation was equal to 0.9750, which is a very satisfactory score. In the area of competencies we found three single attributes which turned out to be absolutely essential for the approximation of the decision classes. They constituted the core of the set (A3, A4, A9):

– supervisor's ability to motivate subordinates
– supervisor's ability to create an environment of growth and development for the employees
– level of cooperation between employees

The quality loss for each attribute in the core, in case of its removal from the core, is presented in Table 7.

Table 7. Competencies - quality loss

Attribute no. | Quality loss
A3 | 0.313
A4 | 0.213
A9 | 0.200
Since the quality of classification for the core was equal to 0.600, we needed to find additional attributes which would allow us to reach the same quality of approximation as that of the whole set of attributes. Thus the next step was to calculate the reducts of the information table. Consequently, we obtained five reducts, each containing five attributes (Table 8).

Table 8. Competencies - Reducts

Reduct no | Reduct | Length
1 | A3, A4, A5, A6, A9 | 5
2 | A2, A3, A4, A5, A9 | 5
3 | A3, A4, A5, A7, A9 | 5
4 | A2, A3, A4, A6, A9 | 5
5 | A3, A4, A5, A8, A9 | 5

Due to the fact that the reducts found were the same in terms of the number of embedded attributes, we looked at the frequency statistic, which describes the regularity of the presence of each attribute in the created reducts. The highest score among the attributes outside the core belonged to attribute A5. Using the Quick Reduct procedure [70], we also found that attribute A5 possessed the greatest quality of classification when considered as a candidate for addition to the attributes constituting the core of the analysed set. In the next iteration of this procedure we obtained three attributes, A6, A7, and A8, which provided the highest score (Table 9).
Table 9. Increase of quality of classification by successive augmentation of the subset of attributes

Attribute quality gain | A1 | A2 | A5 | A6 | A7 | A8
core | 0.150 | 0.325 | 0.350 | 0.200 | 0.112 | 0.100
core + A5 = 0.950 | 0.00 | 0.025 | X | 0.025 | 0.025 | 0.025
Organizational Culture. Organizational culture was analysed by means of nine attributes, eight of which constituted condition attributes, plus one decision attribute (Table 10). Since both the accuracy and the quality of approximation were equal to 1, we can state that the eight condition attributes provided an ideal approximation of the two analysed classes as well as perfect discrimination between them.

Table 10. Organizational Culture - Attributes

Attribute No. | Attribute/Indicator | Value
A1 | level of autonomy for employees when performing their duties | 1-5
A2 | support and incentives for employees' creativity and innovation | 1-5
A3 | degree to which management attends to employees' problems | 1-5
A4 | perceived level of job security | 1-5
A5 | degree to which management attends to employees' suggestions | 1-5
A6 | access to necessary information | 1-5
A7 | support for knowledge sharing behaviour | 1-5
A8 | intention to stay in the organization | 1-5
D | type of group the company belongs to | G, B
Analyzing the core of the set, we found two attributes (A3 and A7):

– degree to which management attends to employees' problems
– support for knowledge sharing behaviour

The quality loss for these attributes, in case of their removal from the core, is presented in Table 11.

Table 11. Organizational Culture - quality loss

Attribute no. | Quality loss
A3 | 0.287
A7 | 0.137
Because the quality of classification for the core (0.688) was lower than that for the whole set of attributes, we needed to add attributes to the core in order to reach a quality of approximation equal to that of the whole set. In other words, we needed to look for reducts.
Table 12. Organizational Culture - Reducts

Reduct no | Reduct | Length
1 | A3, A6, A7, A8 | 4
2 | A2, A3, A4, A7, A8 | 5
3 | A1, A2, A3, A7, A8 | 5
4 | A2, A3, A4, A6, A7 | 5
5 | A1, A3, A4, A6, A7 | 5
6 | A1, A2, A3, A4, A5, A7 | 6
Table 13. Organizational Culture - Frequency

Attribute | Frequency | %Frequency
A3 | 6 | 100.00
A7 | 6 | 100.00
A2 | 4 | 66.67
A4 | 4 | 66.67
A1 | 3 | 50.00
A6 | 3 | 50.00
A8 | 3 | 50.00
A5 | 1 | 16.67
Table 14. Increase of quality of classification by successive augmentation of the subset of attributes

Step | Quality of the set | A1 | A2 | A4 | A5 | A6 | A8
1 | core | 0.113 | 0.150 | 0.075 | 0.038 | 0.087 | 0.212
2 | core + A8 = 0.900 | 0.075 | 0.062 | 0.075 | 0.025 | 0.100 | X
3 | core + A8 + A6 = 1.00
Consequently, we received 6 reducts (Table 12). The most satisfactory was reduct no. 1 (A3, A6, A7, A8), the one containing the minimal number of attributes. Table 13 presents the frequencies of the particular attributes in the reducts. The steps of the procedure proposed by Slowinski [70] are depicted in Table 14. As we can see, the reduct chosen was the same as the one obtained through the Quick Reduct procedure.

Training. The area of training was characterised by five condition attributes and one decision attribute (Table 15). We obtained a satisfactory level of the quality of classification (0.8625). However, the results concerning the accuracy of classification for the two classes in question were significantly different. While the five attributes applied were largely sufficient in explaining the good type of company (G), with an accuracy of classification for class G equal to 0.8493, they might be insufficient in explaining the characteristics of a bad company (B): for class B, the accuracy of approximation was 0.3889.
Table 15. Training - Attributes

Attribute No. | Attribute/Indicator | Value
A1 | number of training days per year | 1-5
A2 | employee's judgment of need for training | 1-5
A3 | contribution of the provided training to the improvement of the employee's qualifications | 1-5
A4 | contribution of the provided training to the improvement of the employee's efficiency | 1-5
A5 | degree of the employees' input into the training topics | 1-5
D | type of group the company belongs to | G, B
An interesting result was that all the condition attributes were allocated to the core. This means that there were no redundant attributes: the exclusion of any one of them would reduce the accuracy of classification. On this basis we can say that all the attributes characterizing the area of training were necessary and indispensable. In accordance with the theory (Sect. 3.4, Concept 4: Accuracy and quality of approximation), the quality of classification of the core was equal to the quality of classification for the whole set. Table 16 presents the quality loss for these attributes in case they were removed from the core.

Table 16. Training - quality loss

Attribute no. | Quality loss
A1 | 0.088
A2 | 0.050
A3 | 0.012
A4 | 0.012
A5 | 0.137
Motivation. Motivation was described by nine condition attributes and one decision attribute (class G, class B). On the basis of the obtained results (both the quality and the accuracy of approximation were equal to 1) we confirmed that the two classes were described perfectly. Due to the fact that the core of the set was empty, we needed to calculate the reducts of the whole set of condition attributes. Consequently, we obtained 18 reducts, each containing from 4 to 6 attributes (Table 18). The frequency of each attribute in the reducts is presented in Table 19. In order to find the most satisfactory reduct, and since there were four reducts characterised by the minimal number of attributes, we decided to use the procedure suggested by Slowinski [70] (Table 20). We started with the attribute providing the highest quality of approximation (A4, 0.375). Next we added to it the attribute with the next highest score (A5, 0.300). As a result of these steps we received a reduct comprising the following attributes (A4, A5, A6, A3, A2):

– fit of the career path with the employees' expectations
– level of employees' job satisfaction
– quality of the benefits package offered by the organization
– effectiveness of the motivation system
– type of performance considered as a basis for the motivation system

Table 17. Motivation - Attributes

Attribute No. | Attribute/Indicator | Value
A1 | existence of a career path in the organization | 1-5
A2 | type of performance considered as a basis for the motivation system | 1-5
A3 | effectiveness of the motivation system | 1-5
A4 | fit of the career path with the employees' expectations | 1-5
A5 | level of employees' job satisfaction | 1-5
A6 | quality of the benefits package offered by the organization | 1-5
A7 | degree of satisfaction with promotion opportunities | 1-5
A8 | level of cooperation with supervisor | 1-5
A9 | level of personal development possibilities | 1-5
D | type of group the company belongs to | G, B
Table 18. Motivation - Reducts

Reduct No. | Reduct | Length
1 | A2, A3, A6, A8 | 4
2 | A2, A3, A6, A9 | 4
3 | A2, A3, A5, A6 | 4
4 | A2, A3, A6, A7 | 4
5 | A5, A6, A7, A8, A9 | 5
6 | A3, A6, A7, A8, A9 | 5
7 | A3, A4, A6, A8, A9 | 5
8 | A1, A3, A5, A8, A9 | 5
9 | A4, A5, A7, A8, A9 | 5
10 | A1, A5, A7, A8, A9 | 5
11 | A3, A5, A7, A8, A9 | 5
12 | A2, A3, A4, A5, A7 | 5
13 | A3, A4, A5, A8, A9 | 5
14 | A1, A2, A3, A7, A8, A9 | 6
15 | A1, A2, A3, A4, A8, A9 | 6
16 | A1, A4, A5, A6, A8, A9 | 6
17 | A1, A2, A5, A6, A8, A9 | 6
18 | A1, A2, A3, A5, A7, A9 | 6
Table 19. Motivation - Frequency

Attribute | Frequency | %Frequency
A9 | 14 | 77.78
A3 | 13 | 72.22
A8 | 13 | 72.22
A5 | 11 | 61.11
A2 | 9 | 50.00
A7 | 9 | 50.00
A6 | 9 | 50.00
A1 | 7 | 38.89
A4 | 6 | 33.33
Table 20. Increase of quality of classification by successive augmentation of the subset of attributes

Step | Quality of the set | A1 | A2 | A3 | A4 | A5 | A6 | A7 | A8 | A9
1 | single attribute | 0.213 | 0.00 | 0.050 | 0.375 | 0.200 | 0.225 | 0.287 | 0.013 | 0.025
2 | A4 = 0.375 | 0.212 | 0.037 | 0.262 | X | 0.300 | 0.275 | 0.175 | 0.125 | 0.250
3 | A4 + A5 = 0.675 | 0.075 | 0.125 | 0.162 | X | X | 0.175 | 0.125 | 0.062 | 0.113
4 | A4 + A5 + A6 = 0.850 | 0.050 | 0.075 | 0.100 | X | X | X | 0.037 | 0.062 | 0.050
5 | A4 + A5 + A6 + A3 = 0.950 | 0.00 | 0.050 | X | X | X | X | 0.00 | 0.025 | 0.025

5 Conclusions
In this paper we presented the results of research concerning intangible assets. We put a special focus on those intangibles which are not recognized as such by accounting standards but are fully acknowledged by the management approach. We investigated five areas described by sets of indicators (intangible assets), namely: Communication, Motivation, Competencies, Training, and Organizational Culture. Since rough sets is a technique providing more reliable and valid results than other methods [70], [23], [36], we decided to use that approach to analyze our data. Consequently, we obtained a set of indicators which had the biggest impact on corporate value creation. In three of the five analysed areas (Training, Organizational Culture, and Competencies) we found a core. The following list presents those intangibles which constitute the cores of the three sets:

– supervisor's ability to motivate subordinates
– supervisor's ability to create an environment of growth and development for the employees
– level of cooperation between employees
– degree to which management attends to employees' problems
– support for knowledge sharing behaviour
– number of training days per year
– employee's judgment of need for training
– contribution of the provided training to the improvement of the employee's qualifications
– contribution of the provided training to the improvement of the employee's efficiency
– degree of the employees' input into the training topics

The aforementioned intangible assets were the condition attributes present in the cores. We did not present here the intangible assets (condition attributes) which constituted the particular reducts. In the future we intend to pursue similar analyses with regard to different business sectors. That type of investigation would allow us to compare different sectors in terms of their intangible assets.
References

1. Ahmad, F., Hamdan, A.R., Bakar, A.A.: Determining Success Indicators of E-Commerce Companies Using Rough Set Approach. The Journal of American Academy of Business (September 2004)
2. Ahn, B., Cho, S., Kim, Y.: The integrated methodology of rough set theory and artificial neural network for business failure prediction. Expert Systems with Applications 18 (2000)
3. Andriessen, D.: The Financial Value of Intangibles, Searching For The Holy Grail. Paper presented at the 5th World Congress on the Management of Intellectual Capital, Hamilton, Ontario, Canada, January 16-18 (2002)
4. Arvidsson, S.: Demand and Supply of Information on Intangibles: The Case of Knowledge-Intense Companies. PhD Dissertation, Department of Business Administration, Finance, Lund University (2003)
5. Banbura, J., Pruszczynska, B.: Jak zwiekszyc wartosc firmy - Value Based Management cz. I. Francuski Instytut Gospodarki, http://nowoczesnafirma.wp.pl/artykuly/artykul_5717.htm
6. Baruch, L.: Intangibles: Management, Measurement, and Reporting. Brookings Institution Press, Washington (2001)
7. Bernadette, L.: Intellectual Capital. CMA Magazine 72(1) (February 1998)
8. Beynon, M., Curry, B., Morgan, P.: Knowledge discovery in marketing - an approach through Rough Set Theory. European Journal of Marketing 35(7/8) (2001)
9. Blair, M., Wallman, S.: Unseen Wealth: Report of the Brookings Task Force on Intangibles. The Brookings Institution, New York (2001)
10. Boisot, M., Canals, A.: Data, information and knowledge: have we got it right? Journal of Evolutionary Economics 14 (2004)
11. Bianchi, P., Iorio, R., Labory, S., Malagoli, N.: EU Policies for Innovation and Knowledge Diffusion. WP3: Policy Implications of the Intangible Economy (University of Ferrara), PRISM work package 3, PRISM 2002 (2002)
12. Bontis, N.: Assessing knowledge assets: a review of the models used to measure intellectual capital. International Journal of Management Reviews 3(1) (March 2001)
13. Brooking, A., Motta, E.: A Taxonomy of Intellectual Capital and a Methodology for Auditing It. In: 17th Annual National Business Conference, McMaster University, Hamilton, Ontario, Canada, January 24-26 (1996)
14. Bukowitz, W., Petrash, G.: Visualizing, Measuring and Managing Knowledge. Research Technology Management 40(4) (1997)
15. Chouchoulas, A., Shen, Q.: Rough set-aided keyword reduction for text categorization. Applied Artificial Intelligence 15 (2001)
16. Collins English Dictionary. Harper Collins Publishers, England (2000)
17. Contractor, F.: Valuing Corporate Knowledge and Intangible Assets: Some General Principles. Knowledge and Process Management 7(4) (2000)
18. Dawson, C.: Human Resource Accounting: From Prescription to Description. Management Decision 32(6) (1994)
19. Dimitras, A., Slowinski, R., Susmaga, R., Zopounidis, C.: Business failure prediction using rough sets. European Journal of Operational Research 114 (1999)
20. Dobija, D.: Pomiar i sprawozdawczosc kapitalu intelektualnego przedsiebiorstwa. WSPiZ, Warszawa (2003)
21. Edvinsson, L., Malone, M.: Kapital Intelektualny. Wydawnictwo Naukowe PWN, Warszawa (2001)
22. Eustace, C.: The Intangible Economy: Impact and Policy Issues. Report of the European High Level Expert Group on the Intangible Economy, Enterprise Directorate-General, European Commission (October 2000)
23. Tay, F.E.H., Shen, L.: Economic and financial prediction using rough sets model. European Journal of Operational Research 141 (2002)
24. Goh, C., Law, R.: Incorporating the rough sets theory into travel demand analysis. Tourism Management 24, 511-517 (2003)
25. Gento, A.M., Redondo, A.: Rough sets and maintenance in a production line. Expert Systems 20(5) (November 2003)
26. Greco, S., Matarazzo, B., Slowinski, R.: Extension of the Rough Set Approach to Multicriteria Decision Support. INFOR 38(3) (August 2000)
27. Hall, R.: The Strategic Analysis of Intangible Resources. Strategic Management Journal 13 (1992)
28. Hassanien, A.: Intelligent Data Analysis of Breast Cancer Based on Rough Set Theory. International Journal on Artificial Intelligence Tools 12(4) (2003)
29. Hejduk, I., Grudzewski, W.: Managing Corporate Value. In: Krupa, T. (ed.) New Challenges and Old Problems in Enterprise Management. Wydawnictwa Naukowo-Techniczne, Warszawa (2002)
30. Hendriksen, E., van Breda, M.: Accounting Theory, 5th edn. Irwin, Burr Ridge (1992); cited in: Dobija, D.: Pomiar i sprawozdawczosc kapitalu intelektualnego przedsiebiorstwa. WSPiZ, Warszawa (2003)
31. Hill, P., Youngman, R.: Revisiting Intangibles - A Misnomer? PRISM WP5: Part 2 (February 2002)
32. Holtham, C., Youngman, R.: Measurement and Reporting of Intangibles - A European Policy Perspective. PRISM Papers, Working Papers WP 2 (December 2002)
33. Hunter, L.: Intellectual Capital: Accumulation and Appropriation. Melbourne Institute Working Paper No. 22/02 (November 2002), http://www.melbourneinstitute.com
34. IAS 38, par. 8, International Financial Reporting Standards (IFRSs) including International Accounting Standards (IASs) and Interpretations as at 1 January 2005. International Accounting Standards Board (2005)
35. Intana, R., Mukaidono, M.: Generalization of rough sets and its applications in information systems. Intelligent Data Analysis 6 (2002)
36. Kyoung-jae, K., Ingoo, H.: The extraction of trading rules from stock market data using rough sets. Expert Systems 18(4) (September 2001)
37. Liang, J., Xu, Z.: The algorithm on knowledge reduction in incomplete information systems. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 10(1) (2002)
38. Johanson, U., Martensson, M., Skoog, M.: Measuring and Managing Intangibles: Eleven Swedish Qualitative Exploratory Case Studies. Paper presented at the International Symposium Measuring and Reporting Intellectual Capital: Experience, Issues, and Prospects, Technical Meeting, Amsterdam, June 9-10 (1999)
39. Kaplan, R., Norton, D.: The balanced scorecard - measures that drive performance. Harvard Business Review 70(1) (1992)
40. Kurz, P.: Intellectual capital management and value maximization. Technology, Law and Insurance (5) (2000)
41. Lee, S., Vachtsevanos, G.: An application of rough set theory to defect detection of automotive glass. Mathematics and Computers in Simulation 60 (2002)
42. Lenain, P., Paltridge, S.: After the Telecommunications Bubble. OECD Economic Outlook (73) (May 2003), http://www.oecd.org/dataoecd/62/37/2635431.pdf (accessed October 2005)
43. Lev, B.: Intangibles - Management, Measurement, and Reporting. Brookings Institution Press, Washington (2001)
44. Lia, Y., Zhang, C., Swan, J.: An information filtering model on the Web and its application in JobAgent. Knowledge-Based Systems 13 (2000)
45. Lowendahl, B.: Strategic Management of Professional Service Firms. Handelshojskolens Forlag, Copenhagen (1997); cited in: Di Tommaso, M., Paci, D., Schweitzer, S.: The Geography of Intangibles, op. cit.
46. Maciocha, A.: Intellectual Capital Versus Intangible Assets - the Differences and Similarities. In: Conference Proceedings, IC-Congress 2007, INHOLLAND University of Professional Education, Haarlem, The Netherlands, May 3-4 (2007)
47. Maciocha, A., Kisielnicki, J.: Kapital Intelektualny - problemy pomiaru: studium na przykladzie sektora telekomunikacji. Zeszyty Naukowe Politechniki Warszawskiej, Warszawa (2007)
48. MACMILLAN English Dictionary for Advanced Learners. Bloomsbury Publishing Plc. (2002)
49. Mak, B., Munakata, T.: Rule extraction from expert heuristics: A comparative study of rough sets with neural networks and ID3. European Journal of Operational Research 136 (2002)
50. Martinem, I., Perez, R.: Making decisions in case-based systems using probabilities and rough sets. Knowledge-Based Systems 16, 205-213 (2003)
51. McKee, T.: Developing a Bankruptcy Prediction Model via Rough Sets Theory. International Journal of Intelligent Systems in Accounting, Finance & Management 9 (2000)
52. Measuring the Future: The Value Creation Index. Cap Gemini Ernst & Young Center for Business Innovation Report (2000)
53. Meskens, N., Levecq, P., Lebon, F.: Multivariate analysis and rough sets: Two approaches for software-quality analysis. International Transactions in Operational Research 9 (2002)
54. Ju-Sheng, M., Wei-Zhi, W., Wen-Xiu, Z.: Approaches to knowledge reduction based on variable precision rough set model. Information Sciences 159 (2004)
55. Mrózek, A., Płonka, L.: Analiza danych metodą zbiorów przybliżonych - Zastosowania w ekonomii, medycynie i sterowaniu. Akademicka Oficyna Wydawnicza PLJ, Warszawa (1999)
56. Nally, D.: Reinventing Corporate Reporting. PricewaterhouseCoopers report (May 2000)
57. Nijkamp, P., Vindigni, G.: Food security and agricultural sustainability: an overview of critical success factors. Environmental Management and Health 13(5) (2002)
58. Pawlak, Z.: Rough Sets: Theoretical Aspects of Reasoning About Data. Kluwer Academic Publishers, Dordrecht (1991)
59. Pawlak, Z.: Rough sets and decision analysis. INFOR 38(3) (August 2000)
60. Pawlak, Z.: Rough set approach to knowledge-based decision support. European Journal of Operational Research 99, 48-57 (1997)
61. Pawlak, Z., Skowron, A.: Rudiments of rough sets. Information Sciences 177 (2007)
62. Pawlak, Z., Slowinski, R.: Decision Analysis Using Rough Sets. International Transactions in Operational Research 1(1) (1994)
63. The New Penguin English Dictionary. Penguin Books, England (2000)
64. Pheng, L.S., Hongbin, J.: Analysing ownership, locational and internalization advantages of Chinese construction MNCs using rough sets analysis. Construction Management and Economics (24), 1149-1165 (2006)
65. Pitkänen, A.: The importance of intellectual capital for organizational performance. A research proposal for a Ph.D. thesis, presented at the national doctoral tutorial for accounting, Lappeenranta, January 19-20 (2006)
66. Reilly, R.: Valuation of intangible assets for bankruptcy and reorganization purposes. Ohio CPA Journal 53(4) (August 1994)
67. Richards, D., Compton, P.: An alternative verification and validation technique for an alternative knowledge representation and acquisition technique. Knowledge-Based Systems 12 (1999)
68. Salonen, H., Nurmi, H.: Theory and Methodology. A note on rough sets and common knowledge events. European Journal of Operational Research 112 (1999)
69. Shen, L., Loh, H.T.: Applying rough sets to market timing decisions. Decision Support Systems 37(4) (September 2004)
70. Slowinski, R., Zopounidis, C., Dimitras, A.: Prediction of company acquisition in Greece by means of the rough set approach. European Journal of Operational Research 100 (1997)
71. http://idss.cs.put.poznan.pl/site/software.html
72. Smith, P.: Systemic Knowledge Management: Managing Organizational Assets for Competitive Advantage. Journal of Systemic Knowledge Management (April 1998)
73. Steward, G.: The Quest for Value. A Guide for Senior Managers. Harper Business (1991)
74. Stolowy, H., Jeny, A.: How accounting standards approach and classify intangibles - an international survey. Paper prepared for presentation at the 22nd Annual Congress of the European Accounting Association, Bordeaux, France, May 5-7 (1999)
75. Sveiby, K.: The Invisible Balance Sheet: Key indicators for accounting, control and valuation of know-how companies. The Konrad Group, Sweden (2000)
76. Sveiby, K.: The New Organizational Wealth - Managing & Measuring Intangible Assets. Berrett-Koehler (1997)
77. Tan, H., Plowman, D., Hancock, P.: Intellectual Capital and Financial Returns of Companies: an Exploration of the Pulic Model, http://www.handels.gu.se/eaa2005/Paper_Poster_database/All_papers/FRG041.doc
78. Tay, F., Shen, L.: Economic and financial prediction using rough sets model. European Journal of Operational Research 141 (2002)
79. Teece, D.: Strategies for Managing Knowledge Assets: the Role of Firm Structure and Industrial Context. Long Range Planning 33 (2000); cited in: Hunter, L.: Intellectual Capital: Accumulation and Appropriation, op. cit.
80. Di Tommaso, M., Paci, D., Schweitzer, S.: The Geography of Intangibles. Paper prepared for the 3rd PRISM Forum, Copenhagen, September 19-20 (2002)
81. Johanson, U.: Why the concept of human resource costing and accounting does not work: A lesson from seven Swedish cases. Personnel Review 28(1/2) (1999)
82. Ulmer, M.: Latest research on the valuation of intellectual capital: Models, methods and their evaluation. Doctoral seminar in Corporate Finance, Summer semester (June 27, 2003)
196
A. Maciocha and J. Kisielnicki
83. Wagner, C.: Making intangible assets more tangible, The Futurist; Washington (May/June 2001); in: Holtham C., Youngman R.: Measurement And Reporting of Intangibles - A European Policy Perspective, PRISM - Papers, Working Papers WP 2 (December 2002) 84. Jinn-Tsai, W., Yi-Shih, C.: Rough set approach for accident chains exploration. Accident Analysis and Prevention 39, 629–637 (2007) 85. Youngman, R.: Managing and Measuring Intangibles: A Multi-Disciplinary Challenge. World Corporate Finance Review 2(11) (November 2002) 86. Zambon, S.: Accounting, Intangibles and Intellectual Capital: an overview of the issues and some considerations. WP4: Accounting, Audit, and Financial Analysis in the New Economy, PRISM/RESCUE, First Report (April 2002) 87. Ziarko, W.: Discovery through Rough Set Theory. Communications of the ACM 42(11) (November 1999)
Multicriteria Attractiveness Evaluation of Decision and Association Rules

Izabela Szczęch

Institute of Computing Science, Poznań University of Technology, Piotrowo 2, 60-965 Poznań, Poland
[email protected]

Abstract. The work is devoted to multicriteria approaches to rule evaluation. It analyses desirable properties (in particular the property M, the property of confirmation and hypothesis symmetry) of popular interestingness measures of decision and association rules. Moreover, it analyses relationships between the considered interestingness measures, as well as enclosure relationships between the sets of non-dominated rules in different evaluation spaces. Its main result is the proposition of a multicriteria evaluation space in which the set of non-dominated rules contains all optimal rules with respect to any attractiveness measure with the property M. By determining the area of rules with a desirable value of a confirmation measure in the proposed multicriteria evaluation space, one can narrow down the set of induced rules to the valuable ones only. Furthermore, the work presents an extension of an apriori-like algorithm for the generation of rules with respect to attractiveness measures possessing valuable properties, and shows some applications of the results to the analysis of rules induced from exemplary datasets.
1 Introduction

1.1 Knowledge Discovery
Computer systems are nowadays commonly used in a vast number of application areas, including banking, telecommunication, management, healthcare, trade, marketing, control engineering, environment monitoring, research and science, among others. We are witnessing a trend to use them anytime and anywhere. As a result, a huge amount of data of different types (text, graphics, voice, video), concerning practically all domains of human activity (business, education, health, culture, science), is gathered, stored and available. These data may contain interesting and useful knowledge, hidden from the user, represented (defined) by some non-trivial and not explicitly visible patterns (relationships, anomalies, regularities, trends) [21], [48], [72], [87], [96], [97]. With the growth of the amount and complexity of the data stored in contemporary, large databases and data warehouses, the problem of extracting knowledge from datasets emerges as a real challenge, increasingly difficult and important. This problem is a central research and development issue of knowledge
discovery, which generally is a non-trivial process of looking for new, potentially useful and understandable patterns in data [21], [61], [4]. The discovered knowledge is represented by patterns which can take the form of decision or association rules, clusters, sequential patterns, time series, contingency tables, and others [14], [24], [33], [61], [62], [70], [90], [89], [95]. In this article, we shall consider patterns expressed in the form of "if..., then..." rules. The representation of knowledge in the form of rules is considered easier to comprehend than other forms (for a discussion see [14], [55], [54], [75]). Rules are usually induced from a dataset, i.e. a set of objects characterized by a set of attributes. They can be described as consequence relations between the condition (the "if" part) and decision (the "then" part) formulas built from attribute-value pairs. The condition formulas are called the premise of the rule, and the decision formulas are referred to as the conclusion or hypothesis of the rule. Objects from a dataset support a rule if the attribute-value pairs from the rule's premise and conclusion match the values of the object on each of the attributes mentioned in the rule, i.e. if both the premise and the conclusion of the rule are satisfied by the object. The more objects support the rule, the stronger the rule is. Rules can be induced from different datasets, e.g. from a dataset containing information about patients of a hospital. Such data can be gathered in the process of diagnostic treatment. Rules induced from such a dataset could describe the co-occurrence of certain symptoms and a disease: if symptom s1 is present and symptom s2 is absent then disease d1.

1.2 Attractiveness Measures and Their Properties
Typically, the number of rules generated from massive datasets is quite large, but only a few of them are likely to be useful for the domain expert. This is due to the fact that many rules are either irrelevant or obvious, and do not provide new knowledge [10]. Therefore, in order to measure the relevance and utility of the discovered rules, quantitative measures, also known as attractiveness or interestingness measures (metrics), have been proposed and studied (for a review see, e.g. [26], [37], [53], [81], [94]). They make it possible to reduce the number of rules that need to be considered, by ranking them and filtering out the useless ones. Since there is no single attractiveness measure that captures all characteristics of the induced rules and fulfils the expectations of every user, the number of interestingness measures proposed in the literature is large. Each of them reflects certain characteristics of rules and leads to an in-depth understanding of their different aspects. Widely known and commonly applied attractiveness measures include support and confidence [2], gain [25], conviction [6], the rule interest function [72], the dependency factor [66], entropy gain [58], [59], Laplace [12], [93], and lift [40], [6], [16]. While choosing attractiveness measures of rules for a certain application, users also often take into consideration properties (features) of measures which reflect their expectations towards the behaviour of the measures in particular situations. For example, one may demand that the used measure will increase (or at least not decrease) its value for a given rule when the number
of objects in the dataset that support this rule increases. In this article, we shall focus on the following properties of attractiveness measures, well motivated in the recent literature [28], [9], [22], [15], [17], [10]:

– the property M of monotonic dependency of the measure on the number of objects supporting or not the premise or the conclusion of the rule [28], [9],
– the property of confirmation, quantifying the degree to which the premise of the rule provides evidence for or against the conclusion [22], [15],
– the property of hypothesis symmetry, arguing that the significance of the premise with respect to the conclusion part of a rule should be of the same strength, but of the opposite sign, as the significance of the premise with respect to a negated conclusion [17], [10].

Analyses verifying if popular interestingness measures possess the above listed properties widen our understanding of those measures and of their applicability, and help us learn about relationships between different measures. The obtained results are also useful for practical applications because they show which attractiveness measures are relevant for meaningful rule evaluation.

1.3 Aim and Scope of the Article
The problem of choosing an appropriate attractiveness measure for a certain application is difficult not only because of the number of measures, but also because a single measure of interestingness is often an insufficient indicator of the quality of the rules. Therefore, multicriteria evaluation, i.e. using more than one attractiveness measure (criterion) at the same time, has become a common approach to this issue [3], [23], [30], [31], [50]. In the case of multicriteria rule evaluation, objectively, the best rules are the non-dominated ones (also known as Pareto-optimal rules), i.e. those for which there does not exist any other rule that is better on at least one evaluation criterion and not worse on any other. The set of all non-dominated rules, with respect to particular evaluation criteria, is referred to as the Pareto-optimal set or the Pareto-optimal border. The popular measures of rule support and confidence have been considered by Bayardo and Agrawal [3] as sufficient for multicriteria evaluation of rules. They express, respectively, the number of objects in the dataset that support the rule, and the probability with which the conclusion evaluates to true given that the premise is true. These measures are used in the well-known apriori algorithms [2] and permit one to benefit from the main advantage of these algorithms, which consists in the reduction of the frequent itemset search space.
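To make the dominance relation concrete, the following minimal sketch (not part of the original article; the rule names and criterion values are invented for illustration) extracts the set of non-dominated rules under two gain-type criteria such as support and confidence:

```python
# A rule dominates another if it is at least as good on both criteria and
# strictly better on at least one; the non-dominated rules form the Pareto set.
def pareto_optimal(rules):
    """rules: list of (name, criterion_1, criterion_2), both maximized."""
    front = []
    for name, x1, y1 in rules:
        dominated = any(
            (x2 >= x1 and y2 > y1) or (x2 > x1 and y2 >= y1)
            for _, x2, y2 in rules
        )
        if not dominated:
            front.append(name)
    return front

# Hypothetical (support, confidence) evaluations of four rules:
rules = [("r1", 4, 1.0), ("r2", 2, 1.0), ("r3", 5, 0.6), ("r4", 3, 0.5)]
print(pareto_optimal(rules))  # ['r1', 'r3'] -- r2 and r4 are dominated
```

A quadratic scan like this suffices for illustration; the point is only that the Pareto-optimal border is well defined once the evaluation criteria are fixed.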
200
I. Szcz¸ech
"against" the rule's conclusion. Therefore, their semantics allow one to distinguish the meaningful rules, for which the premise confirms the conclusion. The measure of confidence does not have the means to do that, and there may occur situations in which rules characterised by high values of confidence are in fact misleading, because the premise disconfirms the rule's conclusion. In this context, there arises a natural need to search for a substitute evaluation space that would include a confirmation measure. Moreover, since the property M of monotonic dependency of an attractiveness measure on the number of objects supporting or not the premise or conclusion of a rule, proposed by Greco, Pawlak and Słowiński [28], has been recognised as crucial especially for confirmation measures, it is desirable for a new evaluation space to include measures that are not only confirmation measures but also have the property M. The most general goal of the article is to find an evaluation space such that its set of non-dominated rules includes rules that are optimal with respect to any measure with the property M. Of course, the confirmation semantics would also need to be included in such a space, to avoid analysing uninteresting and misleading rules. The problem of choosing an adequate multicriteria evaluation space is non-trivial. It naturally leads to an important issue, not yet thoroughly discussed in the literature, of comparing different evaluation spaces, as well as determining the relationships of enclosure between their sets of non-dominated rules. If such relationships were discovered, it would mean that, by inducing the non-dominated rules with respect to one evaluation space, one can guarantee that this set contains the optimal or Pareto-optimal rules with respect to combinations of other measures. Such results would have a significant practical value, as they would allow one to determine a limited set of interesting rules more effectively: instead of numerous rule evaluations in different spaces, one could conduct such an evaluation only once, finding the most general set of non-dominated rules, which contains the other optimal or Pareto-optimal rules. In the above context, the general aim of this work has been formulated as: analysis of properties of and relationships between popular rule attractiveness measures, and the proposition of a multicriteria rule evaluation space in which the set of non-dominated rules will contain all optimal rules with respect to any attractiveness measure with the property M. To attain this aim, the following detailed tasks should be completed:

1. Analysis of the rule support, rule anti-support, confidence, rule interest function, gain, dependency factor, f and s attractiveness measures with respect to the property M, the property of confirmation and the property of hypothesis symmetry.
2. Analysis of relationships between the considered interestingness measures, and analysis of enclosure relationships between the sets of non-dominated rules in different evaluation spaces.
3. Proposition of a multicriteria evaluation space in which the set of non-dominated rules will contain all optimal rules with respect to any attractiveness measure with the property M.
4. Determining the area of rules with a desirable value of a confirmation measure in the proposed multicriteria evaluation space.
5. Extension of an apriori-like algorithm for the generation of rules with respect to attractiveness measures possessing valuable properties, and presentation of applications of the results to the analysis of rules induced from exemplary datasets.

The plan of this article follows the above tasks. In particular, in Section 2 preliminaries on rules and their basic quantitative description, as well as the definitions of the considered properties of attractiveness measures, are presented. Section 3 is devoted to analysing whether the considered interestingness measures possess the property M, the property of confirmation and the property of hypothesis symmetry. Section 4 describes different multicriteria evaluation spaces and discusses their advantages and disadvantages. Section 5 presents our proposition of the support–anti-support evaluation space, for which the set of non-dominated rules contains rules that are optimal with respect to any attractiveness measure that has the property M. Next, Section 6 presents an association mining system developed to show applications of the results on exemplary datasets. Finally, Section 7 summarises the article with a discussion of the completed work and possible lines of further investigation.
2 Basic Quantitative Rule Description
The discovery of knowledge from data is done by induction. It is a process of creating patterns which are true in the world of the analyzed data. However, it is worth mentioning, as Karl Popper [74] did, that one cannot prove the correctness of generalizations of specific observations or analogies to known facts, but one can refute them. In this article we consider discovering knowledge represented in the form of rules. The starting point for such rule induction (mining) is a sample of a larger reality, often represented in the form of a data table. Formally, a data table is a pair

S = (U, A)    (1)

where U is a nonempty finite set of objects (items) called the universe, and A is a nonempty finite set of attributes [66], [68], [70]. For every attribute a ∈ A let us denote by Va the domain of a. By a(x) we will denote the value of attribute a ∈ A for an object x ∈ U. There are many scales of attributes describing objects. The classical hierarchy of attribute scales is the following [88], [5], [57], [13], [87]:

– nominal,
– ordinal,
– interval,
– ratio.
The nominal scale can be regarded as names assigned to objects as labels. The domain of attributes with the nominal scale is an unordered set of attribute
values and, therefore, the only comparisons that can be performed between two such values are equality and inequality. Relations such as "less than" or "greater than" and operations such as addition or subtraction are inapplicable for such attributes. For practical data processing, values of attributes with the nominal scale can take the form of numerals, but in that case their numerical value is irrelevant. Examples of attributes with the nominal scale include: the marital status of a person, the make of a car, religious or political-party affiliation, birthplace. The domain of attributes with the ordinal scale is an ordered set of attribute values. The values assigned to objects represent the rank order (1st, 2nd, 3rd etc.) of the objects. In addition to equality/inequality, one can also perform "less than" or "greater than" comparisons on ordinal attribute values. Nevertheless, conventional addition and subtraction remain meaningless. Examples of ordinal scales include the results of a horse race, which only express which horse arrived first, second, etc., or school grades. The domain of attributes with the interval scale is defined over a numerical scale, in such a way that differences between arbitrary pairs of values can be meaningfully compared. Therefore, equal differences between interval attribute values represent equal intervals, and operations of addition or subtraction on interval attribute values are meaningful. Moreover, obviously, all the relational comparisons valid for nominal or ordinal attributes can also be performed. Operations such as multiplication or division, however, cannot be carried out, as there exists an arbitrary zero point on interval value scales, such as 0 degrees Celsius or Fahrenheit on the temperature scale. Therefore, one cannot say that one interval scale attribute value is, e.g., double another one. This limitation of interval scales is well illustrated by a popular joke: "If today we measure 0 degrees Celsius and tomorrow is twice as cold, what temperature do we measure tomorrow?" Among examples of attributes with the interval scale one can also mention year dates in many calendars. The domain of attributes with the ratio scale is defined over a numerical scale, such that ratios between arbitrary pairs of values are meaningful. Thus, on ratio scales operations of multiplication or division can be performed, as well as all the operations and comparisons valid for interval scales. The zero value on a ratio scale is non-arbitrary, as in, e.g., the Kelvin temperature scale, which is proportional to heat content, and zero on that scale really means there is zero heat (zero is absolute). Therefore, one can multiply and divide meaningfully on ratio scales, e.g. removing half the heat content of the air would cause the Kelvin thermometer to register half the temperature value. Examples of attributes with the ratio scale also include many physical quantities such as mass or length. Social ratio scales include age, the number of class attendances in a particular time, etc. Apart from the above mentioned attributes, there also exists a type of attributes called criteria, for which the domains are preference-ordered (e.g. from the least wanted to the most wanted). Among scales of criteria one can distinguish ordinal, interval or ratio scales [82]. One can also have a partial order on the value set of an attribute, e.g. representing a group of attributes.
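As a small illustration (a sketch of ours, not from the article; the temperature order follows the values used in Table 1 below), an ordinal attribute carries an explicit order of its values, so rank comparisons become meaningful, while a nominal attribute admits equality tests only:

```python
# Ordinal scale: the domain is ordered, so "less than" is well defined.
TEMPERATURE_ORDER = ["cold", "mild", "hot"]

def ordinal_less(u, v, order=TEMPERATURE_ORDER):
    # Compare two ordinal values by their rank in the declared order.
    return order.index(u) < order.index(v)

print(ordinal_less("cold", "hot"))   # True: ranks can be compared
# Nominal scale: only equality/inequality is meaningful.
print("sunny" == "overcast")         # False: no order between nominal values
```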
Some authors also distinguish other attribute types, e.g. structural, for which the domain values are characterized by a taxonomy [33], [54].

Association and decision rules. A rule induced from a data table S is denoted by φ → ψ (read as "if φ, then ψ"), where φ and ψ are built up from elementary conditions using the logical operator ∧ (and). The elementary conditions of a rule are defined as (a(x) rel v), where rel is a relational operator from the set {=, ≤, ≥} and v is a constant belonging to Va. The antecedent φ of a rule is also referred to as the premise or condition. The consequent ψ of a rule is also called the conclusion, decision or hypothesis. Therefore, a rule can be seen as a consequence relation (see the critical discussion in [28], [94] about the interpretation of rules as material implications) between premise and conclusion. The attributes that appear in elementary conditions of the premise (conclusion, resp.) are called condition attributes (decision attributes, resp.). Obviously, within one rule, the sets of condition and decision attributes must be disjoint. The rules induced (mined) from data may be either decision or association rules, depending on whether the division of A into condition and decision categories of attributes has been fixed or not. One of the classical examples of a data table used in the literature to illustrate algorithms of rule induction concerns playing golf (see Table 1) and was originally introduced by Quinlan [75], [76]. The dataset uses weather information to decide whether or not to play golf. It contains 14 objects (items) described by four attributes concerning the weather state: outlook (with nominal values sunny, overcast or rain), temperature (with ordinal values hot, mild or cold), humidity (with ordinal values high or normal) and windy (with nominal values true or false). Moreover, there is also a decision attribute play? with nominal values yes or no.

Table 1. A data table describing the influence of the weather conditions on the decision whether or not to play golf

outlook   temperature  humidity  windy  play?
sunny     hot          high      false  no
sunny     hot          high      true   no
overcast  hot          high      false  yes
rain      mild         high      false  yes
rain      cold         normal    false  yes
rain      cold         normal    true   no
overcast  cold         normal    true   yes
sunny     mild         high      false  no
sunny     cold         normal    false  yes
rain      mild         normal    false  yes
sunny     mild         normal    true   yes
overcast  mild         high      true   yes
overcast  hot          normal    false  yes
rain      mild         high      true   no
Exemplary decision rules induced from this dataset could be the following:

– if outlook=overcast then play=yes,
– if outlook=sunny and humidity=normal then play=yes,
– if outlook=sunny and humidity=high then play=no,
– if outlook=rain and windy=true then play=no,
– if outlook=rain and windy=false then play=yes.
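To ground these rules, here is a minimal sketch (ours, not from the article; the data structure and helper names are our own) that encodes Table 1 as a list of attribute-value mappings and counts the objects supporting the first rule above:

```python
# Table 1 encoded as attribute-value mappings, one per object.
GOLF = [
    {"outlook": "sunny",    "temperature": "hot",  "humidity": "high",   "windy": "false", "play": "no"},
    {"outlook": "sunny",    "temperature": "hot",  "humidity": "high",   "windy": "true",  "play": "no"},
    {"outlook": "overcast", "temperature": "hot",  "humidity": "high",   "windy": "false", "play": "yes"},
    {"outlook": "rain",     "temperature": "mild", "humidity": "high",   "windy": "false", "play": "yes"},
    {"outlook": "rain",     "temperature": "cold", "humidity": "normal", "windy": "false", "play": "yes"},
    {"outlook": "rain",     "temperature": "cold", "humidity": "normal", "windy": "true",  "play": "no"},
    {"outlook": "overcast", "temperature": "cold", "humidity": "normal", "windy": "true",  "play": "yes"},
    {"outlook": "sunny",    "temperature": "mild", "humidity": "high",   "windy": "false", "play": "no"},
    {"outlook": "sunny",    "temperature": "cold", "humidity": "normal", "windy": "false", "play": "yes"},
    {"outlook": "rain",     "temperature": "mild", "humidity": "normal", "windy": "false", "play": "yes"},
    {"outlook": "sunny",    "temperature": "mild", "humidity": "normal", "windy": "true",  "play": "yes"},
    {"outlook": "overcast", "temperature": "mild", "humidity": "high",   "windy": "true",  "play": "yes"},
    {"outlook": "overcast", "temperature": "hot",  "humidity": "normal", "windy": "false", "play": "yes"},
    {"outlook": "rain",     "temperature": "mild", "humidity": "high",   "windy": "true",  "play": "no"},
]

def satisfies(obj, formula):
    """True iff the object matches every attribute=value pair of the formula."""
    return all(obj[attr] == val for attr, val in formula.items())

# Rule r1: if outlook=overcast then play=yes
premise, conclusion = {"outlook": "overcast"}, {"play": "yes"}
supporting = [x for x in GOLF if satisfies(x, premise) and satisfies(x, conclusion)]
print(len(supporting))  # 4 objects support r1
```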
Such rules could have a descriptive function helping to describe under what weather conditions people are willing to play golf, or a predictive function helping to forecast whether people will tend to play golf if certain weather conditions occur.

2.1 Attractiveness Measures for Decision and Association Rules
The number of rules induced from massive datasets is usually so large that it overwhelms human comprehension capabilities and, moreover, the vast majority of them have very little value in practice. Thus, in order to increase the relevance and utility of the selected rules and limit the size of the resulting rule set, quantitative measures, also known as attractiveness or interestingness measures, have been proposed and widely studied in the literature [1], [45], [50], [91]. The variety of proposed measures comes as a result of looking for means of reflecting particular characteristics of rules or sets of rules. Most of the attractiveness measures are gain-type criteria, which means that the higher the values they obtain, the greater the utility and interestingness of the evaluated rule. However, in the literature there are also measures which are considered cost-type criteria, i.e. the smaller the value of the measure for a given rule, the more attractive the rule is. Below are the definitions of the attractiveness measures that are analyzed in this article.

Rule support. One of the most popular measures used to identify frequently occurring rules in sets of items from a data table S is the support, considered by Jan Łukasiewicz in [51] and later used by data miners, e.g. in [2]. The support of condition φ (analogously, ψ), denoted as supS(φ) (analogously, supS(ψ)), is equal to the number of objects in U satisfying φ (analogously, ψ). The support of rule φ → ψ (also simply referred to as support), denoted as supS(φ → ψ), is the number of objects in U satisfying both φ and ψ. Thus, it corresponds to statistical significance [37]. The domain of the measure of support can cover any natural number. The greater the value of support for a given rule, the more desirable the rule is; thus, support is a gain-type criterion.

Example 2.1. Let Table 1 represent a data table S. Let us consider two rules induced from Table 1:

– r1: if outlook=overcast then play=yes,
– r2: if outlook=sunny and humidity=normal then play=yes.

On the basis of Table 1 we can calculate that supS(r1) = 4, as there are four objects supporting r1 (i.e. objects with the "overcast" value of the attribute outlook
and at the same time with the "yes" value of the decision attribute). In the case of the second rule: supS(r2) = 2. Thus, rule r1 is more interesting (attractive) than r2 in the sense of support. Some authors define support as a relative value with respect to the number of all objects in the dataset U. Then, the rule support can be interpreted as the percentage of objects in the dataset satisfying both the premise and conclusion of the rule. Throughout this article we will only consider the former definition of support.

Rule anti-support. The anti-support of a rule φ → ψ (also simply referred to as anti-support), denoted as anti-supS(φ → ψ), is equal to the number of objects in U having property φ but not having property ψ. Thus, anti-support is the number of counter-examples in the data table S, i.e. objects for which the premise φ evaluates to true but whose conclusion is different from ψ. Note that anti-support can also be regarded as supS(φ → ¬ψ). Similarly to support, the anti-support measure can obtain any natural value. However, its optimal value is 0. Any value greater than zero means that the considered rule is not certain, i.e. there are some counter-examples for that rule. The fewer counter-examples we observe in the dataset, the better, and therefore anti-support is considered a cost-type criterion.

Example 2.2. Let Table 1 represent a data table S. Let us consider two rules induced from Table 1:

– r1: if outlook=overcast then play=yes,
– r2: if humidity=high then play=no.

On the basis of Table 1 we can observe that there are no counter-examples for r1 (there are no objects in Table 1 for which outlook = overcast and play = no), and thus anti-supS(r1) = 0. Rule r2, however, is not pure, as there are three counter-examples, which means that anti-supS(r2) = 3. Thus, from the viewpoint of anti-support, r1 is more attractive than r2. Similarly to support, one can also define anti-support as a relative value with respect to the number of all objects in the dataset U. Then, the rule anti-support can be considered the percentage of counter-examples in the dataset. Again, throughout this work we will only focus on the former definition of anti-support.

Confidence. Among the measures very commonly associated with rules induced from a data table S, there is also confidence [51], [2]. The confidence of a rule (also called certainty), denoted as confS(φ → ψ), is defined as follows:

confS(φ → ψ) = supS(φ → ψ) / supS(φ).    (2)

Obviously, when considering a rule φ → ψ, it is necessary to assume that the set of objects having property φ in U is not empty, i.e. supS(φ) ≠ 0.
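The three measures defined so far can be computed directly from such a dataset; the following sketch (function and parameter names are ours) assumes objects and a `satisfies(obj, formula)` predicate like those in the Table 1 sketch above:

```python
def support(dataset, premise, conclusion, satisfies):
    # Number of objects satisfying both the premise and the conclusion.
    return sum(1 for x in dataset
               if satisfies(x, premise) and satisfies(x, conclusion))

def anti_support(dataset, premise, conclusion, satisfies):
    # Number of counter-examples: the premise holds, the conclusion does not.
    return sum(1 for x in dataset
               if satisfies(x, premise) and not satisfies(x, conclusion))

def confidence(dataset, premise, conclusion, satisfies):
    sup_rule = support(dataset, premise, conclusion, satisfies)
    sup_premise = sup_rule + anti_support(dataset, premise, conclusion, satisfies)
    if sup_premise == 0:
        raise ValueError("sup(phi) must be non-zero")  # assumption from (2)
    return sup_rule / sup_premise
```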
Under the "closed world assumption" [77] (which is the presumption that what is not currently known to be true is false) adopted in rule induction, and because U is a finite set, it is legitimate to express the probabilities Pr(φ) and Pr(ψ) in terms of the frequencies supS(φ)/|U| and supS(ψ)/|U|, respectively. In consequence, the confidence measure confS(φ → ψ) can be regarded as the conditional probability Pr(ψ|φ) = Pr(φ ∧ ψ)/Pr(φ) with which the conclusion ψ evaluates to true, provided that the premise φ evaluates to true. Moreover, let us point out the relationship between confidence and another attractiveness measure called coverage, denoted as covS(φ → ψ) and defined for a data table S in the following manner:

covS(φ → ψ) = supS(ψ → φ) / supS(ψ).    (3)
Since supS(φ → ψ) = supS(ψ → φ), as they both express the number of objects satisfying both φ and ψ, it is clear that the confidence of a rule φ → ψ can be regarded as the coverage of the rule ψ → φ. Confidence takes any value between 0 and 1. It is a gain-type criterion, and thus the most desirable value is 1, which reflects the situation in which all objects that satisfy the premise also support the whole rule (i.e. both the premise and conclusion). Let us note that confidence is equal to 1 only when the anti-support is 0.

Example 2.3. Let Table 1 represent a data table S. Let us consider two rules induced from Table 1:

– r1: if outlook=overcast then play=yes,
– r2: if humidity=high then play=no.

Since there are no counter-examples for rule r1 (anti-supS(r1) = 0), confS(r1) = 4/4 = 1. For rule r2, however, there are 3 counter-examples, which implies that the confidence of this rule will not be 1. There are 7 objects supporting r2's premise, but only four of them support the whole rule (supS(r2) = 4). Thus, confS(r2) = 4/7. It is, therefore, clear that in the considered S, r1 is better than r2 with respect to confidence.

Rule interest function. The rule interest function RI, introduced by Piatetsky-Shapiro in [72], is used to quantify the correlation between the premise and conclusion in the data table S. It is defined by the following formula:

RIS(φ → ψ) = supS(φ → ψ) − supS(φ) supS(ψ) / |U|.    (4)
For a rule φ → ψ, when RI = 0, then φ and ψ are statistically independent (i.e. the occurrence of the premise makes the conclusion neither more nor less probable) and thus such a rule should be considered uninteresting. When RI > 0 (RI < 0), there is a positive (negative) correlation between φ and ψ [37]. Obviously, RI is a gain-type criterion, as greater values of RI reflect a stronger trend towards positive correlation.
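A direct transcription of formula (4) (a sketch; the argument names are ours), evaluated on the two rules of Example 2.1:

```python
def rule_interest(sup_rule, sup_premise, sup_conclusion, n_objects):
    # RI of formula (4): rule support corrected by the independence expectation.
    return sup_rule - sup_premise * sup_conclusion / n_objects

print(rule_interest(4, 4, 9, 14))  # r1 of Example 2.1: about 1.43 (RI > 0)
print(rule_interest(2, 2, 9, 14))  # r2 of Example 2.1: about 0.71 (RI > 0)
```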
After a simple algebraic transformation, RI can also be expressed as:

RIS(φ → ψ) = [supS(φ → ψ) supS(¬φ → ¬ψ) − supS(¬φ → ψ) supS(φ → ¬ψ)] / |U|.    (5)

Now, one can also analyse RI's sign (interpreted as positive or negative correlation) by verifying the sign of the above numerator, i.e. the sign of the difference: supS(φ → ψ) supS(¬φ → ¬ψ) − supS(¬φ → ψ) supS(φ → ¬ψ).

Example 2.4. Let Table 1 represent a data table S. Let us consider two rules induced from Table 1:

– r1: if outlook=overcast then play=yes,
– r2: if outlook=sunny and humidity=normal then play=yes.

From Table 1 we obtain: RIS(r1) = 4 − (4 ∗ 9/14) ≈ 1.43 and RIS(r2) = 2 − (2 ∗ 9/14) ≈ 0.71. These results show that in both of the considered rules the premises are positively correlated with the conclusions; however, the correlation in r1 is stronger.

Gain function. For a data table S, the gain function of Fukuda et al. [25] is defined in the following manner:

gainS(φ → ψ) = supS(φ → ψ) − Θ supS(φ)    (6)
where Θ is a fractional constant between 0 and 1. Note that, for the fixed value Θ = supS(ψ)/|U|, the gain measure becomes identical to the rule interest function RI presented above. Moreover, if Θ is zero, then gain boils down to the calculation of the support of the rule, and when Θ is equal to 1, gain will take negative values unless all objects satisfying φ also satisfy ψ (in that case gain will be 0). Thus, gain can take any real value, depending on what value Θ is set at. For a fixed Θ, greater values of gain are more desirable; thus, it is a gain-type criterion.

Example 2.5. Let Table 1 represent a data table S. We consider two rules induced from Table 1:

– r1: if outlook=overcast then play=yes,
– r2: if humidity=high then play=no.

Let us assume Θ = 0.5 (such a value means that the value of supS(φ → ψ) is twice as important to us as the value of supS(φ)). Then, from Table 1 we obtain: gainS(r1) = 4 − 0.5 ∗ 4 = 2 and gainS(r2) = 4 − 0.5 ∗ 7 = 0.5. In this example, both of the considered rules had the same value of supS(φ → ψ); however, for r2 there were also some counter-examples. The existence of counter-examples causes the difference between the value of supS(φ → ψ) and supS(φ), which directly influences the value of the gain measure. In this example, for the same value of Θ, gainS(r1) > gainS(r2); thus we can conclude that r1 is a more interesting rule with respect to gain.
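A sketch of formula (6) (names ours), reproducing Example 2.5 and the boundary behaviour noted above (Θ = supS(ψ)/|U| recovers RI, Θ = 0 recovers support):

```python
def gain(sup_rule, sup_premise, theta):
    # Gain of formula (6); theta is a fractional constant in [0, 1].
    return sup_rule - theta * sup_premise

print(gain(4, 4, 0.5))     # r1 of Example 2.5: 2.0
print(gain(4, 7, 0.5))     # r2 of Example 2.5: 0.5
print(gain(4, 4, 9 / 14))  # theta = sup(psi)/|U| recovers RI for r1: ~1.43
print(gain(4, 4, 0.0))     # theta = 0 boils down to rule support: 4.0
```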
Dependency factor. For a data table S, the dependency factor of Pawlak [69] (also considered by Popper [74]) is defined in the following manner:

ηS(φ → ψ) = [supS(φ → ψ)/supS(φ) − supS(ψ)/|U|] / [supS(φ → ψ)/supS(φ) + supS(ψ)/|U|].    (7)
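Formula (7) compares the rule's confidence with the overall frequency of the conclusion; a sketch (names ours):

```python
def dependency_factor(sup_rule, sup_premise, sup_conclusion, n_objects):
    conf = sup_rule / sup_premise        # sup(phi -> psi) / sup(phi)
    freq = sup_conclusion / n_objects    # sup(psi) / |U|
    return (conf - freq) / (conf + freq)

print(dependency_factor(4, 4, 9, 14))  # rule r1 of Example 2.6 below: ~0.217
```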
The dependency factor expresses a degree of dependency and can be seen as a counterpart of the correlation coefficient used in statistics. When φ and ψ are independent of each other, then ηS(φ → ψ) = 0. If −1 < ηS(φ → ψ) < 0, then φ and ψ are negatively dependent (i.e. the occurrence of φ decreases the probability of ψ), and if 0 < ηS(φ → ψ) < 1, then φ and ψ are positively dependent on each other (i.e. the occurrence of φ increases the probability of ψ). The dependency factor is a gain-type criterion.

Example 2.6. Let Table 1 represent a data table S. Let us consider two rules induced from Table 1:

– r1: if outlook=overcast then play=yes,
– r2: if outlook=sunny and humidity=normal then play=yes.

From Table 1 we obtain:

ηS(r1) = (4/4 − 9/14) / (4/4 + 9/14) = 5/23 ≈ 0.217 and ηS(r2) = (2/2 − 9/14) / (2/2 + 9/14) = 5/23 ≈ 0.217.
The results show that, from the viewpoint of the dependency factor, both of the considered rules are of equal attractiveness. This is due to the fact that they have the same conclusion and do not have any counter-examples. The positive value of η reflects the positive correlation between the premises of r1 and r2 and the conclusion.

Measures f and s. Among the best-known and widely studied confirmation measures (see the definition under Property of Bayesian Confirmation in Section 2.2), there are the measures denoted by f and s, defined as follows (for a data table S):

fS(φ → ψ) = [Pr(φ|ψ) − Pr(φ|¬ψ)] / [Pr(φ|ψ) + Pr(φ|¬ψ)],    (8)

sS(φ → ψ) = Pr(ψ|φ) − Pr(ψ|¬φ).    (9)

Taking into account that the conditional probability Pr(ψ|φ) equals confS(φ → ψ), measures f and s can be re-written as:

fS(φ → ψ) = [confS(ψ → φ) − confS(¬ψ → φ)] / [confS(ψ → φ) + confS(¬ψ → φ)],    (10)

sS(φ → ψ) = confS(φ → ψ) − confS(¬φ → ψ).    (11)
Among authors advocating for measure f , there are Kemeny and Oppenheim [46], Good [27], Heckerman [35], Horvitz and Heckerman [39], Pearl [71],
Schum [80] and Fitelson [22]. Measure s has been proposed by Christensen [11] and Joyce [42]. It is worth noting that the confirmation measure f is monotone (and therefore gives the same ranking) with respect to the Bayes factor, originally proposed by Jeffrey [41] and reconsidered as an interestingness measure by Kamber and Shingal [43]. The Bayes factor is defined by the following formula:

kS(φ → ψ) = confS(ψ → φ) / confS(¬ψ → φ).
Measures f and s are regarded as gain-type measures quantifying the degree to which the premise φ provides "support for or against" the conclusion ψ. Thus, they obtain positive values (precisely, in ]0, 1]) iff the premise φ confirms the conclusion ψ, i.e. iff Pr(ψ|φ) > Pr(ψ). Measures f and s take negative values (precisely, in [−1, 0[) when the premise φ disconfirms the conclusion ψ, i.e. iff Pr(ψ|φ) < Pr(ψ). In the literature, these measures are regarded as a powerful tool for analyzing the confirmation of a conclusion by a rule's premise.

Example 2.7. Let Table 1 represent a data table S. Let us consider two rules induced from Table 1:

– r1: if outlook=overcast then play=yes,
– r2: if humidity=high then play=yes.

On the basis of Table 1 we can calculate that:

fS(r1) = (4/9 − 0/5) / (4/9 + 0/5) = 1 and fS(r2) = (3/9 − 4/5) / (3/9 + 4/5) = −21/51 ≈ −0.41,

sS(r1) = 4/4 − 5/10 = 1/2 and sS(r2) = 3/7 − 6/7 = −3/7.

For the first rule there are no counter-examples (the rule is certain), which means that the premise confirms the conclusion. This fact is reflected by the positive values of measures f and s. In the case of r2, one can observe in Table 1 even more counter-examples than examples actually supporting the rule (supS(φ → ¬ψ) > supS(φ → ψ)). Thus, in r2 the premise disconfirms the conclusion, which is expressed by the negative values of measures f and s.
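The computations of Example 2.7 can be reproduced with a sketch of f and s written directly in terms of the four supports (the counts a, b, c, d anticipate notation (16) introduced in Section 3; the function names are ours):

```python
def measure_f(a, b, c, d):
    # f of formula (10): Pr(phi|psi) = a/(a+b), Pr(phi|not psi) = c/(c+d).
    pos, neg = a / (a + b), c / (c + d)
    return (pos - neg) / (pos + neg)

def measure_s(a, b, c, d):
    # s of formula (11): Pr(psi|phi) - Pr(psi|not phi).
    return a / (a + c) - b / (b + d)

print(measure_f(4, 5, 0, 5), measure_s(4, 5, 0, 5))  # r1: 1.0, 0.5
print(measure_f(3, 6, 4, 1), measure_s(3, 6, 4, 1))  # r2: about -0.41, -3/7
```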
2.2 Desirable Properties of Attractiveness Measures
While choosing attractiveness measures for a certain application, one also considers their properties (features), which express the user's expectations towards the behavior of measures in particular situations. Those expectations can be of various types, e.g. one can desire to use only such measures that do not move further away from (or even come closer to) their optimal value for a given induced rule when the number of objects supporting the pattern increases. Properties group the attractiveness measures according to similarities in their characteristics. Using measures which satisfy the desirable properties, one can avoid considering unimportant rules. Therefore, knowledge of which commonly used interestingness measures satisfy certain valuable properties is of high practical and theoretical importance.
Property M. Greco, Pawlak and Słowiński in [28] analyzed the measurement of the attractiveness of rules. They proposed a valuable property M of monotonic dependency of an attractiveness measure on the number of objects satisfying or not the premise or the conclusion of a rule. Property M makes use of elementary parameters of the considered dataset (numbers of objects satisfying some properties) and therefore is an easy and intuitive criterion helping to choose an appropriate attractiveness measure for a certain application [28], [8]. Formally, an attractiveness measure

IS(φ → ψ) = F[supS(φ → ψ), supS(¬φ → ψ), supS(φ → ¬ψ), supS(¬φ → ¬ψ)]    (12)

being a gain-type criterion (i.e. the higher the value of the measure, the better) has the property M iff it is a function:

– non-decreasing with respect to supS(φ → ψ), and
– non-increasing with respect to supS(¬φ → ψ), and
– non-increasing with respect to supS(φ → ¬ψ), and
– non-decreasing with respect to supS(¬φ → ¬ψ).
Respectively, an attractiveness measure

IS(φ → ψ) = F[supS(φ → ψ), supS(¬φ → ψ), supS(φ → ¬ψ), supS(¬φ → ¬ψ)]    (13)

being a cost-type criterion (i.e. the lower the value of the measure, the better) has the property M iff it is a function:

– non-increasing with respect to supS(φ → ψ), and
– non-decreasing with respect to supS(¬φ → ψ), and
– non-decreasing with respect to supS(φ → ¬ψ), and
– non-increasing with respect to supS(¬φ → ¬ψ).
Most of the considered attractiveness measures are gain-type criteria and, therefore, below we present the interpretation of property M only for this type of measure (for cost-type criteria, the considerations are analogous). The property M with respect to supS(φ → ψ) (or, analogously, with respect to supS(¬φ → ¬ψ)) means that any object in the dataset for which φ and ψ (or, analogously, neither φ nor ψ) hold together increases (or at least does not decrease) the attractiveness of the rule φ → ψ. On the other hand, the property M with respect to supS(¬φ → ψ) (or, analogously, with respect to supS(φ → ¬ψ)) means that any object for which φ does not hold and ψ holds (or, analogously, φ holds and ψ does not hold) decreases (or at least does not increase) the attractiveness of the rule φ → ψ. Let us use the following example, mentioned by Hempel [36], to show the interpretation of the property. Consider a rule φ → ψ: if x is a raven then x is black. In this case φ stands for being a raven and ψ stands for being black. If an attractiveness measure IS(φ → ψ) (being a gain-type criterion) possesses the property M, then:
– the more black ravens or non-black non-ravens there are in the data table S, the more attractive the rule becomes, and thus IS(φ → ψ) obtains a greater value,
– the more black non-ravens or non-black ravens there are in the data table S, the less attractive the rule becomes, and thus the value of IS(φ → ψ) becomes smaller.

Greco, Pawlak and Słowiński [28] have considered attractiveness measures with respect to property M. The results they obtained show that measures f and l [46], [27], [35], [39], as well as s [11], [42], possess the property M, while measures d [17], [18], [41], [78], r [38], [44], [52], [79], [73], and b [10] do not.

Property of Bayesian Confirmation. Formally, an attractiveness measure cS(φ → ψ) has the property of Bayesian confirmation (or simply confirmation) iff it satisfies the following conditions:

cS(φ → ψ) > 0 if Pr(ψ|φ) > Pr(ψ),
cS(φ → ψ) = 0 if Pr(ψ|φ) = Pr(ψ),    (14)
cS(φ → ψ) < 0 if Pr(ψ|φ) < Pr(ψ).

Since the conditional probability Pr(ψ|φ) = Pr(φ ∧ ψ)/Pr(φ) can be regarded as the confidence measure confS(φ → ψ), the above definition can be re-written as:

cS(φ → ψ) > 0 if confS(φ → ψ) > supS(ψ)/|U|,
cS(φ → ψ) = 0 if confS(φ → ψ) = supS(ψ)/|U|,    (15)
cS(φ → ψ) < 0 if confS(φ → ψ) < supS(ψ)/|U|.

Measures that possess the property of confirmation are referred to as confirmation measures or measures of confirmation. According to Fitelson [22], measures of confirmation quantify the degree to which a premise φ provides "support for or against" a conclusion ψ. When their values are greater than zero, it means that the conclusion is satisfied more frequently when the premise is satisfied than generally in the whole dataset. A confirmation measure equal to zero reflects that fulfilment of the premise imposes no influence on fulfilment of the conclusion. Analogously, when the value of a confirmation measure is smaller than zero, it means that the premise disconfirms the conclusion, as the conclusion is satisfied less frequently when the premise is satisfied than generally in the whole dataset. Thus, for a given rule φ → ψ, attractiveness measures with the property of confirmation express the credibility of the following proposition: ψ is satisfied more frequently when φ is satisfied than when φ is not satisfied. This interpretation stresses the very valuable semantics of the property of confirmation. By using attractiveness measures that possess this property, one can filter out rules which are misleading and disconfirming, and in this way limit the set of induced rules only to those that are meaningful. Among the commonly used and discussed Bayesian confirmation measures there are the following: f and l [46], [27], [35], [39], [22], s [11], [42], d [17], [18], [41], [78], r [38], [44], [52], [79], [73], and b [10].
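Condition (14) can be verified numerically for a concrete measure. The following check (ours, not from the article) confirms with exact rational arithmetic that measure s always carries the sign of Pr(ψ|φ) − Pr(ψ) on a grid of small contingency tables, writing a = supS(φ → ψ), b = supS(¬φ → ψ), c = supS(φ → ¬ψ), d = supS(¬φ → ¬ψ):

```python
from fractions import Fraction as F
import itertools

def sign(x):
    return (x > 0) - (x < 0)

for a, b, c, d in itertools.product(range(1, 6), repeat=4):
    n = a + b + c + d
    s = F(a, a + c) - F(b, b + d)        # measure s on the table (a, b, c, d)
    delta = F(a, a + c) - F(a + b, n)    # Pr(psi|phi) - Pr(psi)
    assert sign(s) == sign(delta)        # condition (14) holds for s
print("measure s satisfies the confirmation conditions on all sampled tables")
```

Both quantities reduce to the sign of ad − bc, which is why the assertion never fails.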
Property of Hypothesis Symmetry. Many authors have also considered properties of symmetry of attractiveness measures. Eells and Fitelson have analyzed in [17] a set of best-known confirmation measures from the viewpoint of the following four properties of symmetry introduced by Carnap in [10]:

1. evidence symmetry (ES): IS(φ → ψ) = −IS(¬φ → ψ),
2. commutativity symmetry (CS): IS(φ → ψ) = IS(ψ → φ),
3. hypothesis symmetry (HS): IS(φ → ψ) = −IS(φ → ¬ψ),
4. total symmetry (TS): IS(φ → ψ) = −IS(¬φ → ¬ψ).
Eells and Fitelson remark in [17] that, given CS, ES and HS are equivalent, i.e. provided that IS(φ → ψ) = IS(ψ → φ), we have −IS(¬φ → ψ) = −IS(φ → ¬ψ). Moreover, they show that TS follows from the conjunction of ES and HS. They also conclude that, in fact, only HS is a desirable property, while ES, CS and TS are not. The meaning behind the hypothesis symmetry is that the influence of the premise on the conclusion part of a rule should be of the opposite sign to the influence of the premise on a negated conclusion. The arguments against ES, CS and TS can be presented by an exemplary situation of randomly drawing a card from a standard deck ([17], [28]). Let φ stand for the evidence that the drawn card is the seven of spades, and let ψ be the hypothesis that the card is black. Despite the strong confirmation of ψ by φ, the negated premise is useless to the conclusion, as the evidence that the card is not the seven of spades (¬φ) is practically of no value to the conclusion that the card is black (ψ). Thus, ES is not valid. Continuing this example, one can observe that the evidence that the card is black (ψ) does not confirm the hypothesis that the card is the seven of spades (φ) to the same extent as the evidence that the card is the seven of spades (φ) confirms the hypothesis that the card is black (ψ). This means that CS is not valid. Analogously, arguments against TS can be shown. The above example is also an argument for the hypothesis symmetry as, obviously, the evidence that the card is the seven of spades (φ) is negatively conclusive for the hypothesis that the card is not black (¬ψ). Having considered popular confirmation measures with respect to the symmetry properties, Fitelson [22] concluded that measures f and l [46], [27], [35], [39], as well as s [11], [42] and d [17], [18], [41], [78], satisfy the property of hypothesis symmetry.
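Hypothesis symmetry can likewise be checked mechanically. Writing the four supports as a = supS(φ → ψ), b = supS(¬φ → ψ), c = supS(φ → ¬ψ), d = supS(¬φ → ¬ψ) (the notation adopted in Section 3), negating the conclusion swaps a with c and b with d, and the sketch below (ours, not from the article) confirms that measure s then flips its sign:

```python
from fractions import Fraction as F
import itertools

def measure_s(a, b, c, d):
    return F(a, a + c) - F(b, b + d)     # Pr(psi|phi) - Pr(psi|not phi)

for a, b, c, d in itertools.product(range(1, 5), repeat=4):
    # The rule phi -> not psi has the contingency table (c, d, a, b).
    assert measure_s(c, d, a, b) == -measure_s(a, b, c, d)
print("measure s satisfies hypothesis symmetry on all sampled tables")
```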
3 Analyses of Properties of Particular Attractiveness Measures
Analyses verifying whether popular attractiveness measures possess valuable properties widen our understanding of those measures and of their applicability. Moreover, through such property analysis one can also learn about relationships between different measures. The obtained results are useful for practical applications because they show which interestingness measures are relevant for meaningful rule evaluation. Using the measures which satisfy the desirable properties one can avoid analysing unimportant rules.
Many authors have considered different attractiveness measures with respect to several properties ([17], [22], [28]). However, the analysis of the property M, the property of confirmation and the property of hypothesis symmetry for many popular attractiveness measures still remains an open problem. In the following section we shall provide answers to some of those open questions. For the sake of clarity of presentation, the following notation shall be used throughout the next sections:

a = supS(φ → ψ),    b = supS(¬φ → ψ),
c = supS(φ → ¬ψ),   d = supS(¬φ → ¬ψ),
a + c = supS(φ),    a + b = supS(ψ),
b + d = supS(¬φ),   c + d = supS(¬ψ),
a + b + c + d = |U|.    (16)
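A sketch (names ours) of deriving the four counts of notation (16) from a dataset, given a rule and a predicate `satisfies(obj, formula)` such as the one sketched in Section 2:

```python
def contingency(dataset, premise, conclusion, satisfies):
    """Return (a, b, c, d) of notation (16) for the rule premise -> conclusion."""
    a = b = c = d = 0
    for x in dataset:
        p, q = satisfies(x, premise), satisfies(x, conclusion)
        if p and q:
            a += 1          # sup(phi -> psi)
        elif not p and q:
            b += 1          # sup(not phi -> psi)
        elif p and not q:
            c += 1          # sup(phi -> not psi)
        else:
            d += 1          # sup(not phi -> not psi)
    return a, b, c, d
```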
We also assume that the set U is not empty, so that at least one of a, b, c, d is strictly positive.

3.1 Analysis of Measures with Respect to Property M
In order to prove that for a data table S a gain-type measure IS(φ → ψ) has the property M, we need to show that it is non-decreasing with respect to a and d, and non-increasing with respect to b and c. It means that all of the following conditions must be satisfied:

1. the increase of a does not result in a decrease of the measure,
2. the increase of b does not result in an increase of the measure,
3. the increase of c does not result in an increase of the measure,    (17)
4. the increase of d does not result in a decrease of the measure.

In the case of a cost-type measure JS(φ → ψ), we will say that it possesses the property M iff the following conditions are fulfilled:

1. the increase of a does not result in an increase of the measure,
2. the increase of b does not result in a decrease of the measure,
3. the increase of c does not result in a decrease of the measure,    (18)
4. the increase of d does not result in an increase of the measure.

During the analysis of measures with respect to property M we consider an increase of only one parameter at a time, e.g. if a increases, then b, c and d remain unchanged. An increase of a by Δ > 0 (and analogously an increase of b, c or d) is a result of adding to U objects that satisfy both φ and ψ. It means that the data table S = (U, A) changes to S′ = (U′, A), where |U| = a + b + c + d and |U′| = (a + Δ) + b + c + d.

Rule support. According to the notation in (16), rule support supS(φ → ψ) is a. Thus, obviously, rule support, being a gain-type criterion, increases with a and does not change (i.e. neither decreases nor increases) with b, c, or d. Therefore, it is legitimate to conclude that the measure of rule support has the property M.

Rule anti-support. Anti-support is a cost-type criterion and therefore the conditions (18) need to be verified. Since anti-support can be regarded as the number of counter-examples, anti-supS(φ → ψ) = c. Thus, obviously, anti-supS(φ → ψ) increases with c and does not change with a, b, or d. Therefore, it can be concluded that anti-supS(φ → ψ) has the property M.
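The conditions (17) can also be checked numerically before proving them; the following sketch (ours, not from the article) verifies them exhaustively for confidence on a grid of small tables, anticipating Theorem 3.1 below:

```python
from fractions import Fraction as F
import itertools

def conf(a, b, c, d):
    return F(a, a + c)                       # confidence in notation (16)

for a, b, c, d in itertools.product(range(1, 5), repeat=4):
    base = conf(a, b, c, d)
    assert conf(a + 1, b, c, d) >= base      # condition (17).1
    assert conf(a, b + 1, c, d) == base      # condition (17).2: no b-dependence
    assert conf(a, b, c + 1, d) <= base      # condition (17).3
    assert conf(a, b, c, d + 1) == base      # condition (17).4: no d-dependence
print("confidence passes all the property M checks on the sampled tables")
```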
Confidence. Now, let us consider confidence with respect to the property M. Confidence is a gain-type criterion dependent only on supS(φ → ψ) and supS(φ); therefore, the analysis of confS(φ → ψ) with respect to the property M can practically be narrowed down to the analysis of its dependence on the number of objects supporting both the premise and conclusion, and on the number of objects satisfying the premise but not the conclusion.

Theorem 3.1. The confidence measure has the property M.

Proof. Let us consider confidence expressed in the notation (16):

confS(φ → ψ) = supS(φ → ψ) / supS(φ) = a / (a + c).    (19)
As confS(φ → ψ) depends on neither b nor d, it is clear that conditions (17).2 and (17).4 are satisfied. However, conditions (17).1 and (17).3 require verification.

Condition (17).1: Let us assume that the considered data table S = (U, A) is extended to S′ = (U′, A) by adding to S some objects satisfying both φ and ψ. This addition increases the value of a to a + Δ, where Δ > 0. Condition (17).1 will be satisfied if and only if

confS(φ → ψ) = a / (a + c) ≤ confS′(φ → ψ) = (a + Δ) / ((a + Δ) + c).

We can easily calculate that

a / (a + c) ≤ (a + Δ) / ((a + Δ) + c) ⇔ a(a + c + Δ) ≤ (a + c)(a + Δ) ⇔ a² + ac + aΔ ≤ a² + ac + aΔ + cΔ ⇔ cΔ ≥ 0.

Since c ≥ 0 and Δ > 0, the last inequality is always fulfilled, and therefore condition (17).1 is satisfied.

Condition (17).3: Let us consider confidence given as confS(φ → ψ) = a / (a + c). Since c appears only in the denominator of the confidence measure, an increase of c does not result in an increase of confidence, and therefore condition (17).3 is satisfied.

Since all four conditions are satisfied, the hypothesis that confidence has the property M is true.
Rule interest function. Let us now focus on the rule interest function and analyze it with respect to the property M. Such an analysis will require verification of RI's dependency on the change of a, b, c and d, i.e. all of the conditions (17).

Theorem 3.2. Rule interest function has the property M [29].

Proof. Let us observe that according to notation (16) measure RI can be rewritten as:

RIS(φ → ψ) = a − (a + b)(a + c)/(a + b + c + d).    (20)

After some simple algebraic transformations, we obtain

RIS(φ → ψ) = (ad − bc)/(a + b + c + d).    (21)

Taking into account condition (17).1, to prove the monotonicity of RI with respect to a we have to show that if a increases by Δ > 0, then RI does not decrease, i.e.

((a + Δ)d − bc)/(a + b + c + d + Δ) − (ad − bc)/(a + b + c + d) ≥ 0.

The increase of a is a result of adding to U objects that satisfy both φ and ψ, i.e. a result of extending the data table S = (U, A) to S′ = (U′, A). After a few simple algebraic passages, and remembering that a, b, c, d are non-negative, we get

((a + Δ)d − bc)/(a + b + c + d + Δ) − (ad − bc)/(a + b + c + d) = (d(b + c + d)Δ + bcΔ)/((a + b + c + d)(a + b + c + d + Δ)) ≥ 0,

and the difference is strictly positive whenever d(b + c + d) + bc > 0, so that we can conclude that RI is non-decreasing with respect to a. Analogous proofs hold for the monotonicity of RI with respect to b, c and d.

Gain function. We shall now consider the gain function with respect to property M. Similarly to confidence, the gain function is a gain-type criterion that only depends on supS(φ → ψ) and supS(φ). Thus, the analysis of the gain function with respect to the property M boils down to the analysis of its dependence on a and c.

Theorem 3.3. Gain function has the property M.

Proof. Let us consider the gain function expressed in notation (16):

gainS(φ → ψ) = a − Θ(a + c)    (22)
where Θ is a fractional constant between 0 and 1. As gainS(φ → ψ) depends on neither b nor d, it is clear that a change of b or d does not result in any change of gainS(φ → ψ). Thus, we only have to verify whether conditions (17).1 and (17).3 hold.
Condition (17).1: Let us assume that the considered data table S = (U, A) is extended to S′ = (U′, A) by adding to S some objects satisfying both φ and ψ. Those objects increase the value of a to a + Δ, where Δ > 0. The condition will be satisfied if and only if

gainS(φ → ψ) = a − Θ(a + c) ≤ gainS′(φ → ψ) = (a + Δ) − Θ(a + Δ + c).

Let us observe that

a − Θ(a + c) ≤ (a + Δ) − Θ(a + Δ + c) ⇔ a − aΘ − cΘ ≤ a + Δ − aΘ − cΘ − ΘΔ ⇔ Δ − ΘΔ ≥ 0 ⇔ Δ(1 − Θ) ≥ 0.

The last inequality is always satisfied, as Δ > 0 and (1 − Θ) ≥ 0 because Θ is a fractional constant between 0 and 1. Thus, condition (17).1 is satisfied.

Condition (17).3: Let us assume that the considered data table S = (U, A) is extended to S′ = (U′, A) by adding to S some objects satisfying φ but not satisfying ψ. That addition increases the value of c to c + Δ, where Δ > 0. Condition (17).3 will be satisfied if and only if

gainS(φ → ψ) = a − Θ(a + c) ≥ gainS′(φ → ψ) = a − Θ(a + Δ + c).

Let us observe that:

a − Θ(a + c) ≥ a − Θ(a + Δ + c) ⇔ a − aΘ − cΘ ≥ a − aΘ − cΘ − ΘΔ ⇔ ΘΔ ≥ 0.

The last inequality is always satisfied as Δ > 0 and Θ ≥ 0. Thus, condition (17).3 is satisfied. Since all four conditions are satisfied, the gain function has the property M.

Dependency factor. Let us now analyse the dependency factor with respect to the property M. The measure will satisfy the property only when all conditions (17) are fulfilled.

Theorem 3.4. Dependency factor ηS(φ → ψ) does not have the property M [29].

Proof. Let us consider the dependency factor rewritten in notation (16):

ηS(φ → ψ) = (a/(a + c) − (a + b)/(a + b + c + d)) / (a/(a + c) + (a + b)/(a + b + c + d)).    (23)
It will be shown by the following counterexample that ηS(φ → ψ) does not satisfy the condition that the increase of a results in a non-decrease of the dependency factor; thus, this measure does not have the property M.
Let us consider case α, in which a = 7, b = 2, c = 3, d = 3, and case α′, in which a increases to 8 while b, c, d remain unchanged. The dependency factor does not have the property M, as such an increase of a results in a decrease of the measure: ηS(φ → ψ) = 0.0769 > 0.0756 = ηS′(φ → ψ).
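The counterexample is easy to reproduce; a quick computation (Python, purely illustrative) confirms the reported values:

def eta(a, b, c, d):
    # dependency factor in the notation (16), cf. formula (23)
    n = a + b + c + d
    conf, prior = a / (a + c), (a + b) / n
    return (conf - prior) / (conf + prior)

print(round(eta(7, 2, 3, 3), 4))  # 0.0769
print(round(eta(8, 2, 3, 3), 4))  # 0.0756 -- a increased, yet the measure decreased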
Measures f and s. Greco et al. have considered in [28] different measures from the perspective of property M. They have proved that measures f and s satisfy this property.
3.2   Analysis of Measures with Respect to Property of Confirmation
To prove that a measure has the property of confirmation, the following conditions need to be verified:

1. the measure takes positive values iff confS(φ → ψ) > supS(ψ)/|U|,
2. the measure takes the value 0 iff confS(φ → ψ) = supS(ψ)/|U|,
3. the measure takes negative values iff confS(φ → ψ) < supS(ψ)/|U|.    (24)
If all those conditions are satisfied, then a measure is said to have the property of confirmation.

Rule support, anti-support and confidence. The domains of the attractiveness measures of support, anti-support and confidence are restricted to non-negative values only. Therefore, none of these measures can satisfy the last condition of (24). Hence, these simple measures do not have the property of confirmation.

Rule interest function. Let us consider the rule interest function with respect to the property of confirmation.

Theorem 3.5. Rule interest function has the property of confirmation.

Proof. Let us consider the rule interest function given by formula (4). Let us observe that according to condition (24).1:

RIS(φ → ψ) = supS(φ → ψ) − supS(φ)supS(ψ)/|U| > 0 ⇔ confS(φ → ψ) > supS(ψ)/|U|.    (25)

Since confS(φ → ψ) = supS(φ → ψ)/supS(φ), we can observe that:

confS(φ → ψ) > supS(ψ)/|U| ⇔ supS(φ → ψ) − supS(φ)supS(ψ)/|U| > 0,

which means that equivalence (25) is always true. Analogous proofs hold for conditions (24).2 and (24).3.
Gain function. We will now analyse the gain function with respect to the property of confirmation.

Theorem 3.6. Gain function has the property of confirmation iff Θ = supS(ψ)/|U|.

Proof. Let us consider the gain function given by formula (6). Let us first consider condition (24).2, according to which:

gainS(φ → ψ) = supS(φ → ψ) − ΘsupS(φ) = 0 ⇔ confS(φ → ψ) = supS(ψ)/|U|.    (26)

Since confS(φ → ψ) = supS(φ → ψ)/supS(φ), we can observe that:

confS(φ → ψ) = supS(ψ)/|U| ⇔ supS(φ → ψ) = supS(φ)supS(ψ)/|U|,

which means that equivalence (26) can be transformed in the following manner:

gainS(φ → ψ) = 0 ⇔ supS(φ)supS(ψ)/|U| − ΘsupS(φ) = 0.    (27)
It is easy to observe that equivalence (27) holds only for Θ = supS(ψ)/|U|. In that situation the gain function actually boils down to the rule interest function, which was proved to be a confirmation measure. Hence, the gain function has the property of confirmation if and only if Θ = supS(ψ)/|U|.

Dependency factor. Let us now focus on the dependency factor and analyze it with respect to the property of confirmation.

Theorem 3.7. Dependency factor has the property of confirmation.

Proof. Let us consider the dependency factor given by formula (7). Let us observe that according to condition (24).1:

ηS(φ → ψ) = (supS(φ → ψ)/supS(φ) − supS(ψ)/|U|) / (supS(φ → ψ)/supS(φ) + supS(ψ)/|U|) > 0 ⇔ confS(φ → ψ) > supS(ψ)/|U|.    (28)

Since confS(φ → ψ) = supS(φ → ψ)/supS(φ) > supS(ψ)/|U|, it is clear that supS(φ → ψ)/supS(φ) − supS(ψ)/|U| > 0. Thus, both the numerator and the denominator of the dependency factor are positive and we can conclude that equivalence (28) is always true. Analogous proofs hold for conditions (24).2 and (24).3.
Measures f and s. Among the well recognized and established confirmation measures, an important role is played by measures f and s. They have been considered as measures with the property of confirmation since their introduction in the literature and are widely discussed and analyzed by many authors ([9], [22], [49]).
3.3   Analysis of Measures with Respect to Property of Hypothesis Symmetry
In order to prove that a certain measure has the property of hypothesis symmetry, it must be checked whether its values for rules φ → ψ and φ → ¬ψ are the same but of opposite sign.

Rule support, anti-support and confidence. Similarly as in the confirmation analysis, in the analysis of the property of hypothesis symmetry we retain the limits introduced by the non-negative domains of support, anti-support and confidence. None of these attractiveness measures has the property of hypothesis symmetry, as their values are never negative; e.g. supS(φ → ψ) ≠ −supS(φ → ¬ψ).

Rule interest function. Let us now analyse whether the rule interest function satisfies the property of hypothesis symmetry.

Theorem 3.8. Rule interest function has the property of hypothesis symmetry [29].

Proof. Let us consider RI expressed as in (20):

RIS(φ → ψ) = a − (a + c)(a + b)/(a + b + c + d).

For a negated conclusion RI is defined as:

RIS(φ → ¬ψ) = c − (a + c)(c + d)/(a + b + c + d).

The hypothesis symmetry will be satisfied by RI iff:

a − (a + c)(a + b)/(a + b + c + d) = −(c − (a + c)(c + d)/(a + b + c + d)).

Through simple algebraic transformations we obtain that:

a − (a + c)(a + b)/(a + b + c + d) = −c + (a + c)(c + d)/(a + b + c + d) = (ad − bc)/(a + b + c + d)
Gain function. We shall now consider the gain function with respect to the property of hypothesis symmetry.

Theorem 3.9. Gain function has the property of hypothesis symmetry iff Θ = 1/2 [29].

Proof. Let us consider the gain function expressed as in (22): gainS(φ → ψ) = a − Θ(a + c). For a negated conclusion the gain function is defined as: gainS(φ → ¬ψ) = c − Θ(a + c). The hypothesis symmetry will be satisfied by this measure iff:

a − Θ(a + c) = −[c − Θ(a + c)].

Through simple algebraic transformations we obtain that the equality above is satisfied only when Θ = 1/2.

Dependency factor. Let us now perform the analysis of the dependency factor with respect to the property of hypothesis symmetry.

Theorem 3.10. The dependency factor η does not have the property of hypothesis symmetry [29].

Proof. Let us consider the dependency factor expressed as in (23):

ηS(φ → ψ) = (a/(a + c) − (a + b)/(a + b + c + d)) / (a/(a + c) + (a + b)/(a + b + c + d)).

For a negated conclusion it is defined as:

ηS(φ → ¬ψ) = (c/(a + c) − (c + d)/(a + b + c + d)) / (c/(a + c) + (c + d)/(a + b + c + d)).

To prove that the dependency factor does not satisfy the hypothesis symmetry, let us set a = b = c = 10 and d = 20. We can easily verify that ηS(φ → ψ) = 0.11 ≠ 0.09 = −ηS(φ → ¬ψ).

Measures f and s. Eells et al. have considered in [17] several confirmation measures from the perspective of properties of symmetry. They have proved that measures f and s satisfy the property of hypothesis symmetry.
4   Multicriteria Attractiveness Evaluation of Rules
Application of a measure that quantifies the interestingness of a rule induced from a data table S creates a complete preorder (see the formal definition in Section 4.1) on the set of rules. This way the rules are ranked, and it is possible to filter out the unwanted ones by setting a threshold on the value of the attractiveness measure. However, a single attractiveness measure is often not sufficient to evaluate the utility and attractiveness of rules. Thus, multicriteria attractiveness evaluation of rules has become very popular [3], [9], [23], [30], [31], [34], [50]. In this approach, the induced rules are evaluated with respect to many attractiveness measures (criteria) at once, and as a result a partial preorder (see the formal definition in Section 4.1) on the set of rules is obtained. The implication of complete preorders by partial preorders is a very interesting problem from both a theoretical and a practical point of view; however, not many authors have tackled it. The next sections will be devoted to a discussion of the relationships between different complete and partial preorders and to the comparison of sets of rules resulting from different evaluation approaches.
4.1   Definitions of Orders and Pareto-optimal Set
Complete Preorder on Set of Rules with Respect to a Single Attractiveness Measure. Let us denote by v any attractiveness measure that quantifies the interestingness of a rule induced from a data table S. Application of v to a set of induced rules creates a complete preorder, denoted as ⪯v, on that set. Recall that a complete preorder on a set X is any binary relation R on X that is strongly complete (i.e. for all x, y ∈ X, xRy or yRx) and transitive (i.e. for all x, y, z ∈ X, xRy and yRz imply xRz). In simple words, if the semantics of xRy is "x is at most as good as y", then a complete preorder permits one to order the elements of X from the best to the worst, with possible ex-aequo but without any incomparability. In other words, considering an attractiveness measure v that induces a complete preorder on a set of rules X and two rules r1, r2 ∈ X, rule r1 is preferred to rule r2 with respect to measure v if r1 ≻v r2 and, moreover, rule r1 is indifferent to rule r2 if r1 ∼v r2.

Partial Preorder on Rules with Respect to Two Attractiveness Measures. A partial preorder on a set X is any binary relation R on X that is reflexive (i.e. for all x ∈ X, xRx) and transitive. In simple words, if the semantics of xRy is "x is at most as good as y", then a partial preorder permits one to order the elements of X from the best to the worst, with possible ex-aequo (i.e. cases of x, y ∈ X such that xRy and yRx) and with possible incomparability (i.e. cases of x, y ∈ X such that not xRy and not yRx). Let us denote by ⪯qt a partial preorder given by a dominance relation on a set X of rules in terms of any two different attractiveness measures q and t, i.e. for all r1, r2 ∈ X, r1 ⪯qt r2 if r1 ⪯q r2 and r1 ⪯t r2. The partial preorder ⪯qt can be decomposed into its asymmetric part ≺qt and its symmetric part ∼qt in the following manner:
given a set of rules X and two rules r1, r2 ∈ X,

r1 ≺qt r2 if and only if q(r1) ≤ q(r2) ∧ t(r1) < t(r2), or q(r1) < q(r2) ∧ t(r1) ≤ t(r2);    (29)

moreover,

r1 ∼qt r2 if and only if q(r1) = q(r2) ∧ t(r1) = t(r2).    (30)
Pareto-optimal Border. If for a rule r ∈ X there does not exist any rule r′ ∈ X such that r ≺qt r′, then r is said to be non-dominated (i.e. Pareto-optimal) with respect to attractiveness measures q and t. The set of all non-dominated rules with respect to q and t forms a Pareto-optimal border (Pareto-optimal set) of the set of rules in the q-t evaluation space and is referred to as a q-t Pareto-optimal border.

Monotonicity of a Function in Its Argument. Let (X, ⪯) be a pair where X is a set of rules and ⪯ is an ordering relation over X. A function g : X → IR is monotone (resp. anti-monotone) with respect to ⪯ (monotone in ⪯) if and only if x ⪰ y implies that g(x) ≥ g(y) (resp. g(x) ≤ g(y)) for any x, y.

Implication of a Complete Preorder by a Partial Preorder. A complete preorder ⪯v is implied by a partial preorder ⪯qt if and only if, given a set of rules X, for any two rules r1, r2 ∈ X such that r1 ⪯qt r2:

r1 ≺qt r2 ⇒ r1 ≺v r2, and r1 ∼qt r2 ⇒ r1 ∼v r2.    (31)
Moreover, Bayardo and Agrawal have shown in [3] that the following conditions are sufficient for proving that a complete preorder ⪯v defined over a rule value function g(r) is implied by a partial preorder ⪯qt:

g(r) is monotone in q over rules with the same value of t, and
g(r) is monotone in t over rules with the same value of q.    (32)
4.2   Support–Confidence Evaluation Space
Bayardo and Agrawal in [3] have investigated the concept of rule evaluation with respect to two popular attractiveness measures: rule support and confidence. They have considered rules with the same conclusion and evaluated them in this two-dimensional space. It has been proved in [3] that, for a set of rules with the same conclusion, if a complete preorder ⪯v is implied by a particular support–confidence partial preorder ⪯sc, then rules optimal with respect to ≺v can be found in the set of non-dominated rules with respect to rule support and confidence.
Adjusting (32), the following conditions are sufficient for proving that a complete preorder ⪯v defined over a rule value function g(r) is implied by the support–confidence partial preorder ⪯sc:

– g(r) is monotone in rule support over rules with the same value of confidence, and
– g(r) is monotone in confidence over rules with the same value of rule support.    (33)
When analysing the above conditions we only consider rules with the same conclusion. Moreover, we do not apply any changes to the data table S = (U, A), and therefore |U| and supS(ψ) are constant. In this context, the monotonicity of g(r) with respect to rule support over rules with the same value of confidence means that for any two rules r1 and r2, such that confS(r1) = confS(r2), if supS(r1) ≤ supS(r2) then g(r1) ≤ g(r2). Analogously, the monotonicity of g(r) with respect to confidence over rules with the same value of rule support means that for any two rules r1 and r2, such that supS(r1) = supS(r2), if confS(r1) ≤ confS(r2) then g(r1) ≤ g(r2). Bayardo and Agrawal have shown that the support–confidence Pareto-optimal border (i.e. the set of non-dominated rules with respect to support and confidence) includes rules optimal according to several different attractiveness measures, such as gain, Laplace [12], lift [40], conviction [6], the rule interest function, and others. This practically useful result allows one to identify the most interesting rules according to those measures by solving an optimized rule mining problem with respect to rule support and confidence only.
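The non-dominated set itself can be extracted by a direct pairwise dominance test. A minimal sketch (Python; quadratic in the number of rules, and the dictionary keys are illustrative, not the data model of any particular system):

def pareto_border(rules, q="sup", t="conf"):
    # keep every rule that no other rule strictly dominates in the q-t space
    def dominates(x, y):
        return (x[q] >= y[q] and x[t] >= y[t]
                and (x[q] > y[q] or x[t] > y[t]))
    return [r for r in rules
            if not any(dominates(s, r) for s in rules if s is not r)]

rules = [{"sup": 200, "conf": 0.667},
         {"sup": 150, "conf": 0.680},
         {"sup": 150, "conf": 0.600}]
print(pareto_border(rules))  # the third rule is dominated by the second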
Fig. 1. Support–confidence Pareto-optimal border
Moreover, since the conditions (33) are general enough, the analysis of the relationship of other attractiveness measures with support and confidence can be conducted. Due to the utility of the support–confidence Pareto-optimal border, the problem of proving which other complete preorders can be implied by ⪯sc remains an important issue from both a theoretical and a practical point of view.
Monotonic relationship of measure f with support and confidence. Due to the valuable properties of measure f (property M, the property of confirmation, and hypothesis symmetry), our analysis aimed to verify whether (among rules with a fixed hypothesis) rules that are best according to measure f are included in the set of the non-dominated rules with respect to support and confidence. To fulfill the above objective, it has been checked whether conditions (33) hold when the confirmation measure f is the g(r) rule value function.

Theorem 4.1. Measure f is independent of rule support, and, therefore, monotone in rule support, when the value of confidence is held fixed [7].

Proof. Let us consider measure f transformed such that, for given U and ψ, it only depends on the confidence of rule φ → ψ and the support of ψ:

fS(φ → ψ) = (|U| confS(φ → ψ) − supS(ψ)) / ((|U| − 2supS(ψ)) confS(φ → ψ) + supS(ψ)).    (34)
As we consider rules with a fixed conclusion ψ and we do not apply any changes to the data table S, the values of |U| and supS(ψ) are constant. Thus, for a fixed confidence, we have a constant value of measure f, no matter what the rule support is. Hence, confirmation measure f is monotone in rule support when the confidence is held constant.

Theorem 4.2. Measure f is increasing in confidence, and, therefore, monotone with respect to confidence [7].

Proof. Again, let us consider measure f given as in (34). For the clarity of presentation, let us express the above formula as a function of confidence, still regarding |U| and supS(ψ) as constant values greater than 0:

y = (kx − m)/(nx + m),

where y = fS(φ → ψ), x = confS(φ → ψ), k = |U|, m = supS(ψ), n = |U| − 2supS(ψ). It is easy to observe that k = |U| > 0, and 0 < m ≤ |U|. In order to verify the monotonicity of f in confidence, let us differentiate y with respect to x. We obtain:

∂y/∂x = m(k + n)/(nx + m)².
As m > 0, and k + n = |U| + |U| − 2supS(ψ) = 2|U| − 2supS(ψ) > 0 whenever |U| > supS(ψ), the whole derivative is always non-negative. Therefore, confirmation measure f is monotone in confidence. Thus, both of Bayardo and Agrawal's sufficient conditions for proving that a complete preorder ⪯v defined over confirmation measure f is implied by the partial preorder ⪯sc hold. This means that, for a class of rules with a fixed conclusion, rules optimal according to measure f will be found in the set of rules that are best with respect to both rule support and confidence.
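Both parts of this argument are easy to spot-check numerically from (34): rule support does not appear in the formula at all, and raising confidence raises f. A small sketch (Python; the sample values of |U| and supS(ψ) are arbitrary):

def f_measure(conf, sup_psi, n):
    # confirmation measure f written as in (34); it depends only on the
    # confidence of the rule, sup(psi) and |U| -- rule support never enters
    return (n * conf - sup_psi) / ((n - 2 * sup_psi) * conf + sup_psi)

n, sup_psi = 1000, 300
print(f_measure(0.5, sup_psi, n))                               # 0.4, whatever the support
print(f_measure(0.6, sup_psi, n) > f_measure(0.5, sup_psi, n))  # True: f grows with confidence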
This result does not refer, however, to the utility of the scales in which fS(φ → ψ), having the property of confirmation, and confS(φ → ψ), not having the property of confirmation, are expressed. While the confidence is the truth value of the knowledge pattern "if φ, then ψ", measure fS(φ → ψ) says to what extent ψ is satisfied more frequently when φ is satisfied rather than when φ is not satisfied. In other words, f says what is the "value of information" that φ adds to the credibility of ψ. For further discussion about the weakness of the confidence scale see [8], [72]. The difference in semantics and utility of confS(φ → ψ) on the one hand, and fS(φ → ψ) or sS(φ → ψ) as representatives of measures with the confirmation property on the other hand, can be shown with the following example. Consider the possible results of rolling a die: 1, 2, 3, 4, 5, 6, and let the conclusion ψ = "the result is divisible by 2". Given two different premises: φ1 = "the result is a number from the set {1, 2, 3}", φ2 = "the result is a number from the set {2, 3, 4}", we get, respectively: confS(φ1 → ψ) = 1/3, fS(φ1 → ψ) = −1/3, sS(φ1 → ψ) = −1/3, confS(φ2 → ψ) = 2/3, fS(φ2 → ψ) = 1/3, sS(φ2 → ψ) = 1/3. This example, of course, acknowledges the monotone link between confirmation measure f and confidence. However, it also clearly shows that the values of confirmation measures have a more useful interpretation than confidence. In particular, in the case of rule φ1 → ψ, the premise actually disconfirms the conclusion, as it reduces the probability of conclusion ψ from 1/2 = supS(ψ)/|U| to 1/3 = confS(φ1 → ψ). This fact is expressed by the negative values of confirmation measures f and s, but cannot be concluded by observing only the value of confidence. Finally, as the semantics of fS(φ → ψ) is more useful than that of confS(φ → ψ), and as both of these measures are monotonically linked, it is reasonable to propose a new rule evaluation space in which the search for the most interesting rules is carried out taking into account confirmation measure fS(φ → ψ) and rule support [84].
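The die example can be reproduced directly from the four counters of each rule. In the sketch below (Python, illustrative), s is computed in its contingency form a/(a + c) − b/(b + d), given later as (39), and the counter form of f follows from substituting confS = a/(a + c), supS(ψ) = a + b and |U| = a + b + c + d into (34):

from fractions import Fraction as Fr

def counters(phi, psi, universe):
    # a, b, c, d in the notation (16)
    a = len(phi & psi)
    b = len(psi - phi)
    c = len(phi - psi)
    d = len(universe - phi - psi)
    return a, b, c, d

def conf(a, b, c, d):
    return Fr(a, a + c)

def s(a, b, c, d):
    return Fr(a, a + c) - Fr(b, b + d)

def f(a, b, c, d):
    return Fr(a * d - b * c, a * d + b * c + 2 * a * c)

U, psi = {1, 2, 3, 4, 5, 6}, {2, 4, 6}
for phi in ({1, 2, 3}, {2, 3, 4}):
    cnt = counters(phi, psi, U)
    print(conf(*cnt), f(*cnt), s(*cnt))
# phi1: conf = 1/3, f = -1/3, s = -1/3 -- the premise disconfirms the conclusion
# phi2: conf = 2/3, f =  1/3, s =  1/3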
4.3   Support–f Evaluation Space
Combining rule support and measure f in one rule evaluation space is valuable, as f is independent of rule support in the sense presented in Theorem 4.1, and rules that have high values of measure f are often characterized by small values of rule support. The proposal of a new evaluation space naturally brings the question of a comparison with the support–confidence evaluation space. To fulfil that objective, a thorough analysis of the monotonicity of confidence in support and measure f has been carried out (for details see [8]). It has been shown that rules optimal in confidence lie on the Pareto-optimal border with respect to support and measure f. Moreover, it has been proved that all attractiveness measures for which the optimal rules are found on the support–confidence Pareto-optimal border also preserve that relationship with respect to the support–f Pareto-optimal border. Thus, it is legitimate to conclude that the set of rules forming the support–confidence Pareto-optimal border is exactly the same as the set of rules constituting the
support–f Pareto-optimal border [8], [9]. An illustration of this result on the real-life dataset census, for conclusion workclass='Private', is presented in Figure 2 (for more details on the dataset refer to Section 6.2). Those two Pareto sets can, in fact, be regarded as monotone transformations of each other. Hence, the substitution of confidence by f does not diminish the set of interestingness measures for which optimal rules reside on the Pareto-optimal border. However, as the semantics of measure f is more useful than that of confidence, we are strongly in favor of mining the Pareto-optimal border with respect to rule support and confirmation f, and not rule support and confidence as it was proposed in [3]. The advantage of the support–f evaluation space comes from the fact that, contrary to confidence, measure f has the property of confirmation and thus has the means to filter out the disconfirming rules (marked on Figure 2 by red circles; the blue triangles represent rules with a positive confirmation value). Let us stress that even the non-dominated rules, which are objectively the best rules, might be characterized by a negative value of confirmation measure f, and therefore need to be discarded. The confidence measure cannot distinguish such useless rules. The number of rules which are characterized by a negative value of any attractiveness measure with the property of confirmation (measure f is just a representative of the group of confirmation measures) depends on the dataset, but can potentially be quite large. Therefore, the reduction of the number of rules to be analyzed is another argument for the support–f evaluation space [86].

Table 2. Information about the percentage of rules with non-positive confirmation in the set of all generated rules for different conclusions, for minimal support=0.15 (census dataset); columns: considered conclusion, no. of all rules, no. of rules with non-positive confirmation, reduction percentage; conclusions: workclass=Private, sex=Male, income...

Recall that, by condition (24).1, for any attractiveness measure cS(φ → ψ) having the property of confirmation:

cS(φ → ψ) > 0 ⇔ confS(φ → ψ) > supS(ψ)/|U|.    (35)
Moreover, it has been analytically proved in [9] (see Theorem 4.9) that for a fixed value of rule support, any measure cS(φ → ψ) having the property of confirmation and the property M is monotone with respect to confidence. Let us also stress that all confirmation measures (no matter whether having the property M or not) change their signs in the same situations. Thus, the possession or not of property M will not influence our further discussion. Since we limit our considerations to rules with the same conclusion and do not apply any changes to the data table S, |U| and supS(ψ) should be regarded as constant values. Thus, due to the monotonic link between cS(φ → ψ) and confidence, (35) shows that rules lying under the constant line confS(φ → ψ) = supS(ψ)/|U|, where supS(ψ)/|U| expresses what percentage of the whole dataset is taken by the considered class ψ, are characterized by negative values of any measure with the property of confirmation. For those rules, ψ is satisfied less frequently when φ is satisfied than generically. Figure 3 illustrates this analytical result. Of course, the more objects there are in the analyzed class with a particular conclusion, the higher the position of the constant line separating rules with non-positive confirmation, and the fewer rules are expected to lie above it.
Fig. 3. An example of a constant line representing any confirmation measure cS(φ → ψ) = 0 in a support–confidence space; rules lying under this constant line should be discarded from further analysis
It is also interesting to investigate a more general condition cS(φ → ψ) ≥ k, k ≥ 0, for some specific measures with the property of confirmation. In the following, we consider confirmation measure fS(φ → ψ).

Theorem 4.3. [86]

fS(φ → ψ) ≥ k ⇔ confS(φ → ψ) ≥ supS(ψ)(k + 1) / (|U| − k(|U| − 2supS(ψ))).    (36)
Proof. The analysis concerns only a set of rules with the same conclusion, and since we do not apply any changes to the data table S, the values of |U| and supS(ψ) are constant. For given U and ψ, let us consider confirmation measure fS(φ → ψ) written in terms of the confidence and support of rule φ → ψ (effectively in terms of confidence only), as in (34):

fS(φ → ψ) = (|U| confS(φ → ψ) − supS(ψ)) / ((|U| − 2supS(ψ)) confS(φ → ψ) + supS(ψ)).    (37)

Transforming the above definition of f to outline how confidence depends on f, we obtain:

confS(φ → ψ) = (fS(φ → ψ)supS(ψ) + supS(ψ)) / (|U| − fS(φ → ψ)(|U| − 2supS(ψ))).    (38)

Considering the inequality fS(φ → ψ) ≥ k for (38), we obtain the thesis of the theorem.

Figure 4 is an exemplary application of the theoretical results on the census dataset (for more examples see also [85]). Rules with conclusion workclass='Private' are evaluated in support–confidence and support–f space. On
Fig. 4. Rules with positive (blue circles) and non–positive confirmation measure value (red circles) in a support–confidence and support–f space; for minimal support=0.15 (conclusion: workclass=’Private’, census dataset)
the diagrams a constant line separates the rules with positive confirmation (blue circles situated above the line) from those with non–positive confirmation (red circles situated below the line). In the support–confidence evaluation space the
position of the line had to be calculated according to result (35), whereas the same information is given directly in the support–f evaluation space, as only the sign of f needs to be observed. This example points out the advantage of the support–f space over the support–confidence space; however, it also shows that result (35) provides means by which the support–confidence space can actually be made meaningful.
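Transformations (37) and (38) are mutually inverse, and Theorem 4.3 turns any cut f ≥ k into an equivalent confidence threshold. A quick round-trip check (Python; the sample values of |U|, supS(ψ) and k are arbitrary):

def f_from_conf(conf, m, n):   # (37), with m = sup(psi) and n = |U|
    return (n * conf - m) / ((n - 2 * m) * conf + m)

def conf_from_f(fval, m, n):   # (38): confidence as a function of f
    return (fval * m + m) / (n - fval * (n - 2 * m))

n, m, k = 1000, 300, 0.2
for c0 in (0.3, 0.5, 0.8):
    assert abs(conf_from_f(f_from_conf(c0, m, n), m, n) - c0) < 1e-12

# (36): the minimal confidence at which f >= k
threshold = m * (k + 1) / (n - k * (n - 2 * m))
print(round(threshold, 4))                          # 0.3913
print(f_from_conf(threshold, m, n) >= k - 1e-12)    # True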
4.4   Support–s Evaluation Space
Having proved a monotonic relationship between confidence and measure f, our analysis proceeded to verify the existence of such a link between confidence and another attractiveness measure with the property M and the property of confirmation, hoping that these results would finally allow us to generalize the result to the whole class of attractiveness measures possessing the property M. Below, we consider measure s, having the property M, the property of confirmation and hypothesis symmetry, and verify whether (among rules with a fixed hypothesis) rules that are best according to measure s are included in the set of non-dominated rules with respect to support and confidence. To fulfil this objective, the monotonicity of s with respect to rule support over rules with the same value of confidence, and the monotonicity of s with respect to confidence over rules with the same value of rule support, were analyzed. Thus, in the first step it was verified whether for any two rules r1 and r2, such that confS(r1) = confS(r2), if supS(r1) ≤ supS(r2) then sS(r1) ≤ sS(r2). Next, it was analyzed whether for any two rules r1 and r2, such that supS(r1) = supS(r2), if confS(r1) ≤ confS(r2) then sS(r1) ≤ sS(r2).

Theorem 4.4. [8] When the confidence value is held fixed, then:
1. measure sS(φ → ψ) is increasing in rule support (i.e. strictly monotone) iff sS(φ → ψ) > 0,
2. measure sS(φ → ψ) is constant in rule support (i.e. monotone) iff sS(φ → ψ) = 0,
3. measure sS(φ → ψ) is decreasing in rule support (i.e. strictly anti-monotone) iff sS(φ → ψ) < 0.

Proof. Let us consider measure s expressed in the notation (16):

sS(φ → ψ) = a/(a + c) − b/(b + d).    (39)
Only the proof of part 1 shall be presented, as the other points are analogous. Let us consider two rules r1: φ → ψ and r2: φ′ → ψ such that they have the same value of confidence and supS(φ → ψ) < supS(φ′ → ψ). For the first rule, supS(φ → ψ) = a. Let us express the support of r2 in the form supS(φ′ → ψ) = a′ = a + Δ, where Δ > 0. Since the confidence is to be constant, c should change into c′ = c + ε in such a way that:

confS(φ → ψ) = a/(a + c) = confS(φ′ → ψ) = a′/(a′ + c′) = (a + Δ)/(a + Δ + c + ε).
Simple algebraic transformations lead to the conclusion that:

a/(a + c) = (a + Δ)/(a + Δ + c + ε) ⇔ Δ/(Δ + ε) = a/(a + c).    (40)

Let us observe that (40) implies that if c = 0 then ε = 0 and, moreover, if c > 0 then ε > 0. Since |U| and supS(ψ) must be kept constant, b and d need to decrease in such a way that b′ = b − Δ and d′ = d − ε. In this situation, the confirmation measure s for r2 will be:

sS(φ′ → ψ) = a′/(a′ + c′) − b′/(b′ + d′) = (a + Δ)/(a + Δ + c + ε) − (b − Δ)/(b − Δ + d − ε).

Remembering that confS(φ → ψ) = confS(φ′ → ψ), let us observe that:

sS(φ′ → ψ) > sS(φ → ψ) ⇔ b/(b + d) > (b − Δ)/(b − Δ + d − ε) ⇔ dΔ > bε ⇔ dΔ + bΔ > bε + bΔ ⇔ Δ/(Δ + ε) > b/(b + d).    (41)

Considering (40) and (41) it can be concluded that:

sS(φ′ → ψ) > sS(φ → ψ) ⇔ a/(a + c) > b/(b + d) ⇔ sS(φ → ψ) > 0.
This proves that, for a fixed value of confidence, measure s is increasing with respect to rule support if and only if sS(φ → ψ) > 0, and therefore in its positive range measure s is strictly monotone in rule support.

Theorem 4.5. When the rule support value is held fixed, measure s is increasing with respect to confidence (i.e. measure s is monotone in confidence) [8].

Proof. Again, let us consider two rules r1: φ → ψ and r2: φ′ → ψ, and measure s given as in (39). By the hypothesis, the rule support is supposed to be constant, i.e. supS(φ → ψ) = a = a′ = supS(φ′ → ψ). Therefore, it is clear that confS(φ → ψ) = a/(a + c) will be smaller than confS(φ′ → ψ) = a/(a + c′) only if we consider c′ = c − Δ, where Δ > 0. Now, operating on c′, the only way to guarantee that |U| and supS(ψ) remain constant (as we do not apply any changes to the data table S) is to increase d such that d′ = d + Δ. The values of a and b cannot change: a′ = a and b′ = b. Now, the value of measure s for r2 takes the following form:

sS(φ′ → ψ) = a′/(a′ + c′) − b′/(b′ + d′) = a/(a + c − Δ) − b/(b + d + Δ).

Since Δ > 0, it is clear that sS(φ′ → ψ) > sS(φ → ψ). This means that for a fixed value of rule support, increasing confidence results in an increase of the value of measure s, and therefore measure s is monotone with respect to confidence.
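The construction used in the proof of Theorem 4.4 can be replayed numerically: (40) fixes ε = Δc/a, and the resulting change of s carries the sign of s itself. A sketch (Python; the two sample contingency tables are arbitrary and chosen so that s is positive in the first and negative in the second):

def s_measure(a, b, c, d):
    return a / (a + c) - b / (b + d)

def raise_support_fixed_conf(a, b, c, d, delta):
    # increase a by delta while keeping confidence, |U| and sup(psi) constant,
    # exactly as in the proof of Theorem 4.4; eps follows from (40)
    eps = delta * c / a
    return a + delta, b - delta, c + eps, d - eps

for a, b, c, d in ((40, 10, 10, 40),   # s > 0 here
                   (10, 40, 40, 10)):  # s < 0 here
    s0 = s_measure(a, b, c, d)
    s1 = s_measure(*raise_support_fixed_conf(a, b, c, d, 2))
    print(round(s0, 3), round(s1 - s0, 3))  # the change has the same sign as s0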
As rules with negative values of measure s should always be discarded from consideration, the result of Theorem 4.4 states the monotone relationship exactly on the interesting subset of rules. It implies that rules for which sS(φ → ψ) ≥ 0 and which are optimal with respect to measure s will reside on the support–confidence Pareto-optimal border. They will also be found on the support–f Pareto-optimal border, since those Pareto sets have the same contents. Since confirmation measure s has the property of monotonicity M, we propose to generate interesting rules by searching for rules maximizing confirmation measure s and support, i.e. substituting the confidence in the support–confidence Pareto-optimal border with measure s and obtaining in this way a support–s Pareto-optimal border. This approach differs from the idea of finding the Pareto-optimal border according to rule support and confirmation measure f, because the support–f Pareto-optimal border contains the same rules as the support–confidence Pareto-optimal border, while in general the support–s Pareto-optimal border contains a subset of the support–confidence Pareto-optimal border, as stated in the following theorem.

Theorem 4.6. If a rule resides on the support–s Pareto-optimal border (in case of a positive value of confirmation measure s), then it also resides on the support–confidence Pareto-optimal border, while one can have rules being on the support–confidence Pareto-optimal border which are not on the support–s Pareto-optimal border [9].

Proof. Let us consider a rule r: φ → ψ residing on the support–s Pareto-optimal border and let us suppose that measure s has a positive value. This means that for any other rule r′: φ′ → ψ we have that:

supS(φ′ → ψ) > supS(φ → ψ) ⇒ sS(φ′ → ψ) < sS(φ → ψ).    (42)

On the basis of the monotonicity of measure s with respect to support and confidence in the case of a positive value of s, we have that supS(φ′ → ψ) > supS(φ → ψ) and sS(φ′ → ψ) < sS(φ → ψ) imply that confS(φ′ → ψ) < confS(φ → ψ). This means that (42) implies that for any other rule r′,

supS(φ′ → ψ) > supS(φ → ψ) ⇒ confS(φ′ → ψ) < confS(φ → ψ).

This means that rule r residing on the support–s Pareto-optimal border is also on the support–confidence Pareto-optimal border, because one cannot have any other rule r′ such that supS(φ′ → ψ) > supS(φ → ψ) and confS(φ′ → ψ) ≥ confS(φ → ψ). Now, we prove by a counterexample that there can be rules on the support–confidence Pareto-optimal border which are not on the support–s Pareto-optimal border. Let us consider rules r and r′ residing on the support–confidence Pareto-optimal border such that for rule r we have supS(φ → ψ) = 200 and confS(φ → ψ) = 0.667, while for rule r′ we have supS(φ′ → ψ) = 150 and confS(φ′ → ψ) = 0.68. We have that sS(φ → ψ) = 0.167, which is greater than sS(φ′ → ψ) = 0.142. Thus, rule r′ is not on the support–s Pareto-optimal border, because it is dominated with respect to support–s by rule r, which has a greater support and a greater value of measure s. Theorem 4.6 states that some rules from the support–confidence Pareto-optimal border may not be present on the support–s Pareto-optimal border. Figure 5 illustrates this result.
Fig. 5. Support–confidence Pareto-optimal border is the upper-set of the support–s Pareto-optimal border (conclusion: income...)

...supS(φ → ψ), we get that supS(φ′ → ¬ψ) > supS(φ → ¬ψ). This means that (47) implies that for any other rule r′,

supS(φ′ → ψ) > supS(φ → ψ) ⇒ supS(φ′ → ¬ψ) > supS(φ → ¬ψ).

This means that a rule r residing on the support–confidence Pareto-optimal border is also on the support–anti-support Pareto-optimal border, because one cannot have any other rule r′ such that supS(φ′ → ψ) > supS(φ → ψ) and supS(φ′ → ¬ψ) ≤ supS(φ → ¬ψ). Now, we prove with a counterexample that there can be a rule on the support–anti-support Pareto-optimal border which is not on the support–confidence Pareto-optimal border. Let us consider two rules r and r′ residing on the support–anti-support Pareto-optimal border such that for rule r we have support supS(φ → ψ) = 200 and anti-support supS(φ → ¬ψ) = 100, while for rule r′ we have support supS(φ′ → ψ) = 150 and anti-support supS(φ′ → ¬ψ) = 99. We have that confS(φ → ψ) = 0.667, which is greater than confS(φ′ → ψ) = 0.602. Thus, rule r′ is not on the support–confidence Pareto-optimal border, because it is dominated in the sense of support–confidence by rule r, which has a larger support and a larger confidence. Let us observe that the support–confidence Pareto-optimal border has the advantage of presenting a smaller number of rules (more precisely, a not greater number of rules) than the support–anti-support Pareto-optimal border. However, its disadvantage is that it does not present the rules optimizing every attractiveness measure satisfying the property M. In fact, all the rules which are present on the support–anti-support Pareto-optimal border and not present on the support–confidence Pareto-optimal border maximize an attractiveness measure which is not monotone with respect to support, because it does not satisfy the condition of the above Theorem 4.8. A summarizing illustration comparing the support–anti-support Pareto-optimal border with the non-dominated sets from all previously mentioned evaluation spaces is presented in Figure 7. On the diagram there are the Pareto-optimal borders with respect to four evaluation spaces, for a fixed conclusion being income...

13: ... >= minsup};
14: }
15: Result = ∪count FSetcount;
Fig. 10. The component diagram of the system
The user can choose the frequent itemset algorithm to be applied. There are two options: Apriori by Agrawal et al. [2] and FP-Growth by Han et al. [32]. For a detailed presentation of those algorithms see also: [4], [60], [92]. The Apriori algorithm represents an iterative approach to association mining. In the first step, single-element frequent itemsets are selected from the database (Apriori line 1). Based on these sets, larger frequent sets are found by generating candidate sets (Apriori line 4) and by pruning them (Apriori lines 5-13). The first step of generating candidate sets is merging: sets of size count are united to create candidate sets of size count+1 (GenerateCandidateSets lines 3-5). Each of these newly created sets must then be verified to ensure it is a frequent set. This is done by checking whether all count-element subsets of the candidate set are frequent (GenerateCandidateSets lines 6-11). The database is then scanned to verify each candidate set's support and to eliminate those which do not satisfy the minimal support threshold. Later, frequent sets of size 2 are used to generate frequent sets of size 3, and so on. In each iteration the algorithm generates frequent sets that are one element larger, and with each iteration the database needs to be scanned. The algorithm ends when no more candidate sets can be generated. The process of rule mining using the FP-Growth algorithm consists of two major steps. In the first step the database is transformed into a special structure called the FP-Tree. In the second step the FP-Tree is explored recursively in search of frequent sets.
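The levelwise structure just described can be summarized in a few lines of Python (a sketch only, not the system's actual implementation; itemsets are frozensets, transactions are sets, and minsup is an absolute count):

from itertools import combinations

def apriori(transactions, minsup):
    # level 1: frequent single items
    items = {frozenset([i]) for t in transactions for i in t}
    freq = {s for s in items if sum(s <= t for t in transactions) >= minsup}
    result, k = set(freq), 1
    while freq:
        # merge: unions of frequent k-sets that have exactly k+1 elements
        cand = {s1 | s2 for s1 in freq for s2 in freq if len(s1 | s2) == k + 1}
        # prune: every k-element subset of a candidate must itself be frequent
        cand = {c for c in cand
                if all(frozenset(sub) in freq for sub in combinations(c, k))}
        # support check against the database
        freq = {c for c in cand if sum(c <= t for t in transactions) >= minsup}
        result |= freq
        k += 1
    return result

print(apriori([{"a", "b"}, {"a", "c"}, {"a", "b", "c"}], 2))
# frequent sets: {a}, {b}, {c}, {a, b}, {a, c}; {b, c} misses the threshold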
GenerateCandidateSets(FSetcount):
1:  foreach(pair of sets (s1, s2) in FSetcount)
2:  {
3:    union = s1 ∪ s2;
4:    if (union.count == count + 1)
5:      CandidateSets.Add(union);
6:    foreach(Set s in CandidateSets)
7:    {
8:      subsets = count-subsets of s;
9:      foreach (Set sub in subsets)
10:       if (sub ∉ FSetcount)
11:         CandidateSets.Remove(s);
12:   }
13: }

CreateTree():
1:  FSet1 = frequent 1-itemsets;
2:  SortDesc(FSet1);
3:  Tree = new FPTree(null);
4:  foreach(Row r in database)
5:  {
6:    CurrentNode = Tree.Root();
7:    foreach(Item i in FSet1)
8:    {
9:      if (i ∈ r and i ∉ CurrentNode.Children)
10:     {
11:       CurrentNode.AddChild(i);
12:       CurrentNode = CurrentNode.Children(i);
13:     }
14:     if (i ∈ r and i ∈ CurrentNode.Children)
15:     {
16:       CurrentNode = CurrentNode.Children(i);
17:       CurrentNode.Support++;
18:     }
19:   }
20: }
To transform a database into an FP-Tree, single-element frequent sets must be selected and sorted by support in descending order (CreateTree lines 1-2). Then an empty FP-Tree is created with a null-labeled root (CreateTree line 3). Reading the database for the second time, from each row we only read the frequent elements and build the FP-Tree as shown in CreateTree lines 6-17. If there is no child labeled as the item, a new node is created with the desired label and support = 1. Otherwise, we increment the child's support by 1 and consider it as the new root. Upon reaching a new record with elements to add to the tree, we return to the null-labeled root (CreateTree line 6). During the generation of the tree, a header list of item nodes is created in order to localize nodes containing the same item more rapidly and to identify each item's support.
FP-Growth(Tree, α):
1:  if (Tree has a single path P)
2:  {
3:    foreach(subset β of nodes in P)
4:    {
5:      fs = α ∪ β;
6:      fs.support = min{i.support : i ∈ β};
7:      Result.Add(fs);
8:    }
9:  }
10: else
11: {
12:   foreach(Item i in Header)
13:   {
14:     β = i ∪ α;
15:     β.support = i.support;
16:     Result.Add(β);
17:     create β's conditional pattern base;
18:     create β's conditional FP-Tree Treeβ;
19:     if (Treeβ ≠ ∅)
20:       FP-Growth(Treeβ, β);
21:   }
22: }
An FP-Tree constructed in such a way is then recursively explored by calling FP-Growth(Tree, null). The recursive process of exploring the FP-Tree is based on distinguishing two situations. If the tree has only a single path, then all non-empty subsets of that path are combined with the suffix pattern α (with which the function was called) and added to the resulting frequent pattern list (FP-Growth lines 1-9). Otherwise, if the tree consists of more than one path, then the header list is read in support-ascending order (FP-Growth lines 12-21). The examined item is merged with the suffix pattern and added to the result list with support equal to the item's support (FP-Growth lines 14-16). Based on the paths containing nodes with the currently explored item, a conditional pattern base is created and, later, a conditional FP-Tree (FP-Growth lines 17-18). The idea is to filter items that occur with the new suffix (the item currently read from the header list) often enough to satisfy the minimal support value. Once these items are filtered, a new FP-Tree is created in a way similar to the one created from the database (the support values of the nodes are incremented by the node's support value, and not by 1 as in CreateTree()). If the tree is not empty, we recursively call FP-Growth with the new tree and suffix (FP-Growth lines 19-20).

Rule Generator is a unit that generates rules from the frequent itemsets found by the Frequent Itemset Generator. The basic rule generation algorithm is based on creating rules from the subsets of frequent itemsets:
GenerateRules(FrequentSets):
1: foreach (FrequentSet fs in FrequentSets)
2: {
3:   subsets = list of all non-empty subsets of fs;
4:   foreach (Subset sub in subsets)
5:     create rule: sub ⇒ (fs − sub);
6: }
The rules can be formed with respect to different attractiveness measures, i.e. the user can optionally set the following thresholds: maximal acceptable rule anti-support, and minimal acceptable confidence, f or s measure value. On the Rule Generator's output there will only be rules that satisfy the introduced thresholds. The results from Remark 6.1 are used to generate rules with a given anti-support threshold more effectively. For each generated rule the values of the following counters are obtained: a = supS(φ → ψ), b = supS(¬φ → ψ), c = supS(φ → ¬ψ) and d = supS(¬φ → ¬ψ). On their basis, for each rule, the measures of rule support, anti-support, confidence, f and s are calculated. The user can see the output rules with their values of the considered attractiveness measures in the form of a table (see the example in Figure 11). The user can also choose a particular attribute to be a decision attribute, in which case only rules with that attribute in the rule's conclusion will be generated. If the user does not assign any decision attribute, association rules are generated.
Fig. 11. Example of rule presentation format (census dataset)
Ordering and Optimization Unit is a module that divides the set of all rules into subsets according to their conclusions. Rules from each such group are ordered with respect to their value of support, and with respect to anti-support when the value of support is the same. Such an ordering allows one to optimize the phase of finding the Pareto-optimal border (or the area close to it) in the support–anti-support evaluation space, because there is no need to perform n² comparisons on rules (where n is the number of rules with the same conclusion). For each
group of rules with the same conclusion, this Pareto-optimal border is found in the following manner (for reference see also JumpAlg):

1. the first element from the ordered list (i.e. the rule that has the highest rule support and the smallest anti-support) is placed on the Pareto-optimal border, for it is surely non-dominated,
2. we jump to the element on the list with the next highest value of support and search for a rule whose anti-support is smaller than the anti-support of the element chosen in the previous step. If such a rule is found, it is added to the Pareto-optimal border. The procedure is then continued for the next highest values of support until the element with the smallest support is reached.

JumpAlg:
1: for (idx = 0; idx < Rules.count; idx += Rules[idx].EqualCount)
2: {
3:   if (idx == 0 || Rules[idx].Measure > Rules[lastPareto].Measure)
4:   {
5:     ParetoOptimal.Add(Rules[idx]);
6:     lastPareto = idx;
7:   }
8: }

where Measure has property M.
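With the rules pre-sorted by decreasing support and, within equal support, by increasing anti-support, the border is collected in one linear pass, mirroring JumpAlg. A sketch (Python; the tuple layout of a rule is an assumption made for illustration):

def pareto_from_sorted(rules):
    # rules: (support, anti_support) tuples, sorted by support descending
    # and, within equal support, by anti-support ascending
    border, best_anti, last_sup = [], float("inf"), None
    for sup, anti in rules:
        if sup == last_sup:
            continue               # only the first rule per support level can qualify
        last_sup = sup
        if anti < best_anti:       # strictly better on the cost-type criterion
            border.append((sup, anti))
            best_anti = anti
    return border

rules = [(200, 100), (200, 120), (150, 99), (150, 110), (120, 99), (100, 50)]
print(pareto_from_sorted(rules))   # [(200, 100), (150, 99), (100, 50)]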
Due to the relationships between anti-support and the other considered measures (in particular Theorem 5.1 and Theorem 5.2), the fact that the rules are ordered with respect to anti-support implies that they are also ordered according to confidence, measure f or s, etc. This means that the above described way of searching for Pareto-optimal borders can also be applied to looking for Pareto-optimal rules with respect to support and confidence, support and f, etc. Moreover, since the support–anti-support Pareto-optimal border is the upper-set of all the considered Pareto-optimal sets, we can mine them simply from the support–anti-support Pareto-optimal border instead of searching the set of all rules. Visualization Unit is responsible for the presentation of the induced rules on diagrams. The user determines (through the User Interaction Unit) the conclusion with which the rules are to be displayed and the evaluation space. The user can view many charts at once in order to compare them. The thresholds for the evaluation criteria can be adjusted by the user, changing the set of rules that is presented. On the diagrams, rules with non-positive confirmation value are distinguished by color and shape from those with positive value. The user can limit the set of displayed rules only to the Pareto-optimal ones or view the whole set of rules with the chosen conclusion. The system presents the values of rule support and anti-support as relative values between 0 and 1. User Interaction Unit is a module that provides communication between the user and the other system components. All the user-set parameters (e.g. thresholds) are delivered to the system through this unit.
System Functionality

Association miner - general information. Association miner is an association rule mining program that utilizes the Apriori or FP-Growth algorithm. It enables the user to create and view charts presenting rules in the support–confidence, support–s, support–f and support–anti-support planes. Rules can be saved in special structures for later use as well as exported to Excel format. Association miner benefits are:
1. efficient Pareto-optimal rule generation in several measure evaluation spaces together in one step,
2. chart exemplification of important theoretical theses,
3. rule export capabilities.

System Requirements. Microsoft .Net Framework 2.0 or higher, Windows XP.

Installation. The program does not require any special installation steps and is ready to use as long as all of the system requirements are met.

Starting the program. To start Association miner, click the Association miner icon found in the installation directory.

Brief User Guide

Opening data files. To open a data file, simply choose Open Data File from the File menu option list or press Ctrl+O while using the main form of the program. You can choose from *.arff and MSWeb *.data file formats. When asked, type in the desired minimal support and optionally other measure requirements to generate rules. If you decide to cancel at this point, no rules will be generated but the database will be loaded.

Managing rule files. Once rules are generated, you can save them by accessing the Save Rule File option from the File menu. Rules can be saved to the default *.rff format or exported as an Excel sheet. Rules can also be loaded through the Open Rule File option from the File menu. Only *.rff files are supported by this option.

Chart and multichart creation. To create a chart or multichart, simply choose the desired option from the Options menu. Charts are only shown for rules with the same conclusion, so it is necessary to select one of the conclusions from the provided list. When a chart has been created, the user has the possibility to filter rules through various options:
1. Show only Pareto,
2. Show only selection,
3. Show invisible.
Figure 12 presents the main form of the program, which gives access to all the options provided by the Association Miner:
– premise: if clicked, sorts the rules according to the alphabetical order of the premise column
– conclusion: if clicked, sorts the rules according to the alphabetical order of the conclusion column
– supp: if clicked, sorts the rules according to the support column
– conf: if clicked, sorts the rules according to the confidence column
– s: if clicked, sorts the rules according to the confirmation-s value column
– f: if clicked, sorts the rules according to the confirmation-f value column
– a-supp: if clicked, sorts the rules according to the anti-support column
Fig. 12. Association Miner - main form
An example of the form with generated rules is presented in Figure 13. As shown in Figure 14, the algorithm for frequent itemset generation can be changed according to the user preferences. During the rule generation phase the user can set measure thresholds and assign a decision attribute if there is one (see Figure 15). The following values can be set:
– Minimal support value for rule generation: specifies the minimal support value that will be used to generate rules. This field must be filled in order to generate rules. The provided values must be greater than 0 and less than or equal to 1.
Fig. 13. Association Miner - main form with generated rules
Fig. 14. Association Miner - settings
– Minimal confidence: specifies the minimal confidence value that will be used to generate rules. The provided values must be between 0 and 1.
– Minimal confirmation-s: specifies the minimal confirmation-s value that will be used to generate rules. The provided values must be between -1 and 1.
– Maximal anti-support: specifies the maximal anti-support value that will be used to generate rules. The provided values must be between 0 and 1.
– Minimal confirmation-f: specifies the minimal confirmation-f value that will be used to generate rules. The provided values must be between -1 and 1.
– Decision attribute: specifies an attribute that was chosen by the user to be a decision attribute. If no decision attribute is specified, then all possible association rules are generated.
Fig. 15. Association Miner - thresholds
For the generated set of rules the user can create charts presenting rules with a chosen conclusion in different evaluation spaces, as shown in Figure 16. The following options can be set in the Chart creation dialog box (see Figure 17):
– Choose conclusion: allows the user to choose a conclusion for which the chart(s) will be created. Charts can only be created for rules with the same conclusion.
– Choose chart type:
  • confidence: creates a chart showing rules in the support–confidence plane.
  • confirmation s: creates a chart showing rules in the support–s plane.
  • confirmation f: creates a chart showing rules in the support–f plane.
  • anti-support: creates a chart showing rules in the support–anti-support plane.
  • all: when selected, creates all four of the above charts.
An example of created charts is presented in Figure 18. The user can adjust the charts to his needs by operating on the following options/parameters:
Fig. 16. Association Miner - charts
Fig. 17. Association Miner - chart creation parameters
– Show border: shows the lines separating rules with negative confirmation value. Whether this line is shown or not, one can distinguish the rules with positive confirmation values by their blue triangle shape (e.g. Figure 19).
– Show only Pareto: shows only rules that are Pareto-optimal in the specified plane (e.g. Figure 20). If the Show invisible checkbox is selected, then rules that are not Pareto-optimal will be shown as hollow shapes on the chart (e.g. Figure 21). Otherwise the rules that are not Pareto-optimal will not be shown at all.
Fig. 18. Association Miner - example charts
– Show only selection: shows on the chart only the rules that are within the selection lasso. The selection lasso is always drawn when the user presses the mouse on the chart picture and moves the mouse selecting rules (e.g. Figure 22).
– Show invisible: forces rules that should not be drawn (they are not Pareto-optimal/not within selection/are in the shaded part of the chart) to appear on the chart as hollow shapes.
– Minimal support: defines the minimal support for the shown rules. If the minimal support provided by the user in the textbox is lower than the one used to generate the rules, the field corresponding to that support will still stay shaded.
– Minimal y-axis value: defines the minimal (maximal in the case of anti-support) value of the measure on the y-axis of the chart. Rules that do not satisfy these conditions are not displayed on the diagram.
– Save image: saves the chart image in *.bmp format.
The user can also create a multichart suitable for chart comparison (e.g. Figure 23).
6.2   Examples of the System's Application
Census dataset. The census dataset is a selection from a dataset used by Kohavi et al. in [47]. It contains information about the financial and social status of the questioned people. The number of analyzed instances reached 32 561.
Fig. 19. Association Miner - border
Fig. 20. Association Miner - Pareto-optimal rules
Fig. 21. Association Miner - hidden rules
Fig. 22. Association Miner - selection
Fig. 23. Association Miner - multicharts
The chosen instances did not contain any missing values. They were described by 9 nominal attributes differing in domain sizes:
– workclass: Private, Local-gov, etc.;
– education: Bachelors, Some-college, etc.;
Fig. 24. Multichart presenting all rules with the conclusion workclass=”Private” generated with the 0.15 minimal support threshold
– marital-status: Married, Divorced, Never-married, etc.;
– occupation: Tech-support, Craft-repair, etc.;
– relationship: Wife, Own-child, Husband, etc.;
– race: White, Asian-Pac-Islander, etc.;
– sex: Female, Male;
– native-country: United-States, Cambodia, England, etc.;
– salary: >50K,