Lecture Notes in Artificial Intelligence Edited by J. G. Carbonell and J. Siekmann
Subseries of Lecture Notes in Computer Science
3613
Lipo Wang Yaochu Jin (Eds.)
Fuzzy Systems and Knowledge Discovery Second International Conference, FSKD 2005 Changsha, China, August 27-29, 2005 Proceedings, Part I
Series Editors
Jaime G. Carbonell, Carnegie Mellon University, Pittsburgh, PA, USA
Jörg Siekmann, University of Saarland, Saarbrücken, Germany

Volume Editors
Lipo Wang
Nanyang Technological University
School of Electrical and Electronic Engineering
Block S1, 50 Nanyang Avenue, Singapore 639798
E-mail: [email protected]

Yaochu Jin
Honda Research Institute Europe
Carl-Legien-Str. 30, 63073 Offenbach/Main, Germany
E-mail: [email protected]

Library of Congress Control Number: 2005930642
CR Subject Classification (1998): I.2, F.4.1, F.1, F.2, G.2, I.2.3, I.4, I.5

ISSN 0302-9743
ISBN-10 3-540-28312-9 Springer Berlin Heidelberg New York
ISBN-13 978-3-540-28312-6 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

Springer is a part of Springer Science+Business Media
springeronline.com

© Springer-Verlag Berlin Heidelberg 2005
Printed in Germany

Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper
SPIN: 11539506 06/3142 543210
Preface
This book and its sister volume, LNAI 3613 and 3614, constitute the proceedings of the Second International Conference on Fuzzy Systems and Knowledge Discovery (FSKD 2005), jointly held with the First International Conference on Natural Computation (ICNC 2005, LNCS 3610, 3611, and 3612) during August 27–29, 2005 in Changsha, Hunan, China. FSKD 2005 successfully attracted 1249 submissions from 32 countries/regions (the joint ICNC-FSKD 2005 received 3136 submissions). After rigorous reviews, 333 high-quality papers, i.e., 206 long papers and 127 short papers, were included in the FSKD 2005 proceedings, representing an acceptance rate of 26.7%.

The ICNC-FSKD 2005 conference featured the most up-to-date research results in computational algorithms inspired by nature, including biological, ecological, and physical systems. It is an exciting and emerging interdisciplinary area in which a wide range of techniques and methods are being studied for dealing with large, complex, and dynamic problems. The joint conferences also promoted cross-fertilization over these exciting and yet closely related areas, which had a significant impact on the advancement of these important technologies. Specific areas included computation with words, fuzzy computation, granular computation, neural computation, quantum computation, evolutionary computation, DNA computation, chemical computation, information processing in cells and tissues, molecular computation, artificial life, swarm intelligence, ant colonies, artificial immune systems, etc., with innovative applications to knowledge discovery, finance, operations research, and more.

In addition to the large number of submitted papers, we were blessed with the presence of four renowned keynote speakers and several distinguished panelists.

On behalf of the Organizing Committee, we thank Xiangtan University for sponsorship, and the IEEE Circuits and Systems Society, the IEEE Computational Intelligence Society, and the IEEE Control Systems Society for technical co-sponsorship. We are grateful for the technical cooperation from the International Neural Network Society, the European Neural Network Society, the Chinese Association for Artificial Intelligence, the Japanese Neural Network Society, the International Fuzzy Systems Association, the Asia-Pacific Neural Network Assembly, the Fuzzy Mathematics and Systems Association of China, and the Hunan Computer Federation.

We thank the members of the Organizing Committee, the Advisory Board, and the Program Committee for their hard work over the past 18 months. We wish to express our heartfelt appreciation to the keynote and panel speakers, special session organizers, session chairs, reviewers, and student helpers. Our special thanks go to the publisher, Springer, for publishing the FSKD 2005 proceedings as two volumes of the Lecture Notes in Artificial Intelligence series (and the ICNC 2005 proceedings as three volumes of the Lecture Notes in Computer Science series). Finally, we thank all the authors
and participants for their great contributions that made this conference possible and all the hard work worthwhile.

August 2005
Lipo Wang
Yaochu Jin
Organization
FSKD 2005 was organized by Xiangtan University and technically co-sponsored by the IEEE Circuits and Systems Society, the IEEE Computational Intelligence Society, and the IEEE Control Systems Society, in cooperation with the International Neural Network Society, the European Neural Network Society, the Chinese Association for Artificial Intelligence, the Japanese Neural Network Society, the International Fuzzy Systems Association, the Asia-Pacific Neural Network Assembly, the Fuzzy Mathematics and Systems Association of China, and the Hunan Computer Federation.
Organizing Committee

Honorary Conference Chairs: Shun-ichi Amari, Japan; Lotfi A. Zadeh, USA
General Chair: He-An Luo, China
General Co-chairs: Lipo Wang, Singapore; Yunqing Huang, China
Program Chair: Yaochu Jin, Germany
Local Arrangement Chairs: Renren Liu, China; Xieping Gao, China
Proceedings Chair: Fen Xiao, China
Publicity Chair: Hepu Deng, Australia
Sponsorship/Exhibits Chairs: Shaoping Ling, China; Geok See Ng, Singapore
Webmasters: Linai Kuang, China; Yanyu Liu, China
Advisory Board

Toshio Fukuda, Japan Kunihiko Fukushima, Japan Tom Gedeon, Australia Aike Guo, China Zhenya He, China Janusz Kacprzyk, Poland Nikola Kasabov, New Zealand John A. Keane, UK Soo-Young Lee, Korea Erkki Oja, Finland Nikhil R. Pal, India
Witold Pedrycz, Canada Jose C. Principe, USA Harold Szu, USA Shiro Usui, Japan Xindong Wu, USA Lei Xu, Hong Kong Xin Yao, UK Syozo Yasui, Japan Bo Zhang, China Yixin Zhong, China Jacek M. Zurada, USA
Program Committee Members

Janos Abonyi, Hungary Jorge Casillas, Spain Pen-Chann Chang, Taiwan Chaochang Chiu, Taiwan Feng Chu, Singapore Oscar Cordon, Spain Honghua Dai, Australia Fernando Gomide, Brazil Saman Halgamuge, Australia Kaoru Hirota, Japan Frank Hoffmann, Germany Jinglu Hu, Japan Weili Hu, China Chongfu Huang, China Eyke Hüllermeier, Germany Hisao Ishibuchi, Japan Frank Klawonn, Germany Naoyuki Kubota, Japan Sam Kwong, Hong Kong Zongmin Ma, China
Michael Margaliot, Israel Ralf Mikut, Germany Pabitra Mitra, India Tadahiko Murata, Japan Detlef Nauck, UK Hajime Nobuhara, Japan Andreas Nürnberger, Germany Da Ruan, Belgium Thomas Runkler, Germany Rudy Setiono, Singapore Takao Terano, Japan Kai Ming Ting, Australia Yiyu Yao, Canada Gary Yen, USA Xinghuo Yu, Australia Jun Zhang, China Shichao Zhang, Australia Yanqing Zhang, USA Zhi-Hua Zhou, China
Special Sessions Organizers

David Siu-Yeung Cho, Singapore Vlad Dimitrov, Australia Jinwu Gao, China Zheng Guo, China Bob Hodge, Australia Jiman Hong, Korea Jae-Woo Lee, Korea Xia Li, China
Zongmin Ma, China Geok-See Ng, Singapore Shaoqi Rao, China Slobodan Ribarić, Croatia Sung Y. Shin, USA Yasufumi Takama, Japan Robert Woog, Australia
Reviewers∗

Nitin V. Afzulpurkar Davut Akdas Kürşat Ayan Yasar Becerikli Dexue Bi Rong-Fang Bie
Liu Bin Tao Bo Hongbin Cai Yunze Cai Jian Cao Chunguang Chang
An-Long Chen Dewang Chen Gang Chen Guangzhu Chen Jian Chen Shengyong Chen
Shi-Jay Chen Xuerong Chen Yijiang Chen Zhimei Chen Zushun Chen Hongqi Chen Qimei Chen Wei Cheng Xiang Cheng Tae-Ho Cho Xun-Xue Cui Ho Daniel Hepu Deng Tingquan Deng Yong Deng Zhi-Hong Deng Mingli Ding Wei-Long Ding Fangyan Dong Jingxin Dong Lihua Dong Yihong Dong Haifeng Du Weifeng Du Liu Fang Zhilin Feng Li Gang Chuanhou Gao Yu Gao Zhi Geng O. Nezih Gerek Rongjie Gu Chonghui Guo Gongde Guo Huawei Guo Mengshu Guo Zhongming Han Bo He Pilian He Liu Hong Kongfa Hu Qiao Hu Shiqiang Hu Zhikun Hu Zhonghui Hu
Changchun Hua Jin Huang Qian Huang Yanxin Huang Yuansheng Huang Kohei Inoue Mahdi Jalili-Kharaajoo Caiyan Jia Ling-Ling Jiang Michael Jiang Xiaoyue Jiang Yanping Jiang Yunliang Jiang Cheng Jin Hanjun Jin Hong Jin Ningde Jin Xue-Bo Jin Min-Soo Kim Sungshin Kim Taehan Kim Ibrahim Beklan Kucukdemiral Rakesh Kumar Arya Ho Jae Lee Sang-Hyuk Lee Sang-Won Lee Wol Young Lee Xiuren Lei Bicheng Li Chunyan Li Dequan Li Dingfang Li Gang Li Hongyu Li Qing Li Ruqiang Li Tian-Rui Li Weigang Li Yu Li Zhichao Li Zhonghua Li Hongxing Li Xiaobei Liang
Ling-Zhi Liao Lei Lin Caixia Liu Fei Liu Guangli Liu Haowen Liu Honghai Liu Jian-Guo Liu Lanjuan Liu Peng Liu Qihe Liu Sheng Liu Xiaohua Liu Xiaojian Liu Yang Liu Qiang Luo Yanbin Luo Zhi-Jun Lv Jian Ma Jixin Ma Longhua Ma Ming Ma Yingcang Ma Dong Miao Zhinong Miao Fan Min Zhang Min Zhao Min Daniel Neagu Yiu-Kai Ng Wu-Ming Pan Jong Sou Park Yonghong Peng Punpiti Piamsa-Nga Heng-Nian Qi Gao Qiang Wu Qing Celia Ghedini Ralha Wang Rong Hongyuan Shen Zhenghao Shi Jeong-Hoon Shin Sung Chul Shin Chonghui Song Chunyue Song
Guangda Su Baolin Sun Changyin Sun Ling Sun Zhengxing Sun Chang-Jie Tang Shanhu Tang N K Tiwari Jiang Ping Wan Chong-Jun Wang Danli Wang Fang Wang Fei Wang Houfeng Wang Hui Wang Laisheng Wang Lin Wang Ling Wang Shitong Wang Shu-Bin Wang Xun Wang Yong Wang Zhe Wang Zhenlei Wang Zhongjie Wang Runsheng Wang Li Wei Weidong Wen Xiangjun Wen Taegkeun Whangbo Huaiyu Wu Jiangning Wu Jiangqin Wu
Jianping Wu Shunxiang Wu Xiaojun Wu Yuying Wu Changcheng Xiang Jun Xiao Xiaoming Xiao Wei Xie Gao Xin Zongyi Xing Hua Xu Lijun Xu Pengfei Xu Weijun Xu Xiao Xu Xinli Xu Yaoqun Xu De Xu Maode Yan Shaoze Yan Hai Dong Yang Jihui Yang Wei-Min Yang Yong Yang Zuyuan Yang Li Yao Shengbao Yao Bin Ye Guo Yi Jianwei Yin Xiang-Gang Yin Yilong Yin Deng Yong
Chun-Hai Yu Haibin Yuan Jixue Yuan Weiqi Yuan Chuanhua Zeng Wenyi Zeng Yurong Zeng Guojun Zhang Jian Ying Zhang Junping Zhang Ling Zhang Zhi-Zheng Zhang Yongjin Zhang Yongkui Zhang Jun Zhao Quanming Zhao Xin Zhao Yong Zhao Zhicheng Zhao Dongjian Zheng Wenming Zheng Zhonglong Zheng Weimin Zhong Hang Zhou Hui-Cheng Zhou Qiang Zhou Yuanfeng Zhou Yue Zhou Daniel Zhu Hongwei Zhu Xinglong Zhu
∗ The term after a name may represent either a country or a region.
Table of Contents – Part I
Fuzzy Theory and Models

On Fuzzy Inclusion in the Interval-Valued Sense
Jin Han Park, Jong Seo Park, Young Chel Kwun . . . . . . . . 1

Fuzzy Evaluation Based Multi-objective Reactive Power Optimization in Distribution Networks
Jiachuan Shi, Yutian Liu . . . . . . . . 11

Note on Interval-Valued Fuzzy Set
Wenyi Zeng, Yu Shi . . . . . . . . 20

Knowledge Structuring and Evaluation Based on Grey Theory
Chen Huang, Yushun Fan . . . . . . . . 26

A Propositional Calculus Formal Deductive System LU of Universal Logic and Its Completeness
Minxia Luo, Huacan He . . . . . . . . 31

Entropy and Subsethood for General Interval-Valued Intuitionistic Fuzzy Sets
Xiao-dong Liu, Su-hua Zheng, Feng-lan Xiong . . . . . . . . 42

The Comparative Study of Logical Operator Set and Its Corresponding General Fuzzy Rough Approximation Operator Set
Suhua Zheng, Xiaodong Liu, Fenglan Xiong . . . . . . . . 53

Associative Classification Based on Correlation Analysis
Jian Chen, Jian Yin, Jin Huang, Ming Feng . . . . . . . . 59

Design of Interpretable and Accurate Fuzzy Models from Data
Zong-yi Xing, Yong Zhang, Li-min Jia, Wei-li Hu . . . . . . . . 69

Generating Extended Fuzzy Basis Function Networks Using Hybrid Algorithm
Bin Ye, Chengzhi Zhu, Chuangxin Guo, Yijia Cao . . . . . . . . 79

Analysis of Temporal Uncertainty of Trains Converging Based on Fuzzy Time Petri Nets
Yangdong Ye, Juan Wang, Limin Jia . . . . . . . . 89
Interval Regression Analysis Using Support Vector Machine and Quantile Regression
Changha Hwang, Dug Hun Hong, Eunyoung Na, Hyejung Park, Jooyong Shim . . . . . . . . 100

An Approach Based on Similarity Measure to Multiple Attribute Decision Making with Trapezoid Fuzzy Linguistic Variables
Zeshui Xu . . . . . . . . 110

Research on Index System and Fuzzy Comprehensive Evaluation Method for Passenger Satisfaction
Yuanfeng Zhou, Jianping Wu, Yuanhua Jia . . . . . . . . 118

Research on Predicting Hydatidiform Mole Canceration Tendency by a Fuzzy Integral Model
Yecai Guo, Yi Guo, Wei Rao, Wei Ma . . . . . . . . 122

Consensus Measures and Adjusting Inconsistency of Linguistic Preference Relations in Group Decision Making
Zhi-Ping Fan, Xia Chen . . . . . . . . 130

Fuzzy Variation Coefficients Programming of Fuzzy Systems and Its Application
Xiaobei Liang, Daoli Zhu, Bingyong Tang . . . . . . . . 140

Weighted Possibilistic Variance of Fuzzy Number and Its Application in Portfolio Theory
Xun Wang, Weijun Xu, Weiguo Zhang, Maolin Hu . . . . . . . . 148

Another Discussion About Optimal Solution to Fuzzy Constraints Linear Programming
Yun-feng Tan, Bing-yuan Cao . . . . . . . . 156

Fuzzy Ultra Filters and Fuzzy G-Filters of MTL-Algebras
Xiao-hong Zhang, Yong-quan Wang, Yong-lin Liu . . . . . . . . 160

A Study on Relationship Between Fuzzy Rough Approximation Operators and Fuzzy Topological Spaces
Wei-Zhi Wu . . . . . . . . 167

A Case Retrieval Model Based on Factor-Structure Connection and λ-Similarity in Fuzzy Case-Based Reasoning
Dan Meng, Zaiqiang Zhang, Yang Xu . . . . . . . . 175
A TSK Fuzzy Inference Algorithm for Online Identification
Kyoungjung Kim, Eun Ju Whang, Chang-Woo Park, Euntai Kim, Mignon Park . . . . . . . . 179

Histogram-Based Generation Method of Membership Function for Extracting Features of Brain Tissues on MRI Images
Weibei Dou, Yuan Ren, Yanping Chen, Su Ruan, Daniel Bloyet, Jean-Marc Constans . . . . . . . . 189
Uncertainty Management in Data Mining

On Identity-Discrepancy-Contrary Connection Degree in SPA and Its Applications
Yunliang Jiang, Yueting Zhuang, Yong Liu, Keqin Zhao . . . . . . . . 195

A Mathematic Model for Automatic Summarization
Zhiqi Wang, Yongcheng Wang, Kai Gao . . . . . . . . 199

Reliable Data Selection with Fuzzy Entropy
Sang-Hyuk Lee, Youn-Tae Kim, Seong-Pyo Cheon, Sungshin Kim . . . . . . . . 203
Uncertainty Management and Probabilistic Methods in Data Mining

Optimization of Concept Discovery in Approximate Information System Based on FCA
Hanjun Jin, Changhua Wei, Xiaorong Wang, Jia Fu . . . . . . . . 213

Geometrical Probability Covering Algorithm
Junping Zhang, Stan Z. Li, Jue Wang . . . . . . . . 223
Approximate Reasoning

Extended Fuzzy ALCN and Its Tableau Algorithm
Jianjiang Lu, Baowen Xu, Yanhui Li, Dazhou Kang, Peng Wang . . . . . . . . 232

Type II Topological Logic C2T and Approximate Reasoning
Yalin Zheng, Changshui Zhang, Yinglong Xia . . . . . . . . 243

Type-I Topological Logic C1T and Approximate Reasoning
Yalin Zheng, Changshui Zhang, Xin Yao . . . . . . . . 253

Vagueness and Extensionality
Shunsuke Yatabe, Hiroyuki Inaoka . . . . . . . . 263
Using Fuzzy Analogical Reasoning to Refine the Query Answers for Relational Databases with Imprecise Information
Z.M. Ma, Li Yan, Gui Li . . . . . . . . 267

A Linguistic Truth-Valued Uncertainty Reasoning Model Based on Lattice-Valued Logic
Shuwei Chen, Yang Xu, Jun Ma . . . . . . . . 276
Axiomatic Foundation

Fuzzy Programming Model for Lot Sizing Production Planning Problem
Weizhen Yan, Jianhua Zhao, Zhe Cao . . . . . . . . 285

Fuzzy Dominance Based on Credibility Distributions
Jin Peng, Henry M.K. Mok, Wai-Man Tse . . . . . . . . 295

Fuzzy Chance-Constrained Programming for Capital Budgeting Problem with Fuzzy Decisions
Jinwu Gao, Jianhua Zhao, Xiaoyu Ji . . . . . . . . 304

Genetic Algorithms for Dissimilar Shortest Paths Based on Optimal Fuzzy Dissimilar Measure and Applications
Yinzhen Li, Ruichun He, Linzhong Liu, Yaohuang Guo . . . . . . . . 312

Convergence Criteria and Convergence Relations for Sequences of Fuzzy Random Variables
Yan-Kui Liu, Jinwu Gao . . . . . . . . 321

Hybrid Genetic-SPSA Algorithm Based on Random Fuzzy Simulation for Chance-Constrained Programming
Yufu Ning, Wansheng Tang, Hui Wang . . . . . . . . 332

Random Fuzzy Age-Dependent Replacement Policy
Song Xu, Jiashun Zhang, Ruiqing Zhao . . . . . . . . 336

A Theorem for Fuzzy Random Alternating Renewal Processes
Ruiqing Zhao, Wansheng Tang, Guofei Li . . . . . . . . 340

Three Equilibrium Strategies for Two-Person Zero-Sum Game with Fuzzy Payoffs
Lin Xu, Ruiqing Zhao, Tingting Shu . . . . . . . . 350
Fuzzy Classifiers

An Improved Rectangular Decomposition Algorithm for Imprecise and Uncertain Knowledge Discovery
Jiyoung Song, Younghee Im, Daihee Park . . . . . . . . 355

XPEV: A Storage Model for Well-Formed XML Documents
Jie Qin, Shu-Mei Zhao, Shu-Qiang Yang, Wen-Hua Dou . . . . . . . . 360

Fuzzy-Rough Set Based Nearest Neighbor Clustering Classification Algorithm
Xiangyang Wang, Jie Yang, Xiaolong Teng, Ningsong Peng . . . . . . . . 370

An Efficient Text Categorization Algorithm Based on Category Memberships
Zhi-Hong Deng, Shi-Wei Tang, Ming Zhang . . . . . . . . 374

The Integrated Location Algorithm Based on Fuzzy Identification and Data Fusion with Signal Decomposition
Zhao Ping, Haoshan Shi . . . . . . . . 383

A Web Document Classification Approach Based on Fuzzy Association Concept
Jingsheng Lei, Yaohong Kang, Chunyan Lu, Zhang Yan . . . . . . . . 388

Optimized Fuzzy Classification Using Genetic Algorithm
Myung Won Kim, Joung Woo Ryu . . . . . . . . 392

Dynamic Test-Sensitive Decision Trees with Multiple Cost Scales
Zhenxing Qin, Chengqi Zhang, Xuehui Xie, Shichao Zhang . . . . . . . . 402

Design of T–S Fuzzy Classifier via Linear Matrix Inequality Approach
Moon Hwan Kim, Jin Bae Park, Young Hoon Joo, Ho Jae Lee . . . . . . . . 406

Design of Fuzzy Rule-Based Classifier: Pruning and Learning
Do Wan Kim, Jin Bae Park, Young Hoon Joo . . . . . . . . 416

Fuzzy Sets Theory Based Region Merging for Robust Image Segmentation
Hongwei Zhu, Otman Basir . . . . . . . . 426

A New Interactive Segmentation Scheme Based on Fuzzy Affinity and Live-Wire
Huiguang He, Jie Tian, Yao Lin, Ke Lu . . . . . . . . 436
Fuzzy Clustering

The Fuzzy Mega-cluster: Robustifying FCM by Scaling Down Memberships
Amit Banerjee, Rajesh N. Davé . . . . . . . . 444

Robust Kernel Fuzzy Clustering
Weiwei Du, Kohei Inoue, Kiichi Urahama . . . . . . . . 454

Spatial Homogeneity-Based Fuzzy c-Means Algorithm for Image Segmentation
Bo-Yeong Kang, Dae-Won Kim, Qing Li . . . . . . . . 462

A Novel Fuzzy-Connectedness-Based Incremental Clustering Algorithm for Large Databases
Yihong Dong, Xiaoying Tai, Jieyu Zhao . . . . . . . . 470

Classification of MPEG VBR Video Data Using Gradient-Based FCM with Divergence Measure
Dong-Chul Park . . . . . . . . 475

Fuzzy-C-Mean Determines the Principal Component Pairs to Estimate the Degree of Emotion from Facial Expressions
M. Ashraful Amin, Nitin V. Afzulpurkar, Matthew N. Dailey, Vatcharaporn Esichaikul, Dentcho N. Batanov . . . . . . . . 484

An Improved Clustering Algorithm for Information Granulation
Qinghua Hu, Daren Yu . . . . . . . . 494

A Novel Segmentation Method for MR Brain Images Based on Fuzzy Connectedness and FCM
Xian Fan, Jie Yang, Lishui Cheng . . . . . . . . 505

Improved-FCM-Based Readout Segmentation and PRML Detection for Photochromic Optical Disks
Jiqi Jian, Cheng Ma, Huibo Jia . . . . . . . . 514

Fuzzy Reward Modeling for Run-Time Peer Selection in Peer-to-Peer Networks
Huaxiang Zhang, Xiyu Liu, Peide Liu . . . . . . . . 523

KFCSA: A Novel Clustering Algorithm for High-Dimension Data
Kan Li, Yushu Liu . . . . . . . . 531
Fuzzy Database Mining and Information Retrieval

An Improved VSM Based Information Retrieval System and Fuzzy Query Expansion
Jiangning Wu, Hiroki Tanioka, Shizhu Wang, Donghua Pan, Kenichi Yamamoto, Zhongtuo Wang . . . . . . . . 537

The Extraction of Image's Salient Points for Image Retrieval
Wenyin Zhang, Jianguo Tang, Chao Li . . . . . . . . 547

A Sentence-Based Copy Detection Approach for Web Documents
Rajiv Yerra, Yiu-Kai Ng . . . . . . . . 557

The Research on Query Expansion for Chinese Question Answering System
Zhengtao Yu, Xiaozhong Fan, Lirong Song, Jianyi Guo . . . . . . . . 571

Multinomial Approach and Multiple-Bernoulli Approach for Information Retrieval Based on Language Modeling
Hua Huo, Junqiang Liu, Boqin Feng . . . . . . . . 580

Adaptive Query Refinement Based on Global and Local Analysis
Chaoyuan Cui, Hanxiong Chen, Kazutaka Furuse, Nobuo Ohbo . . . . . . . . 584

Information Push-Delivery for User-Centered and Personalized Service
Zhiyun Xin, Jizhong Zhao, Chihong Chi, Jiaguang Sun . . . . . . . . 594

Mining Association Rules Based on Seed Items and Weights
Chen Xiang, Zhang Yi, Wu Yue . . . . . . . . 603

An Algorithm of Online Goods Information Extraction with Two-Stage Working Pattern
Wang Xun, Ling Yun, Yu-lian Fei . . . . . . . . 609

A Novel Method of Image Retrieval Based on Combination of Semantic and Visual Features
Ming Li, Tong Wang, Bao-wei Zhang, Bi-Cheng Ye . . . . . . . . 619

Using Fuzzy Pattern Recognition to Detect Unknown Malicious Executables Code
Boyun Zhang, Jianping Yin, Jingbo Hao . . . . . . . . 629

Method of Risk Discernment in Technological Innovation Based on Path Graph and Variable Weight Fuzzy Synthetic Evaluation
Yuan-sheng Huang, Jian-xun Qi, Jun-hua Zhou . . . . . . . . 635
Application of Fuzzy Similarity to Prediction of Epileptic Seizures Using EEG Signals
Xiaoli Li, Xin Yao . . . . . . . . 645

A Fuzzy Multicriteria Analysis Approach to the Optimal Use of Reserved Land for Agriculture
Hepu Deng, Guifang Yang . . . . . . . . 653

Fuzzy Comprehensive Evaluation for the Optimal Management of Responding to Oil Spill
Xin Liu, Kai W. Wirtz, Susanne Adam . . . . . . . . 662
Information Fusion

Fuzzy Fusion for Face Recognition
Xuerong Chen, Zhongliang Jing, Gang Xiao . . . . . . . . 672

A Group Decision Making Method for Integrating Outcome Preferences in Hypergame Situations
Yexin Song, Qian Wang, Zhijun Li . . . . . . . . 676

A Method Based on IA Operator for Multiple Attribute Group Decision Making with Uncertain Linguistic Information
Zeshui Xu . . . . . . . . 684

A New Prioritized Information Fusion Method for Handling Fuzzy Information Retrieval Problems
Won-Sin Hong, Shi-Jay Chen, Li-Hui Wang, Shyi-Ming Chen . . . . . . . . 694

Multi-context Fusion Based Robust Face Detection in Dynamic Environments
Mi Young Nam, Phill Kyu Rhee . . . . . . . . 698

Unscented Fuzzy Tracking Algorithm for Maneuvering Target
Shi-qiang Hu, Li-wei Guo, Zhong-liang Jing . . . . . . . . 708

A Pixel-Level Multisensor Image Fusion Algorithm Based on Fuzzy Logic
Long Zhao, Baochang Xu, Weilong Tang, Zhe Chen . . . . . . . . 717
Neuro-Fuzzy Systems

Approximation Bound for Fuzzy-Neural Networks with Bell Membership Function
Weimin Ma, Guoqing Chen . . . . . . . . 721
A Neuro-Fuzzy Method of Forecasting the Network Traffic of Accessing Web Server
Ai-Min Yang, Xing-Min Sun, Chang-Yun Li, Ping Liu . . . . . . . . 728

A Fuzzy Neural Network System Based on Generalized Class Cover Problem
Yanxin Huang, Yan Wang, Wengang Zhou, Chunguang Zhou . . . . . . . . 735

A Self-constructing Compensatory Fuzzy Wavelet Network and Its Applications
Haibin Yu, Qianjin Guo, Aidong Xu . . . . . . . . 743

A New Balancing Method for Flexible Rotors Based on Neuro-fuzzy System and Information Fusion
Shi Liu . . . . . . . . 757

Recognition of Identifiers from Shipping Container Images Using Fuzzy Binarization and Enhanced Fuzzy Neural Network
Kwang-Baek Kim . . . . . . . . 761

Directed Knowledge Discovery Methodology for the Prediction of Ozone Concentration
Seong-Pyo Cheon, Sungshin Kim . . . . . . . . 772

Application of Fuzzy Systems in the Car-Following Behaviour Analysis
Pengjun Zheng, Mike McDonald . . . . . . . . 782
Fuzzy Control

GA-Based Composite Sliding Mode Fuzzy Control for Double-Pendulum-Type Overhead Crane
Diantong Liu, Weiping Guo, Jianqiang Yi . . . . . . . . 792

A Balanced Model Reduction for T-S Fuzzy Systems with Integral Quadratic Constraints
Seog-Hwan Yoo, Byung-Jae Choi . . . . . . . . 802

An Integrated Navigation System of NGIMU/GPS Using a Fuzzy Logic Adaptive Kalman Filter
Mingli Ding, Qi Wang . . . . . . . . 812

Method of Fuzzy-PID Control on Vehicle Longitudinal Dynamics System
Yinong Li, Zheng Ling, Yang Liu, Yanjuan Qiao . . . . . . . . 822
Design of Fuzzy Controller and Parameter Optimizer for Non-linear System Based on Operator's Knowledge
Hyeon Bae, Sungshin Kim, Yejin Kim . . . . . . . . 833

A New Pre-processing Method for Multi-channel Echo Cancellation Based on Fuzzy Control
Xiaolu Li, Wang Jie, Shengli Xie . . . . . . . . 837

Robust Adaptive Fuzzy Control for Uncertain Nonlinear Systems
Chen Gang, Shuqing Wang, Jianming Zhang . . . . . . . . 841

Intelligent Fuzzy Systems for Aircraft Landing Control
Jih-Gau Juang, Bo-Shian Lin, Kuo-Chih Chin . . . . . . . . 851

Scheduling Design of Controllers with Fuzzy Deadline
Hong Jin, Hongan Wang, Hui Wang, Danli Wang . . . . . . . . 861

A Preference Method with Fuzzy Logic in Service Scheduling of Grid Computing
Yanxiang He, Haowen Liu, Weidong Wen, Hui Jin . . . . . . . . 865

H∞ Robust Fuzzy Control of Ultra-High Rise/High Speed Elevators with Uncertainty
Hu Qing, Qingding Guo, Dongmei Yu, Xiying Ding . . . . . . . . 872

A Dual-Mode Fuzzy Model Predictive Control Scheme for Unknown Continuous Nonlinear System
Chonghui Song, Shucheng Yang, Hui Yang, Huaguang Zhang, Tianyou Chai . . . . . . . . 876

Fuzzy Modeling Strategy for Control of Nonlinear Dynamical Systems
Bin Ye, Chengzhi Zhu, Chuangxin Guo, Yijia Cao . . . . . . . . 882

Intelligent Digital Control for Nonlinear Systems with Multirate Sampling
Do Wan Kim, Jin Bae Park, Young Hoon Joo . . . . . . . . 886

Feedback Control of Humanoid Robot Locomotion
Xusheng Lei, Jianbo Su . . . . . . . . 890

Application of Computational Intelligence (Fuzzy Logic, Neural Networks and Evolutionary Programming) to Active Networking Technology
Mehdi Galily, Farzad Habibipour Roudsari, Mohammadreza Sadri . . . . . . . . 900
Fuel-Efficient Maneuvers for Constellation Initialization Using Fuzzy Logic Control
Mengfei Yang, Honghua Zhang, Rucai Che, Zengqi Sun . . . . . . . . 910

Design of Interceptor Guidance Law Using Fuzzy Logic
Ya-dong Lu, Ming Yang, Zi-cai Wang . . . . . . . . 922

Relaxed LMIs Observer-Based Controller Design via Improved T-S Fuzzy Model Structure
Wei Xie, Huaiyu Wu, Xin Zhao . . . . . . . . 930

Fuzzy Virtual Coupling Design for High Performance Haptic Display
D. Bi, J. Zhang, G.L. Wang . . . . . . . . 942

Linguistic Model for the Controlled Object
Zhinong Miao, Xiangyu Zhao, Yang Xu . . . . . . . . 950

Fuzzy Sliding Mode Control for Uncertain Nonlinear Systems
Shao-Cheng Qu, Yong-Ji Wang . . . . . . . . 960

Fuzzy Control of Nonlinear Pipeline Systems with Bounds on Output Peak
Fei Liu, Jun Chen . . . . . . . . 969

Grading Fuzzy Sliding Mode Control in AC Servo System
Hu Qing, Qingding Guo, Dongmei Yu, Xiying Ding . . . . . . . . 977

A Robust Single Input Adaptive Sliding Mode Fuzzy Logic Controller for Automotive Active Suspension System
Ibrahim B. Kucukdemiral, Seref N. Engin, Vasfi E. Omurlu, Galip Cansever . . . . . . . . 981

Construction of Fuzzy Models for Dynamic Systems Using Multi-population Cooperative Particle Swarm Optimizer
Ben Niu, Yunlong Zhu, Xiaoxian He . . . . . . . . 987

Human Clustering for a Partner Robot Based on Computational Intelligence
Indra Adji Sulistijono, Naoyuki Kubota . . . . . . . . 1001

Fuzzy Switching Controller for Multiple Model
Baozhu Jia, Guang Ren, Zhihong Xiu . . . . . . . . 1011

Generation of Fuzzy Rules and Learning Algorithms for Cooperative Behavior of Autonomous Mobile Robots (AMRs)
Jang-Hyun Kim, Jin-Bae Park, Hyun-Seok Yang, Young-Pil Park . . . . . . . . 1015
UML-Based Design and Fuzzy Control of Automated Vehicles
Abdelkader El Kamel, Jean-Pierre Bourey . . . . . . . . 1025
Fuzzy Hardware

Design of an Analog Adaptive Fuzzy Logic Controller
Zhihao Xu, Dongming Jin, Zhijian Li . . . . . . . . 1034

VLSI Implementation of a Self-tuning Fuzzy Controller Based on Variable Universe of Discourse
Weiwei Shan, Dongming Jin, Weiwei Jin, Zhihao Xu . . . . . . . . 1044
Knowledge Visualization and Exploration

Method to Balance the Communication Among Multi-agents in Real Time Traffic Synchronization
Li Weigang, Marcos Vinícius Pinheiro Dib, Alba Cristina Magalhães de Melo . . . . . . . . 1053

A Celerity Association Rules Method Based on Data Sort Search
Zhiwei Huang, Qin Liao . . . . . . . . 1063

Using Web Services to Create the Collaborative Model for Enterprise Digital Content Portal
Ruey-Ming Chao, Chin-Wen Yang . . . . . . . . 1067

Emotion-Based Textile Indexing Using Colors and Texture
Eun Yi Kim, Soo-jeong Kim, Hyun-jin Koo, Karpjoo Jeong, Jee-in Kim . . . . . . . . 1077

Optimal Space Launcher Design Using a Refined Response Surface Method
Jae-Woo Lee, Kwon-Su Jeon, Yung-Hwan Byun, Sang-Jin Kim . . . . . . . . 1081

MEDIC: A MDO-Enabling Distributed Computing Framework
Shenyi Jin, Kwangsik Kim, Karpjoo Jeong, Jaewoo Lee, Jonghwa Kim, Hoyon Hwang, Hae-Gook Suh . . . . . . . . 1092

Time and Space Efficient Search for Small Alphabets with Suffix Arrays
Jeong Seop Sim . . . . . . . . 1102

Optimal Supersonic Air-Launching Rocket Design Using Multidisciplinary System Optimization Approach
Jae-Woo Lee, Young Chang Choi, Yung-Hwan Byun . . . . . . . . 1108
Numerical Visualization of Flow Instability in Microchannel Considering Surface Wettability
Doyoung Byun, Budiono, Ji Hye Yang, Changjin Lee, Ki Won Lim . . . . . . . . 1113

An Interactive Molecular Modeling System Based on Web Service
Sungjun Park, Bosoon Kim, Jee-In Kim . . . . . . . . 1117

On the Filter Size of DMM for Passive Scalar in Complex Flow
Yang Na, Dongshin Shin, Seungbae Lee . . . . . . . . 1127

Visualization Process for Design and Manufacturing of End Mills
Sung-Lim Ko, Trung-Thanh Pham, Yong-Hyun Kim . . . . . . . . 1133

IP Address Lookup with the Visualizable Biased Segment Tree
Inbok Lee, Jeong-Shik Mun, Sung-Ryul Kim . . . . . . . . 1137

A Surface Reconstruction Algorithm Using Weighted Alpha Shapes
Si Hyung Park, Seoung Soo Lee, Jong Hwa Kim . . . . . . . . 1141
Sequential Data Analysis

HYBRID: From Atom-Clusters to Molecule-Clusters
Zhou Bing, Jun-yi Shen, Qin-ke Peng . . . . . . . . 1151

A Fuzzy Adaptive Filter for State Estimation of Unknown Structural System and Evaluation for Sound Environment
Akira Ikuta, Hisako Masuike, Yegui Xiao, Mitsuo Ohta . . . . . . . . 1161

Preventing Meaningless Stock Time Series Pattern Discovery by Changing Perceptually Important Point Detection
Tak-chung Fu, Fu-lai Chung, Robert Luk, Chak-man Ng . . . . . . . . 1171

Discovering Frequent Itemsets Using Transaction Identifiers
Duckjin Chai, Heeyoung Choi, Buhyun Hwang . . . . . . . . 1175

Incremental DFT Based Search Algorithm for Similar Sequence
Quan Zheng, Zhikai Feng, Ming Zhu . . . . . . . . 1185
Parallel and Distributed Data Mining

Computing High Dimensional MOLAP with Parallel Shell Mini-cubes
Kong-fa Hu, Chen Ling, Shen Jie, Gu Qi, Xiao-li Tang . . . . . . . . 1192
Sampling Ensembles for Frequent Patterns
Caiyan Jia, Ruqian Lu . . . . . . . . 1197

Distributed Data Mining on Clusters with Bayesian Mixture Modeling
M. Viswanathan, Y.K. Yang, T.K. Whangbo . . . . . . . . 1207

A Method of Data Classification Based on Parallel Genetic Algorithm
Yuexiang Shi, Zuqiang Meng, Zixing Cai, B. Benhabib . . . . . . . . 1217
Rough Sets

Rough Computation Based on Similarity Matrix
Huang Bing, Guo Ling, He Xin, Xian-zhong Zhou . . . . . . . . 1223

The Relationship Among Several Knowledge Reduction Approaches
Keyun Qin, Zheng Pei, Weifeng Du . . . . . . . . 1232

Rough Approximation of a Preference Relation for Stochastic Multi-attribute Decision Problems
Chaoyuan Yue, Shengbao Yao, Peng Zhang, Wanan Cui . . . . . . . . 1242

Incremental Target Recognition Algorithm Based on Improved Discernibility Matrix
Liu Yong, Xu Congfu, Yan Zhiyong, Pan Yunhe . . . . . . . . 1246

Problems Relating to the Phonetic Encoding of Words in the Creation of a Phonetic Spelling Recognition Program
Michael Higgins, Wang Shudong . . . . . . . . 1256

Diversity Measure for Multiple Classifier Systems
Qinghua Hu, Daren Yu . . . . . . . . 1261

A Successive Design Method of Rough Controller Using Extra Excitation
Geng Wang, Jun Zhao, Jixin Qian . . . . . . . . 1266

A Soft Sensor Model Based on Rough Set Theory and Its Application in Estimation of Oxygen Concentration
Xingsheng Gu, Dazhong Sun . . . . . . . . 1271

A Divide-and-Conquer Discretization Algorithm
Fan Min, Lijun Xie, Qihe Liu, Hongbin Cai . . . . . . . . 1277

A Hybrid Classifier Based on Rough Set Theory and Support Vector Machines
Gexiang Zhang, Zhexin Cao, Yajun Gu . . . . . . . . 1287
A Heuristic Algorithm for Maximum Distribution Reduction
Xiaobing Pei, YuanZhen Wang . . . . . . . . 1297

The Minimization of Axiom Sets Characterizing Generalized Fuzzy Rough Approximation Operators
Xiao-Ping Yang . . . . . . . . 1303

The Representation and Resolution of Rough Sets Based on the Extended Concept Lattice
Xuegang Hu, Yuhong Zhang, Xinya Wang . . . . . . . . 1309

Study of Integrate Models of Rough Sets and Grey Systems
Wu Shunxiang, Liu Sifeng, Li Maoqing . . . . . . . . 1313

Author Index . . . . . . . . 1325
Table of Contents – Part II
Dimensionality Reduction

Dimensionality Reduction for Semi-supervised Face Recognition
Weiwei Du, Kohei Inoue, Kiichi Urahama . . . . . . . . 1

Cross-Document Transliterated Personal Name Coreference Resolution
Houfeng Wang . . . . . . . . 11

Difference-Similitude Matrix in Text Classification
Xiaochun Huang, Ming Wu, Delin Xia, Puliu Yan . . . . . . . . 21

A Study on Feature Selection for Toxicity Prediction
Gongde Guo, Daniel Neagu, Mark T.D. Cronin . . . . . . . . 31

Application of Feature Selection for Unsupervised Learning in Prosecutors' Office
Peng Liu, Jiaxian Zhu, Lanjuan Liu, Yanhong Li, Xuefeng Zhang . . . . . . . . 35

A Novel Field Learning Algorithm for Dual Imbalance Text Classification
Ling Zhuang, Honghua Dai, Xiaoshu Hang . . . . . . . . 39

Supervised Learning for Classification
Hongyu Li, Wenbin Chen, I-Fan Shen . . . . . . . . 49

Feature Selection for Hyperspectral Data Classification Using Double Parallel Feedforward Neural Networks
Mingyi He, Rui Huang . . . . . . . . 58

Robust Nonlinear Dimension Reduction: A Self-organizing Approach
Yuexian Hou, Liyue Yao, Pilian He . . . . . . . . 67

An Effective Feature Selection Scheme via Genetic Algorithm Using Mutual Information
Chunkai K. Zhang, Hong Hu . . . . . . . . 73
Pattern Recognition and Trend Analysis

Pattern Classification Using Rectified Nearest Feature Line Segment
Hao Du, Yan Qiu Chen . . . . . . . . 81

Palmprint Identification Algorithm Using Hu Invariant Moments
Jin Soo Noh, Kang Hyeon Rhee . . . . . . . . 91

Generalized Locally Nearest Neighbor Classifiers for Object Classification
Wenming Zheng, Cairong Zou, Li Zhao . . . . . . . . 95

Nearest Neighbor Classification Using Cam Weighted Distance
Chang Yin Zhou, Yan Qiu Chen . . . . . . . . 100

A PPM Prediction Model Based on Web Objects' Popularity
Lei Shi, Zhimin Gu, Yunxia Pei, Lin Wei . . . . . . . . 110

An On-line Sketch Recognition Algorithm for Composite Shape
Zhan Ding, Yin Zhang, Wei Peng, Xiuzi Ye, Huaqiang Hu . . . . . . . . 120

Axial Representation of Character by Using Wavelet Transform
Xinge You, Bin Fang, Yuan Yan Tang, Luoqing Li, Dan Zhang . . . . . . . . 130

Representing and Recognizing Scenario Patterns
Jixin Ma, Bin Luo . . . . . . . . 140

A Hybrid Artificial Intelligent-Based Criteria-Matching with Classification Algorithm
Alex T.H. Sim, Vincent C.S. Lee . . . . . . . . 150

Auto-generation of Detection Rules with Tree Induction Algorithm
Minsoo Kim, Jae-Hyun Seo, Il-Ahn Cheong, Bong-Nam Noh . . . . . . . . 160

Hand Gesture Recognition System Using Fuzzy Algorithm and RDBMS for Post PC
Jung-Hyun Kim, Dong-Gyu Kim, Jeong-Hoon Shin, Sang-Won Lee, Kwang-Seok Hong . . . . . . . . 170

An Ontology-Based Method for Project and Domain Expert Matching
Jiangning Wu, Guangfei Yang . . . . . . . . 176

Pattern Classification and Recognition of Movement Behavior of Medaka (Oryzias Latipes) Using Decision Tree
Sengtai Lee, Jeehoon Kim, Jae-Yeon Baek, Man-Wi Han, Tae-Soo Chon . . . . . . . . 186
A New Algorithm for Computing the Minimal Enclosing Sphere in Feature Space
Chonghui Guo, Mingyu Lu, Jiantao Sun, Yuchang Lu . . . . . . . . 196

Y-AOI: Y-Means Based Attribute Oriented Induction Identifying Root Cause for IDSs
Jungtae Kim, Gunhee Lee, Jung-taek Seo, Eung-ki Park, Choon-sik Park, Dong-kyoo Kim . . . . . . . . 205

New Segmentation Algorithm for Individual Offline Handwritten Character Segmentation
K.B.M.R. Batuwita, G.E.M.D.C. Bandara . . . . . . . . 215

A Method Based on the Continuous Spectrum Analysis for Fingerprint Image Ridge Distance Estimation
Xiaosi Zhan, Zhaocai Sun, Yilong Yin, Yayun Chu . . . . . . . . 230

A Method Based on the Markov Chain Monte Carlo for Fingerprint Image Segmentation
Xiaosi Zhan, Zhaocai Sun, Yilong Yin, Yun Chen . . . . . . . . 240

Unsupervised Speaker Adaptation for Phonetic Transcription Based Voice Dialing
Weon-Goo Kim, MinSeok Jang, Chin-Hui Lee . . . . . . . . 249

A Phase-Field Based Segmentation Algorithm for Jacquard Images Using Multi-start Fuzzy Optimization Strategy
Zhilin Feng, Jianwei Yin, Hui Zhang, Jinxiang Dong . . . . . . . . 255

Dynamic Modeling, Prediction and Analysis of Cytotoxicity on Microelectronic Sensors
Biao Huang, James Z. Xing . . . . . . . . 265

Generalized Fuzzy Morphological Operators
Tingquan Deng, Yanmei Chen . . . . . . . . 275

Signature Verification Method Based on the Combination of Shape and Dynamic Feature
Yingna Deng, Hong Zhu, Shu Li, Tao Wang . . . . . . . . 285

Study on the Matching Similarity Measure Method for Image Target Recognition
Xiaogang Yang, Dong Miao, Fei Cao, Yongkang Ma . . . . . . . . 289

3-D Head Pose Estimation for Monocular Image
Yingjie Pan, Hong Zhu, Ruirui Ji . . . . . . . . 293
The Speech Recognition Based on the Bark Wavelet Front-End Processing
Xueying Zhang, Zhiping Jiao, Zhefeng Zhao . . . . . . . . 302

An Accurate and Fast Iris Location Method Based on the Features of Human Eyes
Weiqi Yuan, Lu Xu, Zhonghua Lin . . . . . . . . 306

A Hybrid Classifier for Mass Classification with Different Kinds of Features in Mammography
Ping Zhang, Kuldeep Kumar, Brijesh Verma . . . . . . . . 316

Data Mining Methods for Anomaly Detection of HTTP Request Exploitations
Xiao-Feng Wang, Jing-Li Zhou, Sheng-Sheng Yu, Long-Zheng Cai . . . . . . . . 320

Exploring Content-Based and Image-Based Features for Nude Image Detection
Shi-lin Wang, Hong Hui, Sheng-hong Li, Hao Zhang, Yong-yu Shi, Wen-tao Qu . . . . . . . . 324

Collision Recognition and Direction Changes Using Fuzzy Logic for Small Scale Fish Robots by Acceleration Sensor Data
Seung Y. Na, Daejung Shin, Jin Y. Kim, Su-Il Choi . . . . . . . . 329

Fault Diagnosis Approach Based on Qualitative Model of Signed Directed Graph and Reasoning Rules
Bingshu Wang, Wenliang Cao, Liangyu Ma, Ji Zhang . . . . . . . . 339

Visual Tracking Algorithm for Laparoscopic Robot Surgery
Min-Seok Kim, Jin-Seok Heo, Jung-Ju Lee . . . . . . . . 344

Toward a Sound Analysis System for Telemedicine
Cong Phuong Nguyen, Thi Ngoc Yen Pham, Castelli Eric . . . . . . . . 352
Other Topics in FSKD Methods

Structural Learning of Graphical Models and Its Applications to Traditional Chinese Medicine
Ke Deng, Delin Liu, Shan Gao, Zhi Geng . . . . . . . . 362

Study of Ensemble Strategies in Discovering Linear Causal Models
Gang Li, Honghua Dai . . . . . . . . 368
The Entropy of Relations and a New Approach for Decision Tree Learning
Dan Hu, HongXing Li . . . . . . . . 378

Effectively Extracting Rules from Trained Neural Networks Based on the New Measurement Method of the Classification Power of Attributes
Dexian Zhang, Yang Liu, Ziqiang Wang . . . . . . . . 388

EDTs: Evidential Decision Trees
Huawei Guo, Wenkang Shi, Feng Du . . . . . . . . 398

GSMA: A Structural Matching Algorithm for Schema Matching in Data Warehousing
Wei Cheng, Yufang Sun . . . . . . . . 408

A New Algorithm to Get the Correspondences from the Image Sequences
Zhiquan Feng, Xiangxu Meng, Chenglei Yang . . . . . . . . 412

An Efficient Algorithm Based on Itemsets-Lattice and Bitmap Index for Finding Frequent Itemsets
Fuzan Chen, Minqiang Li . . . . . . . . 420

Weighted Fuzzy Queries in Relational Databases
Ying-Chao Zhang, Yi-Fei Chen, Xiao-ling Ye, Jie-Liang Zheng . . . . . . . . 430

Study of Multiuser Detection: The Support Vector Machine Approach
Tao Yang, Bo Hu . . . . . . . . 442

Robust and Adaptive Backstepping Control for Nonlinear Systems Using Fuzzy Logic Systems
Gang Chen, Shuqing Wang, Jianming Zhang . . . . . . . . 452

Online Mining Dynamic Web News Patterns Using Machine Learning Methods
Jian-Wei Liu, Shou-Jian Yu, Jia-Jin Le . . . . . . . . 462

A New Fuzzy MCDM Method Based on Trapezoidal Fuzzy AHP and Hierarchical Fuzzy Integral
Chao Zhang, Cun-bao Ma, Jia-dong Xu . . . . . . . . 466

Fast Granular Analysis Based on Watershed in Microscopic Mineral Images
Danping Zou, Desheng Hu, Qizhen Liu . . . . . . . . 475
Cost-Sensitive Ensemble of Support Vector Machines for Effective Detection of Microcalcification in Breast Cancer Diagnosis
Yonghong Peng, Qian Huang, Ping Jiang, Jianmin Jiang . . . . . . . . 483

High-Dimensional Shared Nearest Neighbor Clustering Algorithm
Jian Yin, Xianli Fan, Yiqun Chen, Jiangtao Ren . . . . . . . . 494

A New Method for Fuzzy Group Decision Making Based on α-Level Cut and Similarity
Jibin Lan, Liping He, Zhongxing Wang . . . . . . . . 503

Modeling Nonlinear Systems: An Approach of Boosted Linguistic Models
Keun-Chang Kwak, Witold Pedrycz, Myung-Geun Chun . . . . . . . . 514

Multi-criterion Fuzzy Optimization Approach to Imaging from Incomplete Projections
Xin Gao, Shuqian Luo . . . . . . . . 524

Transductive Knowledge Based Fuzzy Inference System for Personalized Modeling
Qun Song, Tianmin Ma, Nikola Kasabov . . . . . . . . 528

A Sampling-Based Method for Mining Frequent Patterns from Databases
Yen-Liang Chen, Chin-Yuan Ho . . . . . . . . 536

Lagrange Problem in Fuzzy Reversed Posynomial Geometric Programming
Bing-yuan Cao . . . . . . . . 546

Direct Candidates Generation: A Novel Algorithm for Discovering Complete Share-Frequent Itemsets
Yu-Chiang Li, Jieh-Shan Yeh, Chin-Chen Chang . . . . . . . . 551

A Three-Step Preprocessing Algorithm for Minimizing E-Mail Document's Atypical Characteristics
Ok-Ran Jeong, Dong-Sub Cho . . . . . . . . 561

Failure Detection Method Based on Fuzzy Comprehensive Evaluation for Integrated Navigation System
Guoliang Liu, Yingchun Zhang, Wenyi Qiang, Zengqi Sun . . . . . . . . 567

Product Quality Improvement Analysis Using Data Mining: A Case Study in Ultra-Precision Manufacturing Industry
Hailiang Huang, Dianliang Wu . . . . . . . . 577
Two-Tier Based Intrusion Detection System
Byung-Joo Kim, Il Kon Kim . . . . . . . . 581

SuffixMiner: Efficiently Mining Frequent Itemsets in Data Streams by Suffix-Forest
Lifeng Jia, Chunguang Zhou, Zhe Wang, Xiujuan Xu . . . . . . . . 592

Improvement of Lee-Kim-Yoo's Remote User Authentication Scheme Using Smart Cards
Da-Zhi Sun, Zhen-Fu Cao . . . . . . . . 596
Mining of Spatial, Textual, Image and Time-Series Data

Grapheme-to-Phoneme Conversion Based on a Fast TBL Algorithm in Mandarin TTS Systems
Min Zheng, Qin Shi, Wei Zhang, Lianhong Cai . . . . . . . . 600

Clarity Ranking for Digital Images
Shutao Li, Guangsheng Chen . . . . . . . . 610

Attribute Uncertainty in GIS Data
Shuliang Wang, Wenzhong Shi, Hanning Yuan, Guoqing Chen . . . . . . . . 614

Association Classification Based on Sample Weighting
Jin Zhang, Xiaoyun Chen, Yi Chen, Yunfa Hu . . . . . . . . 624

Using Fuzzy Logic for Automatic Analysis of Astronomical Pipelines
Lior Shamir, Robert J. Nemiroff . . . . . . . . 634

On the On-line Learning Algorithms for EEG Signal Classification in Brain Computer Interfaces
Shiliang Sun, Changshui Zhang, Naijiang Lu . . . . . . . . 638

Automatic Keyphrase Extraction from Chinese News Documents
Houfeng Wang, Sujian Li, Shiwen Yu . . . . . . . . 648

A New Model of Document Structure Analysis
Zhiqi Wang, Yongcheng Wang, Kai Gao . . . . . . . . 658

Prediction for Silicon Content in Molten Iron Using a Combined Fuzzy-Associative-Rules Bank
Shi-hua Luo, Xiang-guan Liu, Min Zhao . . . . . . . . 667
An Investigation into the Use of Delay Coordinate Embedding Technique with MIMO ANFIS for Nonlinear Prediction of Chaotic Signals
Jun Zhang, Weiwei Dai, Muhui Fan, Henry Chung, Zhi Wei, D. Bi . . . . . . . . 677

Replay Scene Based Sports Video Abstraction
Jian-quan Ouyang, Jin-tao Li, Yong-dong Zhang . . . . . . . . 689

Mapping Web Usage Patterns to MDP Model and Mining with Reinforcement Learning
Yang Gao, Zongwei Luo, Ning Li . . . . . . . . 698

Study on Wavelet-Based Fuzzy Multiscale Edge Detection Method
Wen Zhu, Beiping Hou, Zhegen Zhang, Kening Zhou . . . . . . . . 703

Sense Rank AALesk: A Semantic Solution for Word Sense Disambiguation
Yiqun Chen, Jian Yin . . . . . . . . 710

Automatic Video Knowledge Mining for Summary Generation Based on Un-supervised Statistical Learning
Jian Ling, Yiqun Lian, Yueting Zhuang . . . . . . . . 718

A Model for Classification of Topological Relationships Between Two Spatial Objects
Wu Yang, Ya Luo, Ping Guo, HuangFu Tao, Bo He . . . . . . . . 723

A New Feature of Uniformity of Image Texture Directions Coinciding with the Human Eyes Perception
Xing-Jian He, Yue Zhang, Tat-Ming Lok, Michael R. Lyu . . . . . . . . 727

Sunspot Time Series Prediction Using Parallel-Structure Fuzzy System
Min-Soo Kim, Chan-Soo Chung . . . . . . . . 731

A Similarity Computing Algorithm for Volumetric Data Sets
Tao Zhang, Wei Chen, Min Hu, Qunsheng Peng . . . . . . . . 742

Extraction of Representative Keywords Considering Co-occurrence in Positive Documents
Byeong-Man Kim, Qing Li, KwangHo Lee, Bo-Yeong Kang . . . . . . . . 752

On the Effective Similarity Measures for the Similarity-Based Pattern Retrieval in Multidimensional Sequence Databases
Seok-Lyong Lee, Ju-Hong Lee, Seok-Ju Chun . . . . . . . . 762
Crossing the Language Barrier Using Fuzzy Logic Rowena Chau, Chung-Hsing Yeh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 768 New Algorithm Mining Intrusion Patterns Wu Liu, Jian-Ping Wu, Hai-Xin Duan, Xing Li . . . . . . . . . . . . . . . . . . . 774 Dual Filtering Strategy for Chinese Term Extraction Xiaoming Chen, Xuening Li, Yi Hu, Ruzhan Lu . . . . . . . . . . . . . . . . . . . 778 White Blood Cell Segmentation and Classification in Microscopic Bone Marrow Images Nipon Theera-Umpon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 787 KNN Based Evolutionary Techniques for Updating Query Cost Models Zhining Liao, Hui Wang, David Glass, Gongde Guo . . . . . . . . . . . . . . . . 797 A SVM Method for Web Page Categorization Based on Weight Adjustment and Boosting Mechanism Mingyu Lu, Chonghui Guo, Jiantao Sun, Yuchang Lu . . . . . . . . . . . . . . 801
Fuzzy Systems in Bioinformatics and Bio-medical Engineering Feature Selection for Specific Antibody Deficiency Syndrome by Neural Network with Weighted Fuzzy Membership Functions Joon S. Lim, Tae W. Ryu, Ho J. Kim, Sudhir Gupta . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811 Evaluation and Fuzzy Classification of Gene Finding Programs on Human Genome Sequences Atulya Nagar, Sujita Purushothaman, Hissam Tawfik . . . . . . . . . . . . . . . 821 Application of a Genetic Algorithm — Support Vector Machine Hybrid for Prediction of Clinical Phenotypes Based on Genome-Wide SNP Profiles of Sib Pairs Binsheng Gong, Zheng Guo, Jing Li, Guohua Zhu, Sali Lv, Shaoqi Rao, Xia Li . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 830 A New Method for Gene Functional Prediction Based on Homologous Expression Profile Sali Lv, Qianghu Wang, Guangmei Zhang, Fengxia Wen, Zhenzhen Wang, Xia Li . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 836
Analysis of Sib-Pair IBD Profiles and Genomic Context for Identification of the Relevant Molecular Signatures for Alcoholism Chuanxing Li, Lei Du, Xia Li, Binsheng Gong, Jie Zhang, Shaoqi Rao . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 845 A Novel Ensemble Decision Tree Approach for Mining Genes Coding Ion Channels for Cardiopathy Subtype Jie Zhang, Xia Li, Wei Jiang, Yanqiu Wang, Chuanxing Li, Qiuju Wang, Shaoqi Rao . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 852 A Permutation-Based Genetic Algorithm for Predicting RNA Secondary Structure — A Practicable Approach Yongqiang Zhan, Maozu Guo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 861 G Protein Binding Sites Analysis Fan Zhang, Zhicheng Liu, Xia Li, Shaoqi Rao . . . . . . . . . . . . . . . . . . . . . 865 A Novel Feature Ensemble Technology to Improve Prediction Performance of Multiple Heterogeneous Phenotypes Based on Microarray Data Haiyun Wang, Qingpu Zhang, Yadong Wang, Xia Li, Shaoqi Rao, Zuquan Ding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 869
Fuzzy Systems in Expert System and Informatics Fuzzy Routing in QoS Networks Runtong Zhang, Xiaomin Zhu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 880 Component Content Soft-Sensor Based on Adaptive Fuzzy System in Rare-Earth Countercurrent Extraction Process Hui Yang, Chonghui Song, Chunyan Yang, Tianyou Chai . . . . . . . . . . . 891 The Fuzzy-Logic-Based Reasoning Mechanism for Product Development Process Ying-Kui Gu, Hong-Zhong Huang, Wei-Dong Wu, Chun-Sheng Liu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 897 Single Machine Scheduling Problem with Fuzzy Precedence Delays and Fuzzy Processing Times Yuan Xie, Jianying Xie, Jun Liu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 907 Fuzzy-Based Dynamic Bandwidth Allocation System Fang-Yie Leu, Shi-Jie Yan, Wen-Kui Chang . . . . . . . . . . . . . . . . . . . . . . 911
Self-localization of a Mobile Robot by Local Map Matching Using Fuzzy Logic Jinxia Yu, Zixing Cai, Xiaobing Zou, Zhuohua Duan . . . . . . . . . . . . . . . 921 Navigation of Mobile Robots in Unstructured Environment Using Grid Based Fuzzy Maps Özhan Karaman, Hakan Temeltaş . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 925 A Fuzzy Mixed Projects and Securities Portfolio Selection Model Yong Fang, K.K. Lai, Shou-Yang Wang . . . . . . . . . . . . . . . . . . . . . . . . . . 931 Contract Net Protocol Using Fuzzy Case Based Reasoning Wunan Wan, Xiaojing Wang, Yang Liu . . . . . . . . . . . . . . . . . . . . . . . . . . 941 A Fuzzy Approach for Equilibrium Programming with Simulated Annealing Algorithm Jie Su, Junpeng Yuan, Qiang Han, Jin Huang . . . . . . . . . . . . . . . . . . . . . 945 Image Processing Application with a TSK Fuzzy Model Perfecto Mariño, Vicente Pastoriza, Miguel Santamaría, Emilio Martínez . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 950 A Fuzzy Dead Reckoning Algorithm for Distributed Interactive Applications Ling Chen, Gencai Chen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 961 Intelligent Automated Negotiation Mechanism Based on Fuzzy Method Hong Zhang, Yuhui Qiu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 972 Congestion Control in Differentiated Services Networks by Means of Fuzzy Logic Morteza Mosavi, Mehdi Galily . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 976
Fuzzy Systems in Pattern Recognition and Diagnostics Fault Diagnosis System Based on Rough Set Theory and Support Vector Machine Yitian Xu, Laisheng Wang . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 980 A Fuzzy Framework for Flashover Monitoring Chang-Gun Um, Chang-Gi Jung, Byung-Gil Han, Young-Chul Song, Doo-Hyun Choi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 989
Feature Recognition Technique from 2D Ship Drawings Using Fuzzy Inference System Deok-Eun Kim, Sung-Chul Shin, Soo-Young Kim . . . . . . . . . . . . . . . . . . 994 Transmission Relay Method for Balanced Energy Depletion in Wireless Sensor Networks Using Fuzzy Logic Seung-Beom Baeg, Tae-Ho Cho . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 998 Validation and Comparison of Microscopic Car-Following Models Using Beijing Traffic Flow Data Dewang Chen, Yueming Yuan, Baiheng Li, Jianping Wu . . . . . . . . . . . . 1008 Apply Fuzzy-Logic-Based Functional-Center Hierarchies as Inference Engines for Self-learning Manufacture Process Diagnoses Yu-Shu Hu, Mohammad Modarres . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1012 Fuzzy Spatial Location Model and Its Application in Spatial Query Yongjian Yang, Chunling Cao . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1022 Segmentation of Multimodality Osteosarcoma MRI with Vectorial Fuzzy-Connectedness Theory Jing Ma, Minglu Li, Yongqiang Zhao . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1027
Knowledge Discovery in Bioinformatics and Bio-medical Engineering A Global Optimization Algorithm for Protein Folds Prediction in 3D Space Xiaoguang Liu, Gang Wang, Jing Liu . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1031 Classification Analysis of SAGE Data Using Maximum Entropy Model Jin Xin, Rongfang Bie . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1037 DNA Sequence Identification by Statistics-Based Models Jitimon Keinduangjun, Punpiti Piamsa-nga, Yong Poovorawan . . . . . . 1041 A New Method to Mine Gene Regulation Relationship Information De Pan, Fei Wang, Jiankui Guo, Jianhua Ding . . . . . . . . . . . . . . . . . . . . 1051
Knowledge Discovery in Expert System and Informatics Shot Transition Detection by Compensating for Global and Local Motions Seok-Woo Jang, Gye-Young Kim, Hyung-Il Choi . . . . . . . . . . . . . . . . . . . 1061
Hybrid Methods for Stock Index Modeling Yuehui Chen, Ajith Abraham, Ju Yang, Bo Yang . . . . . . . . . . . . . . . . . . 1067 Designing an Intelligent Web Information System of Government Based on Web Mining Gye Hang Hong, Jang Hee Lee . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1071 Automatic Segmentation and Diagnosis of Breast Lesions Using Morphology Method Based on Ultrasound In-Sung Jung, Devinder Thapa, Gi-Nam Wang . . . . . . . . . . . . . . . . . . . . 1079 Composition of Web Services Using Ontology with Monotonic Inheritance Changyun Li, Beishui Liao, Aimin Yang, Lijun Liao . . . . . . . . . . . . . . . 1089 Ontology-DTD Matching Algorithm for Efficient XML Query Myung Sook Kim, Yong Hae Kong . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1093 An Approach to Web Service Discovery Based on the Semantics Jing Fan, Bo Ren, Li-Rong Xiong . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1103 Non-deterministic Event Correlation Based on C-F Model Qiuhua Zheng, Yuntao Qian, Min Yao . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1107 Flexible Goal Recognition via Graph Construction and Analysis Minghao Yin, Wenxiang Gu, Yinghua Lu . . . . . . . . . . . . . . . . . . . . . . . . . 1118 An Implementation for Mapping SBML to BioSPI Zhupeng Dong, Xiaoju Dong, Xian Xu, Yuxi Fu, Zhizhou Zhang, Lin He . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1128 Knowledge-Based Faults Diagnosis System for Wastewater Treatment Jang-Hwan Park, Byong-Hee Jun, Myung-Geun Chun . . . . . . . . . . . . . . 1132 Study on Intelligent Information Integration of Knowledge Portals Yongjin Zhang, Hongqi Chen, Jiancang Xie . . . . . . . . . . . . . . . . . . . . . . . 1136 The Risk Identification and Assessment in E-Business Development Lin Wang, Yurong Zeng . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1142 A Novel Wavelet Transform Based on Polar Coordinates for Datamining Applications Seonggoo Kang, Sangjun Lee, Sukho Lee . . . . . . . . . . . . . . . . . . . . . . . . . . 1150
Impact on the Writing Granularity for Incremental Checkpointing Junyoung Heo, Xuefeng Piao, Sangho Yi, Geunyoung Park, Minkyu Park, Jiman Hong, Yookun Cho . . . . . . . . . . . . . . . . . . . . . . . . . . 1154 Using Feedback Cycle for Developing an Adjustable Security Design Metric Charlie Y. Shim, Jung Y. Kim, Sung Y. Shin, Jiman Hong . . . . . . . . . 1158 w -LLC: Weighted Low-Energy Localized Clustering for Embedded Networked Sensors Joongheon Kim, Wonjun Lee, Eunkyo Kim, Choonhwa Lee . . . . . . . . . . 1162 Energy Efficient Dynamic Cluster Based Clock Synchronization for Wireless Sensor Network Md. Mamun-Or-Rashid, Choong Seon Hong, Jinsung Cho . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1166 An Intelligent Power Management Scheme for Wireless Embedded Systems Using Channel State Feedbacks Hyukjun Oh, Jiman Hong, Heejune Ahn . . . . . . . . . . . . . . . . . . . . . . . . . . 1170 Analyze and Guess Type of Piece in the Computer Game Intelligent System Z.Y. Xia, Y.A. Hu, J. Wang, Y.C. Jiang, X.L. Qin . . . . . . . . . . . . . . . . 1174 Large-Scale Ensemble Decision Analysis of Sib-Pair IBD Profiles for Identification of the Relevant Molecular Signatures for Alcoholism Xia Li, Shaoqi Rao, Wei Zhang, Guo Zheng, Wei Jiang, Lei Du . . . . . 1184 A Novel Visualization Classifier and Its Applications Jie Li, Xiang Long Tang, Xia Li . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1190
Active Information Gathering on the Web Automatic Creation of Links: An Approach Based on Decision Tree Peng Li, Seiji Yamada . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1200 Extraction of Structural Information from the Web Tsuyoshi Murata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1204 Blog Search with Keyword Map-Based Relevance Feedback Yasufumi Takama, Tomoki Kajinami, Akio Matsumura . . . . . . . . . . . . . 1208
An One Class Classification Approach to Non-relevance Feedback Document Retrieval Takashi Onoda, Hiroshi Murata, Seiji Yamada . . . . . . . . . . . . . . . . . . . . . 1216 Automated Knowledge Extraction from Internet for a Crisis Communication Portal Ong Sing Goh, Chun Che Fung . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1226
Neural and Fuzzy Computation in Cognitive Computer Vision Probabilistic Principal Surface Classifier Kuiyu Chang, Joydeep Ghosh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1236 Probabilistic Based Recursive Model for Face Recognition Siu-Yeung Cho, Jia-Jun Wong . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1245 Performance Characterization in Computer Vision: The Role of Visual Cognition Theory Aimin Wu, De Xu, Xu Yang, Jianhui Zheng . . . . . . . . . . . . . . . . . . . . . . 1255 Generic Solution for Image Object Recognition Based on Vision Cognition Theory Aimin Wu, De Xu, Xu Yang, Jianhui Zheng . . . . . . . . . . . . . . . . . . . . . . 1265 Cognition Theory Motivated Image Semantics and Image Language Aimin Wu, De Xu, Xu Yang, Jianhui Zheng . . . . . . . . . . . . . . . . . . . . . . 1276 Neuro-Fuzzy Inference System to Learn Expert Decision: Between Performance and Intelligibility Laurence Cornez, Manuel Samuelides, Jean-Denis Muller . . . . . . . . . . . 1281 Fuzzy Patterns in Multi-level of Satisfaction for MCDM Model Using Modified Smooth S-Curve MF Pandian Vasant, A. Bhattacharya, N.N. Barsoum . . . . . . . . . . . . . . . . . . 1294 Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1305
On Fuzzy Inclusion in the Interval-Valued Sense Jin Han Park1, Jong Seo Park2 , and Young Chel Kwun3 1
Division of Math. Sci., Pukyong National University, Pusan 608-737, South Korea
[email protected] 2 Department of Math. Education, Chinju National University of Education, Chinju 660-756, South Korea
[email protected] 3 Department of Mathematics, Dong-A University, Pusan 604-714, South Korea
[email protected] Abstract. As a generalization of fuzzy sets, the concept of interval-valued fuzzy sets was introduced by Gorzalczany [Fuzzy Sets and Systems 21 (1987) 1]. In this paper, we extend the concept of “fuzzy inclusion”, introduced by Šostak [Supp. Rend. Circ. Mat. Palermo (Ser. II) 11 (1985) 89], to the interval-valued fuzzy setting and study its fundamental properties to some extent.
1
Introduction
After the introduction of the concept of fuzzy sets by Zadeh [10], several researchers studied generalizations of the notion of fuzzy set, e.g., fuzzy sets of type n [11], intuitionistic fuzzy sets [1,2] and interval-valued fuzzy sets [4]. The concept of interval-valued fuzzy sets was introduced by Gorzalczany [4], and recently there has been progress in the study of such sets by Mondal and Samanta [6] and Ramakrishnan and Nayagam [7]. On the other hand, Šostak [8] defined fuzzy inclusion between two fuzzy sets A and B in order to give a measure of inclusion of one in the other, and applied this notion in the fuzzy topological spaces defined by himself. Fuzzy inclusion in intuitionistic fuzzy sets was defined and studied by Çoker and Demirci [3]. In this paper, we define and study fuzzy inclusion in interval-valued fuzzy sets and then apply this inclusion to interval-valued fuzzy topological spaces in Šostak's sense.
2
Preliminaries
First we shall present the fundamental definitions given by Gorzalczany [4]:

Definition 1. [4] Let X be a nonempty fixed set. An interval-valued fuzzy set (IVF set, for short) A on X is an object having the form
\[ A = \{(x, [\mu_A^L(x), \mu_A^U(x)]) : x \in X\} \]
where $\mu_A^L : X \to [0,1]$ and $\mu_A^U : X \to [0,1]$ are functions satisfying $\mu_A^L(x) \le \mu_A^U(x)$ for each $x \in X$.
Let $D$ be the set of all closed subintervals of the unit interval $[0,1]$, and consider singletons $\{a\}$ in $[0,1]$ as closed subintervals of the form $[a,a]$. An IVF set $A = \{(x, [\mu_A^L(x), \mu_A^U(x)]) : x \in X\}$ in X can be identified with an element of $D^X$. Thus for each $x \in X$, $A(x)$ is a closed interval whose lower and upper end points are $\mu_A^L(x)$ and $\mu_A^U(x)$, respectively. Obviously, every fuzzy set $A = \{\mu_A(x) : x \in X\}$ on X is an IVF set of the form $A = \{(x, [\mu_A(x), \mu_A(x)]) : x \in X\}$. Let $X_0$ be a subset of X. For any interval $[a,b] \in D$, the IVF set whose value is the interval $[a,b]$ for $x \in X_0$ and $[0,0]$ for $x \in X \setminus X_0$ is denoted by $\widetilde{[a,b]}_{X_0}$. In particular, if $a = b$ the IVF set $\widetilde{[a,b]}_{X_0}$ is denoted simply by $\tilde{a}_{X_0}$. The IVF set $\widetilde{[a,b]}_X$ (resp. $\tilde{a}_X$) is denoted simply by $\widetilde{[a,b]}$ (resp. $\tilde{a}$). For the sake of simplicity, we shall often use the symbol $A = [\mu_A^L, \mu_A^U]$ for the IVF set $A = \{(x, [\mu_A^L(x), \mu_A^U(x)]) : x \in X\}$.

Definition 2. [4] Let A and B be IVF sets on X. Then
(a) $A \subseteq B$ iff $\mu_A^L(x) \le \mu_B^L(x)$ and $\mu_A^U(x) \le \mu_B^U(x)$ for all $x \in X$;
(b) $A = B$ iff $A \subseteq B$ and $B \subseteq A$;
(c) the complement $A^c$ of A is defined by $A^c(x) = [1-\mu_A^U(x),\ 1-\mu_A^L(x)]$ for all $x \in X$;
(d) if $\{A_i : i \in J\}$ is an arbitrary family of IVF sets on X, then
\[ \Big(\bigcup_{i\in J} A_i\Big)(x) = \Big[\sup_{i\in J}\mu_{A_i}^L(x),\ \sup_{i\in J}\mu_{A_i}^U(x)\Big], \qquad \Big(\bigcap_{i\in J} A_i\Big)(x) = \Big[\inf_{i\in J}\mu_{A_i}^L(x),\ \inf_{i\in J}\mu_{A_i}^U(x)\Big]. \]

Definition 3. Let X and Y be two nonempty sets and $f : X \to Y$ be a function. Let A and B be IVF sets on X and Y respectively.
(a) The inverse image $f^{-1}(B)$ of B under f is the IVF set on X defined by $f^{-1}(B)(x) = [\mu_B^L(f(x)), \mu_B^U(f(x))]$ for all $x \in X$.
(b) The image $f(A)$ of A under f is the IVF set on Y defined by $f(A) = [\mu_{f(A)}^L, \mu_{f(A)}^U]$, where
\[ \mu_{f(A)}^L(y) = \begin{cases} \sup_{x \in f^{-1}(y)} \mu_A^L(x), & f^{-1}(y) \ne \emptyset \\ 0, & \text{otherwise}, \end{cases} \qquad \mu_{f(A)}^U(y) = \begin{cases} \sup_{x \in f^{-1}(y)} \mu_A^U(x), & f^{-1}(y) \ne \emptyset \\ 0, & \text{otherwise} \end{cases} \]
for each $y \in Y$.

Now we list the properties of images and preimages, some of which we shall frequently use in Sections 3 and 4.

Theorem 1. Let A and $A_i$ ($i \in J$) be IVF sets on X, let B and $B_i$ ($i \in J$) be IVF sets on Y, and let $f : X \to Y$ be a function. Then:
(a) If $A_1 \subseteq A_2$, then $f(A_1) \subseteq f(A_2)$.
(b) If $B_1 \subseteq B_2$, then $f^{-1}(B_1) \subseteq f^{-1}(B_2)$.
(c) $A \subseteq f^{-1}(f(A))$; if, furthermore, f is injective, then $A = f^{-1}(f(A))$.
(d) $f(f^{-1}(B)) \subseteq B$; if, furthermore, f is surjective, then $f(f^{-1}(B)) = B$.
(e) $f^{-1}(\bigcup_i B_i) = \bigcup_i f^{-1}(B_i)$ and $f^{-1}(\bigcap_i B_i) = \bigcap_i f^{-1}(B_i)$.
(f) $f(\bigcup_i A_i) = \bigcup_i f(A_i)$ and $f(\bigcap_i A_i) \subseteq \bigcap_i f(A_i)$; if, furthermore, f is injective, then $f(\bigcap_i A_i) = \bigcap_i f(A_i)$.
(g) $f^{-1}(\tilde{1}) = \tilde{1}$ and $f^{-1}(\tilde{0}) = \tilde{0}$.
(h) $f(\tilde{0}) = \tilde{0}$, and $f(\tilde{1}) = \tilde{1}$ if f is surjective.
(i) $f^{-1}(B)^c = f^{-1}(B^c)$, and $f(A)^c \subseteq f(A^c)$ if f is surjective.
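For readers who wish to experiment with these operations, the following is a minimal sketch (our own code and naming, not part of the paper) that represents a discrete IVF set as a Python dict from points to (lower, upper) pairs and spot-checks parts (c) and (d) of Theorem 1.

```python
# Sketch of IVF-set operations on a finite universe (our own naming).
# An IVF set is a dict: point -> (muL, muU) with muL <= muU.

def complement(A):
    return {x: (1 - hi, 1 - lo) for x, (lo, hi) in A.items()}

def image(A, f, Y):
    """f(A)(y) = componentwise sup over the preimage of y; [0,0] if empty."""
    out = {}
    for y in Y:
        pre = [A[x] for x in A if f(x) == y]
        out[y] = (max(p[0] for p in pre), max(p[1] for p in pre)) if pre else (0.0, 0.0)
    return out

def preimage(B, f, X):
    """f^{-1}(B)(x) = B(f(x))."""
    return {x: B[f(x)] for x in X}

def subset(A, B):
    return all(A[x][0] <= B[x][0] and A[x][1] <= B[x][1] for x in A)

X = ['x1', 'x2', 'x3']
Y = ['y1', 'y2']
f = lambda x: 'y1' if x in ('x1', 'x2') else 'y2'   # not injective
A = {'x1': (0.2, 0.5), 'x2': (0.4, 0.7), 'x3': (0.1, 0.9)}
B = {'y1': (0.3, 0.6), 'y2': (0.2, 0.8)}

assert subset(A, preimage(image(A, f, Y), f, X))    # Theorem 1(c)
assert subset(image(preimage(B, f, X), f, Y), B)    # Theorem 1(d)
```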
3
Fuzzy Inclusion in the Interval-Valued Sense
In this section, we shall extend the concept “fuzzy inclusion” [8,9] to the interval-valued fuzzy setting:

Definition 4. Let X be a nonempty set. Then the right fuzzy inclusion, denoted by $\Subset$, is the IVF set on $D^X \times D^X$ defined by
\[ \mu_\Subset^L(A,B) = \inf\{((1-\mu_A^U) \vee \mu_B^L)(x) : x \in X\}, \qquad \mu_\Subset^U(A,B) = \inf\{((1-\mu_A^L) \vee \mu_B^U)(x) : x \in X\} \]
for each $A, B \in D^X$. Here $\mu_\Subset^L(A,B)$ denotes the lower limit of inclusion of A in B, while $\mu_\Subset^U(A,B)$ denotes the upper limit of inclusion of A in B.

Remark 1. Since $1-\mu_A^U \le 1-\mu_A^L$, we may deduce the following:
\[ ((1-\mu_A^U) \vee \mu_B^L)(x) \le ((1-\mu_A^L) \vee \mu_B^U)(x) \ \text{for each } x \in X \ \Rightarrow\ \inf_{x\in X}((1-\mu_A^U) \vee \mu_B^L)(x) \le \inf_{x\in X}((1-\mu_A^L) \vee \mu_B^U)(x) \ \Rightarrow\ \mu_\Subset^L(A,B) \le \mu_\Subset^U(A,B). \]
Therefore, for each $A, B \in D^X$, $[\mu_\Subset^L(A,B), \mu_\Subset^U(A,B)]$ is a closed interval.

Definition 5. For any two IVF sets $A, B \in D^X$, the closed interval $[\mu_\Subset^L(A,B), \mu_\Subset^U(A,B)]$ will be denoted by $[A \Subset B]$, i.e.,
\[ [A \Subset B] = [\mu_\Subset^L(A,B),\ \mu_\Subset^U(A,B)]. \]

Remark 2. The closed interval $[A \Subset B]$ shows “to what extent the IVF set A is contained in the IVF set B” (cf. [3,9]). If A and B are crisp IVF sets on X given by $A = \tilde{1}_C$ and $B = \tilde{1}_D$, where C and D are nonempty subsets of X, then $[A \Subset B] = \tilde{1}$ iff $C \subseteq D$, and $[A \Subset B] = \tilde{0}$ otherwise. Similar to the concept of right fuzzy inclusion, we can easily define the left fuzzy inclusion $\Supset$ as follows:
\[ \mu_\Supset^L(A,B) = \mu_\Subset^L(B,A), \quad \mu_\Supset^U(A,B) = \mu_\Subset^U(B,A), \quad [A \Supset B] = [\mu_\Supset^L(A,B),\ \mu_\Supset^U(A,B)]. \]
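As a concrete illustration, here is a small sketch (ours, not the authors' code) of the inclusion degree $[A \Subset B]$ of Definition 4 for discrete IVF sets:

```python
# Sketch (our own naming): degree of inclusion [A << B] for discrete IVF sets,
# following Definition 4. Returns the closed interval (muL, muU).

def inclusion_degree(A, B):
    lo = min(max(1 - hiA, loB) for (loA, hiA), (loB, hiB)
             in ((A[x], B[x]) for x in A))
    hi = min(max(1 - loA, hiB) for (loA, hiA), (loB, hiB)
             in ((A[x], B[x]) for x in A))
    return (lo, hi)

A = {'x1': (0.2, 0.5), 'x2': (0.4, 0.7)}
B = {'x1': (0.3, 0.6), 'x2': (0.1, 0.8)}
print(inclusion_degree(A, B))   # (0.3, 0.8); lower limit <= upper limit (Remark 1)
```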
Definition 6. For any two IVF sets $A, B \in D^X$, the interval-valued fuzzy equality of A to B is defined as follows:
\[ [A \approx B] = [A \Subset B] \wedge [A \Supset B]. \]
The interval values $\mu_\approx^L(A,B) = \mu_\Subset^L(A,B) \wedge \mu_\Supset^L(A,B)$ and $\mu_\approx^U(A,B) = \mu_\Subset^U(A,B) \wedge \mu_\Supset^U(A,B)$, respectively, denote the lower limit of equality and the upper limit of equality of the IVF set A to the IVF set B.

Now we present some of the basic properties of interval-valued fuzzy inclusion:

Theorem 2. For IVF sets A, B, C and D on X, the following properties hold:
(a) If $A \subseteq B$ and $D \subseteq C$, then $[A \Subset C] \ge [B \Subset D]$.
(b) $[A^c \Subset B^c] = [B \Subset A]$.
(c) $[A \cup B \Subset C \cap D] \le [A \Subset C] \wedge [B \Subset D]$.
(d) $[A \Subset C] \vee [B \Subset D] \le [A \cap B \Subset C \cup D]$.
(e) $[A \cap B^c \Subset \tilde{0}] = [A \Subset B] = [\tilde{1} \Subset A^c \cup B]$.
(f) If $\{B_i \in D^X : i \in J\}$, then $\bigwedge_{i\in J} [A \Subset B_i] = [A \Subset \bigcap_{i\in J} B_i]$ and $\bigwedge_{i\in J} [B_i \Subset A] = [\bigcup_{i\in J} B_i \Subset A]$.
Proof. (a) Let $A \subseteq B$ and $D \subseteq C$. Since $\mu_A^L \le \mu_B^L$, $\mu_A^U \le \mu_B^U$, $\mu_D^L \le \mu_C^L$ and $\mu_D^U \le \mu_C^U$, we obtain
\[ [A \Subset C] = \Big[\inf_{x\in X}\big((1-\mu_A^U) \vee \mu_C^L\big)(x),\ \inf_{x\in X}\big((1-\mu_A^L) \vee \mu_C^U\big)(x)\Big] \ge \Big[\inf_{x\in X}\big((1-\mu_B^U) \vee \mu_D^L\big)(x),\ \inf_{x\in X}\big((1-\mu_B^L) \vee \mu_D^U\big)(x)\Big] = [B \Subset D]. \]
(b) Since $\mu_{A^c}^L = 1-\mu_A^U$ and $\mu_{A^c}^U = 1-\mu_A^L$,
\[ [B \Subset A] = \Big[\inf_{x\in X}\big((1-\mu_B^U) \vee \mu_A^L\big)(x),\ \inf_{x\in X}\big((1-\mu_B^L) \vee \mu_A^U\big)(x)\Big] = \Big[\inf_{x\in X}\big((1-\mu_{A^c}^U) \vee \mu_{B^c}^L\big)(x),\ \inf_{x\in X}\big((1-\mu_{A^c}^L) \vee \mu_{B^c}^U\big)(x)\Big] = [A^c \Subset B^c]. \]
(c) By (a), $[A \cup B \Subset C \cap D] \le [A \Subset C]$ and $[A \cup B \Subset C \cap D] \le [B \Subset D]$, and hence $[A \cup B \Subset C \cap D] \le [A \Subset C] \wedge [B \Subset D]$.
(d) Similar to (c).
(e)
\[ [A \Subset B] = \Big[\inf_{x\in X}\big((1-\mu_A^U) \vee \mu_B^L\big)(x),\ \inf_{x\in X}\big((1-\mu_A^L) \vee \mu_B^U\big)(x)\Big] = \Big[\inf_{x\in X}\big(1-(\mu_A^U \wedge (1-\mu_B^L))\big)(x),\ \inf_{x\in X}\big(1-(\mu_A^L \wedge (1-\mu_B^U))\big)(x)\Big] = \Big[\inf_{x\in X}\big((1-\mu_{A\cap B^c}^U) \vee 0\big)(x),\ \inf_{x\in X}\big((1-\mu_{A\cap B^c}^L) \vee 0\big)(x)\Big] = [A \cap B^c \Subset \tilde{0}] \]
and
\[ [A \Subset B] = \Big[\inf_{x\in X}\big((1-\mu_A^U) \vee \mu_B^L\big)(x),\ \inf_{x\in X}\big((1-\mu_A^L) \vee \mu_B^U\big)(x)\Big] = \Big[\inf_{x\in X}\big(0 \vee \mu_{A^c\cup B}^L\big)(x),\ \inf_{x\in X}\big(0 \vee \mu_{A^c\cup B}^U\big)(x)\Big] = [\tilde{1} \Subset A^c \cup B]. \]
(f)
\[ [A \Subset \bigcap_i B_i] = \Big[\inf_{x\in X}\big((1-\mu_A^U) \vee \mu_{\cap B_i}^L\big)(x),\ \inf_{x\in X}\big((1-\mu_A^L) \vee \mu_{\cap B_i}^U\big)(x)\Big] = \Big[\inf_{x\in X}\big((1-\mu_A^U) \vee \textstyle\bigwedge_i \mu_{B_i}^L\big)(x),\ \inf_{x\in X}\big((1-\mu_A^L) \vee \textstyle\bigwedge_i \mu_{B_i}^U\big)(x)\Big] = \Big[\inf_{x\in X}\inf_i\big((1-\mu_A^U) \vee \mu_{B_i}^L\big)(x),\ \inf_{x\in X}\inf_i\big((1-\mu_A^L) \vee \mu_{B_i}^U\big)(x)\Big] = \bigwedge_i [A \Subset B_i], \]
and by (b) we obtain
\[ [\textstyle\bigcup_i B_i \Subset A] = [A^c \Subset (\textstyle\bigcup_i B_i)^c] = [A^c \Subset \textstyle\bigcap_i B_i^c] = \bigwedge_i [A^c \Subset B_i^c] = \bigwedge_i [B_i \Subset A]. \]

Now we list the properties of interval-valued fuzzy inclusion related to images and preimages:

Theorem 3. Let A, B be IVF sets on X and C, D be IVF sets on Y. For a function $f : X \to Y$, the following properties hold:
(a) $[A \Subset B] \le [f(A) \Subset f(B)]$; furthermore, $[A \Subset B] = [f(A) \Subset f(B)]$ if f is injective.
(b) $[C \Subset D] \le [f^{-1}(C) \Subset f^{-1}(D)]$; furthermore, $[C \Subset D] = [f^{-1}(C) \Subset f^{-1}(D)]$ if f is surjective.
(c) $[A \Subset f^{-1}(f(A))] \le [f(A) \Subset f(A)]$, $[f^{-1}(f(A)) \Subset A] \le [A \Subset A]$, $[f(f^{-1}(C)) \Subset C] \le [f^{-1}(C) \Subset f^{-1}(C)]$ and $[C \Subset f(f^{-1}(C))] \le [C \Subset C]$.
(d) $[f(A) \Subset C] = [A \Subset f^{-1}(C)]$.
(e) If $\{C_i : i \in J\} \subseteq D^Y$, then $[f(A) \Subset \bigcup_i C_i] = [A \Subset \bigcup_i f^{-1}(C_i)]$.
(f) If $\{A_i : i \in J\} \subseteq D^X$, then
\[ [f(\textstyle\bigcap_i A_i) \Subset C] = [\textstyle\bigcap_i A_i \Subset f^{-1}(C)]. \]

Proof. (a) By Definition 3 (b), we obtain $\inf_{y\in Y}((1-\mu_{f(A)}^U) \vee \mu_{f(B)}^L)(y) \ge \inf_{x\in X}((1-\mu_A^U) \vee \mu_B^L)(x)$ and $\inf_{y\in Y}((1-\mu_{f(A)}^L) \vee \mu_{f(B)}^U)(y) \ge \inf_{x\in X}((1-\mu_A^L) \vee \mu_B^U)(x)$, and hence
\[ [f(A) \Subset f(B)] = \Big[\inf_{y\in Y}\big((1-\mu_{f(A)}^U) \vee \mu_{f(B)}^L\big)(y),\ \inf_{y\in Y}\big((1-\mu_{f(A)}^L) \vee \mu_{f(B)}^U\big)(y)\Big] \ge \Big[\inf_{x\in X}\big((1-\mu_A^U) \vee \mu_B^L\big)(x),\ \inf_{x\in X}\big((1-\mu_A^L) \vee \mu_B^U\big)(x)\Big] = [A \Subset B]. \]
Now we prove the equality in case f is injective:
\[ \inf_{y\in Y}\big((1-\mu_{f(A)}^U) \vee \mu_{f(B)}^L\big)(y) = \inf_{y\in f(X)}\big((1-\mu_{f(A)}^U) \vee \mu_{f(B)}^L\big)(y) \wedge \inf_{y\notin f(X)}\big((1-\mu_{f(A)}^U) \vee \mu_{f(B)}^L\big)(y) = \inf_{y\in f(X)}\big((1-\mu_{f(A)}^U) \vee \mu_{f(B)}^L\big)(y) \wedge 1 = \inf_{y\in f(X)}\Big(\big(1-\sup_{x\in f^{-1}(y)}\mu_A^U(x)\big) \vee \sup_{x\in f^{-1}(y)}\mu_B^L(x)\Big) = \inf_{x\in X}\big((1-\mu_A^U) \vee \mu_B^L\big)(x), \]
where the last step uses that each $f^{-1}(y)$ is a singleton when f is injective. Similarly, we have
\[ \inf_{y\in Y}\big((1-\mu_{f(A)}^L) \vee \mu_{f(B)}^U\big)(y) = \inf_{x\in X}\big((1-\mu_A^L) \vee \mu_B^U\big)(x). \]
Hence, from the two equalities above, we obtain $[A \Subset B] = [f(A) \Subset f(B)]$.
(b) Similar to (a).
(c) From (a) and (b), the required inequalities can be easily obtained.
(d) By (b) and Theorem 1 (c), we have $[f(A) \Subset C] \le [f^{-1}(f(A)) \Subset f^{-1}(C)] \le [A \Subset f^{-1}(C)]$. Similarly, by (a) and Theorem 1 (d), $[A \Subset f^{-1}(C)] \le [f(A) \Subset f(f^{-1}(C))] \le [f(A) \Subset C]$. Hence $[f(A) \Subset C] = [A \Subset f^{-1}(C)]$.
(e) By (d) and Theorem 1 (e), we have $[f(A) \Subset \bigcup_i C_i] = [A \Subset f^{-1}(\bigcup_i C_i)] = [A \Subset \bigcup_i f^{-1}(C_i)]$.
(f) Similar to (e).
4
Interval-Valued Fuzzy Families
In this section, we define the concept of interval-valued fuzzy family in order to obtain generalized De Morgan's laws and, later, interval-valued fuzzy topological spaces.

Definition 7. An IVF set $\mathcal{F}$ on the set $D^X$ is called an interval-valued fuzzy family (IVFF for short) on X and is denoted in the form $\mathcal{F} = [\mu_{\mathcal{F}}^L, \mu_{\mathcal{F}}^U]$.
Definition 8. Let $\mathcal{F}$ be an IVFF on X. Then the IVFF of complemented IVF sets on X is defined by $\mathcal{F}^* = [\mu_{\mathcal{F}^*}^L, \mu_{\mathcal{F}^*}^U]$, where $\mu_{\mathcal{F}^*}^L(A) = \mu_{\mathcal{F}}^L(A^c)$ and $\mu_{\mathcal{F}^*}^U(A) = \mu_{\mathcal{F}}^U(A^c)$ for each $A \in D^X$.

Example 1. We can easily extend the fuzzy topological space, first defined by Šostak [8,9], to the case of interval-valued fuzzy sets [4]. Let $\tau$ be an IVFF on X. For each $A \in D^X$, we can construct the closed interval $\tau(A)$ as follows: $\tau(A) = [\mu_\tau^L(A), \mu_\tau^U(A)]$. In this case, an interval-valued fuzzy topology in Šostak's sense (So-IVFT for short) on a nonempty set X is an IVFF $\tau$ on X satisfying the following axioms:
(O1) $\tau(\tilde{0}) = \tilde{1}$ and $\tau(\tilde{1}) = \tilde{1}$;
(O2) $\tau(A_1 \cap A_2) \ge \tau(A_1) \wedge \tau(A_2)$ for any $A_1, A_2 \in D^X$;
(O3) $\tau(\bigcup_i A_i) \ge \bigwedge_i \tau(A_i)$ for any $\{A_i : i \in J\} \subseteq D^X$.

In this case the pair $(X, \tau)$ is called an interval-valued fuzzy topological space in Šostak's sense (So-IVFTS for short). For any $A \in D^X$, the closed interval $\tau(A)$ is called the interval-valued degree of openness of A. Of course, a So-IVFTS $(X, \tau)$ is a fuzzy topological space in Chang's sense. So we also define a So-IVFTS in the sense of Lowen [5] as follows: $(X, \tau)$ is a So-IVFTS in the sense of Lowen if $(X, \tau)$ is an IVFTS satisfying the condition that for each IVF set of the form $\widetilde{[a,b]}$, where $[a,b] \subseteq [0,1]$, $\tau(\widetilde{[a,b]}) = \tilde{1}$ holds.

Let $(X, \tau)$ be a So-IVFTS on X. Then the IVFF $\tau^*$ of the complemented IVF sets on X is defined by $\tau^*(A) = \tau(A^c)$ for each $A \in D^X$. The closed interval $\tau^*(A) = [\mu_{\tau^*}^L(A), \mu_{\tau^*}^U(A)]$ is called the interval-valued degree of closedness of A. Thus the IVFF $\tau^*$ on X satisfies the following properties:

(C1) $\tau^*(\tilde{0}) = \tilde{1}$ and $\tau^*(\tilde{1}) = \tilde{1}$;
(C2) $\tau^*(A_1 \cup A_2) \ge \tau^*(A_1) \wedge \tau^*(A_2)$ for any $A_1, A_2 \in D^X$;
(C3) $\tau^*(\bigcap_i A_i) \ge \bigwedge_i \tau^*(A_i)$ for any $\{A_i : i \in J\} \subseteq D^X$.

Now we extend the intersection and union of a fuzzy family [8,9] to the interval-valued fuzzy setting:

Definition 9. Let $\mathcal{F}$ be an IVFF on X. Then
(a) the intersection $\bigcap \mathcal{F}$ of this IVFF is the IVF set defined by $(\bigcap \mathcal{F})(x) = [\mu_{\cap\mathcal{F}}^L(x), \mu_{\cap\mathcal{F}}^U(x)]$ for all $x \in X$, where
\[ \mu_{\cap\mathcal{F}}^L(x) = \inf\{(1-\mu_{\mathcal{F}}^U(A)) \vee \mu_A^L(x) : A \in D^X\}, \qquad \mu_{\cap\mathcal{F}}^U(x) = \inf\{(1-\mu_{\mathcal{F}}^L(A)) \vee \mu_A^U(x) : A \in D^X\}; \]
(b) the union $\bigcup \mathcal{F}$ of this IVFF is the IVF set defined by $(\bigcup \mathcal{F})(x) = [\mu_{\cup\mathcal{F}}^L(x), \mu_{\cup\mathcal{F}}^U(x)]$ for all $x \in X$, where
\[ \mu_{\cup\mathcal{F}}^L(x) = \sup\{\mu_{\mathcal{F}}^L(A) \wedge \mu_A^L(x) : A \in D^X\}, \qquad \mu_{\cup\mathcal{F}}^U(x) = \sup\{\mu_{\mathcal{F}}^U(A) \wedge \mu_A^U(x) : A \in D^X\}. \]
Theorem 4 (Generalized De Morgan's Laws). Let $\mathcal{F}$ be an IVFF on X. Then we have
(a) $(\bigcup \mathcal{F})^c = \bigcap \mathcal{F}^*$;
(b) $(\bigcap \mathcal{F})^c = \bigcup \mathcal{F}^*$.

Proof. (a) Let $x \in X$. Then we have
\[ \mu_{\cap\mathcal{F}^*}^L(x) = \inf\{(1-\mu_{\mathcal{F}^*}^U(A)) \vee \mu_A^L(x) : A \in D^X\} = \inf\{(1-\mu_{\mathcal{F}}^U(A^c)) \vee (1-(1-\mu_A^L))(x) : A \in D^X\} = \inf\{(1-\mu_{\mathcal{F}}^U(A^c)) \vee (1-\mu_{A^c}^U)(x) : A \in D^X\} = 1 - \sup\{\mu_{\mathcal{F}}^U(A^c) \wedge \mu_{A^c}^U(x) : A \in D^X\} = 1 - \sup\{\mu_{\mathcal{F}}^U(A) \wedge \mu_A^U(x) : A \in D^X\} = 1 - \mu_{\cup\mathcal{F}}^U(x) \]
and
\[ \mu_{\cap\mathcal{F}^*}^U(x) = \inf\{(1-\mu_{\mathcal{F}^*}^L(A)) \vee \mu_A^U(x) : A \in D^X\} = \inf\{(1-\mu_{\mathcal{F}}^L(A^c)) \vee (1-\mu_{A^c}^L)(x) : A \in D^X\} = 1 - \sup\{\mu_{\mathcal{F}}^L(A^c) \wedge \mu_{A^c}^L(x) : A \in D^X\} = 1 - \sup\{\mu_{\mathcal{F}}^L(A) \wedge \mu_A^L(x) : A \in D^X\} = 1 - \mu_{\cup\mathcal{F}}^L(x). \]
Hence $(\bigcup \mathcal{F})^c = \bigcap \mathcal{F}^*$.
(b) Similar to (a).
Finally, we shall define the image and preimage of IVFF's under a function $f : X \to Y$:

Definition 10. Let $\mathcal{F}$ be an IVFF on X and let $f : X \to Y$ be an injective function. Then the image $\mathcal{F}^f = [\mu_{\mathcal{F}^f}^L, \mu_{\mathcal{F}^f}^U]$ of $\mathcal{F} = [\mu_{\mathcal{F}}^L, \mu_{\mathcal{F}}^U]$ under f is the IVFF on Y defined as follows:
\[ \mu_{\mathcal{F}^f}^L(B) = \begin{cases} \mu_{\mathcal{F}}^L(f^{-1}(B)), & B \subseteq \tilde{1}_{f(X)} \\ 0, & \text{otherwise}, \end{cases} \qquad \mu_{\mathcal{F}^f}^U(B) = \begin{cases} \mu_{\mathcal{F}}^U(f^{-1}(B)), & B \subseteq \tilde{1}_{f(X)} \\ 0, & \text{otherwise}. \end{cases} \]

Definition 11. Let $\mathcal{F}$ be an IVFF on Y and let $f : X \to Y$ be a function. Then the preimage $\mathcal{F}^{f^{-1}} = [\mu_{\mathcal{F}^{f^{-1}}}^L, \mu_{\mathcal{F}^{f^{-1}}}^U]$ of $\mathcal{F} = [\mu_{\mathcal{F}}^L, \mu_{\mathcal{F}}^U]$ under f is the IVFF on X defined by
\[ \mu_{\mathcal{F}^{f^{-1}}}^L(A) = \sup\{\mu_{\mathcal{F}}^L(B) : A = f^{-1}(B),\ B \in D^Y\}, \qquad \mu_{\mathcal{F}^{f^{-1}}}^U(A) = \sup\{\mu_{\mathcal{F}}^U(B) : A = f^{-1}(B),\ B \in D^Y\}. \]
Theorem 5. Let $\mathcal{F}$ be an IVFF on X and $f : X \to Y$ be an injective function. Then
(a) $f(\bigcup \mathcal{F}) = \bigcup \mathcal{F}^f$;
(b) $f(\bigcap \mathcal{F}) = \bigcap \mathcal{F}^f$.

Proof. (a) We notice that, under $B \subseteq \tilde{1}_{f(X)}$, if $B \in D^Y$ then there exists an $A \in D^X$ such that $A = f^{-1}(B)$, and so $f(A) = B$. Similarly, if $A \in D^X$, then there exists a $B \in D^Y$ such that $B \subseteq \tilde{1}_{f(X)}$ and $A = f^{-1}(B)$, and so $B = f(A)$. Let $y \in Y$. If $f^{-1}(y) \ne \emptyset$, then we have
\[ \mu_{\cup\mathcal{F}^f}^L(y) = \sup\{\mu_{\mathcal{F}^f}^L(B) \wedge \mu_B^L(y) : B \in D^Y\} = \sup\{\mu_{\mathcal{F}}^L(f^{-1}(B)) \wedge \mu_B^L(y) : B \in D^Y, B \subseteq \tilde{1}_{f(X)}\} \vee 0 = \sup\{\mu_{\mathcal{F}}^L(A) \wedge \mu_{f(A)}^L(y) : A \in D^X\} = \sup\{\mu_{\mathcal{F}}^L(A) \wedge f(\mu_A^L)(y) : A \in D^X\}. \]
On the other hand, whenever $f^{-1}(y) \ne \emptyset$, there exists an $x \in X$ such that $f(x) = y$, since f is injective. Then we have
\[ \mu_{f(\cup\mathcal{F})}^L(y) = f(\mu_{\cup\mathcal{F}}^L)(y) = \mu_{\cup\mathcal{F}}^L(x) = \sup\{\mu_{\mathcal{F}}^L(A) \wedge \mu_A^L(x) : A \in D^X\} = \sup\{\mu_{\mathcal{F}}^L(A) \wedge f(\mu_A^L)(y) : A \in D^X\}. \]
If $f^{-1}(y) = \emptyset$, then $\mu_{\cup\mathcal{F}^f}^L(y) = 0$ and $\mu_{f(\cup\mathcal{F})}^L(y) = 0$. Therefore we obtain $\mu_{\cup\mathcal{F}^f}^L = \mu_{f(\cup\mathcal{F})}^L$, and similarly $\mu_{\cup\mathcal{F}^f}^U = \mu_{f(\cup\mathcal{F})}^U$. Hence $f(\bigcup \mathcal{F}) = \bigcup \mathcal{F}^f$.
(b) Similar to (a).
Theorem 6. Let $\mathcal{F}$ be an IVFF on Y and $f : X \to Y$ be a function. Then
(a) $f^{-1}(\bigcup \mathcal{F}) = \bigcup \mathcal{F}^{f^{-1}}$;
(b) $f^{-1}(\bigcap \mathcal{F}) = \bigcap \mathcal{F}^{f^{-1}}$.

Proof. (a) Let $N(A) = \{B \in D^Y : A = f^{-1}(B)\}$ for each $A \in D^X$. Then $D^Y = \bigcup\{N(A) : A \in D^X\}$. Let $x \in X$. Then we have
\[ \mu_{\cup\mathcal{F}^{f^{-1}}}^L(x) = \sup\{\mu_{\mathcal{F}^{f^{-1}}}^L(A) \wedge \mu_A^L(x) : A \in D^X\} = \sup\big\{\sup\{\mu_{\mathcal{F}}^L(B) : B \in N(A)\} \wedge \mu_A^L(x) : A \in D^X\big\} = \sup\big\{\sup\{\mu_{\mathcal{F}}^L(B) \wedge \mu_{f^{-1}(B)}^L(x) : B \in N(A)\} : A \in D^X\big\} = \sup\{\mu_{\mathcal{F}}^L(B) \wedge \mu_B^L(f(x)) : B \in D^Y\} = \mu_{f^{-1}(\cup\mathcal{F})}^L(x). \]
Similarly, we have $\mu_{\cup\mathcal{F}^{f^{-1}}}^U(x) = \mu_{f^{-1}(\cup\mathcal{F})}^U(x)$. Hence $f^{-1}(\bigcup \mathcal{F}) = \bigcup \mathcal{F}^{f^{-1}}$.
(b) Similar to (a).
References
1. K. Atanassov, "Intuitionistic fuzzy sets", in: V. Sgurev, Ed., VII ITKR's Session, Sofia (June 1983, Central Sci. and Techn. Library, Bulg. Academy of Sciences, 1984).
2. K. Atanassov, "Intuitionistic fuzzy sets", Fuzzy Sets and Systems, Vol. 20, pp. 87-96, 1986.
3. D. Çoker and M. Demirci, "On fuzzy inclusion in the intuitionistic sense", J. Fuzzy Math., Vol. 4, pp. 701-714, 1996.
4. M. B. Gorzalczany, "A method of inference in approximate reasoning based on interval-valued fuzzy sets", Fuzzy Sets and Systems, Vol. 21, pp. 1-17, 1987.
5. R. Lowen, "Fuzzy topological spaces and fuzzy compactness", J. Math. Anal. Appl., Vol. 56, pp. 621-633, 1976.
6. T. K. Mondal and S. K. Samanta, "Topology of interval-valued fuzzy sets", Indian J. Pure Appl. Math., Vol. 30, pp. 23-38, 1999.
7. P. V. Ramakrishnan and V. Lakshmana Gomathi Nayagam, "Hausdorff interval-valued fuzzy filters", J. Korean Math. Soc., Vol. 39, pp. 137-148, 2002.
8. A. Šostak, "On a fuzzy topological structure", Supp. Rend. Circ. Mat. Palermo (Ser. II), Vol. 11, pp. 89-103, 1985.
9. A. Šostak, "On compactness and connectedness degrees of fuzzy sets in fuzzy topological spaces", General Topology and its Relations to Modern Analysis and Algebra, Heldermann Verlag, Berlin, pp. 519-532, 1988.
10. L. A. Zadeh, "Fuzzy sets", Inform. and Control, Vol. 8, pp. 338-353, 1965.
11. L. A. Zadeh, "The concept of a linguistic variable and its application to approximate reasoning I", Inform. Sci., Vol. 8, pp. 199-249, 1975.
Fuzzy Evaluation Based Multi-objective Reactive Power Optimization in Distribution Networks Jiachuan Shi and Yutian Liu School of Electrical Engineering, Shandong University, Jinan 250061, China
[email protected] Abstract. A fuzzy evaluation based multi-objective optimization model for reactive power optimization in power distribution networks is presented in this paper. The two objectives, reducing active power losses and improving voltage profiles, are evaluated by membership functions respectively, so that the objectives can be compared on a single scale. To facilitate the solving process, a compromised objective is formed by the weighted-sum approach, with the weights decided according to the preferences and importance of the objectives. The reactive tabu search algorithm is employed to obtain globally optimized solutions. Simulation results for a practical power distribution network, with greatly improved voltage profiles and reduced power losses, demonstrate that the proposed method is effective.
1 Introduction
Reactive power optimization (RPO) aims to reduce active power losses by adjusting the reactive power distribution. Generally, the regulating means in distribution systems are mainly transformer tap-changers and shunt capacitors, both of which are discrete variables. Therefore, RPO is formulated as a constrained combinatorial optimization problem. There have been several approaches, such as numerical programming, heuristic methods and AI-based methods, devised to solve reactive power optimization problems. Numerical programming algorithms solve the optimization problems by iterative techniques. A quadratic integer programming algorithm based on active-power-loss reduction formulas has been implemented in distribution network capacitor allocation and real-time control problems [1]. Numerical programming algorithms are effective in small-scale cases, but the computational burden increases greatly when dealing with practical large-scale networks. Heuristic methods are based on rules developed through intuition, experience and judgment. They produce near-optimal solutions fast, but the results are not guaranteed to be optimal [2]. AI-based methods, including global optimization techniques [3,4] and fuzzy set theory [5-8], have been implemented in RPO. In [3], a GA-based two-stage algorithm, which combines a GA and a heuristic algorithm, is introduced to solve the constrained optimization problem. GA-SA-TS hybrid algorithms are introduced in [4]. Combining the advantages of the individual algorithms, the local and global search strategies are
joined to find better solutions within reasonable time. Fuzzy set theory is efficient in remedying the uncertainty of data and optimization models. Applications of fuzzy set theory to shunt capacitor placement in distribution networks are introduced in [5]. Fuzzy set theory is combined with dynamic programming to solve distribution network voltage/var regulation problems in [6,7,8]. It should be noted that most of these papers focus on reducing active power losses and/or energy losses, while the voltage profiles are considered only in the operating constraints. Such single-objective optimization models often push voltages to their upper limits to reduce active power losses, which may not be acceptable in practice. Therefore, reducing active power losses may conflict with the voltage profile constraints. A reactive power optimization method is presented in this paper. To resolve the conflict between active power loss reduction and the operating constraints, the voltage profile constraints are converted into objectives: the aim is to improve voltage profiles and reduce active power losses as much as possible. The RPO problem is thus formed as a constrained combinatorial multi-objective optimization. Constraints include the power flow, operating constraints, and adjusting frequencies. Membership functions are introduced to assess the objectives, so that the objectives can be compared without being influenced by their original values. The weighted-sum approach is utilized to combine the objectives into one, and the constrained multi-objective optimization problem is solved by the Reactive Tabu Search (RTS) technique.
2 Problem Formulation
A fuzzy evaluation based multi-objective optimization model for the RPO problem in distribution networks is presented in this section. The traditional RPO aims to minimize active power losses without voltage violations. The optimization process tends to minimize active power losses by pushing voltages to their upper limits, yet the results may not be acceptable in practice considering the variations of loads and of the source node voltage. Furthermore, when the voltage constraints are treated as "hard" constraints, that is, when no voltage violation is allowed, it may be hard to find a feasible solution at all, especially when two or more operating conditions are considered. In the new RPO model, the voltage profile constraints are treated as an objective: the objectives are to improve voltage profiles and to reduce active power losses as much as possible. The constraints include the power flow and operating constraints. The objectives are estimated by fuzzy membership functions. The membership functions scale the objectives to the unit interval [0,1], so that the satisfaction of the objectives can be compared without being influenced by their original values. The weighted-sum approach is introduced to combine the objectives. The weights imply the importance, that is, the preferences, of the objectives. After combination, the multi-objective optimization problem is transformed into a single-objective optimization problem. The optimal solution of the latter problem is a nondominated solution of the former one.
2.1 Voltage Profiles Assessment
To evaluate the voltage profiles, the trapezoid membership function shown in Fig. 1 is introduced.
\[ F_v(V_i) = \begin{cases} \dfrac{L_0^{upper} - V_i}{L_0^{upper} - L_1^{upper}}, & L_1^{upper} < V_i < L_0^{upper} \\[4pt] \dfrac{V_i - L_0^{lower}}{L_1^{lower} - L_0^{lower}}, & L_0^{lower} < V_i < L_1^{lower} \\[4pt] 1, & L_1^{lower} \le V_i \le L_1^{upper} \\ 0, & \text{others} \end{cases} \qquad (1) \]
where $V_i$ is the voltage of node i; $L_0^{upper}$ and $L_0^{lower}$ are the unacceptable voltage limits; $L_1^{upper}$ and $L_1^{lower}$ are the acceptable voltage margins.
Fig. 1. Membership function for voltage profiles assessment
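As a quick illustration, the following is a minimal sketch (ours, not from the paper) of the trapezoid membership function of Eq. (1); the limit values below are the per-unit settings quoted later in the case study (acceptable margins 1±2%, unacceptable limits 1±20%).

```python
# Sketch of the trapezoid membership function Fv of Eq. (1).
# Limits follow the paper's Section 4.2 settings (per unit).
L0_LOWER, L1_LOWER = 0.80, 0.98   # unacceptable / acceptable lower bounds
L1_UPPER, L0_UPPER = 1.02, 1.20   # acceptable / unacceptable upper bounds

def fv(v):
    if L1_LOWER <= v <= L1_UPPER:                 # fully acceptable band
        return 1.0
    if L1_UPPER < v < L0_UPPER:                   # upper transition slope
        return (L0_UPPER - v) / (L0_UPPER - L1_UPPER)
    if L0_LOWER < v < L1_LOWER:                   # lower transition slope
        return (v - L0_LOWER) / (L1_LOWER - L0_LOWER)
    return 0.0                                    # outside unacceptable limits

print(fv(1.00), fv(1.10), fv(0.85))  # 1.0, ~0.556, ~0.278
```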
The membership function of the network voltage profiles is
\[ F_{voltage} = \frac{\sum_{i=1}^{N}\sum_{p=1}^{3} F_v(V_{ip})}{3N} \qquad (2) \]
where p represents phases A, B and C; N is the number of secondary buses; $V_{ip}$ is the phase-p voltage of node i. To evaluate the voltage eligibility of the network, the three-phase voltage-eligibility ratio (TPVER) is defined as
\[ TPVER = \frac{\sum_{i=1}^{N}\sum_{p=1}^{3} f(V_{ip})}{3N}, \qquad f(V_{ip}) = \begin{cases} 1, & V_{lower} \le V_{ip} \le V_{upper} \\ 0, & (V_{ip} > V_{upper}) \text{ or } (V_{ip} < V_{lower}) \end{cases} \qquad (3) \]
where Vlower and Vupper are voltage limits, e.g. 0.9 and 1.07p.u., respectively.
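A small sketch (our code) of Eqs. (2) and (3) over a table of per-unit phase voltages, reusing the fv function from the previous example; the 0.9/1.07 p.u. limits are those quoted in the text.

```python
# Sketch of the network-level indices of Eqs. (2) and (3).
# volts[i][p] is the phase-p voltage (p.u.) of secondary bus i; fv is as above.
V_LOWER, V_UPPER = 0.90, 1.07

def f_voltage(volts):
    vals = [fv(v) for row in volts for v in row]
    return sum(vals) / len(vals)                       # Eq. (2)

def tpver(volts):
    ok = [1 if V_LOWER <= v <= V_UPPER else 0 for row in volts for v in row]
    return sum(ok) / len(ok)                           # Eq. (3)

volts = [[1.00, 0.99, 1.01], [0.95, 1.08, 0.97]]       # two buses, three phases
print(f_voltage(volts), tpver(volts))
```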
2.2 Active Power Losses Assessment
To compare with the other objectives, the active power loss is also valued by a membership function. Unlike the membership functions for voltage profiles, there is no standard or limit for active power loss reduction. In this paper, the membership function for active power losses is determined by two parameters, $P_{l\_ori}$ and $P_{l\_min}$. The former is the active power loss before optimization, and the membership value of $P_{l\_ori}$ is set to 0.5. The latter, $P_{l\_min}$, is the minimum active power loss, and its membership value is 1. The active power losses can be calculated by
\[ P_{loss} = \sum_{i=1}^{N_b} I_i^2 R_i \qquad (4) \]
where Ploss is the active power losses of the whole network; Ii is the current of branch i; Ri is the resistance of branch i; Nb is the number of branches. The current can be divided into active power current Ii_real, and reactive power current Ii_imag. The reactive currents account for a portion of these losses.
\[ I_i = I_{i\_real} + jI_{i\_imag}, \qquad P_{loss} = P_{loss\_real} + P_{loss\_imag} = \sum_{i=1}^{N_b} I_{i\_real}^2 R_i + \sum_{i=1}^{N_b} I_{i\_imag}^2 R_i \qquad (5) \]
After reactive power compensation, the branch current $I_i$ is decreased to $I_i'$, and the active power losses are reduced to $P'_{loss}$, while the losses caused by the active power currents, $P_{loss\_real}$, cannot be decreased by reactive power compensation:
\[ I_i' = I_{i\_real} + j(I_{i\_imag} - I_{i\_comp}), \qquad P'_{loss} = P_{loss\_real} + P'_{loss\_imag} = P_{loss\_real} + \sum_{i=1}^{N_b}(I_{i\_imag} - I_{i\_comp})^2 R_i \qquad (6) \]
In an ideal operating condition, enough shunt capacitors are installed and the reactive power current is reduced to 0. Then $P_{loss\_imag}$ is 0 and $P_{loss\_real}$ can be treated as the minimum active power loss $P_{l\_min}$. The membership function for the active power losses, shown in Fig. 2, is defined as
\[ F_{loss}(P_l) = \begin{cases} 0, & P_l > 2P_{l\_ori} - P_{l\_min} \\[4pt] 0.5 \times \dfrac{P_l - P_{l\_ori}}{P_{l\_min} - P_{l\_ori}} + 0.5, & P_{l\_min} \le P_l \le 2P_{l\_ori} - P_{l\_min} \\[4pt] 1, & P_l < P_{l\_min} \end{cases} \qquad (7) \]
where Pl is the active power losses. The two parameters, Pl_ori and Pl_min, are determined by the system structure and original operating conditions. The assessment results of trial solutions under different operating conditions are comparable.
Fig. 2. Membership function for active power losses assessment
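A minimal sketch (ours) of the loss-membership function of Eq. (7); the Pl_ori and Pl_min values in the example calls are hypothetical, since in practice they come from the pre-optimization power flow and Eq. (5).

```python
# Sketch of Eq. (7): membership for active power losses.
# pl_ori = losses before optimization (membership 0.5);
# pl_min = irreducible losses from the active currents (membership 1).
def f_loss(pl, pl_ori, pl_min):
    if pl < pl_min:
        return 1.0
    if pl > 2 * pl_ori - pl_min:
        return 0.0
    return 0.5 * (pl - pl_ori) / (pl_min - pl_ori) + 0.5

print(f_loss(2.38, 2.38, 1.5))   # 0.5 at the pre-optimization point
print(f_loss(2.13, 2.38, 1.5))   # increases toward 1 as losses drop
```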
2.3 Objective Function The multi-objective RPO can be formed as
\[ \begin{aligned} \text{Max } & F_{loss} \\ \text{Max } & F_{voltage} \\ \text{s.t. } & T_{k\min} \le T_k \le T_{k\max}, \quad 0 \le Q_{cj} \le Q_{cj\max} \end{aligned} \qquad (8) \]
where $F_{loss}$ is the membership function for the active power losses; $F_{voltage}$ is the membership function for the voltage profiles; $T_k$ is the ratio of transformer k; $Q_{cj}$ is the capacity of the capacitors at node j. The power flow restriction is not listed here. Normally, the solutions of a multi-objective optimization form a series of Pareto solutions. A compromised objective function for operating condition m is represented as
\[ F_{obj\_m} = (\alpha F_{loss} + \beta F_{voltage})_m \qquad (9) \]
where $F_{obj\_m}$ is the objective function for operating condition m; $\alpha$ and $\beta$ are the weight coefficients of the two objectives. Considering load variation, two or more different operating conditions are considered during the optimization process. The RPO problem is described as
\[ \begin{aligned} \text{Max } & F_{obj} = \sum_{m=1}^{M} F_{obj\_m} \\ \text{s.t. } & T_{k\min} \le T_k \le T_{k\max}, \quad 0 \le Q_{cj} \le Q_{cj\max} \end{aligned} \qquad (10) \]
where M is the number of operating conditions.
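To make Eqs. (8)-(10) concrete, here is a short sketch (ours) of the compromised objective; it reuses f_loss and f_voltage from the sketches above, and evaluate(cond, x) stands in for whatever power flow routine is used, so its name and interface are assumptions.

```python
# Sketch of the weighted-sum objective of Eqs. (9)-(10).
# evaluate(cond, x) is a hypothetical power-flow routine returning
# (losses, volts) for candidate solution x under operating condition cond.
ALPHA, BETA = 3.0, 1.0        # weights used in the case study (Section 4.2)

def f_obj(x, conditions, evaluate):
    total = 0.0
    for cond in conditions:
        losses, volts = evaluate(cond, x)
        total += ALPHA * f_loss(losses, cond.pl_ori, cond.pl_min) \
               + BETA * f_voltage(volts)
    return total              # maximize over feasible tap/capacitor settings
```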
3 Solution Method
The Reactive Tabu Search (RTS) algorithm [9] is utilized to solve the constrained combinatorial optimization problem. The two searching strategies, diversification and intensification, are present in many heuristic algorithms. In traditional tabu search, the length of the tabu list is determined by experience. Feedback mechanisms are
introduced in RTS to adjust the length of the tabu list, so that diversification and intensification can be balanced according to the progress of the optimization. Therefore, compared with traditional TS, RTS is more robust and effective. The reactive power optimization process by RTS is described as follows (a code sketch is given after the list):
i) Read in the initial data, including feeder impedances, loads, regulation variables and inequality constraints. Code the regulation variables.
ii) Generate an initial solution: set the regulation variables randomly without breaking the constraints, including the power flow restriction. Calculate the objective function f(X) and set the best solution vector Xopt to X.
iii) Generate a group of trial solutions X1, X2, ..., Xk by "moves" from X. Check their feasibility and discard the unfeasible ones.
iv) Get the corresponding values by searching the Hash Table, in which all visited configurations are stored. If a trial solution is not available from the Hash Table, calculate the objective function values f(X1), f(X2), ..., f(Xk) and keep them in the Hash Table.
v) Search the neighborhood: get the best trial solution X*. Update X with X* if X* is not in the tabu list or if X* fits the aspiration criteria; otherwise try the next best solution.
vi) Update the tabu list: push the record of the reversed move into a FIFO (First-In-First-Out) stack, the tabu list.
vii) Update Xopt with X* if f(X*) is better than f(Xopt).
viii) Update the length of the tabu list: if most trial solutions are found in the Hash Table, the length of the tabu list increases up to an upper limit, to escape from local optima; if few are found, it decreases down to a lower limit, to relax the tabu restrictions.
ix) Termination: stop the optimization and output the results if f(Xopt) has not improved for several iterations or the maximum number of iterations is reached; otherwise continue from step iii).
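The following is a compact, self-contained sketch (our own simplification, not the authors' code) of the RTS loop above, applied to a toy discrete objective; the tabu-list length adapts using the fraction of trial solutions already present in the hash table.

```python
import random

# Toy stand-in objective over discrete settings x (tuple of ints);
# in the paper this would be Eq. (10) evaluated through a power flow.
def objective(x):
    return -sum((v - 2) ** 2 for v in x)

def moves(x, domain=range(5)):
    """Trial solutions: change one variable by one step (feasible moves only)."""
    out = []
    for i in range(len(x)):
        for d in (-1, 1):
            v = x[i] + d
            if v in domain:
                out.append((i, d, x[:i] + (v,) + x[i + 1:]))
    return out

def reactive_tabu_search(x, iters=200, lo=3, hi=15):
    seen = {x: objective(x)}                 # hash table of visited configurations
    tabu, tabu_len = [], lo
    x_opt, f_opt, stall = x, seen[x], 0
    for _ in range(iters):
        trials = moves(x)
        random.shuffle(trials)
        revisits = sum(1 for _, _, t in trials if t in seen)
        for _, _, t in trials:
            seen.setdefault(t, objective(t))
        # pick best non-tabu move (aspiration: a tabu move that beats x_opt passes)
        best = None
        for i, d, t in sorted(trials, key=lambda m: seen[m[2]], reverse=True):
            if (i, -d) not in tabu or seen[t] > f_opt:
                best = (i, d, t)
                break
        if best is None:
            break
        i, d, x = best
        tabu.append((i, d))                  # forbid immediately reversing this move
        tabu = tabu[-tabu_len:]
        # reactive adjustment of the tabu-list length
        tabu_len = min(hi, tabu_len + 1) if revisits > len(trials) // 2 \
                   else max(lo, tabu_len - 1)
        if seen[x] > f_opt:
            x_opt, f_opt, stall = x, seen[x], 0
        else:
            stall += 1
        if stall > 50:                       # no improvement for many iterations
            break
    return x_opt, f_opt

print(reactive_tabu_search((0, 4, 1)))       # typically ((2, 2, 2), 0)
```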
4 Test Results
The effectiveness of the proposed method is verified by application to a practical distribution system in Jinan, China, shown in Fig. 3. Data are sampled by the distribution SCADA system every 15 minutes.
4.1 Operating Conditions Selection
Active power losses and voltage profiles are closely related to the loads, which vary continually between different times of day and different seasons of the year. If the voltage profiles under the peak and valley load conditions are eligible, the profiles over the period between them can be considered acceptable. Consequently, the peak and valley load conditions cover the daily load variation, and two typical days in spring and summer represent the load variation over a year. Therefore, the load variation during a long period can be covered by a few operating conditions, and the optimization results remain valid for the long-period voltage profile variation.
Fig. 3. Structure of case distribution network
The operating conditions considered in the optimization are closely related to the calculation speed. Two typical operating conditions, the maximum and minimum loads in a year, are utilized to "cover" the voltage and load variations in the optimization. The former is taken from the peak load of a typical day in summer, and the latter is the valley load condition in spring.
4.2 Optimization Results
The three-phase power flow is calculated by forward-backward sweeping [10]. All 13 three-phase 10kV/0.4kV transformers in this network have no-load tap-changers (NLTC) of 1±2.5%×2, and the original positions are 3, that is, the ratios are 1 p.u. The phase-to-neutral voltage upper and lower limits for 0.4kV distribution networks are 1+7% (235.4V) and 1-10% (198V) respectively. In Eq. 1, the acceptable upper and lower voltage margins are 1±2%, and the unacceptable limits are 1±20%. In Eq. 9, $\alpha$ is 3 and $\beta$ is 1, which means that active power loss reduction is emphasized more than voltage profile improvement. Four nodes with the biggest sensitivity values are pre-selected to be compensated, shown in Table 1, all of which are under heavier loads. Furthermore, 6010# is the one with the smallest capacity (250kVA), whose impedance is bigger than that of the others (315kVA or 400kVA). Different regulation means are compared. In scheme 1, only the no-load tap-changers (NLTC) and fixed capacitors (FC) are used to regulate the voltage profiles. In scheme 2, NLTC, FC and switchable capacitors (SC) are utilized. After optimization, the tap-changers of all the transformers are regulated from position 3 to position 1, that is, the ratios decrease from 1 to 0.95 p.u. The voltage profiles and
active power losses before and after optimization are shown in Table 2. Both schemes evidently improve the voltage-eligibility ratio and decrease the active power losses under the two operating conditions.

Table 1. Sensitivity and capacity at 0.4kV sides

Locations | Sensitivity (W/kvar) | Capacity (kvar), Scheme 1 (FC) | Capacity (kvar), Scheme 2 (FC+SC)
6010      | 10.9                 | 40                             | 40+80
6007      | 8.4                  | 40                             | 40+80
6000      | 7.2                  | 40                             | 40+80
6008      | 6.1                  | 60                             | 60+80
Table 2. Voltage profiles and active power losses

                        | Summer Max                     | Spring Min
Items                   | TPVER (%) | Power loss (%)     | TPVER (%) | Power loss (%)
Before optimization     | 94.87     | 2.38               | 0         | 6.39
Scheme 1 (NLTC, FC)     | 100       | 2.24               | 100       | 6.36
Scheme 2 (NLTC, SC, FC) | 100       | 2.13               | 100       | 6.36
5 Conclusions
The reactive power optimization problem is formed as a multi-objective optimization problem which aims to improve voltage profiles and decrease active power losses. In this way, the conflict between active power loss reduction and the voltage constraints can be resolved by compromise. The fuzzy evaluation based weighted-sum approach is effective in solving multi-objective optimizations. The membership functions scale the objectives to the unit interval [0,1], so that the satisfaction of different objectives can be compared fairly; the fuzzy evaluation strategy removes the influence of the objectives' original values.
References
1. Jin-Cheng Wang, et al. Capacitor placement and real time control in large-scale unbalanced distribution systems: Loss reduction formula, problem formulation, solution methodology and mathematical justification. IEEE Trans. on Power Delivery 1997; 12(2): 953-58.
2. H. N. Ng, M. M. A. Salama, A. Y. Chikhani. Classification of capacitor allocation techniques. IEEE Transactions on Power Delivery 2000; 15(1): 387-92.
3. Karen Nan Miu, Hsiao-Dong Chiang, Gary Darling. Capacitor placement, replacement and control in large-scale distribution systems by a GA-based two-stage algorithm. IEEE Transactions on Power Systems 1997; 12(3): 1160-66.
4. Yutian Liu, Li Ma, Jianjun Zhang. Reactive power optimization by GA/SA/TS combined algorithms. Int. J. of Electrical Power & Energy Systems 2002; 24(9): 765-69.
5. S. F. Mekhamer, et al. Application of fuzzy logic for reactive-power compensation of radial distribution feeders. IEEE Transactions on Power Systems 2003; 18(1): 206-13.
6. Yutian Liu, Peng Zhang, Xizhao Qiu. Optimal voltage/var control in distribution systems. Int. J. of Electrical Power & Energy Systems 2002; 24(4): 271-76.
7. Feng-Chang Lu, Yuan-Yih Hsu. Fuzzy dynamic programming approach to reactive power/voltage control in a distribution substation. IEEE Transactions on Power Systems 1997; 12(2): 681-88.
8. Andrija T. Saric, Milan S. Calovic, Vladimir C. Strezoski. Fuzzy multi-objective algorithm for multiple solution of distribution systems voltage control. Int. J. of Electrical Power & Energy Systems 2003; 25(2): 145-53.
9. V. J. Rayward-Smith, I. H. Osman, C. R. Reeves, G. D. Smith. Modern Heuristic Search Methods. John Wiley and Sons Ltd, 1996.
10. Whei-Min Lin, et al. Three-phase unbalanced distribution power flow solutions with minimum data preparation. IEEE Transactions on Power Systems 1999; 14(3): 1178-83.
Note on Interval-Valued Fuzzy Set
Wenyi Zeng1,2 and Yu Shi1
1 Department of Mathematics, Beijing Normal University, Beijing, 100875, P.R. China
2 Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, SE171 77, Sweden
[email protected]
[email protected]
Abstract. In this note, we introduce the concept of the cut set of an interval-valued fuzzy set, discuss some properties of cut sets of interval-valued fuzzy sets, propose three decomposition theorems of interval-valued fuzzy sets, and investigate some properties of cut sets of interval-valued fuzzy sets and the mapping H in detail. These works can be used in setting up the basic theory of interval-valued fuzzy sets.
1
Introduction
The theory of fuzzy set, pioneered by Zadeh [10], has achieved many successful applications in practice. As a generalization of fuzzy set, Zadeh [12,13,14] introduced the concept of interval-valued fuzzy set; after that, some authors investigated the topic and obtained some meaningful conclusions: for example, Biswas [2] and Li [7] in interval-valued fuzzy subgroups, Mondal [8] in interval-valued fuzzy topology, Bustince et al. [3], Chen et al. [4], Yuan et al. [10], Arnould [1] and Gorzalczany [6] in approximate reasoning with interval-valued fuzzy sets, Bustince et al. [3] and Deschrijver et al. [5] in interval-valued fuzzy relations and implication, Turksen [9] in normal forms of interval-valued fuzzy sets, and so on. These works show the importance of interval-valued fuzzy set. Just as decomposition theorems of fuzzy sets played an important role in fuzzy set theory, helping to develop many branches such as fuzzy algebra, fuzzy measure and integral, fuzzy analysis and fuzzy decision making, our aim in this paper is to investigate decomposition theorems of interval-valued fuzzy sets, in order to prepare for setting up the basic theory of interval-valued fuzzy sets and to develop some of its relative branches. Our work is organized as follows. In Section 2, we introduce the concept of the cut set of an interval-valued fuzzy set and discuss some of its properties. In Section 3, we propose three decomposition theorems of interval-valued fuzzy sets and give some properties of cut sets of interval-valued fuzzy sets and the mapping H. The final section is the conclusion.
2
Preliminaries
In this section, we introduce the concept of the cut set of an interval-valued fuzzy set and give some of its properties. Let $I = [0,1]$ and let $[I]$ be the set of all closed subintervals of the interval $[0,1]$. Then, according to Zadeh's extension principle [11], we can extend the operations $\vee$, $\wedge$ and $c$ to $[I]$; thus $([I], \vee, \wedge, c)$ is a complete lattice with minimal element $\bar{0} = [0,0]$ and maximal element $\bar{1} = [1,1]$. Furthermore, for $\bar{a} = [a^-, a^+]$ and $\bar{b} = [b^-, b^+]$ we have $\bar{a} = \bar{b} \iff a^- = b^-,\ a^+ = b^+$; $\bar{a} \le \bar{b} \iff a^- \le b^-,\ a^+ \le b^+$; and $\bar{a} < \bar{b} \iff \bar{a} \le \bar{b}$ and $\bar{a} \ne \bar{b}$. Considering that $[I]$ is dense, $([I], \vee, \wedge, c)$ is a superior soft algebra.

Suppose X is a universal set. We call a mapping $A : X \to [I]$ an interval-valued fuzzy set in X, and let IVFSs stand for the set of all interval-valued fuzzy sets in X. For every $A \in$ IVFSs and $x \in X$, $A(x) = [A^-(x), A^+(x)]$ is called the degree of membership of an element x to A; the fuzzy sets $A^- : X \to [0,1]$ and $A^+ : X \to [0,1]$ are then called the lower fuzzy set and the upper fuzzy set of A, respectively. For simplicity, we denote $A = [A^-, A^+]$, and write $\mathcal{F}(X)$ and $\mathcal{P}(X)$ for the sets of all fuzzy sets and all crisp sets in X, respectively. The operations $\cup$, $\cap$, $c$ can be introduced into IVFSs, so that (IVFSs, $\cup$, $\cap$, $c$) is a superior soft algebra.

Definition 1. Let $A \in$ IVFSs and $\lambda = [\lambda_1, \lambda_2] \in [I]$. We define:
\[ \begin{aligned} A_\lambda^{(1,1)} &= A_{[\lambda_1,\lambda_2]}^{(1,1)} = \{x \in X \mid A^-(x) \ge \lambda_1,\ A^+(x) \ge \lambda_2\} \\ A_\lambda^{(1,2)} &= A_{[\lambda_1,\lambda_2]}^{(1,2)} = \{x \in X \mid A^-(x) \ge \lambda_1,\ A^+(x) > \lambda_2\} \\ A_\lambda^{(2,1)} &= A_{[\lambda_1,\lambda_2]}^{(2,1)} = \{x \in X \mid A^-(x) > \lambda_1,\ A^+(x) \ge \lambda_2\} \\ A_\lambda^{(2,2)} &= A_{[\lambda_1,\lambda_2]}^{(2,2)} = \{x \in X \mid A^-(x) > \lambda_1,\ A^+(x) > \lambda_2\} \\ A_\lambda^{(3,3)} &= A_{[\lambda_1,\lambda_2]}^{(3,3)} = \{x \in X \mid A^-(x) \le \lambda_1 \text{ or } A^+(x) \le \lambda_2\} \\ A_\lambda^{(3,4)} &= A_{[\lambda_1,\lambda_2]}^{(3,4)} = \{x \in X \mid A^-(x) \le \lambda_1 \text{ or } A^+(x) < \lambda_2\} \\ A_\lambda^{(4,3)} &= A_{[\lambda_1,\lambda_2]}^{(4,3)} = \{x \in X \mid A^-(x) < \lambda_1 \text{ or } A^+(x) \le \lambda_2\} \\ A_\lambda^{(4,4)} &= A_{[\lambda_1,\lambda_2]}^{(4,4)} = \{x \in X \mid A^-(x) < \lambda_1 \text{ or } A^+(x) < \lambda_2\} \end{aligned} \]
$A_{[\lambda_1,\lambda_2]}^{(i,j)}$ is called the (i,j)th $(\lambda_1,\lambda_2)$-(double value) cut set of the interval-valued fuzzy set A. Specially, if $\lambda = \lambda_1 = \lambda_2$, $A_\lambda^{(i,j)} = A_{[\lambda,\lambda]}^{(i,j)}$ is called the (i,j)th $\lambda$-(single value) cut set of A.

For $A \in \mathcal{F}(X)$ and $\lambda \in [0,1]$, we denote $A_\lambda^1 = \{x \in X \mid A(x) \ge \lambda\}$, $A_\lambda^2 = \{x \in X \mid A(x) > \lambda\}$, $A_\lambda^3 = \{x \in X \mid A(x) \le \lambda\}$, $A_\lambda^4 = \{x \in X \mid A(x) < \lambda\}$. Therefore, we have the following properties.

Property 1.
\[ A_{[\lambda_1,\lambda_2]}^{(i,j)} = (A^-)_{\lambda_1}^i \cap (A^+)_{\lambda_2}^j, \quad i, j = 1, 2; \qquad A_{[\lambda_1,\lambda_2]}^{(i,j)} = (A^-)_{\lambda_1}^i \cup (A^+)_{\lambda_2}^j, \quad i, j = 3, 4. \]
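As a concrete illustration (our own code, not part of the paper), the (i,j)-cut of a discrete IVF set can be computed directly from Definition 1 and checked against Properties 1 and 4:

```python
import operator

# Comparison associated with each cut index, per Definition 1.
CMP = {1: operator.ge, 2: operator.gt, 3: operator.le, 4: operator.lt}

def cut(A, i, j, lam1, lam2):
    """(i,j)-th [lam1,lam2]-cut of a discrete IVF set A: x -> (A-, A+)."""
    if i in (1, 2):    # 'and'-type cuts
        return {x for x, (lo, hi) in A.items()
                if CMP[i](lo, lam1) and CMP[j](hi, lam2)}
    return {x for x, (lo, hi) in A.items()     # 'or'-type cuts (i, j in {3, 4})
            if CMP[i](lo, lam1) or CMP[j](hi, lam2)}

A = {'x1': (0.2, 0.5), 'x2': (0.4, 0.7), 'x3': (0.6, 0.9)}
print(cut(A, 1, 1, 0.4, 0.6))   # {'x2', 'x3'}
print(cut(A, 4, 4, 0.4, 0.6))   # {'x1'}, the complement of the (1,1)-cut
```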
Property 2.
\[ A_{[\lambda_1,\lambda_2]}^{(2,2)} \subseteq A_{[\lambda_1,\lambda_2]}^{(1,2)} \subseteq A_{[\lambda_1,\lambda_2]}^{(1,1)}, \quad A_{[\lambda_1,\lambda_2]}^{(2,2)} \subseteq A_{[\lambda_1,\lambda_2]}^{(2,1)} \subseteq A_{[\lambda_1,\lambda_2]}^{(1,1)}, \quad A_{[\lambda_1,\lambda_2]}^{(4,4)} \subseteq A_{[\lambda_1,\lambda_2]}^{(3,4)} \subseteq A_{[\lambda_1,\lambda_2]}^{(3,3)}, \quad A_{[\lambda_1,\lambda_2]}^{(4,4)} \subseteq A_{[\lambda_1,\lambda_2]}^{(4,3)} \subseteq A_{[\lambda_1,\lambda_2]}^{(3,3)}. \]

Property 3. For $\lambda_1 = [\lambda_1^1, \lambda_1^2]$, $\lambda_2 = [\lambda_2^1, \lambda_2^2] \in [I]$ with $\lambda_1^1 < \lambda_2^1$ and $\lambda_1^2 < \lambda_2^2$, we have $A_{\lambda_1}^{(2,2)} \supseteq A_{\lambda_2}^{(1,1)}$ and $A_{\lambda_1}^{(3,3)} \subseteq A_{\lambda_2}^{(4,4)}$.

Property 4. For $\lambda = [\lambda_1, \lambda_2]$,
\[ \big(A_\lambda^{(1,1)}\big)^c = A_\lambda^{(4,4)}, \quad \big(A_\lambda^{(1,2)}\big)^c = A_\lambda^{(4,3)}, \quad \big(A_\lambda^{(2,1)}\big)^c = A_\lambda^{(3,4)}, \quad \big(A_\lambda^{(2,2)}\big)^c = A_\lambda^{(3,3)}. \]
3
Decomposition Theorem
In this section, we will give three decomposition theorems of interval-valued fuzzy set and some properties of cut set of interval-valued fuzzy set and mapping H. (1,j) Theorem 1. A = [λ1 , λ2 ] · A[λ1 ,λ2 ] , j = 1, 2 [λ1 ,λ2 ]∈[I]
Theorem 2. A =
(4,i)
λ ∗ Aλc
c ,
i = 3, 4.
λ∈[I]
Supposed H be a mapping from [I] to P(X), H : [I] → P(X), for every λ = [λ1 , λ2 ] ∈ [I], we have H(λ) ∈ P(X). Obviously, cut set of interval-valued fuzzy set, Aλ ∈ P(X), it means that mapping H indeed exists. Based on Theorem 1 and Theorem 2, we have the following theorem in general. Theorem 3. For A ∈IVFSs, we have: (1,2)
(1,1)
(1) If there exists A[λ1 ,λ2 ] ⊆ H([λ1 , λ2 ]) ⊆ A[λ1 ,λ2 ] , then A =
[λ1 , λ2 ] ·
[λ1 ,λ2 ]∈[I]
H([λ1 , λ2 ]). (4,4) (4,3) (2) If there exists A[λ1 ,λ2 ] ⊆ H([λ1 , λ2 ]) ⊆ A[λ1 ,λ2 ] , then A = c H([λ1 , λ2 ]c ) .
[λ1 , λ2 ]∗
[λ1 ,λ2 ]∈[I]
Note on Interval-Valued Fuzzy Set
23
In the following, we investigate the relation of cut set of interval-valued fuzzy set and mapping H. For simplicity, we denote λ1 = [λ11 , λ21 ], λ2 = [λ12 , λ22 ] and λ1 , λ2 ∈ [I], then we give some properties of cut set of interval-valued fuzzy set and mapping H. (2,2)
(1,1)
Property 6. If there exists A[λ1 ,λ2 ] ⊆ H([λ1 , λ2 ]) ⊆ A[λ1 ,λ2 ] , then we have: (a) λ1 , λ2 ∈ [I] and λi1< λi2 , i = 1, 2 ⇒ H(λ1 ) ⊇ H(λ2 ), (1,1) (b) A[λ1 ,λ2 ] = H([α1 , α2 ]), λ1 = 0 and λ2 = 0,
α1 λ2 (2,2)
(2,1)
Property 7. If there exists A[λ1 ,λ2 ] ⊆ H([λ1 , λ2 ]) ⊆ A[λ1 ,λ2 ] , then we have: (a) λ1 , λ2 ∈ [I] and λi1 < λi2 , i = 1, 2 ⇒ H(λ1 ) ⊇ H(λ2 ), (2,1) (b) A[λ1 ,λ2 ] = H([λ1 , α2 ]), λ2 = 0,
α2 λ2 (2,2)
(1,2)
Property 8. If there exists A[λ1 ,λ2 ] ⊆ H([λ1 , λ2 ]) ⊆ A[λ1 ,λ2 ] , then we have: i i (a) λ1 , λ2 ∈ [I] and λ1 < λ2 , i = 1, 2 ⇒ H(λ1 ) ⊇ H(λ2 ), (1,2) (b) A[λ1 ,λ2 ] = H([α1 , λ2 ]), λ1 = 0,
α1 λ1 (2,1)
(1,1)
Property 9. If there exists A[λ1 ,λ2 ] ⊆ H([λ1 , λ2 ]) ⊆ A[λ1 ,λ2 ] , then we have: i i (a) λ1 , λ2 ∈ [I] and λ1 < λ2 , i = 1, 2 ⇒ H(λ1 ) ⊇ H(λ2 ), (1,1) (b) A[λ1 ,λ2 ] = H([α1 , λ2 ]), λ1 = 0,
α1 λ1 (1,2)
(1,1)
Property 10. If there exists A[λ1 ,λ2 ] ⊆ H([λ1 , λ2 ]) ⊆ A[λ1 ,λ2 ] , then we have: i i (a) λ1 , λ2 ∈ [I] and λ1 < λ2 , i = 1, 2 ⇒ H(λ1 ) ⊇ H(λ2 ), (1,1) (b) A[λ1 ,λ2 ] = H([λ1 , α2 ]), λ2 = 0,
α2 λ2
H([λ1 , α2 ]),
λ2 =1
24
W. Zeng and Y. Shi (4,4)
(3,3)
Property 11. If there exists A[λ1 ,λ2 ] ⊆ H([λ1 , λ2 ]) ⊆ A[λ1 ,λ2 ] , then we have: (a) λ1 , λ2 ∈ [I] and λi1 < λi2 , i = 1, 2 ⇒ H(λ1 ) ⊆ H(λ2 ), (4,4) (b) A[λ1 ,λ2 ] = H([α1 , α2 ]), λ1 = 0 and λ2 = 0, α1 λ2 (4,4)
(4,3)
Property 12. If there exists A[λ1 ,λ2 ] ⊆ H([λ1 , λ2 ]) ⊆ A[λ1 ,λ2 ] , then we have: (a) λ1 , λ2 ∈ [I] and λi1 < λi2 , i = 1, 2 ⇒ H(λ1 ) ⊆ H(λ2 ), (4,4) (b) A[λ1 ,λ2 ] = H([λ1 , α2 ]), λ2 = 0, α2 λ2 (4,4)
(3,4)
Property 13. If there exists A[λ1 ,λ2 ] ⊆ H([λ1 , λ2 ]) ⊆ A[λ1 ,λ2 ] , then we have: (a) λ1 , λ2 ∈ [I] and λi1 < λi2 , i = 1, 2 ⇒ H(λ1 ) ⊆ H(λ2 ), (4,4) (b) A[λ1 ,λ2 ] = H([α1 , λ2 ]), λ1 = 0, α1 λ1 (3,4)
(3,3)
Property 14. If there exists A[λ1 ,λ2 ] ⊆ H([λ1 , λ2 ]) ⊆ A[λ1 ,λ2 ] , then we have: (a) λ1 , λ2 ∈ [I] and λi1 < λi2 , i = 1, 2 ⇒ H(λ1 ) ⊆ H(λ2 ), (3,4) (b) A[λ1 ,λ2 ] = H([λ1 , α2 ]), λ2 = 0, α2 λ2 (4,3)
(3,3)
Property 15. If there exists A[λ1 ,λ2 ] ⊆ H([λ1 , λ2 ]) ⊆ A[λ1 ,λ2 ] , then we have: (a) λ1 , λ2 ∈ [I] and λi1 < λi2 , i = 1, 2 ⇒ H(λ1 ) ⊆ H(λ2 ), (4,3) (b) A[λ1 ,λ2 ] = H([α1 , λ2 ]), λ1 = 0, α1 λ1
H([α1 , λ2 ]),
λ1 =1
Note on Interval-Valued Fuzzy Set
4
25
Conclusion
In this paper, we introduce the concept of cut set of interval-valued fuzzy set and discuss some properties of cut set of interval-valued fuzzy set, propose three decomposition theorems of interval-valued fuzzy set and investigate some properties of cut set of interval-valued fuzzy set and mapping H in detail. These works can be used in setting up the basic theory of interval-valued fuzzy set. The discussion of representation theorems of interval-valued fuzzy set will be studied in other papers.
References [1] Arnould, T., Tano, S.: Interval valued fuzzy backward reasoning, IEEE Trans, Fuzzy Syst. 3(4)(1995), 425-437 [2] Biswas, R.: Rosenfeld’s fuzzy subgroups with interval-valued membership functions, Fuzzy Sets and Systems 63(1994), 87-90 [3] Bustince, H., Burillo, P.: Mathematical analysis of interval-valued fuzzy relations: Application to approximate reasoning, Fuzzy Sets and Systems 113(2000), 205-219 [4] Chen, S.M., Hsiao, W.H.: Bidirectional approximate reasoning for rule-based systems using interval-valued fuzzy sets, Fuzzy Sets and Systems 113(2000), 185-203 [5] Deschrijver, G., Kerre, E.E.: On the relationship between some extensions of fuzzy set theory, Fuzzy Sets and Systems 133(2003), 227-235 [6] Gorzalczany, M.B.: A method of inference in approximate reasoning based on interval-valued fuzzy sets, Fuzzy Setss and Systems 21(1987), 1-17 [7] Li, X.P., Wang, G.J.: The SH -interval-valued fuzzy subgroup, Fuzzy Sets and Systems 112(2000), 319-325 [8] Mondal, T.K., Samanta, S.K.: Topology of interval-valued fuzzy sets, Indian J. Pure Appl. Math., 30(1999), 23-38 [9] Turksen, I.B.: Interval-valued fuzzy sets based on normal forms, Fuzzy Sets and Systems 20(1986), 191-210 [10] Yuan, B., Pan, Y., Wu, W.M.: On normal form based on interval-valued fuzzy sets and their applications to approximate reasoning, Internat. J. General Systems 23(1995), 241-254 [11] Zadeh, L.A.: Fuzzy sets, Infor. and Contr. 8(1965), 338-353 [12] Zadeh, L.A.: The concept of a linguistic variable and its application to approximate reasoning, Part 1, Infor. Sci. 8(1975), 199-249 [13] Zadeh, L.A.: The concept of a linguistic variable and its application to approximate reasoning, Part 2, Infor. Sci. 8(1975), 301-357 [14] Zadeh, L.A.: The concept of a linguistic variable and its application to approximate reasoning, Part 3, Infor. Sci. 9(1975), 43-80
Knowledge Structuring and Evaluation Based on Grey Theory Chen Huang and Yushun Fan CIMS ERC, Department of Automation, Tsinghua University ,Beijing 100084, PR China
[email protected] [email protected] Abstract. It is important nowadays to provide guidance for individuals or organizations to improve their knowledge according to their objectives, especially in the case of incomplete cognition. Based on grey system theory, a knowledge architecture which consists of grey elements including knowledge fields and knowledge units is built. The method to calculate the weightiness of each knowledge unit, with regard to the user's objectives, is detailed. The knowledge possessed by the user is also evaluated with grey clustering method by whitenization weight function.
1 Introduction Knowledge is a very important factor to knowledge workers and knowledge-intensive organizations. Guidance, which points out what the most important knowledge is according to user’s objectives and gives the evaluation of current knowledge possessions, is required to improve one's knowledge more efficiently. However, the , existing knowledge management technology[1 2] can not resolve this problem since the cognition and evaluation of knowledge is indefinite and difficult. , To resolve this problem, this paper introduces grey theory[3 4] into the knowledge management. The grey theory was founded by Prof. Julong Deng in 1982 and caused intense attention because of its original thought and broad applicability. Grey means the information is incomplete. This theory intended to use extremely limited known information to forecast unknown information. By far, it has already developed a set of technologies including system modeling, analysis, evaluation, optimization, forecasting and decision-making, and has been applied in many fields such as agriculture, environment and mechanical engineering. To knowledge workers and knowledge-intensive organizations, the objectives are usually indefinite and changing, while the cognition of its own knowledge is incomplete. Though the cognition will be continually improved along with the accumulation of knowledge, the characteristic of grey will always exist. Therefore, it is more suitable to use grey system theory, instead of other traditional theories and methods, to model and evaluate one's knowledge. In this paper, the knowledge architecture based on grey theory is given firstly. Then it provides a method to calculate the weightiness of each knowledge unit in the architecture, with regard to the user's objectives. Finally it presents how to evaluate the knowledge with grey clustering method. L. Wang and Y. Jin (Eds.): FSKD 2005, LNAI 3613, pp. 26 – 30, 2005. © Springer-Verlag Berlin Heidelberg 2005
Knowledge Structuring and Evaluation Based on Grey Theory
27
2 Knowledge Architecture Based on Grey Theory Since the pursuing of knowledge should be objective-driven, in this knowledge architecture, we should first define the system objective, which could change with time. The knowledge architecture mainly consists of knowledge fields (KF) and knowledge units (KU), both of which are grey elements. A knowledge field is the field that the knowledge belongs to. Each knowledge field could be divided into many sub-fields. The knowledge fields that can not be divided any more are called knowledge units. The knowledge architecture defined above is shown as Fig. 1. Here KF refers to knowledge field, while KU refers to knowledge unit.
Fig. 1. Knowledge architecture
Because of the incomplete cognition, each element in this knowledge architecture is a grey element, which means the element has not been cognized completely. We use grey degree to represent the extent of grey for grey elements. Grey degree describes the grey extent of a knowledge unit. The value range is (0,1] . 0 represents completely known, while 1 represents totally unknown. According to grey theory, the knowledge units will never turn 'white', in other words, the grey degree will never reach 0. But users can define that when the grey degree is less than a certain value (such as 0.1), the knowledge unit can be regarded as white approximately. What's more, since the grey degree is very hard to be measured precisely, we can define several grey clusters in (0,1] , thus the grey degree could be estimated by judging which grey cluster the knowledge unit belongs to.
3 The Weightiness of Knowledge Units The weightiness of a knowledge unit is a quantitative parameter used to indicate the importance of a knowledge unit, with regard to the user's objective. The basic principle to determine the weightiness of a knowledge unit is the analysis hierarchy process. The process is detailed as following. Firstly, construct judgement matrixes including Objective-KF/KU matrix and KF-KF/KU matrixes. Each node in the architecture can have a judgement matrix with its sub-nodes. Suppose node A has n sub-nodes. Then the judgement matrix is
28
C. Huang and Y. Fan
∆ = {δ ij }, i, j = 1, " , n. , which means it is n-rank. Compare every two sub-nodes: sub-node i and sub-node j. If they are the same important, we have δ ij = 1 . If i is more important than j, δ ij = 5 . If i is extremely more important than j,
δ ij = 9 .
Secondly, we can calculate the weightiness of every mono-layer according to the judgement matrixes. In other words, for a node that has sub-nodes, calculate the weightiness of its sub-nodes. It can be given by calculating the latent root λ max and eigenvector W of the judgement matrix
∆ , where
∆W = λ max W
(1)
Finally, we calculate the overall weightiness, making use of the weightiness of every mono-layer. Suppose the layer 1 has m elements: A1 ,..., Am , and the weightiness of Ai to layer 0 is
a 0i
,i=1,2,…,m. The layer 2 has k elements:
B1 ,..., Bk , and the weightiness of B j
to Ai is bi , j = 1,2,..., k . Here if B j is independent of Ai , we have bi j = 0 . Then the weightiness of B j to layer 0 is j
m
w0j = ∑ a 0i bi j , j = 1,2,...k
(2)
i =1
Actually, since B j only has one father node, supposing its father node is Ai , then the weightiness of B j to layer 0 is
w0j = a 0l bl j
(3)
If there are several objectives, the weightiness of each knowledge unit to each objective can be calculated in the similar way.
4 Knowledge Evaluation with Grey Clustering Method Since knowledge is very hard to evaluate, we use grey clustering method to classify knowledge units with whitenization weight function according to how they are mastered. Definition: Suppose there are n objects to be clustered, m clustering criterion, s different grey clusters. According to sampling xij (i = 1,2,..., n; j = 1,2,..., m) of
object i (i=1,2,…,n) regarding criterion j (j=1,2,…,m), classify object i into grey cluster k ( k ∈ {1,2,..., s} ). We call it grey clustering.[3] In the knowledge architecture, each knowledge unit is an object to be clustered. Clustering criterions are observation criterions used to judge how the knowledge unit is mastered. Grey clusters refer to the grey classes defined based on the extent of knowledge mastery such as ‘bad’, ‘medium’ and ’excellent’.
Knowledge Structuring and Evaluation Based on Grey Theory
29
In grey theory, whitenization weight function is frequently used to describe the preference extent when a grey element takes different value in its value field. Frequently used whitenization weight functions are shown in Fig. 2.
Fig. 2. Whitenization weight function
The whitenization weight function is represented as f jk (•) . If f jk (•) is as shown in Fig. 2(a), Fig. 2(b), Fig. 2(c) or Fig. 2(d), then it is represented as f jk [ x kj (1), x kj ( 2), x kj (3), x kj ( 4)] , f jk [−,−, x kj (3), x kj (4)] , f jk [ x kj (1), x kj (2),−, x kj ( 4)] or f jk [ x kj (1), x kj (2),−,−] separately. In the knowledge architecture, generally speaking, the
whitenization weight f jk [−,−, x kj (3), x kj (4)] ;
function that of
of ‘bad’ ‘medium’
grey grey
cluster cluster
should should
be be
like like
f jk [ x kj (1), x kj (2),−, x kj ( 4)] ; and that of ‘excellent’ grey cluster should be like f jk [ x kj (1), x kj (2),−,−] .
Since the significance and the dimension of each criterion are very different with each other, we adopt fixed weightiness to cluster. We call η j the clustering weightiness of criterion j. The steps of grey fixed-weightiness clustering are: (1) Give the whitenization weight function of sub-cluster k of criterion j f jk (•)( j = 1,2,..., m; k = 1,2,..., s ) . (2) Give the clustering weightiness of each criterion η j ( j = 1,2,..., m) by qualitative analysis or using the method given in Section 3. (3) Given f jk (•) , η j and x ij (i = 1,2,..., n; j = 1,2,..., m ) which are samplings of object i regarding criterion j, calculate grey fixed-weightiness clustering m
quotieties σ k = ∑ f k ( x ) ⋅η ( i = 1,2,..., n; k = 1,2,..., s ). i j ij j j =1
∗
k (4) If σ k = i max {σ i } , we can conclude that object i belongs to grey cluster k . ∗
1≤ k ≤ s
For example, suppose there are 4 knowledge units a, b, c, d. Establishing criterion set as {Ⅰ: number of literatures have been read; Ⅱ: number of published papers; Ⅲ: number of giving lectures}, we classify the four units into three clusters which are ‘excellent’, ‘medium’ and ‘bad’. The samplings are shown in table 1.
30
C. Huang and Y. Fan Table 1. Samplings of each knowledge unit regarding criterions
KU a KU b KU c KU d
criterionⅠ 5 44 20 25
criterionⅡ 1 4 3 5
criterionⅢ 0 3 1 2
Firstly, give the whitenization weight functions: f11[0,40,−,−] , f12 [0,20,−,40] , 1 1 3 f13 [−,−,5,10] , f 2 [0,10,−,−] , f 22 [0,5,−,10] , f 2 [ −,−,4,8] , f 3 [0,4,−,−] , f 32 [ 0,2,−,4] , f 33 [−,−,1,2] . Assume the clustering weightiness of each criterion is
η 1 = 0.2,η 2 = 0.5,η 3 = 0.3 . Then it can be given:
σ 1 = (σ 11 , σ 12 , σ 13∗ ) = (0.075,0.15,1.0) , σ 2 = (σ 21∗ , σ 22 , σ 23 ) = (0.625,0.55,0.5) , σ 3 = (σ 31 , σ 32 , σ 33∗ ) = (0.385,0.65,0.8) , σ 4 = (σ 41 , σ 42∗ , σ 43 ) = (0.525,0.95,0.375) The results indicate that knowledge unit b2 belongs to ‘excellent’ grey cluster, b4 belongs to ‘medium’ grey cluster, b1 and b3 belongs to ‘bad’ grey cluster. Combining the evaluation result with the weightiness of knowledge units, the foremost learning and developing directions can be known.
6 Conclusion This paper gives the knowledge architecture based on grey theory. It provides the method to calculate the weightiness of knowledge and to evaluate them. This method can not only help users to establish their knowledge architecture, but also inform them which are the most important knowledge and which are the foremost learning directions. It also provides the existing knowledge management technology a new idea, which is human-oriented since the user's objective and the expansibility of cognition are well considered. Furthermore, since the method provided in this paper is just objective-oriented, the process-based knowledge modeling and evaluation will be studied in our future work.
References 1. Schreiber, G., Akkermans, H., Anjewierden, A., deHoog, R., Shadbolt, N., VandeVelde, W., Wielinga, B.: Knowledge engineering and management: the commonKADS methodology. MIT Press, Cambridge Massachusetts London England (2000) 2. Mertins, K., Heisig, P., Vorbeck, J.: Knowledge management concepts and best practices. Tsinghua University Press, Beijing (2004) 3. Liu, S., Guo, T., Dang, Y.: Theory and applications of grey systems. Science Press, Beijing (1999) 4. Deng, J.: The tutorial of grey system theory. Huazhong Science and Technology Uniuersity Press, Wuhan (1990) 5. Xia, S., Yang, J., Yang, Z.: The introduction of system engineering. Tsinghua University Press, Beijing (1995)
A Propositional Calculus Formal Deductive System LU of Universal Logic and Its Completeness Minxia Luo1,2 and Huacan He1 1
School of Computer Science, Northwestern Polytechnical, University, Xi’an, 710072, P.R. China
[email protected],
[email protected] 2 Department of Mathematics, Yuncheng University, Yuncheng, 044000, P.R. China
Abstract. Universal logic has given 0-level universal conjunction operation, universal disjunction operation and the universal implication operation. We introduce a new kind of algebra system UBL algebra based on these operations. A general propositional calculus formal deductive system LU of universal logic based on UBL algebras is built up, and its completeness is proved.
1
Introduction
In past several years, fuzzy logic has been application in fuzzy control field. However, fuzzy logic theory is not consummate. Fuzzy logic has been doubt and animadversion (see [1]) because these methods of fuzzy reasoning have not strict logical foundation. Residuated fuzzy logic calculi are related to continuous t-norms which are used as truth functions for the conjunction connective, and their residua as truth function for the implication. Main examples are L ukasiewicz, G¨odel and product logics, related to L ukasiewicz t-norm (x∗y = max(0, x + y − 1)), G¨ odel t-norm (x∗y = min(x, y)) and product t-norm (x∗y = xy) respectively. Rose and Rosser [2] proved completeness results for L ukasiewicz logic and Dummet [3] for G¨ odel logic, and recently three of the authors [4] axiomatized product logic. More recently, H´ ajek [5] has proposed the axiomatic system BL corresponding to a generic continuous t-norm. A kind of fuzzy propositional logic was proposed by Professor Guojun Wang [6] in 1997, and its completeness has proved in 2002(see[8]). A new kind of logical system–based on strong regular residuated lattices has been proposed by Professor Daowu Pei in 2002, and its completeness has been proved(see[9]). A new requirement for classical logic was raised with developing of computer science and modern logic. Non-classical logic and modern logic are developing rapidly. Universal logic principle was proposed by professor Huacan He in 2001(see [10]). A kind of flexible relation between fuzzy propositions has been found in the study of artificial intelligence theory. The flexible relation L. Wang and Y. Jin (Eds.): FSKD 2005, LNAI 3613, pp. 31–41, 2005. c Springer-Verlag Berlin Heidelberg 2005
32
M. Luo and H. He
was described by the model of continuous universal logic operation. The main reasons to influence the flexible relation are generalized correlation and generalized autocorrelation. Some 0-level binary universal operations are defined as follows(see[10]): T (x, y, h) = ite{0|x = 0 or y = 0; (max(0, xm + y m − 1))1/m } S(x, y, h) = ite{1|x = 1 or y = 1; 1 − (max(0, (1 − x)m + (1 − y)m − 1))1/m } I(x, y, h) = ite{1|x≤y; 0|m≤0 and y = 0; (1 − xm + y m )1/m } where m = (3 − 4h)/(4h(1 − h)), h∈[0, 1]. S = ite{β|α; γ}, it is a conditional expression which express that if α is true, then S = β; otherwise, S = γ. Universal logical formal deduction system B in ideal condition(h≡0.5) has been established by author [11], and its completeness has been proved [12]. In this paper, we introduce a kind of algebra–UBL algebra, and its properties are discussed. A general propositional calculus formal system LU based on UBL algebras is built up, and its completeness is proved. Moreover, formal deduction system B is a schematic extension of general universal logic system LU .
2
Main Properties of UBL Algebras
Some properties of the model of 0-level binary universal propositional connectives have been studied in [10]. It has been proved that the model of 0-level universal conjunction is a nilpotent t-norm for h∈(0, 0.75), a strict t-norm for h∈(0.75, 1) in [13]. The model of 0-level universal implication is a residuum of the model of 0-level universal conjunction, i.e., the model of 0-level universal conjunction and the model of 0-level universal implication form adjoint pair (see [13]). In this section, we introduce UBL algebra by the model of 0-level universal conjunction, universal disjunction and universal implication. Its main properties are studied as follows. Definition 1. Let L be a partial order set. A UBL algebra is an algebra (L, ⊗, ⊕, ⇒, 0, 1) with three binary operations and two constants such that (1) (L, ⊗, 1) is a commutative semigroup with the unit element 1, i.e. ⊗ is commutative, associative, and 1⊗x = x for all x; (2) (L, ⊕, 0) is a commutative semigroup with the unit element 0; (3) ⊗ and ⇒ form an adjoint pair, i.e. x⊗y≤z if and only if x≤y⇒z; (4) (x⇒y)⇒((x⊕z)⇒(y⊕z)) = 1; (5) x⇒y = 1 or y⇒x = 1. Example 1. (1) MV-algebra is an UBL algebra. (2) For ∀x, y∈[0, 1], we define the following operations: x⊗y = ite{0|x = 0 or y = 0; (max(0, xm + y m − 1))1/m } x⊕y = ite{1|x = 1 or y = 1; 1 − (max(0, (1 − x)m + (1 − y)m − 1))1/m }
A Propositional Calculus Formal Deductive System
33
x⇒y = ite{1|x≤y; 0|m≤0 and y = 0; (1 − xm + y m )1/m } where m∈R. Then ([0, 1], ⊗, ⊕, ⇒, 0, 1)) is UBL algebra which is called UBL unit interval. Proposition 1. Let L be an UBL algebra. ∀a, b, c∈L, the following properties are true: (1) a⊗b = b⊗a, a⊕b = b⊕a (2) (a⊗b)⊗c = a⊗(b⊗c), (a⊕b)⊕c = a⊕(b⊕c) (3) 1⊗a = a, 0⊕a = a (4) a⊗b≤c or a≤b⇒c (5) (a⊗b)⇒c = a⇒(b⇒c) (6) a≤b or a⇒b = 1 (7) 1⇒a = a (8) a⇒(b⇒a) = 1 (9) a⇒(b⇒c) = b⇒(a⇒c) (10) a≤b⇒c or b≤a⇒c (11) a⇒(b⇒a⊗b) = 1 (12) ((a⇒b)⊗a)⇒b = 1 (13) b⇒c≤(a⇒b)⇒(a⇒c) (14) a⇒b≤(b⇒c)⇒(a⇒c) (15) a⇒b≤(a⊗c)⇒(b⊗c) (16) a⇒b≤(a⊕c)⇒(b⊕c) (17) an ≤am , m≤n, where ak+1 = ak ⊗a. Proof omit.
3
A Propositional Calculus Formal Deductive System LU
The 0-level universal conjunction, universal disjunction and universal implication propositional connectives are ∧h , ∨h and →h , and written by ∧, ∨ and → respectively. Definition 2. For the 0-level model of universal conjunction, universal disjunction and universal implication, a propositional calculus system LU is defined as follows: The set F of well-formed formulas (wf s)of LU is defined as usual from a countable set of propositional variables P1 , P2 , · · ·, three connectives ∧, ∨, → and the truth constant ¯ 0 for 0. Further definable connectives are: ¬A : A→¯ 0 A≡B : (A→B)∧(B→A) Definition 3. The following formulas are axioms of universal logic system LU : (U 1) (A→B)→((B→C)→(A→C)) (U 2) A∧B→A (U 3) A∧B→B∧A (U 4) A→A∨B
34
M. Luo and H. He
(U 5) A∨B→B∨A (U 6) (A∧(A→B))→(B∧(B→A)) (U 7) (A→(B→C))→((A∧B)→C) (U 8) ((A∧B)→C)→(A→(B→C)) (U 9) ((A→B)→C)→(((B→A)→C)→C) (U 10) ¯ 0→A (U 11) (A∨B)∨C→A∨(B∨C) (U 12) (A→B)→(A∨C→B∨C) The deduction rule is modus ponens(MP): i.e., from A and A→B infer B. In a natural manner, we can introduce the concepts such as proof, theorem, deduction from a formula set Γ , Γ -consequence in the system LU . A theory of LU is a set of wf s. Γ A denotes that A is provable in the theory Γ . A denotes that A is a theorem of system LU . We denote Thm(LU ) = {A∈F| A} Ded(Γ ) = {A∈F| Γ A} Proposition 2. The hypothetical syllogism (HS rule for short) holds, i.e. assume Γ = {A→B, B→C}, then Γ A→C. Proof. Assume Γ = {A→B, B→C}, then 10 20 30 40 50
A→B (Γ ) (A→B)→((B→C)→(A→C)) (U 1) (B→C)→(A→C) (10 , 20 , MP) B→C (Γ ) A→C (30 , 40 , MP)
Theorem 1. The following formulae are theorems of the formal system LU : (T 1) (A→(B→C))→(B→(A→C)) (T 2) (B→C)→((A→B)→(A→C)) (T 3) A→(B→A) (T 4) A→A (T 5) A→(B→(A∧B)) (T 6) (A∧B)→(A→B) (T 7) (A→B∧C)→(A∧B→C) (T 8) (A∨B→C)→(A→B∨C) (T 9) ((A→C)∧(B→C))→(A∧B→C) (T 10) ((A→B)∧(A→C))→(A→B∨C) (T 11) (A∧(A→B))→B (T 12) (A→B)→(A∧C→B∧C) (T 13) A∧(B∧C)→(A∧B)∧C (T 14) (A∧B)∧C→A∧(B∧C)
A Propositional Calculus Formal Deductive System
Proof. (T 1) 10 20 30 40 50 60 70
B∧A→A∧B (U 3) (B∧A→A∧B)→((A∧B→C)→(B∧A→C)) (U 1) (A∧B→C)→(B∧A→C) (10 , 20 , MP) (B∧A→C)→(B→(A→C)) (U 8) (A∧B→C)→(B→(A→C)) (30 , 40 , HS) (A→(B→C))→(A∧B→C) (U 7) (A→(B→C))→(B→(A→C)) (50 , 60 , HS)
(T 2) 10 (A→B)→((B→C)→(A→C)) (U 1) 20 ((A→B)→((B→C)→(A→C)))→ ((B→C)→((A→B)→(A→C))) (T 1) 30 (B→C)→((A→B)→(A→C)) (10 , 20 , MP) (T 3) 10 A∧B→A (U 2) 20 (A∧B→A)→(A→(B→A)) (U 8) 30 A→(B→A) (10 , 20 , MP) (T 4) 10 20 30 40 50
A→(B→A) (T 3) (A→(B→A))→(B→(A→A)) (T 1) B→(A→A) (10 , 20 , MP) B ( Let B be any axiom) A→A (30 , 40 , MP)
(T 5) 10 20 30 40 50
B∧A→A∧B (U 3) (B∧A→A∧B)→(B→(A→A∧B)) (U 8) B→(A→A∧B) (10 , 20 , MP) (B→(A→A∧B))→(A→(B→A∧B)) (T 1) A→(B→A∧B) (30 , 40 , MP)
(T 6) 10 20 30 40 50
A∧B→B∧A (U 3) B∧A→B (U 2) A∧B→B (10 , 20 , HS) B→(A→B) (T 3) A∧B→(A→B) (30 , 40 , HS)
(T 7) 10 20 30 40 50
B∧C→(B→C) (T 6) (B∧C→(B→C))→((A→B∧C)→(A→(B→C))) (T 2) (A→B∧C)→(A→(B→C)) (10 , 20 , MP) (A→(B→C))→(A∧B→C) (U 7) (A→B∧C)→(A∧B→C) (30 , 40 , HS)
(T 8)
35
36
M. Luo and H. He
10 A→A∨B (U 4) 20 (A→A∨B)→((A∨B→C)→(A→C)) (U 1) 30 (A∨B→C)→(A→C) (10 , 20 , MP) 40 C→C∨B (U 4) 50 C∨B→B∨C (U 5) 60 C→B∨C (40 , 50 , HS) 70 (C→B∨C)→((A→C)→(A→B∨C)) (T 2) 80 (A→C)→(A→B∨C) (60 , 70 , MP) 90 (A∨B→C)→(A→B∨C) (30 , 80 , HS) (T 9) 10 (A→C)∧(B→C)→(A→C) (U 2) 20 A∧B→A (U 2) 30 (A∧B→A)→((A→C)→(A∧B→C)) (U 1) 40 (A→C)→(A∧B→C) (20 , 30 , MP) 50 (A→C)∧(B→C)→(A∧B→C) (10 , 40 , HS) (T 10) 10 (A→B)∧(A→C)→(A→C)∧(A→B) (U 3) 20 (A→C)∧(A→B)→(A→C) (U 2) 30 (A→B)∧(A→C)→(A→C) (10 , 20 , HS)) 0 4 C→B∨C (U 4) 50 (C→B∨C)→((A→C)→(A→B∨C)) (T 2) 60 (A→C)→(A→B∨C) (40 , 50 , MP) 70 ((A→B)∧(A→C))→(A→B∨C) (30 , 60 , HS) (T 11) 10 (A→B)→(A→B) (T 4) 20 ((A→B)→(A→B))→((A→B)∧A→B) (U 7) 30 (A→B)∧A→B (10 , 20 , MP) 0 4 A∧(A→B)→(A→B)∧A (U 3) 50 ((A→B)∧A→B)→(A∧(A→B)→B) (U 1) 60 (A∧(A→B))→B (40 , 50 , MP) (T 12) 10 (A∧(A→B))→B (T 11) 20 B→(C→B∧C) (T 5) 30 (A∧(A→B))→(C→B∧C) (10 , 20 , HS) 40 ((A∧(A→B))→(C→B∧C))→ (A→((A→B)→(C→B∧C))) (U 8) 50 A→((A→B)→(C→B∧C)) (30 , 40 , MP) 60 ((A→B)→(C→B∧C))→ (C→((A→B)→B∧C)) (T 1) 70 A→(C→((A→B)→B∧C)) (50 , 60 , HS) 80 (A→(C→((A→B)→B∧C)))→ (A∧C→((A→B)→B∧C)) (U 7) 90 (A∧C)→((A→B)→B∧C) (70 , 80 , MP) 100 ((A∧C)→((A→B)→B∧C))→ ((A→B)→(A∧C→B∧C)) (T 1) 110 (A→B)→(A∧C→B∧C) (90 , 100 , MP)
A Propositional Calculus Formal Deductive System
(T 13) 10 ((A∧B)∧C→D)→(A∧B→(C→D)) (U 8) 20 (A∧B→(C→D))→(A→(B→(C→D))) (U 8) 30 ((A∧B)∧C→D)→(A→(B→(C→D))) (10 , 20 , HS) 0 4 (B→(C→D))→(B∧C→D) (U 7) 50 ((B→(C→D))→(B∧C→D))→ ((A→(B→(C→D)))→(A→(B∧C→D))) (T 2) 60 (A→(B→(C→D)))→(A→(B∧C→D)) (40 , 50 , MP) 70 (A→(B∧C→D))→(A∧(B∧C)→D) (U 7) 80 (A→(B→(C→D)))→(A∧(B∧C)→D) (60 , 70 , HS) 90 ((A∧B)∧C→D)→(A∧(B∧C)→D) (30 , 80 , HS) 0 10 LetD = (A∧B)∧C 110 A∧(B∧C)→(A∧B)∧C (90 , 100 , MP) The proof of (T 14) is similar to (T 13).
37
Definition 4. The binary relation ∼ on F is called provable equivalence relation, A∼B if and only if A→B, B→A Theorem 2. The relation ∼ is a congruence relation of F . The quotient algebra [F ] = F /∼ = {[A]| A∈F} is an UBL algebra in which the partial ordering ≤ is defined as follows: [A]≤[B] if and only if A→B Proof. It is clear that the relation ∼ is an equivalent relation on F . It can be proved that the relation ∼ is a congruence relation of F by (U 1), (U 12), (T 1), (T 2) and (T 12). The quotient algebra [F ] = F /∼ = {[A]| A∈F} is an UBL algebra, where [A]∨[B] = [A∨B], [A]∧[B] = [A∧B], [A]→[B] = [A→B].
4
The Completeness of Formal System LU
Now we extend the semantical concepts of the system LU onto general UBL algebra. Definition 5. Let L be an UBL algebra. A (∧, ∨, →)-type homomorphism, i.e. v : F →L v(A∧B) = v(A)⊗v(B), v(A∨B) = v(A)⊕v(B), v(A→B) = v(A)⇒v(B) is called an L-valuation of the system LU . The set Ω(L) of all L-valuation of the system LU is called the L-semantics of the system LU .
38
M. Luo and H. He
Definition 6. Let L be an UBL algebra, A∈F, Γ ⊆F. (1) A is called an L-tautology, denote by |=L A, if ∀v∈Ω(L), v(A) = 1. (2) A is called an L-semantics consequence of Γ , denote by Γ |=L A, if ∀v∈ Ω(L), we always have v(A) = 1 whenever v(Γ )⊆{1}. A is L-tautology when Γ = ∅. We use T (L) denote the set of all L-tautology, i.e., T (L) = {A∈F| |=L A}. Theorem 3 (L-soundness). Let L be an UBL algebra, ∀A∈F, ∀Γ ⊆F. If Γ A, then Γ |=L A. Specially, Thm(LU )⊆T (L). Proof. ∀Γ ⊆F, if v(Γ )⊆{1} and Γ A, then A∈Axm(LU ), or A∈Γ , or A is obtained from B and B→A by MP, where B and B→A are Γ -consequence, and v(B) = v(B→A) = 1. If A∈Axm(LU ) or A∈Γ , then v(A) = 1. If A is obtained from B and B→A by MP, then v(A) = 1 since v(A)≥v(B)⊗(v(B)⇒v(A)) = v(B)⊗v(B→A) = 1.
Definition 7. Let L = (L, ⊕, ⊗, ⇒, 0, 1) be an UBL algebra. A filter on L is a non-empty set F ⊆L such that for each a, b∈L, a∈F, and b∈F implies a⊗b∈F, a∈F and a≤b implies b∈F. F is a prime filter iff for each a, b∈L, (a⇒b)∈F or (b⇒a)∈F. Remark. If F is a filter on UBL algebra, x, y∈F , then x⊕y∈F . In fact, it is true that x⊗y≤x≤x⊕y. Proposition 3. Let L be an UBL algebra and F be a filter. We define the relation as follows: a∼F b if and only if (a⇒b)∈F and (b⇒a)∈F. then (1) ∼F is a congruence and the quotient algebra L/∼F is an UBL algebra. (2) L/∼F is linearly ordered if and only if F is a prime filter. Proposition 4. Let L be an UBL algebra and a∈L\{1}. Then there is a prime filter F on L such that a∈F / . Proof. Let A be the set of all filters on L not containing a. Then A=∅ by {1}∈A. There is a maximal element F in A, because every chain {Ft }t∈I has an upper bound ∪t∈I Ft in partially order set (A, ⊆) by Zorn,s Lemma. To show that F is a prime filter on L.
A Propositional Calculus Formal Deductive System
39
If there exist b, c∈L, such that b⇒c∈F / and c⇒b∈F / , let F1 = {x∈L|∃y∈F, ∃n∈N, y⊗(b⇒c)n ≤x} F2 = {u∈L|∃v∈F, ∃m∈N, v⊗(c⇒b)m ≤u} We can prove that F1 and F2 are filters on L such that b⇒c∈F1 , F ⊂F1 , c⇒b∈F2 , F ⊂F2 . Now we only prove that F1 ∈A or F2 ∈A, i.e., a∈F / 1 or a∈F / 2. In fact, if a∈F1 and a∈F2 , then ∃y, v∈F , ∃n, m∈N such that a≥y⊗(b⇒c)n , a≥v⊗(c⇒b)m . Without loss of generality, we assume n≥m, then a≥v⊗(c⇒b)n . Hence a≥max(y⊗(b⇒c)n , v⊗(c⇒b)n )≥y⊗v⊗(max((b⇒c)n , (c⇒b)n )) = y⊗v Thus a∈F by y⊗v∈F , a contradiction. Therefore, a∈F / 1 or a∈F / 2.
Theorem 4. Let L be any an UBL algebra. Then exist UBL chain {Lt |t∈I} such that L can be isomorphically embedded into L∗ = Πt∈I Lt . Proof. Let P = {F | F is a prime filter on L}, then P=∅. We have that L/∼F is an UBL chain for all F ∈P by Proposition 3. We may prove that the mapping i : L→L∗ , x→([x]F )F ∈P is an embedding from L to L∗ , i.e., the mapping i is monomorphism. In fact, obviously, i is a homomorphism. ∀a, b∈L, if a=b, without loss of generality, we assume a≤b, then a⇒b=1. There exist a prime filter F on L such that a⇒b∈F / by Proposition 4. Thus [a]F ≤[b]F in L/∼F . Hence [a]F =[b]F , i.e., i(a)=i(b).
Proposition 5. If L1 and L2 are UBL algebra, L1 ∼ =L2 , then T (L1 ) = T (L2 ). Proposition 6. Let L, L1 be UBL algebra. If L1 is a subalgebra of L, then T (L)⊆T (L1 ). Theorem 5. Let L be an UBL chain and A∈F. If A∈T (L), then A∈T (L1 ), where L1 is any UBL algebra. Proof. We shall show that A∈T (L∗ ) when A∈T (Lt )(∀t∈I) by Theorem 4, Proposition 5 and Proposition 6, where L∗ = Πt∈I Lt , {Lt }t∈I is chain. In fact, ∀v∈Ω(L∗ ), then ft v∈Ω(Lt ), where ft : L∗ →Lt is a projection mapping. If v(A) = (at )t∈I =1, then exist t∈I such that at =1, i.e. ft v(A) = ft (v(A)) =1, a contradiction. Thus A∈T (L∗ ).
40
M. Luo and H. He
Theorem 6 ([L]-completeness). Let A∈F. Then A if and only if |=[L] A. Proof. The necessity is obviously by Theorem 3. [A]≤[B] if and only if A→B by Theorem 2 in [L]. [A] = [B] if and only if A∼B. If B∈Thm(LU ), then ∀A∈F, A→B, i.e., [A]≤[B]. Thus [B] = 1 is the maximal element of [L]. Assume [A] = 1, then [A] = [B], therefore A∼B, i.e., A∈Thm(LU ). Hence the maximal element 1 of [L] is the set of all theorems of system LU . Suppose A∈T ([L]), then ∀v∈Ω([L]), v(A) = 1. Specially, v : F →[L], A→[A], A∈F, v(A) = 1, i.e., [A] = 1. Thus A∈Thm(LU ), i.e., A.
Theorem 7 (completeness). Let A∈F, then the following are equivalent: (i) A; (ii) for each linearly ordered UBL algebra L, A∈T (L); (iii) for each UBL algebra L, A∈T (L). Proof. The implications (i)⇒(ii), (ii)⇒(iii) and (iii)⇒(i) have been proved by Theorem 3, Theorem 5 and Theorem 6 respectively.
Theorem 8 (strong completeness). Let Γ ⊆F, A∈F. The following are equivalent: (i) Γ A; (ii) for each linearly ordered UBL algebra L, Γ |=L A; (iii) for each UBL algebra L, Γ |=L A. Proof. The implications (i)⇒(ii), (ii)⇒(iii) have been proved by Theorem 3 and Theorem 5 respectively. We shall show (iii)⇒(i). We define the relation on F ∼Γ : A∼Γ B if and only if Γ A→B, Γ B→A It can be proved easily that ∼Γ is a congruence on F , and corresponding quotient algebra [F ]Γ = F /∼Γ = {[A]Γ |A∈F} is an UBL algebra, and ∀v∈Ω([F ]Γ ), v(Γ )⊆{1}. We can prove that 1=Ded(Γ ) is the maximal element of [F ]Γ . If Γ |=[F ]Γ A, then ∀v∈Ω([F ]Γ ), v(A) = 1. Specially, for the mapping v : F →[F ]Γ , A→[A]Γ , A∈F we have [A]Γ = 1=Ded(Γ ). Hence A∈Ded(Γ ), i.e., Γ A.
5
Conclusions
We introduce a new kind of algebra system UBL algebra based on 0-level universal conjunction operation, universal disjunction operation and universal implication operation. A general propositional calculus formal deductive system
A Propositional Calculus Formal Deductive System
41
LU of universal logic based on UBL algebras is built up, and its completeness is proved. It show that syntax and semantics of formal system LU is concordant. We may establish a solid logical foundation for flexible reasoning. Moreover, the formal deductive system B ([11]) of universal logic in ideal condition(h≡0.5) is a schematic extension of general universal logic system LU .
References 1. Hongxing L.: To see the success of fuzzy logic from mathematical essence of fuzzy control-on the paradoxical sussess of fuzzy logic. Fuzzy systems and mathematics 9(1995)1–14(in chinese) 2. Rose,P.A., Rosser,J.B.: Fragments of many valued statement calculi. Trant.A.M.S. 87(1958)1–53 3. Dummett,M.: A propositional calculus with denumerable matrix. Journal of Symbolic Logic 24(1959)97–106 4. H´ ajek,P., Godo,L., Esteva,F.: A complete many-valued logic with product conjunction. Archive for Mathematical Logic 35(1996)191–208 5. H´ ajek,P.: Metamathematics of fuzzy logic. Kluwer Academic Publishers(1998) 6. Guojun W.: A formal deductive system of fuzzy propositional calculus. Chinese Science Bulletin 42(1997)1041–1045(in chinese) 7. Guojun W.: The full implication triple I method for fuzzy reasoning. Science in China(Series E) 29(1999)43–53(in chinese) 8. Daowu P., Guojun W.: The completeness and applications of the formal system L∗ . Science in China(Series F) 45(2002)40–50 9. Daowu P.: A logic system based on strong regular residuated lattices and its completeness. Acta Mathematica Sinica 45(2002)745–752(in chinese) 10. Huacan He.: Principle of Universal Logic. Beijing: Chinese Scientific Press(2001)(in chinese) 11. Minxia L., Huacan H.: The formal deductive system B of universal logic in the ideal condition. Science of Computer 31(2004)95–98(in chinese) 12. Minxia L., Huacan H.: The completeness of the formal deductive system B of universal logic in the ideal condition. Science of computer(in chinese)(in press) 13. Minxia L., Huacan H.: Some algebraic properties about the 0-level universal operation model of universal logic. Fuzzy systems and Mathematics(in chinese)(in press)
Entropy and Subsethood for General Interval-Valued Intuitionistic Fuzzy Sets Xiao-dong Liu, Su-hua Zheng, and Feng-lan Xiong Department of Mathematics, Ocean University of China, Qingdao 266071, P.R. China
[email protected],
[email protected],
[email protected] Abstract. In this paper, we mainly extend entropy and subsethood from intuitionistic fuzzy sets to general interval-valued intuitionistic fuzzy sets, propose a definition of entropy and subsethood , offer a function of entropy and construct a class of subsethood function. Then from discussing the relationship between entropy and subsethood, we know that while choosing the subsethood, we can get some kinds of function of entropy based on subsethood. Our work is also applicable to practical fields such as: neural networks, expert systems, and other.
1
Introduction
Since L.A.Zadeh introduced fuzzy sets [1] in 1965, a lot of new theories treating imprecision and uncertainty have been introduced. Some of them are extensions of fuzzy set theory. K.T.Atanassov extended this theory, proposed the definition of intuitionistic fuzzy sets (IF S, for short) ([3] [4]) and interval-valued intuitionistic fuzzy sets (IV IF S, for short) [5], which have been found to be highly useful to deal with vagueness out of several higher order fuzzy sets. And then, in the year 1999, Atanassov defined a Lattice-intuitionistic fuzzy set [6]. A measure of fuzziness often used and cited in the literature is entropy first mentioned in 1965 by L.A.Zadeh [2]. The name entropy was chosen due to an intrinsic similarity of equations to the ones in the Shannon entropy [7]. But they are different in types of uncertainty and Shannon entropy basically measures the average uncertainty in bits associated with the prediction of outcomes in a random experiment. This theory was extended and a non-probabilistic-type entropy measure for IF S was proposed by Eulalia Szmidt, Janusz Kacprzyk [8]. We extended it again onto general interval-valued intuitionistic fuzzy sets (V IF S, for short) in this paper. We organize this paper as follows: in section 2, at first, we define the definition of V IF S , then discuss the relation between V IF S and some other kinds of intuitionistic fuzzy sets. From the discussion, we make sure that all these sets are subset of V IF S in the view of isomorphic imbedding mapping, and this ensures that our work about the theory of entropy and subsethood is a reasonable extension of F S, IF S and IV IF S. So according to section 2, theories in this paper are feasible on all these kinds of intuitionistic fuzzy sets. In section 3, we L. Wang and Y. Jin (Eds.): FSKD 2005, LNAI 3613, pp. 42–52, 2005. c Springer-Verlag Berlin Heidelberg 2005
Entropy and Subsethood for General Interval-Valued IFS
43
mainly extend entropy [7] onto V IF S(X). Firstly, we give a reasonable definition of A is less fuzzy than B on V IF S. For there is not always a comparable relation between any two elements of L, which is a lattice. So we premise the definition of A refines B with the discussion of the incomparable condition. Then we definite entropy by giving the properties of A refines B in theorem 3.1. Finally from theorem 3.2, we get a idiographic entropy function. In section 4, we definite a class of subsethood function by three real functions. Then, from discussing the relation between entropy and subsethood, we get a class of entropy function which is useful in different conditions and eg 2 ensures that is reasonable. Thus, while choosing the subsethood, we can get some kinds of function of entropy based on subsethood.
2
Definitions and Quantities of Some Kinds of IFS
Throughout this paper, let X is a nonempty definite set,|X| = n. Definition 2.1. An interval number over [0, 1] is defined as an object of the form: a = [a− , a+ ] with the property: 0 ≤ a− ≤ a+ ≤ 1 . Let L denote all interval numbers over [0, 1]. We define the definition of relation (it is specified by “ ≤ ”): a ≤ b ⇔ a− ≤ b− and a+ ≤ b+ . We can easily prove that “ ≤ ” is a partially ordered relation on L. So < L, “ ≤ ” > is a complete lattice where − − a∨ b = [a ∨ b−, a+ ∨ b+ ] , a∧ b = [a ∧ b− , a+ ∧ b + ] , − + − ai = [ ai ai ] , ai = [ ai , a+ . i ] i∈I
i∈I
i∈I
i∈I
i∈I
i∈I
Let 1 = [1, 1] denotes the maximum of < L, “ ≤ ” > and 0 = [0, 0] denotes the minimum of < L, “ ≤ ” >. We define the “C” operation, where ac = [1 − a+ , 1 − a− ] . We can easily prove that the complement operation “C” has the following properties: c c c c c c 1. (a ) =a, = (ac ∨bc ) , c 2. (a∨b) = (a ∧b ),c (a∧b) c c c c 3. ( ai ) = a i , ( ai ) = ai , 4. If a ≤ b , then (a ≥ b ) ∀a, b ∈ L . i∈I
i∈I
i∈I
i∈I
Then we will give the definition of intuitionistic fuzzy sets on L .
Definition 2.2. A general interval valued intuitionistic fuzzy sets A on X is an object of the form: A = {< x, µA (x), νA (x), x ∈ X >} , where µA : X → L and νA : X → L , with the property: 0 ≤ µA (x) ≤ νA (x)c ≤ 1 (∀x ∈ x). For briefly, we denote A = {< x, µA (x), νA (x) >, x ∈ X} by A =< µA (x), νA + − + (x) > and denote µA (x) = [µ− A (x), µA (x)] ∈ L, νA (x) = [νA (x), νA (x)] ∈ L . Let V IF S(X) denote all set of general interval valued intuitionistic fuzzy sets on L . We define the relation “ ≤ ”: A ≤ B ⇔ µA (x) ≤ µB (x) and νA (x) ≥ νB (x) . We can easily prove that ≤ is also a partially ordered relation on V IF S(X). So < V IF S(X), “ ≤ ” > is a complete lattice where
44
X.-d. Liu, S.-h. Zheng, and F.-l. Xiong
A ∨ B = [µA (x) ∨ µB (x), νA (x) ∧ νB (x)] , A ∧ B = [µA (x) ∧ µB (x), νA (x) ∨ νB (x)] , c
i∈I i∈I
Ai = [
i∈I
Ai = [
i∈I
µAi , µAi ,
i∈I i∈I
νAi ] , νAi ] .
We define the “C” operation, where A =< νA (x) , µA (x) > . We can easily prove that the complement operation “C” has the following c properties: c 1. (Ac )c = A , 2. (A∨B)c = (Ac ∧B c ), 3. ( Ai )c = Ai , ( Ai )c = Ai , i∈I
i∈I
i∈I
i∈I
4. If A ≤ B, then (Ac ≥ B c ) ∀A, B ∈ V IF S(X). So we know < V IF S(X), “ ≤ ” > is a complete lattice with complement . Let I denotes the maximum of V IF S(X) and θ denotes the minimum of V IF S(X), Where µI (x) = 1 , νI (x) = 0 , µθ (x) = 0 , νθ (x) = 1. Then we will discuss the relationship between IV IF S(X)[5] and V IF S(X) . Definition 2.3. An interval-valued intuitionistic fuzzy set on X ( IV IF S(X), for short) is an object of the form B = {< x, MB (x), NB (x), x ∈ X >} . Where MB : X → L and NB : X → L (∀x ∈ X) , With the condition 0 ≤ MB+ (x) + NB+ (x) ≤ 1 (∀x ∈ X). Proposition 2.1.IV IF S(X)[5] is a subset of V IF S(X) . Proof. To any B belongs to IV IF S(X)[5] , we have B = {< x, MB (x), NB (x), x ∈ X >} , Where MB : X → L and NB : X → L With the condition 0 ≤ MB+ (x) + NB+ (x) ≤ 1 (∀x ∈ X) , + that is 0 ≤ MB (x) ≤ 1 − NB+ (x) (∀x ∈ X). from definition 2.2 we can easily prove 0 ≤ MB (x) ≤ NB (x)C ≤ 1 (∀x ∈ X). That is B belongs to V IF S(X) . So IV IF S(X)[5] is a subset of V IF S(X) . We have an example here: Eg1. Let A = {< x, [0, 0.3], [0, 0.8] >, x ∈ X} for [0, 0.8]c = [0.2, 1], [0, 0.3] ≤ [0.2, 1], we have A ∈ V IF S(X). But from 0.3 + 0.8 ≥ 1, we get A ∈ V IF S(X). Then we get IV IF S(X) ⊂ V IF S(X) and IV IF S(X) = V IF S(X). It is shown in the Theorem 2.1 that in the view of isomorphism insertion, we consider P (X) is a subset of V IF S(X). Definition 2.4. ,where A function P (X) → V IF S(X) 1 , if x ∈ A, 0 , if x ∈ A, µδ(A) (x) = ; νδ(A) (x) = (∀A ∈ P (X)) . 0 , if x ∈ A 1 , if x ∈ A Theorem 2.1.The function δ is a injection map reserving union, intersection and complement. Proof. Let Ai ∈ I , 1. It is easy to prove that δ is a injection . 2. µδ( Ai ) (x) = 1 ⇔ x ∈ Ai ⇔ ∃i0 ∈ I satisfying x ∈ Ai0 , i∈I i∈I µδ(Ai ) (x) = 1 ⇔ ∃i0 ∈ I, µδ(Ai0 ) (x) = 1 ⇔ ∃i0 ∈ I satisfying x ∈ Ai0 , i∈I That is µδ( Ai ) (x) = µδ(Ai ) (x) (∀x ∈ X) , i∈I
i∈I
Entropy and Subsethood for General Interval-Valued IFS
νδ(
i∈I
Ai ) (x)
45
= 1 ⇔ x ∈ Ai ⇔ ∀i ∈ I there is x ∈ Ai . i∈I
νδ(Ai ) (x) = 1 ⇔ ∀i ∈ I, νδ(Ai ) (x) = 1 ⇔ ∀i ∈ I , there is x ∈ Ai . i∈I That is νδ( Ai ) (x) = νδ(Ai ) (x) (∀x ∈ X). So we have δ( Ai ) = δ(Ai ). i∈I i∈I i∈I i∈I 3. Similarly, we can prove δ( Ai ) = δ(Ai ) , i∈I
i∈I
µδ(Ac ) (x) = 1 ⇔ x ∈ Ac ⇔ x ∈ A, (1) νδ(Ac ) = 1 ⇔ x ∈ Ac ⇔ x ∈ A , (2) µ(δ(A))c (x) = ν(δ(A)) (x) = 1 ⇔ x ∈ A, (3) ν(δ(A))c (x) = µ(δ(A)) (x) = 1 ⇔ x ∈ A . (4) From (1) and (3) we have µδ(Ac ) (x) = µ(δ(A))c (x) (∀x ∈ X) , νδ(Ac ) = ν(δ(A))c (x) (∀x ∈ X) . Then we get δ(Ac ) = (δ(A))c . So δ is a injection map reserving union, intersection and complement. In the view of imbedding mapping, we may consider P (X) as the subset of V IF S(X). Let F (X) is the class of all fuzzy sets proposed by L.A.Zadeh, let ξ: F (x) → V IF S(X) , ξ(F ) = A =< µA (x), νA (x) > , µA (x) = [F (x), F (x)] , νA (x) = [1 − F (x), 1 − F (x)] . So that the injection ξ holds union, intersection and complement. In the same way, we may also consider F (X) as the subset of V IF S(X)in the view of imbedding mapping. 4.
Definition 2.5. An intuitionistic fuzzy set on X (IF S(X) , for short) is an object of the form: A = {< x, µA (x), νA (x), x ∈ X >} , where µA : X → [0, 1] and νA : X → [0, 1] , with the property: 0 ≤ νA (x) + µA (x) ≤ 1 (∀x ∈ x). Let’s discuss the relationship between IF S(X)[4] and V IF S(X) . Let η: IF S(X) → V IF S(X) Where any C belongs to IF S(X)[4] , C = {< x, α(x), β(x) >, x ∈ X} (α(x) + β(x) ≤ 1), η(C) = A =< µA (x), νA (x) >, µA (x) = [α(x), α(x)] , νA (x) = [β(x), β(x)] . Similarly we have IF S(X) is the subset of V IF S(X) in the view of imbedding mapping. All the above ensure that what we get in this paper are feasible on P (X), F (X), IF S(X) and IV IF S(X). A method of judging whether set A ∈ V IF S(X) is a subset of P (X), is given by the following theorem . Theorem 2.2. Let A ∈ V IF S(X) , so we have A ∨ Ac = I ⇔ A ∈ δ(P (X)) . Proof. “ ⇒ ” Let A =< µA (x), νA (x) > , from definition 1) we have: Ac =< νA (x), µA (x) >, µA∨Ac = µA (x) ∨ νA (x), νA∨Ac = µA (x) ∧ νA (x) . − From µA (x)∨νA (x) = 1 , we have µ− (1) A (x)∨νA (x) = 1 ; + + From µA (x)∧νA (x) = 0 , we have µA (x)∧νA (x) = 0 . (2) There are two different cases: + − 1. If νA (x) = 0 , then we have νA (x) = 0 , that is νA (x) = 0 , and from (1) − + we have µA (x) = 1 . So we get µA (x) = 1, that is µA (x) = 1 ; − 2. If µ+ A (x) = 0 then we have µA (x) = 0 , that isµA (x) = 0 .
46
X.-d. Liu, S.-h. Zheng, and F.-l. Xiong
− + And from (1) we have νA (x) = 1 . So we get νA (x) = 1 that is νA (x) = 1 . So we know that for any x belongs to X, µA (x) and νA (x) can only be 0 or 1 with the following property: µA (x) = 1 ⇔ νA (x) = 0 . Let K = {x|x ∈ X, µA (x) = 1} , from definition we have: µδ(K) (x) = 1 ⇔ x ∈ K ⇔ µA (x) = 1 ; νδ(K) (x) = 0 ⇔ x ∈ K ⇔ µA (x) = 1 ⇔ νA (x) = 0 (∀x ∈ X) . So we get µδ(K) (x) = µA (x), νδ(K) (x) = νA (x) (∀x ∈ X) . That is δ(K) = A . “ ⇐ ” It is straight-forward .
Corollary 2.1. A ∧ Ac = θ ⇔ A ∈ δ(P (X)) . From proposition 2.2 to proposition 2.4, we give some qualities of V IF S. Proposition 2.2. Let A ∈ V IF S(X),then we have: 1. µA (x) = 1 ⇒ νA (x) = 0 (∀x ∈ X), 2. νA (x) = 1 ⇒ µA (x) = 0 (∀x ∈ X). Proof. 1. For νA (x) ≤ (µA (x))c = (1)c = 0 , we have νA = 0 . Similarly we can prove 2. Proposition 2.3. A ∨ Ac = θ (∀A ∈ V IF S(X)) Proof. Let A =< µA (x), νA (x) > then Ac =< νA (x), µA (x) >, A ∨ Ac =< µA (x) ∨ νA (x), νA (x) ∧ µA (x) >. If A ∨ Ac = θ , then we have µA (x) ∨ νA (x) = 0, νA (x) ∧ µA (x) = 1 . It is contradiction. Hence A ∨ Ac = θ (∀A ∈ V IF S(X)). Proposition 2.4. Let A ∈ V IF S(x) ,Then we have νA∨Ac (x) = νA∧Ac (x) ⇔ µA (x) = νA (x) (∀x ∈ X) . Proof. ” ⇒ ” From the definition2.2, − + + νA∨Ac (x) = νA (x) ∧ µA (x) = [νA (x) ∧ µ− A (x), νA (x) ∧ µA (x)] , − − + νA∧Ac (x) = νA (x) ∨ µA (x) = [νA (x) ∨ µA (x), νA (x) ∨ µ+ A (x)] . From νA∨Ac (x) = νA∧Ac (x) , − − − + + + + we have νA (x) ∧ µ− A (x) = νA (x) ∨ µA (x) , νA (x) ∧ µA (x) = νA (x) ∨ µA (x) . − − + + So we get νA (x) = µA (x), νA (x) = µA (x) . That is νA (x) = µA (x) . “ ⇐ ” It is straight-forward .
3
Entropy on VIFS
De Luca and Termini [12] first axiomatized non-probabilistic entropy. The De Luca-Termini axioms formulated for intuitionistic fuzzy sets are intuitive and have been wildly employed in the fuzzy literature. To extend this theory, firstly we definite the function as follows: Definition 3.1. M : V IF S(X) → [0, 1], where ∀A ∈ V IF S(x) , − + 1 M (A) = 2n [2 − νA (x) − νA (x)] (∀x ∈ X) . x∈X
Entropy and Subsethood for General Interval-Valued IFS
47
− + Proposition 3.1. To any A ∈ V IF S(X), M (A) = 0 ⇔ νA (x) = νA (x) = 1 ⇔ A = θ (∀x ∈ X). − + Proof. M (A) = 0 ⇔ 2 − νA (x) − νA (x) = 0 (∀x ∈ X) , − + − + for νA (x) ≤ 1, νA (x) ≤ 1 ,we have M (A) = 0 ⇔ νA (x) = νA (x) = 1 (∀x ∈ X) . From proposition 2.2, we have µA (x) = 0 . So we have µA (x) = µθ (x), νA (x) = − + νθ (x) . That is A = θ , hence M (A) = 0 ⇔ νA (x) = νA (x) = 1 ⇔ A = θ (∀x ∈ X).
Proposition 3.2. Let A, B ∈ V IF S(X) , then we have A ≥ B ⇒ M (A) ≥ M (B) . Proof. For A ≥ B , then we have νA (x) ≤ νB (x) (∀x ∈ X) . So we have − − + + νA (x) ≤ νB (x), νA (x) ≤ νB (x) . − + − + Hence [2−νA (x)−νA (x)] ≥ [2 − νA (x) − νA (x)] . That isM (A) ≥ M (B) . x∈X
x∈X
Proposition 3.3. Let A, B ∈ V IF S(X) , then we have νA (x) ≤ νB (x) and M (A) ≥ M (B) ⇔ νA (x) = νB (x)
(∀x ∈ X) .
Proof. “ ⇐ ”It is straight-forward. − − + + “ ⇒ ” For νA (x) ≤ νB (x) , we have νA (x) ≤ νB (x) and νA (x) ≤ νB (x) . − + − + So we get 2 − νA (x) − νA (x) ≥ νB (x) − νB (x) (∀x ∈ X). If there is a x0 ∈ X, − − + + let one of the following two inequations νA (x) < νB (x), νA (x) < νB (x) establish, then we have − + − + M (A) = [2 − νA (x) − νA (x)] + [2 − νA (x0 ) − νA (x0 )] x =xo , x∈X − + − + > [2 − νB (x) − νB (x)] + [2 − νB (x0 ) − νB (x0 )] x =xo , x∈X
= M (B) . − − This is contrary to M (A) = M (B) . So for any x ∈ X ,we have νA (x) = νB (x) , + + νA (x) = νB (x) . That is νA (x) = νB (x) (∀ ∈ X) . Definition 3.2. Let A, B ∈ V IF S(X) with the following properties: 1. If µB (x) ≥ νB (x), then µA (x) ≥ µB (x) and νA (x) ≤ νB (x) , 2. If µB (x) < νB (x), then µA (x) ≤ µB (x) and νA (x) ≥ νB (x) , 3. If µB (x) and νB (x) is incomparable, there are two conditions: − 1). If µ− B (x) ≥ νB (x) , then there are four inequations as follow: − − + − − + + (1)µA (x) ≥ µB (x) , (2)µ+ A (x) ≤ µB (x) , (3)νA (x) ≤ νB (x) , (4)νA (x) ≥ νB (x) . − − 2).If µB (x) < νB (x) then there are four inequations as follow: − + + − − + + (5)µ− A (x) ≤ µB (x) , (6)µA (x) ≥ µB (x) , (7)νA (x) ≥ νB (x) , (8)νA (x) ≤ νB (x) . Thus we call that A refines B (that is A is less fuzzy than B).
Theorem 3.1. LetA, B ∈ V IF S(X) , and A refines B, then we have 1. (A ∧ Ac ) ≤ (B ∧ B c ) , 2. (A ∨ Ac ) ≥ (B ∨ B c ) . Proof. We prove inequation 1 first: 1. If µB (x) ≥ νB (x) from definition 3.2, we have µA (x) ≥ µB (x) νA (x) ≤ νB (x) . Then we have µA (x) ≥ µB (x) ≥ νB (x) ≥ νA (x) .
and
48
X.-d. Liu, S.-h. Zheng, and F.-l. Xiong
Hence µA∧Ac (x) = µA (x) ∧ νA (x) = νA (x), µB∧B c (x) = µB (x) ∧ νB (x) = νB (x) , νA∧Ac (x) = µA (x) ∨ νA (x) = µA (x) , νB∧B c (x) = µB (x) ∨ νB (x) = µB (x) . So µA∧Ac (x) ≤ µB∧B c (x) , νA∧Ac (x) ≥ νB∧B c (x) . 2. If µB (x) < νB (x) , from definition 3.2, we have µA (x) ≤ µB (x) and νA (x) ≥ νB (x) . Then we have µA (x) ≤ µB (x) ≤ νB (x) ≤ νA (x) . Hence µA∧Ac (x) = µA (x)∧νA (x) = µA (x), µB∧B c (x) = µB (x)∧νB (x) = µB (x) , νA∧Ac (x) = µA (x) ∨ νA (x) = νA (x), νB∧B c (x) = µB (x) ∨ νB (x) = νB (x) . So µA∧Ac (x) ≤ µB∧B c (x) , νA∧Ac (x) ≥ νB∧B c (x) . 3. In the case of µB (x) and νB (x) is incomparable, there are still another two different cases: − 1). If µ− B (x) ≥ νB (x) (9), for µB (x) is not larger than νB (x), we have + + − − µB (x) ≤ νB (x)(10). From (1), (3) and (9) we get µ− A (x) ≥ µB (x) ≥ νB (x) ≥ − − − νA (x). So we have µA (x) ≥ νA (x) (11) and from (2), (4) and (10) we get + + + + + µ+ A (x) ≤ µB (x) ≤ νB (x) ≤ νA (x). So we have µA (x) ≤ νA (x) (12). Then from (9) and (10), we have − + + − + µB∧B c (x) = µB (x) ∧ νB (x) = [µ− B (x) ∧ νB (x), µB (x) ∧ νB (x)] = [νB (x), µB (x)] , − − + + − + νB∧B c (x) = µB (x) ∨ νB (x) = [µB (x) ∨ νB (x), µB (x) ∨ νB (x)] = [µB (x), νB (x)] . − + In the same way, from (11) and (12), we get µA∧Ac (x) = [νA (x), µA (x)] and + νA∧Ac (x) = [µ− A (x), νA (x)] . Then from (2) and (3), we get µA∧Ac (x) ≤ µB∧B c (x) . From(1) and (4) we get νA∧Ac (x) ≥ νB∧B c (x) . − + 2). If µ− B (x) < νB (x) (13), for µB (x) is not less than νB (x), we have µB (x) ≥ + − − − − νB (x) (14). From (5), (13) and (7), we get µA (x) ≤ µB (x) < νB (x) ≤ νA (x). − − + + Hence µA (x) < νA (x) (15), and from (6),(14) and (8), we get µA (x) ≥ µB (x) ≥ + + + νB (x) ≥ νA (x). Henceµ+ A (x) ≥ νA (x) (16). Then from (13)and (14), we have − − + − + µB∧B c (x) = µB (x) ∧ νB (x) = [µB (x) ∧ νB (x), µ+ B (x) ∧ νB (x)] = [µB (x), νB (x)], − − + + − νB∧B c (x) = µB (x) ∨ νB (x) = [µB (x) ∨ νB (x), µB (x) ∨ νB (x)] = [νB (x), µ+ B (x)]. + In the same way, from (15) and (16), we get µA∧Ac (x) = [µ− A (x), νA (x)] − + and νA∧Ac (x) = [νA (x), µA (x)]. Then from (5) and (8), we get µA∧Ac (x) ≤ µB∧B c (x). From(6) and (7), we get νA∧Ac (x) ≥ νB∧B c (x). Summary, we get inequation 1. (A ∧ Ac ) ≤ (B ∧ B c ). From inequation 1. , we can easily get inequation 2. (A ∨ Ac ) ≥ (B ∨ B c ). Then we can define the definition of entropy on V IF S. The De Luca-Termini axioms[12] were formulated in the following way. Let E be a set-to-point mapping E : F (X) → [0, 1] . Hence E is a fuzzy set defined on fuzzy sets. E is an entropy measure if it satisfies the four De Luca and Termini axioms: 1. E(A) = 0 iff A ∈ X(A non-fuzzy) , 2. E(A) = 1 iff µA (x) = 0.5 for ∀x ∈ X, 3. E(A) ≤ E(B) if A is less fuzzy than B, i.e., if µA ≤ µB when µB ≤ 0.5 and µA ≤ µB when µB ≥ 0.5 , 4.E(A) = E(Ac ) . Since the De Luca and Termini axioms were formulated for fuzzy sets, we extend them for V IF S. Definition 3.3. A real function E : V IF S(X) → [0, 1] is an entropy measure if E has the following properties:
Entropy and Subsethood for General Interval-Valued IFS
1.E(A) = 0 ⇔ A ∈ δ(P (X)) , 3.E(A) ≤ E(B) if Arefines B ,
2.E(A) = 1 ⇔ νA (x) = µA (x) 4.E(A) = E(Ac ) .
49
(∀x ∈ X) ,
Definition 3.4. Let σ: V IF S(X) → [0, 1], where any A ∈ V IF S(X), we have c ) σ(A) = M(A∧A (∀x ∈ X). M(A∨Ac ) From proposition 2.3, we know that A ∨ Ac = θ(∀A ∈ V IF S(X)) , then from proposition 3.1, we know that M (A ∨ Ac ) = 0, and then from A ∨ Ac ≥ A ∧ Ac and proposition 3.2, we know M (A ∨ Ac ) ≥ M (A ∧ Ac ), that is c ) 0 ≤ σ(A) = M(A∧A M(A∨Ac ) ≤ 1 . So the definition of is reasonable. Theorem 3.2. σ is entropy. Proof. 1. σ(A) = 0 ⇔ M (A ∧ AC ) = 0 ⇔ A ∧ Ac = θ ⇔ A ∈ δ(P (X)) . 2. σ(A) = 1 ⇔ M (A ∧ Ac ) = M (A ∨ Ac ) . For A ∨ Ac ≥ A ∧ Ac , we get νA∨Ac (x) ≤ νA∧Ac (x) , (∀x ∈ X) . So from proposition 3.3, we have νA∨Ac (x) = νA∧Ac (x) (∀x ∈ X). And from proposition 2.4, we get µA (x) = νA (x) (∀x ∈ X). 3. If A refines B, we have A ∧ Ac ≤ B ∧ B c . From proposition 3.2, we have M (A ∧ Ac )≤ M (B ∧ B c ). And from A ∨ Ac ≥ B ∨ B c , we have M (A ∨ Ac ) c ) M(B∧B c ) ≥ M (B ∨ B c ). So we get M(A∧A M(A∨Ac ) ≤ M(B∨B c ) , that is σ(A) ≤ σ(B) . 4. It is straight-forward. So we constructed a class of entropy function on V IF S.
4
Subsethood on VIFS
Definition 4.1. A real function Q : IF S(X) × IF S(X) → [0, 1] is called subsethood, if Q has the following properties: 1. Q(A, B) = 0 ⇔ A = I, B = θ, 2. If A ≤ B ⇒ Q(A, B) = 1, 3. If A ≥ B and Q(A, B) = 1, then A = B, 4. If A ≤ B ≤ C, then Q(C, A) ≥ Q(C, B), Q(C, A) ≤ (Q(B, A)). Definition 4.2. We define the function f : V IF S(X) × V IF S(X) → [0, 1] by − − − 1 f (A, B) = 2n { min[1, g(ϕ(µ− A (x) − µB (x) + 1), ψ(νA (x) − νB (x) + 1))] x∈X + + + + min[1, g(ϕ(µ+ A (x) − µB (x) + 1), ψ(νA (x) − νB (x) + 1))]} . x∈X
where ϕ: [0, 2] → [0, 2] and ψ: [0, 2] → [0, 2] with the following properties: 1. α > β ⇒ ϕ(α) > ϕ(β), ψ(α) > ψ(β) (α, β ∈ [0, 2]), 2. ϕ(α) = 2 ⇔ α = 2; ψ(β) = 0 ⇔ β = 0 , 3. ϕ(1) = ψ(1) = 1 . And the function g : [0, 2] × [0, 2] → [0, 2] with the following properties: 1. α > β ⇒ g(α, γ) < g(β, γ), g(γ, α) > g(γ, β) (α, β, γ ∈ [0, 2]) , 2. g(α, β) = 0 ⇔ α= 2, β= 0 , 3. g(1, 1) = 1 .
50
X.-d. Liu, S.-h. Zheng, and F.-l. Xiong
Theorem 4.1. f is subsethood. Proof. Let A, B ∈ V IF S(X) 1. For each x ∈ X, we have µI (x) = 1, νI (x) = 0, µθ (x) = 0 and νθ (x) = 1. − − − Then g(ϕ(µ− A (x) − µB (x) + 1), ψ(νA (x) − νB (x) + 1)) = 0 , + + + + g(ϕ(µA (x) − µB (x) + 1), ψ(νA (x) − νB (x) + 1)) = 0 . So we get f (I, θ) = 0 . On the contrary, If f (A, B) = 0 , Then for each x ∈ X , We have − − − g(ϕ(µ− A (x) − µB (x) + 1), ψ(νA (x) − νB (x) + 1)) = 0 . − − − − And then ϕ(µA (x) − µB (x) + 1) = 2 , ψ(νA (x) − νB (x) + 1) = 0 . − − − − So we can get (µA (x)−µB (x)+1) = 2 (1), (νA (x)−νB (x)+1) = 0 (2) . − − − From (1), we have µ− A (x) = µB (x) + 1 . for µA (x) ≤ 1, µB (x) ≥ 0 , We have − − µA (x) = 1 , µB (x) = 0 . − − − − From (2), we have νB (x) = νA (x) + 1 . Similarly, we haveνA (x) = 0, νB (x) = 1 . + + + By the same way, we can prove µA (x) = 1, νA (x) = 0, µB (x) = 0, and + νB (x) = 1 . Thus we have µA (x) = 1 , νA (x) = 0 , µB (x) = 0 , and νB (x) = 1 , that is A = I and B = θ . 2. If A ≤ B , then for each x ∈ X, we have µA (x) ≤ µB (x) and νA (x) ≥ νB (x). So we have the following four inequations: − + + µ− A (x) − µB (x) + 1 ≤ 1 , µA (x) − µB (x) + 1 ≤ 1 , − − + + νA (x) − νB (x) + 1 ≥ 1 , νA (x) − νB (x) + 1 ≥ 1 . − − + Then we have ϕ(µA (x) − µB (x) + 1) ≤ 1 , ψ(µA (x) − µ+ B (x) + 1) ≤ 1 , − − + + ϕ(νA (x) − νB (x) + 1) ≥ 1 , ψ(νA (x) − νB (x) + 1) ≥ 1 . And then we have − − − g(ϕ(µ− A (x) − µB (x) + 1), ψ(νA (x) − νB (x) + 1)) ≥ g(1, 1) = 1 , + + + + g(ϕ(µA (x) − µB (x) + 1), ψ(νA (x) − νB (x) + 1)) ≥ g(1, 1) = 1 . So we get − − − 1 f (A, B) = 2n { min[1, g(ϕ(µ− A (x) − µB (x) + 1), ψ(νA (x) − νB (x) + 1)) + + + + + min[1, g(ϕ(µA (x) − µB (x) + 1), ψ(νA (x) − νB (x) + 1))} =1. 3. If f (A, B) = 1, then for each x ∈ X, we have − − − g(ϕ(µ− A (x) − µB (x) + 1) , ψ(νA (x) − νB (x) + 1)) ≥ 1 , + + + + g(ϕ(µA (x) − µB (x) + 1) , ψ(νA (x) − νB (x) + 1)) ≥ 1 . And from A ≥ B, we have µA (x) ≥ µB (x), µA (x) ≤ µB (x) (∀x ∈ X). So we − + + have the following four inequations: µ− A (x) − µB (x) + 1 ≥ 1, µA (x) − µB (x) + 1 ≥ − − + + 1, νA (x) − νB (x) + 1 ≤ 1, νA (x) − νB (x) + 1 ≤ 1. If A = B , then at least one of the four inequations given above would be never equal: − 1) If µ− A (x) − µB (x) + 1 > 1, then we have − − − g(ϕ(µ− A (x) − µB (x) + 1), ψ(νA (x) − νB (x) + 1)) − − < g(1, ψ(νA (x) − νB (x) + 1)) ≤ g(1, 1) ≤ 1. It is contradiction. + 2)If µ+ (x) − µ (x) + 1 > 1, then we have A B + + g(ϕ(µ+ (x) − µ+ A B (x) + 1), ψ(νA (x) − νB (x) + 1)) + + < g(1, ψ(νA (x) − νB (x) + 1)) ≤ g(1, 1) = 1. It is contradiction.
Entropy and Subsethood for General Interval-Valued IFS
51
− − 3) If νA (x) − νB (x) + 1 < 1, then we have − − − g(ϕ(µ− A (x) − µB (x) + 1), ψ(νA (x) − νB (x) + 1)) − − < g(ϕ(µA (x) − µB (x) + 1), 1) ≤ g(1, 1) = 1. It is contradiction. + + 4) If νA (x) − νB (x) + 1 < 1, then we have + + + g(ϕ(µ+ A (x) − µB (x) + 1), ψ(νA (x) − νB (x) + 1)) + + < g(ϕ(µA (x) − µB (x) + 1), 1) ≤ g(1, 1) = 1. It is contradiction. So we have A = B. 4. Let A ≤ B ≤ C (A, B, C ∈ V IF S(X)) ,then for each x belongs to X, we have µA (x) ≤ µB (x) ≤ µC (x) and νA (x) ≥ νB (x) ≥ νC (x). So we have − − − − − − − µ− C (x) − µA (x) ≥ µC (x) − µB (x) , νC (x) − νA (x) ≤ νC (x) − νB (x) , + + + + + + + + µC (x) − µA (x) ≥ µC (x) − µB (x) , νC (x) − νA (x) ≤ νC (x) − νB (x) . Then we get − − 1 − f (C, A) = 2n { min[1, g(ϕ(µ− C (x) − µA (x) + 1), ψ(νc (x) − νA (x) + 1))] + + + + + min[1, g(ϕ(µC (x) − µA (x) + 1), ψ(νC (x) − νA (x) + 1))} − − − 1 − ≤ 2n { min[1, g(ϕ(µC (x) − µB (x) + 1), ψ(νc (x) − νB (x) + 1))] + + + + min[1, g(ϕ(µ+ (x) − µ (x) + 1), ψ(ν (x) − ν (x) + 1))]} C B C B = f (C, B) . In the same way, we can prove f (C, A) = f (B, A) .
Eg2. For ϕ, ψ : [0, 2] → [0, 2], g : [0, 2] × [0, 2] → [0, 2], ϕ(x) = x, ψ(y) = y, g(x, y) = [(2 − x) + y] × 12 . It is straight-forward to prove that ϕ and ψ has the following qualities: 1). α > β ⇒ ϕ(α) > ϕ(β), ψ(α) > ψ(β), (α, β ∈ [0, 2]) . 2). ϕ(α) = 2 ⇔ α = 2 ; ψ(β) = 0 ⇔ β = 0 . 3). ϕ(1) = ψ(1) = 1 . And to g , it also has qualities as follows: 1). α > β ⇒ g(α, γ) < g(β, γ), g(γ, α) > g(γ, β) (α, β, γ ∈ [0, 2]) , 2). g(α, β) = 0 ⇔ α= 2, β= 0 , 3).g(1, 1) = 1 . It is shown in theorem 4.2. the relationship between entropy and subsethood on V IF S. Theorem 4.2. Let Q is subsethood, ρ: V IF S(X) → [0, 1], where ρ(A) = Q(A ∨ Ac , A ∧ Ac ) (∀A ∈ V IF S(X)), then we have ρ is entropy. Proof. 1. ∀A ∈ δ(P (X)) for A ∨ Ac = I, A ∧ Ac = θ , we have ρ(A) = Q(A ∨ Ac , A ∧ Ac ) = Q(I, θ) = 0 . On the contrary, if ρ(A) = Q(A ∨ Ac , A ∧ Ac ) = 0 . Then we have A ∨ Ac = I, A ∧ Ac = θ , so we get ∀A ∈ δ(P (X)) . 2. If E(A) = 1 that is Q(A ∨ Ac , A ∧ Ac ) = 1 and A ∨ Ac ≥ A ∧ Ac . Then we have A ∨ Ac = A ∧ Ac . So we get µA (x) = νA (x) ; On the contrary, if µA (x) = νA (x), we have A ∨ Ac = A ∧ Ac , So we may get Q(A ∨ Ac , A ∧ Ac ) = E(A) = 1. 3. If A refines B, then we have A∧Ac ≤ B ∧ B c ≤ B ∨ B c ≤ A ∨ Ac , so we can get E(A) = Q(A ∨ Ac , A ∧ Ac ) ≤ Q(B ∨ B c , A ∧ Ac ) ≤ Q(B ∨ B c , B ∧ B c ) ≤ E(B). 4. E(A) = Q(A ∨ Ac , A ∧ Ac ) = Q(Ac ∨ A, Ac ∧ A) = E(Ac ) .
52
X.-d. Liu, S.-h. Zheng, and F.-l. Xiong
From theorem 4.1 and theorem 4.2, we can construct a class of reasonable subsethood and entropy function, which are useful in different practical conditions.
5
Conclusion
In this paper, we offered different kinds of entropy function and subsethood function and they would be practical in different experiments. would be practical in different experiments.
References 1. Zadeh, L.A.: Fuzzy sets. Inform.and Control 8 (1965) 338–353 2. Zadeh, L.A.: Fuzzy sets and Systems. In: Proc. Systems Theory, Polytechnic Institute of Brooklyn, New York (1965) 29–67 3. Atanassov, K.T.: Intuitionistic Fuzzy Sets. In: V.Sgurev(Ed.), VII ITKR’s session, Sofia, June (1983), Central Sci.and Techn.Library, Bulgaria Academy of Sciences(1984) 4. Atanassov, K.T.: Intuitionistic fuzzy sets. Fuzzy Sets and Systems 20 (1986) 87–97 5. Atanassov, K.T., Gargov, G.: Interval valued intuitionistic fuzzy sets. Fuzzy Sets and Systems 31 (1989) 343–349 6. Atanassov, K.T.: Intuitionistic Fuzzy Sets. Physica-Verlag, Heidelberg, New York(1999) 7. Jaynes, E.T.: Where do We Stand on Maximum Entropy? In: Levine, Tribus (Eds.), The Maximum Entropy Formalism, MIT Press, Cambridge, MA. 8. Szmidt, E., Kacprzyk, J.: Entropy for Intuitionistic Fuzzy Sets. Fuzzy Sets and Systems 118 (2001) 467–477 9. Yu-hai Liu, Feng-lan Xiong, Subsethood on Intuitionistic Fuzzy Sets. International Conference on Machine Learning and Cybernetics V.3 (2002) 1336–1339 10. Glad Deschrijver, Etienne E.Kerre, On her Relationship between some Extensions of Fuzzy Set Theory. Fuzzy Sets and Systems 133 (2003) 277–235 11. Guo-jun Wang, Ying-ming He: Intuitionistic Sets and Fuzzy Sets. Fuzzy Sets and Systems 110 (2000) 271–274 12. Luca, A.Ed., Termini, S.: A Definition of a Non-probabilistic Entropy in the Setting of Fuzzy Sets Theory. Inform. and Control 20 (1972) 301–312
The Comparative Study of Logical Operator Set and Its Corresponding General Fuzzy Rough Approximation Operator Set Suhua Zheng, Xiaodong Liu, and Fenglan Xiong Department of Mathematics, Ocean University of China; Qingdao 266071, Shandong, P.R. China
[email protected] [email protected] Abstract. This paper presents a general framework for the study of fuzzy rough sets in which constructive approach is used. In the approach, a pair of lower and upper general fuzzy rough approximation operator in the lattice L is defined. Furthermore, the entire property and connection between the set of logical operator and the set of its corresponding general fuzzy rough approximation operator are examined, and we prove that they are 1-1 mapping. In addition, the structural theorem of negator operator is given. At last, the decomposition and synthesize theorem of general fuzzy rough approximation operator are proved. That for how to promote general rough approximation operator to suitable general fuzzy rough approximation operator, that is, how to select logical operator , provides theory foundation.
1
Introduction
After Pawlak proposed the notion of rough set in 1982[3], many authors have investigated attribute reduction and knowledge representation based on rough set theory. However, about fuzzy rough set and fuzzy rough approximation theory, the study ([4][5][6]) is not penetrating enough. In 2002, all kinds of logical operators and their corresponding fuzzy rough sets are defined and investigated in [1]. In 2003, generalized approximation operators are studied in [10]. This paper is the development and extension, we define the upper and lower general fuzzy rough approximation operator decided by t-norm and t-conorm in the lattice L. Besides, we study the entire properties and relativity between the logical operator set and its corresponding fuzzy rough approximation operator set. This paper is organized as follows. In section 2, we define the mapping σ from the set of t-norm operators in the lattice L to the set of general fuzzy rough approximation operators, and we prove that σ is a surjection. Furthermore, we construct 1-1 mapping from the set of t-norm equivalent classes to the set of general upper fuzzy roughapproximation operators. Subsequently, in section 3, we define the mapping N from t-norm operator set to the pair of lower and upper fuzzy rough approximation operator set. In addition, we give the constructional theorem of negator, so we can construct different forms of negator L. Wang and Y. Jin (Eds.): FSKD 2005, LNAI 3613, pp. 53–58, 2005. c Springer-Verlag Berlin Heidelberg 2005
54
S. Zheng, X. Liu, and F. Xiong
operators , then get the dual t-norm and t-conorm operators with respect to N and their corresponding general lower and upper fuzzy rough approximation operators. In section 4, we prove the decomposition and synthesize theorem of general fuzzy rough approximation operator. In this paper, unless otherwise stated, we will consider L be a lattice with the largest and least element 0, 1; X, Y and Z to be finite and nonempty set; FL (X) to be all the fuzzy sets on X whose range is L; R ∈ FL (X × Y ) where R is a serial fuzzy relation[10]: for every x ∈ X, there exists y ∈ Y , such that R(x, y) = 1.
2
The Properties and Relativity of t-norm Operator Set and General Upper Fuzzy Rough Approximate Operator Set
Definition 2.1 ([1]) A t-norm operator in L is a mapping t : L × L −→ L, if for ∀x, y, y1 , y2 ∈ L, t satisfies the following conditions: 1)t(x, y) = t(y, x); 2)t(x, 1) = x ; 3)If y1 ≥ y2 , then t(x, y1 ) ≥ t(x, y2 ). From the definition 2.1, it is easy to get the follows: 4) t(x, 0) = 0 for all x ∈ L ;
5) t(x, y) ≤ x ∧ y
for all x, y ∈ L;
Definition 2.2. ([1][3])Let t be a t-norm in L, then ϕt : FL (Y ) → FL (X) is called a general upper fuzzy rough approximation operator decided by t, if for every x ∈ X, A ∈ FL (Y ), it have: ϕt (A)(x) = sup t(R(x, y), A(y)). y∈Y
For briefly, we denote ϕt by ϕ. And, (X, Y, R) is called the general fuzzy rough approximation space. Theorem2.1. Let λ ∈ L, xi , xj , xk , xe ∈ X; y r , y d , y p , y q ∈ Y ; then ϕ has the following properties: p1) ϕ(z z), λ). p2) ϕ(φ) = φ. λ )(x) = t(R(x, p3)ϕ( zλi i ) = ϕ(zλi i ). (zλi i ∈ FL (Y ), I is finite; and if λi ∈ {λi |i ∈ I}, i∈I
i∈I
i∈I
i, j ∈ I, i = j, it followszi = zj ). p4)If R(xi , y d ) = R(xj , y r ), it follows that ϕ(yλd )(xi ) = ϕ(yλr )(xj ) . p5)If R(xi , y d ) = 1, it follows that ϕ(yλd )(xi ) = λ. d i r j p6) ϕ(yR(x j ,y r ) )(x ) = ϕ(yR(xi ,y d ) )(x ). p7) Let R(xi , y d ) ≥ R(xj , y r ), it follows that ϕ(yλd )(xi ) ≥ ϕ(yλr )(xj ). p8) Let u, v ∈ L and satisfying : R(xi , y d ) < u < R(xj , y r ), R(xk , y p ) < v < R(xe , y q ); it follows that : ϕ(yvd )(xi ) ∨ ϕ(yup )(xk ) ≤ ϕ(yvr )(xj ) ∧ ϕ(yuq )(xe ). From now on, assume that ∀α ∈ L, α and R(x, y)(∀x ∈ X, y ∈ Y ) are comparative. Theorem2.2. Let L = {R(x, y) | x ∈ X, y ∈ Y }, L {0} = {0 = L0 < L1 < L2 < . . . < Ls = 1}, T = {t|t is a t-norm operator in L}, M = {ϕ|ϕ : FL (Y )→
The Comparative Study of Logical Operator Set and Its Corresponding
55
FL (X) and satisfies p2) − p8)}; we define σ : T → M , by σ(t) = ϕt . Then σ is a surjection. Proof: Firstly, we construct t(u, v) from ϕ: 1) If 0 ∈ {u, v}, then we define t(u, v) = 0; 2) If neither of them is equal to zero and at least one of them belong to L − {0}, we have: If u ∈ L − {0}, then we can assume u = Li = R(xi , y i ) , where 0 < i ≤ s, xi ∈ X, y i ∈ Y , so we define t(u, v) = ϕ(yvi )(xi ); By theorem 2.1 p4) follows that the definition is reasonable. Otherwise , let t(u, v) = t(v, u) . We can see that: If v = R(xj , y j ), from p6) j i i j it follows that: ϕ(yR(x j ,y j ) )(x ) = ϕ(yR(xi ,y i ) )(x ) i.e. t(u, v) = t(v, u). 3) If u = 0, v = 0 and u, v ∈L − {0} , let R(xi , y i ) = Li < u < Li+1 = R(xi+1 , i+1 y ), R(xj , y j ) = Lj < v < Lj+1 = R(xj+1 , y j+1 ). We define: t(u, v) = ϕ(yvi )(xi )∨ϕ(yuj )(xj ). And we can know: If arbitrary x ∈ X, y ∈ Y , it follows that R(x, y) = 0; then we define R(x0 , y 0 ) = 0 = ϕ(yv0 )(x0 ) = ϕ(yu0 )(x0 ). Now , we can prove that t(u, v) is a t-norm, and ϕt = ϕ. From all of the above, we obtain that σ is a surjection. Theorem2.3. Let σ and T be defined as before t1 , t2 ∈ T , then for every x ∈ X, y ∈ Y, λ∈ L, the following conclusion holds: σ(t1 ) = σ(t2 ) if and only if t1 (R(x, y), λ) = t2 (R(x, y), λ). Corollary2.4. Let arbitrary x ∈ X, y ∈ Y, λ ∈ L, If we define the equivalent relation on T as follows: t1 ∼ t2 ⇔ t1 (R(x, y), λ) = t1 (R(x, y), λ); then ∃µ : T /∼−→ M ,by µ([t]) = σ(t), ∀[t] ∈ T /∼; furthermore, µ is a 1-1 mapping. Definition2.3. We use t to denote the t-norm that we construct only by 1),2),3) in theorem 2.2 . In fact, we also can construct other t-norm by ϕ, we can only change in 3) as follows: Let R(xi , y i ) = Li < u < Li = R(xi+1 , y i+1 ), R(xj , y j ) = Lj < v < Lj+1 = R(xj+1 , y j+1 ); where 0 ≤ i, j ≤ s − 1. Then we can definite t(u, v) = ϕ(yvi+1 )(xi+1 ) ∧ ϕ(yuj+1 )(xj+1 ). We denote it by t, we can prove that σ(t) = σ(t). In order to show it more clearly, we give theorem 2.5. Theorem2.5. Let arbitrary u, v ∈ L, t ∈ T, ϕ∈ M , and σ(t) = ϕ, then we have t(u, v) ≤ t(u, v) ≤ t(u, v).
3
The Structural Theorem of Negator , the t-norm Equivalent Classes and the Pair of Dual Upper and Lower General Fuzzy Rough Approximation Operator is 1-1 Corresponding
Definition 3.1. [1] A t-conorm operator in L is a mapping g : L × L −→ L. If for all x, y, y1 , y2 ∈ L, g satisfies the following conditions:
56
S. Zheng, X. Liu, and F. Xiong
1)g(x, y) = g(y, x) ; 2)g(x, 0) = x;
3) If y1 ≥ y2 , then g(x, y1 ≥ g(x, y2 ).
From the definition 3.1, it is easy to get the property: 4)g(x, y) ≥ x ∨ y for every x, y ∈ L. We denote G = {g|g is a t-conorm in L}. Definition 3.2[1]: A function N : L −→ L is called negator , if ∀x, y ∈ L, N has the properties: 1)N (N (x)) = x; 2) If x ≥ y, then N (x) ≤ N (y). It is easy to know the following properties hold: 3)N (0) = 1, N (1) = 0; 4)[8] N (xi ) = N ( xi ), where I is finite and xi ∈ L. i∈I
i∈I
In addition, w e denote Π = {N |N is a negator in L.}, and assume Π =Φ Definition 3.3[1] Let t ∈ T, g ∈ G, N ∈ Π, if they satisfy t(x.y) = N (g(N (x), N (y))), we call that t, g are dual with respect to N . It is easy to know that the following property holds: t, g is dual with respect to N if and only if t(N (x), N (y)) = N (g(x, y)). Definition 3.4.[1][8]: Let arbitrary A ∈ FL (Y ), x ∈ X; ψg : FL (Y ) −→ FL (X) is called general lower fuzzy rough approximation operator, if ψg (A)(x) = inf g(N (R(x, y)), A(y)). y∈Y
Let x ∈ Y, A ∈ FL (Y ), we define N (A) ∈ FL (Y ) by N (A(x)) = N (A)(x). Definition 3.5. Let ϕ, ψ : FL (Y ) → FL (X) are the upper and lower general fuzzy rough approximation operator respectively, we call that they are dual with respect to N if they satisfy: for every A ∈ FL (Y ), ϕ(A) = N (ψ(N (A))). Theorem 3.1. Let E = {ψg |g ∈ G}, and WN = {(ϕ, ψ)|ϕ, ψare dual with respect toN } ⊂ M × E, where N is given. Let arbitrary t ∈ T , we define the function ΣN : T −→ WN by ΣN (t) = (ϕt , ψg ), where g is dual to t with respect to N . Then there exists ν : T /∼ → WN by ν([t]) = ΣN (t), and σ is a 1-1 mapping. Theorem 3.2. Let L = [0, 1], h(x) is a negator if and only if there is a function f (x) in [0, 1] ,which is continuous , strictly monotone decreasing and satisfying f (0) = 1 , and ∃ξ ∈ (0, 1) such that ⎧ if 0 ≤ x ≤ ξ, ⎨ f (x), h(x) = ⎩ −1 f (x), if ξ < x ≤ 1. Remark. Theorem 3.2 is different from the constructional theorem of negator in [9]. There exist examples to prove that.
4
The Decomposition and Composition Theorem of General Fuzzy Rough Approximation Operator
In this section, L is an order lattice with the largest andleast element denoted by 1,0. And, it has negator operator which satisfies that λ = α, for all α ∈ L. λ 1 ⇒ conf (r) > sup(cr ), ar and cr are positively correlated, and the following holds: 0 < conf (r) − sup(cr ) ≤ 1 − sup(cr ) In particular, we have: 0 < Cm =
conf (r) − sup(cr ) ≤1 1 − sup(cr )
The nearer the ratio Cm is close to 1, the higher the positive correlation between antecedent ar and class label cr . 3. Dependence(ar , cr ) < 1 ⇒ conf (r) < sup(cr ), ar and cr are negatively correlated, and the following holds: −sup(cr ) ≤ conf (r) − sup(cr ) < 0 In particular, we have: −1 ≤ Cm =
conf (r) − sup(cr ) MIN TOTAL WEIGHT) do 4 A ← A, T ← T ; 5 ar ← ∅; 6 while (1) do 7 foreach literal li ∈ A do CalculateFoilGain(li ) in T ; 8 l = AttributeOfBestGain(); 9 if l.gain ≤ MIN BEST GAIN then 10 φ = Cm (ar , ci ); 11 if φ ≥ φmin ∧ N oMoreGeneralRule(ar ⇒ ci ) then 12 R ← R ∧ (ar ⇒ ci , φ); 13 end 14 if φ ≤ −φmin ∧ N oMoreGeneralRule(a r ⇒ ci ) then 15 R ← R ∧ (ar ⇒ ci , φ); 16 end 17 end 18 gainThreshold = bestGain*GAIN SIMILARITY RATIO; 19 foreach l ∈ A do 20 if l .gain ≥gainThreshold then 21 ar ← ar ; ar ← ar ∧ l ; 22 remove ar from A ; 23 remove each t ∈ T dissatisfy ar ; 24 A ← A , T ← T ; 25 CARGenerator(T ,A ,ci ); 26 end 27 end 28 ar ← ar ∧ l; 29 remove ar from A ; 30 remove each t ∈ T dissatisfy ar ; 31 end 32 end 33 foreach t ∈ P satisfy ar do reduce t.weight by a decay factor 34 end
Once the classifier has been established in the form of a list of rules, regardless of the methodology used to generate it, there are existing a number of strategies for using the resulting classifier to classify unseen data object as follows: (1) Choose the single best rule which matches the data object and has the highest ranks to make a prediction, (2) Collect all rules in the classifier satisfying the given unseen data and make a prediction by the ”combined effect” of different class association rules, and (3) Select k-best rules for each class, evaluates the average accuracy of each class and choose the class with the highest expected accuracy as the predicted class. In our case, ACBCA establish the classifier in the form of a list of ordered rules in the process of rules generation. Instead of taking all rules satisfying the
66
J. Chen et al.
new data object into consideration, ACBCA just selects a small set of strong correlated association rules to make prediction by the following procedure: Step 1: For those rules satisfying the new data in antecedent, selects the first k-best rules for each class according to rules’ consequents; Step 2: Calculates the rank of each group by summing up the Cm of relevant rules. We think the average accuracy is not advisable because many trivial rules with low accuracy will weaken the effect of the whole group; Step 3: The class label of new data will be assigned by that of the highest rank group.
4
Experimental Results and Performance Study
To evaluate the accuracy and efficiency of ABCBA, we have performed an extensive performance study on some datasets from UCI Machine learning Repository [10]. It would have been desirable to use the same datasets as those used by other associative classification approaches; however it was discovered that many of these datasets were no longer available in the UCI repository. All the experiments are performed on a 2.4GHz Pentium-4 PC with 512MB main memory. And a 10-fold cross validation was performed on each dataset and the results are given as average of the accuracies obtained for each fold. The parameters of ABCBA are set as the following. In the rule generation algorithm, MIN TOTAL WEIGHT is set to 0.05, MIN BEST GAIN to 0.7, GAIN SIMILARITY RATIO to 0.99, φmin to 0.2 and decay factor to 2/3. The best 3 rules are used in prediction. The experimental results for C4.5, Ripper, CBA, CMAR and CPAR are taken from [6]. The table 1 gives the average predictive accuracy for each algorithm respectively. Bold values denote the best accuracy for the respective dataset. Column 2∼3 present the results of traditional rule-based classification algorithms C4.5 and Ripper. Column 4∼6 show the results of several recently developed associative classification methods CBA, CMAR and CPAR. Column 7 describes the results of ABCBA when only positive rules are considered for classification. Column 8 shows the results of ABCBA, whichconsiders both positive rules of the form ar ⇒ cr and negative rules of the form ar ⇒ cr . As can be seen, ABCBA on almost all occasions achieves best accuracy or comes very close. Moreover, the classification accuracy increases when the correlated rules are taken into consideration. This is most noticeable for the Ionosphere, the Iris Plant and the Pima Indians diabetes datasets on which the correlated rules can give better supervision to the classification.
5
Conclusions
Associative classification is an important issue in data mining and machine learning involving large, real databases. We have introduced ABCBA, a new associative classification algorithm based on correlation analysis. ABCBA differs from
Associative Classification Based on Correlation Analysis
67
Table 1. Comparison of Accuracies on UCI Different Datasets Dataset austral breast cleve crx diabetes german hepatic horse iono iris labor led7 pima wine zoo Average
C4.5 84.7 95 78.2 84.9 74.2 72.3 80.6 82.6 90 95.3 79.3 73.5 75.5 92.7 92.2 83.4
Ripper 87.3 95.1 82.2 84.9 74.7 69.8 76.7 84.8 91.2 94 84 69.7 73.1 91.6 88.1 83.15
CBA 84.9 96.3 82.8 84.7 74.5 73.4 81.8 82.1 92.3 94.7 86.3 71.9 72.9 95 96.8 84.69
CMAR 86.1 96.4 82.2 84.9 75.8 74.9 80.5 82.6 91.5 94 89.7 72.5 75.1 95 97.1 85.22
CPAR 86.2 96 81.5 85.7 75.1 73.4 79.4 84.2 92.6 94.7 84.7 73.6 73.8 95.5 95.1 84.77
ABCBA p ABCBA all 85.22 85.8 95.1 94.5 78.21 79.21 85.51 85.65 76.67 75.99 70.9 73.3 80.83 80.83 82.02 82.02 92.02 93.74 94.67 95.3 89.17 97.17 72.56 73.91 74.08 76.18 93.13 94.5 96 94 84.41 85.47
existing algorithms for this task insofar, that it does not employ the supportconfidence framework and take both positive and negative rules under consideration. It uses an exhaustive and greedy algorithm based Foil Gain to extract CARs directly from the training set. During the rule building process, instead of selecting only the best literals, ABCBA inherits the basic idea of CPAR, keeping all close-to-the-best literals in rules generation process so that it will not miss the some important rule. Since the class distribution in the dataset and the correlated relationship among attributes should be taken into account because it is conforming to the usual or ordinary course of nature in the real world, we present a new rule scoring schema named Cm to evaluate the correlation of the CARs. The experimental results of the ABCBA show that a much smaller set of positive and negative association rules can achieve best accuracy or comes very close on almost all occasions. Actually, we just use the association rules of the type (ar ⇒ cr ) and (ar ⇒ cr ) for classification. These two type rules have an direct association with the class label, so they can be considered together. However, if there are more than two classes in the dataset, (ar ⇒ cr ) and (ar ⇒ cr ) just provide information that ”not-belong-to”, instead of giving the association between data attributes and class label directly. We are currently investigating reasonable and effective methods to use these two kinds of rules.
References 1. Quinlan, J.R.: C4.5: Programs for Machine Learning. Morgan Kaufmann (1993) 2. Cohen, W.W.: Fast effective rule induction. In Prieditis, A., Russell, S.J., eds.: ICML 1995, Tahoe City, California, USA, Morgan Kaufmann (1995) 115-123
68
J. Chen et al.
3. Liu, B. Hsu, W. and Ma, Y.: Integrating Classification and Association Rule Mining. Proceedings KDD-98, New York, 27-31 August. AAAI. (1998) 80-86. 4. Li W., Han, J. and Pei, J.: CMAR: Accurate and Efficient Classification Based on Multiple Class-Association Rules. Proceedings of the 2001 IEEE International Conference on Data Mining, San Jos´e, California, USA, IEEE Computer Society (2001) 5. J. R. Quinlan and R. M. Cameron-Jones: FOIL: A midterm report. In Proceedings of European Conference. Machine Learning, Vienna, Austria 1993, (1993) 3-20 6. X. Yin and J. Han: CPAR: Classification based on Predictive Association Rules, Proc. 2003 SIAM Int.Conf. on Data Mining (SDM’03), San Fransisco, CA, May 2003. 7. Piatetsky-Shapiro, G.,: Discovery, Analysis, and Presentation of Strong Rules. Knowledge Discovery in Databases, G. Piatetsky-Shapiro and WJ Frawley (Eds.), AAAI/MIT Press, 1991, pp. 229-238. 8. Sergey Brin, Rajeev Motwani, and Craig Silverstein: Beyond market baskets: Generalizing association rules to correlations. SIGMOD Record (ACM Special Interest Group on Management of Data), (1997) 265-276. 9. Xindong Wu,Chengqi Zhang,Shichao Zhang: Efficient Mining of Both Positive and Negative Association Rules. ACM Transactions on Information Systems, 22(2004), 3: 381-405. 10. Blake, C.L. and Merz, C.J. (1998). UCI Repository of machine learning databases. http://www.ics.uci.edu/ mlearn/MLRepository.html, Irvine, CA: University of California, Department of Information and Computer Science.
Design of Interpretable and Accurate Fuzzy Models from Data Zong-yi Xing 1, Yong Zhang 1, Li-min Jia 2, and Wei-li Hu 1 1
Nanjing University of Science and Technology, Nanjing 210094, China 2 Beijing Jiaotong University, Beijing, 100044, China
[email protected] Abstract. An approach to identify data-driven interpretable and accurate fuzzy models is presented in this paper. Firstly, Gustafson-Kessel fuzzy clustering algorithm is used to identify initial fuzzy model, and cluster validity indices are adopted to determine the number of rules. Secondly, orthogonal least square method and similarity measure of fuzzy sets are utilized to reduce the initial fuzzy model and improve its interpretability. Thirdly, constraint LevenbergMarquardt algorithm is used to optimize the reduced fuzzy model to improve its accuracy. The proposed approach is applied to PH neutralization process, and results show its validity.
1 Introduction During the past years, fuzzy modeling techniques have become an active research area due to its successful application to classification, data mining, pattern recognition, simulation, prediction, control, etc [1-4]. Several fuzzy modeling methods have been proposed including fuzzy clustering based algorithm [5], neurofuzzy systems [6-7] and genetic rules generation [8-9]. However all these technologies only focus on precision that simply fit data with highest possible accuracy, neglecting interpretability of obtained fuzzy models, which is considered as a primary merit of fuzzy systems and is the most prominent feature that distinguishes fuzzy systems from many other models [10]. In order to improve interpretability of fuzzy models, some methods have been developed. Setnes et al. [11] proposed a set-theoretic similarity measure to quantify the similarity among fuzzy sets, and to reduce the number of fuzzy sets in the model. Yen et al. [12] introduced several orthogonal transformation techniques for selecting the most important fuzzy rules from a given rule base in order to construct a compact fuzzy model. Abonyi et al. [13] proposed a combined method to create simple TakagiSugeno fuzzy model that can be effectively used to represent complex systems. Delgado [14] presents fuzzy modeling as a multi-objective decision making problem considering accuracy, interpretability and autonomy as goals, and all these goals are handled via single-objective ε – constrained decision making problem which is solved by a hierarchical evolutionary algorithm. This paper proposes systematic techniques to construct interpretable and accurate fuzzy models. The paper is organized as follows: section 2 describes the TakagiSugeno fuzzy model. The systematic fuzzy modeling approach, including fuzzy clustering algorithm, cluster validity measure, rule reduction and fuzzy sets merging L. Wang and Y. Jin (Eds.): FSKD 2005, LNAI 3613, pp. 69 – 78, 2005. © Springer-Verlag Berlin Heidelberg 2005
70
Z.-y. Xing et al.
and constraint Levenberg-Marquardt optimization, is stated in section 3. The proposed method is demonstrated on the PH neutralization benchmark in section 4. Section 5 concludes the paper.
2 Takagi-Sugeno Fuzzy Model The Takagi-Sugeno (TS) Fuzzy model [1] was proposed by Takagi and Sugeno in an effort to develop a systematic approach to generating fuzzy model from a given inputoutput data set. A typical fuzzy rule of the model has the form:
R i : IF x1 is Ai ,1 and L and x p is Ai , p
(1)
THEN yi = ai 0 + ai1 x1 + L + aip x p where xj are the input variables, Aij are fuzzy sets defined on the universe of discourse of the input variables, yi are outputs of rules. The output of the TS fuzzy model is computed using the normalized fuzzy mean formula: c
y (k ) = ∑ pi ( x) yˆi
(2)
i =1
where c is the number of rules, Pi is the normalized firing strength of the ith rule:
∏ A (x ) P ( x) = ∑ ∏ A (x ) p
j =1
i
ij
c
p
i =1
j =1
j
ij
(3)
j
Given N input-output data pairs {xk , yk } , the model in (2) can be written as a linear regression problem y = Pθ + e
(4)
where θ is consequents matrix of rules, and e is approximation error matrix. In this paper, Gaussian membership functions are used to represent the fuzzy set Aij Aij ( x j ) = exp(−
2 1 ( x j − vij ) ) 2 σ ij2
(5)
where vij and σ ij represent center and variance of Gaussian function respectively.
3 Design of Data-Driven Interpretable and Accurate Fuzzy Models 3.1 Construct Initial Fuzzy Models Using Fuzzy Clustering Algorithm
The Gustafson-Kessel [15] (GK) algorithm is employed to construct initial fuzzy models. The objective function of GK algorithm is described following:
Design of Interpretable and Accurate Fuzzy Models from Data c
71
N
J (Z; U, V ) = ∑∑ ( µik )m Dik2
(6)
i =1 k =1
where
U = [ µik ]
Z is the set of data,
is the
fuzzy partition
matrix,
V = [ v1 , v 2 ,L , v c ] is the set of centers of the clusters, c is the number of clusters, N T
is the number of data, m is the weighting exponent, µik is the membership degree between the ith cluster and kth data, which satisfy conditions:
∑
µik ∈ [0,1];
C
µik = 1; 0 0 . Generate the matrix U with the membership randomly. Compute the parameters of model using (13), (14) and (17). Calculate norm of distance utilizing (8. Update the partition matrix U using (12); Stop if U (l ) − U (l −1) ≤ ε , else go to 3).
3.2 Cluster Validation Indices
It is essential to determine the number of rules, i.e. the number of fuzzy clusters. This problem can be solved by validation analysis using cluster validity indices. There are two categories of fuzzy validity indices. The first category uses only the membership values of a fuzzy partition of data. On the other hand, the latter one involves both partition matrix and the data itself. The partition coefficient (PC) and the partition entropy coefficient (PE) [16] are the typical cluster validity indices of the first category. PC (c) = PE (c) = −
,
1 c N ∑ ∑ µik2 n i =1 k =1
(18)
1 c N ∑ ∑ µik log a µik n i =1 k =1
(19)
.
The number corresponding to PC (c) ∈ [1/ c,1] PE (c) ∈ [0, log a c] significant knee is selected as the optimal number of rules. The compactness and separation validity function proposed by Xie and Beni (XB) [17] is a representation of the second category. The smallest value of XB indicates the optimized clusters. where
XB(c) =
∑ ∑ c
N
i =1
k =1
µikm xk − vi
n ⋅ min vi − vk i,k
2
2
(20)
Design of Interpretable and Accurate Fuzzy Models from Data
73
3.3 Rule Reduction Based on Orthogonal Least Square Method
In the process of cluster validity analysis, it is possible that the noise or abnormal data are clustered to create redundant or incorrect fuzzy rules. This problem can be solved by rule reduction using orthogonal least square method (OLS) [18]. The OLS method transforms the columns of the firing strength matrix into a set of orthogonal basis vectors. Using Gram–Schmidt orthogonalization procedure, the firing strength matrix P is decomposed into
P = WA
(21)
where W is a matrix with orthogonal columns wi, and A is an upper triangular matrix with unity diagonal elements. Substituting (21) into (4) yields y = WAθ + e = Wg + e
(22)
where g = Aθ . Since the columns wi of W are orthogonal, the sum of squares of y(k) can be written as c
y T y = ∑ gi2 wiT wi + eT e
(23)
i =1
Dividing N on both side of (23), it can be seen that the part of the output variance yTy/N explained by the regressors is ∑ gi wiT wi / N , and an error reduction ratio due to an individual rule is defined as
[err ]i =
gi2 wiT wi , 1 ≤ i ≤ c. yT y
(24)
The ratio offers a simple means for seeking a subset of important rules in a forward-regression manner. If it is decided that r rules are used to construct a fuzzy model, then the first r rules with the largest error reduction ratios will be selected. If the importance measure of a fuzzy rule is far less than others, and deletion of this rule doesn’t deteriorate precision performance, this fuzzy rule will be picked out to improve interpretability of fuzzy model. 3.4 Similarity Merging of Fuzzy Sets
Fuzzy models obtained above may contain redundant information in the form of similarity between fuzzy sets. This makes the fuzzy model uninterpretable, for it is difficult to assign qualitatively meaningful labels to similar fuzzy sets. In order to acquire an effective and interpretable fuzzy model, elimination of redundancy and making the fuzzy model as simple as possible are necessary. As for two similar fuzzy sets, a similarity measure is unutilized to determine if fuzzy sets should be combined to a new fuzzy set. For fuzzy sets A and B, a settheoretic operation based similarity measure [11] is defined as
74
Z.-y. Xing et al.
∑ S ( A, B) = ∑
N k =1 N k =1
[ µ A ( xk ) ∧µ B ( xk )] [ µ A ( xk ) ∨µ B ( xk )]
(25)
Where X = {x j | j = 1, 2,L , m} is the discrete universe, ∧ and ∨ are the minimum and maximum operators respectively. S is a similarity measure in [0,1]. S=1 means the compared fuzzy sets are equal, while S=0 indicates that there is no overlapping between fuzzy sets. If similarity measure S ( A, B ) > τ , i.e. fuzzy sets are very similar, then the two fuzzy sets A and B should be merged to create a new fuzzy set C, where τ is a predefined threshold. In a general way, τ = [0.4 − 0.7] is a good choice. 3.5 Optimization
After rule reduction and similar fuzzy sets merging, the precision of initial fuzzy model is improved, while its precision performance is reduced. It is essential to optimize the reduced fuzzy model to improve its precision, while preserves its interpretability. The precision and parameters of fuzzy model are strongly nonlinear, so a robust optimization technique should be applied in order to assure a good convergence. The constraint Levenberg-Marquardt (LM) method [19] is adopted in this paper. The premise parameters are limited to change in a range of ±α % around their initial values in order to preserve the distinguishability of fuzzy sets. For the sake of maintaining the local interpretability of fuzzy model, the consequent parameters are restricted to vary ± β % of the corresponding consequent parameters.
4 Example PH neutralization [20] is a typical nonlinear system with three influent streams (acid, buffer and base) and one effluent stream. The influent buffer stream and the influent acid stream are kept constant, and the influent base stream change randomly. The output is the PH in the tank. In order to determine the number of rules, cluster validity indices including PC, PE, XB and Average Partition density (PA) [21] are adopted. Fig. 1 diagrams the result of cluster validity analysis intuitionally. Obviously, all the cluster validity indices indicate that optimal number of rules is 3. The GK fuzzy clustering algorithm is used to constructed fuzzy model with 3 fuzzy rules. The fuzzy sets of obtained fuzzy model are illustrated in Fig 2(b). Obviously, the fuzzy sets are distinguishable, and it is easy to assign understandable linguistic term to each fuzzy set. The OLS method is adopted to pick out unnecessary fuzzy rules. The importance measures of fuzzy rules are [0.2249 0.0250 0.7438] respectively, where the measure of the 2nd fuzzy rule is far less than the others. Without considering precision performance, the 2nd fuzzy rule can be deleted. Fig 2(a) diagrams the fuzzy sets of membership with 2 rules.
Design of Interpretable and Accurate Fuzzy Models from Data 0.85
0.95
0.80
0.80
0.75
75
0.65
PC
PE 0.70
0.50
0.65
0.35
0.60
2
3
4
5 6 7 number of rules (a)
8
9
10
0.20
0.26
13
0.22
12
0.18
2
3
4
5 6 7 number of rules (b)
8
9
10
2
3
4
5 6 7 number of rules (d)
8
9
10
11
XB
PA 0.14
10
0.10
9
0.06
8 2
3
4
5 6 7 number of rules (c)
8
9
10
Fig. 1. Cluster validity indices
After rule reduction, the interpretability of fuzzy model is improved, while the precision is reduced. If the model with 2 rules can satisfy practical demands, this model will be employed, otherwise, the model with 3 rules is preserved. The reduced fuzzy model with 2 rules is adopted in order to demonstrate interpretability in this paper. Fuzzy sets merging process is carried out sequentially. The similarity measure between fuzzy sets of flow rate is 0.3420, and the measure between fuzzy sets of PH is 0.0943. Without considering precision and reality, the fuzzy sets of flow rate can be merged to a new fuzzy set, so the flow rate variable can also be deleted for containing only one fuzzy set. In practice, similarity measure between fuzzy sets of flow rate is small, and flow rate variable is independent variable, and the precision performance of model after fuzzy sets merging is deteriorative, so the fuzzy sets of flow rate are not merged in this paper. In order of illustrated interpretability and precision of different fuzzy models intuitively, Fig. 2 diagrams fuzzy sets of model with 2, 3, 4,5 fuzzy rules, and Table 1 shows the corresponding errors and number of rules/fuzzy sets. Obviously, with the increase of rules, the precision is improved, while the interpretability is reduced. When the number of rules is 4 and 5, the fuzzy sets are heavy overlapped, whereas the precision is only increased a little. R1 : If FR(k ) is High and PH (k ) is High Then PH (k + 1) = 0.0321FR (k ) + 0.8614 PH ( k ) + 0.6659 2
R : If FR(k ) is Low and PH (k ) is Low Then PH (k + 1) = 0.1103FR (k ) + 0.7521PH ( k ) + 0.2614
(26)
76
Z.-y. Xing et al.
1.0 0.8 µ 0.6 0.4 0.2 0
A2
5
1 0.8 µ 0.6 0.4 0.2 0
A1
10
15 20 25 Flow Rate A2
4
30
35
1.0 0.8 µ 0.6 0.4 0.2 0
A1
6
8
PH ( )
1.0 0.8 µ 0.6 0.4 0.2 0
10
A2
5
A1
10
15 20 25 Flow Rate
A2
A2
5
1.0 0.8 A2 µ 0.6 0.4 0.2 0
4
6
A4
15 20 Flow Rate
25
A3
4
6
30 A1
PH
35 A3
8
PH
10
(b)
A1A3
10
30
A1
(a) 1.0 0.8 µ 0.6 0.4 0.2 0
A3
8
10
35 A4
1.0 0.8 0.6 µ 0.4 0.2 0
A2
A1
5
10
1.0 0.8 A1 0.6 µ 0.4 0.2 0
(c)
A4
15 20 Folw Rate A4
4
A3
6
PH
A5
25 A2
8
30
35
A3
A5
10
(d)
Fig. 2. Fuzzy sets of different fuzzy models Table 1. Comparison of different fuzzy models Number of fuzzy rules 2 2 3 4 5
Number of fuzzy sets 2 4 6 8 10
Training error 1.6290 0.4020 0.3791 0.3716 0.3739
Validation error 1.7547 0.3433 0.3160 0.3119 0.3099
Validation error descend — 411.13% 8.64% 1.31% 0.65%
The fuzzy model is optimized by Levenberg-Marquardt method. The constraint of antecedent parameters is 5%, and the constraint of consequent parameters is 15%. The optimized fuzzy model is described as (26), where FR(k ) is the value of flow rate, PH (k ) is the value of PH. Fig. 3(a) diagrams the fuzzy sets of the model. After optimization, the training error and validation error of the model are 0.3711 and 0.3088. Without decrease interpretability, the precision of the model is improved. Fig 3(b) illustrates the comparison of model outputs and measured outputs.
Design of Interpretable and Accurate Fuzzy Models from Data
1.0 0.8 µ 0.6 0.4 0.2 0 1.0 0.8 µ 0.6 0.4 0.2 0
Low
5
10
High
15 20 25 Flow Rate
Low
4
6
PH
(a)
8
12 11 10 9 8 30 35 PH 7 6 High 5 4 3 2 10
77
Model output Measured output 50 100 150 200 250 300 350 400 450 500 number of data
(b)
Fig. 3. Fuzzy sets of the model and comparison of model output and measured output
5 Conclusion This paper proposes systematic techniques to construct interpretable and accurate fuzzy models. Firstly, GK fuzzy clustering algorithm is used to identify fuzzy model, and cluster validity indices are adopted to determine the number of rules. Secondly, orthogonal least square method and similarity measure of fuzzy sets are utilized to reduce the initial fuzzy model and improve its interpretability. Thirdly, constraint Levenberg-Marquardt algorithm is used to optimize the reduced fuzzy model. At last, the simulation on PH neutralization illustrates the effectiveness of proposed methods.
Acknowledgements This paper is supported by scientific research foundation of Nanjing University of Science and Technology, and by Jiangsu Planned Projects for Postdoctoral Research Funds.
References [1] Takagi, T., Sugeno, M.: Fuzzy identification of systems and its application to modeling and control. IEEE Trans on Systems Man and Cybernetics, 1985, 15(1): 116-132 [2] Sugeno, M., Yasukawa, T.: A fuzzy-logic- based approach to qualitative modeling. IEEE Trans on Fuzzy Systems, 1993, 1(1): 7-31 [3] Wang, L. X.: Adaptive fuzzy systems and control: design and stability analysis. Prentice Hall, 1994 [4] Min, Y. C., Linkens, D. A.: Rule-base self-generation and simplification for data-driven fuzzy models. Fuzzy sets and systems, 2004,142(2): 243-265 [5] Gomez-Skarmeta, A. F., DELGADO, M., VILA, M. A.: About the use of fuzzy clustering techniques for fuzzy model identification. Fuzzy Sets and Systems, 1999, 106(2): 179188 [6] Jang, J. R.: ANFIS: Adaptive-network-based fuzzy inference system. IEEE Trans on Systems Man, and Cybernetics, 1993, 23:665–684
78
Z.-y. Xing et al.
[7] Lefteri, H. T., Robert, E. U.: Fuzzy and Neural Approaches in Engineering. Wiley, 1997 [8] Cordon, O., Herrera, F., Hoffmann, F., Magdalena, L.: Genetic Fuzzy Systems: Evolutionary Tuning and Learning of Fuzzy Rule Bases. World Scientific, 2001 [9] Cordon, O., Gomide, F., Herrera, F., Hoffmann, F., Magdalena, L.: Ten Years of Genetic Fuzzy Systems: Current Framework and New Trends. Fuzzy sets and systems, 2004, 141(1): 5-31 [10] Babuska, R., Bersini, H., Linkens, D. A., Nauck, D., Tselentis, G., Wolkenhauer, O.: Future Prospects for Fuzzy Systems and Technology [EL/OB]. ERUDIT Newsletter, Aachen, Germany, 6(1), 2000. Available: http://www.erudit.de/erudit/newsletters/news 61/page5.htm [11] Sentes, M., Babuska, R., Kaymak, U., Lemke, H. R. N.: Similarity Measures in Fuzzy Rule Base Simplification. IEEE Trans on Systems Man and Cybernetics, 1998, 28(3): 376-386. [12] Yen, J., Wang, L.: Simplifying fuzzy modeling by both gray relational analysis and data transformation methods . IEEE Trans on Systems Man and Cybernetics, 1999, 29(1): 1324 [13] Abonyi, J., Roubos, J. A., Oosterom, M., Szeifert, F.: Compact TS-fuzzy models through clustering and OLS plus FIS model reduction. Proc of IEEE int conf on fuzzy systems, Sydney, Australia, 2001: 1420-1423 [14] Delgado, M. R., Zuben, F. V., Gomide, F.: Multi-Objective Decision Making: Towards Improvement of Accuracy, Interpretability and Design Autonomy in Hierarchical Genetic Fuzzy Systems. Proc of IEEE int conf on fuzzy systems, Honolulu, Hawai, 2002: 1222-1227 [15] Gustafson, D., Kessel, W.: Fuzzy clustering with a fuzzy covariance matrix. Proc of IEEE conf on decision and control. San Diego, USA, 1979: 761-766 [16] Bezdek, J. C.: Pattern Recognition with fuzzy objective algorithm. New York: Plenum Press, 1981 [17] Xie, X. L., Beni, G.: A Validity Measure for Fuzzy Clustering. IEEE Trans on Pattern Analysis and Machine Intelligence, 1991, 13(8): 841-847 [18] Yen, J., Wang, L.: Simplifying fuzzy modeling by both gray relational analysis and data transformation methods. IEEE Trans on Systems Man and Cybernetics, 1999, 29(1): 1324 [19] Gaweda, A. E.: Optimal data-driven rule extraction using adaptive fuzzy neural models. University of Louisville 2002 [20] Babuska, R.: Fuzzy Modeling for Control. Boston: Kluwer Academic Publishers, 1998 [21] Gath, I., Geva, A. B.: Fuzzy clustering for the estimation of the parameters of the components of mixtures of normal distributions. Pattern Recognition Letters, 1989, 9: 77-86
Generating Extended Fuzzy Basis Function Networks Using Hybrid Algorithm Bin Ye, Chengzhi Zhu, Chuangxin Guo, and Yijia Cao College of Electrical Engineering, National Laboratory of Industrial Control Technology, Zhejiang University, Hangzhou 310027, Zhejiang, China
[email protected],
[email protected] Abstract. This paper presents a new kind of Evolutionary Fuzzy System (EFS) based on the Least Squares (LS) method and a hybrid learning algorithm: Adaptive Evolutionary-programming and Particle-swarm-optimization (AEPPSO). The structure of the Extended Fuzzy Basis Function Network (EFBFN) is firstly proposed, and the LS method is used to design it with presetting the widths of the hidden units in EFBFN. Then, to enhance the performance of the obtained EFBFN ulteriorly, a novel learning algorithm based on least squares and the hybrid of evolutionary programming and particle swarm optimization (AEPPSO) is proposed, in which we use EPPSO to tune the parameters of the premise part in EFBFN, and the LS algorithm to decide the consequent parameters in it simultaneously. In the simulation part, the proposed method is employed to predict a chaotic time series. Comparisons with some typical fuzzy modeling methods and artificial neural networks are presented and discussed.
1 Introduction In recent years, various neural-fuzzy networks have been proposed. Among them, the fuzzy basis function networks (FBFNs) [1], which are similar in structure to radial basis function networks (RBFNs) [2], have gained much attention. In addition to their simple structure, FBFNs possess another advantage that they can readily adopt various learning algorithms already developed for RBFNs. Among the various methods for designing FBFNs, the OLS method [1] has attracted much attention due to its simple and straightforward computation with reliable performance. However, its performance largely depends on the preset input membership functions (MFs) because the parameters in the MFs remain unchanged during learning process. Furthermore, it fails to yield a meaningful fuzzy system after learning [3]. In this paper, we extend the original FBFN with singleton output fuzzy variables to be Extended FBFN with 1st-order fuzzy output, named EFBFN, and the least squares (LS) method is used to select the significant fuzzy rules from candidate fuzzy basis functions (FBFs [1]) based on error reduction measure which is calculated by a projection matrix instead of orthogonalization. While the LS algorithm is a computationally efficient way of constructing fuzzy system, its performance depends on the preset parameters of basis functions. Therefore, we have the necessity to tune the parameters of the obtained MFs within a small range using a tuning algorithm. Here, a tuning algorithm based on least squares and the hybrid of evolutionary programming and L. Wang and Y. Jin (Eds.): FSKD 2005, LNAI 3613, pp. 79 – 88, 2005. © Springer-Verlag Berlin Heidelberg 2005
80
B. Ye et al.
particle swarm optimization, named Adaptive Evolutionary-programming and Particle-swarm-optimization (AEPPSO), is proposed. The hybrid algorithm EPPSO based on EP and PSO is performed to tune the parameters of the premise part, while the LS algorithm is used to decide the consequent parameters of the fuzzy rule base simultaneously. To demonstrate the performance of the proposed algorithm for designing EFBFN based on the hybrid of LS and AEPPSO, the prediction of a chaotic time series are performed, and the results are also compared to other methods.
2 Extended Fuzzy Basis Function Network 2.1 Structure of Extended Fuzzy Basis Function Network The original FBFN [1] used singleton fuzzy membership functions as output fuzzy variables, while in this paper we extend the FBFN to be the following form:
i i i i i i i R : if x1 is A1 ( x1 ) and ... xk is Ak ( xk ), then y is ξ 0 + ξ1 x1 + ... + ξ k xk ,
(1)
i
where R represents the ith rule (1 ≤ i ≤ r ) , x j (1 ≤ j ≤ k ) is an input variable and i i y is an output variable. A j ( x j ) are the fuzzy variables defined as in Eqn.2, and i ξ j (1 ≤ i ≤ r , 0 ≤ j ≤ k ) are the consequent parameters of the fuzzy model.
1 i i i 2 A j ( x j ) = exp[ − *[( x j − m j ) / σ j ] ], 2 i
(2)
i
where m j and σ j are the mean value and the standard deviation of the Gaussian type MF, respectively. The fuzzy basis functions defined by Wang can also be used here: i i pi ( χ ) = ∏ kj =1 A j ( x j ) / ∑ ir=1 (∏ kj =1 A j ( x j )), i = 1, 2,...r , χ = [ χ1 , ..., χ k ].
(3)
The fuzzy basis function expansion defined as in Eqn.4 for extended FBFN is equivalent to the 1st-order T-S fuzzy model suggested by Takagi and Sugeno [4]. r ∗ y = ∑ pi ( χ )Ci , i =1 i i i where Ci is the consequent part of the fuzzy rule, and Ci = ξ 0 + ξ1 x1 + ... + ξ k xk . 2.2 EFBFN Based on Least Squares Method
Before describing the LS method for designing EFBFN, we define the I/O data as:
\ = {χ , y | χ = ( x , x ,...x ), h = 1, 2...N } . h h h h1 h 2 hk
(4)
Generating Extended Fuzzy Basis Function Networks Using Hybrid Algorithm
81
According to the orthogonal least algorithm proposed for constructing FBFN by Wang [1], the following N candidate fuzzy basis function (FBF) nodes are generated at the beginning: h h ph ( χ ) = ∏ kj =1 A j ( x j ) / ∑ hN=1 ( ∏ kj =1 A j ( x j )), h = 1, 2, ... N , χ = [ χ , ..., χ ]. , 1 k
(5)
where in the fuzzy MFs, the widths and the candidate centers are defined as follows: max( x j )−min( x j ) i , m j = χ h , where r j = 1, 2..., k , χ h = [ xh1, xh 2 ,...xhk ], x j = [ x1 j ,..., x Nj ]. i
σj =
(6)
Assume that r (r < N) EFBFN nodes have been selected by the OLS algorithm, the resultant FBF expansion produced by the OLS algorithm is represented by: r h h ∗ y = ∑ p ( χ )Ci = ∑ir=1 Ci ( ∏ kj =1 A ji ( x j )) / ∑ hN=1 ( ∏ kj =1 A ji ( x j )), i = 1, 2, ...r . hi i =1
(7)
Compare above equation with the original definition of EFBFN in Eqn. (4), i.e., the following equation: ∗ i i i i y = ∑ir=1 Ci ( ∏ kj =1 Aij ( x j )) / ∑ir=1 (∏ kj =1 A j ( x j )), C = ξ + ξ x + ... + ξ x , i k k 0 1 1
(8)
it is clear to see that the fuzzy basis functions selected by the OLS algorithm can not be interpreted as true FIS since they retain the normalization factor in the denominator before training. While the least squares method used in this paper for constructing EFBFN can be clearly understood in terms of fuzzy rules defined in physics domains and with a computational efficiency [3]. We consider the inferred output formula, Eqn.4 as a linear regression model: y = Wξ + e ,
(9)
where y = [ y1 ,... yl ,..., y N ] is the desired output, e = [e1 , e2 ,..., eN ] is the error signal, the matrix W and the vector ξ are defined as follows:
W = [ w1 ,...w N ] = [p1 ,...p r , p1. ∗ x1,...p r . ∗ x1 ,...p1. ∗ x k ,...p r . ∗ x k ] 1 1 r 1 r r T ξ = [ξ 0 ,...ξ 0 , ξ1 ,...ξ1 ,...ξ k ,...ξ k ] T T pi = [ pi ( χ1 ),..., pi ( χ N )] , x j = [ x1 j , x2 j ,...x Nj ] , T pi . ∗ x j = [ pi ( χ1 ) x1 j ,..., pi ( χ N ) x Nj ] , i = 1, 2,...r , j = 1, 2, ..., k The LS algorithm selects the most significant FBFs by maximizing the following error reduction measure [err]: + [err ] = WW d ,
(10)
82
B. Ye et al.
+ + where W denotes the pseudoinverse of W . It can be seen that WW is the orthogonal projection onto the column space of W . With LS algorithm applied in EFBFN, when we are selecting the first EFBFN (r = 1) node from the FBF sets, the N candidate FBFs are calculated using the following formula: 1 ph ( χ g ) = ∏ kj =1 A j ( x j ), g = 1, 2, ... N , h = 1, 2,...N ,
(11)
where the parameters in the fuzzy MFs are defined as in Eqn. (6). When s-1 EFBFN nodes have been picked from the data sets, for selecting the sth EFBFN nodes, the Ns+1 candidate FBFs for EFBFN using LS are defined as follows: i i p ( χ ) = ∏ kj =1 A j ( x j ) / ∑ hs =1 ∏ kj =1 A j ( x j ), h ≠ h , ..., h ≠ h , g = 1, 2, ... N . 1 h g s −1
(12)
Our objective is to find r fuzzy rules from the N training data pairs. The selection procedure can be described as follows: Step 1: for 1 ≤ h ≤ N , compute ( h) T p1 = [ ph ( χ1 ), ph ( χ 2 ), ..., ph ( χ N )] , h = 1, 2,... N , ( h) ( h) ( h) ( h) W1 = [p1 , p1 . ∗ x1 ,..., p1 . ∗ x k ] ( h) [err ]1 = W1( h ) ( W1( h) )+ y (h ) (h ) ( h) Find [err ]1 1 = max{[err ]1 ,1 ≤ h ≤ N }, and select p1 = p1 1 . Step s : where s ≥ 2, for 1 ≤ h ≤ N , h ≠ h1, ..., h ≠ hs −1 , compute ( h) T p s = [ ph ( χ1 ), ph ( χ 2 ),..., ph ( χ N )] , h = 1, 2,...N , (h) ( h) ( h) (h) Ws = [p1 ,..., p s , p1. ∗ x1 ,..., p s . ∗ x1, ..., p1. ∗ x k , ..., p s . ∗ x k ] ( h) [ err ]s = Ws( h ) ( Ws( h ) )+ y (h ) (h ) ( h) Find [ err ]s s = max{[ err ]s ,1 ≤ h ≤ N , h ≠ h1, ..., h ≠ hs −1}, and select p s = p s s .
The procedure will stop when the number of the selected FBFs reaches r . Once r FBFs are found, i.e., r fuzzy rules are generated, the matrix W in Eqn. (9) is determined as W = [p1 ,..., p r , p1. ∗ x1 ,..., p r . ∗ x1 ,..., p1. ∗ x k ,..., p r . ∗ x k ] , and the consequent parameters can be calculated as: + ξ = W d.
(13)
r ∗ y = ∑ pi ( χ ) ⋅ χ ′ ⋅ ξ i , i =1
(14)
The final EFBFN is:
i i i T where χ ′ = [1, x1 ,..., xk ] , and ξ i = [ξ 0 , ξ1 ,..., ξ k ] .
Generating Extended Fuzzy Basis Function Networks Using Hybrid Algorithm
83
3 Adaptive EPPSO for EFBFN The EFBFN constructed by LS algorithm has the limitation that the widths of the membership functions in the hidden units are prefixed and the centers are selected only for the available train data pairs. In this section, we will use the proposed algorithm AEPPSO to tune the parameters of the MFs in the EFBFN obtained using LS algorithm, and to determine the consequent parameters simultaneously.
3.1 Population Initialization for EPPSO (1) Parameter representation The fuzzy model we used in this paper has been described as in Eqn. (1), in which the parameters could be divided into the premise part and the consequent part. The parameter set of the two parts can both be expressed as a two dimensional matrix. For the premise part, the parameter set i i {m j , σ j ,1 ≤ i ≤ r ,1 ≤ j ≤ k } of MFs constitute the first matrix named Q , while for i the consequent part, the consequent parameter set {ξ j ,1 ≤ i ≤ r , 0 ≤ j ≤ k } constitutes the second matrix named ξ , where r is the number of the fuzzy rules and k is the number of input variables. The Gaussian type MF is used in this paper, thus the size of Q is ^1 = r × (2 k ) , and the size of ξ is ^ 2 = r × ( k + 1) , totally ^ = ^1 + ^ 2 parameters.
Initialization In this step, M individuals forming the population is initialized, (2) and each consists of ^1 parameters. The initialization of the parameter matrix Q is performed based on those MF parameters of the EFBFN obtained by LS algorithm in i
i
the above section. Assume that the obtained parameters are {m j (0), σ j (0)} , and the domains of the ith input variable in the train data set has been found to be [min( x ), max( x )] , then we initialize the centers and widths for the FBFs as random j j values with the following domain: i i i i i i m j ∈ [ m j (0) − δ j , m j (0) + δ j ], σ j ∈ [σ j (0) − δ j , σ j (0) + δ j ],
(15)
where δ j is a small positive value, usually defined as δ j = (max( x j ) − min( x j )) / 10 . These individuals could be regarded as population members in terms of EP and particles in terms of PSO, respectively.
3.2 Algorithm Description (1) PSO operator In each generation, after the fitness values of each individual are evaluated, the top 50% individuals are selected as the elites and the others are discarded. Similar to the maturing phenomenon in nature, the individuals are firstly enhanced by PSO and become more suitable to the environment after acquiring the
knowledge from the society. The whole set of elites can be regarded as a swarm, and each elite corresponds to a particle in it. In PSO, individuals (particles) of the same generation enhance themselves based on their own private cognition and social interactions with each other; this procedure is regarded as the maturing phenomenon in EPPSO. The selected M/2 elites are regarded as particles in PSO, and each elite corresponds to a particle z_θ = (z_{θ1}, z_{θ2}, …, z_{θn_1})^T. In applying PSO, we adopt the following equations [5] to improve these individuals:

v_θ(t + 1) = ω v_θ(t) + c_1 r_1 (Ψ_θ(t) − z_θ(t)) + c_2 r_2 (Ψ_g(t) − z_θ(t)),
z_θ(t + 1) = z_θ(t) + v_θ(t + 1),   (16)

where v_θ = (v_{θ1}, v_{θ2}, …, v_{θn_1})^T is the velocity vector of the particle, Ψ_θ = (ψ_{θ1}, ψ_{θ2}, …, ψ_{θn_1})^T is the best previous position encountered by the θth particle, Ψ_g(t) is the best previous position among all the individuals of the swarm, θ = 1, 2, …, M/2, ω is a parameter called the inertia weight, t is the iteration counter, c_1 and c_2 are positive constants referred to as the cognitive and social parameters, respectively, and r_1, r_2 are random numbers uniformly distributed within the interval [0, 1]. In Eqn. (16), Ψ_g(t) is the best-performing individual evolved so far, either an enhanced elite obtained by PSO or an offspring produced by EP. By performing the PSO operation on the selected elites before the mutation operation in EP, we may accelerate the convergence of the individuals and improve the search ability. The enhanced elites are copied to the next generation and are also designated as the parents of the generation for EP, being copied and mutated to produce the other M/2 individuals.

(2) EP operator. To produce better-performing offspring, the mutation parents are selected only from the elites enhanced by PSO. In the EP operation, the M/2 particles in PSO become M/2 population members with parameters {z_θ | z_θ = z_{θn}, 1 ≤ θ ≤ M/2; 1 ≤ n ≤ n_1}. The parameter mutation changes the parameters of the membership functions by adding Gaussian random numbers, generated with probability p, to them [6]:

z′_{θn} = z_{θn} + α · e^{(F_max − F_m)/F_max} · N(0, 1),   (17)
where F_max is the largest fitness value among the individuals of the current generation, F_m is the fitness of the mth individual, and α is a real value between 0 and 1. The combination of the EP offspring and the elites enhanced by PSO forms the new generation of EPPSO; after fitness evaluation, the evolution proceeds again until the termination condition is satisfied. The details of the basic EP procedure can be found in Ref. [7].

(3) Least squares estimate. The hybrid algorithm EPPSO is applied to tune the MF parameters of the EFBFN, while LS is used to determine the n_2 consequent
parameters ξ_j^i of the fuzzy rule base. For each individual in each generation, once the premise parameter matrix Q is determined by EPPSO, we fix these parameters and use the following procedure to obtain the consequent parameters:

i. Use the MF parameters {m_j^i, σ_j^i} in Q to calculate the FBFs p_i(χ):

p_i(χ) = (Π_{j=1}^{k} A_j^i(x_j)) / (Σ_{i=1}^{r} Π_{j=1}^{k} A_j^i(x_j)),  where A_j^i(x_j) = exp[−(1/2)((x_j − m_j^i)/σ_j^i)^2].

ii. Calculate the matrix W:

W = [w_1, …, w_N]^T = [p_1, …, p_r, p_1.∗x_1, …, p_r.∗x_1, …, p_1.∗x_k, …, p_r.∗x_k],

where p_i = [p_i(χ_1), …, p_i(χ_N)]^T and x_j = [x_{1j}, x_{2j}, …, x_{Nj}]^T.

iii. Calculate the consequent parameters of the current fuzzy rule base using Eqn. (13), and the inferred output using Eqn. (14).

Since the fitness evaluation differs from problem to problem, we introduce it in the simulation section. After the fitness is evaluated, the top 50% of individuals are selected; at this step the program finishes one generation, and the evolution continues until the termination condition is satisfied.
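The two evolutionary operators can be sketched as follows; this is an illustrative reading of Eqns. (16)-(17) with names of our own choosing, not the authors' implementation. The LS step (3) would reuse the consequent-estimation sketch given after Eqn. (14).

```python
import numpy as np

def pso_step(z, v, pbest, gbest, w, c1, c2, rng):
    """One PSO update of the elites (Eqn. 16).
    z, v, pbest : (M//2, n1) positions, velocities, personal bests;
    gbest : (n1,) best position found so far."""
    r1, r2 = rng.random(z.shape), rng.random(z.shape)
    v = w * v + c1 * r1 * (pbest - z) + c2 * r2 * (gbest - z)
    return z + v, v

def ep_mutate(z, fitness, alpha, p, rng):
    """EP mutation of the enhanced elites (Eqn. 17): each parameter is
    perturbed with probability p by Gaussian noise scaled by
    alpha * exp((Fmax - Fm) / Fmax)."""
    fmax = fitness.max()
    scale = alpha * np.exp((fmax - fitness) / fmax)   # one scale per member
    noise = rng.standard_normal(z.shape) * scale[:, None]
    return np.where(rng.random(z.shape) < p, z + noise, z)
```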
4 Simulation Results

In this section, simulation results of predicting a chaotic time series using the fuzzy inference system based on the proposed hybrid algorithm are presented. The chaotic time series is generated from the Mackey-Glass differential delay equation [8] defined below:

ẋ(t) = 0.2 x(t − τ) / [1 + x^10(t − τ)] − 0.1 x(t),   (18)

The problem is to use past values of x to predict some future value of x. The same example as published in [8-10] has been adopted to allow a comparison with the published results. To obtain the time series value at each integer point, we applied the fourth-order Runge-Kutta method to find the numerical solution of Eqn. (18), with initial condition x(0) = 1.2 and τ = 17. The value of the signal six steps ahead, x(t + 6), is predicted based on the values of the signal at the current moment and 6, 12 and 18 steps back. The input-output data pairs are of the following format:

[x(t − 18), x(t − 12), x(t − 6), x(t); x(t + 6)].   (19)
The data range of 118 ≤ t ≤ 1117 has also been adopted, with the first 500 samples forming the training data set and the second 500 forming the validation data set. The nondimensional error index (NDEI) has been calculated to compare model performance, and the fitness function used in this example is defined as F = 1 / NDEI .
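For reproducibility, a hedged sketch of the data generation and the error measure follows. Eqn. (18) is integrated by the fourth-order Runge-Kutta method with the delayed term held constant over each unit step (a simplification), and NDEI is taken, as is usual for this benchmark, to be the RMSE divided by the standard deviation of the target series:

```python
import numpy as np

def mackey_glass(n, tau=17, x0=1.2):
    """Integrate Eqn. (18) with RK4 at unit steps, a sketch."""
    x = np.full(n + tau, x0)                 # history before t=0 held at x0
    f = lambda xt, xlag: 0.2 * xlag / (1.0 + xlag**10) - 0.1 * xt
    for i in range(tau, n + tau - 1):
        xlag = x[i - tau]                    # delayed term, held over the step
        k1 = f(x[i], xlag)
        k2 = f(x[i] + 0.5 * k1, xlag)
        k3 = f(x[i] + 0.5 * k2, xlag)
        k4 = f(x[i] + k3, xlag)
        x[i + 1] = x[i] + (k1 + 2*k2 + 2*k3 + k4) / 6.0
    return x[tau:]

def ndei(y_true, y_pred):
    """Non-dimensional error index: RMSE / std of the target series."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2)) / np.std(y_true)
```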
During the training of the EFBFN constructed for the chaotic time series with the LS algorithm, the decrease in prediction NDEI becomes very small once the number of fuzzy rules reaches about twelve. Hence, we generate a FIS with twelve fuzzy rules for predicting the chaotic time series. The final prediction NDEIs on the training and testing data are 0.020 and 0.022, respectively. We then use the proposed algorithm AEPPSO to tune the MF parameters of the obtained EFBFN to achieve better performance. The twelve fuzzy rules in the EFBFN yield 156 free parameters in total, of which 96 premise parameters are tuned using EPPSO and the other 60 consequent parameters are determined by the LS method. In applying AEPPSO, 70 individuals are initially generated at random in a population, i.e., M = 70. During the PSO operation, the inertia weights are set to ω_max = 0.35 and ω_min = 0.1; the cognitive parameter c_1 and the social parameter c_2 are both set to 1.5; the mutation probability p and the learning parameter α in EP are both set to 0.1. To show the superiority of AEPPSO, two other algorithms, AEP (Adaptive Evolutionary Programming) and APSO (Adaptive Particle Swarm Optimization), are applied to the same problem. The evolutions are run for 100 generations and repeated for 5 runs. The averaged best-so-far training NDEI values over the 5 runs are shown for each generation in Fig. 1. From the figure we can see that AEP converges more slowly than APSO and AEPPSO, while APSO converges fastest but with a larger NDEI than AEPPSO. This behavior reflects the characteristics of each algorithm:

i. AEP obtains knowledge merely from the individuals themselves without sharing it, while the individuals in APSO adapt themselves according to both their own knowledge and their companions' knowledge. Therefore APSO can reasonably be expected to converge faster.
ii. APSO memorizes the previous knowledge of good solutions as flying destinations of the particles. However, the particles are restricted by this previous knowledge and thus lack diversity, so APSO may converge to an unsatisfactory result.
iii. AEPPSO combines the merits of the two algorithms, and the simulation results in Fig. 1 validate both its solution quality and its convergence speed.

Observing the figure, the tuning performance for the EFBFN based on the hybrid of LS and AEP is much worse than that of the other two, revealing the ineffectiveness of EP on problems requiring high precision. Table 1 lists the generalization capability of different methods, measured by using each method to predict the 500 points immediately following the training set. The proposed method outperforms every other approach listed in the table except ANFIS. Note that the generalization NDEIs of the hybrid algorithms are mean values over 5 runs; the minimum generalization NDEIs of LS+AEP, LS+APSO and LS+AEPPSO are 0.015, 0.011 and 0.009, respectively.
[Fig. 1 plot: NDEI (0.008-0.022) versus generation (0-100) for LS+AEPPSO, LS+AEP and LS+APSO.]
Fig. 1. Average best-so-far NDEI in each generation (iteration) using different methods

Table 1. Comparisons of generalization capability with published results

Method                       Training Data   Error Index (NDEI)
Cascade-Correlation NN [8]   500             0.06
Back-Prop NN [8]             500             0.02
6th-order Polynomial [8]     500             0.04
ANFIS [9]                    500             0.007
AEPLSE [10]                  500             0.014
LS+AEP                       500             0.017
LS+APSO                      500             0.013
LS+AEPPSO                    500             0.011
Unlike the back-propagation and LSE method of the neuro-fuzzy system ANFIS, the method proposed in this paper uses a hybrid of evolutionary algorithms and LSE, and obtains comparable performance. As a new evolutionary fuzzy system, the EFBFN generating method based on the hybrid of LS and AEPPSO provides another effective approach to modeling and prediction with fuzzy inference systems.
5 Conclusions

A novel fuzzy modeling strategy based on the least squares method and the so-called Adaptive Evolutionary-programming and Particle-swarm-optimization (AEPPSO) has been presented. The LS method is first used to determine the fuzzy basis functions of the proposed EFBFN by presetting the centers of the Gaussian membership functions. In the second stage, the combined algorithm EPPSO is used to tune the obtained MF parameters of the EFBFN, with the LS method used again to determine the consequent parameters simultaneously. Proposed as a new kind of Evolutionary Fuzzy System (EFS), the method has been examined on predicting a chaotic time series, and the results are compared to those of a neuro-fuzzy system, neural networks, and other fuzzy modeling methods to demonstrate its effectiveness.
Acknowledgements

This work is supported by the Outstanding Young Scholars Fund (No. 60225006) and the Innovative Research Group Fund (No. 60421002) of the Natural Science Foundation of China.
References

1. Wang, L.X., Mendel, J.M.: Fuzzy Basis Functions, Universal Approximation, and Orthogonal Least-Squares Learning. IEEE Trans. Neural Networks 3 (1992) 807-814
2. Chen, S., Cowan, C.F.N., Grant, P.M.: Orthogonal Least Squares Learning Algorithm for Radial Basis Function Networks. IEEE Trans. Neural Networks 2 (1991) 302-309
3. Lee, C.W., Shin, Y.C.: Construction of Fuzzy Systems Using Least-Squares Method and Genetic Algorithms. Fuzzy Sets and Systems 137 (2003) 297-323
4. Takagi, T., Sugeno, M.: Fuzzy Identification of Systems and Its Application. IEEE Trans. Systems, Man, and Cybernetics 15 (1985) 116-132
5. Shi, Y., Eberhart, R.C.: Parameter Selection in Particle Swarm Optimization. The 7th Annual Conference on Evolutionary Programming, San Diego, USA. 7 (1998) 591-600
6. Hwang, H.S.: Automatic Design of Fuzzy Rule Bases for Modeling and Control Using Evolutionary Programming. IEE Proc. Control Theory Appl. 146 (1999) 9-16
7. Fogel, D.B.: Evolutionary Computation: Toward a New Philosophy of Machine Intelligence. IEEE Press, New York (1995)
8. Crowder, R.S.: Predicting the Mackey-Glass Time Series with Cascade-Correlation Learning. Proc. 1990 Connectionist Models Summer School, Carnegie Mellon University (1990) 117-123
9. Jang, J.S.R., Sun, C.T., Mizutani, E.: ANFIS: Adaptive-Network-Based Fuzzy Inference System. IEEE Trans. Systems, Man and Cybernetics 23 (1993) 665-685
10. Ye, B., Guo, C.X., Cao, Y.J.: Identification of Fuzzy Model Using Evolutionary Programming and Least Squares Estimate. Proc. 2004 IEEE Fuzzy Systems Conf. 2 (2004) 593-598
Analysis of Temporal Uncertainty of Trains Converging Based on Fuzzy Time Petri Nets*

Yangdong Ye1, Juan Wang1, and Limin Jia2

1 Department of Computer Science, Zhengzhou University, Zhengzhou 450052, China
[email protected], [email protected]
2 School of Traffic and Transportation, Beijing Jiaotong University, Beijing 100044, China
[email protected]

Abstract. The paper defines a fuzzy time Petri net (FTPN) which adopts four fuzzy set theoretic functions of time, called fuzzy timestamp, fuzzy enabling time, fuzzy occurrence time and fuzzy delay, to deal with the temporal uncertainty of train group operation; we also present different firing strategies for the net to give prominence to key events. The application instance shows that the method based on FTPN can efficiently analyze the trains converging time, the possibility of converging and the train terminal time in the adjustment of a train operation plan. Compared with the time interval method, this method has outstanding characteristics such as accurate analysis, simple computation, system simplification and convenience for system integration.
1 Introduction

The main issues [1][2] of train group operation modeling in the research of Railway Intelligent Transportation Systems (RITS) include the representation of train group concurrency, the processing of multi-level problems oriented to different control and decision tasks, the handling of the system's hybrid attributes, and the processing of the various uncertainty factors affecting train group operation, all of which involve the analysis of time parameters. With the increasing speed of train group operation and the influence of objective uncertainty factors, it is increasingly important to handle temporal uncertainty in train group operation. The analysis of the temporal uncertainty of trains converging is significant for dynamic control during train operation, train dispatching in stations, passengers changing trains, goods transfer and resource distribution in the railway system. When dealing with time factors in train group operation, the usual representation by a single point of time is impractical and incomplete, while the representation by a time interval is difficult to analyze quantitatively. Based on existing Petri net models of train group operation [2][10], this paper introduces fuzzy set theory to describe uncertain or subjective time information, which suits real applications. How to represent uncertain knowledge is a much-studied issue [3~7] in Petri net modeling. Starting from existing time Petri nets (TPN) [8~11], we define
* This research was partially supported by the National Science Foundation of China under grant number 600332020 and the Henan Science Foundation under grant number 0411012300.
a fuzzy time Petri net that adopts fuzzy set theoretic functions of time to analyze the trains converging issue. The paper introduces the relevant concepts of fuzzy time Petri nets and the fuzzy set theoretic functions of time in Sections 2 and 3; we then give an example of train operation in Section 4 and carry out the corresponding analysis in Section 5; finally, addressing temporal uncertainty, we compare this method with the reasoning algorithm of TPN in Section 6.
2 Fuzzy Time Petri Nets

In order to describe train behaviors with time constraints, the paper defines a fuzzy time Petri net adopting four fuzzy time functions to deal with the temporal uncertainty of the trains converging issue. These functions appear in several publications and applications [6~8] and their definitions were given by Murata et al. [3~5].

Definition 2.1. Fuzzy time Petri nets' system (FTPN's). FTPN's = (P, T, E, β, A, FT, D, M0), where:

(1) P = {p1, p2, …, pn} is a finite set of places;
(2) T = {t1, t2, …, tm} is a finite set of transitions, where P∪T ≠ ∅, P∩T = ∅;
(3) E = {e1, e2, …, em} is a finite set of events;
(4) β: E→T is a mapping function indicating that each event is relevant to a transition;
(5) A ⊆ (P × T)∪(T × P) is a set of arcs;
(6) FT is a set of fuzzy timestamps, related to tokens. An unrestricted timestamp is represented by [0,0,0,0], and an empty timestamp is ∅;
(7) D is a set of fuzzy delay times related to the outgoing arcs of transitions;
(8) M0: P→FT is an initial marking function.
Definition 2.2. State marking of FTPN's. A marking Mi: P→FT (i = 0,1,2,3,…) is a description of the dynamic behavior of the system. A marking of the system corresponds to a vector over the places. In this paper we use sets of tokens, {(p, π(τ)) | p ∈ P, π(τ) ∈ FT}, to describe markings; the empty timestamp ∅ does not appear in the sets of tokens. If M2 is directly reachable from M1 via e1, the sequence q is M1[e1>M2.

Definition 2.3. Firing strategies of transitions in FTPN's. We define FTPN primarily to describe the analysis of temporal uncertainty in train group operation. The firing of a transition takes no time. Under conditions without conflicts, transitions fire immediately when all required tokens arrive. Under conditions with conflicts, the system needs certain firing strategies to perform different analyses. The following strategies are used according to the different problems in FTPN's: (1) the earliest enabling time firing strategy; (2) the strategy of multi-condition first firing; (3) the strategy of high-possibility first firing; (4) mixed strategies, etc. Using these firing strategies we can give prominence to key events and simplify system analysis.
3 Fuzzy Time Functions

A fuzzy time function is a mapping from the time scale, the set of all non-negative real numbers, to the real interval [0, 1]. The value of the function indicates the degree of possibility of an event at a point of time τ. Fuzzy time functions are specified by 4-tuples [3~5] and their graphs are trapezoids describing a train's earliest arrival or departure time, the most possible time and the latest time. The paper uses square brackets to express the functions, to keep consistency with time intervals [10,11].

Definition 3.1. Fuzzy timestamp π(τ). The fuzzy timestamp is the possibility distribution of a token arriving in a place at time τ.

Definition 3.2. The possibility distribution of the latest time. latest is a multi-operand operator that calculates the possibility distribution of the latest time among fuzzy timestamps. Suppose there are n fuzzy timestamps πi(τ) = hi[ai, bi, ci, di], i = 1,2,…,n. The possibility distribution of the latest time is calculated as

latest{π1(τ), π2(τ)} = min(h1,h2)[max(a1,a2), max(b1,b2), max(c1,c2), max(d1,d2)],
latest{πi(τ), i=1,2,…,n} = latest{latest{…{latest{π1(τ), π2(τ)}, …, πi(τ)}, …}, πn(τ)}.

Definition 3.3. The possibility distribution of the earliest time. earliest is an operator that calculates the possibility distribution of the earliest time among many timestamps. For n fuzzy timestamps πi(τ) = hi[ai, bi, ci, di], i = 1,2,…,n,

earliest{π1(τ), π2(τ)} = max(h1,h2)[min(a1,a2), min(b1,b2), min(c1,c2), min(d1,d2)],
earliest{πi(τ), i=1,2,…,n} = earliest{earliest{…{earliest{π1(τ), π2(τ)}, …, πi(τ)}, …}, πn(τ)}.

Definition 3.4. Fuzzy enabling time e(τ). When the occurrence of a transition or an event needs more than one token or resource, the latest-time possibility distribution is the fuzzy enabling time e(τ). The net in this paper is an ordinary net (the weight of input arcs is 1). So if transition t has n input places, firing t needs n tokens, i.e., the occurrence of the corresponding event needs n tokens in the n places, respectively. The fuzzy enabling time of t is

e(τ) = latest{πi(τ), i = 1,2,…,n}.   (1)
Definition 3.5. Fuzzy occurrence time o(τ). The fuzzy occurrence time o(τ) is the possibility distribution of an event's occurring time. In ordinary circumstances we adopt the principle of First Come, First Served, giving high priority to the earlier enabling event. The fuzzy occurrence time of an event is the minimum overlapping area between the fuzzy enabling time of this event and the result of earliest; this operation is expressed by min [3][4]. In this paper the operator Min rounds the value of the calculated result of min. Suppose there are n enabling events ei, i = 1,2,…,n, with corresponding fuzzy enabling times ei(τ) given by formula (1). The fuzzy occurrence time of the event et with fuzzy enabling time et(τ) is calculated as
ot(τ) = Min{et(τ), earliest{ei(τ), i = 1,2,…,t,…,n}}.   (2)
Definition 3.6. Fuzzy delay d(τ). The fuzzy delay is a fuzzy time function associated with the outgoing arcs of transitions. It is a kind of time measurement used when describing and analyzing events, and it expresses the degree of possibility of the time span over which the system state changes from one state to another when an event occurs. Given the fuzzy delay and the fuzzy occurrence time, the fuzzy timestamp of the new token is calculated as

π(τ) = o(τ) ⊕ d(τ) = [o1, o2, o3, o4] ⊕ [d1, d2, d3, d4] = [o1+d1, o2+d2, o3+d3, o4+d4].   (3)
Suppose transition t has one input place p1 and one output place p2, and no conflict. If the fuzzy timestamp of the token in p1 is π1(τ), then the fuzzy enabling time is et(τ) = π1(τ) and the fuzzy occurrence time is ot(τ) = et(τ) = π1(τ). Given the fuzzy delay d(τ), the fuzzy timestamp of the new token in p2 after the event occurs is

π2(τ) = ot(τ) ⊕ d(τ) = et(τ) ⊕ d(τ) = π1(τ) ⊕ d(τ).   (4)

In this case the fuzzy timestamp of the new token equals the result of the operator ⊕ applied to the fuzzy timestamp of the token in the input place and the fuzzy delay.
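A minimal sketch of the three time operators, representing a fuzzy time function h[a,b,c,d] as the tuple (h, a, b, c, d); the height handling in the ⊕ operator is our assumption, since the delays in this paper are ordinary trapezoids of height 1:

```python
def latest(*pis):
    """Def. 3.2: min of heights, componentwise max of the 4-tuples."""
    return (min(p[0] for p in pis),) + tuple(
        max(p[i] for p in pis) for i in range(1, 5))

def earliest(*pis):
    """Def. 3.3: max of heights, componentwise min of the 4-tuples."""
    return (max(p[0] for p in pis),) + tuple(
        min(p[i] for p in pis) for i in range(1, 5))

def shift(o, d):
    """Eqn. (3): o (+) d adds the 4-tuples; the height is kept from o."""
    return (o[0],) + tuple(o[i] + d[i] for i in range(1, 5))

# Example: a token leaving S1 at time 0 and crossing Se12 then Se25:
pi = shift(shift((1, 0, 0, 0, 0), (1, 18, 20, 25, 27)), (1, 13, 15, 20, 22))
```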
4 Train Group Behaviors Model

During the modeling of train group behaviors, a series of temporal uncertainty problems emerge, caused by objective stochastic factors that make train operation depart from the planned train operation graph.

4.1 Example of Train Group Operation

In order to explain the validity of FTPN clearly, the railway net in Fig. 1 is given, where S denotes a station and Se a section. Train operation is described by the following time constraints, where time (in minutes) is given by fuzzy time functions.
Fig. 1. Railway route net of certain area
Train Tr1 plans to start from station S1 at 7:00am, to arrive at S6, and after time [13,15,20,22] on Se611 to reach its terminal S11. There are three paths to S6. Tr1 runs on Se12 and Se25 for times [18,20,25,27] and [13,15,20,22] to S5, or on Se13 and Se35 for [18,20,25,27] and [20,22,30,32] to S5, and then from S5 spends [18,20,25,27] on Se56 to reach S6. Another path is from S1 via Se14 and Se46 to S6, taking [26,28,32,34]
and [26,28,32,34], respectively. The problem of converging may emerge at S6; if it happens, the train will be delayed for [2,2,4,4]; if not, there is no delay. Train Tr2 starts from S8 at 7:00am, running via Se89 and Se96 for [18,20,25,27] and [20,22,25,27] respectively to reach S6, or via Se87 and Se76 for [13,15,18,20] and [23,25,30,32], and then spends [18,20,25,27] to reach S10. The same converging problem arises at S6.

4.2 Structure of the Model

We construct the FTPN model shown in Fig. 5. The descriptions of the places and transitions are listed in Table 1. FTPN's = (P, T, E, β, A, FT, D, M0), where
(1) P = {pi | i = 1,2,…,14}; T = {tj | j = 1,2,…,16}; E = {ej | j = 1,2,…,16};
(2) Fuzzy delays D = {d1 = [18,20,25,27], d2 = [18,20,25,27], d3 = [13,15,20,22], d4 = [20,22,30,32], d5 = [18,20,25,27], d6 = [26,28,32,34], d7 = [26,28,32,34], d8 = d15 = [0,0,0,0], d9 = d17 = [2,2,4,4], d10 = [13,15,20,22], d11 = [18,20,25,27], d12 = [13,15,18,20], d13 = [20,22,25,27], d14 = [23,25,30,32], d16 = [18,20,25,27]};
(3) M0: {(p1, π01(τ)), (p9, π02(τ))}; π01(τ) = [0,0,0,0], π02(τ) = [0,0,0,0].

Table 1. Places and transitions of the train operation FTPN model

p1,p2,p3,p4,p5,p6     Tr1 is in station S1, S2, S3, S4, S5 and S6.
p7, p13               Tr1 and Tr2 are ready to depart from S6.
p8                    Tr1 is in station S11.
p9,p10,p11,p12        Tr2 is in station S8, S9, S7 and S6.
p14                   Tr2 is in station S10.
t1,t2,t3,t4,t5        Tr1 is running on section Se12, Se13, Se25, Se35 and Se56.
t6,t7                 Tr1 is running on section Se14 and Se46.
t8,t15                Trains are in S6 without trains converging.
t9                    Trains stay in S6 with trains converging.
t10                   Tr1 is running on section Se611.
t11,t12,t13,t14,t16   Tr2 is running on section Se89, Se87, Se96, Se76 and Se610.
5 Analysis of Temporal Uncertainty of Trains Converging

This section analyzes, for the model in Fig. 5, the issues arising in the process of adjusting a train operation plan, including the trains converging time, the possibility of converging, the train terminal time and the adjustment of train operation times.

5.1 Problem Statement

The analysis of the time parameters of trains converging in stations reduces to the computation of timestamps and the analysis of transition sequences from one state of the FTPN's to another. Three transition sequences, t1,t3,t5; t2,t4,t5; t6,t7, exist when a token moves from p1 to p6. Two sequences, t11,t13; t12,t14, exist when a token moves from p9 to p12. Suppose the possible fuzzy timestamps in p6 are π61(τ), π62(τ), π63(τ) and the fuzzy timestamps in p12 are π121(τ), π122(τ). When there are tokens in both p6 and p12, the system marking is Mp:
{(p6, π6(τ)), (p12, π12(τ))}. Mp is the critical state that determines whether trains converging happens or not. All possible fuzzy timestamps in Mp can be calculated by formulas (3) and (4), and the results are shown in Fig. 2.
Fig. 2. All possible fuzzy timestamps in Mp
From Fig. 2, transitions t8, t15 and t9 may be enabled simultaneously under Mp, so the corresponding events e8, e15 and e9 are in conflict. With different firing strategies, the occurrence sequence from one state to another differs. Here we adopt the principle of smallest subscript first to deal with events that have the same enabling time and are independent of each other. The behaviors of the two trains after arriving in station S6 are summarized as follows:

q1: Mp [e9>Mj [e10>M10 [e16>Mf
q2: Mp [e8>M8 [e15>Mnj [e10>M10' [e16>Mnf

where the markings are Mj: {(p7, πj1(τ)), (p13, πj2(τ))}, Mnj: {(p7, πnj1(τ)), (p13, πnj2(τ))}, Mf: {(p8, π8(τ)), (p14, π14(τ))} and Mnf: {(p8, π8'(τ)), (p14, π14'(τ))}. The trains converging issue is thus the computation of timestamps and the analysis of transition sequences from state M0 to Mp, then to Mj or Mnj, and on to the final state Mf or Mnf. All the transition sequences and the correlative markings can be seen in Table 2. When trains converging does not happen, the values of the corresponding fuzzy timestamps remain invariant, so we use the same subscripts in Table 2.

Table 2. Transition sequences and partial markings of the model

Sequences with trains converging (q1):
q11: t1,t3,t5,t11,t13,t9,t10,t16 | Mp1: {(p6,π61(τ)),(p12,π121(τ))} | Mj1: {(p7,π71(τ)),(p13,π131(τ))}
q12: t6,t7,t11,t13,t9,t10,t16 | Mp2: {(p6,π63(τ)),(p12,π121(τ))} | Mj2: {(p7,π72(τ)),(p13,π132(τ))}
q13: t1,t3,t5,t12,t14,t9,t10,t16 | Mp3: {(p6,π61(τ)),(p12,π122(τ))} | Mj3: {(p7,π73(τ)),(p13,π133(τ))}

Sequences without trains converging (q2):
q21: t2,t4,t5,t11,t13,t8,t15,t10,t16 | Mp4: {(p6,π62(τ)),(p12,π121(τ))} | Mnj1: {(p7,π62(τ)),(p13,π121(τ))}
q22: t2,t4,t5,t12,t14,t8,t15,t10,t16 | Mp5: {(p6,π62(τ)),(p12,π122(τ))} | Mnj2: {(p7,π62(τ)),(p13,π122(τ))}
q23: t6,t7,t12,t14,t8,t15,t10,t16 | Mp6: {(p6,π63(τ)),(p12,π122(τ))} | Mnj3: {(p7,π63(τ)),(p13,π122(τ))}
q24: t1,t3,t5,t11,t13,t8,t15,t10,t16 | Mp7: {(p6,π61(τ)),(p12,π121(τ))} | Mnj4: {(p7,π61(τ)),(p13,π121(τ))}
q25: t6,t7,t11,t13,t8,t15,t10,t16 | Mp8: {(p6,π63(τ)),(p12,π121(τ))} | Mnj5: {(p7,π63(τ)),(p13,π121(τ))}
q26: t1,t3,t5,t12,t14,t8,t15,t10,t16 | Mp9: {(p6,π61(τ)),(p12,π122(τ))} | Mnj6: {(p7,π61(τ)),(p13,π122(τ))}

On the basis of the earliest enabling time firing strategy, we adopt the strategies of multi-condition first firing and high-possibility first firing to deal with the temporal uncertainty of trains converging. The problems include whether trains converging occurs, what the possibility of converging is, what the earliest or latest terminal time is, and how to adjust the train operation plan from the analysis of trains converging.
5.2 Analysis Using the Strategy of Multi-condition First Firing

The strategy of multi-condition first firing means that, comparing the numbers of tokens required by the enabled events, the transition requiring more tokens fires first. This strategy highlights the complex event that requires more resources. With this strategy the occurrence sequence is q1. The analysis of the trains converging time corresponds to the computation of the timestamp of e9. The fuzzy occurrence times of e9 under Mp1, Mp2, Mp3 in Table 2 can be calculated by formulas (1) and (2), respectively:

oe91(τ) = 0.5[49,52,52,54], oe92(τ) = 0.25[52,53,53,54], oe93(τ) = 0.25[49,51,51,52].

The final fuzzy timestamps corresponding to marking Mf can be calculated by formula (4); the results are shown with dashed lines in Fig. 3(a)(b). The following analysis of trains converging can be drawn from the above computation.

• The trains converging time and the possibility of converging are given by oe9(τ). The highest occurrence possibility, 0.5, is at τ = 52, i.e., trains Tr1 and Tr2 can meet in station S6 between 7:49am and 7:54am and the largest possibility is 0.5.
• The earliest and latest terminal times after trains converging can be read from Fig. 3(a)(b). The earlier fuzzy time functions when Tr1 arrives at its terminal (token in p8) are π81(τ) = 0.5[64,69,76,80] and π83(τ) = 0.25[64,68,75,78], so the earliest terminal time of Tr1 is 8:04am and the highest possibility of arriving at the terminal station is 0.5.
• The paths corresponding to the earliest or latest terminal time can also be obtained. The fuzzy timestamp π81(τ) must pass through the transition sequence t1,t3,t5,t11,t13,t9,t10,t16, i.e., Tr1 goes only through the sections Se12, Se25, Se56 and Se611 and stops for a while at station S6, which yields the earliest terminal time 8:04am. The times and paths of Tr2 can be analyzed in the same way.

5.3 Analysis with the Strategy of High-Possibility First Firing

The strategy of high-possibility first firing means that, for a given occurrence possibility threshold δ, if there is a time τ making the possibility hi of oei(τ) greater than δ, i.e., hi > δ, then the event ei is the key event to be analyzed. This strategy gives prominence to events with higher possibility. If δ = 0.5, the trains converging event of the model becomes a non-occurring or low-possibility event. The occurrence sequence q2 includes all cases without trains converging. The fuzzy occurrence times of e8 and e15 are oe8(τ) = π6(τ) and oe15(τ) = π12(τ), and the marking after e8 and e15 is Mnj: {(p7, πnj1(τ)), (p13, πnj2(τ))}. The corresponding transition sequences and fuzzy timestamps can be seen in Table 2. All final fuzzy timestamps computed by formula (4) are shown with solid lines in Fig. 3(a)(b). The analysis of the temporal uncertainty of trains converging follows from these results.

• The trains converging event does not occur with this strategy, so the two trains run separately and there is no stop in station S6.
• The earliest and latest terminal times without trains converging can be read from Fig. 3(a)(b). The earliest fuzzy time when Tr1 arrives at its terminal (token in p8) is π81'(τ) = [62,70,90,98], so the earliest terminal time of Tr1 is 8:02am and the highest possibility is 1.
• Corresponding to π81'(τ), Tr1 goes only through the sections Se12, Se25, Se56 and Se611 with no stop in S6, which yields the earliest terminal time 8:02am.
(a) All possible fuzzy timestamps of token in p8
(b) All possible fuzzy timestamps of token in p14
Fig. 3. All final fuzzy timestamps
5.4 Analysis of Train Operation Plan Adjustment Based on Trains Converging

The analysis of train operation plan adjustment based on trains converging can be made from Fig. 2. The possibility of the trains not converging is higher than that of the trains converging. If the system in reality requires trains converging, we can carry out a quantitative analysis of the train operation plan using the FTPN model. Among all the cases of π6(τ) and π12(τ), the maximum time difference is 5. With the minimum adjustment degree, when the later token arrives 5 units of time ahead of schedule or the earlier token is delayed for 5 units, the event e9 has the highest possibility 1 through the transition sequence t1,t3,t5,t11,t13. That is to say, if Tr2 departs ahead of time or speeds up, or Tr1 puts off departing, the possibility of trains converging becomes higher. With the adjustment based on one train as a benchmark and the highest possibility 1, Tr1 passes through Se12, Se25, Se56 and Tr2 goes through Se89, Se96; the time of converging is [44,50,50,54] or [49,55,55,59], as shown in Fig. 4(a)(b).
(a) Fuzzy timestamps of Tr1 ahead for 5 minutes
(b) Fuzzy timestamps of Tr2 delaying for 5 minutes
Fig. 4. Train operation adjustment for trains converging
Similarly, when the later token is delayed for 5 units of time or the earlier token arrives 5 units of time ahead, e9 has the lowest possibility 0.
6 Comparing Analysis with the Reasoning Algorithm of Time Petri Nets

The existing research on temporal knowledge reasoning algorithms [10,11] for TPN works on the TPN system model, uses finite system states, creates a sprouting graph of time parameters, and then analyzes the time parameters with the sprouting graph.
Fig. 5. FTPN model of train operation
Fig. 6. TPN model of train operation
6.1 Time Petri Net Model of the Example

Time Petri nets (TPN) adopt time intervals to deal with the temporal uncertainty of train group operation [10]. The TPN model of the example, following [10], is shown in Fig. 6, where the time interval parameters are marked. The corresponding sprouting graph with the time parameters of the model is shown in Fig. 7.
Fig. 7. Time parameter sprouting graph of train operation
6.2 Comparing Analysis

We analyze the example using the temporal knowledge reasoning algorithm of TPN; the specific contrasts with the analysis in Section 5 can be seen in Table 3.
Table 3. Temporal knowledge reasoning contrast of TPN and FTPN

Representation of temporal uncertainty:  TPN: time interval;  FTPN: fuzzy time function
Processing of temporal uncertainty:      TPN: no quantitative analysis;  FTPN: quantitative analysis
Data structure:                          TPN: dynamic sprouting graph;  FTPN: timestamp
Computation procedures:                  TPN: 4;  FTPN: 2
Occurrence time of trains converging in q11, q12, q13:
    TPN: [49,54], [52,54], [49,52];
    FTPN: 0.5[49,52,52,54], 0.25[52,53,53,54], 0.25[49,51,51,52]
Earliest terminal time of Tr1 after converging and transition sequence:
    TPN: [64,78], q13: t1,t3,t5,t12,t14,t9,t10,t16;
    FTPN: 0.25[64,68,75,78], q13: t1,t3,t5,t12,t14,t9,t10,t16
Earliest terminal time of Tr2 after converging and transition sequence:
    TPN: [69,83], q13: t1,t3,t5,t12,t14,t9,t10,t16;
    FTPN: 0.25[69,73,80,83], q13: t1,t3,t5,t12,t14,t9,t10,t16
Adjustment of train operation plan:      TPN: no;  FTPN: adjustment degree is 5 units of time
The following conclusions can be deduced from Table 3:

(1) The descriptions of temporal uncertainty by the two methods are consistent.
(2) The fuzzy time functions of FTPN allow quantitative analysis.
(3) Our computation procedure is simpler. The existing method has four procedures: modeling, system state searching, creating the sprouting graph, and analysis. Our method needs only two procedures: system state searching, and calculating and analyzing the time parameters as tokens move.
(4) Time analysis is easier in our method. FTPN adopts a token structure with fuzzy timestamps, which is much simpler than the sprouting graph structure.
(5) Our method adopts firing strategies that give prominence to key events under different conditions.
(6) The method is convenient for the integration of train object models.
7 Conclusions

The paper defines a fuzzy time Petri net for the temporal uncertainty of train group operation and introduces four fuzzy set theoretic functions of time to the net for the representation of temporal uncertainty during train operation. Oriented to the temporal uncertainty of trains converging in train operation plan adjustment, this paper presents different firing strategies for the model to give prominence to key events and to simplify system modeling and analysis. The application instance shows that the method can quantitatively analyze the temporal uncertainty of trains converging, including the trains converging time, the possibility of converging and the terminal time in train operation plan adjustment. Comparing with the time interval method, we conclude that our method has outstanding characteristics such as accurate analysis, simple computation, system simplification and convenience for system integration. The method can thus be used to represent temporal knowledge and enriches the research on various railway expert systems.
References

1. Jia, L.-M., Jiang, Q.-H.: Study on Essential Characters of RITS. Proceedings of the 6th International Symposium on Autonomous Decentralized Systems. IEEE Computer Society, Pisa, Italy (2003) 216-221
2. Ye, Y.-D., Zhang, L., Du, Y.-H., Jia, L.-M.: Three-dimension Train Group Operation Simulation System Based on Petri Net with Objects. Proceedings of the IEEE 6th International Conference on Intelligent Transportation Systems. Vol. 2 (2003) 1568-1573
3. Murata, T.: Temporal Uncertainty and Fuzzy-timing High-Level Petri Nets. 17th International Conference on Application and Theory of Petri Nets. Lecture Notes in Computer Science, Vol. 1091. Springer-Verlag, New York (1996) 11-28
4. Zhou, Y., Murata, T.: Petri Net Model with Fuzzy-timing and Fuzzy-Metric. International Journal of Intelligent Systems 14(8) (1999) 719-746
5. Zhou, Y., Murata, T., DeFanti, T.A.: Modeling and Performance Analysis Using Extended Fuzzy-timing Petri Nets for Networked Virtual Environments. IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics 30(5) (2000) 737-756
6. Cardoso, J., Valette, R., Dubois, D.: Possibilistic Petri Nets. IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics 29(5) (1999) 573-582
7. Dubois, D., Prade, H.: Processing Fuzzy Temporal Knowledge. IEEE Transactions on Systems, Man and Cybernetics 19(4) (1989) 729-744
8. Merlin, P.M.: A Methodology for the Design and Implementation of Communication Protocols. IEEE Transactions on Communications 24(6) (1976) 614-621
9. Tsai, J.-J.P., Yang, S.-J., Chang, Y.-H.: Timing Constraint Petri Nets and Their Application to Schedulability Analysis of Real-time System Specifications. IEEE Transactions on Software Engineering 21(1) (1995) 32-49
10. Ye, Y.-D., Du, Y.-H., Gao, J.-W., Jia, L.-M.: A Temporal Knowledge Reasoning Algorithm Using Time Petri Nets and Its Applications in Railway Intelligent Transportation Systems. Journal of the China Railway Society 24(5) (2002) 5-10
11. Jong, W.-T., Shiau, Y.-S., Horng, Y.-J., Chen, H.-H., Chen, S.-M.: Temporal Knowledge Representation and Reasoning Techniques Using Time Petri Nets. IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics 29(4) (1999) 541-545
Interval Regression Analysis Using Support Vector Machine and Quantile Regression

Changha Hwang1, Dug Hun Hong2, Eunyoung Na3, Hyejung Park3, and Jooyong Shim4

1 Division of Information and Computer Sciences, Dankook University, Yongsan Seoul, 140-714, South Korea
[email protected]
2 Department of Mathematics, Myongji University, Yongin Kyunggido, 449-728, South Korea
[email protected]
3 Department of Statistical Information, Catholic University of Daegu, Kyungbuk 712-702, South Korea
{ney111, hyjpark}@cu.ac.kr
4 Corresponding Author, Department of Statistics, Catholic University of Daegu, Kyungbuk 702-701, South Korea
[email protected]

Abstract. This paper deals with interval regression analysis using support vector machine and quantile regression. The algorithm consists of two phases: the identification of the main trend of the data, and the interval regression based on the acquired main trend. Using the principle of the support vector machine, the linear interval regression can be extended to nonlinear interval regression. Numerical studies are presented which indicate the performance of this algorithm.
1 Introduction

Regression analysis has been the most frequently used technique in various areas of business, science, and engineering. But we encounter many situations where the necessary assumptions for regression analysis cannot be met because the data are not based on random variables. In such situations interval regression analysis, which is regarded as the simplest version of the possibilistic regression analysis introduced by Tanaka et al. [9], can be a good alternative. Interval regression, which can measure the spread of the data in the presence of heteroscedasticity, is addressed and analyzed here. In interval regression analysis, an interval covering the whole sample is estimated. However, when the estimation is driven by the whole data set, outliers dominate the result, making it difficult to capture the tendency of the data as a whole. Lee and Tanaka [6] proposed upper and lower approximation models based on quantile regression to overcome this problem in linear interval regression. The support vector machine (SVM), first developed by Vapnik [10][11], is being used as a new technique for regression and classification problems. SVM is gaining popularity due to many attractive features and promising empirical
performance. It has been successfully applied to a number of real-world problems such as handwritten character and digit recognition, face detection, text categorization and object detection in machine vision. The aforementioned applications are classification problems, but SVM is also widely applicable to regression problems: it was initially developed for classification and has recently been extended to the regression domain. SVM is based on the structural risk minimization (SRM) principle, which has been shown to be superior to the traditional empirical risk minimization (ERM) principle. SRM minimizes an upper bound on the expected risk, unlike ERM, which minimizes the error on the training data; by minimizing this bound, high generalization performance can be achieved. Introductions and overviews of recent developments in SVM regression can be found in Gunn[2], Smola and Schoelkopf[8], Kecman[4], and Wang[12]. SVM avoids over-fitting by using a regularization technique and is robust to outliers. Hong and Hwang[3] introduced interval regression analysis using the quadratic loss SVM for crisp input and interval output. In this paper, we propose interval regression analysis for crisp input and output using the principle of SVM and quantile regression. The proposed model can be applied to both linear and nonlinear interval regression. The remainder of this paper is organized as follows. In Section 2 we explain the estimation method of quantile regression using SVM. In Section 3 we explain the estimation method of interval regression using the principle of SVM and the quantile regression method of Section 2. In Section 4 we present numerical studies through four examples. Finally, Section 5 gives the conclusions.
2 Quantile Regression Using Support Vector Machine

The quantile regression model introduced by Koenker and Bassett[5] assumes that the conditional quantile function of the response given x_i is linearly related to the covariate vector x_i as follows:

Q(p|x_i) = β(p)^t x_i for p ∈ (0, 1),   (1)

where β(p), the p-th regression quantile, is defined as any solution to the optimization problem

min_β Σ_{i=1}^{n} ρ_p(y_i − β^t x_i),   (2)

where ρ_p(·) is the check function defined as

ρ_p(r) = p r I(r ≥ 0) + (p − 1) r I(r < 0).   (3)
minimize
102
C. Hwang et al.
subject to
yi − wt xi ≤ ξi , wt xi − yi ≤ ξi∗ , ξi , ξi∗ ≥ 0,
(4)
where the p-th regression quantile β(p) is expressed in terms of w. We construct a Lagrange function as follows:

L = (1/2)‖w‖² + γ Σ_{i=1}^{n} (p ξ_i + (1 − p) ξ_i*) − Σ_{i=1}^{n} α_i (ξ_i − y_i + w^t x_i) − Σ_{i=1}^{n} α_i* (ξ_i* + y_i − w^t x_i) − Σ_{i=1}^{n} (η_i ξ_i + η_i* ξ_i*).   (5)
We notice that the positivity constraints α_i, α_i*, η_i, η_i* ≥ 0 should be satisfied. After taking partial derivatives of the above equation with respect to the primal variables (w, ξ_i, ξ_i*) and plugging them into the above equation, we have the optimization problem below:

max_{α,α*} −(1/2) Σ_{i,j=1}^{n} (α_i − α_i*)(α_j − α_j*) x_i^t x_j + Σ_{i=1}^{n} y_i (α_i − α_i*)   (6)

with constraints α_i ∈ [0, pγ] and α_i* ∈ [0, (1 − p)γ]. Solving the above equation with the constraints determines the optimal Lagrange multipliers α_i, α_i*; the p-th regression quantile estimator and the conditional quantile function estimator given the covariate vector x are obtained as, respectively,

w = Σ_{i=1}^{n} (α_i − α_i*) x_i  and  Q(p|x) = Σ_{i=1}^{n} (α_i − α_i*) x_i^t x.   (7)
For the nonlinear quantile regression case, the conditional quantile function requires the computation of dot products φ(x_k)^t φ(x_l), k, l = 1, …, n, in a potentially higher-dimensional feature space. Under certain conditions (Mercer[7]), these demanding computations can be reduced significantly by introducing a kernel function K such that

φ(x_k)^t φ(x_l) = K(x_k, x_l).   (8)

The kernels often used are given below:

K(x, y) = (x^t y + 1)^d,  K(x, y) = exp(−‖x − y‖² / (2σ²)),   (9)
where d and σ 2 are kernel parameters.
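A direct sketch of the two kernels of Eqn. (9), with illustrative names:

```python
import numpy as np

def poly_kernel(x, y, d=2):
    """Polynomial kernel (x'y + 1)^d of Eqn. (9)."""
    return (np.dot(x, y) + 1.0) ** d

def gauss_kernel(x, y, sigma2=1.0):
    """Gaussian kernel exp(-||x - y||^2 / (2 sigma^2)) of Eqn. (9)."""
    diff = np.asarray(x, float) - np.asarray(y, float)
    return np.exp(-np.dot(diff, diff) / (2.0 * sigma2))
```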
3 Support Vector Interval Regression

The proposed model is divided into two parts: the lower approximation model and the upper approximation model. In the lower approximation model, a main
proportion of the data without extreme points is determined from the data classified into three classes. In the upper approximation model, intervals including all observations are obtained based on the already obtained lower approximation model.

3.1 Lower Approximation Model
In the lower approximation model, we identify the interval regression model from the main trend of the data. We adopt quantile regression using SVM to select the data subset that we want to consider mainly. By applying quantile regression, we can determine a main proportion of the given data without extreme points. If we mainly want to consider the p·100% (0 < p < 1) center-located observations, that portion of the data can be obtained by quantile regression with p1 = 0.5 + p/2 and p2 = 0.5 − p/2. The obtained model is insensitive to extreme data points since only the p·100% center-located observations are used. The lower approximation model is expressed as

Y_L(x_i) = B_0 + B_1 x_{i1} + · · · + B_m x_{im} = [y_L^−(x_i), y_L^+(x_i)], i ∈ C2,   (10)

where the interval coefficients are denoted as B_j = (b_j, d_j) (j = 0, 1, …, m) and y_L^−(x_i) and y_L^+(x_i) are the bounds of Y_L(x_i). Outputs in Class 2 (i ∈ C2) should be included in the lower approximation model Y_L(x_i), which can be expressed as follows:

y_i ∈ Y_L(x_i) ⟺ b^t x_i + d^t|x_i| ≥ y_i and b^t x_i − d^t|x_i| ≤ y_i, i ∈ C2.   (11)

It is desirable that observations in C1 be located above the upper bound of Y_L(x_i), while observations in C3 be located below the lower bound of Y_L(x_i). But for multivariate data, some observations in C1 or C3 or both may fall inside the lower approximation model Y_L(x_i). Thus, to permit some observations in C1 or C3 or both to be included in Y_L(x_i), we introduce a tolerance vector θ = (θ_0, …, θ_m)^t. Adding the tolerance vector θ to the radius vector d of the interval coefficient vector B, the following inequalities are considered for observations in C1 and C3:

y_i ≥ b^t x_i + (d^t|x_i| − θ^t|x_i|), i ∈ C1,
y_i ≤ b^t x_i − (d^t|x_i| − θ^t|x_i|), i ∈ C3,   (12)
where θ_j is the tolerance parameter introduced by Lee and Tanaka[6]. θ_j = 0 (j = 0, 1, …, m) indicates that the lower approximation model Y_L(x_i) classifies the data into the three classes cleanly, while θ_j ≠ 0 for some j indicates that the lower approximation model Y_L(x_i) includes some data points in C1 or C3 or both. Based on the assumptions mentioned above, the problem is to obtain the optimal interval coefficients B_j = (b_j, d_j) (j = 0, 1, …, m) of the lower approximation model that minimize the following objective function:

min (1/2)(‖b‖² + ‖d‖² + ‖θ‖²) + γ( Σ_{i∈C} ξ_{1i} + Σ_{i∈C} ξ_{2i} + Σ_{i∈C2} (ξ_{3i} + ξ_{3i}*) )

subject to

d^t|x_i| ≤ ξ_{1i}, i ∈ C = C1 ∪ C2 ∪ C3,
θ^t|x_i| ≤ ξ_{2i}, i ∈ C,
y_i − b^t x_i ≤ ξ_{3i} + ε, b^t x_i − y_i ≤ ξ_{3i}* + ε, i ∈ C2,
b^t x_i − d^t|x_i| ≤ y_i ≤ b^t x_i + d^t|x_i|, i ∈ C2,
y_i ≥ b^t x_i + d^t|x_i| − θ^t|x_i|, i ∈ C1,
y_i ≤ b^t x_i − d^t|x_i| + θ^t|x_i|, i ∈ C3,
d^t|x_i| − θ^t|x_i| ≥ 0, i ∈ C,
ξ_{1i} ≥ 0, ξ_{2i} ≥ 0, i ∈ C, and ξ_{3i}^(∗) ≥ 0, i ∈ C2.   (13)
t max − 21 α2i α2j |xi |t |xj | i,j∈C α1i α1j |xi | |xj | +
i,j∈C t +2 i,j∈C α7i α7j |xi | |xj | + i,j∈C2 (α3i − α∗3i )(α3j − α∗3j ) xti xj
+ i,j∈C2 (α4i − α∗4i )(α4j − α∗4j ) xti xj
+ − 2 i∈C1 ,j∈C3 α5i α6j xti xj
+ i,j∈C2 (α4i + α∗4i )(α4j + α∗4j ) |xi |t |xj |
+3 i,j∈C1 α5i α5j |xi |t |xj | + 3 i,j∈C3 α6i α6j |xi |t |xj |
+ i,j∈C2 (α3i − α∗3i )(α4j − α∗4j ) xti xj
−2 i∈C2 ,j∈C1 (α3i − α∗3i )α5j xti xj
+2 i∈C2 ,j∈C3 (α3i − α∗3i )α6j xti xj
−2 i∈C2 ,j∈C1 (α4i − α∗4i )α5j xti xj
+2 i∈C2 ,j∈C3 (α4i − α∗4i )α6j xti xj
(14) −2 i∈C,j∈C2 (α1i + α∗7i )(α4j + α∗4j ) |xi |t |xj |
∗ t −2 i∈C2 ,j∈C1 (α4i + α4i )α5j |xi | |xj |
−2 i∈C2 ,j∈C3 (α4i + α∗4i )α6j |xi |t |xj |
+2 i,j∈C (α1i + α∗2i )α7j |xi |t |xj |
+2 i∈C,j∈C1 (α1i − α∗2i )α5j |xi |t |xj |
−2 i∈C,j∈C3 (α1i + α∗2i )α6j |xi |t |xj |
+4 i∈C1 ,j∈C3 α5i α6j |xi |t |xj |
− i∈C2 α3i ( −
yi ) + i∈C2 α∗3i ( + yi ) − i∈C2 α4i yi + i∈C2 α∗4i yi + i∈C1 α5i yi − i∈C3 α∗3i yi (∗) (∗) subject to 0 ≤ α1i , α2i , α3i ≤ γ, α4i , α5i , α6i , α7i ≥ 0.
Solving the above problem, the optimal lower approximation model is obtained as follows:

Y_L*(x) = ( Σ_{i∈C2} [(α_{3i} − α_{3i}*) + (α_{4i} − α_{4i}*)] x_i^t x − Σ_{i∈C1} α_{5i} x_i^t x + Σ_{i∈C3} α_{6i} x_i^t x,
  Σ_{i∈C} [−α_{1i} + α_{7i}] |x_i|^t|x| + Σ_{i∈C2} (α_{4i} + α_{4i}*) |x_i|^t|x| − Σ_{i∈C1} α_{5i} |x_i|^t|x| − Σ_{i∈C3} α_{6i} |x_i|^t|x| ).   (15)
3.2 Upper Approximation Model
In the upper approximation model, we formulate the interval regression model including all observations, based on the already obtained lower approximation model (main trend). The upper approximation model Y_U(x_i) including all the data can be expressed as

Y_U(x_i) = A_0 + A_1 x_{i1} + · · · + A_m x_{im} = [y_U^−(x_i), y_U^+(x_i)], i = 1, …, n,   (16)

where A_j = (a_j, c_j) (j = 0, 1, …, m), a_j and c_j are the center and radius of the interval coefficient A_j, and y_U^−(x_i) and y_U^+(x_i) are the bounds of Y_U(x_i). We have already obtained the lower approximation model Y_L(x_i) representing the main trend of the given data. Since the upper approximation model Y_U(x_i) should include the lower approximation model, we can formulate the upper approximation model including all observations as the following problem:
min (1/2)(‖a‖² + ‖c‖²) + γ( Σ_{i=1}^{n} ξ_{1i} + Σ_{i=1}^{n} ξ_{2i} + Σ_{i=1}^{n} ξ_{2i}* ),

subject to

c^t|x_i| ≤ ξ_{1i},
y_i − a^t x_i ≤ ξ_{2i} + ε, a^t x_i − y_i ≤ ξ_{2i}* + ε,
a^t x_i − c^t|x_i| ≤ min(y(x_i), y_L^−(x_i)),
a^t x_i + c^t|x_i| ≥ max(y(x_i), y_L^+(x_i)),
ξ_{1i} ≥ 0, ξ_{2i}^(∗) ≥ 0.   (17)
Then, constructing a Lagrange function and differentiating it with respect to a, c, ξ_{1i}, ξ_{2i}, ξ_{2i}*, we obtain the corresponding dual optimization problem:
maximize −(1/2) Σ_{i,j=1}^{n} (α_{2i} − α_{2i}*)(α_{2j} − α_{2j}*) x_i^t x_j
 − (1/2) Σ_{i,j=1}^{n} (α_{3i} − α_{3i}*)(α_{3j} − α_{3j}*) x_i^t x_j
 − Σ_{i,j=1}^{n} (α_{2i} − α_{2i}*)(α_{3j} − α_{3j}*) x_i^t x_j
 − (1/2) Σ_{i,j=1}^{n} (α_{3i} + α_{3i}*)(α_{3j} + α_{3j}*) |x_i|^t|x_j|
 − (1/2) Σ_{i,j=1}^{n} α_{1i}α_{1j} |x_i|^t|x_j|
 + Σ_{i,j=1}^{n} α_{1i}(α_{3j} + α_{3j}*) |x_i|^t|x_j| + Σ_{i=1}^{n} (α_{2i} − α_{2i}*) y_i
 + Σ_{i=1}^{n} α_{3i} max(y(x_i), y_L^+(x_i)) − Σ_{i=1}^{n} α_{3i}* min(y(x_i), y_L^−(x_i)) − ε Σ_{i=1}^{n} (α_{2i} + α_{2i}*)   (18)
n YU (x) = ( i=1 [(α − α∗2i ) + (α3i − α∗3i )]xi t x,
2i (19) n ∗ t i=1 [−α1i + (α3i + α3i )]|xi | |x|). By using kernel tricks mentioned in Section 2, we easily extend the linear interval regression into the nonlinear interval regression. We obtain the following dual optimization problem:
maximize −(1/2) Σ_{i,j=1}^{n} (α_{2i} − α_{2i}*)(α_{2j} − α_{2j}*) K(x_i, x_j)
 − (1/2) Σ_{i,j=1}^{n} (α_{3i} − α_{3i}*)(α_{3j} − α_{3j}*) K(x_i, x_j)
 − Σ_{i,j=1}^{n} (α_{2i} − α_{2i}*)(α_{3j} − α_{3j}*) K(x_i, x_j)
 − (1/2) Σ_{i,j=1}^{n} (α_{3i} + α_{3i}*)(α_{3j} + α_{3j}*) K(|x_i|, |x_j|)
 − (1/2) Σ_{i,j=1}^{n} α_{1i}α_{1j} K(|x_i|, |x_j|)
 + Σ_{i,j=1}^{n} α_{1i}(α_{3j} + α_{3j}*) K(|x_i|, |x_j|)
 + Σ_{i=1}^{n} (α_{2i} − α_{2i}*) y_i
 + Σ_{i=1}^{n} α_{3i} max(y(x_i), y_L^+(x_i)) − Σ_{i=1}^{n} α_{3i}* min(y(x_i), y_L^−(x_i)) − ε Σ_{i=1}^{n} (α_{2i} + α_{2i}*)   (20)

Here we should notice that the constraints 0 ≤ α_{1i}, α_{2i}, α_{2i}* ≤ γ and α_{3i}, α_{3i}* ≥ 0 are unchanged. Solving the above dual optimization problem determines the Lagrange multipliers α_{1i}, α_{ki}, α_{ki}*, k = 2, 3. For the nonlinear case, the difference from the linear case is that a and c are no longer explicitly given. However, they are uniquely defined in the weak sense by the inner products a^t φ(x) and c^t φ(|x|). Similar to the linear case, it is noted that c^t φ(|x|) is nonnegative. Therefore, the nonlinear interval regression function is given as follows:
Y_U(x) = ( Σ_{i=1}^{n} [(α_{2i} − α_{2i}*) + (α_{3i} − α_{3i}*)] K(x_i, x),  Σ_{i=1}^{n} [−α_{1i} + (α_{3i} + α_{3i}*)] K(|x_i|, |x|) ).   (21)
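Once the multipliers of problem (20) are available (from any quadratic programming solver; obtaining them is not shown here), evaluating Eqn. (21) is a weighted kernel sum. A sketch with names of our own:

```python
import numpy as np

def upper_model(x, X, a1, a2, a2s, a3, a3s, K):
    """Evaluate Y_U(x) = (center, radius) of Eqn. (21), a sketch.
    X : (n, m) training inputs; a1, a2, a2s, a3, a3s : (n,) multipliers;
    K : kernel function, e.g. gauss_kernel above."""
    kx  = np.array([K(xi, x) for xi in X])                   # K(x_i, x)
    kax = np.array([K(np.abs(xi), np.abs(x)) for xi in X])   # K(|x_i|, |x|)
    center = np.sum(((a2 - a2s) + (a3 - a3s)) * kx)
    radius = np.sum((-a1 + (a3 + a3s)) * kax)
    return center, radius
```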
4 Numerical Studies

In order to illustrate the performance of the interval regression estimation using SVM, four examples are considered. Two examples are for linear interval regression and the other two for nonlinear interval regression. First, we apply the linear interval regression to the officials number data (Lee and Tanaka[6]). The officials number data are from 32 mid-size cities of Korea (the number of officials is between 400 and 1000) in 1987.

Input data:
x1 = area (km²)
x2 = population (1000)
x3 = number of districts
x4 = annual revenues (10⁸ won)
x5 = rate of water sewerage service (%)
x6 = rate of water service (%)
x7 = rate of housing supply (%)

Output data:
y = number of officials.
We set the main trend of the number of officials in mid-size cities to 60%, so that p1 = 0.8 and p2 = 0.2. Our goal is to determine the lower approximation model YL(xi) and the upper approximation model YU(xi). Here C1, C2, C3 are found by quantile regression using SVM. In this data set, γ and ε are chosen as 30 and 0, which minimize Σi d^t|xi| + Σi |yi − b^t xi|. For the illustration of the nonlinear
interval regression, 51 values of x are generated from xi = 0.04(i − 1) − 1 and 51 values of y from yi = xi + 2exp(−16xi²) + 0.5ei, i = 1, …, 51, ei ∼ U(−1, +1). C1, C2, C3 are found by SVM-based quantile regression and the Gaussian kernel K(x, y) = exp(−‖x − y‖²/σ²) is used. For the quantile regression, the kernel parameter σ² and γ are chosen as 0.1 and 500 by 10-fold cross validation, respectively, while σ², γ and ε are chosen as 0.3, 200 and 0.05, which minimize Σi d^t|φ(xi)| + Σi |yi − b^tφ(xi)|. Figures 1 and 2 illustrate the estimation results for the officials number data and the simulated nonlinear data, respectively. In the figures the dotted lines show the main trend of the data and the solid lines show the interval regression including all the data. We also have two further real data sets, each with one input variable, so that they show the difference between the linear and nonlinear interval regression models clearly. Figure 3 illustrates the estimation result for the fisheggs data (DeBlois and Leggett[1]), where the number eaten and the density of eggs are known to be linearly related. For the estimation of the fisheggs data we use p1 = 0.8, p2 = 0.2, γ = 200, and ε = 0.1. Figure 4 illustrates the estimation result for the ultrasonic data available from http://www.itl.nist.gov/div898/strd/nls, where the ultrasonic response and the metal distance are known to be nonlinearly related. Here we use p1 = 0.8, p2 = 0.2, γ = 500, σ² = 1 for the quantile regression, and γ = 100, σ² = 1, ε = 0.05 for the interval regression. The Gaussian kernel is used for the ultrasonic data; the parameter values used for the fisheggs and ultrasonic data are obtained by 10-fold cross validation and by minimizing Σi d^t|φ(xi)| + Σi |yi − b^tφ(xi)|.
and ultrasonic
data are obtained from 10-fold cross validation and minimizing i dt |φ(xi )| + i |yi − bt φ(xi )| . 2.5
1000
2
900
1.5
800
1
700
0.5
y
number of officials
1100
600
0
500
−0.5
400
−1
300
−1.5 −1
0
5
10
15 city number
20
25
30
Fig. 1. Linear interval Regression 1
5
−0.8
−0.6
−0.4
−0.2
0 x
0.2
0.4
0.6
0.8
1
Fig. 2. Nonlinear interval Regression 1
Conclusions
We proposed interval regression analysis using the principle of SVM and quantile regression. We identify the main trend from the lower approximation model using the designated center-located proportion of the given data. The obtained lower approximation model can explain the relationship between the input and output
108
C. Hwang et al.
20
100
18
90
16
80
70 ultrasonic response
number eaten
14
12
10
8
60
50
40
6
30
4
20
2
0
10
0
10
20
30
40
50
60
70
density
Fig. 3. Linear interval Regression 2
0 0.5
1
1.5
2
2.5
3 3.5 metal distance
4
4.5
5
5.5
6
Fig. 4. Nonlinear interval Regression 2
data well, because the obtained model was not influenced by extreme points. The upper approximation model including all data was obtained on the basis of the lower approximation model. Through examples, we showed that the proposed algorithm derives the satisfying solutions and is attractive approach to interval regression. In particular, we can use this algorithm successfully when a linear model is inappropriate. By using two approximation models we could easily recognize observations whose outputs are extremely far from the main trend of the given data, which leads our proposed algorithm to be useful for the analysis of the data with possibilistic viewpoints.
Acknowledgement This work was supported by the Korea Research Foundation Grant(KRF-2004042-C00020).
References 1. DeBlois, E.M. and W.C. Leggett. : Functional response and potential impact of invertebrate predators on benthic fish eggs: analysis of the Calliopius laeviusculuscapelin (Mallotus villosus) predator-prey system. Marine Ecology Progress Series 69 (1991) 205–216 2. Gunn S. : Support vector machines for classification and regression. ISIS Technical Report, U. of Southampton (1998) 3. Hong D. and Hwang C. : Interval regression analysis using quadratic loss support vector machine. IEEE Transactions on Fuzzy Systems 13(2) April (2005) 229–237 4. Kecman V. : Learning and soft computing, support vector machines, neural networks and fuzzy logic moldes. The MIT Press, Cambridge, MA (2001) 5. Koenker R. and Bassett G. : Regression quantiles. Econometrica 46 (1978) 33–50 6. Lee H. and Tanaka H. : Upper and lower approximation models in interval regression using regression quantile techniques. European Journal of Operational Research 116 (1999) 653–666
Interval Regression Analysis Using Support Vector Machine
109
7. Mercer J. : Functions of positive and negative and their connection with the theory of integral equations. Philosphical Transactions of the Royal Society A (1909) 415– 446 8. Smola A. and Schoelkopf B. : On a kernel-based method for pattern recognition, regression, approximation and operator inversion. Algorithmica 22 (1998) 211–231 9. Tanaka H., Koyana K. and Lee H. : Interval regression analysis based on quadratic programming. In: Proceedings of The 5th IEEE International Conference on Fuzzy Systems, New Orleans, USA (1996) 325–329 10. Vapnik V. N. : The nature of statistical learning theory. Springer, New York (1995) 11. Vapnik, V. N.: Statistical learning theory. Springer, New York (1998) 12. Wang, L.(Ed.) : Support vector machines: theory and application. Springer, Berlin Heidelberg New York (2005)
An Approach Based on Similarity Measure to Multiple Attribute Decision Making with Trapezoid Fuzzy Linguistic Variables Zeshui Xu College of Economics and Management, Southeast University, Nanjing, Jiangsu 210096, China
[email protected] Abstract. In this paper, we investigate the multiple attribute decision making problems under fuzzy linguistic environment. We introduce the concept of trapezoid fuzzy linguistic variable and some operational laws of trapezoid fuzzy linguistic variables. We develop a similarity measure between two trapezoid fuzzy linguistic variables. Based on the similarity measure and the ideal point of attribute values, we develop an approach to ranking the decision alternatives in multiple attribute decision making with trapezoid fuzzy linguistic variables. We finally illustrate the developed approach with a practical example.
1 Introduction Multiple attribute decision making under linguistic environment is an interesting research topic having received more and more attention from researchers during the last several years [1-5]. In the process of multiple attribute decision making, the linguistic decision information needs to be aggregated by means of some proper approaches so as to rank the given decision alternatives and then to select the most desirable one. Bordogna et al. [1] developed a model within fuzzy set theory by linguistic ordered weighted average (OWA) operators for group decision making in a linguistic context. Herrera and Martínez [2] established a linguistic 2-tuple computational model for dealing with linguistic information. Li and Yang [3] developed a linear programming technique for multidimensional analysis of preferences in multiple attribute group decision making under fuzzy environments, in which all the linguistic information and real numbers are transformed into triangular fuzzy numbers. Xu [4,5] proposed some methods, which compute with words directly. In this paper, we shall investigate the multiple attribute decision making problems under fuzzy linguistic environment, in which the decision maker can only provide their preferences (attribute values) in the form of trapezoid fuzzy linguistic variables. In order to do that, this paper is structured as follows. In Section 2 we define the concept of trapezoid fuzzy linguistic variable and some operational laws of trapezoid fuzzy linguistic variables, and then develop a similarity measure between two trapezoid fuzzy linguistic variables. In Section 3 we develop an approach to ranking the decision alternatives based on the similarity measure and the ideal point of attribute values. We L. Wang and Y. Jin (Eds.): FSKD 2005, LNAI 3613, pp. 110 – 117, 2005. © Springer-Verlag Berlin Heidelberg 2005
An Approach Based on Similarity Measure to Multiple Attribute Decision Making
111
illustrate the developed approach with a practical example in Section 4, and give concluding remarks in Section 5.
2 Trapezoid Fuzzy Linguistic Variables In [6], Zadeh introduced the concept of linguistic variable, that is, whose values are words rather than numbers. The concept of linguistic variable has played and is continuing to play a pivotal role in decision making with linguistic information. Computing with words is a methodology in which the objects of computation are words and propositions drawn from a natural language, e.g., small, large, far, heavy, not very likely, etc., it is inspired by the remarkable human capability to perform a wide variety of physical and mental tasks without any measurements and any computations [7]. In the process of decision making with linguistic information, the decision maker generally provides his/her linguistic assessment information by using a linguistic scale. In [8], Xu defined a finite and totally ordered discrete linguistic scale as S = { s i | i = − t ,..., t } , where t is a non- negative integer, s i represents a possible value for a linguistic variable, and it requires that si < s j iff i < j . For example, a set of nine labels S could be:
S = {s − 4 = extremely poor , s −3 = very poor , s − 2 = poor , s−1 = slightly poor, s0 = fair, s1 = slightly good, s2 = good, s3 = very good, s4 = extremely good } In the process of information aggregating, some results may do not exactly match any linguistic labels in S . To preserve all the given information, Xu [8] extended the discrete label set S to a continuous label set S = { s α | α ∈ [ − q , q ]} , where
q ( q > t ) is a sufficiently large positive integer. If sα ∈ S , then sα is termed an original linguistic label, otherwise,
sα is termed a virtual linguistic label. In general,
the decision maker uses the original linguistic labels to evaluate alternatives, and the virtual linguistic labels can only appear in operation. Since the decision maker is characterized by his own personal background and experience, in some situations, the decision maker may provide fuzzy linguistic information because of time pressure, lack of knowledge, and their limited expertise related with the problem domain. In the following we define the concept of trapezoid fuzzy linguistic variable Definition 1. Let
sγ
s = [ sα , s β , sγ , sη ] ∈ S , where sα , s β , sγ , sη ∈ S , s β
indicate the interval in which the membership value is 1, with
s , respectively, then s
sα
and
sη
and indi-
cating the lower and upper values of is called a trapezoid fuzzy linguistic variable, which is characterized by the following member function (see Fig.1)
112
Z. Xu
⎧0, ⎪ ⎪ d ( sθ , s α ) ⎪ d (s , s ) , β α ⎪⎪ µ s (θ ) = ⎨1, ⎪ ⎪ d ( sθ , sη ) , ⎪ d ( s γ , sη ) ⎪ ⎪⎩ 0 ,
s − q ≤ sθ ≤ s α s α ≤ sθ ≤ s β s β ≤ sθ ≤ s γ s γ ≤ sθ ≤ s η s η ≤ sθ ≤ s q
S is the set of all trapezoid fuzzy linguistic variables. Especially, if any two of α , β , γ ,η are equal, then s is reduced to a triangular fuzzy linguistic variable; if
where
any three of variable.
α , β , γ ,η are
equal, then
s
is reduced to an uncertain linguistic
Fig. 1. A trapezoid fuzzy linguistic variable
Consider any three trapezoid fuzzy linguistic variables
s s = [ sα , s β , sγ , sη ],
s1 = [sα1 , s β1 , sγ 1 , sη1 ], s2 = [sα2 , sβ2 , sγ 2 , sη2 ] ∈ S , and suppose that λ ∈ [0,1] , then we define their operational laws as follows: 1) 2)
s1 ⊕ s2 = [sα2 , sβ2 , sγ 2 , sη2 ] ⊕[sα2 , sβ2 , sγ 2 , sη2 ] = [sα2 +α2 , sβ2 +β2 , sγ 2 +γ 2 , sη2 +η2 ]; λ s = λ[ sα , s β , sγ , sη ] = [ s λα , s λβ , s λγ , s λη ] .
In order to measure the similarity degree between any two trapezoid fuzzy linguistic values
s1 = [ sα1 , s β1 , sγ 1 , sη1 ] and s 2 = [ sα 2 , s β 2 , sγ 2 , sη2 ] ∈ S ,
duce a similarity measure as follows:
we intro-
An Approach Based on Similarity Measure to Multiple Attribute Decision Making
113
| α − α 1 | + | β 2 − β 1 | + | γ 2 − γ 1 | + | η 2 − η1 | s ( s1 , s 2 ) = 1 − 2 8q
(1)
where
s( s1 , s2 )
is called the similarity degree between
s1 and s 2 .
Obviously, the
greater the value of s ( s1 , s 2 ) , the closer s1 to s 2 . The properties of the similarity degree s ( s1 , s 2 ) are shown as follows:
2) s ( s1 , s 2 ) = 1 iff s1 = s 2 , that is α1 = α 2 , β1 = β 2 , γ 1 = γ 2 , η1 = η2 ; 1) 0 ≤ s ( s1 , s 2 ) ≤ 1 ;
3)
s( s1 , s 2 ) = s( s 2 , s1 ) .
Below we propose an operator for aggregating trapezoid fuzzy linguistic variables. Definition 2. Let
TFLWA : S n → S , if
TFLWAw (s1 , s 2 ,..., s n ) = w1 s1 ⊕ w2 s 2 ⊕ " ⊕ wn s n where w = (w1 , w2 ,..., wn ) is the weighting vector of the
i = 1,2,..., n,
n
∑w i =1
i
(2)
s i , s i ∈ S , wi ≥ 0,
= 1 then TFLWA is called a trapezoid fuzzy linguistic
weighted averaging (TFLWA) operator. Especially, if w = (1 n ,1 n ,...,1 n ) , then TFLWA operator is reduced to a trapezoid fuzzy linguistic averaging (TFLA) operator. Example 1. Assume
s1 =[s−3, s−2 , s0 , s2 ], s2 = [s−1 , s0 , s1 , s2 ] , s3 =[s0 , s1, s2 , s4 ], and
s4 = [ s−1 , s1 , s2 , s3 ] , w = (0.3,0.1,0.2,0.4) , then by the operational laws of trape-
zoid fuzzy linguistic variables, we have
TFLWAw (s1 , s 2 , s3 , s4 ) = 0.3×[s−3 , s −2 , s0 , s 2 ] ⊕ 0.1×[s−1 , s0 , s1 , s 2 ] ⊕ 0.2 × [ s −1 , s 0 , s1 , s 2 ] ⊕ 0.4 × [ s 0 , s1 , s 2 , s 4 ]
= [ s −1.2 , s − 0.2 , s1.1 , s 2.8 ] 3 A Similarity Measure Based Approach A multiple attribute decision making problem under fuzzy linguistic environment is represented as follows:
X = {x1 , x 2 ,..., x n } be the set of alternatives, and U = {u1 , u 2 ,..., u m } be the set of attributes. Let w = (w1 , w2 ,..., wm ) be the weight vector of attributes, Let
114
Z. Xu
where wi ≥ 0, i = 1,2,..., m,
= 1 . Suppose that A = (aij ) m×n is the fuzzy
m
∑w i =1
i
aij = [aij(α ) , aij( β ) , aij(γ ) , aij(η ) ] ∈ S is the attribute
linguistic decision matrix, where
value, which takes the form of trapezoid fuzzy linguistic variable, given by the deci-
x j ∈ X with respect to the attribute ui ∈U . Let a j = (a1 j , a2 j ,...,amj ) be the vector of the attribute values corresponding to the
sion maker, for the alternative
x j , j = 1,2,...,n . Definition 3. Let A = (a ij ) m×n be the decision matrix with trapezoid fuzzy linalternative
guistic variables, then we call I = ( I 1 , I 2 ,..., I m ) the ideal point of attribute
values, where I i = [ I i(α ) , I i( β ) , I i(γ ) , I i(η ) ] , I i(α ) = max{aij(α ) } , I i( β ) = max{a ij( β ) } , j
j
I
(γ ) i
(γ ) ij
= max{a } , I j
(η ) i
(η ) ij
= max{a } , i = 1,2,..., m . j
In the following we develop an approach to ranking the decision alternatives based on the similarity measure and the ideal point of attribute values: Step 1. Utilize the TFLWA operator
z j = TFLWAw (a1 j , a 2 j ,..., a mj ) = w1a1 j ⊕ w2 a2 j ⊕ " ⊕ wm amj , to
derive
the
overall
j = 1, 2 ,..., n values z j ( j = 1, 2,..., n )
(3) of
the
alternatives
x j ( j = 1,2,..., n) , where w= (w1, w2 ,...,wm ) is the weight vector of attributes. Step 2. Utilize the TFLWA operator
z = TFLWA w ( I 1 , I 2 ,..., I m ) = w1 I 1 ⊕ w2 I 2 ⊕ " ⊕ wm I m , j = 1, 2 ,..., n (4) to derive the overall value z of the ideal point I = ( I 1 , I 2 ,..., I m ) , where w = (w1 , w2 ,..., wm ) is the weight vector of attributes. Step 3. By (1), we get the similarity degree s ( z , z j ) between z and z j
( j = 1,2,..., n). Step 4. Rank all the alternatives x j ( j = 1,2,..., n) and select the best one in accordance with
s ( z , z j ) ( j = 1,2,..., n).
Step 5. End.
An Approach Based on Similarity Measure to Multiple Attribute Decision Making
115
4 Illustrative Example In this section, a decision making problem of assessing cars for buying (adapted from [2]) is used to illustrate the developed approach. Let us consider, a customer who intends to buy a car. Four types of cars
x j ( j = 1, 2, 3, 4) are available. The customer takes into account four attributes to decide which car to buy: 1) G 1 : economy, 2)
G2 : comfort, 3) G3 : design, and 4)
G 4 : safety. The decision maker evaluates these four types of cars x j ( j = 1, 2, 3, 4) under
the
Gi (i = 1,2,3,4)
attributes
(whose
weight
vector
is
w = (0.3,0.2,0.1,0.4) ) by using the linguistic scale S = {s − 4 = extremely poor , s −3 = very poor , s − 2 = poor , s−1 = slightly poor, s0 = fair, s1 = slightly good, s2 = good, s3 = very good, s4 = extremely good } and gives a fuzzy linguistic decision matrix as listed in Table 1.
Table 1. Fuzzy linguistic decision matrix A
x1 [s-3, s-2, s0, s1] [s-1, s0, s3, s4] [s0, s1, s2, s4] [s-2, s-1, s0, s2]
Gi G1 G2 G3 G4
x2
[s-2, s0, s1, s2] [s0, s1, s2, s3] [s-1, s0, s3, s4] [s-1, s0, s2, s3]
x3 [s-1, s1, s3, s4] [s-4, s-3, s-1, s1] [s1, s2, s3, s4] [s-2, s-1, s0, s1]
x4 [s0, s1, s2, s4] [s-1, s2, s3, s4] [s-2, s0, s1, s2] [s1, s2, s3, s4]
In the following, we utilize the approach developed in this paper to get the most desirable car: Step 1. From Table 1, we get the vector of the attribute values corresponding to the alternative x j ( j = 1,2,3,4) , and the ideal point as follows:
a11 = [ s − 3 , s − 2 , s 0 , s1 ] , a21 = [s−1 , s0 , s3 , s4 ] , a31 = [s0 , s1 , s2 , s4 ] , a 41 = [ s −2 , s −1 , s0 , s 2 ] . 2) a 2 = (a12 , a 22 , a32 , a 42 ) , where a12 = [s − 2 , s 0 , s1 , s 2 ] , a 22 = [ s0 , s1 , s 2 , s3 ] , a32 = [ s −1 , s 0 , s3 , s 4 ] , a 42 = [ s −1 , s0 , s 2 , s3 ] . 3) a3 = (a13 , a 23 , a 33 , a 43 ) , where a13 = [ s −1 , s1 , s 3 , s 4 ] , a 23 = [ s −4 , s −3 , s −1 , s1 ] , a33 = [ s1 , s 2 , s3 , s 4 ] , a 43 = [ s −2 , s −1 , s 0 , s1 ]
1) a1 = (a11 , a 21 , a31 , a 41 ) , where
116
Z. Xu
4) a 4 = (a14 , a 24 , a34 , a 44 ) , where a14 = [s 0 , s1 , s 2 , s 4 ] ,
a 24 = [s −1 , s 2 , s3 , s 4 ] , a34 = [ s − 2 , s0 , s1 , s 2 ] , a 44 = [s1 , s 2 , s3 , s 4 ] 5) I = ( I 1 , I 2 , I 3 , I 4 ) , where I 1 = [ s 0 , s1 , s 3 , s 4 ] , I 2 = [ s 0 , s 2 , s 3 , s 4 ] , I 3 = [ s1 , s 2 , s3 , s 4 ] , I 4 = [ s1 , s 2 , s 3 , s 4 ] . Step 2. Utilize (3) to derive the overall values z j ( j = 1,2,3,4 ) of the alternatives
x j ( j = 1,2,3,4) : z1 = [ s −1.9 , s −0.9 , s 0.8 , s 2.3 ] , z 2 = [ s −1.1 , s −0.4 , s1.8 , s2.8 ] z3 = [ s−1.8 , s −0.5 , s1 , s 2.2 ] , z 4 = [ s 0 , s1 .5 , s 2 .5 , s 3 .8 ] Step 3. Utilize (4) to derive the overall value z of the ideal point I : z = [ s 0 .5 , s1 .7 , s 3 , s 4 ] Step 4. By (1) (suppose that tween
z
and
q = 4 ),
z j ( j = 1,2,3,4) :
we get the similarity degree
s( z , z j )
be-
s( z , z1 ) = 0.72 , s ( z , z 2 ) = 0.83 , s( z , z 3 ) = 0.74 , s( z , z 4 ) = 0.96 Step 5. Rank all the alternatives x j ( j = 1,2,3,4) in accordance with s ( z , z j )
( j = 1,2,3,4) : x 4 ; x 2 ; x3 ; x1 and thus the best car is
x4 .
5 Concluding Remarks We have defined the concept of trapezoid fuzzy linguistic variable, and developed a similarity measure between two trapezoid fuzzy linguistic variables. Based on the similarity measure and the ideal point of attribute values, we have developed an approach to multiple attribute decision making with trapezoid fuzzy linguistic variables, which utilizes an aggregation operator called trapezoid fuzzy linguistic weighted averaging (TFLWA) operator to fuse all the given decision information corresponding to each alternative, and uses the similarity measure to rank the decision alternatives and then to select the most desirable one.
Acknowledgement This work was supported by China Postdoctoral Science Foundation under Project (2003034366).
An Approach Based on Similarity Measure to Multiple Attribute Decision Making
117
References 1. Bordogna, G., Fedrizzi, M., Pasi G.: A Linguistic Modeling of Consensus in Group Decision Making Based on OWA Operators. IEEE Transactions on Systems, Man, and Cybernetics-Part A 27 (1997) 126-132. 2. Herrera, F., Martínez, L.: An Approach for Combining Numerical and Linguistic Information Based on the 2-Tuple Fuzzy Linguistic Representation Model in Decision Making. International Journal of Uncertainty, Fuzziness and Knowledge -Based Systems 8 (2000) 539-562. 3. Li, D.F., Yang, J.B.: Fuzzy Linear Programming Technique for Multi-attribute Group Decision Making in Fuzzy Environments. Information Sciences 158 (2004) 263-275. 4. Xu, Z.S.: Uncertain Linguistic Aggregation Operators Based Approach to Multiple Attribute Group Decision Making under Uncertain Linguistic Environment. Information Sciences 168 (2004) 171-184. 5. Xu, Z.S.: Uncertain Multiple Attribute Decision Making: Methods and Applications, Tsinghua University Press, Beijing (2004). 6. Zadeh, L.A.: Outline of a New Approach to the Analysis of Complex Systems and Decision Processes. IEEE Transactions on Systems, Man, and Cybernetics 3 (1973) 28-44. 7. Zadeh, L.A.: From Computing with Numbers to Computing with Words-from Manipulation of Measurements to Manipulation of Perceptions. International Journal of Applied Mathematics and Computer Science, 12 (2002) 307-324. 8. Xu, Z.S. Deviation Measures of Linguistic Preference Relations in Group Decision Making. Omega, 33 (2005) 249-254.
Research on Index System and Fuzzy Comprehensive Evaluation Method for Passenger Satisfaction* Yuanfeng Zhou1, Jianping Wu2, and Yuanhua Jia3 1
School of Traffic and Transportation, Beijing Jiaotong University, 100044, Beijing, P. R. China
[email protected] 2 School of Civil Engineering and the Environment, University of Southampton, SO17 1BJ, Southampton, UK 3 School of Traffic and Transportation, Beijing Jiaotong University, 100044, Beijing, P. R. China
Abstract. Passenger satisfaction index (PSI) is one of the most important indexes in comprehensive evaluation of management performance and service quality of passenger transport corporations. Based on the investigations in China, the authors introduced the notion and method for passenger group division and the concept of index weight matrix, and made successful application for passenger satisfaction evaluation. Index weight matrix developed by applying AHP and Delphi methods gives satisfactory results. The paper ends with examples of using a fuzzy inference system for passenger satisfaction evaluation in Beijing railway station.
1 Introduction Satisfaction is the feeling of people formed by comparing perceivable quality with that expected for services or products. At present, competition for passengers in transport markets is fierce. It is important to develop a passenger satisfaction index system (PSI), and help transport enterprises efficiently modify and improve their service quality and management performance. The research team conducted by Claes Fornell developed a SCSB model in 1989[8]. In 1994, an ACSI model was developed in the USA and in 1999, an ECSI model was developed with the sponsorship of EOQ and EFQM. More attention was paid to the commodities quality and service in above models. In China, Zhao Ping assessed the China railway passenger satisfaction by making use of the least square method in 2001[6]. However, this model didn’t consider the difference made by education and the economic background of different passenger groups, which is believed to have significant influence on the accuracy and objectivity of the evaluation results [7]. In this paper, we give an index system for passenger satisfaction evaluation. We designed a series of questionnaires firstly and investigated in stations and trains. From the statistic data, the index matrix was plotted and then passenger satisfaction was *
This paper is based on the China Railway User Satisfaction Evaluation Project, which was supported by the S&T Development Plan(2002X024) of the Railway Department of China.
L. Wang and Y. Jin (Eds.): FSKD 2005, LNAI 3613, pp. 118 – 121, 2005. © Springer-Verlag Berlin Heidelberg 2005
Research on Index System and Fuzzy Comprehensive Evaluation Method
119
gained by fuzzy comprehensive evaluation. Then, a real-time emulator for PS evaluation was designed and the result could reflect the true situation properly.
2 Index for Passenger Satisfaction Evaluation Passenger satisfaction includes much element associated with a passenger starting from buying/booking a ticket to the completion of a trip. Based on the pre-research results, a passenger satisfaction evaluation index system was proposed in this study. The author proposed a two-level index system, the General Survey and the Specific Survey. A general survey will have the following characteristics: comprehensiveness, importance, generality and completeness. The indexes of a general survey includes security, economy, efficiency, convenience, comfort, service quality, staff service quality, monitoring, infrastructures, etc. The details can be found in the research of Jia and Zhou in 2003[7]. However, a specific survey is the survey, which focuses on a specific service or products of passenger transport.
3 Fuzzy Model for Evaluation of Passenger Satisfaction As the result of the evaluation of satisfaction degree of passenger transport has involved too many factors, to use the Fuzzy theory (L.A. Zadeh, 1965) for the evaluation of the satisfaction degree is possibly a better alternative. A fuzzy evaluation system can be represented as:
U = {u1 , u 2 ,"u i }, where u i is
one of the indexes in the investigation. According to rundle theory, we divide passenger’s subjective judgment into 5 levels from very unsatisfactory to very satisfactory, and the relevant score evaluation, i.e.: V
Pj (j=1,…5) was given for a quantitative
= {v1 , v2 , " v5 } . This paper used the method which combines
AHP with Delphi method, to decide the fuzzy set of weights of indexes: A = {a1 , a 2 ," ai } . The details of calculation process of the eigenvector
W = [W1 ,",Wi ,",Wn ] , λmax and C.R. can be seen in the study of Li, 1993[2]. T
Passenger satisfaction is the subjective judgment of passengers, which is closely related to many factors such as education background, occupancy, age, etc. The following are identified as the major elements to be considered for passenger group classification: Social attributes; Natural attributes; Geographical distribution; Nature of the travel (private or company paid travel); Travel frequency; Economic solvency and psychological enduring ability of passengers. Different weights were given to different individuals and groups. When a passenger i of group k submitted a questionnaire, the real-time evaluation system will calculate the passenger’s weight vector by AHP method according to the indexes importance ordering given by him. The synthesized index weight vector of passenger group k is:
120
Y. Zhou, J. Wu, and Y. Jia
1 n Si ⋅ Wi k ∑ n i =1 T Si = µ xi = (µA ,…µC )xi = µA Aij + µE Eij + µI Iij + µOOij + µF Fij + µCCij Ak =
(1) (2)
Where, µ is the weighing factor vector. For passenger i, Si shows his contribution to index weight distribution of passenger group k, Si is calculated based on his individual information. Aij : passenger i with age group j. Similarly, Eij is for education, Iij is for income, Oij is for occupation, Fij is for frequency of travel, Cij is for the charge source. T
Single Passenger Satisfaction Evaluation Model: m
5
f i = ∑∑ σ ij PjWi
(3)
i =1 j =1
σ ij = 1 Satisfaction degree j of index i was selected σ ij = 0 Satisfaction degree j of index i wasn’t selected m
s.t.
5
f i = ∑∑ σ ij PjWi ≤ 100 i=1,2,…,n
(4)
i =1 j =1
W = ( w1 , " , wm ) T ≥ 0
(5)
Fuzzy Comprehensive Evaluation for Passenger Satisfaction: Firstly, we develop the membership function of index and build the fuzzy inference matrix Rnm. In the matrix, rij is the percentage of passengers who give index i a judgment of j. In order to consider the different influences of factors, the weight matrix A was applied. This leads to a fuzzy inference matrix B which has the form: B =A*R
(6)
B = {b1 , b2 ,"bm } is the fuzzy evaluation results. We applied the following rules: k −1
Assuming bk
,
= max bi , then b = ∑ bi . If b ≤ 0.5 the passenger satisfaction is i =1
grade K, otherwise, it is grade K-1[3]. The passenger transport enterprises can take as a comprehensive index of the passenger satisfaction evaluation.
b
4 Application Example Taking Beijing Railway Station passenger transport service as a reference, following is the relevant parameters and evaluation result:
Research on Index System and Fuzzy Comprehensive Evaluation Method
λmax = 15.387 B = A*R =
C.I.= 0.1067
R.I. = 1.58
121
C.R. = 0.0675 < 0.10
0.299953 0.459185 0.089498 0.111913 0.039451 0.315847 0.459253 0.083172 0.103227 0.038501
,
where b12 = 0.758 b22 = 0.774. As far as the service quality of Beijing Station is concerned, the evaluation of the higher middle group and that of the lower group passengers are both of the similar degree of comparatively satisfied, although the lower group passengers give a slightly higher score. This result is relatively much more acceptable and proper than the evaluating result conducted by Zhao Ping(2001).
5 Conclusion and Recommendations Based on the results above, we have the following conclusions and recommendations: Index weight reflects the differences between passengers on evaluation because of their different backgrounds. It is important to produce a credible index weight matrix. The index weight systems estimated by the method of combination of AHP and Delphi has given satisfactory and reasonable results. The fuzzy inference system has been successfully used in passenger satisfaction evaluation, and the application in Beijing Stations has shown its efficiency and credibility. The real time evaluation systems can help on management performance of transport enterprises by understanding the requests of passengers of different groups quickly.
References 1. Jianping Huang, Jin-Ming Li: Application of Consumer Satisfaction and Satisfaction Index in China. Journal of Beijing Institute of Business, Beijing (2000) 2. Guogang Li, Bao-Shan Li: Management Systems Engineering. The Publish House of the People's University of China, Beijing (1993) 3. Yongbo Lv, Yifei Xu: System Engineering. The Publish House of the Northern Jiaotong University, Beijing (2003) 4. Yong-Ling Cen: Fuzzy Comprehensive Evaluation Model for Consumer Satisfaction. Journal of Liaoning Engineering and Technology University (2001) 5. Yuanfeng Zhou, Yuan-hua Jia: Research on Fuzzy Evaluation Method for Railway Passenger Satisfaction. Journal of Northern Jiaotong University, vol.27(5), Beijing (2003)64-68. 6. Ping Zhao: Guide to China Customer Satisfaction Index. The Publish House of China Criterion, Beijing (2003) 7. Yuanhua Jia, Yuanfeng Zhou, Sufen Li: Report of China Railway Passenger Satisfaction Testing Index System and Data-collection and Real-time Evaluation system (2003) 8. Claes Fornell: A National Customer Satisfaction Barometer: The Swedish Experience. Journal of Marketing Vol56 January (1992)6~21 9. L.A. Zadeh: Fuzzy Sets. Inf. and Control. 8 (1965)38-53
Research on Predicting Hydatidiform Mole Canceration Tendency by a Fuzzy Integral Model Yecai Guo1, Wei Rao1, Yi Guo2, and Wei Ma2 1
Anhui University of Science and Technology, Huainan 232001, China 2 Shanghai Marine University, Shanghai, China
[email protected] Abstract. Based on the Fuzzy mathematical principle, a fuzzy integral model on forecasting the cancerational tendency of hydatidiform mole is created. In this paper, attaching function, quantum standard, weight value of each factor, which causes disease, and the threshold value of fuzzy integral value are determined under condition that medical experts take part in. The detailed measures in this paper are taken as follows: First, each medical expert gives the score of the sub-factors of each factor based on their clinic experience and professional knowledge. Second, based on analyzing the feature of the scores given by medical experts, attaching functions are established using K power parabola larger type. Third, weight values are determined using method by the analytic hierarchy process[AHP] method. Finally, the relative information is obtained from the case histories of hydatidiform mole cases. Fuzzy integral value of each case is calculated and its threshold value is finally determined. Accurate rate of the fuzzy integral model(FIM) is greater than that of the maximum likelihood method (MLM) via diagnosing the history cases and for new cases, the diagnosis results of the FIM is in accordance with those of the medical experts.
1 Introduction Hydatidiform mole is a benign and deutoplasmic tumo(u)r and regarded as benign hydatidiform mole. When its organization transfers to adjacent or far apparatus or invades human body of womb and becomes greater in size, volume, quantity, or scope, uterine cavity may be wore out and this will result in massive haemorrhage of abdomen. When its organization invades intraliagamentary, uterine hematoma is brought, indeed, this organization may transfer to vagina, lung, and brain, patients may die. In this case, benign hydatidiform mole has been turn into the malignant tumor and is fatal, it is called as malignant hydatidiform mole[1]. Now that malignant hydatidiform mole is the result of benign hydatidiform mole canceration, it is very necessary for us to predict that if the benign hydatidiform mole can become malignant hydatidiform mole or not. This is a question for worthful discussion. L. Wang and Y. Jin (Eds.): FSKD 2005, LNAI 3613, pp. 122 – 129, 2005. © Springer-Verlag Berlin Heidelberg 2005
Research on Predicting Hydatidiform Mole Canceration Tendency by a FIM
123
In this paper, based on the fuzzy mathematic method, we discuss this the problem. The organization of this paper is as follows. In section 2, we establish the model for predicting the benign hydatidiform mole canceration. Section 3 gives a example to illustrate the effective of this model..
2 Hydatidiform Mole Canceration Prediction Model Assume that prediction set of hydatidiform mole canceration is given by T={T1,,T2}.
(1)
where T1 denotes hydatidiform mole non-canceration tendency, T2 represents hydatidiform mole canceration tendency. According to the correlative literatures[1][2] and the conditions of medical apparatus, assume that factor set S associated with the pathogenies of hydatidiform mole is written as
S = {S1 , S 2 , " , S10 } .
(2)
where S1, S2, S3, S4, S5, S6, S7, S8, S9, S10 are ages of the suffers, number of pregnancy, delitescence, enlarged rate of womb, womb size(weeks of pregnancy), hydatid mole size, highest titer value of HCG, Time Taken by turning result of pregnancy test based on crude urine into Negative Reaction(TTNR), pathological examination, and method for termination of pregnancy, respectively. The sub-factor set of the ith factor of pathogenies is denoted by
Si = {Si1 , Si 2 , " , SiL , " , SiK } . where i = 1, 2, " ,10; L = 1, 2, " , K , K represents number of sub-factor of the factor.
(3)
i th
2.1 Quantum Standard of Each Factor Each factor of the pathogenies of hydatidiform mole is quantized according to the given standard in Table 1 2.2 Conformation of Attaching Function of Each Factor Attaching function of each factor is given by following equations
3 5
µ ( S1 j ) = log(7 + S1 j )( j = 1, 2, " , 7) , 1 3
µ ( S 2 j ) = ln(6 + S 2 j )( j = 1, 2, 3)
124
Y. Guo et al.
1 µ ( S3 j ) = log(4 + S3 j )( j = 1, 2, " , 6) ,
2 3 µ ( S 4 j ) = log(8 + S4 j )( j = 1, 2, 3) , 4 4 µ ( S5 j ) = log(5 + S5 j )( j = 1, 2, 3) , 5 3 µ ( S6 j ) = log(15 + S 6 j )( j = 1, 2,3) , 5 3 µ ( Sij ) = log(10 + Sij )(i = 7,8; j = 1, 2, " , 7) , 5 5 µ ( S9 j ) = log(7 + S9 j )( j = 1, 2, 3) , 7 5 µ ( S10, j ) = log(10 + S10, j )( j = 1, 2, " , 7) . 9
(4)
where Sij is the j th sub-factor of the i th factor S i . Their quantum standard is given and their attaching function values are calculated by Eq.(4) and shown by Table 1. Table 1. Sub-factors of each factor and their scores and weights (1)
Factor
Sub-factor
Score Attaching function
44 (S17)
5 6 9 11 12 13 14
0.6475 0.6684 0.7225 0.7532* 0.7672 0.7806 0.7933
1(S21) 2~3 (S22) >3(S23)
1 3 7
0.6486 0.7324* 0.8549
0.1034
)(S41) Normal(=Month) (S42) Slow(<Month)(S43)
12 3 0
Womb size (S5)
0.1034
19 (S53)
7 8 12
Age (S1)
Number of Pregnancy (S2)
Delitescence (days) (S3)
Weight
0.0345
0.0345
0.9758* 0.7810 0.6773 0.8633 0.8911* 0.9844
Research on Predicting Hydatidiform Mole Canceration Tendency by a FIM
125
Table 1. Sub-factors of each factor and their scores and weighs (2)
Factor
Weigh Sub-factor t
Score
Attaching function
Womb size (S5)
0.1034
19 (S53)
7 8 12
0.8633 0.8911* 0. 9844
Hydatid mole Size (S6)
0.1725
Small grape particles (S61) Big grape particles(S62) All grape particles (S63)
17 1 10
0.9030 0.7224 0.8388*
50 (S71) 100 (S72) 200 (S73) 400 (S74) 800 (S75) 1600 (S76) 3200(S77)
2 6 7 12 14 15 18
0.6475 0.7225 0.7383 0.8055 0.8281* 0.8388 0.8682
7days(S81) 1~2weeks (S82) 2~3weeks (S83) 3~4weeks (S83) 1~2months (S84) 2~3months (S85) >3months (S86)
1 4 5 7 11 12 13
0.6248 0.6877 0.7057 0.7383* 0.7933 0.8055 0.8170 0.6036 0.8186* 0.9314
Highest titer Value(IU/L) of HCG (S7)
0.1725
TTNR (S8)
0.1034
Pathological examination (S9)
Low-grade hyperplasia (S91) Middle-grade hyperplsia(S92) 0.1379 Serve-grade hyperplasia (S93)
0 7 12 5
Method for termination of pregnancy (S10)}
Nature (S10,1) Induced labor success (S10,2) Induced labor unsuccess(S10,3) Uterine aspiration (S10,4) 0.0345 Uterine apoxesis (S10,5) Uterine apoxesis removel (10,6) Direct removel (S10,7)
10 12 6 7 1 2
0.6534 0.7228 0.7458 0.6689 0.6836 0.5786* 0.5995
2.3 Weight of Each Factor According to the clinic experience of medical experts and relative literatures, the weight value of each factor may be determined. Assume that weight set is expressed by
A = {a1 , a2 , " , a10 }
(5)
where ai is the weight value of the ith factor Si . The weight values are given by Table 1.
126
Y. Guo et al.
2.4 Computing Fuzzy Measure a( Si ) For λ -Fuzzy measure a( Si ) [3],[4], we have example,
a( S1 ) = a1 , a ( Si ) = a1 + a ( Si −1 ) + λ ⋅ ai ⋅ a ( Si −1 ) .
(6a) (6b)
When λ = 0 , according to the Eq.(6), we also have Using Eq.(7), fuzzy measure may be computed. 2.4 Computing Fuzzy Integral E Value In limited ranges, fuzzy integral is denoted by the symbol “E” and given by
(8) . where µ ( Si ) is a attaching function of the factor S i and monotone. Weight value ai is arrayed according to the depressed order of µ ( Si ) . 2.6 Threshold of Fuzzy Integral E Based on the clinic experience of the medical experts, the threshold of fuzzy integral E value is determined and shown in Table 2. Table 2. Threshold of fuzzy integral E
Hydatidiform mole canceration tendency Threshold of fuzzy integral(E)
Canceration pqi } , # is the set numbers, n −1
ND LLD q ∈ {1, " , n} . In the computing process, if #{x } = 1 or #{x } = 1 , then the process is over. The alternative with the largest non-dominance degree or the largest dominance degree is preferred to others.
3 Consensus Measures and Feedback Mechanism 3.1 Consensus Measures Definition 3.1. Let V c = {v1c , ", vnc } and V i = {v1i , " , vni } be the collective and
individual (i.e., expert
ei ) ordered vector of alternatives, respectively, where v cj and
v ij are the position of alternative x j in that collective ordered vector and individual
134
Z.-P. Fan and X. Chen
are the position of alternative x j in that collective ordered vector and individual (i.e.,
()
expert ei ) ordered vector, then consensus degree Ci x j and linguistic consensus 2
()
degree Q [Ci x j ] of each expert
ei for each alternative x j are defined according to
the following expressions:
()
Ci x j = 1 − v cj − v ij (n − 1) ,
(5a)
Q 2 [Ci ( x j )] = Q 2 [1 − v cj − v ij
(5b)
(n − 1)] .
In which, for example, if collective and individual (i.e., expert
e3 ) ordered vector
of alternatives are V c = {x3 , x1 , x2 , x4 } and V 3 = {x2 , x3 , x4 , x1} , linguistic quantifier
() C(x)= 1 − 2 − 4 /( 4 − 1) = 1 − 2 / 3 = 1 / 3 , Q [C(x)] = s = EU . Definition 3.2. If Q [C(x )] = s , where s is the largest element in set S, then consensus measure C(x )of expert e for alternative x reaches the consensus level. Or else consensus measure C(x ) of expert e for alternative x does not reach the ()
“most ” with the pair (0.3, 0.8), then C3 x1 and Q 2 [C3 x1 ] are obtained as: 2
3
1
3
1
1
2
i
i
j
T
T
j
i
i
j
j
i
consensus level.
j
()
Definition 3.3. Consensus degree Ci x j and linguistic consensus degree Q 2 [C ( x j )]
of all experts on alternative x j are defined according to the following expressions: m
m
i =1
i =1
C ( x j ) = [ ∑ Ci ( x j )] m = [∑ (1 − v cj − v ij (n − 1))] m , m
m
i =1
i =1
Q 2 [C ( x j )] = Q 2 [( ∑ Ci ( x j )) m] = Q 2 [( ∑ (1 − v cj − v ij (n − 1)) m] .
(6a) (6b)
Definition 3.4. If Q 2 [C ( x j )] = sT , where sT is the largest element in set S , then
consensus measure C ( x j ) of all experts on alternative x j reaches the consensus level. Or else consensus measure C ( x j ) of all experts on alternative x j does not reach the consensus level. Definition 3.5. Consensus degree C X and linguistic consensus degree Q 2 [C X ] of all experts over the alternative set are defined according to the following expressions: n
n m
j =1
j =1i =1
C X = [ ∑ C ( x j )] n = [ ∑ ∑ (1 − v cj − v ij (n − 1))] nm , n
n m
j =1
j =1 i =1
Q 2 (C X ) = Q 2 [( ∑ C ( x j )) n] = Q 2 [( ∑ ∑ (1 − v cj − v ij (n − 1))) nm] .
(7a) (7b)
Consensus Measures and Adjusting Inconsistency of Linguistic Preference Relations
135
Definition 3.6. If Q 2 [C X ] = s T , where sT is the largest element in set S , then the
consensus measure C X of all experts over the alternative set reaches the consensus level. Or else consensus measure C X of all experts over the alternative sst does not reach the consensus level. Theorem 3.1. If the consensus measure of expert ei (i = 1,", m) on alternative x j
reaches the consensus level, then consensus measure on alternative x j of all experts reaches the consensus level.
,
Proof. First, using (5b), Q 2 [Ci ( x j )] = sT , ∀i ∈ {1 ", m} . Using (4b), then Ci ( x j ) > b ,
,
∀i ∈ {1 ", m} .
Using
m
m
i =1
i =1
(6a),
we
have
m
C ( x j ) = [∑ (1 − v cj − v ij (n − 1))] m i =1
= [ ∑ Ci ( x j )] m > ( ∑ b) m = b , i.e., C ( x j ) > b . Hence, Q 2 [C ( x j )] = C . By Definition 3.4, we know that consensus measure on alternative x j of all experts reaches the consensus. Theorem 3.2. If consensus measure of all experts on alternative x j ( j = 1,", n)
reaches the consensus level, then consensus measure C X of all experts reaches the consensus level.
,
Proof. First, using (6b), Q 2 [C ( x j )] = sT , ∀j ∈ {1 ", n} . Using (4b), then C ( x j ) > b ,
,
∀j ∈ {1 ", n} .
Using
n
n
j =1
j =1
(7a),
we
have
n m
C X = [ ∑ ∑ (1 − v cj − v ij (n − 1))] nm j =1i =1
= [ ∑ (C ( x j ))] n > ( ∑ b) n = b . i.e., C X > b . Hence, Q 2 [C X ] = C . By Definition
3.6, consensus measure of all experts over the alternative set reaches the consensus level. Remark 3.1. Note that definitions 3.2, 3.4 and 3.6, as in Definitions 3.2, 3.4 and 3.6, the required consensus levels are difference if linguistic relative quantifier are difference based on the difference parameters (a, b). The required consensus levels are usually decided by a leading decision maker. 3.2 Feedback Mechanism
When consensus measure C X has not reached the required consensus level, i.e., C X < C , then the experts’ opinions should be improved or modified. In this pape, the rule of feedback mechanism is developed below. (1) If v cj − v ij < 0 , then move forward position of alternative x j for the ith expert, i.e., number m j about x j ; xi (i = 1,", n) should be increased in the above Pi [9].
136
Z.-P. Fan and X. Chen
(2) If v cj − v ij = 0 , do not changed position of alternative x j for the ith expert. (3) If v cj − v ij > 0 , then move backward position of alternative x j for the ith expert, i.e., numbers m j about x j ; xi (i = 1,", n) should be decreased in the above Pi [9]. An adjusting approach is expressed in the following steps. Step 1. Consensus measure is calculated by (5a)-(7a). If Q 2 [C X ] = sT , then go Step 5. Or else go next step. Step 2. Let v cj − v ij = max v cj − v ij , then the linguistic preference relation of the *
*
*
1 {A}− {A} [A]R R−4 A {A}− = [A]R 1)2 F (S)
s:
?,
d:
F (S) × F (S) → [0, 1] (A, B) → s(A, B)
1*2
F (S) × F (S) → [0, 1] (A, B) → d(A, B)
1+2
631
7% 8 '% 8 9% 7
∀(A, B) ∈ F (S) × F (S) d(A, B) = 1 − s(A, B),
1-2
d TS ' F (S) > α− {A} {A}−α = {A∗ ∈ F (S)|d(A∗ , A) ≤ α} = {A∗ ∈ F (S)|s(A∗ , A) ≥ 1 − α},
{A} {A}− = {A∗ ∈ F (S)|d(A∗ , A) ≤ 0} = {A∗ ∈ F (S)|s(A∗ , A) ≥ 1}.
C1T
% C1T A → B A∗ B∗
1.2
A → B ∈ ∅ A∗ ∈ F (S) B ∗ ∈ F (S) % A∗ ∈ {A}− A∗ A → B B ∗ ∈ {B − } ∗ A A → B % A∗ ∈/ {A}− A∗ A → B A∗ ' % % A∗ A → B ( ' ( % C1T @ ' ' A∗ → B ∗ A → B ' % B ∗ f ∈ F }
(1)
In fact, Category membership set is the extension of fuzzy set, where value domain is [0, 1] instead of ℜ. The reason that we adopt ℜ is that it is a more nature way and more efficient than [0, 1] in text categorization. It is obvious that a category c with a membership function specifies a category membership set distinctly. From definition 1, we know that the most vital step for constructing category membership sets is finding suitable membership functions. Luckily, a large number of statistical methods used for feature selection have been proposed in text categorization [7][8]. These statistical methods select words useful to text categorization from large word set by measuring the relativity of words and categories. Therefore, these statistical methods are also suitable for acting as membership functions. In the following sections, we will descript these methods in detail. 2.2 MI Mutual information (or MI for short) is a criterion commonly used in statistical language modeling of word associations, feature selection, and related applications [8][9][10]. Let fi be a word in F, cj be a category in C, and P(A) be the probability that event A occurs. The mutual information criterion between fi and cj is defined to be
MI ( f i , c j ) = log
P( f i ∧ c j )
(2)
P ( f i ) × P (c j )
For the sake of computation, we can estimate MI(fi, cj) by using
MI ( f i , c j ) ≈ log
1
ℜ is the set of real numbers.
X ×N ( X + Z ) × P( X + Y )
(3)
376
Z.-H. Deng, S.-W. Tang, and M. Zhang
where X is the number of documents that contain word fi and belong to Dj ( that is to say, they are labeled category cj); Y is the number of documents that contain fi but don’t belong to Dj; Z is the number of documents that don’t contain fi but belongs to category cj. MI(fi, cj) has a natural value of zero if fi and cj are independent. A characteristic of mutual information is that the score is strongly influence by the marginal probabilities of words, as is showed in following equivalent formula
MI ( f i , c j ) = log P( f i | c j ) − log P( f i )
(4)
If words have an equal conditional probability P(fi | cj), these words, which occurs rarely, will have a higher score than common words. 2.3 OddsRatio OddsRatio is commonly used in information retrieval where the problem is to rank out documents according to their relevance for the positive class with using occurrence of different words as features. It was first used as feature selection methods by Mladenic[7]. Mladenic have compare six feature scoring measures with each other on real Web documents. He found that OddsRation showed the best performance. This shows that OddsRatio is best for feature scoring and may be very suitable for acting as membership function. If one considers the two-way contingency table of a word fi and a category cj, where X is the number of documents that contain fi and belong to Dj, Y is the number of documents that belong to Dj, U is the number of documents that contain fi but don’t belong to Dj, V is the number of documents that don’t belong to Dj, then the OddsRatio between fi and cj is defined to be
OddsRatio ( f i , c j ) = log
P ( f i | c j )( 1 − P ( f i | ¬ c j ))
(5)
(1 − P ( f i | c j ) P ( f i | ¬ c j )
and is estimated using ⎛ X U ⎛ X U ⎞⎞ OddsRatio ( f i , c j ) ≈ log ⎜⎜ (1 − ) / ⎜ (1 − ) ⎟ ⎟⎟ Y V Y V ⎠⎠ ⎝ ⎝
(6)
2.4 CHI
The CHI (Abbreviation for χ2 statistic) measures the lack of independence between a word and a category and can be compared to the χ2 distribution with one degree of freedom to judge extremeness. Given a word fi and a category cj, The CHI of fi and cj is given by
CHI ( f i , c j ) =
N × ( XV − UY ) 2 ( X + U ) × (Y + V ) × ( X + Y ) × (U + V )
(7 )
where N is the total number of training documents; A is the number documents that contain fi and belong to Dj; Y is the number of documents that contain fi but don’t belong to Dj; U is the number of documents that belong to Dj but don’t contain fi; V is the number of documents that neither contain fi nor belong to Dj . The CHI(fi, cj) has a value of zero if fi and cj are independent. On the other hand, the CHI(fi, cj) has the
An Efficient Text Categorization Algorithm Based on Category Memberships
377
maximal value of N if fi and cj either co-occur or co-absent. The more fi and cj are correlative the more the CHIij is high and vice versa. Yang [8] reported that CHI is one of the most effective feature selection methods. Therefore, CHI may be a good choice as a membership function. As mentioned in [11], a major weakness of CHI is that it is not reliable to low-frequency words.
3 CMB: A Category-Membership-Based Algorithm Before describing CMB, we first present some basic concepts. These concepts include document representation and similarity of documents and categories. 3.1 Basic Concepts
For classifying documents efficiently, documents should represent by some models that are suitable to be processed by computer. In this paper, we adopt the classic models called word bag. This model considers that each document is described by a set of representative words. For differentiating the importance of each word for describing the document semantic contents, a weight is associated with each word of a document. Definition 2. (word bag model with weights) Let d be a document. Document d is represented as the set of tuple of word f and its weight in d, which is as follows:
d = {< f , freq( f , d ) > f ∈ F }
(8)
Word frequency2 is known to provide one good measure of how well that words describes the document contents [12]. However, it is well known that words in long documents have high word frequencies than words in short documents. If we use word frequency as freq(f, d) directly, it would be unfair for words in short documents. Therefore, we measure freq(f, d) with the normalized frequency of word f in d. That is, freq(f, d) is given by
freq( f , d ) =
wd ( f , d ) max{wd ( f i , d ) | f i ∈ F }
(9)
where the maximum is computed over all words in F, wd(f, d) is the word frequency of f in d. That is, wd(f, d) is equal to the number of times that f occurs in d. Given a document d represented by word bag model with weights and a category c with a membership function µc, the similarity of d and d is defined to be n
sim(d , c) = ∑ freq( f i , d ) × µ c ( f i )
(10)
i =1
3.2 Algorithm Description
Given a training document set TD = ∪Di , where Di is the set of documents that are labeled category ci, and a membership function, such as MI, OddsRatio, or CHI, we 2
The word frequency of a word f in a document d is the number of times that f occurs in d.
378
Z.-H. Deng, S.-W. Tang, and M. Zhang
can collect all words and compute the category membership set of each category. For an unlabeled document, we get the similarity of this document and each category by formula (10). By sorting categories in similarity score descending order, we obtain a ranked list of categories. Since documents of our data set (Newsgroup_18828) have only one category, we assign the only top ranking category to the unlabeled document. Above discussions are the core ideas of CBM. Constructing CBM classifier includes two components: one for learning classifiers and the other for classifying unlabeled documents. For the sake of description, we label the former Training_Phase and the latter Classifying_Phase. The pseudo-code for CBM is shown as follows. Training_Phase: Input: training documents set TD = ∪ Di, 1≤ i ≤ m, Di = {document d | d is labeled category ci}, a membership function µ. // µ may be MI, OddsRatio, or CHI Output: word set F = {f1, f2, …, fn}; CMS = {CMS1, CMS2, …, CMSm}, where CMSi is the category membership set of ci.
Step 1. F = ∅, CMS = ∅. Step 2. Scan the training documents set TD once. Collect words in TD. Insert all these words into F. Let F = {f1, f2, …, fn}. Step 3. For i = 1 to m do: Construct CMSi, the category membership set of ci, according to µ; CMS = CMS ∪ {CMSi}; Classifying_Phase: Input: F, CMF, and an unlabelled document dnew. Output: the category of dnew.
Step 1. Sim_Scores = ∅; Scan dnew once. Compute the normalized frequency of each word fi (∈ F) in dnew. Step 2. For i = 1 to m, do: Compute Simi, the Similarity of dnew and ci, according to formula (10); Sim_Scores = Sim_Scores ∪ {Simi}; Step 3. Sort C = {c1, …, cm} in similarity score descending order as OC. Let cx be the only top category in the ranking list OC. cx is outputted as the category of dnew.
4 Experimental Evaluation To assess the effectiveness of our algorithm, we conduct experiments to evaluate the performance of the CBM by comparing it with k-NN and Naïve Bayes on text collections Newsgroup_18828. 4.1 k-NN
k-NN stands for k nearest neighbor classification, a well known statistical method which has been intensively studied in machine learning for over four decades [13].
An Efficient Text Categorization Algorithm Based on Category Memberships
379
k-NN has been applied to text categorization since the early stages of the research. It is one of the top-performing methods among algorithms used for text categorization [6][14][15]. The idea of k-NN algorithm is simple: given a new unlabeled document, the algorithm finds the top k nearest neighbors among the training documents, and predict the category of the unlabeled document with the categories of the k neighbors. Let d, which was represented as a vector of words with their weights, be an unlabeled document. The k-NN algorithm assigns a similarity score to each candidate category ci using the following formula
s(d , ci ) =
∑ cos(d , d )
di∈kNN ∩ Di
i
(11)
where kNN is the set of k nearest neighbors of document d, and Di is the set of training documents labeled category ci. By sorting the scores of all candidate categories, we obtain a ranked list of categories for document d. The only top ranking category is assigned to d. 4.2 Naïve Bayes
Naïve Bayes are also commonly used in text categorization [2] [6] [14]. The basic idea is to use the joint probabilities of words and categories to estimate the probabilities of categories given a document. The naïve part of such a model is the assumption of word independence. C = {c1, …, cm} be the set of categories, F = {f1, …, fn} be word set, and TD = ∪Di be the set of training documents with labeled categories, where Di is the set of documents that are labeled category ci. Given a new unlabeled document d, the Naïve Bayes algorithm assigns d to a category c* as follows: n
c* = arg max P(c j )∏ P( f i | c j ) wf ( fi ,d ) c j ∈C
(12)
i =1
where P(cj) is the priori probability of category cj and P(fi | cj) is the conditional probability of word fi given category cj. wf(fi, d) is the same as mentioned in section 3.1. By training documents, P(cj) and P(fi | cj) can be estimated as follows:
P (c j ) ≈
P( fi | c j ) ≈
nj
(13)
N
nij + 1 nj + n
(14)
where N is the total number of training documents in TD, nj is the number of documents in Dj, nij is the number of documents that contain word fi and belong to Dj, n is the total number of words in F.
380
Z.-H. Deng, S.-W. Tang, and M. Zhang
4.3 Text Collection
The Newsgroup_188283 text collection, collected by Jason Rennie, contains 18828 documents evenly divided among 20 UseNet discussion groups. 14120 documents (about 75%) were used for training documents and the remaining 4708 documents (about 25%) for test documents. We conduct experiments by using fourfold crossvalidation method. That is, Newsgroup_18828 is split into four subsets, and each subset was used once as test documents in a particular run while the remaining subsets were used as training documents for that run. The split into training and test documents for each run was the same for all algorithms. Finally, the results of experiments are averages of four runs. For each run, we used a stop word list to remove common words, and the words were stemmed using Porter’s suffix-stripping algorithm [16]. Furthermore, we also skip rare frequency words that occur in less than three documents. After above text preprocessing operation, the number of words from training documents is about 26,000. 4.4 Performance Measures
In terms of the performance measures, we followed the stand recall, precision and F1 measures. Recall (r) is defined to be the radio of correct positive predictions by the system divided by the total number of positive examples. Precision (p) is defined as the radio of correct positive predictions by the system divided by the total number of positive predictions by the system. Recall and precision reflect the different aspect of classification performance. Usually, if one of the two measures is increasing, the other will decrease. To obtain a better measure describing performance, F1, which first introduced by van Rijsbergen [17], is adopted. It combines recall and precision in the form as follows:
F1 =
2rp r+ p
(15)
For evaluating an average performance across categories, we used the microaveraging method and macro-averaging method [15]. micro-averaging method counts the decisions for all the categories in a joint pool and computer the global recall, precision and F1 for the global pool. macro-averaging method first computes recall, precision and F1 for each category, and then averages over categories as a global measure of the average performance over all categories. In this paper, we use microaveraging and macro-averaging F1for performance measures. 4.5 Primary Results
Table 1 summarizes the categorization results of our experiments. CMB-MI means that MI is selected as the membership function, and so do CMB-CHI and CMBOddsRatio. For the parameter k in k-NN, which is the number of nearest neighbor, the values of 5, 15, 30, 60, 90 were tested. The best result select as the final performance of k-NN. 3
http://www.ai.mit.edu/~jrennie/20Newsgroups/20news-18828.tar.gz.
An Efficient Text Categorization Algorithm Based on Category Memberships
381
The micro-level analysis suggests that Naive Bayes > CMB-OddsRatio > k-NN > CMB-MI > CMB-CHI. While the macro-level analysis suggests that CMB-OddsRatio > k-NN > {Naive Bayes, CMB-MI} > CMB-CHI. Combining the micro-level scores and macro-level score with each accounting for 50%, we have performance ranking as follow: CMB-OddsRatio > Naive Bayes > k-NN > CMB-MI > CMB-CHI. The bad performanc of CMB-CHI may be that its weakness mentioned in section 2.4. That is, CHI is not reliable to low-frequency words. As stated in [12], lowfrequency words are assumed to useful for distinguishing relevant documents from non-relevant documents. This means that low-frequency words contain relatively rich information and hence play important roles in classifying text documents. The other way round, low-frequency words have higher scores than common words in MI. This makes MI to be coincident with the assumption in information retrieval. Therefore, MI performs far better than CHI when they associate with CMB to classify documents. OddsRatio measure the membership of a word f to a category c by considering the times that f occurs in positive documents (documents labeled c) and negative documents (no-c documents). This makes OddsRatio more rational. Hence, OddsRatio achieves the best performance. Our experimental results also show that membership functions play a vital importance in CMB for classifying documents. Table 1. Performances summary
                      k-NN    Naive Bayes   CMB-MI   CMB-CHI   CMB-OddsRatio
micro-averaging F1    0.82    0.829         0.816    0.682     0.824
macro-averaging F1    0.812   0.811         0.811    0.66      0.818
5 Conclusion

In this paper, we have presented CMB, a simple but efficient text categorization algorithm based on category memberships. Comparison experiments on the Newsgroup_18828 text collection show the effectiveness of our method. As future work, additional research is needed to find better methods for measuring category memberships. In addition, we will investigate combinations of membership functions. The intuition is that different membership functions measure membership scores in qualitatively different ways, which suggests that they potentially offer complementary information and that a proper combination of them could be more effective than any one alone. Plentiful results on classifier combination would provide valuable information.

Acknowledgement. This research is supported by the National Natural Science Foundation of China under grant No. 60473072. Any opinions, findings, and
conclusions or recommendations expressed in this paper are the authors' and do not necessarily reflect those of the sponsor. We are also grateful to the anonymous reviewers for their comments.
References

1. Yang, Y.: Expert Network: Effective and Efficient Learning from Human Decisions in Text Categorization and Retrieval. In: 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (1994) 13-22
2. McCallum, A., Nigam, K.: A Comparison of Event Models for Naive Bayes Text Classification. In: AAAI-98 Workshop on Learning for Text Categorization (1998)
3. Apte, C., Damerau, F., Weiss, S.: Text Mining with Decision Rules and Decision Trees. In: Proceedings of the Conference on Automated Learning and Discovery, Workshop 6: Learning from Text and the Web (1998)
4. Ng, H.T., Goh, W.B., Low, K.L.: Feature Selection, Perceptron Learning, and a Usability Case Study for Text Categorization. In: 20th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (1997) 67-73
5. Schapire, R.E., Singer, Y.: BoosTexter: A Boosting-Based System for Text Categorization. Machine Learning 2/3 (2000) 135-168
6. Joachims, T.: Text Categorization with Support Vector Machines: Learning with Many Relevant Features. In: Proceedings of the 1998 European Conference on Machine Learning (1998) 137-142
7. Mladenic, D., Grobelnik, M.: Feature Selection for Classification Based on Text Hierarchy. In: Working Notes of Learning from Text and the Web, Conference on Automated Learning and Discovery (1998)
8. Yang, Y., Pedersen, J.P.: A Comparative Study on Feature Selection in Text Categorization. In: Proceedings of the 14th International Conference on Machine Learning (1997) 412-420
9. Church, K.W., Hanks, P.: Word Association Norms, Mutual Information and Lexicography. In: Proceedings of the 27th ACL (1989) 76-83
10. Fano, R.: Transmission of Information. MIT Press, Cambridge, MA (1961)
11. Dunning, T.E.: Accurate Methods for the Statistics of Surprise and Coincidence. Computational Linguistics 1 (1993) 61-74
12. Ricardo, B.Y., Berthier, R.N.: Modern Information Retrieval. ACM Press (1999)
13. Dasarathy, B.V.: Nearest Neighbor (NN) Norms: NN Pattern Classification Techniques. McGraw-Hill Computer Science Series. IEEE Computer Society, Los Alamitos, California (1991)
14. Yang, Y.: An Evaluation of Statistical Approaches to Text Categorization. Journal of Information Retrieval 1/2 (1999) 67-88
15. Yang, Y., Liu, X.: A Re-examination of Text Categorization Methods. In: 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (1999) 42-49
16. Porter, M.F.: An Algorithm for Suffix Stripping. Program 3 (1980) 130-137
17. van Rijsbergen, C.J.: Information Retrieval. Butterworths, London (1979)
The Integrated Location Algorithm Based on Fuzzy Identification and Data Fusion with Signal Decomposition

Zhao Ping and Haoshan Shi

Institute of Electric Information, Northwestern Polytechnical University, Xi'an, Shaanxi, 710072, P. R. China
[email protected],
[email protected]

Abstract. In this paper, an efficient integrated location algorithm based on fuzzy identification and data fusion is presented in order to carry out precise and reliable position estimation and to improve location accuracy and efficiency. In addition to the selectivity advantage gained by combining different location parameters, the use of an integrated location algorithm based on data fusion may increase location integrity, with which a robust and anti-interference result can be obtained.
1 Introduction

Network location signals are fuzzy and unsteady most of the time and vary widely due to all kinds of channel noises and interferences [1]. Moreover, in the complex circumstances of urban areas, strong multipath and non-line-of-sight (NLOS) interference adds further uncertainty and complexity to position estimation. In order to model a hybrid location algorithm, we propose a systematic methodology based on fuzzy identification and data fusion. The algorithm conveys three distinct features: an improved fuzzy identification model; a selected signal stratification decomposition; and an efficient algorithm with integrated location estimation based on data fusion of TOA, TDOA, and AOA. The location algorithm is thus no longer restricted to ordinary signal parameter processing for user position estimation.
2 The Fuzzy Identification Model

Fuzzy stratification identification of interfered signals has long been a focus of research. There are many sorts of fuzzy identification algorithms with different methods, but they are almost the same in their main course, apart from their own typical attribute and correlation characteristics. As an identification method for nonlinear location signals, which are non-stationary and vary over a wide range with low signal-to-noise ratio (SNR) and channel interference, we use two forms of the fuzzy identification course, direct and indirect
fuzzy identification; exploit selected signal processing with stratification and decomposition; and carry out parameter estimation based on data fusion and logical reasoning judgment.

2.1 Direct Fuzzy Identification

Assume that there are n fuzzy subsets on the universe of discourse U:
$\{\tilde{A}_1, \tilde{A}_2, \ldots, \tilde{A}_n\}$   (1)

If for every $\tilde{A}_i$ there is a membership function $\mu_{\tilde{A}_i}(u_0)$, then for any element $u_0 \in U$ we can determine its attribution according to the maximum-membership principle below. If it satisfies

$\mu_{\tilde{A}_i}(u_0) = \max(\mu_{\tilde{A}_1}(u_0), \ldots, \mu_{\tilde{A}_n}(u_0))$   (2)

then we take $u_0$ as belonging to $\tilde{A}_i$; i.e., $u_0$ should be assigned to model $\tilde{A}_i$.
2.2 Indirect Fuzzy Identification

Assume that $\tilde{A}_1, \tilde{A}_2, \ldots, \tilde{A}_n$ are n fuzzy subsets on the universe of discourse U, and that an object $\tilde{B}$ awaiting identification is also a fuzzy subset on U. The identification course can be carried out as below, according to the nearest-choice principle. If there is a fuzzy set $\tilde{A}_i$ such that

$(\tilde{B}, \tilde{A}_i) = \max\{(\tilde{B}, \tilde{A}_1), (\tilde{B}, \tilde{A}_2), \ldots, (\tilde{B}, \tilde{A}_n)\}$   (3)

then $\tilde{B}$ and $\tilde{A}_i$ are closest to each other; that is, $\tilde{B}$ may be attached to $\tilde{A}_i$. In Eq. (3), $(\tilde{B}, \tilde{A}_j)$ denotes the adjacency degree between $\tilde{B}$ and $\tilde{A}_j$.
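To make the two principles concrete, here is a minimal sketch (not from the paper): direct identification follows Eq. (2) directly, while the adjacency degree used for Eq. (3) is one common lattice closeness measure, assumed here for illustration over a discretized universe.

```python
# Minimal sketch of the two identification principles.

def direct_identify(u0, membership_fns):
    # Eq. (2): assign u0 to the fuzzy subset with maximal membership degree.
    return max(range(len(membership_fns)), key=lambda i: membership_fns[i](u0))

def closeness(b, a):
    # One common adjacency (lattice closeness) degree between two fuzzy sets,
    # given as membership vectors over a discretized universe: an assumption here.
    inner = max(min(bi, ai) for bi, ai in zip(b, a))   # sup of pointwise min
    outer = min(max(bi, ai) for bi, ai in zip(b, a))   # inf of pointwise max
    return 0.5 * (inner + 1.0 - outer)

def indirect_identify(b, prototypes):
    # Eq. (3): attach the fuzzy set b to the closest prototype A_i.
    return max(range(len(prototypes)), key=lambda i: closeness(b, prototypes[i]))
```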
2.3 Identification Model

Data fusion judgment can be carried out under different principles according to different signal characteristics. It can fuse different data to give a precise identification result as well as a reliable parameter estimation. The framework of the fuzzy signal identification course based on stratification decomposition and data fusion is shown in Fig. 1. The modeling and identification environment carries out parameter identification through the synergistic usage of clustering techniques such as stratification decomposition, data fusion, and optimization judgment. Through the selection and adjustment of the weighting factor, an aggregate objective function can be used to achieve a balance between approximation and generalization.
3 Signal Decomposition and Feature Extraction

Layered decomposition with signal processing for extracting features has two aims. One is to reduce the original signal information set, i.e., to cancel useless and unnecessary components. The other is to concentrate the needed important information, with known noise statistical characteristics, from the stratification classification. Thus, the needed fine features of the local signal can be extracted from the original signal by stratification decomposition [2]. For the first purpose, location signal decomposition based on the wavelet transform provides a set of decomposed signals in independent frequency bandwidths, which contain much independent dynamic information in different strata due to the orthogonality of the wavelet transform. For the second purpose, the identified signals are filtered by a Kalman filter with known noise statistical characteristics to pick up the needed components. Figure 2(a) shows the original signal, which consists of a gradually changing signal superimposed with three sinusoids of different periods plus added noise, and Fig. 2(b) shows the decomposition result of wavelet analysis with 'db1' at level 3. Simulation shows that a signal generated by the standard state-space stochastic model can be decomposed into sublayers at different sampling frequencies associated with different levels of resolution. The main advantage is that these innovations are all uncorrelated with each other, yet they can be added together synthetically to form the correlated signal.
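A decomposition of this kind can be reproduced in a few lines; the sketch below is not from the paper and assumes the PyWavelets package and a synthetic test signal resembling the one described above.

```python
# Minimal sketch of a 'db1' wavelet decomposition at level 3 (PyWavelets assumed).
import numpy as np
import pywt

t = np.linspace(0, 1, 1024)
trend = 0.5 * t                                 # gradually changing component
signal = (trend + np.sin(2 * np.pi * 5 * t)
          + np.sin(2 * np.pi * 20 * t)
          + np.sin(2 * np.pi * 60 * t)
          + 0.2 * np.random.randn(t.size))      # three sinusoids plus noise

# One approximation band plus three detail bands, each covering an
# independent frequency range, as in the decomposition of Fig. 2(b).
coeffs = pywt.wavedec(signal, 'db1', level=3)   # [cA3, cD3, cD2, cD1]
```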
Fig. 1. Fuzzy identification framework
Fig. 2. Wavelet stratification decomposition
4 Integrated Location Algorithm with Data Fusion

The location algorithm makes use of the statistical features of the objective characteristics to build up different kinds of data fusion schemes. Here data fusion is carried out once more, in addition to the parameter fuzzy identification, to achieve precise and reliable location within the mobile network. Building on a typical concept model of data fusion system configuration presented in 1992, which mainly included a pretreatment module, four-level fusion, and a data management function, Kleine-Ostmann and Bell put forward a data fusion model for the network location problem [3]. In this model, not only can the three basic kinds of data fusion be realized (data-layer fusion, attribution-layer fusion, and
decision-making-layer fusion), but data fusion can also be carried out simultaneously among different fusion layers. In the K-B model, the main locating data come from independent TOA or TDOA parameters. But at certain times or positions, the location algorithm is incapable of obtaining the three TOA parameters required for the essential calculation. We therefore bring forward a TOA/TDOA/AOA integrated location algorithm based on data fusion and mathematical statistics. By defining a confidence function, a position-based dynamic location algorithm is obtained for multi-parameter integrated location. With the development of smart antennas and antenna arrays, the angular resolution and precision of AOA have increased greatly within the frame of critical system parameters. Through AOA parameter data fusion, the ambiguity caused by an insufficiency of TOA or TDOA parameters can be avoided. The framework model of the integrated location algorithm with the three parameters TOA/TDOA/AOA is shown in Fig. 3.
Fig. 3. The framework model of integrated location algorithm
The algorithm can carry out terminal position estimation through different layered approaches. One is to make use of a single parameter among AOA, TOA, and TDOA for position estimation; the resulting data can serve as input for the second-level data fusion. Another approach is to convert the AOA, TOA, and TDOA data into another form as location algorithm input; for instance, the plane equation groups can be transformed into the spatial line equations of AOA as input to the dynamic location algorithm. There are also several different data fusion methods for mobile terminal position estimation, such as taking the results of the first- and second-level data fusion as input and carrying out the algorithm in the network decision-making layer.
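The paper does not give the fusion formula explicitly; as one plausible reading of the confidence-function idea, the sketch below (entirely an illustrative assumption, not the authors' method) fuses the single-parameter position estimates by confidence-weighted averaging.

```python
# Illustrative sketch: confidence-weighted fusion of first-level estimates.
import numpy as np

def fuse_positions(estimates):
    """estimates: list of ((x, y), confidence) pairs from the TOA, TDOA,
    and AOA first-level algorithms; returns the fused (x, y) position."""
    pts = np.array([p for p, _ in estimates], dtype=float)
    w = np.array([c for _, c in estimates], dtype=float)
    return (pts * w[:, None]).sum(axis=0) / w.sum()

# Example with made-up estimates and confidence values:
# fuse_positions([((10.2, 4.9), 0.8), ((10.6, 5.3), 0.6), ((9.9, 5.1), 0.4)])
```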
5 Conclusion

From the above discussion, we can conclude that the integrated location algorithm with fuzzy identification and data fusion has remarkable abilities in location precision and accuracy, system reliability and configuration integrity, anti-interference capability, and weak-signal processing. By using network stratification processing with TOA/TDOA/AOA parameter selection, many advantages can be achieved to improve location integrity and perfect the location configuration, so as to obtain an optimal location estimate.
References

1. Perez, J.A.M., Vega, J.M.M., Verdegay, J.L.: Fuzzy Location Problems on Networks. Fuzzy Sets and Systems 142(3) (2004) 393-405
2. Chiu, S.: Fuzzy Model Identification Based on Cluster Estimation. Journal of Intelligent & Fuzzy Systems 2(3) (1994)
3. Kleine-Ostmann, T., Bell, A.E.: A Data Fusion Architecture for Enhanced Position Estimation in Wireless Networks. IEEE Communications Letters 5(8) (2001)
A Web Document Classification Approach Based on Fuzzy Association Concept

Jingsheng Lei, Yaohong Kang, Chunyan Lu, and Zhang Yan

College of Information Science and Technology, Hainan University, Haikou 570228, P.R. China
[email protected]

Abstract. In this paper, a method of automatically identifying topics for Web documents via a classification technique is proposed. Web documents tend to have unpredictable characteristics, i.e., differences in length, quality, and authorship. Motivated by these fuzzy characteristics, we adopt the fuzzy association concept to classify the documents into predefined categories or topics. The experimental results show that our approach yields higher classification accuracy compared to the vector space model.
1 Introduction

Due to the explosive growth of available information on the World Wide Web (WWW), users have suffered from information overload. To alleviate the problem, many data mining techniques have been applied in the Web context. This research area is generally known as Web mining [1]. Web mining is defined as the discovery and analysis of useful information from the WWW. Some examples of Web mining techniques include analysis of user access patterns [2], Web document clustering [3], classification [4], and information filtering. In this paper, an intelligent content-based filtering approach is proposed that can automatically and intelligently filter Web documents based on user preferences by utilizing topic identification. Web documents tend to have unpredictable characteristics, i.e., differences in length, quality, and authorship. Motivated by these fuzzy characteristics, the fuzzy association concept is adopted in our approach to classify Web documents into a predefined set of categories.
2 Fuzzy Association Method for Document Classification

The process of classifying Web documents in our approach is explained as follows. Given $C = \{C_1, C_2, \ldots, C_m\}$, a set of categories, where m is the number of categories:

Step 1. The first step is to collect the training sets of Web documents, $TD = \{TD_1, TD_2, \ldots, TD_m\}$, from each category in C. This step involves crawling through the hypertext links encapsulated in each document.
Step 2. To obtain the index terms of a Chinese Web document, word segmentation is performed first. Next, stop words are removed using a stop list and the terms are reduced by a selection algorithm, yielding the document index term set. The keywords from TD are extracted and put into separate keyword sets, $CK = \{CK_1, CK_2, \ldots, CK_m\}$. The document frequency-inverse category frequency (df-icf) strategy, adapted from the tf-idf concept, is proposed to select and rank the keywords within each category, based on the number of documents in which the keyword appears (i.e., df) and the inverse of the number of categories in which the keyword appears (i.e., icf):

$df\text{-}icf(k, C_i) = DF(k, C_i) \times ICF(k)$   (1)

where $DF(k, C_i)$ is the number of documents in which keyword k occurs at least once, $ICF(k) = \log(|C| / CF(k))$, $|C|$ is the total number of categories, and $CF(k)$ is the number of categories in which the keyword k occurs at least once.

Step 3. Let $A = \{k_1, k_2, \ldots, k_n\}$ be the set of all distinct keywords from CK, where n is the number of all keywords. Then, the keyword correlation matrix M is generated via Eq. (2):

$r_{i,j} = \frac{n_{i,j}}{n_i + n_j - n_{i,j}}$   (2)
where $r_{i,j}$ represents the fuzzy relation between keywords i and j, $n_{i,j}$ is the number of documents containing both the ith and jth keywords, $n_i$ is the number of documents including the ith keyword, and $n_j$ is the number of documents including the jth keyword.

Step 4. To classify a test document d into category $C_i$, a set of keywords from $CK_i$ is used to represent $C_i$. Then d is cleaned and its set of representative keywords is extracted from A. That is, $d = \{|k_1|, |k_2|, \ldots, |k_n|\}$, where $|k_i|$ is the frequency with which $k_i$ appears in d. After that, the membership degree between d and $C_i$ is calculated using the following equation:

$\mu_{d,C_i} = \sum_{k_a \in d} \left[ 1 - \prod_{k_b \in CK_i} (1 - r_{a,b}) \right]$   (3)

where $\mu_{d,C_i}$ is the membership degree of d belonging to $C_i$, and $r_{a,b}$ is the fuzzy relation between keyword $k_a \in d$ and keyword $k_b \in CK_i$. Document d is classified into the category $C_i$ for which $\mu_{d,C_i}$ is maximal over all i. The keyword $k_a$ in d is associated with category $C_i$ if the keywords $k_b$ in $CK_i$ are related to $k_a$. Whenever there is at least one keyword in $CK_i$ that is strongly related to $k_a \in d$ ($r_{a,b} \approx 1$), Eq. (3) yields $\mu_{d,C_i} \approx 1$, and the keyword $k_a$ is a good fuzzy index for the category $C_i$. In the case where all keywords in $CK_i$ are either loosely related or unrelated to $k_a$, $k_a$ is not a good fuzzy index for $C_i$ ($\mu_{d,C_i} \approx 0$).
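To make Eqs. (2) and (3) concrete, here is a minimal sketch (not from the paper); `docs` is a hypothetical list of per-document keyword sets.

```python
# Minimal sketch of the keyword correlation matrix and membership degree.
import itertools

def correlation_matrix(docs, keywords):
    """Eq. (2): r_ij = n_ij / (n_i + n_j - n_ij)."""
    n = {k: sum(1 for d in docs if k in d) for k in keywords}
    r = {}
    for ki, kj in itertools.combinations(keywords, 2):
        nij = sum(1 for d in docs if ki in d and kj in d)
        denom = n[ki] + n[kj] - nij
        r[(ki, kj)] = r[(kj, ki)] = nij / denom if denom else 0.0
    return r

def membership(doc_keywords, category_keywords, r):
    """Eq. (3): membership degree of document d in category C_i."""
    total = 0.0
    for ka in doc_keywords:
        prod = 1.0
        for kb in category_keywords:
            # r_aa = 1 by definition of Eq. (2); unseen pairs default to 0.
            prod *= 1.0 - r.get((ka, kb), 1.0 if ka == kb else 0.0)
        total += 1.0 - prod
    return total
```

A test document is then assigned to the category whose keyword set yields the maximal `membership` value, matching the decision rule stated above.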
3 Experimental Results and Discussions

3.1 Experimental Data Sets

Experiments were conducted using the predefined categories as document topics and document sets collected from three Web portals: Sohu, Sina, and Yahoo! Chinese. In our experiments, we only consider documents in Chinese and ignore all non-Chinese documents; the selected categories and the numbers of Web documents are shown in Table 1. Based on these predefined categories, we collected 5651 documents from the Web directories as the training and test data sets. To avoid over-fitting the data when performing the experiments, we randomly select two-thirds of the document sets as the training set and one-third as the test set. For the Sohu, Sina, and Yahoo! Chinese training data set, the 100 index terms whose df-icf values are the highest among all index terms are selected from each of its 11 categories. Next, we combine these index terms into a set of 1463 distinct index terms.

Table 1. Predefined category sets and the number of Web documents
Category               Abbr.    Number of Web documents
News                   news     554
Sports                 sport    521
Finance & economics    fin      528
Entertainment          et       519
Education              edu      573
Games                  gm       561
Life                   life     586
Autos                  auto     417
Travel                 travel   482
Health                 hl       478
House property         hm       432
TOTAL                           5651
3.2 Experimental Results and Discussions

To compare the performance of our method (Fuzzy) with the vector space model (VSM) approach, we use the test data sets from the three Web directories. In Fig. 1, the performance result over the 11 categories of the data set is presented. As expected, our approach yields higher accuracies for most of the categories.
Fig. 1. Classification performance comparison by categories: accuracy (%) of the VSM and our approach across the 11 categories
4 Conclusion

In this paper, a fuzzy classification approach that automatically identifies topics for Web documents via a classification technique was proposed. Our approach adopts the fuzzy association concept as a machine learning technique to classify documents into predefined categories or topics. The result is that each pair of words has an associated value that distinguishes it from other pairs of words. We performed several experiments using the data sets obtained from three different Web directories: Sohu, Sina, and Yahoo! Chinese, and compared our approach to the vector space model approach. The results show that our approach yields higher classification accuracies compared to the vector space model when varying the number of category representation keywords.
References

1. Cooley, R., Mobasher, B., Srivastava, J.: Web Mining: Information and Pattern Discovery on the World Wide Web. In: Proc. 9th IEEE Int. Conf. on Tools with Artificial Intelligence (ICTAI'97), Newport Beach, CA (1997) 558-567
2. Pitkow, J., Pirolli, P.: Mining Longest Repeating Subsequences to Predict World Wide Web Surfing. In: Proc. USENIX Symp. on Internet Technologies and Systems (USITS'99), Boulder, CO (1999) 139-150
3. Broder, A.Z., Glassman, S.C., Manasse, M.S.: Syntactic Clustering of the Web. In: Proc. 6th Int. World Wide Web Conf. (WWW'6), Santa Clara, CA (1997) 391-404
4. Dumais, S.T., Chen, H.: Hierarchical Classification of Web Content. In: Proc. 23rd Int. ACM Conf. on Research and Development in Information Retrieval (SIGIR), Athens, Greece (2000) 256-263
Optimized Fuzzy Classification Using Genetic Algorithm¹

Myung Won Kim and Joung Woo Ryu

School of Computing, Soongsil University, 1-1, Sangdo 5-Dong, Dongjak-Gu, Seoul, Korea
[email protected],
[email protected]

Abstract. Fuzzy rules are suitable for describing uncertain phenomena, natural for human understanding, and in general efficient for classification. In addition, fuzzy rules allow us to effectively classify data having non-axis-parallel decision boundaries, which is difficult for conventional attribute-based methods. In this paper, we propose a fuzzy rule generation method optimized for classification both in accuracy and in comprehensibility (or rule complexity). We investigate the use of genetic algorithm to determine an optimal set of membership functions for quantitative data. In our method, for a given set of membership functions a fuzzy decision tree is constructed, and its accuracy and rule complexity are evaluated and combined into the fitness function to be optimized. We have experimented with our algorithm on several benchmark data sets. The experiment results show that our method is more efficient in performance and comprehensibility of rules compared with existing methods including C4.5 and FID3.1 (Fuzzy ID3).
1 Introduction

Data mining is a new technique for discovering useful knowledge from data [1]. In data mining, important evaluation criteria are efficiency and comprehensibility of knowledge. The discovered knowledge should describe the characteristics of the data well, and it should be easy to understand in order to facilitate better understanding of the data and its effective use. Classification is one of the important techniques in data mining, used in various applications including pattern recognition, customer relationship management, targeted marketing, and disease diagnosis. Decision trees such as ID3 and C4.5 are among the most widely used classification methods in data mining [2]-[4]. One of the difficult problems in classification is handling quantitative data appropriately. Conventionally, a quantitative attribute domain is divided into a set of crisp regions, whereby the whole data space is partitioned into a set of (crisp) subspaces (hyper-rectangles), each of which corresponds to a classification rule stating that a sample belonging to the subspace is classified into the representative class of the subspace. However, such a crisp partitioning is not natural to humans and is inefficient in performance because of the sharp boundary
¹ This work was supported by Korea Research Foundation Grant (KRF-2004-041-D00627).
problem. Recently, fuzzy decision trees have been proposed to overcome this problem [5]-[11]. It is well known that fuzzy theory not only provides a natural tool for describing quantitative data but also generally produces good performance in many applications. However, one of the difficulties with fuzzy decision trees is determining an appropriate set of membership functions representing fuzzy linguistic terms. Usually membership functions are given manually; however, it is difficult even for an expert to determine an appropriate set of membership functions when the volume and dimensionality of the data are large. In this paper we investigate combining fuzzy theory and the conventional decision tree algorithm for accurate and comprehensible classification. We propose an efficient fuzzy rule generation method using the fuzzy decision tree (FDT) algorithm for data mining, which integrates the comprehensibility of decision trees and the expressive power of fuzzy sets. We also propose the use of genetic algorithm to obtain an optimal set of fuzzy rules by determining an appropriate set of fuzzy sets for quantitative data. In our method, for a given set of fuzzy membership functions a fuzzy decision tree is constructed and used to evaluate classification accuracy and rule complexity. The fuzzy membership functions evolve so as to optimize a fitness function combining both classification accuracy and rule complexity.
2 Fuzzy Inference

2.1 Fuzzy Classification Rules

We use a simple form of fuzzy rules and inference for better human understanding. Each fuzzy rule is of the form "if A then B", where A and B are called the antecedent and the consequent, respectively. In our approach the antecedent is a conjunction of simple conditions, while the consequent is "Class is k." A simple condition is of the form "Att is Val", where Att represents an attribute name and Val represents a value of the attribute. Each fuzzy rule is associated with a CF (certainty factor) representing the degree of belief that the consequent is drawn from the antecedent being satisfied. Rule (1) is the typical form of fuzzy classification rules used in our approach:
$R_i$: if $A_{i1}$ is $V_{i1}$ and $A_{i2}$ is $V_{i2}$ ... and $A_{im}$ is $V_{im}$ then 'Class' is k ($CF_i$)   (1)

In the rule, $A_{ik}$ represents an attribute and $V_{ik}$ represents a fuzzy linguistic term represented by a fuzzy set associated with attribute $A_{ik}$. Applying the rule to a sample X yields the confidence with which X is classified into class k given that the antecedent is satisfied. In this paper, among a variety of fuzzy inference methods we adopt the standard method described in the following: 1) min is used to combine the degrees of satisfaction of the individual simple conditions of the antecedent; 2) product is used to propagate the degree of satisfaction of the antecedent to the consequent; 3) max is applied to aggregate the results of the individual rule applications.
For a given sample X, according to our method the confidence of class k is obtained as

$Conf_k(X) = \max_{R_i \in R(k)} \left\{ \left( \min_j \mu_{V_{ij}}(x_j) \right) \cdot CF_i \right\}$   (2)

where $x_j$ is the value of attribute $A_{ij}$ of X. In Eq. (2), $\mu_V(x)$ represents the membership degree with which x belongs to fuzzy set V, and R(k) represents the set of all rules that classify samples into class k (their consequent parts are 'Class is k'). The class with the maximum $Conf_k(X)$ is the final classification of X.

Membership functions are very important in fuzzy rules. They affect not only the performance of fuzzy rule-based systems but also the comprehensibility of the rules. Triangular, trapezoidal, and Gaussian membership functions are widely used; in this paper we adopt triangular membership functions. A triangular membership function can be represented by a triple of numbers (l, c, r), where l, c, and r represent the left, the center, and the right points of the triangle, respectively. In this paper we investigate the use of genetic algorithm to automatically generate an appropriate set of membership functions for a given set of data to classify. The membership functions are optimized in the sense that the generated rules are efficient and comprehensible, as described in Section 3.

2.2 Fuzzy Decision Tree

A fuzzy decision tree is similar to a (crisp) decision tree. It is composed of nodes and arcs representing attributes and attribute values or value sets, respectively. The major difference is that in a fuzzy decision tree each arc is associated with a fuzzy linguistic term, which is usually represented by a fuzzy set. Also, in a fuzzy decision tree a leaf node represents a class and is associated with a certainty factor representing the confidence of the decision corresponding to that leaf node. In a fuzzy decision tree a decision is made by aggregating the conclusions of the multiple rules (paths) fired, as Equation (2) describes, while in a crisp decision tree only a single rule is fired for a decision.

Let us assume that $A_1, A_2, \ldots, A_d$ represent the attributes under consideration for a given data set, where d represents the dimension of the data. The whole data space W can be represented as $W = U_1 \times U_2 \times \ldots \times U_d$, where $U_i$ represents the domain of attribute $A_i$. A sample X can be represented as a point in W as $X = (x_1, x_2, \ldots, x_d)$, where $x_i \in U_i$. In a fuzzy decision tree each arc (l, m) from node l to node m is associated with a fuzzy set F(l, m) representing a fuzzy linguistic term as a value of the attribute selected for node l. Suppose we have a fuzzy decision tree, and let n be a node and $P_n$ the path from the root node to node n in the tree. Then node n is associated with a fuzzy subspace $W_n$ of W defined as follows:

$W_n = S_1 \times S_2 \times \ldots \times S_d$
where

$S_i = F(l, m)$ if $A_i = att(l)$ for an arc (l, m) in $P_n$; $S_i = U_i$ otherwise.

Here, F(l, m) is the fuzzy set corresponding to arc (l, m) and att(l) represents the attribute selected for node l. Let $\nu_n(X)$ represent the membership with which X belongs to $W_n$; then, according to the above definitions,

$\nu_n(X) = \mu_{W_n}(X) = \min_{(l,m) \text{ in } P_n} \mu_{F(l,m)}(x_i)$, where $A_i = att(l)$.
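The inference of Eq. (2) is mechanical once the membership functions are fixed; the following sketch (not from the paper; the rule encoding is hypothetical, and triangle points are assumed to satisfy l < c < r) illustrates it.

```python
# Minimal sketch of triangular membership functions and the inference of Eq. (2).

def tri(l, c, r):
    """Triangular membership function given by its (left, center, right) points."""
    def mu(x):
        if x <= l or x >= r:
            return 0.0
        return (x - l) / (c - l) if x <= c else (r - x) / (r - c)
    return mu

def classify(x, rules):
    """rules: list of (conditions, class_label, cf) with
    conditions = [(attribute_index, membership_fn), ...].
    min combines the simple conditions, product propagates the degree
    to the consequent, and max aggregates over the rules of each class."""
    conf = {}
    for conditions, k, cf in rules:
        degree = min(mu(x[a]) for a, mu in conditions)
        conf[k] = max(conf.get(k, 0.0), degree * cf)
    return max(conf, key=conf.get)   # class with the maximum Conf_k(X)
```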
Our fuzzy decision tree construction algorithm is as follows.

<Step 1> Starting with the root node, continue to grow the tree as follows until the termination conditions are satisfied. If one of the following conditions is satisfied, then make node m a leaf node:

(1) $\frac{1}{|D|} \sum_{X \in D} v_m(X) \le \theta_s$

(2) $\frac{\sum_{X \in D_{k^*}} v_m(X)}{\sum_{X \in D} v_m(X)} \ge \theta_d$, where $k^* = \arg\max_{k \in C} \sum_{Y \in D_k} v_m(Y)$

(3) no more attributes are available.

In condition (2), class $k^*$ represents the representative class of the corresponding fuzzy subspace. In this case the node becomes a leaf node whose class is $k^*$, and the associated CF is determined by

$CF = \frac{\sum_{Y \in D_{k^*}} v_m(Y)}{\sum_{X \in D} v_m(X)}$
Otherwise,

<Step 2>
(1) Let $E(m, A_{i^*}) = \min_i E(m, A_i)$, where $E(m, A_i)$ represents the entropy of attribute $A_i$ for node m, as defined in [11].
(2) For each fuzzy membership function of attribute $A_{i^*}$, make a child node of node m.
(3) Go to Step 1 and apply the algorithm to all newly generated nodes, recursively.

Node expansion ((2) in Step 2) corresponds to partitioning the fuzzy subspace corresponding to node m into fuzzy subspaces, each of which corresponds to a value of the attribute selected for the node. In Step 1, the threshold parameters $\theta_s$ and $\theta_d$ determine when partitioning terminates. Condition (1) prohibits further partitioning of
sufficiently sparse fuzzy subspaces, while condition (2) prohibits further partitioning of fuzzy subspaces containing a sufficiently large portion of samples of a single class. The parameters $\theta_s$ and $\theta_d$ are used to control overfitting by preventing overly detailed rules from being generated. After constructing a fuzzy decision tree, we name each fuzzy membership function with an appropriate linguistic term.
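As a compact restatement of Step 1, the sketch below (not from the paper) evaluates the two threshold tests and the CF of a would-be leaf; `v_m` stands for the node membership ν_m defined above, and `D`/`labels` are a hypothetical training-set encoding.

```python
# Minimal sketch of the Step 1 leaf tests and the CF computation.

def leaf_decision(D, labels, classes, v_m, theta_s, theta_d):
    total = sum(v_m(x) for x in D)
    per_class = {k: sum(v_m(x) for x, y in zip(D, labels) if y == k)
                 for k in classes}
    k_star = max(per_class, key=per_class.get)      # representative class
    cf = per_class[k_star] / total if total else 0.0
    if total / len(D) <= theta_s:                   # condition (1): sparse subspace
        return k_star, cf
    if cf >= theta_d:                               # condition (2): nearly pure
        return k_star, cf                           # CF of the leaf node
    return None                                     # keep splitting (Step 2)
```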
3 Membership Function Generation Using Genetic Algorithm

In fuzzy rules, membership functions are important since they affect both the accuracy and the comprehensibility of the rules. However, membership functions are usually given manually, and it is difficult even for an expert to determine an appropriate set of membership functions when the volume and dimensionality of the data are large. In this paper, we propose the use of genetic algorithm to determine an optimal set of membership functions for a given classification problem.

3.1 Genetic Algorithm

Genetic algorithm is an efficient search method simulating natural evolution, which is characterized by survival of the fittest. Our fuzzy decision tree construction using genetic algorithm proceeds as follows: (1) generate an initial population of chromosomes of membership functions; (2) construct fuzzy decision trees using the membership functions; (3) evaluate each individual set of membership functions corresponding to a chromosome by evaluating the performance and tree complexity of its corresponding fuzzy decision tree; (4) test whether the termination condition is satisfied; (5) if yes, then exit; (6) otherwise, generate a new population of chromosomes of membership functions by applying genetic operators, and go to (2).

We use genetic algorithm to generate an appropriate set of membership functions for quantitative data. Membership functions should be appropriate in the sense that they result in good performance and in as simple a decision tree as possible. In our genetic algorithm a chromosome is of the form $\langle \phi_1, \phi_2, \ldots, \phi_d \rangle$, where $\phi_i$ represents the set of membership functions associated with attribute $A_i$, which is in turn of the form $\{f_1(A_i), f_2(A_i), \ldots, f_l(A_i)\}$, where $f_j(A_i)$ represents a triplet of a membership function for attribute $A_i$. The number of membership functions for an attribute is not necessarily fixed.

3.2 Genetic Operations

We have genetic operations such as crossover, mutation, addition, and merging, as described in the following.
1) Crossover: it generates new chromosomes by exchanging the whole sets of membership functions for a randomly selected attribute of the parent chromosomes.
2) Mutation: we combine random mutation and heuristic mutation. In random mutation, membership functions randomly selected from a chromosome are mutated by adding Gaussian noise to the left, the center, and the right points of an individual membership function. In heuristic mutation, membership functions are adjusted to correctly classify a set of randomly sampled incorrectly classified data.
3) Addition: for any attribute in a chromosome, the set of associated membership functions is analyzed and new membership functions are added if necessary. For example, when some attribute values are not covered or are poorly covered by the current membership functions, new membership functions are added to cover those values properly.
4) Merging: any two close membership functions of any attribute of a chromosome are merged into one.

We apply the roulette wheel method for selecting candidate chromosomes for the crossover operation. We also adopt elitism, in which the best-fit chromosome in the current population is carried over to the new population. Once membership functions are determined for each attribute, we generate a fuzzy decision tree according to the algorithm described in Section 2.2 and use the performance of the generated fuzzy decision tree in the fitness evaluation of the chromosome. Suppose a fuzzy decision tree τ(e) is generated from the membership functions represented by chromosome e. The fitness score of chromosome e is given by
$Fit(e) = \alpha P(\tau(e)) - \beta C(\tau(e))$   (3)
where P(τ(e)) represents the performance of decision tree τ(e) and C(τ(e)) represents the complexity of τ(e), measured in this paper by the number of nodes of τ(e). We could also use genetic algorithm to generate fuzzy rules by evolving the form of the rules (the selection of attributes and their associated values) and the membership functions simultaneously. However, genetic algorithm is generally time-consuming, and our method trades off between classification accuracy and computational time.
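A sketch of the fitness evaluation in Eq. (3) (not from the paper; `build_fdt`, `accuracy`, and `tree_size` are hypothetical helpers standing in for the FDT algorithm of Section 2.2, and the α, β values are purely illustrative):

```python
# Minimal sketch of the fitness function in Eq. (3).

def fitness(chromosome, data, labels, alpha=1.0, beta=0.01):
    tree = build_fdt(chromosome, data, labels)   # decision tree tau(e)
    p = accuracy(tree, data, labels)             # performance P(tau(e))
    c = tree_size(tree)                          # complexity C(tau(e)), node count
    return alpha * p - beta * c
```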
4 Related Works

C4.5 is a successor of ID3 [2] and is widely used in applications [3], [4]. One of the important improvements made in C4.5 is that it can handle continuous-valued attributes. In C4.5, continuous values are partitioned into two intervals by selecting a cut point that maximizes the information gain, which indicates how well a partition classifies the data. By sorting the data according to the attribute and then identifying adjacent values of the attribute that differ in their target classification, we can generate candidate cut points; among those cut points, the one that maximizes the information gain is selected. ID3 and C4.5 are crisp decision trees in the sense that the whole data space is partitioned into a set of crisp subspaces, and they suffer from the sharp boundary problem.
A fuzzy decision tree is more powerful, efficient, and natural to human understanding, particularly compared with crisp decision trees. [5] proposes a fuzzification of CART (Classification And Regression Tree) using sigmoidal fuzzy splits replacing the Boolean tests of a crisp CART tree, and applies the back-propagation algorithm to learn the parameters associated with the fuzzy splits so as to optimize the global impurity of the tree. However, the structure of the fuzzy decision tree proposed in [5] is different from that proposed in this paper, and it is difficult to compare them directly. [6] proposes a method for automatically generating fuzzy membership functions for continuous-valued attributes based on the principle of maximum information gain in cut point selection for each attribute. Our method for constructing a fuzzy decision tree, described in Section 2, is similar to Janikow's method proposed in [8]. A fuzzified ID3 has been proposed in [6] and [9], in which fuzzy membership functions are generated automatically based on the principle of maximum information gain in the selection of cut points for each continuous-valued attribute. However, FID3 is still a hill-climbing method, which may get stuck in local optima. FID3 also allows the same attribute selected for different nodes to have different values associated with their outgoing arcs. This can improve the efficiency of the generated fuzzy decision tree by allowing the data space to be partitioned into sufficiently small subspaces if necessary; however, it can harm comprehensibility by creating many odd fuzzy linguistic terms. [10] proposes an automatic design of fuzzy rule-based classification systems, approached by combining tools for feature selection, model initialization, model reduction, and model tuning. However, the model initialization is derived from the data, clustered by the number of classes rather than determined by the number of clusters.
5 Experiments

5.1 Experiments with Manually Generated Data Sets

In this experiment we use manually generated data sets, as shown in Fig. 1. For the slashed pattern and the rotated generalized XOR data, we note that our algorithm can classify data with non-axis-parallel decision boundaries well. Such data are difficult to classify efficiently using the conventional ID3 and C4.5 type decision tree based classification methods. For the generalized XOR, the histogram analysis fails to
Fig. 1. (a) Slashed two-class data; (b) Generalized XOR; (c) Rotated generalized XOR
Fig. 2. Membership functions generated for the data sets (a), (b) of Fig.1
Fig. 3. Membership functions and fuzzy rules generated for data set (c) of Fig.1
generate an appropriate set of membership functions; however, the genetic algorithm succeeds in doing so (Fig. 2(b)). For all cases our algorithm generates simple fuzzy rules that classify the patterns 100% correctly. Fig. 3(b) illustrates the four fuzzy rules generated for the rotated generalized XOR; each rule corresponds to one of the four pattern blocks shown in Fig. 1(c). We also note that fuzzy rules have a kind of abstraction capability, in that fuzzy rules in linguistic terms describe the rough pattern classification, while all the detailed classification is taken care of by the membership functions. For example, for the slashed two-class data, two fuzzy rules are generated, describing the upper left corner for black dots and the lower right corner for white dots.

5.2 Comparison of FDT and Other Classification Algorithms

We have experimented with our algorithm on several other data sets, including the credit screening data (approval of credit cards), the heart disease data (diagnosis of heart disease), and the sonar data (target object classification by analyzing echoed sonar signals) from the UCI machine learning databases [12]. In these experiments, we use genetic algorithm to generate the membership functions for the FDT algorithm. Table 1 compares the performance of FDT with that of C4.5 [3], [4] (releases 7 and 8) in classification accuracy and tree size. Accuracies and tree sizes are averaged using 10-fold cross-validation for each data set. It is clearly shown that our algorithm is more efficient than C4.5 both in classification accuracy and in tree size. However, it should be noted that for FDT the tree sizes are more sensitive to the training data compared with C4.5, as can be expected from the nature of genetic algorithm: the membership functions can evolve quite differently depending on the training data and the initial population of chromosomes. For each data set, relatively big trees are constructed in only one or two cases out of 10.
400
M.W. Kim and J.W. Ryu
Table 1. Comparison of FDT and C4.5 (releases 7 and 8) (±x indicates the standard deviation)
Data set            C4.5 (Rel. 7)             C4.5 (Rel. 8)             FDT
                    accuracy(%)  tree size    accuracy(%)  tree size    accuracy(%)  tree size
Breast cancer       94.71±0.09   20.3±0.5     94.74±0.19   25.0±0.5     96.66±0.02   5.9±2.8
Credit screening    84.20±0.30   57.3±1.2     85.30±0.20   33.2±1.1     86.18±0.03   4±0.0
Heart disease       75.10±0.40   45.3±0.3     77.00±0.50   39.9±0.4     77.8±0.05    22±6.0
Iris                95.13±0.20   9.3±0.1      95.20±0.17   8.5±0.0      97.99±0.03   4.2±0.4
Sonar               71.60±0.60   33.1±0.5     74.40±0.70   28.4±0.2     77.52±0.06   14.1±3.1
Table 2. Comparison of C4.5, FID3.1 and FDT
Data set    C4.5                     FID3.1                   FDT
            accuracy(%)  tree size   accuracy(%)  tree size   accuracy(%)  tree size
Iris        94.0         5.0         96.0         5.0         97.99        4.2
Bupa        67.9         34.4        70.2         28.9        70.69        12.1
Pima        74.7         45.2        71.56        23.1        76.90        4.2
In the next experiment, we compare our algorithm with C4.5 and FID3.1 using the Iris, Bupa, and Pima data sets. All experiments were performed using 10-fold cross-validation. Table 2 compares the performance in accuracy and tree size of FDT and FID3.1 [9]. It is clearly shown that FDT is consistently more efficient than both C4.5 and FID3.1.
6 Conclusions

In this paper, we propose a fuzzy rule generation algorithm based on the fuzzy decision tree for data mining. Our method provides efficiency and comprehensibility of the generated fuzzy rules, both of which are important to data mining. In particular, fuzzy rules allow us to effectively classify data with non-axis-parallel decision boundaries using appropriate membership functions, which is difficult using conventional attribute-based methods. We also propose the use of genetic algorithm for the automatic generation of an optimal set of membership functions. We have experimented with our algorithm on several benchmark data sets, including the iris data, the Wisconsin breast cancer data, the credit screening data, and others. The experiment results show that our method is more efficient in classification accuracy and compactness of rules compared with C4.5 and FID3.1. We plan to investigate the use of a co-evolution algorithm in place of genetic algorithm, together with dynamic sampling, to speed up the algorithm significantly.
We can also incorporate FID3 into the initialization of membership functions in our method for a more efficient search. We are also applying our algorithm to generating fuzzy rules describing blast furnace operation in steel making.
References

1. Fayyad, U., Mannila, H., Piatetsky-Shapiro, G.: Data Mining and Knowledge Discovery. Kluwer Academic Publishers (1997)
2. Quinlan, J.R.: Induction of Decision Trees. Machine Learning 1(1) (1986) 81-106
3. Quinlan, J.R.: Improved Use of Continuous Attributes in C4.5. Journal of Artificial Intelligence Research 4 (1996) 77-90
4. Quinlan, J.R.: C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers (1993)
5. Suárez, A., Lutsko, J.F.: Globally Optimal Fuzzy Decision Trees for Classification and Regression. IEEE Trans. on Pattern Analysis and Machine Intelligence 21(12) (1999) 1297-1311
6. Zeidler, J., Schlosser, M.: Continuous-Valued Attributes in Fuzzy Decision Trees. In: Proc. of the 6th Int. Conf. on Information Processing and Management of Uncertainty in Knowledge-Based Systems (1996) 395-400
7. Abonyi, J., Roubos, J.A.: Structure Identification of Fuzzy Classifiers. In: 5th Online World Conference on Soft Computing in Industrial Applications (2000) 4-18
8. Janikow, C.Z.: Fuzzy Decision Tree: Issues and Methods. IEEE Trans. on Systems, Man and Cybernetics - Part B 28(1) (1998) 1-14
9. Janikow, C.Z., Faifer, M.: Fuzzy Partitioning with FID3.1. In: Proc. of the 18th International Conference of the North American Fuzzy Information Processing Society, IEEE (1999) 467-471
10. Roubos, J.A., Setnes, M., Abonyi, J.: Learning Fuzzy Classification Rules from Labeled Data. Information Sciences 150(1-2) (2003) 77-93
11. Kim, M.W., Ryu, J.W.: Optimized Fuzzy Classification for Data Mining. Lecture Notes in Computer Science, Vol. 2973 (2004) 582-593
12. Blake, C.L., Merz, C.J.: UCI Repository of Machine Learning Databases. Irvine, CA: University of California, Department of Information and Computer Science (1998)
Dynamic Test-Sensitive Decision Trees with Multiple Cost Scales

Zhenxing Qin¹, Chengqi Zhang¹, Xuehui Xie², and Shichao Zhang¹,²

¹ Faculty of Information Technology, University of Technology, Sydney, PO Box 123, Broadway, Sydney, NSW 2007, Australia {zqin, Chengqi, zhangsc}@it.uts.edu.au
² Network Center, Guangxi Normal University, 15 Yucai Road, Guilin, Guangxi, 541004, P.R. China
[email protected] Abstract. Previous work considering both test and misclassification costs rely on the assumption that the test cost and the misclassification cost must be defined on the same cost scale. However, it can be difficult to define the multiple costs on the same cost scale. In our previous work, a novel yet efficient approach for involving multiple cost scales is proposed. Specifically speaking, we first introduce a new test-sensitive decision tree with two kinds of cost scales, that minimizes the one kind of cost and control the other in a given specific budget. In this paper, a dynamic test strategy with known information utilization and global resource control is proposed to keep the minimization of overall target cost. Our work will be useful in many urgent diagnostic tasks involving target cost minimization and resource consumption for obtaining missing information.
1 Introduction Recently, researchers have begun to consider both test and misclassification costs. In [6], the cost-sensitive learning problem is cast as a Markov Decision Process (MDP), and solutions are given as searches in a state space for optimal policies. However, it may take very high computational cost to conduct the search process. Similar in the interest in constructing an optimal learner, [7] studied the theoretical aspects of active learning with test costs using a PAC learning framework. Turney [4] presented a system called ICET, which uses a genetic algorithm to build a decision tree to minimize the sum of both costs. Of these works, [1] proposed a decision tree based method that explicitly considers how to directly incorporate both types of costs in decision tree building processes and in determining the next attribute to test, should the attributes contain missing values. However, sometimes we may meet difficulty to define the multiple costs on the same cost scale. Our previous work in [9] proposed a new test cost-sensitive decision tree with two cost scales. In this paper, we consider the dynamic tree for specific test example in the testsensitive tree with two cost scales. The goal is to minimize one scale (called target L. Wang and Y. Jin (Eds.): FSKD 2005, LNAI 3613, pp. 402 – 405, 2005. © Springer-Verlag Berlin Heidelberg 2005
scale) and to control the other (the resource scale) within a specific budget. In some situations, such as medical diagnosis, this scenario is more practical, since doctors and scientists often suggest specific medical tests for a specific patient.
2 Dynamic Tree Building with Multiple Cost Scales

The goal of our decision-tree learning algorithm is to minimize the sum of the target costs of misclassification and tests while, at the same time, keeping the resource cost below the resource budget. We assume that the test and misclassification costs each contain two kinds of cost: target and resource. The target and resource costs are defined on two different cost scales, such as the dollar cost and the time cost incurred in a medical diagnosis. We assume there is a maximum limit on resources, called the resource budget. Table 1 shows a sample of the two cost scales on the "Ecoli" dataset: cost1 (Target) is the target cost, and cost2 (Resource) is the resource consumption. We also use the same test case as in [1] (Table 2) to illustrate our test strategies.

Table 1. Test and misclassification costs set for the "Ecoli" dataset
          FP    FN    A1   A2   A3   A4   A5   A6
Target    800   800   50   60   60   50   50   30
Resource  150   100   10   20   10   10   10   10
Table 2. An example testing case with several unknown values. The true values are in parentheses and can be obtained by performing the tests (with the costs listed in Table 1).

A1      A2    A3      A4    A5    A6      Class
? (6)   2     ? (1)   2     2     ? (3)   P
This strategy is exactly the idea of trading off between target and resource. It uses the target gain ratio to choose potential splitting attributes. Assume the total target cost reduction of choosing A as a splitting attribute is $G_A$. $G_A$ actually equals $TT_A$ in [1]; if $G_A > 0$, attribute A is a candidate for further splitting. In this paper, we also need to consider the resource budget B. Assume the resource consumption of A is $C_A$ and the remaining resource is B'. We first normalize the consumption as $C'_A$; the gain ratio of choosing A as a splitting attribute is then

$R_A = G_A / C'_A$

This dynamic tree building strategy builds a new tree for the current test example based on its known values. From this tree, the testing examples can easily be classified, as the
values of the attributes used in the tree are all known in the testing example. At the leaf node it reaches, we follow our tree building procedure to evaluate whether this node should be split by the attributes with unknown values. Once an attribute is chosen to split the node, we evaluate the resource consumption of all its children. Tree building stops when there is no further target cost gain or the resources are exhausted.
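As a sketch of this splitting criterion (not from the paper; the paper leaves the normalization C'_A unspecified, so consumption normalized by the remaining budget is used here as one plausible assumption, and the per-attribute gains and costs are hypothetical inputs):

```python
# Minimal sketch of budget-aware attribute selection via R_A = G_A / C'_A.

def select_attribute(unknown_attrs, target_gain, resource_cost, budget_left):
    """Pick the attribute maximizing R_A among affordable candidates."""
    best, best_ratio = None, 0.0
    for a in unknown_attrs:
        g, c = target_gain[a], resource_cost[a]
        if g <= 0 or c > budget_left:        # must reduce target cost within budget
            continue
        # R_A = G_A / C'_A with C'_A = C_A / B' (normalized consumption).
        ratio = g * budget_left / c if c else float("inf")
        if ratio > best_ratio:
            best, best_ratio = a, ratio
    return best   # None => stop splitting (no gain or budget exhausted)
```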
3 Experiments

We conducted experiments on five real-world datasets [1,3,5]. These datasets were chosen because they have at least some discrete attributes, a binary class, and a good number of examples. The numerical attributes in the datasets are first discretized using the minimal entropy method [6].
Fig. 1. Comparison of the total target cost of the two tree building strategies (static vs. dynamic) on the datasets Ecoli, Breast, Heart, Thyroid, and Australia
Fig. 2. Comparison of the total target cost (static vs. dynamic) against the percentage of tests performed under the resource budget
First, we compare the target cost and resource consumption of the static building strategy in [1] against our dynamic strategy (assuming a resource budget of 180, which supports only half of the tests) on all five datasets in Fig. 1. We can see that the dynamic strategy outperforms the others in target cost, meaning that the dynamic strategy achieves better overall performance with a limited resource budget.
To compare the influence of the resource budget on the strategies, we conducted an experiment on all the datasets with a varying budget B supporting from 20 to 100 percent of all needed tests. For fuller usage of the resources, we use the OST testing strategy first; once the budget is exhausted, we use the M2 testing strategy to give a result. The result is shown in Fig. 2, from which we can see that all target costs go down as the test examples can explore further branches, and lower total costs are thus obtained. The performance-first strategy also outperforms the other two in target cost at the same resource consumption.
4 Conclusions and Future Work

In this paper, a new dynamic test strategy is proposed to utilize the known values and resources. Our experiments show that our new dynamic test strategy dramatically outperforms the other strategies when there are not enough resources for tests. In the real world, resources are usually insufficient, so our new strategy is more robust and practical. In the future, we plan to consider how to minimize the total target cost with partial cost-resource exchanging. In some situations, such as medical diagnosis, this scenario is more practical, since many hospitals provide VIP services. We also want to extend our test strategy to a near-optimal batch test.
References

1. Ling, C., Yang, Q., Wang, J., Zhang, S.: Decision Trees with Minimal Costs. In: Proceedings of the 2004 International Conference on Machine Learning (2004)
2. Turney, P.D.: Types of Cost in Inductive Concept Learning. In: Workshop on Cost-Sensitive Learning at the Seventeenth International Conference on Machine Learning, Stanford University, California (2000)
3. UCI Repository of Machine Learning Databases. Irvine, CA: University of California, Department of Information and Computer Science (http://www.ics.uci.edu/~mlearn/MLRepository.html)
4. Turney, P.D.: Cost-Sensitive Classification: Empirical Evaluation of a Hybrid Genetic Decision Tree Induction Algorithm. Journal of Artificial Intelligence Research 2 (1995) 369-409
5. Mitchell, T.M.: Machine Learning. McGraw-Hill (1997)
6. Fayyad, U.M., Irani, K.B.: Multi-Interval Discretization of Continuous-Valued Attributes for Classification Learning. In: Proceedings of the 13th International Joint Conference on Artificial Intelligence. Morgan Kaufmann (1993) 1022-1027
7. Greiner, R., Grove, A.J., Roth, D.: Learning Cost-Sensitive Active Classifiers. Artificial Intelligence 139(2) (2002) 137-174
8. Quinlan, J.R.: C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo, California (1993)
9. Qin, Z., Zhang, C., Zhang, S.: Cost-Sensitive Decision Trees with Multiple Cost Scales. In: Proceedings of the 17th Australian Joint Conference on Artificial Intelligence (AI 2004), Cairns, Queensland, Australia, 6-10 December (2004)
Design of T–S Fuzzy Classifier via Linear Matrix Inequality Approach

Moon Hwan Kim¹, Jin Bae Park¹, Young Hoon Joo², and Ho Jae Lee¹

¹ Department of Electrical and Electronic Engineering, Yonsei University, Seodaemun-gu, Seoul, 120-749, Korea {jmacs, jbpark, mylchi}@yonssei.ac.kr
² School of Electronic and Information Engineering, Kunsan National University, Kunsan, Chonbuk, 573-701, Korea
[email protected]

Abstract. A linear matrix inequality (LMI) approach to designing an accurate classifier with a compact T–S (Takagi–Sugeno) fuzzy rule base is proposed, in which all the elements of the T–S fuzzy classifier design problem are recast as parameters of an LMI optimization problem. A two-step procedure is used to effectively design the T–S fuzzy classifier with its many tuning parameters: antecedent part design and consequent part design. Two LMI optimization problems are then formulated for the two parts and solved efficiently using the interior-point method. The Iris data set is used to evaluate the performance of the proposed approach. In the simulation results, the proposed approach showed superior performance over other approaches.
1 Introduction
Pattern classification plays a crucial role in a large number of applications, including printed and handwritten text recognition [1, 2, 3], speech recognition [4], and human face recognition [5]. A great variety of conventional and computational intelligence techniques for pattern classification can be found in [6]. A pattern classification system can be implemented with various models: linear systems, fuzzy systems, neural networks, etc. [9, 10, 11, 12, 13, 14, 15, 16]. Among them, the fuzzy system has recently been studied by many researchers because it is highly comprehensible and easy to apply to complex classification problems. Fuzzy classification system design can be performed by supervised learning using a set of training data with fuzzy or nonfuzzy labels. Given a pattern, the fuzzy classifier computes the membership value of the pattern in each class and makes decisions based on these membership values. The fuzzy labels of a fuzzy classifier can be defuzzified, whereupon the fuzzy classifier becomes a hard classifier that still uses the idea of fuzziness in the model. The membership function is the key component of fuzzy rule-based systems. However, the conventional way of designing membership functions for fuzzy systems relies on the manual work and experience of experts, which inevitably becomes a bottleneck in fuzzy rule-based system design. In recent years, some researchers have employed the learning ability of various optimization methods to learn the membership functions from training data. Jang developed
an adaptive network-based fuzzy inference system (ANFIS), which is used as a fuzzy controller [7]. Setnes used a genetic algorithm (GA) to optimize the parameters of a fuzzy system [17]. If flexible membership functions and fuzzy rules with certainty grades are determined simultaneously on an accurate and compact fuzzy-rule base, the design of fuzzy classifiers can be regarded as an optimization problem with a large number of tuning parameters. Conventional non-evolutionary optimization methods are hard to apply to determine all the parameters of the fuzzy classifier, because the optimization problem is complex and involves a nonlinear relationship between the antecedent and consequent parts. The performance of an evolutionary optimization method such as the GA also degrades greatly when applied to a large parameter optimization problem (LPOP), as shown by the theoretical analysis in [8]. As a result, the success of formulating the fuzzy classifier design as an LPOP mainly relies on a powerful optimization algorithm to solve the LPOP. In this paper, an LMI approach to designing an accurate classifier with a compact T–S fuzzy-rule base is proposed, with a two-step design procedure. All design elements of the antecedent part are converted into parameters of a convex optimization problem, which is solved efficiently by the LMI optimization method. A theoretical basis is given for designing the antecedent part alone, without any consideration of the consequent part. After all parameters in the antecedent part are determined, the consequent parameters are determined by solving an LMI optimization problem converted from an overdetermined problem. To show the design procedure clearly, two design algorithms are given. The organization of this paper is as follows. Section 2 presents the basic approach to classifier design by LMI. Simulation results that testify to the classifier's performance and the utility of the proposed method are discussed in Section 3.
2 LMI Based Fuzzy Classifier Design
The T–S fuzzy classifier consists of T–S type fuzzy rules of the following form [17]:

R_i: \text{IF } x_1 \text{ is } A_{i1} \text{ and } \ldots \text{ and } x_n \text{ is } A_{in}, \text{ THEN } y_i(x) = b_{i1} x_1 + \ldots + b_{in} x_n + c_i, \quad i = 1, \ldots, l   (1)

where x_j \in \mathbb{R} is the jth feature input, A_{i1}, \ldots, A_{in} are the antecedent fuzzy sets, y_i(x) is the consequent output of the ith rule, x = [x_1, \ldots, x_n]^T \in F \subset \mathbb{R}^n is the input feature vector, F is the feature vector set, b_{ij} and c_i are consequent parameters, and l is the number of fuzzy rules. The output of the T–S type fuzzy rule system is inferred by the following equations:

Y(x) = \frac{\sum_{i=1}^{l} \tau_i(x) y_i(x)}{\sum_{i=1}^{l} \tau_i(x)}   (2)

\tau_i(x) = \prod_{j=1}^{n} \mu_{A_{ij}}(x_j)   (3)
where \tau_i(x) is the firing strength of the ith rule and \mu_{A_{ij}}(x_j) \in [0, 1] is the membership degree of the jth feature in the ith rule. To compute the degrees of class membership for a pattern x, a Gaussian membership function is adopted such that

\mu_{A_{ij}}(x_j) = e^{-\frac{(m_j^i - x_j)^2}{\sigma_j^i}}   (4)
where m_j^i is the center and \sigma_j^i is the width of the jth feature of the ith rule. A fuzzy classifier almost always amounts to a hard classifier, because most pattern recognition systems require hard labels for the objects being classified. In order to convert the soft label Y(x) into a hard label Y_c(x), we use the following mapping:

Y_c(x) = \arg\min_{g} |g - Y(x)|, \quad g \in \{1, \ldots, n\}   (5)

where g is the index of the class and n denotes the number of classes. Designing the T–S fuzzy classifier can thus be treated as a parameter optimization problem over a compact fuzzy rule set. In general, a compact fuzzy rule base means the minimum number of rules for a fuzzy system with good performance; in this paper, we take it to be the minimum-size rule base consisting of one fuzzy rule per class. The main goal of designing the T–S fuzzy classifier is then to determine the parameters of the membership functions in the antecedent and the parameters in the consequent. Figure 1 presents the overall procedure of the proposed design method, and the sketch below illustrates the resulting inference.
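To make the inference pipeline of (1)–(5) concrete, the following sketch (our illustrative code, not part of the paper; all parameter values are placeholders) computes the firing strengths, the aggregated output Y(x), and the hard label Y_c(x):

```python
import numpy as np

def ts_classify(x, m, sigma, B, c, n_classes):
    """T-S fuzzy classifier inference following Eqs. (1)-(5).

    x     : (n,)   feature vector
    m     : (l, n) membership centers m_j^i
    sigma : (l, n) membership widths sigma_j^i, as used in Eq. (4)
    B     : (l, n) consequent slopes b_ij
    c     : (l,)   consequent offsets c_i
    """
    mu = np.exp(-((m - x) ** 2) / sigma)   # Eq. (4): membership degrees
    tau = mu.prod(axis=1)                  # Eq. (3): firing strengths
    y = B @ x + c                          # Eq. (1): rule consequents
    Y = (tau * y).sum() / tau.sum()        # Eq. (2): weighted average
    classes = np.arange(1, n_classes + 1)
    return classes[np.argmin(np.abs(classes - Y))]  # Eq. (5): hard label

# toy usage with two rules, two features, two classes (values made up)
x = np.array([0.3, 0.7])
m = np.array([[0.2, 0.6], [0.8, 0.1]])
sigma = np.full((2, 2), 0.5)
B = np.array([[0.1, 0.2], [0.3, -0.1]])
c = np.array([1.0, 2.0])
print(ts_classify(x, m, sigma, B, c, n_classes=2))
```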
2.1 Identification of Antecedent Part
Notice that the antecedent part cannot be identified without any consideration of the consequent part, because the final output Y_c(x) is calculated from the outputs of both parts. However, the antecedent and consequent parts are hard to design simultaneously, because there is a nonlinear relationship between them that is difficult to handle mathematically. To overcome this difficulty, the classical design goal for the antecedent part is adopted.

Fig. 1. Proposed fuzzy classifier design procedure: in the first step the membership functions of the antecedent part are identified, and in the second step the consequent parameters are identified, each by the LMI optimization method.

For computational convenience, \tau_i(x) can be reformulated as follows:

\tau_i(x) = e^{-\frac{(m_1^i - x_1)^2}{\sigma_1^i}} \times \ldots \times e^{-\frac{(m_n^i - x_n)^2}{\sigma_n^i}} = e^{-\sum_{j=1}^{n} \frac{(m_j^i - x_j)^2}{\sigma_j^i}} = e^{-(x - m_i)^T \sigma_i^T \sigma_i (x - m_i)}   (6)

where \sigma_i = \mathrm{diag}\{1/\sqrt{\sigma_1^i}, \ldots, 1/\sqrt{\sigma_n^i}\} is the diagonal matrix containing the widths of the Gaussian membership functions in the antecedent part of the ith rule, and m_i = [m_1^i, \ldots, m_n^i] represents the center values of the membership functions of the ith rule. In classical fuzzy classifier design, the membership functions should satisfy the following conditions:

\tau_i(x) = 1, \; x \in C_i; \qquad \tau_i(x) = 0, \; x \notin C_i

where C_i denotes the set of data belonging to class i. Therefore, the main design objective can be defined as determining \sigma_i and m_i satisfying

(x - m_i)^T \sigma_i^T \sigma_i (x - m_i) = 0, \quad \forall x \in C_i   (7)

(x - m_i)^T \sigma_i^T \sigma_i (x - m_i) = \infty, \quad \forall x \notin C_i.   (8)
Condition (7) can be formulated directly as a constraint of an LMI optimization problem. However, condition (8) is not easy to handle because it is non-convex. To overcome this difficulty, we relax condition (8) to the condition of minimizing \sigma_i.

Theorem 1. If x belongs to class C_i, the parameters \sigma_i and m_i of the membership functions in the antecedent part of rule i are determined by solving the following generalized eigenvalue problem (GEVP):

\text{Minimize}_{q_i, \sigma_i} \; \gamma \quad \text{subject to} \quad \gamma W > \sigma_i > 0, \; \gamma > 0   (9)

\begin{bmatrix} \gamma & \star \\ \sigma_i x - q_i & \gamma I \end{bmatrix} > 0, \quad \forall x \in C_i   (10)

where q_i = \sigma_i m_i, W = \mathrm{diag}\{w_1, \ldots, w_n\} is a diagonal matrix, w_i is the variance of the ith feature, and \star denotes the transposed element for the symmetric position.
Proof. The proof is omitted due to lack of space.

Theorem 1 gives a method to determine the membership functions in the antecedent part of a single rule. We can apply this method to each fuzzy rule and obtain well-identified membership functions.
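Theorem 1 can be prototyped with an off-the-shelf convex solver. The sketch below is our own reading, not the authors' code: it interprets the matrix inequality (10) as the Schur-complement form of ||sigma_i x - q_i||_2 <= gamma and states it as a second-order cone constraint; cvxpy, the small lower bound on sigma_i (added to avoid the trivial all-zero solution), and all names are our assumptions.

```python
import numpy as np
import cvxpy as cp

def fit_antecedent(X_class, w, eps=1e-3):
    """Sketch of the GEVP of Theorem 1 for one rule/class.

    X_class : (N, n) training vectors belonging to class C_i
    w       : (n,)   per-feature variances, the diagonal of W
    Returns the diagonal of sigma_i and the center m_i.
    """
    n = X_class.shape[1]
    sigma = cp.Variable(n)     # diagonal entries of sigma_i
    q = cp.Variable(n)         # q_i = sigma_i m_i
    gamma = cp.Variable()

    constraints = [gamma >= 0, sigma >= eps, gamma * w >= sigma]
    # cone form of (10): ||sigma_i x - q_i||_2 <= gamma for every x in C_i
    constraints += [cp.SOC(gamma, cp.multiply(sigma, x) - q) for x in X_class]

    cp.Problem(cp.Minimize(gamma), constraints).solve()
    return sigma.value, q.value / sigma.value   # sigma_i, then m_i
```

The center is recovered as m_i = sigma_i^{-1} q_i, mirroring the definition q_i = sigma_i m_i in the theorem.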
2.2 Identification of Consequent Part
After the parameters of the membership functions in the antecedent part are determined, the consequent parameters should be identified. For computational convenience, Y(x) can also be represented in the following matrix form:

Y(x) = D^T (Bx + C)   (11)

where

D = \begin{bmatrix} d_1(x) \\ \vdots \\ d_l(x) \end{bmatrix}, \quad B = \begin{bmatrix} b_{11} & \cdots & b_{1n} \\ \vdots & & \vdots \\ b_{l1} & \cdots & b_{ln} \end{bmatrix}, \quad C = \begin{bmatrix} c_1 \\ \vdots \\ c_l \end{bmatrix},   (12)

d_i(x) = \frac{\tau_i(x)}{\sum_{j=1}^{l} \tau_j(x)}.   (13)
Assume that the parameters of the antecedent part are completely determined. Given D and x, we can formulate the following key equation:

Y_d = D^T (Bx + C), \quad \forall x \in F   (14)

where Y_d is the desired output and is set to the index of the class. Finally, by finding B and C satisfying (14), we obtain the desired output Y(x). Notice that (14) can be converted into an LMI optimization problem directly. Theorem 2 states the GEVP for determining B and C in the consequent part.

Theorem 2. If x, Y_d, and D are given, B and C of the proposed T–S fuzzy classifier are determined by solving the following GEVP:

\text{Minimize}_{B, C} \; \gamma \quad \text{subject to} \quad \gamma > 0, \quad \begin{bmatrix} \gamma & \star \\ Y_d - D^T(Bx + C) & I \end{bmatrix} > 0, \quad \forall x \in F.   (15)

Proof. The proof is omitted due to lack of space.
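Since (14) is linear in B and C once the firing strengths are fixed, the consequent design can also be prototyped without an LMI solver by ordinary least squares on the stacked system; this is our simplification for illustration, not the GEVP of Theorem 2:

```python
import numpy as np

def fit_consequent(X, Yd, tau):
    """Least-squares sketch of the consequent identification, Eq. (14).

    X   : (N, n) feature vectors
    Yd  : (N,)   desired class indices
    tau : (N, l) firing strengths tau_i(x) for every sample
    """
    N, n = X.shape
    l = tau.shape[1]
    D = tau / tau.sum(axis=1, keepdims=True)         # d_i(x) of Eq. (13)
    # Y(x) = sum_i d_i(x) (b_i^T x + c_i) is linear in the l*(n+1) unknowns
    A = np.concatenate([D[:, :, None] * X[:, None, :], D[:, :, None]], axis=2)
    theta, *_ = np.linalg.lstsq(A.reshape(N, l * (n + 1)), Yd, rcond=None)
    theta = theta.reshape(l, n + 1)
    return theta[:, :n], theta[:, n]                 # B (l, n) and C (l,)
```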
3 Simulation
The iris database created by Fisher is a common benchmark in classification and pattern recognition studies. It has four feature variables: sepal length, sepal width, petal length, and petal width, and consists of 150 feature vectors: 50 for each of Iris setosa, Iris versicolor, and Iris virginica. Figure 2 shows the design steps for the iris data.

Fig. 2. Design procedure for iris classification: calculate the variances of each feature, identify the membership functions in the antecedent part, calculate the firing strengths, identify the consequent parameters by the LMI optimization method, and evaluate the performance.

Table 1 shows some results of well-known classifier systems. For example, Wang et al. [18] applied a self-adaptive neuro-fuzzy inference system (SANFIS) that is capable of self-adapting and self-organizing its internal structure to acquire a parsimonious rule base for interpreting the embedded knowledge of a system from the given training data set. They derived a three-rule SANFIS system with 97.5% correct recognition. Abonyi et al. [19] proposed a new data-driven method to design compact fuzzy classifiers by combining a genetic algorithm, a decision-tree initialization, and a similarity-driven rule reduction technique. The final system had three fuzzy rules with 96.11% correct recognition (six misclassifications). Abe et al. [20] discussed a fuzzy classifier with ellipsoidal regions. They applied clustering techniques to extract fuzzy rules, with one rule around one cluster center, and then tuned the slopes of the membership functions to obtain a high recognition rate; they obtained a fuzzy classifier with a recognition rate of 98.7% (two misclassifications). Shi et al. [21] applied an integer-code genetic algorithm to learn a Mamdani-type fuzzy system for classifying the iris data by training on all 150 patterns. After several trials with different learning options, a four-rule fuzzy system was obtained with 98% correct recognition
(three misclassifications). Russo [22] applied a hybrid GA neuro-fuzzy approach to learn a fuzzy model for the iris data, deriving a five-rule fuzzy system with 18 fuzzy sets and no misclassifications. Ishibuchi et al. [23] applied all 150 samples in the training process and derived a fuzzy classifier with five rules, obtaining 98.0% correct recognition (three misclassifications).

For the iris example, we use all 150 patterns to design a fuzzy classifier with the proposed method. Since the iris example has three classes, the number of compact fuzzy rules is three. \sigma_i and m_i in the antecedent part and B and C in the consequent part are identified. Figure 3 shows the identified membership functions. The consequent parameters are determined as

B = \begin{bmatrix} -0.0000 & 0.0000 & -0.0001 \\ -0.1121 & -0.2234 & 0.0029 \\ -0.1020 & -0.0624 & 0.1276 \end{bmatrix}, \quad C = \begin{bmatrix} 0.6667 \\ 1.7547 \\ 1.8412 \end{bmatrix}.
Table 1 compares the above fuzzy classifier with other well-known classifier systems in terms of the number of rules and the classification accuracy. The resulting system attains the highest accuracy using the smallest number of term sets. To estimate the performance of the proposed method on unseen data, a five-fold cross-validation experiment was performed on the iris data. The normalized iris data were divided into five disjoint groups, each containing 30 patterns (ten patterns from each of the three classes). We then derived a fuzzy classifier via the proposed method on all data outside one group and tested the resulting classifier on the data inside that group, yielding five fuzzy classifiers. Table 2 reports the results of the five-fold cross-validation. The average classification result is 97.64% correct (about 2.4 misclassifications) on the training data and 97.30% correct (about 0.8 misclassifications) on the test data using 3 rules.
4 Conclusions
This paper proposed an automatic method for designing an accurate classifier with a compact fuzzy-rule base using an LMI optimization method. All the elements of the T–S fuzzy classifier design problem are cast as parameters of LMI optimization problems, which are solved effectively by the interior-point method even for various-dimensional fuzzy classifiers with many tuning parameters. A two-step design procedure is used: antecedent part design followed by consequent part design. Some theoretical analysis is given so that the parameters in the antecedent part can be determined without any consideration of the consequent parameters. After the antecedent part is designed, the consequent parameters are determined based on the predetermined firing strengths of the antecedent part. In the simulation, the high performance of the proposed method is validated on the iris data.
Fig. 3. Membership functions for the iris data (training simulation): (a) feature x1, (b) feature x2, (c) feature x3, (d) feature x4. Each panel plots the three rules' membership functions over the normalized feature range.

Table 1. Comparison of classification results for iris data

Method            Rules  Classification accuracy (%)
Wang et al.       3      97.5
Abonyi et al.     3      98.3
This paper        3      98.7
Shi et al.        4      98.0
Russo             5      100
Ishibuchi et al.  5      98.0
Table 2. Five-fold cross-validation results on iris data

Fold     Rules  Training patterns  Misclassifications (training)  Accuracy (training, %)  Testing patterns  Misclassifications (testing)  Accuracy (testing, %)
1        3      120                3                              97.5                    30                0                             100
2        3      120                1                              99.2                    30                2                             93.3
3        3      120                3                              97.5                    30                1                             96.7
4        3      120                2                              98.3                    30                0                             100
5        3      120                3                              97.5                    30                1                             96.7
Average  3      120                2.4                            97.63                   30                0.8                           97.3
References
1. Chen M., Kundu A., Zhou J.: Off-line handwritten word recognition using a hidden Markov model type stochastic network. IEEE Trans. Pattern Anal. Mach. Intel. 16 (1994) 481–496
2. Cohen E.: Computational theory for interpreting handwritten text in constrained domains. Artif. Intell. 67 (1994) 1–31
3. Partizeau M., Plamondon R.: A fuzzy-syntactic approach to allograph modeling for cursive script recognition. IEEE Trans. Pattern Anal. Mach. Intel. 17 (1995) 702–712
4. Bourlard H., Morgan N.: Connectionist Speech Recognition: A Hybrid Approach. Boston, MA: Kluwer Academic (1994)
5. Lam K. M., Yan H.: Locating and extracting the eye in human face images. Pattern Recog. 29 (1996) 771–779
6. Schalkoff R.: Pattern Recognition: Statistical, Structural and Neural Approaches. New York, Wiley (1992)
7. Jang J. S. R.: Fuzzy controller design without domain experts. Proc. IEEE Int. Conf. Fuzzy Systems, San Diego, CA, Mar. (1992) 289–296
8. Kumar K., Narayanaswamy S., Garg S.: Solving large parameter optimization problems using a genetic algorithm with stochastic coding. In Winter G., Periaux J., Galan M., Cuesta P. (eds.): Genetic Algorithms in Engineering and Computer Science. New York, Wiley (1995)
9. Ishibuchi H., Murata T., Turksen I. B.: Single-objective and two-objective genetic algorithms for selecting linguistic rules for pattern classification problems. Fuzzy Sets Syst. 89 (1997) 135–150
10. Wang C.-H., Hong T.-P., Tseng S.-S.: Integrating fuzzy knowledge by genetic algorithms. Fuzzy Sets Syst. 2 (1998) 138–149
11. Ishibuchi H., Nakashima N., Murata T.: Performance evaluation of fuzzy classifier systems for multidimensional pattern classification problems. IEEE Trans. Syst., Man, Cybern. B 29 (1999) 601–618
12. Hall L. O., Ozyurt I. B., Bezdek J. C.: Clustering with genetically optimized approach. IEEE Trans. Evolut. Comput. (1999) 103–112
13. Hwang H.-S.: Control strategy for optimal compromise between trip time and energy consumption in a high-speed railway. IEEE Trans. Syst., Man, Cybern. A 28 (1998) 791–802
14. Jagielska I., Matthews C., Whitfort T.: An investigation into the application of neural networks, fuzzy logic, genetic algorithms, and rough sets to automated knowledge acquisition for classification problems. Neurocomputing 24 (1999) 37–54
15. Russo M.: FuGeNeSys: a fuzzy genetic neural system for fuzzy modeling. IEEE Trans. Fuzzy Syst. 6 (1998) 373–388
16. Wang L., Yen J.: Extracting fuzzy rules for system modeling using a hybrid of genetic algorithms and Kalman filter. Fuzzy Sets Syst. 101 (1999) 353–362
17. Setnes M., Roubos H.: GA-fuzzy modeling and classification: complexity and performance. IEEE Trans. Fuzzy Syst. 8 (2000) 509–522
18. Wang J. S., Lee G. C. S.: Self-adaptive neuro-fuzzy inference system for classification application. IEEE Trans. Fuzzy Syst. 10 (2002) 790–802
19. Abonyi J., Roubos J. A., Szeifert F.: Data-driven generation of compact, accurate, and linguistically sound fuzzy classifiers based on a decision-tree initialization. Int. J. Approx. Reason. 32 (2003) 1–21
20. Abe S., Thawonmas R.: A fuzzy classifier with ellipsoidal regions. IEEE Trans. Fuzzy Syst. 5 (1997) 358–368
21. Shi Y., Eberhart R., Chen Y.: Implementation of evolutionary fuzzy system. IEEE Trans. Fuzzy Syst. 7 (1999) 109–119
22. Russo M.: Genetic fuzzy learning. IEEE Trans. Evolut. Comput. 4 (2000) 259–273
23. Ishibuchi H., Nakashima T., Murata T.: Three-objective genetic-based machine learning for linguistic rule extraction. Inf. Sci. 136 (2001) 109–133
Design of Fuzzy Rule-Based Classifier: Pruning and Learning

Do Wan Kim1, Jin Bae Park1, and Young Hoon Joo2

1 Yonsei University, Seodaemun-gu, Seoul, 120-749, Korea {dwkim, jbpark}@yonsei.ac.kr
2 Kunsan National University, Kunsan, Chunbuk, 573-701, Korea [email protected]

Abstract. This paper presents new pruning and learning methods for the fuzzy rule-based classifier. For the simplicity of the model structure, the unnecessary features of each fuzzy rule are eliminated through an iterative pruning algorithm. The quality of a feature is measured by the proposed correctness method, defined as the ratio of the fuzzy values for the feature values on the decision region to those for all feature values. For the improvement of the classification performance, the parameters of the proposed classifier are adjusted by the gradient descent method so that the misclassified feature vectors are correctly recategorized. Finally, the fuzzy rule-based classifier is tested on two data sets and demonstrates excellent performance.
1 Introduction
The fuzzy rule-based classifier is a popular counterpart of fuzzy control systems and fuzzy modeling [1, 2]; it carries out pattern classification by using the membership grades of the feature variables. If there is a contribution of the fuzzy rule-based classifier, it lies in guiding the steps by which one takes knowledge in a linguistic form and casts it into discriminant functions [3]. Numerous studies [9, 4, 10, 11, 5, 6, 12, 13, 14, 7, 8] have shown the excellent pattern classification capabilities of the fuzzy rule-based classifier. In the design of the fuzzy rule-based classifier, there are two main issues: the model complexity and the classification performance. If too many free parameters are used, there is a danger of overfitting; conversely, if too few parameters are used, the training set may not be learned. Thus, the design process can be divided into two strategies: feature selection and learning. For the simplicity of the model, one possibility is to reduce the dimensionality by selecting an appropriate subset of the existing features. The key point in feature selection is the measure of the quality of a set of features, which concerns some measure of their predictive power. An attractive approach [4, 5, 6] is to measure the similarity, i.e., the overlapping degree, between two fuzzy sets. For the improvement of the classification performance, learning in the fuzzy rule-based classifier can be formulated as minimizing a cost function. In [7, 8], the cost function was selected as the squared-error between the
classifier output and the desired value, and the learning rules are derived by forming the gradient. The classifier outputs for the correct class and the rest converge to the upper bound and the lower bound, respectively, as the cost function approaches zero. Despite these excellent previous studies [4, 5, 6, 7, 8], critical issues remain. In [4, 5, 6], measuring the overlapping degree is not an accurate criterion for the classification problem, because not all data on the overlapping region can be judged as classification errors. Moreover, a measure defined between two classes may not be efficient in the multicategory case. In [7, 8], the desired values for the correct class and the rest are defined as the upper bound and the lower bound of the classifier outputs, respectively. However, these desired values are very strict conditions for the learning objective, which is only that the classifier output for the correct class becomes larger than that for the rest as the cost function approaches zero. These strict conditions make the learning inefficient. In addition, this approach cannot be applied to a classifier with unbounded outputs. This paper aims at developing a fuzzy rule-based classifier addressing both the model complexity and the classification capability. The main contributions of this paper are to measure the quality of the given feature variables by a new analysis technique of fuzzy sets, the correctness method, and to derive learning rules for a classifier without an upper bound. In the proposed pruning method, an appropriate subset of the existing features for each fuzzy rule is selected through an iterative pruning algorithm. To measure the quality of the features, the correctness degree is defined as the ratio of the fuzzy values for the feature values on the decision region to those for all feature values. In the proposed learning method, the related parameters of the proposed classifier are tuned by the gradient descent method so that the misclassified feature vectors are correctly re-categorized.
2 Fuzzy Rule-Based Classifier

2.1 Initialization
For a given feature vector x, the proposed fuzzy rule-based classifier is formulated in the following form:

R_i: \text{IF } x_1 \text{ is } \Gamma_{i1} \text{ and } \ldots \text{ and } x_n \text{ is } \Gamma_{in}, \text{ THEN } y_i = d_i^c(x)   (1)

where y_i is the output of R_i,

\Gamma_{ih}(x_h) = \exp\left(-\frac{1}{2}\left(\frac{x_h - m_{ih}^p}{\sigma_{ih}^p}\right)^2\right), \qquad d_i^c(x) = \frac{1}{(2\pi)^{n/2} \prod_{h=1}^{n} \sigma_{ih}^c} \exp\left(-\frac{1}{2}\sum_{h=1}^{n}\left(\frac{x_h - m_{ih}^c}{\sigma_{ih}^c}\right)^2\right).

The unknown parameters m_{ih}^p, m_{ih}^c, \sigma_{ih}^p, and \sigma_{ih}^c are initially identified by using the arithmetic average and the standard deviation of the training data.
The ith final output of (1) is inferred as follows:

y_i(x) = d_i^p(x)\, d_i^c(x)   (2)

where d_i^p(x) = \max_{h \in I_n} \Gamma_{ih}(x_h), using the maximum inference engine and the singleton fuzzifier. From (2), the proposed classifier assigns a feature vector x to class i if

y_i(x) > y_j(x), \quad \forall j \neq i.   (3)

The effect of y_i(x) is to separate the feature space into m decision regions R_1, R_2, \ldots, R_m.
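The inference of (1)–(3) can be sketched compactly as follows (our illustration, not the authors' code; the parameter arrays would come from the initialization of Sect. 2.1):

```python
import numpy as np

def classify(x, mp, sp, mc, sc):
    """Inference of the fuzzy rule-based classifier, Eqs. (1)-(3).

    mp, sp : (m, n) premise centers/widths   (m rules, n features)
    mc, sc : (m, n) consequent centers/widths
    """
    n = x.shape[0]
    gamma = np.exp(-0.5 * ((x - mp) / sp) ** 2)   # Gamma_ih(x_h) of Eq. (1)
    dp = gamma.max(axis=1)                        # d_i^p(x): max inference
    norm = (2 * np.pi) ** (n / 2) * sc.prod(axis=1)
    dc = np.exp(-0.5 * (((x - mc) / sc) ** 2).sum(axis=1)) / norm  # d_i^c(x)
    y = dp * dc                                   # Eq. (2)
    return int(np.argmax(y))                      # class index per Eq. (3)
```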
2.2 Pruning
We suggest a pruning algorithm based on the analysis of the fuzzy sets for eliminating the unnecessary features of each fuzzy rule. The difficulty in selecting an unnecessary feature is that pruning a feature has a direct impact on the classification performance, mainly because the decision regions change when a labelled feature is eliminated. Thus, in the analysis of the fuzzy set, the class label and the decision region are important considerations. In addition, for general applicability, the analysis technique should be usable in the multicategory case. These issues are formulated as follows:

Problem 1 (The analysis of the fuzzy set for pruning the feature). If the analysis technique of the fuzzy set is used for checking whether a feature is unnecessary for supervised learning, it should satisfy the following conditions: (i) the analysis tool must consider the class label and the decision region, which are important concepts in supervised learning; (ii) the analysis technique must be simply applicable in the multicategory case.

To resolve Problem 1, the correctness degree of the fuzzy set \Gamma_{ih} for the hth feature x_h labelled as w_i is measured by using the cardinality, as follows.

Definition 1. The correctness degree of \Gamma_{ih} is defined as

C(\Gamma_{ih}) = \frac{|\Gamma_{ih}|_{(x_h \in w_i) \in R_{ih}}}{|\Gamma_{ih}|_{x_h \in w_i}}   (4)

where |\cdot| denotes the cardinality of a set, and R_{ih} is the one-dimensional fuzzy region in which d_i^p(x_h) > d_j^p(x_h) for all j \neq i. By applying the definition of the cardinality, (4) becomes

C(\Gamma_{ih}) = \frac{\sum_{(x_h \in w_i) \in R_{ih}} \Gamma_{ih}(x_h)}{\sum_{x_h \in w_i} \Gamma_{ih}(x_h)}.   (5)
Specifically, if \Gamma_{ih} is not overlapped with the others and/or all x_h \in w_i fall in R_{ih}, then C(\Gamma_{ih}) = 1. Conversely, if \Gamma_{ih} is completely overlapped with the others and/or no x_h \in w_i falls in R_{ih}, then C(\Gamma_{ih}) = 0. Using the correctness degree (4), we set the criteria for selecting the fuzzy rule and the feature, which are applied in the proposed pruning algorithm. To select the fuzzy rule, the following average correctness degree is employed:

\bar{C}(\Gamma_i) = \frac{1}{n}\sum_{h=1}^{n}\frac{\sum_{(x_h \in w_i) \in R_{ih}} \Gamma_{ih}(x_h)}{\sum_{x_h \in w_i} \Gamma_{ih}(x_h)}   (6)
The proposed pruning algorithm is as follows (a sketch of the correctness computation follows after the remark).

Step 1. Select the fuzzy rule in decreasing order of (6).
Step 2. Prune any feature of the selected fuzzy rule whose removal improves the recognition rate, examining the features in increasing order of (4).
Step 3. If no feature of the selected fuzzy rule is pruned, stop the algorithm; otherwise, update (4) and repeat from Step 1.

Remark 1. The proposed method can be applied in both prepruning and postpruning.
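The correctness degree itself reduces to two masked sums; a minimal sketch (our code; the decision-region mask is assumed to be precomputed from the d_i^p comparisons):

```python
import numpy as np

def correctness(gamma_vals, in_region):
    """Correctness degree C(Gamma_ih) of Eq. (5).

    gamma_vals : (N,) memberships Gamma_ih(x_h) of the x_h labelled w_i
    in_region  : (N,) boolean mask, True where x_h lies in the 1-D decision
                 region R_ih, i.e. d_i^p(x_h) > d_j^p(x_h) for all j != i
    """
    return gamma_vals[in_region].sum() / gamma_vals.sum()

# rule-level score of Eq. (6): the mean of (5) over the n features, e.g.
# C_bar = np.mean([correctness(G[:, h], R[:, h]) for h in range(n)])
```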
2.3 Learning
Our goal is to develop a learning technique, formulated as minimizing a cost function J, for the parameters of (1) so that the feature vectors misclassified up to the preceding step are correctly re-categorized. To this end, we formulate the following problem.

Problem 2 (Learning parameters for the misclassified feature vectors). To correctly categorize the misclassified feature vector x labelled as w_i, the parameters \sigma_{ih}^p, \sigma_{ih}^c, m_{ih}^p, and m_{ih}^c of (1) should be adjusted so as to satisfy y_i(x) > y_j(x) for all j \neq i.

Define the cost function

J = \frac{\left(\max_{j \in I_m, j \neq i} y_j(x) + \varepsilon - y_i(x)\right)^2}{2}.   (7)

It is noticed that, because \varepsilon > 0, y_i(x) obviously becomes larger than y_j(x) for all j \neq i as J approaches zero. The problem of minimizing this squared error can be solved numerically by a gradient descent method. The following theorems give the gradient descent rules to tune the parameters \sigma_{ih}^p, \sigma_{ih}^c, m_{ih}^p, and m_{ih}^c of (1).
Theorem 1. Given the misclassified feature vector x labelled as w_i, the premise parameters m_{ih}^p and \sigma_{ih}^p of (1) can be adjusted by the following learning rules: for the class i,

\Delta m_{ih}^p = \alpha_1 \left(\max_{j \in I_m, j \neq i} d_j^p(x) d_j^c(x) - d_i^p(x) d_i^c(x) + \varepsilon\right) d_i^c(x) \frac{\partial d_i^p(x)}{\partial m_{ih}^p}   (8)

\Delta \sigma_{ih}^p = \alpha_2 \left(\max_{j \in I_m, j \neq i} d_j^p(x) d_j^c(x) - d_i^p(x) d_i^c(x) + \varepsilon\right) d_i^c(x) \frac{\partial d_i^p(x)}{\partial \sigma_{ih}^p}   (9)

and for the class j,

\Delta m_{jh}^p = -\beta_1 \left(\max_{j \in I_m, j \neq i} d_j^p(x) d_j^c(x) - d_i^p(x) d_i^c(x) + \varepsilon\right) d_j^c(x) \frac{\partial d_j^p(x)}{\partial m_{jh}^p}   (10)

\Delta \sigma_{jh}^p = -\beta_2 \left(\max_{j \in I_m, j \neq i} d_j^p(x) d_j^c(x) - d_i^p(x) d_i^c(x) + \varepsilon\right) d_j^c(x) \frac{\partial d_j^p(x)}{\partial \sigma_{jh}^p}   (11)

where \alpha_1, \alpha_2, \beta_1, and \beta_2 are the learning rates for m_{ih}^p, \sigma_{ih}^p, m_{jh}^p, and \sigma_{jh}^p, respectively.
Proof. The proof is omitted due to lack of space.

Theorem 2. The consequent parameters of the fuzzy model can be adjusted by the following learning rules: for the class i,

\Delta m_{ih}^c = \gamma_1 \left(\max_{j \in I_m, j \neq i} d_j^p(x) d_j^c(x) - d_i^p(x) d_i^c(x) + \varepsilon\right) d_i^p(x) \frac{\partial d_i^c(x)}{\partial m_{ih}^c}   (12)

\Delta \sigma_{ih}^c = \gamma_2 \left(\max_{j \in I_m, j \neq i} d_j^p(x) d_j^c(x) - d_i^p(x) d_i^c(x) + \varepsilon\right) d_i^p(x) \frac{\partial d_i^c(x)}{\partial \sigma_{ih}^c}   (13)

and for the class j,

\Delta m_{jh}^c = -\delta_1 \left(\max_{j \in I_m, j \neq i} d_j^p(x) d_j^c(x) - d_i^p(x) d_i^c(x) + \varepsilon\right) d_j^p(x) \frac{\partial d_j^c(x)}{\partial m_{jh}^c}   (14)

\Delta \sigma_{jh}^c = -\delta_2 \left(\max_{j \in I_m, j \neq i} d_j^p(x) d_j^c(x) - d_i^p(x) d_i^c(x) + \varepsilon\right) d_j^p(x) \frac{\partial d_j^c(x)}{\partial \sigma_{jh}^c}   (15)

where \gamma_1, \gamma_2, \delta_1, and \delta_2 are the learning rates for m_{ih}^c, \sigma_{ih}^c, m_{jh}^c, and \sigma_{jh}^c, respectively.
Proof. The proof is omitted due to lack of space.
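One sweep of these updates for a misclassified x can be sketched as follows (our code, not the authors'; only the premise rules (8)–(9) for the correct class i are shown, the other cases being symmetric; eps and the learning rate are placeholders, and the derivative of d_i^p is taken at the maximizing feature only, per the max in (2)):

```python
import numpy as np

def premise_update(x, i, j, mp, sp, dp, dc, eps=0.1, alpha=0.01):
    """In-place application of the learning rules (8)-(9) for class i.

    i, j   : indices of the correct class and the strongest wrong class
    dp, dc : (m,) current values d_i^p(x) and d_i^c(x) for all rules
    """
    err = dp[j] * dc[j] - dp[i] * dc[i] + eps      # common factor of (8)-(11)
    gam = np.exp(-0.5 * ((x - mp[i]) / sp[i]) ** 2)
    h = int(np.argmax(gam))                        # d_i^p depends on x_h only
    dG_dm = gam[h] * (x[h] - mp[i, h]) / sp[i, h] ** 2
    dG_ds = gam[h] * (x[h] - mp[i, h]) ** 2 / sp[i, h] ** 3
    mp[i, h] += alpha * err * dc[i] * dG_dm        # Eq. (8)
    sp[i, h] += alpha * err * dc[i] * dG_ds        # Eq. (9)
```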
3 Computer Simulations

3.1 Iris Data
The Iris data [15] is a common benchmark in pattern recognition studies [13, 12, 14]. It has four features: x_1 (sepal length), x_2 (sepal width), x_3 (petal length), and x_4 (petal width), and consists of 150 feature vectors: 50 for each of Iris setosa, Iris versicolor, and Iris virginica. Iris setosa is linearly separable from the others; the latter are not linearly separable from each other. To show the effectiveness of the proposed classifier, we provide a simulation for the Iris data, with all 150 feature vectors selected as training data. Following the design procedure, the proposed fuzzy rule-based classifier is designed in the following steps.

Step 1. Initialization. In the premise parts of (1), m_{ih}^F and \sigma_{ih}^F are identified by the mean and the standard deviation of the Iris data, as shown in Table 1.

Table 1. Parameter values of the initial fuzzy rule-based classifier

i  \sigma_{i1}^{F,B}  m_{i1}^{F,B}  \sigma_{i2}^{F,B}  m_{i2}^{F,B}  \sigma_{i3}^{F,B}  m_{i3}^{F,B}  \sigma_{i4}^{F,B}  m_{i4}^{F,B}
1  0.3525             5.0060        0.3791             3.4280        0.1737             1.4620        0.1054             0.2460
2  0.5173             5.9340        0.3161             2.7640        0.4713             4.2480        0.2003             1.3220
3  0.6359             6.5880        0.3225             2.9740        0.5519             5.5520        0.2747             2.0260

In the consequent parts of (1), the mean vectors, the covariance matrices, and the prior probabilities are obtained as follows:
m_1 = [5.0060, 3.4280, 1.4620, 0.2460]^T, \quad \Sigma_1 = \mathrm{diag}(0.1242, 0.1437, 0.0302, 0.0111), \quad P(C_1) = 50/150,

m_2 = [5.9340, 2.7640, 4.2480, 1.3220]^T, \quad \Sigma_2 = \mathrm{diag}(0.2676, 0.0999, 0.2221, 0.0401), \quad P(C_2) = 50/150,

m_3 = [6.5880, 2.9740, 5.5520, 2.0260]^T, \quad \Sigma_3 = \mathrm{diag}(0.4043, 0.1040, 0.3046, 0.0754), \quad P(C_3) = 50/150.
Then, the initial classifier is given by

R_1: IF x_1 is A_{11} and x_2 is A_{12} and x_3 is A_{13} and x_4 is A_{14}, THEN y_1 = d_1^B(x)
R_2: IF x_1 is A_{21} and x_2 is A_{22} and x_3 is A_{23} and x_4 is A_{24}, THEN y_2 = d_2^B(x)
R_3: IF x_1 is A_{31} and x_2 is A_{32} and x_3 is A_{33} and x_4 is A_{34}, THEN y_3 = d_3^B(x)   (16)
where x = [x1 , x2 , x3 , x4 ]T . The recognition rate of the initial classifier (16) is 95.33%.
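Step 1 amounts to per-class sample statistics. A compact sketch (our code; scikit-learn's load_iris is used only to fetch the data):

```python
import numpy as np
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
rules = []
for cls in np.unique(y):
    Xc = X[y == cls]
    rules.append({
        "m": Xc.mean(axis=0),                      # centers, cf. Table 1
        "s": Xc.std(axis=0, ddof=1),               # widths, cf. Table 1
        "Sigma": np.diag(Xc.var(axis=0, ddof=1)),  # diagonal covariance
        "prior": len(Xc) / len(X),                 # P(C_i) = 50/150
    })
```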
Table 2. Correctness degrees for the initial fuzzy sets

i  A_{i1}   A_{i2}   A_{i3}   A_{i4}
1  0.8930   0.8263   1        0.9999
2  0.7759   0.5486   0.9648   0.9606
3  0.8003   0.4822   0.9730   0.9598

Table 3. Correctness degrees for the fuzzy sets after pruning

i  A_{i1}  A_{i2}  A_{i3}   A_{i4}
1  ∅       ∅       ∅        1
2  ∅       ∅       0.9648   1
3  1       1       0.9730   ∅
Step 2. Pruning. After several iterations of the proposed pruning algorithm, x_1, x_2, x_3 in R_1, x_1, x_2 in R_2, and x_4 in R_3 are effectively eliminated (consistent with Table 3 and the final rules below). Tables 2 and 3 show the correctness degrees of the initial classifier and the pruned classifier, respectively. The complexity of (16) thus decreases rapidly; nevertheless, the recognition rate rises to 96.67%.

Step 3. Learning. Up to the preceding step, the fuzzy rule-based classifier has the following misclassified feature vectors: [5.9, 3.2, 4.8, 1.8]^T, [6.0, 2.7, 5.1, 1.6]^T, and [6.7, 3.0, 5.0, 1.7]^T in Iris versicolor, and [6.0, 2.2, 5.0, 1.5]^T and [4.9, 2.5, 4.5, 1.7]^T in Iris virginica. After several iterations of the learning rules in Theorems 1 and 2, the feature vectors [5.9, 3.2, 4.8, 1.8]^T and [6.0, 2.7, 5.1, 1.6]^T among them are correctly categorized. Finally, the classifier is obtained as follows:

R_1: IF x_4 is A_{14}, THEN y_1 = d_1^B(x_4)
R_2: IF x_3 is A_{23} and x_4 is A_{24}, THEN y_2 = d_2^B(x_3, x_4)
R_3: IF x_1 is A_{31} and x_2 is A_{32} and x_3 is A_{33}, THEN y_3 = d_3^B(x_1, x_2, x_3)   (17)
where the tuned parameter values are given in Tables 4 and 5. Therefore, the recognition rate of the proposed fuzzy rule-based classifier increases from 96.67% to 98%. Table 6 shows that the proposed classifier is superior to other fuzzy rule-based classifiers in terms of both the complexity and the recognition rate.

Table 4. Premise parameters of the fuzzy rule-based classifier after learning

i  \sigma_{i1}^F  m_{i1}^F  \sigma_{i2}^F  m_{i2}^F  \sigma_{i3}^F  m_{i3}^F  \sigma_{i4}^F  m_{i4}^F
1  ∅       ∅       ∅       ∅       ∅       ∅       0.1737  0.2460
2  ∅       ∅       ∅       ∅       0.5338  4.2857  0.2361  1.3384
3  0.6015  6.6231  0.3225  2.9740  0.5163  5.5778  ∅       ∅

Table 5. Consequent parameters of the fuzzy rule-based classifier after learning

i  \sigma_{i1}^B  m_{i1}^B  \sigma_{i2}^B  m_{i2}^B  \sigma_{i3}^B  m_{i3}^B  \sigma_{i4}^B  m_{i4}^B
1  ∅       ∅       ∅       ∅       ∅       ∅       0.1054  0.2460
2  ∅       ∅       ∅       ∅       0.5175  4.2926  0.2312  1.3885
3  0.6375  6.6364  0.3616  3.0079  0.5434  5.6144  ∅       ∅

Table 6. Comparison of classification results on Iris data (150 training data)

Ref.  Number of rules  Number of premise fuzzy sets  Recognition rate
[12]  4                12                            98%
[13]  3                6                             96.67%
[14]  8                16                            95.3%
Ours  3                6                             98%

3.2 Glass Data
The glass data set [16] is based on the chemical analysis of glass splinters. Nine features are used to classify six types of glass: building windows float processed, building windows non-float processed, vehicle windows float processed, containers, tableware, and headlamps. The features are refractive index, sodium, magnesium, aluminum, silicon, potassium, calcium, barium, and iron; the unit of measurement of all features except the refractive index is weight percent in the corresponding oxide. We perform the pattern classification 25 times: in each run, one half of the 214 feature vectors are randomly selected as the training data and the other half are used as the testing data. Table 7 contains the simulation results of the proposed classifier at each step of the design procedure. Although the average number of features per rule is reduced from 9 to 5.61, the average recognition rates on the training and testing sets increase from 47.44% to 68.90% and from 39.48% to 58.28%, respectively. This shows that the proposed design algorithm effectively provides robustness against overfitting along with a reduction of the dimensionality. Moreover, the classification performance of the proposed classifier is better than that of other classifiers, as shown in Table 8.

Table 7. Classification results on glass data

Design procedure  Avg. number of features per rule  Avg. training recognition rate  Avg. testing recognition rate
Initial           9                                 47.44%                          39.48%
Pruning           5.61                              64.11%                          53.23%
Learning          5.61                              68.90%                          58.28%
Table 8. Comparison of classification results on glass data

Ref.  Avg. testing recognition rate
[17]  50.95%
[7]   52.70%
Ours  58.28%
4 Conclusions

In this paper, a novel design approach for the fuzzy rule-based classifier has been proposed to address both the model complexity and the classification performance. Unlike other pruning methods based on the similarity analysis between two fuzzy sets, the proposed method utilizes the correctness degree, which is the major factor in improving the simplicity of the model. In addition, the problem of learning the parameters is formulated as minimizing a cost function, defined as the squared error between the classifier output for the correct class and the sum of the maximum output for the rest and a positive scalar. Finally, computer simulations are given. The results show that the proposed fuzzy rule-based classifier has low complexity, very accurate classification ability, and robustness against overfitting in comparison with conventional classifiers, which indicates great potential for reliable application to pattern recognition.
References
1. Joo Y. H., Hwang H. S., Kim K. B., Woo K. B.: Linguistic model identification for fuzzy system. Electron. Letter 31 (1995) 330-331
2. Joo Y. H., Hwang H. S., Kim K. B., Woo K. B.: Fuzzy system modeling by fuzzy partition and GA hybrid schemes. Fuzzy Set and Syst. 86 (1997) 279-288
3. Duda R. O., Hart P. E., Stork D. G.: Pattern Classification. A Wiley-Interscience publication (2001)
4. Wu T. P., Chen S. M.: A new method for constructing membership functions and fuzzy rules from training examples. IEEE Trans. Syst., Man, Cybern. B. 29 (1999) 25-40
5. Roubos H., Setnes M.: Compact transparent fuzzy models and classifiers through iterative complexity reduction. IEEE Trans. Fuzzy Systems 9 (2001) 516-524
6. Setnes M., Roubos H.: GA-fuzzy modeling and classification: complexity and performance. IEEE Trans. Fuzzy Systems 8 (2000) 509-522
7. Pal N. R., Chakraborty S.: Fuzzy rule extraction from ID3-type decision trees for real data. IEEE Trans. Syst., Man, Cybern. B. 31 (2001) 745-754
8. Paul S., Kumar S.: Subsethood based adaptive linguistic networks for pattern classification. IEEE Trans. Syst., Man, Cybern. C. 33 (2003) 248-258
9. Ishibuchi H., Murata T., Turksen I. B.: Single-objective and two-objective genetic algorithms for selecting linguistic rules for pattern classification problems. Fuzzy Sets Syst. 89 (1997) 135-149
10. Abe S., Thawonmas R.: A fuzzy classifier with ellipsoidal regions. IEEE Trans. Fuzzy Systems 5 (1997) 358-368
11. Thawonmas R., Abe S.: A novel approach to feature selection based on analysis of class regions. IEEE Trans. Syst., Man, Cybern. B. 27 (1997) 196-207
12. Shi Y., Eberhart R., Chen Y.: Implementation of evolutionary fuzzy systems. IEEE Trans. Fuzzy Systems 7 (1999) 109-119
13. Li R., Mukaidono M., Turksen I. B.: A fuzzy neural network for pattern classification and feature selection. Fuzzy Sets Syst. 130 (2002) 101-108
14. Hong T. P., Chen J. B.: Processing individual fuzzy attributes for fuzzy rule induction. Fuzzy Sets Syst. 112 (2000) 127-140
15. Fisher R. A.: The use of multiple measurements in taxonomic problems. Ann. Eugenics. 7 (1936) 179-188
16. Merz C. J., Murphy P. M.: UCI repository of machine learning databases. http://www.ics.uci.edu/~mlearn/MLRepository.html, Irvine, Dept. of Information and Computer Science, Univ. of California, Irvine (1996)
17. Castellano G., Fanelli A. M., Mencar C.: An empirical risk functional to improve learning in a neuro-fuzzy classifier. IEEE Trans. Syst., Man, Cybern. B. 34 (2004) 725-731
Fuzzy Sets Theory Based Region Merging for Robust Image Segmentation

Hongwei Zhu1 and Otman Basir2

1 Pattern Analysis and Machine Intelligence Research Group, University of Waterloo, Waterloo, Ontario, Canada [email protected]
2 Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario, Canada [email protected]

Abstract. A fuzzy set theory based region merging approach is presented to tackle the oversegmentation produced by the watershed algorithm and thus achieve robust image segmentation. A novel hybrid similarity measure is proposed as the merging criterion, built from a region-based similarity and an edge-based similarity, both obtained using fuzzy set theory. To adaptively adjust the influence of each similarity on region merging, a simple but effective weighting scheme is employed, with the weight varying as region merging proceeds. The proposed approach has been applied to various images, including gray-scale and color images. Experimental results demonstrate that it produces quite robust segmentations.
1 Introduction
Many image segmentation methods have been intensively investigated, including edge-based methods, region-based methods, and hybrid methods [1, 2]. Edge-based methods operate on gradient images and segment images using the edge information. Region-based methods segment images by grouping neighboring pixels or regions according to certain similarity (homogeneity) criteria, defined on such features as gray level, color information, or wavelet coefficients. Hybrid methods take into account multiple features for image segmentation, such as edge information and region information. The watershed algorithm presented in [3] represents a typical hybrid method for image segmentation. It begins by creating a gradient of the image to be segmented. Applying the immersion simulation approach to the gradient image produces an initial segmentation with each local minimum of the gradient corresponding to a region. As a result, oversegmentation occurs due to the high sensitivity of the watershed algorithm to pixel intensity variations. As far as watershed based image segmentation approaches are concerned, their segmentation performance relies heavily on the region merging criteria adopted. In [1], a region dissimilarity function is derived from an optimization
point of view, based on the piecewise least-square approximation. This function takes into account the sizes and the mean pixel intensities of the two regions under consideration, and the two most similar regions, having the minimum dissimilarity, are merged first. In [4], a dissimilarity measure is calculated for two neighboring regions as the square of the difference between the summations of the pixel intensity mean, the second-order central moment, and the third-order central moment of each region. The region merging criteria proposed in [1] and [4] take into account only region information, without considering the edge information between neighboring regions. A segmentation algorithm that integrates region and edge information is proposed in [6] for merging neighboring regions. A merit function is defined that takes the edge length into consideration; however, it ignores the important information of pixel intensities and their spatial distribution. A region merging criterion is defined in [5] using the Sugeno fuzzy integral, to take into account region based features (mean and standard deviation of the pixel intensities of a region, region size), edge based features (gradient magnitude), and the expected importance of these features. Its applications sometimes suffer from the difficulty of obtaining appropriate importance values for the adopted features, due to the lack of domain experts' knowledge. A hybrid dissimilarity measure is proposed in [2] for color image segmentation. This measure is defined as a constant-weighted combination of two components: 1) the difference of the mean values of two regions, in terms of the hue component in the HSV color space, and 2) the average gradient magnitude of the pixels on the shared edge between the two regions. Both region information and edge information are exploited; however, they are utilized in a rough manner. First, the region size is not considered, although region sizes have been demonstrated to be quite important for eliminating oversegmentation in watershed based approaches [1]. Secondly, only the edge's strength is utilized while its length is overlooked. Thirdly, the use of constant weights in the hybrid dissimilarity measure is not suitable, since it cannot adapt to the region merging process for robust segmentation performance. To mitigate the above drawbacks, we propose a new hybrid similarity measure based on the fuzzy feature representation of regions and edges using fuzzy set theory. The hybrid similarity is a weighted combination of two similarities, the region based similarity and the edge based similarity, with variable weights that are adapted to the region merging process. The region based similarity takes into account the mean pixel intensity of a region and its size. The edge-based similarity accounts for edge strength and edge length. Each region is formulated as a fuzzy set. The membership assumes a normalized Gaussian function with respect to the mean pixel intensity of the region and the region size. The region-based similarity is therefore obtained by calculating the similarity between two fuzzy sets. It is justified that, when the region-based similarity is used as the region merging criterion, it is equivalent to the optimization derived dissimilarity measure presented in [1]. Furthermore, a second fuzzy set is formulated to represent the edge between two regions. Each pixel on the edge constitutes a member of this fuzzy set with the membership value indicating the
strength of the edge (degree of edgeness [12]) at this pixel. The edge-based similarity is therefore calculated based on the cardinality of this fuzzy set, which not only carries the information of the number of edge pixels, but also reflects the degree of edgeness of the edge pixels. A weight that varies with the region number adaptively adjusts the influence of the two similarities during the region merging process. To examine its effectiveness, the proposed approach has been applied to segment various images, including gray-scale and color images. Experimental results demonstrate that the proposed approach produces robust and meaningful segmentations. In this paper we focus on the design of the new hybrid similarity measure without detailing the watershed based image segmentation procedure. Readers are referred to [7] for the gradient image generation using the Canny edge operator, [3] for the watershed transform algorithm, and [8] for region merging using the region adjacency graph (RAG). The remainder of this paper is organized as follows. Section 2 specifies the fuzzy set based definition of the region-based similarity from a global point of view. Section 3 details how to formulate the local similarity based on the edge information of a region pair. The hybrid similarity measure is defined in Section 4. Experimental results on different images are presented in Section 5, and conclusions are finally provided in Section 6.
2 Region-Based Similarity: A Global View

Let IM_0 denote a grey level image or a color component image to be segmented, with its pixel intensity denoted by g_0(x, y) at point (x, y). Let IM_1 be the corresponding gradient image with pixel intensity g_1(x, y) at point (x, y). Suppose that the current image region under consideration, R_0, has K neighboring regions R_k, k = 1, \cdots, K. Let E_k denote the edge that consists of all the edge pixels between R_0 and R_k. Based on the above notations, we propose a hybrid similarity measure which takes into account both region-based and edge-based features. As far as the similarity of two regions in terms of their pixel intensities is concerned, various relevant features have been utilized in different manners [1, 4, 2, 9]. We adapt the similarity measure originally used for image retrieval in [9] to image segmentation and employ it to calculate the region-based similarity as follows. The underlying idea is that each region is formulated as a fuzzy set, e.g., \tilde{R}_k for R_k, with an associated membership function \mu_{\tilde{R}_k} defined on the pixel intensity g_0. For simplicity of illustration, we allow k to take the value 0, so that R_k also represents R_0. The similarity between two regions is thus obtained using a fuzzy similarity measure. For effectiveness and efficiency of the region-based similarity computation, the key is to design a good membership function. Using the aforementioned notations, we define two region-based features: 1) the region size of R_k, |R_k|, which is the number of pixels in region R_k, and 2) the mean pixel intensity of R_k,

u_k = \frac{1}{|R_k|} \sum_{(x, y) \in R_k} g_0(x, y).   (1)
Then we represent region R_k as a fuzzy set \tilde{R}_k with the membership function assuming the normalized Gaussian function

\mu_{\tilde{R}_k}(g_0) = e^{-(g_0 - u_k)^2 / d^2}   (2)

where d = C / \sqrt{|R_k|} is a parameter controlling the spread of the membership function, and C is an image dependent constant. Given two regions, say R_0 and R_k, their similarity can be calculated by applying fuzzy similarity measures to the fuzzy sets \tilde{R}_0 and \tilde{R}_k. There exist many different definitions of fuzzy similarity measures [10]. Similarly to [9], the employed fuzzy similarity measure is defined as

S(\tilde{R}_0, \tilde{R}_k) = \max_{g_0} \min\{\mu_{\tilde{R}_0}(g_0), \mu_{\tilde{R}_k}(g_0)\}.   (3)

Substituting the membership function of Eq. (2) into Eq. (3) yields

S(\tilde{R}_0, \tilde{R}_k) = e^{-\frac{1}{C^2} \times \frac{|R_0| \times |R_k| \times (u_0 - u_k)^2}{\left(\sqrt{|R_0|} + \sqrt{|R_k|}\right)^2}}   (4)
|R0 | × |Rk | × (u0 − uk ) |R0 | + |Rk |
2
(5)
which is resulted from the piecewise (region-wise) least-square approximation. This is not strange if we rewrite Eq. (4) and Eq. (5) as follows for comparison:
˜0, R ˜ k ) = e−δ (R0 ,Rk )/C S(R
2
with
2
(u0 − uk ) δ (R0 , Rk ) = 2 1 |R0 | + 1 |Rk | and δ(R0 , Rk ) =
(u0 − uk )2 1 /|R0 | + 1 /|Rk |
(6)
(7)
Obviously, both δ(R0 , Rk ) and δ (R0 , Rk ) react, in a same manner, to the variation of |R0 |, |Rk | and (u0 − uk )2 , respectively. Therefore, applying the two measures in Eq. (6) and Eq. (7) to any given set of regions produces the same order of dissimilarity between region pairs though their dissimilarity values are different. As a result, if the similarity measure in Eq. (4) individually serves as the region merging criterion, no doubt it results in the same segmentation as applying the dissimilarity measure in Eq. (5). In this sense, the region-based
430
H. Zhu and O. Basir
similarity measure is able to achieve the piecewise (region-wise) least-square approximation. Next, we determine parameter C in Eq. (4) as: m n C =σ× × (8) 16 16 where m and n denote the IM0 ’s size respectively in the x and y directions, and σ is the overall standard derivation of the pixel intensities in image IM0 .
3
Edge-Based Similarity: A Local View
The above region-based similarity may be considered as a global-level measure as two regions are compared since it takes into account their mean pixel intensities and their sizes only, without considering the local information around their edge. Actually, an edge is an indicative of dissimilarity among neighboring pixels, and it provides an important feature to distinguish discrete objects, from a local point of view. Given two regions R0 and Rk , the edge between them can be represented as a set of pixels on the edge Ek : Ek = {p1 , · · · , p1 , · · · , p|Ek | }
(9)
where pi denotes the ith pixel on edge Ek for i = 1, · · · , |Ek |. |Ek | denotes the number of pixels at Ek . The crisp edge representation in Eq. (9) suggests that a pixel pi belongs to the edge with a full degree of membership. This representation may not be so informative since it overlooks the difference between edge pixels, and is not able to distinguish the degrees of membership to the edge for edge pixels that have different gradient magnitudes. In addition, due to possible influence from noise, it is not reasonable to assign the edge pixels caused by noise with a full degree of membership. In other words, edge pixels should be treated differently in certain proper manner. In view of this, it is natural for us to replace the crisp representation in Eq. (9) using a fuzzy edge representation: µE˜k (g1 (pi )) ˜ Ek = (10) i = 1, · · · , |Ek | pi where µE˜k (g1 (pi )) denote the degree of edgenss of edge pixel pi [12], and it further assumes a specific trapezoidal membership function, i.e., µE˜k (g1 (pi )) = T (g1 (pi ); min g1 , max g1 )
(11)
where min g1 and max g1 denote the minimum and the maximum of gradient gradient magnitude in IM1 . The used trapezoidal membership function is defined as: ⎧ if x < a ⎨0 T (x; a, b) = (x − a)/(b − a) if a ≤ x < b (12) ⎩ 1 if x ≥ b
Fuzzy Sets Theory Based Region Merging for Robust Image Segmentation
431
Having the fuzzy edge representation in Eq. (10), the cardinality (sigma count) of fuzzy set E˜k is defined as: |Ek |
˜k | = |E
µE˜k (g1 (pi ))
(13)
i=1
In the context of fuzzy image processing using fuzzy geometry of images, the ˜k | can be viewed as the length of the fuzzy set along the edge cardinality |E ˜k | carries the information of number of edge pixels direction [11]. Therefore, |E and the degrees of edgeness of pixels on the edge. It provides an overall indicative of the edge’s strength. It is also natural to map the cardinality to the range from 0 ˜k |). to 1, to quantify the overall strongness of the edge using a function, say M (|E ˜ Proper M (|Ek |) would be a monotonically increasing function of cardinality |E˜k |, as will be addressed later. Therefore, the second similarity measure for region merging may be defined as the degree of non-strongness of the edge: S(Ek ) = 1 − M (|E˜k |)
(14)
S(Ek ) denotes the non-separability of region R0 and Rk from the local viewpoint ˜k | in Eq. (13) can be further rewritten as: of their edge. The cardinality |E |Ek |
˜k | = |E
µE˜k (g1 (pi )) = |Ek | × µ ¯E˜k , with µ ¯E˜k =
i=1
|Ek | 1 µ ˜ (g1 (pi )) |Ek | i=1 Ek
(15)
µ ¯E˜k is the average degree of edgeness for edge Ek . For computational efficiency, min g1 and max g1 in Eq. (11) are respectively set as the minimum and the maximum in gradient image IM1 . Thus, the computation of µ ¯E˜k can be simplified as calculating the membership value of the average gradient magnitude g¯1k over all pixels on edge Ek , i.e., µE˜k (¯ g1k ), with g¯1k
|Ek | 1 = g1 (pi ) |Ek | i=1
(16)
This is because the membership function of edgeness in Eq. (11) is guaranteed linear due to the use of min g1 and max g1 respectively being the minimum and the maximum in gradient image IM1 . Therefore, using the above strategy, the ˜k | in Eq. (13) is a simple product of the number of edge pixels and cardinality |E the average edge gradient magnitude: ˜k | = |Ek | × µ ˜ (¯ |E Ek g1k )
(17)
˜k |) used in the edge-based It is time to define the mapping function M (|E similarity definition of Eq. (14). Associated with the simplified representation of ˜k |) as: cardinality in Eq. (17), we define M (|E ˜k |) = 0.5T (|Ek |; |E|a , |E|b ) + 0.5T (µ ˜ (¯ M (|E Ek g1k ); min g1 , max g1 )
(18)
where |E|a and |E|b are two parameters to adjust the effect of edge length to the overall edge strongness. They are respectively set as: |E|a = 1 and |E|b takes the smaller one of m, n.
432
4
H. Zhu and O. Basir
Hybrid Similarity: A Combined View
Having obtained the region-based similarity in Eq. (4) and the edge-based similarity in Eq. (14), a hybrid similarity measure is defined as: ˜0 , R ˜ k ) + (1 − λ) × S(Ek ) Sh (R0 , Rk ) = λ × S(R
(19)
where \lambda \in [0, 1] is a weight adjusting the relative impact of the region-based similarity and the edge-based similarity on the hybrid measure. If \lambda = 1, the hybrid measure is fully determined by the region-based similarity, and the region merging degrades to the approach used in [1], owing to the previously justified equivalence between the use of Eq. (6) and that of Eq. (7) in the region merging criteria. If \lambda = 0, the hybrid measure considers only the local-level edge-based similarity, without taking into account global-level region-based information. Clearly, it is important to determine \lambda properly. \lambda in Eq. (19) is designed to vary automatically with the region merging process, so as to dynamically adjust the influence of the two similarities on the hybrid similarity (i.e., the region merging criterion). The adopted strategy is to decrease the contribution from the region-based similarity and increase that from the edge-based similarity as the region merging process proceeds. The rationale is based on the following observations.

1) There exist many small regions in the initial oversegmentation produced by the watershed algorithm. They are quite homogeneous (in a relative sense, compared with the regions generated later). In nature, each small region may represent an area with uniform or gradually changing pixel intensities, and most of them are likely to correspond to detailed pieces of some large objects. Therefore, it is suitable to merge these small regions with a region merging criterion derived from the piecewise least-square approximation.

2) When serving as the region merging criterion, the fuzzy set theory based region similarity measure in Eq. (4) has been shown to perform equivalently to the dissimilarity measure in [1], which is an optimal region merging criterion from the viewpoint of the piecewise least-square approximation. Therefore, it is reasonable to rely more on the proposed region-based similarity measure at the beginning stage of merging.

3) Most of the edges between regions in the initial oversegmentation are short, and are likely to result from image details or image noise. These edges may not represent real borders between distinct objects. In other words, the edge information in the initial segmentation is unreliable and less representative, and we cannot rely on it too much at the beginning stage of region merging.

4) As region merging proceeds, region sizes become larger and edges become longer and stronger. From an asymptotic point of view, a larger region has a larger variance of pixel intensity; that is, a large region is very likely no longer as uniform as its small component regions. In this sense, in contrast to the early merging stage, it is less suitable to still apply the region merging criterion derived from the piecewise least-square approximation.
5) As region merging proceeds, edges between regions generally become longer, due to the increasing region sizes, and also stronger, since weak edges between similar neighboring regions disappear through earlier merges. Therefore, as merging proceeds, the edge information becomes more and more reliable and representative of real boundaries, and it is reasonable to increase the influence of the edge-based similarity accordingly. Based on the above analysis, we empirically determine the weight \lambda as:

\lambda = 0.1 + 0.9 \times T(r; 1, 100)   (20)

where r denotes the number of surviving regions during the region merging process. When there are more than 100 regions, i.e., r >= 100, \lambda is 1, indicating that only the region-based similarity is taken as the region merging criterion. When the region number drops below 100, the weight \lambda decreases proportionally with the region number, down to a base weight of 0.1, so that the influence of the region-based similarity decreases with the region number while that of the edge-based similarity increases.
5 Experimental Results

The proposed image segmentation approach has been evaluated on different types of images, including gray-scale images and color images. The reported images include Cameraman (gray), Girl (color), Lena (color), head-MRI (gray), and Airplane (F-16) (color). For the color images, the Y-component (luminance) images in the YUV system are taken as the original intensity images. All images are scaled to the size of 256 x 256. The Canny edge operator is first applied to the intensity images to generate gradient images. The gradient images are further preprocessed by setting gradients less than 3% of the maximum gradient magnitude to 0, so as to reduce the number of regions in the initial segmentation produced by the watershed algorithm. Based on the gradient images, the watershed algorithm is carried out to form the initial segmentation, and the proposed approach is then employed for final image segmentation through region merging. For comparison, two region merging criteria are employed: the hybrid similarity measure in Eq. (19) and the region-based similarity measure in Eq. (4). In these experiments, we set the target number of regions in the final segmentations to 19. The results are shown in Fig. 1. For each image (on a row), column (a) presents the original image to be segmented; columns (b) and (d) present the label maps of the segmentation with each region assigned a unique number (label), using the hybrid measure and the region-based measure respectively; columns (c) (using the hybrid measure) and (e) (using the region-based measure) highlight the boundaries (edge maps) in the final segmentations.

Both region merging criteria result in good segmentations based on our visual judgement. Important edges are preserved while meaningful image details
Fig. 1. Segmentation results: (a) image; (b) label map and (c) edge map using the hybrid similarity; (d) label map and (e) edge map using the region-based similarity
are also maintained (see the facial features of Girl and Lena). However, the hybrid measure outperforms the region-based measure. For example, the distant tall building with weak contrast against the sky in the Cameraman image is successfully segmented using the hybrid similarity measure. For the F-16 image, neither measure is able to segment the airplane as one individual region (it is mixed with the mountain in the front when using the hybrid measure, and with the cloud in the upper middle part when using the region-based measure). However, based on human visual judgement, the hybrid measure produces a more reasonable segmentation than the region-based measure, since a more complete boundary of the plane is obtained (last row of Fig. 1).
6 Conclusions
Based on fuzzy set theory, we proposed an effective hybrid similarity measure for robust region merging, to tackle the oversegmentation problem of watershed-based methods. Region-based features and edge-based features collaborate through the hybrid measure in a natural manner during the region merging process, achieving a good balance between the use of global region information and local edge information. Note that if the region-based similarity is used on its own, it produces the same segmentation as the region merging measure based on the piecewise least-square approximation [1]. Experimental results on a large number of varied images have demonstrated the robustness and the effectiveness of the proposed image segmentation approach.
References

1. Haris, K., Efstratiadis, S.N., Maglaveras, N., Katsaggelos, A.K.: Hybrid image segmentation using watersheds and fast region merging. IEEE Trans. Image Processing 7 (1998) 1684-1699
2. Navon, E., Miller, O., Averbuch, A.: Color image segmentation based on adaptive local thresholds. Image and Vision Computing 23 (2005) 69-85
3. Vincent, L., Soille, P.: Watersheds in digital space: an efficient algorithm based on immersion simulations. IEEE Trans. PAMI 13 (1991) 583-598
4. Kim, J.B., Kim, H.J.: Multiresolution-based watersheds for efficient image segmentation. Pattern Recognition Letters 24 (2003) 473-488
5. Zhu, H., Basir, O., Karray, F.: Fuzzy integral based region merging for watershed image segmentation. Proc. 10th IEEE Int. Conf. on Fuzzy Systems 1 (2001) 27-30
6. Chu, C., Aggarwal, J.K.: The integration of image segmentation maps using region and edge information. IEEE Trans. PAMI 15 (1993) 1241-1252
7. Canny, J.: A computational approach to edge detection. IEEE Trans. PAMI 8 (1986) 679-698
8. Ballard, D., Brown, C.: Computer Vision. Prentice-Hall, Englewood Cliffs, NJ (1982)
9. Chen, Y., Wang, J.Z.: A region-based fuzzy feature matching approach to content-based image retrieval. IEEE Trans. PAMI 24 (2002) 1252-1267
10. Van der Weken, D., Nachtegael, M., Kerre, E.E.: Using similarity and homogeneity for the comparison of images. Image and Vision Computing 22 (2004) 695-702
11. Pal, S.K., Ghosh, A.: Index of area coverage of fuzzy image subsets and object extraction. Pattern Recognition Letters 11 (1990) 831-841
12. Tizhoosh, H.R.: Fuzzy Image Processing: Introduction in Theory and Practice. Springer-Verlag, Germany (1997)
A New Interactive Segmentation Scheme Based on Fuzzy Affinity and Live-Wire

Huiguang He, Jie Tian¹, Yao Lin, and Ke Lu

Medical Image Processing Group, Key Laboratory of Complex Systems and Intelligence Science, Chinese Academy of Sciences, P.O. Box 2728, Beijing, 100080, China
[email protected] [email protected]

Abstract. In this paper we report a combination of the live-wire method with a region growing algorithm based on fuzzy affinity. First, we employ an anisotropic diffusion filter to process the images, which smooths them while keeping the edges; we then confine the possible boundaries found by the live-wire method to those of the over-segmentation produced by the region growing algorithm. This combination greatly improves both the speed and the reliability of live-wire segmentation. The method has been applied to CT and MR image segmentation, and the results confirm that it is practical and accurate for medical image segmentation.
1 Introduction

Image segmentation is one of the most challenging problems in medical image analysis and computer vision. Many techniques have been developed to produce satisfactory segmentation results, but there is as yet no unified approach suitable for all kinds of images [1]. Usually, segmentation methods are divided into two classes: automatic methods and interactive methods. Automatic methods [2][3] involve no user intervention, and complete success cannot always be guaranteed. Interactive methods [4][5] range from totally manual drawing of the object boundary to detection of object boundaries with minimal user assistance. Automatic methods are currently used in an application-specific, tailored fashion, and they fail on different image modalities; making them work effectively and repeatably on a large number of data sets often requires considerable research and development. Since interactive methods inject prior information into the segmentation process, which may improve the segmentation results, they are attracting more and more attention. A. X. Falcão et al. [7] proposed the live-wire segmentation method. In this method, the image is considered as a directed graph with the pixels as the nodes and the boundaries between nearby pixels as the edges connecting the nodes.¹ After assigning a cost value to every pixel edge, we can trace the boundary of a desired object by graph searching. This boundary is the shortest path between two boundary points appointed by the user, and can be found via dynamic programming (DP). There are two shortcomings in the traditional live-wire method. First, the DP module finds the shortest paths from the appointed node to all other nodes without any discrimination, which makes it very slow. Second, the segmentation result relies too much on the cost function and its parameters, and training is necessary before any segmentation, which makes the operation complex and hinders practical application. To overcome these shortcomings, we combine the live-wire method with region growing based on fuzzy affinity. Before the live-wire method is used, we first apply an anisotropic diffusion filter to smooth the images while keeping the edges; second, we employ the region growing method based on fuzzy affinity to obtain an over-segmentation of the image. During the live-wire process, when the DP module is performed, it is restricted to the boundaries of the regions determined by the over-segmentation. For this purpose, we need only remove the graph edges that lie inside a region before applying DP to find the shortest path. The merit of our method lies in three aspects. First, the searching scope is limited to the region boundaries, which is only about one quarter of the scope used without over-segmentation; thus, the speed is much faster. Second, the segmentation accuracy can be improved, since the shortest paths are constrained to be potential boundaries of desired objects. Finally, the reliance of the segmentation result on the cost function and parameters is greatly decreased, and training is no longer needed. Figure 1 shows the main steps of our method.

¹ Corresponding author: Jie Tian; Telephone: 8610-62532105; Fax: 8610-62527995. This paper is supported by the Project for National Science Fund for Distinguished Young Scholars of China under Grant No. 60225008, the 863 hi-tech research and development program of China No. 2004AA420060, the National Natural Science Foundation of China under Grant No. 60302016, and Beijing Natural Science Fund under Grants No. 4051002 and 4042024.
Fig. 1. The main steps of the method (flowchart boxes: Original Image, Fuzzy Region Growing, Fuzzy Object, Over-Segmentation, Fuzzy Live-wire, Merge, Segmentation Result)
This paper is organized as follows: Section 2 describes anisotropic diffusion. Section 3 introduces the region growing method with fuzzy affinity. Section 4 demonstrates how the improved live-wire method extracts the boundary from the over-segmentation. Finally, we present the implementation results and draw conclusions.
2 Anisotropic Diffusion

Due to the complex backgrounds and diversity of medical images, most of them have poorly defined object boundaries, and noise with strong features may exist near the desired boundary. Therefore, preprocessing is necessary.
Embedding the original image in a set of images derived from it, we can describe the series of images being processed as I(x,y,t). When t = 0, I(x,y,0) represents the original image. The image filtering process may then be achieved by convolving the original image with a function F(x,y,t):

I(x,y,t) = I(x,y,0) * F(x,y,t)   (1)

The Gaussian kernel G(x,y;t) is such a function, with a single parameter (the variance t). A Gaussian filter can smooth the images, but it also distorts the edges. We therefore need a special filter which can smooth the image while keeping the edges. Anisotropic diffusion, proposed by Perona and Malik [6], works well for this purpose. The diffusion equation is

I_t = \frac{1}{1+|\nabla I|^2}\, I_{\xi\xi} + \frac{1}{\left(1+|\nabla I|^2\right)^2}\, I_{\eta\eta}   (2)

where |\nabla I| is the gradient magnitude, \xi denotes the contour direction, and \eta denotes the gradient direction. From Eq. (2) we can see that when |\nabla I| is large, almost no smoothing occurs in the gradient direction, while maximal smoothing is always performed in the contour direction. In this manner, the image is smoothed without destroying the edges.
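As a rough illustration (not the authors' code), one explicit step of the closely related classic Perona-Malik scheme, with conductance 1/(1+|grad I|^2) on a 4-neighborhood, can be written as follows; the time step and iteration count are assumptions.

import numpy as np

def perona_malik_step(img, dt=0.15):
    # One explicit Perona-Malik diffusion step (4-neighbor scheme) with
    # edge-stopping conductance g = 1 / (1 + d^2), a simple relative of Eq. (2)
    p = np.pad(img, 1, mode="edge")     # replicate the border
    dn = p[:-2, 1:-1] - img             # difference toward the north neighbor
    ds = p[2:, 1:-1] - img              # south
    de = p[1:-1, 2:] - img              # east
    dw = p[1:-1, :-2] - img             # west
    g = lambda d: 1.0 / (1.0 + d * d)
    return img + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)

def smooth(img, iters=20):
    out = img.astype(float)
    for _ in range(iters):
        out = perona_malik_step(out)
    return out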
3 Region Growing with Fuzzy Affinity

The concepts of fuzzy affinity and fuzzy connectedness were first presented by Rosenfeld [8] in 1979. J. K. Udupa [9] extended the theory and presented a framework for fuzzy connected object definition. Fuzzy connectedness describes the spatial and topological relationship between every pair of image elements. A local fuzzy relation called affinity is assigned to every pair (c,d), reflecting the strength of local hanging-togetherness with a value in [0,1]. Fuzzy connectedness is a global fuzzy relation, which considers all paths from c to d. Unlike gradient- and threshold-based methods, which are not robust to noise within the object boundaries, our method combines both intensity and gradient information to better estimate those boundaries. Our fuzzy affinity is defined as follows:

\mu_k(c,d) = \omega_1 f_1 + \omega_2 f_2   (3)

where

f_1 = e^{-\frac{1}{2}\left(\frac{f(c)-f(d)-\mu_1}{\delta_1}\right)^2}, \qquad f_2 = e^{-\frac{1}{2}\left(\frac{(f(c)+f(d))/2-\mu_2}{\delta_2}\right)^2},

\omega_1 + \omega_2 = 1, and \mu_1, \delta_1 and \mu_2, \delta_2 are respectively the mean and standard deviation of the gradient feature and of the intensity feature f(c). To simplify the computation, we use 4-neighborhoods to define \mu_\alpha(c,d):

\mu_\alpha(c,d) = \begin{cases} 1, & \text{if } c = d \text{, or } c \text{ and } d \text{ are 4-neighbors} \\ 0, & \text{otherwise} \end{cases}   (4)
The fuzzy affinity describes the probability that two points belong to the same object. From this definition, it is clear that if the density difference between c and d is smaller
and the average density of c and d is closer to the mean intensity of the object of interest, then the affinity value is bigger. This definition is reasonable by intuitive understanding. To further reduce the computation time, we use a region growing method to get the result, and it is easy to prove that region growing based on fuzzy affinity gives the same result as DP. The following is the pseudocode of the algorithm Fuzzy-Region-Growing (FRG); a runnable rendering follows the pseudocode.

Input: Image C, fuzzy affinity threshold x
Output: Image after being segmented by FRG
Auxiliary data structures: a flag list L = {flag, p}, where flag denotes whether the point p has been visited (1 = yes, 0 = no); a queue Q storing the locations of the candidate points.

Begin
  0. Initialize L (set flag = 0 for all points) and Q (empty);
  Repeat steps 1-4 until the flag of every pixel in L is 1:
    1. Select a point p from L with L[p].flag = 0; set L[p].flag = 1;
    2. For each neighbor c of p, calculate µ(p, c):
         if µ(p, c) > x, then put c in Q and set L[c].flag = 1
       Endfor
    3. While Q is not empty do:
         remove a pixel c from Q;
    4.   For each neighbor d of c, calculate µ(c, d):
           if µ(c, d) > x, then put d in Q and set L[d].flag = 1
         Endfor
       End while
  5. Output region R(o); R(o) includes all the pixels that were put in Q in steps 1-4
End
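A Python sketch of the affinity of Eq. (3) and the FRG loop, under stated assumptions: the parameter bundling, the seed-based BFS formulation, and all names are ours, not the paper's.

import numpy as np
from collections import deque

def fuzzy_affinity(f, c, d, w1, w2, mu1, d1, mu2, d2):
    # Eq. (3): weighted combination of a gradient-like and an intensity-like term.
    # f is the (filtered) intensity image; c and d are (row, col) pixels.
    g1 = np.exp(-0.5 * ((f[c] - f[d] - mu1) / d1) ** 2)          # f1
    g2 = np.exp(-0.5 * (((f[c] + f[d]) / 2.0 - mu2) / d2) ** 2)  # f2
    return w1 * g1 + w2 * g2

def region_grow(f, seed, thresh, params):
    # FRG sketch: BFS over 4-neighbors, admitting d when mu(c, d) > thresh.
    # `params` bundles (w1, w2, mu1, d1, mu2, d2); the values are assumptions.
    h, w = f.shape
    visited = np.zeros((h, w), dtype=bool)
    region, q = [seed], deque([seed])
    visited[seed] = True
    while q:
        c = q.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-neighbors, Eq. (4)
            d = (c[0] + dr, c[1] + dc)
            if 0 <= d[0] < h and 0 <= d[1] < w and not visited[d]:
                if fuzzy_affinity(f, c, d, *params) > thresh:
                    visited[d] = True
                    q.append(d)
                    region.append(d)
    return region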
4 Improved Live-Wire Method

After region growing based on fuzzy affinity, we obtain an over-segmentation of the image. Since over-segmentation may split one object into two or more parts, we should perform a merging process on the over-segmentation. The popular Fisher's test is used to decide whether two adjacent regions can be merged. Suppose there are two adjacent regions R1 and R2, where n_1, n_2, \mu_1, \mu_2, \sigma_1, \sigma_2 are the sizes, sample means, and sample variances of R1 and R2, respectively; the squared Fisher distance is then defined as:

FD^2 = \frac{(n_1+n_2)(\mu_1-\mu_2)^2}{n_1\sigma_1^2 + n_2\sigma_2^2}   (5)

If this statistic is smaller than a certain threshold, the regions are merged.
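A minimal sketch of this merge test (the function and variable names are ours, and the threshold value is left to the user, as in the paper):

def fisher_merge(n1, mean1, var1, n2, mean2, var2, threshold):
    # Squared Fisher distance of Eq. (5); merge the regions if it is small
    fd2 = (n1 + n2) * (mean1 - mean2) ** 2 / (n1 * var1 + n2 * var2)
    return fd2 < threshold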
We then use the improved live-wire method to extract the boundary. Our improved live-wire method differs from the original [7] in that we use the fuzzy affinity as the cost function, and the searching scope is limited by the over-segmentation. First, the user selects an initial point on the boundary of the desired object; the other points are then specified interactively. When the user defines a point on the boundary, the computer calculates the shortest path from this node to all the other nodes in the graph, and a path is accepted only when it lies on the region boundaries computed by the FRG algorithm. As the user moves the mouse, the system displays, in real time, the shortest path from the previous user-defined node to the current mouse position by searching the graph. If this path adequately describes the boundary of the desired object, the user can confirm it as a valid boundary segment, and the current mouse position becomes the new starting point. A complete 2D boundary is specified via a set of live-wire segments in this fashion. Figure 2 shows an example.
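For concreteness, the restricted shortest-path search can be sketched with Dijkstra's algorithm, a standard stand-in for the DP module; all names below are illustrative, and cost(p, q) would be derived from the fuzzy affinity (e.g. 1 - mu(p, q)), which is an assumption on our part.

import heapq

def live_wire(start, neighbors, cost, on_boundary):
    # Shortest paths from `start` to all reachable nodes, restricted to
    # pixels on FRG region boundaries via the on_boundary(p) predicate.
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, p = heapq.heappop(heap)
        if d > dist.get(p, float("inf")):
            continue                      # stale heap entry
        for q in neighbors(p):
            if not on_boundary(q):        # skip graph edges inside a region
                continue
            nd = d + cost(p, q)
            if nd < dist.get(q, float("inf")):
                dist[q] = nd
                prev[q] = p
                heapq.heappush(heap, (nd, q))
    return dist, prev   # backtrack prev[] from any node to recover its path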
Fig. 2. The left is the CT image, and the right is the part of the boundary calculated by live-wire
5 Results and Evaluation

5.1 Experimental Results

The algorithm was implemented in C++, and all experiments were run within the 3DMED medical image processing and analyzing system developed by the Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences [10]. Figure 3 shows the segmentation of an MR image. Figure 4 shows the segmentation of CT images of a knee joint. Figure 5 shows a tumor segmentation. These experimental results demonstrate that our algorithm is efficient not only for CT images, but also for MR images.
Fig. 3. The left shows the original MR image of a patient's brain, and the middle is the satisfactory boundary of the white matter. The right is the 3D model of the white matter reconstructed by 3DMED.
Fig. 4. From left to right, the first and third images are CT images of the knee joint. The second and fourth images are the boundaries of the bone in the first and third images, respectively. The fifth image is the 3D bone model of the knee joint.
Fig. 5. From left to right, the first image is the original MR image of the head in the coronal direction, in which a tumor can be seen. The second image is the boundary of the tumor. The third image is the image in the axial direction reconstructed by the virtual cutting technology, and the 3D model of the tumor reconstructed by 3DMED is shown as the fourth image.
5.2 Quantitative Evaluation

To evaluate the segmentation results quantitatively and objectively, the Internet Brain Segmentation Repository (IBSR) database has been used to test our algorithm. Both the real brain MR images and the corresponding ground truth segmentations can be freely downloaded from the website http://neuro-www.mgh.harvard.edu/cma/ibsr. Since the output of the edge detection is a binary image, Baddeley's \Delta^p_w [11] has been selected as the distance measure for comparing such binary "boundary pixel images", following Thomas C. M. Lee's evaluation method [12]. Let X be a grid with N pixels, and let A \subset X and B \subset X be the sets of all "black pixels" of a true and a fitted binary image, respectively.

\Delta^p_w(A,B) = \left[\frac{1}{N}\sum_{x \in X} |w(d(x,A)) - w(d(x,B))|^p\right]^{1/p}   (6)

where d(x, A) is the smallest distance from x to A, w(t) = \min(t, c) is a threshold function, and p and c are parameters provided by the user. We follow Baddeley and set p = 2 and c = 5. We use the dataset 788_6_m from IBSR to demonstrate the segmentation results. It contains 60 contiguous 3.0 mm slices, scanned on a 1.5 Tesla General Electric Signa. We give the segmentation results of the white matter for some slices from this dataset (Figure 6); the left column is the original image, the middle one is the segmentation result of our method, and the right one is the ground truth segmentation.
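For reference, Eq. (6) can be sketched as follows (our own illustration; it assumes SciPy's Euclidean distance transform for d(x, .)):

import numpy as np
from scipy.ndimage import distance_transform_edt

def baddeley(a, b, p=2, c=5):
    # Baddeley distance of Eq. (6) between two binary boundary images;
    # a, b: boolean arrays where True marks boundary ("black") pixels
    da = distance_transform_edt(~a)   # d(x, A): distance to the nearest pixel of A
    db = distance_transform_edt(~b)   # d(x, B)
    wa, wb = np.minimum(da, c), np.minimum(db, c)   # w(t) = min(t, c)
    return (np.mean(np.abs(wa - wb) ** p)) ** (1.0 / p)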
Fig. 6. Segmentation of the brain white matter in MR images from IBSR: (a) segmentation of slice 44; (b) segmentation of slice 45

Table 1. The Baddeley distance measure

Slice               43      44      45      46      47      48      49
Distance measure    0.357   0.267   0.268   0.275   0.258   0.289   0.268
Baddeley's distance between the segmentation result of our method and the ground truth segmentation has been computed, and the evaluation is listed in Table 1.
6 Conclusion

Interactive segmentation is one of the most important classes of algorithms in medical image processing. The proposed method differs from other work on live-wire in three aspects, all of which focus on saving time while improving segmentation accuracy. (1) Our fuzzy affinity definition combines the intensity and the gradient of the image, which makes it robust to noise. (2) By confining the searching scope to the over-segmentation computed by FRG, we change the method that seeks the shortest
path from the point defined by the user to all the other points in the image, and do not allow the path to go through the inside of an object; thus, the scope of DP is only about one quarter of the original. (3) Our method guarantees that the shortest path found by DP is a potential boundary. There remain areas of potential improvement, such as changing the definition of the fuzzy affinity or the cost function so as to minimize user involvement and computing time while maintaining the segmentation quality.
References

1. Duncan, J.S., Ayache, N.: Medical image analysis: progress over two decades and the challenges ahead. IEEE Trans. on PAMI 22 (2000) 85-105
2. Kaus, M., Warfield, S.K., Nabavi, A., Black, P.M., Jolesz, F.A., Kikinis, R.: Automated segmentation of MRI of brain tumors. Radiology 218 (2001) 586-591
3. Yen, J.-C., Chang, F.-J., Chang, S.: A new criterion for automatic multilevel thresholding. IEEE Trans. on Image Processing 4 (1995) 370-377
4. Falcão, A.X., Bergo, F.P.G.: Interactive volume segmentation with differential image foresting transforms. IEEE Transactions on Medical Imaging 23 (2004) 1100-1108
5. Westin, C.-F., Lorigo, L.M., Faugeras, O., Grimson, W.E.L., Dawson, S., Norbash, A., Kikinis, R.: Segmentation by adaptive geodesic active contours. MICCAI (Medical Image Computing and Computer Assisted Intervention) (2000) 266-275
6. Perona, P., Malik, J.: Scale space and edge detection using anisotropic diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence 12 (1990) 629-639
7. Falcão, A.X., Udupa, J.K., Samarasekera, S., Sharma, S.: User-steered image segmentation paradigms: live wire and live lane. Graphical Models and Image Processing 60 (1998) 233-260
8. Rosenfeld, A.: Fuzzy digital topology. Information and Control 40 (1979) 76-87
9. Udupa, J.K., Samarasekera, S.: Fuzzy connectedness and object definition: theory, algorithms, and applications in image segmentation. Graphical Models and Image Processing 58 (1996) 246-261
10. Tian, J., He, H., Zhao, M.: Integrated 3D medical image processing and analysis system. Proc. SPIE 4958 (2003) 284-293
11. Baddeley, A.J.: Errors in binary images and an Lp version of the Hausdorff metric. Nieuw Archief voor Wiskunde (1992) 157-183
12. Lee, T.C.M.: A minimum description length based image segmentation procedure and its comparison with a cross-validation based segmentation procedure. Department of Statistics, University of Chicago (1997)
The Fuzzy Mega-cluster: Robustifying FCM by Scaling Down Memberships

Amit Banerjee and Rajesh N. Davé

Department of Mechanical Engineering, New Jersey Institute of Technology, Newark, NJ 07032, USA
{ab2, dave}@njit.edu
Abstract. A new robust clustering scheme based on fuzzy c-means is proposed, and the concept of a fuzzy mega-cluster is introduced in this paper. The fuzzy mega-cluster is conceptually similar to the noise cluster, designed to group outliers into a separate cluster. The proposed scheme, called the mega-clustering algorithm, is shown to be robust against outliers. Another interesting property is its ability to distinguish between true outliers and non-outliers (vectors that are neither part of any particular cluster nor true noise). Robustness is achieved by scaling down the fuzzy memberships generated by FCM, so that the infamous unity constraint of FCM is relaxed, with the intensity of scaling differing across data points. The mega-clustering algorithm is tested on noisy data sets from the literature and the results are presented.
1 Introduction

Cluster analysis is a technique for grouping and finding substructure in data. The most common application of clustering methods is to partition a data set into clusters, where similar data vectors are assigned to the same cluster and dissimilar data vectors to different clusters. The immensely popular k-Means is a partitioning procedure that partitions data by minimizing a least-squares-type fitting functional. The fuzzy derivative of k-Means, known as Fuzzy c-Means (FCM) [1], has an objective functional of the form

J(X; U, v) = \sum_{i=1}^{c}\sum_{j=1}^{n} u_{ij}^m \, d^2(v_i, x_j)   (1)

where n is the number of data vectors, c is the number of clusters to be found, u_{ij} \in [0,1] is the membership degree of the jth data vector x_j in the ith cluster, the ith cluster is represented by the cluster prototype v_i, m \in [1, \infty) is a weighting exponent called the fuzzifier, and d(v_i, x_j) is the distance of x_j from the cluster prototype v_i. The fixed-point iterative FCM algorithm (FCM-AO) guarantees a local minimum solution when J(X; U, v) is minimized. In order to avoid the trivial solution u_{ij} = 0, additional assumptions have to be made, leading to probabilistic, possibilistic and noise clustering.
However, since FCM is based on a least squares functional, it is susceptible to outliers in the data. The performance of FCM is known to degrade drastically when the data set is noisy. This is similar to LS regression, where the presence of a single outlier is enough to throw off the regression estimates. The need has therefore been to develop robust clustering algorithms within the framework of FCM (primarily because of FCM's simplistic iterative scheme and good convergence properties). The usual FCM minimization constraints are

\sum_{i=1}^{c} u_{ij} = 1, \qquad 0 \le \sum_{j=1}^{n} u_{ij} \le n, \qquad i = 1,\dots,c;\; j = 1,\dots,n   (2)
The relaxation of the equality constraint in (2) has been shown to robustify the resulting algorithm. This relaxation is usually accomplished by reformulating the objective functional in (1), in order to avoid the trivial solution u_{ij} = 0. The Possibilistic c-Means (PCM) algorithm [2] was developed to provide information on the relationships between vectors within a cluster. Instead of the usual probabilistic memberships calculated by FCM, PCM provides an index that quantifies the typicality of a data vector as belonging to a cluster. This is also shown to impart a robust property to the procedure, in the sense that noise points have low typicality in good clusters. Another effective clustering technique based on FCM is the Noise Clustering (NC) algorithm [3], which uses a conceptual class called the noise cluster to group together outliers in the data. All data vectors are assumed to be a constant distance, called the noise distance, away from the noise cluster. The presence of the noise cluster allows outliers to have arbitrarily small memberships in good clusters. Later modifications define a varying noise distance for every datum [4,5]. The Least Biased Fuzzy Clustering (LBFC) algorithm [6] partitions the data set by maximizing the total fuzzy entropy of each cluster, which in turn is a function of the clustering memberships. The scaled LBFC clustering memberships are shown to be related to PCM typicalities, and the resulting LBFC algorithm is robust against outliers. The Fuzzy Possibilistic c-Means (FPCM) algorithm [7] has an optimization functional which is a combination of probabilistic and possibilistic components. The algorithm uses two types of memberships: a probabilistic FCM-type membership that measures the degree of sharing of a datum among the different clusters, and a possibilistic component of membership that provides information on intra-cluster datum relationships. For an outlier, FPCM generates low-valued typicalities and, like PCM, is a noise-resistant procedure. The Credibilistic Fuzzy c-Means (CFCM) algorithm [8] uses datum credibility as the measure to delineate outliers from good data in the data set. As opposed to typicality in PCM, the credibility of a datum represents its typicality to the entire data set and not to any particular cluster. An outlier is shown to have a low credibility and hence is atypical of the data set. The Fuzzy Outlier Clustering (FC-O) algorithm [9] uses datum weights and a modified membership function which is inversely proportional to the datum weight; the outliers get assigned a large weight and hence have a low membership. There is another class of noise-resistant clustering methods based on robust statistics, prominent among which are the Fuzzy c-Medians [10], which uses the cluster median as the representative prototype, and the Fuzzy Trimmed c-Prototypes [11], based on the robust
least trimmed squares regression procedure. For a detailed review of robust fuzzy clustering methods and their relation to robust statistics, the reader is referred to [12]. The aforementioned techniques and algorithms have been shown to be effective in clustering noisy data, but they are plagued with problems of their own. Strictly speaking, PCM is not a clustering algorithm but rather a mode-seeking algorithm [13], which in disguise makes it tolerant of noise; one also needs a reasonably good initial estimate of the cluster variances, which might not be available in all cases. The noise distance in NC is also user-specified, and clustering results can be sensitive to variations in the noise distance. FC-O likewise depends on user-specified quantities such as total datum weights and a weighting exponent in addition to the fuzzifier. LBFC suffers from the same anomaly as PCM; it often generates coincident clusters, since the objective functional is linearly separable. The centroids generated by FPCM are often seriously affected by outliers, as they would be with FCM. The concept of data credibility, although appealing, is fundamentally limited by the logic of total credibility: under the current formulation, while outliers have zero credibility, no datum can have full credibility (unity). Moreover, all FCM-based methods suffer from dependency on proper initialization. In this paper we propose an FCM-based noise clustering procedure and introduce the concept of a mega-cluster. The mega-cluster is compared with Davé's noise cluster; they are similar in the sense that both are theoretical concepts designed to cluster noise points together, but they are shown to be fundamentally different. The concept of memberships of data points in the mega-cluster is compared to the theory of data credibility. The FCM memberships are modified by scaling them down; the farther a datum is from a prototype, the more intense the scaling. This scaling-down relaxes the unity constraint in (2) for good clusters. The motivation is derived from the Rival Checked FCM (RCFCM) [14] and Suppressed FCM (S-FCM) [15], where selected memberships are scaled up and the rest scaled down in order to increase the convergence speed of the FCM algorithm.
2 The Concept of a Fuzzy Mega-cluster

FCM partitions the data set into overlapping clusters, but in general works well only with compact, well-separated and spherical clusters. Outliers in the data influence the location of the prototypes; as a result, the centroids are pulled towards the outlier(s). At this point, we make a distinction between true outliers and non-conforming non-outliers (which in the course of this work will be referred to as non-outliers). While the former data vectors are noise and do not belong to any cluster in the data, non-outliers are data vectors that neither belong to any cluster in the data nor can be considered noise. In other words, non-outliers can be considered equally likely to belong to any cluster in the data set, and for this reason such data vectors cannot be assigned to any one particular cluster. The difference is clearly indicated in Fig. 1, and any clustering algorithm should have the power to treat such entities differently. Unfortunately, FCM and almost all robust clustering algorithms (except noise clustering, in our experience) fail to differentiate between true outliers and non-outliers; FCM would assign memberships of (0.5, 0.5) to both x' and x'' in Fig. 1.
Fig. 1. True outliers and non-conforming non-outliers: x' is a true outlier, equally unlikely to belong to either cluster, and x'' is a non-outlier, equally likely to lie in either cluster
We define a cluster called the mega-cluster, which views data vectors differently depending on how they belong to the good clusters in the data. Suppose that, in a two-cluster data set, the datum x is a good representative of cluster I. In such a case, the membership of x in cluster I would be the largest, followed by its membership in the mega-cluster, and it would have the smallest membership in cluster II. On the other hand, if x' is a true outlier, its membership in the mega-cluster would be the largest, followed by relatively small memberships in the two good clusters I and II. This treatment is fundamentally different from the concepts of the noise cluster and of the credibility of a data point vis-à-vis the entire data set. With the noise cluster, the membership of x would be the largest in cluster I, followed by its membership in cluster II, and it would have a comparatively small membership in the noise cluster. However, as with the mega-cluster, x' would have a high degree of membership in the noise cluster, followed by low memberships in the two clusters. Credibility, as opposed to membership, is defined as the degree of representativeness of a data point with respect to the entire data set; by definition, noise points have low credibility and good data points have high credibility. Now, if x'' is a non-outlier, its membership in the mega-cluster would be the highest, followed by almost equal memberships in the two clusters; moreover, if it is a symmetrically located non-outlier (as x'' is in Fig. 1), the sum of its memberships in the two clusters would equal its membership in the mega-cluster. This treatment accommodates the subjective fact that such a non-outlier is equally likely to be considered part of either cluster, but is most likely noise. The mega-cluster can be thought of as a super-group encompassing the entire data set, which views the data points differently depending on their belongingness to the true clusters of the data. A further proposition is that the mega-cluster membership is representative of both credibility and noise membership (as well as of the true FCM memberships): a high mega-cluster membership corresponds to a high noise membership and a low credibility, and likewise a low mega-cluster membership corresponds to a low noise membership, a high credibility, and thus a high true membership of the data point in one of the clusters. This cluster would not be detected by the standard FCM formulation. It is further assumed that while all data points have varying degrees of membership in the mega-cluster, they are all equally representative of it, in the sense that the distance of the data points from the mega-cluster is zero. Conceptually, for the purposes of prototype calculations, the mega-cluster can be thought of as composed of n point centers, located exactly at the n data points. Furthermore, the memberships of a datum summed over the true clusters and the
mega-cluster is unity, hence the FCM update equations can be used without any change of form.
3 The Proposed Algorithm

We seek to reduce the sensitivity of the FCM formulation to noise by scaling down the memberships produced by FCM, in inverse proportion to the cluster-datum distance. To speed up the convergence of FCM, two membership scaling procedures were previously proposed, viz. the Rival Checked FCM and the Suppressed FCM. In every iteration of the FCM-AO scheme and for each datum, the two algorithms reward the largest membership by scaling it up by a constant factor; RCFCM then suppresses the second highest membership, while S-FCM suppresses all other memberships by a corresponding factor. Because of the scaling up, the two algorithms are found to be highly sensitive to noise. In fact, in our experiments, RCFCM in most cases does not converge to a stable solution, because it disturbs the ordering of the memberships (as a result, there is much oscillation between successive iterations). On the other hand, at low values of α (0.1-0.3), where S-FCM behaves more like hard c-Means (HCM), we found that it generates singleton noise clusters most of the time and hence, with appropriate modifications, can be used as an outlier diagnostic tool. The proposed algorithm is based on the following logic: what essentially distinguishes a good data point from an outlier is its distance (dissimilarity) from a representative prototype. This difference becomes muddled in the presence of noise in the data because of the centroid-pulling effect of the outliers. Hence, for noisy data, if one could provide a mechanism that accentuates this difference, one could conceptually reproduce results similar to those of FCM on noise-free data. The proposed algorithm tries to underline this difference between good points and outliers by scaling down the membership values of a data point across all clusters, in inverse proportion to their distance from the cluster prototypes. Hence, the effective membership of all points in the true classes is less than one. This scaling down is more prominent for outliers, which successively undergo a drastic reduction in memberships, corresponding to an increase of their membership in the proposed conceptual mega-cluster. The FCM-AO algorithm is presented below.

Initialize: Randomly initialize the centroid locations, V^0 = [v_i]. Let k = 0; fix the fuzzifier m and the termination condition \varepsilon > 0.

Iterate:

1. Compute the prototype-data point distances D^k as

d_{ij} = \|v_i - x_j\|_A   (3)

2. Calculate the memberships U^k using

u_{ij} = \frac{1}{\sum_{k'=1}^{c}\left[ d_{ij}^2 / d_{k'j}^2 \right]^{\frac{1}{m-1}}}   (4)

If there exists (r, j) such that d_{rj} = 0, then let u_{rj} = 1 and u_{ij} = 0 for all i \ne r.
3. Calculate the cluster centers V^{k+1} using

v_i = \frac{\sum_{j=1}^{n} u_{ij}^m x_j}{\sum_{j=1}^{n} u_{ij}^m}   (5)

4. If \|V^{k+1} - V^k\| < \varepsilon, then terminate. Else increment k = k + 1 and return to step 1.
In the proposed algorithm (henceforth referred to as MC), the memberships calculated by FCM are then modified depending on the datum-cluster center distance. For unusually large distances, the scaling is more intense and is carried out with respect to the maximum distance in the data set, as shown in (6a). This scaling, repeatedly applied to the outliers, reduces their memberships rapidly compared with the scaling applied to good data. For reasonable datum-cluster center distances, the scaling is moderate and is done with respect to the sum of the distances of the datum from all c cluster centers, as in (6b):

intense scaling: u_{ij} = \left[1 - \frac{\beta d_{ij}}{\max_{i=1,\dots,c;\, j=1,\dots,n}(d_{ij})}\right] u_{ij}, \quad \beta \in [0,1]   (6a)

moderate scaling: u_{ij} = \left[1 - \frac{\beta d_{ij}}{\sum_{k=1}^{c} d_{kj}}\right] u_{ij}, \quad \beta \in [0,1]   (6b)
This modification is introduced into FCM-AO as step 2-1, after the completion of the FCM membership update in step 2. An if-else condition is used to decide whether to use moderate or intense scaling; the condition checks how unusually large a particular datum-cluster center distance d_{ij} is. The scaling is comparable in content to the credibility of a datum x_j as proposed in [8],

\Psi_j = 1 - \frac{(1-\theta)\,\alpha_j}{\max_{k=1,\dots,n}(\alpha_k)}, \quad \text{where } \alpha_j = \min_{i=1,\dots,c}(d_{ij})   (7)
As with credibility, β = 0 reduces the formulation to FCM, and at β = 1 the formulation serves as a complete noise reduction algorithm. At levels between β = 1 and β = 0, the algorithm tries to balance assigning a low membership to true outliers against assigning comparatively higher memberships to non-outliers, such as x'' in Fig. 1. If it is known that the data is noise-free, the choice β = 0 reproduces FCM results, known to be fairly accurate in the absence of noise. In the presence of noise and general outliers, a judicious choice of β needs to be made; as inferred from the experiments presented in the next section, any value of β in the range 0.7-1.0 generates good partitions in noisy data sets. This scaling down of memberships relaxes
the unity constraint in (2); the resulting constraint is an inequality, and the membership of a datum x_j in the mega-cluster is hence given by

u_{MCj} = 1 - \sum_{i=1}^{c} u_{ij}   (8)
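The following sketch (our own illustration, not the authors' code) applies the scaling step 2-1 and Eq. (8) to an FCM membership matrix; the switch between intense and moderate scaling is an assumption, here keyed to half the maximum distance, as in the experiments reported below.

import numpy as np

def scale_memberships(u, d, beta=1.0, intense_frac=0.5):
    # Step 2-1 of the MC algorithm: scale down FCM memberships u (c x n)
    # using the distances d (c x n), per Eqs. (6a)/(6b); returns the scaled
    # memberships and the mega-cluster memberships of Eq. (8)
    d_max = d.max()
    col_sum = d.sum(axis=0, keepdims=True)    # sum over clusters, per datum
    intense = 1.0 - beta * d / d_max          # Eq. (6a)
    moderate = 1.0 - beta * d / col_sum       # Eq. (6b)
    # intense scaling for unusually large distances (assumed cutoff)
    u_new = u * np.where(d > intense_frac * d_max, intense, moderate)
    u_mega = 1.0 - u_new.sum(axis=0)          # Eq. (8)
    return u_new, u_mega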
4 Experiments and Results

We use the three data sets presented in [7], called X11, XA12 and XB14. X11 is a noise-free data set consisting of 11 two-dimensional vectors, while XA12 and XB14 are noisy versions of X11. A comparison of FCM, FPCM and CFCM on X11, XA12 and XB14 is presented in [8]. Here we compare the performance of FCM against the proposed algorithm on the same three data sets. The vector x6 is a non-outlier with equal probability of belonging to the two underlying clusters in X11; the two well-defined clusters lie on either side of x6. The vectors xA12 (in XA12) and xB12, xB13 and xB14 (in XB14) are true outliers. In our implementations, we have used c = 2, m = 2 and ε = 0.001 for both FCM and the proposed algorithm (with β = 1). For a prototype initialization of v_1 = x_2 - 10^{-3} and v_2 = x_{11} - 10^{-3} in the case of XA12, we find that the proposed algorithm performs better than FCM; the cluster centers generated are shown in Fig. 2a. The data vectors are shown as solid squares, the FCM centroids are depicted by crosses, and the MC centroids by small triangles. In fact, the results are comparable to the ones generated by CFCM and certainly better than FPCM (see Fig. 2 of [8]). FCM also fails to distinguish between x6 and xA12, whereas the proposed algorithm provides a higher membership for x6 than for xA12 in the two clusters. For the data set XB14, shown in Fig. 2b, the results provide a striking contrast: while FCM groups the three outliers into one cluster and the rest of the data set into another, the proposed algorithm finds the two real clusters. This is comparable to what FCM would generate on the noise-free data set X11. The proposed algorithm produced the same result over a wide range of distance factors for intense scaling (> 0.8 dmax, 0.5 dmax and 0.3 dmax) in the case of XA12, while there was a small difference in memberships for XB14 when intense scaling was done for dij > 0.8 dmax as compared with the memberships obtained for dij > 0.5 dmax. The results presented in Table 1 pertain to intense
Fig. 2. Centroids generated by the proposed algorithm (v-MC) and FCM (v-FCM) on the (a) XA12 and (b) XB14 data sets
Fig. 3. The two-cluster normal data set with uniformly distributed noise
scaling done for dij > 0.5 dmax (the difference was, however, insignificant, affecting only the third decimal place of the memberships). In both cases, the symmetrically located x6 has a membership of about 0.5 in the mega-cluster, and the true outliers have relatively large memberships in the mega-cluster compared with their memberships in the good clusters.

Table 1. Memberships for XB14 as generated by the proposed mega-clustering algorithm (compare with Table I of [8], p. 1463)
Vector   Feature 1 (X)   Feature 2 (Y)   u1j        u2j        uMCj
x1       -5.00            0.00           0.930136   0.001265   0.068599
x2       -3.34            1.67           0.916823   0.001806   0.081371
x3       -3.34            0.00           0.997538   0.000002   0.002460
x4       -3.34           -1.67           0.865677   0.004842   0.129481
x5       -1.67            0.00           0.794419   0.011815   0.193766
x6        0.00            0.00           0.241098   0.259063   0.499839
x7        1.67            0.00           0.009883   0.811054   0.179063
x8        3.34            1.67           0.002283   0.906717   0.091000
x9        3.34            0.00           0.000000   0.999322   0.000678
x10       3.34           -1.67           0.003771   0.880852   0.115277
x11       5.00            0.00           0.001369   0.927366   0.071265
xB12      0.00           27.00           0.041163   0.037726   0.921111
xB13     -7.00           23.00           0.179158   0.094016   0.726826
xB14     10.00           25.00           0.000000   0.089727   0.910273
The MC algorithm was also tested on a large synthetic data set with 25% noise. The data set, shown in Fig. 3, was first published in [16]. It consists of two normally distributed clusters of 150 2-D patterns each and 100 uniformly distributed noise points. For c = 2, β = 1 and intense scaling, the MC algorithm correctly identified 96 of the 100 noise points as true outliers, while FCM clustered the outliers together with one of the good clusters (the true cluster on the right). Identification of 96 out of 100
outliers is, however, the best obtained result; the clustering results vary with the distance factor used for intense scaling. The MC algorithm converged in less time than the FLTS algorithm reported in [16].
5 Conclusions

An intuitive and easily realizable robust clustering scheme based on FCM has been presented in this paper. The concept of a fuzzy mega-cluster, which is central to the proposed scheme, is also introduced, discussed in detail, and shown to be conceptually similar to Davé's noise cluster. While the robust properties of the proposed mega-clustering algorithm are investigated using test cases from the literature, we also demonstrate another interesting property of the algorithm: the power to distinguish true outliers from non-conformers. The sensitivity of FCM to noise is reduced by scaling down the memberships, and the excess membership is attributed to the mega-cluster. The scaling down of memberships in the good clusters is more intense for vectors perceived to have an unnaturally large distance from the prototypes, and such a definition makes intuitive sense when the data set is noisy. Although the proposed algorithm is robust, it still suffers from typical FCM drawbacks, such as dependence on fairly good initialization and the tendency to get trapped in local minima. The proposed scheme, like most robust clustering procedures, expects the approximate amount of contamination in the data set to be known beforehand. For XA12, where the contamination was less severe (~15%), intense scaling could be done for almost any distance factor and the results produced were identical; but in the case of XB14 and the two-normal-cluster set, where the contamination is close to 30%, the results differed when intense scaling was employed under different distance factors. This dependency needs to be investigated further with larger and more natural data sets.
References

1. Bezdek, J.C.: Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum Press, New York (1981)
2. Krishnapuram, R., Keller, J.M.: A Possibilistic Approach to Clustering. IEEE Trans. Fuzzy Syst. 1 (1993) 98-110
3. Davé, R.N.: Characterization and Detection of Noise in Clustering. Pattern Recog. Letters 12 (1991) 657-664
4. Davé, R.N., Sen, S.: On Generalizing the Noise Clustering Algorithms. Invited paper, 7th IFSA World Congress, Prague (1997) 205-210
5. Davé, R.N., Sen, S.: Noise Clustering Algorithm Revisited. In Proc. Biennial Workshop NAFIPS, Syracuse (1997) 199-204
6. Beni, G., Liu, X.: A Least Biased Fuzzy Clustering Method. IEEE Trans. Pattern Anal. Mach. Intell. 16 (1994) 954-960
7. Pal, N.R., Pal, K., Bezdek, J.C.: A Mixed c-Means Clustering Model. In Proc. 6th IEEE Conf. Fuzzy Syst. (1997) 11-21
8. Chintalapudi, K.K., Kam, M.: A Noise-Resistant Fuzzy c-Means Algorithm for Clustering. In Proc. 7th IEEE Conf. Fuzzy Syst. (1998) 1458-1463
9. Keller, A.: Fuzzy Clustering with Outliers. In Proc. 19th International Conference of NAFIPS, the North American Fuzzy Information Processing Society (2000) 143-147
10. Kersten, P.R.: Fuzzy Order Statistics and their Application to Fuzzy Clustering. IEEE Trans. Fuzzy Syst. 7 (1999) 708-712
11. Kim, J., Krishnapuram, R., Davé, R.N.: Application of the Least Trimmed Squares Technique to Prototype-based Clustering. Pattern Recog. Letters 17 (1996) 633-641
12. Davé, R.N., Krishnapuram, R.: Robust Clustering Methods: A Unified View. IEEE Trans. Fuzzy Syst. 5 (1997) 270-293
13. Krishnapuram, R., Keller, J.M.: The Possibilistic c-Means Algorithm: Insights and Recommendations. IEEE Trans. Fuzzy Syst. 4 (1996) 385-393
14. Wei, L.M., Xie, W.X.: Rival Checked Fuzzy c-Means Algorithm. Acta Electronica Sinica 28 (2000) 63-66
15. Fan, J.L., Zhen, W.Z., Xie, W.X.: Suppressed Fuzzy c-Means Clustering Algorithms. Pattern Recog. Letters 24 (2003) 1607-1612
16. Banerjee, A., Davé, R.N.: The Feasible Solution Algorithm for Fuzzy Least Trimmed Squares Clustering. In Proc. 23rd International Conference of NAFIPS, the North American Fuzzy Information Processing Society (2004) 222-227
Robust Kernel Fuzzy Clustering

Weiwei Du, Kohei Inoue, and Kiichi Urahama

Kyushu University, Fukuoka-shi, 815-8540 Japan
Abstract. We present a method for extracting arbitrarily shaped clusters buried in uniform noise data. The popular k-means algorithm is first fuzzified with the addition of entropic terms to the objective function of the data partitioning problem. This fuzzy clustering is then kernelized to adapt to arbitrary cluster shapes. Finally, the Euclidean distance in this kernelized fuzzy clustering is modified to a robust one, to avoid the influence of noisy background data. This robust kernel fuzzy clustering method is shown to outperform each of its predecessors: the fuzzified k-means, robust fuzzified k-means and kernel fuzzified k-means algorithms.
1 Introduction

Practical issues in clustering include arbitrary cluster shapes, robustness to noise or outlier data, estimation of the number of clusters, and so on. We address in this paper the former two issues, assuming the number of clusters to be given. The k-means algorithm is popularly used for clustering data. It is an iterative method whose solution depends on the initial values. Its fuzzification, such as the fuzzy c-means, is known to improve the stability of the algorithm, raising the chance of convergence to the globally optimal solution. Hence, fuzzy clustering methods have been used for complex data such as image segmentation. However, the shapes of the clusters extracted with k-means or its fuzzified versions are restricted to simple forms such as spheres or shells. Therefore, kernel techniques have been incorporated into them to enable adaptation to arbitrary cluster shapes [1,2]. These kernelized methods can deal with arbitrarily shaped clusters; however, the data used in the experiments there include no noise, hence their robustness is unknown. In fact, the kernel fuzzy clustering method is not robust to noise data, as shown by the experiments in this paper. Another branch of extensions of clustering methods is their robustification. Extending the distance measure from the popularly used Euclidean norm to a nonlinear one used in robust statistics makes the algorithm insensitive to noise data [3,4]. However, these robust clustering methods assume prescribed cluster shapes, as do the basic k-means and fuzzy c-means algorithms; hence they cannot extract arbitrarily shaped clusters. In summary, kernelized clustering algorithms can extract arbitrarily shaped clusters but are not robust to noise data, while nonlinear distance methods are robust to noise data but cannot extract arbitrarily shaped clusters.
We present, in this paper, a robust kernel fuzzy clustering algorithm, which is a kernelized robust fuzzy clustering that can extract arbitrarily shaped clusters without being influenced by noise data. The performance of the method is examined on 2-dimensional toy data and high-dimensional face images.
2 Algorithms

We consider extracting n clusters from m data d_i (i = 1, ..., m). We use a kernel technique. Let the mapping of a datum d_i into a high-dimensional space be \phi_i = \phi(d_i). We adopt the Gaussian kernel and analyze the data in the high-dimensional space without explicit computation of \phi_i, by exploiting the kernel trick: \phi_i^T\phi_{i'} = e^{-\alpha\|d_i - d_{i'}\|^2}, which can be computed using only the feature vectors in the original low-dimensional space. In the subsequent subsections, we start from the basic fuzzy clustering algorithm, extend it to a robust one, and then kernelize it successively.

2.1 Fuzzification of k-Means with Entropic Regularization
If we add an entropic term to the objective function of the spherical clustering problem, we get the regularized objective function:

$$\sum_{i=1}^{m}\sum_{j=1}^{n} x_{ij}\|d_i - r_j\|^2 + \beta^{-1}\sum_{i=1}^{m}\sum_{j=1}^{n} x_{ij}\ln x_{ij} \qquad (1)$$

The fuzzy memberships $x_{ij}$ are the solution of the optimization problem which minimizes this objective function together with the centroid vectors $r_j$ under the constraint $\sum_{j=1}^{n} x_{ij} = 1\ (i = 1, ..., m)$. Here $x_{ij}$ is the membership of the datum $i$ in the cluster $j$, and $r_j$ is the centroid of the cluster $j$. With the Lagrange multiplier method, we get from (1)

$$x_{ij} = y_{ij}\Big(\sum_{j'=1}^{n} y_{ij'}\Big)^{-1} \qquad (2)$$

$$r_j = \sum_{i=1}^{m} x_{ij} d_i \Big(\sum_{i=1}^{m} x_{ij}\Big)^{-1} \qquad (3)$$

where $y_{ij} = e^{-\beta\|d_i - r_j\|^2}$. We set initially $x = [x_{ij}]\ (i = 1, ..., m;\ j = 1, ..., n)$ randomly and compute $r = [r_1, ..., r_n]$ with eq. (3), and then we update $x$ with eq. (2). We repeat this update of $x$ until its convergence. The objective function (1) decreases monotonically along this iteration and stops at a local minimum. We call this the KMFE (k-means fuzzified with entropy). The entropic regularization is an alternative scheme of fuzzification to the popular fuzzy c-means, where the Euclidean norm is powered instead of adding regularization terms.
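To make the iteration concrete, here is a minimal NumPy sketch of the KMFE loop under eqs. (1)-(3); the function and variable names are ours, not the authors'.

```python
import numpy as np

def kmfe(d, n_clusters, beta, n_iter=100, seed=0):
    """Sketch of the KMFE iteration: alternate eq. (3) and eq. (2)."""
    rng = np.random.default_rng(seed)
    m = d.shape[0]
    x = rng.random((m, n_clusters))
    x /= x.sum(axis=1, keepdims=True)            # enforce sum_j x_ij = 1
    for _ in range(n_iter):
        # eq. (3): centroids as membership-weighted means
        r = (x.T @ d) / x.sum(axis=0)[:, None]
        # eq. (2): y_ij = exp(-beta ||d_i - r_j||^2), normalized over clusters
        sq = ((d[:, None, :] - r[None, :, :]) ** 2).sum(axis=2)
        y = np.exp(-beta * sq)
        x = y / y.sum(axis=1, keepdims=True)
    return x, r
```

Since each step does not increase the objective (1), the memberships stabilize at a local minimum; a tolerance check on x can replace the fixed iteration count.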
2.2 Robust KMFE
If we modify the squared Euclidean norm $\|d_i - r_j\|^2$ into $1 - e^{-\gamma\|d_i - r_j\|^2}$, then eq. (1) becomes

$$\sum_{i=1}^{m}\sum_{j=1}^{n} x_{ij}\big(1 - e^{-\gamma\|d_i - r_j\|^2}\big) + \beta^{-1}\sum_{i=1}^{m}\sum_{j=1}^{n} x_{ij}\ln x_{ij} \qquad (4)$$

This modification of the norm is popularly used in robust statistics and has also been incorporated into the fuzzy c-means to make it robust [3]. The equations for $x$ and $r$ become in this case

$$x_{ij} = e^{\beta z_{ij}}\Big(\sum_{j'=1}^{n} e^{\beta z_{ij'}}\Big)^{-1} \qquad (5)$$

$$r_j = \sum_{i=1}^{m} x_{ij} z_{ij} d_i \Big(\sum_{i=1}^{m} x_{ij} z_{ij}\Big)^{-1} \qquad (6)$$

where $z_{ij} = e^{-\gamma\|d_i - r_j\|^2}$. We set initially $x$ randomly, solve eq. (6) with an iterative method, and update $x$ by substituting the obtained $r$ into eq. (5). This update is repeated until convergence. We can prove the global convergence of this algorithm by using the Legendre transform in a way similar to [5], whose details are omitted here. Now, before proceeding to the next subsection, we derive eq. (4) from a kernel in order to clarify the difference between this robust algorithm and the kernel algorithm of the next subsection. If we denote $\varphi_j = \phi(r_j)$, then there holds the equation $\|\phi_i - \varphi_j\|^2 = 2\big(1 - e^{-\alpha\|d_i - r_j\|^2}\big)$, hence eq. (1) is transformed into eq. (4) through this kernelization. Zhang et al. called this technique the kernel method [6]. However, the centroids remain points in the original space in their method, therefore it cannot treat arbitrarily shaped clusters. In fact, Zhang et al. [6] dealt with only ellipsoidal clusters without entangled ones.
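For the robust variant, a sketch of one update is given below, under the same assumptions as above. Since eq. (6) is implicit in r (the weights z depend on r), the sketch iterates it a few times before refreshing the memberships; the iteration counts are our illustrative choices.

```python
import numpy as np

def robust_kmfe_step(d, x, r, beta, gamma, inner_iter=10):
    """One sketch update of robust KMFE, eqs. (5)-(6)."""
    for _ in range(inner_iter):
        # z_ij = exp(-gamma ||d_i - r_j||^2) down-weights far (noisy) points
        sq = ((d[:, None, :] - r[None, :, :]) ** 2).sum(axis=2)
        z = np.exp(-gamma * sq)
        w = x * z                                 # robust weights x_ij z_ij
        r = (w.T @ d) / (w.sum(axis=0)[:, None] + 1e-12)   # eq. (6)
    sq = ((d[:, None, :] - r[None, :, :]) ** 2).sum(axis=2)
    z = np.exp(-gamma * sq)
    e = np.exp(beta * z)                          # eq. (5): softmax over clusters
    return e / e.sum(axis=1, keepdims=True), r
```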
2.3 Kernel KMFE
The KMFE is written in the high dimensional space as

$$\sum_{i=1}^{m}\sum_{j=1}^{n} x_{ij}\|\phi_i - r_j^{\phi}\|^2 + \beta^{-1}\sum_{i=1}^{m}\sum_{j=1}^{n} x_{ij}\ln x_{ij} \qquad (7)$$

where $r_j^{\phi}$ is the prototype of the cluster $j$ in the high dimensional space, which is different from $\varphi_j$ in the above subsection. From eq. (7), the equation for $x$ becomes eq. (2) with $y_{ij} = e^{-\beta\|\phi_i - r_j^{\phi}\|^2}$ in this case, and the equation for $r^{\phi}$ becomes

$$r_j^{\phi} = \sum_{i=1}^{m} x_{ij}\phi_i\Big(\sum_{i=1}^{m} x_{ij}\Big)^{-1} \qquad (8)$$
Substitution of this $r_j^{\phi}$ into $\|\phi_i - r_j^{\phi}\|$ leads to

$$\|\phi_i - r_j^{\phi}\|^2 = 1 - 2\sum_{i'=1}^{m} s_{ii'}x_{i'j}\Big(\sum_{i'=1}^{m} x_{i'j}\Big)^{-1} + \sum_{i'=1}^{m}\sum_{i''=1}^{m} x_{i'j}s_{i'i''}x_{i''j}\Big(\sum_{i'=1}^{m} x_{i'j}\Big)^{-2} \qquad (9)$$

where $s_{ii'} = \phi_i^T\phi_{i'} = e^{-\alpha\|d_i - d_{i'}\|^2}$. Substituting eq. (9) into $y_{ij} = e^{-\beta\|\phi_i - r_j^{\phi}\|^2}$, we get the expression of $y_{ij}$ as a function of $x$ only. Thus, in this case, eq. (2) becomes an equation for $x$ alone. We solve it by iteration starting from a random $x$. Contrary to the scheme in the previous subsection, where the prototypes lie in the original space, here they lie in the high dimensional space. Therefore this scheme is capable of extracting arbitrarily shaped clusters, owing to the improved separability of clusters in the high dimensional space.
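Because the prototypes need never be computed explicitly, the whole iteration can run on the kernel matrix alone. The following is a hedged NumPy sketch of this scheme, assuming the Gaussian kernel and eqs. (2) and (9); all names are ours.

```python
import numpy as np

def kernel_kmfe(d, n_clusters, alpha, beta, n_iter=100, seed=0):
    """Sketch of kernel KMFE: iterate eq. (2) with distances from eq. (9)."""
    rng = np.random.default_rng(seed)
    m = d.shape[0]
    nrm = (d ** 2).sum(axis=1)
    sq = nrm[:, None] + nrm[None, :] - 2.0 * d @ d.T
    s = np.exp(-alpha * sq)                      # kernel matrix s_ii'
    x = rng.random((m, n_clusters))
    x /= x.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        col = x.sum(axis=0) + 1e-12              # per-cluster membership mass
        # eq. (9): squared distances to the implicit prototypes
        dist = 1.0 - 2.0 * (s @ x) / col \
               + np.einsum('aj,ab,bj->j', x, s, x) / col ** 2
        y = np.exp(-beta * dist)                 # y_ij of eq. (2)
        x = y / y.sum(axis=1, keepdims=True)
    return x
```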
2.4 Robust Kernel KMFE
Finally, the same modification of the norm as that from eq. (1) into eq. (4) is applied to eq. (7), which leads to

$$\sum_{i=1}^{m}\sum_{j=1}^{n} x_{ij}\big(1 - e^{-\gamma\|\phi_i - r_j^{\phi}\|^2}\big) + \beta^{-1}\sum_{i=1}^{m}\sum_{j=1}^{n} x_{ij}\ln x_{ij} \qquad (10)$$

In this case, if we denote $z_{ij} = e^{-\gamma\|\phi_i - r_j^{\phi}\|^2}$, then the equation for $x$ becomes eq. (5) and the equation for $r^{\phi}$ becomes

$$r_j^{\phi} = \sum_{i=1}^{m} x_{ij}z_{ij}\phi_i\Big(\sum_{i=1}^{m} x_{ij}z_{ij}\Big)^{-1} \qquad (11)$$

of which substitution into $\|\phi_i - r_j^{\phi}\|$ leads to

$$\|\phi_i - r_j^{\phi}\|^2 = 1 - 2\sum_{i'=1}^{m} s_{ii'}x_{i'j}z_{i'j}\Big(\sum_{i'=1}^{m} x_{i'j}z_{i'j}\Big)^{-1} + \sum_{i'=1}^{m}\sum_{i''=1}^{m} x_{i'j}z_{i'j}s_{i'i''}x_{i''j}z_{i''j}\Big(\sum_{i'=1}^{m} x_{i'j}z_{i'j}\Big)^{-2} \qquad (12)$$

Substituting eq. (12) into $z_{ij} = e^{-\gamma\|\phi_i - r_j^{\phi}\|^2}$, we get an equation for $z$ including $x$. Hence we set initially $x$ randomly, substitute it into this equation for $z$ and solve it by iteration, and then update $x$ by substituting the obtained $z$ into eq. (5). We repeat this update of $x$ until its convergence. We can prove its convergence in a way similar to [5].
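Putting the pieces together, the sketch below combines the kernelized distances of eq. (12), solved by an inner fixed-point loop for z, with the robust memberships of eq. (5). It is a minimal illustration under our own naming and loop-count choices, not the authors' reference implementation.

```python
import numpy as np

def robust_kernel_kmfe(d, n_clusters, alpha, beta, gamma,
                       n_iter=100, inner_iter=10, seed=0):
    """Sketch of the proposed robust kernel KMFE, eqs. (5) and (10)-(12)."""
    rng = np.random.default_rng(seed)
    m = d.shape[0]
    nrm = (d ** 2).sum(axis=1)
    s = np.exp(-alpha * (nrm[:, None] + nrm[None, :] - 2.0 * d @ d.T))
    x = rng.random((m, n_clusters))
    x /= x.sum(axis=1, keepdims=True)
    z = np.ones_like(x)
    for _ in range(n_iter):
        # inner loop: eq. (12) defines z only implicitly, so iterate it
        for _ in range(inner_iter):
            w = x * z                            # weights x_ij z_ij
            col = w.sum(axis=0) + 1e-12
            dist = 1.0 - 2.0 * (s @ w) / col \
                   + np.einsum('aj,ab,bj->j', w, s, w) / col ** 2
            z = np.exp(-gamma * dist)
        e = np.exp(beta * z)                     # eq. (5) from the converged z
        x = e / e.sum(axis=1, keepdims=True)
    return x
```

Thresholding the converged memberships (0.6 in the experiments below) turns the fuzzy result into crisp labels, with points below the threshold in every cluster treated as noise.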
3 Experiments
We have tested the above four schemes, first on 2-dimensional toy data and then on high-dimensional face image data.
Fig. 1. Example of 2-dimensional data
Fig. 2. Result of KMFE
Fig. 3. Result of robust KMFE
3.1 Toy Data
The first dataset, shown in Fig. 1, includes 312 inlier points composing four clusters and 50 uniformly distributed noise points. The parameters were set as α = 10, β = 30, γ = 1. A common initial value of x was used in the iterations of every scheme. A common threshold value of 0.6 was also used for defuzzifying x into the crisp values 0 or 1.
Fig. 4. Result of kernel KMFE
Fig. 5. Result of robust kernel KMFE
Fig. 6. Variation in classification rates with threshold value
The result of the KMFE of section 2.1 is illustrated in Fig. 2. The spherical cluster at the bottom is lost, and the two entangled belt clusters at the center are divided into three clusters. These errors are attributed to the disturbance from noise data. Fig. 3 next illustrates the result of the robust KMFE of section 2.2. Because the influence of noise data is weakened, no cluster is left out, but the two central belt clusters are still divided erroneously.
The result of the kernel KMFE of section 2.3 is shown in Fig. 4, where two spherical clusters and one belt cluster are correctly extracted, while the remaining belt cluster is divided in half. The final result, shown in Fig. 5, is that of the robust kernel KMFE of section 2.4. All four clusters are extracted correctly and almost all noise data are discarded. The variation in the classification rate with the value of the threshold for defuzzification of the memberships is shown in Fig. 6. The classification rate is defined as follows: 1) noise data are correctly classified if every membership is below the threshold, and 2) inlier data are correctly classified if the membership in the correct cluster is above the threshold. The dotted line in Fig. 6 denotes the KMFE, the finely broken line the robust KMFE, the coarsely broken line the kernel KMFE, and the solid line the robust kernel KMFE, which keeps the best performance among these methods.
Fig. 7. Image samples
Fig. 8. Classification rates for face image data
3.2 Face Images
We have next experimented with image data sampled from the UMIST Face Database. The data are face images of three persons, of whom 23, 26, and 24 images respectively are sampled, with 30 images of natural scenes added as noise data. Some of the images are illustrated in Fig. 7. The inlier data are composed of face images photographed from various viewpoints, hence they form elongated clusters. The feature vectors are arrays of the gray levels of each pixel. The image size is 112 × 92, hence the dimensionality of the feature vectors is 10304. Figure 8 illustrates the variation in the classification rate with the value of the threshold for these data. The threshold should be set to high values because the noise data lie close to the inlier data in this dataset. The robust kernel KMFE shows high performance around such high threshold values, hence it is useful for such complex data.
4 Conclusion
We have presented a robust kernel fuzzy clustering method for extracting arbitrarily shaped clusters without the disturbance from noise data. Its performance has been verified with experiments on toy data and face images. A detailed theoretical analysis of the algorithm and the development of a scheme for determining the number of clusters are subjects of further study.
References
1. Girolami, M.: Mercer kernel-based clustering in feature space. IEEE Trans. Neural Netw. 13 (2002) 780–784
2. Kim, D.-W., Lee, K., Lee, D., Lee, K.H.: Evaluation of the performance of clustering algorithms in kernel-based feature space. Patt. Recog. 38 (2004) 607–611
3. Wu, K.-L., Yang, M.-S.: Alternative c-means clustering algorithms. Patt. Recog. 35 (2002) 2267–2278
4. Leski, J.: Towards a robust fuzzy clustering. Fuzzy Sets & Syst. 137 (2003) 215–233
5. Urahama, K.: Convergence of alternative c-means clustering algorithms. IEICE Trans. Inf. & Syst. E86-D (2003) 752–754
6. Zhang, D.-Q., Chen, S.-C.: Kernel-based fuzzy and possibilistic c-means clustering. Proc. ICANN'03 (2003) 122–125
Spatial Homogeneity-Based Fuzzy c-Means Algorithm for Image Segmentation

Bo-Yeong Kang1, Dae-Won Kim2, and Qing Li1

1 School of Engineering, ICU, 103-6, Moonji-ro, Yuseong-gu, Daejeon, Korea
2 Department of BioSystems, KAIST, Guseong-dong, Yuseong-gu, Daejeon, Korea
[email protected]

Abstract. A fuzzy c-means algorithm incorporating the notion of dominant colors and spatial homogeneity is proposed for the color clustering problem. The proposed algorithm extracts the most vivid and distinguishable colors, referred to as the dominant colors, and then uses these colors as the initial centroids in the clustering calculations. This is achieved by introducing reference colors and defining a fuzzy membership model between a color point and each reference color. The objective function of the proposed algorithm incorporates the spatial homogeneity, which reflects the uniformity of a region. The homogeneity is quantified in terms of the variance and discontinuity of the spatial neighborhood around a color point. The effectiveness and reliability of the proposed method are demonstrated through various color clustering examples.
1 Introduction
The objective of color clustering is to divide a color set into c homogeneous color clusters. Color clustering is used in a variety of applications, such as color image segmentation and recognition. Color clustering is an inherently ambiguous task because color boundaries are often blurred due to the image acquisition process [1]. Fuzzy clustering models have proved a particularly promising solution to the color clustering problem. In fuzzy clustering, the uncertainty inherent in a system is preserved as long as possible before decisions are made. As a result, fuzzy clustering is less prone to falling into local optima than crisp clustering algorithms [1,2,3]. The most widely used fuzzy clustering algorithm is the FCM algorithm proposed by Bezdek [1]. This algorithm classifies a set of data points X into c homogeneous groups represented as fuzzy sets $F_1, F_2, ..., F_c$. The objective is to obtain the fuzzy c-partition $F = \{F_1, F_2, ..., F_c\}$ for both an unlabeled data set $X = \{x_1, ..., x_n\}$ and the number of clusters c by minimizing the function $J_m$:

$$J_m(U, V : X) = \sum_{i=1}^{c}\sum_{j=1}^{n} (\mu_{F_i}(x_j))^m \|x_j - v_i\|^2 \qquad (1)$$

where $\mu_{F_i}(x_j)$ is the membership degree of data point $x_j$ to the fuzzy cluster $F_i$, and is additionally an element of a $(c \times n)$ pattern matrix $U = [\mu_{F_i}(x_j)]$.
The i-th row of $U$, $U_i$, corresponds to the fuzzy cluster $F_i$. $V = (v_1, v_2, ..., v_c)$ is a vector comprised of the centroids of the fuzzy clusters $F_1, F_2, ..., F_c$. Thus, a fuzzy partition can be denoted by the pair $(U, V)$. $\|x_j - v_i\|$ denotes the Euclidean norm between $x_j$ and $v_i$. The parameter m controls the fuzziness of the membership of each datum. The goal is to iteratively improve a sequence of sets of fuzzy clusters $F(1), F(2), ..., F(t)$ (where t is the iteration step) until $J_m(U, V : X)$ shows no further improvement.

Since Bezdek first reported his fuzzy set theoretic image segmentation algorithm for aerial images [1,2,4], numerous reports have appeared detailing the superior manner in which the FCM algorithm handles uncertainties [5,6,7,8,9,10]. However, traditional FCM methods have a number of limitations. First, the initialization of the cluster centroids is important because different selections of the initial cluster centroids can potentially lead to different local optima or different partitions. However, there is no general agreement regarding which initialization scheme gives the best results [1,2]. Most algorithms developed to date assign random locations to the initial centroids because such a selection will definitely converge in the iterative process. Moreover, the traditional spatial FCM algorithms are limited in formulating the influence of the spatial neighborhood of a data point $x_j$ on the calculation of the degree of membership of $x_j$ to a given cluster $F_i$. As Liew pointed out [10], in the original spatial FCM algorithm put forward by Tolias and Panas [9], the influence of neighboring color points on a central color point is binary in that it is implemented through the addition and subtraction of a fixed constant. The adaptive spatial FCM algorithm of Liew, which uses the notion of homogeneous regions, still does not provide an explicit mathematical formulation for handling homogeneous regions.

To solve these problems, a new fuzzy c-means algorithm for clustering color data is proposed in the present study. The initial cluster centroids are selected based on the notion that dominant colors in a given color set are unlikely to belong to the same cluster (Section 2.1). Thus, we developed a scheme for identifying the most vivid and distinguishable colors in a given color data set; these dominant colors are then used to guess the initial cluster centroids for the FCM algorithm. In addition, spatial knowledge is incorporated into the clustering algorithm by employing the homogeneity of the spatial neighborhood, which is quantified in terms of the variance and discontinuity of the neighborhood of a color point (Section 2.2). The homogeneity value of a color $x_j$ indicates the likelihood that $x_j$ and its neighborhood belong to the same homogeneous region. This homogeneity value is used in calculating the degree of membership of $x_j$ to the fuzzy clusters in the FCM algorithm. We call the proposed method a spatial homogeneity-based fuzzy c-means (SHFCM) algorithm.
2 Spatial Homogeneity-Based Fuzzy c-Means Algorithm

2.1 Initialization of Cluster Centroids
The clustering initialization procedure aims to establish a good initial centroid for each cluster. We guess the initial centroids from the dominant colors, which are the
most distinguishable colors in a given color data set; the number of dominant colors is set equal to the number of clusters (c). To obtain the dominant colors from a color set X, we use the notion of reference colors, assuming that these colors contain the major distinguishable colors found in natural scenes [11]. This set provides standard colors with which to compare and measure similarities between colors. The dominant colors are taken as the top c reference colors that record the highest matching scores over all colors $x \in X$. The matching scores are calculated from the degree of membership of each color point to a set of reference colors. Suppose a color point, denoted by $x = (x_L, x_a, x_b) \in X$, is a point in the CIELAB color space; the superiority of the CIELAB color space over other color spaces has been demonstrated in many color image applications [13,14]. Likewise, the i-th reference color, denoted by $r_i = (r_L^i, r_a^i, r_b^i) \in R$, is a point in the CIELAB space. The distance between $x$ and $r_i$ is calculated from the CIELAB color difference formula [12], denoted by $\delta(x, r_i) = \sqrt{(x_L - r_L^i)^2 + (x_a - r_a^i)^2 + (x_b - r_b^i)^2}$.

We now define a membership function $\mu_{r_i}: x \to [0, 1]$ for a given color point $x$ that quantifies the degree of membership $\mu_{r_i}(x)$ of $x$ to the reference color $r_i$:

$$\mu_{r_i}(x) = \begin{cases} 1.0 & \text{if } \delta(x, r_i) = 0 \\ 0.0 & \text{if } \delta(x, r_j) = 0\ (r_i \neq r_j,\ r_j \in R) \\ \left[\sum_{j=1}^{k}\left(\frac{\delta(x, r_i)}{\delta(x, r_j)}\right)^{\lambda}\right]^{-1} & \text{otherwise} \end{cases} \qquad (2)$$

where $\lambda$ is a positive weighting parameter for the membership of $x$ to $r_i$. If a color point $x$ exactly coincides with a reference color $r_i$, it has a membership degree of 1.0. Conversely, if $x$ exactly coincides with another reference color $r_j$, then $x$ has no relation to $r_i$ and the membership degree is assigned a value of 0.0. When the color point $x$ does not exactly coincide with any reference color, the membership value for each reference color is determined by Eq. 2.

To determine the dominant colors, we compute the membership degrees between all color points $x_j \in X$ and reference colors $r_i \in R$. Each reference color $r_i$ has two additional attributes, denoted $\mu_i$ and $p_i$. Reference color $r_i$ is therefore defined as $r_i = \langle(r_L^i, r_a^i, r_b^i), \mu_i, p_i\rangle$. Here, $\mu_i = \max \mu_{r_i}(x_j)$ indicates the highest membership degree obtained by computing $\mu_{r_i}(x_j)$ for all $x_j \in X$, and $p_i = \arg\max_{x_j} \mu_{r_i}(x_j)$ indicates the color point $x_j$ closest to $r_i$. For each color point $x_j \in X$, we compute the color membership degree for each of the reference colors, and update $\mu_i$ and $p_i$. When the computation is completed, the reference colors are sorted by $\mu_i$ in decreasing order. The sorted list of reference colors is represented by $R_s = (r_{s1}, r_{s2}, ..., r_{sk})$, where the reference color $r_{s1}$ has the highest value of $\mu_i$ and $r_{sk}$ the lowest. Now we can define the dominant colors as $D = \{d_i\ |\ d_i = r_{si},\ 1 \le i \le c\}$. Thus the set of dominant colors consists of the first c reference colors in the sorted list, which represent the most distinguishable and vivid colors in a given color set X. Having established the dominant colors, the initial cluster centroids are assigned to the color point $x_i$ closest to the dominant color $d_i$, i.e., $V_0 = \{v_i\ |\ v_i = p_i,\ p_i \in d_i\}$.
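As an illustration, a small NumPy sketch of the membership function of Eq. 2 follows; the function name and the handling of exact matches are ours.

```python
import numpy as np

def reference_membership(x, refs, lam=2.0):
    """Degree of membership of CIELAB color x to each reference color (Eq. 2).

    x: (3,) CIELAB point; refs: (k, 3) reference colors; lam: the weight lambda.
    """
    delta = np.sqrt(((x - refs) ** 2).sum(axis=1))   # CIELAB color differences
    if np.any(delta == 0):
        mu = np.zeros(len(refs))
        mu[delta == 0] = 1.0       # an exact match gets 1.0, all others 0.0
        return mu
    # otherwise mu_ri(x) = [ sum_j (delta_i / delta_j)^lam ]^(-1)
    ratios = (delta[:, None] / delta[None, :]) ** lam
    return 1.0 / ratios.sum(axis=1)
```

Running this over all color points while tracking, per reference color, the maximum membership and its arg max yields the attributes µi and pi used to select the dominant colors.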
2.2 Spatial Homogeneity
Homogeneity is local information that reflects the uniformity of a region [15]; when a region is homogeneous, data in that region are likely to have similar characteristics. This concept has the potential to be very useful in color clustering because the objective of such clustering is to partition color data into several homogeneous groups. For simplicity, in the present study we consider spatial homogeneity in two-dimensional space. Recently, Cheng and Sun proposed a homogeneity-based histogram analysis [16] in which homogeneity is formulated in terms of the gray intensity. In the present work, we extend their definition of homogeneity to the CIELAB color space. For each color point, the homogeneity value is computed in terms of two components, variance and discontinuity. In this scheme, the homogeneity value of a color increases with decreasing variance and discontinuity. In other words, the higher the homogeneity value of a color $x_j \in X$, the greater the likelihood that $x_j$ and its spatial neighborhood belong to the same homogeneous region.

The variance is a measure of the color contrasts in a predefined region. Let $x_j$ be a color point, and let $S_j$ denote a spatial neighborhood in two-dimensional space, specified to be a $d \times d$ window centered at $x_j$. The mean values $M_j^{L^*}$, $M_j^{a^*}$, and $M_j^{b^*}$ of the color data in $S_j$ can be computed as:

$$M_j^{L^*} = \frac{1}{d^2}\sum_{x_k \in S_j} x_{k,L}, \qquad M_j^{a^*} = \frac{1}{d^2}\sum_{x_k \in S_j} x_{k,a}, \qquad M_j^{b^*} = \frac{1}{d^2}\sum_{x_k \in S_j} x_{k,b} \qquad (3)$$

Therefore, the variance $\sigma^2(x_j)$ of color $x_j$ is computed as:

$$\sigma^2(x_j) = \frac{1}{d^2}\sum_{x_k \in S_j}\left[(x_{k,L} - M_j^{L^*})^2 + (x_{k,a} - M_j^{a^*})^2 + (x_{k,b} - M_j^{b^*})^2\right] \qquad (4)$$

The discontinuity $g(x_j)$ of color $x_j$ is obtained through the first derivatives at the spatial coordinate of $x_j$, defined as the magnitude of the gradient between $x_j$ and $S_j$ [15,16]:

$$g(x_j) = \left[G_{x_j,h}^2 + G_{x_j,v}^2\right]^{1/2} = \left[\left(\frac{\partial g}{\partial x_{j,h}}\right)^2 + \left(\frac{\partial g}{\partial x_{j,v}}\right)^2\right]^{1/2} \qquad (5)$$

where $G_{x_j,h}^2$ and $G_{x_j,v}^2$ are the gradients in the horizontal and vertical directions, respectively. The two measures, $\sigma^2(x_j)$ and $g(x_j)$, have different scales that need to be reconciled through a normalization. Thus, the variance and discontinuity are normalized by their maximum values; specifically, $\sigma_n^2(x_j) = \sigma^2(x_j)/\max \sigma^2(x_j)$ and $g_n(x_j) = g(x_j)/\max g(x_j)$. Then the spatial homogeneity $h(x_j)$ of color $x_j$ is defined as:

$$h(x_j) = 1 - \sigma_n^2(x_j) \times g_n(x_j) \qquad (6)$$

The goal of the present study is to cluster a set of color data $X = \{x_1, x_2, ..., x_n\}$ into c homogeneous groups. To achieve this, we propose a spatial homogeneity-based fuzzy c-means (SHFCM) algorithm. The objective of SHFCM is to cluster the data X into c clusters $(F_1, F_2, ..., F_c)$ by minimizing the function

$$J_m(U, V : X) = \sum_{i=1}^{c}\sum_{j=1}^{n} (\mu_{F_i}(x_j))^m \|x_j - v_i\|^2 \qquad (7)$$

where

$$\mu_{F_i}(x_j) = \frac{1}{n_j}\Big[\mu_{ij} + \sum_{x_k \in S_j} \mu_{ik}\, h(x_k)\Big] \qquad (8)$$

and

$$\mu_{ij} = \left[\sum_{z=1}^{c}\left(\frac{\|x_j - v_i\|^2}{\|x_j - v_z\|^2}\right)^{\frac{1}{m-1}}\right]^{-1} \qquad (9)$$

subject to $0 \le \mu_{F_i}(x_j) \le 1$, $\sum_{i=1}^{c}\mu_{F_i}(x_j) = 1$, and $0 < \sum_{j=1}^{n}\mu_{F_i}(x_j) < n$. Here, $\mu_{ij}$ is obtained by computing relative distances between clusters in the color feature space, $x_k \in S_j$ is a point of the spatial neighborhood of $x_j$ defined as a $d \times d$ window, $n_j\ (= d^2)$ is the number of $x_j$ and its neighbors, and $h(x_k)$ is the homogeneity of $x_k$. In Eq. 8, we can see that $\mu_{F_i}(x_j)$ is influenced by the relations between $x_k \in S_j$ and $F_i$ and, furthermore, that the extent of this influence is controlled by the degree of homogeneity $h(x_k)$. Let us suppose that $x_j$ is located in a homogeneous region. Under such circumstances, it is useful to exploit the relations between the neighborhood and the centroid because data in a homogeneous region are likely to belong to the same cluster. In contrast, when $x_k \in S_j$ lies in a nonhomogeneous region such as an edge, the influence of $x_k$ on $x_j$ should be made as small as possible. Thus, the degree to which $x_k$ influences $x_j$ is determined through its homogeneity value $h(x_k)$.
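A minimal sketch of the SHFCM membership computation (Eqs. 8-9) is shown below. We assume each neighborhood list holds the window indices excluding the point itself, so that the divisor is n_j = d^2 as in the text; that convention and all names are our assumptions.

```python
import numpy as np

def shfcm_membership(X, V, h, neighbors, m=2.0):
    """SHFCM memberships: Eq. 9 blended with neighborhood homogeneity (Eq. 8).

    X: (n, 3) CIELAB colors; V: (c, 3) centroids; h: (n,) homogeneity values;
    neighbors[j]: index array of the d*d window around x_j (excluding j itself).
    """
    # Eq. 9: relative-distance memberships mu_ij
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2) + 1e-12
    r = d2 ** (1.0 / (m - 1.0))
    mu = 1.0 / (r * (1.0 / r).sum(axis=1, keepdims=True))
    # Eq. 8: add the neighbors' memberships, each weighted by its homogeneity
    u = np.empty_like(mu)
    for j, S in enumerate(neighbors):
        u[j] = (mu[j] + (mu[S] * h[S][:, None]).sum(axis=0)) / (len(S) + 1)
    return u
```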
3 Experimental Results
To test the effectiveness with which the proposed method clusters color data, we applied the conventional FCM algorithm and the proposed SHFCM algorithm to four image segmentation problems and compared the performances of the algorithms. In the FCM calculations, the initial centroids were randomly selected. In the SHFCM, the initial centroids were obtained from the aforementioned dominant colors. In these experiments, the FCM and SHFCM parameters were set as follows: the weighting exponent was set to m = 2.0 because this value has been overwhelmingly favored in previous studies [1]; the termination criterion was set to ε = 0.001; λ was set to 2.0 as this value provides the best performance; and the neighborhood dimension was set to d = 3. Figure 1(a) shows an original image ("pants") containing four colors (c = 4): white for the background, a gray for the left-side pants, a red for the right-side pants, and black for the rectangular mark on the red pants. (Please note that all figures in this section must be viewed in color to appreciate the color segmentation results.) An ideal clustering would result in four segmented objects.
Fig. 1. Comparison of the results using FCM and SHFCM: (a) original image "pants"; (b) homogeneity (threshold = 0.9); (c) FCM clustering; (d) SHFCM clustering
Although this image appears simple to cluster, the clustering problem is complicated by the presence of many color points on the gray pants (e.g., those in the crease areas) that are close in color space to those of the rectangular mark on the red pants. Figure 1(b) shows the homogeneity values extracted from the original image; in this figure, the white areas represent homogeneous regions and the black areas represent nonhomogeneous regions. These regions were determined based on a threshold homogeneity value of 0.90. Clustering based on the FCM algorithm [Fig. 1(c)] led to over-segmentation of the left-side gray pants due to the large number of creases in those pants, and the black mark on the right-side pants is not clear. In contrast, the SHFCM algorithm [Fig. 1(d)] provided a better classification of the four objects than FCM. Figure 2(a) shows an original image ("clown") containing various colors. Each clustering algorithm was used to segment the image into six colors (c = 6): white, black, red, brown, yellow, and green. The FCM algorithm failed to give correct clustering results [Fig. 2(c)]; the green hair was misclassified as a yellow color in the final image. In contrast, the SHFCM algorithm [Fig. 2(d)] successfully segmented the six colors; it clearly classified the hair as being of green color because the green color was identified as one of the dominant colors. Figure 3(a) shows an original image ("house"). Each clustering algorithm was used to segment this image into six clusters (c = 6): white, blue, red, black, gray, and green. The FCM algorithm failed to give a correct clustering result [Fig. 3(c)]; it over-segmented the lower part of the green grass and misclassified the gray roof of the house as being of white color. In contrast, the SHFCM algorithm successfully segmented
Fig. 2. Comparison of the results using FCM and SHFCM: (a) original image "clown"; (b) homogeneity (threshold = 0.9); (c) FCM clustering; (d) SHFCM clustering
Fig. 3. Comparison of the results using FCM and SHFCM: (a) original image "house"; (b) homogeneity (threshold = 0.9); (c) FCM clustering; (d) SHFCM clustering
the six color objects [Fig. 3(d)]; it more clearly classified the green grass and the black shade, demonstrating its superior performance.
4 Conclusion
To tackle color clustering problems, in this paper we have proposed a new spatial homogeneity-based fuzzy c-means (SHFCM) algorithm. In this algorithm, the initial cluster centroids are selected by identifying the dominant colors in a given color set and then placing the initial centroids at the points closest in color space to the dominant colors. In addition, the proposed method exploits spatial knowledge by including the influence of the neighborhood of a color point, as discriminated by its homogeneity value. Comparisons of the SHFCM and FCM methods showed that the SHFCM method is more effective.
References
1. Bezdek, J.C.: Fuzzy Models and Algorithms for Pattern Recognition and Image Processing. Kluwer Academic Publishers, Boston (1999)
2. Jain, A.K., Dubes, R.C.: Algorithms For Clustering. Prentice-Hall, NJ (1998)
3. Jain, A.K., Murty, M.N., Flynn, P.J.: Data Clustering: A Review. ACM Computing Surveys 31 (1999) 264–323
4. Pal, N.R., Pal, S.K.: A Review On Image Segmentation Techniques. Pattern Recognition 26 (1993) 1277–1294
5. Lim, Y.W., Lee, S.U.: On The Color Image Segmentation Algorithm Based on the Thresholding and the Fuzzy c-Means Techniques. Pattern Recognition 23 (1990) 935–952
6. Bensaid, A.M.: Partially Supervised Clustering For Image Segmentation. Pattern Recognition 29 (1996) 859–871
7. Cheng, T.W., Goldgof, D.B., Hall, L.O.: Fast Fuzzy Clustering. Fuzzy Sets and Systems 93 (1998) 49–56
8. Ozdemir, D., Akarun, L.: A Fuzzy Algorithm For Color Quantization Of Images. Pattern Recognition 35 (2002) 1785–1791
9. Tolias, Y.A., Panas, S.M.: Image Segmentation by a Fuzzy Clustering Algorithm Using Adaptive Spatially Constrained Membership Functions. IEEE Trans. Syst. Man Cybern. 28 (1998) 359–369
10. Liew, A.W.C., Leung, S.H., Lau, W.H.: Fuzzy Image Clustering Incorporating Spatial Continuity. IEE Proceedings of Vis. Image Process 147 (2000) 185–192
11. Kim, D.-W., Lee, K.H., Lee, D.: A Novel Initialization Scheme For the Fuzzy c-Means Algorithm for Color Clustering. Pattern Recognition Letters 25 (2004) 227–237
12. Wyszecki, G., Stiles, W.S.: Color Science: Concepts and Methods, Quantitative Data and Formulae. Wiley-Interscience Publication, New York (2000)
13. Paschos, G.: Perceptually Uniform Color Spaces For Color Texture and Analysis: An Empirical Evaluation. IEEE Trans. Image Processing 10 (2001) 932–937
14. Shafarenko, L., Petrou, H., Kittler, J.: Histogram-based Segmentation In a Perceptually Uniform Color Space. IEEE Trans. on Image Processing 7 (1998) 1354–1358
15. Gonzalez, R.C., Wintz, P.: Digital Image Processing. Addison-Wesley, MA (1987)
16. Cheng, H.-D., Sun, Y.: A Hierarchical Approach To Color Image Segmentation Using Homogeneity. IEEE Transactions on Image Processing 9 (2000) 2071–2082
A Novel Fuzzy-Connectedness-Based Incremental Clustering Algorithm for Large Databases

Yihong Dong1,2, Xiaoying Tai1, and Jieyu Zhao1

1 Institute of Computer Science and Technology, Ningbo University, Ningbo 315211, China
2 Institute of Artificial Intelligence, Zhejiang University, Hangzhou 310027, China
[email protected]

Abstract. Many clustering methods have been proposed in the data mining field, but few have focused on incremental databases. In this paper, we present an incremental algorithm, IFHC, based on FHC [3], that is applicable in periodically updated environments. Both FHC and IFHC can handle data with numeric as well as categorical attributes. Experiments show that IFHC is faster and more efficient than FHC when databases are updated.
1 Introduction

Incremental mining is a technique that updates the mining result incrementally when insertions and deletions occur on the operational database, instead of re-mining the whole changed database. If we make full use of the latest mining result, we can obtain better efficiency: only the inserted or deleted data need to be taken into account in the incremental update, rather than the whole updated database. There have been some reports on research into incremental clustering algorithms. IncrementalDBSCAN [1] is the first incremental clustering algorithm, whose performance evaluation demonstrated good efficiency on a spatial database as well as on a WWW-log database. IGDCA [2] first partitions the data space into a number of units, and then deals with units instead of points; only those units with density no less than a given minimum density threshold are useful in extending clusters. Based on the fuzzy hierarchical clustering algorithm (FHC) proposed in [3], we present its incremental version, IFHC, to deal with a bulk of updates.
2 Incremental Fuzzy-Connectedness-Based Clustering Algorithm

In the FHC [3] method, the dataset is first partitioned into several sub-clusters using a partitioning method, and then a fuzzy graph of the sub-clusters is constructed by analyzing the fuzzy-connectedness among the sub-clusters. By taking the λ-cut graph of the fuzzy graph, we get the connected components of the fuzzy graph, which are the clustering result we want. The algorithm can be performed on high-
dimensional data sets, clustering arbitrary shapes of clusters such as spherical, linear, elongated, or concave ones.

Definition 1 (neighborhood): The neighborhood of an object p is the area with p as center and r as radius, denoted by Neig(p).

Definition 2 (connection): A point x is a connection between neighborhood p and neighborhood q if and only if x lies in both the neighborhood of p and that of q. We use connection(p, q) to denote this set: $connection(p, q) = \{x\ |\ x \in Neig(p),\ x \in Neig(q)\}$.

Definition 3 (fuzzy-connectedness): Fuzzy-connectedness is the connection intensity between neighborhood p and neighborhood q:

$$\mu(p, q) = \frac{|connection(p, q)|}{N_p + N_q - |connection(p, q)|}, \qquad 0 \le \mu(p, q) \le 1$$

Definition 4 (directly λ-fuzzy-connected): Let D be a set of objects, $p \in D$, $q \in D$. Neighborhood p and neighborhood q are directly λ-fuzzy-connected if and only if $\mu(p, q) \ge \lambda$, denoted as $p \stackrel{\lambda}{\leftrightarrow}_D q$.

Definition 5 (λ-fuzzy-connected): Let D be a set of objects. If a chain $p_1, p_2, ..., p_n$, $p_1 = q$, $p_n = p$ exists, where $p_i \in D\ (1 \le i \le n)$ and $p_{i+1} \stackrel{\lambda}{\leftrightarrow}_D p_i$, then neighborhood p and neighborhood q are λ-fuzzy-connected in D, denoted as $p \stackrel{\lambda}{\sim}_D q$.
Definition 6 (affected-sub-cluster of p): Assume the set of sub-clusters in database D is $O = \{O_1, O_2, ..., O_m\}$, $p \in D$. The affected-sub-cluster of p is the set of sub-cluster centers contained in the neighborhood of p: $Affected\_subcluster(p) = \{q\ |\ q \in O \text{ and } q \in Neig(p)\}$.

Definition 7 (center of p): Assume the set of sub-clusters in database D is $O = \{O_1, O_2, ..., O_m\}$, $p \in D$. The center of p is the nearest sub-cluster in $Affected\_subcluster(p)$, denoted as $center(p) = O_j$.

Definition 8 (Affected_Neig): Let D be a database of objects and p some object (either in or not in D). We define the set of neighborhoods in D affected by the insertion or deletion of p as

$$Affected\_Neig(p) = Affected\_subcluster(p) \cup \{r\ |\ r \stackrel{\lambda}{\sim}_{D \cup \{p\}} x,\ x \in Affected\_subcluster(p)\}$$
Lemma: Let D be a set of objects and p some object. Then for all $x \in D$: $x \notin Affected\_Neig(p) \Rightarrow \{q\ |\ q \stackrel{\lambda}{\sim}_{D \setminus \{p\}} x\} = \{q\ |\ q \stackrel{\lambda}{\sim}_{D \cup \{p\}} x\}$.

Proof: ⊆: For any $q \in \{q\ |\ q \stackrel{\lambda}{\sim}_{D \setminus \{p\}} x\}$, by the definition of λ-fuzzy-connection there exists a chain $p_1, p_2, ..., p_n$, $p_1 = q$, $p_n = x$, where $p_i \in D \setminus \{p\}\ (1 \le i \le n)$ and $p_{i+1} \stackrel{\lambda}{\leftrightarrow} p_i$. Since $D \setminus \{p\} \subseteq D \cup \{p\}$, we have $p_i \in D \cup \{p\}$. Then, again by the definition of λ-fuzzy-connection, $q \in \{q\ |\ q \stackrel{\lambda}{\sim}_{D \cup \{p\}} x\}$, so $\{q\ |\ q \stackrel{\lambda}{\sim}_{D \setminus \{p\}} x\} \subseteq \{q\ |\ q \stackrel{\lambda}{\sim}_{D \cup \{p\}} x\}$.

⊇: Assume the inclusion $\{q\ |\ q \stackrel{\lambda}{\sim}_{D \setminus \{p\}} x\} \supseteq \{q\ |\ q \stackrel{\lambda}{\sim}_{D \cup \{p\}} x\}$ does not hold for some $q \in \{q\ |\ q \stackrel{\lambda}{\sim}_{D \cup \{p\}} x\}$. Then at least one chain including neighborhood p must exist to make neighborhood q and neighborhood x λ-fuzzy-connected, i.e., $\exists p_1, p_2, ..., p, ..., p_n$, $p_1 = q$, $p_n = x$, where $p_i \in D \cup \{p\}\ (1 \le i \le n)$ and $p_{i+1} \stackrel{\lambda}{\leftrightarrow} p_i$. By the definition of the set $Affected\_Neig$, we then have $x \in Affected\_Neig(p)$, which contradicts the assumption $x \notin Affected\_Neig(p)$. Thus $\{q\ |\ q \stackrel{\lambda}{\sim}_{D \setminus \{p\}} x\} \supseteq \{q\ |\ q \stackrel{\lambda}{\sim}_{D \cup \{p\}} x\}$.

In summary, $\{q\ |\ q \stackrel{\lambda}{\sim}_{D \setminus \{p\}} x\} = \{q\ |\ q \stackrel{\lambda}{\sim}_{D \cup \{p\}} x\}$, which completes the proof.
Due to the lemma, after inserting or deleting an object p, it is sufficient to reapply the FHC algorithm to the set Affected_Neig(p) in order to update the clustering.

Insertions. When inserting a new object p, new directly λ-fuzzy-connections may be established, but none are removed. By the lemma, it is sufficient to restrict the application of the clustering procedure to the set Affected_Neig(p). When inserting an object p into the database D, we can distinguish the following cases:
(1) If Affected_Neig(p) = Ø, p is a noise object and nothing else is changed.
(2) If Affected_Neig(p) = Center(p) and |Center(p)| < MinPts, then p is a noise object; if Affected_Neig(p) = Center(p) and |Center(p)| ≥ MinPts, the object p is a member of the cluster Center(p) and should be absorbed into it.
(3) If $Affected\_Neig(p) = Affected\_subcluster(p) \cup \{r\ |\ r \stackrel{\lambda}{\sim}_{D \cup \{p\}} x,\ x \in Affected\_subcluster(p)\}$, where $Affected\_subcluster(p) = \{q\ |\ q \in O \text{ and } q \in Neig(p)\}$, then connection(p, q) should be incremented by 1 and µ(p, q) must be recomputed to judge whether a new directly λ-fuzzy-connection is created. If it is, two or more clusters may be combined.

Deletions. As opposed to insertion, when deleting an object p, directly λ-fuzzy-connections may be removed, but no new ones are established. This may make a cluster split into two or more clusters; in practice, the split of a cluster is not very frequent. When deleting an object p from the database D, we can distinguish the following cases:
(1) If Affected_Neig(p) = Ø, then p is an outlier regarded as a noise object. The deletion of p has no further effect on the clustering result.
(2) If Affected_Neig(p) = Center(p), the object p should simply be deleted from the cluster Center(p).
(3) If $Affected\_Neig(p) = Affected\_subcluster(p) \cup \{r\ |\ r \stackrel{\lambda}{\sim}_{D \setminus \{p\}} x,\ x \in Affected\_subcluster(p)\}$, then connection(p, q) should be decremented by 1 and µ(p, q) recomputed to judge whether a directly λ-fuzzy-connection is removed. If it is, the cluster will be divided into two or more clusters.
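To illustrate the bookkeeping behind case (3) of the insertions, here is a self-contained sketch: the inserted point falls into the neighborhoods of all affected sub-clusters at once, so every pair of them gains one shared point and their fuzzy-connectedness (Definition 3) must be re-checked against λ. The data layout and function names are hypothetical, not from the paper.

```python
import numpy as np

def fuzzy_connectedness(n_conn, n_p, n_q):
    # Definition 3: mu(p, q) = |connection| / (N_p + N_q - |connection|)
    return n_conn / (n_p + n_q - n_conn)

def insert_point(p, centers, counts, conn, radius, lam):
    """Hypothetical bookkeeping for IFHC insertion case (3).

    centers: (s, dim) sub-cluster centers; counts: (s,) neighborhood sizes;
    conn: (s, s) shared-point counts. Returns pairs that become directly
    lambda-fuzzy-connected, i.e., candidates for cluster merging.
    """
    dist = np.linalg.norm(centers - p, axis=1)
    affected = np.flatnonzero(dist <= radius)    # Affected_subcluster(p)
    counts[affected] += 1                        # p joins these neighborhoods
    merges = []
    for a in affected:
        for b in affected:
            if a < b:                            # p is a new shared point of a, b
                conn[a, b] += 1
                mu = fuzzy_connectedness(conn[a, b], counts[a], counts[b])
                if mu >= lam:
                    merges.append((a, b))
    return merges
```

Deletion is symmetric: the shared-point counts of affected pairs are decremented, and any pair whose µ drops below λ triggers a split check.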
3 Comparison of IFHC Versus FHC

In this section, we evaluate the efficiency of IFHC versus FHC. We present an experimental evaluation using a synthetic database of 100,000 records with 30 clusters of similar sizes and 5% noise distributed outside the clusters. The Euclidean distance was used as the distance function, ε was set to 2, and λ was set to 0.08. All experiments were run on a PIV 1.7G machine with 256M of RAM running VC6.0. Three experiments were designed to compare the performance of IFHC versus FHC: only insertions, only deletions, and both insertions and deletions, as Table 1 shows. The "both insertions and deletions" case consists of half insertions and half deletions. From the table, we can see that IFHC is faster and more efficient than FHC in updates of databases.

Table 1. Run time comparison of IFHC versus FHC
Number of updates            1000    2000    3000    4000    5000
Insertions       FHC        327.1   331.7   335.4   338.6   340.6
                 IFHC        34.3    51.0    67.2    85.4   102.8
Deletions        FHC        318.5   316.2   313.4   311.7   307.8
                 IFHC        31.5    48.5    65.9    82.3   104.4
Both insertions  FHC        325.8   322.6   326.3   324.9   320.6
and deletions    IFHC        32.2    50.8    66.3    83.7    98.5
4 Conclusion

Recently, many researchers have focused on clustering as a primary data mining method for knowledge discovery. In this paper, we introduce a novel fuzzy-connectedness-based incremental algorithm, IFHC, to deal with a bulk of updates instead of single updates. FHC and IFHC can efficiently discover clusters with arbitrary shapes. The results of our experimental study show that IFHC is faster and more efficient than FHC.

Acknowledgements. This work is partially supported by the Scientific Research Fund of Zhejiang Provincial Education Department of China (20030485) to Yihong Dong and the Natural Science Foundation of China (NSFC 60472099) to Xiaoying Tai.
References
1. Ester, M., Kriegel, H.-P., Sander, J., Wimmer, M., Xu, X.: Incremental clustering for mining in a data warehousing environment. In Proc. 24th VLDB Int. Conf., New York (1998) 323–333
2. Chen, N., Chen, A., Zhou, L.-X.: An incremental grid density-based clustering algorithm. Journal of Software 13(01) (2002) 1–7
3. Dong, Y., Zhuang, Y.: Fuzzy hierarchical clustering algorithm facing large databases. In Proc. of 5th World Congress on Intelligent Control and Automation, Hangzhou (2004) 4282–4286
Classification of MPEG VBR Video Data Using Gradient-Based FCM with Divergence Measure

Dong-Chul Park

Intelligent Computing Research Lab., Dept. of Information Engineering, Myong Ji University, Korea
[email protected]

Abstract. An efficient approximation of the Gaussian Probability Density Function (GPDF) is proposed in this paper. The proposed algorithm, called the Gradient-Based FCM with Divergence Measure (GBFCM(DM)), employs the divergence measure as its distance measure and utilizes the spatial characteristics of MPEG VBR video data for MPEG data classification problems. When compared with conventional clustering and classification algorithms such as the FCM and GBFCM, the proposed GBFCM(DM) successfully finds clusters and classifies the MPEG VBR data modelled by 12-dimensional GPDFs.
1 Introduction
Multimedia technology has been applied to various areas including information and communication, education, and entertainment [1]. Recently, research on video transmission in multimedia services has become one of the most active fields, and video services will be even more active when broadband network services become available [2],[3]. Video data are analyzed and classified based on their contents in many applications. However, when video data are compressed, the analysis of video data in a compressed format becomes complicated. Furthermore, it is even harder to classify and retrieve video data by their contents when the digital video data are stored using various compression methods [4]. If compressed video data can be classified without going through the decompression procedure, the efficiency and usefulness of the video data, especially MPEG (Moving Picture Experts Group) VBR (Variable Bit Rate) video data, can be maximized [5]. Patel and Sethi proposed a method to analyze compressed video data directly by using a decision-tree classifier [6], and Liang and Mendel proposed a classification method employing a fuzzy classifier that uses the Fuzzy c-Means (FCM) algorithm [5]. Patel and Sethi's approach basically considers the MPEG VBR video data as deterministic time-series data. However, according to Rose's thorough analysis of MPEG VBR data [7], it would be more realistic to consider the MPEG VBR video data as GPDF data. The probabilistic nature of the MPEG VBR video data was also accepted by Liang and Mendel. Liang and Mendel, however, only utilize FCM with mean values of the GPDF
data while missing the variance information of the GPDF data. In order to utilize the entire information, mean and variance, of the GPDFs in MPEG VBR video data, this paper proposes a clustering algorithm based on the divergence measure. The Gradient-Based FCM (GBFCM) is considered as a basis for the new clustering algorithm [4,5], and the resultant GBFCM with Divergence Measure, GBFCM(DM), is proposed in this paper for clustering MPEG VBR video data [8]-[13]. MPEG video data traffic is introduced in Section 2. Section 3 summarizes several conventional algorithms for data clustering. The Gradient-Based FCM with Divergence Measure is proposed in Section 4. Experimental results and performance comparisons among the conventional algorithms and the GBFCM(DM) are given in Section 5. Section 6 concludes this paper.
2 Characteristics of MPEG Video Traffic
MPEG video traffic is composed of sequences of GoPs (groups of pictures). Each GoP includes I-frames (intra-coded frames), P-frames (predictive-coded frames), and B-frames (bidirectional predictive-coded frames). I-frames use DCT encoding only, to compress a single frame without reference to any other frame in the sequence; typically, I-frames are coded with 2 bits per pixel on average. P-frames are encoded as the differences from the previous I- or P-frame: the new P-frame is first predicted by taking the previous I- or P-frame and 'predicting' the values of each new pixel. B-frames are encoded as differences from the previous or next I- or P-frame; B-frames use prediction as for P-frames, but for each block either the previous or the next I- or P-frame is used. Fig. 1 shows an example of MPEG data with I-, P-, and B-frames from the video data 'Soc1' of Table 2 [7].
3 Existing Algorithms

3.1 Fuzzy c-Means (FCM) Algorithm

The objective of clustering algorithms is to group similar objects and separate dissimilar ones. Bezdek first generalized the fuzzy ISODATA by defining a family of objective functions $J_m$, $1 < m < \infty$, and established a convergence theorem for that family of objective functions [11,12]. For FCM, the objective function is defined as:

$$J_m(U, v) = \sum_{k=1}^{n}\sum_{i=1}^{c} (\mu_{ki})^m (d_i(x_k))^2 \qquad (1)$$
where $d_i(x_k)$ denotes the distance from the input datum $x_k$ to $v_i$, the center of the cluster $i$, $\mu_{ki}$ is the membership value of the datum $x_k$ in the cluster $i$, and $m$ is the weighting exponent, $m \in (1, \infty)$, while $n$ and $c$ are the numbers of input data and clusters, respectively. Note that the distance measure used in FCM is the Euclidean distance.
Fig. 1. Example of MPEG data: (a) whole data (b) I-frame (c) P-frame (d) B-frame
Bezdek defined conditions for minimizing the objective function with the following two equations [11,12]:

$$\mu_{ki} = \frac{1}{\sum_{j=1}^{c}\left(\frac{d_i(x_k)}{d_j(x_k)}\right)^{\frac{2}{m-1}}} \qquad (2)$$

$$v_i = \frac{\sum_{k=1}^{n}(\mu_{ki})^m x_k}{\sum_{k=1}^{n}(\mu_{ki})^m} \qquad (3)$$
The FCM finds the optimal values of the group centers iteratively by applying Eq. (2) and Eq. (3) in an alternating fashion.
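For reference, a compact NumPy sketch of this alternation follows; the notation is ours.

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, seed=0):
    """Sketch of the FCM alternation of Eqs. (2)-(3)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]             # Eq. (3)
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2) + 1e-12
        # Eq. (2) on squared distances: exponent 2/(m-1) becomes 1/(m-1)
        r = d2 ** (1.0 / (m - 1.0))
        U = 1.0 / (r * (1.0 / r).sum(axis=1, keepdims=True))
    return U, V
```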
3.2 Gradient-Based Fuzzy c-Means (GBFCM) Algorithm
One attempt to improve the FCM algorithm was made by minimizing the objective function using one input datum at a time instead of the entire input data. That is, the FCM in Eq. (2) and Eq. (3) uses all data to update the center value of a cluster, but the GBFCM used in this paper was developed to update the center value of a cluster with each individual datum sequentially [8,9]. Given one datum $x_i$ and $c$ clusters with centers at $v_j\ (j = 1, 2, \cdots, c)$, the objective function to be minimized is:

$$J_i = \mu_{1i}^2\|v_1 - x_i\|^2 + \mu_{2i}^2\|v_2 - x_i\|^2 + \cdots + \mu_{ci}^2\|v_c - x_i\|^2 \qquad (4)$$

with the following constraint:

$$\mu_{1i} + \mu_{2i} + \cdots + \mu_{ci} = 1 \qquad (5)$$

The basic procedure of the gradient descent method is that, starting from an initial center vector $v^0$, the gradient of the current objective function is computed, and the next value of $v$ is obtained by moving in the direction of the negative gradient along the error surface:

$$v^{k+1} = v^k - \eta\frac{\partial J_i}{\partial v^k}$$

where $k$ is the iteration index and $\frac{\partial J_i}{\partial v^k} = 2\mu_{ki}^2(v^k - x_i)$. Equivalently,

$$v^{k+1} = v^k - 2\eta\mu_{ki}^2(v^k - x_i) \qquad (6)$$

where $\eta$ is a learning constant. A necessary condition for optimal positions of the centers of the groups can be found from:

$$\frac{\partial J_i}{\partial \mu} = 0 \qquad (7)$$

After applying the condition of Eq. (7), the membership grades can be found as:

$$\mu_{ki} = \frac{1}{\sum_{j=1}^{c}\left(\frac{d_i(x_k)}{d_j(x_k)}\right)^2} \qquad (8)$$

Both the FCM and the GBFCM have an objective function that relates the distance between each center and the data to a membership grade reflecting the degree of their similarity with respect to other centers. On the other hand, they differ in the way they try to minimize it:
• As can be seen from Eq. (2) and Eq. (3), all the data are present in the objective function of the FCM, and the gradients are set to zero in order to obtain the equations necessary for minimization [14]-[16].
• As can be seen from Eq. (6) and Eq. (8), however, in the GBFCM only one datum at a time is present for updating the centers and the corresponding membership values.
A more detailed explanation of the GBFCM can be found in [8,9].
4 GBFCM with Divergence Measure
The MPEG VBR video data used in this paper are originally in a time-series format. However, previous research confirms that the GPDF representation of MPEG VBR video data has advantages over a one-dimensional time-series representation as far as data classification accuracy is concerned [7,10,13]. In distribution clustering, selecting a proper distance measure between two data vectors is very important since the performance of the algorithm largely depends on the choice of the distance measure [11,17]. After evaluating various distance measures, the divergence distance (Kullback-Leibler divergence) between two GPDFs, $x = (x_i^{\mu}, x_i^{\sigma^2})$ and $v = (v_i^{\mu}, v_i^{\sigma^2})$, $i = 1, \cdots, d$, is chosen as the distance measure in our algorithm [11,18]:

$$D(x, v) = \sum_{i=1}^{d}\left(\frac{x_i^{\sigma^2} + (x_i^{\mu} - v_i^{\mu})^2}{v_i^{\sigma^2}} + \frac{v_i^{\sigma^2} + (x_i^{\mu} - v_i^{\mu})^2}{x_i^{\sigma^2}}\right) \qquad (9)$$

where $x_i^{\mu}$ and $x_i^{\sigma^2}$ denote the $\mu$ and $\sigma^2$ values of the $i$th component of $x$, respectively, while $v_i^{\mu}$ and $v_i^{\sigma^2}$ denote the $\mu$ and $\sigma^2$ values of the $i$th component of $v$, respectively.

The GBFCM used in this paper is based on the FCM algorithm. However, instead of calculating the center parameters of the clusters after applying all the data vectors as in FCM, the GBFCM updates the center parameters at every presentation of a data vector. By doing so, the GBFCM can converge faster than the FCM [8,9]. To deal with probabilistic data such as the GPDF, the proposed GBFCM(DM) updates the center parameters, mean and variance, according to the distance measure shown in Eq. (9). That is, the membership grade of each data vector $x$ in the cluster $i$ is calculated by the following:

$$\mu_i(x) = \frac{1}{\sum_{j=1}^{c}\left(\frac{D(x, v_i)}{D(x, v_j)}\right)^2} \qquad (10)$$

After finding the proper membership grade of an input data vector $x$ in each cluster $i$, the GBFCM(DM) updates the mean and variance of each center as follows:

$$v_i^{\mu}(n+1) = v_i^{\mu}(n) - \eta\,\mu_i^2(x)\,(v_i^{\mu}(n) - x^{\mu}) \qquad (11)$$

$$v_i^{\sigma^2}(n+1) = \frac{\sum_{k=1}^{N_i}\left(x_{k,i}^{\sigma^2}(n) + (x_{k,i}^{\mu}(n) - v_i^{\mu}(n))^2\right)}{N_i} \qquad (12)$$

where
• $v_i^{\mu}(n)$ and $v_i^{\sigma^2}(n)$: the mean and variance of the cluster $i$ at iteration $n$
• $x_{k,i}^{\mu}(n)$ and $x_{k,i}^{\sigma^2}(n)$: the mean and variance of the $k$th datum in the cluster $i$ at iteration $n$
• $\eta$ and $N_i$: the learning gain and the number of data in the cluster $i$

Table 1 is a pseudocode of the GBFCM(DM).
Table 1. The GBFCM(DM) algorithm
Algorithm GBFCM(DM)
Procedure main()
    Read c, ε, m
        [c: the number of clusters; ε: a small convergence threshold;
         m: a weighting exponent, m ∈ (1, ∞)]
    Repeat
        While (input file is not empty)
            Read one datum x
            [Update GBFCM(DM) center mean]
            v^µ(n+1) = v^µ(n) − ηµ²(v^µ(n) − x^µ)
            [Update GBFCM(DM) membership grade]
            µ_i(x) = 1 / Σ_{j=1}^{c} (D(x, v_i) / D(x, v_j))²
            e := ‖v^µ(n+1) − v^µ(n)‖
        End while
        [Update GBFCM(DM) center variance]
        v_i^{σ²}(n+1) = Σ_{k=1}^{N_i} (x_{k,i}^{σ²}(n) + (x_{k,i}^µ(n) − v_i^µ(n))²) / N_i
        error := e
    Until (error < ε)
    Output µ_i, v^µ and v^{σ²}
End main()
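A hedged Python rendering of the same online update is sketched below: the divergence of Eq. (9) replaces the Euclidean distance, the memberships follow Eq. (10), and the means move by Eq. (11); the batch variance update of Eq. (12) over cluster members is left out for brevity. All names are ours.

```python
import numpy as np

def divergence(x_mu, x_var, v_mu, v_var):
    # Eq. (9): symmetric divergence between two diagonal GPDFs
    d2 = (x_mu - v_mu) ** 2
    return ((x_var + d2) / v_var + (v_var + d2) / x_var).sum()

def gbfcm_dm_step(x_mu, x_var, V_mu, V_var, eta):
    """One GBFCM(DM) step for a single GPDF datum (means updated in place)."""
    D = np.array([divergence(x_mu, x_var, mu, var)
                  for mu, var in zip(V_mu, V_var)])
    memb = 1.0 / ((D[:, None] / D[None, :]) ** 2).sum(axis=1)   # Eq. (10)
    V_mu -= eta * (memb ** 2)[:, None] * (V_mu - x_mu)          # Eq. (11)
    return memb
```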
5 Experiments and Results
The MPEG video traffic data sets considered in this paper are from the following internet site: http://www3.informatik.uni-wuerzburg.de/MPEG/. The data sets were prepared by O. Rose [7] and are compressed with MPEG-1, where the size of a GoP is 12 and the sequence is IBBPBBPBBPBB. 10 video streams are used for the experiments. Each video stream has 40,000 frames at 25 frames/sec. Table 2 shows the subjects of the 10 video streams. The video data prepared by Rose [7] have been analyzed and approximated with the statistical features of frames and GoP size using Gamma or Lognormal distributions by Manzoni et al. [10]. Later, Krunz et al. found that the Lognormal representation is the best matching method for I/P/B frames [13]. In this paper, the Lognormal representation of MPEG VBR video data is also employed for our experiments. For applying the divergence measure, the mean and variance of each of the 12 frames in the GoP are obtained, and each GoP is expressed as a 12-dimensional datum,
Table 2. MPEG VBR video used for experiments

MOVIE                                  SPORTS
"Jurassic Park" (Dino)                 ATP tennis final (Atp)
"The silence of the lambs" (Lambs)     Formula 1: GP Hockenheim 1994 (Race)
"Star Wars" (Star)                     Super Bowl 1995: Chargers-49ers (Sbowl)
"Terminator 2" (Term)                  Two 1993 World Cup matches (Soc1)
"a 1994 movie preview" (Movie2)        Two 1993 World Cup matches (Soc2)
Table 3. Classification results of Experiment 1 in False Alarm Rate (%)

# of Clusters    FCM     GBFCM    GBFCM(DM)
3                31.1    28.6     13.2
4                28.6    26.7     12.9
5                26.8    26.8     12.7
6                30.1    24.8     14.2
7                27.8    25.6     12.8

Table 4. Classification results of Experiment 2 in False Alarm Rate (%)

# of Clusters    FCM     GBFCM    GBFCM(DM)
3                17.4    15.8     10.4
4                21.2    16.5     10.1
5                20.5    15.2      9.9
6                18.8    22.1      9.9
7                18.4    17.6      9.9
where each dimension consists of the mean and variance values of one frame in the GoP. In our experiments, part of the available MPEG VBR video data is used for training the algorithms and the rest for the performance evaluation of the trained algorithms. The experiments are performed in two ways according to how the training data are chosen:
– Experiment 1: Dino from the movie data and Atp from the sports data
– Experiment 2: randomly chosen 10% of the data from each class
Since the Dino and Atp data have been thoroughly analyzed and successfully modeled with the GPDF, each of these data streams is selected as a training data set representing its class. Note that each MPEG VBR data set consists of 3,333 GPDF data. The test data sets used are the rest of the available MPEG VBR data in both experiments. Note that the training and test data sets in Experiment 2 are selected for 20 different cases in order to obtain unbiased results. The False Alarm Rate (FAR) is used as the performance measure [5]:

$$FAR = \frac{\text{number of misclassification cases}}{\text{total number of cases}}$$
For performance evaluation, the FCM and GBFCM are compared with the proposed GBFCM(DM). In the cases of the FCM and GBFCM, however, only the mean of each frame datum is used, while the GBFCM(DM) uses both the mean and the variance. In the experiments, each class of movie and sports is assigned several clusters because one cluster per class was found insufficient. The number of clusters for each class has been increased up to 7. Classification results for the experiments are given in Table 3 and Table 4. As can be seen from Table 3 and Table 4, the proposed GBFCM(DM) outperforms both the FCM and the GBFCM. This is a somewhat obvious result because the FCM and GBFCM do not utilize the variance information of the frame data while the GBFCM(DM) does. The divergence measure used in the GBFCM(DM) plays a very important role in the modeling and classification of MPEG VBR data streams. This result implies that the divergence measure makes it possible to utilize the research results obtained by researchers including Rose and Manzoni et al. [7,10,13].
6 Conclusions
An efficient clustering algorithm for the GPDF, called the Gradient-Based Fuzzy c-Means algorithm with Divergence Measure (GBFCM(DM)), is proposed in this paper. The proposed GBFCM(DM) employs the divergence measure as its distance measure and utilizes the spatial characteristics of MPEG VBR video data for MPEG data classification problems. When compared with conventional clustering and classification algorithms such as the FCM and GBFCM, the proposed GBFCM(DM) successfully finds clusters and classifies the MPEG VBR data modelled by 12-dimensional GPDFs. The results show that the GBFCM(DM) gives a 5-15% improvement in False Alarm Rate over the FCM and GBFCM, whose FARs range between 15.2% and 31.1% according to the number of clusters used. This result implies that the divergence measure used in the proposed GBFCM(DM) plays a very important role in the modeling and classification of MPEG VBR video data streams. Furthermore, the GBFCM(DM) can be used as a useful tool for clustering GPDF data.
Acknowledgement. This research was supported by the Korea Research Foundation (Grant # R052003-000-10992-0(2004)).
References
1. Pacifici, G., Karlsson, G., Garrett, M., Ohta, N.: Guest editorial: real-time video services in multimedia networks. IEEE J. Select. Areas Commun. 15 (1997) 961–964
2. Tsang, D., Bensaou, B., Lam, S.: Fuzzy-based rate control for real-time MPEG video. IEEE Trans. Fuzzy Syst. 6 (1998) 504–516
3. Tan, Y.P., Yap, K.H., Wang, L. (eds.): Intelligent Multimedia Processing with Soft Computing. Series: Studies in Fuzziness and Soft Computing, Vol. 168. Springer, Berlin Heidelberg New York (2005)
4. Dimitrova, N., Golshani, F.: Motion recovery for video content classification. ACM Trans. Inform. Syst. 13 (1995) 408–439
5. Liang, Q., Mendel, J.M.: MPEG VBR video traffic modeling and classification using fuzzy technique. IEEE Trans. Fuzzy Systems 9 (2001) 183–193
6. Patel, N., Sethi, I.K.: Video shot detection and characterization for video databases. Pattern Recog. 30 (1997) 583–592
7. Rose, O.: Statistical properties of MPEG video traffic and their impact on traffic modeling in ATM systems. Univ. Wurzburg, Inst. Comput. Sci. Rep. 101 (1995)
8. Park, D.C., Dagher, I.: Gradient Based Fuzzy c-means (GBFCM) algorithm. IEEE Int. Conf. on Neural Networks, ICNN-94 3 (1994) 1626–1631
9. Looney, C.: Pattern Recognition Using Neural Networks. Oxford University Press, New York (1997) 252–254
10. Manzoni, P., Cremonesi, P., Serazzi, G.: Workload models of VBR video traffic and their use in resource allocation policies. IEEE Trans. Networking 7 (1999) 387–397
11. Bezdek, J.C.: Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum, New York (1981)
12. Bezdek, J.C.: A convergence theorem for the fuzzy ISODATA clustering algorithms. IEEE Trans. Pattern Anal. Mach. Int. 2 (1980) 1–8
13. Krunz, M., Sass, R., Hughes, H.: Statistical characteristics and multiplexing of MPEG streams. Proc. IEEE Int. Conf. Comput. Commun., INFOCOM'95, Boston, MA 2 (1995) 445–462
14. Kohonen, T.: Learning Vector Quantization. Helsinki University of Technology, Laboratory of Computer and Information Science, Report TKK-F-A-601 (1986)
15. Dunn, J.C.: A fuzzy relative of the ISODATA process and its use in detecting compact well separated clusters. J. Cybern. 3 (1973) 32–57
16. Windham, M.P.: Cluster validity for the fuzzy c-means clustering algorithm. IEEE Trans. Pattern Anal. Mach. Int. 4 (1982) 357–363
17. Gokcay, E., Principe, J.C.: Information theoretic clustering. IEEE Trans. Pattern Anal. Mach. Int. 24 (2002) 158–171
18. Fukunaga, K.: Introduction to Statistical Pattern Recognition. 2nd edition, Academic Press (1990)
Fuzzy-C-Mean Determines the Principle Component Pairs to Estimate the Degree of Emotion from Facial Expressions

M. Ashraful Amin1, Nitin V. Afzulpurkar2, Matthew N. Dailey3, Vatcharaporn Esichaikul1, and Dentcho N. Batanov1

1 Department of CS & IM, 2 Department of MT & ME, Asian Institute of Technology, Thailand
[email protected], [email protected], {vatchara, batanov}@cs.ait.ac.th
3 Sirindhorn International Institute of Technology, Thammasat University, Thailand
[email protected]
Abstract. Although many systems exist for automatic classification of faces according to their emotional expression, these systems do not explicitly estimate the strength of given expressions. This paper describes and empirically evaluates an algorithm capable of estimating the degree to which a face expresses a given emotion. The system first aligns and normalizes an input face image, then applies a filter bank of Gabor wavelets and reduces the data's dimensionality via principal components analysis. Finally, an unsupervised Fuzzy-C-Mean clustering algorithm is employed recursively on the same set of data to find the best pair of principal components from the degree of alignment of the cluster centers on a straight line. The cluster memberships are then mapped to degrees of a facial expression (i.e., less happy, moderately happy, and very happy). In a test on 54 previously unseen happy faces, we find an orderly mapping of faces to clusters as the subject's face moves from a neutral to a very happy emotional display. Similar results are observed on 78 previously unseen surprised faces.
1 Introduction

A significant amount of research on facial expression recognition has been performed by researchers from multiple disciplines [1], [2], [3]. In this research, we build on existing systems by applying a fuzzy clustering technique not only to determine the category of a facial expression, but also to estimate its strength or degree. The clustering is also used to choose the best description of the faces in a reduced dimension. Very few researchers have considered the problem of estimating the degree or intensity of facial expressions. Kimura and Yachida [4] used the concept of a potential network on normalized facial images to recognize facial expressions and estimate their degree. Pantic and Rothkrantz [5] used Ekman's [1] FACS (Facial Action Coding System) to determine facial expressions and their intensity.
2 The Facial Expression Degree Estimation System

Our implementation of the facial expression recognition and degree estimation system involves four major steps (Fig. 1).
Fig. 1. The facial expression degree estimation system
2.1 Facial Data Acquisition

The training and testing data for our experimental facial expression recognition and degree estimation system are collected from the Cohn-Kanade AU-Coded Facial Expression Database [6].

2.2 Data Preprocessing

Two main image processing issues affect the recognition results: the brightness distribution of the facial images and the facial geometric correspondence required to keep face size constant across subjects. To meet these criteria, an affine transformation (rotation, scaling and translation) is used to normalize the face's geometric position, maintain face magnification invariance, and ensure that the gray values of each face have close geometric correspondence [7].
Fig. 2. Normalization of Face image using the affine-transformation
2.3 Facial Data Extraction

The Gabor wavelets, whose kernels are similar to the 2D receptive field profiles of the mammalian cortical simple cells, exhibit desirable characteristics of spatial locality and orientation selectivity, and are optimally localized in the space and frequency domains [8]. The Gabor wavelet (kernel, filter) can be defined as follows:
\psi_{\mu,\nu}(z) = \frac{\| k_{\mu,\nu} \|^2}{\sigma^2} \exp\!\left( -\frac{\| k_{\mu,\nu} \|^2 \| z \|^2}{2\sigma^2} \right) \left[ e^{\,i k_{\mu,\nu} \cdot z} - e^{-\sigma^2/2} \right]    (1)

where \mu and \nu define the orientation and scale of the Gabor kernel, z = (x, y), \|\cdot\| denotes the Euclidean norm operator, and the wave vector k_{\mu,\nu} is defined as

k_{\mu,\nu} = k_\nu e^{\,i\phi_\mu}    (2)

where k_\nu = k_{max}/f^\nu (here \nu = \{0, 1, \ldots, 4\}) and \phi_\mu = \pi\mu/8 (here \mu = \{0, 1, 2, \ldots, 7\}); k_{max} is the maximum frequency, and f is the spacing factor between kernels in the frequency domain [9]. We employ a lattice of phase-invariant filters at five scales, ranging between 16 and 96 pixels in width, and eight orientations, 0 to 7\pi/8.
Fig. 3. The magnitude of Gabor kernels at five different scales
Fig. 4. Gabor magnitude representation of the face from Figure-2
We set \sigma = 2\pi, k_{max} = \pi/2, and f = \sqrt{2}.

Principal Component Analysis (PCA): Even with sub-sampling, the dimensionality of our Gabor feature vector is too large to classify directly. Principal components analysis is a simple statistical method to reduce dimensionality while minimizing mean squared reconstruction error [11].
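For concreteness, a filter bank following Eqs. (1)-(2) with the parameter values above can be built as in the following sketch (NumPy; the kernel window size and the function name are our own illustrative choices, not from the paper):

```python
import numpy as np

def gabor_kernel(mu, nu, size=33, sigma=2*np.pi, k_max=np.pi/2, f=np.sqrt(2)):
    """Complex Gabor kernel per Eqs. (1)-(2).

    mu: orientation index in {0..7}; nu: scale index in {0..4}.
    """
    k = (k_max / f**nu) * np.exp(1j * np.pi * mu / 8)   # wave vector k_{mu,nu}, Eq. (2)
    kx, ky = k.real, k.imag
    k2 = kx**2 + ky**2
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    z2 = x**2 + y**2
    # Eq. (1): Gaussian envelope times a DC-free complex carrier
    envelope = (k2 / sigma**2) * np.exp(-k2 * z2 / (2 * sigma**2))
    carrier = np.exp(1j * (kx * x + ky * y)) - np.exp(-sigma**2 / 2)
    return envelope * carrier

# Filter bank: 5 scales x 8 orientations = 40 kernels
bank = [gabor_kernel(mu, nu) for nu in range(5) for mu in range(8)]
```

Convolving an image with each kernel and taking magnitudes yields the phase-invariant responses described above.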
Fig. 5. An ideal membership function for the degree of an emotion
2.4 Clustering and Classification
Fuzzy C-Means (FCM) [10] is one of the most commonly used fuzzy clustering techniques for degree estimation problems. Its strength over the famous k-means algorithm [11] is that, given an input point, it yields the point's membership value in each of the classes. In one dimension, we would expect the technique to yield a membership function as shown in Fig. 5. The aim of FCM is to find cluster centers (centroids) that minimize a dissimilarity function. The membership matrix U is randomly initialized subject to

\sum_{i=1}^{c} u_{ij} = 1, \quad \forall j = 1, \ldots, n    (3)
The dissimilarity function used in FCM is

J(U, c_1, c_2, \ldots, c_c) = \sum_{i=1}^{c} J_i = \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} d_{ij}^{2}    (4)

Here, u_{ij} \in [0, 1], c_i is the centroid of the i-th cluster, d_{ij} is the Euclidean distance between the i-th centroid and the j-th data point, and m \in [1, \infty) is a weighting exponent. Two conditions must hold to reach a minimum of the dissimilarity function; they are given in Equations (5) and (6):

c_i = \frac{\sum_{j=1}^{n} u_{ij}^{m} x_j}{\sum_{j=1}^{n} u_{ij}^{m}}    (5)

u_{ij} = \frac{1}{\sum_{k=1}^{c} \left( d_{ij}/d_{kj} \right)^{2/(m-1)}}    (6)
Pseudo code for Fuzzy-C-Means follows:
I. Randomly initialize the membership matrix (U) subject to the constraint in Equation (3).
II. Calculate the centroids (c_i) using Equation (5).
III. Compute the dissimilarity between centroids and data points using Equation (4). Stop if its improvement over the previous iteration is below a threshold.
IV. Compute a new U using Equation (6) and go to Step II.
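A compact implementation of this pseudo code might look as follows (a sketch in NumPy; the function name, tolerance and random initialization details are our own choices):

```python
import numpy as np

def fcm(X, c, m=2.0, tol=1e-5, max_iter=300, seed=None):
    """Fuzzy C-Means per Eqs. (3)-(6). X: (n, d) data; c: number of clusters."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                       # Eq. (3): columns sum to 1
    prev_J = np.inf
    for _ in range(max_iter):
        Um = U ** m
        centroids = (Um @ X) / Um.sum(axis=1, keepdims=True)          # Eq. (5)
        d = np.linalg.norm(X[None, :, :] - centroids[:, None, :], axis=2)
        J = np.sum(Um * d ** 2)                                       # Eq. (4)
        if abs(prev_J - J) < tol:
            break
        prev_J = J
        d = np.fmax(d, 1e-12)                # avoid division by zero
        # Eq. (6): u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        U = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2 / (m - 1)), axis=1)
    return centroids, U
```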
3 Results and Observations

3.1 Neutral-Happy Faces
We applied the above-mentioned steps to 945 facial images, classified as neutral-happy sequences portrayed by 50 actors. Initially we divided the data into two random groups for the PCA stage: 200 faces are used to compute the covariance matrix, and the remaining 745 faces are projected onto this covariance matrix's principal components. Out of these 745 faces we held out 4 randomly selected subjects (54 faces) to test our clustering approach. We used FCM to cluster the remaining 691 faces into 3 fuzzy clusters. In Fig. 6 we present the scatter plots of the first three principal-component-pair clusterings, where the cluster centers are the black dots. In the clustering process we used different combinations of principal components. We considered the principal components in groups of two in sequence: (1st, 2nd), (2nd, 3rd), (3rd, 4th), (4th, 5th) and so on, in different combinations, until we obtained a satisfactory result. Later in this section we describe what a satisfactory result means.
Fig. 6. Plots for the principal component pairs (1st, 2nd), (2nd, 3rd), (3rd, 4th) of Neutral-Happy faces
Fig. 7. Neutral-Happy photo sequence of Test Example-1; Plot of membership values
Fig. 8. Neutral-Happy photo sequence of Test Example-2; Plot of membership values
Neutral-Happy facial sequences for two individual subjects are presented in Fig. 7 and Fig. 8, along with the membership values for each facial image, projected using the (3rd, 4th) principal component pair. A few interesting points in these two figures should be noticed. One is that the membership function is similar to the ideal trapezoidal fuzzy membership function given in Fig. 5. The other observation is that, when we use the winner-take-all criterion to assign the absolute membership, the first person moves slowly to the maximum intensity, while the second individual remains longer at the maximum intensity (Table 1). Similar results are presented in Fig. 9 (due to space constraints the image sequences are skipped). The winner-take-all strategy is applied to the fuzzy membership values of all 4 subjects for the (3rd, 4th) principal component pair, and the result is provided in Table 1. If any face is classified as "less" after faces earlier in the sequence have been classified as "medium", we consider this an uneven assignment of classes; our results contain no such inversions, so the desired sequencing for a fuzzy clustering is followed both in the fuzzy membership representation and in the absolute class assignment. The test subjects can be viewed as trajectories plotted in the space of the 3rd and 4th principal components together with the cluster centers. Notice in Fig. 10 that the individuals' faces move from near one cluster center toward the other cluster centers. This again shows that the system correctly captures the fuzzy characteristic embedded in the degree estimation of facial expressions.
Test Subject-4
1.2
1.2
1
1
0.8
Less Happy
0.6
Medium Happy Very Happy
0.4 0.2
0.8
Less Happy
0.6
Medium Happy Very Happy
0.4 0.2
0
0
1
2
3
4
5
6
7
8
9
10 11 12 13 14 15
1
2
3
4
5
6
7
8
9
10
11
12
Fig. 9. Plot of Neutral-Happy sequence of Test Example-3 (left) & Example-4 (right) membership values
Table 1. Absolute membership assigned to each image for the Neutral-Happy sequences

Subject          Color in Fig. 10   Less Happy (LH)   Medium Happy (MH)   Very Happy (VH)
Test Subject-1   Green              1-6               7-9                 10-13
Test Subject-2   Blue               1-3               4-6                 7-14
Test Subject-3   Red                1-8               9-10                11-15
Test Subject-4   Black              1-5               6-9                 10-12
Fig. 10. Neutral-Happy (3rd, 4th principal components) trajectories of the 4 test individuals
3.2 Neutral-Surprise Faces
We applied similar steps to 1173 facial images, classified as Neutral-Surprise sequences portrayed by 63 actors. This time 209 faces are kept for PCA. The remaining 964 faces are projected onto the covariance matrix's principal components. Out of these 964 faces we held out 4 randomly selected subjects (72 faces) to test our clustering approach. The cluster centers are calculated using the other 892 faces (Fig. 11). The best clustering is achieved for the (2nd, 3rd) principal components. In Fig. 12 and Fig. 13, notice that the membership curve is an x-curve rather than a trapezoidal one. But also notice that it still follows the concept of fuzzy class assignment, as only two consecutive classes are present for one individual.
Fig. 11. Plots for the (1st, 2nd), (2nd, 3rd), (3rd, 4th) principal components of Neutral-Surprise faces
Fig. 12. Neutral-Surprise photo sequence of Test Example-1; Plot of membership values
Fig. 13. Neutral-Surprise photo sequence of Test Example-2; Plot of membership values
Fig. 14. Plot of Neutral-Surprise sequence of Test Example-3 (left) & Example-4 (right) membership values
Table 2. Absolute membership assigned to each image for the Neutral-Surprise sequences

Subject          Color in Fig. 15   Less Surprised (LS)   Medium Surprised (MS)   Very Surprised (VS)
Test Subject-1   Green              1-6                   7-10                    11-16
Test Subject-2   Blue               1-17                  18-30                   -----
Test Subject-3   Red                -----                 1-17                    18-23
Test Subject-4   Black              1-3                   4-6                     7-13
Notice in Table 2 that for Subject-2 the Very Surprised category is empty, and for the 3rd subject the Less Surprised category is empty, which is also evident from the membership curves. Moreover, the result is projected more clearly in Fig. 15; notice there that two subjects end their trajectories substantially earlier.
Fig. 15. Neutral-Surprise (2nd, 3rd principal components) trajectories of the 4 test individuals
3.3 Best Cluster Criterion
From the above examples and evidence it is clear that our proposed system works satisfactorily. An interesting point, however, is how to find the principal components that best capture the fuzziness of the data. As we have seen, the best principal component pair differs across emotions (Happy: 3rd, 4th; Surprised: 2nd, 3rd), so we had to check as many combinations as possible. The best pair of principal components is chosen by a minimum distance criterion (MDC): for every combination of principal component pairs, we cluster the data using the same initial conditions and record the distance of the middle cluster center from the midpoint of the line connecting the other two centers. This is given by the following equation:

D = \sqrt{\left( x_2 - \frac{x_1 + x_3}{2} \right)^2 + \left( y_2 - \frac{y_1 + y_3}{2} \right)^2}    (7)
where (x_1, y_1), (x_2, y_2) and (x_3, y_3) are the cluster centers and D is the distance of the middle cluster center from the midpoint of the line connecting the other two.
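A direct transcription of Eq. (7) (assuming the square root lost in typesetting; the function name is illustrative):

```python
import numpy as np

def mdc(centers):
    """Distance of the middle cluster center from the midpoint of the line
    connecting the other two (Eq. 7). centers: array of shape (3, 2), with
    the middle cluster at index 1. The pair with minimum D is selected.
    """
    (x1, y1), (x2, y2), (x3, y3) = centers
    return np.hypot(x2 - (x1 + x3) / 2, y2 - (y1 + y3) / 2)
```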
4 Conclusion and Future Work

We have shown that fuzzy clustering is a promising approach to estimating the degree of intensity of a facial expression, when a face is characterized with Gabor kernels and projected into a low-dimensional space using PCA. The best results are achieved when the (3rd, 4th) and (2nd, 3rd) principal components are used to describe the Neutral-Happy and Neutral-Surprise faces, respectively. The most suitable principal components are selected by the minimum distance criterion (MDC). Satisfactory results are observed throughout the experiments. Presently we are experimenting with other prototypic emotions using the same approach, and in the future we will expand this to more sophisticated facial expressions, which are known to be hard to characterize with computers.
References
1. Ekman, P., Friesen, W.: The Facial Action Coding System. Consulting Psychologists Press, San Francisco, USA (1978)
2. Pantic, M., Rothkrantz, L.J.M.: Automatic analysis of facial expressions: the state of the art. IEEE Trans. Pattern Analysis and Machine Intelligence 22 (2000) 1424-1445
3. Cowie, R., Douglas-Cowie, E., Tsapatsoulis, N., Votsis, G., Kollias, S., Fellenz, W., Taylor, J.G.: Emotion recognition in human-computer interaction. IEEE Signal Processing Magazine 18 (2001) 32-80
4. Kimura, S., Yachida, M.: Facial expression recognition and its degree estimation. Proc. Computer Vision and Pattern Recognition (1997) 295-300
5. Pantic, M., Rothkrantz, L.J.M.: An expert system for recognition of facial actions and their intensity. Proc. 12th International Conference on Innovative Applications of Artificial Intelligence (2000) 1026-1033
6. http://vasc.ri.cmu.edu/idb/html/face/facial_expression/, 15 May 2004
7. Jain, A.K.: Fundamentals of Digital Image Processing. Prentice-Hall of India, New Delhi, India (2003)
8. Daugman, J.G.: Complete discrete 2-D Gabor transforms by neural networks for image analysis and compression. IEEE Trans. Acoustics, Speech and Signal Processing 36 (1988) 1169-1179
9. Lee, T.S.: Image representation using 2D Gabor wavelets. IEEE Trans. PAMI 18 (1996) 959-971
10. Höppner, F., Klawonn, F., Kruse, R., Runkler, T.: Fuzzy Cluster Analysis. Wiley (1999)
11. Dubes, R.C.: Cluster analysis and related issues. In: Chen, C.H., Pau, L.F., Wang, P.S.P. (eds.): Handbook of Pattern Recognition & Computer Vision. World Scientific, River Edge, NJ, 3-32
An Improved Clustering Algorithm for Information Granulation

Qinghua Hu and Daren Yu

Harbin Institute of Technology, Harbin, China, 150001
[email protected]

Abstract. C-means clustering is a popular technique for classifying unlabeled data into different categories. Hard c-means (HCM), fuzzy c-means (FCM) and rough c-means (RCM) were proposed for various applications. In this paper a fuzzy rough c-means algorithm (FRCM) is presented, which integrates the advantages of fuzzy set theory and rough set theory. Each cluster is represented by a center, a crisp lower approximation and a fuzzy boundary. The area of a lower approximation is controlled by a threshold T, which also influences the fuzziness of the final partition. The analysis shows that the proposed FRCM achieves a trade-off between convergence and speed relative to HCM and FCM. FRCM degrades to HCM or FCM as the parameter T changes. One advantage of the proposed algorithm is that the memberships in the clustering results coincide with human perception, which gives the method a potential application in understandable fuzzy information granulation.
1 Introduction

In essence, an information granule is a cluster of physical or mental objects drawn together by indistinguishability, similarity, proximity or functionality (Zadeh, 1997). Information granulation, constraint representation and constraint propagation are the main aspects of computing with words. Information granulation, the point of departure of computing with words, plays an important role in these techniques and in the computational theory of perceptions (Zadeh, 1999). Generally speaking, there are two kinds of information granules: crisp and fuzzy. Although crisp granules are the foundation of a variety of techniques, such as interval analysis, rough set theory and D-S theory, they fail to reflect the fact that the granules are fuzzy rather than crisp in most perception information and human reasoning. In human cognition, fuzziness is the direct consequence of the fuzziness of perceptions of similarity. It is entailed by the finite capability of the human mind and sensory organs to resolve detail and store information. At the same time, fuzzy information granulation underlies the remarkable human ability to make rational decisions under imprecision, partial knowledge, partial certainty and partial truth. Fuzzy information granulation plays an important role in fuzzy logic, computing with words and the computational theory of perceptions.
Partitioning a given set of unlabeled data into granules is one of the most fundamental problems in pattern recognition and data mining. C-means clustering algorithms are a series of popular techniques for finding structure in unlabeled sample sets. A classical clustering algorithm introduced three decades ago [1], called Hard C-Means (HCM), assigns each sample a label according to the nearest neighbor principle and minimizes the within-cluster distance. Fuzzy C-Means (FCM), a generalization of HCM introduced by Dunn [2] and generalized by Bezdek [3], is one of the most well-known techniques in clustering analysis. The main difference between HCM and FCM is the weighting exponent m, which is used to control the fuzziness. FCM has better global convergence but slower speed. Basically, the performance of FCM clustering depends on some parameters, such as the fuzziness weighting exponent, the number of clusters [4, 5, 6] and the initialization. Rough set theory [7] is a new paradigm to deal with uncertainty, vagueness and incompleteness and has been applied to fuzzy rule extraction [8], reasoning with uncertainty [9] and fuzzy modeling. Rough set theory addresses indiscernibility in classification according to some similarity, whereas fuzzy set theory characterizes vagueness in linguistic variables. They are complementary in some aspects, and combining fuzzy sets and rough sets has been an important direction in reasoning with uncertainty [10, 11]. Pawan Lingras [12] introduced a new clustering method, called rough k-means, for mining web-user patterns. The algorithm describes a cluster by a center and a pair of lower and upper approximations, and the lower and upper approximations are weighted with different parameters in computing the new centers. S. Asharaf et al. extended the technique to a Leader-based one [13], which does not require the user to specify the number of clusters. In this paper, a fuzzy rough c-means clustering algorithm is presented. It combines two soft computing techniques, rough set theory and fuzzy set theory. Each cluster is represented by a center, a crisp lower approximation and a fuzzy boundary, and a new center is a weighted average of the lower approximation and the boundary. The rest of the paper is organized as follows: Section 2 reviews HCM and FCM; Section 3 reviews rough sets and rough c-means; Section 4 presents the fuzzy rough clustering algorithm; Section 5 gives numeric experiments; Section 6 concludes.
2 Review on HCM and FCM

C-means clustering is defined over a real-number space and computes its centers iteratively by minimizing an objective function. Given a data set X = \{x_1, x_2, \ldots, x_N\} \subset R^s, let v = \{v_1, v_2, \ldots, v_c\} be the c cluster centers, v_i \in R^s, 2 \le c \le N, and let u = \{u_{ik}\}_{c \times N} \in M_{fcn} be a membership matrix. C-means partitions the N data points into c clusters C_i (i = 1, 2, \ldots, c). The objective function is

J(u, v) = \sum_{k=1}^{N} \sum_{i=1}^{c} u_{ik} \, \| x_k - v_i \|^2,

where \sum_{i=1}^{c} u_{ik} = 1 and u_{ik} \in \{0, 1\} for hard c-means clustering.
The c-means algorithm calculates cluster centers iteratively as follows:

1. Initialize the centers c^l using random sampling;
2. Decide the membership of the patterns in one of the c clusters according to the nearest-neighbor principle from the cluster centers;
3. Calculate the new centers c^{l+1} as

v_i^{l+1} = \frac{\sum_{k=1}^{N} u_{ik}^{l+1} x_k}{\sum_{k=1}^{N} u_{ik}^{l+1}}, \quad u_{ik}^{l+1} = \begin{cases} 1, & i = \arg\min_i \| x_k - v_i^l \| \\ 0, & \text{otherwise} \end{cases}

4. If \max_i \| v_i^{l+1} - v_i^l \| < \varepsilon, end; otherwise go to step 2.

The hard c-means algorithm is intuitive, easy to implement and converges quickly. However, its results depend on the cluster center initialization because it will stop at a local minimum. The most popular clustering algorithm, FCM, was proposed by Dunn and generalized by Bezdek in 1981. In this technique each pattern does not belong to one cluster definitely, but is a member of all clusters with a membership value. Here the objective function is defined as

J_m(u, v) = \sum_{k=1}^{N} \sum_{i=1}^{c} (u_{ik})^m \| x_k - v_i \|^2

where \sum_{i=1}^{c} u_{ik} = 1, u_{ik} \in [0, 1], and m \in (1, +\infty). In order to minimize the objective function, the new centers and memberships are calculated as follows:

v_i = \frac{\sum_{k=1}^{N} (u_{ik})^m x_k}{\sum_{k=1}^{N} (u_{ik})^m}, \quad i = 1, 2, \ldots, c

u_{ik} = \frac{1}{\sum_{j=1}^{c} \left( d_{ik}/d_{jk} \right)^{2/(m-1)}}, \quad i = 1, 2, \ldots, c; \; k = 1, 2, \ldots, N
The performance of FCM depends on the selection of the fuzziness weighting exponent m. On one side, FCM degrades to HCM when m = 1; on the other side, FCM will give the mass center of all the data when m → ∞. Bezdek suggested that the preferred value of m is in the range 1.5-2.5. FCM usually stops at a local minimum or saddle point, and the result changes with the initialization. Many variants of FCM have been proposed for all kinds of applications in the last decade: different metrics [14], objective functions [15] and constraint conditions on memberships and centers each lead to a new clustering algorithm.
3 Rough Sets and Rough C-Means

Rough set methodology has witnessed great success in modeling imprecise and incomplete information. The basic idea of this method hinges on classifying objects of discourse into clusters containing indiscernible objects with respect to some attributes. In this section we review some basic definitions of rough set theory. A 4-tuple <U, A, V, f> is called an information system (IS), where U = \{x_1, x_2, \ldots, x_n\} is the universe; A is a family of attributes, called knowledge on the universe; V is the value domain of A; and f is an information function f : U \times A \to V. Any subset B of knowledge A defines an indiscernibility (equivalence) relation IND(B) on U and generates a partition \Pi_B of U, where

IND(B) = \{(x, y) \in U \times U \mid \forall a \in B, \; f_a(x) = f_a(y)\}
\Pi_B = U/B = \otimes \{a \in B : U/IND(a)\}

where A \otimes B = \{X \cap Y : \forall X \in A, \forall Y \in B, X \cap Y \neq \emptyset\}. An indiscernibility relation on U generates a partition of U, and a partition of U is necessarily induced by an indiscernibility relation; indiscernibility relations and partitions correspond one to one. We denote the equivalence classes induced by attribute set B as \Pi_B = U/B = \{[x_i]_B : x_i \in U\}.
Knowledge B induces the elemental concepts [x_i]_B of the universe, which are used to approximate arbitrary subsets of U. We say \Pi_A is a refinement of \Pi_B if there is a partial ordering

\Pi_A \prec \Pi_B \iff \forall [x_i]_A \in \Pi_A, \; \exists [x_j]_B : [x_i]_A \subseteq [x_j]_B
An arbitrary subset X of U is characterized by a two-tuple \langle \underline{B}X, \overline{B}X \rangle, called the lower approximation and upper approximation, respectively:

\underline{B}X = \cup \{[x_i]_B \mid [x_i]_B \subseteq X\}
\overline{B}X = \cup \{[x_i]_B \mid [x_i]_B \cap X \neq \emptyset\}

The lower approximation is the greatest union of [x_i]_B contained in X, and the upper approximation is the least union of [x_i]_B containing X. The lower approximation is sometimes also called the positive region, denoted POS_B(X); correspondingly, the negative region is defined as NEG_B(X) = U - \overline{B}X. If \underline{B}X = \overline{B}X, we say that set X is definable; otherwise X is indefinable, or a rough set. BN_B(X) = \overline{B}X - \underline{B}X is called the boundary set. A set X in U is definable if it is
composed of some elemental concepts, in which case X can be precisely characterized with respect to knowledge B and BN_B(X) = \emptyset. According to the definitions of the lower and upper approximations, [x_i]_B belonging to the lower approximation means that all of the objects in [x_i]_B are contained in X definitely, while [x_j]_B belonging to the upper approximation of X means that the objects in [x_j]_B are probably contained, based on knowledge B. Knowledge B thus classifies the universe into three regions with respect to a certain object subset X: the lower approximation, the boundary region and the negative region. There are some elementary properties in rough set theory. Given an information system <U, A, V, f>, B \subseteq A, U/B = \{X_1, X_2, \ldots, X_c\}:

1. \forall x \in U: x \in \underline{B}X_i \Rightarrow x \notin \underline{B}X_j, \; j = 1, 2, \ldots, c; \; j \neq i
2. \forall x \in U: x \in \underline{B}X_i \Rightarrow x \in \overline{B}X_i, \; i = 1, 2, \ldots, c
3. \forall x \in U: \text{if } \forall i, \; x \notin \underline{B}X_i, \text{ then } \exists X_k, X_l : x \in \overline{B}X_k \text{ and } x \in \overline{B}X_l
Property 1 shows that an object can be part of at most one lower approximation; property 2 means that objects belonging to a lower approximation are necessarily contained in the corresponding upper approximation; and property 3 shows that if an object is not part of any lower approximation, it must belong to at least two upper approximations. The key problem in incorporating rough set theory into c-means clustering is how to define the lower and upper approximations in real space. Assuming the lower and upper approximations have been found, the modified centroid is given by

v_j = \begin{cases} \omega_{lower} \times \dfrac{\sum_{x \in \underline{A}X} x_j}{|\underline{A}X|} + \omega_{upper} \times \dfrac{\sum_{x \in \overline{A}X - \underline{A}X} x_j}{|\overline{A}X - \underline{A}X|}, & |\overline{A}X - \underline{A}X| \neq 0 \\ \omega_{lower} \times \dfrac{\sum_{x \in \underline{A}X} x_j}{|\underline{A}X|}, & \text{otherwise} \end{cases}
where \omega_{lower} and \omega_{upper} correspond to the relative importance of the lower approximation and the boundary. It is easy to see that the above formula is a generalization of HCM; in particular, when |\overline{A}X - \underline{A}X| = 0, RCM degenerates to HCM. Let us design the criteria to determine whether an object belongs to the lower approximation, the boundary or the negative region. For each object x and center point v, D(x, v) is the distance from x to v. The differences between D(x, v_i) and D(x, v_j) are used to determine the label of x. Let D(x, v_j) = \min_{1 \le i \le c} D(x, v_i) and T = \{\forall i, i \neq j : |D(x, v_i) - D(x, v_j)| \le \text{Threshold}\}.

If T \neq \emptyset, then x \in \overline{A}(v_i), x \in \overline{A}(v_j) and x \notin \underline{A}(v_l), \; l = 1, 2, \ldots, c.
If T = \emptyset, then x \in \underline{A}(v_j) and x \in \overline{A}(v_j).
The performance of the rough c-means algorithm depends on five factors: the weighting indices \omega_{lower} and \omega_{upper}, the Threshold, the number of clusters c, and the initialization of the c centers. It is therefore difficult for users to tune the algorithm in practice. We present fuzzy rough c-means (FRCM) to overcome this problem.
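As a concrete reading of the RCM center update above, here is a minimal sketch; the weight values are illustrative rather than prescribed by the paper, and when the boundary is empty we return the plain lower-approximation mean, the conventional RCM choice:

```python
import numpy as np

def rcm_centroid(X, lower_idx, boundary_idx, w_lower=0.7, w_upper=0.3):
    """Rough c-means center update for one cluster.

    X: (n, d) data; lower_idx / boundary_idx: index arrays for the cluster's
    lower approximation and boundary region (upper minus lower approximation).
    """
    lower_mean = X[lower_idx].mean(axis=0)
    if len(boundary_idx) == 0:
        return lower_mean                       # boundary empty: HCM-like update
    boundary_mean = X[boundary_idx].mean(axis=0)
    return w_lower * lower_mean + w_upper * boundary_mean
```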
4 Fuzzy Rough Clustering

As we know, HCM assigns a label to an object definitely: the membership value is 0 or 1. FCM maps memberships over the range 0 to 1: each object belongs to some or all of the clusters to some fuzzy degree. RCM classifies the object space into three parts (lower approximation, boundary and negative region), and different weighting values are then used for each part in computing the new centers. In RCM all the objects in a lower approximation take the same weight, and all the objects in the boundary uniformly take another weighting index. In fact, the objects in boundary regions have different influences on the centers and clusters, so different weights should be imposed on different objects. How can this be done in computation? The lower approximation is the object subset that belongs to a cluster without doubt, while the boundary is the object subset that may belong to the cluster; the boundary is thus the region where labels are assigned with uncertainty, and fuzzy memberships should be imposed on the objects in the boundary. Here we define the membership function as

u_{ik} = \begin{cases} 1, & x_k \in \underline{A}(v_i) \\[1mm] \dfrac{1}{\sum_{j=1}^{c} (d_{ik}/d_{jk})^{2/(m-1)}}, & x_k \in \overline{A}(v_i) \end{cases} \quad i = 1, 2, \ldots, c; \; k = 1, 2, \ldots, N.
It is worth noting that this membership function is constructed rather than derived from the objective function, so it is not strict; however, this has little influence on performance. The new centers are then calculated by

v_i = \frac{\sum_{k=1}^{N} (u_{ik})^m x_k}{\sum_{k=1}^{N} (u_{ik})^m}, \quad i = 1, 2, \ldots, c.
The objective function used is

J_m(u, v) = \sum_{k=1}^{N} \sum_{i=1}^{c} (u_{ik})^m \| x_k - v_i \|^2

The lower and upper approximations are then defined the same as in RCM. For each object x and center point v, D(x, v) is the distance from x to v. The differences between D(x, v_i) and D(x, v_j) are used to determine the label of x. Let D(x, v_j) = \min_{1 \le i \le c} D(x, v_i) and T = \{\forall i, i \neq j : |D(x, v_i) - D(x, v_j)| \le \text{Threshold}\}.
1. If T \neq \emptyset, then x \in \overline{A}(v_i), x \in \overline{A}(v_j) and x \notin \underline{A}(v_l), \; l = 1, 2, \ldots, c.
2. If T = \emptyset, then x \in \underline{A}(v_j) and x \in \overline{A}(v_j).

It is worth noting that these definitions of lower and upper approximations differ from the classical ones: they are not based on any predefined indiscernibility relation on the universe. It is also remarkable that the distance metric is not limited to the Euclidean distance; the 1-norm, 2-norm, p-norm, infinity norm and other measures can be applied. FRCM can be formulated as follows:

Input: unlabeled data set; number of clusters c, threshold T, exponent index m, stop criterion \varepsilon.
Output: membership matrix U.

Step 1. Let l = 0, J_m^{(0)}(u, v) = 0; randomly initialize a membership matrix U_{c \times N}^{l};
Step 2. Compute the c centers v_i^{(l)}, i = 1, 2, \ldots, c, with U_{c \times N}^{l} and the data set;
Step 3. Compute \underline{A}(v_i^{(l)}), \overline{A}(v_i^{(l)}) with v_i^{(l)}, i = 1, 2, \ldots, c, and Threshold T, then compute u_{ij}^{(l+1)} and v_i^{(l+1)}, i = 1, 2, \ldots, c;
Step 4. Compute J_m^{(l+1)}(u, v);
Step 5. If \| J_m^{(l+1)}(u, v) - J_m^{(l)}(u, v) \| < \varepsilon, stop; otherwise let l = l + 1 and go to Step 2.
(l +1) Step 4. Compute J m (u, v) ; (l +1) (l ) Step 5. || J m (u, v) − J m (u , v) ||< ε , then stop, otherwise, l = l + 1 , go to step 2.
The difference between FCM and FRCM is that the membership values of objects in lower approximation are 1, while those in boundary region are the same as FCM. In other word, FRCM first partitions the data into two classes: lower approximation and boundary. Only the objects in boundary are fuzzified. The difference between RCM and FRCM is that, as for RCM the same weights are imposed on all of the objects in lower approximations and boundary regions, respectively. However, as to FCM, each object in boundary region is imposed a distinct weight, respectively.
5 Numeric Experiments To illustrate the differences of these algorithms, let’s look at the membership function in the one-dimensional space for a two-cluster problem. The membership functions are showed in figure 1. Comparing figure 1, we can find the membership generated by FRCM clustering is much similar with human conceptions. For example, we need to cluster some men into two groups according their ages. HCM and RCM will present two crisp sets of the men, which cannot reflect the fuzziness in human reasoning. Although FCM will give two fuzzy classes, but the memberships of the fuzzy sets are not understandable. As to a one year old baby, his memberships to young and old are nearly identical and close to 0.5. In fact, it’s unreasonable and unacceptable according to my intuition. Fuzzy rough c-means algorithm is implemented and applied to some artificial data. Three clusters of data are generated randomly. There are one hundred points in each
An Improved Clustering Algorithm for Information Granulation
501
cluster and the data points are satisfied a normal distribution. The data and result of HCM are showed in figure 2.
(A) 1 0.8 0.6 0.4 0.2 0
(B) 1 0.8 0.6 0.4 0.2 0
(C) 1 0.8 0.6 0.4 0.2 0
(D) 1 0.8 0.6 0.4 0.2 0
(E) Fig. 1. (A) shows some data distributed in one-dimensional space; (B) is the membership function with HCM; (C) is the membership function with RCM; (D) is function with FCM and (E) is with FRCM. It’s easy to see that HCM gets a crisp boundary line; RCM produces a boundary region; FCM and FRCM develop a soft boundary region.
502
Q. Hu and D. Yu
Figure 3 shows the clustering results with different thresholds. And the comparison is presented in table 1. We can find the iteration number of HCM is the fastest. And global convergence, namely objective function, of FCM is the best. FRCM make a tradeoff between speed and convergence. And with the change of threshold FRCM has a similar performance as HCM or FCM. 3 2.5 2 1.5 1 0.5 0
0
0.5
1
1.5
2
Fig. 2. Result of HCM 3
3
2.5
2.5 2
2
1.5
1.5
1
1
0.5
0.5 0
0
0
0.5
1
1.5
0
Figure 3 (A) Threshold=0.25 3
2.5
2.5
2
2
1.5
1.5
1
1
0.5
0.5
0
0.5
1
1.5
1
1.5
2
Figure 3 (C) Threshold=0.75
0
0
0.5
1
1.5
Figure 3 (D) Threshold=1
Fig. 3. The results with different Threshold from FRCM Table 1. Comparison of HCM, FCM and FRCM HCM Iter. count obj. fcn Center1 Center2 Center3
6 7.0589 0.4487,0.5200 0.4919,2.4553 1.4868, 1.5083
2
Figure 3 (B) Threshold=0.5
3
0
0.5
2
FCM 16 5.6172 0.4272,0.5153 0.4675,0.4699 1.4873,1.5054
FRCM T=0.25 11 6.5679 0.4592, 0.5002 0.4953, 2.4701 1.4792, 1.5097
T=10 16 5.6172 0.4272, 0.5153 0.4675, 2.4699 1.4873, 1.5054
2
6 Conclusion

Combining two soft computing methods, a fuzzy rough c-means clustering algorithm (FRCM) is proposed in this paper. FRCM characterizes each class with a positive region, a fuzzy boundary region and a negative region. According to rough set methodology, the objects in positive and negative regions are definitely contained or not contained by the class; only the objects in boundary regions are fuzzy and unclassified, so a soft separating hyperplane is required in these regions. A new membership function is constructed based on the idea that membership should be assigned only to the classes whose centers are near the data point. What's more, the size of the fuzzy boundary regions can be controlled with a proportional index, which controls the fuzziness of the clustering results. Because the clustering results are similar to human perception, this algorithm is applicable to fuzzy information granulation and computing with words.
References
1. MacQueen, J.: Some methods for classification and analysis of multivariate observations. In: Proc. 5th Berkeley Symposium on Mathematical Statistics and Probability (1967) 281-297
2. Dunn, J.C.: Some recent investigations of a new fuzzy partition algorithm and its application to pattern classification problems. J. Cybernetics 4 (1974) 1-15
3. Bezdek, J.C.: Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum, New York (1981)
4. Pal, N.R., Bezdek, J.C.: On cluster validity for the fuzzy c-means model. IEEE Transactions on Fuzzy Systems 3 (1995) 370-379
5. Yu, J., Cheng, Q., Huang, H.: Analysis on weighting exponent in the FCM. IEEE Transactions on SMC, Part B: Cybernetics 31 (2004) 634-639
6. Khan, S.S., Ahmad, A.: Cluster center initialization algorithms for k-means clustering. Pattern Recognition Letters 25 (2004) 1293-1302
7. Pawlak, Z.: Rough Sets: Theoretical Aspects of Reasoning about Data. Kluwer Academic Publishers (1991)
8. Jensen, R., Shen, Q.: Fuzzy-rough attribute reduction with application to web categorization. Fuzzy Sets and Systems 141 (2004) 469-485
9. Wang, Y.-F.: Mining stock price using fuzzy rough set system. Expert Systems with Applications 24 (2003) 13-23
10. Dubois, D., Prade, H.: Putting fuzzy sets and rough sets together. In: Slowinski, R. (ed.): Intelligent Decision Support (1992) 203-232
11. Wu, W., Zhang, W.: Constructive and axiomatic approaches of fuzzy approximation operators. Information Sciences 159 (2004) 233-254
12. Lingras, P., West, C.: Interval set clustering of web users with rough k-means. International Journal of Intelligent Information Systems 23 (2003) 5-16
13. Asharaf, S., Murty, M.N.: A rough fuzzy approach to web usage categorization. Fuzzy Sets and Systems 148 (2004) 119-129
14. Hathaway, R.J., Bezdek, J.C.: C-means clustering strategies using Lp norm distances. IEEE Transactions on Fuzzy Systems 8 (2000) 576-582
15. Li, R.P., Mukaidono, M.: A maximum entropy approach to fuzzy clustering. In: Proc. of the 4th IEEE Conf. on Fuzzy Systems (1995) 2227-2232
16. Zadeh, L.A.: Fuzzy logic = computing with words. IEEE Transactions on Fuzzy Systems 4 (1996) 103-111
17. Zadeh, L.A.: Toward a theory of fuzzy information granulation and its centrality in human reasoning and fuzzy logic. Fuzzy Sets and Systems 90 (1997) 111-127
18. Zadeh, L.A.: From computing with numbers to computing with words: from manipulation of measurements to manipulation of perceptions. IEEE Transactions on Circuits and Systems 46 (1999) 105-119
A Novel Segmentation Method for MR Brain Images Based on Fuzzy Connectedness and FCM

Xian Fan, Jie Yang, and Lishui Cheng

Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, 200030, Shanghai, P.R. China
[email protected]

Abstract. Image segmentation is an important research topic in the image processing and computer vision community. In this paper, a new unsupervised method for MR brain image segmentation is proposed based on fuzzy c-means (FCM) and fuzzy connectedness. FCM is a widely used unsupervised clustering algorithm for pattern recognition and image processing problems. However, FCM does not consider the spatial coherence of images and is sensitive to noise. On the other hand, the fuzzy connectedness method has achieved good performance for medical image segmentation; however, in the computation of fuzzy connectedness, one needs to select seeds manually, which is elaborate and time-consuming. Our new method uses FCM as a first step to select salient seed points and then applies the fuzzy connectedness algorithm based on those seeds. Thus our method achieves unsupervised automatic segmentation of brain MR images. Experiments on simulated and real data sets prove it is effective and robust to noise.
1 Introduction

Image processing techniques are important in medical engineering, for tasks such as 3D visualization, registration and fusion, and operation planning. In image processing, object extraction, or image segmentation, is the most crucial step because it lays the foundation for subsequent steps such as object visualization, manipulation, and analysis. Since manual or semi-automatic segmentation takes a lot of time and energy, automatic image segmentation algorithms have received a lot of attention. Images are by nature fuzzy [1]. This is especially true of biomedical images. The fuzzy property of images usually stems from the limitations of scanners in spatial, parametric, and temporal resolution. What's more, the heterogeneous material composition of human organs adds to the fuzzy property of magnetic resonance images (MRI). As the goal of image segmentation is to extract the object from the other parts, segmentation by hard means may destroy the fuzziness of images and lead to bad results. By contrast, using fuzzy methods to segment biomedical images respects this inherent property fully and can retain inaccuracies and uncertainties as realistically as possible [2].
Although the object regions of biomedical images manifest themselves with heterogeneous intensity values due to their fuzzy property, knowledgeable human observers can recognize the objects easily from the background. That is, the elements in these regions seem to hang together to form the object regions in spite of their heterogeneity of intensity. In 1973, Dunn [3] first developed "fuzzy c-means" (FCM), a fuzzy clustering method that allows one piece of data to belong to two or more clusters. In [4], Bezdek improved the algorithm so that the objective function is minimized in an iterative procedure. Clark [5] developed a system to provide completely automatic segmentation and labeling of MR brain images. Although FCM maintains the inaccuracy of the elements' memberships to every cluster, it does not take into consideration that the elements in the same object hang together. What's more, it is susceptible to noise and may end at a local minimum. In [6], Rosenfeld developed fuzzy digital topological and geometric concepts. He defined fuzzy connectedness by using a min-max construct on fuzzy subsets of 2-D picture domains. Based on this, Udupa [7] introduced a fundamental concept called affinity, which combined fuzzy connectedness directly with images and utilized it for image segmentation. Thus the properties of inaccuracy and hanging-togetherness are made full use of. However, the step of manual seed selection, although onerous and time-consuming, is unavoidable in the initialization of fuzzy connectedness. This paper proposes a new segmentation method based on the combination of fuzzy connectedness and FCM, and applies it to segment MR brain images. The method makes full use of the advantages of the two fuzzy methods. The segmentation procedure implements automatic seed selection and at the same time guarantees the quality of segmentation. The experiments on simulated and real MR brain images prove that this new method behaves well. The remainder of this paper is organized as follows. Section 2 introduces the fuzzy connectedness method. Section 3 describes our fuzzy connectedness and FCM based method in detail. Section 4 presents the experimental results. Conclusions are given in Section 5.
2 Background

In this section, we summarize the fuzzy connectedness method for segmentation proposed by Udupa [7]. Let X be any reference set. A fuzzy subset A of X is a set of ordered pairs

A = \{(x, \mu_A(x)) \mid x \in X\}

where \mu_A : X \to [0, 1]; \mu_A is called the membership function of A in X. A 2-ary fuzzy relation \rho in X is a fuzzy subset of X \times X,

\rho = \{((x, y), \mu_\rho(x, y)) \mid x, y \in X\}

where \mu_\rho : X \times X \to [0, 1].
For any n \ge 2, let the n-dimensional Euclidean space R^n be subdivided into hypercuboids by n mutually orthogonal families of parallel hyperplanes. Assume, with no
loss of generality, that the hyperplanes in each family have equal unit spacing so that the hypercuboids are unit hypercubes, and we shall choose coordinates so that the center of each hypercube has integer coordinates. The hypercubes will be called spels (an abbreviation for “space elements”).
Z^n is the set of all spels in R^n. A fuzzy relation \alpha in Z^n is said to be a fuzzy adjacency if it is both reflexive and symmetric. It is desirable that \alpha be such that \mu_\alpha(c, d) is a non-increasing function of the distance \| c - d \| between c and d. We call the pair (Z^n, \alpha), where \alpha is a fuzzy adjacency, a fuzzy digital space.

A scene over a fuzzy digital space (Z^n, \alpha) is a pair C = (C, f), where C = \{c \mid -b_j \le c_j \le b_j \text{ for some } b \in Z_+^n\}; Z_+^n is the set of n-tuples of positive integers; f, called the scene intensity, is a function whose domain is C. C is called the scene domain, and the range of f is a set of numbers [L, H] (usually integers).

Let C = (C, f) be any scene over (Z^n, \alpha). Any fuzzy relation \kappa in C is said
µκ (c, d )
(i.e., µκ is shift variant).
µκ (c, d )
may depend on the actual location of
Path strength is denoted as the strength of a certain path connecting two spels. Saha and Udupa[7] have shown that under a set of reasonable assumptions the minimum of affinities is the only valid choice for path strength. So the path strength is
µ N ( p) = min [ µκ (ci −1 , ci ) ] ,
(1)
1 and
Every pair of
N is denoted as the fuzzy κ -net in C .
(ci −1 , ci ) is a link in the path while ci −1 and ci may not always be
adjacent.
C over (Z n , α ) , for any affinity κ and κ –net N in C , fuzzy -connectedness K in C is a fuzzy relation in C defined by the following mem-
For any scene
κ
bership function. For any
c, d ∈ C
µ K (c, d ) = max [ µ N ( p) ] p∈Pcd
.
(2)
Fig. 1. Illustration of the algorithm of fuzzy connectedness
Combined with eq. (1), eq. (2) shows the min-max property of the fuzzy connectedness between any two spels, as illustrated in Fig. 1. A physical analogy one may consider is to think that there are many strings connecting spels A and B, each with its own strength (the path strength of that particular path). Imagine that A and B are pulled apart. Under the force, the strings will break one by one. For a given string, the affinity of the link where the string breaks is denoted as the path strength of this string (the path strength is defined as the minimum affinity of all the links in the path). When all but one string are broken, the last string is the strongest one, and its path strength is denoted as the fuzzy connectedness between spels A and B. Let S be any subset of C. We refer to S as the set of seed spels and assume throughout that S \neq \emptyset. The fuzzy \kappa\theta-object of C containing S equals
O_{K\theta}(S) = \left\{ c \mid c \in C \text{ and } \max_{s \in S} [\mu_K(s, c)] \ge \theta \right\}.    (3)

With eq. (3), we can extract the object we want, given \theta and S. This can be computed via dynamic programming [7].
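The dynamic-programming computation referred to here can be sketched as a Dijkstra-like max-min propagation. In the sketch below, the affinity function, 4-adjacency and 2-D scene are illustrative assumptions rather than details fixed by the paper:

```python
import heapq
import numpy as np

def fuzzy_connectedness(affinity, seeds, shape):
    """Max-min connectedness to a seed set (Eqs. 1-3).

    affinity(c, d) -> value in [0, 1] for adjacent spels c, d (user-supplied).
    seeds: iterable of (row, col) tuples. Returns conn[c] = mu_K to the seeds.
    """
    conn = np.zeros(shape)
    heap = []
    for s in seeds:
        conn[s] = 1.0                        # a seed is fully connected to itself
        heapq.heappush(heap, (-1.0, s))
    while heap:
        neg_k, c = heapq.heappop(heap)
        k = -neg_k
        if k < conn[c]:
            continue                         # stale heap entry
        y, x = c
        for d in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= d[0] < shape[0] and 0 <= d[1] < shape[1]:
                cand = min(k, affinity(c, d))        # path strength via c, Eq. (1)
                if cand > conn[d]:
                    conn[d] = cand
                    heapq.heappush(heap, (-cand, d))
    return conn

# Thresholding conn >= theta then yields the object of Eq. (3).
```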
3 Proposed Method

In this section, we propose a new segmentation method, unsupervised and robust, based on the combination of fuzzy connectedness and FCM. While Udupa has sought to utilize the fuzzy nature of biomedical images in both aspects, inaccuracy and hanging-togetherness, identifying the seed spels belonging to the various objects has always been left to the user as an onerous task. Therefore, automatic selection of seeds became important in our research. On the other hand, FCM, as a classical fuzzy clustering method, is likely to converge to a local minimum and thus lead to false segmentation. Our new method combines the two fuzzy methods organically, implementing automatic seed selection while guaranteeing segmentation quality. The outline of the new method is as follows. First, FCM is used to pre-segment the MR brain image, through which the scope of the seeds is obtained. Since the number
of seeds within the scope is much larger than we need, unqualified seeds within the scope are automatically eliminated according to their spatial information. With the remaining seeds as initialization, the MR brain image is segmented by fuzzy connectedness. Here are the detailed steps of our proposed algorithm applied to MR brain images.

Step 1. Preprocess the MR brain images, including denoising, intensity correction, and inhomogeneity correction. In intensity correction, a standardized histogram of an MR brain image is acquired, and all other images are corrected so that their histograms match the standardized one as closely as possible [8].
Step 2. Set the number of clusters to 4, and pre-segment the image by FCM, so that the expected segmented objects are white matter, grey matter, CSF and background.
Step 3. Compute the average intensity of each of the four clusters, denoted as
v_j^{(b)}, b = 1, 2, 3, 4. Based on the biomedical knowledge that white matter's intensity is larger than that of other tissues in MR brain images, we find the largest, v_j^{(i)}; the cluster that v_j^{(i)} belongs to is the white matter.
Step 4. Compute the deviation \delta^{(i)} of the cluster obtained in Step 3. The scope of the seeds is defined by

v_j^{(i)} \pm 0.3\,\delta^{(i)}.    (4)
Step 5. Define N as the number of seeds in fuzzy connectedness.
Step 6. Take the spatial information of the spels within the scope into account, and classify them into N clusters. Make the center spels of each cluster the seeds for initialization.
Step 7. Segment the image precisely with the selected seeds by the fuzzy connectedness method.

With Step 1, we get the standardized histogram of each MR brain image, which guarantees that the parameter 0.3 in eq. (4) will work. Then, according to the average and deviation of the intensity of the region of interest, the scope of the seeds is obtained with eq. (4). Since the spatial information of these spels is taken into account, the automatically selected seeds are effective.
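A rough sketch of Steps 2-6 in code (heavily simplified: the FCM memberships and cluster mean intensities are assumed given, and a plain k-means on pixel coordinates stands in for the unspecified spatial clustering; all names are illustrative):

```python
import numpy as np

def select_seeds(image, memberships, centers, n_seeds, scale=0.3):
    """Automatic seed selection per Steps 3-6 (a sketch under stated assumptions).

    memberships: (c, n_pixels) FCM matrix; centers: (c,) cluster mean intensities.
    Assumes enough candidate spels exist (>= n_seeds).
    """
    white = int(np.argmax(centers))                # brightest cluster = white matter
    labels = memberships.argmax(axis=0).reshape(image.shape)
    vals = image[labels == white]
    v, dev = vals.mean(), vals.std()
    lo, hi = v - scale * dev, v + scale * dev      # Eq. (4) intensity scope
    ys, xs = np.nonzero((image >= lo) & (image <= hi) & (labels == white))
    coords = np.column_stack([ys, xs]).astype(float)
    # crude k-means on spatial coordinates; the cluster centers become the seeds
    rng = np.random.default_rng(0)
    seeds = coords[rng.choice(len(coords), n_seeds, replace=False)]
    for _ in range(20):
        assign = np.argmin(((coords[:, None] - seeds[None]) ** 2).sum(-1), axis=1)
        for i in range(n_seeds):
            members = coords[assign == i]
            if len(members):                       # keep previous seed if cluster empties
                seeds[i] = members.mean(axis=0)
    return np.rint(seeds).astype(int)
```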
4 Experimental Results

In this section, we give the experimental results with both simulated and real MR brain images. For simulated images, as the reference result is the "ground truth", accuracy is described with three parameters: true positive volume fraction (TPVF), false positive volume fraction (FPVF), and false negative volume fraction (FNVF). These parameters are defined as follows:
TPVF(V, V_t) = \frac{|V \cap V_t|}{|V_t|},    (5)

FPVF(V, V_t) = \frac{|V - V_t|}{|V_t|},    (6)

FNVF(V, V_t) = \frac{|V_t - V|}{|V_t|},    (7)
where V_t denotes the set of spels in the reference result and V denotes the set of spels resulting from the user's algorithm. FPVF(V, V_t) denotes the cardinality of the set of spels that are in the segmentation result of the method but not in V_t, expressed as a fraction of the cardinality of V_t. Analogously, FNVF(V, V_t) denotes the fraction of V_t that is missing in V. We use the probability of error (PE) to evaluate the overall accuracy of the simulated image segmentation. PE [9] can be described as

PE = FPVF(V, V_t) + FNVF(V, V_t).    (8)
Vα denotes the set of pixels assigned for
it by the algorithm, DSC is defined as follows:
DSC = 2 •
| Vm ∩ Vα | . | Vm | + | Vα |
(9)
Since manual segmentations are not “ground truth”, DSC provides a reasonable way to evaluate automated segmentation methods. 4.1 Simulated Images We applied our method on simulated MR brain images provided by McConnell Brain Image Center [11]. As the brain database has provided the gold standard, validation of the segmentation methods could be carried out. The experimental images in this paper were imaged with T1 modality, 217 * 181 (spels) resolution and 1mm slice thickness. 3% noise has been put up and the intensity-nonuniformity is 20%.
A Novel Segmentation Method for MR Brain Images
511
We take the 81th to 100th slices of the whole brain MRI in the database, and segment them with fuzzy connectedness (FC), FCM and our proposed method respectively. The evaluation values are listed in table 1 according to the gold standard. Table 1. Average value of 20 simulated images using three segmentation methods (%)
Study FCM FC Proposed Method
(a)
(b)
TPVF 96.45 98.03 98.89
FPVF 2.79 1.09 1.55
FNVF 3.55 1.97 1.11
(c)
PE 6.34 3.06 2.66
(d)
Fig. 2. Segmentation of the white matter from simulated images (a) original image; (b) result of segmentation by fuzzy connectedness; (c) result of segmentation by our proposed method; (d) white matter provided by gold standard
It could be noticed in table 1 that our proposed method behaves best, and the second is fuzzy connectedness, the last FCM. This should be own to the high quality of the automatic seed selection. And the high accuracy of fuzzy connectedness also contributes to the good result. Take the 97th slice as an example. When using fuzzy connectedness as the segmentation method, manual selected seeds are (79, 64), (61,131), as shown in Fig. 2(b). When our proposed method is implemented, the scope of the seeds from FCM is 0.98949 ± 0.0002878. That is, the mean value of the intensity is 0.98949, and the deviation value is 0.0009593. According to eq.(4), the scope of seeds is obtained. Considering the spatial information of the seeds from FCM, the unqualified seeds are eliminated and the two left is (85, 65) and (62,130), as is shown in Fig.2(c). It could be seen that the seeds automatically selected are located near to the manual selected ones, and thus they work as effectively as the manual ones. 4.2 Real Images We applied our method to twelve real MRI brain data sets. They were imaged with a 1.5 T MRI system (GE, Signa) with the resolution 0:94 *1.25*1.5 mm (256* 192*
512
X. Fan, J. Yang, and L. Cheng
124 voxels). These data sets have been previously labeled through a labor-intensive (usually 50 hours per brain) manual method by an expert. We first applied our method on these data sets and then compared with expert results. We give the results from the 3th to the 10th of them gained by our method in Table 2 and Fig. 3. Table 2. Average DSC of 8 real cases (%)
FC FCM Proposed Method
1 72 78 82
2 67 77 83
3 61 79 85
4 67 80 84
5 68 82 85
6 55 82 86
7 72 81 82
8 54 80 81
(a) (b) (c) (d) Fig. 3. Segmentation of the white matter in real images (a) original image; (b) the scope of seeds to be selected from FCM; (c) white matter segmented from proposed method; (d) white matter segmented from FCM
According to Table 2, the rank of DSC by the three methods is the same as that in simulated image experiments. It could be observed that there are several cases’ DSC of FCM is rather poor. That is because the noise in real images changes widely in different cases. Since FCM is sensitive to noise, it could not segment the object correctly. Take the 6th case as an illustration. Although the original figure is corrupted by noise severely, the scope of the seeds selected by FCM is still within the region of white matter as is marked in Fig.3 (b). Thus our proposed method behaves well with two automatically selected seeds as in Fig.3(c). Yet the result of FCM is very poor as is shown in Fig.3 (d).
5 Conclusion Fuzzy property is the nature of images. When the images are segmented by fuzzy methods, the inaccuracies property of the elements is adopted. FCM is one of the fuzzy clustering techniques, but it ignores the hanging-togetherness property of images. Thus it is sensitive to noise and may end with a local minimum. Fuzzy
A Novel Segmentation Method for MR Brain Images
513
connectedness is a method that takes both the inaccuracy and the hanging-togetherness of the elements into consideration. Although fuzzy connectedness behaves well, the selection of seeds costs operators time and energy. In our proposed method, seed selection is automated by FCM, and the quality of the automatically selected seeds is guaranteed by eliminating unqualified ones. Moreover, unlike FCM, the new method is robust owing to the use of fuzzy connectedness. Through simulated- and real-image experiments, we can see that our proposed method not only selects seeds automatically but also achieves desirable accuracy.
References

1. Udupa, J.K., Saha, P.K.: Fuzzy Connectedness and Image Segmentation. Proceedings of the IEEE, Vol. 91, No. 10, pp. 1649-1669, 2003
2. Saha, P.K., Udupa, J.K., Odhner, D.: Scale-based fuzzy connected image segmentation: Theory, algorithms, and validation. Computer Vision and Image Understanding, Vol. 77, pp. 145-174, 2000
3. Dunn, J.C.: A Fuzzy Relative of the ISODATA Process and Its Use in Detecting Compact Well-Separated Clusters. Journal of Cybernetics, Vol. 3, pp. 32, 1973
4. Bezdek, J.C.: Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum Press, 1981
5. Clark, M.C.: MRI segmentation using fuzzy clustering techniques. IEEE Engineering in Medicine and Biology, Vol. 13, No. 5, pp. 730, 1994
6. Rosenfeld, A.: Fuzzy digital topology. Information and Control, Vol. 40, No. 1, pp. 76, 1979
7. Udupa, J.K., Samarasekera, S.: Fuzzy Connectedness and Object Definition: Theory, Algorithms, and Applications in Image Segmentation. Graphical Models and Image Processing, Vol. 58, No. 3, pp. 246, 1995
8. Nyul, L.G., Udupa, J.K.: On standardizing the MR image intensity scale. Magn. Reson. Med., Vol. 42, pp. 1072-1081, 1999
9. Dam, E.B.: Evaluation of diffusion schemes for multiscale watershed segmentation. M.Sc. Dissertation, University of Copenhagen, 2000
10. Zijdenbos, A.P., Dawant, B.M., Margolin, R.A., Palmer, A.C.: Morphometric analysis of white matter lesions in MR images: Method and validation. IEEE Transactions on Medical Imaging, Vol. 13, No. 4, pp. 716, 1994
11. Cocosco, C.A., Kollokian, V., Kwan, R.K.-S., Evans, A.C.: BrainWeb: Online Interface to a 3D MRI Simulated Brain Database. NeuroImage, Vol. 5, No. 4, part 2/4, S425, 1994
Improved-FCM-Based Readout Segmentation and PRML Detection for Photochromic Optical Disks

Jiqi Jian, Cheng Ma, and Huibo Jia

Optical Memory National Engineering Research Center, Tsinghua University, Beijing 100084, P.R. China
[email protected] {macheng, jiahb}@tsinghua.edu.cn
Abstract. An improved Fuzzy C-Means (FCM) clustering algorithm with preprocessing is analyzed and validated for readout segmentation of photochromic optical disks. The characteristics of the readout and its differential coefficient, together with other prior knowledge, are considered in the method, which makes it more applicable than the traditional FCM algorithm. With the improved-FCM-based readout segmentation, the crest and trough segments can be divided clearly and the rising and falling edges can be located properly, which makes RLL encoding/decoding applicable to photochromic optical disks and increases the storage density. Further discussion proves the consistency of the segmentation method with PRML, and the improved-FCM-based detection can be regarded as an extension of PRML detection. Keywords: Fuzzy clustering, FCM, PRML, optical storage, photochromism.
1 Introduction

With the fast development of technology and the ever-growing demand for high-density optical data storage, photochromic optical disks, including multi-level and multi-wavelength photochromic optical disks, have become an increasingly important research area. Continuous efforts and experiments have been made, but most sample disks are dot-recorded, and their storage density is lower than that of run-length-limited (RLL) encoded disks such as CD, DVD, and Blu-ray disks. The non-application of RLL encoding to photochromic optical disks is partly due to the indistinct edges of the information characters on a photochromic optical disk; unlike the sharp character edges of CD, DVD, and Blu-ray disks, they make it hard to divide the readout clearly into wave crests and troughs and to locate the rising and falling edges, as RLL encoding/decoding requires. As laboratory findings from the Optical Memory National Engineering Research Center, the information characters and readouts of a dual-wavelength photochromic sample disk are shown in Fig. 1 and Fig. 2. In recent years, the synthesis of clustering algorithms and fuzzy set theory has led to the development of fuzzy clustering, whose aim is to model fuzzy unsupervised patterns efficiently. The Fuzzy C-Means (FCM) method proposed by Bezdek is the
Fig. 1. Information characters on a dual-wavelength photochromic sample disk: (a) information characters for the 650 nm laser; (b) information characters for the 532 nm laser
Fig. 2. Readouts of a dual-wavelength photochromic sample disk
most widely used fuzzy clustering algorithm [1-8]. It may help us divide the readout of photochromic disks into crests and troughs and locate the rising and falling edges.
2 Theoretical Background

FCM clustering is based on the theory of fuzzy sets. The most important feature of fuzzy set theory is its ability to express numerically the imprecision that stems from grouping elements into classes without sharply defined boundaries, which fits crest and trough classification of the readouts of photochromic disks exactly. For a given dataset X = {x_j | j = 1, 2, …, n}, the FCM algorithm generates a fuzzy partition providing the membership degree u_ij of datum x_j to cluster i (i = 1, 2, …, C). The objective of the algorithm is to minimize the objective function J_m to obtain the optimal fuzzy partition for the given dataset, where m is a real-valued number which controls the
'fuzziness' of the resulting clusters, and d(x_j, v_i) is the squared Euclidean distance between x_j and v_i. The procedure of the algorithm is briefly described in Fig. 3.
Given x_j (j = 1, 2, …, n), fix C and m (m > 1), and initialize the fuzzy partition matrix U with u_ij ∈ [0, 1] (i = 1, 2, …, C), subject to

    Σ_{i=1}^{C} u_ij = 1 ,   0 < Σ_{j=1}^{n} u_ij < n .

Compute the cluster centers

    v_i = Σ_{j=1}^{n} u_ij^m x_j / Σ_{j=1}^{n} u_ij^m .

Update the fuzzy partition matrix

    u_ij = 1 / Σ_{k=1}^{C} ( d(x_j, v_i) / d(x_j, v_k) )^{1/(m−1)} ,   with d(x_j, v_i) = ||x_j − v_i||² .

Calculate the objective function

    J_m^{(l)} = Σ_{j=1}^{n} Σ_{i=1}^{C} u_ij^m d(x_j, v_i) .

If the terminal condition |J_m^{(l)} − J_m^{(l−1)}| ≤ ε is satisfied, stop; otherwise return to the cluster-center computation.

Fig. 3. Process of the FCM algorithm
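To make the procedure in Fig. 3 concrete, the following is a minimal sketch of the standard FCM loop in Python/NumPy. It is our own illustration written for these notes, not the authors' code; the variable names and the random initialization are our assumptions.

import numpy as np

def fcm(X, C, m=2.0, eps=1e-5, max_iter=100, seed=0):
    """Minimal fuzzy C-means. X: (n, d) data; returns memberships U (C, n) and centers V (C, d)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((C, n))
    U /= U.sum(axis=0)                                      # each column sums to 1
    J_prev = np.inf
    for _ in range(max_iter):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)        # cluster centers v_i
        D = ((X[None, :, :] - V[:, None, :]) ** 2).sum(-1)  # d(x_j, v_i), shape (C, n)
        D = np.fmax(D, 1e-12)                               # guard against zero distance
        U = D ** (-1.0 / (m - 1.0))
        U /= U.sum(axis=0)                                  # equivalent to the update in Fig. 3
        J = (U ** m * D).sum()                              # objective J_m
        if abs(J_prev - J) <= eps:                          # terminal condition
            break
        J_prev = J
    return U, V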
3 Comparison of Two Readout Segmentation Methods
The traditional FCM algorithm and the improved FCM method with preprocessing are tested illustratively using the 650 nm readout curve f(x) of a dual-wavelength photochromic sample disk shown in Fig. 2. Both f(x) and its differential coefficient f'(x) are shown in Fig. 4.
Fig. 4. The 650 nm readout curve (a) and its differential coefficient (b)
3.1 FCM-Based Readout Segmentation

Using the readout curve and its differential coefficient as the given dataset, we can segment the readout with the FCM method. When the exponent for the partition matrix is m = 2.0 [8], we get the result shown in Fig. 5(a), where the cluster centers are indicated by the large characters, and Fig. 5(b), where the curve segments with crosses are crests, the segments with dots are troughs, and between them lie the rising and falling edges. The segmentation result for the 650 nm readout shows that the curve is segmented in an unsupervised manner. However, the FCM-based readout segmentation contains some mistakes: some parts of the curve are not classified properly, such as the pseudo-peaks on the left side of the figure, and some edges are located inaccurately, such as the first rising edge on the left.
The main reason for the errors is that no fore-known information is applied in the algorithm, which limits its efficiency significantly, no matter how many categories the points on the curve are divided into, as Figs. 5 and 6 suggest.
Fig. 5. Result of readout segmentation based on the traditional FCM algorithm (C = 2)
Fig. 6. Result of readout segmentation based on the traditional FCM algorithm (C = 4)
3.2 Improved-FCM-Based Readout Segmentation with Preprocessing

With the characteristics of the readout and its differential coefficient, as well as other knowledge, taken into account, we can add the following preprocessing to the FCM-based segmentation. First, we set two threshold values A and B (A > B) for the readout curve: when the value of a readout sample point is larger than A, the point is classified as a crest point, and when the value is smaller than B, the point is classified as a trough point. Second, we set two threshold values C and D
(C > D) for the differential coefficient of the readout: when the differential coefficient is larger than C or smaller than D, the point is classified as a rising- or falling-edge point. Finally, on the basis of this preprocessing, the FCM method is used to judge the classification of the remaining parts of the readout, with the initial centers taken from the results of the pre-clustering; a sketch is given below. With A = 0.50, B = 0.35, C = 0.05, D = −0.07, and m = 2.0, we get the result shown in Fig. 7, where the points on the curve are divided into four categories: crest points, trough points, rising-edge points, and falling-edge points.
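The following sketch illustrates the threshold preprocessing described above. It is our own illustrative rendering, not the authors' implementation; the label encoding, the precedence of the rules when both an amplitude and a slope threshold fire, and the way the pre-clustered points seed FCM are all assumptions.

import numpy as np

CREST, TROUGH, RISING, FALLING, UNKNOWN = 0, 1, 2, 3, -1

def preclassify(f, df, A=0.50, B=0.35, C=0.05, D=-0.07):
    """Label readout samples by thresholds; leave the rest for FCM."""
    labels = np.full(f.shape, UNKNOWN)
    labels[df > C] = RISING          # steep positive slope -> rising edge
    labels[df < D] = FALLING         # steep negative slope -> falling edge
    labels[f > A] = CREST            # high reflectance -> crest (assumed precedence)
    labels[f < B] = TROUGH           # low reflectance -> trough
    return labels

def initial_centers(f, df, labels):
    """Mean (f, f') of each pre-labeled class serves as an initial FCM center."""
    pts = np.stack([f, df], axis=1)
    return np.array([pts[labels == c].mean(axis=0) if (labels == c).any()
                     else pts.mean(axis=0)
                     for c in (CREST, TROUGH, RISING, FALLING)])

The points still labeled UNKNOWN would then be clustered by an FCM routine such as the one sketched in Section 2, initialized with these centers.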
Fig. 7. Result of readout segmentation based on the improved FCM method
The crest and trough segments are divided clearly, and the rising and falling edges are located properly.

3.3 Consistency of the Readout Segmentation Methods with PRML Detection

As data density increases, the partial-response maximum-likelihood (PRML) detection method has been applied more and more widely to avoid the inter-symbol interference (ISI) problem [9, 10]. Instead of trying to distinguish individual peaks to find flux reversals, PRML manipulates the analog data stream coming from the disk (the "partial response" component) and then determines the most likely sequence of bits ("maximum likelihood"). In a PRML system, the channel is truncated to a target PR mode by linear equalization, and the PR modes can be expressed as

    P(D) = r_0 + r_1 D + r_2 D² + … + r_M D^M .

Both readout segmentation methods manipulate the readout f(x) and its differential coefficient f'(x) together, where f'(x) = (f(x) − f(x−1))/2 = (1 − D) f(x)/2. So the detection based on the improved-FCM-based readout segmentation method can be regarded as an extension of PRML detection with the PR mode

    P(D) = α_0 + α_1 D ,

where α_0 and α_1 are variable and take different expressions depending on the relation of f(x) and f'(x) to the threshold values A, B, C, D, etc.
4 Conclusion

An improved FCM clustering algorithm is analyzed and validated for readout segmentation of photochromic optical disks. The characteristics of the readout and its differential coefficient, together with other knowledge, are considered in the method, which makes it more applicable than the traditional FCM algorithm. With the improved method, the crest and trough segments can be divided clearly and the rising and falling edges located properly, which makes RLL encoding/decoding applicable to photochromic optical disks and paves the way for the advance of photochromic optical storage, as well as photochromism-based multi-level and multi-wavelength optical storage. Further discussion proves the consistency of the segmentation method with PRML: the detection method based on the improved-FCM-based readout segmentation can be regarded as an extension of PRML detection with the PR mode P(D) = α_0 + α_1 D, where α_0 and α_1 are variable and take different expressions depending on the relation of f(x) and f'(x) to the threshold values.
References

1. Bezdek, J.C.: Fuzzy mathematics in pattern classification. PhD thesis, Cornell University, Ithaca (1973)
2. Bezdek, J.C.: Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum, New York (1981)
3. Yoo, S.H., Cho, S.B.: Partially Evaluated Genetic Algorithm Based on Fuzzy c-Means Algorithm. Parallel Problem Solving from Nature - PPSN VIII, Lecture Notes in Computer Science, Vol. 3242. Springer-Verlag, Berlin Heidelberg New York (2004) 440
4. Pedrycz, W., Loia, V., Senatore, S.: P-FCM: a proximity-based fuzzy clustering. Fuzzy Sets and Systems, Vol. 148. Elsevier (2004) 21-41
5. Tsekouras, G.E., Sarimveis, H.: A new approach for measuring the validity of the fuzzy c-means algorithm. Advances in Engineering Software, Vol. 35. Elsevier (2004) 567-575
6. Loia, V., Pedrycz, W., Senatore, S.: P-FCM: a proximity-based fuzzy clustering for user-centered web applications. International Journal of Approximate Reasoning, Vol. 34. Elsevier (2003) 121-144
7. Sebzalli, Y.M., Wang, X.Z.: Knowledge discovery from process operational data using PCA and fuzzy clustering. Engineering Applications of Artificial Intelligence, Vol. 14. Elsevier (2001) 607-616
8. Pal, N.R., Bezdek, J.C.: On cluster validity for the fuzzy c-means model. IEEE Trans. Fuzzy Systems, Vol. 3. IEEE (1995) 370-379
9. Lee, C.H., Cho, Y.S.: A PRML detector for a DVDR system. IEEE Trans. Consumer Electron., Vol. 45. IEEE (1999) 278-285
10. Choi, S.H., Kong, J.J., Chung, B.J., Kim, Y.H.: Viterbi detector architecture for high-speed optical storage. In: Speech and Image Technologies for Computing and Telecommunications, IEEE Proceedings/TENCON, Vol. 1. IEEE (1997) 89-92
Fuzzy Reward Modeling for Run-Time Peer Selection in Peer-to-Peer Networks

Huaxiang Zhang¹, Xiyu Liu¹, and Peide Liu²

¹ Dept. of Computer Science, Shandong Normal Univ., Jinan 250014, Shandong, China
[email protected]
² Dept. of Computer Science, Shandong Economics Univ., Jinan 250014, Shandong, China
Abstract. A good query plan in p2p networks is crucial to increasing query performance. Optimization of the plan requires effective and suitable remote cost estimation of the candidate peers, based on information about the candidates' run-time cost models and online time. We propose a fuzzy reward model to evaluate a candidate peer's online reliability relative to the query host, and utilize a real-time cost model to estimate the query execution time. The optimizer uses this run-time information to generate an effective query plan.
1 Introduction
The peer-to-peer (p2p) [1] paradigm has opened up new research areas in networking and distributed computing. As one of the main application areas, file sharing has gained much attention. Routing query messages to the destination peers and executing the query plan efficiently pose new challenges. As each peer can provide services such as data services or computing services, a query plan can be executed with the coordination of the relevant peers. Estimating the execution cost of a query plan is a crucial problem, and an optimization approach should be employed to generate the plan. The cost of executing a query at remote nodes may be high because of communication costs or heavy workload at the remote sites, and different distributed computing strategies result in quite different query processing performance. Optimizing the query cost in distributed information systems is difficult. The evaluation cost models proposed so far can be categorized into static and dynamic cost models. The static cost model simplifies distributed systems as static systems and utilizes different approaches in the cost measurement metrics. As the name implies, no changes are made to static models once they are derived, so employing static models to estimate a query cost is not suitable for a dynamic environment such as p2p networks. In p2p networks, high availability,
fault tolerance, and scalability are the main characteristics, and every peer can join or drop out of the system freely. The workload of a remote peer changes as more or fewer tasks need to be supported, and the dynamic nature of p2p requires that new query cost models be proposed. Availability of a peer is also very important because of this dynamic nature. The issue matters since many p2p systems are designed on the fundamental assumption that the availability of a peer depends on its online time. In p2p networks, intermittent online time is common: a peer often leaves the system and joins it again at a later time. We are also motivated to study peer availability in part to shape the evaluation of p2p systems. The primary goal of this work is to optimize the query cost model in p2p networks.
2 Related Work
As soft computing technologies have been adopted in communications [2], some models based on them have been proposed. Bosc et al. [3] propose a fuzzy data model as the basis for the design of a run-time service selection trader; it is not suitable for p2p networks, as the trader environment is static. Cost-based query optimization in distributed information sources has been studied [4], and a calibration-based approach has been proposed. Based on sampling queries run against user database systems, Zhu et al. [5] propose a local cost model. Shahabi et al. [6] propose a run-time cost statistics model for remote peers that employs a probe-based strategy. Even though the approach proposed in [6] is able to reflect more run-time cost statistics of remote peers, it lacks scalability due to the extensive number of probe queries. All the models mentioned above take no dynamism into consideration. Ling et al. [7] propose a progressive "push-based" remote cost monitoring approach and introduce a fuzzy cost evaluation metric to measure a peer's reliability. Even though their approach takes dynamism into consideration, the fuzzy reliability is not properly defined: saying a remote peer is more reliable to a query host should mean that the remote peer and the query host share more common online time, not merely that the remote peer has more online time. Even though complex p2p query systems employing different message routing and query location schemes have been proposed in the literature recently, how to optimize a query plan is still a key issue needing much attention, because the query plan has great impact on query performance.
3 Cost Evaluation
A complex query generated by a peer can be executed on several peers holding the target resources. Suppose there are m candidate peers storing the target
resources; the objective of the query optimization is to select n (n ≤ m) peers from the m candidates to execute the query at minimal cost and with high execution reliability. This problem can be considered a multi-agent coordination problem [8], and an effective coordination mechanism is essential for a query host to achieve its goal in p2p systems. The essential coordination solves the query problem in the p2p environment with distributed resources. Selection of the candidate peers should take the remote peer's reliability, the execution time, and other factors into consideration. In p2p networks, a peer's reliability depends on its online time relative to other peers, and this online time can be estimated. Many factors affect the query execution time of a peer, such as CPU processing speed, network communication cost, input/output, the size of the information to be transmitted, and the waiting time. To simplify the query cost optimization problem, we regard a peer's reliability and the query execution time as the two main factors to be utilized in a query cost model.

3.1 Relative Online Time
A peer's availability is not well modeled in p2p networks; the reliability of one peer to another depends not only on its online availability but also on the inter-dependence between the peers. A peer's online time determines its reliability. For example, if a peer is offline and the host still sends a message to it, the host will get no result and has to re-transmit the message to other candidate peers; in this case the peer is unreliable, and we set its reliability degree to 0. If a peer is online all the time and can provide other peers with software or hardware services, it is reliable and we set its reliability to 1. The concept of reliability is relative: if peer A is online from 0 am to 2 am and peer B is online from 2 am to 12 pm, then even though peer B's online time is very long, it is not reliable to peer A, as they share no common online time. The relative reliability of each peer pair can also be characterized using conditional probabilities. Consider peers A and B and the conditional probability of B being available given that A is available for a given time period; the value P(B = 1 | A = 1) is the relative reliability of B to A. In this case, the two peers are dependent.

Definition 1: [a, b] is an online time interval of a peer, and its length is denoted |[a, b]| = b − a. [a, b] ∧ [c, d] is the common online time interval falling within both [a, b] and [c, d] concurrently. If [a, b] ∧ [c, d] = ∅, then |[a, b] ∧ [c, d]| = 0.

Definition 2: [a, b] ∨ [c, d] denotes a peer's two online time intervals, and |[a, b] ∨ [c, d]| = (b − a) + (d − c) − |[a, b] ∧ [c, d]|.
Definition 3: If the jth online time interval of peer n is [s_nj, e_nj] (j = 1, …, k, where k is the number of online time intervals), then the online time interval of peer n is T_n^o = ∨_{j=1}^{k} [s_nj, e_nj], and n's online time length is |T_n^o|.

Definition 4: If T_n^o and T_m^o are the online time intervals of peers n and m respectively, then we define T_n^o ∧ T_m^o as the online time of peer n relative to m and denote it T_nm^o. |T_nm^o| is the relative online time length, and it is easy to prove that |T_nm^o| = |T_mn^o|. If T_nm^o = ∅, peers m and n share no common online time intervals; in this case, peer n is unreliable to peer m. If T_nm^o = T_n^o, peer n's online time falls within peer m's online time; in this case, peer m is completely reliable to peer n. If T_nm^o = T_m^o, peer m's online time falls within peer n's online time, and peer n is completely reliable to peer m.
3.2 Reliability Degree
The execution time of a query is affected by the several factors described above, as well as by the dynamism of the p2p network: a peer takes more time to finish a query under heavy workload than under light workload.

Definition 5: The relative reliability degree of peer m to n is defined as r_mn = |T_mn^o| / |T_n^o|.

Example 1. Suppose m's online time intervals are [2, 4] and [6, 10], and n's online time intervals are [3, 5], [7, 9], and [14, 20]. We have T_mn^o = [3, 4] ∨ [7, 9] and |T_mn^o| = (4 − 3) + (9 − 7) = 3; T_n^o = [3, 5] ∨ [7, 9] ∨ [14, 20] and |T_n^o| = (5 − 3) + (9 − 7) + (20 − 14) = 10. Hence r_mn = |T_mn^o| / |T_n^o| = 3/10 = 0.3. We can similarly calculate r_nm = |T_nm^o| / |T_m^o| = 3/6 = 0.5. Clearly r_mn ≠ r_nm.
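The interval arithmetic of Definitions 1-5 is easy to mechanize. Below is a minimal Python sketch that reproduces Example 1; it is our illustration, and the function names are our own.

def intersect_length(ints_a, ints_b):
    """Total length of the pairwise intersections of two interval lists."""
    return sum(max(0.0, min(b, d) - max(a, c))
               for (a, b) in ints_a for (c, d) in ints_b)

def total_length(ints):
    """Total online time length, assuming the intervals are disjoint."""
    return sum(b - a for (a, b) in ints)

def reliability(ints_m, ints_n):
    """r_mn = |T_mn^o| / |T_n^o|: reliability degree of peer m to peer n."""
    return intersect_length(ints_m, ints_n) / total_length(ints_n)

m = [(2, 4), (6, 10)]
n = [(3, 5), (7, 9), (14, 20)]
print(reliability(m, n))   # 0.3, as in Example 1
print(reliability(n, m))   # 0.5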
3.3 Fuzzy Reward Formulation
The query can be considered a cooperative task requiring several peers to finish, and the query host has to select the peers that will engage in the coordination. Peer selection should follow a query optimization rule: select, from all the candidates, the peers with lower query cost and higher reliability. If a candidate peer is more reliable to a query host, we assume it needs less reward for the query execution; otherwise, it needs more reward. So the query optimization is transformed into a problem of selecting peers to minimize the cost and rewards. We use fuzzy set theory [9] to formulate the fuzzy reward. The reliability degree of a candidate relative to the query host falls within [0, 1], and we regard it as a fuzzy set member. Then a fuzzy set A(r) in the universe of dis-
course of all possible reliability degrees can be formulated as A(r) = Σ_{r∈C} μ_A(r)/r, where C is the set of all candidates' reliability degrees and the membership function μ_A(r) in equation (1) indicates the degree to which r belongs to A(r):

    μ_A(r) = 0 ,                                  r ≤ ω
    μ_A(r) = 2 (1 + (1 − ω)(r − ω)^{-1})^{-1} ,   r > ω        (1)

where ω is a threshold: a host may set a reliability degree and select only candidates with degrees no less than ω. μ_A(r) approaches 1 if and only if r approaches 1. Given the relation between the reliability degree and the reward discussed above, we define the fuzzy reward function f(r) in equation (2):

    f(r) = ∞ ,                                    r ≤ ω
    f(r) = (1 + (1 − ω)(r − ω)^{-1}) / 2 ,        r > ω        (2)

f(r) indicates the fuzzy reward needed for a candidate peer of reliability degree r to execute the query generated by a host; note that for r > ω it is the reciprocal of μ_A(r). The objective of the host is to select suitable candidates with r > ω so as to minimize its cost. The online time intervals can be collected statistically by queries sent to the candidate peers; a small sketch of the two functions follows.
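A minimal rendering of equations (1) and (2), assuming ω is the host-chosen threshold; this is our illustration, not the authors' code.

import math

def membership(r, omega=0.5):
    """mu_A(r) of equation (1): degree to which reliability r belongs to A."""
    if r <= omega:
        return 0.0
    return 2.0 / (1.0 + (1.0 - omega) / (r - omega))

def fuzzy_reward(r, omega=0.5):
    """f(r) of equation (2): reward demanded by a peer of reliability r."""
    if r <= omega:
        return math.inf                 # a peer below threshold is never selected
    return (1.0 + (1.0 - omega) / (r - omega)) / 2.0

# e.g. a perfectly reliable peer (r = 1) needs the minimum reward f(1) = 1
print(membership(1.0), fuzzy_reward(1.0))   # 1.0 1.0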
3.4 Cost Model
Several criteria influence the cost for a peer to execute a remote query, such as the waiting time, the communication cost, the peer's workload, the CPU and input/output cost, the size of the message to be transmitted, etc. We use the cost model proposed in [7] to estimate the cost in terms of time; it calculates the time a peer requires to finish a query and employs four evaluation criteria. Previous studies on static query optimization assume the evaluation criteria remain constant, which may not be true in the dynamic p2p environment. We utilize the above cost model and describe the cost at a very high level: for each peer, we use a query execution time to indicate the cost, and this time may change with the dynamics of the p2p network. The host peer has a local cache to store the cost models of its candidates and updates the models according to the corresponding cost monitoring agents at the candidate peers. We use the locally stored cost models to estimate the query time required by a candidate peer. How to estimate the available time of a candidate peer must also be solved: we adopt a prober that periodically probes each peer to determine whether it is available at a particular time interval. This information is stored in the host peer and updated as new information arrives. The relative reliability of a peer is calculated from the probed information stored in the host peer.
4 Query Optimization Algorithm
For the peers in the host's candidate list, we use equations (1) and (2) to calculate their reliability degrees and fuzzy rewards. The candidates with degrees less than the threshold are ignored, leaving m peers. For a given query, we utilize the cost model to estimate the execution time t required by each peer. The host then has m value pairs (t_i, f_i) (i = 1, …, m), where f_i is the fuzzy reward requested by the ith peer for executing the query and t_i is its time cost. The host needs to select n (n ≤ m) of the m peers to finish the query. The query ending time is the largest of the times required by the selected n peers, denoted t_e. Therefore we have the following optimization problem:

    Q = min( α t_e + Σ_{i=1}^{n} f_i )        (3)
where α is a coefficient given by α = Σ_{i=1}^{m} f_i / Σ_{i=1}^{m} t_i. Equation (3) is a programming problem and can be solved by the query optimization algorithm proposed below. If we arrange all the time values in ascending order, we get an ordered tuple (t_1, …, t_m), where t_i is the ith smallest member. It follows that the maximal time t required for a host to finish the query must be a member of (t_n, …, t_m). We test each member of (t_n, …, t_m), calculate the corresponding Q values, and compare them; the time that minimizes Q is the solution of equation (3). The optimization algorithm is shown in Table 1.
Table 1. Query selection algorithm

(1) Arrange all the times t in ascending order to get an ordered tuple T = (t_1, …, t_m).
(2) Initialize a set S = ∅ and put the m value pairs (f, t) into S, arranged in an ascending list ordered by t: S = {(f_1, t_1), …, (f_m, t_m)}.
(3) Arrange the values of f whose t is a member of (t_1, …, t_n) in ascending order too; denote the ordered values F^0 = (f_1^0, …, f_n^0). Calculate U_0 = (Σ_{i=1}^{n} f_i^0) + α t_n.
(4) For j = 1 to m − n: compute U_j = (Σ_{i=1}^{n−1} f_i^{j−1}) + f_{n+j} + α t_{n+j}; insert f_{n+j} into F^{j−1}, obtaining an ordered F^j = (f_1^j, …, f_{n+j}^j).
(5) The smallest U_k, k ∈ {0, …, m − n}, is the best. The first n − 1 members of F^k together with f_{n+k} are the fuzzy rewards we should select, and t_{n+k} is the maximal time of the selected n peers.
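The selection in Table 1 can be stated compactly: for each candidate ending time t_k (k ≥ n), pick the n − 1 cheapest rewards among the peers faster than t_k, plus the peer at t_k itself. The sketch below is our reading of the algorithm, not the authors' code; for clarity it re-sorts inside the loop (O(m² log m)), whereas the incremental insertion of Table 1 achieves O(m log m).

import math

def select_peers(pairs, n, alpha):
    """pairs: list of (f, t); returns (best cost, chosen indices into pairs)."""
    order = sorted(range(len(pairs)), key=lambda i: pairs[i][1])  # ascending t
    best = (math.inf, None)
    for k in range(n - 1, len(pairs)):
        # n-1 cheapest rewards among peers faster than the candidate ending peer
        faster = sorted(order[:k], key=lambda i: pairs[i][0])[:n - 1]
        cost = alpha * pairs[order[k]][1] + pairs[order[k]][0] \
               + sum(pairs[i][0] for i in faster)
        if cost < best[0]:
            best = (cost, faster + [order[k]])
    return best

# example: 4 candidate peers, choose n = 2
pairs = [(1.2, 5.0), (0.8, 7.0), (2.0, 4.0), (1.0, 6.0)]
alpha = sum(f for f, _ in pairs) / sum(t for _, t in pairs)
print(select_peers(pairs, 2, alpha))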
The computational complexity of the algorithm in Table 1 can be analyzed as follows: the time complexities of steps (1), (2), (3), (4), and (5) are
m log m, m, n log n, Σ_{i=0}^{m−n} log(n + i), and m − n + 1, respectively. The algorithm's computational complexity is the sum of steps (1) to (5), which gives O(m log m).
5 Conclusions
We study the problem of remote query cost optimization in p2p networks. It is important for a query host to optimize the query and generate a query strategy. As the host needs only some of the candidates to execute the query, a selection algorithm is required, whose objective is to select peers that provide minimal response times and high reliability degrees. We consider a query as a cooperative task that can be performed by multiple agents, and propose the concept of fuzzy reward to evaluate a peer's online reliability relative to the query host. We also adopt a peer cost model to estimate the dynamic cost of each peer and use this model to calculate the query execution time. Taking the fuzzy reward and the dynamic cost model into consideration, we can utilize the run-time information to optimize the query. We argue that existing measurements and previously proposed query cost models in p2p systems do not capture the complex time-varying nature of availability in the dynamic p2p environment, and we propose the concept of relative reliability to deal with the query optimization problem. As the query optimization problem is very complex in the p2p environment, extensive further work remains to be done.
Acknowledgement. This paper is supported by the Natural Science Fund of Shandong Province of China under Grant No. Z2004G02.
References

1. Risson, J., Moors, T.: Survey of research towards robust peer-to-peer networks: search methods. Technical Report UNSW-EE-P2P-1-1, Univ. of New South Wales, Sydney, Australia (2004)
2. Wang, L.P.: Soft Computing in Communications. Springer, Berlin Heidelberg New York (2003)
3. Bosc, P., Damiani, E., Fugini, M.: Fuzzy service selection in a distributed object-oriented environment. IEEE Trans. on Fuzzy Systems 9, 5 (2001) 682-698
4. Adali, S., Candan, K.S., Papakonstantinou, Y., Subrahmanian, V.: Query caching and optimization in distributed mediator systems. In: Proc. of the 1996 ACM SIGMOD Int. Conf. on Management of Data, Montreal, Canada (1996) 137-148
5. Zhu, Q., Larson, P.A.: Solving local cost estimation problem for global query optimization in multidatabase systems. Distributed and Parallel Databases 6 (1998) 373-420
6. Khan, L., McLeod, D., Shahabi, C.: An Adaptive Probe-Based Technique to Optimize Join Queries in Distributed Internet Databases. J. Database Manag. 12, 4 (2001) 3-14
7. Ling, B., Ng, W.S., Shu, Y., Zhou, A.: Fuzzy Cost Modeling for Peer-to-Peer Systems. LNCS, Vol. 2872 (2004) 138-143
8. Bourne, R.A., Excelente-Toledo, C.B., Jennings, N.R.: Run-Time Selection of Coordination Mechanisms in Multi-Agent Systems. In: Proc. 14th European Conf. on Artificial Intelligence, Berlin, Germany (2000) 348-352
9. Jang, J.S.R., Sun, C.T., Mizutani, E.: Neuro-fuzzy and soft computing. Prentice Hall (1997)
KFCSA: A Novel Clustering Algorithm for High-Dimension Data

Kan Li and Yushu Liu

Dept. of Computer Science and Engineering, Beijing Institute of Technology, Beijing 100081, China
[email protected], [email protected]

Abstract. Classical fuzzy c-means and its variants cannot achieve good results when the characteristics of the samples are not distinct, and these algorithms easily fall into locally optimal solutions. To address these drawbacks, a novel Mercer-kernel-based fuzzy clustering self-adaptive algorithm (KFCSA) is presented. The Mercer kernel method implicitly maps the input data into a high-dimensional feature space through a nonlinear transformation. A self-adaptive algorithm is proposed to decide the number of clusters, which is not given in advance but is obtained automatically by a validity measure function. In addition, an attribute reduction algorithm is used to decrease the number of attributes before high-dimensional data are clustered. Finally, experiments indicate that KFCSA achieves better performance.
1 Introduction

Clustering analysis groups data points according to some distance or similarity measure so that objects in a cluster have high similarity. Popular methods such as C-means, fuzzy C-means, and their variants [1-5] represent clusters through centroids by optimizing the squared error function in the input space. If the separation boundaries among clusters are nonlinear, the performance of these methods decreases; at the same time, outliers affect the clustering results. The fuzzy c-means algorithm cannot achieve good clustering when the characteristics of the samples are not distinct, and current fuzzy clustering algorithms easily fall into locally optimal solutions because the number of clusters needs to be determined in advance. To address these disadvantages of fuzzy c-means, a novel kernel-based fuzzy clustering self-adaptive algorithm (KFCSA) is proposed in this paper. The Mercer kernel method is introduced into the fuzzy c-means method; it implicitly maps the input data into a high-dimensional feature space through a nonlinear mapping. A validity measure function is used to adjust the number of clusters iteratively instead of fixing it in advance.
2 Attribute Reduction of High-Dimension Data

A rough-set-based attribute reduction algorithm is used here to reduce the number of attributes in the terrain database (high-dimension data) in order to improve the clustering speed. Before a decision table is reduced, it should be judged
whether or not it is consistent. An inconsistent table is divided into two parts: the completely consistent table and the incompletely consistent table.

2.1 Attribute Reduction of a Consistent Decision Table

According to database theory, redundancy and dependency should be as low as possible in databases. Our algorithm takes these rules as the criteria of attribute reduction: the attribute set whose average relevance is minimal is the final result of the reduction. Conditional entropy is used to judge the relevance of attributes.

Algorithm 1. RSAR (Rough Set Based Attribute Reduction Algorithm)
Input. Decision table S = {U, A, V, f}, A = C ∪ D, with condition attributes C and decision attributes D.
Output. A set of attributes REDU.
Method
Step 1. Compute the core C_0 based on the discernibility matrix M = (m_ij)_{n×n}, where i, j = 1, 2, …, n. Set REDU = C_0.
Step 2. Each matrix element m_ij that does not include the core builds an expression by conjunction, that is, ∧ m_ij.
Step 3. Convert the above expression into its disjunctive form. Its terms S_i give the attribute reduction sets R_i = {S_i, i = 1, 2, …, n}.
Step 4. For each R_i, compute the relevance of the attributes in R_i based on the conditional entropy

    H(B|A) = − Σ_{i=1}^{n} p(a_i) Σ_{j=1}^{m} p(b_j | a_i) log p(b_j | a_i) ,

where A and B are elements of R_i, U/IND(A) = {a_1, a_2, …, a_n}, and U/IND(B) = {b_1, b_2, …, b_m}. Among the sets R_i = R_i ∪ C_0, the one whose average attribute relevance is minimal is REDU.
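As an illustration of the relevance computation in Step 4, here is a small Python sketch of the conditional entropy H(B|A) for two discrete attribute columns; the function is our own rendering of the formula above.

import math
from collections import Counter

def conditional_entropy(a_col, b_col):
    """H(B|A) = -sum_i p(a_i) sum_j p(b_j|a_i) log p(b_j|a_i)."""
    n = len(a_col)
    pa = Counter(a_col)
    pab = Counter(zip(a_col, b_col))
    h = 0.0
    for (a, b), nab in pab.items():
        p_a = pa[a] / n                  # p(a_i)
        p_b_given_a = nab / pa[a]        # p(b_j | a_i)
        h -= p_a * p_b_given_a * math.log(p_b_given_a)
    return h

# e.g. two attribute columns over the same four objects
print(conditional_entropy(['x', 'x', 'y', 'y'], [0, 1, 1, 1]))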
2.2 Results

Terrain data include DEM, water area, concealment, vegetation, barrier, shelter, distance, and traffic capacity attributes. The terrain data are from the city of Xiamen, located in the south of China; the range is from (117°38′14″, 24°33′30″) to (117°52′30″, 24°25′14″). After the attribute reduction algorithm is applied to the high-dimensional data, the DEM, water area, concealment, vegetation, and traffic capacity attributes are used for clustering.
3 Feature Space Clustering

3.1 Algorithm 2: KFCSA (Kernel Based Fuzzy C-Means Self-adaptive Algorithm)

Clustering of data in a feature space has been proposed previously [6], where c-means was expressed via the kernel trick in the hard-clustering case. In this paper, we apply the kernel function to fuzzy c-means.
A kernel function is given which is equivalent to a mapping into a high-dimensional space called the feature space (x → φ(x)); the mapping is obtained by replacing the inner product. In the feature space, a cluster center can be expressed as

    c_j = Σ_{k=1}^{n} γ_jk φ(x_k) .        (1)
The objective function is defined as

    J = Σ_{i=1}^{n} Σ_{j=1}^{c} u_ij^α ||φ(x_i) − Σ_{k=1}^{n} γ_jk φ(x_k)||² = Σ_{i=1}^{n} Σ_{j=1}^{c} u_ij^α [ k(x_i, x_i) − 2 γ_j^T k_i + γ_j^T K γ_j ] ,        (2)
i =1 j =1
k =1
where γ i = (γ i1, γ i 2 "γ in )T , k = (k1, k1 " kn ) , ki = (ki1, ki 2 " kin )T Equation uij and γ jk is as follows (3)
n
α −1 ∑ uij k k j
1
uij = c
dij
g =1
dig
∑(
1 ) α −1
, γj =
j =1
n
α ∑ uij
j =1
where dij = k ( xi , xi ) − 2γ Tj ki + γ Tj kγ j . The kernel based fuzzy c-means self-adaptive algorithm is as follows Step1. Initializes the positive parameters α , ε and iterations m=1; initialize γ j ;fix c=2; Step2. Computes the kernel matrix using Gaussian kernel k ( xi , x j ) = e
− q|| xi − x j || 2
;
) Step3. Updates uij(m ) ; calculate γ (m again; j
Step4. If max | uij( m) − uij( m −1) |≤ ε , stop; else m=m+1,go step3; Step5. If validity measure function s get the minimum value, the clustering is over; else c=c+1, go step3. 3.2 Clustering Validity Measure Analysis Validity measure function is used to estimate the number of clusters. Girolami used the block diagonal structure in the kernel matrix to determine the number of clusters[7]. Girolami’s method is used in c-means method. Other researchers applied the method to fuzzy c-means. But the method has its disadvantage. When distinguish of the data in the data set is not obvious, the block diagonal structure in the matrix is not also obvious. It easily arrives at locally optimal solution. In the paper, the number of clusters is determined through the self-adaptive algorithm. The number of clusters need not be given in advanced. The initial value of number of the clusters is supposed. Then validity measure function is proposed to justify iteratively the number of clusters. Validity measure function is to estimate the correctness of the number of clusters.
534
K. Li and Y. Liu
Compactness of clustering is expressed as c n
comp = ∑ ∑ λij
k ( xi , xi ) − 2γ Tj ki + γ Tj kγ j
i =1 j =1
nuij
(4)
2
where λij is used to judge the outlier.
⎧1 ⎪
λij = ⎨
, u ij > u lj i ≠ l ,
⎪⎩0 , others.
If λij =0, the data point is the outlier and it will be deleted. uij 2 is introduced to compactness function to strengthen the compactness of clusters. Separability of clustering is expressed as sep = min (γ iT kγ i − 2γ iT kγ j + γ Tj kγ j )
(5)
i, j
Validity measure function is defined as c n
s=
comp = sep
∑ ∑ λij
k ( xi , xi ) − 2γ Tj ki + γ Tj kγ j
i =1 j =1
min (γ iT kγ i i, j
nuij − 2γ iT kγ j
(6)
2
+ γ Tj kγ j )
where comp denotes the compactness of the clustering and sep its separability. After the number of clusters c is determined, the data set can be divided into c clusters such that the weighted sum of squares from the data in the clusters to their cluster centers reaches its minimum.
4 Experiments

4.1 Test on a Standard Data Set

To verify the validity and feasibility of the clustering algorithm, experiments were made with fuzzy c-means and our algorithm. The experimental data are the Iris data set (from the UCI Repository); we randomly select 20 data points from it. To distinguish the data, cluster centers and data points are drawn with different labels in the figures. In Fig. 1, the number of clusters is fixed in advance (c = 3). The results of the FCM algorithm show that the cluster centers easily fall into a locally optimal solution; at the same time, the results depend on the selection of the initial cluster centers. In Fig. 2, the KFCSA algorithm is used: the number of clusters (c = 3) is obtained automatically via the self-adaptive method, and three clusters can be seen. From the figures, the clustering effectiveness of our algorithm is better than that of the fuzzy c-means algorithm, and the error rate of our algorithm is lower.
Fig. 1. FCM clustering
Fig. 2. KFCSA clustering
4.2 Application to High-Dimensional Data

Terrain is analyzed with the KFCSA algorithm to determine its characteristics. The DEM, water area, concealment, vegetation, and traffic capacity attributes of the terrain data, obtained through the RSAR algorithm, are used for the clustering analysis. The terrain data are again from (117°38′14″, 24°33′30″) to (117°52′30″, 24°25′14″), located in the city of Xiamen in China. The result of the terrain analysis by the KFCSA algorithm is shown in Fig. 3; the two ellipses in Fig. 3 indicate the two clusters selected by the KFCSA algorithm.
Fig. 3. KFCSA applied in high dimensional data
5 Conclusions

The validity of a clustering algorithm depends on the characteristic distinctions among clusters. When the distinctions among clusters are not obvious, or the clusters overlap, the classical fuzzy c-means method cannot handle them well. In this paper, a Mercer-kernel-based fuzzy clustering self-adaptive algorithm is proposed. Data in the input space are mapped into the feature space by a Mercer kernel, where the characteristics of the data may be strengthened; this enables good clustering of data whose distinctions are faint. In our algorithm, a self-adaptive method is used to determine the number of clusters automatically. Experiments show that our proposed algorithm achieves better performance than
classical clustering algorithms. In addition, an attribute reduction algorithm is used to decrease the number of attributes before the terrain data are clustered.
References

1. MacQueen, J.: Some methods for classification and analysis of multivariate observations. Proc. 5th Berkeley Symposium (1967) 281-297
2. Hartigan, J., Wong, M.: A K-means clustering algorithm. Applied Statistics, Vol. 28 (1979) 100-108
3. Bezdek, J.C.: Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum Press (1981)
4. Jain, A., Dubes, R.: Algorithms for Clustering Data. Prentice Hall (1988)
5. Wallace, R.: Finding natural clusters through entropy minimization. Ph.D. Thesis, Carnegie Mellon University, CS Dept. (1989)
6. Schölkopf, B., Smola, A., Müller, K.R.: Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, Vol. 10(5) (1998) 1299-1319
7. Girolami, M.: Mercer kernel-based clustering in feature space. IEEE Trans. Neural Networks, Vol. 13(3) (2002) 780-784
An Improved VSM Based Information Retrieval System and Fuzzy Query Expansion

Jiangning Wu¹, Hiroki Tanioka², Shizhu Wang³, Donghua Pan¹, Kenichi Yamamoto², and Zhongtuo Wang¹

¹ Institute of Systems Engineering, Dalian University of Technology, Dalian, 116024, China
{jnwu, gyise, wangzt}@dlut.edu.cn
² Department of R&D Strategy, Justsystem Corporation, Tokushima, 771-0189, Japan
{hiroki_tanioka, kenichi_yamamoto}@justsystem.co.jp
³ Dalian Justsystem Co., Ltd, Dalian, 116024, China
[email protected]

Abstract. In this paper, we propose an improved information retrieval model in which the integration of modification-words and head-words is introduced into the representation of user queries and into the traditional vector space model. We show how to calculate the weights of combined terms in vectors, and we propose a new strategy to construct a thesaurus in a fuzzy way for query expansion. With the developed information retrieval system, we can retrieve documents in a relatively narrow search space and at the same time extend the coverage of the retrieval to related documents that do not necessarily contain the same terms as the given query. Experiments testing the retrieval effectiveness have been carried out on benchmark corpora. The results show that the improved information retrieval system improves retrieval performance in both precision and recall.
1 Introduction

With the information explosion on the Internet, users encounter huge amounts of information junk when they retrieve documents. There is therefore a great need for tools and methods that filter such junk out while retaining the documents users really want. The information retrieval (IR) system is one good tool for solving this problem. So far, many models have been proposed to construct effective information retrieval systems, of which the vector space model (VSM) [1] [2] [3] is the most influential; to date, this model leads the others in terms of performance [1]. It is hereby adopted in our study to construct the improved IR system. The major problem with VSM comes from the oversimplicity of its purely term-based representation of information [4]: keywords are identified, pulled out of context, and further processed to generate term vectors. Unfortunately, independent keywords cannot adequately capture document contents, resulting in poor retrieval performance. This motivates us to find a new way to represent information. It should be noted that among the identified keywords, some are nouns and verbs and some are adjectives and adverbs, according to their parts of speech. From the grammatical point of view, there is no actual sense when an adjective or
adverb appears alone. In other words, adjectives and adverbs are constraint words rather than independent concepts. In this paper, adjectives and adverbs are named modification-words (MWs), and nouns and verbs are named head-words (HWs). Ignoring the constraints of MWs results in many irrelevant documents arising from independent, meaningless modification-words. To give MWs their proper role in user queries, a new information representation method is proposed in this paper, which integrates MWs with HWs to form new combined terms. The combined terms come somewhat closer to the user's intent in the given query. Besides the above problem with VSM, there exist other problems to be solved. In the traditional VSM, the system's relevance judgment rests on the basic assumption that a query and a document are related only if they share words. However, simply by examining the terms a document and a query share, it is still difficult to determine whether the document is relevant to the user. The difficulty lies in the fact that most terms have multiple meanings (polysemy), while some concepts can be described by more than one term (synonymy). Therefore, many unrelated documents may be included in the answer set because they match some of the query terms, while other relevant documents may not be retrieved because they do not contain any of the exact query terms [5]. To handle these problems, we propose a new method in which MWs and HWs are partially expanded based on the developed fuzzy synonym thesauruses, and the expanded MWs and HWs are then recombined, according to the correlations between them, for the search process. Based on the proposed methods, we develop a new IR system that can retrieve relevant documents in a relatively narrow search space while extending the coverage of the retrieval to related documents that do not necessarily contain the same words as the given query. The experimental results show that the retrieval results obtained from our proposed IR system improve on two main measures, precision and recall.
2 System Architecture

The flowchart of the proposed IR system is shown in Figure 1. The system consists of three processing stages: the query expanding stage, the document representing stage, and the query-document matching stage. In the first stage, MWs and HWs are identified by the syntactic analyzer. If the given query has no MWs, the system follows only path 2; in this case, the proposed IR system returns the same retrieval results as normal (we call this process normal search). If the given query contains MWs, the system follows path 1: MWs and HWs are first integrated, then expanded respectively, and new terms are recombined according to the correlations between them as well as the fuzzy synonym thesauruses; the system thus returns different retrieval results compared with normal search (we call this process modification & expansion search). In the second stage, documents are represented using the modified VSM, forming the document vectors. In the last stage, structured search is conducted to obtain the similarities between queries and documents and the scores of the candidate documents. Finally, the retrieved documents whose scores exceed the predefined threshold are returned to the end user.
Fig. 1. The flowchart of the proposed IR system
3 Methods

3.1 General VSM

In the general VSM [2], each document d_i, i ∈ [1, m], where m is the total number of documents in the corpus, can be represented as a weighted vector

    d_i = (w_1i, w_2i, …, w_ki, …, w_li)^T ,        (1)
where w_ki is the weight of term t_k in document d_i (a nonnegative value), l is the number of indexing terms contained in document d_i, and T is the transpose operator. Similarly, a query can be represented as a vector in which the weights indicate the importance of each term in the overall query. Thus, the query vector q_j, j ∈ [1, n], where n is the total number of queries, can be written as

    q_j = (w_1j, w_2j, …, w_kj, …, w_lj)^T ,        (2)

where w_kj is the weight of term t_k in query q_j (a nonnegative value); l and T have the same meanings as above. The similarity s(d_i, q_j) between the two vectors can be measured by the cosine of the angle between d_i and q_j, that is,

    s(d_i, q_j) = cos(d_i, q_j) .        (3)

3.2 Modified VSM
The basic idea behind the modified VSM is to recalculate the weights of the combined terms and the similarity values between the document and query vectors presented in Formulas (1) and (2). The classic tf·idf scheme is a common way to weigh the importance and uniqueness of a term in a document. In this study, we use the following formulas, provided by Justsystem Corporation, to define tf and idf respectively:

    tf_ki = 0.5 + 0.5 × (freq / max_freq)        (4)

and

    idf_k = 1.0 + log(total_doc / dist) ,        (5)

where freq is the count of the term t_k in document d_i, max_freq is the maximum count of a term in the same document, total_doc is the total number of documents in the collection, and dist is the number of documents that contain the term t_k. The weight of the combined term can then be calculated as

    w_ki = tf_ki × idf_k .        (6)

Normalizing Formula (6), we finally get

    w_ki = (tf_ki × idf_k) / sqrt( Σ_{k=1}^{l} (tf_ki × idf_k)² ) ,        (7)

where l is the size of the indexing term set in document d_i.
The weight w_kj of term t_k in the query q_j is defined in a similar way to w_ki (that is, tf_kj × idf_k). Once the weights of all indexing combined terms in both the document d_i and the query q_j are determined, the similarity between d_i and q_j can be measured by the cosine of the angle, that is,

    s(d_i, q_j) = Σ_{k=1}^{l} w_ki w_kj / ( sqrt(Σ_{k=1}^{l} w_ki²) × sqrt(Σ_{k=1}^{l} w_kj²) ) .        (8)
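Equations (4)-(8) can be exercised with a few lines of Python. This is our illustrative rendering of the weighting and matching scheme, not the system's actual code; the sparse-dictionary representation is our own choice.

import math

def tf(freq, max_freq):
    return 0.5 + 0.5 * (freq / max_freq)                 # eq. (4)

def idf(total_doc, dist):
    return 1.0 + math.log(total_doc / dist)              # eq. (5)

def normalized_weights(raw):
    """raw: {term: tf*idf}; returns weights normalized as in eq. (7)."""
    norm = math.sqrt(sum(w * w for w in raw.values()))
    return {t: w / norm for t, w in raw.items()}

def cosine(d, q):
    """Similarity of eq. (8) for two sparse {term: weight} vectors."""
    dot = sum(w * q.get(t, 0.0) for t, w in d.items())
    nd = math.sqrt(sum(w * w for w in d.values()))
    nq = math.sqrt(sum(w * w for w in q.values()))
    return dot / (nd * nq) if nd and nq else 0.0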
3.3 Query Expansion and Fuzzy Thesaurus

Query expansion is a natural idea when, as is often the case in practice, the user's initial requests are very brief, regardless of whether the initial request terms are very specific. Enlarging requests allows both more discriminating retrieval, through matches on several terms, and more file coverage, through getting a match at all. Query expansion using thesauruses has proved to be an effective way to improve the performance of an IR system. Briefly, there are two types of thesauri: hand-crafted thesauri and corpus-based thesauri [9], the latter generated from term co-occurrence statistics. In our study, we construct the fuzzy thesaurus by calculating the simultaneous occurrences of terms and the term-term similarities in thesauruses derived from WordNet¹. Let us now consider in detail how the expansion terms and their corresponding weights are determined in a fuzzy way. Suppose that the given query q_j contains l terms describing l aspects of the query, and that all terms in the given query can be expanded according to the fuzzy thesauruses. If all the expansion terms are added directly into the original query vector, the new expanded query vector takes the form

    q_j = (w_1j, w_1j,1, w_1j,2, …, w_1j,N1, w_2j, …, w_sj, …, w_lj, w_lj,1, …, w_lj,Nl)^T ,        (9)

where w_lj,Nl is the weight of an expansion term related to term t_l in query j, N_l is the number of expansion terms corresponding to term t_l, and T is the transpose operator. To calculate the weights of the expansion terms, we must first obtain the nearness degrees between the original term and the expansion terms. In this paper, they are determined mainly by a term co-occurrence measure: assuming that two terms occurring frequently together in the same document relate to the same concept, the similarity of an original query term t_k and an expansion term t_e can be determined by a term-term relevance coefficient r_ke over thesauruses derived from WordNet, such as the Tanimoto coefficient [9]:
¹ WordNet: http://www.cogsci.princeton.edu/~wn/
    r_ke = n_ke / (n_k + n_e − n_ke) ,        (10)

where n_k is the number of synonym sets in the thesauruses that contain term t_k, n_e is the number of synonym sets that contain term t_e, and n_ke is the number of synonym sets that contain both terms. This relevance coefficient represents the nearness degree of term t_k to the expansion term t_e and takes values in the interval [0, 1]. All nearness values for term t_k and all corresponding expansion terms form a relevance matrix R_ke:
    R_ke = [ r_11    r_12    …  r_1N_k
             r_21    r_22    …  r_2N_k
             …       …       …  …
             r_N_k1  r_N_k2  …  r_N_kN_k ] ,        (11)
where the element rke represents the nearness degree of term tk to term te, rke ∈ [0, 1], and Nk is the number of expansion terms. For k ≠ e, rke = rek; for k = e, rkk = ree = 1. In real-world applications, not all expansion terms are important enough to the document collection. From this point of view, we therefore define a membership value between document di and the expansion terms in a fuzzy way, based on the term correlation matrix Rke. In this research, the membership value is defined as follows:

\mu_{k d_i} = 1 - \prod_{m=1,\, m \neq k}^{N_k} (1 - r_{km}), \quad r_{km} \in R_{ke}    (12)

where µ_{k d_i} (k = 1, …, Nk) denotes the membership degree of term tk to document di, computed as the complement of the algebraic product of the negated correlations over all expansion terms involved; rkm takes its values from the term correlation matrix Rke. The adoption of an algebraic sum over all expansion terms with respect to the given index term tk (instead of the classical maximum function) allows a smooth transition of the values of the µ_{k d_i} factor.
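The following minimal Python sketch chains formulas (10) and (12): it computes a Tanimoto coefficient from synonym-set counts and derives the fuzzy membership degree. The synset counts here are illustrative values, not actual WordNet data.

```python
def tanimoto(n_k, n_e, n_ke):
    """Formula (10): term-term relevance from synonym-set counts."""
    return n_ke / (n_k + n_e - n_ke)

def membership(correlations):
    """Formula (12): complement of the product of negated correlations.

    `correlations` are the r_km entries of the row of R_ke for term t_k,
    excluding the diagonal entry r_kk.
    """
    prod = 1.0
    for r in correlations:
        prod *= (1.0 - r)
    return 1.0 - prod

# Illustrative numbers: t_k occurs in 4 synsets, t_e in 3, both in 2.
r_ke = tanimoto(4, 3, 2)          # 2 / (4 + 3 - 2) = 0.4
print(membership([r_ke, 0.1]))    # 1 - (0.6 * 0.9) = 0.46
```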
Once the relatively important expansion terms are determined, the weighting method can be applied to them. For a query j, q_j = \{w_{1j}, w_{2j}, \ldots, w_{kj}, \ldots, w_{lj}\}^T, and an expansion term te, the similarity between qj and te can be defined as follows [11]:

s(q_j, t_e) = \sum_{t_{kj} \in q_j} w_{kj} \times r_{ke}, \quad k \in [1, l]    (13)
where wkj is the weight of term tk when it is contained in query j, rke is the nearness value between the term tk and the expansion term te, and l is the total number of indexing terms in the collection.
The weight of the actual expansion term te, denoted as wej, with respect to the query vector qj is defined based on s(qj, te) as:

w_{ej} = \frac{s(q_j, t_e)}{\sum_{t_{kj} \in q_j} w_{kj}}    (14)
According to the modified weights of all expansion terms corresponding to the original term tk, an expansion term vector can then be formed:

q_k = \{w_{ej,1}, w_{ej,2}, \ldots, w_{ej,N_e}\}^T    (15)

where Ne is the number of expansion terms with respect to the original term tk. By adding the above vector into the original query vector, the expanded query vector can then be obtained.
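A minimal sketch of formulas (13)-(14), with hypothetical variable names of our own: it scores one candidate expansion term against the query terms and normalizes the score into an expansion weight.

```python
def expansion_weight(query_weights, nearness):
    """Formulas (13)-(14): weight w_ej of one expansion term t_e.

    `query_weights` maps each query term t_k to its weight w_kj;
    `nearness` maps t_k to the relevance coefficient r_ke for t_e.
    """
    s_qe = sum(w * nearness.get(t, 0.0) for t, w in query_weights.items())
    return s_qe / sum(query_weights.values())

# Illustrative two-term query and one candidate expansion term.
q = {"t1": 0.8, "t2": 0.5}
print(expansion_weight(q, {"t1": 0.4, "t2": 0.2}))  # (0.32+0.10)/1.3 = 0.323
```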
4 Effectiveness Evaluation

We use a small collection, LA-times, contained in the TREC2 corpus to examine the effectiveness of the proposed IR system. The queries and a list of documents relevant to each query are randomly chosen from this collection; statistics on them are listed in Table 1 below. To improve the performance of the proposed IR system, we create a fuzzy synonym thesaurus that stores a number of adjectives to expand the MWs occurring in the user queries and the documents. We use WordNet to determine the related terms, where only the synonymy relations implied in WordNet synsets are concerned. In our study, we use precision and recall to evaluate the effectiveness of the proposed IR system.

To verify the correctness and effectiveness of the proposed IR system developed based on the methods described above, three types of experiments were implemented at first: Normal Search (NS), Head-word WeighTing Search (HWTS) and Head-word WeighTing plus Expansion Search (HWTplusES). In NS, only the CB Search tool developed by Justsystem Corp. is involved; it is used as a baseline against the other two search approaches. The average precision and recall obtained as benchmarks by using NS on the LA-times collection are presented in Table 2 below. In HWTS, the CB Search tool is also used alone. However, differing from NS, in HWTS the weights of head-words are adjusted by adding an importance factor that reflects the importance of the head-words in the examined query, in order to heighten the weights of head-words (mainly referring to nouns in our experiments). An increased precision against NS is therefore obtained for the LA-times collection, see Table 2. To test the fuzzy thesaurus and query expansion method addressed previously, the third experiment, named HWTplusES and based on HWTS, was then designed and implemented. In this experiment, the VSM modifying module and the document retrieving module are both used. The results obtained by using HWTplusES on the LA-times collection are also summarized in Table 2.

In addition, to reveal the constraints of modifiers on the associated head-words, ad hoc adjectives and nouns, the experiment named Modifier Adjective Search (MAS) was conducted on the LA-times collection by implementing a combined search. The retrieved results shown in Table 2 indicate that the precision of the proposed IR system increases greatly, with a 5.16-percentage-point rise against the normal search. Figure 2 illustrates 11-point precision-recall curves using the three different searches with respect to the LA-times collection for all 108 queries. Besides the precision-recall curves, we also draw a bar graph for the LA-times collection using all 108 queries in order to compare NS, HWTS and HWTplusES in another way, see Figure 3, in which each bar corresponds to one point in Figure 2.

2 TREC: http://trec.nist.gov/

Table 1. Statistics for the selected collection
Collection name    Number of documents    Number of queries    Size (Mbytes)
LA-times           3,319                  108                  15
Table 2. Average precision and recall obtained from four different search approaches

Name of approach    Avg. Precision (%)    Avg. Recall (%)
NS                  20.01                 73.73
HWTS                20.10                 73.80
HWTplusES           20.08                 73.71
MAS                 25.17                 34.84
[Figure: 11-point precision-recall curves for NS, HWTS and HWTplusES; x-axis recall (0 to 1), y-axis precision (0 to 1)]

Fig. 2. Precision-recall curves using three different searches with respect to the LA-times collection for all 108 queries
[Figure: bar graph over the 11 recall points comparing HWTS and HWTplusES; y-axis precision differences (-0.01 to 0.004)]

Fig. 3. A bar graph for the LA-times collection using all 108 queries, corresponding to the 11 precision-recall points in the two experiments HWTS and HWTplusES
5 Concluding Remarks

In this paper, a new improved IR system based on VSM is proposed. The results obtained from the experiments lead us to believe that the integration of MWs and the associated HWs is indeed an effective way of improving the performance of a vector-based IR system. The proposed system not only performs well in terms of precision and recall, but also provides a fuzzy thesaurus that is very useful for query expansion. The automatic approach to constructing such a fuzzy thesaurus can save manpower to a great extent and, meanwhile, avoid the inconsistencies resulting from human mistakes. After analyzing the method and experimenting with the TREC corpus, it can be concluded that during text retrieval, restricting HWs by the associated MWs improves the precision of the proposed IR system compared to the normal CB Search system, and expanding MWs and HWs simultaneously improves the recall of the proposed IR system compared to the normal CB Search system.
Acknowledgements

The work reported in this paper is part of an international collaborative research project sponsored by Justsystem Corporation of Japan. The authors would like to thank the master students Huinan Ma and Jun Zhang of Dalian University of Technology, who did much work on the documentation and experimentation.
References

1. Kraft, D.H., Petry, F.E.: Fuzzy information systems: managing uncertainty in databases and information retrieval systems. Fuzzy Sets and Systems 90 (1997) 183-191
2. Salton, G., Wong, A., Yang, C.S.: A vector space model for automatic indexing. Communications of the ACM 18 (1975) 613-620
3. Salton, G., Buckley, C.: Term-weighting in information retrieval using the term precision model. Journal of the Association for Computing Machinery 29 (1982) 152-170
4. Papadimitriou, C.H., Raghavan, P., Tamaki, H., Vempala, S.: Latent semantic indexing: A probabilistic analysis. Journal of Computer and System Sciences 61 (2000) 217-235
5. Letsche, T.A., Berry, M.W.: Large-scale information retrieval with latent semantic indexing. Information Sciences 100 (1997) 105-137
6. Chandren-Muniyandi, R.: Neural network: An exploration in document retrieval system. In: Proceedings of TENCON 2000, Vol. 1 (2000) 156-160
7. Ramirez, C., Cooley, R.: Case-based reasoning model applied to information retrieval. In: IEE Colloquium on Case Based Reasoning: Prospects for Applications (1995) 9/1-9/3
8. Liu, G.: The semantic vector space model (SVSM): A text representation and searching technique. In: Proceedings of the Twenty-Seventh Hawaii International Conference on System Sciences, Vol. IV: Information Systems: Collaboration Technology, Organizational Systems and Technology (1994) 928-937
9. Mandala, R., Tokunaga, T., Tanaka, H.: Query expansion using heterogeneous thesauri. Information Processing and Management 36 (2000) 361-378
10. Kim, M.C., Choi, K.S.: A comparison of collocation-based similarity measures in query expansion. Information Processing and Management 35 (1999) 19-30
11. Qiu, Y., Frei, H.: Concept based query expansion. In: Proceedings of the 16th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval (1993) 160-169
12. Nie, J.Y., Jin, F.: Integrating logical operators in query expansion in vector space model. In: Workshop on Mathematical/Formal Methods in Information Retrieval, 25th ACM-SIGIR (2002)
The Extraction of Image's Salient Points for Image Retrieval

Wenyin Zhang1,2, Jianguo Tang1,2, and Chao Li1

1 Chengdu University of Information Technology, Chengdu 610041, P.R. China
2 Chengdu Institute of Computer Applications, Chinese Academy of Sciences, Chengdu 610041, P.R. China
Abstract. A new salient point extraction method in the Discrete Cosine Transformation (DCT) compressed domain for content-based image retrieval is proposed in this paper. Using a few significant DCT coefficients, we provide a robust self-adaptive salient point extraction algorithm, and, based on the salient points, we extract 13 rotation-, translation- and scale-invariant moments as the image shape features for retrieval. Our system reduces the amount of data to be processed and only needs to do partial entropy decoding and partial de-quantization. Therefore, the proposed scheme can accelerate the work of image retrieval. The experimental results also demonstrate that it improves performance in both retrieval efficiency and effectiveness.

Keywords: Salient Point, Image Retrieval, Discrete Cosine Transformation, DCT.
1 Introduction

Digital image databases have grown enormously in both size and number over the years [1]. In order to reduce bandwidth and storage space, most image and video data are stored and transmitted in some kind of compressed format. However, compressed images cannot be conveniently processed for image retrieval because they need to be decompressed beforehand, and that means an increase in both complexity and search time. Therefore, it is important to develop an efficient image retrieval technique that retrieves the wanted images in the compressed domain. Nowadays, more and more attention is being paid to compressed-domain based image retrieval techniques [2], which extract image features from the compressed data of the image. JPEG is the image compression standard [3] using DCT; it is widely used in large image databases and on the World Wide Web because of its good compression rate and image quality. However, conventional image retrieval approaches for JPEG compressed images need full decompression, which consumes too much time and requires a large amount of computation. Some recent studies [4,5,6,7,8,9,10] have resulted in improvements in that image features can be directly extracted in the compressed domain without full decompression.
The purpose of this paper is to propose a novel compressed-image retrieval method based on salient points [11] computed in the DCT compressed domain. Salient points are interesting for image retrieval because they are located at visual focus points and thus can capture the local image information and reduce the amount of data to be processed. The salient points are related to the visually most important parts of the image and lead to more discriminant image features than interest points such as corners [14]. Unlike traditional interest points, salient points should not be clustered in a few regions. It is easy to understand that using a small number of such points instead of the whole image reduces the amount of data to be processed. First, based on a small part of the important DCT coefficients, we provide a new salient point extraction algorithm which is very robust to noise, rotation, translation and scaling, and to most common image processing operations such as lightening, darkening, blurring, compressing and so on. Then, we adaptively choose some important salient points to constitute a binary salient map of the image, which represents the shape of the objects in the image. Last, we extract 13 rotation-, translation- and scale-invariant moments [12,13] from the salient map as the shape features of the image for retrieval.

The remainder of this paper is organized as follows. In Section 2, we introduce the work related to JPEG compressed-image retrieval. In Section 3, we discuss our new scheme in detail, followed by the experimental results and analysis. Finally, Section 5 concludes the paper.
2 Related Works
Direct manipulation of compressed images and videos offers low-cost processing for real-time multimedia applications, and it is more efficient to directly extract features in the JPEG compressed domain. As a matter of fact, many JPEG compressed-image retrieval methods based on DCT coefficients have been developed in recent years. Climer and Bhatia proposed a quadtree-structure-based method [4] that organizes the DCT coefficients of an image into a quadtree structure. This way, the system can use the coefficients on the nodes of the quadtree as image features. However, although such a retrieval system can effectively extract features from DCT coefficients, its main drawback is that the computation of the distances between images grows undesirably fast when the number of relevant images is large or the threshold value is large. Feng and Jiang proposed a statistical parameter-based method [5] that uses the mean and variance of the pixels in each block as image features. The mean and variance can be computed directly from the DCT coefficients. However, this system has to calculate the mean and variance of each block in each image, including the query image and the images in the database, and this calculation is a computationally heavy load. Chang, Chuang and Hu provided a direct JPEG compressed-image retrieval technique [6] based on the DC difference and the AC correlation. Instead of fully decompressing the images, it only needs to do partial entropy decoding, and it extracts the DC difference and the AC correlation as two image features.
However, although this retrieval system is faster than the methods of [4,5], it does not do well in anti-rotation. The related techniques are not limited to the above three typical methods. Shneier [7] described a method of generating keys of JPEG images for retrieval, where a key is the average value of the DCT coefficients computed over a window. Huang [8] rearranged the DCT coefficients and then obtained the image contour for image retrieval. B. Furht [9] and Jose A. Lay [10] made use of the energy histograms of DCT coefficients for image or video retrieval. Most image retrieval methods based on the DCT compressed domain strengthened the effectiveness and efficiency of image retrieval [2]. But most of this research focused on global statistical feature distributions, which have limited discriminating power because they are unable to capture local image information or shape information. In our proposed approach, we use the image salient points computed from a small part of the significant DCT coefficients to describe the image features. The salient points give locally outstanding information and, on the whole, provide the shape features of the image.
3 The Proposed Method

In this section, we introduce our retrieval method based on salient points in detail. The content of the section is arranged in the sequence: edge point detection → salient point extraction → image feature extraction.

3.1 Fast Edge Detection Based on DCT Coefficients

Edges are significant local changes in the image and are an important feature for image analysis because they are relevant to estimating the structure and properties of the objects in the scene. Here we provide a fast edge detection algorithm in the DCT domain which directly computes the pixel gradients from the DCT coefficients to obtain the edge information. Based on it, we give the salient point extraction algorithm. The 8 × 8 inverse DCT formula is as follows:

f(x, y) = \frac{1}{4} \sum_{u=0}^{7} \sum_{v=0}^{7} C(x, u) C(y, v) F(u, v)    (1)

where C(x, u) = c(u) \cos\frac{(2x+1)u\pi}{16}, with c(x) = \frac{1}{\sqrt{2}} for x = 0 and c(x) = 1 for x ≠ 0.

Taking derivatives of formula (1), we get:

f'(x, y) = \frac{\partial f(x, y)}{\partial x} + \frac{\partial f(x, y)}{\partial y} = \frac{1}{4} \sum_{u=0}^{7} \sum_{v=0}^{7} C'(x, u) C(y, v) F(u, v) + \frac{1}{4} \sum_{u=0}^{7} \sum_{v=0}^{7} C(x, u) C'(y, v) F(u, v)    (2)

where C'(x, u) = -\frac{u\pi}{8} c(u) \sin\frac{(2x+1)u\pi}{16}.
From equation (2) we can compute the pixel gradient at (x, y); its magnitude is given by:

G(x, y) = \left| \frac{\partial f(x, y)}{\partial x} \right| + \left| \frac{\partial f(x, y)}{\partial y} \right|    (3)
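As a concrete illustration of equations (1)-(3), the following minimal NumPy sketch builds the basis matrices C and C' and computes the gradient-magnitude map of one 8 × 8 block directly from its DCT coefficients. It is a straightforward rendering of the formulas, not the authors' optimized version (their trigonometric simplifications are derived below).

```python
import numpy as np

def dct_basis():
    """C(x,u) of formula (1) and its derivative C'(x,u) of formula (2)."""
    x, u = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    c = np.where(u == 0, 1 / np.sqrt(2), 1.0)
    C = c * np.cos((2 * x + 1) * u * np.pi / 16)
    Cp = -(u * np.pi / 8) * c * np.sin((2 * x + 1) * u * np.pi / 16)
    return C, Cp

def block_gradient(F):
    """|df/dx| + |df/dy| of formula (3) for one 8x8 DCT block F."""
    C, Cp = dct_basis()
    dfx = 0.25 * Cp @ F @ C.T   # derivative along x
    dfy = 0.25 * C @ F @ Cp.T   # derivative along y
    return np.abs(dfx) + np.abs(dfy)
```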
In order to simplify computation, we reduce the angle (2x+1)uπ/16 to an acute angle [15]. Let (2x+1)u = 8(4k+l) + q_{x,u}, where k, l and q_{x,u} are integers with q_{x,u} = (2x+1)u mod 8, k = ⌊(2x+1)u/32⌋, l = ⌊(2x+1)u/8⌋ mod 4, 0 ≤ q_{x,u} < 8 and 0 ≤ l < 4. Then:

\sin\frac{(2x+1)u\pi}{16} = \sin\frac{(8(4k+l)+q_{x,u})\pi}{16} = \begin{cases} \sin(q'_{x,u}\pi/16), & q'_{x,u} = q_{x,u},\; l = 0 \\ \sin(q'_{x,u}\pi/16), & q'_{x,u} = 8 - q_{x,u},\; l = 1 \\ -\sin(q'_{x,u}\pi/16), & q'_{x,u} = q_{x,u},\; l = 2 \\ -\sin(q'_{x,u}\pi/16), & q'_{x,u} = 8 - q_{x,u},\; l = 3 \end{cases} = (-1)^{\lceil (l-1)/2 \rceil} \sin(q'_{x,u}\pi/16)    (4)

Similarly, we can get:

\cos\frac{(2x+1)u\pi}{16} = (-1)^{\lfloor (l+1)/2 \rfloor} \cos(q'_{x,u}\pi/16)    (5)

For formulae (4) and (5), the signs and the values q'_{x,u} can be decided beforehand according to x and u. Let ss_{x,u} and cs_{x,u} be the signs of formulae (4) and (5). The tables of ss_{x,u} and q'_{x,u} are as follows (rows x = 0..7, columns u = 0..7):

ss_{x,u} =
+ + + + + + + +
+ + + + + + - -
+ + - - - + + +
+ + + - - + + -
+ + - - + + - -
+ + - + + - + +
+ + - + - + + -
+ + - + - + - +

q'_{x,u} =
0 1 2 3 4 5 6 7
0 3 6 7 4 1 2 5
0 5 6 1 4 7 2 3
0 7 2 5 4 3 6 1
0 7 2 5 4 3 6 1
0 5 6 1 4 7 2 3
0 3 6 7 4 1 2 5
0 1 2 3 4 5 6 7
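The reduction in formulas (4)-(5) is easy to verify numerically; the short sketch below (a verification aid of ours, not part of the original paper) derives l, q'_{x,u} and the sine sign for every (x, u) pair and checks them against direct evaluation.

```python
import math

for x in range(8):
    for u in range(8):
        t = (2 * x + 1) * u
        q, l = t % 8, (t // 8) % 4
        qp = q if l in (0, 2) else 8 - q          # q'_{x,u} of formula (4)
        ss = 1 if l in (0, 1) else -1             # sign table ss_{x,u}
        assert math.isclose(math.sin(t * math.pi / 16),
                            ss * math.sin(qp * math.pi / 16), abs_tol=1e-12)
```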
The table of cs_{x,u} can be obtained in the same way as that of ss_{x,u}. To save more time, according to the Taylor formula we expand sin(q'_{x,u}π/16) and cos(q'_{x,u}π/16) at π/4:

\sin\frac{q'_{x,u}\pi}{16} = \sum_{k=0}^{n} \sin^{(k)}\left(\frac{\pi}{4}\right) \frac{\left(\frac{(q'_{x,u}-4)\pi}{16}\right)^{k}}{k!} + R_n    (6)

\cos\frac{q'_{x,u}\pi}{16} = \sum_{k=0}^{n} \cos^{(k)}\left(\frac{\pi}{4}\right) \frac{\left(\frac{(q'_{x,u}-4)\pi}{16}\right)^{k}}{k!} + R_n    (7)

where |R_n| < \frac{\left|\frac{(q'_{x,u}-4)\pi}{16}\right|^{n+1}}{(n+1)!} \le \frac{(3\pi/16)^{n+1}}{(n+1)!}.

When we expand up to second order, equations (6) and (7) can be approximated as follows:

\sin\frac{q'_{x,u}\pi}{16} \approx \frac{\sqrt{2}}{2}\left[1 - \frac{\pi}{16}(4 - q'_{x,u}) - \frac{\pi^2}{512}(4 - q'_{x,u})^2\right]    (8)

\cos\frac{q'_{x,u}\pi}{16} \approx \frac{\sqrt{2}}{2}\left[1 + \frac{\pi}{16}(4 - q'_{x,u}) - \frac{\pi^2}{512}(4 - q'_{x,u})^2\right]    (9)

The residual error R_2 is less than 0.034. This suggests that equations (8) and (9) are determined approximately by q'_{x,u} alone and can be calculated off-line. As such, the C'(x, u) and C(x, u) in equation (2) can also be calculated approximately off-line from q'_{x,u}, ss_{x,u} and cs_{x,u}, which means that the coefficients of the expansion of equation (2) can be computed in advance, so that equation (3) depends only on the DCT coefficients F(u, v). The computation of equation (3) is thus much simplified. Furthermore, we need not use all the coefficients to compute equation (2): most of the high-frequency DCT coefficients are zero and contribute nothing to the values of the edge points, so they can be omitted. This means we can use a small part of the low-frequency DCT coefficients to compute the edge points, saving even more computation cost.

Fig. 1 gives an example of edge detection using different numbers of DCT coefficients. From Fig. 1 we can see that the more DCT coefficients are used, the smoother the edge. As the number of DCT coefficients used decreases, the "block effect" becomes more and more obvious. In an 8 × 8 block, the more the gray level changes, the larger every edge point value. Fig. 2 gives another example of edge detection with the first 4 × 4 DCT coefficients.

3.2 Salient Points Computation
According to the analysis in Sec. 3.1, the edge points in a block reflect the gray-level changes in that block: the more changes, the larger the edge point values. We therefore sum all the edge point values in one block to form one salient point value, which means that one 8 × 8 block corresponds to one salient point.
Fig. 1. An example of edge detection: (a) Lena.jpg; (b)-(h) the edge images of Lena, computed respectively with the first n × n DCT coefficients, 2 ≤ n ≤ 8
Fig. 2. Another example of edge detection, computed with the first 4 × 4 DCT coefficients
If M × N stands for the size of an image, the maximum number of its salient points is M/8 × N/8. Let Sp(x', y') be the salient point value at (x', y'), 0 ≤ x' < M/8, 0 ≤ y' < N/8. It can be computed as follows (γ is a parameter, 2 ≤ γ ≤ 7):

Sp(x', y') = \sum_{x = x' \times 8}^{x' \times 8 + \gamma} \sum_{y = y' \times 8}^{y' \times 8 + \gamma} |G(x, y)|    (10)
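A minimal sketch of formula (10): given per-block gradient magnitudes G (for instance from the block_gradient helper sketched in Sec. 3.1), the salient point value of each 8 × 8 block is the sum of |G| over its top-left (γ+1) × (γ+1) pixels. The layout of `gradient_blocks` is an assumption of ours.

```python
import numpy as np

def salient_point_value(G_block, gamma=4):
    """Formula (10): sum of |G| over the (gamma+1) x (gamma+1) corner."""
    return np.abs(G_block[:gamma + 1, :gamma + 1]).sum()

def salient_map(gradient_blocks, gamma=4):
    """One salient point per 8x8 block: an (M/8) x (N/8) map of Sp values."""
    return np.array([[salient_point_value(g, gamma) for g in row]
                     for row in gradient_blocks])
```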
3.3 Adaptive Selection of Salient Points

Not all salient points are important; we want to adaptively select the salient points with larger values. The number of salient points extracted clearly influences the retrieval results: too few salient points cannot characterize the image, while too many salient points increase the computation cost. A given threshold T may not be suited to all images. Through experiments we have found that the gray-level changes in a block are relatively well reflected by the variance (denoted by σ) of the AC coefficients in the block: the more changes, the larger the variance. Let M_sp be the mean value of Sp(x', y'). We adaptively select the salient points which satisfy the following condition:

λ × Sp(x', y') > µ × M_sp    (11)

where λ = σ/128, 0 ≤ λ ≤ 1;
Sim_4(q, d) = \begin{cases} \lambda\, Sim_2(q, d) + (1-\lambda) \left( \dfrac{|q' \cap d| + ne(q_a \cap d)}{1 + \max(mms) - \min(mms)} \right)^{\alpha} \left( \dfrac{|q' \cap d| + ne(q_a \cap d)}{|q'| + ne(q_a)} \right)^{\beta}, & |q' \cap d| + ne(q_a \cap d) > 1 \\ Sim_2(q, d), & |q' \cap d| + ne(q_a \cap d) = 1 \end{cases}    (5)
Formula (5) is modified from formula (4) by adding the query expansion words: both query words and query expansion words are considered when computing the span size ratio and the word matching ratio. When counting the number of query words, the set of query expansion words (q_a) as a whole counts as one query word. The ne(·) function checks whether the set q_a ∩ d is empty: if |q_a ∩ d| > 0 then ne(q_a ∩ d) returns 1, representing that a query expansion word occurs in the document, while 0 represents that no query expansion word occurs in the document.

3.3 Case Study of Computing Similarity
Given a question Q3: 地球距离太阳有多远? (How far away is the earth from the sun?), the query words are {地球 (earth), 太阳 (sun), 距离 (distance)}, and the question type is "距离 (num_distance)". After query expansion, the query expansion words are {米 (meter), 里 (li), 公里 (kilometer), 英尺 (foot), 码 (yard), 厘米 (centimeter), 寰球 (globe), 球 (sphere)}. The process of computing the similarity between the question and a document based on the minimal matching span is shown as follows:
Assume that there is a document d with the query words matching at the following positions: Pos_d(地球) = {20, 35, 60}, Pos_d(太阳) = {38, 70}, Pos_d(距离) = {Φ}. Before query expansion, the matching spans (ms) are {20,35,38,60,70}, {20,35,70}, {20,38}, {35,38}, {38,60}, …, {60,70}, obtained by Definition 1, and the minimal matching span (mms) is {35,38}, obtained by Definition 2. So the span size ratio is 2/(1+38−35) = 0.5, and the word matching ratio is 2/3 = 0.67. Take α = 1/8, β = 1, and λ = 0.4. Assuming the global similarity before query expansion, Sim_1(q, d) = 0.72, is computed by formula (1), then the minimal matching span similarity of the unexpanded query is

Sim_3(q, d) = 0.4 × 0.72 + 0.6 × (0.5)^{1/8} × (0.67)^1 = 0.657.
According to the question type "距离 (distance)", the query is expanded. Assume that the matching positions of the query expansion words in the document are as follows: Pos_d(公里) = {41}, Pos_d(英尺) = {Φ}, Pos_d(里) = {Φ}, Pos_d(码) = {Φ}, Pos_d(米) = {180}, Pos_d(厘米) = {Φ}, Pos_d(寰球) = {Φ}, Pos_d(球) = {Φ}, in which the expansion words "公里" and "米" occur in the document. Therefore, the distribution of each of the two words must be considered separately, so that one of them is selected to compute the similarity.

For the expansion word "公里", the corresponding ms are {20,35,38,41,60,70}, {20,35,38,41}, {35,38,41}, {20,38,41}, …, {20,41,70}, obtained by Definition 1, and the minimal matching span (mms) is {35,38,41}, obtained by Definition 2, so the span size ratio is 3/(1+41−35) = 0.429. Similarly, for the word "米", the corresponding ms are {20,35,38,60,70,180}, {20,35,38,180}, {35,38,180}, …, {60,70,180}, and the minimal matching span (mms) is {60,70,180}, so the span size ratio is 3/(1+180−60) = 0.025.

Because the span size ratio of "公里" is bigger than that of "米", "公里" is selected as the expansion word to compute the similarity. The word matching ratio after query expansion is 3/4 = 0.75. Take α = 1/8, β = 1, λ = 0.4. Assuming the global similarity after query expansion, Sim_2(q, d) = 0.8, is computed by formula (3), then the minimal matching span similarity after query expansion is

Sim_4(q, d) = 0.4 × 0.8 + 0.6 × (0.429)^{1/8} × (0.75)^1 = 0.725.
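The arithmetic of this case study is easy to reproduce; the following minimal Python sketch (variable names are ours, not the paper's) evaluates the expanded case of formula (5) with the numbers above.

```python
def mms_similarity(global_sim, matched, span, match_ratio,
                   lam=0.4, alpha=1/8, beta=1.0):
    """Formula (5), expanded case: lam*Sim2 + (1-lam)*ssr^alpha*wmr^beta."""
    ssr = matched / (1 + span)           # span size ratio
    return lam * global_sim + (1 - lam) * ssr ** alpha * match_ratio ** beta

# After expansion: mms = {35, 38, 41}, so span = 41 - 35; three of the
# four countable query "words" (expansion set counted once) match.
print(mms_similarity(0.8, 3, 41 - 35, 3 / 4))   # ~0.725
```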
4 Experiment and Evaluation

The experiment focuses on evaluating the effect that query expansion has on the performance of answer-document retrieval. At present, there is no uniform test data set of questions and answer-documents for Chinese question answering systems, and the purpose of answer retrieval in a question answering system is to retrieve the documents containing correct answers. Therefore, a data set had to be collected and tagged for the experiments. Ten fine-grained question types, namely "价格 (num_price)", "速度 (num_speed)", "温度 (num_temperature)", "年龄 (num_age)", "重量 (num_weight)", "距离 (num_distance)", "频率 (num_frequency)", "面积 (num_area)", "岛屿 (loc_island)" and "年份 (time_year)", are selected for this experiment.
We collect 20 questions for each fine-grained type, obtain the query words by parsing each question, and submit them to the Baidu search engine; then we extract the first 30 web pages for each question as the candidate answer-documents, thereby establishing 200 Chinese questions and more than 6000 corresponding documents. Finally, we process these documents, tagging whether each document contains a corresponding answer, and tagging the number of the corresponding answer for documents containing correct answers. Thus a test data set of questions and answers is established.

Generally, in information retrieval, the evaluation measures are precision and recall. For a Chinese question answering system, correct answers need to be further extracted from the recalled documents, so evaluation emphasizes whether the recalled documents contain correct answer-documents. Therefore, there is a quite simple evaluation measure of precision, a@n. It only checks whether the first n recalled documents contain a correct answer-document: if they do, the score is 1, and 0 otherwise. Over all test questions, a@n is the number of questions whose first n recalled documents contain a correct answer-document, divided by the total number of questions in the test set. The experiment uses the precision a@n to evaluate answer-document retrieval.

In order to verify the effect of the proposed method, using the vector space model (VSM) and the minimal matching span (MMS) to retrieve answer-documents, we evaluate answer-document retrieval without query expansion (unexpanded) and with query expansion (expanded) for the 200 questions in the test set. We use formula (1) (VSM unexpanded), formula (3) (VSM expanded), formula (4) (MMS unexpanded) and formula (5) (MMS expanded) to compute the document similarity, respectively. Table 2 lists the answer retrieval results before and after query expansion. When the number of recalled documents evaluated is 5, 8, 10 and 15 respectively, the document retrieval precision shows that query expansion makes a substantial improvement over the unexpanded baseline for both methods; the percentage in brackets is the improvement ratio after query expansion. The method based on MMS performs better than the one based on VSM.

Table 2. Comparison of answer-document retrieval results before and after query expansion based on MMS and VSM

a@n     VSM unexpanded    VSM expanded      MMS unexpanded    MMS expanded
a@5     0.445             0.51 (+14.6%)     0.505             0.71 (+40.6%)
a@8     0.595             0.665 (+11.8%)    0.655             0.78 (+19.1%)
a@10    0.68              0.745 (+9.7%)     0.75              0.855 (+14%)
a@15    0.775             0.85 (+9.7%)      0.855             0.96 (+12.3%)
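The a@n measure used above is straightforward to compute; a minimal sketch of ours, with a toy data layout:

```python
def a_at_n(ranked_relevance, n):
    """a@n: fraction of questions whose top-n documents hold an answer.

    `ranked_relevance` maps each question id to a ranked list of 0/1
    flags marking whether each retrieved document contains the answer.
    """
    hits = sum(1 for flags in ranked_relevance.values() if any(flags[:n]))
    return hits / len(ranked_relevance)

# Two toy questions: only the first is answered within the top 2 documents.
print(a_at_n({"q1": [0, 1, 0], "q2": [0, 0, 0]}, 2))   # 0.5
```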
5 Conclusions

The purpose of answer-document retrieval in a Chinese question answering system is to recall the documents containing correct answers to questions. The query expansion methods
used in information retrieval are not fully suitable for question answering systems. According to the features of Chinese question answering systems, this paper proposes a new query expansion method that establishes related words for specific question types through a statistical method, and expands the query with the related words for the specific question type and with synonyms from HowNet. In order to verify the effect of this method, on the basis of the vector space model, this paper also proposes a method of computing the similarity between questions and documents based on the minimal matching span. This method fully considers the effects of query words and query expansion words on answer-document retrieval. The experimental results show that in a Chinese question answering system, answer-document retrieval achieves quite good results using the query expansion method proposed in this paper. Further research will focus on the following aspects: 1. query expansion for specific question types for extracting sentences as answers; 2. answer extraction according to the question type and answer sentence model.
Multinomial Approach and Multiple-Bernoulli Approach for Information Retrieval Based on Language Modeling

Hua Huo1,2, Junqiang Liu2, and Boqin Feng1

1 Department of Computer Science, Xi'an Jiaotong University, P.R. China
[email protected]
2 School of Electronics and Information, Henan University of Science and Technology, P.R. China
1
Introduction
Ponte and Croft’s multiple-Bernoulli approach in [4] and Miller’s multinomial approach in [3] are typical retrieval methods based language model. For further improving retrieval performance, we present a new multiple-Bernoulli approach and a new multinomial approach. Different from the two approaches in which the term frequencies in the query and document are not captured, we employ the term frequencies and use Dirchlet smoothing method to solve the ”zero problem” in the approaches instead of the linear interpolation smoothing method. The general idea is to build a language model Md for each document d , and rank the documents according to how likely the query q can be generated from each of these document models, i.e. P (q|Md ). In different models, the probability is calculated in different approaches [4]. In M ultinomial model, the query is treated as a sequence of independent terms (i.e. q = w1 , w2 , wm ), taking into account possibly multiple occurrences of the same term. The ”ordered sequence of terms assumption” behind this approach states that both queries and documents are defined by an ordered sequence of terms.A query of length k is modeled by an ordered sequence of k random variables, one for each term occurrence in the query.Based on this assumption,the query probability can be obtained by multiplying the individual term probabilities [1]. P (q|Md ) =
m
P (wi |Md )
(1)
i
Supported by the science research foundation program of Henan University of Science and Technology, China (2004ZY041) and the natural science foundation program of the Henan Educational Department, China (200410464004).
In the multiple-Bernoulli model, a query is represented as a vector of binary attributes, one for each unique term in the vocabulary, indicating the presence or absence of the term in the query. Terms are assumed to occur independently of one another in a document. The query likelihood P(q|M_d) is thus formulated as the product of two probabilities: the probability of producing the query terms and the probability of not producing the other terms [1]:

P(q|M_d) = \prod_{w_i \in q} P(w_i|M_d) \prod_{w_i \notin q} (1 - P(w_i|M_d))    (2)

2 Computing the Term Probability
Both queries and documents are represented as vectors of indexed words. Given a document d = (f_1, f_2, …, f_V), where f_i is the term frequency of the i-th word w_i and V is the size of the vocabulary, consider a multiple-Bernoulli generation model or a multinomial generation model for each document, parameterized by the vector M_d = (M_{f_1}, M_{f_2}, …, M_{f_V}) ∈ [0, 1]^V, which indicates the probabilities of emission of the different words in the vocabulary, where M_{f_i} = P(w_i|M_d) and \sum_{i=1}^{V} M_{f_i} = 1. Now let us define the length of a document as the sum of its components: n_d = \sum_i f_i. We wish to compute the maximum likelihood estimate by Bayesian inference and use it as the document's language model [1][5]. According to the Bayesian approximation

P(M_d|d) ≈ P(d|M_d) P(M_d),    (3)

we have

\hat{M}_d ≈ \arg\max_{M_d} P(d|M_d) P(M_d),    (4)

where P(d|M_d) is the likelihood of the document given M_d, and P(M_d) is the prior of the sample distribution model. We assume that we sample from a multinomial distribution once for each word in the document model. When M_d parameterizes a multinomial and the model prior is Dirichlet, the conjugate prior for the multinomial, we get:

\hat{M}_d = \arg\max_{M_d} \frac{\Gamma(n_d + \sum_{i=1}^{V} \sigma_i)}{\prod_{i=1}^{V} \Gamma(f_i + \sigma_i)} \prod_{i=1}^{V} (M_{f_i})^{f_i + \sigma_i - 1}    (5)

where Γ is the Gamma function, Γ(s) = (s−1)!, and the σ_i are the parameters of the Dirichlet prior. Solving the above equation with the EM algorithm yields the following form of probability estimates:

\hat{M}_{f_i} = (f_i + \sigma_i - 1) / (n_d + \sum_{i=1}^{V} \sigma_i - V)    (6)
The simplest method of choosing the parameters σ_i is to attribute equal probability to all words in the query. However, this leads to the "zero probability" problem. So we use the document collection model to smooth the document model and choose to set the parameters as σ_i = µc_i/n_c + 1. This setting of σ_i yields the popular Dirichlet smoothing [2][5]; µ is the smoothing parameter, c_i is the number of times word w_i appears in the collection, and n_c is the total number of words in the collection. So we get the term probability

P(w_i|\hat{M}_d) = (f_i + \mu(c_i/n_c)) / (n_d + \sum_{i=1}^{V} \mu(c_i/n_c))    (7)
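Note that \sum_{i=1}^{V} \mu(c_i/n_c) = \mu, so formula (7) is the familiar Dirichlet-smoothed estimate (f_i + \mu c_i/n_c)/(n_d + \mu). A minimal sketch of query-likelihood scoring with formulas (1) and (7), using helper names of our own:

```python
import math
from collections import Counter

def dirichlet_score(query, doc_tf, coll_tf, n_c, mu=2000.0):
    """Log query likelihood under formulas (1) and (7)."""
    n_d = sum(doc_tf.values())
    score = 0.0
    for w in query:
        p = (doc_tf.get(w, 0) + mu * coll_tf.get(w, 0) / n_c) / (n_d + mu)
        score += math.log(p)   # collection term keeps p > 0 for seen words
    return score

doc = Counter("the cat sat on the mat".split())
coll = Counter("the cat sat on the mat the dog ran".split())
print(dirichlet_score(["cat", "dog"], doc, coll, sum(coll.values())))
```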
If we assume that we sample from a multiple-Bernoulli distribution once for each word in the document model, we get

\hat{M}_d = \arg\max_{M_d} \prod_{i=1}^{V} \frac{\Gamma(\alpha_i + \beta_i)}{\Gamma(\alpha_i)\Gamma(\beta_i)} (M_{f_i})^{f_i + \alpha_i - 1} (1 - M_{f_i})^{n_d - f_i + \beta_i - 1}    (8)

The solution to the above equation is

\hat{M}_{f_i} = (f_i + \alpha_i - 1) / (n_d + \alpha_i + \beta_i - 2)    (9)
We set α_i = µc_i/n_c + 1 and β_i = n_c/c_i + µ(1 − c_i/n_c) − 1. This setting also leads to Dirichlet smoothing, and we get the term probability

P(w_i|\hat{M}_d) = (f_i + \mu(c_i/n_c)) / (n_d + (n_c/c_i) + \mu - 2)    (10)
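Under the same assumptions as the previous sketch, the multiple-Bernoulli query likelihood combines the term probability of formula (10) with the product of formula (2). The sketch below scores in log space over an explicit vocabulary list, a simplification of our own:

```python
import math

def bernoulli_term_prob(f_i, n_d, c_i, n_c, mu=2000.0):
    """Formula (10): smoothed multiple-Bernoulli term probability."""
    return (f_i + mu * c_i / n_c) / (n_d + n_c / c_i + mu - 2)

def bernoulli_score(query, doc_tf, coll_tf, n_c, vocab, mu=2000.0):
    """Formula (2) in log space over an explicit vocabulary `vocab`."""
    n_d = sum(doc_tf.values())
    q = set(query)
    score = 0.0
    for w in vocab:
        # c_i >= 1 assumed; the default guards the n_c/c_i term above.
        p = bernoulli_term_prob(doc_tf.get(w, 0), n_d,
                                coll_tf.get(w, 1), n_c, mu)
        score += math.log(p) if w in q else math.log(1.0 - p)
    return score
```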
3 Experiments and Results
Two sets of experiments are performed on four data sets from TREC. The first set of experiments investigates whether the retrieval performance of our new multiple-Bernoulli approach (NMB) and our new multinomial approach (NML) is sensitive to the setting of the smoothing parameter µ. The second set of experiments compares the performance of NMB and NML with that of Ponte and Croft's traditional multiple-Bernoulli approach (TMB) and Miller's traditional multinomial approach (TML). The results of the first set of experiments are presented in Fig. 1. It is clear that average precision is quite sensitive to µ, especially when the values of µ are small. However, the optimal values of µ seem to vary from data set to data set, though in most cases they are around 2000. The results of the second set of experiments are shown in Table 1. We observe that NMB achieves improvements of 11.5%, 13%, 10.5%, and 13.6% in average precision on the four data sets respectively over TMB, with an average improvement of 12.2%. We attribute the improvements of NMB over TMB to the use of the term frequency and of the Dirichlet smoothing method in NMB. We also find that NML achieves improvements of 7.7%, 10.1%, 8.3%, and 11.6% respectively over TML in average precision on the four data sets, with an average improvement of 9.4%.
Table 1. Average precision of TMB, NMB, TML and NML on four data sets

Data set    TMB       NMB       Chg1%    TML       NML       Chg2%
TREC5       0.2304    0.2568    11.5     0.2571    0.2768    7.7
TREC6       0.2101    0.2374    13       0.2404    0.2646    10.1
TREC7       0.2243    0.2478    10.5     0.2585    0.2799    8.3
TREC8       0.2105    0.2392    13.6     0.2386    0.2662    11.6
Avg.        0.2188    0.2453    12.2     0.2486    0.2718    9.4
[Figure: two plots of average precision vs. smoothing parameter µ (0 to 4000) for NMB (left) and NML (right) on TREC5-TREC8]

Fig. 1. Plots of average precision of NMB and NML for different µ
References

1. Lafferty, J., Zhai, C.: Document language models, query models, and risk minimization for information retrieval. In: Proceedings of SIGIR'01 (2001) 111-119
2. Metzler, D., Lavrenko, V., Croft, W.B.: Formal multiple-Bernoulli models for language modeling. In: Proceedings of ACM SIGIR'04 (2004) 231-235
3. Miller, D.H., Leek, T., Schwartz, R.: A hidden Markov model information retrieval system. In: Proceedings of ACM SIGIR'99 (1999) 214-221
4. Ponte, J., Croft, W.B.: A language modeling approach to information retrieval. In: Proceedings of ACM SIGIR'98 (1998) 275-281
5. Zaragoza, H., Hiemstra, D., et al.: Bayesian extension to the language model for ad hoc information retrieval. In: Proceedings of ACM SIGIR'03 (2003) 325-327
Adaptive Query Refinement Based on Global and Local Analysis

Chaoyuan Cui, Hanxiong Chen, Kazutaka Furuse, and Nobuo Ohbo

University of Tsukuba, Tsukuba, Ibaraki 305-8573, Japan
{cui, chx, furuse, ohbo}@dblab.is.tsukuba.ac.jp
Abstract. The goal of information retrieval (IR) is to identify the documents which best satisfy a user's information need. The task of formulating an effective query is difficult in the sense that it requires the user to predict the keywords that will appear in the desired documents. In our study we propose a method of query refinement that combines candidate keywords with query operators. The method uses the concept of a Prime Keyword Set, a subset of the whole keyword set obtained by global analysis of the target database. Considering the user's intention, we generate a rational-sized set of candidates by local analysis, based on several specified principles. Experiments are conducted to confirm the effectiveness and efficiency of the proposed method. Moreover, as an extension of our approach, an online system is implemented to investigate its feasibility.
1 Introduction

The rapid growth of on-line documents increases the difficulty for users of finding documents relevant to their information needs. Unfortunately, short queries are becoming increasingly common in retrieval applications, especially with the advent of the World Wide Web (WWW). According to the research in [4], queries submitted by users to WWW search engines contain on average only two keywords. Short queries make it difficult to distinguish relevant documents from irrelevant ones. Therefore a system is required to provide candidates automatically, helping users to refine their initial queries.

Generally, meaning-based candidates and statistics-based candidates are the two main sources of related keywords. The meaning-based approach constructs thesauri manually. WordNet ([6]) provides a tool to search dictionaries conceptually, rather than merely alphabetically. Its basic object is the synset, a set of synonymous words representing a certain meaning. Synsets are organized by the semantic relations defined on them. However, this kind of thesaurus is difficult to use because of ambiguity: selecting the correct meaning for refining may be difficult, especially in the case of short queries ([10]).

The statistics-based approach can be divided into two kinds, namely global analysis and local analysis. Automatic thesaurus construction techniques group keywords together based on their co-occurrence in documents; keywords which often occur together in documents are assumed to be similar.
These thesauri can be used for automatic or manual query expansion. A global technique requires some corpus-wide statistics that take a considerable amount of computer resources to compute, such as co-occurrence data about all possible pairs of terms in a database. In contrast, a local strategy only processes a small number of top-ranked documents. It is based on the hypothesis that the top-ranked documents are indeed relevant, and it is carried out by relevance feedback systems. [8] improves the quality of the initial run by re-ranking a small number of the retrieved documents, making use of the term co-occurrence information to estimate word correlation. [3] uses an information-theoretic term-score function within the top retrieved documents to assign scores to candidate expansion terms. Query expansion has been shown to produce good retrieval performance and prevent query drift. [11] combines global analysis and local analysis. That approach is based on the use of noun groups, instead of simple keywords, as document concepts. It works as follows. First, some top-ranked passages are retrieved using the original query. This is accomplished by breaking up the documents retrieved by the query into fixed-length passages (of, say, 300 words) and ranking these passages as if they were documents. Then the similarity between each concept c in the top-ranked passages and the whole query q is computed using a variant of tf-idf ranking. The query q is refined by adding some top-ranked concepts, based on the similarity between the concepts and q.

In this work, we adopt both global analysis and local analysis to generate refinement candidates based on statistics. We differ from the above work in that we analyze the relationship between documents of the database in global analysis, which improves the precision of refinement significantly.

The rest of the paper is organized as follows. Section 2 gives several principles used throughout this study, and our approaches to refinement. Section 3 introduces two important concepts. Sections 4 and 5 explain how global analysis and local analysis work. Section 6 outlines the test data and gives experimental results. Section 7 offers some conclusions and directions for future work.
2 Principles and Goal

The existing techniques emphasize improving retrieval effectiveness, but a problem still remains: sometimes the information a user wants cannot be reached by way of the suggested candidates. Certainly, all keywords included in the retrieval results satisfy the user's intention, but a large set of candidates is inconvenient for the user to browse in full. Here we offer several principles which reflect the basic requirements of a user of a refinement system.

– No loss of information: any retrieved document can be accessed through the candidates or their combinations
– Screen effect: appropriate reduction of documents by the refined query
– Rational size of the candidate set
– Response time
Based on these principles we aim at developing a refinement system as shown in Figure 1, which illustrates a running example of the prototype system. We use the CRAN test collection as a database, which includes 1398 documents and 4612 keywords. In this example, suppose that a user issues a query with the keyword "method". The system shows that there are 366 hits for this query, and that it takes 0.01 second to retrieve them. Besides the links to the hit documents, refinement candidates, each accompanied by Boolean operators, are provided. For example, in the first line, "+(8) -(358) affected", "+" and "-" represent the Boolean operators "AND" and "NOT", respectively. Selecting this line with "+(8)" means that by adding the candidate keyword "affected" to the original query, 8 documents will remain in the result. Selecting "-" means the exclusion of documents containing "affected". Now suppose that the user is interested in the field of aircraft and does not want documents that include the keyword "tn"; then the user may choose "+ aircraft" and "- tn", which means that the query is modified to {method + aircraft - tn}. The modified query becomes clearer and the number of retrieved documents decreases. The user can access documents through the links, or carry out the refinement process repeatedly until he/she is satisfied with the results.

Fig. 1. An example of the prototype refinement system
3 Prime Keyword Set

One of the drawbacks of an interactive system is that it takes a lot of computation after the user's query arrives, because most systems search the database and have to process the contents of the retrieved documents. Usually the content of a document is represented approximately by the keywords included in it, and the content of a database is thus represented by the keyword set included in it. In fact, some of these keywords are unnecessary in the sense of specifying documents; on the other hand, a document can be represented by only some "specific" keywords. As an extension, a database can also be represented by a "specific" keyword set. In our study, we call this kind of keyword set a prime keyword set. If a database can be well represented by its prime keyword set, the effectiveness of online computation can be improved.

For convenience we first give the notation that will be used in the following explanation. Let D and K be a set of documents and a set of keywords, respectively. ρ extracts keywords from a document d ∈ D, i.e., ρ(d) = {k | (k ∈ K) ∧ (k is a keyword included in d)}. Further, let D' ⊆ D; \bigcup_{d \in D'} ρ(d) is denoted by ρ(D'), and in particular ρ(D) = K.
A query Q is simply a subset of K. A query evaluation retrieving all the documents that contain all the given keywords in Q is defined as σ(Q) = {d | d ∈ D ∧ Q ⊆ ρ(d)}. A refinement candidate of Q is also a subset of K. Let X and Y be subsets of K; the fact that "Y is a refinement candidate of X" is denoted by an association rule X ⇒ Y. The confidence of such a rule is defined as |σ(X ∪ Y)|/|σ(X)|, where |σ(X ∪ Y)| is also called the support of the rule.

In order to ensure that the system does not arbitrarily narrow the result retrieved by the user's query, we introduce two concepts: coverage and prime keyword set.

Coverage: Let K (⊆ 2^K) be a subset of the power set of K and D' (⊆ D) be a set of documents. If

D' ⊆ \bigcup_{Q \in K} σ(Q)

then we say that K covers D', or K is a coverage of D'. In a similar fashion, K is also called a coverage of K' (⊆ 2^K) iff

\bigcup_{Q \in K'} σ(Q) ⊆ \bigcup_{Q \in K} σ(Q)

Naturally, a coverage K of D' is a minimum coverage if and only if D' cannot be covered by any proper subset of K. Minimum coverage is defined in order to exclude redundant keywords from the keyword database. In general, the minimum coverage is not unique for a specific database.

Prime Keyword: Let D and K be the set of all documents and all keywords, respectively. Kp is called the prime keyword set of D (or K) iff {{k} | k ∈ Kp} is a minimum cover of D (or 2^K). In addition, the keywords included in the prime keyword set are called prime keywords. By the definition of coverage, prime keywords guarantee no loss of information. In other words, any document can be retrieved via some subset of the prime keywords, expressed as follows:

D = \bigcup_{k \in K_p} σ({k}) ≡ \bigcup_{K \subseteq K_p} σ(K)

Based on these concepts we extract the prime keyword set and generate candidates according to the relationships between prime keywords.
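To make the notation concrete, here is a minimal sketch of σ(Q) and the coverage test over a toy document collection (the data and helper names are illustrative only):

```python
def sigma(Q, docs):
    """sigma(Q): ids of documents containing every keyword in Q."""
    return {d for d, kws in docs.items() if Q <= kws}

def covers(keyword_sets, target, docs):
    """True if the union of sigma(Q) over the family covers `target`."""
    covered = set().union(*(sigma(Q, docs) for Q in keyword_sets))
    return target <= covered

docs = {1: {"fuzzy", "query"}, 2: {"query", "span"}, 3: {"fuzzy"}}
print(sigma({"query"}, docs))                          # {1, 2}
print(covers([{"fuzzy"}, {"span"}], set(docs), docs))  # True
```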
4 Global Analysis

In global analysis, the system extracts the prime keyword set based on the principles mentioned in Section 2. Though no loss of information is guaranteed by prime keywords, the practicability of the system may be hurt by the composition of the database. For a refinement system, it is important to keep a good screen effect.
Keywords with very low support are unsuitable as candidates because too many candidates would be necessary. On the other hand, keywords with very high support cannot well discriminate one document from the others. Unfortunately, some documents only contain keywords with very low or very high support. We call them outliers, in the sense of effectiveness of refinement. In our study, in the pre-processing phase we exclude outlier documents whose keywords are out of a specified range of support. Certainly, outlier documents can still be retrieved by the system, though no refinement candidates are returned for them.

Refinement Coefficient. Theoretically, the vector space model has the disadvantage that index terms are assumed to be mutually independent. Due to the locality of many term dependencies, their indiscriminate application to all documents in the database might in fact degrade the overall performance. In order to obtain high effectiveness of refinement, the relationships between keywords should be taken into consideration. In our study, we use the Refinement Coefficient (RC), defined below, to express the correlation of keywords:

RC(k, d) = \frac{tf(k, d)}{|\rho(\{d\})|} \times \frac{\sum_{k_x \in \rho(\sigma(\{k\})) - \{k\}} cnf(k_x \Rightarrow k)}{|\rho(\sigma(\{k\}))| - 1} \quad (|\rho(\sigma(\{k\}))| > 1)

where tf(k, d) is the term frequency of keyword k in the document d, |ρ({d})| is the number of keywords included in d, and ρ(σ({k})) is the set of keywords that co-occur with k. The formula consists of two parts. The left part shows the importance of the keyword in d: in general, the higher the frequency with which a keyword appears in the document, the more important the keyword becomes. On the other hand, if the value of |ρ({d})| is small, the keywords included in d are considered to be important; as an extreme example, if a document contains only one keyword, then that keyword is essential. The right part expresses the average effect of k in refinement: kx is a keyword included in ρ(σ({k})) − {k}, and when kx is issued as a query, the refinement degree of k is denoted by cnf(kx ⇒ k). So the right part reflects the average effect of refinement when each kx is a query.
Generation of Prime Keyword Set. We use RC as the main factor to extract the prime keyword set. First, the system selects the keyword with the maximal RC value from each document; the keyword set so selected is certainly a coverage of the database. Then the keywords in the coverage are sorted in ascending order of RC, and a keyword with a small RC value is deleted if the remaining keywords can still cover the whole database after it is removed. Finally, the minimal cover is output as the prime keyword set after the redundant keywords are removed. The pseudo code is as follows.

Algorithm Generation of Prime Keyword Set
Input: Document Set D, Keyword Set K, range of support MinSpt and MaxSpt
Output: Prime Keyword Set Kp (initially empty)
1 forall d ∈ D do
2   Xd = {k | k ∈ ρ({d}) ∧ MinSpt ≤ spt(k) ≤ MaxSpt}
3   select k0 such that RC(k0, d) = max_{k ∈ Xd} {RC(k, d)}
4   Kp := Kp ∪ {k0}
5 end
6 sort Kp in ascending order of RC
7 forall ki ∈ Kp do
8   if \bigcup_{k ∈ Kp − {ki}} σ({k}) = D then Kp := Kp − {ki}
9 end
The algorithm extracts precise information that makes local analysis efficient. In this algorithm, all keywords included in each document have to be processed, so the cost of computation is very high. Fortunately, this process is done in advance, during the off-line phase, so it does not affect the interactive processing.
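A runnable Python sketch of the algorithm above, under simplifying assumptions of ours: it reuses the toy sigma helper from Section 3, takes a placeholder rc(k, d) scoring function in place of the full Refinement Coefficient, and uses a plain sort as a stand-in for the ascending-RC order of line 6.

```python
def prime_keywords(docs, rc, min_spt=1, max_spt=10):
    """Greedy sketch of the prime-keyword-set algorithm above."""
    all_docs = set(docs)
    spt = lambda k: len(sigma({k}, docs))           # support of keyword k
    Kp = set()
    for d, kws in docs.items():                     # lines 1-5
        Xd = [k for k in kws if min_spt <= spt(k) <= max_spt]
        if Xd:
            Kp.add(max(Xd, key=lambda k: rc(k, d)))
    for ki in sorted(Kp):                           # lines 6-9
        rest = Kp - {ki}
        if set().union(*(sigma({k}, docs) for k in rest)) == all_docs:
            Kp = rest                               # ki is redundant
    return Kp
```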
5 Local Analysis

There have been a number of efforts to improve local analysis ([2], [3], [5], [7], [9]). These strategies cannot work well when loss of information occurs, and almost all existing methods ignore this problem. Our approach makes up for this drawback of existing techniques by using the concept of prime keywords. At first glance this may seem to take great effort and be infeasible. In fact, the size of σ(Q) is greatly reduced, and the system only considers the prime keywords included in it when carrying out local analysis. The prime keyword set is a coverage of the whole database, and naturally the prime keywords included in the retrieved documents form a coverage of them, so candidates from this keyword set guarantee no loss of information. In particular, the retrieved results contain the user's intention, and the keywords, especially the prime keywords contained in them, reflect the screen effect of refinement. In addition, the size of the prime keyword set is significantly reduced, so time efficiency is improved.

Local RC. Based on the prime keywords extracted from σ(Q), the system looks for refinement candidates. In local analysis, we still use RC as the evaluation criterion to discriminate candidates from the others, and we localize RC as follows in order to keep consistency:

RC_l(k, d) = \frac{tf(k, d)}{|\rho(\{d\})|} \times cnf(Q \Rightarrow \{k\}), \quad d \in \sigma(Q)

RC_l reflects the effect of refinement on the query Q.
Generation of Candidates. We propose two algorithms, called Generation of Conjunction Candidates and Generation of Exclusion Candidates, to generate candidates. The similarity function between a document d and a query Q is defined as follows:

sim(d, Q) = \frac{\sum_{k_x \in Q} w(k_x, d)}{\sqrt{\sum_{k \in \rho(d)} w^2(k, d) \times |Q|}}

where w(k, d) is the weight of keyword k in document d, and w(k, d) = \frac{tf(k, d)}{|\rho(\{d\})|} \times \log\left(\frac{|D|}{|\sigma(\{k\})|}\right).

Using sim(d, Q), we define σr(Q) as a document set which satisfies the following condition:

σr(Q) ⊂ σ(Q) ∧ |σr(Q)| = r ∧ ∀d1 ∈ σr(Q), ∀d2 ∈ (σ(Q) − σr(Q)), sim(d1, Q) ≥ sim(d2, Q)

The pseudo code of the two algorithms is as follows.

Algorithm Generation of Conjunction Candidates
Input: Kp, σ(Q), and ranking threshold r.
Output: Refinement Candidates Ca
1 sort σ(Q) in descending order of sim(d, Q)
2 Ca := ρp(σr(Q)) − Q
3 while σ(Q) − \bigcup_{k ∈ Ca} σ({k}) ≠ ∅
4   forall d ∈ σ(Q) − \bigcup_{k ∈ Ca} σ({k}) do
5     select k0 such that RC_l(k0, d) = max_{k ∈ ρp({d}) − Q} {RC_l(k, d)}
6     Ca := Ca ∪ {k0}
7 end
Algorithm Generation of Exclusion Candidates
Input: Kp, σ(Q), and l low-ranked documents.
Output: Refinement Candidates Ca
1 sort σ(Q) in ascending order of sim(d, Q)
2 Ca := ρp(σl(Q)) − ρp(σn−l(Q)) − Q
3 while σ(Q) − \bigcup_{k ∈ Ca} σ({k}) ≠ ∅
4   forall d ∈ σ(Q) − \bigcup_{k ∈ Ca} σ({k}) do
5     select k0 such that RC_l(d, k0) = max_{k ∈ ρp({d}) − Q} {RC_l(d, k)}
6     Ca := Ca ∪ {k0}
7 end
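A loose Python adaptation of the conjunction-candidate loop, reusing the toy sigma helper from Section 3 and treating rho_p (the prime keywords of a document) and the local RC as given functions; the plain sort stands in for the sim ranking of line 1, and the progress guard is our own addition.

```python
def conjunction_candidates(Q, docs, rho_p, rc_l, r=2):
    """Greedy sketch of Generation of Conjunction Candidates."""
    hits = sigma(Q, docs)
    ranked = sorted(hits)                      # stand-in for sim ranking
    Ca = set().union(*(rho_p(d) for d in ranked[:r])) - Q
    def uncovered():
        covered = set().union(*(sigma({k}, docs) for k in Ca))
        return hits - covered
    while True:                                # lines 3-7
        missing = uncovered()
        if not missing:
            break
        progressed = False
        for d in missing:
            pool = rho_p(d) - Q - Ca
            if pool:
                Ca.add(max(pool, key=lambda k: rc_l(k, d)))
                progressed = True
        if not progressed:
            break          # no eligible prime keyword left; give up
    return Ca
```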
The former emphasizes the importance of the prime keywords appearing in the r top-ranked documents. If the prime keywords in σr(Q) do not cover σ(Q), then more keywords are chosen from the uncovered documents, and this process is performed repeatedly until σ(Q) is covered. Moreover, the candidate set is a subset of ρp(σ(Q)), so its size can be significantly reduced. The effect of the candidates and the time of online computation will be discussed in our experiments.
Adaptive Query Refinement Based on Global and Local Analysis
591
2500 "RetrievedPk" "Conjunction" "Exclusion" 10e+1 1-keyword-query
2000
2-keyword-query
Average Execution Time(:second)
Number of Candidates
1
1500
1000
500
10e-1
10e-2
10e-3
0 0
1000
2000
3000
4000 5000 6000 Support of Query
7000
8000
Fig. 2. Number of candidates
9000
10000
10e-4 10e+3
10e+4
10e+5
10e+6
Data Size
Fig. 3. Execution time vs. database size
if all irrelevant documents are excluded, the remainder are the relevant ones and become easy to be found. The latter algorithm Generation of Exclusion Candidates is designed based on this idea.
6
Experiments
We use C++ for our implementation. The experiments were run on a 3.0GHz Pentium IV processor with 4GB of main memory and 70GB of disk space running Debian Linux (Kernel 2.4). The proposed methods are evaluated using TREC7,8 ad hoc dataset, which includes 527,993 documents and 72,359 keywords after the elimination of stop words and stemming, as well as 100 test questions. As mentioned in section 4, our system eliminates outliers before extracting prime keyword set. In our experiments, we set M inSpt and M axSpt to [500, 10, 000] and 738 outlier documents are removed. Applying algorithm extracting prime 1 keyword set to the dataset, we obtained a prime keyword set of less than 30 of the original keyword set. Size of Candidates and Time of Online Processing. In order to reveal time efficiency and candidate’s reduction ratio of the two algorithms in local analysis, we compare them with the method which picked up all the prime keywords included in σ(Q) as refinement candidates. In the following this method is referred to as RetrievedPk. We take the following steps to carry out our experiments. First the system randomly issues 10,000 1-keyword-queries and 2-keyword-queries respectively. Then we generate candidates based on the two algorithms. For queries with same support, our system calculate the average number of candidates and the average time. Figure 2 shows the number of candidates by different algorithms. By our local analysis, in most cases, the number of conjunction and exclusion candidates are reduced to less than 15% and 3%, respectively. Next experiment aims at investigating the scalability of our approach. As shown in Figure 3, three datasets, ad hoc, FT(a subset of ad hoc) and CRAN are used to measure the execution time. The results show that in both heuris-
592
C. Cui et al. 1
0.07 "query" "query+1c" "query+2c"
"query" "query+1c" "query+2c" 0.06
0.8
F-Measure
Average Precision
0.05
0.6
0.4
0.04
0.03
0.02 0.2 0.01
0
0 0.1
0.2
0.3
0.4
0.5
Recall
Fig. 4. Average Precision and Recall
0.6
0
5
10
15
20 25 30 Order of ranked documents
35
40
45
50
Fig. 5. F-Measure
tics, the processing time is almost linear to the size of database. Considering the hardware specification used in the experiments, we can conclude that the system is feasible. Effect of Candidates. This experiment investigates the effect of refinement. Obviously, query by several keywords can narrow the intention and decrease the ambiguity. This is proved in Figure 4. Adding one candidate to the initial query improves the average precision, and an addition candidate enhances the average precision further. A single measure which combines recall and precision might be of interest. One such measure is the harmonic mean F of recall and precision which is computed as F (i) = 2/(r−1 (i)+p−1 (i)) where r(i) is the recall for the i-th document in the ranking, p(i) is the precision for the i-th document in the ranking, Figure 5 shows the same result with Figure 4 about F -Measure. Comparing with Rocchio. There have been a lot of approaches attempted to improve retrieval effects on query modification. Rocchio is a famous way and has been proved to be effective. Many systems use Ide Regular formula to improve the performance by query expansion ([1]). Qnew = αQorig +β dj −γ dj ∀dj ∈Dr
∀dj ∈Dn
Here, Qorig and Qnew are the initial query vector and the expanded (refined) query vector, respectively Dr and Dn stand for the sets of relevant and irrelevant documents, respectively. Considering the fact that the information contained in the irrelevant documents is much less important, we set the parameter γ to 0. The experiments are done by the following steps. – Look for σ(Qorig ). – Sort retrieved results by sim(d, Qorig ). – Specify relevant documents, look for expansion query by Ide Regular. Figure 6 shows that our method doubles the precisions of those of Rocchio’s for all recall. As for F -measure shown in Figure 7, our method is also superior to Rocchio’s remarkably.
Adaptive Query Refinement Based on Global and Local Analysis 0.7
593
0.06 "query" "Rocchio" "query+1c"
"query" "Rocchio" "query+1c"
0.6
0.05
0.5
F-Measure
Average Precision
0.04 0.4
0.3
0.03
0.02 0.2
0.01
0.1
0
0 0.1
0.2
0.3
0.4 Recall
0.5
0.6
0
5
10
15
20 25 30 Order of ranked documents
35
40
45
50
Fig. 6. Comparison with Rocchio (Average Fig. 7. Comparison with Rocchio (Fprecision) Measure)
7
Future Work
Our study use global analysis and local analysis to improve response time and quality of candidates in query refinement. It guarantees no loss of information and is proved to be effective and efficient experimentally. In order to examine the practicability we attempt to implement a refinement system with practical dataset It is expected to applied to web search engine in the future.
References 1. Baeza-Yates, R. and Ribeiro-Neto, B.: Modern Information Retrieval. Addison Wesley, pp.24-34, 1999 2. Carmel, D., Farchi, E. and Petruschka, Y.: Automatic Query Refinement using Lexical Affinities with Maximal Information Gain. ACM SIGIR, pp.283-290, 2002 3. Carpineto, C., De Mori, R., Romano, G. and Bigi, B.: An Information-Theoretic Approach to Automatic Query Expansion. ACM TOIS, vol.19(1), pp.1-27, 2001. 4. Croft, W.B., Cook, R. and Wilder, D.: Providing government information on the Internet: Experience with THOMAS. In Proceedings of the Digital Libraries Conference, (DL’95) pp.19-24, 1995 5. Cui, C., Chen, H., Furuse, K. and Ohbo, N.: Web Query Refinement without Information Loss. APWeb 2004, pp.363-372, 2004 6. George, M.: Special Issue, WordNet: An On-line Lexical Database. International Journal of Lexicography, 3(4), 1990. 7. Kraft, R. and Zien, J.: Mining anchor text for query refinement Proceedings of the 13th international conference on World Wide Web pp.666-674(2004) 8. Mitra, M., Singhal, A. and Buckley, C.: Improving Automatic Query Expansion. ACM SIGIR, pp.206-214, 1999 9. V´ elez, B., et al: Fast and Effective Query Refinement. ACM SIGIR, pp.6-15, 1997 10. Voorhees, E.M.: Query Expansion Using Lexical-Semantic Relations. In ACM SIGIR, pp.61-69, 1994 11. Xu, J., and Croft, W. B.: Improving the effectiveness of information retrieval with local context analysis. ACM Transactions on Information System, vol.18(1), pp.79112, 2000.
Information Push-Delivery for User-Centered and Personalized Service Zhiyun Xin1, Jizhong Zhao2, Chihong Chi1, and Jiaguang Sun1 1 School
of Software, Tsinghua University, 100084, Beijing, China
[email protected] 2 Department of Computer Science and Technology, Xi’an Jiaotong University, 710049, Xi’an, China zjz@m ail.xjtu.edu.cn
Abstract. In this paper, an Adaptive and Active Computing Paradigm (AACP) for personalized information service in heterogeneous environment is proposed to provide user-centered, push-based high quality information service timely in a proper way, the motivation of which is generalized as R4 Service: the Right information at the Right time in the Right way to the Right person, upon which formalized algorithms of adaptive user profile management, incremental information retrieval, information filtering, and active delivery mechanism are discussed in details. The AACP paradigm serves users in a push-based, eventdriven, interest-related, adaptive and active information service mode, which is useful and promising for long-term user to gain fresh information instead of polling from kinds of information sources. Performance evaluations based on the AACP retrieval system that we have fully implemented manifest the proposed schema is effective, stable, feasible for adaptive and active information service in distributed heterogeneous environment.
1 Introduction During the past decades pull-based information service such as search engine and traditional full-text retrieval were studied much more[1-5], and many applications have been put to real use. However with the explosive growth of the Internet and World Wide Web, locating relevant information is time consuming and expensive, push technology[4-10] promises a proper way to relieve users from the drudgery of information searching. Some current commerce software or prototype systems such as PointCast Network, CNN Newswatch, SmartPush and ConCall serve users in a personalized way[8,10,12], while recommendation system[13-15] such as GroupLens, MovieLens, Alexa, Amazon.com, CDNow.com and Levis.com are used in many Internet commerce fields. Although kinds of personalized recommendation systems were developed, still many things left unresolved, these problems result in deficiency and low quality of information service as the systems declared. One of most important reasons of which is that single recommendation mechanism such as content-based or collaborative recommendation is difficult to serve kinds of users for their various information needs[10-15]. L. Wang and Y. Jin (Eds.): FSKD 2005, LNAI 3613, pp. 594 – 602, 2005. © Springer-Verlag Berlin Heidelberg 2005
Information Push-Delivery for User-Centered and Personalized Service
595
In this paper an Adaptive and Active Computing Paradigm (AACP) for personalized information service in wide-area distributed heterogeneous environment is proposed to provide user-centered, push-based high quality information service timely in a proper way, the motivation of which is generalized as R4 Service: the Right information at the Right time in the Right way to the Right person, Upon which formalized algorithms framework of adaptive user profile management, incremental information retrieval, information filtering, and active delivery mechanism are discussed in details. As the work of National High-Tech Research and Development Plan of China, we have fully implemented the Self-Adaptive and Active Information Retrieval System (AIRS) for scientific research use, and evaluations showed the AIRS system is effective and reliable to serves large scale users for adaptive and active information retrieval.
2 The Adaptive and Active Computing Paradigm 2.1 Abstract Model of Information Retrieval System Usually traditional information retrieval system[1-6] is composed of RetrievalEnabled Information Source (IS), Indexing Engine (XE), Information Model (IM), Retrieval Engine (RE), Graphic User Interface (GUI), entrance for users to retrieve for information, which includes the order option parameters. Definition 1: The general Information System can be defined according to its components in a system’s view as: IRSSysView : ={IS, XE, IM, RE, GUI}
(1)
Especially, in an open source information system such as search engine, the indirect IS is the WWW, while the direct IS is the abstract image of the indirect information source. In non-open information sources such as full-text retrieval system the IS is the original metadata suitable for retrieval. Definition 2: In a user’s view, the information retrieval system is a 3-tuple framework composed of virtual or real information source (IS), user’s Retrieval Input (RI) and Retrieval Output (RO), which is: IRSUserView : ={IS, RI, IO}
(2)
Traditional information system is essentially information-centered, pull-based computing paradigm, which serves customers in a passive mode, and can’t meet longterm users’ demand of getting information in an active way. 2.2 Adaptive and Active Computing Paradigm An Adaptive and Active Computing Paradigm (AACP) for personalized information service[12-15] in heterogeneous environment is user-centered, push-based high quality information service in a proper way, the motivation of which is generalized as R4 Service: the Right information at the Right time in the Right way to the Right person, that is: R4 : =R × R × R × R
(3)
596
Z. Xin et al.
The R4 Service serves users in an active and adaptive mode for high quality information timely and correctly, which can adjust the information content according to user’s preference dynamically and adaptive to user’s interest in some degree, automatically retrieve and delivery related information to users with interaction. The R4 service can be described for adaptive and active information retrieval. 2.3 The Abstract Architecture of AACP Definition 3: The Adaptive and Active Information System (AAIS) for personalized information service is encapsulated on the traditional information system with adaptivity and activity, and can be viewed as the following composition: AACP : =IRSSysView AAIS
(4)
AAIS : =(VUser, VIS, C, T, P)
(5)
Where
The semantic of each element in AAIS is described as table 1: Table 1. Semantic of AAIS
0Symbol VUSER VIS C T P
1Definition Vector of User’s interests or preferences Vector of Information Source Condition that VIS matches VUSER Trigger for the system starts to work for U Period that the system serves for U
For a long-term user in information retrieval system, what one really needs in a relative stable period is just a little part in the whole global information domain, the relationships that can be precisely described as definition 4. Definition 4: A Personalized Information Set (PIS) is a subset of Domain Information Set (DIS), while DIS is a subset of Global Information Set (GIS), so that: PIS
⊆ DIS ⊂ GIS
(6)
Considering the high-dimension of information system, usually the domains overlap each other especially in multi-subjects, the set of the above is in fact n-dimension space, and fig.4 just shows a flat model of the above relationship. Definition 5: Personalized Information Service over heterogeneous information source is a matching map R (where R stand for Retrieval) from VIS to VUser with the Condition C and trigger T during the period P, that is: Q=R (VUser , VIS, C, T, P ; MInfo)
(7)
Information Push-Delivery for User-Centered and Personalized Service
597
where Minfo is the information Model for the Retrieval Map, which may be Boolean, Probable, or VSM modal. To serve users adaptively and actively, the system must know users’ needs, usually user profile is the basic infrastructure, which decides the quality of adaptive and active service. Except for adaptivity and activity, the AACP paradigm also builds on the infrastructure of information indexing, information retrieval, information filtering. Moreover, automatically monitoring for retrieval and delivery is another key technology. The following section will talk about each key technology and its optimization strategy in details.
3 Infrastructure of AACP 3.1 Abstract User Profile To serve users according to their preference and interests, user profile is needed as a image of what users require, by which then system can decide who to serve, when to serve, what to serve and how to serve. Thus a user profile can be defined in an abstract form as the following multi-tipple: AUP : =(ExtUID,IntUID,TrueVector,RecVectosr,
(8)
StartTime,EndTime,Freq ) where ExtUID, IntUID stand for external and internal User Identity respectively, and TrueVector stands for true and confirmed user interest vector ,but RecVector is the recommended interest vector, which should be evaluated by user whether to retrieve by the vector or not, while StartTime, EndTime describe the start and end time that user want to be served, Freq is the frequency that user consumes the information that pushed to him/her. To provide high quality information it is necessary to define different interests with different weight, thus TrurVector and RecVector can be defined as following: TrueVector : =
(9)
wi ≥ wj,(1 ≤ i, j ≤ n, i<j), wi ≥ w0 (threshold of the Weight of TrueVector) RecVector : =
(10)
w’i ≥ w’j,(1 ≤ i, j ≤ n, i<j), w’0<w’i<w0. According to the above definition, we can see that TrueVector is set of weighted keyuwords whose weight must be more than w0, and RecVector’s should be not less than w’0.As for the other key problem for user profile in AACP, the profile location, the style of update user profile are important aspects to be considered. In fact, location of profile can be either at client, proxy server, or web server side, and the style of update is can be explicit or implicit, that is: (11) ProfileLoc : =Client|Proxy|Server UpdateStyle : =Explicit|Implicit
598
Z. Xin et al.
3.2 Adaptive User Profile Management Algorithm: Adaptive Update of User Profile Input: IntUID,TrueVector TV= s β if α > β ;
2) There is the negation operator: neg ( sα )
= s −α .
To preserve all the given information, we extend the discrete term set
S to a
continuous term set S = {s a | a ∈ [− q, q]} , where q ( q > t ) is a sufficiently large positive integer. If we call
sα ∈ S , then we call sα the original linguistic term, otherwise,
sα the virtual linguistic term. In general, the expert uses the original linguis-
tic terms to evaluate alternatives, and the virtual linguistic terms can only appear in operation [21].
~ = [α − , α + ] = { x | α − ≤ x ≤ α + } , where Definition 1. Let α
α − , x, α + ∈ R ,
α − and α + are the lower and upper limits, respectively, then α~ is called an interval − + + ~ is called a positive interval number. number. Especially, if α , x, α ∈ R , then α s = [ s a , s b ] , where s a , sb ∈ S , s a and s b are the lower and Definition 2. Let ~ s the uncertain linguistic variable. upper limits, respectively, we then call ~ ~
Let S be the set of all uncertain linguistic variables, and let Ω be the set of all positive interval numbers. Consider any three uncertain linguistic variables ~ s1 = [sa1 , sb1 ] and ~ s2 = [sa2 , sb2 ] , and any three positive interval s = [sa , sb ] , ~
686
Z. Xu
~ = [α − ,α + ] , α~ = [α − ,α + ] and α~ = [α − ,α + ] , we define their operanumbers α 1 1 1 2 2 2 tional laws as follows.
~ s1 ⊕ ~ s2 = [sa1 , sb1 ] ⊕ [sa2 , sb2 ] = [sa1 ⊕ sa2 , sb1 ⊕ sb2 ] = [sa1 +a2 , sb1 +b2 ] ; ~ ⊗~ s = [α − ,α + ] ⊗[s , s ] = [s , s ] , where a' = min{α − a, α −b, α + a, α +b}, 2) α 1)
a
b
a'
b'
b ' = max{α − a , α − b, α + a , α + b} ; 3) ~ s1 ⊕ ~ s2 = ~ s2 ⊕ ~ s1 ; ~ ~ ~ ~ 4) α ⊗ s = s ⊗ α ; ~ ⊗ (~ 5) α s1 ⊕ ~ s 2 ) = α~ ⊗ ~ s1 ⊕ α~ ⊗ ~ s2 ; ~ ~ ~ ~ ~ ~ 6) (α ⊕ α ) ⊗ s = α ⊗ s ⊕ α ⊗ ~ s. 1
2
Definition 3. Let
1
2
~ s1 = [ s a1 , sb1 ] and ~ s 2 = [ s a2 , sb2 ] be two uncertain linguistic
variables, and let l ~s1 = b1 − a1 and l ~s = b2 − a 2 , then the degree of possibility of 2
~ s1 ≥ ~ s2 is defined as
{
}
min l ~s1 + l ~s2 , max( b1 − a 2 , 0) p(~ s1 ≥ ~ s2 ) = l ~s1 + l ~s2 Similarly, the degree of possibility of
~ s2 ≥ ~ s1 is defined as
{
}
min l ~s1 + l ~s2 , max( b2 − a1 , 0) p (~ s2 ≥ ~ s1 ) = l ~s1 + l ~s2
(1)
(2)
From Definition 3, we have the following useful result: Theorem 1. Let
~ s1 = [ s a1 , sb1 ] and ~ s 2 = [ s a2 , sb2 ] be two uncertain linguistic
variables, then 1) 0 ≤ p (~ s1 ≥ ~ s 2 ) ≤ 1, 0 ≤ p (~ s2 ≥ ~ s1 ) ≤ 1 ;
2) p(~ s1 ≥ ~ s1 ) = p ( ~ s2 ≥ ~ s2 ) = 1 2 . s1 ≥ ~ s2 ) + p(~ s2 ≥ ~ s1 ) = 1. Especially, p ( ~
Proof. 1) Since
max( b1 − a 2 , 0) ≥ 0 , l ~s = b1 − a1 ≥ 0 , l ~s2 = b2 − a 2 ≥ 0 1
then
{
}
min l ~s1 + l ~s2 , max( b1 − a 2 , 0) ≥ 0 By (1), we have p(~ s1 ≥ ~ s2 ) ≥ 0 . Also since
{
}
min l ~s1 + l ~s2 , max( b2 − a1 , 0) ≤ l ~s1 + l ~s2 then by (1), we have p(~ s1 ≥ ~ s2 ) ≤ 1. Thus 0 ≤ p(~s1 ≥ ~s2 ) ≤ 1 . Similarly, by (2), we can prove that 0 ≤ p(~ s ≥~ s ) ≤1. 2
1
A Method Based on IA Operator for MAGDM with Uncertain Linguistic Information 687
3) Since
{
}
{
}
min l ~s1 + l ~s2 , max(b1 − a2 , 0) min l ~s1 + l ~s2 , max(b2 − a1 , 0) p( ~ s1 ≥ ~ s 2 ) + p( ~ s2 ≥ ~ s1 ) = + l ~s1 + l ~s2 l ~s1 + l ~s2
=
{
}
{
}
min l ~s1 + l ~s2 , max(b1 − a2 , 0) + min l ~s1 + l ~s2 , max(b2 − a1 , 0) l ~s1 + l ~s2
(3)
then (i) If b1 ≤ a2 , then by (1), we know that p ( ~ s1 ≥ ~ s 2 ) = 0 . By
b1 ≤ a2 , we have
b2 − a1 ≥ b2 − a1 + b1 − a 2 = (b2 − a 2 ) + (b1 − a1 ) = l ~s2 + l ~s1 thus, by (2), we have p ( ~ s2 ≥ ~ s1 ) = 1 . Hence, p ( ~ s1 ≥ ~ s2 ) + p(~ s2 ≥ ~ s1 ) = 1 . (ii) If
b2 ≤ a1 , then, by (2), we know that p ( ~s 2 ≥ ~s1 ) = 0 . By b2 ≤ a1 , we have b1 − a 2 ≥ b1 − a 2 + b2 − a1 = (b2 − a 2 ) + (b1 − a1 ) = l ~s2 + l ~s1
thus, by (1), we have p ( ~ s1 ≥ ~ s 2 ) = 1 . Hence, p ( ~ s1 ≥ ~ s2 ) + p(~ s2 ≥ ~ s1 ) = 1 . (iii) If
a1 ≤ a2 < b1 ≤ b2 , then b1 − a2 < b1 − a2 + b2 − a1 = (b2 − a2 ) + (b1 − a1 ) = l ~s2 + l~s1
(4)
b2 − a1 < b2 − a1 + b1 − a2 = (b2 − a2 ) + (b1 − a1 ) = l ~s2 + l~s1
(5)
and
thus, by (3), we have
b − a + b − a (b − a ) + (b1 − a1 ) l~s2 + l~s1 =1 = p (~ s1 ≥ ~ s 2 ) + p (~ s2 ≥ ~ s1 ) = 1 2 2 1 = 2 2 l~s1 + l~s2 l~s1 + l~s2 l~s + l~s 1
(6)
2
a1 < a2 ≤ b2 < b1 , it follows that (4) - (6) hold. (v) If a2 < a1 ≤ b1 < b2 , it follows that (4) - (6) hold. (vi) If a2 ≤ a1 < b2 ≤ b1 , it follows that (4) - (6) hold.
(iv) If
p (~ s1 ≥ ~ s2 ) + p(~ s2 ≥ ~ s1 ) = 1 always holds. ~ ~ Especially, if s1 = s 2 , i.e., s a1 = s a2 and sb1 = s b2 , then By (i)-(vi), we know that
p(~ s1 ≥ ~ s1 ) + p(~ s1 ≥ ~ s1 ) = 1 or p(~ s2 ≥ ~ s2 ) + p(~ s2 ≥ ~ s2 ) = 1 and thus
p(~ s1 ≥ ~ s1 ) = p(~ s2 ≥ ~ s2 ) = 1 2 This completes the proof of Theorem 1.
688
Z. Xu
3 A Method Based on IA Operator for MAGDM with Uncertain Linguistic Information ~n
Definition 4. An interval aggregation (IA) operator is a mapping IA : S such that
~ →S ,
~ ⊗~ ~ ⊗~ ~ ⊗~ IAw~ ( ~ s1 , ~ s 2 ,..., ~ sn ) = w s1 ⊕ w s2 ⊕ " ⊕ w sn 1 2 n ~ = (w ~ ,w ~ ,..., w ~ ) T is the weighting vector of n arguments ~ where w s j = [sa j , sb j ] , 1 2 n
~ = [w− , w+ ] ∈ Ω , j = 1,2,..., n , w j j j
n
∑w
− j
≤ 1,
j =1
Example 1. Assume
n
∑w j =1
+ j
≥ 1.
~ s1 = [s−1 , s1 ] , ~ s2 =[s−3, s−2 ], ~ s3 = [ s −1 , s 0 ] and ~ s4 =[s−4 , s−2 ] ,
~ = ([ 0 .1, 0 . 2 ], [ 0 .1, 0 . 3], [ 0 .2 , 0 . 4 ], [ 0 . 3, 0 . 5 ]) T , then and w
IA w~ ( ~ s1 , ~ s2 , ~ s3 , ~ s 4 ) = [ 0 .1,0 .2 ] ⊗ [ s −1 , s1 ] ⊕ [ 0 .1,0 .3] ⊗ [ s − 3 , s − 2 ] ⊕ [0.2,0.4] ⊗ [s −1 , s 0 ] ⊕ [0.3,0.5] ⊗ [s −4 , s −2 ] = [ s − 0 .2 , s 0 .2 ] ⊕ [ s − 0 .9 , s − 0 .2 ] ⊕ [ s − 0 .4 , s 0 ] ⊕ [ s − 2 , s − 0 .6 ] = [ s − 3 .5 , s − 0 . 6 ] Based on the IA operator and the formula for the comparison between two uncertain linguistic variables, in the following we develop a method for the MAGDM problems, in which the information about the attribute weights and the expert weights are interval numbers, and the attribute values take the form of uncertain linguistic information. Step 1. For a MAGDM problem with uncertain linguistic information, there exist a finite of alternatives X = {x1 , x2 ,...,xn } and a finite set of attributes
U = {u1 , u 2 ,..., u m } , and there exists a finite set of experts D = {d1 ,d 2 ,...,d l } . Let ~ = (w ~ ,w ~ ,..., w ~ ) T be the weight vector of attributes, where w 1
2
m
~ = [ w − , w + ] ∈ Ω , i = 1, 2 ,..., m , w i i i ~
~ ~
m
∑w i =1
~
− i
≤ 1,
m
∑w
+ i
≥1
i =1
and let λ = ( λ 1 , λ 2 ,..., λ l ) T be the weight vector of experts, where
~ λ k = [λ−k , λ+k ] ∈ Ω , k = 1, 2,..., l ,
l
∑ λ −k ≤ 1, k =1
l
∑λ
+ k
≥1
k =1
~ Ak = (a~ij( k ) ) m×n is the decision matrix with uncertain linguistic ~ ( k ) ∈ S~ is a preference value (attribute value), given by the information, where a ij
Suppose that
expert
d k ∈ D , for the alternative x j ∈ X
with respect to the criterion
ui ∈ U .
A Method Based on IA Operator for MAGDM with Uncertain Linguistic Information 689
Step 2. Utilize the IA operator
(
)
~ ⊗ a~ ( k ) ⊕ w ~ ⊗ a~ ( k ) ⊕ " ⊕ w ~ ⊗ a~ ( k ) a~ j( k ) = IAw~ a~1(jk ) , a~2( kj ) ..., a~mj( k ) = w 1 1j 2 2j m mj
k = 1, 2 ,..., l ; j = 1, 2 ,..., n ~ ( k ) (i = 1,2,..., m) in the j th column of to aggregate the preference information a ij ~ (k ) the Ak , and then get the preference degree a j ( j = 1,2,..., n) of the alternative x j over all the other alternatives (corresponding to Step 3. Utilize the IA operator
(
dk ∈ D )
)
~ ~ ~ a~ j = IA λ~ a~ j(1) , a~ j( 2 ) ..., a~ j( l ) = λ1 ⊗ a~ j(1) ⊕ λ2 ⊗ a~ j(2) ⊕"⊕ λl ⊗ a~ j(l ) ,
j = 1, 2 ,..., n
a~ j( k ) ( k = 1,2,..., l ) corresponding to the alternative x j , and then get ~ of the alternative x over all the other alternatives. the collective preference degree a
to aggregate
j
j
~ ( j = 1,2,..., n) , we first Step 4. To rank these collective preference degrees a j ~ with all a~ ( j = 1,2,..., n) by using (1). For simplicity, we let compare each a j j
pij = p (a~i ≥ a~ j ) , then we develop a complementary matrix [23-26] as P = ( pij )n×n , where pij ≥ 0 , pij + p ji = 1 , pii = 1 2 , i , j = 1, 2,..., n Summing all elements in each line of matrix P , we have n
p i = ∑ p ij , i = 1, 2 ,..., n j =1
~ ( j = 1, 2 ,..., n ) in descending order in accordance with the Then we rank the a j value of p i ( i = 1, 2 ,..., n ). Step 5. Rank all the alternatives
x j ( j = 1,2,..., n) and select the best one(s) in
~ ( j = 1,2,..., n) . accordance with the a j Step 6. End.
4 Illustrative Example In this section, a MAGDM problem of evaluating university faculty for tenure and promotion (adapted from Bryson and Mobolurin [27]) is used to illustrate the developed method.
690
Z. Xu
A practical use of the proposed approach involves the evaluation of university faculty for tenure and promotion. The attributes used at some universities are teaching,
u2 :
research, and
u3 :
u1 :
service (whose weight vector w = ([0.20,0.30],
[0.30,0.35], [0.35,0.40])T ). Five faculty candidates (alternatives) x j ( j = 1, 2, 3, 4, 5) are to be evaluated using the term set
S = { s − 4 = extremely poor , s − 3 = very poor , s − 2 = poor , s −1 = slightly poor , s 0 = fair , s1 = slightly good , s 2 = good , s 3 = very good , s 4 = extremely good } by four experts d k (k = 1,2,3,4) (whose weight vector λ = ([0.15,0.20], [0.23,0.25],
[0.30,0.40], [0.15,0.25]) T under these three attributes, as listed in Tables 1-4, respectively. ~ Table. 1. Decision matrix A1
ui u1 u2 u3
x1 [s2 , s3] [s3 , s4] [s1 , s3]
x2
[s0 , s2] [s1 , s2] [s1 , s2]
x3 [s2 , s4] [s2 , s3] [s3 , s4]
x4 [s3 , s4] [s0 , s1] [s2 , s3]
x5 [s1 , s2] [s0 , s2] [s0 , s1]
x4 [s3 , s4] [s0 , s1] [s1 , s3]
x5 [s2 , s4] [s3 , s4] [s2 , s4]
x4 [s2 , s4] [s-1 , s1] [s1 , s3]
x5 [s0 , s1] [s0 , s1] [s1 , s2]
x4 [s0 , s2] [s0 , s1] [s0 , s2]
x5 [s-1 , s0] [s1 , s2] [s0 , s1]
~
Table. 2. Decision matrix A 2
ui u1 u2 u3
x1 [s0 , s1] [s2 , s4] [s-1 , s0]
x2
[s1 , s2] [s1 , s2] [s3 , s4]
x3 [s0 , s2] [s0 , s2] [s2 , s3]
~
Table 3. Decision matrix A 3
ui u1 u2 u3
x1 [s1 , s3] [s3 , s4] [s2 , s3]
x2
[s1 , s2] [s0 , s1] [s0 , s2]
x3 [s2 , s3] [s1 , s3] [s2 , s4]
Table 4. Decision matrix
ui u1 u2 u3
x1 [s0 , s1] [s2 , s3] [s2 , s3]
x2
[s2 , s3] [s3 , s4] [s2 , s3]
~ A4
x3 [s3 , s4] [s1 , s3] [s2 , s4]
A Method Based on IA Operator for MAGDM with Uncertain Linguistic Information 691
Step 1. Utilize the IA operator
(
)
~ ⊗ a~ ( k ) ⊕ w ~ ⊗ a~ ( k ) ⊕ w ~ ⊗ a~ ( k ) , a~ j( k ) = IA w~ a~1(jk ) , a~2( kj ) , a~3( kj ) = w 1 1j 2 3j 3 3j
k = 1, 2,3, 4; j = 1, 2,3, 4,5 ~ ( k ) (i = 1,2,3) in the j th column of the to aggregate the preference information a ij ~ (k ) Ak , and then get the preference degree a j of the alternative x j over all the other
d k ):
alternatives (corresponding to
a~1(1) = [ s1.65 , s 3.50 ] , a~2(1) = [s0.65 , s2.10 ] , a~ (1) = [s , s ] , a~ (2) = [s , s ] , 5
0.2
1.7
1
a~4( 2 ) = [ s 0.95 , s 2.75 ] a~3( 3) = [ s1.40 , s 3.55 ] a~ ( 4 ) = [ s , s ] , 2
2.0
3 .5
0.25
1.70
a~3(1) = [s 2.05 , s3.85 ] , a~4(1) = [ s1.30 , s 2.75 ] a~ ( 2) = [ s , s ] , a~ ( 2 ) = [ s , s ] 2
1.55
2.90
3
0 .7
2. 5
~ ( 2 ) = [ s , s ] , a~ ( 3) = [ s , s ] , a~ (3) = [s , s ] , a 5 2 .0 4 .2 1 1 .8 3 .5 2 0.20 1.75 ~ (3) = [s , s ] , a~ (1) = [ s , s ] , a~ ( 4) = [ s , s ] , a 4 0.45 2.75 5 0.35 1.45 1 1.30 2.55 a~3( 4) = [s1.60 , s3.85 ] , a~4( 4 ) = [ s 0 , s1.75 ] , a~5( 4) = [s 0.1 , s1.1 ]
Step 2. Utilize the IA operator
(
)
~ ~ ~ ~ a~ j = IAλ~ a~ j(1) , a~ j( 2) , a~ j(3) , a~ j( 4) = λ1 ⊗ a~ j(1) ⊕ λ2 ⊗ a~ j( 2) ⊕ λ3 ⊗ a~ j(3) ⊕ λ4 ⊗ a~ j( 4) ,
j = 1, 2 ,3, 4 ,5
~ ( k ) (k = 1,2,3,4) corresponding to the alternative to aggregate a j ~ of the alternative collective preference degree a j
x j , and then get the
x j over all the other alternatives:
a~1 = [ s1.04 , s 3.16 ] , a~ 2 = [ s 0.81 , s 2 .72 ] , a~3 = [ s1.13 , s 3.78 ] a~ 4 = [ s 0.55 , s 2 .78 ] , a~5 = [ s 0.61 , s 2.25 ] ~ ( j = 1,2,3,4,5) , we first Step 3. To rank these collective preference degrees a j ~ ~ compare each a with all a ( j = 1,2,3,4,5) by using (1), and then develop a j
j
complementary matrix:
⎡ 0.5 ⎢0.4169 ⎢ P = ⎢0.5744 ⎢ ⎢0.4000 ⎢⎣0.3218
0.5831 0.4256 0.6000 0.6782⎤ 0.5 0.3487 0.6014 0.6845⎥⎥ 0.6513 0.5 0.6619 0.7389⎥ ⎥ 0.3986 0.3381 0.5 0.5607⎥ 0.3155 0.2611 0.4393 0.5 ⎥⎦
Summing all elements in each line of matrix P , we have
p1 = 2.7869 , p 2 = 2.5515, p 3 = 3.1265, p 4 = 2.1974, p 5 = 1.8377
692
Z. Xu
~ ( j = 1,2,3,4,5) in descending order in accordance with the Then we rank the a j value of p i ( i = 1,2,3,4,5 ):
a~3 > a~1 > a~2 > a~4 > a~5 Step 4. Rank all the alternatives
x j ( j = 1,2,3,4,5) in accordance with the
a~ j ( j = 1,2,3,4,5) :
x 3 ; x1 ; x 2 ; x 4 ; x 5 thus the best alternative (school) is
x3 .
5 Concluding Remarks In this paper, we have studied the MAGDM problems, in which the information about the attribute weights and the expert weights are interval numbers, and the attribute values take the form of uncertain linguistic information. We have introduced some operational laws of uncertain linguistic variables and a formula for comparing two uncertain linguistic values, and proposed the interval aggregation (IA) operator. Furthermore, we have developed a method, based on the IA operator and the formula for comparing two uncertain linguistic variables, for MAGDM with uncertain linguistic information. In the future, we shall continue working in the application and extension of the developed method.
Acknowledgement This work was supported by China Postdoctoral Science Foundation under Project (2003034366).
References 1. Zadeh, L.A.: The Concept of a Linguistic Variable and Its Application to Approximate Reasoning. Part 1,2 and 3, Information Sciences 8(1975) 199-249, 301-357; 9(1976) 43-80. 2. Zadeh, L.A.: A Computational Approach to Fuzzy Quantifiers in Natural Languages. Computers & Mathematics with Applications 9(1983) 149-184. 3. Kacprzyk, J.: Group Decision Making with a Fuzzy Linguistic Majority. Fuzzy Sets and Systems 18(1986) 105-118. 4. Bonissone, P. P., Decker, K.S.: Selecting Uncertainty Calculi and Granularity: An Experiment in Trading-off Precision and Complexity. In: Kanal, L.H., Lemmer, J.F.(eds.): Uncertainty in Artificial Intelligence. North-Holland, Amsterdam (1986) 217-247. 5. Degani ,R., Bortolan, G.: The Problem of Linguistic Approximation in Clinical Decision Making. International Journal of Approximate Reasoning 2 (1988) 143-162. 6. Delgado, M., Verdegay, J.L., Vila, M.A.: On Aggregation Operations of Linguistic Labels. International Journal of Intelligent Systems 8(1993) 351-370.
A Method Based on IA Operator for MAGDM with Uncertain Linguistic Information 693 7. Herrera, F.: A Sequential Selection Process in Group Decision Making with Linguistic Assessment. Information Sciences 85(1995) 223-239. 8. Herrera, F., Herrera-Viedma, E., Verdegay, J.L.: A Model of Consensus in Group Decision Making under Linguistic Assessments. Fuzzy Sets and Systems 78(1996) 73-87. 9. Torra, V.: Negation Functions Based Semantics for Ordered Linguistic Labels.International Journal of Intelligent Systems 11(1996) 975-988. 10. Bordogna, G., Fedrizzi, M., Passi G.: A Linguistic Modeling of Consensus in Group decision Making Based on OWA Operator. IEEE Transactions on Systems, Man, and Cybernetics 27(1997) 126-132. 11. Zadeh, L.A., Kacprzyk, J. Computing with Words in Information/Intelligent Systems-Part 1: Foundations: Part 2: Applications. Heidelberg, Germany: Physica-Verlag, 1999, vol. I. 12. Herrera, F., Herrera-Viedma, E.: Choice Functions and Mechanisms for Linguistic Preference Relations. European Journal of Operational Research 120 (2000) 144-161. 13. Herrera, F., Martínez, L.: A 2-Tuple Fuzzy Linguistic Representation Model for Computing with Words. IEEE Transactions on fuzzy systems 8 (2000) 746-752. 14. Herrera, F., Martínez, L.: A Model Based on Linguistic 2-Tuples for Dealing with Multigranular Hierarchical Linguistic Contexts in Multi-Expert Decision-Making. IEEE Transactions on Systems, Man, and Cybernetics 31 (2001) 227-234. 15. Herrera, F., Martínez, L.: An Approach for Combining Linguistic and Numerical Information Based on 2-Tuple Fuzzy Linguistic Representation Model in Decision-Making. International Journal of Uncertainty, Fuzziness, Knowledge-based Systems 8 (2002) 539-562. 16. Delgado, M., Herrera, F., Herrera-Viedma, E., Martin-Bautista, M.J, Martinez, L., Vila, M. A.: A Communication Model Based on the 2-Tuple Fuzzy Linguistic Representation for a Distributed Intelligent Agent System on Internet. Soft Computing, 6 (2002) 320-328. 17. Cordón, O., Herrera, F., Zwir, I.: Linguistic Modeling by Hierarchical Systems of Linguistic Rules. IEEE Transactions on fuzzy systems 10 (2002) 2-20. 18. Xu, Z.S.: Study on Methods for Multiple Attribute Decision Making Under Some Situations. Ph.D thesis, Southeast University, Nanjing, China (2002). 19. Xu, Z.S., Da, Q.L.: An Overview of Operators for Aggregating Information. International Journal of Intelligent Systems 18 (2003) 953-969. 20. Xu, Z.S.: A Method Based on Linguistic Aggregation Operators for Group Decision taking with Linguistic Preference Relations. Information Sciences 166 (2004) 19-30. 21. Xu, Z.S.: EOWA and EOWG Operators for Aggregating Linguistic Labels Based on Linguistic Preference Relations. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 112 (2004) 791-810. 22. Xu, Z.S.: An Approach to Group Decision Making Based on Incomplete Linguistic Preference Relations. International Journal of Information Technology and Decision Making, 4 (2005) 153-160. 23. Xu, Z.S., Da, Q.L.: The Uncertain OWA Operator. International Journal of Intelligent Systems 17 (2002) 569-575. 24. Xu, Z.S., Da, Q.L.: An Approach to Improving Consistency of Fuzzy Preference Matrix. Fuzzy Optimization and Decision Making 2 (2003) 3-12. 25. Xu, Z.S.: Goal Programming Models for Obtaining the Priority Vector of Incomplete Fuzzy Preference Relation. International Journal of Approximate Reasoning 36 (2004) 261-270. 26. Xu, Z.S., Da, Q.L.: A Least Deviation Method to Obtain a Priority Vector of a Fuzzy Preference Relation. 
European Journal of Operational Research 164 (2005) 206-216. 27. Bryson, N., Mobolurin, A.: An Action Learning Evaluation Procedure for Multiple Criteria Decision Making Problems. European Journal of Operational Research 96 (1996) 379-386.
A New Prioritized Information Fusion Method for Handling Fuzzy Information Retrieval Problems Won-Sin Hong1, Shi-Jay Chen2, Li-Hui Wang3, and Shyi-Ming Chen1 1
Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan, R. O. C. 2 Department of Information Management, National United University, Miao-Li, Taiwan, R. O. C. 3 Department of Finance, Chihlee Institute of Technology, Panchiao, Taipei County, Taiwan, R. O. C.
Abstract. In this paper, we present a new prioritized information fusion method for handling fuzzy information retrieval problems. We also present a new center-of-gravity method for ranking generalized fuzzy numbers. Furthermore, we also extend the proposed prioritized information fusion method for handling fuzzy information retrieval problems in the generalized fuzzy number environment, where generalized fuzzy numbers are used to represent the degrees of strength with which documents satisfy particular criteria.
1 Introduction In [7], [8] and [9], Yager presented a conjunction operator of fuzzy subsets, called the non-monotonic/prioritized operator, to deal with multi-criteria decision-making problems. In [1], Bordogna et al. used the prioritized operator for fuzzy information retrieval. In [4], Chen et al. extended Yager’s prioritized operator [8] to present a prioritized information fusion algorithm, based on generalized fuzzy numbers, for aggregating fuzzy information. In this paper, we present a new prioritized information fusion method for handling fuzzy information retrieval problems. We also present a new center-of-gravity method for ranking generalized fuzzy numbers. Furthermore, we also extend the proposed prioritized information fusion method for handling fuzzy information retrieval problems in the generalized fuzzy number environment, where generalized fuzzy numbers are used to represent the degrees of strength with which documents satisfy particular criteria.
2 Generalized Trapezoidal Fuzzy Numbers ~ In [2] and [3], Chen presented a generalized trapezoidal fuzzy number A = ( a1 , a 2 , a 3 , a 4 ; wˆ A~ ) as shown in Fig. 1, where 0 < w A~ ≤ 1, a1, a2, a3 and a4 are positive real numbers, a1 ≤ a2 ≤ a3 ≤ a4, the universe of discourse X of the generalized fuzzy ~ number A is characterized by the membership function µ A~ , and µ A~ : X → [0, 1]. L. Wang and Y. Jin (Eds.): FSKD 2005, LNAI 3613, pp. 694–697, 2005. © Springer-Verlag Berlin Heidelberg 2005
A New Prioritized Information Fusion Method
695
µ (X) Ã
1
~ A
wˆ A~
0
a1
a2
a3
a4
X
~ Fig. 1. A generalized trapezoidal fuzzy number A .
3 A New Center-of-Gravity Method for Ranking Generalized Fuzzy Numbers In the following, we present a new method to calculate the center-of-gravity of generalized trapezoidal fuzzy numbers and to rank generalized trapezoidal fuzzy numbers. ~ Assume that there is a generalized fuzzy number A = (a, b, c, d ; w) , where 0 < w ≤ 1, and a, b, c and d are real numbers. Then we can calculate its center-of-gravity G ( x A~ , y A~ ) as follows:
m a+b+d n b+c+d m n 1 2 × + × × w+ × w), (1) , m+n m+n m+n 3 3 3 m+n 3 where m = d - a and n = c - b. Then, based on the center-of-gravity G ( x A~ , y A~ ) of the ~ ~ generalized trapezoidal fuzzy number A , we can calculate the ranking value R( A ) of ~ the generalized trapezoidal fuzzy number A , shown as follows: ~ Case 1: if a1 = a2 = a3 = a4, then the ranking value R( A ) of the generalized fuzzy ~ number A is calculated as follows: w 2 ~ 2 (2) R ( A) = a1 + ( ) 2 × . 2 5 ~ Case 2: if A is a generalized trapezoidal fuzzy number, a generalized rectangular ~ fuzzy number, or a generalized triangular fuzzy number, then the ranking value R( A ) ~ of the generalized trapezoidal fuzzy number A is calculated as follows: ~ 2 (3) R( A) = ( x A~ ) 2 + ( y A~ ) 2 × . 5
G ( x A~ , y A~ ) = (
4 A New Prioritized Information Fusion Method for Handling Fuzzy Information Retrieval Problems Assume that there are n documents d1 , d 2 , ... , d n , and assume that there are two different prioritized criteria A1 and A2, where A1 is the first order criterion, A2 is the second order criterion, A1(di) denotes the degree of strength that document di satisfies the first order criterion A1, A1(di) ∈ [0, 1] , A2(di) denotes the degree of strength that document di satisfies the second order criterion A2, and A2(di) ∈ [0, 1] . The proposed priori-
696
W.-S. Hong et al.
tized information fusion method for handling fuzzy information retrieval problems is now presented as follows: Step 1: Calculate the degree of correlation r(A1, A2) [6] between the first order criterion A1 and the second order criterion A2, shown as follows:
∑ (A (d ) − A (d ))(A (d ) − A (d )) n
r(A1, A2) =
i =1
1
1
i
2
i
2
i
i
∑ (A (d ) − A (d )) ∑ (A (d ) − A (d )) n
n
2
1
i =1
1
i
i
i =1
,
(4)
2
2
i
2
i
where A1(di) denotes the degree of strength that document di satisfies the first order criterion A1, A2(di) denotes the degree of strength that document di satisfies the second order criterion A2, A1 (d i ) denotes the mean value of A1, A2 (d i ) denotes the
mean value of A2, and r(A1, A2) ∈ [− 1, 1] . Step 2: Calculate the degree of compatibility C(A1, A2) between the first order criterion A1 and the second order criterion A2, shown as follows: 1+ r (5) , C ( A1 , A2 ) = 2 where r denotes the degree of correlation between the first order criterion A1 and the second order criterion A2, and C(A1, A2) ∈ [0, 1] . Step 3: Calculate the fusion result F (d i ) of document di, where 1 ≤ i ≤ n, shown as
follows:
⎛ ⎞ ⎛ C ( A1 , A2 ) ⎞ 1 (6) ⎟⎟ + ⎜⎜ A2 ( d i ) × ⎟⎟, F (d i ) = ⎜⎜ A1 (d i ) × 1 ( , A ) + C A + C A 1 ( , A ) 1 2 ⎠ 1 2 ⎠ ⎝ ⎝ where A1(di) denotes the degree of strength that document di satisfies the first order criterion A1, A2(di) denotes the degree of strength that document di satisfies the second order criterion A2, and C(A1, A2) denotes the degree of compatibility between the first order criterion A1 and the second order criterion A2. The larger the value of F (d i ) , the more the degree of satisfaction that document di satisfies the user’s query, where 1 ≤ i ≤ n. Assume that there are n documents d1 , d 2 , ... , and d n , and assume that there are two different prioritized criteria A1 and A2, where A1 is the first order criterion, A2 is the ~ second order criterion, A1 (d i ) is a generalized trapezoidal fuzzy number denoting the ~ degree of strength that document di satisfies the first order criterion A1, and A2 (d i ) is a generalized trapezoidal fuzzy number denoting the degree of strength that document di satisfies the second order criterion A2. The proposed prioritized information fusion method for handling fuzzy information retrieval problems in the generalized trapezoidal fuzzy number environment is now presented as follows: Step 1: Based on formula (1), defuzzify generalized trapezoidal fuzzy numbers ~ ~ A1 (d i ) and A2 (d i ) , where 1 ≤ i ≤ n. Based on formulas (2) and (3), calculate the ~ ~ ~ ~ ranking values R A1 ( d i ) and R A2 ( d i ) of A1 (d i ) and A2 (d i ) , respectively, where ~ ~ 1 ≤ i ≤ n. Let A1(di) = R A1 ( d i ) and let A2(di) = R A2 ( d i ) , where 1 ≤ i ≤ n.
(
) (
)
(
)
(
)
A New Prioritized Information Fusion Method
697
Step 2: Based on formula (4), calculate the degree of correlation r(A1, A2) between the first order criterion A1 and the second order criterion A2. Step 3: Based on formula (5), calculate the degree of compatibility C(A1, A2) between the first order criterion A1 and the second order criterion A2. Step 4: Based on formula (6), calculate the fusion result F (d i ) of document di, where 1 ≤ i ≤ n. The larger the value of F (d i ), the more the degree of strength that docu-
ment di satisfies the user’s query, where 1 ≤ i ≤ n.
5 Conclusions In this paper, we have presented a new prioritized information fusion method for handling fuzzy information retrieval problems. We also have presented a new centerof-gravity method for ranking generalized fuzzy numbers. Furthermore, we also have extended the proposed prioritized information fusion method for handling fuzzy information retrieval problems in the generalized fuzzy number environment, where generalized fuzzy numbers are used to represent the degrees of strength with which documents satisfy particular criteria.
Acknowledgements This work was supported in part by the National Science Council, Republic of China, under Grant NSC 94-2213-E-011-003.
References 1. Bordogna, G., Pasi, G.: Linguistic Aggregation Operators of Selection Criteria in Fuzzy Information Retrieval. International Journal of Intelligent Systems 10 (1995) 233-248 2. Chen, S.H.: Ranking Fuzzy Numbers with Maximizing Set and Minimizing Set. Fuzzy Sets and Systems 17 (1985) 113-129 3. Chen, S.H.: Ranking Generalized Fuzzy Number with Graded Mean Integration. Proceedings of the Eighth International Fuzzy Systems Association World Congress, Taipei, Taiwan, Republic of China (1999) 899-902 4. Chen, S.J., Chen, S.M.: A Prioritized Information Fusion Algorithm for Handling Multicriteria Fuzzy Decision-Making Problems. Proceedings of the 2002 International Conference on Fuzzy Systems and Knowledge Discovery, Singapore (2002) 5. Fuller, G., Tarwater, J.D.: Analytic Geometry. Addison-Wesley, Massachusetts (1992) 6. Witte, R.S.: Statistics. Holt, Rinehart and Winston, New York (1989) 7. Yager, R.R.: Non-Monotonic Set Theoretic Operations. Fuzzy Sets and Systems 42 (1991) 173-190 8. Yager, R.R.: Second Order Structures in Multi-Criteria Decision Making. International Journal of Man-Machine Studies 36 (1992) 553-570 9. Yager, R.R.: Structures for Prioritized Fusion of Fuzzy Information. Information Sciences 108 (1998) 71-90
Multi-context Fusion Based Robust Face Detection in Dynamic Environments Mi Young Nam and Phill Kyu Rhee Dept. of Computer Science & Engineering Inha University, 253, Yong-Hyun Dong, Incheon, Nam-Gu, South Korea
[email protected],
[email protected] Abstract. We propose a method of multiple context fusion based robust face detection scheme. It takes advantage of multiple contexts by combining color, illumination (brightness and light direction), spectral composition(texture) for environment awareness. It allows the object detection scheme can react in a robust way against dynamically changing environment. Multiple context based face detection is attractive since it could accumulate face model by autonomous learning process for each environment context category. This approach can be easily used in searching for multiple scale faces by scaling up/down the input image with some factor. The proposed face detection using the multiple context fusion shows more stability under changing environments than other detection methods. We employ Fuzzy ART for the multiple context- awareness. The proposed face detection achieves the capacity of the high level attentive process by taking advantage of the context-awareness using the information from illumination, color, and texture. We achieve very encouraging experimental results, especially when operation environment varies dynamically.
1 Introduction Context, in this paper, is modeled as the effect of the change of application working environment. The context information used here is illumination, color, texture contexts. As environment context changes, it is identified by the multiple context fusion, and the detection scheme is restructured. The detection scheme is restructured to adapt to changing environment and to perform an action that is appropriate over a range of the change. The goal of this paper is to explore the possibility of environment-insensitive face detection by adopting the concept of multiple context fusion. Context, in this paper, is modeled as the effect of the change of application working environment. The context information used here is illumination, color, texture contexts. As environment context changes, it is identified by the multiple context fusion, and the detection scheme is restructured. The detection scheme is restructured to adapt to changing environment and to perform an action that is appropriate over a range of the change. Even though human beings can detect face easily, it is not easy to make computer to do so. The difficulties in face detection are caused by the variations in illumination, pose, complex background, etc. Robust face detection needs high invariance with regard to those variations. Much research has been done to solve this problem [3,5,6]. L. Wang and Y. Jin (Eds.): FSKD 2005, LNAI 3613, pp. 698 – 707, 2005. © Springer-Verlag Berlin Heidelberg 2005
Multi-context Fusion Based Robust Face Detection in Dynamic Environments
699
Jones et. al. [1] have provided the comparison of nine chrominance spaces and two parametric techniques. Difficulties in detecting face skin color comes from the fact that ambient light, image capturing devices, etc. can affect skin color too much. Recently, the concept of context-awareness has been applied for robust face detection under dynamically changing environments [7]. . In this paper, we propose multiple context fusion for the adaptive detection scheme using illumination, color, texture information. We employ Fuzzy ART for the context modeling and awareness. It provides the capacity of the high level attentive process by the environment context-awareness in face detection. We achieve very encouraging results from extensive experiments. The outline of this paper is as follows. In section 2, we will present the architecture of the proposed face detection scheme. Adaptive color modeling using illumination context awareness will be discussed in section 3. In section 4, we will present multiple Bayesian classifies based on multiple context-awareness. We will give experimental results in section 5. Finally, we will give concluding remarks.
2 Face Detection Architecture Using Multi-context Fusion The specific task we are discussing in this paper is a robust face detector which is adaptive against changing illumination environments. The system consists of the multi-resolution pyramid, the multi-context fusion module, face/non-face determination using multiple Bayesian classifier as shown in Fig. 1. The details will be discussed in the following.
Fig. 1. Face detection system architecture
In the proposed method, the feature space for face detection with multiple scales and varying illumination condition has been partitioned into subspaces properly so that face detection error can be minimized. However, the partitioning of multiple scales is very subjective and ambiguous in many cases. The multi-resolution pyramid
700
M.Y. Nam and P.K. Rhee
generates face candidate region. Pyramid method based face detection is attractive since it could accumulate the face models by autonomous learning process. In our approach, the face object scale is approximated with nine steps. In this module, 20×20 windows are generated. A tradeoff must be considered between the size of multiple scale, view representation of the object and its accuracy. Finally, multi-context based Bayesian is in a multi classifier structure by combining several Bayesian classifiers. Each Bayesian classifier is trained using face images in each environment context category. Face candidate windows are selected using the multi-context based Bayesian classifiers, and finally face window is filtered by the merging process. In the learning stage, the candidate face regions are clustered into 4 face models, 6 face-like non-face models, non-face models using combined learning algorithm[10], the threshold parameters of multiple Bayesian classifiers are decided. Initially, seed appearance models of face is manually gathered and classified for training the detector module. The detector with prior classes is trained by the initial seed data set.
3 Environment Modeling and Identification Using Multiple Context-Awareness In this session, we present environment modeling and identification using multiple contexts such as color, illumination, and texture information 3.1 Illumination Context-Awareness Environment context is analyzed and identified using the Fuzzy ART. It is hybrid method of supervised and unsupervised learning method. The idea is to train the network in two separate stages. First, we perform an unsupervised training (FuzzyART) to determine the Gaussians' parameters (j, j). In the second stage, the multiplicative weights wkj are trained using the regular supervised approach. Input pattern is vectorized for image size of 20x20 pixels, input node had mosaic of size of 10x10 pixels. 10x10 image is converted 1x100 vector as show in Fig. 2. Fig. 3 shows images of three clusters various illuminant face dataset, we define 3 step illuminant environment.
Fig. 2. Face data vectorization is 1x100 dimension
Multi-context Fusion Based Robust Face Detection in Dynamic Environments
Cluster1
Cluster2
Cluster3 Fig. 3. The three group Discriminant result for illumination conditions
Fig. 4. The five group Discriminant result for illumination conditions
Fig. 5. The details of skin color modeling
701
702
M.Y. Nam and P.K. Rhee
3.2 Color Context Modeling The skin color modeling and identification also is carried out by Fuzzy ART network as shown Fig.5. The use of color information can simplify the task of face localization in complex environments [9, 10]. Several skin color spaces have been proposed. Difficulties of detecting face skin color comes from no universal skin color model can be determined. Ambient light, un-expectable in many cases, affects skin color. Different cameras produce different colors under same light conditions. Skin color’s area is computed 1,200 face images of 20 x 20 pixel scale. There are several color spaces such YCrCb, HSV, YCbCr, CbCr. Even though which color space is most effective in separating face skin and non-face skin colors, YCbCr has been widely used [6]. We choose CrCb color space which almost approaches the performance of YCbCr color space and provides faster execution time than other color spaces. Table 1 is shown context-based skin color histogram. Table 1. The skin color histogram distribution for each cluster
skin color area Cr value Cb value
Cluster 1 125-160 90-120
Cluster 2 100-130 110-140
Custer 3 80-100 125-160
Cluster 4 60-100 100-125
3.3 Texture Context Modeling We also take advantage of the information of frequency domain using Haar wavelet transform. It is organize the image into sub-bands that are localized in orientation and frequency. In each sub-bands, each coefficient is spatially localized. We use a wavelet transform based on 3 level decomposition producing 10 sub-bands. Generally, low frequency component has more discriminate power than higher. Also too high component has some noisy information. Then we use 3 level decomposition of Haar wavelet transform. The Fuzzy ART network is used for the texture context-awareness using Haar wavelet transform.
4 Multiple Context Based Bayesian Classifiers We show that illumination context based Bayesian face modeling can be improved by strengthening both contextual modeling and statistical characterization. Multiple Bayesian classifiers are adopted for deciding face and non-face. A Bayesian classifier is produced for each illumination context. We have modeling four Multi class based Bayesian classifiers are made in 4 group for face. Initially, seed appearance models of face is manually gathered and classified for training the appearance detector module. Each detector, Bayesian classifier here, with prior classes is trained by the training data set. Training set of faces is gathered and divided into several group in accordance with illumination contexts. Multi-resolution
Multi-context Fusion Based Robust Face Detection in Dynamic Environments
703
5x5 4x4 3x3 54-D Feature Vector
2x2 Sub-region Blocks
Fig. 6. Face feature extraction[8]
pyramid consists of nine levels by an experiment, and the offset between adjacent levels is established by four pixels. Each training face image is scaled to 20x20 and normalized using the max-min value normalization method, and vectorized. Since the vectorization method affects much to face detection performance, we have tested several vectorization methods and decided 54 dimensional hierarchical vectorization method [11, 12] (see Fig. 6). We use sub-regions for the feature of first classifier. Illumination the training images to a number of predefined clustering group by FART. The detector trained each face illuminant cluster. Squared Mahalanobis distance[5] a useful method that measures free model and free pattern relationship using similarity degree, is adopted for the Bayesian classifier. The center of cluster is determined by the mean vector, and the shape of the cluster is determined by the covariance matrix Mahalanobis distance as shown in Eq. 1.
r 2 = ( x − µ x )' ∑ −1 ( x − µ x ) where
µ x is
the average vector of face’s vector,
(1)
∑ is the covariance matrix of
independent elements. We use 4 face group that is shown Fig. 4. This is generated by proposed learning method. The y axis is cluster number and x axis is sample under each cluster. The advantages of the two-stage training are that it is fast and very easy to interpret. It is especially useful when labeled data is in short supply, since the first stage can be performed on all the data (not only the labeled part). Fig.7, shows the face candidate of Fig.7: blue box is face candidate area and white box is exact face area.
Fig. 7. The face detection using proposed method (skin color area and context based Bayesian)
704
M.Y. Nam and P.K. Rhee
4 Experiment The experiment of the proposed method has been performed with images captured in various environments 800 images are captured and used in the experiment. The superiority of the proposed method is discussed in the following. Fig.8 shows face detection in real-time image by using the proposed cascade of context-based skin color and context-based Bayesian methods. Face detection result shows Table 2-6. Images have size of 320 x 240 pixels and encoded in 256 color levels. We resized to various the size using multi-resolution of 9 steps. Rescaled images are transferred. Each face is normalized into a re-scaled size of 20x20 pixels and each data – training image and test images – is preprocessed by histogram equalization and max-min value normalization. Tables compare the detection rates (AR) for the context-based color system and the numbers of contextbased Bayesian classifiers for the three boosting-based systems, given the number of false alarms (FA) and false rejection (FR). Table 2 shows that the result of face detection between multi context-based Bayesian and single context based Bayesian in our lab gray images. Table 3 shows that the result of face detection between multi context based Bayesian and single context based Bayesian in our lab color images. Table 4 shows that the result of face detection between multiple context based Bayesian and single context based Bayesian in FERET fafb dataset. Table 5 shows that the result of face detection between multiple context based Bayesian and single context based Bayesian in FERET fafc dataset include dark illuminant image.
Fig. 8. Examples face detection in color image Table 2. The face detection our lab gray images
Method Multiple Context Model Single Context Model
Total Image
AR
FR
FA
800
790
5
5
800
755
30
16
Multi-context Fusion Based Robust Face Detection in Dynamic Environments
705
Table 3. Face detection on our lab in color images
Method Context-based Bayesian Single Bayesian
Total Image
AR
FR
FA
900
895
5
10
900
780
45
50
Table 4. Face detection result in FERET dataset normal images
Method Multiple Context Model Single Context Model
Total Image
AR
FR
FA
1196
1192
5
10
1196
1130
21
40
Table 5. Face detection result in FERET dataset dark images
Method Multiple Context Model Normal Context Model
Total Image
Accept
FR
FA
194
192
0
3
194
130
25
45
From Tables, we know that propose method is good face detection performance other method. By we were going to do studying repeatedly, clustering's performance improved. As seen in tables and figures, error in front side face appears in tilted image. We must consider about tilted image. We could improve pose classification's performance for face through recursive studying like experiment result. The combined effect of eye lasses is also investigated. In this experiment, the factor of glasses and illumination was considered and experimental images were classified by the factor of glasses and illumination. We classified bad illumination images into the image including a partially lighted face, good images into that including a nearly uniformly lighted face. Next figure is shown the face detection in CMU dataset.
706
M.Y. Nam and P.K. Rhee
5 Concluding Remarks This paper discusses a cascade detection scheme by combining the color, texture, and feature-based method with the capability of multi context-awareness. Even though much research has been done for object detection, it still remains a difficult issue. The detection scheme aims at robustness as well as fast execution way under dynamically changing context. Difficulties in detecting face coming from the variations in ambient light, image capturing devices, etc. has been resolved by employing Fuzzy ART for multi context-awareness. The proposed face detection achieves the capacity of the high level attentive process by taking advantage of the illumination, color and texture context-awareness. We achieve very encouraging experimental results, especially when illumination condition varies dynamically. Experimental result has shown that the proposed system has detected the face successfully 99% of the offline image under varying face images. The context information used here is illumination, color, texture contexts. As environment context changes, it is identified by the multiple context fusion, and the detection scheme is restructured.
Multi-context Fusion Based Robust Face Detection in Dynamic Environments
707
Reference 1. Jones, M.J.; Rehg and J.M.;”Statistical color models with application to skin detection,” Computer Vision and Pattern Recognition, 1999. IEEE Computer Society Conference, Vol.1, (1999), pp.23-25 2. B.K.L. Erik Hjelmas, “Face Detection: A Survey,” Computer Vision and Image Understanding, Vol. 3, no.3, (2001) pp. 236-274 3. T. V. Pham, et. Al., „Face detection by aggregated Bayesian network classifiers, „ Pattern Recognition Letters 23 (2002) pp. 451-461 4. Li, S.Z.; Zhenqiu Zhang; “FloatBoost learning and statistical face detection,” Pattern Analysis and Machine Intelligence, IEEE Transactions on , Vol.26, (2004) pp.1112 – 1123 5. C. Liu, “A Bayesian Discriminating Features Method for Face Detection,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 25, no. 6, (2003), pp. 725-740 6. H. Schneiderman and T. Kanade, “Object Detection Using the Statistics of Parts,” Int’l J. Computer Vision, Vol. 56, no. 3, (2004), pp.151- 177 7. Context Driven Observation of Human Activity 8. M.Y Nam and P.K Rhee, “A Scale and Viewing Point Invariant Pose Estimation”, KES2005,(2004), pp. 833-842 9. D. Maio and D. Maltoni, "Real-time face location on gray-scale static images,'' Pattern Recognition, vol.33, no. 9,(2000), pp. 1525-1539 10. M. Abdel-Mottaleb and A. Elgammal, "Face Detection in complex environments from color images,'' IEEE ICIP, (1999), pp. 622-626 11. P. Viola and M. Jones, “Robust Real Time Object Detection,” IEEE ICCV Workshop Statistical and Computational Theories of Vision, July (2001). 12. S.Z. Li, L. Zhu, Z.Q. Zhang, A. Blake, H. Zhang, and H. Shum, “Statistical Learning of Multi-View Face Detection,” Proc. European Conf. Computer Vision, Vol. 4, (2002), pp. 67-81
Unscented Fuzzy Tracking Algorithm for Maneuvering Target Shi-qiang Hu1, Li-wei Guo1, and Zhong-liang Jing2 1
College of Informatics and Electronics, Hebei University of Science and Technology, Shijiazhuang, 050054, China
[email protected] 2 Institute of Informatics and Electronics, Shanghai Jiaotong University, Shanghai, 200030, China Abstract. A novel adaptive algorithm for tracking maneuvering targets is proposed in this paper. The algorithm is implemented with fuzzy filtering and unscented transformation. A fuzzy system allows the filter to tune the magnitude of maximum accelerations to adapt to different target maneuvers. Unscented transformation act as a method for calculating the statistics of a random vector. A bearing-only tracking scenario simulation results show the proposed algorithm has a robust advantage over a wide range of maneuvers.
1 Introduction In the past years, the problem of tracking maneuvering targets has been attracted attention in many fields [1]. The interacting multiple model (IMM) algorithm and the current statistical model and adaptive filtering (CSMAF) algorithm are two effective algorithms for tracking maneuvering targets[2]. In the IMM algorithm, at time k the state estimate is computed under each possible current model using r filters, with each filter using a different combination of the previous model-conditioned estimates. When the range of target maneuvers is wider, the IMM algorithm needs more filters and its computational burden is more considerable. The CSMAF algorithm takes advantage of current model concept to realize adaptive tracking for maneuvering targets using only one filter. It is obvious that the computational amount of this algorithm is much less than that of the IMM algorithm. However, its performance depends on pre-defined parameter maximum acceleration α max indicating the intensity of maneuver. During tracking process, the value of α max is constant so that it is difficult to suit to different movement of target. For solving this problem, we propose a fuzzy system incorporated with CSMAF to online adjust the value of α max . In tracking applications, the measurements are directly available in the original sensor coordinates, which maybe give rise to nonlinear problem. The classic algorithm to solve the non-linear filtering is extended Kalman filter (EKF). But this filter has two important potential drawbacks. Firstly, the derivation of the Jacobian matrices can be complex causing implementation difficulties. Secondly, these linearization’s can introduce large errors and even cause divergence of the filter. To address these limitations, Julier and Uhlmann developed an unscented transformation (UT) instead of linearization method [3], [4]. Unscented filter based on UT has been shown to be a superior alternative to the EKF in a variety of applications. L. Wang and Y. Jin (Eds.): FSKD 2005, LNAI 3613, pp. 708 – 716, 2005. © Springer-Verlag Berlin Heidelberg 2005
Unscented Fuzzy Tracking Algorithm for Maneuvering Target
709
An unscented fuzzy filtering (UFCSMAF) algorithm for tracking maneuvering targets is proposed in this paper. Fuzzy system is used in CSMAF model to adjust magnitude of maximum accelerations to adapt to different target maneuvers. A UT is combined with fuzzy CSMAF model for tracking maneuvering targets. The structure of our proposed algorithm is shown in Figure 1. Compared with the CSMAF algorithm, this new adaptive algorithm has a robust advantage over a wide range of maneuvers and overcomes the shortcoming of the CSMAF algorithm.
Fuzzy System Fuzzy Rules
Fuzzification
Defuzzification
E ∆E
Fuzzy Inference Engine
Output
Unscented Filter
amax
Input
Fig. 1. The structure of Unscented Fuzzy Filter
The remainder of this paper is organized as follows. In section 2, the CSMAF algorithm is analyzed. In section 3, the fuzzy CSMAF is given. An unscented fuzzy CSMAF is proposed in section 4. In section 5 the simulation experiments have carried out to prove the filter performance. Finally, conclusions are given in section 6.
2 Adaptive Filtering and Fuzzy Filtering 2.1 Current Statistical Models and Adaptive Filtering The CSMAF algorithm assumes that when a target is maneuvering with certain acceleration, its acceleration during the next period is limited within a range around current acceleration. Hence it is not necessary to take all possible values of maneuvering acceleration into consideration when modeling the target acceleration probability. A modified Rayleigh density function whose mean is the current acceleration is utilized, and the relationship between the mean and variance of Rayleigh density is used to set up an adaptive algorithm for the variance of maneuvering acceleration. The state equation of the current statistical model is
x( k + 1) = Fx( k ) + Ga ( k ) + B w w ( k )
(2.1)
710
S.-q. Hu, L.-w. Guo, and Z.-l. Jing
x(k ) as the current acceleration and Considering the one-step-ahead prediction of also as the mean of randomly maneuvering acceleration at the instant kT, we have a ( k ) = xˆ (k / k − 1) 2 ⎧ 2α (4 − π ) ⎡ amax − a ( k ) ⎦⎤ ⎣ ⎪ ⎪ π σ w2 = 2ασ a2 = ⎨ 2 α 2 (4 −π ) ⎪ ⎡⎣ a− max − a ( k ) ⎤⎦ ⎪⎩ π
(2.2)
a (k ) > 0 a (k ) < 0
(2.3)
From Eqs.(2.3) we see the following facts: (1) when amax is large, the algorithm can keep fast response to target maneuvers because the system variance or system frequency band takes a large value; (2) when amax is a constant, if the target is maneuvering with a smaller acceleration, the system variance will be larger and the tracking precision will be lower; if the target is maneuvering with a larger acceleration, the system variance will be smaller and the tracking precision will be higher. Therefore amax is an important parameter in filter design. But in real application, it is difficult to give an appropriate value to suit an unknown trajectory. 2.2 Fuzzy Filtering Based on Current Statistical Model Since α max is predefined and affects the system variance, we use a fuzzy system to modify the value of α max in order to get the most appropriate level in every case. Based on the error and change of error in the last prediction, these rules determine the magnitude of α max in CSMAF. The fuzzy decision system consists of two input variables and one output variable. If the dimension of measurement ny = 2 , the input variables E (k ) and ∆E ( k ) at the kth scan are defined by
E (k ) =
E12 (k ) + E22 (k ) 2
,
∆E ( k ) =
∆E12 (k ) + ∆E22 ( k ) 2
Where E1 (k ) , E2 ( k ) and ∆E1 (k ) , ∆E2 (k ) are normalized error and change of error of each component of measurement respectively. E1 (k ) and ∆E1 (k ) are defined by ⎧ y1 (k ) − yˆ1 (k ) ⎪ y (k ) − y (k − 1) if y1 (k ) − yˆ1 (k ) < y1 (k ) − y1 (k − 1) 1 ⎪ 1 ⎪ y1 (k ) − yˆ1 (k ) E1 (k ) = ⎨ if y1 (k ) − yˆ1 (k ) > y1 (k ) − yˆ1 (k − 1) ⎪ y1 (k ) − yˆ1 (k ) ⎪ 0.0 if y1 (k ) − yˆ1 (k ) = y1 (k ) − y1 (k − 1) ⎪ ⎩
Unscented Fuzzy Tracking Algorithm for Maneuvering Target
⎧ E1 (k ) − E1 ( k − 1) ⎪ E1 (k − 1) ⎪ ⎪ E (k ) − E1 ( k − 1) ∆E1 (k ) = ⎨ 1 ⎪ E1 (k ) − E1 ( k − 1) ⎪ 0.0 ⎪ ⎩
711
if E1 ( k ) − E1 (k − 1) < E1 (k − 1) if E1 (k ) − E1 (k − 1) > E1 (k − 1) , if
E1 (k ) − E1 ( k − 1) = E1 (k − 1)
Where y1 (k ) and yˆ1 (k ) are the measurement and prediction respectively of the first component of measurement vector. The fuzzy sets for the input variables E (k ) and ∆E ( k ) are labeled as the linguistic terms of LP (large positive), MP (medium positive), SP (small positive), and ZE (zero). The output variable is amax . The fuzzy sets for amax are labeled in the linguistic terms of EP (extremely large positive), VP (very large positive), LP (large positive), MP (medium positive), SP (small positive), and ZE (zero). With the input and output variables defined above, a fuzzy rule can be expressed as follows in Table 1. The rules were obtained by interviewing some defense experts. Table 1. Fuzzy rules for α max
E
∆E
ZE SP MP LP
ZE VP LP EP VP
SP SP LP VP ZE
MP EP VP MP MP
LP EP VP MP EP
3 Unscented Fuzzy Filtering Based on Current Statistical Model The discrete state and measurement equations of the target are described by Esq. (2.1). The UFCSMAF equations are as follows: Initialization:
xˆ (k-1) = E [ x(k − 1)] , P(k-1) = E ⎡⎣(x(k-1) − xˆ (k − 1))(x(k-1) − xˆ (k − 1))T ⎤⎦
(3.1)
Calculate sigma points:
χ ( k − 1) = ⎡⎣ xˆ (k − 1) xˆ (k − 1) ± (nx + λ )P(k − 1) ⎤⎦
(3.2)
Time update: χ i (k | k − 1) = Fχ i ( k − 1) + G [ χ i ( k | k − 1) ]3 ⇒ χ i (k | k − 1) = Φχ i (k − 1)
(3.3)
712
S.-q. Hu, L.-w. Guo, and Z.-l. Jing 2 nx
x ( k | k −1) = ∑ wi χi (k | k −1)
(3.4)
i =0
2nx
P(k | k −1) = ∑wi [ χi (k | k −1) − x(k | k −1)][ χi (k | k −1) − x(k | k −1)] + 2ασa2BwBTw T
i =0
(3.5)
Measurement update: 2 nx
Pyy = ∑ wi ( ξ i ( k | k − 1) − yˆ ( k | k − 1) ) ( ξ i ( k | k − 1) − yˆ ( k | k − 1) ) T
T
(3.6)
i =0
2 nx
Pxk y k = ∑ wi ( χ i (k | k − 1) − x(k | k − 1) )( ξ i ( k | k − 1) − yˆ (k | k − 1) ) i=0
−1 K (k ) = Pxy Pyy T
T
(3.7) (3.8)
⎧ E (k ) Fuzzy system ⎯⎯⎯⎯⎯ → amax . ⎨ ⎩ ∆E (k )
4 Simulations and Performance Comparison In this section, proposed algorithm (UFCSMAF) is tested and compares its performance to UCSMAF. The mean and the standard deviation of the estimation error are chosen as the measure of performance. The scenario shown in figure 2 is as follows. The target initial position is (10, 15) km and initial velocity is (-0.3, 0) km/s. In first segment from 0 to 30s, the target moves with a constant velocity; in the second segment from 31 to 45s, the target makes a left turn with a centripetal acceleration 70m/s2; in the third segment from 46 to 60s, the target moves with a constant velocity; in the fourth segment from 61 to 80s, the target makes a right turn with a centripetal acceleration 20m/s2; in the fifth segment from 81 to 100s, the target moves with a constant velocity. Two distributed observers measure the target line-of-sight (LOS) angles and are located at (0, 0) km and (10, 0) km respectively. So the measured angles are given by
⎡ ⎛ y (k ) − y1o ⎞ ⎤ ⎢ arctan ⎜ 1 ⎟⎥ ⎝ x(k ) − xo ⎠ ⎥ ⎢ y ( k ) = h ( x( k ) ) + v ( k ) = ⎢ + v (k ) 2 ⎥ ⎛ ⎞ − ( ) y k y o ⎢arctan ⎜ ⎥ 2 ⎟ ⎢⎣ ⎝ x(k ) − xo ⎠ ⎥⎦
(
Where x( k ) = [ x(k ), x (k ), x(k ), y (k ), y (k ), y (k ) ] , xoi , yoi T
)
(4.1)
is the observer loca-
tion and v( k ) is Gaussian measurement noise. Table 2 lists the simulation parameters in detail.
Unscented Fuzzy Tracking Algorithm for Maneuvering Target
713
16000
14000
12000
Target trajectory y axis (m)
10000
8000
6000
4000
Observer 2000
0 0
5000
10000
15000
x axis (m)
Fig. 2. Tracking scenario Table 2. Simulation parameters
Item
Description
UCSMAF-I
UCSMAF-II
UFCSMAF
α
Maneuvering frequency
1/20
1/20
1/20
amax
Maximum acceleration
100 m/s2
500 m/s2
Adaptation
λ
UT parameter
0
0
0
Pd
Detection probability
1
1
1
T
Sample rates
1s
1s
1s
σθ
Sensor noise
0.003 rad
0.003 rad
0.003 rad
L
Simulation length
100
100
100
The theoretical lower-bound (CRLB) plays an important role in algorithm evaluation and assessment of the level of approximation introduced in the filtering algorithms. A derivation of such a bound is derived under the scenario. The CRLB of the state vector components are calculated as the diagonal elements of the inverse information matrix J
{
}
CRLB [ xˆ (k )] j = ⎡⎣ J -1 (k ) ⎤⎦ jj
(4.2)
If the target motion is deterministic (in the absence of process noise and with a non-random initial state vector), for the case of additive Gaussian noise, the information matrix can be calculated using the following recursion T
J ( k + 1) = ⎡⎣F −1 (k ) ⎤⎦ J (k )F −1 (k ) + H T (k + 1)R −1H ( k + 1)
(4.3)
714
S.-q. Hu, L.-w. Guo, and Z.-l. Jing
Where F is transition matrix and H is the Jacobian of h(*) evaluated at the true state. It is difficulty to get a tractable form of J in maneuvering scenario. Here we give a conservative bound. Suppose that the maneuvering characteristic is well known, and then we can piecewise calculate J . During the derivation, target state is given T by x(k ) = [x (k ), x (k ), y (k ), y (k )] . The Jacobian is given by
⎡ − ( y(k ) − yo1 (k )) ⎢ 2 2 ⎢ ( x(k ) − x1o (k ) ) + ( y(k ) − y1o (k )) H(k ) = ⎢ ⎢ − ( y(k ) − yo2 (k ) ) ⎢ 2 2 ⎢ ( x(k ) − xo2 (k ) ) + ( y(k ) − yo2 (k ) ) ⎣
( x(k) − x (k)) ( x(k) − x (k)) + ( y(k) − y (k )) ( x(k) − x (k )) ( x(k ) − x (k)) + ( y(k ) − y (k)) 1 o
0
2
1 o
1 o
2
2 o
2
2 o
0
2
2 o
⎤ 0⎥ ⎥ ⎥ ⎥ 0⎥ ⎥ ⎦
When target moves with a constant velocity, F is give by
⎡1 ⎢0 F=⎢ ⎢0 ⎢ ⎣0
T 1 0 0
0 0 1 0
0⎤ 0 ⎥⎥ T⎥ ⎥ 1⎦
When target moves with a circle motion
sin ΩT ⎡ ⎢1 Ω ⎢ 0 cos ΩT F = ⎢⎢ 1 − cos ΩT ⎢0 Ω ⎢ ⎢⎣0 sin ΩT
cos ΩT − 1 ⎤ ⎥ Ω ⎥ 0 − sin ΩT ⎥ sin ΩT ⎥ 1 ⎥ Ω ⎥ 0 cos ΩT ⎥⎦ 0
150
100
UCSMAF-I UCSMAF-II UFCSMAF
UCSMAF-I UCSMAF-II UFCSMAF
80
100
40
Mean error in y (m)
Mean error in vy (m/s)
60
50
0
-50
20
0
-20
-40
-60
-100
-80
-150
0
10
20
30
40
50
Time (s)
(a)
60
70
80
90
100
-100
0
10
20
30
40
50
Time (s)
(b)
Fig. 4. Mean estimation error (a) y and (b) v y
60
70
80
90
100
Unscented Fuzzy Tracking Algorithm for Maneuvering Target 150
715
140
UCSMAF-I UCSMAF-II UFCSMAF sqrt(CRLB)
UCSMAF-I UCSMAF-II UFCSMAF sqrt(CRLB)
120
Error St. Dev in vy (m/s)
Error St. Dev in y (m)
100 100
50
80
60
40
20
0
0
10
20
30
40
50
60
70
80
90
Time (s)
100
0
0
10
20
30
40
50
60
70
80
90
100
Time (s)
(a)
(b)
Fig. 5. The Error standard deviation versus square root of CRLB (a) y and (b) v y
Where Ω is angular rate, the initial J (0) is given by −1
⎡ σ x2 0 σ xy2 0⎤ ⎢ ⎥ 2 0 σv 0 0⎥ −1 ⎢ (4.4) J (0) = [ P(0) ] = 2 ⎢σ xy 0 σ y2 0 ⎥ ⎢ ⎥ 0 0 σ v2 ⎦⎥ ⎣⎢ 0 In (4.4) σ v is determined by the prior knowledge of target maximum speed. σ v = 300 m/s is used in this scenario.
⎧ 2 1 ⎛ cos 2 ϕ 2 2 cos 2 ϕ1 2 ⎞ σ ϕ1 + σ ϕ2 ⎟ ⎪σx = ⎜ r12 | C |2 ⎝ r22 ⎠ ⎪ ⎪ 2 2 1 ⎛ sin ϕ 2 2 sin ϕ1 2 ⎞ ⎪ 2 σ ϕ1 + σ ϕ2 ⎟ ⎨ σy = ⎜ | C |2 ⎝ r22 r12 ⎠ ⎪ ⎪ ⎪σ xy = 1 ⎜⎛ cos 2ϕ 2 σ ϕ2 + cos 2ϕ1 σ ϕ2 ⎟⎞ 1 2 ⎪⎩ | C |2 ⎝ 2r22 2r12 ⎠ In Esq. (4.5) ϕi is the measurement of observer i , | C |=
(4.5)
sin(ϕ 2 − ϕ1 ) and r2 r1
ri = ( x − xoi ) 2 + ( y − yoi ) 2 (i = 1, 2) . The mean of estimation error in target position y and velocity v y is show in figure 4. The standard deviation of error, against the square root of CRLB, is shown in figure 5. Observe from figure 4 three methods all approximately unbiased in nonmaneuvering and small maneuvering stages. While in large maneuvering stage, UCSMAF-II is the most accurate, followed by the UFCSMAF and UCSMAF-I. Observe from figure 5 UFCSMAF combines the advantages both UCSMAF-I and UCSMAF-II. In the non-maneuvering and small maneuvering stages the performance of UFCSMAF and UCSMAF-I are similar closely and obviously better than UCSMAF-II. In the large maneuvering stage the performance of UFCSMAF and
716
S.-q. Hu, L.-w. Guo, and Z.-l. Jing
UCSMAF-II are similar closely and obviously better than UCSMAF-I. All three algorithms show the level of error standard deviation is much higher than the theoretical bound since the bound given in this paper is fairly conservative. UFCSMAF is the preferred algorithm among the three considered since it can adaptive tune to parameter amax to adapt a wider range of maneuvers.
5 Conclusion A UFCSMAF algorithm has been presented in this paper for tracking a maneuvering target. The Cramer-Rao lower bound has also been derived. The theoretic analysis and computer simulations have confirmed that the adaptive algorithm has a robust advantage over a wide range of maneuvers and overcomes the shortcoming of the traditional current statistic model and adaptive filtering algorithm. Future work will need to address the better model characterizing target maneuvers. In addition, more accurate performance bound incorporating uncertainty will be developed.
Acknowledgements This research was jointly supported by National Natural Science Foundation of China (60375008), Hebei PH.D Discipline Special Foundation (B2004510), Aerospace Supporting Technology Foundation (JD04) and PH.D Foundation of Hebei University of Science and Technology.
References 1. Blackman. S, Popoli. R. Design and Analysis of Modern Tracking System, Artech House, Norwood, MA, 1999. 2. Chan. K, Lee, V. and Leung, H. Radar Tracking for Air Surveillance in a Stressful Environment Using a Fuzzy-gain Filter. IEEE Transactions on Aerospace and Electronic Systems, (1997) 80-89 3. Julier, S. J., Uhlmann, J. K., Unscented Filtering and Nonlinear Estimation. Proceedings of IEEE, (2004) 401-422 4. Julier, S.J, Uhlmann, J.K. A New Method for the Nonlinear Transformation of Means and Covariance in Filters and Estimators. IEEE Transactions on Automatic Control. (2000), 477-482 5. Wan E A, Vander M R. The Unscented Kalman Filter for Nonlinear Estimation. Proceedings of Signal Processing, (2000) 153-158.
A Pixel-Level Multisensor Image Fusion Algorithm Based on Fuzzy Logic1 Long Zhao, Baochang Xu, Weilong Tang, and Zhe Chen School of Automation Science and Electrical Engineering, Beihang University, Beijing 100083, China
[email protected] {xbcyl, welleo}@163.com
[email protected] Abstract. A new multisensor image fusion algorithm based on fuzzy logic is proposed. The membership function and fuzzy rules of the new algorithm is defined using the Fuzzy Inference System (FIS) editor of fuzzy logic toolbox in Matlab 6.1. The new algorithm is applied to fuse Charge-Coupled Device (CCD) and Synthetic Aperture Rader (SAR) images. The fusion result is compared with some other fusion algorithms through some performance evaluation measures for the fusion effect, and the comparison results show that the new algorithm is effective.
1 Introduction Multisensor Image Fusion (MIF) is the technique through which the images for the same target obtained by two or more sensors are processed and combined to create a single composite image which cannot be achieved with a single image source[1]. MIF makes good use of the complementary and redundant information of multisensor images and is widely used in such applications as computer vision, remote sensing, target recognition, medical image processing, and battle filed reconnaissance, etc[2]. Since fuzzy set theory was proposed by Zadeh in 1965, fuzzy logic has been widely used in target recognition, image analysis, and automatic control, etc[3]. And image fusion algorithm based on fuzzy logic has been researched more than ever. Based on the fuzzy logic of both Mamdani model and ANFIS, the medical image fusion algorithm for Infrared and CCD has been discussed by Harpreet, et al[4]. However, there have not been any fuzzy logic based CCD/SAR image fusion references in navigation/guidance application. On the basis of fuzzy logic, a new algorithm is proposed for CCD/SAR image fusion. The membership function and fuzzy rules of the new algorithm are defined by using FIS editor of fuzzy logic toolbox. The fusion result is compared with some other fusion algorithms through some performance evaluation measures for fusion effect. 1
The work is supported by aeronautical science fund of P. R. China (03D51007).
L. Wang and Y. Jin (Eds.): FSKD 2005, LNAI 3613, pp. 717 – 720, 2005. © Springer-Verlag Berlin Heidelberg 2005
718
L. Zhao et al.
2 Image Fusion Algorithm Based on Fuzzy Logic 2.1 FIS Based Mamdani Model FIS is a computing process by using fuzzy logic from input space to output space. FIS includes membership function, fuzzy logic computing, and fuzzy rules. The fuzzy rules of FIS based on Mamdani model are given by:[5] If input 1 is x, and input 2 is y, then output is z. 2.2 Image Fusion Based on FIS The membership function and fuzzy rules of CCD/SAR image fusion, based on fuzzy logic, are defined by using FIS editor of fuzzy logic toolbox in Matlab 6.1[5]. The algorithm process of CCD/SAR image fusion using fuzzy logic for pixel-level is given as follows. • • • • • • • •
Read CCD image in variable S1 and SAR image in variable S2, respectively. Locate the size of CCD image, which has r1 rows and c1 columns. Locate the size of SAR image, which has r2 rows and c2 columns. Select r and c, which stands for min (r1, r2) and min (c1, c2) respectively, then calculate min (r, c). Obtain a new CCD image in variable M1 and a new SAR image in variable M2 from S1 and S2, respectively, and both M1 and M2 are the same in size. Make an image fusion.fis file by FIS editor, which contains two input images. Decide the number and type of membership function for both CCD and SAR images, and make rules from two input source images to the output fused image. Obtain the fused image and the performance evaluation measures of fusion effect.
3 Fusion Experiment and Fusion Effect Analysis 3.1 Experimental Results for CCD/SAR The image fusion algorithm based on fuzzy logic is applied to CCD/SAR, and a simulating program is designed. In order to prove the effectiveness of the new algorithm, it is compared with other three fusion algorithms[6]. Algorithm 1: weight averaging based on principal component analysis fusion rule. Algorithm 2: LPD + MAX operator fusion rule. Algorithm 3: LPD + mean operator fusion rule. The source images and fused image of CCD/SAR are shown in figure1.
A Pixel-Level Multisensor Image Fusion Algorithm Based on Fuzzy Logic
(a)
(b)
(c)
(d)
(e)
719
(f)
Fig. 1. (a) CCD source image, (b) SAR source image, (c) fused image by fuzzy logic, (d) fused image by algorithm 1, (e) fused image by algorithm 2, and (f) fused image by algorithm 3 Table 1. Performance evaluation measures of fusion effect
Algorithm
Entropy
New Algorithm Algorithm 1 Algorithm 2 Algorithm 3
7.645 7.510 7.571 7.504
Mean of Cross Entropy 0.473 0.503 0.312 0.511
Mean Square Root of Cross Entropy 0.626 0.589 0.360 0.595
Mean
Standard Deviation
124.02 126.85 149.49 128.14
58.62 45.67 73.90 45.45
3.2 Performance Evaluation of Fusion Effect In order to analyze the fusion effect and quality of the fused image, image entropy, mean of cross entropy, mean square root of cross entropy, mean, and standard deviation are used as the objective performance evaluation measures [6,7]. The performance evaluation measures of fusion effect for CCD/SAR are shown in table 1. The experimental results in figure 1 and table 1 show that, because of the great difference in gray level and contrast between CCD image and SAR image, the fused image using algorithm 1 and algorithm 3 cannot obtain more information from source images. And the fusion effect is not good enough. Algorithm 2 and the new algorithm have a better performance than the other two algorithms in fusion effect. However, since the MAX rule is chosen in algorithm 2, the fused image has a stronger aberration, and although it has the smallest cross entropy, it’s smaller in entropy than the new algorithm. The new algorithm adopts the algorithm of fuzzy logic, and the fused image is much more tied to human’s vision characteristic, and is therefore more effective.
4 Conclusion A new image fusion algorithm based on fuzzy logic is studied. The membership function and fuzzy rules of the new algorithm is defined using FIS editor of fuzzy logic toolbox in Matlab 6.1, and the new algorithm is easy to apply. The new algorithm is applied to CCD/SAR image fusion, and the fusion result is compared with other three fusion algorithms through some performance evaluation measures of the fusion effect. The comparison results demonstrate that the new fusion algorithm is effective.
720
L. Zhao et al.
References 1. Guoqiang Ni: Study on Multi-Band Image Fusion Algorithms and Its Progressing (I). Photoelectron Technique and Information, 14 (2001) 11-17 2. Mingge Xia, You He: A Survey on Multisensor Image Fusion. Electronics and Control, 9 (2002) 1-7 3. Zadeh L. A.: Fuzzy sets. Information and Control, 8 (1965) 338-353 4. Singh H., Raj J., Kaur G.: Image Fusion Using Fuzzy Logic and Application. Proceedings of IEEE International Conference on Fuzzy Systems, vol. 1. (2004) 337-340 5. Zhixing Zhang, Chunzai Sun: Neuro-Fuzzy and Soft Computing. Xi’an Jiaotong University Press, China Xi’an (2000) 6. Yanli Wang, Zhe Chen: Performance Evaluation of Several Fusion Approaches for CCD/SAR Images. Chinese Geographical Science, 13 (2003) 91-96 7. Yanmei Cui, Guoqiang Ni: Analysis and Evaluation of The Effect of Image Fusion Using Statistics Paraments. Journal of Beijing Institute of Technology, 20 (2000) 102-106
Approximation Bound for Fuzzy-Neural Networks with Bell Membership Function Weimin Ma1,2 and Guoqing Chen1 1
2
School of Economics and Management, Tsinghua University, Beijing, 100084, P.R. China {mawm, chengq}@em.tsinghua.edu.cn School of Economics and Management, Xi’an Institute of Technology, Xi’an, Shaanxi Province, 710032, P.R. China
Abstract. A great deal of research has been devoted in recent years to the designing Fuzzy-Neural Networks (FNN) from input-output data. And some works were also done to analyze the performance of some methods from a rigorous mathematical point of view. In this paper, the approximation bound for the clustering method, which is employed to design the FNN with the Bell Membership Function, is established. The detailed formulas of the error bound between the nonlinear function to be approximated and the FNN system designed based on the input-output data are derived.
1
Introduction
FNN systems are hybrid systems that combine the theories of fuzzy logic and neural networks. Designing the FNN system based on the input-output data is a very important problem [1,2,3]. Some Universal approximation capabilities for a broad range of neural network topologies have been established by researchers like Ito [4], and T.P.Chen [5]. Their work concentrated on the question of denseness. Some Approximation Accuracies of FNN have also been established in [3]. More results concerning the approximation of Neural Network can be found in [6,7,8]. In paper [2], an approach so called Nearest Neighborhood Clustering was introduced for training of Fuzzy Logic System. The paper [3] studied the relevant approximation accuracy of the clustering method with Triangular Membership Function and Gaussian Membership Function. In this paper, taking advantage of some similar techniques of paper [3], we obtain the upper bound for the approximation by using the clustering method with Bell Membership Function.
2
Clustering Method with Bell Membership Function
Before introducing the main results, we firstly introduce some basic knowledge on the designing FNN systems with clustering method, which was proposed in [2]. Given the input-out data pairs (xq0 , y0q ),
q = 1, 2, . . .
L. Wang and Y. Jin (Eds.): FSKD 2005, LNAI 3613, pp. 721–727, 2005. c Springer-Verlag Berlin Heidelberg 2005
(1)
722
W. Ma and G. Chen
where xq0 ∈ U = [α1 , β1 ] × . . . × [αn , βn ] ⊂ Rn and y0q ∈ Uy = [αy , βy ] ⊂ R. If the data are assumed to be generated by an unknown nonlinear function y = f (x), the clustering method in [2] can help us to design a FNN system to approximate the function f (x). For convenience to illustrate the main result of this paper, we describe this method in a brief way as follows. Step 1. To begin with the first input-output pair (x10 , y01 ), select a radius parameter r, establish a cluster center with letting x1c = x10 , and set yc1 (1) = y01 , B 1 (1) = 1. Step 2. For the kth input-out pair (xk0 , y0k ), k = 2, . . ., suppose there are M lk clusters with centers at x1c , x2c , . . . , xM c . Find the nearest cluster center xc k to x0 to satisfy |xk0 − xlck | = min |xk0 − xlc |,
l = 1, 2, . . . , M.
l
(2)
Then there are two cases Case 1. If |xk0 −xlck | ≥ r, establish xk0 as a new cluster center with xM+1 = xk0 , c M+1 k M+1 l l yc (k) = y0 and B (k) = 1, and keep yc (k) = yc (k − 1) and B l (k) = B l (k − 1) for any l. Case 2. If |xk0 − xlck | < r, do the following: yclk (k − 1)B lk (k − 1) + y0k B lk (k − 1) + 1
yclk (k) =
B lk (k) = B lk (k − 1) + 1
(3) (4)
and meanwhile set ycl (k) = ycl (k − 1),
B l (k) = B l (k − 1), for l = lk .
(5)
Step 3. Then the FNN system can be constructed as: M 1 ycl (k) · |x−xl |2 1+ σ2c l=1 fˆk (x) = M 1 l=1
M
=
ycl (k)
1+
·
|x−xlc |2 σ2
n
j=1
l=1 M n l=1 j=1
1
1+
|xj −xl |2 c,j σ2
(6)
1 1+
|xj −xl |2 c,j σ2
where M = M + 1 for case 1 and M = M for case 2. Step 4. Repeat by going to Step 2 with k = k + 1. The above FNN system is constructed using singleton fuzzier, product inference engine and center-average defuzzifier, as detailed in [2]: the basic idea is to group the data pairs into clusters in terms of their distribution and then use the
Approximation Bound for FNN with Bell Membership Function
723
fuzzy IF-THEN rules for one cluster to construct the fuzzy system; the radius parameter r is selected to determine the size of the clusters, namely the smaller of r, the more number of clusters and vice versa. Some intensive simulation results concerning this method for various problems can also be found in that paper. This paper concentrates on the approximation bound for this method with bell membership function.
3
Approximation Bound with Bell Membership Function
The following theorem gives the approximation bound of FNN system fˆk (x) of (6) which is constructed by using the clustering method with bell membership function. Theorem 1. Let f (x) be a continuous function on U that generates the inputoutput pairs in (1). Then the following approximation property holds for the FNN system fˆk (x) of (6): |f (x) − fˆk (x)| ≤
n ∂f d2x n−1 r + dx + 2 · M · (σ + ) · ∂xi σ ∞ i=1
(7)
where the infinite norm ·∞ is defined as d(x)∞ = supx∈U |d(x)| and dx is the distance from x to the nearest cluster, i.e., dx = min |x − xlc | = |x − xlcx |
(8)
l
Proof. From (6) we have M
|f (x) − fˆk (x)| ≤
|f (x) −
ycl (k)|
l=1 M n
l=1 j=1
n
·
1
j=1
1+
|xj −xl |2 c,j σ2
(9)
1 1+
|xj −xl |2 c,j σ2
Paper [3] obtain the following result when the relevant approximation bound for the clustering method with triangular membership function was discussed: n ∂f
l l |f (x) − yc (k)| ≤ (10) ∂xi · |xi − xc,i | + r ∞ i=1 Combining the (9) and (10), we have
M n n
∂f 1 l ∂xi · |xi − xc,i | + r · |xj −xl |2 c,j ∞ j=1 l=1 i=1 1+ σ2 |f (x) − fˆk (x)| ≤ M n 1 l=1 j=1
1+
|xj −xl |2 c,j σ2
724
W. Ma and G. Chen
⎤⎫ ⎧ ⎡ M n ⎪ ⎪ ⎪ ⎪ 1 l ⎪ ⎪ |xi − xc,i | · ⎪ ⎢ ⎥ ⎪ |xj −xl |2 n ⎨ ⎬ c,j ∂f ⎢ ⎥ j=1 l=1 1+ σ2 ⎢ ⎥ ≤ · r + ⎥⎪ ⎪ ∂xi ∞ ⎢ M n ⎣ ⎦⎪ i=1 ⎪ ⎪ 1 ⎪ ⎪ ⎪ l 2 ⎩ ⎭ |x −x | l=1 j=1
Now, we just focus on analyzing the term M n l |xi − xc,i | · l=1 M n
j=1
1+
j
(11)
c,j σ2
1 1+
|xj −xl |2 c,j σ2
1
l=1 j=1
1+
|xj −xl |2 c,j σ2
on the right-hand side of (11). We only consider the case i = 1 and the proof remains the same for i = 1, 2, . . . , n. Employing the similar method in [3], given any point x = (x1 , x2 , . . . , xn ) ∈ U , we divide space U in to 2n areas and define some sets concerning the cluster center as follows: U1x = {x ∈ U : x1 − x1 ≥ 0, . . . , xn − xn ≥ 0} U2x = {x ∈ U : x1 − x1 ≥ 0, . . . , xn − xn < 0} ... U2xn −1 = {x ∈ U : x1 − x1 < 0, . . . , xn − xn ≥ 0} U2xn = {x ∈ U : x1 − x1 < 0, . . . , xn − xn < 0}
(12)
And define some sets concerning the cluster centers V x = {x ∈ U : |x1 − x1 | < dx }, x
V = {x ∈ U : |x1 − x1 | ≥ dx }, x x Vmx = V ∩ Um , (m = 1, . . . , 2n ).
(13)
Apparently, there are two cases that we need to consider. Case 1: xlc ∈ V x , which indicates |xlc,1 − x1 | < dx , we have
n 1 l |x1 − xc,1 | · |xj −xl |2 xlc ∈V x
M n l=1 j=1
dx ·
0 and Π e = Π Te > 0 satisfying following LMI (23) guarantees G e
∞
≤γ .
Γ11 * ⎡ ⎢ T L=⎢ H e ( µ )Σ e + J e ( µ ) Be ( µ ) J e ( µ ) J e ( µ )T − Π e T T −2 ⎢⎣Π e Fe ( µ ) + γ Π e Ge ( µ ) Ce ( µ )Σ e Π e K e ( µ )T
* ⎤ * ⎥⎥ < 0, Γ33 ⎥⎦
(23)
where
Γ11 = Σ e Ae ( µ )T + Ae ( µ )Σe + Be ( µ ) Be ( µ )T + γ −2 Σe Ce ( µ )T Ce ( µ )Σ e , Γ 33 = γ −2 Π e Ge ( µ )T Ge ( µ )Π e − Π e . ⎡Π + σ n2 Π −1 Let γ = 2σ n , Σ e = diag (Σ1 , σ n2 Σ1−1 , 2σ n ) and Π e = ⎢ 2 −1 ⎣Π − σ n Π LMI (23) can be written as follows : ⎡U1T ⎢ L=⎢ 0 ⎢ 0 ⎣ ⎡V1T ⎢ +⎢ 0 ⎢0 ⎣
0 V2T 0
0 U 2T 0
⎡U1 0 ⎤ ⎥ ⎢ 0 ⎥ Lc ( µ ) ⎢ 0 ⎢0 U 2T ⎥⎦ ⎣
⎡V1 0⎤ ⎥ ⎢ 0 ⎥ Lo ( µ ) ⎢ 0 ⎢0 V2T ⎥⎦ ⎣
0 V2 0
0 U2 0 0 0 V2
0⎤ ⎥ 0⎥ U 2 ⎥⎦ ⎤ ⎥ ⎥ < 0, ⎥ ⎦
Π − σ n2 Π −1 ⎤ ⎥ . Then Π + σ n2 Π −1 ⎦
(24)
where ⎡ σ n Π −1 ⎤ ⎡ 0 σ n Σ1−1 0 ⎤ ⎡ I 0 0⎤ ⎡I ⎤ T T , , , . U1 = ⎢ U V V = = = ⎥ 2 ⎢ 2 ⎥ ⎢ ⎥ 1 ⎢ −1 ⎥ 0 −I ⎦ ⎣0 0 I ⎦ ⎣I ⎦ ⎣0 ⎣ −σ n Π ⎦ Case 2: ( k = n, v = q − 1 ) In this case, A12 ( µ ) , A21 ( µ ) , A22 ( µ ) , F21 ( µ ) , F22 ( µ ) , H12 ( µ ) and H 22 ( µ ) are empty matrices so that the error system becomes
A Balanced Model Reduction for T-S Fuzzy Systems with IQC’s
⎡ Ae ( µ ) ⎢ G = ⎢ H e (µ ) ⎢ C (µ ) ⎣ e ⎡ A( µ ) ⎢ 0 ⎢ ⎢ H11 ( µ ) := ⎢ ⎢ 0 ⎢ 0 ⎢ ⎢⎣ −C ( µ ) e
809
Be ( µ ) ⎤ ⎥ Ke (µ ) J e (µ ) ⎥ 0 ⎥ Ge ( µ ) ⎦ 0 0 0 F11 ( µ ) B(µ ) ⎤ 0 A( µ ) F11 ( µ ) F12 ( µ ) B ( µ ) ⎥⎥ 0 0 0 K11 ( µ ) J1 ( µ ) ⎥ ⎥. 0 H11 ( µ ) K11 ( µ ) K12 ( µ ) J1 ( µ ) ⎥ 0 H 21 ( µ ) K 21 ( µ ) K 22 ( µ ) J 2 ( µ ) ⎥ ⎥ 0 ⎥ C ( µ ) −G1 ( µ ) G1 ( µ ) G2 ( µ ) ⎦ Fe ( µ )
(25)
The change of coordinate with M in the error system gives ⎡ Ae ( µ ) Fe ( µ ) Be ( µ ) ⎤ ⎡ M −1 Ae ( µ ) M ⎢ ⎥ ⎢ G e = ⎢ H e ( µ ) K e ( µ ) J e ( µ ) ⎥ := ⎢ H e ( µ ) M ⎢ C (µ ) G (µ ) 0 ⎥ ⎢ Ce ( µ ) M e ⎣ e ⎦ ⎣
M −1 Fe ( µ ) M −1 Be ( µ ) ⎤ ⎥ K e (µ ) J e (µ ) ⎥ , ⎥ Ge ( µ ) 0 ⎦
(26)
⎡I I ⎤ where M = ⎢ ⎥. ⎣ I −I ⎦ We define γ = 2π q and ⎡ Π1 + π q2 Π1−1 0 ⎤ ⎡Π1 0 ⎤ ⎡Σ ⎢ 2 −1 Π=⎢ ⎥ , Σe = ⎢ 2 −1 ⎥ , Π e = ⎢ Π1 − π q Π1 I 0 π Σ 0 π q iq ⎥ q ⎢⎣ ⎣ ⎦ ⎦ ⎢ 0 ⎣
Π1 − π q2 Π1−1 Π1 + π q2 Π1−1 0
⎤ ⎥ ⎥. 2π q I iq ⎥ ⎦
0 0
Then LMI (23) can be written as ⎡U1T ⎢ L=⎢ 0 ⎢ 0 ⎣
⎡V1T ⎢ +⎢ 0 ⎢0 ⎣
0 T 2
U 0
0 V2T 0
⎡U1 0 ⎤ ⎥ ⎢ 0 ⎥ Lc ( µ ) ⎢ 0 ⎢0 U 2T ⎥⎦ ⎣
⎡V1 0⎤ ⎥ ⎢ 0 ⎥ Lo ( µ ) ⎢ 0 T ⎥ ⎢0 V2 ⎦ ⎣
0 U2 0 0 V2 0
0⎤ ⎥ 0⎥ U 2 ⎥⎦
(27)
0⎤ ⎥ 0 ⎥ < 0, V2 ⎥⎦
where
U1 = [ I
⎡π Π −1 ⎡ I I 0⎤ 0] , U 2 = ⎢ , V1 = ⎡⎣ 0 π q Σ −1 ⎤⎦ , V2 = ⎢ q 1 ⎥ ⎣0 0 I ⎦ ⎣ 0
−π q Π1−1 0
0⎤ ⎥. −I ⎦
This completes the proof. In theorem 2, we have derived an upper bound of the model reduction error. In order to get a less conservative model reduction error bound, it is necessary for n − k
810
S.-H. Yoo and B.-J. Choi
smallest σ i ’s of Σ and q − v smallest π i ’s of Π to be small. Hence we choose a cost function as J = tr ( PQ) + α tr ( RS ) for a positive constant α . Thus, we minimize the non-convex cost function subject to the convex constraints (9) and (11). Since this optimization problem is non-convex, the optimization problem is very difficult to solve it. So we suggest an alternative suboptimal procedure using an iterative method. We summarize an iterative method to solve a suboptimal problem. step 1: Set i = 0 . Initialize Pi , Qi , Ri and Si such that tr ( Pi + Qi ) + α tr ( Ri + Si ) is minimized subject to LMI's (9) and (11). step 2: Set i = i + 1 . 1) Minimize J i = tr ( Pi −1Qi ) + α tr ( Ri Si −1 ) subject to LMI (9). 2) Minimize J i = tr ( PQ i i ) + α tr ( Ri Si ) subject to LMI (11). step 3: If J i − J i −1 is less than a small tolerance level, stop iteration. Otherwise, go to step 2.
5 Concluding Remark In this paper, we have studied a balanced model reduction problem for T-S fuzzy systems with IQC’s. For this purpose, we have defined generalized controllability and observability Gramians for the uncertain fuzzy system. This generalized Gramians can be obtained from solutions of LMI problem. Using the generalized Gramians, we have derived a balanced state space realization. We have obtained the reduced model of the fuzzy system by truncating not only some state variables but also some IQC’s.
References 1. Moore, B.C. : Principal component analysis in linear systems: Controllability, observability and model reduction. IEEE Trans. Automatic Contr., Vol. 26 (1982) 17-32 2. Pernebo, L., Silverman, L.M. : Model reduction via balanced state space representations. IEEE Trans. Automatic Contr., Vol. 27 (1982) 382-387 3. Glover, K. : All optimal Hankel-norm approximations of linear multivariable systems and their error bounds. Int. J. Control, Vol. 39 (1984) 1115-1193 4. Liu, Y., Anderson, B.D.O. : Singular perturbation approximation of balanced systems. Int. J. Control, Vol. 50 (1989) 1379-1405 5. Beck, C.L., Doyle, J., Glover, K. : Model reduction of multidimensional and uncertain systems. IEEE Trans. Automatic Contr., Vol. 41 (1996) 1466-1477 6. Wood, G.D., Goddard, P.J., Glover, K. : Approximation of linear parameter varying systems. Proceedings of the 35th CDC, Kobe, Japan, Dec. (1996) 406-411 7. Wu, F. : Induced L2 norm model reduction of polytopic uncertain linear systems. Automatica, Vol. 32. No. 10 (1996) 1417-1426 8. Haddad, W.M., Kapila, V. : Robust, reduced order modeling for state space systems via parameter dependent bounding functions. Proceedings of American control conference, Seattle, Washington, June (1996) 4010-4014
A Balanced Model Reduction for T-S Fuzzy Systems with IQC’s
811
9. Tanaka, K., Ikeda, T., Wang, H.O. : Robust stabilization of a class of uncertain nonlinear systems via fuzzy control : Quadratic stabilizability, control theory, and linear matrix inequalities. IEEE Trans. Fuzzy Systems, Vol. 4. No. 1. Feb. (1996) 1-13 10. Nguang, S.K., Shi, P. : Fuzzy output feedback control design for nonlinear systems : an LMI approach. IEEE Trans. Fuzzy Systems, Vol. 11. No. 3. June (2003) 331-340 11. Tuan, H.D., Apkarian, P., Narikiyo, T., Yamamoto, Y. : Parameterized linear matrix inequality techniques in fuzzy control system design. IEEE Trans. Fuzzy Systems, Vol. 9. No. 2. April (2001) 324-332 12. Yakubovich, V.A. : Frequency conditions of absolute stability of control systems with many nonlinearities. Automatica Telemekhanica, Vol. 28 (1967) 5-30
An Integrated Navigation System of NGIMU/ GPS Using a Fuzzy Logic Adaptive Kalman Filter Mingli Ding and Qi Wang Dept. of Automatic Test and Control, Harbin Institute of Technology, 150001 Harbin, China
[email protected] Abstract. The Non-gyro inertial measurement unit (NGIMU) uses only accelerometers replacing gyroscopes to compute the motion of a moving body. In a NGIMU system, an inevitable accumulation error of navigation parameters is produced due to the existence of the dynamic noise of the accelerometer output. When designing an integrated navigation system, which is based on a proposed nine-configuration NGIMU and a single antenna Global Positioning System (GPS) by using the conventional Kalman filter (CKF), the filtering results are divergent because of the complicity of the system measurement noise. So a fuzzy logic adaptive Kalman filter (FLAKF) is applied in the design of NGIMU/GPS. The FLAKF optimizes the CKF by detecting the bias in the measurement and prevents the divergence of the CKF. A simulation case for estimating the position and the velocity is investigated by this approach. Results verify the feasibility of the FLAKF.
1 Introduction Most current inertial measurement units (IMU) use linear accelerometers and gyroscopes to sense the linear acceleration and angular rate of a moving body respectively. In a non-gyro inertial measurement unit (NGIMU) [1-6], accelerometers are not only used to acquire the linear acceleration, but also replace gyroscopes to compute the angular rate according to their positions in three-dimension space. NGIMU has the advantages of anti-high g value shock, low power consumption, small volume and low cost. It can be applied to some specific occasions such as tactic missiles, intelligent bombs and so on. But due to the existence of the dynamic noise of the accelerometer output, it is inevitable that the system error increases quickly with time by integrating the accelerometer output. The best method to solve this problem above is the application of the integrated navigation system. NGIMU/GPS integrated navigation system can fully exert its superiority and overcome its shortcomings to realize the real-time high precision positioning in a high kinematic and strong electrically disturbed circumstance. But when using the conventional Kalman filter (CKF) in the NGIMU/GPS, the filtering results are often divergent due to the uncertainty of the statistical characteristics of dynamic noise of the accelerometer output and the system measurement noise. So, in order to ascertain the statistical characteristics of the noises mentioned above and L. Wang and Y. Jin (Eds.): FSKD 2005, LNAI 3613, pp. 812 – 821, 2005. © Springer-Verlag Berlin Heidelberg 2005
A Integrated Navigation System of NGIMU/ GPS Using a FLAKF
813
alleviate the consumption error, a new fuzzy logic adaptive Kalman filter (FLAKF) [7] is proposed in designing a NGIMU/GPS integrated navigation system.
2 Accelerometer Output Equation As all know, the precession of gyroscopes can be used to measure the angular rate. Based on this principle, IMU measures the angular rate of a moving body. The angle value can be obtained by integrating the angular rate with given initial conditions. With this angle value and the linear acceleration values in three directions, the current posture of the moving body can be estimated. The angular rate in a certain direction can be calculated by using the linear acceleration between two points. To obtain the linear and angular motion parameters of a moving body in three-dimension space, the accelerometers need to be appropriately distributed on the moving body and the analysis of the accelerometer outputs is needed. An inertial frame and a rotating moving body frame are exhibited in Fig. 1, where b represents the moving body frame and I the inertial frame. ZI Zb
M(x,y,z)
ω
r
Yb
Ob
R' R
Xb
OI
YI
XI Fig. 1. Geometry of body frame (b) and inertial frame (I)
The acceleration of point M is given by
+ r + ω × r + 2ω × r + ω × (ω × r ) , a=R I b b
(1)
is the inertial acwhere rb is the acceleration of point M relative to body frame. R I b I celeration of O relative to O . 2ω × rb is known as the Coriolis acceleration, ω × ( ω × r ) represents a centripetal acceleration, and ω × r is the tangential acceleration owing to angular acceleration of the rotating frame If M is fixed in the b frame, the terms rb and rb vanish. And Eq.(1) can be rewritten as
+ ω × r + ω × (ω × r ) . a=R I
(2)
814
M. Ding and Q. Wang
Thus the accelerometers rigidly mounted at location ri on the body with sensing direction θi produce Ai as outputs. + Ω r + ΩΩr ] ⋅ θ Ai = [ R I i i i
(i = 1,2,..., N ) ,
(3)
where ⎡ 0 ⎢ Ω = ⎢ ωz ⎢− ω y ⎣
− ωz 0
ωx
⎤ ⎡R ωy ⎤ Ix ⎥ ⎢ ⎥ − ω x ⎥ , RI = ⎢ RIy ⎥ .
(4)
⎥ ⎢R ⎣ Iz ⎦
0 ⎥⎦
3 Nine-Accelerometer Configuration In this research, a new nine-accelerometer configuration of NGIMU is proposed. The locations and the sensing directions of the nine accelerometers in the body frame are shown in Fig.2. Each arrow in Fig.2 points to the sensing direction of each accelerometer.
Z A8
A6
A5
A7 A9
O
A2
A4 Y
A1
A3 X Fig. 2. Nine-accelerometer configuration of NGIMU
The locations and sensing directions of the nine accelerometers are
⎡0 0 1 − 1 0 0 0 0 1⎤ [r1 ,", r9 ] = l ⎢⎢1 − 1 0 0 1 − 1 0 0 0⎥⎥ , ⎢⎣0 0 0 0 0 0 1 1 0⎥⎦
(5)
where l is the distance between the accelerometer and the origin of the body frame. ⎡1 1 0 0 0 0 1 0 0⎤ [θ1 ,", θ9 ] = ⎢⎢0 0 1 1 0 0 0 1 0⎥⎥ . ⎢⎣0 0 0 0 1 1 0 0 1⎥⎦
(6)
A Integrated Navigation System of NGIMU/ GPS Using a FLAKF
815
It is easy to obtain
⎡ 0 0 0 0 1 −1 0 −1 0 ⎤ [r1 × θ1 , " , r9 × θ 9 ] = l ⎢⎢ 0 0 0 0 0 0 1 0 − 1⎥⎥ . ⎢⎣− 1 1 1 − 1 0 0 0 0 0 ⎥⎦
(7)
With Eq.(3), we get the accelerometer output equation ⎡0 ⎢0 ⎢ ⎢0 ⎢ ⎢0 Ai = ⎢ l ⎢ ⎢− l ⎢0 ⎢ ⎢− l ⎢0 ⎣
0
−l
0
l
0 0 0
l −l 0
0
0
l
0
0 −l
0 0
1 0 0⎤ ⎡0 ⎥ ⎢0 1 0 0 ⎥ ⎡ ω x ⎤ ⎢ 0 1 0⎥ ⎢ ⎥ ⎢0 ⎥ ω y ⎢ 0 1 0⎥ ⎢ ⎥ ⎢0 ⎢ ω z ⎥ 0 0 1 ⎥ ⎢ ⎥ + ⎢ 0 ⎥ R Ix ⎢ 0 0 1 ⎥ ⎢ ⎥ ⎢ 0 ⎢ R Iy ⎥ 1 0 0 ⎥ ⎢ ⎥ ⎢ 0 ⎥ ⎢ R Iz ⎥ ⎢ 0 1 0⎥ ⎣ ⎦ ⎢0 ⎢0 0 0 1 ⎥⎦ ⎣
0 0
0
0
0 0
0
0
0 0
0
0
0 0 0 0
0 l
0 0
0 0
−l
0
0 0
0
l
0 0
l
0
0 0
0
l
l ⎤ − l⎥ ⎥ ⎡ ω x2 ⎤ l ⎥⎢ 2 ⎥ ⎥ ωy − l⎥⎢ 2 ⎥ ⎢ ωz ⎥ 0 ⎥⎢ ⎥. ⎥ ⎢ω y ω z ⎥ 0⎥ ⎢ω x ω z ⎥ ⎥ 0 ⎥⎢ ⎥ ⎢ω x ω y ⎦⎥ 0 ⎥⎣ 0 ⎥⎦
(8)
With Eq.(8), the linear expressions are 1 ( A3 + A4 + A5 − A6 − 2 A8 ) , 4l
(9a)
1 (− A1 − A2 + A5 + A6 + 2 A7 − 2 A9 ) , 4l
(9b)
1 ( − A1 + A2 + A3 − A4 ) , 4l
(9c)
ω x =
ω y =
ω z =
= 1 ( A + A ) , R = 1 ( A + A ) , R = 1 ( A + A ) . R Iz 5 6 Ix 1 2 Iy 3 4 2 2 2
(9d)
4 Conventional Kalman Filter (CKF) In Eq.(9), the linear acceleration and the angular acceleration are all expressed as the linear combinations of the accelerometer outputs. The conventional algorithm computes the navigation parameters as the time integration or double integrations of the equations in Eq.(9). But a numerical solution for the navigation parameters depends on the value calculated from previous time steps. And if the accelerometer output has a dynamic error, the error of the navigation parameters will inevitably increase with t and t 2 rapidly. So the design of a NGMIMU/GPS integrated navigation system is expected. In this section, the CKF is used in the system.
816
M. Ding and Q. Wang
In order to analyze the problem in focus, we ignore the disturbance error contributed to the accelerometers due to the difference of the accelerometers’ sensing directions in three-dimension space. Define the states vector X (t ) for the motion as X (t ) = [S e (t ) S N (t ) Ve (t ) V N (t ) ω (t )] , T
(10)
where S e (t ) is the estimation eastern position of the moving body at time t with respect to the earth frame (as inertial frame), S N (t ) is the estimation northern position, Ve (t ) is the estimation eastern velocity, V N (t ) is the estimation northern velocity and ω (t ) is the estimation angular rate along x axis. Considering the relationship between the parameters, the states equations are then S e = Ve , S N = V N , Ve = a e , VN = a N , ω = ω x ,
(11)
+ T R + T R , a = T R + T R + T R . a e = T11 R Ix 21 Iy 31 Iz N 12 Ix 22 Iy 32 Iz
(12)
where
In Eq.(12), T11 , T21 , T31 , T12 , T22 and T32 are the components of the coordinate trans⎡T11 form matrix T = ⎢T21 ⎢ ⎢⎣T31 We also obtain
T12 T22 T32
ω x =
T13 ⎤ T23 ⎥ . ⎥ T33 ⎥⎦ 1 ( A3 + A4 + A5 − A6 − 2 A8 ) . 4l
(13)
The system state equation and system measurement equation in matrix form become X = ΨX + Gu + ΓW ,
(14)
Z = HX + ε .
(15)
and
In the system measurement equation Eq.(15), the input vector X is the output of the GPS receiver (position and velocity). In Eq.(14) and Eq.(15), ε and W denote the measurement noise matrix and the dynamic noise matrix respectively. The preceding results are expressed in continuous form. Equation of state and measurement for discrete time may be deduced by assigning t = kT , where k = 1,2,... , and T denotes the sampling period. Straightforward application of the discrete time Kalman filter to (14) and (15) yields the CKF algorithm as outlined below. Xˆ 0 0 is the initial estimate of the CKF state vector. P0 0 is the initial estimate of the CKF state vector error covariance matrix.
A Integrated Navigation System of NGIMU/ GPS Using a FLAKF
817
5 Fuzzy Logic Adaptive Kalman Filter (FLAKF) The CKF algorithm mentioned in the above section requires that the dynamic noise and the system measurement noise process are exactly known, and the noises processes are zero mean white noise. In practice, the statistical characteristics of the noises are uncertain. In the GPS measurement equation Eq.(15), among the measurement noise ε , the remnant ionosphere delay modified by the ionosphere model is just not zero mean white noise. Furthermore, the value of the dynamic error of the accelerometer output also cannot be exactly obtained in a kinematic NGIMU/GPS positioning. These problems result in the calculating error of K k and make the filtering process divergent. In order to solve the divergence due to modeling error, In this paper, a fuzzy logic adaptive Kalman filter (FLAKF) is proposed to adjust the exponential weighting of a weighted CKF and prevent the Kalman filter from divergence. The fuzzy logic adaptive Kalman filter will continually adjust the noise strengths in the filter’s internal model, and tune the filter as well as possible. The structure of NGIMU/GPS using FLAKF is shown in Fig. 3.
NGIMU System
EKF
Output
Residual error GPS System
Pseudo-range
FLAKF
Fig. 3. Structure of NGIMU/GPS using FLAKF
The fuzzy logic is a knowledge-based system operating on linguistic variables. The advantages of fuzzy logic with respect to more traditional adaptation schemes are the simplicity of the approach and the application of knowledge about the controlled system. In this paper, FLAKF is to detect the bias in the measurement and prevent divergence of the CKF. Let us assume the model covariance matrices as ⎧ Rk = αRv , ⎨ ⎩Qk = βQ v
(16)
where α and β are the adjustment ratios which are time-varying. The value of α and β can be acquired from the outputs of FLAKF. Let us define δ = Z − HXˆ i
k k −1
as residual error, which reflects the degree to which the NGIMU/GPS model fits the data. According to the characteristic of the residual error of CKF, the variance matrixes of the dynamic noise of the accelerometer output and the measurement noise can be adjust self-adaptively using α and β . If α = β = 1 , we obtain a regular CKF. The good way to verify whether the Kalman filter is performing as designed is to monitor the residual error. It can be used to adapt the filter. In fact, the residual error δ is the difference between the actual observing results and the measurement predic-
818
M. Ding and Q. Wang
tions based on the filter model. If a filter is performing optimally, the residual error is a zero-mean white noise process. The covariance of residual error Pr relates to Qv and Rv . The covariance of the residual error is given by: Pr = H (ΨPk −1Ψ T + Qv ) H T + Rv .
(17)
Using some traditional fuzzy logic system for reference, the Takagi-Sugeno fuzzy logic system is used to detect the divergence of CKF and adapt the filter. According to the variance and the mean of the residual error, two fuzzy rule groups are built up. To improve the performance of the filter, the two groups calculate the proper α and β respectively, and readjust the covariance matrix Pr of the filter. As an input to FLAKF, the covariance of the residual error and the mean value of the residual error are used in order to detect the degree of the divergence. By choosing n to provide statistical smoothing, the mean and the covariance of the residual error are
δ=
Pr =
1 n
1 n
t
∑δ
j
,
(18)
j =t −n
t
∑δ δ j
T j
.
(19)
j = t − n +1
The estimated value Pr can be compared with its theoretical value Pr calculated from CKF. Generally, when covariance Pr is becoming larger than theoretical value Pr , and mean value δ is moving from away zero, the Kalman filter is becoming unstable. In this case, a large value of β is applied. A large β means that process noises are added and we are giving more credibility to the recent data by decreasing the noise covariance. This ensures that all states in the model are sufficiently excited by the process noise. Generally, Rv has more impact on the covariance of the residual error. When the covariance is extremely large and the mean takes the values largely different from zero, there are presumably problems with GPS measurements. Therefore, the filter cannot depend on these measurements anymore, and a smaller α will be used. By selecting the appropriate α and β , the fuzzy logic controller optimally adapt the Kalman filter and tries to keep the innovation sequence act as a zero-mean white noise. The membership functions of the covariance and the mean value of the residual error are also built up.
6 Simulations and Results The simulations of NGIMU/GPS using the CKF and the FLAKF are performed respectively in this section. Fig. 4, Fig. 5, Fig. 6 and Fig .7 illustrate the eastern position estimating error, the northern position estimating error, the eastern velocity estimating error and the northern velocity estimating error respectively. In this simulation, the GPS receiver used is the Jupiter of Rockwell Co.. The initial conditions in position, velocity, posture angle and angular rate are x (0) = 0 m, y (0) = 0 m, z (0) = 0 m, v x (0) = 0 m/s, v y (0) = 0 m/s, v z (0) = 0 m/s,
α x = 0 rad
,α
y
= 0 rad
,
A Integrated Navigation System of NGIMU/ GPS Using a FLAKF
819
α z = π 3 rad, ω x (0) = 0 rad/s, ω y (0) = 0 rad/s, ω z (0) = 0 rad/s respectively. The accelerometer static bias is 10-5g and the swing of posture angle is 0.2 rad. Moreover, when using the CKF, assume that W and ε are all Gaussian distribution, the covariance are Qv = (0.01) I 9×9 and Rv = (0.01) I 5×5 respectively, and P0 0 = (0.01) I 5×5 . The time required for simulation is 100s, and that for sampling is 10ms. Comparing the curves in Fig 4 and Fig. 5, it is obvious that the eastern position estimating error and the northern position estimating error of NGIMU/GPS using the two filtering approaches are all leveled off after estimating for some time. And the errors acquired with the FLAKF are less than those with the CKF. In Fig.4, the error drops from 160m to 50m after using the FLAKF at 100s. In Fig.5, that is 220m to 100m. The similar results are also acquired in Fig.6 and Fig. 7 in the velocity estimation. The curves indicate that the NGIMU/GPS with the FLAKF can effectively alleviate the error accumulation of the estimation of the navigation parameters. When a designer lacks sufficient information to develop complete models or the parameters will slowly change with time, the FLAKF can be used to adjust the performance of CKF on-line.
Fig. 4. The estimating error of the eastern position
Fig. 5. The estimating error of the northern position
Fig. 6. The estimating error of the eastern velocity
Fig. 7. The estimating error of the northern velocity
6 Conclusions
Due to the existence of the dynamic noise in the accelerometer output, it is inevitable that the navigation parameter estimation error increases quickly with time when integrating the accelerometer output. Using the FLAKF to design an NGIMU/GPS based on an NGIMU with a nine-accelerometer configuration can overcome the uncertainty of the statistical characteristics of the noises and slow the accumulation of errors. By monitoring the innovation sequence, the FLAKF can evaluate the performance of a CKF. If the filter does not perform well, it applies two appropriate weighting factors α and β to improve the accuracy of the CKF. The FLAKF uses only 9 rules, and therefore little computational time is needed. It can be used to navigate and guide autonomous vehicles or robots and achieves relatively accurate performance. Also, the FLAKF can use a lower-order state model without compromising accuracy significantly; in other words, for any given accuracy, the FLAKF may be able to use a lower-order state model.
Method of Fuzzy-PID Control on Vehicle Longitudinal Dynamics System Yinong Li1, Zheng Ling1, Yang Liu1, and Yanjuan Qiao2 1
State Key Laboratory of Mechanical Transmission, Chongqing University, Chongqing 400044, China
[email protected]
2 Changan Automobile Co., Ltd. Technology Center, Chongqing 400044, China
Abstract. Based on the analysis of the vehicle dynamics control system, the longitudinal tracking control for a platoon of two vehicles is discussed. A second-order model for the longitudinal relative distance control between vehicles is presented, and the logic switch rule between acceleration and deceleration of the controlled vehicle is designed. Then the parameter auto-adjusting fuzzy-PID control method is used, adjusting the three PID parameters to control the error ranges of the longitudinal relative distance and relative velocity between the controlled vehicle and the leading vehicle, in order to realize the longitudinal control of a vehicle. The simulation results show that, compared with fuzzy control, the parameter auto-adjusting fuzzy-PID control method decreases the overshoot, enhances the capacity of anti-disturbance, and has certain robustness. The conflict between rapid response and small overshoot is also resolved.
1 Introduction
With the rapid development and increasing number of vehicles in recent years, traffic jams and accidents have become a global problem. Against the background of increasingly deteriorating traffic conditions, many countries have embarked on research into Intelligent Transport Systems (ITS) in order to utilize roads and vehicles most efficiently, increase vehicle security and reduce pollution and congestion. In the Intelligent Transport System, automatic driving, which is an important component of transport automatization and intelligence, has become a popular and important topic in vehicle research [1-2]. As a main part of vehicle active safety, the vehicle longitudinal dynamic control system is composed of a super layer system and an under layer system [3]. The super layer of the vehicle longitudinal dynamic control system realizes the automatic tracking of the longitudinal spacing of the vehicle platoon. The under layer transfers the output of the super layer system to the controlled vehicle system and computes the expected acceleration/deceleration. The longitudinal dynamic control system is the combination of the super layer system and the under layer system. It is a complicated nonlinear system; thus the actual characteristics of this system are hard to describe exactly by a linear system
[4]. For the vehicle longitudinal dynamic control system, intelligent control theory is used to fit the actual system more exactly, to realize real-time control, and to respond more quickly. In this paper, the parameter auto-adjusting fuzzy-PID control method is adopted to control the established vehicle longitudinal dynamic system, and its real-time performance and validity are studied in order to realize and improve the control performance of the vehicle longitudinal dynamic system.
2 Longitudinal Relative Distance Control Model
2.1 Longitudinal Relative Distance Control Between Two Vehicles
The control quality of the longitudinal relative distance between two vehicles is an important index for evaluating the active security of the vehicle. The control parameter of this system is the longitudinal relative distance between the leader and the follower. Considering the dynamic response characteristic of the system, we take the change rate of the longitudinal relative distance between the two vehicles as another parameter of this control system to improve the accuracy of the model. The function of the throttle/brake switch logic control system is to determine the switch between the engine throttle and the brake master cylinder, and to transmit the expected acceleration or deceleration to the under layer of the longitudinal dynamic control system [5]. The logic control for the throttle/brake switch is shown in Fig. 1.
Fig. 1. The logic switch between throttle / brake
The one-dimensional control model of the longitudinal relative distance error is described as follows:

$\delta_d' = (x_h - x) - L - H$  (1)
where H is the expected longitudinal relative distance between the two vehicles; L is the length of the vehicle body; $x_h$, x are the longitudinal positions of the back bumpers of the leading vehicle and the controlled vehicle, respectively; and $\delta_d'$ is the error of the longitudinal relative distance. For this one-dimensional control model, the structure is simple, the physical meaning is clear, and little information is required, so it is easy to realize the control purpose. However, in actual longitudinal driving conditions, the error of the longitudinal relative distance between the two vehicles is related to the change rate of the longitudinal displacement (i.e., the running speed) of the controlled vehicle.
Considering the influence of the controlled vehicle's running speed on the accuracy of the model, some researchers proposed the two-dimensional model for the longitudinal relative distance control [6]:

$\delta_d = (x_h - x) - L - H - \lambda v = \delta_d' - \lambda v$  (2)
where $\delta_d$ is the error of the longitudinal relative distance between the two vehicles; λ is the compensation time for the controlled vehicle to converge to $\delta_d'$; and v is the running speed of the controlled vehicle.
2.2 Longitudinal Relative Speed Control
The error of the longitudinal relative speed is expressed by:

$\delta_v = v_h - v$  (3)
where $\delta_v$ is the error of the longitudinal relative speed and $v_h$ is the speed of the leading vehicle.
2.3 Second-Order Model for the Longitudinal Relative Distance Control
Combining the two-dimensional control model of the longitudinal relative distance and the control model of the relative speed between the two vehicles, a second-order model for the longitudinal relative distance control can be established (rewriting equations (2) and (3)), and the state-space equations of this model can be written as follows:
$\begin{cases} \delta_d = (x_h - x) - L - H - \lambda v \\ \delta_v = v_h - v \end{cases}$  (4)

$\dot{X} = A X + B u + \Gamma w = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} X + \begin{bmatrix} -\lambda \\ -1 \end{bmatrix} u + \begin{bmatrix} 0 \\ 1 \end{bmatrix} w$  (5)
where X is the state variable vector of the control system, $X^T = [x_1, x_2] = [\delta_d, \delta_v]$; u is the control variable of the control system (the acceleration or deceleration of the controlled vehicle); and w is the disturbance variable of the control system (the acceleration or deceleration of the leading vehicle). This second-order control model includes information about the longitudinal displacement, speed, and acceleration/deceleration of the leading vehicle and the controlled vehicle, respectively. The information can reflect the real-time performance, dynamic response and dynamic characteristics of the control system based on the automatic tracking from one vehicle to another. This control system is a two-input, single-output system: the input variables are the relative distance error and the relative speed error, and the output is the expected acceleration or deceleration of the controlled vehicle.
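As an illustration of model (5), the following sketch builds the state-space matrices and integrates the spacing-error dynamics with forward Euler. The value of λ is taken from Section 4.3, while the step size and scenario are assumptions made only for the example.

```python
import numpy as np

# Second-order longitudinal spacing model, Eq. (5):
#   X = [delta_d, delta_v]^T,  Xdot = A X + B u + Gamma w
lam = 2.0                       # compensation time lambda (Sec. 4.3)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([-lam, -1.0])      # controlled vehicle's acceleration input
Gamma = np.array([0.0, 1.0])    # leading vehicle's acceleration (disturbance)

def step(X, u, w, dt=0.01):
    """One forward-Euler step of the spacing-error dynamics."""
    return X + dt * (A @ X + B * u + Gamma * w)

# Example: constant lead-vehicle acceleration, no control applied
X = np.array([5.0, 0.0])        # 5 m spacing error, zero relative speed
for _ in range(100):
    X = step(X, u=0.0, w=0.5)
```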
3 Throttle/Brake Switch Logic Control Model
The throttle/brake switch logic control model is an important component in realizing the super layer control of the vehicle longitudinal dynamics. The expected acceleration/deceleration of the controlled vehicle, which is the output of the second-order longitudinal relative distance controller, is transmitted to the switch logic control system. According to the switch logic rule, we can determine whether the throttle control or the brake control should be executed; the acceleration or deceleration is then transformed into the engine throttle angle or the brake pressure of the master cylinder. The longitudinal dynamic model of the controlled vehicle can be expressed as:
$\tau_d - \tau_b - r F_{rr} - r c (v - v_b)^2 = \varepsilon a$  (6)
where $\tau_d$ is the driving torque transmitted to the rear wheels by the driving system; $\tau_b$ is the sum of the brake torques imposed on the rear and front wheels; r is the wheel radius; c is the coefficient of wind resistance; $F_{rr}$ is the rolling resistance force; f is the coefficient of rolling resistance; m is the vehicle mass; v is the running speed of the vehicle; $v_b$ is the speed of the wind; a is the acceleration of the vehicle; and
$\varepsilon = \frac{1}{r}\left(\frac{J_e}{i^2} + m r^2 + J_{wr} + J_{wf}\right)$ ,  $i = 1/(i_{gear}\, i_i\, \eta_T)$ ;
where $i_{gear}$ is the speed ratio of the automatic transmission; $i_i$ is the main transmission ratio of the driving system; $\eta_T$ is the mechanical efficiency of the transmission system; and $J_{wf}$, $J_{wr}$ are the moments of inertia of the front wheel and the rear wheel, respectively. When the throttle angle is zero, the minimum acceleration of the controlled vehicle is:
$a_{min} = \frac{1}{\varepsilon}\left[\frac{T_{em}}{i} - r F_{rr} - r c (v - v_b)^2\right]$  (7)
where $T_{em}$ is the minimum output torque when the engine throttle is totally closed. The switch logic rule between the engine throttle and the brake master cylinder can be deduced from the equation of the minimum acceleration.
Control rule for the engine throttle:
$a_{synth} \ge a_{min}$  (8)
Control rule for the brake master cylinder:
$a_{synth} < a_{min}$  (9)
where $a_{synth}$ is the expected acceleration/deceleration of the controlled vehicle. The existence of the switch rule between the engine throttle and the brake master cylinder may lead to vibration of the control system; thus, it is necessary to introduce a cushion layer near the switch surface to avoid violent vibration of the system and achieve good control. The optimized switch logic rules are:
Control rule for the engine throttle:
$a_{synth} - a_{min} \ge s$  (10)
Control rule for the brake master cylinder:
$a_{synth} - a_{min} < s$  (11)
where s is the thickness of the cushion layer; let s = 0.05 m/s². According to the switch rule between the engine throttle and the brake master cylinder, we can determine whether the engine throttle control or the brake master cylinder control will be executed, and then calculate the expected throttle angle or brake master cylinder pushrod force. For the engine control, the expected longitudinal driving torque can be written as:
$\tau_d = \varepsilon a_{synth} + r F_{rr} + r c (v - v_b)^2$  (12)
For the brake master cylinder control, the engine throttle is totally closed, and the expected longitudinal brake torque can be written as:
$\tau_b = -\varepsilon a_{synth} + \frac{T_{em}}{i} - r F_{rr} - r c (v - v_b)^2$  (13)
From equation (7), it can be seen that the minimum acceleration is a function of the speed of the controlled vehicle. Substituting the simulation parameters of the vehicle into this equation, we can obtain the switch logic rule between the engine throttle and the brake master cylinder by simulation. The simulation result is shown in Fig. 2.
Fig. 2. Throttle/brake master cylinder switch logic law
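A minimal sketch of the switch logic of Eqs. (7), (10) and (11) is given below. The numeric vehicle parameters are hypothetical placeholders chosen only to make the example runnable; the paper does not list them in this section.

```python
def a_min(v, eps, T_em, i_ratio, r, F_rr, c, v_b=0.0):
    """Minimum acceleration with the throttle fully closed, Eq. (7)."""
    return (T_em / i_ratio - r * F_rr - r * c * (v - v_b) ** 2) / eps

def select_actuator(a_synth, v, veh, s=0.05):
    """Throttle/brake selection with cushion layer s = 0.05 m/s^2, Eqs. (10)-(11)."""
    am = a_min(v, **veh)
    return "throttle" if a_synth - am >= s else "brake"

# Hypothetical parameter set, for illustration only
veh = dict(eps=1500.0, T_em=-40.0, i_ratio=0.3, r=0.3, F_rr=300.0, c=0.4)
mode = select_actuator(a_synth=0.2, v=20.0, veh=veh)   # -> "throttle"
```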
4 Fuzzy PID Control of the Vehicle Longitudinal Dynamic System
4.1 Parameter Self-adjusting Fuzzy PID Control
Parameter self-adjusting fuzzy PID control is a compound fuzzy control. In order to satisfy the self-adjusting requirement of the PID parameters under different errors e and different change rates of the error ec, the PID parameters are modified on-line by fuzzy control rules [9]. The accuracy of traditional PID control combined with the intelligence of fuzzy control gives the controlled object good dynamic and static characteristics. The basic idea of the parameter self-adjusting fuzzy PID control is as follows: first, find the fuzzy relations between the three PID parameters and the error e and the change rate of the error ec, respectively, and monitor e and ec continuously during operation; then revise the three parameters on-line according to the fuzzy control rules, in order to satisfy the different requirements on the control parameters under different e and ec, and achieve the ideal
control purpose. The computation is simple, so it is easy to realize on a single-chip processor. The structure of the parameter self-adjusting fuzzy PID controller is shown in Fig. 3.
Fig. 3. Parameter self-adjusting fuzzy PID controller
In the parameter self-adjusting fuzzy PID control, the self-adjusting rules for the parameters $K_P$, $K_I$ and $K_D$ under different e and ec can be defined as follows:
① If |e| is large, a larger $K_P$ and a smaller $K_D$ should be chosen (to accelerate the system response), and $K_I$ should be set to 0 to avoid large overshoot (thus the integral effect is eliminated).
② If |e| is medium, $K_P$ should be set smaller (in order to keep the overshoot of the system response relatively small), and $K_I$ and $K_D$ should be given proper values (the value of $K_D$ has a great effect on the system).
③ If |e| is small, $K_P$ and $K_I$ should be set larger to give the system good static characteristics, and $K_D$ should be set properly to avoid oscillation near the equilibrium point.
Here the absolute value of the error, $|e_{fuzzy}|$, and of the change rate of the error, $|ec_{fuzzy}|$, are taken as the input linguistic variables, where the subscript "fuzzy" denotes the fuzzy quantity of a parameter; $|e_{fuzzy}|$ can also be written as |E|. Each of the linguistic variables has three linguistic values, i.e., big (B), medium (M), and small (S). The membership functions are shown in Fig. 4.
Fig. 4. The membership functions of the fuzzy PID
The membership functions can be adjusted by choosing different values of $|e_1|$, $|e_2|$, $|e_3|$ and $|ec_1|$, $|ec_2|$, $|ec_3|$ at the turning points.
4.2 The Process of the Parameter Self-adjusting Fuzzy PID Control
For the fuzzy input variables |e| and |ec|, there are five combination forms based on the rules designed above:
① $e_{fuzzy}$ = B
② $e_{fuzzy}$ = M and $ec_{fuzzy}$ = B
③ $e_{fuzzy}$ = M and $ec_{fuzzy}$ = M
④ $e_{fuzzy}$ = M and $ec_{fuzzy}$ = S
⑤ $e_{fuzzy}$ = S
The membership degree of each form can be computed by expressions as follows:
① $\mu_1(e_{fuzzy}, ec_{fuzzy}) = \mu_{BE}(e_{fuzzy})$
② $\mu_2(e_{fuzzy}, ec_{fuzzy}) = \mu_{ME}(e_{fuzzy}) \wedge \mu_{BC}(ec_{fuzzy})$
③ $\mu_3(e_{fuzzy}, ec_{fuzzy}) = \mu_{ME}(e_{fuzzy}) \wedge \mu_{MC}(ec_{fuzzy})$
④ $\mu_4(e_{fuzzy}, ec_{fuzzy}) = \mu_{ME}(e_{fuzzy}) \wedge \mu_{SC}(ec_{fuzzy})$
⑤ $\mu_5(e_{fuzzy}, ec_{fuzzy}) = \mu_{SE}(e_{fuzzy})$
The three PID parameters can be computed by the following equations based on the measurements of |e| and |ec|:

$K_P = \left[\sum_{j=1}^{5}\mu_j(e_{fuzzy}, ec_{fuzzy}) \times K_{Pj}\right] \Big/ \left[\sum_{j=1}^{5}\mu_j(e_{fuzzy}, ec_{fuzzy})\right]$  (14)

$K_I = \left[\sum_{j=1}^{5}\mu_j(e_{fuzzy}, ec_{fuzzy}) \times K_{Ij}\right] \Big/ \left[\sum_{j=1}^{5}\mu_j(e_{fuzzy}, ec_{fuzzy})\right]$  (15)

$K_D = \left[\sum_{j=1}^{5}\mu_j(e_{fuzzy}, ec_{fuzzy}) \times K_{Dj}\right] \Big/ \left[\sum_{j=1}^{5}\mu_j(e_{fuzzy}, ec_{fuzzy})\right]$  (16)
where $K_{Pj}$, $K_{Ij}$, $K_{Dj}$ (j = 1, 2, ..., 5) are the weights of the parameters $K_P$, $K_I$, $K_D$, respectively, under the different states. They can be set as:
① $K_{P1} = K'_{P1}$, $K_{I1} = 0$, $K_{D1} = 0$
② $K_{P2} = K'_{P2}$, $K_{I2} = 0$, $K_{D2} = K'_{D2}$
③ $K_{P3} = K'_{P3}$, $K_{I3} = 0$, $K_{D3} = K'_{D3}$
④ $K_{P4} = K'_{P4}$, $K_{I4} = 0$, $K_{D4} = K'_{D4}$
⑤ $K_{P5} = K'_{P5}$, $K_{I5} = K'_{I5}$, $K_{D5} = K'_{D5}$
where $K'_{P1}$~$K'_{P5}$, $K'_{I1}$~$K'_{I5}$, and $K'_{D1}$~$K'_{D5}$ are the adjusting values of the parameters $K_P$, $K_I$, $K_D$, respectively, obtained under the different states by the normal PID parameter-adjusting method. Using the on-line self-adjusted PID parameters $K_P$, $K_I$, $K_D$, the output control variable u can be computed by the following discrete difference equation of the PID control algorithm:

$u_n = K_P E_n + K_I \sum E_n + K_D (E_n - E_{n-1})$  (17)

4.3 The Selection of Control Parameters and the Simulation Research
1) The Selection of Control Parameters
The fuzzy quantization factors of the inputs |e| and |ec|, and the defuzzification scale factors of the output variables P, I and D, are taken respectively as follows: $K_e$ = 7.2, $K_{ec}$ = 8, $R_P$ = 0.6, $R_I$ = 1.3, $R_D$ = 1.0. The adjusting values obtained by the normal PID parameter-adjusting method under the different states are $K'_{P1}$~$K'_{P5}$, $K'_{I1}$~$K'_{I5}$, $K'_{D1}$~$K'_{D5}$:
1) $K'_{P1}$ = 1, $K'_{I1}$ = 0, $K'_{D1}$ = 0
2) $K'_{P2}$ = 2, $K'_{I2}$ = 0, $K'_{D2}$ = 1.25
3) $K'_{P3}$ = 3, $K'_{I3}$ = 0, $K'_{D3}$ = 2.5
4) $K'_{P4}$ = 4, $K'_{I4}$ = 0, $K'_{D4}$ = 3.75
5) $K'_{P5}$ = 5, $K'_{I5}$ = 2, $K'_{D5}$ = 5
2) Analysis of the Simulation
Combining the designed parameter-adjusting fuzzy PID controller with the vehicle longitudinal dynamic system, the MATLAB/Simulink platform is used for the simulation research. The frame of the control is shown in Fig. 5. Fig. 6~Fig. 10 are the simulation results of the parameter self-adjusting fuzzy PID control of the vehicle longitudinal dynamic system. The simulation parameters are $K_e$ = 7.2, $K_{ec}$ = 8, $R_P$ = 0.6, $R_I$ = 1.3, $R_D$ = 1.0, λ = 2, L + H = 7.5.
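For concreteness, the following sketch implements the five rule forms of Section 4.2 and the weighted-mean gain computation of Eqs. (14)-(16), followed by the discrete PID law of Eq. (17). The triangular membership shapes and turning points are illustrative assumptions; the K′ weights are those listed above.

```python
def tri(x, a, b, c):
    """Triangular membership with turning points a < b < c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def smb(x, p1, p2, p3):
    """Small/medium/big grades of |x|; p1, p2, p3 are turning points."""
    S = min(1.0, max(0.0, (p2 - x) / (p2 - p1)))
    M = tri(x, p1, p2, p3)
    B = min(1.0, max(0.0, (x - p2) / (p3 - p2)))
    return S, M, B

# Weights for the five rule forms (Sec. 4.3): K'_Pj, K'_Ij, K'_Dj
KPJ = [1.0, 2.0, 3.0, 4.0, 5.0]
KIJ = [0.0, 0.0, 0.0, 0.0, 2.0]
KDJ = [0.0, 1.25, 2.5, 3.75, 5.0]

def gains(e, ec, e_pts=(0.5, 1.5, 3.0), ec_pts=(0.5, 1.5, 3.0)):
    """Rule-form degrees mu_1..mu_5 (Sec. 4.2) and the weighted means
    of Eqs. (14)-(16). Turning points are illustrative assumptions."""
    SE, ME, BE = smb(abs(e), *e_pts)
    SC, MC, BC = smb(abs(ec), *ec_pts)
    mu = [BE, min(ME, BC), min(ME, MC), min(ME, SC), SE]
    s = sum(mu) or 1e-12
    return tuple(sum(m * k for m, k in zip(mu, K)) / s for K in (KPJ, KIJ, KDJ))

class FuzzyPID:
    """Discrete PID of Eq. (17) with on-line gain self-adjustment."""
    def __init__(self):
        self.e_sum, self.e_prev = 0.0, 0.0
    def control(self, e, ec):
        KP, KI, KD = gains(e, ec)
        self.e_sum += e
        u = KP * e + KI * self.e_sum + KD * (e - self.e_prev)
        self.e_prev = e
        return u
```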
Fig. 5. Fuzzy PID control system of the vehicle longitudinal dynamics
Fig. 6. Longitudinal relative distance error
Fig. 7. Longitudinal relative speed error
Fig. 8. The push force of the brake master cylinder
Fig. 9. The brake torque
Fig. 10. The open angle of the engine throttle
Fig. 6 and Fig. 7 show that a longitudinal relative distance error between the leading vehicle and the controlled vehicle within ±1 m is achieved through the parameter self-adjusting fuzzy PID control of the controlled vehicle while the leading vehicle is accelerating or decelerating. This result is better than the ±1.4 m obtained when using only the fuzzy control method. It can also be seen that the longitudinal relative speed error lies between 1 m/s and −0.6 m/s, better than the ±1 m/s obtained when using only the fuzzy
control method. Because the change process is smooth and at the end the error converges to zero, the automatic tracking of the controlled vehicle to the leading vehicle can be realized. Thus the parameter self-adjusting fuzzy PID control is more robust than the fuzzy control method. The response of the brake master cylinder pushrod force is shown in Fig. 8; it can be seen that there is a break at the moment of 10 s when the controller starts to work, while for the rest of the time the curve is smooth and in accordance with the brake torque curve in Fig. 9. Fig. 9 and Fig. 10 show the responses of the controlled vehicle's brake torque and throttle angle. When the controlled vehicle's throttle angle is above zero, the vehicle accelerates and the brake torque is zero; otherwise, when the throttle angle is below zero, the vehicle decelerates and the brake torque is above zero. It can be seen that the simulation curves using the parameter self-adjusting fuzzy PID control fit this logic, and for most of the time the transition is smooth. However, it can also be seen that the frequent alternation between acceleration and deceleration from 20 s to 35 s leads to frequent switching between the brake torque and the throttle angle.
5 Conclusion
Through the above analysis, it can be seen that, for the vehicle longitudinal dynamic system, the parameter self-adjusting fuzzy PID control can achieve the control purpose of maintaining a safe distance between two vehicles. It absorbs both the fuzzy control's advantage of expressing irregular events and the adjusting function of PID control; thus, the overshoot is reduced, the dynamic anti-disturbance capability is strengthened and the robustness is improved. Meanwhile, the parameter self-adjusting fuzzy PID control method also resolves the conflict between quick response and small overshoot. However, the frequent alternation between acceleration and deceleration leads to frequent switching between the brake torque and the throttle angle; this should be improved in the future.
Acknowledgments This research is supported by National Natural Science Foundation of China (No. 50475064) and Natural Science Foundation of Chongqing (No: 8366).
References 1. Hesse, M.: City Logistics: Network Modeling and Intelligent Transport Systems. Journal of Transport Geography, Vol. 10, No. 2 (2002) 158-159 2. Marell, A., Westin, K.: Intelligent Transportation System and Traffic Safety – Drivers Perception and Acceptance of Electronic Speed Checkers. Transportation Research Part C: Emerging Technologies, Vol. 7, No. 2-3 (1999) 131-147 3. Rajamani, R., Shladover, S.E.: An Experimental Comparative Study of Autonomous and Co-Operative Vehicle-Follower Control Systems. Transportation Research Part C: Emerging Technologies, Vol. 9, No. 1 (2001) 15-31
4. Kyongsu, Yi, Young Do Kwon: Vehicle-to-Vehicle Distance and Speed Control Using an Electronic-Vacuum Booster. JSAE Review, Vol. 22 (2001) 403-412 5. Sunan Huang, Wei Ren: Vehicle Longitudinal Control Using Throttles and Brakes. Robotics and Autonomous Systems, Vol. 26 (1999) 241-253 6. Swaroop, D., Hedrick, J.K., Chien, C.C., Ioannou, P.: A Comparison of Spacing and Headway Control Laws for Automatically Controlled Vehicles. Vehicle System Dynamics, Vol. 23 (1994) 597-625 7. Visioli, A.: Tuning of PID Controllers with Fuzzy Logic. IEE Proceedings - Control Theory and Applications, Vol. 148, No. 1 (2001) 1-6
Design of Fuzzy Controller and Parameter Optimizer for Non-linear System Based on Operator’s Knowledge Hyeon Bae1, Sungshin Kim1, and Yejin Kim2 1
School of Electrical and Computer Engineering, Pusan National University, 30 Jangjeon-dong, Geumjeong-gu, 609-735 Busan, Korea {baehyeon, sskim, yjkim}@pusan.ac.kr http://icsl.ee.pusan.ac.kr 2 Dept. of Environmental Engineering, Pusan National University, 30 Jangjeon-dong, Geumjeong-gu, 609-735 Busan, Korea
Abstract. This article describes a modeling approach based on an operator’s knowledge without a mathematical model of the system, and the optimization of the controller. The system used in this experiment could not easily be modeled by mathematical methods and could not easily be controlled by conventional systems. The controller was designed based on input-output data, and optimized under a predefined performance criterion.
1 Introduction
Fuzzy logic can express linguistic information in rules to design controllers and models. The fuzzy controller is useful in several industrial fields because fuzzy logic rules can be designed easily from an expert's knowledge. Therefore, fuzzy logic can be used to control un-modeled systems based on the expert's knowledge. During the last several years, fuzzy controllers have been investigated in order to improve manufacturing processes [1]. Conventional controllers need mathematical models and cannot easily handle nonlinear models because of incompleteness or uncertainty [2], [3]. The controller for the experimental system presented in this study was designed based on empirical knowledge of the features of the tested system [4].
2 Ball Positioning System and Control Algorithms
As shown in Fig. 1, the experimental system consists of two independent fans operated by two DC motors. The purpose of this experiment was to move a ball to a final goal position using the two fans. This system contains non-linearity and uncertainty caused by the aerodynamics inside the path. Therefore, the goal of this experiment was to initially design the fuzzy controller based on the operator's knowledge and then optimize it using the system performance. In this study, the position of the ball is found by image processing on real-time images. The difference between consecutive images is measured to determine the position of the moving ball [5], [6]. Figure 2 shows the sequence of the image processing to find the ball in the path.
Fig. 1. The experimental system and components
(a) Initial position of the ball; (b) Final position of the ball; (c) Find the ball
Fig. 2. Image processing to determine the ball position
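A minimal sketch of the frame-differencing step described above might look as follows; the grayscale input format and the difference threshold are assumptions.

```python
import numpy as np

def find_ball(prev_frame, frame, thresh=30):
    """Locate the moving ball as the centroid of large inter-frame
    differences. The threshold value is a hypothetical placeholder."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    ys, xs = np.nonzero(diff > thresh)
    if xs.size == 0:
        return None                    # no motion detected in this frame
    return xs.mean(), ys.mean()        # (x, y) ball position in pixels
```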
3 Experimental Results
3.1 Fuzzy Controller
In an operator's experiment, the convergence rates and error values were measured to compare the performance. This was evaluated by the performance criterion, that is, the objective function; the fuzzy rules were fixed based on empirical knowledge. In the operator's test, random parameters were first selected, and these parameters were subsequently adjusted according to the evaluated performance. The results of the test are shown in Table 1.
3.2 Optimization
Hybrid Genetic Algorithm: GA and Simplex Method
A crossover rate of 30% and a mutation rate of 1.5% were used for the genetic algorithm in this test. The first graph of Fig. 3 represents the performance with respect to each membership function. The performance values jumped to worse values once in the middle of the graph, but they could be improved gradually with iteration. After the GA process, the performance showed worse values than the previous values for the simplex method, as shown in the second graph of Fig. 3, because performance differences exist in real experiments even when the same membership functions are used. The reason is that all parameters transferred from the GA are used to operate the system, and then the performance is evaluated again in the simplex method, so small gaps can exist.
Simulated Annealing
As shown in Fig. 4, the two graphs represent the results of the SA algorithm. In the initial part of the second graph of Fig. 4, the performance values drop significantly and then settle to better values as the iterations are repeated. SA can search for good solutions quickly even though the number of iterations is less than that of the GA. When SA is used for optimization, selecting the initial values is very important. In this experiment, the SA starts with the search parameters from the operator's trials. Initially a high temperature coefficient is selected for the global search, and then it is decreased gradually during processing for the local search.
Table 1. Parameters of membership functions for experiments
        Error M.F.                                                        Derivative Error M.F.
        NB          NS          ZE         PS          PB                 N          ZE         P
        CE    VA    CE    VA    CE   VA    CE    VA    CE    VA           CE   VA    CE   VA    CE   VA
A       -40   20    -20   10    0    10    20    10    70    40           -10  5     0    3     10   5
B       -40   30    -15   15    0    5     15    20    80    50           -4   5     0    5     4    5
C       -50   35    -10   20    0    8     10    20    50    25           -4   5     0    3     4    5
D       -50   25    -20   10    0    10    20    10    50    25           -4   5     0    4     4    5
E       -30   20    -15   10    0    10    15    10    30    20           -4   4     0    3     4    4
F       -35   25    -18   10    0    8     35    25    50    20           -2   2     0    2     2    2
Fig. 3. Performance of the experiment in the case of the Hybrid GA
Fig. 4. Performance of the experiment in the case of the SA
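The SA search over the membership-function parameters (the CE/VA values of Table 1) can be sketched as below. The cooling schedule, step size and acceptance rule are standard SA choices and illustrative assumptions, not constants reported in the paper.

```python
import math, random

def anneal(evaluate, x0, t0=1.0, cooling=0.95, iters=200, step=0.1):
    """Simulated annealing over a parameter vector. 'evaluate' runs the
    control experiment and returns a cost to be minimized."""
    x, fx = list(x0), evaluate(x0)
    best, fbest, T = list(x), fx, t0
    for _ in range(iters):
        cand = [xi + random.gauss(0.0, step) for xi in x]
        fc = evaluate(cand)
        # accept downhill moves always, uphill moves with heated probability
        if fc < fx or random.random() < math.exp((fx - fc) / max(T, 1e-9)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        T *= cooling    # gradual shift from global to local search
    return best, fbest
```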
4 Conclusions
The primary goal of this study was to design and optimize a fuzzy controller based on the operator's knowledge and the running process. The method used the characteristics of the fuzzy controller to control systems using fixed rules without mathematical models. Gaussian functions were employed as fuzzy membership functions, and 8 functions were used for the error and the derivative error; thus, a total of 16 parameters were optimized. SA is better than the hybrid GA in terms of convergence rate, but it is difficult to determine which one is the better optimization method; it depends on the conditions under which the systems operate.
Acknowledgement This work was supported by “Research Center for Logistics Information Technology (LIT)” hosted by the Ministry of Education & Human Resources Development.
References 1. Lee, C. C.: Fuzzy Logic in Control Systems: Fuzzy Logic Controller, Part I, II. IEEE Transaction on Systems, Man, and Cybernetics 20, (1990) 404-435 2. Tsoukalas, L. H. and Uhrig, R. E.: Fuzzy and Neural Approaches in Engineering. John Wiley & Sons, New York (1997) 3. Yen, J. and Langari, R.: Fuzzy Logic: Intelligence, Control, and Information. Prentice Hall, NJ, (1999) 4. Mamdani, E. H. and Assilian, S.: An Experiment in Linguistic Synthesis with a Fuzzy Logic Controller. International Journal of Man-Machine Studies 7, (1975) 1-13 5. Haralick, R. M. and Shapiro, L. G.: Computer and Robot Vision. Addison Wesley, MA (1992) 6. Jain, R., Kasturi, R. and Schunck, B. G.: Machine Vision. McGraw-Hill, New York, (1995) 30-33
A New Pre-processing Method for Multi-channel Echo Cancellation Based on Fuzzy Control* Xiaolu Li1, Wang Jie2, and Shengli Xie1 1 Electronic and Communication Engineering, South China University of Technology, 510641, China 2 Control Science and Engineering Post-doc workstation, South China University of Technology, 510641, China
[email protected]
Abstract. The essential problem of multi-channel echo cancellation is caused by the strong correlation of the two-channel input signals; pre-processing methods are usually used to decorrelate them, and the degree of decorrelation depends on the nonlinear coefficient α. In most research, however, α is constant. In real applications the cross correlation varies, so α should be adjusted with the correlation, but there is no precise mathematical formula relating them. In this paper, the proposed method applies fuzzy logic to choose α so that the communication quality and the convergence performance can be assured at the cost of only a small addition of computation. Simulations also show the validity of the method.
1 Description of Problem
In hands-free mobile radiotelephone or teleconference systems, how to cancel the echo is very important to assure communication quality. At present, adaptive cancellation technology is mainly adopted in multi-channel echo cancellation. The essential problem of such adaptive multi-channel echo cancellation is caused by the strong correlation between the input signals [1,2], which makes the convergence performance bad. Researchers have therefore proposed many pre-processing methods [1-4]; how to choose the nonlinear transforming coefficient α is the main problem in these methods. Figure 1 is the block diagram of the stereophonic echo canceller with two added nonlinear pre-processing units.
Let $\mathbf{x}_i(n) = [x_i(n), x_i(n-1), \ldots, x_i(n-L+1)]^T$, i = 1, 2, denote the output of the i-th microphone at time n in the remote room, let $\mathbf{h}_i = [h_{i,0}, h_{i,1}, \ldots, h_{i,L-1}]^T$ denote the true echo paths in the local room, and
*
The work is supported by the Guang Dong Province Science Foundation for Program of Research Team (grant04205783), the National Natural Science Foundation of China (Grant 60274006), the Natural Science Key Fund of Guang Dong Province, China (Grant 020826), the National Natural Science Foundation of China for Excellent Youth (Grant 60325310) and the Trans-Century Training Program, the Foundation for the Talents by the State Education Commission of China.
Fig. 1. Block of stereophonic echo canceller with nonlinear pre-processing units
let $\hat{\mathbf{w}}_i(n) = [\hat{w}_{i,0}(n), \hat{w}_{i,1}(n), \ldots, \hat{w}_{i,L-1}(n)]^T$ denote the adaptive FIR filters at time n, where i = 1, 2, and L is the length of the impulse responses and of the adaptive filters. The echo signal at time n is y(n). In order to improve convergence, one of the pre-processing methods is to nonlinearly transform $x_1$, $x_2$ and use the results as the inputs of the filters, i.e.,

$x_i'(n) = x_i(n) + \alpha f[x_i(n)]$  (1)

where f is a nonlinear function, e.g., the half-wave rectifier function. Through adjusting the nonlinear coefficient α, we can change the degree of introduced distortion.
2 Adjustment of Nonlinear Component Based on Fuzzy Control
The larger α is, the larger the added nonlinear component and the quicker the filters converge, but a large α may degrade speech quality. Conversely, a small α gives good speech quality, but the convergence performance may not improve much. So the adjustment of α should be made on the premise of preserving speech quality: a large α should be adopted when the correlation between the input signals is strong, and a small α when the correlation is weak. This relationship is obtained from experience, and there is no precise mathematical formula for it. In the following we propose an LMS algorithm which applies fuzzy logic to the nonlinear pre-processing for stereophonic echo cancellation. Fuzzy logic is used to adjust α so that the communication quality and the convergence performance can be assured at the cost of only a small addition of computation. The input of the fuzzy inference
system (FIS) is the correlation coefficient $\rho_{x_1x_2}$ of the input signals, and the output is the nonlinear component coefficient a. Three linguistic values, small (S), medium (M) and large (L), are defined to describe the magnitude of each of $\rho_{x_1x_2}$ and a. The complete fuzzy rules are shown in Table 1.
Table 1. Fuzzy Control Rules
Rules    Input: $\rho_{x_1x_2}$    Output: a
R0       L                         L
R1       M                         M
R2       S                         S
Finally, the proposed algorithm reduces to the following equations:
$\rho_{x_1x_2} = \frac{\mathrm{cov}(x_1(n), x_2(n))}{\sqrt{D(x_1(n))\,D(x_2(n))}}$ ,  $a(n) = FIS(\rho_{x_1x_2}(n))$  (2)

$x_i'(n) = x_i(n) + \alpha(n) f[x_i(n)]$  (3)

$\mathbf{x}_i'(n) = [x_i'(n), x_i'(n-1), \ldots, x_i'(n-L+1)]$  (4)

$d(n) = \mathbf{x}^T(n)\mathbf{h}(n)$ ,  $e(n) = d(n) - \mathbf{x}'^T(n)\mathbf{w}(n)$  (5)

$\mathbf{w}_i(n+1) = \mathbf{w}_i(n) + 2\mu \frac{e(n)\,\mathbf{x}_i'(n)}{\mathbf{x}_1'^T(n)\mathbf{x}_1'(n) + \mathbf{x}_2'^T(n)\mathbf{x}_2'(n)}$ ,  i = 1, 2  (6)
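A compact sketch of Eqs. (2)-(6) follows: a fuzzy map from the correlation coefficient to a(n), the half-wave nonlinear pre-processing, and the normalized two-channel LMS update. The membership functions and singleton outputs for S/M/L are illustrative assumptions, since the paper does not give them numerically.

```python
import numpy as np

def fis_alpha(rho):
    """Fuzzy map from correlation rho to the coefficient a(n) (Table 1).
    Membership shapes and singleton outputs are hypothetical."""
    S = max(0.0, 1.0 - rho / 0.5)
    L = max(0.0, (rho - 0.5) / 0.5)
    M = 1.0 - abs(rho - 0.5) / 0.5 if abs(rho - 0.5) < 0.5 else 0.0
    out = {0.1: S, 0.3: M, 0.5: L}          # assumed a-values for S/M/L
    return sum(a * w for a, w in out.items()) / (S + M + L + 1e-12)

def preprocess(x, alpha):
    """Eq. (3) with a half-wave rectifier as the nonlinearity f."""
    return x + alpha * np.maximum(x, 0.0)

def nlms_step(w1, w2, x1p, x2p, d, mu=0.5):
    """Normalized two-channel LMS update of Eq. (6)."""
    e = d - w1 @ x1p - w2 @ x2p
    norm = x1p @ x1p + x2p @ x2p + 1e-12
    w1 = w1 + 2 * mu * e * x1p / norm
    w2 = w2 + 2 * mu * e * x2p / norm
    return w1, w2, e
```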
3 Simulations
Simulations show that when the correlation is strong, a(n) is also large; when weak correlation is caused by background noise or by a change of the speakers' positions, the designed fuzzy system gives a small a(n) to reduce speech distortion. Fig. 2 shows the good convergence performance of the mean square error of the adaptive filters. We adopt the Mean Square Error (MSE) as the criterion:
$MSE = 10\log_{10} E[e^2(n)]$ .
Fig. 2. Mean square error of the adaptive filters
4 Conclusion
The essential problem of multi-channel echo cancellation is caused by the strong correlation of the input signals, and pre-processing methods are usually used to decorrelate them. In real applications, the cross correlation of the two-channel input signals varies with time, is influenced by background noise, and is not constant; yet in most research α is constant, whereas in fact α should be adjusted with the correlation. Their relationship is obtained from experience, and there is no precise mathematical formula for it. In this paper, the proposed method applies fuzzy logic to the nonlinear pre-processing for stereophonic echo cancellation so that the communication quality and the convergence performance can be assured at the cost of only a small addition of computation. Simulations also show the validity of the method.
References 1. M. Mohan Sondhi, Dennis R. Morgan, Joseph L. Hall: Stereophonic acoustic echo cancellation – an overview of the fundamental problem. IEEE Signal Processing Letters. 2 (1995) 148-151 2. Jacob Benesty, Dennis R. Morgan, M. Mohan Sondhi: A better understanding and improved solution to the problems of stereophonic acoustic echo cancellation. ICASSP (1995) 303-305 3. Benesty, J., Morgan, D., Sondhi, M.: A better understanding and an improved solution to the problem of stereophonic acoustic echo cancellation. IEEE Trans. Speech Audio Processing. 6 (1998) 156-165 4. Suehiro Shimauchi, Shoji Makino: Stereo projection echo canceller with true echo path estimation. ICASSP (1995) 3059-3063
Robust Adaptive Fuzzy Control for Uncertain Nonlinear Systems Chen Gang, Shuqing Wang, and Jianming Zhang National Key Laboratory of Industrial Control Technology, Institute of Advanced Process Control, Zhejiang University, Hangzhou, 310027, P. R. China
[email protected] Abstract. Two different fuzzy control approaches are proposed for a class of nonlinear systems with mismatched uncertainties, transformable to the strictfeedback form. A fuzzy logic system (FLS) is used as a universal approximator to approximate unstructured uncertain functions and the bounds of the reconstruction errors are estimated online. By employing special design techniques, the controller singularity problem is completely avoided for the two approaches. Furthermore, all the signals in the closed-loop systems are guaranteed to be semi-globally uniformly ultimately bounded and the outputs of the system are proved to converge to a small neighborhood of the desired trajectory. The control performance can be guaranteed by an appropriate choice of the design parameters. In addition, the proposed fuzzy controllers are highly structural and particularly suitable for parallel processing in the practical applications.
1 Introduction
Based on the fact that a FLS can uniformly approximate a nonlinear function over a compact set to any degree of accuracy, FLS provides an alternative way to the modeling and design of nonlinear control systems. It provides a way to combine both the available mathematical description of the system and the linguistic information into the controller design in a uniform fashion. In order to improve the performance and stability of FLS, the synthesis approach to constructing robust fuzzy controllers has received much attention. Recently, some design schemes that combine the backstepping methodology with adaptive FLS have been reported. In order to avoid the controller singularity problem, the control gain is often assumed to be a known function [1], [2]. However, this assumption cannot be satisfied in many cases. In [3], a FLS is used to approximate the unknown control gain function, but the controller singularity problem is not solved. The possible controller singularity problem can be avoided when the sign of the control gain is known [4]. The problem becomes more difficult if the sign of the control gain is unknown. Another problem is that some tedious analysis is needed to determine regression matrices [1], [3]. Therefore, the approaches are very complicated and difficult to use in practice. In this note, we will present two approaches to solve the aforementioned problems. One is the robust adaptive fuzzy tracking control (RAFTC)
for the low order systems with known sign of the control gain. The other is the robust fuzzy tracking control (RFTC) proposed for the high order systems with unknown sign of the control gain. The control schemes presented in this note have several advantages. First, they can incorporate in an easy way the linguistic information about the system through if-then rules into the controller design. The controllers are highly structural and particularly suitable for parallel processing in the practical applications. Second, the controller singularity problem is completely avoided. The outline of this note is as follows. In Section 2, formulation of our robust tracking problem of nonlinear system with mismatched uncertainties is presented. RAFTC and RFTC are developed in Section 3 and Section 4, respectively. Finally, the note is concluded in Section 5.
2 System Description and Problem Statement
Consider the n-th order nonlinear system of the form

$\dot{\xi} = f(\xi) + \Delta q(\xi) + b g(\xi) u$ ,  $y = h(\xi)$ ,  (1)
where $\xi \in R^n$ is the state, $u \in R$ is the input, $y \in R$ is the output, and h is a smooth function on $R^n$. f and g are known functions with $g(\xi) \neq 0$, $\forall \xi \in R^n$. b is an unknown parameter with $b \neq 0$. $\Delta q(\xi)$ represents uncertainties due to many factors, such as modeling errors, parameter uncertainties, disturbances, and so on. The control objective is to make the output y track the desired trajectory $y_r(t)$ in the presence of bounded uncertainties.
Assumption 1. The desired output trajectory $y_r(t)$ and its derivatives up to the n-th order are known and bounded.
Assumption 2. The uncertainties $\Delta q(\xi)$ satisfy the structural coordinate-free condition $\mathrm{ad}_z \Delta q \in \varphi_i$, $\forall z \in \varphi_i$, $0 \le i \le n-2$, where $\varphi_i = \mathrm{span}\{g, \mathrm{ad}_f g, \ldots, \mathrm{ad}_f^i g\}$, $0 \le i \le n-1$, are involutive and of constant rank $i+1$.
The nominal form of (1) is described by
$\dot{\xi} = f(\xi) + b g(\xi) u$ ,  $y = h(\xi)$ .  (2)
Lemma 1. If the system (2) has relative degree n and Assumption 2 is satisfied, there exists a global diffeomorphism x = φ (ξ ) , transforming the system (1) into the form
$\dot{x}_i = x_{i+1} + \Delta_i(x_1, \ldots, x_i)$ , $1 \le i \le n-1$ ,
$\dot{x}_n = \gamma(x) + b\beta(x)u + \Delta_n(x)$ ,  $y = x_1$ .  (3)
3 Design of RAFTC
Assumption 3. The sign of the unknown parameter b is known. Furthermore, b satisfies $b_0 < |b| < b_1$ for some unknown positive constants $b_0$, $b_1$.
In this note, we consider a FLS consisting of the product-inference rule, singleton fuzzifier, center average defuzzifier, and Gaussian membership functions. Based on the universal approximation theorem, given a compact set $\Omega_{x_i} \subset R^i$, the unknown function $\Delta_i$ can be expressed as

$\Delta_i(x_1, \ldots, x_i) = w_i^{*T} h_i(x_1, \ldots, x_i) + v_i(x_1, \ldots, x_i)$ ,  (4)
where $w_i^*$ is an optimal weight vector; $h_i$ is the fuzzy basis function vector; and the reconstruction error $v_i$ is bounded, i.e., there exists an unknown constant $\rho_i > 0$ such that $|v_i| < \rho_i$.
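As a sketch of the approximator in (4), the following code builds a normalized fuzzy basis vector h(x) from Gaussian memberships (product inference, center-average defuzzification) so that $\Delta_i(x) \approx w^T h(x)$; the centers and width are illustrative assumptions.

```python
import numpy as np

def fuzzy_basis(x, centers, sigma=1.0):
    """Normalized fuzzy basis vector h(x) with Gaussian memberships."""
    x = np.atleast_1d(x)
    act = np.array([np.exp(-np.sum((x - c) ** 2) / sigma ** 2)
                    for c in centers])
    return act / (act.sum() + 1e-12)

# Delta_i(x) is approximated as w^T h(x); w is tuned by the adaptive law (6)
centers = [np.array([c]) for c in np.linspace(-2.0, 2.0, 7)]
w = np.zeros(len(centers))
approx = lambda x: w @ fuzzy_basis(x, centers)
```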
The design procedure consists of n steps. At each step i ($1 \le i < n$), direct adaptive fuzzy control techniques are employed to design a fictitious controller $\alpha_i$. For the residual uncertainties, a robustness term is introduced to compensate for them. At the n-th step, the actual control u appears and the design is completed. To save space, we omit these steps and directly give the resulting adaptive controller:

$e_1 = x_1 - y_r$ ,  $e_i = x_i - \alpha_{i-1}$ , $i = 2, \ldots, n$ ,
$\alpha_1 = -k_1 e_1 - \hat{w}_1^T h_1 - \varphi_1 + \dot{y}_r$ ,
$\alpha_i = -k_i e_i - \hat{w}_i^T h_i - \varphi_i - e_{i-1} + \dot{\alpha}_{i-1}$ , $i = 2, \ldots, n-1$ ,
$u = -\hat{q}\,\beta^{-1}\big(\gamma + k_n e_n + \hat{w}_n^T h_n + \varphi_n + e_{n-1} - \dot{\alpha}_{n-1}\big)$ ,
$\varphi_i = \hat{\rho}_i \tanh(e_i/\varepsilon_i)$ , $i = 1, \ldots, n$ ,  (5)
where $k_i > 0$ and $\varepsilon_i > 0$ for $i = 1, \ldots, n$ are design parameters; $q = b^{-1}$; and $\hat{w}_i$, $\hat{\rho}_i$, and $\hat{q}$ denote the estimates of $w_i^*$, $\rho_i$, and q, respectively. It is well known that for conventional adaptive laws, the unmodeled dynamics, disturbances, and reconstruction errors may lead to parameter drift and even instability problems. Consider the constraint region $\Omega_{w_i} = \{w_i : \|w_i\| \le c_i\}$ for the approximation parameter vector $w_i$ with constant $c_i > 0$. $\hat{w}_i$, $\hat{\rho}_i$, and $\hat{q}$ are updated according to

$\dot{\hat{w}}_i = Q_i(h_i e_i - \sigma_i \hat{w}_i)$ ,
$\dot{\hat{\rho}}_i = -k_c \ell_i(\hat{\rho}_i - \rho_i^0) + \ell_i e_i \tanh(e_i/\varepsilon_i)$ ,
$\dot{\hat{q}} = g_1(\mathrm{sgn}(b)\, e_n \phi_1 - \eta\hat{q})$ ,  (6)
where $\phi_1 = \gamma + k_n e_n + \hat{w}_n^T h_n + \varphi_n + e_{n-1} - \dot{\alpha}_{n-1}$; $Q_i = Q_i^T$ is a positive definite matrix; $\ell_i > 0$, $k_c > 0$, $g_1 > 0$, and $\eta > 0$ are constants; $\rho_i^0$ is the initial estimate of $\rho_i$; and the switching parameter $\sigma_i$ is chosen as
$\sigma_i = 0$ , if $\|\hat{w}_i\| < c_i$, or if $\|\hat{w}_i\| \ge c_i$ and $\hat{w}_i^T h_i e_i \le 0$ ;
$\sigma_i = \dfrac{(\|\hat{w}_i\|^2 - c_i^2)\,\hat{w}_i^T h_i e_i}{\delta_i^2 + 2\delta_i c_i}$ , if $\|\hat{w}_i\| \ge c_i$ and $\hat{w}_i^T h_i e_i > 0$ ,  (7)
with small constant $\delta_i > 0$. According to (5), we obtain the error dynamics of the closed-loop system

$\dot{E} = -KE - \tilde{W}^T H + M - B + SE + D$ ,  (8)

where $E = (e_1, \ldots, e_n)^T$, $K = \mathrm{diag}(k_1, \ldots, k_n)$, $H = (h_1^T, \ldots, h_n^T)^T$, $\tilde{W} = \mathrm{diag}(\tilde{w}_1, \ldots, \tilde{w}_n)$, $\tilde{w}_i = \hat{w}_i - w_i^*$, $M = (v_1, \ldots, v_n)^T$, $B = (\varphi_1, \ldots, \varphi_n)^T$, $D = (0, \ldots, b\tilde{q}\phi_1)^T$, $\tilde{q} = q - \hat{q}$, and $S \in R^{n\times n}$ has only nonzero elements $s_{i,i+1} = 1$, $s_{i+1,i} = -1$, $i = 1, \ldots, n-1$.
1
n
n
date V=
(
)
b ~2 1 T 1 ~ ~ 1 q . E E + tr W T Q −1W + ρ~ T L−1 ρ~ + 2 2 2 2 g1
(9)
The time derivative of V along the trajectory of (8) satisfies
$\dot{V} = E^T\big(-KE - \tilde{W}^T H + M - B + SE + D\big) + \mathrm{tr}\big(\tilde{W}^T Q^{-1}\dot{\hat{W}}\big) - \tilde{\rho}^T L^{-1}\dot{\hat{\rho}} - \frac{b}{g_1}\tilde{q}\dot{\hat{q}}$

$\le -E^T K E - \sum_{i=1}^n \sigma_i(\hat{w}_i - w_i^*)^T\hat{w}_i + k_c\sum_{i=1}^n \tilde{\rho}_i(\hat{\rho}_i - \rho_i^0) + b\eta\tilde{q}\hat{q} + \sum_{i=1}^n \rho_i\varepsilon_i$ .  (10)
Referring to (7) for i = 1, $\sigma_1 = 0$ if the first condition is true. If $\|\hat{w}_1\| \ge c_1$ and $\hat{w}_1^T h_1 e_1 > 0$, then $\sigma_1(\hat{w}_1 - w_1^*)^T\hat{w}_1 \ge 0$, because $\|\hat{w}_1\| \ge \|w_1^*\|$ and $\sigma_1 > 0$. By employing the same procedure, we can achieve the same results $\sigma_i(\hat{w}_i - w_i^*)^T\hat{w}_i \ge 0$,
$i = 2, \ldots, n$. Therefore, we can obtain $\sum_{i=1}^n \sigma_i(\hat{w}_i - w_i^*)^T\hat{w}_i \ge 0$. Consequently, there
exists

$\dot{V} \le -\lambda_{min}(K)\|E\|^2 - \frac{1}{2}k_c\|\tilde{\rho}\|^2 - \frac{1}{2}b\eta\tilde{q}^2 + \frac{1}{2}k_c\sum_{i=1}^n(\rho_i - \rho_i^0)^2 + \frac{\eta}{2b_0} + \sum_{i=1}^n \rho_i\varepsilon_i$ ,  (11)
where $\lambda_{min}(K)$ denotes the minimum eigenvalue of K. We see that $\dot{V}$ is negative whenever $E \notin \Omega_E = \{E : \|E\| \le \sqrt{c_0/\lambda_{min}(K)}\}$, or $\tilde{\rho} \notin \Omega_{\tilde{\rho}} = \{\tilde{\rho} : \|\tilde{\rho}\| \le \sqrt{2c_0/k_c}\}$, or $\tilde{q} \notin \Omega_{\tilde{q}} = \{\tilde{q} : |\tilde{q}| \le \sqrt{2c_0/(b\eta)}\}$, where $c_0 = \frac{k_c}{2}\sum_{i=1}^n(\rho_i - \rho_i^0)^2 + \frac{\eta}{2b_0} + \sum_{i=1}^n \rho_i\varepsilon_i$. According to the standard Lyapunov theorem extension, these demonstrate the uniform ultimate boundedness of E, $\tilde{\rho}$, and $\tilde{q}$. Since $\hat{w}_i$ ($i = 1, \ldots, n$) and $y_r^{(i)}$ ($i = 0, \ldots, n$)
are bounded, all fictitious functions $\alpha_i$ ($i = 1, \ldots, n-1$) and the control input u are bounded. Consequently, we can conclude that all signals in the closed-loop system are bounded. According to (11), the tracking error satisfies $\lim_{t\to\infty}|e_1(t)| \le \sqrt{c_0/\lambda_{min}(K)}$. A small tracking error can be achieved by increasing the control gains $k_i$ and decreasing $\varepsilon_i$, η. The parameter $k_c$ offers a tradeoff between the magnitudes of $\tilde{\rho}$ and $e_1$. It is also shown that the closer $\rho_i^0$ ($i = 1, \ldots, n$) are to $\rho_i$ ($i = 1, \ldots, n$), the smaller $\tilde{\rho}$ and $e_1$ become.
The RAFTC is suitable for the low order system with known sign of control gain. For the high order system, the online computation burden will be heavy. In the next section, we will present an approach for the high order system. The control laws have the adaptive mechanism with minimal learning parameterizations. Furthermore, a priori knowledge of the control gain sign is not required.
4 Design of RFTC
The design procedure is briefly given as follows.
Step i ($1 \le i < n$): According to (4), the fictitious controllers are chosen as
$\alpha_1 = -k_1 e_1 - w_1^T h_1 - \varphi_1 + \dot{y}_r$ ,  $\alpha_i = -k_i e_i - w_i^T h_i - \varphi_i - e_{i-1} + \dot{\alpha}_{i-1}$ , $i = 2, \ldots, n-1$ ,  (12)
where the nominal vector $w_i$ is designed and fixed by a priori knowledge. There exist constants $\rho_{wi}$ and $\rho_{vi}$ such that $\|w_i - w_i^*\| \le \rho_{wi}$, $|v_i| \le \rho_{vi}$. The robustness term $\varphi_i$ is given by $\varphi_i = \hat{\rho}_{wi}\|h_i\|\tanh(\|h_i\|e_i/\varepsilon_{wi}) + \hat{\rho}_{vi}\tanh(e_i/\varepsilon_{vi})$. $\hat{\rho}_{wi}$ and $\hat{\rho}_{vi}$ denote the estimates of $\rho_{wi}$ and $\rho_{vi}$, respectively. $k_i > 0$, $\varepsilon_{wi} > 0$, and $\varepsilon_{vi} > 0$ are design parameters. $\hat{\rho}_{wi}$ and $\hat{\rho}_{vi}$ are updated as follows:
$\dot{\hat{\rho}}_{wi} = -\sigma_{wi}(\hat{\rho}_{wi} - \rho_{wi}^0) + r_{wi}\, e_i\|h_i\|\tanh(\|h_i\|e_i/\varepsilon_{wi})$ ,
$\dot{\hat{\rho}}_{vi} = -\sigma_{vi}(\hat{\rho}_{vi} - \rho_{vi}^0) + r_{vi}\, e_i\tanh(e_i/\varepsilon_{vi})$ ,  (13)
where $\sigma_{wi}$, $r_{wi}$, $\sigma_{vi}$, $r_{vi}$ are positive constants.
Step n: In the final step, we design the actual controller. Since the sign of the control gain is unknown, we introduce a Nussbaum-type gain in the controller design. Choose the following actual controller:

$u = N(\omega)\,\alpha_n/\beta(x)$ ,  $\dot{\omega} = \alpha_n e_n$ ,
$\alpha_n = k_n e_n + \gamma + w_n^T h_n + \varphi_n + e_{n-1} - \dot{\alpha}_{n-1}$ .  (14)
N(ω) is a Nussbaum-type function with the following properties [6]:

$\limsup_{s\to\infty}\int_0^s N(\omega)\,d\omega = +\infty$ ;  $\liminf_{s\to\infty}\int_0^s N(\omega)\,d\omega = -\infty$ .  (15)
In this note, the Nussbaum function $N(\omega) = \exp(\omega^2)\cos(\pi\omega/2)$ is considered. It should be pointed out that the Nussbaum-type gain technique was first proposed in [6] for a class of first-order linear systems. Subsequently, the method was generalized to higher-order linear systems [7] and nonlinear systems [8], [9]. According to (12) and (14), we obtain the following error dynamics of the closed-loop system

$\dot{E} = -KE - \tilde{W}^T H + M - B + SE + D$ ,  (16)

where $E = (e_1, \ldots, e_n)^T$, $K = \mathrm{diag}(k_1, \ldots, k_n)$, $H = (h_1^T, \ldots, h_n^T)^T$, $\tilde{W} = \mathrm{diag}(\tilde{w}_1, \ldots, \tilde{w}_n)$, $\tilde{w}_i = w_i - w_i^*$, $M = (v_1, \ldots, v_n)^T$, $B = (\varphi_1, \ldots, \varphi_n)^T$, $D = (0, \ldots, (bN(\omega)+1)\alpha_n)^T$, and $S \in R^{n\times n}$ has only nonzero elements $s_{i,i+1} = 1$, $s_{i+1,i} = -1$, $i = 1, \ldots, n-1$.
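To illustrate how a Nussbaum gain copes with an unknown control direction, the toy scalar example below applies a control of the form $u = N(\omega)\alpha$ with $\dot{\omega} = \alpha e$ to a first-order plant $\dot{x} = bu$. This is only a sketch of the mechanism behind (14), not the paper's full backstepping law; the plant, the virtual gain and the step size are assumptions.

```python
import numpy as np

def nussbaum(w):
    """N(w) = exp(w^2) * cos(pi*w/2): swings between large positive and
    negative gains so the controller can cope with an unknown sign of b."""
    return np.exp(w ** 2) * np.cos(np.pi * w / 2)

# Toy first-order plant xdot = b*u with unknown-sign b, regulating x -> 0
b, x, w, dt = -2.0, 1.0, 0.0, 1e-3   # the law never uses sign(b) directly
for _ in range(20000):
    alpha = 2.0 * x                   # hypothetical stabilizing virtual control
    u = nussbaum(w) * alpha           # Eq. (14)-style control action
    w += alpha * x * dt               # omega_dot = alpha * e
    x += b * u * dt
# x approaches 0 despite the unknown control direction
```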
Proof. Define R w = diag(rw1 ,", rwn ) , Rv = diag(rv1 ,", rvn ) , ρ w = (ρ w1 ,", ρ wn ) , ρˆ w = (ρˆ w1 ,", ρˆ wn )T , ρ~w = ρˆ w − ρ w , ρv = (ρv1 ,", ρvn )T , ρˆv = (ρˆ v1,", ρˆvn )T , ρ~v = ρˆ v − ρ v . Considering the following Lyapunov function candidate 1 1 1 (17) V = E T E + ρ~wT Rw−1 ρ~w + ρ~vT Rv−1 ρ~v . 2 2 2 T
The time derivative of V along the trajectory of (16) is given by
$\dot{V} = E^T\big(-KE - \tilde{W}^T H + M - B\big) + (bN+1)\dot{\omega} + \tilde{\rho}_w^T R_w^{-1}\dot{\hat{\rho}}_w + \tilde{\rho}_v^T R_v^{-1}\dot{\hat{\rho}}_v$ .

By completing the squares
$\tilde{\rho}_{wi}(\hat{\rho}_{wi} - \rho_{wi}^0) = \frac{1}{2}(\hat{\rho}_{wi} - \rho_{wi})^2 + \frac{1}{2}(\hat{\rho}_{wi} - \rho_{wi}^0)^2 - \frac{1}{2}(\rho_{wi} - \rho_{wi}^0)^2$ ,
$\tilde{\rho}_{vi}(\hat{\rho}_{vi} - \rho_{vi}^0) = \frac{1}{2}(\hat{\rho}_{vi} - \rho_{vi})^2 + \frac{1}{2}(\hat{\rho}_{vi} - \rho_{vi}^0)^2 - \frac{1}{2}(\rho_{vi} - \rho_{vi}^0)^2$ ,
we obtain

$\dot{V} \le -E^T KE - \sum_{i=1}^n\left(\frac{\sigma_{wi}}{2r_{wi}}\tilde{\rho}_{wi}^2 + \frac{\sigma_{vi}}{2r_{vi}}\tilde{\rho}_{vi}^2\right) + (bN(\omega)+1)\dot{\omega} + \varsigma$ ,

where $\varsigma = \sum_{i=1}^n\left(\rho_{wi}\varepsilon_{wi} + \rho_{vi}\varepsilon_{vi} + \frac{\sigma_{wi}}{2r_{wi}}(\rho_{wi} - \rho_{wi}^0)^2 + \frac{\sigma_{vi}}{2r_{vi}}(\rho_{vi} - \rho_{vi}^0)^2\right)$. Thus

$\dot{V} \le -\lambda V + \varsigma + (bN(\omega)+1)\dot{\omega}$ ,  (18)

where $\lambda = \min\{2k_1, \ldots, 2k_n, \sigma_{w1}, \ldots, \sigma_{wn}, \sigma_{v1}, \ldots, \sigma_{vn}\}$. Solving the inequality (18) yields

$0 \le V(t) \le \frac{\varsigma}{\lambda} + e^{-\lambda t}V(0) + \int_0^t (bN(\omega)+1)\dot{\omega}\, e^{-\lambda(t-\tau)}\,d\tau$ , $\forall t \ge 0$ .  (19)
We first show that ω(t) is bounded on $[0, t_f)$ by seeking a contradiction. Define $P(\omega(0), \omega(t)) = \int_0^t (bN(\omega)+1)e^{-\lambda(t-\tau)}\,d\omega(\tau)$, $\forall t \in [0, t_f)$. Two cases need to be considered: 1) ω(t) has no upper bound, and 2) ω(t) has no lower bound.
Case 1: Suppose that ω(t) has no upper bound on $[0, t_f)$, i.e., there exists a monotone increasing sequence {ω($t_i$)} such that $\lim_{i\to\infty}\omega(t_i) = +\infty$ as $\lim_{i\to\infty} t_i = t_f$.
First, we consider the case b > 0. Suppose that $4M+1 > \omega(0)$, where M is an integer. We have

$P(\omega(0), 4M+1) = \int_0^{t_1}(bN(\omega)+1)e^{-\lambda(t_1-\tau)}\,d\omega(\tau) \le \big(b\, e^{(4M+1)^2} + 1\big)(4M+1-\omega(0))$ .  (20)

Noting that N(ω) is negative on the interval $[4M+1, 4M+3]$, thus

$P(4M+1, 4M+3) = \int_{t_1}^{t_2}(bN(\omega)+1)e^{-\lambda(t_2-\tau)}\,d\omega(\tau) \le b\, e^{-\lambda(t_2-t_1)}\int_{4M+1.5}^{4M+2.5} N(\sigma)\,d\sigma + 2 \le -2\sqrt{2}\,b\, e^{-\lambda(t_2-t_1)+(4M+1.5)^2} + 2$ .  (21)
According to (20), (21), we have

$P(\omega(0), 4M+3) \le e^{(4M+1)^2}\Big(-2\sqrt{2}\,b\, e^{-\lambda(t_2-t_1)+4M+1.25} + b(4M+1-\omega(0)) + (4M+3-\omega(0))e^{-(4M+1)^2}\Big)$ .
Hence, $P(\omega(0), 4M+3) \to -\infty$ as $M \to \infty$, which yields a contradiction in (19).
Then, we consider the case b < 0. Suppose that $4M-1 > \omega(0)$; then

$P(\omega(0), 4M-1) = \int_0^{t_1}(bN(\omega)+1)e^{-\lambda(t_1-\tau)}\,d\omega(\tau) \le \big(b\, e^{(4M-1)^2} + 1\big)(4M-1-\omega(0))$ .  (22)
Noting that N(ω) is positive on the interval $[4M-1, 4M+1]$, thus

$P(4M-1, 4M+1) = \int_{t_1}^{t_2}(bN(\omega)+1)e^{-\lambda(t_2-\tau)}\,d\omega(\tau) \le b\, e^{-\lambda(t_2-t_1)}\int_{4M-0.5}^{4M+0.5} N(\sigma)\,d\sigma + 2 \le 2\sqrt{2}\,b\, e^{-\lambda(t_2-t_1)+(4M-0.5)^2} + 2$ .  (23)
According to (22), (23), we have

$P(\omega(0), 4M+1) \le e^{(4M-1)^2}\Big(2\sqrt{2}\,b\, e^{-\lambda(t_2-t_1)+4M-0.75} + b(4M-1-\omega(0)) + (4M+1-\omega(0))e^{-(4M-1)^2}\Big)$ .
Hence, $P(\omega(0), 4M+1) \to -\infty$ as $M \to \infty$, which yields a contradiction in (19). Thus, we conclude that ω(t) is upper bounded on $[0, t_f)$.
Case 2: Employing the same method as in Case 1, we can prove that ω(t) is lower bounded on $[0, t_f)$.
Therefore, ω(t) must be bounded on $[0, t_f)$. As an immediate result, V(t) is bounded on $[0, t_f)$. All the signals in the closed-loop system are bounded, so $t_f \to \infty$. This proves that all the signals of the closed-loop system are UUB, i.e., for any compact set $\Omega_n \subset R^n$, there exists a controller such that as long as $x(0) \in \Omega_n$, $e_i$, $\hat{\rho}_{wi}$, and $\hat{\rho}_{vi}$ are
UUB, respectively. Correspondingly, the tracking error e1 (t ) satisfies
$|e_1(t)| \le \sqrt{2\left(\frac{\varsigma}{\lambda} + \int_0^t (bN(\omega)+1)\dot{\omega}\, e^{-\lambda(t-\tau)}\,d\tau + e^{-\lambda t}V(0)\right)}$ .
Thus, by appropriately adjusting the design parameters, the tracking error e1 (t ) can be kept as small as possible.
Remark 1. For the low-order system with mismatched uncertainties and a known sign of the control gain, we present a new RAFTC algorithm. In order to avoid possible divergence of the on-line tuning of the FLS, a new adaptive law is proposed to make sure that the FLS parameter vectors are tuned within a prescribed range. By a special design technique, the controller singularity problem is avoided. Remark 2. For the high-order system with mismatched uncertainties and an unknown sign of the control gain, we present a new RFTC algorithm. The main feature of the algorithm is an adaptive mechanism with minimal learning parameters, so the on-line computation burden is kept to a minimum. By employing the Nussbaum gain design technique, the controller singularity problem is avoided completely. Remark 3. Both algorithms are very suitable for practical implementation. The controllers and the parameter adaptive laws are highly structural. Such a property is particularly suitable for parallel processing and hardware implementation in practical applications.
5 Conclusions

In this note, the tracking control problem has been considered for a class of nonlinear systems with mismatched uncertainties that are transformable to the strict-feedback form. By combining the backstepping design technique with fuzzy set theory, RAFTC for the low-order system with known sign of the control gain and RFTC for the high-order system with unknown sign of the control gain are proposed. Both algorithms completely avoid the controller singularity problem. The proposed controllers are highly structural and particularly suitable for parallel processing in practical applications. Furthermore, the RFTC algorithm has an adaptive mechanism with minimal learning parameterization. It is shown that the proposed algorithms guarantee semiglobal uniform ultimate boundedness of all the signals in the closed-loop system. In addition, the tracking error can be reduced to arbitrarily small values by suitably choosing the design parameters.
References
1. Lee, H., Tomizuka, M.: Robust Adaptive Control Using a Universal Approximator for SISO Nonlinear Systems. IEEE Trans. Fuzzy Syst. 8 (2000) 95-106
2. Jagannathan, S., Lewis, F.L.: Robust Backstepping Control of a Class of Nonlinear Systems Using Fuzzy Logic. Inform. Sci. 123 (2000) 223-240
3. Wang, W.Y., Chan, M.L., Lee, T.T., Liu, C.H.: Adaptive Fuzzy Control for Strict-Feedback Canonical Nonlinear Systems with H∞ Tracking Performance. IEEE Trans. Syst. Man, Cybern. 30 (2000) 878-885
4. Yang, Y.S., Feng, G., Ren, J.: A Combined Backstepping and Small-Gain Approach to Robust Adaptive Fuzzy Control for Strict-Feedback Nonlinear Systems. IEEE Trans. Syst. Man, Cybern. 34 (2004) 406-420
5. Polycarpou, M.M., Ioannou, P.A.: A Robust Adaptive Nonlinear Control Design. Automatica 32 (1996) 423-427
6. Nussbaum, R.D.: Some Remarks on a Conjecture in Parameter Adaptive Control. Syst. Contr. Lett. 3 (1983) 243-246
7. Mudgett, D.R., Morse, A.S.: Adaptive Stabilization of Linear Systems with Unknown High Frequency Gains. IEEE Trans. Automat. Contr. 30 (1985) 549-554
8. Ge, S.S., Wang, J.: Robust Adaptive Neural Control for a Class of Perturbed Strict Feedback Nonlinear Systems. IEEE Trans. Neural Networks 13 (2002) 1409-1419
9. Ding, Z.T.: Adaptive Control of Nonlinear Systems with Unknown Virtual Control Coefficients. Int. J. Adapt. Control Signal Process. 14 (2000) 505-517
Intelligent Fuzzy Systems for Aircraft Landing Control
Jih-Gau Juang 1, Bo-Shian Lin 1, and Kuo-Chih Chin 2
1 Department of Communication and Guidance Engineering, National Taiwan Ocean University, Keelung 20224, Taiwan
[email protected] [email protected]
2 ASUSTeK Computer Inc., Taipei 101, Taiwan
[email protected]
Abstract. The purpose of this paper is to investigate the application of evolutionary fuzzy neural systems to aircraft automatic landing control and to make the automatic landing system more intelligent. Three intelligent aircraft automatic landing controllers are presented, using a fuzzy-neural controller with the BPTT algorithm, a hybrid fuzzy-neural controller with adaptive control gains, and a fuzzy-neural controller with particle swarm optimization, to improve the performance of the conventional automatic landing system. The current flight control law is adopted in the intelligent controller design. Tracking performance and adaptive capability are demonstrated through software simulations.
1 Introduction

In flight, take-off and landing are the most difficult operations with regard to safety. The automatic landing system of an airplane is enabled only under limited conditions. If severe wind disturbances are encountered, the pilot must handle the aircraft because of the limits of the automatic landing system. The first Automatic Landing System (ALS) was made in 1965; since then, most aircraft have been equipped with this system. The ALS relies on the Instrument Landing System (ILS) to guide the aircraft to the proper altitude, position, and approach angle during the landing phase. Conventional automatic landing systems can provide a smooth landing, which is essential to the comfort of passengers. However, these systems work only within a specified operational safety envelope; when the conditions are beyond the envelope, such as in turbulence or wind shear, they often cannot be used. Most conventional control laws generated by the ALS are based on the gain scheduling method [1]. Control parameters are preset for different flight conditions within a specified safety envelope defined by Federal Aviation Administration (FAA) regulations. According to FAA regulations, the environmental conditions considered in the determination of dispersion limits are: headwinds up to 25 knots; tailwinds up to 10 knots; crosswinds up to 15 knots; moderate turbulence; and wind shear of 8 knots per 100 feet from 200 feet to touchdown [2]. If the flight conditions are beyond the preset envelope, the ALS is disabled and the pilot takes over. An inexperienced pilot may not be able to guide the aircraft to a safe landing at the airport.
China Airlines Flight 642 had a hard landing at Hong Kong International Airport on 22 August 1999. One wing broke off during the impact; the accident killed 3 passengers and injured 211 people. After a 15-month investigation, a crash report was released on November 30, 2000. It showed that the version-907 crosswind-correction software on the Boeing MD-11 had a defect. Boeing later confirmed this software problem and replaced the crosswind-correction software with the 908 version on nearly 190 MD-11s. According to Boeing's report [3], 67% of accidents by primary cause are due to human factors and 5% are attributed to weather factors; by phase of flight, 47% of accidents occur during final approach or landing. It is therefore desirable to develop an intelligent ALS that expands the operational envelope to include safe responses under a wider range of conditions. The goal of this study is for the proposed intelligent automatic landing controllers to relieve human operators and guide the aircraft to a safe landing in a wind-disturbance environment. In this study, robustness of the proposed controller is obtained by choosing optimal control gain parameters that allow a wide range of disturbances to the controller.

In 1995, Kennedy and Eberhart presented a new evolutionary computation algorithm, real-coded Particle Swarm Optimization (PSO) [4]. PSO is one of the latest population-based optimization methods; it does not use filtering operations (such as crossover and mutation), and the members of the entire population are maintained throughout the search procedure. The method was developed by simulating a social system and has been found robust in solving continuous nonlinear optimization problems [5-7]; it is suitable for determining control parameters that give the aircraft better adaptive capability in severe environments.

Recently, some researchers have applied intelligent concepts such as neural networks and fuzzy systems to landing control to increase the flight controller's adaptivity to different environments [8-12]. Most of them do not consider robustness of the controller to wind disturbances [8-10]. In [11], a PD-type fuzzy control system is developed for automatic landing control of both a linear and a nonlinear aircraft model, and adaptive control over a wide range of initial conditions is demonstrated successfully; the drawback is that the wind disturbance is set up only at the initial condition, so persistent wind disturbance is not considered. In [12], wind disturbances are included, but the neural controller is trained for a specific wind speed, and robustness over a wide range of wind speeds is not considered. In previous works [13-14], we applied neural networks to automatic landing control; environmental adaptive capability was improved, but the rate of convergence was very slow. Here, we present three learning schemes, a fuzzy-neural controller with the BPTT algorithm, a fuzzy-neural controller with adaptive control gains, and a fuzzy-neural controller with the PSO algorithm, to guide the aircraft to a safe landing and make the controller more robust and adaptive to the ever-changing environment.
2 Aircraft Landing System

The pilot descends from the cruise altitude to an altitude of approximately 1200 ft above the ground. The pilot then positions the airplane on a heading towards the runway centerline. When the aircraft approaches the outer airport marker, which is about 4 nautical miles from the runway, the glide path signal is intercepted (as shown in Fig. 1). As the airplane descends along the glide path, its pitch attitude and speed must be controlled. The aircraft maintains a constant speed along the flight path. The descent rate is about 10 ft/sec and the pitch angle is between -5 and +5 degrees. Finally, as the airplane descends 20 to 70 feet above the ground, the glide path control system is disengaged and a flare maneuver is executed. The vertical descent rate is decreased to 2 ft/sec so that the landing gear may be able to dissipate the energy of the impact at landing. The pitch angle of the airplane is then adjusted, between 0 and 5 degrees for most aircraft, which allows a soft touchdown on the runway surface.
Fig. 1. Glide path and flare path (altitude 1200 ft at glide-path intercept, flare at about 50 ft, touchdown at 0 ft)
A simplified model of a commercial aircraft that moves only in the longitudinal and vertical plane is used in the simulations for ease of implementation [12]. To make the ALS more intelligent, reliable wind profiles are necessary. Two spectral turbulence forms, modeled by von Karman and Dryden, are most often used for aircraft response studies. In this study the Dryden form [12] was used for its ease of demonstration. Figure 2 shows a turbulence profile with a wind speed of 30 ft/sec at 510 ft altitude.

Fig. 2. Turbulence profile: wind gust velocity components, longitudinal (solid) and vertical (dashed), in ft/sec over 0-50 sec
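For readers who want to reproduce a comparable gust input, the following is a minimal sketch of a first-order discrete Dryden-style longitudinal gust filter. The filter form is the standard low-altitude approximation, and the airspeed, scale length, and intensity values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dryden_gust(duration=50.0, dt=0.02, V=235.0, L_u=600.0, sigma_u=10.0, seed=0):
    # First-order AR approximation of the Dryden longitudinal filter:
    # the output has correlation length L_u/V and RMS intensity ~ sigma_u.
    rng = np.random.default_rng(seed)
    n = int(duration / dt)
    a = V * dt / L_u                       # discrete pole offset, must satisfy a < 1
    u = np.zeros(n)
    for k in range(n - 1):
        u[k + 1] = (1.0 - a) * u[k] + sigma_u * np.sqrt(2.0 * a) * rng.standard_normal()
    return np.arange(n) * dt, u            # time (sec), gust speed (ft/sec)
```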
3 Landing Controller Design

In this study the aircraft maintains a constant speed along the flight path, so the change in throttle command is assumed to be zero. The aircraft is thus controlled solely by the pitch command. Detailed descriptions can be found in [12].

3.1 Fuzzy-Neural Controller with BPTT Algorithm
In this section, we first design and analyze the performance of the fuzzy-neural controller for auto-landing in severe wind conditions. The learning process is shown in Fig. 3, where AM is the aircraft model, FNNC is the fuzzy modeling neural network controller, and LIAM is the linearized inverse aircraft model [13]. Every learning cycle consists of all stages from S0 to Sk. Weight changes in the FNNC are updated in batch mode. The controller is trained by the backpropagation through time (BPTT) algorithm. The inputs to the fuzzy-neural controller are altitude, altitude command, altitude rate, and altitude rate command; the output of the controller is the pitch command. The detailed structure of the fuzzy modeling network can be found in [14-15]. The fuzzy-neural controller starts learning without any control rule. The LIAM calculates the error signals that are backpropagated through the controller at each stage [13].
Fig. 3. Learning process (FNNC, AM, and LIAM blocks over stages S0 to Sk, with command corrections ∆C and stage errors e backpropagated from the desired terminal state Sd)
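As a toy illustration of the batch BPTT idea described above (not the paper's FNNC/AM/LIAM implementation), the sketch below trains a scalar linear controller c_k = w*x_k on a scalar linear plant x_{k+1} = a*x_k + b*c_k by accumulating the terminal-error gradient backward through all stages; the plant constants and learning rate are arbitrary assumptions.

```python
# Toy BPTT cycle: forward-roll the closed loop, then chain-rule the terminal
# error back through every stage before applying one batch update to w.
a, b, K = 0.95, 0.1, 50        # plant x_{k+1} = a*x_k + b*c_k, K stages per cycle
w, lr = 0.0, 0.05              # controller gain and learning rate

for cycle in range(200):
    xs = [1.0]                 # forward pass from S0
    for _ in range(K):
        xs.append(a * xs[-1] + b * w * xs[-1])
    e = xs[-1] - 0.0           # terminal tracking error against desired state 0

    grad, adj = 0.0, e         # adj holds dE/dx at the current stage
    for k in reversed(range(K)):
        grad += adj * b * xs[k]        # stage-k contribution of w to the error
        adj *= (a + b * w)             # propagate sensitivity one stage back
    w -= lr * grad             # batch-mode weight update
```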
In the simulations, successful touchdown landing conditions are defined as follows:

-3 ≤ ḣ(T) ≤ 0 ft/sec, 200 ≤ ẋ(T) ≤ 270 ft/sec, -300 ≤ x(T) ≤ 1000 ft, -1 ≤ θ(T) ≤ 5 degrees,

where T is the time at touchdown. Initial flight conditions are: h(0) = 500 ft, ẋ(0) = 235 ft/sec, x(0) = 9240 ft, and γ0 = -3 degrees. With the wind turbulence speed at 30 ft/sec, the horizontal position at touchdown is 418.5 ft, the horizontal velocity is 234.7 ft/sec, the vertical speed is -2.7 ft/sec, and the pitch angle is -0.08 degrees, as shown in Figs. 4 to 6. Table 1 shows the results for different wind turbulence speeds. The controller can successfully guide the aircraft through wind speeds of 0 ft/sec to 45 ft/sec, while the conventional controller can only reach 30 ft/sec [12].
Fig. 4. Aircraft altitude and command
Fig. 5. Aircraft vertical velocity and command
Fig. 6. Aircraft pitch angle and command
Table 1. Results for different turbulence strengths

Wind speed (ft/sec)               10      20      30      40      45
Landing point (ft)                580.3   541.8   418.5   457.3   247.9
Aircraft vertical speed (ft/sec)  -2.2    -2.3    -2.7    -2.3    -2.9
Pitch angle (degree)              -0.81   -0.63   -0.08   -0.06   0.35

3.2 Fuzzy-Neural Controller with Adaptive Control Gains
In the previous section, the control gains of the pitch autopilot in the glide-slope phase and the flare phase were fixed, and robustness of the fuzzy-neural controller was achieved by the BPTT training scheme. In this section, a neural network generates adaptive control gains for the pitch autopilot; different wind disturbances are the inputs of the neural network, as in Fig. 7. The fuzzy-neural controller is trained by BP instead of BPTT. With the wind turbulence speed at 50 ft/sec, the horizontal position at touchdown is 547 ft, the horizontal velocity is 235 ft/sec, the vertical speed is -2.5 ft/sec, and the pitch angle is 0.34 degrees, as shown in Figs. 8 to 10. Table 2 shows the results for different wind turbulence speeds. The controller can successfully guide the aircraft through wind speeds of 34 ft/sec to 58 ft/sec.
Fig. 7. Learning process with the wind-disturbance neural network generating gains kθ and kq over stages S0 to Sk

Fig. 8. Aircraft altitude and command
Fig. 9. Aircraft vertical velocity and command
Fig. 10. Aircraft pitch angle and command

Table 2. Results for different turbulence strengths

Wind speed (ft/sec)               34      40      45      50      58
Landing point (ft)                427.9   346.8   853.5   547.5   943.7
Aircraft vertical speed (ft/sec)  -2.9    -2.9    -2.8    -2.5    -2.8
Pitch angle (degree)              0.15    0.10    0.20    0.34    0.25
3.3 Fuzzy-Neural Controller with Particle Swarm Optimization
In the PSO algorithm, each member is called a "particle", and each particle flies around in the multi-dimensional search space with a velocity that is constantly updated by the particle's own experience and the experience of the particle's neighbors or of the whole swarm. Each particle keeps track of the coordinates in the problem space that are associated with the best solution (fitness) it has achieved so far; this value is called pbest. Another best value tracked by the global version of the particle swarm optimizer is the overall best value, and its location, obtained so far by any particle in the population; this location is called gbest. At each time step, the particle swarm optimization concept consists of velocity changes of each particle toward its pbest and gbest locations. Acceleration is weighted by a random term, with separate random numbers generated for acceleration toward the pbest and gbest locations.

The turbulence strength increases progressively during the parameter search. The purpose of this procedure is to search for more suitable control gains kθ and kq for the glide path and the flare path. With the wind turbulence speed at 50 ft/sec, the horizontal position at touchdown is 505 ft, the horizontal velocity is 235 ft/sec, the vertical speed is -2.7 ft/sec, and the pitch angle is 0.21 degrees, as shown in Figs. 11 to 13. Table 3 shows the results for different wind turbulence speeds. The controller can successfully guide the aircraft through wind speeds of 30 ft/sec to 90 ft/sec. With the same wind turbulence speed of 50 ft/sec, Fig. 14 shows the absolute height error using the fuzzy-neural controller with adaptive control gains (dashed line) and the hybrid fuzzy-neural controller with PSO (solid line); it indicates that the performance of the hybrid fuzzy-neural controller with PSO is much better.
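The velocity-update rule just described is the standard global-best PSO; a minimal sketch follows. The fitness function here is a hypothetical placeholder (in the paper's setting it would score a landing simulation under progressively stronger turbulence), and all parameter values are illustrative assumptions.

```python
import numpy as np

def fitness(gains):
    # placeholder: distance from an assumed "good" gain pair (k_theta, k_q)
    return np.sum((gains - np.array([2.0, 1.5])) ** 2)

def pso(n_particles=20, n_iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 5.0, size=(n_particles, 2))    # particle positions (gains)
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()              # gbest location
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = x + v
        vals = np.array([fitness(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g

best_gains = pso()
```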
Fig. 11. Aircraft altitude and command
Fig. 12. Aircraft vertical velocity and command
Fig. 13. Aircraft pitch angle and command

Table 3. Results for different turbulence strengths

Wind speed (ft/sec)               30      40      50      70      90
Landing point (ft)                467.2   403.7   505.3   543.0   771.4
Aircraft vertical speed (ft/sec)  -2.4    -2.5    -2.7    -2.5    -2.8
Pitch angle (degree)              -0.18   0.06    0.21    0.91    1.31
Fig. 14. Performance of Fuzzy-Neural Controller
4 Conclusions

For the safe landing of an aircraft with a conventional controller, the wind speed limit of turbulence is 30 ft/sec. In this study, control gains are selected by a combination of nonlinear control design, a neural network, and particle swarm optimization, and comparisons of the different control schemes are given. The hybrid fuzzy-neural controller with adaptive control gains can overcome turbulence up to 58 ft/sec; the fuzzy-neural controller with BPTT and the fuzzy-neural controller with the PSO algorithm can reach 45 ft/sec and 90 ft/sec, respectively. The fuzzy-neural controller with the PSO algorithm has the best performance, and its convergence rate is also improved. The purpose of this paper has been achieved.
Acknowledgement
This work was supported by the National Science Council, Taiwan, ROC, under Grant NSC 93-2213-E-019-007.
References
1. Buschek, H., Calise, A.J.: Uncertainty Modeling and Fixed-Order Controller Design for a Hypersonic Vehicle Model. Journal of Guidance, Control, and Dynamics. 20 (1997) 42-48
2. Federal Aviation Administration: Automatic Landing Systems. AC 20-57A (1971)
3. Boeing Publication: Statistical Summary of Commercial Jet Airplane Accidents. Worldwide Operations 1959-1999. (2000)
4. Kennedy, J., Eberhart, R.C.: Particle Swarm Optimization. Proceedings of IEEE International Conference on Neural Networks. 4 (1995) 1942-1948
5. Shi, Y., Eberhart, R.C.: Empirical Study of Particle Swarm Optimization. Proceedings of the 1999 Congress on Evolutionary Computation. (1999) 1945-1950
6. Angeline, P.J.: Using Selection to Improve Particle Swarm Optimization. Proceedings of IEEE International Conference on Evolutionary Computation. (1998) 84-89
7. Zheng, Y.L., Ma, L., Zhang, L., Qian, J.: On the Convergence Analysis and Parameter Selection in Particle Swarm Optimization. Proceedings of the Second IEEE International Conference on Machine Learning and Cybernetics. (2003) 1802-1807
8. Izadi, H., Pakmehr, M., Sadati, N.: Optimal Neuro-Controller in Longitudinal Autolanding of a Commercial Jet Transport. Proceedings of IEEE International Conference on Control Applications. (2003) 1-6
9. Chaturvedi, D.K., Chauhan, R., Kalra, P.K.: Application of Generalized Neural Network for Aircraft Landing Control System. Soft Computing. 6 (2002) 441-118
10. Ionita, S., Sofron, E.: The Fuzzy Model for Aircraft Landing Control. Proceedings of AFSS International Conference on Fuzzy Systems. (2002) 47-54
11. Nho, K., Agarwal, R.K.: Automatic Landing System Design Using Fuzzy Logic. Journal of Guidance, Control, and Dynamics. 23 (2000) 298-304
12. Jorgensen, C.C., Schley, C.: A Neural Network Baseline Problem for Control of Aircraft Flare and Touchdown. Neural Networks for Control. (1991) 403-425
13. Juang, J.G., Chang, H.H., Chang, W.B.: Intelligent Automatic Landing System Using Time Delay Neural Network Controller. Applied Artificial Intelligence. 17 (2003) 563-581
14. Juang, J.G.: Fuzzy Neural Networks Approaches for Robotic Gait Synthesis. IEEE Transactions on Systems, Man, and Cybernetics—Part B: Cybernetics. 30 (2000) 594-601
15. Horikawa, S., Furuhashi, T., Uchikawa, Y.: On Fuzzy Modeling Using Fuzzy Neural Networks with the Back-Propagation Algorithm. IEEE Transactions on Neural Networks. 3 (1992) 801-806
Scheduling Design of Controllers with Fuzzy Deadline*
Hong Jin, Hongan Wang, Hui Wang, and Danli Wang
Institute of Software, Chinese Academy of Sciences, Beijing 100080
{hjin, wha, hui.wang, dlwang}@iel.iscas.ac.cn
Abstract. Because some timing constraints of a controller task may not be as precisely determined as a real-time system engineer assumes, its scheduling with uncertain attributes usually cannot simply be handled by the classic methods used in real-time systems. The model of a controller task with a fuzzy deadline and its scheduling are studied. The concept of dedication and the largest-dedication-first scheduling policy are proposed. Simulation shows that the scheduling of controller tasks with fuzzy deadlines can be implemented using the proposed method, while the control performance cost remains guaranteed.
1 Introduction

Existing academic research on the co-design of control and scheduling assumes that all timing constraints (e.g., sampling period, deadline) are known [1][2][3][4]. However, some timing constraints of a control loop usually depend on the requirements for the controlled process dynamics and loop performance; e.g., an imprecise clock, overload, or a computer fault can all lead to a relative change of the sampling period. Moreover, computer hardware, the control algorithm, the real-time operating system, the scheduling algorithm, and network delay can all cause delay and jitter in a control system [3]. Controller tasks with fuzzy deadlines commonly appear in control problems, even though a control engineer is not concerned with how the controller task is implemented in a computer-controlled system. Linguistic terms can be used to describe timing constraints, e.g., a response time of less than 0.1 seconds or a sampling period of 2 seconds; the latter means that 2±0.1% seconds can be considered correct and acceptable. Another natural interpretation is that the deadline is allowed to vary slightly around the (precise) sampling period [5], and this small variation can be considered random, uncertain, or fuzzy. So, it is important and meaningful to study the scheduling of control tasks with uncertain/fuzzy attributes in computer-controlled systems. However, existing research on the scheduling of tasks with uncertain/fuzzy attributes has considered neither control tasks as scheduling objects nor the performance cost of the control system [5][6]. Under the precondition of assuring control performance, how to use limited computing resources to schedule controller tasks with fuzzy deadlines is the problem addressed in this paper. Time-driven sampling is considered. The proposed largest-dedication-first scheduling algorithm is introduced in Section 3.
* This work is supported by China NSF under Grant No. 60374058, 60373055 and 60373056.
2 Controller Model

2.1 Controller Task

In computer-controlled systems [1][3], the deviation of the system output y(t) from the reference input r(t) is used as the input of a controller, which recalculates the manipulated variable used as the input of the controlled system G(s) every P seconds. The control aim is to make y(t) approximate r(t) as quickly as possible. The state update of the manipulated variable (Update State) and the calculation of the system output (Calculate Output) make up the closed loop of the whole system. In the co-design of control and scheduling, the simplest model assumption about a controller task Ti is that Ti is periodic and has a fixed sampling period Pi, a known worst-case execution time, and a hard deadline di; its release/arrival time is assumed to be zero in this paper. Moreover, the execution time of Ti is denoted Ci. In the following discussion, the deadline di is assumed to be fuzzy (uncertain).

2.2 Fuzzy Deadline

For the fuzzy description of a deadline, which can take any value in some interval with a probability, many continuous fuzzy numbers can be used, e.g., trapezoidal, triangular, or truncated normal fuzzy numbers. The continuous trapezoidal deadline is used here [6]. Let di ≡ Trapezoid(ai, ei, fi, bi), where ai ≤ ei ≤ fi ≤ bi ≤ Pi. Let µi(t) be the [ai, bi]-cut trapezoid membership function of di; then µi(t) equals (t − ai)hi/(ei − ai) for ai ≤ t < ei, hi for ei ≤ t ≤ fi, (bi − t)hi/(bi − fi) for fi < t ≤ bi, and 0 otherwise.
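A minimal sketch of this membership function follows; the piecewise form beyond the rising edge is the standard trapezoid shape implied by Trapezoid(ai, ei, fi, bi), and the height parameter h is assumed to default to 1.

```python
def trapezoid_membership(t, a, e, f, b, h=1.0):
    # Membership of the fuzzy deadline Trapezoid(a, e, f, b) with height h,
    # assuming a < e <= f < b so the sloped edges are well defined.
    if t < a or t > b:
        return 0.0
    if t < e:
        return (t - a) * h / (e - a)    # rising edge on [a, e)
    if t <= f:
        return h                        # plateau on [e, f]
    return (b - t) * h / (b - f)        # falling edge on (f, b]
```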
(9)
where δ and ε are the permitted transfer orbit error and the expected terminal orbit error, respectively.
3 Control Law

The constellation initialization control reduces to the orbital maneuver control of each satellite from its initial orbit to its terminal orbit. There are infinitely many control laws (tk, ∆vk) (k = 1, ..., m) for the orbital maneuver. However, different control laws result in different transfer orbits (transfer trajectories), which lead to different fuel costs. Therefore, it is necessary to design a control law that achieves fuel-efficient orbital maneuvers.

3.1 Control Efficiency
The orbital maneuver control reduces to correcting the 6 orbital parameters. Each parameter correction has an appropriate control phase; the same fuel cost applied at different phases produces different effects, and the relation between control phase and control effect determines the control efficiency. Observing the influence coefficients of the control acceleration in the Gauss perturbation equations (1)-(6), one can find the following relations between control efficiency and satellite phase:

a. Correcting i is most efficient when the argument of latitude satisfies
cos(ω + f) = ±1, or (ω + f) = kπ, k = 0, 1, ...   (10)
where the satellite passes the equator.

b. Correcting Ω is most efficient when the argument of latitude satisfies
sin(ω + f) = ±1, or (ω + f) = kπ + π/2, k = 0, 1, ...   (11)
where the satellite passes the south or north apex.

c. The semimajor axis a can be corrected at any phase. Correcting a by the transverse control force is more efficient than by the radial control force. With the most efficient transverse correction of a, the true anomaly satisfies
cos(f) = 1, or f = 2kπ, k = 0, 1, ...   (12)
where the satellite is at perigee.

d. The eccentricity e can be corrected at any phase. Correcting e by the transverse control force is more efficient than by the radial control force. With the most efficient transverse correction of e, the true anomaly satisfies
cos(f) = ±1, or f = kπ, k = 0, 1, ...   (13)
where the satellite is at apogee or perigee.

e. The argument of perigee ω can be corrected at any phase. With the most efficient transverse correction of ω, the true anomaly satisfies
sin(f) = ±1, or f = kπ + π/2, k = 0, 1, ...   (14)
where the satellite is at the midpoint between apogee and perigee.

f. The mean anomaly M can be corrected at any phase. With the most efficient transverse correction of M, the true anomaly satisfies
sin(f) = ±1, or f = kπ + π/2, k = 0, 1, ...   (15)
where the satellite is at the midpoint between apogee and perigee.

Thus, the efficiency of orbital parameter correction depends strongly on satellite phase. In the simple case that only one orbital parameter needs to be corrected, the control phase for the orbital maneuver can be determined as above. In the general case that more than one orbital parameter needs to be corrected, the control phase can be determined by the plan described in Fig. 1:
1) The correction of the semimajor axis and eccentricity takes place between the perigee (the initial maneuver position) and the apogee (the terminal maneuver position);
2) The correction of the argument of perigee and the mean anomaly takes place between the two midpoints of perigee and apogee;
3) The correction of the inclination takes place between the ascending and descending equator crossings;
4) The correction of the right ascension of the ascending node takes place between the south apex and the north apex.
Let T denote the orbit period; the maneuver duration then lasts about 2T, so the plan is called the "2T"-control strategy.
Fig. 1. "2T"-plan of orbital maneuver control: semimajor axis and eccentricity corrected between perigee and apogee; argument of perigee and mean anomaly between the two midpoints of perigee and apogee; inclination between the ascending and descending equator crossings; right ascension of the ascending node between the south and north apexes (each segment lasting T/2)
3.2 Control Rules Based on Fuzzy Logic Rule
The requirements of control efficiency and of the transfer orbit error given by equation (8) may conflict during the constellation initialization process: in some cases the satellite is at a position requiring an orbital maneuver but is not at a control-efficient phase, while in other cases the satellite is at a control-efficient phase but not at a position requiring an orbital maneuver. Obviously, if bivalent logic were used for the constellation initialization, there might be no solution satisfying the control efficiency. Therefore, fuzzy logic is proposed to resolve the conflicting constraints of control accuracy and control efficiency. According to the "2T"-control strategy stated above, the fuzzy-logic control rules for each satellite of the constellation are described as follows.

"If 'the semi-major axis is not proper' and 'the satellite is around the perigee', then perform the satellite maneuver: correct the semi-major axis";
"If 'the eccentricity is not proper' and 'the satellite is around the perigee or the apogee', then perform the satellite maneuver: correct the eccentricity";
"If 'the argument of the perigee is not proper' and 'the satellite is around the midpoint between the perigee and the apogee', then perform the satellite maneuver: correct the argument of perigee";
"If 'the mean anomaly is not proper' and 'the satellite is around the midpoint between the perigee and the apogee', then perform the satellite maneuver: correct the mean anomaly";
"If 'the inclination is not proper' and 'the satellite ascends or descends past the equator', then perform the satellite maneuver: correct the inclination";
"If 'the right ascension of ascending node is not proper' and 'the satellite pass by the south or the north apex', then perform the satellite maneuver, correct the right ascension of ascending node"; "If any above condition clause does not hold, then do not perform the corresponding satellite maneuvers". By the strategies above, if a few conditions cause maneuvers operation meantime, combine all the corresponding strategies to achieve the maneuvers aim: e.g. if both semi-major axis and inclination exceed errors, the satellite's location is at perigee, as well as the perigee is near the equator, then, the maneuver operation will correct both semi-major axis and inclination. If the maneuvers triggered by one or several conditions, all strategies are not available until the maneuvers end. It should be noted that it spends about half orbital period to complete any maneuver, and it spends about “2T” to complete all maneuvers resulted from above rules. 3.3 Control Planner Based on Fuzzy Logic Rule: Determine the Control Moment
The fuzzy-logic-based planner consists of a fuzzifier, fuzzy inference, and a defuzzifier applied to each of the above control rules, based on fuzzy sets.

Fig. 2. Planner based on fuzzy logic: crisp value → fuzzifier → fuzzy inference (with fuzzy rule base and membership functions) → defuzzifier → crisp value
3.3.1 Fuzzification

For the fuzzification of the input variables, including the orbital parameters and the satellite phases, there are generally two methods: single-value fuzzification and non-single-value fuzzification. In the set formed by single-value fuzzification, the degree of membership of the given input value is 1, while the degrees of membership of all other input values are 0. In the set formed by non-single-value fuzzification, the degree of membership of the given input value is 1, while the degrees of membership of other input values decrease to 0, so that the membership function has a triangular shape. The following Case 1 and Case 2 illustrate the fuzzification of the semimajor axis and true anomaly inputs.
Case 1: The input of the semimajor axis is a_input. The fuzzy set formed by single-value fuzzification is

AS′ = {semimajor axis is a_input} = {(a, µ_AS′(a))}, with µ_AS′(a) = 1 if a = a_input and 0 otherwise.

Case 2: The input of the true anomaly is f_input. The fuzzy set formed by single-value fuzzification is

PS′ = {true anomaly is f_input} = {(f, µ_PS′(f))}, with µ_PS′(f) = 1 if f = f_input and 0 otherwise.

For the fuzzification of each rule stated above, the fuzzy set is formed by the implication relation induced from the rule. The membership function is selected as the minimum value among the relevant variables. The following Case 3 illustrates the fuzzification of rules R1, R2, and R3.

Case 3: The three rules are
R1: If a is NAS and f is APS, then jp is ON;
R2: If a is AS, then jp is OFF;
R3: If f is NAPS, then jp is OFF,

where NAS is the fuzzy set defined as 'the semimajor axis is not proper', APS is the fuzzy set defined as 'the satellite is near the perigee', NAPS is the fuzzy set defined as 'the satellite is not near the perigee', ON is the fuzzy set defined as 'the orbital maneuver is on', and OFF is the fuzzy set defined as 'the orbital maneuver is off'. The corresponding fuzzy sets are

R1 = NAS and APS → ON = {(a, f, jp)}, µ_R1 = µ_{NAS and APS} ∧ µ_ON = min{µ_{NAS and APS}, µ_ON};
R2 = AS → OFF = {(a, jp)}, µ_R2 = µ_AS ∧ µ_OFF = min{µ_AS, µ_OFF};
R3 = NAPS → OFF = {(f, jp)}, µ_R3 = µ_NAPS ∧ µ_OFF = min{µ_NAPS, µ_OFF}.

3.3.2 Fuzzy Inference

By evaluating each input variable in the condition clauses, new fuzzy sets are constructed. The object set is the union of the compositions of these new fuzzy sets with the fuzzy sets determined by each rule; its degree of membership applies the "maximum and minimum" method in the union operation. The following Case 4 illustrates this process.

Case 4: Let a_input be the semimajor axis input and f_input the true anomaly input, and consider the following fuzzy inference:
Input: a is AS′, f is APS′
R1: If a is NAS and f is APS, then jp is ON;
R2: If a is AS, then jp is OFF;
R3: If f is NAPS, then jp is OFF;
Output: jp is ON′

We now wish to obtain ON′ from AS′ and APS′. By fuzzy set theory, the output set results from the composition of the input sets with the implication relation sets:

ON′ = (AS′ and APS′) ∘ ∪_{i=1}^{3} R_i = ∪_{i=1}^{3} [(AS′ and APS′) ∘ R_i]   (16)

where the union operation applies the "maximum and minimum" method, and

(AS′ and APS′) ∘ R_i = {jp}   (17)

µ_{(AS′ and APS′) ∘ R_i}(jp) = max min{µ_{AS′ and APS′}(a, f), µ_{R_i}(a, f, jp)}   (18)

µ_{ON′}(jp) = max{µ_{(AS′ and APS′) ∘ R_1}(jp), µ_{(AS′ and APS′) ∘ R_2}(jp), µ_{(AS′ and APS′) ∘ R_3}(jp)}   (19)

By the union operation on fuzzy sets, relations (17)-(19) are equivalent to

µ_{(AS′ and APS′) ∘ R_1}(jp) = α_1 ∧ µ_ON(jp)   (20)
µ_{(AS′ and APS′) ∘ R_2}(jp) = α_2 ∧ µ_OFF(jp)   (21)
µ_{(AS′ and APS′) ∘ R_3}(jp) = α_3 ∧ µ_OFF(jp)   (22)

where

α_1 = [max_a (µ_AS′(a) ∧ µ_NAS(a))] ∧ [max_f (µ_APS′(f) ∧ µ_APS(f))]   (23)
α_2 = max_a (µ_AS′(a) ∧ µ_AS(a))   (24)
α_3 = max_f (µ_APS′(f) ∧ µ_NAPS(f))   (25)

Carrying out the reasoning above leads to α_1 = 1/4 ∧ 1.5/4 = 1/4, α_2 = 3/4, and α_3 = 3/4. The output set ON′ is then obtained as shown in Fig. 3.
Fig. 3. Output set ON′ (membership 3/4 at jp = 0, decreasing to 1/4 at jp = 1)
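To make the composition concrete, here is a numerical sketch of (20)-(25) for singleton inputs; all membership shapes and numeric choices are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

a_input, f_input = 7075.0, 0.3                      # hypothetical crisp inputs
a_nominal = 7077.7                                  # hypothetical nominal semimajor axis

# antecedent memberships evaluated at the singleton inputs (illustrative shapes)
mu_NAS = min(abs(a_input - a_nominal) / 5.0, 1.0)   # "semimajor axis is not proper"
mu_AS = 1.0 - mu_NAS                                # "semimajor axis is proper"
mu_APS = max(1.0 - abs(f_input), 0.0)               # "near perigee" (f ~ 0)
mu_NAPS = 1.0 - mu_APS

alpha1 = min(mu_NAS, mu_APS)                        # (23) collapses for singletons
alpha2 = mu_AS                                      # (24)
alpha3 = mu_NAPS                                    # (25)

jp = np.linspace(0.0, 1.0, 101)                     # output universe: off..on
mu_ON, mu_OFF = jp, 1.0 - jp                        # illustrative output sets
mu_ON_prime = np.maximum.reduce([
    np.minimum(alpha1, mu_ON),                      # (20)
    np.minimum(alpha2, mu_OFF),                     # (21)
    np.minimum(alpha3, mu_OFF),                     # (22)
])
jp_star = jp[np.argmax(mu_ON_prime)]                # maximum-membership defuzzification
```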
3.3.3 Defuzzification

Defuzzification determines an element (a crisp value) and its degree of membership from the object fuzzy set; it is then possible to decide whether the result clauses (the orbital maneuver operations) are to be executed. The simplest defuzzification method is "maximum degree of membership": choose the element of the output set whose membership value is maximum and take it as the crisp value. This is illustrated in the following Case 5.

Case 5: For the output set in Case 4, the crisp value is jp = 0, with degree of membership 3/4. So the maneuver is not performed according to the plan mentioned above.

The fuzzy-logic-based planner above, which is in fact the high-level controller of the autonomous control, determines not only the control moment but also the initial and terminal states of the orbital maneuver control. The control value is then given by the base-level controller.

3.4 Base-Level Controller: Determine the Control Value

The fuzzy planner generates the control phases and the initial and terminal states according to the "2T"-control strategy. Let (r(t0), v(t0)) be the initial state and (r(tf), v(tf)) the terminal state, where r and v represent position and velocity in the Earth-centered inertial frame. According to the Folta-Quinn algorithm [1],[2], the multi-impulse control strategy can be given as:
∆v(t0) = V*(t0) R*^{-1}(t0) δr(t0) − δv(t0), with δr(t0) = r(t0) − r0(t0), δv(t0) = v(t0) − v0(t0), where (r(t0), v(t0)) is the measured value;
∆v(t1) = V*(t1) R*^{-1}(t1) δr(t1) − δv(t1), with δr(t1) = r(t1) − r0(t1), δv(t1) = v(t1) − v0(t1), where (r(t1), v(t1)) is the measured value;
∆v(t2) = V*(t2) R*^{-1}(t2) δr(t2) − δv(t2), with δr(t2) = r(t2) − r0(t2), δv(t2) = v(t2) − v0(t2), where (r(t2), v(t2)) is the measured value;
……
∆v(t_{n−1}) = V*(t_{n−1}) R*^{-1}(t_{n−1}) δr(t_{n−1}) − δv(t_{n−1}), with δr(t_{n−1}) = r(t_{n−1}) − r0(t_{n−1}), δv(t_{n−1}) = v(t_{n−1}) − v0(t_{n−1}), where (r(t_{n−1}), v(t_{n−1})) is the measured value;
∆v(t_f) = v(t_f) − v(t_f^−), where v(t_f^−) is the measured value.

Here (r0(ti), v0(ti)) is the reference orbital state and ∆v(ti) is the velocity increment provided by the control force. The combination of the fuzzy-logic-based planner and the base-level controller forms the complete control law for the orbital maneuvers of constellation initialization.
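A sketch of one impulse of this correction follows; R_star and V_star stand for the position and velocity partitions of the transfer-orbit state transition matrix, which the sketch takes as given rather than computing.

```python
import numpy as np

def impulse(r, v, r0, v0, R_star, V_star):
    # dv(t_k) = V*(t_k) R*^{-1}(t_k) dr(t_k) - dv_err(t_k), as in the scheme above.
    dr = r - r0                                   # position deviation from reference
    dv_err = v - v0                               # velocity deviation from reference
    return V_star @ np.linalg.inv(R_star) @ dr - dv_err
```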
4 Simulation Results

Simulation studies were made for the initialization control of a constellation composed of a leading satellite and two following satellites. The nominal mean orbital elements are given in Table 1, the initial actual mean orbital elements in Table 2, and the satellites' parameters in Table 3. The control objective is to complete the constellation initialization by making each satellite capture its nominal orbit through orbital maneuvers. The fuzzy-logic-based control law proposed in this paper is used. The simulation results are shown in Tables 4-5 and Fig. 4. Fig. 4 shows the time history of the leading satellite's orbital elements during the maneuvers. Table 4 indicates the errors between the actual orbit and the nominal orbit after the maneuvers. Table 5 gives the control velocity increments (related to fuel cost) for each satellite. We compared the fuel cost with that of the theoretical fuel-optimal control; the comparison indicates that the proposed controller is nearly fuel-optimal for the orbital maneuvers.

Table 1. The nominal mean orbital elements of the satellites
          Leading satellite   Following satellite 1   Following satellite 2
a (km)    7077.732            7077.732                7077.782
e         0.0010444           0.0010444               0.0010444
i (deg)   98.2102             98.2102                 98.2102
Ω (deg)   188.297             188.547                 189.297
ω (deg)   90.0                90.0                    90.0
M (deg)   0.0                 -3.645                  -3.645
Table 2. The initial mean orbital elements of the satellites
          Leading satellite   Following satellite 1   Following satellite 2
a (km)    7072.732            7072.732                7072.782
e         0.0010444           0.0010444               0.0010444
i (deg)   98.2602             98.2602                 98.2602
Ω (deg)   188.287             188.537                 189.287
ω (deg)   90.0                90.0                    90.0
M (deg)   0.0                 -3.645                  -3.645

Table 3. The satellites' parameters
                                Leading satellite   Following satellite 1   Following satellite 2
Drag area (m^2)                 19.0                7.7                     7.7
Mass (kg)                       2041                529                     529
Ballistic coefficient (m^2/kg)  0.0093              0.0146                  0.0146
Table 4. The satellites' orbital errors after maneuvers

          Leading satellite          Following satellite 1      Following satellite 2
a (km)    3.997173641808331e+000     -3.825943507254124e+000    -3.733077733777463e+000
e         -2.761827728973033e-007    -2.657781140937184e-007    -2.600482751169506e-007
i (deg)   -1.861868721647546e-008    -4.478373658125342e-008    -5.976489974185141e-008
Ω (deg)   -4.378249796752870e-009    -5.423838064955913e-009    -5.604544459737944e-009
ω (deg)   -4.831685062731913e-003    -4.656147215549164e-003    -4.586139368894873e-003
M (deg)   4.827844629503300e-003     4.652108639755664e-003     4.581993999006565e-003
Table 5. The velocity increments of control used for the orbital maneuvers

                        Leading satellite   Following satellite 1   Following satellite 2
∆v correcting a (m/s)   4.757970010946486   4.755761081871730       4.764105520997420
∆v correcting i (m/s)   6.659891731044932   6.660358212506530       6.660588676447985
∆v correcting Ω (m/s)   1.366379351351783   1.365854079243388       1.365907194139736
Total (m/s)             12.78424109334320   12.78197337362165       12.79060139158514
Fig. 4. History of the mean orbital parameters (a, e, i, Ω, ω, M) of the leading satellite
5 Conclusions

A new orbital controller based on fuzzy logic is proposed. It is composed of controllers at two levels: the high-level controller is a fuzzy-logic-based planner that resolves the conflicting constraints induced by control efficiency and control accuracy, while the base-level controller is the well-known Folta-Quinn algorithm. The analysis and simulation studies indicate that the algorithm is very effective in reducing the fuel cost of constellation formation capture control.
References
1. Joseph, R.Q.: EO-1 Technology Validation Report: Enhanced Flying Formation Algorithm (JPL). NASA/GSFC, August 8 (2001)
2. Folta, D., Hawkins, A.: Results of NASA's First Autonomous Formation Flying Experiment: Earth Observing-1 (EO-1). AIAA 2002-4743 (2002)
3. Battin, R.: An Introduction to the Mathematics and Methods of Astrodynamics. AIAA Education Series, Chapters 9 and 11 (1987)
4. Kiguchi, K., Tanaka, T., Fukuda, T.: Neuro-fuzzy Control of a Robotic Exoskeleton with EMG Signal. IEEE Trans. Fuzzy Systems 12 (2004) 481-490
5. Wang, L., Frayman, Y.: A Dynamically-Generated Fuzzy Neural Network and its Application to Torsional Vibration Control of Tandem Cold Rolling Mill Spindles. Engineering Applications of Artificial Intelligence 15 (2003) 541-550
Design of Interceptor Guidance Law Using Fuzzy Logic
Ya-dong Lu, Ming Yang, and Zi-cai Wang
Control & Simulation Center, Harbin Institute of Technology, 150080 Harbin, P.R. China
[email protected]
Abstract. In order to intercept high-speed maneuverable targets in three-dimensional space, a terminal guidance law based on fuzzy logic systems is presented. After the model of the relative movement between target and interceptor is constructed, a guidance knowledge base, including fuzzy data and rules, is obtained according to the desired trajectory performance. In addition, considering the time-variant and nonlinear factors in the thrust vector control system, the interceptor's mass is identified in real time, and the logic rules are revised correspondingly by learning algorithms to improve the fuzzy performance index. Simulation results show that this method efficiently implements precise guidance of the interceptor as well as good robust stability.
Nomenclature
g: gravity acceleration
m: interceptor mass
Px, Py, Pz: thrust components in the body frame
r: range between interceptor and target
xM, yM, zM: coordinates of the interceptor
xT, yT, zT: coordinates of the target
vM: interceptor speed
vT: target speed
α: angle of attack
β: sideslip angle
θM: flight path angle of the interceptor
ψM: flight path azimuth angle of the interceptor
θT: flight path angle of the target
ψT: flight path azimuth angle of the target
qε: line-of-sight elevation angle
qβ: line-of-sight azimuth angle
1 Introduction

As a precision guided weapon, the role of a space interceptor is to engage an incoming adversarial target and destroy it by collision. The interceptor trajectory consists of three guided stages: boost, to accelerate; midcourse, to steer along a collision trajectory with the target; and terminal, to correct for any remaining position and velocity errors. Among these stages, the guidance law of the terminal stage is the key to a successful intercept. Classical guidance laws, such as proportional navigation guidance (PNG) [1], are derived under the assumption that the target flies without maneuvering. When the interceptor's time-variant nonlinear factors are considered, however, it is difficult for interceptors using these laws, which are based on precise mathematical models, to destroy targets performing uncertain maneuvers. On the other hand, it is well known that fuzzy logic control can make use of knowledge expressed in the form of linguistic rules without completely resorting to precise plant models; in recent years, researchers have also attempted to apply it to missile guidance design [2]. However, unlike common missiles, all of an interceptor's actuators are thrust engines, and the mass of the interceptor varies as energy is expended, so the interceptor's actual overloads are influenced by the time-variant mass. Considering these varying factors, a self-tuning fuzzy logic guidance law is designed. In this scheme, the current mass is estimated on-line while the guidance rules are revised to improve impact accuracy and trajectory performance. Simulation results show that the proposed guidance scheme has potential for use in the development of high-performance interceptors.
2 Interceptor and Target Dynamic Models

The 3-D space can be divided into two planes: the vertical plane and the horizontal plane. With this arrangement, we can obtain the geometric relations between interceptor and target in the Cartesian inertial frame, as shown in Fig. 1, where M and T denote the mass points of the interceptor and the target, respectively.

Fig. 1. Intercept geometry

Supposing that the interceptor flies outside the atmosphere during the terminal stage, we consider that no aerodynamic forces act on the interceptor. The interceptor is modeled as a point mass, and its equations of motion are described by
ẋM = vM cosθM cosψM
ẏM = vM sinθM
żM = −vM cosθM sinψM
v̇M = −g sinθM − (Py sinα cosβ)/m + (Pz sinβ)/m
θ̇M = (−mg cosθM + Py cosα)/(m vM)
ψ̇M = (−Py sinα sinβ − Pz cosβ)/(m vM cosθM)   (1)

Based on the same assumptions, and without considering the target's maneuverability, the target's equations of motion are

ẋT = vT cosθT cosψT
ẏT = vT sinθT
żT = −vT cosθT sinψT
v̇T = −g sinθT
θ̇T = −g cosθT / vT
ψ̇T = 0   (2)
(2)
qε and the
q β and their rates can be derived as ⎧ ⎪qε ⎪ ⎪ ⎪ ⎪q β ⎪ ⎨ ⎪ ⎪q&ε ⎪ ⎪ ⎪q& ⎪ β ⎩
⎡ y ⎤ r ⎥ = arctan ⎢ 2 2 ⎢⎣ xr + zr ⎥⎦ ⎡− z ⎤ = arctan ⎢ r ⎥ ⎣ xr ⎦
(x = =
2 r
)
+ z r2 y& r − yr ( xr x&r + zr z&r )
(x
2 r
+ yr2 + z r2
)
(3)
xr2 + z r2
z r x&r − xr z&r xr2 + zr2
xr = xT − x M , yr = yT − y M , z r = zT − z M , x& r = x&T − x& M , y& r = y& T − y& M , z& r = z&T − z& M .
where the relative range and velocity are defined by
From (3), the line-of-sight angle and rate have non-linear relations with target range. It has been testified that if without exerting orbit control, the interceptor lineof-sight rate will increase in the shape of parabola when the range decreases3.
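The relations in (3) translate directly into code; the sketch below assumes inertial positions and velocities are available as 3-vectors (arctan2 is used instead of arctan to keep quadrants unambiguous).

```python
import numpy as np

def los_angles_and_rates(pM, vM, pT, vT):
    # Direct implementation of (3) from interceptor (M) and target (T) states.
    xr, yr, zr = pT - pM
    xdr, ydr, zdr = vT - vM
    rho2 = xr**2 + zr**2
    q_eps = np.arctan2(yr, np.sqrt(rho2))                 # LOS elevation
    q_beta = np.arctan2(-zr, xr)                          # LOS azimuth
    q_eps_dot = (rho2 * ydr - yr * (xr * xdr + zr * zdr)) / (
        (xr**2 + yr**2 + zr**2) * np.sqrt(rho2))
    q_beta_dot = (zr * xdr - xr * zdr) / rho2
    return q_eps, q_beta, q_eps_dot, q_beta_dot
```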
3 Fuzzy Logic Guidance Law

In the proportional navigation guidance law, the rate of the flight path angle is proportional to the rate of the line-of-sight angle, described as

dθM/dt = K dqε/dt   (4)

where the coefficient K should satisfy the following requirements:
1. the minimum value of K should ensure that q̇ is convergent;
2. the value of K should be limited by the missile's usable lateral overload;
3. the selection of K must ensure that the guidance system runs stably.

When proportional navigation guidance is adopted, the shape of the trajectory changes with the coefficient K: the larger K is, the flatter the trajectory and the smaller the required lateral overload. So, after all the requirements above are satisfied, the value of K should be increased to obtain good maneuverability [4][5].
Fig. 2. The configuration of the self-tuning fuzzy logic guidance system (inputs r, ṙ, q̇)
The fuzzy inference system is next set up to implement the above heuristic reasoning. Figure 2 shows the self-tuning fuzzy logic guidance system. In practice, target position and velocity information in the Cartesian inertial frame is obtained from a ground-based radar or a satellite, while interceptor position and velocity information is obtained from an inertial reference unit. In this guidance scheme, the acquired data are first converted into linguistic variables through the fuzzification interface. These linguistic variables form the inputs to a fuzzy inference engine that uses a knowledge base to generate a set of linguistic outputs. The knowledge base includes a collection of rules that associate each combination of the input linguistic variables with a set of desirable actions expressed in the form of output linguistic variables. After conversion into crisp actuator commands through the defuzzification interface, the commands are applied, and the onboard inertial reference unit measures the actual acceleration. From this information, the interceptor's current mass is estimated by the parameter-identification unit, the guidance effects are evaluated in the evaluation unit, and the rules in the knowledge base are revised by the on-line tuning unit. This process continues until the interceptor destroys the target.
The fuzzy inference system is set up with four inputs and two outputs. The inputs are the interceptor-target relative range, the relative velocity, and the line-of-sight elevation and azimuth angle rates. The outputs are the desired thrust forces in the horizontal and vertical planes, respectively. Due to the non-negative character of the relative range and velocity, their linguistic variables are assumed to take three linguistic sets, defined as B (Big), M (Middle), and S (Small). Meanwhile, for computational convenience, five linguistic sets, defined as PB (Positive Big), PS (Positive Small), ZE (Zero), NS (Negative Small), and NB (Negative Big), are chosen to express the angle rates. For the outputs, seven linguistic sets are defined: PB, PM (Positive Middle), PS, ZE, NS, NM (Negative Middle), NB. For these variables, the triangular membership functions shown in Fig. 3 are adopted, where the physical domains are set to cover the operating ranges of all variables.
Fig. 3. Membership functions used for inputs and outputs
The format of the fuzzy rules is given by

if r = Ai and ṙ = Bi and q̇ = Ci, then Fn = Di

where Ai, Bi, Ci, and Di represent input and output fuzzy sets. The fuzzy rule table is shown in Table 1; there are forty-five rules in all.

Table 1. Rule table for the fuzzy logic guidance law

F                 q̇ = NB   NS   ZE   PS   PB
r = B, ṙ = B      NM       NS   ZE   PS   PM
r = B, ṙ = M      NS       NS   ZE   PS   PS
r = B, ṙ = S      NS       NS   ZE   PS   PS
r = M, ṙ = B      NB       NM   ZE   PM   PB
r = M, ṙ = M      NM       NS   ZE   PS   PM
r = M, ṙ = S      NS       NS   ZE   PS   PS
r = S, ṙ = B      NB       NM   ZE   PM   PB
r = S, ṙ = M      NB       NM   ZE   PM   PB
r = S, ṙ = S      NM       NS   ZE   PS   PM
The max-min inference is used to generate the best possible conclusion; since this type of inference is computationally easy and effective, it is appropriate for real-time control applications. The crisp guidance command is calculated using center-of-gravity defuzzification, defined as

F = Σi Wi Fi / Σi Wi   (5)
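A compact sketch of max-min firing with the center-of-gravity defuzzification of (5) follows; the rule representation (a list of antecedent membership functions paired with a consequent value) is an assumed encoding, not the paper's data structure.

```python
def crisp_command(rules, inputs):
    # rules: list of (antecedent_membership_fns, F_i); inputs: (r, r_dot, q_dot).
    num = den = 0.0
    for antecedents, F_i in rules:
        W_i = min(mu(x) for mu, x in zip(antecedents, inputs))  # min firing strength
        num += W_i * F_i                                        # weighted consequent
        den += W_i
    return num / den if den > 0.0 else 0.0                      # (5), guarded
```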
Since the interceptor uses thrust engines to adjust its orbit and attitude, the mass of the interceptor decreases as energy is expended. To improve the trajectory performance, the interceptor mass should be identified on-line during the whole terminal stage [6]. In every guidance period, the current mass can be estimated using the recursive least squares (RLS) method, and the guidance rules are then revised by the learning algorithm. In this procedure, a fuzzy performance index is introduced to evaluate the guidance performance; it is defined as

FP ≜ µ(ea)   (6)

where ea is the error between the desired overload and the actual overload, and µ(·) denotes the degree of goodness, expressed with triangular membership functions as in Fig. 3.

Table 2. Revised rules used for the guidance rule base
q&
∆F
q&&
N ZE P
N PB PM PS
ZE PS ZE NS
P NS NM NB
Revised guidance rules are listed in Table 2. According to Tables 1 and 2, the new value F̂i can be expressed as

F̂i = Fi − (1 − FP) ∆F · wi^(k−m)   (7)

where wi^(k−m) is the fitness degree of the i-th rule in the (k−m)-th guidance period and F̂i is the revised value of Fi. The guidance output increment ∆F*k in the k-th guidance period is then

∆F*k = Σ_{i=1}^{45} wi Fi / Σ_{i=1}^{45} wi   (8)

At the beginning of every period, the fuzzy performance index FP is computed. If FP satisfies the ending condition FP > θ, the learning procedure terminates; otherwise the above computation continues.
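The update of (7) with the stopping test can be sketched as below; the threshold value and the list encodings are illustrative assumptions.

```python
def revise_rules(F, w_past, delta_F, FP, theta=0.95):
    # F: consequent values F_i; w_past: fitness degrees w_i^(k-m) from m periods
    # ago; delta_F: correction from Table 2; FP: fuzzy performance index in [0, 1].
    if FP > theta:                 # ending condition: performance is good enough
        return F
    return [F_i - (1.0 - FP) * delta_F * w_i for F_i, w_i in zip(F, w_past)]
```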
4 Simulation Results

The initial conditions of the interceptor are: m0 = 35 kg, r0 = 100 km, vM0 = 3400 m/s, vT0 = 7400 m/s. In order to test the interceptor's performance under different circumstances, the target is assumed to fly in two modes: motion without maneuvering, and maneuvering motion with a sine-wave acceleration of amplitude 20 m/s^2. In the fuzzy logic guidance law, the fuzzy set ranges of the three inputs (r, ṙ, q̇) are set to [0, 120000], [9000, 11000], and [-2.0, 2.0], respectively; the output ranges are set to [-120, 120].

Table 3. Simulation results in two target flight modes

       Non-maneuvering target mode                    Maneuvering target mode
       Miss distance (m)   Energy consumption (kg)    Miss distance (m)   Energy consumption (kg)
PNG    0.021               0.481                      16.647              1.569
FLG    0.018               0.434                      0.392               0.976
Fig. 4. (a) Interceptor and target trajectories (target, interceptor with PNG, interceptor with FLG); (b) line-of-sight angle rates dqε/dt under PNG and FLG
Table 3 lists the detailed simulation data. When the target flies in non-maneuvering mode, both guidance laws produce good results. But when the target is maneuverable, the fuzzy logic guidance law clearly yields a smaller miss distance than proportional navigation guidance, and the interceptor consumes less engine energy. Fig. 4 shows the interceptor trajectories and the line-of-sight angle rate curves under the two guidance laws. As the interceptor approaches the target, the line-of-sight angle rate under proportional navigation guidance diverges; in the end, the desired overload increases and causes the larger miss distance. Under the fuzzy logic guidance law, the line-of-sight angle rate remains convergent (Fig. 4). The interceptor's maneuverability is applied sufficiently in the early stage of flight, so the trajectory becomes smooth in the end stage and the interceptor retains much more maneuverability to destroy the target.
5 Conclusions

A self-tuning fuzzy logic guidance law for the interceptor's terminal stage has been studied. In this guidance design scheme, fuzzy inference algorithms synthesize the qualitative aspects of guidance knowledge and reasoning processes, and the designed guidance rules are revised on-line to suit changes in onboard status. An interceptor using this fuzzy logic guidance law has much more maneuverability and smaller desired lateral overloads. Simulation results show that the proposed guidance law provides intelligence and robust stability as well as excellent miss distance performance.
References 1. Wu Wenhai, Qu Jianling, Wang Cunren: An Overview of the Proportional Navigation. Flight Dynamics, Vol. 22. China Test Pilot Institute, Xi’an, PRC (2004) 1-5. 2. Li Shaoyuan, Xi Yugeng, Chen Zengqiang, et al: The New Progresses in Intelligent Control. Control and Decision, Vol. 15. Northeast University, Shenyang, PRC (2000) 1-5. 3. Shih-Ming Yang: Analysis of Optimal Midcourse Guidance Law. IEEE Transactions on Aerospace and Electronic Systems, Vol. 32. IEEE Aerospace and Electronic Systems Society, Sudbury, MA,USA (1996) 419-425. 4. Eun-Jung Song, Min-Jea Tahk: Three-Dimensional Midcourse Guidance Using Neural Networks for Interception of Ballistic Targets. IEEE Transactions on Aerospace and Electronic Systems, Vol. 38. IEEE Aerospace and Electronic Systems Society, Sudbury, MA,USA (2002) 404-414. 5. Chih-Min Lin, Yi-Jen Mon: Fuzzy-Logic-Based Guidance Law Design for Missile Systems. Proceeding of the 1999 IEEE International Conference on Control Applications. IEEE Control Systems Society, Kohala Coast, HI, USA (1999) 421-426 6. Chun-Liang Lin, Hao-Zhen Hung, Yung-Yue Chen, et al: Development of an Integrated Fuzzy-Logic-Based Missile Guidance Law Against High Speed Target. IEEE Transactions on Fuzzy Systems, Vol. 12. IEEE Computational Intelligence Society, Waco, TX, USA (2004): 157-169.
Relaxed LMIs Observer-Based Controller Design via Improved T-S Fuzzy Model Structure Wei Xie1, Huaiyu Wu2, and Xin Zhao2 1 Satellite
Venture Business Laboratory, Kitami Institute of Technology, 165 Koencho, Kitami, Hokkaido, 090-8507, Japan
[email protected] 2 College of Information Science and Technology, Wuhan University of Science and Technology, Wuhan 430081, Hubei Province, P. R. China
[email protected] Abstract. Relaxed linear matrix inequalities (LMIs) conditions for fuzzy observer-based controller design are proposed based on a kind of improved T-S fuzzy model structure. The improved structure included the original T-S fuzzy model and enough large bandwidth pre- and post-filters. By this structure fuzzy observer-based controller design can be transformed into LMIs optimization problem. Compared with earlier results, it includes the less number of LMIs that equals the number of fuzzy rules plus one positive definition constraint of Lyapunov function. Therefore, it provides us with less conservative results for fuzzy observer-based controller design. Finally, a numerical example is demonstrated to show the efficiency of proposed method.
1 Introduction As is well known, Takagi-Sugeno (T-S) fuzzy system can be formalized from a large class of nonlinear system. The approach using the T-S fuzzy model [11], considered like a universal approximated fuzzy controller [2], has been investigated extensively. The T-S fuzzy models are described by a set of fuzzy “IF-THEN" rules with fuzzy sets in the antecedents and dynamics LTI systems in the consequent. These submodels are considered as local linear models, the aggregation of which representing the nonlinear system behavior. Despite the fact that the global T-S model is nonlinear due to the dependence of the membership functions on the fuzzy variables, it has a very special formulation, known as Polytopic Linear Differential Inclusions (PLDI) [1], in which the coefficients are normalized membership functions. A great deal of attention has been focused on the stability analysis and synthesis of these systems. Several researchers have addressed the issue of stability and a substantial amount of progress has been made in stability analysis and design of T-S fuzzy control systems [6, 7, 10, 12-16, 18]. For example, Tanaka and Sugeno presented sufficient conditions for the stability of T-S models [8] using a quadratic Lyapunov approach. The stability depends on the existence of a common positive definite matrix guarantying the stability of all local subsystems. Most of the above mentioned references utilize the interior-point convex optimization methods by solving linear matrix inequalities [1,5]. However, the present results are only L. Wang and Y. Jin (Eds.): FSKD 2005, LNAI 3613, pp. 930 – 941, 2005. © Springer-Verlag Berlin Heidelberg 2005
Relaxed LMIs Observer-Based Controller Design via Improved T-S Fuzzy Model
931
sufficient and still include some conservatism, although T-S fuzzy control system is globally asymptotically stable, we maybe fail to find such a common positive definite matrix. Thus it is important to decrease the conservatism of LMI based stability analysis and synthesis conditions of these systems. Some works have been developed in order to establish new stability conditions by relaxing some of the previous constraints. So one way for obtaining relaxed stability conditions consists to use a piecewise quadratic Lyapunov function formulated as a set of LMIs [7]. Using the PI fuzzy controller and the Lyapunov technique, Chai et al. [4] show that asymptotic stability of the Takagi-Sugeno fuzzy systems can be ensured under certain restriction on the control signal and the rate of change of the output. Jadbabaie [8] introduces TS fuzzy rules that describe state dependent Lyapunov function where each T-S rule has fuzzy sets in the antecedents and quadratic Lyapunov function in the consequent. Recently, a relaxed stability condition for Takagi-Sugeno fuzzy systems is derived by Chadli et al. [3] via non-quadratic Lyapunov function technique and LMIs. A nonlinear control law is investigated by Thierry et al. [17] for Takagi-Sugeno fuzzy models. Some new stability conditions are proposed to increase more design freedom by introducing some non symmetrical matrices in [9].
Fig. 1. Relevant fuzzy control structure
Obviously, almost all of above methods are considered to decrease the conservatism of quadratic stability based results by increasing some design freedom. In this paper, some relaxed linear matrix inequalities (LMI) conditions for fuzzy observer-based controllers design are proposed according to a kind of improved T-S fuzzy model structure. In order to increase the conservatism of earlier LMI based stability analysis and synthesis conditions, it is very significant to reduce the number of constrained LMI conditions greatly. From the earlier results, state feedback and observer gain matrices are always dependent on the membership functions on the fuzzy variables, it results in increasing the numbers of LMIs much. Here, we consider constructing a kind of improved T-S fuzzy model structure which is composed of the original T-S fuzzy model and enough large bandwidth stable pre- and post-filters. It will transfer state feedback and observer gain matrices of original plant into state matrix of a new augmented system, and make state feedback and observer gain matrices of new augmented system independent of membership functions on the fuzzy variables. By this trick, it results in less numbers of constrained LMIs conditions, which almost equals the number of fuzzy rules, and less conservative results will be obtained.
932
W. Xie, H. Wu, and X. Zhao
2 Preliminary In this section, firstly the notation regarding T-S fuzzy system is introduced. Useful conception and lemma are recapped. Definition 1: Takagi-Sugeno Fuzzy System A dynamic T-S fuzzy model G is described by a set of fuzzy “IF-THEN” rules with fuzzy sets in the antecedents and dynamic LTI systems in the consequent. A general T-S plant rule can be written as follows (for i th plant rule) IF x1 (t ) is M 1i and L x p (t ) is M ip , then x& (t ) = Ai x(t ) + Bi u (t ) and y (t ) = C i x(t )
where x T (t ) = [ x1 (t ), x 2 (t ),L , x p (t )], u T (t ) = [u1 (t ), u 2 (t ), L , u q (t )], xi (t ) is the state vector, M ip is the fuzzy set, y (t ) is the output vector, p is the number of the state vector, and q is the number of the input vector. Using singleton fuzzifier, max-product inference and center average defuzzifier, we can write the aggregated fuzzy model as p
x& (t ) =
∑ wi ( x(t ))( Ai x(t ) + Bi u(t )) i =1
p
∑ wi ( x(t ))
p
and y (t ) =
∑ wi ( x(t ))C i x(t ) i =1
i =1
p
∑ wi ( x(t ))
(1)
i =1
Where p
wi ( x(t )) = ∏ g ij ( x j (t )) j =1
(2)
where g ij is the membership grade of the j th state variable x j (t ) to the i th fuzzy set M ij . Defining
µ i ( x(t )) =
wi ( x(t )) p
∑ wi ( x(t ))
(3)
i =1
where µ i ( x(t )) is the normalized membership function in relation with the i th subsystem described by p
∑ µ i ( x(t )) = 1, i =1
(0 ≤ µ i ( x(t )) ≤ 1, 1 ≤ i ≤ p)
(4)
Therefore, the equation (1) can be represented by p ⎧ ⎪ x& (t ) = ∑ µ i ( x(t ))( Ai x(t ) + Bi u (t )) ⎪ i =1 ⎨ p ⎪ y (t ) = µ ( x(t ))C x(t ) ∑ i i ⎪⎩ i =1
(5)
Relaxed LMIs Observer-Based Controller Design via Improved T-S Fuzzy Model
933
Considering the stability of T-S fuzzy model G , here sufficient conditions are given based on Lyapunov stability theory as follows: Lemma 1[14]: The continuous time T-S fuzzy system (1) is globally asymptotically stable if there exists a common positive definite matrix P which satisfies the following inequalities:
AiT P + PAi < 0,
(1 ≤ i ≤ p)
(6)
As to T-S fuzzy model (1), it was found in [14] that a stabilizing observed-based controller can be formulated as p
xˆ& = ∑ µ i ( x(t ))( Ai xˆ (t ) + Bi u (t ) + L j (C i xˆ (t ) − y (t ))) i =1 p
u = ∑ µ j ( x(t ))F j xˆ (t )
(7)
j =1
where F j and L j are state feedback and observer gain matrices for this plant, respectively. Moreover, F j = V j Pf−1 and L j = Pl −1W j should satisfy the following LMIs, respectively, as ⎧ Pf > 0 ⎪⎪ T T T ⎨ Pf Ai + Ai Pf − BiVi − Vi Bi < 0, (1 ≤ i ≤ p ) ⎪ T T ⎩⎪ Pf ( Ai + A j ) + ( Ai + A j ) Pf − ( BiV j + B jVi ) − ( BiV j + V j Bi ) < 0, ( j < i ≤ p)
(8)
and ⎧ Pl > 0 ⎪⎪ T T T ⎨ Ai Pl + Pl Ai − Wi C i − C i Wi < 0, (1 ≤ i ≤ p ) ⎪ T T ⎪⎩( Ai + A j ) Pl + Pl ( Ai + A j ) − (Wi C i + W j C i ) − (Wi C j + W j C i ) < 0, ( j < i ≤ p)
(9)
、
where matrices V j W j are LMI variables. Since state feedback and observer gain matrices are dependent on the membership functions on the fuzzy variables, the numbers of LMIs in both (8) and (9) with r (r + 1) + 2 linear matrix inequalities will increase. Therefore, it is important to decouple state feedback and observer gain matrices with the membership functions on the fuzzy variables or make these matrices independent of membership functions. In the next section, we present an improved T-S fuzzy model structure. Based on this formulation, state feedback and observer gain matrices will be independent of membership functions on the fuzzy variables. By this way the earlier quadratic stability based results become less conservative.
3 Improved T-S Fuzzy Model Structure A newly-proposed T-S fuzzy model structure is composed of original fuzzy T-S model and enough large bandwidth LTI pre- and post-filters, i.e., Gu and G y , as shown
934
W. Xie, H. Wu, and X. Zhao
in Fig.2. These filters are primarily used to make pre-filtering of the control inputs and post-filtering of the measured outputs, respectively.
Fig. 2. Augmented fuzzy T-S plant and controller
The state matrix expressions are given by ⎧ x& u = Au xu + Bu u and G y Gu : ⎨ ⎩u 0 = Cu xu
⎧⎪ x& y = Ay x y + Bu y 0 :⎨ ⎪⎩ y = C y x y
(10)
where Au , Ay are stable coefficient matrices. Thus, the augmented fuzzy T-S ~ model G is described by ⎧⎡ x& ⎤ ⎛ Ai r ⎜ ⎪⎢ ⎥ & ⎪⎢ xu ⎥ = ∑ µ i ( x(t ))⎜ 0 ⎜ ⎪⎢ x& ⎥ i =1 ⎝ B y Ci ⎪⎣ y ⎦ ⎨ ⎡x⎤ ⎪ ⎪y = 0 0 C ⎢x ⎥ y ⎢ u⎥ ⎪ ⎢x y ⎥ ⎪ ⎣ ⎦ ⎩
[
Bi C u Au 0
0 ⎞⎡ x ⎤ ⎡ 0 ⎤ ⎟⎢ ⎥ 0 ⎟ ⎢ xu ⎥ + ⎢⎢ Bu ⎥⎥u ⎟ Ay ⎠ ⎢⎣ x y ⎥⎦ ⎢⎣ 0 ⎥⎦
(11)
]
Defining following notations: ⎛ Ai ~ ⎜ Ai = ⎜ 0 ⎜ ⎝ B y Ci
Bi C u Au 0
0 ⎞ ⎡0⎤ ⎟ ~ ⎢ ⎥ ~ 0 ⎟ , B = ⎢ Bu ⎥ and C = 0 0 C y ⎟ ⎢⎣ 0 ⎥⎦ Ay ⎠
[
]
(12)
Therefore, the state feedback and observer gain matrices of original plant can be ~ shifted into the new state matrix Ai . It can be seen that the new state feedback and observer gain matrices will be independent of membership functions on the fuzzy variables. It should be noted that the filter bandwidth must be chosen larger than the desired system bandwidth. Furthermore, whenever the plant model includes actuator and sensor dynamics, the control and measurement matrices are free from membership functions on the fuzzy variables. Hence the proposed filtering operations
Relaxed LMIs Observer-Based Controller Design via Improved T-S Fuzzy Model
935
are not restrictive in practice. Consequently, to design an observer-based controller for (1) is transferred to an equivalent problem for this augmented fuzzy T-S model (11). As to T-S fuzzy model (11), a fuzzy observer-based controller can be defined as a set of T-S IF-THEN rules, which stabilize this system. A general observer-based controller rule can be written as i th observer-based controller rule, i.e., x1 (t ) is M 1i and L x p (t ) is M ip , then IF ~ ~ x&ˆ (t ) = Ai xˆ (t ) + B u (t ) + Li ( yˆ (t ) − y (t )) and u (t ) = Fi x(t ) Therefore, we have the following lemma for stabilizing observed-based controller design: Lemma 2: A stabilizing observed-based controller for fuzzy T-S model (11) can be formulated as
⎧& p ~ ~ ~ ⎪ xˆ = ∑ µ i ( x(t ))( Ai xˆ (t ) + B u (t ) + Li (Cxˆ (t ) − y (t ))) ⎪ i =1 ⎨ p ⎪u = µ ( x(t ))F xˆ (t ) ∑ j i ⎪⎩ j =1
(13)
where matrices Fi and Li are state feedback and observer gain matrices for each local plant, respectively. Moreover, Fi = Vi Pf−1 and Li = Pl −1Wi should satisfy the following LMIs, respectively, i.e., ~ ~ ~ ~ Pf > 0, Pf AiT + Ai Pf + B Vi + ViT B T < 0, (1 ≤ i ≤ p) (14) Pl > 0,
~ ~ ~ ~ AiT Pl + Pl Ai + Wi C + C T WiT < 0,
(1 ≤ i ≤ p)
(15)
Proof: If the controller (13) is substituted into plant (21), the closed-loop state matrix can be expressed as
~ ~ p ⎡A + B Fi Acl = ∑ µ i ( x(t ))⎢ i 0 i =1 ⎣⎢
~ B Fi
⎤ ~ ~⎥ Ai + Li C ⎦⎥
(16)
According to lemma 1, the closed system is also said to be globally asymptotically ~ ~ ~ ~ stable since (14) and (15) make Ai + B F and Ai + Li C globally asymptotically stable, respectively. Thus, equation (13) can be rewritten as the following equivalent formulation p p ⎧ ~ ~ ~ & = + = x ( x ( t )) ( A x B y ) µ µ i ( x(t ))(( Ai + B Fi + Li C ) x k − Li y ) ∑ ∑ i ki k ki ⎪ k ⎪ i =1 i =1 ⎨ (17) p p ⎪u = µ ( x(t ))C x = µ ( x(t ))F x ∑ i ∑ j ki k i k ⎪⎩ i =1 j =1 The original plant and augmented fuzzy T-S controller is shown in Fig.3.
936
W. Xie, H. Wu, and X. Zhao
Fig. 3. The original plant and augmented fuzzy T-S controller
It can be seen from Fig.3 that the augmented fuzzy T-S controller is composed of original controller and two LTI stable filters. The state matrix expression of this augmented fuzzy T-S controller is described as: ⎧⎡ x& k ⎤ ⎛ Aki p ⎜ ⎪⎢ ⎥ ⎪⎢ x& y ⎥ = ∑ µ i ( x(t ))⎜ 0 ⎜B C ⎪⎪⎢⎣ x& u ⎥⎦ i =1 ⎝ u ki ⎨ ⎡ xk ⎤ ⎪ ⎪u = [0 0 C ] ⎢ x ⎥ u ⎢ y⎥ ⎪ 0 ⎢⎣ xu ⎥⎦ ⎩⎪
Bki C y Ay 0
0 ⎞⎡ xk ⎤ ⎡ 0 ⎤ ⎟ 0 ⎟ ⎢⎢ x y ⎥⎥ + ⎢⎢ B y ⎥⎥ y 0 Au ⎟⎠ ⎢⎣ xu ⎥⎦ ⎢⎣ 0 ⎥⎦
(18)
In a similar manner to the T-S fuzzy model (11), a fuzzy observer-based controller related to model (18) can be defined as a set of T-S IF-THEN rules. A general observer-based controller rule can be written as i th observer-based controller rule, i.e., ~ ~ ~ IF x1 (t ) is M 1i and L x p (t ) is M ip , then ~ x& k (t ) = Aki ~ x k (t ) + Bk y (t ) and u (t ) = C k ~ x k (t ) Therefore, we have the following theorem of stabilizing observed-based controller design for model (1) as Theorem: A stabilizing observed-based controller for fuzzy T-S model (1) can be formulated as p ⎧ ~& ~ ~ x ( t ) µ i ( x (t ))( Aki ~x k (t ) + Bk y ( y )) = ∑ ⎪ k ⎪ i =1 ⎨ p ⎪u (t ) = µ ( x(t ))C~ ~ ∑ i k x k (t ) ⎪⎩ i =1
(19)
Where ⎛ Aki ⎜ ~ Aki = ⎜ 0 ⎜B C ⎝ u ki
Bki C y Ay 0
0⎞ ⎡0⎤ ⎟ ~ ⎢ ⎥ ~ 0 ⎟ , Bk = ⎢ B y ⎥ , C k = [0 0 Cu ] ⎢⎣ 0 ⎥⎦ Au ⎟⎠
(20)
Relaxed LMIs Observer-Based Controller Design via Improved T-S Fuzzy Model
~ ~ ~ where Aki = Ai + B Fi + Li C ,
Bki = − Li , C ki = Fi ,
both
937
Fi = Vi Pf−1 and
Li = Pl −1Wi should satisfy the LMIs (14) and (15). Based on this improved fuzzy T-S model structure, the design of the stabilized fuzzy observer-based controllers can be transformed to solve LMIs optimization problems (14) and (15) only with the number of control rules plus one positive definition constraint of Lyapunov function. Even though the order of controller increases much, it will provide us with less conservative results than that in the situation of inequalities (8) and (9).
4 Numerical Example Some numerical examples based on the translational oscillation with a rotational actuator (TORA) system by Bupp et al (1995)[19] are provided to verify the effectiveness of the proposed stabilizing fuzzy observer-based controllers design scheme. The TORA system considers a translational oscillator with an attached eccentric rotational proof mass actuator, where the nonlinear coupling between the rotational motion of the actuator and the translational motion of the oscillator provides the control mechanism. The system dynamics can be expressed as the following equation: x& (t ) = f ( x) + g ( x)u (t ) where u (t ) is the torque applied to the eccentric mass, and x2 ⎡ ⎤ 0 ⎤ ⎡ ⎢ ⎥ ⎢ − ε cos x3 ⎥ − x3 + εx 42 sin x3 ⎢ ⎥ ⎥ ⎢ 2 2 1 − ε 2 cos 2 x3 ⎥ and g ( x) = ⎢1 − ε cos x3 ⎥ f ( x) = ⎢⎢ ⎥ x4 x4 ⎥ ⎢ ⎢ ⎥ 2 1 ⎥ ⎢ − ε cos x ( x ε x sin x ) 3 1 4 3 ⎥ ⎢ ⎥ ⎢ 2 2 ⎢⎣ ⎥⎦ 1 − ε 2 cos 2 x3 ⎣1 − ε cos x3 ⎦
(21)
which x1 is the normalized displacement of the platform from the equilibrium position, x 2 = x&1 , x3 = θ is the angle of the rotor, and x 4 = x& 3 . The equilibrium point of this system could be any point [0,0, x3 ,0] among which only the point [0,0,0,0] is the desired equilibrium point. The linearization around the point [0,0,0,0] has two eigenvalues nonlinear system.
+i and −i , which means that this system is a critical
The resulting T-S model consists of four fuzzy rules as following: Rule 1: IF x3 (t ) is near 0, then
938
W. Xie, H. Wu, and X. Zhao
⎡ 0 1 ⎢ ⎢− 1 − ε 2 A1 = ⎢ 0 ⎢ ε ⎢ ⎢⎣ 1 − ε 2 Rule 2: IF x3 (t ) is near
π 2
⎡0 ⎢ −1 A2 = ⎢ ⎢0 ⎢ ⎣⎢ 0 Rule 3: IF x3 (t ) is near
T 1 0 0⎤ ⎡ 0 ⎤ ⎡1⎤ ε ⎥ ⎢ ⎥ ⎢0 ⎥ 0 ε 0⎥ ⎢− 1 − ε 2 ⎥ ⎢ ⎥ = , B , C = 0 0 1⎥ 1 ⎢ 0 ⎥ y ⎢1⎥ ⎢ 1 ⎥ ⎥ ⎢ ⎥ ⎢ ⎥ 0 0 0⎥ ⎣0 ⎦ ⎢⎣ 1 − ε 2 ⎥⎦ ⎥⎦
and x 4 (t ) is small, then
1
0
0 0.01 0 0
π 2
0 0
2ε
π
T 0⎤ ⎡0⎤ ⎡1 ⎤ ⎥ ⎢0⎥ ⎢0⎥ 0⎥ , B2 = ⎢ ⎥ , C y = ⎢ ⎥ ⎢0⎥ ⎢1 ⎥ 1⎥ ⎥ ⎢ ⎥ ⎢ ⎥ 1 0⎦⎥ ⎣ ⎦ ⎣0⎦
and x 4 (t ) is big, 1-3then
⎡0 ⎢ −1 A3 = ⎢ ⎢0 ⎢ ⎢⎣ 0
1 0 0 0
0 2ε
π
0 0
T 0⎤ ⎡0⎤ ⎡1 ⎤ ⎥ ⎢0⎥ ⎢0⎥ 0⎥ , B3 = ⎢ ⎥ , C y = ⎢ ⎥ ⎢0⎥ ⎢1 ⎥ 1⎥ ⎥ ⎢ ⎥ ⎢ ⎥ 0⎥⎦ ⎣1 ⎦ ⎣0⎦
Rule 4: IF x3 (t ) is near π , then ⎡ 0 1 ⎢ ⎢− 1 − ε 2 A4 = ⎢ 0 ⎢ ε ⎢ ⎣⎢ 1 − ε 2
T 1 0 0⎤ ⎡ 0 ⎤ ⎡1⎤ ε ⎥ ⎥ ⎢ ⎢0 ⎥ 0 0 0⎥ 2 ⎥ ⎢− , B4 = ⎢ 1 − ε ⎥ , C y = ⎢ ⎥ ⎥ 0 0 0 1 ⎢1⎥ ⎥ ⎢ 1 ⎥ ⎢ ⎥ ⎥ ⎢ 0 0 0⎥ ⎣0 ⎦ ⎢⎣ 1 − ε 2 ⎦⎥ ⎦⎥
(22)
Since only state feedback matrix includes some parameters dependent of membership functions on the fuzzy variables, a low-pass filter Gu can be added to the input, i.e., ⎧ x&u = −100 xu + u Gu : ⎨ ⎩u 0 = 100 xu ~ Then system matrices of the augmented fuzzy T-S model G are described by ~ ⎡A Ai = ⎢ i ⎣0
[
Bi Cu ⎤ ~ ⎡ 0 ⎤ ~ , B = ⎢ ⎥ and C = C y Au ⎥⎦ B ⎣ u⎦
0
]
(23)
Using Matlab LMI toolbox, the design of the proposed stabilizing observed-based controller was finished. The observer gain matrices are given by
Relaxed LMIs Observer-Based Controller Design via Improved T-S Fuzzy Model
939
⎡ 71.74 ⎤ ⎡ 54.49 ⎤ ⎡ 49.40 ⎤ ⎡ 47.62 ⎤ ⎢ - 94.31 ⎥ ⎢ - 71.21⎥ ⎢ - 67.08⎥ ⎢ - 65.15⎥ ⎥ ⎥ ⎥ ⎥ ⎢ ⎢ ⎢ ⎢ L1 = ⎢ - 66.64⎥ , L2 = ⎢ - 101.32⎥ , L3 = ⎢ - 76.53⎥ , L4 = ⎢ - 68.95⎥ ⎥ ⎥ ⎥ ⎥ ⎢ ⎢ ⎢ ⎢ ⎢ - 149.34⎥ ⎢ - 96.15⎥ ⎢ - 32.20⎥ ⎢ - 31.40⎥ ⎢ 128.53 ⎥ ⎢ 80.50⎥ ⎢ 18.01 ⎥ ⎢ 17.71 ⎥ ⎦ ⎦ ⎦ ⎦ ⎣ ⎣ ⎣ ⎣ with the Lyapunov function matrix ⎡ 13.57 ⎢ - 3.32 ⎢ Pl = ⎢ 12.05 ⎢ ⎢ 5.58 ⎢ 5.88 ⎣
- 3.32 12.05 5.58 5.88 ⎤ 3.24 - 6.17 3.82 3.71 ⎥⎥ - 6.17 16.94 - 10.14 - 9.75 ⎥ ⎥ 3.82 - 10.14 51.68 51.64⎥ 3.71 - 9.75 51.64 52.04⎥⎦
And the state feedback matrices are given by F1 = [ 261.34 - 105.63 - 94.45 - 192.13 F2 = [280.45 - 158.11 - 118.68 - 206.02 F3 = [ 280.44 - 158.10 - 118.67 - 206.02 F4 = [ 261.34 - 105.64 - 94.46 - 192.13
97.27] 97.14] 97.135] 97.27]
with the Lyapunov function matrix - 0.0043 ⎡ 6.66 ⎢ - 0.0043 6.66 ⎢ Pf = ⎢ 0.0914 - 9.0225 ⎢ ⎢ 9.0197 0.0783 ⎢ - 0.0045 0.0902 ⎣
0.091 - 9.023 23.26 - 6.347 - 0.213
9.0197 0.0783 - 6.347 22.23 - 0.103
- 0.0045 ⎤ 0.0902 ⎥⎥ - 0.2128 ⎥ ⎥ - 0.1030 ⎥ 13.0841⎥⎦
According to Fig.3, a stabilizing T-S fuzzy controller just as described in (19) and (20) can be constructed. From the results of above numerical computation, it is obvious that the proposed structure for T-S fuzzy model provides us less conservative results than earlier work based on LMI conditions (8) and (9).
5 Conclusions This paper proposed relaxed linear matrix inequality (LMI) conditions for fuzzy observer-based controllers design based on a kind of improved T-S fuzzy model structure. The improved structure is composed of the original T-S fuzzy model and large bandwidth pre- and post-filters. Since fuzzy observer-based controllers design can be transformed into solving LMIs optimization problem, it only deals with the less number of LMIs which almost equals the number of fuzzy rules compared with
940
W. Xie, H. Wu, and X. Zhao
the earlier approaches. Therefore, some less conservative results for fuzzy observer design can be obtained.
References 1. Boyd, S., Ghaoui, L.E., Feron, E., Balakrishnan, V.: Linear matrix inequalities in systems and control theory. PA: SIAM, Philadelphia (1994) 2. Castro, J.: Fuzzy logic controllers are universal approximator. IEEE Trans. on Systems, Man, Cybernetics, Vol. 25, April (1995) 629-635 3. Chadli, M., Maquin, D., Ragot, J.: Relaxed stability conditions for Takagi-Sugeno fuzzy systems. IEEE International Conference on Systems, Man, and Cybernetics, Nashville, Tennessee, USA, October 8-11, (2000) 3514-3519 4. Chai, J.-S., Tan, S., Chan, Q., Hang, C.-C.: A general fuzzy control scheme for nonlinear processes with stability analysis. Fuzzy Sets and Systems 100, (1998) 179-195 5. Gahinet, P., Nemirovskii, A., Laub, A.J., Chilali, M.: LMI toolbox. Natick, MA: Mathworks, (1995) 6. Jadbabaie, A.: A reduction in conservatism in stability and L2 Gain analysis of T-S fuzzy systems via Linear matrix inequalities. IFAC 1999, 14 ~ triennial World congress, Beijing, P.R. China (1999) 285-289 7. Johansson, M., Rantzer, A., Arz6n, K.: Piecewise quadratic stability for affine Sugeno systems. FUZZ. IEEE'98, Anchorage, Alaska (1998) 8. Jadbabaie, A.: A reduction in conservatism in stability and L2 Gain analysis of T-S fuzzy systems via Linear matrix inequalities. IFAC 1999, 14 ~ triennial World congress, Beijing, P.R. China (1999) 285-289 9. Liu, X.D., Zhang, Q.L.: New approaches to H∞ controller designs based on fuzzy observers for T-S fuzzy systems via LMI. Automatica 39 (2003) 1571 – 1582 10. Narendra, K.S., Balakrishnan, J.: A common Lyapunov function for stable LTI systems with commuting A-matrices. IEEE Trans. on Automatic Control, Vol. 39, No. 12, (1994) 2469-2471 11. Takagi, T., Sugeno, M.: Fuzzy identification of systems and its application to modeling and control. IEEE Trans. on Systems, Man, Cybernetics, Vol. 15, No. l, (1985) 116-132 12. Takagi, T., Ikeda, T., Wang, H.O.: Fuzzy regulators and fuzzy observers: relaxed stability conditions and LMI-based design. IEEE Trans. on Fuzzy Systems, Vol. 6, No. 2, (1998) 250-256 13. Tanaka, K., Ikeda, T., Wang, H.O.: Robust stabilization of uncertain non-linear systems via fuzzy control: quadratic stability, H control theory, and LMIs. IEEE Trans. on Fuzzy Systems, Vol. 4, No. 1, (1996) 1-12 14. Tanaka, K., Sugeno, M.: Stability and design of fuzzy control systems. Fuzzy Set and Systems, Vol. 45, No. 2, (1992) 135-156 15. Tanaka, K., Nishimuna, M., Wang, H.O.: Multi-objective fuzzy control of high rise/high speed elevators using LMIs. American Control Conference, Philadelphia, Pennsylvanie, June (1998) 3450-3454 16. Teixeira, M.C.M., Zak, S.H.: Stabilizing controller design for uncertain nonlinear systems using fuzzy models. IEEE. Trans. on Fuzzy Systems, Vol.7, No.2, (1999) 133-140 17. Thierry, M.G., Laurent, V.: Control laws for Takagi-Sugeno fuzzy models. Fuzzy Sets and Systems 120 (2001) 95-108 18. Wang, H.O., Tanaka, K., Griffin, M.F.: An approach to fuzzy control of nonlinear systems: stability and design: issues. IEEE. Trans. on Fuzzy Systems, Vol. 4, No. l, (1996)
Relaxed LMIs Observer-Based Controller Design via Improved T-S Fuzzy Model
941
19. Bupp, R.T., Bernstein, D.S., Coppola, V.T.: A benchmark problem for nonlinear control design: problem statement, experimental testbed, and passive nonlinear compensation. Proceedings of American Control Conference, 21-23 June 1995, Vol. 6. Seattle, WA, USA, (1995) 4363-4367 20. Wang, L.-X.: Adaptive Fuzzy Systems and Control: Design and Stability Analysis. Prentice Hall, Engelwood Cliffs, N. J., (1994)
Fuzzy Virtual Coupling Design for High Performance Haptic Display D. Bi1, J. Zhang2,*, and G.L. Wang2 1
Tianjin University of Science and Technology, Tianjin, 300222, P. R. China
[email protected] 2 Sun Yat-Sen University, GuangZhou, 510275, P. R. China
[email protected] Abstract. Conventional virtual coupling is designed mainly for stabilizing the virtual environment (VE) and it thus may have poor performances. This paper proposes a novel adaptive virtual coupling design approach for haptic display in passive or time-delayed non-passive virtual environment. According to the performance errors, the virtual coupling can be adaptively tuned through some fuzzy logic based law. The designed haptic controller can improve the "operating feel" in virtual environments, while the system's stability condition can still be satisfied. Experimental results demonstrate the effectiveness of this novel virtual coupling design approach.
1 Introduction Haptic feedback is a way of conveying information between human and computer [1], [2], [3]. As with most control problems, there are two conflicting goals for haptic display designs, performance and stability. Earlier research focused more on stability than fidelity issues. The stability of haptic system was first addressed by Minsky et al. [4]. In their paper, a continuous time, time-delayed model approximated the effects of sample-and-hold. Colgate et al. [5] used a simple benchmark problem to derive conditions under which a haptic display would exhibit passive behavior. A more general haptic display system design method to guarantee stable operation-- "virtual coupling" structure was introduced by Colgate et al. [6] and Zilles et al. [7] by connecting the virtual environment with the haptic device. The proposed virtual coupling was a virtual mechanical system interposed between the haptic interface and the virtual environment to limit the maximum or minimum impedance presented by the virtual environment in such a way as to guarantee stability. Correct selection of virtual coupling parameters can guarantee stable haptic display in virtual environments. The virtual coupling parameters can be set empirically or by some theoretical design procedure. One fruitful approach is to use the idea of passivity to design the virtual coupling, as passivity is a sufficient condition for system stability. The major problem with using passivity theory for designing virtual coupling parameters is that it is too conservative. To improve the performance, Adams et al [8] derived *
Corresponding author.
L. Wang and Y. Jin (Eds.): FSKD 2005, LNAI 3613, pp. 942 – 949, 2005. © Springer-Verlag Berlin Heidelberg 2005
Fuzzy Virtual Coupling Design for High Performance Haptic Display
943
a new virtual coupling design approach, by applying impedance or admittance based haptic display, which is less conservative than the passivity based design method. Miller et al. [9] extended the analysis to nonlinear non-passive virtual environments and designed the virtual coupling by considering the whole passivity condition of both the virtual environment and the haptic interface, so that the excessive passivity can be reduced by extracting some constant damping from the haptic interface. Hannaford et al. [10] moved a step further by designing a virtual coupling with adaptively changing parameter value, which was calculated in real time through the virtual environment "passivity observer". A disadvantage of these methods is that they cannot improve the haptic interface performance if the virtual environment is passive. In this paper, we propose a new virtual coupling design method: two-port network model based adaptive virtual coupling for stable and accurate haptic display. Different from constant-parameter virtual coupling, the parameter values of this two-port network model based virtual coupling can be adaptively tuned according to fuzzy logic algorithm [11], [12]. Due to its simple structure, fuzzy logic algorithm is relatively easy to use and is well understood by a great majority of industrial practitioners and automatic control designers [12], [13]. Comparing with traditional virtual coupling, this two-port network based virtual coupling can increase the performance of haptic interface in addition to stabilizing the whole haptic display system. The adaptive tuning of the virtual coupling can improve the system’s response time, increase the haptic display accuracy. This paper is organized as follows: in Section 2, we briefly review the network based haptic display system and traditional virtual coupling. In section 3, the adaptive nonlinear virtual coupling for haptic display is presented. In section 4, we present some case studies using this fuzzy logic based haptic controller in experiments. Finally the conclusions are drawn in section 5.
2 Network Based Haptic Display Network models, common in circuit theory, where they are used to characterize the effects of different loading conditions on two terminal electrical networks, are a natural way of describing stability and performance in bilateral teleoperation [2], [14], and in the field of control for haptic display [8], [10]. Haptic display methods can be divided into two categories: Admittance based and Impedance based haptic display. For admittance based haptic display, the operator applies force to the haptic interface, the haptic interface produce a kinematic movement to the operator, whereas the impedance based haptic display works the other way around. Fig. 1 shows a typical structure of the network based haptic controller for admittance based display. In Fig. 1, the human operator, virtual coupling and virtual environment are modeled as a one-port network model, whereas the haptic interface is modeled as a two-port network model. The human operator contacts the haptic interface with the velocity v h and force f h . The virtual environment modulates the angular velocity v r and force f r according to the physical law in the virtual world. For impedance based haptic display, the directions of the arrows are inverted.
944
D. Bi, J. Zhang, and G.L. Wang
Fig. 1. Typical network based admittance haptic display system
3 Adaptive Virtual Coupling Design for Haptic Display In most prior research work, the virtual coupling is designed as a damping element connecting the haptic interface and the virtual environment to stabilize the nonlinear VR. It is difficult to achieve at the same time a high performance for the haptic display with such an approach. To overcome such a limitation and to improve the performance of the haptic display, we develop an adaptive nonlinear virtual coupling based on two-port network model and fuzzy logic. By adaptively tuning the parameters of the nonlinear virtual coupling, the fuzzy logic based virtual coupling can result in a stable and high performance haptic display. In what follows, we first introduce the two-port network based virtual coupling. Then we present the design method using fuzzy logic theory. The following derivation is based on the admittance based haptic display. The proposed two-port network based virtual coupling is different from the traditional virtual coupling structure. As shown in Fig. 2, the two-port network model
f r equals the measured interaction force f h , vi is the measured velocity from the haptic device, v r is VE output. Here f i is the virtual coupling output to the haptic device and its parameter k as shown in
based virtual coupling is designed as follows:
equation (1) is adaptively tuned by the fuzzy logic based law. Here only this parameter is tuned, because general fuzzy based PID parameter tuning method has proved its validity and easy to use [11]. From Fig. 2, f i can be represented in a new form:
f i (t ) = K∆x(t ) + K i ∫ ∆x(t )dt + B∆x& (t ) where
∆x(t ) = xr (t ) − xi (t ) ,
interface output output
(1)
is the displacement error between the controller haptic
xi = ∫ vi dt and the reference virtual environment displacement
x r = ∫ v r dt , f i (t ) is the controller input to the haptic interface.
K , B, K i areconstant gains that can be determined by Ziegler-Nichols formula respectively.
Fuzzy Virtual Coupling Design for High Performance Haptic Display
vi
∫
xi
ki ∫
k
+
fi
+
xr
B
++
945
vr
∫
d dt fr = f h
Fig. 2. Adaptive virtual coupling design for haptic display
Converting equation (1) into frequency domain, we have
Fi (s ) = K ∆X (s ) + K i
∆ X (s ) + Bs ∆X (s ) s
(2)
Using backward difference and the trapezoidal approximation for the derivative and the integral respectively, the discrete-time realization of (2) is
Fi ( z ) = K∆X ( z ) + K i B
(
)
T 1 + z −1 ∆X ( z ) + 2 1 − z −1
1 1 − z −1 ∆X ( z ) T
(3)
where T > 0 is the sampling period. Here we extended the fuzzy set-point weighting based technique [11] and derive our algorithm called the extended fuzzy set-point weighting technique which is described in the following formulation. Equation (3) can be rewritten as: Fi (nT ) = K (δ (nT )x r (nT ) − xi (nT )) + T 1 + z −1 1 ∆x(nT ) + B 1 − z −1 ∆x(nT ) −1 2 1− z T δ (nT ) = 1 + f (nT ) Ki
(
)
(4)
f ( nT ) is the output of a fuzzy inference system consisting of triangular and trapezoidal membership functions for the two inputs ∆ x , ∆ x& , and nine triangular Where
functions for the output. The fuzzy rules are listed in Fig. 3. In Table 1, the definitions of the linguistic variables in the fuzzy inference system are given. Through the designed controller, we can see, by tuning f (nT ) , the controller parameter K can be tuned accordingly. While f (nT) is the fuzzy inference of two inputs
∆ x , ∆ x& according to the fuzzy rule.
The fuzzy based virtual coupling must keep system stable in nonlinear virtual environments. This can be realized by tuning the dead zone width d > 0 of the membership function as seen in Fig. 4. In general, the dead zone width can be different. Here we let them be same to simplify the design and notation in the following discussions.
946
D. Bi, J. Zhang, and G.L. Wang ∆ x&
& ∆x
∆x
NB
NS
Z
PS
PB
Z
PS
PB
NB
NVB
NB
NM
NS
Z
NB
PVB
PB
PM
PS
Z
NS
NB
NM
NS
Z
PS
NS
PB
PM
PS
Z
NS
Z
NM
NS
Z
PS
PM
Z
PM
PS
Z
NS
NM
PS
NS
Z
PS
PM
PB
PS
PS
Z
NS
NM
NB
PB
Z
PS
PM
PB
PVB
PB
Z
NS
NM
NB
NVB
NS
NB
∆x
Fig. 3. Basic rules for the fuzzy inference (upper: when
xr > 0 , lower: when xr < 0 )
Table 1. Definition of the linguistic variables NVB NB NM NS Z PS PM PB PVB
NVB
Negative very big Negative big Negative medium Negative small Zero Positive small Positive medium Positive big Positive very big
NB NM NS
1 Z
PS PM PB
d
PVB
∆x(∆x&)
Fig. 4. Stabilizing the haptic interface by tuning the fuzzy dead zone width
4 Implementations 4.1 Experimental Setup Fig. 5 illustrates the schematic diagram of our experimental test bed. The haptic device is a planar 1-DOF rotating link with a vertical joint connected to a DC motor through a gearbox with a ratio 1:80. The mass of the moment inertial of the link is
I = 2.438 × 10− 2 kgm2 . An optical encoder, with a resolution of 500 pulses per revolution, measures the joint angular displacement. A finger type three-dimensional force sensor is installed on the end of the link. This force sensor has a resolution of 0.005 N for the force measurement in each direction. A DSP (PS1103 PPC) control-
Fuzzy Virtual Coupling Design for High Performance Haptic Display
947
ler board is used on a host PC for the haptic display control. The PC based virtual environment is connected to the controller for real-time information change. The first experiment conducted is the performance comparison between the traditional virtual coupling and our adaptive virtual coupling for the case of a virtual wall. The second experiment repeats the first but with the computational time delay and feedback signal time delay incorporated respectively. To obtaining objective results, a point-mass of 0.1 kg is installed on top of the three-dimensional force sensor to replace the human operator.
Fig. 5. Schematic diagram of the experimental test bed
4.2 Experimental Results The first experiment displays a virtual wall without considering any time delay. The virtual finger driven by the haptic device met the virtual wall at θ = 1744 . from its initial position. For admittance based haptic display, we model this ideal “virtual wall contact” behavior as a displacement step function output. Fig. 6 shows the experimental result by using the traditional virtual coupling and our adaptive virtual coupling scheme. The dashed line shows the interaction displacement response of the haptic display system using the adaptive virtual coupling. The dash-dotted line shows the experimental results using the traditional virtual coupling. Comparing the results of the two controllers, we can see that with the adaptive virtual coupling scheme, very little overshoot or fluctuation is observable in the displacement when meeting the virtual wall. Here the initial fuzzy logic based virtual coupling parameters were selected using IAE principle as K = 3.5, B = 0.12 . In the next experiment, we considered in the display of a virtual wall with time delay of 0.1s as the disturbance from the feedback signal. The initial virtual coupling parameter values were set at K = 0.5, B = 0.12 . Fig. 7 shows the corresponding results. The dash-dotted line shows the experimental results using the traditional virtual coupling. The dashed line shows the display result using our adaptive virtual coupling in which case a fast response in the displacement is clearly observable.
948
D. Bi, J. Zhang, and G.L. Wang
Fig. 6. Interaction with no time delay
Fig. 7. Interaction with 0.1s delay Table 2. The IAE values using adaptive virtual coupling and constant virtual coupling Haptic display with no time delay Haptic display with 0.1s delay
Adaptive virtual coupling 0.918 1.003
Constant virtual coupling 1.213 1.308
Table 2 shows the quantitative comparison of the IAE values of the experimental results using the adaptive virtual coupling and traditional virtual coupling approach. From the comparison, we can see that the performance is improved in all the cases when using the adaptive virtual coupling as compared against using the traditional virtual coupling.
5 Conclusions The conventional virtual coupling design for haptic display is concerned mainly with stability, which usually makes performance conservative. This paper presents an adaptive virtual coupling design approach for haptic display which takes into account both the stability and performance in its design. The studies show that with the adaptive haptic coupling design, improved performance in the haptic display can be
Fuzzy Virtual Coupling Design for High Performance Haptic Display
949
achieved, while at the same time the stability can be guaranteed by tuning the parameters when VE is bounded output. The implementation proved the validity of the developed adaptive virtual coupling design method. The Fuzzy logic based adaptive virtual coupling can increase the system’s speed of response and the haptic display accuracy in addition to stabilizing the human-haptic interface interaction. Further explorations in the work include the how to select initial parameters for stable and accurate haptic display.
References [1] Mandayam A. S., Basdogan C.: Haptics in VE: Taxonomy, Research Status, and Challenges. Computers and Graphics 21 (1997) 393-404 [2] Hannaford B.: A Design Framework for Teleoperators with Kinesthetic Feedback. IEEE J. Robot. Automat. 5 (1989) 426-434 [3] Li Y. F., Bi D.: A Method for Dynamics Identification for Haptic Display of the Operating Feel in Virtual Environments. IEEE Transaction on Mechatronics 8 (2003) 1-7 [4] Minsky M., Ouh Y. M., Steele O., Brooks F. P., Behensky M.: Feeling and Seeing Issues in Force Display. Computer Graphics 24 (1990) 235-243 [5] Colgate J. E., Grafing P. E., Stanely M. C., Schenkel G.: Implementation of Stiff Virtual Walls in Force Reflecting Interface. Proceeding of IEEE Virtual Reality Annual International Symptom, Seattle, WA (1993) 202-208 [6] Colgate J. E., Brown J. M.: Factors Affecting the Z-Width of a Haptic Display. Proceeding of IEEE International Conference on Robot and Automation, Los Alamitos, CA (1994) 3205-3210 [7] Zilles C. B., Salisbury J. K.: A Constraint-Based God-Object Method for Haptic Display. Proceeding of IEEE International Conference on Intelligent Robot system, Pittsburgh, PA (1995) 146-151 [8] Adams R. J., Hannaford B.: Control Law Design for Haptic Interfaces to Virtual Reality. IEEE Transaction on Control System Technology 10 (2002) 3-13 [9] Miller B.E., Colgate J. E., Freeman R. A.: Guaranteed Stability of Haptic Systems with Nonlinear Virtual Environments. IEEE Transaction on Robotics and Automation 16 (2000) 712-719 [10] Hannaford B., Ryu J. H.: Time-Domain Passivity Control of Haptic Interfaces. IEEE Transaction on Robotics and Automation 18 (2002) 1-10 [11] Visioli A.: Fuzzy Logic Based Set-Point Weighting for PID Controllers. IEEE Transactions on System, Man, Cybernatics, Part a 29 (1999) 587-592 [12] Visioli A.: Tuning of PID Controllers with Fuzzy Logic. IEE Proceedings-Control Theory Application 148 (1998) 1-8 [13] Misir D., Maliki H. A., Chen G.: Design and Analysis of a Fuzzy PID Controller. International Journal of Fuzzy Set System 79 (1998) 73-93 [14] Anderson R. J., Spong M. W.: Asymptotic Stability for Force Reflecting Teleoperators with Time Delay. Int. J. Robot. Res. 11 (1992) 135-149 [15] Schaft A. V. D.: L2-Gain and Passivity Techniques in Nonlinear Control. Springer-Verlag London Limited (2000) [16] Cavusoglu M. C., Tendick F.: Multirate Simulation for High Fidelity Haptic Interaction with Deformable Objects in Virtual Environments. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2000), San Francisco, CA (2000) 2458-2465 [17] Margaret L. M., Hespanha J. P., Sukhatme G. S.: Touch in Virtual Environments: Haptics and the Design of Interactive Systems. Prentice Hall (2002)
Linguistic Model for the Controlled Object Zhinong Miao1,2, Xiangyu Zhao2, and Yang Xu1 1
Center of Intelligent Control and Development Southwest Jiaotong University, Chengdu 610031, Sichuan. P.R. China
[email protected] 2 Department of engineering of electronic information, Panzhihua University, Panzhihua 617000, Sichuan. P.R. China
Abstract. A fuzzy model representation for describing the linguistic model of the object to be controlled in a control system is prompted. With the linguistic model of controlled object or process to be controlled, we can construct a close loop system representation. Consequently, we can discuss the system appearance with the assistance of the linguistic model as we do using a mathematic model in a conventional control system. In this paper, we discuss the describing ability of a fuzzy model and give a formal representation method for describing a fuzzy model. The combine method for a fuzzy system constructed by multiple fuzzy models is also discussed based on the controller model and the linguistic model of controlled object.
1 Introduction As a solution to control complex systems in their whole operation range, fuzzy controllers constitute a good offer. Since the first application of fuzzy theory to automatic operating area has to be constrained to small perturbation control in Mamdani’s paper in 1975, fuzzy control has gradually been constituted as a powerful technique of control [2] [3]. Fuzzy controllers are non-linear controllers that provide a formal methodology for representing, manipulating and implementing a human’s heuristic knowledge about how to control a system. They could be viewed as artificial decision-makers that operate in a closed-loop system in real time [4]. There are many issues discussing the methodology for design and analyzing of fuzzy control system, including discussing the rule base construction, fuzzy modeling, and adaptive fuzzy control [5]. Fuzzy system theory enables us to utilize qualitative, linguistic information about a system to construct a mathematical model for it. For many real-life system, which are highly complex and inherently non-linear, conventional approached to modeling often cannot be applied whereas the fuzzy approach might be the only viable alternative. For design and analyses of the fuzzy system, conventional fuzzy system theory is lack of the model of controlled object or plant. So the method of conventional fuzzy system design cannot guzarantee the stability and robustness of closed-loop system. To settle these problems, much research has been done and most existing methods for the design and analysis of fuzzy controller can be roughly divided into two groups, one is that use fuzzy information to construct a model of the plant and utilize nonlinear theory to synthesize a controller and the other is to treat fuzzy model locally. L. Wang and Y. Jin (Eds.): FSKD 2005, LNAI 3613, pp. 950 – 959, 2005. © Springer-Verlag Berlin Heidelberg 2005
Linguistic Model for the Controlled Object
951
Till now few issue address on the linguistic modeling of controlled object. This paper would present a linguistic model describing the characteristics of the controlled object. Actually, as we constructed the rule base of a fuzzy controller, we give an assumption of the characteristics of the controlled object. The potential model is indeed exists in every fuzzy control system but without any syllabify elaborating. As conventional control system, we can construct a close loop description of a fuzzy control system using the model of object to be controlled. Up to now, fuzzy control system is lack of systemic design and analyses methodology for reason that there is not a complete illustrate of the system because there is not an appropriate model of controlled object. Although it is a great advantage of fuzzy control that we don’t need a mathematic model of the controlled plant, it is also a cumber of systemic design and analysis of the system. This paper attempt to prompt an approach to model the controlled plant to provide an appropriate mode illustrating the plant and constructs a close loop system while inheriting the advantage of using human’s heuristic knowledge about of the plant instead of using a mathematic model of controlled plant. The new method is based on the state space method of modern control system to construct a fuzzy model for the controlled object or the plant. The rest of this paper is organized as follows: section II present the linguistic model of controlled object based on human empirical knowledge and the representation of linguistic model using fuzzy state space method. In section III, some combine method of two linguistic model (one is the controller model and another is the linguistic model for controlled object) is prompted section IV includes some conclusion of the future usage for the linguistic model for controlled object.
2 Fuzzy Model of Controlled Object Fuzzy control use human’s intuitional knowledge, we can use these knowledge to construct a fuzzy controller and tune the controller while apply the controller to the practice case or simulation case. Fuzzy control is proved to be an appropriate alternative for traditional control method in many nonlinear and complicated cases which can not be represented by a precise mathematic model. As we construct a fuzzy controller in the form of rule base, it is possible to construct a model of the controlled object using operator’s control experience. Just as the linguistic model of controller, the model of controlled object should be linguistic model. Begin with a typical fuzzy controller with two inputs and one output showed in figure 1. Usually an engineer would construct a fuzzy controller with rules as follows: IF e is A and e is B THEN u is C
d e(t ) is the change of error and u dt is the output of the controller. This rule means that if the error e is with value A, and the change of error e is with value B, to drive the output of system to the ideal state we should apply a drive single u on the controlled object. Here
e(t ) = r (t ) − y (t ) is the error, e(t ) =
Although theses rules are based on the intuition knowledge of the engineer about control process according to his experience of controlling the controlled process or object, it is actually based on the knowledge of the controlled object. Nobody can take
952
Z. Miao, X. Zhao, and Y. Xu
Fig. 1
any control action if he completely has no knowledge about the controlled process. The operator must know that if the controls signal u is applied on the controlled object the response of the controlled object would change to the ideal state. So the rule implies the model of controlled object. That is that the single u would drive the system output change towards the ideal state. That means we can use a rule as follows to represent the dynamic of controlled object.
if
u is C then e is
A and
e is B
Where C is a fuzzy set defined in the universe of discourse of controls signal u , A is the corresponding fuzzy value of error e and B is the fuzzy value of change of error e . So we can construct a linguistic model of controlled object just as we construct a fuzzy controller. This is the basic idea of the linguistic model of controlled object. 2.1 Describe Ability of Fuzzy Model Many works have been done on discussing the approximating ability of fuzzy system to linear or nonlinear property. Theorem: for every continue function there must exist a fuzzy logic system
g and ε > 0 defined in the set U ∈ R n ,
f with a form as n
m
∑ y l [∏ ail exp(−( j =1
f ( x) =
m
i =1 n
∑ [∏ a j =1
i =1
l i
exp(−(
xi − xil
σ il
xi − xil
σ il
)2 )] (1) 2
) )]
That satisfies
sup x∈U | f ( x) − g ( x) |< ε .
The theorem can be extended to discrete domain Then it can be deduced that for every g ∈ L2 (U ) and fuzzy logic system in the form as (1) that satisfy
(∫
f ( x) − g ( x) dx 2
U
Where
U ∈ R n and
)
1/ 2
0 , there must exist a
Linguistic Model for the Controlled Object
953
2 L2 (U ) = ⎡ g : U → R | ∫ U g ( x) dx < ∞ ⎤ ⎣ ⎦
According to the description showed above, we can construct a fuzzy model for description of controlled object and the model is able to approximate any linear or nonlinear property of controlled object. The model is also in the form of linguistic representation just as the fuzzy controller. So there is also a rule base in the same form of controller. With the help of fuzzy model of controlled object, we can construct a close loop of fuzzy control system as showed in figure 2.
Fig. 2
In the close loop constructed by the controller, controlled object and feedback, the input of fuzzy model of controlled object is the output of the controller u and the output of the fuzzy model of controlled object is the system response y , actually the response is not only related to the input u but also related to the current state of the controlled object. So many issues represent the fuzzy model of the controlled object as
xk +1 = (u , xk ) D Ro Where
(2)
x k is the current state of controlled object and x k +1 is the output of the fuzzy
model which is also the system response,
Ro is the rule relation representing the lin-
guistic model of controlled object. Also the fuzzy controller can be represented as
u = xk D Rc Where
(3)
u is the output of fuzzy controller and Rc is the fuzzy relation representing the
fuzzy model of fuzzy controller. In this representation, the rule base should include rules as
R l : if
u is then
Al
and
xk
is
Bl
xk +1 is C l
So the equation (2) and (3) form the base of analysis of fuzzy system for its stability and system performance. There is some drawback in the rule that in human’s intuited knowledge, we always do not know what would the next state ( xk +1 ) be but we know how the state would change. That means by human’s intuited knowledge the rule should be in the form as
954
Z. Miao, X. Zhao, and Y. Xu
R l : if
Al
u is
then ∆xk
and
xk
Bl
is
is C l
It prompt a new issue that how we can deduce the next state of controlled object xk +1 from the current state x k and the change of state ∆xk deduced from the rule.
∆xk and use regular math’s “+” calculator to get the crisp value of xk +1 . The crisp value of xk +1 A simple method is to calculate the crisp value of the change of state
is feedback to the controller and the model of controlled object as next input of system. By this way the next state of controlled object is
xk +1 = xk + ∆x n
∑ µ (c )C l
= xk +
L
l =1
n
∑ µ (c ) l
l =1
2.2 Representation of Fuzzy Model for Controlled Object As with all modeling problems, the first step is to identify the relevant quantities whose interaction the model will specify. These quantities can be classified into input,
output, and state variables. Let U , Y , Σ and denote the input, output, and state spaces, respectively. To simplify the discussion, we assume that all these spaces are subsets of Euclidean space of (possibly) different dimensions. Because we are interested in dynamical systems we also need to specify a set T ⊂ R of times of interest;
T = R or R + for continuous time systems and T = {kτ | τ > 0 and k ∈ Z or N } for discrete time systems. Given these sets, a general dynamical system is defined in [11] as a quintuple D = (u , Σ, y, s, r ) where Σ is the state space, u is a set of input functions u () : T → U , and y is a set of output functions y () : T → Y . The dynamics are encoded by the state transition function s :
typically,
s : T × T × Σ ×U → Σ (t1 , t0 , x0 , u ()) → x1 = s (t1 , t0 , x0 , u ()) that produces the value of the state
x1 at
time
t1 given
the value of the state x0 at
time t0 and the input for all times. The map is only defined for t1 the read-out function.
r : Σ ×U × T → Y
( xt , u (t ), t ) → y (t ) = r ( xt , u (t ), t )
> t0 . Finally, r is
Linguistic Model for the Controlled Object
955
that produces the output function at time t given the value of the state and input at time t .To keep the definition consistent two axioms are imposed on the state transition function. For fuzzy system, the dynamic system can be described as the same form. In this paper, we restrict our attention to discrete time models and, in particular, models whose time stamps take values in the set T = {kτ | k ∈ N } for some τ > 0 . Without loss of generality, we assume τ
= 1.
So a fuzzy dynamical system is a quintuple D = (U , Σ , Y , IR, RO ) where F
F
F
Σ F is the fuzzy state space, Σ F ⊂ I α 1 × " × I α N ,that is, for every x F ∈ Σ F ,
⎡ x1F ⎤ ⎢ ⎥ xF = ⎢ # ⎥ ⎢ xNF ⎥ ⎣ ⎦ i ∈1," , N
, and
⎡ p1i ⎤ ⎢ ⎥ xiF = ⎢ # ⎥ ∈ I α i ⎢ pαi i ⎥ ⎣ ⎦ j ∈1," , α i
,
with
0 ≤ p ij ≤ 1
,
;
U F is a set of fuzzy input functions u F () : T → U F ⊂ I b1 ×" × I bm F F c1 cl Y F is a set of fuzzy output functions y () : T → Y ⊂ I ×" × I
IR = {IR1 ," , IRn } is a set of inference rules IRi : Σ F × U F × T → I ai IR : Σ F × U F × T → Σ F
IR produces the value of the state at the next time instant, given the value of the state and the input at the current time instant; RO = {RO1 ," ROl } is a set of read-out maps ROi : Σ F × U F × T → I ci RO : Σ F × U F × T → Y F RO produces the value of the output at the current time given the value of the state and input. Fuzzy state space is a new concept for fuzzy system introduced from modern control theorem. The basic problem for fuzzy state space is the choice of state variable. Conventional fuzzy controller always uses the error e and the change of error e as it’s input variables as showed in figure 1. It is based on human’s intuited knowledge about the controlled object and the controller uses the input to deduce an output signal to drive the controlled object. But it is not sufficient to describe the dynamic property of controlled object. In such kind of system which is common in practice, a given error e , the change of error
e and the input signal u can not determine the next state of system. So we have
956
Z. Miao, X. Zhao, and Y. Xu
to use some concept of state variables in modern control theory for reference. Some important concept is defined as following: State: state of a dynamic system is a minimized set of variables of the system that it can determine the future performance at the time t ≥ t0 if the variables at time
t = t0 and the input at the period t ≥ t0 are determined. State variables: state variables for a dynamic system is a minimized set of variables that determining the system state. State variables must efficient to express the system state which means that the state variables can determine the only system behavior at any time. And it also must be necessary which means that the state variables are the minimized set of variables that can be used to represent the system state. State vector: if representing dynamic system behavior needs n variables; the vector X which takes the n variables as its sub variables, the vector X is the state vector of
t = t0
the dynamic system. As the state at time
and the input
u (t ) at
the period
t ≥ t0 are determined, system state X (t ) at any time in t ≥ t0 is determined. State space: if
x1 x2 " xn is the state variables for a dynamic system, the
n dimensions space is the state space. Any state of the system can be represented as a point in the state space. All these variables form the state vector of input vector of gas burning boiler and the state vector can determine the system state at any time t ≥ t0 . The state variables form the state vector for describe the object.
X = ( x1 , x2 ," , xn ) 3 Combination of Fuzzy Model For conventional fuzzy controller showed in figure 1, the fuzzy controller model uses rules with the form of expression (1) where
R l is the lth rule, x j is the chosen vari-
u is the considered output variable. The syml bols A are membership functions and C is the rule consequent deduced from
ables expressing the system's state and l j
l
the R .
R l : if
x lj
is
Alj
then u
is C l
(1)
j = 1......m , For MISO fuzzy controller, suppose that there are n rules in the rule base and the output u can be formulated as
∑ C∏ u= ∑ ∏ n
i =1 n
i =1
i
m j =1
m
j =1
µj µj
(2)
Linguistic Model for the Controlled Object
Where
µ j the membership that variable x j
the universe of discourse and
957
belongs to the fuzzy subset defined in
Ci is the center of the fuzzy subset of the consequent
i
deduced from R . In the reference process described above, the model of controlled object is not taken into account. Main reason is that there is not explicit fuzzy model of controlled object. The linguistic model of controlled object makes it possible to analyze the output of plant y based on the input error and change of error. Based on the model constructed before, it is possible to discuss the combination of the models of controller and controlled object. Basically there are two methods to combine the two fuzzy models; the difference of the two methods lays on that the signal which is transported between the two fuzzy models. One is that the output of the controller u is defuzzificated and the crisp value of u is the input of the fuzzy model of controlled object while the other method takes u as a fuzzy value and translate it to the fuzzy model of controlled object. The combination using these two methods is discussed as following. 3.1 Crisp Value Transfer The deduced output of the controller is defuzzificated and the crisp value of u is treated as the input of the fuzzy model of controlled object. So the output of model of controlled object which is also the system response is
xk +1 = xk + ∆xk
∆xk is deduced from the fuzzy model of controlled object with a input as u and current state of controlled object xk . And
∆~ xk = (u~, ~ x ) D Ro Using the regular combine process and center of gravity defuzzification method, there comes n
R
∆xk =
∑ ∆xi µi (u)∏ µi ( xk ) i =1 R
k =1
n
∑ µ (u)∏ i =1
i
k =1
µi ( xk )]
R is the number of the rules in the rule base, and n is the dimension of status variable we choose to represent system state. For the controller inference process, conventional method use the error and change of error as it’s input. Basically fuzzy controller is a discrete controller and the error e can be represented as
ek = xk − r
958
Z. Miao, X. Zhao, and Y. Xu
r is the set value of the system response. And the change of error e can be represented as
e = ek − ek −1
= ( xk − r ) − ( xk −1 − r ) = ∆xk so the output of controller can be represented as
u = (e, e) D Rc
= ( xk − r , ∆xk ) D Rc As the set value r is a constant value, it can be omitted using a coordinate change on the linguistic value of xk .Using the same inference mechanism and defuzzification method, we have
u=
R
n
i =1 R
k =1 n
∑ ui µi ( xk )∏ µi (∆xk −1 )] ∑ µ ( x )∏ i =1
i
k
k =1
µi (∆xk −1 )]
Based on the description (1) and (2), it is obviously that the next state of controlled object (it is also the system response) is related not only to the current controller output and the current state, but also related to the passed state of controlled object. This is tally with human’s knowledge about the process control. 3.2 Fuzzy Value Transfer As the signal u is transferred between the two fuzzy models, it suggests a method to combine the two models by a kind of rule chain. Many papers have addressed their goal to combine the two fuzzy models using the signal as a fuzzy variable. But it can only be used as a theoretical analyses method for the reason that in real world the two fuzzy models are physically separated. The signal can not be transferred in fuzzy value. A typical combine method is:
X K +1 = ( X K ,U K ) D RP U K = X K D RC RP is the fuzzy relation of controlled process and RC is the fuzzy relation of fuzzy controller. The combination o f the two equations
X K +1 = X K D R1 D RP
Linguistic Model for the Controlled Object
959
4 Conclusions The model for controlled object and the representation discussed above presents a method to describe the fuzzy control system as a close loop. It constructs a base to study fuzzy control system on systematic design method and analysis for system performance. Comparing the two combination methods of crisp and fuzzy value transferred, the crisp value method is much complicated but the fuzzy value transferred method can only be used as a theoretic analyses method for the reason that in actual applications the value transferred is crisp instead fuzzy. Based on the linguistic model of controlled object, the performance of system can be discussed before the great deal of simulation and tuning work. The performance of the system can be determined in the design process.
Acknowledgments This work is partly supported by the Department of Electronic Engineering and Information Panzhihua University and Department of Mathematics Southwest Jiaotong University. It is also a part of work supported by national nature science fund. Grant number (60474022).
References 1. Y. Lin and G. A. Cunningham : A new approach to fuzzy-neural modeling. IEEE Trans. Fuzzy Syst., vol. 3, pp. 190–197, May1995 2. D. Landau, R. Lozano, and M. M’Saad: Adaptive Control. NewYork: Springer-Verlag, 1998. 3. Lixing Wang : Adaptive fuzzy system and control design and reliability analyses. Prentice Hall 1994 4. John Lygeros,A: Formal Approach to Fuzzy Modeling IEEE TRANSACTIONS ON FUZZY SYSTEMS, VOL. 5, NO. 3, AUGUST 1997 5. F. M. Callier and C. A. Desoer: Linear System Theory. New York:Springer-Verlag, 1991.
Fuzzy Sliding Mode Control for Uncertain Nonlinear Systems Shao-Cheng Qu and Yong-Ji Wang Department of Control Science and Engineering, Huazhong University of Science and technology, Wuhan, China, 430074,
[email protected] Abstract. The novel fuzzy sliding mode control problem is presented for a class of uncertain nonlinear systems. The Takagi-Sugeno (T-S) fuzzy model is employed to represent a class of complex uncertain nonlinear system. A virtual state feedback technology is proposed to design the sliding mode plane. Based on Lyapunov stability theory, sufficient conditions for design of the fuzzy sliding model control are given. Design of the sliding mode controller based on reaching law concept is developed, which to ensure system trajectories from any initial states asymptotically convergent to sliding mode plane. The global asymptotic stability is guaranteed. A numerical example with simulation results is given to illustrate the effectiveness of the proposed method.
1 Introduction Since uncertainties are always the sources of instability for nonlinear system, the stabilization problems of uncertain nonlinear systems are extremely important. In the past decade, many researchers have paid a great deal of attention to various control methods in uncertain nonlinear systems [1]. With the development of fuzzy systems, fuzzy control represented by IF-THEN rules has become one of the useful control approaches for complex systems [2]. Takagi and Sugeno et al. proposed a kind of fuzzy inference system so-called Takagi-Sugeno (T-S) fuzzy model [3-6]. It combines the flexibility of fuzzy logic theory and rigorous mathematical theories of linear or nonlinear system into a unified framework. It has been shown that PDC control design and conditions for the stability within the framework of T-S fuzzy model can be formulated into LMIs form [7-9]. Currently, stability analysis and synthetically design for nonlinear system represented in T-S fuzzy models have been well addressed in [8-12]. We would like to point out when the uncertainties are bounded and the bounds are known, sliding mode control (SMC) is a reasonable method to stabilize uncertain fuzzy system. SMC approach, based on the use of discontinuous control laws, is known to be an efficient alternative way to tackle many challenging problems of system synthesis [13]. A sliding mode control system has various attractive features such as fast response, good transient performance. Especially, ideal sliding mode system is insensitive to matched parameter uncertainties and external disturbances. Actually, SMC method has been used to design for linear systems and nonlinear systems long before [14-17]. Although the SMC for uncertain nonlinear systems received increasing attention, there has been very little work to discuss uncertain T-S fuzzy system using SMC approach [18]. L. Wang and Y. Jin (Eds.): FSKD 2005, LNAI 3613, pp. 960 – 968, 2005. © Springer-Verlag Berlin Heidelberg 2005
Fuzzy Sliding Mode Control for Uncertain Nonlinear Systems
961
In this paper, by using a novel virtual state feedback technique, sufficient conditions for design of robust sliding mode plane are given based on Lyapunov stability approach. The purpose of virtual state feedback control is only helpful to design the robust sliding mode plane. Design of the sliding mode controller based on reaching law concept is developed to guarantee system trajectories from any initial conditions asymptotically convergent to sliding mode plane. Therefore, the asymptotic stability of the global fuzzy sliding mode system is guaranteed.
2 Problem Formulation The continuous fuzzy dynamics model, proposed by Takagi and Sugeno, is described by fuzzy IF-THEN rules, which represents local linear input-output relations of nonlinear system. Consider an uncertain nonlinear system that can be described by the following uncertain T-S fuzzy model Plant Rule i : IF z1 (t ) is M i1 and … and z g (t ) is M ig THEN
x (t ) = ( Ai + ∆Ai (t )) x(t ) + Bi u (t ) + f i ( x, t ) , i = 1, 2" n
(1)
where n is the number of IF-THEN rules, M ij is the fuzzy set; x(t ) ∈ R n is the state vector, u (t ) ∈ R m is the input control; Ai and Bi are known constant matrices with appropriate dimensions, ∆Ai (t ) represent parameter uncertainties, and f i ( x, t ) is a bounded external disturbance. z1 (t ), z 2 (t )" z g (t ) are the premise variables, and
Ψ (t ) is the continuous initial state function. By using standard fuzzy inference method, i.e., a singleton fuzzifier, product fuzzy inference, and central-average defuzzifier, system (1) can be inferred as n
x (t ) = ∑ hi ( z (t ))[( Ai + ∆Ai (t )) x(t ) + Bi u (t ) + f i ( x, t )]
(2)
i =1
where z (t ) = [ z1 (t ), z 2 (t ) " z g (t )]T , and hi ( z (t )) denotes the normalized membership function which satisfies n
g
i =1
j =1
hi ( z (t )) = wi ( z (t )) / ∑ wi ( z (t )) , wi ( z (t )) = ∏ M ij ( z j (t ))
(3)
and M ij ( z j (t )) is the grade of membership of z j (t ) in M ij . It is assumed that
wi ( z (t )) ≥ 0 for all t . Then we can obtain the following conditions n
hi ( z (t )) ≥ 0 , ∑ hi ( z (t )) = 1 , i = 1 , 2 , " , n
(4)
i =1
In this paper, we make the following assumptions. Assumption 1: All the matrices Bi are identical, i.e., B1 = B2 = " = Bn : = B . Furthermore, suppose that the matrix B is of full column rank.
962
S.-C. Qu and Y.-J. Wang
Assumption 2: The time-varying parameter uncertain matrices ∆Ai (t ) are defined as follows ∆Ai = H i Fi (t ) Ei , where H i and Ei are known matrices with appropriate dimensions; and Fi (t ) are unknown matrices satisfying Fi T (t ) Fi (t ) ≤ I for ∀ t , where the elements of Fi (t ) are Lebesgue measurable. Assumption 3: The external disturbances satisfy the matching conditions and are bounded as f i ( x, t ) = Bf i ( x, t ) and || f i ( x, t ) || ≤ δ f (t ) . Let us choose the sliding mode plane
S = B T P x(t ) = 0
(5)
where P ∈ R n×n is a symmetric positive definite matrix to be chosen later. The objective of this work is to investigate the stabilization for uncertain T-S fuzzy system in two steps. The first step is to construct suitable sliding mode controller, which to guarantee that system trajectories can be driven to the sliding mode plane in a finite time. Another step is to derive the sufficient conditions for asymptotic stability of the sliding mode system. Before proceeding, we recall some lemmas that will be frequently used throughout the proofs of our main results. Lemma 1 [19]: Given constant matrices R1 , R2 and R3 with appropriate dimensions, where matrices R1 = R1T , R2 = R2T > 0 , then R1 + R3T R2−1 R3 < 0 if and only if
⎡ R1 ⎢ ⎣ R3
R3T ⎤ ⎥ 0 , we have
HFE + ( HFE )T ≤ ε HH T + ε −1 E T E
3 Main Conclusions Theorem 1: Under assumptions 1-4, the trajectories of uncertain T-S fuzzy system from any initial states can be driven to the sliding mode plane described by (5) in a finite time with the control
u (t ) = u eq + u n
(6)
where the equivalent control is n
ueq = − ∑ hi ( B T PB ) −1 B T PAi x(t )
(7)
i =1
in which, for convenience, we use the briefness notation hi to denote hi ( z (t )) . The switching control is n
u n = −∑ hi {( B T PB) −1 [|| B T PH i || . || Ei x(t ) || + || B T PB || δ f (t ) + ε 0 ] sgn( S )} (8) i =1
where ε 0 is a positive constant.
Fuzzy Sliding Mode Control for Uncertain Nonlinear Systems
963
Proof: Consider the Lyapunov function candidate
V = 0 .5 S T S
(9)
which is positive-definite for all S ≠ 0 .The derivative of (9) with respect to time along (2) is
V = S T S = S T B T P x (t ) n
= ∑ hi S T B T P [( Ai + ∆Ai (t )) x(t ) + Bi u (t ) + f i ( x, t )] i =1
Substituting (6)-(8) into the above equation and noting that assumption 1, we get n
V = ∑ hi S T B T P [( Ai + ∆Ai (t )) x(t ) + Bu (t ) + f i ( x, t )] i =1 n
= ∑ hi [ S T B T P∆Ai x(t ) + S T B T PBf i ( x, t ) + S T B T PBu n ] i =1
Considering assumption 2-4 and relation (4),
V in above equation can be expressed
as n
V ≤ ∑ hi || S T || [|| B T PH i || ⋅ || Ei x(t ) || + || B T PB || δ f (t )] + S T B T PBu n i =1
n
= −∑ hi ε 0 S T sgn( S ) i =1
= −ε 0 || S || ≤0 The last inequality is known to show that the system trajectories will be arrived at the sliding mode plane within a finite time. The reaching mode of the uncertain T-S fuzzy system is guaranteed. The proof is completed. The next is to design the robust sliding mode plane such that the system trajectories restricted to the sliding mode plane are robust stable in the presence of both parameter uncertainties and external disturbances. The following results can be obtained. Theorem 2: Under assumptions 1-4, the SMC system (1) will asymptotically stable to sliding mode plane described by (5) with P = Q −1 under the control (6) if there exist symmetric positive define matrices Q > 0 , general matrix L , and scalars ε > 0 , satisfying the following LMIs for i = 1, 2" n
⎡Wi ⎢ ⎣*
QEiT ⎤ ⎥ 0 , we obtain
M ≤ PAi + AiT P + ε PH i H iT P + ε −1 EiT Ei
(15)
By Lemma 1, inequality (15) is equivalent to
⎡ PA + AiT P + ε PH i H iT P EiT ⎤ M ≤⎢ i ⎥ −ε I⎦ Ei ⎣
(16)
Pre-multiplying and post-multiplying (16) by N T and N = diag{P −1 , I } , respectively, then considering relation Ai = Ai − BK , and defining P −1 = Q , L = KQ , we obtain N T MN ≤ Ω i , where
⎡ A Q − BL + QAiT − LT B T + ε H i H iT Ωi = ⎢ i * ⎣
QEiT ⎤ ⎥ −ε I⎦
(17)
If Ω i < 0 , then N T MN < 0 . So we can obtain M < 0 . Furthermore, V ( x, t ) < 0 . The proof is completed. In term of the principle of sliding mode control strategy, it can be concluded that the closed-loop system given by (2), (5) and (6) is global asymptotically stable.
Fuzzy Sliding Mode Control for Uncertain Nonlinear Systems
965
4 Numerical Simulation Consider the uncertain nonlinear system
⎧ x1 (t ) = x2 (t ) ⎨ 3 3 ⎩ x 2 (t ) = −0.1x2 (t ) − 0.02 x1 (t ) − 0.67 x1 (t ) + f ( x, t ) + u (t ) The purpose of control is to achieve closed-loop stability and to attenuate the influence of the exogenous external disturbance. It is also assumed that states is measurable and x1 (t ), x2 (t ) ∈ [−1.5, 1.5] . Let us assume that
− 0.1x23 (t ) = c(t ) x2 (t ) , c(t ) ∈ [−0.225, 0] . Using the same procedure in [5], the nonlinear term can be represented as
− 0.67 x13 (t ) = M 11 ⋅ 0 ⋅ x1 (t ) − (1 − M 11 ) × 1.5075 x1 (t ) By solving the equation, the membership functions M 11 of fuzzy set can be interpreted as
M 11 ( x2 (t )) = 1 −
x22 (t ) x 2 (t ) , M 12 ( x2 (t )) := 1 − M 11 ( x2 (t )) = 2 . 2.25 2.25
By using these fuzzy sets, the nonlinear system can be presented by the following uncertain T-S fuzzy model. Plant Rule 1: IF x2 (t ) is M 11 THEN
x (t ) = ( A1 + ∆A1 (t )) x(t ) + B1 [u (t ) + f1 ( x, t )] Plant Rule 2: IF x2 (t ) is M 12 THEN
x (t ) = ( A2 + ∆A2 (t )) x(t ) + B2 [u (t ) + f 2 ( x, t )] The model parameters are given as follows
1 ⎤ 1 ⎡ 0 ⎡ 0 ⎤ , A2 = ⎢ A1 = ⎢ ⎥ , ∆A1 = H 1 F1 (t ) E1 , ⎥ ⎣− 0.02 − 0.1125⎦ ⎣− 1.5275 − 0.1125⎦
∆A2 = H 2 F2 (t ) E2 , Ei = [0, 1] , H i = [0, 0.1125]T , Fi T (t ) Fi (t ) ≤ I , Bi = [0, 1]T , f i ( x, t ) = 0.2 sin( x1 (t ) + x2 (t )) for i = 1, 2 . Next, solving the LMIs (10) for i = 1, 2 gives
⎡ 0.3285 − 0.2319⎤ Q=⎢ ⎥ , L = [− 0.0131 0.6995 ] , ε = 1.0750 . ⎣− 0.2319 0.2150 ⎦
966
S.-C. Qu and Y.-J. Wang
It then follows from Theorem 1 and Theorem 2 that the closed-loop system is robust stable. The simulation results for the nonlinear systems are shown in Figs. 1-3. For these simulations, the initial values of the system states are x1 (0) = −1.2 , x2 (0) = 1.4 , and control parameter of (12) is ε 0 = 0.1 . Obviously, the state trajectories are attracted toward the sliding mode planes and the global closed-loop nonlinear system is asymptotically stable.
5 Conclusion In this paper, a fuzzy sliding mode control approach has been studied for a class of uncertain nonlinear systems in the presence of both parameter uncertainties and external disturbances. T-S fuzzy model is employed to represent the uncertain nonlinear systems. By using virtual state feedback technique, sufficient conditions for design of robust sliding mode plane are given based on the Lyapunov theory. Design of the sliding mode stabilizing controller based on reaching law concept is proposed, which to guarantee system trajectories convergent to sliding mode plane. The global asymptotically stability of the closed-loop system was proven. It is pointed out that assumption 1 is too strict for general nonlinear system, which is major limitation of the proposed method in this paper. A numerical example with simulation results is given to illustrate the effectiveness of the proposed method. 1.5 x1(t) x2(t) 1 state 0.5
0
-0.5
-1
-1.5
0
2
4
6 Time (sec)
8
Fig. 1. The state of system under the proposed controller
10
Fuzzy Sliding Mode Control for Uncertain Nonlinear Systems
967
0.5 u(t) 0 Control -0.5
-1
-1.5
-2
-2.5
0
2
4
6 Time (sec)
8
10
Fig. 2. The proposed controller 9 sliding mode 8 s(t) 7 6 5 4 3 2 1 0 -1
0
2
4
6 Time (sec)
8
10
Fig. 3. The sliding mode of system
References 1. M. Vidysagar. Nonlinear Systems Analysis. Englewood Cliffs, NJ: Prentice-Hall, 1993. 2. L. X. Wang, “Adaptive fuzzy systems and control: design and stability analysia”, Upper Saddle River, NJ: Prentice-Hall, 1994.
968
S.-C. Qu and Y.-J. Wang
3. K. Tanaka and M.Sugeno. “Stability analysis and design of fuzzy control system”, Fuzzy Sets Syst., 45, pp. 135-156, 1992. 4. K. Tanaka and M.Sano. “Fuzzy stability criterion of a class of linear”, Inform. Sci, 71, pp. 3-26, 1993. 5. K. Tanaka, T. Ikeda, and H. O. Wang. “Robust stabilization of a class of uncertain nonlinear systems via fuzzy control: Quadratic stability, H∞ control theory and linear matrix inequalities”, IEEE Trans. Fuzzy Syst., 4, pp. 1-13, 1996. 6. K. Tanaka, T. Ikeda, and H. O. Wang. “Fuzzy regulators and fuzzy observers: Relaxed stability conditions and LMI-based designs”, IEEE Trans. Fuzzy Syst., 6, pp. 250-265, 1998. 7. G. Reng and P. M. Frank, “Approaches to quadratic stabilization of uncertain fuzzy dynamic system”, Cir. Syst. 48, pp. 760-769, 2001. 8. K. R. Lee and J. H. Kim, “Output feedback robust H∞control of uncertain fuzzy dynamic systems with time-varying delay”, IEEE Trans. Fuzzy Syst, 8, pp. 657-664, 2000. 9. Xing-Ping Guan and Cai-lian Chen, “Delay-dependent guaranteed const control for T-S fuzzy systems with time delays”, IEEE Trans. Fuzzy Syst, 12, pp. 236-249, 2004. 10. Zidong Wang, Daniel W. C. and Xiaohui Liu, “A note on the robust stability of uncertain stochastic fuzzy systems with time-delays”, IEEE Trans on Syst, man, and cyber-part A: Syst and Huma, 34, pp. 570-576, 2004. 11. Rong-Jyun Wang, Wei-Wei Lin and Wen-June Wang, “Stabilizability of linear quadratic state feedback for uncertain fuzzy time-delay systems ”, IEEE Trans on Syst, man, and cyber-part B: Cybernetics, 1, pp. 1-4, 2004. 12. Liu X. and Zhang Q., “New approaches to H∞ controller designs based on fuzzy observers for T-S fuzzy systems via LMI”, Automatica, 39, pp. 1571-1582. 2003. 13. Drakunov, S. V. and Utkin, V. I., “Sliding mode control in dynamic system”, International Journal of Control, 55, pp.1029-1037, 1992. 14. F. Gouaisbaut, M. Darnbrine and J. P. Richard, “Robust control of delay systems: a sliding mode control design via LMI”, Systems & Control Letters, 46, pp.219-230, 2002. 15. Said Oucheriah, “Exponential stabilization of linear delayed systems using sliding-mode controllers”. IEEE transaction on circuit and systems, 1: fundamental theory and application, 50 (6), pp.826-830, 2003. 16. Xiaoqiu Li and R. A. Decarlo. “Robust sliding mode control of uncertain time delay systems”. International Journal of Control, 76(13), pp.1296-1305, 2003. 17. Y. Niu, J.Lam and and X.Wang. “Sliding-mode control for uncertain neutral delay systems”. IEE Proceedings of control theory and applications, 151(1), pp.38-44, 2004. 18. Xinghuo Yu, Ahihong Man and Baolin Wu, “Design of fuzzy sliding mode control systems”, Fuzzy Sets and Systems, 95, pp. 295-306, 1998. 19. S. Boyd, L. E. Ghaoui, E. Feron, and V. Balakrishnan, Linear matrix inequalies in system and control theory. Philadelphia, PA: SLAM, 1994.
Fuzzy Control of Nonlinear Pipeline Systems with Bounds on Output Peak1 Fei Liu and Jun Chen Institute of Automation, Southern Yangtze University, Wuxi, 214122, P.R. China
[email protected] Abstract. A new fuzzy control method for nonlinear pipeline system is discussed in this paper. The nonlinear dynamics of pipeline system is composed by two gravity-flow tanks, and are described by Takagi-Sugeno (T-S) fuzzy model. The controller design is based on overall stability, and is carried out via the parallel distributed compensation (PDC) scheme. To obtain better output dynamic performance, a given bounds is introduced to the output of nonlinear systems. Moreover, by means of linear matrix inequality (LMI) technique, it is shown that the existence of such constrained control system can be transformed into the feasibility of a convex optimization problem. Finally, by applying the designed controller, the simulation results demonstrate the efficiency.
1 Introduction Pipeline system composed by two gravity-flow tanks is a familiar device in the chemical processes. The control of pipeline system is always attracted lots of scientists’ interest because of the nonlinearity of the process dynamics. Among the existing control approaches, there are two typical methods. The first one is based on strict nonlinear mathematical models or on the given point to linearization, but this method has bad robust performance; while the second one is based on expert experience and intelligent control method, however, it doesn’t guarantee the stability of system and lacks universal applicability. Recently, based on passivity theory, paper [4] proposed a control method for pipeline system. Under the precondition of system stability, it provides a class of widely applicable controller design methods. Also based on stability, this paper presents a new fuzzy control method. Pipeline system [4] is first described by a T-S fuzzy model[1]. It is well known that Takagi-Sugeno (T-S) fuzzy models can provide an effective representation of complex nonlinear systems. Different from the approach depended on fuzzy rule tables, in this type of fuzzy models, the premise is lingual variables, while the conclusion is linear combination with certain input variables. It combines linear system theory with fuzzy theory to deal with the control problem for nonlinear system. The controller, which can guarantee the fuzzy system asymptotical stability in the large, is designed based on the 1
Supported by the Key Project of Chinese Ministry of Education (NO105088).
L. Wang and Y. Jin (Eds.): FSKD 2005, LNAI 3613, pp. 969 – 976, 2005. © Springer-Verlag Berlin Heidelberg 2005
970
F. Liu and J. Chen
parallel distributed compensation (PDC) scheme [2]. The stability criterion is come down to find a common symmetric positive definite matrix P to satisfy a set of Lyapunov matrix inequalities. And under the condition of constraints on the system output, design conditions for the stability and output constraints can be reduced to convex problems involving linear matrix inequalities [3] (LMIs). Moreover, due to the interior-point algorithm, the feasibility problem of LMIs can be solved very efficiently by means of MATLAB LMI toolbox. At last, the designed controller is employed in a simulation experiment, and the results demonstrate the efficiency of this method.
2 T-S Fuzzy Model and Stability Consider the following affine nonlinear system:
x = f ( x) + g ( x )u
(1)
where x ∈ R n is the state vector, u ∈ R m is the input vector, f (x) and g (x) are both nonlinear functions. The T-S fuzzy models are described by fuzzy IF-THEN rules, each of which represents the local linear subsystem in a different state-space region. The i th rule of the fuzzy system (1) is given in the following form: Rule i :
IF z1 (t ) is M i1 and … and z p (t ) is M ip
(2)
THEN x (t ) = Ai x(t ) + Bi u (t ) , i = 1,2,..., r. where z1 (t ) , z 2 (t ) ,…, z p (t ) are the premise variables. In general, these parameters may be functions of the state variables, external disturbances, and/or time. M ij (i = 1,2,..., r , j = 1,2,..., p ) are fuzzy sets, x(t ) ∈ R n is the state vector, u (t ) ∈ R m is the input vector, r is the number of IF-THEN rules, and Ai , Bi are some constant matrices of appropriate dimensions. Using weighted average method for defuzzification, the final output of the fuzzy systems are inferred as follows: ∑ r w ( z (t ))[ Ai x(t ) + Bi u (t )] x (t ) = i =1 i r ∑i =1 wi ( z (t )) (3) r = ∑ hi ( z (t ))[ Ai x(t ) + Bi u (t )] i =1
where p
wi ( z (t )) = ∏ M ij ( z j (t )) , j =1
hi ( z (t )) =
wi ( z (t )) r ∑i =1 wi ( z (t ))
, i = 1,2,..., r .
Fuzzy Control of Nonlinear Pipeline Systems with Bounds on Output Peak
971
in which M ij ( z j (t )) is the grade of membership of z j (t ) in the fuzzy set M ij , hi ( z (t )) is the possibility for the i th rule to fire. It is easy to find that wi ( z (t )) ≥ 0 , i = 1,2,..., r , ∑ir=1 wi ( z (t )) > 0 for all t . Therefore hi ( z (t )) ≥ 0 , i = 1,2,..., r , and
∑ir=1 hi ( z (t )) = 1 for all t . Then the method of the PDC scheme is utilized to design the fuzzy controller to stabilize fuzzy system (2) . The idea of the PDC scheme is to design a compensator for each rule of the T-S fuzzy model. The resulting overall fuzzy controller, which is nonlinear in general, is a fuzzy blending of each individual linear controller. The fuzzy controller shares the same fuzzy sets with the fuzzy system (2) , so the i th control rule for the fuzzy system (2) is described as follows: Rule i : IF z1 (t ) is M i1 and … and z p (t ) is M ip (4) THEN u (t ) = − Fi x(t ) , i = 1,2,..., r . where Fi is the local state feedback gain matrix. The overall state feedback fuzzy control law is represented by u (t ) = −
∑ir=1 wi ( z (t )) Fi x(t ) ∑ir=1 wi ( z (t ))
r
= − ∑ hi ( z (t )) Fi x(t ) i =1
(5)
Now, substituting (5) into (3) , we can obtain the following closed-loop system: r
x (t ) = ∑ hi2 ( z (t ))G ii x(t ) + i =1
r
∑
i , j =1,i < j
hi ( z (t ))h j ( z (t ))(G ij + G ji ) x(t )
(6)
where Gij = Ai − Bi F j , i, j = 1,2,..., r and i < j ≤ r . The design of the state feedback fuzzy controller is to determine the local feedback gains Fi such that the above closed-loop fuzzy system is asymptotically stable. Based on the stability theory, it can be seen that the equilibrium of the fuzzy system described by (6) is asymptotically stable in the large if there exist a common symmetric positive definite matrix P such that the following matrix inequalities are satisfied: P>0
(7)
GiiT P + PGii < 0
(8)
(Gij + G ji ) T P + P(Gij + G ji ) < 0
(9)
where Gij = Ai − Bi F j , i, j = 1,2,..., r and i < j ≤ r .
972
F. Liu and J. Chen
According to Schur complement, it is easy to find that the matrix inequalities (7) , (8) and (9) are equivalent to the following LMIs [3], respectively: X >0
(10)
Ai X + XAiT − Bi Yi − YiT BiT < 0
(11)
Ai X + XAiT + A j X + XATj − Bi Y j − Y jT BiT − B j Yi − YiT B Tj < 0
(12)
where X = P −1 , Yi = Fi X , i, j = 1,2,..., r and i < j ≤ r .
The above conditions are LMIs with respect to variables X , Yi . This feasibility problem can be solved very efficiently by means of the recently developed interior-point methods. Furthermore, if matrices exist which satisfies these inequalities, then the feedback gains are given by Fi = Yi X −1 .
3 Pipeline System Control Consider the following series arrangement of two identical gravity-flow tanks equipped with outlet pipes (see Fig.1). The dynamic equations of this system are described as follows:
x1 = x 2 = x 3 = x 4 =
Ap g L
x2 −
Kf
ρA p2
x12
(
1 FC max α −(1−u ) − x1 At Ap g L
x4 −
Kf
) (13)
x2 2 3 ρA p
1 (x1 − x 3 ) At
where x1 and x 3 are the volumetric flow rates of liquid leaving the tanks via the pipes, x 2 and x 4 are the heights of the liquid in the tank, respectively. FC max = 2 m 3 s is the maximum value of the volumetric rate of fluid entering the first tank, g = 9.81 m s 2 is the gravitational acceleration constant, L = 914m is the length of the pipes, K f = 4.41 Ns 2 m 3 is the friction factor, ρ = 998 kg m 2 is the density of the liquid, A p = 0.653m 2 is the cross-sectional area of the pipes, and At = 10.5m 2 is the cross-sectional area of each of the tanks. The parameter α = 5 is the range ability parameter of the valve and the control input u is the valve position.
Fuzzy Control of Nonlinear Pipeline Systems with Bounds on Output Peak
973
Fig. 1. A series of two gravity flow tanks
In order to represent the system (13) in the format given by (1) , we regard the control input term via the following auxiliary variable v v = FC max α −(1−u )
(14)
Then the system (13) can be transformed into the following form: x = f ( x) + g ( x)v
(15)
where
K f 2⎤ ⎡ Ap g x2 − x1 ⎥ ⎢ ρA p2 ⎥ ⎢ L ⎡0 ⎢ ⎥ 1 ⎢1 − x1 ⎢ ⎥ At ⎢ ⎢ ⎥ f (x ) = , g ( x ) = ⎢ At K f 2⎥ ⎢ Ap g ⎢0 x4 − x3 ⎥ ⎢ ρA p2 ⎥ ⎢⎣ 0 ⎢ L ⎢ ⎥ 1 (x1 − x3 ) ⎥ ⎢ ⎣ At ⎦
⎤ ⎥ ⎥ ⎥. ⎥ ⎥⎦
Using the linearization method introduced in paper [5], the system (15) can be illustrated by the following T-S fuzzy models. This linearization method overcomes the disadvantage of the Tailor series approach, and adopts local linearization idea to provide a new linearization method for nonlinear systems. Rule 1: IF x 4 (t ) is about 1 THEN x (t ) = A1 x(t ) + B1v(t ) Rule 2: IF x 4 (t ) is about 5 THEN x (t ) = A2 x(t ) + B2 v(t )
974
F. Liu and J. Chen
where ⎡ − 0.0153 0.0091 0.0017 0.0021⎤ ⎢− 0.0952 0 0 0 ⎥⎥ A1 = ⎢ , ⎢ 0.0017 0.0021 − 0.0153 0.0091⎥ ⎥ ⎢ 0 − 0.0952 0 ⎦ ⎣ 0.0952 ⎡− 0.0370 0.0101 0.0011 0.0031⎤ ⎢− 0.0952 0 0 0 ⎥⎥ A2 = ⎢ , ⎢ 0.0011 0.0031 − 0.0370 0.0101⎥ ⎥ ⎢ 0 − 0.0952 0 ⎦ ⎣ 0.0952 ⎡ 0 ⎤ ⎢0.0952⎥ ⎥. B1 = B2 = ⎢ ⎢ 0 ⎥ ⎥ ⎢ ⎣ 0 ⎦ Selecting membership functions of the form h1 ( x 4 ) = 1 − ( x 4 − 1) 4 , h2 ( x 4 ) = 1 + ( x 4 − 5) 4 , solving LMIs (10) , (11) , (12) , a feasible solution is ontained ⎡ 1551.5 − 394.8 1284.8 − 383.5 ⎤ ⎢− 394.8 1822.4 69.9 − 1028.4⎥⎥ X =⎢ , ⎢ 1284.8 69.9 1762.9 403.4 ⎥ ⎥ ⎢ ⎣− 383.5 − 1028.4 403.4 4982.0 ⎦ F1 = [3.9787 4.0388 − 4.1664 1.4611] , F2 = [4.3154 4.1636 − 4.4364 1.5346] . Then the fuzzy state feedback controller is constructed as follows: u = −(h1 F1 + h2 F2 )( x − x d ) + v d
(16)
where ( x d , v d ) is the desired operating point of the system (15) , choosing x d = [1.8389 5.0 1.8389 5.0]T , v d = 1.8389 . To test the performance of the controller design, applying the controller (16) to system (15) , the simulation results are described in Fig.2 with the initial condition x 0 = [1.3 2.5 1.3 2.5]T . From Fig.2, it can be seen that the designed fuzzy controller globally stabilizes the system (15) .
Fuzzy Control of Nonlinear Pipeline Systems with Bounds on Output Peak
975
Fig. 2. Simulation results of pipeline system without output constraints
However, the dynamical response of liquid height x 2 is not perfect. To obtain better dynamical performance, output constraints will be introduced to control system. Assume that z = Cx is the controlled output vector, where x = [ x1 x 2 x3 x 4 ]T is system vector, matrix C ∈ R1×4 is selected by requirement. Here, the aim is to limit the liquid height x 2 , so we choose C = [0 1 0 0] , that is z = x 2 . Under the known initial condition x 0 and the upper limit ξ > 0 , the system (15) is asymptotically stable in the large and satisfies the condition z ≤ ξ if there exist matrix X such that LMIs (10) - (12) and the following matrix inequalities are satisfied: ⎡X ⎢xT ⎣ 0
⎡X ⎢ ⎣⎢CX
x0 ⎤ >0 ξ ⎥⎦
XC T ⎤ ⎥>0 ξ ⎦⎥
(17)
(18)
For the sake of comparing with Fig.2, we choose the initial condition and the desired operating point just the same as in Fig.2. Consider that this method has a stated conservativeness, so the value of variable ξ can be tested for several times, such as ξ = 24 . Then, by solving LMIs (10) - (12) and (17) , (18) , the feasible solutions are as follows: 8.4 ⎤ ⎡322.7 − 5.1 290.4 ⎢ − 5.1 17.4 − 0.6 − 13.9 ⎥ ⎥, X =⎢ ⎢290.4 − 0.6 370.7 119.7 ⎥ ⎥ ⎢ ⎣ 8.4 − 13.9 119.7 1008.4⎦ F1 = [4.0886 75.1448 −4.3597 1.5083] , F2 = [4.1009 75.1490 −4.3691 1.5094] .
976
F. Liu and J. Chen
Using the above feedback gains F1 , F2 to construct a fuzzy controller, the simulation results are depicted in Fig.3. From Fig.3, it is obvious to see that the overshoot of liquid height x 2 curve is reduced, while other system response curves are not influenced.
Fig. 3. Simulation results of pipeline system with output constraints
4 Conclusions In this paper, the control problem for nonlinear pipeline system described by the T-S fuzzy model is carried out. It is shown that the nonlinear system can be represented by a set of local models, and the controllers for each local model are designed via PDC scheme. It is also shown that the stable condition can be transformed into convex problems in terms of LMIs. To improve the response performance of system, control constraints is introduced to the system output variable. The simulation results demonstrate that the method is effective.
References 1. Tanaka, K., Sugeno, M.: Stability Analysis and Design of Fuzzy Control Systems. Fuzzy Sets and Systems, Vol.45, (1992) 135-156 2. Wang, H.O., Tanaka, K., Griffin, M.F.: Parallel Distributed Compensation of Nonlinear Systems by Takagi-Sugeno Fuzzy Model. Proc. FUZZ-IEEE/IFES’95, (1995) 531-538 3. Boyd, S., EI Ghaoui, L., Feron, E., Balakrishnan, V.: Linear Matrix Inequalities in System and Control Theory. Philadelphia, PA: SIAM, (1994) 4. Sira-Ramirez, H., Angulo-Nunez, M.I.: Passivity-Based Control of Nonlinear Chemical Processes. Int. J. Control, Vol.68, (1997) 971-996 5. Marcelo C. M. Teixeira, Stanislaw H. Zak: Stabilizing Controller Design for Uncertain Nonlinear Systems Using Fuzzy Models. IEEE Trans. Fuzzy Syst., Vol.7, (1999) 133-142
Grading Fuzzy Sliding Mode Control in AC Servo System Hu Qing1, Qingding Guo 2, Dongmei Yu1, and Xiying Ding 2 1
School of Electrical Engineering, Shenyang University of Technology, Postalcode 110023, No.58 Xinghua South Street, Tiexi District, Shenyang, Liaoning, P.R. China {aqinghu, yu_dm163}@163.com 2 School of Electrical Engineering, Shenyang University of Technology, Postalcode 110023, No.58 Xinghua South Street, Tiexi District, Shenyang, Liaoning, P.R. China {guoqd, dingxy}@sut.edu.cn
Abstract. In this paper a strategy of grading fuzzy sliding mode control (FSMC) applied in the AC servo system is presented. It combines the fuzzy logic and the method of sliding mode control, which can reduce the chattering without decreasing the system robustness. At the same time, the exponent approaching control is added by grading. The control strategy makes the response of the system quick and no overshoot. It is simulated to demonstrate the feasible of the proposed method by MATLAB6.5 and the good control effect is received.
1 Introduction Since the 1990's, the combination of the fuzzy logic control and sliding mode control has been widely researched and applied. The most significant aspect is FSMC characterized by illumination [1], [2]. The combination of the fuzzy control and sliding mode control can solve the existed problems and keep the system stability effectively. To solve the problem of chattering this paper presents the method using the fuzzy reasoning to decide the magnitude of the control. At the same time, the exponent approaching method is adopted in the start-up process of the system, which can speed up the response of the system and make the time of response is secondary priority.
2 System Analysis and Controller Design 2.1 Design of Sliding Mode Controller The control methods are chosen by the judgment of s to adopt either fuzzy sliding mode control or exponent approaching control. The speed differential equation of the permanent magnet synchronous motor (PMSM) follows as[3]:
P P dω r B = − ω r + n K t iq − n TL dt J J J L. Wang and Y. Jin (Eds.): FSKD 2005, LNAI 3613, pp. 977 – 980, 2005. © Springer-Verlag Berlin Heidelberg 2005
(1)
978
H. Qing et al.
where J, ωr, B, TL, Pn and iq denote inertia, angular velocity, friction factor, load torque, pairs of poles and quadrate-axis current, respectively. Kt is constant. Suppose x1 = θr−θ and x 2 = x1 = −ω r . Where, θr is the reference position of the rotor and θ is the practical position of the rotor. The state equation is
⎧ x1 = x 2 ⎨ ⎩ x 2 = −ax 2 − bu + F (t )
(2)
where a = B0/J0, b = KtPn/J0, F(t) = d0TL + g⋅dwr/dt + hwr, and u = iq is the input. J0, B0 is ideal value, d0 = Pn/J0, g = ∆J/Pn, h = ∆B/Pn, and ∆J, ∆B is the practical changing variable of them. The sliding mode line is defined as s = cx1 + x2. Using the following control rules: ⎧⎪u = u1 ⎨ ⎪⎩u = u 2
if s ≥ s 0
(3)
if s < s 0
where u1 is exponent approaching control and u2 is fuzzy sliding mode control. Calculation of the exponent control rules: Supposing u = u1, so u1 can be derived:
u1 =
1 [(c − a + k ) x 2 + c ⋅ k ⋅ x1 + ε ⋅ sign( s )] b
(4)
where k, ε are the constants of control. Calculation of FSMC rules: The reachable condition s ⋅ s < 0 must be satisfied. Suppose u = u2, substituting (2) to s ⋅ s < 0 , we have
1 1 ⎧ ⎪u2 > b (c − a ) x2 + b F ⎨ 1 1 ⎪u2 < (c − a ) x2 + F b b ⎩
when s > 0 . when s < 0
(5)
Define u2 = ueq + uk. The control law superposes equivalent control ueq and switching control uk. ueq is the necessary control effort that makes the state trajectory remains on the sliding mode line. The action of uk is that any initial state point on phase plane can come to the sliding mode line in finite interval. So the achievable property can be satisfied. 2.2 Design of Fuzzy Sliding Mode Controller
The physical meaning of fuzzy controller is that the distance between state point and sliding mode line and the derivative of it reasons out the output. Chosen of membership: The domain of s, s , uk are chosen as [−3, 3]. On this domain, the 7 fuzzy subsets are set, [NB, NM, NS, ZO, PS, PM, PB]. The membership of them is shown in Fig. 1. As shown in Fig. 1, the membership near ZO is lumping and the degree of reasoning is stronger, which makes the control effort soft.
Grading Fuzzy Sliding Mode Control in AC Servo System
979
Fig. 1. Fuzzy sets on input and output variables domain Table 1. Fuzzy reasoning rules
s uk s NB NM NS ZO PS PM PB
NB
NM
NS
ZO
PS
PM
PB
NB NB NB NB NS PB PB
NB NB NM NM NM PB PB
NB NM NS NS PS PM PB
NB NM NB ZO PB PM PB
NB NM NS PS PS PM PB
NB NB PM PM PM PB PB
NB NB PS PB PB PB PB
Chosen of control rules: The following rules in table 1 are adopted. Where s is the distance between the state point and the sliding mode line and s is the velocity of it.
3 Simulations of the System Mathematical model and parameters of an AC servo motor: G(s) = Pn /(J+B), where Pn = 2, J = 0.0013 Kgm2, B = 0.0026Nm/(rad ⋅ s). Kt = 0.0124. Chosen of control parameters: c = 5, K1 = 3/4, K2 = 3/100, K3 = 5/3, where K1, K2 are the quality factors, K3 is the anti-quality factor. s0 = 10, k = 10, ε = 0.2. Simulation parameters: the reference position θr = 5, the simulation step is 0.01s. At 3s, the step overload is added. PID parameters: KP = 10, KI = 2, KD = 0.5. Output responses of system with step interruption: Fig. 2 and Fig. 3 are the PID control and FSMC with a step overload, respectively. Compared with the traditional PID methods, as shown in Figures, FSMC has the property of no overshoot, quick response and strong robustness. Simulations of reducing chattering: Fig. 4 is the phase trajectory of traditional sliding mode control. Fig. 5 is the phase trajectory of FSMC. It is obvious that FSMC has the good control effect in reducing chattering.
980
H. Qing et al.
Fig. 2. The PID control with step overload
Fig. 4. Error state vector of sliding mode
Fig. 3. The FSMC with step overload
Fig. 5. Error state vector of fuzzy sliding mode
4 Conclusions The simulations show that the grading FSMC presented by this paper reduces the chattering by controlling the output with the sliding mode variables s and s . It keeps the robustness to the parameters vary and interruption. Moreover, the exponent approaching control is added by grading, so the response of the system is sped up and the very good effect is received.
References 1. Wang, F. Y.: Sliding Mode Variable Structure Control. Mechanics Industry Press, Beijing (1995). 2. Ha, Q. P., Rye, D.C., Durrant -Whyte, H.F.: Fuzzy Moving Sliding Mode Control with Application to Robotic Manipulators. Automatics, 35 (1999) 607-616. 3. Guo, Q. D., Wang, C. Y.: AC Servo System. Mechanics Industry Press, Beijing (1994).
A Robust Single Input Adaptive Sliding Mode Fuzzy Logic Controller for Automotive Active Suspension System Ibrahim B. Kucukdemiral1 , Seref N. Engin1 , Vasfi E. Omurlu2 , and Galip Cansever1 1
2
Yildiz Technical University, Department of Electrical Engineering, Turkey {beklan, nengin, cansever}@yildiz.edu.tr, Yildiz Technical University, Department of Mechanical Engineering, Turkey
[email protected] Abstract. The proposed controller in this paper, which combines the capability of fuzzy logic with the robustness of sliding mode controller, presents prevailing results with its adaptive architecture and proves to overcome the global stability problem of the control of nonlinear systems. Effectiveness of the controller and the performance comparison are demonstrated with chosen control techniques including PID and PD type self-tuning fuzzy controller on a quarter car model which consists of component-wise nonlinearities.
1
Introduction
There are two major objectives in the studies of Automotive Active Suspension Systems (AASSs) that are to improve ride comfort by reducing the vertical acceleration of the sprung mass and to increase holding ability of the vehicle by providing adequate suspension deflections. To overcome the problems that originate from the complexity and nonlinearity of vehicle systems, various kinds of Fuzzy Logic Controllers (FLCs) are suggested such as [1,2,3,4,5].In this work, we propose a novel, robust, simple and industrially applicable FLC with a single state feedback for AASSs, where the stability of the controller is secured in the sense of Lyapunov. The other benefit is that, the rule-base for the proposed controller does not need to be tuned by an expert since an adaptation mechanism takes the responsibility for tuning. Online simulations of the proposed system verify that the proposed suspension controller exhibits much better performances compared to other methods namely, the passive suspension, classical Proportional-IntegralDerivative (PID) controller and PD-type self-tuning fuzzy controller (STFPD), when testing with standard bumps and random road profiles.
2
Modelling the Active Suspension System
A typical active suspension system for a quarter car model is illustrated in Fig. 1. It is assumed that the tire never leaves the ground. The actuator force is denoted with fu . Nonlinear spring and damping forces are provided as L. Wang and Y. Jin (Eds.): FSKD 2005, LNAI 3613, pp. 981–986, 2005. c Springer-Verlag Berlin Heidelberg 2005
982
I.B. Kucukdemiral et. al.
Fig. 1. Quarter car model used
fs = ks (zu − zs ) (ks /4) (zu3 − zs3 ) fb = bs |z˙u − z˙s |(z˙u − z˙s ).
(1)
Here damping and spring coefficients are denoted by, bs = 1000 Nsec/m and, ks = 45000N/m respectively. Car body displacement, zs , wheel displacement, zu , and road displacement, zr , are all measured from the static equilibrium position. The dynamic equations of the quarter car active suspension system, neglecting nonlinear effects for now, are ms z¨s = ks (zs − zu ) + bs (z˙s − z˙u ) + fu mu z¨u = −ks (zs − zu ) − bs (z˙s − z˙u ) − fu + kt (zr − zu )
(2)
where ms = 250.3 kg and mu = 30.41 kg are the masses of the car body and the wheel respectively, z¨s denotes the acceleration of the car body and kt = 150000 N/m is the tire spring constant.
3
Design of a Robust Single-Input Adaptive Fuzzy Sliding Mode Controller for AASS
In general, a compact dynamic equation for AASS can be regarded as a second order differential equation, such as z¨s (t) = −
B K Td Fu z˙s (t)− zs (t)− + ≡ Ap z˙s (t)+Kp zs (t)+Dp Td +Mp u(t) (3) M M M M
where M is the total mass of the body, B is the damping coefficient, Td is the term for total unknown load disturbances and finally Fu is the applied control force to the system. The error of sprung mass displacement is e(t) = zs (t) − zd (t) where zd (t) is the desired displacement of the car body. The first step in sliding mode controller design process is choosing the sliding surface. Therefore the following sliding surface is employed t s(t) = z˙s (t) − [¨ zd (τ ) − k1 e(τ ˙ ) − k2 e(τ )] dτ. (4) 0
A Robust Single Input Adaptive Sliding Mode
983
Assuming that the system dynamics are well known, a feedback linearization u (t) can be applied. However this control law cannot be directly applicable. Thus, one method to overcome this difficulty is to imitate the feedback linearization method given by an adaptive fuzzy logic controller such as N wr θr uf z (s) = r=1 N r=1 wr
(5)
where θr , r = 1, 2, . . . , N are the discrete singleton control signals labelled as adjustable parameters and wr is the firing of r. rule. In the present work, we have chosen Gaussian membership functions for the fuzzification process of s. And N is chosen to be 7. If θr is chosen as an adjustable parameter, (5) can be regarded as uf z = ΘT ∆ where Θ = [θ1 , θ2 , . . . , θN ]T and ∆ = [δ1 , δ2 , . . . , δN ]T are the corresponding vectors. According to the universal approximation property of the fuzzy systems (5) can be approximated by a fuzzy system with a bounded approximation error ε with a bound ρ such as u (t) = uf z (t) + ε = Θ T ∆ + ε. ˆ ˆ as the approximation of Let u ˆf z = Θ∆ be the approximation of u (t) and Θ Θ . Moreover, in order to compensate the approximation error u ˆf z − u we add a variable structure controller term to the control signal which results u(t) = u ˆf z + uvs . Then, after some algebraic manipulations, one can achieve the error dynamics as e¨ + k1 e˙ + k2 e = Mp [ˆ uf z + uvs − u ] = s. ˙ Denoting u ˆ f z − u = T ˆ ˜ ˜ u ˆf z − uf z − ε as u ˜f z and Θ − Θ as Θ then u˜f z = Θ ∆ − ε is obtained. the variable structure controller is defined as uvs = −ρ · sat(s/Φ) where ρ is the variable switching gain and Φ is the thickness of the boundary layer and chosen as 0.0008 for the present application and sat is the saturation function. If ρ denotes the equivalent gain and ρˆ denotes the estimated gain then the error for switching gain will be ρ˜ = ρˆ − ρ . In order to achieve minimum approximation error and to guarantee the existence of sliding mode, we have chosen a Lyapunov function candidate as, ˜ ρ˜) = 1 + Mp Θ ˜TΘ ˜ + Mp ρ˜2 V (s, Θ, 2 2γ1 2γ2
(6)
where γ1 and γ2 are positive constants. then the time derivative of V along the trajectory will be ˜ ρ˜) = s.Mp [−ρˆ · sat(s/Φ) − ε] + Mp (ˆ V˙ (s, Θ, ρ − ρ )ρˆ˙ γ2
(7)
Then choosing ρˆ˙ = γ2 sat(s/Φ), and taking into consideration that ρ˜˙ = ρˆ˙ and ˜˙ = Θ ˆ˙ = γ1 · s · ∆, one can easily bring 7 to result, V˙ = −Mp |s| (ρ − |ε|) ≤ 0 Θ Finally by using Barbalat’s lemma, one can easily show that the stability of the proposed controller and adaptation laws are achieved in the sense of Lyapunov. The final structure of the proposed controller is shown in Fig. 2.
984
I.B. Kucukdemiral et. al.
Fig. 2. Block diagram of the proposed controller
4
Simulation Results
To evaluate the proposed controller presented above, a simulation environment of AASS is created for the quarter car model which has component-wise nonlinearities within. Damper and spring element nonlinearities in the system dynamics equations are chosen as in actual components. Vehicle speed of 72 km/h is chosen and two types of road profiles are prepared for controller performance evaluation: standard bump-type surface profile with 10cm length × 10cm height and a random road profile generated to simulate stabilized road with 1cm × 1cm pebbles. Open loop, PID and STFPD are employed along with the proposed controller.The rule-bases for the main FLC and self-tuning mechanism are chosen, as described in [6]. On the other hand, PID controller is, first, tuned with well-known Ziegler-Nichols tuning method and then, with numerous repetitive simulations around previously obtained acceptable PID controller values [7]. Although the initial rule table of the proposed controller is not important because of the self organizing structure of the proposed controller, initial value of the adjustable vector Θ is chosen as Θ = [5000 3000 1000 0 −1000 −3000 −5000]T .The values of k1 and k2 are chosen to be 10.1 and 0.16, respectively. First, bump-type road profile is applied to the system for four types of controllers. Car body displacements are plotted as shown in Fig. 3. In Fig. 3, the proposed controller clearly produces the shortest response time of 0.85 sec. and the lowest peak value of 0.4 cm. Open loop response has continuing oscillations of 25 sec. and also it has a high peak value. PID controller decreases the peak value while shortening the response time. On the other hand, STFPD still has a higher peak value and slower response time compared to the proposed controller. However, for chosen PID parameters, classical PID seems to perform better than PD type self-tuning fuzzy controller.
A Robust Single Input Adaptive Sliding Mode
985
Fig. 3. Car Body Displacement for a simulated bump (Proposed controller: solid bold; passive suspension: dotted; PID controller: dash; STFPD controller: dot-dash)
Fig. 4. Body displacement for random road profile (Proposed controller: solid bold; passive suspension: dotted; PID controller: dot-dash; STFPD controller: dash)
As a second experiment, proposed controller and evaluatory ones have been tested using the random road profile. Fig. 4 shows the response of all four controllers for this road profile and it is obvious that the proposed sliding mode controller has overwhelming success over other controllers. Summary of all responses to two kinds of road conditions can be seen through Table 1.
5
Conclusion
A novel single-input adaptive fuzzy sliding mode controller is proposed and successfully employed to control AASS with component-wise nonlinearities. The strategy is robust and industry applicable since it has a single input FLC as a
986
I.B. Kucukdemiral et. al. Table 1. Comparison of the controller performances
controller
Proposed Passive (Open loop) STFPD PID
z¨s random road (rms) 0.1288 0.4994 0.3209 0.2936
z¨s bumpy road (rms) 1.2538 1.6369 1.4876 1.4097
Sett. time First peak (meter) 0.0040 0.0133 0.0135 0.0112
bumpy road (sec.) 0.85 25 6 4.15
main controller. Thus, the rule base of the FLC drastically decreases when it is compared with the traditional FLCs. On the other hand, the efficiency of the controller is improved by combining a sliding mode compensator which also has an adaptive structure. The stability of the proposed scheme is achieved in the sense of Lyapunov. In order to demonstrate the effectiveness of the proposed method, the controller is applied to the suspension system in comparison with the passive suspension, PID controller and STFPD. Road profiles that are tested are a simulated random road surface and a bump. The simulation results show that the proposed control scheme improves the ride comfort considerably when compared to the aforementioned controller structures.
References 1. Ting, C.S., Li, T.H.S., Kung, F.C.: Design of fuzzy controller for active suspension system. Mechatronics. 5 (1993) 457–470 2. Yeh, E.C., Tsao, Y.J.: A fuzzy preview control scheme of active suspension for rough road. Int. J. Vehicle Des. 15 (1994) 166–180 3. Rao, M.V.C., Prahald, V.: A tunable fuzzy logic controller for vehicle-active suspension system. Fuzzy Sets and Syst. 85 (1997) 11–21 4. D’Amato, F.J., Viassolo, D.E.: Fuzzy control for active suspensions. Mechatronics. 10 (2000) 897–920 5. Huang, S.J., Lin, W.C.: Adaptive fuzzy controller with sliding surface for vehicle suspension control. IEEE Trans. on Fuzzy Syst. 11 (2003) 550–559 6. Mudi, R.K., Pal, N.R.: A robust self-tuning scheme for PI- and PD-type fuzzy controllers. IEEE Trans. on Fuzzy Syst. 7 (1999) 2–16 7. Omurlu, V.E., Engin, S.N., Kucukdemiral, I.B.: A Robust Single Input Adaptive Sliding Mode Fuzzy Logic Controller for a Nonlinear Automotive Suspension System. MED’04 Conference, June 6-9, (2004), Kusadasi, Aydin, Turkey.
Construction of Fuzzy Models for Dynamic Systems Using Multi-population Cooperative Particle Swarm Optimizer Ben Niu1, 2, Yunlong Zhu1, and Xiaoxian He1, 2 1
Shenyang Institute of Automation, Chinese Academy of Sciences, 110016, Shenyang, China, 2 School of Graduate, Chinese Academy of Sciences, 100039, Beijing, China {niuben, ylzhu}sia.cn
Abstract. A new fuzzy modeling method using Multi-population Cooperative Particle Swarm Optimizer (MCPSO) for identification and control of nonlinear dynamic systems is presented in this paper. In MCPSO, the population consists of one master swarm and several slave swarms. The slave swarms execute Particle Swarm Optimization (PSO) or its variants independently to maintain the diversity of particles, while the particles in the master swarm enhance themselves based on their own knowledge and also the knowledge of the particles in the slave swarms. The MCPSO is used to automatic design of fuzzy identifier and fuzzy controller for nonlinear dynamic systems. The proposed algorithm (MCPSO) is shown to outperform PSO and some other methods in identifying and controlling dynamic systems.
1 Introduction The identification and control of nonlinear dynamical systems has been a challenging problem in the control area for a long time. Since for a dynamic system, the output is a nonlinear function of past output or past input or both, and the exact order of the dynamical systems is often unavailable, the identification and control of this system is much more difficult than that has been done in a static system. Therefore, the soft computing methods such as neural networks [1-3], fuzzy neural networks [4-6] and fuzzy inference systems [7] have been developed to cope with this problem. Recently, interest in using recurrent networks has become a popular approach for the identification and control of temporal problems. Many types of recurrent networks have been proposed, among which two widely used categories are recurrent neural networks (RNN) [3, 8, 9] and recurrent fuzzy networks (RFNN) [4, 10]. On the other hand, fuzzy inference systems have been developed to provide successful results in identifying and controlling nonlinear dynamical systems [7, 11]. Among the different fuzzy modeling techniques, the Takagi and Sugeno’s (T-S) type fuzzy controllers have gained much attention due to its simplicity and generality [7, 12]. T-S fuzzy model describes a system by a set of local linear input-output relations and it is seen that this fuzzy model can approach highly nonlinear dynamical systems. L. Wang and Y. Jin (Eds.): FSKD 2005, LNAI 3613, pp. 987 – 1000, 2005. © Springer-Verlag Berlin Heidelberg 2005
The bottleneck in the construction of a T-S model is the identification of the antecedent membership functions, which is a nonlinear optimization problem. Typically, both the premise parameters and the consequent parameters of the T-S fuzzy model are adjusted using gradient descent optimization techniques [12-13]. Those methods are sensitive to the choice of initial parameters, easily get stuck in local minima, and have poor generalization properties. This hampers the a posteriori interpretation of the optimized T-S model. The advent of evolutionary algorithms (EA) has attracted considerable interest in the construction of fuzzy systems [14-16]. In [15], [16], EAs have been applied to learn both the antecedent and consequent parts of fuzzy rules, and models with both fixed and varying numbers of rules have been considered. Compared to traditional gradient-based computation, evolutionary algorithms provide a more robust and efficient approach to the construction of fuzzy systems. Recently, a new evolutionary computation technique, the particle swarm optimization (PSO) algorithm, was introduced by Kennedy and Eberhart [17, 18], and it has already come to be widely used in many areas [19-21]. As mentioned by Angeline [22], the original PSO, while successful in the optimization of several difficult benchmark problems, presented problems in controlling the balance between exploration and exploitation, namely when fine tuning around the optimum is attempted. In this paper we deal with this issue by introducing a multi-population scheme, which consists of one master swarm and several slave swarms. The slave swarms evolve independently to supply new promising particles (the positions giving the best fitness values) to the master swarm as evolution goes on. The master swarm updates the particle states based on the best positions discovered so far by all the particles, both in the slave swarms and in its own. The interactions between the master swarm and the slave swarms control the balance between exploration and exploitation and maintain the population diversity, even when it is approaching convergence, thus reducing the risk of convergence to local sub-optima. The paper presents a novel fuzzy modeling strategy for identification and control of nonlinear dynamical systems. We use the MCPSO algorithm to design a T-S type fuzzy identifier and fuzzy controller for nonlinear dynamic systems, and the performance is compared to other methods to demonstrate its effectiveness. The paper is organized as follows. Section 2 gives a review of PSO and a description of the proposed algorithm MCPSO. Section 3 describes the T-S model and a detailed design algorithm of the fuzzy model by MCPSO. In Section 4, simulation results for one nonlinear plant identification problem and one nonlinear dynamical system control problem using fuzzy inference systems based on MCPSO are presented. Finally, conclusions are drawn in Section 5.
2 PSO and MCPSO
2.1 Particle Swarm Optimization
Particle Swarm Optimization (PSO) is inspired by natural concepts such as fish schooling, bird flocking and human social relations. The basic PSO is a population-
based optimization tool, where the system is initialized with a population of random solutions and searches for optima by updating generations. In PSO, the potential solutions, called particles, fly in a D-dimensional search space with a velocity which is dynamically adjusted according to their own experience and that of their neighbors. The location and velocity of the ith particle are represented as $x_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$ and $v_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$, respectively. The best previous position of the ith particle is recorded and represented as $P_i = (P_{i1}, P_{i2}, \ldots, P_{iD})$, which is also called pbest. The index of the best particle among all the particles in the population is represented by the symbol $g$, and $P_g$ is called gbest. At each time step $t$, the particles are manipulated according to the following equations:

$v_i(t+1) = v_i(t) + R_1 c_1 (P_i - x_i(t)) + R_2 c_2 (P_g - x_i(t))$   (1)

$x_i(t+1) = x_i(t) + v_i(t)$   (2)

where $R_1$ and $R_2$ are random values within the interval [0, 1], and $c_1$ and $c_2$ are acceleration constants. In Eqn. (1), the portion of the adjustment to the velocity influenced by the individual's own pbest position is considered the cognition component, and the portion influenced by gbest is the social component. A drawback of the aforementioned version of PSO is the lack of a mechanism responsible for controlling the magnitude of the velocities, which fosters the danger of swarm explosion and divergence. To solve this problem, Shi and Eberhart [23] later introduced an inertia term $w$ by modifying (1) to become:

$v_i(t+1) = w \times v_i(t) + R_1 c_1 (P_i - x_i(t)) + R_2 c_2 (P_g - x_i(t))$   (3)
They proposed that suitable selection of $w$ provides a balance between global and local exploration, thus requiring fewer iterations on average to find a sufficiently optimal solution. As originally developed, $w$ often decreases linearly from about 0.9 to 0.4 during a run. In general, the inertia weight $w$ is set according to the following equation:

$w = w_{max} - \dfrac{w_{max} - w_{min}}{iter_{max}} \times iter$   (4)
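As a concrete illustration, Eqn. (4) can be computed as in the following minimal Python sketch; the function and parameter names are ours, not from the paper, and the defaults follow the 0.9-to-0.4 range mentioned above.

```python
def inertia_weight(iteration, max_iterations, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight, Eqn. (4)."""
    return w_max - (w_max - w_min) / max_iterations * iteration
```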
where $iter_{max}$ is the maximum number of iterations, and $iter$ is the current number of iterations.

2.2 Multi-population Cooperative Particle Swarm Optimization
The foundation of PSO is based on the hypothesis that social sharing of information among conspecifics offers an evolutionary advantage. It reflects the cooperative relationship among the individuals (fish, birds, insects) within a group (school, flock, swarm). However, cooperation in nature is not confined to conspecifics.
In natural ecosystems, many species have developed cooperative interactions with other species to improve their survival. Such cooperative co-evolution, also called symbiosis, can be found in organisms ranging from cells (e.g., eukaryotic organisms probably resulted from the mutualistic interaction between prokaryotes and some cells they infected) to higher animals (e.g., African tick birds obtain a steady food supply by cleaning parasites from the skin of giraffes, zebras, and other animals) [24, 25]. Inspired by the phenomenon of symbiosis in natural ecosystems, a master-slave mode is incorporated into PSO, yielding the Multi-population Cooperative Particle Swarm Optimizer (MCPSO). In our approach, the population consists of one master swarm and several slave swarms. The symbiotic relationship between the master swarm and the slave swarms can keep the right balance of exploration and exploitation, which is essential for the success of a given optimization task. The master-slave communication model is shown in Fig. 1; it is used to assign fitness evaluations and maintain algorithm synchronization. Independent populations (species) are associated with nodes, called slave swarms. Each node executes a single PSO or one of its variants, including the update of location and velocity and the creation of a new local population. When all nodes are ready with their new generations, each node sends its best local individual to the master node. The master node selects the best of all received individuals and evolves according to the following equations:
$v_i^M(t+1) = w v_i^M(t) + R_1 c_1 (p_i^M - x_i^M(t)) + R_2 c_2 (p_g^M - x_i^M(t)) + R_3 c_3 (p_g^S - x_i^M(t))$   (5)

$x_i^M(t+1) = x_i^M(t) + v_i^M(t)$   (6)

where $M$ represents the master swarm, $c_3$ is the migration coefficient, and $R_3$ is a uniform random sequence in the range [0, 1]. Note that the particle's velocity update in the master swarm is associated with three factors:
i. $p_i^M$: previous best position of the master swarm.
ii. $p_g^M$: best global position of the master swarm.
iii. $p_g^S$: previous best position of the slave swarms.
Fig. 1. The master-slave model
As shown in Eqn. (5), the first term of the summation represents the inertia (the particle keeps moving in the direction it had previously moved), the second term represents memory (the particle is attracted to the best point in its trajectory), the third term represents cooperation (the particle is attracted to the best point found by all particles of the master swarm) and the last represents information exchange (the particle is attracted to the best point found by the slave swarms). The pseudocode for the MCPSO algorithm is listed in Fig. 2.

Algorithm MCPSO
Begin
  Initialize all the populations
  Evaluate the fitness value of each particle
  Repeat
    Do in parallel
      Node i, 1 ≤ i ≤ K   // K is the number of slave swarms
    End Do in parallel
    Barrier synchronization   // wait for all processes to finish
    Select the fittest local individual p_g^S from the slave swarms
    Evolve the master swarm
      // Update the velocity and position using (5) and (6), respectively
    Evaluate the fitness value of each particle
  Until a termination condition is met
End

Fig. 2. Pseudocode for the MCPSO algorithm
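The following Python sketch illustrates how the pseudocode of Fig. 2 and the updates of Eqns. (2), (3), (5) and (6) fit together. It is a simplified, assumption-laden rendering (NumPy-based, sequential rather than parallel, maximizing a user-supplied fitness; all helper names are ours), not the authors' implementation.

```python
import numpy as np

def update(x, v, pbest, gbest, w, c1=1.5, c2=1.5, c3=0.8, sbest=None):
    """Velocity and position update. Without sbest this is Eqns. (3)/(2);
    with sbest it adds the information-exchange term of Eqns. (5)/(6)."""
    n, d = x.shape
    v = (w * v
         + c1 * np.random.rand(n, d) * (pbest - x)
         + c2 * np.random.rand(n, d) * (gbest - x))
    if sbest is not None:
        v += c3 * np.random.rand(n, d) * (sbest - x)
    return x + v, v

def mcpso(fitness, dim, n_swarms=4, n=20, iters=100, w_max=0.35, w_min=0.1):
    """One master swarm (index 0) and n_swarms - 1 slave swarms."""
    xs = [np.random.rand(n, dim) for _ in range(n_swarms)]
    vs = [np.zeros((n, dim)) for _ in range(n_swarms)]
    pb = [x.copy() for x in xs]                         # personal bests
    pf = [np.array([fitness(p) for p in x]) for x in xs]
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters         # Eqn. (4)
        gb = [p[f.argmax()] for p, f in zip(pb, pf)]    # per-swarm global best
        sbest = max((gb[i] for i in range(1, n_swarms)), key=fitness)
        for i in range(n_swarms):
            xs[i], vs[i] = update(xs[i], vs[i], pb[i], gb[i], w,
                                  sbest=sbest if i == 0 else None)
            f = np.array([fitness(p) for p in xs[i]])
            better = f > pf[i]
            pb[i][better], pf[i][better] = xs[i][better], f[better]
    return max((p[f.argmax()] for p, f in zip(pb, pf)), key=fitness)
```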
3 Fuzzy Model Based on MCPSO
3.1 T-S Fuzzy Model Systems
In this paper, the fuzzy model suggested by Takagi and Sugeno is employed to represent a nonlinear system. A T-S fuzzy system is described by a set of fuzzy IF-THEN rules that represent local linear input-output relations of nonlinear systems. The overall system is then an aggregation of all such local linear models. More precisely, a T-S fuzzy system is formulated in the following form:

$R^l$: if $x_1$ is $A_1^l$ and ... and $x_n$ is $A_n^l$, then $\hat{y}^l = \alpha_0^l + \alpha_1^l x_1 + \ldots + \alpha_n^l x_n$,   (7)

where $\hat{y}^l$ $(1 \le l \le r)$ is the output due to rule $R^l$ and the $\alpha_i^l$ $(0 \le i \le n)$, called the consequent parameters, are the coefficients of the linear relation in the lth rule and will be identified. $A_i^l(x_i)$ are the fuzzy variables defined by the following Gaussian membership function:
$A_i^l(x_i) = \exp\left[ -\frac{1}{2} \left( \frac{x_i - m_i^l}{\sigma_i^l} \right)^2 \right]$,   (8)
where $1 \le l \le r$, $1 \le i \le n$, $x_i \in R$; $m_i^l$ and $\sigma_i^l$ represent the center (or mean) and the width (or standard deviation) of the Gaussian membership function, respectively. $m_i^l$ and $\sigma_i^l$ are adjustable parameters called the premise parameters, which will be identified. Given an input $(x_1^0(k), \ldots, x_n^0(k))$, the final output $\hat{y}(k)$ of the fuzzy system is inferred as follows:

$\hat{y}(k) = \dfrac{\sum_{l=1}^{r} \hat{y}^l(k) \left( \prod_{i=1}^{n} A_i^l(x_i^0(k)) \right)}{\sum_{l=1}^{r} \left( \prod_{i=1}^{n} A_i^l(x_i^0(k)) \right)} = \dfrac{\sum_{l=1}^{r} \hat{y}^l(k)\, w^l(k)}{\sum_{l=1}^{r} w^l(k)}$,   (9)

where the weight strength $w^l(k)$ of the lth rule is calculated by:

$w^l(k) = \prod_{i=1}^{n} A_i^l(x_i^0(k))$.   (10)
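For clarity, Eqns. (7)-(10) amount to the following short computation; this is a sketch in Python/NumPy, and the array layout is our choice, not the paper's.

```python
import numpy as np

def ts_output(x, m, s, a):
    """T-S fuzzy inference.
    x: (n,) input; m, s: (r, n) Gaussian centers and widths;
    a: (r, n+1) consequent parameters [alpha_0, ..., alpha_n] per rule."""
    A = np.exp(-0.5 * ((x - m) / s) ** 2)   # memberships A_i^l(x_i), Eqn. (8)
    w = A.prod(axis=1)                      # firing strengths w^l, Eqn. (10)
    y_rule = a[:, 0] + a[:, 1:] @ x         # local linear outputs, Eqn. (7)
    return (w * y_rule).sum() / w.sum()     # weighted average, Eqn. (9)
```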
3.2 Fuzzy Model Strategy Based on MCPSO
The detailed design algorithm of the fuzzy model by MCPSO is introduced in this section. The overall learning process can be described as follows:
(1) Parameter representation. In our work, the parameter matrix, which consists of the premise parameters and the consequent parameters described in Section 3.1, is defined as a two-dimensional matrix, i.e.,
$\begin{bmatrix} m_1^1 & \sigma_1^1 & \cdots & m_n^1 & \sigma_n^1 & \alpha_0^1 & \alpha_1^1 & \cdots & \alpha_n^1 \\ m_1^2 & \sigma_1^2 & \cdots & m_n^2 & \sigma_n^2 & \alpha_0^2 & \alpha_1^2 & \cdots & \alpha_n^2 \\ \vdots & & & & & & & & \vdots \\ m_1^r & \sigma_1^r & \cdots & m_n^r & \sigma_n^r & \alpha_0^r & \alpha_1^r & \cdots & \alpha_n^r \end{bmatrix}$
The size of the matrix can be represented by $D = r \times (3n + 1)$.
(2) Parameter learning
a) In MCPSO, the master swarm and the slave swarms both work with the same parameter settings except for the velocity update equation. Initially, $N \times n$ ($N \ge 2$, $n \ge 2$) individuals forming the population are randomly generated, and the individuals are divided into $N$ swarms (one master swarm and $N-1$ slave swarms). Each swarm contains $n$ individuals with random
positions and velocities in $D$ dimensions. These individuals may be regarded as particles in terms of PSO. In the T-S fuzzy model system, the number of rules, $r$, should be assigned in advance. In addition, the maximum and minimum inertia weights $w_{max}$ and $w_{min}$, the learning parameters $c_1$, $c_2$, and the migration coefficient $c_3$ should be assigned in advance. After initialization, new
individuals in the next generation are created by the following steps.
b) For each particle, evaluate the desired optimization fitness function in $D$ variables. The fitness function is defined as the reciprocal of the RMSE (root mean square error), which is used to evaluate the various individuals within a population of potential solutions. Considering the single-output case for clarity, our goal is to minimize the error function:

$RMSE = \sqrt{ \dfrac{1}{K} \sum_{k=1}^{K} \left( y_p(k+1) - y_r(k+1) \right)^2 }$   (11)
where $K$ is the total number of time steps, $y_p(k+1)$ is the inferred output and $y_r(k+1)$ is the desired reference output.
c) Evaluate the fitness of each particle. Compare the evaluated fitness value of each particle with its pbest. If the current value is better than pbest, then set the current location as the pbest location in the D-dimensional space.
d) Furthermore, if the current value is better than gbest, then reset gbest to the current index in the particle array. This step is executed in parallel for both the master swarm and the slave swarms.
e) In each generation, after step d) is executed, the best-performing particle $p_g^S$ among the slave swarms should be marked.
f) Update the velocity and position of all the particles in the $N-1$ slave swarms according to Eqn. (3) and Eqn. (2), respectively (suppose that $N-1$ populations of SPSO with the same parameter settings are involved in MCPSO as the slave swarms).
g) Update the velocity and position of all the particles in the master swarm according to Eqn. (5) and Eqn. (6), respectively.
(3) Termination condition. The computations are repeated until the premise parameters and the consequent parameters converge. It should be noted that after the operations in the master swarm and the slave swarms, the values of an individual may exceed their reasonable range. Assume that the domain of the ith input variable has been found to be $[\min(x_i), \max(x_i)]$ from the training data; then the domains of $m_i^l$ and $\sigma_i^l$ are defined as $[\min(x_i) - \delta_i, \max(x_i) + \delta_i]$ and $[d_i - \delta_i, d_i + \delta_i]$, respectively, where $\delta_i$ is a small positive value defined as $\delta_i = (\max(x_i) - \min(x_i))/10$, and $d_i$ is the predefined width of the Gaussian membership function, set as $(\max(x_i) - \min(x_i))/r$, where $r$ is the number of fuzzy rules.
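In an implementation, these domain constraints can be enforced by clipping after every position update. The sketch below assumes a particle stores the flattened $r \times (3n+1)$ matrix of Section 3.2 with per-rule column order [m_1, σ_1, ..., m_n, σ_n, α_0, ..., α_n]; the function name and layout are ours.

```python
import numpy as np

def clip_premise(theta, x_min, x_max, r):
    """Clip centers m_i^l and widths sigma_i^l to their allowed domains."""
    n = x_min.size
    theta = theta.reshape(r, 3 * n + 1)
    delta = (x_max - x_min) / 10.0                     # delta_i
    d = (x_max - x_min) / r                            # predefined width d_i
    for i in range(n):
        theta[:, 2 * i] = np.clip(theta[:, 2 * i],
                                  x_min[i] - delta[i], x_max[i] + delta[i])
        theta[:, 2 * i + 1] = np.clip(theta[:, 2 * i + 1],
                                      d[i] - delta[i], d[i] + delta[i])
    return theta.ravel()
```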
4 Illustrative Examples
In this section, two nonlinear dynamic applications, an example of identification of a dynamic system and an example of control of a dynamic system, are conducted to validate the capability of the fuzzy inference systems based on MCPSO to handle temporal relationships. The main reason for using these dynamic systems is that they are known to be stable in the bounded-input bounded-output (BIBO) sense.

A. Dynamic System Identification
The systems to be identified are dynamic systems whose outputs are functions of past inputs and past outputs as well. For this dynamic system identification, a serial-parallel model is adopted as the identification configuration, shown in Fig. 3.

Example 1: The plant to be identified in this example is described by the difference equation [2, 4]:

$y_p(k+1) = f[y_p(k), y_p(k-1), y_p(k-2), u(k), u(k-1)]$,   (12)
where

$f[x_1, x_2, x_3, x_4, x_5] = \dfrac{x_1 x_2 x_3 x_5 (x_3 - 1) + x_4}{1 + x_3^2 + x_2^2}$.   (13)
Here, the current output of the plant depends on three previous outputs and two previous inputs. Unlike the authors in [1], who applied a feedforward neural network with five input nodes for feeding appropriate past values of $y_p(k)$ and $u(k)$, we only use the current input $u(k)$ and the output $y_p(k)$ as the inputs to identify the output of the plant $y_p(k+1)$. In training the fuzzy model using MCPSO for the nonlinear plant, we use only ten epochs and there are 900 time steps in each epoch. Similar to the inputs used in [2, 3], the input is an independent and identically distributed (iid) uniform sequence over [-2, 2] for about half of the 900 time steps and a single sinusoid given by $1.05 \sin(\pi k / 45)$ for the remaining time steps. In applying MCPSO to this plant, the number of swarms $N = 4$ and the population size of each swarm $n = 20$ are chosen, i.e., 80 individuals are initially randomly generated in a population. The number $r$ of fuzzy rules is set to 4. For the master swarm, the inertia weights $w_{max}$, $w_{min}$, the acceleration constants $c_1$, $c_2$, and the migration coefficient $c_3$ are set to 0.35, 0.1, 1.5, 1.5 and 0.8, respectively. In the slave swarms the inertia weights and the acceleration constants are the same as those used in the master swarm. To show the superiority of MCPSO, fuzzy identifiers designed by PSO are also applied to the same identification problem. In PSO, the population size is set to 80 and the initial individuals are the same as those used in MCPSO. For fair comparison, the
other parameters, $w_{max}$, $w_{min}$, $c_1$, $c_2$, are the same as those defined in MCPSO. To see the identified result, the following input, as used in [3, 4], is adopted for the test:

$u(k) = \sin(\pi k / 25)$, $k < 250$;
$u(k) = 1.0$, $250 \le k < 500$;
$u(k) = -1.0$, $500 \le k < 750$;
$u(k) = 0.3 \sin(\pi k / 25) + 0.1 \sin(\pi k / 32) + 0.6 \sin(\pi k / 10)$, $750 \le k < 1000$.   (14)
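The plant of Eqns. (12)-(13) and the test input of Eqn. (14) can be reproduced directly; the following Python sketch (ours) generates the test response used to evaluate the identifiers.

```python
import numpy as np

def f(x1, x2, x3, x4, x5):
    """Plant nonlinearity, Eqn. (13)."""
    return (x1 * x2 * x3 * x5 * (x3 - 1) + x4) / (1 + x3 ** 2 + x2 ** 2)

def u(k):
    """Test input, Eqn. (14)."""
    if k < 250:
        return np.sin(np.pi * k / 25)
    if k < 500:
        return 1.0
    if k < 750:
        return -1.0
    return (0.3 * np.sin(np.pi * k / 25) + 0.1 * np.sin(np.pi * k / 32)
            + 0.6 * np.sin(np.pi * k / 10))

y = np.zeros(1001)
for k in range(2, 1000):
    y[k + 1] = f(y[k], y[k - 1], y[k - 2], u(k), u(k - 1))  # Eqn. (12)
```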
Fig. 3. Identification of nonlinear plant using MCPSO
Fig. 4. Identification results using MCPSO in Example 1, where the solid curve denotes the desired output and the dotted curve denotes the actual output

Table 1. Performance comparisons with different methods for Example 1
Method    RMSE (train)   RMSE (test)
RSONFIN   0.0248         0.0780
RFNN      0.0114         0.0575
TRFN-S    0.0084         0.0346
PSO       0.0386         0.0372
MCPSO     0.0146         0.0070
Fig. 4 shows the desired output (denoted by a solid curve) and the inferred output obtained by using MCPSO (denoted by a dotted curve) for the testing input signal. Table 1 gives the detailed identification results using different methods, where the results for RSONFIN, RFNN and TRFN-S come from the literature [5]. From the comparisons, we see that the fuzzy identifier designed by MCPSO is superior to the method using RSONFIN, and is slightly inferior to the methods using RFNN and TRFN-S on the training data. However, among these methods, it achieves the highest identification accuracy in the test part, which demonstrates its better generalization ability. The results of the MCPSO identifier also demonstrate improved performance compared to the results of the identifiers obtained by PSO. The abnormal phenomenon that the test RMSE is smaller than the training RMSE for the fuzzy identifiers based on PSO and MCPSO may be attributed to the well-regulated input data in the test part (during time steps [250, 500] and [500, 750], the input is constant).

B. Dynamic System Control
Compared to linear systems, for which there now exists considerable theory regarding adaptive control, very little is known concerning adaptive control of plants governed by nonlinear equations. It is in the control of such systems that we are primarily interested in this section. Based on MCPSO, a fuzzy controller is designed for the control of dynamical systems. The control configuration and the input-output variables of the MCPSO fuzzy controller are shown in Fig. 5 and are applied to a MISO (multi-input single-output) plant control problem in the following example. Comparisons with other control methods are also presented. Example 2: The controlled plant is the same as that used in [2] and [5] and is given by
$y_p(k+1) = \dfrac{y_p(k)\, y_p(k-1)\, (y_p(k) + 2.5)}{1 + y_p^2(k) + y_p^2(k-1)} + u(k)$.   (15)
In designing the fuzzy controller using MCPSO, the desired output $y_r$ is specified by the following 250 pieces of data:

$y_r(k+1) = 0.6 y_r(k) + 0.2 y_r(k-1) + r(k)$, $1 \le k \le 250$,
$r(k) = 0.5 \sin(2\pi k / 45) + 0.2 \sin(2\pi k / 15) + 0.2 \sin(2\pi k / 90)$.
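The 250-step training reference trajectory above can be generated as follows (a sketch, ours):

```python
import numpy as np

def r(k):
    return (0.5 * np.sin(2 * np.pi * k / 45) + 0.2 * np.sin(2 * np.pi * k / 15)
            + 0.2 * np.sin(2 * np.pi * k / 90))

y_r = np.zeros(252)
for k in range(1, 251):
    y_r[k + 1] = 0.6 * y_r[k] + 0.2 * y_r[k - 1] + r(k)
```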
The inputs to the MCPSO fuzzy controller are $y_p(k)$ and $y_r(k)$, and the output is $u(k)$. There are five fuzzy rules in the MCPSO fuzzy controller, i.e., $r = 5$, resulting in a total of 35 free parameters. The other parameters in applying MCPSO are the same as those used in Example 1. The fuzzy controller designed by PSO is also applied to the MISO control problem, and the parameters in using PSO are also the same as those defined in Example 1. The evolution is processed for 100 generations and is repeated for 50 runs. The averaged best-so-far RMSE value over 50 runs for each generation is shown in Fig. 6.
Fig. 5. Dynamical system control configuration with MCPSO fuzzy controller
Fig. 6. Average best-so-far RMSE in each generation for PSO and MCPSO in Example 2
From the figure, we can see that MCPSO converges faster than PSO and obtains a better result. In fact, owing to the competitive relationships among the slave swarms, the master swarm is not influenced much when a certain slave swarm gets stuck at a local optimum. Avoiding premature convergence allows MCPSO to continue searching for the global optimum. The best and averaged RMSE over the 50 runs after 100 generations of training are listed in Table 2, where the results of the GA and HGAPSO methods are from [6]. It should be noted that the TRFN controller designed by HGAPSO (or GA) was evolved for 100 generations and repeated for 100 runs in [6]. To test the performance of the designed fuzzy controller, another reference input $r(k)$ is given by:

$r(k) = 0.3 \sin(2\pi k / 50) + 0.2 \sin(2\pi k / 25) + 0.4 \sin(2\pi k / 60)$, $251 \le k \le 500$.
Fig. 7. The tracking performance of the MCPSO controller in Example 2 for (a) the training and (b) the test reference output, where the desired output is denoted by a solid curve and the actual output by a dotted curve
The best and averaged control performance for the test signal over the 50 runs is also listed in Table 2. From the comparison results, we can see that the fuzzy controller based on MCPSO greatly outperforms those based on GA and PSO, especially in the test results, and reaches the same control level as the TRFN controller based on HGAPSO. To demonstrate the control performance of the MCPSO fuzzy controller on the MISO control problem, one control run of MCPSO is shown in Fig. 7 for both the training and the test reference output.
Table 2. Performance comparisons with different methods for Example 2
Method   RMSE (train mean)   RMSE (train best)   RMSE (test mean)   RMSE (test best)
GA       0.2150              0.1040              —                  —
PSO      0.1364              0.0526              0.1526             0.1024
HGAPSO   0.0890              0.0415              —                  —
MCPSO    0.1024              0.0518              0.1304             0.0704
5 Conclusions
This paper proposed a multi-population cooperative particle swarm optimizer to identify T-S fuzzy models for nonlinear dynamic systems. In the simulation part, we applied the suggested method to design a fuzzy identifier for a nonlinear dynamic plant identification problem and a fuzzy controller for a nonlinear dynamic plant control problem, respectively. To demonstrate the effectiveness of the proposed algorithm MCPSO, its performance was compared to several typical methods on these dynamical systems.
Acknowledgements
This work is supported by the National Natural Science Foundation of China (No. 70431003) and the National Basic Research Program of China (No. 2002CB312200). The first author would like to thank Prof. Q.H. Wu of Liverpool University for many valuable comments. Helpful discussions with Dr. B. Ye, Dr. L.Y. Yuan and Dr. S. Liu are also gratefully acknowledged.
References
1. Narendra, K. S., Parthasarathy, K.: Adaptive identification and control of dynamical systems using neural networks. In: Proc. of the 28th IEEE Conf. on Decision and Control, Vol. 2. Tampa, Florida, USA (1989) 1737-1738
2. Narendra, K. S., Parthasarathy, K.: Identification and control of dynamical systems using neural networks. IEEE Trans. Neural Networks 1 (1990) 4-27
3. Sastry, P. S., Santharam, G., Unnikrishnan, K. P.: Memory neural networks for identification and control of dynamical systems. IEEE Trans. Neural Networks 5 (1994) 306-319
4. Lee, C. H., Teng, C. C.: Identification and control of dynamic systems using recurrent fuzzy neural networks. IEEE Trans. Fuzzy Syst. 8 (2000) 349-366
5. Juang, C. F.: A TSK-type recurrent fuzzy network for dynamic systems processing by neural network and genetic algorithms. IEEE Trans. Fuzzy Syst. 10 (2002) 155-170
6. Juang, C. F.: A hybrid of genetic algorithm and particle swarm optimization for recurrent network design. IEEE Trans. Syst. Man Cyber. B 34 (2004) 997-1006
7. Tseng, C. S., Chen, B. S., Uang, H. J.: Fuzzy tracking control design for nonlinear dynamical systems via T-S fuzzy model. IEEE Trans. Fuzzy Syst. 9 (2001) 381-392
8. Chow, T. W. S., Yang, F.: A recurrent neural-network-based real-time learning control strategy applying to nonlinear systems with unknown dynamics. IEEE Trans. Industrial Electronics 45 (1998) 151-161
9. Gan, C., Danai, K.: Model-based recurrent neural network for modeling nonlinear dynamic systems. IEEE Trans. Syst., Man, Cyber. 30 (2000) 344-351
10. Juang, C. F., Lin, C. T.: A recurrent self-constructing neural fuzzy inference network. IEEE Trans. Neural Networks 10 (1999) 828-845
11. Wang, L. X., Mendel, J. M.: Back-propagation fuzzy systems as nonlinear dynamic systems identifiers. In: Proc. IEEE Int. Conf. Fuzzy Syst., San Diego, USA (1992) 1409-1418
12. Takagi, T., Sugeno, M.: Fuzzy identification of systems and its application. IEEE Trans. Syst., Man, Cyber. 15 (1985) 116-132
13. Tanaka, K., Ikeda, T., Wang, H. O.: A unified approach to controlling chaos via an LMI-based fuzzy control system design. IEEE Trans. Circuits and Systems 45 (1998) 1021-1040
14. Karr, C. L.: Design of an adaptive fuzzy logic controller using a genetic algorithm. In: Proc. of 4th Int. Conf. Genetic Algorithms, San Diego, USA (1991) 450-457
15. Wang, C. H., Hong, T. P., Tseng, S. S.: Integrating fuzzy knowledge by genetic algorithms. IEEE Trans. Evol. Comput. 2 (1998) 138-149
16. Ishibuchi, H., Nakashima, T., Murata, T.: Performance evaluation of fuzzy classifier systems for multi-dimensional pattern classification problems. IEEE Trans. Syst., Man, Cyber. B 29 (1999) 601-618
17. Eberhart, R. C., Kennedy, J.: A new optimizer using particle swarm theory. In: Proc. of Int. Sym. Micro Mach. Hum. Sci., Nagoya, Japan (1995) 39-43
18. Kennedy, J., Eberhart, R. C.: Particle swarm optimization. In: Proc. of IEEE Int. Conf. on Neural Networks, Piscataway, NJ (1995) 1942-1948
19. Zhang, C., Shao, H., Li, Y.: Particle swarm optimization for evolving artificial neural network. In: Proc. of IEEE Int. Conf. Syst., Man, Cyber., Vol. 4. Nashville, Tennessee, USA (2000) 2487-2490
20. Engelbrecht, A. P., Ismail, A.: Training product unit neural networks. Stability Control: Theory Appl. 2 (1999) 59-74
21. Mendes, R., Cortez, P., Rocha, M., Neves, J.: Particle swarms for feedforward neural network training. In: Proc. of Int. Joint Conf. on Neural Networks, Honolulu, USA (2002) 1895-1899
22. Angeline, P. J.: Evolutionary optimization versus particle swarm optimization: philosophy and performance differences. In: Proc. of the 7th Annual Conf. on Evolutionary Programming, San Diego, USA (1998) 601-610
23. Shi, Y., Eberhart, R. C.: A modified particle swarm optimizer. In: Proc. of IEEE Int. Conf. on Evolutionary Computation, Anchorage, USA (1998) 69-73
24. Moriarty, D., Miikkulainen, R.: Reinforcement learning through symbiotic evolution. Machine Learning 22 (1996) 11-32
25. Wiegand, R. P.: An analysis of cooperative coevolutionary algorithms. PhD thesis, George Mason University, Fairfax, Virginia, USA (2004)
Human Clustering for a Partner Robot Based on Computational Intelligence
Indra Adji Sulistijono 1,2 and Naoyuki Kubota 1,3
1 Department of Mechanical Engineering, Tokyo Metropolitan University, 1-1 Minami-Osawa, Hachioji, Tokyo 192-0397, Japan
2 Electronics Engineering Polytechnic Institute of Surabaya – ITS (EEPIS-ITS), Kampus ITS Sukolilo, Surabaya 60111, Indonesia
[email protected]
3 "Interaction and Intelligence", PRESTO, Japan Science and Technology Corporation (JST)
[email protected]

Abstract. This paper proposes computational intelligence for the perceptual system of a partner robot. The robot requires the capability of visual perception to interact with a human. Basically, a robot should perform moving object extraction, clustering, and classification for the visual perception used in the interaction with humans. In this paper, we propose a total system for human clustering for a partner robot using long-term memory, k-means, and a self-organizing map; a fuzzy controller is used for the motion output. The experimental results show that the partner robot can perform the human clustering.
1 Introduction
Recently, personal robots have been designed to entertain or to assist a human owner, and such a robot should have capabilities to recognize a human, to interact with a human in natural communication, and to learn the interaction style with the human. Visual perception is especially important, because vision carries much of the information needed for the interaction with the human. Image-based human tracking might play a prominent role in the next generation of surveillance systems and human-computer interfaces. Estimating the pose of the human body in a video stream is a difficult problem because of the significant variations in appearance of the object throughout the sequence [16]. The robot might infer the intention of the human by using a built map, because a task depends on its environmental conditions. Furthermore, the robot should learn not only an environmental map, but also human gestures or postures to communicate with the human. In previous studies, various image processing methods for robotic visual perception, such as differential filters, moving object detection, and pattern recognition, have been proposed [14, 15]. In order to perform the pattern recognition, the robot needs patterns or templates, but a human-
friendly robot cannot know the patterns or templates for the persons beforehand. Therefore, the robot must learn patterns or templates through the interaction with the human. For this, we propose a method for visual tracking of a human. First of all, the robot extracts a human from an image taken by a built-in CCD camera. Since we assume the human can move, a moving object is a candidate for the human. A long-term memory is used in order to extract a human from the taken image. A differential filter is used for detecting a moving object. Next, the color combination pattern is extracted by k-means. The robot classifies the detected human by a self-organizing map (SOM) based on the color pattern. The position of the human is obtained from the SOM node. Finally, the robot moves toward the detected human. This paper is organized as follows. Section 2 explains a vision-based partner robot and proposes a human recognition method based on the visual perception. Section 3 shows several experimental results, and Section 4 summarizes the paper.
2 Visual Perception
2.1 A Partner Robot
We developed a partner robot, MOBiMac, as shown in Figure 1. This robot is designed for use as a personal computer as well as a partner. Two CPUs are used for the PC and the robotic behaviors, and the robot has two servo motors, four ultrasonic sensors, and a CCD camera. The ultrasonic sensors can measure up to 2000 mm. Therefore, the robot can take various behaviors such as collision avoidance, human approaching, and visual tracking.
Fig. 1. A partner robot, MOBiMac, and its control architecture
The robot takes an image from the CCD camera and extracts a human. If the robot detects the human, the robot extracts the color patterns of the human and then takes a visual tracking behavior (see also Figure 1). Figure 2 shows the total architecture of the visual tracking process, and the detailed procedure is explained in the following.
Fig. 2. Total architecture of visual tracking: image taking → differential filter → k-means for color extraction → SOM for feature extraction → SOM for human clustering → combining for positioning → fuzzy inference → motion output
2.2 Differential Filter
In order to detect a human, we use a recent topic of psychology [3-7]. In this paper, a long-term memory based on expectation is used to detect a human, considered as a moving object. The robot has a long-term memory used as a background image GI(t) composed of $g_i$ ($i = 1, 2, \ldots, l$). A temporal image TI(t) is generated using the differences between an image at $t$ and GI(t−1). Each pixel has a belief value $b_i$ satisfying $0.0 \le b_i \le 1.0$.

$\sum_{\rho=1}^{s} \Phi_\rho P_\rho + G_{jk}^T P_i + P_i G_{jk} < 0$   (10)

where $G_{jk} = A_j - B_j F_k$, and $i, j, k, \rho = 1, 2, \ldots, s$. $P$ is a common positive definite matrix satisfying $P^T = P$. (Proof) The proof is omitted due to lack of space. With the given $\Phi_\rho$, the regional feedback gains $F_j$ can be determined with the PDC scheme and the LMI approach proposed in ref. [4]. With this condition, the closed-loop FSM system can guarantee both stability and smoothness. Remark: It is difficult to determine the $\Phi_\rho$'s so as to satisfy (9). Reference [3] proposes a new PDC design for the case where the time derivatives of the membership functions are computable from the states. This paper employs the algorithm proposed in ref. [3].
4 Example
Consider a fuzzy switching system whose FSM model rules and controller rules are as follows:
Model rules:
Regional Rule i: If $x_2(t)$ is $N_i$ Then
Local Plant Model j: If $x_2(t)$ is $M_{ij}$ Then $\dot{x}(t) = A_{ij} x(t) + B_{ij} u(t)$
Controller rules:
Regional Rule i: If $x_2(t)$ is $N_i$ Then
Local Controller Model j: If $x_2(t)$ is $M_{ij}$ Then $u = -F_{ij} x(t)$
where $i = 1, 2$; $j = 1, 2$; $x(t) = [x_1(t); x_2(t)]$ is the state variable, and the $N_i$ are fuzzy sets. Each regional membership function is assigned as follows:
This complex expression consumes a lot of resources even in a software implementation. Another simplified but effective form of the factor is given as follows. Given any increment $\Delta x$, $\alpha$ should increase by $\Delta\alpha$ correspondingly, and $\Delta\alpha$ should be proportional to $\Delta x$; on the other hand, for the same amount of $\Delta x$, when $x$ is larger, $\Delta\alpha$ should be larger too. Therefore, $\Delta\alpha = k \cdot x \cdot \Delta x$, where $k$ is the proportionality constant. Thus $d\alpha/dx = kx$, so $\alpha(x) = \frac{1}{2} k x^2 + b$, where $b$ is the initial value of $\alpha(x)$, viewed as a parameter. Set $k' = \frac{1}{2} k$, and denote it as $k$ too; then we get:

$\alpha(x) = k x^2 + b$, $k > 0$, $b > 0$.

There are some constraints on choosing the parameters $k$ and $b$, as follows:
(1) $|x/\alpha(x)| \le \delta E$, where $\delta$ is around 1. Input variables after contraction should still be in the UOD.
(2) 0 <

13.4). The strength of the wall injection, $\varepsilon$, defined as the ratio of the injected velocity to the inlet bulk velocity, was set to 0.05, representing quite strong injection. It remained constant along both the upper and lower walls, and the spatial variation of $\varepsilon$ was not considered. The bottom wall was cooled ($-T_w$) and the top wall was heated ($T_w$) at the same rate so that both walls were maintained at constant temperature. The flow was assumed to be homogeneous in the spanwise direction, which allows the use of a transform method. The adequacy of the computational domain size and of the periodic boundary condition in the spanwise direction was assessed by Na [2].
3 Results and Discussion
The effect of α on the performance of DMM for a passive scalar was investigated in a channel with wall injection. The computation was done with 129 × 65 × 65 grid points. The impact of the injected vertical flow on the turbulent boundary layer is accompanied by a lifted shear layer, and this adds complexity to the flow. Mass conservation leads to a streamwise acceleration, or strong inhomogeneity, in the middle of the channel. As shown in Figure 2, the flow is characterized by the formation of increasingly stronger streamwise vortices as it moves downstream. This feature of generating more and more turbulent structures in the streamwise direction is thought to be associated with the growing lifted shear layer. It is useful to investigate the effect of α in this type of complex flow to determine the range of the model's utility. For the purpose of comparison, DNS with 513 × 257 × 257 grid points was also performed, and the data were filtered in physical space to obtain the filtered statistics.
Fig. 3. Time-averaged temperature (passive scalar) profiles at several streamwise locations
Fig. 4. Root-mean-square temperature (passive scalar) profiles at several streamwise locations
Figure 3 shows the comparison of mean temperature profiles at several streamwise locations. Note that the location x/h = 9 is in the simple channel flow region. It is clear that the mean temperature is not sensitive to the value of α upstream of the wall injection. Even though a priori test results suggest sensitivity to α, this sensitivity does not clearly appear in an actual LES. As the flow moves downstream, however, the choice of α = 2 produces a much better prediction. Thus, it would be interesting to investigate what feature of DMM generates the difference in the presence of wall injection. A similar behavior can also be found in the rms profiles shown in Figure 4. As mentioned earlier, the resolution may play an important role in the present flow due to the strong shear layer formed away from the wall. As a first step of the research,
65 grid points were used in the vertical direction, in which explicit filtering is not done. However, in order to generalize the effect of α, a more careful examination of the resolution should be carried out in the future.
4 Summary
The dynamic mixed model extended to the prediction of passive scalar transport was tested in a channel with injection. Since the optimal value of α and its range of utility are likely to depend on the flow, DMM was tested in a strong shear layer generated by strong wall injection. A close investigation of the results suggests that, unlike in a simple channel flow, the performance of the model is sensitive to the size of the effective filter width ratio. Overall, the value of α = 2 produced better mean and rms passive scalar statistics for the flow under investigation. However, more work on a variety of complex flows will be required in order to determine the range of utility of DMM in its current form.
Acknowledgement This work was supported by grant No. R01-2004-000-10041-0 from the Basic Research Program of the Korea Science & Engineering Foundation.
References
1. Zang, Y., Street, R. L., Koseff, J. R.: A Dynamic Mixed Subgrid-scale Model and its Application to Turbulent Recirculating Flows. Phys. Fluids A 5, vol. 12 (1993) 3186-3196
2. Na, Y.: Direct Numerical Simulation of Turbulent Scalar Field in a Channel with Wall Injection. Numerical Heat Transfer, Part A (2005) 165-181
3. Lee, G., Na, Y.: On the Large Eddy Simulation of Temperature Field Using Dynamic Mixed Model in a Turbulent Channel. Trans. KSME B, Vol. 28, No. 10 (2004) 1255-1263
4. Germano, M., Piomelli, U., Moin, P., Cabot, W. H.: A Dynamic Subgrid-scale Eddy Viscosity Model. Phys. Fluids A 3, vol. 7 (1991) 1760-1765
Visualization Process for Design and Manufacturing of End Mills
Sung-Lim Ko, Trung-Thanh Pham, and Yong-Hyun Kim
Department of Mechanical Design and Production Engineering, Konkuk University, CAESIT (Center for Advanced E-system Integration Technology), 1 Hwayang-dong, Kwangjin-gu, Seoul 143-701, South Korea
[email protected]

Abstract. The development of a CAM system for the design and manufacturing of end mills has become a key approach to saving time and reducing cost in end mill manufacturing. This paper presents the calculation and simulation of CNC machining of end mill tools on a 5-axis CNC grinding machine tool. In this study, the process of generating and simulating the grinding point data between the tool and the grinding wheels through the machining time is described. Using input data of end mill geometry, wheel geometry, wheel setting and machine setting, the end mill configuration and the NC code for machining are generated and visualized in three dimensions before machining. The 3D visualization of end mill manufacturing was implemented using OpenGL in C++.
1 Introduction
Flat end mills are commonly used in industry for high speed machining. They are characterized by a complex geometry with many geometric parameters, which leads to some complicated processes in machining on a CNC grinding machine. In the previous studies by Ko [1, 2], the developed CAM system was limited to 2D cross sections of the tools, and the simulation results were restricted to the main fluting operation. Ko already provided a machining simulation for specific 5-axis machines in 2D. Dani Tost presented an approach for the computation of the external shape of drills through a sequence of coordinated movements of the tool and the wheels on machines of up to 6 axes. The proposed method reduces the 3D problem to 2D dynamic boolean operations followed by surface tiling [3, 4]. However, they only showed the fluting and gashing simulations. This paper presents the process of generating and simulating the grinding point data between the tool and the grinding wheels for all grinding processes in 3D. The program was developed to predict the end mill configuration. OpenGL was used for developing the CAM system. The NC code of all processes for end mill manufacturing is generated automatically by the program and saved in NC code files. These NC code files are then used for machining on CNC grinding machines.
2 Development of Software for Design and Manufacturing of End Mills
There are several types of CNC grinding machine tools for end mill manufacturing. In this study, the end mill is machined on a CNC grinding machine tool which has 5 degrees of freedom: three translations (X, Y, Z) and two rotations (B, C), as shown in Fig. 1. The tool is fixed in a tool holder, which has two degrees of freedom: rotation (C) and translation (Z). The grinding wheels are arranged on the wheel spindle, which has 3 degrees of freedom: two translations (X, Y) and one rotation (B). Fig. 2 shows the arrangement of grinding wheels for end mill manufacturing. Wheels number 2 and 3 are used for the fluting and gashing operations, respectively. Wheel number 4 is used for the 1st and 2nd clearance operations. The 1st and 2nd end teeth are formed using wheel number 1.
Fig. 1. 5-axis CNC grinding machine
Fig. 2. Grinding wheels for end mill manufacturing
2.1 Basic Algorithm for the Helical Flute Grinding
The simulation algorithm for helical groove machining in this study is based on the assumption that a wheel of finite thickness consists of a finite number of thin disks. Using the given wheel geometry and the required design parameters of the end mill as input data, a computer program for predicting and simulating the helical flute configuration was completed. The program determines the contact point between the circle and the ellipse, which is the projection of the wheel on the plane normal to the axial direction of the end mill. Then the cross points between the wheel and the outer end mill diameter are calculated. This point becomes the reference point for configuring the flute shape and other parameters. In this case, the diameter of the end mill is fixed, the Z-axis location of the wheel is also fixed, and only the Y-axis location of the wheel is changed to find the contact point. The flute
configuration was generated from the cross points between the given grinding wheel and the end mill. The determination of the first contact point is very important, since most of the end mill geometry is determined using this point.

2.2 Simulation Results and Generation of NC Codes
The end mill shapes are constructed by surface modeling using triangles and quads. The visualization implied for the construction of the end mill shapes depends on the boundary of the model. Fig. 3 and Fig. 4 show a cross section of the helical flute and the simulation results of the end mill shapes. Using the input data for the end mill geometry, wheel geometry and machine setting, the computer program generates the grinding points for the clearance faces, end-teeth faces and gash face. The visualization of these faces implies the construction of the boundary of the surfaces. The OpenGL graphics system allows creating interactive programs that produce color images of moving three-dimensional objects.
Fig. 3. Cross section of helical flute
Fig. 4. Simulation results
The visualization of the end mill involves the lighting conditions surrounding an object, blending, antialiasing, texture mapping, and shading. In addition, OpenGL was used to
exploit the possibilities of the projective Z-buffer and shading techniques, which optimize the visualization delay for the whole process. The NC codes for all manufacturing processes on the 5-axis CNC grinding machine were generated by the computer program based on machine coordinates, and stored in the same window as the simulation results, as shown in Fig. 4. Each machining process is carried out by rotation and translation of the axes of the wheel and the tool. The initial and final positions of all processes were calculated.
3 Conclusion
A CAM system for the design and manufacturing of end mills was developed to predict the geometric end mill configuration before machining. The NC codes for manufacturing end mills on a 5-axis CNC grinding machine were generated by the CAM system.
Acknowledgement This work was supported by Konkuk University in 2004.
References
1. Sung-Lim Ko: Geometrical analysis of helical flute grinding and application to end mill. Trans. NAMRI/SME XXII (1994)
2. Yong-Hyun Kim, Sung-Lim Ko: Development of design and manufacturing technology for end mills in machining hardened steel. Journal of Materials Processing Technology 130-131 (2002) 653-661
3. Dani Tost, Anna Puig, Lluis Perez-Vidal: Boolean operations for 3D simulation of CNC machining of drilling tools. Computer-Aided Design 36 (2004) 315-323
4. Dani Tost, Anna Puig, Lluis Perez-Vidal: 3D simulation of tool machining. Computers & Graphics 27 (2003) 99-106
IP Address Lookup with the Visualizable Biased Segment Tree
Inbok Lee 1, Jeong-Shik Mun 2, and Sung-Ryul Kim 2
1 Department of Computer Science, King's College London, London, United Kingdom
[email protected] 2 Division of Internet & Media and CAESIT, Konkuk University
[email protected] [email protected]

Abstract. The IP address lookup problem is to find the longest matching IP prefix from a routing table for a given IP address. In this paper we implemented and extended the results of [3] by incorporating the access frequencies of the target IP addresses. Experimental results show that the number of memory accesses is reduced significantly.
1 Introduction
An Internet router reads the target address of an incoming packet and chooses a path for the packet using its routing table.

Definition 1. For m-bit IP addresses, a routing table T is a set of pairs (p, hop) such that p is a string in {0, 1}* of length at most m (in fact, p is a prefix of an IP address) and hop is an integer in [1, H], where H is the number of next hops.

Let n be the number of IP prefixes stored in the routing table. Given an IP address x and a routing table T, the lookup problem is to find (p_x, hop_x) ∈ T satisfying these two conditions:
1. p_x is a prefix of x.
2. |p_x| > |p| for any other pair (p, hop) ∈ T such that p is a prefix of x.

Traditionally, patricia tries [5] were used for the IP address lookup problem. Several approaches have been proposed to replace the patricia trie, and they can be divided into two categories: hardware-based approaches and software-based approaches. We focus on the latter.
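As a baseline illustration of Definition 1, the lookup problem can be solved by a naive linear scan over the table; the following Python sketch (ours) is far too slow for real routers but makes the longest-match condition concrete.

```python
def lookup(table, x, m=32):
    """Naive longest-prefix match; `table` maps prefix bit-strings to hops."""
    bits = format(x, f'0{m}b')
    best = None
    for p, hop in table.items():
        if bits.startswith(p) and (best is None or len(p) > len(best[0])):
            best = (p, hop)
    return best

print(lookup({'': 1, '0': 2, '01': 3, '11': 4}, 0b010, m=3))  # ('01', 3)
```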
This work was supported by the Post-doctoral Fellowship Program of Korea Science and Engineering Foundation (KOSEF). Corresponding Author.
2 Algorithm
Our algorithm is based on Lee et al.'s [3]. It represents the IP prefixes as intervals, which was motivated by [2]. Given a k-bit IP prefix P = p_1 p_2 ... p_k, we represent it as an interval [P_L, P_H], where P_L = p_1 p_2 ... p_k 0...0 and P_H = p_1 p_2 ... p_k 1...1 are the two integers obtained by appending m − k 0's and 1's, respectively, at the end of P, where m is the length of an IP address. In this representation, a longer IP prefix has a narrower interval. Since the IP address lookup problem is to find the longest matching IP prefix of x, our aim is to find the narrowest interval [P_L, P_H] that contains x. These intervals are stored in a segment tree [4]. For each interval [P_L, P_H], we compute four values: P_L − 1, P_L, P_H and P_H + 1. Let U be the set of these values, where values smaller than 0 or greater than 2^m − 1 are excluded. Then we update U ← U ∪ {0, 2^m − 1} to make sure that U contains both endpoints 0 and 2^m − 1. The segment tree for the set U = {u_1, u_2, ..., u_N} (0 = u_1 < u_2 < ... < u_N = 2^m − 1) is a balanced binary tree. The leaves represent the basic intervals [u_1, u_2], [u_3, u_4], ..., [u_{N−1}, u_N]. A parent node represents the union of the intervals of its children. The height of the segment tree is O(log n). To perform the IP address lookup, we visit, starting from the root, the nodes whose intervals contain the target IP address, and we report the shortest interval. Our new algorithm is based on the frequencies of the intervals. Note that we do not aim to reduce the time for each lookup operation; rather, we would like to enhance overall performance. In the original segment tree, each leaf has the same depth. We modify the segment tree with a simple rule: move the frequently accessed leaves closer to the root. For those leaves, the number of memory accesses is reduced. We call the resulting tree the biased segment tree. This approach resembles Huffman coding [1] in data compression. The steps for building the biased segment tree are as follows.
Step 1. Compute the frequencies of the basic intervals. To do so, we first build the normal segment tree and count the number of accesses for every leaf from a training set which reflects the access pattern.
Step 2. Divide the basic intervals into classes based on the frequency information obtained in Step 1.
Step 3. Put each class into a desired level. Here we use the rule: move the frequently accessed leaves closer to the root.
Step 4. From the deepest level, create an internal node between two neighboring nodes. We keep doing so until we create a root node.

Example 1. Suppose that m = 3 and there are four IP prefixes ε, 0, 01 and 11. These can be represented by [000_(2), 111_(2)] = [0, 7], [000_(2), 011_(2)] = [0, 3], [010_(2), 011_(2)] = [2, 3], and [110_(2), 111_(2)] = [6, 7]. Figure 1 (a) represents them as intervals. Then we represent these intervals as a segment tree. The basic intervals are [0, 1], [2, 3], [4, 5] and [6, 7]. The segment tree for U is given in Figure 1 (b). Now, given an IP address x = 010_(2) = 2, x ∈ [0, 7], x ∈ [0, 3],
and x ∈ [2, 3], but x ∉ [6, 7]. It means that ε, 0 and 01 are prefixes of x, but 11 is not. After visiting nodes n1, n2, and n5, we can find the intervals which contain x. We report 01 as the longest prefix that matches x. However, suppose that n7 is heavily visited and that nodes n5 and n6 are rarely visited. We move n7 closer to the root node as in Figure 1 (c). The depth of n7 is reduced to 1, while those of n5 and n6 are increased to 3. In this case, if the number of accesses to n7 is greater than the sum of the numbers of accesses to n5 and n6, we get a performance improvement.
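A sketch of the prefix-to-interval mapping used above is given below (Python, ours):

```python
def prefix_interval(prefix: str, m: int):
    """Map a k-bit prefix to [P_L, P_H] by appending m-k 0's and 1's."""
    k = len(prefix)
    base = int(prefix, 2) << (m - k) if k > 0 else 0
    return base, base + (1 << (m - k)) - 1

# Example 1 (m = 3): '' -> (0, 7), '0' -> (0, 3), '01' -> (2, 3), '11' -> (6, 7)
assert prefix_interval('01', 3) == (2, 3)
```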
Fig. 1. Examples of (a) intervals, (b) segment tree, and (c) biased segment tree
Now we analyze the effect of modifying the segment tree. We want to know which leaf should be moved closer to the root and how close it should be moved. Initially, each leaf has the same depth, so the number of memory accesses is O(log n), where n is the number of leaves. When we move a leaf up k levels, it has the same effect as adding 2^k leaves. Hence, if the probability that the leaf is accessed is ρ, the following condition should be satisfied to improve the performance: ρ(log(n + 2^k) − k) + (1 − ρ) log(n + 2^k) < log n. It follows that ρ > log(1 + 2^k/n)/k.
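Numerically, the derived condition can be checked as follows (a sketch, ours):

```python
import math

def worth_promoting(rho, n, k):
    """True if moving a leaf up k levels pays off, i.e.
    rho > log2(1 + 2**k / n) / k."""
    return rho > math.log2(1 + 2 ** k / n) / k

# With n = 77,774 leaves, promoting a leaf by 2 levels already pays off
# once it receives more than about 0.004% of all lookups.
print(worth_promoting(0.001, 77774, 2))   # True
```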
3 Experimental Results
We implemented the IP address lookup algorithm in [3] and our extension. We compared the number of memory accesses rather than the running time, since the latter can vary across environments. We used a snapshot of the routing table of the ATT Net, which contains 77,774 IP prefixes, obtained from http://archive.pch.net/archive. We created two random traffic data sets, whose sizes are 11,116,474 and 3,131,010, respectively, and performed two experiments. In the first experiment, we divided the leaves into three categories. The access ratio among the levels is 1:50:500, that is, a node which belongs to Level 1 is accessed 500 times more frequently than one in Level 3. The depths of the nodes in Levels 1 and 2 are reduced by 2 and 1, respectively. In the second
experiment, we used five categories and the access ratio is 1:10:50:100:500. The depths of the nodes in Levels 1, 2, 3, and 4 are reduced by 4, 3, 2, and 1, respectively. We counted the number of memory accesses for each level and computed the average value. The results are shown in Figure 2.

                 Experiment 1                      Experiment 2
                 Data 1          Data 2            Data 1          Data 2
                 [3]     Ours    [3]     Ours      [3]     Ours    [3]     Ours
Level 1          24.60   23.32   24.65   23.14     24.60   22.46   24.84   22.22
Level 2          24.58   24.64   24.57   24.80     24.60   23.93   24.60   23.40
Level 3          24.60   25.32   24.57   26.11     24.57   25.04   24.58   25.02
Level 4          N/A     N/A     N/A     N/A       24.60   25.73   24.61   26.54
Level 5          N/A     N/A     N/A     N/A       24.59   25.93   24.42   27.80
Expected value   24.60   23.44   24.64   23.29     24.60   22.93   24.61   22.72
Average          24.60   23.50   24.64   24.34     24.60   23.22   24.56   22.84

Fig. 2. Experimental results
In the above experiments, we were able to obtain up to an 8% speedup on average. Although our modified algorithm is slower than the original algorithm in [3] at the lower levels, the gain obtained from the higher levels justifies our modification.
4 Conclusion
In this paper we showed how to incorporate frequency information into the routing table and presented experimental results. We would like to know whether other schemes can use the frequency information too. Combining the frequency information with the B-tree data structure may further increase the performance.
References
1. D. A. Huffman. A method for the construction of minimum-redundancy codes. Proceedings of the IRE, 40(9):1098-1101, 1952.
2. B. Lampson, V. Srinivasan, and G. Varghese. IP lookups using multiway and multicolumn search. IEEE/ACM Transactions on Networking, 7(3):324-334, 1999.
3. I. Lee, K. Park, Y. Choi, and S. K. Chung. A simple and scalable algorithm for the IP address lookup problem. Fundamenta Informaticae, 56(1-2):181-190, 2003.
4. F. P. Preparata and M. I. Shamos. Computational Geometry - An Introduction. Springer-Verlag, 1995.
5. R. Sedgewick. Algorithms in C++. Addison-Wesley, 1992.
A Surface Reconstruction Algorithm Using Weighted Alpha Shapes
Si Hyung Park 1, Seoung Soo Lee 2, and Jong Hwa Kim 2
1 Voronoi Diagram Research Center, Hanyang University, 17 Haenggang-dong, Seoungdong-gu, Seoul, 133-791, Korea
[email protected]
2 CAESIT, Konkuk University, 1 Hwayang-dong, Gwangjin-gu, Seoul, 143-701, Korea
{sslee, jhkim}@konkuk.ac.kr

Abstract. This paper discusses a surface reconstruction method using the Delaunay triangulation algorithm. Surface reconstruction is used in various engineering applications to generate CAD models in reverse engineering, STL files for rapid prototyping, and NC codes for CAM systems from physical objects. The suggested method has two components in addition to the triangulation: the weighted alpha shapes algorithm and the peel-off algorithm. The weighted alpha shapes algorithm is applied to restrict the growth of tetrahedra, where the weight is calculated based on the density of points. The peel-off algorithm is employed to enhance the reconstruction in detail. The results show that the increase in execution time due to the two additional processes is very small compared to the ordinary triangulation, which demonstrates that the proposed surface reconstruction method has a great advantage in execution time for large sets of points.
1 Introduction
Recent advances in measurement systems have enabled more precise measurement of 3-D shapes. Normally a set of points is obtained as the output of a measurement system, but in most cases further processing is required for it to be useful for engineering purposes. That is, noise should be removed, only useful points should be extracted, and triangular surfaces should be constructed to model the original shape as closely as possible. The triangular surfaces can be used in CAD modeling or computer graphics after surface modeling with, for example, NURBS (Non-Uniform Rational B-Splines). The construction of a triangular net makes the rendering process easier, since it has a simple data structure, and it gives easy access to the information needed to form a surface model. In surface reconstruction, the closeness to the original shape and the speed of reconstruction are the most important performance measures. This paper presents a construction method for a triangular net using only the 3-D positional data of points on an object, without relational information between the points or the normal vectors of the surface. We suggest a novel algorithm for surface reconstruction using an enhanced Delaunay triangulation method, which transforms a set of points into a triangular net.
2 Related Works

2.1 Alpha Shapes
The alpha shapes, first described by Edelsbrunner et al.[1-8], are a type of triangulation parameterized by a single global variable called alpha. The shape of the triangular patch is determined by adjusting the value of alpha, which determines the size of the sphere used in the triangulation. If the points are not evenly distributed, the value of alpha can be either too small or too large in some regions. To resolve this problem, Edelsbrunner proposed weighted alpha shapes, which assign a different weight to each point in the set; the weight determines the level of detail around that point. However, Edelsbrunner described the method only theoretically.

2.2 Crust Algorithm
The crust algorithm proposed by Amenta et al.[9-15] constructs manifold triangulations by interpolating collections of three-dimensional points. In the crust algorithm, the Voronoi diagram of the sample points is computed and the poles among the Voronoi vertices are selected to estimate the medial axis. Then the Delaunay triangulation is performed on the combined point set of samples and poles, and the triangles whose vertices are all sample points are chosen. This algorithm is proven to converge to the sampled surface as the density of points increases. However, it requires a long execution time for many practical applications.

2.3 Zero-Set Algorithm
This approach was proposed by Hoppe et al.[16-17]. They estimate a tangent plane at each sample point based on the shape of the k nearest points, and then the signed distances between the points and the nearest point's tangent plane are calculated. The distance function is then interpolated and polygonalized by their marching cubes algorithm. The zero-set algorithm constructs an approximate surface rather than an interpolated surface. Thus, this method needs low-pass filtering of the data, which is desirable in the presence of noise but causes some loss of information.
3 The Proposed Surface Reconstruction Algorithm
The Delaunay triangulation method employed in the proposed algorithm is an incremental construction algorithm using oct-subdivision[18], and it constructs the Delaunay triangles prior to the Voronoi diagram. Thus, the alpha shapes algorithm is better suited to the proposed Delaunay triangulation than the crust algorithm. Although the alpha shapes algorithm works well for evenly distributed points, the value of alpha must change if the points are distributed irregularly. To resolve this problem, Edelsbrunner suggested the weighted alpha shapes method, in which every point in the set is assigned a weight describing the level of detail around that point. This method has an advantage in constructing shapes that have different densities, but it is difficult to apply since the weights must be predetermined when the point data is entered. Thus we suggest an algorithm based on the weighted alpha shapes method in which the weights are computed during triangulation, rather than when the point data is input.

3.1 Calculation of Weights
To compute the weights, we first define the following global variable:

size = \sqrt[3]{\frac{(x_{\max}-x_{\min})(y_{\max}-y_{\min})(z_{\max}-z_{\min})}{N}},   (1)

where N is the number of points. The above equation makes the average density of the points over the entire grid equal to one by adjusting the size of the uniform grid. Next, the number of points inside the grid elements is counted. While constructing the Delaunay tetrahedra, the number of grid elements covering the smallest box of the empty sphere is computed. For example, if there are 24 uniform grids and 3 oct-subdivided grids, the number of grid elements is 24 + 3/8. Thus, the localized density of points is computed as follows:

\rho = \frac{p}{cubic},   (2)

where p is the number of points inside the smallest box and cubic is the number of grid elements in the smallest box. Fig. 1 shows the relationship between the number of points in a cube (500^3 volume) and the average length of the edges comprising the Delaunay triangles. The results are obtained with 12 different numbers of points, varying from 50 up to 4,000. For a given number of points, 100 tests are performed and the average length is plotted; in each case the points are uniformly distributed. In Fig. 1 the average length is inversely proportional to the cubic root of the number of points (or the density) in the cube. Thus, the relationship between the weighted alpha shapes value and the density of points can be expressed by the following equation:

\alpha_w \propto \sqrt[3]{\frac{1}{\rho}} \times size.   (3)

In the tests, it is observed that the variance of the edge length tends to increase as the irregularity of the data points in a grid increases. To adjust for that effect, the weight constant W_{co} is multiplied on the right-hand side of Equation (3), as in Equation (4):

\alpha_w = W_{co} \sqrt[3]{\frac{1}{\rho}} \times size.   (4)
Fig. 1. Relationship between the number of points and length of Delaunay triangle edge
The value of W_{co} is set to be larger than 1 in order to patch points that are farther apart than the average. The empirical results suggest a value of W_{co} between 1.0 and 1.2. Finally, the sensitivity function f_{st}(\rho) is used to correct the error incurred depending on the density of points. Hence Equation (4) is rewritten as Equation (5):

\alpha_w = W_{co} \left( \sqrt[3]{\frac{1}{\rho}} + f_{st}(\rho) \right) \times size.   (5)

In Fig. 2 it is observed that the range of the edge length is inversely proportional to the cubic root of the number of points in the cube. That is, if the density of points is very low, the range of the edge length becomes larger. Hence the sensitivity function can be expressed as

f_{st}(\rho) = S_{co} \times \sqrt[3]{\frac{1}{\rho}},   (6)

where S_{co} is the sensitivity constant. According to the results in Fig. 1, the range of the edge length is about 1/10 of the edge length. Thus, the sensitivity constant S_{co} is set to some value in (0, 0.1). Based on the observations made in Fig. 2, the sensitivity function f_{st}(\rho) is used in the following two cases: Case 1: the density of points is very low; Case 2: the density of points in a grid is larger, but the number of points in the grid is very small due to the small grid size.
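As a concrete illustration of Equations (1)-(6), the following is a minimal Python sketch of the weight computation; the default values of w_co and s_co are illustrative choices within the ranges suggested above, and the function names are ours, not the authors'.

    def grid_size(points):
        # Equation (1): grid size that normalizes the average point density to one.
        xs, ys, zs = zip(*points)
        vol = (max(xs) - min(xs)) * (max(ys) - min(ys)) * (max(zs) - min(zs))
        return (vol / len(points)) ** (1.0 / 3.0)

    def weighted_alpha(p, cubic, size, w_co=1.1, s_co=0.05):
        rho = p / cubic                          # Equation (2): localized density
        inv_cbrt = (1.0 / rho) ** (1.0 / 3.0)
        f_st = s_co * inv_cbrt                   # Equation (6): sensitivity term
        return w_co * (inv_cbrt + f_st) * size   # Equation (5)

Because rho is computed per tetrahedron from the local grid occupancy, each candidate tetrahedron is tested against its own alpha value rather than a single global one.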
Fig. 2. Relationship between the number of points and range of Delaunay triangle edge
Fig. 3. Relatively uniform point data set
The weight constant and the sensitivity constant are automatically evaluated and determined in the program. To compare the performance of the constant alpha shapes method and the weighted alpha shapes method, the data points in Fig. 3 are used. The results of the constant alpha shapes method and the weighted alpha shapes method are presented in Fig. 4 and Fig. 5, respectively. In the constant alpha shapes method, if the value of alpha is small, the details are expressed well but sparse regions are not reconstructed, as shown in Fig. 4. On the contrary, if the value of alpha is large, no holes or gaps appear but details are not described well.
Fig. 4. Application of constant alpha shapes: (a) α = 1.0 × size, (b) α = 1.2 × size, (c) α = 1.4 × size, (d) α = 1.5 × size
Fig. 5. Application of weighted alpha shapes
Fig. 6. Non-uniform point data set
Fig. 7. Application of constant alpha shapes
Fig. 8. Application of weighted alpha shapes
The result of the weighted alpha shapes method shows more detail and fewer sparse regions compared to the results of the constant alpha shapes method. It also shows a successful triangulation in the regions where the density changes. Fig. 6 shows an example of non-uniformly distributed data points compared to Fig. 3. The ratio of the number of high-density lattices to that of low-density lattices is several hundred times larger in this case. Fig. 7 shows that the constant alpha shapes method results in unsatisfactory shapes. Fig. 8 presents the results of the weighted alpha shapes method with and without the sensitivity function. The sensitivity function helps to reduce errors in low-density areas but describes less detail in the regions where the density changes.

3.2 Peel-Off Algorithm

Although the weighted alpha shapes method shows better quality than the uniform alpha shapes method, it still lacks the ability to describe small details. Thus, we developed an additional process for this delicate job, which we call the peel-off algorithm. The peel-off algorithm can produce fast and accurate results if the following two conditions are satisfied: 1. The peel-off algorithm is applied after the weighted alpha shapes method. 2. There is no interior point in the data set, or, if an interior point exists, it is sufficiently far (more than the weighted alpha radius of that region) from the points that compose the surface. The structure of the algorithm is conceptually simple, and the pseudocode is presented below.

Deletion of discontinuous surfaces;
while(Not all points (except interior points) exist in the surface)
{
    Renew current surface;
    while(Check current surface triangle)
    {
        if(The alpha radius of the current Delaunay triangle is larger
           than those of the three triangles behind it)
        {
            Delete the current triangle;
            Set the three triangles behind it to be new surfaces;
        }
    }
}

The performance of the peel-off algorithm is tested on the surface reconstruction of objects that are composed of about 15,000 ∼ 391,000 points.
Fig. 9. Results with peel-off algorithm: (a) Mannequin, (b) Ear, (c) Rocker Arm, (d) Raptor

Table 1. Running time (Test System: P4 2.0 GHz, 512 Mbytes)

Model        Number of Points   Time
Mannequin    15K                1 min 2 sec
Ear          21K                1 min 28 sec
Rocker Arm   34K                2 min 21 sec
Raptor       391K               34 min 1 sec
The results are shown in Fig. 9 and the execution times are summarized in Table 1. The execution times in Table 1 include the execution time of the weighted alpha shapes method and the peel-off algorithm. The execution times increase almost linearly with the number of points.
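For readers who prefer an executable form, the peel-off loop of Sect. 3.2 can be sketched as follows; the dictionary-based triangle bookkeeping is an illustrative simplification of the actual mesh data structure, not the authors' implementation.

    def peel_off(surface, alpha_radius, behind):
        # surface      : set of triangle ids currently on the surface
        # alpha_radius : dict mapping triangle id -> alpha radius
        # behind       : dict mapping triangle id -> ids of the three triangles behind it
        changed = True
        while changed:
            changed = False
            for t in list(surface):
                back = behind.get(t, ())
                if back and all(alpha_radius[t] > alpha_radius[b] for b in back):
                    surface.remove(t)     # delete the current triangle
                    surface.update(back)  # the back triangles become the new surface
                    changed = True
        return surface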
4 Conclusion
This paper proposed an improved weighted alpha shapes method, which can be applied when only positional data is given. The weights are not fixed as a single constant but are determined according to the density of points. In addition, the peel-off algorithm constructs details successfully by suppressing the formation of wrong triangular surfaces. The alpha shapes method does not require additional computation time because it is a reconstruction method that merely restricts the surface of the 3-D Delaunay triangulation. Hence the linearity between the running time
and the number of points is maintained. Also, the peel-off algorithm does not add much computation time because there are only a few wrong triangular surfaces after the weighted alpha shapes method is applied. Considering the surface reconstruction using the Delaunay triangulation for a large set of points (larger than 500,000), our reconstruction algorithm has a significant advantage in terms of computation time.
References
1. Edelsbrunner, H.: Surface Reconstruction by Wrapping Finite Sets in Space. Technical Report 96-001, Raindrop Geomagic, Inc. (1996)
2. Edelsbrunner, H., Shah, N.: Triangulating Topological Spaces. Proceedings 10th ACM Symp. Computational Geometry (1994) 285-292
3. Edelsbrunner, H., Mucke, E. P.: Three-Dimensional Alpha Shapes. ACM Transactions on Graphics 13(1) (1994) 43-72
4. Edelsbrunner, H.: Weighted Alpha Shapes. Technical Report UIUCDCS-R-92-1760, Department of Computer Science, University of Illinois at Urbana-Champaign (1992)
5. Edelsbrunner, H., Kirkpatrick, D. G., Seidel, R.: On the Shape of a Set of Points in the Plane. IEEE Transactions on Information Theory IT-29(4) (1983) 551-559
6. Edelsbrunner, H., Shah, N.: Triangulating Topological Spaces. Proceedings 10th ACM Symposium Computational Geometry (1994) 43-72
7. Cheng, H.-L., Dey, T. K., Edelsbrunner, H., Sullivan, J.: Dynamic Skin Triangulation. Discrete Comput. Geom. 25 (2001) 525-568
8. Akkiraju, N., Edelsbrunner, H.: Triangulating the Surface of a Molecule. Discrete Appl. Math. 71 (1995) 5-22
9. Amenta, N., Choi, S., Rote, G.: Incremental Constructions con BRIO. ACM Symposium on Computational Geometry (2003) 211-219
10. Choi, S., Amenta, N.: Delaunay Triangulation Programs on Surface Data. The 13th ACM-SIAM Symposium on Discrete Algorithms (2002) 135-136
11. Amenta, N., Choi, S., Kolluri, R.: The Power Crust. Proceedings of 6th ACM Symposium on Solid Modeling (2001) 249-260
12. Amenta, N., Choi, S., Kolluri, R.: The Power Crust, Unions of Balls, and the Medial Axis Transform. Computational Geometry: Theory and Applications 19(2-3) (2001) 127-153
13. Amenta, N., Bern, M.: Surface Reconstruction by Voronoi Filtering. Discrete and Computational Geometry 22 (1999) 481-504
14. Amenta, N., Bern, M., Eppstein, D.: The Crust and the Beta-Skeleton: Combinatorial Curve Reconstruction. Graphical Models and Image Processing 60/2:2 (1998) 125-135
15. Amenta, N., Bern, M., Eppstein, D.: Optimal Point Placement for Mesh Smoothing. Journal of Algorithms 30 (1999) 302-322
16. Hoppe, H.: Surface Reconstruction from Unorganized Points. Ph.D. Thesis, Computer Science and Engineering, University of Washington (1994)
17. Hoppe, H., DeRose, T., Duchamp, T., McDonald, J., Stuetzle, W.: Surface Reconstruction from Unorganized Points. SIGGRAPH 92 Proceedings (1992) 71-78
18. Park, S. H., Lee, S. S., Kim, J. H.: The Delaunay Triangulation by Grid Subdivision. ICCSA 2005 Proceedings, LNCS 3482 (2005) 1033-1042
HYBRID: From Atom-Clusters to Molecule-Clusters

Zhou Bing(1), Jun-yi Shen(2), and Qin-ke Peng(2)

1 Dept. of Computer Science and Engineering, Northeastern University at Qin Huang-dao, 066004, He Bei, China
[email protected]
2 School of Electronics and Information Engineering, Xi'an Jiaotong University, 710049, Shaan Xi, China
{jyshen, qkpeng}@mail.xjtu.edu.cn
Abstract. This paper presents a clustering algorithm named HYBRID. HYBRID has two phases: in the first phase, a set of spherical atom-clusters of the same size is generated, and in the second phase these atom-clusters are merged into a set of molecule-clusters. In the first phase, an incremental clustering method is applied to generate atom-clusters according to the available memory. In the second phase, using an edge-expanding process, HYBRID can discover molecule-clusters of arbitrary size and shape. During the edge-expanding process, HYBRID considers not only the distance between two atom-clusters, but also the closeness of their densities. Therefore HYBRID can eliminate the impact of outliers while discovering more isomorphic molecule-clusters. HYBRID has the following advantages: low time and space complexity, no requirement of user involvement to guide the clustering procedure, handling of clusters with arbitrary size and shape, and a powerful ability to eliminate outliers.
1 Introduction

Since clustering techniques can discover the distributions and patterns of data without any special knowledge of the application field, clustering is an important task in data mining and has been studied widely in statistics, machine learning, and the database community. There are many different clustering methods, for example, CLARANS[1] and DBSCAN[3] for spatial data clustering, and BIRCH[2] and CURE[4] for very large databases. We believe that a good clustering algorithm should solve four problems. First, it should have low time and space complexity. For a clustering algorithm with a time complexity of O(n^2) and a space complexity of O(n), it is very hard to handle a database with millions of data objects due to the high demand on computing resources. Second, the algorithm should require no, or very little, involvement of users. Because it is hard for users to understand the distribution pattern and interrelationships of the data (in fact, this is why we study data mining), it is impractical to require users to provide useful parameters. Normally these subjective parameters cannot represent the real distribution of data objects, and using them to guide the clustering procedure may cause incorrect results. The third problem is how to analyze arbitrary clusters. Many distance-based clustering algorithms use a center and radius to
represent a cluster. These methods are suitable for discovering clusters with spherical shape and similar size, but they do not work well for arbitrary clusters. The last problem is how to eliminate the impact of "noise". It is inevitable that some erroneous data exist in real databases, so detecting and removing such noise is important for obtaining high-quality clusters. This paper attempts to address these problems by proposing a new clustering algorithm named HYBRID. The algorithm has low time and space complexity. It can generate the best clustering results with limited memory by scanning the database only once. Clustering is completed automatically and intelligently, without reading cluster-related parameters from users. HYBRID can discover arbitrary clusters from a dataset containing noise. Our experiments show that this algorithm is a good clustering algorithm. In the rest of the paper, we first briefly introduce some well-known clustering algorithms. In Section 3, our algorithm is described in detail. Section 4 shows the experimental results, and the last section gives our concluding remarks.
2 Related Works

Data clustering has been studied widely in the fields of statistics, machine learning, and databases. This section summarizes some well-known clustering algorithms. CLARANS[1] attempts to deal with very large-scale databases. In CLARANS, a cluster is represented by its medoid, the most centrally located data point in the cluster. CLARANS requires two parameters: Numlocal and Maxneighbor. Numlocal represents the expected cluster number K, and the clustering problem is to find Numlocal medoids. The clustering process is formalized as searching a graph in which each node is a K-partition represented by a set of K medoids, and two nodes are neighbors if they differ by only one medoid. Each step of the clustering finds a node with the minimum cost among Maxneighbor neighbors via random searching. This node is called the local minimum node. CLARANS stops when the local minimum node does not change. CLARANS requires the cluster number as an input parameter, and it cannot discover arbitrary clusters. In addition, it has a high time complexity of O(n^2). BIRCH[2] is a hierarchical clustering method. In BIRCH, a cluster is represented by its center and radius. BIRCH defines the concept of a Cluster Feature (CF), and then performs clustering by dynamically building a CF-tree. A CF-tree is built by inserting new data objects into the closest leaf node, provided the new radius is less than the radius limitation. When the CF-tree runs out of memory, the radius limitation is increased automatically so that a leaf node can "absorb" more data objects, and a new, smaller CF-tree is created. The drawback of BIRCH is that it cannot discover arbitrary clusters. However, it requires no input parameters, and it has the ability to handle noise. It is also an incremental clustering method that does not require the whole dataset to be read into main memory in advance; it scans the dataset only once, so the space complexity is very low and its time complexity is only O(n). DBSCAN[3] is a density-based clustering algorithm. It needs two parameters: Eps and MinPts. DBSCAN defines a cluster to be a maximal set of density-connected points, and every core point in a cluster must have at least a minimum number of points (MinPts) within a given radius (Eps). To find a cluster, DBSCAN
checks the Eps-neighborhood of each point in the database. If the Eps-neighborhood of a point p contains at least MinPts points, a cluster with p as a core point is generated. DBSCAN then retrieves all density-reachable points from these core points; in this process, some density-reachable clusters are merged together. When no new point can be added to any cluster, the clustering procedure stops. DBSCAN can discover arbitrary clusters and find noise very well. However, its disadvantages are obvious. Firstly, DBSCAN is very sensitive to its parameters. Secondly, its time complexity reaches O(n^2) if a spatial index is not used; although the time complexity can be reduced to O(n log n) by using a spatial index, the space complexity then increases. CURE[4] is also a hierarchical clustering algorithm. In CURE, a cluster is represented not by a center and radius, but by a constant number of well-scattered points in the cluster. CURE requires the cluster number K as an input parameter. At the beginning, each data point is treated as a cluster, and then the two clusters with the closest pair of representative points are merged. This process is repeated until the expected cluster number is obtained. Because the representative points can capture the shape and extent of a cluster, CURE can discover arbitrary clusters. At the same time, by shrinking the chosen points towards the center of a cluster, noise can be detected and removed. CURE has the disadvantage of requiring the cluster number as an input parameter. Furthermore, it has a space complexity of O(n), so CURE has to use random sampling to reduce the data volume. However, for a very large database, the sample size is still too large to be read into memory in advance, and if we restrict the sample size to fit into memory, clustering errors may occur due to too few samples. The time complexity of CURE is O(n^2 log n) in the worst case; for a low-dimensional data space, the time complexity is O(n^2). In contrast, HYBRID has the following advantages: 1. HYBRID has low time and space complexity. 2. HYBRID does not require any input parameters related to the cluster pattern. 3. HYBRID can discover arbitrary clusters. 4. HYBRID has a good ability to handle noise.
3 The HYBRID Clustering Algorithm

3.1 The Basic Idea

HYBRID is based on the observation that any cluster can be considered an aggregation of spherical sub-clusters of the same size. We call these spherical sub-clusters atom-clusters, and the clusters formed by atom-clusters are called molecule-clusters. Atom-clusters and molecule-clusters have the following relationship: the precision of a molecule-cluster is a function of the size of its atom-clusters. The smaller the atom-clusters are, the more atom-clusters the molecule-cluster can contain and the more precise it is (as shown in Fig. 1). The size of an atom-cluster depends on the size of the memory. If all data objects can be held in memory, an atom-cluster is a real data point. Otherwise, an atom-cluster is represented by the center and radius of some points. When the memory size increases, more atom-clusters can be kept in memory; each atom-cluster then stands for a smaller number of real points, and the size of an atom-cluster becomes smaller. This means that more precise molecule-clusters can be obtained. Therefore, the precision of molecule-clusters is a function of the memory size. In practice, we can obtain the expected quality by tuning the size of the memory.
Fig. 1. The relationship of atom-cluster and molecule-cluster
Exploiting the above idea, we propose a two-phase clustering algorithm named HYBRID. The first phase generates atom-clusters; the second phase aggregates the atom-clusters to form molecule-clusters. In the first phase, we create atom-clusters according to the size of the memory, using the definition of the Cluster Feature[2] and an incremental clustering method named CLAP[5]. In the second phase, when merging two atom-clusters, we consider not only their distance but also the similarity of their densities. By merging the atom-clusters with the closest distance and the most similar density, our algorithm provides a powerful ability to eliminate noise and obtain isomorphic molecule-clusters.

3.2 Implementation

First Phase: Generate spherical atom-clusters with the same size

We use a method named CLAP[5], an improvement of the BIRCH algorithm[2], to generate atom-clusters. Like BIRCH, CLAP is based on the Clustering Feature and the CF-tree[2], but it introduces a Separate Factor S to define the upper bound of the diameter of a leaf node. A Clustering Feature is defined as a triple CF = {N, LS, SS}, where N is the number of data points in a cluster, LS is the linear sum of the N data points, and SS is the square sum of the N data points. A CF summarizes the information of the data points in a cluster, so that a cluster can be represented by a CF instead of a collection of data points. A CF-tree is a height-balanced tree with two parameters: a branching factor B and a threshold T. The branching factor is the maximum number of children of a non-leaf node, and the threshold defines the limit on the diameter of a cluster. The tree size is a function of T: the larger T is, the smaller the tree. With the introduction of the Separate Factor S, CLAP modifies the procedure for generating the CF-tree: when adding a new child to a leaf node, if the diameter of the new leaf would be greater than S, a new leaf node is generated to hold the new entry, regardless of whether the old leaf has a spare place. This scheme increases the compactness of leaf nodes and improves the precision of clustering. Like T, S is adjusted dynamically using the following heuristic approach, which is similar to that used in [2]:
1. N_{i+1} = Min(2N_i, N), where N is the total amount of input data. We expect the new T_{i+1} and S_{i+1} to allow N_{i+1} data points to be read into memory, so N_{i+1} is an important parameter for estimating T_{i+1} and S_{i+1}.
2. Generally the average radius r of the root cluster in a CF-tree grows with the number of data points N_i, so we maintain a record of r and N_i, and estimate r_{i+1} via least-squares linear regression. We define an expansion factor f = Max(1.0, r_{i+1}/r_i) to predict the growth rate of T_{i+1} and S_{i+1}.
3. We increase T in order to merge child entries in leaf nodes, and increase S in order to merge leaf nodes. We therefore choose the two closest child entries and the two closest leaf nodes to calculate the minimal distance among child entries (entryDmin) and the minimal distance among leaf nodes (leafDmin).
4. Finally, let T_{i+1} = Max(entryDmin, f*T_i) and S_{i+1} = Max(leafDmin, f*S_i). If T_{i+1} = T_i and S_{i+1} = S_i, then we let T_{i+1} = T_i*(N_{i+1}/N_i)^{1/d} and S_{i+1} = S_i*(N_{i+1}/N_i)^{1/d} (d is the dimension of the data space) to make T_{i+1} and S_{i+1} greater than T_i and S_i.

The other two improvements of CLAP are the use of a random sampling technique[6] to greatly decrease the running time, and the combination of global and local searching to eliminate the impact of input order: at the beginning CLAP searches all the leaf nodes sequentially to find a proper cluster, and after the first CF-tree rebuild, CLAP uses the CF-tree structure to speed up the search. After the first phase, we obtain many atom-clusters represented by CFs. We call the set of atom-clusters ATOM. We define the centroid C of an atom-cluster a as C_a = LS/N. According to the properties of the CF-tree, the radius of every atom-cluster is no larger than T; we define the radius R of all atom-clusters as T, and the density D of an atom-cluster a as D_a = N/R. Next we merge the atom-clusters into molecule-clusters of arbitrary shape and size.

Second Phase: Aggregate the atom-clusters to form molecule-clusters

The process of merging atom-clusters can be considered as the edge-expanding process of molecule-clusters, an iterative process of forming a new edge from an old edge and then adding the new edge to the molecule-cluster. We define E_m, the edge of a molecule-cluster m, as the set of atom-clusters that are neighbors of m and have density similar to that of m. In order to define the neighbors of a molecule-cluster, we first define the neighbors of an atom-cluster. The neighbors of an atom-cluster a are defined as:

G_a = {x | Distance(C_a, C_x) ≤ 2R}, x ∈ ATOM.   (1)

Then the neighbors of a molecule-cluster m are defined as:

G_m = ∪ G_a, where a ∈ E_m.   (2)

Because the density within one molecule-cluster may be uneven and vary gradually, we define the density of a molecule-cluster m as the average density of its edge, not the average density of all the atom-clusters contained in m, that is:

D_m = (Σ D_a) / ||E_m||, where ||E_m|| is the number of atom-clusters in E_m and a ∈ E_m.   (3)
We define that an atom-cluster a has a similar density to a molecule-cluster m, if and only if the relationship between a and m satisfies the following formula:
|D_m − D_a| ≤ (D_max − D_min)/2,   (4)
where D_max and D_min are the maximal and minimal densities in the ATOM set. According to the above definitions, the process of forming a new edge of a molecule-cluster m is to pick out, from G_m, those atom-clusters that have density similar to m. In summary, the procedure for generating molecule-clusters is as follows.
initializing the set of atom clusters ATOM;
initializing the set of molecule clusters MOLECULE = empty;
while(ATOM != empty)
{
    initializing the current molecule cluster m = empty;
    choosing any atom cluster a from ATOM, let m = a, ATOM = ATOM - a;
    finding Gm (neighbor atom clusters of m) from ATOM;
    while(Gm != empty)
    {
        generating Em (new edge of m) from Gm;
        ATOM = ATOM - Em;
        m = m + Em;
        finding Gm from ATOM;
    }
    MOLECULE = MOLECULE + m;
}
Fig. 2. The procedure of generating molecule-clusters
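For concreteness, the following is a minimal Python sketch of the procedure in Fig. 2, using the definitions in Equations (1)-(4); the Atom container and the brute-force neighbor search are illustrative simplifications rather than the authors' C implementation.

    import math
    from dataclasses import dataclass

    @dataclass
    class Atom:
        center: tuple    # centroid C_a = LS / N of the atom-cluster
        density: float   # D_a = N / R

    def generate_molecules(atoms, R):
        d_max = max(a.density for a in atoms)
        d_min = min(a.density for a in atoms)
        half_range = (d_max - d_min) / 2           # threshold of Equation (4)
        pool = set(range(len(atoms)))              # the ATOM set
        molecules = []
        while pool:
            seed = pool.pop()
            molecule, edge = [seed], [seed]
            while edge:
                d_m = sum(atoms[i].density for i in edge) / len(edge)   # Eq. (3)
                # Equations (1) and (2): neighbors of m within 2R of its edge.
                g_m = {j for j in pool for i in edge
                       if math.dist(atoms[i].center, atoms[j].center) <= 2 * R}
                # Equation (4): keep only the neighbors with similar density.
                edge = [j for j in g_m
                        if abs(d_m - atoms[j].density) <= half_range]
                pool -= set(edge)
                molecule += edge
            molecules.append(molecule)
        return molecules

A spatial index over the atom centroids would replace the brute-force scan in a production implementation; the control flow, however, is exactly the edge expansion of Fig. 2.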
In the edge-expanding procedure, we consider the distances among atom-clusters as well as their densities. Therefore we can obtain more isomorphic molecule-clusters and treat sparse molecule-clusters as noise.

3.3 Algorithm Analysis

The time complexity of the first phase is O(n)[5]. In the second phase, suppose we have M atom-clusters at the beginning and obtain K molecule-clusters at the end. Then each molecule-cluster has M/K atom-clusters on average. When we begin to generate the i-th molecule-cluster, only (1-(i-1)/K)M atom-clusters are left. During the edge-expanding process, the worst case is that only one atom-cluster is found each time, so we need to search the remaining atom-clusters M/K times to generate a molecule-cluster. Hence the cost of building the i-th molecule-cluster is O((1-(i-1)/K)M^2/K), and the whole cost of phase 2 is about O(M^2/2). The overall time cost of HYBRID is O(n + M^2/2). HYBRID does not require all the data to be read into memory in advance; the space complexity is O(M), which is independent of the number of data points.
4 Experimental Results

HYBRID is developed in the C language and is supported by the MySQL database. HYBRID is executed on a PC with an Intel P4 1.7 GHz CPU and two 256M memory chips; the operating system is Linux.

4.1 Memory Scalability

In this section, we use a 2-dimensional dataset to evaluate the memory scalability. The dataset has four clusters and many noise points. We run the experiment three times: the first time, we use all the memory (approximately 512M); the second time, we limit the memory to half of that; and the third time, the memory is decreased to a quarter of that of the first run. The results are depicted in Fig. 3, from which we can see that the precision of the molecule-clusters decreases as the memory decreases. When the memory is decreased, an atom-cluster absorbs more points and its diameter becomes larger, which may cause two atom-clusters to be merged together wrongly. Then noise points are clustered together and cannot be cleared, and eventually all the atom-clusters may be merged to form one molecule-cluster.
Fig. 3. Memory scalability: (a) the original dataset, (b) the result with full memory, (c) the result with half memory, (d) the result with quarter memory
4.2 Time Scalability

In this section, the time scalability of HYBRID is evaluated on a real dataset extracted from the 1990 US Census Data, which can be found at the Census Bureau website of the U.S. Department of Commerce (http://www.census.gov/DES/www/des.html). This dataset has 1,106,228 records, and each record has 16 attributes. Two distinct ways of increasing the data size are used. One is to increase the number of 2-dimensional points (the results are shown in Table 1 and Fig. 4). The other is to increase the number of dimensions with a fixed number of points (the results are shown in Table 2 and Fig. 5). As the number of dimensions increases, the number of atom-clusters increases very fast, so the time scalability for high-dimensional points is not very good.

Table 1. Result of increasing the number of points (2-dimensional data)

Run           1       2       3       4       5
Point Number  614571  737485  860399  983314  1106228
Time (s)      77      87      96      106     117
Fig. 4. Time scalability as the number of points increases

Table 2. Result of increasing the number of dimensions (245,828 points)

Run                   1    2     3      4      5
Dimension             3    6     9      12     15
Time (s)              39   145   592    2193   3810
Atom-cluster Numbers  736  7214  19149  51072  89814
Fig. 5. Time scalability of dimension number increase
4.3 Comparisons with Other Clustering Algorithms

In this section, we compare the performance of HYBRID with that of DBSCAN and CURE. Although HYBRID is applicable to any dataset for which a similarity matrix is available, we perform HYBRID on four two-dimensional datasets, because clusters in 2D datasets are easy to visualize and similar datasets have been used to evaluate the performance of DBSCAN and CURE (http://www.cs.umn.edu/~han/chameleon.html)[7]. The geometric shapes of our datasets are shown below.
Fig. 6. The four two-dimensional datasets: DS1, DS2, DS3, DS4
The first dataset, DS1, has four clusters of different size, shape, and density, and contains noise points. The second dataset, DS2, contains two clusters that are close to each other, and different regions of the clusters have different densities. The third dataset, DS3, has six clusters of different size, shape, and orientation, as well as random noise points and special artifacts such as streaks running across clusters. Finally, the fourth dataset, DS4, has eight clusters of different shape, size, density, and orientation, as well as random noise. A challenge of this dataset is that the clusters are very close to each other and have different densities. The results of HYBRID are shown below:
Fig. 7. The clustering results of HYBRID
From the above pictures, we can see that HYBRID discovers the cluster patterns in DS1, DS2, and DS4 correctly. For DS3, because of the streaks running across the clusters, two clusters are merged improperly, but the other clusters are correct. Additionally, at the edges of the clusters, due to the impact of noise and uneven densities, HYBRID generates some small clusters by mistake. However, the overall results show that HYBRID can discover arbitrary clusters accurately and eliminate noise powerfully. The following results[7] are obtained by DBSCAN:
Fig. 8. The clustering results of DS1 and DS2 obtained by DBSCAN (parameter settings: Eps=0.5, MinPts=4; Eps=3.5, MinPts=4; Eps=0.4, MinPts=4; Eps=3.0, MinPts=4)
DBSCAN can find clusters of arbitrary shape if the right density of the clusters can be determined a priori and the density of the clusters is uniform. DBSCAN finds the right clusters for dataset DS1, as long as it is supplied with the right combination of Eps and MinPts. However, it does not perform well on DS2, as this dataset contains clusters of different densities. The next two pictures show the sensitivity of DBSCAN with respect to the Eps parameter: as we decrease Eps, the natural clusters in the dataset are fragmented into a large number of smaller clusters. The results[7] of CURE are shown as follows:
Fig. 9. The clustering results of DS2, DS4 obtained by CURE (K = 2, 8, 17, 25)
CURE is able to find the right clusters for DS2, but it fails to find the right clusters for DS4. Because CURE does not consider the densities of clusters, it may mistakenly merge two clusters with complex shapes under the impact of noise; CURE may select a wrong pair of clusters that are close to each other but have very different densities and merge them together. In addition, CURE requires the cluster number as an input parameter, and an incorrect parameter results in wrong clusters.
5 Conclusion

In this paper, we have proposed a new clustering algorithm named HYBRID, which has the following advantages: low time and space complexity, no requirement for cluster-related parameters to guide the clustering procedure, the ability to handle clusters of arbitrary size and shape, and a powerful ability to eliminate outliers. The experimental results show that HYBRID is a good clustering algorithm. Since clustering deals with very large databases and high-dimensional data types, it places high demands on space and time. As a solution, parallel algorithms can be used to provide powerful computing ability for clustering. Our further research is to study parallel clustering algorithms.
References
1. Ng, R. T., Han, J.: Efficient and effective clustering methods for spatial data mining. Proceedings of the 20th VLDB Conference, Santiago, Chile (1994) 144-155
2. Zhang, T., Ramakrishnan, R., Livny, M.: BIRCH: an efficient data clustering method for very large databases. Proceedings of the ACM SIGMOD Conference on Management of Data, Montreal, Canada (1996) 103-114
3. Ester, M., Kriegel, H.P., Sander, J.: A density-based algorithm for discovering clusters in large spatial databases with noise. Proceedings of 2nd Intl. Conf. on Knowledge Discovery and Data Mining, Portland, USA (1996) 226-231
4. Guha, S., Rastogi, R., Shim, K.: CURE: an efficient clustering algorithm for large databases. Information Systems 26 (2001) 35-58
5. Zhou, B., Shen, J.Y., Peng, Q.K.: Clustering algorithm based on random-sampling and cluster-feature. Journal of Xi'an Jiaotong University 37 (2003) 1234-1237
6. Motwani, R., Raghavan, P.: Randomized Algorithms. Cambridge University Press, London (1995)
7. Karypis, G., Han, E.-H., Kumar, V.: CHAMELEON: a hierarchical clustering algorithm using dynamic modeling. IEEE Computer 32 (1999) 68-75
A Fuzzy Adaptive Filter for State Estimation of Unknown Structural System and Evaluation for Sound Environment

Akira Ikuta(1), Hisako Masuike(2), Yegui Xiao(1), and Mitsuo Ohta(3)

1 Prefectural University of Hiroshima, Hiroshima, Japan 734-8558
2 NTT Data Chugoku, Hiroshima, Japan 732-086
3 Hiroshima University, Emeritus, Japan
Abstract. The actual sound environment system exhibits various types of linear and non-linear characteristics, and it often contains an unknown structure. Furthermore, the observations in the sound environment often contain fuzziness due to several causes. In this paper, a method for estimating the specific signal for acoustic environment systems with unknown structure and fuzzy observation is proposed by introducing fuzzy probability theory and a system model of conditional probability type. The effectiveness of the proposed theoretical method is confirmed by applying it to an actual problem of psychological evaluation of the sound environment.
1 Introduction
The internal physical mechanism of an actual sound environment system is often difficult to recognize analytically, and it contains unknown structural characteristics. Furthermore, the stochastic process observed in the actual phenomenon exhibits a complex fluctuation pattern, and there are potentially various nonlinear correlations in addition to the linear correlation between the input and output time series. In our previous study, for complex sound environment systems difficult to analyze by the usual structural methods based on the physical mechanism, a nonlinear system model was derived in expansion series form, reflecting correlation information of various types, from lower order to higher order, between the input and output variables[1]. The conditional probability density function contains the linear and nonlinear correlations in its expansion coefficients, and these correlations play an important role as statistical information for the input-output relationship. In this paper, a complex sound environment system including the human consciousness and response to physical sound phenomena is considered. It should be noted that the observation data in such a system often contain fuzziness due to several causes. For example, the human psychological evaluation of noise annoyance can be judged using 7 scores: 1.very calm, 2.calm, 3.mostly calm, 4.little noisy, 5.noisy,
6.fairly noisy, 7.very noisy[2]. However, each score is affected by human subjectivity, and the borders between two scores are vague. In this situation, in order to evaluate the objective sound environment system, it is desirable to estimate the waveform fluctuation of the specific signal based on the observed data with fuzziness. As typical methods for the state estimation problem, the Kalman filtering theory and its extended filters are well known[3,4,5]. These theories are originally based on a Gaussian assumption for the state fluctuation. On the other hand, the actual signal in a sound environment exhibits various types of probability distribution forms apart from the standard Gaussian distribution, owing to the diverse causes of fluctuation. In our previous studies[6,7], several state estimation methods for sound environment systems with non-Gaussian fluctuations have been proposed on the basis of Bayes' theorem. However, these state estimation algorithms were realized by introducing an additive model of the specific signal and an external noise. Actual sound environment systems exhibit complex fluctuation properties and often contain unknown characteristics in the relationship between the state variable and the observation; furthermore, the observation data often contain fuzziness. Thus, it is necessary to improve the previous state estimation methods by taking account of the complexity and uncertainty of the actual systems. From the above viewpoint, based on fuzzy observations, a method for adaptively estimating the specific signal for sound environment systems with an unknown structural characteristic is theoretically proposed in this study. More specifically, after regarding the human noise annoyance scores as observation data with fuzziness, and adopting an expansion expression of the conditional probability distribution reflecting the information on the linear and non-linear correlations between the input and output signals as the system characteristics, an estimation method for the specific signal with non-Gaussian properties is proposed by introducing fuzzy theory. The proposed estimation method can be applied to an actual complex sound environment system with unknown structure by considering the coefficients of the conditional probability distribution as unknown parameters and estimating these parameters simultaneously with the specific signal. Furthermore, the effectiveness of the proposed theory is confirmed experimentally by applying it to the estimation of the sound level based on successive observations of the psychological evaluation of noise annoyance.
2 Sound Environment System with Unknown Structure and Fuzzy Observation
Consider a complex sound environment system with an unknown structure that cannot be obtained on the basis of the internal physical mechanism of the system. In observations of an actual sound environment system, the sound level data often contain fuzziness due to human subjectivity in noise annoyance evaluation, confidence limitations of sensing devices, quantizing errors in digital observations, etc. Therefore, in order to evaluate adaptively the objective
sound environment, it is desirable to estimate the fluctuation waveform of the specific signal based on the observations with fuzziness. Let x_k and y_k be the input and output signals at a discrete time k for a sound environment system. For example, in the psychological evaluation of a sound environment, x_k and y_k denote the physical sound level and the human response to it, respectively. It is assumed that there are complex nonlinear relationships between them. Since the system characteristics are unknown, a system model in the form of a conditional probability is adopted. More precisely, attention is focused on the conditional probability distribution function P(y_k|x_k), which reflects all the linear and nonlinear correlation information between x_k and y_k. Furthermore, let z_k be the fuzzy observation obtained from y_k. For example, z_k expresses the noise annoyance scores (1.very calm, 2.calm, 3.mostly calm, 4.little noisy, 5.noisy, 6.fairly noisy, 7.very noisy) taking the individual and psychological situation for y_k into consideration. The fuzziness of z_k is characterized by the membership function \mu_{z_k}(y_k), for which the following function is adopted:

\mu_{z_k}(y_k) = \exp\{-\alpha (y_k - z_k)^2\},   (1)
where \alpha (> 0) is a parameter. Expanding the joint probability distribution function P(x_k, y_k) in an orthogonal form based on the product of P(x_k) and P(y_k), the following expression for the conditional probability distribution function P(y_k|x_k) can be derived:

P(y_k|x_k) = \frac{P(x_k, y_k)}{P(x_k)} = P(y_k) \sum_{r=0}^{R} \sum_{s=0}^{S} A_{rs} \theta_r^{(1)}(x_k) \theta_s^{(2)}(y_k),   (2)

A_{rs} = \langle \theta_r^{(1)}(x_k) \theta_s^{(2)}(y_k) \rangle,  (A_{00} = 1, A_{r0} = A_{0s} = 0  (r, s = 1, 2, \cdots)),   (3)
where \langle \cdot \rangle denotes the averaging operation on the variables. The linear and non-linear correlation information between x_k and y_k is reflected hierarchically in each expansion coefficient A_{rs}. The functions \theta_r^{(1)}(x_k) and \theta_s^{(2)}(y_k) are orthonormal polynomials with the weighting functions P(x_k) and P(y_k) respectively, and can be constructed by Schmidt orthogonalization[6]. Though (2) is originally an infinite series expansion, a finite expansion with r ≤ R and s ≤ S is adopted, because only finitely many expansion coefficients are available and consideration of the first few terms is usually sufficient in practice. Since the objective system contains an unknown specific signal and unknown structure, the expansion coefficients A_{rs}, which express hierarchically the statistical relationship between x_k and y_k, must be estimated on the basis of the fuzzy observation z_k. Considering the expansion coefficients A_{rs} as an unknown parameter vector a:

a = (a_1, a_2, \cdots, a_I) = (a_{(1)}, a_{(2)}, \cdots, a_{(S)}),  a_{(s)} = (A_{1s}, A_{2s}, \cdots, A_{Rs}),  (s = 1, 2, \cdots, S),   (4)
the following simple dynamical model is introduced for the simultaneous estimation of the parameters and the specific signal x_k:

a_{k+1} = a_k,  (a_k = (a_{1,k}, a_{2,k}, \cdots, a_{I,k}) = (a_{(1),k}, a_{(2),k}, \cdots, a_{(S),k})),   (5)

where I (= RS) is the number of unknown expansion coefficients to be estimated. On the other hand, based on the correlative property in the time domain of the specific signal fluctuating with non-Gaussian properties, the following time transition model for the specific signal is established:

x_{k+1} = F x_k + G u_k,   (6)
where u_k is a random input with mean 0 and variance \sigma_u^2. The two parameters F and G are estimated by a system identification method[8]. A method to estimate x_k based on the fuzzy observation z_k is derived in this study by introducing fuzzy probability theory and an expansion series expression of the conditional probability distribution function.
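As a minimal illustration of the models introduced so far, the following sketch implements the membership function of Equation (1) and simulates the state transition of Equation (6); all parameter values here are illustrative assumptions, not values from the paper.

    import math
    import random

    F, G = 0.95, 1.0     # transition parameters of Eq. (6), assumed here
    ALPHA = 0.5          # sharpness of the membership function in Eq. (1)

    def membership(y, z):
        # Eq. (1): degree to which the fuzzy score z is compatible with level y.
        return math.exp(-ALPHA * (y - z) ** 2)

    def simulate(x0, steps, sigma_u=1.0):
        x = x0
        for _ in range(steps):
            u = random.gauss(0.0, sigma_u)
            x = F * x + G * u    # Eq. (6): time transition of the sound level
            yield x

    # e.g. levels = list(simulate(70.0, 100))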
3 State Estimation Based on Fuzzy Observation
In order to derive an estimation algorithm for the specific signal x_k based on successive observations of the fuzzy data z_k, we focus our attention on Bayes' theorem[9] for the conditional probability distribution. Since the parameter a_k is also unknown, the joint conditional probability distribution of x_k and a_k is considered:

P(x_k, a_k | Z_k) = \frac{P(x_k, a_k, z_k | Z_{k-1})}{P(z_k | Z_{k-1})},   (7)
where Z_k (= {z_1, z_2, \cdots, z_k}) is the set of observation data up to time k. Applying fuzzy probability[10] to the right side of (7) and expanding it in a general form of the statistical orthogonal expansion series[11], the conditional probability density function P(x_k, a_k|Z_k) can be expressed as:

P(x_k, a_k | Z_k) = \frac{\int \mu_{z_k}(y_k) P(x_k, a_k, y_k | Z_{k-1}) dy_k}{\int \mu_{z_k}(y_k) P(y_k | Z_{k-1}) dy_k}
= \frac{\sum_{l=0}^{\infty} \sum_{m=0}^{\infty} \sum_{n=0}^{\infty} B_{lmn} P_0(x_k|Z_{k-1}) P_0(a_k|Z_{k-1}) \psi_l^{(1)}(x_k) \psi_m^{(2)}(a_k) I_n(z_k)}{\sum_{n=0}^{\infty} B_{00n} I_n(z_k)}   (8)

with

I_n(z_k) = \int \mu_{z_k}(y_k) P_0(y_k | Z_{k-1}) \psi_n^{(3)}(y_k) dy_k,   (9)

B_{lmn} = \langle \psi_l^{(1)}(x_k) \psi_m^{(2)}(a_k) \psi_n^{(3)}(y_k) | Z_{k-1} \rangle,  (m = (m_1, m_2, \cdots, m_I)).   (10)
The functions \psi_l^{(1)}(x_k), \psi_m^{(2)}(a_k) and \psi_n^{(3)}(y_k) are the orthogonal polynomials of degrees l, m and n with the weighting functions P_0(x_k|Z_{k-1}), P_0(a_k|Z_{k-1}) and P_0(y_k|Z_{k-1}), which can be chosen artificially as probability density functions describing the dominant parts of P(x_k|Z_{k-1}), P(a_k|Z_{k-1}) and P(y_k|Z_{k-1}). Based on (8), and using the orthonormal relationships of \psi_l^{(1)}(x_k) and \psi_m^{(2)}(a_k), the recurrence algorithm for estimating an arbitrary (L, M)-th order polynomial function f_{L,M}(x_k, a_k) of x_k and a_k can be derived as follows:
\hat{f}_{L,M}(x_k, a_k) = \langle f_{L,M}(x_k, a_k) | Z_k \rangle = \frac{\sum_{l=0}^{L} \sum_{m=0}^{M} \sum_{n=0}^{\infty} C_{lm}^{LM} B_{lmn} I_n(z_k)}{\sum_{n=0}^{\infty} B_{00n} I_n(z_k)},   (11)

where C_{lm}^{LM} is the expansion coefficient determined by the equality:

f_{L,M}(x_k, a_k) = \sum_{l=0}^{L} \sum_{m=0}^{M} C_{lm}^{LM} \psi_l^{(1)}(x_k) \psi_m^{(2)}(a_k).   (12)
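Once the coefficients have been computed, the estimate of Equation (11) is a simple ratio of weighted sums; the following sketch (with assumed array shapes, chosen for illustration) evaluates it.

    def estimate(C, B, I):
        # C[l][m]   : expansion coefficients of f_{L,M} (Eq. (12))
        # B[l][m][n]: predicted expansion coefficients B_{lmn} (Eq. (10))
        # I[n]      : integrals I_n(z_k) for the current fuzzy datum (Eq. (9))
        num = sum(C[l][m] * B[l][m][n] * I[n]
                  for l in range(len(C))
                  for m in range(len(C[0]))
                  for n in range(len(I)))
        den = sum(B[0][0][n] * I[n] for n in range(len(I)))
        return num / den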
In order to make the general estimation algorithm more concrete, the well-known Gaussian distribution is adopted for P_0(x_k|Z_{k-1}), P_0(a_k|Z_{k-1}) and P_0(y_k|Z_{k-1}), because this probability density function is the most standard one:

P_0(x_k|Z_{k-1}) = N(x_k; x_k^*, \Gamma_{x_k}),  P_0(a_k|Z_{k-1}) = \prod_{i=1}^{I} N(a_{i,k}; a_{i,k}^*, \Gamma_{a_{i,k}}),  P_0(y_k|Z_{k-1}) = N(y_k; y_k^*, \Omega_k)   (13)

with

N(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\{-\frac{(x-\mu)^2}{2\sigma^2}\},
x_k^* = \langle x_k|Z_{k-1} \rangle,  \Gamma_{x_k} = \langle (x_k - x_k^*)^2 | Z_{k-1} \rangle,
a_{i,k}^* = \langle a_{i,k}|Z_{k-1} \rangle,  \Gamma_{a_{i,k}} = \langle (a_{i,k} - a_{i,k}^*)^2 | Z_{k-1} \rangle,
y_k^* = \langle y_k|Z_{k-1} \rangle,  \Omega_k = \langle (y_k - y_k^*)^2 | Z_{k-1} \rangle.   (14)
Then, the orthonormal functions for the three weighting probability density functions in (13) are given by the Hermite polynomials[12]:

\psi_l^{(1)}(x_k) = \frac{1}{\sqrt{l!}} H_l\left(\frac{x_k - x_k^*}{\sqrt{\Gamma_{x_k}}}\right),  \psi_m^{(2)}(a_k) = \prod_{i=1}^{I} \frac{1}{\sqrt{m_i!}} H_{m_i}\left(\frac{a_{i,k} - a_{i,k}^*}{\sqrt{\Gamma_{a_{i,k}}}}\right),
\psi_n^{(3)}(y_k) = \frac{1}{\sqrt{n!}} H_n\left(\frac{y_k - y_k^*}{\sqrt{\Omega_k}}\right).   (15)
Accordingly, by considering (1), (13) and (15), (9) can be given by

I_n(z_k) = \frac{e^{K_3(z_k)}}{\sqrt{2 K_1 \Omega_k}} \int \frac{1}{\sqrt{\pi / K_1}} \exp\{-\frac{(y_k - K_2(z_k))^2}{1/K_1}\} \cdot \frac{1}{\sqrt{n!}} \sum_{r=0}^{n} d_{nr} H_r\left(\frac{y_k - K_2(z_k)}{\sqrt{1/2K_1}}\right) dy_k   (16)

with

K_1 = \frac{2\alpha\Omega_k + 1}{2\Omega_k},  K_2(z_k) = \frac{2\alpha\Omega_k z_k + y_k^*}{2\alpha\Omega_k + 1},
K_3(z_k) = K_1 \left\{K_2^2(z_k) - \frac{2\alpha\Omega_k z_k^2 + y_k^{*2}}{2\alpha\Omega_k + 1}\right\},   (17)
where the fuzzy data z_k are reflected in K_2(z_k) and K_3(z_k). Furthermore, d_{nr} (r = 0, 1, 2, \cdots, n) are the expansion coefficients in the equality:

H_n\left(\frac{y_k - y_k^*}{\sqrt{\Omega_k}}\right) = \sum_{r=0}^{n} d_{nr} H_r\left(\frac{y_k - K_2(z_k)}{\sqrt{1/2K_1}}\right).   (18)
By considering the orthonormal condition of the Hermite polynomials[12], (16) can be expressed as follows:

I_n(z_k) = \frac{e^{K_3(z_k)}}{\sqrt{2 K_1 \Omega_k}} \frac{d_{n0}}{\sqrt{n!}},   (19)

where a few concrete expressions for d_{n0} in (19) are:

d_{00} = 1,  d_{10} = \frac{K_2(z_k) - y_k^*}{\sqrt{\Omega_k}},  d_{20} = \frac{1}{2K_1\Omega_k} + \frac{(K_2(z_k) - y_k^*)^2}{\Omega_k} - 1,

d_{30} = \frac{3(K_2(z_k) - y_k^*)}{2K_1\Omega_k^{3/2}} - \frac{3(K_2(z_k) - y_k^*)}{\sqrt{\Omega_k}} + \frac{(K_2(z_k) - y_k^*)^3}{\Omega_k^{3/2}},

d_{40} = \frac{3}{(2\Omega_k K_1)^2} + \frac{3\{(K_2(z_k) - y_k^*)^2 - \Omega_k\}}{\Omega_k^2 K_1} + \frac{(K_2(z_k) - y_k^*)^4 - 6\Omega_k(K_2(z_k) - y_k^*)^2 + 3\Omega_k^2}{\Omega_k^2}.   (20)
A Fuzzy Adaptive Filter for State Estimation of Unknown Structural System
yk∗
1167
= |Zk−1 >=< ∞ 1
=
e1s Ars θr(1) (xk )|Zk−1 >
r=0 s=0 1
e1s < A(s),k Θ(xk )|Zk−1 >,
(21)
s=0
Ωk =
=
r=0 s=0
=
2
e2s < A(s),k Θ(xk )|Zk−1 >
(22)
s=0
with A(s),k = (0, a(s),k ), (s = 1, 2, · · ·), A(0),k = (1, 0, 0, · · · , 0), Θ(xk ) = (θ0 (xk ), θ1 (xk ), · · · , θR (xk ))T , (1)
(1)
(1)
(23)
where T denotes the transpose of a matrix. The coefficients e1s and e2s in (21) and (22) are determined in advance by expanding yk and (yk − yk∗ )2 in the following orthogonal series forms: yk =
1 i=0
e1i θi (yk ), (yk − yk∗ )2 = (2)
2 i=0
(2)
e2i θi (yk ).
(24)
(2)
Furthermore, using (2) and the orthonormal condition of θi (yk ), each expansion coefficient Blmn defined by (10) can be obtained through the similar calculation process to (21) and (22), as follows: (1) (2) Blmn = < ψl (xk )ψm (ak ) ψn(3) (yk )P (yk |xk )dyk |Zk−1 > = (ψn(3) (yk ) =
n s=0 n i=0
(1)
(2) ens < ψl (xk )ψm (ak )A(s),k Θ(xk )|Zk−1 >,
(25)
(2)
eni θi (yk ), eni ; appropriate coef f icients).
In the above, the expansion coefficient Blmn can be given by the predictions of xk and ak . Finally, by considering (5) and (6), the prediction step to perform the recurrence estimation can be given for an arbitrary function gL,M(xk+1 , ak+1 ) with (L, M)th order, as follows: ∗ gL,M (xk+1 , ak+1 ) = < gL,M (xk+1 , ak+1 )|Zk >
= < gL,M (F xk + Guk , ak )|Zk > .
(26)
1168
A. Ikuta et al.
The above equation means that the predictions of xk+1 and ak+1 at a discrete time k are given in the form of estimates for the polynomial functions of xk and ak . Therefore, by combining the estimation algorithm of (11) with the prediction algorithm of (26), the recurrence estimation of the specific signal can be achieved.
4
Application to Psychological Evaluation for Noise Annoyance
To find the quantitative relationship between the human noise annoyance and the physical sound level for environmental noises is important from the viewpoint of noise assessment. Especially, in the evaluation for a regional sound environment, the investigation based on questionnaires to the regional inhabitants is often given when the experimental measurement at every instantaneous time and at every point in the whole area of the region is difficult. Therefore, it is very important to estimate the sound level based on the human noise annoyance data. It has been reported that the noise annoyance based on the human sensitivity can be distinguished each other from 7 annoyance scores, for instance, 1.very calm, 2.calm, 3.mostly calm, 4.little noisy, 5.noisy, 6.fairly noisy, 7.very noisy, in the psychological acoustics[2]. After recording the road traffic noise by use of a sound level meter and a data recorder, by replaying the recorded tape through amplifier and loudspeaker in a laboratory room, 6 female subjects (A, B, · · · , F ) aged of 22-24 with normal hearing ability judged one score among 7 noise annoyance scores (i.e.,1, 2, · · · , 7) at every 5 [sec.], according to their impressions for the loudness at each moment using 7 categories from very calm to very noisy. Two kinds of data (Data 1 and Data 2) were used, namely, the sound level data of road traffic noise with mean values 71.4 [dB] and 80.2 [dB]. The proposed method was applied to an estimation of the time series xk for sound level of a road traffic noise based on the successive judgments zk on human annoyance scores after regarding zk as fuzzy observation data. Figure 1 shows one of the estimated results of the waveform fluctuation of the sound level. In this figure, the horizontal axis shows the discrete time k of the estimation process, and the vertical axis represents the sound level. For comparison, the estimated result obtained by a method without considering fuzzy theory is also shown in this figure. More specifically, a similar algorithm to our previously reported methods[6,7] based on expansion expressions of Bayes’ theorem without consideing the membership function in (8) is applied to estimate the specific signal xk . There are great discrepancies between the estimates based on the method without considering the membership function and the true values, while the proposed method estimates precisely the waveform of the sound level with rapidly changing fluctuation. The root mean squared errors of the estimation are shown in Table 1 (for Data 1) and Table 2 (for Data 2). It is obvious that the proposed method shows more accurate estimation than the results based on the method without considering fuzzy theory.
[Figure 1: sound level X_k in dBA (vertical axis) versus discrete time k (horizontal axis), comparing the true values, the estimated results considering fuzzy theory, and the estimated results without considering fuzzy theory]
Fig. 1. Estimation results of the fluctuation waveform of the sound level based on the successive judgement of human annoyance scores by subject A (for Data 1)

Table 1. Root mean squared error of the estimation in [dB] (for Data 1)

Subject          A     B     C     D     E     F
Proposed Method  3.65  3.63  4.51  4.62  4.89  4.56
Compared Method  7.55  4.10  15.8  5.06  5.13  5.75

Table 2. Root mean squared error of the estimation in [dB] (for Data 2)

Subject          A     B     C     D     E     F
Proposed Method  4.59  4.26  4.82  6.80  7.49  4.65
Compared Method  10.7  7.79  4.96  14.6  11.6  4.64
5 Conclusion
In this paper, a new method for estimating a specific signal in sound environment systems with unknown structure, based on observed data with fuzziness, has been proposed. The proposed estimation method is realized by introducing a system model of conditional probability type and fuzzy probability theory. The method has been applied to the estimation of an actual sound environment, and it has been experimentally verified that better results are obtained than with a method that does not consider fuzzy theory.
The proposed approach is quite different from the traditional standard approaches. It is still at an early stage of development, and a number of practical problems remain to be investigated in the future. These include: (i) application of the proposed state estimation method to a diverse range of practical estimation problems for sound environment systems with unknown structure and fuzzy observation; (ii) extension of the proposed method to cases with a multi-dimensional state variable and a multi-source configuration; (iii) finding an optimal number of expansion terms in the proposed estimation algorithm of expansion expression type; and (iv) extension of the proposed theory to actual situations with external noise (e.g., background noise).
References

1. Ohta, M., Ikuta, A.: An acoustic signal processing for generalized regression analysis with reduced information loss based on data observed with amplitude limitation. Acustica 81 (1995) 129-135
2. Namba, S., Kuwano, S., Nakamura, T.: Rating of road traffic noise using the method of continuous judgement by category. J. Acoust. Soc. Japan 34 (1978) 29-34
3. Kalman, R. E.: A new approach to linear filtering and prediction problems. Trans. ASME, Series D, J. Basic Engineering 82 (1960) 35-45
4. Kalman, R. E., Bucy, R.: New results in linear filtering and prediction theory. Trans. ASME, Series D, J. Basic Engineering 83 (1961) 95-108
5. Kushner, H. J.: Approximations to optimal nonlinear filters. IEEE Trans. Autom. Control AC-12 (1967) 546-556
6. Ohta, M., Yamada, H.: New methodological trials of dynamical state estimation for the noise and vibration environmental system - establishment of general theory and its application to urban noise problems. Acustica 55 (1984) 199-212
7. Ikuta, A., Ohta, M.: A state estimation method of impulsive signal using digital filter under the existence of external noise and its application to room acoustics. IEICE Trans. Fundamentals E75-A (1992) 988-995
8. Eykhoff, P.: System Identification: Parameter and State Estimation. John Wiley & Sons (1974)
9. Suzuki, Y., Kunitomo, N.: Bayes Statistics and Its Application. Tokyo University Press (1989) 50-52
10. Zadeh, L. A.: Probability measures of fuzzy events. J. Math. Anal. Appl. 23 (1968) 421-427
11. Ohta, M., Koizumi, T.: General treatment of the response of a nonlinear rectifying device to a stationary random input. IEEE Trans. Inf. Theory IT-14 (1968) 595-598
12. Cramér, H.: Mathematical Methods of Statistics. Princeton University Press (1951) 133, 221-227
Preventing Meaningless Stock Time Series Pattern Discovery by Changing Perceptually Important Point Detection

Tak-chung Fu 1,2,†, Fu-lai Chung 1, Robert Luk 1, and Chak-man Ng 2

1 Department of Computing, The Hong Kong Polytechnic University, Hong Kong. {cstcfu, cskchung, csrluk}@comp.polyu.edu.hk
2 Department of Computing and Information Management, Hong Kong Institute of Vocational Education (Chai Wan), Hong Kong. [email protected]

Abstract. Discovery of interesting or frequently appearing time series patterns is one of the important tasks in various time series data mining applications. However, recent research has criticized the discovery of subsequence patterns in time series by clustering as meaningless, owing to the presence of trivially matched subsequences when the subsequences are formed with the sliding window method. The objective of this paper is to propose a threshold-free approach that improves the sliding-window segmentation of long stock time series into subsequences. The proposed approach filters the trivially matched subsequences by detecting changes of the Perceptually Important Points (PIPs) and reduces the dimensionality through PIP identification.
1 Introduction

When time series data are divided into subsequences, interesting patterns can be discovered and it becomes easier to query, understand, and mine them. Therefore, the discovery of frequently appearing time series patterns, also called surprising patterns in [1], has become one of the important tasks in various time series data mining applications. For time series pattern discovery, a commonly employed technique is clustering. However, applying clustering approaches to discover frequently appearing patterns has recently been criticized as meaningless when focusing on time series subsequences [2]. This is because, when a sliding window with a fixed window size is used to discretize a long time series into subsequences, trivially matched subsequences always exist. The existence of such subsequences leads to the discovery of patterns that are derivations of the sine curve. A subsequence is a trivial match when it is similar to an adjacent subsequence formed by the sliding window; the best matches to a subsequence, apart from itself, tend to be the subsequences that begin just one or two points to the left or right of the original subsequence [3]. Therefore, it is necessary to prevent the over-counting of these trivial matches. For example, in Fig. 1, the shapes of S1, S2 and S3 are similar to a head and shoulders (H&S) pattern while the
† Corresponding Author.
shape of S4 is completely different from them. Therefore, S2 and S3 should be considered trivial matches of S1, and only S1 and S4 should be considered in this case.
Fig. 1. Trivial match in time series subsequences (with PIPs identified in each subsequence)
References [3,4] define the most frequently appearing pattern (called the most significant motif, 1-motif, in [3]) in a time series P as the subsequence S1 that has the highest count of non-trivial matches. Accordingly, the Kth most frequently appearing pattern (significant motif, K-motif) in P is the subsequence SK that has the Kth highest count of non-trivial matches and satisfies D(SK, Si) > 2R for all 1 ≤ i < K. However, it is difficult to define the threshold R that distinguishes trivial from non-trivial matches; it is case dependent and there is no general rule for choosing this value. Furthermore, [2] suggested applying a classic clustering algorithm, in place of subsequence time series clustering, to cluster only the motifs discovered by the K-motif detection algorithm. The objective of this paper is to develop a threshold-free approach that improves the method for segmenting a long stock time series into subsequences using a sliding window. The goal is redefined as filtering out all the trivially matched subsequences formed by the sliding window; the remaining subsequences are then considered non-trivial matches for the subsequent discovery of frequently appearing patterns.
2 The Proposed Frequently Appearing Pattern Discovery Process

Given a time series P = {p_1, ..., p_m} and fixing the width of the sliding window at w, a set of time series segments W(P) = {S_i = [p_i, ..., p_{i+w-1}] | i = 1, ..., m - w + 1} can be formed. To identify trivial matches among the subsequences formed by the sliding window, a method based on detecting changes of the identified Perceptually Important Points (PIPs) is introduced. PIP identification was first proposed in [5] for dimensionality reduction and pattern matching of stock time series. It is based on identifying the critical points of the time series, as the general shape of a stock time series is typically characterized by a few points. By comparing the PIPs identified in two consecutive subsequences, a trivial match occurs if the same set of PIPs is identified, in which case the second subsequence can be ignored. Otherwise, both subsequences are non-trivial matches and should be considered as subsequence candidates for the pattern discovery process. This process proceeds from the first subsequence of the time series obtained by the sliding window to the end of the series. In Fig. 1, the same set of PIPs is identified in subsequences S1, S2 and S3. Therefore, the matching of subsequences S2 and S3 with subsequence S1 is a trivial match, and subsequences S2 and S3
should be filtered out. On the other hand, the set of PIPs obtained from subsequence S4 is different from that of subsequence S3, which means they are a non-trivial match. Therefore, S1 and S4 are identified as the subsequence candidates. After all trivially matched subsequences are filtered out, the remaining subsequences are considered non-trivial matches and serve as the candidates for the subsequent discovery of frequently appearing patterns. They are the input patterns for the training of the clustering process; the k-means clustering technique can be applied to these candidates. The trained algorithm is expected to produce a set of patterns M_1, ..., M_k which represent different structures or time series patterns of the data, where k is the number of output patterns. Although the clustering procedure can be applied directly to the subsequence candidates, it becomes quite time consuming when a large number of data points (a high dimension) is considered. By compressing the input patterns with the PIP identification algorithm, dimensionality reduction can be achieved (Fig. 2).
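As an illustration of the filtering idea, here is a minimal Python sketch. It is not the authors' implementation: the vertical-distance criterion used to select PIPs, the function names, and the default of 9 PIPs are assumptions for illustration only.

def pip_identify(series, n_pips):
    # grow the PIP set from the two endpoints; each step adds the point with the
    # largest vertical distance to the line joining its neighbouring PIPs
    pips = [0, len(series) - 1]
    while len(pips) < n_pips:
        best_i, best_d = None, -1.0
        for a, b in zip(pips, pips[1:]):
            for i in range(a + 1, b):
                t = (i - a) / (b - a)
                interp = series[a] + t * (series[b] - series[a])
                d = abs(series[i] - interp)
                if d > best_d:
                    best_i, best_d = i, d
        if best_i is None:
            break
        pips.append(best_i)
        pips.sort()
    return pips

def filter_trivial_matches(series, w, n_pips=9):
    # keep a window only when its PIP index set changes w.r.t. the last kept window
    candidates, prev = [], None
    for i in range(len(series) - w + 1):
        pips = tuple(pip_identify(series[i:i + w], n_pips))
        if pips != prev:
            candidates.append(i)
            prev = pips
    return candidates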
[Figure 2 panels: a) examples of patterns discovered from the original 14 data points and compressed to 9 data points; b) examples of patterns discovered from the original 30 data points and compressed to 9 data points; c) examples of patterns discovered from the original 90 data points and compressed to 9 data points]
Fig. 2. Examples of dimensionality reduction based on PIP identification process
3 Experimental Result

The ability of the proposed pattern discovery algorithm is evaluated in this section. A synthetic time series was used, generated by combining 90 patterns with a length of 89 data points drawn from three common technical patterns in stock time series (i.e., 30×3). The total length of the time series is 7912 data points. Three sets of subsequence candidates were prepared for the clustering process: (i) original, the subsequences formed by the sliding window with w=89 (this is the set claimed to be meaningless in [2]); (ii) motifs, the K-motifs [2,3] formed from the time series, where K is set to 500 based on the method suggested in [3]; and (iii) the proposed PIP method, the subsequence candidates filtered by detecting the change of PIPs, using 9 PIPs. The number of pattern candidates and the time needed for pattern discovery are reported in Fig. 3a. Only 281 motifs were formed, while half of the subsequences were filtered out by detecting the change of PIPs. The proposed PIP method is much faster than the other two approaches because the subsequences are compressed from 89 data points to 9 data points, so the dimension of the clustering process is greatly reduced. Fig. 3b shows the final patterns discovered. Six groups were formed by each of the approaches, and the set of pattern candidates derived from the proposed approach is the most similar to the pattern templates used to form the time series. On the other hand, the patterns discovered from the original subsequences seem unrelated to the patterns used to construct the time series. Although the motifs approach can also discover the patterns used to construct
(a)

Candidate set   No. of candidates   Time
(i) original    7833                1:02:51
(ii) motifs     281                 0:02:33
(iii) PIP       3550                0:00:03

(b)
Fig. 3. (a) Number of pattern candidates and time needed for pattern discovery using different candidate sets, and (b) patterns discovered by (i) original, (ii) motifs and (iii) PIP
the time series, it smooths out the critical points of those patterns. Also, uptrends, downtrends and a group of miscellaneous patterns are discovered by all the approaches. To sum up, meaningless patterns are discovered by applying the clustering process to the raw time series subsequences (i), whereas both the motifs approach and the proposed approach can partially solve this problem by filtering the trivially matched subsequences. However, it is still difficult to determine the starting point of the patterns, which leads to the discovery of shifted patterns. Additionally, the proposed approach preserves the critical points of the discovered patterns and speeds up the discovery process.
4 Conclusion

In this paper, a frequently appearing pattern discovery process for stock time series based on detecting changes of Perceptually Important Points (PIPs) is proposed. The proposed method tackles the main problem of discovering meaningless subsequence patterns with the clustering approach. A threshold-free approach is introduced to filter the trivially matched subsequences, which would otherwise cause the discovery of meaningless patterns. As demonstrated in the experimental results, the proposed method can discover the patterns hidden in a stock time series, speeding up the discovery process by reducing the dimensionality while capturing the critical points of the frequently appearing patterns. We are now working on the problem of determining the optimal number of PIPs for representing the time series subsequences, and the results will be reported in a forthcoming paper.
References

1. Keogh, E., Lonardi, S., Chiu, Y.C.: Finding Surprising Patterns in a Time Series Database in Linear Time and Space. Proc. of ACM SIGKDD (2002) 550-556
2. Keogh, E., Lin, J., Truppel, W.: Clustering of Time Series Subsequences is Meaningless: Implications for Previous and Future Research. Proc. of ICDM (2003) 115-122
3. Lin, J., Keogh, E., Lonardi, S., Patel, P.: Finding Motifs in Time Series. In: Workshop on Temporal Data Mining, at the ACM SIGKDD (2002) 53-68
4. Patel, P., Keogh, E., Lin, J., Lonardi, S.: Mining Motifs in Massive Time Series Databases. Proc. of the ICDM (2002) 370-377
5. Chung, F.L., Fu, T.C., Luk, R., Ng, V.: Flexible Time Series Pattern Matching Based on Perceptually Important Points. In: Workshop on Learning from Temporal and Spatial Data at IJCAI (2001) 1-7
Discovering Frequent Itemsets Using Transaction Identifiers

Duckjin Chai, Heeyoung Choi, and Buhyun Hwang

Department of Computer Science, Chonnam National University, 300 Yongbong-dong, Kwangju, Korea
{djchai, hychoi}@sunny.chonnam.ac.kr
[email protected] Abstract. In this paper, we propose an efficient algorithm which generates frequent itemsets by only one database scan. A frequent itemset is a set of common items that are included in at least as many transactions as a given minimum support. While scanning the database of transactions, our algorithm generates a table having 1-frequent items and a list of transactions per each 1-frequent item, and generates 2-frequent itemsets by using a hash technique. k(k ≥ 3)-frequent itemsets can be simply found by checking whether for all (k − 1)-frequent itemsets used to generate a k-candidate itemset, the number of common transactions in their lists is greater than or equal to the minimum support. The experimental analysis of our algorithm has shown that it can generate frequent itemsets more efficiently than FP-growth algorithm.
1 Introduction
As an information extraction method, data mining is a technology for analyzing a large amount of accumulated data to obtain information and knowledge valuable for decision-making. Because data mining produces information and knowledge helpful in generating profit, it is widely used in various industrial domains such as telecommunications, banking, retailing and distribution, for market basket analysis, fraud detection, customer classification, and so on [1,4,11]. Data mining technologies include association rule discovery, classification, clustering, summarization, sequential pattern discovery, etc. [1,4]. Association rule discovery, the area studied most actively, is a technique for investigating the possibility of simultaneous occurrence of data items. In this paper, we study a technique to discover association rules that describe the associations among data. Most previous studies, such as [2,3,5,6,7,8,9,10], have adopted an Apriori-like heuristic: if any k-itemset, where k is the number of items in the itemset, is not frequent in the database, then no (k + 1)-itemset containing it can be frequent. The essential idea is to iteratively generate the set of (k + 1)-candidate itemsets from the k-frequent itemsets (for k ≥ 1), and to check whether each (k + 1)-candidate itemset is frequent in the database.
This work was supported by Institute of Information Assessment (ITRC).
The Apriori heuristic can achieve good performance by significantly reducing the size of candidate sets. However, in situations with prolific frequent itemsets, long itemsets, or quite low minimum support thresholds, where the support is the number of transactions containing the itemset, an Apriori-like algorithm may still suffer from nontrivial costs [6]. [6] proposed the FP-growth algorithm using a novel tree structure (FP-tree). It discovers frequent itemsets by scanning a database twice, without generating candidate itemsets. FP-growth builds an FP-tree and discovers frequent itemsets by checking all nodes of the FP-tree. For the construction of the FP-tree, FP-growth incurs a nontrivial cost for pruning non-frequent items and sorting the remaining frequent items of each transaction. Moreover, there can be a space problem, because new nodes have to be inserted into the FP-tree frequently whenever the items of a transaction differ from the items on an existing path of the tree. In this paper, we propose an algorithm that can compute the collection of frequent itemsets with only one database scan. We call this algorithm FTL (Frequent itemset extraction algorithm using a TID (Transaction IDentifier) List table). With only one database scan, FTL discovers the 1-frequent items and constructs a TID List table, each row of which consists of a frequent itemset and a list of the transactions containing it. Simultaneously, FTL computes the frequency of 2-itemsets by using a hash function with two 1-frequent items as parameters. This considerably reduces the computing cost needed for the generation of 2-frequent itemsets. k-frequent itemsets (k ≥ 3) are discovered by using the TID List table. Most association rule discovery algorithms spend their time scanning a massive database; the fewer database scans an algorithm needs, the more efficient it is. Consequently, FTL can considerably reduce the computing cost, since it scans the database only once. The remainder of this paper is organized as follows. Section 2 presents the FTL algorithm. In Section 3, we show the performance of FTL through simulation experiments. Section 4 summarizes our study.
2 FTL Algorithm
In association rule discovery, the most efficient approach is to discover frequent itemsets with only one database scan. Previous algorithms, however, had to find frequent itemsets sequentially, from the 1-frequent items up to the final k-frequent itemsets, because frequent itemsets cannot easily be found in a massive database; they therefore tried to reduce the number of database scans and the number of candidate itemsets. In this section, we propose an algorithm that finds frequent itemsets with only one database scan. The discovery of frequent itemsets amounts to finding the sets of common items that are included in a number of transactions satisfying a given minimum support. Consequently, if there is a data structure holding information
Table 1. A transaction database as running example

TID  Items
100  f, a, c, d, g, i, m, p
200  a, b, c, f, l, m, o
300  b, f, h, j, o
400  b, c, k, s, p
500  a, f, c, e, l, p, m, n
about the transactions in which each 1-frequent item is included, we can find frequent itemsets by searching that data structure without further database scans. We generate such a data structure, called the TID List table. Each row of the TID List table consists of a 1-frequent item and a list of the transactions containing it. The TID List table is used to generate the k-frequent itemsets (k ≥ 3). The 1-frequent items are discovered while the TID List table is constructed, and the 2-frequent itemsets are extracted using a hash table built during the same scan; thus the TID List table and the hash table are constructed simultaneously in one pass over the database. When the count of a 2-itemset stored in a bucket of the hash table is greater than the minimum support, the 2-itemset becomes a 2-frequent itemset. Many algorithms for generating candidate itemsets have focused on generating smaller candidate sets [3,10]. In particular, since the number of 2-candidate itemsets is generally large, reducing their number considerably improves mining performance [10]. Our algorithm generates k-candidate itemsets only for k ≥ 3, so its performance benefits from not generating 2-candidate itemsets at all. Candidate generation in our algorithm uses the method of Apriori [3]. By probing the TID List table, it can be checked whether the generated candidate itemsets are frequent: for all items of a candidate itemset, if the number of common TIDs is greater than the given minimum support, the candidate itemset becomes a frequent itemset.

2.1 The Generation of 1-Frequent Items and the TID List Table
All items of a frequent itemset are included together in a transaction, represented as one record of the database; that is, the k items of a k-frequent itemset are included together in some identical transactions. For example, if an itemset {A, B, C} is frequent with support 3, at least three transactions include all of the items A, B, and C. For each item, the FTL algorithm computes the list of transactions that include it and stores it in the TID List table. Table 2 shows the items and their TID lists for the example of Table 1. The 1-frequent items can be discovered by comparing the length of each item's TID list with the minimum support. The 1-frequent items that satisfy the minimum support (50%) are shown in Table 3.
Table 2. TID List table

Item  TID List             Length    Item  TID List        Length
a     100, 200, 500        3         j     300             1
b     200, 300, 400        3         k     400             1
c     100, 200, 400, 500   4         l     200, 500        2
d     100                  1         m     100, 200, 500   3
e     500                  1         n     500             1
f     100, 200, 300, 500   4         o     200, 300        2
g     100                  1         p     100, 400, 500   3
h     300                  1         s     400             1
i     100                  1
Table 3. 1-TID List table for 1-frequent items

1-frequent item  TID List             Support
a                100, 200, 500        3
b                200, 300, 400        3
c                100, 200, 400, 500   4
f                100, 200, 300, 500   4
m                100, 200, 500        3
p                100, 400, 500        3
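The construction of Tables 2 and 3 can be sketched in Python as follows; the function name and the dictionary-based representation are illustrative assumptions, not the authors' code.

from collections import defaultdict

def build_tid_list_table(db, min_support):
    # db maps TID -> iterable of items, as in Table 1
    tid_lists = defaultdict(list)
    for tid in sorted(db):
        for item in db[tid]:
            tid_lists[item].append(tid)      # each TID list comes out sorted
    # keep only the 1-frequent items (Table 3)
    return {item: tids for item, tids in tid_lists.items() if len(tids) >= min_support}

db = {100: "facdgimp", 200: "abcflmo", 300: "bfhjo", 400: "bcksp", 500: "afcelpmn"}
print(build_tid_list_table(db, 3))  # a, b, c, f, m, p survive, matching Table 3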
2.2 The Generation of 2-Frequent Itemsets Using a Hash Technique
Using a hash function with two parameters representing two items, the FTL algorithm computes the frequency of 2-itemsets and selects the 2-frequent itemsets among them. While the TID List table is constructed during the database scan, we simultaneously maintain a bucket for each 2-itemset via the hash function; whenever a bucket is referenced, its count is increased by 1. Table 4 shows the frequencies of the 2-itemsets as computed with the hash function f(x, y) = array[x][y], where x and y are items. Since patterns in association rule discovery are not sequential, the items of an itemset are commutative: the itemset {a, b} is the same as {b, a}. From now on we keep the items of an itemset in alphabetical order. The collection of buckets is represented by a two-dimensional array; for example, the frequency count of the itemset {a, b} is stored in array[a][b]. If the frequency count of a bucket is greater than the minimum support, the corresponding 2-itemset is extracted as a 2-frequent itemset; the resulting 2-frequent itemsets are shown in Table 4, assuming a minimum support of 50 percent. If we used the hash technique to generate all frequent itemsets, the computing cost would be very low, but the space overhead can be high, since on the order of n^k buckets are required, where n is the total number of items in the database and k is the number of items per itemset. The number of buckets increases exponentially
Table 4. Hash table with the frequencies of 2-itemsets (hash function: f(x, y) = array[x][y])

{a,f}:3 {a,c}:3 {a,d}:1 {a,g}:1 {a,i}:1 {a,m}:3 {a,p}:2 {a,b}:1 {a,l}:2 {a,o}:1 {a,e}:1 {a,n}:1
{b,c}:2 {b,f}:1 {b,l}:1 {b,m}:1 {b,o}:1 {b,k}:1 {b,s}:1 {b,p}:1 {c,f}:3 {c,d}:1 {c,g}:1 {c,i}:1
{c,m}:3 {c,p}:3 {c,l}:2 {c,o}:1 {c,k}:1 {c,s}:1 {c,e}:1 {c,n}:1 {d,f}:1 {d,g}:1 {d,i}:1 {d,m}:1
{d,p}:1 {f,g}:1 {f,i}:1 {f,m}:3 {f,p}:2 {f,l}:2 {f,o}:1 {f,h}:1 {f,j}:1 {f,o}:1 {f,e}:1 {f,n}:1
{g,i}:1 {g,m}:1 {g,p}:1 {i,m}:1 {i,p}:1 {l,m}:2 {l,o}:1 {l,n}:1 {l,p}:1 {m,p}:2 {m,o}:1 {m,n}:1
{h,j}:1 {h,o}:1 {j,o}:1 {k,s}:1 {p,s}:1 {e,p}:1 {e,m}:1 {e,n}:1 {e,l}:1 {p,n}:1

2-frequent itemsets: {a,f}:3 {a,c}:3 {a,m}:3 {c,f}:3 {c,m}:3 {c,p}:3 {f,m}:3
with k; there is thus a tradeoff between computing overhead and space overhead. However, if the hash technique is used only partially, frequent itemsets can be computed effectively. In particular, using the hash technique for the generation of 2-frequent or 3-frequent itemsets can improve the performance of an algorithm, since most of the computing cost is incurred in generating the 2-frequent and 3-frequent itemsets.
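The bucket counting of this section can be sketched as follows; a Python dictionary stands in for the paper's two-dimensional bucket array, which is an implementation assumption.

from itertools import combinations

def frequent_pairs(db, min_support):
    buckets = {}
    for items in db.values():
        # items of an itemset are kept in alphabetical order, as in the text
        for pair in combinations(sorted(set(items)), 2):
            buckets[pair] = buckets.get(pair, 0) + 1
    return {pair: c for pair, c in buckets.items() if c >= min_support}

# with the database of Table 1 and minimum support 3, this yields the seven
# 2-frequent itemsets of Table 4: {a,c}, {a,f}, {a,m}, {c,f}, {c,m}, {c,p}, {f,m}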
2.3 The Generation of k-Frequent Itemsets (k > 2) Using the TID List Table
In the FTL algorithm, candidate itemsets are generated with the Apriori-gen procedure of the Apriori algorithm, sketched below. FTL uses the (k − 1)-TID List table to find the k-frequent itemsets (k > 3); the (k − 1)-TID List table holds the transactions containing each (k − 1)-frequent itemset, so we can calculate the support of a k-candidate itemset by counting the common transactions of the (k − 1)-frequent itemsets used to generate it. The k-TID List table consists of TID lists, each being the list of transactions containing a k-frequent itemset, and is constructed from the (k − 1)-TID List table, since the k-frequent itemsets are composed only of combinations of (k − 1)-frequent itemsets. Generally, the k-TID List table becomes smaller than the (k − 1)-TID List table when k is greater than 3; the number of comparisons thus shrinks as the TID List tables become smaller and smaller, as can be seen from Tables 6 and 7. Table 5 shows an example of generating the 3-frequent itemsets using the 1-TID lists of Table 3 and the 2-frequent itemsets of Table 4, the latter being {a,f}, {a,c}, {a,m}, {c,f}, {c,m}, {c,p}, and {f,m}.
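For reference, a common formulation of the Apriori-gen join-and-prune step is sketched here; this follows the standard Apriori description [3], with illustrative names.

from itertools import combinations

def apriori_gen(frequents):
    # frequents: set of (k-1)-itemsets, each a sorted tuple of items
    items = sorted(frequents)
    candidates = []
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            a, b = items[i], items[j]
            if a[:-1] == b[:-1] and a[-1] < b[-1]:                    # join step
                cand = a + (b[-1],)
                # prune step: every (k-1)-subset must itself be frequent
                if all(s in frequents for s in combinations(cand, len(cand) - 1)):
                    candidates.append(cand)
    return candidates

# with the 2-frequent itemsets of Table 4 this returns exactly the four
# 3-candidate itemsets of Table 5: (a,c,f), (a,c,m), (a,f,m), (c,f,m)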
Table 5. Generation of 3-frequent itemsets using 1-TID List table and 2-frequent itemsets

3-candidate itemset  Item  TID List             Common TIDs
{a, c, f}            a     100, 200, 500        100, 200, 500
                     c     100, 200, 400, 500
                     f     100, 200, 300, 500
{a, c, m}            a     100, 200, 500        100, 200, 500
                     c     100, 200, 400, 500
                     m     100, 200, 500
{a, f, m}            a     100, 200, 500        100, 200, 500
                     f     100, 200, 300, 500
                     m     100, 200, 500
{c, f, m}            c     100, 200, 400, 500   100, 200, 500
                     f     100, 200, 300, 500
                     m     100, 200, 500
Table 6. 3-TID List table for 3-frequent itemsets

3-frequent itemset  TID List        Support
{a, c, f}           100, 200, 500   3
{a, c, m}           100, 200, 500   3
{a, f, m}           100, 200, 500   3
{c, f, m}           100, 200, 500   3
Using the Apriori-gen algorithm, FTL generates the 3-candidate itemsets {a, c, f}, {a, c, m}, {a, f, m}, and {c, f, m}. As shown in Table 5, these candidate itemsets are contained in transactions 100, 200, and 500, and they become 3-frequent itemsets since they satisfy the minimum support of 50%. Thus, we obtain Table 6 as the 3-TID List table. Next, we can generate the 4-candidate itemset {a, c, f, m} from the 3-frequent itemsets, this time using the 3-TID List table of Table 6. The 4-candidate itemset {a, c, f, m} is generated from the first two 3-frequent itemsets, {a, c, f} and {a, c, m}; since the remaining two 3-frequent itemsets, {a, f, m} and {c, f, m}, are included in {a, c, f, m}, they need not be considered. We then compute the number of transactions common to the TID lists of {a, c, f} and {a, c, m} and check whether this number satisfies the support. For calculating the number of common transactions containing all items of a k-candidate itemset, using the (k − 1)-TID List table is thus more effective than using the 1-TID List table. As shown in Table 7, since the TID lists of both {a, c, f} and {a, c, m} are the transactions 100, 200, and 500, the 4-candidate itemset {a, c, f, m} is contained in transactions 100, 200, and 500, and satisfies the minimum support of 50%.
Table 7. Generation of 4-frequent itemsets using 3-TID List table

4-candidate itemset  3-frequent itemset  TID List        Common TIDs
{a, c, f, m}         {a, c, f}           100, 200, 500   100, 200, 500
                     {a, c, m}           100, 200, 500
Algorithm 1. FTL: TID List construction and Frequent Itemset extraction
Input: a transaction database DB and a minimum support threshold ε
Output: the complete set of frequent itemsets

Procedure TIDList&2-frequent(DB)
  for each transaction do
    update the 1-TID List table
    update the bucket given by the hash function
  for all buckets do
    if the count value of the bucket > ε then
      insert the 2-itemset into the 2-frequent itemsets

Procedure k-frequent(TID List, k-frequent itemsets)
  generate the (k + 1)-candidate itemsets
  for all (k + 1)-candidate itemsets do
    compare the TIDs of each item
    if the number of common TIDs > ε then
      insert the candidate into the (k + 1)-frequent itemsets
      update the TID List table
  if candidate itemsets can still be generated then
    call k-frequent(TID List, (k + 1)-frequent itemsets)
Thus, the 4-candidate itemset becomes a 4-frequent itemset. Since no k-candidate itemsets (k > 4) can be generated, the FTL algorithm terminates. Let the number of items of an itemset be m and the length of a TID list be n. Since each TID list in the TID List table is sorted by TID, the computing time for calculating the support of an itemset is O(mn); if the lists were not sorted, it would be O(m·n²). For all items of an itemset, the TID List table is searched from the first position of each TID list; whenever a TID common to all the lists is found, a count is increased by one. The search continues until the count equals the minimum support or one of the TID lists is exhausted. If the count reaches the minimum support, the itemset becomes a frequent itemset. This procedure considerably reduces the computing time. The FTL algorithm builds, with a single database scan, a TID List table containing the 1-frequent items and the lists of TIDs of the transactions including them. Its performance also benefits from the fact that the k-frequent itemsets (k > 2) can be extracted by searching only the (k − 1)-TID List table. The FTL algorithm is shown in Algorithm 1.
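The support test over sorted TID lists, with the early termination just described, can be sketched as follows; the function name and list-of-lists representation are illustrative assumptions.

def is_frequent(tid_lists, min_support):
    # tid_lists: sorted TID lists of the (k-1)-itemsets joined into the candidate
    ptrs = [0] * len(tid_lists)
    count = 0
    while all(p < len(l) for p, l in zip(ptrs, tid_lists)):
        heads = [l[p] for p, l in zip(ptrs, tid_lists)]
        top = max(heads)
        if min(heads) == top:                  # a common TID was found
            count += 1
            if count >= min_support:           # stop as soon as the support is reached
                return True
            ptrs = [p + 1 for p in ptrs]
        else:                                  # advance the lists lagging behind
            ptrs = [p + 1 if h < top else p for p, h in zip(ptrs, heads)]
    return False

print(is_frequent([[100, 200, 500], [100, 200, 500]], 3))  # True, cf. Table 7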
[Line chart: run time (sec.) of FP-growth and FTL versus support threshold (%) on data set D1]

Fig. 1. Scalability with support threshold (data set: D1)
3 Experimental Evaluation and Performance Study
In this section, we compare the performance of FTL with that of the recently proposed and efficient FP-growth algorithm. The experiments were performed on a 2-GHz Pentium PC with 1 GB of main memory, running Microsoft Windows XP. All programs were written in Microsoft Visual C++ 6.0. Note that we implemented the compared algorithm to the best of our knowledge based on the published reports, on the same machine, and compared the methods in the same running environment. Note also that run time here means the total execution time, i.e., the period between input and output, rather than the CPU time measured in the experiments of some of the literature. For the data sets, we used the generator employed in the performance experiments of previous papers, which can be downloaded from the following URL: http://www.almaden.ibm.com/software/quest/Resources/index.shtml We report experimental results on two data sets. The first one is T20.I10.D10K with 1K items, denoted D1: the average transaction size and the maximal potential size of frequent itemsets are 20 and 10, respectively, and the number of transactions is 10K. The second data set, denoted D2, is T20.I10.D100K with 1K items; the average transaction size and the maximal size of frequent itemsets are again 20 and 10, respectively. Both data sets contain exponentially many frequent itemsets as the support threshold goes down, with long frequent itemsets as well as a large number of short ones. The scalability of FTL and FP-growth as the support threshold decreases from 3% to 0.1% is shown in Fig. 1 and Fig. 2, which plot the run time of the two algorithms for data sets D1 and D2, respectively. As shown in Fig. 1 and Fig. 2, both FP-growth and FTL perform well when the support
[Line chart: run time (sec.) of FP-growth and FTL versus support threshold (%) on data set D2]

Fig. 2. Scalability with support threshold (data set: D2)
threshold is pretty low, but FTL is better. This results from the fact that the cost of generating frequent itemsets in FTL is lower than in FP-growth and the database is scanned only once. As the support threshold decreases, the runtime difference between the two algorithms becomes much bigger. To test the scalability with the number of transactions, experiments on data set D2 were used, with the support threshold set to 0.5%; the results are presented in Fig. 3. Both FTL and FP-growth show linear scalability as the number of transactions grows from 20K to 80K, but the difference between the two methods becomes larger and larger. In a database with a large number of frequent items, the candidate itemsets can become quite large; FTL, however, compares only two records of the TID List table, as shown in Table 7, and can therefore find frequent itemsets quickly. This explains why FTL has an advantage when the number of transactions is large.
4 Conclusions
We have proposed the FTL algorithm, which reduces the number of database scans to one and thus discovers frequent itemsets efficiently. FTL generates the TID List table with one database scan and simultaneously calculates the frequencies of the 2-itemsets using a hash technique. Since the k-TID List table holds the information about the transactions containing the k-frequent itemsets, the TID List table is a compact database from which the k-frequent itemsets can be calculated. One of the advantages of the FTL algorithm is that the k-frequent itemsets can be computed with only one scan of the database, and it is easy and efficient to test whether each k-candidate itemset is frequent. Since the (k + 1)-candidate itemsets are generated from the k-frequent itemsets, a method is still necessary to prune candidate itemsets efficiently for even greater performance.
[Line chart: run time (sec.) of FP-growth and FTL versus number of transactions (K) on data set D2]

Fig. 3. Scalability with the number of transactions
Our simulation experiments have shown that the FTL algorithm is more efficient than the FP-growth algorithm for the two given data sets.
References

1. Adriaans, P., Zantinge, D.: Data Mining. Addison-Wesley (1996)
2. Agrawal, R., Aggarwal, C., Prasad, V. V. V.: A tree projection algorithm for generation of frequent itemsets. J. Parallel and Distributed Computing (2000)
3. Agrawal, R., Srikant, R.: Fast algorithms for mining association rules. In VLDB (1994) 487-499
4. Berry, M. J. A., Linoff, G.: Data Mining Techniques: For Marketing, Sales, and Customer Support. Wiley Computer Publishing (1997)
5. Grahne, G., Lakshmanan, L., Wang, X.: Efficient mining of constrained correlated sets. In ICDE (2000)
6. Han, J., Pei, J., Yin, Y.: Mining frequent patterns without candidate generation. In ACM SIGMOD (2000) 1-12
7. Lent, B., Swami, A., Widom, J.: Clustering association rules. In ICDE (1997) 220-231
8. Liu, B., Hsu, W., Ma, Y.: Mining association rules with multiple minimum supports. In ACM SIGKDD (1999) 337-341
9. Ng, R., Lakshmanan, L. V. S., Han, J., Pang, A.: Exploratory mining and pruning optimizations of constrained associations rules. In SIGMOD (1998) 13-24
10. Park, J. S., Chen, M. S., Yu, P. S.: An effective hash-based algorithm for mining association rules. In ACM SIGMOD (1995) 175-186
11. Simoudis, E.: Reality Check for Data Mining. IEEE Expert: Intelligent Systems and Their Applications 11 (5), October (1996)
Incremental DFT Based Search Algorithm for Similar Sequence

Quan Zheng, Zhikai Feng, and Ming Zhu

Department of Automation, University of Science and Technology of China, Hefei, 230027, P.R. China
[email protected] [email protected] Abstract. This paper begins with a new algorithm for computing time sequence data expansion distance on the time domain that, with a time complexity of O(n×m), solves the problem of retained similarity after the shifting and scaling of time sequence on the Y axis. After this, another algorithm is proposed for computing time sequence data expansion distance on frequency domain and searching similar subsequence in long time sequence, with a time complexity of merely O(n×fc), suitable for online implementation for its high efficiency, and adaptable to the extended definition of time sequence data expansion distance. An incremental DFT algorithm is also provided for time sequence data and linear weighted time sequence data, which allows dimension reduction on each window of a long sequence, simplifying the traditional O(n×m×fc) to O(n×fc).
1 Introduction

In time sequence data mining, an extensively applicable technology, the fundamental issue of similarity comparison between time sequences has promising prospects. Current methods for similarity comparison and fast similar subsequence search include, apart from the Euclidean technique, frequency domain methods [1],[2],[3],[4], segmentation methods [5],[6], and waveform description language methods [7]. Previous studies introduced the concept of the expansion distance of time sequences to preserve the similarity of time sequence data under linear shifting. Major studies on expanded similar sequence searching have been conducted by Chu et al. [8] and Agrawal et al. [9]. However, the distance proposed in [8] is asymmetric, which may lead to results against common sense, and the algorithm is basically a costly distance computation in the time domain. The similar sequence searching algorithm in [9] suffers from two problems: 1. a simple normalization technique is used to handle shifting and scaling in subsequence similarity comparison, which is not universally applicable; 2. it is complicated and costly. In this paper, the basic frequency domain method is extended to the search for expanded similar sequences. The main contributions are: an analytical result for computing the expansion distance of time sequence data in the time domain, and an innovative method for computing the expansion distance based on a frequency domain analytical solution, together with a fast search technique for similar sequences. Computing on the dimension-reduced frequency domain, the technique is
highly efficient, with a time complexity of O(n×fc), and adaptable to the expansion distance of time sequence data. In similar subsequence searching, a DFT dimensionality reduction is required for each window of the long sequence; the incremental DFT for each window and the incremental DFT for linearly weighted time sequence data proposed below reduce the time complexity from the traditional O(n×m×fc) to O(n×fc).
2 Expanded Time Sequence Data Distance and Its Analytical Solution
Definition 1: The expanded asymmetric distance of one-dimensional time series data of the same length m, x = [x_0, x_1, ..., x_{m-1}]^T and y = [y_0, y_1, ..., y_{m-1}]^T, is defined as:

d(x, y) = \min_{a,b} \Big( \sum_{i=0}^{m-1} (x_i - a y_i - b)^2 \Big)^{1/2}.   (1)
The advantage of this definition is that it preserves the similarity of time sequence data under scaling and shifting, accommodating the different scaling and shifting amounts of transducers. This distance is, however, asymmetric and can act against common sense; therefore another definition of the distance is given in [1], as the minimum of the two asymmetric distances d(x, y) and d(y, x). Although Chu et al. [8] proposed an algorithm that maps the time sequence data to a shifting-eliminated plane where the distance is computed, the method is over-complicated. This paper gives a direct analytical method for the expansion distance of time sequence data by computing the optimal parameters a, b.
m −1
d ( x , y ) = ( ∑ ( x i − a t y i − bt ) 2 ) 1 / 2 .
(2)
i =0
m −1
Where: a t
=
∑x y i
i=0 m −1
∑y i =0
i
− mx y ,
2 i
− my
2
bt = x − at y
The symmetric distance is the minimum of d(x, y) and d(y, x). The time complexity of the corresponding similar sequence search is O(m) for whole-sequence matching and O(n×m) for subsequence searching, where n is the length of the long time sequence and m is the length of the subsequence. This algorithm, computing the distance in the time domain, avoids a search over the (a, b) plane and obtains the analytic solution of the expansion distance directly from the values of the time sequence data.
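Theorem 1 translates directly into code; the following Python sketch (assuming NumPy and a non-constant y, so that the denominator is nonzero) is illustrative.

import numpy as np

def asymmetric_expansion_distance(x, y):
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    m = len(x)
    # closed-form least-squares parameters of Theorem 1
    a_t = (np.dot(x, y) - m * x.mean() * y.mean()) / (np.dot(y, y) - m * y.mean() ** 2)
    b_t = x.mean() - a_t * y.mean()
    return float(np.sqrt(np.sum((x - a_t * y - b_t) ** 2)))

def expansion_distance(x, y):
    # the symmetric distance takes the minimum of the two asymmetric ones
    return min(asymmetric_expansion_distance(x, y), asymmetric_expansion_distance(y, x))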
3 Computing Time Sequence Data Distance in the Frequency Domain

The similar sequence search based on the time domain analytic solution of Theorem 1 is costly and unsuitable for online application, whereas ordinary frequency domain methods are not applicable to the expansion distance of time sequence data. Our concern is to extend the frequency domain method and adapt it to the definition of the expansion distance.

Lemma 1: Let the Fourier coefficients of time sequence data x be X_f, and let y = a·x + b be the sequence after linear transformation. Then the f-th Fourier parameter Y_f of y is:

Y_f = a X_f + \frac{b}{m} \cdot \frac{1 - e^{cfm}}{1 - e^{cf}},   (3)

where c = -j 2\pi / m, and X_f, Y_f are the f-th frequency components of x and y, respectively.

Theorem 2: The expanded asymmetric distance of time sequence data x and y is approximately:

d(x, y) \approx \Big( \sum_{f=0}^{f_c - 1} \Big| X_f - a_t Y_f - \frac{b_t}{m} \cdot \frac{1 - e^{cfm}}{1 - e^{cf}} \Big|^2 \Big)^{1/2},   (4)

where

a_t = \frac{\sum_{f=0}^{f_c-1} (X_f \oplus Z_f) \sum_{f=0}^{f_c-1} (Y_f \oplus Z_f) - \sum_{f=0}^{f_c-1} (X_f \oplus Y_f) \sum_{f=0}^{f_c-1} (Z_f \oplus Z_f)}{\big( \sum_{f=0}^{f_c-1} (Y_f \oplus Z_f) \big)^2 - \sum_{f=0}^{f_c-1} (Y_f \oplus Y_f) \sum_{f=0}^{f_c-1} (Z_f \oplus Z_f)},

b_t = \frac{\sum_{f=0}^{f_c-1} (X_f \oplus Y_f) - a_t \sum_{f=0}^{f_c-1} (Y_f \oplus Y_f)}{\sum_{f=0}^{f_c-1} (Y_f \oplus Z_f)},

f_c is the limiting frequency; Z_f = \frac{1 - e^{cfm}}{m (1 - e^{cf})} is a complex number sequence introduced for convenience's sake; X_f and Y_f are the f-th Fourier parameters of the time sequence data x and y; and the function \oplus maps a pair of complex numbers to a real number, namely the product of their real parts plus the product of their imaginary parts.
The corresponding subsequence search algorithm performs, at the same time, the incremental DFT and the computation of the expansion distance from the frequency domain analytic solution, and then carries out the similarity comparison. Its time complexity is O(n×fc). Compared with similar subsequence searching in the time domain, whose complexity is O(n×m), this algorithm is more efficient and suitable for online application, because fc ranges over 2-5, two to three orders of magnitude lower than m. Furthermore, this algorithm preserves the similarity of time sequence data under linear shifting, and is therefore adaptable to the extended definition of the distance. For brevity, the frequency domain similar subsequence searching algorithm that uses the incremental DFT and handles shifting and scaling is henceforth called the extended frequency domain method.
4 Incremental Fourier Transform of Time Sequence Data and Linearly Weighted Time Sequence Data

For similar subsequence searching, the algorithm of Section 3 requires a discrete Fourier transform of each subsequence window. With the traditional DFT formulae, the time complexity of obtaining the fc low-order Fourier parameters is O(n×m×fc), which is costly. We now present an incremental Fourier transform algorithm that greatly enhances the efficiency and is suitable for online application. The long time sequence x is divided into n − m + 1 overlapping time windows of length m; xw_i denotes the i-th window, and the capitalized XW_{i,f} denotes the f-th frequency component of that window.

Theorem 3: The relation between XW_{i,f}, the f-th Fourier parameter of the time window xw_i, and XW_{i−1,f}, the f-th Fourier parameter of the previous window, is:
XW_{i,f} = \frac{XW_{i-1,f}}{e^{cf}} + \Delta_{i,f},   (5)

where \Delta_{i,f} = \frac{1}{m}\Big( xw_{i,m} e^{cfm} - \frac{xw_{i-1,0}}{e^{cf}} \Big) = \frac{1}{m}\Big( x_{i+m} e^{cfm} - \frac{x_{i-1}}{e^{cf}} \Big).
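A sliding-window DFT in the spirit of Theorem 3 can be sketched as follows; the 1/m normalization and the phase convention are assumptions and may differ from the paper's constants.

import numpy as np

def sliding_dft(series, m, fc):
    # first fc DFT coefficients of every length-m window, updated in O(fc) per step
    series = np.asarray(series, dtype=float)
    f = np.arange(fc)
    c = -2j * np.pi / m                       # the paper's constant c = -j*2*pi/m
    X = np.array([np.sum(series[:m] * np.exp(c * k * np.arange(m))) / m for k in range(fc)])
    out = [X]
    for i in range(1, len(series) - m + 1):
        # drop the sample leaving the window, add the entering one, then re-phase;
        # e^{cfm} = 1 for integer f, so the entering sample needs no extra factor
        X = (X + (series[i + m - 1] - series[i - 1]) / m) * np.exp(-c * f)
        out.append(X)
    return np.array(out)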
On some occasions, data of the time sequence x closer to the current time (m − 1) are regarded as more important than the more distant points. For convenience's sake, we introduce a forgetting function f(t) that contributes to the weight of the distance; see (6).
f(t) = z + kt = (1 - km + k) + kt.   (6)
Definition 2: The linear forgetting distance d_w(x, y) of two one-dimensional time sequence data x, y of length m is:

d_w(x, y) = \Big( \sum_{t=0}^{m-1} (x_t - y_t)^2 f(t)^2 \Big)^{1/2}.   (7)
In the time sequence, the number t datum in the number i window is represented as xwi,t. the datum after weighing is xw’i,t. Their relations are:
xw' i ,t = xwi ,t f (t ) = xwi ,t (1 − km + k + kt ) .
(8)
From Definition 2,

d_w(x, y) = \Big( \sum_{t=0}^{m-1} (x_t - y_t)^2 f(t)^2 \Big)^{1/2} = \Big( \sum_{t=0}^{m-1} (x'_t - y'_t)^2 \Big)^{1/2} = d(x', y').

Therefore computing the weighted distance between two subsequences is equivalent to computing the Euclidean distance between the two weighted time sequences. By Parseval's theorem, we may take the first few frequency components of the weighted time sequence data to perform an approximate distance computation, allowing fast similar sequence search. The issue now is how to obtain the Fourier parameters of each window after linear weighting, and to do so incrementally: given the DFT parameter XW_{i-1,f} of the previous window, its linearly weighted Fourier parameter XW'_{i-1,f}, and the auxiliary parameter XWT_{i-1,f}, how can the DFT parameter XW'_{i,f} of the current linearly weighted data window be obtained incrementally? Here XWT_{i,f} = \frac{1}{m} \sum_{t=0}^{m-1} t\, xw_{,t}\, e^{cft}. The following lemmas can be obtained:

Lemma 2: XWT_{i,f} = \frac{1}{e^{cf}} (XWT_{i-1,f} - XW_{i,f}) + \frac{(m-1)\, x_{w,m}\, e^{cfm} + x_{w,0}}{e^{cf}\, m}
Lemma 3: The f-th Fourier parameter XW'_{i,f} of the time sequence window xw'_i after linear forgetting is:
XW'_{i,f} = (1 - km + k)\, XW_{i,f} + k\, XWT_{i,f}
XW0, f =
XWT0, f =
XWi , f = XWTi , f =
XWi −1, f
ecf
1 m
m −1
∑x t =0
0, t
e cft .
1 m−1 ∑ tx0,t ecft . m t =0 +
x 1 ( xw,m ecfm − wcf−1,0 ) . e m
( m − 1) xw,m −1ecfm + xw−1,0 . 1 ( XWTi −1, f − XWi −1, f ) + cf e ecf m
(9)
(10)
(11)
(12)
1190
Q. Zheng, Z. Feng, and M. Zhu
XW 'i , f = (1 − km + k ) XWi , f + kXWTi , f .
(13)
When the weighted Fourier parameters of each window are obtained, the approximate weighted distance of time sequence data can be computed on dimension reduced frequency domain, achieving high efficiency in similar sequence searching. Obviously, the time complexity of incremental DFT algorithmic is O(n×fc) much lower thant the time complexity of traditional DFT algorithm O(n×m×fc).
,
5 Experiment A comparison of the running time between extended frequency domain method and time domain method is shown in Table 1 and Table 2. In Table 1, the length of time sequence n = 200000 limiting frequency fccc 3; in Table 2, length of subsequence m= 2000; limiting frequency in both tables fccc 3. The results indicate that with the extended frequency domain method, the time is about 1/10 1/50 of the time domain method, greatly improving the efficiency of search algorithm. The former is also adaptable to the definition of expansion distance of time sequence.
,
-
=
=
Table 1. The running time of time domain method and extended frequency method along with subsequence length
subsequence Length m 500 1000 1500 2000 2500
Time domain method (Second) 32.32 64.44 98.98 132.36 164.97
Extended frequency method (Second) 3.391 3.375 3.359 3.406 3.390
domain
Table 2. the running time of time domain method and extended frequency method along with sequence length Sequence length n
50000 100000 150000 200000 250000
Time domain method (Second) 32.65 69.53 100.79 132.36 169.06
Extended frequency method (Second) 0.844 1.687 2.547 3.406 4.297
domain
6 Conclusion In this paper, an analytical algorithm is proposed for computing time sequence data expansion distance on the frequency domain, offering new techniques for similar
Incremental DFT Based Search Algorithm for Similar Sequence
1191
subsequence searching. It is proven, through experiment, this algorithm is more efficient than the time domain based algorithm, and suitable for online application, adaptable to the definition of time sequence data expansion distance. An incremental DFT algorithm is also provided for time sequence data and linear weighted time sequence data, which greatly improves the efficiency of DFT dimension reduction on each window of a long sequence, simplifying the traditional O(n×m×fc) time complexity to O(n×fc).
Reference 1. R. Agrawal, C.Faloutsos and A.swami: Efficient similarity search in sequence database. In FODO, Evanston , Illinois, October (1993) 69-84 2. Faloutsos Christos. Ranganathan M. and Manolopulos Yannis: Fast subsequence matching in time series databases. Proc ACM SIGMOD, Minnerapolis MN, May 25-27 (1994) 419-429 3. D.Rafiei and A.O.Mendelzon: Efficient retrieval of similar time sequences using DFT. In FODO, Kobe,Japan (1998) 203-212 4. K.P.Chan and A.W.C.Fu: Efficient time series matching by wavelets. In ICDE, Sydney, Australia (1999) 126-133 5. Keogh Eamonn, Padhraic smyth: A probabilistic approach to fast pattern matching in time series databases. Proceedings of the Third Conference on Knowledge Discovery in Databases and Data Mining, AAAI Press, Menlo Park, CA (1997) 24-30 6. Keogh Eamonn, Michael J.Pazzani: An Enhanced representation of time series which allow fast and accurate classification, clustering and relevance feedback. Proceeding of the 4th International Conference of Knowledge discovery and Data Mining, AAAI Press, Menlo Park, CA (1998) 239-241 7. Rakesh Agrawal, Giuseppe Psaila, Edward L.Wimmers, Mohamed Zait: Querying shapes of histories. Proceedings of the 21st VLDB Conference, Zurich, Switzerland (1995) 502-514 8. K.K.W. Chu, M.H Wong: Fast time-series searching with scaling and shifting. In proceedings of the l g ACM Symposium on Principles of Database Systems, Philadelphia, PA (1999) 237-248 9. Agrawal R, Lin K I, Sawhney H S, Shim K: Fast similarity search in the presence of noise, scaling and translation in time-series databases. In Proc. 1995 Int. Conf. Very Large Data Bases(VLDB’95), Zurich, Switzerland (1995) 490-501
Computing High Dimensional MOLAP with Parallel Shell Mini-cubes Kong-fa Hu, Chen Ling, Shen Jie, Gu Qi, and Xiao-li Tang Department of Computer Science and Engineering, Yangzhou University
[email protected] Abstract. MOLAP is a important application on multidimensional data warehouse. We often execute range queries on aggregate cube computed by preaggregate technique in MOLAP. For the cube with d dimensions, it can generate 2d cuboids. But in a high-dimensional cube, it might not be practical to build all these cuboids. In this paper, we propose a multi-dimensional hierarchical fragmentation of the fact table based on multiple dimension attributes and their dimension hierarchical encoding. This method partition the high dimensional data cube into shell mini-cubes. The proposed data allocation and processing model also supports parallel I/O and parallel processing as well as load balancing for disks and processors. We have compared the methods of shell mini-cubes with the other existed ones such as partial cube and full cube by experiment. The results show that the algorithms of mini-cubes proposed in this paper are more efficient than the other existed ones.
1 Introduction Data warehouses integrate massive amounts of data from multiple sources and are primarily used for decision support purposes.Since the advent of data warehousing and online analytical processing (OLAP), data cube has been playing an essential role in the implementation of fast OLAP operations [1]. Materialization of a data cube is a way to pre-compute and store multi-dimensional aggregates so that multi-dimensional analysis can be performed on the fly. For this task, there have been many efficient cube computation algorithms proposed, such as BUC [2], H-cubing [3], and Starcubing [4].Those methods have taken effect for the low-dimensional cube in the traditional data warehouse. But in the high-dimensional data cube, it is too costly in both computation time and storage space to materialize a full cube. For example, a data cube of 100 dimensions, each with 10 distinct values, may contain as many as 11100 aggregate cells. Although the adoption of Iceberg cube[4], Condensed cube[5],Dwarf[6] or approximate cube[7] delays the explosion, it does not solve the fundamental problem. No feasible data cube can be constructed with such data sets. In this paper we will address the problem of developing an efficient algorithm to perform OLAP on such data sets. In this paper, we propose a multi-dimensional hierarchical fragmentation of the fact table based on multiple dimension attributes their dimension hierarchical encoding. Such an approach permits a significant reduction of processing and I/O overhead for many queries by restricting the number of fragments to be processed for both the L. Wang and Y. Jin (Eds.): FSKD 2005, LNAI 3613, pp. 1192 – 1196, 2005. © Springer-Verlag Berlin Heidelberg 2005
fact table and bitmap data. This method also supports parallel I/O and parallel processing as well as load balancing for disks and processors.
2 Shell Mini-cubes

OLAP queries tend to be complex and ad hoc, often requiring computationally expensive operations such as joins and aggregation. An OLAP query that accesses a large number of fact-table tuples stored in no particular order may incur many I/Os, causing a prohibitively long response time. To illustrate the method, a tiny data cube PRT (Table 1) is used as a running example.

Table 1. A sample data cube with two measure values

TID | Category | Class | Product  | Country | Province | City     | Year | Month | Day | Count | SaleNum
1   | Office   | OA    | Computer | China   | Jiangsu  | Nanjing  | 1998 | 1     | 1   | 1     | 20
2   | Office   | OA    | Computer | China   | Jiangsu  | Nanjing  | 1998 | 1     | 2   | 1     | 60
3   | Office   | OA    | Computer | China   | Jiangsu  | Yangzhou | 1998 | 1     | 2   | 1     | 40
4   | Office   | OA    | Computer | China   | Jiangsu  | Yangzhou | 1998 | 1     | 3   | 1     | 20
…   | …        | …     | …        | …       | …        | …        | …    | …     | …   | …     | …

Here Category, Class, Product and Country belong to DimProduct; Province and City to DimRegion; Year, Month and Day to DimTime; Count and SaleNum are the measures.
The cube PRT has three dimensions (P, R, T). From the PRT cube, we would compute eight cuboids: {(P,R,T), (P,R,All), (P,All,T), (All,R,T), (P,All,All), (All,R,All), (All,All,T), (All,All,All)}. A cube of d dimensions creates 2^d cuboids. For the cube with d dimensions (D_1, D_2, ..., D_d) and |D_i| distinct values for each dimension D_i, it can generate $2^d$ cuboids and $\prod_{i=1}^{d}(|D_i|+1)$ cells. But in a high-dimensional cube with many cuboids, it might not be practical to build all these indices. If we also consider the dimension hierarchies, the number of cuboids is far larger. So we can partition all the dimensions of the high-dimensional cube into independent groups, called cube segments. For example, for a database of 30 dimensions, D_1, D_2, ..., D_30, we first partition the 30 dimensions into 10 fragments (mini-cubes) of size 3: (D_1,D_2,D_3), (D_4,D_5,D_6), ..., (D_28,D_29,D_30). For each mini-cube, we compute its full data cube while recording the inverted indices. For example, in the fragment mini-cube (D_1,D_2,D_3), we would compute eight cuboids: {(D_1,D_2,D_3), ..., (All,All,All)}. An inverted encoding index is retained for each cell in the cuboids. The benefit of this model can be seen by a simple calculation. For a base cuboid of 30 dimensions, there are only 8 × 10 = 80 cuboids to be computed under the above shell fragment partition. Compared to $C_{30}^1 + C_{30}^2 + C_{30}^3 = 4525$ cuboids for the partial cube shell of size 3 and $2^{30} \approx 10^9$ cuboids for the full cube, the saving is enormous (the short sketch after this paragraph reproduces these counts). We propose a novel hierarchical encoding on each dimension table, called dimension hierarchical encoding. It is constructed as a path of values corresponding to dimension attributes that belong to a common hierarchy.
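The count comparison above is easy to verify mechanically. The following short Python sketch (ours, not part of the paper) reproduces the numbers for d = 30 with fragments of size 3:

```python
from math import comb

def cuboid_counts(d, fragment_size):
    """Cuboids for: the full cube, the partial cube shell of the given size,
    and the shell mini-cube scheme (a full cube per fragment)."""
    full = 2 ** d
    shell = sum(comb(d, k) for k in range(1, fragment_size + 1))
    mini = (d // fragment_size) * 2 ** fragment_size
    return full, shell, mini

full, shell, mini = cuboid_counts(30, 3)
print(full, shell, mini)   # 1073741824 4525 80
```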
Definition 1 (Dimension hierarchical encoding). The dimension hierarchical member encoding of the hierarchy $L_j^i$ on the dimension $D_i$ is $B^{L_j^i}: \mathrm{dom}(L_j^i) \rightarrow \{\langle b_{k-1} \cdots b_1 b_0 \rangle \mid b_i \in \{0,1\},\ i = 0, \ldots, k-1\}$. The dimension hierarchical encoding $B^{D_i}$ of each member on the dimension $D_i$ is defined in formula 1.

$$\Pr\big(|fr(x,D) - fr(x,S)| \leq \varepsilon \cdot fr(x,D)\big) \geq 1 - \delta \qquad (5)$$
It means that a sampling algorithm must yield a good approximation of fr(x, D) with reasonable probability. The difference between (4) and (5) is just that (4) is an absolute error measure, while (5) is a relative error measure. In this paper, the required probability is 1 − δ; therefore, if δ < 1/2, the sampling ensemble works. According to the Hoeffding bound, ∀ δ > 0, 0 < ε < 1, if the sample size n satisfies inequality (6)/(7), then it satisfies (4)/(5), respectively [10]:

$$n > \frac{1}{2\varepsilon^2}\ln\frac{2}{\delta} \qquad (6)$$

$$n > \frac{1}{2\,fr(x,D)^2\,\varepsilon^2}\ln\frac{2}{\delta} \qquad (7)$$

Since we require δ < 1/2, the sample size n must satisfy:

$$n > \frac{1}{2\varepsilon^2}\ln 4 \qquad (8)$$

$$n > \frac{1}{2\,fr(x,D)^2\,\varepsilon^2}\ln 4 \qquad (9)$$
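As a quick illustration, the bounds (6)-(9) can be evaluated directly; the sketch below is our minimal Python rendering (not from the paper), where `support` stands in for fr(x, D):

```python
import math

def sample_size_absolute(eps, delta):
    """Bound (6): n > ln(2/delta) / (2 * eps**2) guarantees (4)."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

def sample_size_relative(eps, delta, support):
    """Bound (7): n > ln(2/delta) / (2 * support**2 * eps**2) guarantees (5);
    `support` stands in for the unknown true support fr(x, D)."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * support ** 2 * eps ** 2))

# With delta = 1/2, ln(2/delta) = ln 4, which yields the bounds (8)/(9):
print(sample_size_absolute(0.01, 0.5))          # 6932
print(sample_size_relative(0.01, 0.5, 0.0077))  # far larger: relative error is harder
```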
So, we have:

Theorem 1. ∀ 0 < ε < 1, when the sample size n satisfies inequality (8)/(9), the sampling ensemble works well in terms of the measure of probable error (4)/(5).

As a rule, since we do not know the real support fr(x, D) of a frequent pattern, we replace it by the support threshold (its minimum possible value), so that the sample size satisfies all conditions we might encounter. As with the result in [9], this bound is heavily overestimated; however, it provides valuable theoretical guidance, especially for large-scale databases. Similarly, according to the Chernoff bound, we have:

Corollary 1. ∀ 0 < ε < 1, when the sample size n satisfies inequality (10)/(11), the sampling ensemble works well in terms of the measure of probable error (4)/(5).

$$n > \frac{3\,fr(x,D)}{\varepsilon^2}\ln 4 \qquad (10)$$

$$n > \frac{3}{fr(x,D)\,\varepsilon^2}\ln 4 \qquad (11)$$
According to the Central Limit Theorem in statistics, we have:

Corollary 2. ∀ 0 < ε < 1, when the sample size n satisfies inequality (12)/(13), the sampling ensemble works well in terms of the measure of probable error (4)/(5).

$$n > \frac{fr(x,D)\,(1-fr(x,D))}{\varepsilon^2}\left(\Phi^{-1}\!\left(\tfrac{3}{4}\right)\right)^2 \qquad (12)$$

$$n > \frac{1-fr(x,D)}{fr(x,D)\,\varepsilon^2}\left(\Phi^{-1}\!\left(\tfrac{3}{4}\right)\right)^2 \qquad (13)$$

where $\Phi(y) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{y} e^{-x^2/2}\,dx$ is the normal distribution function. (12)/(13) is the best result under worst-case analysis. Since a sampling ensemble is indeed a kind of Monte Carlo algorithm, we have:

Corollary 3. When the sample size of a sampling ensemble is larger than the theoretical lower bound, the probability of correct answers increases exponentially with the number of individual samples in the ensemble.

3.2 Bias-Variance Decomposition
In order to learn where the performance of a sampling ensemble originates and to explain why it works, in this section we use the bias-variance decomposition [13-16] to analyze the sample error of an ensemble. The sample error is measured by $Cerror(S,D) + \bar{\xi}_{S,D}$ in this section.
Let the error induced by the individual samples in an ensemble be:

$$\bar{E} = \frac{1}{N}\sum_{i=1}^{N}\;\sum_{x \in L(D) \cup L(S_i)} |fr(x,D) - fr(x,S_i)| \qquad (14)$$

where N is the total number of individual samples in the ensemble. For simplicity of the inference, we ignore the normalization factor in this section; for example, the factor $\frac{1}{|L(D) \cap L(S_i)|}$ is ignored in (14). Similarly, the error of a sampling ensemble is:

$$\bar{E}_b = \sum_{x \in L(D) \cup L(S_{bag})} |fr(x,D) - fr(x,S_{bag})| \qquad (15)$$

where $S_{bag}$ is the final answer set of the ensemble; this is just the bias of the ensemble. The variance of the ensemble is denoted as:

$$\bar{E}_v = \frac{1}{N}\sum_{i=1}^{N}\;\sum_{x \in L(S_i) \cup L(S_{bag})} |fr(x,S_i) - fr(x,S_{bag})| \qquad (16)$$

It measures the diversity of the answers on the sample sets in an ensemble. Seeing that

$$\begin{aligned}
\bar{E} &= \frac{1}{N}\sum_{i=1}^{N}\sum_{x \in L(D) \cup L(S_i)} |fr(x,D) - fr(x,S_i)| \\
&\leq \frac{1}{N}\sum_{i=1}^{N}\sum_{x \in L(D) \cup L(S_i) \cup L(S_{bag})} |fr(x,D) - fr(x,S_i)| \\
&\leq \frac{1}{N}\sum_{i=1}^{N}\sum_{x \in L(D) \cup L(S_i) \cup L(S_{bag})} |fr(x,D) - fr(x,S_{bag})| + \frac{1}{N}\sum_{i=1}^{N}\sum_{x \in L(D) \cup L(S_i) \cup L(S_{bag})} |fr(x,S_{bag}) - fr(x,S_i)| \\
&= \bar{E}_b + \bar{E}_v + E'
\end{aligned}$$

where we set

$$E' = \frac{1}{N}\sum_{i=1}^{N}\sum_{x \in L(S_i) - (L(D) \cup L(S_{bag}))} |fr(x,D) - fr(x,S_{bag})| + \frac{1}{N}\sum_{i=1}^{N}\sum_{x \in L(D) - (L(S_i) \cup L(S_{bag}))} |fr(x,S_{bag}) - fr(x,S_i)|$$

we have:

Theorem 2. The sample error of the sampling ensemble, $\bar{E}_b$, and the average sample error of the individual samples, $\bar{E}$, satisfy the following inequality:

$$\bar{E}_b \geq \bar{E} - \bar{E}_v - E' \qquad (17)$$

Although we cannot conclude that $\bar{E}_b \leq \bar{E}$, the larger the right-hand side of (17), the larger $\bar{E}_b$ will be.
For the sample error defined by $Cerror(S,D) + \bar{\xi}_{S,D}$, if we let $\bar{f}r(\cdot,\cdot)$ substitute for $fr(\cdot,\cdot)$, a result similar to (17) is obtained. For the two databases S and D,

$$\bar{f}r(x,D) = \begin{cases} 1 & x \in L(D) \text{ but } x \notin L(S) \\ 0 & x \notin L(D) \text{ but } x \in L(S) \\ fr(x,D) & x \in L(S) \cap L(D) \end{cases}$$

It means that we give a penalty to a false positive or a false negative frequent pattern. For the three databases S, $S_{bag}$ and D,

$$\bar{f}r(x,D) = \begin{cases} 1 & x \in (L(D) \cap L(S)) \text{ or } x \in (L(D) \cap L(S_{bag})) \text{ or } x \in (L(S) \cap L(S_{bag})), \text{ but } x \notin L(S) \cap L(S_{bag}) \cap L(D) \\ fr(x,D) & x \in L(S) \cap L(S_{bag}) \cap L(D) \\ 0 & \text{others} \end{cases}$$

It predicates that the penalty is given to the minority frequent patterns in L(D), L(S) and $L(S_{bag})$. Thus, the following equation holds:

$$\sum_{x \in L(D) \cup L(S)} |\bar{f}r(x,D) - \bar{f}r(x,S)| = Cerror(S,D) + \alpha\,\bar{\xi}_{S,D}$$

where $\alpha = \frac{|L(S) \cup L(D)|}{|L(S) \cap L(D)|}$. Using similar techniques, we have:

Corollary 4. The sample error of the sampling ensemble, $\bar{e}_b$, and the average sample error of the individual samples, $\bar{e}$, satisfy the following inequality:

$$\bar{e}_b \geq \bar{e} - \bar{e}_v - e' \qquad (18)$$

where

$$\bar{e} = \frac{1}{N}\sum_{i=1}^{N}\left(Cerror(D,S_i) + \bar{\xi}_{D,S_i}\right) \qquad (19)$$

$$\bar{e}_b = Cerror(D,S_{bag}) + \bar{\xi}_{D,S_{bag}} \qquad (20)$$

$$\bar{e}_v = \frac{1}{N}\sum_{i=1}^{N}\left(Cerror(S_i,S_{bag}) + \bar{\xi}_{S_i,S_{bag}}\right) \qquad (21)$$

and

$$e' = \frac{1}{N}\sum_{i=1}^{N}\sum_{x \in L(S_i) - (L(D) \cup L(S_{bag}))} |\bar{f}r(x,D) - \bar{f}r(x,S_{bag})| + \frac{1}{N}\sum_{i=1}^{N}\sum_{x \in L(D) - (L(S_i) \cup L(S_{bag}))} |\bar{f}r(x,S_{bag}) - \bar{f}r(x,S_i)|$$

According to (18), the performance of an ensemble is influenced by its individual samples: it is better to diminish $\bar{e}$ and enlarge $\bar{e}_v$. This means that the more accurate and the more diverse the answers on the samples in an ensemble are, the more accurate the ensemble might be (this result is supported by the experiments in the next section).
4 Some Experiments
In this section, we use both a synthetic and a real-world benchmark database to test the performance of sampling ensembles. The synthetic database is generated using code from the IBM QUEST project [1], with the same parameter settings as in [1]. We choose T10I4D100k, denoted D1, as a test database. The real-world database is a large sales database from Blue Martini Software Inc., named BMS-POS and denoted D2. To make a fair comparison, we use the Apriori implementation written by Christian Borgelt¹ and the fast sequential random sampling algorithm, Method D in [19], in all cases to compute the frequent itemsets and perform sampling. For both databases, the support threshold is 0.77%, to ensure that there are neither too many nor too few frequent itemsets. Because of space limitations, we only show the results of sampling ensembles with 3 individual samples at a 1% sampling ratio on D1 and D2, respectively (Table 1). These results indicate that Cerror(S,D), $\bar{\xi}_{S,D}$ and $V_{S,D}$ are all reduced (similar results can also be obtained at other sampling ratios).

Table 1. The three individual samples with sampling ratio 1%

(a) The result on D1

Name | False | Missed | Cerror | $\bar{\xi}_{S,D}$ | $V_{S,D}$
S1 | 126 | 45 | 0.289831 | 0.00220286 | 0.00186835
S2 | 173 | 47 | 0.345369 | 0.00243885 | 0.00202185
S3 | 186 | 48 | 0.36 | 0.00246875 | 0.00179169
3-Ensemble | 79 | 40 | 0.201507 | 0.00140448 | 0.00160673

(b) The result on D2

Name | False | Missed | Cerror | $\bar{\xi}_{S,D}$ | $V_{S,D}$
S1 | 301 | 164 | 0.225745 | 0.001261 | 0.0016519
S2 | 257 | 180 | 0.212239 | 0.00138286 | 0.00128904
S3 | 307 | 128 | 0.206259 | 0.00117324 | 0.00113885
3-Ensemble | 251 | 108 | 0.146169 | 0.000758604 | 0.00066939
Figure 1 shows the cardinal errors of sampling ensembles with different numbers of individual samples, at a 5% sampling ratio for D1 and a 1% sampling ratio for D2. According to Figure 1, as the number of individual samples increases, the cardinal error is gradually reduced. It means that we can obtain very accurate answers at a small (fixed) sample size when the number of individual samples is chosen properly. Thus the difficult problem of sample-size selection, as argued by several researchers, can be avoided, or converted into the problem of selecting the number of individual samples at a small sample size.
¹ http://fuzzy.cs.uni-magdeberg.de/∼borgelt/
(a) D1 with 5% sampling ratio    (b) D2 with 1% sampling ratio

Fig. 1. The cardinal errors of ensembles with different numbers of individual samples
Since, by Theorem 2, the more diverse the individual samples are, the more accurate an ensemble might be, we also test the performance of sampling ensembles with non-overlapping individual samples (for space limitations, we only show the results on D1 at the 5% sampling ratio in Table 2). According to the experiments, in general, the sample error of a sampling ensemble with non-overlapping samples is less than that of a sampling ensemble using sampling Method D directly (which causes overlap phenomena) at the same sample size.

Table 2. The sample error of a sampling ensemble on D1

(a) overlap sampling Method D

Name | Cerror | $\bar{\xi}_{S,D}$ | $V_{S,D}$
3-Ensemble | 0.0987654 | 0.000938128 | 0.000818714
5-Ensemble | 0.0770833 | 0.000781941 | 0.00061186
7-Ensemble | 0.0623701 | 0.000674501 | 0.000539261

(b) no overlap sampling Method D

Name | Cerror | $\bar{\xi}_{S,D}$ | $V_{S,D}$
3-Ensemble | 0.0792683 | 0.000847682 | 0.000667883
5-Ensemble | 0.0695297 | 0.000655385 | 0.000504559
7-Ensemble | 0.053719 | 0.00060131 | 0.000456445
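For concreteness, the ensemble procedure itself can be sketched as follows. This is our minimal Python rendering under stated assumptions: `mine` stands in for any frequent-itemset miner (Apriori in the experiments), simple random sampling replaces Vitter's Method D, and patterns are kept by majority vote with supports averaged across the individual samples.

```python
import random
from collections import Counter

def sampling_ensemble(transactions, n_samples, ratio, mine, threshold):
    """Bagging-style sampling ensemble: mine each sample, then keep the
    patterns found in a majority of samples, averaging their supports.
    mine(sample, threshold) must return a {itemset: support} mapping."""
    size = int(len(transactions) * ratio)
    answers = [mine(random.sample(transactions, size), threshold)
               for _ in range(n_samples)]
    votes = Counter(p for answer in answers for p in answer)
    majority = n_samples // 2 + 1
    return {p: sum(a.get(p, 0.0) for a in answers) / n_samples
            for p, v in votes.items() if v >= majority}
```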
5 Conclusion
One approach to improving the scalability of data mining algorithms is to take a random sample and do the data mining on it. However, it is difficult to determine the appropriate sample size for a given accuracy. Many researchers have dedicated themselves to this problem, but they seldom consider how to improve the accuracy of the answers at a small sample size. In this paper, we give a sampling ensemble method that improves the accuracy of answers at a fixed small
sample size. Some pertinent theoretical problems are discussed using machine learning methods. Both the theoretical analysis and the experiments show that the sampling ensemble works well.
References
1. Agrawal, R., Srikant, R.: Fast Algorithms for Mining Association Rules. In VLDB'94, Santiago, Chile (1994) 487-499
2. Zaki, M. J., Hsiao, C.-J.: CHARM: An Efficient Algorithm for Closed Association Rule Mining. Technical Report 99-10, Computer Science Dept., Rensselaer Polytechnic Institute (1999)
3. Han, J., Pei, J., Yin, Y.: Mining Frequent Patterns Without Candidate Generation. In SIGMOD'00, Dallas, TX (2000) 1-12
4. Chen, B., Haas, P., Scheuermann, P.: A New Two-Phase Sampling Based Algorithm for Discovering Association Rules. In ACM SIGKDD'02, Edmonton, Alberta, Canada (2002) 462-468
5. Bronnimann, H., Chen, B., et al.: Efficient Data Reduction with EASE. In ACM SIGKDD'03, Washington, D.C., USA (2003) 59-68
6. Parthasarathy, S.: Efficient Progressive Sampling for Association Rules. In ICDE'02, Maebashi City, Japan (2002) 354-361
7. Jia, C.-Y., Gao, X.-P.: Multi-Scaling Sampling: An Adaptive Sampling Method for Discovering Approximate Association Rules. To appear in Journal of Computer Science and Technology
8. Toivonen, H.: Sampling Large Databases for Association Rules. In VLDB'96, Mumbai (Bombay), India (1996) 134-145
9. Zaki, M. J., Parthasarathy, S., Li, W., Ogihara, M.: Evaluation of Sampling for Data Mining of Association Rules. In Proceedings of the 7th Workshop on Research Issues in Data Engineering, Birmingham, UK (1997) 42-50
10. Watanabe, O.: Simple Sampling Techniques for Discovery Science. IEICE Trans. Information and Systems, E83-D(1) (2000) 19-26
11. John, G. H., Langley, P.: Static Versus Dynamic Sampling for Data Mining. In ICDM'96, AAAI Press (1996) 367-370
12. Domingo, C., Gavalda, R., Watanabe, O.: Adaptive Sampling Methods for Scaling Up Knowledge Discovery Algorithms. Data Mining and Knowledge Discovery, Vol. 6(2) (2002) 131-152
13. Breiman, L.: Bagging Predictors. Machine Learning, Vol. 24 (1996) 123-140
14. Breiman, L.: Bias, Variance, and Arcing Classifiers. The Annals of Statistics, Vol. 26(3) (1998) 801-849
15. Dietterich, T. G.: Ensemble Methods in Machine Learning. Lecture Notes in Computer Science, Vol. 1857. Springer-Verlag, Berlin Heidelberg New York (2000) 1-15
16. Krogh, A., Vedelsby, J.: Neural Network Ensembles, Cross Validation, and Active Learning. Advances in Neural Information Processing Systems 7 (1995) 231-238
17. Esposito, R., Saitta, L.: Monte Carlo Theory as an Explanation of Bagging and Boosting. In IJCAI'03, Acapulco, Mexico (2003) 499-504
18. Valiant, L. G.: A Theory of the Learnable. Communications of the ACM 27 (1984) 1134-1142
19. Vitter, J. S.: An Efficient Algorithm for Sequential Random Sampling. ACM Transactions on Mathematical Software, Vol. 13(1) (1987) 58-67
Distributed Data Mining on Clusters with Bayesian Mixture Modeling¹

M. Viswanathan, Y.K. Yang, and T.K. Whangbo

College of Software, Kyungwon University, San-65, Bokjeong-Dong, Sujung-Gu, Seongnam-Si, Kyunggi-Do, South Korea
{murli, ykyang, tkwhangbo}@kyungwon.ac.kr
Abstract. Distributed Data Mining (DDM) generally deals with the mining of data within a distributed framework such as local and wide area networks. One strong case for DDM systems is the need to mine for patterns in very large databases. This requires mandatory partitioning or splitting of databases into smaller sets that can be mined locally over distributed hosts. Data distribution implies communication costs associated with the need to combine the results from processing the local databases. This paper considers the development of a DDM system on a cluster. Specifically, we approach the problem of data partitioning for data mining. We present a prototype system for DDM using a data partitioning mechanism based on Bayesian mixture modeling. Results from a comparison with standard techniques show plausible support for our system and its applicability.
1 Introduction

Data mining research is continually coming up with improved tools and methods to deal with distributed data. There are mainly two scenarios in distributed data mining (DDM): a database is naturally distributed geographically, and data from all sites must be used to optimize the results of data mining; or a non-distributed database is too large to process on one machine due to processing and memory limits, and must be broken up into smaller chunks that are sent to individual machines to be processed. In this paper we consider the latter scenario [3]. This paper considers the use of a Bayesian clustering system (SNOB) [1] for dividing large databases into meaningful partitions. These variable-sized partitions are then distributed using a standard MPI-based system to a number of local hosts on a cluster. The local hosts run data mining algorithms on the received data partitions, thus producing local models. These models are combined into a global model using a model aggregation scheme. The accuracy of the global model is verified through a cross-validation technique.¹
¹ This research was supported by the MIC (Ministry of Information and Communication), Korea, under the ITRC (Information Technology Research Center) support program supervised by the IITA (Institute of Information Technology Assessment).
In this paper we typically consider DDM as a four-stage process.

• Data Partitioning – The large dataset must be partitioned into smaller subsets of data with the same domain according to some function f. The choice of algorithm depends on the user, on the DDM system requirements and on the resources available. Communication needs, message exchange and processing costs are all considerations of this task.
• Data Distribution – Once we have partitioned the database, the data must be distributed, usually over a LAN. An appropriate distribution scheme must be devised depending on the number, size and content of the clusters, the size of the LAN and the LAN communication speeds. For example, the user may decide to distribute similar clusters (this measure of similarity is non-trivial) to the same local host to save on communication costs, or may decide to distribute one cluster per host (assuming the required number of hosts is available on the LAN) to save on processing costs by fully exploiting parallel processing.
• Data Modelling – Once the data is distributed on the LAN, local data models have to be constructed. Many different classification tools and algorithms are widely available. The choice of classification algorithm depends on the nature of the data (numerical, nominal or a combination of both), the size of the data and the intended nature and purpose of the models.
• Model Aggregation – All the local models must be combined and aggregated in some optimal way so as to produce one global model describing the whole database. Model aggregation attempts to achieve one of the fundamental aims of DDM, which is to derive a final global data model from the distributed database that matches the efficiency and effectiveness of a data model developed from undistributed data. Some existing techniques rely on methods such as voting, combining and meta-classifiers. In order to develop a good and efficient model aggregation framework one must analyse communication needs and costs as well as dataset sizes and distributions. For example, if the local datasets are reasonably large it would be undesirable to transfer them in their entirety across hosts, since the incurred communication costs would effectively halt the whole system; in this case it might be a good idea to transfer sample subsets of the data or just descriptive data [7,8].
2 Data Partitioning and Clustering

As suggested earlier, we consider the issue of DDM in the case where the existing database is too big to process on a single machine due to memory and storage constraints. There is a great need in this case for some intelligent mechanism to break the database into smaller groups of data.

2.1 SNOB

SNOB is a system developed for cluster analysis using mixture modeling by Minimum Message Length (MML) [1]. SNOB aims to discover the natural classes in the data by categorising data sets based on their underlying numerical distributions. It does this using the assumption that if it can correctly categorize the data, then the data
can be described most efficiently (i.e., using the minimum message length). SNOB uses MML induction, a scale-invariant Bayesian technique based on information theory [2]. To summarize, SNOB considers that a database is usually made up of many objects or instances. Each instance has a number of different attributes, with each attribute having a particular value. We can think of this as a population of instances in a space with each attribute being a different dimension or variable. SNOB is assumed to know the nature, number and range of the attributes. The attributes are also assumed to be uncorrelated and independent of each other. SNOB attempts to divide the population into groups or classes such that each class is tight and simple while ensuring the classes' attribute distributions significantly differ from one another. SNOB takes an inductive inference approach to achieve data clustering [2].

2.2 Data Partitioning Subsystem

Our system employs SNOB in the initial process of data partitioning. One of our objectives in this project is to investigate whether SNOB-based clustering is appropriate and efficient for partitioning in the framework of DDM. In Figure 1 we depict the partitioning aspect of our DDM system, with SNOB at the core of the partitioning operation. Once SNOB is used to cluster the data and generate a report file containing details of the clusters, the complete database is divided into the appropriate clusters using a standard program.
Fig. 1. The partitioning component of the DDM system using SNOB
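A minimal sketch of this partitioning step is given below. SNOB itself is not available as a Python library, so scikit-learn's GaussianMixture serves purely as a stand-in mixture modeller in this illustration; the 10% hold-out mirrors the cross-validation split described in the next paragraph.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def partition_dataset(X, n_clusters, test_fraction=0.1, seed=0):
    """Cluster the data and split it into per-cluster partitions,
    holding out a random test set for later cross-validation."""
    rng = np.random.default_rng(seed)
    test_mask = rng.random(len(X)) < test_fraction
    train, test = X[~test_mask], X[test_mask]
    labels = GaussianMixture(n_components=n_clusters,
                             random_state=seed).fit_predict(train)
    partitions = [train[labels == k] for k in range(n_clusters)]
    return partitions, test
```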
In order to test the predictive accuracy of the models generated at the end of the DDM process, we retain 10% of the data records, randomly selected from the database, for cross-validation purposes. The test dataset is removed from the original dataset, so we are finally left with the un-clustered dataset and each cluster containing only training data (for classification). In addition, there is a separate dataset for testing purposes only.
3 Distributing the Data

Distributing data over a network such as a LAN is no trivial task. Many considerations must be taken into account before deciding on the distribution strategy. We need to consider the number, size and nature of the clusters and the purpose of distributing the data in order to come up with an optimal distribution scheme. Furthermore, every dataset is different and will be clustered differently, so a different distribution scheme would optimize each dataset; this problem is outside the scope of this project. As a simple solution, we used the Message Passing Interface (MPI) to distribute the data in our DDM system. MPI is a library specification for message-passing, proposed and developed as a standard by a broadly based committee of vendors, implementers, and users. MPI is used by programs for inter-process communication and was designed for high performance on both massively parallel machines and workstation clusters [5]. As can be seen in Figure 2, the complete (un-clustered) training dataset resides on the master machine (ID 0) and each cluster is exported to a different host machine (ID > 0). The first cluster is sent to host 1, the second cluster to host 2, and so on. Although this is not the optimal distribution solution, it still classifies the clusters in parallel and causes an overall speedup in the data mining task. It should be noted that the distribution occurs automatically in conjunction with the data modelling (classification) phase in our DDM system. In concept, data distribution and data modelling are two distinctly different steps; however, in this project these two steps are combined into one using MPI and the classification algorithm.
Fig. 2. The MPI Architecture
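A minimal sketch of this distribution step using the mpi4py bindings (an assumption for illustration; the paper's system embeds MPI in the C4.5 sources) might look as follows. `load_clusters` is a hypothetical helper for reading the SNOB-produced partitions, and the round-robin chunking mirrors the policy of giving a host several clusters when clusters outnumber hosts.

```python
# Run with e.g.: mpiexec -n 4 python distribute.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:
    partitions = load_clusters()              # hypothetical reader of the SNOB output
    # Round-robin: if clusters outnumber hosts, a host receives several.
    chunks = [partitions[i::size] for i in range(size)]
else:
    chunks = None

local_clusters = comm.scatter(chunks, root=0)  # each process receives its share
```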
4 Local Data Mining with C4.5

Creating data models in order to predict future unseen data is the main task of data mining. There are various types of models that can be constructed; however, we
will use a classification model based on the C4.5 decision tree learning system. C4.5rules is a classification-rule learning algorithm developed by Quinlan that uses decision trees to represent its model of the data. It aims to find a function that can map data to a predefined class correctly, thus accurately predicting and classifying previously unseen examples and data. The function is a decision tree built from a training data set and evaluated using a test data set; these sets are usually subsets of the entire data. The algorithm attempts to find the simplest decision tree that can describe the structure of the data [6]. The implementation of C4.5 used for this project is by Ross Quinlan [6]. The original source code was changed to incorporate distribution via MPI and to output human-readable rule files. The C4.5 algorithm was embedded with MPI for use in our system. As a result, the full dataset is classified by C4.5 on the master host and each cluster is classified by C4.5 on a different host in the network. In case of an insufficient number of hosts in the network, some hosts will classify more than one cluster. The working directory is copied to all hosts involved in the classification process and the algorithm is run from each host's local copy of this directory. Once the C4.5 program finishes and terminates, all the files from all the hosts are copied back to the original directory on the master machine. So, as mentioned in the previous section, data modelling is combined with data distribution in our DDM system. It should be noted that our C4.5 classification algorithm is a DDM classification algorithm. Figure 3 shows how C4.5 and MPI work together in our DDM system.
Fig. 3. Distributed Data Modelling using MPI and the C4.5 classification algorithm
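Continuing the sketch above, each host would then build its local model on the partitions it received. C4.5 itself is not packaged for Python, so scikit-learn's decision tree (CART) stands in for it here; this is an illustrative assumption, not the paper's implementation.

```python
from sklearn.tree import DecisionTreeClassifier

def fit_local_models(local_clusters):
    """One model per cluster partition received on this host; each
    partition is assumed to be a (features, labels) pair."""
    return [DecisionTreeClassifier().fit(X, y) for X, y in local_clusters]

# On each host: models = fit_local_models(local_clusters)
# On the master: all_rule_sets = comm.gather(models, root=0), mirroring Fig. 3.
```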
5 Model Aggregation Using a Voting Scheme

Model aggregation is the key step in the DDM process, since it uses all the distributed data and its associated learned knowledge to generate the all-inclusive and comprehensive data model for the original and complete database. With such a model we can predict the outcome of unseen new data introduced to our system. The data aggregation task itself is an area of extensive research, and many methods and techniques have been developed [7, 8]. We have chosen to use a simple but effective voting technique whereby the classification models (rules) are validated against the unseen test data by means of positive and negative votes. A positive vote is an unseen data record that is classified correctly by a classification rule, and a negative vote is an incorrectly classified record. Once the votes are summed up and the rules are thereby ranked, the system can choose which rules to discard and which to keep; these rules form the global data model.

Fig. 4. The Data Aggregation Process
In terms of our DDM system, voting works as follows: each rule is tested on each data record in the test dataset and is given a positive vote for a correct prediction, a negative vote for a false prediction, and no vote at all if the rule conditions do not hold. After all the records in the test dataset have been scanned, the error rate is computed as:

$$error\ rate = \frac{negative\ votes}{positive\ votes + negative\ votes} \qquad (1)$$

However, if there are no votes at all, then the error rate is considered unknown and is set to the value of −1.
6 Distributed Data Mining – System Components

The whole DDM system can finally be discussed in terms of its individual components, which have been described in the sections above. Our system takes a dataset and,
1213
using SNOB partitions it to clusters. The clusters get distributed over the LAN using MPI. Data models are developed for each cluster dataset using the classification algorithm C4.5. Finally the system uses a voting scheme to aggregate all the data models. The final global classification data model comprises of the top three rules for each class (where available). We can describe our DDM system in terms of the following algorithm in figure 5. Report N Data MaxHosts Ci Ri GModel
Snob Report File Number of Clusters produced by Snob Complete Dataset Maximum Number of Hosts on LAN Cluster I Rule set of I Global Classification Model
1:
(Report, N) = Snob(Data);
2:
Partition(Data, Report);
3:
For i = 1 to N
4: 5: 6: 7:
Ri = C4.5Classify(MPI(Ci, MaxHosts)); For i = 1 to N Validate(Ri); GModel = Aggregate(R1•R2•…•Ri); Fig. 5. Functional algorithm
Note that MPI is used in conjunction with the known maximum number of hosts (line 4) to classify the clusters in parallel using the C4.5 classification algorithm. If the number of clusters exceeds the available number of hosts then some hosts will classify multiple clusters (using MPI). Also the aggregation model scans all Rule files (line 7) from all clusters and picks the best rules out of the union of all cluster rule sets.
Data
Fig. 6. DDM System Components
During the Classification phase we have also classified the original un-clustered dataset and produced rules modelling this data. To finely ascertain if our DDM system is efficient we compare our Global Model to this un-clustered Data Model. We
1214
M. Viswanathan, Y.K. Yang, and T.K. Whangbo
compare the top three rules for each class from this model with our rules from the Global Model. If our global model is over 90% accurate in comparison to the unclustered data model we can consider this a good result and conclude that our DDM system is efficient.
7 DDM System Evaluation and Conclusions Our DDM system was applied to relatively smaller databases in order to test the effectiveness of the concept. The system was used to mine (classify) the original database as well the clustered partitions. The DDM system was designed to output classification rules for the original database and the clustered datasets. In the case of the partitioned clusters the rule were validated and aggregated using our voting scheme. Five real-world databases were compared taking into account the top classification rules for each class from the original database. Thus we could compare the difference in predictive accuracy of the model derived from the original datasets with the aggregated global model from the partitioned database. As an example of the empirical evaluation we present experiments on the ‘Demo’ database which is taken from Data for Evaluating Learning in Valid Experiments (Delve) from [13]. The top three rules for each class from the cluster-derived rules are compared against the rules mined from the un-partitioned dataset. The aim of this evaluation is to determine the effect of the clustering process on the efficiency of our classification model and its predictive accuracy. The total average error for the rules from the original dataset is 42.8% while for the clustered partitions it is 41.83%.
[Bar chart: average error rate for Classes 1-5, comparing un-clustered rules with clustered rules.]

Fig. 7. Accuracy of Un-clustered Rules versus Clustered Rules
From the chart in Figure 7 it can be seen that the first two classes of the clustered rules have a lower average error rate than the un-clustered ones. Rules for class three have approximately equal average error rates for both sets of rules, and for the last two
classes the clustered rules have greater average error rates than the un-clustered ones. If we average these results out, both sets of rules yield approximately equal total average error rates; graphed, they would be two straight lines around the 42% average error rate mark. From the experimental evaluation, the total average error rate for the un-partitioned datasets, taking all datasets into account, was 30.86%, and the total average error rate for the clustered data was 35.46%. Overall, in our experiments the clustering caused a 4.6% loss of accuracy in our data modeling. Considering the other costs involved in standard schemes, we think this drop in accuracy is not very significant. In general, our evaluation gives plausible evidence supporting SNOB in the framework of distributed data mining. There are many issues in this project that need further research and enhancement to improve the overall DDM system performance. The use of SNOB to cluster the entire database may be questioned on the premise that it is probably impossible to apply clustering to the entire database due to memory and processor limitations. This being true, our primary objective is to demonstrate the effectiveness of SNOB in producing meaningful partitions that enable us to reduce the costs of model aggregation. In other words, SNOB could be used to create partitions from arbitrary subsets of the very large database. A more efficient aggregation model may yield more predictive accuracy and hence a better DDM system.
References
1. Wallace, C. S., Dowe, D. L.: MML Clustering of Multi-State, Poisson, von Mises Circular and Gaussian Distributions. Statistics and Computing, 10(1), January (2000) 73-83
2. Wallace, C. S., Freeman, P. R.: Estimation and Inference by Compact Coding. Journal of the Royal Statistical Society, Series B, 49(3) (1987) 240-265
3. Park, B. H., Kargupta, H.: Distributed Data Mining: Algorithms, Systems, and Applications. In Data Mining Handbook (2002)
4. Kargupta, H., Hamzaoglu, I., Stafford, B.: Scalable, Distributed Data Mining Using an Agent Based Architecture. Int. Conf. on Knowledge Discovery and Data Mining, August (1997) 211-214
5. Message Passing Interface Forum: MPI: A Message-Passing Interface Standard. International Journal of Supercomputer Applications, Vol. 8(3/4) (1994) 165-414
6. Quinlan, J. R.: C4.5: Programs for Machine Learning. San Mateo, CA: Morgan Kaufmann (1993)
7. Prodromidis, A., Chan, P.: Meta-learning in Distributed Data Mining Systems: Issues and Approaches. In Kargupta, H., Chan, P. (eds.): Advances of Distributed Data Mining. MIT/AAAI Press (2000)
8. Chan, P., Stolfo, S. J.: A Comparative Evaluation of Voting and Meta-learning on Partitioned Data. In Proceedings of the Twelfth International Conference on Machine Learning (1995) 90-98
9. Subramonian, R., Parthasarathy, S.: An Architecture for Distributed Data Mining. In Fourth International Conference on Knowledge Discovery and Data Mining, New York, NY (1998) 44-59
10. Fayyad, U. M., Piatetsky-Shapiro, G., Smyth, P.: From Data Mining to Knowledge Discovery: An Overview. In Fayyad, U. M., Piatetsky-Shapiro, G., Smyth, P., Uthurusamy, R. (eds.): Advances in Knowledge Discovery & Data Mining. AAAI/MIT (1996) 1-34
11. Grigorios, T., Lefteris, A., Ioannis, V.: Clustering Classifiers for Knowledge Discovery from Physically Distributed Databases. Data and Knowledge Engineering, 49(3), June (2004) 223-242
12. Kargupta, H., Park, B., Hershberger, D., Johnson, E.: Collective Data Mining: A New Perspective towards Distributed Data Mining. In Advances in Distributed and Parallel Knowledge Discovery. MIT/AAAI Press (2000) 133-184
13. Neal, R. M.: Assessing Relevance Determination Methods Using DELVE. In Bishop, C. M. (ed.): Neural Networks and Machine Learning. Springer-Verlag (1998) 97-129
A Method of Data Classification Based on Parallel Genetic Algorithm

Yuexiang Shi¹,²,³, Zuqiang Meng², Zixing Cai², and B. Benhabib³

¹ School of Information Engineering, XiangTan University, XiangTan 411105, China
² School of Information Engineering, Central South University, ChangSha 410082, China
³ Department of Mechanical and Industrial Engineering, Toronto University, Ontario, Canada M5S 3G8
Abstract. An effective genetic encoding is designed by constructing a complete classification rule set. This encoding makes full use of the excellent traits of genetic algorithms for data classification. The genetic algorithm is parallelized, which improves classification and its ability to deal with massive data. Some defects of current classifier algorithms are overcome by this algorithm. An analysis of experimental results is given to illustrate its effectiveness.
1 Introduction

With the development of information technology and management, data mining has become a new field in computer science since the 1990s [1]. Today, data mining is used in large enterprises, telecom, commerce, banking and so on, and these fields have benefited greatly from it. ID3 [2] and C4.5 [3] were among the first algorithms for data classification. These algorithms and their ramifications are based on the decision tree, but they have some unavoidable defects: for example, they are not powerful with respect to scalability [4] and adjustment. It is difficult to obtain the best decision tree using only the information-gain heuristic; the underlying defect is the lack of a global search strategy. In fact, constructing the best decision tree is NP-hard [5]. Parallel genetic algorithms, however, are well suited to solving such problems. This paper presents a technique based on a parallel genetic algorithm to classify data. It can improve the ability of classification algorithms to handle massive data.
2 Related Definitions

Definition 1 (Data set). For a specific data mining system, the data to be processed is called the data set, written U = {S1, S2, …, Ssize}. An object in U is called a data element; it is also called a sample, example or object. Here size denotes the number of elements in the data set.

Definition 2 (Length of attribute). The number of attributes used to describe a data element is called the attribute length.

Definition 3 (Training set). The set of data elements used to build the forecast model is called the training set, written T, with T ⊆ U.

Theorem 1. For a data set U of attribute length h, with attribute variables A1, A2, …, Ah, the depth of any constructed decision tree is no more than h. (Proof omitted.)
Fig. 1. A decision tree with only one node

Fig. 2. A decision tree with multiple nodes
Definition 4 (Decision tree). A tree is called a decision tree about B(U) when every interior node presents a test taken on an attribute of B, every branch presents one test output, and every leaf presents the distribution of one class.

Definition 5 (Classification regulation). An IF-THEN regulation r is called a classification regulation about B (⊆ U) when the pre-part of r consists of B's attributes, which form the SAT (satisfying condition), and the post-part is a judgment formula that fits the data elements of the SAT. Apparently, one classification regulation corresponds to one class C, written C = r(B).

Definition 6 (Completed classification regulation). A set R consisting of classification regulations is called a completed classification regulation set about B (⊆ U) if $B = \bigcup_{r \in R} r(B)$.

Theorem 2. For a given data set U, there must exist a completed classification regulation set about U; that is, there exists R such that $U = \bigcup_{r \in R} r(U)$. (Proof omitted.)
Data classification builds a prediction model from a limited training set T by supervised learning; the model is used to describe the reserved data set U, that is, to pre-classify U. This model is a completed classification regulation set R about U. Assume T is extracted from U and adequately represents U; since U can be very large (gigabytes or terabytes), T is large too. How to obtain the best completed classification regulation set R from the massive data of U, while avoiding the existing problems, is the research goal. In order to solve this problem, the genetic algorithm is used to search for the objective.
3 The Design of the Genetic Algorithm for Data Classification

*Coding. By Theorem 1, for a data set U of attribute length h, the depth of any decision tree is no more than h. Likewise, the number of propositions in the pre-part of an exported classification regulation is also no more than h. So any regulation r can be described in IF-THEN form: IF P1 AND P2 AND … AND Ph THEN Ph+1, where each Pi (i = 1, 2, …, h+1) is a judgment proposition and Pi corresponds to the test attribute Ai. If there is no test on an attribute, the test is represented by TRUE. Assume Ai has Xi distinct values a1, a2, …, aXi, and T is divided into n classes C1, C2, …, Cn. The proposition Pi can then be expressed as "the value of Ai is aji", ji ∈ {1, 2, …, Xi}, and this item is coded as ji. If the value of Pi is TRUE, the item's code is a star '*'. The proposition Ph+1 states that a data element fitting the pre-part SAT belongs to class Cj′; this item is coded as j′, with j′ ∈ {1, 2, …, n}. So the IF-THEN regulation r is coded as j1 j2 … jh j′, called the chromosome θr of regulation r. A bit of θr whose value is not '*' is called a valid genetic bit, and the number of valid genetic bits is called the valid length of the chromosome, written genetic(θr) (also written genetic(r)). Here j1, j2, …, jh, j′ ∈ {1, 2, …, max{max{Xi | i = 1, 2, …, h}, n}, *}. So one completed classification regulation set can be coded as

$$\{\, j_1^{(1)} j_2^{(1)} \cdots j_h^{(1)} j^{(1)\prime},\; j_1^{(2)} j_2^{(2)} \cdots j_h^{(2)} j^{(2)\prime},\; \ldots,\; j_1^{(q)} j_2^{(q)} \cdots j_h^{(q)} j^{(q)\prime} \,\}$$

where q represents the size of R and depends on the specific R. The valid genetic length of a classification regulation set is defined as

$$genetic(R) = \max\{\, genetic(j_1^{(1)} j_2^{(1)} \cdots j_h^{(1)} j^{(1)\prime}),\; \ldots,\; genetic(j_1^{(q)} j_2^{(q)} \cdots j_h^{(q)} j^{(q)\prime}) \,\}$$
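As an illustration of this encoding, the following Python sketch (ours; the 0-based attribute indices and the choice to count only the h attribute bits in the valid length are our assumptions) builds the chromosome j1 j2 … jh j′ for a rule and computes its valid genetic length.

```python
STAR = "*"   # a '*' bit means the attribute is not tested (proposition TRUE)

def encode_rule(tests, label, h):
    """Chromosome j1 j2 ... jh j' for an IF-THEN regulation.
    `tests` maps a 0-based attribute index to the index of the tested value;
    untested attributes become '*', and `label` is the class index j'."""
    return [str(tests[i]) if i in tests else STAR for i in range(h)] + [str(label)]

def valid_length(chromosome):
    """genetic(theta_r): the number of valid (non-'*') genetic bits,
    counting only the h attribute bits."""
    return sum(1 for gene in chromosome[:-1] if gene != STAR)

# IF A1 = a2 AND A4 = a1 THEN class C3, with h = 5 attributes:
theta = encode_rule({0: 2, 3: 1}, 3, 5)
print(theta)                # ['2', '*', '*', '1', '*', '3']
print(valid_length(theta))  # 2
```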
*Group set. A group is a set of individuals, each a classification regulation set, presented as

$$P = \{\, \{ j_1^{(1)} j_2^{(1)} \cdots j_h^{(1)} j^{(1)\prime}, \ldots, j_1^{(q)} j_2^{(q)} \cdots j_h^{(q)} j^{(q)\prime} \}_i \mid i = 1, 2, \ldots, P_{size} \,\}$$
where Psize represents the group size.

*Adapt function. The adapt (fitness) function is the judging standard the algorithm applies to an individual. An excellent completed classification regulation set can classify U exactly, and there is a connection between a classification regulation set and its valid genetic bits. Accordingly, for an individual S ∈ P, the adapt function is defined as

$$f(S) = w_1 x_1 + w_2 x_2 + w_3 x_3 + w_4 (1 - x_4)$$

where x1 = genetic(S) is the valid genetic length; $x_2 = \sum_{r \in S} genetic(r)$ is the total number of valid genetic bits; x3 = |S| is the size of S, i.e., the number of classification regulations in S; x4 is the probability of wrong classification by S; and each wi is the corresponding weight coefficient.

*Genetic operators.

I. Selection: Monte Carlo (roulette-wheel) selection is used.

II. Crossover: For any S1, S2 ∈ P, randomly select θ1 ∈ S1 and θ2 ∈ S2, and randomly pick a positive integer pos; a single-point crossover is then performed at location pos of θ1 and θ2. At the same time, the crossover probability Pc is adjusted dynamically: when the population tends towards a single model, heightening Pc improves the optimizing ability of the algorithm. We set

$$P_c = -k\,(f_{min} - f_{avg}) / (f_{max} - f_{avg} + \varepsilon)$$

where fmax is the current largest adapt value, fmin the minimum, favg the average, k ∈ (0, 1], and ε is a small positive real number that keeps the denominator from being zero.

III. Variation (mutation): This mainly ensures the reachability of the search space and keeps the diversity of individuals. The process randomly selects S ∈ P, θ ∈ S, and a valid genetic bit (say j), adds a random positive number ram, and then takes the result modulo Xj (the number of values of attribute Aj). The variation probability Pm ∈ [0, 1] is generally 0–0.05 [6].
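The adapt function and the dynamic crossover probability defined above are straightforward to express in code; the sketch below is ours, with a clamp to [0, 1] added as a safeguard the paper does not spell out.

```python
def fitness(x1, x2, x3, x4, w=(0.01, 0.0001, 0.01, 0.95)):
    """f(S) = w1*x1 + w2*x2 + w3*x3 + w4*(1 - x4), with x1..x4 as defined
    above; the default weights are those used in the experiments of Section 5."""
    return w[0] * x1 + w[1] * x2 + w[2] * x3 + w[3] * (1 - x4)

def adaptive_crossover_prob(fitnesses, k=0.6, eps=1e-9):
    """Pc = -k * (f_min - f_avg) / (f_max - f_avg + eps): as the population
    converges (f_max close to f_avg) the denominator shrinks and Pc rises.
    Clamping to [0, 1] is our added safeguard, not part of the formula."""
    f_max, f_min = max(fitnesses), min(fitnesses)
    f_avg = sum(fitnesses) / len(fitnesses)
    pc = -k * (f_min - f_avg) / (f_max - f_avg + eps)
    return max(0.0, min(1.0, pc))
```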
4 Parallel Genetic Algorithms

There are four kinds of parallel genetic algorithms. Taking the characteristics of the database system into consideration, this paper selects the coarse-grained parallel model (CPGA), whose processing nodes are also called acnodes. The method is to extract the training set T from the data set U and, at the same time, generate Psize decision trees from T using C4.5. From these, Psize completed classification regulation sets are educed, divided into several subgroups, and sent to the acnodes as initial colonies. Each colony evolves independently until certain conditions are fulfilled (for example, a time interval or a number of generations). The acnodes then exchange individuals with each other and continue to evolve until the algorithm converges. The exchange scheme between the acnodes is one of the key aspects of the method. We presented a stochastic exchange method in [8], shown in Fig. 3 (note: Pi in the figure represents an acnode, i = 1, …, 5; the dotted lines represent possible data exchange patterns). This paper selects this method because, though it may be inconvenient to implement, it can avoid the problem of prematurity [8].
Fig. 3. Improved CPGA
5 Experiment Results

The experiments are based on data from a subsidiary of the China Petro Co. The sample attributes mainly include control parameters such as temperature, compressive stress, metal content, etc. The results are divided into 5 kinds, indicating 5 product classes of dissimilar quality: particularly excellent, excellent, common, not good and waste. In the experiment, the parallel genetic algorithm (CMPGA) is contrasted with one based on a common genetic algorithm (CMGA). The size of the training set T is 500, and the testing set size is 600. The initial value of the crossover probability is 0.7 and its dynamic adjustment coefficient k is 0.6; the initial value of the variation probability Pm is 0.03; the weight coefficients w1, w2, w3 and w4 are 0.01, 0.0001, 0.01 and 0.95, respectively; and the convergence of the algorithm is controlled by a threshold: the algorithm is considered converged when the classification error rate is at most 0.30%. For CMPGA, with the other parameters as above, the acnodes exchange outstanding individuals every 20 generations. The parallel environment is composed of 3 PCs connected by a network, and the message transfer between processes is based on traditional socket programming. The speed of CMPGA is 40%-65% higher than that of CMGA.
6 Conclusions

This paper constructs a completed classification regulation set and realizes its genetic encoding, thereby reinforcing the operability of the algorithm. On one side, the system's overall data-handling and global optimization ability is advanced by the effective introduction of the parallel genetic algorithm; on the other side, the implicit parallelism of the genetic algorithm enlarges the processing capacity from n to O(n³). Thus it greatly improves the scalability of the classification algorithm and suits mining on large databases.
References
1. Li Li, Xu Zhanweng: An Algorithm of Classification Regulation and Trend Regulation. Mini-Micro Systems, 2000, 21(3): 319-321
2. Quinlan, J. R.: Induction of Decision Trees. Machine Learning, 1986, (1): 81-106
3. Quinlan, J. R.: C4.5: Programs for Machine Learning. San Mateo, CA: Morgan Kaufmann Publishers, Inc., 1993
4. Han Jiawei, Kamber, M.: Data Mining: Concepts and Techniques. Beijing: Mechanical Industry Press, 2001
5. Hong, J. R.: AE1: An Extension Approximate Method for General Covering Problem. International Journal of Computer and Information Science, 1985, 14(6): 421-437
6. Xi Yugong, Zi Tianyou: Summary of Genetic Algorithms. Control Theory and Applications, 1996, 13(6): 697-708
7. Wang Hongyan: Parallel Genetic Algorithm Research Development. Computer Science, 1999, 26(6): 48-53
8. Meng Zuqiang, Zheng Jinghua: Parallel Genetic Algorithm Based on Shared Memory and Communication. Computer Engineering and Applications, 2000, 36(5): 72-74
Rough Computation Based on Similarity Matrix

Huang Bing¹, Guo Ling², He Xin², and Xian-zhong Zhou³

¹ Department of Computer Science & Technology, Nanjing Audit University, Nanjing 210029, China
² Department of Automation, Nanjing University of Science & Technology, Nanjing 210094, China
³ School of Engineering Management, Nanjing University, Nanjing 210093, China
Abstract. Knowledge reduction is one of the most important tasks in rough set theory, and most types of reduction in this area are based on complete information systems. However, many information systems in the real world are not complete. Though several extended relations have been presented for incomplete information systems, not all reduction approaches to these extended models have been examined. Based on the similarity relation, the similarity matrix and the upper/lower approximation reduction are defined for incomplete information systems. To represent the similarity relation with the similarity matrix, rough computational methods based on the similarity relation are studied. Heuristic algorithms for non-decision and decision incomplete information systems based on the similarity matrix are proposed, and the time complexity of the algorithms is analyzed. Finally, an example is given to illustrate the validity of the presented algorithms.
1 Introduction

The rough set theory proposed by Pawlak [1] is a relatively new mathematical tool to deal with imprecise and uncertain information. Because of the successful application of rough set theory to knowledge acquisition, intelligent information processing, pattern recognition and data mining, many researchers in these fields are interested in this research topic, since it offers opportunities to discover useful knowledge in information systems. In rough set theory, a table called an information system or information table is used as a special kind of formal language to represent knowledge. In an information system, if all the attribute values of each object are known, the information system is called complete; otherwise, it is called incomplete. Obviously, it is very difficult to ensure the completeness of an information system due to the uncertainty of information. As we know, not all attributes are indispensable in an information system. The approach of removing all redundant attributes while preserving the classification or decision ability of the system is called attribute reduction or knowledge reduction. Though knowledge reduction has been an important issue in rough set theory, most knowledge reductions are based on complete information systems [2-4]. To deal with incomplete information, several extended models of the classical rough set have been
proposed [5-6]. In these extended theories, the equivalence relation is relaxed to a tolerance relation, a similarity relation or a general binary relation, respectively. In [7], several definitions of knowledge reduction based on the tolerance relation were presented; however, corresponding definitions and approaches to knowledge reduction based on the similarity relation were not studied. In this paper, we aim at an approach to attribute reduction based on the similarity relation in incomplete information systems. The remainder of this paper is organized as follows. Section 2 reviews the rough set model based on the similarity relation in incomplete systems. Section 3 gives the definitions of the lower approximation reduction and the lower approximation matrix in incomplete data tables and proposes an algorithm for lower approximation reduction. Similarly, the corresponding contents for incomplete decision tables are studied in Section 4. An illustrative example is given in Section 5, and Section 6 concludes this paper.
2 Incomplete Information Systems and Similarity Relation

Definition 1. S = (U, A = C ∪ D, V, f) is called an information system, where U is a non-empty and finite set of objects called the universe, C is a non-empty and finite set of conditional attributes, D is a non-empty and finite set of decision attributes, and C ∩ D = ∅. In general, D is a set containing a single element. The information function f is a map from U × A onto V, a set of attribute values. If every object in the universe has a known value on every attribute, the information system is complete; otherwise, it is incomplete. Further, if D = ∅, it is a non-decision information system, otherwise a decision table. For simplicity, we assume that all values of every object on the decision attributes are known.

Definition 2. In an incomplete information system S = (U, A = C ∪ D, V, f), where x, y ∈ U, the object x is called similar to y with respect to B ⊆ A if ∀ b ∈ B, f(x, b) = * ∨ (f(x, b) = f(y, b)); this relation between x and y is denoted $S_B(x, y)$.
Obviously, the similarity relation is reflexive and transitive, but not symmetric.

Definition 3. $R_B(x) = \{y : S_B(y, x)\}$; $R_B^{-1}(x) = \{y : S_B(x, y)\}$. $R_B(x)$ denotes the set of all objects in U which are similar to x with respect to B, and $R_B^{-1}(x)$ denotes the set of all objects in U to which x is similar with respect to B. Obviously, ∀ b ∈ B ⊆ A, $R_B(x) \subseteq R_{B \setminus \{b\}}(x)$ and $R_B^{-1}(x) \subseteq R_{B \setminus \{b\}}^{-1}(x)$. In general, $R_B(x) \neq R_B^{-1}(x)$.
Definition 4. S = (U, A = C ∪ D, V, f) is an incomplete information system. The upper approximation $\overline{B}(X)$ and lower approximation $\underline{B}(X)$ of X ⊆ U with respect to B ⊆ A are defined as follows:

$$\overline{B}(X) = \bigcup_{x \in X} R_B(x), \qquad \underline{B}(X) = \{x : R_B^{-1}(x) \subseteq X\}$$

Theorem 1. ∀ b ∈ B ⊆ A, X ⊆ U: $\overline{B}(X) \subseteq \overline{B \setminus \{b\}}(X)$ and $\underline{B \setminus \{b\}}(X) \subseteq \underline{B}(X)$.

Based on the similarity relation, we can define the upper/lower approximation reduction.
3 Similarity Matrix and Approximation Reduction for Non-decision Incomplete Information Systems

Definition 5. S = (U, C, V, f) is a non-decision incomplete information system, where U = {x1, x2, …, xn}. The lower approximation matrix of B ⊆ C is defined as

$$\underline{M}_B = (\underline{m}_{ij}^B)_{n \times n}, \qquad \underline{m}_{ij}^B = \begin{cases} 1 & x_j \in R_B^{-1}(x_i) \\ 0 & \text{else} \end{cases}$$

Definition 6. S = (U, C, V, f) is a non-decision incomplete information system, where U = {x1, x2, …, xn}. The upper approximation matrix of B ⊆ C is defined as

$$\overline{M}_B = (\overline{m}_{ij}^B)_{n \times n}, \qquad \overline{m}_{ij}^B = \begin{cases} 1 & x_j \in R_B(x_i) \\ 0 & \text{else} \end{cases}$$
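Definitions 5 and 6 translate directly into code. The sketch below is ours (objects are represented as dictionaries from attribute names to values, with '*' marking unknowns); it uses property (3) of the theorem that follows to obtain the upper matrix by transposition.

```python
import numpy as np

MISSING = "*"

def similarity_matrices(table, attrs):
    """Lower/upper approximation matrices of Definitions 5 and 6.
    table[i][a] is the value of object x_i on attribute a; '*' means unknown.
    x_i is similar to x_j iff f(x_i, b) = '*' or f(x_i, b) = f(x_j, b) for all b."""
    n = len(table)
    lower = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            # lower m_ij = 1 iff x_j in R_B^{-1}(x_i), i.e. x_i is similar to x_j
            if all(table[i][a] == MISSING or table[i][a] == table[j][a]
                   for a in attrs):
                lower[i, j] = 1
    upper = lower.T.copy()   # m-bar_ij = 1 iff x_j is similar to x_i
    return lower, upper
```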
Theorem 2. The upper/lower approximation matrices have the following properties:

(1) $\underline{m}_{ii}^B = \overline{m}_{ii}^B = 1$, 1 ≤ i ≤ n;
(2) $\underline{m}_{ij}^B = 1, \underline{m}_{jl}^B = 1 \Rightarrow \underline{m}_{il}^B = 1$; $\overline{m}_{ij}^B = 1, \overline{m}_{jl}^B = 1 \Rightarrow \overline{m}_{il}^B = 1$;
(3) $\underline{m}_{ij}^B = 1 \Leftrightarrow \overline{m}_{ji}^B = 1$.

Proof. (1) and (2) are direct results of the reflexivity and transitivity of the similarity relation, and (3) holds since $x_j \in R_B^{-1}(x_i) \Leftrightarrow x_i \in R_B(x_j)$.

Definition 7. S = (U, C, V, f) is a non-decision incomplete information system, where U = {x1, x2, …, xn} and B ⊆ C. If ∀ xi (1 ≤ i ≤ n) ∈ U, $R_B^{-1}(x_i) = R_C^{-1}(x_i)$ ($R_B(x_i) = R_C(x_i)$), B is called a lower (upper) approximation consistent set. If B is a lower/upper approximation consistent set and no proper subset of B is, then B is called a lower/upper approximation reduct. By (3) in Theorem 2, we will only discuss the lower approximation reduct.

Definition 8. For two matrices $M_1 = (m_{ij}^1)_{m \times n}$ and $M_2 = (m_{ij}^2)_{m \times n}$ (1 ≤ i ≤ m, 1 ≤ j ≤ n), $M_1 \leq M_2 \Leftrightarrow m_{ij}^1 \leq m_{ij}^2$ for all i, j, and $M_1 < M_2 \Leftrightarrow M_1 \leq M_2$ and $\exists\, m_{ij}^1 < m_{ij}^2$.

Theorem 3. If $E \subset F \subseteq C$, then $\underline{M}_F \leq \underline{M}_E$.

Theorem 4. B ⊆ C is a lower similarity reduct ⇔ $\underline{M}_B = \underline{M}_C$ and, ∀ b ∈ B, $\underline{M}_B < \underline{M}_{B \setminus \{b\}}$ holds.

Definition 9. S = (U, C, V, f) is a non-decision incomplete system, where B ⊆ C. If $\underline{M}_B < \underline{M}_{B \setminus \{b\}}$, we say b is lower-approximation indispensable in B. The set of all lower-approximation indispensable attributes in C is called the core of S = (U, C, V, f), denoted Core(C) = {c ∈ C : $\underline{M}_{C \setminus \{c\}} > \underline{M}_C$}.

Theorem 5. If Red(C) denotes the set of all lower-approximation reducts of S = (U, C, V, f), then Core(C) = ∩ Red(C).

Proof. ∀ c ∈ Core(C), $\underline{M}_{C \setminus \{c\}} > \underline{M}_C$. If there were a lower approximation reduct B such that c ∉ B, then B ⊆ C \ {c}, hence $\underline{M}_B \geq \underline{M}_{C \setminus \{c\}} > \underline{M}_C$, which contradicts $\underline{M}_B = \underline{M}_C$. On the other hand, ∀ c ∈ ∩ Red(C), if c ∉ Core(C), then $\underline{M}_{C \setminus \{c\}} = \underline{M}_C$; thus there is a subset of C \ {c} which is a lower approximation reduct, again a contradiction.

Definition 10. S = (U, C, V, f) is a non-decision incomplete information system. Let $\underline{M}_B = (\underline{m}_{ij}^B)_{n \times n}$ be the lower approximation matrix of B ⊆ C. The lower approximation significance of b ∈ C \ B with respect to B, and of b ∈ B in B, are defined respectively as

$$Sig_B(b) = |\underline{M}_B| - |\underline{M}_{B \cup \{b\}}| \qquad \text{and} \qquad Sig_{B \setminus \{b\}}(b) = |\underline{M}_{B \setminus \{b\}}| - |\underline{M}_B|$$

where $|\underline{M}_B| = |\{\underline{m}_{ij}^B : \underline{m}_{ij}^B = 1, 1 \leq i, j \leq n\}|$.

According to Definitions 5 and 6, the lower and upper approximation reducts are both minimal conditional attribute subsets which preserve the lower and upper approximation, respectively, of every decision class.

Definition 11. If $Sig_B(b) > 0$, b is called significant with respect to B. If $Sig_{B \setminus \{b\}}(b) > 0$, b is called significant in B.

Theorem 6. Core(C) = {c ∈ C : $Sig_{C \setminus \{c\}}(c) > 0$}.

An algorithm to compute a lower approximation reduct of an incomplete data table is given as follows.

Algorithm 1.
Input: an incomplete data table S = (U, C, V, f)
Output: a lower approximation reduct of S = (U, C, V, f)
1. Compute $\underline{M}_C$;
2. Compute Core(C) = {c ∈ C : $Sig_{C \setminus \{c\}}(c) > 0$}. Let B = Core(C);
   (1) Judge whether $\underline{M}_B = \underline{M}_C$; if it holds, go to 3;
   (2) Select $c_0 \in \{c : Sig_B(c) = \max_{b \in C \setminus B} Sig_B(b)\}$, set B = B ∪ {c0}, and go to (1).
3. Let k be the number of attributes added to B = Core(C) in step 2, denoted $\{c_{i_j} : j = 1, 2, \ldots, k\}$, where j is the order of addition. Judge whether $Sig_{B \setminus \{c_{i_j}\}}(c_{i_j}) = 0$ in reverse order; if it holds, then B = B \ {c_{i_j}}.
4. Output B.

Notes: The algorithm adds attributes step by step, starting from the core, until the attribute set satisfies the condition of a lower approximation consistent set. The significance of an attribute is taken as the measure for adding it: the greater the significance, the more important the attribute. The attempt aims to get a reduct with the fewest elements. The last step of Algorithm 1 checks the dispensability of every non-core attribute to ensure that the final result is a reduct. We analyze the time complexity of Algorithm 1 as follows. In step 1, the time complexity is $O(|C||U|^2)$. In step 2, we need to compute all
2
M C \{c} (c ∈ C ) and compare them with M C , so its time complexity is O( C U ) .The 2
2
2
2
time complexity of (1) is O( C U ) ; and that of (2) is O( C U ) since we need to find all matrices of every {c} U B and compute the number of nonzero elements. In the worst 3
2
case, the algorithm loops C times and the time complexity of step 2 is O( C U ) . 3
2
Therefore, the time complexity of Algorithm 1 is O( C U ) .
1228
4
H. Bing et al.
Similarity Matrix and Upper/lower Approximation Reduction for Incomplete Decision Table
Definition
S = (U , A = C U D, V , f )
12.
is
an
incomplete
decision
table,
where U = {x1 , x 2 , L , x n } . Define the lower and upper generalized decision function as d B ( xi ) = { f ( x j , d ) : x j ∈ RB−1 ( xi )} and d B ( xi ) = { f ( x j , d ) : x j ∈ RB ( xi )} respectively.
Definition
S = (U , A = C U D, V , f )
13.
is
an
incomplete
decision
table,
where U = {x1 , x 2 , L , x n } and B ⊆ C . B is a relative lower approximation consistent set
, d ( x ) = 1 ⇒ d ( x ) = 1 and B is a relative upper approximation consistent set if ∀ x ∈ U , d ( x ) = d ( x ) .
if ∀ xi ∈ U
C
i
B
i
i
B
i
C
i
If B is a relative lower/upper approximation consistent set and any proper subsets of B are not, B is relative lower/upper reduct. According to the above definitions, relative lower/upper approximation reduct is the minimal condition attribution set to keep the lower/upper approximation of each decision class constant. In other words, they keep all certain and possible rules constant, respectively. Definition
14.
S = (U , C U D,V , f )
is
an
incomplete
decision
table,
where U = {x1 , x 2 , L , x n } . The relative lower approximation matrix of B ⊆ C is defined as D B = ( d ijB ) n× n , where
1 B d ij = { 0 Definition
15.
d B ( xi ) = 1, f ( x j , d ) = f ( xi , d )
S = (U , C U D,V , f )
else is
an
incomplete
decision
table,
where U = {x1 , x 2 , L , x n } . The relative upper approximation matrix of B ⊆ C is defined B
as D B = (d ij ) n×n , where B 1 d ij = { 0
f ( x j , d ) ∈ d B ( xi ) else
.
Theorem 7. (1) B ⊆ C is the relative lower approximation reduct ⇔ D B = D C and ∀ b ∈ B, D B \{b} < D B ;
(2) B ⊆ C is the relative upper approximation reduct ⇔ D B = D C and ∀ b ∈ B, D B \{b} > D B .
Rough Computation Based on Similarity Matrix
1229
Similarly, we give the matrix representation of relative upper and lower approximation core as follows: Definition
16.
(1)
Relative
Core (C) = {c ∈ C : DC \{c] < DC } D
;(2)
lower
Relative
approximation
upper
approximation
core
core
D
Core (C ) = {c ∈ C : D C \{c ] > D C } . Theorem 8. Core D (C ) = I Re d D (C ) , where Re d D (C ) is all relative lower D
D
approximation reducts. Core (C ) = ICore (C ) , where Re d D (C ) is all relative upper approximation reducts. Definition 17. The relative lower and upper approximation significance of b∈C \ B
with
respect
to
B
are
defined
as
Sig (b) = D B U{b} − D B D B
and
D
Sig B (b) = D B − D B U{b} respectively.
An algorithm to compute relative lower approximation reduct for incomplete decision table is given as follows. Algorithm 2
Input: an incomplete decision table S = (U , C ,V , f ) Output: a relative lower approximation reduct of S = (U , C ,V , f ) . 1.
Compute D C ;
2.
Compute Core D (C ) = {c ∈ C : D C − DC \{c} > 0} . Let B = Core D (C ) ; (1) Judge D B = D C . If it holds, go to 3; (2) Select c0 ∈ {c ∈ C \ B : Sig DB (b) = Max Sig DB (b)} and do B = B U {c0 } . Then go to c∈C \ B
(1). 3.
Let k the number of attributes added into B = Core(C ) in step 2, which is denoted as {c i j : j = 1,2, L , k} , where j is the added order. Judge Sig DB \{c } (ci j ) = 0 in reverse ij
order. If it holds, then B = B \ {ci j } . 4.
Output B. 3
2
The time complexity of algorithm 2 is O( C U ) through similar analysis with algorithm 1. The algorithm of relative upper approximation reduct can be given in a similar way.
1230
H. Bing et al.
5 Example Analysis An incomplete information system is given is table 1, where U = {x1 , x 2 , L , x12 } , C = {c1 , c2 , c3 , c4 , c5} , D = {d } .
Table. 1
U\A c1 c2 c3 c4 c5 d x1 x2 x3 x4 x5 x6 x7 x8 x9 x10
1 1 1 1 1 0 0 0 0 0
0 0 1 * * 1 1 0 * *
1 1 * * 0 0 0 * * 1
1 * * * * 0 * * * *
0 * * * 1 1 * * * 0
1 1 2 2 1 2 2 1 1 2
Without considering decision attribute D = {d } , we use algorithm 1 to obtain the lower approximation reduct for the incomplete data table. Compute
B = Core(C ) = {c1 , c2 } , M B U{c3 } = M C , Sig {c3} = M B − M B U{c3 } = 8
Sig {c4} = M B − M BU{c4 } = 2 B
B
; Sig
B
{c 5 } =
M B − M B U{c5 } = 10
。 So
;
B = {c1 , c2 , c5} is
the minimal reduct. Similarly, with algorithm 2 we obtain the corresponding lower approximation relative reduct, which is {c1 , c2 , c3} or {c1 , c2 , c5} .
6 Conclusions Knowledge reduction is always a hot topic in rough set theory. However, most of knowledge reductions are based on complete information systems. Because of the existence of incomplete information in real world, the classical theory model of rough sets is extended to that based on similarity relation. In this paper, based on similarity relation, the upper/lower approximation reduction is defined and their approaches to corresponding reduction are examined. Finally, the example analysis shows the validity of this approach.
Rough Computation Based on Similarity Matrix
1231
References 1. Pawlak Z. Rough sets. International Journal of Computer and Information Sciences 1982;11:341-356. 2. Ju-Sheng Mi, Wei-Zhi Wu, Wen-Xiu Zhang. Approaches to approximation reducts in inconsistent decision tables. In G. Wang et al. (Eds.): Rough Set, Fuzzy Sets, Data Mining, and Granular Computing, 2003; 283-286. 3. Wen-Xiu Zhang, Ju-Sheng Mi, Wei-Zhi Wu. Approaches to knowledge reductions in inconsistent systems. International Journal of Intelligent Systems 2003; 18: 989-1000. 4. Zhang Wen-Xiu, Leung Yee, Wu Wei-Zhi. Information systems and knowledge discovery. Beijing: Science Press; 2003. 5. Kryszkiewicz, M. Rough set approach to incomplete information systems. Information Sciences 1998; 112, 39-49. 6. R. Slowinski, D. Vanderpooten. A generalized definition of rough approximations based on similarity. IEEE Transaction on Knowledge and Data Engineering 2000; 12: 331-336. 7. Zhou Xian-zhong Huang Bing. Rough set-based attribute reduction under incomplete information systems. Journal of Nanjing University of Science & Technology, 2003; 27: 630-635.
The Relationship Among Several Knowledge Reduction Approaches Keyun Qin1 , Zheng Pei2 , and Weifeng Du1 1
Department of Applied Mathematics, Southwest Jiaotong University, Chengdu, Sichuan 610031, China
[email protected] 2 College of Computers & Mathematical-Physical Science, Xihua University, Chengdu, Sichuan, 610039, China
[email protected] Abstract. This paper is devoted to the discussion of the relationship among some reduction approaches of information systems. It is proved that the distribution reduction and the entropy reduction are equivalent, and each distribute reduction is a d reduction. Furthermore, for consistent information systems, the distribution reduction, entropy reduction, maximum distribution reduction, distribute reduction, approximate reduction and d reduction are all equivalent.
1
Introduction
Rough set theory(RST), proposed by Pawlak [1], [2], is an extension of set theory for the study of intelligent systems characterized by insufficient and incomplete information. The successful application of RST in a variety of problems have amply demonstrated its usefulness. One important application of RST is the knowledge discovery in information system (decision table). RST operates on an information system which is made up of objects for which certain characteristics (i.e., condition attributes) are known. Objects with the same condition attribute values are grouped into equivalence classes or condition classes. The objects are each classified to a particular category with respect to the decision attribute value, those classified to the same category are in the same decision class. Using the concepts of lower and upper approximations in RST, the knowledge hidden in the information system may be discovered. One fundamental aspect of RST involves the searching for some particular subsets of condition attributes. By such one subset the information for classification purpose provides is equivalent to (according to a particular standard) the condition attribute set done. Such subsets are called reducts. To acquire brief decision rules from information systems, knowledge reduction is needed. Knowledge reduction is performed in information systems by means of the notion of a reduct based on a specialization of the general notion of independence due to Marczewski [3]. In recent years, more attention has been paid to knowledge reduction in information systems in rough set research. Many types L. Wang and Y. Jin (Eds.): FSKD 2005, LNAI 3613, pp. 1232–1241, 2005. c Springer-Verlag Berlin Heidelberg 2005
The Relationship Among Several Knowledge Reduction Approaches
1233
of knowledge reduction and their applications have been proposed for inconsistent information systems in the area of rough sets [4], [5], [6], [7], [8], [9], [10], [11], [12]. The first knowledge reduction approach due to[6] which is carry out through discernibility matrixes and discernibility functions. This kind of reduction is based on the positive region of the universe and we call it d reduction. For inconsistent information systems, Kryszkiewicz [7] proposed the concepts of distribution reduction and distribute reduction.Zhang [5] proposed the concepts of maximum distribution reduction and approximate reduction and provide new approaches to knowledge reduction in inconsistent information systems. Furthermore, some approaches to knowledge reduction based on variable precision rough set model were proposed [13]. Information entropy is a measure of information involved in a system. Based on conditional information entropy, some knowledge reduction approaches in information systems were proposed in [4]. This paper is devoted to the discussion of the relationship among some reduction approaches of information systems. It is proved that the distribution reduction and the entropy reduction are equivalent, and each distribute reduction is a d reduction. Furthermore, for consistent information systems, the distribution reduction, entropy reduction, maximum distribution reduction, distribute reduction, approximate reduction and d reduction are all equivalent.
2
Preliminaries and Notations
An information system is a quadruple S = (U, AT ∪ {d}, V, f ), where (1) U is a non-empty finite set and its elements are called objects of S. (2) AT is the set of condition attributes and d is the decision attribute of S. (3) V = ∪q∈AT ∪{d} Vq , where Vq is a non-empty set of values of attribute q ∈ AT ∪ {d}, called domain of the attribute q. (4) f : U → AT ∪ {d} is a mapping, called description function of S, such that f (x, q) ∈ Vq for each (x, q) ∈ U × (AT ∪ {d}). Let S = (U, AT ∪ {d}, V, f ) be an information system and A ⊆ AT . The discernibility relation ind(A) on U derived from A, defined by (x, y) ∈ ind(A) if and only if ∀a ∈ A, f (x, a) = f (y, a), is an equivalent relation and hence (U, ind(A)) is a Pawlak approximation space. We denote by [x]A the equivalent class with respect to ind(A) that containing x and U/A the set of these equivalent classes. For each X ⊆ U , according to Pawlak [1], the upper approximation A(X) and lower approximation A(X) of X with respect to A are defined as A(X) = {x ∈ U |[x]A ∩ X = ∅},
A(X) = {x ∈ U |[x]A ⊆ X}.
(1)
Based on the approximation operators, Skowron proposed the concept of d reduction of an information system. Definition 1. Let S = (U, AT ∪ {d}, V, f ) be an information system and A ⊆ AT . The positive region posA (d) of d with respect to A is defined as posA (d) = ∪X∈U/d A(X).
1234
K. Qin, Z. Pei, and W. Du
Definition 2. Let S = (U, AT ∪ {d}, V, f ) be an information system and A ⊆ AT . A is called a d consistent subset of S if posA (d) = posAT (d). A is called a d reduction of S if A is a d consistent subset of S and each proper subset of A is not a d consistent subset of S. All the d reductions can be carry out through discernibility matrixes and discernibility functions [6]. Let S = (U, AT ∪ {d}, V, f ) be an information system and B ⊆ AT , x ∈ U . We introduce the following notations: U/d = {D1 , · · · , Dr };
µB (x) = (D(D1 /[x]B ), · · · , D(Dr /[x]B ));
γB (x) = {Dj ; D(Dj /[x]B ) = maxq≤r D(Dq /[x]B )}; δB (x) = {Dj ; Dj ∩ [x]B = ∅}; 1 r ηB = Σ |B(Dj )|; |U | j=1 |D ∩[x] |
j B where D(Dj /[x]B ) = |[x] is the include degree of [x]B in Dj . B| For inconsistent information systems, Kryszkiewicz [7] proposed the concepts of distribution reduction and distribute reduction. Based on this work, Zhang [5] proposed the concepts of maximum distribution reduction and approximate reduction. Farther more, the judgement theorems and discernibility matrixes with respect to those reductions are obtained. These reductions are based on the concept of include degree.
Definition 3. Let S = (U, AT ∪ {d}, V, f ) be an information system, A ⊆ AT . (1) A is called a distribution consistent set of S if µA (x) = µAT (x) for each x ∈ U . A is called a distribution reduction of S if A is a distribution consistent set of S and no proper subset of A is distribution consistent set of S. (2) A is called a maximum distribution consistent set of S if γA (x) = γAT (x) for each x ∈ U . A is called a maximum distribution reduction of S if A is a maximum distribution consistent set of S and no proper subset of A is maximum distribution consistent set of S. (3) A is called a distribute consistent set of S if δA (x) = δAT (x) for each x ∈ U . A is called a distribute reduction of S if A is a distribute consistent set of S and no proper subset of A is distribute consistent set of S. (4) A is called a approximate consistent set of S if ηA = ηAT . A is called a approximate reduction of S if A is a approximate consistent set of S and no proper subset of A is approximate consistent set of S. [5] proved that the concepts of distribute consistent set and approximate consistent set are equivalent, a distribution consistent set must be a distribute consistent set and a maximum distribution consistent set. Let S = (U, AT ∪ {d}, V, f ) be an information system, A ⊆ AT and U/AT = {Xi ; 1 ≤ i ≤ n},
U/A = {Yj ; 1 ≤ j ≤ m},
U/d = {Zl ; 1 ≤ l ≤ k}.
The conditional information entropy H(d|A) of d with respect to A is defined H(d|A) = −
m j=1
(p(Yj ) ·
k l=1
p(Zl |Yj )log(p(Zl |Yj ))),
The Relationship Among Several Knowledge Reduction Approaches |Y |
1235
|Z ∩Y |
j l where p(Yj ) = |U|j , p(Zl |Yj ) = |Y and 0log0 = 0. j| Based on conditional information entropy, Wang [4] proposed the concept of entropy reduction for information systems.
Definition 4. Let S = (U, AT ∪ {d}, V, f ) be an information system and A ⊆ AT . A is called an entropy consistent set of S if H(d|A) = H(d|AT ). A is called an entropy reduction of S if A is an entropy consistent set of S and non proper subset of A is entropy consistent set of S.
3
The Relationship Among Knowledge Reduction Approaches
In this section, we discuss the relationship among knowledge reduction approaches. In what follows we assume that S = (U, AT ∪ {d}, V, f ) is an information system, A ⊆ AT and U/AT = {Xi ; 1 ≤ i ≤ n},
U/A = {Yj ; 1 ≤ j ≤ m},
U/d = {Zl ; 1 ≤ l ≤ k}.
Theorem 5. Let Yj = ∪t∈Tj Xt , 1 ≤ j ≤ m, where Tj is an index set. For each 1 ≤ l ≤ k, |Zl ∩ Yj |log(
|Zl ∩ Yj | |Zl ∩ Xt | )≤ |Zl ∩ Xt |log( ). |Yj | |Xt | t∈Tj
Proof. Let Tj1 = {Xt ; t ∈ Tj , Zl ∩ Xt = ∅}. If Tj1 = ∅, then Zl ∩ Xt = ∅ for each t ∈ Tj and hence Zl ∩ Yj = ∅, the conclusion holds. If Tj1 = ∅, by lnx ≤ x − 1, it follows that |Zl ∩ Yj | |Zl ∩ Xt | |Zl ∩ Yj |log( )− |Zl ∩ Xt |log( ) |Yj | |Xt | t∈Tj
=
|Zl ∩ Xt |log(
t∈Tj
=
t∈Tj1
≤
t∈Tj
|Zl ∩ Yj ||Xt | |Zl ∩ Xt |log( ) |Yj ||Zl ∩ Xt | |Zl ∩ Xt |(
t∈Tj1
=
|Zl ∩ Yj | |Zl ∩ Xt | )− |Zl ∩ Xt |log( ) |Yj | |Xt |
|Zl ∩ Yj ||Xt | − 1)loge |Yj ||Zl ∩ Xt |
loge (|Zl ∩ Yj ||Xt | − |Yj ||Zl ∩ Xt |) |Yj | t∈Tj1
=
loge ( (|Zl ∩ Yj ||Xt | − |Yj ||Zl ∩ Xt |) |Yj | t∈Tj1
=
loge (|Zl ∩ Yj |(|Yj | − |Yj |
t∈Tj1
t∈Tj −Tj1
|Xt |) − |Yj ||Zl ∩ Yj |) ≤ 0.
1236
K. Qin, Z. Pei, and W. Du
Theorem 6. H(d|AT ) = H(d|A) if and only if |Zl ∩ Yj |log(
|Zl ∩ Yj | |Zl ∩ Xt | )= |Zl ∩ Xt |log( ), |Yj | |Xt | t∈Tj
for each 1 ≤ l ≤ k and 1 ≤ j ≤ m, where Yj = ∪t∈Tj Xt , 1 ≤ j ≤ m, and Tj is an index set. Proof. H(d|AT ) = −
n
(p(Xi ) ·
i=1
p(Zl |Xi )log(p(Zl |Xi )))
l=1
1 |Zl ∩ Xi | (p(Xi )|Zl ∩ Xi |log( ), |U | |Xi | i=1 k
=−
k
n
l=1
H(d|A) = −
m
(p(Yj ) ·
j=1
p(Zl |Yj )log(p(Zl |Yj )))
l=1
1 |Zl ∩ Yj | (p(Xi )|Zl ∩ Yj |log( ). |U | |Yj | j=1 k
=−
k
m
l=1
The sufficiency is trivial because each A equivalent class is just a union of some AT equivalent classes. Necessity: For each 1 ≤ l ≤ k and 1 ≤ j ≤ m, by Theorem 5, |Zl ∩ Yj |log(
|Zl ∩ Yj | |Zl ∩ Xt | )≤ |Zl ∩ Xt |log( ), |Yj | |Xt | t∈Tj
and hence m
|Zl ∩ Yj | |Zl ∩ Xt | )≤ |Zl ∩ Xt |log( ). |Yj | |Xt | i=1 n
|Zl ∩ Yj |log(
j=1
If there exists 1 ≤ l ≤ k and 1 ≤ j ≤ m such that |Zl ∩ Yj |log(
|Zl ∩ Yj | |Zl ∩ Xt | )< |Zl ∩ Xt |log( ). |Yj | |Xt | t∈Tj
consequently, m j=1
|Zl ∩ Yj | |Zl ∩ Xt | )< |Zl ∩ Xt |log( ). |Yj | |Xt | n
|Zl ∩ Yj |log(
i=1
and hence H(d|AT ) < H(d|A), a contradiction. Theorem 7. A ⊆ AT is a distribution reduction of S if and only if A is an entropy reduction of S.
The Relationship Among Several Knowledge Reduction Approaches
1237
Proof. It needs only to prove that A is a distribution consistent set if and only if A is an entropy consistent set. Sufficiency: Assume that A is a distribution consistent set. For each 1 ≤ l ≤ k and 1 ≤ j ≤ m, let Yj = [x]A . We notice that Jx,A = {[y]AT ; [y]AT ⊆ [x]A } is a partition of [x]A . By |Zl ∩ [x]A | = [y]AT ∈Jx,A |Zl ∩ [y]AT |, it follows that |Zl ∩ [y]AT | |Zl ∩ [y]A | |Zl ∩ [x]A | = = , |[y]AT | |[y]A | |[x]A | for each [y]AT ∈ Jx,A and hence |Zl ∩ [x]A |log(
|Zl ∩ [x]A | )= |[x]|A
|Zl ∩ [y]AT |log(
[y]AT ∈Jx,A
|Zl ∩ [y]AT | ), |[y]AT |
it follows by Theorem 6 that H(d|AT ) = H(d|A) and A is an entropy consistent set. Necessity: Assume that A is an entropy consistent set. For each x ∈ U , Jx,A = {[y]AT ; [y]AT ⊆ [x]A } forms a partition of [x]A . Let J = Jx,A − {[x]AT }. For each 1 ≤ l ≤ k, by Theorem 6, it follows that |Zl ∩ [x]A |log( and hence
|Zl ∩ [x]A | )= |[x]A |
|Zl ∩ [y]AT |log(
[y]AT ∈Jx,A
|Zl ∩ [y]AT |log(
[y]AT ∈Jx,A
|Zl ∩ [y]AT |log(
[y]AT ∈Jx,A
|Zl ∩ [x]A | )= |[x]A |
|Zl ∩ [y]AT | ). |[y]AT |
|Zl ∩ [y]AT |log(
[y]AT ∈Jx,A
|Zl ∩ [y]AT | ), |[y]AT |
|Zl ∩ [x]A ||[y]AT | ) = 0, |[x]A ||Zl ∩ [y]AT |
that is, |Zl ∩ [x]A ||[x]AT | ) |[x]A ||Zl ∩ [x]AT | |Zl ∩ [x]A ||[y]AT | |Zl ∩ [y]AT |log( ) |[x]A ||Zl ∩ [y]AT |
−|Zl ∩ [x]AT |log( =
[y]AT ∈J
≤
|Zl ∩ [y]AT |(
[y]AT ∈J
=
loge |[x]A |
|Zl ∩ [x]A ||[y]AT | − 1)loge |[x]A ||Zl ∩ [y]AT |
(|Zl ∩ [x]A ||[y]AT | − |[x]A ||Zl ∩ [y]AT |)
[y]AT ∈J
loge (|Zl ∩ [x]A |(|[x]A | − |[x]AT |) − |[x]A |(|Zl ∩ [x]A | − |Zl ∩ [x]AT |)) |[x]A | loge = (|[x]A ||Zl ∩ [x]AT | − |[x]AT ||Zl ∩ [x]A |). |[x]A |
=
1238
K. Qin, Z. Pei, and W. Du
It follows that ln(
|Zl ∩ [x]A ||[x]AT | |Zl ∩ [x]A ||[x]AT | )≥ − 1. |[x]A ||Zl ∩ [x]AT | |[x]A ||Zl ∩ [x]AT |
By lna ≤ a − 1 for each a > 0, ln(
|Zl ∩ [x]A ||[x]AT | |Zl ∩ [x]A ||[x]AT | )= − 1, |[x]A ||Zl ∩ [x]AT | |[x]A ||Zl ∩ [x]AT |
and hence
|Zl ∩ [x]A ||[x]AT | = 1, |[x]A ||Zl ∩ [x]AT |
because a = 1 is the unique root of lna = a − 1, that is |[x]AT ∩ Zl | |[x]A ∩ Zl | = |[x]AT | |[x]A | and A is a distribution consistent set. Theorem 8. A is an entropy consistent set if and only if |[x]AT ∩ [x]d | |[x]A ∩ [x]d | = |[x]AT | |[x]A | for each x ∈ U . Proof. By Theorem 7, the necessity is trivial. Sufficiency: We prove that |Zl ∩ Yj |log(
|Zl ∩ Yj | |Zl ∩ Xt | )= |Zl ∩ Xt |log( ), |Yj | |Xt | t∈Tj
for any 1 ≤ l ≤ k and 1 ≤ j ≤ m and finish the proof by Theorem 6, where Yj = ∪t∈Tj Xt . If Zl ∩ Yj = ∅, then Zl ∩ Xt = ∅ for each t ∈ Tj and the conclusion holds. If Zl ∩ Yj = ∅, suppose that x ∈ Zl ∩ Yj , it follows that Zl = [x]d and Yj = [x]A . Let Jx,A = {[z]AT ; [z]AT ⊆ [x]A }. Consequently, |Zl ∩ Yj |log(
|Zl ∩ Yj | |Zl ∩ Xt | )− |Zl ∩ Xt |log( ) |Yj | |Xt | t∈T j
|[x]d ∩ [x]A | = |[x]d ∩ [x]A |log( )− |[x]A | =
[z]AT ∈Jx,A
=
[z]AT ∈Jx,A
|[x]d ∩ [z]AT |log(
|[x]d ∩ [z]AT |log(
[z]AT ∈Jx,A
|[x]d ∩ [x]A | )− |[x]A |
|[x]d ∩ [z]AT |log(
[z]AT ∈Jx,A
|[x]d ∩ [x]A ||[z]AT | |[x]d ∩ [z]AT |log( ). |[x]A ||[x]d ∩ [z]AT |
|[x]d ∩ [z]AT | ) |[z]AT | |[x]d ∩ [z]AT | ) |[z]AT |
The Relationship Among Several Knowledge Reduction Approaches
1239
Assume that u ∈ [x]d ∩ [z]AT , it follows that u ∈ [x]A and hence [x]d = [u]d , [z]AT = [u]AT and [x]A = [u]A , consequently, |[x]d ∩ [x]A ||[z]AT | |[u]d ∩ [u]A ||[u]AT | = = 1, |[x]A ||[x]d ∩ [z]AT | |[u]A ||[u]d ∩ [u]AT | and hence |Zl ∩ Yj |log(
|Zl ∩ Yj | |Zl ∩ Xt | )= |Zl ∩ Xt |log( ). |Yj | |Xt | t∈Tj
Theorem 9. Let S = (U, AT ∪{d}, V, f ) be an information system and A ⊆ AT . If A is a distribute consistent set of S, then A is a d consistent set of S. Proof. Let A ⊆ AT be a distribute consistent set of S and U/d = {D1 , D2 , · · · , Dr }. For each x ∈ posAT (d), it follows that [x]AT ⊆ [x]d and hence δAT (x) = {Dj ; Dj ∩ [x]AT = ∅} = {[x]d } = δA (x). Assume that [x]d = Dj , it follows that Dl ∩ [x]A = ∅ for each l ≤ r, l = j., that is [x]A ⊆ Dj = [x]d and x ∈ A([x]d ) ⊆ posA (d), it follows that posAT (d) ⊆ posA (d). posA (d) ⊆ posAT (d) is trivial.
4
Knowledge Reduction for Consistent Information Systems
An information system S = (U, AT ∪ {d}, V, f ) is called to be consistent, if [x]AT ⊆ [x]d for each x ∈ U . In this section, we discuss knowledge reductions for consistent information systems. We will prove that the concepts of distribution reduction, approximate reduction and d reduction are equivalent for consistent information systems. Theorem 10. Let S = (U, AT ∪ {d}, V, f ) be an information system. S is consistent if and only if posAT (d) = U . Proof. If S is consistent, then [x]AT ⊆ [x]d for each x ∈ U and hence x ∈ AT ([x]d ) ⊆ ∪X∈U/d AT (X) = posAT (d), that is posAT (d) = U . If posAT (d) = U , then x ∈ posAT (d) for each x ∈ U and hence x ∈ AT ([x]d ), that is [x]AT ⊆ [x]d . Theorem 11. Let S = (U, AT ∪ {d}, V, f ) be an information system. S is consistent if and only if δAT (x) = {[x]d )} for each x ∈ U . Proof. If S is consistent, then [x]AT ⊆ [x]d for each x ∈ U and hence δAT (x) = {[x]d )}. If δAT (x) = {[x]d )} for each x ∈ U , then [x]AT ∩ [y]d = ∅ for each [y]d = [x]d , that is [x]AT ⊆ [x]d and S is consistent. Theorem 12. Let S = (U, AT ∪ {d}, V, f ) be an information system. S is consistent if and only if H(d|AT ) = 0.
1240
K. Qin, Z. Pei, and W. Du
Proof. Assume that U/AT = {X1 , X2 , · · · , Xn },
U/d = {Y1 , Y2 , · · · , Ym }.
It follows that H(d|AT ) = − =−
n
(p(Xi ) ·
i=1 n m i=1 j=1
m
p(Yj |Xi )log(p(Yj |Xi )))
j=1
|Yj ∩ Xi | |Yj ∩ Xi | log( ). |U | |Xi |
If S is consistent, then for each i(1 ≤ i ≤ n), there exists unique j(1 ≤ j ≤ |Yj ∩Xi | |Yj ∩Xi | m) such that Xi ⊆ Yj , and hence |X = 1 or |X = 0, consequently, i| i| H(d|AT ) = 0. If H(d|AT ) = 0, then − it follows that
n m |Yj ∩ Xi | |Yj ∩ Xi | log( ) = 0, |U | |Xi | i=1 j=1
m |Yj ∩ Xi | |Yj ∩ Xi | log( ) = 0, |U | |Xi | j=1
for each i(1 ≤ i ≤ n), that is there exists j(1 ≤ j ≤ m) such that Xi ⊆ Yj , and S is consistent. Theorem 13. Let S = (U, AT ∪ {d}, V, f ) be a consistent information system and A ⊆ AT . (1) A is an entropy consistent set if and only if S = (U, A ∪ {d}, V, f ) is consistent. (2) A is a approximate consistent set if and only if S = (U, A ∪ {d}, V, f ) is consistent. (3) A is a positive domain consistent set if and only if S = (U, A ∪ {d}, V, f ) is consistent. By this Theorem, for consistent information systems, the concepts of distribution reduction, entropy reduction, maximum distribution reduction, distribute reduction, approximate reduction and d reduction are all equivalent.
Acknowledgements The authors are grateful to the referees for their valuable comments and suggestions. This work has been supported by the National Natural Science Foundation of China (Grant No. 60474022).
The Relationship Among Several Knowledge Reduction Approaches
1241
References 1. Pawlak, Z.: Rough Sets. International Journal of Computer and Information Science. 11 (1982) 341–356 2. Pawlak, Z.(ed.): Rough Sets: Theoretical Aspects of Reasoning About Data. Kluwer Academic Publishers, Boston (1991) 3. Marczewski.: A General Scheme of Independence in Mathematics. Bulletin de L Academie Polonaise des Sciences–Serie des Sciences Mathematiques Astronomiques et Physiques. 6 (1958) 731–736 4. Wang, G, Y., Yu, H., Yang, D, C.: Decision Table Reduction Based on Conditional Information Entropy. Chinese Journal of Computers (in Chinese). 25 (2002) 759– 766 5. Zhang, W, X., Mi, J, S., Wu, W, Z.: Knowledge Reductions in Inconsistent Information Systems. Chinese Journal of Computers (in Chinese). 26 (2003) 12–18 6. Skowron, A., Rauszer, C.: The Discernibility Matrices And Functions in Information System. Intelligent Decision Support Handbook of Applications and Advances of the Rough Sets Theory, Kluwer Academic Publishers, Dordrecht (1992) 7. Kryszkiewicz.: Comparative Study of Alternative Type of Knowledge Reduction in Inconsistent Systems. International Journal of General Systems. 16 (2001) 105–120 8. Beynon, M.: Reducts within The Variable Precision Rough Set Model: A Further Investigation. European Journal of Operational Research. 134 (2001) 592–605 9. Quafatou, M.: RST: A Generalization of Rough Set Theory. Information Sciences. 124 (2000) 301–316 10. Zheng, P., Keyun, Q.: Obtaining Decision Rules And Combining Evidence Based on Modal Logic. Progress in Natural Science (in chinese). 14 (2004) 501–508 11. Zheng, P., Keyun, Q.: Intuitionistic Special Set Expression of Rough Set And Its Application in Reduction of Attributes. Pattern Recognition and Artificial Intelligence(in chinese). 17 (2004) 262–266 12. Slowinski, R., Zopounidis, Dimitras, A. I.: Prediction of Company Acquisition in Greece by Means of The Rough Set Approach. European Journal of Operational Research. 100 (1997) 1–15 13. Jusheng, M., Weizhi, W., Wenxiu, Z.: Approaches to Knowledge Reduction Based on Variable Precision Rough Set Model. Information Sciences. 159 (2004) 255–272
Rough Approximation of a Preference Relation for Stochastic Multi-attribute Decision Problems Chaoyuan Yue, Shengbao Yao, Peng Zhang, and Wanan Cui Department of Control Science and Engineering, Huazhong University of Science and Technology,Wuhan, Hubei, 430074, China
Abstract. Multi-attribute decision problems where the performances of the alternatives are random variables are considered in this paper. The suggested approach grades the probabilities of preference of one alternative over another with respect to the same attribute. Based on the graded probabilistic dominance relation, the pairwise comparison information table is defined. The global preferences of the decision maker can be seen as a rough binary relation. The present paper proposes to approximate this preference relation by means of the graded probabilistic dominance relation with respect to the subsets of attributes.
1
Introduction
Multi-attribute decision making (MADM) is widely applied in many fields such as military affairs, economy and management. When dealing with MADM, it is usual to be confronted to a context of uncertainty. We suppose in this paper that uncertainty is due to the fact that performance evaluations of alternatives on each of the attributes lead to random variables with probability distribution. This kind of problems are called stochastic multi-attribute decision making. Based on the results in [1]-[3], this short paper presents a method to solve above-mentioned problems. In our approach, we define a dominance relation by grading the probabilities of preference of one alternative over another with respect to the same attribute. The global preference of the DM is approximated by means of the graded probabilistic dominance relation with respect to the subsets of attributes. This paper is organized as follows. In the next section graded probabilistic dominance relation about attribute is introduced. The global preference is approximated by means of the graded probabilistic dominance relation. Section 3 is devoted to generation of decision rules and section 4 groups conclusion.
2
Rough Approximation of a Preference Relation
The multi-attribute problem that is considered in this paper can be represented by an < A, Q, E > model, where A is a finite set of potential alternatives ai (i = 1, 2, · · · , n) , Q = {q1 , q2 , · · · , qn } is a finite set of attributes and E is the set L. Wang and Y. Jin (Eds.): FSKD 2005, LNAI 3613, pp. 1242–1245, 2005. c Springer-Verlag Berlin Heidelberg 2005
Rough Approximation of a Preference Relation
1243
of evaluations Xik expressed by the probability density function fik (xik ) which associates the performance of alternative ai with respect to the attribute qk . For any alternatives ai and aj in A , the possibility of preference of ai over aj with respect to qk can be quantified by the probability pkij = p(Xik ≥ Xjk ). Suppose that random variables Xik and Xjk are independent, we have k k pij = fij (xik , xjk )dxik dxjk = fik (xik )fjk (xjk )dxik dxjk xik ≥xjk
xik ≥xjk
fijk (xik , xjk )
where is the joint probability density function of random variables Xik and Xjk . Notice that 0 ≤ pkij ≤ 1 and pkij + pkji = 1 hold. pkij measures the strength of preference of ai over aj with respect to qk .In order to distinguish the strength, we propose to grade the preference according to the value of pkij . The following set I of interval is defined: I = {[0, 0.15], (0.15, 0.3], (0.3, 0.45], (0.45, 0.55), [0.55, 0.7), [0.7, 0.85), [0.85, 1.0]} In term of above partition, we define the set Tqk of binary graded probabilistic dominance relations on A: Tqk = {Dqhk , h ∈ H} where qk ∈ Q, H = {−3, −2, −1, 0, 1, 2, 3}. The degree h in the relation ai Dqhk aj corresponds to the interval that pkij belongs to one by one. Due to pkij + pkji = 1, ∀ai , aj ∈ A, we have ai Dqhk aj ⇐⇒ aj Dq−h ai . k In order to represent preferential information provided by the DM, we shall use the pairwise comparison table (PCT) introduced by Greco et al [2]. The preferential information concerns a set B ⊂ A of, so called, reference actions, with respect to which the DM is willing to express his/her attitude through pairwise comparisons. Let Q be the set of attributes (condition attributes) describing the alternatives, andD, the decision attribute. The decision table is defined as 4-tuple:T =< U, Q D, VQ VD , g >, where U ⊆ B × B is a finite set of pairs of alternatives, VQ = {Vqk , qk ∈ Q} is the domains of the condition attributesand VD is the domain of the decision attribute, and g : U × (Q ∪ D) −→ VQ VD is a total function. This function is such that: (1)g[(ai , aj ), q] = h, if ai Dqh aj , q ∈ Q, (ai , aj ) ∈ U ; (2)g[(ai , aj ), D] = P , if ai is preferred to aj , (ai , aj ) ∈ U ; (3)g[(ai , aj ), D] = N , if ai is not preferred to aj , (ai , aj ) ∈ U , where ”ai is preferred to aj ” means ”ai is at least as good as aj ”.Generally, the decision table can be presented as in Table 1. Table 1. Pairwise comparison table.
q1 q2 ··· qm D (ai , aj ) g[(ai , aj ), q1 ] g[(ai , aj ), q2 ] · · · g[(ai , aj ), qm ] g[(ai , aj ), D] = P ··· ··· ··· ··· ··· ··· (as , at ) g[(as , at ), q1 ] g[(as , at ), q2 ] · · · g[(as , at ), qm ] g[(as , at ), D] = N
1244
C. Yue et al.
The binary relation P defined on A is the comprehensive preference relation of the DM. In this paper, it is supposed that P is a complete and antisymmetric binary relation on B, i.e., 1)∀ai , aj ∈ B, ai P aj and/or aj P ai ; 2) both ai P aj and aj P ai implies ai = aj . In order to approximate the global preference of the DM, the following dominance relation with respect to the subset of the condition attributes is defined: Definition 3.1. Given ai ,aj ∈ A , ai positively dominates aj by degree h with h respect to the set of attributes R, denoted by ai D+R aj , if and only if ai Dqfk aj with f ≥ h, ∀qk ∈ R; ai negatively dominates aj by degree h with respect to h the set of attributes R, denoted by ai D−R aj , if and only if ai Dqfk aj with f ≤ h, ∀qk ∈ Q. The above defined dominance relations satisfy the following property: h k Property 3.1. If (ai , aj ) ∈ D+R , then (ai , aj ) ∈ D+S for each S ⊆ R, k ≤ h; h k If (ai , aj ) ∈ D−R , then (ai , aj ) ∈ D−S for each S ⊆ R, k ≥ h. In the suggested approach, we propose to approximate the global preference h h relation by D+R and D−R dominance relations. Therefore, P is seen as a rough binary relation. The lower approximation of preferences, denoted by R∗ (P ) and R∗ (N ), and the upper approximation of preferences, denoted by R∗ (P ) and R∗ (N ), are respectively defined as: h h R∗ (P ) = {(D+R ) ∩ U ⊆ P }, R∗ (P ) = {(D+R ) ∩ U ⊇ P }; R∗ (N ) =
h∈H
h∈H
h {(D−R ) ∩ U ⊆ N }, R∗ (N ) =
h∈H
h {(D−R ) ∩ U ⊇ N }.
h∈H
Let h + h h+ min (R) = min{h ∈ H : (D+R )∩U⊆P }, hmax (R) = max{h ∈ H : (D+R )∩U ⊇ P } h − h h− min (R) = min{h ∈ H : (D−R )∩U ⊇ N }, hmax (R) = max{h ∈ H : (D−R )∩U⊆N }
According to the property 3.1 and the definitions of the approximation of the preferences, the following conclusion can be obtained. Theorem 3.1. h+
(R)
h−
(R)
min R∗ (P ) = D+R
max R∗ (N ) = D−R
3
h+
max ) ∩ U, R∗ (P ) = D+R
h−
(R)
min ) ∩ U, R∗ (N ) = D−R
) ∩ U;
(R)
) ∩ U.
Decision Rules
The rough approximations of preferences can serve to induce a generalized description of alternative contained in the information table in term of ”if · · ·, then · · ·” decision rules. We will consider the following two kinds of decision rules: h h 1.If ai D+R aj , then ai P aj , denoted by ai D+R aj =⇒ ai P aj ; h h 2.If ai D−R aj , then ai N aj , denoted by ai D−R aj =⇒ ai N aj .
Rough Approximation of a Preference Relation
1245
h h Definition 4.1. If there is at least one pair (au , av ) ∈ D+R ∩ U [D−R ∩ U] h such that au P av [au N av ], and as P at [as N at ] holds for each pair (as , at ) ∈ D+R ∩ h h h U [D−R ∩ U ], then ai D+R aj =⇒ ai P aj [ai D−R aj =⇒ ai N aj ] is accept as a D++ decision rule [ D−− -decision rule]. h Definition 4.2. A D++ -decision rule ai D+R aj =⇒ ai P aj will be called minik mal if there is not any other rule ai D+S aj =⇒ ai P aj such that S ⊆ R, k ≤ h; h A D−− -decision rule ai D−R aj =⇒ ai N aj will be called minimal if there is not k any other rule ai D−S aj =⇒ ai N aj such that S ⊆ R, k ≥ h. The following theorem 4.1 expresses the relationships between decision rules and the approximations of preferences P and N . Both theorem 4.1 and theorem 4.2 can be useful for the induction of the decision rules. Theorem 4.1. h h (1)If ai D+R aj =⇒ ai P aj is a minimal D++ -decision rule, then R∗ (P ) = D+R ∩ U, h h (2)If ai D−R aj =⇒ ai N aj is a minimal D−− -decision rule, then R∗ (N ) = D−R ∩ U. Theorem 4.2. Assuming U = B × B with B ⊆ A , the following conclusions hold: h (1)If ai D+R aj =⇒ −h ai D−R aj =⇒ ai N aj h (2)If ai D+R aj =⇒ −1 ai D−R aj =⇒ ai N aj
4
ai P aj is a minimal D++ -decision rule and h > 0, then is a minimal D−− -decision rule; ai P aj is a minimal D++ -decision rule and h ≤ 1, then is a minimal D−− -decision rule.
Conclusion
In this paper a new rough set method for stochastic multi-attribute decision problems was presented. It is based on the idea of approximating a preference relation represented in a PCT by graded probabilistic dominance relations. This methodology supplies some very meaningful ”if..., then...” decision rules, which synthesize the preferential information given by the DM and can be suitably applied to obtain a recommendation for the choice or ranking problem. Further research will tend to refine this approach for application.
References 1. Pawlak, Z.: Rough Sets. Theoretical Aspects of Reasoning about Data. Kluwer Academic Publishers, Dordrecht(1991) 2. Greco, S., Matarazzo, B., Slowinski R.: Rough approximation of a preference relation by dominance relations. European Journal of Operational Research.117(1999)63-83 3. Zaras, K.: Rough approximation of a preference relation by a multi-attribute dominance for deterministic, stochastic and fuzzy decision problems. European Journal of Operational Research.159(2004)196-206
Incremental Target Recognition Algorithm Based on Improved Discernibility Matrix Liu Yong, Xu Congfu, Yan Zhiyong, and Pan Yunhe College of Computer Science, Zhejiang University, Hangzhou 310027, China
Abstract. An incremental target recognition algorithm based on improved discernibility matrix in rough set theory is presented. Some comparable experiments have been completed in our ”Information Fusion System for Communication Interception Information (IFS/CI2 )”. The results of experimentation illuminate that the new algorithm is more efficient than the previous algorithm.
1
Introduction
It is difficult to recognize kinematic targets, such as communication broadcast station and its embarked platform, accurately and efficiently, in communication intercept. Traditional targets recognition approach is implement by problalities[1]. As it is a mathematical method, it achieves results by complex calculate. In this paper, we provide a new approach for the targets recognition implemented by the data mining method. In our Information Fusion System for Communication Interception Information (IFS/CI2 )[2], the system obtains the identity, attributes, location and so on, analyzes communication parameter, characteristic of communication, neighbor information, manual information etc. together with others, and then evaluates situation of battlefield and minatory degree. So the targets recognition can be regard as a data mining process, mining the targets decision rules from the previous raw data attributes, such as the location of targets, frequency of the broadcast in targets etc., then deduce the new coming data items’ type of targets by the previous mining results, normally the results are described as decision rules. IFS/CI2 adopts a hierarchical fusion model, which is divided into two parts: the junior associate module and the senior fusion module. The former gets the type of corresponding station, number, platform, and network station and so on, by associating operation of time and location on communication interception data; the latter gets network station attributes, identity of station, types and deploy of arms and so on, and then evaluates situation of battlefield and minatory degree. Since Rough Set theory [3] is effective in data classification, it works as an important tool in the junior associate module of IFS/CI2 . The literature [4] discusses the kinematic target recognition and tracking algorithm based on Rough Set Theory briefly, and obtains satisfied recognition ratio of L. Wang and Y. Jin (Eds.): FSKD 2005, LNAI 3613, pp. 1246–1255, 2005. c Springer-Verlag Berlin Heidelberg 2005
Incremental Target Recognition Algorithm
1247
kinematic targets. However, the result of simulation shows that the efficiency of targets recognition algorithm in [4] is not so ideal, and the essential reason is that the non-incremental algorithm of literature [4] has the following shortages: (1)Complexity of the non-incremental algorithm is large. Because the communication interception information is incremental, and this means that we cannot obtain all the data at once, the data are produced in batches. When the new batch of data coming, the non-incremental algorithm in [4] has to re-calculate all the data, which includes the historical data and the new one. So the temporal wastage in the computation is huge. (2) Non-incremental algorithm can’t make full use of the existed classification results and the rules. There is no need to compute from scratch when the new batch of data comes. In fact, the new data only affect few classification results and rules, which need to be modified. Therefore, this paper utilizes the improved discernibility matrix [5] to perfect the incremental targets recognition algorithm, which lays a strong emphasis on improving target recognition efficiency, as well as keeping recognition ratios. Furthermore, this paper introduces confidence factor to solve the problem of quantitative computation of rules, that IFS/CI2 adopts many subjectively experiential rules, and it is lack of objective criterion to verify the validity and veracity. The algorithm presented in this paper can process the inconsistent data[6] efficiently. It can obtain both certain rules, also named consistent rules, and uncertain rules, also named inconsistent rules.
2
Incremental Algorithm for Target Recognition
This section will focus on the incremental platform recognition algorithm in IFS/CI2 . Sine the principle and process of incremental station recognition algorithm is similar to the platform recognition algorithm, the former will not be discussed. Other algorithm in IFS/CI2 , such as data condense algorithm, target network recognition algorithm, numerical scatter algorithm have been discussed in [2,8]. Before presenting the incremental algorithm, the confidence factor of rule and category of incremental data will be first introduced. 2.1
Confidence Factor and Support Factor of Rule
Since the algorithm presented in this paper will extract both consistent and inconsistent rules, we must first introduce the confidence factor of rules, especially to deal with inconsistent rules. Usually, consistent rules can be denoted by: < Condition 1|Condition 2| ... |Condition n → Conclusion > For inconsistent rules, there may be different conclusions, although the condition attributes and the condition attribute combination are the same, it is
1248
L. Yong et al.
necessary to introduce a confidence factor α(0 < α ≤ 1) to identify all the inconsistent rules. The α means the probability of the rule on the same condition attributes. And the denotation is: < Condition 1|Condition 2| ... |Condition n → Conclusion, α > Evidently, consistent rules’ confidence factor α ≡ 1, while the confidence factor of inconsistent rule will be adjusted during the executing of algorithm. For example, the inconsistent rules, which may appear in communication antagonizing, are: < there are 4 or more stations in the platform on the ground→the platform is headquarter, 0.95> The computation basis of confidence factor is Rough Operator [8,9] in rough set theory. Rough Operator is the conditional probability of decision ψ, given transcendent probability of condition ϕ. And it is can be defined as: p(ψ) = Σ(p(ϕ) ∗ u(ϕ, ψ)) = Σp(ϕ ∧ ψ)
(1)
If the data to be recognized in the database matches multiple platform type recognition rules, it is necessary to improve the former formula of Rough Operator. And the improved one is: p (ψ) = Σ(α ∗ (p(ϕ) ∗ u(ϕ, ψ))) = Σ(α ∗ p(ϕ ∧ ψ))
(2)
Here, α is the confidence factor corresponding to the matched rule. We compute the confidence factors of all matched rules respectively. The larger the Rough Operator is, the higher the matching ratio of the rule. And we choose the rule with largest Rough Operator to judge the platform type. Normally, there are some noise and distortion in those data from sensors. And this will cause wrong rules based on those distorted data. In our algorithm, we introduce a measure, which is similar to the association rule algorithm[10], named support factor of rules, µ, to filter the noise rules. 2.2
Definition of Improved Discernibility Matrix and Classes of Incremental Data
In the incremental station platform recognition algorithm, we adopt the improved discernibility matrix[5], and its definition is as follows: Definition 1 Improved Discernibility Matrix: in the information system S = (U, A), where U denotes domain, and A denotes attribute set composed of
Incremental Target Recognition Algorithm
1249
condition and decision attributes, suppose B ⊆ A, Ei , Ej ∈ U/IN D(B),i, j = 1, 2, ..., n = |U/IN D(B)|;Xk ∈ U/IN D(D), where D is the decision attribute, j = 1, 2, ..., m, m = |U/IN D(D)|. The improved discernibility matrix is an n × n phalanx MS (B) = MS (i, j)n×n , 1 ≤ i, j ≤ n = |U/IN D(B)|. The unit of the phalanx MS (i, j) is defined as follows: If Ei ⊆ Xk , Ej ⊆ Xk , and i = j, then MS (i, j) = N U LL; else MS (i, j) = {a ∈ B : a(Ei ) = a(Ej )}, i, j = 1, 2, ..., n Where, the N U LL means that the difference of the two corresponding items is neglectable. U/IN D(B) and U/IN D(D) denote the classification of domain on condition attribute B and decision attribute D. From the above definition, it is obvious that the improved discernibility matrix must be a symmetry matrix, and the N U LL values in this matrix can be ignored when calculating the discernibility functions and the comparative discernibility functions, so the computational complexity will be decreased greatly. Furthermore, incremental platform recognition depends on classification of incremental data. It classifies the new data according to the consistency of corresponding condition and decision attributes of new data and existed rules. Consider information system S = (U, A), and suppose M is the rule set, there is a rule φi → ϕi , where i is an element in U , φi is the antecedent, and ϕi is the consequent. In this new category system, there exist four possible conditions when a new item of data is added to the information system S. They are defined respectively as follows: Definition 2. CS category the new added datum x belongs to CS category, if and only if ∃(φ → ϕ) ∈ M, φx → φ and ϕx = ϕ. Definition 3. CN category the new added datum x belongs to CN category, if and only if ∀(φ → ϕ) ∈ M, ϕx = ϕ. Definition 4. CC category the new added datum x belongs to CC category, if and only if x does not belong to CN category, and y ∈ U satisfies φx ≡ φy and ϕx = ϕy . Definition 5. P C category the new added datum x belongs to P C category, if and only if x does not belong to CN category, and y ∈ U satisfies φx = φy . 2.3
Incremental Platform Recognition Algorithm
The raw data is incremental, and the previous algorithm [4] must compute all the data from scratch, it is difficult to deal with huge data in real-time. So the platform recognition algorithm that can process incremental data in real-time is required. The incremental platform recognition algorithm gives each platform recognition rule a confidence factor, if there are several rules matching the new item data, we compute the improved rough operator mentioned previously by formula (2), then select the one with largest rough operator to recognize, and at the
1250
L. Yong et al.
same time, the improved discernibility matrix is employed to extract recognition rules contained in the data. When there is new data imported, the forenamed classification definition of incremental data is used to judge and process. The incremental platform recognition algorithm is constructed by three subalgorithm, main recognition algorithm, original data rule-extracting algorithm and incremental data rule-extracting algorithm. The original data rule-extracting algorithm is used to deal with the static data in the beginning of the recognition which is same to our previous approach[4]. And the incremental data ruleextracting algorithm processes the incremental data by their categories that is defined in section 2.2. The recognition algorithm executes after the data condense algorithm, which combines multiple items of data into one item. In data condense algorithm, all the data items describing the same targets at one times-tamp are united into one item, and the unused attributes of raw data are removed from the database. After attributes reduction, there will usual be some conflicts, which is also called inconsistent condition. In this condition, there are more than one items whose condition attributes are all uniform, while they decision attributes are different in the database after data condensing. This may cause by the attributes removed in the condense algorithm. The recognition rules generated by those inconsistent data will be incompatible. In our recognition approach, the condition equivalence class are calculated by Ei ∈ U/IN D(C ∪ D), including the decision attributes into the equivalence categories generating. The conflict items will be classified into different equivalence class, they will produce different rules, and we can choose the rule who has the largest confidence factor.
3
Comparison of Practice Experiment
In IFS/CI2, the processed data arrive batch by batch, it can be regard as a typical incremental data sequence. In the information process system, it is quite important to ensure the system to proceed in real-time and efficiently. The most important criterion to describe the efficiency of this sort of systems is response time, which is the consumed time from the initial data to completing processing of latest data. We design and implement an algorithm comparison experiment based on respond time. In this section, we discuss the practice comparison experiment between incremental algorithm and non-incremental algorithm emphatically on respond time. The hardware of the experiment is PC with Pentium IV 866MHZ CPU, 256M ROM and 40G hard disk, whose operating system is Windows 2000 Server. The comparison experiment results are shown in Figure 1 and Figure 2. In those figures, the time axises are all the response time of fusion center. From the above comparison experiments results, it can be inferred that there is no notable difference between two algorithms when the size of data is small, however, when the size becomes large, incremental platform recognition algorithm is more effective than no-incremental algorithm for it need not process platform data in the database from the beginning.
Incremental Target Recognition Algorithm
1251
Algorithm 1. main recognition algorithm Data: α0 , µ0 , Preliminary parameter database P (C, D), C is the attributes set of parameters, D is the possible decision. Result: The recognition rules set whose support factor larger than µ0 and confidence factor larger than α0 and the recognition result for each data-item. begin Step 1. Data pretreatment, dividing preliminary database P (C, D) into a number of object equivalent class by condition attribute set C: Ei ∈ U/IN D(C ∪ D), i = 1, 2, ..., |U/IN D(C ∪ D)|. divide P (C, D) into a number of decision equivalent classes on decision attribute set D: Xj ∈ U/IN D(D), j = 1, 2, ..., |U/IN D(D)| while The Preliminary DB not NULL do Step 2. Data recognition, if preliminary data item is non-decision then As to data in station database without decision, match the existed rules in rule database to judge the type of the platform, which it belongs to. If there are several rules matching the data, compute the Rough Operator of every rules by formula (2), choose the rule with the largest rough operator to match, and then get the type of platform. Step 3. Rule extracting, if preliminary data is with decision then if data item is current original one(opposite to incremental data) then Call Algorithm2(Original Data Rule-extracting Algorithm) if data item is incremental one then Call Algorithm3(Incremental Data Rule-extracting Algorithm) end
1252
L. Yong et al.
Algorithm 2. Original data Rule-extracting Algorithm Data: α0 , µ0 , Data item in Preliminary parameter database P (C, D). Result: The recognition rules. begin Step 1. Compute improved discernibility matrix MS (i, j), If Ei ⊆ Xk , Ej ⊆ Xk , and i = j, then MS (i, j) = N U LL; else MS (i, j) = {a ∈ C : a(Ei ) = a(Ej )}, i, j = 1, 2, ..., n Step 2. Compute relative discernibility function f (Ei ) of each equivalent class Ei , Step 3. Extract rules from f (Ei ), if Ei ⊆ Xj then Get the consistent rule, α = 1 if Ei ⊂Xj and Ei ∩ Xj = 0 then Get the inconsistent rule, and its confidence is: |Ei ∩ Xj | |Ei | And the support factor can be calculated as: α=
µ=
|Ei ∩ Xj | |U |
Step 4. Put the rule whose support factor larger than µ0 and confidence factor larger than α0 into the rule database of the recognition algorithm. end
Fig. 1. Response Time of Non-incremental Approach in Platform Data set of IFS/CI2
Incremental Target Recognition Algorithm
1253
Algorithm 3. Incremental Data Rule-extracting Algorithm Data: α0 , µ0 , Incremental data item R in Preliminary parameter database P (C, D). Result: The recognition rules. begin if incremental data R belongs to CS category or CC category then Suppose R ∈ Ei (object equivalent class), consider Ei , and there are the following two situations: (1) When Ei ⊂ Xj (decision equivalent class) and Ei ∩ Xj = 0, the confidence of Des(Ei , C) → Des(Xj , D) is changed to: α=
|Ei ∩ Xj | + 1 |Ei | + 1
the support factor is changed to: µ=
|Ei ∩ Xj | + 1 |U | + 1
as to other Xk , which has the property k = j and Ei Xk = 0, change the confidence of the rule Des(Ei , C) → Des(Xj , D) to: α=
|Ei ∩ Xj | |Ei | + 1
the support factor is changed to: µ=
|Ei ∩ Xj | |U | + 1
(2) When Ei ⊆ Xk , the rule database is not changed. if incremental data R belongs to CN category or PC category then Add a new line below the last line of improved discernibility matrix MS (n, n), and add a new column behind the last column, then get a new improved discernibility matrix MS (n + 1, n + 1). Compute the new discernibility and new relative discernibility functions according to the new matrix, which are used to extract rules. Put the rule whose support factor larger than µ0 and confidence factor larger than α0 into the rule database of the recognition algorithm. end
4 Conclusion
The incremental recognition algorithm has more advantages than the non-incremental one [4]:
– It processes data incrementally, so the recognition time decreases effectively when dealing with incremental data.
– It can filter noise and distorted data by introducing the support factor of rules.
Fig. 2. Response Time of Incremental Approach in Platform Data set of IFS/CI2
– It can deal with the inconsistent conditions, which often occur after data condensing, using the confidence factor of rules.
The experimental results show that the incremental recognition algorithm is more effective than the previous one, while keeping the same recognition ratio.
Acknowledgements. This paper is sponsored by the National Science Foundation of China (No. 60402010) and the Zhejiang Province Science Foundation (No. M603169), by the Advanced Research Project of the China Defense Ministry (No. 413150804), and partially supported by the Aerospace Research Foundation (No. 2003-HT-ZJDX-13).
References
1. Bar-Shalom, Y.: Tracking Methods in a Multitarget Environment. IEEE Transactions on Automatic Control. 4 (1978) 618–626
2. Xu Congfu, Pan Yunhe: IFS/CI2: An Intelligent Fusion System of Communication Interception Information. Chinese Journal of Electronics and Information Technology. 10 (2002) 1358–1365
3. Pawlak, Z.: Rough Sets. International Journal of Computer and Information Science. 5 (1982) 341–356
4. Liu Yong, Xu Congfu, Pan Yunhe: A New Approach for Data Fusion: Implement Rough Set Theory in Dynamic Objects Distinguishing and Tracing. IEEE International Conference on Systems, Man & Cybernetics, Hague (2004) 3318–3323
5. Skowron, A., Rauszer, C.: The Discernibility Matrices and Functions in Information Systems. Fundamenta Informaticae. 2 (1991) 331–362
6. Shan, N., Ziarko, W.: An Incremental Learning Algorithm for Constructing Decision Rules. In: Rough Sets, Fuzzy Sets and Knowledge Discovery, Springer-Verlag (1994) 326–334
7. Liu Yong, Xu Congfu, Li Xuelan, Pan Yunhe: A Dynamic Incremental Rule Extracting Algorithm Based on the Improved Discernibility Matrix. The 2003 IEEE International Conference on Information Reuse and Integration, USA (2003) 93–97
8. Liu Qing, Huang Zhaohua, Yao Liwen: Rough Set Theory: Present State and Prospects. Chinese Journal of Computer Science. 4 (1997) 1–5
9. Liu Qing, Huang Zhaohua, Liu Shaohui, Yao Liwen: Decision Rules with Rough Operator and Soft Computing of Data Mining. Chinese Journal of Computer Research and Development. 7 (1999) 800–804
10. Agrawal, R., Srikant, R.: Fast Algorithms for Mining Association Rules in Large Databases. In: VLDB'94, Morgan Kaufmann (1994) 487–499
Problems Relating to the Phonetic Encoding of Words in the Creation of a Phonetic Spelling Recognition Program Michael Higgins and Wang Shudong Department of Kansei Design Engineering, Faculty of Engineering and Technology, Yamaguchi University 2-16-1 Tokiwadai, Ube City, Yamaguchi Prefecture, Japan {higginsm, peterwsd}@yamaguchi-u.ac.jp
Abstract. A relatively new area of research is centering on the phonetic encoding of information. This paper deals with the possible computer applications of the Sound Approach© English phonetic alphabet. The authors review some preliminary research into a few of the more promising approaches to applying the processes of machine learning to this phonetic alphabet for computer spell-checking, computer speech recognition, etc. Applying mathematical approaches to the development of a data-based phonetic spelling recognizer, and speech recognition technology used for language pronunciation training in which the speech recognizer allows a large margin of pronunciation accuracy, the authors delineate the parameters of the current research and point out the direction of both the continuation of the current project and future studies.
1 Introduction

In 1993-1994, Dr. Michael Higgins of Yamaguchi University, Japan developed and did initial testing on a new system of phonetic spelling of the sounds in English as an aid to learning better English pronunciation and improving listening and spelling skills in English for Japanese students of English. The method, subsequently entitled “A Sound Approach”, has been proven to be a very effective English phonetic system [1]. The Sound Approach (SA) alphabet represents without ambiguity all sounds appearing in the pronunciation of English language words, and does so without using any special or unusual symbols or diacritical marks; SA only uses normal English letters that can be found on any keyboard but arranges them so that consistent combinations of letters always represent the same sound, for example, for the word, “this”, instead of using IPA (International Phonetic Alphabet) symbols of /ðis/, SA uses /dhis/ to express the pronunciation. Consequently, any spoken word can be uniquely expressed as a sequence of SA alphabet symbols, and pronounced properly when being read by a reader knowing the SA alphabet. For instance, the sentence “One of the biggest problems with English is the lack of consistency in spelling,” is written in SA (showing word stress) as: “Wun uv dhu BI-gust PRAA-blumz widh EN-glish iz dhu lak uv kun-SIS-tun-see in SPEL-ing.”
2 Project Development

Due to representational ambiguity and the insufficiency of English language characters to adequately and efficiently portray their sounds phonetically, the relationship between a word expressed in the SA alphabet and its possible spellings is one to many. That is, each SA sequence of characters can be associated with a number of possible, homophonic sequences of English language characters (e.g. “tuu” is equivalent to “to”, “too”, and “two”). However, within a sentence usually only one spelling for a spoken word is possible. The major challenge in this context is the recognition of the proper spelling of a homophone/homonym given in the SA language. In addition to the obvious speech recognition OS that would eventually follow, automated recognition of the spelling has the potential for the development of SA-based phonetic text editors which would not require the user to know the spelling rules of the language, but only to be able to pronounce a word within a relatively generous margin of error and to express it in the simple phonetic SA-based form. Speech recognition parameters could also be adjusted to accommodate the wider margin of error inherent in SA, which is based on International Broadcast Standard English. As SA is phonetic (i.e., a clear one-to-one correlation of sound to spelling), the words themselves could be ‘regionally tuned’ so that one could effectively select the regional accent that they are accustomed to, much in the same way that we currently select the keyboard by language groupings. In other words, someone from Australia could select Australia as their text editor setting and phonetically spell the word ‘day’ as ‘dai’ without contextual ambiguity. (See Fig. 1)
I’d like to go to the hospital again today.
1) Ai’d laik tuu go tuu dhu HAAS-pi-tul u-GIN tuu-DEI. (International Standard Broadcast English)
2) Aa’d lak tu go tu dhu HAAS-pi-tul u-GEE-un tu-DEI. (Generic Southern US English)
3) Oi’d laik tuu go tuu dhu HOS-pi-tul u-GEIN tuu-DAI. (Australian English)
Fig. 1. Regionally tuned sample sentences
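The one-to-many relation between SA codings and standard spellings can be pictured with a toy lookup table (purely illustrative; only the “tuu” entry is taken from the text, the rest is hypothetical):

SA_TO_SPELLINGS = {
    'tuu': ['to', 'too', 'two'],
}

def spelling_candidates(sa_word):
    # within a sentence, context information (e.g. the grammatical protocol
    # of the Appendix) is needed to choose among the candidates
    return SA_TO_SPELLINGS.get(sa_word, [sa_word])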
The approach adopted in this project involves the application of rough sets [2] in the development of a data-based word spelling recognizer. In this part, the techniques of rough sets, supported by rough-set based analytical software such as KDD-R [3], would be used in the analysis of the classificatory adequacy of the decision tables, their minimization, and the extraction of classification (decision) rules to be used in the spelling recognition. While the initial identification and minimization of the required number of information inputs in such decision tables would be one of the more labor-intensive aspects of the project, it should be emphasized that the latter stages of the process of minimization and rule extraction would be automated to a large degree, and adaptive in the sense that the inclusion of new spoken word-context combinations would result in the regeneration of the classification rules without human intervention. In this sense the system would have some automated learning ability, allowing for continuous expansion as more and more experience is accumulated during use [4]. The adaptive pattern classification part of the system development is absolutely key to the successful deployment of the system. This aspect of the system, however, only becomes
important when some of the more essential problems are solved. Given that, let us briefly outline how we plan to use rough sets in our approach. In the word spelling recognition problem, in addition to the problem of homophones, one of the difficulties is the fact that many spoken words given in SA form correspond to a number of English language words given in the standard alphabet. To resolve, or to reduce, this ambiguity, the context information must be taken into account. That is, the recognition procedure should involve words possibly appearing before, and almost certainly after, the word to be translated into Standard English orthography. In the rough-set approach this will require the construction of a decision table for each word. In the decision table, the possible information inputs would include the context words surrounding the given word and other information such as the position of the word in the sentence, and so on. (See Appendix) Several extensions of the original rough set theory have been proposed recently to better handle probabilistic information occurring in empirical data, in particular the Variable Precision Rough Sets (VPRS) model [4], which serves as the basis of the software system KDD-R [3] to be used in this project. In the preliminary testing of SA completed in 1997, a selection of homonyms was put into representative sentences. The words in the sentences were assigned numbers (features) according to a simple, and relatively unrefined, grammatical protocol. These numbers were then inserted into decision tables, and using KDD-R it was found that the computer could accurately choose the correct spelling of non-dependent homonyms (i.e., those homonyms for which the simple grammatical protocol was unable to determine the correct spelling from the context) 83.3% of the time, as in the sentence, “The ayes/eyes have it.” With dependent homonyms, as in the sentence, “We ate eight meals,” the computer could accurately choose the correct spelling more than 98% of the time [4]. Besides the above usages of SA, we are also testing HMM/GMM modules of speech recognizers based on SA for an online English pronunciation training system, currently just for Japanese learners of English. As an SA-encoded speech recognizer allows a larger scope of parameter baselines than any currently existing speech recognizer, it would more accurately catch the real errors of English pronunciation. For example, many acceptable /f/ pronunciations in Japanese English words are judged as wrong, or cannot be recognized at all, by current English pronunciation training systems using ASR technology [5]. The “regionally tuned” feature of SA can be effectively used for foreign language pronunciation training, too. Usually a non-native speaker’s English pronunciation is a mixture of accents of American English, British English, Australian English, etc., and is of course colored by his/her own mother tongue. In this case, if ASR modules are built based on SA, then users’ input sounds will be encoded to SA with a large margin of speech signal parameters. Since spoken words are then re-coded from SA codes to regular text on the basis of the one-to-one correspondence of SA to regular words, the speech recognition rates will be greatly improved accordingly, and the ASR software training time will be much shorter. Therefore, the speech of foreign language learners who read continuous sentences into a computer will be more easily recognized and translated into correct texts (STT).
Also the English learner does not need to worry about what kind of English he/she has to speak.
3 Current Challenges

The adaptive pattern classification part of the system development presents one of the largest difficulties. As alluded to above, a major difficulty with the suggested approach is that re-coding every word in the English language into the Sound Approach coding is impractical, simply due to the size of the problem, and due to the fact that it is difficult, at this time, to accommodate any practical public machine learning whereby users could add new words to the Sound Approach© coded dictionary, which is the way normal spell-check systems overcome the problem of an initially small dictionary size. To ease this problem, an already phonetically coded dictionary using the IPA symbols as pronunciation guides has been temporarily adopted, and an interface to link the dictionary (IPA) coding with the Sound Approach coding is being developed. This obviously will save a lot of time, as so many words have already been encoded. However, the problem is that such dictionaries do not contain the phonetic coding of all the inflections of a word, e.g. go = goes = went, or play = playing = played = plays. Therefore, we are still faced with the problem of having to encode many words by hand before the system can be used as a practical phonetic spell checker. This problem must be solved before we can seriously consider other issues such as how to process words ‘in context’ and so on.
4 Prospects and Conclusion

If these particular problems can be adequately addressed in the coming year, the authors see no major difficulty in completing the classification of the homonyms into decision tables and, using that as a base, developing a protocol for converting words written in Standard English or IPA symbols into SA characters for ease of encoding. In this way, the interactive database of SA to Standard or Standard to SA could be completed in a relatively short amount of time. This would then make completing the spelling recognition project possible with a more complete regional tunability and, in turn, pave the way to complete voice recognition capability.
References
1. Higgins, M.L., with Higgins, M.L. and Shima, Y.: Basic Training in Pronunciation and Phonics: A Sound Approach. The Language Teacher, vol. 19, no. 4 (1995) 4-8
2. Ziarko, W.: Rough Sets, Fuzzy Sets and Knowledge Discovery. Springer-Verlag (1994)
3. Ziarko, W., Shan, N.: KDD-R: A Comprehensive System for Knowledge Discovery Using Rough Sets. Proceedings of the International Workshop on Rough Sets and Soft Computing, San Jose (1994) 164-173
4. Higgins, M.L., with Ziarko, W.: Computerized Spelling Recognition of Words Expressed in the Sound Approach. New Directions in Rough Sets, Data Mining, and Granular-Soft Computing: Proceedings, 7th International Workshop, RSFDGrC '99. Lecture Notes in Artificial Intelligence 1711, Springer, Tokyo (1999) 543-550
5. Kawai, G., Hirose, K.: A CALL System for Teaching the Duration and Phone Quality of Japanese Tokushuhaku. Proceedings of the Joint Conference of the ICA (International Conference on Acoustics) and ASA (Acoustical Society of America) (1998) 2981-2984
Appendix

Values of the observations (Grammatical Protocol): 0: none; 1: verb; 2: noun/pronoun; 3: adjective; 4: adverb; 5: article; 6: connective; 7: number; 8: possessive; a: let, please, etc.; b: will, shall, can (modals), etc.; c: prepositions

Fig. 2. Values

Fig. 3. Sample Table 1b — Reduct: “ai”

Table 1. Sample Table 1a: “ai” (IPA symbol: aI); (Sound Spelling: ai). Each head word (1–13) corresponds to a sample sentence (15–27); the rows −5 to −1 hold the protocol values of the words preceding the head word, followed by the spelling feature and the resulting spelling.

Head word:        1   2   3   4    5   6   7    8    9    10  11   12  13
Sentence number:  15  16  17  18   19  20  21   22   23   24  25   26  27
−5:
−4:               2   7   0   0    0   0   0    0    0    1   0    0   0
−3:               b   2   0   0    0   2   0    0    0    1   0    0   0
−2:               1   c   0   0    2   1   0    0    2    c   0    0   0
−1:               2   2   0   0    1   2   0    0    1    1   0    5   0
Spelling feature: c   1   0   5    8   5   8    5    1    5   2    2   0
Spelling:         aye aye aye ayes eye eye eyes eyes eyes eye eyed i   I

Sample sentences: 15. “I’ll love you for aye.” 16. “All those in favor say, ‘aye’.” 17. “Aye, Captain.” 18. “The ‘ayes’ have it.” 19. “He injured his eye at work.” 20. “He gave me the eye.” 21. “Her eyes are blue.” 22. “The eyes have it.” 23. “She’s making eyes at me.” 24. “I’m going to keep an eye on you.” 25. “He eyed the situation carefully before he went in.” 26. “The letter i comes after the letter h and before j.” 27. “I want to go out tonight.”
Diversity Measure for Multiple Classifier Systems Qinghua Hu and Daren Yu Harbin Institute of Technology, Harbin, China
[email protected] Abstract. Multiple classifier systems have become a popular classification paradigm for strong generalization performance. Diversity measures play an important role in constructing and explaining multiple classifier systems. A diversity measure based on relation entropy is proposed in this paper. The entropy will increase with diversity in ensembles. We introduce a technique to build rough decision forests, which selectively combine some decision trees trained with multiple reducts of the original data based on the simple genetic algorithm. Experiments show that selective multiple classifier systems with genetic algorithms get greater entropy than those of the top-classifier systems. Accordingly, good performance is consistently derived from the GA based multiple classifier systems although accuracies of individuals are weak relative to top-classifier systems, which shows the proposed relation entropy is a consistent diversity measure for multiple classifier systems.
1 Introduction

In the last decade, multiple classifier systems (MCS) have become a popular technique for building a pattern recognition machine [4, 5]. Such a system constructs several distinct classifiers and then combines their predictions. It has been observed that objects misclassified by one classifier are not necessarily misclassified by another, which suggests that different classifiers potentially offer complementary information. This paradigm goes by several names from different viewpoints, such as neural network ensemble, committee machine, and decision forest. In order to construct a multiple classifier system, several techniques have been exploited. The most widely used one is resampling, which selects a subset of the training data with different algorithms. Resampling techniques can be roughly grouped into two classes: the first generates a series of training sets from the original training set and then trains a classifier with each subset; the second uses different feature sets in training the classifiers. The random subspace method and feature selection have been reported in the literature [2, 4]. The performance of a multiple classifier system not only depends on the power of the individual classifiers in the system, but is also influenced by the independence between the individuals [5, 6]. Diversity plays an important role in combining multiple classifiers; it guides MCS users in designing a good ensemble and explains the success of ensemble systems. Diversity may be interpreted differently from various angles, such as independence, orthogonality or complementarity [7, 8]. Kuncheva pointed out that diversity is generally beneficial but it is not a substitute for accuracy [6].
As the existing measures are pair-wise and cannot reflect the whole diversity in an MCS, a novel diversity measure for the whole system, called relation entropy, is presented in this paper; it is built on the pair-wise measures.
2 Relation Entropy

Here we first introduce two classical pairwise diversity measures, the Q-statistic and the correlation coefficient. Given a multiple classifier system with n individual classifiers {C1, C2, ..., Cn}, the joint output of two classifiers Ci and Cj, 1 ≤ i, j ≤ n, can be represented in a 2 × 2 table as shown in Table 1.

Table 1. The relation table of classifiers Ci and Cj

                 Cj correct (1)   Cj wrong (0)
Ci correct (1)   N11              N10
Ci wrong (0)     N01              N00

Yule introduced the Q-statistic for two classifiers, defined as

  Qij = (N11 N00 − N10 N01) / (N11 N00 + N10 N01)

The correlation coefficient ρij is defined as

  ρij = (N11 N00 − N10 N01) / √((N11 + N10)(N01 + N00)(N11 + N01)(N10 + N00))

Computing the Q-statistic or the correlation coefficient of each pair of the n classifiers produces a matrix M = (rij)n×n, where rii = 1, rij = rji and |rij| ≤ 1. The matrix M is therefore a fuzzy similarity relation matrix: the greater the value |rij| (i ≠ j), the stronger the relation between Ci and Cj, and hence the weaker the independence between the two classifiers. The matrix M thus surveys the total relation among the classifiers in the MCS.

Given a set of classifiers C = {C1, C2, ..., Cn}, let R be a fuzzy relation on C, denoted by a relation matrix (Rij)n×n, where Rij is the relation degree between Ci and Cj with respect to R. The larger Rij is, the stronger the relation between Ci and Cj. For the correlation coefficient, Rij denotes the degree of correlation between Ci and Cj; if Rij > Rik, we say that Ci and Cj are more indiscernible than Ci and Ck.
Definition 1. Let R be a fuzzy relation over a set C and wi the weight of Ci in the ensemble system, with 0 ≤ wi ≤ 1 and Σi wi = 1. For every Ci ∈ C we define the expected relation degree of Ci to all Cj ∈ C with respect to R as

  π(Ci) = Σj=1..n wj · rij
Definition 2. The information quantity of the relation degree of Ci is defined as

  I(Ci) = − log2 π(Ci)
It is easy to show that the larger π(Ci) is, the more strongly Ci is related to the other classifiers in the ensemble system, and the smaller I(Ci) is; thus the measure I(Ci) describes the relation degree of Ci to all classifiers in the system C with respect to the relation R.

Definition 3. Given any relation R between the individuals in a multiple classifier system C and a weight factor series of C, the relation entropy of the pair is defined as

  Hw(R) = ΣCi∈C wi · I(Ci) = − ΣCi∈C wi log2 π(Ci)
The relation entropy gives the total diversity of a multiple classifier system if the relation used represents the similarity of the outputs of the individual classifiers. This measure not only takes the relations between classifiers into account, but also incorporates the weight factors of the individual classifiers in the ensemble. The proposed entropy can be applied to a number of pairwise similarity measures for multiple classifier systems, such as the Q-statistic, the correlation coefficient, and so on.
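A compact sketch of the computation (our notation; it assumes strictly positive expected relation degrees so that the logarithm is defined):

import numpy as np

def relation_entropy(R, w):
    # R: n-by-n pairwise relation matrix, e.g. |Q_ij| or |rho_ij|, with r_ii = 1
    # w: classifier weights, 0 <= w_i <= 1 and sum(w) = 1
    R = np.abs(np.asarray(R, dtype=float))
    w = np.asarray(w, dtype=float)
    pi = R @ w                       # pi(C_i) = sum_j w_j * r_ij
    info = -np.log2(pi)              # I(C_i) = -log2 pi(C_i)
    return float(np.sum(w * info))   # H_w(R) = sum_i w_i * I(C_i)

With equal weights wi = 1/n, a system of nearly independent classifiers (small off-diagonal |rij|) yields a larger entropy than a system of strongly correlated ones, matching the intended reading of the measure.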
3 Experiments

Searching for the optimal ensemble of a multiple classifier system involves combinatorial optimization, and genetic algorithms perform well on this kind of problem. Some experiments were conducted with UCI data. The numbers of reducts range between 5 and 229. All the trees are trained with the CART algorithm; two-thirds of the samples in each class are selected as the training set, and the others form the test set. Here, for simplicity, 20 reducts are randomly extracted from the reduct sets of all data sets if there are more than 20 reducts, and the subsequent experiments are conducted on these 20 reducts. The accuracies of the different decision forests are shown in Table 2, where GAS denotes the forests based on the genetic algorithm and TOP denotes the forests built from the best trees. We find that GAS ensembles obtain a consistent improvement over the systems combining the best classifiers for all data sets. The entropies of the Q-statistic and the correlation coefficient for the two kinds of ensembles are shown in Table 3. As the entropies represent the total diversity in the systems, we find that the GA-based ensembles consistently capture more diversity than the top-classifier ensembles.
Table 2. Comparison of decision forests
Data    GAS size  GAS accuracy  TOP size  TOP accuracy
BCW     10        0.9766        10        0.92642
Heart   6         0.8857        6         0.85714
Ionos   8         0.9901        8         0.94059
WDBC    7         0.9704        7         0.94675
Wine    7         1.00          7         0.97917
WPBC    9         0.75          9         0.70588
Table 3. Relation entropy of multiple classifier systems
Data    Q-statistic TOP  Q-statistic GAS  Correlation TOP  Correlation GAS
BCW     0.1385           0.2252           0.6348           0.7719
Heart   0.0031           0.4011           0.0698           0.9319
IONOS   0.0978           0.2313           0.7689           1.2639
WDBC    0.1780           0.2340           1.0296           1.2231
Wine    0                0.0593           0.8740           1.3399
WPBC    1.0541           1.1466           1.7425           1.8319
4 Conclusion

Diversity in multiple classifier systems plays an important role in improving classification accuracy and robustness, as the performance of an ensemble not only depends on the power of the individuals in the system but is also influenced by the independence between them. Diversity measures can guide users in selecting classifiers and explain the success of a multiple classifier system. A total diversity measure for multiple classifier systems is proposed in this paper: it computes the information entropy represented by a relation matrix. If the Q-statistic or the correlation coefficient is employed, the information quantity reflects the diversity of the individuals. We compare two kinds of rough decision forest based multiple classifier systems with 9 UCI data sets. GA-based selective ensembles achieve a consistent improvement over the ensembles with the best classifiers for all tasks. Correspondingly, we find that the diversity of GAS measured with the proposed entropy, based on either the Q-statistic or the correlation coefficient, is consistently greater than that of the top-classifier ensembles, which shows that the proposed entropy can be used to explain the advantage of GA-based ensembles.
References 1. Ghosh J.: Multiclassifier systems: Back to the future. Multiple classifier systems. Lecture notes in computer science, Vol.2364. Springer-Verlag, Berlin Heidelberg (2002) 1-15 2. Zhou Z., Wu J., Tang W.: Ensembling neural networks: Many could be better than all. Artificial intelligence 137 (2002) 239-263
3. Ho, T.K.: The Random Subspace Method for Constructing Decision Forests. IEEE Transactions on Pattern Analysis and Machine Intelligence. 20 (1998) 8, 832-844
4. Czyz, J., Kittler, J., Vandendorpe, L.: Multiple Classifier Combination for Face-based Identity Verification. Pattern Recognition. 37 (2004) 7: 1459-1469
5. Kuncheva, L.I.: Diversity in Multiple Classifier Systems. Information Fusion. 6 (2005) 3-4
6. Kuncheva, L.I. et al.: An Experimental Study on Diversity for Bagging and Boosting with Linear Classifiers. Information Fusion. 3 (2002) 245-258
7. Hu Qinghua, Yu Daren: Entropies of Fuzzy Indiscernibility Relation and Its Operations. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems. 12 (2004) 575-589
8. Hu Qinghua, Yu Daren, Wang Mingyang: Constructing Rough Decision Forests. The Tenth Conference on Rough Sets, Fuzzy Sets, Data Mining and Granular Computing (2005)
A Successive Design Method of Rough Controller Using Extra Excitation Geng Wang, Jun Zhao, and Jixin Qian Institute of Systems Engineering, Zhejiang University, Hangzhou, 310027, China
[email protected] Abstract. An efficient design method to improve the control performance of a rough controller is presented in this paper. As the input-output data of historical process operation may not be informative enough, extra testing signals are used to excite the process to acquire sufficient data reflecting the control laws of the operator or the existing controller. Using data from successive exciting tests or excellent operation by operators, the rules can be updated and enriched, which helps to improve the performance of the rough controller. The effectiveness of the proposed method is demonstrated through two simulation examples emulating PID control and Bang-Bang control, respectively.
1 Introduction

In industrial processes, it is usually very difficult to obtain quantitative models of the complex process. In such cases it is necessary to observe the operation of experts or experienced operators and discover the rules governing their control actions. Rough set theory [1],[2],[3] provides a methodology for generating rules from process data, which enables setting up a decision-making utility that approximates the operator's knowledge about how to control the system. Such a processor of decision rules is referred to as a "rough controller" [4]. The rough controller has been applied successfully to industrial control [5-11]. However, the result of rough control is coarse. One reason is that the rules extracted from historical operating data are usually not sufficient, which means the rules may not provide full knowledge for controlling the outputs; therefore the performance of the rough controller may be poor. An improved approach to designing a rough controller is proposed in this paper. First, extra testing signals are used to excite the system to acquire sufficient data reflecting the strategy of operators or experts, similarly to model identification; next, rough sets are used to extract rules from the collected data; then the rules are updated and enriched using data from successive exciting tests or excellent operations by operators; finally, the rough controller is designed based on the rules. This method is applied to emulate two control schemes, a PID control and a Bang-Bang control, both with good control performance.
2 Design Method of Rough Controller

2.1 Data Acquirement and Excitation

The proposed method to acquire data is shown in Fig. 1. Extra testing signals are imposed to excite the process; the signals can be pseudo-random signals or step signals. Both the control signals and the output signals are recorded.

Fig. 1. Data collection system
The testing signals should ensure that the system is excited sufficiently, so that the acquired data can cover the whole input-output space and completely reflect the actions of the operator. For example, in the example of emulating the PID controller, the error between the set point and the output signal and the change in error are chosen as condition attributes, and a pseudo-random signal is used as the testing signal to ensure sufficient magnitude and frequency. In the example of emulating the Bang-Bang controller, to ensure that the whole input-output space is covered and that the control laws of Bang-Bang control are captured exactly, a multi-step signal with different magnitudes is used.

2.2 Rough Set Analysis

The goal of the analysis stage is to generate rules. The procedure is as follows:
1. Select condition attributes and decision attributes, and discretize these attributes by manual discretization.
2. Check and remove redundant data.
3. Check contradictory data and process them with the weighted average method.
4. Generate minimal decision rules from the data.

2.3 Rough Rules Update and Complement

As the rules derived from a single test may not be sufficient, one or more additional tests need to be performed, and operators' excellent operations can also be referenced. These tests can use the same type of testing signals, but with different magnitudes or frequencies. Data acquired from these tests or from operators' excellent operations are used to generate new rules, and these rules are used to update or enrich the original knowledge base successively. The method can be processed through the following operations. Suppose U represents the rule set in the original knowledge base and x' is a new rule extracted from a new test.
1. If ∀x ∈ U, ∧f(x', ci) ≠ ∧f(x, ci), then x' is a new rule and is appended to the set U.
2. If ∃x ∈ U, ∧f(x', ci) = ∧f(x, ci) and ∧f(x', dj) = ∧f(x, dj), then x' is ignored.
3. If ∃x ∈ U, ∧f(x', ci) = ∧f(x, ci) and ∧f(x', dj) ≠ ∧f(x, dj), then x' is processed by the weighted average method together with these rules x, as follows:

  Vd = (Σi=1..k Vdi · ni) / (Σi=1..k ni)    (1)

where k is the number of the decisions, Vdi the value of the i-th decision, ni the number of its corresponding objects, and Vd the final decision.

Take the example of emulating the PID controller: in total 16 tests are performed successively. The numbers of rules after being updated and enriched are shown in Table 1. It can be seen that the rules are insufficient only for the first several tests, and the rule number increases with successive tests; the testing can be terminated when the rule number no longer changes.

Table 1. The number of the rules after each test
Test Rule Number Test Rule Number
1 73 9 113
2 97 10 113
3 106 11 114
4 110 12 115
5 110 13 115
6 112 14 115
7 112 15 115
8 112 16 115
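A schematic rendering of the three update cases of Sect. 2.3 (ours; the dictionary-based rule representation is hypothetical) is:

def merged_decision(entries):
    # Eq. (1): Vd = sum_i Vd_i * n_i / sum_i n_i over the conflicting decisions
    total = sum(n for _, n in entries)
    return sum(v * n for v, n in entries) / total

def update_knowledge_base(kb, cond, decision):
    # kb maps a condition tuple to a list of (decision value, object count) pairs
    if cond not in kb:                       # case 1: a genuinely new rule
        kb[cond] = [(decision, 1)]
    else:
        for k, (v, n) in enumerate(kb[cond]):
            if v == decision:                # case 2: duplicate rule, ignored
                kb[cond][k] = (v, n + 1)     # (only the object count grows)
                break
        else:                                # case 3: contradictory decision
            kb[cond].append((decision, 1))
    return merged_decision(kb[cond])         # weighted-average decision of Eq. (1)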
2.4 Rough Controller Design Based on Rules

The basic structure of the rough controller [4] is shown in Fig. 2. It consists of the following parts: a rough A/D converter, an inference engine, and a knowledge base. The knowledge base is the central part; it contains the rules derived from the test data by rough sets.

Fig. 2. Rough control system
3 Simulation

Two examples, emulating a PID controller and a Bang-Bang controller respectively, are used to validate the proposed method. In the examples, the PID controller and the Bang-Bang controller are assumed to be the plant operator.
Fig. 3. Comparison between the rough controller and the PID controller with a step input from 0 to 0.25 (left: output; right: control move)
In the example of emulating the PID controller, the rough controller is expected to respond to any input or disturbance distributed in [-0.3, 0.3] like the PID controller. Fig. 3 shows the responses of the rough controller and the PID controller when tracking a step input signal from 0 to 0.25. It is observed that the control move of the rough controller differs from that of the PID controller, but the control performances of the two controllers are very similar.
Fig. 4. Comparison between the Bang-Bang controller and the rough controller with a step input from 0 to 2.9 (left: output; right: control move)
Bang-Bang control is a special type of control where the control move is only allowed to take discrete values, which is very similar to operators' control actions. In the example of emulating the Bang-Bang controller, the rough controller is expected to respond to any input or disturbance distributed in [0, 5] like the Bang-Bang controller. The response to a step input from 0 to 2.9 of the rough controller and the Bang-Bang controller is shown in Fig. 4; it is observed that the control scheme of the rough controller (e.g., switching twice) is the same as that of the Bang-Bang controller. The small steady-state error compared with the Bang-Bang controller is owing to the discretization of the signal magnitude from 2.9 to 3. When the magnitude of the input signal is the same as that of the exciting signal used in the testing procedure, identical responses of the rough controller and the Bang-Bang controller are achieved.
4 Conclusions

A method of designing a rough controller based on rough set theory is presented in this paper. The method provides a new idea for a control system capable of emulating the human operator's decision process.
Acknowledgement. The authors would like to gratefully acknowledge financial support from the China National Key Basic Research and Development Program under Grant 2002CB312200.
References 1. Pawlak, Z.: Rough sets. International Journal of Information and Computer Science, 11(1982) 2. Pawlak, Z.: Rough sets.: Theoretical Aspects of Reasoning about Data. Kluwer Academic Publishers, Dordrecht, The Neatherlands(1991) 3. Mrózek, A.: Rough Sets and Dependency Analysis among Attributes in Computer Implementations of Expert’s Inference Models. International Journal of Man-Machine Studies, 30(1989) 4. Mrózek, A., Planka, L., Kedziera, J.: The methodology of rough controller synthesis. Proceeding of the Fifth IEEE International Conference on Fuzzy Systems, (1996)1135-1139 5. Mrózek, A.: Rough sets in computer implementation of rule-based control of industrial process, Intelligent Decision Support: Handbook of Applications and Advances of the rough sets Theory, Kluwer Academic Publishers, (1992)19-31 6. Mrózek, A., Planka, L.: Rough sets in industrial applications, Rough sets in knowledge discovery 2: applications, case studies and software systems, Physica-Verlag, Heidelberg, (1998)214-237 7. Planka, L., Mrózek, A.: Rule-based stabilization of the inverted pendulum. Computational Intelligence, 11(1995) 348-356 8. Lamber-Torres, G.: Application of rough sets in power system control center data mining, Power Engineering Society Winter Meeting, 1(2002) 627-631 9. Munakata, T.: Rough control: A perspective. Rough sets and Data Mining: Analysis of Imprecise Data. Kluwer Academic Publisher, (1997)77-88 10. Huang, J.J., Li, S.Y., Man, C.T.: A T-S type of rough fuzzy controller based on process input-output data. Proceedings of IEEE Conference on Decision and Control, 5(2003)47294734 11. Peters, J.F., Skowron, A., Suraj, Z.: An application of rough set methods in control design, Fundamental Information, 43 (2000)269-290
A Soft Sensor Model Based on Rough Set Theory and Its Application in Estimation of Oxygen Concentration Xingsheng Gu and Dazhong Sun Research Institute of Automation, East China University of Science & Technology, Shanghai, 200237, P. R. China
[email protected] Abstract. At present, the field of soft sensor modeling attracts much research attention. In the process of establishing soft sensor models, how to select the secondary variables is still an unresolved question. In this paper, rough set theory is used to select the secondary variables from the initial sample data. The method is used to build a soft sensor model to estimate the oxygen concentration in a regeneration tower, and a good result is obtained.
1 Introduction

In many chemical processes, due to the limitations of measurement devices, it is often difficult to measure some important process variables [1] (e.g., product composition variables and product concentration variables), or the variables of interest can only be obtained with long measurement delays. But these variables are often used as feedback signals for quality control [2]; if they are not measured quickly and accurately, the quality of the product can be badly affected. In this case, a soft sensor can be used to build a model between easily and frequently measured variables (secondary variables) and those important process variables (primary variables), and the model outputs then stand for the values of the primary variables. The soft sensor is a newly developing technique based on inferential control. An important step of soft sensor modeling is the selection of secondary variables that have a functional relationship with the primary variable, but the optimal selection method of secondary variables is still unresolved. Rough set theory (RST) was proposed by Pawlak as a new method of dealing with fuzzy and uncertain knowledge. In this paper, we apply rough set theory to the selection of secondary variables in order to propose a new approach for the soft sensor technique.
2 Rough Set Theory [3]

An information system is S = (U, A, V, f), where U = {x1, x2, ..., xn} is a nonempty and finite set of objects; A = {a1, a2, ..., an} is a nonempty and finite set of attributes; V = ∪a∈A Va, where Va is the domain set of a; and f : U × A → V is an information function with
∀a ∈ A, x ∈ U, f(x, a) ∈ Va [3, 4]. Let P ⊆ A; an indiscernibility relation ind(P) is defined as:

  ind(P) = {(x, y) ∈ U × U | ∀a ∈ P, f(x, a) = f(y, a)}    (1)

The indiscernibility relation ind(P) is an equivalence relation on U. All the equivalence classes of the relation ind(P) are expressed as

  U/ind(P) = {X1, X2, ..., Xm}    (2)

where Xi (i = 1, 2, ..., m) is called the i-th equivalence class, Xi ⊆ U, Xi ≠ Φ, Xi ∩ Xj = Φ (i ≠ j; i, j = 1, 2, ..., m), and ∪i=1..m Xi = U [7]. The equivalence classes are also called the classification of U, and K = (U, ind(P)) is called a knowledge base.

Knowledge (attribute) reduction is one of the kernel parts of rough set theory. Some attributes are perhaps redundant; if an attribute is redundant, it can be removed from the information system. Reduction and core are two basic concepts of knowledge reduction.

Definition 1: Suppose R ⊆ A is an equivalence relation and r ∈ R. The attribute r is redundant if ind(R) = ind(R − {r}); otherwise r is indispensable in R. If every r (r ∈ R) is indispensable in R, then R is independent; otherwise R is dependent.

Definition 2: Suppose P ⊆ R. If P is independent and ind(P) = ind(R), then P is called a reduction of R, expressed as red(R). R may have many reductions; the set of all indispensable attributes in R included by all the reductions is called the core of R, expressed as core(R) = ∩ red(R).
posP (Q ) =
∪ PX .
X ∈U / Q
A decision table is a special information system S = (U , A,V , f ) where A = C ∪ D ,
C ∩ D = Φ , C is the condition attributes set, D is the decision attributes set and Φ is an empty set. It’s a two-dimension table. Each row is an object and each column is an attribute. Suppose c ∈ C is a condition attribute, the significance of c with respect to D is defined as:
σ CD (c ) = ( posC ( D ) − posC − c ( D ) ) / U Where
U represent the cardinality of U .
(3)
3 Soft Sensor Modeling Method

In this paper, we consider using the input and output sample data to build a soft sensor model for a MISO system based on rough set theory. The method includes three steps.

3.1 Discretization of Continuous Attribute Values
The input and output data of the real process are continuous. Before using rough set theory to judge the significance of an input attribute with respect to the output attribute, the continuous attribute values should be transformed into discrete values expressed as 1, 2, ..., n. There exist a number of discretization methods, such as the fuzzy c-means clustering method [5] and the equal-width-interval method. In this paper, we apply the equal-width-interval method to the input and output attributes. The method is as follows: for an attribute x ∈ [xmin, xmax], divide x into ni parts with partition points Yj, j ∈ {1, 2, ..., ni}. If |x − Yi| = min{|x − Yj|}, i = 1, 2, ..., ni, then the discrete value of x is i. Substituting the continuous attribute values with the discrete attribute values yields the decision table [4].
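As we read it, the equal-width-interval method can be sketched as follows (taking the interval right edges as the partition points is our assumption; midpoints would work analogously):

def equal_width_discretize(x, x_min, x_max, n_intervals):
    width = (x_max - x_min) / n_intervals
    cuts = [x_min + j * width for j in range(1, n_intervals + 1)]
    # discrete value = 1-based index of the partition point closest to x
    return min(range(n_intervals), key=lambda j: abs(x - cuts[j])) + 1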
3.2 Removing the Redundant Condition Attributes [3, 4]
The above method of judging the significance of the input attributes with respect to the output attribute is used to remove the redundant condition attributes. C = {C1, C2, ..., Cr} (r is the number of the input attributes) are the condition attributes and D = {D1, D2, ..., Dm} (m is the number of the output attributes) are the decision attributes. For each input attribute, compute σCD(Ci), i = 1, 2, ..., r. If σCD(Ci) is big, the condition attribute Ci is significant to D; otherwise, Ci is less significant to D. If σCD(Ci) = 0, Ci is insignificant to D.

3.3 Building Soft Sensor Model
There are several ways to build soft sensor models, such as mechanism analysis, regression analysis, artificial neural networks, fuzzy techniques, and so on. In this paper, we use the Elman network to build the soft sensor model. The Elman network is a dynamic recurrent neural network, as shown in Figure 1 [6]. It can be seen that in an Elman network, in addition to the input, hidden and output units, there are also context units. The feedforward weights such as Wih, Wch and Who are modifiable, while the recurrent weight Whc is fixed as a constant one. The output vector is Y(k) ∈ Rm and u(k−1) is the input vector.

Fig. 1. Elman network
X(k) ∈ Rn is the hidden layer output vector and Xc(k) ∈ Rn is the context vector. The relationship between the input and the output can be represented as

  X(k) = F(Wch Xc(k) + Wih u(k − 1))
  Xc(k) = X(k − 1)
  Y(k) = G(Who X(k))

where F and G are the activation functions of the hidden units and the output units, respectively, and the dynamic back-propagation (DBP) learning rule is used to train the Elman network.
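A minimal forward step of this network (a sketch; the weight shapes and the choice of tanh/identity activations are our assumptions, not the authors' code):

import numpy as np

def elman_step(u_prev, x_prev, Wih, Wch, Who, F=np.tanh, G=lambda z: z):
    x = F(Wch @ x_prev + Wih @ u_prev)   # X(k) = F(Wch Xc(k) + Wih u(k-1))
    y = G(Who @ x)                       # Y(k) = G(Who X(k))
    return x, y                          # the returned x is the next context Xc(k+1)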
4 Rough Set Based Soft Sensor Model for Oxygen Concentration

In this part, we use rough set theory to select the secondary variables and then build a soft sensor for the oxygen concentration in a regeneration tower. Oxygen concentration is an important variable of the regeneration tower. According to the analysis of the technological mechanism, the oxygen concentration is related to 15 variables, as shown in Fig. 2. Variable 16 is the oxygen concentration; variables 1 to 4 are gas fluxes; variables 5 and 6 are gas pressures; variables 7 to 13 are gas temperatures; variable 14 is the flux of the cycling catalyst; and variable 15 is the coking of the catalyst. But these 15 variables are not equally significant to the oxygen concentration, and perhaps some of them can be removed from the initial data set. Rough set theory is a strong tool for this work. In this application, C = {C1, C2, ..., C15} and D = {D1}. 200 samples are used to judge the significance of each input variable to the oxygen concentration. Using the above method, we obtain the results shown in Table 2.
Fig. 2. Regeneration Tower
Table 2. Significance of condition attributes to decision attribute

  σCD(C1) = 0       σCD(C2) = 0      σCD(C3) = 0.003   σCD(C4) = 0      σCD(C5) = 0.01
  σCD(C6) = 0       σCD(C7) = 0      σCD(C8) = 0       σCD(C9) = 0      σCD(C10) = 0
  σCD(C11) = 0.01   σCD(C12) = 0     σCD(C13) = 0.005  σCD(C14) = 0     σCD(C15) = 0.005
Fig. 3. The training result of the Elman network
Fig. 4. The generalization result of the Elman network
Table 2 shows that C3, C5, C11, C13 and C15 are significant to D. These five variables are used as the input variables to train the Elman network. The structure of the network is 6-13-1. The training and generalization results are shown in Figure 3 and Figure 4. The root mean square errors for the training and test samples are 0.0184 and 0.0197 respectively, which shows that the modeling accuracy is high.
5 Conclusions

In this paper, we used rough set theory to judge the significance of the initially selected secondary variables to the primary variables in the soft sensor technique. On the basis of rough set theory, some less significant or insignificant variables can be removed. The method is applied to the estimation of the oxygen concentration in a regeneration tower, and a satisfactory result is obtained. The simulation results show the effectiveness of the proposed method.
References 1. Zhong, W., Yu, J.S.: MIMO Soft Sensors for Estimating Product Quality with On-line Correction. Chemical Engineering Research & Design,Transaction of the Insitute of Chemical Engineers. 2000, 78(A): 612-620 2. Tham, M.T., Morris, A.J. et al.: Soft-sensing: a Solution to the Problem of Measurement Delays. Chem Eng Res Des, 1989, 67: 547-554. 3. Zhang, W.X., Wu, W.Z., et al.: Rough Set Theory and Methods. Beijing: Science Press, 2001. 4. Luo, J.X., and Shao, H.H.: Selecting Secondary Measurements for Soft Sensor Modeling Using Rough Set Theory. Proceedings of the 4th World Congress on Intelligent Control and Automation, Press of East China University of Science and Technology, 2002: 415-419. 5. Li, M., and Zhang, H.G.: Research on the Method of Neural Network Modeling Based on Rough Sets Theory. ACTA AUTOMATICA SINICA, 2002, 28(1): 27-33. 6. Tham, D.T., and Liu, X.: Training of Elman Networks and Dynamic System Modeling. International Journal of Systems Science, 1996, 27(2):221-226.
A Divide-and-Conquer Discretization Algorithm Fan Min, Lijun Xie, Qihe Liu, and Hongbin Cai College of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610051, China {minfan, xielj, qiheliu, caihb}@uestc.edu.cn
Abstract. The problem of real value attribute discretization can be converted into the reduct problem in the Rough Set Theory, which is NP-hard and can be solved by some heuristic algorithms. In this paper we show that the straightforward conversion is not scalable and propose a divide-and-conquer algorithm. This algorithm is fully scalable and can reduce the time complexity dramatically especially while integrated with the tournament discretization algorithm. Parallel versions of this algorithm can be easily written, and their complexity depends on the number of objects in each subtable rather than the number of objects in the initial decision table. There is a tradeoff between the time complexity and the quality of the discretization scheme obtained, and this tradeoff can be made through adjusting the number of subtables, or equivalently, the number of objects in each subtable. Experimental results confirm our analysis and indicate appropriate parameter setting.
1 Introduction
The majority of machine learning algorithms can be applied only to data described by discrete numerical or nominal attributes (features). In the case of continuous attributes, there is a need for a discretization algorithm that transforms continuous attributes into discrete ones [1], and the discretization step determines how coarsely we want to view the world [2]. The problem of real value attribute discretization can be converted into the reduct problem in the Rough Set Theory [3][5][6], but some existing algorithms are not scalable in practice [2][6] when the decision table has many continuous attributes and/or many possible attribute values. Recently we proposed an algorithm called the tournament discretization algorithm for situations where the number of attributes is large. In this paper we propose a divide-and-conquer algorithm that can dramatically reduce the time complexity and is applicable especially to situations where the number of objects in the decision table is large. By integrating these two algorithms, we can cope with decision tables of essentially any size. A parallel version of this algorithm can easily be written and run to further reduce the time complexity. There is a tradeoff between the time complexity and the quality of the discretization scheme obtained, and this tradeoff can be made by adjusting the number of subtables, or equivalently, the number of objects in each subtable. Experimental results confirm our analysis and indicate appropriate parameter settings.
The rest of this paper is organized as follows: in Section 2 we enumerate the relevant concepts about decision tables, discretization schemes and discernibility. In Section 3 we analyze existing rough set approaches for discretization and point out their scalability problem. In Section 4 we present and analyze our divide-and-conquer discretization algorithm. Some experimental results are given in Section 5. Finally, we conclude and point out further research work in Section 6.
2 Preliminaries

In this section we enumerate the relevant concepts after Nguyen [3] and Komorowski [2], and propose the definition of discernibility for attributes and cuts.

2.1 Decision Tables
Formally, a decision table is a triple S = (U, A, {d}) where d ∉ A is called the decision attribute and elements of A are called conditional attributes. a : U → Va for any a ∈ A ∪ {d}, where Va is the set of all values of a, called the domain of a. For the sake of simplicity, throughout this paper we assume that U = {x1, ..., x|U|}, A = {a1, ..., a|A|} and d : U → {1, ..., r(d)}. Table 1 lists a decision table.

Table 1. A decision table S

  U    a1   a2   a3   d
  x1   1.1  0.2  0.4  1
  x2   1.3  0.4  0.2  1
  x3   1.5  0.4  0.5  1
  x4   1.5  0.2  0.2  2
  x5   1.7  0.4  0.3  2

2.2 Discretization Schemes
We assume Va = [la, ra) ⊂ ℝ to be a real interval for any a ∈ A. Any pair (a, c) where a ∈ A and c ∈ ℝ is called a cut on Va. Let Pa be a partition of Va (for a ∈ A) into subintervals Pa = {[ca0, ca1), [ca1, ca2), ..., [caka, caka+1)} where la = ca0 < ca1 < ... < caka < caka+1 = ra and Va = [ca0, ca1) ∪ [ca1, ca2) ∪ ... ∪ [caka, caka+1). Hence any partition Pa is uniquely defined and often identified with the set of cuts {(a, ca1), (a, ca2), ..., (a, caka)} ⊂ A × ℝ. Any set of cuts P = ∪a∈A Pa defines from S = (U, A, {d}) a new decision table SP = (U, AP, {d}) called the P-discretization of S, where AP = {aP : aP(x) = i ⇔ a(x) ∈ [cai, cai+1) for x ∈ U and i ∈ {0, ..., ka}}. P is called a discretization scheme of S. While selecting cut points, we only consider midpoints of adjacent attribute values; e.g., in the decision table listed in Table 1, (a1, 1.2) is a possible cut while (a1, 1.25) is not.
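Candidate cuts for one attribute are thus the midpoints of adjacent observed values; a sketch:

def candidate_cuts(values):
    vs = sorted(set(values))
    return [(vs[i] + vs[i + 1]) / 2 for i in range(len(vs) - 1)]

# candidate_cuts([1.1, 1.3, 1.5, 1.5, 1.7]) -> [1.2, 1.4, 1.6], matching a1 of Table 1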
2.3 Discernibility
The ability to discern between perceived objects is important for constructing many entities like reducts, decision rules or decision algorithms [8]. The discernibility is often addressed through the discernibility matrix and the discernibility function. In this subsection we propose definitions of discernibility for both attributes and cuts from another point of view to facilitate further analysis.

Given a decision table S = (U, A, {d}), the discernibility of any attribute a ∈ A is defined as the set of object pairs discerned by a; this is formally given by

  DP(a) = {(xi, xj) ∈ U × U | d(xi) ≠ d(xj), i < j, a(xi) ≠ a(xj)},    (1)

where i < j is required to ensure that the same pair does not appear in the same set twice. The discernibility of an attribute set B ⊆ A is the union of the discernibility of each attribute, namely,

  DP(B) = ∪a∈B DP(a).    (2)

Similarly, the discernibility of a cut (a, c) is defined by the set of object pairs it can discern,

  DP((a, c)) = {(xi, xj) ∈ U × U | d(xi) ≠ d(xj), i < j, a(xi) < c ≤ a(xj) or a(xj) < c ≤ a(xi)}.    (3)

The discernibility of a cut set P is the union of the discernibility of each cut, namely,

  DP(P) = ∪(a,c)∈P DP((a, c)).    (4)
It is worth noting that maintaining discernibility is the most strict requirement used in the rough set theory because it implies no loss of information [11].
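Both definitions translate directly into code; a sketch (ours), with a and d given as accessor functions over a list U of objects:

def DP_attribute(U, a, d):
    # Eq. (1): pairs with different decisions discerned by attribute a
    return {(i, j) for i in range(len(U)) for j in range(i + 1, len(U))
            if d(U[i]) != d(U[j]) and a(U[i]) != a(U[j])}

def DP_cut(U, a, c, d):
    # Eq. (3): pairs with different decisions separated by the cut (a, c)
    return {(i, j) for i in range(len(U)) for j in range(i + 1, len(U))
            if d(U[i]) != d(U[j]) and (a(U[i]) < c) != (a(U[j]) < c)}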
3 Related Works and Their Limitations

3.1 The Problem Conversion
In this subsection we briefly explain the main idea of the rough set approach to discretization [3], and we point out that this conversion is independent of the definition of reduction and of the reduction algorithm employed. Given a decision table S = (U, A, {d}), denote the set of all possible cuts by C(A). Construct a new decision table S(C(A)) = (U, C(A), {d}), called the C(A)-cut attribute table of S, where ∀ct = (a, c) ∈ C(A), ct : U → {0, 1}, and

  ct(x) = 0, if a(x) < c; 1, otherwise.    (5)

Table 2 lists the C(A)-cut attribute table, denoted by S(C(A)), of the decision table S listed in Table 1. We have proven the following two theorems [6]:
Table 2. S(C(A))

  U    (a1,1.2)  (a1,1.4)  (a1,1.6)  (a2,0.3)  (a3,0.25)  (a3,0.35)  (a3,0.45)  d
  x1   0         0         0         0         1          1          0          1
  x2   1         0         0         1         0          0          0          1
  x3   1         1         0         1         1          1          1          1
  x4   1         1         0         0         0          0          0          2
  x5   1         1         1         1         1          0          0          2
Theorem 1. DP(C(A)) = DP(A).    (6)

Theorem 2. For any cut set P, DP(AP) = DP(P).    (7)
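For concreteness, the construction of S(C(A)) in Eq. (5) can be sketched as follows (our illustration, not the paper's code):

def cut_attribute_table(U, cuts):
    # U: list of dicts attribute -> value; cuts: list of (attribute, c) pairs;
    # each cut becomes a binary column with 0 iff a(x) < c, as in Table 2
    return [[0 if x[a] < c else 1 for (a, c) in cuts] for x in U]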
Therefore the discretization problem (constructing AP from A) is converted into the reduction problem (selecting P from C(A)). Nguyen [3][5] integrated the conversion process with the reduction process and employed the Boolean approach. According to the theorems proved above, this conversion maintains the discernibility of decision tables; hence it is independent of the definition of reduction and of the reduction algorithm employed. For example, if the definition of reduction requires that the positive region be maintained, i.e., the decision tables (U, C(A), {d}) and (U, P, {d}) have the same positive region, then S and SP have the same positive region; if the definition of reduction requires that the generalized decision be maintained, i.e., ∂C(A) = ∂P, then ∂A = ∂AP.

3.2 The Scalability Problem of Existing Approaches
Although the above-mentioned approach seems attractive because the discretization problem has been converted into a problem solved by many efficient algorithms, using it directly is not scalable in practice when the decision table has many continuous attributes and/or many possible attribute values. The decision table S(C(A)) = (U, C(A), {d}) has |U| rows and |C(A)| = O(|A||U|) columns, which may be very large. For example, in the data table WDBC of the UCI library [10] (stored in the file uci/breast-cancer-wisconsin/wdbc.data.txt), there are 569 objects and 31 continuous attributes, each attribute having 300 - 569 different attribute values, and S(C(A)) would contain 15,671 columns, which is simply not supported by the ORACLE system running on our computer. Nguyen [4] also proposed a very efficient algorithm based on the Boolean approach. But from our point of view, this approach is not flexible because only object pairs discerned by given cuts are used as heuristic information. Moreover, this approach may not be applicable to mixed-mode data [7]. In previous work [6] we developed the tournament discretization algorithm. This algorithm runs in rounds; during round i the discretization schemes
of decision tables containing 2^i conditional attributes (possibly fewer for the last one) are computed on the basis of the cut sets constructed in round i − 1, resulting in ⌈|A|/2^i⌉ cut sets. In this process the number of candidate cuts of the current decision table is kept relatively low. Using this algorithm we computed a discretization scheme for WDBC, but it took our computer 10,570 seconds for the plain version and 1,886 seconds for the parallel version. Both are rather long. Moreover, this algorithm may fail when the number of possible cuts of any attribute exceeds 1000, namely, the upper bound on the number of columns supported by the ORACLE system.
4 A Divide-and-Conquer Discretization Algorithm
We can apply the divide-and-conquer approach to the other dimension of the decision table, namely the number of objects. In this section we first propose the algorithm structure and then analyze the parameter setting.

4.1 The Algorithm Structure
Firstly we list our discretization algorithm in Fig. 1.

DivideAndConquerDiscretization(S = (U, A, {d}))
{input: A decision table S.}
{output: A discretization scheme P.}
Step 1. divide S into K subtables;
Step 2. compute discretization schemes of the subtables;
Step 3. compute the discretization scheme P of S based on the discretization schemes of the subtables;

Fig. 1. A Divide-and-Conquer Discretization Algorithm
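The three steps translate into a very small driver. In the sketch below, split, discretize_subtable and reduce_cuts are assumed helpers standing in for Step 1, the per-subtable discretizer (Nguyen's or the tournament algorithm) and the Step 3 reduction over the pooled cuts; none of them is spelled out in the paper.

def divide_and_conquer_discretization(S, K, split, discretize_subtable, reduce_cuts):
    subtables = split(S, K)                                # Step 1
    schemes = [discretize_subtable(T) for T in subtables]  # Step 2
    pooled = set().union(*schemes)                         # pooled candidate cuts
    return reduce_cuts(S, pooled)                          # Step 3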
Generally, in Step 1 we require the family of subtables to be a partition of S. Namely, let the set of subtables be {S_1, S_2, ..., S_K}, where S_i = (U_i, A, {d}) for all i ∈ {1, 2, ..., K}, ⋃_{i=1}^{K} U_i = U, and ∀i, j ∈ {1, 2, ..., K}, i ≠ j, U_i ∩ U_j = ∅. Moreover, we require all subtables except the last one to have the same size, namely |U_1| = |U_2| = ... = |U_{K−1}| = ⌈|U|/K⌉. We impose these requirements because:
1. Our algorithm is intended for data tables with large numbers of rows, so subtables containing the same row are not preferred;
2. Loss of rows, i.e., rows not included in any subtable, may incur too much loss of information. However, for very huge data tables we may relax this requirement;
3. Subtables of almost the same size are preferred from both the statistical and the implementation points of view.
Moreover, it is not encouraged to construct subtables from adjacent rows, e.g., U_1 = {u_1, u_2, ..., u_{⌈|U|/K⌉}}, because many data tables, such as IRIS, are well organized according to decision attribute values. We propose the following scheme to meet these requirements. Firstly, select a prime number p such that |U| % p ≠ 0. Then generate a set of numbers N = {n_1, n_2, ..., n_{|U|}}, where n_j = ((j − 1) ∗ p) % |U| + 1 for all j ∈ {1, 2, ..., |U|}. Because |U| % p ≠ 0, it is easy to prove that N = {1, 2, ..., |U|}. At last we let

U_i = {u_{n_{(i−1)⌈|U|/K⌉+1}}, u_{n_{(i−1)⌈|U|/K⌉+2}}, ..., u_{n_{i⌈|U|/K⌉}}}

for all i ∈ {1, 2, ..., K − 1}, and

U_K = {u_{n_{(K−1)⌈|U|/K⌉+1}}, u_{n_{(K−1)⌈|U|/K⌉+2}}, ..., u_{n_{|U|}}}.
For example, if |U| = 8, K = 2 and p = 3, then U_1 = {u_1, u_4, u_7, u_2} and U_2 = {u_5, u_8, u_3, u_6}. It is easy to see that the objects of any subtable are distributed evenly over S, and this scheme of constructing subtables can easily break any bias of S by choosing an appropriate p (relatively large primes such as 73 or 97 are preferred). In Step 2 any discretization algorithm could be employed; in this paper only Nguyen's algorithm [3] (employed when |A| ≤ 8) and our tournament discretization algorithm [6] (employed when |A| > 8) are considered, to keep a unified form. In Step 3 we use the same idea mentioned in Subsection 3.1: instead of C(A), we use the cuts selected from all subtables, i.e., ⋃_{i=1}^{K} P_i, where P_i is the discretization scheme of S_i.
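A minimal Python sketch of this construction scheme (ours; the function name is illustrative), which reproduces the example above:

import math

def split_objects(num_objects, K, p):
    # Permutation n_j = ((j - 1) * p) % |U| + 1, then consecutive
    # slices of size ceil(|U| / K) become U_1, ..., U_K.
    assert num_objects % p != 0
    perm = [((j - 1) * p) % num_objects + 1
            for j in range(1, num_objects + 1)]
    size = math.ceil(num_objects / K)
    return [perm[i * size:(i + 1) * size] for i in range(K)]

print(split_objects(8, 2, 3))   # -> [[1, 4, 7, 2], [5, 8, 3, 6]]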
4.2 Time Complexity Analysis
The time required for Step 1 is negligible compared with that of Step 2, and the time required for Step 3 is also negligible if K is not very large. It is worth noting that Step 2 can easily be distributed over K computers/processors and run in parallel. In order to make the relationship with the respective decision table explicit, we use P(S) instead of P to denote the discretization scheme of S, and P(S_1) to denote the discretization scheme of S_1. Obviously, the time complexity of computing the discretization scheme of any subtable is equal to that of S_1. The time complexity of the most efficient reduction algorithm is O(M² N log N), where N is the number of objects and M is the number of attributes [9]. We have developed an entropy-based algorithm with the same complexity. In the following we assume that this algorithm is employed and give the time complexities of different algorithms or combinations of algorithms. If we apply Nguyen's algorithm [3] directly to S, then, because S(C(A)) has O(|A||U|) cut attributes, the time complexity of computing P(S) is

O(|A|² |U|³ log |U|).   (8)
If we apply the tournament discretization algorithm [6] directly to S, the time complexity of computing P(S) is

O(|A| |U|³ log |U|),   (9)

which is reduced to

O(log |A| · |U|³ log |U|)   (10)

if the parallel mechanism is employed and there are |A|/2 computers/processors to use [6]. If we apply Nguyen's algorithm [3] to the subtables, the time complexity of computing P(S_1) is

O(|A|² (|U|/K)³ log(|U|/K)),   (11)

which is also the time complexity of the parallel version of computing P(S) if there are K computers/processors to use. The time complexity of the plain version of computing P(S) is

O(K |A|² (|U|/K)³ log(|U|/K)).   (12)

If we apply the tournament discretization algorithm [6] to the subtables, the time complexity of computing P(S_1) is

O(|A| (|U|/K)³ log(|U|/K)),   (13)

which is reduced to

O(log |A| · (|U|/K)³ log(|U|/K))   (14)

if the parallel mechanism is employed and there are |A|/2 computers/processors to use. This is also the time complexity of computing P(S) if there are (|A|/2) · K computers/processors to use. If the parallel mechanism is not employed, the time complexity of computing P(S) is

O(K |A| (|U|/K)³ log(|U|/K)).   (15)

Comparing equation (8) with (14) or (15) shows that our algorithms greatly reduce the time complexity of discretization.
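As a rough worked comparison (our arithmetic, not stated in the paper), dividing (8) by (12) gives

|A|² |U|³ log |U| / (K |A|² (|U|/K)³ log(|U|/K)) = K² · log |U| / log(|U|/K) ≈ K²,

so even the plain divide-and-conquer combination gains roughly a factor of K² over applying Nguyen's algorithm directly; for |U| = 569 and K = 5 this asymptotic factor is about 33.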
4.3 Tradeoff for Deciding a Suitable Parameter
According to the above analysis, a larger K incurs a lower time complexity. But a larger K, or equivalently a smaller |U|/K, has some drawbacks. When we divide S into subtables, we essentially lose some candidate cuts. For example, if we
divide S listed in Table 1 into 2 subtables S_1 = ({x_1, x_2}, {a_1, a_2, a_3}, d) and S_2 = ({x_3, x_4, x_5}, {a_1, a_2, a_3}, d), the cuts (a_1, 1.4), (a_3, 0.35) and (a_3, 0.45) will be lost. When subtables are large enough, the loss of cuts may be trivial and ignorable. But when subtables are relatively small, this kind of loss may be unendurable. At one extreme, if K = |U|, any subtable contains exactly one object, and there is no candidate cut at all. Obviously, the appropriate K varies directly with |U| for different applications, so it is more suitable to investigate the appropriate setting of |U|/K. In fact, if we keep |U|/K in a certain range for different applications, then, according to equations (11), (13) and (14), the time complexities of the parallel versions are not influenced by |U|. We will analyze this issue through examples in the next section.
5 Experimental Results
We are developing a tool called the Rough set Developer's Kit (RDK) on the Java platform and the ORACLE system, to test our algorithms and to serve as a basis for application development. For convenience we ran our algorithm on a notebook PC with an Intel Centrino 1.4 GHz CPU and 256 MB memory. The reduction algorithm employed throughout this paper is entropy-based, and the discretization algorithm for subtables is the tournament discretization algorithm. Table 3 lists some results for the WDBC dataset. Specifically, K = 1 indicates that we do not divide S into subtables. POS(S^P) denotes the number of objects in the positive region of the discretized decision table S^P. Total time indicates the time used by the plain version of our algorithm, while Parallel time indicates the time used by the parallel version. Processors indicates the number of computers/processors required for running the parallel version. Step 3 time indicates the time required for executing Step 3 of our algorithm. Time units are all seconds.

Experiment analysis:
1. While K ≤ 5, the positive region of S^P is the same as that of S, i.e., all 569 objects are in the positive region. This no longer holds true for K > 5. This difference is essential from the rough set point of view.
2. If the discretization quality is estimated by the number of selected cuts |P|, suboptimal results could be obtained, especially when K ≤ 6.
3. The total time and the parallel time decrease as K increases, but this trend does not continue significantly when K > 4. This is partly because more subtables incur more overhead.
4. The run time of Step 3 is no longer negligible when K ≥ 7.
5. In this example, K = 4 is the best setting.
6. For more general situations, we recommend |U|/K to be between 100 and 150 (corresponding to K = 4 or 5 in this example), because subtables of such size tend to maintain most cuts while remaining easy to discretize.
Table 3. Some results of the WDBC data set

K    POS(S^P)  |P|  Total time  Parallel time  Processors  Step 3 time
1    569       11   10,570      1,886          16          0
2    569       13   2,858       278            32          11
3    569       13   2,354       207            48          51
4    569       12   1,395       107            64          25
5    569       11   1,189       80             80          24
6    550       13   1,225       108            96          34
7    550       18   920         46             112         16
8    567       13   865         52             128         26
9    566       15   942         58             144         34
10   554       12   994         46             160         21
11   565       12   1,093       76             176         49
12   565       13   912         36             192         21
6 Conclusions and Further Works
In this paper we proposed a divide-and-conquer discretization scheme that divides the given decision table into subtables and combines the discretization schemes of the subtables. When integrated with the tournament discretization algorithm, this algorithm can discretize decision tables of any size. Moreover, the time complexity of the parallel versions of our algorithm is influenced by |U_1| rather than |U| if there are K = |U|/|U_1| processors/computers to use. Through an example, we have also suggested setting |U_1| between 100 and 150. Further research includes applying our algorithms, along with the parameter settings, to applications to test their validity.
References
1. L. Kurgan, K.J. Cios: CAIM Discretization Algorithm. IEEE Transactions on Knowledge and Data Engineering 16(2) (2004) 145-153.
2. J. Komorowski, Z. Pawlak, L. Polkowski, A. Skowron: Rough Sets: A Tutorial. In: S. Pal, A. Skowron (Eds.): Rough Fuzzy Hybridization (1999) 3-98.
3. H.S. Nguyen, A. Skowron: Discretization of Real Value Attributes. Proc. of the 2nd Joint Annual Conference on Information Science, Wrightsville Beach, North Carolina (1995) 34-37.
4. H.S. Nguyen: Discretization of Real Value Attributes, Boolean Reasoning Approach. PhD thesis, Warsaw University, Warsaw, Poland (1997).
5. H.S. Nguyen: Discretization Problem for Rough Sets Methods. RSCTC'98, LNAI 1424 (1998) 545-552.
6. F. Min, H.B. Cai, Q.H. Liu, F. Li: The Tournament Discretization Algorithm, submitted to ISPA 2005.
7. F. Min, S.M. Lin, X.B. Wang, H.B. Cai: Attribute Extraction from Mixed-Mode Data, submitted to ICMLC 2005.
8. R.W. Swiniarski, A. Skowron: Rough Set Methods in Feature Selection and Recognition. Pattern Recognition Letters 24 (2003) 833-849.
9. Liu Shao-Hui, Sheng Qiu-Jian, Wu Bin, Shi Zhong-Zhi, Hu Fei: Research on Efficient Algorithms for Rough Set Methods. Chinese Journal of Computers 26(5) (2003) 524-529 (in Chinese).
10. C.L. Blake, C.J. Merz: UCI Repository of Machine Learning Databases. http://www.ics.uci.edu/~mlearn/MLRepository.html, UC Irvine, Dept. of Information and Computer Science, 1998.
11. M. Kryszkiewicz: Comparative Studies of Alternative Types of Knowledge Reduction in Inconsistent Systems. International Journal of Intelligent Systems 16(1) (2001) 105-120.
A Hybrid Classifier Based on Rough Set Theory and Support Vector Machines*

Gexiang Zhang¹, Zhexin Cao², and Yajun Gu³

¹ School of Electrical Engineering, Southwest Jiaotong University, Chengdu 610031, Sichuan, China
[email protected]
² College of Profession and Technology, Jinhua 321000, Zhejiang, China
³ School of Computer Science, Southwest University of Science and Technology, Mianyang 621002, Sichuan, China

* This work was supported by the National EW Laboratory Foundation (No. NEWL51435QT220401).
Abstract. Rough set theory (RST) can mine useful information from large amounts of data and generate decision rules without prior knowledge. Support vector machines (SVMs) have good classification performance and good capabilities of fault tolerance and generalization. To inherit the merits of both RST and SVMs, this paper proposes a hybrid classifier called rough set support vector machines (RS-SVMs) to recognize radar emitter signals. RST is used as a preprocessing step to improve the performance of SVMs. A large number of experimental results show that RS-SVMs achieve lower recognition error rates than SVMs and that RS-SVMs have stronger capabilities of classification and generalization, especially when the number of training samples is small. RS-SVMs are greatly superior to SVMs.
1 Introduction

For many practical problems, including pattern matching and classification [1,2], function approximation [3], data clustering [4] and forecasting [5], Support Vector Machines (SVMs) have drawn much attention and been applied successfully in recent years. The subject of SVMs covers emerging techniques that have proven successful in many traditionally neural-network-dominated applications [6]. An interesting property of SVMs is that they are an approximate implementation of the structural risk minimization induction principle, which aims at minimizing a bound on the generalization error of a model rather than minimizing the mean square error over the data set [6]. The SVM is considered a good learning method that can overcome the internal drawbacks of neural networks [7,8,9]. Although SVMs have a strong capability of recognizing patterns and good capabilities of fault tolerance and generalization, SVMs cannot reduce the input data or select the most important information. Rough Set Theory (RST) can supplement this deficiency of SVMs effectively. RST, introduced by Zdzislaw Pawlak [10] in his seminal paper of 1982, is a new mathematical approach to uncertain and vague data analysis and is also a new
fundamental theory of soft computing [11]. In recent years, RST has become an attractive and promising research issue. Because RST can mine useful information from large amounts of data and generate decision rules without prior knowledge [12,13], it is used in many fields [10-16], such as knowledge discovery, machine learning, pattern recognition and data mining. RST has strong capabilities of qualitative analysis and rule generation, so it is introduced here to preprocess the input data of SVMs and extract the key elements to serve as SVM inputs. In our prior work, an Interval-Valued Attribute Discretization approach (IVAD) was presented to process the interval-valued continuous features of radar emitter signals [17]. RST was combined with neural networks to design rough neural networks, and experimental results verified that rough neural networks are superior to neural networks [18]. Unfortunately, neural networks have some unsolved problems, such as over-learning, local minima and network structure selection, especially the difficulty of determining the number of nodes in hidden layers [7,8,9]. So this paper incorporates SVMs with RST to design a hybrid classifier called Rough Set Support Vector Machines (RS-SVMs). The new classifier inherits the merits of both RST and SVMs. Experimental results show that the introduction of RST not only enhances the recognition rates and recognition efficiency of SVMs, but also strengthens the classification and generalization capabilities of SVMs. This paper is organized as follows. Section 2 gives the feature selection method using RST. Section 3 presents a hybrid classifier based on RST and SVMs. Simulation results are analyzed in Section 4. Conclusions are drawn in Section 5.
2 Feature Selection Method

RST can only deal with discrete attributes. In engineering applications, especially in pattern recognition and machine learning, the features obtained using feature extraction approaches usually vary in a certain range (interval values) instead of taking fixed values, because of noise and other factors. The existing discretization methods based on cut-splitting cannot effectively deal with an information system that contains interval attribute values, while IVAD can discretize interval-valued continuous features well. So IVAD is first used to process the features. The key problem of IVAD is to choose a good class-separability criterion function. When an attribute value varies in a certain range, the attribute value generally follows a certain law. Without loss of generality, suppose the law is an approximately Gaussian distribution. This paper uses the following class-separability criterion function in feature discretization:

J = 1 − ∫ f(x)g(x)dx / √(∫ f²(x)dx · ∫ g²(x)dx),   (1)

where f(x) and g(x) represent the probability distribution functions of the attribute values of two objects in the universe U of a decision system, respectively. Using this criterion function and the discretization algorithm in [17], the interval-valued features can be discretized effectively.
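A numerical sketch of criterion (1) for two assumed Gaussian laws; the square root in the denominator follows our reconstruction of the formula, and the grid and parameters are illustrative.

import numpy as np

x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

def gauss(m, s):
    return np.exp(-(x - m) ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))

def separability(f, g):
    # J = 1 - <f, g> / sqrt(<f, f> <g, g>), rectangle-rule integrals.
    num = np.sum(f * g) * dx
    den = np.sqrt(np.sum(f ** 2) * dx * np.sum(g ** 2) * dx)
    return 1.0 - num / den

print(separability(gauss(0, 1), gauss(0, 1)))  # ~0: identical laws
print(separability(gauss(0, 1), gauss(5, 1)))  # ~1: well-separated laws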
After discretizing the continuous features, RST methods can be used to select the most discriminatory feature subset from the original feature set composed of a large number of features. This paper adopts the attribute reduction method based on the discernibility matrix and logic operations [12,13] to reduce the discretized decision table. The detailed reduction algorithm is as follows.

Step 1. Compute the discernibility matrix C_D of the decision table.
Step 2. For every element c_ij (c_ij ≠ 0, c_ij ≠ ∅) of the nonempty sets in the discernibility matrix C_D, construct the corresponding disjunctive logic normal form

L_ij = ∨_{a_i ∈ c_ij} a_i.   (2)

Step 3. Perform the conjunction operation over all disjunctive normal forms L_ij to obtain a conjunctive normal form

L = ∧_{c_ij ≠ 0, c_ij ≠ ∅} L_ij.   (3)

Step 4. Transform the conjunctive normal form L into a disjunctive normal form L' = ∨ L.
Step 5. The results of attribute reduction are obtained: in the disjunctive normal form L', each conjunction item corresponds to one attribute reduction of the decision table, and the attributes contained in that item constitute a set of condition attributes after reduction.

Although RST finds all the reducts of the information system, this multiplicity of solutions makes it difficult to decide the input features of classifiers for automatic recognition. So this paper introduces the complexity of feature extraction to resolve the problem. The complexity is measured by the time consumed by feature extraction. Among all reducts of the information system, the feature subset with the lowest complexity is taken as the final feature subset.
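For small tables, Steps 1-5 can be realized by building the discernibility matrix and testing attribute subsets against its entries, which is equivalent to expanding the conjunctive form (3) into a disjunctive form. A brute-force Python sketch (ours; exponential in the number of attributes, for illustration only):

from itertools import combinations

def discernibility_matrix(rows, d):
    # rows: condition-attribute tuples; d: decisions.
    # Returns the nonempty entries c_ij as attribute-index sets.
    entries = []
    for i, j in combinations(range(len(rows)), 2):
        if d[i] != d[j]:
            diff = {k for k in range(len(rows[0])) if rows[i][k] != rows[j][k]}
            if diff:
                entries.append(diff)
    return entries

def all_reducts(num_attrs, entries):
    # B is a reduct iff it intersects every c_ij and is minimal.
    hits = lambda B: all(B & e for e in entries)
    reducts = []
    for r in range(1, num_attrs + 1):
        for B in map(set, combinations(range(num_attrs), r)):
            if hits(B) and not any(R <= B for R in reducts):
                reducts.append(B)
    return reducts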
3 Rough Set Support Vector Machines

SVMs have good classification, fault-tolerance and generalization capabilities. However, SVMs cannot select and reduce the input data. If the dimensionality of the input vector is very high, the training and testing times of SVMs will be very long. Moreover, a high-dimensional input feature vector usually contains some redundant data, which may lower the classification and generalization performance of SVMs. So it is necessary to preprocess the input data of SVMs. Fortunately, RST can mine useful information from a large number of features and eliminate the redundant features without any prior knowledge. To introduce its strong capabilities of qualitative analysis and rule generation into SVMs, this section uses RST as a preprocessing step for SVMs to design a hybrid classifier called RS-SVMs. The structure of RS-SVMs is shown in Fig. 1. The steps of designing an RS-SVM are as follows.
Step 1. The training samples are used to construct a decision table in which all attributes are represented with interval-valued continuous features.
Step 2. IVAD is employed to discretize the decision table, obtaining a discretized decision table.
Step 3. Attribute reduction methods are applied to the discrete attribute table. Attribute reduction usually yields multiple solutions simultaneously, so the complexity of feature extraction is introduced to select, from the multiple reduction results, the final feature subset with the lowest cost. After this selection, the final decision rule is obtained.
Step 4. According to the final decision rule, the Naïve Scaler algorithm [12] is used to discretize the attribute table produced by IVAD and to decide the number and positions of the cutting points. All cutting-point values are then computed in terms of the attribute table before IVAD discretization, and the discretization rule, i.e., the preprocessing rule of the SVMs, is generated.
Step 5. The training samples are processed using the preprocessing rule and then used as inputs to train the SVMs.
Step 6. When the SVM classifiers are tested with testing samples, or used in practical applications, the input data are first processed using the preprocessing rule and then applied as inputs to the trained SVMs.
Decision table
Reduction methods
Testing samples
IVAD
Discretized decision table
Naïve Scaler Algorithm
Decision rules
Preprocessing rules
Untrained SVMs
Trained SVMs
Complexity of features
Output
Fig. 1. Structure of RS-SVMs
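A minimal sketch of the Fig. 1 pipeline using scikit-learn's SVC for the SVM stage. Here preprocess() stands in for the rule produced by IVAD plus attribute reduction, i.e., keeping the selected feature columns and binning them at fixed cut points; the selected indices, cut values and random data are hypothetical, not the paper's.

import numpy as np
from sklearn.svm import SVC

SELECTED = [4, 9]                  # e.g. features a5 and a10 (0-based)
CUTS = {4: [0.3, 0.7], 9: [0.5]}   # hypothetical cut points per feature

def preprocess(X):
    # Apply the preprocessing rule: select + discretize columns.
    cols = [np.digitize(X[:, f], CUTS[f]) for f in SELECTED]
    return np.stack(cols, axis=1)

X_train = np.random.rand(50, 16)          # 16 raw features
y_train = np.random.randint(0, 10, 50)    # 10 signal classes
clf = SVC(kernel="rbf", decision_function_shape="ovo")
clf.fit(preprocess(X_train), y_train)
pred = clf.predict(preprocess(np.random.rand(5, 16)))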
SVMs were originally designed for binary classification. How to effectively extend them to multiclass classification is still an ongoing research issue [19]. Currently there are two types of approaches for multiclass SVMs. One constructs and combines several binary classifiers, while the other directly considers all the data in one optimization formulation [19]. The former approach mainly includes three methods: one-versus-rest (OVR) [19,20], one-versus-one (OVO) [19,21] and support vector machines with binary tree architecture (BTA) [22]. Experimental results [1-9,19-22] show that a combinatorial classifier built from several binary classifiers is a valid and practical way of solving the multiclass classification problem.
4 Simulations

We choose 10 radar emitter signals for the simulation experiments. The 10 signals are denoted x_1, x_2, ..., x_10, respectively. In our prior work, 16 features were extracted from the 10 radar emitter signals [23,24,25]; they are denoted a_1, a_2, ..., a_16, respectively. After discretization and reduction, the final result {a_5, a_10} is obtained. For comparison, several feature selection approaches are considered: the Resemblance Coefficient method (RC) [25], the Class-Separability method (CS) [26], the Satisfactory Criterion method (SC) [27], Sequential Forward Search using a distance criterion (SFS) [28], Sequential Floating Forward Search using a distance criterion (SFFS) [28] and a New Method of Feature Selection (NMFS) [29]. The results obtained using RC, CS, SC, SFS, SFFS and NMFS are a_2a_7, a_4a_15, a_5a_12, a_1a_4, a_4a_5 and a_6a_7, respectively. To test the classification performance of the results obtained by the 7 feature selection methods, BTA is used to construct SVM classifiers to recognize the 10 radar emitter signals. The average accurate recognition rates obtained with the 7 feature selection methods are shown in Table 1.
Table 1. Comparison of recognition rates (RR) obtained by 7 methods

Methods    RC      CS      SC      SFS     SFFS    NMFS    Proposed
Features   a2a7    a4a15   a5a12   a1a4    a4a5    a6a7    a5a10
RR (%)     93.89   87.69   95.12   63.27   84.73   77.68   95.32
Table 1 shows that the average recognition rate of the proposed method is higher than those of the other 6 methods, which indicates that the feature selection method using RST is superior to the other 6 methods. At the same time, the experimental results show that the introduced discretization method is feasible and valid. The classification and generalization capabilities of RS-SVMs are compared with those of SVMs in the following experiments. Six classifiers, namely OVR-SVM, OVO-SVM, BTA-SVM, OVR-RS-SVM, OVO-RS-SVM and BTA-RS-SVM, are employed to recognize the 10 radar emitter signals. The inputs of the 6 classifiers use the feature subset selected by the proposed method, i.e., the two features a_5 and a_10. The performance criteria used to evaluate the classifiers are the recognition error rate and the recognition efficiency; recognition efficiency includes the training time (Trt) and the testing time (Tet). The samples in the training group are used to train the 6 classifiers, and then the samples in the testing group are applied to test the trained classifiers. Statistical results of 100 experiments using the 6 classifiers are shown in Table 2. To compare the training times and the capabilities of classification and generalization of SVMs with those of RS-SVMs, different numbers of training samples (10, 20, 30, 40 and 50) are respectively applied to train OVR-SVM, OVO-SVM, BTA-SVM and OVR-RS-SVM, OVO-RS-SVM, BTA-RS-SVM. Also, testing samples at 5 dB, 10 dB, 15 dB and 20 dB are respectively used to test the trained SVMs and RS-SVMs. After 100 experiments, the changing curves of the average recognition rates (ARR)
obtained using OVR-SVM and OVR-RS-SVM, OVO-SVM and OVO-RS-SVM, and BTA-SVM and BTA-RS-SVM are shown in Fig. 2, Fig. 3 and Fig. 4, respectively. The average training times (ATT) of OVR-SVM and OVR-RS-SVM, OVO-SVM and OVO-RS-SVM, and BTA-SVM and BTA-RS-SVM are shown in Fig. 5, Fig. 6 and Fig. 7, respectively. All experiments are performed on a personal computer (P-IV, CPU: 2.0 GHz, EMS memory: 256 MB).
Table 2. Experimental result comparison of 6 classifiers (%)

                       SVMs                       RS-SVMs
Signals       OVR      OVO     BTA       OVR      OVO     BTA
x1            0        0       20.36     0.40     0       0
x2            13.86    0       26.00     0        0       0
x3            0        0       0         0        0       0
x4            0        0       0         0        0       0
x5            0        0       0         0        0       0
x6            0        0       0         0        0       0
x7            0        0       0         0        0       0
x8            73.20    0       0         0        0       0
x9            0.33     0       0         0        0       0
x10           0        0       0.39      0        0       0
Error rate    8.74     0       4.68      0.04     0       0
Trt (sec.)    754.74   51.96   101.14    878.11   52.58   108.53
Tet (sec.)    233.58   6.56    4.22      232.38   27.64   25.02
Fig. 2. ARR of OVR-SVM and OVR-RS-SVM (average recognition rate vs. number of training samples)
From Table 2 and Figs. 2 to 7, several conclusions can be drawn.
(1) Table 2 shows that the recognition rates of the 3 RS-SVM classifiers (OVR-RS-SVM, OVO-RS-SVM and BTA-RS-SVM) are not lower than those of the 3 SVM classifiers (OVR-SVM, OVO-SVM and BTA-SVM). When the number of training samples is 50, the three RS-SVM classifiers OVR-RS-SVM,
OVO-RS-SVM and BTA-RS-SVM, together with the OVO-SVM classifier, are good classifiers for recognizing the 10 radar emitter signals using the selected features.
Fig. 3. ARR of OVO-SVM and OVO-RS-SVM (average recognition rate vs. number of training samples)
Fig. 4. ARR of BTA-SVM and BTA-RS-SVM (average recognition rate vs. number of training samples)

Fig. 5. ATT of OVR-SVM and OVR-RS-SVM (average training time vs. number of training samples)

Fig. 6. ATT of OVO-SVM and OVO-RS-SVM (average training time vs. number of training samples)

Fig. 7. ATT of BTA-SVM and BTA-RS-SVM (average training time vs. number of training samples)
(2) Table 2 and Figs. 5 to 7 show that the RS-SVM classifiers need more time than the SVM classifiers, because the discretization procedure in RS-SVMs consumes a little extra time.
(3) From Figs. 2 to 4, the recognition error rates of the 3 RS-SVM classifiers are lower than those of the 3 SVM classifiers as the number of training samples varies from 10 to 50. These results indicate that the classification and generalization capabilities of the RS-SVM classifiers are much stronger than those of the SVM classifiers, especially when the number of training samples is small. Figs. 2 to 4 also show that the classification and generalization capabilities of the RS-SVM classifiers with 10 training samples correspond to those of the SVM classifiers with 50 training samples. That is to say, RS-SVM classifiers with 10 training samples are superior to SVM classifiers with 50 training samples, because the former have much lower recognition error rates and much shorter training and testing times than the latter. Among the 3 RS-SVM classifiers, the OVO-RS-SVM classifier is the best in terms of recognition rate and recognition efficiency.
(4) When the number of training samples is 50, OVO-SVM seems to be the best classifier according to the evaluation criteria of the 6 classifiers. However, Fig. 3 and Fig. 6 indicate that OVO-RS-SVM becomes superior to OVO-SVM as the number of training samples decreases.
(5) To reach the same values of the evaluation criteria, RS-SVM classifiers need much shorter training and testing times than SVM classifiers, because RS-SVM classifiers need fewer training samples.
Therefore, the above analysis indicates that the introduction of rough set theory decreases the recognition error rates, enhances the recognition efficiency, and strengthens the classification and generalization capabilities of SVM classifiers.
5 Conclusions

This paper combines RST with SVMs to design a hybrid classifier. RST is used to preprocess the input data of SVMs both in the training procedure and in the testing procedure. Because RST selects the most discriminatory features from a large number of features and eliminates the redundant ones, the preprocessing step enhances the efficiency of SVMs in the training and testing phases and strengthens the classification and generalization capabilities of SVMs. Experimental results verify that RS-SVMs are much superior to SVMs in recognition capability and recognition efficiency. The proposed hybrid classifier is promising for other applications, such as image recognition, speech recognition and machine learning.
References
1. Osareh, A., Mirmehdi, M., Thomas, B., Markham, R.: Comparative Exudate Classification Using Support Vector Machines and Neural Networks. Lecture Notes in Computer Science, Vol. 2489 (2002) 413-420
2. Foody, G.M., Mathur, A.: A Relative Evaluation of Multiclass Image Classification by Support Vector Machines. IEEE Transactions on Geoscience and Remote Sensing, Vol. 42, No. 6 (2004) 1335-1343
3. Ma, J.S., Theiler, J., Perkins, S.: Accurate On-line Support Vector Regression. Neural Computation, Vol. 15, No. 11 (2003) 2683-2703
4. Ben-Hur, A., Horn, D., Siegelmann, H.T., Vapnik, V.: Support Vector Clustering. Journal of Machine Learning Research, Vol. 2, No. 2 (2001) 125-137
5. Kim, K.J.: Financial Time Series Forecasting Using Support Vector Machines. Neurocomputing, Vol. 55, No. 1 (2003) 307-319
6. Dibike, Y.B., Velickov, S., Solomatine, D.: Support Vector Machines: Review and Applications in Civil Engineering. Proc. of the 2nd Joint Workshop on Application of AI in Civil Engineering (2000) 215-218
7. Kecman, V.: Learning and Soft Computing: Support Vector Machines, Neural Networks and Fuzzy Logic Models. The MIT Press, Cambridge, MA (2001)
8. Wang, L.P. (Ed.): Support Vector Machines: Theory and Application. Springer-Verlag, Berlin Heidelberg New York (2005)
9. Samanta, B.: Gear Fault Detection Using Artificial Neural Networks and Support Vector Machines with Genetic Algorithms. Mechanical Systems and Signal Processing, Vol. 18, No. 3 (2004) 625-644
10. Pawlak, Z.: Rough Sets. International Journal of Computer and Information Sciences, Vol. 11, No. 5 (1982) 341-356
11. Lin, T.Y.: Introduction to the Special Issue on Rough Sets. International Journal of Approximate Reasoning, Vol. 15, No. 4 (1996) 287-289
12. Wang, G.Y.: Rough Set Theory and Knowledge Acquisition. Xi'an Jiaotong University Press, Xi'an (2001)
13. Walczak, B., Massart, D.L.: Rough Sets Theory. Chemometrics and Intelligent Laboratory Systems, Vol. 47, No. 1 (1999) 1-16
14. Dai, J.H., Li, Y.X.: Study on Discretization Based on Rough Set Theory. Proc. of the First Int. Conf. on Machine Learning and Cybernetics (2002) 1371-1373
15. Roy, A., Pal, S.K.: Fuzzy Discretization of Feature Space for a Rough Set Classifier. Pattern Recognition Letters, Vol. 24, No. 6 (2003) 895-902
16. Mitatha, S., Dejhan, K., Cheevasuvit, F., Kasemsiri, W.: Some Experimental Results of Using Rough Sets for Printed Thai Characters Recognition. International Journal of Computational Cognition, Vol. 1, No. 4 (2003) 109-121
17. Zhang, G.X., Hu, L.Z., Jin, W.D.: Discretization of Continuous Attributes in Rough Set Theory and Its Application. Lecture Notes in Computer Science, Vol. 3314 (2004) 1020-1026
18. Zhang, G.X., Hu, L.Z., Jin, W.D.: Radar Emitter Signal Recognition Based on Feature Selection Algorithm. Lecture Notes in Artificial Intelligence, Vol. 3339 (2004) 1108-1114
19. Hsu, C.W., Lin, C.J.: A Comparison of Methods for Multiclass Support Vector Machines. IEEE Transactions on Neural Networks, Vol. 13, No. 2 (2002) 415-425
20. Rifkin, R., Klautau, A.: In Defense of One-Vs-All Classification. Journal of Machine Learning Research, Vol. 5, No. 1 (2004) 101-141
21. Platt, J.C., Cristianini, N., Shawe-Taylor, J.: Large Margin DAGs for Multiclass Classification. Advances in Neural Information Processing Systems, Vol. 12 (2000) 547-553
22. Cheong, S.M., Oh, S.H., Lee, S.Y.: Support Vector Machines with Binary Tree Architecture for Multi-class Classification. Neural Information Processing - Letters and Reviews, Vol. 2, No. 3 (2004) 47-51
23. Zhang, G.X., Hu, L.Z., Jin, W.D.: Intra-pulse Feature Analysis of Radar Emitter Signals. Journal of Infrared and Millimeter Waves, Vol. 23, No. 6 (2004) 477-480
24. Zhang, G.X., Rong, H.N., Jin, W.D., Hu, L.Z.: Radar Emitter Signal Recognition Based on Resemblance Coefficient Features. Lecture Notes in Artificial Intelligence, Vol. 3066 (2004) 665-670
25. Zhang, G.X., Jin, W.D., Hu, L.Z.: Resemblance Coefficient and a Quantum Genetic Algorithm for Feature Selection. Lecture Notes in Artificial Intelligence, Vol. 3245 (2004) 155-168
26. Zhang, G.X., Hu, L.Z., Jin, W.D.: Quantum Computing Based Machine Learning Method and Its Application in Radar Emitter Signal Recognition. Lecture Notes in Artificial Intelligence, Vol. 3131 (2004) 92-103
27. Zhang, G.X., Jin, W.D., Hu, L.Z.: A Novel Feature Selection Approach and Its Application. Lecture Notes in Computer Science, Vol. 3314 (2004) 665-671
28. Mitra, P., Murthy, C.A., Pal, S.K.: Unsupervised Feature Selection Using Feature Similarity. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 3 (2002) 301-312
29. Lü, T.J., Wang, H., Xiao, X.C.: Recognition of Modulation Signal Based on a New Method of Feature Selection. Journal of Electronics and Information Technology, Vol. 24, No. 5 (2002) 661-666
A Heuristic Algorithm for Maximum Distribution Reduction

Xiaobing Pei and YuanZhen Wang

Department of Computer Science, HuaZhong University of Science & Technology, Wuhan, Hubei 430074, China
[email protected]

Abstract. Attribute reduction is one of the basic problems for decision tables, and it has been proved that computing the optimal attribute reduction is NP-complete. Many algorithms for the optimal attribute reduction have been proposed for consistent decision tables, but most decision tables are in fact inconsistent. In this paper, the judgment theorem with respect to maximum distribution reduction is obtained and the significance of attributes in a decision table is defined, from which a polynomial heuristic algorithm for the optimal maximum distribution reduction is proposed. Finally, experimental results show that this algorithm is effective and efficient.
1 Introduction

Rough set theory [1] was first proposed by Professor Pawlak in 1982. It is an excellent mathematical tool for handling imprecise and uncertain knowledge, and it has been successfully applied in many fields such as data mining and decision support [3][4]. Attribute reduction is one of the basic topics in rough set theory. It is well known that an information system or a decision table may contain irrelevant and superfluous knowledge, which is inconvenient for obtaining concise and meaningful decisions. When reducing attributes, we should eliminate the irrelevant and superfluous knowledge without losing essential information about the original data in the decision table. Computing the optimal attribute reduction is an NP-complete problem [2]. The main algorithms proposed so far for the optimal reduction are based on the positive region [5], mutual information [6], attribute frequency [7], attribute ordering [8], etc., for consistent decision tables. But most decision tables are in fact inconsistent. Zhang et al. [9] introduced the concept of maximum distribution reduction for decision tables and proposed an algorithm, based on the discernibility matrix, for finding the set of all maximum distribution reductions, but that algorithm is not efficient: its time complexity is exponential. In this paper, the judgment theorem with respect to maximum distribution reduction is given and the significance of attributes is defined, from which a polynomial heuristic algorithm for the optimal maximum distribution reduction is proposed. Finally, experimental results for this algorithm are given.
2 Basic Concepts of Rough Set Theory

In this section, we introduce only the basic notions of the rough set approach used in this paper. A decision table S is defined by S = <U, A, V, f>, where A = C ∪ D is a set of attributes, C and D being the condition attributes and decision attributes respectively, and U = {x_1, x_2, ..., x_n} is a non-empty finite set of objects. V = ⋃_{a∈A} V_a, where V_a is the value set of attribute a. f: U × A → V is a total function such that f(x_i, a) ∈ V_a for each a ∈ A, x_i ∈ U. Every B ⊆ A defines an equivalence relation ind(B), called an indiscernibility relation:

ind(B) = {(x, y) : f(x, a_k) = f(y, a_k), ∀ a_k ∈ B}.

Thus U/ind(B) is the set of equivalence classes U/ind(B) = {[x]_B : x ∈ U}, where [x]_B = {y : (x, y) ∈ ind(B)} is the equivalence class of an example x with respect to the concept B. If ind(C) ⊆ ind(D), the decision table S is said to be consistent; otherwise it is inconsistent. Certain rules can be generated from a consistent decision table, while possible or uncertain rules can also be generated from an inconsistent decision table.

Let U/ind(D) = {D_1, D_2, ..., D_r}. The maximum distribution information vector of x ∈ U with respect to attribute set C is defined by Dinf_C(x) = {d_1, d_2, ..., d_r}, where

d_j = 1 if P(D_j/[x]_C) = Max{P(D_k/[x]_C) | k = 1, 2, ..., r}, and d_j = 0 otherwise,

with P(D_j/[x]_C) = |D_j ∩ [x]_C| / |[x]_C|.
For every subset X of U, if Dinf_C(x) = Dinf_C(y) for every x, y ∈ X, then the maximum distribution information vector of X with respect to attribute set C is defined by Dinf_C(X) = Dinf_C(y), where y ∈ X. Let B be an attribute subset of C. If Dinf_B(x) = Dinf_C(x) for every x ∈ U, then B is called a maximum distribution consistent set. If B is a maximum distribution consistent set and no proper subset of B is a maximum distribution consistent set, then B is called a maximum distribution reduction. There may be more than one maximum distribution reduction in a decision table [9]. Maximum distribution reductions are particular subsets of attributes that preserve the same maximum decision rules as the full set of attributes.
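These definitions are directly computable. A Python sketch (ours; objects as rows of condition-attribute values and B as a list of column indices are an assumed representation):

from collections import Counter, defaultdict

def ind_classes(rows, B):
    # U/ind(B): group object indices by their values on attributes B.
    blocks = defaultdict(list)
    for i, row in enumerate(rows):
        blocks[tuple(row[k] for k in B)].append(i)
    return list(blocks.values())

def dinf(block, d, decisions):
    # Dinf over one equivalence class: 0/1 vector marking the
    # decision classes of maximal frequency inside the block.
    freq = Counter(d[i] for i in block)
    top = max(freq.values())
    return tuple(1 if freq.get(c, 0) == top else 0 for c in decisions)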
3 Judgment Theorem of Maximum Distribution Reduction

In this section, the judgment theorem with respect to maximum distribution reduction is obtained.

Theorem 1: Let S = <U, A, V, f> be a decision table, A = C ∪ D, where C and D are the condition and decision attributes respectively, and let B ⊆ C be an attribute set. Let U/ind(D) = {D_1, D_2, ..., D_r}, U/ind(C) = {X_1, X_2, ..., X_n}, and U/ind(B) = {Y_1, Y_2, ..., Y_m}, with Y_j (j ≤ m) = {X'_{j1}, X'_{j2}, ..., X'_{jt_j}}, where X'_{ji} ∈ U/ind(C), 1 ≤ i ≤ t_j. Then the attribute set B is a maximum distribution consistent set with respect to attribute set C if and only if, for each j (1 ≤ j ≤ m), Dinf_C(X'_{j1}) = ... = Dinf_C(X'_{jt_j}).

Proof: If B is a maximum distribution consistent set, then Dinf_B(x) = Dinf_C(x) for each x ∈ U. Therefore we have Dinf_C(X'_{ji}) = Dinf_B(Y_j) for each X'_{ji} ∈ U/ind(C) (i = 1, 2, ..., t_j), that is, Dinf_C(X'_{j1}) = ... = Dinf_C(X'_{jt_j}).

On the other hand, for each x ∈ U there exist j and i such that x ∈ X'_{ji} ∈ U/ind(C) and x ∈ Y_j ∈ U/ind(B). Suppose that there exists l ≤ r such that P(D_l/X'_{ji}) = Max{P(D_k/X'_{ji}) | k = 1, 2, ..., r}, where i = 1, 2, ..., t_j. Since Dinf_C(X'_{j1}) = ... = Dinf_C(X'_{jt_j}), it is easy to conclude that

|D_l ∩ X'_{ji}| / |X'_{ji}| > |D_m ∩ X'_{ji}| / |X'_{ji}| for all m ≤ r, m ≠ l.

Thus we conclude that

Σ_{k=1}^{t_j} |D_l ∩ X'_{jk}| / Σ_{k=1}^{t_j} |X'_{jk}| > Σ_{k=1}^{t_j} |D_m ∩ X'_{jk}| / Σ_{k=1}^{t_j} |X'_{jk}|, m ≤ r, m ≠ l.

Therefore we have Dinf_C(X'_{ji}) = Dinf_B(Y_j), that is, Dinf_B(x) = Dinf_C(x) for each x ∈ U. Thus we conclude that B is a maximum distribution consistent set with respect to attribute set C.

According to Theorem 1, we obtain the following judgment theorem of maximum distribution reduction.

Theorem 2: Let S = <U, A, V, f> be a decision table, A = C ∪ D, where C and D are the condition and decision attributes respectively, and let B ⊆ C be an attribute set. Then B is a maximum distribution reduction if and only if: 1) B is a maximum distribution consistent
set; 2) for each a ∈ B, there is a j (1 ≤ j ≤ m) such that Dinf_C(X'_{j1}) = ... = Dinf_C(X'_{jt_j}) is not true, where U/ind(B − {a}) = {Y_1, Y_2, ..., Y_m}, Y_j (j ≤ m) = {X'_{j1}, X'_{j2}, ..., X'_{jt_j}}, X'_{ji} ∈ U/ind(C), 1 ≤ i ≤ t_j.

Proof: The proof follows directly from Theorem 1.
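Theorem 1 also gives a direct computational test for maximum distribution consistency: check that Dinf_C is constant on every block of U/ind(B). A sketch reusing the helpers from Section 2 (ours, for illustration):

def is_max_dist_consistent(rows, d, C, B, decisions):
    for Y in ind_classes(rows, B):
        # Split the ind(B)-block Y into its ind(C)-subblocks and
        # compare their maximum distribution vectors.
        sub = ind_classes([rows[i] for i in Y], C)
        vecs = {dinf([Y[i] for i in s], d, decisions) for s in sub}
        if len(vecs) > 1:     # Dinf_C differs inside Y_j
            return False
    return True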
4 Heuristic Algorithm for Maximum Distribution Reduction

4.1 The Significance of Attributes

Definition 1: Let S = <U, A, V, f> be a decision table, A = C ∪ D, where C and D are the condition and decision attributes respectively. The significance of an attribute a ∈ C is defined by SGF(a, C, D) = Card(D_{C−{a}}), where

D_{C−{a}} = {[x]_C | Dinf_C([x]_C) ≠ Dinf_{C−{a}}(x), x ∈ U},

and Card(D_{C−{a}}) denotes the cardinality of the set D_{C−{a}}. The larger SGF(a, C, D) is, the more important attribute a is with respect to the attribute set C. In this paper, we use SGF(a, C, D) as heuristic information to search for the optimal reduction.

4.2 Heuristic Algorithm for the Optimal Maximum Distribution Reduction

A heuristic algorithm for the optimal maximum distribution reduction is proposed based on Theorem 2 and Definition 1.

Algorithm 1: A heuristic algorithm for the optimal maximum distribution reduction
Input: a decision table S = <U, A, V, f>, A = C ∪ D, where C is the set of condition attributes and D the set of decision attributes;
Output: the optimal maximum distribution reduction of the decision table S.
Step 1. Calculate U/ind(C) = {X_1, X_2, ..., X_n}; calculate Dinf_C(X_j) for every X_j ∈ U/ind(C), j ≤ n;
Step 2. Calculate SGF(a, C, D) for each a ∈ C and sort the attribute set C in ascending order of SGF(a, C, D);
Step 3. Let RED = C;
Step 4. For each attribute a ∈ RED, from back to front in RED, judge whether a is required: if SGF(a, RED, D) = 0, then RED = RED − {a};
Step 5. Return RED;
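A Python sketch of Algorithm 1 built on the helpers above. SGF counts the ind(attrs)-classes whose maximum distribution vector changes when attribute a is dropped, and attributes are examined from the back of the ascending SGF ordering, as in Steps 2-4; all names are illustrative.

def sgf(rows, d, attrs, a, decisions):
    # |D_{attrs - {a}}|: classes of U/ind(attrs) whose Dinf changes.
    reduced = [k for k in attrs if k != a]
    coarse = {}
    for Y in ind_classes(rows, reduced):
        v = dinf(Y, d, decisions)
        for i in Y:
            coarse[i] = v
    return sum(1 for X in ind_classes(rows, attrs)
               if dinf(X, d, decisions) != coarse[X[0]])

def heuristic_max_dist_reduction(rows, d, C, decisions):
    order = sorted(C, key=lambda a: sgf(rows, d, list(C), a, decisions))  # Step 2
    red = set(C)                                                          # Step 3
    for a in reversed(order):                                             # Step 4
        if sgf(rows, d, sorted(red), a, decisions) == 0:
            red.discard(a)
    return red                                                            # Step 5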
In most cases, |D| = 1, where |D| denotes the cardinality of the decision attribute set D. The time complexity of Step 1 is O(|C|²|U|²); the time complexity of Step 2 is O(|C|²|U|²); the time complexity of Step 4 is O(|C|²|U|²). Therefore the time complexity of Algorithm 1 is O(|A|²|U|²), since |C|