Communications in Computer and Information Science
120
Tai-hoon Kim, Thanos Vasilakos, Kouichi Sakurai, Yang Xiao, Gansen Zhao, Dominik Ślęzak (Eds.)
Communication and Networking International Conference, FGCN 2010 Held as Part of the Future Generation Information Technology Conference, FGIT 2010 Jeju Island, Korea, December 13-15, 2010 Proceedings, Part II
Volume Editors

Tai-hoon Kim, Hannam University, Daejeon, South Korea. E-mail: [email protected]
Thanos Vasilakos, University of Western Macedonia, Kozani, Greece. E-mail: [email protected]
Kouichi Sakurai, Kyushu University, Fukuoka, Japan. E-mail: [email protected]
Yang Xiao, The University of Alabama, Tuscaloosa, AL, USA. E-mail: [email protected]
Gansen Zhao, Sun Yat-sen University, Guangzhou, China. E-mail: [email protected]
Dominik Ślęzak, University of Warsaw & Infobright, Poland. E-mail: [email protected]

Library of Congress Control Number: 2010940170
CR Subject Classification (1998): C.2, H.4, I.2, D.2, H.3, H.5
ISSN: 1865-0929
ISBN-10: 3-642-17603-8 Springer Berlin Heidelberg New York
ISBN-13: 978-3-642-17603-6 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. springer.com © Springer-Verlag Berlin Heidelberg 2010 Printed in Germany Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper 06/3180
Preface
Welcome to the proceedings of the 2010 International Conference on Future Generation Communication and Networking (FGCN 2010) – one of the partnering events of the Second International Mega-Conference on Future Generation Information Technology (FGIT 2010).

FGCN brings together researchers from academia and industry as well as practitioners to share ideas, problems and solutions relating to the multifaceted aspects of communication and networking, including their links to computational sciences, mathematics and information technology.

In total, 1,630 papers were submitted to FGIT 2010 from 30 countries, including 150 papers submitted to the FGCN 2010 Special Sessions. The submitted papers went through a rigorous reviewing process: 395 of the 1,630 papers were accepted for FGIT 2010, while 70 papers were accepted for the FGCN 2010 Special Sessions. Of the 70 papers, 6 were selected for the special FGIT 2010 volume published by Springer in the LNCS series. Fifty-one papers are published in this volume, and 13 papers were withdrawn for technical reasons.

We would like to acknowledge the great effort of the FGCN 2010 International Advisory Board and Special Session Co-chairs, as well as all the organizations and individuals who supported the idea of publishing this volume of proceedings, including SERSC and Springer. Also, the success of the conference would not have been possible without the huge support from our sponsors and the work of the Organizing Committee.

We are grateful to the following keynote speakers who kindly accepted our invitation: Hojjat Adeli (Ohio State University), Ruay-Shiung Chang (National Dong Hwa University), and Andrzej Skowron (University of Warsaw). We would also like to thank all plenary speakers for their valuable contributions.

We would like to express our greatest gratitude to the authors and reviewers of all paper submissions, as well as to all attendees, for their input and participation.
Last but not least, we give special thanks to Rosslin John Robles and Maricel Balitanas, graduate students of Hannam University, who contributed to the editing of this volume with great passion.
December 2010
Tai-hoon Kim Thanos Vasilakos Kouichi Sakurai Yang Xiao Gansen Zhao Dominik Ślęzak
Organization
General Co-chairs
Alan Chin-Chen Chang, National Chung Cheng University, Taiwan
Thanos Vasilakos, University of Western Macedonia, Greece
MingChu Li, Dalian University of Technology, China
Kouichi Sakurai, Kyushu University, Japan
Chunming Rong, University of Stavanger, Norway
Program Co-chairs
Yang Xiao, University of Alabama, USA
Charalampos Z. Patrikakis, National Technical University of Athens, Greece
Tai-hoon Kim, Hannam University, Korea
Gansen Zhao, Sun Yat-sen University, China

International Advisory Board
Wai-chi Fang, National Chiao Tung University, Taiwan
Hsiao-Hwa Chen, National Sun Yat-sen University, Taiwan
Han-Chieh Chao, National Ilan University, Taiwan
Gongzhu Hu, Central Michigan University, USA
Byeong-Ho Kang, University of Tasmania, Australia
Aboul Ella Hassanien, Cairo University, Egypt
Publicity Co-chairs
Ching-Hsien Hsu, Chung Hua University, Taiwan
Houcine Hassan, Polytechnic University of Valencia, Spain
Yan Zhang, Simula Research Laboratory, Norway
Damien Sauveron, University of Limoges, France
Qun Jin, Waseda University, Japan
Irfan Awan, University of Bradford, UK
Muhammad Khurram Khan, King Saud University, Saudi Arabia
Publication Chair
Maria Lee, Shih Chien University, Taiwan
Special Session Co-chairs
Hong Kook Kim, Gwangju Institute of Science and Technology, Korea
Young-uk Chung, Kwangwoon University, Korea
Suwon Park, Kwangwoon University, Korea
Kamaljit I. Lakhtaria, Atmiya Institute of Technology and Science, India
Marjan Kuchaki Rafsanjani, Shahid Bahonar University of Kerman, Iran
Dong Hwa Kim, Hanbat University, Korea
Table of Contents – Part II
Congestion Avoidance and Energy Efficient Routing Protocol for WSN Healthcare Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Babak Esmailpour, Abbas Ali Rezaee, and Javad Mohebbi Najm Abad
1
An Efficient Method for Detecting Misbehaving Zone Manager in MANET . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Marjan Kuchaki Rafsanjani, Farzaneh Pakzad, and Sanaz Asadinia
11
Query Answering Driven by Collaborating Agents . . . . . . . . . . . . . . . . . . . . Agnieszka Dardzinska
22
Attribute-Based Access Control for Layered Grid Resources . . . . . . . . . . . Bo Lang, Hangyu Li, and Wenting Ni
31
A Local Graph Clustering Algorithm for Discovering Subgoals in Reinforcement Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Negin Entezari, Mohammad Ebrahim Shiri, and Parham Moradi
41
Automatic Skill Acquisition in Reinforcement Learning Agents Using Connection Bridge Centrality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Parham Moradi, Mohammad Ebrahim Shiri, and Negin Entezari
51
Security Analysis of Liu-Li Digital Signature Scheme . . . . . . . . . . . . . . . Chenglian Liu, Jianghong Zhang, and Shaoyi Deng
63
An Optimal Method for Detecting Internal and External Intrusion in MANET . . . . . . . . . . . . . . . Marjan Kuchaki Rafsanjani, Laya Aliahmadipour, and Mohammad M. Javidi
71
SNMP-SI: A Network Management Tool Based on Slow Intelligence System Approach . . . . . . . . . . . . . . . Francesco Colace, Massimo De Santo, and Salvatore Ferrandino
83
Intrusion Detection in Database Systems . . . . . . . . . . . . . . . Mohammad M. Javidi, Mina Sohrabi, and Marjan Kuchaki Rafsanjani
93
A Secure Routing Using Reliable 1-Hop Broadcast in Mobile Ad Hoc Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Seungjin Park and Seong-Moo Yoo
102
A Hybrid Routing Algorithm Based on Ant Colony and ZHLS Routing Protocol for MANET . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Marjan Kuchaki Rafsanjani, Sanaz Asadinia, and Farzaneh Pakzad
112
Decision-Making Model Based on Capability Factors for Embedded Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hamid Reza Naji, Hossein Farahmand, and Masoud RashidiNejad
123
Socio-Psycho-Linguistic Determined Expert-Search System (SPLDESS) Development with Multimedia Illustration Elements . . . . . . . . . . . . . . . . . . Vasily Ponomarev
130
A Packet Loss Concealment Algorithm Robust to Burst Packet Loss Using Multiple Codebooks and Comfort Noise for CELP-Type Speech Coders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Nam In Park, Hong Kook Kim, Min A. Jung, Seong Ro Lee, and Seung Ho Choi
138
Duration Model-Based Post-Processing for the Performance Improvement of a Keyword Spotting System . . . . . . . . . . . . . . . Min Ji Lee, Jae Sam Yoon, Yoo Rhee Oh, Hong Kook Kim, Song Ha Choi, Ji Woon Kim, and Myeong Bo Kim
148
Complexity Reduction of WSOLA-Based Time-Scale Modification Using Signal Period Estimation . . . . . . . . . . . . . . . Duk Su Kim, Young Han Lee, Hong Kook Kim, Song Ha Choi, Ji Woon Kim, and Myeong Bo Kim
155
A Real-Time Audio Upmixing Method from Stereo to 7.1-Channel Audio . . . . . . . . . . . . . . . Chan Jun Chun, Young Han Lee, Yong Guk Kim, Hong Kook Kim, and Choong Sang Cho
162
Statistical Model-Based Voice Activity Detection Using Spatial Cues and Log Energy for Dual-Channel Noisy Speech Recognition . . . . . . . . . . . . . . . Ji Hun Park, Min Hwa Shin, and Hong Kook Kim
172
3D Sound Techniques for Sound Source Elevation in a Loudspeaker Listening Environment . . . . . . . . . . . . . . . Yong Guk Kim, Sungdong Jo, Hong Kook Kim, Sei-Jin Jang, and Seok-Pil Lee
180
Integrated Framework for Information Security in Mobile Banking Service Based on Smart Phone . . . . . . . . . . . . . . . Yong-Nyuo Shin and Myung Geun Chun
188
A Design of the Transcoding Middleware for the Mobile Browsing Service . . . . . . . . . . . . . . . Sungdo Park, Hyokyung Chang, Bokman Jang, Hyosik Ahn, and Euiin Choi
198
A Study of Context-Awareness RBAC Model Using User Profile on Ubiquitous Computing . . . . . . . . . . . . . . . Bokman Jang, Sungdo Park, Hyokyung Chang, Hyosik Ahn, and Euiin Choi
205
Challenges and Security in Cloud Computing . . . . . . . . . . . . . . . Hyokyung Chang and Euiin Choi
214
3D Viewer Platform of Cloud Clustering Management System: Google Map 3D . . . . . . . . . . . . . . . Sung-Ja Choi and Gang-Soo Lee
218
Output Current-Voltage Characteristic of a Solar Concentrator . . . . . . . . . . . . . . . Dong-Gyu Jeong, Do-Sun Song, and Young-Hun Lee
223
Efficient Thread Labeling for Monitoring Programs with Nested Parallelism . . . . . . . . . . . . . . . Ok-Kyoon Ha, Sun-Sook Kim, and Yong-Kee Jun
227
A Race Healing Framework in Simulated ARINC-653 . . . . . . . . . . . . . . . Guy Martin Tchamgoue, In-Bon Kuh, Ok-Kyoon Ha, Kyong-Hoon Kim, and Yong-Kee Jun
238
A K-Means Shape Classification Algorithm Using Shock Graph-Based Edit Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Solima Khanam, Seok-Woo Jang, and Woojin Paik
247
Efficient Caching Scheme for Better Context Inference in Intelligent Distributed Surveillance Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Soomi Yang
255
A System Implementation for Cooperation between UHF RFID Reader and TCP/IP Device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sang Hoon Lee and Ik Soo Jin
262
Study of Host-Based Cyber Attack Precursor Symptom Detection Algorithm . . . . . . . . . . . . . . . Jae-gu Song, Jong hyun Kim, Dongil Seo, Wooyoung Soh, and Seoksoo Kim
268
Design of Cyber Attack Precursor Symptom Detection Algorithm through System Base Behavior Analysis and Memory Monitoring . . . . . . . . . . . . . . . Sungmo Jung, Jong hyun Kim, Giovanni Cagalaban, Ji-hoon Lim, and Seoksoo Kim
276
The Improved 4-PSK 4-State Space-Time Trellis Code with Two Transmit Antennas . . . . . . . . . . . . . . . Ik Soo Jin
284
A Study on Efficient Mobile IPv6 Fast Handover Scheme Using Reverse Binding Mechanism . . . . . . . . . . . . . . . Randy S. Tolentino, Kijeong Lee, Sung-gyu Kim, Miso Kim, and Byungjoo Park
291
A Software Framework for Optimizing Smart Resources in the Industrial Field . . . . . . . . . . . . . . . Dongcheul Lee and Byungjoo Park
301
Automatic Image Quality Control System . . . . . . . . . . . . . . . . . . . . . . . . . . . Jee-Youl Ryu, Sung-Woo Kim, Seung-Un Kim, and Deock-Ho Ha
311
Programmable RF System for RF System-on-Chip . . . . . . . . . . . . . . . . . . . Jee-Youl Ryu, Sung-Woo Kim, Dong-Hyun Lee, Seung-Hun Park, Jung-Hoon Lee, Deock-Ho Ha, and Seung-Un Kim
316
Development of a Mobile Language Learning Assistant System Based on Smartphone . . . . . . . . . . . . . . . Jin-il Kim, Young-Hun Lee, and Hee-Hyol Lee
321
Implementation of the Sensor Node Hardware Platform for an Automatic Stall Management . . . . . . . . . . . . . . . Yoonsik Kwak, Donghee Park, Jiwon Kwak, Dongho Kwak, Sangmoon Park, Kijeong Kil, Minseop Kim, Jungyoo Han, TaeHwan Kim, and SeokIl Song
330
A Study on the Enhancement of Positioning Accuracy Performance Using Interrogator Selection Schemes over Indoor Wireless Channels . . . . . . . . . . . . . . . Seungkeun Park and Byeong Gwon Kang
335
A Fully Parallel, High-Speed BPC Hardware Architecture for the EBCOT in JPEG 2000 . . . . . . . . . . . . . . . Dong-Hwi Woo, Kyeong-Ryeol Bae, Hyeon-Sic Son, Seung-Ho Ok, Yong Hwan Lee, and Byungin Moon
343
Implementating Grid Portal for Scientific Job Submission . . . . . . . . . . . . . . . Arun D. Gangarde and Shrikant. S. Jadhav
347
A Comprehensive Performance Comparison of On-Demand Routing Protocols in Mobile Ad-Hoc Networks . . . . . . . . . . . . . . . Jahangir khan and Syed Irfan Hayder
354
Preserving Energy Using Link Protocol in Wireless Networks . . . . . . . . . . Anita Kanavalli, T.L. Geetha, P. Deepa Shenoy, K.R. Venugopal, and L.M. Patnaik
370
Trust Based Routing in Ad Hoc Network . . . . . . . . . . . . . . . . . . . . . . . . . . . Mikita V. Talati, Sharada Valiveti, and K. Kotecha
381
Routing in Ad Hoc Network Using Ant Colony Optimization . . . . . . . . . . Pimal Khanpara, Sharada Valiveti, and K. Kotecha
393
Non-repudiation in Ad Hoc Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Purvi Tandel, Sharada Valiveti, K.P. Agrawal, and K. Kotecha
405
The Vehicular Information Space Framework . . . . . . . . . . . . . . . . . . . . . . . . Vivian Prinz, Johann Schlichter, and Benno Schweiger
416
Effectiveness of AODV Protocol under Hidden Node Environment . . . . . . Ruchi Garg, Himanshu Sharma, and Sumit Kumar
432
Prevention of Malicious Nodes Communication in MANETs by Using Authorized Tokens . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N. Chandrakant, P. Deepa Shenoy, K.R. Venugopal, and L.M. Patnaik
441
Performance Evaluation of FAST TCP Traffic-Flows in Multihomed MANETs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Mumajjed Ul Mudassir and Adeel Akram
450
Fault Tolerant Implementation of Xilinx Vertex FPGA for Sensor Systems through On-Chip System Evolution . . . . . . . . . . . . . . . . . . . . . . . . . S.P. Anandaraj, R. Naveen Kumar, S. Ravi, and S.S.V.N. Sharma
459
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
469
Table of Contents – Part I
Multiple Object Tracking in Unprepared Environments Using Combined Feature for Augmented Reality Applications . . . . . . . . . . . . . . . Giovanni Cagalaban and Seoksoo Kim
1
Study on the Future Internet System through Analysis of SCADA Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jae-gu Song, Sungmo Jung, and Seoksoo Kim
10
A Novel Channel Assignment Scheme for Multi-channel Wireless Mesh Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yan Xia, Zhenghu Gong, and Yingzhi Zeng
15
Threshold Convertible Authenticated Encryption Scheme for Hierarchical Organizations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chien-Lung Hsu, Yu-Li Lin, Tzong-Chen Wu, and Chain-Hui Su
23
An Active Queue Management for QoS Guarantee of the High Priority Service Class . . . . . . . . . . . . . . . Hyun Jong Kim, Jae Chan Shim, Hwa-Suk Kim, Kee Seong Cho, and Seong Gon Choi
37
A Secured Authentication Protocol for SIP Using Elliptic Curves Cryptography . . . . . . . . . . . . . . . Tien-ho Chen, Hsiu-lien Yeh, Pin-chuan Liu, Han-chen Hsiang, and Wei-kuan Shih
46
New Mechanism for Global Mobility Management Based on MPLS LSP in NGN . . . . . . . . . . . . . . . Myoung Ju Yu, Kam Yong Kim, Hwa Suk Kim, Kee Seong Cho, and Seong Gon Choi
56
A Fault-Tolerant and Energy Efficient Routing in a Dense and Large Scale Wireless Sensor Network . . . . . . . . . . . . . . . Seong-Yong Choi, Jin-Su Kim, Yang-Jae Park, Joong-Kyung Ryu, Kee-Wook Rim, and Jung-Hyun Lee
66
Network Management Framework for Wireless Sensor Networks . . . . . . . . Jaewoo Kim, HahnEarl Jeon, and Jaiyong Lee
76
FDAN: Failure Detection Protocol for Mobile Ad Hoc Networks . . . . . . . Haroun Benkaouha, Abdelkrim Abdelli, Karima Bouyahia, and Yasmina Kaloune
85
Interference Avoiding Radio Resource Allocation Scheme for Multi-hop OFDMA Cellular Networks with Random Topology . . . . . . . . . . . . . . . . . . Sunggook Lim and Jaiyong Lee
95
Topology Control Method Using Adaptive Redundant Transmission Range in Mobile Wireless Sensor Network . . . . . . . . . . . . . . . . . . . . . . . . . . . MyungJun Youn, HahnEarl Jeon, SeogGyu Kim, and Jaiyong Lee
104
Timer and Sequence Based Packet Loss Detection Scheme for Efficient Selective Retransmission in DCCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . BongHwan Oh, Jechan Han, and Jaiyong Lee
112
Transposed UL-PUSC Subcarrier Allocation Technique for Channel Estimation in WiMAX . . . . . . . . . . . . . . . Maged M. Khattab, Hesham M. EL-Badawy, and Mohamed A. Aboul-Dahab
121
Load Performance Evaluation of the SSD According to the Number of Concurrent Users . . . . . . . . . . . . . . . Seung-Kook Cheong and Dae-Sik Ko
132
Experimental Investigation of the Performance of Vertical Handover Algorithms between WiFi and UMTS Networks . . . . . . . . . . . . . . . Stefano Busanelli, Marco Martalò, Gianluigi Ferrari, Giovanni Spigoni, and Nicola Iotti
137
Next Generation RFID-Based Medical Service Management System Architecture in Wireless Sensor Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . Randy S. Tolentino, Kijeong Lee, Yong-Tae Kim, and Gil-Cheol Park
147
A Study on Architecture of Malicious Code Blocking Scheme with White List in Smartphone Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Kijeong Lee, Randy S. Tolentino, Gil-Cheol Park, and Yong-Tae Kim
155
An Authentication Protocol for Mobile IPTV Users Based on an RFID-USB Convergence Technique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yoon-Su Jeong and Yong-Tae Kim
164
Design of a Software Configuration for Real-Time Multimedia Group Communication; HNUMTP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Gil-Cheol Park
172
Recognition Technique by Tag Selection Using Multi-reader in RFID Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bong-Im Jang, Yong-Tae Kim, and Gil-Cheol Park
180
UWB-Based Tracking of Autonomous Vehicles with Multiple Receivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Stefano Busanelli and Gianluigi Ferrari
188
Information System for Electric Vehicle in Wireless Sensor Networks . . . . . . . . . . . . . . . Yujin Lim, Hak-Man Kim, and Sanggil Kang
199
Maximizing Minimum Distance to Improve Performance of 4-D PSK Modulator for Efficient Wireless Optical Internet Access and Digital Modulation . . . . . . . . . . . . . . . Hae Geun Kim
207
Implementation of the Vehicle Black Box Using External Sensor and Networks . . . . . . . . . . . . . . . Sung-Hyun Back, Jang-Ju Kim, Mi-Jin Kim, Hwa-Sun Kim, You-Sin Park, and Jong-Wook Jang
217
Implementation of a SOA-Based Service Deployment Platform with Portal . . . . . . . . . . . . . . . Chao-Tung Yang, Shih-Chi Yu, Chung-Che Lai, Jung-Chun Liu, and William C. Chu
227
A Mobile GPS Application: Mosque Tracking with Prayer Time Synchronization . . . . . . . . . . . . . . . Rathiah Hashim, Mohammad Sibghotulloh Ikhmatiar, Miswan Surip, Masiri Karmin, and Tutut Herawan
237
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
247
Congestion Avoidance and Energy Efficient Routing Protocol for WSN Healthcare Applications

Babak Esmailpour¹, Abbas Ali Rezaee², and Javad Mohebbi Najm Abad¹

¹ Islamic Azad University-Quchan Branch, Iran
² Faculty of Payame Noor University, Mashhad, Iran
[email protected], [email protected], [email protected]

Abstract. Recent advances in wireless sensor technology facilitate the development of remote healthcare systems, which can significantly reduce healthcare costs. The use of general and efficient routing protocols for healthcare wireless sensor networks (HWSN) is of crucial significance. One of the critical issues is to assure the timely delivery of life-critical data in the resource-constrained WSN environment. Energy and several other parameters for HWSN are considered here. In this paper, a data-centric routing protocol that takes end-to-end delay, reliability, energy consumption, lifetime and fairness into account is presented. The proposed protocol, called HREEP (Healthcare REEP), forwards traffics with different priorities and QoS requirements based on constraint-based routing. We study the performance of HREEP using different scenarios. Simulation results show that HREEP has achieved its goals.

Keywords: Clustering, Healthcare Application, Congestion Avoidance, Routing Protocol, Wireless Sensor Networks.
1 Introduction

Healthcare-aware wireless sensor networks (HWSN), following wireless sensor networks, have received great attention recently, and their growing range of applications has increased their importance. Access to low-cost hardware such as CMOS cameras and microphones has driven the expansion of healthcare-aware wireless sensor networks. An HWSN consists of wireless nodes that can transmit healthcare-relevant traffic in addition to sensing healthcare-relevant events. Advances in hardware now make it possible to equip small nodes with the necessary devices [1,2]. Protocols designed for WSNs lose a proportion of their efficiency if used directly for HWSNs, although the two share many characteristics. With respect to HWSN characteristics, their protocols should be designed in a cross-layer manner [3]. Several of those characteristics are listed below:

- Application dependency: designing HWSN protocols is completely dependent on the application. Application characteristics determine the goals and the crucial parameters.
- Energy consumption efficiency: like wireless sensor network nodes, nodes designed for healthcare-aware wireless sensor networks have limited primary energy resources and mostly cannot be recharged (or recharging a node is not economically feasible), so energy consumption is still a fundamental parameter.
- Capability of forwarding data with different real-time requirements: for various reasons, traffics with different priorities are forwarded in healthcare-aware wireless sensor networks. Protocols should be capable of sending these traffics simultaneously so that each traffic meets its own real-time requirements.
- Ability to send data with different reliabilities: the traffics of healthcare-aware wireless sensor networks need different reliabilities, and the protocols of these networks should be capable of sending them.

T.-h. Kim et al. (Eds.): FGCN 2010, Part II, CCIS 120, pp. 1–10, 2010. © Springer-Verlag Berlin Heidelberg 2010
In this paper, we focus only on the issue of routing in healthcare WSNs; in particular, on large-scale medical disaster response applications. The proposed protocol, HREEP (Healthcare REEP), is a data-centric routing protocol that takes end-to-end delay, reliability, energy consumption, network lifetime and fairness into consideration. As is known, these parameters are not all independent; for example, energy consumption and network lifetime are inversely related. The main goal of the proposed protocol is to control these parameters through a constraint-based routing process. The parameters that matter for HREEP are also important for wireless sensor networks in general, but since HWSNs are a subset of WSNs, the parameters are weighted to suit HWSNs [4]. Depending on the application, the delay parameter has different importance for HWSNs. In real-time applications, information should reach the destination within an appropriate time, otherwise its value decreases (in hard real-time applications, data received outside the legal interval is worthless). Another point worth mentioning is that different data types have different delay thresholds; therefore, the network's reaction should match the data type. Energy consumption, lifetime and fairness are the parameters relevant to a protocol's energy efficiency. Indeed, prolonging lifetime is the essential goal, and the two main elements for increasing lifetime are consuming energy efficiently and enforcing fairness. Fairness means consuming the energy of network nodes evenly: when the residual energy of the nodes has low variance, network lifetime is prolonged. To achieve fairness, the nodes' energy should be used equally. If one part of the network is used more than the others, its energy will be depleted sooner and the network will be partitioned; once a network is partitioned, its energy consumption increases severely. Using different paths to send data to the sink improves fairness. When network lifetime is prolonged, the network's services can obviously be used longer [5].
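The fairness goal described above (keeping the nodes' residual energy close to uniform by spreading traffic over different paths) can be illustrated with a minimal sketch. The next-hop rule and the variance metric below are illustrative assumptions for this discussion, not HREEP's actual routing metric.

```python
# Illustrative sketch of fairness-aware next-hop selection: among the
# candidate next hops, prefer the one with the most residual energy.
# This keeps the variance of node energies low and so tends to prolong
# network lifetime. The rule is an assumption, not HREEP's actual metric.
def pick_next_hop(candidates):
    """candidates: dict mapping node id -> residual energy (J)."""
    return max(candidates, key=candidates.get)

def energy_variance(energies):
    """Lower variance across nodes indicates fairer energy consumption."""
    mean = sum(energies) / len(energies)
    return sum((e - mean) ** 2 for e in energies) / len(energies)
```

Repeatedly routing through the highest-energy neighbor spreads the load over different paths, which is exactly the behavior the paragraph above motivates.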
Using different paths to send data to sink makes the fairness performance better. When network lifetime is prolonged, apparently we can use its services longer [5]. The Proposed protocol is composed of the following 4 phases; request dissemination, event occurrence report, route establishment and data forwarding. The rest of the paper is organized as follows: in section 2 related works will be discussed. In section 3, HREEP is presented in detail. In section 4, we will evaluate proposed protocol efficiency and finally in section 5 we conclude the paper.
2 Related Works

HREEP is a data-centric protocol. Data-centric protocols form a major class of routing protocols in wireless sensor networks [2, 3]. Many successful routing protocols have been presented for WSNs and HWSNs. Directed Diffusion and SPIN are two well-known WSN routing protocols that have received much attention. In both, requests are disseminated through the network and routing is done based on data type. Each of these protocols has been improved many times, so that each is known as a family; see for example [7]. SPIN has several flaws; for example, it is not scalable and it is not energy efficient. Routing protocols for healthcare-aware wireless sensor networks use different methods to perform their tasks. HREEP builds routes based on network conditions and traffic requirements at the same time. The proposed protocol uses many of the ideas introduced in REEP [8]. Like other data-centric protocols, REEP has several phases: sense event propagation, information event propagation and request event propagation. In the sense event propagation phase, the sink sends its requests to all of the network nodes. In the information event propagation phase, each node sends its sensed data to the sink. In the next phase, request event propagation, the sink responds to all of the nodes that sent their sensed data, and during this communication the routes are established. This phasing is broadly similar to that of other data-centric routing protocols [9][10][11].
3 The Proposed Protocol

The data-centric protocol HREEP is composed of five phases: request dissemination, event occurrence report, route establishment, data forwarding and route recovery. The structure of the proposed protocol is shown in Fig. 1. In phase 1, the sink floods its request to all network nodes; this phase is discussed in Section 3.1. The remaining four phases, event occurrence report, route establishment, data forwarding and route recovery, are presented in detail in Sections 3.2, 3.3, 3.4 and 3.5, respectively. We have designed the proposed protocol around the characteristics of healthcare-aware wireless sensor networks, which are used for many different applications [6]. Using one network for several applications is economical, because the applications share one hardware infrastructure, which reduces cost. The proposed protocol can send traffics with different QoS requirements.

Fig. 1. Proposed protocol structure (Phase 1: Request Propagation; Phase 2: Event Report; Phase 3: Route Establishing; Phase 4: Data Transmission; Phase 5: Route Recovery)

For a more tangible
discussion, we will present an example. Assume that HWSN is used to monitor one patient. There are two traffics in the mentioned network. To monitor vital limbs, high priority report should send to sink through network. But for other events (for example periodical events to monitor other limbs), network nodes use low priority traffic. 3.1 Request Dissemination Phase In this phase sink should flood its requests to entire network nodes. Following points should be considered for this phase packets: -
- Priority of the requesting application: in an HWSN, a single network may forward more than one traffic class with different characteristics, so the traffic priority must be specified before forwarding.
- Time: packets belonging to one application may be propagated through the network at different times, so the forwarding time should be recorded in every packet. Furthermore, many requests have a lifetime; once the lifetime expires, the request is no longer valid.
- Geographic confine of the destination nodes: this field is not needed for applications whose requests are sent to all network nodes.
- Request specification: each request describes the destination nodes' task and the way they should react to the event.
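The fields above can be collected into a simple packet structure. The following is only an illustrative sketch; the class and field names are our own assumptions, not part of the HREEP specification:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class RequestPacket:
    """Phase-1 request flooded by the sink (illustrative field names)."""
    priority: int          # traffic priority, e.g. 0 = high, 1 = low
    sent_at: float         # forwarding time of this packet
    lifetime: float        # request expires this long after sent_at
    region: Optional[Tuple[float, float, float, float]]  # geographic confine; None = whole network
    task: str              # what destination nodes must sense and report

    def is_expired(self, now: float) -> bool:
        # once the lifetime expires, the request is no longer valid
        return now - self.sent_at > self.lifetime

pkt = RequestPacket(priority=0, sent_at=0.0, lifetime=30.0,
                    region=None, task="report abnormal heart rate")
print(pkt.is_expired(now=45.0))  # True: the request is no longer valid
```

A node receiving an expired request would simply drop it instead of processing or re-flooding it.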
3.2 Event Occurrence Report Phase Once the request dissemination phase is complete, all network nodes know their task. When a node senses an event relevant to its task, it should report the features of the sensed event to the sink. The node must include in its report all the specifications outlined in the task characteristics, so that the sink can react properly. In this phase only the information describing the occurred event is sent to the sink; the actual event data is sent later, in the data forwarding phase. This phase also prepares the ground for packet routing: the node creates a packet, places the data describing the sensed event in it, and, as the packet travels toward the sink, the routing tables needed for data routing are built in the intermediate nodes. The final routing is performed in the route establishment phase; in the second phase each node merely gathers the information needed to complete its permanent routing table, and the final routing tables themselves are created in the third phase. As soon as the packet (called the second-phase packet) is created, the node sends it to all its neighbors. If the nodes are aware of their positions, the packet is sent only to those neighbors that are closer to the sink than the sending node. Although this reduces the protocol's energy consumption, it cannot be applied everywhere because it requires a localization process. Note that in applications where the request is sent to only one part of the network, the nodes are necessarily aware of their positions.
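The position-aware forwarding rule just described can be sketched as follows; the function and variable names are our own assumptions:

```python
import math

def forward_targets(node_pos, sink_pos, neighbors):
    """Return the neighbors a second-phase packet is sent to when nodes know
    their positions: only neighbors strictly closer to the sink than the
    sending node. `neighbors` maps neighbor id -> (x, y) position."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    my_dist = dist(node_pos, sink_pos)
    return [n for n, pos in neighbors.items() if dist(pos, sink_pos) < my_dist]

targets = forward_targets((5.0, 5.0), (0.0, 0.0),
                          {"A": (2.0, 2.0), "B": (8.0, 8.0), "C": (4.0, 1.0)})
print(targets)  # ['A', 'C']: both are closer to the sink than the sender
```

Without localization, the node would instead broadcast to all of its neighbors, as in the basic scheme.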
Congestion Avoidance and Energy Efficient Routing Protocol
On receiving a second-phase packet, each node creates a record in a routing table called the second-phase table. The record stores the packet's priority (matching the traffic priority and the specified event), the source node, the sending node, the length of the traversed path and the number of traversed hops. In the proposed protocol each node has an ID, which is placed in every sent packet. The traversed path length is the sum of the distances the packet has covered from the source node to the current node. After inserting a record, the node rebroadcasts the packet to all its neighbors, and this procedure continues until the packet reaches the sink. Note that the second-phase table may hold more than one record for a given source node, since the second-phase packet can reach a node over different routes; packets whose fields are identical to an existing record, however, are ignored. At the end of the second phase each node owns a second-phase table, which is used to determine the final route in the third phase: its records describe the possible paths between that node and the event-sensing source node. 3.3 Route Establishment Phase After the sink has received the second-phase packets, it sends back an acknowledgement packet (called the third-phase packet) to the source node, telling it to send its gathered data to the sink. An event may be sensed by more than one sensor node; in that case the sink chooses one or more nodes for the final data sending based on the data they sent. Each second-phase packet specifies its own sensing accuracy; in healthcare applications, for instance, the received vital signals determine the sensing accuracy. Based on these criteria, a sensor is chosen to report the sensed event.
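The second-phase table bookkeeping described above can be sketched as follows; this is a minimal sketch and all names are our own assumptions:

```python
class SecondPhaseTable:
    """Per-node second-phase routing table: records are kept in arrival
    order (so record 0 is the earliest route heard, later used for
    high-priority forwarding), and exact-duplicate packets are ignored."""

    def __init__(self):
        self.records = []   # each: (priority, source, sender, path_len, hops)

    def insert(self, priority, source, sender, path_len, hops):
        rec = (priority, source, sender, path_len, hops)
        if rec in self.records:        # same fields already seen: ignore
            return False
        self.records.append(rec)
        return True

    def routes_from(self, source):
        """All candidate routes heard for one event-sensing source node."""
        return [r for r in self.records if r[1] == source]

tbl = SecondPhaseTable()
tbl.insert(0, "S", "A", 12.0, 3)
tbl.insert(0, "S", "B", 15.0, 4)
tbl.insert(0, "S", "A", 12.0, 3)       # duplicate, ignored
print(len(tbl.routes_from("S")))       # 2
```

Keeping arrival order matters because, in the third phase, the first record per source is the one chosen for real-time traffic.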
After the source node is chosen, the third-phase packet is sent toward it. As the third-phase packet traverses the path, it creates the third-phase table in the intermediate nodes; the third-phase routing table is the final routing table, which makes routing of the data sent by the source node possible. Which acknowledgement is sent depends on the priority of the sensed event: one acknowledgement is used for high priority (real-time traffic) and another for low priority (non-real-time traffic). To send the high-priority acknowledgement, the sink inspects its second-phase routing table and chooses the first record. Second-phase packets are placed in the second-phase routing table in order of arrival: whenever a node receives a second-phase packet, it places it in the first available record, so the order of record numbers in the table reflects the order in which the packets were received. Because time is of great importance for real-time applications, the first record of the second-phase table is chosen; it is, by construction, the record that was created first. Record selection is always restricted by source, however: only records whose source node is the node chosen by the sink are considered. In the third phase every node builds two routing tables: a third-phase routing table for high-priority traffic and one for low-priority traffic. During this phase, both tables are completed. When a node in phase three receives a packet with
high priority, it creates a record for it in the high-priority third-phase routing table. This record holds the following parameters: the sending node, the receiving node, the source node and the type of function. As mentioned above, every node chooses the first record of its second-phase routing table as the next hop for the high-priority third-phase packet, and this process continues until the packet arrives at its source. Thus, at the end of the third phase, the real-time third-phase routing table holds exactly one record per source. The discussion so far has concerned high-priority traffic; in the rest of this section, building the low-priority third-phase table is elucidated. Among the second-phase routing records, the sink considers the records relating to the source. For each record i, a probability Pi is calculated through formula (1):
Pi = TD / HC                                                           (1)
TD is the field holding the length of the record's path and HC is the number of hops of that path; Pi is the probability of selecting the record as the next hop for the low-priority third-phase packet. After Pi is determined for each record with the specified source node, two records are chosen at random (according to these probabilities) and the low-priority third-phase packet is sent to both. Selecting different paths is meant to achieve fairness in the energy consumption of the network nodes: without this mechanism, all traffic would be sent over one fixed path, similar to the mechanism used in the REEP protocol, which prevents fair energy consumption across the nodes. Each node registers the chosen node in its low-priority routing table, and in the next stage the same procedure used by the sink selects the next two hops and the third-phase packet is sent to them. All the packet characteristics are registered in the record of the non-real-time third-phase table. The pseudocode for the third phase is presented in the accompanying figure. 3.4 The Data Forwarding Phase At the end of the third phase the real-time and non-real-time routing tables have been created; each node owns both a real-time and a non-real-time third-phase routing table. Depending on the type of event sensed, the source node (the event-sensing node) can send its data to the sink once it has received the real-time acknowledgement (the real-time third-phase packet) or the non-real-time acknowledgement (the non-real-time third-phase packet). As mentioned earlier, all nodes, including the source nodes, have both types of routing table. The real-time third-phase routing table is used to send real-time data and the non-real-time third-phase routing table to send non-real-time data. For every source, the real-time third-phase routing table holds only one record in the direction of the sink.
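Formula (1) and the probabilistic choice of two next-hop records can be sketched as follows. This is only an illustrative sketch under our own naming assumptions; the sampling-without-replacement loop is one straightforward way to pick two distinct records with probability proportional to Pi:

```python
import random

def select_next_hops(records, k=2, rng=random):
    """Pick k distinct records with probability proportional to
    Pi = TD / HC (formula (1)). Each record is (next_hop, TD, HC)."""
    pool = [(hop, td / hc) for hop, td, hc in records]  # (hop, unnormalized Pi)
    chosen = []
    for _ in range(min(k, len(pool))):
        total = sum(w for _, w in pool)
        r = rng.random() * total       # roulette-wheel draw over remaining records
        acc = 0.0
        for i, (hop, w) in enumerate(pool):
            acc += w
            if r <= acc:
                chosen.append(hop)
                pool.pop(i)            # sample without replacement
                break
    return chosen

hops = select_next_hops([("A", 10.0, 2), ("B", 5.0, 5), ("C", 8.0, 4)],
                        k=2, rng=random.Random(7))
print(hops)  # two distinct next hops, weighted by TD/HC
```

A record with a larger Pi is more likely to be chosen, but any record can be, which spreads the low-priority traffic over different paths.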
On receiving real-time traffic from the specified node, every node forwards the data to the next hop using that record. In the non-real-time third-phase routing table, however, there is more than one record per source. Every record has a probability Pj, and the choice of the next hop depends on Pj: the larger the Pj of a record, the higher its chance of selection. Ultimately, one record is selected as the next hop and the data is sent to it. 3.5 Route Recovery Phase During the data transmission phase, congestion may happen, especially near the sink (near-sink nodes are those close to the sink). We use a simple strategy at the near-sink nodes to save energy and avoid congestion at the same time. The hop_count field of every packet serves as our label field: it indicates how far the packet has travelled from the sensing field (the patient's body), and every forwarding node updates it by adding one (hop_count = hop_count + 1). Since our data packets and commands travel along the same route, an intermediate node can compare this field in upstream data packets and downstream commands to decide whether to change the path:

    if (upstream_pk_hopcount > downstream_pk_hopcount)   // the node is nearer the sink
        if (node_energy < threshold)
            change_path()

To change the path, the node sends a control packet to its neighbors; a neighbor whose energy is above the threshold and which has another path takes over the path. This saves energy in the near-sink nodes and avoids congestion, and as a result the network lifetime improves.
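The route-recovery test above can be sketched as a small function; the names are our own assumptions:

```python
def should_change_path(upstream_hops, downstream_hops, node_energy, threshold):
    """Route-recovery check run at an intermediate node: an upstream data
    packet carrying a larger hop count than the downstream command means
    the node is nearer the sink; if such a node is also low on energy, it
    asks a neighbor to take over the path."""
    near_sink = upstream_hops > downstream_hops
    return near_sink and node_energy < threshold

print(should_change_path(7, 2, node_energy=0.1, threshold=0.3))  # True
print(should_change_path(2, 7, node_energy=0.1, threshold=0.3))  # False
```

Only when both conditions hold would the node emit the control packet asking a neighbor with sufficient energy and an alternative path to take over.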
4 The Evaluation of the Performance of the Proposed Protocol In this section the performance of the proposed protocol, HREEP, is examined. REEP is a well-known protocol in the area of wireless sensor networks. Both HREEP and REEP were implemented in the OPNET [12] simulator and their performance was investigated under various scenarios. The network topology is shown in Fig. 2: each body is considered a cluster, and in each cluster a cluster head is determined, which has more resources than the other cluster members. First we examine the two protocols in terms of energy performance. Figure 3 plots the lifetime of the network for different rates; the rates on the horizontal axis are the production rates of the source node. In other words, the data sending rate of the fourth phase is varied and the network lifetime is calculated for every rate. As can be seen in figure 3, for rates under 50 packets/sec the difference between the lifetimes under the two protocols is noteworthy. For example, at data rate 10 the network lifetime is 7 time units with HREEP and 1.5 time units with REEP, which means prolonging the lifetime of the network by more than 100 percent.
In figure 4, fairness in the energy consumption of the network nodes is examined. The horizontal axis is the data sending rate and the vertical axis is the variance of the energy of the network nodes, computed through formula (2):

Dev = Σ_{i=1}^{n} (Energy_i − Ave)^2                                   (2)
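Formula (2) is a straightforward sum of squared deviations from the average node energy; a minimal sketch (the function name is ours):

```python
def energy_deviation(energies):
    """Dev = sum over i of (Energy_i - Ave)^2, formula (2): a lower Dev
    means the protocol balanced the nodes' energy consumption more fairly."""
    ave = sum(energies) / len(energies)        # Ave: mean remaining energy
    return sum((e - ave) ** 2 for e in energies)

print(energy_deviation([1.0, 1.0, 1.0]))  # 0.0, perfectly balanced
print(energy_deviation([0.5, 1.0, 1.5]))  # 0.5
```
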
Fig. 2. Network Topology
The higher the value of Dev for a protocol, the less successful the protocol has been in balancing the energy consumption of the nodes, since the variance of the node energies has increased. As can be seen in figure 4, HREEP has a lower variance, showing a 25 percent decrease. The network lifetime and variance parameters are related: if a better balance in the energy consumption of the nodes is maintained, the lifetime of the network increases under the same conditions. Another fundamental parameter considered in this protocol is end-to-end delay, which is crucially important for healthcare-aware wireless sensor networks. In figures 5 and 6, HREEP and REEP are compared in terms of delay. The delays presented there relate to the sensed data and do not include control data. As can be seen in figure 5, the end-to-end delay for real-time traffic in HREEP (HREEP-RT) is lower than the end-to-end delay for non-real-time traffic (HREEP-NRT). Comparing the numbers in figures 5 and 6, the delay of HREEP-RT is clearly lower than that of REEP, while the delays of REEP and HREEP-NRT are almost equal. At the beginning of the graphs in figures 5 and 6, HREEP-RT, HREEP-NRT and REEP all show a marked increase in delay; the reason is congestion in the routers while the remaining phase-two packets are being sent. Once all phase-two packets have been sent, the delay approaches stability.

Fig. 3. Lifetime comparison between HREEP and REEP

Fig. 4. Fairness comparison between HREEP and REEP

Fig. 5. Delay comparison between HREEP-NRT and HREEP-RT

Fig. 6. Delay for REEP

In a stable condition the delays of REEP and HREEP-NRT are very close, and the delay of HREEP-RT is significantly lower than both. RT, or real-time, traffic requires low delay, while NRT traffic is considerably less sensitive to delay. The goal of the protocol is to send real-time traffic with as low a delay as possible and non-real-time traffic with an acceptable delay. In figures 5 and 6 the vertical axis is the delay and the horizontal axis is the packet generation time.
5 Conclusion In this article a congestion-avoiding, energy-efficient routing protocol for healthcare wireless sensor networks was presented. The proposed protocol is data-centric and event-driven, reacting when a sensor on the patient's body raises an alarm, and comprises several phases. The first phase of HREEP was designed to disseminate the demands of the sink; the other phases are, respectively, event occurrence report, route establishment, data forwarding and route recovery. The proposed protocol takes several parameters into account, including end-to-end delay, reliability, energy consumption, network lifetime and fairness in energy consumption. Finally, the performance of HREEP was evaluated by simulation. The simulation results show that the proposed QoS-aware routing protocol achieves its goal of controlling the aforementioned parameters.
References
1. Tubaishat, M., Madria, S.: Sensor Networks: An Overview. IEEE Potentials, 20–23 (2003)
2. Akyildiz, I.F., Su, W., Sankarasubramaniam, Y., Cayirci, E.: A Survey on Sensor Networks. IEEE Communications Magazine, 102–114 (2002)
3. Al-Karaki, J.N., Kamal, A.E.: Routing Techniques in Wireless Sensor Networks: A Survey. IEEE Wireless Communications (2004)
4. Stankovic, J.A., Cao, Q., Doan, T., Fang, L., He, Z., Kiran, R., Lin, S., Son, S., Stoleru, R., Wood, A.: Wireless Sensor Networks for In-Home Healthcare: Potential and Challenges. In: Proc. High Confidence Medical Device Software and Systems (HCMDSS) Workshop (2005)
5. Baker, C.R., Armijo, K., Belka, S., Benhabib, M., Waterbury, A., Leland, E.S., Pering, T., Wright, P.K.: Wireless Sensor Networks for Home Health Care. In: Proc. 21st International Conf. on Advanced Information Networking and Applications Workshops, AINAW 2007 (2007)
6. Aziz, O., Lo, B., King, R., Yang, G.Z., Darzi, A.: Pervasive Body Sensor Network: An Approach to Monitoring the Post-operative Surgical Patient. In: Proc. IEEE International Workshop on Wearable and Implantable Body Sensor Networks, pp. 13–18 (2006)
7. Akkaya, K., Younis, M.: A Survey on Routing Protocols for Wireless Sensor Networks. Department of Computer Science and Electrical Engineering, University of Maryland (2000)
8. Zabin, F., Misra, S., Woungang, I., Rashvand, H.F.: REEP: Data-Centric, Energy-Efficient and Reliable Routing Protocol for Wireless Sensor Networks. IET Commun. 2(8), 995–1008 (2008)
9. Gharavi, H., Kumar, S.P.: Special Issue on Sensor Networks and Applications. Proceedings of the IEEE 91(8) (2003)
10. Shnayder, V., Chen, B.R., Lorincz, K., Fulford-Jones, T.R.F., Welsh, M.: Sensor Networks for Medical Care. Harvard University, Tech. Rep. TR-08-05 (2005)
11. Wood, A., Virone, G., Doan, T., Cao, Q., Selavo, L., Wu, Y., Fang, L., He, Z., Lin, S., Stankovic, J.: ALARM-NET: Wireless Sensor Networks for Assisted-Living and Residential Monitoring. Dept. of Computer Science, University of Virginia, Tech. Rep. CS-2006-11 (2006)
12. OPNET, http://www.opnet.com
An Efficient Method for Detecting Misbehaving Zone Manager in MANET

Marjan Kuchaki Rafsanjani1, Farzaneh Pakzad2, and Sanaz Asadinia3

1 Department of Computer Engineering, Islamic Azad University, Kerman Branch, Kerman, Iran
[email protected]
2 Islamic Azad University, Tiran Branch, Tiran, Iran
[email protected]
3 Islamic Azad University, Khurasgan Branch, Young Researchers Club, Khurasgan, Iran
[email protected]

Abstract. In recent years, one of the wireless technologies that has grown tremendously is the mobile ad hoc network (MANET), in which mobile nodes organize themselves without the help of any predefined infrastructure. MANETs are highly vulnerable to attack due to their open medium, dynamically changing network topology, cooperative algorithms, lack of centralized monitoring and management, and lack of a clear line of defense. In this paper, we report our progress in developing intrusion detection (ID) capabilities for MANETs. In our proposed scheme, a network with a distributed hierarchical architecture is partitioned into zones, each with one zone manager. The zone manager is responsible for monitoring the cluster heads in its zone, and the cluster heads are in charge of monitoring their members. The most important problem, however, is how the trustworthiness of the zone manager itself can be established. We therefore propose a scheme in which the "honest neighbors" of the zone manager validate their zone manager. These honest neighbors prevent false accusations and also give the manager another chance if it misbehaves only once; if the manager repeats its misbehavior, it loses its management role. Our scheme thus improves intrusion detection and provides a more reliable network.

Keywords: Collaborative algorithm, Honest neighbors, Intrusion detection, Zone manager, Mobile Ad hoc Network (MANET).
1 Introduction A mobile ad hoc network is a wireless network with the characteristics of self-organization and self-configuration, so that it can quickly form a new network without the need for any wired infrastructure. Nodes within radio range of each other can communicate directly over wireless links, while those far apart use other nodes as relays. The network topology changes frequently as mobile nodes move into, or out of, each other's vicinity [1],[2]. Thus, a T.-h. Kim et al. (Eds.): FGCN 2010, Part II, CCIS 120, pp. 11–21, 2010. © Springer-Verlag Berlin Heidelberg 2010
MANET is a collection of autonomous nodes that form a dynamic multi-hop radio network for a specific purpose in a decentralized manner [1]. Because of these special characteristics, MANETs are more vulnerable, which brings more security concerns and challenges compared to other networks. Owing to the open medium, the dynamically changing network topology, the lack of central monitoring and the absence of a clear line of defense, a MANET is particularly vulnerable to several types of attack, such as passive eavesdropping, active impersonation and denial of service. An intruder that compromises a mobile node in a MANET can destroy the communication between nodes by broadcasting false routing information, providing incorrect link-state information and flooding other nodes with unnecessary routing traffic. One way of securing a mobile ad hoc network is to apply prevention methods such as encryption and authentication, but past experience has shown that encryption and authentication alone are not sufficient as intrusion prevention. Hence the need arises for a second wall of defense: an intrusion detection system [2],[3]. The idea is that if an intrusion detection system exists when a MANET is intruded upon, the intrusion can be detected as early as possible and the MANET protected before any extensive harm is done. Research efforts are ongoing to develop intrusion detection systems (IDS) that detect intrusions, identify the misbehaving nodes and isolate them from the rest of the network. Moreover, the presence of a detection system will discourage misbehaving nodes from attempting intrusion in the future: an intruder will likely think twice before attempting to break in again [4]. In most IDSs, however, the monitoring nodes or cluster heads are assumed to be valid nodes that correctly initiate the IDS and response systems.
In the real world this assumption may not hold, and we may face misbehaving or malicious monitoring nodes or cluster heads (we treat a malicious node as a misbehaving node). In this paper, we focus on finding misbehaving monitoring nodes and malicious cluster heads: if these nodes misbehave, they can send false information to other nodes or report an innocent node as destructive. In our proposed scheme, the network is partitioned into zones, each with one zone manager responsible for monitoring the cluster heads in its zone. The most important problem is how to establish the integrity of the zone manager; this is done by the "honest neighbors" of the zone manager. We also detect compromised nodes in a cluster using the Algorithm for Detection in a Cluster (ADCLU), which the zone manager likewise uses for detecting malicious cluster heads [4]. The rest of this paper is organized as follows: in the next section, we review related work on intrusion detection for MANETs; in Section 3, we present and explain our intrusion detection scheme; in Section 4, we conclude with a discussion of future work.
2 Preliminaries There are three typical architectures for an intrusion detection system (IDS): stand-alone; distributed and cooperative; and hierarchical [5]. In addition, there is another class that combines these architectures, called the hybrid architecture.
In the stand-alone IDS architecture, every node runs an intrusion detection agent and every decision is based only on information collected at the local node, since there is no cooperation among the nodes in the network; the Watchdog technique [6] is an example. The merit of this architecture is that the intrusion detection process incurs no network overhead, such as audit data exchange. Moreover, this approach reduces the risk of attackers accusing legitimate nodes of misbehaving in order to have them excluded from the network [7]. However, the architecture is of limited use in real environments because, for most types of attack, the information available at an individual node is not enough to detect intrusions. In addition, since every node runs an IDS, resources are required at every node, so this scheme is not suitable for nodes with limited resources. Furthermore, due to the lack of node cooperation, this scheme may fail to detect a misbehaving node in the presence of (a) ambiguous collisions, (b) receiver collisions, (c) limited transmission power, (d) false misbehavior, (e) collusion, and (f) partial dropping [6]. Finally, this scheme offers no security protection. The second type of architecture is the distributed and cooperative model. Zhang and Lee [8] proposed that intrusion detection in MANETs should likewise be distributed and cooperative. As in the stand-alone architecture, every node participates in intrusion detection and response by running an IDS agent, which is responsible for detecting and collecting local events and data to identify possible intrusions, as well as initiating a response independently. However, neighboring IDS agents cooperate in global intrusion detection, through a voting mechanism, when the evidence is inconclusive [2]. The merits of this architecture are as follows: network overhead can be reduced by exchanging data only when it is needed.
The lack of completeness of the local audit data can also be compensated for by querying the intrusion status of neighboring nodes. Although this IDS overcomes some limitations of the stand-alone IDS, it has the following problems. First, cooperative intrusion detection may lead to heavy communication and computation between nodes, degrading network performance. Second, the assumption that data can be shared between trusted nodes does not hold in general, since there are many possible threats in a wireless network environment [7]. Hierarchical intrusion detection architectures have been designed for multi-layered ad hoc network infrastructures, where the network is divided into smaller sub-networks (clusters) with one or more cluster heads responsible for intrusion detection. These cluster heads act as management points, similar to the switches, routers or gateways of traditional wired networks. An IDS agent runs on every node and is responsible for detecting intrusions locally by monitoring local activities. A cluster head is responsible locally for its own node as well as globally for its cluster, e.g. monitoring network packets and initiating a global response when a network intrusion is detected [2],[3],[7]. This type of architecture is the most suitable in terms of information completeness. Moreover, hosting the IDS agent on only some nodes reduces their burden and helps the system conserve overall energy. The price, however, is the network overhead needed to form clusters and exchange audit data, not to mention the relatively long detection time caused by that data exchange.
Moreover, malicious nodes elected as cluster heads could devastate the network. In the zone-based IDS proposed in [9], the network is partitioned into non-overlapping zones. Every node in a zone (intra-zone node) sends an alert message to a gateway node (inter-zone node) by alert flooding, and the gateway node sends out an alarm message at a fixed interval on behalf of the zone. Zone-based IDS cannot detect intrusions in real time because its gateways generate alarms only at fixed intervals. Furthermore, MANET intrusion detection systems use two types of decision making: collaborative and independent. In collaborative decision making, each node participates actively in the intrusion detection procedure; once one node detects an intrusion with strong confidence, it can start a response by initiating a majority vote to determine whether an attack has happened [8]. In the independent decision-making framework, on the other hand, certain nodes are assigned to intrusion detection [10]. These nodes collect intrusion alerts from other nodes and decide whether any node in the network is under attack; they do not need other nodes' participation in decision making [2],[3].
3 The Proposed Scheme Our scheme is inspired by collaborative techniques for intrusion detection in mobile ad hoc networks, which use the collaborative efforts of nodes in a neighborhood to detect a malicious node in that neighborhood [4]. The first step of our scheme is based on Marchang et al.'s technique (the ADCLU algorithm) [4], which detects malicious nodes in a neighborhood of nodes where each pair of nodes may not be within radio range of each other, but where some node has all the other nodes in its one-hop vicinity. Such a neighborhood is identical to a cluster [11]. The technique uses message passing between the nodes. A node called the monitoring node initiates the detection process. Based on the messages it receives during the detection process, each node determines the nodes it suspects to be malicious and sends votes to the monitoring node, which inspects the votes to determine the malicious nodes from among the suspected ones [4]. The authors assumed that the initiating node of this algorithm, i.e. the monitoring node, is not malicious, and that when the monitoring node initiates the detection process by sending out a message to the other nodes, the malicious nodes have no way of knowing that a detection algorithm is in progress. If monitoring nodes misbehave, they can send false information to other nodes, report an innocent node as destructive, or simply never initiate the detection process. It is therefore important that the monitoring node is a valid node. This shortcoming is also seen in many distributed, hierarchical and hybrid intrusion detection systems. In our scheme, the network is divided into zones, each with one zone manager responsible for monitoring the cluster heads in its zone. The zone manager is the heart of control and coordination for every node in the zone.
It maintains the node configurations, records the system status information of each component, and makes
the decisions. The zone manager also monitors the cluster heads, using an extension of the ADCLU algorithm. The second step of our scheme is dedicated to detecting a misbehaving zone manager: the zone manager's neighbors control its activity and report any misbehavior. This scheme creates a reciprocal trust relationship between nodes across the levels of the hierarchy. 3.1 Detecting Malicious Cluster Heads Based on ADCLU The ADCLU algorithm [4] can be used to detect malicious nodes in a set of nodes forming a cluster, defined as a neighborhood of nodes in which some node has all the other nodes as its 1-hop neighbors, as shown in Fig. 1. To present the algorithm we make the following assumptions:
- The wireless links between the nodes are bi-directional.
- When the monitoring node initiates the detection process, the malicious nodes have no way of knowing that a detection algorithm is in progress.
Fig. 1. A neighborhood (cluster) in a MANET consisting of 5 nodes: an edge between two nodes denotes they are within radio range of each other
Step 1: The monitoring node M broadcasts the message RIGHT to its neighbor nodes, asking them to further broadcast the message in their neighborhood.
M broadcasts: (RIGHT)

Step 2: Upon receiving the message RIGHT, each neighbor B of M further broadcasts the message in its neighborhood.
B broadcasts: (X), where X = RIGHT if B is not malicious and X ≠ RIGHT if B is malicious.

Step 3: The monitoring node M then broadcasts a MALICIOUS-VOTE-REQUEST message in its neighborhood.
M broadcasts: (MALICIOUS-VOTE-REQUEST)

Step 4: On receipt of a MALICIOUS-VOTE-REQUEST message from M, each neighbor B of M does the following. Let PA be the message node B received from node A in Step 2 (if node B does not receive any message from A, or if it receives a message different from RIGHT, PA is assigned the default message WRONG). If PA ≠ RIGHT, then B sends a vote for node A being a suspected node to M.
B → M: (VOTE; A)

Step 5: On receipt of the votes in Step 4, the monitoring node does the following:
I. Accept only distinct votes from each of the nodes (by distinct votes, we mean that the monitoring node can accept at most one vote about a suspected node from any node).
M.K. Rafsanjani, F. Pakzad, and S. Asadinia
II. Let NA be the number of votes received for node A. If NA ≥ k, mark node A as malicious. (The monitoring node also gives its vote; k is the threshold value.)

The zone manager can also use this algorithm to determine whether the cluster heads are working properly. But to validate the zone manager itself, we propose a distributed scheme that controls the zone manager and investigates its operation; the zone manager is isolated if any misbehavior is observed, and a new zone manager is then selected.

3.2 Detecting Valid Monitoring Zone Manager

The first zone manager can be selected randomly or by consulting the routing table in DSR. An IDS agent is then installed on the neighboring nodes of the zone manager, and each node runs an IDS independently. However, nodes cooperate with each other to detect ambiguous intrusions. Neighboring nodes must know and trust each other in order to verify the precision of their decisions. The creation of a trusted community is important to ensure the success of MANET operations. A special mechanism needs to be deployed to enable nodes to exchange security associations between them; such a mechanism can also speed up the creation of a trusted community in the network. Each node needs to meet and establish mutual trust with other nodes, which requires a lot of time and effort. The reliance concept proposed in this study makes this process simpler and faster by providing a secure platform for nodes to exchange their security associations. This ongoing trust exchange between nodes lessens the amount of anonymous communication, and thus leads to the creation of a trusted community in the network [12]. A secure platform must be provided in which each node builds its own trusted-neighbors list. This module is created first by virtual trust between nodes, based on the good reputation of other nodes through experience.
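The vote-collection and thresholding of Steps 4 and 5 can be sketched as follows (an illustrative helper, not the authors' implementation; names are invented for the example):

```python
def adclu_mark_malicious(votes, k):
    """Tally votes at the monitoring node and mark suspects with >= k votes.

    votes: dict mapping each voter to the SET of nodes it suspects
           (a set, so at most one vote per suspect per voter is counted).
    k: the threshold value from Step 5.
    """
    tally = {}
    for suspects in votes.values():
        for node in suspects:
            tally[node] = tally.get(node, 0) + 1
    # Step 5.II: a node is malicious once its vote count reaches k
    return {node for node, count in tally.items() if count >= k}
```

For instance, if neighbors B1 and B2 both vote for A and the threshold is k = 2, A is marked malicious, while a single vote is not enough.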
Each node promiscuously listens to the transmissions of its neighbors that are located in its one-hop vicinity and are also neighbors of the zone manager. A node decreases a neighbor's reputation degree if it observes any misbehavior, such as dropping packets or modifying messages, and increases the reputation if the neighbor forwards packets without any modification. In addition, the activity of neighbors can be observed from routing tables. Once a neighbor's reputation degree reaches the threshold value, it is registered in the "honest neighbors" list. These direct neighbors then exchange their "honest neighbors" lists to create a new set of associated nodes, namely indirect honest neighbors (implicit trust). In this way a ring of honest neighbors surrounds the zone manager and controls its activity, as shown in Fig. 2; evidently, the zone manager also appears among their trusted neighbors. If any of these nodes misbehaves or acts maliciously, its reputation degree is degraded, and it is removed from the "honest neighbors" list if the degree falls below the threshold value. This process does not require that the IDSs of all neighboring nodes be active; in fact, some of them can go into sleep mode. If one node detects that the zone manager is misbehaving, it sends an alert to its honest neighbors; the modules in the sleeping state are then activated, changing from the sleeping state to the running state to initiate their IDSs and cooperate in zone-manager intrusion detection. If they also
observe the zone manager misbehaving, they send a warning to one another and cut off their communications with the zone manager; simultaneously, the warning is sent to the cluster heads. The cluster heads can then run ADCLU to dismiss the zone manager with strong evidence.
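The reputation bookkeeping described above can be sketched as follows (an illustrative model: the integer reputation scale, unit increments, and single threshold are assumptions, since the paper does not fix these details):

```python
class NeighborMonitor:
    """Tracks reputation degrees of overheard neighbors and maintains the
    resulting 'honest neighbors' list that rings the zone manager."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.reputation = {}
        self.honest = set()

    def observe(self, node, forwarded_unmodified):
        # Forwarding a packet without modification raises reputation;
        # dropping or modifying packets lowers it.
        delta = 1 if forwarded_unmodified else -1
        self.reputation[node] = self.reputation.get(node, 0) + delta
        if self.reputation[node] >= self.threshold:
            self.honest.add(node)      # reached the threshold: register
        else:
            self.honest.discard(node)  # fell below the threshold: remove
```

A node thus enters the honest-neighbors ring only after repeated good behavior, and is dropped as soon as observed misbehavior pushes it back under the threshold.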
Fig. 2. Honest neighbors model for detecting misbehaving zone manager
After the removal of the zone manager, a new manager should be selected. The simplest and fastest process is for the honest neighbors to select the node with the lowest misbehavior, i.e., the highest reputation rate, as the new zone manager.
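This selection rule can be expressed directly (illustrative only; it assumes the reputation map from the sketch above):

```python
def select_new_manager(reputation, honest_neighbors):
    """Pick the honest neighbor with the highest reputation as new zone manager.

    reputation: dict node -> reputation degree; honest_neighbors: set of nodes.
    """
    return max(honest_neighbors, key=lambda node: reputation.get(node, 0))
```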
4 Simulation Results

Our algorithm was simulated using the GloMoSim simulator. In the base scenario, 250 nodes are placed in an area of 2000 m × 2000 m, divided into four 1000 m × 1000 m sections and 16 clusters. In this model, each node selects a random destination within the simulation area, and the nodes move according to the waypoint mobility model with a maximum speed of 10 m/s. The simulation time was 300 s, and the routing protocol used was DSR. The data traffic was generated by 10 constant bit rate (CBR) sources, each sending a single 1024-byte packet every second. We use the 802.11 protocol at the MAC layer. The radio propagation range is set to 250 m and the data rate is 2 Mbit/s. Message loss was modeled by random selection of messages at various steps of the algorithm. Twenty percent of the nodes were considered malicious. The malicious nodes were selected at random and were made to drop or modify all the messages that they were to forward; in terms of our algorithm, they send WRONG messages. Figs. 3–5 show the end-to-end delay, delivery ratio and overhead, respectively, when the nodes have no mobility. Fig. 3 shows the end-to-end delay of our algorithm in comparison to ADCLU and the DSR protocol. Our algorithm produces a higher end-to-end delay than the other protocols. In general, the DSR protocol runs better than the other algorithms in simple environments, since it does not perform any detection and response process, so its delay is lower than the others.
On the other hand, our protocol is more complicated than ADCLU, so the higher delay is expected. As Fig. 4 shows, the delivery ratio of our proposed scheme is better than that of the other two protocols. If the maximum number of messages is passed and received successfully, there are two possible explanations: either no attacks exist in the network, or the attacks have been identified and fixed. Since 20 percent of the simulated nodes are malicious, this indicates the correct functioning of our algorithm against invaders. Fig. 5 compares the overhead per correctly received packet of our proposed algorithm, ADCLU and DSR. Our proposed method has a lower overhead than ADCLU, which shows that despite the presence of attacks, our algorithm can deliver more packets to the destination. In general, packet delivery ratio and overhead have an inverse relationship: when the overhead is higher the delivery ratio is lower, and a lower overhead results in a higher delivery ratio.
Fig. 3. End to end delay without mobility

Fig. 4. Packet delivery ratio without mobility
Fig. 5. Overhead per true received packets without mobility
Figs. 6–8 show the end-to-end delay, delivery ratio and overhead, respectively, when nodes move with a maximum speed of 10 m/s. According to these figures, our proposed scheme performs better despite the movement of the nodes.

Fig. 6. End to end delay with maximum speed 10 m/s

Fig. 7. Packet delivery ratio with maximum speed 10 m/s
Fig. 8. Overhead per true received packets with maximum speed 10m/s
5 Conclusion and Future Work

In this paper, we have proposed a scheme to improve IDS for MANET. This scheme aims to minimize the overhead, maximize the performance of the network, and provide a degree of protection against intruders. In our proposed scheme, we focus on the reliability of the zone manager, which is checked by its honest neighbors. The scheme develops as follows: the network is divided into zones, each with one zone manager that monitors the cluster heads in its zone. The validation of the zone manager is accomplished by its honest neighbors, which is neglected in many IDS techniques. In most of these techniques the monitoring node is assumed to be valid, but if the monitoring node is a misbehaving node, it can refuse to initiate the intrusion detection algorithm or accuse an innocent node of being destructive. The honest neighbors prevent such false accusations, and also allow the zone manager to remain manager if it is wrongly accused of misbehaving. However, if the manager repeats its misbehavior, it loses its management role. Our scheme can be applied to develop a sophisticated intrusion detection system for MANET. This experiment emphasizes the importance of validating the zone manager before running IDS algorithms, which has been neglected in recent research. Our simulation results show that the algorithm works well even in an unreliable channel where the percentage of packet loss is around 20%.
References

1. Xiao, H., Hong, F., Li, H.: Intrusion Detection in Ad hoc Networks. J. Commu. and Comput. 3, 42–47 (2006)
2. Farhan, A.F., Zulkhairi, D., Hatim, M.T.: Mobile Agent Intrusion Detection System for Mobile Ad hoc Networks: A Non-overlapping Zone Approach. In: 4th IEEE/IFIP International Conference on Internet, pp. 1–5. IEEE Press, Tashkent (2008)
3. Fu, Y., He, J., Li, G.: A Distributed Intrusion Detection Scheme for Mobile Ad hoc Networks. In: 31st Annual International Computer Software and Applications Conference (COMPSAC 2007), vol. 2, pp. 75–80. IEEE Press, Beijing (2007)
4. Marchang, N., Datta, R.: Collaborative Techniques for Intrusion Detection in Mobile Ad-hoc Networks. J. Ad Hoc Networks 6, 508–523 (2008)
5. Brutch, P., Ko, C.: Challenges in Intrusion Detection for Wireless Ad hoc Networks. In: Symposium on Applications and the Internet Workshops (SAINT 2003 Workshops), pp. 368–373. IEEE Press, Florida (2003)
6. Marti, S., Giuli, T.J., Lai, K., Baker, M.: Mitigating Routing Misbehavior in Mobile Ad hoc Networks. In: 6th Annual International Conference on Mobile Computing and Networking, pp. 255–265. ACM, New York (2000)
7. Arifin, R.M.: A Study on Efficient Architecture for Intrusion Detection System in Ad hoc Networks. M.Sc. Thesis, repository.dl.itc.u-okyo.ac.jp/dspace/bitstream/2261/../K01476.pdf, pp. 1–53 (2008)
8. Zhang, Y., Lee, W., Huang, Y.: Intrusion Detection Techniques for Mobile Wireless Networks. J. Wireless Networks 9, 545–556 (2003)
9. Sun, B., Wu, K., Pooch, U.W.: Alert Aggregation in Mobile Ad hoc Networks. In: 2nd ACM Workshop on Wireless Security (WiSe 2003), pp. 69–78. ACM, New York (2003)
10. Anantvalee, T., Wu, J.: A Survey on Intrusion Detection in Mobile Ad hoc Networks. In: Xiao, Y., Shen, X., Du, D.Z. (eds.) Wireless/Mobile Network Security, vol. 2, pp. 159–180. Springer, Heidelberg (2007)
11. Huang, Y., Lee, W.: A Cooperative Intrusion Detection System for Ad hoc Networks. In: ACM Workshop on Security in Ad Hoc and Sensor Networks (SASN 2003), pp. 135–147. ACM, New York (2003)
12. Razak, A., Furnell, S.M., Clarke, N.L., Brooke, P.J.: Friend-Assisted Intrusion Detection and Response Mechanisms for Mobile Ad hoc Networks. J. Ad Hoc Networks 6, 1151–1167 (2008)
Query Answering Driven by Collaborating Agents

Agnieszka Dardzinska

Bialystok University of Technology, ul. Wiejska 4C, 15-351 Bialystok, Poland
[email protected]

Abstract. We assume that there is a group of collaborating agents, where each agent is defined as an Information System coupled with a Query Answering System (QAS). Values of attributes in an information system S form atomic expressions of a language used by the agent associated with S to communicate with other agents. Collaboration among agents is initiated when one of the agents, say the one associated with S and called a client, is asked by a user to resolve a query containing attributes that are nonlocal for S. The client then asks other agents for help to have that query answered. As the result of this request, knowledge in the form of definitions of locally foreign attribute values for S is extracted at the information systems representing the other agents and sent to the client. The outcome of this step is a knowledge base KB created at the client site and used to answer the query. In this paper we present a method of identifying which agents are semantically the closest to S, and we show that the precision and recall of QAS increase when only these agents are asked for help by the client.

Keywords: query, information system, agent, knowledge base.
T.-h. Kim et al. (Eds.): FGCN 2010, Part II, CCIS 120, pp. 22–30, 2010. © Springer-Verlag Berlin Heidelberg 2010

1 Introduction

We assume that there is a group of collaborating agents, where each agent is defined as an Information System (possibly incomplete) coupled with a Query Answering System (QAS) and a knowledge base which is initially empty. Incompleteness is understood as a property which allows a set of weighted attribute values to be used as the value of an attribute. Additionally, we assume that the sum of these weights has to be equal to 1. The definition of an information system of type λ given in this paper was initially proposed in [9]. The type λ was introduced with the purpose of monitoring the weights assigned to values of attributes by the Chase algorithm. If a weight is less than λ, then the corresponding attribute value is ruled out as a possible value, and the weights assigned to the remaining attribute values are adjusted so that their sum is again equal to one. Semantic inconsistencies are due to different interpretations of attributes and their values among sites (for instance, one site can interpret the concept young differently than other sites). Different interpretations are also implied by the fact that each site may handle null values differently. Null value replacement by a value suggested either by statistical or some rule-based method is quite common before a query is
answered by QAS. Ontologies ([5], [6], [11], [12], [13], [1], [2], [14], [4]) are widely used as part of a semantic bridge between agents built independently, so that they can collaborate and understand each other. In [8], the notion of optimal rough semantics and a method of its construction were proposed. Rough semantics can be used to model and handle semantic inconsistencies among sites due to different interpretations of incomplete values. As the result of collaboration among agents, the knowledge base of an agent is updated and contains rules extracted from information systems representing other agents. Although the names of attributes can be the same among information systems, their granularity levels may differ. As a result of these differences, the knowledge base has to satisfy certain properties in order to be used by Chase. Also, the semantic differences between agents may influence the precision and recall of a query answering system. We will show that it is wise to use the knowledge obtained from agents which are semantically close to the client agent when solving a query. This way, precision and recall are improved.
2 Query Processing with Incomplete Data

In real life, data are often collected and stored in information systems residing at many different locations, built independently, instead of being collected and stored at a single location. In this case we talk about distributed (autonomous) information systems, or about agents. It is quite possible that an attribute is missing in one of these systems while it occurs in many others. Assume now that a user submits a query to one of the agents (called a client) which cannot answer it, because some of the attributes used in the query do not exist in the information system representing the client site. In such a case, the client has to ask other agents for definitions of these unknown attributes. All these new definitions are stored in the knowledge base of the client and then used to chase the missing attributes. But before any chase algorithm, called rule-based chase, can be applied, semantic inconsistencies among sites have to be resolved somehow. For instance, this can be done by taking rough semantics [3], [9], as mentioned earlier.

Definition 1. We say that S(A) = (X, A, V) is an incomplete information system of type λ if S(A) is an incomplete information system as introduced by Pawlak in [7] and the following two conditions hold for any object x ∈ X and attribute a ∈ A, where a_S(x) = {(v_i, p_i) : 1 ≤ i ≤ m} is the set of weighted values of a assigned to x: Σ_{i=1}^{m} p_i = 1, and p_i ≥ λ for any i.

Now, let us assume that S1(A), S2(A) are incomplete information systems, both of type λ. The same set X of objects is stored in both systems, and the same set A of attributes is used to describe them. The meaning and granularity of the values of attributes from A in both systems S1 and S2 are also the same. Additionally, we write a_S1(x) = {(v_i, p_i) : 1 ≤ i ≤ m} and a_S2(x) = {(v_i, q_i) : 1 ≤ i ≤ m} for the weighted value sets assigned to object x by attribute a in S1 and S2, respectively.
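The type-λ constraint from the Introduction (weights below λ are ruled out and the remaining weights are adjusted so they again sum to one) can be sketched as follows; the proportional rescaling used here is an assumption, since the text only says the weights are adjusted:

```python
def enforce_lambda(weighted_values, lam):
    """weighted_values: dict mapping attribute value -> weight (weights sum to 1).

    Rule out values whose weight is below lam, then rescale the remaining
    weights (proportionally, as an assumption) so that they sum to one again.
    """
    kept = {v: w for v, w in weighted_values.items() if w >= lam}
    total = sum(kept.values())
    return {v: w / total for v, w in kept.items()}
```

For example, with λ = 0.2 the set {(a1, 0.6), (a2, 0.3), (a3, 0.1)} loses a3 and becomes {(a1, 2/3), (a2, 1/3)}.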
We say that containment relation ψ holds between S1 and S2 if, for every object x and attribute a, one of the following two conditions holds: (1) the set of candidate values listed in a_S2(x) is smaller than in a_S1(x), or (2) the listed values are the same, but the average difference between the confidences assigned to them in a_S2(x) is not smaller than in a_S1(x). Instead of saying that containment relation holds between S1 and S2, we can equivalently say that S1 was transformed into S2 by the containment mapping ψ, written ψ(S1) = S2. Similarly, we can either say that containment relation ψ holds between a_S1(x) and a_S2(x), or that a_S1(x) was transformed into a_S2(x) by ψ.

So, if containment mapping ψ converts an information system S1 to S2, then S2 is more complete than S1. In other words, for at least one pair (x, a), either ψ has to decrease the number of attribute values in a_S1(x), or the average difference between the confidences assigned to the attribute values in a_S1(x) has to be increased by ψ. To give an example of a containment mapping ψ, let us take two information systems S1, S2, both of type λ, represented as Table 1 and Table 2.

Table 1. Information System S1 (objects x1–x8 described by attributes a–e; each entry is a set of weighted attribute values)
It can be easily checked that the values assigned to e(x1), b(x2), c(x2), a(x3), e(x4), a(x5), c(x7), and a(x8) in S1 are different than the corresponding values in S2. In each of these eight cases, an attribute value assigned to an object in S2 is less general than the value assigned to the same object in S1. It means that ψ(S1) = S2.
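A cell-level check for the containment relation can be sketched as follows (an illustrative reading of the two conditions; the average pairwise weight difference stands in for the "average difference between confidences", which is an assumption):

```python
def avg_pairwise_diff(weighted):
    """Average absolute difference between all pairs of weights."""
    ws = list(weighted.values())
    pairs = [(a, b) for i, a in enumerate(ws) for b in ws[i + 1:]]
    if not pairs:
        return 0.0
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

def less_general(v1, v2):
    """True if weighted value set v2 (dict value -> weight) is less general
    than v1: either some candidate values were ruled out, or the same values
    remain with a larger average difference between their confidences."""
    if set(v2) < set(v1):
        return True
    if set(v2) == set(v1):
        return avg_pairwise_diff(v2) > avg_pairwise_diff(v1)
    return False
```

For example, {(a1, 0.8), (a2, 0.2)} is less general than {(a1, 0.5), (a2, 0.5)}, and {(a1, 1.0)} is less general than both.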
From now on, an agent will be denoted by AG(S;K), where S is an incomplete information system of type λ and K is a knowledge base containing rules extracted from information systems of other agents collaborating with AG(S;K).

Table 2. Information System S2 (objects x1–x8 described by attributes a–e; each entry is a set of weighted attribute values)
3 Query Processing Based on Collaboration and Chase

Assume that we have a group G of collaborating agents and a user submits a query q(B) to an agent AG(S(A);K) from that group, where S(A) = (X, A, V), K = ∅, and B is the set of attributes used in q(B). All attributes in B \ A are called foreign for AG(S(A);K). Since AG(S(A);K) can collaborate with other agents in G, definitions of foreign attributes for AG(S(A);K) can be extracted from information systems associated with agents in G. In [8], it was shown that agent AG(S(A);K) can answer the query q(B) assuming that definitions of all values of attributes from B \ A can be extracted at the remote sites for S and used to answer q(B). Foreign attributes for S can be seen as attributes with only null values assigned to all objects in S. Assume now that we have three collaborating agents: AG(S;K), AG(S1;K1), AG(S2;K2), where S = (X;A;V), S1 = (X1;A1;V1), S2 = (X2;A2;V2), and K = K1 = K2 = ∅. If the consensus between AG(S;K) and AG(S1;K1) on the knowledge extracted from S(A∩A1) and S1(A∩A1) is closer than the consensus between AG(S;K) and AG(S2;K2) on the knowledge extracted from S(A∩A2) and S2(A∩A2), then AG(S1;K1) is chosen by AG(S;K) as the agent to be asked for help in solving user queries. Rules defining foreign attribute values for S are extracted at S1 and stored in K. Assuming that systems S1, S2 store the same sets of objects and use the same attributes to describe them, system S1 is more complete than system S2 if ψ(S2) = S1. The question remains whether the values predicted by the imputation process are really correct, and if not, how far they are (assuming that some distance measure can be set up) from the correct values, which clearly are unknown. The classical approach to this
kind of problem is to start with a complete information system, remove randomly from it, e.g., 10 percent of its values, and then run the imputation algorithm on the resulting system. The next step is to compare the descriptions of objects in the system which is the outcome of the imputation algorithm with the descriptions of the same objects in the original system. But before we can continue this discussion any further, we have to decide on the interpretation of the functors "or" and "and", denoted in this paper by "+" and "*", correspondingly. We adopt the semantics of terms proposed in [10], since that semantics preserves the distributive property: t1 * (t2 + t3) = (t1 * t2) + (t1 * t3), for any queries t1, t2, t3. So, let us assume that S = (X, A, V) is an information system of type λ and t is a term constructed in the standard way from values of attributes in V, seen as constants, and from the two functors + and *. By N_S(t) we mean the standard interpretation of the term t in S, defined as (see [10]):

• N_S(v) = {(x, p) : (v, p) ∈ a_S(x)}, for any value v of an attribute a,
• N_S(t1 + t2) = N_S(t1) ⊕ N_S(t2),
• N_S(t1 * t2) = N_S(t1) ⊗ N_S(t2),

where, for N_S(t1) = {(x, p_x)} and N_S(t2) = {(x, q_x)}, we have:

• N_S(t1) ⊗ N_S(t2) = {(x, p_x · q_x) : x occurs in both N_S(t1) and N_S(t2)},
• N_S(t1) ⊕ N_S(t2) = {(x, p_x) : x occurs only in N_S(t1)} ∪ {(x, q_x) : x occurs only in N_S(t2)} ∪ {(x, max(p_x, q_x)) : x occurs in both}.

Assume that AG(S;K) is an agent, where S = (X, A, V) and K contains definitions of attribute values in B; the attributes in B are foreign for S. The null value imputation algorithm Chase, given below, converts an information system S of type λ to a new, more complete information system Chase(S) of the same type. Initially, NULL values are assigned to all attributes in B for all objects in S. The proposed algorithm is new in comparison to known strategies for chasing NULL values in relational tables because of the assumption about partial incompleteness of data (sets of weighted attribute values can be assigned to an object as its value). The algorithm ERID [3] is used by Chase to extract rules from this type of data. The Chase algorithm converts the incomplete information system to a new information system of type λ which is more complete. Now, let us assume that agent AG(S;K) represents the client site, where S is a partially incomplete information system of type λ. When a query q(B) is submitted to AG(S(X;A;V);K), its query answering system QAS will replace S by Chase(S) and then solve the query using, for instance, the strategy proposed in [10]. Clearly, one can ask why the resulting information system obtained by Chase cannot be stored aside and reused when a new query is submitted to AG(S;K). If AG(S;K) does not have many updates, we can do that by keeping a copy of Chase(S) and reusing that copy when a new query is submitted. System Chase(S), if stored aside, cannot be reused by QAS when the number of updates in the original S and/or K exceeds a given threshold value.
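The two operators of the standard interpretation (maximum of confidences for "+", product of confidences for "*") can be sketched as follows, representing a weighted support N_S(t) as a dict from object to confidence (an illustrative rendering, not the paper's code):

```python
def interp_or(n1, n2):
    """N_S(t1 + t2): union of the two supports; an object occurring in both
    keeps the maximum of its two confidences."""
    out = dict(n1)
    for x, q in n2.items():
        out[x] = max(out.get(x, 0.0), q)
    return out

def interp_and(n1, n2):
    """N_S(t1 * t2): intersection of the two supports; confidences multiply."""
    return {x: n1[x] * n2[x] for x in n1 if x in n2}
```

With these two functions, any term built from + and * can be evaluated bottom-up from the atomic interpretations N_S(v).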
4 In Search for the Best Agent

Assume again that agent AG(S;K) represents the client site. As we already pointed out, the knowledge base K contains rules extracted from information systems representing other agents. Our goal is to find the optimal i-agent AG(Si;K) for the client AG(S;K), where by optimal we mean an agent of maximal precision and recall. The distance between two agents is computed from the supports and confidences of the rules describing their common attribute values, extracted at both sites: the closer these rules agree, the smaller the distance.
From all the agents we have, we choose the agent with the minimal value of this distance, which corresponds to the agent closest to the client. The definition of d discovered at the agent closest to the client and stored in the KB of the client guarantees that the Query Answering System connected with the client has maximal precision and recall within the group of agents having d.

Example. Let us assume we have three information systems S, S1, S2, represented as Tables 3, 4 and 5. Information system S has no information about attribute d, which appears in the other systems S1 and S2. Our goal is to choose the system (agent) from which we will be able to predict the values of d in system S.

Table 3. Information System S

Z:  z1  z2  z3  z4  z5  z6
a:  1   1   2   0   2   0
b:  2   1   1   2   2   3
c:  L   H   H   H   L   L
d:  (unknown in S)
Table 4. Information System S1

X:  x1  x2  x3  x4  x5  x6  x7
a:  0   0   1   1   0   1   2
b:  1   2   3   1   2   3   1
c:  H   L   L   H   L   L   H
d:  3   3   3   3   1   1   3
e:  +   +   +   +   -
Table 5. Information System S2

Y:  y1  y2  y3  y4  y5  y6
a:  1   1   0   0   2   2
b:  1   1   2   3   2   3
c:  H   H   L   L   L   H
d:  1   3   1   3   1   3
Because the attributes a, b, c are common to all of the systems, we first extract rules describing them, and for each rule we calculate its support and confidence in the standard way; we do the same for system S. The computed distance between S and S1 is 0.83, and the distance between S and S2 is 0.85. Because the distance between S and S1 is smaller than the distance between S and S2, we choose S1 as the better agent to contact with S. Next, the chosen closest agent S1 contacts information system S to improve it, using the containment relation described earlier.
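The selection step can be sketched with a generic support-weighted confidence-difference distance over shared rules (an illustrative stand-in: the paper's exact formula may differ, and the rule representation here is invented for the example):

```python
def distance(rules_a, rules_b):
    """rules_*: dict mapping a rule id to (support, confidence).

    Support-weighted average of confidence differences over the rules
    that both sites extracted for their common attribute values.
    """
    common = set(rules_a) & set(rules_b)
    weights = {r: min(rules_a[r][0], rules_b[r][0]) for r in common}
    total = sum(weights.values())
    if total == 0:
        return 1.0  # nothing comparable: treat as maximally distant
    return sum(weights[r] * abs(rules_a[r][1] - rules_b[r][1])
               for r in common) / total

def closest_agent(client_rules, candidate_rules):
    """candidate_rules: dict agent name -> its rules; pick minimal distance."""
    return min(candidate_rules,
               key=lambda agent: distance(client_rules, candidate_rules[agent]))
```

Whatever the exact formula, the decision rule is the one stated above: the agent at minimal distance from the client is asked for the definition of d.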
5 Conclusion

We proposed a method of finding and identifying the agent that is semantically closest to a given client. Tests are very promising. To improve our strategy, we can take additional hidden attributes into consideration; such an attribute can be chosen randomly, but we can also identify which attribute has the highest support.

Acknowledgments. This paper is sponsored by W/WM/11/09.
References

1. Benjamins, V.R., Fensel, D., Prez, A.G.: Knowledge management through ontologies. In: Proceedings of the 2nd International Conference on Practical Aspects of Knowledge Management (PAKM 1998), Basel, Switzerland (1998)
2. Chandrasekaran, B., Josephson, J.R., Benjamins, V.R.: The ontology of tasks and methods. In: Proceedings of the 11th Workshop on Knowledge Acquisition, Modeling and Management, Banff, Alberta, Canada (1998)
3. Dardzinska, A., Ras, Z.W.: On Rules Discovery from Incomplete Information Systems. In: Lin, T.Y., Hu, X., Ohsuga, S., Liau, C. (eds.) Proceedings of ICDM 2003 Workshop on Foundations and New Directions of Data Mining, Melbourne, Florida, pp. 31–35. IEEE Computer Society, Los Alamitos (2003)
4. Fensel, D.: Ontologies: a silver bullet for knowledge management and electronic commerce. Springer, Heidelberg (1998)
5. Guarino, N. (ed.): Formal Ontology in Information Systems. IOS Press, Amsterdam (1998)
6. Guarino, N., Giaretta, P.: Ontologies and knowledge bases, towards a terminological clarification. In: Towards Very Large Knowledge Bases: Knowledge Building and Knowledge Sharing. IOS Press, Amsterdam (1995)
7. Pawlak, Z.: Information systems - theoretical foundations. Information Systems Journal 6, 205–218 (1981)
8. Ras, Z.W., Dardzinska, A.: Ontology Based Distributed Autonomous Knowledge Systems. Information Systems International Journal 29(1), 47–58 (2004)
9. Ras, Z.W., Dardzinska, A.: Solving Failing Queries through Cooperation and Collaboration. World Wide Web Journal 9(2), 173–186 (2006)
10. Ras, Z.W., Joshi, S.: Query approximate answering system for an incomplete DKBS. Fundamenta Informaticae Journal 30(3/4), 313–324 (1997)
11. Sowa, J.F.: Ontology, metadata, and semiotics. In: Ganter, B., Mineau, G.W. (eds.) ICCS 2000. LNCS (LNAI), vol. 1867, pp. 55–81. Springer, Heidelberg (2000)
12. Sowa, J.F.: Knowledge Representation: Logical, Philosophical, and Computational Foundations. Brooks/Cole Publishing Co., Pacific Grove (2000)
13. Sowa, J.F.: Ontological categories. In: Albertazzi, L. (ed.) Shapes of Forms: From Gestalt Psychology and Phenomenology to Ontology and Mathematics, pp. 307–340. Kluwer Academic Publishers, Dordrecht (1999)
14. Van Heijst, G., Schreiber, A., Wielinga, B.: Using explicit ontologies in KBS development. International Journal of Human and Computer Studies 46(2/3), 183–292 (1997)
Attribute-Based Access Control for Layered Grid Resources*

Bo Lang, Hangyu Li, and Wenting Ni

State Key Lab of Software Development Environment, Beihang University, Beijing 100191, China
[email protected],
[email protected],
[email protected]

Abstract. Attribute-Based Access Control (ABAC) is a fine-grained and flexible authorization method. In this paper, considering the layered structure of Grid resources, an ABAC model named Grid_ABAC is presented, and an implementation architecture of Grid_ABAC based on XACML is proposed. The paper also describes the method for integrating Grid_ABAC seamlessly into the authorization framework of the Globus Toolkit. The test results show that Grid_ABAC is efficient and provides a more flexible and open access control method for grid computing.

Keywords: Attribute based access control, Grid computing, Globus, XACML, SAML.
1 Introduction

Grid systems are virtual organizations whose users and resources change dynamically [1]. Traditional access control models, such as DAC and RBAC, are based on static user information and are not very suitable for this kind of system. Attribute-Based Access Control (ABAC), which makes decisions relying on attributes of requestors, resources, environment and actions, is fine-grained and scalable, and is regarded as a promising access control method for grid computing. The eXtensible Access Control Markup Language (XACML) is an OASIS standard [2]. It is a policy description language for ABAC and also provides an authorization framework. Several ABAC systems such as GridShib [3] have been used in grid computing, and research on XACML-based ABAC systems is attracting more and more attention [4][5]. In this paper, we present a model named Grid_ABAC based on the layered structure of Grid resources and existing ABAC models [6][7]. We also propose an implementation architecture of Grid_ABAC based on XACML, and finally integrate the Grid_ABAC system with GT4 (Globus Toolkit 4.0.5).
The work was supported by the Hi-Tech Research and Development Program of China under Grant No.2007AA010301, the Foundation of the State Key Laboratory of Software Development Environment under Grant No.SKLSDE-2009ZX-06, and the National Important Research Plan of Infrastructure Software under Grant No.2010ZX01042-002-001-00.
T.-h. Kim et al. (Eds.): FGCN 2010, Part II, CCIS 120, pp. 31–40, 2010. © Springer-Verlag Berlin Heidelberg 2010
B. Lang, H. Li, and W. Ni
The paper is organized as follows. Section 2 analyses the features of Grid_ABAC and gives its definition. Section 3 introduces the implementation mechanism of Grid_ABAC. Section 4 describes how to integrate the model with GT4. Section 5 gives efficiency tests of Grid_ABAC authorization in GT4. Section 6 summarizes our work.
2 The Grid_ABAC Model
2.1 The Layered Resource and Policy Structure in Grid
Resources in Grid are usually organized as a tree for management, as shown in Fig. 1. A resource is identified and located by a URL composed of the path from the root of the tree to the resource node.
Fig. 1. The structure of resources and policies in Grid
Each resource in the tree has a specific policy. Policies of resources in an upper layer also apply to the resources in the lower layers. This relation is similar to the relation between a parent class and a subclass in object-oriented programming: subclasses not only inherit the properties of their parents but can also define their own. Therefore, the policy of a resource node is a policy set which contains the policies of its parent nodes and its own, and the access control decision is made by combining the decision results of all these policies. In Fig. 1, each resource Rn owns its specific policy Pn. The policy set of Rn includes the policy Pn owned by Rn and the policies owned by its ancestor nodes. For example, the policy set of resource R2.2.2 is {P2.2.2, P2.2, P2}. The decision on a request to R2.2.2 is made by evaluating all the policies in {P2.2.2, P2.2, P2}.
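For illustration, the accumulation of ancestor policies along the resource path can be sketched as follows (the path-keyed dictionary and resource names are hypothetical, not part of the paper's implementation):

```python
def policy_set(resource_path, policies):
    """Collect the policy set of a resource: its own specific policy
    plus the policies of every ancestor node on the path from the root."""
    parts = resource_path.strip("/").split("/")
    chain = []
    for depth in range(1, len(parts) + 1):
        node = "/".join(parts[:depth])
        if node in policies:          # a node may have no specific policy
            chain.append(policies[node])
    return chain

# Example from Fig. 1: the policy set of R2.2.2 is {P2, P2.2, P2.2.2}
policies = {"R2": "P2", "R2/R2.2": "P2.2", "R2/R2.2/R2.2.2": "P2.2.2"}
policy_set("R2/R2.2/R2.2.2", policies)  # ['P2', 'P2.2', 'P2.2.2']
```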
Attribute-Based Access Control for Layered Grid Resources
2.2 The Definition of Grid_ABAC
We propose an attribute-based access control model for the layered grid resources. The definition of Grid_ABAC has the following parts:
(1) Attribute Type (AT): an attribute type is composed of a unique identifier, the name and the data type of the attribute. Each AT must be related to one kind of entity, such as subject, resource or environment.
(2) Basic Attribute Sets. There are four types of attribute set:
- Subject Attribute Set: Attr(sub) = {SubjectAttr_i | i ∈ [0, n]}
- Resource Attribute Set: Attr(res) = {ResourceAttr_i | i ∈ [0, n]}
- Action Attribute Set: Attr(act) = {ActionAttr_i | i ∈ [0, n]}
- Environment Attribute Set: Attr(env) = {EnvironmentAttr_i | i ∈ [0, n]}
(3) Resource: the Xth grid resource in layer ln is denoted by R_{l1l2…ln.X}.
(4) Specific policy of a resource R_{l1l2…ln.X}, denoted Grid_P_R_{l1l2…ln.X}: the policy owned by R_{l1l2…ln.X}, expressing the particular security requirement of this resource. It is defined as a 4-tuple Grid_P_R_{l1l2…ln.X}(S, R, A, E), where S, R, A, E represent Subject, Resource, Action and Environment respectively.
(5) Policy set of a resource R_{l1l2…ln.X}, denoted Grid_PolicySet_R_{l1l2…ln.X}: the policy set owned by R_{l1l2…ln.X} includes its specific policy Grid_P_R_{l1l2…ln.X} and the specific policies owned by the upper-layer nodes, i.e. Grid_P_R_{l1}, …, Grid_P_R_{l1l2…ln}. Each policy set has a combining algorithm for making a final decision based on the decision results of the policies in the policy set.
(6) Grid_ABAC policy evaluation: Grid_ABAC_authz_R_{l1l2…ln.X}(Attr(sub), Attr(res), Attr(act), Attr(env)) is the evaluation function of a policy set. It evaluates each policy and combines the results using the combining algorithm combine_alg:

Grid_ABAC_authz_R_{l1l2…ln.X}(Attr(sub), Attr(res), Attr(act), Attr(env))
  = combine_alg(abac_authz_p_{l1}(Attr(sub), Attr(res), Attr(act), Attr(env)),
                abac_authz_p_{l1l2}(Attr(sub), Attr(res), Attr(act), Attr(env)),
                …,
                abac_authz_p_{l1l2…ln.X}(Attr(sub), Attr(res), Attr(act), Attr(env)))
  ∈ {PERMIT, DENY}
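A minimal sketch of the evaluation function, assuming a deny-overrides combining algorithm (the paper does not fix a particular combine_alg, and the callable policy representation here is illustrative):

```python
def deny_overrides(decisions):
    """One common combining algorithm: a single DENY vetoes the request."""
    if "DENY" in decisions:
        return "DENY"
    return "PERMIT" if "PERMIT" in decisions else "NOT_APPLICABLE"

def grid_abac_authz(policy_set, attrs, combine_alg=deny_overrides):
    """Evaluate every policy in the set on the request attributes and
    combine the individual results into one final decision."""
    return combine_alg([policy(attrs) for policy in policy_set])

# Two illustrative policies over subject attributes
is_professor = lambda a: "PERMIT" if a.get("title") == "professor" else "DENY"
from_beihang = lambda a: "PERMIT" if a.get("affiliation") == "Beihang" else "DENY"
```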
3 The Implementation Framework of Grid_ABAC
XACML defines a common access control description language and a policy framework that standardizes the request/response decision flow [2]. This paper proposes an implementation framework of Grid_ABAC on the Grid platform based on
XACML, as shown in Fig. 2. The framework is divided into the requestor domain and the server domain. The requestor domain contains the requestor and the SAML (Security Assertion Markup Language) attribute authority. The Grid_ABAC decision mechanism is in the server domain and is made up of the ABAC decision function and the attribute administration module. When the requestor sends a request to the server, the request is intercepted by the authorization engine and forwarded to the ABAC decision function, which performs the authorization evaluation using the attributes of the requestor obtained from the attribute administration module. Finally, the ABAC decision function returns the authorization decision to the authorization engine, where the decision is enforced.
Fig. 2. Grid_ABAC implementation framework
3.1 The ABAC Decision Function
The ABAC decision function is the core of the implementation mechanism of Grid_ABAC. It consists of the ABAC Policy Decision Point (ABACPDP), the ABAC Policy Administration Point (ABACPAP), and the ABAC Policy Information Point (ABACPIP). The main function of ABACPDP is to make decisions according to the policies. ABACPAP retrieves policies from the policy database in the local grid service, and ABACPIP is responsible for finding the attribute values of the requestor needed during policy evaluation.
3.2 Attribute Administration
Attribute administration is designed to manage and find attributes for the ABAC decision function. Most of the attributes of the requestor are declared in an Attribute Certificate (Attribute Cert.) released by the attribute authority of the requestor
domain. Considering the changeability of the attributes, we provide another, real-time method for getting the attributes dynamically from the attribute authority of the requestor domain by using SAML. The attribute requests and responses are packaged in SAML assertions to guarantee secure message transmission. As shown in Fig. 2, attribute administration is composed of the attribute certificate management function and the attribute procurement function. Attribute certificate management is responsible for managing and maintaining the attribute certificates of users, so ABACPDP does not need to connect to the remote attribute authority every time it wants the attributes of a requestor. Attribute certificate management offers two services: the attribute certificate download service and the attribute certificate submission service. Users can submit their attribute certificates to the attribute certificate management module before sending the first request. The certificate download service is called by the attribute procurement function to get attributes. The attribute procurement function provides interfaces for ABACPIP to get attributes by using attribute certificates or remote SAML connections.
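The certificate-first lookup with a SAML fallback might be sketched as below (function and parameter names are assumptions, not the Globus API):

```python
def get_attribute(subject, name, cert_store, saml_query):
    """ABACPIP-style lookup: consult the locally held attribute
    certificate first; only on a miss call out to the remote
    attribute authority, then cache the value for later requests."""
    cert = cert_store.setdefault(subject, {})
    if name in cert:
        return cert[name]
    value = saml_query(subject, name)   # remote SAML attribute request
    cert[name] = value
    return value
```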
4 Integrating Grid_ABAC with GT4
4.1 The GT4 Authorization Framework and Its Extension
GT4 [8] uses the WSRF framework and implements stateful Web Services. In the GT4 authorization framework, shown in Fig. 3, the authorization engine fills the role of the PEP in XACML. It is responsible for intercepting the user's request and enforcing access control decisions. MasterPDP is a special component of the Globus authorization framework. As the topmost PDP above the common PDPs and PIPs in GT4, MasterPDP manages all the PDPs and PIPs. Through the authorization configuration file, MasterPDP can support multiple policies. The configuration file specifies which kinds of PDP or PIP are used, and MasterPDP instantiates these PDPs or PIPs to carry out attribute collection or authorization evaluation. Finally, MasterPDP combines all the authorization results returned by each PDP according to the combining algorithm configured in the authorization configuration file and returns the overall result to the authorization engine. The GT4 authorization framework provides two interfaces for authorization extension, the PIP and PDP extensions. A new authorization policy can be integrated into GT4 by implementing these interfaces. Based on this analysis of the GT4 authorization framework, we put forward two methods for integrating Grid_ABAC with GT4. One is the built-in ABAC authorization, which can respond rapidly to ABAC authorization requests. The other delegates the ABAC authorization function to a remote authorization service by sending a SAML authorization request; this reduces the load on the local grid service and needs fewer modifications to the authorization framework than the built-in manner, although message transmission and processing delays reduce the authorization efficiency. The extended GT4 authorization framework is shown in Fig. 3.
Fig. 3. Grid_ABAC-extended Authorization Framework of GT4
As shown in Fig. 3, ABACMasterPDP is the built-in ABAC authorization, which implements the PDP interface of GT4. ABACMasterPDP connects the built-in ABACPDP with the MasterPDP of the GT4 authorization framework. It accepts the user's request from MasterPDP, packages and sends the needed information, such as resource attributes, to the built-in ABACPDP, and finally returns the authorization decision to MasterPDP. The implementation of the remote ABAC authorization service relies on a built-in PDP in GT4 named SamlCalloutPDP. SamlCalloutPDP creates and sends a SAML request to the remote ABAC authorization service, and then receives a SAML authorization response containing the authorization result from the remote service.
4.2 The Built-in Grid_ABAC in GT4
On receiving an authorization request from the authorization engine, MasterPDP creates an ABACMasterPDP instance. ABACMasterPDP then collects attributes and sends them to ABACPDP. ABACPAP finds the policy for ABACPDP based on the URI of the requested resource. During evaluation, ABACPIP first looks up attributes in the attribute certificate. If an attribute cannot be found in any attribute certificate, ABACPIP fetches it from the remote attribute authority by calling the SAML attribute procurement interface. When the authorization evaluation completes, the result is returned to the authorization engine of GT4.
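Putting these pieces together, the built-in decision chain can be sketched as follows (the dictionary-based policy format and helper names are illustrative, not GT4 code):

```python
def built_in_abac_authz(request, find_policies, get_attr, combine_alg):
    """Sketch of the built-in flow: ABACPAP locates policies by the
    resource URI, ABACPIP resolves each attribute a policy needs,
    ABACPDP evaluates the policy, and the results are combined."""
    results = []
    for policy in find_policies(request["resource_uri"]):
        attrs = {a: get_attr(request["subject"], a) for a in policy["needs"]}
        results.append(policy["rule"](attrs))
    return combine_alg(results)
```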
Resources in GT4 conform to the WSRF standard. The security policy of a resource is constructed using the Grid_ABAC policy structure introduced in Section 2.1. The URI of a resource has the structure protocol://IP-address/wsrf/services/****, which indicates that the resource is WSRF-standardized and that its catalog is /wsrf/services/****. Based on this structure, the resource layers begin at the directory "wsrf". Policies in the "wsrf" layer apply to all resources below it, while the policy attached to the last part of the URI applies only to the specific resource in that layer. ABACPAP finds the policy according to the URI of the resource.
4.3 SAML Callout Grid_ABAC
SAML callout Grid_ABAC is an independent authorization service which can act as an authorization gateway used by all the grid services in the domain. As shown in Fig. 3, the authorization engine calls the Grid_ABAC authorization service through SamlCalloutPDP, which implements the PDP interface. As a built-in PDP, SamlCalloutPDP is created by the MasterPDP of GT4, and it sends a SAML authorization request to the remote Grid_ABAC authorization service. A SAML authorization request includes the requested service URI and the identifier of the attribute certificate of the requestor. The Grid_ABAC authorization service parses the SAML authorization request and carries out the authorization evaluation using the information in the request and the policies. The authorization decision is returned in a SAML authorization assertion to SamlCalloutPDP, where the decision is extracted from the assertion and forwarded to MasterPDP.
5 Testing and Efficiency Analysis
In our testing example, we build a grid service named "SecureCounterService" and deploy our Grid_ABAC mechanisms in the authorization framework of this service. The policy for this resource is "Only a professor from Beihang University can access this resource." The main part of the XACML policy (the full listing is abbreviated here) matches the subject attribute value "professor" and the resource "wsrf/services/SecureCounterService".
The test results show that the authorization returns the expected decisions. Fig. 4 shows the authorization process information and the attribute procurement information on the server. The two attributes of the user expected by the policy, "Beihang" and "professor", can both be found, so the authorization result is "Permit".
Fig. 4. ABAC Authorization Process
We also ran efficiency tests in this example. We record the time from the point when ABACMasterPDP is created to the point when the authorization result is returned. The tests are divided into three groups, each varying one influence factor: policy complexity, the length of the attribute certificate, and the number of SAML attribute procurements. The hardware and software environment is as follows: CPU: Intel Pentium 4 2.66 GHz; Memory: 512 MB; Operating System: Linux Fedora Core 5; Grid platform: Globus Toolkit 4.0.5. The test results are shown in Tables 1, 2 and 3. The data used in the tests was chosen according to situations likely in practice. As the tables show, SAML remote attribute procurement has the longest running time because of network delay. In real applications, the number of attributes needed in a policy is less than twenty, and most of them can be found in the attribute certificate, so SAML remote attribute procurement is rarely invoked. As shown in Table 3, invoking SAML
Table 1. Evaluation tests with different policy complexity

  Number of rules in the policy    Evaluation time cost (ms)
  10                               163
  50                               166
  100                              172
  500                              190

Table 2. Evaluation tests with different attribute certificate complexity

  Number of attributes in the certificate    Evaluation time cost (ms)
  10                                         206
  50                                         249
  100                                        291
  500                                        373

Table 3. Evaluation tests with different numbers of SAML attribute procurement invocations

  Number of invocations    Evaluation time cost (ms)
  1                        294
  10                       403
  20                       530
  30                       656
remote attribute procurement 30 times costs 656 ms, which means that the time cost of Grid_ABAC stays at the millisecond level even in the worst case. The results also show that the time cost does not increase sharply with the complexity of policies and attribute certificates.
6 Conclusions
Based on current ABAC models and the layered resource management structure of grid systems, we put forward the Grid_ABAC model and present its XACML-based implementation structure. We analyze the authorization framework of the GT4 platform and provide two methods for integrating Grid_ABAC with it. Compared with the original built-in authorization policies in GT4, Grid_ABAC is more flexible and supports fine-grained authorization.
References
1. Foster, I., Kesselman, C., Tuecke, S.: The Anatomy of the Grid: Enabling Scalable Virtual Organizations. International J. Supercomputer Applications 15(3), 200–222 (2001)
2. OASIS: eXtensible Access Control Markup Language (XACML) Version 2.0 (2003), http://www.oasis-open.org/committees/xacml
3. Barton, T., Basney, J., Freeman, T., Scavo, T., Siebenlist, F., Welch, V., Ananthakrishnan, R., Baker, B., Goode, M., Keahey, K.: Identity Federation and Attribute-based Authorization through the Globus Toolkit, Shibboleth, GridShib, and MyProxy. In: 5th Annual PKI R&D Workshop (2006)
4. Demchenko, Y., Gommans, L., de Laat, C.: Using SAML and XACML for complex resource provisioning in grid based applications. In: IEEE Workshop on Policies for Distributed Systems and Networks 2007, Bologna, Italy, pp. 183–187 (2007)
5. Shen, H.: A Semantic- and Attribute-Based Framework for Web Services Access Control. In: 2nd International Workshop on Intelligent Systems and Applications, ISA 2010 (2010)
6. Yuan, E., Tong, J.: Attribute based access control (ABAC) for Web services. In: The 3rd International Conference on Web Services, pp. 561–569. IEEE Computer Society, Orlando (2005)
7. Lang, B., Foster, I., Siebenlist, F., Ananthakrishnan, R., Freeman, T.: A Flexible Attribute Based Access Control Method for Grid Computing. Journal of Grid Computing 7, 169–180 (2009)
8. GT 4.0: Security: Authorization Framework (2004), http://www.globus.org/toolkit/docs/4.0/security/authzframe
A Local Graph Clustering Algorithm for Discovering Subgoals in Reinforcement Learning Negin Entezari, Mohammad Ebrahim Shiri, and Parham Moradi Department of Computer Science Amirkabir University of Technology Tehran, Iran {negin.entezari,shiri,pmoradi}@aut.ac.ir
Abstract. Reinforcement Learning studies the problem of learning through interaction with an unknown environment. Learning efficiently in large-scale problems and complex tasks demands a decomposition of the original complex task into simple and smaller subtasks. In this paper a local graph clustering algorithm for discovering subgoals is presented. The main advantage of the proposed algorithm is that only local information of the graph is used to cluster the agent's state space. The subgoals discovered by the algorithm are then used to generate skills. Experimental results show that the proposed subgoal discovery algorithm has a dramatic effect on learning performance. Keywords: Hierarchical Reinforcement Learning, Option, Skill Acquisition, Subgoal Discovery, Graph Clustering.
1 Introduction
Reinforcement Learning (RL) [1] studies the problem faced by an intelligent agent that must learn behavior through interactions with an unknown environment. In each step of interaction, the agent selects an action that changes the state of the environment and then receives a scalar reinforcement signal called the reward. The agent's objective is to learn a policy that maximizes the long-term reward. Large state spaces and the lack of immediate reward are crucial problems in the area of reinforcement learning. Two approaches which have been proposed to tackle these problems are function approximation [2] and task decomposition [3-6]. The main idea of Hierarchical Reinforcement Learning (HRL) methods is the decomposition of the learning task into simple and smaller subtasks, which increases the agent's learning performance. One common way to decompose a complex task into a set of simple subtasks is to identify important states known as subgoals and then learn sub-policies to reach these subgoals [7-14]. These sub-policies are called temporally extended actions, skills, or macro-actions. A macro-action, or temporally extended action, is a sequence of actions chosen from the primitive actions. A suitable set of skills can accelerate learning.

T.-h. Kim et al. (Eds.): FGCN 2010, Part II, CCIS 120, pp. 41–50, 2010. © Springer-Verlag Berlin Heidelberg 2010

It is desirable to devise methods by which an agent is
automatically able to discover skills and construct high-level hierarchies of reusable skills [8-19]. There are a variety of approaches to discovering subgoals automatically. Some approaches introduce highly visited states as subgoals [7-9]. Another group of approaches are graph-based methods that introduce the border states between densely connected regions as subgoals [10-13]. In these approaches, the agent's transition history is mapped to a graph: each observed state is a node and each transition between states is an edge. The graph nodes represent the state space and the edges represent the state transitions. Menache et al. [10] discovered bottleneck states by performing the Max-Flow/Min-Cut algorithm on the transition graph. In [12], the N-Cut algorithm is utilized to find landmark states of the local transition graph. In addition, researchers have recently shown interest in graph centrality measures for identifying important states; to this end Şimşek and Barto [14] utilized betweenness centrality. In this paper we present a local graph clustering algorithm that provides an appropriate partitioning of the input graph. The main characteristic of our clustering algorithm is that it only uses local information of the graph. Methods proposed earlier use global information of the graph and are inapplicable to large graphs. Şimşek et al. [12] addressed this problem by considering a local scope of the transition graph. They perform the Normalized-Cut algorithm on the local transition graph, which requires setting many parameters for different domains, while our local graph clustering algorithm is suitable for large graphs without any adjustable parameters. The rest of the paper is organized as follows. Reinforcement learning and its extension with macro-actions is described in Section 2. Section 3 describes the proposed method. The benchmark tasks, simulation and results are described in Section 4, and Section 5 contains the final discussion and concluding remarks.
2 Reinforcement Learning with Options
In RL, the environment is usually formulated as a finite-state Markov Decision Process (MDP). Due to the Markov property of the environment, the probability distribution of the new state and reward after executing an action depends only on the previous state and action, not on the entire history of states. An MDP consists of a set of states S, a set of actions A, a reward function R(s, a), and a transition function P(s′ | s, a). In each state s ∈ S, the agent chooses an action a ∈ A. The value of this state transition is communicated to the agent through a scalar reinforcement signal called the reward, R(s, a), and with probability P(s′ | s, a) the agent observes a new state s′ ∈ S. The agent's task is to optimize its action choices in such a way that it maximizes the expected discounted return:

R_t = Σ_{k=0}^{∞} γ^k r_{t+k+1},   (1)
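For a finite observed reward sequence, the return in (1) can be computed directly; a small illustrative helper:

```python
def discounted_return(rewards, gamma=0.9):
    """R_t = sum_k gamma^k * r_{t+k+1} over the observed rewards."""
    return sum(gamma**k * r for k, r in enumerate(rewards))

discounted_return([1, 1, 1], gamma=0.5)  # 1 + 0.5 + 0.25 = 1.75
```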
where γ is the discount rate, 0 ≤ γ ≤ 1. Q-Learning is the most commonly used algorithm in RL, and Macro Q-Learning is its hierarchical form including options. To represent skills, we use the option framework [4]. Options are a generalization of primitive actions that includes temporally extended courses of action. Even though adding options to the primitive actions offers more choices to the agent and makes decision making more complicated, the decomposition of the original task provided by options can simplify the task and facilitate learning. An option is a tuple ⟨I, π, β⟩, where I ⊆ S is the initiation set consisting of all the states in which the option can be initiated, π : S × O → [0, 1] is the option policy, where O is the set of all possible options, and β : S → [0, 1] is the termination condition giving the probability of terminating the option in each state. Macro Q-Learning updates the values of one-step options (primitive actions) and multi-step options. The update rule is:

Q(s, o) ← Q(s, o) + α [ r + γ^k max_{o′ ∈ O_{s′}} Q(s′, o′) − Q(s, o) ],   (2)

where k is the number of time steps for which the option o executed, and r is the cumulative discounted reward over this time, r = Σ_{i=0}^{k} γ^i r_{t+i+1}.
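Update rule (2) can be sketched as follows (the tabular Q dictionary and default step sizes are illustrative, and for simplicity all options are assumed available in the next state):

```python
def macro_q_update(Q, s, o, r, s_next, k, options, alpha=0.1, gamma=0.9):
    """SMDP Q-learning update for an option o that ran for k time steps,
    accumulating discounted reward r before terminating in s_next."""
    best_next = max(Q.get((s_next, o2), 0.0) for o2 in options)
    q = Q.get((s, o), 0.0)
    Q[(s, o)] = q + alpha * (r + gamma**k * best_next - q)
```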
3 Proposed Method
In this paper we utilize a local graph clustering algorithm to identify strongly connected components of the agent's transition graph, and we consider the border states of highly intra-connected regions as subgoals. A local graph clustering algorithm only uses local information to generate a clustering of the input graph. Local clustering algorithms address the complexity drawbacks of global clustering algorithms and are suitable for clustering large graphs. Given a graph G = (V, E), where V is the set of vertices and E is the set of edges, the goal of the clustering algorithm is to find a set of clusters X_i (i ∈ 1, …, k) which maximizes the intra-cluster connectivity and minimizes the inter-cluster connectivity. To this end the ratio R(X) is defined according to Definition 1. The outline of the proposed algorithm is shown in Fig. 1.

Definition 1. If X ⊆ V, the ratio R(X) is defined as follows:

R(X) = ( Σ_{i ∈ X, j ∈ b(X)} a_ij ) / ( Σ_{i ∈ n(X), j ∈ b(X)} a_ij ),   (3)

where b(X) is the set of border nodes of X, n(X) is the set of neighbors of the border nodes that are not members of X, and a_ij denotes the corresponding element of the adjacency matrix of G, that is:

a_ij = 1 if i and j are connected through an edge, and a_ij = 0 otherwise.   (4)
Neighbors that maximize the ratio R(X) are added to X until there are no more such neighbors.

Definition 2. The set of candidate nodes C(X) is defined as follows:

C(X) := argmax_{v ∈ n(X)} R(X + v)   (5)

Definition 3. The set of final candidate nodes that can be added to X is defined as follows:

C_f(X) := argmax_{u ∈ C(X)} Σ_{v ∈ n(X)} a_uv   (6)
Local graph clustering algorithm
foreach v ∈ V do
    X := {v};
    while |n(X)| > 0 do
        C(X) := argmax_{v ∈ n(X)} R(X + v);
        C_f(X) := argmax_{u ∈ C(X)} Σ_{v ∈ n(X)} a_uv;
        if R(X ∪ C_f(X)) ≥ R(X) then
            X := X ∪ C_f(X);
        else
            break;
        end
    end
end
merge overlapping clusters;

Fig. 1. The outline of the proposed local graph clustering algorithm
Nodes to be added to X are selected in two steps:
1. Determine the candidate nodes C(X) that can be added to X according to Definition 2.
2. To ensure that intra-cluster connectivity is also maximized in later steps, select, according to Definition 3, the nodes of C(X) that have maximum connectivity with the neighbor nodes; these form the final candidates.
All members of C_f(X) are then added to X if the following condition is satisfied:

R(X ∪ C_f(X)) ≥ R(X)   (7)
Fig. 2 shows different steps of the algorithm on a sample graph. The original graph is represented in Fig. 2(a).
(a) The original graph
(b) Maximal cluster for node 1
(c) Soft clustering of the original graph
(d) Result of merging overlapping clusters
Fig. 2. Different steps of the proposed graph clustering algorithm on a sample graph
Initially, each graph node v ∈ V (v = 1, …, 12) is considered a cluster, so in this sample graph there are 12 clusters at the beginning. For each node v, all neighbors of v are examined; candidate nodes are selected according to (6) and inserted into X. The neighbors of the new set X are repeatedly searched for candidates until the maximal set X is achieved. Let v = 1, so initially X = {1}. In this step, the neighbors of X are nodes 2 and 3. According to equation (3), R(X + 2) = 1 and R(X + 3) = 2/3, so node 2 is the only candidate node and is inserted into X = {1}. It can be seen that X = {1, 2} is a maximal set for v = 1, since adding node 3 or 4 to X would decrease the ratio R(X). Fig. 2(b) shows the maximal set X for node 1. These steps are repeated for all other nodes of V. As can be seen in Fig. 2(c), the result of this step is a soft clustering of the given graph. The overlapping clusters in Fig. 2(c) are then merged into one cluster. The final non-overlapping clusters, shown in Fig. 2(d), are the desired clusters. Finally, the border nodes of the clusters are identified as subgoals. In our example, nodes 4 and 9 are the border nodes of the clusters. Fig. 3 also shows the result of clustering the transition graph of a four-room gridworld.
Fig. 3. Result of clustering the transition graph of a four-room gridworld. Black nodes are border nodes of clusters.
4 Experimental Results
We experimented with the proposed algorithm in two domains: a four-room gridworld and a soccer simulation test bed. In our experimental analysis, the agent used an ε-greedy policy with ε set to 0.1. The learning rate α and discount rate γ were set to 0.1 and 0.9 respectively. The generated options terminated with probability one at the corresponding subgoal states indicated by the algorithm. The options also terminated with probability one at the goal state and outside the initiation set; at all other states they terminated with probability zero.
4.1 Four-Room Gridworld
The four-room gridworld [4] is shown in Fig. 4(a). From any state the agent can perform four actions: up, down, left and right. There is a reward of -1 for each action and a reward of 1000 for actions that take the agent to the goal state. We performed 100 randomly selected episodic tasks (random start and goal states), each consisting of 80 episodes.
4.2 Soccer Simulation Test Bed
The soccer domain, shown in Fig. 4(b), is a 6 × 9 grid. At each episode the ball is randomly located at one of two positions, B1 and B2. Two agents try to find and own the ball and score a goal. To score, the player owning the ball must reach the opponent's goal (the red line in Fig. 4(b)). Each agent has five primitive actions: North, East, South, West and Hold. The Hold action does not change the position of the agent. Agents receive a reward of -1 for each action and a reward of +100 for the action that finds and takes possession of the ball. The action that scores a goal yields a reward of +1000. If the agent owning the ball is about to enter the other
agent's location, the ball owner changes and both agents stay in their locations. If an agent is about to enter the same location as the other agent owning the ball, then with probability 0.8 the owner of the ball does not change and the agents stay in their locations.
(a)
(b)
Fig. 4. (a) Four-room gridworld (b) Soccer simulation test bed
4.3 Results
The transition graph of each domain was clustered by the proposed clustering algorithm, and the border states of the clusters were identified as subgoals. In the four-room gridworld, as illustrated in Fig. 4(a), the cells labeled 1, 2, …, 8 are identified as subgoals by our clustering algorithm.
Fig. 5. Comparison of Q-Learning and Q-Learning with options generated based on the subgoals extracted by the proposed algorithm in a four-room gridworld. Average number of steps to reach the goal is shown.
Fig. 5 shows the average number of steps to reach the goal. Compared to Q-Learning with only primitive actions, the skills improved performance remarkably. In addition, as can be seen in Fig. 6, the average reward obtained by the learning agent increased significantly.
Fig. 6. Average reward obtained by the agent, comparing Q-Learning with primitive actions and with skills
The same experiments were carried out in the soccer simulation domain and similar results were achieved. Fig. 7 compares the number of goals scored by the agent when learning with primitive actions only against learning with the additionally generated options. As expected, options speed up the learning process, and as a result the agent is able to score a larger number of goals.
Fig. 7. The number of goals scored by the agent, comparing Q-Learning with primitive actions and Q-Learning with options generated from the subgoals extracted by the proposed algorithm in the soccer simulation test bed.
5 Conclusion
This paper presents a graph-theoretic method for discovering subgoals by clustering the transition graph. The proposed algorithm is a local clustering algorithm that uses only local information to generate an appropriate clustering of the input graph. Global clustering algorithms have time complexity O(N^3), where N is the total number of visited states. The L-Cut algorithm [12], a local graph partitioning method, has complexity O(h^3), with h the number of states in the local scope of the transition graph. One drawback of the L-Cut algorithm is that a local cut may not be a global cut of the entire transition graph; another is that it demands setting many parameters. The proposed algorithm uses local information to generate a global clustering of the transition graph and has lower time complexity than global graph clustering algorithms. In addition, the algorithm needs no parameter setting. Our experiments in two benchmark environments show that discovering subgoals and including policies to achieve these subgoals in the action set can significantly accelerate learning in other, related tasks.
References
1. Kaelbling, L.P., Littman, M.L.: Reinforcement Learning: A Survey. J. Artificial Intelligence Research 4 (1996)
2. Bertsekas, D.B., Tsitsiklis, J.N.: Neuro-dynamic Programming. Athena Scientific (1995)
3. Parr, R., Russell, S.: Reinforcement learning with hierarchies of machines. In: Proc. 1997 Conference on Advances in Neural Information Processing Systems, Cambridge, MA, USA, pp. 1043–1049 (1997)
4. Sutton, R., Precup, D., Singh, S.: Between MDPs and semi-MDPs: a framework for temporal abstraction in reinforcement learning. J. Artificial Intelligence 112, 181–211 (1999)
5. Dietterich, T.G.: Hierarchical reinforcement learning with the MAXQ value function decomposition. J. Artificial Intelligence Research 13, 227–303 (2000)
6. Barto, A.G., Mahadevan, S.: Recent Advances in Hierarchical Reinforcement Learning. Discrete Event Dynamic Systems 13, 341–379 (2003)
7. Şimşek, Ö., Barto, A.G.: Learning Skills in Reinforcement Learning Using Relative Novelty, pp. 367–374 (2005)
8. Digney, B.L.: Learning hierarchical control structures for multiple tasks and changing environments. In: Proc. Fifth International Conference on Simulation of Adaptive Behavior: From Animals to Animats 5, Univ. of Zurich, Zurich, Switzerland (1998)
9. McGovern, A., Barto, A.G.: Automatic Discovery of Subgoals in Reinforcement Learning using Diverse Density. In: Proc. 18th International Conference on Machine Learning, pp. 361–368 (2001)
10. Menache, I., Mannor, S., Shimkin, N.: Q-Cut - Dynamic Discovery of Sub-goals in Reinforcement Learning. In: Elomaa, T., Mannila, H., Toivonen, H. (eds.) ECML 2002. LNCS (LNAI), vol. 2430, p. 295. Springer, Heidelberg (2002)
11. Mannor, S., Menache, I., Hoze, A., Klein, U.: Dynamic abstraction in reinforcement learning via clustering. In: Proc. 21st International Conference on Machine Learning, Banff, Alberta, Canada (2004)
50
N. Entezari, M.E. Shiri, and P. Moradi
12. Şimşek, Ö., Wolfe, A.P., Barto, A.G.: Identifying useful subgoals in reinforcement learning by local graph partitioning. In: Proc. 22nd International Conference on Machine Learning, Bonn, Germany (2005)
13. Jing, S., Guochang, G., Haibo, L.: Automatic option generation in hierarchical reinforcement learning via immune clustering. In: 1st International Symposium on Systems and Control in Aerospace and Astronautics, ISSCAA 2006, pp. 4, 500 (2006)
14. Şimşek, Ö., Barto, A.G.: Skill Characterization Based on Betweenness. In: Advances in Neural Information Processing Systems, vol. 21, pp. 1497–1504 (2009)
15. Jonsson, A., Barto, A.G.: Automated state abstraction for options using the U-tree algorithm. In: Advances in Neural Information Processing Systems: Proceedings of the 2000 Conference, pp. 1054–1060 (2001)
16. Elfwing, S., Uchibe, E., Doya, K.: An Evolutionary Approach to Automatic Construction of the Structure in Hierarchical Reinforcement Learning. In: Genetic and Evolutionary Computation, pp. 198–198 (2003)
17. Jonsson, A., Barto, A.: A causal approach to hierarchical decomposition of factored MDPs. In: Proc. 22nd International Conference on Machine Learning, Bonn, Germany (2005)
18. Jonsson, A., Barto, A.: Causal Graph Based Decomposition of Factored MDPs. J. Machine Learning Research 7, 2259–2301 (2006)
19. Mehta, N., Ray, S., Tadepalli, P., Dietterich, T.G.: Automatic discovery and transfer of MAXQ hierarchies. In: Proc. 25th International Conference on Machine Learning, Helsinki, Finland (2008)
Automatic Skill Acquisition in Reinforcement Learning Agents Using Connection Bridge Centrality

Parham Moradi, Mohammad Ebrahim Shiri, and Negin Entezari

Faculty of Mathematics & Computer Science, Department of Computer Science, Amirkabir University of Technology, Tehran, Iran
{pmoradi,shiri,negin.entezari}@aut.ac.ir
Abstract. Incorporating skills into reinforcement learning methods accelerates the agent's learning performance. The key problem of automatic skill discovery is to find subgoal states and create skills to reach them. Among the proposed algorithms, those based on graph centrality measures have achieved precise results. In this paper we propose a new graph centrality measure for identifying subgoal states, which is crucial for developing useful skills. The main advantage of the proposed centrality measure is that it considers both local and global information about the agent's states to score them, which results in identifying real subgoal states. We show through simulations on three benchmark tasks, namely the "four-room grid world", "taxi driver grid world" and "soccer simulation grid world", that a procedure based on the proposed centrality measure performs better than procedures based on other centrality measures.

Keywords: Reinforcement Learning, Hierarchical Reinforcement Learning, Option, Skill, Graph Centrality Measures, Connection Bridge Centrality.
1 Introduction

Reinforcement learning (RL) [1] is an appropriate machine learning technique when intelligent agents need to learn to act under delayed reward in unknown stochastic environments. It is well known that the state space in RL generally grows exponentially with the number of state variables. Approaches to contain the state space explosion include function approximation and state abstraction in hierarchical reinforcement learning (HRL) [2]. More recent approaches to HRL include Options [3], MAXQ [4] and HAM [5]. The main idea of HRL methods is to decompose the learning task into a set of simple subtasks. This decomposition simplifies the learning task by reducing the size of the state space, since every subtask considers only a smaller number of relevant states. Moreover, learning is accelerated since every separate subtask is easier to learn. A popular approach to defining subtasks is to identify important states which are useful to reach. These key states are called "subgoals" and the agent learns "skills" to reach them. A skill, or temporally extended action, is a closed-loop policy over one-step actions. A suitable set of skills can help improve the agent's efficiency in learning to solve difficult problems. We represent skills using the options framework [3].

T.-h. Kim et al. (Eds.): FGCN 2010, Part II, CCIS 120, pp. 51–62, 2010. © Springer-Verlag Berlin Heidelberg 2010
A number of methods have been suggested to identify subgoals automatically. The first is frequency based, which observes either how often a state has been visited by the agent during successful trials or how much reward a state has gained [6-9]. A difficulty with the frequency-based approach is that the agent may need excessive exploration of the environment in order to distinguish between "important" and "regular" states. The second approach is policy based, where the agent initially learns a task and then analyzes the learned policy for certain structural properties [10-11]. This approach may not prove useful in domains with delayed reinforcement. Finally, the third is graph theoretic, where the agent's transition history is mapped to a graph, and the states within densely connected regions are identified as subgoals [12-21]. In this approach subgoal states are identified using a max-flow/min-cut algorithm [20], while in [18-19, 21-22] the state space is partitioned using graph clustering algorithms. Moreover, in [16] a data mining algorithm is used to discover strongly connected regions in the graph. On the other hand, graph centrality measures have been used as an effective means not only to find subgoals but also to evaluate them [13-15, 17]. In [17] the betweenness centrality measure is used to find and rank subgoals. In previous work [13] we proposed a new centrality measure called connection graph stability to identify subgoal states, while in [14] the co-betweenness centrality measure is used to identify subgoals. Moreover, in [15] we incorporated prior knowledge using complex network measures to expedite the agent's learning performance. The main advantage of the graph-theoretic approach is that the agent's transition graph can be constructed without using reward information signals. Therefore, this approach can be used in domains with delayed reinforcement.
On the other hand, graph clustering methods suffer from the computational complexity O(n³) of partitioning the transition graph and finding subgoal states, where n is the number of nodes in the graph. Moreover, the shortest-path graph centrality measures can be computed in O(nm), where n and m are the number of nodes and edges in the graph, respectively. This complexity can be reduced to O(n²) for sparse graphs [23]. Alternatively, these measures can be computed using approximation methods with less computational effort. The graph centrality measures such as betweenness, connection graph stability and co-betweenness that have previously been used to identify subgoal states in RL [13-15, 17] consider only the global information of the nodes in the graph to rank them. It is well known that bottleneck states not only have global properties but also some local properties. In this paper we present a new graph centrality measure called Connection Bridge Centrality (CBC) to identify potential subgoals, which considers both local and global information about the states to score them. We show through simulation on several benchmarks that the CBC centrality measure performs better than the other graph centrality measures. The rest of the paper is organized as follows: in Section 2 we describe reinforcement learning basics and its extension with options. The proposed centrality measure (CBC) is described in Section 3. In Section 4 the skill acquisition algorithm is described. The benchmark tasks and simulation setup are described in Section 5, the results in Section 6, and Section 7 contains the final discussion and concluding remarks.
2 Reinforcement Learning with Options

The interaction of the agent and the environment can be represented using the Markov Decision Process (MDP) framework. A finite MDP is a tuple ⟨S, A, T, R⟩, where S is a finite set of states, A is a finite set of actions, T: S × A × S → [0, 1] is a state transition probability function and R: S × A → ℝ is a reward function. At each decision stage, the agent observes a state s ∈ S and executes an action a ∈ A; with probability T(s, a, s′) this results in a state transition to s′ ∈ S, and the agent obtains a scalar reward r. The agent's goal is to find a map from states to actions, called a policy, which maximizes the expected discounted reward over time, E[∑_t γ^t r_t], where 0 ≤ γ < 1 is a discount factor and r_t is the reward obtained at time t.

To represent skills, we use the options framework [3]. A (Markov) option is a temporally extended action, or macro action, specified by a triple ⟨I, π, β⟩, where I denotes the option's initiation set, i.e., the set of states in which the option can be invoked; π is the option's policy, mapping states belonging to I to actions; and β denotes the option's termination condition, where β(s) is the probability that the option terminates in state s. In this paper, the Macro-Q-Learning algorithm [3] is used to optimize policies for both primitive actions and options. The Q-function, which maps every state-action pair to the expected reward for taking that action in that state, is updated as follows:

Q(s_t, o) ← (1 − α) Q(s_t, o) + α [ r + γ^τ max_{o′} Q(s_{t+τ}, o′) ]   (1)

where τ is the actual duration of the option o, r is the reward accumulated while executing it, and α is the learning rate. The update rule for a primitive action is the same with τ = 1.
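Update rule (1) maps directly onto a table-based implementation. The sketch below is illustrative: it assumes the Q-table is stored as a dict of dicts and that `r` is the reward accumulated while the option ran; the function name and layout are not from the paper.

```python
def macro_q_update(Q, s, o, r, s_next, tau, alpha=0.1, gamma=0.9):
    """Macro-Q update, Eq. (1): tau is the option's actual duration
    (tau = 1 for a primitive action), so the bootstrap term is
    discounted by gamma**tau."""
    target = r + gamma ** tau * max(Q[s_next].values())
    Q[s][o] = (1.0 - alpha) * Q[s][o] + alpha * target
    return Q[s][o]
```

With the paper's settings (α = 0.1, γ = 0.9), a one-step update from a zero-valued entry toward a successor state valued 1.0 with reward 1.0 yields 0.1 · (1 + 0.9) = 0.19.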
3 Connection Bridge Centrality Measure

One can find different centrality measures in the literature, including betweenness, closeness, degree, eigenvector and information centrality. In this paper we extend our previous work [13] and propose a new centrality measure called Connection Bridge Centrality (CBC) that considers both local and global information of the nodes in the graph to score them. Consider a graph G = (V, E) with node set V and edge set E. We use n and m to denote the number of nodes and edges in the graph respectively, d(s, t) denotes the length of the shortest paths connecting s and t, and d_u(s, t) the length of the shortest path connecting s and t while passing through u. Moreover, σ_st is the total number of shortest paths between s and t and σ_st(u) denotes the number of shortest paths connecting s and t that pass through u. The following are common graph centrality measures:

Closeness Centrality [23]: CC(u) = (n − 1) / ∑_{t ∈ V, t ≠ u} d(u, t)   (2)

Betweenness Centrality [23]: BC(u) = ∑_{s ≠ u ≠ t} σ_st(u) / σ_st   (3)
Node Connection Graph Stability [13]: NCGS(u)   (4)

Most subgoal states are bottleneck states in the corresponding graphs. These states not only have global properties but also local properties that distinguish them from other states; for example, their degree distribution differs from that of their neighboring states. Based on this fact we propose a bridge coefficient that considers the local information of a node. The bridge coefficient BR(u) (5) is defined in terms of the degree d(u) of node u, the degrees of its neighbors N(u), and a constant value ε > 0.

The proposed bridge coefficient considers the local information of a node to score it. To capture the global information of the node, the closeness (2), betweenness (3) or node connection graph stability (4) centrality measures can be used. It has been shown that the NCGS centrality measure detects bottleneck states more precisely than the other presented centrality measures in RL environments [13]. Specifically, the Connection Bridge Centrality (CBC) measure is defined as follows:

CBC(u) = NCGS(u) · BR(u)   (6)

where NCGS(u) denotes the node connection graph stability measure (4) of node u and BR(u) the bridge coefficient (5) of node u. To show the effectiveness of the proposed centrality measure, it was tested on a small network; Figure 1 and Table 1 show the essence of the connection bridge centrality measure. If we select the top-ranked nodes - those scored above a predefined threshold - with the threshold set to t = 0.5, NCGS selects nodes A5, A6 and A7 while CBC selects only node A6 as a top-ranked node. Moreover, if we set the threshold to t = 0.2, NCGS identifies nodes A1, A3, A4, A5, A6, A7 and A8 as top-ranked nodes while CBC selects only A5 and A6. These results show that considering the local information of a node helps identify bottleneck nodes precisely.
Fig. 1. A small network example
Table 1. Centrality values of the nodes of the graph in Figure 1, including node degree, bridge coefficient BR, normalized node connection graph stability NCGS and normalized connection bridge centrality CBC
Node  Degree(u)  BR(u)  NCGS(u)  CBC(u)
A1    3          1.09   0.24     0.07
A2    2          3.62   0.17     0.17
A3    2          1.34   0.21     0.07
A4    2          1.94   0.256    0.138
A5    4          1.27   0.74     0.26
A6    2          3.62   1        1
A7    3          1.25   0.55     0.18
A8    2          1.35   0.22     0.08
A9    2          1.35   0.22     0.08
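As an illustration of how such a combined "global score times local bridge coefficient" measure can be computed, the sketch below uses Brandes' algorithm [23] for betweenness centrality as a stand-in for NCGS (whose exact definition is given in [13]). The particular form of `bridge_coefficient` and all function names are illustrative assumptions, not the paper's exact Eq. (5).

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm for betweenness centrality on an unweighted,
    undirected graph given as {node: set_of_neighbours}."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:                      # BFS, counting shortest paths
            v = q.popleft(); stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; pred[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                  # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: b / 2.0 for v, b in bc.items()}  # undirected: halve

def bridge_coefficient(adj, u):
    # assumed form: inverse degree of u over summed inverse neighbour degrees
    return (1.0 / len(adj[u])) / sum(1.0 / len(adj[v]) for v in adj[u])

def cbc_like(adj):
    """Global score (betweenness as NCGS stand-in) times local coefficient,
    normalised by the maximum as in Table 1."""
    bc = betweenness(adj)
    raw = {u: bc[u] * bridge_coefficient(adj, u) for u in adj}
    top = max(raw.values()) or 1.0
    return {u: raw[u] / top for u in adj}
```

On a toy graph made of two triangles joined through a single degree-2 bridge node, that bridge node receives the top normalized score, mirroring how CBC singles out A6 in Table 1.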
4 Skill Acquisition Algorithm

The simplest approach to creating useful skills is to generate many new skills and let the agent test them by adding them to its set of actions. However, due to the great size of the state space and the many actions and skills to select from, such an approach inevitably suffers from inefficiency. It is therefore necessary to find a small number of high-quality subgoals. To find the correct skills at reasonable cost, this paper focuses on discovering useful subgoals first, and then creates potential skills to accomplish them. In this section our proposed method for automatic skill acquisition by autonomous reinforcement learning agents is described. The method scores and identifies subgoals using graph centrality measures. The outline of the learning procedure is given in Algorithm 1. First the environment is explored by the agent and the agent's state transition history is mapped to a graph: each new state visited by the agent becomes a node in the graph and each observed transition s_i → s_j (s_i, s_j ∈ S) is translated to an arc (i, j). Then the top k scored nodes in the graph are identified as candidate subgoals, new skills to reach the discovered subgoals are created based on the options framework [3], and these skills are added to the agent's action space.

Algorithm 1. Skill acquisition algorithm
Repeat
  (1) Interact with the environment and learn using Macro-Q-Learning.
  (2) Save the state transition history.
  (3) If the stop conditions are met then
    (3.1) Translate the state transition history to a graph representation.
    (3.2) Score the graph nodes using the connection bridge centrality measure.
    (3.3) Identify the top k high-scored nodes as subgoals.
    (3.4) Learn options to reach the identified subgoals.
    (3.5) Add the new options to the agent's action space.
until no new states are found by the agent
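The graph-construction and subgoal-selection steps of Algorithm 1 can be sketched as follows; the scoring function is passed in as a parameter, and all names are illustrative assumptions rather than the paper's implementation.

```python
from collections import defaultdict

def build_transition_graph(history):
    """Map a list of observed (s, s_next) transitions to an undirected
    adjacency map, one node per visited state."""
    adj = defaultdict(set)
    for s, s_next in history:
        if s != s_next:
            adj[s].add(s_next)
            adj[s_next].add(s)
    return adj

def candidate_subgoals(history, score_fn, k=4):
    """Score every visited state with score_fn (e.g. a centrality measure
    over the adjacency map) and return the k highest-scored states."""
    adj = build_transition_graph(history)
    scores = score_fn(adj)
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

The candidates returned here are the states for which options would then be learned and added to the agent's action space (steps 3.4 and 3.5 of Algorithm 1).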
The complexity of computing the connection bridge centrality (CBC) measure equals that of the node connection graph stability centrality measure, because the computational complexity of the bridge coefficient is O(m), and the NCGS centrality may be computed in O(nm) time and O(n + m) space on unweighted graphs, where n and m are the number of nodes and edges in the corresponding graph of explored states respectively. On weighted graphs the space requirement remains the same but the time requirement increases to O(nm + n² log n) [23]. Because of the MDP properties and the limited number of actions, in most environments the agent's transition graph is sparse, so the time complexity of connection bridge centrality reduces to O(n²) and O(n² log n) on unweighted and weighted graphs respectively, and the space complexity reduces to O(n) in both cases.
5 Environment Benchmarks

We present an empirical evaluation of the proposed algorithm aimed at understanding whether the proposed method is effective in identifying the subgoal states of an environment and whether the skills it generates are useful. We present results in three environments: the four-room grid world [3], the taxi grid world [4] and the soccer simulation [16], which is more complex than the four-room and taxi grid worlds. Complementary details and results for these domains are described in the corresponding subsections.

5.1 Four-Room Grid World

The four-room grid world, shown in Fig. 2(a), consists of four rooms connected to each other through four doors. The agent is located at a randomly selected start point and asked to find a randomly selected goal point. The agent has four primitive actions, namely move up, down, left and right. We randomly select 60 tasks (namely 60 <start, goal> location pairs). Each task is performed 100 times (episodes). The agent receives a reward of 1000 at the goal state and a reward of -1 in all other states. The agent uses an ε-greedy policy to select actions, with ε = 0.1. The learning rate α and the discount factor γ are set to 0.1 and 0.9 respectively.

5.2 Taxi Driver Grid World

The taxi task has been a popular illustrative problem for RL algorithms since its introduction by Dietterich [4]. This domain, shown in Fig. 2(b), consists of a 5×5 grid with 4 special cells (RGBY). A passenger should be picked up from a cell and then dropped off at a destination cell, where the pickup and drop-off cells are two randomly chosen cells from the set of RGBY nodes. In each episode, the location of the taxi is chosen randomly. The taxi must pick up the passenger and deliver him, using the primitive actions up, down, left, right, Pickup and Putdown. For each iteration, a sequence of 300 episodes was considered.
The taxi receives a reward of +20 for successfully delivering the passenger, -10 for attempting to pick up or drop off the passenger at incorrect locations, and -1 for other actions. The other parameters were set the same as in the four-room grid problem.
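The ε-greedy rule used in these experiments (ε = 0.1) can be sketched as follows; the dict-based Q-table layout and function name are assumptions for illustration.

```python
import random

def epsilon_greedy(Q, state, actions, eps=0.1):
    """With probability eps pick a uniformly random action,
    otherwise pick a greedy one (ties broken randomly)."""
    if random.random() < eps:
        return random.choice(actions)
    best = max(Q[state][a] for a in actions)
    return random.choice([a for a in actions if Q[state][a] == best])
```

Setting eps=0 recovers pure greedy selection, which is useful when evaluating a learned policy rather than training it.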
Fig. 2. (a) Four-room grid world domain. (b) Taxi driver domain
5.3 Soccer Simulation Grid World

The soccer simulation domain was proposed by Kheradmandian and Rahmati [16] to evaluate skill discovery methods in RL. As can be seen in Fig. 6(a), it consists of a 6×10 grid environment, two goals, a ball and two agents. In each episode one agent tries to own the ball, move to the opponent's goal and score; the other agent tries to defend and take the ball. Each agent has five primitive actions: MoveLeft, MoveRight, MoveUp, MoveDown and Hold. The Hold action causes the agent to hold the ball and remain in its location. To score a goal an agent must hold the ball, move to one of the two states in front of the opponent's goal and perform a MoveRight (MoveLeft) action if the opponent's goal is on the right (left) side of the field. When an agent scores a goal, the opponent takes the ball and the two players are placed at specified locations in front of their own goals. Ball possession changes between the agents according to the following rules: (a) if the agent that does not hold the ball is about to enter the other agent's location, then with probability 0.8 possession of the ball does not change and the locations of the players remain unchanged; (b) if the agent that holds the ball is about to enter the location of the other, non-moving agent, then possession of the ball changes and the locations of the players remain unchanged. The agents receive -1 reward for each action, +10 for gaining the ball, -10 for losing the ball and +20 for scoring a goal. The agent uses an ε-greedy policy with ε = 0.1. The learning rate α and the discount factor γ are set to 0.1 and 0.9 respectively.
6 Performance and Evaluation

In the first step, the corresponding graph of states was scored using the three centrality-based scoring methods mentioned above. Figure 3 reports the scores assigned in the four-room grid task by the closeness (CC), betweenness (BC) and connection bridge centrality (CBC) methods respectively. Because of symmetry, only the scores of the first 65 nodes are shown in Fig. 3. It can be seen that for the four-room grid task the CBC measure assigns distinctly high scores to the door points (e.g. nodes 25, 51, 62 and 78) compared to the two other measures. All four doors
are detected using the proposed method for thresholds larger than 96%, but the other methods are not able to isolate the doors: they also assign high scores to door neighbors (e.g. nodes 24, 26, 43, 54, 70, 77 and 79) and either face false acceptance (e.g. for a threshold of 95%, CC found 10 additional nodes) or false rejection (e.g. BC discards the main doors for thresholds larger than 82%). These results show that the connection bridge centrality (CBC) assigns distinctly high scores to the bottleneck nodes compared with the closeness (CC) and betweenness (BC) centrality measures.
Fig. 3. Normalized scores assigned to the nodes by the closeness (CC), betweenness (BC) and connection bridge centrality (CBC) measures for the four-room grid world
As expected, the nodes around the main subgoals, e.g. the neighbors of the hallway doors in the four-room grid world, also have high centrality scores. By varying the threshold, different numbers of nodes are extracted as subgoals. Figure 4(a) compares the number of candidate subgoals identified in the four-room grid world by the different methods for different threshold values. When the threshold is set to 0.15, 104, 26 and 14 candidate subgoals are extracted using CC, BC and CBC respectively. When the threshold is set to 0.3, CBC identifies the real bottlenecks while BC and CC identify 104 and 13 candidate subgoals respectively. If we use redundant subgoals for creating skills we may add complexity and additional penalties for the agent while obtaining no benefit. The experiments were repeated for the taxi grid world and the same qualitative results were obtained. Figure 4(b) compares the number of candidate subgoals identified in the taxi grid world by the different methods when the threshold value slides from zero (extract all nodes) to one. It should be noted that the proposed centrality measure effectively reduces the sensitivity to the choice of threshold. To show the effectiveness of the proposed subgoal discovery algorithm, we repeat the experiments for the case where the agent extracts the subgoals by setting a threshold on the CC, BC and CBC scores. Figures 5(a) and 5(b) show the average obtained reward in the four-room grid world and the taxi world, when the threshold was set to 0.8 and 0.4 respectively. Skills are created in both domains based on subgoals extracted by applying the proposed algorithm or by thresholding the three mentioned centrality scores. The results are compared
with the case where the agent uses standard RL without skills. In this experiment, for the four-room (taxi driver) world, the agent identifies 12 (6), 16 (10) and 72 (152) subgoals using CBC, BC and CC respectively, and this point is reached after 23 (25), 28 (31) and 39 (50) episodes respectively. The agent reaches this point after 41 (152) episodes with the "without skill" approach.
Fig. 4. The number of identified subgoals obtained by applying closeness (CC), betweenness (BC) and connection bridge centrality (CBC) as a function of the threshold value for (a) the four-room grid world and (b) the taxi grid world
The experiments were repeated for the soccer simulation grid world and the same qualitative results were obtained. Fig. 6(b) shows the number of goals obtained by the agent for the different skill acquisition approaches, compared with the situation where the agent uses standard Q-learning without any skills. In these experiments the agent was able to score 200 goals after 730 time steps when the subgoals were extracted using the CBC centrality measure, while with BC and CC, when the threshold is set to 0.5, the agent scored the same number of goals after 848 and 1332 time steps respectively, and with the "without skill" approach after 1484 time steps.
Fig. 5. Average reward obtained in (a) the four-room grid world and (b) the taxi grid world, when the agent uses standard RL (i.e. without skills) and when it uses skills generated from subgoals extracted by the connection bridge centrality (CBC), betweenness centrality (BC) and closeness centrality (CC) measures
Fig. 6. (a) The soccer simulation grid world and (b) average number of goals scored in the soccer simulation when the agent uses standard RL (i.e. without skills) and when it uses skills generated from subgoals extracted by the proposed connection bridge centrality (CBC), betweenness centrality (BC) and closeness centrality (CC) measures
7 Conclusion

In this paper, a graph-theoretic skill acquisition algorithm was presented. In brief, the main contribution of the proposed method is to utilize complex network measures to improve the subgoal identification process. In particular, connection bridge centrality was defined and applied to rank agent states and extract candidate subgoals. Applying the proposed centrality measure to three benchmark problems improves the results of the skill acquisition process. The proposed method is also able to create skills incrementally: temporary skills are built from the explored states, and in later episodes, as more states are explored, new skills can be identified while redundant or weaker ones are removed. Further investigation of this issue and application of the proposed approach to more challenging environments are in progress. From a computational complexity point of view, the proposed method's run time is O(nm), where n and m are the number of nodes and edges in the corresponding graph of explored states, respectively. This complexity reduces to O(n²) for sparse graphs. This result is the same as [17] and comparable with the method proposed in [20], with O(n³) complexity, where n is the number of states, and [18], with O(h³) complexity, where h is the number of states observed in the last episode. Moreover, while other methods such as L-Cut [18] and Relative Novelty [8] include manually tuned parameters, the proposed method does not have any manually tuned parameters.
References
1. Kaelbling, L.P., Littman, M.L., Moore, A.W.: Reinforcement learning: a survey. Journal of Artificial Intelligence Research 4, 237–285 (1996)
2. Barto, A.G., Mahadevan, S.: Recent Advances in Hierarchical Reinforcement Learning. Discrete Event Dynamic Systems 13, 341–379 (2003)
3. Sutton, R., Precup, D., Singh, S.: Between MDPs and semi-MDPs: a framework for temporal abstraction in reinforcement learning. Artif. Intell. 112, 181–211 (1999)
4. Dietterich, T.G.: Hierarchical reinforcement learning with the MAXQ value function decomposition. J. Artif. Int. Res. 13, 227–303 (2000)
5. Parr, R., Russell, S.: Reinforcement learning with hierarchies of machines. In: Advances in Neural Information Processing Systems, pp. 1043–1049. MIT Press, Cambridge (1998)
6. Digney, B.L.: Learning hierarchical control structures for multiple tasks and changing environments. In: Proceedings of the Fifth International Conference on Simulation of Adaptive Behavior: From Animals to Animats 5, pp. 321–330. MIT Press, Univ. of Zurich, Zurich, Switzerland (1998)
7. McGovern, A., Barto, A.G.: Automatic Discovery of Subgoals in Reinforcement Learning using Diverse Density. In: Proc. 18th International Conference on Machine Learning, pp. 361–368. Morgan Kaufmann, San Francisco (2001)
8. Şimşek, Ö., Barto, A.G.: Learning Skills in Reinforcement Learning Using Relative Novelty, pp. 367–374 (2005)
9. Shi, C., Huang, R., Shi, Z.: Automatic Discovery of Subgoals in Reinforcement Learning Using Unique-Direction Value. In: IEEE International Conference on Cognitive Informatics, pp. 480–486 (2007)
10. Goel, S., Huber, M.: Subgoal Discovery for Hierarchical Reinforcement Learning Using Learned Policies, pp. 346–350. AAAI Press, Menlo Park (2003)
11. Asadi, M., Huber, M.: Autonomous subgoal discovery and hierarchical abstraction for reinforcement learning using Monte Carlo method. In: Proceedings of the 20th National Conference on Artificial Intelligence, vol. 4, pp. 1588–1589. AAAI Press, Pittsburgh (2005)
12. Kazemitabar, S., Beigy, H.: Automatic Discovery of Subgoals in Reinforcement Learning Using Strongly Connected Components. In: Proceedings of the 15th International Conference on Advances in Neuro-Information Processing, pp. 829–834 (2009)
13. Ajdari Rad, A., Moradi, P., Hasler, M.: Automatic Skill Acquisition in Reinforcement Learning using Connection Graph Stability Centrality. In: The IEEE International Symposium on Circuits and Systems, ISCAS 2010 (2010)
14. Moradi, P., Ajdari Rad, A., Khadivi, K., Hasler, M.: Automatic Discovery of Subgoals in Reinforcement Learning using Betweenness Centrality Measures. In: 18th IEEE Workshop on Nonlinear Dynamics of Electronic Systems, NDES 2010 (2010)
15. Moradi, P., Ajdari Rad, A., Khadivi, A., Hasler, M.: Automatic Skill Acquisition using Complex Network Measures. In: International Conference on Artificial Intelligence and Pattern Recognition, AIPR 2010 (2010)
16. Kheradmandian, G., Rahmati, M.: Automatic abstraction in reinforcement learning using data mining techniques. Robotics and Autonomous Systems 57, 1119–1128 (2009)
17. Şimşek, Ö., Barto, A.G.: Skill Characterization Based on Betweenness. In: Koller, D., Schuurmans, D., Bengio, Y., Bottou, L. (eds.) Advances in Neural Information Processing Systems, vol. 21, pp. 1497–1504 (2009)
18. Şimşek, Ö., Wolfe, A.P., Barto, A.G.: Identifying useful subgoals in reinforcement learning by local graph partitioning. In: Proceedings of the 22nd International Conference on Machine Learning, pp. 816–823. ACM, Bonn (2005)
19. Mannor, S., Menache, I., Hoze, A., Klein, U.: Dynamic abstraction in reinforcement learning via clustering. In: Proceedings of the Twenty-First International Conference on Machine Learning, p. 71. ACM, Banff (2004)
20. Menache, I., Mannor, S., Shimkin, N.: Q-Cut - Dynamic Discovery of Sub-goals in Reinforcement Learning. In: Elomaa, T., Mannila, H., Toivonen, H. (eds.) ECML 2002. LNCS (LNAI), vol. 2430, pp. 295–306. Springer, Heidelberg (2002)
21. Jing, S., Guochang, G., Haibo, L.: Automatic option generation in hierarchical reinforcement learning via immune clustering. In: 1st International Symposium on Systems and Control in Aerospace and Astronautics, ISSCAA 2006, pp. 4, 500 (2006)
22. Kazemitabar, S., Beigy, H.: Using Strongly Connected Components as a Basis for Autonomous Skill Acquisition in Reinforcement Learning. In: Yu, W., He, H., Zhang, N. (eds.) ISNN 2009. LNCS, vol. 5551, pp. 794–803. Springer, Heidelberg (2009)
23. Brandes, U.: A faster algorithm for betweenness centrality. Journal of Mathematical Sociology 25, 163–177 (2001)
Security Analysis of Liu-Li Digital Signature Scheme

Chenglian Liu1,4,*, Jianghong Zhang2, and Shaoyi Deng3

1 Department of Mathematics, Royal Holloway, University of London
[email protected]
2 College of Sciences, North China University of Technology
[email protected]
3 Department of Communication Engineering, Henan Polytechnic University
[email protected]
4 Department of Mathematics and Computer Science, Fuqing Branch of Fujian Normal University
Abstract. In 2008, Liu and Li proposed a digital signature scheme without using a one-way hash function or message redundancy. They claimed that their scheme is more efficient in computation and communication for small devices. In this paper, we point out a new attack showing that the Liu-Li scheme is insecure. We then give an improvement; the improved scheme is suitable for low-power computation and mobile devices.

Keywords: Forgery attack, Digital signature, Algebra structure.
1 Introduction

Most Internet applications need the help of digital signatures for authentication purposes, such as authenticating electronic tax reports, stock transactions, and electronic commerce deals. This is the real reason why digital signatures are so valuable in the modern digital data processing world. Several important works dealing with digital signatures have been proposed in [6] [7] [8] [9] [10] and [11]. Shieh et al. [5] first proposed a digital multisignature scheme for authenticating delegates in mobile code systems in July 2000. Hwang and Li [3] pointed out a forgery attack on the Shieh et al. scheme, and Wu and Hsu [4] demonstrated potential insider forgery attacks. In 2004, Chang and Chang [2] presented a new digital signature scheme that did not use one-way hash functions or message redundancy, and claimed that their scheme modified the properties of the Shieh et al. scheme. Later, Zhang [1] showed that the Chang-Chang version was still vulnerable to forgery attacks. Chien [12], Kang and Tang [13], and Liu and Li [14] also presented various attacks on these schemes. We enhance the Liu-Li scheme and propose a new scheme without using one-way hash functions or message redundancy; in this article, we preserve the attributes of the Liu-Li scheme to provide a safe environment. Section 2 briefly reviews the Liu-Li digital signature scheme and our attack method. Section 3 presents our improvement to the scheme and its security analysis. A conclusion is drawn in Section 4.
Corresponding author.
T.-h. Kim et al. (Eds.): FGCN 2010, Part II, CCIS 120, pp. 63–70, 2010. c Springer-Verlag Berlin Heidelberg 2010
2 Review of Liu-Li Scheme

2.1 Liu-Li Scheme

Set up two prime numbers p and q such that q | p − 1, where the typical sizes for these parameters are |p| = 1024 and |q| = 160 bits. Set up an element g ∈ Zp* of order q. The private key of the signer Ui is xi, where xi ∈ [1, p − 1] and gcd(xi, p − 1) = 1. Yi is the corresponding public key such that

Yi ≡ g^xi (mod p). (1)

There are two phases, described below.

A. Signature Generation Phase
Suppose Ui wants to sign the message mi ∈ [1, p − 1]. Then Ui does the following:
Step 1: Ui computes
si ≡ (Yi)^mi (mod q). (2)
Step 2: Ui randomly selects an integer ki, where ki ∈ [1, p − 1], and computes
ri ≡ mi · g^(−ki) (mod p). (3)
Step 3: Ui computes ti, where
ti ≡ xi^(−1) · (ki − ri·si) (mod q). (4)
Step 4: Ui sends the signature (si, ri, ti) of mi to the verifier V.

B. Verification Phase
After receiving the signature (si, ri, ti), V verifies the following:
Step 1: V computes
mi ≡ Yi^ti · ri · g^(ri·si) (mod p). (5)
Step 2: V checks
si ≡ (Yi)^mi (mod q). (6)
If it holds, V can be certain that (si, ri, ti) is indeed the signature generated by Ui on the recovered message mi.

Proof
mi ≡ Yi^ti · ri · g^(ri·si) (mod p)
≡ g^(xi·xi^(−1)·(ki − ri·si)) · mi · g^(−ki) · g^(ri·si) (mod p)
≡ g^(ki − ri·si) · g^(−ki + ri·si) · mi (mod p)
≡ mi (mod p).
Q.E.D.
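To make the algebra above concrete, the following is a minimal Python sketch of the Liu-Li scheme. The parameters p = 23, q = 11, g = 2 and the private key x = 7 are illustrative toy choices of ours, not values from the paper; a real deployment would use |p| = 1024 and |q| = 160 bits.

```python
# Toy sketch of the Liu-Li signature scheme of Section 2.1.
p, q, g = 23, 11, 2           # q | p - 1 and g has order q in Zp*
x = 7                         # signer's private key, gcd(x, p - 1) = 1
Y = pow(g, x, p)              # public key, Eq. (1)

def sign(m, k):
    s = pow(Y, m, q)                        # Eq. (2)
    r = (m * pow(g, -k, p)) % p             # Eq. (3)
    t = (pow(x, -1, q) * (k - r * s)) % q   # Eq. (4)
    return s, r, t

def verify(s, r, t):
    m = (pow(Y, t, p) * r * pow(g, r * s, p)) % p   # Eq. (5): recover m
    return m if s == pow(Y, m, q) else None          # Eq. (6): check s

m = 9
sig = sign(m, k=5)
assert verify(*sig) == m      # the recovered message equals the signed one
```

The three-argument `pow` with a negative exponent (modular inverse) requires Python 3.8 or later.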
2.2 Forgery Attacks on Liu-Li Scheme

Most protocol algorithms leak algebraic structure. Our method is a kind of algebraic attack on finite computation, and it succeeds in polynomial time. An attacker Eve performs the following steps to forge a signature (si, ri, ti).
Step 1: Eve randomly chooses an integer ti ∈ [1, q].
Step 2: Eve randomly chooses α ∈ [1, p/q] and computes
ri = α·q. (7)
Step 3: Eve computes
mi ≡ Yi^ti · ri (mod p). (8)
Step 4: Eve computes
si ≡ (Yi)^mi (mod q). (9)
The signature is forged once Eve finishes the above steps: she obtains the forged signature (si, ri, ti) of Ui. Verification of the forged signature (si, ri, ti) on message mi is shown below.

Proof
mi ≡ Yi^ti · ri · g^(ri·si) (mod p)
≡ Yi^ti · ri · g^(α·q·si) (mod p)
≡ Yi^ti · ri (mod p)    [g has order q, so g^(α·q·si) ≡ 1]
≡ mi (mod p).
Q.E.D.
Then the message mi passes the verification
si ≡ (Yi)^mi (mod q). (10)
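Using the same toy parameters as before (p = 23, q = 11, g = 2, public key Y = 13), the forgery steps above can be sketched as follows; the concrete choices t = 4 and α = 2 are arbitrary.

```python
# Sketch of the forgery attack of Section 2.2 on the Liu-Li scheme.
p, q, g = 23, 11, 2
Y = 13                        # signer's public key Y = g^x mod p (here x = 7)

# Eve uses only the public values (p, q, g, Y):
t = 4                         # Step 1: random t in [1, q]
alpha = 2                     # Step 2: random alpha in [1, p/q]
r = alpha * q                 # Eq. (7): g^(r*s) = (g^q)^(alpha*s) = 1
m = (pow(Y, t, p) * r) % p    # Step 3, Eq. (8)
s = pow(Y, m, q)              # Step 4, Eq. (9)

# The forged triple (s, r, t) passes the original verification:
m_recovered = (pow(Y, t, p) * r * pow(g, r * s, p)) % p   # Eq. (5)
assert m_recovered == m and s == pow(Y, m_recovered, q)   # Eq. (10)
```

The check passes precisely because r is a multiple of q, which cancels the g^(r·s) factor regardless of s.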
If the Liu-Li scheme is changed to
si ≡ Yi^mi (mod p), (11)
ti ≡ xi^(−1) · (ki − ri·si) (mod p − 1), (12)
with g ∈ Zp* of order (p − 1), our attack changes to the following steps:
Step 1: Eve computes
ri ≡ (p − 1)/2. (13)
Step 2: Eve randomly chooses ti ∈ [1, p − 1].
Step 3: Eve computes
mi ≡ Yi^ti · ri (mod p). (14)
66
C. Liu, J. Zhang, and S. Deng
Step 4: Eve computes
si ≡ (Yi)^mi (mod p). (15)
Step 5: If si ≡ 1 (mod 2), return to Step 2.
The signature is forged once Eve finishes the above steps: she obtains the forged signature (si, ri, ti) of Ui. Verification of the forged signature (si, ri, ti) on message mi is shown below.

Proof
mi ≡ Yi^ti · ri · g^(ri·si) (mod p)
≡ Yi^ti · ri · g^((p−1)·si/2) (mod p)
≡ Yi^ti · ri (mod p)    [si is even, so (p − 1)·si/2 is a multiple of p − 1]
≡ mi (mod p).
Q.E.D.
Then the message mi passes the verification
si ≡ Yi^mi (mod p). (16)
Since it holds, the verifier accepts (si, ri, ti) as a signature generated by Ui on the recovered message mi. Thus, the underlying Liu-Li digital signature scheme is not secure.
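The attack on the modified variant can be sketched in the same toy setting; here g = 5 is an illustrative primitive root mod p = 23 (so its order is p − 1 = 22), and x = 7 is again a hypothetical private key.

```python
# Sketch of the attack on the modified variant (Eqs. 11-16), where g is a
# primitive root mod p and signature values are taken mod p / mod p - 1.
import random

p, g = 23, 5                  # toy parameters; g has order p - 1 = 22
Y = pow(g, 7, p)              # signer's public key (private key x = 7)

r = (p - 1) // 2              # Step 1, Eq. (13)
while True:
    t = random.randrange(1, p)            # Step 2: random t in [1, p - 1]
    m = (pow(Y, t, p) * r) % p            # Step 3, Eq. (14)
    s = pow(Y, m, p)                      # Step 4, Eq. (15)
    if s % 2 == 0:                        # Step 5: retry until s is even
        break

# s even makes r*s a multiple of p - 1, so g^(r*s) = 1 and the check passes:
m_recovered = (pow(Y, t, p) * r * pow(g, r * s, p)) % p
assert m_recovered == m and s == pow(Y, m_recovered, p)   # Eq. (16)
```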
3 Our Methodology

3.1 Our Improvement

Let p be a large prime, and let g ∈ Fp* be a random multiplicative generator. The signer Ui chooses his/her private key xi, where xi ∈ [1, p − 1] and gcd(xi, p − 1) = 1, and computes the public keys

y1 ≡ g^xi (mod p), (17)
y2 ≡ g^(xi^2) (mod p). (18)

Signature Generation Phase
Step 1: Ui computes
si ≡ (y2)^mi (mod p). (19)
Step 2: Ui randomly selects an integer ki ∈ [1, p − 1] and computes
ri ≡ (si + mi · y1^(−ki)) (mod p). (20)
Step 3: Ui computes
ti ≡ xi^(−1) · (ki − ri − xi^(−1)·si) (mod p − 1). (21)
Step 4: Ui sends the signature (si, ri, ti) of mi to the verifier V.
Verification Phase: After receiving the signature (si, ri, ti), the receiver V can check the signature and recover the message mi as follows:
Step 1: V computes
mi ≡ y2^ti · (ri − si) · y1^ri · g^si (mod p). (22)
Step 2: V checks whether
si ≡ (y2)^mi (mod p). (23)
If it holds, V can be convinced that (si, ri, ti) is indeed the signature generated by Ui on the recovered message mi.

Proof
mi ≡ y2^ti · (ri − si) · y1^ri · g^si (mod p)
≡ y2^(xi^(−1)·(ki − ri − xi^(−1)·si)) · (ri − si) · y1^ri · g^si (mod p)
≡ y1^ki · y1^(−ri) · g^(−si) · mi · y1^(−ki) · y1^ri · g^si (mod p)
≡ mi (mod p).
Q.E.D.
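The improved scheme can likewise be sketched with toy values (p = 23, primitive root g = 5, private key x = 7 are our illustrative choices, not parameters from the paper).

```python
# Toy sketch of the improved scheme of Section 3.1.
p, g = 23, 5
x = 7                                   # private key, gcd(x, p - 1) = 1
y1 = pow(g, x, p)                       # Eq. (17)
y2 = pow(g, x * x, p)                   # Eq. (18)
x_inv = pow(x, -1, p - 1)

def sign(m, k):
    s = pow(y2, m, p)                                # Eq. (19)
    r = (s + m * pow(y1, -k, p)) % p                 # Eq. (20)
    t = (x_inv * (k - r - x_inv * s)) % (p - 1)      # Eq. (21)
    return s, r, t

def verify(s, r, t):
    # Eq. (22): recover m, then Eq. (23): check s
    m = (pow(y2, t, p) * (r - s) * pow(y1, r, p) * pow(g, s, p)) % p
    return m if s == pow(y2, m, p) else None

assert verify(*sign(9, k=5)) == 9       # a valid signature verifies
```

Note that the forgery of Section 2.2 no longer applies: g is a primitive root, so no small-order subgroup cancels the g^si and y1^ri factors.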
3.2 Security Analysis and Simulation Attacks

Security analysis.
Definition 1 (Discrete Logarithm Problem, DLP). DLP(p, g, y) is a problem that, on input a prime p and integers g, y ∈ Zp*, outputs x ∈ Zp−1 satisfying g^x ≡ y (mod p) if such an x exists; otherwise, it outputs ⊥. Strictly speaking, the above function, which outputs ⊥ if there is no solution to the query, should be expressed as DLP*, and the notation DLP should be used only for a weaker function for which nothing is specified about its behavior when the query has no solution.
Definition 2 (Computational Square-Root Exponent, CSRE). CSRE(p, g, y) is a problem that, on input a prime p and integers g, y ∈ Zp*, outputs g^(x^(1/2)) (mod p) for x ∈ Z*p−1 satisfying y ≡ g^x (mod p) if such an x exists; otherwise, it outputs ⊥. According to the notation used in [15], the above function, which outputs ⊥ if there is no solution to the query, should be expressed as CSRE*, and the notation CSRE should be used only for the weaker function whose behavior is unspecified when there is no solution. However, since we evaluate only the stronger problems, we omit the asterisk throughout the paper for the sake of simplicity.
Attack Scenarios. There are three simulated scenarios to discuss. Since the public parameters are available in the network environment, an attacker can easily get hold of this information and may probe for any hole.

Scenario 1: An attacker Eve wants to obtain the signer Ui's private key xi from a valid signature (si, ri, ti). As observed in Section 3.1, it is known that
ti ≡ xi^(−1) · (ki − ri − xi^(−1)·si) (mod p − 1); (24)
that is,
xi^(−2)·si + xi^(−1)·(ri − ki) + ti ≡ 0 (mod p − 1). (25)
If Eve wants to obtain xi, she has to know the value of ki from
ri ≡ (si + mi · y1^(−ki)) (mod p). (26)
She can calculate
(ri − si) · (mi)^(−1) ≡ y1^(−ki) (mod p). (27)
It is infeasible for her to retrieve ki, since she faces the discrete logarithm problem even if ri, mi and si are known. So Eve cannot calculate Ui's private key xi from a valid signature (si, ri, ti). Thus, this attack cannot succeed against our scheme.

Scenario 2: Eve wants to impersonate Ui and generate a valid signature on a message mi. She can compute
si ≡ (y2)^mi (mod p), (28)
and, choosing some ki,
ri ≡ (si + mi · y1^(−ki)) (mod p). (29)
However, Eve does not know xi, so she cannot compute ti, where
ti ≡ xi^(−1) · (ki − ri − xi^(−1)·si) (mod p − 1). (30)
That is to say, she cannot generate a valid signature (si, ri, ti) for the message mi.

Scenario 3: Eve wants to mount a forgery attack on the proposed scheme. To forge a valid signature (si, ri, ti) of Ui, she may proceed as follows.
Step 1: Eve randomly selects numbers mi, β ∈ [1, p − 1].
Step 2: Eve computes
ri ≡ (y2^β · mi + si) (mod p), (31)
si ≡ (y2)^mi (mod p). (32)
Step 3: Eve computes
mi ≡ y2^ti · (ri − si) · y1^ri · g^si ≡ mi · y2^(ti+β) · y1^ri · g^si (mod p). (33)
Step 4: For the verification to hold, Eve needs
y2^(ti+β) · y1^ri · g^si ≡ 1 (mod p), (34)
that is,
y2^ti ≡ y2^(−β) · y1^(−ri) · g^(−si) (mod p). (35)
Eve cannot compute ti, so she fails in this attack. Alternatively:
Step 1: Eve randomly selects numbers ti, β ∈ [1, p − 1] and sets
mi ≡ y2^ti · y1^β (mod p). (36)
Step 2: Eve computes, for some exponent δ,
ri ≡ (g^(−si) · y1^δ + si) (mod p), (37)
si ≡ (y2)^mi (mod p). (38)
Step 3: For the verification to hold, Eve needs
y2^ti · y1^β ≡ mi ≡ y2^ti · (ri − si) · y1^ri · g^si (mod p) (39)
≡ y2^ti · y1^(ri+δ) (mod p), (40)
so that
β ≡ ri + δ (mod p − 1), (41)
and hence
ri ≡ (g^(−si) · y1^(β−ri) + si) (mod p). (42)
Eve cannot compute ri from this circular condition, so she fails in this attack as well.
4 Conclusions

We proposed an improvement to the Liu-Li scheme that uses neither one-way hash functions nor message redundancy. In our scheme, an attacker who tries to compute the private key faces the discrete logarithm problem. Moreover, according to our simulation of three attack scenarios, an attacker who obtains any public parameters, including the signature triple, still cannot infer the secret key or value. Thus, our scheme is secure and well suited to low-power computation and mobile devices.
Acknowledgments

The authors would like to thank the anonymous reviewers for their valuable comments. This research was supported in part by the National Natural Science Foundation of China (No. 60703044), the NOVA Program (No. 2007B-001), the PHR fund, the Program for New Century Excellent Talents in University (NCET-06-188), and the Fuqing Branch of Fujian Normal University of China under contract number KY2010-030.
References
1. Zhang, F.: Cryptanalysis of Chang et al.'s signature scheme with message recovery. IEEE Communications Letters 9, 358–359 (2005)
2. Chang, C.-C., Chang, Y.-F.: Signing a digital signature without using one-way hash functions. IEEE Communications Letters 8, 485–487 (2004)
3. Hwang, S.-J., Li, E.-T.: Cryptanalysis of Shieh-Lin-Yang-Sun signature scheme. IEEE Communications Letters 7, 195–196 (2003)
4. Wu, R.-C., Hsu, C.-R.: Cryptanalysis of digital multisignature schemes for authenticating delegates in mobile code systems. IEEE Transactions on Vehicular Technology 52, 462–464 (2003)
5. Shieh, S.-P., Lin, C.-T., Yang, W.-B., Sun, H.-M.: Digital multisignature schemes for authenticating delegates in mobile code systems. IEEE Transactions on Vehicular Technology 49, 1464–1473 (2000)
6. Elgamal, T.: A public key cryptosystem and a signature scheme based on discrete logarithms. IEEE Transactions on Information Theory 31, 469–472 (1985)
7. Harn, L.: New digital signature scheme based on discrete logarithm. Electronics Letters 30, 396–398 (1994)
8. Hwang, S.J., Chang, C.C., Yang, W.P.: An Encryption Signature Scheme with Low Message Expansion. Journal of the Chinese Institute of Engineers 18, 591–595 (1995)
9. Nyberg, K., Rueppel, R.A.: Message Recovery for Signature Schemes Based on the Discrete Logarithm Problem. In: De Santis, A. (ed.) EUROCRYPT 1994. LNCS, vol. 950, pp. 182–193. Springer, Heidelberg (1995)
10. Piveteau, J.M.: New signature scheme with message recovery. Electronics Letters 29, 2185 (1993)
11. Shao, Z.: Signature scheme based on discrete logarithm without using one-way hash function. Electronics Letters 34, 1079–1080 (1998)
12. Chien, H.-Y.: Forgery attacks on digital signature schemes without using one-way hash and message redundancy. IEEE Communications Letters 10, 324–325 (2006)
13. Kang, L., Tang, H.: Digital signature scheme without hash function and message redundancy. Journal on Communications 27, 18–20 (2006) (in Chinese)
14. Liu, J., Li, J.: Cryptanalysis and Improvement on a Digital Signature Scheme without using One-way Hash and Message Redundancy. In: International Conference on Information Security and Assurance (ISA 2008), pp. 266–269. SERSC, Korea (2008)
15. Konoma, C., Mambo, M., Shizuya, H.: Complexity Analysis of the Cryptographic Primitive Problems through Square-Root Exponent. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E87-A, 1083–1091 (2004)
An Optimal Method for Detecting Internal and External Intrusion in MANET

Marjan Kuchaki Rafsanjani, Laya Aliahmadipour, and Mohammad M. Javidi

Department of Computer Science, Shahid Bahonar University of Kerman, Kerman, Iran
[email protected],
[email protected],
[email protected]

Abstract. A Mobile Ad hoc Network (MANET) is formed by a set of mobile hosts that communicate among themselves through radio waves. The hosts establish their infrastructure and cooperate to forward data in a multi-hop fashion without a central administration. Due to their communication type and resource constraints, MANETs are vulnerable to diverse types of attacks and intrusions. In this paper, we propose a method for preventing internal intrusion and detecting external intrusion in a mobile ad hoc network by using game theory. One optimal solution for reducing the resource consumption of external intrusion detection is to elect a leader for each cluster to provide the intrusion detection service to the other nodes in its cluster; we call this moderate mode. Moderate mode is only suitable when the probability of attack is low. Once the probability of attack is high, victim nodes should launch their own IDSs to detect and thwart intrusions; we call this robust mode. In this paper, the leader should not be a malicious or selfish node and must detect external intrusion in its cluster with minimum cost. Our proposed method has three steps: in the first step, we build trust relationships between nodes and estimate a trust value for each node to prevent internal intrusion; in the second step, we propose an optimal method for leader election using these trust values; and in the third step, we find the threshold value for notifying the victim node to launch its IDS once the probability of attack exceeds that value. In the first and third steps we apply Bayesian game theory. Owing to the use of game theory, trust values and an honest leader, our method can effectively improve network security and performance and reduce resource consumption.

Keywords: Mobile Ad hoc Network (MANET); Intrusion Detection System (IDS); Cluster leader; Trust value; Game theory.
1 Introduction
Mobile ad hoc networks (MANETs) represent a relatively new wireless communication paradigm. MANETs do not require expensive base stations or a wired infrastructure. Nodes within radio range of each other can communicate directly over wireless links, and those that are far apart use other nodes as relays [1]. For example, a MANET could be deployed quickly for military communications on the battlefield. Due to their communication type and constrained resources, MANETs are vulnerable to diverse types of attacks and intrusions [2]. Intrusion Detection Systems (IDS) are security tools that, like other measures such as antivirus software, firewalls and access control schemes, are intended to strengthen the security of information and communication
T.-h. Kim et al. (Eds.): FGCN 2010, Part II, CCIS 120, pp. 71–82, 2010. © Springer-Verlag Berlin Heidelberg 2010
systems [3]. Cooperation among nodes is a crucial requirement for intrusion detection in mobile ad hoc networks due to their autonomous nature [4]. Cooperation usually requires all the nodes to launch their own IDSs, which increases the detection capability but also the resource consumption, and nodes in a MANET have only limited resources. A common approach for reducing the overall resource consumption of intrusion detection is for nodes to acquiesce in electing a leader to serve as the intrusion detection system (IDS) for a cluster of one-hop nodes [5]. To prevent internal intrusion due to selfish or malicious nodes, we must first build trust relationships between the nodes. Trust is defined as "a set of relations among entities that participate in a protocol. These relations are based on the evidence generated by the previous interactions of entities within a protocol. In general, if the interactions have been faithful to the protocol, then trust will accumulate between these entities." Trust has also been defined as the degree of belief about the behavior of other entities or agents [6]. Therefore, building trust relationships between nodes in a MANET plays a significant role in improving network security, performance and quality of service. We introduce methods of calculating the trust value and explain the Bayesian game between neighboring nodes, based on [7]; this method is used because it converges quickly, since trust relationships are only established among neighbor nodes. After this phase, we elect a trusted leader with the least cost for each cluster of one-hop nodes. In the third phase, following [5], we use a Bayesian game between the cluster leader and an external intruder to find the threshold value for notifying the victim node to launch its IDS once the probability of attack exceeds that value.
In this paper, the combined use of game theory in these different roles lets us address both types of intrusion, increasing network security while reducing resource consumption.
2 Our Proposed Method

Our proposed method has three phases, organized as follows: first, we establish trust relationships between neighboring nodes to prevent internal intrusion, based on the scheme proposed by Jiang et al. [7]; then we propose our leader election scheme; and in the last phase we present a method for detecting external intrusion based on the game proposed by Otrok et al. [5].

2.1 Trust Establishment Relationship

In a mobile ad hoc network, due to the lack of a routing infrastructure, nodes have to cooperate to communicate. Nodes are rational: their actions are strictly determined by self-interest. Therefore, misbehavior exists. Malicious nodes join the network with the intent of harming it by causing network partitions, denial of service, etc., while selfish nodes utilize services provided by others but do not reciprocate, in order to preserve their resources. To save battery and bandwidth, a node would prefer not to forward packets for others; if this dominant strategy is adopted by all, however, all nodes are worse off. Therefore, an ideal scheme is needed to give nodes an incentive to cooperate [7]. In most existing research on trust establishment in MANET, the trustor ranks the trust level of the trustee using an evaluation model based on direct and indirect evidence collected respectively [8],[9]. The advantage of this approach
is that the trust value of the trustee is computed based on a comprehensive investigation of the whole network; therefore, the trust value is more accurate and objective. On the other hand, in order to bootstrap the process of trust establishment, existing approaches subjectively assign a default trust value, such as 0.5, to all trustees in the bootstrapping phase. That is, from a new node's point of view, all other nodes have the same trust level. This may create a hidden danger by not distinguishing between favorable nodes and malicious ones. The authors in [7] propose a trust establishment scheme for MANET based on game theory. In their scheme, trust is regarded as a node's private estimation about other nodes. Without using the indirect evidence often adopted in traditional approaches, their trust evaluation model is based on game results and history interaction information. This method is used because it converges quickly, since trust relationships are only established among neighbor nodes. We first introduce game theory and then present the game for estimating trust values.

Game Theory. Game theory [10] has been successfully applied to many disciplines, including economics, political science, and computer science. Game theory usually considers a multi-player decision problem where multiple players with different objectives compete and interact with each other. It classifies games into two categories: non-cooperative and cooperative. Non-cooperative games are games with two or more players competing with each other, while cooperative games are games with multiple players cooperating with each other in order to achieve the greatest possible total benefit. A game consists of a set of players, a set of moves (or strategies) available to those players, and a specification of payoffs for each combination of strategies. A player's strategy is a plan of actions for each possible situation in the game.
A player's payoff is the amount that the player wins or loses in a particular situation in a game. A player has a dominant strategy if that player's best strategy does not depend on what other players do [11]. To predict the optimal strategy used by intruders to attack a network, the authors of [12] build a non-cooperative game-theoretic model to analyze the interaction between intruders and the IDS in a MANET; they solve the problem using a basic signaling game, which falls under the gambit of multi-stage dynamic non-cooperative games with incomplete information. In [7], the authors propose a Bayesian game between neighboring nodes for estimating trust values for each other. Otrok et al. [5] address the tradeoff between security and resource consumption with a nonzero-sum non-cooperative game based on Bayesian Nash equilibrium, which models the interaction between the leader and an external intruder; in this game the intruder has complete information about the leader, but the leader does not have complete information about the intruder. The solution of such a game guides the IDS to inform the victims to launch their IDSs according to the game-derived threshold.

Network Model and Trust Value Computation. We use an undirected graph G = <V, E> to model a mobile ad hoc network, where V is the set of nodes and E is the set of edges; in other words, a pair of nodes with a common edge are neighbors of each other. We denote by Ni the set of all neighbors of node i, and |Ni| represents the number of nodes in Ni, i.e., the degree of node i in the graph G. In [8] it is assumed that each node has a property set Θi = <Θi(t), Hi(t), |Ni|(t)> at time t, where Θi(t) is the
energy utilization rate of vi at time t, Hi(t) = {h_i^j(t) | j = 1, ..., |Ni|} is the set of interaction history records, such as packet-forwarding behavior, about all nodes in Ni, and |Ni|(t) is the number of i's neighbors. h_i^j(t) = <f_i^j, R_i^j> is the interaction history record of node i on node j: f_i^j is the number of packets actually forwarded by vj on behalf of vi, and R_i^j is the number of all packets that vi asked vj to forward up to time t. Θi(t) is private information of vi that other nodes do not know. The information about node properties and interaction history records is indispensable for trust evaluation. Therefore, each node must have some storage space, called its information base; each entry of the information base records one neighbor node's information: the node's properties, trust value, and interaction records. The authors in [7] propose the trust value of node i on node j, Ti^j, as

Ti^j = (1 − α) · Ti^(j,g) + α · Ti^(j,o). (1)
In equation (1), Ti^(j,g) is the value of node i on node j predicted by game analysis based on the nodes' properties. To obtain this value, node i must play games with its neighbor and estimate the optimal expected utility Ui^j brought to it by the neighbor node j; then we can compute

Ti^(j,g) = Ui^j / Σ_(j∈Ni) Ui^j. (2)
For the detailed analysis, refer to [9].

Ti^(j,o) = f_i^j / R_i^j (3)
Ti^(j,o) is the observed value obtained from the direct interaction history. α is the weight factor reflecting the preferences: if there is no history, α = 0, and as interaction records accumulate, α increases gradually. The game is played between node i and node i's neighbors. Since the players do not have complete information about each other, a Bayesian game is used to estimate the trust value of node j by node i according to equation (1). Assume a two-player Bayesian game Γ = <N, Θ, A, U>: N = {a, b} is the set of players, Θ is the set of player types, Ai is the action set of player i, and Ui is the utility function set of player i. Each player chooses its action based on its own type, so the utility is a function of strategy and type. In this step we need the computation of Ti^(j,g) and Ti^(j,o).

Calculation of Ti^(j,g). In a MANET, the energy Ei of each node i is limited. The energy allocated to forwarding packets for others is called forward-energy, while the energy a node reserves to handle its own business, such as numerical computation and data generation, is called self-energy. It is assumed that node i has the action space <ai1, ai2> in the energy distribution game, where ai1 is the amount of self-energy and ai2 is the amount of forward-energy. Obviously, ai1 and ai2 satisfy the condition ai1 + ai2 ≤ Ei, and Ei changes dynamically with the passing of time and the increasing number of interactions. The utility function of node i is ui = ui[(a01, a02), (a11, a12); θ]. Suppose ui takes the form of a weighted sum of the profits in the two energy areas:

ui = β · (xi + ai1) + γ · (yj + aj2), (4)
where j = 1 − i, β + γ = 1, and xi and yj are constants that represent the pre-existing profit foundations in the self-energy and forward-energy areas. The constraint condition is ai1 + ai2 ≤ Ei. So we let

Ti^(j,g) = Ui^(j*) / Σ_(j∈Ni) Ui^(j*), (5)

where Ui^(j*) denotes the optimal expected utility obtained from the game analysis.
Another method is to integrate a larger number of interaction records:

Ti^(j,o)(k) = [ Σ_(n=k−c..k−1) f_i^j(n) ] / [ Σ_(n=k−c..k−1) R_i^j(n) ], (6)
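As an illustration of the trust computation, the following Python sketch evaluates the observed component over a sliding window of the last c interaction records, as in Eq. (6), and blends it with a game-based component as in Eq. (1). All concrete numbers here (the history records, T_game = 0.5, alpha = 0.6) are hypothetical examples, not values from the paper.

```python
# Sketch of the trust-value computation of Eqs. (1), (3) and (6).
def observed_trust(history, c):
    """history: list of (f, R) pairs: packets forwarded vs. requested."""
    window = history[-c:]                      # last c records, Eq. (6)
    forwarded = sum(f for f, _ in window)
    requested = sum(R for _, R in window)
    return forwarded / requested if requested else 0.0

def trust_value(T_game, T_obs, alpha):
    # Eq. (1): weighted average of game-based and observed trust
    return (1 - alpha) * T_game + alpha * T_obs

history = [(8, 10), (5, 10), (9, 10)]  # node j forwarded 8/10, 5/10, 9/10
T_obs = observed_trust(history, c=3)   # 22/30
alpha = 0.6   # grows with the number of interactions, cf. Eq. (7)
T = trust_value(0.5, T_obs, alpha)     # approximately 0.64
```

With c = 1 the computation degenerates to the single-record form of Eq. (3); a larger c smooths out isolated misbehavior.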
In equation (6), c is the number of interactions in the history. Clearly, c = k − 1 means integrating all the history records into the estimation of Ti^(j,o)(k). In a realistic environment, recording all history information is impossible for a node; therefore, the value of c should be determined in accordance with the actual situation.

Calculation of the weight factor. The weight factor α is important for the weighted average value of Ti^j. Assume the number of interactions between node i and node j is δ(i, j). Then we can calculate the weight factor as

α = δ(i, j) / Σ_(j∈Ni) δ(i, j), (7)
where Σ_(j∈Ni) δ(i, j) ≠ 0 (if it is equal to 0, there has been no interaction between the nodes). Obviously, the larger δ(i, j) is, the larger α is: it shows that node i and node j have a close relationship, so the calculation of the trust value should rely more on direct observation. After estimating the trust values, nodes thus know the behavior of their neighbors. We define a threshold trust value T0 that depends on the network application: if the application is confidential, we let T0 >> 0.5; otherwise T0 = 0.5. When node i wants to forward packets via its neighbors, it first looks up Ti^j (j ∈ Ni) in its information base and chooses the node j with the highest trust value, so a selfish or malicious node is denied network services. The described trust evaluation process is classified into three phases: the initial phase, the update phase and the re-establish phase [13]. Here we introduce the initialization phase: when node i enters the MANET for the first time, it should evaluate the trust value of all neighbors. This process is called trust relationship initialization. Before initialization, the information base of node i is empty. First, node i discovers all neighbors by broadcasting a hello request within one-hop range. After that, node i evaluates the trust value of the neighbors using equation (1) described in the previous sections; at this point, at the start of the initialization phase, node i has no history information about its neighbors.

Initialization of Trust Relationship Phase.
Step 1: Update the neighbor set Ni.
1.1 Node i sends a hello(i) message to all nodes within its radio range.
1.2 Node j, on receiving hello(i), sends reply(j) to node i and adds node i to its own neighbor set.
1.3 After a time delay, node i builds the set Ni according to the received reply(j) messages.
Step 2: Update the trust value Ti^j.
2.1 Node i plays the game with neighbor node j and calculates Ti^(j,g) and Ui^j.
2.2 Node i reads the history records about node j and calculates Ti^(j,o).
2.3 Node i integrates Ti^(j,g) and Ti^(j,o) into the trust value Ti^j.
Step 3: Update the information base.
In this section we applied the trust relationship establishment phase, so nodes can estimate their neighbors' behavior. If a node is malicious or selfish, its neighbors estimate a low trust value for it, and it is denied network services or removed. But if the malicious or selfish node plays an important role in the network, for example as a bridge or gateway, we cannot remove it, since losing it would partition the network and nodes would not be able to communicate between the clusters. Therefore, in the next section we propose a leader-IDS election scheme to continually examine the behavior of malicious or selfish nodes.

2.2 Our Leader Election Scheme

Related Work. Most existing research on cluster leader election in MANET bases the election process on one of the following models. Random [14]: each node is equally likely to be elected, regardless of its remaining resources and its type. Connectivity index [15]: this approach elects a node with a high degree of connectivity even though that node may have scarce resources; with both of these election schemes, some nodes will die faster than others, leading to a loss in connectivity and potentially the partition of the network. Weight-based model [16]: this model elects the node with the most remaining resources without considering the node's type (selfish or malicious). A hierarchical, cluster-based protocol for electing a leader in a mobile ad hoc network is given in [18]. The last method is the scheme proposed by Mohammed et al.: a mechanism design-based multi-leader election scheme [16]. We investigated the advantages and disadvantages of this last method and then improved the method proposed by Mohammed et al. [16].
In this approach the authors consider appropriate criteria for electing the most cost-efficient, normal-type node as leader, and punish malicious nodes. To motivate nodes to behave normally in every election round, they tie the detection service to the nodes' reputation values. The design of incentives is based on a classical mechanism design model, namely Vickrey, Clarke and Groves (VCG) [19]. The model guarantees that truth-telling is always the dominant strategy for every node during each election. The authors justify the correctness of the proposed method through analysis and simulation, and empirical results indicate that their mechanism can effectively improve the overall lifetime and effectiveness of IDSs in a MANET, thereby motivating nodes to behave normally during the leader election. However, a malicious node can disrupt their election algorithm by claiming a fake low cost just to be elected as leader. Once elected, the node does not provide IDS services, which eases the job of intruders. To catch and punish a misbehaving leader who does not serve others after being elected, the authors proposed a decentralized catch-and-punish mechanism using random checker nodes to monitor the behavior of the leader [16]. To improve the performance and reduce the false-positive rate of the checkers in catching a misbehaving leader, they also formulated a cooperative game-theoretic model to efficiently catch and punish misbehaving leaders with lower false-positive
An Optimal Method for Detecting Internal and External Intrusion in MANET
77
rates. This scheme can certainly be applied to thwart malicious nodes by catching and excluding them from the network. However, although it uses appropriate criteria for electing the leader, it increases the overhead on the network. In this paper we use the trust value of each node to estimate the node's behavior in the leader election process. We improve the scheme proposed by Mohammed et al. [16] by establishing trust relationships between neighboring nodes instead of using the VCG mechanism and checker nodes in the leader election. Our method elects the leader that is most cost-efficient and behaves normally; the calculation of each node's cost of analyzing packets is presented in [16]. In our approach, trust relationships between nodes must be established before the leader election mechanism runs, so that selfish or malicious nodes can be recognized. After a period of network lifetime, trust establishment and leader election can run at the same time. The authors in [16] design an election protocol based on two requirements. First, to protect all the nodes in a network, every node should be monitored by some leader node. Second, to balance the resource consumption of IDS services, the overall cost of analysis for protecting the whole network should be minimized, provided the leader behaves normally. We assume that every node knows its neighbors and their trust values, which is reasonable since nodes usually keep information about their neighbors for routing purposes. To start a new election, the protocol uses four types of messages: Begin-Election, used by every node to initiate the election process; Hello, used to announce the cost of a node; Vote, sent by every node to elect a leader; and Acknowledge, sent by the leader to broadcast its payment and to confirm its leadership. For describing the protocol, we need the following notation. Service-table(k): the list of all ordinary nodes that voted for the leader node k.
reputation-table(k): the reputation table of node k; each node keeps a record of the reputation of all other nodes. Neighbors(k): the set of node k's neighbors. Leader-node(k): the ID of node k's leader; if node k is running its own IDS then the variable contains k. leader(k): a Boolean variable, set to TRUE if node k is a leader and FALSE otherwise. Each node has an information-base memory to store its own properties and the trust values of its neighbors.
Leader Election Algorithm. Initially, all nodes start the election procedure by sending Begin-Election(H(k, ck)) messages. This message contains the hash value of the node's unique identifier (ID) and its cost of analysis, and is circulated within two hops of every node. On receiving Begin-Election from all neighbors, each node sends its respective cost of analysis: each node k checks whether it has received all the hash values from its neighbors and then sends Hello(IDk, costk).
Step 1: executed by every node k participating in the election upon receiving 'Begin-Election'
1.1 if (received 'Begin-Election' from all neighbors) send Hello(IDk, costk)
Upon receiving Hello from a neighbor node n, node k compares the hash value with the real value to verify the cost of analysis. Node k then determines the least-cost node among its neighbors and checks Tkj (j ∈ Nk): if Tkj > T0, node k votes for node j; otherwise (Tkj ≤ T0) node k marks node j.
Step 2: executed by every node k
for (i = 1 to i ≤ │Nk│ && ni ∈ Nk) {
if (costi is the least cost among Neighbors(k) && Tki > T0) {
Send Vote(k, i);
Leader-node(k) = i; }
else Mark(node i) }
The elected node i then calculates its payment [16] and sends an Acknowledge message to all the serving nodes. The Acknowledge message contains the payment and all the votes the leader received. The leader then launches its IDS.
Step 3: executed by the elected leader node k
for (i = 1 to i ≤ │Nk│ && ni ∈ Nk) {
Leader(k) := TRUE;
Compute payment Pi;
Update service-table(k);
Update reputation-table(k);
Acknowledge = Pi + all the votes;
Send Acknowledge to node i; }
With this election mechanism we are sure the leader is trusted, has enough remaining resources and has the least cost, without using the VCG mechanism to give nodes an incentive to participate in the election or checker nodes to punish a misbehaving leader.
2.3 Detection of External Intruders by the Cluster Leader
In this phase of our method, the cluster leader detects external intruders using the method proposed by Otrok et al. [5], because it formalizes the trade-off between security and IDS resource consumption as a nonzero-sum, non-cooperative game between the leader and the intruder, with complete information about the leader. As a result of the game, the leader-IDS finds the threshold such that, if the probability of attack exceeds it, the victim node is notified to launch its own IDS. The game guides the intruder to attack once the probability of stepping into the robust mode is low. The game is repeated such that in every election round the leader-IDS monitors, via sampling, the
protected node's incoming traffic and decides, according to the game solution, whether or not to inform the victim node to launch its IDS. In the previous sections we discussed trust relationship establishment in MANETs and proposed our leader election scheme. Now we consider a MANET in which nodes cooperate with each other without the threat of internal intruders and elect a low-cost, trusted leader in their cluster to detect external intruders. In order to detect an intrusion, the leader-IDS samples the incoming packets for a target node based on a sampling budget determined through that target node's reputation. Once the probability of attack goes beyond a threshold, the leader-IDS notifies the victim node to launch its own IDS. First we introduce the details of the game, then we propose the solution of the game based on [5]. Each player has private information about his/her preferences. In our case, the leader-IDS type is known to all the players, while the external node type is selected from the type set Θ = {Malicious (M), Normal (N)}. The intruder's pure strategy set is Aintruder = {Attack, Not-Attack}; the leader-IDS strategy is selected from the strategy space AIDS = {Robust, Moderate}. Since the external node's type is private information, Bayesian equilibrium dictates that the sender's action depends on its type θ. By observing the behavior of the sender at time tk, the leader-IDS can calculate the posterior belief evaluation function μtk+1(θi|ai) using the following Bayes rule:

μtk+1(θi|ai) = μtk(θi) Ptk(ai|θi) / Σθi μtk(θi) Ptk(ai|θi)   (8)
where μtk(θi) > 0 and Ptk(ai|θi) is the probability that strategy ai is observed at this phase of the game, given the type θi of node i. It is computed as follows:

Ptk(Attack|θi = M) = Em × O + Fm(1 − O)   (9)
Ptk(Attack|θi = N) = Fm   (10)
where O is the probability of attack determined by the IDS, Fm is the false rate generated by the leader-IDS due to sampling, and Em is the expected detection rate via sampling in moderate mode. Table 1 shows the competition between the leader-IDS and the external intruder in this game.

Table 1. Moderate to robust game

Strategy      Moderate                Robust
Attack        Co(Em)V−Ca ; EmV−Cm     Co(Er)V−Ca ; ErV−Cr
Not-Attack    0 ; −Cm                 0 ; −Cr
By solving this game using pure strategies, there is no Nash equilibrium. Thus, a mixed strategy is used to solve the game, where q is the probability of running in robust mode and p is the probability of attack by the attacker. In Table 1, the utility of the IDS for playing the Robust strategy while the attacker plays the Attack strategy is defined as ErV−Cr. It represents the payoff of protecting the monitored node, whose value is V, from being compromised by the attacker, where
ErV >> Cr. On the other hand, the payoff of the attacker if the intrusion is not detected is defined as Co(Er)V−Ca, the gain of the attacker for compromising the victim node. Additionally, EmV−Cm is defined as the payoff of the IDS if the Moderate strategy is played while the attacker strategy remains unchanged; conversely, the payoff of the attacker if the intrusion is not detected is then Co(Em)V−Ca. If the attacker plays the Not-Attack strategy and the IDS strategy is Robust, the loss of the IDS is Cr while the attacker gains/loses nothing; with the same attacker strategy and the Moderate IDS strategy, the payoff of the attacker is 0 while the loss of the IDS is Cm, the cost of running the IDS in moderate mode. Here Co(Er) = 1−Er, and Er is the expected detection of an intrusion in the robust mode: Er = Eleader + Evictim, where Eleader and Evictim are the expected detection by the leader-IDS and by the monitored (victim) node, respectively. Em = Eleader is the expected detection in the moderate mode, where only the leader-IDS runs an IDS to detect intrusions; Co(Em) is equal to 1−Em. Cr is the cost of running the IDS in robust mode; we define it as the aggregation of the cost of monitoring by the leader, Cleader, and the cost of monitoring by the victim, Cvictim. Cm is the cost of running the IDS in moderate mode, which is equal to Cleader. Ca is the cost of attack by the intruder. V is the value of the protected victim node (asset); the value of V can vary from one node to another according to its role in the cluster: for example, gateway nodes are valued more than regular nodes. To solve the game and find the optimal values of p and q, the IDS and the attacker compute their corresponding utility functions, followed by the first derivatives of the functions. From Table 1, the IDS utility function UIDS is defined as follows:

UIDS = [qp(ErV − Cr) + p(1 − q)(EmV − Cm) − q(1 − p)Cr − (1 − q)(1 − p)Cm] μ(θ = M) − [qCr + (1 − q)Cm](1 − μ(θ = M))   (11)

The main objective of the IDS is to maximize this utility function by choosing, for a fixed p*, a q* strategy that maximizes the probability of protecting the victim node and leads to an equilibrium where the following holds:

UIDS(p*, q) ≤ UIDS(p*, q*)   (12)
To attain this aim, the IDS calculates the threshold p* by taking the first derivative of UIDS with respect to q and setting it to zero. This results in the following:

p* = (Cr − Cm) / (μ(θ = M)(Er − Em)V)   (13)
The value of p* is used by the leader-IDS to decide whether or not to inform the victim node to launch its own IDS, given that the leader-IDS monitors and analyzes traffic via sampling in order to detect an intrusion launched by an external attacker i. The IDS computes the belief μ, as in Equation (8), for each node to check whether it is behaving maliciously or normally. If the sender type is malicious and it decides to attack by launching an intrusion, the expected probability of being detected by the leader-IDS is Eleader. Since the intrusion could be launched iteratively and could be missed in the coming iterations, the IDS will decide to inform the victim node to launch its own
IDS if the probability of attack is greater than p*. On the other hand, the utility function Ua of the attacker is defined as follows:

Ua = qp(Co(Er)V − Ca) + p(1 − q)(Co(Em)V − Ca)   (14)
The main objective of the attacker is to maximize this utility function by choosing, for a fixed q*, a p* that maximizes the probability of compromising the victim node. To maximize the utility function, it is sufficient to set the first derivative with respect to p to zero, which yields:

q* = (Co(Em)V − Ca) / ((Er − Em)V)   (15)
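The two thresholds follow from the first-order conditions on Equations (11) and (14). As a sketch (variable names and the example numbers are ours, not from the paper):

```python
# Sketch of the mixed-strategy thresholds p* (Eq. 13) and q* (Eq. 15),
# obtained by setting the first derivatives of U_IDS and U_a to zero.
# Co(E) = 1 - E, as defined in the text. All names are ours.

def p_star(Cr, Cm, mu_malicious, Er, Em, V):
    """Attack-probability threshold: the leader-IDS tells the victim to
    start its own IDS (robust mode) once the estimated attack probability
    exceeds this value."""
    return (Cr - Cm) / (mu_malicious * (Er - Em) * V)

def q_star(Em, Ca, Er, V):
    """Robust-mode threshold: the attacker's best strategy is to attack
    only while the probability of robust mode stays below this value."""
    Co_Em = 1 - Em  # attacker's chance of escaping detection in moderate mode
    return (Co_Em * V - Ca) / ((Er - Em) * V)

# Illustrative numbers only (not taken from the paper):
print(p_star(Cr=4.0, Cm=1.0, mu_malicious=0.5, Er=0.9, Em=0.6, V=100.0))
print(q_star(Em=0.6, Ca=25.0, Er=0.9, V=100.0))
```

Note how p* grows with the extra cost of robust mode (Cr − Cm) and shrinks as the belief μ or the asset value V grows, matching the intuition that valuable, suspicious targets are protected earlier.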
From the solution of the game, the attacker's best strategy is to attack once the probability of the victim node running its IDS (robust mode) is less than q*. To achieve this, the attacker observes the behavior of the IDS at time tk to determine whether or not to attack at time tk+1, by comparing its estimated observation with the derived threshold. In summary, the three phases of our method, namely trust relationship establishment between neighboring nodes, leader election, and external intrusion detection, together increase security and performance and reduce the resource consumption of intrusion detection.
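Pulling the pieces together, the vote step of the election algorithm in Section 2.2 can be sketched as follows (data structures and names are ours; the cost announcement and payment computations of [16] are out of scope here):

```python
# Sketch of the vote step of the leader election: node k votes for the
# least-cost neighbour whose trust value exceeds T0 and marks untrusted
# candidates instead of voting for them. All structures are illustrative.

T0 = 0.5  # trust threshold (the text suggests T0 >> 0.5 for confidential uses)

def vote(k, neighbours, cost, trust, marked):
    """neighbours: ids of k's neighbours; cost[i]: announced analysis cost
    of node i; trust[(k, i)]: k's trust value for i; marked: set that
    collects distrusted nodes (the Mark(node i) step)."""
    # consider candidates in increasing cost order: least-cost first
    for i in sorted(neighbours, key=lambda n: cost[n]):
        if trust[(k, i)] > T0:
            return i          # send Vote(k, i); Leader-node(k) = i
        marked.add(i)         # Mark(node i): trust too low, skip it
    return None               # no trustworthy neighbour found

marked = set()
leader = vote(1, [2, 3, 4],
              cost={2: 5.0, 3: 2.0, 4: 3.0},
              trust={(1, 2): 0.9, (1, 3): 0.3, (1, 4): 0.8},
              marked=marked)
print(leader, marked)  # node 3 is cheapest but untrusted, so node 4 wins
```

The design choice mirrors the text: trust filtering replaces the VCG incentives and checker nodes of [16], so a cheap but untrusted node can never win the vote.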
3 Conclusion Our method detects internal and external intrusions. A trust relationship is created between neighboring nodes: each node plays the game with its neighbors, observes their behavior, and estimates a trust value for each of them. If the estimated trust value of a node is less than a threshold, the node is detected as a misbehaving node. Clearly, if this misbehaving node (selfish or malicious) is a connecting bridge between different parts of the network, we cannot remove it, but it should be constantly monitored by the cluster head for intrusion detection. Then, in the next phase, when a leader is elected for these neighboring nodes, the chance of a misbehaving node being elected as leader is decreased. Since we consider neighboring nodes in a cluster, after a time period from the beginning of network operation both the trust establishment algorithm and the leader election algorithm can run synchronously. The leader selected by the proposed method is the ideal leader, because it has enough energy resources for intrusion detection in its cluster, has the lowest cost for packet analysis, and is not a misbehaving node. For detecting external intrusions, a game is introduced that achieves high performance at low resource cost.
References
1. Sun, B., Osborne, A.: Intrusion detection techniques in mobile ad hoc and wireless sensor networks. IEEE Wireless Communications, 56–63 (2007)
2. Lima, M., Santos, A., Pujolle, G.: A Survey of Survivability in Mobile Ad Hoc Networks. IEEE Communications Surveys & Tutorials, 66–77 (2009)
3. García-Teodoro, G., Díaz-Verdejo, J., Maciá-Fernández, G., Vázquez, E.: Anomaly-based network intrusion detection: Techniques, systems and challenges, pp. 18–28. Elsevier, Amsterdam (2009)
4. Hu, Y., Perrig, A.: A survey of secure wireless ad hoc routing. IEEE Security and Privacy, 28–39 (2004)
5. Otrok, H., Mohammed, N., Wang, L., Debbabi, M., Bhattacharya, P.: A Moderate to Robust Game Theoretical Model for Intrusion Detection in MANETs. In: IEEE International Conference on Wireless & Mobile Computing, Networking & Communication, WIMOB, pp. 608–612 (2008)
6. Seshadri Ramana, K., Chari, A., Kasiviswanth, N.: A Survey on Trust Management for Mobile Ad Hoc Networks. International Journal of Network Security & Its Applications (IJNSA), 75–85 (2010)
7. Jiang, X., Lin, C., Yin, H., Chen, Z., Su, L.: Game-based Trust Establishment for Mobile Ad Hoc Networks. In: IEEE International Conference on Communications and Mobile Computing, CMC, pp. 475–479 (2009)
8. Eschenauer, L., Gligor, V., Baras, J.: Trust establishment in mobile ad-hoc networks. In: Proceedings of the Security Protocols Workshop, Cambridge (2002)
9. Ren, K., Li, T., Wan, Z., Bao, F., Deng, R.H., Kim, K.: Highly reliable trust establishment scheme in ad hoc networks. Computer Networks 45(6), 687–699 (2004)
10. Morris, P.: Introduction to Game Theory, 1st edn. Springer, Heidelberg (1994)
11. Ganchev, A., Narayanan, L., Shende, S.: Games to induce specified equilibria. Theoretical Computer Science, pp. 341–350. Elsevier, Amsterdam (2008)
12. Patchay, A., Min Park, J.: A Game Theoretic Approach to Modeling Intrusion Detection in Mobile Ad Hoc Networks. In: Proceedings of the 2004 IEEE Workshop on Information Assurance and Security, pp. 280–284 (2004)
13. Wang, K., Wu, M., Shen, S.: A Trust Evaluation Method for Node Cooperation in Mobile Ad Hoc Networks. In: International Conference on Information Technology: New Generations, pp. 1000–1005. IEEE (2008)
14. Huang, Y., Lee, W.: A cooperative intrusion detection system for ad hoc networks. In: Proceedings of the 1st ACM Workshop on Security of Ad Hoc and Sensor Networks, pp. 135–147 (2003)
15. Kachirski, O., Guha, R.: Effective intrusion detection using multiple sensors in wireless ad hoc networks. In: 36th Annual Hawaii International Conference on System Sciences, p. 57.1 (2003)
16. Mohammed, N., Otrok, H., Wang, L., Debbabi, M., Bhattacharya, P.: A mechanism design-based multi-leader election scheme for intrusion detection in MANET. In: Wireless Communications and Networking Conference, WCNC, pp. 2816–2821 (2008)
17. Dagdeviren, O., Erciyes, K.: A Hierarchical Leader Election Protocol for Mobile Ad Hoc Networks. In: Bubak, M., van Albada, G.D., Dongarra, J., Sloot, P.M.A. (eds.) ICCS 2008, Part I. LNCS, vol. 5101, pp. 509–518. Springer, Heidelberg (2008)
18. Mas-Colell, A., Whinston, M., Green, J.: Microeconomic Theory. Oxford University Press, New York (1995)
19. Otrok, H., Mohammed, N., Wang, L., Debbabi, M., Bhattacharya, P.: A game-theoretic intrusion detection model for mobile ad-hoc networks. Computer Communications, 708–721. Elsevier (2008)
SNMP-SI: A Network Management Tool Based on Slow Intelligence System Approach Francesco Colace1, Massimo De Santo1, and Salvatore Ferrandino2 1 DIIIE – Università degli Studi di Salerno, via Ponte Don Melillo 84084 Fisciano (Salerno), Italy {fcolace,desanto}@unisa.it 2 Ufficio Sistemi Tecnologici – Università degli Studi di Salerno, via Ponte Don Melillo 84084 Fisciano (Salerno), Italy
[email protected] Abstract. The last decade has witnessed an intense spread of computer networks that has been further accelerated with the introduction of wireless networks. Simultaneously with, this growth has increased significantly the problems of network management. Especially in small companies, where there is no provision of personnel assigned to these tasks, the management of such networks is often complex and malfunctions can have significant impacts on their businesses. A possible solution is the adoption of Simple Network Management Protocol. Simple Network Management Protocol (SNMP) is a standard protocol used to exchange network management information. It is part of the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite. SNMP provides a tool for network administrators to manage network performance, find and solve network problems, and plan for network growth. SNMP has a big disadvantage: its simple design means that the information it deals with is neither detailed nor well organized enough to deal with the expanding modern networking requirements. Over the past years much efforts has been given to improve the lack of Simple Network Management Protocol and new frameworks has been developed: A promising approach involves the use of Ontology. This is the starting point of this paper where a novel approach to the network management based on the use of the Slow Intelligence System methodologies and Ontology based techniques is proposed. Slow Intelligence Systems is a general-purpose systems characterized by being able to improve performance over time through a process involving enumeration, propagation, adaptation, elimination and concentration. Therefore, the proposed approach aims to develop a system able to acquire, according to an SNMP standard, information from the various hosts that are in the managed networks and apply solutions in order to solve problems. 
To check the feasibility of this model, the first experimental results in a real scenario are shown. Keywords: Network Management, Ontology, Slow Intelligence System, SNMP.
1 Introduction Networks and distributed computing systems are becoming increasingly important and at the same time, more and more critical to the world of Information Technology. T.-h. Kim et al. (Eds.): FGCN 2010, Part II, CCIS 120, pp. 83–92, 2010. © Springer-Verlag Berlin Heidelberg 2010
84
F. Colace, M. De Santo, and S. Ferrandino
This rapid spread, however, has resulted in increased difficulty in configuring and managing networks. Diverse network devices have emerged, and it has become very difficult to configure such multifarious devices manually, so efficient and intelligent configuration management techniques are urgently needed to configure these devices automatically or semi-automatically [1]. A solution to this problem can be the adoption of the Simple Network Management Protocol (SNMP). SNMP manages network hosts such as workstations or servers, routers, bridges and hubs from a central computer that runs the network management software, and performs management services through a distributed architecture of systems and management agents. Since its introduction in the late 1980s, SNMP has shown good performance in monitoring for faults and performance, but it is very hard to use for managing large networks. In fact, the SNMP Structure of Management Information (SMI), and even the Next Generation Structure of Management Information (SMIng), is insufficient to represent hierarchical network configuration data, and SNMP supports with difficulty several high-level management operations required by network configuration tasks. On the other hand, as previously said, network management is a hot topic and there is real interest in the development of an effective methodology. In the literature, ontology is considered a good way of supporting network management, and many papers deal with ontology-based methodologies for network management; in particular, they propose ontology as a layer able to improve interoperability among devices and operators.
In this sense, [2] proposes an ontology-driven approach to the semantic interoperability problem in the management of enterprise services. Another interesting approach is [3], which proposes an improvement of current network management methods through the application of formal ontology techniques. In particular, it introduces a management information meta-model integrating all the information that currently belongs to the different management models used to interoperate with the managed resources. Another advantage of this approach is the ability to include basic semantic behavior for a manager to monitor and control these resources. This paper introduces a novel approach to network management based on the use of Slow Intelligence System methodologies [4] and ontology. The proposed approach aims to develop a system able to acquire, according to the SNMP standard, information from the various hosts in the managed networks and apply solutions in order to solve problems. In particular, the proposed system can handle multiple networks and adopt solutions that have proved successful in some other context. Through the use of ontologies, the system is able to choose the right action to take when hosts send SNMP alerts, and the Slow Intelligence System approach allows the system to automatically infer the actions to take. This paper is organized as follows. The next section introduces the Slow Intelligence Systems approach. The second section describes ontology. The third section explains why a Slow Intelligence System needs ontology to work at its best, while the fourth section describes the proposed system. The last section presents the first experimental results.
2 What Is a Slow Intelligence System? We will first introduce the concept of Slow Intelligence and present a general framework for designing and specifying Slow Intelligence Systems (SIS). We view Slow Intelligence Systems as general-purpose systems characterized by being able to improve performance over time through a process involving enumeration, propagation, adaptation, elimination and concentration. A Slow Intelligence System continuously learns, searches for new solutions and propagates and shares its experience with other peers. A Slow Intelligence System differs from expert systems in that the learning is implicit and not always obvious. A Slow Intelligence System seems to be a slow learner because it analyzes the environmental changes and carefully and gradually absorbs that into its knowledge base while maintaining synergy with the environment. A slow intelligence system is a system that solves problems by trying different solutions, is context-aware to adapt to different situations and to propagate knowledge, and may not perform well in the short run but continuously learns to improve its performance over time. Slow Intelligence Systems typically exhibit the following characteristics: Enumeration: In problem solving, different solutions are enumerated until the appropriate solution or solutions can be found. Propagation: The system is aware of its environment and constantly exchanges information with the environment. Through this constant information exchange, one SIS may propagate information to other (logically or physically adjacent) SISs. Adaptation: Solutions are enumerated and adapted to the environment. Sometimes adapted solutions are mutations that transcend enumerated solutions of the past. Elimination: Unsuitable solutions are eliminated, so that only suitable solutions are further considered. Concentration: Among the suitable solutions left, resources are further concentrated to only one (or at most a few) of the suitable solutions. 
The sixth characteristic, on the other hand, is rather unique to SIS: slow decision cycle(s) complementing quick decision cycle(s). An SIS possesses at least two decision cycles. The first, defined as the quick decision cycle, provides an instantaneous response to the environment. The second, defined as the slow decision cycle, tries to follow the gradual changes in the environment and to analyze the information acquired from experts and past experiences. The two decision cycles enable the SIS both to cope with the environment and to meet long-term goals. A sophisticated SIS may possess multiple slow decision cycles and multiple quick decision cycles. Most importantly, actions of slow decision cycle(s) may override actions of quick decision cycle(s), resulting in poorer performance in the short run but better performance in the long run. We can now consider the structure of an SIS by introducing the basic building block and the advanced building block. Problem and solution are both functions of time, thus we can represent the time function for the problem as x(t)problem, and the time function for the solution as y(t)solution. The timing controller is also a time
function timing-control(t). For the two-decision-cycle SIS, the basic building block BBB can be expressed as follows: if timing-control(t) == 'slow' then /* timing-control(t) is ‘slow’ */ y(t)solution = gconcentrate (geliminate (gadapt (genumerate(x(t)problem)))) else /* timing-control(t) is not ‘slow’ */ y(t)solution = fconcentrate (feliminate (fadapt (fenumerate(x(t)problem))))
where genumerate, gadapt, geliminate, and gconcentrate are the transform functions for enumeration, adaptation, elimination and concentration respectively during slow decision cycles, and fenumerate, fadapt, feliminate, and fconcentrate are the transform functions for enumeration, adaptation, elimination and concentration respectively during quick decision cycles. An Advanced Building Block can be a stand-alone system. The major difference between an ABB and a BBB is the inclusion of a knowledge base, further improving the SIS’s problem solving abilities.
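As an illustrative sketch of the BBB above, with a toy instantiation in which the transform functions are simple placeholders (all names and the example problem are ours, not from the paper):

```python
# Sketch of the SIS basic building block (BBB) with two decision cycles.
# The g*/f* transform functions are placeholders; a real SIS would plug in
# domain-specific enumerate/adapt/eliminate/concentrate logic.

def make_bbb(slow_funcs, quick_funcs, timing_control):
    """slow_funcs / quick_funcs: tuples (enumerate, adapt, eliminate,
    concentrate); timing_control(t) returns 'slow' or anything else."""
    def solve(problem, t):
        funcs = slow_funcs if timing_control(t) == 'slow' else quick_funcs
        enumerate_, adapt, eliminate, concentrate = funcs
        # y(t) = concentrate(eliminate(adapt(enumerate(x(t)))))
        return concentrate(eliminate(adapt(enumerate_(problem))))
    return solve

# Toy instantiation: candidate solutions are numbers; the slow cycle
# searches more widely before concentrating on a single answer.
slow = (lambda p: [p + d for d in range(-5, 6)],   # enumerate widely
        lambda cs: [c * 2 for c in cs],            # adapt candidates
        lambda cs: [c for c in cs if c >= 0],      # eliminate unsuitable ones
        lambda cs: min(cs))                        # concentrate on one
quick = (lambda p: [p],                            # just the obvious option
         lambda cs: cs,
         lambda cs: cs,
         lambda cs: cs[0])

bbb = make_bbb(slow, quick,
               timing_control=lambda t: 'slow' if t % 10 == 0 else 'quick')
print(bbb(3, t=0))  # slow cycle: wide search, then concentrate
print(bbb(3, t=1))  # quick cycle: instantaneous response
```

The point of the sketch is the dispatch: the same problem flows through a different enumerate/adapt/eliminate/concentrate pipeline depending on which decision cycle the timing controller selects.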
3 Why a Slow Intelligence System Needs Ontology? The definition of ontology is still a challenging task [5]. The term 'ontology' has its origin in the Greek word 'ontos', which means 'being'; in this sense ontology could be defined as the branch of philosophy dealing with the order and structure of reality. In the 1970s ontology came to be of interest in the computer science field. In particular, the artificial intelligence community started to use the concept to create a domain of knowledge and establish formal relationships among the items of knowledge in that domain for performing processes of automated reasoning, especially as a means of establishing an explicit formal vocabulary to be shared among applications. The term 'ontology' was first used in the computer science field by Gruber, who used it to refer to an explicit specification of a conceptualization [6]. The use of this term is growing rapidly due to the significant role it plays in information systems, the semantic web and knowledge-based systems, where the term 'ontology' refers to "the representation of meaning of terms in vocabularies and the relationships between those terms" [7]. Yet even this kind of definition is not fully satisfactory for every field where ontology can be applied, so perhaps a good practical definition would be this: "an ontology is a method of representing items of knowledge (ideas, facts, things) in a way that defines the relationships and classification of concepts within a specified domain of knowledge" [5]. Following this point of view, ontologies are "content theories", since their principal contribution lies in identifying specific classes of objects and the relations that exist in some knowledge domains [8]. Ontologies can be classified into lightweight and heavyweight ontologies [9]. Lightweight ontologies include concepts, concept taxonomies, simple relationships between concepts (such as the specialization "is_a") and properties that describe concepts.
Heavyweight ontologies add axioms and constraints to lightweight ontologies; axioms and constraints clarify the intended meaning of the terms gathered in the ontology. Commonly an ontology is defined as O = {C, A, H, RT, R} where: • C is the concept set. c ∈ C expresses one concept, and in each ontology there is always a root concept marked as "Thing". In particular, for each c ∈ C there exists a descendant nodes set (CDN) containing all its lower-layer concepts and an ancestry nodes set (CAN) containing all its upper-layer concepts
• A is the concept attributes set. For c ∈ C, its attributes set is expressed as AC = {a1, …, an}, where n is the number of attributes related to c • H expresses the concept hierarchy set. The formalism (ci, cj) means that ci is a sub-concept of cj; in other words, this set contains the is_a relations among the classes. • RT is the set of semantic relation types. RT = RTD ∪ RTU, where RTD is the set of predefined relations (same_as, disjoint_with, equivalent) and RTU is the set of user-defined relation types. The formalism (ci, cj, r) with r ∈ RT means that relation r holds between ci and cj; the set RelRT(ci, cj) contains the relations r between ci and cj • R is the set of non-hierarchical relations. The formalism (ci, cj, r) with r ∈ R means that relation r holds between ci and cj; the set Rel(ci, cj) contains the relations r between ci and cj
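A minimal sketch of this five-tuple O = {C, A, H, RT, R} using plain Python containers (the structure, the example concepts and the helper function are our own illustrations):

```python
# Minimal sketch of the five-tuple O = {C, A, H, RT, R} from Section 3,
# populated with a toy network-device domain. All names are ours.

ontology = {
    "C":  {"Thing", "Device", "Router", "Switch"},       # concept set
    "A":  {"Device": ["vendor", "ip_address"]},          # attributes per concept
    "H":  {("Router", "Device"), ("Switch", "Device"),   # is_a hierarchy (ci, cj)
           ("Device", "Thing")},
    "RT": {("Router", "Switch", "disjoint_with")},       # typed semantic relations
    "R":  {("Router", "Switch", "connected_to")},        # non-hierarchical relations
}

def ancestors(onto, c):
    """CAN(c): all upper-layer concepts of c, following the is_a set H."""
    direct = {parent for (child, parent) in onto["H"] if child == c}
    result = set(direct)
    for p in direct:
        result |= ancestors(onto, p)  # climb the hierarchy recursively
    return result

print(sorted(ancestors(ontology, "Router")))  # CAN(Router)
```

Storing H as a set of (child, parent) pairs makes CDN and CAN straightforward set comprehensions, which is all the later merge operations need.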
4 Ontological Basic Operations
In this section the basic ontological operations are introduced and explained in detail:
Definition 1. Ontology Equality Operator (OEO): the ontology equality operator is a function OEO : O → O defined as follows: given the ontology O1 = {C1, A1, H1, RT1, R1}, the ontology O2 = OEO(O1) is defined as O2 = {C1, A1, H1, RT1, R1}.
Definition 2. Ontology Change Operator (OCO): an ontology change operator is a function OCO : O → O that modifies the structure and the relations of an ontology. In particular, three different kinds of changes can be considered: Atomic_Change, Entity_Change and Composite_Change.
Definition 3. Atomic Change Ontology (ACO): atomic changes are further classified into additive changes (Addition) and removal changes (Removal), which represent the minimal operations that can be performed on an ontology.
Definition 4. Add Operator (ADD): the ADD operation adds a concept c as a sub-concept of d: ADD(c, d).
Definition 5. Add Relationship Operator (ADDRel): the ADDRel operation adds a relation r, hierarchical or not, between two nodes c and d of an ontology: ADDRel(c, d, r).
Definition 6. AddAttribute Operator (ADDAtt): the ADDAtt operation adds an attribute a to a concept c: ADDAtt(c, a).
Definition 7. Del Operator (DEL): the DEL operation deletes a concept c from the ontology: DEL(c). This operation also erases all the relationships of c with the other nodes of the ontology, as well as all its sub-concepts and their relationships with the other nodes of the ontology.
Definition 8. Entity Change Operator (ECO): the entity change operator introduces changes in the properties of classes.
F. Colace, M. De Santo, and S. Ferrandino
Definition 9. Composite Change Ontology (CCO): a composite change includes a set of ACO and ECO changes.
Definition 10. Ontology Merging Function (OMF): the ontology merging function is a function OMF : O×O → O defined as follows: given the ontologies O1 = {C1, A1, H1, RT1, R1} and O2 = {C2, A2, H2, RT2, R2}, the merged ontology is defined as O3 = {C3, A3, H3, RT3, R3} where C3 = C1 ∪ C2. In particular, the merged ontology is built by the following process:
1. O3 = O1
2. ∀ci ∈ C2 with CAN(ci) = {Thing} and ci ∉ C1, execute in O3 the atomic operation add(ci, Thing) and, ∀aj ∈ Aci,2, addAttr(ci, aj)
3. ∀ci ∈ C2 with CAN(ci) = {Thing} and ci ∈ C1, execute in O3, ∀aj ∈ Aci,2 with aj ∉ Aci,1, the atomic operation addAttr(ci, aj)
4. ∀ci ∈ C2 with CAN(ci) ≠ {Thing} and ci ∉ C1, execute in O3 the atomic operation add(ci, cj) ∀cj ∈ CAN(ci) and, ∀aj ∈ Aci,2, addAttr(ci, aj)
5. ∀ci ∈ C2 with CAN(ci) ≠ {Thing} and ci ∈ C1, execute in O3, ∀aj ∈ Aci,2 with aj ∉ Aci,1, the atomic operation addAttr(ci, aj)
6. ∀ci, cj ∈ C2 with ci, cj ∈ C3, execute in O3 AddRel(ci, cj, rij)
Definition 11. Ontology Simplification Function (OSF): the ontology simplification function is a function OSF : O×O → O defined as follows: given the ontologies O1 = {C1, A1, H1, RT1, R1} and O2 = {C2, A2, H2, RT2, R2}, the simplified ontology is defined as O3 = {C3, A3, H3, RT3, R3} where:
• O3 = OMF(O1, O2)
• ∀ci ∈ C3 with CDN(ci) empty: if ∀cj ∈ C3 Rel(ci, cj) is empty, then del(ci)
All the previous functions and operations are adopted by the various system modules in order to accomplish their tasks. In particular, these functions guarantee the sharing of knowledge and the improvement of each system's knowledge domain.
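The merge (Definition 10) and simplification (Definition 11) functions can be sketched as follows, with an ontology reduced to a plain dictionary. The dict layout and field names are assumptions for illustration, not the paper's implementation:

```python
import copy

def omf(o1, o2):
    """Ontology Merging Function (Definition 10): O3 = OMF(O1, O2)."""
    o3 = copy.deepcopy(o1)                                   # step 1: O3 = O1
    for c in o2["concepts"]:
        if c not in o3["concepts"]:                          # steps 2 and 4: add(ci, parent)
            o3["concepts"].add(c)
            o3["parents"][c] = set(o2["parents"].get(c, {"Thing"}))
        # steps 3 and 5: copy the attributes of ci that are missing in O3 (addAttr)
        o3["attrs"].setdefault(c, set()).update(o2["attrs"].get(c, set()))
    for (ci, cj, r) in o2["rels"]:                           # step 6: AddRel(ci, cj, rij)
        if ci in o3["concepts"] and cj in o3["concepts"]:
            o3["rels"].add((ci, cj, r))
    return o3

def osf(o1, o2):
    """Ontology Simplification Function (Definition 11): merge, then delete
    every leaf concept that takes part in no relation (del(ci))."""
    o3 = omf(o1, o2)
    has_children = {p for ps in o3["parents"].values() for p in ps}
    related = {x for (ci, cj, _) in o3["rels"] for x in (ci, cj)}
    for c in list(o3["concepts"]):
        if c != "Thing" and c not in has_children and c not in related:
            o3["concepts"].discard(c)                        # CDN(c) and Rel(c, .) empty
            o3["parents"].pop(c, None)
            o3["attrs"].pop(c, None)
    return o3
```

The sketch keeps only the concept, attribute and relation bookkeeping; hierarchy depth beyond direct parents is omitted for brevity.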
5 A Slow Intelligence Network Manager Based on the SNMP Protocol
As previously said, the aim of this paper is the introduction of a LAN management system based on the SNMP protocol and the Slow Intelligence approach. Suppose there are M different LANs, to which N different types of hosts to be managed may belong. Each of these LANs is dynamic: it allows the introduction of new hosts and the disappearance of existing ones. The local servers are in principle able to solve the main problems in LAN management but, owing to the dynamism of the LANs, they may be faced with unexpected situations. The environmental conditions in which a LAN operates can influence the performance of the various hosts and must
be taken into account. In this scenario a fundamental role is played by ontologies. In particular, it is necessary to introduce and define the following ontologies:
• OSNMP = {CSNMP, ASNMP, HSNMP, RTSNMP, RSNMP}: this ontology defines the entire structure of the SNMP protocol by analyzing the various messages and the relations between them
• OFault = {CFault, AFault, HFault, RTFault, RFault}: this ontology describes each kind of possible error that can occur within a LAN
• OCause = {CCause, ACause, HCause, RTCause, RCause}: this ontology defines the causes of the faults that may occur in a LAN
• OSolution = {CSolution, ASolution, HSolution, RTSolution, RSolution}: this ontology defines the solutions that can be taken to recover from fault situations occurring within a LAN
• OAction = {CAction, AAction, HAction, RTAction, RAction}: this ontology identifies the actions to be taken in order to recover from fault situations
• OComponent = {CComponent, AComponent, HComponent, RTComponent, RComponent}: this ontology describes the components that may be present within a LAN
• OEnvironment = {CEnvironment, AEnvironment, HEnvironment, RTEnvironment, REnvironment}: this ontology describes the environment where the LAN works
In order to allow communication among the various hosts and servers in the various LANs, the following messages have to be introduced:
MCSl(SNMP, ID_Component): the SNMP message that the client sends to the local server when an error has occurred. The ID_Component is used to identify the type of component that launched the message.
MSlC({Action}): this message, sent by the local server, contains the actions that the client has to implement for the resolution of the highlighted fault.
The local server has to implement the following functions:
O'Fault = f(MCSl(SNMP), O'SNMP): this function builds the ontology of faults from the analysis of the received SNMP messages and the SNMP ontology within the local server.
It is important to underline that the SNMP ontology on the local server is only a part of the one present in the central server and is built incrementally, following the faults that occur within the LAN.
O'Cause = g(MCSl(SNMP), O'SNMP): this function obtains the ontology of the causes that generated the received SNMP messages.
O'Solution = h(O'Fault, O'Cause): this function calculates the ontology of the possible solutions that the local server can find for the resolution of the fault situation.
O'Action = k(O'Solution): this function derives, from the solution ontology, the actions that the system can take to resolve the situation highlighted by the SNMP message.
These functions can be considered as the enumeration phase of the Slow Intelligence approach. After the evaluation of these functions the system can determine the actions to apply in the LAN by means of the following function:
{Action} = t(O'Action, O'Component, O'Environment): the set of actions that the client, i.e. the host involved in the fault, must implement in order to solve the problem identified by the SNMP message. In practice, this involves deriving, from the ontologies of actions and components, the instances of actions to implement in order to resolve the faults that occurred. This function implements the Adaptation, Elimination and Concentration phases of a Slow Intelligence System. All these operations are carried out by involving the local server and the hosts on the managed LAN. Obviously, the local server cannot always perform the operations it is asked for, because it does not know the full SNMP ontology. In fact, the managed LAN can change: for example, new components can be added. So new messages, functions and actions have to be provided between the local servers and the central server. The messages are defined as follows:
MCSlj(SNMP, ID_Component): this message contains an SNMP signal, sent by a host, that the local server is unable to manage and that it forwards to the central server. The central server sends this message to the other local servers in order to obtain information on the management of the SNMP signal.
MSljC(O'SNMP-i, O'Cause-i, O'Solution-i, O'Action-i, {Actioni}): this message contains the information obtained from the local servers about the management of the SNMP signal, based on the queries submitted to them. This message can be empty when no local server has ever managed this kind of SNMP signal in the past.
Related to these messages there are the following functions:
O'SNMP-i = F(MSliSlj(SNMP), O'SNMP-j): this function expresses the subset of the SNMP ontology built in the local server j that is needed by the local server i.
O'Cause-i = G(O'SNMP-i): this function expresses the ontology representing the causes of the fault. This ontology is built in the j-th local server and can be empty when this server has never faced this problem.
O'Solution-i = H(O'Cause-i): this function gives the ontology of the solutions that can be adopted in order to solve the fault related to the SNMP signal.
This ontology is built in the j-th local server and can be empty when this server has never faced this problem.
O'Action-i = K(O'Solution-i): this function gives the ontology representing the actions that can be adopted for the solution of the faults related to the SNMP signal. This ontology is built in the j-th local server and can be empty when this server has never faced this problem.
The central server collects all the ontologies obtained in the various local servers, as described above, and selects one of them according to an analysis based on ontology similarity. After this phase the central server can determine the actions that have to be applied in the i-th LAN in order to solve the fault, and these actions can be sent to the i-th local server. In this way the following function can be introduced:
{Actioni} = T(O'Action-i, O'Component-j): this function calculates the set of actions that the client must adopt in order to solve the problem identified by the SNMP signal. The set of possible actions can of course be empty; in this case the support of an expert is needed. The previous messages and functions implement the propagation phase of the Slow Intelligence System approach. The operational workflow is the following:
Step 1: an SNMP message is generated by a client as a result of a fault and sent to the local server.
Step 2: the local server receives the SNMP message and tries to identify the problem through the analysis of the various ontologies.
Step 3: if the local server can identify the problem, it generates the solutions and the actions that the various hosts in the LAN have to apply.
Step 3.1: the hosts get the actions and put them into practice.
Step 4: if the local server cannot identify the problem, it sends the report to the central server.
Step 5: the central server forwards the received message to all the other local servers.
Step 5.1: the other local servers, after receiving the message, attempt to determine the possible actions and then send everything to the central server.
Step 6: if the central server has received possible actions from the local servers, it sends them to the local server that requested them. If no action is received, the central server, based on the received message and its general ontologies, determines the actions to be sent to the local server.
Step 7: the local server sends the actions to the various hosts in the LAN.
Step 7.1: the hosts get the actions and put them into practice.
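The workflow above can be sketched in a few lines of Python. The class names, the known_faults lookup table and the fallback action are hypothetical simplifications of the ontological machinery described in the paper:

```python
# Illustrative sketch of the operational workflow (Steps 1-7); all names are assumed.
class CentralServer:
    def __init__(self):
        self.local_servers = []

    def resolve(self, snmp_signal, requester):
        # Step 5/5.1: ask every other local server for a known resolution
        for srv in self.local_servers:
            if srv is not requester and snmp_signal in srv.known_faults:
                return srv.known_faults[snmp_signal]      # Step 6: forward the actions
        # Fallback: the central server uses its general ontologies (stubbed here)
        return ["notify_administrator"]

class LocalServer:
    def __init__(self, name, known_faults, central):
        self.name = name
        self.known_faults = known_faults                  # SNMP signal -> list of actions
        self.central = central

    def handle(self, snmp_signal):
        # Steps 2-3: try to resolve the fault with the local knowledge
        actions = self.known_faults.get(snmp_signal)
        if actions is None:
            # Step 4: escalate to the central server (propagation phase)
            actions = self.central.resolve(snmp_signal, requester=self)
            self.known_faults[snmp_signal] = actions      # knowledge sharing
        return actions                                    # Step 7: sent to the hosts
```

The key point the sketch captures is that a local server that cannot handle a signal learns the resolution discovered elsewhere, which is what makes the KC index grow over time.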
6 Experimental Results
In order to test the performance of the proposed system, an experimental campaign has been designed. First of all the working scenario has been set up (Fig. 3). Three different LANs have been monitored. The first one is composed of a Cisco switch and 30 personal computers equipped with a Microsoft operating system and Microsoft Office as application software; these personal computers can access the Internet using various browsers. The second one is composed of a Cisco router, a Cisco switch, two application servers and 50 personal computers with various operating systems; the application servers offer e-learning services. The third LAN is composed of a Nortel switch, a web server, a mail server, an HP network printer and 50 personal computers. For each LAN a local server and SNMP ontologies (faults and actions) have been introduced. Each ontology is able to cover about 50% of the SNMP events that the LAN's devices can launch. The experimental phase aimed to evaluate the following system parameters:
• The system's ability to identify the correct management actions to apply in the LAN after an SNMP signal. This parameter, named CA, is defined as:
CA = #Correct_Action / (#Correct_Action + #Wrong_Action)
• The system's ability to manage the introduction of a new component in a LAN. In particular, the system has to recognize components that were previously managed in other LANs. This parameter, named KC, is defined as:
KC = #Correct_Action_NC / (#Correct_Action_NC + #Wrong_Action_NC)
The previous indexes were calculated in the following way:
• The CA index was calculated after one, two, three, four and five hours with no variations in the configuration of the LANs.
• The KC index was estimated after the introduction of new components in a LAN. In particular, new devices already present in the other LANs were introduced in the LAN, and the index was evaluated after one, two, three, four and five hours with no variations in the configuration of the LANs.
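Both indexes share the same form and can be computed with a one-line helper (an illustrative function, not part of the paper's system):

```python
def accuracy_index(correct, wrong):
    """CA = #Correct_Action / (#Correct_Action + #Wrong_Action).
    KC uses the same formula restricted to actions on newly introduced components."""
    total = correct + wrong
    return correct / total if total else 0.0
```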
In the next table the obtained results are shown:

Table 1. Obtained results. The KC has to be considered as an average value.

Index   1 hour   2 hours   3 hours   4 hours   5 hours
CA      75.00%   87.40%    91.33%    94.20%    98.50%
KC      59.00%   74.33%    82.66%    91.50%    93.33%
The indexes show the good performance of the system. In particular the CA index, which expresses the ability of the system to recognize the correct actions in the LAN after an SNMP signal, is very good. The KC index witnesses how well the system exploits the SIS approach: the system improves its performance by sharing knowledge among the various local servers. At the beginning this index is very low, but it increases after a few iterations.
7 Conclusions
In this paper a novel method for network management has been introduced. This method is based on SNMP, ontologies and the Slow Intelligence System approach. It has been tested in an operative scenario and the first experimental results seem promising. Future work aims to improve the system through new and effective methodologies for ontology management and through the use of other network management approaches.
References
[1] Xu, H., Xiao, D.: A Common Ontology-based Intelligent Configuration Management Model for IP Network Devices. In: Proceedings of the First International Conference on Innovative Computing, Information and Control
[2] Yiu Wong, A.K., Ray, P., Parameswaran, N., Strassner, J.: Ontology Mapping for the Interoperability Problem in Network Management. IEEE Journal on Selected Areas in Communications 23(10) (2005)
[3] López de Vergara, J.E., Guerrero, A., Villagrá, V.A., Berrocal, J.: Ontology-Based Network Management: Study Cases and Lessons Learned. J. Network Syst. Manage. 17(3), 234–254 (2009)
[4] Chang, S.-K.: A General Framework for Slow Intelligence Systems. International Journal of Software Engineering and Knowledge Engineering 20(1), 1–15 (2010)
[5] Jepsen, T.: Just What Is an Ontology, Anyway? IT Professional 11(5), 22–27 (2009)
[6] Gruber, T.R.: A Translation Approach to Portable Ontology Specifications. Knowledge Acquisition 5, 199–220 (1993)
[7] OWL Web Ontology Language Overview, W3C Recommendation (February 10, 2004), http://www.w3.org/TR/2004/REC-owl-features-20040210/
[8] Maedche, A., Staab, S.: Ontology Learning for the Semantic Web. IEEE Intelligent Systems 16(2), 72–79 (2001)
[9] Corcho, O.: A Layered Declarative Approach to Ontology Translation with Knowledge Preservation. Frontiers in Artificial Intelligence and Applications, vol. 116 (2005)
Intrusion Detection in Database Systems
Mohammad M. Javidi, Mina Sohrabi, and Marjan Kuchaki Rafsanjani
Department of Computer Science, Shahid Bahonar University of Kerman, Kerman, Iran
[email protected], [email protected], [email protected]

Abstract. Data today represent a valuable asset for organizations and companies and must be protected. Ensuring the security and privacy of data assets is a crucial and very difficult problem in our modern networked world. Despite the necessity of protecting information stored in database systems (DBS), existing security models are insufficient to prevent misuse, especially insider abuse by legitimate users. One mechanism to safeguard the information in these databases is to use an intrusion detection system (IDS). The purpose of intrusion detection in database systems is to detect transactions that access data without permission. In this paper several database intrusion detection approaches are evaluated.
Keywords: Database systems, Intrusion Detection System (IDS), Transaction.
1 Introduction
Protecting data in network environments is a very important task nowadays. One way to make data less vulnerable to malicious attacks is to deploy an intrusion detection system (IDS). To detect attacks, the IDS is configured with a number of signatures that support the detection of known intrusions. Unfortunately, it is not a trivial task to keep intrusion detection signatures up to date, because a large number of new intrusions take place daily. To overcome this problem, the IDS should be coupled with anomaly detection schemes, which support the detection of new attacks. IDSs allow early detection of attacks and therefore make the recovery of lost or damaged data simpler. In recent years, researchers have proposed a variety of approaches for increasing the efficiency and accuracy of intrusion detection. Most of these efforts focus on detecting intrusions at the network or operating system level [1-6] and are not capable of detecting malicious transactions that access database data without permission. Corrupted data can affect other data, and the damage can spread across the database very fast, which imposes a real danger upon many real-world applications of databases. Therefore such attacks or intrusions on databases should be detected quickly and accurately; otherwise it might be very difficult to recover from the damage. T.-h. Kim et al. (Eds.): FGCN 2010, Part II, CCIS 120, pp. 93–101, 2010. © Springer-Verlag Berlin Heidelberg 2010
There are many methodologies in database intrusion detection, such as access patterns of users, time signatures, Hidden Markov Models and mining data dependencies among data items. Researchers are working on using artificial intelligence and data mining to make IDSs more accurate and efficient. The rest of this paper is organized as follows: Section 2 presents the various approaches proposed for intrusion detection in database systems. In Section 3 we compare the approaches by giving their highlighting features, and the performances of the approaches that improve on previous ones are evaluated in detail.
2 Approaches
2.1 Access Patterns of Users
Chung et al. [7] proposed DEMIDS (DEtection of MIsuse in Database Systems) for relational database systems. This misuse detection system uses audit logs to build profiles that describe the typical behavior of users working with the database systems by specifying the typical values of features audited in the audit logs. These profiles can be used to detect both intrusion and insider abuse; in particular, DEMIDS detects malicious behavior by legitimate users who abuse their privileges. Given a database schema and its associated applications, the access patterns of users form working scopes comprising certain sets of attributes that are often referenced together with some values. The idea of working scopes is captured well by the concept of frequent item-sets, which are sets of features with certain values. DEMIDS defines a notion of distance measure, which measures the closeness of a set of attributes with respect to the working scopes, using integrity constraints (the data structure and semantics encoded in the data dictionary) and the user behavior reflected in the audit logs. Distance measures are used for finding, through a novel data mining approach, the frequent item-sets in the audit logs that describe the working scopes of users. Misuse can then be detected by comparing the derived profiles against the specified security policies or against new information (audit data) gathered about the users.
2.2 Time Signatures
Lee et al. [8] proposed an intrusion detection system for real-time database systems based on time signatures. Transactions often have time constraints in real-time database systems (e.g., in stock market applications), and the authors exploit these real-time properties of data in intrusion detection. Their approach monitors the behavior at the level of sensor transactions, which are responsible for updating the values of real-time data.
Sensor transactions have predefined semantics, such as write-only operations and well-defined data access patterns, whereas user transactions have a wide variety of characteristics. Real-time database systems deal with temporal data objects, which should be updated periodically and whose values change over time; therefore a sensor transaction is generated in every period. The authors suppose that every sensor transaction consumes time e to complete its processing, where 0 < e < P, P is the time period to update the temporal data, and there is only one transaction in a period P. By applying this
signature, if a transaction attempts to update a temporal data item that is already being updated in that period, an alarm will be raised.
2.3 Hidden Markov Model
The approach proposed by Barbara et al. [9] for database intrusion detection uses Hidden Markov Models (HMM) and time series to find malicious corruption of data. They used HMMs to build database behavioral models that capture the changing behavior over time, and recognized malicious patterns using them.
2.4 Mining Data Dependencies among Data Items
Hu et al. [10] proposed a model for detecting malicious transactions that are targeted at corrupting data. They used a data mining approach for mining data dependencies among data items, in the form of classification rules, i.e., which data items are most likely to be updated after one data item is updated, and which other data items probably need to be read before this data item is updated in the database by the same transaction. The transactions not compliant with the generated data dependencies are identified as malicious. Compared to the existing approaches for modeling database behavior [9] and transaction characteristics [7,8] for detecting malicious transactions, the advantage of their approach is that it is less sensitive to changes in user behavior and database transactions. It is based on the fact that, although the transaction programs in real-world database applications change often, the whole database structure and the essential data correlations do not change much.
2.5 Role-Based Access Control (RBAC) Model
Bertino et al. [11] proposed an intrusion detection system for database systems that is conceptually very similar to DEMIDS. One important fact they considered is that databases typically have a very large number of users, and therefore keeping a profile for each single user is not practical. Hence their approach is based on the well-known role-based access control (RBAC) model.
It builds a profile for each role and checks the behavior of each role with respect to that profile. Under an RBAC system, permissions are associated with roles rather than with single users. With roles, the number of profiles to build and maintain is much smaller than when considering individual users; therefore their approach is usable even for databases with a large user population. "Other important advantages of RBAC are that it has been standardized (see the NIST model [12]) and has been adopted in various commercial DBMS products as well in security enterprise solutions [13]. This implies that an ID solution, based on RBAC, could be deployed very easily in practice." [11] Moreover, the approach used by DEMIDS for building user profiles assumes domain knowledge about the data structures and semantics encoded in a given database schema, which can adversely affect the general applicability of the method, whereas Bertino et al. built profiles using syntactic information from the SQL queries, which makes their approach more generic. They used a Naïve Bayes classifier to predict the role to which the observed SQL command most likely belongs, and
compared it with the actual role. If the roles differ from each other, the SQL statement is considered illegal.
2.6 Weighted Data Dependency Rule Miner (WDDRM)
Srivastava et al. [14] proposed an approach for database intrusion detection using a data mining technique, which improves the approach offered by Hu et al. [10]. Since the number of attributes in current databases keeps increasing, Srivastava et al. observed that it is very difficult for administrators to keep track of whether every attribute is accessed or modified correctly, so their approach takes the sensitivity of the attributes into consideration. The sensitivity of an attribute shows its importance for tracking against malicious modifications. They divided the attributes into different categories based on their relative importance or sensitivity; therefore the administrator checks only those alarms that are generated by malicious modification of sensitive data, instead of checking all the attributes. If sensitive attributes are to be tracked for malicious modifications, then generating data dependency rules for these attributes is essential, because if there is no rule for an attribute, the attribute cannot be checked. The approach proposed by Hu et al. [10] does not generate any rules for highly sensitive attributes that are accessed less frequently, because it does not consider the sensitivity of attributes. The motivation of Srivastava et al. for dividing attributes into different sensitivity groups and assigning weights to each group is to bring out the dependency rules for possibly less frequent but more important attributes. Therefore they named their algorithm "weighted data dependency rule miner" (WDDRM). After generating the weighted data dependency rules, the algorithm marks the transactions that do not follow the extracted data dependencies as malicious. Srivastava et al.
compared their work with the non-weighted dependency rule mining approach. They carried out several experiments and showed that WDDRM performs better than the non-weighted dependency rule mining approach.
2.7 Dependencies among Data Items and Time-Series Anomaly Analysis
Hashemi et al. [15] proposed an approach for identifying malicious transactions using a data mining technique, which detects malicious transactions more effectively than the approach proposed by Hu et al. [10]. They observed that the approach proposed by Hu et al. [10] has some major disadvantages: "It can only find malicious transactions that corrupt data items and cannot identify transactions that read data without permission. This results in a significant reduction in the detection rate when most malicious transactions only intend to read data items illegally. In addition, when a dependency rule is violated by a transaction, without considering the confidence of the rule, the transaction is always identified as an intrusion. This incurs a high false positive rate. Furthermore, sometimes consistency rules of a database do not allow users to execute any arbitrary transaction and hence dependency between data items is no longer violated. But there can be some abnormal variations in the update pattern of each data item." [15]
These types of intrusions, which are not detected by the previous models, are considered by Hashemi et al.; by detecting them, their proposed approach is able to significantly improve the intrusion detection rate. Their approach has three advantages. First, dependency rules among data items are extended to detect not only transactions that write data without permission, but also transactions that read data without permission. Second, a novel behavior similarity criterion is introduced to reduce the false positive rate of the detection. Third, time-series anomaly analysis is conducted to identify intrusion transactions that update data items with unexpected patterns. They detect intrusion transactions in databases using two components. (1) The first component extracts dependency rules among data items, which are represented by the order in which these items are accessed (read or write). It is similar to the approach offered by Hu et al. [10], but the concept of malicious transactions is extended from those that corrupt data to those that read or write data (or both) without permission. Moreover, when a transaction t violates a dependency rule, it is not immediately identified as malicious: the approach examines whether there exists any normal transaction that violates the same rule and has similar behavior to t. If such a transaction exists, t is not considered an intruder. This is desirable because, in reality, dependency rules are not always 100% correct. (2) The second component uses anomaly analysis of the time series corresponding to each data item. The anomaly analysis approach can detect intrusion transactions that cannot be identified by the first component. This component extracts the time series from the normal transaction log for each data item and divides it using clustering techniques.
This component uses a suffix tree structure to efficiently discover normal update patterns and their frequencies for the corresponding data item, based on the separated representation of each time series. The anomaly weight of each pattern is calculated from the difference between its occurrence frequency in the transaction in question and in the normal time series. Finally, using weighted output integration, the final decision is made by combining the outputs of the above two components. Hashemi et al. conducted several experiments to evaluate the performance of their proposed method compared with the approach presented by Hu et al. [10], in terms of false positive and true positive rates, and showed that their approach achieves better performance than the rival method in all experiments.
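The confidence-aware rule check performed by the first component can be sketched as follows. The rule representation, the operation encoding and the threshold are illustrative assumptions, not Hashemi et al.'s exact formalism:

```python
# Sketch: flag a transaction only when it violates a dependency rule whose
# confidence exceeds a threshold (one refinement over plain rule matching).
def is_suspicious(transaction, rules, min_confidence=0.8):
    """transaction: list of ('r'|'w', item) operations performed.
    rules: list of (target, required, op, confidence), read as: before
    writing `target`, the operation (op, required) is usually performed."""
    accessed = {(op, item) for (op, item) in transaction}
    for (target, required, op, conf) in rules:
        writes_target = ("w", target) in accessed
        missing_precondition = (op, required) not in accessed
        if writes_target and missing_precondition and conf >= min_confidence:
            return True   # a confident dependency rule is violated
    return False
```

With a high-confidence rule "read account_id before writing balance", a transaction that writes balance without the read is flagged, while the same violation of a low-confidence rule is tolerated, lowering the false positive rate.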
3 Comparison of Approaches
We have reviewed existing approaches to database intrusion detection in this paper. Their highlighting features are as follows:
• Chung et al. [7] applied access patterns of users to detect intrusions. Their misuse detection system (DEMIDS) utilizes audit logs to build profiles that describe the typical behavior of users working with the database systems. The derived profiles are used to detect misuse behaviors.
• Lee et al. [8] used time signatures to detect intrusions in real-time database systems. Time signatures are tagged to data items, and a security alarm is raised when a transaction attempts to write a temporal data item that has already been updated within a certain period.
• Barbara et al. [9] utilized Hidden Markov Models to simulate users' behavior for detecting malicious data corruption.
• Hu et al. [10] detected malicious transactions that are targeted at corrupting data. They mined data dependencies among data items in the form of classification rules. Transactions that do not follow the mined data dependencies are identified as intruders.
• Bertino et al. [11] employed role-based access control to detect intrusions in database systems. Their approach is usable even for databases with a large user population. It uses a Naïve Bayes classifier to predict the role to which the observed SQL command most likely belongs and compares it with the actual role. If the roles are different, the SQL statement is considered illegal.
• Srivastava et al. [14] improved the approach offered by Hu et al. [10]. They proposed a novel weighted data dependency rule mining algorithm that considers the sensitivity of the attributes while mining the dependency rules, so it generates rules for possibly less frequent but more important attributes.
• Hashemi et al. [15] extended the approach presented by Hu et al. [10]. Their approach is based on (1) mining dependencies among data items and (2) finding abnormal update patterns in the time series corresponding to each data item's update history. The concept of malicious transactions is extended from those that corrupt data to those that read or write data (or both) without permission.
As mentioned earlier, some of these approaches improved on approaches proposed before them. For instance, Srivastava et al. [14] improved the approach offered by Hu et al. [10].
For studying relative performance, they compared their work with the non-weighted dependency rule mining approach (Hu et al. [10]), which they call DDRM. Fig. 1 shows the loss suffered by the intrusion detection system, in terms of weight units, using both approaches. It is observed that WDDRM outperforms DDRM. This is because WDDRM tracks the sensitive attributes much better than DDRM, and therefore the overall loss is minimized.
Fig. 1. Comparison of DDRM and WDDRM [14]
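The weight-based loss compared in Fig. 1 can be sketched as follows. The attribute weights and the exact loss definition here are illustrative assumptions, not the formulation of [14]; the point is only that a missed attack on a high-weight attribute costs more.

```python
# Hypothetical sketch of the weight-based loss behind Fig. 1: each attribute
# carries a sensitivity weight, and a detector's loss is the total weight of
# attributes corrupted by malicious transactions it failed to flag.

WEIGHTS = {"balance": 5.0, "address": 1.0, "phone": 0.5}  # illustrative weights

def loss(missed_writes):
    """Total weight of attributes written by undetected malicious transactions."""
    return sum(WEIGHTS.get(attr, 1.0) for attr in missed_writes)

# A weighted miner (WDDRM-like) that protects high-weight attributes misses
# only cheap ones, so its loss stays low:
loss_weighted   = loss(["phone"])             # 0.5
loss_unweighted = loss(["balance", "phone"])  # 5.5
```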
Intrusion Detection in Database Systems
Another instance is the approach offered by Hashemi et al. [15], which extends the approach proposed by Hu et al. [10]. Several experiments were conducted to evaluate the performance of the proposed method in terms of false positive and true positive rates. The first experiment was devoted to the false positive rates of the rival algorithms. As can be seen from Table 1, the false positive rate of the Hashemi et al. method is considerably lower than that of the method offered by Hu et al. [10]. This improvement is mainly due to the behavior similarity measure, which helps their algorithm avoid misidentifying normal transactions as intruders. The second experiment assesses the true positive rates of the algorithms and focuses on the dependency between data items regardless of the pattern by which the value of a particular data item changes. It can be observed from Table 2 that as the dependency between data items increases, both algorithms' true positive rates increase, but the Hashemi et al. approach always performs better than the alternative method. This improvement is achieved using the read and write sequence sets, which take into account the variety of dependencies among read and write operations. The third experiment compares the true positive rate of their approach against the rival method based on both the dependency rules and the pattern by which the value of every data item changes. Table 3 shows that the proposed approach again achieves better performance than the rival method across different dependency factors. This advantage results from the fact that it considers not only the dependency rules, but also anomalies in update patterns.

Table 1. False Positive Rates (FPR) of the approaches proposed by Hashemi et al. and Hu & Panda in the first experiment [15]

Dependency factor             1      2      3      4      5
Hashemi et al.'s approach
    No. of rules              2      4      6      7      8
    FPR (%)                   1.5    1.8    1.9    1.7    1.8
Hu & Panda's approach
    No. of rules              1      2      3      4      4
    FPR (%)                   5.2    9.6    10.2   11.4   7.5
Table 2. True Positive Rates (TPR) of the approaches proposed by Hashemi et al. and Hu & Panda in the second experiment [15]

Dependency factor             1      2      3      4      5
Hashemi et al.'s approach
    No. of rules              2      4      6      7      8
    TPR (%)                   44.8   70.8   82.6   87.3   90.7
Hu & Panda's approach
    No. of rules              1      2      3      4      4
    TPR (%)                   33.3   58.1   72.5   81.3   82.1
Table 3. True Positive Rates (TPR) of the approaches proposed by Hashemi et al. and Hu & Panda in the third experiment [15]

Dependency factor             1      2      3      4      5
Hashemi et al.'s approach
    No. of rules              2      4      6      7      8
    TPR (%)                   83.7   93.6   97.0   97.6   98.8
Hu & Panda's approach
    No. of rules              1      2      3      4      4
    TPR (%)                   26.2   58.7   75.3   78.8   83.9
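The rates reported in Tables 1–3 are standard detection metrics; a minimal sketch of how they are computed from flagged and actual labels (the detectors themselves are abstracted away into the `flags` list):

```python
# Minimal sketch of the metrics in Tables 1-3: FPR is the share of normal
# transactions wrongly flagged, TPR the share of malicious transactions caught.

def rates(flags, malicious):
    """flags/malicious: parallel boolean lists (flagged?, actually malicious?)."""
    fp = sum(f and not m for f, m in zip(flags, malicious))
    tp = sum(f and m for f, m in zip(flags, malicious))
    negatives = sum(not m for m in malicious)
    positives = sum(malicious)
    return 100.0 * fp / negatives, 100.0 * tp / positives  # FPR %, TPR %

# 1 of 2 normal transactions flagged (FPR 50%); 2 of 2 malicious caught (TPR 100%)
fpr, tpr = rates([True, False, True, True], [False, False, True, True])
```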
4 Conclusion

We have reviewed and compared current studies of intrusion detection in database systems. In particular, this paper reviews existing approaches published between 2000 and 2008. The methodologies covered are: access patterns of users, time signatures, Hidden Markov Models, mining data dependencies among data items, the role-based access control (RBAC) model, weighted data dependency rule mining, and, finally, dependencies among data items combined with time-series anomaly analysis. As mentioned earlier, some of these approaches improved upon approaches proposed before them. Srivastava et al. improved the approach proposed by Hu & Panda: they track the sensitive attributes much better than Hu & Panda, and therefore their overall loss is minimized; they showed that their method outperforms the rival method. Also, Hashemi et al. extended the approach proposed by Hu & Panda [10]. Their improvement is mainly due to three factors: • The behavior similarity measure, which helps their algorithm avoid misidentifying normal transactions as intruders. • The use of read and write sequence sets, which take into account the variety of dependencies among read and write operations. • Considering not only the dependency rules, but also anomalies in update patterns. Their experimental evaluations showed that their approach performs better than the rival method.
References 1. Forrest, S., Hofmeyr, S.A., Somayaji, A., Longstaff, T.A.: A Sense of Self for Unix Processes. In: IEEE Symposium on Security and Privacy, pp. 120–128. IEEE Computer Society Press, Los Alamitos (1996) 2. Javitz, H.S., Valdes, A.: The SRI IDES Statistical Anomaly Detector. In: IEEE Symposium on Security and Privacy (1991) 3. Frank, J.: Artificial Intelligence and Intrusion Detection: Current and Future Directions. In: 17th National Computer Security Conference (1994)
4. Noel, S., Wijesekera, D., Youman, C.: Modern intrusion detection, data mining, and degrees of attack guilt. In: Applications of Data Mining in Computer Security. Kluwer Academic, Dordrecht (2002) 5. Ertoz, L., Eilertson, E., Lazarevic, A., Tan, P., Srivastava, J., Kumar, V., Dokas, P.: The MINDS – Minnesota Intrusion Detection System. In: Next Generation Data Mining. MIT Press, Boston (2004) 6. Qin, M., Hwang, K.: Frequent episode rules for Internet traffic analysis and anomaly detection. In: IEEE Conference on Network Computing and Applications (NCA 2004). IEEE Press, New York (2004) 7. Chung, C.Y., Gertz, M., Levitt, K.: DEMIDS: A Misuse Detection System for Database Systems. In: Integrity and Internal Control in Information Systems: Strategic Views on the Need for Control, pp. 159–178. Kluwer Academic Publishers, Norwell (2000) 8. Lee, V.C., Stankovic, J., Son, S.H.: Intrusion Detection in Real-Time Database Systems via Time Signatures. In: 6th IEEE Real Time Technology and Applications Symposium (RTAS 2000), p. 124 (2000) 9. Barbara, D., Goel, R., Jajodia, S.: Mining Malicious Data Corruption with Hidden Markov Models. In: 16th Annual IFIP WG 11.3 Working Conference on Data and Application Security, Cambridge, England (2002) 10. Hu, Y., Panda, B.: A Data Mining Approach for Database Intrusion Detection. In: ACM Symposium on Applied Computing, pp. 711–716 (2004) 11. Bertino, E., Kamra, A., Terzi, E., Vakali, A.: Intrusion Detection in RBAC-administered Databases. In: 21st Annual Computer Security Applications Conference, pp. 170–182 (2005) 12. Sandhu, R., Ferraiolo, D., Kuhn, R.: The NIST Model for Role Based Access Control: Towards a Unified Standard. In: 5th ACM Workshop on Role Based Access Control (2000) 13. Karjoth, G.: Access Control with IBM Tivoli Access Manager. ACM Transactions on Information and System Security (TISSEC) 6(2), 232–257 (2003) 14.
Srivastava, A., Sural, S., Majumdar, A.K.: Weighted Intra-transactional Rule Mining for Database Intrusion Detection. In: Ng, W.-K., Kitsuregawa, M., Li, J., Chang, K. (eds.) PAKDD 2006. LNCS (LNAI), vol. 3918, pp. 611–620. Springer, Heidelberg (2006) 15. Hashemi, S., Yang, Y., Zabihzadeh, D., Kangavari, M.: Detecting Intrusion Transactions in Databases Using Data Item Dependencies and Anomaly Analysis. Expert Systems J. 25(5) (2008)
A Secure Routing Using Reliable 1-Hop Broadcast in Mobile Ad Hoc Networks Seungjin Park1 and Seong-Moo Yoo2 1
Department of Management and Information Systems University of Southern Indiana Evansville, IN 47712, USA 2 Electrical and Computer Engineering Department The University of Alabama in Huntsville Huntsville, AL 35899, USA
[email protected],
[email protected] Abstract. Among many ways to achieve security in wireless mobile ad hoc networks, the approach taken in this paper is to ensure that all nodes in the network receive critical information on security such as public keys. To achieve this, a reliable global broadcasting of the information must be accomplished, which in turn, relies on a reliable 1-hop broadcasting in which a message from the source node is guaranteed to be delivered to all nodes within the source node’s transmission range. This paper presents a MAC protocol that guarantees a reliable and efficient 1-hop broadcast. The unique feature of the proposed algorithm is that each node is able to dynamically adjust its transmission range depending on the node density around it. Simulation results show the effectiveness of the proposed algorithm. Keywords: Secure, routing, ad hoc network, 1-hop broadcasting.
1 Introduction

A Mobile Ad Hoc Network (MANET) consists of a set of wireless mobile hosts (or nodes) that are free to move in any direction at any speed. A MANET does not require any preexisting fixed infrastructure, and therefore it can be built on the fly. However, due to its inherent properties, such as the lack of infrastructure, the mobility of nodes, and the absence of a trusted centralized node, MANETs suffer from significant security issues. Among the many proposed algorithms on security in ad hoc networks, a group of researchers have taken routing as the main approach to attaining security in the network [17, 18, 19]. Note that those algorithms might not work properly without a reliable 1-hop broadcast, since most of them are based on it. In this paper, a reliable 1-hop broadcast is proposed to support security. Due to the nature of wireless networks, a transmission is basically a 1-hop broadcast, in which a signal transmitted from a node (the source node) reaches all nodes within its transmission range (the neighbors of the source node). Many important

T.-h. Kim et al. (Eds.): FGCN 2010, Part II, CCIS 120, pp. 102–111, 2010. © Springer-Verlag Berlin Heidelberg 2010
A Secure Routing Using Reliable 1-Hop Broadcast in Mobile Ad Hoc Networks
algorithms in MANETs heavily depend on the performance of 1-hop broadcasting [11, 12, 13, 14, 15]. These algorithms only work correctly provided that the 1-hop broadcasting is reliable, i.e., packet delivery from the source node to all its neighbor nodes is guaranteed. Otherwise, inaccurate and/or insufficient information may cause severe degradation of the algorithms and therefore result in an insecure network. Although many algorithms have been developed based on a reliable 1-hop broadcast, thus far not many reliable 1-hop broadcasting protocols have actually been proposed. Achieving a reliable 1-hop broadcast in wireless networks is not easy due to the collisions caused by a phenomenon known as the Hidden Terminal Problem [1]. If a transmission is just between two nodes (i.e., point-to-point communication), then the RTS-CTS protocol can eventually resolve the problem [16]. However, a point-to-point approach is extremely inefficient and often useless if it is used for 1-hop broadcast in MANETs. Another degradation of network throughput occurs when nodes cannot exploit possible simultaneous transmissions. An example of this type is the Exposed Terminal Problem [2], which prevents nearby nodes from successful simultaneous transmissions. Many communication algorithms proposed thus far are based on single-channel MAC protocols. However, the IEEE 802.11 standard for wireless LANs provides multiple channels for communication between nodes in the network [1]. Although using only one channel is simple to implement, utilizing multiple channels may allow simultaneous communication without causing any collision or contention. Therefore, higher network throughput can be achieved if the multiple channels are used carefully [2, 3, 4, 5]. This paper presents a reliable 1-hop broadcasting algorithm called Flexible Reliable 1-hop Broadcasting (FROB) that guarantees the delivery of the broadcast message from the transmitting node to all nodes within its transmission range.
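The Hidden Terminal Problem mentioned above can be made concrete with a small geometric check: two senders that cannot hear each other may still share a receiver. The coordinates and the range value below are illustrative assumptions.

```python
# A concrete check of the Hidden Terminal Problem: S and V are mutually out of
# range (so carrier sensing fails), yet both reach D, so their simultaneous
# transmissions collide at D.
import math

TR = 200.0  # common transmission range (illustrative value)

def in_range(a, b, tr=TR):
    return math.hypot(a[0] - b[0], a[1] - b[1]) <= tr

def hidden_terminals(s, v, d, tr=TR):
    """True if s and v cannot hear each other but share the receiver d."""
    return (not in_range(s, v, tr)) and in_range(s, d, tr) and in_range(v, d, tr)

S, V, D = (0.0, 0.0), (300.0, 0.0), (150.0, 0.0)
# S and V are 300 apart (> TR): hidden from each other, but D hears both.
```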
Like EROB [6], FROB uses three different channels, one for data packet transmissions and two for control packet transmissions, to prevent collisions. However, the main difference between EROB and FROB is that EROB allows a node to use only two levels of transmission range, whereas FROB uses many different transmission ranges for control and data transmission, further increasing network throughput by allowing as many simultaneous transmissions as possible. Simulation results show that FROB outperforms a naïve algorithm that takes no precaution against collisions, an algorithm that implements CSMA/CA [7], and EROB. The rest of the paper is organized as follows. Section 2 explains terminology and related work. The new 1-hop broadcasting algorithm is presented in Section 3, followed by simulation results in Section 4. Finally, conclusions and discussion are presented in Section 5.
2 Preliminaries

Basic knowledge, terminology, and related work that help in understanding this paper are presented in this section.
S. Park and S.-M. Yoo
Fig. 1. An illustration of the Hidden Terminal Problem. Simultaneous transmissions from both V and S will cause a collision at nodes within the overlapping area of S's and V's transmission ranges, for example D. Note that V cannot detect S's transmission, since V is out of S's transmission range.
2.1 Terminology

When a node S in a wireless network transmits a signal, the signal propagates in all directions to a distance usually proportional to the transmission power. The area covered by the signal is approximated by a circle centered at S; it is called the transmission range (or broadcast area) and is denoted as TRS. TRS is also used to denote the radius of the area if there is no possibility of confusion. In this paper, it is assumed that every node has the same maximum transmission range and can adjust its transmission range depending on the network density around it. If a node S transmits a packet of type P, then the transmission range required for the transmission is denoted as TRS,P. If S uses its maximum transmission power, it is denoted as TRS,MAX or simply TRMAX. Figure 1 shows the transmission ranges of S and V. The set of nodes in TRV is denoted as N(V). Any node P∈N(V) is called a 1-hop neighbor (or simply a neighbor) of V. Likewise, if the minimum number of links connecting nodes P and Q is n, they are n-hop neighbors of each other. In the following, a neighbor implies a 1-hop neighbor, unless otherwise specified. In Figure 1, nodes B, C, D are the neighbors of S, and D is a neighbor of V. Note that transmission and 1-hop broadcast are synonymous in wireless networks, since when the source node transmits a packet, the packet actually reaches all nodes in the source's transmission range (the same effect as a 1-hop broadcast). An area is called the broadcast threatening area of node V if a transmission from a node, say W, in that area causes TRV ∩ TRW ≠ {}. Since it is assumed in this paper that every node has the same maximum transmission range, TRV = TRW =
TRMAX for every node. Therefore, the radius of the broadcast threatening area of V would be 2*TRV. In this paper, different channels are assigned to the different packet types. A channel that is used to transmit packet type P is denoted as CHP. 2.2 Related Works Among some related works [6, 8, 9, 10], most recently, Park and Yoo [6] have proposed an algorithm called Efficient Reliable 1-Hop Broadcasting (EROB) that is similar to the proposed algorithm FROB. Although EROB works with node mobility, here we present the version with static nodes. Two types of packets are used in EROB: control packets and data packets. Data packets contain the data to be 1-hop broadcast, and control packets are used to enhance the efficiency of the data packet transmission. Although control packets may not be essential, the network throughput is usually higher with them due to their control over packet collisions. For example, using RTS and CTS control packets may produce higher network throughput [16]. EROB uses only a single type of control packet called Broadcast-In-Progress (BIP in short) to prevent collisions for achieving reliable 1-hop broadcast. A BIP is produced and used in two cases. 1) Prior to 1-hop broadcast of a data packet, a node transmits a BIP to secure not only the broadcast area but also broadcast threatening area as well to prevent possible collisions. 2) On receiving a BIP, a node that is currently involved in any other communication generates and transmits a BIP to warn other nodes in its broadcast threatening area not to initiate data packet transmission. To prevent the data collisions, EROB uses three different channels. CHBIP and CHDATA are dedicated to control packet (i.e., BIPs) and data packet transmissions, respectively. The third channel CHCOL is also for BIPs only but is used to prevent the BIP Propagation Problem that will be explained shortly. 
Since different types of packets use different channels, collisions can occur only between packets of the same type in the same channel, not between different types. BIPs in EROB prevent data packet collisions as follows. Recall that BIPs are transmitted only along CHBIP and data packets are transmitted only along CHDATA. Suppose node S has a data packet to be 1-hop broadcast. Then, prior to data transmission, S prepares and transmits a BIP with TRMAX (= TRBIP) to inform the nodes in its transmission range of its intention of immediate data packet transmission. On receiving the BIP, a node that is currently not involved in any communication remains silent. Otherwise, i.e., if the node is currently involved in any other communication, it transmits a BIP to warn S not to transmit the data packet, since S's transmission may cause a collision at that node. On receiving either a BIP or a collision of BIPs, S refrains from transmitting the data packet. The following two examples illustrate this.
Algorithm EROB

At the broadcasting node S:
1. S listens to its surroundings for any ongoing transmission. If there is one, S waits a random amount of time and repeats step 1. Otherwise, it goes to step 2.
2. S transmits a BIP via CHBIP and waits for any response. If S detects a BIP or a BIP collision during the transmission, it stops transmitting and goes to step 1.
4. If S hears anything on CHCOL, it waits a random amount of time and goes to step 1. Otherwise, S starts transmitting the data packet. If S hears a BIP or a collision of BIPs along CHBIP during data transmission, it transmits a BIP along CHCOL with TRMAX.

At the receiving node D:
3. D receives either a BIP or a collision along CHBIP. Regardless of which it received, D performs one of the following, depending on its current status.
   Case 1: D is transmitting a data packet. D transmits a BIP along CHCOL with TRMAX.
   Case 2: D is receiving a data packet. D transmits a BIP along CHCOL with TRMAX/2.
   Case 3: D is not involved in a data packet transmission. D keeps silent.

Fig. 2. The summary of EROB
Case 1) Suppose there is a node, say D, in S's transmission range that is involved in a data packet transmission. Then, D prepares and transmits a BIP to warn S not to transmit the data packet, since if S did, it would cause a data packet collision at D. If there are two or more nodes involved in data transmission in S's transmission range, they all transmit BIPs to warn S. In this case, although S hears a garbled message (i.e., garbled BIPs) that cannot be decoded correctly, S still interprets the situation correctly and does not transmit the data packet. Case 2) Suppose more than one node has data for 1-hop broadcast. For example, suppose both S and V have data for 1-hop broadcast. Then, as described above, they transmit BIPs prior to data transmission to prevent collisions.
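The node behavior summarized in Fig. 2 can be restated as a small decision sketch. This is an illustrative restatement, not the authors' implementation; channel and range names mirror the paper's notation, and node status is simplified to three strings.

```python
# Illustrative restatement of the EROB node behavior summarized in Fig. 2.

def respond_to_bip(status):
    """A node's reaction to a BIP heard on CHBIP, per Cases 1-3 of Fig. 2."""
    if status == "transmitting":        # Case 1: warn with full power
        return ("BIP", "CHCOL", "TRMAX")
    if status == "receiving":           # Case 2: warn with half power
        return ("BIP", "CHCOL", "TRMAX/2")
    return None                         # Case 3: idle node keeps silent

def may_broadcast(responses):
    """Step 4: source S transmits data only if CHCOL stayed silent."""
    return all(r is None for r in responses)

# One busy neighbor is enough to make S back off and retry later:
ok = may_broadcast([None, respond_to_bip("receiving")])   # False: S defers
```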
Although BIPs are very useful, they may also cause a serious problem, called the BIP Propagation Problem, similar to the Whistle Propagation Problem [9]. The approach EROB takes to resolve the BIP Propagation Problem is to use the additional channel CHCOL, so that when a node receives either a BIP or a collision on CHBIP, it stops propagating the BIP. EROB is summarized in Figure 2.
3 Proposed Algorithm: Flexible Reliable 1-Hop Broadcasting (FROB)

This section presents a new 1-hop broadcasting algorithm, called Flexible Reliable 1-hop Broadcasting (FROB), that guarantees the completion of 1-hop broadcasting. Note that EROB may suffer many collisions if the network density is high, because nodes in EROB have only two predefined transmission ranges, TRMAX and TRMAX/2. Therefore, if a node suffers many collisions due to many simultaneously transmitting neighbor nodes, it should reduce its transmission range to avoid the collisions, which is not possible in EROB because nodes cannot reduce their transmission ranges below TRMAX/2. On the other hand, the main improvement of FROB over EROB is that the nodes in FROB are capable of adjusting their transmission ranges to any value. Therefore, networks implementing FROB may enhance their throughput considerably. Our main focus in this section is to show how FROB finds the best transmission ranges for nodes when a collision is detected. The nodes in FROB take the following phases to accomplish reliable 1-hop broadcasting. Phase 1) A node S that has a data packet transmits a BIP prior to sending the data packet. Then, S enters Phase 2. Phase 2) If S does not hear any BIP or collision of BIPs along CHBIP, S enters Phase 4, because this indicates that it is safe to transmit a data packet. Otherwise, S goes to Phase 3. Phase 3) S reduces its transmission range, transmits a BIP again, and enters Phase 2. Phase 4) S starts transmitting a data packet. In Phase 3, a node reduces its transmission range when it detects a BIP or a collision of BIPs. The question is then how much the range should be reduced. Possible approaches are to reduce the transmission range to: 1) half of the previous transmission range. This method has the advantage of quickly approaching a transmission range that does not suffer any collision. However, the disadvantage is that it may not produce the best transmission range, that is, the largest transmission range that does not suffer collision. 2) various predefined values from TRMAX/6 to TRMAX/3, to find the best transmission range. This method may take more time to converge to the best transmission
range; however, most of the time it reaches a better transmission range than the previous method. In our simulation, we implement both methods and try to discover the relationship between the number of active neighbor nodes and the best transmission range.
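The two range-reduction strategies can be sketched as follows. The `collides()` predicate is a stand-in for the BIP probe of Phases 1–3, and the specific floor and step values are illustrative assumptions, not parameters from the paper.

```python
# Sketch of the two range-reduction strategies discussed for FROB's Phase 3.

TRMAX = 200.0

def halving(collides, tr=TRMAX, floor=TRMAX / 6):
    """Strategy 1: halve the transmission range until no collision is heard."""
    while collides(tr) and tr / 2 >= floor:
        tr /= 2
    return tr

def stepped(collides):
    """Strategy 2: try predefined fractions of TRMAX, largest first."""
    for tr in (TRMAX / 3, TRMAX / 4, TRMAX / 5, TRMAX / 6):
        if not collides(tr):
            return tr        # largest predefined collision-free range
    return TRMAX / 6

# In a crowded neighborhood where any range above 60 collides:
crowded = lambda tr: tr > 60
```

Here halving lands on 50 (200 → 100 → 50), and the stepped search also settles on TRMAX/4 = 50, but with a different search cost, which is the trade-off the text describes.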
4 Simulation

In our simulation, EROB and FROB were tested and compared under an environment similar to that used for EROB. The size of the network is 2000×2500, and the maximum transmission range is 200. At the beginning of the simulation, each node is assigned random starting and destination positions, with a randomly chosen speed between 0 and 100. Once a node reaches its destination, the destination becomes a new starting point and a new destination is assigned with a new speed. Nodes move in a straight line between the starting and destination points. It is assumed that every node always tries to 1-hop broadcast, to generate a hostile environment. The data packet transmission duration is 5 time units and a BIP takes 2 time units, which seems reasonable since a BIP is much smaller than a data packet. Each simulation lasts 200 time units. The first simulation result, on the number of packet collisions with CSMA, EROB, and FROB, is presented in Figure 3, where the x-axis represents the number of active nodes in the network. The result clearly shows the advantage of FROB over CSMA and EROB. Figure 4 shows the number of successful data packet transmissions in CSMA, EROB, and FROB. Again, the figure clearly shows that FROB performs far better than the other two protocols. The success of FROB in 1-hop broadcast is mainly due
Fig. 3. The number of packet collisions in CSMA, EROB, and FROB. The x-axis indicates the number of nodes currently involved in 1-hop broadcast in the network.
Fig. 4. The number of successful 1-hop broadcasts in CSMA, EROB, and FROB. The x-axis represents the number of nodes currently conducting 1-hop broadcasting.
to its flexibility in adjusting the transmission range at each node, which not only reduces packet collisions but also improves packet transmission rates. However, it should be pointed out that the transmission ranges in FROB are usually smaller than those of the other two protocols. This implies that it takes more time for a source node to route a packet to a destination node that is not within the source node's transmission range, because it may take more hops to reach the destination node in FROB than in the other two protocols.
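The waypoint mobility model described in the simulation setup above can be sketched in a few lines. The field size (2000×2500) and speed bound (100) follow the paper; the unit time step and function names are assumptions.

```python
# Sketch of the simulation's mobility model: each node moves in a straight
# line toward a random waypoint at a random speed; on arrival it picks a new
# waypoint and a new speed.
import random

W, H, MAX_SPEED = 2000.0, 2500.0, 100.0

def new_waypoint():
    return (random.uniform(0.0, W), random.uniform(0.0, H))

def step(pos, dest, speed, dt=1.0):
    """Advance one time unit; returns the updated (pos, dest, speed)."""
    dx, dy = dest[0] - pos[0], dest[1] - pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= speed * dt:   # destination reached: start a new leg
        return dest, new_waypoint(), random.uniform(0.0, MAX_SPEED)
    f = speed * dt / dist
    return (pos[0] + f * dx, pos[1] + f * dy), dest, speed
```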
5 Conclusion

An algorithm that implements a reliable global broadcast (or simply a broadcast), in which information from the source node is guaranteed to be delivered to all nodes in the network, is critical for achieving network security. For example, a public key should be delivered to all nodes in the network; otherwise, security may not be assured. This paper proposes a reliable 1-hop broadcast algorithm called Flexible Reliable 1-Hop Broadcasting (FROB) that guarantees that the source node delivers its packet to all nodes in its transmission range. Note that reliable 1-hop broadcasting is the first step toward achieving global broadcasting. A reliable 1-hop broadcast is very useful in almost all networks, especially in wireless networks, where every transmission is a 1-hop broadcast by nature. Despite its importance, 1-hop broadcast is hard to accomplish in wireless networks due to the collisions caused by the Hidden Terminal Problem.
This paper presents an algorithm, called Flexible Reliable 1-Hop Broadcast (FROB in short), that guarantees the completion of a 1-hop broadcast in wireless mobile ad hoc networks. In addition to data packets, FROB uses a single type of control packet, Broadcast In Progress (BIP), to prevent collisions. FROB also implements three different channels, one for data packets and the other two for BIPs, to further prevent collisions. Another unique feature of FROB is that it allows each node to adjust its transmission range so that as many simultaneous 1-hop broadcasts as possible can be explored to enhance network throughput. Other advantages obtained from the adjustment of transmission ranges include 1) power saving due to smaller transmission ranges, 2) fewer collisions, because the smaller the transmission range, the fewer nodes it contains, and 3) a longer network lifespan. Simulation results confirm the significant improvement of the proposed algorithm over EROB.
References 1. Allen, D.: Hidden Terminal Problems in Wireless LAN’s. IEEE 802.11 Working Group paper 802.11/93-xx 2. Bharghavan, V., et al.: MACAW: A Media Access Protocol for Wireless LAN’s. In: Proc. ACM SIGCOMM (1994) 3. IEEE 802.11 Working Group: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications (1997) 4. Deng, J., Haas, Z.: Dual Busy Tone Multiple Access (DBTMA): A New Medium Access Control for Packet Radio Networks. In: Proc. of IEEE ICUPC (1998) 5. Tang, Z., Garcia-Luna-Aceves, J.J.: Hop-Reservation Multiple Access (HRMA) for Ad Hoc Networks. In: Proc. of IEEE INFOCOM (1999) 6. Park, S., Yoo, S.: An Efficient Reliable 1-Hop Broadcast in Mobile Ad Hoc Networks (submitted for publication) 7. Kleinrock, L., Tobagi, F.: Packet Switching in Radio Channels: Part I – Carrier Sense Multiple-Access Modes and Their Throughput-Delay Characteristics. IEEE Transactions on Communications 23, 1400–1416 (1975) 8. Park, S., Palasdeokar, R.: Reliable One-Hop Broadcasting (ROB) in Mobile Ad Hoc Networks. In: 2nd ACM International Workshop on Performance Evaluation of Wireless Ad Hoc, Sensor, and Ubiquitous Networks (PE-WASUN 2005), pp. 234–237 (2005) 9. Lembke, J., Ryne, Z., Li, H., Park, S.: Collision Avoidance in One-Hop Broadcasting for Mobile Ad-hoc Networks. In: IASTED International Conference on Communication, Internet, and Information Technology, pp. 308–313 (2005) 10. Park, S., Anderson, R.: Guaranteed One-Hop Broadcasting in Mobile Ad-Hoc Networks. In: The 2008 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA 2008) (July 2008) 11. Wu, J., Dai, F.: Broadcasting in Ad Hoc Networks Based on Self-Pruning. In: INFOCOM (2003) 12. Li, Y., Thai, M.T., Wang, F., Yi, C.-W., Wan, P.-J., Du, D.-Z.: On greedy construction of connected dominating sets in wireless networks. Wireless Communications and Mobile Computing (WCMC) 5(8), 927–932 (July 2005) 13.
Wan, P.-J., Alzoubi, K.M., Frieder, O.: Distributed construction on connected dominating set in wireless ad hoc networks. Mobile Networks and Applications 9(2), 141–149 (2004)
14. Haas, J.: A new routing protocol for the reconfigurable wireless networks. In: Proc. of IEEE 6th International Conference on Universal Personal Communications 1997, pp. 562– 566 (1997) 15. Lim, H., Kim, C.: Multicast tree construction and flooding in wireless ad hoc networks. In: Proceedings of the Third ACM International Workshop on Modeling, Analysis and Simulation of Wireless and Mobile Systems, MSWiM (2000) 16. Karn, P.: MACA - A New Channel Access Method for Packet Radio. In: ARRL/CRRL Amateur Radio 9th Computer Networking Conference (1990) 17. Ding, Y., Chim, T., Li, V., Yiu, S.M., Hui, C.K.: ARMR: Anonymous Routing Protocol with Multiple Routes for Communications in Mobile Ad Hoc Networks. Ad Hoc Networks 7, 1536–1550 (2009) 18. Kim, J., Tsudik, G.: SRDP: Secure Route Discovery for Dynamic Source Routing in MANETs. Ad Hoc Networks 7, 1097–1109 (2009) 19. Qian, L., Song, N., Li, X.: Detection of Wormhole Attacks in Multi-Path Routed Wireless Ad Hoc Networks: Statistical Analysis Approach. Journal of Network and Computer Applications 30, 308–330 (2007)
A Hybrid Routing Algorithm Based on Ant Colony and ZHLS Routing Protocol for MANET Marjan Kuchaki Rafsanjani1, Sanaz Asadinia2, and Farzaneh Pakzad3 1
Department of Computer Science, Shahid Bahonar University of Kerman, Kerman, Iran
[email protected] 2 Islamic Azad University Tiran Branch, Tiran, Iran
[email protected] 3 Islamic Azad University Khurasgan Branch, Young Researchers Club, Khurasgan, Khurasgan, Iran
[email protected]

Abstract. Mobile ad hoc networks (MANETs) require dynamic routing schemes for adequate performance. This paper presents a new routing algorithm for MANETs, which combines the idea of ant colony optimization with the Zone-based Hierarchical Link State (ZHLS) protocol. Ant colony optimization (ACO) is a class of Swarm Intelligence (SI) algorithms. SI is the local interaction of many simple agents to achieve a global goal; it is inspired by the behavior of social insects and is used to solve different types of problems. ACO algorithms use mobile agents, called ants, to explore the network. Ants help to find paths between two nodes in the network. Our algorithm is based on ants jumping from one zone to the next, and it combines proactive routing within a zone with reactive routing between zones. Our proposed algorithm improves network performance, such as delay, packet delivery ratio, and overhead, compared with traditional routing algorithms.

Keywords: Zone-based Hierarchical Link State (ZHLS); Ant Colony Optimization (ACO); Swarm Intelligence (SI); Mobile Ad hoc Networks (MANETs).
1 Introduction

A mobile ad hoc network (MANET) is an infrastructure-less multi-hop network in which each node communicates with other nodes directly or indirectly through intermediate nodes. Thus, all nodes in a MANET basically function as mobile routers participating in some routing protocol required for deciding and maintaining the routes. Since MANETs are infrastructure-less, self-organizing, rapidly deployable wireless networks, they are highly suitable for communications in regions with no wireless infrastructure, in emergencies and natural disasters, and in military operations [1,2]. Routing is one of the key issues in MANETs due to their highly dynamic and distributed nature. Numerous ad hoc routing algorithms exist to allow networking under various conditions. They can be separated into three groups: proactive, reactive, and hybrid algorithms. Proactive routing algorithms maintain a continuously updated

T.-h. Kim et al. (Eds.): FGCN 2010, Part II, CCIS 120, pp. 112–122, 2010. © Springer-Verlag Berlin Heidelberg 2010
A Hybrid Routing Algorithm Based on Ant Colony and ZHLS Routing Protocol
113
state of the network and the existing routes; however, in some cases this generates unnecessary overhead to maintain the routing tables, and it may then be better to create routes only on demand, as reactive routing algorithms do. Reactive routing algorithms, in turn, require time-consuming route creation that may delay the actual transmission of data when sources have no path towards their destination; in that case it may be better to use a proactive routing algorithm. Hybrid protocols try to exploit the advantages of both reactive and proactive protocols and combine their basic properties into one. These protocols have the potential to provide higher scalability than pure reactive or proactive protocols, thanks to the collaboration between nodes in close proximity, which work together and therefore reduce the route discovery overhead [3]. Recently, a new family of algorithms inspired by swarm intelligence has emerged, providing a novel approach to distributed optimization problems. The expression "Swarm Intelligence" denotes any attempt to design algorithms inspired by the collective behavior of social insect colonies and other animal societies. Ant colonies, bird flocking, animal herding and fish schooling are examples of swarm intelligence in nature. Several algorithms based on ant colonies were introduced in recent years to solve the routing problem in mobile ad hoc networks. This paper describes a hybrid routing scheme based on both Ant Colony Optimization (ACO) and the Zone-based Hierarchical Link State (ZHLS) protocol that aims to exploit the advantages of both reactive and proactive algorithms. Ant Colony Optimization (ACO) is a family of optimization algorithms based on real ants' behavior in finding a route to a food source. It has been observed that, among the available routes, ants find the shortest route between nest and food. To achieve this, ants communicate through the deposition of a chemical substance called pheromone along the route.
The shortest path acquires the highest pheromone concentration, leading more and more ants to use this route [4]. Several successful ant-based algorithms for networks exist; we introduce them in the next section.
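The reinforcement mechanism described above can be illustrated with a minimal simulation. This is an illustrative sketch, not the algorithm proposed in this paper: two routes of different lengths compete, each ant chooses a route with probability proportional to its pheromone, deposits an amount inversely proportional to the route length, and pheromone evaporates each round; the shorter route ends up with the higher concentration.

```python
import random

def simulate(short_len=2, long_len=5, ants=200, evaporation=0.1, seed=1):
    """Toy two-route ant colony: returns final pheromone on each route."""
    random.seed(seed)
    tau = {"short": 1.0, "long": 1.0}          # initial pheromone on each route
    lengths = {"short": short_len, "long": long_len}
    for _ in range(ants):
        total = tau["short"] + tau["long"]
        # route choice is probabilistic, biased by pheromone concentration
        route = "short" if random.random() < tau["short"] / total else "long"
        # evaporation on all routes, then deposition inversely proportional to length
        for r in tau:
            tau[r] *= (1.0 - evaporation)
        tau[route] += 1.0 / lengths[route]
    return tau

tau = simulate()
# the shorter route accumulates the higher pheromone concentration
assert tau["short"] > tau["long"]
```

The positive feedback (more pheromone attracts more ants, which deposit more pheromone) combined with evaporation is what lets the colony converge on the shortest route.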
2 Related Work Routing in MANETs has traditionally used knowledge of the connectivity of the network, with emphasis on the state of the links. To overcome the problems associated with link-state and distance-vector algorithms, numerous routing protocols have been proposed. The routing protocols proposed for MANETs are generally categorized into three groups: table-driven (also called proactive), on-demand (also called reactive), and hybrid protocols, which are both proactive and reactive in nature [3]. 2.1 Routing in Mobile Ad Hoc Networks In proactive routing protocols, each node continuously maintains up-to-date routes to every other node in the network. Routing information is periodically transmitted throughout the network in order to maintain the routing tables. Thus, if a route already exists before traffic arrives, transmission occurs without delay. Otherwise, traffic packets must wait in a queue until the node receives routing information corresponding to their destination. However, for a highly dynamic network topology, the proactive
114
M.K. Rafsanjani, S. Asadinia, and F. Pakzad
schemes require a significant amount of resources to keep routing information up-to-date and reliable. Proactive protocols suffer the disadvantage of the additional control traffic needed to continually update stale route entries. Since the network topology is dynamic, when a link goes down, all paths that use that link are broken and have to be repaired. This approach is appropriate for a network with low mobility. Examples of proactive routing protocols are Destination-Sequenced Distance Vector (DSDV) [5] and the Wireless Routing Protocol (WRP) [6]. The main differences among them are the number of tables used, the information that is kept, and the packet-forwarding policy used to keep the tables updated. Reactive routing protocols, in contrast to the proactive approach, have a node initiate a route discovery through the network only when it wants to send packets to a destination. The discovery process is completed once a route is determined; once a route has been established, it is maintained by a route maintenance process until either the destination becomes inaccessible along every path from the source or the route is no longer desired. In reactive schemes, nodes maintain routes only to active destinations, and a route search is needed for every unknown destination. Therefore, in theory, communication overhead is reduced at the expense of delay due to route searching. Furthermore, the rapidly changing topology may break an active route and cause subsequent route searches. Reactive strategies are suitable for networks with high mobility and a relatively small number of flows. Some reactive protocols are Ad hoc On-Demand Distance Vector (AODV) [7], Dynamic Source Routing (DSR) [8], the Temporally Ordered Routing Algorithm (TORA) [10] and Associativity-Based Routing (ABR) [9].
In hybrid protocols, each node maintains both the topology information within its zone and information regarding neighboring zones: proactive behavior within a zone and reactive behavior among zones. Thus, a route to each destination within a zone is established without delay, while a route discovery and route maintenance procedure is required for destinations in other zones. The Zone Routing Protocol (ZRP) [11], the Zone-based Hierarchical Link State (ZHLS) routing protocol [12] and the Distributed Dynamic Routing algorithm (DDR) [13] are three hybrid routing protocols. Hybrid protocols can provide a better trade-off between communication overhead and delay, but this trade-off is subject to the size of a zone and the dynamics of a zone. The hybrid approach is an appropriate candidate for routing in a large network. Joa-Ng et al. [12] proposed a hybrid routing protocol called the Zone-based Hierarchical Link State (ZHLS) routing protocol in an effort to combine the features of proactive and reactive protocols. In ZHLS, the network is divided into non-overlapping zones. Unlike other hierarchical protocols, there is no zone head. ZHLS defines two levels of topology: node level and zone level. The node-level topology tells how the nodes of a zone are physically connected to each other. A virtual link between two zones exists if at least one node of one zone is physically connected to some node of the other zone; the zone-level topology describes how zones are connected together. There are likewise two types of Link State Packets (LSP): node LSPs and zone LSPs. A node LSP contains the node's neighbor information and is propagated within the zone, whereas a
zone LSP contains the zone information and is propagated globally. So each node has full connectivity knowledge about the nodes in its zone and only zone-connectivity information about the other zones in the network. Given the zone id and the node id of a destination, the packet is routed based on the zone id until it reaches the correct zone; within that zone, it is routed based on the node id. Knowing the zone id and node id of the destination is sufficient for routing, so the protocol adapts to changing topologies. In ZHLS, zone LSPs are flooded throughout the network so that all nodes know both the zone-level and node-level topologies of the network. This simplifies routing but introduces communication overhead [12]. 2.2 Ant-Based Routing Algorithms for MANETs There exist several successful ant-based algorithms for network control, the most prominent being AntNet [14] and Ant-Based Control (ABC) [15], which have a number of properties desirable in MANETs. AntNet and ABC use two kinds of ants, forward and backward ants, to find the shortest route from the source to the destination. AntNet [14] is a proactive ACO routing algorithm for packet-switched networks. In this algorithm, a forward ant is launched from the source node at regular intervals. At each intermediate node, a forward ant selects the next hop using the information stored in the routing table of that node. The next node is selected with a probability proportional to the goodness of that node, measured by the amount of pheromone deposited on the link to that node. When a forward ant reaches the destination, it generates a backward ant which takes the same path as the corresponding forward ant but in the opposite direction. The backward ant updates pheromone values as it moves on its way to the source node. ARA (Ant colony based Routing Algorithm), proposed by Gunes et al. [16], is a reactive ACO routing algorithm for mobile ad hoc networks. ARA has two phases: route discovery and route maintenance.
In the route discovery phase, the sender broadcasts a forward ant. The ant is relayed by each intermediate node until it reaches the destination. When a forward ant is received at the destination, it is destroyed and a backward ant is sent back to the sender. The backward ant increases the pheromone value corresponding to the destination at each intermediate node until it reaches the sender. When the sender receives a backward ant, the route maintenance phase starts with the sending of data packets. Since the pheromone track has already been established by the forward and backward ants, subsequent data packets perform the route maintenance by adjusting the pheromone values. ARAMA (Ant Routing Algorithm for Mobile Ad hoc networks), proposed by Hossein and Saadawi [17], is a proactive routing algorithm. The main task of the forward ant in other ACO algorithms for MANETs is collecting path information. In ARAMA, however, the forward ant takes into account not only the hop count, as most protocols do, but also the links' local heuristics along the route, such as the nodes' battery power and queue delay. ARAMA defines a value called the grade. This value is calculated by each backward ant as a function of the path information stored in the forward ant. At each node, the backward ant updates the pheromone amount in the node's routing table using the grade value; the protocol uses the same grade to update the pheromone value of all links. The authors claim that the route discovery and maintenance overheads are reduced by controlling the forward ants' generation rate; however, they do not clarify how to control the generation rate in a dynamic environment.
AntHocNet is a hybrid ant-based routing protocol proposed by Di Caro et al. [18] in an effort to combine the advantages of AntNet and ARA. AntHocNet reactively finds a route to the destination on demand, and proactively maintains and improves existing routes or explores better paths. In AntHocNet, an ant maintains a list of the nodes it has visited in order to detect cycles. The source node sends out forward ants, and when it receives all the backward ants, one generation is completed. Each node i keeps, for each forward ant, the identity of the ant, its path computation, the number of hops from the source to node i, and the time the ant visited node i. Note that more than one ant may have reached node i, so the identity of the ant is important. When an ant arrives at a node, the node checks the ant's path computation and the time it reached node i. If the path computation and time are within a certain limit of those produced by another ant of the same generation, the ant is forwarded; otherwise, the ant is discarded. In case of a link failure at a node with no alternative paths available, the node sends a reactive forward ant to repair the route locally and determine an alternative path. If a backward ant is received for the reactive forward ant, the data packets are sent along the newly found path and all the node's neighbors are notified of the change in route. Otherwise, the node sends a notification of the lost destination paths to all its neighbors, which in turn initiate forward ants. In the next section, we present the main ideas of our algorithm.
3 Our Proposed Routing Scheme Our algorithm uses the ZHLS protocol, which consists of proactive routing within a zone and reactive routing between zones. The network is divided into non-overlapping zones, which constitute each node's local neighborhood; each node belongs to exactly one zone. The zone size depends on node mobility, network density, transmission power and propagation characteristics. Each node knows its physical location through geo-location techniques such as the Global Positioning System (GPS). The nodes can be categorized as interior and gateway nodes.

Fig. 1. Example of our scheme structure (seven zones, Zone1–Zone7, containing nodes S, A, B, C, D, E, F, G, H, I, K, L, M, N and P)
In Fig. 1, for node S, nodes C, D and E are gateway nodes, and nodes A and B are interior nodes. All other nodes are exterior nodes (outside the zone). To determine gateway and interior nodes, a node needs to know its local neighbors. This is achieved by a detection process based on replies to hello messages transmitted by each node. Each node knows only the connectivity within its zone and the zone connectivity of the whole network. 3.1 Routing Tables The algorithm uses two routing tables, the Intrazone Routing Table (IntraRT) and the Interzone Routing Table (InterRT). The IntraRT is maintained proactively, so a node can determine a path to any node within its zone immediately. The InterRT stores routes to destinations outside the node's zone; the gateway nodes of the zone are used to find routes between zones. 3.2 Ants The ants defined in our scheme are the same as in the HOPNET algorithm [19] and are classified into five types: internal forward ant, external forward ant, backward ant, notification ant and error ant. The internal forward ant is responsible for continuously maintaining the proactive routing table within its zone. The external forward ant performs the reactive routing to nodes beyond its zone. When an external forward ant is received at the destination, it is converted to a backward ant and sent back along the discovered route. If a new route is reactively discovered, a notification ant is sent to the source node and to all nodes on the route so that they update their reactive routing tables. The error ant is used to warn of changes in the network topology and to trigger a new search if the source still needs a route. 3.3 Route Discovery We use the ACO algorithm to find the shortest route between two nodes (Vi, Vj) in the network. Each communication link carries two values: a pheromone value for the link and the time for which the link may remain connected. The pheromone values are updated by the ants as they move along the links.
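The two tables, together with per-neighbour pheromone values, can be sketched as a small data structure. This is an illustrative representation (class and method names are ours, not the paper's): the IntraRT is indexed by neighbour and in-zone destination, and the best neighbour for a destination is the one with the highest pheromone.

```python
from collections import defaultdict

class ZoneRouting:
    """Sketch of the two routing tables kept per node (illustrative only)."""
    def __init__(self):
        # IntraRT: pheromone[neighbour][in-zone destination] -> float
        self.intra = defaultdict(dict)
        # InterRT: destination node -> (gateway path, expiry time)
        self.inter = {}

    def set_pheromone(self, neighbour, dest, value):
        self.intra[neighbour][dest] = value

    def best_neighbour(self, dest):
        """Pick the neighbour with the highest pheromone towards dest."""
        candidates = {n: t[dest] for n, t in self.intra.items() if dest in t}
        return max(candidates, key=candidates.get) if candidates else None

rt = ZoneRouting()
rt.set_pheromone("B", "C", 0.4)
rt.set_pheromone("A", "C", 0.9)
assert rt.best_neighbour("C") == "A"    # A has the stronger pheromone trail
assert rt.best_neighbour("Z") is None   # unknown destination: interzone search needed
```

Returning `None` for an unknown destination corresponds to the case where the reactive interzone search must be started.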
The ants change the concentration of the pheromone value on their path to the destination and on their route back to the source. Route discovery consists of intrazone and interzone routing. The basic structure of the IntraRT is a matrix whose rows are the node's neighbors and whose columns are all identified nodes within its zone. In route discovery within a zone (intrazone routing), each node periodically sends internal forward ants to its neighbors to keep the intrazone routing table updated. When a source node wants to transmit a data packet to a node within its zone, it first searches the columns of its IntraRT to see whether the destination exists in its zone. If it finds the destination in its IntraRT, the route discovery phase is done. At the current node, the ant examines the pheromone amount for each neighbor that has a route to the destination; the neighbor with the largest pheromone amount is chosen as the next hop. After a node is selected as the next hop, the pheromone concentration on the selected link is increased, and along all other links the pheromone is decremented. Considering the path from the current node Vi back to the source node Vs, the pheromone value for entry (Vi, Vs) in Vj's
routing table is reinforced. The amount of pheromone for entry (Vi, Vs) is increased by the following equation [19]:

τ(Vi, Vs) = τ(Vi, Vs) + α / T(Vs, Vi)    (1)
The constant α has to be chosen appropriately to avoid overly fast or slow evaporation, and T(Vs, Vi) represents the total time required to traverse from Vs to Vi. The pheromone concentration on all other entries not equal to Vi in the same column Vs of Vj's routing table is decremented using the evaporation equation below:

τ(Vk, Vs) = (1 − β) τ(Vk, Vs),  Vk ≠ Vi    (2)
where β is the evaporation coefficient provided by the user [19]. On its path back to the source, an ant again updates the pheromone concentration. The pheromone concentration update for entry (Vb, Vd) is [19]:

τ(Vb, Vd) = τ(Vb, Vd) + α / T(Vb, Vd)    (3)
If the destination is not found in the IntraRT, route discovery between zones takes place. In route discovery between zones (interzone routing), when a node wants to send a data packet to a destination node, it checks the interzone routing table for an existing route. If the route exists and has not expired, the node transmits the data packet. Otherwise, the node starts a search process to find a new path to the destination. When a source node wants to transmit a data packet to a node outside its zone, it sends external forward ants to search for a path to the destination. The external forward ants are first sent by the node to its gateway nodes. Each gateway node checks whether the destination is within its zone. If the destination is not within its zone and no valid path is known, the ants jump between the border zones via the other gateway nodes until an ant locates the zone containing the destination. This ant propagation through the border zones is called bordercasting. At the destination, the forward ant is converted to a backward ant and sent back to the source; the data packet is then transmitted. The use of bordercasting together with the routing tables reduces delay, because the IntraRT proactively maintains all the routes within the zone and the InterRT stores the paths to destinations that ants have recently visited. These tables contribute to fast end-to-end packet transmission since the paths are readily accessible. An example of route discovery between zones is given below using Fig. 1. Assume source node I wants a route to destination L. Since L does not belong to I's zone, node I sends external forward ants to the gateway nodes of its neighbor zones, namely F and H. Nodes H and F look through their IntraRT tables to check whether L is within their zones. In this example, L is not in the tables, so H sends the ant to its gateway node G. Node G sends external forward ants to the gateway nodes of its neighbor zones, D and K. D cannot find L in its zone.
Therefore, node D sends the ant on to its gateway nodes. Node K finds the destination node L within its zone. K then sends the forward ant, with the attached addresses, to node L via the path indicated in its IntraRT. The backward ant traverses the path in the reverse direction, from destination L back to source I.
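The bordercast search in this example can be viewed as a breadth-first expansion over the zone-level topology until a zone containing the destination is found. The sketch below uses an invented zone graph loosely following Fig. 1; the real protocol forwards ants through gateway nodes rather than searching centrally.

```python
from collections import deque

def bordercast(start_zone, dest_node, zone_neighbours, zone_members):
    """BFS over zones: return the sequence of zones an external forward
    ant traverses before locating the destination's zone."""
    seen, queue = {start_zone}, deque([[start_zone]])
    while queue:
        path = queue.popleft()
        if dest_node in zone_members[path[-1]]:
            return path                      # destination's zone located
        for z in zone_neighbours[path[-1]]:
            if z not in seen:
                seen.add(z)
                queue.append(path + [z])
    return None                              # destination unreachable

# hypothetical 4-zone topology for illustration
zone_neighbours = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3]}
zone_members = {1: {"I", "F"}, 2: {"H", "G"}, 3: {"D"}, 4: {"K", "L"}}

assert bordercast(1, "L", zone_neighbours, zone_members) == [1, 2, 4]
```

Because the search runs over zones rather than individual nodes, the number of steps scales with the zone-level topology, which is the source of the hybrid scheme's scalability.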
3.4 Route Maintenance In a mobile ad hoc network, node mobility and communication interference lead to the invalidation of some routes. There are two reasons why an intermediate node may be unable to deliver packets: (i) the pheromone concentration along the neighboring links is zero, in which case the ants cannot select any link to travel and the data packet is dropped at that node; (ii) a damaged route. If the damaged route is within a zone, it will recover after a period because the IntraRT is proactively maintained. If the damaged route is between zones, the upstream node of the broken link conducts a local repair process and searches for an alternative path to the destination while buffering all the packets it receives. If the node finds a new path to the destination, it sends all the buffered packets to the destination, and a notification ant is sent to the source to let the source node know of the route change. If a new path cannot be found to replace the failed path, an error ant is sent to the source node. In this way the packet delivery ratio is increased [19].
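The interzone repair logic can be summarised in a small sketch. The interface here is hypothetical (the repair and notification functions are placeholders for the ant mechanisms described above); it only captures the buffer-then-flush-or-error decision.

```python
def handle_link_failure(buffered, find_alternative, notify_source):
    """On a broken inter-zone link: buffer packets, attempt a local repair,
    then either flush the buffer on the new path or send an error ant."""
    new_path = find_alternative()
    if new_path is not None:
        sent = list(buffered)
        buffered.clear()
        notify_source("notification")   # notification ant: route changed
        return ("repaired", new_path, sent)
    notify_source("error")              # error ant: no replacement path found
    return ("failed", None, [])

events = []
state = handle_link_failure(["p1", "p2"], lambda: ["G", "K", "L"], events.append)
assert state == ("repaired", ["G", "K", "L"], ["p1", "p2"])
assert events == ["notification"]
```

Buffering at the upstream node is what prevents packet loss during the repair window and hence raises the delivery ratio.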
4 Simulation Results Our algorithm is implemented in the GloMoSim simulator. The simulation environment includes 200 mobile nodes using IEEE 802.11 in a 1000 m × 1000 m area, moving according to the random waypoint (RWP) mobility model. Each node moves with a maximum speed of 10 m/s, and the total simulation time is 300 s. The data rate is 2 packets per second (1024 bytes each). Fig. 2 shows the end-to-end delay of our proposed algorithm in comparison with the AODV protocol and the HOPNET algorithm. Our proposed algorithm produces better end-to-end delay results than AODV. This is attributed to the zone framework and the local intrazone and interzone routing tables: the intrazone table proactively maintains all the routes within its zone, and the interzone table stores the paths to destinations that the ants recently visited.
Fig. 2. End to end delay
These tables contribute to fast end-to-end packet transmission since the paths are readily accessible. By adjusting the evaporation rate of pheromone on the links, the ants can traverse the links or ignore them by decrementing the pheromone concentration. The evaporation rate helps in discarding links that are broken. These reasons allow our proposed algorithm to produce better end-to-end delay results. Fig. 3 shows the packet delivery ratio for our proposed algorithm, HOPNET and AODV. Since the network is dense, the ants can find multiple paths, because the ants can choose from multiple paths rather than a single path as in AODV.
Fig. 3. Packet delivery ratio
Fig. 4 shows the control overhead of our proposed algorithm, HOPNET and AODV. AODV is a pure reactive protocol, whereas the proposed algorithm is proactive within a zone: control packets are periodically sent out within a zone to maintain the routes in the zone. This is a major factor in the overhead of the proposed algorithm.
Fig. 4. Overhead per true received packets
5 Conclusion In this work, the Ant Colony Optimization algorithm and the Zone-based Hierarchical Link State protocol are combined for routing in MANETs. The result is a hybrid routing algorithm that has the potential to provide higher scalability than pure reactive or proactive protocols. Our algorithm consists of proactive routing within a zone and reactive routing between zones. The scheme presented in this paper represents only our initial effort towards the development of a routing algorithm for MANETs. Nevertheless, we have shown that our routing scheme is advantageous over most previous schemes in terms of end-to-end delay and packet delivery ratio, and that as the network size increases, its overhead decreases and becomes better than that of AODV.
References
1. Haas, Z.J., Gerla, M., Johnson, D.B., Perkins, C.E., Pursley, M.B., Steenstrup, M., Toh, C.K.: Mobile Ad-Hoc Networks. IEEE J. on Selected Areas in Communications, Special Issue on Wireless Networks 17(8), 1329–1332 (1999)
2. Mauve, M., Widmer, J., Hartenstein, H.: A Survey on Position-based Routing in Mobile Ad-hoc Networks. IEEE Network J. 16, 30–39 (2001)
3. Abolhasan, M., Wysocki, T., Dutkiewicz, E.: A Review of Routing Protocols for Mobile Ad hoc Networks. Ad Hoc Networks J., Elsevier Computer Science 2, 1–22 (2004)
4. Dorigo, M., Di Caro, G., Gambardella, L.: Ant Colony Optimization: A New Meta-heuristic. In: IEEE Congress on Evolutionary Computation, Washington, DC, vol. 2, pp. 1470–1477 (1999)
5. Perkins, C.E., Bhagwat, P.: Highly Dynamic Destination-Sequenced Distance Vector Routing (DSDV) for Mobile Computers. In: ACM Conference on Communications Architectures, SIGCOMM 1994, London, UK (1994)
6. Murthy, S., Garcia-Luna-Aceves, J.J.: A Routing Protocol for Packet Radio Networks. In: The 1st ACM/IEEE Annual International Conference on Mobile Computing and Networking, Berkeley, CA, pp. 86–95 (1995)
7. Das, S., Perkins, C., Royer, E.: Ad hoc On-Demand Distance Vector (AODV) Routing. Internet Draft, draft-ietf-manet-aodv-11.txt, work in progress (2002)
8. Johnson, D.B., Maltz, D.A.: The Dynamic Source Routing Protocol for Mobile Ad hoc Networks. Internet Draft, draft-ietf-manet-dsr-07.txt, work in progress (2002)
9. Toh, C.K.: Associativity-based Routing for Ad-hoc Mobile Networks. Wireless Personal Communications 4(2), 103–139 (1997)
10. Park, V.D., Corson, M.S.: A Highly Adaptive Distributed Routing Algorithm for Mobile Wireless Networks. In: The IEEE Conference on Computer Communications, Kobe, Japan, pp. 7–11 (1997)
11. Haas, Z.J., Pearlman, M.R.: Zone Routing Protocol for Ad-hoc Networks. Internet Draft, draft-ietf-manet-zrp-02.txt, work in progress (1999)
12.
Joa-Ng, M., Lu, I.T.: A Peer-to-Peer Zone-based Two-level Link State Routing for Mobile Ad Hoc Networks. IEEE J. on Selected Areas in Communications, Special Issue on Ad-Hoc Networks, 1415–1425 (1999)
13. Nikaein, N., Laboid, H., Bonnet, C.: Distributed Dynamic Routing Algorithm (DDR) for Mobile Ad hoc Networks. In: 1st Annual Workshop on Mobile Ad Hoc Networking and Computing, MobiHOC 2000 (2000)
14. Di Caro, G., Dorigo, M.: AntNet: Distributed Stigmergetic Control for Communications Networks. J. of Artificial Intelligence Research 9, 317–365 (1998)
15. Schoonderwoerd, R., Holland, O., Bruten, J., Rothkrantz, L.: Ant-based Load Balancing in Telecommunication Networks. Adaptive Behavior 5, 169–207 (1996)
16. Gunes, M., Sorges, U., Bouazzi, I.: ARA – The Ant Colony Based Routing Algorithm for MANETs. In: International Conference on Parallel Processing Workshops (ICPPW 2002), Vancouver, BC, pp. 79–85 (2002)
17. Hossein, O., Saadawi, T.: Ant Routing Algorithm for Mobile Ad hoc Networks (ARAMA). In: 22nd IEEE International Performance, Computing, and Communications Conference, Phoenix, Arizona, USA, pp. 281–290 (2003)
18. Di Caro, G., Ducatelle, F., Gambardella, L.M.: AntHocNet: An Adaptive Nature-Inspired Algorithm for Routing in Mobile Ad hoc Networks. European Transactions on Telecommunications (Special Issue on Self-Organization in Mobile Networking) 16(2) (2005)
19. Wang, J., Osagie, E., Thulasiraman, P., Thulasiram, R.K.: HOPNET: A Hybrid Ant Colony Optimization Routing Algorithm for Mobile Ad hoc Network. Ad Hoc Networks J. 7(4), 690–705 (2009)
Decision-Making Model Based on Capability Factors for Embedded Systems
Hamid Reza Naji 1, Hossein Farahmand 2, and Masoud RashidiNejad 2
1 Computer Department, Islamic Azad University, Kerman Branch, Kerman, Iran
2 Electrical Engineering Department, Shahid Bahonar University, Kerman, Iran
Abstract. In this paper, a decision-making modelling concept is presented, based on the identification of capability factors and on finding mathematical models to describe or prescribe the best choice for the evaluation of embedded systems (ES). The technique combines subjective, qualitative assumptions with mathematical modelling. A digital cell phone, as a sample ES, is analyzed as a case study to show the application of the proposed approach. The results show the high performance of this methodology for the capability evaluation of such systems. Keywords: Capability Factors, Multiple Criteria Decision-Making, Embedded Systems.
1 Introduction Many systems utilize a combination of subjective and qualitative assumptions in order to support decision modelling [1,2]. Our method proposes a hybrid heuristic technique (HHT) for capability evaluation, conducted by comparing and evaluating success factors together with their associated risks. In this regard, decision-making concepts are based on the identification of capability factors while introducing mathematical models. The technique aims to offer a generic tool for systems analysts to assess and compare industrial capability with respect to specified system features. It is therefore imperative to describe what is meant by a system and its capability. A system may be described as "a complex and highly interlinked network of parts exhibiting synergistic properties". The capability indices proposed in this paper are the product of utilizing fuzzy relations and analytic hierarchy process (AHP) techniques. A fuzzy relation is adopted to create a common quantitative measure relating the various factors to the ESs' relational concept of being "capable" [3,4]. The AHP technique is adopted to define pair-wise comparisons of the different factors; it is implemented to assign a weight to each factor based on its relative level of importance in comparison with the others. Capability factors may vary due to the nature of each system; the methodology discussed in this paper is sufficiently flexible to accommodate systems' diversity. A case study is introduced to illustrate the effectiveness of the proposed approach. T.-h. Kim et al. (Eds.): FGCN 2010, Part II, CCIS 120, pp. 123–129, 2010. © Springer-Verlag Berlin Heidelberg 2010
Many multi-objective approaches have been proposed in the literature [5,6,7,8]. Although they operate at different levels of abstraction and deal with different optimization objectives, the overall goal is always the same. In Section 2, the capability evaluation of embedded systems is discussed. Section 3 explains the mathematical modelling of the fuzzy multi-objective decision problem. Section 4 illustrates digital cell phones as a case study. Section 5 provides conclusions.
2 Capability Evaluation (CE) The proposed technique for the capability evaluation of ESs utilises a systems engineering approach to offer a tool that assists decision makers in quantifying qualitative judgements [9]. It does not completely replace knowledge-based judgements, but it offers a platform for a more robust and sound system analysis while using expert-system criteria. It may be argued that the best way to measure capability is to study the degree of success in delivering the final result; this is true, and in addition to the constituent elements of a system, the capability evaluation technique (CET) considers the ability to deliver the final outcome as a feature. Measurement factors may vary due to the nature of the system, but a generic algorithm is introduced that is flexible enough to accommodate the diversity of systems. A study is presented to illustrate the potential of the proposed approach, while the novelty of the proposed evaluation technique lies in its methodical and simple criteria for systems analysis from different perspectives. 2.1 Problem Definition For modelling simplification, this paper deals with the minimum requirements to determine the system capability (SC) among different agents, which can be defined by Equation 1: SCi = f(xij), where SCi is the system capability of the ith ES and
xij is the jth element of the ith ES. (1)
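One simple instantiation of Equation 1 is a weighted aggregation of fuzzified factor scores. This is our illustration, not the paper's model: the factor scores, weights and their values are invented.

```python
def system_capability(memberships, weights):
    """Weighted aggregation of fuzzified factor scores x_ij into SC_i
    (one possible form of SC_i = f(x_ij); weights must sum to 1)."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * m for w, m in zip(weights, memberships))

# hypothetical ES with three capability factors scored in [0, 1]
sc = system_capability([0.8, 0.5, 0.9], [0.6, 0.3, 0.1])
assert abs(sc - 0.72) < 1e-9
```

In the method described here, the weights would come from the AHP step and the scores from the fuzzy membership functions introduced below.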
In order to quantify qualitative elements, crisp and fuzzy variables can be assigned. To establish a fuzzy decision-making process, it is necessary to fuzzify the quantifiable elements. This can be achieved by defining suitable membership functions, which should reflect the properties and behaviour of the respective variables. In the following section, a background to fuzzy sets is given. 2.1.1 Methodology The proposed technique for the evaluation and comparison of systems' capability includes Fuzzy Sets Theory (FST) and Analytic Hierarchy Process (AHP) procedures. CET, designed as a transparent strategic management decision support system, adopts:
• FST, to conform to the mainly qualitative nature of the decision factors.
• AHP, for its intuitive problem-solving structure and its strength in handling Multiple Criteria Decision Making (MCDM) procedures.
Decision-Making Model Based on Capability Factors for Embedded Systems
125
2.1.2 Fuzzy Sets

According to fuzzy set theory, each object x in a fuzzy set X is assigned a membership value by a membership function, denoted μ(x), which generalises the characteristic function of a crisp set: its values range between zero and one.

2.1.3 Membership Function

Membership functions can be described mathematically as linear or non-linear functions. A well-behaved membership function must be assigned to each fuzzified element. In most cases, linear membership functions are sufficient to describe the behaviour of the related element values. Where a linear membership function cannot capture the functional behaviour of an element, a non-linear membership function is required.

2.1.4 Fuzzy Multi-Objective Decision

A Fuzzy Multi-Objective Decision (FMOD) can be mathematically simulated and analysed using fuzzy rules. FMOD can be defined as a combination of fuzzy sets, levels of importance of decision variables, and unequal importance levels of objectives and constraints. The proposed method uses FMOD techniques to optimise an objective function with respect to constraints.
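The linear membership function of Sect. 2.1.3 can be sketched as follows. This is an illustrative fragment, not code from the paper; the function name and bounds are invented:

```python
# Sketch of a linear (ramp) membership function of the kind described in
# Sect. 2.1.3: values at or below f_min map to 0, values at or above
# f_max map to 1, and values in between are interpolated linearly.

def ramp_membership(f, f_min, f_max):
    """Return the membership value mu(f) in [0, 1]."""
    if f <= f_min:
        return 0.0
    if f >= f_max:
        return 1.0
    return (f - f_min) / (f_max - f_min)
```

Where linear behaviour is insufficient, the interpolation line would be replaced by a non-linear (e.g. sigmoidal) expression.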
3 Mathematical Model of the Fuzzy Multi-Objective Decision Problem

A fuzzy decision problem D(x) can be defined as a set of N_o objectives and N_c constraints, with the intent of selecting the best alternative from a set X of possible alternatives. The level to which an alternative x satisfies a given criterion is described by μ_i(x) ∈ [0, 1], represented by a membership function in which a higher membership value implies greater satisfaction, as shown in Fig. 1.

Fig. 1. A typical membership function: μ_fi(x_i) rises from 0 at f_i^min to 1 at f_i^max as a function of f_i(x_i)
In order to determine the level to which x satisfies all criteria, denoted by D(x), the following statements can be made:

1. The fuzzy objective O is a fuzzy set on X characterised by its membership function:

   μ_O(x) : X → [0, 1].    (2)
126
H.R. Naji, H. Farahmand, and M. RashidiNejad
2. The fuzzy constraint C is a fuzzy set on X characterised by its membership function:

   μ_C(x) : X → [0, 1].    (3)

3. The fuzzy decision D must be satisfied by a combination of the fuzzy objectives and the fuzzy constraints.
The following sections discuss how equal or unequal levels of importance of goals and constraints can be applied to the proposed FMOD [9].

3.1 Goals and Constraints with Equal Importance

If the goals and constraints are of equal importance, an alternative x is desired for which mathematical relationship (4), or equivalently (5), is satisfied:

   O_1(x) & O_2(x) & O_3(x) & ... & O_No(x)  and  C_1(x) & C_2(x) & C_3(x) & ... & C_Nc(x),    (4)

   D(x) = O_1(x) ∩ O_2(x) ∩ ... ∩ O_No(x) ∩ C_1(x) ∩ C_2(x) ∩ ... ∩ C_Nc(x).    (5)
where N_O(x) is the number of objectives, N_C(x) is the number of constraints, O_i(x) is the fuzzy value of the ith objective for alternative x, and C_i(x) is the fuzzy value associated with satisfaction of the ith constraint by alternative x. The fuzzy decision in this case is characterised by its membership function:

   μ_D(x) = min{μ_O(x), μ_C(x)}.    (6)
The best alternative x_opt can be determined by:

   D(x_opt) = max_{x∈X} (D(x)),    (7)
where x_opt satisfies:

   max_{x∈X} μ_D(x) = max_{x∈X} (min{μ_O(x), μ_C(x)})
                    = max_{x∈X} (min{μ_O1(x), ..., μ_ONo(x), μ_C1(x), ..., μ_CNc(x)}).    (8)
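The equal-importance decision rule of Eqs. (6)–(8) reduces to a simple min-max computation. The following sketch illustrates it; the alternatives and membership values are invented for illustration:

```python
# Min-max selection under equal importance, Eqs. (6)-(8): score each
# alternative by the minimum of its pooled objective/constraint
# memberships (Eq. 6), then pick the alternative with the largest
# score (Eqs. 7-8).

def best_alternative(memberships):
    """memberships: dict mapping alternative -> list of mu values."""
    scores = {x: min(mus) for x, mus in memberships.items()}
    x_opt = max(scores, key=scores.get)
    return x_opt, scores[x_opt]

x_opt, mu_d = best_alternative({
    "A": [0.70, 0.90, 0.60],   # min = 0.60
    "B": [0.80, 0.50, 0.90],   # min = 0.50
    "C": [0.65, 0.80, 0.70],   # min = 0.65 -> selected
})
```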
3.2 Goals and Constraints with Unequal Importance
Where objectives and constraints are of unequal importance, it should be ensured that criteria with higher levels of importance, and consequently higher memberships, are more likely to influence the selection. The positive impact of the importance levels w_i on the fuzzy set memberships is applied through the proposed criterion by associating higher values of w_i with the more important objectives and constraints: the more important a criterion, the higher the value associated with it.
The FMOD set D(x) can be represented as Equation (9), where O_w(x) and C_w(x) are the weighted objective and constraint sets, N is the total number of objectives and constraints, and K is the number of alternatives:

   D(x) = O_w(x) ∩ C_w(x),    (9)

where w = [w_1, w_2, ..., w_i, ..., w_N] and X = [x_1, x_2, ..., x_K],

   D(x) = min{O_w(x), C_w(x)}
        = min{O_1^{w_1}(x), O_2^{w_2}(x), ..., O_i^{w_i}(x), C_{i+1}^{w_{i+1}}(x), ..., C_N^{w_N}(x)}.    (10)
where x_opt should satisfy:

   max_{x∈X} μ_D^w(x) = max_{x∈X} (min{μ_O^w(x), μ_C^w(x)}).    (11)
This can be expressed as:

   x_opt = arg max_{x∈X} μ_D^w(x),    (12)

   x_opt = arg max_{x∈X} (min{μ_{O_i}^{w_i}(x), μ_{C_j}^{w_{N_o+j}}(x)}),  i = 1, ..., N_o,  j = 1, ..., N_c,  N_o + N_c = N.    (13)
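Under unequal importance, Eqs. (10)–(13) raise each membership to its importance weight before the min-max step. A sketch follows; the weights and membership values are invented:

```python
# Exponentially weighted min-max selection, Eqs. (10)-(13): each
# membership mu is raised to its importance weight w before taking the
# minimum, so for mu < 1 a larger exponent penalises that criterion
# more strongly.

def best_weighted_alternative(memberships, weights):
    """memberships: dict alternative -> mu list (objectives then
    constraints); weights: matching list of exponents w_i."""
    scores = {
        x: min(mu ** w for mu, w in zip(mus, weights))
        for x, mus in memberships.items()
    }
    return max(scores, key=scores.get)

choice = best_weighted_alternative(
    {"A": [0.9, 0.6, 0.7], "B": [0.6, 0.9, 0.9]},
    [2.0, 1.0, 1.0],  # first criterion given double importance
)
```

Here the weighted minima are min(0.81, 0.6, 0.7) = 0.6 for A and min(0.36, 0.9, 0.9) = 0.36 for B, so A is selected.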
3.3 Calculation of Exponential Weighting Values Using AHP

The Analytic Hierarchy Process (AHP) is a method used to support complex decision-making processes by converting qualitative values into numerical values. AHP's central concept of priority can be defined as the strength of one alternative relative to another. The method assists a decision-maker in building a positive reciprocal matrix of pair-wise comparisons of the alternatives for each criterion. A vector of priorities can be computed from the eigenvector of each matrix, and the collection of these priority vectors forms a matrix of alternative evaluations. The final vector of priorities is calculated by multiplying the criteria weight vector by the matrix of alternative evaluations; the best alternative is the one with the highest priority value.

The CET algorithm evaluates the relative importance of the decision variables using a pair-wise comparison matrix: the relative importance of each objective or constraint is obtained by comparing the elements two at a time. This method can be used to obtain the exponential weighting values that properly reflect the relative importance of the objective criteria and constraints of a decision problem. For decision-making under variable importance, a paired comparison matrix P with the following properties is constructed:

• P is a square matrix of order equal to the sum of the number of objectives and constraints.
• The diagonal elements are 1.
• The reciprocal property holds:

   P_ij = 1 / P_ji.    (14)

• The off-diagonal elements are specified by consulting the table of the importance scale. For example, if object i is less important than object j then P_ji = 3, while if it is absolutely more important then P_ji = 9, and so on.

To compare a set of N objects in pairs according to their relative weights, the pair-wise comparison matrix can be expressed as:

   P = [p_ij] = [w_i / w_j],  i = 1, 2, ..., N,  j = 1, 2, ..., N,    (15)
where w_i / w_j, the ijth entry of P, indicates how element i compares with element j.
In order to find the vector of weights W = [w_1 w_2 ... w_N]^T, we multiply the matrix P by the vector W to get:

   PW = [w_i / w_j][w_j] = [Σ_{j=1}^{N} w_i] = N[w_i],  ∴ PW = NW and (P − NI)W = 0.    (16)
In the above calculation, if P is consistent then all eigenvalues are zero except one non-zero eigenvalue, referred to as λ_max, which is equal to N (the number of objects). The estimated weights can then be found by normalising the eigenvector corresponding to this largest eigenvalue. In the case where objectives and constraints have unequal importance, it should be ensured that the more important criteria have the greater impact on the decision.
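The AHP weight extraction of Sect. 3.3 can be sketched with a simple power iteration for the principal eigenvector. The 3×3 importance-scale matrix below is an invented example, not one from the paper:

```python
# Estimate the AHP priority vector: power iteration converges to the
# eigenvector of the principal eigenvalue lambda_max (cf. Eq. 16); the
# vector is normalised so its entries sum to 1. P satisfies the
# reciprocal property P_ij = 1/P_ji of Eq. (14).

P = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]

def ahp_weights(P, iters=100):
    n = len(P)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(P[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [wi / s for wi in w]       # normalise the priority vector
    # lambda_max estimate; it equals n exactly only when P is consistent
    Pw = [sum(P[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Pw[i] / w[i] for i in range(n)) / n
    return w, lam

w, lambda_max = ahp_weights(P)
```

Because this sample matrix is slightly inconsistent (P_12 · P_23 ≠ P_13), the estimated λ_max comes out a little above N = 3, which is the usual basis for AHP consistency checks.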
4 Conclusion

The evaluation and comparison of system capabilities is a desirable measurement tool for systems engineering and analysis. The objective achieved here was to introduce a quantitative approach to an inherently qualitative matter. This paper addresses the application of multi-objective optimisation via a heuristic technique: the CET algorithm adopts a fuzzy optimisation technique to evaluate and compare the capabilities of embedded systems (ESs). The paper exploits the advantages of fuzzy optimisation and AHP to address multi-objective optimisation under equal and unequal levels of importance, with relative priorities assigned to the objectives and constraints using AHP.

Acknowledgments. This paper is based on research funded by the Islamic Azad University, Kerman, Iran.
References

[1] Ossadnik, W., Lange, O.: Theory and Methodology: AHP-based evaluation of AHP-Software. European Journal of Operational Research 118, 578–588 (1999)
[2] Reddy, A., Naidu, M.: An Integrated Approach of Analytical Hierarchy Process Model and Goal Model (AHP-GP Model). IJCSNS International Journal of Computer Science and Network Security 7(1), 108–117 (2007)
[3] Zadeh, L.A.: Fuzzy Sets. Information and Control 8, 338–353 (1965)
[4] Lee, H., Chu, C., Chen, K., Chou, M.: A Fuzzy Multiple Criteria Decision Making Model for Airline Competitiveness Evaluation. Proceedings of the Eastern Asia Society for Transportation Studies 5, 507–519 (2005)
[5] Nuovo, A., Palesi, M., Patti, D.: Fuzzy Decision Making in Embedded System Design. In: Proceedings of the 4th International Conference on Hardware/Software Codesign and System Synthesis, Seoul, Korea, October 22–25, pp. 223–228 (2006)
[6] Ascia, G., Catania, V., Palesi, M.: A Multi-objective Genetic Approach for System-Level Exploration in Parameterized Systems-on-a-Chip. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems 24(4), 635–645 (2005)
[7] Eisenring, M., Thiele, L., Zitzler, E.: Conflicting Criteria in Embedded System Design. IEEE Design & Test 17(2), 51–59 (2000)
[8] O'Hagan, M.: A Fuzzy Decision Maker. Technical Report in Fuzzy Logic (2000), http://wwwfuzzysys.com/fdmtheor.pdf
[9] Mousavi, A., Bahmanyar, M., Sarhadi, M., Rashidinejad, M.: A Technique for Advanced Manufacturing Systems Capability Evaluation and Comparison. Int. Journal of Advanced Manufacturing Tech. 31(9–10), 1044–1048 (2007)
Socio-Psycho-Linguistic Determined Expert-Search System (SPLDESS) Development with Multimedia Illustration Elements Vasily Ponomarev NPP “RUMB”, Research and Development Department, Rabochaya 29, 142400 Moscow region Noginsk, Russia {Vasily.Ponomarev moshimik}@gmail.com
Abstract. The development of SPLDESS, which augments traditional hypertext search results from an Internet search engine with multimedia illustrations, supports research into the innovative effect of information propagation during the experimental-stage formation of public-access information-recruiting networks of information kiosks, mirrored at a constantly updated portal for Internet users. The author places the emphasis on achieving pertinent search results for the user's total query, providing a politically correct, non-usurping social-network data-mining effect during urgent monitoring. Also presented is support for access from new types of communication devices using the newest technologies of data transmission, multimedia and information exchange from a first-innovation-line usage-support portal (including the mechanism of socio-psycho-linguistic determination according to the author's conception).

Keywords: Data mining; expert systems; knowledge engineering; multimedia; search engine; information propagation.
1 Introduction

According to the design decisions, SPLDESS is currently being developed by an international team within the framework of the author's conceptual model [1]. It should carry out the retrieval, storage and delivery of images and of relevant data formats according to the communicative intention set by the user, drawing on the Internet community's private, public and state information resources through a system of knowledge bases whose structure (described below) corresponds to the subject structure of the "KMTZ" portal, updated automatically every day at the mirrors http://www.kmtz.info and http://www.kmtz.biz. The data- and knowledge-processing procedures described below, following the illustrated specification, should present the content and multimedia images that in turn activate the evaluation of a given search order, by the criterion of pertinent search results and by the semantic [8], pragmatic and discourse structure of the communicative package, taking into account the SPL stratification of the due type described in a later part of this publication.

T.-h. Kim et al. (Eds.): FGCN 2010, Part II, CCIS 120, pp. 130–137, 2010. © Springer-Verlag Berlin Heidelberg 2010
2 Idea of the Multimedia Specifications in the Technical Project

2.1 The Primary Specification

1. Socializing types of polythematic information
2. Polythematic information about the municipal infrastructure
3. Information about business and commercial subjects

2.2 The Secondary Specification

1. The most frequent subject chains, corresponding to ratings of the prestigiousness and template quality of socio-psycho-linguistically determined illustrations
2. Generation of a polysemantic spectrum-of-influence scale, unifying the perception of the communicative intention of the set subjects by standard and multimedia means.

Each saturation spectrum of the answer's search image is put in correspondence with a multimedia illustration on the final screen of the search-query result, and interactively specified motivations within the restricted thematic category are set.

Innovation. It should be noted that the stock of hierarchically easy-to-recognise illustrations is to be obtained by animating the indisputable participants of historical events in the world hierarchy (on a scale such as "the Big Bang - the epoch of dinosaurs - the ice age - the ancient history of the Middle East - the history of Africa - the ancient history of Asia - the ancient history of Europe - the history of the pre-Columbian era in America - the history of the continents in the Middle Ages - the history of the continents in modern times - the newest history - the modern history of fashion and the hi-tech spheres of services or industry (with references to current price lists)"), for example in an image style like that of Fig. 1:
Fig. 1. Hierarchy of historical events animation image style. The figure shows a sample screen laid out as a table of job-vacancy descriptions (duties, requirements, conditions; information on the employer, contact person, employer site, preferences) across four columns of rating variants: "the accessible normal variant", "the accessible high-rating variant", "the favourable high-rating variant" and "the very valuable high-rating variant", followed by further subject categories such as résumés, educational centres, persons interested in demanded kinds of training, tourism, recreational centres, and welfare & legal consultations.
3 Scale of Ambitions

It should be noted that the author designed SPLDESS for use as a superstructure over the user interface within the multilingual project "Experimental Stage" of SPLDESS, an Information System of Public Access based at two university centres in European and Asian countries, with the assistance of several university centres in scientific-cooperation partner countries. The project provides for research into the innovative effect of forming, at the experimental stage, information-recruiting networks of public-access information kiosks, with mirrors at a constantly updated portal for Internet users (including strong support for access from mobile phones [4] and other portable devices [3] of constantly updated communication types, using new front-end technologies for data transmission, multimedia and information interchange).

On the basis of long-term research and expert analysis [5], it has been established that the types of information most needed for the harmonious development of the new generation and its receptiveness to all socially valuable kinds of activity, and simply required daily for individual or socio-cultural activity, should be united in an Information System of Public Access with public-access terminals (information kiosks with touch screens) [2] giving access to in-demand, authentic, non-commercialised information resources. These resources are united by an economic platform but protected from criminal influence by special Internet, WAN and LAN security technologies, in a popular databank on the following subjects:

Training (vocational training enabling the new generations to acquire professions in demand on the modern labour market)
Recreation and Leisure (opportunities for the new generations for cultural, individualised, non-criminal leisure, sports, etc.)
Jobs (employment of the new generations at a sufficient wage level, including the creation and development of a family)
Social and Legal Protection (for the new generations first of all)

The above-named touch-screen terminals for public access to in-demand, authentic, non-commercialised information resources are to be placed, as information kiosks with corresponding portal mirrors, in the above-named university centres according to a definite feasibility report. In parallel with the information resource on the Internet, the project provides for an annual edition, on hard copy and DVD, of a "Special directory-navigator for the different regions for new generations", containing information about what the new generations can do when traditional communications or mass media do not provide them with operative, authentic, timely and updated information about the ways of socialisation in modern international, global and local societies. The directory-navigator contains information resources that will allow the new generations to orient themselves freely in the problems of employment, training, education, recreation, leisure, and social and legal protection.

Besides this, during the formation of the SPLDESS experimental-stage Information System with public-access terminals (information kiosks with touch screens) to in-demand, authentic, non-commercialised information resources, it was planned to develop, and to research the adaptive marketological demand for, a special module for the preventive protection of the information created at the expense of the state from
illegitimate concealment and commercialisation, in all its completeness as regards its most socially significant address components. The mentioned module should transform the natural-language inquiry of a user of the above-named terminals, or of their Internet versions, into a system search-engine query in the corresponding language, with an accent on results from the government agencies authorised to execute control-and-permission functions [7] within their monitoring competency for the commercially unrestricted distribution of the corresponding types of socially significant information, in all its completeness, on the subjects set forth above, together with contact phones and addresses. (At the final stage of the present research it may be possible to provide experimental development and marketing [6] approbation of an expansion of the described module that would produce pertinent electronic statements of claim with digital-signature support and the possibility of output to a printer, with subsequent automated packing into an envelope and dispatch to all appropriate instances, to satisfy the full spectrum of information requirements of national users of the states under whose jurisdiction the present research is conducted.) In the case of infringement of users' rights through the absence of relevant information in resources financed at the expense of national, city, municipal or any other type of local budget, this should lead to the automated generation of a statement or statements of claim about the misuse of state resources, and the subsequent recovery from the guilty parties, in the sizes established by judicial order, of the costs of the property and moral damage caused.
4 Summary and Conclusions

In conclusion, summarising all of the above, it should be noted that the purpose of this research paper is to present the concept of a script-frame production knowledge model for socio-psycho-linguistically determined communication streams in a human-computer natural-language-processing toolkit limited to the above-mentioned subjects. Most of the project under consideration, as opposed to the purely theoretical approach in various branches of artificial intelligence, aims at a software-development problem definition. This approach is aimed at supersets of public-access networks capable of communicating with any type of information system only in a restricted natural-language dialogue mode, and often only in a restricted colloquial dialect. Although some aspects of the offered methodology are tightly related to the traditional technologies of machine-translation systems, the main resolution of the problem is given within the traditional structure of knowledge engineering inherited from classic expert systems, which involves the paradigm of further application development and an original toolkit of knowledge presentation. The information streams have frame knowledge representations and substantiations that are included in the conceptual model under consideration. The resulting matrix toolkit interprets the generation and production script in dependence on the user's socio-linguistic, socio-psychological and psycho-linguistic attributes, as reflected in the subject-structure prototypes of our knowledge base at http://www.kmtz.info.
The author's strategic purpose is the development of a knowledge-presentation model for a similar class of existing infrastructure systems with the assistance of special multimedia hardware (a public-access network of information kiosks). This simple approach to implementing the demonstrated technological decisions easily supports several modes of visual output (multimedia, video, speech processing and traditional terminals with menu selection) for a massive reduction of excessive textual information. The development of an expert search system with elements of socio-psycho-linguistic determination, adaptation and verification, and now with special multimedia toolkits, is intended to advance the gradual development of artificial-intelligence tools that are highly demanded by some perspective segments of the national and international software market [6]. These tools are very significant from the investment point of view. Today, institutions of both private and state-public status are interested in rapidly equipping themselves with the above-described information and communication technologies, both the existing specialised establishments in this area and even the international organisations of public control that carry out a highly technological yet democratic information-protection policy of the internationally recognised rights of every person using the global data-transmission networks, jointly established by means of a faster accounting infrastructure. This new-generation infrastructure should be able to monitor content, providing a politically correct, non-usurping evolution of social-network effect dynamics with regard to the urgency of priority changes towards glocalisation, and the preventive suppression of extremist tendencies under the conditions of non-selectiveness and anonymity of popular special and educational services in Internet communications and mass media.

Acknowledgments. I am grateful to my colleague Yuri Natchetoi for the care with which he reviewed the former version of this manuscript, which was reflected in the style of the finishing work on the joint software design, and especially to the incumbent Director of my company for the care with which he reviewed the last version of this manuscript and for conversations that clarified my thinking on this and other matters. His friendship and professional collaboration have meant a great deal to me.
References

1. Ponomarev, V.V.: Conceptual Model of the Linguistic Software Complex of the Expert Search System with Elements of Socio-Psycho-Linguistic Determination. Dialog MEPhI, Moscow (2004) (in Russian)
2. Ponomarev, V.V.: Implementation of the New Approach to Automated Reporting from the Terminal Stations of the Municipal Information System "Youth". In: New Software Technologies, Moscow, MGIEM, pp. 44–51 (2005) (in Russian)
3. Ponomarev, V., Natchetoi, Y.: Semantic Content Engine for E-business and E-Government with Mobile Web Support. In: Proceedings of the Third International Conference on Internet Technologies and Applications, Wrexham, UK, pp. 702–710 (2009)
4. Kaufman, V., Natchetoi, Y., Ponomarev, V.: On Demand Mobile CRM Applications for Social Marketing. In: Proceedings of ICE-B, Porto, Portugal, pp. 397–404 (2008)
5. Ponomarev, V.V., Gorelov, Y.K.: Independent Moscow Regional Ecological Monitoring and Recultivation Center on the Base of Expert Social-Informational Technological Integrated Data Bank as Experience of Adequatization. In: Program of People to People International, Moscow City Council, Intel Service Center, Russian Ministry for the Protection of the Environment and Natural Resources, Moscow (1994)
6. Webber, Y.: Marketing to the Social Web: How Digital Customer Communities Build Your Business. Wiley, Chichester (2007)
7. Ponomarev, V.V.: Composition, Structure and Functional Extension of Linguistic Support for Applications. Mashinostroitel, Number 11, pp. 3–5 (2006) (in Russian), ISSN 00025-4568
8. Yu, L.: Semantic Web and Semantic Web Services. Chapman and Hall/CRC, Boca Raton (2007)
A Packet Loss Concealment Algorithm Robust to Burst Packet Loss Using Multiple Codebooks and Comfort Noise for CELP-Type Speech Coders

Nam In Park1, Hong Kook Kim1, Min A. Jung2, Seong Ro Lee3, and Seung Ho Choi4

1 School of Information and Communications, Gwangju Institute of Science and Technology (GIST), Gwangju 500-712, Korea {naminpark,hongkook}@gist.ac.kr
2 Department of Computer Engineering, 3 School of Information Engineering, Mokpo National University, Jeollanam-do 534-729, Korea {majung,srlee}@mokpo.ac.kr
4 Department of Electronic and Information Engineering, Seoul National University of Science and Technology, Seoul 139-743, Korea
[email protected] Abstract. In this paper, a packet loss concealment (PLC) algorithm for CELPtype speech coders is proposed to improve the quality of decoded speech under burst packet loss. A conventional PLC algorithm is usually based on speech correlation to reconstruct decoded speech of lost frames by using the information on the parameters obtained from the previous frames that are assumed to be correctly received. However, this approach is apt to fail to reconstruct voice onset signals since the parameters such as pitch, LPC coefficient, and adaptive/fixed codebooks of the previous frames are almost related to silence frames. Thus, in order to reconstruct speech signals in the voice onset intervals, we propose a multiple codebook based approach which includes a traditional adaptive codebook and a new random codebook composed of comfort noise. The proposed PLC algorithm is designed as a PLC algorithm for G.729 and its performance is then compared with that of the PLC algorithm employed in G.729 by means of perceptual evaluation of speech quality (PESQ), a waveform comparison, and an A-B preference test under different random and burst packet loss conditions. It is shown from the experiments that the proposed PLC algorithm provides significantly better speech quality than the PLC of G.729, especially under burst packet loss and voice onset conditions. Keywords: Speech coding, G.729, packet loss concealment (PLC), comfort noise, burst packet loss, voice onset.
1 Introduction

With the increasingly popular use of the Internet, IP telephony devices such as voice over IP (VoIP) and voice over WiFi (VoWiFi) phones have attracted wide attention for speech communications. In order to realize an IP phone service, speech packets

T.-h. Kim et al. (Eds.): FGCN 2010, Part II, CCIS 120, pp. 138–147, 2010. © Springer-Verlag Berlin Heidelberg 2010
are transmitted using the real-time transport protocol over the user datagram protocol (RTP/UDP), but RTP/UDP does not check whether the transmitted packets are correctly received [1]. Due to the nature of this type of transmission, the packet loss rate becomes higher as the network becomes congested. In addition, depending on the network resources, the possibility of burst packet losses also increases, potentially resulting in severe quality degradation of the reconstructed speech [2]. In this paper, a new packet loss concealment (PLC) algorithm for CELP-type speech coders is proposed as a means of improving the quality of decoded speech under burst packet losses, especially when packet loss occurs during voice onset intervals. The proposed PLC algorithm is based on speech correlation to reconstruct the decoded speech corresponding to lost packets. CELP-type speech coders decompose speech signals into vocal tract parameters and excitation signals. The former are reconstructed by repeating the parameters of the previous speech frame, which is assumed to be correctly received. The excitation signal, on the other hand, is reconstructed by combining voiced and random excitations. In other words, voiced excitation is obtained from the adaptive codebook excitation scaled by a voicing probability, while random excitation is generated by permuting the previously decoded excitation in order to compensate for an undesirable amplitude mismatch under burst packet loss conditions. However, this approach is apt to fail to reconstruct voice onset signals, since parameters such as the pitch period, linear predictive coding (LPC) coefficients, and adaptive/fixed codebooks of the previous frames mostly relate to silence frames [3]. The proposed PLC algorithm can mitigate this problem with a multiple codebook using comfort noise.
The performance of the proposed PLC algorithm is evaluated by implementing it on the G.729 speech decoder and compared with that of the PLC algorithm already employed in the G.729 speech decoder. The remainder of this paper is organized as follows. Following this introduction, Section 2 describes a conventional PLC algorithm that is employed in the G.729 decoder [7]. After that, Section 3 describes the proposed PLC algorithm and implements it on the G.729 decoder. Section 4 then demonstrates the performance of the proposed PLC algorithm, and this paper is concluded in Section 5.
2 Conventional PLC Algorithm

The PLC algorithm employed in the G.729 standard reconstructs the speech signal of the current frame based on previously received speech parameters. In other words, the PLC algorithm replaces the missing excitation with an equivalent characteristic from a previously received frame, while the excitation energy gradually decays. In addition, it uses a voicing classifier based on the long-term prediction gain. During the frame error concealment process, a 10 ms frame is declared voiced if at least one 5 ms subframe of the 10 ms frame has a long-term prediction gain of more than 3 dB; otherwise, the frame is declared unvoiced. A lost frame inherits its class from the previous speech frame. The synthesis filter for the lost frame uses the linear predictive coding (LPC) coefficients of the last good frame. In addition, the gains of the adaptive and fixed codebooks are attenuated by constant factors, and the pitch period of the lost frame uses the integer part of the pitch period from the previous frame. To avoid repeating the same periodicity, the pitch period is increased by one for each subsequent subframe.
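The concealment steps above can be sketched in a few lines. This is a hedged illustration of the described behavior, not the G.729 reference code: the decay factors and the dictionary-based frame state are illustrative assumptions, and the pitch increment is applied once per call rather than per subframe.

```python
# Illustrative constants -- the exact attenuation factors are fixed in the
# G.729 specification; the values below are assumptions for this sketch.
ADAPTIVE_GAIN_DECAY = 0.9
FIXED_GAIN_DECAY = 0.98
LTP_GAIN_THRESHOLD_DB = 3.0

def classify_frame(subframe_ltp_gains_db):
    """Declare a 10 ms frame voiced if any 5 ms subframe has a long-term
    prediction gain above 3 dB; otherwise declare it unvoiced."""
    if any(g > LTP_GAIN_THRESHOLD_DB for g in subframe_ltp_gains_db):
        return "voiced"
    return "unvoiced"

def conceal_parameters(last_good):
    """Derive a lost frame's parameters from the last good frame's state
    (a dict with the keys used below)."""
    return {
        "class": last_good["class"],          # lost frame inherits the class
        "lpc": last_good["lpc"],              # reuse the last good LPC set
        "adaptive_gain": last_good["adaptive_gain"] * ADAPTIVE_GAIN_DECAY,
        "fixed_gain": last_good["fixed_gain"] * FIXED_GAIN_DECAY,
        # integer part of the previous pitch, incremented to avoid
        # repeating exactly the same periodicity
        "pitch": int(last_good["pitch"]) + 1,
    }
```

Calling `conceal_parameters` repeatedly on its own output mimics the gradual energy decay over consecutive lost frames.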
140
N.I. Park et al.
3 Proposed PLC Algorithm

In contrast to the conventional PLC algorithm described in the previous section, the proposed PLC algorithm consists of two blocks: a speech correlation based PLC (SC-PLC) block and a multiple codebook based PLC (MC-PLC) block. The former includes voicing probability estimation, periodic/random excitation generation, and speech amplitude control. The latter incorporates comfort noise to construct multiple codebooks for reconstructing voice onset signals. Fig. 1(a) shows an overview of the proposed PLC algorithm. First, the multiple codebook, e_2(n), is updated every frame regardless of packet loss. If the current frame is declared a lost frame, the LPC coefficients of the previous good frame are first scaled down to smooth the spectral envelope. Next, a new excitation signal, ê(n), is estimated by the speech correlation based PLC block; the updated multiple codebook is used to obtain ê(n). If consecutive frame losses occur, the signal amplitude estimate, A_i, for the lost frame is obtained prior to the excitation estimation described above. Finally, the decoded speech corresponding to the lost frame is obtained by filtering the estimated new excitation through the smoothed LPC coefficients.
Fig. 1. Overviews of (a) the proposed PLC algorithm and (b) the speech correlation based PLC algorithm [3]
3.1 Speech Correlation Based PLC

3.1.1 Generation of Periodic and Random Excitation Using the Voicing and Unvoicing Probabilities
Fig. 1(b) shows an overview of the speech correlation based PLC (SC-PLC) block. This block estimates a new excitation signal, ê(n), for a lost frame by combining the periodic excitation obtained from the estimated voicing probability and the random excitation obtained by permuting the previously decoded excitation signal. Note that the updated multiple codebook is used to generate the periodic and random excitations, as will be explained in Section 3.2.
A Packet Loss Concealment Algorithm Robust to Burst Packet Loss
141
Fig. 2. Example of generating excitation signals by the speech correlation block
The SC-PLC algorithm generates the excitation of a lost frame as a weighted sum of the voiced and unvoiced excitations, which in turn are based on the pitch and the excitation of the previous frame, as shown in Fig. 2. In particular, the voiced excitation is first generated from an adaptive codebook by repeating the excitation of the previous frame over the pitch period; it is referred to as the periodic excitation in this paper. That is, the periodic excitation, e_p(n), is given by

e_p(n) = e(n - P)                                                        (1)
where e(n) is the excitation of the previous frame and P is the pitch period estimate of the current frame. Next, to generate the unvoiced excitation, referred to as the random excitation, a temporal excitation is produced by randomly permuting the excitation of the previous frame:

e_t(n) = P_pi(e(n))                                                      (2)
where e_t(n) is the temporal excitation, P_pi is a permutation matrix, and the permutation indices are generated as a random sequence within the range of the pitch period P. An excitation sample is selected randomly from within a selection range having the same length as the pitch period. To select the next excitation sample, P is increased by one to prevent the same excitation sample from being selected again. In addition, assuming that the fixed codebook contributes to the periodicity of the speech signal as the adaptive codebook does [4], we can find the shift m* that maximizes the normalized cross-correlation between the periodic excitation and the temporal excitation:

m* = argmax_{0 <= m <= N-1} [ sum_{n=0}^{N-1} e_p(n)*e_t(n - m) ]^2 / sum_{n=0}^{N-1} e_t^2(n - m)        (3)
where N is the frame size, which is set to 80 for G.729. The best random excitation contributing to the periodicity of the speech signal is then defined as

e_r(n) = e_t(n - m*)                                                     (4)
where e_r(n) is the random excitation. As shown in Fig. 2, to recover the lost frame, we obtain the reconstructed excitation as a weighted sum of the periodic and random excitations:

ê(n) = p_v*e_p(n) + p_uv*e_r(n)                                          (5)
where ê(n), p_v, and p_uv are the reconstructed excitation, the voicing probability, and the unvoicing probability, respectively. In Eq. (5), p_v and p_uv are required to obtain the excitation. To this end, we first compute a correlation coefficient, r, between the excitation decoded in the previous frame and its version delayed by the estimated pitch period of the current frame, P. In other words, we have

r = sum_{n=0}^{N-1} e(n)*e(n - P) / sqrt( sum_{n=0}^{N-1} e^2(n) * sum_{n=0}^{N-1} e^2(n - P) ).        (6)
Using the correlation coefficient, p_v and p_uv are estimated as

p_v = 1,                  if r > 0.33
      (r - 0.03) / 0.3,   if 0.03 <= r <= 0.33                           (7)
      0,                  otherwise

and

p_uv = 1 - p_v.                                                          (8)
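Putting Eqs. (1)-(8) together, a minimal Python sketch of the SC-PLC excitation reconstruction might look as follows. This is not the authors' implementation: the permutation of Eq. (2) is simplified to pitch-synchronous random sampling, the circular shift in the correlation search is an assumption, and the buffer is assumed to hold at least max(N, P) samples with the newest sample last.

```python
import random

def sc_plc_excitation(prev_exc, pitch, frame_size):
    """Sketch of Eqs. (1)-(8): reconstruct the excitation of a lost frame
    from the previous excitation buffer and the estimated pitch period."""
    N, P = frame_size, pitch

    # Eq. (1): periodic excitation e_p(n) = e(n - P); the buffer is extended
    # so that pitch periods shorter than the frame also work.
    buf = list(prev_exc)
    e_p = []
    for _ in range(N):
        e_p.append(buf[-P])
        buf.append(e_p[-1])

    # Eq. (2): temporal excitation -- samples drawn at random from the last
    # pitch period of the previous excitation (a simplified permutation).
    e_t = [prev_exc[random.randrange(-P, 0)] for _ in range(N)]

    # Eq. (3): shift m* maximizing the normalized cross-correlation between
    # the periodic and the circularly shifted temporal excitation.
    def score(m):
        num = sum(e_p[n] * e_t[n - m] for n in range(N)) ** 2
        den = sum(e_t[n - m] ** 2 for n in range(N))
        return num / den if den else 0.0
    m_star = max(range(N), key=score)

    # Eq. (4): best random excitation.
    e_r = [e_t[n - m_star] for n in range(N)]

    # Eq. (6): correlation between the previous excitation and its
    # pitch-delayed version (negative indices wrap into the buffer tail).
    num = sum(prev_exc[n] * prev_exc[n - P] for n in range(N))
    den = (sum(prev_exc[n] ** 2 for n in range(N)) *
           sum(prev_exc[n - P] ** 2 for n in range(N))) ** 0.5
    r = num / den if den else 0.0

    # Eqs. (7)-(8): voicing and unvoicing probabilities.
    if r > 0.33:
        p_v = 1.0
    elif r >= 0.03:
        p_v = (r - 0.03) / 0.3
    else:
        p_v = 0.0
    p_uv = 1.0 - p_v

    # Eq. (5): weighted sum of periodic and random excitations.
    return [p_v * e_p[n] + p_uv * e_r[n] for n in range(N)]
```

For a strongly periodic previous excitation, r approaches 1, so the reconstruction reduces to the pitch-repeated periodic excitation, as intended.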
The above probabilities are finally applied to Eq. (5) to obtain the reconstructed excitation.

3.1.2 Speech Amplitude Control Using Linear Regression
The SC-PLC algorithm described in Section 3.1.1 tends to reconstruct speech signals with relatively flat amplitudes, resulting in unnatural quality of the decoded speech. To overcome this problem, we introduce a smoothing method for controlling the amplitude of the decoded speech by using a linear regression technique. Fig. 3 shows an example of the amplitude control. Assuming that i is the current frame and g_i is the original speech amplitude, the PLC employed in G.729 estimates the amplitude, g_i'', by attenuating the codebook gain, whereas the speech correlation based PLC estimates the amplitude, g_i*, using linear regression. In the figure, the amplitude obtained by linear regression provides a better estimate than the amplitude obtained by attenuating the codebook gain. Here, the linear regression is based on the first-order linear model

g_i' = a + b*i                                                           (9)
Fig. 3. Amplitude prediction using linear regression
where g_i' is the newly predicted current amplitude, a and b are the coefficients of the first-order linear function, and i is the frame number [6]. Assuming that measurement errors are normally distributed and that the past four amplitude values are used, we find a and b such that the difference between the original speech amplitudes and the amplitudes estimated by the linear model is minimized. In other words, a* and b* are the optimized parameters with respect to a and b. Based on these parameters, the amplitude estimate for the i-th frame is

g_i* = a* + b*·i.                                                        (10)
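The prediction of Eqs. (9)-(10) reduces to a closed-form least-squares fit over the past amplitudes. The sketch below illustrates this; the ordering convention of `past_amps` (oldest first, relative indices 0..n-1) is an assumption for illustration.

```python
def predict_amplitude(past_amps):
    """Fit g' = a + b*i by least squares to the previous frames' amplitudes
    (the paper uses the past four), then extrapolate one frame ahead to get
    g_i* = a* + b**i, as in Eqs. (9)-(10)."""
    n = len(past_amps)
    xs = range(n)                         # relative frame indices
    mean_x = sum(xs) / n
    mean_y = sum(past_amps) / n
    # Closed-form simple linear regression (least-squares estimates).
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, past_amps)) /
         sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a + b * n                      # predicted amplitude of frame i
```

For a steadily decaying amplitude track, the extrapolated value continues the decay instead of holding the last gain, which is the benefit claimed over plain gain attenuation.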
To obtain the amplitude of a lost frame, the ratio between the amplitude of the current i-th frame and that of the (i-1)-th frame is first defined as

sigma_i = g_i* / g_{i-1}                                                 (11)
where sigma_i is the amplitude ratio of the i-th frame. Moreover, the number of consecutive lost frames is taken into account, based on the observation that the speech amplitude decreases as consecutive frame losses occur. We define a scale factor, s_i, as

s_i = 1.1,  if l_i = 1, 2
      1.0,  if l_i = 3, 4
      0.9,  if l_i = 5, 6                                                (12)
      0,    otherwise

where l_i is the number of consecutive lost frames up to the i-th frame. The estimated amplitude, A_i', can then be determined as

A_i' = s_i * sigma_i.                                                    (13)
For continuous amplitude attenuation, A_i' is smoothed with the estimated amplitude of the (i-1)-th frame, A_{i-1}', as

A_i'(n) = -((A_{i-1}' - A_i') / N) * n + A_{i-1}',   n = 0, ..., N-1     (14)
where A_i'(n) is the smoothed amplitude of the n-th sample of the i-th frame. Finally, we multiply the excitation ê(n) by A_i'(n) to obtain the amplitude-adjusted excitation, ẽ(n) = A_i'(n)*ê(n), which is applied to the synthesis filter.

3.2 Multiple Codebook Based PLC

The SC-PLC block is apt to fail in reconstructing voice onset signals. When the current frame is a voice onset, the previous frames are silence or noise frames. Thus, if the current frame is lost, the coding parameters such as the pitch period, LPC coefficients, and excitation codebooks of the previous frames are not sufficient to reconstruct the current frame. To overcome this problem, we propose a multiple codebook based PLC (MC-PLC) approach.
Fig. 4. Structure of the proposed multiple codebook generation based on comfort noise, where FI means a frame erasure indicator
Fig. 4 shows the structure of the MC-PLC block. In this block, comfort noise is incorporated to build a secondary adaptive codebook for the excitation generation of a CELP-type coder. As shown in the figure, the adaptive codebook II excitation, e_2(n), is updated every frame regardless of frame loss. If there is no frame loss, i.e., the frame erasure indicator (FI) is set to 0, speech signals are reconstructed by filtering e(n); simultaneously, the adaptive codebook II is updated with the sum of e(n) and e_cng(n). Otherwise, the previous excitation of SC-PLC is substituted with e_2(n). After applying e_2(n) to SC-PLC, speech signals are reconstructed by filtering ẽ(n); in this case, the adaptive codebook II is updated with the sum of ẽ(n) from SC-PLC and e_cng(n) from the comfort noise. Here, e_cng(n) is defined as

e_cng(n) = g_ra*e_ra(n) + g_rf*e_rf(n)                                   (15)

where g_ra and g_rf are the gains of the random adaptive codebook excitation, e_ra(n), and the random fixed codebook excitation, e_rf(n), respectively [11].
In Eq. (15), e_cng(n) should be small compared to the excitation, e(n). In this paper, the squared sum of e_cng(n) over a subframe is set below the squared sum of e(n), such that

sum_{n=0}^{39} (g_ra*e_ra(n) + g_rf*e_rf(n))^2 = alpha * sum_{n=0}^{39} e^2(n)        (16)
where alpha is a scale factor that is set adaptively depending on g_a, the gain of adaptive codebook I, as shown in Fig. 5. In other words, we have

alpha = 0.48,       if g_a >= 0.6
        0.8*g_a,    if 0.12 <= g_a < 0.6                                 (17)
        0.108,      if g_a < 0.12

Before solving Eq. (16), we randomly choose g_ra according to the rule already applied to generate comfort noise in ITU-T Recommendation G.729 Annex B [11]. Then g_rf is obtained from Eq. (16).
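Once g_ra is fixed, Eq. (16) is a quadratic in g_rf and can be solved in closed form, as sketched below. The choice of the larger real root and the zero-gain fallback are our assumptions for illustration; the G.729 Annex B gain-selection procedure itself is not reproduced here.

```python
def alpha_from_gain(g_a):
    """Eq. (17): scale factor from the gain of adaptive codebook I."""
    if g_a >= 0.6:
        return 0.48
    if g_a >= 0.12:
        return 0.8 * g_a
    return 0.108

def solve_g_rf(e_ra, e_rf, e_prev, g_a, g_ra):
    """Solve the energy constraint of Eq. (16) for g_rf over one
    40-sample subframe, given a randomly chosen g_ra."""
    alpha = alpha_from_gain(g_a)
    # Expand sum((g_ra*e_ra + g_rf*e_rf)^2) = alpha*sum(e_prev^2)
    # into A*g_rf^2 + B*g_rf + C = 0.
    A = sum(x * x for x in e_rf)
    B = 2.0 * g_ra * sum(x * y for x, y in zip(e_ra, e_rf))
    C = (g_ra * g_ra * sum(x * x for x in e_ra)
         - alpha * sum(x * x for x in e_prev))
    disc = B * B - 4.0 * A * C
    if disc < 0.0 or A == 0.0:
        return 0.0            # no real solution: fall back to zero gain
    return (-B + disc ** 0.5) / (2.0 * A)   # larger real root (assumption)
```

With orthogonal unit-energy codebook vectors, the constraint visibly reduces to g_ra^2 + g_rf^2 = alpha * energy(e), which makes the quadratic structure easy to check.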
4 Performance Evaluation

To evaluate the performance of the proposed PLC algorithm, we replaced the PLC algorithm employed in G.729 [7] with the proposed one and measured perceptual evaluation of speech quality (PESQ) scores according to ITU-T Recommendation P.862 [8]. For the PESQ test, 96 speech sentences, 48 spoken by males and 48 by females, were taken from the NTT-AT speech database [9] and processed by G.729 with the proposed PLC algorithm under different packet loss conditions. The performance was also compared with that of the PLC algorithm employed in G.729, referred to as G.729-PLC. In this paper, we simulated two different packet loss conditions: random and burst packet losses. During these simulations, packet loss rates of 3, 5, and 8% were generated by the Gilbert-Elliot model defined in ITU-T Recommendation G.191 [10]. Under the burst packet loss condition, the burstiness of the packet losses was set to 0.66; thus, the mean and maximum numbers of consecutive packet losses were measured as 1.5 and 3.7 frames, respectively. Figs. 5(a) and 5(b) compare PESQ scores when the proposed MC-PLC and G.729-PLC were employed in G.729 under single packet loss conditions and burst packet loss conditions, respectively. As shown in the figure, the MC-PLC algorithm had higher PESQ scores than the G.729-PLC algorithm for all conditions. In particular, the effectiveness of the proposed PLC algorithm was investigated when packet losses occurred in voice onset intervals. Fig. 5(c) shows the PESQ scores for G.729-PLC, SC-PLC, and MC-PLC under this condition, where MC-PLC provided the highest PESQ scores for any number of consecutive packet losses during voice onset.
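A toy two-state Gilbert model is enough to see how such burst loss patterns arise. The sketch below is not the ITU-T G.191 error-insertion tool, and its parameterization (burstiness as the probability of staying in the loss state, with the other transition chosen to hit the target loss rate) is an assumption for illustration.

```python
import random

def gilbert_loss_pattern(n_frames, loss_rate, burstiness, seed=0):
    """Generate a per-frame loss pattern from a toy two-state Gilbert
    channel: True marks a lost frame."""
    rng = random.Random(seed)
    stay_bad = burstiness
    # stationary P(bad) = p_gb / (p_gb + (1 - stay_bad)) == loss_rate
    p_gb = loss_rate * (1.0 - stay_bad) / (1.0 - loss_rate)
    bad = False
    pattern = []
    for _ in range(n_frames):
        bad = rng.random() < (stay_bad if bad else p_gb)
        pattern.append(bad)
    return pattern
```

Setting burstiness to 0 makes losses independent (random losses), while larger values concentrate the same overall loss rate into longer bursts.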
Fig. 5. Comparison of PESQ scores: (a) MC-PLC and G.729-PLC under single packet loss conditions, (b) MC-PLC and G.729-PLC under burst packet loss conditions, and (c) G.729-PLC, SC-PLC, and MC-PLC according to the number of consecutive packet losses occurring in voice onset intervals
Fig. 6. Waveform comparison: (a) original waveform, (b) decoded speech signal without any packet loss, (c) packet error pattern, and reconstructed speech signals using (d) G.729-PLC, (e) SC-PLC, and (f) MC-PLC

Table 1. A-B preference test results
Burstiness          Packet loss rate    Preference score (%)
                                        G.729-PLC   No difference   Proposed PLC
γ = 0.0 (random)    3%                  14.44       47.78           37.78
                    5%                   8.89       45.56           45.55
                    8%                  18.89       34.44           46.67
γ = 0.66            3%                  17.78       45.56           36.66
                    5%                  12.22       42.22           45.56
                    8%                   7.78       41.11           51.11
Average                                 13.33       42.78           43.89
Fig. 6 shows a waveform comparison of speech reconstructed by the different PLC algorithms. Figs. 6(a) and 6(b) show the original speech waveform and the speech waveform decoded without any loss, respectively. After applying the packet error pattern indicated by the solid box in Fig. 6(c), SC-PLC (Fig. 6(e)) and MC-PLC (Fig. 6(f)) reconstructed the speech signal better than G.729-PLC (Fig. 6(d)). However, SC-PLC could not fully reconstruct the voice onset signal, marked by the dotted box in Fig. 6(c), whereas MC-PLC provided better reconstruction of voice onset signals than SC-PLC. Finally, in order to evaluate the subjective performance, we performed an A-B preference listening test, where 10 speech sentences, 5 spoken by males and 5 by females, were processed by both G.729-PLC and MC-PLC under random and burst packet loss conditions. As shown in Table 1, MC-PLC was significantly preferred over G.729-PLC.
5 Conclusion

In this paper, we proposed a packet loss concealment algorithm for CELP-type speech coders to improve speech quality when frame erasures or packet losses occur in voice onset intervals. The proposed PLC algorithm combines a speech correlation based PLC and a multiple codebook based PLC. We evaluated the performance of the proposed PLC algorithm on G.729 under random and burst packet loss rates of 3%, 5%, and 8%, and compared it with that of the PLC algorithm already employed in G.729 (G.729-PLC). PESQ tests, waveform comparisons, and A-B preference tests showed that the proposed PLC algorithm provided better speech quality than G.729-PLC for all the simulated conditions.

Acknowledgments. This work was supported in part by the Mid-career Researcher Program through an NRF grant funded by the MEST (No. 2010-0000135), and in part by the Ministry of Knowledge Economy (MKE), Korea, under the Information Technology Research Center (ITRC) support program supervised by the National IT Industry Promotion Agency (NIPA) (NIPA-2010-C1090-1021-0007).
References
1. Goode, B.: Voice over internet protocol (VoIP). Proceedings of the IEEE 90(9), 1495–1517 (2002)
2. Jiang, W., Schulzrinne, H.: Comparison and optimization of packet loss repair methods on VoIP perceived quality under bursty loss. In: Proceedings of NOSSDAV, pp. 73–81 (2002)
3. Cho, C.S., Park, N.I., Kim, H.K.: A packet loss concealment algorithm robust to burst packet loss for CELP-type speech coders. In: Proceedings of ITC-CSCC, pp. 941–944 (2008)
4. Kim, H.K., Lee, M.S.: A 4 kbps adaptive fixed code-excited linear prediction speech coder. In: Proceedings of ICASSP, pp. 2303–2306 (1999)
5. Kondoz, A.M.: Digital Speech: Coding for Low Bit Rate Communication Systems, 2nd edn. Wiley, Chichester (2004)
6. Press, W., Teukolsky, S., Vetterling, W., Flannery, B.: Numerical Recipes: The Art of Scientific Computing, 3rd edn. Cambridge University Press, Cambridge (2007)
7. ITU-T Recommendation G.729: Coding of Speech at 8 kbit/s Using Conjugate-Structure Algebraic Code-Excited Linear Prediction (CS-ACELP) (1996)
8. ITU-T Recommendation P.862: Perceptual Evaluation of Speech Quality (PESQ), an Objective Method for End-to-End Speech Quality Assessment of Narrowband Telephone Networks and Speech Coders (2001)
9. NTT-AT: Multi-Lingual Speech Database for Telephonometry (1994)
10. ITU-T Recommendation G.191: Software Tools for Speech and Audio Coding Standardization (2000)
11. ITU-T Recommendation G.729 Annex B: A Silence Compression Scheme for G.729 Optimized for Terminals Conforming to Recommendation V.70 (1996)
Duration Model-Based Post-processing for the Performance Improvement of a Keyword Spotting System

Min Ji Lee1, Jae Sam Yoon1, Yoo Rhee Oh1, Hong Kook Kim1, Song Ha Choi2, Ji Woon Kim2, and Myeong Bo Kim2

1 School of Information and Communications, Gwangju Institute of Science and Technology (GIST), Gwangju 500-712, Korea
{minji,jsyoon,yroh,hongkook}@gist.ac.kr
2 Camcorder Business Team, Digital Media Business, Samsung Electronics, Suwon-si, Gyeonggi-do 443-742, Korea
{songha.choi,jiwoon.kim,kmbo.kim}@samsung.com
Abstract. In this paper, we propose a post-processing method based on a duration model to improve the performance of a keyword spotting system. The proposed duration model-based post-processing method is performed after detecting a keyword. To detect the keyword, we first combine a keyword model, a non-keyword model, and a silence model. Using the information on the detected keyword, the proposed post-processing method is then applied to determine whether or not the correct keyword is detected. To this end, we generate the duration model using Gaussian distribution in order to accommodate different duration characteristics of each phoneme. Comparing the performance of the proposed method with those of conventional anti-keyword scoring methods, it is shown that the false acceptance and the false rejection rates are reduced. Keywords: Keyword spotting, post-processing method, duration model.
T.-h. Kim et al. (Eds.): FGCN 2010, Part II, CCIS 120, pp. 148–154, 2010. © Springer-Verlag Berlin Heidelberg 2010

1 Introduction

The latest smart phones and electronic devices support voice commands for fast and convenient usage. To activate voice commands automatically, a keyword spotting system can be adopted in such devices to detect predefined keywords in continuous speech signals. Moreover, it can mitigate the performance degradation of isolated word recognition or continuous speech recognition systems [1]. Generally, keyword spotting systems use hidden Markov models (HMMs) [2]; i.e., a keyword model, a non-keyword model, and a silence model are each first modeled by an HMM. Then, input speech is decoded with the trained HMMs, combining the three HMMs to detect the specified keywords. After detecting keywords, a post-processing technique can be applied to reduce detection errors. Also, in practical applications, it is often better to let the user speak again rather than to provide a wrong result, which can also be decided on the basis of the post-processing results. Several post-processing methods have been proposed. Among them, anti-keyword scoring methods are
commonly applied, using the weighted Euclidean distance and the Kullback-Leibler distance [3]-[5]. In contrast, we propose a duration model-based post-processing method to improve the performance of the keyword spotting system, especially in the single-keyword case. Before applying the proposed post-processing method, we first construct a baseline keyword spotting system to detect keywords. Using the information on the detected keyword, we then verify the detected keyword through the post-processing method. Finally, we compare the performance of the proposed post-processing method with those of conventional post-processing methods. The remainder of this paper is organized as follows. Section 2 explains the overall procedure of the baseline keyword spotting system and reviews several conventional post-processing methods. Section 3 presents the proposed duration model-based post-processing method, which improves the performance of the keyword spotting system. Section 4 evaluates its performance. Finally, we summarize our findings in Section 5.
2 Baseline Keyword Spotting System

This section explains the baseline keyword spotting system and reviews several conventional post-processing methods used in keyword spotting systems. The general structure of a typical keyword spotting system is shown in Fig. 1. The system first extracts speech features from the input speech signal. Next, it detects keywords via a keyword model, a non-keyword model, and a silence model. After that, in order to reduce errors and thus improve the recognition performance, a post-processing method can be applied.

2.1 Feature Extraction

In this paper, speech features are extracted using the ETSI Advanced Front-end [6]. As a result, we use 39-dimensional feature vectors, which include 12 mel-frequency cepstral coefficients (MFCCs) and one log energy parameter, together with their deltas and delta-deltas.
Fig. 1. General structure of a keyword spotting system
Fig. 2. Grammar network for the keyword spotting system combining a keyword, a non-keyword, and a silence model
2.2 Keyword Detection

Keywords are detected by using three acoustic models—a keyword model, a non-keyword model, and a silence model—combined by a finite state network. Fig. 2 shows the finite state network designed for the case where there is only one keyword in each sentence.

2.2.1 Keyword Model
The keyword spotting system uses tri-phone HMMs; each HMM has four states and four mixtures per state. The keyword model is trained using the clean speech database from the Speech Information Technology and Industry Promotion Center (SiTEC) [7] and the ETRI word database [8]. The keyword model is then adapted using speech sentences spoken by 6 female and 10 male speakers, recorded under five background noise conditions [9]. In this paper, the keyword is “Umsungmenu”; therefore, the keyword model consists of nine tri-phone models: /sil-U+m/, /U-m+s/, /m-s+v/, /s-v+N/, /v-N+m/, /N-m+e/, /m-e+n/, /e-n+ju/, and /n-ju+sil/.

2.2.2 Non-keyword Model and Silence Model
The keyword spotting system uses a filler model that is trained on non-keyword speech and background noise such as classical music, pop music, TV drama, female speech, and male speech. Note that the non-keyword model has 11 states and 8 mixtures per state. A silence model is trained with five states and four mixtures per state using speech features obtained from silent intervals.

2.2.3 Detection Rule
As described above, we detect keywords by combining a keyword model, a non-keyword model, and a silence model, denoted lambda_K, lambda_NK, and lambda_S, respectively. We then determine a keyword on a maximum likelihood criterion using lambda_K, lambda_NK, and lambda_S. For a given sequence S, the detection rule is

S* = argmax_S P(S | lambda_K, lambda_NK, lambda_S)                       (1)

where S* is the detected optimal sequence.
2.3 Post-processing Using Anti-keyword Scoring
A log likelihood scoring for an anti-keyword and an N-best decoding are widely used as post-processing methods. In this paper, we review anti-keyword scoring methods, which use an anti-keyword model. There are two different ways of generating the anti-keyword model: the first uses a weighted Euclidean distance [3], while the second is based on the Kullback-Leibler distance [4][5]. Both methods calculate a distance between probability distributions, and the anti-keyword is generated from models whose distances are close to the target model. The weighted Euclidean distance is represented by

D_E(p_i, p_j) = sum_{s=1}^{N} [ (1/V) sum_{d=1}^{V} (mu_{i,d,s} - mu_{j,d,s})^2 / (sigma_{i,d,s}*sigma_{j,d,s}) ]        (2)

where D_E(p_i, p_j) is the distance between the probability distributions of the i-th and j-th phonemes, p_i and p_j, N is the number of states, V is the dimension of the feature vectors, and mu_{i,d,s} and sigma_{i,d,s} denote the mean and the standard deviation of the d-th component of the s-th state for the i-th phoneme, respectively. On the other hand, the symmetric Kullback-Leibler distance is given by

D_KL(p_i, p_j) = sum_{s=1}^{N} sum_{d=1}^{V} (1/2) [ KL(f_{i,d,s}(x), f_{j,d,s}(x)) + KL(f_{j,d,s}(x), f_{i,d,s}(x)) ]        (3)
where f_{i,d,s}(x) and f_{j,d,s}(x) denote the probability distributions of the d-th component of the s-th state for the i-th and j-th phonemes, and x is a random variable. In Eq. (3), KL(f_{i,d,s}(x), f_{j,d,s}(x)) is given by

KL(f_{i,d,s}(x), f_{j,d,s}(x)) = (1/2) [ ln(sigma^2_{j,d,s} / sigma^2_{i,d,s}) - 1 + sigma^2_{i,d,s} / sigma^2_{j,d,s} + (mu_{i,d,s} - mu_{j,d,s})^2 / sigma^2_{j,d,s} ].        (4)
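Eqs. (3)-(4) can be computed directly from the Gaussian parameters. The sketch below assumes single-Gaussian states for simplicity (the paper's HMMs actually use several mixtures per state), with each model given as a list of states and each state as a list of (mean, variance) pairs over the V feature dimensions.

```python
import math

def kl_gaussian(mu_i, var_i, mu_j, var_j):
    """Eq. (4): KL divergence between two univariate Gaussians."""
    return 0.5 * (math.log(var_j / var_i) - 1.0 + var_i / var_j
                  + (mu_i - mu_j) ** 2 / var_j)

def symmetric_kl_distance(model_i, model_j):
    """Eq. (3): symmetrized KL distance between two phoneme models,
    summed over states and feature dimensions."""
    total = 0.0
    for state_i, state_j in zip(model_i, model_j):
        for (mi, vi), (mj, vj) in zip(state_i, state_j):
            total += 0.5 * (kl_gaussian(mi, vi, mj, vj)
                            + kl_gaussian(mj, vj, mi, vi))
    return total
```

The symmetrization matters because the one-sided KL divergence of Eq. (4) is not a true distance; the half-sum in Eq. (3) restores symmetry.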
Next, we detect the keyword using a network comprised of a filler model, a keyword model, and a silence model. Then, the detected keyword interval is passed to the anti-keyword model network. Finally, we calculate the score as the difference between the detected keyword's log likelihood and the log likelihood obtained by passing the anti-keyword model, which is defined as

S = (1 / (f_e - f_s)) [log P(O | W_K) - log P(O | W_AK)]                 (5)
where S is the score, f_s is the start frame, and f_e is the end frame; the keyword is detected from the network over the interval from f_s to f_e. In addition, O denotes the observation vector sequence, W_K is the keyword model, and W_AK is the anti-keyword model. This score is then compared to an appropriate threshold to determine whether the detected keyword is accepted or rejected.
Fig. 3. Duration probabilities of nine phonemes obtained from a keyword
3 Proposed Duration Model-Based Post-processing
This section proposes a new post-processing method based on a duration model. The proposed method exploits the fact that each phoneme has a different duration, which is modeled by a Gaussian distribution. Using the training database, we obtain a Gaussian probability for each phoneme with mean mu and variance sigma^2, denoted as

f(x) = (1 / sqrt(2*pi*sigma^2)) exp(-(x - mu)^2 / (2*sigma^2))

where x is the duration of each phoneme obtained from the input speech signal. Fig. 3 shows the duration probabilities of the nine phonemes of “Umsungmenu.” The probability f(x) is then compared with a threshold, which in this paper is set to f(mu + 3*sigma). The detected keyword is subsequently accepted as the correct keyword if f(x) > f(mu + 3*sigma) for all phonemes of the keyword; otherwise, the detected keyword is rejected.
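The accept/reject rule above can be written directly as follows; representing each phoneme's duration model as a (mean, standard deviation) pair is an assumption for this sketch.

```python
import math

def duration_pdf(x, mu, sigma):
    """Gaussian duration probability for one phoneme."""
    return (math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))
            / math.sqrt(2.0 * math.pi * sigma ** 2))

def accept_keyword(durations, duration_models):
    """Accept the detected keyword only if every phoneme's observed
    duration x satisfies f(x) > f(mu + 3*sigma), i.e. lies within about
    three standard deviations of that phoneme's mean duration."""
    for x, (mu, sigma) in zip(durations, duration_models):
        threshold = duration_pdf(mu + 3.0 * sigma, mu, sigma)
        if duration_pdf(x, mu, sigma) <= threshold:
            return False
    return True
```

Because the Gaussian pdf is monotone in the distance from the mean, comparing f(x) against f(mu + 3*sigma) is equivalent to checking |x - mu| < 3*sigma.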
4 Performance Evaluation

In this section, we evaluate the performance of the proposed method in terms of the false acceptance rate and the false rejection rate. To measure the false acceptance rate, we used data that did not contain the keyword; to measure the false rejection rate, we used data containing the keyword. Tables 1 and 2 show the performance of the anti-keyword scoring method using the weighted Euclidean distance and the Kullback-Leibler distance, respectively, while Table 3 shows the performance of the proposed duration model-based post-processing method. As shown in Tables 1 and 2, the false rejection rate was inversely
proportional to the false acceptance rate for the anti-keyword scoring method. The lowest false acceptance rate of the anti-keyword scoring method was 34.86%. However, the false acceptance rate of the proposed method was 29.00%, as shown in Table 3. The lowest false rejection rate of the anti-keyword scoring method was 18.50%, at which point the false acceptance rate was 95.90%. In contrast, the false rejection rate of the proposed method was 37.50%. Comparing the tables thus shows that the proposed method reduced both the false acceptance rate and the false rejection rate.

Table 1. Performance comparison of an anti-keyword scoring method using the weighted Euclidean distance depending on different thresholds
Threshold   False acceptance rate (%)   False rejection rate (%)
4.0         95.90                       18.50
3.5         91.39                       42.50
3.0         84.42                       67.50
2.5         74.18                       90.50
2.0         55.32                       99.00
Average     80.24                       63.60
Table 2. Performance comparison of an anti-keyword scoring method using the Kullback-Leibler distance depending on different thresholds

Threshold   False acceptance rate (%)   False rejection rate (%)
4.0         92.62                       47.00
3.5         82.37                       72.50
3.0         70.49                       89.50
2.5         54.91                       97.50
2.0         34.86                       99.50
Average     67.04                       81.20
Table 3. Performance of the proposed duration model-based method
Decision rule           False acceptance rate (%)   False rejection rate (%)
f(x) > f(mu + 3*sigma)  29.00                       37.50
5 Conclusion

We proposed a post-processing method based on a duration model in order to improve the performance of a keyword spotting system. For each phoneme, a duration model was trained as a Gaussian distribution. We compared the performance of the proposed duration model-based method with those of the anti-keyword scoring methods. As a result, the false acceptance and false rejection rates were found
to have an inverse relationship for the anti-keyword scoring methods, whereas the proposed method could reduce both the false acceptance and false rejection rates. Moreover, the anti-keyword scoring method yielded a false acceptance rate of 34.86% when its false rejection rate was as high as 99.50%, whereas the proposed method provided a false acceptance rate of 29.00% and a false rejection rate of 37.50%.

Acknowledgments. This research was supported in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0023888), as well as in part by the Ministry of Knowledge Economy (MKE), Korea, under the Information Technology Research Center (ITRC) support program supervised by the National IT Industry Promotion Agency (NIPA) (NIPA-2010-C1090-1021-0007).
Complexity Reduction of WSOLA-Based Time-Scale Modification Using Signal Period Estimation

Duk Su Kim1, Young Han Lee1, Hong Kook Kim1, Song Ha Choi2, Ji Woon Kim2, and Myeong Bo Kim2

1 School of Information and Communications, Gwangju Institute of Science and Technology, Gwangju 500-712, Korea
{dskim867,cpumaker,hongkook}@gist.ac.kr
2 Camcorder Business Team, Digital Media Business, Samsung Electronics, Suwon-si, Gyeonggi-do 443-742, Korea
{kmbo.kim,jiwoon.kim,songha.choi}@samsung.com
Abstract. In this paper, we propose a computational complexity reduction method for a waveform similarity overlap-and-add (WSOLA) based time-scale modification (TSM) algorithm using signal period estimation. In the proposed method, a signal period is estimated from the normalized cross-correlation. An optimal shift, a maximally similar point, of WSOLA for the current frame can be estimated from the estimated period obtained from the previous frame. Then, we reduce the search range for calculating the normalized cross-correlation around the estimated optimal shift instead of calculating for the full search range. In this manner, we can reduce the computational complexity required for normalized cross-correlations, which dominates most of the complexity in WSOLA. It is shown from experiments that the proposed method gives a relative complexity reduction of 56% for the WSOLA-based TSM algorithm while maintaining speech quality. Keywords: Time-scale modification, WSOLA, complexity reduction, signal period estimation.
1 Introduction

Time-scale modification (TSM) is a technique used to modify the duration of speech or audio signals while minimizing the distortion of other important characteristics, such as pitch and timbre. TSM has been widely used in the fields of speech and audio signal processing. For example, it has been used during preprocessing in speech recognition systems to improve the recognition rate [1]. Also, TSM can be applied to speech synthesis systems in order to produce more natural-sounding speech [2]. Moreover, TSM has been used to improve the compression rate in speech and audio coding [3]. Over the last three decades, various overlap-and-add TSM algorithms have been developed. Among them, synchronized overlap-and-add (SOLA) based TSM [4], pitch synchronous overlap-and-add (PSOLA) based TSM [2], and waveform similarity overlap-and-add (WSOLA) based TSM [5] show relatively good performance
T.-h. Kim et al. (Eds.): FGCN 2010, Part II, CCIS 120, pp. 155–161, 2010. © Springer-Verlag Berlin Heidelberg 2010
regarding output quality. However, SOLA-based and PSOLA-based TSM have disadvantages compared to WSOLA-based TSM. One disadvantage of SOLA-based TSM is that its output does not guarantee the exact output length, because overlap-and-add is performed according to output similarity; in other words, the overlap position differs in each frame. In PSOLA-based TSM, the output quality changes according to the pitch estimation algorithm. That is, TSM performance varies with the performance of the pitch estimation algorithm. Hence, PSOLA-based TSM requires a high-quality pitch estimation algorithm, which also incurs more computational complexity. In contrast, WSOLA-based TSM provides a similar output quality to the other two algorithms, while having a relatively lower computational complexity. Another benefit of WSOLA-based TSM is that the output length is guaranteed. Nevertheless, the number of computations in WSOLA-based TSM needs to be further reduced to implement it on a resource-limited device in real time. Also, since the search range for the similarity calculation becomes wider as the time-scale factor increases, real-time processing could be impossible for a high time-scale factor. In addition, the complexity of the similarity calculation increases geometrically as the sampling rate of the input signal increases. In an effort to reduce the complexity of WSOLA-based TSM, a reduction algorithm was proposed [6]. In that work, cross-correlation calculations were performed over a specific interval in order to adjust the number of samples participating in the calculation. In this paper, we propose a method to further reduce the complexity of WSOLA-based TSM using signal period estimation. Since short-time speech signals are periodic, the estimated period can be used for the similarity calculation, resulting in a reduced search range for the similarity calculation in the WSOLA-based TSM algorithm.
The organization of the rest of this paper is as follows. Following this introduction, we briefly review the WSOLA-based TSM algorithm in Section 2. After that, we discuss methods for reducing the complexity of WSOLA-based TSM in Section 3. In Section 4, the speech quality and computational complexity of the proposed method are compared with those of the conventional method. Finally, we conclude this paper in Section 5.
2 WSOLA-Based Time-Scale Modification

The WSOLA-based TSM algorithm uses an input similarity measure to eliminate the phase distortion of overlap-and-add based time-scale modification [5]. That is, it determines the input frame so as to maintain the natural continuity that exists in the input signal. The synthesis equation of the WSOLA algorithm is as follows [5]:

y(n) = \frac{\sum_k v(n - kL)\, x(n + kL\alpha - kL + \Delta_k)}{\sum_k v(n - kL)}   (1)
where x (n ) , y (n ) , and v(n ) are an input signal, its corresponding time-scaled output signal, and a window signal, respectively. In addition, L indicates the overlap-and-add (OLA) length, and α is a time-scale factor. In this paper, a Hanning window whose
length 2L is used for v(n). If α is greater than 1.0, the output signal is time-compressed; otherwise, the output signal is time-expanded. In Eq. (1), Δ_k represents an optimal shift of the k-th frame. The optimal shift is determined by

\Delta_k = \arg\max_{\Delta} \left[ corr\big(R(n), C_\Delta(n)\big) \right]   (2)

where corr(R(n), C_Δ(n)) is the normalized cross-correlation between the reference signal, R(n), and a candidate signal near the analysis instance, C_Δ(n), over the search range −Δ_max ≤ Δ ≤ +Δ_max. That is, the normalized cross-correlation is represented as

corr\big(R(n), C_\Delta(n)\big) = \frac{\sum_{n=0}^{2L-1} R(n)\, C_\Delta(n)}{\left( \sum_{n=0}^{2L-1} R^2(n) \right)^{1/2} \left( \sum_{n=0}^{2L-1} C_\Delta^2(n) \right)^{1/2}}.   (3)

In the above equation,

R(n) = v(n - (k-1)L)\, x(n + (k-1)L\alpha + \Delta_{k-1} + L)

and

C_\Delta(n) = v(n - kL)\, x(n + kL\alpha + \Delta).
Fig. 1. Example of WSOLA-based time-scale modification with a time-scale factor of 2
Fig. 1 shows the processing steps of the WSOLA algorithm with a time-scale factor of 2. Assuming that (A) is the output of the k-th frame, the reference signal of the (k+1)-th frame becomes (A'). In order to determine the output signal of the (k+1)-th frame, we calculate the normalized cross-correlation between (A') and the input signal over the range from −Δ_max to Δ_max. From this calculation, the shift Δ_{k+1} of the (k+1)-th frame that gives the greatest normalized cross-correlation is selected as the optimal shift, and the output signal is obtained using Eq. (1). In this figure, (B') is selected as the output signal; hence, it is overlapped with (A) to produce the time-scaled output signal.
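The optimal-shift search of Eqs. (2) and (3) can be sketched as follows. This is a minimal illustration: the toy chirp signal, frame positions, and the 2L = 640 reference length are example choices, not values taken from the paper.

```python
import numpy as np

def normalized_cross_correlation(r, c):
    """Eq. (3): normalized cross-correlation of reference r and candidate c."""
    denom = np.sqrt(np.sum(r * r)) * np.sqrt(np.sum(c * c))
    return float(np.sum(r * c) / denom) if denom > 0.0 else 0.0

def optimal_shift(x, ref, start, delta_max):
    """Eq. (2): search [-delta_max, +delta_max] for the best-matching shift."""
    best_delta, best_score = 0, -np.inf
    for delta in range(-delta_max, delta_max + 1):
        cand = x[start + delta : start + delta + len(ref)]
        score = normalized_cross_correlation(ref, cand)
        if score > best_score:
            best_delta, best_score = delta, score
    return best_delta

# A chirp, so only exact alignment reproduces the reference segment.
x = np.sin(1e-5 * np.arange(4000) ** 2)
ref = x[1000:1640]                       # 2L = 640-sample reference
print(optimal_shift(x, ref, 1030, 160))  # -> -30 (realigns to the reference)
```

The full search evaluates 2·Δ_max + 1 correlations per frame, which is the cost the proposed method reduces.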
3 Proposed Complexity Reduction Method for WSOLA

In this section, we propose a method for reducing the computational complexity of the WSOLA algorithm, whose processing steps are shown in Fig. 2. Speech signals are first segmented into frames of 960 samples, and then each frame is divided into three subframes. Next, the WSOLA algorithm is initialized for every frame according to a given time-scale factor. Then, the WSOLA algorithm is applied with a full search range to find an optimal shift for the first subframe. After that, the WSOLA algorithm for the second subframe is performed with a reduced search range depending on the optimal shift from the first subframe. If the normalized cross-correlation from the reduced search range is smaller than a pre-determined threshold, TH_NCC, the detection result is ignored; instead, the WSOLA algorithm is performed with a full search range. In the same manner, an optimal shift of the third subframe is obtained using the estimated optimal shift from the second subframe.
Fig. 2. Processing steps of the proposed complexity reduction method for the WSOLA algorithm, which are applied to the k-th frame
3.1 Optimal Shift Estimation and Similarity Calculation with a Reduced Range

Since speech signals are assumed to be periodic, we can estimate an optimal shift of the current subframe by using the optimal shift from the previous subframe. First of all, the signal period can be estimated as

p(i) = L(1 - \alpha) + \Delta_{k,i-2} - \Delta_{k,i-1}, \quad i = 1, 2   (4)

where p(i) is the signal period of the i-th subframe and Δ_{k,i} = 0 if i < 0. The estimated signal period is subsequently used to estimate the optimal shift for the i-th subframe, Δ_{k,i}, as

\Delta_{k,i} = \arg\max_l \left[ corr\big(R_i(n), \beta_l(n)\big) \right].   (5)

In the above equation, corr(R_i(n), β_l(n)) represents the normalized cross-correlation between the reference signal of the i-th subframe, R_i(n), and

\beta_l(n) = v(n - kL)\, x(n + kL\alpha + \Delta_{k,i-1} + L - l \cdot p(i)).

In addition, the search index l lies between −Δ_s and Δ_s, where Δ_s is chosen so that the resulting shifts Δ_{k,i} stay within the full search range, and Δ_max denotes the maximum absolute value of the full search range. Therefore, since the search range is reduced to [−Δ_s, Δ_s], we can reduce the computational complexity of the WSOLA algorithm.

3.2 Detection of Sudden Signal Period Changes

For a sudden change in the signal period between two adjacent subframes, the estimated signal period from the previous subframe cannot be applied to the next subframe. In this case, it is necessary to calculate the normalized cross-correlation over the full search range. In the proposed method, we detect a sudden signal period change using a pre-determined normalized cross-correlation threshold. In other words, the proposed method is applied only if there is no signal period change; otherwise, WSOLA is performed over the full search range. To show the effectiveness of the proposed method for complexity reduction, we measured the percentage of subframes to which the proposed method is applied. Several Korean speech utterances were recorded in a semi-anechoic chamber at a sampling rate of 32 kHz, where the number of frames for the utterances was 23,992.

Table 1. Percentage of subframes to which the proposed method is applied according to different normalized cross-correlation thresholds
Threshold   0.9     0.8     0.7     0.6     0.5
Ratio (%)   47.97   58.27   64.33   68.18   71.28
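The period estimate of Eq. (4) and the threshold fallback of Section 3.2 can be sketched as follows. The correlation surface is a stand-in for the real normalized cross-correlation, and the function names are illustrative.

```python
def estimate_period(L, alpha, delta_prev2, delta_prev1):
    """Eq. (4): p(i) = L*(1 - alpha) + delta_{k,i-2} - delta_{k,i-1}."""
    return L * (1.0 - alpha) + delta_prev2 - delta_prev1

def search_with_fallback(score_at, predicted, reduced, full, th_ncc=0.8):
    """Try shifts within +/-reduced of the predicted shift; if the best
    normalized cross-correlation stays below th_ncc (a sudden period
    change, Section 3.2), fall back to the full +/-full search."""
    best = max(range(predicted - reduced, predicted + reduced + 1), key=score_at)
    if score_at(best) < th_ncc:
        best = max(range(-full, full + 1), key=score_at)
    return best

# Hypothetical correlation surface peaking at shift -25:
score_at = lambda d: max(0.0, 1.0 - 0.01 * abs(d + 25))
print(search_with_fallback(score_at, predicted=-20, reduced=20, full=160))  # -> -25
```

When the peak falls inside the reduced window, only 2·Δ_s + 1 correlations are needed instead of 2·Δ_max + 1, which is where the complexity saving comes from.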
Table 1 shows the percentage of subframes to which the proposed method was applied, varying the threshold, TH_NCC, from 0.5 to 0.9 with a step of 0.1. The table shows that the percentage increased as the normalized cross-correlation threshold was lowered; however, speech quality tended to be degraded as the threshold was lowered. Thus, we set the threshold to 0.8 by considering the trade-off between complexity reduction and quality degradation.
4 Performance Evaluation

To evaluate the computational complexity and speech quality of the proposed WSOLA algorithm, we computed weighted million operations per second (WMOPS) and carried out a preference test with a time-scale factor of 0.5. We then compared the performance of the proposed WSOLA method with that of a conventional WSOLA algorithm that performs a full-range search. In the experiment, we prepared speech data spoken by five males and five females. Table 2 shows the experimental setup for the test. Table 3 compares the computational complexity between the conventional WSOLA algorithm and the proposed WSOLA algorithm. The table shows that we could obtain a complexity reduction of 56.0% on average. Table 4 shows the results of a preference test for male and female speech data. To this end, eight people with no hearing disabilities participated in the test. Two files, processed by the conventional WSOLA and the proposed WSOLA algorithm, were presented to the participants, and the participants were asked to choose their preference. If they felt no difference between the two files, they were guided to select 'no difference.' As shown in the table, the audio signals processed by the proposed WSOLA algorithm were perceived as similar to those processed by the conventional one, even though the proposed algorithm noticeably reduced the computational complexity.

Table 2. Experimental setup for the performance evaluation
Window length   OLA length    Δ_max         Δ_s          Threshold
640 samples     320 samples   160 samples   20 samples   0.8
Table 3. Comparison of the computational complexity between the conventional and the proposed WSOLA measured in WMOPS
Method          Conventional WSOLA   Proposed WSOLA   Reduction ratio (%)
Male speech     197.3                95.1             51.8
Female speech   197.3                78.5             60.2
Average         197.3                86.8             56.0
Table 4. Preference test results (%)
Method          Conventional WSOLA   No difference   Proposed WSOLA
Male speech     17.2                 71.4            11.4
Female speech   11.4                 77.2            11.4
5 Conclusion

In this paper, we proposed a complexity reduction method for WSOLA-based TSM by incorporating signal period estimation to reduce the search range of TSM. The proposed algorithm utilized the fact that speech signals are somewhat periodic over a short time interval. This allowed an optimal shift of the current frame to be estimated from the optimal shift obtained for the previous frame. As a result, we could reduce the computational complexity required for computing normalized cross-correlations. In the experiments, we obtained an average complexity reduction of 56% using the proposed WSOLA algorithm while maintaining speech quality.

Acknowledgments. This work was supported by the Mid-career Researcher Program through an NRF grant funded by MEST, Korea (No. 2010-0000135).
References

1. Chong-White, N.R., Cox, R.V.: Enhancing speech intelligibility using variable rate time-scale modification. Journal of the Acoustical Society of America 120(6), 3452 (2006)
2. Moulines, E., Charpentier, F.: Pitch-synchronous waveform processing techniques for text-to-speech synthesis using diphones. Speech Communication 9(5-6), 453–467 (1990)
3. Wayman, J.L., Wilson, D.L.: Some improvements on the synchronized-overlap-add method of time scale modification for use in real-time speech compression and noise filtering. IEEE Transactions on Acoustics, Speech, and Signal Processing 36(1), 139–140 (1988)
4. Roucos, S., Wilgus, A.: High quality time-scale modification of speech. In: Proceedings of ICASSP, pp. 236–239 (1985)
5. Verhelst, W., Roelands, M.: An overlap-add technique based on waveform similarity (WSOLA) for high quality time-scale modification of speech. In: Proceedings of ICASSP, pp. 554–557 (1993)
6. Choi, W.Y.: Audio signal time-scale modification method using variable length synthesis and reduced cross-correlation computations. U.S. Patent Application 2005/0273321 (2005)
A Real-Time Audio Upmixing Method from Stereo to 7.1-Channel Audio

Chan Jun Chun1, Young Han Lee1, Yong Guk Kim1, Hong Kook Kim1, and Choong Sang Cho2

1 School of Information and Communications, Gwangju Institute of Science and Technology (GIST), Gwangju 500-712, Korea
{cjchun,cpumaker,bestkyg,hongkook}@gist.ac.kr
2 Multimedia-IP Research Center, Korea Electronics Technology Institute, Seongnam-si, Gyeonggi-do 463-816, Korea
[email protected]

Abstract. In this paper, we propose a new method of upmixing stereo signals into 7.1-channel signals in order to provide more auditory realism. The proposed upmixing method employs adaptive panning to generate additional channels and a decorrelation technique to reproduce natural reverberant surround sounds. The performance of the proposed upmixing method is evaluated using a MUSHRA test and compared with those of conventional upmixing methods. The tests show that 7.1-channel audio signals upmixed by the proposed method are preferred not only over the original stereo audio signals but also over 7.1-channel audio signals upmixed by conventional methods.

Keywords: Audio upmixing, multi-channel audio, conversion of stereo to 7.1-channel audio, adaptive panning, decorrelator.
1 Introduction

Due to the rapidly increasing demand for audio applications, researchers have been investigating many audio fields. Among such fields, multi-channel audio systems, rather than stereo systems, utilize additional speakers to present a more realistic sound. Specifically, such audio systems not only improve ambient effects but also widen the sound stage. In multi-channel audio systems, the number of channels for playing audio signals should be identical to that used for recording in order to take full advantage of the system. If the audio content has fewer channels than the playback speaker configuration, full auditory realism cannot be expected. However, by using audio upmixing, i.e., conversion of stereo signals into multi-channel audio signals, this drawback can be mitigated. Thus, we can utilize mono or stereo audio content for multi-channel audio systems, providing more realistic sound. There exist numerous multi-channel audio systems; typical stereo, 5.1-channel, and 7.1-channel speaker configurations are shown in Figs. 1(a), 1(b), and 1(c), respectively, as defined by ITU-R Recommendation BS.775-1 [1].
T.-h. Kim et al. (Eds.): FGCN 2010, Part II, CCIS 120, pp. 162–171, 2010. © Springer-Verlag Berlin Heidelberg 2010
Although stereo
Fig. 1. Speaker configurations defined by ITU-R Recommendation BS.775-1: (a) stereo, (b) 5.1-channel, and (c) 7.1-channel
or 5.1-channel audio content is widely available, 7.1-channel audio content is still relatively rare. Therefore, it is necessary to convert stereo or 5.1-channel audio content into content suitable for a 7.1-channel speaker system. In this paper, we propose an audio upmixing method that converts stereo audio signals to 7.1-channel audio signals. To this end, we first review several upmixing methods that have been applied for converting stereo signals into 5.1-channel signals. The methods reviewed here include a passive surround decoding method [2], a least mean square (LMS)-based method [3], a principal component analysis (PCA)-based method [4], and panning methods [5]-[7]. After comparing the quality of these methods, we adopt adaptive panning to derive the center channel signal for 7.1-channel upmixing. Furthermore, a decorrelator is employed in order to reproduce reverberant effects in the surround channel signals. This paper is organized as follows. Following the introduction, we briefly review conventional upmixing methods for multi-channel speaker systems in Section 2. Next, we propose an upmixing method from stereo to 7.1-channel audio signals using adaptive panning and a decorrelator in Section 3. In order to compare the performance of the proposed method with those of the conventional methods, we conduct subjective tests based on multiple stimuli with hidden reference and anchor (MUSHRA) [8] in Section 4. Finally, we conclude this paper in Section 5.
2 Conventional Upmixing Methods

The following subsections describe several conventional upmixing methods, including the passive surround decoding, LMS-based, PCA-based, adaptive panning, constant power panning, and speaker-placement correction amplitude panning (SPCAP) methods.

2.1 Passive Surround Decoding Method

The passive surround decoding (PSD) method is an early passive version of the Dolby Surround Decoder [2]. In this method, a center channel is obtained by adding the original left and right channels. On the other hand, a surround channel can be derived by subtracting the right channel from the left channel. That is, the center and surround channels are obtained as
Center(n) = (x_L(n) + x_R(n)) / 2,   (1)
Rear(n) = (x_L(n) - x_R(n)) / 2   (2)
where x_L(n) and x_R(n) denote the left and right audio samples at time index n, respectively. Note that, in order to maintain a constant acoustic energy, the center and surround channels are lowered by 3 dB, which is implemented by multiplying the center and surround channel signals by 1/2. However, there are two surround channels in the 5.1-channel configuration, as shown in Fig. 1(b). In this paper, a discrete Hilbert transform is used to generate these channels [9]. By using a finite-duration impulse response (FIR) approximation having a constant group delay, we can implement the discrete Hilbert transform. In particular, the approximation is done using a Kaiser window, which gives

h(n) = \begin{cases} \dfrac{I_0\!\left(\beta\left[1 - \left(\frac{n - n_d}{n_d}\right)^2\right]^{1/2}\right)}{I_0(\beta)} \cdot \dfrac{\sin^2\!\big(\pi(n - n_d)/2\big)}{\pi(n - n_d)/2}, & 0 \le n \le M \\ 0, & \text{otherwise} \end{cases}   (3)
where M is the order of the FIR discrete Hilbert transform, and n_d is equal to M/2. In this case, M and β are set to 31 and 2.629, respectively.

2.2 LMS-Based Upmixing Method

The LMS-based upmixing method creates the center and surround channels using the LMS algorithm [3]. In this method, one of the original stereo channels is taken as the desired signal, d(n), of the adaptive filter, and the other is considered as the input, x(n). Then, the error signal, e(n), is the difference between the output, y(n), of the filter and the desired signal, d(n). Here, y(n) is defined as a linear combination of the input samples:

y(n) = \mathbf{w}^T(n)\, \mathbf{x}(n)   (4)
where \mathbf{x}(n) = [x(n)\; x(n-1)\; \cdots\; x(n-N+1)]^T and \mathbf{w}(n) = [w_0(n)\; w_1(n)\; \cdots\; w_{N-1}(n)]^T. In Eq. (4), \mathbf{w}(n) is the coefficient vector of the N-tap adaptive filter, which is obtained with the LMS algorithm as

\mathbf{w}(n+1) = \mathbf{w}(n) + 2\mu\, e(n)\, \mathbf{x}(n)   (5)
where μ is a constant step size, which is set to 10^{-4} in this paper. As a result, y(n) and e(n) are the signals for the center and surround channels, respectively. Similarly, a discrete Hilbert transform using a Kaiser window is utilized to derive the two surround channels, as in Eq. (3).
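The LMS-based scheme above can be sketched as follows. This is a minimal illustration: the tap count, buffer handling, and zero initialization are example choices, and the Hilbert-transform derivation of the two surround channels is omitted.

```python
import numpy as np

def lms_center_surround(x_left, x_right, n_taps=32, mu=1e-4):
    """LMS-based upmixing sketch: the left channel is the adaptive-filter
    input and the right channel the desired signal; the filter output y(n)
    feeds the center channel and the error e(n) the surround channel."""
    w = np.zeros(n_taps)
    buf = np.zeros(n_taps)
    center = np.zeros_like(x_left)
    surround = np.zeros_like(x_left)
    for n in range(len(x_left)):
        buf = np.roll(buf, 1)
        buf[0] = x_left[n]               # x(n) = [x(n) ... x(n-N+1)]
        y = float(w @ buf)               # Eq. (4): y(n) = w^T(n) x(n)
        e = x_right[n] - y               # error against the desired signal
        w += 2.0 * mu * e * buf          # Eq. (5): LMS update
        center[n], surround[n] = y, e
    return center, surround
```

With strongly correlated stereo input the filter converges so that the center channel carries the common component and the surround channel the residual; the paper's step size of 10^{-4} makes this convergence gradual.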
2.3 PCA-Based Upmixing Method

The PCA-based upmixing method decomposes the original stereo signals into two different signals, where one is highly correlated and the other is largely uncorrelated [4]. In other words, to derive the center and surround channels, a 2 × 2 covariance matrix, A, is obtained as

A = \begin{bmatrix} cov(x_L, x_L) & cov(x_L, x_R) \\ cov(x_R, x_L) & cov(x_R, x_R) \end{bmatrix}   (6)

where cov(x_p, x_q) is the covariance of x_p and x_q, and p and q can be the left channel, L, or the right channel, R. The covariance matrix in Eq. (6) has two eigenvectors, which become the basis vectors for a new coordinate system. These eigenvectors are then used as weight vectors applied to the left and right channels to generate the center and surround channels:

Center(n) = c_L x_L(n) + c_R x_R(n),   (7)
Rear(n) = s_L x_L(n) + s_R x_R(n)   (8)
where [c_L c_R] is the eigenvector corresponding to the greatest eigenvalue and [s_L s_R] is the other eigenvector.

2.4 Adaptive Panning Method

The adaptive panning (ADP) method generates the center and surround channels by panning the original stereo signals [5]. A weight vector for ADP is recursively estimated using the LMS algorithm. Let us define y(n) to be a linear combination of the original stereo signals, \mathbf{x}(n) = [x_L(n)\; x_R(n)]^T, with a weight vector, \mathbf{w}(n) = [w_L(n)\; w_R(n)]^T. Then, y(n) is represented as

y(n) = \mathbf{w}^T(n)\, \mathbf{x}(n)   (9)

where w_L(n) and w_R(n), the elements of the weight vector corresponding to the left and right channels, respectively, are estimated using the LMS algorithm as

w_L(n+1) = w_L(n) - \mu\, y(n)[x_L(n) - w_L(n)\, y(n)],   (10)
w_R(n+1) = w_R(n) - \mu\, y(n)[x_R(n) - w_R(n)\, y(n)]   (11)

where μ is a constant step size, set to 10^{-10} in this paper. Finally, the center and surround channels can be determined as

Center(n) = w_L(n)\, x_L(n) + w_R(n)\, x_R(n),   (12)
Rear(n) = w_R(n)\, x_L(n) - w_L(n)\, x_R(n).   (13)
Finally, in order to derive surround channels, we also use a discrete Hilbert transform as described in Section 2.1.
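The adaptive panning recursion of Eqs. (9)-(13) can be sketched as follows. The per-sample loop and update rules follow the text; the initial weights are an illustrative choice, since the text does not specify them.

```python
import numpy as np

def adaptive_panning(x_left, x_right, mu=1e-10):
    """Adaptive panning sketch: weights adapted per sample with the LMS
    rule of Eqs. (10)-(11); center and surround follow Eqs. (12)-(13)."""
    wl, wr = 0.5, 0.5                    # hypothetical initial weights
    center = np.zeros_like(x_left)
    rear = np.zeros_like(x_left)
    for n in range(len(x_left)):
        y = wl * x_left[n] + wr * x_right[n]              # Eq. (9)
        wl -= mu * y * (x_left[n] - wl * y)               # Eq. (10)
        wr -= mu * y * (x_right[n] - wr * y)              # Eq. (11)
        center[n] = wl * x_left[n] + wr * x_right[n]      # Eq. (12)
        rear[n] = wr * x_left[n] - wl * x_right[n]        # Eq. (13)
    return center, rear
```

With the paper's very small step size of 10^{-10}, the weights drift only slowly, so over short excerpts the center stays close to an equal mix of the two input channels.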
2.5 Constant Power Panning Method

In this method, audio signals for additional channels are determined by panning the original signals. If each additional channel is mixed from the stereo audio signals, denoted as L and R in Fig. 2, we have

y(n) = g_L x_L(n) + g_R x_R(n)   (14)

where y(n) is the audio signal for an additional channel, and g_L and g_R are panning gains for the left and right channels, x_L(n) and x_R(n), respectively. In order to estimate the panning gains, a mapped angle, θ_m, is calculated as

\theta_m = \frac{\theta_i - \theta_1}{\theta_4 - \theta_1} \cdot 90   (15)

where θ_i, shown in Fig. 2, is the placement angle of the i-th additional channel, and θ_1 and θ_4 are the reference angles between which θ_i is mapped onto [0°, 90°]. Then, the panning gains, g_L and g_R, are determined as

g_L = \cos\theta_m \quad \text{and} \quad g_R = \sin\theta_m.   (16)

When stereo audio signals are converted into 5.1-channel audio signals, Table 1 shows the panning gains for each additional channel. Finally, we determine the center and surround channels as

Center(n) = 0.7071 \cdot x_L(n) + 0.7071 \cdot x_R(n),   (17)
RL(n) = 0.9135 \cdot x_L(n) + 0.4067 \cdot x_R(n),   (18)
RR(n) = 0.4067 \cdot x_L(n) + 0.9135 \cdot x_R(n).   (19)
In order to create the LFE channel, we employ an FIR low-pass filter with a cut-off frequency of 200 Hz; the LFE channel signals are derived by low-pass filtering the center channel signals.
Fig. 2. Angles for computing panning gains used in Eq. (14) for the constant power panning method
Table 1. Panning gains for constant power panning from stereo to 5.1-channel audio

Channel   C (i=0)   L (i=1)   RL (i=2)   RR (i=3)   R (i=4)
g_R       0.7071    0         0.4067     0.9135     1
g_L       0.7071    1         0.9135     0.4067     0
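The constant-power property can be checked numerically: Eq. (16) guarantees g_L² + g_R² = 1 for any mapped angle. The mapped angles used below (45° for the center column of Table 1, about 24° for the rear-left column) are inferred from the tabulated gains rather than stated in the text.

```python
import math

def constant_power_gains(theta_m_deg):
    """Eq. (16): g_L = cos(theta_m), g_R = sin(theta_m). The squared gains
    always sum to one, which keeps the panned power constant."""
    t = math.radians(theta_m_deg)
    return math.cos(t), math.sin(t)

# Center channel: theta_m = 45 degrees -> (0.7071, 0.7071) as in Table 1.
gl, gr = constant_power_gains(45.0)
print(round(gl, 4), round(gr, 4))   # 0.7071 0.7071
# The rear-left column of Table 1 corresponds to theta_m of about 24 degrees.
gl, gr = constant_power_gains(24.0)
print(round(gl, 4), round(gr, 4))   # 0.9135 0.4067
assert abs(gl * gl + gr * gr - 1.0) < 1e-12
```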
2.6 Speaker-Placement Correction Amplitude Panning (SPCAP) Method

Similarly to the constant power panning method, the SPCAP method derives additional channels by panning the original signals [7]. In SPCAP, however, a cosine-weighted panning method is used for calculating a panning value. If stereo audio signals are upmixed to 5.1-channel audio signals using SPCAP, two panning values are estimated as

p_L = \frac{1}{2}\big[1 + \cos(\theta_i - \theta_L)\big] \quad \text{and} \quad p_R = \frac{1}{2}\big[1 + \cos(\theta_i - \theta_R)\big]   (20)

where L and R denote the left and right channels, respectively, and θ_i is the placement angle of the additional channel, as shown in Fig. 2. In order to conserve power, the panning values are normalized to obtain the two panning gains, g_L and g_R, as

g_L = \frac{p_L}{\beta} \quad \text{and} \quad g_R = \frac{p_R}{\beta}   (21)

where β = p_L + p_R. By using Eq. (21), audio signals for the additional channels are derived as

Center(n) = 0.5 \cdot x_L(n) + 0.5 \cdot x_R(n),   (22)
RL(n) = 0.8338 \cdot x_L(n) + 0.1662 \cdot x_R(n),   (23)
RR(n) = 0.1662 \cdot x_L(n) + 0.8338 \cdot x_R(n).   (24)
Similarly, we use an FIR low-pass filter having a cut-off frequency of 200 Hz to derive the LFE channel, considering the input of the low-pass filter as the center channel.
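Eqs. (20) and (21) can be verified numerically. The speaker angles are not given in the text (they appear only in Fig. 2), so the sketch below assumes the common placements θ_L = −30°, θ_R = +30°, and θ_i = −110° for the rear-left channel; under that assumption it reproduces the coefficients of Eqs. (22) and (23).

```python
import math

def spcap_gains(theta_i_deg, theta_l_deg=-30.0, theta_r_deg=30.0):
    """Eqs. (20)-(21): cosine-weighted panning values, normalized so that
    the two gains sum to one (power conservation)."""
    pl = 0.5 * (1.0 + math.cos(math.radians(theta_i_deg - theta_l_deg)))
    pr = 0.5 * (1.0 + math.cos(math.radians(theta_i_deg - theta_r_deg)))
    beta = pl + pr
    return pl / beta, pr / beta

gl, gr = spcap_gains(0.0)           # center channel
print(round(gl, 4), round(gr, 4))   # 0.5 0.5
gl, gr = spcap_gains(-110.0)        # rear left, assumed ITU-style placement
print(round(gl, 4), round(gr, 4))   # 0.8338 0.1662
```

Note the design difference from Section 2.5: SPCAP normalizes the gains to sum to one, whereas constant power panning normalizes their squares.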
3 Upmixing from Stereo to 7.1-Channel Audio

In this section, we propose a new upmixing method from stereo to 7.1-channel audio signals. Fig. 3 shows the overall structure of the proposed method. As shown in the figure, the proposed method combines two upmixing stages: upmixing from stereo to 5.1-channel signals, and upmixing from 5.1-channel to 7.1-channel signals. The stereo-to-5.1 upmixing block is adopted from one of the upmixing methods described in Section 2. On the other hand, the 5.1-to-7.1 upmixing block employs a decorrelator to generate the surround channels for the 7.1-channel configuration.
Fig. 3. Overall structure of upmixing stereo to 7.1-channel audio signals
3.1 Stereo to 5.1-Channel Upmixing
Fig. 4(a) shows a detailed block diagram for upmixing stereo to 5.1-channel audio signals based on the adaptive panning method described in Section 2.4. Note that the adaptive panning method was selected based on exhaustive subjective tests. In the figure, each channel is labeled as FL (front left), FR (front right), C (center), LFE (low frequency enhancement), RL (rear left), or RR (rear right).
Fig. 4. Block diagram for upmixing: (a) stereo to 5.1-channel audio signals, (b) 5.1-channel to 7.1-channel audio signals
3.2 5.1 to 7.1-Channel Upmixing
As shown in Figs. 1(b) and 1(c), the channel configuration for 7.1-channel differs from that of 5.1-channel. In other words, the surround channels in 5.1-channel are split into two pairs of stereo channels: one pair of side channels, SL (side left) and SR (side right), and one pair of rear channels, RL (rear left) and RR (rear right). The side channels are placed further forward than the 5.1-channel surround channels, while the rear channels are placed further back. Fig. 4(b) shows a block diagram for performing the 5.1-to-7.1 channel upmixing. Similarly to the block diagram shown in Fig. 4(a), the adaptive panning method is also applied to create SL and SR for 7.1-channel. Here, SL and SR are determined by panning the front and rear channels as

SL(n) = w_{FL}(n)\, FL(n) + w_{RL}(n)\, RL(n),   (25)
SR(n) = w_{FR}(n)\, FR(n) + w_{RR}(n)\, RR(n).   (26)
The weight vectors are recursively estimated using the LMS algorithm as

w_{FL}(n+1) = w_{FL}(n) + \mu\, SL(n)[FL(n) - w_{FL}(n)\, SL(n)]   (27)
w_{RL}(n+1) = w_{RL}(n) + \mu\, SL(n)[RL(n) - w_{RL}(n)\, SL(n)]   (28)
w_{FR}(n+1) = w_{FR}(n) + \mu\, SR(n)[FR(n) - w_{FR}(n)\, SR(n)]   (29)
w_{RR}(n+1) = w_{RR}(n) + \mu\, SR(n)[RR(n) - w_{RR}(n)\, SR(n)]   (30)
where μ is a constant step size, set to 10^−10. In order to add reverberation effects to the rear channels, we employ a decorrelator that is designed by randomizing the phase response in the frequency domain. The following subsections describe the decorrelator design and a mixing method using the decorrelator in detail.

3.2.1 Decorrelator Design
One approach to designing a decorrelator is to employ magnitude and phase randomization. Initially, the time-domain audio signals are transformed into the frequency domain using a Fourier transform. Then, the magnitude and phase responses of the transformed audio signals are obtained. Subsequently, we randomize the magnitude and phase responses; however, unwanted discontinuities could occur at the response boundaries. Therefore, we employ a cosine interpolation to eliminate these discontinuities, with the weight value shown in Table 2. Finally, we obtain the decorrelated audio signals using an inverse Fourier transform.

Table 2. Weights in the phase response

Frequency (kHz)   16
Weight            0.625
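A minimal sketch of such a phase-randomizing decorrelator is shown below. The moving-average smoothing of the random phase stands in for the cosine interpolation of the paper; the Table 2 weights are not reproduced, so the exact response differs from the authors' design.

```python
import numpy as np

def decorrelate(x, seed=0):
    """Phase-randomizing decorrelator (sketch of Section 3.2.1).

    The signal is transformed to the frequency domain, its magnitude
    response is kept, and its phase response is replaced with random
    values.  The cosine interpolation of the paper, which smooths
    discontinuities at response boundaries, is approximated here by a
    short moving-average smoothing of the random phase.
    """
    rng = np.random.default_rng(seed)
    X = np.fft.rfft(x)
    phase = rng.uniform(-np.pi, np.pi, X.shape[0])
    # smooth the random phase so neighboring bins do not jump abruptly
    kernel = np.ones(8) / 8.0
    phase = np.convolve(phase, kernel, mode="same")
    return np.fft.irfft(np.abs(X) * np.exp(1j * phase), n=len(x))
```

Because only the phase is altered, the output keeps the input's magnitude spectrum (and hence its energy, up to the DC/Nyquist bins) while being largely uncorrelated with it.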
3.2.2 Mixing Method
After the decorrelation process, the original and decorrelated audio signals are mixed to generate the rear left and right channel signals:

RL(n) = (0.7071 · SL(n) + 0.7071 · DL(n)) / 2,    (31)

RR(n) = (0.7071 · SR(n) + 0.7071 · DR(n)) / 2    (32)
where DL(n) and DR(n) are the audio signals decorrelated from SL(n) and SR(n), respectively. Note that, in order to match the energy of the original audio signal with that of the upmixed audio signal, the rear channel signals are lowered by 6 dB, which is implemented by multiplying by 1/2.
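The adaptive panning of Eqs. (25)-(30) and the rear-channel mixing of Eqs. (31)-(32) can be sketched together as follows; the initial weight values are an assumption, since the paper does not state them.

```python
import numpy as np

MU = 1e-10  # constant LMS step size, as given after Eq. (30)

def adaptive_pan(front, rear, w_front=0.5, w_rear=0.5):
    """Side-channel synthesis by adaptive panning, Eqs. (25)-(28).

    Returns the side signal and the final weights.  The initial
    weights (0.5, 0.5) are an assumption of this sketch.
    """
    side = np.empty_like(front, dtype=float)
    for n in range(len(front)):
        s = w_front * front[n] + w_rear * rear[n]     # Eq. (25)
        side[n] = s
        w_front += MU * s * (front[n] - w_front * s)  # Eq. (27)
        w_rear += MU * s * (rear[n] - w_rear * s)     # Eq. (28)
    return side, w_front, w_rear

def mix_rear(side, decorrelated):
    """Eqs. (31)-(32): mix a side signal with its decorrelated copy and
    lower the result by 6 dB via the extra factor 1/2."""
    return (0.7071 * side + 0.7071 * decorrelated) / 2.0
```

With the very small step size of 10^−10, the weights drift only slowly from their initial values, so the panning behaves almost statically over short excerpts.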
4 Performance Evaluation
We compared the quality of the proposed upmixing method in two respects: upmixing from 5.1-channel to 7.1-channel and upmixing from stereo to 7.1-channel audio signals. To this end, we conducted MUSHRA tests in compliance with the multi-channel configuration standard defined by ITU-R Recommendation BS.775-1 [1]. The audio contents, sampled at 44.1 kHz, compared in the tests were as follows:
C.J. Chun et al.
- Hidden reference
- 3.5 kHz low-pass filtered anchor
- 7 kHz low-pass filtered anchor
- 5.1-channel audio signals (in the case of upmixing from stereo to 7.1-channel, stereo audio signals)
- Upmixed audio signals obtained by the conventional upmixing methods
- Upmixed audio signals obtained by the proposed method
The MUSHRA test results for upmixing from 5.1-channel to 7.1-channel and from stereo to 7.1-channel are shown in Figs. 5(a) and 5(b), respectively. The figures show that the upmixed 7.1-channel audio signals were preferred over the original audio signals. Moreover, the proposed upmixing method outperformed the conventional methods.
Fig. 5. Comparison of MUSHRA test scores for the audio signals upmixed by different methods: (a) upmixing from 5.1-channel to 7.1-channel audio signals, (b) upmixing from stereo to 7.1-channel audio signals
5 Conclusion
In this paper, we proposed an upmixing method based on adaptive panning and decorrelation. The proposed method can convert stereo signals into 7.1-channel signals. Moreover, a comparison of the proposed method with conventional methods in terms of MUSHRA test scores showed that the 7.1-channel audio signals generated by the proposed upmixing method were preferred over those generated by the conventional methods.
Acknowledgement This work was supported in part by “Fusion-Tech. Developments for THz Info. & Comm.” Program of GIST in 2010, and in part by the Ministry of Knowledge Economy (MKE), Korea, under the Information Technology Research Center (ITRC) support program supervised by the National IT Industry Promotion Agency (NIPA) (NIPA-2010-C1090-1021-0007).
References
1. ITU-R BS.775-1: Multi-Channel Stereophonic Sound System with or without Accompanying Picture (1994)
2. Dolby Laboratory, http://www.dolby.com/professional/getting-dolbytechnologies/index.html
3. Bai, M.R., Shih, G.-Y., Hong, J.-R.: Upmixing and downmixing two-channel stereo audio for consumer electronics. IEEE Trans. on Consumer Electronics 53, 1011–1019 (2007)
4. Chun, C.J., Kim, Y.G., Yang, J.Y., Kim, H.K.: Real-time conversion of stereo audio to 5.1 channel audio for providing realistic sounds. International Journal of Signal Processing, Image Processing and Pattern Recognition 2(4), 85–94 (2009)
5. Irwan, R., Aarts, R.M.: Two-to-five channel sound processing. J. Audio Eng. Soc. 50, 914–926 (2002)
6. West, J.R.: Five-Channel Panning Laws: An Analytical and Experimental Comparison. M.S. Thesis, Department of Music Engineering, University of Miami, Coral Gables, Florida (1998)
7. Sadek, R., Kyriakakis, C.: A novel multichannel panning method for standard and arbitrary loudspeaker configurations. In: Proc. of 117th AES Convention, Preprint 6263, San Francisco, CA (2004)
8. ITU-R BS.1534-1: Method for the Subjective Assessment of Intermediate Quality Levels of Coding Systems (2003)
9. Bosi, M., Goldberg, R.E.: Introduction to Digital Audio Coding and Standards. Kluwer Academic Publishers, Massachusetts (2002)
Statistical Model-Based Voice Activity Detection Using Spatial Cues and Log Energy for Dual-Channel Noisy Speech Recognition Ji Hun Park1, Min Hwa Shin2, and Hong Kook Kim1 1
School of Information and Communications Gwangju Institute of Science and Technology, Gwangju 500-712, Korea {jh_park,hongkook}@gist.ac.kr 2 Multimedia IP Research Center Korea Electronics Technology Institute, Seongnam, Gyeonggi-do 463-816, Korea
[email protected]

Abstract. In this paper, a voice activity detection (VAD) method for dual-channel noisy speech recognition is proposed on the basis of statistical models constructed from spatial cues and log energy. In particular, the spatial cues are composed of the interaural time differences and interaural level differences of dual-channel speech signals, and the statistical models for speech presence and absence are based on a Gaussian kernel density. In order to evaluate the performance of the proposed VAD method, speech recognition is performed using only the speech signals segmented by the proposed VAD method. The performance of the proposed VAD method is then compared with those of conventional methods such as a signal-to-noise ratio variance based method and a phase vector based method. It is shown from the experiments that the proposed VAD method outperforms the conventional methods, providing relative word error rate reductions of 19.5% and 12.2%, respectively.

Keywords: Voice activity detection (VAD), end-point detection, dual-channel speech, speech recognition, spatial cues.
1
Introduction
T.-h. Kim et al. (Eds.): FGCN 2010, Part II, CCIS 120, pp. 172–179, 2010. © Springer-Verlag Berlin Heidelberg 2010

Voice activity detection (VAD) is a technique for detecting the presence or absence of desired speech. VAD has been used in various speech-based applications, such as speech recognition and speech coding, by deactivating some processes during non-speech intervals. By doing this, we can reduce the number of computations and the network bandwidth usage [1][2]. Many VAD methods have been proposed to discriminate speech intervals from non-speech intervals. Among them, methods based on energy levels and zero crossing rates are the most common, as they detect speech intervals effectively with low complexity. However, the discrimination capability of features such as energy levels and zero crossing rates decreases under low signal-to-noise ratio (SNR) conditions, resulting in degraded VAD performance [3]. To overcome this problem, noise-robust VAD features such as
periodicity measures [4], cepstral features [5], and long-term spectral divergence [6] have been investigated. In particular, Davis et al. incorporated the Welch-Bartlett method [7] into a VAD method to obtain a low-variance spectrum estimate [8]. The estimate of the power spectral density of the noise and the variance of the SNRs estimated from non-speech intervals were then utilized as VAD features. This method provides stable VAD performance under different SNR conditions, but it tends to lose its effectiveness in non-stationary noise environments. Thus, in order to improve VAD performance in non-stationary noise environments, Kim et al. proposed a multi-channel VAD method using a phase vector as a VAD feature [9]. However, this method requires a fairly large number of microphones to achieve improved performance, similar to beamforming techniques, which limits its performance improvement in a dual-channel microphone environment. In this paper, we propose a statistical model-based VAD using spatial cues and log energy for dual-channel noisy speech recognition. To this end, statistical models for speech presence and absence are constructed, and speech intervals are then detected via these statistical models. Following this introduction, we describe the proposed statistical model-based VAD employing spatial cues and log energy in Section 2. In Section 3, we evaluate the performance of the proposed VAD method in terms of discrimination analysis and speech recognition performance. Finally, we summarize our findings in Section 4.
2
Proposed Statistical Model-Based VAD
Fig. 1 shows a schematic diagram of the proposed statistical model-based VAD using spatial cues and log energy as VAD features. In the figure, the proposed method first extracts auditory spectral signals from the binaural input noisy speech. Next, a likelihood ratio of the probabilities of speech presence and speech absence is estimated from a Gaussian function-based statistical model. Finally, a speech interval is determined by comparing the likelihood ratio with a threshold.
Fig. 1. Block diagram of the proposed statistical model-based VAD employing spatial cues and log energy
2.1
Gammatone Analysis
Binaural input signals, at a sampling rate of 16 kHz, are decomposed into auditory spectral signals by a gammatone filterbank [10] whose center frequencies are linearly spaced on an equivalent rectangular bandwidth (ERB) scale [11] from 50 Hz to 8
kHz. The auditory spectral signals are then windowed using a rectangular window with a time resolution of 20 msec and a frame rate of 100 Hz, resulting in the left and right auditory spectral signals for the i-th frame and the j-th frequency band, x_L^{i,j}(n) and x_R^{i,j}(n), respectively.
2.2
Spatial Cues and Log Energy Extraction
In order to construct the statistical models for speech presence and absence, spatial cues such as the interaural time difference (ITD), interaural level difference (ILD), and log energy are extracted for each time-frequency (T-F) bin. First of all, a normalized cross-correlation coefficient for each T-F bin is computed between the left and right auditory spectral signals, which is defined as N −1
CC (τ ) =
∑ xLi , j (n) xRi, j (n − τ )
n=0
i, j
N −1
N −1
n=0
n=0
(1)
∑ ( xLi , j (n))2 ∑ ( xRi , j (n)) 2
where τ ranges from −16 to 16, which corresponds to a range from −1 msec to 1 msec at a sampling rate of 16 kHz. In addition, N represents the number of speech samples per frame and is set to 320 in this paper. Next, the ITD for the (i, j)-th T-F bin is estimated as the time lag at which the normalized CC is maximized. In other words,
t_{i,j} = argmax_τ CC^{i,j}(τ)    (2)
In addition to the ITD extraction, the ILD for the (i, j)-th T-F bin is computed as the ratio of the energies obtained from the left and right auditory spectral signals:

l_{i,j} = 10 log10 [ Σ_{n=0}^{N−1} (x_L^{i,j}(n))^2 / Σ_{n=0}^{N−1} (x_R^{i,j}(n))^2 ]    (3)
The proposed VAD method assumes that the speech and noise sources are all directional and that the speech source is located directly in front of the dual microphones, i.e., at an angle of 0°. In this case, the distributions of the ITD and ILD extracted in silence or pause intervals are similar to those extracted from the desired speech. To discriminate speech intervals from such silence intervals, a log energy parameter is incorporated into the statistical models. The log energy parameter is defined as

e_{i,j} = log10 Σ_{n=0}^{N−1} (x_L^{i,j}(n))^2    (4)
Note that the log energy is used to discriminate speech intervals from silence intervals by investigating the energy level without regard to spatial information. Therefore, we choose the left channel in Eq. (4), although either channel could be utilized.
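The cue extraction of Eqs. (1)-(4) can be sketched for a single T-F bin as follows; the circular-shift correlation and the small guard constants against division by zero are sketch-level simplifications, not part of the paper's definitions.

```python
import numpy as np

def spatial_cues(xl, xr, max_lag=16):
    """ITD (Eq. (2)), ILD (Eq. (3)), and log energy (Eq. (4)) for one
    time-frequency bin.

    xl, xr: left/right auditory spectral signals of one frame (N = 320
    samples at 16 kHz in the paper).  The linear cross-correlation of
    Eq. (1) is approximated here with a circular shift for brevity.
    """
    el = np.sum(xl ** 2)
    er = np.sum(xr ** 2)
    denom = np.sqrt(el * er) + 1e-12               # normalization of Eq. (1)
    lags = np.arange(-max_lag, max_lag + 1)        # -1 ms .. +1 ms at 16 kHz
    cc = np.array([np.sum(xl * np.roll(xr, lag)) / denom for lag in lags])
    itd = int(lags[np.argmax(cc)])                 # Eq. (2)
    ild = 10.0 * np.log10((el + 1e-12) / (er + 1e-12))   # Eq. (3)
    log_energy = np.log10(el + 1e-12)              # Eq. (4)
    return itd, ild, log_energy
```

For a right-channel signal that is a delayed copy of the left channel, the correlation peak appears at the negative of that delay, and the ILD is zero because both channels carry equal energy.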
2.3
Model-Based VAD Feature Extraction
To detect speech intervals, a likelihood ratio test is performed. The likelihood ratio is deduced from the statistical models for speech presence and absence, where each model is trained by employing a Gaussian kernel density estimator [12]. To this end, we collect all the three-dimensional vectors composed of ITD, ILD, and log energy obtained from the training data. Then, a three-dimensional (3D) plane is constructed for each frequency band, where the axes of the plane are linearly quantized from the minimum to the maximum of the ITD, ILD, and log energy values. In this paper, each axis is quantized into 50 steps. By using the Gaussian kernel density estimator, the likelihoods of speech presence and absence, i.e., of the desired speech and non-speech, are then estimated for each region in the 3D plane. As a result, the models for speech presence and absence each comprise 125,000 (= 50 × 50 × 50) likelihoods. Next, the likelihoods of speech presence and absence for the (i, j)-th T-F bin can be estimated by searching the corresponding regions of the statistical speech and non-speech models, respectively. In the proposed VAD method, the likelihood ratio, Λ(i), of speech presence over speech absence for each analysis frame is utilized as the VAD feature, computed by taking the sum of the likelihood ratios over all frequency bands. In other words,

Λ(i) = Σ_{j=0}^{N_j−1} Λ(i, j)    (5)
where Λ(i, j) = p(t_{i,j}, l_{i,j}, e_{i,j} | π_{j,s}) / p(t_{i,j}, l_{i,j}, e_{i,j} | π_{j,n}), and p(t_{i,j}, l_{i,j}, e_{i,j} | π_{j,s}) is the speech presence probability of the VAD features obtained from the (i, j)-th T-F bin for a given speech presence model, π_{j,s}, at the j-th frequency band. Similarly, p(t_{i,j}, l_{i,j}, e_{i,j} | π_{j,n}) is the speech absence probability of the VAD features obtained from the (i, j)-th T-F bin. Also, N_j = 32 is the number of frequency bands.
2.4
Decision Rule
The proposed statistical model-based VAD method detects speech intervals by comparing the likelihood ratio of the VAD features with two different thresholds. Specifically, for the i-th frame, we compute a running average, μ_{i−1}, and a standard deviation, σ_{i−1}, of the likelihood ratios from the start frame of each utterance to the (i−1)-th frame. After that, we combine them to determine the thresholds for the speech/non-speech determination of the i-th frame as

T_{i,c} = α_c (μ_{i−1} + σ_{i−1})    (6)

where the subscript c can be either a speech interval, s, or a non-speech interval, n, depending on whether the threshold is for a speech interval or a non-speech interval. In addition, α_s and α_n are set to 3 and 1, respectively, since actual speech signals have greater likelihood than non-speech signals due to stronger spatial correlation. In particular, the running averages and standard deviations for the first ten frames are set to the same values:
μ_0 = μ_1 = ⋯ = μ_10 = (1/10) Σ_{i=0}^{9} Λ(i)    (7)

σ_0^2 = σ_1^2 = ⋯ = σ_10^2 = (1/10) Σ_{i=0}^{9} (Λ(i) − μ_i)^2    (8)

Next, the likelihood ratio of the i-th frame is compared with T_{i,s} or T_{i,n} to give the VAD result:

V(i) = 1,         if Λ(i) > T_{i,s}
V(i) = 0,         if Λ(i) < T_{i,n}    (9)
V(i) = V(i−1),    otherwise
where 1 and 0 indicate that the frame is a speech interval and a non-speech interval, respectively. The likelihood ratio can vary over non-speech intervals in non-stationary noise environments; thus, the thresholds should be updated to accommodate such variations. To this end, the mean and variance of the VAD features for the i-th frame are updated if the i-th frame is determined to be non-speech, such as
μ_i = γ μ_{i−1} + (1 − γ) Λ(i),  if V(i) = 0;  μ_i = μ_{i−1},  otherwise    (10)

σ_i^2 = ((i−1)/i) σ_{i−1}^2 + (Λ(i) − μ_i)(Λ(i) − μ_{i−1}),  if V(i) = 0;  σ_i^2 = σ_{i−1}^2,  otherwise    (11)
where γ represents a weighting factor for updating the mean and is set to 0.1 in this paper. Using the updated μ_i and σ_i, the thresholds for the (i+1)-th frame are updated by Eq. (6).
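The decision procedure of Eqs. (6)-(11) can be sketched as follows; the guard against a negative variance from the recursion of Eq. (11) is an addition of this sketch, not part of the paper.

```python
import numpy as np

def vad_decide(llrs, alpha_s=3.0, alpha_n=1.0, gamma=0.1):
    """Per-frame speech/non-speech decisions following Eqs. (6)-(11).

    llrs: per-frame likelihood ratios Lambda(i).  The first ten frames
    initialize the running mean and variance, as in Eqs. (7)-(8).
    """
    llrs = np.asarray(llrs, dtype=float)
    init = llrs[:10]
    mu = float(init.mean())            # Eq. (7)
    var = float(init.var())            # Eq. (8)
    v_prev = 0
    decisions = []
    for i, lam in enumerate(llrs):
        sigma = np.sqrt(max(var, 0.0))
        t_s = alpha_s * (mu + sigma)   # Eq. (6), speech threshold
        t_n = alpha_n * (mu + sigma)   # Eq. (6), non-speech threshold
        if lam > t_s:
            v = 1
        elif lam < t_n:
            v = 0
        else:
            v = v_prev                 # Eq. (9): keep the previous decision
        if v == 0 and i > 0:           # Eqs. (10)-(11): update on non-speech
            mu_new = gamma * mu + (1.0 - gamma) * lam
            var = ((i - 1) / i) * var + (lam - mu_new) * (lam - mu)
            mu = mu_new
        decisions.append(v)
        v_prev = v
    return decisions
```

The hysteresis between the two thresholds (α_s > α_n) keeps ambiguous frames at their previous label, which suppresses rapid toggling at speech boundaries.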
3
Performance Evaluation
In this section, the performance of the proposed VAD method was evaluated in terms of both speech recognition performance and discrimination analysis, i.e., false rejection rate (FRR) and false alarm rate (FAR), and it was compared with those of conventional VADs.
3.1
Binaural Database
To evaluate the proposed VAD method, a binaural database was artificially constructed using 200 utterances of a Korean speech corpus [13]. In other words, the binaural signals were obtained from speech signals that were mixed with noise signals under simulated conditions. The speech and noise signals were initially processed by a filter characterized by a head-related impulse response (HRIR) modeled from a
KEMAR dummy head [14]. More specifically, the speech signals were filtered using an HRIR with an angle of 0°, while the noise signals were convolved with HRIRs with an angle of 20° or 40°. Finally, the speech and noise signals were combined at SNRs of 0, 10, and 20 dB. In this paper, we simulated four different noise types: babble, factory noise, classical music, and speech noise.
3.2
Discrimination Analysis
First, the proposed VAD was evaluated in terms of its ability to discriminate speech intervals from non-speech intervals in different noise environments. In particular, the discrimination performance of the proposed VAD method using spatial cues and energy (SE-VAD) was compared to that of the SNR variance-based VAD method (SNR-VAD) [8] and the phase vector-based VAD method (PV-VAD) [9]. In the discrimination analysis, two types of error rates, FRR and FAR, were measured by comparing the VAD results of each method with those of manual segmentation. That is, FRR is the percentage of actual speech frames, as labeled by the manual segmentation, that are incorrectly detected as non-speech, and FAR is the percentage of actual non-speech frames that are incorrectly detected as speech. Table 1 shows the FRRs and FARs of the different VAD methods according to noise type. In the table, all FRRs and FARs for each noise type were averaged over SNRs of 0, 10, and 20 dB. As shown, the proposed VAD method had the lowest average FRR and FAR, which implies that the proposed VAD outperformed the SNR variance-based and phase vector-based VADs.

Table 1. Comparison of false rejection rates and false alarm rates of different VAD methods according to different noise types

                False Rejection Rate (%)           False Alarm Rate (%)
VAD        Babble  Factory  Music  Speech  Avg.   Babble  Factory  Music  Speech  Avg.
SNR-VAD      5.9      5.6    1.9     2.4   4.0     27.7     20.4   42.6    58.1  37.2
PV-VAD       9.2      8.1    8.4    15.7  12.9      4.4      3.5   11.3    25.4  11.2
SE-VAD       4.1      3.9    3.3     4.0   3.8      8.2      6.7    7.4     5.8   7.0
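The FRR and FAR computation described above can be sketched as:

```python
def error_rates(reference, detected):
    """False rejection rate and false alarm rate in percent, as defined
    in Section 3.2; labels are per frame, with 1 = speech and
    0 = non-speech, and `reference` is the manual segmentation."""
    n_speech = sum(1 for r in reference if r == 1)
    n_nonspeech = len(reference) - n_speech
    n_fr = sum(1 for r, d in zip(reference, detected) if r == 1 and d == 0)
    n_fa = sum(1 for r, d in zip(reference, detected) if r == 0 and d == 1)
    frr = 100.0 * n_fr / n_speech if n_speech else 0.0
    far = 100.0 * n_fa / n_nonspeech if n_nonspeech else 0.0
    return frr, far
```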
3.3
Speech Recognition Performance
As another measure of VAD performance, speech recognition experiments were also performed on speech segments that include only the speech intervals detected by each VAD method. The speech recognition system used here was constructed using 18,240 utterances of a Korean speech corpus [13]. As recognition features, 13 mel-frequency cepstral coefficients (MFCCs) were extracted for every 10-ms analysis frame. The 13 MFCCs were then concatenated with their first and second derivatives, resulting in a 39-dimensional feature vector. The acoustic models were 2,296 tied triphones, each represented by a 3-state left-to-right hidden Markov model with 4 Gaussian mixtures. For the language model, the lexicon size was 2,250 words, and a finite state network grammar was employed.
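The 39-dimensional feature assembly (13 MFCCs plus first and second derivatives) can be sketched as follows; the regression window width is an assumption, since the paper does not specify it.

```python
import numpy as np

def delta(features, width=2):
    """Standard regression-based delta coefficients over a (T, D)
    feature matrix; edge frames are padded by repetition.  The window
    width of 2 frames per side is an assumed value."""
    T = len(features)
    padded = np.concatenate([features[:1].repeat(width, axis=0),
                             features,
                             features[-1:].repeat(width, axis=0)])
    denom = 2.0 * sum(k * k for k in range(1, width + 1))
    out = np.zeros_like(features, dtype=float)
    for t in range(T):
        for k in range(1, width + 1):
            out[t] += k * (padded[t + width + k] - padded[t + width - k])
    return out / denom

def make_39d(mfcc):
    """Concatenate 13 MFCCs with their first and second derivatives."""
    d1 = delta(mfcc)
    d2 = delta(d1)
    return np.hstack([mfcc, d1, d2])
```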
Fig. 2. Comparison of average word error rates (%) of different VAD methods according to different noise types
Fig. 2 compares the word error rates (WERs) of the speech recognition systems 1) without any VAD method, 2) with manual segmentation, 3) with SNR-VAD, 4) with PV-VAD, and 5) with SE-VAD. All WERs for each noise type were averaged over SNRs of 0, 10, and 20 dB. The figure shows that the proposed method, SE-VAD, provided smaller WERs than SNR-VAD and PV-VAD for all noise types. In particular, SE-VAD achieved relative WER reductions of 19.5% and 12.2% compared to SNR-VAD and PV-VAD, respectively.
4
Conclusion
In this paper, a voice activity detection (VAD) method for dual-channel noisy speech recognition was proposed using spatial cues and log energy. The proposed method discriminates whether each frame is a speech or a non-speech frame based on a likelihood ratio test. The likelihood ratio is provided by a Gaussian kernel density-based statistical model trained on a VAD feature composed of the spatial cues and the log energy. To evaluate the performance of the proposed VAD method, its FRRs and FARs were first measured by comparing its VAD results with those of manual segmentation. Then, speech recognition experiments were performed on speech segments that only included the speech intervals detected by the VAD. As a result, the proposed VAD method outperformed the SNR variance-based and phase vector-based VADs in terms of FRRs, FARs, and word error rates (WERs). In particular, the proposed VAD method achieved relative WER reductions of 19.5% and 12.2% compared to the SNR variance-based and phase vector-based VAD methods, respectively.

Acknowledgments. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0023888).
References
1. Junqua, J.C., Mak, B., Reaves, B.: A robust algorithm for word boundary detection in the presence of noise. IEEE Transactions on Speech and Audio Processing 2(3), 406–412 (1994)
2. ETSI TS 101 707, V7.5.0: Digital Cellular Telecommunications System (Phase 2+); Discontinuous Transmission (DTX) for Adaptive Multi-Rate (AMR) Speech Traffic Channels (2000)
3. Rabiner, L.R., Sambur, M.R.: An algorithm for determining the endpoints of isolated utterances. Bell System Technical Journal 54(2), 297–315 (1975)
4. Tucker, R.: Voice activity detection using a periodicity measure. IEE Proceedings-I, Communications, Speech and Vision 139(4), 377–380 (1992)
5. Haigh, J.A., Mason, J.S.: Robust voice activity detection using cepstral features. In: Proceedings of the IEEE TENCON, pp. 321–324 (1993)
6. Ramirez, J., Segura, J.C., Benitez, C., Torre, A., Rubio, A.: Efficient voice activity detection algorithms using long-term speech information. Speech Communication 42(3-4), 271–287 (2004)
7. Welch, P.D.: The use of fast Fourier transform for the estimation of power spectra: a method based on time averaging over short, modified periodograms. IEEE Transactions on Audio and Electroacoustics 15(2), 70–73 (1967)
8. Davis, A., Nordholm, S., Togneri, R.: Statistical voice activity detection using low-variance spectrum estimation and an adaptive threshold. IEEE Transactions on Audio, Speech, and Language Processing 14(2), 412–424 (2006)
9. Kim, G., Cho, N.I.: Voice activity detection using phase vector in microphone array. Electronics Letters 43(14), 783–784 (2007)
10. Patterson, R.D., Nimmo-Smith, I., Holdsworth, J., Rice, P.: An Efficient Auditory Filterbank Based on the Gammatone Functions. APU Report 2341, MRC Applied Psychology Unit, Cambridge, U.K. (1988)
11. Glasberg, B.R., Moore, B.C.J.: Derivation of auditory filter shapes from notched-noise data. Hearing Research 47(1-2), 103–138 (1990)
12. Parzen, E.: On estimation of a probability density function and mode. The Annals of Mathematical Statistics 33(3), 1065–1076 (1962)
13. Kim, S., Oh, S., Jung, H.-Y., Jeong, H.-B., Kim, J.-S.: Common speech database collection. Proceedings of the Acoustical Society of Korea 21(1), 21–24 (2002)
14. Gardner, W.G., Martin, K.D.: HRTF measurements of a KEMAR. The Journal of the Acoustical Society of America 97(6), 3907–3908 (1995)
3D Sound Techniques for Sound Source Elevation in a Loudspeaker Listening Environment Yong Guk Kim1, Sungdong Jo1, Hong Kook Kim1, Sei-Jin Jang2, and Seok-Pil Lee2 1 School of Information and Communications Gwangju Institute of Science and Technology, Gwangju 500-712, Korea {bestkyg,sdjo,hongkook}@gist.ac.kr 2 Digital Media Research Center Korea Electronics Technology Institute, Seongnam, Gyeonggi-do, 463-816, Korea {sjjang,lspbio}@keti.re.kr
Abstract. In this paper, we propose several 3D sound techniques for sound source elevation in stereo loudspeaker listening environments. The proposed method integrates a head-related transfer function (HRTF) for sound positioning and early reflections for adding a reverberant circumstance. In addition, spectral notch filtering and directional band boosting techniques are included to increase the capability of elevation perception. In order to evaluate the elevation performance of the proposed method, subjective listening tests are conducted using several kinds of sound sources, such as white noise, sound effects, speech, and music samples. It is shown from the tests that the degree of perceived elevation achieved by the proposed method is around 17° to 21° when the stereo loudspeakers are located on the horizontal plane. Keywords: Sound source elevation, 3D audio, head-related transfer function (HRTF), early reflection, spectral notch filtering, directional band boosting.
1 Introduction
Recently, a wide range of multimedia technologies for consuming multimedia contents has been rapidly developing in home appliances such as digital TVs (DTVs), personal computers (PCs), and several kinds of hand-held devices. With these developments in multimedia techniques and contents, the demand for ever more realistic audio services has continued to grow. However, the audio rendering capability of such hand-held devices is rather limited, and most users of computers, TVs, or home theater systems still use limited audio rendering setups such as stereo or 5.1-channel loudspeaker systems, as shown in Fig. 1 [1]. For realistic and immersive audio rendering in virtual reality applications, not only directional effects but also elevation effects are necessary. In the stereo or 5.1-channel loudspeaker configuration, however, it is difficult to render elevation effects. This is because all of the loudspeakers are placed on the horizontal plane, and sound sources can generally be localized only between a pair of loudspeakers under the panning law. Besides, it is difficult to place loudspeakers vertically in home environments, and constructing such a loudspeaker system is costly. T.-h. Kim et al. (Eds.): FGCN 2010, Part II, CCIS 120, pp. 180–187, 2010. © Springer-Verlag Berlin Heidelberg 2010
Fig. 1. Speaker configuration defined by ITU-R Recommendation BS.775-1; (a) stereo and (b) 5.1 channel
As an alternative, 3D audio is capable of reproducing the spatial attributes of audio signals (e.g., direction, distance, width of the sound source, and room envelopment) in limited reproduction environments such as stereo or 5.1-channel setups [2][3]. There are two kinds of sound localization techniques in 3D audio, binaural and transaural, both of which render spatial sounds using only stereo audio sources [4]. A binaural method renders audio sources over stereo headphones, and it can deliver direction and elevation effects more easily than a transaural method. However, it suffers from in-head localization or front-back confusion due to either the absence of reverberation and crosstalk or the application of non-personalized head-related transfer functions (HRTFs). On the other hand, a transaural method can render audio sources out of the head, but its localization performance (e.g., direction and elevation effects) degrades rapidly because of undesired crosstalk and reverberation [4]. In this paper, we propose a sound source elevation method for stereo loudspeaker listening environments. The proposed method integrates several 3D sound techniques, such as HRTF-based rendering, early reflection, spectral notch filtering, and directional band boosting. This paper is organized as follows. Following the introduction, we describe an HRTF-model-based technique as well as an early reflection generation technique in Section 2, together with the spectral notch filtering and directional band boosting techniques. In Section 3, we propose a sound source elevation method integrating the techniques explained in Section 2. In Section 4, we discuss the performance evaluation of the proposed method by measuring the perceived elevation from stereo loudspeakers located on the horizontal plane. Finally, we conclude our findings in Section 5.
2 3D Sound Rendering Techniques
2.1 HRTF Model
A head-related transfer function (HRTF) plays a major role in the processing of 3D sound applications. HRTFs are defined as the impulse responses of the paths from sound
sources to the listener's ear drums [3]. These HRTFs represent the reflections and diffractions caused by the head, the ears, and the torso. An HRTF can be measured by means of dummy-head microphones designed to mimic the human head and torso, or it can be modeled by a digital filter. In applications such as sound positioning, externalization, or crosstalk cancellation, localization performance degrades when measured HRTFs are employed, due to individual differences and the absence of reverberation [5]. Accordingly, HRTFs should be measured individually to increase localization performance, despite the inherent difficulty and cost of practical measurement. To overcome such difficulties, a number of previous works have mathematically modeled the HRTF [6][7]. Such a model-based HRTF, in particular a structural HRTF model, is employed in this paper [7]. Fig. 2 shows a schematic diagram of the structural HRTF model, which is composed of head and pinna models. The HRTF model parameters are the azimuth and elevation of a virtually localized sound source and the head radius of the listener.
Fig. 2. Block diagram of a structured HRTF model
A sound localization procedure using the HRTF model is as follows. First, the head model is applied to an input sound to simulate head shadow effects; i.e., for a given azimuth, θ, the head model for the left or right channel is represented as

H_m(z, θ) = [(2α_m(θ) + βT) + (βT − 2α_m(θ)) z^−1] / [(2 + βT) + (βT − 2) z^−1]    (1)
where m denotes L or R for the left or right channel, respectively. Further, β = 2c/a, where a and c are the head radius and the speed of sound, respectively. In addition, T is the sampling period. Subsequently, α_m(θ) is calculated according to θ as

α_L(θ) = 1 − sin(θ),    (2)

α_R(θ) = 1 + sin(θ).    (3)
Second, the pinna model is applied to the head model output, s_in,m(n), to simulate the pinna effect. As a result, we have the output, s_out,m(n), which is represented by

s_out,m(n) = s_in,m(n) + Σ_{k=1}^{N} ρ_k s_in,m(n − τ_k(θ, φ))    (4)
where φ and N denote the elevation and the number of pinna reflections, respectively. In addition, τ_k(θ, φ) = A_k cos(θ/2) sin(D_k(π/2 − φ)) + B_k, where A_k, D_k, and B_k are constants given in Table 1. Finally, the shoulder model, which is composed of a single reflection simulating the shoulder reflection, is optionally applied to the pinna model output.

Table 1. The pinna model coefficients

k   ρ_k     A_k   B_k   D_k
1   1       0     0     0
2   0.5     1     2     1
3   -1      5     4     0.5
4   0.5     5     7     0.5
5   -0.25   5     11    0.5
6   0.25    5     13    0.5
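A minimal sketch of the head and pinna models above follows. The sampling rate and the head radius are assumptions (the paper does not fix them here), the first row of Table 1 is treated as the direct path, and the fractional pinna delays τ_k are rounded to whole samples for simplicity.

```python
import math

FS = 44100         # sampling rate (assumed)
C = 343.0          # speed of sound (m/s)
A_HEAD = 0.0875    # head radius (m), an assumed value

# Pinna coefficients of Table 1: (rho_k, A_k, B_k, D_k)
PINNA = [(1, 0, 0, 0), (0.5, 1, 2, 1), (-1, 5, 4, 0.5),
         (0.5, 5, 7, 0.5), (-0.25, 5, 11, 0.5), (0.25, 5, 13, 0.5)]

def head_model(x, theta, left=True):
    """First-order head-shadow filter of Eq. (1) as a direct recursion."""
    T = 1.0 / FS
    beta = 2.0 * C / A_HEAD
    alpha = (1.0 - math.sin(theta)) if left else (1.0 + math.sin(theta))
    b0 = 2.0 * alpha + beta * T
    b1 = beta * T - 2.0 * alpha
    a0 = 2.0 + beta * T
    a1 = beta * T - 2.0
    y = [0.0] * len(x)
    for n in range(len(x)):
        xm1 = x[n - 1] if n > 0 else 0.0
        ym1 = y[n - 1] if n > 0 else 0.0
        y[n] = (b0 * x[n] + b1 * xm1 - a1 * ym1) / a0
    return y

def pinna_model(x, theta, phi):
    """Eq. (4): sum of delayed pinna reflections; fractional delays are
    rounded to whole samples, and the k = 1 row is the direct path."""
    y = list(x)
    for rho, A, B, D in PINNA[1:]:
        tau = int(round(A * math.cos(theta / 2.0)
                        * math.sin(D * (math.pi / 2.0 - phi)) + B))
        for n in range(tau, len(x)):
            y[n] += rho * x[n - tau]
    return y
```

At DC the head filter has unit gain for every azimuth (the b and a coefficients both sum to 2βT), so the model only shapes the higher frequencies, as intended for a head-shadow approximation.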
2.2 Early Reflections Generation Using the Image Method
A sound source is reflected and diffracted by the ceiling and walls of a room on its way to the listener's ears. As a result, listeners are able to perceive the distance of the sound source, and the 3D sound rendering effect becomes more realistic with the help of reverberation. In order to localize sound out of the head, artificial reverberation is generated using the image method [8][9]. To generate reverberation using the image method, we first assume that an actual sound source is located at a position higher than 60 degrees of elevation for a given room configuration. Based on this assumption, we simulate and generate a room response using the following parameters: the sound source is located at (5 m, 6 m, 8.5 m) in a 3D auditory space, and a microphone is located at (5 m, 5 m, 1.5 m) in a room with a dimension
0.8
value
0.6
0.4
0.2
0
0
0.5
1 1.5 time sample
2
2.5 x 10
4
Fig. 3. Room impulse response for sound source elevation, which is generated by the image method
184
Y.G. Kim et al.
of (10 m, 10 m, 9 m). Then, depending on the location of each image source and the room type, an impulse response with a different delay, magnitude, and reflection coefficient is generated. After that, all the impulse responses from the image sources are summed into a room impulse response. Fig. 3 illustrates the room impulse response generated with the above parameters when the sound source is sampled at 48 kHz. The length of the room impulse response is approximately 24,000 samples; thus the room impulse response can artificially simulate the reverberation effect needed for perceiving distance and elevation.

2.3 Spectral Notch Filtering

In this paper, we investigate the measured HRTF database, the CIPIC HRTFs [10], in order to analyze how the spectral characteristics vary with elevated sound perception. This investigation shows that notches are located differently on the HRTF spectra for different azimuths and elevations. Accordingly, in order to analyze spectral notch positions for vertical-plane localization, an average HRTF spectrum corresponding to several vertical directions is calculated over the CIPIC HRTFs, for which 45 subjects exist in the database. Consequently, we find three notches in the average HRTF spectrum, positioned at 9,991 Hz, 12,231 Hz, and 13,954 Hz.

2.4 Directional Band Boosting

We carried out listening experiments using complex tones and found that if the sound source contained components above 7 kHz, listeners could distinguish and perceive sound sources as if they were localized above the head. In addition, Blauert reported that a directional band affects the perception of the direction of sound [11]. Thus, we apply the directional band boosting method for sound source elevation.
Fig. 4. Plot of the directional band boost and attenuation for sound source elevation (x-axis: QMF subband center frequency, 187.5 Hz to 23,813 Hz; y-axis: gain value, 0 to 10)
Fig. 5. Overall structure of the proposed sound elevation method (48 kHz stereo WAVE input → Reflection → Head Model → Pinna/Shoulder Model → per-channel Notch Filter → QMF Analysis → Boost/Attenuation → QMF Synthesis → 48 kHz stereo WAVE output)
In order to boost a directional band in the subband domain, a 64-channel quadrature mirror filterbank (QMF) analysis and synthesis method [12] is first applied. That is, the input stereo audio signals are decomposed by the QMF analysis filterbank, and then the subband signals corresponding to a directional band are boosted. Fig. 4 shows how much each band is boosted or attenuated for sound source elevation, where the x-axis is the center frequency of the QMF filterbank. The subband signals are multiplied by the value corresponding to their center frequency and synthesized by the QMF synthesis filter.
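The notch filtering of Section 2.3 and the band boost of Section 2.4 can be sketched together. The notch Q, the band edges, and the gain below are illustrative assumptions, and an FFT-based gain stage stands in for the 64-channel QMF of [12].

```python
import numpy as np

FS = 48000
NOTCH_HZ = [9991, 12231, 13954]  # average CIPIC HRTF notch positions (Sec. 2.3)

def biquad_notch(x, f0, q=10.0, fs=FS):
    """Second-order IIR notch (audio-EQ-cookbook form) at f0 Hz.
    The Q value is an assumption; the paper does not give notch bandwidths."""
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b0, b1, b2 = 1.0, -2 * np.cos(w0), 1.0
    a0, a1, a2 = 1 + alpha, -2 * np.cos(w0), 1 - alpha
    b0, b1, b2, a1, a2 = b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0
    y = np.zeros(len(x))
    for n in range(len(x)):  # direct-form-I difference equation
        y[n] = b0 * x[n]
        if n >= 1:
            y[n] += b1 * x[n - 1] - a1 * y[n - 1]
        if n >= 2:
            y[n] += b2 * x[n - 2] - a2 * y[n - 2]
    return y

def notch_filter(x):
    """Cascade notches at the three average HRTF notch frequencies."""
    for f0 in NOTCH_HZ:
        x = biquad_notch(x, f0)
    return x

def band_boost(x, lo=7000, hi=14000, gain_db=6.0):
    """FFT-based stand-in for the 64-channel QMF boost of Sec. 2.4: bins in
    the directional band [lo, hi] Hz are scaled up. Band edges and gain are
    illustrative assumptions."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / FS)
    X[(f >= lo) & (f <= hi)] *= 10 ** (gain_db / 20)
    return np.fft.irfft(X, n=len(x))
```

A tone at one of the notch frequencies is driven to zero in the steady state (the notch zeros lie on the unit circle), while tones inside the directional band gain about 6 dB.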
3 Proposed Sound Source Elevation Method

Fig. 5 shows the overall structure of the proposed sound elevation method. As shown in the figure, the proposed method integrates the image-method-based reverberation, HRTF modeling, spectral notch filtering, and directional band boosting. First, the stereo audio signal is split into two mono audio signals, and each of them is convolved with a room impulse response composed of early and late reflections, as described in Section 2.2. Next, the HRTF models of Section 2.1 are applied to each convolved mono signal in order to localize sound images at arbitrary positions. Then, to improve the elevation effect, the spectral notch filter described in Section 2.3 is applied. The directional band boosting method, described in Section 2.4, is then performed after applying the QMF analysis. Finally, the QMF synthesis filtering is carried out to obtain the two mono signals that are merged into a stereo audio output signal.
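The chain above can be written down schematically. Every stage below is an identity placeholder standing in for the corresponding technique from Section 2, so only the ordering of the processing chain in Fig. 5 is captured here.

```python
import numpy as np

# Identity placeholders for the stages of Fig. 5; in a real system each would
# implement the corresponding technique from Section 2.
def reverb_stage(x): return x   # image-method reflections (Sec. 2.2)
def hrtf_stage(x):   return x   # head + pinna/shoulder models (Sec. 2.1)
def notch_stage(x):  return x   # spectral notch filtering (Sec. 2.3)
def qmf_split(x):    return x   # 64-channel QMF analysis [12]
def boost_stage(x):  return x   # directional band boost (Sec. 2.4)
def qmf_merge(x):    return x   # QMF synthesis

def elevate(stereo):
    """Per-channel chain of the proposed method for an (N, 2) stereo array."""
    outs = []
    for ch in range(2):                    # split stereo into two mono signals
        x = reverb_stage(stereo[:, ch])
        x = hrtf_stage(x)
        x = notch_stage(x)
        x = qmf_merge(boost_stage(qmf_split(x)))
        outs.append(x)
    return np.stack(outs, axis=1)          # merge back into a stereo signal
```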
4 Performance Evaluation

In order to evaluate the perceptual elevation achieved by the proposed sound elevation method, audio files of different genres, including white noise, sound effects, speech, and music, were prepared. Fig. 6 illustrates the configuration of the loudspeakers used in this experiment. In order to investigate horizontal and vertical localization effects, each participant was initially asked to listen to the original sound played by the loudspeakers located on the horizontal plane, i.e., 0 cm away from the horizontal axis. After that, each participant was also asked to
Fig. 6. Configuration of loudspeakers for the listening experiments (two loudspeakers at ±30° azimuth, 1 m from the listener)

Fig. 7. Perceived elevation measured in degrees from the horizontal plane (frontal and lateral directions, 0–25° scale) for the Speech, Music, White noise, and Sound Effect genres
listen to a pair of audio files composed of an original audio file and the corresponding file processed by the proposed method. To measure the perceptual elevation, a laser pointer was used to indicate the perceived position of the processed file relative to that of the original file. We repeated this procedure for the lateral direction. Nine people with no auditory diseases participated in this experiment. Fig. 7 shows the perceptual elevation results measured in degrees for the different audio genres. The figure shows that the proposed method provided a perceived elevation of around 17°–21° above the horizontal plane. In particular, speech was perceived at the highest elevation among all the genres.
5 Conclusion

In this paper, a sound source elevation method for a loudspeaker listening environment was proposed by combining several 3D audio techniques, including a structural HRTF model, early reflections, spectral notch filtering, and a directional band boosting technique. A subjective listening test was performed to evaluate the perceived elevation. As a result, the proposed method could elevate audio sources by as much as 17°–21°.
References

1. ITU-R Recommendation BS.775-1: Multi-Channel Stereophonic Sound System with or without Accompanying Picture (1994)
2. Breebaart, J., Faller, C.: Spatial Audio Processing: MPEG Surround and Other Applications. Wiley, Chichester (2007)
3. Begault, D.R.: 3D Sound for Virtual Reality and Multimedia. Academic Press, Cambridge (1994)
4. Gardner, W.G.: 3-D Audio Using Loudspeakers. Kluwer Academic Publishers, Norwell (1998)
5. Wenzel, E.M., Arruda, M., Kistler, D.J., Wightman, F.L.: Localization using nonindividualized head-related transfer functions. J. Acoust. Soc. Am. 94(1), 111–123 (1993)
6. Kistler, D.J., Wightman, F.L.: A model of head-related transfer functions based on principal components analysis and minimum-phase reconstruction. J. Acoust. Soc. Am. 91(3), 1637–1647 (1992)
7. Brown, C.P., Duda, R.O.: An efficient HRTF model for 3D sound. In: Proceedings of the IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, pp. 298–301 (1997)
8. Allen, J.B., Berkley, D.A.: Image method for efficiently simulating small-room acoustics. J. Acoust. Soc. Am. 65(4), 943–951 (1979)
9. McGovern, S.G.: Fast image method for impulse response calculations of box-shaped rooms. Applied Acoustics 70(1), 182–189 (2008)
10. Algazi, V.R., Duda, R.O., Thompson, D.M., Avendano, C.: The CIPIC HRTF database. In: Proceedings of the IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, pp. 99–102 (2001)
11. Blauert, J.: Spatial Hearing. MIT Press, Cambridge (1997)
12. 3GPP TS 26.401: Enhanced aacPlus General Audio Codec; General Description (2004)
Integrated Framework for Information Security in Mobile Banking Service Based on Smart Phone

Yong-Nyuo Shin¹ and Myung Geun Chun²,*

¹ Hanyang Cyber University, Dept. of Computer Engineering, 17 Haengdang-dong, Seongdong-gu, Seoul, Korea
[email protected]
² Chungbuk National University, Dept. of Electrical & Computer Engineering, 410 Seongbong-ro, Heungdeok-gu, Cheongju, Chungbuk, Korea
[email protected]

Abstract. Since Apple launched the iPhone service in Korea in November 2009, smartphone banking users have been increasing dramatically, forcing lenders to develop new products to deal with this demand. The Bank of Korea took the lead in a joint effort to create a mobile banking application that each bank can adapt for its own use. In providing smartphone services, it is of critical importance to take proper security measures, because these services, while offering excellent mobility and convenience, can easily be exposed to various infringement threats. This paper proposes a security framework that should be taken into account by the joint smartphone-based mobile banking development project. The purpose of this paper lies in recognizing the value of smartphones as well as the security threats to which they are exposed, and in providing countermeasures against those threats, so that an integrated information security framework for reliable smartphone-based mobile financial services can be prepared by explicitly presenting the differences between personal computers and smartphones from the security perspective.

Keywords: Mobile, Security, Banking Service, Smart Phone, Integrated Framework, Authentication, Threats, Countermeasures.
1 Introduction
As smartphones have become widely adopted, they have brought about changes in individual lifestyles, as well as significant changes in industry. As the mobile technology of smartphones has become associated with all areas of industry, it is not only accelerating innovation in other industries such as shopping, healthcare, education, and finance, but is also creating new markets and business opportunities [1]. In addition, the wide adoption of smartphones has increased competition among enterprises. As Hana Bank and the Industrial Bank of Korea started the development of smartphone-based banking services earlier than other banks, competition to take the lead in this new market seems to be accelerating further.

* Corresponding author.
T.-h. Kim et al. (Eds.): FGCN 2010, Part II, CCIS 120, pp. 188–197, 2010. © Springer-Verlag Berlin Heidelberg 2010
In providing smartphone services, it is of critical importance to take proper security measures [5], because these services, while offering excellent mobility and convenience, can easily be exposed to various infringement threats. In particular, efforts are required to apply security systems that can preemptively cope with potential threats in the area of banking services, which demand high reliability. This study proposes a security framework that should be taken into account by the joint smartphone-based mobile banking development project of the Bank of Korea. The purpose of this study lies in recognizing the value of smartphones as well as the security threats to which they are exposed, and in providing countermeasures against those threats, so that an integrated information security framework for reliable smartphone-based mobile financial services can be prepared by explicitly presenting the differences between personal computers and smartphones from the security perspective. When the global hit product the iPhone was distributed in Korea for the first time, the dominant viewpoint was that it was another early-adopter product that reflected the preferences of youth. However, the change in our daily lives that followed its introduction was more significant than had been expected. Usage patterns are changing as more people access public information using their smartphones, and people now even prefer to use their smartphones for online banking rather than turning on their personal computers. The mobile communications and wireless network markets are thus undergoing a period of upheaval, which has caused some to proclaim that we are entering "the age of smartphones." However, only a few users recognize the enormous threat posed by smartphones, which is indeed as significant as their power to affect our lives in positive ways.
As the smartphone is essentially a PC that we can hold in our hands, it is vulnerable to every type of violation that can occur on a PC. Even worse, smartphones are quite vulnerable to security violations because they are based on wireless networks. Considering these factors, the types of violation incidents that can occur on smartphones were classified, and damage cases of mobile malicious code were analyzed. Already, a diverse range of mobile malicious code has appeared, from 'proof-of-concept' code that simply exposes the possibility of damage, to code that exploits vulnerabilities in a smartphone OS to actually disclose the owner's information and cause financial damage. Various tools that have already been formalized can be utilized to analyze malicious computer code. However, different tools should be used for each platform to analyze mobile malicious code, and as yet no methodology has been systematically organized to determine which tools should be used and in which way. The integrated information security framework for mobile banking services proposed in this paper can be utilized effectively when security measures are established for the joint smartphone-based mobile banking development project being promoted by the Bank of Korea. Following the introduction, Section 2 outlines existing studies related to our work. Section 3 introduces the joint smartphone-based banking development project. Section 4 compares smartphone OS features and the security threats to the platform. Section 5 presents the integrated framework for information security in mobile banking services based on the smartphone. The last section provides a conclusion.
2 Related Studies

2.1 Universal Subscriber Identity Module
Mobile finance services, which were released with an emphasis on unit functions, are evolving into finance and communication convergence services with the introduction of the Universal Subscriber Identity Module (USIM) service. The USIM is a smart card mounted in the mobile handset, which can store the subscriber's information and enable access to various services, such as public transportation, membership, coupon, mobile banking, credit card, and stock services. The USIM is a secure smart card with a user interface and communication function. Currently, a high-performance Near Field Communication USIM has almost reached the commercialization stage. Domestic communication service providers have developed an applet that provides a hardware security module and storage token function, and are preparing to incorporate the digital certificate into the USIM by the second half of 2010. Once digital certificate installation is completed, a digital certificate issued to or saved in the USIM can be invoked and used through the application user interface of financial applications such as banking, securities, and settlement applications.

2.2 Authentication Based on User Memory
As security threats in open mobile-based electronic financial services can take diverse attack patterns, omnidirectional security management to counter those patterns is required. Among the platform security threats, keyboard hooking is a technique that intercepts a user's keyboard input, and it is mostly exploited to capture the user's password. This vulnerability is most serious in Internet banking performed on a regular personal computer, and financial losses occur periodically due to such attacks [2]. In addition, SMS (Short Message Service) hooking can be exploited to maliciously modify the personal information used to access a financial service. The programs that manage SMS text are implemented as applications, and proprietary text-engine programs are embedded and executed by each mobile operator due to differing standards and localization problems. A password that is safer than authentication based on an existing password should be sufficiently complex (a wide password space), easy to remember over a long period of time, and safe against various external attacks. For this purpose, techniques using a graphic password, as well as authentication techniques that utilize the user's calculation capability, are being studied. One such technique performs authentication using the line pattern connecting points. A safe method of changing the password needs to be studied for application to electronic financial services. The Bank of America performs authentication at login through identification of an image that the user has registered.

2.3 Using Biometric Technology in Electronic Financial Services
Beginning in the second half of 2010, Korean subscribers were able to use Internet banking services or a home trading system (HTS) from their smartphones or PCs through fingerprint authentication and question-and-answer authentication. Table 1 shows the major alternative authentication methods.

Table 1. Major authentication methods

Fingerprint recognition: Checks the identity by comparing the fingerprint obtained from the detector with the saved information.
Question-and-answer authentication: The user registers the answer to a question that only the authorized user knows, and is asked to answer that question at the time of a financial transaction.
Iris recognition: Checks the identity by comparing the iris information scanned by the auto-focus camera with the saved information.
Vein authentication: Checks the identity by comparing infrared vein imaging of the finger or palm with the saved information.
Subscribers can also make payments of 300,000 won or more in online shopping malls. Currently, a digital certificate is required for these types of transactions. This is a follow-up measure to the act on "Alleviation of the public certificate use obligation for electronic financial transactions" announced in March 2010. According to this measure, starting in July, methods other than digital certificates are expected to be allowed for e-banking and electronic payments of 300,000 won or more. These authentication methods focus on securing the same security level as the public certificate, and must satisfy the required items: user identification, server authentication, communication channel encryption, prevention of forgery/alteration of transaction details, and non-repudiation. Fingerprint recognition is regarded as the most basic authentication method, and is already applied in a significant number of security systems. The fingerprint method identifies the user at the time of the financial transaction using the stored fingerprint information. In addition, the question-and-answer authentication method is a strong candidate, in which the user inputs information in advance that is known only to him/her, and is required to answer the question at the time of a financial transaction for identification. Biometric authentication methods such as voice recognition and iris and palm vein pattern recognition have also become subjects of discussion. Figure 1 shows the iris recognition screen utilized for mobile banking services in Japan.
Fig. 1. Iris recognition screen for mobile banking services in Japan
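The question-and-answer method of Table 1 can be sketched with standard salted hashing: the registered answer is stored only as a salted, slow hash, and the answer given at transaction time is compared in constant time. The storage format and iteration count are illustrative assumptions, not the banks' actual protocol.

```python
import hashlib, hmac, os

def register_answer(answer):
    """Store only a salted, slow hash of the answer registered in advance."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", answer.encode(), salt, 100_000)
    return salt, digest

def verify_answer(answer, salt, digest):
    """Constant-time check of the answer given at transaction time."""
    candidate = hashlib.pbkdf2_hmac("sha256", answer.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

Salting prevents precomputed-hash attacks on the stored answers, and `hmac.compare_digest` avoids leaking information through comparison timing.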
3 Joint Smartphone Banking Development Project
Since the release of the iPhone on November 28, 2009, Hana Bank and the Industrial Bank of Korea have announced the release of application-type smartphone banking services in Korea. These banks seem poised to re-arrange the structure of the existing feature-phone-based mobile banking market and to occupy the smartphone banking market in its early stage. These smartphone banking services are implemented such that a free application is downloaded from the iPhone App Store, and the public certificate is moved/copied from the PC. Currently, account transfers and balance checks of deposit, loan, fund, and foreign currency accounts are supported by these services. In addition to the independent services provided by individual banks, the banking industry has been promoting the development of a joint smartphone-based banking system since early 2009. The joint smartphone banking service standard (plan) was prepared through discussions by the Mobile Finance Committee, which is composed of 17 commercial banks. In December 2009, the Finance Informatization Promotion Committee adopted the "joint development project for the smartphone-based mobile banking service" as one of its finance information projects, and commissioned the Korea Financial Telecommunications & Clearings Institute to implement the project. The joint smartphone-based development project is expected to minimize the investment cost and risk to individual banks by realizing economies of scale in the smartphone environment, in which applications and security modules must be developed for each OS type, and to prevent customers of smaller-scale banks from becoming an alienated service class, as those banks would otherwise unavoidably have to give up service provisioning. In addition, overall service stability can be improved by applying common security measures such as the public certificate, security modules, and information leak prevention.
However, customization by each bank will be allowed in application development (customer contact segment) in order to reflect the service differentiation needs of each bank, so that the differentiation needs of participating banks and market needs for convenient services can be satisfied at the same time.
Fig. 2. Configuration of the joint development project system for smartphone-based mobile banking services
4 Comparison of Smartphone OS Features and Security Threats to the Platform
Generally, smartphone users select a smartphone model to buy after considering the intended use and comparing the strengths and shortcomings of the product design and functions. Users had only limited options in the domestic market due to the obligatory integration of WIPI. However, with various smartphones being imported from overseas, users now have more choices and can purchase products at lower prices. As a result of competition in the market, domestic manufacturers are also releasing diverse smartphones that satisfy user demands. Among the several kinds of open mobile platforms, Apple's iPhone OS (OS X iPhone), Google's Android, and Microsoft's Windows Mobile are the three major open platforms. Windows Mobile is the Windows CE based mobile platform, and most domestic financial services are based on the Windows platform. Apple's iPhone OS ports the existing general-purpose Mac OS to the mobile terminal, with necessary functions added. Users can purchase and install applications from the App Store, and no compatibility with other devices or services is provided due to Apple's closed policy [11]. Google's Android provides an open-source-based open mobile platform. Various types of terminals and mobile operators can use the Android OS [8][12]. The Droid09 malicious code was a recently discovered phishing attack (Android Development, 2010): by deceptively presenting a malicious application distributed in the Android Market as a normal banking application, user passwords were stolen. The smartphone is a mobile handset equipped with computer functions, and has characteristics that make it similar to a PC. To attack a smartphone, hackers must have a prior understanding of the specific smartphone OS, as there is a larger number of smartphone OSs than PC OSs. In most cases, the scope of smartphone security incidents is limited to individuals, covering personal information leaks, device disabling, and financial information loss.

As smartphones handle sensitive information and dedicated smartphone security software is not yet sufficient, security measures need to be established. Types of smartphone security incidents include personal information leaks, limited device use, illegal billing generation, and mobile DDoS. Table 2 shows the details.

Table 2. Types of smartphone security incidents

Personal information leak: Leaks of confidential information and privacy violations, such as received messages, the phone directory, location information, and business files.
Limited device use: Terminal screen changes, terminal breakdown (malfunction), battery consumption, and uninstallation of information (files, phone directory, etc.) and programs.
Illegal billing generation: Financial loss due to spam SMS sending and small payments made through the mobile handset.
Mobile DDoS: Causes illegal billing, web site paralysis, and terminal disabling by creating large amounts of traffic at a specific site and sending SMS to a specific terminal.
Table 3. Security threats of smartphone-based financial services

Platform: Derivative attacks are expected due to vulnerabilities or unique functional characteristics of the OS used in the open mobile platform. Examples: viruses and malicious code, keyboard hacking, SMS hooking, process and memory (dump) hacking.
Application: Attacks on vulnerabilities of an application that is recognized by users, unlike a virus. Examples: phishing programs, data file and execution file alteration.
Storage: Access to the file system loaded onto the internal/external memory of the mobile handset; confidential information extraction and alteration attacks are expected. Example: access to the internal storage file system inside the mobile terminal, and extraction of active and deleted confidential information.
Network: Reduced availability due to network traffic occupancy, and attacks using zombie terminals (malicious bot infection of the smartphone) are expected. Example: causing traffic errors by sending random SMS/MMS through terminal misuse.
5 Integrated Framework for Information Security

We analyze the security threats that can occur in open mobile-based electronic financial services, and present countermeasures to the analyzed threats. The open mobile terminal provides performance and scalability equivalent to that of a PC, and allows the sharing and installation of applications developed by individuals [3]. As a result, an uncertified application can cause a security threat.

5.1 Universal Subscriber Identity Module

As there can be various types of security threats in open mobile-based electronic financial services, omnidirectional security management to counter these threats is required [1]. The purpose of this paper lies in providing smartphone-based mobile financial services, as well as in analyzing the possible security threats and proposing countermeasures against those threats.

5.2 Security Countermeasures against Platform-Based Financial Services
An open platform implies a software system that provides an interface (a software development kit) enabling users to use all or part of the platform and its applications when integrated into the open mobile platform. As the smartphone is a portable terminal integrated with an open platform, smartphone-based business can be exposed to security threats more easily than any other area.
Fig. 4. Security Framework for Smartphone mobile banking services
a. Virus and malicious code
The first mobile virus code was found in 2004 on a mobile terminal running the Symbian OS [6], and since then, approximately 400 kinds of mobile malicious code have been detected. Recently, a new mobile malicious code (Trojan-SMS.Python.Flocker) was detected by Kaspersky; it sends an SMS to the recipients registered in the phone directory of the infected mobile terminal, instructing them to transfer money to a certain account.
▣ Threats
① Modification, deletion, or disclosure of the user's personal information or stored applications.
② Excessive traffic due to continuous requests for data that were not initiated by the user.
③ Sending of large numbers of SMS messages using the user's phone directory.

▣ Scenario
① The attacker publishes the malicious code that he/she created on online application sharing web sites and elsewhere.
② The user downloads the attacker's malicious code from the sharing web site and installs it on his/her terminal.
③ Using the malicious code installed on the victim's mobile terminal, the attacker periodically sends the user's personal information stored in the mobile terminal to his/her PC, such as the public certificate, specific files for financial transactions, the phone directory, e-mail, and photos.
▣ Countermeasures
① Anti-virus installation is the minimum measure to protect against virus and malicious code attacks. Anti-virus software can be used to detect abnormal process execution in real time, and to control unauthorized access to the resources saved in the mobile terminal.
② Code signature technology is applied to most mobile platforms. However, some platforms do not apply this technology strictly. Therefore, a separate code signature function is needed for applications that provide the financial service.
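The code-signature check can be illustrated in miniature. Real platforms use public-key signatures; lacking a public-key library, this sketch substitutes an HMAC tag under a key held by the signing authority, which preserves the integrity/authenticity check pattern but is not an actual code-signing scheme.

```python
import hmac, hashlib

def sign_package(package: bytes, signing_key: bytes) -> bytes:
    """Produce an authentication tag over the application package."""
    return hmac.new(signing_key, package, hashlib.sha256).digest()

def verify_package(package: bytes, tag: bytes, signing_key: bytes) -> bool:
    """Reject any package whose contents no longer match its tag."""
    expected = hmac.new(signing_key, package, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

A tampered package fails verification, which is the property a separate code-signature function for financial applications would have to provide.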
b. Keyboard input value hooking
Keyboard input value hooking is a technique that snatches the user's keyboard input, and it is exploited to find the password that the user enters. This security weakness is most prominent in Internet banking performed on a general computer, and financial damages related to this vulnerability occur periodically.
▣ Threats
① It is known that hooking is possible for a physical keyboard such as a QWERTY keyboard, but no such incidents have been reported thus far for a virtual keyboard.
② As the virtual keyboard accepts input with a fixed number of characters at fixed locations, being integrated with the terminal when it is shipped, there is a possibility that the input characters can be inferred by detecting mouse clicks or keyboard events.
▣ Scenario
① The attacker distributes the keyboard hooking program to the user via e-mail, or induces the user to download a malicious hooking program.
② The user begins a mobile banking or trading service, and inputs the PIN and certificate password into the input window.
③ The input keyboard values are hooked and transferred to the attacker. Consequently, the attacker can obtain the user's password and account number.
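One common mitigation for the click-position inference noted in threat ② (sketched here; not taken from the paper) is to randomize the virtual keyboard layout on every session, so that recorded click positions reveal nothing reusable.

```python
import random

def random_layout(keys="0123456789", seed=None):
    """Shuffle the on-screen key layout; position i shows layout[i].
    `seed` is only for reproducible tests; omit it in real use."""
    rng = random.Random(seed)
    layout = list(keys)
    rng.shuffle(layout)
    return layout

def decode_clicks(layout, positions):
    """Server-side decoding: clicked positions -> entered characters."""
    return "".join(layout[p] for p in positions)
```

Because the mapping from screen position to character changes per session, a hooked stream of click coordinates cannot be replayed against a future layout.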
▣ Countermeasures
① Technology is required that provides confidentiality of keyboard input values by making the data pass through an adapter (hardware that performs encryption) before the keyboard device is connected to the mobile terminal.
② A virtual keyboard can be used, as in computer Internet banking systems.

6 Conclusion
In providing smartphone services, it is of critical importance to take the proper security measures, because these services, while offering excellent mobility and convenience, can be easily exposed to various infringement threats. In particular, efforts are required to apply security systems that can preemptively cope with potential threats in the area of banking services, which demand high reliability. Smartphones are quite vulnerable to security violations because they are based on wireless networks. Considering these factors, the types of violation incidents that can occur on smartphones were classified, and damage cases of mobile malicious code were analyzed. Already, a diverse range of mobile malicious code has appeared – from ‘proof-of-concept’
code that simply exposes the possibility of damage, to code that exploits vulnerabilities in a smartphone OS to actually disclose the owner's information and cause financial damage. Various tools that have already been formalized can be utilized to analyze malicious computer code. However, different tools should be used for each platform to analyze mobile malicious code, and as yet no methodology has been systematically organized to determine which tools should be used and in which way. The integrated information security framework for mobile banking services proposed in this paper can be utilized effectively when security measures are established for the joint smartphone-based mobile banking development project being promoted by the Bank of Korea. We hope that this study can be refined continuously by performing verification and re-establishment through actual application to the joint smartphone-based mobile banking development project. In addition, it could be adopted as an international standardization item driven by the Bank of Korea through the international organizations related to security (ISO/IEC JTC 1/SC 27) or finance (ISO TC 68).
A Design of the Transcoding Middleware for the Mobile Browsing Service Sungdo Park, Hyokyung Chang, Bokman Jang, Hyosik Ahn, and Euiin Choi∗ Dept. of Computer Engineering, Hannam University, Daejeon, Korea {sdpark,hkjang,bmjang,hsahn}@dblab.hannam.ac.kr,
[email protected] Abstract. Mobile devices have a limited environment: low processing performance, small screen size, low network speed, and a restricted user interface. These limitations have prevented users from enjoying the diverse, rich desktop-based information and services available on the web, since only limited services could be used in the mobile telecommunication environment. They also force service providers to develop separate web contents for mobile telecommunication, which is a waste of time and effort. Therefore, in this paper, we propose a web content transcoding middleware that can automatically re-author web contents for mobile devices. Keywords: Mobile web, Transcoding, Middleware, Adaptation.
1 Introduction Recently, demand for web access from a variety of mobile devices, including mobile phones, has been growing. This is driven by the new value found in realizing a variety of integrated wired-wireless services in the ubiquitous environment, and ultimately by the natural demand that the wired and wireless web environments be integrated into one. Through this, many new business opportunities are expected[1]. The mobile web content market was dominated by ordinary cellular phones in the past, so service providers could offer only native applications; now, however, web content development and distribution have been activated by open-platform, open-source development environments and by the expansion of the smartphone market with devices such as the iPhone and Android phones. This situation is very attractive to web content providers, because the mobile market, where competition through technology is possible, is more open than the desktop market held by major companies. However, there are problems in serving web content that is optimized for the desktop to mobile devices, which have limitations in screen size, performance, network speed, and supported software. Currently, most web content providers rebuild their desktop web contents for mobile by hand. ∗ Corresponding author.
T.-h. Kim et al. (Eds.): FGCN 2010, Part II, CCIS 120, pp. 198–204, 2010. © Springer-Verlag Berlin Heidelberg 2010
There are two methods of rebuilding web contents for mobile devices: manual authoring and automated authoring[2, 3]. Manual authoring, in which the web content provider prepares and serves multiple versions of the web content for various device profiles, is currently the most frequently used. This approach can provide high-quality web contents, but the author must manage the contents directly and must anticipate which contents users will access. It also wastes considerable time and cost, because the same web content must be prepared in multiple versions for different mobile devices. Automated authoring, based on transcoding techniques, transforms web contents for the user’s device at request time. It is attractive because the transformation works automatically in any device environment, but the transformed contents are of lower quality than manually authored ones. Furthermore, most transcoding techniques simply fit web contents to the size of the mobile device by parsing HTML code and searching for regular patterns. Therefore, in this paper, we propose a content-based item block extraction method, which does not depend on HTML code alone, to address this weakness of automated authoring, and we design a transcoding middleware that can automatically re-author web contents for mobile devices.
2 Related Works 2.1 Mobile Browsing Mobile browsing means viewing web contents designed for the desktop on mobile devices. There are four main methods for doing so. 2.1.1 Full Browsing Full browsing shows the desktop screen as it is on a mobile device, without enlarging or downsizing[4]. This method can cause content-awareness problems on low-resolution mobile devices, because extreme left/right and top/bottom scrolling occurs. Recently, devices have embedded zoom in/out functions or supported higher resolutions so that a full screen can be seen at a glance; even so, it is clearly difficult for a mobile device with a 3-inch screen to present a full-screen web site. 2.1.2 Mini-map The mini-map method shows a downsized mini-map of the full screen, on which the current location is displayed as the user scrolls. This lets the user move around somewhat more easily and directly, but the space available for display shrinks because the mini-map occupies part of the mobile screen[5].
2.1.3 Thumbnail The thumbnail method shrinks a whole page to the screen size and links each part to an enlarged page shown when clicked[2, 6]. It improves on the previous methods in that it is very effective for moving to sub-pages from a complex main page; users can read content by moving to a sub-page. However, it can lose much of its efficiency when the target of the move is itself a complex page. 2.1.4 Vertical Align The vertical-align method optimizes the page to the vertical dimension of the screen[7, 8]. It is superior to the others in that no left/right scrolling occurs, since content is fitted to the horizontal size, although vertical scrolling can become long; this is a real problem for portal pages with many contents. Google provides automatic page division, but because the division points are decided by size, users must move between pages frequently when looking for particular contents. 2.2 Web Content Adaptation Systems Systems have been developed and proposed that, considering the heterogeneous mobile devices of users, automatically adapt desktop-based web contents to each user’s environment. 2.2.1 MobileGate MobileGate proposed a method that transforms web contents into images, so that contents impossible to display on a mobile device can still be served. The whole web page is turned into an image, divided according to the user’s preferred areas, and served after transformation into an image format suitable for the mobile device[9]. 2.2.2 Xadaptor Jiang He et al. proposed Xadaptor, an adaptation system using a rule-based approach for flexibility and scalability[10]. It describes adaptation techniques for various architectures and for basic content types such as text, images, and streaming media, and in particular provides adaptation techniques for tables and frames.
3 A Design of Transcoding Middleware Web content is generally made on the basis of the desktop environment, so it is difficult to use on mobile devices, which have limited resources such as low processing performance, small screen size, low network speed, and a restricted user interface. Recently, some of these limitations have been relieved by the appearance of smartphones, which approach desktop performance thanks to advances in mobile technology, but mobile devices still have problems, such as the readability of web contents and the scrolling needed to show a whole page, due to their special features. In particular, there is the serious problem that developers must rebuild web contents for mobile devices owing to the differences between mobile platforms.
These problems can be resolved through a middleware that reconstructs web contents by extracting content-based item blocks and using the user’s preferences to decide the priority locations of items. Figure 1 shows the structure of the web content transcoding middleware proposed in this paper.
Fig. 1. Structure of the web content transcoding middleware
3.1 Extraction Method of Web Content Item Blocks Web content transcoding is an adaptation technique that automatically transforms web contents for various user devices, providing contents suitable for each device and platform. Most adaptation methods are based on heuristics that find regular patterns through HTML code analysis, split the web content into blocks, summarize it for the size of the mobile screen, and link the blocks to the interface. However, such methods cannot extract items – the content units that users actually recognize – because they depend only on the code or the size of the web content. Desktop-based web content does not consist of one subject but of detailed items such as menu, login, logo, search, news, and so on. Users search for and read the items that interest them and click the hyperlinks attached to those items. If an item is split across different blocks, or several items are grouped into one block, users have difficulty understanding the content of each item; this is why most transcoding methods are less effective on structurally complex web pages such as portal sites. Hence, in this paper we propose a content-based item block extraction method using the Document Object Model. The DOM (Document Object Model) is a structure built as an object tree by parsing an HTML document. By traversing the DOM tree and modifying or deleting particular nodes and tags, the content and the structure of the web content can be analyzed, and through this, the generated web content is optimized for the mobile browsing service environment. Figure 2 shows an example of a DOM tree for web content.
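As an illustration of the item-block idea, the following sketch builds a DOM-like tree with Python's standard html.parser and treats subtrees whose text is "item-sized" as candidate item blocks. The tag list, length thresholds, and sample page are illustrative assumptions, not the authors' published algorithm.

```python
from html.parser import HTMLParser

class Node:
    """One node of a minimal DOM-like tree."""
    def __init__(self, tag, parent=None):
        self.tag, self.parent, self.children, self.text = tag, parent, [], []

    def all_text(self):
        parts = list(self.text)
        for child in self.children:
            parts.append(child.all_text())
        return " ".join(p for p in parts if p.strip())

class TreeBuilder(HTMLParser):
    """Builds the tree while parsing HTML."""
    def __init__(self):
        super().__init__()
        self.root = Node("root")
        self.cur = self.root

    def handle_starttag(self, tag, attrs):
        node = Node(tag, self.cur)
        self.cur.children.append(node)
        if tag not in ("br", "img", "meta", "link", "hr", "input"):  # void tags
            self.cur = node

    def handle_endtag(self, tag):
        n = self.cur
        while n is not self.root:           # pop back to the matching start tag
            if n.tag == tag:
                self.cur = n.parent
                return
            n = n.parent

    def handle_data(self, data):
        self.cur.text.append(data.strip())

def item_blocks(node, min_len=4, max_len=300):
    """Treat a subtree as one item block if its text is 'item-sized';
    otherwise recurse into the children (an assumed heuristic)."""
    text = node.all_text()
    if node.tag in ("div", "td", "li", "ul") and min_len <= len(text) <= max_len:
        return [(node.tag, text)]
    blocks = []
    for child in node.children:
        blocks.extend(item_blocks(child, min_len, max_len))
    return blocks

page = """<html><body>
<div id="menu"><ul><li><a href="/news">News</a></li><li><a href="/mail">Mail</a></li></ul></div>
<div id="login">Login: enter your ID and password here.</div>
<div id="news">Top story: a long article body that would be one content item on the page.</div>
</body></html>"""

builder = TreeBuilder()
builder.feed(page)
for tag, text in item_blocks(builder.root):
    print(tag, "->", text[:40])
```

Each top-level div (menu, login, news) comes out as one item block, whereas a size-based splitter could cut across them.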
Fig. 2. Document object model tree of web contents
3.2 Location Assignment Module Using User Preference In the mobile browsing environment, the generated web content should mitigate the inconvenience of the interface by using the user’s preferences. Which items of the web content will users prefer? The items they like and frequently use. Interface manipulation is the most troublesome part of mobile browsing; personalization services that locate the items a user prefers, as used in desktop web applications, reduce both awkward interface handling and the difficulty of searching for interesting items. The item block extraction method proposed in this paper extracts not size- or code-based blocks but item-based blocks, the units that make up the content. To arrange the extracted blocks into personalized web content, the content is rebuilt by calculating the preference for each item block from the user preference profile, which also includes the user’s interest information. The user preference can be measured by collaborative filtering and profiling. By attaching a preference weight to each web content item, we can assign items appropriate positions on the mobile display screen according to their priority. Figure 3 shows the concept of web content location assignment using the user profile.
Fig. 3. A concept of the web content location assign using user profile
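The preference-based assignment can be sketched as a weighting-and-sorting step over extracted item blocks. The profile fields, categories, and weights below are illustrative assumptions; in the proposed system the weights would come from collaborative filtering and profiling.

```python
# Hypothetical profile: interest weight per item category,
# assumed to be learned by profiling or collaborative filtering.
profile = {"news": 0.9, "search": 0.7, "login": 0.4, "logo": 0.1}

def assign_locations(blocks, profile):
    """Order item blocks for a small screen by descending user preference.
    Unknown categories get a neutral default weight (an assumption)."""
    ranked = sorted(blocks, key=lambda b: profile.get(b["category"], 0.5),
                    reverse=True)
    for position, block in enumerate(ranked, start=1):
        block["position"] = position        # 1 = top of the mobile page
    return ranked

blocks = [
    {"category": "logo", "html": "<img src='logo.png'>"},
    {"category": "news", "html": "<div>Top stories...</div>"},
    {"category": "search", "html": "<form>...</form>"},
]
for b in assign_locations(blocks, profile):
    print(b["position"], b["category"])
```

With this profile, the preferred news item is placed first on the screen and the logo last.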
3.3 Styling Module HTML tags and style sheet languages are widely used for item design in most desktop-based web content. However, applying such content to mobile devices is difficult, because the style sheets are designed for the desktop environment. Hence, the middleware should convert the original web content to suit the mobile environment before presenting content items to users. A module that analyzes and adjusts the style sheets is needed, because style rules that should be kept are mixed with style rules that need to be reconfigured.
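A minimal sketch of such a styling module, assuming a regex-based rewrite of individual CSS declarations; the property list is an illustrative assumption, and a production module would parse the style sheet properly rather than pattern-match it.

```python
import re

# Declarations that commonly break a small-screen layout (illustrative list).
MOBILE_OVERRIDES = {
    "width": "100%",   # fixed pixel widths -> fluid width
    "float": "none",   # multi-column floats -> single column
}

def adapt_css(css, overrides=MOBILE_OVERRIDES):
    """Rewrite the property:value pairs the middleware decides to
    reconfigure, and keep every other declaration as-is."""
    def repl(match):
        prop = match.group(1).strip().lower()
        if prop in overrides:
            return f"{prop}: {overrides[prop]}"
        return match.group(0)                 # untouched declaration
    return re.sub(r"([-\w]+)\s*:\s*([^;}]+)", repl, css)

desktop = "#content { width: 960px; float: left; color: #333; }"
print(adapt_css(desktop))
```

The fixed width and float are reconfigured for the mobile screen while the color declaration is accepted unchanged, mirroring the keep/reconfigure split described above.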
4 Conclusion Thanks to the development of mobile technology, users can now use various desktop-based web contents on mobile devices, making possible a mobile browsing environment in which users can access web contents on any device, anytime, anywhere. However, several problems remain in browsing web contents on mobile devices because of their limitations, such as screen size and supported software, compared with desktop PCs, so service providers develop separate web contents for the mobile device environment. Therefore, in this paper, we proposed an automatic re-authoring middleware that reconstructs web contents by extracting content-based item blocks and using the user’s preferences to prioritize items. For this purpose, we proposed a content-based item block extraction method using the Document Object Model, and a web content reconstruction methodology that reassigns items according to the user preference profile, addressing the interface manipulation that users have found most uncomfortable in mobile browsing. In the future, we will develop and test the proposed middleware, applying the item extraction method and the reconstruction methodology, and carry out an evaluation to verify the proposed middleware. Acknowledgments. This work was supported by the Security Engineering Research Center, granted by the Korea Ministry of Knowledge Economy.
References 1. Jones, G.J.F., Brown, P.J.: Context-Aware Retrieval for Ubiquitous Computing Environments. In: Crestani, F., Dunlop, M.D., Mizzaro, S. (eds.) Mobile HCI International Workshop 2003. LNCS, vol. 2954, pp. 371–374. Springer, Heidelberg (2004) 2. Hwang, Y., Kim, J., Seo, E.: Structure-Aware Web Transcoding for Mobile Devices. IEEE Internet Computing Magazine 7, 14–21 (2003) 3. Lum, W.Y., Lau, F.C.M.: User-Centric Content Negotiation for Effective Adaptation Service in Mobile Computing. IEEE Transactions on Software Engineering 29(12), 1100–1111 (2003) 4. Kaikkonen, A.: Mobile Internet: Past, Present, and the Future. International Journal of Mobile Human Computer Interaction, 29–45 (2009)
5. Roto, V., Popescu, A., Koivisto, A., Vartiainen, E.: Minimap – A Web Page Visualization Method for Mobile Phones. In: CHI 2006 Proceedings on Mobile Surfing and Effects of Wearables, pp. 35–44 (2006) 6. Lam, H., Baudisch, P.: Summary Thumbnails: Readable Overviews for Small Screen Web Browsers. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 681–690 (2005) 7. Roto, V.: Browsing on Mobile Phones, Nokia Research Center, http://www.research.att.com/~rjana/WF12_Paper1.pdf 8. Roto, V., Kaikkonen, A.: Perception of Narrow Web Pages on a Mobile Phone. In: 19th International Symposium on Human Factors in Telecommunication (2003) 9. Park, D., Kang, E., Lim, Y.: An Automatic Mobile Web Generation Method from PC Web Using DFS and W-DFS. In: Gervasi, O., Gavrilova, M.L. (eds.) ICCSA 2007, Part II. LNCS, vol. 4706, pp. 207–215. Springer, Heidelberg (2007) 10. He, J., Gao, T., Yen, I., Bastani, F.: A Flexible Content Adaptation System Using a Rule-Based Approach. IEEE Transactions on Knowledge and Data Engineering 19(1) (2007)
A Study of Context-Awareness RBAC Model Using User Profile on Ubiquitous Computing Bokman Jang, Sungdo Park, Hyokyung Chang, Hyosik Ahn, and Euiin Choi∗ Dept. of Computer Engineering, Hannam University, Daejeon, Korea {bmjang,sdpark,hkjang,hsahn}@dblab.hannam.ac.kr,
[email protected] Abstract. Recently, with the growth of IT techniques, a ubiquitous environment is being formed in which information can be accessed everywhere and at every time using various devices, and in which computers can decide to provide useful services to users. In this computing environment, however, connections are made over wireless networks and from various devices, and reckless approaches to information resources cause trouble for systems. Access authority management is therefore a very important issue, both for information resources and for systems, and requires the establishment of security policies. Existing access control models, however, do not consider the user’s context information, such as the user’s profile. In this paper we propose a context-awareness RBAC model based on a profile of the user’s information, which provides efficient access control through active classification, inference, and judgment about the user who accesses the system and its resources. Keywords: RBAC, User Profile, Ubiquitous computing, Access Control, Context-Awareness.
1 Introduction Recently, with the growth of IT techniques, a ubiquitous environment is being formed in which information can be accessed everywhere and at every time[1, 2]. Since the advent of the ubiquitous environment, users can connect to the computing environment at any time using various devices, and computers can decide to provide useful services to users according to context awareness. In this computing environment, however, connections are made over wireless networks and from various devices, and reckless approaches to information resources cause trouble for systems. Access authority management is therefore a very important issue, both for information resources and for systems, and requires the establishment of security policies. However, existing access control security models grant access to information resources
∗ Corresponding author.
T.-h. Kim et al. (Eds.): FGCN 2010, Part II, CCIS 120, pp. 205–213, 2010. © Springer-Verlag Berlin Heidelberg 2010
or computing systems simply by user ID and password, and thus have the problem of not considering the user’s context information, such as the user’s profile[3, 4]. An access control model for the ubiquitous computing environment differs from existing security access control models, in which authorization is based on the user’s simple information (e.g., ID and password): it must also take into account the user’s location information, the user’s existing information (the user profile), time, and device information in real time, and serve the user’s access requests according to environment information (location, time, etc.). Even when an authorized user is certified to use a service, a non-authorized user may try to access the resource or system as if authorized, so access must be limited to the authorized user who holds certification for the requested information. The access control model therefore has to consolidate and automate resource security by exercising control according to the user’s environment (context information) and user profile. In this paper we propose a dynamic context-awareness RBAC model, based on a profile of the user’s information and environment information, which provides efficient access control through active classification, inference, and judgment about the user who accesses the computing system and its information resources. To this end, we suggest D-PACS (Dimension Profile and Access Control System), which stores the location, time, and frequency information of often-used services and places the services expected to be used at a given location and time into storage, and we propose a context-awareness RBAC model that uses an ontology to model the information and context of the user accessing a resource.
2 Related Work 2.1 RBAC RBAC is an access control model that is popular in the commercial area as an alternative to MAC (Mandatory Access Control) and DAC (Discretionary Access Control). The key feature of RBAC is that users are not directly allowed to perform operations on information; instead, a user obtains access authority through the roles assigned to that user. By managing access authority through the relations between roles and between entities and roles, authority for many users and entities can be managed and authorized efficiently in a distributed computing environment with occasionally changing components. Also, a lower role in the role hierarchy provides authority inheritance, making the authority of the upper role available; using authority inheritance, authorization over hierarchically structured roles can be performed more efficiently. This approach has the advantage of not only simplifying authority management but also offering flexibility in implementing security policy[5, 6, 7]. 2.2 GRBAC (Generalized RBAC) The GRBAC (Generalized RBAC) model uses subject roles, object roles, and environment roles in access control decisions, adding context information to existing role-based
access control. By modeling subject, object, and environment entities as roles, it offers simplicity and flexibility in access control policy. The security manager describes an access authority policy through five attributes: <subject role, object role, environment role, operation, sign>. For example, the expression <doctor, patient history, weekend, read, -> states that a user assigned the doctor role may not read patient histories on weekends. Also, to resolve inexplicit authorization through the role hierarchy, GRBAC uses the authority inheritance concept of the hierarchy, dividing inheritance into three types: standard, strict, and lenient. The GRBAC model handles a user’s access request through context information by defining environment roles, describing the access control policy, and applying transmission rules. However, GRBAC does not resolve the problem of collisions between authorities caused by access authority transmission, and it is difficult to manage because defining the user’s conditions as environment roles produces a large hierarchy[6, 8]. 2.3 Access Control Security Architectures Using Context Information CASA, suggested by the Georgia Institute of Technology, is a middleware-level security platform that associates security with the user’s biometric or location information. Context-aware authorization in CASA authorizes the user by user ID, location, and role, using a biometric-awareness method or an Active Badge sensor. Context-aware access control provides security services, such as authorization and access control, in a mobile computing environment that interacts frequently with various networks, services, and devices. CASA also suggests GPDL for policy decisions and a GRBAC model that grants authorization considering user awareness, time, and place[9, 10, 11]. SOCAM proposes OWL for context-information modeling in middleware and consists of several components.
Context Providers abstract various context information, presented in OWL so that it can be used and shared by other service components. The Context Interpreter provides logical services over the context information. The Context Database stores the context-information ontology and instances for each infrastructure domain. Context-aware Services provide services suitable for the present context, considering context information at various levels. The Service Locating Service announces the locations of context providers and context interpreters, and helps services be located by users or applications[12].
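The five-attribute GRBAC policy of Section 2.2 can be evaluated with a simple matcher. The example policy mirrors the doctor/weekend rule quoted there; the deny-overrides conflict resolution and all names are assumptions for illustration, not part of GRBAC as published.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    subject_role: str   # e.g. "doctor"
    object_role: str    # e.g. "patient_history"
    env_role: str       # e.g. "weekend" or "weekday"
    operation: str      # e.g. "read"
    sign: int           # +1 = permit, -1 = deny

POLICY = [
    Rule("doctor", "patient_history", "weekday", "read", +1),
    Rule("doctor", "patient_history", "weekend", "read", -1),  # the text's example
]

def decide(policy, subject_role, object_role, env_role, operation):
    """Deny-overrides matcher (an assumed conflict-resolution strategy):
    any matching negative rule wins; otherwise a positive match permits."""
    matches = [r.sign for r in policy
               if (r.subject_role, r.object_role, r.env_role, r.operation)
               == (subject_role, object_role, env_role, operation)]
    if -1 in matches:
        return False
    return +1 in matches

print(decide(POLICY, "doctor", "patient_history", "weekend", "read"))  # denied
print(decide(POLICY, "doctor", "patient_history", "weekday", "read"))  # permitted
```

The environment role is what distinguishes the two otherwise identical requests, which is exactly the context sensitivity GRBAC adds over plain RBAC.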
3 Context-Awareness RBAC Model Architecture Using User Profile To serve various services to users based on the context information that arises in the ubiquitous environment, we have to infer which services fit each user well. Generally, a profile is stored to capture the user’s inclination and information. Moreover, because frequently used services have a high probability of being used again, if those services are stored
in the profile, the time needed to use a service can be reduced. A previous technique therefore stored the information and usage times of frequently used services in the profile, but the user’s location and time are also needed to provide more accurate services. For example, suppose service A was used 10 times in a day; if the time of use is 3 P.M., we can infer that the service is mostly used in the afternoon. Time and frequency information are important in ubiquitous computing, but location information is also very important: even for the same service, the frequency of use differs from place to place. We therefore suggest a technique that stores location information together with time and frequency in the profile, and that places the service a user demands at the location and time of use. The suggested system consists of an agent, D-PACS, users, and service providers. Figure 1 shows how the D-PACS system is structured.
Fig. 1. D-PACS Architecture
D-PACS consists of three modules – the service manager, the agent manager, and the inference engine – and each module consists of sub-modules. The service manager is responsible for processing the services that the user has requested or that D-PACS has predicted; the predicted services are stored in the service storage. If a user requests a predicted service from D-PACS, it is looked up directly in the service storage without a search, so the requested service can be found and provided to the user more quickly. The agent manager is responsible for receiving information from the D-PACS manager on the agent and sending it to the inference engine; it also sends services found at the service provider to the service handler on the agent. The analyzer within the inference engine analyzes the context, using the profile and sensor information, to provide services suitable for the user and to process users’ access control, while the predictor estimates the services a user is going to use at other places. In the inference engine, the authorization service module is in charge of both the management and
treatment of the subject’s context information, and of confirming the identification of subjects that may access the context-aware access control system. The authorization service module also dynamically assigns the user’s role by analyzing the access policy and acquiring additional context information about the subject accessing the resource: access location, access time, and spatial area. It then performs access control by comparing and analyzing the security policy against both the currently activated user role and the currently activated context role. The authentication service module monitors the user’s access control: it acquires context information from surrounding sensors and devices in addition to the access information of the approaching subject, performs, through comparison and analysis of the context information about the accessing user’s surroundings, the pre-processing of the authority level of the user who wants access, and supplies the authorization service with data about that user’s authority. The context knowledge model database stores the context information analyzed by the authorization service, the resources the user wants to access, location information for the places a user may stay, and the context model information used to infer context. The User & Role, Constraint Policy, and Context Knowledge Model records represent approval or disapproval of access requests, including the transaction list and each transaction, stored as rules of approval information. The context-aware access control model uses OWL (Web Ontology Language) to collect and analyze context information about the user’s surrounding environment.
4 User Profile and Context Modeling 4.1 Definition of the User Profile and Context Model The user profile specifies information of interest for an end user. The profile is structured into a user information part and a service information part: the user information part stores the user’s information, such as name, inclination, and hobby, and the service information part stores the services that have been used, such as service name and service provider. The structure of the user profile is as follows: - User Information: user name, user ID, personal inclination, hobby, etc. - Service Information: service name, service provider, service context, service frequency value, etc. Because the profile stores how much each service has been used, it stores not only the service used but also when, how, and where it was used, as well as the context in which it was used. The proposed model defines the user’s basic information, location, time, and device using OWL, assuming a hospital environment. Figure 2 shows the OWL source code, and Figure 3 shows the same source viewed in Protégé.
Fig. 2. OWL Source code
Fig. 3. User information Ontology modeling attribute in Protege application
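The two-part profile of Section 4.1 can be sketched as plain data structures. Field names beyond those listed in the text, and the record_use helper, are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class ServiceRecord:
    name: str
    provider: str
    context: str = ""       # context in which the service was last used
    frequency: int = 0      # how often the user has invoked it

@dataclass
class UserProfile:
    user_name: str
    user_id: str
    inclination: str = ""
    hobby: str = ""
    services: list = field(default_factory=list)

    def record_use(self, name, provider, context):
        """Update (or create) the usage record for one service."""
        for s in self.services:
            if s.name == name:
                s.frequency += 1
                s.context = context
                return s
        rec = ServiceRecord(name, provider, context, frequency=1)
        self.services.append(rec)
        return rec

p = UserProfile("Alice", "alice01", inclination="news", hobby="music")
p.record_use("weather", "provider.example", "home, 9am")
p.record_use("weather", "provider.example", "office, 1pm")
print(p.services[0].frequency)  # 2
```

Each use of a service increments its frequency and refreshes the usage context, which is the raw material for the t/a/f/l/e values introduced in Section 4.2.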
4.2 Profile Manipulation in D-PACS We assume that a service demanded at a specific location and time will be used at that place and time again. We therefore use time, location, and frequency information to provide services to users more accurately, and suggest the D-PACS technique, which uses the most recent access time, the access time, the frequency of access, the location value, and the weekend value. These values are stored in the D-PACS profile. - Most recent access time (t): the time when the service was last used; used to find services that have not been used for a long time. - Access time (a): a value from 0 to 23; if the service was used at 1 P.M., the value is 13.
- Frequency of access (f): how many times the user has used the service. - Location value (l): a unique number for the place where the service was used. For example, if a user used service A at home and at the office, the location value of A used at home might be 1 and at the office 10. - Weekend value (e): a value from 1 to 7; if the service was used on Monday, the value is 1. People’s life patterns generally repeat each week, so this value is used to analyze the user’s service frequency per week. D-PACS analyzes and recommends the services that fit the user, based on the inferred context, using the location, time, and weekend information in the user profile, and finds the service whose frequency value is highest. If the requested service already exists in the service storage, the search process is unnecessary: because the service information is already stored there, it can simply be fetched from the service storage and provided to the user, reducing the search time. D-PACS also predicts the services the user is going to use, based on the inferred context, by finding the service whose frequency f is highest at the prediction time j after the current time t. 4.3 Access Control Processing Workflow in D-PACS
① The user approaches the authorization service, through an application, in order to obtain the authority to access a resource.
② The authorization service calls the authentication service to verify the user's current authority.
③ The authentication service collects context information about the user's current surroundings related to the resource access, requesting the user's role and the required context information from the context-aware service.
Fig. 4. Performance architecture of the context-awareness access control model (D-PACS)
B. Jang et al.
④ The collected context information about the user's surroundings is transferred to the authorization service module, which acknowledges its receipt to the authentication service module.
⑤ Using this context information, the authorization service module approaches the context knowledge repository to perform access control and user role assignment, requesting the access policy data and the user's role-assignment information from the repository.
⑥ The authorization service grants access authorization according to the access policy and the role of the user who currently wants to access the resource.
⑦ Having acquired the access authority for the assigned role, the user requests the service.
⑧ The authorization service module forwards the request and accesses the resource at the level permitted by the user's assigned role and level of authority.
⑨ Through the context knowledge repository, the resource is accessed at the appropriate level of access authority, as determined by the assigned authority, the security policy, and the user's current context.
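The authorization decision described in the workflow above can be sketched minimally as a policy lookup; the policy table, roles, resources, and context names below are hypothetical, not taken from the paper:

```python
# Hypothetical policy table: role -> resource -> set of context roles in
# which access is allowed. All names here are illustrative assumptions.
POLICY = {
    "doctor": {"patient_record": {"in_hospital"}},
    "nurse":  {"patient_record": {"in_hospital", "on_duty"}},
}

def authorize(role: str, resource: str, context: str) -> bool:
    """Grant access only if the access policy permits the user's role
    to reach the resource in the current context (a simplified view of
    the repository lookup and grant in the workflow above)."""
    allowed_contexts = POLICY.get(role, {}).get(resource)
    return allowed_contexts is not None and context in allowed_contexts
```

Note that even a user holding a valid role (e.g., "doctor") is denied when the current context role does not satisfy the policy, which is the key difference from plain RBAC.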
5 Conclusion

Ubiquitous computing means an environment in which computers can be used conveniently and naturally in everyday life, without constraints of location or time. In such a distributed computing environment, users can efficiently use and share resources with one another. When resources are shared, an access control model is needed to restrict access to authorized users, and efficient resource use requires an access control model that can block users without authority. Therefore, this paper proposed a model, called D-PACS, that supports more active authorization than existing access control models by adding a function for authorizing collaborative resource control among subjects, in contrast to RBAC and GRBAC. D-PACS is intended as an active access control system based on context awareness suitable for ubiquitous environments. We assign access authority for information resources to roles and assign suitable roles to users, and then provide services so that information resources are available through the valid access authority of an appropriate user. For active context-aware access control, we use context roles expressed quantitatively as relationships between pieces of context information. We will implement active context-aware access control that evaluates the validity of acquired access rights by checking whether the current context role satisfies the security policy (even when the user holds an assigned role). In addition, to adapt services to context transitions, we will automatically detect transitions of the context role and provide the service that must be offered to the user in the given context in accordance with the security policy.

Acknowledgments.
This work was supported by Hannam University Research Fund, 2010.
References

1. Lyytinen, K., Yoo, Y.: Issues and challenges in ubiquitous computing. Communications of the ACM 45, 62–96 (2003)
2. Schilit, B.N., Adams, N., Want, R.: Context-aware computing applications. In: Proc. IEEE Workshop on Mobile Computing Systems and Applications, pp. 85–90 (1994)
3. Potonniée, O.: A decentralized privacy-enabling TV personalization framework. In: 2nd European Conference on Interactive Television: Enhancing the Experience, euroITV 2004 (2004)
4. Klyne, G., Reynolds, F., Woodrow, C., Ohto, H., Hjelm, J., Butler, M.H., Tran, L.: Composite Capability/Preference Profiles (CC/PP): Structure and Vocabularies 1.0. W3C Recommendation, W3C (2004)
5. Ferraiolo, D.F., Cugini, J.A., Kuhn, D.R.: Role-Based Access Control (RBAC): Features and Motivations. In: 11th Annual Computer Security Applications Conference (November 1995)
6. Sandhu, R.S., Coyne, E.J.: Role-Based Access Control Models. IEEE Computer 29(2), 38–47 (1996)
7. Sandhu, R.S., Ferraiolo, D., Kuhn, R.: The NIST Model for Role-Based Access Control: Towards a Unified Standard. In: 5th ACM Workshop on RBAC (August 2000)
8. Neumann, G., Strembeck, M.: An Approach to Engineer and Enforce Context Constraints in an RBAC Environment. In: 8th ACM Symposium on Access Control Models and Technologies (SACMAT 2003), pp. 65–79 (June 2003)
9. Covington, M.J., Moyer, M.J., Ahamad, M.: Generalized role-based access control for securing future applications. In: NISSC, pp. 40–51 (October 2000)
10. Covington, M.J., Fogla, P., Zhan, Z., Ahamad, M.: A Context-Aware Security Architecture for Emerging Applications. In: Annual Computer Security Applications Conference (ACSAC) (2002)
11. Biegel, G., Cahill, V.: A Framework for Developing Mobile, Context-aware Applications. In: IEEE International Conference on Pervasive Computing and Communications (PerCom) (2004)
12. Gu, T., Pung, H.K., Zhang, D.Q.: A Middleware for Building Context-Aware Mobile Services. In: Proceedings of IEEE Vehicular Technology Conference (VTC) (2004)
Challenges and Security in Cloud Computing

Hyokyung Chang and Euiin Choi∗

Dept. of Computer Engineering, Hannam University, Daejeon, Korea
[email protected],
[email protected]

Abstract. People want to solve problems as soon as they arise, and the IT technology called ubiquitous computing makes that easier; cloud computing is a technology that makes it even better and more powerful. Cloud computing, however, is still at the beginning of implementation and use, and it faces many technical challenges and security issues. This paper examines cloud computing security.

Keywords: Cloud Computing, confidentiality and data encryption, data integrity, availability and recovery.
1 Introduction

Not long ago, ubiquitous computing was among the hottest issues in the IT industry. The word ubiquitous, meaning existing or seeming to be found everywhere at the same time, was itself enough to attract people who already enjoy almost every advanced technology and service. However, no software or hardware existed to offer such services, and even if it had, it would have had to be very large and varied, and would have been costly for clients and even for service providers. Cloud Computing has approached this problem from another angle and has largely solved it. The core elements of Cloud Computing are the ubiquity of broadband and wireless networks, falling storage costs, and progressive improvements in Internet computing software [3]. Cloud-service clients will be able to add capacity at peak demand, reduce costs, experiment with new services, and remove unneeded capacity, whereas service providers will increase utilization via multiplexing and can afford larger investments in software and hardware [3]. However, when the existing computing environment changes to a cloud environment, some issues must be solved, and security is one of them. Because Cloud Computing services allocate and manage separate resources to protect data, the level of security is generally higher than when each enterprise or individual manages data directly [2]. However, the damage is greater when an accident does happen, and it can cause many problems for the confidentiality of an enterprise or the privacy of an individual. Thus, for the Cloud Computing industry to flourish, the security issue should be solved first. Section 2 gives the definition of Cloud Computing and its technological features and challenges, Cloud Computing security is discussed in Section 3, and the conclusion and further research are given in Section 4. ∗
Corresponding author.
T.-h. Kim et al. (Eds.): FGCN 2010, Part II, CCIS 120, pp. 214–217, 2010. © Springer-Verlag Berlin Heidelberg 2010
2 Definition of Cloud Computing, Technological Features and Challenges

2.1 Definition of Cloud Computing

Gartner defines cloud computing as "a style of computing where scalable and elastic IT-related capabilities are provided 'as a service' to external customers using Internet technologies" [1,6,9]. Cloud Computing refers both to the applications delivered as services over the Internet and to the hardware and systems software in the datacenters that provide those services [5,8]. The service itself is called Software as a Service (SaaS), and the datacenter hardware and software is called a Cloud. When a Cloud is made available in a pay-as-you-go manner to the public, it is called a Public Cloud, and the service being sold is Utility Computing. A Private Cloud refers to the internal datacenters of a business or other organization that are not made available to the public. Cloud Computing is the sum of SaaS and Utility Computing, but does not include Private Clouds [5]. Figure 1 shows the roles of users and providers in Cloud Computing. The top level can be recursive: SaaS providers can themselves be SaaS users. For instance, a mashup provider of rental maps might be a user of the Craigslist and Google Maps services [5].
Fig. 1. Users and Providers of Cloud Computing
2.2 Cloud Computing Technological Features

Technological features of Cloud Computing infrastructure and services include virtualization, service-oriented software, grid computing technology, management facilities, and power efficiency. Consumers purchase such services in the form of infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), or software-as-a-service (SaaS) and sell value-added services (such as utility services) to users [3].

2.3 Cloud Computing Challenges

There are several issues that Cloud Computing faces. I reviewed what M. D. Dikaiakos et al. described in their research [3] and would like to focus on four of them:
software/hardware architecture, data management, cloud interoperability, and security and privacy. For software/hardware architecture, novel systems and services exploiting a high degree of parallelism should be designed. For storage technology, hard disk drives should be replaced with hybrid hard disks augmented with flash memories, which provide reliable and high-performance data storage. For data management, data can be stored and kept at a secure location, but it can be stored at untrusted hosts as well, which creates enormous risks for data privacy; therefore, new data management approaches shall be employed. Cloud interoperability refers to customers' ability to use the same artifacts, such as management tools, virtual server images, and so on, with a variety of cloud computing providers and platforms. To achieve it, there should be new standards and interfaces that enable enhanced portability and flexibility of virtualized applications. As for security and privacy, in Cloud Computing a datacenter holds information that end users would more traditionally have stored on their own computers. This raises concerns regarding user privacy protection, because users must outsource their data. Furthermore, centralized services in Cloud Computing could affect the privacy and security of users' interactions. Cloud services should preserve data integrity and user privacy, and there should be new protection mechanisms to secure data privacy, resource security, and content copyrights.
3 Cloud Computing Security

This section deals with privacy and data security technology among Cloud Computing technologies. Chul-Soo Lim described it in eight categories in his paper [2], and Traian Andrei [7] covered the same ground, referring to Gartner's report [4]; however, three of them are discussed here: confidentiality and data encryption, data integrity, and availability.

3.1 Confidentiality and Data Encryption

To secure the data of individuals or enterprises, encryption technology must be offered as a basis. In Cloud Computing especially, the availability of the entire system can fall when a large volume of data is encrypted, so a cipher appropriate to the situation should be used; for example, a stream cipher may be employed instead of a block cipher like DES or AES. Also, when something happens to the server storing the keys, many users can no longer access their data, so key management should be studied.

3.2 Data Integrity

The AWS S3 service outage in July 2008 happened because there was no check routine for data exchanged between servers. As this instance shows, it is very important to check for errors in data and messages in Cloud Computing. Since weaknesses have recently been found in MD5 and SHA-1, which are widely used to check integrity, NIST is promoting the development of SHA-3, a new hash algorithm [4].
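As an illustration of such a check routine, the following is a minimal sketch using Python's standard library; the function names are illustrative, and SHA-256 is chosen here because MD5 and SHA-1 are weakening:

```python
import hashlib
import hmac

def integrity_tag(data: bytes) -> str:
    """Compute an integrity tag for data exchanged between servers.
    Uses SHA-256 rather than the weakening MD5/SHA-1."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected_tag: str) -> bool:
    """Check received data against the tag sent with it.
    compare_digest performs a timing-safe comparison."""
    return hmac.compare_digest(integrity_tag(data), expected_tag)
```

The sender transmits the tag alongside the object; the receiver recomputes and compares it before accepting the data, catching corruption of the kind behind the S3 incident.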
3.3 Availability and Recovery

It is very important to study fault tolerance and data recovery technologies in order to prevent service discontinuance or data loss when an incident occurs. Past cloud service outages and data losses are examples of what problems can arise when these mechanisms do not work properly. Andrei also described some security guidelines for Cloud Computing based on what Gartner says [4,10]: privileged user access, regulatory compliance, data location, data segregation, recovery, investigative support, and long-term viability.
4 Conclusion and Further Research

Cloud Computing, together with the Green IT concept, is an innovation not only in Internet services but in the entire IT industry. Its concept is, however, still complicated and confusing, and it has open issues related to SLAs, security and privacy, and power efficiency. This paper briefly described the definition of Cloud Computing, its technological features, and its challenges, and took a closer look at Cloud Computing security among those challenges. Cloud Computing is still at the beginning stage, so new types of security threats will appear as new service models develop. Thus, further study on them must continue.
References

1. Gartner Says Cloud Computing Will Be As Influential As E-business (June 2008), http://www.gartner.com/it/page.jsp?id=707508
2. Lim, C.: Cloud Computing Security Technology. Review of KIISC 19(3), 14–17 (2009)
3. Dikaiakos, M.D., et al.: Cloud Computing: Distributed Internet Computing for IT and Scientific Research. IEEE Internet Computing, pp. 10–13 (September/October 2009)
4. Gartner: Assessing the Security Risks of Cloud Computing (June 2008), http://www.gartner.com/Display_Document?id=685308
5. Armbrust, M., et al.: Above the Clouds: A Berkeley View of Cloud Computing. Technical Report No. UCB/EECS-2009-28 (2009), http://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-28.html
6. Cloud computing, Wikipedia, http://en.wikipedia.org/wiki/Cloud_computing
7. Andrei, T.: Cloud Computing Challenges and Related Security Issues (May 2009), http://www.cs.wustl.edu/~jain/cse571-09/ftp/cloud.pdf
8. http://www.cloudtech.org/2010/07/19/cloud-computing-%E2%80%93-the-emerging-computing-technology/
9. Mirzaei, N.: Cloud Computing (2008), http://grids.ucs.indiana.edu/ptliupages/publications/ReportNarimanMirzaeiJan09.pdf
10. Brodkin, J.: Gartner: Seven cloud-computing security risks, InfoWorld (July 2008), http://www.infoworld.com/article/08/07/02/Gartner_Seven_cloudcomputing_security_Risks1.html
3D Viewer Platform of Cloud Clustering Management System: Google Map 3D

Sung-Ja Choi and Gang-Soo Lee

Hannam University, Dept. of Computer Science, Daejeon, 306-791, Korea
[email protected],
[email protected]

Abstract. A new management framework for the cloud environment is needed as the convergence platform follows changes in computing environments. It is hard for ISVs and small businesses to adapt to the management platforms offered by large vendors. This article suggests a clustering management system for cloud computing environments aimed at ISVs and small-business enterprises. It applies a 3D viewer adapted from Google Map 3D and Google Earth, and is called 3DV_CCMS as an extension of CCMS [1].

Keywords: 3D Viewer, MAP 3D, Cloud, Clustering, RIA.
1 Introduction

A cloud service provides individuals with computing resources by using third-party infrastructure. As with clouds in the sky, the customer is free to use the third-party infrastructure as if it were his own computer and pays only for the amount used; it is a type of service in distributed computing environments [3][4]. For this, a cloud computing provider must ensure that the server cluster can work with directly connected IT resources and with advances in virtualization technique. Cloud services also face serious issues such as halts, failures, secret leaks, and compatibility problems [2], so the importance of SLA and QoS cannot be emphasized too much. Meanwhile, clustering management depends on the management systems of existing hardware suppliers such as HP, IBM, and so on. This makes it difficult for ISVs and small-business cloud service suppliers to build their servers because of the confusion of existing platforms, which hinders efficient management and security. This article suggests a new framework of a cloud clustering management system and business model for cross platforms. It applies RIA and AIR to provide a 3D viewer with Google Map 3D for the clustering management system. It is called 3DV_CCMS, an upgrade of CCMS.
2 Related Researches

We researched various map APIs for a 3D-supporting viewer that can present the clustering zone for the cluster management mash-up service.

T.-h. Kim et al. (Eds.): FGCN 2010, Part II, CCIS 120, pp. 218–222, 2010. © Springer-Verlag Berlin Heidelberg 2010
ESRI ArcGIS. Esri offers tools to build and deploy GIS applications on multiple platforms: publishing and consuming GIS web services using REST or SOAP and creating simple maps from enterprise data. It also provides sample code, configurable templates, online data, and more to help develop useful GIS applications [5].
Google Earth & Maps. Google Maps (formerly Google Local) is a web mapping service application and technology provided by Google, free for non-commercial use, that powers many map-based services, including the Google Maps website, Google Ride Finder, Google Transit, and maps embedded on third-party websites via the Google Maps API. It offers street maps, a route planner for traveling by foot, car, or public transport, and an urban business locator for numerous countries around the world. According to one of its creators (Lars Rasmussen), Google Maps is "a way of organizing the world's information geographically" [7]. Google Earth is a virtual globe, map, and geographic information program. It maps the Earth by the superimposition of images obtained from satellite imagery, aerial photography, and a GIS 3D globe. Google Earth is also available as a browser plug-in, and has been made available for mobile viewers on iPhone and Android [6].
Yahoo Maps. The Yahoo Maps API offers three ways to integrate maps into a website: a simple REST-based API for map images, an Ajax JavaScript API, and a Flash API. This summary refers to the Ajax API, which is closest to the Google and Microsoft models [8].
3 3DV_CCMS

We previously suggested a cloud clustering management platform called CCMS. It has a cloud clustering server for RIA environments, an AIR client node for cross platforms, and an Android mobile node for the manager. However, it needed better accessibility, visibility, and expansion of service environments, so we suggest the server framework called 3DV_CCMS, which provides a 3D viewer by adapting Google Map 3D with these items in mind. The pre-existing functions of CCMS are the monitor collector, performance chart, event log viewer, real-time viewer, automatic failure recovery and its viewer, and the reporting and alarm viewer; they provide clustering management information to the manager. The managed resources are listed below; the full list has 14 items, but Table 1 is abridged. CCMS also adds the following functions to support the 3D viewer. The 3D navigation function presents the clustering zones in a 3D view, making it possible to survey all clustering zones at once and so improving the visibility of the cluster management system. The cluster zone view confirms, on the map, the clustering nodes divided by clustering zone.
Fig. 1. The platform of 3DV_CCMS

Table 1. Resource of management list

resource  explanation
free      Show the statistics of used and free memory
top       Show CPU statistics and check process-level statistics
watch     Output custom programs full-screen periodically
………     …………… (omitted) ……………
iostat    Check CPU and I/O statistics
The clustering node view shows the structure of the clustering nodes and the XML data of clustering zones and nodes as a grid component. The CCMS interface module calls functions such as the performance chart, the real-time view, and the report function through the CCMS interface link module.
4 Implementations

The RIA platform lets users get the most out of rich resources. The server platform of the clustering management system is built on Adobe's RIA platform and interfaces with Google Map 3D, which makes it possible to work easily, access quickly, and view clustering node information.
Fig. 2. The 3D viewer of clustering node zone
This figure shows the 3D viewer with information on the clustering node zones. The Google Maps key can be used from the Flex interface, and the 3D view angle follows the movement of the mouse. A red icon marks an active clustering node zone; a gray icon marks a non-active zone containing a failed clustering node. Clicking a colored icon shows the detailed information of the cluster node.
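The icon convention can be sketched as follows; the heartbeat timestamps and timeout threshold are assumptions for illustration, since the paper does not specify how node failure is detected:

```python
def icon_color(last_heartbeat: float, now: float, timeout: float = 30.0) -> str:
    """Map a node's last heartbeat time to the viewer's icon colour:
    red for an active clustering node zone, gray for one containing a
    failed node. The heartbeat/timeout mechanism is an assumption."""
    return "red" if now - last_heartbeat <= timeout else "gray"
```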
Fig. 3. The interface of CCMS
The clustering node zone information covers the AIR client cluster nodes. This figure shows the real-time monitoring screen of cluster management information, simulated on VMware 7.x with a Linux server.
5 Analysis and Conclusions

3DV_CCMS is a clustering management system platform for cloud computing environments that uses Google Map 3D and the RIA platform. Although research on 3D and the cloud is very active, a management system for small-business enterprises does not yet exist, and a management system that applies 3D is hard to find. The 3DV_CCMS platform can be applied to various business models and guarantees accessibility, visibility, and scalability. In future work, we will study mapping existing cluster management systems for cross-platform environments, and continue research on a 3D engine based on open source.
References

1. Choi, S.-J., Lee, G.-S.: CCMS: A Cloud Clustering Management System for AIR & Android Environments. In: International Conference on Convergence & Hybrid Information Technology 2010, pp. 18–21 (2010)
2. Methods of Improving the Legal System to Promote Cloud Computing, NIPA (2010)
3. Jiao, J., Simpson, T.W., Siddique, Z.: Product family design and platform-based product development: a state-of-the-art review. J. Intell. Manuf. (2007)
4. Gartner, Forest: Sizing the Cloud: Understanding the Opportunities in Cloud Services (2009)
5. http://www.esri.com/getting-started/developers/index.html
6. http://en.wikipedia.org/wiki/Google_Earth
7. http://en.wikipedia.org/wiki/Google_Maps
8. http://www.programmableweb.com/api/yahoo-maps
Output Current-Voltage Characteristic of a Solar Concentrator Dong-Gyu Jeong1, Do-Sun Song2, and Young-Hun Lee3 1
Dept. of Electrical and Electronic Eng., Woosuk University, Samnye-up Wanju-gun, Jeollabuk-do, 565-701, Korea Tel.: +82-63-290-1449; Fax: +82-63-290-1447
[email protected] 2 Dept. of Eng., Woosong Information College, #226-2, Jayang-dong, Dong-gu, Daejeon, 300-71, Korea Tel.: +82-42-629-6381
[email protected] 3 Dept. of Electronic Eng., Hannam University, 133 Ojeong-dong, Daedeok-gu, Daejon, 306-791, Korea Tel.: +82-42-629-7565
[email protected]

Abstract. Solar concentrators have received much attention for their potential applications. In solar concentrators the generated current is directly affected by the hourly-daily variation factor and the number of suns. In this paper the output current-voltage characteristic of a solar concentrator is derived, based on a simplified circuit model of a solar cell. Computer simulation results show that the open-circuit voltage of the concentrator at the output terminals increases logarithmically with the number of suns and the variation factor, and that the maximum output power of the solar concentrator rapidly increases with the number of suns.

Keywords: Solar Concentrator, Output current-voltage characteristic, Open circuit voltage, Number of suns, Variation factor of sunlight intensity.
1 Introduction

A solar concentrator is designed to operate under illumination greater than 1 sun. The sunlight incident on a solar concentrator is focused or guided by optical elements so that a high-intensity sunlight beam concentrates on a small solar cell area. Due to the concentrated sunlight, the concentrator has several potential advantages, including the possibility of lower cost and higher efficiency than a one-sun solar cell. Recently, solar concentrators in space have received growing attention in view of reduced solar array cost, and many types of solar concentrators have been developed for space flights [1,2,3,4]. In some systems, the concentrators are used to focus the sunlight on the receiving area, which is the surface of the solar cell [2,4]; in other systems, they are used to heat a low-molecular-weight gas for a solar rocket [1].

T.-h. Kim et al. (Eds.): FGCN 2010, Part II, CCIS 120, pp. 223–226, 2010. © Springer-Verlag Berlin Heidelberg 2010
In this paper, the output current-voltage characteristic of a solar concentrator is derived from a simplified circuit model of a photovoltaic solar cell. In the simplified model the load current through a solar cell is calculated, and then the open-circuit voltage of a solar concentrator is derived, taking into account the variation of sunlight intensity and the number of suns. Finally, the maximum output power of the solar concentrator is shown in the accompanying figure.
2 Simplified Model of a Solar Cell

A simplified equivalent circuit model of a photovoltaic solar cell becomes analytically manageable if the series resistance Rs and shunt resistance Rsh effects are negligible, i.e., if Rs = 0 and Rsh → ∞. With this simplified model, the output current I passing through the load is

I = Ig − Io(e^(qV/nkT) − 1) = Ig − Io(e^(qV/kT) − 1)  for n = 1,   (1)

where Io represents the reverse saturation current of a diode and Ig the generated current of the solar cell. Generally n has a value between 1 and 2, depending on the defect level. Currently, advanced technologies make it possible to reduce the defect level so that n is close to 1. For simplicity, we assume that the diode has very low defects and n is 1. The short-circuit current Isc of the diode in a solar cell occurs when the voltage V equals 0 volts:

Isc = I(V=0) = Ig = Imax  for the forward-bias power quadrant.   (2)

For an ideal cell, the maximum current Imax through the load is the total current produced in the solar cell by photon excitation, and thus Isc equals the generated current Ig. The generated current Ig (or Isc) depends upon the intensity of the incident sunlight. The open-circuit voltage Voc at the solar cell output occurs when there is no current through the load, and thus it can be approximated by

Voc = V(I=0) = (kT/q) ln(Isc/Io + 1)  for the forward-bias power quadrant
    ≈ (kT/q) ln(Isc/Io)  for Isc ≫ Io.   (3)
3 Current-Voltage Characteristic of a Solar Concentrator

It is well known that the voltage of a solar cell does not depend upon its size and remains fairly constant with changing sunlight intensity, whereas the current in the cell is almost directly proportional to sunlight intensity. Several kinds of factors affect the incident sunlight intensity in solar concentrators. One of them is the hourly-daily variation factor η, which can be set between 0 and 1 [5]. We also consider a solar concentrator with M suns. Taking the variation factor η and the number of suns M into account, the generated current Ig in Eq. (1) must be replaced by ηMIsc. The output current I of the solar concentrator in Eq. (1) then becomes

I = ηMIsc − Io(e^(qV/kT) − 1).   (4)

The open-circuit voltage in Eq. (3) likewise changes to V′oc:

V′oc = (kT/q) ln(ηMIsc/Io) = Voc + (kT/q) ln(ηM).   (5)

The open-circuit voltage of a concentrator thus increases logarithmically with the number of suns M multiplied by the variation factor η.
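Equations (4) and (5) can be checked numerically with a short sketch; the short-circuit and saturation current values used below are illustrative assumptions:

```python
import math

KT_OVER_Q = 0.02585  # thermal voltage kT/q at room temperature, in volts

def output_current(v, isc, io, eta=1.0, m=1):
    """Eq. (4): I = eta*M*Isc - Io*(exp(qV/kT) - 1)."""
    return eta * m * isc - io * (math.exp(v / KT_OVER_Q) - 1.0)

def open_circuit_voltage(isc, io, eta=1.0, m=1):
    """Eq. (5): V'oc = (kT/q) * ln(eta*M*Isc / Io)."""
    return KT_OVER_Q * math.log(eta * m * isc / io)
```

At V = 0 the current reduces to ηM·Isc, and raising M from 1 to 10 increases the open-circuit voltage by (kT/q) ln 10 ≈ 60 mV, in agreement with Eq. (5).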
4 Simulation Results

Computer simulations of the output current-voltage characteristic and the power of a solar concentrator are carried out to verify the effects of the number of suns M. For simplicity we assume that the temperature T of the solar cell remains constant. Single-junction silicon solar cells generally produce approximately 0.5–0.6 volts of Voc at room temperature. We also assume that Voc is 0.6 volts, the variation factor is 1, and Isc at 1 sun is 18 mA/cm². Fig. 1 shows the output current as a function of the output terminal voltage V, and Fig. 2 the output power as a function of V. The solid, dotted, and solid-dotted lines in both figures represent the characteristic of the concentrator at 1, 5, and 10 suns, respectively.
Fig. 1. Current-voltage characteristic of a solar concentrator
Fig. 2. Power characteristic of a solar concentrator
In Fig. 1 the output current I over the range 0 to Vmax increases linearly with the number of suns M, where Vmax is the voltage at which the concentrator delivers maximum power. On the other hand, the open-circuit voltage Voc of the concentrator increases logarithmically with the number of suns M, in the range marked in the figure. The maximum output power in Fig. 2 increases rapidly with the number of suns M; the maximum power points lie on the slashed line.
5 Conclusion

In this paper the output current-voltage characteristic of a solar concentrator was derived. Incident sunlight intensity suffers hourly and daily variation, and in a solar concentrator the generated current is directly affected by the number of suns M. The open-circuit voltage of the concentrator at the output terminals increases logarithmically with the number of suns M multiplied by the sunlight-variation factor η, and the maximum output power of the solar concentrator increases rapidly with M. The derived output characteristics of a solar concentrator can be usefully applied to the design of solar power systems.
Acknowledgments This paper was financially supported by The Small and Medium Business Administration in Korea as the name of ‘The 2010 SanHak Consortium’.
References

[1] Grossman, G., Williams, G.: Inflatable Concentrators for Solar Propulsion and Dynamic Space Power. J. of Solar Energy Engineering 122, 229–236 (1990)
[2] Eskenazi, M.: Design, Analysis & Testing of the Cellsaver Concentrator for Spacecraft Solar Arrays
[3] Stribling, R.: Hughes 702 Concentrator Solar Array. In: 28th IEEE PVSC, pp. 25–29 (2000)
[4] Ralph, E.L., et al.: G-STAR Space Solar Array Design. In: 28th IEEE PVSC (September 2000)
[5] Gillette, G., Pierpoint, W., Treado, S.: A general illuminance model for daylight availability. J. of IES, 330–340 (1984)
Efficient Thread Labeling for Monitoring Programs with Nested Parallelism

Ok-Kyoon Ha, Sun-Sook Kim, and Yong-Kee Jun

Department of Informatics, Specialized Graduate School for Aerospace Engineering, Gyeongsang National University, Jinju 660-701, South Korea
[email protected], [email protected], [email protected]

Abstract. It is difficult and cumbersome to detect data races that occur in an execution of a parallel program. Any on-the-fly race detection technique using Lamport’s happened-before relation needs a thread labeling scheme for generating unique identifiers that maintain logical concurrency information for the parallel threads. NR labeling is an efficient thread labeling scheme for the fork-join program model with nested parallelism, because its efficiency depends only on the nesting depth for every fork and join operation. This paper presents an improved NR labeling, called e-NR labeling, in which every thread generates its label by inheriting the pointer to its ancestor list from its parent thread or by updating the pointer in a constant amount of time and space. This labeling is more efficient than NR labeling, because its efficiency does not depend on the nesting depth of the fork and join operations. Some experiments were performed with OpenMP programs having nesting depths of three or four and maximum parallelism varying from 10,000 to 1,000,000. The results show that e-NR labeling is 5 times faster than NR labeling and 4.3 times faster than OS labeling in the average time for creating and maintaining the thread labels. In the average space required for labeling, it is 3.5 times smaller than NR labeling and 3 times smaller than OS labeling.

Keywords: thread labeling, happened-before relation, logical concurrency, data races, parallel programs, nested parallelism, NR labeling.
1 Introduction
Data races [3,9] in parallel programs [13] are a kind of concurrency bug that occurs when two parallel threads access a shared memory location without proper
This research was supported by the MKE (The Ministry of Knowledge Economy), Korea, under the ITRC (Information Technology Research Center) support program supervised by the NIPA (National IT Industry Promotion Agency) (NIPA-2010(C1090-1031-0007)). Corresponding author: in Gyeongsang National University, he is also involved in the Research Institute of Computer and Information Communication (RICIC).
T.-h. Kim et al. (Eds.): FGCN 2010, Part II, CCIS 120, pp. 227–237, 2010. © Springer-Verlag Berlin Heidelberg 2010
inter-thread coordination and at least one of the accesses is a write. Such races must be detected for debugging, because they may lead to unpredictable results. However, it is difficult and cumbersome to detect data races in an execution of a parallel program. Any on-the-fly race detection [5,8] technique needs a representation of Lamport’s happened-before relation [7] for generating unique identifiers that maintain logical concurrency information for parallel threads. NR labeling [6,11], an efficient thread labeling scheme, supports the fork-join program model with nested parallelism and generates concurrency information using nest regions for nesting threads. The efficiency of NR labeling depends only on the nesting depth N of a parallel program, because it creates and maintains a list of ancestor information for a thread on every fork and join operation. Thus, NR labeling requires O(N) time for creating and maintaining thread labels, and the storage space for the concurrency information is O(V + NT) in the worst case, where V is the number of shared variables in the parallel program. This paper presents an improved NR labeling, called e-NR labeling, which does not depend on the nesting depth of a parallel program; it requires only a constant amount of time and space. The basic idea is to use a pointer in a thread label to refer to its ancestor list by inheritance or update. For this reference, we change the list in a thread label into a pointer to the ancestor list of each created thread. The storage space for the concurrency information is O(V + T), and the time to generate a unique identifier for each thread is O(1) in the worst case. Some experiments were performed on OpenMP programs with nesting depths of three or four and maximum parallelism varying from 10,000 to 1,000,000.
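The pointer-inheritance idea can be sketched as follows. This is an illustrative fragment under our own naming, not the paper's implementation: instead of copying the list of ancestor nest regions at every fork (O(N) in the nesting depth N), each child label stores a shared pointer to its parent's label, so a fork costs O(1) time and space.

```c
#include <stddef.h>

/* A thread label that keeps a shared pointer to its ancestor chain
   instead of a private copy of the whole ancestor list. */
struct label {
    int region_id;               /* identifier of this thread's nest region */
    const struct label *parent;  /* shared pointer, never a copied list */
};

/* O(1) fork: the child inherits a reference to the parent's ancestor chain. */
struct label fork_label(const struct label *parent, int region_id) {
    struct label child = { region_id, parent };
    return child;
}
```

Because each fork only stores one pointer, the space per thread is constant regardless of how deeply the fork-join regions are nested, which is the property the O(V + T) bound above relies on.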
The results show that e-NR labeling is 5 times faster than NR labeling and 4.3 times faster than OS labeling in the average time for creating and maintaining the thread labels. In the average space required for labeling, it is 3.5 times smaller than NR labeling and 3 times smaller than OS labeling. This paper is organized as follows. Section 2 illustrates the notion of nested parallel programs and our motivation. Section 3 presents e-NR labeling for programs with nested parallelism. In Section 4, we analyze the efficiency of the labeling scheme used for on-the-fly race detection in synthetic programs. In the last section, we conclude the paper and present future work.
2 Background
Parallel or multi-threaded programming is a natural consequence of the fact that multi-processor and multi-core systems are already ubiquitous. This section illustrates the notion of nested loops and introduces our motivation for generating thread concurrency information in parallel programs.

2.1 Parallel Loop Programs
OpenMP [10,12] is a typical model for scalable and portable parallel programs. It employs the simple fork-join execution model that makes the program efficiently
#pragma omp parallel for private( i )
for (i = 1; i