Lecture Notes in Computer Science
Commenced Publication in 1973
Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen

Editorial Board
David Hutchison, Lancaster University, UK
Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler, University of Surrey, Guildford, UK
Jon M. Kleinberg, Cornell University, Ithaca, NY, USA
Friedemann Mattern, ETH Zurich, Switzerland
John C. Mitchell, Stanford University, CA, USA
Moni Naor, Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz, University of Bern, Switzerland
C. Pandu Rangan, Indian Institute of Technology, Madras, India
Bernhard Steffen, University of Dortmund, Germany
Madhu Sudan, Massachusetts Institute of Technology, MA, USA
Demetri Terzopoulos, University of California, Los Angeles, CA, USA
Doug Tygar, University of California, Berkeley, CA, USA
Moshe Y. Vardi, Rice University, Houston, TX, USA
Gerhard Weikum, Max-Planck Institute of Computer Science, Saarbruecken, Germany
4804
Robert Meersman Zahir Tari (Eds.)
On the Move to Meaningful Internet Systems 2007: CoopIS, DOA, ODBASE, GADA, and IS OTM Confederated International Conferences CoopIS, DOA, ODBASE, GADA, and IS 2007 Vilamoura, Portugal, November 25-30, 2007 Proceedings, Part II
Volume Editors

Robert Meersman
Vrije Universiteit Brussel (VUB), STARLab
Bldg G/10, Pleinlaan 2, 1050 Brussels, Belgium
E-mail: [email protected]

Zahir Tari
RMIT University, School of Computer Science and Information Technology
Bld 10.10, 376-392 Swanston Street, VIC 3001, Melbourne, Australia
E-mail: [email protected]

Library of Congress Control Number: 2007939491
CR Subject Classification (1998): H.2, H.3, H.4, C.2, H.5, D.2.12, I.2, K.4
LNCS Sublibrary: SL 3 – Information Systems and Applications, incl. Internet/Web and HCI
ISSN 0302-9743
ISBN-10 3-540-76835-1 Springer Berlin Heidelberg New York
ISBN-13 978-3-540-76835-7 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. Springer is a part of Springer Science+Business Media springer.com © Springer-Verlag Berlin Heidelberg 2007 Printed in Germany Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper SPIN: 12193035 06/3180 543210
Volume Editors: Robert Meersman, Zahir Tari
CoopIS: Francisco Curbera, Frank Leymann, Mathias Weske
DOA: Pascal Felber, Aad van Moorsel, Calton Pu
ODBASE: Tharam Dillon, Michele Missikoff, Steffen Staab
GADA: Pilar Herrero, Daniel S. Katz, María S. Pérez, Domenico Talia
IS: Mário Freire, Simão Melo de Sousa, Vitor Santos, Jong Hyuk Park
OTM 2007 General Co-chairs’ Message
OnTheMove 2007, held in Vilamoura, Portugal, November 25–30, further consolidated the growth of the conference series that was started in Irvine, California in 2002, and then held in Catania, Sicily in 2003, in Cyprus in 2004 and 2005, and in Montpellier last year. It continues to attract a diversifying and representative selection of today's worldwide research on the scientific concepts underlying new computing paradigms that of necessity must be distributed, heterogeneous and autonomous yet meaningfully collaborative. Indeed, as such large, complex and networked intelligent information systems become the focus and norm for computing, it is clear that there is an acute and increasing need to address and discuss in an integrated forum the implied software and system issues as well as methodological, semantic, theoretical and application issues. As we all know, e-mail, the Internet, and even video conferences are not sufficient for effective and efficient scientific exchange. This is why the OnTheMove (OTM) Federated Conferences series was created to cover the increasingly wide yet closely connected range of fundamental technologies such as data and Web semantics, distributed objects, Web services, databases, information systems, workflow, cooperation, ubiquity, interoperability, mobility, grid and high-performance systems. OnTheMove aspires to be a primary scientific meeting place where all aspects of the development of Internet- and Intranet-based systems in organizations and for e-business are discussed in a scientifically motivated way. This sixth 2007 edition of the OTM Federated Conferences event therefore again provided an opportunity for researchers and practitioners to understand and publish these developments within their individual as well as their broader contexts.
Originally the federative structure of OTM was formed by the co-location of three related, complementary and successful main conference series: DOA (Distributed Objects and Applications, since 1999), covering the relevant infrastructure-enabling technologies; ODBASE (Ontologies, DataBases and Applications of SEmantics, since 2002), covering Web semantics, XML databases and ontologies; and CoopIS (Cooperative Information Systems, since 1993), covering the application of these technologies in an enterprise context through, e.g., workflow systems and knowledge management. In 2006 a fourth conference, GADA (Grid computing, high-performAnce and Distributed Applications), was added as a main symposium, and this year the same happened with IS (Information Security). Both started off as successful workshops at OTM, the first covering the large-scale integration of heterogeneous computing systems and data resources with the aim of providing a global computing space, the second covering the issues of security in complex Internet-based information systems. Each of these five conferences encourages researchers to treat their respective topics within a
framework that incorporates jointly (a) theory, (b) conceptual design and development, and (c) applications, in particular case studies and industrial solutions. Following and expanding the model created in 2003, we again solicited and selected quality workshop proposals to complement the more "archival" nature of the main conferences with research results in a number of selected and more "avant-garde" areas related to the general topic of distributed computing. For instance, the so-called Semantic Web has given rise to several novel research areas combining linguistics, information systems technology, and artificial intelligence, such as the modeling of (legal) regulatory systems and the ubiquitous nature of their usage. We were glad to see that no fewer than eight of our earlier successful workshops (notably AweSOMe, CAMS, SWWS, ORM, OnToContent, MONET, PerSys, RDDS) re-appeared in 2007 with a second or third edition, and that four brand-new workshops emerged, were selected and hosted, and were successfully organized by their respective proposers: NDKM, PIPE, PPN, and SSWS. We know that, as before, workshop audiences will productively mingle with one another and with those of the main conferences, as is already visible from the overlap in authors! The OTM organizers are especially grateful for the leadership and competence of Pilar Herrero in managing this complex process into a success for the fourth year in a row. A special mention for 2007 is to be made of the third and enlarged edition of the OnTheMove Academy (formerly called the Doctoral Consortium Workshop), our "vision for the future" in research in the areas covered by OTM.
Its 2007 organizers, Antonia Albani, Torben Hansen and Johannes Maria Zaha, three young and active researchers, guaranteed once more the unique interactive formula to bring PhD students together: research proposals are submitted for evaluation; selected submissions and their approaches are presented by the students in front of a wider audience at the conference, and are independently and extensively analyzed and discussed in public by a panel of senior professors. This year these were once more Johann Eder and Maria Orlowska, under the guidance of Jan Dietz, the incumbent Dean of the OnTheMove Academy. The successful students pay only a minimal fee for the Doctoral Symposium itself and are also awarded free access to all other parts of the OTM program (in fact their attendance is largely sponsored by the other participants!). All five main conferences and the associated workshops share the distributed aspects of modern computing systems, and the resulting application-pull created by the Internet and the so-called Semantic Web. For DOA 2007, the primary emphasis stayed on the distributed object infrastructure; for ODBASE 2007, it became the knowledge bases and methods required for enabling the use of formal semantics; for CoopIS 2007, the topic as usual was the interaction of such technologies and methods with management issues, such as occur in networked organizations; for GADA 2007, the topic was the scalable integration of heterogeneous computing systems and data resources with the aim of providing a global computing space; and last but not least, in the relative newcomer IS 2007 the emphasis was on information security in the networked society. These subject areas overlap naturally and many submissions in fact also treated an envisaged
mutual impact among them. As for the earlier editions, the organizers wanted to stimulate this cross-pollination by a shared program of famous keynote speakers: this year we were proud to announce Mark Little of Red Hat, York Sure of SAP Research, Donald Ferguson of Microsoft, and Dennis Gannon of Indiana University. As always, we also encouraged multiple event attendance by providing all authors, also those of workshop papers, with free access or discounts to one other conference or workshop of their choice. We received a total of 362 submissions for the five main conferences and 241 for the workshops. Not only may we indeed again claim success in attracting an increasingly representative volume of scientific papers, but such a harvest of course allows the Program Committees to compose a higher-quality cross-section of current research in the areas covered by OTM. In fact, in spite of the larger number of submissions, the Program Chairs of each of the main conferences decided to accept only approximately the same number of papers for presentation and publication as in 2004 and 2005 (i.e., on average one paper out of every three to four submitted, not counting posters). For the workshops, the acceptance rate varied but was much stricter than before, consistently about one accepted paper for every two to three submitted. Also for this reason, we separated the proceedings into four books with their own titles, two for the main conferences and two for the workshops, and we are grateful to Springer for their suggestions and collaboration in producing these books and CD-Roms. The reviewing process by the respective Program Committees was again performed very professionally and each paper in the main conferences was reviewed by at least three referees, with arbitrated e-mail discussions in the case of strongly diverging evaluations.
It may be worthwhile to emphasize that it is an explicit OnTheMove policy that all conference Program Committees and Chairs make their selections completely autonomously from the OTM organization itself. Continuing a costly but nice tradition, the OnTheMove Federated Event organizers decided again to make all proceedings available to all participants of conferences and workshops, independently of one's registration to a specific conference or workshop. Each participant also received a CD-Rom with the full combined proceedings (conferences + workshops). The General Chairs are once more especially grateful to the many people directly or indirectly involved in the set-up of these federated conferences, who contributed to making them a success. Few people realize what a large number of individuals have to be involved, and what a huge amount of work, and sometimes risk, the organization of an event like OTM entails. Apart from the persons mentioned above, we therefore in particular wish to thank our 17 main conference PC Co-chairs (GADA 2007: Pilar Herrero, Daniel Katz, María S. Pérez, Domenico Talia; DOA 2007: Pascal Felber, Aad van Moorsel, Calton Pu; ODBASE 2007: Tharam Dillon, Michele Missikoff, Steffen Staab; CoopIS 2007: Francisco Curbera, Frank Leymann, Mathias Weske; IS 2007: Mário Freire, Simão Melo de Sousa, Vitor Santos, Jong Hyuk Park) and our 36 workshop PC Co-chairs (Antonia Albani, Susana Alcalde, Azzedine Boukerche, George Buchanan, Roy Campbell, Werner Ceusters, Elizabeth Chang, Antonio
Coronato, Simon Courtenage, Ernesto Damiani, Skevos Evripidou, Pascal Felber, Fernando Ferri, Achille Fokoue, Mario Freire, Daniel Grosu, Michael Gurstein, Pilar Herrero, Terry Halpin, Annika Hinze, Jong Hyuk Park, Mustafa Jarrar, Jiankun Hu, Cornel Klein, David Lewis, Arek Kasprzyk, Thorsten Liebig, Gonzalo Méndez, Jelena Mitic, John Mylopoulos, Farid Naït-Abdesselam, Sjir Nijssen, the late Claude Ostyn, Bijan Parsia, Maurizio Rafanelli, Marta Sabou, Andreas Schmidt, Simão Melo de Sousa, York Sure, Katia Sycara, Thanassis Tiropanis, Arianna D'Ulizia, Rainer Unland, Eiko Yoneki, Yuanbo Guo). All of them, together with their many PC members, did a superb and professional job in selecting the best papers from the large harvest of submissions. We also must heartily thank José Valente de Oliveira for the efforts in arranging facilities at the venue and coordinating the substantial and varied local activities needed for a multi-conference event such as ours. And we must all be grateful also to Ana Cecilia Martinez-Barbosa for researching and securing the sponsoring arrangements, to our extremely competent and experienced Conference Secretariat and technical support staff in Antwerp, Daniel Meersman, Ana-Cecilia, and Jan Demey, and last but not least to our energetic Publications Chair and loyal collaborator of many years in Melbourne, Kwong Yuen Lai, this year vigorously assisted by Vidura Gamini Abhaya and Peter Dimopoulos. The General Chairs gratefully acknowledge the academic freedom, logistic support and facilities they enjoy from their respective institutions, Vrije Universiteit Brussel (VUB) and RMIT University, Melbourne, without which such an enterprise would not be feasible. We do hope that the results of this federated scientific enterprise contribute to your research and your place in the scientific network. August 2007
Robert Meersman Zahir Tari
Organizing Committee
The OTM (On The Move) 2007 Federated Conferences, which involve CoopIS (Cooperative Information Systems), DOA (Distributed Objects and Applications), GADA (Grid computing, high-performAnce and Distributed Applications), IS (Information Security) and ODBASE (Ontologies, Databases and Applications of Semantics), are proudly supported by RMIT University (School of Computer Science and Information Technology) and Vrije Universiteit Brussel (Department of Computer Science).
Executive Committee

OTM 2007 General Co-chairs: Robert Meersman (Vrije Universiteit Brussel, Belgium) and Zahir Tari (RMIT University, Australia)
GADA 2007 PC Co-chairs: Pilar Herrero (Universidad Politécnica de Madrid, Spain), Daniel Katz (Louisiana State University, USA), María S. Pérez (Universidad Politécnica de Madrid, Spain), and Domenico Talia (Università della Calabria, Italy)
CoopIS 2007 PC Co-chairs: Francisco Curbera (IBM, USA), Frank Leymann (University of Stuttgart, Germany), and Mathias Weske (University of Potsdam, Germany)
DOA 2007 PC Co-chairs: Pascal Felber (Université de Neuchâtel, Switzerland), Aad van Moorsel (Newcastle University, UK), and Calton Pu (Georgia Tech, USA)
IS 2007 PC Co-chairs: Mário M. Freire (University of Beira Interior, Portugal), Simão Melo de Sousa (University of Beira Interior, Portugal), Vitor Santos (Microsoft, Portugal), and Jong Hyuk Park (Kyungnam University, Korea)
ODBASE 2007 PC Co-chairs: Tharam Dillon (University of Technology Sydney, Australia), Michele Missikoff (CNR, Italy), and Steffen Staab (University of Koblenz-Landau, Germany)
Publication Co-chairs: Kwong Yuen Lai (RMIT University, Australia) and Vidura Gamini Abhaya (RMIT University, Australia)
Local Organizing Chair: José Valente de Oliveira (University of Algarve, Portugal)
Conferences Publicity Chair: Jean-Marc Petit (INSA, Lyon, France)
Workshops Publicity Chair: Gonzalo Méndez (Universidad Complutense de Madrid, Spain)
Secretariat: Ana-Cecilia Martinez Barbosa, Jan Demey, and Daniel Meersman
CoopIS 2007 Program Committee Marco Aiello Bernd Amann Alistair Barros Zohra Bellahsene Boualem Benatallah Salima Benbernou Djamal Benslimane Klemens Böhm Laura Bright Christoph Bussler Malu Castellanos Vincenzo D'Andrea Umesh Dayal Susanna Donatelli Marlon Dumas Schahram Dustdar Johann Eder Rik Eshuis Opher Etzion Klaus Fischer Avigdor Gal Paul Grefen Mohand-Said Hacid Geert-Jan Houben Michael Huhns Paul Johannesson Dimka Karastoyanova Rania Khalaf Bernd Krämer Akhil Kumar
Dominik Kuropka Tiziana Margaria Maristella Matera Massimo Mecella Ingo Melzer Jörg Müller Wolfgang Nejdl Werner Nutt Andreas Oberweis Mike Papazoglou Cesare Pautasso Barbara Pernici Frank Puhlmann Manfred Reichert Stefanie Rinderle Rainer Ruggaber Kai-Uwe Sattler Ralf Schenkel Timos Sellis Brigitte Trousse Susan Urban Willem-Jan Van den Heuvel Wil Van der Aalst Maria Esther Vidal Jian Yang Kyu-Young Whang Leon Zhao Michael zur Muehlen
DOA 2007 Program Committee
GADA 2007 Program Committee Jemal Abawajy Akshai Aggarwal Sattar B. Sadkhan Almaliky Artur Andrzejak Amy Apon Oscar Ardaiz Costin Badica Rosa M. Badia Mark Baker Angelos Bilas Jose L. Bosque Juan A. Bota Blaya Pascal Bouvry Rajkumar Buyya Santi Caballé Llobet
Mario Cannataro Jesús Carretero Charlie Catlett Pablo Chacin Isaac Chao Jinjun Chen Félix J. García Clemente Carmela Comito Toni Cortes Geoff Coulson Jose Cunha Ewa Deelman Marios Dikaiakos Beniamino Di Martino Jack Dongarra
Markus Endler Alvaro A.A. Fernandes Maria Ganzha Felix García Angel Lucas Gonzalez Alastair Hampshire Jose Cunha Neil P. Chue Hong Eduardo Huedo Jan Humble Liviu Joita Kostas Karasavvas Chung-Ta King Kamil Kuliberda Laurent Lefevre Ignacio M. Llorente Francisco Luna Edgar Magana Gregorio Martinez Ruben S. Montero Reagan Moore Mirela Notare Hong Ong Mohamed Ould-Khaoua Marcin Paprzycki Manish Parashar
Jose M. Peña Dana Petcu Beth A. Plale José Luis Vázquez Poletti María Eugenia de Pool Bhanu Prasad Thierry Priol Víctor Robles Rizos Sakellariou Manuel Salvadores Alberto Sanchez Hamid Sarbazi-Azad Franciszek Seredynski Francisco José da Silva e Silva Antonio F. Gómez Skarmeta Enrique Soler Heinz Stockinger Alan Sussman Elghazali Talbi Jordi Torres Cho-Li Wang Adam Wierzbicki Laurence T. Yang Albert Zomaya
IS 2007 Program Committee J.H. Abawajy André Adelsbach Emmanuelle Anceaume José Carlos Bacelar Manuel Bernardo Barbosa João Barros Carlo Blundo Phillip G. Bradford Thierry Brouard Han-Chieh Chao Hsiao-Hwa Chen Ilyoung Chong Stelvio Cimato Nathan Clarke Miguel P. Correia
Cas Cremers Gwenaël Doërr Paul Dowland Mahmoud T. El-Hadidi Huirong Fu Steven Furnell Michael Gertz Swapna S. Gokhale Vesna Hassler Lech J. Janczewski Wipul Jayawickrama Vasilis Katos Hyun-Kook Kahng Hiroaki Kikuchi Paris Kitsos
Kwok-Yan Lam Deok-Gyu Lee Sérgio Tenreiro de Magalhães Henrique S. Mamede Evangelos Markatos Arnaldo Martins Paulo Mateus Sjouke Mauw Natalie Miloslavskaya Edmundo Monteiro Yi Mu José Luís Oliveira Nuno Ferreira Neves Maria Papadaki Manuela Pereira Hartmut Pohl Christian Rechberger Carlos Ribeiro Vincent Rijmen José Ruela
Henrique Santos Biplab K. Sarker Ryoichi Sasaki Jörg Schwenk Paulo Simões Filipe de Sá Soares Basie von Solms Stephanie Teufel Luis Javier Garcia Villalba Umberto Villano Jozef Vyskoc Carlos Becker Westphall Liudong Xing Chao-Tung Yang Jeong Hyun Yi Wang Yufeng Deqing Zou André Zúquete
ODBASE 2007 Program Committee Andreas Abecker Harith Alani Jürgen Angele Franz Baader Sonia Bergamaschi Alex Borgida Mohand Boughanem Paolo Bouquet Jean-Pierre Bourey Christoph Bussler Silvana Castano Paolo Ceravolo Vassilis Christophides Philipp Cimiano Oscar Corcho Ernesto Damiani Ling Feng Asunción Gómez-Pérez Benjamin Habegger Mounira Harzallah Andreas Hotho
Farookh Hussain Dimitris Karagiannis Manolis Koubarakis Georg Lausen Maurizio Lenzerini Alexander Löser Gregoris Mentzas Riichiro Mizoguchi Boris Motik John Mylopoulos Wolfgang Nejdl Eric Neuhold Yves Pigneur Axel Polleres Li Qing Wenny Rahayu Rajugan Rajagopalapillai Rainer Ruggaber Heiko Schuldt Eva Soderstrom Wolf Siberski
Sergej Sizov Chantal Soule-Dupuy Umberto Straccia Heiner Stuckenschmidt VS Subrahmanian York Sure
Francesco Taglino Robert Tolksdorf Guido Vetere Roberto Zicari
OTM Conferences 2007 Additional Reviewers Baptiste Alcalde Soeren Auer Abdul Babar Luis Manuel Vilches Blázquez Ralph Bobrik Ngoc (Betty) Bao Bui David Buján Carballal Nuno Carvalho Gabriella Castelli Carlos Viegas Damasio Jörg Domaschka Viktor S. W. Eide Michael Erdmann Abdelkarim Erradi Peter Fankhauser Alfio Ferrara Fernando Castor Filho Ganna Frankova Peng Han Alexander Hornung Hugo Jonker Rüdiger Kapitza Alexander Lazovik Thorsten Liebig João Leitão
Baochuan Lu Giuliano Mega Paolo Merialdo Patrick S. Merten Maja Milicic Dominic Müller Linh Ngo José Manuel Gómez Pérez Sasa Radomirovic Hans P. Reiser Thomas Risse Kurt Rohloff Romain Rouvoy Bernhard Schiemann Jan Schlüter Martin Steinert Patrick Stiefel Boris Villazón Terrazas Hagen Voelzer Jochem Vonk Franklin Webber Wei Xing Christian Zimmer
Table of Contents – Part II
GADA 2007 International Conference (Grid Computing, High-Performance and Distributed Applications) GADA 2007 PC Co-chairs’ Message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1177
Keynote Service Architectures for e-Science Grid Gateways: Opportunities and Challenges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1179 Dennis Gannon, Beth Plale, and Daniel A. Reed
Data and Storage Access Control Management in Open Distributed Virtual Repositories and the Grid . . . . . . . . . . . . 1186 Adam Wierzbicki, Łukasz Żaczek, Radosław Adamus, and Edgar Głowacki Transforming the Adaptive Irregular Out-of-Core Applications for Hiding Communication and Disk I/O . . . . . . . . . . . . 1200 Changjun Hu, Guangli Yao, Jue Wang, and Jianjiang Li Adaptive Data Block Placement Based on Deterministic Zones (AdaptiveZ) . . . . . . . . . . . . 1214 J.L. Gonzalez and Toni Cortes Keyword Based Indexing and Searching over Storage Resource Broker . . . . . . . . . . . . 1233 Adnan Abid, Asif Jan, Laurent Francioli, Konstantinos Sfyrakis, and Felix Schuermann
Networks eCube: Hypercube Event for Efficient Filtering in Content-Based Routing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1244 Eiko Yoneki and Jean Bacon Combining Incomparable Public Session Keys and Certificateless Public Key Cryptography for Securing the Communication Between Grid Participants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1264 Elvis Papalilo and Bernd Freisleben
Collaborative Grid Environment and Scientific Grid Applications (Short Papers) A Service-Oriented Platform for the Enhancement and Effectiveness of the Collaborative Learning Process in Distributed Environments . . . . . . . . . . . . 1280 Santi Caballé, Fatos Xhafa, and Thanasis Daradoumis Social Networking to Support Collaboration in Computational Grids . . . . . . . . . . . . 1288 Oscar Ardaiz, Isaac Chao, and Ramón Sangüesa A Policy Based Approach to Managing Shared Data in Dynamic Collaborations . . . . . . . . . . . . 1296 Surya Nepal, John Zic, and Julian Jang Grid Service Composition in BPEL for Scientific Applications . . . . . . . . . . . . 1304 Onyeka Ezenwoye, S. Masoud Sadjadi, Ariel Cary, and Michael Robinson Efficient Management of Grid Resources Using a Bi-level Decision-Making Architecture for "Processable" Bulk Data . . . . . . . . . . . . 1313 Imran Ahmad and Shikharesh Majumdar Towards an Open Grid Marketplace Framework for Resources Trade . . . . . . . . . . . . 1322 Nejla Amara-Hachmi, Xavier Vilajosana, Ruby Krishnaswamy, Leandro Navarro, and Joan Manuel Marques
Scheduling A Hybrid Algorithm for Scheduling Workflow Applications in Grid Environments (ICPDP) . . . . . . . . . . . . 1331 Bogdan Simion, Catalin Leordeanu, Florin Pop, and Valentin Cristea Contention-Free Communication Scheduling for Group Communication in Data Parallelism . . . . . . . . . . . . 1349 Jue Wang, Changjun Hu, and Jianjiang Li SNMP-Based Monitoring Agents and Heuristic Scheduling for Large-Scale Grids . . . . . . . . . . . . 1367 Edgar Magaña, Laurent Lefevre, Masum Hasan, and Joan Serrat HARC: The Highly-Available Resource Co-allocator . . . . . . . . . . . . 1385 Jon MacLaren
Middleware Assessing a Distributed Market Infrastructure for Economics-Based Service Selection . . . . . . . . . . . . 1403 René Brunner, Isaac Chao, Pablo Chacin, Felix Freitag, Leandro Navarro, Oscar Ardaiz, Liviu Joita, and Omer F. Rana
Grid Problem Solving Environment for Stereology Based Modeling . . . . . . . . . . . . 1417 Július Parulek, Marek Ciglan, Branislav Šimo, Miloš Šrámek, Ladislav Hluchý, and Ivan Zahradník Managing Dynamic Virtual Organizations to Get Effective Cooperation in Collaborative Grid Environments . . . . . . . . . . . . 1435 Pilar Herrero, José Luis Bosque, Manuel Salvadores, and María S. Pérez
Data Analysis Sidera: A Cluster-Based Server for Online Analytical Processing . . . . . . . . . . . . 1453 Todd Eavis, George Dimitrov, Ivan Dimitrov, David Cueva, Alex Lopez, and Ahmad Taleb Parallel Implementation of a Neural Net Training Application in a Heterogeneous Grid Environment . . . . . . . . . . . . 1473 Rafael Menéndez de Llano and José Luis Bosque
Scheduling and Management (Short Papers) Generalized Load Sharing for Distributed Operating Systems . . . . . . . . . . . . 1489 A. Satheesh and S. Bama An Application-Level Service Control Mechanism for QoS-Based Grid Scheduling . . . . . . . . . . . . 1497 Claudia Di Napoli and Maurizio Giordano Fine Grained Access Control with Trust and Reputation Management for Globus . . . . . . . . . . . . 1505 M. Colombo, F. Martinelli, P. Mori, M. Petrocchi, and A. Vaccarelli Vega: A Service-Oriented Grid Workflow Management System . . . . . . . . . . . . 1516 R. Tolosana-Calasanz, J.A. Bañares, P. Álvarez, and J. Ezpeleta
Information Security (IS) 2007 International Symposium IS 2007 PC Co-chairs’ Message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1527
Keynote Cryptography: Past, Present and Future . . . . . . . . . . . . 1529 Whitfield Diffie
Access Control and Authentication E-Passport: Cracking Basic Access Control Keys . . . . . . . . . . . . . . . . . . . . . 1531 Yifei Liu, Timo Kasper, Kerstin Lemke-Rust, and Christof Paar
Managing Risks in RBAC Employed Distributed Environments . . . . . . . . . . . . 1548 Ebru Celikel, Murat Kantarcioglu, Bhavani Thuraisingham, and Elisa Bertino STARBAC: Spatiotemporal Role Based Access Control . . . . . . . . . . . . 1567 Subhendu Aich, Shamik Sural, and A.K. Majumdar Authentication Architecture for eHealth Professionals . . . . . . . . . . . . 1583 Helder Gomes, João Paulo Cunha, and André Zúquete
Intrusion Detection On RSN-Oriented Wireless Intrusion Detection . . . . . . . . . . . . 1601 Alexandros Tsakountakis, Georgios Kambourakis, and Stefanos Gritzalis A Hybrid, Stateful and Cross-Protocol Intrusion Detection System for Converged Applications . . . . . . . . . . . . 1616 Bazara I.A. Barry and H. Anthony Chan Toward Sound-Assisted Intrusion Detection Systems . . . . . . . . . . . . 1634 Lei Qi, Miguel Vargas Martin, Bill Kapralos, Mark Green, and Miguel García-Ruiz
System and Services Security End-to-End Header Protection in Signed S/MIME . . . . . . . . . . . . 1646 Lijun Liao and Jörg Schwenk Estimation of Behavior of Scanners Based on ISDAS Distributed Sensors . . . . . . . . . . . . 1659 Hiroaki Kikuchi, Masato Terada, Naoya Fukuno, and Norihisa Doi A Multi-core Security Architecture Based on EFI . . . . . . . . . . . . 1675 Xizhe Zhang, Yong Xie, Xuejia Lai, Shensheng Zhang, and Zijian Deng
Network Security Intelligent Home Network Authentication: Home Device Authentication Using Device Certification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1688 Deok-Gyu Lee, Yun-kyung Lee, Jong-wook Han, Jong Hyuk Park, and Im-Yeong Lee Bayesian Analysis of Secure P2P Sharing Protocols . . . . . . . . . . . . . . . . . . . 1701 Esther Palomar, Almudena Alcaide, Juan M. Estevez-Tapiador, and Julio C. Hernandez-Castro
Network Coding Protocols for Secret Key Distribution . . . . . . . . . . . . 1718 Paulo F. Oliveira and João Barros 3-Party Approach for Fast Handover in EAP-Based Wireless Networks . . . . . . . . . . . . 1734 Rafa Marin, Pedro J. Fernandez, and Antonio F. Gomez
Malicious Code and Code Security SWorD – A Simple Worm Detection Scheme . . . . . . . . . . . . 1752 Matthew Dunlop, Carrie Gates, Cynthia Wong, and Chenxi Wang Prevention of Cross-Site Scripting Attacks on Current Web Applications . . . . . . . . . . . . 1770 Joaquin Garcia-Alfaro and Guillermo Navarro-Arribas Compiler Assisted Elliptic Curve Cryptography . . . . . . . . . . . . 1785 M. Barbosa, A. Moss, and D. Page
Trust and Information Management

Trust Management Model and Architecture for Context-Aware Service
Platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1803
Ricardo Neisse, Maarten Wegdam, Marten van Sinderen, and Gabriele Lenzini

Mobile Agent Protection in E-Business Application: A Dynamic
Adaptability Based Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1821
Salima Hacini, Zizette Boufaïda, and Haoua Cheribi

Business Oriented Information Security Management – A Layered
Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1835
Philipp Klempt, Hannes Schmidpeter, Sebastian Sowa, and Lampros Tsinas

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1853
GADA 2007 PC Co-chairs’ Message
This volume contains the papers presented at GADA 2007, the International Symposium on Grid Computing, High-Performance and Distributed Applications. The purpose of the GADA series of conferences, held in the framework of the OnTheMove Federated Conferences (OTM), is to bring together researchers, developers, professionals and students in order to advance research and development in the areas of grid computing and distributed systems and applications. This year's conference was held in Vilamoura, Algarve, Portugal, on November 29–30.

In the last decade, grid computing has developed into one of the most important topics in the computing field. The research area of grid computing has made particularly rapid progress in the last few years, due to the increasing number of scientific applications that demand intensive use of computational resources and a dynamic, heterogeneous infrastructure. At the same time, business applications of grid systems have emerged in several sectors of our societies, showing the importance of grids for supporting large communities of users. Within this framework, the GADA workshop arose in 2004 as a forum for researchers in grid computing who aimed to extend their background in this area and, more specifically, for those who used grid environments for managing and analyzing data. Both GADA 2004 and GADA 2005 were successful events, thanks to the large number of high-quality papers received as well as the exchange of experiences and ideas in the associated forums. Because of this demonstrated success, GADA was upgraded to a conference within the On The Move Federated Conferences and Workshops (OTM 2006). GADA 2006 covered a broader set of disciplines, although grid computing retained a key role among the main topics of the conference. GADA 2007 continues as an OTM conference.
The objective of grid computing is the integration of heterogeneous computing systems and data resources with the aim of providing a global computing space. The achievement of this goal is creating revolutionary changes in the field of computation, because it offers services that enable resource sharing across networks, with data being one of the most important resources and data management services a critical component for advanced applications. Thus, data access, management and analysis within grid and distributed environments constitute a main part of the conference. Therefore, the main goal of GADA 2007 was to provide a framework in which a community of researchers, developers and users can exchange ideas and works related to grid, high-performance and distributed applications and systems. The second goal of GADA 2007 was to create interaction between grid computing researchers and the other OTM attendees. The 16 revised full papers presented were carefully reviewed and selected from a total of 55 submissions with an acceptance rate of 29%. Additionally,
ten revised short papers and six poster papers were also accepted. Each submitted paper was reviewed by two reviewers and one of the Program Chairs, and a total of 72 reviewers were involved in the review process. Topics of the accepted full papers include data and storage, networks, scheduling, middleware, and data analysis. Topics of the accepted short papers include collaborative grid environments, grid applications, scheduling, and management. We would like to thank the members of the Program Committee and their external reviewers for their hard and expert work. We would also like to thank the OTM General Co-chairs, the workshop organizers, the external reviewers, the authors, and the local organizers for their contributions to the success of the conference. August 2007
Pilar Herrero
Daniel Katz
María S. Pérez
Domenico Talia
Service Architectures for e-Science Grid Gateways: Opportunities and Challenges

Dennis Gannon, Beth Plale, and Daniel A. Reed

Department of Computer Science, School of Informatics, Indiana University, Bloomington, Indiana, 47405
[email protected], [email protected]
Renaissance Computing Institute, 100 Europa Drive Suite 540, Chapel Hill, North Carolina, 27517
[email protected]

Abstract. An e-Science Grid Gateway is a portal that allows a scientific collaboration to use the resources of a Grid in a way that frees its members from the complex details of Grid software and middleware. The goal of such a gateway is to give users access to community data and applications that can be used in the language of their science. Each user has a private data and metadata space, access to data provenance, and tools to use or compose experimental workflows that combine standard data analysis, simulation and post-processing tools. In this paper we describe the underlying Grid service architecture for such an e-Science gateway, discuss some of the challenges that confront the design of Grid Gateways, and outline a few new research directions.
1 Introduction

Grid technology has matured to the point where many different communities are actively engaged in building distributed information infrastructures to support discipline-specific problem solving. This distributed Grid infrastructure is often based on Web and Web service technology that brings tools and data to the users in ways that enable modalities of collaborative work not previously possible. The vanguard of these communities is focused on specific scientific or engineering disciplines such as weather prediction, geophysics, earthquake engineering, biology and high-energy physics, but Grids can also be used by any community that requires distributed collaboration and the need to share networked resources. For example, a collaboration may be as modest as a handful of people at remote locations who agree to share computers and databases to conduct a study and publish a result. Or a community may be a company that needs different divisions of the organization at remote locations to work together more closely by sharing access to a set of private services organized as a corporate Grid. A Grid portal is the user-facing gateway to such a Grid. It is composed of the tools that the community needs to solve problems. It is usually designed so that users interact with it in terms of their domain discipline, and the details of Grid services and distributed computing are hidden from view.

R. Meersman and Z. Tari et al. (Eds.): OTM 2007, Part II, LNCS 4804, pp. 1179–1185, 2007. © Springer-Verlag Berlin Heidelberg 2007
As we have described in [1], there are five common components to most Grid Gateways.

1. Data search and discovery. The vast majority of scientific collaborations revolve around shared access to discipline-specific data collections. Users want to be able to search for data as they would search for web pages using an Internet index. The difference here is that the data is described in terms of metadata, and instead of keywords, the search may involve terms from a domain-specific ontology and contain range values. The data may be generated by streams from instruments, and consequently the query may be satisfied only by means of a monitoring agent.
2. Security. Communities like to protect group and individual data. Grid resource providers like to have an understanding of who is using their resources.
3. User-private data storage. Because each user must "login" and authenticate with the portal, the portal can provide the user with a personalized view of their data and resources. This is a common feature of all portals in the commercial sector. In the e-Science domain, private data can consist of a searchable index of experimental results and data gathered from the community resources.
4. Tools for designing and conducting computational experiments. Scientists need to be able to run analysis processes over collected data. Often these analysis processes are single computations, and often they are complex composed scenarios of preprocessing, analysis, post-processing and visualization. These experimental workflows are often repeated hundreds of times with slightly different parameter settings or input data. A critical feature of any e-Science Gateway is the capability to compose workflows, to add new computational analysis programs to a catalog of workflow components, and a simple way to run the workflows with the results automatically stored in the user's private data space.
5. Tracking data provenance. The key to the scientific method is repeatability of experiments.
As we become more data and computationally oriented in our approach to science, it is important that we support ways to discover exactly how a data set was generated. What were the processes that were involved? What versions of the software were used? What is the quality of the input data? A good e-Science gateway should automatically support this data provenance tracking and management so that these questions can be asked. A service architecture to support these capabilities is illustrated in Figure 1. This SOA is based on the LEAD [2] gateway (https://portal.leadproject.org) to the TeraGrid project [3]. The goal of the LEAD Gateway is to provide atmospheric scientists and students with a collection of tools that enables them to conduct research on "real time" weather forecasts of severe storms. The gateway is composed of a portal server that is a container for "portlets" which provide the user interfaces to the cyberinfrastructure services listed above. The data search and discovery portlets in the portal server talk to the Data Catalog Service.
This service is an index of data that is known to the system. The user's personal data is cataloged in the MyLEAD service [4]. MyLEAD is a metadata catalog. The large data files are stored on the back-end Grid resources under management of the Data Management Service. The MyLEAD Agent manages the connection between the metadata and the data. Workflow in this architecture is described in terms of dataflow graphs, where the nodes of the graph represent computations and the edges represent data dependencies. The actual computations are programs that are pre-installed and run on the back-end Grid computing resources. However, the workflow engine, which sequences the execution of each computational task, sees these computations as just more Web services. Unlike the other services described in this section, these application services are virtual in that they are created on demand by an application factory, and each application service instance controls the execution of a specific application on a specific computing resource. The application service instances are responsible for fetching the data needed for each invocation of the application, submitting the job to the compute engine, and monitoring the execution of the application. The pattern of behavior is simple. When a user creates or selects an experiment workflow template, the required input data is identified and then bound to create a concrete instance of the workflow. Some of the input data comes from user input, and the rest comes from a search of the Data Catalog or the user's MyLEAD space. When the execution begins, the workflow engine sends work requests for specific applications to a fault tolerance/scheduling service that picks the most appropriate resources, and an Application Factory (not shown) instantiates the required application service.
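The engine's sequencing of a dataflow graph can be sketched as a simple topological execution loop. This is an illustrative sketch only, not LEAD's actual engine (which orchestrates remote Web services); the graph layout and the `run_service` callback are invented names:

```python
# Sketch of dataflow-graph execution: nodes are computations, edges are
# data dependencies, and a node runs once all of its inputs are ready.
from collections import deque

def execute_workflow(graph, run_service):
    """graph: {node: [dependencies]}; run_service(node, inputs) -> output."""
    pending = {n: len(deps) for n, deps in graph.items()}   # unmet deps
    dependents = {n: [] for n in graph}
    for n, deps in graph.items():
        for d in deps:
            dependents[d].append(n)

    ready = deque(n for n, c in pending.items() if c == 0)
    results = {}
    while ready:
        node = ready.popleft()
        inputs = [results[d] for d in graph[node]]
        results[node] = run_service(node, inputs)   # e.g. a Web service call
        for succ in dependents[node]:
            pending[succ] -= 1
            if pending[succ] == 0:
                ready.append(succ)
    return results

# Example: a linear forecast chain.
results = execute_workflow(
    {"ingest": [], "forecast": ["ingest"], "visualize": ["forecast"]},
    lambda node, inputs: f"{node}-output")
```

A real engine would dispatch ready nodes concurrently and handle failures; the loop above only captures the dependency-driven ordering.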
Fig. 1. The LEAD Service Oriented Architecture consists of 12 persistent services. The main services shown here include the Portal Server, the Data Catalog, the user's MyLEAD metadata catalog, the workflow engine and fault tolerance services, the Data Provenance service, and the Data Management Service.
A central component of the system is an event notification bus, which is used to convey workflow status and control messages throughout the system. The application service instances generate “event notifications” that provide details about the data being staged, the status of the execution of the application and the location of the final results. The MyLEAD agent listens to this message stream and logs the important events to the user’s metadata catalog entry for the experiment. The workflow engine, provenance collection service and data management service all hear these notifications, which also contain valuable metadata about the intermediate and final data products. The Data Management service is responsible for migrating data from the Compute Engine to long-term storage. The provenance collection service [5] records all of this information and organizes it so that data provenance queries and statistics are easily satisfied.
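The publish/subscribe pattern of the notification bus can be sketched minimally as follows. This is an illustrative in-process model only; the real bus is a distributed messaging service, and the topic and event fields below are invented:

```python
# Minimal sketch of an event notification bus: application services
# publish status events, and listeners (a metadata agent, a provenance
# collector, a data manager) subscribe by topic and all "hear" each event.

class NotificationBus:
    def __init__(self):
        self.subscribers = {}                     # topic -> callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, event):
        for cb in self.subscribers.get(topic, []):
            cb(event)                             # deliver to every listener

provenance_log = []
bus = NotificationBus()
bus.subscribe("workflow.status", provenance_log.append)
bus.publish("workflow.status",
            {"task": "forecast", "state": "DONE", "output": "gridftp://host/out"})
```

Because delivery is by topic rather than by recipient, new listeners (such as a monitoring dashboard) can be added without changing the publishing services.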
2 Lessons Learned and Research Challenges

The LEAD Gateway and its associated cyberinfrastructure have been in service for about one year. It has been used by several hundred individuals, and we have learned a great deal about where it has worked well and where it has failed. In this section of the paper we outline some of the key research challenges we see ahead.

2.1 Scientific Data Collections

The primary reason users cite for a discipline-specific science gateway is access to data. Digital data and its curation are a central component of the U.S. National CyberInfrastructure vision [6], and they play a central role in many Grid projects. The principal research challenge is simply stated: how can digital library and cyberinfrastructure discovery systems like Science Gateways work together to support seamless storage and use of data? While it has been common practice for a long time to store digital scientific data, this data is often never accessed a second time because it is poorly cataloged. To be valuable, the data must remain 'alive', that is, accessible for discovery and participation in later discovery scenarios. To make data available for second use, it must have sufficient metadata to describe it, and it must be cataloged and easily discoverable by advanced scientific search engines. The research challenges are in real-time provenance gathering, data quality assessment, semantics and representation. Data provenance lies at the heart of the scientific process. If we publish a scientific result, we should also make the data available for others to evaluate. It should be possible to trace the data through the entire process that generated it. It should be possible to evaluate the quality of the original "source" data used in the experiments. Where and when was that data generated? Was it from trusted sources? What is the quality of the data analysis engines that were used to process that data?
Data discovery can only be facilitated by having better methods of generating metadata, as well as tooling to understand and mediate metadata schemas. As we develop technologies that allow us to preserve data for very long periods, we must also provide the technology to index and catalog it. Keyword search is not effective for scientific data unless we have an ontology that provides a recognized and well-agreed-upon meaning for the vocabulary used in our metadata.
2.2 Continuous Queries

An exciting new area of Grid and cyberinfrastructure research involves "continuous queries," which allow a user to pose a set of conditions concerning the state of the world and a set of processing actions or responses to take when the conditions are met. The underlying Grid technology required to carry out continuous queries is rapidly evolving. In the case of enterprise Grids, it already exists in specialized forms. For example, eBay has a Grid infrastructure that is constantly reacting to data that is in a state of nearly constant change, and those changes affect real human transactions every second. Financial investment firms build substantial Grid infrastructures (which they are often reluctant to describe) to manage specialized continuous queries that track micro-fluctuations in complex market metrics. Pushed to its extreme, this idea can have profound implications for e-Science. This concept is important to the LEAD project, and the research has led to the capability to pose queries like "Watch the weather radar over Chicago for the next 8 hours. If a 'supercell' cloud is detected, launch a storm forecast workflow on the TeraGrid supercomputers." The LEAD capability is still far from expressing such queries in English, but a new spin-off project is investigating the semantic and syntactic formulation of these queries and their architectural implications for the underlying Grid's service architecture. This concept has broad implications for other domains. In the area of drug discovery, one can pose queries that scan the constantly changing genomic and chemical ligand databases looking for patterns of protein binding associated with sought-after gene-expression inhibitors. Grid systems themselves are dynamic entities that must be constantly monitored.
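The supercell scenario above can be sketched as a loop that evaluates a condition over a stream of observations and triggers a response workflow on each match. This is an illustrative sketch only; the radar feed, the threshold detector, and the workflow launcher are hypothetical stand-ins for real detection algorithms and Grid services:

```python
# Sketch of a continuous query: scan a stream of observations, evaluate
# a condition on each one, and fire a response action when it holds.

def continuous_query(observations, condition, action):
    """Return the results of the action for every matching observation."""
    triggered = []
    for obs in observations:
        if condition(obs):
            triggered.append(action(obs))
    return triggered

# Example: watch radar reflectivity over a region and launch a forecast
# when a supercell-like signature (here, a naive threshold) appears.
radar_feed = [{"region": "Chicago", "reflectivity": 35},
              {"region": "Chicago", "reflectivity": 62}]
launches = continuous_query(
    radar_feed,
    condition=lambda obs: obs["reflectivity"] > 55,
    action=lambda obs: f"launch forecast workflow for {obs['region']}")
```

A production system would evaluate such conditions incrementally over live streams rather than over a finished list, but the query/condition/action decomposition is the same.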
A continuous query can search for patterns of resource utilization that may indicate an impending system failure and, when one is detected, launch a task to begin a system reconfiguration and notify concerned people. The more we explore this idea, the more we see its potential for the next generation of Grid applications.

2.3 Grid Reliability and Fault Recovery

A well-kept secret of large-scale Grids is that they are not as reliable as the original designers predicted. The fact is that systems like the TeraGrid, which involve large, heterogeneous collections of petascale computers and data resources, are complex, dynamical systems. There are many causes for failure. The middleware fails under heavy load. Loads on individual systems can vary wildly. Processors fail constantly. Networks fail. High-performance data movement systems like GridFTP are often poorly installed and fail frequently. Large distributed Grids in the enterprise world require "operations centers" staffed 24x7 to monitor services and watch for failures. The effect of this dynamic instability on Grid workflow can be profound. If a workflow depends upon a dozen or so distributed operations and if, under heavy load, the individual computation and data movement functions are only operating at 95% reliability, the multiplicative effect (0.95^12 ≈ 0.54) is to reduce the workflow reliability to about 50%. The LEAD workflows have demonstrated this failure rate. Consequently, the workflow management must incorporate a sophisticated fault recovery system. While we have had some initial successes with a basic fault recovery mechanism, this problem still requires much more research. Basic questions include:
1. Can we build an application/system monitoring infrastructure that can use adaptivity to guarantee reliable workflow completion?
2. Can we dynamically redeploy failed components or regenerate lost data?
3. What are the limits to our expectation of reliability? We all know TCP is designed to provide a reliable stream service on top of unreliable packet delivery, but we also know that TCP is not, and cannot be, perfect.
4. Can we predict the likelihood of failure based on past experience with similar workflows? Can we build smart schedulers that use past experience with the failures of a particular workflow to allocate the best set of resources to minimize failure?

Virtualization can provide a new approach to both fault tolerance and recovery. If an application can be deployed on a standard virtual machine and that VM is hosted on a number of resources on the Grid, we can deploy the application on demand. By instrumenting the VM we can monitor the application in a non-intrusive way, and if it begins to fail, we can checkpoint it for restart on another host. A critical research challenge is to build scalable VMs that run well on petascale clusters.

2.4 New Modalities for User Interfaces

The standard web portal interface is not ideal for science that requires highly interactive and collaborative capabilities. There are several factors that are going to drive fundamental changes in the user interface. The first of these is the evolution of web technology from its current form to what is commonly called Web 2.0. Put in the simplest terms, Web 2.0 refers to user-side client software, some of which runs in the browser but much of which runs in new application frameworks, such as Microsoft's Silverlight and Adobe's Apollo, that are based on rich, high-bandwidth interaction with remote services. The second factor that will drive change is multicore processor architectures.
With a 32-core desktop machine or laptop, the use of speech, gesture, and 3D exploration of data visualization all become much more realistic. Another scenario enabled by multicore client-side resources is moving more of the service processing to the client itself. For example, can a user's client have a local cache of his or her metadata collections? Grid workflow orchestration often needs a remote service to be the workflow engine, but it may be possible for the workflow engine to clone itself and run a version locally on the desktop, thus relieving the load on a shared engine. An interesting research challenge is to consider the possibility of dynamically migrating the cyberinfrastructure services to the client side while the user is there. For example, both the workflow engine and a clone of the fault recovery services could be made local. If we move away from a browser-web-portal model in favor of stand-alone clients, this could optimize service response time and improve reliability.

2.5 Social Networking for Science

Social networking wikis and web services are bringing communities together by providing new tools for people to interact with each other. Services like Facebook allow groups of users to create shared resource groups and networks. The LEAD gateway and others would provide a much richer experience for their users if there
was more capability for dynamic creation of special interest groups with tools to support collaboration. Another significant innovation is the concept of human-guided search. Search services like ChaCha provide a mechanism where users can pose queries in English that are first parsed and matched against a knowledgebase of frequently asked questions. If a match fails, a human expert is enlisted to help guide the search. In the case of a discipline science, this can be a powerful way to help accelerate the pace of projects that require interdisciplinary knowledge. In a Grid environment expert users can guide other scientists to valuable services or workflows. A type of scientific social tagging can be used to create domain specific ontologies.
3 Conclusions

In this short paper we have tried to present some ideas for future research that have come out of our work on the LEAD project. The list is far from complete, and there are many others working on similar problems in this area. Our primary emphasis has been to look at the problems of making data reuse easier; supporting continuous queries that can act like agents, searching for important features in dynamic data and triggering responses; automatic fault tolerance in dynamic Grids; understanding how Web 2.0 and multicore will change our user interactions; and the impact of social networking. Each of these research challenges is critical for us to advance to the next level of science gateways.
References

1. Gannon, D., et al.: Building Grid Portals for e-Science: A Service Oriented Architecture. In: Grandinetti, L. (ed.) High Performance Computing and Grids in Action. IOS Press, Amsterdam (to appear, 2007)
2. Droegemeier, K., et al.: Service-Oriented Environments for Dynamically Interacting with Mesoscale Weather. Computing in Science & Engineering 7(6), 12–29 (2005)
3. Catlett, C.: The Philosophy of TeraGrid: Building an Open, Extensible, Distributed TeraScale Facility. In: Proceedings of the 2nd IEEE/ACM International Symposium on Cluster Computing and the Grid (2002)
4. Pallickara, S.L., Plale, B., Jensen, S., Sun, Y.: Structure, Sharing and Preservation of Scientific Experiment Data. In: 3rd IEEE International Workshop on Challenges of Large Applications in Distributed Environments, Research Triangle Park, NC (2005)
5. Simmhan, Y., Plale, B., Gannon, D.: Querying Capabilities of the Karma Provenance Framework. Concurrency and Computation: Practice and Experience (2007)
6. National Science Foundation Cyberinfrastructure Council: Cyberinfrastructure Vision for 21st Century Discovery. NSF document 0728 (March 2007)
Access Control Management in Open Distributed Virtual Repositories and the Grid

Adam Wierzbicki1,∗, Łukasz Żaczek1, Radosław Adamus1,2, and Edgar Głowacki1

1 Polish-Japanese Institute of Information Technology
2 Computer Engineering Department, Technical University of Lodz
Abstract. The management of access control (AC) policies in open distributed systems (ODS), like the Grid, P2P systems, or Virtual Repositories (databases or data grids), can take two extreme approaches. The first extreme approach is centralized management of the policy (which still allows a distribution of AC policy enforcement). This approach requires full trust in a central entity that manages the AC policy. The second extreme approach is fully distributed: every ODS participant manages his own AC policy. This approach can limit the functionality of an ODS, making it difficult to provide synergetic functions that could be designed in a way that would not violate the AC policies of autonomous participants. This paper presents a method of AC policy management that allows a partially trusted central entity to maintain global AC policies, and individual participants to maintain their own AC policies. The proposed method resolves conflicts between the global and individual AC policies. The proposed management method has been implemented in an access control system for a Virtual Repository that is used in two European 6th FP projects: eGov-Bus and VIDE. The impact of this access control system on performance has been evaluated, and it has been found that the proposed AC method can be used in practice. Keywords: Access Control Management, Role-based access control, Virtual Repository, data grid.
1 Introduction

Open distributed systems (ODS), like computational grids, P2P systems, or distributed Virtual Repositories (databases or data grids), create unique challenges for access control. This is due to the fact that information and services in such systems are provided by autonomous entities that contribute to the system. An access control (AC) mechanism for an ODS is faced with the following two extreme, alternative design choices: either (1) use centralized management of the access control policy (although its enforcement could still be distributed), or (2) allow each autonomous participant to manage its own access control policy.
∗ This research has been supported by the European Commission under the 6th FP project eGov-Bus, IST-4-026727-ST.
R. Meersman and Z. Tari et al. (Eds.): OTM 2007, Part II, LNCS 4804, pp. 1186–1199, 2007. © Springer-Verlag Berlin Heidelberg 2007
In this paper, we focus on the management aspect of an AC policy: how such a policy is initially specified and maintained, rather than on the enforcement of AC policies. The latter aspect has been the subject of much previous work, which has established that AC policy enforcement can be distributed [1-4]. However, AC policy management has not been considered much in previous research. In particular, we attempt to tackle the question: how to reach a balance between the two extremes of centralized and completely distributed AC policy management. In the centralized approach, a central entity maintains an AC policy that must balance the requirements of all autonomous ODS participants. In other words, the central management entity must be fully trusted by all autonomous ODS participants to fully understand and enforce their individual AC policy requirements. Such an approach is realistic only if the security policies of the ODS participants are not too strict and complex, and do not change too frequently. In the centralized management approach, an AC policy can be enforced in a distributed or centralized manner. The distributed approach assumes that each individual ODS participant will express his own AC policy, which is usually enforced in a distributed manner as well. This approach has the smallest trust requirements, yet it creates the danger of limiting the system's functionality. This is especially true for systems that provide complex functionality that requires sensitive services or information provided by individual participants, but does not violate their security policy. Consider a distributed Virtual Repository (database or data grid) that uses virtual views of information provided by several autonomous participants. One of the participants, A, maintains a strict AC policy to protect information about employee salaries.
Another participant, B, provides information about all projects that an employee has participated in; this information is made publicly available by B. The administrators of the Virtual Repository wish to provide a view that would show the statistical relationship between employee salaries and the number of projects that employees participate in. This view would not violate A's security policy; yet A cannot make salary information public. The example shows that certain functions that use a combination of information or services from many participants may not be possible to realize under decentralized access control. If all participants in the ODS provided only public information, a reverse phenomenon could occur: a function that uses a combination of public information could violate the security policy of individual participants. The reason for this is the possibility of obtaining results that reveal sensitive information, for example through the correlation of partial data. Fully distributed AC policy management is realistic in ODS systems that do not provide complex, synergetic functions, as in P2P file-sharing systems. In this paper, we describe an approach to AC policy management that lies between the two extremes. It assumes limited trust in a central entity, yet it does not distribute AC management completely and allows the central entity to maintain a global AC policy. Our concept of AC management relies on the use of granting privileges. Appropriate roles in the system contain privileges that make it possible to allow or deny access to certain AC objects. The granting system is constructed in such a way that autonomous participants can express their own local AC policy in the system. On the other hand, a system administrator role has the privilege of granting access to AC objects that use information or services from the autonomous participants. As a result, it becomes
1188
A. Wierzbicki et al.
possible for the ODS to provide complex functions that depend on the services and information of ODS participants.

Our suggestion of AC policy management, which strikes a balance between the centralized and distributed approaches, requires an appropriate granularity of the AC policy. Yet this is not the only issue: how is an AC policy maintained in such a management system? What happens if individual participants change their AC policies, and how is the global AC policy affected? How can conflicts between the global AC policy and the policies of participants be resolved? The paper attempts to answer these questions.

The AC policy management proposed in this paper has been implemented and tested in a Virtual Repository that is used by two European 6th FP projects: eGov-Bus and VIDE. In particular, the impact of the Access Control architecture on the performance of the Virtual Repository has been evaluated. Therefore, the contribution of this paper is a method of access control management for open, distributed systems and a comprehensive access control method for virtual repositories.

The paper is organized as follows: the next section describes our design of access control for virtual repositories. It introduces the concept of a view, discusses requirements for an AC system in a Virtual Repository, and describes how access control works in our architecture. Section 3 demonstrates AC management using an example scenario and gives an overview of management functions. Section 4 describes the setup and results of performance tests with a prototype implementation of our AC system. Section 5 discusses related work, and Section 6 concludes.
2 Access Control Design in a Virtual Repository

In this section, we present the design of an access control system for a Virtual Repository. The proposed system has features that can be exploited to provide more flexible management of AC policies.

2.1 The Virtual Repository

A virtual repository is a mechanism that supports transparent access to distributed, heterogeneous, fragmented and redundant resources. There are many forms of transparency, in particular location, concurrency, implementation, scaling, fragmentation, heterogeneity, replication, indexing, security, connection management and failure transparency. Due to transparency implemented at the middleware level, some complex features of a distributed and heterogeneous data/service environment do not need to be included in the code of client applications. Moreover, a virtual repository supplies relevant data in the volume and shape tailored to the particular use. Thus a virtual repository greatly increases application programmers' productivity and supports the flexibility, maintainability and security of software. Virtual Repositories can be used to create data grids, or service buses that connect multiple sources of data. The main integration facility in the presented architecture, which makes it possible to meet the VR requirements, is virtual updatable views. Virtual views have been considered by many authors as a method of adapting heterogeneous data resources to some common schema assumed by the business model of an application.
Access Control Management in Open Distributed Virtual Repositories and the Grid
1189
Unfortunately, classic SQL views have limitations that restrict their application in this role. These limitations mainly concern the restrictions of the relational data model and limited support for view updating. The concept of updatable object views [12, 13] that was developed for our VR overcomes these limitations. Its idea relies on augmenting the definition of a view with information on users' intents with respect to updating operations. An SBQL updatable view definition is subdivided into two parts. The first part is a functional procedure, which maps stored objects into virtual objects (similarly to SQL). The second part contains redefinitions of generic operations on virtual objects (so-called view operators). These procedures express the users' intents with respect to update, delete, insert and retrieve operations performed on virtual objects. A view definition usually contains definitions of sub-views, which are defined according to the same rule, following the relativity principle. Because a view definition is a regular complex object, it may also contain other elements, such as procedures, functions, state objects, etc.
Fig. 1. Architecture of access control in the Virtual Repository (the diagram shows users acting in the Civil servant, Client, and Data source roles; data access passes through the Access Control component and the system programmer's layer of the Virtual Repository to attributes X, Y and Z, provided by data sources S1, S2 and S3, each with its own Access Control Policy; the Data source role additionally permits role modification)
2.2 Requirements for Access Control in the Virtual Repository

The basic functional requirements for the proposed access control system are a consequence of the openness and distribution of the Virtual Repository (VR). All information that is provided by the repository is the property of institutions that are autonomous and have their own security policies (including access control policies). The institutions provide information to the VR, but wish to control the use of that information. We have already given an example of how this control could limit the functionality of the Virtual Repository if access control management is fully distributed. However, providing just publicly available information in the VR may
still violate the access control policy of the provider. As an example, consider again two data sources (VR participants) A and B. A contains personal data such as names, addresses, dates of birth, and identification numbers (such as NIP, a VAT ID, or PESEL, a personal ID in Poland). This data source provides information about names and dates of birth to the VR. The second data source contains information from the health care system: names of patients, dates of birth and their health care records. For statistical purposes, dates of birth and health care records are provided to the VR, without revealing the names of patients. The creation of a view in the VR that uses information from both data sources may violate the security policy of B. The health care records that have been provided for statistical purposes may now be related to names through the dates of birth. This example shows that data sources need a measure of control over the VR access control policy. The previous example also shows that by being overly restrictive in their individual AC, the data sources could limit the functionality of the VR.

2.3 Architecture Overview

Figure 1 presents a scenario that demonstrates the architecture of access control in the Virtual Repository. The VR accesses data from three data sources. The system programmer's layer of the VR defines the interface that can be used to access the data from the underlying data sources [11]. Access to the data sources is controlled by their own access control policies, which are transparent to the VR (a failure of data source access control may be signaled to the system programmer's layer using an exception). We propose to use Role-Based Access Control (RBAC) in the VR. A user who has been successfully authenticated can activate one of his roles. Roles are used to decouple access control privileges from users. When a role is modified, all users who have been assigned to that role will have modified access privileges.
The user who modified the role need not know all affected users; in fact, this may be impossible in a decentralized system such as the VR. Roles can be assigned to users during a registration procedure (after user authentication), and there may be default roles: for example, the Client role is assigned by default to any user that accesses the VR. A Data source role can be assigned to users who are administrators of the data sources. A Civil Servant role can be assigned to special users who are employed by the government's civil service. A user can obtain a non-default role by receiving a special certificate from the VR that contains the roles assigned to the user.

2.4 Expressing Access Control Privileges

Roles are sets of privileges. A privilege is an authorization to perform a particular operation; without privileges, a user cannot access any information in the Virtual Repository. To ensure data security, a user should only be granted those privileges needed to perform his job functions, thus supporting the principle of least privilege.
A privilege expresses the type of data, an access mode, and information that may be used to grant other privileges. We propose to express privileges using tuples:

Privilege: ([Type or View or Data source interface], Mode, Grant Flag)

A Type may be any class created in the Virtual Repository (including Role and Privilege). A View may be any view created in the Virtual Repository (a view is used here instead of a set of object instances, which has frequently been used in access control in object-oriented databases [7]). A Data source interface is provided to the Virtual Repository by the System Programmer's layer: this should be the only way to access the data source from the VR. A Mode may be one of the following:

Mode = {Read, Modify, Create, Delete, Modify_Definition} x {Allow, Deny}

Note that modes may be negative. A privilege may contain a mode with the negation flag (a Deny value of the second tuple coordinate). Such a privilege must be interpreted as a restriction: the operation that would be permitted with a mode without the negation flag is now forbidden. The use of negative modes is motivated by the fact that data sources may not be able to predict all uses of their information, and some information may be too sensitive to be released by default. Negative modes allow data sources to use a closed access control policy: by default, users who activate the Client role are forbidden to access all information provided by a data source. This restriction may be overridden for specific uses of the information provided by the data source (for example, for some views). We shall return to this issue later on. The final component of a privilege is the Grant Flag:

Grant Flag = {CanGrant, CannotGrant}

When a user activates a role that contains a privilege with the grant flag CanGrant, that user is authorized to modify any other role by adding or deleting the same privilege with the grant flag set to any value.
Usually, users should not create privileges with the grant flag set to CanGrant. One exception is the situation when a new view is created that uses information provided by a data source. This situation will be explained in more detail below.
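The privilege tuples and the granting rule above can be modeled as follows. This is an illustrative Python sketch, not ODRA's actual classes; `Privilege` and `may_grant` are hypothetical names.

```python
from collections import namedtuple

# Illustrative encoding of the privilege tuples defined above.
Privilege = namedtuple("Privilege", ["target", "mode", "grant_flag"])
# target:     a Type, View, or Data source interface name
# mode:       an (operation, decision) pair, e.g. ("Read", "Deny")
# grant_flag: "CanGrant" or "CannotGrant"

def may_grant(active_role, privilege):
    """A user may add or delete `privilege` in other roles only if the
    active role holds the same privilege (same target and mode) with
    the CanGrant flag."""
    return any(p.target == privilege.target and p.mode == privilege.mode
               and p.grant_flag == "CanGrant" for p in active_role)

data_source_s3 = {Privilege("S3", ("Read", "Deny"), "CanGrant")}

# S3's administrator may propagate this restriction with any grant flag:
assert may_grant(data_source_s3, Privilege("S3", ("Read", "Deny"), "CannotGrant"))
# ...but may not grant privileges on View1, which S3 does not hold:
assert not may_grant(data_source_s3, Privilege("View1", ("Read", "Allow"), "CannotGrant"))
```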
3 Access Control Management

To describe how AC policies are managed in our proposal, we have prepared an example scenario of using access control in the Virtual Repository. The scenario demonstrates typical management functions of creating and modifying access control policies.

3.1 A Scenario of Access Control in the Virtual Repository

To illustrate the operation of access control in the Virtual Repository, consider the following scenario. As shown in Figure 1, the three data sources provide
the following attributes to the Virtual Repository: S1.x, S2.y and S3.z. When each of the data sources joined the Virtual Repository, a separate administrative role was created for that particular data source. For simplicity, let us assume that there is a single mode, the Read mode. Therefore, there are three roles for data sources in the system:

Data source S1: {(S1, (Read, Deny), CanGrant)}
Data source S2: {(S2, (Read, Deny), CanGrant)}
Data source S3: {(S3, (Read, Deny), CanGrant)}

These roles contain privileges that allow each data source to deny access to its information. Let us assume that data source S3 considers its information to be sensitive and uses the closed access control policy. This means that as soon as the data sources have been added to the Virtual Repository, S3 will modify the Client role to add a restriction:

Client: {(S3, (Read, Deny), CannotGrant)}

Now a programmer of the Virtual Repository creates a view called View1 that uses information from all three attributes of the data sources. When View1 is created, the administrative programmer of the Virtual Repository automatically obtains full access rights to the view (becomes the owner of the view), including the right to grant its privileges. Thus, the role of the administrative programmer contains the privilege:

Administrator: {(View1, (Read, Allow), CanGrant)}

The administrative programmer now modifies the roles of the data sources. Since only S3 has limited access to its information, S3 should be given the privilege of granting read access rights to View1:

Data source S1: {(S1, (Read, Deny), CanGrant)}
Data source S2: {(S2, (Read, Deny), CanGrant)}
Data source S3: {(S3, (Read, Deny), CanGrant), (View1, (Read, Allow), CanGrant), (View1, (Read, Deny), CanGrant)}

To do this, the administrative programmer must be aware that View1 uses data from data source S3. This step could be partially automated using compiler support.
The administrators of the data sources will inspect the created view and decide whether the proposed information can be released without compromising their security policy. Each administrator makes an autonomous decision, independent of the others. Let us assume that the administrator of S3 decides that access to his information through View1 can be allowed. Then, the administrator of S3 will modify the Client role, which will take the form:

Client: {(S3, (Read, Deny), CannotGrant), (View1, (Read, Allow), CannotGrant)}

From that moment on, users who activate the Client role may access View1, since View1 has been made available to the Client role. Notice that if any data source administrator decided not to allow access to View1, he could add a negative privilege to the Client role, which would then be unable to access instances of View1.
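The management steps of this scenario can be traced in a short sketch. This is hypothetical Python, with roles modeled as sets of (target, mode, grant flag) tuples; it is not ODRA API.

```python
# Sketch of the scenario above: S3 closes its data, the administrative
# programmer creates View1 and delegates the decision, and S3's
# administrator opens View1 to Clients.

roles = {
    "Client": set(),
    "Data source S3": {("S3", ("Read", "Deny"), "CanGrant")},
    "Administrator": set(),
}

# Step 1: S3 uses a closed policy and denies Clients access to its data.
roles["Client"].add(("S3", ("Read", "Deny"), "CannotGrant"))

# Step 2: the administrative programmer creates View1, becomes its owner...
roles["Administrator"].add(("View1", ("Read", "Allow"), "CanGrant"))
# ...and delegates the decision about View1 to S3's administrator:
roles["Data source S3"] |= {("View1", ("Read", "Allow"), "CanGrant"),
                            ("View1", ("Read", "Deny"), "CanGrant")}

# Step 3: S3's administrator inspects View1 and opens it to the Client role.
roles["Client"].add(("View1", ("Read", "Allow"), "CannotGrant"))

assert ("View1", ("Read", "Allow"), "CannotGrant") in roles["Client"]
# Direct access to S3's data remains denied for Clients:
assert ("S3", ("Read", "Deny"), "CannotGrant") in roles["Client"]
```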
3.2 Resolving Conflicts For completeness, the default decision of access control in the Virtual Repository should be to deny access. If a decision about granting permissions cannot be reached, access should be denied.
Fig. 2. Comparison of average execution times with AC (plot: execution time in ms, up to 2500, against the number of views, 1 to 1000, for three configurations: Access Control disabled; Access Control enabled, suspension OFF; Access Control enabled, suspension ON)
Mode conflicts occur if a role contains privileges that can be interpreted both to allow and to disallow access in a given mode. Conflicts should be resolved using the natural and straightforward policy that "the most specific authorization should be the one that prevails" [9]. A source of conflicts may be the different access control policies of two or more data sources. In our example, suppose that data sources S2 and S3 consider their information to be sensitive and use a closed access control policy. As discussed above, S3 considers that View1 can be made available to the Client role. However, S2 considers that View1 should not be made available to the Client role (perhaps it could be made available to the Civil Servant role). In this case, both data sources will specify conflicting access privileges, and the Client role will look like:

Client: {(S3, (Read, Deny), CannotGrant), (View1, (Read, Allow), CannotGrant), (View1, (Read, Deny), CannotGrant)}

In this case, the access control decision should deny access. As long as the decision to make View1 available to the Client role is not unanimous among all data sources, access cannot be granted. The administrators and programmers of the VR should resolve this conflict by redesigning View1 or negotiating with the administrators of the data sources. To avoid the case where the administrator of a data source that is used by a view is not consulted or is overlooked in the procedure of establishing access
rights, it is better to modify all roles with a default denial of access to any new or modified view. This default denial of access would be removed only if all administrators agree on such a step.
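The unanimity rule described above (deny on any conflict, and deny by default) can be sketched as a small decision function. This is illustrative Python; `check_access` is a hypothetical name, not part of the VR API.

```python
# Deny-by-default conflict resolution: access is granted only when every
# privilege matching the target and operation says Allow; a single Deny,
# or the absence of any privilege, denies access.

def check_access(role, target, operation):
    decisions = {mode[1] for (t, mode, _flag) in role
                 if t == target and mode[0] == operation}
    return decisions == {"Allow"}   # deny on conflict, and deny by default

client = {("S3", ("Read", "Deny"), "CannotGrant"),
          ("View1", ("Read", "Allow"), "CannotGrant"),   # granted by S3
          ("View1", ("Read", "Deny"), "CannotGrant")}    # denied by S2

assert check_access(client, "View1", "Read") is False  # conflict: deny
assert check_access(client, "S3", "Read") is False     # explicit deny
assert check_access(client, "View2", "Read") is False  # no privilege: deny
```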
Fig. 3. Comparison of execution times with multiple view calls (plot: execution time in ms, up to 3500, for two tests, 1 view x 5000 subviews and 100 views x 50 subviews, under three configurations: Access Control disabled; Access Control enabled, suspension OFF; Access Control enabled, suspension ON)
Such a method of resolving conflicts, along with the design of the access control mechanisms, should ensure that the resulting AC policy is a compromise between the security requirements of the contributing data sources and the usability and functionality of the Virtual Repository.

3.3 View Modification

The case of view modification should be treated in the same way as the creation of a new view: whenever an existing view is modified, all the access control rules that concern this view should be removed from all roles. In other words, access to a modified view should be denied until all the administrators of local data sources agree on granting access to this view.

3.4 Summary of AC Management

As demonstrated by the scenario in Section 3.1, AC management in our proposal relies on the ability to grant (and refuse) privileges, and on the existence of negative privileges. Note, however, that the AC architecture proposed above could be used to implement various AC policy management schemes. As an example, centralized AC policy management can be supported if the administrative programmer does not consult the administrators of data sources, but sets the access privileges to views himself. On the other hand, fully distributed AC policy
management can be supported if the administrators of data sources never grant any privileges to the views of the VR. Thus, the proposed AC architecture can be used with the two extreme AC policy management schemes. The proposed access control management maintains the autonomy of the administrators of data sources. There is a central point of control (the administrator role), but with limited privileges. When a user creates a view, the Administrator role is modified to include full access rights to that view. The administrator of the VR becomes the owner of the view and modifies the roles of the data sources that have been used in the view. The VR administrator is therefore a partially trusted third party (interacting with the data source administrators and with the creator of the view). The VR administrator is partially trusted to modify privileges according to the AC policy management procedure. However, the VR administrator is not fully trusted to understand and express the access policies of the individual data sources. This task is still left to the data source administrators. The modification of roles could be partially automated by the system at compile time. Also, the information about which data sources have been used in a view or class is best known to the user who created the view. Thus, the modification of the roles of the data sources will become simpler if the privilege of creating views is limited to the Administrator role. Then, the same user who created the view will become its owner and can modify the access rights of data sources. Another source of role modifications is the data sources. Following one of our basic assumptions, each institution that owns a data source of the Virtual Repository should be given autonomous decision capability over access control, in order to implement its own security policy. The modification of access privileges to views can be made by a data source administrator at any time.
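The view-modification rule of Section 3.3 can be sketched similarly. This is hypothetical Python; `revoke_view` is an illustrative helper, not ODRA API.

```python
# When a view is modified, every privilege that concerns it is removed from
# all roles, so access is denied again until all data source administrators
# re-approve the modified view.

def revoke_view(roles, view):
    for name, privileges in roles.items():
        roles[name] = {p for p in privileges if p[0] != view}

roles = {"Client": {("View1", ("Read", "Allow"), "CannotGrant"),
                    ("S3", ("Read", "Deny"), "CannotGrant")}}
revoke_view(roles, "View1")   # View1 was modified
assert roles["Client"] == {("S3", ("Read", "Deny"), "CannotGrant")}
```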
Furthermore, the Virtual Repository should support the audit of execution trails, which will enable data source administrators to verify that no view has been given access privileges to their data without their approval. An important element of our AC architecture is the ability to suspend access control checks once access has been granted. In our example, this has the effect that users assigned to the Client role can access View1 after the data source administrators have modified the Client role to permit this access. However, note that the Client role still contains negative privileges to the data source that contributes to View1. If access control were evaluated for every function call, it would eventually raise an access control exception when the user with the Client role accesses data source S3. This does not happen for one reason: the same user has called a function of View1 before, and the Client role has access permission to View1. The access control checks can be viewed as a stack: if an access control check lower on the stack succeeded, all AC checks higher on the stack are suspended. In our architecture, this is achieved by maintaining a context for every function call that includes the active role and a flag that can be used to suspend AC checking.
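The call-context stack with a suspension flag can be sketched as follows. This is an illustrative Python model of the mechanism, not the Java implementation; all names (`CallContext`, `enter_view`, `is_allowed`) are hypothetical.

```python
# Suspension of AC checks: each nested view call pushes a context frame;
# once a check succeeds lower on the stack, checks higher on the stack
# are suspended for the rest of that call chain.

class CallContext:
    def __init__(self, role, is_allowed):
        self.role = role
        self.is_allowed = is_allowed   # the AC decision function
        self.stack = []                # one frame per nested view call

    def enter_view(self, view):
        if not any(f["suspended"] for f in self.stack):
            if not self.is_allowed(self.role, view):
                raise PermissionError(f"{self.role} may not access {view}")
        # a successful check suspends AC for all deeper calls on this stack
        self.stack.append({"view": view, "suspended": True})

    def leave_view(self):
        self.stack.pop()

# The Client role has been granted View1, but not the underlying source S3.
is_allowed = lambda role, view: (role, view) == ("Client", "View1")

ctx = CallContext("Client", is_allowed)
ctx.enter_view("View1")     # checked once, access granted
ctx.enter_view("S3_data")   # would fail a check, but checking is suspended
ctx.leave_view(); ctx.leave_view()

direct = CallContext("Client", is_allowed)
try:
    direct.enter_view("S3_data")   # direct access: the check runs and denies
    raised = False
except PermissionError:
    raised = True
assert raised
```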
4 Performance Evaluation of a Prototype Implementation

4.1 Test Environment

The proposed AC system has been implemented in the ODRA (Object Database for Rapid Application development) system [11], designed mainly for enabling virtual
repository creation. ODRA is an operating prototype that may be numbered amongst the most advanced distributed object-oriented database management systems available today. ODRA is used and continuously developed within two European 6th FP projects: eGov-Bus and VIDE. The AC system has been tested to evaluate its performance impact, and an important optimization has been proposed. The proposed optimization concerns access control enforcement, since the management of access control incurs only a negligible overhead; the optimization can be useful in most data grids. In the present section, we give an overview of the results of the performance evaluation, which indicate that our proposal can be used in practice. ODRA has been implemented in Java, and access control has been implemented using a specialized set of classes. Access control in ODRA is executed at run-time (although some AC tasks could be moved to compile time in the future). Every call to a subroutine in ODRA must call a method of the access control class that is used to execute access control. Parameters of this method include an object (view) identifier and a user context that includes the activated user role. If access is not granted, this method throws an access control exception.

4.2 Test Evaluation

The performance tests of access control execution in the ODRA prototype have been conducted using a feature of the implementation that allows access control to be turned off. The tests repeatedly executed a function of a view with AC turned on or off, and measured real execution times (without debugging or console delays). Each test was run 10 times, and the presented results include the mean time and a confidence interval. The simplest test involved calling one function of one view that called another function of another view. This test could be run in a loop repeating 10, 100, or 1000 times.
The results, presented in Figure 2, show that access control execution in the ODRA prototype does not incur a serious overhead if the number of calls to subroutines is small. The figure presents three measurement results for each size of the test: one with access control turned off, another with access control turned on, and a third that presents the results of using the proposed suspension of access control. Recall that access control suspension is related to the management of AC in ODRA. Since a view is granted the right to execute after it has been analyzed by all interested parties (data source administrators), the view could be given full privileges without compromising security. Yet, if this view calls a function of another view, access control is executed again for the called view. If the calling view has already been investigated by interested parties and found safe for a certain role, it cannot call another view that should be prohibited for this role. (This constraint is currently verified manually, but in the future it will be verified during compilation.) Therefore, the proposed suspension mechanism turns off further execution of access control once access privileges have been granted to a view for a certain role. Figure 2 shows that the proposed mechanism indeed reduces execution times. The effect of the proposed mechanism is more apparent when the first view accesses more other views, and access control is executed for every called view. Figure 3 shows the results of two tests: in the first one, the first view called 5000
other views; in the second one, the first view called 50 views, but the first view's function was called 100 times. In the first test, access control execution incurs a 20% overhead (about 500 milliseconds). Using the optimized access control, the overhead becomes negligible. A similar situation occurs in the second test, although the overhead of access control that performs checks on every call is smaller there.
5 Related Work

Role-based access control is the dominant approach in open distributed systems, because of its scalability. Access control based on the identities of individual users is usually considered non-scalable in P2P and Grid systems. Research on access control in P2P systems has demonstrated that access control enforcement can be fully distributed [1-4]. Paradoxically, some P2P access control research uses a centralized approach to policy management. In [4], the access control policy is under the full control of an "authority" that assigns rights to each role. However, enforcement is fully distributed. In other P2P systems such as [1], both AC management and enforcement are distributed, and every peer can autonomously express his own access control policy. The Open Grid Services Architecture - Data Access and Integration (OGSA-DAI) implements emerging standards of the Data Access and Integration Services Working Group (DAIS-WG) of the Global Grid Forum (GGF). It uses the Community Authorization Service (CAS) provided by the Globus Toolkit [6]. Interestingly, this work uses a distributed approach to AC policy management: resource providers have ultimate authority over their resources, and global roles are mapped to local database roles. In our opinion, this approach can limit the functionality of the grid by prohibiting synergetic functions that use sensitive data but do not include this data in the results. The appropriate granularity and design of access control in databases has been the subject of much previous work [7-10]. While access control granularity has a significant impact on management, the subject of access control management in open distributed systems has not been investigated in this research. Similarly, the issue of modality conflict resolution has been studied in the database community [7-10]; yet again, this research concerned systems with centralized control. An overview of access control management issues can be found in [9].
A large body of research concerns the use of trust management for access control in ODS. An example of this research is [3]. However, while using trust enhances the possibility of making correct access control decisions, purely trust-based systems do not have enough flexibility in expressing privileges. In addition, research on trust management is not concerned with the problem of access control management.
6 Conclusion and Future Work

The problem of access control management in Open Distributed Systems has so far received little attention in the literature; most previous work has focused on access control execution, proving that it could be distributed. Proposed AC systems for ODS
have used one of the two extreme management approaches: centralized or distributed management. In this paper, we have made the case that either of these approaches creates difficulties: the centralized approach requires high trust in the central management entity, and the distributed approach can limit the functionality of the ODS or pose new security threats, because combining publicly available information may produce results that violate local security policies. These observations concern all Open Distributed Systems that have more complex functionality, such as Virtual Repositories or data grids. In our work on access control for the Virtual Repository ODRA, we have attempted to solve this problem by proposing a novel management approach that allows the creation of compromise access control policies. The autonomous participants, owners of data sources in the VR, retain control over the use of their data and can inspect and deny access to any view that uses their data. On the other hand, it is possible to create views that use sensitive data without violating security policies. Thus, the access control system does not limit the functionality of the Virtual Repository. We have implemented and tested our access control system in the ODRA prototype. Our performance tests have allowed us to propose an important optimization that reduces the execution overhead of access control to a negligible value. The prototype implementation proves that our approach is practical. In future work, we plan to investigate how our access control approach could be generalized to the computational grid, for example by integrating it with the Globus Toolkit. An important area of future work is the partial automation of the access control management procedures, such as using the compilation of views to support the decision on which data sources are affected by a view.
Another direction of our future work is the investigation of means of access control enforcement (such as sandboxing) that protect against luring attacks.
References

1. Park, J., Hwang, J.: Role-based access control for collaborative enterprise in peer-to-peer computing environments. In: Proceedings of the Eighth ACM Symposium on Access Control Models and Technologies (SACMAT 2003), Italy (2003)
2. Crispo, B., et al.: P-Hera: Scalable fine-grained access control for P2P infrastructures. In: ICPADS 2005. 11th International Conference on Parallel and Distributed Systems, pp. 585–591 (2005)
3. Tran, H., et al.: A Trust based Access Control Framework for P2P File-Sharing Systems. In: Proceedings of the 38th Hawaii International Conference on System Sciences (2005)
4. Nicolacopoulos, K.: Role-based P2P Access Control. Ph.D. Thesis, Lancaster University (2006)
5. Pereira, A.: Role-Based Access Control for Grid Database Services Using the Community Authorization Service. IEEE Trans. on Dependable and Secure Computing 3(2) (2006)
6. Foster, I., Kesselman, C.: The Globus Toolkit. In: Foster, I., Kesselman, C. (eds.) The Grid: Blueprint for a New Computing Infrastructure, pp. 259–278. Morgan Kaufmann, San Francisco (1999)
Access Control Management in Open Distributed Virtual Repositories and the Grid
1199
7. Rabitti, F., Bertino, E., Kim, W., Woelk, D.: A model of authorization for next-generation database systems
8. Notargiacomo, L.: Role-Based Access Control in ORACLE7 and Trusted ORACLE7. In: ACM RBAC Workshop, MD, USA (1996)
9. Samarati, P., de Capitani di Vimercati, S.: Access Control: Policies, Models, and Mechanisms. In: Focardi, R., Gorrieri, R. (eds.) FOSAD 2000. LNCS, vol. 2171, pp. 137–196. Springer, Heidelberg (2001)
10. Ahad, R., David, J., Gower, S., Lyngbaek, P., Marynowski, A., Onuebge, E.: Supporting access control in an object-oriented database language. In: Pirotte, A., Delobel, C., Gottlob, G. (eds.) EDBT 1992. LNCS, vol. 580, p. 171. Springer, Heidelberg (1992)
11. Lentner, M., Subieta, K.: ODRA: A Next Generation Object-Oriented Environment for Rapid Database Application Development, http://www.ipipan.waw.pl/~subieta/artykuly/ODRA%20paperpl.pdf
12. Kozankiewicz, H., Stencel, K., Subieta, K.: Integration of Heterogeneous Resources through Updatable Views. In: ETNGRID-2004. Workshop on Emerging Technologies for Next Generation GRID. IEEE, Los Alamitos (2004)
13. Kozankiewicz, H.: Updateable Object Views. PhD Thesis (2005), http://www.ipipan.waw.pl/~subieta/
Transforming the Adaptive Irregular Out-of-Core Applications for Hiding Communication and Disk I/O Changjun Hu, Guangli Yao, Jue Wang, and Jianjiang Li School of Information Engineering, University of Science and Technology Beijing No. 30 Xueyuan Road, Haidian District, Beijing, P.R. China, 100083
[email protected],
[email protected],
[email protected],
[email protected]

Abstract. In adaptive irregular out-of-core applications, communication and mass disk I/O operations occupy a large portion of the overall execution time. This paper presents a program transformation scheme that enables overlap of communication, computation, and disk I/O in such applications. We take programs in the inspector-executor model as the starting point and transform them into a pipelined fashion. By decomposing the inspector phase and reordering iterations, more overlap opportunities are utilized efficiently. In the experiments, our techniques are applied to two important applications: a partial differential equation solver and a molecular dynamics problem. For these applications, versions employing our techniques are almost 30% faster than the inspector-executor versions.

Keywords: Program Transformation, Iteration Reordering, Computation-Communication Overlap, Computation-Disk I/O Overlap.
1 Introduction

In a large number of scientific and engineering applications, access patterns to major data arrays are not known until run-time. In some cases, the access pattern even changes as the computation proceeds. Such applications are called adaptive irregular applications [1], as illustrated in Figure 1.

    for(...){
      if(change) irreg[i]=…;
      …
      for(…)
        …=x(A[irreg[i]]);
    }

Fig. 1. An adaptive irregular code abstract
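The pattern in Fig. 1 can be sketched concretely in Python. The names A and irreg follow the figure; the rule deciding when the access pattern changes is an illustrative assumption, since the figure elides it:

```python
import random

def adaptive_irregular(A, steps, seed=0):
    """Sketch of Fig. 1: an outer time-step loop whose inner loop reads A
    through an indirection array irreg that occasionally changes at run-time."""
    rng = random.Random(seed)
    irreg = list(range(len(A)))        # initial access pattern
    results = []
    for t in range(steps):
        change = (t % 2 == 0)          # assumption: the pattern changes on some steps
        if change:                     # the new pattern is only known at run-time
            rng.shuffle(irreg)
        total = 0.0
        for i in range(len(A)):        # irregular inner loop: A[irreg[i]]
            total += A[irreg[i]]
        results.append(total)
    return results

vals = adaptive_irregular([1.0, 2.0, 3.0, 4.0], steps=3)
print(vals)   # each step sums the whole array: [10.0, 10.0, 10.0]
```

Because the indirection array is rewritten between outer iterations, any preprocessing of the access pattern must be redone whenever `change` fires, which is exactly what makes these applications expensive.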
Large-scale irregular applications involve large data structures, which increase the memory usage of the program substantially. Additionally, more and more Cluster

R. Meersman and Z. Tari et al. (Eds.): OTM 2007, Part II, LNCS 4804, pp. 1200–1213, 2007. © Springer-Verlag Berlin Heidelberg 2007
systems adopt a multi-user mechanism, which places a limit on the memory available to each user. A parallel program may therefore quickly run out of memory. The program must store its large data structures on disk and fetch them during execution. For this kind of application, although virtual memory (VM) makes programming comfortable and ensures correctness, frequent paging causes poor performance. To deal with the requirements of irregular out-of-core applications, runtime libraries (such as CHAOS+ [2] and LIP [3]) and language extensions (such as Vienna Fortran [4] and HPF+ [5]) have been developed. CHAOS+ generates I/O schedules and out-of-core-specific communication schedules, exchanges data between processors, and translates indices for copies of out-of-core data to optimize performance. LIP supports non-trivial load balancing strategies and provides optimization techniques for I/O operations. Vienna Fortran combines the advantages of the shared-memory programming paradigm with mechanisms for explicit user control, providing facilities for solving irregular out-of-core problems. HPF+ is an improved variant of HPF with many new features to support irregular out-of-core applications, such as the generalized block and indirect data distribution formats, distributions to processor subsets, and dynamic data redistribution. With the help of these technologies, programmers not only judiciously insert messages to satisfy remote accesses, but also explicitly orchestrate the necessary disk I/O to ensure that data is operated on in chunks small enough to fit in the system's available memory. Unfortunately, none of the methods above considers transforming programs into a pipelined fashion to exploit overlap opportunities. In this paper, we present a transformation scheme that improves the performance of adaptive irregular out-of-core applications on Ethernet-switched Clusters.
According to dependency analysis, adaptive irregular out-of-core programs are restructured into a pipelined fashion using our transformation scheme. During the execution of a transformed program, more overlap opportunities are exploited, and unpredictable communications and disk I/O operations are hidden. The preprocessing of the access pattern is also optimized. The rest of the paper is organized as follows. Section 2 outlines the main features of adaptive irregular out-of-core applications and how a program executes in the inspector-executor model. Section 3 shows the dependencies between the different phases of the model and explains how the reordering scheme works; the last part of that section presents the transformation steps that implement the scheme. Section 4 evaluates the performance of these techniques. Related work is discussed in Section 5, and Section 6 presents the conclusion.
2 Overview of Adaptive Irregular Out-of-Core Applications

2.1 Adaptive Irregular Out-of-Core Applications

In adaptive irregular applications, accesses to data arrays are made through indirection arrays, which are not determined until run-time and change as the computation proceeds. A preprocessing step is needed to determine the data access patterns; it is unavoidable and time-consuming in every execution. As shown in Figure 1, while
the program iterates the outer loop, the condition variable change becomes true, and the index array irreg then takes different values. Changes in access patterns cause performance degradation in adaptive irregular out-of-core applications. Applications are called out-of-core applications if the data structures used in the computation cannot fit in main memory; the primary data structures therefore reside in files on the local disk. Additionally, more and more Cluster systems have adopted a multi-user mechanism, which sets a limit on user-available memory. Processing out-of-core data therefore requires staging data in smaller chunks that fit in the user-available memory of the computing system.

2.2 Execution Model

The traditional model [3] for processing parallel loops with adaptive irregular accesses consists of three phases: the work distributor, the inspector, and the executor. Initially, iterations, data, and indirection arrays are distributed among the nodes of a Cluster in a specified fashion. The inspector is then engaged to determine the data access pattern. Table 1 lists the work done by the inspector.
Table 1. Work to be done by the inspector

  Work needed to be done                                                          Required resource
  resolve irregular accesses in the context of the specified data distribution    CPU
  find what data must be communicated and where it is located                     CPU
  translate the indirection arrays to subscripts referencing either the
    local I/O buffer or the network communication buffer                          CPU
  send and receive the number and displacement of the data to be communicated     Network
After the inspector phase, disk I/O operations and network communication are issued to load the desired data into the buffers, and the computation loop is then carried out. In many parallel applications, according to the data access pattern, the iterations assigned to a particular node can be divided into two groups: local iterations, which access only locally available data, and non-local iterations, which access data that originally resides on other nodes. Correspondingly, two memory buffers are allocated, called the local area and the non-local area. The local area holds data residing on the local disk, and the non-local area holds data originally residing on other nodes. The execution of the local iterations is called local computation, and the execution of the non-local iterations is called non-local computation. The non-local computation cannot be performed until the data communication is completed. Some compile-time technologies have been proposed for optimizing irregular applications. In [6], the compiler creates inspectors to analyze the access pattern at both compile-time and run-time, and computation-communication overlap is enabled by restructuring irregular parallel loops. The out-of-core parallelization strategy is a natural extension of the inspector-executor approach. Iterations assigned to each node by the work distributor should be
split into chunks small enough to fit into the available memory; each such chunk is called an i-section. The inspector-executor model is then applied to each i-section. Figure 2 describes how the program executes in this model.

Fig. 2. Inspector-executor model applied to irregular out-of-core applications (the iterations are split into several i-sections; for each i-section, the inspector preprocesses the access pattern, after which the executor performs disk I/O, data communication, and computation)
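A minimal sketch of the inspector-executor split for one i-section follows. The ownership table and buffer layout are illustrative assumptions, not the paper's actual code:

```python
def inspector(irreg, owned):
    """Inspector: classify each access as local or non-local, translate the
    indirection array into buffer subscripts, and build the communication request."""
    local_idx, nonlocal_idx, comm_request = [], [], []
    for i, g in enumerate(irreg):
        if g in owned:                          # data resides on this node's disk
            local_idx.append((i, owned[g]))     # subscript into the local I/O buffer
        else:
            nonlocal_idx.append((i, len(comm_request)))
            comm_request.append(g)              # must be fetched from another node
    return local_idx, nonlocal_idx, comm_request

def executor(A_local, A_remote, local_idx, nonlocal_idx):
    """Executor: local computation can proceed while communication filling the
    non-local area is (conceptually) still in flight."""
    out = {}
    for i, j in local_idx:                      # local computation
        out[i] = A_local[j]
    for i, j in nonlocal_idx:                   # runs only after communication completes
        out[i] = A_remote[j]
    return [out[i] for i in sorted(out)]

owned = {0: 0, 2: 1}                            # global index -> local buffer slot
local_idx, nonlocal_idx, req = inspector([2, 5, 0], owned)
print(req)                                      # non-local elements to fetch: [5]
print(executor([10.0, 30.0], [50.0], local_idx, nonlocal_idx))
```

The transformation the paper proposes pipelines these phases across i-sections, so the inspector and I/O of one i-section overlap the executor of another.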
3 Transformation

In our transformation, we assume that the irregular loop body has no loop-carried dependences: no iteration is needed by future iterations, and each can be completed before the entire computation finishes. Most practical irregular scientific applications have this kind of loop, including partial differential equation solvers and molecular dynamics codes. We take adaptive irregular out-of-core programs in the inspector-executor model as the starting point and transform them into a pipelined fashion. Figure 3 shows a typical program in the inspector-executor model. The transformation process consists of the following steps. (1)
A message m matches a subscription s if all the predicates evaluate to true based on the content of m.
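This conjunctive matching semantics can be sketched as follows; the representation of a predicate as an (attribute, operator, value) triple is an assumption for illustration:

```python
import operator

OPS = {"=": operator.eq, "<": operator.lt, ">": operator.gt,
       "<=": operator.le, ">=": operator.ge}

def matches(message, subscription):
    """A message matches a subscription iff every predicate holds on the
    message content (the subscription is a conjunction of predicates)."""
    for attr, op, value in subscription:
        if attr not in message or not OPS[op](message[attr], value):
            return False
    return True

msg = {"symbol": "ACME", "price": 120}
sub = [("symbol", "=", "ACME"), ("price", ">", 100)]
print(matches(msg, sub))                  # True
print(matches(msg, [("price", "<", 100)]))  # False
```

A content-based broker evaluates each incoming message against its stored subscriptions this way; the eCube representation below replaces this per-predicate scan with a geometric intersection.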
4 eCube: Hypercube Event

This section presents a multidimensional event representation, the eCube, for efficient indexing, filtering, and matching. These operations are fundamental for events and influence the higher-level event dissemination model. There are various data structures and access methods for multidimensional data; overviews and comparative analyses are presented in [8], [11], and [1]. Choosing an indexing structure is complex: it has to support incremental maintenance of the structure as well as range queries. We carefully investigated the UB-tree and RTree structures. The UB-tree is designed to perform multidimensional range queries [3]; it is a dynamic index structure based on a BTree and supports updates with logarithmic performance and space complexity O(n). The RTree is widely used for spatio-temporal data indexing, and it supports dynamic tree splitting and merging operations. We have therefore chosen the RTree to represent multidimensional events and event filtering, where events require dynamic operations.
eCube: Hypercube Event for Efficient Filtering in Content-Based Routing
Fig. 3. Minimum Boundary Rectangle (a 2-dimensional MBR (Xlow, Xhigh, Ylow, Yhigh) = (2, 5, 4, 7); each key stored in a leaf entry is intuitively a box, or collection of intervals, with one interval per dimension)
4.1 RTree

An RTree [15], extended from a B+ Tree, is a data structure that can index multidimensional information such as spatial data. Fig. 3 shows an example of 2-dimensional data. An RTree stores minimum boundary rectangles (MBRs), each of which represents the spatial index of an n-dimensional object with two n-dimensional points. Like BTrees, RTrees are kept balanced on insert and delete, and they ensure efficient storage utilisation.

Structure. An RTree builds an MBR approximation of every object in the data set and inserts each MBR in the leaf-level nodes. Fig. 4 illustrates a 3-dimensional RTree; rectangles A-F represent the MBRs of the 3-dimensional objects. The parent nodes, R5 and R6, represent groups of object MBRs. When a new object is inserted, a cost-based algorithm decides into which node the new object should be inserted. The goals of the algorithm are to limit the overlap between nodes and to reduce the dead space in the tree. For example, grouping objects A, C, and F into R5 requires a smaller MBR than if A, E, and F were grouped together instead. Enforcing a minimum and maximum number of object entries per node ensures a balanced tree. When a query searches the tree with the intersection operation, the tree is traversed, starting at the root, passing through each node where the query window intersects an MBR. Only object MBRs that intersect the query MBR at the leaf level have to be retrieved from disk. A BTree requires only a single path through the tree to be traversed, while an RTree may need to follow several paths, since the query window may intersect more than one
Fig. 4. RTree Structure (a 3-dimensional RTree: object MBRs are stored in leaf nodes R1-R4, grouped under parent nodes R5 and R6; a point query q is shown)

E. Yoneki and J. Bacon
MBR in each node. MBRs are hierarchically nested and can overlap. The tree is height-balanced; every leaf node has the same distance from the root. Let M be the number of entries that can fit in a node and m the minimum number of entries per node. Leaf and internal nodes contain between m and M entries. As items are added and removed, a node might overflow or underflow and require splitting or merging of the tree. If the number of entries in a node falls below the bound m after a deletion, the node is deleted, and the rest of its entries are distributed among the sibling nodes. Each RTree node corresponds to a disk page and an n-dimensional rectangle. Each non-leaf node contains entries of the form (ref, rect), where ref is the address of a child node and rect is the MBR of all entries in that child node. Leaves contain entries of the same format, where ref points to an object and rect is the MBR of that object.

Search. Search in an RTree is performed in a similar way to that in a BTree. Search algorithms (e.g. intersection, containment, nearest) use MBRs to decide whether to search inside a child node, which means that most of the nodes in the tree are never touched during a search. The average cost of a search is O(log n) and the worst case is O(n). Different algorithms can be used to split nodes when they become full. In Fig. 4, a point query q requires traversing R5, R6 and child nodes of R6 (e.g. R2 and R4) before reaching the target MBR E. When the coverage and overlap of MBRs are minimised, the RTree gives maximum search efficiency. For nearest neighbour (NN) search over point data, the decision is based on the distance calculation shown in Fig. 5. Let MINDIST(P, M) be the minimum distance between a query point P and a bounding rectangle M, and let MINMAXDIST(P, M) be the upper bound on the minimum distance to data in the bounding rectangle (i.e. the smallest distance within which an object inside M is guaranteed to be found).
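The two geometric primitives used above, window intersection during traversal and MINDIST for ordering candidate branches, can be sketched as follows. The encoding of an MBR as a list of (low, high) pairs per dimension is an illustrative assumption:

```python
def intersects(a, b):
    """True iff two MBRs overlap; a and b are lists of (low, high) per dimension."""
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(a, b))

def mindist2(p, mbr):
    """Squared minimum distance from point p to an MBR (0 if p lies inside):
    used to sort child entries when building the Active Branch List."""
    d = 0.0
    for x, (lo, hi) in zip(p, mbr):
        if x < lo:
            d += (lo - x) ** 2
        elif x > hi:
            d += (x - hi) ** 2
    return d

box = [(2, 5), (4, 7)]                       # the MBR of Fig. 3
print(intersects(box, [(4, 6), (6, 9)]))     # overlapping window -> True
print(intersects(box, [(6, 8), (0, 3)]))     # disjoint window -> False
print(mindist2((0, 4), box))                 # 4.0: only the x-extent is missed
```

During tree traversal, `intersects` decides whether a child node can contain answers at all, while `mindist2` ranks the surviving children for nearest-neighbour search.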
However, even if MINDIST is small, there is no guarantee that the MBR contains the nearest object. In Fig. 5, MBR1 has the smaller MINDIST from the query point, while the nearest object, O21, is in MBR2. The search algorithm for nearest neighbour is:

1. If the node is a leaf, find the nearest object among its entries. If it is a non-leaf node, sort its entries by MINDIST to create an Active Branch List (ABL).
2. Prune the ABL: if MINDIST(P, M) > MINMAXDIST(P, M') for some other rectangle M', then M is removed (M' is guaranteed to contain an object closer to P than anything M can offer). If the distance from the query point to an object found so far is larger than MINMAXDIST(P, M) for some M, then that object is discarded (M is guaranteed to contain a closer one). If MINDIST(P, M) is larger than the distance from the query point to the nearest object found so far, then M is removed (M cannot contain objects closer to P than that object).
3. Repeat steps 1 and 2 until the ABL is empty.

Fig. 5. Nearest Neighbour Search (a query point with two MBRs: MBR1 containing O11 and O12, MBR2 containing O21 and O22; MINDIST and MINMAXDIST are indicated)
4.2 Adaptation to Publish/Subscribe

Event filtering in a content-based publish/subscribe system can be considered as querying in a high-dimensional space, but applying multidimensional index structures to publish/subscribe systems is still unexplored. In our implementation, both publications and subscriptions are therefore modelled as eCubes, and matching is regarded as an intersection query on eCubes in an n-dimensional space. Point queries on the eCube are transformed into range queries to make use of efficient point access methods for event matching. This corresponds to the realisation of symmetric publish/subscribe, and it automatically provides effective range, nearby, and point queries. Traditional databases support multidimensional data indexing and querying, using a query language that extends SQL. For example, a moving-object database can index and query the position and time of tracked objects. Applications in ubiquitous computing require such functions over distributed network environments, where data are produced by publishers via event brokers, and the network itself can be considered as a database; the queries are usually persistent (i.e. continuous queries). Stream data processing and publish/subscribe systems address similar problems. Supporting spatial, temporal, and other event attributes with a multidimensional index structure can dramatically enhance filtering and matching performance in publish/subscribe systems. For example, tracking a car, which is associated with changes of position through time, needs spatio-temporal indexing support. GPS, wireless computing, and mobile phones can provide position data, and ubiquitous applications need this data type for tracking, rerouting traffic, and location-aware services. Both point and range queries can be performed over the eCube in a symmetric manner between publishers and subscribers.
The majority of publish/subscribe systems assume that subscriptions cover event notifications. We focus on symmetric publish/subscribe, so the case in which event notifications cover subscriptions is also part of the event filtering operation. Typical operations with the eCube can thus be classified into the following two categories:

– Event Notifications ⊆ Subscriptions: events are point queries and subscriptions are aggregated in the eCube. For example, subscribers are interested in the stock prices of various companies when a price goes up dramatically. The subscribers have interests in different companies, and an event reporting a specific company's price change will be notified only to the subscribers with matching subscriptions.

– Event Notifications ⊇ Subscriptions: events are range queries and subscriptions are point data. For example, a series of news items related to Bill Gates is published to the subscribers who are located in New York and Boston; an attribute in the event notification thus indicates the locations New York and Boston. Subscribers with the attribute London will not receive the event.

4.3 Cube Subscription

Events and subscriptions can essentially be described in a symmetric manner with the eCube. Consider an online music market, where old collections may be on sale. Events represent a cube with 3 dimensions (i.e. Media, Category, and Year). Subscriptions can be:
Fig. 6. 3-Dimensional Subscription
Point Query: CDs of Jazz released in 2005
Partial Match Query: Any media of Jazz
Range Query: CDs and DVDs of Jazz and Popular music released between 2000 and 2005
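For illustration, the three subscriptions above can be encoded as ranges over the (Media, Category, Year) dimensions. The attribute codes and the full-range sentinel are assumptions made for this sketch, not part of the eCube specification:

```python
# Illustrative attribute encodings (not from the paper).
MEDIA = {"CD": 0, "DVD": 1, "Tape": 2}
CATEGORY = {"Jazz": 0, "Popular": 1, "Classic": 2}
ANY = (0, 10**9)   # full-range interval for an unconstrained dimension

# Each subscription is a cube: one (low, high) interval per dimension.
point_sub = [(MEDIA["CD"], MEDIA["CD"]),
             (CATEGORY["Jazz"], CATEGORY["Jazz"]),
             (2005, 2005)]                         # CDs of Jazz released in 2005
partial_sub = [ANY,
               (CATEGORY["Jazz"], CATEGORY["Jazz"]),
               ANY]                                # any media of Jazz
range_sub = [(MEDIA["CD"], MEDIA["DVD"]),
             (CATEGORY["Jazz"], CATEGORY["Popular"]),
             (2000, 2005)]                         # CDs/DVDs of Jazz/Popular, 2000-2005

def covers(cube, event_point):
    """An event (a point) matches a cube subscription iff it lies inside the cube."""
    return all(lo <= x <= hi for (lo, hi), x in zip(cube, event_point))

event = (MEDIA["CD"], CATEGORY["Jazz"], 2005)      # a CD of Jazz from 2005
print([covers(s, event) for s in (point_sub, partial_sub, range_sub)])
```

Note that all three subscription kinds reduce to the same cube representation: a point query is a zero-extent cube, and a partial match query simply leaves some dimensions at full range.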
Fig. 6 depicts the 3-dimensional eCube with the above subscriptions.

4.4 Expressiveness

We consider event filtering as search in a high-dimensional data space and introduce a hypercube-based filtering model. It is common to index spatio-temporal objects by treating time as another dimension on top of a spatial index, so that a 3-dimensional spatial access method can be used. We extend this to n dimensions, which allows the inclusion of any information, such as weather, temperature, or interests of the subscribers. This approach thus takes advantage of the range query efficiency of multidimensional indexing. The indexing mechanism of the eCube can be used for content-based filtering expressions, aggregation of subscriptions, and parts of the event correlation mechanism. Ultimately, the event itself can be represented as an eCube for symmetric publish/subscribe. The eCube filter thus uses the geometrical intersection of publications and subscriptions represented as hypercubes in a multidimensional event space. This provides selective data dissemination in an efficient manner, including symmetric publish/subscribe. Data from WSNs can be multidimensional, and searching such complex data may require more advanced queries and indexing mechanisms than simply hashing values to construct a DHT, so that multiple pattern recognitions and similarity searches can be applied. Subscribing to unstructured documents that do not have a precise description may need some way to describe the semantics of the documents. Another aspect is that searching a DHT requires the exact key for hashing, while users may not require exact results. This section discusses the expressiveness of queries and subscriptions. The eCube can express temporal subscriptions and filtering by using an additional dimension with time values. A simple real-world use of the eCube is with geographical coordinates as 2-dimensional values.
A query such as "Find all book stores within 2 miles of my current location" can be expressed with an RTree, which splits the data space into hierarchically nested, and possibly overlapping, rectangles.
4.5 Experimental Prototype

The prototype implementation of the RTree extends the Java implementation [16] based on the paper by Guttman [15]; we made it more compact. It currently supports range, point, and nearest neighbour queries. The prototype is a 100 KB class library in Java with JDK 1.5 SE. The experiments aim to demonstrate the applicability of an RTree for event and subscription representation.

4.6 Evaluation of eCube with Sensor Data

In this section, we present a brief evaluation of the eCube, addressing its filtering capability. We evaluate the eCube with live traffic data from the city of Cambridge. Data is gathered from inductive-loop sensors installed at various key junctions across Cambridge and collected every five minutes from raw signal information. Different sizes of data sets are used for the experiments, ranging from 100 to 40,000 items. The motorway data from April 3rd, 2006 is used, transformed into 1-, 3-, and 6-dimensional data with the attributes Date, Day, Time, Location, Flow, and Occupancy. The raw data are point data, which are converted to zero-size range data so that range queries can be issued against them via the intersection operation. This experiment demonstrates the functionality of the RTree and compares its operation with a simple brute-force operation, in which a set of predicates is used for query matching. Complex range queries mapping directly to real-world incidents can be processed, such as "the speed of the average car passing junction A is slower than at junction B at 1:00 pm on Wednesdays". It is not easy to show the capability of eCube filtering for expressive and complex queries in a quantitative manner; the experiments therefore focus on the performance of high-volume range filtering.
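The conversion of raw point readings into zero-size ranges, so that a single intersection primitive serves both point and range queries, can be sketched as follows. The field selection (Time, Location, Flow) and the query window are illustrative:

```python
def point_to_cube(point):
    """A point becomes a degenerate cube: each interval has zero extent."""
    return [(v, v) for v in point]

def intersects(a, b):
    """Standard per-dimension interval overlap test on two cubes."""
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(a, b))

# A 3-dimensional reading (Time in minutes, Location id, Flow), as in the traffic data.
reading = point_to_cube((780, 4, 55))          # 13:00, junction 4, flow 55
query = [(720, 840), (4, 4), (50, 100)]        # 12:00-14:00 at junction 4, flow 50-100
print(intersects(reading, query))   # True: the zero-size cube lies in the query window
```

Because both stored data and queries are cubes, the same RTree intersection machinery answers point queries, partial-match queries, and range queries uniformly.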
Fig. 7. Single Range Query Operation: RTree vs. Brute Force (three panels for 1-, 3-, and 6-dimensional data; Y axis: milliseconds; X axis: data size from 100 to 40,000)
Dimension Size. Fig. 7 and Fig. 8 show the processing speed of a range query. The X axis indicates the data size in bytes; it is not linearly scaled over the entire range, and the same X-axis layout is used in Fig. 7-11. The sizes of the data sets lie between 100 and 40,000, as seen on the X axis; two partitions (between 1000 and 5000, and between 10,000 and 40,000) are each scaled linearly. This applies to all the experiments, where different data sets are used. An RTree has been created for data of 1, 3, and 6 dimensions.
Fig. 8. Single Range Query: Dimensions (left panel: RTree; right panel: brute force; curves for 1, 3, and 6 dimensions; Y axis: milliseconds; X axis: data size)
The operation using the brute-force method is also shown, where each predicate is compared with the query. For 1-dimensional data, the use of the RTree incurs too much overhead, but the RTree outperforms the brute-force method as the number of dimensions increases. The number of dimensions has little influence on RTree performance; thus, once the structure is set, it guarantees an upper bound on the search time.

Matching Time. Fig. 9 and Fig. 10 show the average matching time for a data entry against a single query. The Y axis indicates total matching time divided by the number of data items. Fig. 9 depicts the comparison between RTree and brute force for data of the same dimensionality, while Fig. 10 shows the same experimental results comparing data of different dimensionality. For 1-dimensional data, the use of the RTree incurs too much overhead, but increasing the number of dimensions does not affect its operation time. In these figures, the X axes are in non-linear scales. The cost of the brute-force method increases with increasing data dimensionality, as shown in Fig. 10.

RTree Storage Size. Fig. 11 shows the storage requirement for the RTree. The left figure shows storage usage, while the right one shows construction time. The current configuration uses 4096 B per block. Since the index may also contain user-defined data, there is no way to know how big a single node may become; the storage manager will use multiple blocks per node if needed, which will slow down performance. There are only small differences with changing dimension size, because the data size of each element is about the same in this experiment. The standard deviation is ≅ 0, because the same input data set is used for the repeated experiments.
Fig. 9. Single Range Query Matching Time (RTree vs. brute force for 1-, 3-, and 6-dimensional data; Y axis: matching time (ms) per data item; X axis: data size)
Fig. 10. Matching Time (left: RTree with 1, 3, and 6 dimensions; right: brute force with 1, 3, and 6 dimensions; Y axis: matching time (ms) per data item; X axis: data size)
Fig. 11. Construction of RTree (left: storage usage in pages of 4096 B each; right: construction time in minutes; curves for 1, 3, and 6 dimensions; X axis: data size)
The experiments highlight that RTree-based indexing is effective for providing data selectivity among high volumes of data. It also has the advantage of supporting incremental operation without the need for complete reconstruction. These experiments are not exhaustive, and different trends in the data may produce different results; further experiments with various real-world data will therefore be necessary as future work. RTree indexing enables neighbourhood search, which allows similarity searches. This will be an advantage for supporting subscriptions that do not pose an exact question or only need approximate results. Approximation or summarisation of sensor data can be modelled using this function.
5 Event Broker Grid with eCube Filter

We present an extension of a typed content-based publish/subscribe system (i.e. Hermes) with eCube filtering. In content-based publish/subscribe, the eCube filter can be placed in the publisher and subscriber edge brokers, or distributed over the network based on the coverage relationship of filters. If the publish/subscribe system uses rendezvous routing, a rendezvous node needs to keep all the subscriptions for matching. Multidimensional range queries deliver selective data to subscribers who are interested in specific data. Hermes [23] is a typed content-based publish/subscribe system built over Pastry; its basic mechanism is based on the rendezvous mechanism that Scribe uses [7]. Additionally, Hermes enforces a typed event schema, providing type safety by type checking notifications and subscriptions at runtime. The rendezvous nodes are chosen by hashing the event type name. Hermes extends the expressiveness of subscriptions and aims to allow multiple inheritance of event types. In Hermes, the content-based publish/subscribe routing algorithm is an adaptation of SIENA [6] and Scribe using rendezvous nodes. Both advertisements and subscriptions are sent towards the rendezvous node, and each node en route keeps track of them. Routing between the publisher, where the advertisement comes from, and the subscriber is created through this process; an advertisement and the subscriptions meet, in the worst case, at the rendezvous node. Event notifications follow this routing, and the event dissemination tree is therefore rooted at the publisher node, which saves some workload on the rendezvous nodes. Fig. 12 shows the routing mechanisms for content-based publish/subscribe. Arrows are white for advertisements, light grey for subscriptions, and black for publications. The black arrow from broker 1 to broker 3 shows a shortcut to subscriber 1 that differs from the routing mechanism of Scribe. Subscription 2 in content-based routing travels up to the broker hosting the publisher (Fig. 12). Grey circles indicate where filtering states are kept.

Fig. 12. Content-Based Routing for Publish/Subscribe in Hermes (publishers P1, P2; brokers B1-B5; rendezvous broker R; subscribers S1, S2)

5.1 eCube Event Filter

In content-based networks such as SIENA [6], an intermediate server node creates a forwarding table based on subscriptions and performs event filtering. In environments with high event-publishing rates, the speed of filtering based on matching the subscription predicates at each server is crucial for obtaining the required performance. In [25] and [4], subscriptions are clustered into multicast trees, so filtering is performed at both the source and receiver nodes. In contrast, the intermediate nodes perform filtering for selective event dissemination in [21].
In Hermes, a route for event dissemination of a specific event type is rooted at the publisher node and runs through a rendezvous node to all subscribers, constructing a diffusion tree. The intermediate broker nodes perform filtering for content-based publish/subscribe. The original filtering mechanism is primitive: each predicate of a subscription filter is kept independently, without any aggregation, within the subscriber edge broker, and the coverage operation requires comparing each predicate against an event notification. The eCube is integrated into the subscription filters to provide efficient matching and coverage operations. In the experiments, the effectiveness and expressiveness of typed channels and filtering attributes are compared. The advantages of this approach include efficient range queries and filter performance (in both resources and time).
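The rendezvous selection described above, hashing the event type name and agreeing on the numerically closest broker id, Pastry-style, can be sketched as follows. This is a simplification that ignores the prefix-based routing itself; the id width and broker names are assumptions:

```python
import hashlib

def node_id(name, bits=32):
    """Derive a numeric id from a name, as a stand-in for Pastry's 128-bit ids."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** bits)

def rendezvous_broker(event_type, broker_ids):
    """The rendezvous node for an event type is the broker whose id is
    numerically closest to the hash of the event type name."""
    key = node_id(event_type)
    return min(broker_ids, key=lambda b: abs(b - key))

brokers = [node_id(f"broker-{i}") for i in range(5)]
r1 = rendezvous_broker("StockQuote", brokers)
r2 = rendezvous_broker("StockQuote", brokers)
print(r1 == r2)        # deterministic: every broker computes the same rendezvous node
print(r1 in brokers)
```

Because the mapping is deterministic, advertisements and subscriptions for the same event type converge on the same broker without any coordination, which is what lets filtering state accumulate at the rendezvous node.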
eCube: Hypercube Event for Efficient Filtering in Content-Based Routing
1257
The balance between typed-channel and content-based filtering is a complex issue. In existing distributed systems, each broker has a multi-attribute data structure to match the complex predicate of each subscription. The notion of weak filtering for hierarchical filtering can be used as summary-based routing (see [30] and [9]), so that the balance between the latency of the matching process and the event traffic can be controlled. When complex event matching must be performed on each event notification against all subscriptions, the message processing latency may become too high, preventing reasonable publishing rates to all subscribers. The subscription indexing data structure and the filter matching algorithm are two important factors that impact performance in such environments, including filter coverage over the network. Event filtering in content-based publish/subscribe can provide better performance if similar subscriptions reside in a single broker or in neighbouring brokers. Physical proximity provides low hop counts per event diffusion in the network with a content-based routing algorithm [20]. If physical proximity is low, on the other hand, routing becomes similar to simple flooding or unicasting.
5.2 Range Query
A DHT is not suited for range queries, which makes it hard to build a content-based publish/subscribe system over structured overlay networks. When a subscription contains attributes with continuous values, it becomes inefficient to walk through the entire set of DHT entries for matching. Range queries are common with spatial data and desirable in geographic-based applications of pervasive computing, such as queries relating to intersections, containment, and nearest neighbours. Thus, eCube provides critical functions. However, the DHT mechanisms in most current structured overlays distribute data uniformly, so range queries cannot be answered without an exhaustive search.
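The difficulty can be illustrated with a small sketch of our own (the node count and the partition scheme are illustrative assumptions, not taken from Pastry or the paper): hash-based placement spreads consecutive attribute values across unrelated nodes, while an order-preserving placement keeps a contiguous value range on a contiguous run of nodes.

```python
import hashlib

NODES = 8

def dht_node(key):
    """Hash-based placement: uniform load, but value locality is lost."""
    digest = hashlib.sha1(str(key).encode()).digest()
    return digest[0] % NODES

def ordered_node(key, lo=1990, hi=2006):
    """Order-preserving placement: a contiguous value range maps to a
    contiguous run of nodes, so a range query visits few nodes."""
    return min(NODES - 1, (key - lo) * NODES // (hi - lo + 1))

years = range(1995, 1999)                        # the range [1995, 1998]
print(sorted({ordered_node(y) for y in years}))  # [2, 3]: contiguous
print(sorted({dht_node(y) for y in years}))      # typically scattered ids
```

With the hash-based mapping, answering the range query requires contacting every node that might hold a matching value; with the order-preserving mapping it touches only the partitions covering the range, which is the kind of data placement the next section refers to.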
Range queries introduce new requirements such as data placement and query routing in distributed publish/subscribe systems.
5.3 Experiments
The experiments in this section demonstrate a selective and expressive event filter that can be used to provide the flexibility to explore the subscriptions. The scalability of Hermes is reported in [23], and general control traffic (e.g., advertisement and subscription propagation) is reported in [27]. Thus, to keep the results independent of secondary variables, only the message traffic for the dissemination of subscriptions is measured. The metric used for the experiments is the number of publications disseminated in the publish/subscribe system. The number of hops in the event dissemination structure varies depending on the size of the network and the relative locations of publishers and subscribers. Experimental Setup. The experiments are run on FreePastry [27], a Pastry simulator. Publishers, subscribers and rendezvous nodes are configured with deterministic node ids, and all the other brokers get node ids from the Pastry simulation. One thousand Pastry nodes are deployed. All Pastry nodes are considered as brokers, where the Hermes event broker function resides, and the total number of nodes (N = 1000) gives average hop counts from the source to the routing destination as
E. Yoneki and J. Bacon
Fig. 13. Subscriptions and Publications
log_{2^4}(1000) ≈ 2.5, where b = 4 is given as a configuration value. Eight subscribers connect to the subscriber edge brokers individually. The subscriptions are listed in Fig. 13. 1000 publications are randomly created for each event type by a single publisher. This is a relatively small-scale experiment, but considering the characteristics of Hermes, where each publisher creates an individual dissemination tree through the rendezvous node, the experiment is sufficient for evaluation. Subscriptions and publications. A single type CD with two attributes (i.e. released year and ranking) is used for the content-based subscription filter. In Fig. 13, eight subscriptions are defined with different ranges on the two attributes. The publications take the form of a point for the eCube RTree. Four different publications are defined and 250 instances of each publication are published: 1000 event notifications are processed
Fig. 14. Pub/Sub System over Pastry
Fig. 15. Matching Rate in Scribe (No Filtering)
in total. The same sets of publications and subscriptions are used in all experiments unless stated otherwise. Base Case with eCube Event Filter. This experiment demonstrates the basic operation of eCube event filters. The experiment is run on Hermes with eCube filtering and on Scribe, which has no filtering. Fig. 14 shows the logical topology, consisting of 8 subscribers (i.e. Sub01-Sub08), a publisher, and a rendezvous node along with router nodes. Identifiers indicate the addresses assigned by the Pastry simulation. Fig. 15 depicts the matching publication rate for each subscriber node in the Scribe experiment. With eCubes, there are no false positives and subscribers receive only matching publications. It is obvious that filters significantly help to control the traffic of event dissemination. Multiple Types vs. Additional Dimension as Type. When multiple types share the same attributes, there are two options: first, defining predefined types on separate channels; second, defining a single channel with an additional attribute that distinguishes the types. This experiment runs both settings and compares the publication traffic. In the first setting, three types are used: Classic, Jazz, and Pop. Thus, 3 rendezvous nodes are created. All three types share the same attributes. Table 1 shows the defined types along with the subscriptions. The publisher publishes 1000 events for each type, 3000 Table 1. 3 Types and Matching Subscriptions
Fig. 16. Comparison between Channels on Types vs. Additional Dimensions: (a) traffic comparison; (b) matching rate (3 types)
publications in total. Unless a super type is defined for the three types, each type creates an independent dissemination tree and causes duplicated traffic. In Fig. 17, three rendezvous nodes appear, one for each type. For the second setting, instead of using multiple types, an additional dimension is added to the eCube. Fig. 16(a) compares the total event traffic of the two settings. The result shows a significant traffic improvement with the additional-dimension approach. Fig. 16(b) shows the matching ratio of received events in the experiment with 3 types. When different, non-hierarchical event types are used, a separate route is constructed for each event type. Different types,
Fig. 17. Publish/Subscribe System with 3 Event Types
which may contain the same attributes, may not have a super type. Also, super types may contain many other subtypes, of which the client may not want to receive notifications. Thus, additional dimensions on the filtering attributes may be a better approach for flexible indexing. Transforming the type name to a dimension can preserve locality, similarity or even hierarchy. This provides an advantage for neighbour matching. The eCube filter introduces flexibility between the topic-based and content-based subscription models. The experiments show that transforming types into additional dimensions of the hypercube filter outperforms constructing an individual channel per type. Transforming the type name to a dimension can preserve similarity and hierarchy, which automatically provides a neighbour-matching capability. Further experiments on flexible indexing will be useful future work. DHT mechanisms contain two contradictory sides: the hash function distributes data objects evenly within the space to achieve a balanced load, whereas applying the hash function may completely destroy the locality information among similar subscriptions. For example, Pastry constructs its DHT with random node identifiers to achieve load balance. Nevertheless, similarity information among subscriptions is important in publish/subscribe systems.
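One way to realise such a type-to-dimension transformation, sketched here as our own illustration (the schema and the interval numbering are assumptions, not the paper's algorithm), is to assign each type the interval spanning its subtree in the type hierarchy; a subscription to a supertype then becomes a simple range predicate on the extra dimension, preserving the hierarchy.

```python
TYPE_TREE = {"Music": ["Classic", "Jazz", "Pop"]}   # assumed example schema

def assign_intervals(tree, root="Music"):
    """Number the leaf types left to right; an inner type spans the
    interval covered by its children (a subtree interval labelling)."""
    intervals, counter = {}, [0]
    def visit(node):
        children = tree.get(node, [])
        if not children:                         # leaf type: single point
            intervals[node] = (counter[0], counter[0])
            counter[0] += 1
        else:                                    # inner type: span children
            for c in children:
                visit(c)
            lo = intervals[children[0]][0]
            hi = intervals[children[-1]][1]
            intervals[node] = (lo, hi)
    visit(root)
    return intervals

iv = assign_intervals(TYPE_TREE)
print(iv)  # {'Classic': (0, 0), 'Jazz': (1, 1), 'Pop': (2, 2), 'Music': (0, 2)}
```

Subscribing to the supertype Music is then just the range predicate 0 ≤ type ≤ 2 on the extra eCube dimension, while Jazz alone is the point predicate type = 1.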
6 Related Work
In database systems, multidimensional range queries are solved using indexing techniques, and the indices are centralised. Recently, distributed indexing has become popular, especially in the context of P2P and sensor networks. Indexing techniques trade off data insertion cost against efficient querying (see [32] for further details). A similar idea to the eCube is CAN-based multicast [24]. In [19], Z-ordering [22] is used for the implementation of CAN multicast. Z-ordering interleaves the bits of each value for each dimension to create a one-dimensional bit string. For matching algorithms, fast and efficient matching algorithms for publish/subscribe systems are investigated in [10]. Topic-based publish/subscribe is realised by a basic DHT-based multicast mechanism in [35], [36], [24]. More recently, several distributed content-based publish/subscribe systems based on multicast have appeared [2], [5], [29]. An approach combining topic-based and content-based systems using Scribe is described in [28]. In these approaches, the publications and the subscriptions are classified into topics using an appropriate application-specific schema. The design of the domain schema is a key element for system performance, and managing false positives is critical for such approaches. Recently, several proposals have been made to extend P2P functionality to more complex queries (e.g. range queries [14], joins [17], XML [12]). [13] describes the Range Search Tree (RST), which maps data partitions to nodes. Range queries are broken into sub-queries corresponding to nodes in the RST. Data locality is obtained by the RST structure, which allows fast local matching. However, the sub-queries make the matching process complex. Our eCube demonstrates a unique approach for representing events and can be used in different systems.
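The Z-ordering mentioned above can be sketched as follows; the bit width and the convention of placing x bits at even positions are our own illustrative choices.

```python
def z_order(x, y, bits=8):
    """Interleave `bits` bits of x (even positions) and y (odd positions)
    into one Morton code, mapping a 2-D point to a 1-D key."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)       # bit i of x -> position 2i
        code |= ((y >> i) & 1) << (2 * i + 1)   # bit i of y -> position 2i+1
    return code

print(z_order(0b11, 0b00, bits=2))  # 0b0101 = 5
print(z_order(0b10, 0b11, bits=2))  # 0b1110 = 14
```

Because nearby points tend to share high-order interleaved bits, a multidimensional range maps to a small number of one-dimensional key intervals, which is what makes the encoding useful for CAN-style multicast.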
7 Conclusions
In this paper, we have introduced eCube, a novel event representation structure for efficient indexing, filtering and matching of events, and have applied it in a typed content-based publish/subscribe system to improve the event filtering process. The experiments show various advantages, including efficient range queries and the transformation of types into additional dimensions of the hypercube filter. Transforming the type name to a dimension can preserve similarity and hierarchy, which automatically provides a neighbour-matching capability. We continue to work on a regular expression version of the RTree for a better indexing structure. Transformation mechanisms such as a feature extraction process to reduce the number of dimensions may be useful. Future work includes lightweight versions of the indexing structure to support resource-constrained devices, and fuzzy semantic queries for the matching mechanism. An important aspect is that the values used to index the eCube will have a huge impact. For example, locality-sensitive hash values computed from string data could be combined with the current form of the eCube filter to exploit the locality property. This will be worthwhile future work. Acknowledgment. We would like to thank Derek Murray for valuable comments.
References 1. Ahn, H.K., Mamoulis, N., Wong, H.M.: A survey on multidimensional access methods. Technical report, Utrecht University (2001) 2. Banavar, G., et al.: An efficient multicast protocol for content-based publish-subscribe systems. In: Proc. ICDCS, pp. 262–272 (1999) 3. Bayer, R.: The universal B-tree for multidimensional indexing. Technical Report TUM-I9637, Technische Universitat Munchen (1996) 4. Cao, F., Singh, J.: Efficient event routing in content-based publish-subscribe service networks. In: Proc. IEEE INFOCOM (2004) 5. Carzaniga, A., Rosenblum, D., Wolf, A.: Design and evaluation of a wide-area event notification service. ACM Trans. on Computer Systems 19(3) (2001) 6. Carzaniga, A., Rutherford, M., Wolf, A.: A routing scheme for content-based networking. In: Proc. IEEE INFOCOM (2004) 7. Castro, M., et al.: Scribe: A large-scale and decentralized application-level multicast infrastructure. IEEE Journal on Selected Areas in Communications 20 (2002) 8. de Berg, M., et al.: Computational Geometry: Algorithms and Applications. Springer, Heidelberg (1998) 9. Eugster, P., Felber, P., et al.: Event systems: How to have your cake and eat it too. In: Proc. Workshop on DEBS (2002) 10. Fabret, F., Jacobsen, H.A., et al.: Filtering algorithms and implementation for very fast publish/subscribe systems. In: Proc. SIGMOD, pp. 115–126 (2001) 11. Gaede, V., et al.: Multidimensional access methods. ACM Computing Surveys 30(2) (1998) 12. Galanis, L., Wang, Y., et al.: Locating data sources in large distributed systems. In: Proc. VLDB, pp. 874–885 (2003) 13. Gao, J., Steenkiste, P.: An adaptive protocol for efficient support of range queries in DHT-based systems. In: Proc. IEEE International Conference on Network Protocols (2004)
14. Gupta, A., et al.: Approximate range selection queries in peer-to-peer systems. In: Proc. CIDR, pp. 141–151 (2003) 15. Guttman, A.: R-trees: A dynamic index structure for spatial searching. In: Proc. ACM SIGMOD (1984) 16. Hadjieleftheriou, M.: Spatial index, http://research.att.com/∼marioh/spatialindex/index.html 17. Harren, M., et al.: Complex queries in DHT-based peer-to-peer networks. In: Proc. Workshop on P2P Systems, pp. 242–250 (2002) 18. IBM: IBM MQ Series (2000), http://www.ibm.com/software/ts/mqseries/ 19. jxta.org: http://www.jxta.org/ 20. Mühl, G., Fiege, L., Buchmann, A.: Filter similarities in content-based publish/subscribe systems. In: Proc. ARCS (2002) 21. Oliveira, M., et al.: Router level filtering on receiver interest delivery. In: Proc. NGC (2000) 22. Orenstein, J., Merrett, T.: A class of data structures for associative searching. In: Proc. Principles of Database Systems (1984) 23. Pietzuch, P., Bacon, J.: Hermes: A distributed event-based middleware architecture. In: Proc. Workshop on DEBS (2002) 24. Ratnasamy, S., et al.: Application-level multicast using content-addressable networks. In: Crowcroft, J., Hofmann, M. (eds.) NGC 2001. LNCS, vol. 2233, Springer, Heidelberg (2001) 25. Riabov, A., Liu, Z., Wolf, J., Yu, P., Zhang, L.: Clustering algorithms for content-based publication-subscription systems. In: Proc. ICDCS (2002) 26. Rjaibi, W., Dittrich, K.R., Jaepel, D.: Event matching in symmetric subscription systems. In: Proc. CASCON (2002) 27. Rowstron, A., Druschel, P.: Pastry: scalable, decentralized object location and routing for large-scale peer-to-peer systems. In: Proc. ACM/IFIP/USENIX Middleware, pp. 329–350 (2001) 28. Tam, D., Azimi, R., Jacobsen, H.-A.: Building content-based publish/subscribe systems with distributed hash tables. In: DBISP2P 2004 (2003) 29. Terpstra, W.W., et al.: A peer-to-peer approach to content-based publish/subscribe. In: Proc. Workshop on DEBS (2003) 30.
Wang, Y., et al.: Summary-based routing for content-based event distribution networks. ACM Computer Communication Review (2004) 31. Yoneki, E.: Event broker grids with filtering, aggregation, and correlation for wireless sensor data. In: Meersman, R., Tari, Z., Herrero, P. (eds.) OTM 2005. LNCS, vol. 3762, pp. 304–313. Springer, Heidelberg (2005) 32. Yoneki, E.: ECCO: Data Centric Asynchronous Communication. PhD thesis, University of Cambridge, Technical Report UCAM-CL-TR-677 (2006) 33. Yoneki, E., Bacon, J.: Object tracking using durative events. In: Enokido, T., Yan, L., Xiao, B., Kim, D., Dai, Y., Yang, L.T. (eds.) Embedded and Ubiquitous Computing – EUC 2005 Workshops. LNCS, vol. 3823, pp. 652–662. Springer, Heidelberg (2005) 34. Yoneki, E., Bacon, J.: Openness and Interoperability in Mobile Middleware. CRC Press, Boca Raton (2006) 35. Zhao, B.Y., et al.: Tapestry: A resilient global-scale overlay for service deployment. IEEE Journal on Selected Areas in Communications 22 (2004) 36. Zhuang, S.Q., et al.: Bayeux: An architecture for scalable and fault-tolerant wide-area data dissemination. In: Proc. ACM NOSSDAV, pp. 11–20 (2001)
Combining Incomparable Public Session Keys and Certificateless Public Key Cryptography for Securing the Communication Between Grid Participants Elvis Papalilo and Bernd Freisleben Department of Mathematics and Computer Science, University of Marburg, Hans-Meerwein-Str., D-35032 Marburg, Germany {elvis,freisleb}@informatik.uni-marburg.de
Abstract. Securing the communication between participants in Grid computing environments is an important task, because the participants do not know whether the exchanged information has been modified or intercepted, or whether it is coming from or going to the right party. In this paper, a hybrid approach based on a combination of incomparable public session keys and certificateless public key cryptography for dealing with different threats to the information flow is presented. The properties of the proposed approach in the presence of various threats are discussed.
1 Introduction
Security is a key problem that needs to be addressed in Grid computing environments. Grid security can be broken down into five main areas: authentication, authorization/access control, confidentiality, integrity and management of security/control mechanisms [1]. Grids are designed to provide access and control over enormous remote computational resources, storage devices and scientific instruments. The information exchanged, saved or processed can be quite valuable, and thus a Grid is an attractive target for attacks to extract this information. Each Grid site is independently administered and has its own local security solutions, which are mainly based on the application of X.509 certificates for distributing digital identities to human Grid participants and a Public Key Infrastructure (PKI) for securing the communication between them. The primary techniques used for assuring message-level security are: • public/private key cryptography – participants use the public keys of their counterparts (as defined in their certificates) for encrypting messages. In general, only the participant in possession of the corresponding private key is able to decrypt the received messages. • shared key cryptography – participants agree on a common key for encrypting the communication between them. The key agreement protocol is based on using the target partner’s certified public key. R. Meersman and Z. Tari et al. (Eds.): OTM 2007, Part II, LNCS 4804, pp. 1264–1279, 2007. © Springer-Verlag Berlin Heidelberg 2007
These solutions are built on top of different operating systems. When all participants are brought together to collaborate in this heterogeneous environment, many security problems arise. In general, Grid systems are vulnerable to all typical network and computer security threats and attacks [2], [3], [4], [5], [6]. Furthermore, the use of web service technology in the Grid [7] will bring a new wave of threats, in particular those inherited from XML Web Services. Thus, the application of the security solutions mentioned above offers no guarantees that the information exchanged between Grid participants is not going to be compromised or abused by a malicious third party that listens to the communication. Furthermore, they all ignore the question of why a participant in the Grid environment was chosen among the others for completing a specified task and for how long a collaboration partner should be considered. Thus, the behaviour of the participants also needs to be considered in order to limit the possibility of malicious participants actively taking part in a collaboration. An alternative solution to the problem is the establishment of a secured communication channel between collaborating participants (using a virtual private network - VPN). Thus, the transport mechanism itself is secured. Although in this case an inherently secure communication channel is opened between parties, the method itself is impractical in Grid environments [8] due to: • administration overhead – new tunnels need to be configured each time a new virtual organization joins or leaves the environment. • incompatibility between different formats used for private IP spaces in small and large networks – a 16-bit private IP space is preferred for small networks, while in large networks a 24-bit private IP space is preferred. There is also the possibility that (multiple) private networks use the same private IP subnet.
In this paper, we propose a hybrid message-level encryption scheme for securing the communication between Grid participants. It is based on a combination of two asymmetric cryptographic techniques, a variant of Public Key Infrastructure (PKI) and Certificateless Public Key Cryptography (CL-PKC). Additionally, we first sort the collaboration partners according to their (past) behaviour by considering the notion of trust in Grid environments, and in a second step, we assign to them the corresponding keys for encrypting the communication. Such a key is valid until no more tasks are left to be sent to the target partner, and as long as this partner is a trusted partner (according to the expressed trust requirements). We mainly concentrate on the confidentiality of the communication between Grid participants, but issues related to authorization, integrity, management and non-repudiation will also be treated. The paper is organised as follows. In section 2, related work is discussed. In section 3, an analysis of the threats to the communication between participants in Grid environments is presented. In section 4, our approach for securing the communication between Grid participants is proposed. Section 5 concludes the paper and outlines areas of future research.
2 Related Work
There are several approaches for establishing secure communication between Grid participants. For example, the Globus Toolkit [9] uses the Grid Security Infrastructure (GSI) for enabling secure communication (and authentication) over an open network. GSI is based on public key encryption, X.509 certificates and the Secure Sockets Layer (SSL) communication protocol. Some extensions have been added for single sign-on and delegation. A Grid participant is identified by a certificate, which contains information for authenticating the participant. A third party, the Certificate Authority (CA), is used to certify the connection between the public key and the person in the certificate. To trust the certificate and its contents, the CA itself has to be trusted. Furthermore, the participants themselves can generate certificates for temporary sessions (proxy certificates). By default, GSI does not establish confidential (encrypted) communication between parties. It is up to the GSI administrator to ensure that the access control entries do not violate any site security policies. Other approaches try to improve the security of the communication between Grid participants by making use of different encryption methods. Lim and Robshaw [10] propose an approach where Grid participants use identity-based cryptography [11] for encrypting the information they exchange. However, in traditional identity-based encryption systems, the party in charge of the private keys (the private key generator, PKG) knows all the private keys of its participants, which principally is a single point of attack for malicious participants. Furthermore, the approach requires that a secure channel exists between a participant and its PKG, which in turn is not very practical in Grid environments. In a later publication [12], the authors try to solve these problems by getting rid of a separate PKG and by enabling the participants to play the role of the PKG for themselves.
Additionally, a third party is introduced with the purpose of giving assurances on the authenticity of the collaborating parties. Collaborating participants, based on publicly available information and using their PKG capabilities, generate session keys “on the fly”, which are used between collaborating participants to exchange the initial information (job request, credentials from the third trusted party, etc.). During a collaboration, a symmetric key, on which parties have previously agreed, is used for encrypting/decrypting the information flow. This could also be a single point of attack (the attack is directed only towards a single participant) for a malicious participant willing to obtain it. Saxena and Soh [13] propose some applications of pairing-based cryptography, using methods for trust delegation and key agreement in large distributed groups. All Grid participants that collaborate at a certain moment form a group. A subset of group members generates the public key, and the rest of the group generates the private key. A distributed trusted third party with a universal key escrow capability must always be present for the computation of the keys. These keys (public/private) are going to be used within the group for encrypting/decrypting the communication between group members. A similar approach is followed by Shen et al. [14] where some strategies for implementing group key management in Grid environments are proposed. The main difference to the work by Saxena and Soh [13] is the re-calculation of the group key every time a participant re-joins the group.
The vulnerability of both approaches lies in the fact that all group members are aware of the public/private key. A malicious participant, already part of the group, could decrypt all messages that group members exchange between them. Even if a malicious participant is not part of the group, a single point of attack (gaining access or stealing key information from only a single group participant) could be sufficient to decrypt all the information the group participants exchange between them. Crampton et al. [15] present a password-enabled and certificate-free Grid security infrastructure. Initially, a user authenticates itself to an authentication server through a username and password. After a successful verification, the user obtains through a secure channel the (proxy) credentials (public and private keys) that will be used during the next collaboration with a resource. The resource in turn verifies if the user is authorized to take advantage of its services and creates its proxy credentials and a job service in order to complete the tasks assigned by the user. A single trusted authority accredits the authentication parameters for the users, resources and authentication servers. There are several problems with this approach. First, the complexity of the environment is artificially increased. While the authentication of the resources is done directly by the trusted authority, the authentication of the users is done by a third party, the authentication server. Adding more components to the authentication chain increases the points of attack. Second, the resource has to believe that the user is authenticated through a “trusted” authentication server and not by a malicious one. Third, the resource has to believe that the user is not impersonating someone else in the environment. Finally, a single participant (the trusted authority) is in charge of the authentication parameters of all other participants in the environment. 
It must be trusted by the participants, and at the same time it has access to private information of the participants. Thus, the participants’ private information is not protected either in the scenario where this “trusted” third party turns out to be malicious or in the scenario where another malicious participant gains access to the private information of different participants through attacking this “trusted” third party (as a single point of attack). Additionally, some web services security standards (also applied to Grid services) are emerging. XML Signature [16] signs messages with X.509 certificates. This standard assures the integrity of messages, but it does not offer any support for threat prevention. WS-SecureConversation [17] is a relatively new protocol for establishing and using secure contexts with SOAP messages. Partners establish a secure context between them at the beginning, and all the following messages are signed using the XML-Signature standard. XML Encryption [18] is a standard for keeping all or part of a SOAP message secret. A participant in the communication is able to encrypt different sections of an XML document with different keys, making it possible for its collaboration partners to access only certain parts of the document, according to the assigned keys. However, when many partners want access to the same part of the document or to the entire document at the same time, they come into possession of the same key.
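The idea of section-wise encryption can be illustrated with a deliberately insecure toy sketch of our own (the XOR keystream stands in for a real cipher and is not part of XML Encryption): each section gets its own random key, so a partner holding only some keys can read only the corresponding sections.

```python
import hashlib
import secrets

def keystream(key, n):
    """Deterministic byte stream derived from `key` (toy construction)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(key, data):
    """XOR with the keystream; the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

document = {"header": b"public metadata", "body": b"confidential payload"}
keys = {section: secrets.token_bytes(32) for section in document}  # one key per section

encrypted = {s: xor_crypt(keys[s], d) for s, d in document.items()}
# A partner given only keys["header"] can recover only that section:
assert xor_crypt(keys["header"], encrypted["header"]) == b"public metadata"
```

Handing out `keys["header"]` but not `keys["body"]` gives a partner read access to the header alone, which is the access-scoping property the standard provides for XML documents.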
3 Communication Threats A collaboration in Grid environments takes place between interacting participants. A participant is either a service provider (i.e. a node to host and provide a service, or a service instance running on the provider node) or a service consumer (i.e. a node that requests a service from a provider (including the request to deploy and perform a service at the provider), or a service instance running on the consumer node). In general, there exists a flow of information from a source participant to a target participant, as shown in Fig. 1.a.
Fig. 1. Communication Threat Scenarios between Grid Participants: (a) normal flow; (b) intercepted flow; (c) modified flow; (d) forged flow; (e) interrupted flow
This information flow can be the target of different threats. The same threats, as depicted by Stallings [19], can also be encountered in Grid environments: passive threats and active threats.
The aim of passive threats is to simply intercept the communication and obtain the information being transmitted, as shown in Fig. 1b. They affect the confidentiality of the exchanged information, and are difficult to detect due to the lack of possibilities for direct intervention on the information the parties are exchanging. The situation changes completely when active threats are considered. Here, intervention on the information flow is always possible. The information flow can be: − modified: the integrity of the exchanged information is placed at risk as a result of the modification of the data being exchanged, through the intervention of an unauthorized third party (Fig. 1c). − forged: the authenticity of the exchanged information is placed at risk as a result of a forged stream that an unauthorized participant tries to exchange with the target participant, impersonating another authorized participant in the environment (Fig. 1d). This is also a non-repudiation problem. − interrupted: the normal communication between partners is interrupted as a result of an intervention by an unauthorized participant in the environment (Fig. 1e). This is a threat to availability. Prevention is the key to fighting passive threats. For active threats, fast detection and recovery are crucial. In this paper, we will concentrate on issues related to confidentiality and integrity of the messages exchanged between participants. Furthermore, authorization and management issues will be sketched.
4 Approaches to Securing the Communication Between Grid Participants 4.1 Basic Key Management Model and Encryption Scheme Grid systems typically make use of public key cryptography for securing a communication session between collaborating participants [1]. Two parties use a randomly generated shared key for encrypting/decrypting the communication between them. To ensure that the data is read only by the two parties (sender and receiver), the key has to be distributed securely between them. Throughout each session, the key is transmitted along with each message and is encrypted with the recipient's public key. A second possibility is to use asymmetric session keys. Each of the parties randomly generates a pair of session keys (a public and a private one). Their application is similar to symmetric session keys with the difference that in this case different keys are used for encrypting and decrypting messages. In this paper, we allow each Grid participant to generate its own keys such that each participant simultaneously possesses multiple public keys while all these keys correspond to a single private key. This method was first proposed by Waters et al. [20] and was later further developed by Zeng and Fujita [21]. According to their scheme, each time two participants A and B communicate with each other, the sender (participant A) decides to use either a public key from its pool
1270
E. Papalilo and B. Freisleben
of existing public keys or to generate a new one. This key is going to be sent to the receiver (participant B). Whenever B sends a message to A, the message is encrypted using A’s previously sent public key. Upon receipt, A decrypts it using its private key. The entire process is described in Fig. 2:
Fig. 2. Encrypting/Decrypting Scheme Used in [20]
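To make the scheme concrete, here is a toy sketch of one private key serving many unlinkable ("incomparable") public keys, in the spirit of Waters et al. [20] but not a faithful or secure implementation of it: the ElGamal-style construction, the modulus and all parameter choices below are our own illustration only.

```python
import secrets

P = 2 ** 127 - 1                       # illustrative modulus (a Mersenne prime)
a = secrets.randbelow(P - 2) + 1       # the single private key

def new_public_key():
    """Fresh public key for one recipient/session: a random base h and
    h^a; different keys look unrelated but share the private key a."""
    h = secrets.randbelow(P - 2) + 2
    return (h, pow(h, a, P))

def encrypt(pub, m):
    """ElGamal-style encryption of integer m < P under one public key."""
    h, ha = pub
    r = secrets.randbelow(P - 2) + 1
    return (pow(h, r, P), (m * pow(ha, r, P)) % P)

def decrypt(c, priv=a):
    """The holder of the single private key can decrypt under any of its
    public keys: m = c2 / c1^a (mod P)."""
    c1, c2 = c
    return (c2 * pow(pow(c1, priv, P), -1, P)) % P

pub1, pub2 = new_public_key(), new_public_key()   # unlinkable to observers
msg = 123456789
assert decrypt(encrypt(pub1, msg)) == msg
assert decrypt(encrypt(pub2, msg)) == msg         # same private key decrypts both
```

The point the sketch illustrates is the key-management property used in the paper: a sender can hand a distinct public key to each partner, yet keep only one private key for decryption.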
The generation of the public keys is done according to the following algorithm:
1. Select a cyclic group G of order n;
2. Select a subgroup of G of order m, where m