Advances in Enterprise Information Technology Security
Djamel Khadraoui, Public Research Centre Henri Tudor, Luxembourg
Francine Herrmann, Paul Verlaine University, Metz, France
Information Science Reference • Hershey • New York
Acquisitions Editor: Kristin Klinger
Development Editor: Kristin Roth
Senior Managing Editor: Jennifer Neidig
Managing Editor: Sara Reed
Assistant Managing Editor: Sharon Berger
Copy Editor: Becky Shore
Typesetter: Jamie Snavely
Cover Design: Lisa Tosheff
Printed at: Yurchak Printing Inc.
Published in the United States of America by Information Science Reference (an imprint of IGI Global), 701 E. Chocolate Avenue, Suite 200, Hershey PA 17033. Tel: 717-533-8845, Fax: 717-533-8661, E-mail: [email protected], Web site: http://www.igi-pub.com/reference

and in the United Kingdom by Information Science Reference (an imprint of IGI Global), 3 Henrietta Street, Covent Garden, London WC2E 8LU. Tel: 44 20 7240 0856, Fax: 44 20 7379 0609, Web site: http://www.eurospanonline.com

Copyright © 2007 by IGI Global. All rights reserved. No part of this publication may be reproduced, stored or distributed in any form or by any means, electronic or mechanical, including photocopying, without written permission from the publisher. Product or company names used in this set are for identification purposes only. Inclusion of the names of the products or companies does not indicate a claim of ownership by IGI Global of the trademark or registered trademark.

Library of Congress Cataloging-in-Publication Data
Advances in enterprise information technology security / Djamel Khadraoui and Francine Herrmann, editors.
p. cm.
Summary: "This book provides a broad working knowledge of all the major security issues affecting today's enterprise IT activities. Multiple techniques, strategies, and applications are thoroughly examined, presenting the tools to address opportunities in the field. It is an all-in-one reference for IT managers, network administrators, researchers, and students"--Provided by publisher.
Includes bibliographical references and index.
ISBN 978-1-59904-090-5 (hardcover) -- ISBN 978-1-59904-092-9 (ebook)
1. Business enterprises--Computer networks--Security measures. 2. Information technology--Security measures. 3. Computer security. 4. Data protection. I. Khadraoui, Djamel. II. Herrmann, Francine.
HF5548.37.A38 2007
005.8--dc22
2007007267

British Cataloguing in Publication Data
A Cataloguing in Publication record for this book is available from the British Library.

All work contributed to this book set is new, previously-unpublished material. The views expressed in this book are those of the authors, but not necessarily of the publisher.
Table of Contents
Foreword . ............................................................................................................................................ xii Preface . ............................................................................................................................................... xiv Acknowledgment . ............................................................................................................................xviii
Section I Security Architectures Chapter I Security Architectures / Sophie Gastellier-Prevost and Maryline Laurent-Maknavicius........................ 1 Chapter II Security in GRID Computing / Eric Garcia, Hervé Guyennet, Fabien Hantz, and Jean-Christophe Lapayre....................................................................................................................... 20 Chapter III Security of Symbian Based Mobile Devices / Göran Pulkkis, Kay J. Grahn, Jonny Karlsson, and Nhat Dai Tran...................................................................................................... 31 Chapter IV Wireless Local Area Network Security / Michéle Germain, Alexis Ferrero, and Jouni Karvo............. 75 Chapter V Interoperability Among Intrusion Detection Systems / Mário M. Freire.............................................. 92
Section II Trust, Privacy, and Authorization Chapter VI Security in E-Health Applications / Snezana Sucurovic...................................................................... 104
Chapter VII Interactive Access Control and Trust Negotiation for Autonomic Communication / Hristo Koshutanski and Fabio Massacci............................................................................................. 120 Chapter VIII Delegation Services: A Step Beyond Authorization / Isaac Agudo, Javier Lopez, and Jose A. Montenegro.............................................................................................................................. 149 Chapter IX From DRM to Enterprise Rights and Policy Management: Challenges and Opportunities/ Jean-Henry Morin and Michel Pawlak................................................................................................ 169
Section III Threat Chapter X Limitations of Current Antivirus Scanning Technologies / Srinivas Mukkamala, Antonins Sulaiman, Patrick Chavez, and Andrew H. Sung................................................................... 190 Chapter XI Phishing: The New Security Threat on the Internet / Indranil Bose.................................................... 210 Chapter XII Phishing Attacks and Countermeasures: Implications for Enterprise Information Security / Bogdan Hoanca and Kenrick Mock.................... 221 Chapter XIII Prevention and Handling of Malicious Code / Halim Khelafa............................................................ 239
Section IV Risk Management Chapter XIV Security Risk Management Methodologies / Francine Herrmann and Djamel Khadraoui............... 261 Chapter XV Information System Life Cycles and Security / Albin Zuccato............................................................ 274 Chapter XVI Software Specification and Attack Languages / Mohammed Hussein, Mohammed Raihan, and Mohammed Zulkernine................................................................................................................. 285
Chapter XVII Dynamic Management of Security Constraints in Advanced Enterprises/ R. Manjunath................... 302 Chapter XVIII Assessing Enterprise Risk Level: The CORAS Approach / Fredrik Vraalsen, Tobias Mahler, Mass Soldal Lund, Ida Hogganvik, Folker den Braber, and Ketil Stølen............................................ 311 Compilation of References ............................................................................................................... 334 About the Contributors .................................................................................................................... 355 Index.................................................................................................................................................... 363
Detailed Table of Contents
Foreword . ............................................................................................................................................ xii Preface . ............................................................................................................................................... xiv Acknowledgment . ............................................................................................................................xviii
Section I Security Architectures Chapter I Security Architectures / Sophie Gastellier-Prevost and Maryline Laurent-Maknavicius........................ 1 This chapter proposes three different realistic security-level network architectures that may be currently deployed within companies. For more realistic analysis and illustration, two examples of companies with different size and profile are given. Advice, explanations, and guidelines are provided in this chapter so that readers are able to adapt those architectures to their own companies and to their security and network needs. Chapter II Security in GRID Computing / Eric Garcia, Hervé Guyennet, Fabien Hantz, and Jean-Christophe Lapayre....................................................................................................................... 20 GRID computing implies sharing heterogeneous resources, located in different places, belonging to different administrative domains, over a heterogeneous network. There is a great similarity between GRID security and classical network security. Moreover, additional requirements specific to grid environments exist. This chapter is dedicated to these security requirements, detailing various secured middleware systems. Finally, the chapter gives some examples of companies using such systems.
Chapter III Security of Symbian Based Mobile Devices / Göran Pulkkis, Kay J. Grahn, Jonny Karlsson, and Nhat Dai Tran...................................................................................................... 31 Fundamental security requirements of a Symbian-based mobile device, such as physical protection, device access control, storage protection, network access control, network service access control, and network connection security, are described in detail in this chapter. Symbian security is also evaluated by discussing its weaknesses and by comparing it to other mobile operating systems. Chapter IV Wireless Local Area Network Security / Michéle Germain, Alexis Ferrero, and Jouni Karvo............. 75 This chapter describes in its first part the security features of IEEE 802.11 wireless local area networks and shows their weaknesses. A practical guideline for choosing the preferred WLAN configuration is given. The second part of this chapter is dedicated to the wireless radio network, presenting the associated threats with some practical defence strategies. Chapter V Interoperability Among Intrusion Detection Systems / Mário M. Freire.............................................. 92 This chapter first presents a classification and a brief description of intrusion detection systems, taking into account several issues such as information sources, analysis of intrusion detection systems, response options for intrusion detection systems, analysis timing, control strategy, and architecture of intrusion detection systems. The problem of information exchange among intrusion detection systems, the intrusion detection exchange protocol, and a format for the exchange of information among intrusion detection systems are discussed. The lack of a format for the answers or countermeasures interchanged between the components of intrusion detection systems is also discussed, as well as some future trends in this area.
Section II Trust, Privacy, and Authorization Chapter VI Security in E-Health Applications / Snezana Sucurovic...................................................................... 104 This chapter presents security solutions in integrated patient-centric Web-based health-care information systems, also known as electronic health-care record (EHCR). Security solutions in several projects have been presented and in particular a solution for EHCR integration from scratch. Implementations of Public Key Infrastructure, privilege management infrastructure, role-based access control, and rule-based access control in EHCR have been presented. Regarding EHCR integration from scratch, architecture and security have been proposed and discussed.
Chapter VII Interactive Access Control and Trust Negotiation for Autonomic Communication / Hristo Koshutanski and Fabio Massacci............................................................................................. 120 This chapter proposes a novel interactive access control model: servers should be able to interact with clients, asking for missing or excess credentials, whereas clients may decide whether to comply with the requested credentials. The process iterates until a final agreement is reached or denied. Further, the chapter shows how to model a trust negotiation protocol that allows two entities in a network to automatically negotiate requirements needed to access a service. A practical implementation of the access control model is given using the X.509 and SAML standards. Chapter VIII Delegation Services: A Step Beyond Authorization / Isaac Agudo, Javier Lopez, and Jose A. Montenegro.............................................................................................................................. 149 Because delegation is a concept derived from authorization, this chapter aims to put into perspective the delegation implications, issues, and concepts that are derived from a selected group of authorization schemes that have been proposed during recent years as solutions to the distributed authorization problem. It also analyses some of the most interesting federation solutions that have been developed by different consortiums or companies, representing both educational and enterprise points of view. The final part of this chapter focuses on different formalisms specifically developed to support delegation services and which can be integrated into a multiplicity of applications. Chapter IX From DRM to Enterprise Rights and Policy Management: Challenges and Opportunities / Jean-Henry Morin and Michel Pawlak................................................................................................ 169 This chapter introduces digital rights management (DRM) in the perspective of digital policy management (DPM), focusing on the enterprise and corporate sector. DRM has become a domain in full expansion with many stakes, which are by far not only technological; they also touch legal aspects as well as business and economic ones. Information is a strategic resource and as such requires a responsible approach to its management, almost to the extent of being patrimonial. This chapter mainly focuses on the latter, introducing DRM concepts, standards, and the underlying technologies from their origins to their most recent developments in order to assess the challenges and opportunities of enterprise digital policy management.
Section III Threat Chapter X Limitations of Current Antivirus Scanning Technologies / Srinivas Mukkamala, Antonins Sulaiman, Patrick Chavez, and Andrew H. Sung................................................................... 190 This chapter describes common attacks on antivirus tools and a few obfuscation techniques applied to recent viruses that were used to thwart commercial-grade antivirus tools. Similarities among different malware and their variants are also presented in this chapter. The signature used in this method is the percentage of application programming interfaces (APIs) appearing in the malware type. Chapter XI Phishing: The New Security Threat on the Internet / Indranil Bose.................................................... 210 The various ways in which phishing can take place are described in this chapter. This is followed by a description of key strategies that can be adopted for the protection of end users and organizations. The end user protection strategies include desktop protection agents, password management tools, secure e-mail, simple and trusted browser settings, and digital signatures. Some of the commercially available and popular antiphishing products are also described in this chapter. Chapter XII Phishing Attacks and Countermeasures: Implications for Enterprise Information Security / Bogdan Hoanca and Kenrick Mock.................... 221 This chapter describes the threat of phishing, in which attackers generally send a fraudulent e-mail to their victims in an attempt to trick them into revealing private information. This chapter starts by defining the phishing threat and its impact on the financial industry. Next, it reviews different types of hardware and software attacks and their countermeasures. Finally, it discusses policies that can protect an organization against phishing attacks. An understanding of how phishers elicit confidential information, along with technology and policy-based countermeasures, will empower managers and end users to better protect their information systems. Chapter XIII Prevention and Handling of Malicious Code / Halim Khelafa............................................................ 239 This chapter provides a wide spectrum of end users with a complete reference on malicious code, or malware. End users include researchers and students, as well as information technology and security professionals in their daily activities. First, the author provides an overview of malicious code: its past, present, and future. Second, he presents methodologies, guidelines, and recommendations on how an organization can enhance its prevention of malicious code, how it should respond to the occurrence of a malware incident, and how it should learn from such an incident to be better prepared in the future. Finally, the author addresses the state of current research as well as future trends of malicious code and the new and future means of malware prevention.
Section IV Risk Management Chapter XIV Security Risk Management Methodologies / Francine Herrmann and Djamel Khadraoui............... 261 This chapter provides a wide spectrum of existing security risk management methodologies. The chapter starts by presenting the concept and the objectives of enterprise risk management. Some existing security risk management methods are then presented, showing the way to enhance their application to enterprise needs. Chapter XV Information System Life Cycles and Security / Albin Zuccato............................................................ 274 This chapter presents a system life cycle and suggests which aspects of security should be covered at which life-cycle stage of the system. Based on this, a process framework is presented that, due to its iterativity and level of detail, accommodates the needs of life-cycle-oriented security management. Chapter XVI Software Specification and Attack Languages / Mohammed Hussein, Mohammed Raihan, and Mohammed Zulkernine................................................................................................................. 285 This chapter presents a study on the classification of software specification languages and discusses the current state of the art regarding attack languages. Specification languages are categorized based on their features and their main purposes. A detailed comparison among attack languages is provided. Example extensions of two software specification languages to include some features of the attack languages are shown. The authors believe that extending certain types of software specification languages to express security aspects like attack descriptions is a major step towards unifying software and security engineering. Chapter XVII Dynamic Management of Security Constraints in Advanced Enterprises / R. Manjunath................... 302 In this chapter, the security associated with the transfer of content is quantified and treated as a quality-of-service parameter. The user is free to select the parameter depending upon the content being transferred. As dictated by demanding situations, a minimum agreed security would be assured for the data at the expense of the appropriate resources over the network. Chapter XVIII Assessing Enterprise Risk Level: The CORAS Approach / Fredrik Vraalsen, Tobias Mahler, Mass Soldal Lund, Ida Hogganvik, Folker den Braber, and Ketil Stølen............................................ 311 This chapter gives an introduction to the CORAS approach for model-based security risk analysis. It presents a guided walkthrough of the CORAS risk-analysis process based on examples from risk analysis
of security, trust, and legal issues in a collaborative engineering virtual organisation. CORAS makes use of structured brainstorming to identify risks and treatments. To get a good picture of the risks, it is important to involve people with different insight into the target being analysed, such as end users, developers and managers. One challenge in this setting is to bridge the communication gap between the participants, who typically have widely different backgrounds and expertise. The use of graphical models supports communication and understanding between these participants. The CORAS graphical language for threat modelling has been developed especially with this goal in mind. Compilation of References ............................................................................................................... 334 About the Contributors .................................................................................................................... 355 Index.................................................................................................................................................... 363
Foreword
This excellent reference source offers a fascinating new insight into modern issues of security. It brings together contributions from an international group of active researchers who, between them, are addressing a number of the current key challenges in providing enterprise-wide information technology solutions. The general area of security has long been acknowledged as vitally important in enterprise systems design because of the key role it has in protecting the resources belonging to the organization and in ensuring that the organization meets its objectives.

Historically, the emphasis has been on protecting complete systems and hardening the communications between trusted systems against external attack. Architects have concentrated on creating an encapsulation boundary supported by a trusted computing base able to control the access to all the available resources. However, the themes selected for this book illustrate a change of emphasis that has been in progress over recent years. There has been a steady movement during this time towards finer-grain control, with the introduction of progressively more subtle distinctions of role and responsibility and more precise characterization of target resources. The controls applied have also become more dynamic, with increasing emphasis on delegation of responsibility and change of organizational structure, and the need for powerful trust models to support them. At the same time there has been a blurring of the traditional boundaries, because of the need for controlled cooperation and limited sharing of resources. The protection is in terms of smaller and more specialized resource units, operated in potentially more hostile environments.

Two examples may help to illustrate this trend. On the one hand, there is a need to protect information and privileges embodied in mobile devices. A mobile phone or PDA may contain information or access tokens of considerable sensitivity and importance, and the impact of loss or theft of the device needs to be bounded by system support that resists tampering and illicit use. On the other hand, digital rights management focuses on the protection against unauthorized use of items of information, ranging from software to entertainment media, which need to be subject to access controls even when resident within the systems managed by a potential attacker. Both these situations challenge the traditional complete system view of security provision.

These examples illustrate that the emphasis is on flexibility of the organizational infrastructure and on the introduction of new styles of information use. However, this is not primarily a book about mechanisms; it is about enterprise concerns and the interplay that is required between enterprise goals and security solutions. Even a glance at the contents makes this clear. The emphasis is on architecture and the interplay of trust, threat, and risk analysis. Illustrated by practical examples and concerns, the discussion covers the subtle relationship between the exploitation of new opportunities and the exposure to new threats. Strong countermeasures that rule out otherwise attractive organizational structures represent a lost opportunity, but business decisions that change the underlying assumptions in a way that invalidates the trust and risk analysis may threaten the viability of the organization in a fundamental way.
Nothing illustrates this better than the growing importance of social engineering, or phishing, styles of attack. The attacks are based on abuse of the social relationship that must be developed between an organization and its clients, and on the ignorance of most users of the way authentication works and of the dangerous side effects of communicating with untrusted systems. Countermeasures range from education and management actions to the development of authentication techniques suitable for application between mutually suspicious systems.

One of the messages to be taken from these essays is that security must be a major consideration at all stages in the planning and development of information technology solutions. Although this is a view that experts have been promoting for many years, it is still not universally adopted. Yet we know that retrofitting security to partially completed designs is much more expensive and is often ineffectual. Risk analysis needs to start during the formulation of a business process, and the enterprise needs a well-formulated trust model as an accepted part of its organizational structure. Only in this way can really well-informed technical choices be made about the information technology infrastructure needed to support any given business initiative. The stronger integration of business and infrastructure concerns also allows timely feedback on any social or organizational changes required by the adoption of particular technical solutions, thus reducing the risk of future social attacks.

For these reasons, the section on risk management and its integration with the software lifecycle is a fitting culmination of the themes presented here. It is the endpoint of a journey from technical architectures, through trust models and threat awareness, to intelligent control of risks and security responses to them. I hope this book will stimulate a greater awareness of the whole range of security issues facing the modern enterprise in its adoption of information technology, and that it will help to convince the framers of organizational policy of the importance of addressing these issues throughout the lifecycle of new business solutions, from their inception through deployment and into service. We all know that reduction of risk brings competitive advantage, and this book shows some of the ways in which suitable security approaches can do so.

Peter F. Linington
Professor of Computer Communication
University of Kent, UK
Peter Linington is a professor of computer communication and head of the Networks and Distributed Systems Research Group at the University of Kent. His current work focuses on distributed enterprise modeling, the checking of enterprise pattern application, and policy-based management. He has been heavily involved in the development of the ISO standard architecture for open distributed processing, particularly the enterprise language. His recent work in this area has focused on the monitoring of contractual behaviour in e-business systems. He has worked on the use of multiviewpoint approaches for expressing distribution architectures, and has collaborated regularly with colleagues on the formal basis of such systems. He has been an advocate of model-driven approaches since before they became fashionable, and experimented in the Permabase project with performance prediction from models. He is currently working on the application of model-driven techniques to security problems. He has performed consultancy for BT on the software engineering aspects of distribution architectures. He has recently been awarded an IBM Faculty Award to expand work on the enhancement of the Eclipse modelling framework with support for OCL constraint checking.
Preface
Over the last decade, information and computer security has been moving from the confines of academia to the concerns of the enterprise. As populations become more and more comfortable with the extensive use of networks and the Internet, as our reliance on knowledge-intensive technology grows, and as progress in computer software and wireless telecommunications increases accessibility, there will be a higher risk of unmanageable failure in enterprise systems. In fact, today's information systems are widely spread and connected over networks, but also heterogeneous, which involves more complexity. This situation has a dramatic drawback regarding threats, which are now occurring on such networks. Indeed, the drawback of being open and interconnected is that systems are more and more vulnerable to a wide range of threats and attacks. These attacks have appeared during the last few years and are growing continuously with the emergence of IP and of all the new technologies exploiting it (SIP vulnerabilities, phishing attacks, etc.), and also due to the threats exposing operators (DDoS) and end users (phishing attacks, worms, etc.). The Slammer and SoBig attacks are some of the examples that were widely covered in the media and broadcast into the average citizen's home.

From the enterprise perspective, information about customers, competitors, products, and processes is a key issue for its success. With the increasing importance of information technology for production, providing and maintaining consistent security of this information on servers and across networks becomes one of the major enterprise business activities. This requires high flexibility of the organizational infrastructure and the introduction of new ways of using information. In such a complex world, there is a strong need for security to ensure system protection in order to keep the enterprise activities operational. This book gathers essays that will stimulate a greater awareness of the whole range of security issues facing the modern enterprise. It mainly shows how important a strong interaction between enterprise goals and security solutions is.
Objectives

It is the purpose of this book to provide a practical survey of the principles and practice of IT security with respect to enterprise business systems. It also offers a broad working knowledge of all the major security issues affecting today's enterprise IT activities, giving readers the tools to address opportunities in the field. This is mainly because security gives the enterprise a high potential to provide trusted services to its customers. This book also shows readers how to apply a number of security techniques to the enterprise environment with its complex and various applications. It covers the many domains related to enterprise security, including communication networks and
multimedia, applications and operating system software, social engineering and styles of attacks, privacy and authorisation, and enterprise security risk management. Rather than focusing on a specific approach or methodology, this book gathers a collection of papers written by many authors.
Intended Audience

Aimed at the information technology practitioner, the book is valuable to CIOs, operations managers, network managers, database managers, software architects, application integrators, programmers, and analysts. The book is also suitable for graduate, master, and postgraduate courses in computer science, as well as for computers-in-business courses.
Structure Of The Book

The book chapters are organized in logical groupings that correspond to appropriate levels of enterprise IT security. Each section of the book is devoted to carefully chosen papers, some of which reflect individual authors' experience. The strength of this approach is that readers benefit from a rich diversity of viewpoints and deep subject matter knowledge. The book is organized into eighteen chapters. A brief description of each of the chapters follows:

Chapter I proposes three different realistic security-level network architectures that may be currently deployed within companies. For more realistic analysis and illustration, two examples of companies with different size and profile are given. Advice, explanations, and guidelines are provided in this chapter so that readers are able to adapt those architectures to their own companies and to their security and network needs.

Chapter II is dedicated to the security requirements of GRID computing, detailing various secured middleware systems. GRID computing implies sharing heterogeneous resources, located in different places and belonging to different administrative domains, over a heterogeneous network. The chapter shows that there is a great similarity between GRID security and classical network security; moreover, additional requirements specific to grid environments exist. At the end, the chapter gives some examples of companies using such systems.

Chapter III describes in detail the fundamental security requirements of a Symbian-based mobile device, such as physical protection, device access control, storage protection, network access control, network service access control, and network connection security. Symbian security is also evaluated by discussing its weaknesses and by comparing it to other mobile operating systems.

Chapter IV describes in its first part the security features of IEEE 802.11 wireless local area networks, and shows their weaknesses. A practical guideline for choosing the preferred WLAN configuration is given. The second part of this chapter is dedicated to the wireless radio network, presenting the associated threats with some practical defence strategies.

Chapter V first presents a classification and a brief description of intrusion detection systems, taking into account several issues such as information sources, analysis of intrusion detection systems, response options for intrusion detection systems, analysis timing, control strategy, and architecture of intrusion detection systems. It then discusses the problem of information exchange among intrusion detection systems, addressing the intrusion detection exchange protocol and a format for the exchange of information among intrusion detection systems. The lack of a format for the answers or countermeasures
interchanged between the components of intrusion detection systems is also discussed, as well as some future trends in this area.

Chapter VI presents security solutions in integrated patient-centric Web-based healthcare information systems, also known as the electronic healthcare record (EHCR). Security solutions in several projects are presented, and in particular a solution for EHCR integration from scratch. Implementations of public key infrastructure, privilege management infrastructure, role-based access control, and rule-based access control in EHCR are presented. Regarding EHCR integration from scratch, architecture and security have been proposed and discussed.

Chapter VII proposes a novel interactive access control model: servers should be able to interact with clients, asking for missing or excess credentials, whereas clients may decide whether to comply with the requested credentials. The process iterates until a final agreement is reached or denied. Further, the chapter shows how to model a trust negotiation protocol that allows two entities in a network to automatically negotiate requirements needed to access a service. A practical implementation of the access control model is given using the X.509 and SAML standards.

Chapter VIII aims to put into perspective the delegation implications, issues, and concepts that are derived from a selected group of authorization schemes which have been proposed during recent years as solutions to the distributed authorization problem. It also analyses some of the most interesting federation solutions that have been developed by different consortiums or companies, representing both educational and enterprise points of view. The final part of this chapter focuses on different formalisms specifically developed to support delegation services and which can be integrated into a multiplicity of applications.

Chapter IX introduces digital rights management (DRM) in the perspective of digital policy management (DPM), focusing on the enterprise and corporate sector. DRM has become a domain in full expansion with many stakes, which are by far not only technological; they also touch legal aspects as well as business and economic ones. Information is a strategic resource and as such requires a responsible approach to its management, almost to the extent of being patrimonial. This chapter mainly focuses on the latter, introducing DRM concepts, standards, and the underlying technologies from their origins to their most recent developments in order to assess the challenges and opportunities of enterprise digital policy management.

Chapter X describes common attacks on antivirus tools and a few obfuscation techniques applied to recent viruses that were used to thwart commercial-grade antivirus tools. Similarities among different malware and their variants are also presented in this chapter. The signature used in this method is the percentage of APIs (application programming interfaces) appearing in the malware type.

Chapter XI describes the various ways in which phishing can take place. This is followed by a description of key strategies that can be adopted for the protection of end users and organizations. The end user protection strategies include desktop protection agents, password management tools, secure e-mail, simple and trusted browser settings, and digital signatures. Some of the commercially available and popular antiphishing products are also described in this chapter.
Chapter XII describes the threat of phishing, in which attackers generally send a fraudulent e-mail to their victims in an attempt to trick them into revealing private information. This chapter starts by defining the phishing threat and its impact on the financial industry. Next, it reviews different types of hardware and software attacks and their countermeasures. Finally, it discusses policies that can protect an organization against phishing attacks. An understanding of how phishers elicit confidential information, along with technology and policy-based countermeasures, will empower managers and end users to better protect their information systems.
Chapter XIII provides a wide spectrum of end users with a complete reference on malicious code, or malware. End users include researchers and students, as well as information technology and security professionals in their daily activities. First, the author provides an overview of malicious code: its past, present, and future. Second, he presents methodologies, guidelines, and recommendations on how an organization can enhance its prevention of malicious code, how it should respond to the occurrence of a malware incident, and how it should learn from such an incident to be better prepared in the future. Finally, the author addresses the state of current research as well as future trends of malicious code and the new and future means of malware prevention.

Chapter XIV provides a wide spectrum of existing security risk management methodologies. The chapter starts by presenting the concept and the objectives of enterprise risk management. Some existing security risk management methods are then presented, showing the way to enhance their application to enterprise needs.

Chapter XV presents a system life cycle and suggests which aspects of security should be covered at which life-cycle stage of the system. Based on this, a process framework is presented that, due to its iterativity and level of detail, accommodates the needs of life-cycle-oriented security management.

Chapter XVI presents a study on the classification of software specification languages and discusses the current state of the art regarding attack languages. Specification languages are categorized based on their features and their main purposes. A detailed comparison among attack languages is provided. Example extensions of two software specification languages to include some features of the attack languages are shown. The authors believe that extending certain types of software specification languages to express security aspects like attack descriptions is a major step towards unifying software and security engineering.

Chapter XVII quantifies the security associated with the transfer of content and treats it as a quality-of-service parameter. The user is free to select the parameter depending upon the content being transferred. As dictated by demanding situations, a minimum agreed security would be assured for the data at the expense of the appropriate resources over the network.

Chapter XVIII gives an introduction to the CORAS approach for model-based security risk analysis. It presents a guided walkthrough of the CORAS risk-analysis process based on examples from risk analysis of security, trust, and legal issues in a collaborative engineering virtual organisation. CORAS makes use of structured brainstorming to identify risks and treatments. To get a good picture of the risks, it is important to involve people with different insight into the target being analysed, such as end users, developers, and managers. One challenge in this setting is to bridge the communication gap between the participants, who typically have widely different backgrounds and expertise. The use of graphical models supports communication and understanding between these participants. The CORAS graphical language for threat modelling has been developed especially with this goal in mind.
Acknowledgment
The editors would like to acknowledge the help of all involved in the collation and review process of the book, without whose support the project could not have been satisfactorily completed. A further special note of thanks goes also to all the staff at IGI Global, whose contributions throughout the whole process, from inception of the initial idea to final publication, have been invaluable. Deep appreciation and gratitude are due to Paul Verlaine University (Metz, France) and the CRP Henri Tudor (Luxembourg) for ongoing sponsorship in terms of generous allocation of on-line and off-line Internet, hardware and software resources, and other editorial support services for coordination of this year-long project.

Most of the authors of chapters included in this book also served as referees for articles written by other authors. Thanks go to all those who provided constructive and comprehensive reviews. However, some of the reviewers must be mentioned as their reviews set the benchmark. Reviewers who provided the most comprehensive, critical, and constructive comments include: Peter Linington from the University of Kent, Jean-Henry Morin from the University of Geneva (Switzerland), Albin Zuccato from Karlstad University (Sweden), Muhammad Zulkernine from Queen's University (Canada), Maryline Laurent-Maknavicius of ENST Paris, Fabio Massacci of the University of Trento (Italy), Srinivas Mukkamala of New Mexico Tech, Fredrik Vraalsen from SINTEF (Norway), Halim M. Khelalfa of the University of Wollongong in Dubai, Bogdan Hoanca of the University of Alaska Anchorage, and Hervé Guyennet of the University of Franche-Comté (France). The Department of Computer Science of Paul Verlaine University (Metz) is acknowledged for its support and the archival server space reserved for the review process.

Special thanks also go to the publishing team at IGI Global, in particular to Jan Travers, who continuously prodded via e-mail to keep the project on schedule, and to Mehdi Khosrow-Pour, whose enthusiasm motivated us to initially accept his invitation to take on this project.

In closing, we wish to thank all of the authors for their insights and excellent contributions to this book. We also want to thank all of the people who assisted us in the reviewing process. Finally, we want to thank our families (husband, wife, children, and parents) for their support throughout this project.

Djamel Khadraoui, PhD, and Francine Herrmann, PhD
April 2007
Section I
Security Architectures
Chapter I
Security Architectures

Sophie Gastellier-Prevost, Institut National des Télécommunications, France
Maryline Laurent-Maknavicius, Institut National des Télécommunications, France
Abstract

Within a more and more complex environment, where connectivity, reactivity, and availability are mandatory, companies must be "electronically accessible and visible" (i.e., connection to the Web, e-mail exchanges, data sharing with partners, etc.). As such, companies have to protect their network and, given the broad range of security solutions on the IT security market, the only efficient way for them is to design a global secured architecture. After giving the reader all the necessary materials and explaining classical security and service needs, this chapter proposes three different realistic security-level architectures that may be currently deployed within companies. For more realistic analysis and illustration, two examples of companies with different size and profile are given. Advice, explanations, and guidelines are provided in this chapter so that readers are able to adapt those architectures to their own companies and to their security and network needs.
Introduction

Today, with the increasing number of services provided by companies to their own internal users (i.e., employees), end customers, or partners, networks are increasing in complexity, hosting more and more elements like servers and proxies. Facing a competitive business world, companies have no choice but to expect their services to be fully available and reliable. It is well known that service disruptions might result in the loss of reactivity, performance, and competitiveness, and finally in a probably decreasing number of customers and a loss of turnover. To offer the mandatory reactivity and availability in this complex environment, the company's network elements are required to be robust against malicious behaviours that usually target the deterioration, alteration, or theft of information. As such, strict security constraints must be defined for
each network element, leading to the introduction of security elements. For an efficient security introduction into its network, a company must think about its global secured architecture. Otherwise, the resulting security policy might be weak, as part of the network may be perfectly secured while a security hole remains in another one.

Defining a "single" and "miracle" security architecture is hardly ever possible. Therefore this chapter expects to give companies an overall idea of what a secured architecture can look like. In order to do that, this chapter focuses on two types of companies, A and B, and for each of them, three types of architectures are detailed, matching different security policies. Note that those three architecture families result from a number of studies performed on realistic architectures that are currently being deployed within companies (whatever their size). For readers to adapt the described architectures to their own needs, this chapter should be read as guidelines for designing an appropriate security and functional architecture. Obviously, the presented architectures are not exhaustive and correspond to various budgets and security levels. This chapter explains the positioning of each network and security element with many details and explanations, so that companies are able to adapt one of those architectures to their own needs. Just before getting to the very heart of the matter, the authors would like to draw your attention to the fact that a company introducing security elements step by step must always keep in mind the overall architecture, and be very careful during all deployment steps because of probable weak points until the whole solution has been deployed.

Prior to describing security architectures, the chapter introduces all the necessary materials for the readers to easily understand the stakes behind the positioning of elements within the architectures. That includes system and network elements, but also authentication tools, VPN and data security tools, and filtering elements. When defining the overall network architecture within a company, the security constraints should be considered as well as the needs and service constraints of the company. All those elements will be detailed in the second part of this chapter, and in order to make explanations easier, two company types will be chosen for further detailed architectures. Finally, the next three parts of the chapter will focus on the three families of architectures, and for each of them a number of illustrations are proposed to support the architecture explanations.

The first designed architecture is based on only one router that may be increased with some security functions. This is a low-budget architecture in which all the security leans on the integrity of the router. The second architecture is a more complex one, equipped with one router and one firewall. The security of this architecture is higher than that of the first one because a successful intrusion into the router may only affect network elements around the router, and not elements behind the firewall benefiting from its protection. The third architecture requires two firewalls and possibly a router. As the control operated by firewalls (and proxies) is much deeper than that operated by routers, intrusion attempts are more easily detected and blocked, so the company's network is less vulnerable. Moreover, the integrity relies on two filtering equipments placed one after the other and is stronger than what is offered in the first architecture.
Security Basis

This section briefly introduces all the necessary materials for the readers to easily understand the stakes behind the positioning of elements within the architectures.
System and Network Elements

Private networks are based on a number of servers and network-level equipments, including the following:

• Dynamic host configuration protocol (DHCP) server dynamically assigns an IP address to the requesting private network equipment, usually after booting.
• Domain name system (DNS) server mainly translates a domain name (URL) into an IP address, usually to enable browsers to reach a Web server only known by its URL.
• Lightweight directory access protocol (LDAP) server is an online directory that usually serves to manage and publish employees' administrative data like name, function, phone number, and so forth.
• Network address translation (NAT) performs translation between private and public addresses. It mainly serves to enable many private clients to communicate over the public network at the same time with a single public IP address, but also to make a private server directly accessible from the public network.
• E-mail server supports electronic mailing. A private e-mail client needing to send an e-mail requests the server, under the simple mail transfer protocol (SMTP), and if necessary, the latter relays the request to the external destination e-mail server, also using SMTP; for getting its received e-mails from the server, the client sends a POP or IMAP request to the server. The e-mail server implements two fundamental functions, e-mail forwarding/receiving and storing, which are usually separated on two distinct equipments for security reasons. The sensitive storing server, next referred to as "e-mail", must be protected against e-mail disclosures and removals. The other, named "e-mail proxy", is in charge of e-mail exchanges with the public network, and may be increased with anti-virus and antispam systems to detect viruses within e-mail attachments, or to detect an e-mail as spam. E-mails can also be encrypted and signed with the secure/multipurpose internet mail extensions (S/MIME) or pretty good privacy (PGP) protocols.
• Anti-virus protects the network (files, operating systems…) against viruses. It may be dedicated to the e-mail service or may be common to all the private network's hosts, which should contact the anti-virus server for updating their virus signature bases.
• Internet/Intranet/Extranet Web servers enable employees to access shared resources under hypertext transfer protocol (HTTP) requests from their own browser. Resources may be restricted to some persons like the company's employees (intranet server) or external partners like customers (extranet server), or may be unrestricted, so it is known as the public server.
• Access points (AP) are equipments giving IEEE 802.11 wireless equipments access to the wired network.
• Virtual LANs (VLAN) are designed to virtually separate flows over the same physical network, so that direct communications between equipments from different VLANs can be restricted and required to go through a router for filtering purposes.
• Network access server (NAS) / Broadband access server (BAS) are gateways between the switched phone network and an IP-based network. The NAS is used by ISPs to give "classical" (i.e., 56K modem, etc.) PSTN/ISDN dial-up users access, while the BAS is used for xDSL access.
• Intrusion detection system (IDS) / Intrusion prevention system (IPS) are used to detect intrusions based on known intrusion scenario signatures and then to react by dynamically denying the suspected flow. IDS/IPS systems may be either network-oriented (NIDS), in order to protect a LAN subnet, or host-oriented (HIDS), in order to protect a machine.
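As a concrete illustration of the e-mail bullet above, the following minimal sketch shows the two client-side flows it describes: SMTP submission to the relaying "e-mail proxy" and POP retrieval from the internal storing "e-mail" server. The host names, ports, and credentials are illustrative assumptions, not values taken from the chapter; a real deployment would normally add TLS (STARTTLS/POP3S) and stronger authentication.

```python
# Client-side sketch of the two mail flows, using only the Python standard library.
# Host names and credentials are placeholders.
import smtplib
import poplib
from email.message import EmailMessage

SMTP_RELAY = "mail-proxy.example.com"   # the relaying "e-mail proxy"
POP_SERVER = "mail-store.example.com"   # the internal storing "e-mail" server

def send_mail(sender: str, recipient: str, subject: str, body: str) -> None:
    """Submit a message to the company's SMTP relay, which forwards it if external."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    with smtplib.SMTP(SMTP_RELAY, 25, timeout=10) as smtp:
        smtp.send_message(msg)

def fetch_mail(user: str, password: str) -> list[bytes]:
    """Retrieve stored messages from the internal mail store over POP3."""
    pop = poplib.POP3(POP_SERVER, 110, timeout=10)
    try:
        pop.user(user)
        pop.pass_(password)
        count, _ = pop.stat()
        return [b"\n".join(pop.retr(i + 1)[1]) for i in range(count)]
    finally:
        pop.quit()
```

Keeping the two roles on two distinct equipments means the Internet-facing relay never holds the stored mailboxes, which is the security argument made in the bullet above.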
Authentication Tools

The authentication of some entities (persons or equipments) leans either on the distributed approach, where the authentication may be performed by many equipments, or on the centralized approach, where only a few authentication servers are able to authenticate.

The distributed approach is based on defining a pair of complementary public and private keys for each entity, with the property that an encryption using one of these keys requires decrypting with the other key. While the private key remains known by the owner only, the public key must be widely distributed to other entities to manage the authentication. To avoid spoofing attacks, the public key is usually distributed in the form of an electronic certificate whose authenticity is guaranteed by a certification authority (CA) having signed the certificate. Management of certificates is known as the public key infrastructure (PKI) approach. The PKI approach is presented as distributed because any equipment trusting the CA considers the certificate valid and is then able to authenticate the entity. Certificates may be used for signing and encrypting e-mails or for securing sessions with Web servers using SSL (see section "VPN and data security protocols"). However, the remaining important PKI problem is for the entities to distinguish trusted authorities from fake authorities.

The centralized approach enables any equipment, like APs or proxies, to authenticate some entities by asking the centralized authentication server whether the provided authentication data are correct. The authentication server may be a remote authentication dial-in user service (RADIUS) or LDAP server (Liska, 2002). The RADIUS server is widely used by ISPs to perform AAA functions (authentication, authorization, accounting), in order to authenticate remote users when establishing PPP connections, and to support extra accounting and authorization functions. Several methods are available, like PAP/CHAP/EAP. In usual companies, when LDAP servers are already operational and there is no need for authorization and accounting, the LDAP server solution is preferred over RADIUS to enforce authentication.
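To make the distributed (PKI) approach more tangible, here is a hedged sketch using the Python standard library: a client that trusts a given CA accepts any server whose certificate chains back to that CA and matches the requested host name. The host name and CA bundle path are hypothetical; a centralized deployment would instead query a RADIUS or LDAP server with the user's credentials.

```python
# Sketch of the distributed (PKI) check: trust is placed in one CA, and any server
# whose certificate chains to that CA and matches the host name is authenticated.
import socket
import ssl

CA_BUNDLE = "/etc/ssl/certs/company-ca.pem"    # certificate of the trusted CA (hypothetical path)
SERVER = "intranet.example.com"                # hypothetical internal server

def authenticate_server(host: str, port: int = 443) -> dict:
    """Open a TLS session and return the peer certificate if it is trusted."""
    context = ssl.create_default_context(cafile=CA_BUNDLE)
    context.verify_mode = ssl.CERT_REQUIRED    # a valid certificate is mandatory
    context.check_hostname = True              # the certificate must match the host name
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()           # raises ssl.SSLError if the chain is untrusted

if __name__ == "__main__":
    cert = authenticate_server(SERVER)
    print("Server authenticated, subject:", cert.get("subject"))
```

The same SSL context can be reused for any number of peers: trust is configured once, in the CA, rather than per entity, which is what makes the approach distributed.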
VPN and Data Security Protocols A virtual private network (VPN) (Gupta, 2002) may be simply defined as a tunnel between two equipments carrying encapsulated and/or encrypted data. The VPN security leans on a data security protocol like IP security (IPsec) or secure socket layer (SSL). IPsec is used to protect IP packet exchanges with authentication of the origin, data encryption and integrity protection at the IP packet layer. SSL introduces the same data protection features but at the socket layer (between transport and application layers). SSL was originally designed to secure electronic commerce protecting exchanges between Web servers and clients, but the SSL protection is also applicable to any TCP-based applications like telnet, FTP. VPN solutions may also combine Layer 2 tunneling protocol (L2TP) for tunnelling management only and IPsec for security services enforcement. VPNs are based on one of these protocols, so VPNs are next referred to as IPsec VPN, L2TP/IPsec (L2TP over IPsec) VPN and SSL VPN. VPNs may secure the interconnection between remote private networks. To do so, two VPN gateways, each one positioned at the border of each site are necessary. An IPsec tunnel (or L2TP tunnel over IPsec) is configured between the gateways. In this scenario, IPsec is preferred to SSL because IPsec affects up to the IP level and site interconnection only requires IP level equipments like routers. So the introduction of
IPsec into an existing network architecture only requires replacing the border router with a firewall or upgrading the router with IPsec capabilities. In the case of nomadic users, to let mobile users access private network resources like the e-mail server or databases, the VPN should be established between the nomad and the gateway at the border of the private network. Several technologies are possible but today, the most used ones are L2TP/IPsec and SSL VPN. SSL VPN appears as the solution of choice for a number of companies because the administration of nomads is easier than with IPsec: no licence is necessary for the SSL client, as an ordinary Web browser is an SSL client, and most of the services that remote nomads need to access, like the e-mail server or databases, can be reached through a Web interface. While heavier to manage, a VPN based on L2TP over IPsec gives nomads full access to the private network. The nomad is provided with one public address provided by the ISP and one private address allocated by the private network when establishing the L2TP tunnel. So the tunnel enables the nomad to create IP packets as they will be received by the targeted equipment. Note that today, when performing both IPsec and NAT, NAT should be applied first: otherwise, IPsec tunnel establishment will fail due to inconsistencies between the IP address declared when creating the IPsec tunnel and the one present in the IP packets received by the IPsec endpoint.
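To make the socket-layer protection of SSL/TLS described in this section concrete, here is a minimal Python sketch using only the standard ssl module: the application protocol (HTTP here) is unchanged, while encryption and server authentication are added at the socket layer. The host name is a placeholder and verification relies on the system's trusted CAs.

import socket, ssl

context = ssl.create_default_context()        # loads the system's trusted CA certificates
with socket.create_connection(("www.example.com", 443)) as raw_sock:
    # wrap_socket adds encryption and server authentication below the application layer
    with context.wrap_socket(raw_sock, server_hostname="www.example.com") as tls_sock:
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(4096).decode(errors="replace"))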
Filtering Elements and DMZ

For private networks to remain protected from intrusions, the incoming and outgoing traffic is filtered at the border of the private network thanks to more or less sophisticated filtering equipment like routers, firewalls, and proxies (Cheswick, 2003; Pohlman, 2002). Routers are basic IP packet filters whose analysis is limited to IP source/destination addresses, protocol number, and source/destination
port numbers, and whose security policy rules are known as access control lists (ACL). As such, traffic may be authorized or denied according to the packet origin or destination. As routers rely on the correspondence between TCP/UDP services and port numbers, the access to some applications may be controlled in this way, but the risk is to permit some traffic based on its claimed destination port number (e.g., 80) while the real encapsulated traffic (e.g., FTP) should be denied. Bypassing a packet filter's policy is fairly simple, using HTTP tunnelling for instance, so the solution is to perform a deeper analysis of the packet, as done by proxies or firewalls. A proxy is a piece of software between a client and a server, with the client behaving as if it were directly connected to the server and the server as if it were directly connected to the client. Proxies in the security context are application-level filters, and commercial products include proxies for telnet, FTP, HTTP (URL proxy), or SMTP. First the client connects to the proxy and then, in case of permission, the proxy establishes a second connection to the targeted server and relays the traffic between the two entities. The proxy may control the authenticity of the client, the client's address, and also the content of the exchanges. Firewalls are equipment dedicated to filtering, whose kernel is specialized and optimized for filtering operations. As such, application-level analysis may be performed like in proxies but with better performance, because the filtering is enforced in the kernel and does not require decapsulation of packets or TCP flow control, which are CPU and time consuming. Additionally, firewalls may support IDS/IPS functions. A demilitarized zone (DMZ) is a restricted subnet, separated from the private and public networks, that allows servers to be accessible from other areas while keeping them protected. It also forbids direct connections from the public area to the private network, so that a successful attack requires performing two intrusions, first on the DMZ and second on the private network.
Equipment usually hosted in a DMZ includes proxies and Web servers.
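The relay behaviour of a proxy described above can be sketched in a few lines of Python: the client connects to the proxy, the proxy opens a second connection to the targeted server and relays the bytes in both directions. Real application-level proxies would additionally authenticate the client and inspect the content; the addresses below are placeholders.

import socket, threading

LISTEN_ADDR = ("0.0.0.0", 8080)            # where clients connect to the proxy
SERVER_ADDR = ("intranet-web.local", 80)   # the targeted server (placeholder)

def pipe(src, dst):
    # copy bytes one way until the connection closes
    while (data := src.recv(4096)):
        dst.sendall(data)
    dst.close()

def handle(client):
    server = socket.create_connection(SERVER_ADDR)   # second connection, opened by the proxy
    threading.Thread(target=pipe, args=(client, server), daemon=True).start()
    pipe(server, client)

listener = socket.socket()
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(LISTEN_ADDR)
listener.listen()
while True:
    client, _ = listener.accept()
    threading.Thread(target=handle, args=(client,), daemon=True).start()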
Needs and Constraints for Companies

The challenge for a company is to get its services fully available whatever happens: failures, or malicious behaviours that usually target the deterioration, alteration, or theft of information. Note that in this context, "available" is used in a generic meaning which covers availability as much as confidentiality and integrity. Of course, there is no interest in providing an operational service if unauthorized users can read or modify data. The first step for a company that wants to secure its network, prior to deploying any security equipment, is to define all existing services, and the ones expected in the near future. As such, a whole process must be followed in order to define the following:

• Expected services and/or applications:
  ° Public Web site only
  ° Public Web site with online secured transactions
  ° Intranet Web site with secured access for employees
  ° Extranet Web site with secured access for partners
  ° Electronic mailing, whether encrypted and signed. If secured, is it between end-to-end stations or e-mail servers?
  ° Wireless network support
  ° Content servers accessible for downloads
• Trust levels regarding employees, partners, remote users: Does the network request protection from the external area only, or from both the internal and external areas?
• Data availability for users: Perhaps some servers will be accessible "on-site" only? What kind of reliability for the network (equipment redundancy, link backup)?
• Data sensitivity: Can employees have access to all server contents; for example, can the accounting department database be accessible by any employee? Should remote users' connections be secured for setup only and/or for data exchanges?
• Privileged users: Clearly define how many/who the privileged users are. There should be as few of them as possible, and not necessarily the general manager of the company, especially if he or she is too busy and keeps the password on a piece of paper on his or her desk.
• Security levels: Does the company think that tunnelling is secure enough, or does it expect encryption as a minimum security requirement? Are layer-3 and layer-4 filterings considered secure enough, or are e-mail content filtering and control of visited Web pages essential? (e.g., a bank that provides online accounting transactions will not expect the same security level for its Web site as a florist will).
• Number of sites: Depending on how many sites the company has to manage and the capacity of its routers (e.g., products will be selected for their bandwidth and engine performance but also for the maximum number of simultaneous tunnels supported).
• Type of users: Internal employees only, partners' employees, remote users. If remote users, which access type is used: dial-up with a 56K modem, or an xDSL modem?
• Number of users: Depending on the number of remote users, the choice of mechanisms and products will be impacted.
• Quality of service requirements: If the company needs to support voice over IP / video over IP traffic between branch offices and headquarters, traffic encryption should be avoided if possible because of the introduced latency delay that may exceed the maximum threshold that guarantees a good quality.
• Traffic volume: Security measures would probably not be the same if the company wants to secure a 100 Mbps link or a 100 kbps link.
• Staff expertise: Whether the company is a florist that wants to sell flowers online or a worldwide bank, staff expertise regarding security problems will not be the same.
• Willingness to outsource: Many small companies would prefer to outsource their security and network management, while perhaps huge companies would prefer to manage it by themselves.
• Budget limitation: Companies usually plan some budget for security investment, including equipment purchasing, integration, and maintenance. However, unless the company is obsessed with getting the best security level whatever the cost of the solution, companies can use the return on security investment (ROSI) indicator (Sonnenreich, 2006) in order to help the decision makers select the security solution appropriate to the company. The ROSI takes into account the risk exposure in terms of financial losses, the capacity of the security solution to mitigate attacks, and the cost of the security solution (a numerical sketch is given just after this list).
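As announced in the last item, here is a small numerical sketch of the ROSI indicator. The formulation below is one common reading of Sonnenreich (2006) and the figures are invented for illustration only.

# Risk exposure: expected yearly loss without the security solution.
# risk_mitigated: fraction of that loss the solution is expected to prevent.
risk_exposure = 200_000.0   # EUR/year, illustrative
risk_mitigated = 0.75       # 75% of the attacks mitigated
solution_cost = 60_000.0    # purchase, integration, and maintenance per year

rosi = (risk_exposure * risk_mitigated - solution_cost) / solution_cost
print(f"ROSI = {rosi:.2f}")   # 1.5 here: the solution returns 150% of its yearly cost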
As a consequence of highlighting the above service needs and constraints within a company, a personalized architecture may be designed in terms of systems and networks with specific security constraints, resulting in an adapted security policy. This defines security measures for each network element, leading to the introduction of security elements. In order to give a concrete and practical point of view of security architectures, two types of companies are defined, A and B, so that, for each of them, three types of architectures, corresponding to different security policies, are explained. Let's start with the two companies' profiles.

A is a medium-sized company that needs to secure its existing network with the following requirements:

• A is set up with about 35 employees, the headquarters, and two branch offices.
• Headquarters and branch offices are connected to the ISP using, respectively, 2 Mbit/s and 1 Mbit/s xDSL routers. The routers include basic functions like NAT and filtering based on access lists.
• Employees work on-site, except ten sales managers working as remote users equipped with a laptop and a modem: four of them use a 56K dial-up connection, while six use an xDSL connection. Remote users' connections are for e-mail access only.
• Web portal on the Internet (Internet Web).
• E-mail server.
• In the headquarters, IP addresses are dynamically assigned to on-site employees.
• The ISP provided A with three static public IP addresses for the headquarters, one static public IP address per branch office, and dynamically assigned public IP addresses for remote users.
• Management servers: RADIUS for authentication, anti-virus with e-mail proxy function, DNS server, and DHCP server.
• Staff expertise is low in terms of security management: only two persons work on system and network management, so A prefers to outsource its security management.
• A wants to be protected from the external area.
• In terms of redundancy, A wants a minimum protection.
• For data exchanges, A wants to secure branch offices-to-headquarters communications and remote users' e-mail access.
B is a big-sized company that needs to secure its existing network with the following requirements:

• B is set up with about 300 employees, the headquarters, and about 20 branch offices.
• Headquarters are connected to the ISP using a router with a 10 Mbit/s leased line.
• Branch offices are connected to the ISP as follows: the 5 small-sized ones use a 1 Mbit/s xDSL router, while the 15 medium-sized ones are connected using a router with a leased line at higher rates. All routers include functions like NAT, IPsec, and filtering based on access lists.
• In addition to internal employees working on-site, many employees need remote access. All these remote users are equipped with a laptop and xDSL access. Remote users' connections are for e-mail access, Intranet connection, and downloads from internal servers.
• Branch offices' connections are for e-mail access, Intranet connection, downloads from internal servers, and multimedia over IP traffic (VoIP calls and internal TV broadcasts). Multimedia over IP is later referred to as MoIP.
• Web portal on the Internet (Internet Web).
• Extranet Web server for partners, with secured connections.
• Intranet Web server for employees, with secured connections.
• E-mail server with the possibility of encrypted and signed e-mails.
• Multimedia over IP (MoIP) server(s).
• Simulation server.
• In the headquarters, IP addresses are dynamically assigned to on-site employees.
• The ISP provided B with four static public IP addresses for the headquarters, one static public IP address per branch office, and dynamically assigned public IP addresses for remote users.
• Management servers: LDAP or RADIUS for authentication, anti-virus, e-mail proxy with anti-virus/anti-spam functions, DNS server, DHCP server.
• Staff expertise is good in terms of security management: 15 persons work on system and network management, and B wants to manage its security by itself, like 63% of the companies responding to the 2005 CSI/FBI Computer Crime and Security Survey (CSI Publications, 2005).
• B expects to be protected from both the internal and external areas. However, if that is not possible, it should at least be protected from the external area.
• In terms of redundancy, B wants a maximum protection.
• B wants to be alerted in case of malicious behaviours, especially if they are issued from the external area.
• For data exchanges, B wants to secure branch offices-to-headquarters communications and remote users-to-headquarters connections.
• In the near future, B expects to equip the headquarters with a wireless network for internal users.
A Minimal and Low Cost Protection

The first architecture is a low-budget one, based on the existing routers, which are enhanced with some security functions like the filtering capacities of a firewall, and where several DMZs may be defined for hosting servers. Because all the security relies on a single router only, this router must be really well protected in terms of availability (i.e., redundancy for the power supply, routing engine, and fan trays appears mandatory).
Company A Case Study for Minimal Protection

Regarding company A's requirements, the headquarters' network must be protected from the external area, so the best position for the most sensitive servers is within the internal area, as depicted in Figure 1. Because of its border position, the router is highly likely to be attacked from the Internet, and with its ACL configuration, only the most basic network attack attempts are blocked. As a consequence, the servers positioned in the router's DMZ are not highly protected, and should support as few strategic functions as possible. Given that each router's DMZ must host machines accessible from the external area, the router's DMZ hosts at least the DNS server and the Internet Web server. The three public IP addresses allocated by the ISP for the headquarters serve as follows: the first one is assigned to the router for its external link, the second one to the Internet Web server, and the third one to the DNS server. The e-mail proxy is accessible thanks to the port redirection done by the router. Internal users at the headquarters are protected from the external area thanks to the router's ACL, which must be very strict for incoming traffic. Additionally, a unidirectional NAT function enables internal users to perform outgoing connections with only one public IP address (the router's external one). With private addresses remaining hidden, internal machines are not directly reachable from the external area and are better protected. The DNS and Internet Web servers must be visible at least from the external area, so they must be located in a router's DMZ. Unlike them, the RADIUS, DHCP, and e-mail servers are used internally only: since company A trusts its internal staff (see A's profile in section "Needs and
Constraints for Companies"), they are positioned in the internal area. Anti-virus is also an important function in the network, and is required by company A to protect the e-mail server, in addition to its internal computers. As such, it must be separated from the internal area where the e-mail server is already located, but it must also be connected to the external area in order to download virus signature updates and to exchange e-mails with external servers. Therefore, it is located in a router's DMZ, separated from the DNS and Internet servers' one, so that all incoming e-mails go through the anti-virus and are next forwarded to the internal e-mail server thanks to the integrated e-mail proxy function of the anti-virus. In addition, the proxy may be configured so that the e-mail server is the only one authorized to initialize the connection with the proxy: this results in a better protection for the e-mail server. For remote users' access, an SSL VPN is established between the user's laptop and the SSL gateway, and during establishment, users are authenticated by the SSL gateway thanks to the RADIUS server. In this architecture, the router supports the SSL gateway function, that is, it gets access to the e-mail server on behalf of users and relays new e-mails to the users in HTTP format. For the branch offices, an L2TP/IPsec or IPsec tunnel is established with the headquarters between the two border routers, so that branch offices' users may access the e-mail server and any other server as if they were connected to the headquarters. In this kind of architecture, the ACL in the router must be very restrictive, so that malicious behaviours coming from the external area are blocked. For example, the authorized incoming traffic (i.e., from the external area) is restricted to the following:
• SSL connections from remote users (users are authenticated, and traffic is encrypted using shared keys between the headquarters and the remote user)
• L2TP/IPsec or IPsec tunnels from branch offices (public IP addresses of the branch offices are well known, and routers are authenticated through the IPsec tunnel)
• SMTP traffic, which goes directly to the anti-virus
• HTTP traffic, which is directly forwarded to the Internet Web server, except if the HTTP traffic is received due to a previous internal user's request
• DNS traffic
All other incoming traffic is forbidden. The resulting architecture for Company A is given in Figure 1.
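The restrictive incoming policy listed above can be sketched as an ordered rule table in which the first match wins. The Python sketch below is an illustration, not any vendor's configuration syntax: the addresses are invented, the IPsec tunnels are approximated by their IKE (UDP 500) and ESP components (L2TP travels inside the tunnel), and stateful exceptions such as HTTP responses to internal requests are omitted.

from ipaddress import ip_address, ip_network

BRANCH_OFFICES = ip_network("198.51.100.0/29")   # invented branch office addresses
ANY = ip_network("0.0.0.0/0")

# (source network, protocol, destination port, action)
INCOMING_ACL = [
    (ANY,            "tcp", 443, "permit"),   # SSL connections from remote users
    (BRANCH_OFFICES, "udp", 500, "permit"),   # IPsec key exchange (IKE) from branch offices
    (BRANCH_OFFICES, "esp", None, "permit"),  # IPsec ESP tunnels from branch offices
    (ANY,            "tcp", 25,  "permit"),   # SMTP, relayed to the anti-virus
    (ANY,            "tcp", 80,  "permit"),   # HTTP to the Internet Web server
    (ANY,            "udp", 53,  "permit"),   # DNS traffic
]

def decide(src, proto, dport):
    for net, p, port, action in INCOMING_ACL:
        if ip_address(src) in net and p == proto and port in (dport, None):
            return action
    return "deny"                              # everything else is forbidden

print(decide("198.51.100.1", "udp", 500))      # permit (branch office IKE)
print(decide("203.0.113.9", "tcp", 23))        # deny (telnet from the Internet)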
Company B Case Study for Minimal Protection

Regarding company B's requirements, the headquarters' network must be protected both from
the internal and external areas. As such, the most sensitive servers should not be accessible to users, and access should be under the router's control. The router only blocks the most basic network attack attempts, so to block malicious behaviours and protect the internal staff as much as possible, its ACL configuration must be very restrictive. The Internet/Extranet Web and the DNS server must be in the border router's DMZ because they are visible from the Internet. Similarly, the MoIP server is placed in a DMZ so that exchanges with the branch offices' MoIP servers are possible through the external area. The e-mail proxy is integrated in the anti-virus server and requires access from the external area for e-mail exchanges. All these servers are located in the router's DMZs, with the idea that each DMZ hosts machines that are accessed by the same category of persons or machines and protects them with a specific security policy. So, the router defines four DMZs hosting, respectively: Internet Web and DNS, anti-virus with e-mail proxy function, Extranet Web, and MoIP.
Figure 1. Company A architecture with minimal protection
Because DHCP is only used for the internal staff, and is not so sensitive, it may remain in the internal area. Servers like the intranet Web, e-mail, LDAP or RADIUS, and simulation servers are too sensitive, so they are located in the internal area, but there they are not protected at all from the internal staff and its misbehaviours. Because of this, this router-only-based architecture is not suitable for B's security requirements. Note that the extranet Web, as well as all other internal servers accessed from the Internet with no mandatory VPN connection (Internet Web, DNS, e-mail proxy), should be provided with a static bidirectional NAT translation, or port redirection, defined in the router. The four public addresses provided to B may be assigned to the following headquarters' equipment: the external link of the router, the Internet Web server, the DNS server, and the Extranet Web server.
xDSL remote users and branch offices should connect through an L2TP/IPsec or IPsec VPN to the border router so that they have access to internal resources like the e-mail and simulation servers. During VPN establishment, remote users are authenticated by the router, which should contact the LDAP or RADIUS server for authentication verification. The authentication of the remote routers in branch offices may be performed based on preshared keys or public key certificates known by the router itself. In addition to the VPN, if needed, SSL protection of the Intranet Web may be activated to protect data exchanges and the login/password of users if they are required to authenticate to the Intranet Web. For remote partners to get access to the Extranet Web, a specific rule may be configured in the router to permit packets with a source address belonging to the partners' address spaces (if known), the destination address of the Extranet Web, and the destination port number of the Extranet Web. For data confidentiality reasons, during transfer, an SSL connection may be established between the partner's machine and the Extranet Web. Moreover, a stronger security access to the Extranet Web may be obtained by requiring authentication of partners based on login/password under the control of the LDAP/RADIUS server. As a result, access control is twofold, based on the source IP addresses (done in the border router) and the login/password (done in the Extranet Web).

Figure 2. Company "B" architecture with minimal protection

In this architecture, the ACL for authorized incoming traffic (i.e., from the external area) in the router may look like the following:
• SSL connections from partners (based on IP address if known, and login/password) to the Extranet Web
• L2TP/IPsec or IPsec tunnels from branch offices (public IP addresses of the branch offices are well known, and routers are authenticated through the IPsec tunnel)
• L2TP/IPsec or IPsec tunnels from remote users (authentication is made through the tunnel)
• SMTP traffic, which goes directly to the anti-virus
• MoIP traffic, which goes directly to the MoIP server
• HTTP traffic, which is directly forwarded to the Internet Web server, except if it corresponds to a previous internal user's request
• DNS traffic
All other incoming traffic is forbidden. The resulting architecture for Company B is given in Figure 2. In conclusion of these two case studies, the main advantage of this kind of architecture is its low cost, but all the security relies on the integrity of the router, and as such this basic architecture appears suitable for small companies only (company B's requirements are not achieved).
Note that in this kind of architecture, only network-layer and protocol-layer attacks are blocked. There is no way to block ActiveX or Java code attacks, or to filter visited Web sites, unless additional proxies are added. Even with the introduction of proxies, there is no way to protect them efficiently within this type of architecture.
A Medium-Level Security Architecture

The second type of architecture, equipped with one border router and one firewall, is more complex and may serve to define many DMZs to isolate servers. The security of this architecture is higher than that of the first one because a successful intrusion into the router may only affect network elements around the router, and not the elements behind the firewall, which benefit from the protection of the firewall. An intrusion into the headquarters assumes that two intrusions are successfully performed, one into the first router or the router's DMZ to bypass its security policy, and a second one into the firewall ahead of the headquarters. A firewall instead of a second router is introduced for stronger security. The resulting security level is higher as the firewall is cleanly designed hardware equipment which, in addition to routing and NAT functions, may implement high-level functions like IDS/IPS and proxies, and moreover, predefined port behaviours with controlled exchanges in between (cf. section "Filtering Elements and DMZ"). Note that if the company chooses a software firewall product (i.e., software installed on a computer with many network cards), which can be installed with its own operating system or with the computer's existing operating system, the authors recommend installing it with its own included operating system because of possible weaknesses in the computer's existing operating system. As previously explained, servers positioned in the router's DMZ are not highly protected, and
should support non-strategic functions for the company. Sensitive ones, like RADIUS, LDAP, intranet Web, extranet Web, and e-mail, should remain in the firewall's DMZs. Note that the number of DMZs is generally limited because of budget savings. However, if financially affordable, the general idea that should be kept in mind when defining the architecture is that each DMZ should host machines that are accessed by the same category of persons or machines. This prevents persons from one category from attempting to get access to the resources of another category by performing an attack locally within the DMZ, which would remain undetectable by the firewall. As such, one DMZ may be defined for the extranet, another one for the Intranet. Note that no servers are positioned in the subnet between the firewall and the router: otherwise, a successful intrusion on such a server would lead to the intruder installing a sniffing tool and thus spying on all the traffic of the company going through this central link.
Company A Case Study for Medium Protection

Internal users are better protected from Internet attacks than in the first type of architecture thanks to the extra firewall. The Internet Web and DNS servers have the same level of protection as in the first architecture against possible attacks from the Internet area. Even if internal users are considered trusted by company A, the RADIUS server, positioned in a firewall's DMZ, is better protected than in the first architecture, as internal users have no direct access to it. On the other hand, the e-mail and DHCP servers within the internal network remain with the same level of protection against potential employees' misbehaviour.
Figure 3. Company “A” architecture with medium protection
The e-mail service is well protected from the Internet thanks to the router and firewall, which are configured so that SMTP packets coming from the Internet and addressed to the e-mail proxy are permitted. Remote users' access and branch offices' access are achieved in the same way as in the first kind of architecture (see section "Company A Case Study for Minimal Protection"). Finally, for users of remote branches to get their e-mails through the VPN, one rule should be configured in the firewall to permit machines from branch offices to send POP or IMAP packets to the e-mail server. With this kind of architecture (as depicted in Figure 3), all requirements of Company A are achieved, and this solution can be a good value for small and medium-sized companies, both from a technical and a financial point of view (i.e., it gives the best ROSI, return on security investment). However, the security can be improved, as shown for the RADIUS server. Additionally, some elements may be outsourced, as requested by company A, like firewall management, router management, and the SSL gateway.
Company B Case Study for Medium Protection

Internal users are better protected from Internet attacks than in the first type of architecture thanks to the extra firewall. With the addition of the firewall (as depicted in Figure 4), sensitive servers like the Intranet/Extranet Web, the Simulation server, the e-mail server, and the LDAP/RADIUS server can be better isolated: three DMZs are defined on the firewall:
• One is the Intranet DMZ, hosting Intranet resources like the Intranet Web, the Simulation server, and the e-mail server.
• One is for the Extranet resources, including the Extranet server.
• The last one is for the authentication server, either the LDAP or the RADIUS server.
Figure 4. Company “B” architecture with medium protection
For its protection, the firewall should be configured so that communications to the authentication server are restricted to only the machines needing to authenticate users: the headquarters' border router (for remote users' authentication), the Intranet Web (employees' authentication), the Extranet Web (clients' authentication), and the e-mail server (employees' authentication). The Extranet Web is moved to the firewall's DMZ to offer extranet partners a higher protection level. Only the DHCP server remains connected to the headquarters to ensure the dynamic configuration of internal machines. As the firewall is unable to securely support dynamic port allocation, the MoIP server is positioned in the router's DMZ, and the router only authorizes incoming MoIP calls from remote branches (based on source IP addresses). The Internet Web, the DNS server, and the e-mail proxy also remain in the border router's DMZ because they are visible on the Internet, so they may be subject to intrusions, and in case of success, the subverted subnets are limited to the router's DMZ, which is far from the sensitive DMZs of the firewall. xDSL remote users' access and branch offices' access are achieved in the same way as in the first kind of architecture (see section "Company B Case Study for Minimal Protection"). For remote partners to get access to the Extranet Web, a specific rule may be configured in the router and the firewall. Otherwise, the authentication process remains unchanged compared to the previous architecture. The security policy of Company B, as defined in section "Needs and Constraints for Companies", is respected with this type of architecture. In terms of ROSI, it can be a suitable solution for classical medium to big-sized companies without critical sensitivity. All the network or security servers are under the firewall's or router's control, contrary to
the first architecture, except the DHCP server, which remains in the private network for functional reasons. Servers whose access is restricted to the same group of persons or machines are grouped together in the same DMZ. Note that the present architecture assumes that a sufficient number of DMZs is available in the firewall and the router. In case the firewall and/or the router is not provided with enough DMZs, or for budget savings, a first solution would be to move some of the equipment into the headquarters, with the same drawbacks as described in the first architecture. A second solution is to limit the number of DMZs and to group servers together in the same DMZ, but with the risk that a user benefiting from authorized access to one server attempts to connect illegally to another server in the same DMZ.
A High-Level Security Architecture

The third architecture, equipped with two firewalls, is the most complex one, giving a maximum level of protection, with the possibility to define many DMZs to isolate servers. The resulting security level is obviously higher as there are two firewalls implementing high-level security functions like IDS/IPS and proxies. When defining a high-level security architecture, the more lines of defense are introduced, the more difficult it is for the attacker to break through these defenses and the more likely the attacker is to give up the attack. All these principles, targeting delaying (rather than preventing) the advance of an attacker, are better known as the "defense in depth" strategy and are today widely applied by security experts. The security of this architecture is higher than that of the two previous ones because a successful intrusion into the headquarters assumes that two intrusions are successfully performed, one into the first firewall to bypass its filter rules,
and a second one into the firewall ahead of the headquarters. Note that for better understanding and further reference, the firewall directly connected to the external area is called the "external" firewall, while the one directly connected to the internal area is called the "internal" firewall. In this kind of architecture, the fundamental idea that should be kept in mind is that the firewall products must come from different manufacturers or software editors, in order to prevent common weaknesses. Within the same manufacturer/editor, common weaknesses from one product to another may result from the same development teams using the same version of an operating system. Moreover, in case a software firewall product is selected to be installed on a computer with many network cards, the best option from a security point of view is to install it with its own included operating system. Contrary to the previous architectures, servers positioned in the DMZs are highly protected, so the way to choose the best DMZ for each server is to put it as close as possible to the persons using it; that is, the Internet Web server should be on the "external" firewall, while the Intranet Web server should be on the "internal" firewall. Furthermore, as already explained for the other architectures, each DMZ should host machines that are accessed by the same category of persons or machines. This prevents persons from one category from attempting to get access to the resources of another category by performing a local attack within the DMZ with no detection by the firewall. Finally, this architecture can be improved by introducing a router between the "external" firewall and the external area, especially if the firewall products are software ones installed on a computer (equipped with network cards) and those firewalls have been installed on the existing operating system instead of their own one. Otherwise the risk is that an intruder finds a way to shut down the firewall process, so that the "external" firewall is like a simple computer with only routing activated and no security rules. Please note that for the following case studies, the considered architectures are based on two firewalls without any additional border router.
Company A Case Study for High-Level Security Architecture

Internal users are better protected from Internet attacks than in the previous type of architecture, due to the two firewalls. The Internet Web and DNS servers are also better protected than before against possible attacks from the Internet area. They are still located on a DMZ of the "external" firewall because incoming traffic addressed to these two servers comes mainly from the external area. The RADIUS server is used both for internal staff authentication and for remote offices'/users' authentication. Considering the number of employees, the number of authentication requests seems to be higher from the internal area. Therefore, RADIUS is located on a DMZ of the "internal" firewall. Because there are more DMZs than in the previous architecture, the e-mail server can be located in a DMZ of a firewall. Considering Company A's requirements, the anti-virus with e-mail proxy function is moved to a DMZ of the "external" firewall, and the e-mail server is then connected to a DMZ of the "internal" firewall. Note that the e-mail server is not located on the same DMZ as the RADIUS server, because incoming requests sent to RADIUS come from unauthenticated users and may carry malicious content, like attacks against the e-mail server. Because DHCP is only used by the internal staff, and is not so sensitive, it can remain in the internal area. Remote users' access and branch offices' access are achieved in the same way as in the two first
kinds of architecture (see section "Company A Case Study for Minimal Protection"). With this kind of architecture (as depicted in Figure 5), all requirements of Company A are achieved, and intrusion attempts become really hard. However, this kind of solution is probably too expensive with regard to the targeted security requirements of small and medium-sized companies.
Company B Case Study for High-Level Security Architecture

Internal users are better protected from Internet attacks than in the previous type of architecture, due to the two firewalls. The Internet Web and DNS servers are located on a DMZ of the "external" firewall because incoming traffic addressed to these two servers comes mainly from the external area. In order to improve the filtering level of some sensitive servers like the Intranet Web, some additional proxies can be added. For instance, an HTTP proxy for the Intranet Web can be installed in the MoIP DMZ to perform users' authentication but also a high level of control on HTTP data (format and content). The "external" firewall should be configured so that HTTP traffic to the Intranet Web is redirected to the HTTP proxy for a first filtering. As such, intruding into the Intranet Web requires much more effort than before. Anti-virus functions can be separated between the e-mail server's and the internal staff's needs; that is, the e-mail anti-virus functions remain the same as in the previous architecture, while a specific anti-virus server dedicated to internal needs can be added on the intranet DMZ of the "internal" firewall. To improve the reactivity of Company B when malicious behaviours occur, IDS functions can be added on servers (HIDS) or subnets (NIDS). Examples of IDS positioning may be: a HIDS within the Simulation server (if it contains very sensitive data) or the LDAP/RADIUS server, and a NIDS on the internal side of the "internal" firewall (a minimal sketch of a HIDS-style integrity check is given at the end of this section).
Figure 5. Company A architecture with high-level protection

Figure 6. Company "B" architecture with high-level protection
In order to avoid direct communications between subnets of the internal network, or to protect servers from users, VLANs can be defined. For example, access to the accounting database server may be allowed for the accounting department staff only and separated from the rest of the network. All other servers' positions remain unchanged compared to the previous architectures. Remote users' access and branch offices' access are achieved in the same way as in the two first kinds of architecture (see section "Company B Case Study for Minimal Protection"). With this kind of architecture (as depicted in Figure 6), all requirements of company B are achieved, and beyond them, security can be improved with additional proxy capabilities or external IDS elements. In terms of ROSI, this solution is mandatory for companies with critical sensitivity (e.g., banks), but it can also be suitable for all classical medium to big-sized companies. When Company B introduces wireless equipment into its network (Kizza, 2005), it should first strongly control mobiles' access, as they will gain access to the headquarters' network. For a higher security level, the wireless network may be considered as a specific VLAN within the "internal" network, and/or as an extra DMZ hosting the APs.
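As announced above, here is a minimal sketch of the kind of file-integrity check a host-based IDS performs on a sensitive server such as the Simulation server; real HIDS products (and NIDS, which inspect network traffic instead) are far richer. The watched paths are placeholders.

import hashlib, json, os, sys

WATCHED = ["/etc/passwd", "/srv/simulation/config"]   # placeholder paths
BASELINE = "baseline.json"

def digest(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot():
    return {p: digest(p) for p in WATCHED if os.path.exists(p)}

if sys.argv[1:] == ["init"]:
    with open(BASELINE, "w") as f:
        json.dump(snapshot(), f)                       # record the trusted state
else:
    with open(BASELINE) as f:
        baseline = json.load(f)
    for path, fingerprint in snapshot().items():
        if baseline.get(path) != fingerprint:
            print("ALERT: file modified:", path)       # raise an alarm, as required by B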
Conclusion

This chapter addresses the problem of designing security architectures and aims to give as much information as possible in these few pages, to help administrators decide which architecture is the most suitable for them. For more concrete explanations, two companies with different sizes and constraints were considered. The first one, A, is a medium-sized
company with two branch offices and 35 employees: it wants to be protected from the external area; it has no internal security expertise, implements a limited number of servers, and restricts remote access to e-mails. The second company, B, is big-sized with about 20 branch offices and 300 employees: it wants to be protected both from the internal and external areas; the staff expertise is good; a number of network and security servers are implemented; access from branch offices and remote users is possible to the Intranet Web, e-mail, and any internal servers; it requires a high security level with redundancy and alarm considerations. For both companies, three families of architectures are studied: a low-security-level architecture with router-only protection, a medium-security-level architecture with one router and one firewall, and a high-security-level architecture with two firewalls. For each of these six cases, explanations or discussions are given relative to the positioning of equipment, the objectives of the DMZs, the number of DMZs, the VPN mechanism selection (L2TP/IPsec, IPsec, SSL) for secure access by remote users and remote branches, and the access control performed by proxies, firewalls, and routers. Other discussions include users' authentication by LDAP/RADIUS servers; the e-mail problem, with the requirement for the open e-mail system to be reachable by any Internet machine while being protected to avoid e-mail disclosure; the careful WiFi introduction into existing networks; and VLAN usage to partition the network and limit direct interactions between machines. Recommendations are also given for the selection of the firewall product and its installation. To conclude, as described in this chapter, finding the appropriate architecture is a huge task
as the final architecture depends on many parameters, like the existing security and network architectures, security constraints, functional needs, the size of the company, the available budget, and the management of remote users or branch offices. The idea of the authors, when writing this chapter, was to give useful guidelines to succeed in defining the appropriate architecture that reaches the best compromise between the company's needs and constraints. We hope it helps.
References

Cheswick, W. R., Bellovin, S. M., & Rubin, A. D. (2003). Firewalls and Internet security: Repelling the wily hacker. Addison-Wesley.

CSI Publications. (2005). CSI/FBI computer crime and security survey. Retrieved from http://www.GoCSI.com

Gupta, M. (2002). Building a virtual private network. Premier Press.

Kizza, J. M. (2005). Computer network security. Springer.

Liska, A. (2002). The practice of network security: Deployment strategies for production environments. Prentice Hall.

Pohlman, N., & Crothers, T. (2002). Firewall architecture for the enterprise. Wiley.

Sonnenreich, W., Albanese, J., & Stout, B. (2006, February). Return on security investment (ROSI): A practical quantitative model. Journal of Research and Practice in Information Technology, 38(1), 99.
Chapter II
Security in GRID Computing

Eric Garcia, University of Franche-Comté, France
Hervé Guyennet, University of Franche-Comté, France
Fabien Hantz, University of Franche-Comté, France
Jean-Christophe Lapayre, University of Franche-Comté, France
Abstract

GRID computing implies sharing heterogeneous resources, located in different places belonging to different administrative domains over a heterogeneous network. There is a great similarity between GRID security and classical network security. Moreover, additional requirements specific to GRID environments exist. We present these security requirements and we detail various secured middleware systems. Finally, we give some examples of companies using such systems.
INTRODUCTION

Grid technologies enable the large-scale aggregation and harnessing of computational, data, and other resources across institutional boundaries. Fifty years of innovation have increased the speed of individual computers by an impressive factor, yet they are still too slow for many scientific problems.
A solution to the inadequacy of computer power is to "cluster" multiple individual computers. First explored in the early 1980s, this technique is now standard practice in supercomputer centers, research labs, and industry. Although clustering can provide significant improvements in overall computing power, a cluster remains a dedicated resource, built at a single location.
Rapid improvements in communication technologies led many researchers to consider a more decentralized approach to the problem of computing power. Several projects then saw the light of day (Del Fabro, 2004; http://www.globus.org; http://setiathome.ssl.berkeley.edu), to name a few. Internet computing became something much more powerful because of the ability for communities to share resources as they tackle common goals in a seemingly unified virtual machine. Science is increasingly collaborative and multidisciplinary, and it is not unusual for teams to span institutions, countries, and continents. GRID computing implies sharing heterogeneous resources, located in different places belonging to different administrative domains, over a heterogeneous network. As GRID applications gained popularity and interest in the business world, securing business trades could no longer be taken lightly. Securing information encompasses authenticating the source of a message, verifying the integrity of the message to ensure there has been no malicious modification, or protecting the confidentiality of the message being sent from prying eyes. Because of the cross-institution nature of GRID application communications, GRID computing has specific security needs. It has to protect a GRID community against unwanted eyes, and yet it has to allow wider and wider access to many more identified participants. The challenge is securing these legitimate participants while affecting neither the local entities' authority nor the performance. The geographical dispersion of GRID participants is often unpredictable, leaving less margin to superimpose a new constraining protocol on the existing systems.
OVERVIEW OF DISTRIBUTED SYSTEMS AND GRID COMPUTING
For several decades, researchers have tried to federate data-processing resources through networks: the first distributed systems were developed, in which both data and processing could be distributed. Parallel computing has moved from a proprietary design, centred on a supercomputer supported by homogeneous processors connected to an internal network, towards heterogeneous clusters of workstations distributed worldwide. The growing popularity of the Internet, combined with the availability of powerful computers and high-speed networks as low-cost commodity components, is changing the way we do computing. These technologies enable the clustering of a wide variety of geographically distributed resources (such as supercomputers, storage systems, data sources, special devices, and services) that can be used as a unified resource. This new paradigm is popularly termed "GRID" computing. The GRID is analogous to the electrical power grid and aims to couple distributed resources and offer consistent and inexpensive access to them, irrespective of their physical location.

At the Beginning: Metacomputing

Metacomputing appeared at the beginning of the nineties. The idea was to gather within a metacomputer a group of small independent units equipped with calculation and storage capacities. But network performance did not make it possible to develop such platforms over WANs. Tools were then developed to allow the installation of clusters on high-performance LANs. Parallel virtual machine (PVM) (Sunderam, 1990) was developed by Oak Ridge National Laboratory in 1990, followed by MPI (Message Passing Interface, 1993) (Franke, 1994). These software tools simply made it possible to facilitate the programming of these applications by exchanging messages.
GRID Computing

The first appearance of the Grid goes back to 1998 (Foster, 1998). This new concept was then defined as being a "hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities." A GRID is a software toolbox which provides services to manage distributed hardware and software resources. This evolution gave metacomputing a new dimension, in particular by allowing the interconnection of several clusters. Grid'5000 is a French experimental GRID linking together nine towns using 10 Gb/s optical fibres. One of the hardware objectives of the platform is to reach 5,000 processors in 2007. This platform permits the deployment, testing, and improvement of applications, middleware, data Grids, and security infrastructures.
Peer-to-Peer

An important characteristic of this family of metacomputing applications is that each site cooperates on an equal basis. The appearance of peer-to-peer (P2P) is directly related to the advent of the Internet (Oram, 2001). Compared to GRID computing, the assets of P2P are:
• Choice of a decentralized and nonhierarchical organization
• Management of instability, and fault-tolerance
Indeed, P2P applications are deployed on a broad scale, and an interruption on a node or on a network link does not endanger the whole application. We find a wide variety of P2P systems (Saxena, 2003) currently in use, such as Napster, Gnutella, and Kazaa, where peers are unaware of the
total membership of nodes in the system. There are more structured P2P systems (e.g., Chord and Pastry). Most large-scale P2P systems differ from more traditional, synchronous group communication systems such as SETI@Home, Totem, or Horus, where scalability is typically limited and group membership requires the constant online presence of each peer. Client programs such as MSN Messenger and AOL Instant Messenger allow users to exchange text, voice, and files. Users of P2P systems ask for the enhancement of access control with new authentication and authorization capabilities, to address users that know little about each other. P2P systems introduce other problems that require focusing attention on protection from those who offer resources, rather than from those who want to access them. JXTA (http://www.jxta.org) provides some functionality (e.g., encryption, signatures, and hashes) for the development of secure P2P applications. Reputation techniques allow the expression of, and reasoning about, trust in a peer based on its behavior and the interactions other peers have experienced with it (Damiani, 2002).
A Last Evolution Towards Total Computing

The most recent evolution permits the development of GRIDs on a very large scale. New middleware allows data management in completely heterogeneous media, including wireless and mobile ones. This last family tends to combine the qualities of GRID computing and peer-to-peer. The word computing is often associated with the final GRID: GRID computing or global computing are the terms that are usually used. But these platforms are not devoted exclusively to computing; it is also possible to find Data GRID type applications. In the remainder of this chapter, we will not differentiate between the concepts of GRID and GRID computing.
SECURITY REQUIREMENTS FOR GRID COMPUTING

Security has often been a neglected aspect of most application or system designs, until a cyber attack makes it real. Oftentimes, security practitioners consider security as being a step behind electronic warfare. However, the first step towards a better protected system is awareness. Just like many other systems and architectures, GRID computing paid little attention to securing its communications. The need seemed weak since the instigation for GRID computing emerged from a well-meaning scientific community working for a common interest. As GRID applications gained popularity and interest in the business world, securing business trades could no longer be taken lightly. Securing information encompasses authenticating the source of a message, verifying the integrity of the message against malicious modification, or assuring the confidentiality of the message being sent against prying eyes. As electronic communication became pervasive, access control to privileged information became increasingly pertinent, and when new forms of attack emerged, such as ones not aiming at theft of information (denial of access), new protective measures needed to be put into place to ensure constant availability of service.
Traditional Security Features

There is a great similarity between GRID security and classical network security. It depends on the type of activity, on the risks firms are ready to take, and above all on the cost of installing and configuring security systems such as firewalls. All these features exist for the GRID; however, some are more important in this case. Moreover, additional requirements specific to GRID environments exist. Indeed, security policies have to protect a GRID computing platform without adding too many constraints that could seriously decrease performance in terms of calculation power, for example. In particular, we can notice that there is a greater need for dynamicity, and a greater importance of process supervision and of rights delegation. Classical security features have to be adapted to GRID computing environments.

Authentication, Authorization, and Auditing (AAA)

Each entity of the GRID must be able to authenticate the others. GRID entities must be authorized to communicate with other entities from the same domain or from another one. Auditing must take into account the dynamic aspect of GRID environments, where component binding varies considerably and can have a short life cycle.

Confidentiality, Integrity, Availability (CIA)

Communication between GRID entities must be secure. Confidentiality must be ensured for sensitive data from the communication stage to a potential storage stage. Integrity problems must be detected in order to avoid processing faults. Availability is directly bound to performance and cost in GRID environments, and is therefore an important requirement.

Fault Tolerance

In GRID environments, fault tolerance must be managed to ensure that a fault on a component does not cause the loss of all the work performed. Moreover, it can be important, in the particular case of the GRID, to recover part of the work done on a faulty node in order to increase global performance when a fault occurs. In the framework of fault tolerance, it is also required to supervise and trace a process (initialization, used nodes, bindings) and to store this information.
SECURED MIDDLEWARE OF GRID COMPUTING

Whereas very few turnkey security systems exist, a lot of organizations and companies, like the Global GRID Forum (GGF), the Enterprise GRID Alliance (EGA), and the GRID Research Integration Development and Support Center (GRIDS), are currently trying to develop some. Secured systems for the GRID are presented below.
Systems Using X509 Certificates

One of the most popular security middleware systems for the GRID is Globus, using GSI [11] (GRID Security Infrastructure). GSI is based on the PKI security architecture, which authenticates servers, users, and processes. In order to do so, an X509 certificate signed by a certificate authority (CA) is delivered for each user and machine. A certificate is a file which contains at least the following information:

• The name of the authority which created the certificate
• The name and first name of the user
• Organization name
• Unit name
• E-mail address
• Public key
• Validity period
• Numeric signature
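The fields listed above can be read programmatically. A minimal sketch using the third-party cryptography package (the file name is a placeholder):

from cryptography import x509

with open("user_cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("Issuing authority :", cert.issuer.rfc4514_string())
print("Subject (user/org):", cert.subject.rfc4514_string())
print("Valid from/until  :", cert.not_valid_before, "/", cert.not_valid_after)
print("Public key        :", cert.public_key())
print("Signature (start) :", cert.signature.hex()[:32], "...")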
To simplify the procedure and to avoid users having to authenticate themselves each time they have to submit a calculation, a proxy is used. This is the single sign on (SSO) method. The proxy is a new certificate with a new private/public key pair. This new certificate is signed by the user himself and not by the CA (Figure 1). This credential mechanism provided by the proxy implies that once someone accesses a remote system, he can give the remote system permission to use his credentials to access other systems for him. When connections are established, the SSL protocol is used to encrypt communications. Table 1 shows a comparison of X.509 public key certificates with X.509 proxy certificates (Welch, 2004).
Figure 1. Delegation method
Table 1. Comparison of X.509 public key certificates and X.509 Proxy Certificates

Certificate Attribute | X.509 Public Key Certificate | X.509 Proxy Certificate
Issuer/Signer | A Certification Authority | A public key certificate or another Proxy Certificate
Name | Any as allowed by issuer's policy | Scoped to namespace defined by issuer's name
Delegation from Issuer | None | Allows for arbitrary policies expressing issuer's intent to delegate rights to Proxy Certificate bearer
Key pairs | Uses unique key pair | Uses unique key pair
A lot of middleware systems use GSI to secure their system or extend it for their own requirements, like GRID Particle Physics (GridPP) and TeraGrid. Another example is data-exchange systems, which often use GridFTP (a module of Globus) with GSI, like DataGRID.
Systems Using IPSec and DNSSec

GRIDSec (Le, 2002, 2003) is an architecture using DNSSec as a key distribution system, SSH to secure initial authentication, and IPSec to protect the users' communications, as illustrated in Figure 2.

IPSec to Secure Data Transport

IPSec is a protocol suite allowing networking devices to communicate privately using IP. IPSec requires a secret key distribution mechanism. The authors propose to open the architecture with the implementation of a nonproprietary certificate authority infrastructure that allows resources to authenticate other resources directly, without appealing to a central authority like Kerberos. The security extension used is the DNSSEC extension to the DNS protocol.

DNSSec and Secure Key Management

The fundamental objectives of DNSSec are to provide authentication and integrity to the inherently insecure DNS. Authentication and integrity of the information held within DNS zones are provided through cryptographic signatures generated through the use of public key technology. To make the secure network transport scalable, the SSH client is modified to query the DNS server for the host key of an SSH server. This key distribution server will still host the usual DNS resource records, the host key, and a signature authenticating that host key, for each SSH client of the domain. The server is transformed by the DNSSEC extensions into a local Certification Authority.
In a GRIDSec architecture, a DNSSec server is defined as a key distribution system federating several zones. Each zone has an SSH server to manage and identify each zone’s key to the DNSSec server; each zone also has a VPN module to enable secure data exchange between different zones through the Internet. The GRID system overlaid by GRIDSec has a Resource Broker agent to obtain and locate the resource requested by a given user (see Figure 2).
Figure 2. GRIDSec model
Each site participating in the GRID has a VPN module and an SSH server. An initial phase encompasses the authentication of each SSH server to a federating DNSSec server. The OpenSSH client installed on the SSH server enables the secure key exchange between each SSH server and the DNSSec server. The SSH server sends a request for registration (1) to the DNSSec server. The DNSSec server sends its public key in return (2); the requesting SSH server encrypts its own public key with that key and digitally signs it with an HMAC (3). This authentication phase occurs for each zone (4) and (5). In a second phase, secure VPN tunnels are created between the sites (6), since IPSec can reuse the previously exchanged secret keys. At this stage, the SSH servers' keys have been gathered by the DNSSec server. A user in a federated zone can request a resource that the resource broker locates in a different zone. SSH server C will then request SSH server A's public key from the DNSSec server. Using IPSec, the two sites are able to establish pairwise VPN links. In a third phase, when SSH servers' keys change (due to compromised keys, or security maintenance to renew keys), the DNSSec server updates its SSH key record files (7).
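The HMAC step of the registration phase can be illustrated with a small Python sketch. The message layout, the shared secret, and the use of SHA-1 below are assumptions for illustration only; they are not taken from the GRIDSec specification.

```python
# Hedged sketch of the HMAC step in the registration phase described above.
# Key handling and message format are assumed purely for illustration.
import hmac
import hashlib

def sign_registration(zone_public_key: bytes, shared_secret: bytes) -> bytes:
    """Return an HMAC tag the DNSSec server can check for this zone key."""
    return hmac.new(shared_secret, zone_public_key, hashlib.sha1).digest()

def verify_registration(zone_public_key: bytes, tag: bytes, shared_secret: bytes) -> bool:
    """Recompute the tag and compare it in constant time."""
    expected = hmac.new(shared_secret, zone_public_key, hashlib.sha1).digest()
    return hmac.compare_digest(expected, tag)

# Example: the SSH server signs its public key before sending it (step 3),
# and the DNSSec server verifies the tag on reception.
secret = b"out-of-band shared secret"          # assumption: key agreement is out of scope here
ssh_host_key = b"ssh-rsa AAAAB3Nza... zoneA"   # placeholder key material
tag = sign_registration(ssh_host_key, secret)
assert verify_registration(ssh_host_key, tag, secret)
```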
Systems Using Fine Authorizations

The Legion system (http://legion.virginia.edu) provides a fine-grained authorization mechanism. Each resource contains a list of objects which can access it. Moreover, each method of an object has an "allow" and a "deny" list, which specify the authorizations (ACL). An object is authorized to access a method if and only if it does not appear in the deny list and does appear in the allow list.

GRIDLab focuses on the development of a flexible, manageable and robust authorization service called GAS (www.gridlab.org/gas). The main goal of GAS is to provide functionality able to fulfill most authorization requirements of GRID computing environments. GAS is designed as a trusted single logical point for defining security policy for complex GRID infrastructures. As flexibility is a key requirement, GAS is able to implement various security scenarios, based on push or pull models, simultaneously. Thanks to these characteristics, GAS is also interoperable with other security toolkits like Globus.
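Returning to the Legion-style allow/deny rule above, the following sketch shows the check in Python. The class and attribute names are invented for illustration and do not come from Legion itself.

```python
# Hedged sketch of the Legion-style rule: access to a method is granted only if
# the caller is absent from the deny list and present in the allow list.
from dataclasses import dataclass, field

@dataclass
class MethodACL:
    allow: set = field(default_factory=set)   # object identities allowed to call the method
    deny: set = field(default_factory=set)    # object identities explicitly refused

    def authorizes(self, caller: str) -> bool:
        # Deny takes precedence; otherwise the caller must be explicitly allowed.
        return caller not in self.deny and caller in self.allow

acl = MethodACL(allow={"objA", "objB"}, deny={"objB"})
print(acl.authorizes("objA"))   # True  - allowed and not denied
print(acl.authorizes("objB"))   # False - denied even though listed in allow
print(acl.authorizes("objC"))   # False - not in the allow list
```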
Systems Using Sandboxing

In GRID computing relying on a P2P architecture, applications are often transferred from one resource to another without the capability to check mutual authenticity. Faced with this difficulty in securing P2P systems, applications are more and more often executed in a secure context, that is, in a box which acts as an interface from and to the applications. Operations are intercepted and permissions are given or refused. Permissions mainly apply to the network, the file system and the system configuration. Thus, even if someone succeeds in transmitting malicious code, it is ineffectual because of the permission requirements. This concept is called "sandboxing". Several implementations of sandboxing exist: Java sandboxing, Java Web Start, the Gentoo sandbox, Norman Sandbox, FMAC, Google Sandbox, and S4G (Sandbox for GRID). Figure 3 shows a simplified representation of the Java sandbox architecture. Sandboxes either intercept system calls (strace, /proc), allowing or refusing them, or they let the application run in a virtual context, like chroot.

Figure 3. Simplified Java sandbox architecture
HiPoP (Hantz, 2005) and XtremWEB (http://www.lri.fr/~fedak/XtremWeb) are two examples of GRID computing systems using this concept of sandboxing. HiPoP stands for Highly distrIbuted Platform Of comPuting and is a platform entirely written in Java. It performs coarse-grain tasks having dependences in a highly distributed way. This platform relies on a P2P architecture. Unlike other platforms, which use a static sandbox where permissions stay the same from the beginning to the end of the execution of the daemons, HiPoP uses the HiPoP Dynamic Sandbox (HDS) (Hantz, 2006). Figure 4 shows an example of HiPoP permission attribution, where the left part of the figure is a piece of a directed acyclic graph (DAG) to perform. When R1 has finished its execution of the task J1, it asks (1) the resource provider for a reference to a resource to perform J2. The resource provider then chooses the resource R2 and signals to it (2) that R1 will contact it to perform a task. Thus R2 will accept the connection from R1. Without this query, R2 would reject all communications from all peers. Next, the resource provider advertises (3) to R1 that it can submit its task to R2. Finally, R1 submits (4) the task J2 to R2. After this submission, the network permission is automatically removed and R1 can no longer access R2. Network permissions are just one example of the permissions that HiPoP takes into account; it also manages file permissions (tasks are only authorized to read and write files in a temporary directory identified for them). Moreover, tasks cannot read information on the resource system (hostname, IP address, type of the resource, OS used) and cannot modify the administration system, for example by overloading the security manager or adding additional permissions.

Figure 4. Dynamic permission attribution
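The one-shot network permission of the HDS scheme described above can be sketched as follows. The class and method names are illustrative only; they are not HiPoP code.

```python
# Hedged sketch of a dynamic network permission: the resource provider opens a
# one-shot permission for a peer, and it is revoked automatically after use.
class DynamicSandbox:
    def __init__(self):
        self._allowed_peers = set()              # peers currently allowed to connect

    def grant_once(self, peer: str):
        """Resource provider signals that a given peer may submit one task."""
        self._allowed_peers.add(peer)

    def accept_submission(self, peer: str, task: str) -> bool:
        if peer not in self._allowed_peers:
            return False                         # connection rejected: no permission
        self._allowed_peers.remove(peer)         # permission is removed after use
        print(f"accepted task {task} from {peer}")
        return True

sandbox_r2 = DynamicSandbox()
sandbox_r2.grant_once("R1")                          # step (2): provider authorizes R1
print(sandbox_r2.accept_submission("R1", "J2"))      # step (4): True, then revoked
print(sandbox_r2.accept_submission("R1", "J3"))      # False: permission no longer valid
```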
SECURED GRID COMPUTING IN INDUSTRY

GRID computing is developing considerably in research centres, and companies are now using this type of technique more and more. The multinationals find it difficult to face up to complex computing infrastructures which do
not react sufficiently quickly to the evolving expectations of their activities. Currently, the majority of professional applications are managed in a rigid way. In answer to this problem, certain companies have designed an adaptable infrastructure which shares and automatically manages the system resources. The users develop their applications on a GRID architecture inside or outside the company, but use machines located in other companies which are legally or financially dependent. For the moment, security techniques do not allow companies to widely use machines distributed all over the Internet.
Aim of GRID

The aim of GRID is:

• To obtain a more efficient use of resources inside the company or inside the group by reserving unused machines
• To distribute the application load by distributing the processing over idle or underloaded resources
• To optimize the hardware and software investments by having a global view and a policy at the company level and not only at the service level
• To concentrate computing power to carry out complex calculations with powerful modeling software
To reach this aim, it is necessary to solve problems of security, fault tolerance, scheduling of tasks, data transfer and communication time. The first two problems are the most fundamental. To solve the first, a certain number of solutions exist to allow authentication, integrity and confidentiality and to prevent replay. The second problem is also significant, because in a GRID structure, machines can break down, can be moved, or can be replaced. A solution then consists in regularly saving an image of the application so as to be able to restart it after a failure.
Examples and Domains Concerned

The main American public agencies, NASA, the NSF, and DARPA, support the Globus system, as do large software distributors like IBM, Sun, Cisco, and Microsoft. GRIDXpert (http://www.ud.com) targets four sectors: manufacturing, energy, biotechnology and finance. The network game domain can also be approached with Butterfly.net. Security in the GRID is directed towards the use of Web service technology. Microsoft's Passport project and Sun's Liberty Alliance were developed to solve security problems by using certificates. Research laboratories have joined the companies: INRIA with Microsoft and Alcatel. Large data-processing companies like IBM, Sun, Microsoft, Platform, and United Devices have taken a clear turn in the direction of the GRID. The GRID computing version of Oracle (http://www.oracle.com/technologies/grid/index.html) is a software architecture designed to pool together large amounts of low-cost modular storage and servers to create a virtual computing resource which can be transparently distributed. The resources can include storage, servers, database servers, application servers, and applications. Pooling resources together offers dependable, consistent, pervasive, and inexpensive access to these resources regardless of their location and of when they are needed. Oracle 10g is managed by Oracle Enterprise Manager 10g GRID Control, a Web-based management console that enables administrators to manage many application servers as though they were one, thereby automating administrative tasks and reducing administrative costs. Oracle 10g supports single sign-on (SSO), permitting users to authenticate only once in order to access servers. OracleAS 10g provides a single, unified, standards-based end-to-end security and identity management infrastructure based on Oracle
Internet Directory, OracleAS 10g Single Sign-On Server and OracleAS Certificate Authority. To provide a secure environment to run enterprise applications, OracleAS 10g provides a number of security enhancements including comprehensive Java2 security support; SSL support for all protocols (RMI, RMI-over-IIOP, SOAP, JMS, LDAP); a least privilege model for administrative privileges; and a comprehensive PKI-based security infrastructure. Platform Symphony (http://www.platform.com/) is enterprise GRID software for financial services. This system allows forward-thinking financial services firms to easily move to a true GRID environment where multiple users and applications share computing resources in virtual pools that dynamically adjust and scale based on the priorities and needs of the business. Platform Symphony is based on the scalable Platform Enterprise GRID Orchestrator (EGO), which sets the benchmark for GRID performance and reliability across heterogeneous enterprise environments. DFI (http://www.d-fi.fr) proposes a technology allowing users to optimize the computing power of all the machines of a company and to redistribute it to the applications which require it, according to their needs. This system relies on Oracle's GRID Control and Sun's System Manager.
On-Demand Computing

On-demand computing is an approach already in use. Companies have access to a powerful calculation resource and only pay large computer firms for the resources they actually use. The goal is to adapt computing power, storage and also the budget. For example, the Danish company Lego (http://www.pcexpert.fr) thinks that the use of the IBM "on demand" technique will enable it to reduce the management costs of its infrastructure by 30%. Indeed, it has one peak period at Christmas time when it needs very significant power over a short period. Why, then, be equipped all year long with a calculation capacity which will only be used for one month? Datasynapse's GRIDServer (http://www.datasynapse.com/solutions/gridserver.html) creates a flexible, virtual infrastructure that enables organizations to improve application performance and resiliency by automatically sharing and managing computing resources across the enterprise. GRIDServer is adaptive GRID infrastructure software designed to virtualize compute- and data-intensive applications. By creating a virtualized environment across both applications and resources, GRIDServer provides an "on demand" environment supplying real-time capacity for process-intensive business applications.
CONCLUSION

Computational GRIDs are becoming increasingly useful and powerful in the execution of large-scale and resource-intensive applications. Data transits through multiple networks, and its security can be put at risk. Network security is a hard-to-define paradigm in that its definition varies with the different organizations which implement it. Security is defined by the policies that implement the services offered to protect the data. These services are confidentiality, authentication, nonrepudiation, access control and integrity, and they serve to protect against or to prevent attacks. GRID computing has its specific security requirements due to the nature of its domain distribution. We are dealing with existing domains, and the issue of trust is very important. A certificate authority needs to identify and authenticate a legitimate GRID participant to other participants, without damaging the local entities' authority. Since GRID computing is a voluntary contribution and a trust-based relationship between different domains, it is important to establish a host-based authentication approach. Most of the time, participants speak
to each other and identify themselves before engaging in such a collaboration; it cannot be an anonymous relationship. Long confined to research laboratories, GRID computing is finally making its entrance into the business world. GRID computing has its specific security requirements due to the nature of its application domain. Large software publishers like Microsoft, Sun, IBM and HP are developing and offering GRID computing solutions. Thus far, this technique is still used inside the company; indeed, it is too difficult to ensure security for Internet deployment. Therefore, GRID computing is primarily used inside companies or, when necessary, between several companies by using VPNs.
REFERENCES

Damiani, E., De Capitani, S., Paraboschi, S., Samarati, P., & Violante, F. (2002). A reputation-based approach for choosing reliable resources in peer-to-peer networks. Ninth ACM Conference on Computer and Communications Security.

Del Fabbro, B., Laiymani, D., Nicod, J. M., & Philippe, L. (2004). A data persistency approach for the DIET metacomputing environment. International Conference on Internet Computing, IC'04, Las Vegas, NV (pp. 701-707).

Foster, I., & Kesselman, C. (1998). The Globus project: A status report. In Proceedings of the IPPS/SPDP '98 Heterogeneous Computing Workshop (pp. 4-18).

Foster, I., Kesselman, C., Tsudik, G., & Tuecke, S. (1998). A security architecture for computational grids. In Proceedings of the Fifth ACM Conference on Computer and Communications Security (pp. 83-92).
Franke, H., Hochschild, P., Pattnaik, P., & Snir, M. (1994). An efficient implementation of MPI. International Conference on Parallel Processing.

Hantz, F., & Guyennet, H. (2005). HiPoP: Highly distributed platform of computing. In Proceedings of the IEEE Joint International Conference on Autonomic and Autonomous Systems (ICAS'05) and International Conference on Networking and Services (ICNS'05), Tahiti, French Polynesia.

Hantz, F., & Guyennet, H. (2006). A P2P platform using sandboxing. WSHPCS (Workshop on Security and High Performance Computing Systems), in conjunction with the 20th European Conference on Modelling and Simulation (ECMS 2006), Bonn, Germany (pp. 736-739).

Le, V., & Guyennet, H. (2002). IPSec and DNSSEC to support GRID application security. Workshop on Security in the Second IEEE/ACM International Symposium on Cluster Computing and the GRID, CCGrid2002, Berlin, Germany (pp. 405-407).

Le, V., & Guyennet, H. (2003). A scalable security architecture for grid applications. GridSec, Second Workshop on Security and Network Architecture, Nancy, France (pp. 195-202).

Saxena, N., Tsudik, G., & Yi, J. H. (2003). Admission control in peer-to-peer: Design and performance evaluation. ACM Workshop on Security of Ad Hoc and Sensor Networks (SASN).

Sunderam, V. S. (1990). A framework for parallel distributed computing. Concurrency: Practice and Experience, 2(4), 315-339.

Oram, A. (2001). Peer-to-peer: Harnessing the power of disruptive technologies. O'Reilly.

Welch, V., Foster, I., Kesselman, C., Mulmo, O., Pearlman, L., Tuecke, S., et al. (2004). X.509 proxy certificates for dynamic delegation. Third Annual PKI R&D Workshop.
Chapter III
Security of Symbian Based Mobile Devices

Göran Pulkkis, Arcada Polytechnic, Finland
Kaj J. Grahn, Arcada Polytechnic, Finland
Jonny Karlsson, Arcada Polytechnic, Finland
Nhat Dai Tran, Arcada Polytechnic, Finland
ABSTRACT

Security issues of Symbian-based mobile computing devices such as PDAs and smart phones are surveyed. The evolution of Symbian OS architecture is outlined. Security threats and problems in mobile computing are analyzed. Theft/loss of the mobile device or removable memory cards exposes stored sensitive information. Wireless connection vulnerabilities are exploited for unauthorized access to mobile devices, to networks, and to network services. Malicious software attacks in the form of Trojan horses, viruses, and worms are also becoming more common. The Symbian OS is open for external software and content, which makes Symbian devices vulnerable to hostile applications. Embedded security features in Symbian OS are: a cryptographic software module, verification procedures for PKI-signed software installation files, and support for the communication security protocols IPSec and TLS. The newest version 9.3 of Symbian also embeds a platform security structure with layered trusted computing, protection capabilities for installed software, and data caging for integrity and confidentiality of private data. Fundamental security requirements of a Symbian based mobile device such as physical protection, device access control, storage protection, network access control, network service access control, and
network connection security are described in detail. Symbian security is also evaluated by discussing its weaknesses and by comparing it to other mobile operating systems. Current availability of add-on security software for Symbian based mobile devices is outlined in an appendix. In another appendix, measurement results on how add-on security software degrades network communication performance of a Symbian based mobile device are presented and analyzed as a case study.
INTRODUCTION

Users of the Internet have become increasingly mobile. At the same time, mobile users want to access Internet wireless services demanding the same quality as over a wire. Emerging new protocols and standards, and the availability of WLANs, cellular data and satellite systems are making the convergence of wired and wireless Internet possible. Lack of standards is however still the biggest obstacle to further development. Mobile devices are generally more resource constrained due to size, power and memory. The portability making these devices attractive greatly increases the risk of exposing data or allowing network penetration. Mobile handheld devices can be connected to a number of different kinds of networks. Such wireless networks are cellular networks, personal area networks (PANs), local area networks (LANs), metropolitan area networks (MANs) and wide area networks (satellite-based WANs). Network services needed for transferring data to and from a mobile device include, among others, e-commerce, electronic payments, WAP and HTTP services. The network connection of a mobile device can be based on a dial-up connection through a cellular network (GSM, UMTS), be based on packet communication through a cellular network (GPRS), be a WLAN or a Bluetooth connection, or be an infrared link (IrDA). Network connection examples are e-mailing (pop3, pop3s, imap, imaps, smtp, smtps), web browsing (http, https), synchronization with a desktop computer (HotSync, ActiveSync, SyncML), network
monitoring/management (snmp), reception of video/audio streams, and communication of any installed application. Realization of data services over mobile devices offers interesting new features for the user, but also a threat to security. A mobile device optimized for data services requires that the terminal becomes an open platform for software applications, i.e. the mobile device becomes more vulnerable to attacks. Mobile computing also requires operating systems supporting mobile environments. Such a widely used operating system is Symbian OS. Symbian is a common operating system for mobile communication devices. The most important requirements are multitasking/threading, real-time operation of the cellular software, effective power management, small size of the operation system itself, ease of developing new features, reusability, modularity, connectivity and robustness (DIGIA Inc., 2003). The world’s top mobile phone manufacturers with the largest market share have chosen Symbian. According to many analysts, the major part of operation systems for mobile communication devices of the future will rely on Symbian or on Windows. In this chapter, security issues of Symbian based mobile devices are surveyed.
BACKGROUND Mobile computing device types are pocket PC, also called personal digital assistant (PDA), and smart phone. Symbian is the leading operating
system for smart phones currently available on the market. Symbian was founded as a private independent company in June 1998 by Ericsson, Matsushita, Motorola, Nokia and Psion. Currently, Symbian is owned by BenQ, Ericsson, Panasonic, Nokia, Siemens AG, and Sony Ericsson. There are both open and closed platforms based on Symbian OS. Examples of open platforms are the Nokia platforms UIQ, Series 60, Series 80, and Series 90 and examples of closed platforms are the platforms developed for NTT DoCoMo’s FOMA handsets. The most recent version of Symbian OS is Symbian OS v9.3. (Symbian OS Version 9.3, 2006) During recent years, security has become a very important issue when Symbian OS platforms and applications are developed and designed. The security threats related to data stored in the devices, network communication, and software installation have increased in parallel with the evolution of the Symbian device platforms and the increasing use of Symbian devices. Symbian devices are becoming more commonly used also by corporate employees for storing confidential data. Such data are easily physically accessed if the device is lost or stolen. Confidential data sent to and from Symbian devices over various wireless network connections can be captured “from the air” by intruders. Malware attacks are also an increasing threat against Symbian devices. The first Symbian worm, Cabir, was detected in 2004. Today, there are already several known Symbian viruses and malware threatening smart phone users. Security solutions for Symbian devices are currently under ongoing development. The Symbian OS provide embedded security features i.e. underlying support for secure communications protocols, such as TLS/SSL, and authentication of installable software using digital certificates. Security solutions are also developed by several third party companies. Such solutions include anti-virus software, personal firewalls, memory card encryption, and access control systems.
SECURITY THREATS AND PROBLEMS FOR MOBILE DEVICES

Today's mobile devices offer many benefits to enterprises: access to e-mail/Internet, to customer information, and to vital corporate data. These benefits are however associated with risks such as loss/theft of the device, malicious software, unauthorized access to data or device, hacking, cracking, wireless exploits, etc. (de Haas, 2005). Modern mobile devices, such as smart phones and PDA computers, are small, portable and thus easily lost and stolen. In addition they have connection interfaces to several types of wireless networks such as general packet radio service (GPRS), wireless local area network (WLAN), infrared data association (IrDA), and Bluetooth. Unfortunately only few such devices are presently equipped with firewalls or anti-virus software. Moreover, many mobile devices lack credible physical and electronic access control. These features make mobile computing devices targets of security attacks such as (Olzak, 2005):

• Theft/loss of the device and removable memory cards
• Malicious code
• Exploit of wireless connection vulnerabilities
The most serious security threats with mobile devices are unauthorized access to data and credentials stored in the memory of the device.
Theft/Loss of Information/Device

Obviously, by design modern mobile computers and other types of portable devices have a higher risk of being stolen than a nonportable device. Many users are carrying around confidential corporate or client data on mobile devices without any protection. Often such devices cause security risks if stolen. Often a bigger loss comes from the
loss of the data than the loss of the device itself when the device is stolen. Most platforms for mobile devices only offer simple software-based login schemes. Such schemes can however easily be bypassed by reading the information from the device without login. Accordingly, critical and confidential unencrypted data stored in the device memory is an easy target for an attacker who has physical access to the device. Encryption and authentication are therefore strongly recommended solutions in order to avoid loss of data confidentiality, if a mobile device is lost or stolen. (Symantec, 2005; Hickey, 2005)
Malicious Software

Malware has constituted a growing threat for mobile devices since the first Symbian worm (Cabir) was detected in 2004. Malware is still not a serious threat, but the continuously increasing number of mobile device users worldwide is changing the situation. In the near future the threat might become similar to the problems encountered in the PC world today. Most likely the development of malware will make especially companies face completely new kinds of attacks such as Trojan horses in games, screensavers and other applications, which attempt to make false billing, or to delete and transfer data. Malicious software does not only cause serious threats for the mobile device itself; it may also cause a threat for the network which the mobile device is connected to (F-Secure, 2004; Hicks, 2005). Viruses are easily spread to an internal computer network and there are several methods by which a mobile device can be infected. Malware can be received manually via MMS, Bluetooth, infrared or WLAN, or by downloading and installing from the Web. Current malware is primarily focused on Symbian OS and Windows based devices. Malware may result in (Olzak, 2005):
• Loss of productivity
• Exploitation of software vulnerabilities to gain access to resources and data
• Destruction of information stored on a SIM card
• Hi-jacking of air time resulting in increased costs
Wireless Connection Vulnerabilities

Handheld devices are often connected to the Internet through wireless networks such as cellular mobile networks (GSM, GPRS, and UMTS), WLANs, and Bluetooth networks. These networks are based on open-air connections and are thus by their nature easy to access. Furthermore, confidential data transmitted over an unprotected wireless network can easily be captured by an eavesdropper. Transmitting data over wireless networks opens doorways for hackers and outsiders and causes a remarkable security risk. Data transmitted over the air can be easily exploited if the networks are unprotected. Despite the mentioned security risks, many Bluetooth networks and especially WLANs are still unprotected even today. WLANs have earlier been associated with serious security vulnerabilities because of the lack of user authentication methods. However, today WLANs fulfill secure user authentication requirements when solutions based on the recently ratified security standard 802.11i are implemented. For any wireless connectivity, the most effective way to ensure end-to-end security is to set up a Virtual Private Network (VPN) channel. The data channel is then encrypted. In addition, to avoid security risks, users should disable all wireless network connections, including Bluetooth, infrared and WLAN, whenever these connections aren't needed (Taylor, 2004; Ye & Cheang, 2005).
SYMBIAN OS ARCHITECTURE

The Symbian OS is implemented and used in several different user interface platforms, both open and closed. Notable is that "open" doesn't in this case mean that the Symbian OS source code is publicly available. It rather means that the APIs are publicly documented and anyone is able to develop software for Symbian OS devices. The latest version of Symbian is Symbian OS v9.3. The architecture, visualized in Figure 1, has five layers (Siezen, 2005):

• User interface (UI) framework
• Application services
• OS services
• Base services
• Kernel services and hardware interface

Figure 1. Symbian OS architecture overview
UI Framework

The user interface (UI) framework consists of the UI application framework subsystem and internationalization support. The main objective of the graphical user interface (GUI) framework is to minimize UI designer constraints by defining as little policy as possible. This makes porting of application user interfaces between different Symbian phones easier. Internationalization support provides, for instance, operating system compatibility for various input languages.
Application Services

The application services provide application engines for the central mobile phone applications with the purpose to ensure compatibility between different Symbian devices. Application services include:

• Personal information management (PIM) services: Applications such as agenda, to-do, and contacts
• Messaging services: Short message service (SMS), enhanced message service (EMS), and e-mail (including support for both POP3 and IMAP4 protocols)
• Content management services
• Internet and web application support: HTTP transport framework and WAP stack
• Data synchronization services (OMA): Providing the OMA (SyncML) data synchronization client
• Provisioning services (OMA): Enables the network operator to deliver settings to the mobile device using a technique based on the Nokia Ericsson over-the-air (OTA)
specification and the Nokia smart messaging specification.
Java

The Symbian OS provides a Java application execution environment which is optimized for mobile devices and mobile applications. This provides compatibility with mobile device Java applications and advanced Java applications able to make use of capabilities of a Symbian device. Symbian OS versions 9.1 and later support J2ME MIDP 2.0 and CLDC 1.1.
OS Services

The OS services level is the heart of the Symbian OS. These services provide important OS infrastructure components such as the multimedia and graphics subsystems, networking, telephony, short link protocols, security services, and the PC connectivity infrastructure.

Multimedia and Graphics Services

Multimedia and graphics services consist of multimedia, OpenGL ES, and the graphics subsystem. Multimedia services include the multimedia framework (MMF), media support library (MSL), image conversion library (ICL), and camera support. OpenGL for Embedded Systems (OpenGL ES) is a subset of the OpenGL 3D graphics API specially designed for embedded devices such as mobile phones. The graphics subsystem implements the graphics device interface (GDI) and provides, for instance, shared access for Symbian OS applications to components such as the screen, keyboard and pointing device input, bitmap fonts, and scalable fonts.

Security Services

Symbian OS v9 provides extended platform security in the form of capability control of installed applications. This ensures the integrity of the Symbian devices and the network, and still enables an open environment for third party applications. Other embedded security features are data confidentiality, integrity, and authentication, realized by providing underlying support for secure communication protocols such as TLS/SSL and IPSec. The Symbian OS security services also support authentication of installable software using digital signatures. Embedded security features in Symbian OS are surveyed in more detail in a later section.

Comms Services

Networking, telephony and short link protocols are actually subsystems of the "Comms Services" of the Symbian OS services. The purpose of the Comms services is to provide key frameworks and system services for communication and networking. The networking services contain the key frameworks and system services for wide area communication. Various communication protocols can be implemented through a socket interface. Both IPv4 and IPv6 are supported using a dual IP stack. A plug-in architecture is provided for the IP stack, allowing licensees to implement extensions such as IPSec for secure network communication. The telephony subsystem provides a multimode API for the clients. An abstraction layer for cellular networks is provided, including support for GSM, GPRS, EDGE, CDMA (IS-95), 3GPP2, CDMA2000 1xRTT, and 3GPP W-CDMA. The Symbian OS services provide support for point-to-point communications through the short link services. Supported short link technologies include Bluetooth, serial, USB, and infrared (IrDA).
Connectivity Services

The connectivity services implement the connection manager and the connectivity framework. The connection manager handles connections between a PC and a Symbian OS device, including both PC side and mobile device side components. Standard TCP/IP protocols are used for data transfer. The connectivity framework implements the PC Connectivity toolkit. Features of this toolkit are:

• PC and mobile device synchronization
• Software install from PC
• Backup and restore
• Remote file management
Base Services

The base system services provide the programming framework for all other components than the above mentioned. The main elements visible for the user are the file system and the common user libraries.

Kernel Services and Hardware Abstraction Interface

The main functionality of the kernel services and the abstraction interface is to ensure Symbian OS robustness, performance, and efficient power management. These are all essential in a mobile phone. The kernel services and hardware abstraction interface also include logical device drivers. The kernel is the core of the system and performs, for instance, memory allocation and power management, owns device drivers, and implements the scheduling policy. The logical device drivers provide drivers and/or software controllers for devices such as the DTE serial port, DCE serial port, USB client 1.1, keyboard, Ethernet, etc. For more details about the architecture of Symbian OS v9.1, see (Siezen, 2005).

EMBEDDED SECURITY FEATURES IN SYMBIAN

Original embedded security features in Symbian OS are:

• A cryptographic module with:
  ° implementations of symmetric algorithms (DES, 3DES, RC2, RC4, and RC5) and asymmetric cryptographic algorithms (RSA, DSA, and DH)
  ° implementations of hash functions (MD5, SHA1, and HMAC)
  ° a pseudo-random number generator for cryptographic key generation
• A certificate management module
• Password locking of the contents of multimedia cards (MMCs) and other removable memory cards
• Installation packet signing, see (Symbian Signed, 2006).

IPSec and VPN support were added in Symbian OS v6.0. SSL/TLS support and a content security feature, a digital rights management (DRM) API, were introduced in Symbian OS v6.0. Platform security features were embedded in Symbian OS v9.1:

• To control access to sensitive operations and to sensitive APIs.
• To provide confidentiality of private data in a Symbian device.
• To protect the hardware and software integrity of a Symbian device. (Symbian OS, 2006)
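To give an idea of what the embedded TLS/SSL support just mentioned provides to an application, the following hedged sketch opens an authenticated, encrypted channel. It is written in desktop Python rather than Symbian C++, and the host name is a placeholder; it only illustrates the kind of protection the protocol layer supplies.

```python
# Hedged illustration of a TLS client connection; not Symbian code.
import socket
import ssl

context = ssl.create_default_context()            # loads trusted CA certificates
with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        print(tls_sock.version())                  # negotiated protocol version
        print(tls_sock.getpeercert()["subject"])   # server certificate has been verified
```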
Installation Packet Signing

Developers of Symbian applications should follow the Symbian Signed procedure described in (Symbian Signed, 2006):
• An application developer requests and gets from VeriSign a publisher ID on an X.509 certificate for a signature key pair.
• The developer creates a SIS file and signs it with the private key of the certified key pair.
• The developer sends the application to a Symbian Test House.
• If the test criteria are met, the VeriSign certified signature is removed and replaced by a signature created with the Symbian Root certified private key.
• The Symbian Signed SIS file is returned to the developer for distribution to Symbian device users.
The Symbian Software Installer:

• Stores the Symbian Root Certificate on the Symbian device if it isn't already stored.
• Tries to verify the SIS file signature with the public key on the Symbian Root Certificate.
• Installs the SIS file only if the signature is verified.
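The verification step can be sketched in Python with the third-party cryptography package. The function below is a hedged illustration only: the real SIS container format and Symbian's exact signature algorithms are not reproduced here, and the choice of RSA with PKCS#1 v1.5 and SHA-1 is an assumption for the example.

```python
# Hedged sketch: verify a package's signature against the public key taken from a
# root certificate, refusing installation if verification fails.
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.exceptions import InvalidSignature

def sis_signature_ok(sis_bytes: bytes, signature: bytes, root_cert_pem: bytes) -> bool:
    root_cert = x509.load_pem_x509_certificate(root_cert_pem)
    public_key = root_cert.public_key()          # assumed to be an RSA key in this sketch
    try:
        public_key.verify(signature, sis_bytes, padding.PKCS1v15(), hashes.SHA1())
        return True                              # signature verified: install may proceed
    except InvalidSignature:
        return False                             # installer refuses the package
```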
Certificate Management

The Certificate Management module:

• Stores WTLS certificates and X.509 certificates. Certificates are used for authentication of application developers, web servers, and Symbian device users.
• Verifies trust in stored certificates.
• Checks certificate revocation using the online certificate status protocol (OCSP).

Certificate management is implemented by methods of the CCertStore class (Symbian OS, 2006).
Platform Security

Platform security is based on the following concepts (EMCC Software, 2005; Shackman, 2005; Heath, 2006):

• Unit of trust: The kernel, file system and the software installer are part of a trusted computing base (TCB) and have unrestricted access to the device's resources. The TCB is responsible for maintaining the integrity of the device. Other system components surrounding the TCB comprise a trusted computing environment (TCE).
• Capability model: A capability is an entity of protection. Functionality in Symbian OS is implemented by a set of application programming interfaces. An API needing protection is associated with a capability. Applications must be authorized by the Symbian Software Installer to access the capabilities they wish to use. Only authorized applications are trusted to use capability protected APIs.
• Data caging: Data caging is a filing system facility for protection of private data.
Permissions

Authorization of an application can give the application either 'Blanket' permission or 'Single shot' permission to a Symbian API. 'Blanket' permission grants a capability until the application is uninstalled or re-installed. 'Single shot' permission requires end-user permission each time the application is started. 'Single shot' permission can be given to all unsigned applications and to some Symbian Signed applications.
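The two permission modes can be illustrated with a short sketch. The class names and the prompt callback below are invented for illustration; on a real device this logic lives in the software installer and the operating system, not in application code.

```python
# Hedged sketch of 'Blanket' versus 'Single shot' permissions.
BLANKET, SINGLE_SHOT = "blanket", "single_shot"

class CapabilityGrant:
    def __init__(self, capability: str, mode: str):
        self.capability, self.mode = capability, mode
        self._granted_until_reinstall = (mode == BLANKET)

    def allowed(self, ask_user) -> bool:
        if self._granted_until_reinstall:
            return True                       # 'Blanket': granted once at install time
        return ask_user(self.capability)      # 'Single shot': ask on every application start

grant = CapabilityGrant("NetworkServices", SINGLE_SHOT)
print(grant.allowed(lambda cap: True))        # the user accepts this particular launch
```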
Capabilities

Capabilities requested by an application are listed in its project definition file (MMP). Capabilities are grouped in three sets for authorization by the Symbian Software Installer:

• "Unsigned-Sandboxed" set, consisting of the capabilities
  ° Nonclassified APIs
  ° 'LocalServices' and 'UserEnvironment' with 'Blanket' permission
  ° 'NetworkServices', 'ReadUserData', and 'WriteUserData' with 'Single shot' permission
• "Basic" set, consisting of the capabilities 'NetworkServices', 'ReadUserData', and 'WriteUserData' with 'Blanket' permission
• "Extended" set, consisting of the capabilities 'NetworkControl', 'PowerMgmt', 'TrustedUI', 'SwEvent', 'ProtServ', 'MultimediaDD', 'ReadDeviceData', 'WriteDeviceData', 'DRM' and 'SurroundingsDD'

Applications in the "Basic" and "Extended" sets are all Symbian Signed. The most powerful capabilities are however not included in the "Extended" set:

• AllFiles: Granting read access to the entire file system and write access to the private directories of other processes.
• CommDD: Granting access to communication device drivers.
• DiskAdmin: Granting access to specific disk administration operations.
• TCB: Granting unrestricted access to all hardware and software, including write access to executables and shared read-only resources.
A Symbian process will always get the capabilities of the executable file. Capabilities cannot change during execution. A library module can be loaded dynamically only if it has equal or more capabilities than the calling process.
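The dynamic loading rule just stated, that a library must hold at least the capabilities of the calling process, amounts to a simple set-inclusion check. The following hedged sketch is illustrative only; the capability names follow the text, but the code is not Symbian loader code.

```python
# Hedged sketch of the DLL loading rule: the library's capability set must be a
# superset of the loading process's capability set.
PROCESS_CAPS = {"NetworkServices", "ReadUserData"}
DLL_CAPS     = {"NetworkServices", "ReadUserData", "WriteUserData"}

def can_load(process_caps: set, dll_caps: set) -> bool:
    return process_caps <= dll_caps   # DLL must have equal or more capabilities

print(can_load(PROCESS_CAPS, DLL_CAPS))                       # True
print(can_load(PROCESS_CAPS | {"NetworkControl"}, DLL_CAPS))  # False: DLL lacks NetworkControl
```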
Data Caging

Data caging implements a protected directory structure in Symbian OS:

• System critical files and executable files are stored in \Sys, which can be modified only by the Symbian OS Kernel, File Server, and Software Installer. Executable files are stored in \Sys\bin, which is the only place from which C++ programmed software can run. A locally unique security identifier (SID) must be contained in every executable file.
• Read-only resource files shared by all applications are stored in \Resource, which can be modified only by the Symbian OS Software Installer.
• Private data for all installed programs is stored in \Private by the Symbian OS Software Installer. Only the process running the executable file with SID=<SID> has access to its private data in the subdirectory \Private\<SID>\.

Other directories than \Sys, \Resource, and \Private are not protected by data caging.

Other Platform Security Features

Security options for client/server communication are available. Every Symbian OS server process can define and check which capabilities, which SID, and which VID (Vendor Identifier) are required from the calling client process. The calling client process can check the name of the server process. A new secure backup and restore functionality is implemented. File capabilities are also backed up and restored. Private data files are backed up and restored in cooperation with the owning process. The data-sharing mechanism in earlier Symbian OS versions has been replaced by a Central Repository for secure storage of structured data. The Central Repository is implemented as a Symbian OS server process, which manages the data storage.
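Returning to the data caging rules in the subsection above, the following hedged sketch expresses them as a path check. It is pure illustration: real enforcement happens inside the OS, and the SID values and the single "TCB" flag standing in for the kernel, file server, and installer are simplifying assumptions.

```python
# Hedged sketch of data caging path rules; not Symbian code.
import ntpath

def may_write(path: str, caller_sid: str, caller_is_tcb: bool) -> bool:
    parts = ntpath.normpath(path).lstrip("\\").split("\\")
    top = parts[0].lower() if parts else ""
    if top in ("sys", "resource"):
        return caller_is_tcb                                   # only the trusted computing base
    if top == "private":
        return len(parts) > 1 and parts[1] == caller_sid       # own \Private\<SID>\ only
    return True                                                # other directories are not caged

print(may_write(r"\Private\10011223\data.db", "10011223", False))  # True
print(may_write(r"\Private\20004455\data.db", "10011223", False))  # False: foreign SID
print(may_write(r"\Sys\bin\app.exe", "10011223", False))           # False: TCB only
```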
PHYSICAL PROTECTION

Physical security involves safekeeping systems from theft, physical and electromagnetic damage, and preventing unauthorized access to those systems (Rittinghouse & Ransome, 2004). Today, device theft is more attractive to thieves as mobile devices become smaller and more powerful (Grami & Schell, 2005). When a stolen device is reported, location technology can be employed to help track down the thief. All employees should be held responsible for taking every reasonable precaution to ensure the physical security of their mobile devices from theft, abuse, avoidable hazards, or unauthorized use. Protection of stored content against power failures and other functional failures, and possibilities to recover stored content after damage, after a functional failure or after an intrusion attack that was not prevented, are highly important security measures. Shielding the mobile device from unwanted wireless communication, protection of stored content in case of theft or other loss of the mobile device, and visible ownership information for the return of a lost or stolen mobile device are other essential security measures. When a mobile device is misplaced or stolen, it can be used to purchase items, enabling thieves to easily commit fraud. There are no safeguards against theft of electronic cash on such devices (Hong, 2005). In the near future, when many mobile devices are
used in home automation, for instance to remotely lock/unlock doors, insufficient physical protection of mobile devices is also a threat to the owner’s home security.
DEVICE ACCESS CONTROL

It is highly important to implement reliable access control mechanisms for Symbian devices in order to protect the data stored in memory, since physical access control mechanisms are ineffective due to the small size and easy portability of such devices. There are currently no widely adopted standards for access control in Symbian based mobile devices. Access control services are mostly provided by third party companies, see the section 'Add-on Security Software' for examples. Access control on a mobile device can be implemented using a combination of the following security services and features (Perelson & Botha, 2004):

• Authentication service
• Confidentiality service
• Nonrepudiation service
• Authorization

The principle of access control in a mobile device is shown in Figure 2.

Figure 2. The principle of access control in mobile devices
Authentication

An authentication service is a system confirming that a user, trying to access the mobile device, is the owner of or is permitted access to the device. There are several methods by which a user can authenticate to a handheld device. For Symbian devices at least the following authentication methods are available:

• Passwords/PINs
• Visual login
• Biometrics
Visual login and biometrics are, however, only available as add-on security hardware and software, see Appendix 1.
PIN and Password Authentication

PIN and password authentication means protection of the device's system using a numeric (PIN) or alphabetic (password) combination of digits/characters which is to be entered by the user in order to access the system. The PIN is typically four digits long and is entered by the user from a ten-digit (0-9) numerical keypad. However, PINs are susceptible to shoulder surfing and to systematic trial-and-error attacks due to their limited length and alphabet. Passwords are more secure than PINs since they support a larger alphabet and increase the number of digits in the password string (Jansen, 2003). In Symbian based mobile phones the user is normally by default authenticated to the SIM and no password/PIN protection is activated for the device itself. This means that the whole system of the device can be accessed by removing the SIM and starting the device in "offline mode". Most mobile phones, however, provide a system lock function. This function locks the system if the SIM is removed or changed. The lock code typically consists of more digits than standard PINs; for example, Nokia Series 60 phones use a five-digit numerical lock code.
Visual Login

An example of a visual login method is picture passwords. A picture password system can be designed to require a sequence of pictures or objects matching a certain criteria and not exactly the same pictures. For example, the user must find a certain number of objects with four sides. Shoulder surfing of a picture password is much more difficult than shoulder surfing passwords or PINs (Duncan, 2004).
Biometrics

Biometric user authentication is based on a technology which measures and analyzes physical or behavioral characteristics of a human. Examples of physical characteristics utilized for user authentication to Symbian devices include fingerprint, voice, and face (Biometrics, 2006; Yoshihisa, 2005). Biometric user authentication based on behavioral characteristics can for example be a system analyzing the movement of the person carrying the device (Karkimo, 2005). Biometric user authentication systems are becoming more and more common in Symbian devices and such systems are currently provided by several third party companies.
Authorization

Hitherto, user authorization has generally not been considered important for Symbian based mobile devices. These devices are typically personal, and the authentication process infers that the user is authorized. It has also been assumed that all data stored on a device is owned by only one person, the device owner. It is, however, becoming more common that handheld devices replace desktop and notebook computers in companies. This means that a single device, owned by the company, may be used by several employees and may contain confidential company information. Thus, the need for proper user authorization services is becoming more important. Needed user authorization features for mobile devices include (Perelson & Botha, 2004):

• File masking: Certain protected records are prevented from being viewed by unauthorized users.
• Access control lists: Such a list defines permissions for a set of particular objects associated with a user.
• Role based access control: Permissions are defined in association with user roles.
STORAGE PROTECTION

Storage protection of a mobile device means:

• Online integrity control of all stored program code and all stored data
• Optional confidentiality of stored user data
• Protection against unauthorized tampering of stored content

Protection should include all removable storage modules used by the mobile device. The integrity of:

• The operating system code
• The program code of installed applications
• System and user data

can be verified when being used by traditional tools like checksums, cyclic redundancy codes (CRC), hashes, message authentication codes (MAC, HMAC), and cryptographic signatures. Only hardware-based security solutions for protection of the verification keys needed by MACs, HMACs and signatures provide strong protection against tampering attacks, since a checksum, a CRC, and a hash of a tampered file can easily be updated by an attacker. Online integrity control of program and data files must be combined with online integrity control of the configuration of a mobile device. This is needed to give sufficient protection against attempts to enter malicious software like viruses, worms and Trojans. Malicious software can be stored in the file system of a tampered configuration. Confidentiality required for user data can be granted by file encryption software. This software also protects the integrity of the stored encrypted files, since successful decryption of an encrypted file is also an integrity proof.

NETWORK ACCESS CONTROL

Symbian devices support various wireless network connections. Typical networks are cellular networks such as 2G, 2.5G, and 3G, wireless local area networks (WLANs), and local connectivity networks such as Bluetooth and IrDA.
Identification Hardware Identification hardware contains user information and cryptographic keys used to authenticate users to mobile devices, applications, networks, and network services. Common identification hardware used in Symbian devices include: • • •
Subscriber identity module (SIM) Public key infrastructure SIM (PKI SIM) Universal SIM (USIM)
SIM A basic SIM card is a smartcard securely storing an authentication key identifying a GSM network user. The SIM card is technically a microcomputer, consisting of a CPU, ROM, RAM, EEPROM and
Security of Symbian Based Mobile Devices
Input/Output (I/O) circuits. This microcomputer is able to perform operations based on information stored inside it, such as performing cryptographic calculations with the individual authentication key needed for authenticating the subscriber. The SIM card also contains storage space for i.e. Short message services (SMS) messages, multimedia messaging system (MMS) messages, and a phone book. The use and content of a SIM card is protected by PIN codes (Rankl & Effing, 2003).
USIM
PKI SIM
Cellular Networks
A PKI SIM card is a basic SIM with PKI functionality. A RSA coprocessor is added which performs public key based encryption and signing with private keys. The PKI SIM card contains space for storing private keys and certified public keys needed for digital signatures and encryption (Setec, 2006).
2G and 2.5G
A USIM card is a SIM used in 3G mobile telephony networks, such as UMTS. The physical size of a USIM card is the same as a basic 2G GSM SIM card, but USIM is based on a different type of hardware. USIM is actually an application running on a UICC (Universal Integrated Circuit Card). The USIM stores a pre-shared secret key as the basic SIM (Lu, 2002).
User authentication in 2G (Second Generation, GSM) and 2.5G (“Second and a half” generation, GPRS) networks is handled by a challenge/re-
Figure 3. GSM authentication and key agreement
43
Security of Symbian Based Mobile Devices
sponse based protocol. Every mobile station (MS) shares a secret key Ki with its home network. This key is stored in the SIM card of the MS and the authentication centre (AuC) of the home GSM network. Ki is used to authenticate the MS to the visited GSM network and for generating session keys needed for encrypting the mobile communication. The authentication process, shown in Figure 3, is started by the mobile switching centre (MSC) which requests an authentication vector from the AuC of the home network of the MS. The authentication vector, generated by the AuC, consists of a challenge/response pair (RAND, RES) and an encryption key Kc. The MSC of the visited network sends the 128-bit RAND to the MS. Upon receiving the RAND,
Figure 4. UMTS authentication and key agreement
44
the MS computes a 32-bit response (RES) and an encryption key Kc using the received RAND and the Ki stored in the SIM. The calculation is processed within the SIM. The MS sends the RES back to the MSC. The MSC verifies the identity of the MS by comparing the received RES from the MS with the received RES from the AuC. If they match, authentication is successful and the MSC sends the encryption key Kc to the base station serving the MS. Then the MS is granted access to the GSM network service and the communication between the MS and the base station is encrypted using Kc (Meyer & Wetzel, 2004). 2G and 2.5G networks provide reasonably secure access control mechanisms. However, lack of mutual authentication is a considerable
Security of Symbian Based Mobile Devices
vulnerability. An attacker could setup a false base station and imitate a legitimate GSM network. As a result, i.e. the Ki could be cracked and the attacker could impersonate a legitimate user. (GSM, 2006)
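The 2G challenge/response flow described above can be sketched in Python. The real A3/A8 algorithms (for example COMP128) are operator specific and are not reproduced here; an HMAC-based stand-in is used purely to show how RAND, Ki, RES and Kc relate to each other.

```python
# Hedged illustration of the GSM-style challenge/response data flow; the stand-in
# function below is NOT the real A3/A8 algorithm.
import hmac, hashlib, os

def a3_a8_stand_in(ki: bytes, rand: bytes):
    digest = hmac.new(ki, rand, hashlib.sha256).digest()
    return digest[:4], digest[4:12]            # (RES: 32 bits, Kc: 64 bits)

ki = os.urandom(16)                            # shared secret stored in the SIM and the AuC
rand = os.urandom(16)                          # 128-bit challenge sent by the MSC

res_sim, kc_sim = a3_a8_stand_in(ki, rand)     # computed inside the SIM
res_auc, kc_auc = a3_a8_stand_in(ki, rand)     # computed by the AuC for the authentication vector

# The MSC compares the two responses; on a match the MS is authenticated and Kc
# is handed to the base station for encrypting the radio link.
print(hmac.compare_digest(res_sim, res_auc), kc_sim == kc_auc)
```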
Local Connectivity Networks IrDA Symbian includes three different APIs for IrDA (Infrared Data Association) connections:
3G • The authentication and key management technique used in 3G (third generation/UMTS) networks is based on the same principles as in GSM networks, see Figure 4. A secret authentication key is shared between the network and the MS. This key is stored on the USIM of the MS and in the AuC of the home network. Unlike in GSM networks, UMTS networks provide mutual authentication. This means that not only the MS is authenticated to the GSM network but the GSM network is also authenticated to the MS. This protects the MS from attackers trying to impersonate a valid network to the MS. Network authentication is provided by a so called authentication token AUTN. The MSC (Mobile Switching Centre) of the visited network sends the AUTN together with the authentication challenge to the MS during the authentication process. Upon receiving the AUTN, containing a sequence number, the MS checks whether it is in the right range. If the sequence number is in the right range the MS has successfully authenticated the network and the authentication process can proceed. The MS computes an authentication response, here called RES, and encryption and integrity protection keys, called CK and IK, and send these back to the MSC. The MSC verifies the identity of the MS by checking the correctness of the received RES. Upon successful authentication, the MSC sends the encryption key CK and integrity key IK to the UMTS base station. The MS is now able to communicate with the UMTS network and the communication between the MS and the base station is encrypted with CK and the integrity is protected with IK. (Meyer & Wetsel, 2004).
Local Connectivity Networks

IrDA

Symbian includes three different APIs for IrDA (Infrared Data Association) connections:

• IrDA Sockets for socket based communication
• IrDA Serial for serial communication
• IrTranP for communication with digital cameras and printers
The IrDA standard doesn’t specify any access control or other security features. However, since infrared connections work with the line-of-sight principle, access is easily controlled by physical security measures. (Symbian OS, 2006)
Bluetooth

Bluetooth is a technique providing a wireless medium for transmitting data and voice signals between electronic devices over a short distance. The specification is defined by the Bluetooth SIG (Special Interest Group). The SIG includes a Bluetooth Security Experts Group, which is responsible for security issues. The security is based on three different services: authentication, authorization, and encryption. Bluetooth devices can be set in one of three different security modes:

• Security mode 1: No security measures
• Security mode 2: Security measures based on authorization
• Security mode 3: Authentication and encryption
Bluetooth performs device authentication (not user authentication) based on a challenge/response process which can be either unidirectional or mutual. The devices are authenticated using secret keys called link keys. These keys are generated either dynamically or through a process called
pairing. When dynamic generation of the link key is used, the user is required to enter a passkey each time a connection is established. The same passkey must be entered in both connecting devices. When pairing is used, a long-term, stored link key is generated from a user-entered passkey and can be used automatically for several connection sessions between the same devices. Bluetooth access control also provides an authorization service. The authorization service allows a Bluetooth device to determine whether or not another device is allowed access to a particular service. Authorization includes two security concepts: trust relationships and service security levels. Three different levels of trust between devices are allowed by the Bluetooth specification: trusted, not trusted, and unknown. By using combinations of authentication and authorization, Bluetooth provides three service levels, as shown in Table 1. A major weakness in Bluetooth access control is the lack of support for user authentication. This means that a malicious user can easily access network resources and services with a stolen device. Furthermore, PIN codes are often allowed to be short, which makes them susceptible to attacks. However, the coverage range of a Bluetooth network is very short. This means that malicious access to a Bluetooth network can mostly be prevented by the use of physical access control measures. For more detailed information about access control in Bluetooth networks, see the official Bluetooth wireless info site (Bluetooth, 2006).
Table 1. Bluetooth service levels
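The sketch below illustrates how such per-service requirements can be combined into an access decision. Because the contents of Table 1 are not reproduced here, the exact mapping of the three service levels is an assumption made for illustration only.

```python
from dataclasses import dataclass

@dataclass
class ServicePolicy:
    """Per-service security requirements (cf. Table 1); values are illustrative."""
    authentication: bool
    authorization: bool

# Assumed mapping of the three service levels described in the text
SERVICE_LEVELS = {
    1: ServicePolicy(authentication=True,  authorization=True),   # trusted devices only
    2: ServicePolicy(authentication=True,  authorization=False),  # any authenticated device
    3: ServicePolicy(authentication=False, authorization=False),  # open to all devices
}

def may_access(level: int, device_authenticated: bool, device_trusted: bool) -> bool:
    policy = SERVICE_LEVELS[level]
    if policy.authentication and not device_authenticated:
        return False
    if policy.authorization and not device_trusted:
        return False
    return True

print(may_access(1, device_authenticated=True, device_trusted=False))  # False
print(may_access(2, device_authenticated=True, device_trusted=False))  # True
```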
WLAN

WLANs provide wireless high-speed Internet connections and are supported by some Symbian smart phones. Implementation and use of secure access control mechanisms is essential in order to protect WLANs from unauthorized network access, since WLANs are by their nature easy to access and cannot be protected by physical security measures. WLANs were earlier associated with serious security vulnerabilities. One of the most significant concerns has been the lack of proper user authentication methods. Today, WLANs provide acceptable security through the recently ratified security standard 802.11i.
Access Control Mechanisms Defined in the 802.11 Standard

The authentication mechanisms defined in the original WLAN standard 802.11 are weak and not recommended. The 802.11 standard only provides device authentication in the form of static shared secret keys called wired equivalent privacy (WEP) keys. The same WEP key is shared between the WLAN access point and all authorized clients. WEP keys have turned out to be easily cracked with cracking software, which is widely available on the Internet. If a WEP key is cracked by an intruder, the intruder gets full access to the WLAN. WEP authentication can be strengthened using MAC filters and by disabling SSID broadcasting on the access point. These measures, however,
still don't provide the needed level of security. SSIDs are easily determined by sniffing probe response frames from an AP. MAC addresses are easily captured and spoofed.
Access Control Mechanisms Defined in the 802.11i Standard

The recently ratified WLAN security standards WPA and WPA2 address the vulnerabilities of WEP. WPA, introduced at the end of 2002, is a subset of the 802.11i standard, and WPA2, ratified in summer 2004, provides full 802.11i support. The difference between WPA and WPA2 is how the communication is encrypted. Furthermore, WPA2 provides support for ad hoc networks, which is missing in WPA. User authentication in WPA and WPA2 is based on the same techniques. WPA is currently supported in a few available Symbian smart phone models. WPA2, however, is presently supported only in the most recent model of Nokia Communicators, the Nokia 9300i (Wi-Fi, 2006).
Access Control Based on Pre-Shared Keys

802.11i provides two security modes: home mode and enterprise mode. 802.11i home mode is, like WEP, based on a shared secret string, here called
pre-shared key (PSK). The difference compared to WEP is that the PSK is never used directly as an input for data encryption algorithms. 802.11i home mode is suitable for small WLAN environments, such as small office and home WLANs where the number of users is low.
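For reference, the home-mode PSK is derived from the passphrase and the SSID with PBKDF2-HMAC-SHA1 (4096 iterations, 256-bit output), which a few lines of Python can reproduce; the passphrase and SSID below are of course only placeholders.

```python
import hashlib

def wpa_psk(passphrase: str, ssid: str) -> bytes:
    """Derive the 256-bit pre-shared key from a passphrase and the SSID
    (PBKDF2-HMAC-SHA1 with 4096 iterations, as specified for WPA/WPA2-PSK)."""
    return hashlib.pbkdf2_hmac(
        "sha1", passphrase.encode(), ssid.encode(), 4096, dklen=32
    )

# Hypothetical home-network values
psk = wpa_psk("correct horse battery staple", "HomeWLAN")
print(psk.hex())
```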
802.1X Port Based Access Control

For large enterprise WLAN environments, 802.11i enterprise mode is recommended. This security mode utilizes the 802.1X standard for authenticating users. IEEE 802.1X is a standard, originally designed for LANs, to address open network access. 802.1X involves three components: the supplicant (client), the authenticator (WLAN AP), and the authentication, authorization, and accounting (AAA) server. The supplicant is a user or client who wants to be authenticated. The supplicant accesses the network via the authenticator which is, in the case of a WLAN, a wireless AP. The AAA server, typically a remote authentication dial-in user service (RADIUS) server, works as a backend server providing an authentication service to the authenticator. The AAA server validates the identity and determines, from the credentials provided by the supplicant, whether the supplicant is authorized to access the WLAN or not. During the authentication process, the authenticator works as an intermediary between the
supplicant and the AAA server, passing authentication information messages between these entities. Until the supplicant is successfully authenticated on the AAA server, only authentication messages are permitted between the supplicant and the AAA server, through the authenticator's uncontrolled port. The controlled port, through which a supplicant can access the network services, remains in the unauthorized state, see Figure 5. As a result of successful authentication, the controlled port switches to the authorized state, and the supplicant is permitted access to the network services, see Figure 6.

Figure 5. 802.1X authentication in unauthorized state

Figure 6. 802.1X authentication in authorized state

Figure 7. EAP authentication exchange messages

802.1X carries the extensible authentication protocol (EAP), which handles the transport of authentication messages between the
supplicant and the AAA server. The authentication message exchange is performed over the link layer, using device MAC addresses as destination addresses. A typical EAP authentication conversation between a supplicant and an AAA server in a WLAN is shown in Figure 7. EAP supports the use of a number of authentication protocols, usually called EAP types. The following EAP types are WPA and WPA2 certified (Wi-Fi, 2006):

• EAP-transport layer security (EAP-TLS)
• EAP-tunneled transport layer security (EAP-TTLS)
• Protected EAP version 0/EAP-Microsoft challenge authentication protocol version 2 (PEAPv0/EAP-MSCHAPv2)
• PEAPv1/EAP-Generic Token Card (PEAPv1/EAP-GTC)
• EAP-SIM
EAP-TLS, EAP-TTLS and EAP-PEAP are based on PKI authentication. EAP-TTLS and EAP-PEAP, however, only use certificate authentication for authenticating the network to the user. User authentication is performed using less complex methods, such as user name and password. EAP-TLS provides mutual certificate based authentication between wireless clients and authentication servers. This means that an X.509 based certificate is required both on the client and on the authentication server for user and server authentication. EAP-SIM is an emerging EAP authentication protocol for WLANs and has recently become supported in several WLAN hotspot environments. This standard is still an IETF draft. EAP-SIM is based on the existing GSM mobile phone authentication system and the SIM. A WLAN user is thus able to authenticate to the network using the secret key and algorithms embedded on the SIM card. In order to implement EAP-SIM authentication in a WLAN, a RADIUS server supporting EAP-SIM and equipped with a GSM/MAP/SS7
(GSM/Mobile Application Part/Signalling System 7) gateway is needed. Additionally the WLAN client software must support the EAP-SIM authentication protocol. During the EAP authentication process, the RADIUS server contacts the user’s home GSM operator through the GSM/MAP/SS7 gateway and retrieves the GSM triplets used to authenticate the user. The triplets are sent to the wireless client, via the AP, and if the supplicant and the user’s SIM card are able to validate the GSM triplets, the RADIUS server requests the AP to grant the client network access. For further reading about WLAN access control and security (see Pulkkis, Grahn, Karlsson, Martikainen, & Daniel, 2005).
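The following sketch condenses the port model described above: the uncontrolled port relays only authentication traffic to the AAA server, and the controlled port stays closed until the server reports success. Class and frame names are illustrative, not a real driver or RADIUS API.

```python
# Minimal sketch of 802.1X port gating on the authenticator (WLAN AP).
EAPOL = "EAPOL"          # authentication traffic
DATA = "DATA"            # ordinary network traffic

class FakeAAAServer:
    def process_eap(self, payload: bytes) -> bool:
        # Stand-in for the RADIUS/EAP exchange with a backend server.
        return payload == b"valid-credentials"

class Authenticator:
    def __init__(self, aaa_server):
        self.aaa = aaa_server
        self.controlled_port_authorized = False

    def handle_frame(self, frame_type: str, payload: bytes) -> str:
        if frame_type == EAPOL:
            # Uncontrolled port: always open, but only for authentication messages.
            if self.aaa.process_eap(payload):
                self.controlled_port_authorized = True
            return "relayed to AAA server"
        if self.controlled_port_authorized:
            return "forwarded to network"                # controlled port open
        return "dropped"                                 # supplicant not yet authorized

ap = Authenticator(FakeAAAServer())
print(ap.handle_frame(DATA, b"web request"))             # dropped
print(ap.handle_frame(EAPOL, b"valid-credentials"))      # relayed to AAA server
print(ap.handle_frame(DATA, b"web request"))             # forwarded to network
```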
ACCESS CONTROL FOR APPLICATIONS AND NETWORK SERVICES

Typical network services transferring confidential data to and from Symbian devices are e-commerce and electronic payment services. These services normally run over HTTP and WAP connections. This section concentrates on access control in such connections, as well as in local applications handling confidential data.
Local Applications

Symbian OS doesn't support individual user accounts and has no concept of user logon at the operating system level. Mobile applications handling confidential data should thus require user authentication before access to the application is granted. Applications should also support a "session timeout" for the case that a mobile device is lost or stolen while the device user is logged in to an application. This means that a limited time is specified for which an application can be inactive before re-authentication is required (DevX, 2006).
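A minimal sketch of such an inactivity timeout is shown below; the five-minute limit and the class and method names are assumptions for illustration, not a Symbian API.

```python
import time

SESSION_TIMEOUT = 5 * 60          # assumed policy: re-authenticate after 5 min idle

class SessionGuard:
    def __init__(self):
        self.authenticated = False
        self.last_activity = 0.0

    def login(self, pin_ok: bool):
        self.authenticated = pin_ok
        self.last_activity = time.monotonic()

    def require_auth(self) -> bool:
        """Call before every sensitive operation."""
        idle = time.monotonic() - self.last_activity
        if not self.authenticated or idle > SESSION_TIMEOUT:
            self.authenticated = False        # force re-authentication
            return False
        self.last_activity = time.monotonic() # refresh the inactivity timer
        return True

guard = SessionGuard()
guard.login(pin_ok=True)
if not guard.require_auth():
    print("Session expired or not logged in: prompt for the PIN again")
```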
Client/server Applications

Symbian OS provides the possibility to develop tailor-made client/server applications based on socket communication. SSL sockets are supported, providing mutual certificate authentication. X.509 based certificates are supported in Symbian devices, and a certificate management application and certificate validation module is embedded in the operating system (Siezen, 2005). Typical client/server applications are WAP and HTTP services. The communication between the WAP/HTTP browser residing on the Symbian device and the WAP/HTTP server consists of two parts:

1. The wireless connection between the mobile device and its mobile carrier
2. The Internet connection between the mobile device and the Internet host/server via the mobile carrier

The security of the first connection is based on hardware level security and cannot be affected by application developers. The second connection is, however, to be secured at the application level.
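As an illustration of mutual certificate authentication over SSL sockets (written with Python's standard ssl module rather than the Symbian socket API), a server can be configured to reject clients that do not present a certificate signed by a trusted CA. The certificate file names and the port number are placeholders.

```python
import socket
import ssl

# Certificate and key file names are placeholders for illustration.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server-cert.pem", keyfile="server-key.pem")
context.load_verify_locations(cafile="trusted-clients-ca.pem")
context.verify_mode = ssl.CERT_REQUIRED      # reject clients without a valid certificate

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with context.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()   # TLS handshake, both sides authenticated
        print("client certificate subject:", conn.getpeercert().get("subject"))
        conn.sendall(b"hello, authenticated client\n")
        conn.close()
```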
WAP Connections

Wireless application protocol (WAP) is an open standard for applications residing on mobile devices. The protocol is currently widely used in Symbian smart phones, also for confidential data transmissions. Thus, security is an important issue for the WAP protocol. WAP security protocols and specifications are developed by the WAP Forum (Open, 2006). The evolution of WAP security specifications is shown in Figure 8.

Figure 8. The development of WAP security specifications

WTLS

The security in WAP versions 1.0 and 1.1 mainly relies on the wireless transport layer security (WTLS) protocol. This protocol is designed to provide privacy, data integrity and authentication between two communicating WAP applications. WTLS is derived from transport layer security (TLS) and optimised for low-bandwidth bearer networks with relatively long latency. WTLS provides similar functionality as TLS 1.0 and adds new features such as datagram support, an optimised handshake and dynamic key refreshing. WTLS offers three levels of security:

• WTLS class 1: An encrypted channel is used, but no authentication takes place.
• WTLS class 2: Certificate authentication of the server is used, but the client is authenticated using alternative means, such as username/password.
• WTLS class 3: Both client and server are authenticated using certificates. (Open, 2006)
WMLScript and WIM

The WAP Forum introduced two new initiatives in WAP version 1.2 to address the lack of both non-repudiation services and real end-user authentication in earlier WAP versions:

• WMLScript (wireless markup language script) Crypto Library
• WAP identity module (WIM)

WMLScript provides cryptographic functionality to a WAP client (WAP Forum WMLScript, 2001). It defines a signature interface to digitally sign application data with mobile devices. WIM is used in WTLS and in application level security functions (WAP Forum, 2001). The main function of WIM is to store and process user identification and authentication information, such as private keys. An example of a WIM implementation is a combination with the SIM (Subscriber Identity Module) of a mobile phone. This combined SIM and WIM is called S/WIM.

WPKI

Wireless Public Key Infrastructure (WPKI) is a PKI specification for mobile environments. This specification has been supported since WAP version 2.0. WPKI mainly describes the establishment and maintenance of authentic bindings between entity identifiers and public keys (Open, 2006).

TLS/SSL

SSL/TLS are security protocols for secure communication in Internet based client/server applications. WAP 2.0 adopts TLS as its security protocol and supports the tunnelling of SSL/TLS sessions through a WAP proxy. TLS/SSL in WAP 2.0 replaces the WTLS protocol.

HTTP Connections

For HTTP-based client/server applications the SSL protocol is a simple and secure way of providing mutual authentication. Examples of applications and systems using SSL are:

• Web browsers for secure communications with web servers (HTTPS)
• E-mail client software for secure reading of e-mail messages on e-mail servers
• Secure electronic transactions (SET), a protocol for secure financial transactions and secure use of credit cards on the Internet

SSL supports mutual authentication based on X.509 certificates. The Symbian operating system integrates support for this protocol. Browsers supporting HTTPS connections are also available on Symbian devices.
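A minimal HTTPS client illustrating server certificate verification is shown below, again using Python's standard library rather than a Symbian browser or API; the host name is a placeholder.

```python
import http.client
import ssl

# The default context verifies the server's X.509 certificate chain and host name.
context = ssl.create_default_context()

conn = http.client.HTTPSConnection("www.example.com", 443, context=context)
conn.request("GET", "/")
response = conn.getresponse()
print(response.status, response.reason)
print("negotiated protocol:", conn.sock.version())   # e.g. 'TLSv1.3'
conn.close()
```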
NETWORK CONNECTION SECURITY

Connection security means (Markovski & Gusev, 2003):

• Availability of data communication
• Mutual authentication of communicating partners
• Integrity of data communication
• Possibility of confidential data communication
• Intrusion prevention/detection
• Malware rejection

Essential for fulfilment of all these security requirements in a mobile device are security settings and commitment to security policy rules controlled by centralized security management software. Availability of data communication to/from a mobile device is achieved if the connection network is operational and can reject denial-of-service (DoS) attacks. Mutual authentication of communicating partners can be achieved by:

• Using SSH, VPN or SSL software
• Using IEEE 802.1X/EAP-TLS in WLAN connection networks
• USIM cards in UMTS cellular networks

Integrity and confidentiality of data communication is achieved by:

• Using SSH, VPN or SSL software
• Security protocols WPA and WPA2 in WLAN connection networks
• SIM, USIM, PKI SIM, ISIM cards in cellular mobile networks (GSM, GPRS, UMTS)

Intrusion prevention/detection is achieved by:

• A traffic filtering firewall
• Logging of connection attempts
• Security audits based on communication event logging, analysis of logged information, alerts and alarms
• Control of remote synchronization (Palm OS/HotSync, Windows CE & Windows Mobile/ActiveSync, Symbian OS/SyncML based)
• Shielding the mobile device from unwanted wireless communication with an electromagnetic shielding bag (MobileCloakTM, 2006)

Malware rejection is achieved by using anti-virus software and anti-spyware.
Basic Communication Security

Basic communication security of a mobile device can be defined as intrusion prevention and malware rejection. The basic intrusion prevention tool is a configurable firewall with communication event logging and alert messaging. The basic malware rejection tools are anti-virus and anti-spyware software with suspicious event alarming features. The core of a malware rejection tool is a malware recognition database. Malware rejection tool providers constantly update this database, and the updated malware recognition database is available to malware rejection tool users through some network connection. An installed malware rejection tool should always use the latest update of the malware recognition database. Anti-virus software for mobile devices is delivered, for example, by Symantec (Symantec Corporation, 2006) and Kaspersky (Kaspersky, 2005).
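The core idea of a malware recognition database can be sketched in a few lines: compare file fingerprints against a regularly updated set of known-bad signatures. Real products use far richer signature formats and on-access scanning; the digest and directory below are only placeholders.

```python
import hashlib
from pathlib import Path

# Hypothetical "malware recognition database": SHA-256 digests of known-bad files.
KNOWN_BAD_SHA256 = {
    "0" * 64,   # placeholder digest; a real database is shipped and updated by the vendor
}

def scan(path: str) -> list:
    """Return all files under 'path' whose digest matches a known-bad signature."""
    infected = []
    for file in Path(path).rglob("*"):
        if file.is_file():
            digest = hashlib.sha256(file.read_bytes()).hexdigest()
            if digest in KNOWN_BAD_SHA256:
                infected.append(file)        # quarantine and alert in a real tool
    return infected

print(scan("./installers"))
```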
Authentic Data Communication

Authentic data communication is based on mutual authentication of the communicating parties. In 2G cellular networks (GSM data) authentication is unidirectional: the mobile device is authenticated to the cellular network by use of the shared secret key in the SIM card. Mutual authentication, for example based on public key certificates, is however possible for packet data communication in GSM networks (GPRS) in addition to PIN based GSM authentication. In 3G cellular networks, like UMTS, authentication is mutual: the mobile device and the network are authenticated to each other by the authentication and key agreement (AKA) mechanism (Cremonini, Damiani, de Vimercati, Samarati, Corallo, & Elia, 2005).
In a WLAN authentication is mutual for WPA and IEEE 802.11i (WPA2). The authentication of a mobile client is based on presented credentials and information registered in an AAA server. The authentication protocol, EAP, also requires authentication of the AAA server to the mobile client. (Pulkkis, Grahn, Karlsson, Martikainen & Daniel, 2005). Also a Bluetooth connection can be configured for mutual authentication. The default security level of a Bluetooth service is:
• Incoming connection: Authorisation and authentication required
• Outgoing connection: Authentication required (Muller, 1999)

For Bluetooth connections, link level security corresponding to security mode 3 should be used (Sun, Howie, Koivisto & Sauvola, 2001).
Integrity and Confidentiality of Data Communication

Confidentiality and integrity of all data communication to and from cellular mobile networks (GSM, GPRS, and UMTS) is provided by the security hardware of SIM/USIM/PKI SIM/ISIM cards in mobile devices. For data communication through other network types (WLAN, Bluetooth, IrDA), connection specific security solutions must be installed, configured, and activated. Alternatively, end-to-end security software like VPN and SSH must be used. Available PDA VPN products are referred to in (Taylor, 2004, Part IV). For WLAN connections, available solutions for confidentiality and integrity of all data communication are WEP, WPA, and IEEE 802.11i (WPA2). WEP security is however weak, since WEP protection can be cracked from recorded WEP protected data communication (WEPCrack, 2004).
Connection Security Management

Security settings and commitment to security policy rules should be controlled by centralized security management software. Security audits based on:

• Communication event logging
• Analysis of logged information
• Alerts and alarms

should be performed with timed and manual options. When necessary, a mobile device should be shielded from unwanted wireless communication with an electromagnetic shielding bag. Special attention should be paid to control of remote synchronization (Palm OS/HotSync, Windows CE & Windows Mobile/ActiveSync, Symbian OS/SyncML based). Remote synchronization should be disabled when not in use. Also, mobile devices should have basic communication security features like:

• Personal firewalls
• Antimalware protection software with updated malware recognition data
• Latest software patches installed
Use and attempts to use remote synchronization ports (see Table 2) should be logged, and alerts and alarms should be triggered by unauthorized use or usage attempts.
Table 2. TCP and UDP port used by synchronization software
Passwords used by synchronization software in desktop and laptop computers should resist dictionary attacks, and the PC/Windows option to save connection passwords should not be used.
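As an illustration of logging usage attempts on an otherwise unused synchronization port, the sketch below accepts, logs, and drops incoming connections. The port number is only an assumed example; the ports to monitor are those listed in Table 2.

```python
import logging
import socketserver

logging.basicConfig(filename="sync-port.log", level=logging.WARNING)

SYNC_PORT = 14238   # assumed example; substitute the ports from Table 2

class AlertHandler(socketserver.BaseRequestHandler):
    """Log and refuse any connection attempt on an unused synchronization port."""
    def handle(self):
        logging.warning("Synchronization port probe from %s:%s", *self.client_address)
        self.request.close()     # drop the connection; raise an alarm out of band

if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", SYNC_PORT), AlertHandler) as srv:
        srv.serve_forever()
```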
ADD-ON SECURITY SOFTWARE

Several security products are available for solving security problems associated with Symbian OS based mobile devices. The security products are designed to solve individual or more comprehensive security problems. The following list of product groups shows the versatility of Symbian OS based smart phone security solutions (Taylor, 2004, part III; Douglas, 2004):

• Authentication solutions
• Encryption software
• Anti-virus and firewall software
• VPN software
• Forensic analysis software
• Wireless security software
• Multifunctional software
Add-on security software products for Symbian OS are presented in Appendix A.
EVALUATION OF SYMBIAN SECURITY

This section concentrates on evaluating the security of the Symbian OS by discussing its weaknesses and by comparing Symbian to other mobile device operating systems. A case study with security software performance measurements for Symbian OS is presented in Appendix B.
Security Weaknesses

Symbian based mobile devices do not provide all the security features required by corporate security policies. Major weaknesses include:
• Lack of user authorization
• No access controls in the file system
• Insufficient protection against malicious applications
Symbian devices are designed to be personal, and it is thus assumed that all the data stored on a device is owned by the person using it. This causes problems when the device is owned by a company and used by many employees. In order to meet corporate access control requirements, Symbian devices should provide authorization features such as file masking, access control lists, and role-based access control. These authorization features are, however, currently missing (Perelson & Botha, 2004). Malicious software is a growing threat for Symbian devices. Although Symbian provides an application signature feature (Symbian Signed), it does not provide complete protection against malicious code. Viruses and other malicious software are usually transmitted to a mobile phone as SIS installation files. If the installation file is not digitally signed by a trusted third party, the system will notify the user about it. Nevertheless, if the user chooses to install the application anyway, Symbian OS provides no protection against it after it has been installed. An installed application has full access to delete or change any file in the file system. A Symbian application signed by a trusted authority is not 100% secure either. The installation and application files are tested by the authority by analyzing their functionality, but the authority has no access to the source files. Malicious functions of an application can thus be programmed to be activated after a certain time period. As a result, these functions will not be discovered during the test phase (Smulders, 2004).
Comparison to Other Mobile Device Operating Systems

Symbian, Palm OS, and Windows Mobile are currently the most common mobile device operating systems. In this section the main security features of these operating systems are briefly discussed and compared. In the comparison, only inherent security features are considered; add-on security software is not taken into account.
Windows Mobile

Windows Mobile is an operating system combined with a suite of basic applications for mobile devices, based on the Microsoft Win32 API. The Windows Mobile OS runs mainly on two different kinds of devices: Pocket PCs and Smartphones. The main security features of Windows Mobile include (Microsoft, 2006):

• Authentication functionality
• Data encryption
• Application-level encryption
• Information service encryption
• Network-level encryption

Authentication Functionality

Windows Mobile supports 4-digit device passwords and, in Pocket PC, also a strong password option with 7 or more alphanumeric and punctuation characters. Windows Mobile authentication functionality also provides SIM lock for GSM devices, SSL and PCT for secure Web site authentication, CHAP, MS-CHAP and PAP protocols for VPN authentication, WTLS class 2 for secure WAP, and file signing for code/application authentication. The local authentication subsystem (LASS) is a new feature since Windows Mobile 5.0. It is an OS feature that separates user authentication from the application and its authentication method. LASS provides plug-in modules for additional authentication methods such as biometric and smartcard authentication. Windows Mobile also provides a unique security feature known as Role-Based Access Control. This feature, however, doesn't provide any organizational roles for users. The role-based access control is used for assigning roles to over-the-air (OTA) messages and determining which Windows Mobile device resources the messages have access to.

Data Encryption

In Windows Mobile handheld devices, sensitive data can be stored in a relational database (SQL Server CE). The stored data is protected with 128-bit encryption and a password. This feature is only supported in Pocket PC.

Application Level Encryption

For application and network communication protection, the Windows Mobile platform provides various encryption algorithms, including:

• Stream-based encryption algorithms: RC2 and RC4
• Block cipher encryption algorithms: DES and 3DES
• One-way hashing algorithms: MD2, MD4, MD5, SHA-1, MAC, and HMAC
• Digital signature encryption using the RSA public-key algorithm

A library called CryptoAPI also supports the use of 128-bit encryption by developers for integrating encryption into their applications and communications.
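The difference between the listed one-way hashes and keyed MACs can be illustrated in a few lines, shown here with Python's standard library rather than the platform's CryptoAPI; the key and message are placeholders.

```python
import hashlib
import hmac

message = b"calendar-sync-payload"

# One-way hash: an integrity fingerprint only, anyone can recompute it
print(hashlib.sha1(message).hexdigest())

# Keyed hash (HMAC): integrity plus origin authentication with a shared key
key = b"shared-secret-key"        # placeholder key for illustration
print(hmac.new(key, message, hashlib.sha1).hexdigest())
```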
Information Service Encryption

The Microsoft Exchange software with integrated Server ActiveSync provides technology for encrypting e-mail, calendar, and contacts data and for synchronizing such data between the Windows Mobile device and the server. Microsoft Exchange also provides WTLS encrypted browsing over the Internet when using a WAP-enabled browser. The method used for protecting data synchronization is SSL 128-bit end-to-end encryption.

Network-Level Encryption

Windows Mobile provides the following types of network-level encryption for protecting data transmitted over the Internet and wireless networks:

• VPN protocol support: PPTP, IPSec, and IPSec/L2TP
• Secure access to Web sites: SSL (HTTPS) and PCT
• Secure access to WAP sites: WTLS class 2
• Secure wireless LAN connectivity: VPN, WEP, and WPA

Palm OS

Palm OS is a compact operating system designed for PDAs. The current latest version is Palm OS Cobalt 6.1. The main security features of Palm OS are (PalmSource, 2006):

• Authorization and authentication manager
• Cryptography provider manager (CPM)
• Secure communication
• Data synchronization and backup

Authorization and Authentication Manager

The authorization and authentication manager provides access control to Palm OS devices. The authorization manager provides a file masking feature. This feature enables applications to specify a set of rules that must be met in order to access data on the device. As a result, any stored data, application code, or kernel resource can be protected. The authentication manager handles tokens used for verifying device access, such as passwords, PINs, or pass-phrases. The authentication manager also provides an option for developers to incorporate advanced authentication methods such as biometrics (handwriting, voice recognition, fingerprints, etc.) and smart cards. A code signing feature is also supported. Code signing ensures that only applications with a valid digital signature can access certain data and resources.

CPM

The CPM provides a system-wide suite of cryptographic services for securing data and resources on a Palm OS device. The encryption services are available to any application which needs to take advantage of these services. 128-bit encryption is a standard feature of the CPM. Palm OS has a partnership with RSA Security (one of the leading encryption providers in the security industry), and through the partnership Palm OS includes RC4, SHA-1, and signature verification using RSA-verify. The CPM also incorporates a plug-in cryptographic architecture, allowing developers to incorporate other encryption algorithms such as AES through a suite of APIs.
Secure Communication

For extending encryption services to communication, networking, and e-commerce applications, Palm OS incorporates SSL/TLS, providing secure end-to-end connections over the Internet using 128-bit SSL encryption. The RC4 encryption algorithm is a standard feature in Palm OS and is used for encrypting data transmissions. Palm OS also supports unique device identification, based on the Flash ID, mobile access number (MAN), and electronic serial number (ESN), for network access. Access to VPN networks and WPA secured WLAN access is possible through the use of third party client software.
Analysis

Symbian OS has several security features in common with Palm OS and Windows Mobile. They are all based on a modular design which enables mobile device manufacturers to choose which OS features they want to implement. The operating systems provide basic security services such as authentication and encryption, while user authorization is not considered to be important. The most important security features of Symbian, Palm OS, and Windows Mobile are presented in Table 3 (Perelson & Botha, 2004).
FUTURE TRENDS

The growth of the Internet, e-commerce and m-commerce has dramatically increased the amount of personal and corporate information that can be captured or modified. In the near future, ubicomp systems will accentuate this trend. An increase in privacy and security risks is expected, not only with the emergence of mobile devices, but also with sensor-based systems, wireless networking and embedded devices. Within the mobile field, emerging technologies like RFID (Radio Frequency IDentification), ZigBee, Wireless USB (Universal Serial Bus), Wireless UWB (Ultra Wide Band), cellular mobile fourth generation (4G) systems, location determination, adaptive modulation and coding (AMC), digital signal compression, biometrics, Internet protocol (IPv6), mobile ad-hoc networks, multiple-input multiple-output (MIMO), orthogonal frequency division multiplexing (OFDM), turbo codes, and data encryption technologies, among others, will have a severe impact on the deployment of ubicomp systems and on their security features. These emerging technologies will impose new security features, and the information society of the future will be much more difficult to keep secure. As an example of an emerging application we mention digital rights management (DRM). DRM is any of several technologies used by publishers to control access to digital data and hardware in order to handle usage restrictions. Ubicomp technologies will probably suffer from the same sorts of unforeseen vulnerabilities that met the Internet society. In the ubicomp world, existing security models will be obsolete. In comparison to the Internet, the burden of security and privacy is increasingly falling on the user (Hong, 2005). Privacy, security and trust issues are and will be of major importance. Collection of personal data, usage tracking and sharing of knowledge about a user's location with third parties are typical examples of privacy violations that need to be prevented. Personal information collected by business corporations and governments is already strictly regulated in many countries.
Table 3. Inherent security features of the major mobile device operating systems
The success of many services like m-commerce is dependent on the underlying mobile technology. Enhanced wireless security requires technological improvements like higher computing speed and higher data rates of the mobile devices in order to compensate for additional overhead and increased complexity. On a more general level, both active and passive security threats need to be prevented by a combination of proactive and reactive methods. A proactive method attempts a priori to prevent an attack in the first place and a reactive method detects security threats and reacts accordingly (Grami & Schell, 2005). The adoption of many mobile services like m-commerce will not be realized until the level of user trust will rise. Typical examples that have an impact on the wireless service performance and on the level of trust are dropped calls, busy signals and dead spots (Grami and Schell, 2005). Issues of health and safety due to electromagnetic radiation in mobile devices will also affect the level of trust. All reliability and security risks in ubiquitous computing systems cannot be avoided but better security models and interaction techniques can be developed to prevent and minimize foreseeable threats.
CONCLUSION

The popularity of Symbian based mobile computing devices is constantly growing in both corporate and private use. Smart phones and PDAs are in many companies replacing desktop and laptop computers. As a result, it is becoming more and more common that confidential data is stored in mobile devices. Symbian devices also have interfaces for various wireless network types. Wireless network access can be based on a dial-up connection to a cellular network (GSM, UMTS), on packet communication to a cellular network (GPRS), on a WLAN connection, on a Bluetooth connection, or on an infrared link (IrDA). With
these network connections Symbian devices can use several network service types, such as web services (HTTP or HTTPS), WAP services, e-commerce, and electronic payment services. In parallel with the growing popularity, security threats against Symbian devices and against data communication to and from Symbian devices have also been constantly growing. The most serious security issues are related to protecting data and user credentials stored in the mobile devices. Due to the small size and portability of Symbian devices, access control is difficult to manage physically. Furthermore, Symbian does not provide proper user authentication systems at the OS level. Symbian devices do not provide all the security features required by corporate security policies. They were originally designed for private use, and thus lack proper user authorization mechanisms such as file masking, access control lists, and role-based access control. Symbian devices also face threats due to openness; open means in this case that the operating system is open for external software and content. Malicious software, such as Trojan horses, viruses, and worms, has also started to emerge. A comparison study shows that Symbian provides quite similar security features to its most important competitor operating systems, Windows Mobile and Palm OS. Neither the security features nor the security flaws differ significantly among these operating systems. The security threats must be seriously taken into account when mobile applications are designed and when mobile devices are used for storing and transmitting sensitive data. Some security features are provided by Symbian OS. These features do not cover all security needs, even though Symbian devices implement many security standards and protocols for wireless networking. There are, however, several add-on security solutions available for Symbian OS. Sufficient security can be reached by supplementing the scarcity of embedded security features with add-on security solutions.
REFERENCES Biometrics Research homepage. (n.d.). Retrieved February 5, 2006, from http://biometrics.cse. msu.edu/ Bluetooth SIG. (2006). The official Bluetooth wireless info site. Retrieved February 5, 2006, from http://www.bluetooth.com Cremonini, M., Damiani, E., De Capitani di Vimercati, S., Samarati, P., Corallo, A., & Elia, G. (2005). Security, privacy, and trust in mobile systems and applications. In M. Pagani (Ed.), Mobile and wireless systems beyond 3G: Managing new business opportunities. USA: IRM Press. de Haas, J. (2005, Septemeber15-16). Symbian phone security. Presentation on T2’05 Conference, Helsinki, Finland. Retrieved October 9, 2005, from http://www.symternals.com/downloads/T2Symbian-Security-v1.1.pdf DevX Portal. (2006). Wireless application security: What’s up with that? Retrieved February 6, 2006, from http://www.devx.com/Intel/Article/18013/0/page/1 DIGIA Inc. (2003). Programming for the series 60 platform and Symbian OS. UK: Wiley. Douglas, D. (2004). Windows mobile-based devices and security: Protecting sensitive business information. Retrieved October 9, 2005, from http://download.microsoft.com/download/4/7/ c/47c9d8ec-94d4-472b-887d-4a9ccf194160/ 6.%20WM_Security_Final_print.pdf Duncan, M. V., Akhtari, M. S., & Bradford, P. G. (2004). Visual security for wireless handheld devices. JOSHUA, 2. EMCC Software. (2005). Symbian OS v9 Advances in Symbian OS. Retrieved October 12, 2005, from http://www.newlc.com/article. php3?id_article=959
F-Secure. (2004, December). The malware attack against mobile phones is mounting. Retrieved December 19, 2005, from http://www.f-secure. com/news/items/news_2004122200.shtml Grami, A., & Schell, B. H. (2005). Future trends in mobile commerce: Service offerings, technological advances and security challenges. Retrieved January 15, 2005, from http://dev.hil. unb.ca/Texts/PST/pdf/grami.pdf GSM Security Portal. (2006). Retrieved February 5, 2006, from http://www.gsm-security.net/ Heath, C. (2006). Symbian OS platform security: Software development using the Symbian OS security architecture. UK: Wiley. Hickey, R. A. (2005, November). Loss, theft still no. 1 threat to mobile data. Retrieved December 17, 2005, from http://searchmobilecomputing. techtarget.com/originalContent/0,289142,sid40_ gci1143983,00.html Hicks, S. (2005). Mobile and malicious. Retrieved December 17, 2005, from http://www.ciostrategycenter.com/darwin/Threat/threat_strategies/mobile_malicious/ Hong, J. I. (2005, December). Minimizing security risks in ubicomp systems. Computer, 118-119. Intoto Inc. (2005), iGateway SSL-VPN. Retrieved January 21, 2006, from http://www.intoto.com/ product_briefs/iGateway%20SSL%20VPN.pdf Jansen, W. A. (2003, May 12-15). Authenticating users on handheld devices. The 15th Annual Canadian Information Technology Security Symposium (CITSS), Ottawa, Canada. Retrieved July 11, from http://csrc.nist.gov/mobilesecurity/publications.html#MD Karkimo, A. (2005, October 13). Kännykkää tunnistaa kantajansa askelista. Tietokone computer science magazine on-line news. Retrieved February 1st, 2006, from www.tietokone.fi
Kaspersky Security for PDAs. (n.d.) Retrieved June 30, 2005, from http://anti-virusi.com/kaspersky/kaspersky_pda.html Lu, W. W. (2002). Broadband wireless mobile, 3G and beyond. USA: Wiley. Markovski, J., & Gusev, M. (2003, April). Application level security of mobile communications (pp. 309 – 317). In Proceedings of the 1st International Conference Mathematics and Informatics for Industry MII 2003. Thessaloniki, Greece. Meyer, U., & Wetzel, S. (2004, September 5-8). On the impact of GSM encryption and man-inthe-middle attacks on the security of interoperating GSM/UMTS networks. The 15th IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC 2004), Barcelona, Spain. Retrieved February 5, 2006, from http://www.cdc.informatik.tu-darmstadt. de/~umeyer/UliPIMRC04.pdf Microsoft. (2006). Network Security for the Windows Mobile Software Platform. Microsoft White Paper. Retrieved July 17, 2006, from http://www. microsoft.com MobileCloakTM Portal. Retrieved July 20, 2006, from http://www.mobilecloak.com/ Muller, T. (1999). Bluetooth Security Architecture. White Paper. Retrieved July 20, 2006, from http:// www.bluetooth.com/Bluetooth/Apply/Technology/Research/Bluetooth_Security_Architecture. htm Olzak, T. (2005). Wireless Handheld Device Security. Retrieved October 12, 2005, from http://www. securitydocs.com/pdf/3188.PDF Open Mobile Alliance (2006). WAP Forum. Retrieved February 6, 2006, from http://www. wapforum.org/ PalmSource, Inc. (2006). PalmSource portal. Retrieved July 17, 2006, from http://www.palmsource.com.
Perelson, S., and Botha, R. (2004). An Investigation Into Access Control for Mobile Devices. In H. S. Venter (Ed.), J. H. P. Eloff (Ed.), L. Labuschagne (Ed.), M.M. Eloff (Ed.), ISSA 2004 Enabling Tomorrow Conference. Peer-reviewed Proceedings of the ISSA 2004 Enabling Tomorrow Conference. Information Security South Africa (ISSA). Pulkkis, G., Grahn, K., Karlsson, J., Martikainen, M., & Daniel, D.E. (2005). Recent Developments in WLAN Security. In Pagani, M. (Ed.), Mobile and Wireless Systems Beyond 3G: Managing New Business Opportunities. USA: IRM Press. Rankl, W., & Effing, W. (2003). Smart Card Handbook (3rd ed). USA: John Wiley & Sons. Rittinghouse, J.W., & Ransome, J.F. (2004). Wireless Operational Security. Amsterdam. Elsevier Digital Press. Setec Portal. (2006). Retrieved February 5, 2006, from http://www.setec.fi Shackman, M. (2005). Platform Security: A Technical Overview. Retrieved October 11, 2005, from http://www.symbian.com/developer/techlib/ papers/plat_sec_tech_overview/platform_security_a_technical_overview_v1.1.pdf Siezen, S. (2005). Symbian OS Version 9.1, Product Description. Retrieved July 20, 2006, from http://www.symbian.com/files/rx/file6965.pdf Smulders, T. H. (2004). Security threats of executing malicious applications on mobile phones. Masters thesis. Technische Universiteit Eindhoven, Department of Mathematics and Computer Science, Netherlands. Retrieved July 20, 2006, from http://www.win.tue.nl/~ecss/internships/ reports/TSmulders2004.pdf Sun, J., Howie, D., Koivisto, A., & Sauvola, J. (2001). Design, implementation, and evaluation of Bluetooth security. In Proceedings IEEE International Conference on Wireless LANs and Home Networks, Singapore (pp. 121-130).
Symantec. (2005). Wireless handheld and smartphone security. Retrieved December 16, 2005, from http://enterprisesecurity.symantec. com/Products/products.cfm?MenuItemNo=2& productID=663&EID=0 Symantec Corporation. (2006). Symantec mobile security for Symbian. Retrieved January 15, 2006, from http://www.symantec.com/Products/ enterprise?c=prodinfo&refId=921 and http://eval. veritas.com/mktginfo/enterprise/fact_sheets/entfactsheet_mobile_security_for_symbian_042005.en-us.pdf Symbian. (2006). Symbian OS: The mobile operating system. Retrieved January 26, 2006, from http://www.symbian.com Symbian OS: Overview to Security (2006). Version 1.1, Retrieved January 26, 2006, from http://sw.nokia.com/id/5e713b29-fe0e-488d-8fc6b4dd1950f3c2/Symbian_OS_Overview_To_Security_v1_1_en.pdf
Symbian OS Version 9.3 (2006). Retrieved July 20, 2006, from http://www.symbian.com/files/ rx/file7999.pdf Symbian Signed Portal. Retrieved January 30, 2006, from http://www.symbiansigned.com Taylor, L., (2004-2005). Handheld Security, Part I-V. Retrieved October 9, 2005, from http://www. firewallguide.com/pda.htm WEPCrack. Retrieved March 3, 2004, from http://webcrack.sf.net Wi-Fi Alliance Portal. (2006). Retrieved February 5, 2006, from http://www.wi-fi.org Ye, T. S. and Cheang, A.. (2005, September). The Mobility Threat. Retrieved December 18, 2005, from http://cio-asia.com/ShowPage.aspx?pagety pe=2&articleid=2535&pubid=5&issueid=63 Yoshihisa, I., Miharu, S., Shihong, L., Masato, K. (2005, Oct 15-21). Face Recognition on Cellular Phone. Demo Proceedings of the 10th IEEE International Conference on Computer Vision (ICCV2005), Beijing, China. Demo Nr 13.
APPENDIX A: ADD-ON SECURITY SOFTWARE PRODUCTS FOR SYMBIAN OS

Authentication Solutions

Unauthorized access has not always been recognized as a security risk, especially among private users, even though mobile devices such as smart phones are small, portable and easily lost or stolen. These features lead to a high risk of vital data loss. From this point of view it is easy to understand the necessity of authentication. There are several authentication methods (Douglas, 2004, p. 13):

• Signature recognition based authentication
• Picture based password authentication
• Fingerprint authentication
• Voice authentication
• Face recognition authentication
• Smartcard based authentication
• Legacy host access
Overviews of authentication software for Symbian OS-based smart phones are presented in Tables A1-A3.
Signature Recognition Based Authentication

Signature recognition based authentication has several benefits. It provides a high level of security, and a signature is, from the user's point of view, a simple password which cannot be forgotten. The main problem is that the biometric signature varies from time to time, which creates the possibility of access denial. Communication Intelligence Corp. provides "Sign-On™ for Symbian OS" software, which enables device access through the use of dynamic biometric signature verification. Sign-On™ for Symbian OS is an authentication system which verifies a real-time signature drawing. A signature is easy to recreate and test against an encrypted template of user data created during an enrollment phase. All signature data and templates are encrypted with the 3DES encryption algorithm (Communication Intelligence Corp., 2005). There are not many available biometric signature authentication solutions for Symbian OS based mobile phones. However, Wacom, which is a Symbian Platinum Partner, and Softpro have announced that they are developing a signature recognition based authentication solution in co-operation. The combination of Wacom's Penabled™ pen-based interface technology and Softpro's handwriting verification and authentication technology will deliver a complete solution for capturing and automatically authenticating a biometric signature to enable secure authorization of mobile transactions. The product consists of Wacom's display and pen and Softpro's signature verification technology. Penabled™ technology captures dynamic features of a signature, such as the speed of writing, pen pressure, letter shape and the writing rhythm. The signature is then managed and analyzed by Softpro's verification system (WACOM Technology Corp., 2005).
Available signature recognition based authentication software for Symbian based mobile devices is summarized in Table A1.
Fingerprint Based Authentication

Fingerprint based identification is the oldest of the biometric techniques. The uniqueness of the human fingerprint effectively prevents forgery attempts. The security level of fingerprint authentication depends on such factors as the quality of scanning and the visual image recognition (ROSISTEM, 2005). Users who are interested in a biometric fingerprint authentication solution should choose a Symbian OS based mobile phone with an integrated fingerprint sensor, because it is difficult to find a separate solution including both software and hardware. Unfortunately, most Symbian phones still do not have an integrated sensor for fingerprint authentication. Several different effective fingerprint authentication sensors are presently available, and these sensors could be applicable in Symbian OS based mobile phones. However, fingerprint sensors are mostly used in PCs and notebook computers. Fingerprint authentication in mobile phones will probably become more common in the near future. Presently, for example, Fujitsu produces mobile phones with an integrated biometric fingerprint sensor for secure access to the phone and to stored data. The FOMA F900i series of 3G FOMA i-mode® mobile phones uses a fingerprint sensor for access security and synchronization with a PC. See Table A2 (NTT DoCoMo Inc., 2004).
Picture Based Password Authentication

Picture based password authentication, or graphical login software, can, like fingerprint authentication software, be categorized as an unusual solution offered by few providers. Pointsec® for Symbian OS includes PicturePIN technology, which is a picture based authentication method.
Table A1. Signature recognition based authentication software
Table A2. Fingerprint authentication software
Presently, the PicturePIN technology is patent pending. Pointsec allows users to select a password consisting of a combination of icons, see Figure A1. The positions of Pointsec's password icons change each time the mobile device is switched on. This feature makes it highly difficult for shoulder surfers to recognize passwords. Even scratches on the screen cannot reveal the passwords. See Table A3 (Pointsec Mobile Technologies, 2006).
Face Recognition Authentication

Face recognition authentication is based on captured images, which can be static digital pictures or dynamic pictures, that is, video clips. Authentication software measures and compares key features of the observed picture to the picture or series of pictures stored in the mobile device.
Table A3. Picture based password authentication software
Figure A1. Pointsec’s PicturePIN picture based authentication
Some systems are even able to recognize a face from a crowd. The ability to recognize and verify the authenticity of the user through face recognition is meant to contribute to greater security and safety for mobile devices and the information they contain. A face recognition authentication solution for Symbian OS based mobile phones is the "OKAO Vision Face Recognition Sensor", provided by OMRON Corporation. This technology was presented at the "Security Show Japan 2005". See Table A4. Camera equipped mobile phones can use the "OKAO Vision Face Recognition Sensor" without additional hardware requirements. Users register their own face image to their phones by taking their own photo with the camera. There is no need to adjust the camera position when taking the photo. After the registration, the "OKAO Vision Face Recognition Sensor" will automatically detect the user and unlock the mobile phone. In addition, the sensor will detect the owner automatically if the face is included in a photo. According to OMRON's tests, the registered mobile device owner is recognized with a probability of 99% or higher. Besides Symbian OS, this face recognition technology also supports BREW, embedded Linux, and ITRON OS (OMRON Corporation, 2005; Biometric Watch, 2005).
Encryption Software

A simple method to protect sensitive data is encryption. Pointsec for Symbian OS provides a solution for real time encryption of data on Symbian OS based mobile devices and on different types of memory cards, such as Memory Stick Duo and MMC (multimedia cards), without any user interaction. Pointsec for Symbian OS uses strong 128-bit AES encryption to protect information stored on the device and on memory cards, with no noticeable reduction in speed or in other performance measures. Data can be accessed or decrypted only with proper authentication (Pointsec Mobile Technologies, 2006). Utimaco's SafeGuard PDA 4.0 also provides an authentication and encryption solution for Symbian OS based mobile devices. This solution includes user authentication to the device by password, symbolic or numeric PIN. Forgotten passwords or PINs can be easily reset centrally via SafeGuard's emergency mechanism. SafeGuard PDA for Symbian supports encryption for files, for directories, and for internal databases used in PIM (Personal Information Management), such as e-mails, SMS, tasks and events. Configuration, encryption and security rules can also be centrally managed by administrators. Currently SafeGuard PDA 4.0 supports Symbian Series 80 (Nokia Communicator 9300/9500), Symbian UIQ 2.1 (Sony-Ericsson P900/P910i) and Windows Mobile 2003 devices (Utimaco Safeware AG, 2005). Security software for Symbian OS based mobile devices is mostly commercial. FreEPOC's FreeCrypt 1.02 and jRC4 are examples of freeware for encryption purposes.
Table A4. Face recognition authentication software
Table A5. Encryption software
Both security software solutions encrypt data with the RC4 algorithm. FreeCrypt is available for Nokia Communicator 92xx and jRC4 for Symbian UIQ 1.01 (for example Sony Ericsson P800) mobile devices (FreEPOC, 2006). Available encryption software for Symbian based mobile devices is summarized in Table A5.
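As a generic illustration of the kind of file encryption these products provide (and not the API of any of them), the sketch below uses the third-party Python cryptography package, whose Fernet construction combines 128-bit AES with an HMAC integrity check; the file names are placeholders.

```python
# Illustrative only: requires the third-party "cryptography" package.
from cryptography.fernet import Fernet
from pathlib import Path

key = Fernet.generate_key()            # store this key in a protected key store
fernet = Fernet(key)

# Encrypt a sensitive file (authenticated encryption: AES-128-CBC plus HMAC)
plaintext = Path("contacts.db").read_bytes()
Path("contacts.db.enc").write_bytes(fernet.encrypt(plaintext))

# Later, after the user has authenticated, restore the plaintext
restored = fernet.decrypt(Path("contacts.db.enc").read_bytes())
```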
Anti-Virus and Firewall Software F-Secure Mobile Anti-Virus™ protects mobile devices against harmful content, for example viruses, worms, and Trojans. The software has been available for most Symbian Series 60, 80 and 90 mobile devices. To prevent infection all files are scanned automatically and transparently during modification, synchronization or transference of data, without any need of user intervention. Also all files on memory cards are automatically scanned. When an infected file is detected, the file is immediately quarantined to protect all other data in the system. In addition, the virus recognition database in the mobile devices is automatically updated over a secure HTTPS connection or with SMS messages. The software supports automatic detection of data connections (for example GPRS, WLAN, UMTS) for updates. F-Secure recently announced ‘F-Secure Mobile Security for Symbian Series 80’, a combination of integrated anti-virus functionality and a firewall (F-Secure Corporation, 2005). Symantec provides solutions with integrated anti-virus and firewall capabilities, called Symantec™ Mobile Security Corporate Edition for Symbian. This software is available for Symbian OS Series 60 and 80. The software has almost the same functionality as F-Secure’s Mobile Security for Symbian. The LiveUpdate Wireless feature from Symantec™ Mobile Security Corporate Edition for Symbian enables users to download virus definitions and software updates directly to their mobile device via an available wireless Internet connection. The key feature of this version is centralized management via a third-party mobile device. This functionality enables administrators to configure, lock and enforce security policies either remotely or locally (Symantec Corporation, 2006). Available anti-virus and firewall software for Symbian based mobile devices is summarized in Table A6.
VPN (Virtual Private Network) Software Transmitting data over wireless networks causes a remarkable security risk, because the transmitted data over air can be easily exploited by outsiders. Secure VPNs use cryptographic tunneling protocols to ensure sender authentication, as well as the confidentially and integrity of data. In an IPSEC VPN environment a mobile device requires preinstalled VPN client software to authenticate and connect to the VPN gateway. When the application on the user’s mobile device attempts to communicate, the network traffic from these requests is tunneled through the VPN connection Nokia Mobile VPN is an example of a third party VPN solution for Symbian devices. The components of Nokia Mobile VPN include Nokia Mobile VPN Client and Nokia Security Service Manager (SSM). Nokia Mobile VPN Client is an IPSec based VPN application. It allows a user to authenticate and connect to an enterprise VPN and as a result data can be securely transferred between the mobile client and the VPN network. Key features of the Nokia Mobile VPN Client are: • • • • • •
• Provides the user the possibility to securely access any network service in a remote network
• Support for Nokia Series 60 and Series 80 Symbian smart phones
• Supports legacy and PKI based authentication
• DES (Data Encryption Standard), 3DES, and AES for encryption
• SHA-1 (Secure Hash Algorithm 1) and MD5 (Message Digest 5) for data integrity
• Uses Nokia SSM for automatic provisioning of VPN settings, policy updates, and certificate enrollment
The Nokia SSM is the core of a scalable mobile VPN solution. It extends the VPN to the mobile domain using the Nokia Mobile VPN Clients and supported gateways. Key features of the Nokia SSM include:
• The cornerstone for rapid, large scale Mobile VPN deployments
• Integrates with management systems, VPN policy, and external authentication servers
• Enables trust creation between a user and a corporate infrastructure
• Provides secure provisioning of VPN configuration automatically over the air
• Provides PKI services for mobile devices (Nokia, 2006)
Table A6. Anti-virus & firewall software
Compared to the more common IPSec-based VPN, a VPN based on the SSL (Secure Sockets Layer) cryptographic protocol makes it easier for administrators and users to set up and manage secure communication over the Internet. An SSL VPN uses SSL technology to enable secure remote access. The benefit of using an SSL VPN instead of an IPSec VPN is that users do not need any VPN client software installed on the mobile device. Users can quickly and easily connect to the SSL VPN gateway via a Web browser on any compatible device or computer, and the SSL protocol is supported by most Web browsers (Ferraro, 2003; Wikipedia, 2005). Intoto's iGateway SSL-VPN allows users to access enterprise intranet services securely from mobile devices. iGateway SSL-VPN makes it possible for users to create a secure encrypted virtual tunnel from any standard Web browser. Users of iGateway SSL-VPN can choose authentication methods according to their preferences from the following alternatives: RADIUS (Remote Authentication Dial In User Service), LDAP (Lightweight Directory Access Protocol), Active Directory, Windows NTLM (NT LAN Manager) and digital certificates. The software also provides end-point security controls, such as content filtering, anti-virus checks, a personal firewall, and removal of registry entries, file-system entries and browser traces (Intoto Inc., 2005; ZDNet India News, 2005). Available VPN software for Symbian based mobile devices is summarized in Table A7.
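The convenience of SSL/TLS-based remote access comes from the client side needing nothing more than a TLS-capable application such as a Web browser. The sketch below uses Python's standard ssl module to open a TLS-protected connection and inspect the negotiated parameters; the gateway host name is a placeholder, and this only illustrates the TLS handshake that SSL VPN portals rely on, not Intoto's product.

```python
import socket
import ssl

# Placeholder gateway address; a real SSL VPN portal would sit behind HTTPS.
GATEWAY_HOST = "sslvpn.example.com"
GATEWAY_PORT = 443

# The default context verifies the server certificate against the system CA
# store, which is what assures the client it is talking to the real gateway.
context = ssl.create_default_context()

with socket.create_connection((GATEWAY_HOST, GATEWAY_PORT), timeout=10) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=GATEWAY_HOST) as tls_sock:
        print("TLS version:   ", tls_sock.version())
        print("Cipher suite:  ", tls_sock.cipher())
        print("Server subject:", tls_sock.getpeercert().get("subject"))
        # From here on, application data (e.g., an HTTP request to the portal)
        # is encrypted and integrity-protected by the TLS record layer.
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: " + GATEWAY_HOST.encode() + b"\r\n\r\n")
        print(tls_sock.recv(256))
```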
Forensic Analysis Software
While a large variety of forensic analysis software is available for personal computers, the range of solutions is much more limited for mobile devices, especially for Symbian OS based mobile devices. The problem is not only that fewer software solutions exist for Symbian OS, but also that the available solutions support only the most common series of Symbian OS based mobile devices. Forensic analysis software has three main functionalities: acquisition, examination and reporting. Only a few of the available solutions provide all of these functionalities, so several software products must often be acquired for a full forensic examination process. Forensic analysis software needs full access to a mobile device in order to start acquisition of data; if the examined mobile device is protected with some authentication method, then cracking software is needed. Oxygen Software delivers software to police departments, law enforcement units and government services for investigation purposes. The Oxygen Phone Manager II (forensic version) ensures that phone data remains unchanged during extraction and export. This forensic version allows users to read data from a mobile phone and export it in any of the supported formats (Oxygen Software, 2006).
Table A7. VPN software
Paraben Corporation has developed tools to assist law enforcement, corporate security and digital investigators. Paraben's PDA Seizure offers forensic analysis tools for Symbian OS, Windows CE/Pocket PC, Windows Mobile, and RIM BlackBerry. The version for Symbian OS allows forensic examiners to acquire, examine and analyze data. Both physical and logical acquisition of data is possible: physical acquisition means a complete bit-by-bit copy of physical storage, for example a disk drive, while logical acquisition means an exact copy of logical storage objects, that is, files and folders. PDA Seizure has a built-in search function for acquired data and a bookmarking function to help users organize data. Moreover, the tool supports HTML reporting of findings. Paraben Corporation provides another software solution, Cell Seizure, for forensic data acquisition; it acquires all data stored on GSM SIM cards, including deleted data (Paraben Corporation, 2006; Ayers & Jansen, 2004, p. 14). Available forensic analysis software for Symbian based mobile devices is summarized in Table A8.
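A recurring requirement in the acquisition step is being able to prove that the extracted data has not been altered afterwards. The sketch below computes and later re-verifies cryptographic digests of acquired files; the directory layout and manifest file name are invented for illustration and are not part of PDA Seizure or Oxygen Phone Manager.

```python
import hashlib
import json
from pathlib import Path

ACQUISITION_DIR = Path("acquired_image")        # hypothetical dump of phone files
MANIFEST = Path("acquisition_manifest.json")    # hypothetical integrity manifest


def file_digest(path: Path) -> str:
    """SHA-256 digest of one acquired file, streamed in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def record_manifest() -> None:
    """Hash every acquired file once, right after acquisition."""
    digests = {str(p): file_digest(p)
               for p in ACQUISITION_DIR.rglob("*") if p.is_file()}
    MANIFEST.write_text(json.dumps(digests, indent=2))


def verify_manifest() -> bool:
    """Re-hash the files later (e.g., before reporting) and compare."""
    digests = json.loads(MANIFEST.read_text())
    return all(file_digest(Path(name)) == value for name, value in digests.items())


if __name__ == "__main__":
    record_manifest()
    print("Evidence unchanged:", verify_manifest())
```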
Table A8. Forensic analysis software
Multifunctional Software
Multifunctional software aims to address all security needs of mobile devices comprehensively. From an administrator's point of view such software is appealing, since effective central administration can save considerable resources.
The key function of Pointsec® for Symbian OS is encryption. This feature ensures a high security level, because all data can be automatically and immediately encrypted before being stored or transferred, and decrypted automatically for an authenticated user. Recipients of encrypted data files do not need the same software to open the encrypted data; files can be opened with a valid password. Pointsec® for Symbian OS automatically encrypts all data stored on protected devices and on memory cards, such as Memory Stick Duo and MMC (MultiMediaCard), without any user interaction. Trust Digital 2005 encrypts data on Symbian OS based mobile devices and PDA devices. Before data can be decrypted, users are required to authenticate themselves to the device, and data can be encrypted based on administrator and user preferences. Pointsec® for Symbian OS uses the Advanced Encryption Standard (AES) algorithm, the US government approved cryptographic standard based on the Rijndael algorithm, with a 128-bit encryption key. Trust Digital 2005 provides six different selectable encryption algorithms, including AES, and in addition uses the MD5 hash algorithm to protect passwords stored on the device. One of the most important features of multifunctional software is the possibility of central management. Pointsec® for Symbian OS enables administrators to create, deploy and manage their organization's security policy for mobile devices from one central location. The central management system ensures that the security policy is enforced; end users cannot uninstall the software from their mobile devices. Trust Digital 2005 can be centrally managed from a "Policy Editor" or from a "Trusted Mobility Server", which allows administrators to create, push and manage a security policy for each device. The access policies for the device can also be managed. Trust Digital 2005 together with the Encryption Plus products makes a powerful combination that provides end-to-end data access control and encryption. Pointsec for Symbian OS enables users to securely regain access via "Remote Help" when a PIN or a password is forgotten.
Table A9. Multifunctional software
The number of failed authentication attempts is restricted, and access to the mobile device is denied without authentication. Administrators can assist users via a secure challenge/response procedure, which helps the user regain access to the device and resets the PIN or password (Pointsec Mobile Technologies, 2006; GuardianEdge Technologies, 2005). Two multifunctional security software solutions for Symbian based mobile devices are compared in Table A9.
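Both products described above rest on AES with a 128-bit key for bulk data protection. The following is a minimal sketch of AES-128 encryption and decryption using the widely available Python cryptography package; the key handling, sample data and choice of CBC mode are illustrative assumptions and do not reflect how Pointsec or Trust Digital actually manage keys and cipher modes.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import padding


def encrypt_blob(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt data with AES-128 in CBC mode; the random IV is prepended."""
    iv = os.urandom(16)
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return iv + encryptor.update(padded) + encryptor.finalize()


def decrypt_blob(key: bytes, blob: bytes) -> bytes:
    """Reverse of encrypt_blob: split off the IV, decrypt, remove padding."""
    iv, ciphertext = blob[:16], blob[16:]
    decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    padded = decryptor.update(ciphertext) + decryptor.finalize()
    unpadder = padding.PKCS7(128).unpadder()
    return unpadder.update(padded) + unpadder.finalize()


if __name__ == "__main__":
    key = os.urandom(16)  # 128-bit key; a real product derives it from user authentication
    secret = b"Customer list: do not disclose."
    blob = encrypt_blob(key, secret)
    assert decrypt_blob(key, blob) == secret
    print("round trip OK, ciphertext length:", len(blob))
```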
REFERENCES
Ayers, R., & Jansen, W. (2004). PDA forensic tools: An overview and analysis. Retrieved July 6, 2005, from http://csrc.nist.gov/publications/nistir/nistir-7100-PDAForensics.pdf
Biometric Watch. (2005). Face recognition, coming to a cell phone or PDA near you. Retrieved January 6, 2006, from http://www.biometricwatch.com/BW_20_issue/BW_20.htm
Communication Intelligence Corp. (2005). Sign-On™ True Verification comes to the handheld. Retrieved February 3, 2006, from http://www.cic.com/products/signon/
Ferraro, C. I. (2003). Choosing between IPSec and SSL VPNs. Retrieved January 16, 2006, from http://searchsecurity.techtarget.com/qna/0,289202,sid14_gci940324,00.html
F-Secure Corporation. (2005). F-Secure mobile security for Series 80. Retrieved January 15, 2006, from http://www.f-secure.com/download-purchase/manuals/docs/manual/fsmavs80/avfws80.pdf
FreEPOC. (2006). FreEPOC's software. Retrieved February 2, 2006, from http://www.freepoc.org/software.php
GuardianEdge Technologies. (2005). Trust Digital 2005. Retrieved February 4, 2006, from http://www.guardianedge.com/products/PDASecure/index.html
Nokia. (2006). Nokia mobile VPN. Retrieved January 20, 2006, from http://www.europe.nokia.com/nokia/0,0,77172,0.html
NTT DoCoMo, Inc. (2004). NTT DoCoMo to market F900i, first model of 3G FOMA 900i series. Retrieved January 3, 2006, from http://www.nttdocomo.com/presscenter/pressreleases/press/pressrelease.html?param[no]=415
OMRON Corporation. (n.d.). Face recognition sensor for mobile phone. Retrieved January 5, 2006, from http://www.omron.com/ecb/products/mobile/okao.html
Oxygen Software. (2006). Oxygen Phone Manager II for Symbian OS phones. Retrieved January 22, 2006, from http://www.oxygensoftware.com/en/products/
Paraben Corporation. (2006). Handheld digital forensics. Retrieved January 22, 2006, from http://www.paraben-forensics.com/handheld_forensics.html
Pointsec Mobile Technologies. (2006). Pointsec® for Symbian OS. Retrieved January 10, 2006, from http://www.pointsec.com/_file/SymbianOS_LTR_72dpi_ENG_v0506.pdf
ROSISTEM. (2005). Biometric education » Fingerprint. Retrieved January 19, 2006, from http://www.barcode.ro/tutorials/biometrics/fingerprint.html
Symantec Corporation. (2006). Symantec mobile security for Symbian. Retrieved January 15, 2006, from http://www.symantec.com/Products/enterprise?c=prodinfo&refId=921
Utimaco Safeware AG. (2005). SafeGuard® PDA enterprise edition. Retrieved January 12, 2006, from http://www.utimaco.com/content_pdf/SG_PDA_40_en.pdf
Wacom Technology Corp. (2004). Softpro joins Wacom's dynamic signature initiative to deliver secure mobile commerce. Retrieved January 20, 2006, from http://www.wacom-components.com/english/news_and_events/nw0021.asp
Wikipedia. (2005). Secure sockets layer (SSL) and transport layer security (TLS). Retrieved January 16, 2006, from http://en.wikipedia.org/wiki/Secure_Sockets_Layer
ZDNet India News. (2005). Intoto introduces multi-service security software. Retrieved January 21, 2006, from http://www.zdnetindia.com/news/pr/stories/121012.html
APPENDIX B: SECURITY SOFTWARE PERFORMANCE MEASUREMENTS FOR SYMBIAN OS
This case study presents performance measurements for the security software Pointsec for Symbian OS. The purpose was to measure the influence of Pointsec on the data communication performance of Symbian OS. Pointsec is presented in more detail in the section Add-on Security Software. According to Pointsec Mobile Technologies, the Pointsec security software should not reduce speed or other performance measures even when strong 128-bit AES encryption is used to protect the information on the device and on memory cards.
Measurements
In practice, however, security solutions may reduce data communication performance measures of mobile operating systems, such as download speed and connection times. The following performance measures were taken for a Pointsec security software installation on a Nokia Communicator 9500:
• Downloading a 4.92 MB file
• Connecting to an e-mail server (penti.arcada.fi) with IMAPS based e-mail client software
• Connecting to a WWW site (www.nokia.com)
• Connecting to an SSH server (penti.arcada.fi) with the PuTTY SSH client
All four performance measures were measured six times, with and without the Pointsec security software installed, for two different access network types, WLAN and GPRS. The network bandwidths were 11 Mbit/s for WLAN and 56 kbit/s for GPRS. Measurement results are presented in Tables B1 through B4.
Table B1. Download speed measurements (download times for a 4.92 MB file)
Usefulness of Measurement Results
Conditions cannot be assumed to be equal for different measurements, since the download speed and connection times were measured for data communication through the public Internet, and the utilization of the Internet during a measurement session is not deterministic. Measurement results have been considered useful if the standard deviation is less than 10% of the calculated average for measurements with the same mobile device configuration. The standard deviation exceeded 10% of the calculated average in only one case, the GPRS connection to an e-mail server without the Pointsec security software installed, where it was about 15% of the calculated average; see Table B2.
Table B2. Connection time measurements (to mailbox on e-mail server)
Measured Degradation of Data Communication Performance Caused by Pointsec
The influence of Pointsec was considered noticeable if the intervals defined by the measured average plus or minus the standard deviation, with and without Pointsec, do not overlap for an otherwise identical mobile device configuration. Noticeable performance degradation was measured only for the connection time to a Web site: about twice as long for a GPRS connection and about 17% longer for a WLAN connection; see Table B3. However, the influence of the traffic load in the Internet and of the load on the selected Web server during the performance measurements is unfortunately unknown. The measurements can thus be considered to support the view of the provider of the Pointsec security software that the performance degradation caused by this security software is insignificant on a Symbian device; see Table B4.
Table B3. Connection time measurements (Web site www.nokia.fi)
Table B4. Connection time measurements (to a SSH server)
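The two acceptance criteria used above (a standard deviation below 10% of the mean, and non-overlapping mean plus or minus standard deviation intervals with and without Pointsec) are easy to reproduce. The sketch below applies both checks to a set of repeated timing measurements; the sample values are invented placeholders, not the figures from Tables B1 through B4.

```python
from statistics import mean, stdev

# Hypothetical connection times in seconds (six runs each), NOT the measured values.
without_pointsec = [4.1, 4.3, 4.0, 4.2, 4.4, 4.1]
with_pointsec    = [4.9, 5.1, 4.8, 5.0, 5.2, 4.9]


def usable(samples: list[float]) -> bool:
    """Usefulness criterion: standard deviation below 10% of the average."""
    return stdev(samples) < 0.10 * mean(samples)


def noticeable_degradation(a: list[float], b: list[float]) -> bool:
    """Degradation criterion: the mean +/- stdev intervals do not overlap."""
    lo_a, hi_a = mean(a) - stdev(a), mean(a) + stdev(a)
    lo_b, hi_b = mean(b) - stdev(b), mean(b) + stdev(b)
    return hi_a < lo_b or hi_b < lo_a


print("baseline usable:       ", usable(without_pointsec))
print("with Pointsec usable:  ", usable(with_pointsec))
print("noticeable degradation:", noticeable_degradation(without_pointsec, with_pointsec))
```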
Chapter IV
Wireless Local Area Network Security
Michéle Germain, ComXper, France
Alexis Ferrero, RTL, France
Jouni Karvo, TKK, Finland
Abstract
Using WLAN networks in enterprises has become a popular method for providing connectivity. We present the security threats of WLAN networks and the basic mechanisms for protecting the network. We also give some advice on avoiding the threats.
Introduction
The ease of deploying wireless local area network (WLAN) systems and the abundance of affordable IEEE 802.11 WLAN-based products on the market make the idea of a wireless office alluring. Offices using laptops as workstations can benefit from the ease of bringing a laptop to a meeting while preserving network connectivity. WLAN connectivity can also be used by salesmen and executives who are travelling to communicate with
the office when residing within hot-spot areas. This ease and flexibility come at a price: wireless local area networks are inherently insecure compared to wired networks. There are various applications of wireless networks. The first of them is the hot spot, which provides, in a public (or private) place, an open radio infrastructure that allows anyone to get an Internet connection or to join the intranet of their enterprise. A second application is the enterprise WLAN, which completes or replaces a legacy
wired network. This also has a place in the SoHo or domestic environment for sharing an ADSL connection between several users. WLANs also provide a convenient networking possibility in areas where cabling is impossible or restricted. A last case is the constitution of wireless bridges between networks or subnets. Such wireless networks present specific vulnerabilities due to the radio medium and are subject to specific threats from hackers. The object of this chapter is to identify these threats and to explain how to avoid them. WLAN connections are based on a radio link in the unlicensed 2.4 or 5 GHz radio band, depending on the WLAN type. Radio waves broadcast, and the antennae of typical WLAN equipment are not designed to produce directed radio beams; they transmit freely in all directions. Thus, in addition to the intended receiver, any other receiver that is close enough can receive the signal. This is fundamentally different from wired networks, such as Ethernet, where in order to listen to the traffic one needs physical access to the networking equipment, or at least to the cabling. The transmission range of typical WLAN equipment is on the order of 50 m. Even if the range is short, low-level signals can be received at longer distances, even some kilometres, using illicit antennae and high-sensitivity receivers. Walls, ceilings and other such constructions reduce the transmission range significantly, depending on the materials in the radio signal's path. Thus, there are several concerns regarding the accessibility of the radio signal. First, the signal may easily be heard outside the premises of the office. Second, guests visiting the offices can often carry in a laptop, and are thus able to listen to and even connect to the company's network without arousing suspicion, unless the network is properly protected. The security of WLAN is a major concern of network administrators, on the one hand because
multiple (true or false) weaknesses have been reported and amplified by the press, and on the other hand because it is a new technology with many new aspects to take into consideration. From the network administrator's point of view, it is not conceivable to deploy a radio network as a complement to the existing network if it introduces vulnerabilities. To avoid this, standardisation bodies and forums, in particular the IEEE, as well as manufacturers, have worked to develop security mechanisms well suited to this kind of network. This chapter describes the security features of IEEE 802.11 wireless local area networks and shows their weaknesses. We present the associated threats together with some practical defence strategies.
Scope
This chapter deals with wireless LANs built on wired infrastructures that support one or more radio base stations, so-called "access points" (APs). The infrastructure may also support fixed stations. Mobile stations take service from the access points (see Figure 1). The following is mainly about Wi-Fi technology based on the IEEE 802.11 standard, which is presently the most commonly used one. Other technologies will be mentioned later. Wi-Fi is an interoperability label for 802.11 equipment, delivered by the Wi-Fi Alliance (previously WECA). Ad-hoc networks that operate without any kind of infrastructure and in which routing is performed by the mobile stations (which act both as routers and as terminals) are not taken into consideration here.
Background
IEEE 802.11 wireless LAN networks come in many varieties. The original standard, IEEE 802.11 from 1997, specifies data rates of 1 to 2 megabits per second and has radio and infrared connectivity as options.
Figure 1. General WLAN architecture
The standard includes authentication and association procedures, and support for privacy. Several standard versions have since emerged, each pushing the limits in data rates (11 Mbps for 802.11b, 54 Mbps for 802.11a and 802.11g, all using radio transmission for communications) or adding new features for QoS (802.11e), network management (802.11h) or security (802.11i). In common parlance, all the different versions are just called IEEE 802.11 wireless local area networks. There are also other local area network technologies, such as the ETSI HiperLAN, but the IEEE 802.11 WLAN products have an overwhelming market position and constitute a de facto standard. WLAN network equipment is common and cheap, and while cellular networks also have security holes that can be misused, the abundance of available radio equipment for WLAN networks makes them much more vulnerable to attacks. There can be different objectives for securing a company's WLAN network, such as:
• Preventing unauthorized use: for example, preventing the sending of mass e-mail from the company's network, or preventing attempts to attack other institutions' computer infrastructure from the company's network.
• Protecting the company's sensitive information: for example, protecting industrial property rights, tender documents, etc.
Each of these objectives may require different priorities for security measures.
The Threats
The main threats to WLAN networks are:
• Radio waves: The major threats are related to the radio aspect, since radio waves broadcast and respect neither walls nor other limits. Without precautions, a hacker can access a network from the street. Another threat concerns the user who, when using a laptop in a public place, is exposed to inquisitive glances or, worse, to spurious connections initiated by neighbouring hackers. In such a wild environment, encryption and authentication are of major importance.
• Denial of service (DoS): The purpose is to make the network ineffective. Protections
do not exist, but it is possible to apply local corrective actions, possibly with the help of tools.
° Jamming: It is relatively easy to jam a radio network using a high-power emitter on the same frequency. This attack does not present any risk of intrusion, but the network becomes unusable.
° Rush access: This consists of overloading the network with malicious connection requests. Tools are able to detect this kind of traffic and to provide information that helps the network administrator identify and locate the origin.
° Spoofed de-authentication frames: The purpose is to send malicious frames that force the de-authentication of a station and make it unable to connect again. A variant is to broadcast such frames in order to attack any mobile station in range.
• Intrusions: In opposition to DoS, the hacker's purpose is to get access to the network, maybe just to get a free Internet connection or, worse, to read or modify the information stored in the network stations.
° Client intrusion: This is the most common attack; it aims to intrude into the network via a client station. Protections are the same as for wired networks (firewall).
° Network intrusion: This is the most critical attack; it aims to take control of the network resources of the enterprise. Wi-Fi dedicated intrusion detection systems (IDS) are efficient against such attacks.
• Falsification of access points: With these attacks, the hacker uses false access points to fetch the traffic on the network. Such attacks are discovered by detecting abnormal radio transmissions in unexpected areas.
° Fake AP: The hacker's station presents the characteristics of a network access
point, using appropriate software applications. From his laptop, the hacker can intercept users' connections for man-in-the-middle attacks, can catch passwords on a Web page identical to that of the authentication server, or can simply get into a station to read or modify station data.
° Rogue AP: This attack consists of connecting a pirate access point to the network infrastructure. This AP broadcasts in the area where the hacker stays. Naturally, this attack needs some complicity within the enterprise, but it may be innocently provoked by an employee who installs an AP in his or her office, just to improve the working environment. These APs are particularly dangerous because they open the enterprise network (which may be a wired one) to the Wi-Fi world, generally with poor protections.
° Inversed honey pot: In the network area, the hacker installs an AP that transmits with a high radio level and that appears like a network AP. By this means, the hacker observes the connection sequence and reproduces it for a man-in-the-middle attack.
• Spoofing: The purpose is to take the place of a mobile station in order to access network services. The attack is achieved using the MAC address of a mobile station. It could be the consequence of a Fake AP attack.
• Probing and network discovery: This is the first step of an attack: knowing that a network exists before attacking it.
Wardriving is done by hackers who move around (in a car, on foot, by plane) using a mobile radio station to locate radio networks. Using a GPS receiver improves the localisation
of detected networks. Wardriving software is available from the Internet and the equipment is easy to get or to build (some cookie boxes provide excellent antennas). Warchalking consists of tagging the location of available networks (possibly with chalk). Some tools are able to detect wardriving attacks.
• Sensitive network information fetching: This concerns the information that the hacker needs in order to access the attacked network, and the information that constitutes the enterprise's property.
° Intrusion by sniffing: This attack is the same as for an Ethernet network. It requires a radio sniffer that intercepts session-opening messages and catches the login/password. As hackers just receive and never transmit, the operation is undetectable. Afterwards, the hacker accesses the network like any authorised user and can send false commands and viruses. A characteristic of this attack is that it can be conducted far from the
enterprise, for example on the laptop of an employee who joins his or her intranet from an airport hot spot.
° Eavesdropping: This consists of observing the traffic on the network. The main protection against eavesdropping is encryption.
• Rebound attack: In this case, the hacker uses the ad-hoc networking facility to access the network via an authorised mobile station. He just has to set up an ad-hoc connection with the attacked station, which may be mobile or fixed with the Wi-Fi option enabled (see Figure 2). It is recommended to disable the Wi-Fi option on mobile stations when unused and to forbid ad-hoc connections.
Basic Defences
Naturally, most of the security protections of wired networks can be applied to wireless networks. However, as seen above, some specific attacks are due to the radio aspect and require the adapted tools and defences described hereafter.
Figure 2. Rebound attack
• Network Monitoring: A good defence is to observe the network in order to be informed if "something strange" occurs.
° The IDS: An intrusion detection system (IDS) especially designed for wireless networks is generally used against network intrusions. An IDS correlates several suspect events and tries to determine whether they could be due to an intrusion. The IDS is integrated in the Wi-Fi switch and works in real time. It monitors all exchanges and Wi-Fi flows in order to detect any risk or abnormal event as soon as possible. In case of detection, it alerts the network administrator. Enhanced systems are able to detect weak WEP, Rogue APs and wireless bridges. They can also locate devices responsible for DoS attacks and detect spoofing or ASLEAP (a tool used to crack LEAP) attacks. These functions are based on information held by the Wi-Fi switch, enriched by each occurring event: connection, authentication, roaming or modification of equipment characteristics.
° Traffic monitoring: A particularly efficient prevention against spoofing is to permanently observe the Wi-Fi traffic and the traffic on the wired network in order to detect any inconsistent situation. The goal is to detect an unforeseen device (access point or station), the duplication of a station or access point, or a change of location of an access point. To do that, one solution is to check that the traffic generated by well-known Wi-Fi stations goes through the appropriate LANs. Another means is to join the indication of radio link level to the MAC address of each mobile station: if a MAC address appears at the same time with two different levels, the mobile is
quarantined and an alert is sent to the network administrator (a simple sketch of this duplicate-MAC check is given after this list). Devices that supervise communication flows check that communications issued from APs do not reach the network by an illicit circuit, typically a Rogue AP. Conversely, they check that these communications appear on the wired network after having crossed the protection equipment (firewall or switch) and are not diverted to a pirate network via a Fake AP.
° Radio monitoring: The Wi-Fi working mode imposes that an AP can only operate on the radio channel to which it has been attached and, consequently, it cannot supervise other channels. To do that, passive monitoring access points, in reception only, scan all radio channels in order to check the correct operation of neighbouring access points. Monitoring APs are able to detect and monitor low-level signals issued from relatively distant devices; thus, they cover a larger range than active APs. The traffic of a monitoring AP is carried to the switch, which checks that no mobile station is connected to an unreferenced AP. Radio monitoring also ensures the protection of wired networks against illicit radio communications (Rogue AP). In this case, the network administrator deploys a radio network just to detect illicit Wi-Fi transmissions.
° Forced detachment: A frequently used defence is to force the detachment of suspect stations or stations attached to a suspect AP. The Wi-Fi network is then not reachable again by pirate stations, which are unable to set up a complete connection. This feature brings an efficient protection, but the problem must
be definitively solved by an intervention on the station or AP at the origin of the danger. Tools facilitate the localisation of the devices involved.
° Audit of radio coverage: When installing the access points, it is important to check that the radio coverage does not spread far beyond the required area, even if this does not completely protect from hackers who use amplifiers that give them a radio signal far beyond the nominal coverage. Adequate location of access points and antennas provides optimal coverage. This coverage must be periodically checked afterwards, to make sure that no pirate access point has been added to the network (Rogue AP). This precaution also applies to wired networks; some users have had the surprise of discovering a radio coverage that they never installed.
• Network Engineering
° The switch: If the access points are connected to a hub and not to a switch, any data directed to any fixed or mobile station is broadcast on the radio network and thus can be intercepted by a sniffer. It is strongly recommended to deploy WLANs on switches instead of hubs and to control the traffic between the mobile stations and the wired network. There are two types of WLAN architectures:
▪ The first one is based on a standard switch. Access points integrate radio networking and security functions. The switch manages both fixed and mobile stations.
▪ The second one is based on a WLAN-dedicated switch that manages radio, networking and security functions. Access points
are used just as emitters/receivers. This second configuration has a better resistance against Rogue AP attacks, because adding an AP needs an intervention on the switch. Note that ideally the WLAN switch should manage several queues to provide flow control with QoS, typically for the transmission of voice over IP.
° Firewall: As for wired networks, the best protection is to install a firewall between the WLAN and the wired network. When present, it is integrated in the WLAN switch. This firewall shall manage protections at the addressing level, provide filters, log connections, manage the access control lists (ACL) used for access filtering, and monitor the connections ("stateful" behaviour), in order to maintain the same security level as on a wired network. All devices in relation with the wireless network (inside and outside the enterprise) shall be considered as insecure points. They must be installed in a DMZ, and VPN authentication and encryption mechanisms must be activated.
° VLAN: A precaution is to split the network in order to isolate strategic data from the radio network. For that, the WLAN is deployed on a dedicated virtual LAN (VLAN) structure. The network may contain several VLANs, each of them associated with a WLAN subnet with its own SSID. Radio subnets are installed in the DeMilitarized Zone (DMZ) of a firewall that controls the transactions between the radio network and the wired network. It is strongly recommended to connect all VLANs on the WLAN switch,
even if no traffic has to transit through the switch. By this means, the switch locates all devices, updates its network description database and detects abnormal flows or equipment on a segment where they should not appear.
° Honey pot: The WLAN configuration may integrate honey pots made of access points with poor protection that only give access to insignificant data. They will attract hackers and keep them out of the protected network.
° VPN: The Virtual Private Network (VPN) provides a ciphered tunnel that constitutes an efficient protection, in particular for users in unsecured areas, like public hot spots. A VPN protects the link in the same way as for a wired nomadic station connecting via a telephone modem. The VPN ensures encryption and mutual authentication and protects the traffic between the client station and the Wi-Fi switch. The latter manages the end points of all client VPNs and delivers safe traffic to the LAN to which it is connected.
• Mobile station configuration
° Forbid "ad hoc" networking: Mobile stations, as well as fixed ones equipped with the Wi-Fi option, shall be configured to reject ad-hoc connections, that is, to forbid direct connections that do not go through a network access point. This protects from hackers who would try a rebound attack. This is a major precaution for users who are used to joining their enterprise from a public hot spot. Fixed stations are invited to disable their Wi-Fi option when unused.
° Firewall: It is strongly recommended to use a personal firewall on nomadic stations in order to filter unexpected incoming accesses and to limit outgoing connections.
° Radio throughput control: This is a usual protection against Fake APs that are located at some distance from the enterprise and thus are received with a low radio level (and consequently transmit at a low bitrate). It consists of forbidding mobile stations to connect below a given bitrate (e.g., 1 or 2 Mbps), because such a connection is a priori inconsistent with the network engineering design.
• Radio defences
° Lures: This kind of defence, specific to Wi-Fi networks, is a reaction against wardriving ("Fake AP" by Black Alchemy). It consists of broadcasting a large number of false frames with random SSIDs (network identifiers), MAC addresses and channel numbers. Wardrivers detect a vast number of networks and are unable to find the right one.
• Security at application level: Application software supports the security of carried data without having to protect the association between the mobile station and the access point. The information can be intercepted, but it is unusable.
° Encryption: Standard protocols like transport layer security (TLS) may be used in this scope.
° Authentication by Web server: This is well suited to hot-spot type connections. When connecting, the user is directed to a Web portal resident in the WLAN switch. Authentication is done by a login/password sequence. The link between the client and the server is secured by TLS and the authentication is done via a local authentication database. In return, the server assigns a category that defines the user's VLAN, rights, etc. For example, if the user is known but has no more credit, he will be redirected to a page that invites him to renew his subscription. For a complete security
of communications authenticated by a Web server, it is recommended, after the authentication phase, to set up a VPN client, which can be downloaded (Dialer VPN).
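As referenced in the traffic monitoring item above, one simple spoofing indicator is the same MAC address being observed at the same time with clearly different radio signal levels. The sketch below is a minimal illustration of that heuristic over a list of observation records; the record format, threshold value and quarantine action are invented for the example and do not correspond to any particular Wi-Fi switch product.

```python
from collections import defaultdict

# Hypothetical observations collected by monitoring APs within one time window:
# (MAC address, received signal strength in dBm).
observations = [
    ("00:13:ce:12:34:56", -42),
    ("00:13:ce:12:34:56", -44),   # consistent with the first reading
    ("00:0e:35:aa:bb:cc", -50),
    ("00:0e:35:aa:bb:cc", -85),   # same MAC, very different level: possible spoofing
]

LEVEL_GAP_THRESHOLD = 25  # dB difference considered inconsistent (illustrative value)


def suspicious_macs(records: list[tuple[str, int]]) -> set[str]:
    """Flag MAC addresses seen with signal levels further apart than the threshold."""
    levels = defaultdict(list)
    for mac, rssi in records:
        levels[mac].append(rssi)
    return {mac for mac, seen in levels.items()
            if max(seen) - min(seen) > LEVEL_GAP_THRESHOLD}


for mac in suspicious_macs(observations):
    # A real switch would quarantine the station and alert the administrator.
    print(f"ALERT: {mac} observed with inconsistent radio levels; quarantine and notify admin")
```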
IEEE 802.11 security features
Basic Protocol Mechanisms
Connecting a mobile node, such as a laptop, to the WLAN network is a process called association. To be able to do this, the mobile node needs to find a suitable access point (AP). Access points are identified using the network name (Basic Service Set Identifier, BSSID, or Extended Service Set Identifier, ESSID; for short, SSID). The access points can be configured using two methods. The first option is that the access point periodically sends the SSID in plaintext, and the mobile terminals can then decide to ask for association with this access point. Manufacturers deliver equipment with a standard SSID, and lists of manufacturers' SSIDs are naturally available from the Internet; a first precaution is to change it to another one. It is recommended to configure the access point to be mute, and just to listen to requests for association from the mobile nodes. A first level imposes that the client station send a connection request to discover the networks in range; when receiving it, APs in range send their SSID. This is a good precaution, but a poor protection: hackers just have to wait for the arrival of a mobile station, and office opening hours are a very convenient time for SSID interception. A second level imposes that APs do not answer broadcasts sent by mobile stations requesting access. In this case, mobile stations must know the SSID to attach to the network. This may seem secure at first glance, but the added feeling of security is futile: an attacker can get the necessary information by passively
listening to the network traffic: as soon as the first legitimate association request is heard, it can find out the network identifier, which is present in the request as plaintext. Even worse, the attacker may force a legitimate node to disassociate from an access point by sending a disassociation request to it. The legitimate node will then try to reassociate immediately, thus revealing the network name. Note: the inhibition of the SSID exchange can block the attachment of some NICs (network interface cards) whose implementations require this step in their connection process. Changing and hiding the SSID is better than nothing, but not enough!
• MAC address filtering: The second noncryptographic security feature in IEEE WLANs is MAC filtering. This method uses the unique link layer (MAC) address of the WLAN network card to identify legitimate users. The system administrator uses network configuration tools to give the access points a list of valid MAC addresses, called an ACL (access control list). The access points refuse to answer any messages received from network cards that are not listed in the ACL. This security feature is also easy to bypass: just listen to the network traffic for a while and, when a legitimate node leaves the network, set your network card to use its MAC address instead, impersonating the legitimate node. Filtering also constitutes a constraint for the network administrator, because the introduction of a new station needs an intervention on the network; address filtering is usable only if the set of stations is limited and stable. MAC address filtering is better than nothing, but not perfect!
• Encryption with WEP: There are several security features that are based on cryptography. The original and most widely deployed one is called wired equivalent privacy (WEP).
It was defined in the initial IEEE 802.11 standard. WEP can be used in conjunction with the aforementioned noncryptographic features, or on its own. WEP provides authentication and encryption with a 40 to 128 bit key length. The system is based on a shared key that is configured both in the mobile node and in the access point. Using this key, the mobile node is authenticated when it associates with the access point. The access point is not authenticated. One problem with this authentication is that, since the WEP key is the same for all nodes, the nodes cannot be distinguished from each other during authentication. WEP uses data encryption on the radio link for providing confidentiality and integrity. The integrity mechanism uses a linear CRC algorithm, with which an attacker is able to flip bits in the packet without the risk of being detected. The weaknesses of WEP arise from the following factors:
• The pseudo-random sequence is computed by a linear algorithm and is thus easily predictable.
• The key is static and common to all access points and mobile stations.
• The integrity control is weak and does not efficiently filter frame alterations.
• Sequences are not numbered; this facilitates replay attacks.
• The pseudo-random sequence is initialised by means of a 24-bit vector transmitted in clear on the air interface and thus easily intercepted by sniffing.
Fluhrer, Mantin, and Shamir (2001) described an attack (FMS) that allows finding the secret key used in WEP in a reasonable time. The WEP algorithm uses the RC4 stream cipher in a mode where the actual key used consists of two parts: a known part called the initialization vector (IV), which
is concatenated with the secret key. The RC4 algorithm uses a key scheduling algorithm that generates a pseudo-random bit sequence from the concatenation of the IV and the key, and uses the generated bit sequence for encrypting the actual data by a simple "exclusive or" operation. The problem is that the algorithm for generating the bit stream carries some patterns of the original key into the resulting bit stream. The reason for the initialization vectors is that the RC4 algorithm produces an identical bit stream each time it is used with the same key. This would lead to a situation where knowing one plaintext bit of one packet would mean that the corresponding bit is known for all packets, reducing the strength of the algorithm considerably. Using IVs is supposed to prevent this, but the downside is that the first 24 bits of the key are now known, and since the standard format of an IP packet is also known, this can be used to guess more bytes of the key. The repetition of IVs can be used for decrypting messages even without knowing the key: if the attacker can inject traffic into the WLAN from the fixed network and collect the packets encrypted by the access point, it can get the necessary information for decrypting traffic (see, e.g., Barken, 2004); a small self-contained illustration of this keystream-reuse weakness is sketched just before the next section. The attack was first implemented by Stubblefield, Ioannidis, and Rubin (2001) and is now available in common cracking tools, such as AirSnort and WEPCrack. The attack requires several million packets to be captured; afterwards the actual cracking is done in seconds. Even if the FMS attack requires capturing a huge number of packets, it can be done quite fast if the attacker can inject packets into the network and capture them encrypted. In practice, cracking WEP needs less than two hours, and some hackers boast of doing it in fifteen minutes! To facilitate the hacker's task, there exist dictionaries of pseudo-random sequences indexed by the value of the initialization vector. Excellent software for WEP cracking is also available from the Internet (e.g., AirSnort or NetStumbler).
• Key renewal: Since the keys can be compromised in a reasonable time, the keys have to be changed within a reasonably short time. The problem in WEP is that all the nodes share the same unique ciphering key, which is static. The key should be changed in all nodes (laptops, etc.) and in the access points simultaneously and frequently. The 802.11 protocol does not contain key updating mechanisms, so this must be done manually and simultaneously on all radio devices. Even if this could be done on very small networks, in practice keys are never changed. Key renewal is good, but not realistic!
• Authentication: Since a WLAN using WEP does not authenticate the access point, the attacker can also impersonate an access point. To launch a man-in-the-middle attack, he may force a network node to reassociate by sending disassociation frames to it, and then let the node associate with the attacking node. This is easy if the attacker can position himself, or use equipment such that he can overpower the access point. With modified antennae it is easy to gain transmitting power, but they may look suspicious. Here,
we have a conflict between leaking the radio signal outside the premises and making a man-in-the-middle attack easier: for example, the Wi-Fi Alliance (2003) recommends low power for access points in order to reduce radio leakage outside company premises. Several enhancements have been introduced in order to solve some of these weaknesses, among them dynamic WEP with TKIP, which changes the key frequently; this is included in the 802.11i standard described next. Using WEP is better than nothing, but not enough!
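The FMS discussion above hinges on how WEP builds its per-packet RC4 key as IV concatenated with the secret key, and on what happens when a keystream is reused. The sketch below is a self-contained toy illustration: it implements plain RC4 and shows that two packets encrypted under the same IV and key leak the XOR of their plaintexts. It is a didactic model of the weakness, with invented sample values, and is not an implementation of the FMS attack or of any cracking tool.

```python
def rc4_keystream(key: bytes, length: int) -> bytes:
    """Plain RC4: key-scheduling (KSA) followed by keystream generation (PRGA)."""
    s = list(range(256))
    j = 0
    for i in range(256):                          # KSA
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    out, i, j = bytearray(), 0, 0
    for _ in range(length):                       # PRGA
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out.append(s[(s[i] + s[j]) % 256])
    return bytes(out)


def wep_like_encrypt(iv: bytes, secret: bytes, plaintext: bytes) -> bytes:
    """WEP-style use of RC4: per-packet key is IV || secret; ciphertext = plaintext XOR keystream."""
    keystream = rc4_keystream(iv + secret, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, keystream))


iv = bytes([0x01, 0x02, 0x03])                    # 24-bit IV, sent in clear on the air
secret = b"40bitkey"                              # shared WEP secret (toy value)

p1 = b"GET /index.html HTTP/1.0"
p2 = b"user=alice&pass=secret!!"
c1 = wep_like_encrypt(iv, secret, p1)
c2 = wep_like_encrypt(iv, secret, p2)

# Reusing the IV reuses the keystream, so the XOR of the ciphertexts
# equals the XOR of the plaintexts, leaking information without the key.
xor_of_ciphertexts = bytes(a ^ b for a, b in zip(c1, c2))
xor_of_plaintexts = bytes(a ^ b for a, b in zip(p1, p2))
assert xor_of_ciphertexts == xor_of_plaintexts
print("keystream reuse leaks P1 XOR P2:", xor_of_ciphertexts.hex())
```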
WiFi Protected Access
Conscious of WEP's weaknesses, the IEEE designed a complementary protocol named 802.11i, which was ratified in June 2004 and is known as WPA 2 (Wi-Fi Protected Access 2). Before that, WPA 1.0 with TKIP was introduced as a first step, considered a subpart of 802.11i (see Figure 3). 802.11i relies on TKIP and 802.1x features with an increased key length, a MIC integrity code and packet sequencing. However, the major innovation is the particularly efficient AES encryption, which is also compatible with QoS requirements.
Figure 3. Key dates for 802.1x security
(Timeline: 1997, 802.11 ratified including WEP; 1999, 802.11b (Wi-Fi) ratified with WEP; 2000, wardriving starts and the IEEE decides to launch 802.11i; 2002, introduction of WPA 1; 2004, 802.11i ratified.)
• WPA 1.0: WPA 1.0 is based on IEEE 802.1x, which relies on a remote authentication dial-in user service (RADIUS) authentication server. The WLAN switch plays the role of a modem concentrator (RAS or BAS) in a traditional architecture. WPA 1.0 uses the temporal key integrity protocol (TKIP), which manages key generation and dynamic key exchange on top of WEP. In brief:
° 802.1x is a network access protocol that applies to any type of LAN, radio or wired. It defines a framework for WEP or AES implementation.
° EAP, initially designed for PPP, is an authentication transport protocol; authentication is carried by an upper-layer application on a RADIUS server.
° RADIUS is a client/server protocol that manages user accounts and access rights in a centralised way. It supports various authentication mechanisms, including EAP.
° The RADIUS server is an authentication, authorisation and accounting (AAA) server whose communications with the clients are managed by the RADIUS protocol.
WiFi protected access (WPA), originally called WEP2, was created in the IEEE 802.11i working group. It aims at correcting the security flaws of WEP while preserving as much as possible of the original WEP mechanism, so that a simple firmware update suffices for upgrading equipment. There are two main differences: encryption uses TKIP for generating the RC4 keys instead of the old secret key + IV mechanism, and a new access control mechanism (IEEE 802.1X) is introduced.
• Encryption with TKIP: Keeping the WEP design model, TKIP brings mechanisms to enhance resistance against attacks; in particular, it solves the problem of cyclic key reuse:
° The common key shared by mobile stations and access points is changed every 10,000 packets.
° Common keys are renewed by dynamic distribution.
° The MAC address of the station is introduced in the generation of key sequences; thus each station has its own sequence.
° The initialization vector is incremented with each packet, so it is possible to reject packets replayed with an old packet number.
° An integrity code, ICV (computed according to a MIC algorithm named MICHAEL), introduces a notion of "ciphered CRC".
Upgrading WEP equipment needs just a software update. Note also that all WPA 1.0 products are upward compatible with 802.11i. As WPA 1.0 is compatible with WEP, WEP and WPA 1.0 devices are interoperable, but only with the WEP protection level. The temporal key integrity protocol (TKIP) is used for generating per-packet keys for the RC4 ciphering. The way RC4 is used in WEP is problematic because of the way the keys are constructed. TKIP solves this by using a key mixing mechanism that creates a new key from three sources: a temporal key, which is shared between the node and the access point but is not necessarily the same for all users (this key is called "temporal" since it is changed frequently); the sender's MAC address, which is combined with the temporal key using the exclusive-OR operation, resulting in different keys for upstream and downstream transmissions; and a sequence number for the packet. This sequence number replaces the initialization vector, making each transmission use a different key, but the mixing is done differently. While WEP just concatenates the IV and the key, WPA uses the mixed temporal key and sender's MAC address for encrypting the sequence number, producing
a 128-bit key, of which the first three bytes are given to the original WEP algorithm as the IV and the rest as the user's key. As a result, it is not possible to use the FMS attack to crack the key, as the key is different for each packet (a toy illustration of per-packet key derivation is sketched at the end of this list). There are two modes for TKIP key management. The first one uses a pre-shared key (PSK) from which the temporal keys are derived. The PSK needs to be remembered by the user; to ease this, the PSK is created from a pass-phrase using a hashing algorithm. The problem with this approach is that the pass-phrase needs to be sufficiently long in order to give any real protection against an attack. The attacker can use a short denial of service attack to temporarily disconnect a legitimate node and record the re-association procedure. Then the attacker can just use a dictionary or brute-force attack on the captured packets to get the key.
• Access control: The second option is to use IEEE 802.1X and the extensible authentication protocol (EAP) for providing both the mobile node and the access point with the key during association. When using this mode, the access point blocks access to the network until the node has authenticated with an authentication server (a RADIUS server). The authentication is mutual and, if successful, the authentication server and the mobile node both generate a key, the pairwise master key. This key is new for each session, so the attacker does not benefit much even if he is able to get it. This key can then be used in the access point for generating temporal keys (for privacy and integrity) and key encryption keys that enable rekeying later. Several layers are defined over EAP to support security policies (in increasing order of security):
° EAP-MD5 uses a RADIUS server that just checks a hash code of the mobile station's password and does not manage
mutual authentication. This protocol is not recommended for radio networks, because the mobile station could be connected to a honey pot.
° LEAP, developed by Cisco, provides better protection against the disclosure of passwords to an unauthorised third party, but it is vulnerable to dictionary-type attacks.
° PEAP and EAP-TTLS use a RADIUS server and are based on the exchange of certificates. RADIUS servers that support PEAP and EAP-TTLS may use external databases (e.g., a Windows domain or LDAP directories). They are particularly resistant to dictionary-type attacks.
° EAP-TLS is the most recommended standard for WLAN security. It uses a RADIUS authentication server. Mobile stations and the server must mutually authenticate, using certificates. The transaction is secured by means of a ciphered tunnel. It is also particularly resistant to dictionary-type attacks.
Even this type of key management has a problem: reauthentication with the authentication server would be too slow to support handovers with real-time traffic (such as VoIP). This would require the access points to be able to pass keys to each other.
• Integrity: TKIP also uses a different checksum algorithm than WEP. The WEP CRC algorithm is not cryptographically secure, so a new message integrity code (MIC) algorithm, Michael, was developed. Michael uses a MIC key and the sender's and receiver's MAC addresses as keys, and results in a 64-bit message integrity code which is appended to the plaintext before ciphering. The effective security level of Michael is assumed to be around 20 bits, meaning that an attacker making an active attack against Michael could succeed in around
2^20 messages, which might be possible in a couple of minutes. In order to prevent this, TKIP uses a mechanism where a node disassociates from the network and deletes its keys if it detects two failed forgeries within a second. It will reassociate with the network after a minute. In addition, a mechanism against replay attacks is implemented: a sequence number is used for each packet. If a packet with a lower sequence number than one used earlier is received, it is discarded.
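The per-packet key mixing described above is what defeats FMS-style key recovery. The sketch below is only a toy illustration of the principle that the per-packet key depends on a temporal key, the sender's MAC address and a packet sequence number; it uses a simple hash-based derivation and is explicitly not the standardized two-phase TKIP mixing function. All sample values are invented.

```python
import hashlib


def per_packet_key(temporal_key: bytes, sender_mac: bytes, seq_num: int) -> bytes:
    """Toy per-packet key derivation in the spirit of TKIP (NOT the real TKIP
    mixing function): the key changes with the sender and with every packet."""
    material = temporal_key + sender_mac + seq_num.to_bytes(6, "big")
    return hashlib.sha256(material).digest()[:16]   # 128-bit per-packet key


temporal_key = bytes(16)                   # placeholder temporal key
mac_station = bytes.fromhex("0013ce012345")  # hypothetical station MAC
mac_ap = bytes.fromhex("0013ce6789ab")       # hypothetical AP MAC

# A different direction or a different sequence number gives a different key,
# so an attacker never sees many packets encrypted under one RC4 key.
k1 = per_packet_key(temporal_key, mac_station, 1)
k2 = per_packet_key(temporal_key, mac_station, 2)
k3 = per_packet_key(temporal_key, mac_ap, 1)
assert len({k1, k2, k3}) == 3
print(k1.hex(), k2.hex(), k3.hex(), sep="\n")
```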
WPA 2 is the result of the 802.11i standardisation process, which was ratified in June 2004. It relies on most of the improvements of WPA 1 but uses the more powerful advanced encryption standard (AES), a block ciphering algorithm using symmetric keys of 128, 192 or 256 bit length. The processing power needed by AES requires a coprocessor, already present (as a reserve for the future) on equipment delivered with WPA 1.0; older WEP equipment needs a hardware modification to be upgraded to WPA 2. Even if these algorithms and protocols do protect the wireless network from unauthorized use or eavesdropping, the nature of the network makes it prone to denial of service attacks. It is easy for an attacker to jam the frequency band or, more cleverly, to inject traffic to the access points so that the network is not able to serve legitimate users.
The security of wireless radio networks
Application
Let us forget the old WEP and consider just WPA. The required security level depends on the usage.
• For the enterprise: WPA Enterprise needs a RADIUS server with EAP-TLS, EAP-TTLS and PEAP, plus a network controller. This costly solution provides high-level security for large and small networks. In this implementation a VPN is not useful, and it is even not recommended if the network supports VoIP, because it degrades the quality of service.
• For residential and SoHo usage: WPA Personal does not need a RADIUS server but uses a pre-shared key (PSK) distributed to the access points and mobile stations. This constitutes a flexible and cheap solution for small networks, in particular for SoHo and domestic usage.
• Hot spots: In order to facilitate access, no security mechanism is implemented on public hot spots. In particular, there is no authentication of the Wi-Fi network, because the user does not know what it is when connecting, and vice versa. In this case, authentication is done by a Web portal. The user has to manage his own protection by VPN, by ciphering at the application level and by configuring his station properly in order to reject intrusions from others.
Other technologies
Bluetooth
Bluetooth is well known for point-to-point applications rather than for building WLANs, even if this is technically possible. Even though Bluetooth integrates effective security mechanisms, they are rarely used. The short range (10 meters) of a Bluetooth link is seen as a protection against most intrusions, but do not forget that in a public hot spot users are generally less than 10 meters from each other. On the other hand, the Bluetooth standard foresees several transmission power levels: 10 mW, which is the most frequently used for
wireless connections, and 100 mW, which transmits in a range comparable to Wi-Fi. Some 100 mW devices are available, in particular for building small-size WLANs that are as vulnerable as Wi-Fi WLANs. Vigilance is particularly recommended when using a Bluetooth wireless keyboard: keyed-in codes, in particular passwords, go through the radio link and can be detected within the range of the Bluetooth transmission. A new form of spamming has appeared, named "bluejacking", which seems promised a great future. It consists of sending spam to the screen of bi-standard GSM-Bluetooth mobile phones that are in range of the spammer. The short range is not a limitation, because the spammer is also mobile. Another Bluetooth attack involves bi-standard mobile phones and nomadic stations. It takes advantage of a weakness of Bluetooth that makes it possible to fetch remotely the information stored in a handset. Attacks are generally done in public areas. By this means, the hacker gets the coordinates of the service provider and the login/password that a victim close to him has just received by SMS. A precaution is to disable the Bluetooth option of the mobile phone when unused.
HiperLAN/2

HiperLAN/2 is an ETSI standard, a competitor of 802.11a, which has not had any commercial success at this time. HiperLAN/2 natively integrates efficient mechanisms for encryption, authentication and dynamic key assignment.
2G/2.5G/3G Networks

GSM uses strong security mechanisms that can nevertheless be attacked (Barkan et al., 2003), but UMTS networks use improved mechanisms that can be considered reliable. They ensure the user's authentication as well as the confidentiality of the user's identity, signalling and exchanged information. Security is managed at three places:

• In the SIM card: personal security information (authentication key, algorithms for authentication and key generation, PIN)
• In the terminal: ciphering algorithm
• In the network: ciphering algorithm and authentication server

The terminal identifier is hidden and replaced by a temporary identifier allocated when registering on a relay. This constitutes an efficient protection against interception.
Future Trends

• 802.11 technology: The 802.11 technology is now mature and its variant 802.11g, with 54 Mbps throughput, is widely deployed. The standard is enhanced by protocol extensions: 802.11i brings a high level of security and 802.11e provides QoS (quality of service) for transmitting video and voice in the best conditions. The IEEE is still working on other evolutions of the 802.11 standard, in particular for high-speed data (> 100 Mbps) and for meshed networks without a wired infrastructure.
• Ad-hoc networks: Research is also under way in the domain of ad-hoc networks, in particular in the scope of public safety and defence, in order to provide broadband communication at the place of an intervention or in case of a major disaster when communication infrastructures have been destroyed. Ad-hoc networks are targets for new attacks: the fact that routing is done by the node terminals makes these networks particularly vulnerable.
• WiMAX: WiMAX is the commercial name of IEEE 802.16 and an interoperability label from the WiMAX Forum.
WiMAX can support throughputs of 70 Mbps over a range of some tens of kilometres in line of sight. This makes WiMAX well suited for metropolitan networks; in particular, it could be an alternative for Internet distribution in low-density areas. The first step, 802.16a, addresses only fixed radio-connected stations. The next step, 802.16e, recently ratified, integrates mobility. In contrast with Wi-Fi, stations must be registered in the system to be allowed to access the WiMAX network, a feature that does not facilitate hackers' attacks. Authentication uses certificates and asymmetric ciphering; encryption uses a key generated during the authentication sequence and a 3-DES algorithm.
• 802.20: The 802.20 standard will address WAN structures for mobile users. In contrast with 3G networks designed for voice and data, 802.20 networks will be dedicated to mobile Internet with an asymmetrical throughput, like ADSL. Mutual authentication uses certificates with RSA signatures, and symmetric encryption keys are exchanged during authentication. At the present time, 802.20 is under study at the IEEE, and no date of ratification is foreseen.
Conclusion

WLAN networks are insecure, and attacking a poorly configured WLAN is easy. There are plenty of different security solutions for WLANs, each having its weaknesses. Even if the network can be protected against intrusion attempts, the attacker always has the possibility to launch a denial-of-service attack to shut down the network operation. There are two main paths for the system designer to choose from:
• To accept that the WLAN is insecure and to treat it as if it were a public hot spot, using VPN solutions for the mobile nodes. This has the advantage that it is possible to provide Internet access for visitors.
• To set up a full authentication, encryption and key management infrastructure with WPA or WPA2. With this option, it is essential to drop support for legacy equipment that is not able to use the full set of security options: providing compatibility means opening security holes.
Both of these approaches require that the essential IT infrastructure remain on a wired network, to counter the ease of denial-of-service attacks against WLANs. One can say that wireless technologies meet the requirements of nomadic users who aim to obtain the same level of service and security as in their office. Even if security has been a great concern in the past, we may now consider that, using appropriate technologies and engineering, wireless networks are as safe as traditional wired networks.
References

Barkan, E., Biham, E., & Keller, N. (2003). Instant ciphertext-only cryptanalysis of GSM encrypted communication. In Advances in Cryptology, CRYPTO 2003 (pp. 600-616). Springer.

Barken, L. (2004). How secure is your wireless network? Upper Saddle River, NJ: Prentice Hall.

Fluhrer, S., Mantin, I., & Shamir, A. (2001). Weaknesses in the key scheduling algorithm of RC4. In S. Vaudenay & A. Youssef (Eds.), Selected areas in cryptography (pp. 1-24). Springer.

Germain, M., & Ferrero, A. (2006). Les réseaux particuliers
– La sécurité des réseaux sans fil (M. Germain, Trans.). In La sécurité à l'usage des décideurs (pp. 154-166). Paris: Éditions Tenor.

IAIK. (2005). AES lounge. Retrieved from http://www.iaik.tu-graz.ac.at/research/krypto/AES/
Stubblefield, A., Ioannidis, J., & Rubin, A. (2001). Using the Fluhrer, Mantin, and Shamir attack to break WEP. (Tech. Rep. TD-4ZCPZZ). AT&T Labs. Wi-Fi Alliance. (2003). Enterprise solutions for wireless LAN security.
Chapter V
Interoperability Among Intrusion Detection Systems

Mário M. Freire
University of Beira Interior, Portugal
ABSTRACT

This chapter addresses the problem of interoperability among intrusion detection systems. It presents a classification and a brief description of intrusion detection systems, taking into account several issues such as information sources, analysis of intrusion detection systems, response options for intrusion detection systems, analysis timing, control strategy, and architecture of intrusion detection systems. The problem of information exchange among intrusion detection systems is also discussed, addressing the intrusion detection exchange protocol and a format for the exchange of information among intrusion detection systems, called the intrusion detection message exchange format. The lack of a format for the answers or countermeasures interchanged between the components of intrusion detection systems is also discussed, as well as some future trends in this area.
INTRODUCTION

Security incidents are becoming a serious problem in enterprise networked systems. Due to this problem, intrusion detection systems (IDS) are attracting an increasing commercial importance. This kind of system is used in enterprise network security to attempt the identification and tracking of attacks against networked systems. Nowadays, several commercial and free intrusion detection
systems are available. Some of them are intended for detecting intrusions on the network, others are intended for host operating systems, while still others are intended for applications. Tools of these categories may have very different strengths and weaknesses. Therefore, it is likely that network and systems administrators deploy more than one kind of IDS, and administrators may want to analyse the output of these tools from different systems. Therefore, the existence
of a standard format for reporting may simplify this task. Moreover, intrusions frequently occur in several organizations or in several sites of the same organization, which may use different IDS. Therefore, it would be very helpful to correlate such distributed intrusions across multiple sites and administrative domains. Thus, a common format is required in order to allow an easy interconnection of different IDS. For these reasons, the Intrusion Detection Working Group (IDWG) of the Internet Engineering Task Force (IETF) has recently carried out standardization activities regarding IDS, whose objective is to define data formats and exchange procedures for sharing information among intrusion detection and response systems. As a result, a new protocol and a format for the exchange of information among IDS were specified. The specified protocol is called the intrusion detection exchange protocol (IDXP) (Buchheim, Erlinger, et al., 2001; Feinstein, Matthews, & White, 2002) and the specified data format is called the intrusion detection message exchange format (IDMEF) (Debar, Curry, & Feinstein, 2005; Wood & Erlinger, 2002). The model specified by the IDWG group does not define the format of the answers or countermeasures interchanged among the components of IDS. Without the definition of a common format for the exchange of answers, it is not possible to get total interoperability between IDS of different manufacturers. Moreover, the use of the incident object description exchange format (IODEF) for incident handling should also be considered regarding real-time network defense. This chapter will provide an overview of intrusion detection systems and will discuss how information regarding the detection of an intrusion may be shared with other intrusion detection systems.
INTRUSION DETECTION SYSTEMS

Early work on intrusion detection was reported by Anderson (1980) and Denning (1987) and, since then, it has been the subject of intense research activities from both academia and industry. Early intrusion detection systems were based either on the use of simple rule-based techniques to detect very specific patterns of intrusive behaviour or on the analysis of historical activity profiles to confirm legitimate behaviour. Nowadays, intrusion detection systems may use data-mining and machine-learning techniques for the dynamic collection of new intrusion signatures, which allows for relatively general expressions of what may constitute intrusive behaviour. Other modern intrusion detection systems may use a mixture of sophisticated statistical and forecasting techniques to predict what is legitimate activity (Almgren & Jonsson, 2004; Abad et al., 2003; Carey, Mohay, & Clark, 2003). In the context of computer security, an intrusion can be defined as any set of actions that attempt to compromise the integrity, confidentiality or availability of a resource. An intruder can be internal or external. External intruders do not have any authorized access to the system they attack, while internal intruders have some kind of authority and therefore some legitimate access, but seek to gain additional ability to take action without legitimate authorization (Jones & Lin, 2001). Intrusion detection is the process of monitoring the events occurring in a computer system or network and analyzing them for signs of intrusions. Intrusion detection systems (IDS) are software or hardware tools that automate this monitoring and analysis process and report any anomalous events or any known patterns indicating potential intrusions (Bace & Mell, 2001; Wan & Yang, 2000).
There are several types of IDS currently available, which are characterized by different monitoring and analysis approaches. Most of the currently available IDS can be classified according to three fundamental functional components: information sources, analysis, and response (Bace & Mell, 2001). Nevertheless, there are other issues that may also be taken into account for the classification of IDS (Kazienko & Dorosz, 2004). An overview of the classification of IDS is presented in Figure 1. In the following, a brief description of some of the categories into which IDS may be classified is presented.
Information Sources The most common classification of IDS is based on the kind of information source used to determine whether an intrusion has occurred. Most common information sources are network, host, and application monitoring (see Figure 1; Bace & Mell, 2001; Coull, Branch, Szymanski, & Breimer, 2003; Feng et al., 2004; Gopalakrishna, Spafford, & Vitek, 2005; Lindqvist & Porras, 2001; Kruegel, Valeur, Vigna, & Kemmerer, 2002; Rubin, Jha, & Miller, 2004, 2005; Shankar & Paxson, 2003). The larger part of commercially available IDS is network-based. This kind of IDS analyzes network packets, captured from network backbones or local area network (LAN) segments, in order to detect attacks. Network-based IDS (NIDS) often consist of a set of single-purpose sensors or hosts placed at suitable points in a network. These sensors monitor network traffic, performing local analysis of that traffic and reporting attacks to a central management console. Since the sensors only support the IDS, they can be more easily secured against attacks and may also be configured to run in a stealth mode, in order to make more difficult to determine their presence and location in the network. These systems are usually designed to work as passive devices for monitoring the network traffic without interference with the normal operation of the network
(Bace & Mell, 2001). However, NIDS have some limitations: they may be unable to analyze, without network performance degradation, all the packets in a large or busy network or under very high-speed operation at LAN interfaces. Moreover, as we move from shared topologies to switched-per-port LAN topologies, most switches limit the monitoring range of a NIDS to a single host, unless the switch provides universal monitoring ports. Besides, NIDS are unable to analyze encrypted information, which becomes a particular problem when virtual private networks (VPNs) are used. Host-based IDS (HIDS) analyze information sources generated by the operating system or application software, trying to find an intrusion. Application-based IDS (AIDS) are a special subset of host-based IDS. HIDS can analyze activities with great reliability and precision, determining
Figure 1. Classification of intrusion detection systems: by information sources (network-based IDS, host-based IDS, application-based IDS), analysis approaches (misuse/signature detection, anomaly detection, specification-based detection), response options (active, passive, hybrid), analysis timing (interval-based, real-time), control strategy (centralized, partially distributed, fully distributed), and IDS architecture (host-target co-location, host-target separation)
exactly which processes and users are involved in a particular attack on a given operating system. Moreover, unlike NIDS, HIDS can see the outcome of an attempted attack, as they can directly access and monitor the data files and system processes usually targeted by attacks. HIDS usually use two types of information sources: operating system audit trails and system logs. Operating system audit trails are normally generated at the kernel of the operating system, and are therefore more detailed and better protected than system logs. However, system logs are much smaller than audit trails and far easier to understand. Some host-based IDS are designed to support a centralized IDS management and reporting infrastructure that can allow a single management console to track many hosts. Others generate messages in formats that are compatible with network management systems (Bace & Mell, 2001). Due to their ability to monitor events in the host, HIDS may detect attacks that cannot be detected by NIDS. Furthermore, unlike NIDS, HIDS are not directly affected by the evolution from shared topologies towards switched-per-port topologies, and HIDS can often operate in networked environments with encrypted traffic, namely when host-based information is generated before data encryption and/or after data decryption in the destination host. On the other hand, HIDS only read network packets destined to that host and are therefore unsuitable for network surveillance. Moreover, configuration and management of HIDS are more difficult, and they can be disabled during an attack. Besides, HIDS make use of host computing resources, which leads to a performance cost, and, when operating system audit trails are used, additional storage capability may be required due to the huge volume of information. As referred to above, application-based IDS (AIDS) are a special subset of host-based IDS. The most common information sources used by AIDS are the transaction log files of applications. The ability to interface with the application directly,
with significant domain or application-specific knowledge included in the analysis engine, allows application-based IDS to detect suspicious behavior due to authorized users exceeding their authorization. This is because such problems are more likely to appear in the interaction between the network administrator, the data, and the application (Bace & Mell, 2001). AIDS can monitor the interaction between user and application, which often allows them to trace unauthorized activity to individual users. Moreover, AIDS can often work in encrypted environments, since they interface with the application at transaction endpoints, where information is sent in an unencrypted form. On the other hand, AIDS may be more vulnerable than host-based IDS to attacks as the applications logs are not as well protected as the operating system audit trails used for host-based IDS. In any case, as AIDS often monitor events at the user level of abstraction, they usually cannot detect Trojan Horses or other similar attacks. Therefore, it is advisable to use application-based IDS jointly with host-based and/or network-based IDS.
IDS Analysis

Nowadays, there are three main intrusion detection approaches: the behavioural approach, also called anomaly detection; signature analysis, also called misuse detection; and specification-based detection (Ko, 2000). Anomaly detection is based on a statistical description of the normal behaviour of users or applications. The purpose is to detect any abnormal action performed by these users or applications. The second approach, called "misuse detection," is based on collecting attack signatures in order to store them in an attack base. The IDS then parses audit files to find patterns that match the description of an attack stored in the attack base. On the other hand, specification-based techniques detect the deviation of executing programs from their valid program behaviour. Misuse detection is the technique most used by
commercial systems. Anomaly detection has been the subject of intense research activities and is used in limited form by a number of IDS. None of these approaches is fully satisfactory, since they may generate many false positives (corresponding to false alerts) and false negatives (corresponding to non-detected attacks), and the alerts are too elementary and not accurate enough to be directly managed by a security administrator. There are strengths and weaknesses associated with each approach, and it appears that the most effective IDS use mostly misuse detection methods with a smattering of anomaly detection components (Cuppens & Miège, 2002; Ko, 2000; Kruegel et al., 2003; Li & Das, 2004; Mutz, Vigna, & Kemmerer, 2003; Tombini, Debar, Mé, & Ducassé, 2004; Vigna et al., 2004; Wu et al., 2003; Wu & Shao, 2005). Misuse detectors analyse system activity, looking for events or sets of events that match a predefined pattern of events that describes a known attack. Since the patterns corresponding to known attacks are called signatures, misuse detection is sometimes referred to as signature-based detection. The most common form of misuse detection used in commercial products specifies each pattern of events corresponding to an attack as a separate signature. However, there are more sophisticated approaches to misuse detection, called state-based analysis techniques, that allow a single signature to detect groups of attacks. Anomaly detectors identify abnormal (unusual) behaviour on a host or network. They function on the assumption that attacks are different from normal (legitimate) activity and can therefore be detected by systems that identify these differences. Anomaly detectors construct profiles representing the normal behaviour of users, hosts, or network connections. These profiles are constructed from historical data collected over a period of normal operation. The detectors then collect event data and use a variety of measures to determine when monitored activity deviates from the norm.
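As a minimal illustration of the contrast between these two approaches, the following Python sketch combines a toy misuse detector, which matches known attack signatures against log lines, with a toy anomaly detector, which flags event counts that deviate strongly from a historical profile. The signature patterns, threshold and log format are illustrative assumptions and do not correspond to any particular product.

import re
import statistics

# Toy misuse (signature) detection: known attack patterns (illustrative only)
SIGNATURES = [re.compile(r"/etc/passwd"), re.compile(r"cmd\.exe"), re.compile(r"' OR '1'='1")]

def misuse_alerts(log_lines):
    return [line for line in log_lines if any(sig.search(line) for sig in SIGNATURES)]

# Toy anomaly detection: flag counts far from the historical mean (z-score above a threshold)
def anomaly_alert(historical_counts, current_count, threshold=3.0):
    mean = statistics.mean(historical_counts)
    stdev = statistics.pstdev(historical_counts) or 1.0
    return (current_count - mean) / stdev > threshold

logs = ["GET /index.html", "GET /login?user=' OR '1'='1"]
print(misuse_alerts(logs))                       # flags the SQL-injection-like line
print(anomaly_alert([120, 130, 110, 125], 600))  # True: suspicious burst of events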
Specification-based techniques detect the deviation of executing programs from their correct behaviour; they assume that penetrations often cause privileged programs to behave differently from their intended behaviour, which, for most programs, is fairly regular and can be described concisely. They lead to a very low false alarm rate and are able to explain why the deviation is an intrusion. Specification-based techniques are promising for detecting previously unseen attacks. However, specifications need to be written by system or security experts for every security-critical program. Nevertheless, specification-based approaches take advantage of techniques that automate the development of specifications (Ko, 2000).
Response Options for IDS The response of an IDS is the set of actions that the system takes once it detects intrusions (Bace & Mell, 2001; Kazienko & Dorosz, 2004). They are typically grouped into active and passive measures, with active measures involving some automatic actions while passive measures involve the reporting of intrusion detection findings to network administrators, who are expected to take action based on those reports. Though researchers and some network administrators are tempted to underestimate the importance of good response functions in IDS, they may be very important. In fact, commercial IDS support a wide range of response options, often categorized as active responses, passive responses, or some mixture of both kinds of responses. Active responses are automatic reactions taken after the detection of some types of intrusions. Active responses may be classified into three categories (Bace & Mell, 2001): collect additional information, change the environment, and take action against the intruder. A brief description of these three categories follows. Collect additional information is the most inoffensive active response, although sometimes the
most productive, and is based on the collection of additional information about a potential attack. This is generally associated with increasing the sensitivity of information sources (e.g., increasing the number of events logged by an operating system audit trail, or increasing the sensitivity of a network monitor in order to analyse more packets). The collection of additional information may be of interest: for instance, the additional information collected can help confirm the detection of the attack, and the gathered information may be further used to support investigation and criminal and civil legal actions (Bace & Mell, 2001). Changing the environment is another active response, which consists of stopping a possible attack in progress and subsequently blocking the attacker's access. Typically, IDS block the Internet protocol (IP) addresses identified as the origin of the attack. Although it is difficult to block a particular attacker, IDS can often prevent attacks through the following actions (Bace & Mell, 2001): (1) injecting TCP reset packets into the attacker's connection to the victim system, leading to a close of the connection; (2) reconfiguring routers and firewalls to block IP packets from the attacker's apparent location; (3) reconfiguring routers and firewalls to block the network ports, protocols, or services being used by an attacker; and (4) in extreme cases, reconfiguring routers and firewalls to separate and cut all connections that use specific network interfaces. Although striking back at the attacker may be considered, this active option can represent a larger risk than the attack it is intended to block, due to civil legal responsibilities. Besides, attacks are generally carried out from false network addresses or from zombies. Moreover, this approach may lead to an escalation of the attack (Bace & Mell, 2001). Passive response approaches are based on information made available to network administrators, leaving to the administrators the decisions to be taken based on that information. A large
number of commercial IDS rely only on passive response approaches. These approaches can be alarms and notifications, or rely on SNMP traps and plug-ins (Bace & Mell, 2001). When an IDS detects an attack, it generates alarms and notifications to inform network administrators that an attack was detected. Most commercial IDS allow a large degree of freedom regarding how and when alarms are generated and to whom they are displayed. The information included in the alarm message may range from a simple notification saying that an intrusion has taken place to extremely detailed messages including the IP addresses of the source and target of the attack, the specific attack tool used to obtain access, and the result of the attack. Another set of options that may be helpful to large or multi-site organizations are those involving remote notification of alarms or alerts. These allow organizations to configure the IDS so that it sends alerts to cellular phones or pagers carried by incident response teams or system security personnel. E-mail messages are avoided because attackers may monitor network traffic and block e-mail messages (Bace & Mell, 2001). Some commercial IDS generate alarms and alerts to network management systems, which use SNMP traps and messages to send alarms and alerts to central network management consoles, where they can be analyzed by network administrators. This reporting scheme may lead to several benefits, such as the ability to adapt the entire network infrastructure to act in response to a detected attack, the ability to move the processing load associated with an active response to another system that is not under attack, and the capability to use common communications channels (Bace & Mell, 2001).
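As a minimal illustration of an automated active response, the following Python sketch reacts to an alert by asking a firewall to block the offending source address. The firewall interface shown here is a hypothetical stand-in that only records rules in memory; a real deployment would call the management interface of the actual router or firewall and would have to take into account the spoofed addresses and escalation risks referred to above.

from dataclasses import dataclass, field

@dataclass
class Alert:
    source_ip: str
    severity: str  # e.g., "low", "medium", "high"

@dataclass
class Firewall:
    # Hypothetical stand-in for a real firewall management API
    blocked: set = field(default_factory=set)
    def block(self, ip):
        self.blocked.add(ip)

def active_response(alert, firewall, min_severity="high"):
    """Block the source only for severe alerts; otherwise leave it to a passive report."""
    if alert.severity == min_severity:
        firewall.block(alert.source_ip)
        return f"blocked {alert.source_ip}"
    return f"reported {alert.source_ip} for manual review"

fw = Firewall()
print(active_response(Alert("203.0.113.7", "high"), fw))   # blocked 203.0.113.7
print(active_response(Alert("198.51.100.9", "low"), fw))   # reported for manual review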
Analysis Timing Analysis timing is concerned with the elapsed time between the occurrence of events and the analysis of those events. It can be classified into
interval-based IDS and real-time IDS (Bace & Mell, 2001; Kazienko & Dorosz, 2004). In the first approach, the information is not sent continuously from monitoring systems to analysis engines; audit trail (event log) analysis is the most common method used by systems operated periodically. Real-time IDS are designed for on-the-fly processing and are the most common approach used by network-based IDS.
Control Strategy

Control strategy refers to the way the elements of an IDS are controlled and how input and output are managed. Control strategy may be classified into three classes: centralized systems, partially distributed systems, and fully distributed systems (Bace & Mell, 2001). In the first strategy, all monitoring, detection and reporting functions are controlled from a central location. In partially distributed systems, monitoring and detection functions are controlled by the local node, but reporting is done hierarchically to one or more central locations. In fully distributed systems, monitoring and detection functions are performed through an agent-based approach, with response decisions taken at the point of analysis.

Architecture

The architecture of an IDS is devoted to the organization of the functional components of a given IDS. It includes the host, the system on which the IDS software runs, and the target, which is the system to be monitored by the IDS. The architecture of an IDS may be categorized into two classes: host-target co-location and host-target separation (Bace & Mell, 2001). Most early IDS were of the first type, in which the IDS ran on the systems they protected. However, this architecture presents a security problem, since, after a successful attack on the target, the attacker could disable the IDS. This architecture was typically used several years ago in scenarios dominated by mainframes, in which the high cost of computers prohibited the use of a separate IDS system. With the widespread use of workstations and personal computers, the architecture was changed in order to run the IDS control and analysis systems on a separate system; the IDS host and the target are therefore separate systems. This approach makes it much easier to hide the IDS from attackers.
INFORMATION EXCHANGE AMONG INTRUSION DETECTION SYSTEMS

As referred to previously, there are nowadays many available IDS with very different strengths and weaknesses. Even within each category it is possible to find IDS with different characteristics. For the case of NIDS, which are very popular nowadays, a Web site devoted to the most important network intrusion detection systems was made available (Computer Network Defence, 2006). This Web site provides details about several NIDS and links to the products provided by manufacturers. As may be seen in this site, there are IDS with very different characteristics, and it is likely that network administrators deploy more than a single IDS in order to complement their scopes. Due to these reasons, a standard format is required for reporting events among intrusion detection systems. Besides, the existence of a common format should allow components from different IDS to be integrated more easily, namely when different IDS are deployed in multiple organizations or in multiple sites within the same organization. Recently, a new protocol, called the intrusion detection exchange protocol (IDXP) (Buchheim, Erlinger, et al., 2001; Feinstein et al., 2002), and a format for the exchange of information among IDS, called the intrusion detection message exchange format (IDMEF) (Debar, Curry, & Feinstein, 2005; Wood & Erlinger, 2002), were specified.
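To give a flavour of the kind of information an IDMEF alert carries, the following Python sketch builds a heavily simplified IDMEF-style XML message. The element names follow the general shape of the IDMEF specification cited above (Alert, Analyzer, CreateTime, Source, Target, Classification), but this is an illustrative approximation and not a normative rendering of the IDMEF schema.

import xml.etree.ElementTree as ET

def build_alert(analyzer_id, create_time, source_ip, target_ip, attack_name):
    # Assemble a simplified IDMEF-like alert as an XML tree
    msg = ET.Element("IDMEF-Message")
    alert = ET.SubElement(msg, "Alert", messageid="abc123")
    ET.SubElement(alert, "Analyzer", analyzerid=analyzer_id)
    ET.SubElement(alert, "CreateTime").text = create_time
    src = ET.SubElement(ET.SubElement(alert, "Source"), "Node")
    ET.SubElement(ET.SubElement(src, "Address"), "address").text = source_ip
    tgt = ET.SubElement(ET.SubElement(alert, "Target"), "Node")
    ET.SubElement(ET.SubElement(tgt, "Address"), "address").text = target_ip
    ET.SubElement(alert, "Classification", text=attack_name)
    return ET.tostring(msg, encoding="unicode")

print(build_alert("nids-01", "2007-01-01T12:00:00Z", "192.0.2.10", "192.0.2.20", "port scan"))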
The Intrusion Detection Working Group (IDWG) of the Internet Engineering Task Force (IETF) has made two assumptions about the deployment configuration of an IDS. First, it was assumed that only analyzers create and communicate ID alerts. These alerts contain information stating that an event occurred, but they may also contain more detailed information about the event, in order to facilitate an informed action or response by the receiving party. The same system may act as both an analyzer and a manager, but these will be two separate ID entities. This assumption is linked to the second one, since analyzers and managers can be separate components that communicate pairwise across a TCP/IP network. These assumptions distinguish between the creation and the consumption of ID alerts, and allow the acts of alert creation and consumption to occur at different IDS components on the network. These assumptions also affect the transfer protocol, since alerts will be transferred from one component to another over TCP/IP networks, which must be done in a secure way (Buchheim, Erlinger, et al., 2001). Messages sent between IDS elements must get through and be acknowledged, namely under difficult network conditions. The severity of an attack, and therefore the relevance of a prompt response, may be revealed by the number and frequency of messages generated by IDS analyzers, so it is desirable to achieve reliable transmission and to avoid duplication of messages. Therefore, the IDWG has specified that the protocol for the exchange of intrusion detection information be based on the transmission control protocol (TCP). The first attempt of the IDWG to meet the IDWG transport protocol (IDP) requirements for communicating IDMEF messages was the development of the intrusion alert protocol (IAP) (Buchheim, Feinstein, Gupta, Matthews, & Pollock, 2001). The design of IAP was based on the hypertext transfer protocol (HTTP). In HTTP, the HTTP client is the party that initiates the TCP connection; in IDP, a passive analyzer should act as a client even though it receives the TCP connection.
Nevertheless, IAP still borrows many of the HTTP headers and response codes (Buchheim, Erlinger, et al., 2001). However, IAP presents some limitations, namely regarding security issues. In order to overcome the limitations of IAP, the blocks extensible exchange protocol (BEEP) (Rose, 2001), a new IETF general framework for application protocols, was adopted. Then, the intrusion detection exchange protocol (IDXP) was designed and implemented within the BEEP framework, fulfilling the IDWG requirements for that transport protocol. BEEP is a generalized framework for the development of application-layer protocols, located above TCP in the TCP/IP architecture. BEEP offers asynchronous, connection-oriented, and reliable transport. Therefore, an application-level transport protocol such as IDP can be implemented using the BEEP framework. Overall, BEEP supports higher-level protocols by providing the following protocol mechanisms (Buchheim, Erlinger, et al., 2001):
• Framing: How the beginning and ending of each message is delimited
• Encoding: How a message is represented when exchanged
• Reporting: How errors are described
• Asynchrony: How independent exchanges are handled
• Authentication: How the peers at each end of the connection are identified and verified
• Privacy: How the exchanges are protected against third-party interception or modification
The intrusion detection exchange protocol (IDXP) is an implementation, as a BEEP profile, of the IDWG application level transport protocol. Therefore, BEEP provides the protocol while the IDXP profile specifies the BEEP channel characteristics necessary for implementation of the IDP requirements. IDXP can be split into four main phases: connection provisioning, security
setup, BEEP channel creation, and data transfer. Details about the IDXP implementation are given in Buchheim, Erlinger, et al. (2001). The model specified by the IDWG group does not define the format of the answers or countermeasures interchanged between the components of IDS. Without the definition of a common format for the exchange of answers, it is not possible to get complete interoperability between different IDS. Therefore, recent work has been focused on a solution for this problem. Recently, Silva and Westphall (2005) proposed a model for the interoperability of answers in intrusion detection systems. This data model and architecture are compatible with the work accomplished by the IDWG group related to the interoperability between IDS. More details regarding the development and testing of their model can be found in Silva and Westphall (2005).
FUTURE TRENDS

Recent work has been focused on the specification and development of a transport protocol and message formats. A first model for the format of the answers or countermeasures interchanged between the components of IDS has also been reported. Some research issues still open are the detection of intrusions in high-speed network environments and the sharing of information regarding attacks under high-speed operation.

CONCLUSION

This chapter provided an overview of intrusion detection systems and of the way this kind of system may exchange information regarding attacks. The intrusion detection exchange protocol, the intrusion detection message exchange format, and a data model for the interoperability of answers among different IDS were briefly discussed. This model is compatible with the model of alerts already developed by the IDWG group, in order to make possible the integration of both models.
REFERENCES

Abad, C., Taylor, J., Sengul, C., Yurcik, W., Zhou, Y., & Rowe, K. (2003). Log correlation for intrusion detection: A proof of concept. In Proceedings of the 19th Annual Computer Security Applications Conference (ACSAC 2003). Los Alamitos, CA: IEEE Computer Society Press.

Almgren, M., & Jonsson, E. (2004). Using active learning in intrusion detection. In Proceedings of the 17th IEEE Computer Security Foundations Workshop (CSFW'04). Los Alamitos, CA: IEEE Computer Society Press.

Anderson, J. P. (1980). Computer security threat monitoring and surveillance (Tech. Rep.). Fort Washington, PA: James P. Anderson Co.
Bace, R., & Mell, P. (2001). Intrusion detection systems. NIST special publication on intrusion detection systems. Retrieved from http://csrc.nist.gov/publications/nistpubs/800-31/sp800-31.pdf
Buchheim, T., Feinstein, B., Gupta, D., Matthews, G., & Pollock, R. (2001). IAP: Intrusion alert protocol. Internet Engineering Task Force, draft-ietf-idwg-iap-05.
Buchheim, T., Erlinger, M., Feinstein, B., Matthews, G., Pollock, R., Betser, J., et al. (2001). Implementing the intrusion detection exchange protocol. In Proceedings of the 17th Annual Computer Security Applications Conference (ACSAC'01) (pp. 32-41). IEEE Press.
Carey, N., Mohay, G., & Clark, A. (2003). Attack signature matching and discovery in systems employing heterogeneous IDS. In Proceedings of the 19th Annual Computer Security Applications Conference (ACSAC 2003). Los Alamitos, CA: IEEE Computer Society Press.
Computer Network Defence Ltd. (2006). Network intrusion detection systems product descriptions. Retrieved July 13, 2006, from http://www.networkintrusion.co.uk/N_ids.htm

Coull, S., Branch, J., Szymanski, B., & Breimer, E. (2003). Intrusion detection: A bioinformatics approach. In Proceedings of the 19th Annual Computer Security Applications Conference (ACSAC 2003). Los Alamitos, CA: IEEE Computer Society Press.

Cuppens, F., & Miège, A. (2002). Alert correlation in a cooperative intrusion detection framework. In Proceedings of the 2002 IEEE Symposium on Security and Privacy (S&P 2002). Los Alamitos, CA: IEEE Computer Society Press.

Debar, H., Curry, D., & Feinstein, B. (2005). The intrusion detection message exchange format. IETF, RFC Draft, draft-ietf-idwg-idmef-xml-14.

Denning, D. E. (1987). An intrusion-detection model. IEEE Transactions on Software Engineering, 13(2).

Feinstein, B., Matthews, G., & White, J. (2002). The intrusion detection exchange protocol (IDXP). Internet Engineering Task Force, RFC Draft, draft-ietf-idwg-beep-idxp-07.

Feng, H. H., Giffin, J. T., Huang, Y., Jha, S., Lee, W., & Miller, B. P. (2004). Formalizing sensitivity in static analysis for intrusion detection. In Proceedings of the 2004 IEEE Symposium on Security and Privacy (S&P'04). Los Alamitos, CA: IEEE Computer Society Press.

Gopalakrishna, R., Spafford, E. H., & Vitek, J. (2005). Efficient intrusion detection using automaton inlining. In Proceedings of the 2005 IEEE Symposium on Security and Privacy (S&P'05). Los Alamitos, CA: IEEE Computer Society Press.

Jones, A. K., & Lin, Y. (2001). Application intrusion detection using language library calls. In Proceedings of the 17th Annual Computer Security
Applications Conference (ACSAC'01). Los Alamitos, CA: IEEE Computer Society Press.

Kazienko, P., & Dorosz, P. (2004). Intrusion detection systems (IDS) Part 2 – Classification; methods; techniques (White paper).

Ko, C. (2000). Logic induction of valid behavior specifications for intrusion detection. Los Alamitos, CA: IEEE Press.

Kruegel, C., Valeur, F., Vigna, G., & Kemmerer, R. (2002). Stateful intrusion detection for high-speed networks. In Proceedings of the 2002 IEEE Symposium on Security and Privacy (S&P 2002). Los Alamitos, CA: IEEE Computer Society Press.

Kruegel, C., Mutz, D., Robertson, W., & Valeur, F. (2003). Bayesian event classification for intrusion detection. In Proceedings of the 19th Annual Computer Security Applications Conference (ACSAC 2003). Los Alamitos, CA: IEEE Computer Society Press.

Li, Z., & Das, A. (2004). Visualizing and identifying intrusion context from system calls trace. In Proceedings of the 20th Annual Computer Security Applications Conference (ACSAC'04). Los Alamitos, CA: IEEE Computer Society Press.

Lindqvist, U., & Porras, P. A. (2001). eXpert-BSM: A host-based intrusion detection solution for Sun Solaris. Los Alamitos, CA: IEEE Press.

Mutz, D., Vigna, G., & Kemmerer, R. (2003). An experience developing an IDS stimulator for the black-box testing of network intrusion detection systems. In Proceedings of the 19th Annual Computer Security Applications Conference (ACSAC 2003). Los Alamitos, CA: IEEE Computer Society Press.

Rose, M. (2001). RFC 3080: The blocks extensible exchange protocol core. Internet Engineering Task Force.

Rubin, S., Jha, S., & Miller, B. P. (2004). Automatic generation and analysis of NIDS attacks. In Proceedings of
the 20th Annual Computer Security Applications Conference (ACSAC'04). Los Alamitos, CA: IEEE Computer Society Press.

Rubin, S., Jha, S., & Miller, B. P. (2005). Language-based generation and evaluation of NIDS signatures. In Proceedings of the 2005 IEEE Symposium on Security and Privacy (S&P'05). Los Alamitos, CA: IEEE Computer Society Press.

Shankar, U., & Paxson, V. (2003). Active mapping: Resisting NIDS evasion without altering traffic. In Proceedings of the 2003 IEEE Symposium on Security and Privacy (S&P 2003). Los Alamitos, CA: IEEE Computer Society Press.

Silva, P. F., & Westphall, C. P. (2005, July 17-20). A model for interoperability of answers in intrusion detection systems. In CD-ROM Proceedings of the Advanced International Conference on Telecommunications (AICT 2005), Lisbon, Portugal.

Tombini, E., Debar, H., Mé, L., & Ducassé, M. (2004). A serial combination of anomaly and misuse IDSes applied to HTTP traffic. In Proceedings of the 20th Annual Computer Security Applications Conference (ACSAC'04). Los Alamitos, CA: IEEE Computer Society Press.
Vigna, G., Gwalani, S., Srinivasan, K., Belding-Royer, E. M., & Kemmerer, R. A. (2004). An intrusion detection tool for AODV-based ad hoc wireless networks. In Proceedings of the 20th Annual Computer Security Applications Conference (ACSAC'04). Los Alamitos, CA: IEEE Computer Society Press.

Wan, T., & Yang, X. D. (2000). IntruDetector: A software platform for testing network intrusion detection algorithms.

Wood, M., & Erlinger, M. (2002). Intrusion detection message exchange requirements. IETF, RFC Draft, draft-ietf-idwg-requirements-10.

Wu, Q., & Shao, Z. (2005). Network anomaly detection using time series analysis. In Proceedings of the Joint International Conference on Autonomic and Autonomous Systems and International Conference on Networking and Services (ICAS/ICNS 2005). Los Alamitos, CA: IEEE Computer Society Press.

Wu, Y.-S., Foo, B., Mei, Y., & Bagchi, S. (2003). Collaborative intrusion detection system (CIDS): A framework for accurate and efficient IDS. In Proceedings of the 19th Annual Computer Security Applications Conference (ACSAC 2003). Los Alamitos, CA: IEEE Computer Society Press.
Section II
Trust, Privacy, and Authorization
Chapter VI
Security in E-Health Applications

Snezana Sucurovic
Institute Mihailo Pupin, Serbia
ABSTRACT

This chapter presents security solutions in integrated patient-centric Web-based healthcare information systems, also known as electronic healthcare records (EHCR). Security solutions in several projects are presented, in particular a solution for EHCR integration from scratch. Implementations of public key infrastructure, privilege management infrastructure, role-based access control and rule-based access control in EHCR are presented. Regarding EHCR integration from scratch, an architecture and its security are proposed and discussed. This integration is particularly suitable for developing countries where the Internet is widespread while the integration of heterogeneous systems is not needed. The chapter aims at contributing, in the security aspect, to initiatives for the implementation of national and transnational EHCR.
INTRODUCTION E-health has become the preferred term for healthcare services available through the Internet. While the first generation of e-health applications comprises educational and informational Web sites, at present e-health has grown into national and transnational patient centric healthcare record processing. A patient centric healthcare record, also called electronic healthcare record (EHCR)
and electronic patient record (EPR), enables a physician to access a patient record from any place with an Internet connection and gives a new face to the integration of patient data. Such integration can improve healthcare treatment and reduce the cost of services to a large extent. Benefits are based on extended possibilities for collaboration through sharing data between a physician and a patient and between physicians. In such large-scale information systems, which spread over different
domains, standardization is highly required. The second section describes the main issues in e-health security as well as the results of the EU projects EUROMED and TRUSTHEALTH, while the third section presents the MEDIS prototype of a national electronic healthcare record, suitable especially for developing countries where the Internet is widespread and healthcare information systems are not developed to a large extent, so that integration from scratch is proposed.
EXISTING SOLUTIONS

In general, the following lines of development for healthcare information systems were considered important (Reichertz, 2006): (1) the shift from paper-based to computer-based processing and storage, as well as the increase of data in health care settings; (2) the shift from institution-centered departmental and, later, hospital information
systems towards regional and global HIS; (3) the inclusion of patients and health consumers as HIS users, besides health care professionals and administrators; (4) the use of HIS data not only for patient care and administrative purposes, but also for health care planning as well as clinical and epidemiological research; (5) the shift from focusing mainly on technical HIS problems to those of change management as well as of strategic information management; (6) the shift from mainly alpha-numeric data in HIS to images and now also to data on the molecular level; (7) the steady increase of new technologies to be included, now starting to include ubiquitous computing environments and sensor-based technologies for health monitoring. As consequences for HIS in the future, the need for institutional, national, and international HIS-strategies is first seen; second, the need to explore new (transinstitutional) HIS architectural styles is needed; third, the need for education in
Figure 1. Degree of sophistication in healthcare information systems (Note. From Information Systems, São Paulo University Technical Report, 2006)
health informatics and/or biomedical informatics, including appropriate knowledge and skills on HIS are needed. As these new HIS are urgently needed for reorganizing health care in an aging society, as last consequence the need for research around HIS is seen. Research should include the development and investigation of appropriate transinstitutional information system architectures, of adequate methods for strategic information management, of methods for modeling and evaluating HIS, the development and investigation of comprehensive electronic patient records, providing appropriate access for health-care professionals as well as for patients (e.g., including home care and health monitoring facilities). All these requirements have implications on security issues. See Figure 1 for an example of the degree of sophistication in healthcare information systems. Security is a very complex issue related to legal, ethical, physical, organizational, and technological dimensions defined as security policy. In that context, security addresses human, physical, system, network, data, or other aspects.
Processing Categories», whose article 1 states that member countries should forbid the processing of personal data on political attitudes, religious and philosophic beliefs, racial and ethnic origin as well as medical data unless they satisfy particular, precisely specified conditions. For medical data, these conditions are as follows: •
•
Legal Issues Hipocrate’s oath contains the obligation of keeping health data secret as a part of professional ethics «What I may see or hear in the course of the treatment or even outside of the treatment in regard to the life of men, which on no account one must spread abroad, I will keep to myself, holding such things shameful to be spoken about”. As far as today’s practice is concerned, several countries have adopted acts on medical data privacy protection, and especially on privacy protection of the electronic form of medical data. One among such documents is European Directive on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of such Data of September 25, 1995, intended for privacy protection in data processing systems. Medical data privacy protection is presented in Section 3 of paragraph 2 «Special
• Data processing is performed for purposes of preventive medicine, medical diagnostics, the provision of medical treatment, or the management of medical protection services, where these data are processed by health professionals who are bound to keep professional secrecy by national laws or by rules established by competent bodies, or by some other persons subject to an equivalent obligation.
• The persons to whom these data refer have given explicit consent to the processing of such data.
• Processing is required for protecting the vital data of the person to whom these data pertain or of some other person, when the person in question is physically or legally incapable of giving consent.
• Processing of data relating to persons who have committed a criminal act, or to persons who may violate safety, is performed under the supervision of authorized officials.
Section 4 of paragraph 2, «Information to be submitted to a person», states the conditions under which a person has to be given the information on the processing of that person's private data, and especially the information about forwarding these data to a third party. Section 5 of paragraph 2 of this document, «A subject's right to access his own data», states a person's right to access his personal data in data processing systems. Paragraph 3 of this document says that member countries should ensure, for each person who considers that he/she has suffered a loss because
of illegal processing of his/her data, the right to a compensation.
Results of the EUROMED Project

All European Commission funded e-health projects are in compliance with the EU directive. One of the first implemented was EUROMED (Katsikas, 1998). This project (started in 1997) examined the use of trusted third party (TTP) services in distributed healthcare information systems. A trusted third party (TTP) is an entity which facilitates interactions between two parties who both trust the third party; they use this trust to secure their own interactions. One kind of TTP is a certificate authority (CA). CAs are defined in the X.509 standard (ITU-T Standard, 1997). The X.509 standard defines a framework for the authentication service which a directory provides to all interested users. A directory is taken to mean that part of the system which possesses authentic information on system users. A directory is implemented as a certificate authority (CA) which issues certificates to users. X.509 defines two authentication levels:
• Simple authentication, which uses a password for identity verification
• Strong authentication, which involves credentials, that is, additional means of identification obtained by cryptographic techniques
Strong authentication is based on an asymmetric cryptosystem involving a pair of keys: a public and a private key. The standard does not prescribe mandatory usage of a particular crypto system (DSA, RSA, etc.) and thus supports modifications in methods to be brought about by the development of cryptography. Each user should have a unique distinguished name. A naming authority is responsible for assigning a name. A user is identified by proving that he/she possesses a private key. To be able to verify a
private key, a user-partner in the communication process must possess a public key. The public key is available on the directory. A user should be given a public key from a trusted source. Such a source is a CA, which uses its own private key to certify a user's public key and produces a certificate in this way. A certificate has the following properties:
• Each user having access to the CA's public key can recover the public key for which the certificate has been created.
• No party, except for the CA, can make a modification to a certificate without such a modification being detected. Owing to this property, certificates may be stored on a directory with no need for additional protection efforts.
A certificate is obtained by creating a digital signature on a set of information about a user, such as a unique name, a user's public key and additional information about a user. This set of information also contains a certificate's validity period. This period includes the interval during which the CA has to keep the information about certificate status, i.e., publish an eventual certificate revocation. A certificate is presented in the ASN.1 notation as follows:

Certificate ::= SIGNED { SEQUENCE {
    version                 [0] Version DEFAULT v1,
    serialNumber            CertificateSerialNumber,
    signature               AlgorithmIdentifier,
    issuer                  Name,
    validity                Validity,
    subject                 Name,
    subjectPublicKeyInfo    SubjectPublicKeyInfo,
    issuerUniqueIdentifier  [1] IMPLICIT UniqueIdentifier OPTIONAL,
    subjectUniqueIdentifier [2] IMPLICIT UniqueIdentifier OPTIONAL,
    extensions              [3] Extensions OPTIONAL } }

Validity ::= SEQUENCE {
    notBefore   Time,
    notAfter    Time }

SubjectPublicKeyInfo ::= SEQUENCE {
    algorithm           AlgorithmIdentifier,
    subjectPublicKey    BIT STRING }

Extensions ::= SEQUENCE OF Extension
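As a practical illustration of how the validity period and the CA's signature contained in such a certificate can be checked, the following Python sketch uses the cryptography package. The file names are placeholders, the issuer is assumed to use RSA, and a complete validation would also check the revocation status and the whole certification path.

from datetime import datetime
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

# Load a user certificate and the issuing CA certificate (file names are placeholders)
user_cert = x509.load_pem_x509_certificate(open("user_cert.pem", "rb").read())
ca_cert = x509.load_pem_x509_certificate(open("ca_cert.pem", "rb").read())

# Check the validity period stated in the certificate
now = datetime.utcnow()
assert user_cert.not_valid_before <= now <= user_cert.not_valid_after

# Verify the CA's signature over the to-be-signed part of the certificate (RSA assumed)
ca_cert.public_key().verify(
    user_cert.signature,
    user_cert.tbs_certificate_bytes,
    padding.PKCS1v15(),
    user_cert.signature_hash_algorithm,
)
print("certificate is within its validity period and was signed by the CA")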
Three types of strong authentication are described in the standard:
a. One-way authentication: Includes only one transfer, from a user A to an intended user B
b. Two-way authentication: Includes a reply from B to A as well
c. Three-way authentication: Includes an additional transfer from A to B
An example of the use of two-way authentication is given in the CEN ENV 13729 standard, which prescribes the use of strong authentication in health information systems. CEN ENV 13729 has defined local and remote two-way strong authentication using the X.509 standard. EUROMED-ETS provides integrity, authentication and confidentiality services using measures such as:
• Digital signatures to ensure data integrity
• Encryption to provide confidentiality
TTP sites were established in four different locations in Europe: Institute of Computer and Communication Systems - ICCS (Athens-Greece), University Hospital Magdeburg - UHM (Magdeburg-Germany), University of the Aegean - UoA (Samos-Greece), and University of Calabria - UniCAL (Calabria-Italy). Among the functions performed by the Certification Authority are: initialisation, electronic registration, authentication, key generation and distribution, key personalisation, certificate generation, certificate directory management, certificate revocation, CRL generation, maintenance, distribution storage and retrieval. The Directories have served in that way as a repository for identification and authentication information; this information was utilised automatically by the EUROMED-ETS pilot Secure Web Servers to identify potential users and grant or deny to them rights; this identification information was also accessible anonymously through the Internet by the use of LDAP search tools.
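The retrieval of certificates from such a directory can be illustrated with an anonymous LDAP query. The following Python sketch uses the ldap3 package; the server address, base DN, filter and attribute name are illustrative assumptions about how a TTP directory might be organized, not the actual EUROMED-ETS configuration.

from ldap3 import Server, Connection, ALL

# Anonymous bind to a (hypothetical) TTP directory server
server = Server("ldap://directory.example.org", get_info=ALL)
conn = Connection(server, auto_bind=True)

# Look up the public-key certificate published for a given health professional
conn.search(
    search_base="ou=certificates,o=ttp",  # assumed directory layout
    search_filter="(cn=Jane Doe)",        # placeholder common name
    attributes=["userCertificate"],
)
for entry in conn.entries:
    print("certificate entry found:", entry.entry_dn)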
Figure 2. Challenge-response authentication protocol using X.509 public key certificates
Figure 3. Remote strong authentication according to CEN ENV 13729: message flow between the Local System, the Card System and the Remote System (exchange of authentication certificates, mutual generation and signing of random challenges, and signature verification)
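The challenge-response exchange sketched in Figures 2 and 3 can be illustrated with a minimal Java fragment, assuming only the standard JCA API: the verifier generates a random challenge, the prover signs it with its private key, and the verifier checks the signature with the public key taken from the prover's certificate. This is only a schematic illustration of the principle; key storage on a health professional card and the surrounding protocol messages are omitted.

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.SecureRandom;
import java.security.Signature;

public class ChallengeResponseDemo {
    public static void main(String[] args) throws Exception {
        // In a real deployment the key pair lives on the remote system's
        // smart card and the public key is taken from its X.509 certificate.
        KeyPair remoteKeys = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        // Local system: generate a random challenge (RND).
        byte[] rnd = new byte[20];
        new SecureRandom().nextBytes(rnd);

        // Remote system: sign the challenge with its private key.
        Signature signer = Signature.getInstance("SHA1withRSA");
        signer.initSign(remoteKeys.getPrivate());
        signer.update(rnd);
        byte[] signedRnd = signer.sign();

        // Local system: verify the signature with the remote public key.
        Signature verifier = Signature.getInstance("SHA1withRSA");
        verifier.initVerify(remoteKeys.getPublic());
        verifier.update(rnd);
        System.out.println("Remote system authenticated: " + verifier.verify(signedRnd));
    }
}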
Results of TrustHealth Project

The «Trustworthy Health Telematics 2» project (Blobel, 2001) was started in June 1998 under the auspices of the European Commission. The aim of the project was to create national infrastructures providing security at the communication and application levels of health information systems in Belgium, France, Germany, Norway, Sweden and Great Britain, with a focus on interstate interoperability. The implementation of the TrustHealth 1 project, which took place from 1996 to 1997, involved the introduction of:
• The use of Health Professional Cards (HPC), microprocessor cards used in the authentication process
• Card reader services
• Services of a Trusted Third Party which manages certificates
Within TH-1 the following services have been provided:

• Sending of a physician's report with all relevant data to a central register.
• Execution of prespecified and freely formed SQL queries relating to a patient's EHCR.
• Statistical analyses by various criteria, also by making a SQL query.
• Exchange of any information, including HL7 messages, images, etc.
A TTP involves several independent organizations responsible for defining TTP services. TTP components may be:

• Key generation instance
• Naming authority
• Registration authority
• Directory service authority
In the TrustHealth Project the public key registration authority (PKRA) is an entity that identifies, in a unique way, a user requiring the provision of a digital signature service. The professional registration authority (PRA) is an entity registering an individual as a physician, i.e., as a medical care professional. The naming authority (NA) is an entity assigning to users a unique name to be used in certificates. The NA may be involved in assigning unique names to classes within the medical profession (e.g., internal medicine) and subclasses (e.g., nephrology). The public key certification authority (PKCA) is an entity certifying the relation between a user's unique name and public key by issuing a public key certificate with a PKCA digital signature. The PKCA is also responsible for revoking and reissuing certificates with a public key, whereas the professional certification authority (PCA) certifies the relation between a user's unique name and professional status, after having created a digital signature on these data. The PCA is also responsible for revoking and reissuing professional certificates. The card issuing system (CIS) is an entity issuing microprocessor cards that must contain a private key and may contain certificates as well. The generation of a private-public key pair may be performed using a local or a central key generator (LKG), an entity that may be located locally (with a user or the PKRA) or centrally (with the PKCA or CIS). Certificates have to be stored in the certificate directory (DIR). The DIR is an entity that issues, on request, certificates with a public key, professional certificates, revoked certificate lists, as well as other information about users.

In the TrustHealth Project the TTP services (NA, PRA, PCA, LKG, PKRA and DIR) are proposed to be implemented in institutions such as the chamber of physicians. In this project the NA is implemented so as to assign a unique name to a user by using the name of a state, a unique number assigned to each physician at the state level and the physician's name. The RA is implemented so as to certify a user's identity and attributes such as profession or qualification. On a user's request, the relevant information is verified, associated with a distinguished name (DN) and sent online to a certification entity. The RA uses the information provided by the qualification authentication authority (QAA) or by the profession authentication authority (PAA). The former instance may be a university, for example, while the latter may be a chamber of physicians. In the part of the TrustHealth Project implemented in Magdeburg, the chamber of physicians of the state of Saxony-Anhalt has implemented a majority of the TTP services. The chamber of physicians has also included QAA services such as qualification, specialization, etc. All these pieces of information have been transmitted online to a certification body. Based on data obtained from the RA, the certification authority creates certificates. Certificates that associate a user's distinguished name and the remaining relevant information with a user's public key are referred to as public-key certificates. Certificates that associate information about profession and qualification are attribute certificates. The first service is provided by the CA and the second by the PCA. The CA Management Toolkit from the SECUDE package is used for X.509v3 certificate creation and management in the TrustHealth Project.
The DIR directory service includes the publication and revocation of certificates using public directories. An X.500 compatible solution implemented in the SECUDE package is used in the TrustHealth Project. In this Project DIR maintains both public-key and attribute certificates. It is planned to use the Lightweight directory access protocol (LDAP) server later.
Results of PCASSO Project

The Patient Centered Access to Secure Systems Online (PCASSO) project (http://medicine.ucsd.edu/pcasso/index.html) was developed in 1997-2000 at the University of California San Diego School of Medicine. It is intended primarily to permit patients and health care providers to access health information, including sensitive health data. Access control is achieved by combining role-based access control (RBAC), mandatory access control (MAC) and discretionary access control (DAC). PCASSO is patient-centered and, in the current project stage, all data are stored on a single server. According to DAC, when a user requests access to an object, it is checked whether there is a rule allowing that user to access that object in a given mode. If there is, access is allowed; otherwise it is forbidden. Such an approach is viewed as very flexible and has found wide usage, especially in commercial and industrial environments. Its shortcoming is the lack of information flow control: it is easy to circumvent the access restrictions imposed by authorization (a set of rules stating which subject is allowed to access a particular object and in which mode). For example, when a user has read some data once, he/she may forward them to an unauthorized user without the data owner's knowledge. In contrast to this, information flow from a higher-level object to a lower-level one is prevented in the MAC approach. In MAC, access rights are based on the classification of subjects and objects in the system. A particular protection level is assigned to each
subject and each object. The protection level assigned to an object reflects the data sensitivity level. The protection level assigned to a subject reflects the level of confidence that the subject will not forward accessed information to persons that do not have such rights. These levels are arranged in a hierarchy where each protection level dominates lower levels. A subject has the right of access to an object only if there is a particular relation between the protection level belonging to that subject and the protection level belonging to that object. One such relation is the following: the protection level belonging to the subject has to dominate the protection level belonging to the object. MAC is used in defense and governmental departments.

Some researchers have expressed the view that the DAC and MAC approaches cannot satisfy many practical requirements. The MAC approach is suitable for a military environment, whereas DAC is suitable for communities where cooperative work predominates, such as academic institutions. This is why a number of alternatives have been offered. Role-based access control (RBAC) is the most widely used among them. RBAC controls a user's access rights on the basis of the user's activities performed in the system. A role may be defined as a set of activities and responsibilities relating to a particular activity. The advantages of the RBAC approach include:

• Simpler authorization control. Authorization specification is divided into two stages: assigning roles to users and assigning object access rights to roles.
• Role hierarchy is easy to create, which is suitable for many systems.
• Roles permit a user to work with a minimal-privilege role and to use a role having maximal privileges only exceptionally. Error occurrence possibilities are reduced in this way.
• Separation of duties. It is possible to provide that not a single person can autonomously abuse the system. An example is the introduction of ... : each person performs only a portion of an operation instead of the entire operation.

In PCASSO users may have one of the following roles: patient, primary care provider (PCP), secondary care provider (SCP) or emergency caregiver. Information and functionalities available to a user depend on the role belonging to her/him. PCASSO employs the following security policy:
• The system controls all accesses to data for each single user.
• Primary care providers are allowed to access all parts of a patient's EHCR.
• A PCP is privileged to mark some data in an EHCR as accessible to or forbidden for a patient or other care providers.
• A patient is allowed to access all parts of an EHCR except for those marked as "patient deniable".
• Care providers marked as PCP may change protection attributes in a patient's record.
• PCPs may authorize and give rights to consultants referred to as secondary care providers. A patient's PCP may declare a SCP to be a PCP for a particular time period, after the expiration of which the previous role is resumed.
• A possibility is given to care providers to deny parents access to a part of a child's EHCR.
• In emergency cases care providers may have unlimited access (reading only) to a patient's EHCR.
PCASSO distinguishes five patient data protection levels: Patient-deniable, Parent/Guardian-deniable, Public-deniable, Standard and Low. A user will be allowed to access a piece of data if the data carries the same or a lower label (protection level) than the user's label (the MAC approach) and the user belongs, at the same time, to a group having the right of access to that piece of data (the DAC approach).
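As a rough illustration of how the MAC and DAC checks described above can be combined, the following sketch grants access only when the user's label dominates the data label and the user belongs to a group authorized for the data item. The level ordering, class names and group model are illustrative assumptions and are not taken from the PCASSO implementation.

import java.util.List;
import java.util.Set;

public class PcassoStyleCheck {
    // Assumed ordering of PCASSO-like protection levels, lowest first.
    static final List<String> LEVELS =
            List.of("Low", "Standard", "Public-deniable",
                    "Parent/Guardian-deniable", "Patient-deniable");

    static boolean dominates(String userLabel, String dataLabel) {
        // MAC check: the user's label must be at least as high as the data's.
        return LEVELS.indexOf(userLabel) >= LEVELS.indexOf(dataLabel);
    }

    static boolean mayAccess(String userLabel, Set<String> userGroups,
                             String dataLabel, Set<String> authorizedGroups) {
        // DAC check: the user must belong to a group authorized for the item.
        boolean dacOk = userGroups.stream().anyMatch(authorizedGroups::contains);
        return dominates(userLabel, dataLabel) && dacOk;
    }

    public static void main(String[] args) {
        boolean ok = mayAccess("Standard", Set.of("PCP"),
                               "Low", Set.of("PCP", "SCP"));
        System.out.println("Access granted: " + ok);
    }
}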
EHCR ARCHITECTURE: CEN ENV 13606 STANDARD

All EU transnational projects are in compliance with the CEN ENV 13606 standard. The Comité Européen de Normalisation European Standard (CEN ENV 13606, 2002) "EHCR Communication" is a high level template which provides a set of design decisions which can be used by system vendors to develop specific implementations for their customers. It contains several parts:

• Part 1. Extended Architecture: Defines the component-based EHCR reference architecture.
• Part 2. Domain Term List: Defines terms which are used in the extended architecture.
• Part 3. Distribution Rules: Defines data structures which are used in distribution and shared access to EHCR.
Communication as an act of imparting or exchanging information is the primary concern of this standard. In its Part 1 the standard defines an EHCR Communication View as the reuse of stored clinical data in a different context. There can be many such Communication Views and they provide presentation of information in a chronological order, a "problem-oriented" manner or some other convention. This is provided by use of architectural components which are rich enough to be able to communicate data by a combination of components. There are a root component, which contains basic information about a patient, on one hand, and, on the other hand, a record component established by original component complexes (OCCs), selected component complexes (SCC), data items (DI) and link items (LI). An OCC comprises (according to data homogeneity) four basic components: folders, compositions, headed sections and clusters. A SCC contains a collection of data representing an aggregation of other record components that is not determined by the time or situation in which they were originally added to the EHCR. It may contain a reference to a set of search criteria, a procedure or some other query device whereby its members are generated dynamically (for example "current medication"). A Link Item is a component that provides a means of associating two other instances of architectural component and specifying the relationship between them ("caused by", for example). A data item is a record component that represents the smallest structural unit into which the content of the EHCR can be broken down without losing its meaning.

As a result of cooperation between CEN Technical Committee 251 and the Australian Good Electronic Health Record (GEHR) project, CEN ENV 13606 Part 1 was revised in 2002. The revised standard adopted the GEHR concept in which object-oriented EHCR architecture is distinguished from a knowledge model. A knowledge model contains specifications of clinical structures named archetypes. There are many benefits of that two-layer model and one is that archetypes can be developed by clinicians at the same time when IT specialists develop the EHCR object oriented architecture. In the revised CEN standard architectural components contain an identifier of archetype (for example "vitals", "blood pressure" etc.).

In part 2 the CEN ENV 13606 standard defines a list of terms, such as category names for Compositions ("Notes on Consultations", "Clinical Care Referrals" etc.) and Headed Sections ("Former Patient History", "Ongoing Problems & Lifestyle" etc.). According to part 3 of the CEN ENV 13606 standard, each Architectural Component has a reference to a Distribution Rule. A Distribution Rule comprises When, Where, Why, Who and How classes. Class Why is mandatory, i.e., one of its attributes has to be "not null". Instances of these classes define When, Where, Why, Who and How is allowed to access that component (see Figure 4).
Figure 4. Distribution rule (CEN ENV 13606 Part 3): an Architectural Component may reference Distribution Rules, each comprising Who, When, Where, Why and How classes, with the Why class mandatory (multiplicity 1..*)

IMPLEMENTING SECURE DISTRIBUTED EHCR: MEDIS EXAMPLE

The MEDIS project aims at developing a prototype secure national healthcare information system. Since clinical information systems in Serbia and Montenegro have not been implemented to a large extent, we have focused our efforts on integration itself from the very beginning, instead of on studying how to integrate various
systems. In recent years the Internet has become widespread in our country, and using the Internet to implement a shared-care paradigm is becoming a reality. MEDIS is based on the CEN ENV 13606 standard and follows a component-based software paradigm in both its EHCR architecture and its software implementation. MEDIS has been implemented as a federated system where the central server hosts basic EHCR information and the clinical servers contain their own parts of patients' EHCRs. CEN ENV 13606 requirements have been strictly fulfilled in the clinical servers as well as in the central server. In our opinion the user interface has to be standardised, and we give our proposal for standardisation. As for the security aspect, MEDIS implements achievements from recent years, such as public key infrastructure and privilege management infrastructure, SSL and Web Service security, as well as pluggable, XML-based access control policies. Table 1 presents characteristics of the EUROMED, TRUSTHEALTH, PCASSO and MEDIS projects.

Table 1. Projects' characteristics: the EUROMED, TRUSTHEALTH, PCASSO and MEDIS projects compared with respect to Internet-based architecture, component-based architecture, public key infrastructure, privilege management infrastructure, Web service security, role-based access control, and rule-based access control

MEDIS Architecture

Since the MEDIS project refers to integration from the very beginning, the EHCR reference architecture has been followed in defining the database model. Data have been stored in a hierarchical manner, where architectural components contain pointers
to a supercomponent, linked components and also a selected component complex. MEDIS has been implemented as a federated system. Architectural components are created in compliance with CEN ENV 13606 and stored where they are created (at hospitals and clinics) and are accessed via a central server which contains a root component and the addresses of the clinical and hospital servers. Architectural components that are hosted on the clinical and hospital servers have pointers to supercomponents and linked components (see Figure 5).

The user interface has been standardised in the following way. HTML pages are created on the central server and contain five frames: the required architectural component (AC) in the right frame, links to subcomponents and linked components in the upper left frame, links to selected component complexes (actually distributed queries) in the lower left frame, the AC position in the hierarchical structure of the EHCR in the upper frame and information about the user in the lower frame (see Figure 6). A physician can define the position in the EHCR (and therefore the HTML page) which will appear when he requests the EHCR for a patient.

Currently, in the MEDIS architecture there are two types of selected component complex. Firstly, there are SCCs given as a union of queries on all clinical servers, such as «current medication» or «current diagnosis». Search criteria are made according to the identifier of the archetype.
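The union-of-queries idea behind this first type of SCC can be sketched as follows before turning to the second type. The ClinicalServerClient interface and the archetype-based query method are hypothetical stand-ins for the actual MEDIS Web service calls, used only to illustrate the design.

import java.util.ArrayList;
import java.util.List;

public class DistributedSccQuery {

    // Hypothetical client-side view of one clinical server's Web service.
    interface ClinicalServerClient {
        List<String> findComponentsByArchetype(String patientId, String archetypeId);
    }

    // A selected component complex such as "current medication" is built
    // as the union of the per-server query results.
    static List<String> selectedComponentComplex(List<ClinicalServerClient> servers,
                                                 String patientId, String archetypeId) {
        List<String> union = new ArrayList<>();
        for (ClinicalServerClient server : servers) {
            union.addAll(server.findComponentsByArchetype(patientId, archetypeId));
        }
        return union;
    }
}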
Figure 5. System architecture: a smart card client, the central server, clinical servers I through N, and the LDAP server
Secondly, there are SCCs related to architectural components on clinical servers and they contain search criteria for architectural components on that clinical server. In MEDIS there is an authentication applet (Sucurovic, 2005) which is processed in a browser and,
after successful authentication, an HTML page is generated using JSPs on the central server. The clinical servers tier has been implemented in Java Web Services technology using the Apache Axis Web service server and the Tomcat Web server. Business logic has been implemented in reusable
Figure 6. CEN ENV 13 606 Composition Component example
Figure 7. MEDIS access control components: the attribute certificate's attributes, local authorisation policies and master authorisation policies are inputs to the access decision function, which evaluates architectural components (ACs) and distribution rules (DRs) on the clinical server exchanging SOAP messages
components–Java Beans. The authorisation policy is decomposed into components which can be plugged in (Beznosov, 2004) (see Figure 7).
Implementing Security

Using a password as a means of verification of a claimed identity has many disadvantages in distributed systems. Therefore, MEDIS implements CEN ENV 13729 (CEN ENV, 2000), which defines authentication as a challenge-response procedure using X.509 public key certificates. Solutions related to attribute certificate management have also been implemented (Sucurovic, 2006). Originally, X.509 certificates were meant to provide nonforgeable evidence of a person's identity. Consequently, X.509 certificates contain information about certificate owners, such as their name and public key, signed by a certificate authority (CA). However, it quickly became evident that in many situations, information about a person's privileges or attributes can be much more important than that of their identity. Therefore, in the fourth edition of the X.509 Standard (2000), the definition of an attribute certificate was introduced to distinguish it from the public-key certificates of previous versions of the X.509 Standard (ITU-T, 2000). In the MEDIS project X.509 PKCs are supposed to be generated by the public certificate authority, while ACs are supposed to be generated by the MEDIS attribute authority. The
public key certificates are transferred to users and stored in a browser (Sucurovic, 2005). The attribute certificates are stored on an LDAP server (Sucurovic, 2006), because they are supposed to be under the control of the MEDIS access control administrator. In the MEDIS approach, attribute certificates contain the user's attributes as XML text. There are two types of public key and attribute certificates, the clinicians' and the patients', since distribution rules contain a flag which denotes whether the architectural component is allowed to be read by a patient. Patients will be granted reading of an architectural component if physicians set the corresponding flag.
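Since the attribute certificates are kept on an LDAP server under the control of the access control administrator, a clinical server can fetch them with the standard JNDI LDAP provider, roughly as sketched below. The server URL and the attribute name used here are illustrative assumptions rather than the actual MEDIS configuration.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.Attribute;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class AttributeCertificateLookup {
    public static byte[] fetchAttributeCertificate(String userDn) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        // Hypothetical LDAP server location.
        env.put(Context.PROVIDER_URL, "ldap://ldap.example.org:389");

        DirContext ctx = new InitialDirContext(env);
        try {
            // Hypothetical directory attribute under which the AC is stored.
            Attribute ac = ctx.getAttributes(userDn,
                    new String[] {"attributeCertificateAttribute"})
                    .get("attributeCertificateAttribute");
            return ac == null ? null : (byte[]) ac.get();
        } finally {
            ctx.close();
        }
    }
}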
Access Control

In a complex distributed system such as MEDIS, access control is consequently very complex and has to satisfy both fine-grained access control and administrative simplicity. This can be realised using pluggable, component-based authorisation policies (Beznosov, 2004). An authorisation policy is the complex of legal, ethical, social, organisational, psychological, functional and technical implications for the trustworthiness of a health information system. One common way to express policy definitions is an XML schema. These schemas should be standardised for interoperability purposes (Blobel, 2004). The MEDIS project aims at developing authorisation policy definitions, using XML schemas (MEDIS Tech. Report, 2005), which are based on the CEN ENV 13606 Distribution Rules attributes (CEN ENV, 2002). Distribution rules define the attributes of architectural components related to access control (CEN ENV, 2002). A distribution rule comprises Who, When, Where, Why and How objects (see Table 2). The object Why is mandatory, i.e., one of its attributes has to be "not null". The attributes and entities contained within the objects Who, When, Where, Why and How shall be processed as ANDs. If, however, there is more than one Who, How, Where, When or Why object present in a distribution rule, the occurrences of each of those object types shall be processed as ORs. The MEDIS project has adopted XML as the language for developing constrained hierarchical role-based access control and, at the same time, has its focus on decomposing policy engines into components (Beznosov, 2004; Blobel, 2004; Chadwick et al., 2003; Joshi et al., 2004; Zhou, 2004).
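The AND/OR evaluation rule just described can be made concrete with the small sketch below: attribute predicates inside one Who/When/Where/Why/How object must all hold, while several objects of the same type are alternatives. The classes and the predicate representation are simplified illustrations and not the MEDIS schema itself.

import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

public class DistributionRuleEvaluation {

    // One Who/When/Where/Why/How object: a set of attribute predicates
    // that must ALL be satisfied by the requesting user's attributes.
    record RuleObject(List<Predicate<Map<String, String>>> attributeChecks) {
        boolean matches(Map<String, String> userAttributes) {
            return attributeChecks.stream().allMatch(p -> p.test(userAttributes));
        }
    }

    // Several objects of the same type (e.g., two Who objects) are ORed.
    static boolean anyMatches(List<RuleObject> sameTypeObjects,
                              Map<String, String> userAttributes) {
        return sameTypeObjects.isEmpty()
                || sameTypeObjects.stream().anyMatch(o -> o.matches(userAttributes));
    }

    // A distribution rule is satisfied when every object type that is
    // present (Who, When, Where, Why, How) has at least one matching object.
    static boolean ruleSatisfied(Map<String, List<RuleObject>> objectsByType,
                                 Map<String, String> userAttributes) {
        return objectsByType.values().stream()
                .allMatch(objects -> anyMatches(objects, userAttributes));
    }
}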
The MEDIS project authorisation policy has several components (MEDIS Technical Report, 2005). First, there is an XML schema of user attributes that corresponds to the attribute certificate attributes. Second, there are distribution rules attached to each architectural component (see Figure 4). Third, there are local authorisation policies on local LDAP servers. Fourth, there are master authorisation policies on the central LDAP server. There are several types of policies:

• Authorisation policy for hierarchy. It defines hierarchies of How, When, Where, Why and Who attributes (hierarchy of roles, professions, regions, etc.). In that way, a hierarchical RBAC can be implemented, with constraints defined by security attributes (software security, physical security rating, etc.) and nonsecurity attributes (profession, specialisation, etc.).
• Authorisation policy for hierarchy combinations. It defines which combination of, for example, role hierarchy and profession hierarchy is valid.
• Authorisation policy for DRs. It defines which combinations of attributes in a Distribution Rule are allowed.
Table 2. Distribution Rule objects [1]

Classes  Attributes                          Type
Who      Profession                          String
         Specialization                      String
         Engaged in care                     Boolean
         Healthcare agent                    Class
Where    Country                             String
         Legal requirement                   Boolean
When     Episode of care                     Boolean
         Episode reference                   String
Why      Healthcare process code             String
         Healthcare process text             String
         Sensitivity class                   String
         Purpose of use                      Class
         Healthcare party role               Class
How      Access method (read, modify)        String
         Consent required                    Class
         Signed                              Boolean
         Encrypted                           Boolean
         Operating system security rating    String
         Physical security rating            String
         Software security rating            String
There is an enable/disable flag, which defines whether the policy is enabling or disabling. There are, in fact, two administrators: one on the clinical server and another on the central LDAP server. In that way, this approach provides flexibility and administrative simplicity. Our future work is to explore the best allocation of these policies between the central server and the local servers.
Encryption

The MEDIS project implements Web Service security between the clinical and central servers and SSL between the central server and a client (Microsoft, IBM, 2004). We use Apache's implementation of the OASIS Web Services Security (WS-Security) specification, Web Service Security for Java (WSS4J) (W3C Recommendation, 2002). WSS4J can secure Web services deployed in most Java Web services environments; however, it has specific support for the Axis Web services framework. WSS4J provides the encryption and digital signing of SOAP messages. The RSA algorithm has been chosen for signing and TripleDES for encryption. Communication between the browser and the Web server has been encrypted using SSL. Currently, the Netscape 6 browser and the Tomcat 5.0 Web server are used, and the agreed cipher suite between them is SSL_RSA_WITH_RC4_128_MD5.

FUTURE WORK

If an application has a large number of users and requires a large number of roles and fine-grained access control at the same time, a formal verification of security policy properties is needed. Secondly, the MEDIS project is going to develop a powerful search engine based on intensive use of distributed queries, with the corresponding security questions solved. MEDIS is a medical record and we are planning to make it a multidisciplinary and multiprofessional record. Nowadays, integration of electronic records comprises the integration of previously introduced HISs using a communication protocol, such as HL7, and Web Services. Hospitals and clinics are connected in grids with protocols similar to the Web Services SOAP protocol.

CONCLUSION

Regarding the basic requirements of secure communication and secure cooperation in distributed systems based on networks, basic security services are required. These services have to provide identification and authentication, integrity, confidentiality, availability, audit, accountability (including nonrepudiation), and access control. Additionally, infrastructural services such as registration, naming, directory services, certificate handling, or key management are needed. Especially, but not only in health care, value added services protecting human privacy rights are indisputable. This chapter presents existing solutions in transnational electronic healthcare records and also gives a proposal for EHCR architecture and EHCR security solutions in developing countries with widespread Internet.

Acknowledgment

The author would like to thank Mrs. Vesna Zivkovic, PhD, Director of Institute Mihailo Pupin, Computer Systems, for her overall support.
REFERENCES

Beznosov, K. (2004). On the benefits of decomposing policy engines into components. Third Workshop on Adaptive and Reflective Middleware, Toronto, Canada.

Blobel, B. (2001). The European TrustHealth project experiences with implementing a security infrastructure. International Journal of Medical Informatics, 60, 193-201.

Blobel, B. (2004). Authorisation and access control for electronic health record systems. International Journal of Medical Informatics, 73, 251-257.

Blobel, B., Hoepner, P., Joop, R., Karnouskos, S., Kleinhuis, G., & Stassinopoulos, G. (2003). Using a privilege management infrastructure for secure Web-based e-health applications. Computer Communication, 26(16), 1863-1872.

Chadwick, D., et al. (2003, March-April). Role based access control with X.509 attribute certificates. IEEE Internet Computing, 62-69.

Comité Européen de Normalisation ENV 13606 Standard. (2002). Extended architecture.

Comité Européen de Normalisation ENV 13729 Standard. (2002). Secure user identification.

Directive of the European Parliament and of the Council of 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data. Retrieved January 10, 2006, from http://www.dsv.su.se/jpalme/society/eu-personalprivacy-directive.html

Information Systems and Information Technology Solutions. (n.d.). Sao Paolo University report. Retrieved June 10, 2006, from http://www.virtual.epm.br/material/healthcare/B01.pdf

ITU-T Standard X.509. (1995, October 24). Information technology. Open systems interconnection - The directory: Public-key and attribute certificate frameworks.

Joshi, J., et al. (2004, November-December). Access control language for multidomain environments. IEEE Internet Computing, 40-50.

Katsikas, S., Spinellis, D., Iliadis, J., & Blobel, B. (1998). Using trusted third parties for secure telemedical applications over the WWW: The EUROMED-ETS approach. International Journal of Medical Informatics, 49, 59-68.

MEDIS technical report. (n.d.). Retrieved September 28, 2005, from http://www.imp.bg.ac.yu/dokumenti/MEDISTechnicalReport.doc

Microsoft and IBM white paper. (2005). Security in Web service world: A proposed architecture and roadmap. Retrieved September 28, 2005, from http://www-128.ibm.com/developerworks/webservices/library/ws-secmap

Reichertz, P. (in press). Hospital information systems: Past, present, future. International Journal of Medical Informatics.

Sucurovic, S., & Jovanovic, Z. (2005, February). Java cryptography & X.509 authentication. San Francisco: Dr. Dobb's Journal.

Sucurovic, S., & Jovanovic, Z. (in press). Java cryptography & attribute certificate management. San Francisco: Dr. Dobb's Journal.

Wei, Z., & Meinl, Z. (2004). Implement role based access control with attribute certificates. ICACT 2004, International Conference on Advanced Communication Technology, Korea.
Chapter VII
Interactive Access Control and Trust Negotiation for Autonomic Communication

Hristo Koshutanski, University of Trento, Italy
Fabio Massacci, University of Trento, Italy
Abstract

Autonomic communication and computing is the new paradigm for dynamic service integration over a network. In an autonomic network, clients may have the right credentials to access a service but may not know it; equally, it is unrealistic to assume that service providers would publish their policies on the Web so that clients could do policy evaluation themselves. To solve this problem, the chapter proposes a novel interactive access control model: Servers should be able to interact with clients asking for missing credentials, whereas clients may decide to comply or not with the requested credentials. The process iterates until a final agreement is reached or denied. Further, the chapter shows how to model a trust negotiation protocol that allows two entities in a network to automatically negotiate requirements needed to access a service. A practical implementation of the access control model is given using X.509 and SAML standards.
INTRODUCTION

Recent advances in Internet technologies and the globalization of peer-to-peer communications offer organizations and individuals an open
environment for rapid and dynamic resource integration. In such an environment, federations of heterogeneous systems are formed with no central authority and no unified security infrastructure. Considering this level of openness, each server is
responsible for the management and enforcement of its own security policies with a high degree of autonomy. Controlling access to services is a key aspect of networking and the last few years have seen the domination of policy-based access control. Indeed, the paradigm is broader than simple access control, and one may speak of policy-based self-management of networks (see, e.g., IEEE Policy Workshop series; Lymberopoulos, Lupu & Sloman, 2003; Sloman & Lupu, 1999). The intuition is that actions of nodes controlling access to services are automatically derived from policies. The nodes look at events, requested actions and credentials presented to them, evaluate the policy rules according to those new facts and derive the actions (Sloman & Lupu, 1999; Smirnov, 2003). Policies can be “simple” iptables configuration rules for Linux firewalls (see http://www.netfilter.org) or complex logical policies expressed in languages such as Ponder (Damianou, Dulay, Lupu, & Sloman, 2001) or a combination of policies across heterogeneous systems as in OASIS XACML framework (XACML, 2004). Dynamic coalitions and autonomic communication add new challenges: A truly autonomic network is born when nodes are no longer within the boundary of a single enterprise, which could deploy its policies on each and every node and guarantee interoperability. An autonomic network is characterized by properties of self-awareness, self-management and self-configuration of its constituent nodes. In an autonomic network nodes are like partners that offer services and lightly integrate their efforts into one (hopefully coherent) network. This cross enterprise scenario poses novel security challenges with aspects of both trust management and workflow security. From trust management systems (Ellison et al., 1999; Li, Grosof, & Feigenbaum, 2003; Weeks, 2001) we take the credential-based view. Since access to network services is offered by autonomic nodes and to potentially unknown clients, the
decision to grant or deny access can only be made on the basis of credentials sent by a client. From workflow access control systems (Atluri, Chun, & Mazzoleni, 2001; Bertino, Ferrari, & Atluri, 1999; Georgakopoulos, Hornick, & Sheth, 1995; Kang, Park, & Froscher, 2001) we borrow all classical problems such as dynamic assignment of roles to users, dynamic separation of duties, and assignment of permissions to users according to the least privilege principle. In an autonomic communication scenario a client might have all the necessary credentials to access a service but may simply not know it. Equally, because of privacy considerations, it is unrealistic to assume that servers will publish their security policies on the Web so that clients can do a policy combination and evaluation themselves. So, it should be possible for a server to ask a client on-the-fly for additional credentials whereas the client may disclose or decline to provide them. Next, the server reevaluates the client's request, considering the newly submitted credentials and computes an access decision. The process iterates between the server and the client until a final decision of grant or deny is taken. We call this modality interactive access control. Part of these challenges can be solved by using policy-based self-management of networks, but not all of them. Indeed, if we abstract away the details on policy implementation, one can observe that the only reasoning service actually used by today's policy-based approaches is deduction: given a policy and a set of additional facts, find out all consequences (actions or obligations) from the policy according to the facts. We simply check whether granting the request can be deduced from the policy and the current facts. Policies could be different (Bertino et al., 2001; Bertino, Ferrari, & Atluri, 1999; Bonatti & Samarati, 2002; Li, Grosof & Feigenbaum, 2003), but the kernel reasoning service is the same. Access control for autonomic communication needs another less-known reasoning service,
taken from the AI domain, called abduction (Shanahan, 1989). Loosely speaking, we could say that abduction is deduction in reverse: Given a policy and a request to access a network service, we want to know what the credentials (facts) are that would grant access. Logically, we want to know whether there is a (possibly minimal) set of facts that, added to the policy, would entail (deduce) the request. If we look again at our intuitive description of interactive access control, it is immediate to realize that abduction is the core service needed by the policy-based autonomic servers to reason about missing credentials. We can also use abduction on the client side so that, whenever a client is asked for missing credentials, it can perform an evaluation on its own policy and counter-request the server for some evidence in order to establish confidence (trust) before disclosing the originally requested credentials.
Chapter Scope

This chapter targets readers who want to put security policies for access control into a practical framework. As a chapter outcome, the readers will be able to understand the logical reasoning services of deduction and abduction, and how to use them to model a practical access control framework. Furthermore, the readers will be able to model interactive access control between two entities, each of them running its own deduction and abduction algorithms, thus allowing a bilateral exchange of access requirements until an agreement is reached or denied. For those readers with a practical background, the chapter presents how to implement and integrate the interactive access control model with security standards such as X.509 and SAML. Readers should be familiar with either logic programming or answer set programming or datalog, as a prerequisite to the chapter's content.
A PRIMER ON INTERACTIVE ACCESS CONTROL

Motivation by Example

Let us consider a shared overlay network PlanetLab between the University of Trento and the Fraunhofer institute in Berlin in the context of the E-NEXT project. For the sake of simplicity, assume that there are three main access types to resources: disk – read access to data residing on the Planet-Lab machines; run – execute access to data and the possibility to run processes on the machines; and configure – including the previous two types of access plus the possibility of configuring network services on the machines. Members of the two labs are classified in a hierarchy shown in Figure 1. The partial order of roles is indicated by arcs, where the higher a role is in the hierarchy, the more powerful it is. A role dominates another role if it is higher in the hierarchy and there is a direct path between them.

Figure 1. Joint hierarchy model

The access policy of the Planet-Lab network specifies that:

• Disk access is allowed to any request coming from the two institutions.
• Run access is allowed to any request coming either from specific machines at the two institutions or from any machine at the two institutions accompanied with a membership certificate.
• Configure access is allowed to anybody that has run access to the network resources and is at least a researcher at the University of Trento or a junior researcher at the Fraunhofer institute. Additionally, configure access is also granted to associate professors or senior researchers with the requirement of accessing the Planet-Lab network from the respective country domains of Italy or Germany. The least restrictive access is granted to full professors or members of the board of directors, obliging them to provide the appropriate credential attesting their positions.
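As a hedged illustration of how such rules could be encoded, the following Java sketch expresses a simplified version of the three access types. The credential strings, domain suffixes and the flat role model are invented here for readability and are not the authors' actual policy encoding.

import java.util.Set;

public class PlanetLabPolicySketch {

    // Hypothetical domain names for the two institutions.
    static boolean fromTheTwoInstitutions(String requestDomain) {
        return requestDomain.endsWith("unitn.it")
            || requestDomain.endsWith("fokus.fraunhofer.de");
    }

    static boolean diskAccess(String requestDomain) {
        return fromTheTwoInstitutions(requestDomain);
    }

    static boolean runAccess(String requestDomain, boolean trustedLabMachine,
                             Set<String> credentials) {
        return trustedLabMachine
            || (fromTheTwoInstitutions(requestDomain) && credentials.contains("membership"));
    }

    static boolean configureAccess(String requestDomain, boolean trustedLabMachine,
                                   Set<String> credentials) {
        // Researchers (UNITN) or junior researchers (Fraunhofer) with run access.
        if (runAccess(requestDomain, trustedLabMachine, credentials)
                && (credentials.contains("researcher@unitn")
                    || credentials.contains("juniorResearcher@fokus"))) {
            return true;
        }
        // Associate professors or senior researchers from the .it or .de domains.
        boolean fromItalyOrGermany = requestDomain.endsWith(".it") || requestDomain.endsWith(".de");
        if (fromItalyOrGermany && (credentials.contains("associateProfessor")
                                || credentials.contains("seniorResearcher"))) {
            return true;
        }
        // Least restrictive: full professors and board members with the credential.
        return credentials.contains("fullProfessor") || credentials.contains("boardOfDirectors");
    }
}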
Let us consider the scenario where Alice is a senior researcher at Fraunhofer and daily needs to get run access to resources of the Planet-Lab network. So, whenever she is at her office and wants to execute some services, she sends her employee certificate to the system. According to the access policy, run access is granted to Alice because, as an employee, she is a member of the Planet-Lab hierarchy model (see Figure 1). Now, examine the case in which Alice wants to access the system from her home (in Munich), presenting her employee certificate and assuming that it is potentially enough to get run access to certain services. But, according to the policy rules, the system should deny the request because run access requests coming from domains different from the University of Trento or the Fraunhofer institute are allowed only to associate professors or senior researchers or higher role positions. So, the natural question is, "is this the behavior we want from the system?" Shall we leave Alice with only an "access denied" decision, idle for the whole day, simply because she did not know or just has forgotten that access to the system from outside Fraunhofer needs another certificate? An answer like "sorry, we also need a credential for being at least a senior researcher" would be more than welcomed by most employees. At
the same time, servers want to be sure to ask this additional credential only to employees.
Protecting Sensitive Policies

Practical access control policies, like those protecting companies' resources, EU project sensitive documents, etc., may leak valuable business information when revealed to the public. Furthermore, an access control policy sometimes may disclose the entire business strategy of a company or an institution. Consider the following examples:

Example 1 (Seamons, Winslett, & Yu, 2001). Suppose a Web page's access control policy states that in order to access documents of a project in the site, a requester should present an employee ID issued either by Microsoft or by IBM. If such a policy can be shown to any requester, then one can infer with high confidence that this project is a cooperative effort of the two companies.

Example 2 (Yu & Winslett, 2003). McKinley clinic makes its patient records available for online access. Let r be Alice's record. To gain access to r a requester must either present Alice's patient ID for McKinley clinic (CAliceID), or present a California social worker license (CCSWL) and a release-of-information credential (CRoI) issued to the requester by Alice. Knowing that Alice's record specifically allows access by social workers will help people infer that Alice may have a mental or emotional problem. Alice will probably want to keep the latter constraint inaccessible to strangers. However, employees of McKinley clinic (CMcKinleyEmployee) should be allowed to see the contents of the policy.

To summarize, we have identified the following two issues:

• Provide additional information on missing credentials back to clients in case they do not have enough access rights.
• Protect access policies and their requirements from unnecessary disclosure.

How to approach the above cases is the subject of the next section.
Interactive Access Control vs. Current Approaches

In this section we introduce step-by-step the novel contribution of the interactive access control model by evolving the existing access control frameworks. Let us start with traditional access control. A server has a security policy for access control PA that is used when taking access decisions about the usage of services offered by a service provider. A user submits a set of credentials Cp and a service request r in order to execute a service. We say that policy PA and credentials Cp entail r (informally for the moment, PA ∪ Cp ⊢ r), meaning that request r should be granted by the policy PA and the presented credentials Cp. Figure 2 shows the "traditional" access control decision process. Whether the decision process uses RBAC (Sandhu et al., 1996), SDSI/SPKI (SPKI), RT (Li & Mitchell, 2003) or any other trust management framework is immaterial
Figure 2. Traditional access control
1. check whether PA and Cp entail r,
2. if the check succeeds then grant access
3. else deny access.
at this stage: they can be captured by suitably defining PA, Cp and the entailment operator (⊢). This approach is the cornerstone of most logical formalizations (De Capitani di Vimercati & Samarati, 2001): If the request r is a consequence of the policy and the credentials, then access is granted; otherwise it is denied. A number of works have deemed such blunt denials unsatisfactory. Bonatti and Samarati (2002) and Yu, Winslett and Seamons (2003) proposed to send back to clients some of the rules that are necessary to gain additional access. Figure 3 shows the essence of the approaches. Both works have powerful and efficient access control establishment mechanisms. However, they merge two different security issues: the policy for governing access to the server's own resources and the policy for governing the disclosure of foreign credentials. Both approaches require policies to be flat: A policy protecting a resource must contain all credentials needed to allow access to that resource. As a result, it calls for a structuring of policy rules that is counter-intuitive from the access control point of view. For instance, a policy rule may say that for access to the full text of an online journal article a requester must satisfy the requirements for browsing the journal's table of contents plus some additional credentials. The policy detailing access to the table of contents could then specify another set of credentials. Further, constraints that would make policy reasoning nonmonotone (such as separation of duties) are also ruled out as they require looking at more than one rule at a time. So, if the policy is not flat and it has constraints on the credentials that
Figure 3. Disclosable access control
1. check whether PA and Cp entail r,
2. if the check succeeds then grant access
3. else
   (a) find a rule r ← p ∈ PartialEvaluation(PA ∪ Cp), where p is a (partial) policy protecting r,
   (b) if such a rule exists then send it back to the client else deny access.
can be presented at the same time (e.g., separation of duties) or a more complex role structure is used, those formalisms would not be complete. Bonatti and Samarati's approach has further limitations on the granularity level of information disclosure. In their work the policy governing access to a service is composed of two parts: a prerequisite rule and a requisite rule. Prerequisite rules specify the requirements that a client should satisfy before being considered for the requirements stated by the requisite rules, which in turn grant access to services. Thus, prerequisite rules play the role of controlling the disclosure of the service requisite rules. In this way their approach does not decouple policy disclosure from policy satisfaction, as already noted by Yu and Winslett (2003), which becomes a limitation when information disclosure plays a crucial role. The work by Yu and Winslett (2003) overcomes this latter limitation and proposes to treat policies as first-class resources, i.e., each policy protecting a resource is considered as a sensitive resource itself whose disclosure is recursively protected by another policy. Still they have the same flatness, unicity and monotonicity limitations. These limitations are due to a traditional viewpoint: the only reasoning service one needs for access control is deduction, i.e., check that the request follows from the policy and the presented credentials. Intuition 1: We claim that we need another less-known reasoning service, called abduction:
check which missing credentials are necessary so that the request can follow from the policy and the presented credentials. Thereupon, we present the basic idea of interactive access control in Figure 4. The "compute a set CM such that ..." (Step 3a) is exactly the operation of abduction. An essential part of the abduction reasoning is the computation of missing credentials that are a solution for a request and, at the same time, consistent with the access policy. The consistency property gives us strong guarantees for the missing set of credentials when applying the algorithm on nonmonotonic policies. This solution raises a new challenge: how do we decide the potential set of missing credentials? It is clearly undesirable to disclose all credentials occurring in the access policy and, therefore, we need a way to define how to control the disclosure of such a set. As we have already noted, Yu and Winslett (2003) partly addressed this issue by protecting policies within the access policy itself. The authors distinguish between policy disclosure and policy satisfaction. It allows them to control separately when a policy can be disclosed and when a policy is satisfied. However, this is not really satisfactory as it does not decouple the decision about access from the decision about disclosure. Resource access is decided by the business logic whereas credential access is due to security and privacy considerations.
Figure 4. Basic idea of interactive access control
1. check whether PA and Cp entail r,
2. if the check succeeds then grant access
3. else
   (a) compute a set CM such that:
       - PA together with Cp and CM entail r, and
       - PA together with Cp and CM preserve consistency.
   (b) if CM exists then ask the client for CM and iterate
   (c) else deny access.
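A compact Java rendering of the loop in Figure 4 is sketched below. The entails and abduce methods stand for the deduction and abduction reasoning services discussed in the text; they are left as stubs here, since any suitable logic-programming engine could implement them, and the candidate set will later be refined to the disclosable credentials CD.

import java.util.List;
import java.util.Optional;
import java.util.Set;

public class InteractiveAccessControl {

    // Deduction: does PA together with the presented credentials entail r?
    static boolean entails(Set<String> policy, Set<String> credentials, String request) {
        throw new UnsupportedOperationException("delegate to a logic-programming engine");
    }

    // Abduction: find a set of missing credentials, taken from the candidates,
    // whose addition entails r and keeps the policy consistent.
    static Optional<Set<String>> abduce(Set<String> policy, Set<String> credentials,
                                        Set<String> candidates, String request) {
        throw new UnsupportedOperationException("delegate to a logic-programming engine");
    }

    enum Decision { GRANT, DENY, ASK_FOR_MORE }

    // One round of the algorithm in Figure 4; candidateCredentials stands for
    // the credentials occurring in PA (later restricted to the disclosable set).
    static Decision decide(Set<String> pA, Set<String> presented,
                           Set<String> candidateCredentials, String request,
                           List<String> askBack) {
        if (entails(pA, presented, request)) {
            return Decision.GRANT;                          // steps 1-2
        }
        Optional<Set<String>> cM =
                abduce(pA, presented, candidateCredentials, request);  // step 3a
        if (cM.isPresent()) {
            askBack.addAll(cM.get());                       // step 3b: ask the client and iterate
            return Decision.ASK_FOR_MORE;
        }
        return Decision.DENY;                               // step 3c
    }
}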
Intuition 2: We claim that we need two policies: one for granting access to one's own resources and one for disclosing the need of foreign (someone else's) credentials. Therefore, we introduce a security policy for disclosure control PD. The policy for disclosure control is used to decide the credentials whose need can potentially be disclosed to a client. In other words, PA protects the partner's resources by stipulating what credentials a requestor must satisfy to be authorized for a particular resource while, in contrast, PD defines which credentials among those occurring in PA are disclosable and so, if needed, can be demanded from the requestor. We give a new refined algorithm for interactive access control with controlled disclosure shown in Figure 5. Yu and Winslett's policy scheme determines whether a client is authorized to be informed of the need to satisfy a given policy. In our case, having a separate disclosure policy allows us to have finer-grained disclosure control, i.e., to determine whether a client is allowed to see the need of single credentials: we can control the disclosure of (entire) policies as the finest-grained unit as well as the disclosure of the single credentials composing those policies, separately and independently from the disclosure of the policies themselves. Now, let us look at Yu and Winslett's own example (Example 2) formalized as two logic programs:
Example 3

PD:   CAliceID ←
      CCSWL ← CMcKinleyEmployee
      CRoI ← CMcKinleyEmployee

PA:   r ← CAliceID
      r ← CCSWL, CRoI
The disclosure control policy reads as follows: the disclosure of Alice's ID is not protected and may potentially be released to anybody requesting it. The need for the credentials California social worker license CCSWL and release-of-information CRoI is released, upon request, only to those users that have pushed their McKinley employee certificates CMcKinleyEmployee. The access policy specifies that access to r is granted either to Alice or to California social workers that have a release-of-information credential issued by Alice. We note that the disclosure requirement for CMcKinleyEmployee cannot be captured via the service accessibility scheme by Bonatti and Samarati (2002) and refer to Yu and Winslett (2003) for details. We also point out (as in Yu & Winslett, 2003) that having CMcKinleyEmployee does not allow access to r but rather is used to unlock more information on how to access r. We also emphasize that the disclosure control on r's policy {CCSWL, CRoI} can be further split down to controlling the disclosure of the single credentials constituting it. There are still tricky questions to be answered, such as:
Figure 5. Interactive access control with controlled disclosure
1. check whether PA and Cp entail r,
2. if the check succeeds then grant access
3. else
   (a) compute the set of disclosable credentials CD entailed by PD and Cp,
   (b) compute a set CM out of the disclosable ones (CM ⊆ CD) such that:
       - PA together with Cp and CM entail r, and
       - PA together with Cp and CM preserve consistency.
   (c) if CM exists then ask the client for CM and iterate
   (d) else deny access.
• How do we know that the algorithm terminates? In other words, can it actually arrive at a grant? For example, can we assure that the server will not keep asking Alice for a UNITN full professor credential, which she does not have, while never asking for a FOKUS senior researcher credential, which she has?
• How do we know that if a client is granted access then he has enough credentials for the resource?
• On the other hand, if a client has a solution for a resource, will he be granted the resource?

We will show how to fix the details of the algorithm later in the chapter so that all answers are positive. So far, we have considered the access control process taking place on the server side. Then one would ask what about protecting clients from unauthorized disclosure of missing credentials. One can use the interactive access control algorithm also on the client side so that the client can do policy evaluation itself to determine whether the requested credentials can be disclosed to (granted, to be seen by) servers and, alternatively, what additional information the servers should provide in order to see the requested credentials. In this way the interactive access control model can be used on the client and server sides, allowing them to automatically negotiate missing credentials until an agreement is reached or denied. The full development of the negotiation model is described later in the chapter. This is enough to cover stateless systems. We still have a major challenge ahead: How do we cope with stateful systems? Stateful systems are systems where the access decisions change depending on past interactions or past presented credentials. Such systems can easily become inconsistent with respect to the client's set of presented credentials, mainly because access policies may forbid the presentation of a credential
if another currently active credential has been presented in the past. Past requests or service usage may deny access to future services, as in Bertino, Ferrari and Atluri's (1999) centralized access control model for workflows. Separation of duties means that we cannot extend privileges by supplying more credentials. For instance, a branch manager of a bank clearing a cheque cannot be the same member of staff who has emitted the cheque (Bertino, Ferrari & Atluri, 1999, p. 67). If we have no memory of past credentials then it is impossible to enforce any security policy for separation of duties on an application workflow. The problems that could cause a process to get stuck are the following:

• The request may be inconsistent with some roles, actions or events taken by the client in the past.
• The new set of presented credentials may be inconsistent with system requirements and constraints such as separation of duties.

Intuition 3: To address the problem of inconsistency, we need to extend the stateless algorithm in a way that allows a service provider to reason not only about what missing credentials are needed to get a service, but also about which conflicting credentials, which make the access policy inconsistent, have to be deactivated. We need a procedure by which, if a user has exceeded his privileges, he has the chance to revoke them. The algorithm for interactive access control for stateful systems is shown in Figure 6. Steps 1 to 3d are essentially the basic interactive access control algorithm. The part for stateful systems comes when we are not able to find a set of missing credentials among the disclosable ones (Step 3d). In this case there are two reasons which may cause the abduction to fail when computing CM. The first one could be that in CD there are not enough disclosed credentials to grant r, a case in
Figure 6. Interactive access control for stateful systems with controlled disclosure
1. check whether PA and Cp entail r,
2. if the check succeeds then grant access
3. else
   (a) compute the set of disclosable credentials CD entailed by PD and Cp,
   (b) compute a set of missing credentials CM out of the disclosable ones (CM ⊆ CD) such that:
       - PA together with Cp and CM entail r, and
       - PA together with Cp and CM preserve consistency.
   (c) if a set CM exists then ask the client for CM and iterate
   (d) else
       i. compute a set of excessing credentials CE among the client's presented ones (CE ⊆ Cp) such that:
          A) PA together with Cp\CE preserve consistency, and
          B) there exists CM (⊆ CD) such that:
             - PA together with Cp\CE and CM entail r, and
             - PA together with Cp\CE and CM preserve consistency.
       ii. if a set CE exists then ask the client to present CM and revoke CE, and iterate
       iii. else deny access.
The first one is that CD does not contain enough disclosable credentials to grant r, in which case we should deny access (Step 3(d)iii). The second one is that there might be credentials in the client's set of presented credentials Cp that make the policy state inconsistent, in which case no solution among the disclosable credentials can be found by the abduction. The latter reason motivates Step 3(d)i. In this step we first want to find a set of conflicting credentials CE, called excessing credentials, among the presented ones Cp such that removing them from Cp keeps the access policy consistent (Step 3(d)iA). Second, on top of the non-conflicting credentials there must exist a solution set that entails the service request (Step 3(d)iB). The second requirement ensures that there is a potential solution for the client to get access to the requested service. We refer the reader to Koshutanski (2005) for full details on the stateful model.
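To make the notion of excessing credentials concrete, consider a minimal sketch using the rule and constraint notation introduced formally in the policy syntax section below. The predicate and credential names (clearCheque, branchManager, chequeIssuer, bankSOA) are hypothetical and simplified; they are not part of the Planet-Lab example:

    grant(clearCheque) ← credential(Holder, branchManager, bankSOA).
    ← credential(Holder, branchManager, bankSOA), credential(Holder, chequeIssuer, bankSOA).

If a client has already presented credential(alice, chequeIssuer, bankSOA) and now requests clearCheque, no set of missing credentials can satisfy the request without violating the separation-of-duty constraint. Step 3(d)i would instead compute CE = {credential(alice, chequeIssuer, bankSOA)} as the excessing set to be revoked, after which CM = {credential(alice, branchManager, bankSOA)} entails the request while preserving consistency.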
Interactive Access Control and Current Policy-Based Approaches

Introducing the reasoning services and the respective access control algorithms does not by itself show the advantages of the interactive access control model. This section describes how
current logic-based approaches suit our interactive model. The logical model, as presented so far, abstracts from a specific policy language and presents an execution framework for reasoning about access control. As such, the model fills an important gap between policy language specification and policy language enforcement and evaluation. We skip the classical access control models here (see De Capitani di Vimercati & Samarati, 2001, for a comprehensive survey) and concentrate on the current logic-based access control approaches cited in the literature. The work by Li, Mitchell, and Winsborough (2002) introduces a model for distributed access control called RT (Role-based Trust management). The core idea of the model is the way it classifies principals in a distributed manner. Basically, the model classifies each entity's local attributes (roles) and how other entities relate to those attributes. It classifies how each entity's attributes relate to other entities' attributes (attribute mapping from one domain to another). It also defines attribute-based delegation of attribute authority, i.e., the ability to delegate authority to strangers whose trustworthiness is determined based on their own certified attributes.
A later approach (Li, Li, & Winsborough, 2005) extends the RT framework to cope with different cryptographic schemes (e.g., zero-knowledge proof of attributes, oblivious signature envelopes, hidden credentials, etc.) that are used to improve privacy protection and effectiveness during a process of bilateral negotiation. The authors propose a new language, called attribute-based trust negotiation language (ATNL), that specifies fine-grained protection of resources and their policies. Another interesting logic-based approach is that of Ruan, Varadharajan, and Zhang (2003). In contrast to what we have seen, this work presents an authorization model that supports both positive and negative authorizations. The model introduces a variety of rules that define different authorization and delegation statements, as well as rules for conflict resolution. This work targets another type of policies, where explicit negation is needed to express the policy requirements. All of the approaches described above are good candidates for an underlying policy language, as the interactive access control model is data-driven by the abduction and deduction reasoning services. So, we will not target a particular policy language throughout the chapter, as it is immaterial to the metalevel access control process. Winsborough and Li (2004) postulate an important property concerning trust negotiation, called safety in automated trust negotiation. During a negotiation process a sensitive credential is disclosed when its policy is satisfied by the negotiator. The problem comes from the fact that, although a sensitive credential itself is not transmitted unless its associated policy is satisfied, the behavior of a negotiator differs based on whether he has the attribute or not. One can reveal additional information about the content of the credential by monitoring the opponent's behavior. Since the interactive access control model enforces a metalevel negotiation process, one
can address the safety property requirement by properly defining the structure of access and disclosure control policies.
POLICY SYNTAX AND SEMANTICS

Syntax

Access policies are written as normal logic programs (Apt, 1990). These are sets of rules of the form:

A ← B1, . . . , Bn, not C1, . . . , not Cm    (1)
where A, Bi and Cj are (possibly ground) predicates. A is called the head of the rule, each Bi is called a positive literal and each not Cj a negative literal, whereas the conjunction of the Bi and not Cj is called the body of the rule. If the body is empty the rule is called a fact. A normal logic program is a set of rules. In our framework, we also need constraints, which are rules with an empty head:

← B1, . . . , Bn, not C1, . . . , not Cm    (2)
The intuition is to interpret the rules of a program P as constraints on a solution set S (a set of ground atoms) for the program itself. So, if S is a set of atoms, rule (1) is a constraint on S stating that if all Bi are in S and none of the Cj are in it, then A must be in S. A constraint (2) is used to rule out from the set of acceptable models those situations in which all Bi are true and all Cj are false. One of the most prominent semantics for normal logic programs is the stable model semantics proposed by Gelfond and Lifschitz (1988) (see also Apt, 1990, for an introduction). In the following we formally define the reasoning services intuitively introduced in the motivation section.

Definition 1 (Deduction and Consistency) Let P be a policy and L be a ground literal. L is
deducible from P (P |= L) if L is true in every stable model of P. P is consistent (P |≠ ⊥) if there is a stable model for P.

Definition 2 (Security Consequence) A resource r is a security consequence of a policy P if (i) P is consistent and (ii) r is deducible from P.

Definition 3 (Abduction) Let P be a policy, H a set of ground atoms (called hypotheses, or abducibles), L a ground literal (called observation) and ≺ a partial order (p.o.) over subsets of H. A solution of the abduction problem is a set of ground atoms E such that:

1. E ⊆ H,
2. P ∪ E |= L,
3. P ∪ E |≠ ⊥,
4. any set E′ ≺ E does not satisfy all conditions above.
Traditional partial orders are subset containment or set cardinality.

Definition 4 (Solution Set for a Resource) Let P be a policy and r be a resource. A set of credentials CS is a solution set for r according to P if r is a security consequence of P and CS, i.e., P ∪ CS |= r and P ∪ CS |≠ ⊥.

Definition 5 (Monotonic and Nonmonotonic Policy) A policy P is monotonic if whenever a set of statements C is a solution set for r according to P (P ∪ C |= r) then any superset C′ ⊃ C is also a solution set for r according to P (P ∪ C′ |= r). In contrast, a nonmonotonic policy is a logic program in which, if C is a solution for r, there may exist C′ ⊃ C that is not a solution for r, i.e., P ∪ C′ |≠ r.
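As a small worked illustration of these definitions (the policy and atom names below are hypothetical and unrelated to the Planet-Lab formalization), consider the policy P consisting of one rule and one constraint:

    grant(r) ← cA, cB.
    ← cB, cC.

With hypotheses H = {cA, cB, cC} and observation grant(r), the set E = {cA, cB} is a solution of the abduction problem of Definition 3: it is a subset of H, P ∪ E |= grant(r), and P ∪ E |≠ ⊥ because the constraint body is not satisfied. Any set containing both cB and cC violates consistency, and {cA, cC} does not entail grant(r), so E is in fact the only solution (and is minimal under both subset containment and set cardinality). By Definition 4 it is a solution set for r, and the policy is nonmonotonic in the sense of Definition 5 because the superset {cA, cB, cC} is no longer a solution.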
Formalization of the Example

Following is the full formalization of the example introduced at the beginning of the chapter. The predicate authNet(IP, DomainName) is a tuple whose first argument is the IP address of the authorized network endpoint (the client's machine) and whose second argument is the domain name the IP address comes from. For any resource in the system a user can have disk, run or configure access rights. We write variables with an initial capital letter (e.g., Holder, Attr, Issuer) and constants with an initial lowercase letter (e.g., planetLabClass1SOA, institute, juniorResearcher). A variable stands for any value in its field. Figure 7 shows the formalization of the Planet-Lab policies. Following is a functional explanation of the policies.

The access policy says:

• Rules (1), (2), and (3) classify issuers (SOAs) into different logical categories used by the access control logic. For example, Rule (1) categorizes planetLabClass1SOA as a system-level SOA.
• Rules (4) and (5) give disk access to the shared network content to everybody from the University of Trento and the Fraunhofer institute, regardless of the IP and roles at these institutions.
• Rule (6) gives disk access to anybody who has run access permission.
• Rules (7) and (8) allow run access for those machines that are internal to the two institutions (dedicated only to Planet-Lab access) and distinguished by their fixed IPs.
• Rules (9), (10), and (11) relax the previous two and allow run access from any place within the institutions to those users who present either a Planet-Lab membership certificate or a role-position certificate at one of the two institutions.
• Rule (12) gives run access to anybody who has configure access permission.
• Rules (13) and (14) give the configure access right to a user who has disk access and is at minimum an assistant, attested (issued) by a trusted university SOA, or at minimum a junior researcher, attested by a trusted institutional SOA.
• Rules (15) and (16) relax the previous two and give configure access to associate professors and senior researchers, provided that the requests come from the respective country domains.
• Rules (17) and (18) give configure access regardless of the geographical region, but only to
members of the board of directors and to full professors.

The disclosure policy says:

• Rules (1), (2), (3) and (4) disclose the need for the specific authorization networks the request should come from. The need for specific authorization domains is disclosed to any potential client.
• Rules (5), (6) and (7) disclose the need for an employee, researcher or PlanetLab member certificate to any potential client.
• Rules (8) and (9) disclose (upgrade) the need for role-position certificates higher than those provided either by the client or (disclosed) by other rules of the policy.
Figure 7. Planet-Lab access and disclosure control policies

Access Policy:
(1) classify(planetLabClass1SOA, system).
(2) classify(fraunhoferClass1SOA, institute).
(3) classify(unitnClass1SOA, university).
(4) grant(disk) ← authNet(*, *.unitn.it).
(5) grant(disk) ← authNet(*, *.fraunhofer.de).
(6) grant(disk) ← grant(run).
(7) grant(run) ← authNet(193.168.205.*, *.unitn.it).
(8) grant(run) ← authNet(198.162.45.*, *.fraunhofer.de).
(9) grant(run) ← grant(disk), credential(Holder, memberPlanetLab, Issuer), classify(Issuer, system).
(10) grant(run) ← grant(disk), credential(Holder, Attr, Issuer), classify(Issuer, university), Attr ≥ researcher.
(11) grant(run) ← grant(disk), credential(Holder, Attr, Issuer), classify(Issuer, institute), Attr ≥ employee.
(12) grant(run) ← grant(configure).
(13) grant(configure) ← grant(disk), credential(Holder, Attr, Issuer), classify(Issuer, university), Attr ≥ assistant.
(14) grant(configure) ← grant(disk), credential(Holder, Attr, Issuer), classify(Issuer, institute), Attr ≥ juniorResearcher.
(15) grant(configure) ← authNet(*, *.it), credential(Holder, Attr, Issuer), classify(Issuer, university), Attr ≥ assProf.
(16) grant(configure) ← authNet(*, *.de), credential(Holder, Attr, Issuer), classify(Issuer, institute), Attr ≥ seniorResearcher.
(17) grant(configure) ← credential(Holder, Attr, Issuer), classify(Issuer, university), Attr ≥ fullProf.
(18) grant(configure) ← credential(Holder, Attr, Issuer), classify(Issuer, institute), Attr ≥ boardOfDirectors.

Disclosure Policy:
(1) authNet(*, *.it).
(2) authNet(*, *.de).
(3) authNet(*, *.unitn.it).
(4) authNet(*, *.fraunhofer.de).
(5) credential(Holder, memberPlanetLab, planetLabClass1SOA).
(6) credential(Holder, employee, unitnClass1SOA).
(7) credential(Holder, researcher, fraunhoferClass1SOA).
(8) credential(Holder, AttrX, unitnClass1SOA) ← credential(Holder, AttrY, unitnClass1SOA), AttrX > AttrY.
(9) credential(Holder, AttrX, fraunhoferClass1SOA) ← credential(Holder, AttrY, fraunhoferClass1SOA), AttrX > AttrY.
THE INTERACTIVE ACCESS CONTROL ALGORITHM

Below we summarize all the notation we have introduced (policies, credentials, etc.) so far:

• PA: the security policy governing access to resources.
• PD: the security policy controlling the disclosure of foreign (missing) credentials.
• Cp: the set of credentials presented by a client in a single interaction.
• CP: the set of active credentials that have been presented by a client during an interactive access control process.
• CN: the set of credentials that a client has declined to present during an interactive access control process.
Now we have all the necessary material to introduce our interactive access control algorithm, shown in Figure 8. The intuition behind the algorithm is the following. Once the client has initiated a service request r with (optionally) a set of credentials Cp, the interactive algorithm updates the client's profile CP and CN (lines 1 and 2). CP is updated with the newly presented credentials Cp, while CN is updated with the set difference between what the client was asked in the last interaction (CM) and what he presents in the current one (Cp). Next, the algorithm consults the access decision function (line 3). The first step of the access decision function checks whether the request r is granted by PA according to the client's set CP (Step 1). If the check fails, which is the starting point of the interactive framework, then in Step 2a the algorithm computes all credentials disclosable from PD according to CP and removes from the resulting set all already declined and already presented credentials. The latter is done to avoid endless loops of asking for something already declined or presented. Then, the algorithm computes (using the abduction reasoning service) all possible
subsets of CD that are consistent with the access policy PA and, at the same time, grant r. Out of all those sets (if any) the algorithm selects the minimal one.

Example 5 A senior researcher at the Fraunhofer institute FOKUS wants to reconfigure an online service for paper submissions of a workshop. The service is part of a big management system hosted on the University of Trento's network, which is part of the Planet-Lab network formalized in the previous section. To do so, at the time of access, she presents her employee certificate, issued by a Fraunhofer certificate authority, presuming that, as an employee, it is enough. Formally speaking, the request comes from the domain fokus.fraunhofer.de with an attribute credential for an employee. The set of credentials is:

{authNet(198.162.193.46, fokus.fraunhofer.de), credential(AliceMilburk, employee, fraunhoferClass1SOA)}

According to the access policy the credentials are not enough to get configure access (see rule 14 in Figure 7). Then, the algorithm (Step 2a in Figure 8) computes the set of disclosable credentials from the disclosure policy and the user's set of active credentials CP. In our case, CP is the set of credentials mentioned above. Next, the algorithm computes CD as the need for all roles higher in position than memberPlanetLab (see Figure 7, Disclosure Policy part), and the abduction step (Figure 8, Step 2b), with minimal set cardinality as the criterion, computes the following missing sets that satisfy the request:

{credential(AliceMilburk, juniorResearcher, fraunhoferClass1SOA)},
{credential(AliceMilburk, seniorResearcher, fraunhoferClass1SOA)},
{credential(AliceMilburk, boardOfDirectors, fraunhoferClass1SOA)}
Then, using the role minimality criterion, the algorithm returns the need for {credential(AliceMilburk, juniorResearcher, fraunhoferClass1SOA)}. In the next interaction, since Alice is a senior researcher, she declines to present the requested credential by returning the same query but with no entry for presented credentials (Cp = ∅). So, the algorithm updates the user's profile, marking the requested credential credential(AliceMilburk, juniorResearcher, fraunhoferClass1SOA) as declined. The difference comes when the algorithm recomputes the disclosable credentials as all disclosable credentials from the last interaction minus the newly declined one. Next, abduction computes the following sets of missing credentials that satisfy the request:

{credential(AliceMilburk, seniorResearcher, fraunhoferClass1SOA)},
{credential(AliceMilburk, boardOfDirectors, fraunhoferClass1SOA)}
According to the role minimality criterion, the algorithm returns the need for the credential {credential(AliceMilburk, seniorResearcher, fraunhoferClass1SOA)}. In the next interaction, Alice presents a certificate attesting that she is a senior researcher and the algorithm grants the requested service.

Remark 1 Using declined credentials is essential to avoid loops in the process and to guarantee successful interactions in the presence of disjunctive information.
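Summarizing the bookkeeping of Example 5, the session sets evolve roughly as follows across the three interactions (credentials abbreviated by their attribute names):

    Interaction 1: Cp = {authNet(...), employee};   CP = {authNet(...), employee}, CN = ∅          → ask(juniorResearcher)
    Interaction 2: Cp = ∅;                          CP unchanged, CN = {juniorResearcher}          → ask(seniorResearcher)
    Interaction 3: Cp = {seniorResearcher};         CP = CP ∪ {seniorResearcher}, CN unchanged     → grant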
Technical Guarantees

In the following we summarize the technical results that the access control algorithm provides. We refer the reader to Koshutanski (2005) for full details on the theoretical framework. The basic guarantees that the interactive framework provides are the following:
Figure 8. Interactive access control algorithm

Input: r, Cp
Output: grant/deny/ask(CM)

iAccessControl(r, Cp){
1: CP = CP ∪ Cp;
2: CN = CN ∪ (CM \ Cp), where CM is from the last interaction;
3: result = iAccessDecision(r, PA, PD, CP, CN);
4: return result;
}

iAccessDecision(r, PA, PD, CP, CN){
1. check whether r is a security consequence of PA and CP, namely
   - PA ∪ CP |= r, and
   - PA ∪ CP |≠ ⊥.
2. if the check succeeds then return grant
   else
   (a) compute the set of disclosable credentials CD as
       CD = {c | c credential that PD ∪ CP |= c} \ (CN ∪ CP),
   (b) use abduction to find a set of missing credentials CM (⊆ CD) such that:
       - PA ∪ CP ∪ CM |= r, and
       - PA ∪ CP ∪ CM |≠ ⊥.
   (c) if no such set exists then return deny
   (d) else return ask(CM).
}
• Termination: The interactive access control algorithm always terminates; that is, in a finite number of interactions either grant or deny is returned by the algorithm (making it resistant against DoS attacks).
• Correctness: If a client gets grant for a service then he has a solution for the service; that is, the algorithm does not grant access to unauthorized clients.
• Completeness: If a client has a solution for a service request then the algorithm will grant him access.
The most important thing, and also the most difficult, is to model and prove that a client who has the right set of credentials and who is willing to send them to the server will not be left stranded in our autonomic network and will get grant. First we need to define what a reasonable client, for which our framework aims to provide these guarantees, would be.

Definition 6 (Powerful Client) A powerful client is a client that, whenever he receives a request for missing credentials, returns all of them.

Definition 7 (Cooperative Client) A cooperative client is a client that, whenever he receives a request for missing credentials, returns those of them that he has in his possession.

Defining the notion of good clients with respect to the interactive algorithm is still not enough to state the practical relevance of the access control model. We also need to introduce a notion of fairness reflecting the access and disclosure control policies. We define the following two properties:

Definition 8 (Fair Access) The fair access property guarantees that whenever there is a request for a service there exists a solution in the access control policy which unlocks (grants) the service.
In other words, for each resource protected by the access policy there should exist a set of credentials (a solution) that grants the resource according to the policy. The fair access property rules out cases where the policy specifies a solution for a service but the solution itself makes the policy state inconsistent, so that even a client with the right set of credentials for the service cannot get it.

Definition 9 (Fair Interaction) The fair interaction property guarantees that if a solution for a service request exists (according to the access policy) then this solution is disclosable by the disclosure control policy.

In other words, any solution for a service should be potentially disclosable to a client requesting the service. In an autonomic scenario, where a service is potentially accessible by any client, the fair interaction property would disclose a solution for a service to potentially any client requesting it. So, on the one hand, we want to be fair and disclose solutions to clients but, on the other hand, we want to protect and restrict the disclosure of information to selected clients only (not to anybody). To approach this problem we introduce the notion of hidden credentials. Informally speaking, a credential is hidden if an access control system needs it for taking an access decision but does not disclose the need for it to anybody. Thus, an autonomic server can dynamically protect the privacy of its policies by specifying which credentials are hidden and which are not. This allows a server to restrict access to certain services to selected clients only. Now we can define a client with hidden credentials.

Definition 10 (Client with Hidden Credentials for a Service) A client with hidden credentials for a service is any client that has the hidden credentials for that service in his possession and knows that they are to be pushed to the server initially.
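As a minimal sketch of the idea (the rule, attribute and issuer names below are hypothetical and not part of the Planet-Lab policies), a server keeps a credential hidden simply by referring to it in the access policy while omitting it from the disclosure policy:

    in PA:  grant(r) ← credential(Holder, auditor, corpSOA), credential(Holder, partner, corpSOA).
    in PD:  credential(Holder, auditor, corpSOA).

The need for the auditor credential is disclosable to any client, whereas the need for the partner credential is never disclosed; only a client who already knows it is required, and pushes it with the initial request, can obtain r. This is the same behavior as credential CA4 in Example 6 later in the chapter.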
Now, we have to redefine the fair interaction property with respect to hidden credentials.

Definition 11 (Fair Interaction with Hidden Credentials) If a solution for a service exists and there are hidden credentials for that solution, then all credentials from the solution set which are not hidden must be disclosable by the disclosure policy and the set of hidden credentials.

So far, we have introduced all we need to formulate the main guarantee showing the practical relevance of the access control framework. Completeness for a cooperative client: If the access and disclosure control policies guarantee fair access and fair interaction, respectively, and a cooperative client has a solution for a service request, then he will get grant with the interactive access control algorithm. We can postulate the same claim for a cooperative client with hidden credentials for a resource.

IMPLEMENTING THE ACCESS CONTROL FRAMEWORK

This section emphasizes the practical relevance of the access control framework and, particularly, how the access control model can be of practical use. There are two main points relevant to the implementation of the framework. The first one is how to cope with the implementation of the interactive access control algorithm, and the second one is how to integrate the logical model with the current security standards widely adopted by IT companies. For the first point, we use a logic-based reasoning system called DLV (see http://www.dlvsystem.com) and show, in particular, how to employ DLV in order to perform the basic computations of abduction and deduction. For the second one, we show how to integrate the logical model with the X.509 certificate framework and the OASIS SAML standard.
Figure 9. Implementation of the basic functionalities of deduction and abduction

iAccessDecision(r, PA, PD, CP, CN){
1: if doDeduction(r, PA ∪ CP) then return grant
2: else
3:   CD = {c | PD ∪ CP |= c} \ (CN ∪ CP);
4:   result = doAbduction(r, CD, PA ∪ CP);
5:   if result == ⊥ then return deny
6:   else return ask(result);
}

doDeduction(R: Query, P: LogProgram){   check for P |= R?
1: run DLV in deduction mode with input: P, R?;
2: check output: if R is deducible then return true else return false;
}

doAbduction(R: Observation, H: Hypotheses, P: LogProgram){
1: run DLV in abduction diagnosis mode with input: R, H, P;
2: DLV output: all sets Ci such that (i) Ci ⊆ H, (ii) P ∪ Ci |= R, (iii) P ∪ Ci |≠ ⊥;
3: if no Ci exists then return ⊥
4: else select a minimal Cmin and return Cmin;
}
Integration with the Automated Reasoning Tool DLV

For the implementation of the interactive access control algorithm we use the DLV system (a disjunctive datalog system with negation and constraints) as the core engine for the basic functionalities of deduction and abduction. The disjunctive datalog front end (the default one) is used for deductive computations, while the diagnosis front end is used for abductive computations. Figure 9 shows the implementation using the DLV system. The input of the function iAccessDecision is the requested service r, the policy for access control PA, the policy for disclosure control PD, the set of active credentials CP and the set of declined credentials CN. Step 1 uses DLV's deductive front end. It specifies as input the service request r, marked as a query over the models (r?), computed on PA ∪ CP. The output of this step are the models in which r is true. If there exists a model in Step 1 that satisfies r then the system returns grant (Step 1). If no model for r exists then we use DLV's deductive front end with input PD ∪ CP (Step 3). In this case, DLV computes all credentials disclosable from PD ∪ CP. Then from the computed set we remove all credentials that belong to CN and CP.
Once the disclosable credentials are computed, in Step 4 we use the abductive diagnosis front end with the following input: the requested service r, stored in a temporary file with extension .obs (observations); the just computed set of disclosable credentials CD, stored in a temporary file with extension .hyp (hypotheses, also called abducibles); and, as the third argument, the access policy together with the active credentials PA ∪ CP. The two input files (.hyp and .obs) have a particular meaning for the DLV system in the abductive mode. The output of that computation are all possible subsets of the hypotheses that satisfy the observations. In that way we find all possible missing sets of credentials satisfying r. Then we filter them according to some minimality criteria and select the minimal set among them. The choice of automated reasoning tool is up to the implementer; any other tool that supports the basic reasoning services can be used (see, for example, www.tcs.hut.fi/spyware/smodels).
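As an illustration of this step, and assuming the Planet-Lab formalization of Figure 7 together with Alice's active credentials from Example 5, the temporary input handed to DLV could look roughly as follows. The file names are arbitrary, constants are written in lower case following the convention above, only the hypotheses relevant to the request are shown, the arrow ← is written as :- in concrete DLV syntax, and the exact command-line options of the diagnosis front end should be taken from the DLV documentation:

    request.obs:
        grant(configure).
    disclosable.hyp:
        credential(aliceMilburk, juniorResearcher, fraunhoferClass1SOA).
        credential(aliceMilburk, seniorResearcher, fraunhoferClass1SOA).
        credential(aliceMilburk, boardOfDirectors, fraunhoferClass1SOA).
    policy.dl (PA together with CP):
        the access policy rules (1)-(18) of Figure 7, plus the facts
        authNet(198.162.193.46, fokus.fraunhofer.de).
        credential(aliceMilburk, employee, fraunhoferClass1SOA).

The diagnosis front end then returns the subsets of the .hyp atoms that, added to the program, entail the observation without violating any constraint; the minimal ones correspond to the three missing sets listed in Example 5.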
Integration with X.509 and SAML Standards

The framework described so far processes credentials at a high (abstract) level: it defines what can be inferred and what is missing from the partner's
Figure 10. X.509 identity and attribute certificates structure
access policy and the user's set of credentials. There is a need for a suitable certificate infrastructure for describing participants' identities and access rights. A good choice is the widely adopted certificate standard X.509 (X.509, 2001). There are two certificate types considered by the standard: identity and attribute certificates. Figure 10 shows the structures of the two certificates. An X.509 identity certificate is used to identify entities in a network. The main fields of the certificate's structure are the subject information, the public key identifying the subject (corresponding to the subject's private key), the issuer information and the digital signature on the document, signed by the issuer (with its private key). An X.509 attribute certificate has the same structure as the identity certificate, with the difference that instead of a public key field there is a field listing attributes, and the Subject field is called Holder (of the attributes). At the message level, one can adopt the OASIS SAML standard (SAML, 2004) to obtain standard semantics for authorization statements among participants in an autonomic network. SAML offers a standard way of exchanging authentication and authorization information between on-line partners. The basic SAML data objects are assertions. Assertions contain information that determines whether users can be authenticated or authorized to use resources. The SAML framework also defines a protocol for requesting assertions and responding to them, which makes it suitable for modeling interactive communications between entities in a distributed environment. We list below the SAML request/response protocol and how we employ it in the interactive access control framework.

• SAML Request: Use the authorization decision query statement for expressing access decision requests. Specify the resource and action in the respective standard fields of the access statement. Once an access decision is taken, use the SAML response part.
• SAML Response: Use the authorization decision statement.
  ° Permit / Deny: When an explicit grant/deny is returned by the iAccessControl protocol.
  ° Indeterminate: When ask(CM) is returned. In this case, list the missing credentials in the standard SAML attribute fields, for example:
    <attribute name="MISSING_CREDENTIAL">Employee ID</attribute>
    <attribute name="MISSING_CREDENTIAL">Full Professor</attribute>

To make the access decision engine Web Services compatible we also adopted W3C SOAP (see http://www.w3.org/TR/soap) as the main transport layer protocol. SOAP is a lightweight protocol for exchanging structured information in a decentralized, distributed environment. It has an optional Header element and a required Body element. Informally, in the body we specify the information directly associated with the service request, and in the header additional information that should be considered by the end-point server. So, to request an access decision at the message level we have to:
  ° first, attach the X.509 certificates in the SOAP Header, using the WS-Security specification (WS-Security, 2006) for that, and
  ° then, place the SAML Request in the SOAP Body, thus making it an input to the decision engine being invoked.
With the needed technologies at hand, the next section describes how the just introduced
standards and protocols can be integrated into one architecture.
System Architecture

Figure 11 shows the architecture of a prototype that has been developed, called iAccess. The bottom-most layer in the figure comprises the integration of the prototype with the Tomcat application server (see http://tomcat.apache.org). We perform all requests over an SSL connection, thus ensuring message confidentiality and integrity at the transport layer. Once an access request is received by the Tomcat server, it invokes the iAccess engine for an access decision. As shown in the figure, the engine first parses the SOAP envelope, which contains the body and the header elements.
Figure 11. iAccess architecture
Then, it extracts the X.509 identity and attribute certificates (see the X.509 technology provider at http://www.bouncycastle.org) and the SAML request protocol (see the SAML technology provider at http://www.opensaml.org). Next, the engine performs validation and verification of the certificates: first for expiration dates and second for trustworthiness. The latter is performed against local databases listing the trusted identity issuers and, respectively, the trusted attribute issuers (their public keys). The two databases are domain specific.

Remark 2 We point out that the check for trusted CAs and SOAs serves to filter out those certificates that are issued by unknown (distrusted) certificate authorities. The fine-grained verification of trusted attributes and identities is performed at the logical level and according to the access policy.
Once the certificates are validated and verified, iAccess invokes an ontology conversion module that maps the global certificate information to a local, provider-specific representation. The same mapping is also performed for the SAML request protocol information. The ontology module semantically transforms, global-to-local and local-to-global, the following information:

• Certificate attributes
• Certificate issuers
• Resource names (service requests)
• Service actions
The conversion module transforms the certificates' information and the SAML request into predicates suitable for the logical model, as described below.

• Identity certificates are transformed into certificate(subject, Issuer: i) predicates,
• Attribute certificates are transformed into credential(holder, Attr: a, Issuer: i) predicates,
• A SAML access request is transformed into grant(Resource: r, Action: p).

These transformations keep access control management at the logical level, because at this level there is a local (domain-specific) syntax for the representation of the above items. After the transformation is performed, iAccess invokes the iAccessControl module for an access decision. Once an access decision is taken (returned by the iAccessControl protocol), iAccess maps the information (grant, deny or additional credentials) back to its global representation and then generates the respective SAML Response protocol. After that, iAccess places a time-stamp for the validity period on the access decision statement and then digitally signs it to ensure integrity of the
information. Next, the Tomcat server returns the SAML decision to the entity that requested it.
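As a small illustration of the conversion step (the attribute and issuer names are taken from the Planet-Lab example, while the resource name submissionService is hypothetical and the concrete mapping is domain specific), the ontology module would produce predicates such as:

    X.509 attribute certificate (Holder: AliceMilburk, Attribute: seniorResearcher, Issuer: fraunhoferClass1SOA)
        → credential(aliceMilburk, seniorResearcher, fraunhoferClass1SOA)
    SAML authorization decision query (Resource: submissionService, Action: configure)
        → grant(submissionService, configure)

The reverse, local-to-global mapping is applied to the outcome, for example turning a computed missing credential back into the attribute and issuer names used in the SAML response.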
TRUST NEGOTIATION

In an autonomic network scenario servers must have a way to find out what credentials are required for clients to get access to resources. Clients, once asked for missing credentials, may be unwilling to disclose them unless the server discloses some of its credentials first; that is, they negotiate the need for sensitive credentials. If we merge the two frameworks we have the following open problems:

1. Alice wants to access some service of Bob.
2. Alice does not know exactly what credentials Bob needs, so
   (a) Bob must compute what is missing and ask Alice,
   (b) Alice must send to Bob all the credentials he requested.
3. In response to 2b, Alice may want to have some credentials from Bob before sending hers, so
   (a) she must tell Bob what he needs to provide,
   (b) Bob must have a policy to decide how access to his credentials is granted.
4. In response to 2a, Bob may not want to disclose all that is missing at once but may want to ask Alice first for some of the less sensitive credentials, so
   (a) Bob must have a way to request the missing credentials in a stepwise fashion.
To combine automated trust negotiation and interactive access control we assume that both clients and servers have the following three logical security policies:
1. PAR: a policy for access to its own resources on the basis of foreign credentials,
2. PAC: a policy for access to its own credentials on the basis of foreign credentials,
3. PD: a policy for disclosing the need for (missing) foreign credentials.
Technically speaking, we could merge 1 and 2 into a flat policy for protecting sensitive resources, as in (Yu & Winslett, 2003; Yu, Winslett, & Seamons, 2003). However, the structured approach is better because the criteria behind each policy, and likely its administrator, are different. Resource access is decided by the business logic, whereas credential access is driven by security and privacy considerations. For example, the negotiation of a sensitive credential may require the activation of credentials that are not considered by the business logic for the actual access control process, and these may even be inconsistent with the business logic rules. Thus, by forcing a separation between the policies PAR and PAC, we free the access policy PAR to be arbitrarily complex, with almost everything that is on the (Datalog) access control market (say, negation as failure, constraints on separation of
duties, or other constructs such as those of Li and Mitchell (2003)). The policy for access to own credentials PAC, in contrast, we restrict to be monotonic because of its particular nature: once the need for a credential is disclosed (granted), it is disclosed! A credential needed to access a resource, on the other hand, may come and go due to separation-of-duty or other access control constraints.
The Negotiation Protocol

This section shows how one can bootstrap from these simple security policies a comprehensive negotiation protocol that establishes proper trust relationships via a bilateral exchange of credentials. We introduce a new notation, O, indicating a set of own credentials with respect to a negotiation opponent. Now, let us recall the interactive access control protocol with the following modification: instead of returning the set of missing credentials CM, we transform it into a sequence of single requests, each requesting a foreign credential from the missing set. Figure 12 shows the new version of the protocol.
Figure 12. The core of the negotiation protocol

Session vars: CP and CN. Initially CP = CN = ∅.

iAccessNegotiation(r, Cp){
1: CP = CP ∪ Cp;
2: repeat
3:   result = iAccessDecision(r, PA, PD, CP, CN);
4:   if result == ask(CM) then
5:     for each c ∈ CM do
6:       response = invoke iAccessNegotiation(c, ∅)@Opponent;
7:       if response == grant then
8:         CP = CP ∪ {c};
9:       else
10:        CN = CN ∪ {c};
11:    done
12:  fi
13: until result == grant or result == deny.
14: return result;
}
We extend the protocol to work on both client and server sides so that they automatically request missing credentials from each other. Step 1 of the protocol updates the set of active (foreign) credentials with those presented at the time of the request. Those presented credentials are typically pushed by the opponent when it initially requests a service. After the initial update we enter a loop in which the iAccessDecision algorithm is run for an access decision. The purpose of the loop is to keep asking the opponent for new solutions (missing credentials) until a final decision of grant or deny is taken. The technicality of the protocol is in Step 6, where we represent the request for a missing credential as a remote invocation of the iAccessNegotiation protocol on the opponent's side. In this way, the new protocol has the same functionality as the iAccessControl protocol. Step 6 invokes the iAccessNegotiation protocol with an empty set of presented own credentials.
One can slightly modify the protocol by introducing a function PushedCredentials(c) that decides what own credentials an opponent has to present (Opush) when requesting a foreign credential c. To approach bilateral negotiation we first have to take into account the following two issues:

• Each request for a credential spawns a new negotiation thread that negotiates access to this credential.
• During a negotiation process, parties may start to request credentials from each other that are already under negotiation. So, the notion of suspended credential requests must be taken into account.

Figure 13 shows the updated version of the iAccessNegotiation protocol. In its new version, whenever a request arrives it is run in a new thread that shares the same session variables CP,
Figure 13. The negotiation protocol with suspended credentials

Session vars: CP, CN and Oneg. Initially CP = CN = Oneg = ∅.

iAccessNegotiation(r, Cp){   runs in a new thread
1: CP = CP ∪ Cp;
2: if r ∈ Oneg then
3:   suspend and await the result of r's negotiation;
4:   return result when resumed;
5: else
6:   Oneg = Oneg ∪ {r};
7:   repeat
8:     result = iAccessDecision(r, PA, PD, CP, CN);
9:     if result == ask(CM) then
10:      for each c ∈ CM do
11:        response = invoke iAccessNegotiation(c, ∅)@Opponent;
12:        if response == grant then
13:          CP = CP ∪ {c};
14:        else
15:          CN = CN ∪ {c};
16:      done
17:    fi
18:  until result == grant or result == deny.
19:  Oneg = Oneg \ {r};
20:  resume all processes awaiting on r with the result of the negotiation;
21:  return result;
22: fi
}
CN and Oneg with the other threads running under the same negotiation process. The set Oneg keeps track of the opponent's own credentials that have been requested and are still under negotiation. Now, if a request is received for a credential which is already under negotiation, the protocol suspends the new thread until the respective negotiation thread finishes (Step 3). Then, when
the original thread returns an access decision the protocol resumes all threads awaiting on the requested credential and informs them of the final decision (Step 20). Figure 14 shows the full-fledged negotiation protocol. The iAccessDispatcher module manages the negotiation session information. Its role is to
Figure 14. The negotiation protocol

Session vars: CP, CN and Oneg. Initially CP = CN = Oneg = ∅.

iAccessDispatcher{
OnReceiveRequest: iAccessNegotiation(r, Cp)
1: if isService(r) then
2:   reply response = iAccessNegotiation(r, Cp); in a new negotiation session process.
3: else
4:   reply response = iAccessNegotiation(r, Cp); in a new thread under the original negotiation session.
OnSendRequest:
1: if isService(r) then
2:   result = invoke iAccessNegotiation(r, Op)@Opponent; in a new negotiation session process.
}

iAccessNegotiation(r, Cp){
1: CP = CP ∪ Cp;
2: if r ∈ Oneg then
3:   suspend and await the result of r's negotiation;
4:   return result when resumed;
5: else
6:   Oneg = Oneg ∪ {r};
7:   repeat
8:     if isService(r) then
9:       result = iAccessDecision(r, PAR, PD, CP, CN);
10:    else
11:      result = iAccessDecision(r, PAC, PD, CP, CN);
12:    if result == ask(CM) then
13:      AskCredentials(CM);
14:  until result == grant or result == deny.
15:  Oneg = Oneg \ {r};
16:  resume all processes awaiting on r with the result of the negotiation;
17:  return result;
18: fi
}

AskCredentials(CM){
1: parfor each c ∈ CM do
2:   response = invoke iAccessNegotiation(c, ∅)@Opponent;
3:   if response == grant then
4:     CP = CP ∪ {c};
5:   else
6:     CN = CN ∪ {c};
7: done
8: await until all responses are received (await until CM ⊆ CP ∪ CN);
}
dispatch (assign) to each request/response the right negotiation process information. It works in the following way. Whenever a request for a service is received, the dispatcher runs iAccessNegotiation in a new session process and initializes CP, CN and Oneg to the empty set (Step 2). Then each counter-request for a credential is run in a new thread under the same negotiation process (Step 4). On the other hand, whenever an entity requests a service r on the opponent's side, initially presenting some of its own credentials Op, the iAccessDispatcher module invokes iAccessNegotiation (on the opponent's side) and creates a new session process, so that any counter-request from the opponent is run in a new thread under the new negotiation process. The intuition behind the negotiation protocol is the following:

1. A client, Alice, sends a service request r and (optionally) a set of own credentials Op to a server, Bob.
2. Bob's iAccessDispatcher receives the request and runs iAccessNegotiation(r, Cp) in a new process. Here Cp = Op with respect to Bob.
3. Once the protocol is initiated, it updates the overall set of presented foreign credentials with the newly presented ones and checks whether the request should be suspended or not (Steps 1 and 2).
4. If not suspended, Bob looks at r and, if it is a request for a service, he calls iAccessDecision with his policy for access to resources PAR, his policy for disclosure of foreign credentials PD, the set of presented foreign credentials CP and the set of declined foreign credentials CN (Step 9).
5. If r is a request for a credential, then Bob calls iAccessDecision with his policy for access to own credentials PAC, his policy for disclosure of foreign credentials PD, the set of presented foreign credentials CP and the set of declined foreign credentials CN (Step 11).
6. If a set of missing foreign credentials CM is computed, Bob transforms it into requests for credentials and waits until he receives all responses. At this point Bob acts as a client, requesting from Alice the set of credentials CM. Alice runs the same protocol with swapped roles.
7. When Bob receives all responses, he restarts the loop and consults the iAccessDecision algorithm for a new decision.
8. When a final decision of grant or deny is taken, the respective response is returned to Alice.
The technicality in the protocol is in the way the server requests missing credentials back from the client. As indicated in the figure, we use the keyword parfor to indicate that the body of the loop is run each time in a parallel thread. Thus, each missing credential is requested independently of the requests for the others. At that point of the protocol, it is important that each finished thread updates the presented and declined sets of foreign credentials properly, without interfering with the other threads. We note that after a certain session time expires, each credential request that is still awaiting an answer is marked as declined. Another important point is to clarify the way we treat declined and not yet released credentials. In a negotiation process, a credential is declined when an entity is asked for it and replies to that request with the answer deny. In the second case, when the entity is asked for a credential and, instead of a reply, there is a counter-request for more credentials, then the thread that started the original request awaits an explicit reply from the client and treats the requested credential as not yet released. In any case, at the end of a negotiation process a client either supplies the originally asked credential or declines it.
Example 6 Figure 15 shows an example of Alice's and Bob's interactions using the negotiation protocol on both sides. The policies for access to resources and for access to sensitive credentials use notation similar to that of Yu, Winslett and Seamons (2003), where Alice's local credentials are marked with the subscript "A" and Bob's with "B", respectively. Bob's access policy PAR says that access to resource r1 is granted if {CA1, CA2} are presented by Alice. To get access to r2 Alice should either present {CA1, CA3} or satisfy the requirements for access to r1 and present CA4. To get access to r5 Alice should either satisfy the requirements for r2 and present CA7 or satisfy the requirements for r1 and present CA6. Bob's disclosure policy discloses the need for credentials CA1, CA2, CA3, CA5, CA6, and CA7 to potentially any client. In contrast, the need for credential CA4 is never disclosed but is expected when r2 is requested; it is an example of a hidden credential that must be pushed. Analogously, Bob's PAC says: to grant access to Bob's CB1 Alice must present CA5, and to grant access to Bob's CB2 Alice must present CA7.
The negotiation scenario proceeds as follows. Alice requests r5 from Bob, presenting an empty set of initial credentials. Alice's TN Dispatcher detects the request and creates a new session process awaiting Bob's reply. Next, Bob runs the interactive algorithm on his PAR. The outcome of the algorithm is the set of missing credentials {CA1, CA3, CA7} (computed as the minimal one). Then, Bob transforms the missing credentials into single requests and asks Alice for them. Alice's TN Dispatcher receives the requests and runs them in three new threads, one for each of them. Next, Alice runs the interactive access control algorithm on her PAC for each of the requests and returns grant for CA1, deny for CA3 and a counter-request for Bob's CB2. Bob replies to the request for CB2 with a counter-request for Alice's CA7. Since CA7 has already been requested by Bob, Alice now suspends the new request and awaits the original one to finish its negotiation. If we look again at the sequence of requests we recognize that the original thread depends on the outcome of the suspended one, and we come to a recursive loop (interlock).
Figure 15. Example of interoperability of the negotiation protocol
Since Alice's suspended thread has a session timeout, after it expires Alice returns to Bob the decision deny. At this point Alice can choose (automatically) to extend her session time to allow the negotiation to continue and eventually finish successfully. Next, Bob recalls his interactive access control algorithm for a new decision on r5. The next set of missing credentials is {CA2, CA6}, which Bob transforms into single requests. The rest of the scenario follows analogously. After Alice and Bob successfully negotiate Bob's requests for missing credentials, Bob grants access to the service request r5. However, we have not yet solved the problem of stepwise disclosure of missing foreign credentials. The intuition here is that Bob may not want to disclose the missing foreign credentials to Alice all at once; instead, he may want to ask Alice first for some of the less sensitive credentials, assuring himself that Alice is trustworthy enough to be told about her other, more sensitive, missing credentials, and so on until all the missing ones are disclosed. We point out that the stepwise approach may require a client to provide credentials that are not directly related to a specific resource but are needed for fine-grained disclosure control. To address this issue we extend the negotiation protocol with an algorithm for stepwise disclosure of missing credentials. The basic intuition
is that the logical policy structure itself tells us which credentials must be disclosed in order to obtain the information that other credentials are missing. So, we simply need to extract this information automatically. We perform a step-by-step evaluation of the policy structure. For that purpose we use a one-step deduction over the disclosure policy PD to determine the next set of potentially disclosable credentials. We refer the reader to Koshutanski and Massacci (2007) for details on the stepwise algorithm.
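As a rough sketch of how such a stepwise evaluation would unfold on the disclosure policy of Figure 7 (the step numbering below is only illustrative, not part of the formal algorithm):

    Step 1 (one-step deduction over PD, nothing presented yet): only the facts of rules (1)-(7) are disclosable,
        i.e., the authorization domains and the need for a memberPlanetLab, employee or researcher credential.
    Step 2 (one further step): rules (8) and (9) fire on the credentials disclosed or presented in the previous
        step, additionally disclosing the need for role-position credentials higher than employee and researcher.

In this way the structure of PD itself dictates the order in which the need for the more sensitive credentials is revealed.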
Implementing the Trust Negotiation Framework

Figure 16 shows the architecture of the trust negotiation framework. The JBOSS application server (see http://www.jboss.org/products/jbossas) uses TCP/IP sockets to send and receive information. The functionality of the server has been extended with the ability to transform high-level credential/service requests, understandable by the TN Dispatcher, into low-level raw data requests suitable for transmission over TCP/IP connections. Whenever the TN Dispatcher is initially run, it internally starts the JBOSS application server. The TN Dispatcher then resides active in memory, awaiting new requests. Once the JBOSS server receives a request, it transforms the request
Figure 16. The architecture of the negotiation framework
from raw data to a high-level representation and automatically redirects it to the dispatcher. For each received request the TN Dispatcher analyzes the session data in the request against its local database and acts as follows. If no session data is specified in the request, the dispatcher generates new session information (new session data sets, see Figure 14) and runs the negotiation protocol with the new session info. If a session ID exists in the request and it correctly maps to the corresponding one in the dispatcher's local database, the dispatcher runs the negotiation protocol under the existing session. We note that any negotiation protocol instance is always run in a new parallel thread that internally updates the session information. The trust negotiation protocol uses the JBOSS server methods to send and receive requests.
CONCLUSION

In this chapter we presented a framework for policy-based access control for autonomic communications. The framework is grounded in a formal model with stable model semantics. The key idea is that in an autonomic network a client may have the right credentials but may not know it, and thus an autonomic server needs a way to interact and negotiate with the client the missing credentials that grant access. We have proposed a solution to this problem by extending classical access control models with an advanced reasoning service: abduction. Building on top of this service, we have presented the key interactive access control algorithm that, in case a service request fails, computes on-the-fly the missing credentials that entail the request. We have also introduced the notions of disclosable and hidden credentials. The distinction allows servers to dynamically protect the privacy of their policies by specifying which credentials are hidden and which are not, and notifying selected clients about them.
We have identified the interactive access control model as a way of protecting the security interests, with respect to disclosure of information and access control, of both server and client sides. We have proposed a protocol for leveraging trust negotiation between two entities involved in an autonomic communication. The protocol communicates and negotiates the missing credentials until enough trust is established and the service is granted, or the negotiation fails and the process is terminated. The protocol is run on both client and server sides so that they understand each other and automatically interoperate until a desired solution is reached or denied. One of the advantages of the approach is that we do not pose any restrictions on the partners' policies, because the basic computations of deduction and abduction performed on the policies do not require any specific policy structure. We have also presented an implementation of the framework using the X.509 and SAML standards.
Open Problems and Future Work

Future work lies in the direction of characterizing the complexity of the framework. Proving which guarantees the protocol can offer in terms of interoperability, completeness and correctness when applied to a practical policy language is still an open problem and will be a subject of future research. In the direction of mutual negotiation, future work is to explore the interoperability of the negotiation framework with the TrustBuilder prototype (Yu, Winslett, & Seamons, 2003). We believe that this is an important step toward building a secure open computing environment.
ACKNOWLEDGMENT

This work was partly supported by the projects: 2003-S116-00018 PAT-MOSTRO, 016004 IST-FP6-FET-IP-SENSORIA, 27587 IST-FP6-
IP-SERENITY, 038978 EU-MarieCurie-EIFiAccess, 034744 EU-INFSO-IST ONE, 034824 EU-INFSO-IST OPAALS.
REFERENCES

Apt, K. (1990). Logic programming. In J. van Leeuwen (Ed.), Handbook of theoretical computer science. Elsevier.
Atluri, V., Chun, S. A., & Mazzoleni, P. (2001). A Chinese wall security model for decentralized workflow systems. In Proceedings of the Eighth ACM Conference on Computer and Communications Security (pp. 48-57).
Bertino, E., Catania, B., Ferrari, E., & Perlasca, P. (2001). A logical framework for reasoning about access control models. In Proceedings of the Sixth ACM Symposium on Access Control Models and Technologies (SACMAT) (pp. 41-52).
Bertino, E., Ferrari, E., & Atluri, V. (1999). The specification and enforcement of authorization constraints in workflow management systems. ACM Transactions on Information and System Security (TISSEC), 2(1), 65-104.
Bonatti, P., & Samarati, P. (2002). A unified framework for regulating access and information release on the Web. Journal of Computer Security, 10(3), 241-272.
Damianou, N., Dulay, N., Lupu, E., & Sloman, M. (2001). The Ponder policy specification language. In Proceedings of the International Workshop on Policies for Distributed Systems and Networks (POLICY) (pp. 18-38).
De Capitani di Vimercati, S., & Samarati, P. (2001). Access control: Policies, models, and mechanisms. In R. Focardi & R. Gorrieri (Eds.), Foundations of security analysis and design - Tutorial lectures (Vol. 2171 of LNCS). Springer-Verlag.
Ellison, C., Frantz, B., Lampson, B., Rivest, R., Thomas, B. M., & Ylonen, T. (1999, September). SPKI certificate theory. IETF RFC 2693.
Gelfond, M., & Lifschitz, V. (1988). The stable model semantics for logic programming. In R. Kowalski & K. Bowen (Eds.), Proceedings of the Fifth International Conference on Logic Programming (ICLP'88) (pp. 1070-1080).
Georgakopoulos, D., Hornick, M. F., & Sheth, A. P. (1995, April). An overview of workflow management: From process modeling to workflow automation infrastructure. Distributed and Parallel Databases, 3(2), 119-153.
Kang, M. H., Park, J. S., & Froscher, J. N. (2001). Access control mechanisms for interorganizational workflow. In Proceedings of the Sixth ACM Symposium on Access Control Models and Technologies (pp. 66-74).
Koshutanski, H. (2005). Interactive access control for autonomic systems. Unpublished doctoral dissertation, University of Trento, Italy.
Koshutanski, H., & Massacci, F. (2007). A negotiation scheme for access rights establishment in autonomic communication. Journal of Network and System Management (JNSM), 15(1), 117-136.
Li, J., Li, N., & Winsborough, W. H. (2005). Automated trust negotiation using cryptographic credentials. In Proceedings of the 12th ACM Conference on Computer and Communications Security (pp. 46-57).
Li, N., Grosof, B. N., & Feigenbaum, J. (2003). Delegation logic: A logic-based approach to distributed authorization. ACM Transactions on Information and System Security (TISSEC), 6(1), 128-171.
Li, N., & Mitchell, J. C. (2003). RT: A role-based trust-management framework. In Proceedings
of the Third DARPA Information Survivability Conference and Exposition (DISCEX III) (pp. 201-212).
Li, N., Mitchell, J. C., & Winsborough, W. H. (2002). Design of a role-based trust management framework. In Proceedings of the IEEE Symposium on Security and Privacy (S&P) (pp. 114-130).
Lymberopoulos, L., Lupu, E., & Sloman, M. (2003). An adaptive policy based framework for network services management. Journal of Network and Systems Management, 11(3), 277-303.
Ruan, C., Varadharajan, V., & Zhang, Y. (2003). A logic model for temporal authorization delegation with negation. In C. Boyd & W. Mao (Eds.), Proceedings of the Sixth International Conference on Information Security (ISC) (LNCS 2851, pp. 310-324).
SAML. (2004). Security assertion markup language (SAML). Retrieved from http://www.oasis-open.org/committees/security
Sandhu, R., Coyne, E., Feinstein, H., & Youman, C. (1996). Role-based access control models. IEEE Computer, 29(2), 38-47.
Seamons, K., Winslett, M., & Yu, T. (2001). Limiting the disclosure of access control policies during automated trust negotiation. In Network and Distributed System Security Symposium, San Diego, CA.
Shanahan, M. (1989). Prediction is deduction but explanation is abduction. In Proceedings of IJCAI'89 (pp. 1055-1060). Morgan Kaufmann.
Sloman, M., & Lupu, E. (1999). Policy specification for programmable networks. In Proceedings of the First International Working Conference on Active Networks (pp. 73-84).
Smirnov, M. (2003). Rule-based systems security model. In Proceedings of the Second International Workshop on Mathematical Methods, Models, and Architectures for Computer Network Security (MMM-ACNS) (pp. 135-146).
SPKI. (1999). SPKI certificate theory. IETF RFC 2693. Retrieved from http://www.ietf.org/rfc/rfc2693.txt
Weeks, S. (2001). Understanding trust management systems. In Proceedings of the IEEE Symposium on Security and Privacy.
Winsborough, W., & Li, N. (2004). Safety in automated trust negotiation. In Proceedings of the IEEE Symposium on Security and Privacy (pp. 147-160).
WS-Security. (2006). Web services security (WS-Security). Retrieved from http://www.oasis-open.org/committees.wss
X.509. (2001). The directory: Public-key and attribute certificate frameworks. ITU-T Recommendation X.509:2000(E) | ISO/IEC 9594-8:2001(E).
XACML. (2004). eXtensible Access Control Markup Language (XACML). Retrieved from http://www.oasis-open.org/committees/xacml
Yu, T., & Winslett, M. (2003). A unified scheme for resource protection in automated trust negotiation. In Proceedings of the IEEE Symposium on Security and Privacy (pp. 110-122).
Yu, T., Winslett, M., & Seamons, K. E. (2003). Supporting structured credentials and sensitive policies through interoperable strategies for automated trust negotiation. ACM Transactions on Information and System Security (TISSEC), 6(1), 1-42.
Chapter VIII
Delegation Services:
A Step Beyond Authorization
Isaac Agudo, University of Malaga, Spain
Javier Lopez, University of Malaga, Spain
Jose A. Montenegro, University of Malaga, Spain
Abstract
Advanced applications for the Internet need to make use of the authorization service so that users can prove what they are allowed to do and show their privileges to perform different tasks. However, for a truly scalable distributed authorization solution to work, the delegation service needs to be seriously considered. In this chapter, we first put into perspective the delegation implications, issues and concepts derived from authorization schemes proposed as solutions to the distributed authorization problem, indicating the delegation approaches that some of them take. Then, we analyze interesting federation solutions. Finally, we examine different formalisms specifically developed to support delegation services, focusing on a generalization of those approaches, the Weighted Delegation Graphs solution.
Introduction
Information and network security are related to the Internet more than ever before. As a consequence, the use of the Internet has produced changes in security software. A number of these changes focus on the way users are authenticated by Internet applications and on how their rights and privileges are managed. One of the most widely used, and most controversial, security services is Access Control. Lampson (2004)
defines access control as the composition of two services, authentication and authorization. But Internet applications require distributed solutions for the access control service; accordingly, authentication and authorization services need to be distributed, too. As is widely known, by using an authentication service, users can prove their identity. More formally, ITU-T (International Telecommunications Union-Telecommunication Standardization Sector) defines authentication as “the process of corroborating an identity. Authentication can be unilateral or mutual. Unilateral authentication provides assurance of the identity of only one principal. Mutual authentication provides assurance of the identities of both principals.” However, new applications, particularly in the area of e-commerce, need an authorization service that describes what the user is allowed to do and the privileges he or she holds to perform tasks. Additionally, according to ITU-T, authorization is “the granting of rights, which includes the granting of access based on access rights. This definition implies the rights to perform some activity (such as to access data); and that they have been granted to some process, entity, or human agent.” For instance, when a company needs to establish distinctions among its employees regarding privileges on resources, the authorization service becomes important. Different sets of privileges on resources (either hardware or software) will be assigned to different categories of employees. In those distributed applications where company resources must be partially shared over the Internet with other associated companies, providers, or clients, the authorization service becomes an essential part. Because authorization is not a new problem, different solutions have been used in the past. However, “traditional” authorization solutions are not very helpful for many Internet applications. In order to achieve a truly scalable distributed authorization solution, the delegation service
needs to be seriously considered. Again, ITU-T defines delegation as “conveyance of privilege from one entity that holds such privilege, to another entity.” Delegation is quite a complex concept, both from the theoretical and from the practical point of view. In this sense, the implementation of an appropriate delegation service has been one of the cornerstones of Internet applications for several years now. Because delegation is a concept derived from authorization, the second section aims to put into perspective the delegation implications, issues and concepts that are derived from a selected group of authorization schemes which have been proposed during recent years as solutions to the distributed authorization problem. In the third section, we analyze some of the most interesting federation solutions that have been developed by different consortia or companies, representing both educational and enterprise points of view. The final section focuses on different formalisms that have been specifically developed to support delegation services and which can be integrated into a multiplicity of applications.
Delegation (Mis)Perceptions in Authorization-Based Schemes
As mentioned previously, in this section we analyze some of the most interesting authorization schemes proposed in the literature to date. Because of the many solutions available on this topic, we mainly focus on those that have been supported by international bodies or organizations, or that have special implications for commercial products in the information security market. In the different subsections, we review each of the solutions, explaining in some detail their operational foundations from the authorization perspective while at the same time analyzing the delegation perceptions, and
in some cases, misperceptions associated with those solutions.
PolicyMaker and KeyNote
Blaze, Feigenbaum, and Lacy introduced the notion of trust management in Blaze, Feigenbaum, Ioannidis, and Keromytis (1999). In that original work, they proposed the PolicyMaker scheme as a solution for trust management purposes. PolicyMaker is a general and powerful solution that allows the use of any programming language to encode the nature of the authority being granted as well as the entities to whom it is being granted. It addresses the authorization problem directly, without considering two different phases (one for authentication and another for access control). PolicyMaker encodes trust in assertions. They are represented as pairs (f, s), where s is the issuer of the statement and f is a program. Additionally, it introduces two different types of assertions: certificates and policies. The main difference between them is the value of the Source field. To be more precise, the value is a key for the first one (certificates), and a label for the second one (policies). It is important to note that, in PolicyMaker, negative credentials are not allowed. Therefore, trust is monotonic; that is, each policy statement or credential can only increase the capabilities granted by others. Moreover, trust is also transitive. This means that if Alice trusts Bob and Bob trusts Carol, then Alice trusts Carol. In other words, all authorizations are delegable. Indeed, delegation is implicit in PolicyMaker; thus, it is not possible to restrict delegation capabilities. This is the reason why delegation is uncontrolled in this scheme. KeyNote (Blaze, Feigenbaum, & Lacy, 1996) is a derivation of PolicyMaker, and has been supported by the IETF. It was proposed and designed to improve two main aspects of PolicyMaker: first, to achieve standardization and, second, to facilitate its integration into applications.
KeyNote uses a specific assertion language that is flexible enough to handle the security policies of different applications. Assertions delegate the authorization to perform operations to other principals. Like PolicyMaker, KeyNote considers two types of assertions, which are also called policies and credentials, respectively:
• Policies: This type of assertion does not need to be signed because policies are locally trusted. They do not contain the corresponding Issuer of PolicyMaker.
• Credentials: This type of assertion delegates authorization from the issuer of the credential, or Authorizer, to some subjects or Licensees (see later for details).
Assertions are valid or not depending on action attributes, which are attribute/value pairs such as resource == “database” or access == “read.”
KeyNote assertions are composed of five fields:
• Authorizer: If the assertion is a credential, then this field encodes the issuer of that credential. However, if the assertion is a policy, then this field contains the keyword POLICY.
• Licensees: It specifies the principal or principals to which the authority is delegated. It can be a single principal or a conjunction, disjunction or threshold of principals.
• Comment: It is a comment for the assertion.
• Conditions: It corresponds to the “program” concept of PolicyMaker, and consists of tests on action attributes. Logical operators are used in order to combine them.
• Signature: It is the signature of the assertion. This field is not necessary for policies, only for credentials. Further description of how KeyNote uses cryptographic keys and
signatures can be found in Blaze, Ioannidis, and Keromytis (2000). Figure 1 shows an example of an assertion. It states that an RSA key 12345678 authorizes the DSA keys abcd1234 and 1234abcd with read and write access in the database. Given a set of action attributes, an assertion graph is a directed graph whose vertices correspond to principals. An arc exists from principal A to principal B if an assertion exists where the Authorizer field corresponds to A, the Licensees field corresponds to B, and the predicate encoded in the Conditions field holds for the given set of action attributes. A principal is authorized, within a given set of action attributes, if the associated graph contains a path from a policy to the principal. We conclude, then, that all authorized principals are allowed to re-delegate their authorizations. Thus, there is no restriction on delegation. KeyNote has been used in several contexts, such as network-layer access control, distributed firewalls, Web access control, Grid computing, and so forth (Blaze, Ioannidis, & Keromytis, 2003), which gives some idea of its flexibility.
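To make the graph-based evaluation concrete, the following Python sketch (purely illustrative, not the actual KeyNote implementation) builds a delegation graph from simplified assertions and checks whether a path exists from POLICY to a requesting principal under a given set of action attributes. The assertion representation and the condition predicates are assumptions made for the example, which mirrors the assertion of Figure 1 (shown after this sketch).

    # Minimal sketch of KeyNote-style assertion-graph evaluation (not the real KeyNote API).
    # Each assertion has an 'authorizer', a list of 'licensees' (treated as a disjunction
    # for simplicity) and a 'conditions' predicate over the action attributes.

    def build_graph(assertions, attributes):
        """Return the edges (authorizer -> licensees) whose conditions hold."""
        edges = {}
        for a in assertions:
            if a["conditions"](attributes):
                edges.setdefault(a["authorizer"], set()).update(a["licensees"])
        return edges

    def authorized(assertions, principal, attributes):
        """A principal is authorized if a path exists from POLICY to the principal."""
        edges = build_graph(assertions, attributes)
        frontier, visited = {"POLICY"}, set()
        while frontier:
            node = frontier.pop()
            if node == principal:
                return True
            visited.add(node)
            frontier |= edges.get(node, set()) - visited
        return False

    # Example mirroring Figure 1: an RSA key authorizes two DSA keys to read/write the database.
    assertions = [
        {"authorizer": "POLICY", "licensees": ["rsa-hex:12345678"],
         "conditions": lambda attrs: True},
        {"authorizer": "rsa-hex:12345678",
         "licensees": ["dsa-hex:abcd1234", "dsa-hex:1234abcd"],
         "conditions": lambda attrs: attrs.get("resource") == "database"
                                     and attrs.get("access") in ("read", "write")},
    ]
    print(authorized(assertions, "dsa-hex:1234abcd",
                     {"resource": "database", "access": "read"}))   # True

Because every reachable principal can in turn appear as an Authorizer, the path search also shows why re-delegation cannot be restricted in this model.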
Figure 1. KeyNote assertion

KeyNote-Version: 2
Authorizer: “rsa-hex:12345678”
Licensees: “dsa-hex:abcd1234” || “dsa-hex:1234abcd”
Comment: Authorizer delegates read and write access to either of the licensees
Conditions: (resource == “database” && (access == “read”) || (access == “write”))
Signature: “sig-rsa-md5-hex:abcd1234”

SDSI/SPKI
This solution is a unification of two similar proposals, simple distributed security infrastructure (SDSI) and simple Public Key Infrastructure (SPKI). SPKI was proposed by the IETF working group and, in particular, by Carl Ellison (Ellison, Frantz, & Lacy, 1996). SDSI, designed by Ronald L. Rivest and Butler Lampson (Rivest & Lampson, 1996), was proposed as an alternative to X.509 public-key infrastructure. The SPKI/SDSI certificate format is the result of the SPKI Working Group of the IETF (Ellison, 1999). The main feature of SDSI/SPKI is that its design provides a simple Public Key Infrastructure which uses linked local name spaces rather than a global, hierarchical one. All entities are considered analogous; hence, every principal can produce signed statements. The data format chosen for SPKI/SDSI is the S-expression. This is a LISP-like parenthesized expression with the limitations that empty lists are not allowed and the first element in any S-expression must be a string, called the “type” of the expression. In this subsection, we detail the SDSI solution and the integrated solution SDSI/SPKI, as the development of the SPKI solution is similar to the integrated solution. The subsections detail the certificates of each proposal and explain how delegation is implemented.

SDSI
SDSI establishes four types of certificates: Name/Value, Membership, Autocert and Delegation.
• Name/Value Certificates: These certificates are used to bind principals to local names. Every certificate must be signed by the issuer, using his/her public key (see Figure 2).
• Membership Certificates: These provide principals with membership of a particular SDSI group.
• Autocert Certificates: These are a special kind of self-certificate. Every SDSI principal is required to have an Autocert (see Figure 3).
• Delegation Certificates: These are the mechanism for implementing delegation in SDSI (see Figure 3). SDSI provides two types of delegation, based on the structure of the delegation certificate:
  1. A user (issuer) can delegate to someone by adding that person as a member of the group which they control. A issues a delegation certificate to B; therefore, B will have the same privileges as the group.
  2. A user (issuer) can delegate to someone so that this person is able to sign objects of a certain type on the user’s behalf. The “certain type” is defined by using the template form.

Figure 2. Name-value certificates

(Cert:
  (Local-Name: user1)
  (Value:
    (Principal:
      (Public-Key:
        (Algorithm: RSA-with-SHA1)
        ......)))
  (Signed: ...))
Integrated Solution: SPKI/SDSI
SPKI/SDSI unifies all types of SDSI certificates into one single type of structure. The SPKI/SDSI certificate contains at least an Issuer and a Subject, and it can contain validity conditions, authorization and delegation information. Therefore, there are three categories: ID (mapping a name to a key),
Figure 3. Autocert and delegation certificates

(Auto-Cert:
  (Local-Name: user1)
  (Public-Key: ....)
  (Description: temporal user)
  (Signed: ...))

(Delegation-Cert:
  (Template: form)
  (Group: group1)
  (Signed: ...))
Figure 4. ID and authorization certificates

(cert
  (issuer <principal>)
  (subject <principal>)
  (valid ))

(cert
  (issuer <principal>)
  (subject <principal>)
  (propagate)
  (tag )
  (valid ))
Attribute (mapping an authorization to a name), and Authorization (mapping an authorization to a key). The structure in Figure 4 represents the ID certificate and the authorization certificate. The attribute certificate has the same structure as the authorization certificate. The propagate field is used to perform delegation. As it was desirable to limit the depth of delegation, SPKI/SDSI initially had three options for controlling it: no control, boolean control and integer control. Currently, these options have been reduced to boolean control only. In this way, if this field is true, the Subject is permitted by the Issuer to further propagate the authorization.
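A rough sketch of how boolean delegation control can be enforced along a certificate chain is given below. It is a simplified assumption of the check, not the SPKI/SDSI reference processing rules: in particular, it treats the authorization tag as a single string instead of performing tag intersection, and the key names are hypothetical.

    # Hypothetical sketch of boolean delegation control in an SPKI/SDSI-like certificate chain.
    from dataclasses import dataclass

    @dataclass
    class Cert:
        issuer: str
        subject: str
        tag: str          # the authorization being conveyed, e.g. "read database"
        propagate: bool   # True if the subject may further delegate the authorization

    def chain_valid(chain, root, final_subject, tag):
        """Check issuer/subject linkage, a uniform tag, and propagate on every non-final cert."""
        if not chain or chain[0].issuer != root or chain[-1].subject != final_subject:
            return False
        for i, cert in enumerate(chain):
            if cert.tag != tag:
                return False
            if i > 0 and cert.issuer != chain[i - 1].subject:
                return False
            if i < len(chain) - 1 and not cert.propagate:
                return False  # an intermediate subject re-delegated without permission
        return True

    chain = [Cert("K_owner", "K_alice", "read database", True),
             Cert("K_alice", "K_bob", "read database", False)]
    print(chain_valid(chain, "K_owner", "K_bob", "read database"))   # True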
Privilege Management Infrastructure (PMI)
A wide-ranging authentication service based on identity certificates proposed by ITU-T in its X.509 Recommendation is possible by using a public-key infrastructure (PKI) (ITU-T, 1997). A PKI provides an efficient and trustworthy means to manage and distribute all certificates in the system. At the same time it supports encryption, integrity and nonrepudiation services. Without its use, it is impractical and unrealistic to expect that large scale digital signature applications can become a reality. Similarly, ITU-T has defined the attribute certificates framework for authorization services. It defines the foundation upon which a privilege management infrastructure (PMI) can be built (ITU-T, 2000). PKI and PMI infrastructures are linked by information contained in the identity and attribute certificates of every user. The link is justified by the fact that authorization relies on authentication to prove who users are (see Figure 5).

Figure 5. ITU-T identity and attribute certificates

Although linked, both infrastructures can be autonomous, and managed independently. Creation and maintenance of identities can be separated from PMI, as the authorities that issue
certificates in each infrastructure are not necessarily the same ones. In fact, the entire PKI may exist and be operational prior to the establishment of the PMI. One of the advantages of an attribute certificate is that it can be used for various purposes. It may contain group membership, role, clearance, or any other form of authorization. Yet another essential feature is that the attribute certificate provides the means to transport authorization information to decentralized applications. Moreover, many of those applications deal with delegation. In fact, ITU-T defines PMI models for different environments, among them one specific to delegation, as shown in the following:
• Control model: Describes the techniques that enable the privilege verifier to control access to the object method by the privilege asserter, in accordance with the attribute certificate and the privilege policy.
• Roles model: Individuals are issued role assignment certificates that assign one or more roles to them through the role attribute contained in the certificate. Specific privileges are assigned to a role name through role specification certificates, rather than to individual privilege holders through attribute certificates.
• Delegation model: When delegation is used, a privilege verifier trusts the SOA to delegate a set of privileges to holders, some of which may further delegate some or all of those privileges to other holders.
Regarding the delegation model, there are four components: the privilege verifier, the source of authorization (SOA), the attribute authorities (AAs), and the privilege asserter. The SOA, the initial issuer of certificates, is the authority for a given set of privileges for the resource and can impose constraints on how delegation can be done. The SOA assigns privileges to intermediary AAs, which further delegate privileges to other
entities, though obviously with no more privilege than they themselves hold. A delegator may also further restrict the ability of downstream AAs. If the privilege asserter’s certificate is not issued by that SOA, then the verifier shall locate a delegation path of certificates from that of the privilege asserter to one issued by the SOA. The validation of that delegation path includes checking that each AA had sufficient privileges and was duly authorized to delegate those privileges.
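The delegation-path validation just described can be sketched as follows. The code is an illustrative approximation of the checks involved, not the X.509/PMI path-processing rules; the certificate fields, the names used in the example, and the subset test on privileges are simplifying assumptions.

    # Illustrative sketch of delegation-path validation in a PMI-like setting (not ITU-T X.509 logic).
    from dataclasses import dataclass

    @dataclass
    class AttributeCert:
        issuer: str
        holder: str
        privileges: frozenset       # privileges conveyed to the holder
        may_delegate: bool = False  # whether the holder may act as an AA further down the path

    def validate_path(path, soa):
        """Walk from the SOA towards the privilege asserter, shrinking the delegable privilege set."""
        if not path or path[0].issuer != soa:
            return False
        current_issuer, available = soa, None
        for i, cert in enumerate(path):
            if cert.issuer != current_issuer:
                return False
            if available is not None and not cert.privileges <= available:
                return False  # an AA tried to delegate more privilege than it holds
            if i < len(path) - 1 and not cert.may_delegate:
                return False  # the holder was not authorized to delegate further
            current_issuer, available = cert.holder, cert.privileges
        return True

    path = [AttributeCert("SOA", "AA1", frozenset({"approve", "sign"}), may_delegate=True),
            AttributeCert("AA1", "user42", frozenset({"sign"}))]
    print(validate_path(path, "SOA"))   # True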
Delegation Considerations in Federation Solutions
In this section we analyze some of the most interesting federation solutions that have been developed by different consortia or companies. We focus on two significant solutions: Shibboleth and .NET Passport. These selected solutions represent both the educational and the enterprise points of view. Shibboleth represents academic solutions, although there are other solutions such as PAPI and Athens. On the other hand, we chose .NET Passport as the representative of company solutions, although its competitor Liberty Alliance is growing in popularity, mainly due to the relevance of the partners that form the consortium. The general definition of federation is the act of establishing a trust relationship between two or more entities or, more specifically, an association comprising any number of service providers and identity providers. Therefore, federation should be understood as delegation of services, where the service providers delegate security management to identity providers.
Microsoft Passport
At the end of the 1990s, as part of its .NET initiative, Microsoft introduced a set of Web services that implement a so-called user-centric application model, and which are collectively referred to as .NET My Services. At the core of Microsoft .NET
My Services is a password-based user authentication and single sign-in service called Microsoft .NET Passport (Microsoft, 2002, 2004). The fundamental component of a federation solution is a single sign-in (SSI) service; therefore, Microsoft .NET Passport can be considered the first partial federation solution. Microsoft .NET Passport users are uniquely identified with an e-mail address (usually Hotmail and MSN accounts), and all participating sites are uniquely identified with their DNS names. A Passport account has four parts. The first is a Passport unique identifier (PUID), assigned to the user when he/she sets up the account. This PUID is a 64-bit number that is sent to the user’s site as the authentication credential when a Passport user signs in, and it is used to represent the user in administrative operations. The second is the user profile, containing the user’s phone number or e-mail address, name, and demographic information. The third part of a Passport account is the credential information, such as the password or security key used for a second level of authentication. The wallet is the fourth element; it enables users to digitally store credit card numbers, expiration dates, and billing and shipping addresses. Passport uses a series of cookies to store the authentication information and to assist the sign-in functionality on the user’s computer. The cookies are obtained by using HTTP redirections and are appended as elements in the transferred URI. Passport uses two different types of cookies:
• Domain authority cookies: None of these cookies can be directly accessed by the user’s site. They are written only by the domain authority’s domain (e.g., Passport.com).
• Participant cookies: These cookies are written in the domain of the participating site and enable the user to sign in at any Passport participating site during a browser session.
Passport is composed mainly of two processes, the registration process and the authentication process. A button named “Sign In” is the only modification needed in the site to interact with the Passport service.
• Registration process: Occurs when the user does not have an account in the system.
  1. The user browses to the site and clicks on the “Sign In” button.
  2. The user is redirected to a co-branded registration page displaying the registration fields that were chosen by the site. The minimum number of fields required is two: e-mail name and password.
  3. The user reads and accepts the terms of use (or declines, and the process ends), and submits the form.
  4. The user is then redirected back to the site with the encrypted authentication ticket and profile information attached.
  5. The site decrypts the authentication ticket and profile information and continues the registration process, or grants access to the site.
• Authentication process: A registered user attempts to use a protected service and the sign-in process is activated.
  1. The user browses to a participating site and clicks on the “Sign In” button or link.
  2. The user is redirected to Passport.
  3. Passport checks whether the user has a ticket granting cookie (TGC) in the browser’s cookie file. If one is found, the user skips to Step 4 and never sees the Passport login UI. If the TGC does not satisfy the time-limit conditions since the last sign-in requested by the site, then Passport removes the information that the site passed on the query string and redirects the user to a page that asks for the currently signed-in user’s password. This new page has a short URL in the Passport.net domain. If the user enters the correct information, the process continues.
  4. The user is redirected back to the site with the encrypted authentication ticket and profile information attached.
  5. The site decrypts the authentication ticket and profile information, and signs the customer into the site.
  6. The user accesses the page, resource, or service requested from the site.
During the early years, there were numerous security failures. The work by Kormann and Rubin (2000) enumerates a series of Passport flaws. The security issues are related to the user interface, key management, cookies and JavaScript, persistent cookies, and automatic credential assignment. In 2003, IBM, Microsoft, BEA, RSA and Verisign published a competing identity management framework called Web Services Federation Language, or WS-Federation, which was intended to be the direct competitor of Liberty (Liberty, 2003), although at this moment IBM, BEA, RSA and Verisign are part of the Liberty consortium.
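The ticket-and-cookie mechanics of the authentication process can be caricatured with a short sketch. Everything below (the token format, an HMAC in place of the real encrypted ticket, the cookie and key names) is a hypothetical simplification for illustration; it is not Microsoft's protocol or API.

    # Hedged sketch of a Passport-like single sign-in exchange (illustrative only; the real
    # protocol uses encrypted tickets, HTTP redirects, and several cookies not modelled here).
    import hmac, hashlib, json, time

    SHARED_KEY = b"secret-shared-between-passport-and-site"   # hypothetical per-site key

    def issue_ticket(puid):
        """Passport side: mint a signed ticket once the user's TGC has been checked."""
        payload = json.dumps({"puid": puid, "issued": int(time.time())}).encode()
        mac = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
        return payload.hex() + "." + mac

    def sign_in(browser_cookies, puid):
        """Return a ticket if a ticket-granting cookie (TGC) is present, else require a login."""
        if "TGC" not in browser_cookies:
            return None          # Passport would show its login UI and set the TGC afterwards
        return issue_ticket(puid)

    def site_verify(ticket):
        """Participating site: check the ticket's integrity before signing the customer in."""
        payload_hex, mac = ticket.split(".")
        payload = bytes.fromhex(payload_hex)
        expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
        return json.loads(payload) if hmac.compare_digest(mac, expected) else None

    ticket = sign_in({"TGC": "opaque-value"}, puid="0x1122334455667788")
    print(site_verify(ticket))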
Shibboleth
Shibboleth is an Internet2/MACE project. The purpose of the proposal is to determine whether a person using a Web browser has permission to access a target resource, based on information such as being a member of an institution or of a particular class. It is implemented using federated administration. Usually in federated administration, a resource provider leaves the administration of user identities and attributes to the user’s origin site. Therefore, users are registered only at their origin site, not at each resource provider. Moreover, the system is privacy preserving in the sense that it does not use identity information. Therefore, it is necessary to associate a handle with the user. This handle carries the security information without exposing the identity of the user.
Consequently, Shibboleth is a system for securely transferring attributes about a user from the user’s origin site to a resource provider site. Two principal components are in charge of performing the attribute transfer: the attribute authority (AA) on the user side and the Shibboleth attribute requester (SHAR) on the resource side. These components interchange authorization information by exchanging SAML (Cantor, 2005) messages using any shared protocol that supports the required functional characteristics. These messages are named the attribute query message (AQM) and the attribute response message (ARM); their complete syntax depends on the protocol used, but all protocols must share the core AQM/ARM syntax and semantics. The AQM is sent by a SHAR to an AA, whereas the ARM is the response to an AQM. Guidance on the usage of the schema definition by Shibboleth components is explained in detail in Erdos and Cantor (2002). Besides the SHAR and AA, Shibboleth needs other support components. The Shibboleth Indexical Reference Establisher (SHIRE) is the component responsible for intercepting an HTTP request to a protected resource and associating it with a handle; therefore, this is the component that triggers the Shibboleth system. The handle service (HS) establishes a secure context for the communication about the user that will later occur between the SHAR and the AA, preserving the user’s anonymity. The “Where are you from?” (WAYF) component assists the SHIRE in locating the HS associated with the user. These elements are used in the following process, as shown in Figure 6:
1. The user makes an initial request for a resource protected by a SHIRE.
2. The SHIRE obtains the URL of the user’s HS (Step 5), or redirects the user to a WAYF service for this purpose (Step 3).
3. The WAYF asks the HS to create a handle for this user, redirecting the request through the user’s browser.
4. The HS returns a handle for the user that can be used by the SHAR to get attributes from the appropriate AA at the origin site.
5. The SHIRE passes on the handle (and AA information, and organization name) to the SHAR.
6. The SHAR asks the AA for attributes via an AQM message.
7. It receives the attributes back from the AA via an ARM message.
8. Finally, the SHAR passes the attributes to the HTTP server.

Figure 6. Shibboleth components and flow for complete scenario (components shown: user, WAYF, SHIRE, HS, SHAR, AA, HTTP server, resources and authentication system, spanning the user domain and the resource owner domain)
The process shown in Figure 6 covers the complete scenario, in which the SHIRE does not yet have a handle associated with the user. Otherwise, Steps 2, 3 and 4 are not carried out.
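The handle-based exchange can also be summarised in code. The sketch below is only an abstract model of the message flow; the class interfaces, the handle format and the attribute store are assumptions, and a real deployment exchanges SAML AQM/ARM messages rather than in-process calls.

    # Abstract sketch of the Shibboleth handle flow (SHIRE -> HS -> SHAR -> AA); not real Shibboleth code.
    import uuid

    class HandleService:
        """Origin-site component: maps opaque handles to local user identities."""
        def __init__(self):
            self._handles = {}
        def create_handle(self, local_user):
            handle = uuid.uuid4().hex          # opaque value, preserves the user's anonymity
            self._handles[handle] = local_user
            return handle
        def resolve(self, handle):
            return self._handles.get(handle)

    class AttributeAuthority:
        """Origin-site component: answers attribute query messages (AQM) with an ARM."""
        def __init__(self, handle_service, attribute_db):
            self.hs, self.db = handle_service, attribute_db
        def answer_aqm(self, handle):
            user = self.hs.resolve(handle)
            return self.db.get(user, {})       # attribute response message (ARM), simplified

    # Resource-side behaviour reduced to two calls: the SHIRE obtains a handle,
    # the SHAR uses it to fetch attributes before the HTTP server enforces access.
    hs = HandleService()
    aa = AttributeAuthority(hs, {"alice": {"affiliation": "member@origin.example.edu"}})
    handle = hs.create_handle("alice")                 # Steps 3-4: the HS issues a handle
    attributes = aa.answer_aqm(handle)                 # Steps 6-7: the SHAR's AQM, the AA's ARM
    print(attributes)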
Specific Delegation Schemes
In this section we focus on different formalisms that have been specifically developed to support delegation services and that can be integrated into a multiplicity of applications. Those schemes will be explained and analyzed but, in addition, we show how to include the solutions in existing working frameworks, which facilitates the introduction of users’ delegation operations into final applications.

Logic Frameworks
Logic programming offers a powerful mechanism to represent authorization and access control decisions (Barker, 2000; Bertino, Bonatti, & Ferrari, 2001; Crampton, Loizou, & O’Shea, 2001). In this context, authorizations are represented as predicates and decisions are based on formulae verification. There are many solutions for formulae verification, but the most widely known is probably PROLOG (Nilsson & Maluszynski, 2000), which has several implementations for different platforms (Windows, Linux, Macintosh, etc.). With this number of different implementations, most of them provided with some kind of free license, it is easy to implement authorization decision systems based on formulae verification. When looking for a suitable logic language to represent an authorization system and its authorization rules, one decision to be taken is
whether authorization must be explicitly denied. Depending on whether the available information is complete or incomplete, it is possible to choose an explicit negation approach or a negation as failure approach. Let us illustrate the difference between these approaches with an example. If we look at a railway timetable, all trains are included in this table. There are no exceptions, so any other train one could think of is implicitly excluded. This is negation by failure, as the failure to find a positive conclusion leads to a negative conclusion. Now suppose the booking center says that there is a train at 16:00 and another one at 17:50. Could we infer that there are no more trains between 16:00 and 17:50? There may be more, but without free seats. Therefore the information obtained is incomplete, and it cannot be inferred that there are no trains between 16:00 and 17:50. In order to find out, one may ask the booking center and get an explicit negative statement or a positive one. Specifically, in this section, we focus our attention on two solutions. One of them implements explicit negation: the delegatable authorization program (DAP) scheme. The other one implements negation by failure: the role based trust management framework (RT). We describe the foundations of each of them and elaborate on the way they manage delegation.
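Before turning to the two schemes, the distinction can be illustrated with a toy decision function in Python (an analogy only, not a logic-programming encoding): under negation as failure the absence of a grant implies denial, whereas with explicit negation a request is denied only when a matching negative authorization exists and is otherwise undecided.

    # Toy illustration of negation as failure vs. explicit negation for authorization decisions.
    grants  = {("alice", "read",  "db")}            # positive authorizations
    denials = {("bob",   "write", "db")}            # explicit negative authorizations

    def decide_closed_world(subject, action, obj):
        """Negation as failure: whatever is not derivable as granted is denied."""
        return "grant" if (subject, action, obj) in grants else "deny"

    def decide_open_world(subject, action, obj):
        """Explicit negation: deny only on an explicit negative fact, otherwise undecided."""
        if (subject, action, obj) in grants:
            return "grant"
        if (subject, action, obj) in denials:
            return "deny"
        return "undecided"   # like the timetable example: no information, no conclusion

    print(decide_closed_world("carol", "read", "db"))   # deny (no grant found)
    print(decide_open_world("carol", "read", "db"))     # undecided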
Delegatable Authorization Program (DAP)
Ruan et al. proposed in Ruan, Varadharajan, and Zhang (2002) a logical approach to model delegation. They base their approach on extended logic programs (Gelfond & Lifschitz, 1991), so they allow explicit negation (denial) of authorization. Their language is based on the following concepts:
• Subjects: These are the grantors and grantees of authorizations. There is a special subject, the security administrator, denoted by #, which is responsible for all authorizations in the system. Any authorization must be derived from one of the administrator’s authorizations.
• Objects: These are the targets of authorizations, that is, the available system resources.
• Access rights: The same object can be accessed in several ways. For example, a file can be accessed in read-only mode or in read-write mode.
• Authorization type: DAP considers three authorization types: negative authorization (-), positive authorization (+) and delegatable authorization (*). A negative authorization specifies access that must be forbidden, while a positive authorization specifies access that must be granted. Finally, a delegatable authorization specifies access that must be delegated as well as granted.
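A compact illustration of the intended semantics of the three authorization types follows. It is our own sketch, not the DAP logic-program encoding, and it assumes a simple denial-takes-precedence policy for resolving conflicts; the subjects, objects and rights in the example are hypothetical.

    # Illustrative model of DAP-style authorization types (-, +, *); not the original logic-program encoding.
    AUTHS = [
        ("#",     "alice", "file1", "read",  "*"),  # the administrator lets alice read and re-delegate
        ("alice", "bob",   "file1", "read",  "+"),  # alice grants bob read access only
        ("#",     "bob",   "file1", "write", "-"),  # the administrator explicitly forbids bob to write
    ]

    def has_type(subject, obj, right, auth_type):
        return any(s == subject and o == obj and r == right and t == auth_type
                   for _, s, o, r, t in AUTHS)

    def can_access(subject, obj, right):
        """Negative authorizations win; '+' and '*' both grant access."""
        if has_type(subject, obj, right, "-"):
            return False
        return has_type(subject, obj, right, "+") or has_type(subject, obj, right, "*")

    def can_delegate(subject, obj, right):
        """Only delegatable ('*') authorizations may be passed on, and only if not denied."""
        return has_type(subject, obj, right, "*") and not has_type(subject, obj, right, "-")

    print(can_access("bob", "file1", "read"), can_delegate("bob", "file1", "read"))   # True False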
DAP defines three partial orders<S ,