Lecture Notes in Computer Science Edited by G. Goos, J. Hartmanis, and J. van Leeuwen
2434
Berlin Heidelberg New York Barcelona Hong Kong London Milan Paris Tokyo
Stuart Anderson Sandro Bologna Massimo Felici (Eds.)
Computer Safety, Reliability and Security 21st International Conference, SAFECOMP 2002 Catania, Italy, September 10-13, 2002 Proceedings
Series Editors Gerhard Goos, Karlsruhe University, Germany Juris Hartmanis, Cornell University, NY, USA Jan van Leeuwen, Utrecht University, The Netherlands Volume Editors Stuart Anderson Massimo Felici The University of Edinburgh, LFCS, Division of Informatics Mayfield Road, Edinburgh EH9 3JZ, United Kingdom E-mail: {soa, mas}@dcs.ed.ac.uk Sandro Bologna ENEA CR Casaccia Via Anguillarese, 301, 00060 Rome, Italy E-mail:
[email protected] Cataloging-in-Publication Data applied for Die Deutsche Bibliothek - CIP-Einheitsaufnahme Computer safety, reliability and security : 21st international conference ; proceedings / SAFECOMP 2002, Catania, Italy, September 10 - 13, 2002. Stuart Anderson ... (ed.). - Berlin ; Heidelberg ; New York ; Hong Kong ; London ; Milan ; Paris ; Tokyo : Springer, 2002 (Lecture notes in computer science ; Vol. 2434) ISBN 3-540-44157-3
CR Subject Classification (1998):D.1-4, E.4, C.3, F.3, K.6.5 ISSN 0302-9743 ISBN 3-540-44157-3 Springer-Verlag Berlin Heidelberg New York This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law. Springer-Verlag Berlin Heidelberg New York a member of BertelsmannSpringer Science+Business Media GmbH http://www.springer.de © Springer-Verlag Berlin Heidelberg 2002 Printed in Germany Typesetting: Camera-ready by author, data conversion by DA-TeX Gerd Blumenstein Printed on acid-free paper SPIN 10870130 06/3142 543210
Preface
Welcome to SAFECOMP 2002, held in Catania, Italy. Since its establishment, SAFECOMP, the series of conferences on Computer Safety, Reliability, and Security, has contributed to the progress of the state of the art in dependable applications of computer systems. SAFECOMP provides ample opportunity to exchange insights and experiences in emerging methods and practical experience across the borders of different disciplines. Previous SAFECOMPs have already registered the need for multidisciplinarity in order better to understand the dependability of computer-based systems, in which human factors still remain a major criticality.

SAFECOMP 2002 further addresses multidisciplinarity by collaborating and coordinating its annual activities with the Eleventh European Conference on Cognitive Ergonomics (ECCE-11). This year, SAFECOMP 2002 and ECCE-11 jointly organized an industry panel on Human-Computer System Dependability. The cross-fertilization among different scientific communities and industry supports the achievement of long-term results, contributing to the integration of multidisciplinary experience in order to improve the design and deployment of dependable computer-based systems. SAFECOMP 2002 addressed the need to broaden the scope of disciplines contributing to dependability.

The SAFECOMP 2002 program consisted of 27 refereed papers chosen from 69 submissions from all over the world. The review process was possible thanks to the valuable work of the International Program Committee and the external reviewers. SAFECOMP 2002 also included three invited keynote talks, which enhanced the technical and scientific merit of the conference.

We would like to thank the International Program Committee, the organizing committee, the external reviewers, the keynote speakers, the panelists, and the authors for their work and support for SAFECOMP 2002. We would also like to thank the ECCE-11 people, who collaborated with us in organizing this week of events. We really enjoyed the work, and we hope you appreciate the care that we put into organizing an enjoyable and fruitful conference. Finally, we will be glad to welcome you again to SAFECOMP 2003 in Edinburgh, Scotland.
July 2002
Sandro Bologna Stuart Anderson Massimo Felici
General Chair Sandro Bologna, I
Program Co-chairs Stuart Anderson, UK Massimo Felici, UK
EWICS TC7 Chair Udo Voges, D
International Program Committee Stuart Anderson, UK Liam J. Bannon, IRL Antonia Bertolino, I Helmut Bezecny, D Robin Bloomfield, UK Andrea Bondavalli, I Helmut Breitwieser, D Peter Daniel, UK Bas de Mol, NL Istvan Erenyi, HU Hans R. Fankhauser, S Massimo Felici, UK Robert Garnier, F Robert Genser, A Chris Goring, UK Janusz Gorski, PL Erwin Grosspietsch, D Michael Harrison, UK Maritta Heisel, D Erik Hollnagel, S Chris Johnson, UK
Mohamed Kaâniche, F Karama Kanoun, F Floor Koornneef, NL Vic Maggioli, US Patrizia Marti, I Odd Nordland, NO Alberto Pasquini, I Gerd Rabe, D Felix Redmill, UK Antonio Rizzo, I Francesca Saglietti, D Erwin Schoitsch, A Meine van der Meulen, NL Udo Voges, D Marc Wilikens, I Rune Winther, NO Stefan Wittmann, D Eric Wong, US Janusz Zalewski, US Zdzislaw Zurakowski, P
Organizing Committee Stuart Anderson, UK Antonia Bertolino, I Domenico Cantone, I Massimo Felici, UK Eda Marchetti, I
Alberto Pasquini, I Elvinia Riccobene, I Mark-Alexander Sujan, D Lorenzo Vita, I
Organization
External Reviewers Claudia Betous Almeida, F Iain Bate, UK Giampaolo Bella, I Stefano Bistarelli, I Linda Brodo, I L. H. J. Goossens, NL Bjørn Axel Gran, NO Fabrizio Grandoni, I Silvano Chiaradonna, I Andrea Coccoli, I Felicita Di Giandomenico, I Juliana Küster Filipe, UK
Marc-Olivier Killijian, F Frank Koob, D Martin Lange, UK Eckhard Liebscher, D Eda Marchetti, I Marc Mersiol, F Stefano Porcarelli, I Andrey A. Povyakalo, UK Thomas Santen, D Mark-Alexander Sujan, D Konstantinos Tourlas, UK
Scientific Sponsor
in collaboration with the Scientific Co-sponsors AICA – Associazione Italiana per l’Informatica ed il Calcolo Automatico ARCS – Austrian Research Centers Seibersdorf Interdisciplinary Research Collaboration in Dependability of Computer-Based Systems EACE – European Association of Cognitive Ergonomics ENCRESS – European Network of Clubs for Reliability and Safety of Software GI – Gesellschaft für Informatik
IFAC – International Federation of Automatic Control IFIP – WG10.4 on Dependable Computing and Fault Tolerance IFIP – WG13.5 on Human Error, Safety and System Development ISA-EUNET OCG – Austrian Computer Society SCSC – Safety-Critical Systems Club SRMC – Software Reliability & Metrics Club
SAFECOMP 2002 Organization
SAFECOMP 2002 Management Tool
List of Contributors
K. Androutsopoulos Department of Computer Science King’s College London Strand, London WC2R 2LS United Kingdom
Sandro Bologna ENEA CR Casaccia Via Anguillarese, 301 00060 - Roma Italy
Christopher Bartlett BAE SYSTEMS United Kingdom
R.W. Born MBDA UK Ltd. Filton, Bristol, United Kingdom
Iain Bate Department of Computer Science University of York York YO10 5DD United Kingdom
M. Benerecetti Dept. of Physics University of Naples "Federico II" Napoli Italy
Helmut Bezecny Dow Germany
Peter G. Bishop Adelard and Centre for Software Reliability, City University Northampton Square London EC1V 0HB United Kingdom
Robin Bloomfield Adelard and Centre for Software Reliability, City University Northampton Square London EC1V 0HB United Kingdom
A. Bobbio DISTA Università del Piemonte Orientale 15100 - Alessandria Italy
Jan Bredereke Universität Bremen FB 3 · P.O. Box 330 440 D-28334 Bremen Germany
José Carlos Campelo Departamento de Informática de Sistemas y Computadoras, Universidad Politécnica de Valencia, 46022 - Valencia Spain
Luping Chen Safety Systems Research Centre Department of Computer Science University of Bristol Bristol, BS8 1UB United Kingdom
E. Ciancamerla ENEA CR Casaccia Via Anguillarese, 301 00060 - Roma Italy
D. Clark Department of Computer Science King’s College London Strand, London WC2R 2LS United Kingdom
Tim Clement Adelard Drysdale Building Northampton Square London EC1V 0HB United Kingdom
Thomas Droste Institute of Computer Science, Dept. of Electrical Engineering and Information Sciences, Ruhr Univ. Bochum, 44801 Bochum Germany
Paulo Sérgio Cugnasca Escola Politécnica da Universidade de São Paulo, Dept of Computer Engineering and Digital Systems, CEP 05508-900 - São Paulo Brazil
G. Franceschinis DISTA Università del Piemonte Orientale 15100 - Alessandria Italy
Ferdinand J. Dafelmair TÜV Süddeutschland Westendstrasse 199 80686 München Germany
Dino De Luca NOKIA Italia S.p.A. Stradale Vincenzo Lancia 57 95121 Catania Italy
Ítalo Romani de Oliveira Escola Politécnica da Universidade de São Paulo, Dept of Computer Engineering and Digital Systems, CEP 05508-900 - São Paulo Brazil
S. D. Dhodapkar Reactor Control Division Bhabha Atomic Research Centre Mumbai 400085 India
Theo Dimitrakos CLRC Rutherford Appleton Laboratory (RAL) Oxfordshire United Kingdom
Rune Fredriksen Institute For Energy Technology P.O. Box 173 1751 Halden Norway
R. Gaeta Dipartimento di Informatica Università di Torino 10150 - Torino Italy
Bjørn Axel Gran Institute For Energy Technology P.O. Box 173 1751 Halden Norway
M. Gribaudo Dip. di Informatica Università di Torino 10149 - Torino Italy
Sofia Guerra Adelard Drysdale Building Northampton Square London EC1V 0HB United Kingdom
Mark Hartswood School of Informatics University of Edinburgh United Kingdom
Denis Hatebur TÜViT GmbH System- und Softwarequalität Am Technologiepark 1, 45032 Essen Germany
Klaus Heidtmann Department of Computer Science Hamburg University Vogt-Kölln-Str. 30 D-22527 Hamburg Germany
Monika Heiner Brandenburgische Technische Universität Cottbus Institut für Informatik 03013 Cottbus Germany
Maritta Heisel Institut für Praktische Informatik und Medieninformatik Technische Universität Ilmenau 98693 Ilmenau Germany
Bernhard Hering Siemens I&S ITS IEC OS D-81359 München Germany
Erik Hollnagel CSELAB, Department of Computer and Information Science University of Linköping Sweden
A. Horváth Dept. of Telecommunications Univ. of Technology and Economics Budapest Hungary
Gordon Hughes Safety Systems Research Centre Department of Computer Science University of Bristol Bristol, BS8 1UB United Kingdom
Jef Jacobs Philips Semiconductors, Bld WAY-1, Prof. Holstlaan 4 5656 AA Eindhoven The Netherlands
Tim Kelly Department of Computer Science University of York York YO10 5DD United Kingdom
Tai-Yun Kim Department of Computer Science & Engineering, Korea University Anam-dong Seungbuk-gu Seoul Korea
John C. Knight Department of Computer Science University of Virginia, 151, Engineer’s Way, P.O. Box 400740 Charlottesville, VA 22904-4740 USA
Monica Kristiansen Institute For Energy Technology P.O. Box 173 1751 Halden Norway
Axel Lankenau Universität Bremen FB 3 · P.O. Box 330 440 D-28334 Bremen Germany
K. Lano Department of Computer Science King’s College London Strand, London WC2R 2LS United Kingdom
Bev Littlewood Centre for Software Reliability City University, Northampton Square, London EC1V 0HB United Kingdom
Yiannis Papadopoulos Department of Computer Science University of Hull Hull, HU6 7RX United Kingdom
John May Safety Systems Research Centre Department of Computer Science University of Bristol Bristol, BS8 1UB United Kingdom
Bernard Pavard GRIC – IRIT Paul Sabatier University Toulouse France
M. Minichino ENEA CR Casaccia Via Anguillarese, 301 00060 - Roma Italy
Ali Moeini University of Tehran n. 286, Keshavarz Blvd 14166 – Tehran Iran
MahdiReza Mohajerani University of Tehran n. 286, Keshavarz Blvd 14166 – Tehran Iran
Tom Arthur Opperud Telenor Communications AS R&D Fornebu Norway
Frank Ortmeier Lehrstuhl für Softwaretechnik und Programmiersprachen Universität Augsburg D-86135 Augsburg Germany
M. Panti Istituto di Informatica University of Ancona Ancona Italy
S.E. Paynter MBDA UK Ltd. Filton, Bristol, United Kingdom
Peter Popov Centre for Software Reliability City University Northampton Square, London United Kingdom
L. Portinale DISTA Università del Piemonte Orientale 15100 - Alessandria Italy
Rob Procter School of Informatics University of Edinburgh United Kingdom
S. Ramesh Centre for Formal Design and Verification of Software IIT Bombay, Mumbai 400076 India
Wolfgang Reif Lehrstuhl für Softwaretechnik und Programmiersprachen Universität Augsburg D-86135 Augsburg Germany
Yoon-Jung Rhee Department of Computer Science & Engineering, Korea University Anam-dong Seungbuk-gu Seoul Korea
Francisco Rodríguez Departamento de Informática de Sistemas y Computadoras, Universidad Politécnica de Valencia, 46022 - Valencia Spain
Thomas Rottke TÜViT GmbH System- und Softwarequalität Am Technologiepark 1, 45032 Essen Germany
Mark Rouncefield Department of Computing University of Lancaster United Kingdom
Job Rutgers Philips Design The Netherlands
Titos Saridakis NOKIA Research Center PO Box 407 FIN-00045 Finland
Gerhard Schellhorn Lehrstuhl für Softwaretechnik und Programmiersprachen Universität Augsburg D-86135 Augsburg Germany
Juan José Serrano Departamento de Informática de Sistemas y Computadoras, Universidad Politécnica de Valencia, 46022 - Valencia Spain
Andrea Servida European Commission DG Information Society C-4 B1049 Brussels Belgium
Babita Sharma Reactor Control Division Bhabha Atomic Research Centre Mumbai 400085 India
Roger Slack School of Informatics University of Edinburgh United Kingdom
L. Spalazzi Istituto di Informatica University of Ancona Ancona Italy
Ketil Stølen Sintef Telecom and Informatics, Oslo Norway
S. Tacconi Istituto di Informatica University of Ancona Ancona Italy
Andreas Thums Lehrstuhl für Softwaretechnik und Programmiersprachen Universität Augsburg D-86135 Augsburg Germany
Helmut Trappschuh Siemens I&S ITS IEC OS D-81359 München Germany
Jos Trienekens Frits Philips Institute Eindhoven University of Technology Den Dolech 2 5600 MB Eindhoven The Netherlands
E. Tronci Dip. di Informatica Università di Roma "La Sapienza" 00198 - Roma Italy
Alexander Voß School of Informatics University of Edinburgh United Kingdom
Robin Williams Research Centre for Social Sciences University of Edinburgh United Kingdom
Wenhui Zhang Laboratory of Computer Science Institute of Software Chinese Academy of Sciences P.O. Box 8718, 100080 Beijing China
Table of Contents
Human-Computer System Dependability (Joint ECCE-11 & SAFECOMP 2002) Human-Computer System Dependability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 Panel moderators: Sandro Bologna and Erik Hollnagel Dependability of Joint Human-Computer Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 Erik Hollnagel
Keynote Talk Dependability in the Information Society: Getting Ready for the FP6 . . . . . . 10 Andrea Servida
Human Factors A Rigorous View of Mode Confusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 Jan Bredereke and Axel Lankenau Dependability as Ordinary Action . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 Alexander Voß, Roger Slack, Rob Procter, Robin Williams, Mark Hartswood, and Mark Rouncefield
Security Practical Solutions to Key Recovery Based on PKI in IP Security . . . . . . . . . . 44 Yoon-Jung Rhee and Tai-Yun Kim Redundant Data Acquisition in a Distributed Security Compound . . . . . . . . . .53 Thomas Droste Survivability Strategy for a Security Critical Process . . . . . . . . . . . . . . . . . . . . . . . 61 Ferdinand J. Dafelmair
Dependability Assessment (Poster Session) Statistical Comparison of Two Sum-of-Disjoint-Product Algorithms for Reliability and Safety Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 Klaus Heidtmann
Safety and Security Analysis of Object-Oriented Models . . . . . . . . . . . . . . . . . . . .82 Kevin Lano, David Clark, and Kelly Androutsopoulos The CORAS Framework for a Model-Based Risk Management Process . . . . . 94 Rune Fredriksen, Monica Kristiansen, Bjørn Axel Gran, Ketil Stølen, Tom Arthur Opperud, and Theo Dimitrakos
Keynote Talk Software Challenges in Aviation Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .106 John C. Knight
Application of Formal Methods (Poster Session) A Strategy for Improving the Efficiency of Procedure Verification . . . . . . . . . 113 Wenhui Zhang Verification of the SSL/TLS Protocol Using a Model Checkable Logic of Belief and Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126 Massimo Benerecetti, Maurizio Panti, Luca Spalazzi, and Simone Tacconi Reliability Assessment of Legacy Safety-Critical Systems Upgraded with Off-the-Shelf Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139 Peter Popov
Reliability Assessment Assessment of the Benefit of Redundant Systems . . . . . . . . . . . . . . . . . . . . . . . . . .151 Luping Chen, John May, and Gordon Hughes Estimating Residual Faults from Code Coverage . . . . . . . . . . . . . . . . . . . . . . . . . . 163 Peter G. Bishop
Design for Dependability Towards a Metrics Based Verification and Validation Maturity Model . . . . . 175 Jef Jacobs and Jos Trienekens Analysing the Safety of a Software Development Process . . . . . . . . . . . . . . . . . . 186 Stephen E. Paynter and Bob W. Born Software Criticality Analysis of COTS/SOUP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198 Peter Bishop, Robin Bloomfield, Tim Clement, and Sofia Guerra
Safety Assessment Methods of Increasing Modelling Power for Safety Analysis, Applied to a Turbine Digital Control System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .212 Andrea Bobbio, Ester Ciancamerla, Giuliana Franceschinis, Rossano Gaeta, Michele Minichino, and Luigi Portinale Checking Safe Trajectories of Aircraft Using Hybrid Automata . . . . . . . . . . . . 224 Ítalo Romani de Oliveira and Paulo Sérgio Cugnasca Model-Based On-Line Monitoring Using a State Sensitive Fault Propagation Model . . . . . . . . . . . . . . . . . . . . . . . . 236 Yiannis Papadopoulos
Keynote Talk On Diversity, and the Elusiveness of Independence . . . . . . . . . . . . . . . . . . . . . . . . 249 Bev Littlewood
Design for Dependability (Poster Session) An Approach to a New Network Security Architecture for Academic Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252 MahdiReza Mohajerani and Ali Moeini A Watchdog Processor Architecture with Minimal Performance Overhead . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261 Francisco Rodríguez, José Carlos Campelo, and Juan José Serrano
Application of Formal Methods Model-Checking Based on Fluid Petri Nets for the Temperature Control System of the ICARO Co-generative Plant . . .273 M. Gribaudo, A. Horváth, A. Bobbio, E. Tronci, E. Ciancamerla, and M. Minichino Assertion Checking Environment (ACE) for Formal Verification of C Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284 B. Sharma, S. D. Dhodapkar, and S. Ramesh Safety Analysis of the Height Control System for the Elbtunnel . . . . . . . . . . . 296 Frank Ortmeier, Gerhard Schellhorn, Andreas Thums, Wolfgang Reif, Bernhard Hering, and Helmut Trappschuh
Design for Dependability Dependability and Configurability: Partners or Competitors in Pervasive Computing? . . . . . . . . . . . . . . . . . . . . . . . . 309 Titos Saridakis Architectural Considerations in the Certification of Modular Systems . . . . . 321 Iain Bate and Tim Kelly A Problem-Oriented Approach to Common Criteria Certification . . . . . . . . . .334 Thomas Rottke, Denis Hatebur, Maritta Heisel, and Monika Heiner Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .347
Human-Computer System Dependability Panel moderators: Sandro Bologna and Erik Hollnagel Panellists: Christopher Bartlett, Helmut Bezecny, Bjørn Axel Gran, Dino De Luca, Bernard Pavard and Job Rutgers
Abstract. The intention of this cross-conference session is to bring together specialists of cognitive technologies and ergonomics with software developers, dependability specialists and, especially, end-users. The objective of the session is to provide a forum for the sharing of practical experiences and research results related to safety and dependability of human-computer systems.
1 Rationale
While a computing device from an isolated perspective is a technological artefact pure and simple, practice has taught us that humans are always involved at some time and in some way. Humans play a role as developers, designers, and programmers of systems. They may be end-users, or they may be the people who service, maintain, repair, and update the systems. Unless we restrict our perspective to a narrow slice of a system’s life cycle, the central issue must be the dependability of human-computer systems. This issue cannot be reduced to a question of fallible humans affecting perfect computers. Indeed, we cannot consider the functioning of a computer without at the same time considering the functioning of humans. In that sense the issue is not one of either-or, but of both – of human-and-computer seen as a single system. The problem of dependability therefore cannot be reduced to the dependability of one part or the other, but must address the system as a whole. This requires a close collaboration of people with different kinds of expertise and a realisation that no single discipline can provide the whole answer. It is in that spirit that the cognitive ergonomics community and the safety & reliability community meet in this session – and hopefully will continue to meet afterwards.
2 Position Statements
Christopher Bartlett is Chief Technologist in the Capability Management Group, BAE Systems, UK. Abbreviated position statement: A major reason for retaining the man-in-the-loop is that we do not have totally integrated systems yet. Pilots still have a task to analyse what else is happening within and without the cockpit, and it is this integration
attribute to which all their training is directed. We simply do not have sensors with sufficient resolution nor decision-making algorithms as powerful as the human brain. We may achieve such system integration eventually, but the larger and more complex the system, the more difficult it is to grasp the potential failure modes when we try to formalise it. Until then the best we can do is to recognise both human fallibility alongside our ability to make innovative decisions, and to make our systems robust enough to cope.
Helmut Bezecny is "Global Process Automation Safety Technology Leader" with Dow Chemical in Stade, Germany. Abbreviated position statement: Human-computer issues are created by the diverse capabilities of both, if misapplied. Humans can prevent hazardous situations from turning into accidents by identifying and classifying safety-relevant information that cannot be measured and by engaging in activities that have not been planned but would help in the specific scenario. The strategy for human-computer systems should be to combine the best of both – let the computer do whatever can be automated and let the human interact intelligently, creatively and with low stress.
Bjørn Axel Gran is a member of the section on software verification and validation at the OECD Halden Reactor Project, Norway. Abbreviated position statement: In the operation of complex systems such as nuclear power plants, aircraft, etc., humans and advanced technology interact within a dynamic environment. The dependability of such systems requires that the interplay between humans and technology be addressed for every lifecycle phase of the system. This creates a need for research into formulating new methodologies that ensure a common and uniform platform for the experts from different fields.
Dino De Luca is Chief Solution Engineer and Team Leader for MAG (Middleware & Applications Group) in the South Europe hub of Nokia. Abbreviated position statement: Although the possibility to pay by interacting with a mobile terminal opens new and interesting scenarios, it is very important that the customer and the service are able to fully trust each other, including all the communication services in-between. Firstly, mutual authentication, privacy and integrity of the communication must be provided for both parties. Secondly, in addition to the secured communications pipe, a higher level of assurance can be achieved by applying digital signatures to a business transaction. The proposed solution is to have the customer’s mobile phone act as a Personal Trusted Device (PTD).
Bernard Pavard is professor at the Cognitive Engineering Research Group (GRIC – IRIT) at the Paul Sabatier University, Toulouse, France. Abbreviated position statement: I want to consider how we can design dependability into complex work settings. The paradox of dependability is that the more complex the situation, the more it is necessary to involve human decisions and immersive and invisible technologies, and the less it is possible to carry out traditional safety &
reliability analysis. Designers need to tackle non-deterministic processes and answer questions such as: how can controlled (deterministic) processes and distributed non-controllable interactions be managed optimally to improve the robustness of a system, and how can the risks of such a system be assessed?
Job Rutgers is a member of the Strategic Design Team at Philips Design in the Netherlands. Abbreviated position statement: Designers engage in the development process by means of scenarios, such as visual and textual descriptions of how future products and services will enable users to overcome certain problems or will address certain needs. Often, these scenarios are characterized by a simple narration but lack a realistic modelling of everyday life activities into ‘real stories’. To achieve this, we need both to better integrate the complex information collected by social scientists and to make use of a wider vocabulary of storytelling than the ‘happy ending’ mainstream Hollywood offers. The detailed modelling of users’ everyday life activities needs to result in ‘real life’ scenarios that incorporate a fuller and richer blend of people’s behaviour.
Dependability of Joint Human-Computer Systems Erik Hollnagel CSELAB, Department of Computer and Information Science University of Linköping, Sweden
[email protected] Abstract. Human-computer systems have traditionally been analysed and described in terms of their components – humans and computers – plus the interaction. Several contemporary schools of thought point out that decomposed descriptions are insufficient, and that a description on the level of the system as a whole is needed instead. The dependability of human-computer systems must therefore refer to the performance characteristics of the joint system, and specifically the variability of human performance, not just as stochastic variations but also as purposeful local optimisation.
1 Introduction
In any situation where humans use artefacts to accomplish something, the dependability of the artefact is essential. Quite simply, if we cannot rely or depend on the artefact, we cannot really use it. This goes for simple mechanical artefacts (a hammer, a bicycle), as well as complex technological artefacts (ranging from cars to large-scale industrial production systems), and sociotechnical artefacts (distribution systems, service organisations). In order for an artefact – or more generally, a system – to be dependable, it is a necessary but not sufficient condition that the parts and subsystems are dependable. In addition, the interaction between the parts must also be dependable. In pure technological systems this does not constitute a serious problem, because the interaction between the parts usually is designed together with the parts. Take, for instance, an automobile engine. This consists of hundreds (or even thousands?) of parts, which are engineered to a high degree of precision. The same precision is achieved in specifying how the parts work together, either in a purely mechanical fashion or, as in a modern car engine, with extensive use of software. Compared to the engine in a Ford T, a modern car engine is a miracle of precision and dependability, as are indeed most other technological systems.

Since the dependability of a system depends on the parts and their interaction, it makes sense that the design tries to cover both. While this can be achieved for technological systems which function independently of users, the situation is radically different for systems where humans and technology have to work together, i.e., where some of the parts are hardware – or hardware + software – and others are humans. I shall refer to the latter as joint systems, for instance joint human-computer systems or
joint cognitive systems (Woods, 1986). Although an artefact may often be seen as representing technology pure and simple, practice has taught us that humans are always involved at some time and in some way. Humans play a role as developers, designers, and programmers of systems. They may be end-users, or they may be the people who service, maintain, repair, and update the systems. Since the dependability cannot be isolated to a point in time, but always must consider what went before (the system’s history), it follows that the issue of system dependability always is an issue of human-system dependability as well. The crucial difference between pure and joint systems is that humans cannot be designed and engineered in the manner of technological components, and furthermore that the interaction cannot be specified in a rigorous manner. This is certainly not because of a lack of trying, as shown by the plethora of procedures and rules that are a part of many systems. There also appears to be a clear relation between the level of risk in a situation and the pressure to follow procedures strictly, where space missions and the military provide extreme examples. Less severe, but still with an appreciable risk, are complex public systems for commerce, communication and transportation, where aviation and ATM are good examples. At the other end of the scale are consumers trying to make use of various devices with more or less conspicuous computing capabilities, such as IT artefacts and everyday machines (household devices, kiosks, ATMs, ticket machines, etc.). Although the risks here may be less, the need of dependability is still very tangible, for instance for commercial reasons. It is therefore necessary to consider seriously the issue of dependability of joint human-computer systems at all levels, and especially to pay attention to the dependability of the interaction between humans and machines.
2 Human Reliability
One issue that quickly crops up in this endeavour is that of human reliability, and its not too distant cousin “human error”. Much has been written on these topics, particularly in the aftermath of major accidents (nuclear, aviation, trains, trading, etc.). In the 1970-80s it was comme il faut to consider humans as being unreliable and error prone by themselves, i.e., considered as systems in isolation. This led to several models of “human error” and human reliability, and to the concept of an inherent human error probability (Leplat & Rasmussen, 1987; Miller & Swain, 1987; Reason, 1990). From the 1990s and onwards this view has been replaced by the realisation that human performance and reliability depend on the context as much as on any psychological predispositions (Amalberti, 1996; Hollnagel, 1998). The context includes the working conditions, the current situation, resources and demands, the social climate, the organisational safety culture, etc. Altogether this has been expressed by introducing a distinction between sharp-end and blunt-end factors, where the former are factors at the local workplace and the latter are factors removed in space and time that create the overall conditions for work (Reason, 1997; Woods et al., 1994). The importance of the sharp-end, blunt-end framework of thinking is that system dependability – or rather, the lack thereof – is not seen as a localised phenomenon which can be explained in terms of specific or unique conditions, but rather as
something which is part and parcel of the system throughout its entire existence. Human-computer dependability therefore cannot be reduced to a question of human reliability and “human error”, but requires an appreciation of the overall conditions that govern how a system functions.
3 Automation
One of the leading approaches to automation has been to use it as a way of compensating for insufficient human performance. The insufficiency can be in terms of speed, precision, accuracy, endurance, or – last but not least – reliability and “error”. Automation has been introduced either to compensate for or support inadequate human performance, or outright to replace humans in specific functions. From a more formal point of view, automation can be seen as serving in one of the following roles (Hollnagel, 1999):
• Amplification, in the sense that the ability to perform a function is being improved. If done inappropriately, this may easily lead to a substitution of functions.
• Delegation, in which a function is transferred to the automation under the control of the user (Billings, 1991; Sheridan, 1992).
• Substitution or replacement. Here the person not only delegates the function to the automation but completely relinquishes control.
• Extension, where new functionality is being added. Pure cases of extension are, however, hard to find.
In relation to the dependability of joint human-computer systems, automation can be seen as an attempt to improve the interaction by replacing the human with technology, hence taking a specific slice of the interaction away from the human. Despite the often honourable intentions, the negative effects of automation across all industrial domains have, on the whole, been larger than the positive ones. Among the well-documented negative effects are that workload is not reduced but only shifted to other parts of the task, that “human errors” are displaced but not eliminated, that problems of safety and reliability remain and that the need for human involvement is not reduced, and that users are forced into a more passive role and therefore are less able to intervene when automation fails. The shortcomings of automation have been pointed out by many authors (e.g. Moray et al., 2000; Wiener & Curry, 1980) and have been elegantly formulated as the Ironies of Automation (Bainbridge, 1983). From a joint systems perspective, the main problem with automation is that it changes existing working practices. After a system has been in use for some time, a stable condition emerges as people learn how to accomplish their work with an acceptable level of efficiency and safety. When a change is made to a system – and automation certainly qualifies as one – the existing stable condition is disrupted. After a while a new stable condition emerges, but this may be so different from the previous one that the premises for the automation no longer exist. In other words, by making a change to the system, some of the rationale for the change may have become obsolete. Solutions that were introduced to increase
system dependability may therefore have unsuspected side-effects, and perhaps not even lead to any improvement at all.
4 Local Optimisation – Efficiency-Thoroughness Trade-Off
In designing complex human-machine systems – as in transportation, process control, and e-commerce – the aim is to achieve a high degree of efficiency and reliability so that we can depend on the system’s functioning. In practice all human-machine systems are subject to demands that require a trade-off between efficiency and thoroughness on the part of the users, and a dependable joint system is one that has developed a safe trade-off. In other words, the people who are part of the system at either the sharp or the blunt end have learned which corners to cut and by how much. The trade-off of thoroughness for efficiency is, however, only viable as long as the conditions conform to the criteria implied by the trade-off. Sooner or later a situation will occur when this is not the case – if for no other reason than because the environment is made up of other joint systems that work according to the same principles. It will be impossible to make significant improvements to system dependability if we do not acknowledge the reality of system performance, i.e., that there must be an efficiency-thoroughness trade-off.

The basis for design is inevitably a more or less well-defined set of assumptions about the people who are going to use the system – as operators, end-users, maintenance staff, etc. If, for the sake of the argument, we only consider the end-users, the design usually envisions some kind of exemplary end-user. By this I mean a user who has the required psychological and cognitive capacity, who is motivated to make use of the system, who is alert and attentive, who is “rational” in the sense that s/he responds the way the designer imagined, and who knows how the system works and is able to interpret outputs appropriately. While such end-users certainly may exist, the population of users will show a tremendous variety in every imaginable – and unimaginable – way (Marsden & Hollnagel, 1996). Apart from differences in knowledge and skills, there will also be differences in needs (reasons for using the system), in the context or working conditions, in demands, in resources, etc. If system design only considers the exemplary user, the result is unlikely to be a dependable joint system. In order for that to come about, it is necessary that designers (programmers, etc.) take into account how things may go wrong and devise ways in which such situations may be detected and mitigated.

System designers and programmers are, however, subject to the same conditions as end-users, i.e., a diversity of needs and working conditions. They are therefore prone to use the same bag of tricks to bring about a local optimum, i.e., to trade off thoroughness for efficiency in order to achieve a result that they consider to be sufficiently safe and effective. This means that the dependability of a joint system must be considered from beginning to end of the system’s life cycle. The formal methods that are used for requirement specification and programming are themselves artefacts being used by humans, and therefore subject to the same problems as the final systems. Thus, the diligence by which we scrutinise the dependability of the system-in-use should also be directed at the system as it is being built.
Fig. 1. User types and design diversity (diagram labels: population of users; diversity of needs; diversity of contexts; exemplary user; system designer; requirements, tools, technology; artefact + interaction (HW+SW))
5 Conclusions
In this presentation I have tried to argue that the issue of joint human-computer dependability cannot be reduced to a question of fallible humans corrupting otherwise perfect computing artefacts. Indeed, we cannot consider the functioning of a computing system without at the same time considering the functioning of humans. In that sense the issue is not one of either-or, but of both – of human-and-computer seen as a single system. Although there is no easy way to solve the problems we see all around, one solution would be to build systems that are resilient in the sense that they are able to detect variations in overall performance and compensate for them at an early stage, either by a direct compensation-recovery capability or by defaulting to a safe state. To do this there is a need for better methods to analyse and assess the dependability of joint human-machine systems. The bottom line is that the problem of dependability cannot be reduced to the dependability of one part or the other, but must address the joint system. This requires a close collaboration of people with different kinds of expertise and a realisation that no single discipline can provide the whole answer. It is in that spirit that the cognitive ergonomics community and the safety & reliability community meet in this session – and hopefully will continue to meet afterwards.
References
1. Amalberti, R. (1996). La conduite des systèmes à risques. Paris: PUF.
2. Bainbridge, L. (1983). Ironies of automation. Automatica, 19(6), 775-779.
3. Billings, C. E. (1991). Human-centered aircraft automation: A concept and guidelines (NASA Technical Memorandum 103885). Moffett Field, CA: NASA Ames Research Center.
4. Hollnagel, E. (1998). Cognitive reliability and error analysis method – CREAM. Oxford: Elsevier Science.
5. Hollnagel, E. (1999). From function allocation to function congruence. In S. Dekker & E. Hollnagel (Eds.), Coping with computers in the cockpit. Aldershot, UK: Ashgate.
6. Leplat, J. & Rasmussen, J. (1987). Analysis of human errors in industrial incidents and accidents for improvement of work safety. In J. Rasmussen, K. Duncan & J. Leplat (Eds.), New technology and human error. London: Wiley.
7. Marsden, P. & Hollnagel, E. (1996). Human interaction with technology: The accidental user. Acta Psychologica, 91, 345-358.
8. Miller, D. P. & Swain, A. D. (1987). Human error and human reliability. In G. Salvendy (Ed.), Handbook of human factors. New York: Wiley.
9. Moray, N., Inagaki, T. & Itoh, M. (2000). Adaptive automation, trust, and self-confidence in fault management of time-critical tasks. Journal of Experimental Psychology: Applied, 6(1), 44-58.
10. Reason, J. T. (1990). Human error. Cambridge, UK: Cambridge University Press.
11. Reason, J. T. (1997). Managing the risks of organizational accidents. Aldershot, UK: Ashgate.
12. Sheridan, T. B. (1992). Telerobotics, automation, and human supervisory control. Cambridge, MA: MIT Press.
13. Wiener, E. L. & Curry, R. E. (1980). Flight deck automation: Promises and problems. Ergonomics, 23(10), 995-1011.
14. Woods, D. D. (1986). Cognitive technologies: The design of joint human-machine systems. The AI Magazine, 6(4), 86-92.
15. Woods, D. D., Johannesen, L. J., Cook, R. I. & Sarter, N. B. (1994). Behind human error: Cognitive systems, computers and hindsight. Columbus, OH: CSERIAC.
Dependability in the Information Society: Getting Ready for the FP6 Andrea Servida European Commission, DG Information Society C-4 B1049 Brussels, Belgium
[email protected] http://deppy.jrc.it/
Abstract. The dependable behaviour of information infrastructures is critical to achieve trust & confidence in any meaningful realisations of the Information Society. The paper briefly discusses the aim and scope of the Dependability Initiative under the Information Society Technologies Programme and presents the activities that have recently been launched in this area to prepare the forthcoming 6th Framework Programme of the European Commission.
1 Introduction
The Information Society is increasingly dependent on largely distributed systems and infrastructures for life-critical and business-critical functions. The complexity of systems in the Information Society is rapidly increasing because of a number of factors like the size, unboundedness and interdependency, as well as the multiplicity of actors involved, the need to pursue more decentralised control and growing sophistication in functionality. This trend, together with the increasing use of open information infrastructures for communications, freeware software and common application platforms, exposes our society to new vulnerabilities and threats that would need better understanding, assessment and control. The dependable and predictable behaviour of information infrastructures provides the basis for Trust & Confidence (T&C) in any meaningful realisations of the global Information Society and, in particular, in Electronic Commerce. However, the expectation and perception of T&C are dramatically changing under the pressure of new business, technological and societal drivers, among which are: the deregulation in telecommunications, which has led to the emergence of new players, actors and intermediaries inter-playing in new added value chains, multi-national consortiums, services and applications, but also to the blurring of sector and jurisdictional boundaries;
1 Disclaimer: The content of this paper is the sole responsibility of the author and in no way represents the view of the European Commission or its services.
the convergence of communications and media infrastructures together with the interoperability of systems and services, which has boosted the deployment of unbounded network computing and communication environments; the realisation of information as an asset, which has facilitated the transition of companies from a manufacturing-centred to an information/knowledge management centred model with quality met production at the lowest point of global cost; the globalisation of services, markets, reach-ability of consumers and companies with virtual integration of business processes; the emergence of new threats and vulnerabilities, which are mostly connected with the increased openness and reach-ability of the infrastructures; the realisation by a number of nations that ‘information superiority’ brings strategic gains; the increased sophistication and complexity of individual systems; the changes in the traditional chain of trust, which is affected by the blurring of geographic borders and boundaries.
The European Dependability Initiative, called in short DEPPY [1], is a major R&D initiative under the Information Society Technologies Programme [2] to develop technologies, systems and capability to tackle the emerging dependability challenges in the Information Society. The experience gained in DEPPY has shown that to attain these new challenging objectives there is a need to foster the integration of research efforts and resources coming from a number of areas such as security, fault tolerance, reliability, safety and survivability, but also network engineering, psychology, human factors, econometrics, etc. In the following, we present how DEPPY has developed and discuss the new dependability challenges which could be tackled in the forthcoming 6th Framework Programme [3] of the European Commission (called in short FP6).
2 The European Dependability Initiative
DEPPY was launched in 1997/1998 as an initiative of the IST Programme with the primary objective of addressing dependability requirements in tightly connected systems and services, which are at the basis of the Information Society. The mission statement for DEPPY was: “to contribute towards raising and assuring trust and confidence in systems and services, by promoting dependability enabling technologies”. This mission statement embraces the main goals, precisely: fostering the development of new dependability technologies, and making better use of the available dependability technologies.
2.1 The DEPPY Objectives
Five key objectives were identified as qualifying the success of DEPPY, precisely: fostering a dependability-aware culture, which would include promoting innovative approaches to dependability, disseminating industrial best practice and training to promote the ability to work in multi-disciplinary teams; providing a workable characterisation of affordable dependability, which would support the integration and layering of services, the assurance of quality of intangible assets and the certification of both new distributed architectures and massively deployed embedded systems; facilitating global interoperable trust frameworks, which would also consider mediation and negotiation along chains of trust, dependable business process integration and guidance on issues of liability that might arise from system failures in large-scale distributed and embedded settings; mastering heterogeneous technical environments, including the integration of COTS and legacy systems software into Internet-based applications, rapid recovery strategies and mechanisms to preserve essential services and business continuity, systems composability, and dependability assurance and verification in dynamic environments; and managing dependability and risk in largely distributed and open systems-of-systems environments, including dependability assurance and verification, united frameworks for modelling and validation, and flexible business-driven models of dependability. In the following, we will briefly discuss the main elements of the DEPPY research agenda as it developed through the years.
2.2 The DEPPY Research Agenda
The DEPPY research agenda was determined on a yearly basis, in line with the overall approach taken to define the Workprogramme for the IST Programme, in which DEPPY was present as a Cross-Programme Action [4]. In 1999, the research agenda for DEPPY focussed on dependability in services and technologies and, in particular, on: technologies, methods and tools to meet the emerging dependability requirements stemming from the ubiquity and volume of embedded and networked systems and services, the global and complex nature of large-scale information and communication infrastructures, risk and incident management tools, as well as on privacy enhancing technologies, self-monitoring, self-healing infrastructures and services.
Seven R&D projects were funded covering technical areas like the intrusion tolerance paradigm in largely distributed systems, the dependable composition of systems-of-systems and advanced tools for embedded system design. In 2000, the technical focus was on promoting research and industrially oriented projects in areas like: large-scale vulnerabilities in multi-jurisdictional and unbounded systems; information assurance; survivable systems relying on self-organising and self-diagnostic capabilities; dependability of extensively deployed and tightly networked embedded systems; risk management of largely distributed and open systems-of-systems; and methods for a workable characterisation of affordable dependability. Besides these technical objectives, we also tried to stimulate international collaboration, in particular with the US. Six projects were funded on areas like dependability benchmarks for COTS, security of global communication networks, methods and tools for assuring dependability and, last but not least, management and control systems for electrical supply and for telecommunications networks. The objectives set for the year 2001, which were logically built on the work of the previous years, were also closely related to the action on dependability of information infrastructures which was part of the “Secure networks and smart cards” objective of the eEurope 2002 Action Plan [3]. Such an action aimed to “stimulate public/private co-operation on dependability of information infrastructures (including the development of early warning systems) and improve co-operation amongst national 'computer emergency response teams'”. In this respect, the technical objectives for 2001 focussed on developing: innovative and multidisciplinary approaches, methods and technologies to build and manage dependability properties of large-scale infrastructures composed of tightly networked embedded systems; methods and technologies to model and manage dependability and survivability of globally interdependent systems and highly interconnected critical infrastructures; and technologies to measure, verify and monitor dependability properties and behaviours of large-scale unbounded systems. Of the three projects funded, one, called the Dependability Development Support Initiative [6], contributes to raising awareness that making the information infrastructure dependable would mean protecting our industry wealth and investments in information and communication technologies as well as in other intangible assets.
3 The Future: Towards FP6
The experience gained with DEPPY shows that we are only just starting to understand the scope of the technological, economic and social implications and challenges connected with the increasing reliance of our economy and society on digital communication networks and systems. Such a reliance is developed through an unprecedented
14
Andrea Servida
scale of integration and interconnectedness of highly heterogeneous systems that are, individually and collectively, "emergent", that is, the result of the casual or intentional composition of smaller and more homogeneous components. These aspects are critical in the area of networked embedded systems and components, where the large volume of deployed networked devices brings to the surface novel and unique system challenges. Lastly, this scenario is made even more complex by the large variety of patterns of use, user profiles and deployment environments.
In the following are some of the issues that we believe may characterise the context for future activities on dependability: In the area of open information infrastructure and unbounded networks there is a growing demand for "working and affordable dependability", which leads to the need to holistically address issues of safety, security, availability, survivability, etc. This could only be accomplished by both stimulating innovative multidisciplinary approaches as well as facilitating the convergence of diverse scientific cultures and technical communities. In the network security arena there is a clear shift from "resist to attack" to "survive and adapt". The target of "absolute security & zero risk" is unfeasible in domains where openness and interconnectivity are vital elements for successful operations. In this respect, the notion of an "adaptable environment" (which would have a level of "self-awareness"), within which security performance, quality of services and risks should be managed, is becoming the key element of any operational strategy. There is no language to describe the dependability of unbounded systems and infrastructures, nor are there global dependability standards. Hence, novel multidimensional models (which also cover behaviour, composition, physical elements, thermal properties, etc.) and approaches should be developed. In the area of survivability and dependability, R&D often drives the Policy activity, but Policy must also drive R&D. There is a need to ensure dependability of critical infrastructures across Nations. In this respect, the meaning of "critical" varies because of Trans-national dependencies. A common knowledge base for this purpose does not exist. Pooling R&D resources across nations can build such knowledge. We are just at the beginning of distributed computing, and the pace of its change is dramatic. Very monolithic platforms would disappear to be replaced by new computing platforms/fabric whose impact on dependability is to be ascertained. The next dependability challenge would be related to network bandwidth and latency. It is anticipated that both the global and the local (intimately related to emerging short-scale interaction/communication means and capability) dimensions and aspects of cyberspace deserve a fundamental paradigm shift in conceiving and realising a globally (including the time dimension) trustworthy and secure Information Society.
Software is still the big problem. Achieving the automated (similarly to what is an automated banking process) production and evolution of software seems to be a good target, but we are still very far away from it. In the e-commerce environment software is getting more and more a "utility" for which "scalability" is more important than "features". From a business perspective there is no difference between "intentional" (normally dealt with in the "security" context) and "unintentional" (normally dealt with in the safety context) disruptive events. From a business perspective there is no difference between a virus and a bug, or between a bomb and a quake. The human component is still very critical to the dependability of systems and organisations.
For the future, the overall goal of pursuing dependability and mastering interdependencies in the Information Society would have to support innovative and multidisciplinary RTD that tackles the scale issues of dependability connected with new business and everyday-life application scenarios, such as (i) the increasing volatility and growing heterogeneity of products, applications, services, systems and processes in the digital environment, as well as (ii) the increasing interconnection and interdependency of the information and communication infrastructure with other vital services and systems of our society and our economy. This would lead to new areas of research on dependability aiming at building robust foundations for the Information Society through novel, multidisciplinary and innovative system-model approaches, architectures and technologies to realise dependable, survivable and evolvable systems, platforms and information infrastructures; and at understanding, modelling and controlling the interdependencies among large-scale systems and infrastructures resulting from the pervasiveness and interconnectedness of information and communication technologies.
3.1 Towards FP6: The Roadmap Projects
In order to prepare the ground for research initiatives in FP6 [7], with particular attention to the new instruments of Integrated Projects (IP) and Networks of Excellence (NoE) [8], seven Roadmap projects on security and dependability have recently been launched with the goals: to identify the research challenges in the respective areas; to assess Europe's competitive position and potential, and to derive strategic roadmaps for applied research driven by visionary scenarios; and to build constituencies and reach consensus by means of feedback loops with the stakeholders at all relevant levels. The projects address issues around securing infrastructures, securing mobile services, dependability, personal trusted devices, privacy and basic security technologies. Below is a short summary of the three Roadmap projects on dependability, namely:
AMSD, which focuses on a global and holistic view of dependability; ACIP, which tackles the area of simulation and modelling for critical infrastructure protection; and WG-ALPINE, which looks at survivability and loss prevention aspects. These roadmaps nicely complement and enrich the work of DDSI, which tackles the area of dependability from a policy support angle.
AMSD - IST-2001-37553: Accompanying Measure System Dependability
This project addresses the need for a coherent major initiative in FP6 encompassing a full range of dependability-related activities, e.g. RTD on the various aspects of dependability per se (reliability, safety, security, survivability, etc.); education and training; and means for encouraging and enabling sector-specific IST RTD projects to use dependability best practice. It is aimed at initiating moves towards the creation of such an initiative, via roadmapping and constituency and consensus building undertaken in co-operation with groups, working in various dependability-related topic areas, who are already undertaking such activities for their domains. The results will be an overall dependability roadmap that considers dependability in an adequately holistic way, and a detailed roadmap for dependable embedded systems.
ACIP - IST-2001-37257: Analysis & Assessment for Critical Infrastructure Protection
Developed societies have become increasingly dependent on ICT and services. Infrastructures such as IC, banking and finance, energy, transportation, and others rely on ICT and are mutually dependent. The vulnerability of these infrastructures to attacks may result in unacceptable risks because of primary and cascading effects. The investigation of cascading and feedback effects in highly complex, networked systems requires massive support by computer-based tools. The aim of ACIP is to provide a roadmap for the development and application of modelling and simulation, gaming and further adequate methodologies for the following purposes: identification and evaluation of the state of the art of CIP; analysis of mutual dependencies of infrastructures and cascading effects; investigation of different scenarios in order to determine gaps, deficiencies, and robustness of CIS; and identification of technological developments and necessary protective measures for CIP.
WG-ALPINE - IST-2001-38703: Active Loss Prevention for ICT-enabled Enterprise Working Group
The main objective of this project is the creation, operation and consolidation of an Active Loss Prevention Working Group to address the common ICT Security problems faced by users, achieve consensus on their solutions across multiple disciplines, and produce a favourable impact on the overall eBusiness market. The Working Group approaches the problems from an ICT user perspective, with special
emphasis on the view of small/medium systems integrators (SMEs), while establishing liaisons with all players, including representatives from the key European professional communities that must collaborate to achieve a more effective approach to ICT Security. These include legal, audit, insurance, accounting, commercial, government and standardisation bodies, technology vendors, and others.
DDSI - IST-2001-29202: Dependability Development Support Initiative
The goal of DDSI is to support the development of dependability policies across Europe. The overall aim of this project is to establish networks of interest, and to provide baseline data upon which a wide spectrum of policy-supporting activities can be undertaken, both by European institutions and by public and private sector stakeholders across the EU and in partner nations. By convening workshops bringing together key experts and stakeholders in critical infrastructure dependability, DDSI facilitates the emergence of a new culture of trans-national collaboration in this field, which is of global interest and global concern. In order to make rapid progress in the area, the outcomes of the workshops, as well as the information gathered in order to prepare for the workshops, will be actively disseminated to a wider, but still targeted, community of interest, including policy makers, business decision makers, researchers and other actors already actively contributing to this field today.
4 Conclusions
The construction of the Information Society and the fast-growing development of e-commerce are making our society and economy more and more dependent on computer-based information systems, electronic communication networks and information infrastructures, which are becoming pervasive as well as an essential part of EU citizens' lives. Achieving the dependable behaviour of the Information Society means protecting our industry's wealth and investments in IT as well as in other intangible assets. Furthermore, achieving the dependable behaviour of the infrastructure would mean ensuring flexible and co-operative management of large-scale computing and networking resources and providing resources for effective prevention, detection, confinement of and response to disruptions. The dependable behaviour of the information infrastructure depends, however, on the behaviour of a growing number of players, systems and networks, including the users and the user systems. The interdependency among critical infrastructures that are enabled and supported by the information infrastructure cannot easily be mastered by currently available technologies. The dependability approach, which privileges the understanding of the implications of our need to rely on systems and, consequently, the adoption of a risk management approach, appears to be instrumental in fostering a new culture of social and economic responsibility. However, more innovative and multidisciplinary research
on dependability is needed to make the Information Society more robust and resilient to technical vulnerabilities, failures and attacks.
References
1. DEPPY Forum http://deppy.jrc.it/
2. IST web site www.cordis.lu/ist
3. IST in FP6 http://www.cordis.lu/ist/fp6/fp6.htm
4. Cross Programme Action on dependability http://www.cordis.lu/ist/cpt/cpa4.htm
5. eEurope 2002 Action Plan http://europa.eu.int/information_society/eeurope/index_en.htm
6. DDSI web site http://www.ddsi.org/DDSI/index.htm
7. FP6 http://europa.eu.int/comm/research/fp6/index_en.html
8. FP6 Instruments http://europa.eu.int/comm/research/fp6/networks-ip.html
A Rigorous View of Mode Confusion
Jan Bredereke and Axel Lankenau
Universität Bremen, FB 3, P.O. Box 330 440, D-28334 Bremen, Germany
{brederek,alone}@tzi.de, www.tzi.de/{~brederek,~alone}, Fax: +49-421-218-3054
Abstract. Mode confusion is recognised as a significant safety concern, and not only in aviation psychology. The notion is used intuitively in the pertinent literature, but with surprisingly different meanings. We present a rigorous way of modelling the human and the machine in a shared-control system. This enables us to propose a precise definition of "mode" and "mode confusion". In our modelling approach, we extend the commonly used distinction between the machine and the user's mental model of it by explicitly separating these and their safety-relevant abstractions. Furthermore, we show that distinguishing three different interfaces during the design phase reduces the potential for mode confusion. One result is a new classification of mode confusions by cause, leading to a number of design recommendations for shared-control systems which help to avoid mode confusion problems. A further result is a foundation for detecting mode confusion problems by model checking.
1 Introduction and Motivation
Automation surprises are ubiquitous in today's highly engineered world. We are confronted with mode confusions in many everyday situations: when our cordless phone rings while it is located in its cradle, we establish the line by just lifting the handset — and inadvertently cut it when we press the "receiver button" as usual, with the intention of starting to speak. We get annoyed if we once again overwrite some text in the word processor because we had hit the "Ins" key before (and thereby left insert mode!) without noticing. The American Federal Aviation Administration (FAA) considers mode confusion to be a significant safety concern in modern aircraft. So, it is all around — but what exactly is a mode, what defines a mode confusion situation, and how can we detect and avoid automation surprises? As long as we have no rigorous definition, we should regard a mode confusion as one kind of automation surprise. It refers to a situation in which a technical system can behave differently from the user's expectation. Whereas mode confusions in typical human-computer interactions, such as the word processor example mentioned above, are "only" annoying, they become dangerous if we consider safety-critical systems.
Today, many safety-critical systems are so-called embedded shared-control systems. These are interdependently controlled by an automation component and a user. Examples are modern aircraft and automobiles, but also intelligent wheelchairs. We focus on such shared-control systems in this paper and call the entirety of technical components the technical system and the human operator the user. Note that we have to take a black-box stand, i. e. we can only work with the behaviour of the technical system observable at its interfaces: since we want to solve the user's problems, we have to take his or her point of view, which does not allow access to internal information of the system.
As Rushby points out [1], in cognitive science it is generally agreed upon that humans use so-called mental models when they interact with (automated) technical systems. Since there are at least two completely different interpretations of the notion "mental model" in the pertinent literature, it is important to clarify that we refer to the one introduced by Norman [2]: a mental model represents the user's knowledge about a technical system; it consists of a naïve theory of the system's behaviour. According to Rushby [1], an explicit description of a mental model can be derived, e. g., in the form of a state machine representation, from training material, from user interviews, or by user observation.
We briefly recapitulate the pertinent state of the art here. It remains surprisingly unclear what a mode as such is. While some relevant publications give no [3, 4] or only an implicit definition [5, 6] of the notions "mode" and "mode confusion", there are others that present an explicit informal definition [7, 8, 9, 10]. Doherty [11] presents a formal framework for interactive systems and also gives an informal definition of "mode error". Wright and colleagues give explicit but example-driven definitions of the notions "error of omission" and "error of commission" by using CSP to specify user tasks [12]. Interestingly, the way of modelling often seems to be influenced significantly by the tool that is meant to perform the final analysis. Degani and colleagues use State Charts to model separately the technical system and the user's mental model [13]. Then, they build the composition of both models and search for certain states (so-called "illegal" and "blocking" states) which indicate mode confusion potential. Butler et al. use the theorem prover PVS to examine the flight guidance system of a civil aircraft for mode confusion situations [4]. They do not consider the mental model of the pilot as an independent entity in their analysis. Leveson and her group specify the black-box behaviour of the system in the language SpecTRM-RL, which is both well readable by humans and processible by computers [10, 14, 15]. In [10], they give a categorisation of different kinds of modes and a classification of mode confusion situations. Rushby and his colleagues employ the Murφ model-checking tool [16, 5, 3]. Technical system and mental model are coded together as a single set of so-called Murφ rules. Lüttgen and Carreño examine the three state-exploration tools Murφ, SMV, and Spin with respect to their suitability in the search for mode confusion potential [17]. Buth [9] and Lankenau [18] clearly separate the technical system and the user's mental model in their CSP specifications of the well-known MD-88 "kill-the-capture" scenario and of a service-robotics example, respectively.
The support of this clear separation is one reason why Buth's comparison between the tool Murφ and the CSP tool FDR favours the latter [9, pages 209-211]. Almost all publications refer to aviation examples when examining a case study: an MD-88 [19, 7, 10, 16, 9], an Airbus A320 [3, 6], or a Boeing 737 [5]. Rushby proposes a procedure to develop automated systems which pays attention to the mode confusion problem [1]. The main part of his method is the integration and iteration of a model-checking based consistency check and the mental model reduction process introduced by [20, 3]. Hourizi and Johnson [6, 21] generally doubt that avoiding mode confusions alone helps to reduce the number of plane crashes caused by automation surprises. They claim that the underlying problem is not mode confusion but what they call a "knowledge gap", i. e. the user's insufficient perception prevents him or her from tracking the system's mode. As far as we are aware, there is no publication so far that defines "mode" and "mode confusion" rigorously. Therefore, our paper clarifies these notions. Section 2 introduces the domain of our case study, which later serves as a running example. Sections 3 and 4 present a suitable system modelling approach and clarify different world views, which enables us to present rigorous definitions in Sect. 5. Section 6 works out the value of such definitions, which comprises a foundation for the automated detection of mode confusion problems and a classification of mode confusion problems by cause, which in turn leads to recommendations for avoiding mode confusion problems. A summary and ideas for future work conclude the paper.
2 Case Study Wheelchair
Our case study has a service robotics background: we examine the cooperative obstacle avoidance behaviour of our wheelchair robot. The Bremen Autonomous Wheelchair “Rolland” is a shared-control service robot which realizes intelligent and safe transport for handicapped and elderly people [22, 23]. The vehicle is a commercial off-the-shelf power wheelchair. It has been equipped with a control PC, a ring of sonar sensors, and a laser range finder. Rolland is jointly controlled by its user and by the software. Depending on the active operation mode, either the user or the automation is in charge of driving the wheelchair.
3 Precise Modelling
Before we can discuss mode confusion problems, some remarks on modelling a technical system in general are necessary. The user of a running technical system has a strict black-box view. Since we want to solve the user’s problems, we must take the same black-box point of view. This statement appears to be obvious, but has far-reaching consequences for the notion of mode. The user has no way of observing the current internal state, or mode, of the technical system.
Nevertheless, it is possible to describe a technical system in an entirely black-box view. Our software engineering approach is inspired by the work of Parnas [24, 25], even though we start out with events instead of variables, as he does. We can observe (only) the environment of the technical system. When something relevant happens, we call this an event. When the technical system is the control unit of an automated wheelchair, then an event may be that the user pushes the joystick forward, that the wheelchair starts to move, or an event may as well be that the distance between the wheelchair and a wall ahead becomes smaller than the threshold of 70 cm.
The technical system has been constructed according to some requirements document REQ. It contains the requirements on the technical system, which we call SYSREQ, and those on the system's environment NAT. However, if we deal with an existing system for which no (more) requirements specification is available, it might be necessary to "reverse engineer" it from the implementation. For the wheelchair, SYSREQ should state that the event of the wheelchair starting to move follows the event that the joystick is pushed forward. SYSREQ should also state what happens after the event of approaching a wall. Of course, the wheelchair should not crash into a wall in front of it, even if the joystick is pushed forward. We can describe this entirely in terms of observable events, by referring to the history of events until the current point of time. If the wheelchair has approached a wall, and if it has not yet moved back, it must not move forward further. For this description, no reference to an internal state is necessary.
In order to implement the requirements SYSREQ on a technical system, one usually needs several assumptions about the environment of the technical system to be constructed. For example, physical laws guarantee that a wheelchair will not crash into a wall ahead unless it has approached it closer than 70 cm and has continued to move for a certain amount of time. We call the documentation of assumptions about the environment NAT. NAT must be true even before the technical system is constructed. It is the implementer's task to ensure that SYSREQ is true provided that NAT holds.
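To make this history-based style of requirement concrete, here is a minimal illustrative sketch in machine-readable CSP (CSPM, the input notation of the FDR tool mentioned later); the event names and the exact process structure are our own assumptions for illustration, not the authors' actual requirements document:

    -- Hypothetical environment events of the wheelchair (illustrative only)
    channel joystick_forward, move_forward, move_back, approach_wall

    -- SYSREQ over observable events: once the wheelchair has approached a wall
    -- and has not yet moved back, it must not move forward any further.
    SYSREQ  = joystick_forward -> move_forward -> SYSREQ
              [] approach_wall -> BLOCKED
    BLOCKED = joystick_forward -> BLOCKED   -- joystick pushes are still observed,
                                            -- but no move_forward event may follow
              [] move_back -> SYSREQ

NAT would be written in the same observable-event style, e.g. as the assumption that a crash can only follow a sufficiently long period of forward motion after approach_wall; the sketch leaves this out.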
4 Clarification of World Views
4.1 Where are the Boundaries?
The control software of a technical system cannot observe physical events directly. Instead, the technical system is designed such that sensor devices generate internal input events for the software, and the software's output events are translated by actuator devices into physical events, again. Neither sensors nor actuators are perfectly precise and fast; therefore we have a distinct set of software events. Accordingly, the requirements on the technical system and the requirements on the software cannot be the same. For example, the wheelchair's ultrasonic distance sensors for the different directions can be activated in turns only, resulting in a noticeable delay for detecting obstacles.
Fig. 1. System requirements SYSREQ vs. software requirements SOF (environment events enter through IN, are processed by SOF as software events, and leave through OUT)
We call the software requirements SOF, the requirements on the input sensors IN, and the requirements on the output actuators OUT. Figure 1 shows the relationships among them. An important consequence is that the software SOF must compensate for any imperfectness of the sensors and actuators so that the requirements SYSREQ are satisfied. When defining SOF, we definitely need to take care whether we refer to the boundary of SOF or of SYSREQ. This becomes even more important when we consider the user who cooperates with the technical system. He or she observes the same variables of the environment as the technical system does. But the user observes them through his/her own set of senses SENS. SENS has its own imperfections. For example, a human cannot see behind his/her back. Our automated wheelchair will perceive a wall behind it when moving backwards, but the user will probably not. Therefore, we need to distinguish what actually happens in reality (specified in REQ, i. e. the composition of SYSREQ and NAT) from the user's mental model MMOD of it. When making a statement about MMOD, we definitely need to take care whether we refer to the boundary of MMOD or of REQ. When we define the interfaces precisely, it turns out that there is an obvious potential for a de-synchronisation of the software's perception of reality with the user's perception of it. And when we analyse this phenomenon, it is important to distinguish between the three different interfaces: environment to machine (or to user), software to input/output devices, and mental to senses. As a result, we are able to establish a precise relation between reality as it is perceived by the user and his/her mental model of it. This relation will be the base of our definition of mode confusion.
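As a rough, non-authoritative illustration of Fig. 1 (the process bodies and all event names below are invented for the sketch), the technical system can be viewed in CSPM as the sensors, the software and the actuators composed in parallel, with the internal software events hidden from the environment:

    channel env_in, env_out    -- environment events, observable at the boundary of SYSREQ
    channel sw_in, sw_out      -- software events, observable only at the boundary of SOF

    IN  = env_in -> sw_in -> IN      -- sensor device: environment input to software input
    SOF = sw_in -> sw_out -> SOF     -- control software: reacts to software input
    OUT = sw_out -> env_out -> OUT   -- actuator device: software output to environment output

    -- The technical system as seen from the outside; software events are hidden
    SYSTEM = ((IN [| {| sw_in |} |] SOF) [| {| sw_out |} |] OUT) \ {| sw_in, sw_out |}

A requirement like SYSREQ is then stated over env_in and env_out only, and it is SOF's job to make SYSTEM satisfy it despite the imperfections of the devices behind IN and OUT.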
4.2 Brief Introduction to Refinement
As will be explained later, we use a kind of specification/implementation relation in the following sections. Such relations can be modelled rigorously by the concept of refinement. There exist a number of formalisms to express refinement relations. We use CSP [26] as the specification language and the refinement semantics proposed by Roscoe [27]. One reason is that there is good tool support for performing automated refinement checks of CSP specifications with the tool FDR [27]. This section shall clarify the terminology for readers who are not familiar with the concepts. In CSP, the behaviour of a process P is described by the set traces(P) of the event sequences it can perform. Since we must pay attention to what can be
done as well as to what cannot be done, the traces model is not sufficient in our domain. We have to enhance it by so-called failures.
Definition 1 (Failure). A failure of a process P is a pair (s, X) of a trace s (s ∈ traces(P)) and a so-called refusal set X of events that may be blocked by P after the execution of s.
If an output event o is in the refusal set X of P, and if there also exists a continuation of the trace s which performs o, then process P may decide internally and non-deterministically whether o will be performed or not.
Definition 2 (Failure Refinement). P refines S in the failures model, written S ⊑F P, iff traces(P) ⊆ traces(S) and also failures(P) ⊆ failures(S).
This means that P can neither accept an event nor refuse one unless S does; S can do at least every trace which P can do, and additionally P will refuse no more than S does. Failure refinement makes it possible to distinguish between external and internal choice in processes, i.e. whether there is non-determinism. As this aspect is relevant for our application area, we use failure refinement as the appropriate kind of refinement relation.
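The difference between external and internal choice, and the corresponding failures check, can be illustrated with the following toy processes (our own illustration; FDR checks assertions of the form SPEC [F= IMPL):

    channel a, b

    SPEC  = a -> STOP |~| b -> STOP   -- internal choice: SPEC may refuse a or may refuse b
    IMPL1 = a -> STOP                 -- one deterministic resolution of that choice
    IMPL2 = a -> STOP [] b -> STOP    -- external choice: initially refuses neither a nor b

    assert SPEC [F= IMPL1   -- holds: IMPL1 neither accepts nor refuses more than SPEC allows
    assert SPEC [F= IMPL2   -- holds: resolving the non-determinism is a valid refinement
    assert IMPL2 [F= SPEC   -- fails: SPEC may refuse a initially, which IMPL2 never does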
4.3 Relation between Reality and the Mental Model
Our approach is based on the motto "The user must not be surprised" as an important design goal for shared-control systems. This means that the perceived reality must not exhibit any behaviour which cannot occur according to the mental model. Additionally, the user must not be surprised because something expected does not happen. When the mental model prescribes some behaviour as necessary, reality must not refuse to perform it. These two aspects are described by the notion of failure refinement, as defined in the previous section. There cannot be any direct refinement relation between a description of reality and the mental model, since they are defined over different sets of events (i.e., environment/mental). We understand the user's senses SENS as a relation from environment events to mental events. SENS(REQ) is the user's perception of what happens in reality. The user is not surprised if SENS(REQ) is a failure refinement of MMOD. As a consequence, the user's perception of reality must be in an implementation/specification relationship to the mental model. Please note that an equality relation always implies a failure refinement relation, while the converse is not the case. If the user does not know how the system will behave with regard to some aspect, but knows that he/she does not know, then he/she will experience no surprise nevertheless. Such indifference can be expressed mathematically by a non-deterministic internal choice in the mental model.
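A small invented sketch (hypothetical event names, not the authors' models) shows what such indifference and the "no surprise" condition look like in CSPM: SENS is rendered as a renaming of environment events to mental events, and the check is a single failures-refinement assertion:

    channel approach_wall, stop                  -- environment events
    channel m_approach_wall, m_stop, m_creep     -- the user's mental events

    REQ  = approach_wall -> stop -> REQ          -- reality: the chair always stops at a wall

    -- The user is indifferent about whether the chair stops or merely creeps:
    MMOD = m_approach_wall -> (m_stop -> MMOD |~| m_creep -> MMOD)

    -- SENS as a faithful (bijective) renaming of environment events to mental events
    SENS_REQ = REQ [[ approach_wall <- m_approach_wall, stop <- m_stop ]]

    assert MMOD [F= SENS_REQ   -- holds: perceived reality never surprises this mental model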
4.4 Abstractions
When the user concentrates on safety, he/she performs an on-the-fly simplification of his/her mental model MMOD towards the safety-relevant part MMODSAFE.
Fig. 2. Relationships between the different refinement relations (the abstractions AR and AM relate the detailed black-box descriptions SENS(REQ) and MMOD to their safety-relevant abstractions SENSSAFE(REQSAFE) and MMODSAFE; failure refinement must hold between reality and the mental model on both levels)
This helps him/her to analyse the current problem with the limited mental capacity. Analogously, we perform a simplification of the requirements document REQ to the safety-relevant part of it, REQSAFE. REQSAFE can be either an explicit, separate chapter of REQ, or we can express it implicitly by specifying an abstraction function, i. e., by describing which aspects of REQ are safety-relevant. We abstract REQ for three reasons: MMODSAFE is defined over a set of abstracted mental events, and it can be compared to another description only if it is defined over the same abstracted set; we would like to establish the correctness of the safety-relevant part without having to investigate the correctness of everything; and our model-checking tool support demands that the descriptions are restricted to certain complexity limits. We express the abstraction functions mathematically in CSP by functions over processes. Mostly, such an abstraction function maps an entire set of events onto a single abstracted event. For example, it is irrelevant whether the wheelchair's speed is 81.5 or 82 cm/s when approaching an obstacle – all such events with a speed parameter greater than 80 cm/s will be abstracted to a single event with the speed parameter fast. Other transformations are hiding (or concealment [26]) and renaming. But the formalism also allows for arbitrary transformations of behaviours; a simple example being a certain event sequence pattern mapped onto a new abstract event. We use the abstraction functions AR for REQ and AM for MMOD, respectively. The relation SENS from the environment events to the mental events must be abstracted in an analogous way. It should have become clear by now that SENS needs to be faithful, i. e., a bijection which does no more than some renaming of events. If SENS is "lossy", we are already bound to experience mode confusion problems. For our practical work, we therefore first make sure that SENS is such a bijection, and then merge it into REQ, even before we perform the actual abstraction step which enables the use of the model-checking tool. Figure 2 shows the relationships among the different descriptions. In order that the user is not surprised with respect to safety, there must be a failure refinement relation on the abstract level between SENSSAFE(REQSAFE) and MMODSAFE, too.
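A minimal invented example of such an abstraction function over processes, using the 80 cm/s speed threshold from the text above (the event names, value range and the detailed process are our own assumptions, unrelated to the earlier sketches):

    -- Hypothetical detailed events: the wheelchair reports its speed in cm/s
    channel speed : {0..200}
    channel slow, fast                       -- abstracted, safety-relevant events

    -- A toy detailed description: the chair repeatedly reports some speed
    REQ = |~| v : {0..200} @ speed.v -> REQ

    -- The abstraction function A_R as a function over processes:
    -- every speed above 80 cm/s becomes 'fast', every other speed becomes 'slow'
    A_R(P) = P [[ speed.v <- fast | v <- {81..200} ]]
               [[ speed.v <- slow | v <- {0..80} ]]

    REQ_SAFE = A_R(REQ)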
5 A Rigorous View of Mode and of Mode Confusion
We will now present our rigorous definitions of mode and mode confusion. We will then motivate and discuss our choices. In the following, let REQSAFE be a safety-relevant black-box requirements specification, let SENSSAFE be a relation between environment events and mental events representing the user's senses, and let MMODSAFE be a safety-relevant mental model of the behaviour of REQSAFE.
Definition 3 (Potential future behaviour). A potential future behaviour is a set of failures.
Definition 4 (Mode). A mode of SENSSAFE(REQSAFE) is a potential future behaviour. And, a mode of MMODSAFE is a potential future behaviour.
Definition 5 (Mode confusion). A mode confusion between SENSSAFE(REQSAFE) and MMODSAFE occurs if and only if SENSSAFE(REQSAFE) is not a failure refinement of MMODSAFE, i.e., iff MMODSAFE ⋢F SENSSAFE(REQSAFE).
After the technical system T has moved through a history of events, it is in some "state". Since we have to take a black-box view, we can distinguish two "states" only if T may behave differently in the future. We describe the potential future behaviour by a set of failures, such that we state both what T can do and what T can refuse to do. This definition of "state" is rather different from the intuition in a white-box view, but necessarily so. Our next step to the notion of "mode" then is more conventional. We use the notion of "state", if at all, in the context of the non-abstracted descriptions. Two states of a wheelchair are different, for example, if the steerable wheels will be commanded to a steering angle of 30 degrees or 35 degrees, respectively, within the next second. These states are equivalent with regard to obstacle avoidance. Therefore, both states are mapped to the same abstracted behaviour by the safety-relevance abstraction function. We call such a distinct safety-relevant potential future behaviour a mode. Usually, many states of the non-abstracted description are mapped together to such a mode. On a formal level, both a state and a mode are a potential future behaviour. The difference between both is that there is some important safety-relevant distinction between any two modes, which need not be the case for two states. We now can go on to mode confusions. The perceived reality and the user's mental model of it are in different modes at a certain point of time if and only if the perceived reality and the mental model might behave differently in the future, with respect to some safety-relevant aspect. Only if no such situation can arise in any possible execution trace is there no mode confusion. This means that the user's safety-relevant mental model must be a specification of the perceived reality. Expressed the other way around, the perceived reality must be an implementation of the user's safety-relevant mental model. This specification/implementation relationship can be described rigorously by failure refinement. If we have precise descriptions of both safety-relevant behaviours, we can
rigorously check whether a mode confusion occurs. Since model-checking tool support exists, this check can even be automated. Please note that we referred to reality as described by REQ, which includes not only the system's requirements SYSREQ but also the environment requirements NAT. This restricts the behaviour of SYSREQ by NAT: behaviour forbidden by physical laws is not relevant for mode confusions. Our mathematical description allows for some interesting analysis of consequences. It is known in the literature that implicit mode changes may be a cause of mode confusion. In our description, an implicit mode change appears as an "internal choice" of the system, also known as a (spontaneous) "τ transition". The refinement relation dictates that any such internal choice must appear in the specification, too, which is the user's mental model in our case. This is possible: if the user expects that the system chooses internally between different behaviours, he/she will not be surprised, at least in principle. The problem is that the user must keep in mind all potential behaviours resulting from such a choice. If there is no clarifying event for a long time, the space of potential behaviours may grow very large and become impractical to handle.
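For completeness, here is an invented counter-example in the same toy style: an implicit mode change in reality that the mental model does not account for makes the assertion fail, and the counterexample FDR reports is exactly the surprising behaviour:

    channel m_approach, m_stop, m_creep     -- mental events (hypothetical)

    -- Reality may switch internally to a mode in which the chair creeps instead of stopping
    REALITY = m_approach -> (m_stop -> REALITY |~| m_creep -> REALITY)

    -- The mental model expects the chair to stop every time
    MMOD2   = m_approach -> m_stop -> MMOD2

    assert MMOD2 [F= REALITY   -- fails: the trace <m_approach, m_creep> is the mode confusion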
6 Results
Our definitions form a foundation for detecting mode confusions by model-checking. This has opened new possibilities for a comprehensive analysis of mode confusion problems, which we currently explore in practice. Our clarification of world views in Sect. 4 enables us to classify mode confusion problems into three classes:
1. Mode confusion problems which arise from an incorrect observation of the technical system or its environment. Formally, this is the case when SENS(REQ) is not a failure refinement of MMOD, but where SENS(REQ) would be a failure refinement of MMOD provided the user's senses SENS were a perfect mapping from environment events to mental events. The imperfections of SENS may have physical or psychological reasons: either the sense organs are not perfect, for example eyes which cannot see behind the back; or an event is sensed, but is not recognised consciously, for example because the user is distracted, or because the user currently is flooded with too many events. ("Heard, but not listened to.") Please note that our notion of mode confusion problem also comprises the "knowledge gap" discussed in the research critique by Hourizi and Johnson [6, 21] (see Sect. 1). In our work, it appears as a mode confusion problem arising from an incorrect observation due to psychological reasons.
2. Mode confusion problems which arise from incorrect knowledge of the human about the technical system or its environment. Formally, this is the case when SENS(REQ) is not a failure refinement of MMOD, and when a perfect SENS would make no difference.
3. Mode confusion problems which arise from the incorrect abstraction of the user's knowledge to the safety-relevant aspects of it. Formally, this means that SENS(REQ) is a failure refinement of MMOD, but SENSSAFE(REQSAFE) is not a failure refinement of MMODSAFE. Since the safety-relevant requirements abstraction function AR is correct by definition, the user's mental safety-relevance abstraction function AM must be wrong in this case (compare Figure 2 above).
In contrast to previous classifications of mode confusion problems, this classification is by cause and not phenomenological, as, e.g., the one by Leveson [10]. The above causes of mode confusion problems lead directly to some recommendations for avoiding them. In order to avoid an incorrect observation of the technical system and its environment, we must check whether the user can physically observe all safety-relevant environment events, and we must check whether the user's senses are sufficiently precise to ensure an accurate translation of these environment events to mental events. If this is not the case, then we must change the system requirements. We must add an environment event controlled by the machine and observed by the user which indicates the corresponding software input event. This measure has been recommended by others too, of course, but our rigorous view now indicates more clearly when it must be applied. Avoiding an incorrect observation also requires checking whether psychology ensures that observed safety-relevant environment events become conscious. Our approach points out clearly the necessity of this check. The check itself and any measures belong to the field of psychology, in which we are not experts. Establishing correct knowledge of the user about the technical system and its environment can be achieved by documenting their requirements rigorously. This enables us to conceive user training material, such as a manual, which is complete with respect to functionality. This training material must not only be complete but also learnable. Complexity is an important learning obstacle. Therefore, the requirements of the technical system should allow as few non-deterministic internal choices as possible, since tracking all alternative outcomes is complex. This generalises and justifies the recommendation by others to eliminate "implicit mode changes" [10, 8]. Internal non-determinism may arise not only from the software, but also from the machine's sensor devices. If they are imprecise, the user cannot predict the software input events. We can eliminate both kinds of non-deterministic internal choice by the same measure as used against an incorrect physical observation: we add an environment event controlled by the machine which indicates the software's choice or the input device's choice, respectively. Ensuring a correct mental abstraction process is mainly a psychological question and mostly beyond the scope of this paper. Our work leads to the basic recommendation to either write an explicit, rigorous safety-relevance requirements document or to indicate the safety-relevant aspects clearly in the general requirements document. The latter is equivalent to making explicit the safety-relevance
abstraction function for the machine, AR. Either measure makes it easier to conceive training material which helps the user to concentrate on safety-relevant aspects.
7 Summary and Future Work
We present a rigorous way of modelling the user and the machine in a shared-control system. This enables us to propose precise definitions of "mode" and "mode confusion". In our modelling approach, we extend the commonly used distinction between the machine and the user's mental model of it by explicitly separating these and their safety-relevant abstractions. Furthermore, we show that distinguishing three different interfaces during the design phase reduces the potential for mode confusion. Our proposition that the user must not be surprised leads directly to the conclusion that the relationship between the mental model and the machine must be one of specification to implementation, in the mathematical sense of refinement. Mode confusions can occur if and only if this relation is not satisfied. A result of this insight is a new classification of mode confusions by cause, leading to a number of design recommendations for shared-control systems which help to avoid mode confusion problems. Since tools to model-check refinement relations exist, our approach supports the automated detection of remaining mode confusion problems. For illustration, we presented a case study on a wheelchair robot as a running example. A detailed version of the case study is discussed in [28]. Our work lends itself to extension in several directions. In our case study, we are currently working on exploiting the new potential for detecting mode confusion problems by model-checking. Furthermore, the recommendations for avoiding mode confusion problems can be tried out. Experts in psychology will be able to implement the non-technical ones of our rules by concrete measures. Finally, we see still more application domains beyond aviation and robotics.
References
[1] Rushby, J.: Modeling the human in human factors. In: Proc. of SAFECOMP 2001. Volume 2187 of LNCS, Springer (2001) 86–91
[2] Norman, D.: Some observations on mental models. In Gentner, D., Stevens, A., eds.: Mental Models. Lawrence Erlbaum Associates Inc., Hillsdale, NJ, USA (1983)
[3] Crow, J., Javaux, D., Rushby, J.: Models and mechanized methods that integrate human factors into automation design. In Abbott, K., Speyer, J. J., Boy, G., eds.: Proc. of the Int'l Conf. on Human-Computer Interaction in Aeronautics: HCI-Aero 2000, Toulouse, France (2000)
[4] Butler, R., Miller, S., Pott, J., Carreño, V.: A formal methods approach to the analysis of mode confusion. In: Proc. of the 17th Digital Avionics Systems Conf., Bellevue, Washington, USA (1998)
[5] Rushby, J.: Analyzing cockpit interfaces using formal methods. In Bowman, H., ed.: Proc. of FM-Elsewhere. Volume 43 of Electronic Notes in Theoretical Computer Science, Pisa, Italy, Elsevier (2000)
[6] Hourizi, R., Johnson, P.: Beyond mode error: Supporting strategic knowledge structures to enhance cockpit safety. In: Proc. of IHM-HCI 2001, Lille, France, Springer (2001)
[7] Sarter, N., Woods, D.: How in the world did we ever get into that mode? Mode error and awareness in supervisory control. Human Factors 37 (1995) 5–19
[8] Degani, A., Shafto, M., Kirlik, A.: Modes in human-machine systems: Constructs, representation and classification. Int'l Journal of Aviation Psychology 9 (1999) 125–138
[9] Buth, B.: Formal and Semi-Formal Methods for the Analysis of Industrial Control Systems – Habilitation Thesis. Univ. Bremen (2001)
[10] Leveson, N., Pinnel, L., Sandys, S., Koga, S., Reese, J.: Analyzing software specifications for mode confusion potential. In: Workshop on Human Error and System Development, Glasgow, UK (1997)
[11] Doherty, G.: A Pragmatic Approach to the Formal Specification of Interactive Systems. PhD thesis, University of York, Dept. of Computer Science (1998)
[12] Wright, P., Fields, B., Harrison, M.: Deriving human-error tolerance requirements from tasks. In: Proc. of the 1st Int'l Conf. on Requirements Engineering, Colorado, USA, IEEE (1994) 135–142
[13] Degani, A., Heymann, M.: Pilot-autopilot interaction: A formal perspective. In Abbott, K., Speyer, J. J., Boy, G., eds.: Proc. of the Int'l Conf. on Human-Computer Interaction in Aeronautics: HCI-Aero 2000, Toulouse, France (2000) 157–168
[14] Rodriguez, M., Zimmermann, M., Katahira, M., de Villepin, M., Ingram, B., Leveson, N.: Identifying mode confusion potential in software design. In: Proc. of the Int'l Conf. on Digital Aviation Systems, Philadelphia, PA, USA (2000)
[15] Zimmermann, M., Rodriguez, M., Ingram, B., Katahira, M., de Villepin, M., Leveson, N.: Making formal methods practical. In: Proc. of the Int'l Conf. on Digital Aviation Systems, Philadelphia, PA, USA (2000)
[16] Rushby, J., Crow, J., Palmer, E.: An automated method to detect potential mode confusions. In: Proc. of the 18th AIAA/IEEE Digital Avionics Systems Conf., St. Louis, Montana, USA (1999)
[17] Lüttgen, G., Carreño, V.: Analyzing mode confusion via model checking. In Dams, D., Gerth, R., Leue, S., Massink, M., eds.: SPIN '99. Volume 1680 of LNCS, Berlin Heidelberg, Springer (1999) 120–135
[18] Lankenau, A.: Avoiding mode confusion in service-robots. In Mokhtari, M., ed.: Integration of Assistive Technology in the Information Age, Proc. of the 7th Int'l Conf. on Rehabilitation Robotics, Evry, France, IOS Press (2001) 162–167
[19] Palmer, E.: "Oops, it didn't arm." – A case study of two automation surprises. In: Proc. of the 8th Int'l Symp. on Aviation Psychology (1995)
[20] Javaux, D.: Explaining Sarter & Woods' classical results. The cognitive complexity of pilot-autopilot interaction on the Boeing 737-EFIS. In: Proc. of HESSD '98 (1998) 62–77
[21] Hourizi, R., Johnson, P.: Unmasking mode errors: A new application of task knowledge principles to the knowledge gaps in cockpit design. In: Proc. of INTERACT 2001 – The 8th IFIP Conf. on Human Computer Interaction, Tokyo, Japan (2001)
[22] Röfer, T., Lankenau, A.: Architecture and applications of the Bremen Autonomous Wheelchair. Information Sciences 126 (2000) 1–20
[23] Lankenau, A., Röfer, T.: The Bremen Autonomous Wheelchair – a versatile and safe mobility assistant. IEEE Robotics and Automation Magazine, "Reinventing the Wheelchair" 7 (2001) 29–37
[24] Parnas, D. L., Madey, J.: Functional documents for computer systems. Science of Computer Programming 25 (1995) 41–61
[25] van Schouwen, A. J., Parnas, D. L., Madey, J.: Documentation of requirements for computer systems. In: IEEE Int'l. Symp. on Requirements Engineering – RE'93, San Diego, California, USA, IEEE Comp. Soc. Press (1993) 198–207
[26] Hoare, C.: Communicating Sequential Processes. Prentice-Hall, Englewood Cliffs, New Jersey, USA (1985)
[27] Roscoe, A. W.: The Theory and Practice of Concurrency. Prentice-Hall (1997)
[28] Lankenau, A.: Bremen Autonomous Wheelchair "Rolland": Self-Localization and Shared-Control – Challenges in Mobile Service Robotics. PhD thesis, Universität Bremen, Dept. of Mathematics and Computer Science (2002) To appear
Dependability as Ordinary Action
Alexander Voß¹, Roger Slack¹, Rob Procter¹, Robin Williams², Mark Hartswood¹, and Mark Rouncefield³
¹ School of Informatics, University of Edinburgh, UK, {av,rslack,rnp,mjh}@cogsci.ed.ac.uk
² Research Centre for Social Sciences, University of Edinburgh, UK, [email protected]
³ Department of Computing, University of Lancaster, UK, [email protected]
Abstract. This paper presents an ethnomethodologically informed study of the ways that more-or-less dependable systems are part of the everyday lifeworld of society members. Through case study material we explicate how dependability is a practical achievement and how it is constituted as a common sense notion. We show how attending to the logical grammar of dependability can clarify some issues and potential conceptual confusions around the term that occur between lay and 'professional' uses. The paper ends with a call to consider dependability in its everyday ordinary language context as well as more 'professional' uses of this term.
1 Introduction
In this paper we are concerned with the ways in which people experience dependability. We are interested in explicating in-vivo ethnographic accounts of living with systems that are more or less reliable and the practices that this being 'more or less dependable' occasions. The situated practical actions of living with systems (e.g. work-arounds and so on) are important to us in that they show how society members¹ experience dependability as a practical matter. Drawing on the work of the later Wittgenstein [13], we seek to explicate what dependability means in an ordinary language sense and to provide an analysis of the ways in which systems come to be seen as dependable, and the work involved in making them more or less dependable. Such an analysis of ordinary language uses of terms is not intended as a remedy or corrective to 'professional' uses, but to show how we might capitalise on lay uses of the term and thereby secure in part a role for ethnographic analysis of what, following Livingston [7], we call the 'lived world' of working with more or less dependable systems. We draw on a case study of practical IT use and development [11] that illustrates how dependability is realised in and as part of people's everyday ordinary activities.
¹ This points to the skills persons have, what they know and do competently in a particular setting. In this usage we also stress mundane, banal competence as opposed to professionalised conduct.
2 The Case Study
The case study organisation, EngineCo, produces mass-customised diesel engines from 11 to 190 kW. Production in its plant was designed to work along a strict production orthodoxy and large parts of it are automated. Since the plant was built in the early 1990s, significant changes have been made to keep up with changing customer demands and to keep the plant operational in a difficult economic environment. The organisation makes heavy use of a wide range of information technologies and, to a large extent, its operations depend on complex ensembles of these technologies.
An ethnographic study of the working practices of control room workers has been conducted over the course of the last two years [10] as a predicate for participatory design activities. The ethnographic method is dedicated to observing in detail everyday working practices and seeks to explicate the numerous, situated ways in which those practices are actually achieved [4]. Interviews with staff were recorded, and notes made of activities observed and artifacts employed. The data also includes copious notes and transcriptions of talk of 'members' (i.e. regular participants in the work setting) as they went about their everyday work. Ethnography is attentive to the ways in which work actually 'gets done', the recognition of the tacit skills and cooperative activities through which work is accomplished as an everyday, practical activity, and it aims to make these processes and practices 'visible'.
As noted above, the production environment at EngineCo is shaped according to a particular just-in-time (JIT) production orthodoxy. Material is delivered to an external logistics provider that operates a high-shelf storage facility near the plant on EngineCo's behalf. Upon EngineCo's order, the logistics provider delivers parts to the plant. Consequently, the plant itself was not designed to store large numbers of parts, containing buffer spaces for only four hours of production. The layout of production is basically linear, with an engine picking up its component parts as it moves from one side of the plant to the other. The production of engines is divided into two main steps: the basic engine is produced on an assembly line while customer-specific configuration is done in stationary assembly workspaces. Central to production is the Assembly Control Host, which controls all processes within the plant, interacting with local systems in the various functional units of the plant (e.g., assembly lines) as well as with the company's ERP system (SAP R/3). The Assembly Control Host is custom-built rather than being part of the ERP system. It has been developed and is now operated and maintained by an external IT service provider which has personnel located in the plant.
A basic precondition for production to work along the lines of the JIT regime is that all parts are available in time for production. This notion of buildability is the key concept in the production management orthodoxy at EngineCo. Located within the plant, an assembly planning department is responsible for the buildability of engines, assuring that all component parts as well as the various pieces of information needed (such as workers' instructions) are available before production starts. They are also responsible for scheduling production orders in time to meet the agreed delivery dates. Assembly planners create a schedule for
production, taking into consideration their knowledge about the current status of the plant, upcoming events, and the requirements of control room workers.
Doing Dependability: Normal Natural Troubles
Instances of undependability in this setting are quite frequent but are not normally catastrophic. Rather, they are ordinary, mundane events that occasion situated practical (as opposed to legal) inquiry and repair. This is in contrast to much of the extant literature, which has focused on dependability issues as fatal issues, e.g. studies of such cases as the London Ambulance Service [1] or Therac-25 [6]. The study points to some of the worldly contingencies that control room workers routinely deal with as a part of their planning and scheduling work. More precisely we might say that all plans are contingent on what, following Suchman [9], we call 'situated actions'. Due to problems with the availability of certain parts, especially crankcases, and because of ever-increasing customer demands, the notion of buildability was renegotiated [10] in order not to let the plant fall idle. Today, there are 'green', 'orange', and 'red' engines in the plant that are, respectively: strictly buildable, waiting for a part known to be on its way, or waiting for something that is not available and doesn't have a delivery date. Control room workers effectively share responsibility with Assembly Planning for ensuring that engines are buildable, as is illustrated by the following extracts from the control room shiftbook:
As soon as crankcases for 4-cylinders are available, schedule order number 56678651 (very urgent for Company X).
Engines are red even when only loose material is missing.
The first example shows how control room workers effectively assign material to orders and how their decisions may be influenced by various contingencies. Choosing the order in which to schedule engines is a situated accomplishment rather than a straightforward priority-based decision wherein the importance of the engine dictates its order. Control room workers need to attend to the way scheduling an engine might influence the 'flow' of other engines through the plant and take into consideration the workload a particular type of engine places on workers on the shop floor, i.e. they have to attend to the 'working division of labour' [8]. The second example refers to a problem with the IT systems which does not allow them to start production of engines which are missing loose material (e.g., manuals). Clearly, while a missing crankcase effectively prevents production of the engine, loose material is not needed until the engine is actually shipped to the customer (and perhaps not even then in very urgent cases). By redefining details of the working division of labour [8], EngineCo has effectively addressed a situation that was impossible to predict during the original planning of the plant. This is not to say that the notion of buildability has ceased to exist. Rather, the general notion as originally inscribed in working practices has, by appropriation, been localised to take into consideration the
'worldly contingencies' – situations which arise in and as a part of the everyday practical work of the plant and its members and which are not, for example, involved with setting up a new system or introducing new machinery or practices – of production in EngineCo's plant. Where, previously, buildability was a verifiable property of an engine in relationship to the inventory, now buildability of 'orange' and 'red' engines is an informed prediction based on members' knowledge about various kinds of socio-material circumstances. In our research we have found a series of expectable, 'normal' or 'ordinary' troubles whose solution is readily available to members in, and as a part of, their working practices. That is, such problems do not normally occasion recourse to anything other than the 'usual solutions'. Usual solutions invoke what we call horizons of tractability. By this we mean that a problem of the usual kind contains within it the candidate (used-before-and-seen-to-work) solution to that problem. These problems and their solutions are normal and natural and putatively soluble in, and as a part of, everyday work. From the shiftbook:
SMR [suspended monorail] trouble 14:15 to 16:30, engines not registered into SMR, took 25 engines off the line using emergency organisation.
Info for Peter: part no. 04767534, box was empty upon delivery, so I booked 64 parts out of the inventory.
The emergency organisation involved picking up the engines by forklift truck and moving them to a location where they can be picked up by the autonomous carrier system. A number of locations have been made available for this purpose where forklift truck drivers can access the Assembly Control Host to update the location information for the engine they just reintroduced into the system. This is one of many examples where non-automated activity leads to a temporary discrepancy between the representation and the represented, which has to be compensated for. The second example illustrates the same point. Updating the inventory in response to various kinds of events is a regular activity in the control room, and the fact that control room workers have acquired the authority to effect such transactions is witness to the normality of this kind of problem compensation activity. Workers are also able to assess the potential impacts of seen-before problem situations and they take measures to avoid them. From the shiftbook:
Carrier control system broken down 10:45–11:05 resulting in delayed transports, peak number of transports in the system = 110
If in the carrier control system you can't switch from the list of transport orders to the visualisation, don't reboot the PC if the number of transport orders is more than about 70.
In the first two lines of the above example, workers report on problems with the system that controls the autonomous carriers that supply material to the workstations in the plant. The recording of a breakdown in the shiftbook is a way
to make this incident accountable to fellow workers, including those working on another shift. The entry contains a number of statements which, on the surface, seem to be rather uninformative. However, they point to a number of normal, natural troubles that can result from this particular incident such as material being stored in places that are far from the workstations where it’s going to be needed. This will affect the length of transports for some time after the root problem has gone away. The result of this is that since transports take longer, more of them will queue up in the carrier control system. Such ‘ripple effects’ are quite common in this production context. In effect, because of the breakdown of the control system, the ‘transport situation’ might be considered problematic for quite a long time. The next extract can be read in this same kind of context as being part of the process of workers’ making sense of, and responding to the potential undependability of the carrier control system. It has become part of the local practice to avoid certain actions that might result in the breakdown of the carrier system if the ‘transport situation’ is regarded as problematic by control room workers: From a video recording of control room work: Pete: Hey, the carrier control is still not running properly. Let’s not run the optimisation, ok Steve? Steve: We didn’t run it this morning either, because we had 40 transports.
Other problems that are not susceptible to these remedies are also interesting to us in that they demand a solution – members cannot remain indifferent to their presence – but that solution is not a normal or usual one (by definition). In order to keep production running, members have to find and evaluate possible solutions quickly, taking into consideration the present situation, the resources presently available, as well as, ideally, any (possibly long-term and remote) consequences their activities might have: From fieldwork notes: A material storage tower went offline. Material could be moved out of the tower to the line but no messages to the Assembly Control Host were generated when boxes were emptied. Control room workers solved this problem by marking all material in the tower ‘faulty’ which resulted in new material being ordered from the logistics provider. This material was then supplied to the line using forklift trucks. [...] A material requirements planner called to ask why so many parts were suddenly ‘faulty’.
Such situated problem-solving results in work-arounds which are initially specific to the situation at hand but may become part of the repertoire of used-beforeand-seen-to-work candidate solutions. They may be further generalised through processes of social learning [12] as members share them with colleagues or they might get factored into the larger socio-material assemblage that makes up the working environment. This process of problem solution and social learning, however, is critically dependent on members’ orientation to the larger context, their making the problem solution accountable to fellow members and their ability to
judge the consequences. The following fieldwork material illustrates how problem solutions can get factored into ongoing systems development as well as how they can adversely affect the success of the system: From an interview with one of the system developers responsible for the ongoing development of the Assembly Control Host: [Such a complex system] will alway have flaws somewhere but if the user has to work with the system and there’s a problem he will find a work-around himself and the whole system works. [...] The whole works, of course, only if the user really wants to work with it. If he says: “Look, I have to move this box from here to there and it doesn’t work. Crap system! I’ll let a forklift do this, I will not use your bloody system” then all is lost. Then our location information is wrong cause the driver doesn’t always give the correct information; then it will never fly. [... If they come to us and say] that something’s not working, we will say “oh! we’ll quickly have to create a bug fix” and, for the moment, I’ll do this manually without the system, then it works, the system moves on, everything stays correct, the whole plant works and if the next day we can introduce a bug fix the whole thing moves on smoothly.
The plans that members come up with within this horizon of tractability do not usually work one way only – it is our experience that an unexpected problem can become a normal problem susceptible to the usual solutions in, and through, the skillful and planful conduct of members. That is to say, the boundaries between the types of problem are semi-permeable (at least). The order of the potentially problematic universe is not similarly problematic for all members, different members will view different problems in a variety of ways and, through the phenomenon of organisational memory [5], this may lead to the resolution for the problem in, and through, the ability to improvise or to recognize some kind of similarities inherent in this and a previous problem. It is important to note that problem detection and solving is ‘lived work’ [7] and that it is also situated. That is, it is not to be divorced from the plans and procedures through which it is undertaken and the machinery and interactions that both support and realise it. Working practices and the structure of the workplace afford various kinds of activities that allow members to check the proper progress of production and to detect and respond to troubles. These very ‘mundane’ (i.e., everday) activities complement the planned-for, made-explicit and formalised measures such as testing. As in other collaborative work (see e.g., [2]), members are aware of, and orient to, the work of their colleagues. This is supported by the affordances of their socio-material working environment as the following example illustrates: From a video recording of control room work: Oil pipes are missing at the assembly line and Jim calls workers outside the control room to ask if they “have them lying around”. This is overheard by Mark who claims that: “Chris has them”. He subsequently calls Chris to confirm this: “Chris, did you take all the oil pipes that were at the line?” Having
confirmed that Chris has the oil pipes he explains why he thought that Chris had them: “I have seen the boxes standing there”.
Here, the visibility of situations and events within the plant leads to Mark being aware of where the parts in question are. The problem that the location of the parts was not accurately recorded in the information system was immediately compensated by his knowledge of the plant situation. Likewise, Jim’s knowledge of working practices leads him to call specific people who are likely to have the parts. Mark’s observation makes further telephone calls unnecessary2 . Video recording continued: Now that the whereabouts of the oil pipes has been established, the question remains why Chris has them. Mark explains that this was related to conversion work Chris is involved in at the moment. This leads Jim to ask if there are enough parts in stock to deal with the conversion work as well as other production orders. Mark explains how the inventory matches the need.
Having solved the problem of locating the parts, there is the question of how the problem emerged and what further problems may lie ahead. It is not immediately obvious that Chris should have the parts but Mark knows that Chris is involved in some conversion work resulting from a previous problem. Again, awareness of what is happening within the plant is crucial as information about the conversion work is unlikely to be captured in information systems as the work Chris is carrying out is not part of the normal operation of the plant. Rather, it is improvised work done to deal with a previous problem. Jim raises the question whether enough oil pipes are available to deal with the conversion work as well as normal production. Again, it is Mark who can fill in the required information and demonstrate to Jim how the parts in the inventory match the needs. As Jim comments in a similar situation: “What one of us doesn’t know, the other does.” Problem detection and solving is very much a collaborative activity depending on the situated and highly condensed exchange of information between members. By saying that Chris has taken the parts from the line, Mark also points to a set of possible reasons as members are well aware who Chris is, where he works and what his usual activities are. Video recording continued: Since it was first established that parts were missing, production has moved on and there is the question what to do with the engines that are missing oil pipes. Jim and Mark discuss if the material structure of the engine allows them to be assembled in ‘stationary assembly’.
Workers in the plant are aware of the material properties of the engines produced and are thus able to relate the material artefact presented to them to the process of its construction. In the example above, Mark and Jim discuss this relationship 2
Another example of the mutual monitoring that goes on in control rooms and similar facilities is to be found in [3]
in order to find out if the problem of missing oil pipes can be dealt with in stationary assembly, i.e., after the engines have left the assembly line. They have to attend to such issues as the proper order in which parts can be assembled. The knowledge of the material properties of engines also allows members to detect troubles, i.e., the product itself affords checking of its proper progress through production (cf. [2]). From a video recording of control room work: Jack has ‘found’ an engine that, according to the IT system, was delivered to the customer quite a while ago. It is, however, physically present in the engine buffer and Jack calls a colleague in quality control to find out the reason for this. “It’s a 4-cylinder F200, ‘conversion [customer]’ it says here, a very old engine. The engine is missing parts, screws are loose, ... if it’s not ready yet – I wanted to know what’s with this engine – it’s been sitting in the buffer for quite a while.”
Here, the physical appearance is an indication of the engine’s unusual ‘biography’. This, together with the fact that the engine has “been sitting in the buffer for quite a while”, makes the case interesting for Jack. These worldly contingencies are interesting for us since they invite consideration of the ‘seen but unnoticed’ aspects of work – that is, those aspects which pass the members by in, and as a part of, their everyday work but which, when there are problems or questions, are subject to inquiry (e.g., have you tried this or that? Did you do this or that? What were you doing when it happened?). The answer to such questions, especially to the latter, illustrates the seen-but-unnoticed character of work in that, when called upon to do so, members can provide such accounts, although they do not do so in the course of ordinary work.
3
Dependability as a Society Member’s Phenomenon
A central problem for us is the manner in which the term ‘dependability’ has been used in the ‘professional’ literature to be found at, for example, this conference. We argue that there is a need to complement this with a consideration of the ways in which dependability is realised as a practical matter by members and over time. This is not to say that we reject notions of dependability offered by this literature or that our comments here are incommensurable: the point is that we want to look at dependability and similar terms by doing an ethnography of what it means for a system to be reliable or dependable as a practical matter for society members engaged in using that system with just the resources and the knowledge they have. That is, we are interested in what it means to be dependable or reliable in context. In other words, while we are interested in the notions of dependability invoked in the ‘professional’ literature and these inform our discussions, we find that these definitions are dependability professionals’ objects as opposed to society members’ objects. We feel that we should consider how society members experience dependability in context, and that is what we
present here. Indeed it is our contention that such lay uses are important for understanding what we could mean by ‘dependability’. Our aim here, then, is to bring forward the lay uses, the practical procedures and knowledges that are invoked in working with more or less dependable systems and to consider this alongside ‘professional’ uses of terms and the metrics that realise them. Think, for instance, of the phrase “Tom is very dependable” – we would not want to say qua lay member that we would want to have some kind of metric testing, for example, how many times Tom had arrived at the cinema at the time specified. We would say that such metrics are not something with which members have to deal – it would be unusual to state apropos of Tom’s dependability: “over the last month Tom has turned up on time in n% of cases, where ‘on time’ means +/– 10 minutes”. Such metrics treat dependability in a sense that we might consider outwith the bounds of everyday language. They are somewhat problematic for the purposes of needing to know if Tom will arrive before we go into the cinema. This is not to suggest that ‘professional’ metrics and definitions of dependability have no value but that their use in everyday language is limited. Our aim here, then, is to focus on what it means to live with more or less dependable systems and to do so in the natural attitude through ordinary language and situated actions such as repair. As for humans, for machines the notion of being dependable is an accountable matter3 . It is also a matter that might well occasion work arounds or other situated actions which we should consider when examining what we call the ‘logical grammar’ of dependability. By this we mean the ways that the concept might be used. Consider “this machine is totally undependable, let’s buy it” – such a use cannot be said to be acceptable except as an ironic utterance. Uses such as “you cannot always depend on the machine but we usually find ways of making it work” point us to the ways people treat notions such as dependability not simply as common understandings but as common understandings intimately related to practical actions. Our study of control room work shows that in a strict sense the system is not 100% reliable but in showing how members make it work, we aim to provide a complement to such metrics and to show the work of making systems dependable. This is also our reason for recommending that one do an ethnography since it is only by so doing that one might see the work arounds in action and come to know just how the system is unreliable or cannot be depended on. As practical matters, dependability is important for members; yet the senses in which members treat such terms seems to be of little consequence for those who write on dependability. We want to argue that if one does examine natural language uses of these terms (and others) the benefit will be in a fuller appreciation of what it means to work with (or around) such systems. Consideration of technology in its (social) context illuminates the practical actions that make technologies workable-with and which realise horizons of dependability as soci3
By this we point to a phenomenon which, if asked, members could give an account of, e.g., in the example given above, Tom might report that he is late because his bus was late. This is an account of his being late. We might then say the same of machines, that this or that job could not be done because the machine broke down.
ety members’ objects. Such an exercise might appear as if it is trivial – playing with words – but we find value in it in that it shifts attention to how people cope with technology and away from metrics and measures that find their use in the technical realm but which have little value on the shop floor. They also show us something of the ‘missing what’ of making technologies reliable or dependable, the practical actions that occur, the work arounds, the procedures adopted and so on – the majority of which we would not expect to find in the literature. In other words, we want to present a consideration of the ways in which dependability is ad hoced into being. It is only by doing the ethnography that such features might be found. We might be seen as providing an outsider’s comment on something that has been professionalised and fine-tuned, yet we would argue that such issues are of merit to professionals and that they should be examined in considering what we mean by ‘dependability’ and that maybe the ethnography together with the consideration of such terms in natural language will, in Wittgenstein’s terms [13], be ‘therapeutic’. Therapeutic in the sense that it opens up some elbow room in which to do the kinds of ethnographic work that illustrates how knowledge is deployed within the working division of labour and how members in settings such as EngineCo treat knowledge as a practical resource for making more or less dependable systems work. This directs our attention to knowledge in and as part of practical action and we would argue forms a complement to the work currently being undertaken in the area of dependability.
4
Conclusions
It is important to us not to be seen as having resorted to some form of social constructivism since we believe that such approaches – with their faux naïf insistence that things could have been otherwise if people had agreed – are at best unhelpful. Social constructivist approaches appear to us to locate their territories and then make claims that there is no ‘there’ there, so to speak, without agreement. We would not wish to be heard to endorse an approach that treats dependability as a contingent matter of ‘how you look at it’. Our approach is to regard these things as worldly achievements that require one to look at the practices that exist in and as part of their achievement. This is why we recommend ‘doing the ethnography’ to show what it means to live with systems that are more or less dependable. Through examination of the ‘lived work’ of working with undependable systems (including the work arounds etc. that this involves) we aim to complement existing work in this area. We also believe that a focus on the grammar of dependability is important – Wittgenstein’s insistence on inquiring into what we mean when we use terms such as ‘dependable’ further focuses our attention on the realisation of dependability as lived work.
We propose that the study of dependability can only be enhanced and strengthened by attending to lay4 uses of the term and by focussing on the work that goes on to make systems more or less dependable. We do not argue that ethnographic studies or grammatical investigations should replace the work currently undertaken under the rubric of dependability, but that there is what we would call a ‘missing how’ that needs to be addressed and that this can be done satisfactorily in and through ethnographic research on the procedures and situated actions involved in making systems dependable. There is also a sense in which the study of dependability can be developed through the securing of a deeper understanding of the practices by which it is constituted. Ethnographies of the making of dependability measures/metrics might be useful in that they afford those involved the opportunity to reflect on their practice.
We have instantiated the need to ‘do the ethnography’ and to consider the manner in which a consideration of Wittgensteinian philosophy might assist in clarifying what we mean by dependability. We have provided examples from our own work on the realisation of dependability in the workplace, and shown how practical actions such as work-arounds contribute to the notion of dependability. It must be kept in mind that dependability is a situated concept and that when one considers the constitution of dependable systems one must keep in mind the settings in which such systems are used and the accompanying work practices. When we look at, for example, the work of the engine plant system we find that the workers engage in a series of situated practical actions in order to have the system be reliable. That is to say, dependability is not simply an inherent property of the system itself but of the work in which it is enmeshed. We can therefore speak of the ‘lived work’ of dependability in that, having done the ethnography, we can see that there is a reflexive relationship between work practice and dependable systems.
The aim of this paper has been to demonstrate not that ‘professional’ discourses of dependability have no place in our considerations, but that there is an important practical counterpart to these in lay notions of dependability and the work practice that goes on in and as a part of working with (un-)dependable systems. It is our recommendation that researchers consider this often neglected component when employing these concepts. In addition, we hope to have shown the essentially fragile nature of the JIT system and to have shown how the agility of the work practice, predicated on the autonomy accorded to plant workers, is necessary to keep the system running. The more or less dependable system that comprises the factory requires workers to be accorded autonomy in order to have things work. We have focused attention on the ways that this goes on and how reliability and dependability are practical outcomes of the deployment of knowledge by control room workers in an organisation whose production orthodoxy requires agility to repair its rather fragile nature and to make it work.
4
By ‘lay’ we do not suggest some impoverished version of a term but a necessary complement to the ‘professional’ uses to be found in the literature. Therefore, our use should be seen as a complement rather than an alternative.
The research reported here is funded by the UK Engineering and Physical Sciences Research Council (award numbers 00304580 and GR/N 13999). We would like to thank staff at the case study organisation for their help and participation.
References
[1] P. Beynon-Davies. Information systems failure and risk assessment: the case of the London Ambulance Service Computer Aided Despatch System. In European Conference on Information Systems, 1995.
[2] Mark Hartswood and Rob Procter. Design guidelines for dealing with breakdowns and repairs in collaborative work settings. International Journal of Human-Computer Studies, 53:91–120, 2000.
[3] Christian Heath, Marina Jirotka, Paul Luff and John Hindmarsh. Unpacking collaboration: the interactional organisation of trading in a city dealing room. Journal of Computer Supported Cooperative Work 3, 1994, pages 147–165.
[4] John Hughes, Val King, Tom Rodden, Hans Andersen. The role of ethnography in interactive systems design. interactions, pages 56–65, April 1995.
[5] J. A. Hughes, J. O’Brien, M. Rouncefield. Organisational memory and CSCW: supporting the ‘Mavis’ phenomenon. Proceedings of OzCHI, 1996.
[6] Nancy Leveson and Clark S. Turner. An investigation of the Therac-25 accidents. IEEE Computer, 26(7):18–41, 1993.
[7] Eric Livingston. The Ethnomethodological Foundations of Mathematics. Routledge, Kegan, Paul, London, 1986.
[8] Wes Sharrock, John A. Hughes. Ethnography in the Workplace: Remarks on its theoretical basis. TeamEthno-Online, Issue 1, November 2001. Available at http://www.teamethno-online.org/Issue1/Wes.html (accessed 14th Feb. 2002).
[9] Lucy A. Suchman. Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge University Press, 1987.
[10] Alexander Voß, Rob Procter, Robin Williams. Innovation in Use: Interleaving day-to-day operation and systems development. In PDC’2000 Proceedings of the Participatory Design Conference, T. Cherkasky, J. Greenbaum, P. Mambrey, J. K. Pors (eds.), pages 192–201, New York, 2000.
[11] Alexander Voß, Rob Procter, Roger Slack, Mark Hartswood, Robin Williams. Production Management and Ordinary Action: an investigation of situated, resourceful action in production planning and control. Proceedings of the 20th UK Planning and Scheduling SIG Workshop, Edinburgh, Dec. 2001.
[12] Robin Williams, Roger Slack and James Stewart. Social Learning in Multimedia. Final Report of the EC Targeted Socio-Economic Research Project: 4141 PL 951003. Research Centre for Social Sciences, The University of Edinburgh, 2000.
[13] Ludwig Wittgenstein. Philosophical Investigations. Blackwell, Oxford, 1953 (2001).
Practical Solutions to Key Recovery Based on PKI in IP Security Yoon-Jung Rhee and Tai-Yun Kim Dept. of Computer Science & Engineering, Korea University Anam-dong Seungbuk-gu, Seoul, Korea {genuine,tykim}@netlab.korea.ac.kr
Abstract. IPSec is a security protocol suite that provides encryption and authentication services for IP messages at the network layer of the Internet. Key recovery has been the subject of a lot of discussion, of much controversy and of extensive research. Key recovery, however, might be needed at a corporate level, as a form of key management. The basic observation of the present paper is that the cryptographic solutions proposed so far completely ignore the communication context. We propose an example of providing key recovery capability by adding key recovery information to an IP datagram. It is possible to take advantage of the communication environment in order to design key recovery protocols that are better suited and more efficient.
1
Introduction
Internet Protocol Security (IPSec) is a security protocol suite that provides encryption and authentication services for Internet Protocol (IP) messages at the network layer of the Internet [5,6,7,8]. Two major protocols of IPSec are the Authentication Header (AH) [7], which provides authentication and integrity protection, and the Encapsulating Security Payload (ESP) [8], which provides encryption as well as (optional) authentication and integrity protection of IP payloads. IPSec offers a number of advantages over other protocols being used or proposed for Internet security. Since it operates at the network layer, IPSec can be used to secure any protocol that can be encapsulated in IP, without any additional requirements. Moreover, IPSec can also be used to secure non-IP networks, such as Frame Relay, since the operation of many parts of IPSec (e.g., ESP) does not necessarily require encapsulation in IP. Key recovery (KR) has been the subject of a lot of discussion, of much controversy and of extensive research, encouraged by the rapid development of worldwide networks such as the Internet. A large-scale public key infrastructure is required in order to manage signature keys and to allow secure encryption. However, a completely liberal use of cryptography is not completely accepted by governments and companies, so escrowing mechanisms need to be developed in order to fulfill current regulations. Because of the technical complexity of this problem, many rather
unsatisfactory proposals have been published. Some of them are based on tamper-resistant hardware, others make extensive use of trusted third parties. Furthermore, most of them notably increase the number of messages exchanged by the various parties, as well as the size of the communications. Based on these reasons, the widespread opinion of the research community, expressed in a technical report [13] written by well-known experts, is that large-scale deployment of a key recovery system is still beyond the current competency of cryptography. Despite this fact, key recovery might be needed at a corporate level, as a form of key management. The basic observation of the present paper is that the cryptographic solutions that have been proposed so far completely ignore the communication context. Static systems are put forward for key recovery at the IP layer in the Internet. This paper proposes a method for carrying byte-oriented Key Recovery Information in a manner compatible with the IPSec architecture. We design a key recovery protocol that is connection-oriented, interactive and more robust than other proposals.
2
Background on Key Recovery
In this section, we describe needs of key recovery and practical key recovery protocols compatible with Internet standards. The history of key recovery started in April 1993, with the proposal by the U.S. government of the Escrow Encryption Standard [14], EES, also known as the CLIPPER project. Afterwards, many key recovery schemes have been proposed. To protect user privacy, the confidentiality of data is needed. For this, key recovery seems useless, but there are some scenarios where key recovery may be needed [15,16]:
- when the decryption key has been lost or the user is not present to provide the key;
- where commercial organizations want to monitor their encrypted traffic without alerting the communicating parties, for example to check that employees are not violating an organization's policy;
- when a national government wants to decrypt intercepted data for the investigation of serious crimes or for national security reasons.
3
Related Protocols
On one hand the Internet Engineering Task Force (IETF) standardization effort has led to Internet Security Association and Key Management Protocol (ISAKMP) [5] and IPSec [7,8] for low layers of the ISO model for all protocols above the transport layer. Each of these protocols splits the security protocol into two phases. The first phase enables communicating peers to negotiate the security parameters of the association (encryption algorithm, Hashed Message Authentication Code (HMAC) [11,12] mechanisms, encryption mode), the session key, etc. Moreover this first phase
can be split again into an authentication stage and a negotiation stage, but this results in an increase in the number of exchanges (from 3 to 6), which accordingly decreases performance (the aggressive mode and main mode [4]). The second phase allows encryption of the transmitted messages by means of cryptographic algorithms defined during the first phase and adds integrity and authentication services with HMAC methods. On the other hand, publications about key recovery systems come from the cryptography community: cryptographic primitives are used to design high-level mechanisms, which cannot fit easily into standards such as Requests for Comments (RFCs). We have studied the Royal Holloway Protocol (RHP) [1], which comes from pure academic research in cryptography, and the Key Recovery Alliance (KRA) protocols. Both schemes are based on the encapsulation mechanism. Additional key recovery data are sent together with the encrypted message, enabling the Key Escrow Agent (KEA) to recover the key. These data are included in a Key Recovery Field (KRF). Another possibility is to escrow partial or total information about the secret key, such as in CLIPPER. However, this technique requires more cryptographic mechanisms to securely manage the escrowed key (threshold secret sharing, proactive systems, etc.). The main drawback of the first system is that the security relies on one single key owned by the Trusted Third Party (TTP) and that this key may be subject to attacks. The main problem raised by escrowing the user's secret is the need to keep the key in a protected location.
3.1
RHP Encapsulation
The RHP architecture is based on a non-interactive mechanism with a single exchanged message and uses the Diffie-Hellman scheme. The RHP system allows messages sent to be decrypted using the user's private receive key. Each user is registered with a TTP, denoted TTP_A for user A. The notations used in this protocol are listed below.
- A, B: the communicating peers
- TTP_A, TTP_B: A's TTP, B's TTP
- p: a prime shared between TTP_A and TTP_B
- g: an element shared between TTP_A and TTP_B
- K(TTP_A, TTP_B): a secret key shared between TTP_A and TTP_B
- K_pr-s(A): A's private send key (random value x, 1 < x < p-1)
- K_pu-s(A): A's public send key (g^x mod p)
- K_pr-r(A): A's private receive key (a, 1 < a < p-1, derived from A's name and K(TTP_A, TTP_B))
- K_pu-r(A): A's public receive key (= g^a mod p)
The following is the RHP protocol description.
Fig. 1. RHP Protocol descriptions
1. A obtains K_pu-r(B) (= g^b mod p). TTP_A can compute K_pr-r(B) (= b) from B's name and K(TTP_A, TTP_B).
2. A derives a shared key, (g^b mod p)^x mod p = g^xb mod p, from K_pr-s(A). This is the session key, or the encryption key for the session key.
3. A transmits K_pu-s(A), signed by TTP_A, and K_pu-r(B). This information serves both as a KRF and as a means of distributing the shared key to B.
4. Upon receipt, B verifies K_pu-s(A) and derives the shared key from A's public send key and K_pr-r(B).
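To make the arithmetic behind these steps concrete, the toy sketch below shows how A's private send key and B's private receive key yield the same shared value g^xb mod p. The parameters are deliberately small, insecure stand-ins of our own choosing, not values from the protocol.

```python
# Toy sketch of the RHP-style shared key (illustrative parameters only,
# not a secure implementation; variable names are ours, not from the paper).
p = 0xFFFFFFFB  # stand-in for the prime shared by TTP_A and TTP_B
g = 5           # stand-in for the shared element g

x = 123457      # A's private send key (random, 1 < x < p-1)
b = 987653      # B's private receive key (regenerable by the TTPs in the RHP)

K_pu_s_A = pow(g, x, p)   # A's public send key, g^x mod p
K_pu_r_B = pow(g, b, p)   # B's public receive key, g^b mod p

# A combines its private send key with B's public receive key ...
shared_A = pow(K_pu_r_B, x, p)   # (g^b)^x mod p
# ... and B (or a TTP that can recompute b) combines b with A's public send key.
shared_B = pow(K_pu_s_A, b, p)   # (g^x)^b mod p

assert shared_A == shared_B == pow(g, x * b, p)  # the shared key g^(xb) mod p
```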
The main advantage of the RHP is that it is robust in terms of basic interoperability. The drawback of the RHP, however, is that it mixes key negotiation and key recovery. It is difficult to integrate this scheme inside the security protocols of the ISAKMP since the protocol has only one phase. Another drawback is that the KRF is sent once. In fact, this is a major disadvantage in the system since the session can be long and the KEA can miss the beginning. We refer to this difficulty as the session long-term problem. It is necessary to send the KRF more than once. However, the advantage of this system is that it encrypts the session key with the shared key, so that the security depends on the communicating peers and not on the TTP. But since the private receive keys depend on the TTP, this advantage disappears. Finally, this solution is a hybrid between encapsulation and escrow mechanisms because the private send key is escrowed and the private receive key can be regenerated by both TTPs.
3.2
KRA Encapsulation
The KRA system proposes to encrypt the session key with the public keys of the TTPs. The Key Recovery Header (KRH) [9] is designed to provide a means of transmitting the KRF across the network so that it may be intercepted by an entity attempting to perform key recovery. The KRH carries keying information about the ESP security association. Therefore, the KRH is used in conjunction with an ESP security association. In the ISAKMP, the use of the KRH can be negotiated in the same manner as other IPSec protocols (e.g., AH and ESP) [10]. Figure 2 shows IP packets with the KRH used with IPv4.
(a) KRH used with ESP: IP Header | KRH | ESP | Payload
(b) KRH used with AH and ESP: IP Header | AH | KRH | ESP | Payload
Fig. 2. IP Packets with KRH in IPv4
Various schemes using this technique have been proposed, such as TIS CKE (Commercial Key Escrow) [3] or IBM SKR (Secure Key Recovery) [2]. The system is quite simple and allows many variations according to the cryptographic encryption schemes. This proposal separates the key recovery information and the key exchange. The system modularity is also compatible with the IETF recommendation. But the KRF contains the encryption of the same key under many TTP public keys. Thus, the KRF can rapidly grow, and one must take proper care against broadcast message attacks. The KRA solution does not need to send a KRF in each IP packet inside IPSec [9]. The intervals at which the initiator and responder send KRFs are established independently [10]. But since the KRF size is big, the KRF cannot be included in the IP header, so it is sent in the IPSec header, which is a part of the IP packet data. This decreases the bandwidth. The second drawback is the encryption of the session key under the TTP public key. Finally, this solution is not robust because if this key is compromised, the system collapses.
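The growth of the KRF with the number of TTPs can be illustrated with a small sketch. The textbook-RSA-style key pairs below are our own toy stand-ins, not part of any KRA specification.

```python
# Toy illustration of a KRA-style KRF: the same session key is encrypted under
# the public key of every TTP, so the field grows with the number of TTPs.
# (e, n) pairs are tiny textbook-RSA-style toys, purely illustrative.
ttp_public_keys = {"TTP_A": (17, 3233), "TTP_B": (17, 3127), "TTP_C": (17, 2773)}

session_key = 42  # stand-in for the negotiated session key

krf = {name: pow(session_key, e, n) for name, (e, n) in ttp_public_keys.items()}
print(krf)       # one ciphertext per TTP
print(len(krf))  # the KRF carries as many entries as there are TTPs
```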
4
The Proposed Key Recovery for IPSec
In this section, we propose some key recovery solutions for IPSec, which improve on the previous protocols described in Section 3. The main problem with the RHP proposal is that the protocol is connectionless-oriented. Therefore, the protocol is not well suited to IPSec or ISAKMP, which are connection-oriented and allow interactivity. The KRA proposal seems a better solution than the RHP. Still, the security of the session key depends on a fixed public key of the TTPs for all communications. Our solution is based on IETF protocols in order to improve the security of the system, the network communication, and the interoperability for cross-certification. We can integrate the modified RHP method in the IETF protocols (ISAKMP, IPSec) if we realize a real Diffie-Hellman key exchange, such as in Oakley [4], in order to negotiate a shared key. After this first phase, the KRF is sent with the data.
4.1
Negotiation of Security Association for KRF
In phase 2 of ISAKMP, the negotiation for the security association of key recovery arises. The proposed system allows the session key sent to be encrypted by the communicating peers using a Diffie-Hellman shared key between the two peers and a time stamp, and
decrypted by the communicating peers and their TTPs using each user's private key. The notations used in our proposed protocol are shown below.
- TT: a time stamp
- K_pr(A): A's private key, escrowed by TTP_A (x, 1 < x < p-1)
- K_pu(A): A's public key (g^x mod p)
- K_DH(A, B): a Diffie-Hellman shared key between A and B (A: (g^y mod p)^x mod p = g^xy mod p; B: (g^x mod p)^y mod p = g^xy mod p)
- K_ek-sk(A->B): an encryption key for the session key in case A is the initiator and B is the responder (f(g^xy mod p, TT), where f is a one-way function)
The following is the mechanism used in the proposed protocol.
Fig. 3. Negotiation of Security Association for KRF used in the Proposed Protocol
1. A obtains K_pu(B), generates TT, derives K_DH(A, B), and calculates K_ek-sk(A->B). B obtains K_pu(A) and derives K_DH(A, B).
2. A transmits {TT}K_DH(A, B) to B.
3. Upon receipt, B calculates K_ek-sk(A->B) from {TT}K_DH(A, B).
In the proposed protocol, we simplify the RHP by reducing the required keys. In step 2, we improve the freshness of the encryption key for the session key, K_ek-sk(A->B), by generating a different key every time using the time stamp and a one-way function, and we reduce the dependency of the encryption key for the session key on the TTPs. Consequently, this can be more robust than the RHP because it reduces the influence of the TTP-escrowed private receive keys in the RHP.
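A minimal sketch of the key-derivation side of these steps is given below. SHA-256 is used only as a stand-in for the unspecified one-way function f, and all parameters are illustrative, not values prescribed by the scheme.

```python
import hashlib
import time

# Sketch of deriving K_ek-sk(A->B) = f(g^xy mod p, TT) as described above.
p = 0xFFFFFFFB            # toy prime, stand-in for the negotiated group
g = 5
x, y = 123457, 654323     # private keys of A and B (escrowed by their TTPs)

K_DH = pow(pow(g, y, p), x, p)  # g^(xy) mod p; B computes the same as (g^x)^y mod p

TT = int(time.time())           # time stamp generated by A for this association

def one_way_f(shared: int, ts: int) -> bytes:
    """One-way function f(g^xy mod p, TT); SHA-256 is a stand-in for f."""
    return hashlib.sha256(f"{shared}:{ts}".encode()).digest()

K_ek_sk = one_way_f(K_DH, TT)   # fresh encryption key for the session key
print(K_ek_sk.hex())
```

Because TT changes for every negotiation, the derived key differs each time even though the escrowed private keys (and hence g^xy mod p) stay the same.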
4.2
Transmission of KRF
During the IPSec session, we send the KRF with the encrypted message as KRA does. The following is our proposed KRH format. The KRH holds key recovery information for an ESP security association. The format of the KRH is shown in Fig. 4; its fields, in order, are:
- Next Header (8 bits) | Length (8 bits) | Reserved
- Security Parameter Index (SPI) (32 bits)
- Encrypted Time Stamp
- KRF Length
- Key Recovery Field (KRF), variable length
- Validation Field Type | Validation Field Length
- Validation Field Value, variable length
Fig. 4. Key Recovery Header format
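To make the layout concrete, the sketch below packs the leading fixed-width fields of such a header. The width of the Encrypted Time Stamp and the treatment of the validation fields are not fixed here, so those choices (and the sample values) are assumptions for illustration only.

```python
import struct

# Rough sketch of serialising the start of a KRH-like header as listed above.
# Field widths beyond those stated in the text are our assumptions.
def pack_krh_fixed(next_header: int, length_words: int, spi: int,
                   encrypted_ts: bytes, krf: bytes) -> bytes:
    # Next Header (8 bits), Length (8 bits), Reserved (16 bits), SPI (32 bits)
    head = struct.pack("!BBHI", next_header, length_words, 0, spi)
    # Encrypted Time Stamp, KRF Length (in 32-bit words), KRF
    body = encrypted_ts + struct.pack("!I", len(krf) // 4) + krf
    return head + body

krh = pack_krh_fixed(next_header=50, length_words=0, spi=0x12345678,
                     encrypted_ts=b"\x00" * 8, krf=b"\x11" * 16)
print(len(krh), krh.hex())
```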
Next Header. 8 bits wide. Identifies the next payload after the Key Recovery Header. The values in this field are set of IP Protocol Numbers as defined in the most recent RFC from the Internet Assigned Numbers Authority (IANA) describing ‘Assigned Numbers’. Length. 8 bits wide. The length of the KRB in 32-bit words. Minimum value is 0 words, which is only used in the degenerate case of a ‘null’ key recovery mechanism. Security Parameters Index (SPI). A 32-bit pseudo-random value identifying the security association for this datagram. The SPI value 0 is reserved to indicate that ‘no security association exists’. Encrypted Time Stamp. The value of {TT } K DH ( A , B ) generated in phase 2 of ISAKMP. It needs to be transmitted with KRF because it is required when the corresponding TTP recovers KRF, but it is not escrowed to the TTP. Key Recovery Field Length. Number of 32-bit words in the Key Recovery Field. Key Recovery Field. The Key Recovery Data. It contains session key of current IPSec session encrypted by encryption key ( K ek − sk ( A − > B ) ) in ISAKMP. Validation Field Type. Identifies the technique used to generate the Validation Field.
Validation Field Length. Number of 32-bit words in the Validation Field Value. The Validation Field Length must be consistent with the Validation Field Type. Validation Field Value. The Validation Field Value is calculated over the entire KRH. The TTPs can recover the key as well as the user (execute a Diffie-Hellman operation) since they escrow the user's private key and obtain the time stamp from the KRH. Even if we keep the same secret key, a KRF must be sent, since the session key is not escrowed. Hence, the KRF is sent several times, according to an accepted bandwidth degradation. We send the KRF in the IPSec packet as a part of the IP packet. The KRF only depends upon a specific user. This allows sending the KRF in a single direction according to the user's policy. User A can choose to send (or not) the session key encrypted with his TTP's public key and user B can do the same. This is an interesting feature compared to the RHP, since in the RHP scheme both TTPs can decrypt all messages without communicating with each other.
4.3
Comparison of Protocols
Our proposal is a mix of the modified RHP and the KRA solutions that combines the advantages of both systems. This scheme is based on an escrow mechanism. First, we keep the interoperability of the RHP, improve robustness compared with the RHP, and include it in the Internet protocols. Secondly, the KRA solution is used, but we encrypt the session key with a time stamp and a shared key established by Diffie-Hellman key exchange between the communicating users, or with a user's public key, to gain robustness, not with the TTPs' public keys. Therefore it can provide a method for carrying byte-oriented Key Recovery Information in a manner compatible with the IPSec architecture. We compare the existing protocols and our proposed protocol in Table 1.
Table 1. Comparison of protocols (O: high support, X: not supported, plus an intermediate low-support rating): RHP, KRA and the proposed protocol are compared on compatibility with IETF, robustness, and reducing the overhead of the network.
5
Conclusion and Future Works
We propose an example of providing key recovery capability by adding key recovery information to an IP datagram. It is possible to take advantage of the communication
environment in order to design key recovery protocols that are better suited and more efficient. We design a key recovery protocol that is connection-oriented and more robust than the RHP or KRA proposals by combining the advantages of the modified RHP and the KRA solutions. As future work, we plan to analyse and evaluate the performance of the mechanism. To progress, we will apply this by modifying an existing IPSec system to obtain exact analysis results.
References
1. N. Jefferies, C. Mitchell, and M. Walker, "A Proposed Architecture for Trusted Third Party Services", in Cryptography: Policy and Algorithms, Proceedings: International Conference, Brisbane, Lecture Notes in Computer Science, LNCS 1029, Springer-Verlag, 1995.
2. R. Gennaro, P. Karger, S. Matyas, M. Peyravian, A. Roginsky, D. Safford, M. Zollett, and N. Zunic. Two-Phase Cryptographic Key Recovery System. In Computers & Security, pages 481-506. Elsevier Sciences Ltd, 1997.
3. D. M. Balenson, C. M. Ellison, S. B. Lipner and S. T. Walker, "A new Approach to Software Key Encryption", Trusted Information Systems.
4. The Oakley Key Determination Protocol (RFC 2412).
5. Internet Security Association and Key Management Protocol (ISAKMP) (RFC 2408).
6. The Internet Key Exchange (IKE) (RFC 2409).
7. IP Authentication Header (AH) (RFC 2402).
8. IP Encapsulating Security Payload (ESP) (RFC 2406).
9. T. Markham and C. Williams, Key Recovery Header for IPSEC, Computers & Security, 19, 2000, Elsevier Science.
10. D. Balenson and T. Markham, ISAKMP Key Recovery Extensions, Computers & Security, 19, 2000, Elsevier Science.
11. The Use of HMAC-MD5-96 within ESP and AH (RFC 2403).
12. The Use of HMAC-SHA-1-96 within ESP and AH (RFC 2404).
13. H. Abelson, R. Anderson, S. Bellovin, J. Benaloh, M. Blaze, W. Diffie, J. Gilmore, P. Neumann, R. Rivest, J. Schiller, and B. Schneier. The Risks of Key Recovery, Key Escrow, and Trusted Third-Party Encryption. Technical report, 1997. Available from http://www.crypto.com/key-study.
14. NIST, "Escrow Encryption Standard (EES)", Federal Information Processing Standard Publication (FIPS PUB) 185, 1994.
15. J. Nieto, D. Park, C. Boyd, and E. Dawson, "Key Recovery in Third Generation Wireless Communication Systems", Public Key Cryptography - PKC 2000, LNCS 1751, pp. 223-237, 2000.
16. K. Rantos and C. Mitchell, "Key recovery in ASPeCT Authentication and Initialization of Payment protocol", Proc. of ACTS Mobile Summit, Sorrento, Italy, June 1999.
Redundant Data Acquisition in a Distributed Security Compound Thomas Droste Institute of Computer Science Department of Electrical Engineering and Information Sciences Ruhr-University Bochum, 44801 Bochum, Germany
[email protected] Abstract. This paper introduces a new concept for an additional security mechanism which works on every host inside a local network. It focuses on the redundant data acquisition used to obtain the complete net-wide network traffic for later analysis. The compound itself has a distributed structure. Different components act together on different hosts in the security compound. Therefore, the acquisition and analysis are done net-wide by hosts with free resources, parallel to their usual work. Because the hosts, in particular workstations, change dynamically over the day, the compound must adapt to the actual availability of all hosts. It must be guaranteed that every transferred packet inside the local network is recorded. The network traffic at each host in the network is recorded by a minimum of two others. The recorded traffic is combined at a node in order to get a single complete stream for analysis. The resulting problems at the different stages of the redundant data acquisition are described and the solutions used are presented.
1
Introduction
The distributed security compound is implemented on each host in the local network. It works parallel to other security mechanisms like firewalls, or intrusion detection systems (IDS) and intrusion response systems (IRS). The intention is to get an additional security mechanism which works transparently on the established network and on different hosts. The achieved knowledge and analysis depend on the whole net-wide network traffic transferred inside a local network. A security violation or unusual behavior can then be detected in the whole network. An advantage is the use of usual workstations for the compound. No additional hardware is necessary. All functions and components are added to the existing host configuration. Different components for compound detection, network data acquisition, distributed analysis, compound communication, and reaction to security violations act together [3]. A basic component is the compound detection. The whole network is scanned for hosts which have to be integrated into the security compound. Hosts without the
needed components are detected, too. Therefore, a new host in the network - a new member or a possible intruder with physical access to the local network - will be identified as such a system. Each host can acquire the network traffic (see Sect. 2), which is used for the later analysis. The analysis of recorded and combined data is split and distributed to the compound members. The compound communication is possible among all components. Cryptographic mechanisms are required to encrypt the communication inside the compound and to protect the recorded and transferred data. If a local preanalysis process or a distributed analysis recognizes a security violation, the reaction component will be activated. An internally compromised host is then decoupled from the compound and access from/to other hosts is restricted. The local reaction component can, e.g., shut down the host or log off the local user. All components of the security compound are working as services in the background, i.e. transparent for the local user. They start interacting before the user logs on and remain active all the time the system is up.
2
Acquisition of Network Traffic
The acquisition is realized in parallel with the normal network traffic. The complete network traffic inside a collision domain d_CD,i (i is the number of the collision domain) is recorded at the network adapter of a host. In contrast to host-based traffic d_r,i,j (j is the host in collision domain i), all packets are recorded. Assuming that all hosts record the traffic, the resulting traffic in a collision domain is given by
d_CD,i = Σ_{j=1}^{n_i} d_r,i,j .   (1)
The net-wide traffic d_ges to record is given by
d_ges = Σ_{i=1}^{n_CD} d_CD,i .   (2)
The recorded data d includes each transmitted frame on the link layer. The data for the higher layers, i.e. header and encapsulated data, are completely included [12]. Each frame is marked with a timestamp. After recording the data, a transformation is done by adding the MAC1 address of the recording host and conversion into a readable packet order (layers are dissolved) for the combining process and the later analysis. This disintegration is done at most up to the transport layer. The beginning of a converted frame [11]: date;time;mac_source;mac_dest;mac_host;frame_type;... Parallel to the dissolved data dr´ the original frames dr´´ are stored for later requests from analysis processes. 1
Medium access control, the hardware address of the network interface [13].
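A minimal sketch of this conversion step might look as follows. Only the fields named in the record format above are taken from the frame; everything else (timestamp formatting, sample addresses) is illustrative.

```python
from datetime import datetime

# Sketch: rewrite a captured Ethernet frame as a semicolon-separated record
# of the form date;time;mac_source;mac_dest;mac_host;frame_type;...
def convert_frame(raw: bytes, mac_host: str) -> str:
    now = datetime.now()
    mac_dest = raw[0:6].hex(":")                    # destination MAC (first 6 bytes)
    mac_source = raw[6:12].hex(":")                 # source MAC (next 6 bytes)
    frame_type = int.from_bytes(raw[12:14], "big")  # EtherType field
    return ";".join([now.strftime("%Y-%m-%d"), now.strftime("%H:%M:%S"),
                     mac_source, mac_dest, mac_host, f"0x{frame_type:04x}"])

frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"\x00" * 46
print(convert_frame(frame, mac_host="00:aa:bb:cc:dd:ee"))
```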
The traffic in a collision domain d_CD can be recorded at any point inside this physically combined network. Therefore, each host inside the collision domain can record this traffic. A problem is the acquisition in switched networks. There, the traffic is controlled by the switch. Every packet is transferred directly to the station(s) behind the corresponding port. Other stations cannot recognize the traffic. If only one host is connected to a port, it will act as a single collision domain (n_i = 1). The recorded data at the host d_r,i,1 is equal to d_CD,i. A detection of hosts which are not part of the compound is still possible because of the administrative traffic (e.g., DNS queries). It does not depend on the topology (e.g., Ethernet 10Base-2/10Base-T with CSMA/CD or a switched network). As soon as communication takes place to any compound member, this host is detected.
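The detection idea can be sketched in a few lines: any source address that is not a known compound member marks a host still to be integrated (or a possible intruder). The addresses and record structure below are illustrative.

```python
# Sketch of compound detection from recorded traffic, as described above.
compound_members = {"00:aa:bb:cc:dd:01", "00:aa:bb:cc:dd:02"}

def detect_unknown_hosts(records):
    """records: iterable of converted frames (dicts with a 'mac_source' key)."""
    unknown = set()
    for rec in records:
        if rec["mac_source"] not in compound_members:
            unknown.add(rec["mac_source"])
    return unknown

traffic = [{"mac_source": "00:aa:bb:cc:dd:01"},
           {"mac_source": "00:de:ad:be:ef:99"}]   # e.g. a DNS query from a new host
print(detect_unknown_hosts(traffic))               # {'00:de:ad:be:ef:99'}
```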
3
Generation of Redundancy
For the analysis, it is important to have a maximum of information about the network traffic. To combine the traffic from each host dr´ a filtering of duplicated packets is necessary. The result is the net-wide network data dcombine, given by
d_combine = ∪_{k=1}^{n_k} d_r,k´ , where n_k = Σ_{i=1}^{n_CD} n_i .   (3)
n_k is the number of hosts in the compound, including all hosts n_i in each of the n_CD collision domains. The redundancy degree increases with the number of compound members. Therefore, it is not necessary to record the complete traffic at all hosts. A reduction from this highly redundant level to a smaller one is forced. This is done by distribution of acquiring rules. Each compound member j gets a list of two or more other hosts to monitor (scalable). The recorded traffic d_r,j (see Sect. 2) is filtered with the list. Now, the resulting output d_r,j´ depends only on the traffic of the hosts in the list (and its own). A rough estimation of the relation between the number of hosts and the recorded data is given in Table 1. Table 1. Dependency for recorded and combined data for a special host j
Number of hosts
Recorded data
low (ni≤3)
dr,j´=dges´
medium (315)
dr,j´ b.pressed } Blank * located_at pressed: 0..n bool 0..1 0..n 0..1
Fig. 1. Analysis Class Diagram for Production Cell (a Blank, with boolean attribute pressed, is located_at a Location; Feedbelt, Press, Deposit Belt and Table are linked to Location)
Table 12. Guideword Interpretations for Case Study
Attribute: Blank/table relationship. Guideword: No. Interpretation: Blank never placed on table after leaving feed belt, or system has failed to detect it on table.
Attribute: Blank/table relationship. Guideword: More. Interpretation: 2 or more blanks are on the table.
Attribute: Blank/deposit belt relationship. Guideword: Part of. Interpretation: There is an unpressed blank on the deposit belt.
2.5
Case Study: Production Cell
An example of application of the above approach is the following, taken from an analysis of a production cell, in which blanks (car body templates) are moved along a feed belt onto an elevating table to be picked up by a robot arm and then transferred to a press for pressing before being transferred to a deposit belt [5]. Figure 1 shows part of the UML class diagram for this system. Applying HAZOPS to this diagram, using table 6, we could have the analyses given in Table 12. In general, either the real world deviates from the model described in the class diagram (and the software either correctly represents this failure or it does not), or the real world conforms to the model but the software has incorrect knowledge. These three cases may be significantly different in terms of deviation causes and consequences, so they should be analysed separately. For instance, if a blank has failed to be pressed, and the software fails to detect this, hazards may arise with later processing stages. On the other hand, if the blank has been pressed but the software fails to detect this, it may be re-pressed, possibly causing damage to the press. Figure 2 shows the controller state machine for the feed belt. s2 is the blank sensor at the end of the feed belt, stm a signal indicating that the feed belt is safe to move, and bm is the belt motor. Hazard states are indicated by a cross. HAZOPS of this state machine, using tables 2 and 4, identifies the following potential deviations and hazards (Table 13).
Table 13. Guideword Interpretations for Feed belt state machine
Attribute: Event (for transition off_on_on -> off_off_on). Guideword: No. Interpretation and Consequences: stmoff not detected when it should be; the control system leaves the motor on, resulting in hazard state off_off_on in the EUC.
Attribute: Action (for transition on_off_on -> on_off_off). Guideword: Other than. Interpretation and Consequences: bmSeton issued instead of bmSetoff; the EUC stays in state on_off_on, which is hazardous.
Fig. 2. Controller state machine for Feed belt (states are tuples in the order s2_stm_bm, e.g. off_off_off, off_on_on, off_off_on, on_on_on, on_off_on, on_off_off; sensor transitions s2on/s2off and stmon/stmoff; actuator actions /bmSeton and /bmSetoff)
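A small executable rendering of a controller of this kind is sketched below. The transition set is our illustrative reading of Fig. 2 rather than a faithful copy of it; the hazard state off_off_on named in Table 13 is flagged when reached.

```python
# Feed-belt controller sketch: states are (s2, stm, bm) tuples, events are
# sensor signals, actions set the belt motor. Illustrative transition set only.
HAZARD = {("off", "off", "on")}   # belt motor on although it is not safe to move

transitions = {
    (("off", "off", "off"), "stmon"):  (("off", "on", "on"),   "bmSeton"),
    (("off", "on", "on"),   "stmoff"): (("off", "off", "off"), "bmSetoff"),
    (("off", "on", "on"),   "s2on"):   (("on", "on", "on"),    None),
    (("on", "on", "on"),    "stmoff"): (("on", "off", "off"),  "bmSetoff"),
    (("on", "off", "off"),  "stmon"):  (("on", "on", "on"),    "bmSeton"),
    (("on", "on", "on"),    "s2off"):  (("off", "on", "on"),   None),
}

def step(state, event):
    next_state, action = transitions.get((state, event), (state, None))
    if next_state in HAZARD:
        print("hazard state reached:", next_state)
    return next_state, action

state = ("off", "off", "off")
for ev in ["stmon", "s2on", "stmoff", "stmon", "s2off"]:
    state, action = step(state, ev)
    print(ev, "->", state, action)
```

A "No" deviation of the kind in Table 13 corresponds to an event (e.g. stmoff) simply never being fed to step(), so the motor command that should follow it is never issued.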
3
Security Analysis
Many safety-critical systems have security issues – eg, in a railway network management system, communication between a train coordinator and train drivers must be authenticated to ensure safe operation of the railway. Other systems may not have direct safety implications (eg, an online banking system) but have security aspects with critical consequences. Traditional HAZOPS concerns deviations from intended functionality, in general. For security HAZOPS we want to focus on the relevant components (elements which store, process or transmit data internally, or which receive/transmit data to the external world) and relevant attributes, guidewords and interpretations. Deviation analysis of object-oriented models for security properties can follow the same set of guidewords and interpretations as given above. The elements and attributes of interest will be focussed on security, and similarly for the causes and consequences of deviations. For example in the model of Figure 3 a “part of” deviation could be that some user with a security level below admin is logged into the secure host.
Fig. 3. Part of Class Diagram of Secure System (class User with attribute permissions, associated with class Secure Host via logged_on with multiplicities * and 1, and constraint { (user, host) : logged_on => user.permissions >= admin })
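The constraint of Figure 3 can be checked mechanically against a snapshot of the system state. The following Python sketch is our illustration only; the permission levels and session records are assumptions, not part of the model in the paper:

    # Illustrative check of the Figure 3 invariant: logged_on => user.permissions >= admin.
    ADMIN = 2
    users = {"alice": 2, "bob": 1}                                   # user -> permission level
    logged_on = [("alice", "secure-host"), ("bob", "secure-host")]   # (user, host) sessions

    def part_of_deviations(users, logged_on, required=ADMIN):
        """Return the sessions that violate the class-diagram constraint."""
        return [(u, h) for (u, h) in logged_on if users[u] < required]

    print(part_of_deviations(users, logged_on))   # [('bob', 'secure-host')]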
The aim of security HAZOPS is to identify deviations from design intent which have security implications, i.e., which affect the system's ability to:
– ensure confidentiality and integrity of information it stores, processes or internally transmits (Confidentiality/Integrity);
– provide authentication of information it produces (outputs) and authenticate input information (Authentication);
– ensure availability of services/information (Availability);
– provide a trace of all user interactions, such as login attempts to the system (Non-repudiation).
The team approach and other organisational aspects of HAZOPS defined in Def-Stan 00-58 could still be used, although the range of expertise should be relevant (e.g., including experts in the encryption/networking approaches used in the system). The components involved will be:
– Physical components: processing nodes (including intermediate relay nodes in a communication network); memory; connectors; input devices (e.g., keyboard, smartcard reader, biometric sensor); output devices (e.g., CD burner, smartcard writer, VDU).
– Information components: file, email, network packet, and subcomponents such as particular record fields, email headers, etc.
Therefore the guidewords and interpretations could be:
– NO/NONE – complete absence of the intended CIA/A/NR property for the component being considered;
– LESS – inadequate CIA/A/NR property compared with intention;
– PART OF – only part of the intended CIA/A/NR achieved;
– OTHER THAN – intended CIA/A/NR property achieved, but some unintended property also present.
LATE, EARLY, BEFORE, AFTER and INTERRUPT could also be relevant. The paper [12] gives more focussed guidewords, using the idea of a guidephrase template. We can extend this concept to consider the additional properties of authentication and non-repudiation. CIA/A/NR properties may be lost due either to device behaviour (technical failure/cause) or to human actions. Human actions may be deliberate hostile actions (by insiders or outsiders) or unintentional human failures (or correct procedures which nonetheless lead to a loss of CIA/A/NR). The negations of the CIA/A/NR properties are: disclosure, manipulation (with subcases such as fabrication, amendment, removal and addition), denial, misauthentication (which includes authentication of invalid data and failure to authenticate valid data), and repudiation. The adapted guidephrase template is given in Table 14.
Table 14. Revised Guidephrase Template
  (Pre) Guideword:   Deliberate | Unintentional
  Attribute:         Disclosure | Manipulation | Denial | Misauthentication | Repudiation
                     ... of COMPONENT by ...
  (Post) Guideword:  Insider | Outsider | Technical behaviour/functionality
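One way to apply the template systematically is to enumerate every guidephrase for a component and let the analysis team discard the irrelevant ones. The following Python sketch is our illustration of that enumeration, not tooling from the paper:

    # Illustrative enumeration of guidephrases from the Table 14 template.
    from itertools import product

    PRE = ["Deliberate", "Unintentional"]
    ATTRIBUTE = ["disclosure", "manipulation", "denial",
                 "misauthentication", "repudiation"]
    POST = ["insider", "outsider", "technical behaviour/functionality"]

    def guidephrases(component):
        """Yield candidate deviation phrases for one component."""
        for pre, attr, post in product(PRE, ATTRIBUTE, POST):
            yield f"{pre} {attr} of {component} by {post}"

    for phrase in list(guidephrases("patient record"))[:4]:
        print(phrase)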
Examples of these guidephrases and their interpretations could be:
– Unintentional misauthentication of connection request to firewall by technical behaviour. Interpretation (1): the firewall authenticates the sender of a request when it should not. Interpretation (2): the firewall does not authenticate a valid sender of a request and denies the connection when it should not.
– Deliberate disclosure of patient record by insider. Interpretation: a staff member intentionally provides patient information to a third party.
We could apply this approach to an example of an internet medical analysis system (Figure 4):
– The company provides a server to perform analysis of data, which customers upload to the website and download results from.
– Data is encrypted by a private key method between the client and server, but stored in decrypted form on the server and the smartcard.
– The smartcard reader at the client computer reads data from the smartcard for transmission to the server.
– Customers have a password/login for the website; this is checked against their customer ID data on the smartcard (written by the server on registration/initialisation of the card). Only if both agree is the login accepted (see the sketch after Figure 4).
– Each customer is allocated a separate directory at the server and cannot load/read the data of any other customer.
Fig. 4. Architecture of Medical Analysis System (Client PC with smartcard reader and data storage, connected via the Internet to the Server)
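The login rule described above (website credentials and smartcard customer ID must both agree) can be captured in a few lines. The following Python sketch is illustrative only; the record layout and names are assumptions:

    # Illustrative dual-check login for the medical analysis system (assumed record layout).
    def login_accepted(password_db, smartcard, login, password):
        """Accept only if the website credentials AND the smartcard customer ID agree."""
        site_ok = password_db.get(login) == password
        card_ok = smartcard.get("customer_id") == login
        return site_ok and card_ok

    password_db = {"cust42": "s3cret"}
    card = {"customer_id": "cust42"}
    print(login_accepted(password_db, card, "cust42", "s3cret"))   # True
    print(login_accepted(password_db, card, "cust99", "s3cret"))   # False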
An example security HAZOPS analysis from this system is given in Table 15. Detailed risk analysis can then be carried out:
Table 15. Example Analysis of Medical System
  Guidephrase: Unintentional misauthentication of smartcard by technical function.
    Interpretation: An invalid smartcard is authenticated when it should not be.
    Causes: Software fault in checker, or HW failure in reader.
    Consequences: Valid user can use invalid card; may corrupt data stored on server.
  Guidephrase: Deliberate manipulation of server data by outsider.
    Interpretation: Server data altered by unauthorised person.
    Causes: Outsider gains login access to server, or via unsecured webpages or scripts.
    Consequences: Analysis data may be altered so incorrect results are given.
– Given a specific loss-of-security situation from the HAZOP, we can estimate the approximate likelihood and severity, and therefore the risk level (as sketched below).
– The probability of decryption of encrypted data can be estimated using mathematical theory; other probabilities (of interception, intrusion, or denial-of-service attacks) are less clear, although estimates based on previous experience could be used.
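The first bullet corresponds to a conventional risk-matrix calculation. The Python sketch below is our illustration; the scales and thresholds are assumptions, not values from the paper:

    # Illustrative risk-matrix computation (likelihood and severity scales are assumptions).
    LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4}
    SEVERITY   = {"negligible": 1, "marginal": 2, "critical": 3, "catastrophic": 4}

    def risk_level(likelihood, severity):
        score = LIKELIHOOD[likelihood] * SEVERITY[severity]
        if score >= 12:
            return "intolerable"
        if score >= 6:
            return "undesirable"
        return "tolerable"

    # e.g. the "deliberate manipulation of server data by outsider" row of Table 15
    print(risk_level("possible", "critical"))   # undesirable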
4 Related Work
State machine hazard analysis has been used informally in the HCI field [2], and tool support for generating fault trees from state machines has been developed as part of RSML [7]. The CORAS Esprit project (http://www.nr.no/coras/) is investigating security analysis of object-oriented systems using scenario analysis. Such analysis could be based on the identification of individual deviations using the techniques given here. Tools for HAZOPS analysis such as PHA-Pro 5 (http://www.woodhill.co.uk/pha/pha.htm) and PHAWorks 5 (http://www.primatech.com/software/phaworks5.htm) deal mainly with HAZOPS support for process control systems, and do not cover programmable electronic system designs.
5 Conclusion
We have described approaches for HAZOPS of UML notations and for security HAZOPS. These revised guidewords and interpretations have been implemented
in a hazard analysis tool for UML class diagrams and E/E/PES P&I diagrams within the RSDS support environment [1].
References
[1] K. Androutsopoulos: The RSDS Tool. Department of Computer Science, King's College, 2001. http://www.dcs.kcl.ac.uk/pg/kelly/Tools/
[2] A. Dix, J. Finlay, G. Abowd, R. Beale: Human-Computer Interaction, 2nd Edition. Prentice Hall, 1998.
[3] ISO: Guidelines for the Use of the C Language in Vehicle Based Software. ISO TR/15497. Also at: http://www.misra.org.uk/
[4] J. L. Lanet, A. Requet: Formal Proof of Smart Card Applets Correctness. Proceedings of the 3rd Smart Card Research and Advanced Application Conference (CARDIS '98), September 1998.
[5] K. Lano, D. Clark, K. Androutsopoulos, P. Kan: Invariant-based Synthesis of Fault-tolerant Systems. FTRTFT 2000, Pune, India, 2000.
[6] P. Lartigue, D. Sabatier: The Use of the B Formal Method for the Design and Validation of the Transaction Mechanism for Smart Card Applications. Proceedings of FM '99, pp. 348–368, Springer-Verlag, 1999.
[7] N. G. Leveson: Designing a Requirements Specification Language for Reactive Systems. Invited talk, Z User Meeting, Springer-Verlag, 1998.
[8] Ministry of Defence: Defence Standard 00-56, Issue 2, 1996.
[9] Ministry of Defence: Defence Standard 00-58, Issue 2, 2000.
[10] Rational Software et al.: OMG Unified Modeling Language Specification, Version 1.4, 2001.
[11] K. R. Leino, J. Saxe, R. Stata: Checking Java Programs with Guarded Commands. In: Formal Techniques for Java Programs, Technical Report 251, Fernuniversität Hagen, 1999.
[12] R. Winther, O.-A. Johansen, B. A. Gran: Security Assessments of Safety Critical Systems Using HAZOPs. SAFECOMP 2000.
The CORAS Framework for a Model-Based Risk Management Process

Rune Fredriksen (1), Monica Kristiansen (1), Bjørn Axel Gran (1), Ketil Stølen (2), Tom Arthur Opperud (3), and Theo Dimitrakos (4)

(1) Institute for Energy Technology, Halden, Norway. {Rune.Fredriksen, Monica.Kristiansen, Bjorn-Axel.Gran}@hrp.no, http://www.ife.no
(2) Sintef Telecom and Informatics, Oslo, Norway. [email protected], http://www.sintef.no
(3) Telenor Communications AS R&D, Fornebu, Norway. [email protected], http://www.telenor.no
(4) CLRC Rutherford Appleton Laboratory (RAL), Oxfordshire, UK. [email protected], http://www.rl.ac.uk
Abstract. CORAS is a research and technological development project under the Information Society Technologies (IST) Programme (Commission of the European Communities, Directorate-General Information Society). One of the main objectives of CORAS is to develop a practical framework, exploiting methods for risk analysis, semiformal methods for object-oriented modelling, and computerised tools, for a precise, unambiguous, and efficient risk assessment of security critical systems. This paper presents the CORAS framework and the related conclusions from the CORAS project so far.
1 Introduction
CORAS [1] is a research and technological development project under the Information Society Technologies (IST) Programme (Commission of the European Communities, Directorate-General Information Society). CORAS started up in January 2001 and runs until July 2003. The CORAS main objectives are as follows:
– To develop a practical framework, exploiting methods for risk analysis, semiformal methods for object-oriented modelling, and computerised tools, for a precise, unambiguous, and efficient risk assessment of security critical systems.
– To apply the framework in two security critical application domains: telemedicine and e-commerce.
– To assess the applicability, usability, and efficiency of the framework.
– To promote the exploitation potential of the CORAS framework.
2 The CORAS Framework
This section provides a high-level overview of the CORAS framework for a model-based risk management process. By “a model-based risk management process” we mean a tight integration of state-of-the-art UML-oriented modelling technology (UML = Unified Modeling Language) [2] in the risk management process. The CORAS model-based risk management process employs modelling technology for three main purposes:
– Providing descriptions of the target of analysis at the right level of abstraction.
– As a medium for communication and interaction between different groups of stakeholders involved in a risk analysis.
– To document results and the assumptions on which these results depend.
A model-based risk management process is motivated by several factors:
– Risk assessment requires correct descriptions of the target system, its context and all security relevant features. The modelling technology improves the precision of such descriptions. Improved precision is expected to improve the quality of risk assessment results.
– The graphical style of UML furthers communication and interaction between stakeholders involved in a risk assessment. This is expected to improve the quality of results, and also to speed up the risk identification and assessment process, since the danger of wasting time and resources on misconceptions is reduced.
– The modelling technology facilitates a more precise documentation of risk assessment results and the assumptions on which their validity depends. This is expected to reduce maintenance costs by increasing the possibilities for reuse.
– The modelling technology provides a solid basis for the integration of assessment methods that should improve the effectiveness of the assessment process.
– The modelling technology is supported by a rich set of tools from which the risk analysis may benefit. This may improve quality (as in the case of the first two bullets) and reduce costs (as in the case of the second bullet). It also furthers productivity and maintenance.
– The modelling technology provides a basis for tighter integration of the risk management process into the system development process. This may considerably reduce development costs and ensure that the specified security level is achieved.
The CORAS framework for a model-based risk management process has four main anchor-points: a system documentation framework based on the Reference Model for Open Distributed Processing (RM-ODP) [3], a risk management process based on the risk management standard AS/NZS 4360 [4], a system development process based on the Rational Unified Process (RUP) [5], and a platform
for tool-integration based on eXtensible Markup Language (XML) [6]. In the following we describe the four anchor-points and the model-based risk management process in further detail.

2.1 The CORAS Risk Management Process
The CORAS risk management process provides a sequencing of the risk management process into the following five sub-processes:
1. Context Identification: Identify the context of the analysis that will follow. The approach proposed here is to select usage scenarios of the system under examination.
2. Risk Identification: Identify the threats to assets and the vulnerabilities of these assets.
3. Risk Analysis: Assign values to the consequence and the likelihood of occurrence of each threat identified in sub-process 2.
4. Risk Evaluation: Identify the level of risk associated with the threats already identified and assessed in the previous sub-processes.
5. Risk Treatment: Address the treatment of the identified risks.
The initial experimentation with UML diagrams can be summarised as follows:
1. UML use case diagrams support the identification of both the users of a system (actors) and the tasks (use cases) they must undertake with the system. UML scenario descriptions can be used to give more detailed input to the identification of different usage scenarios in the CORAS risk management process.
2. UML class/object diagrams identify the classes/objects needed to achieve the tasks which the system must help to perform, and the relationships between the classes/objects. While class diagrams give the relationships between general classes, object diagrams present the instantiated classes. This distinction could be important when communicating with users of the system.
Fig. 1. Overview over the CORAS risk management process (Sub-process 1: Identify context; Sub-process 2: Identify risks; Sub-process 3: Analyse risks; Sub-process 4: Risk evaluation; Sub-process 5: Risk treatment)
3. UML sequence diagrams describe some aspects of system behaviour by, e.g., showing which messages are passed between objects and in what order they must occur. This gives a dynamic picture of the system and essential information for the identification of important usage scenarios.
4. UML activity diagrams describe how activities are co-ordinated and record the dependencies between activities.
5. UML state chart diagrams or UML activity diagrams can be used to represent state transition diagrams. The UML state chart diagram may be used to identify the sequence of state transitions that leads to a security break.

2.2 The CORAS System Documentation Framework
The CORAS system documentation framework is based on RM-ODP. RM-ODP defines the standard reference model for distributed systems architecture, based on object-oriented techniques, accepted at the international level. RM-ODP is adopted by ISO (ISO/IEC 10746 series: 1995) as well as by the International Telecommunication Union (ITU) (ITU-T X.900 series: 1995). As indicated by Figure 2, RM-ODP divides the system documentation into five viewpoints. It also provides modelling, specification and structuring terminology, a conformance module addressing implementation and consistency requirements, as well as a distribution module defining transparencies and functions required to realise these transparencies. The CORAS framework extends RM-ODP with:
1. Concepts and terminology for risk management and security.
2. Carefully defined viewpoint-perspectives targeting model-based risk management of security-critical systems.
3. Libraries of standard modelling components.
4. Additional support for conformance checking.
5. A risk management module containing risk assessment methods, risk management processes, and a specification of the international standards on which CORAS is based.
2.3 The CORAS Platform for Tool Integration
The CORAS platform is based on data integration implemented in terms of XML technology. Figure 3 outlines the overall structure. The platform is built up around an internal data representation formalised in XML/XMI (characterised by XML schema). Standard XML tools provide much of the basic functionality. This functionality allows experimentation with the CORAS platform and can be used by the CORAS crew during the trials. Based on the eXtensible Stylesheet Language (XSL), relevant aspects of the internal data representation may be mapped to the internal data representations of other tools (and the other way around). This allows the integration of sophisticated case-tools targeting system development as well as risk analysis tools and tools for vulnerability and threat management.
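As a rough illustration of this data-integration idea (a sketch only; the element names and the target format are assumptions on our part, not the actual CORAS schema or stylesheets), an internal XML record could be mapped to the input format of an external risk-analysis tool as follows:

    # Illustrative mapping from an assumed internal XML representation to a
    # simple target format, in the spirit of the XSL-based integration.
    import xml.etree.ElementTree as ET

    internal = ET.fromstring(
        "<threat id='T1'><asset>patient record</asset>"
        "<description>unauthorised disclosure</description></threat>")

    def to_tool_format(threat):
        """Produce a line-oriented record for an external risk-analysis tool."""
        return "{};{};{}".format(threat.get("id"),
                                 threat.findtext("asset"),
                                 threat.findtext("description"))

    print(to_tool_format(internal))   # T1;patient record;unauthorised disclosure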
Fig. 2. The main components of RM-ODP (viewpoints: enterprise, information, computation, engineering, technology; ODP foundations: modelling, specification and structuring terminology; conformance module: implementation and consistency requirements; distribution module: transparencies and functions)
2.4 The CORAS Model-Based Risk Management Process
The CORAS methodology for a model-based risk management process builds on:
– HAZard and OPerability study (HAZOP) [7];
– Fault Tree Analysis (FTA) [8];
– Failure Mode and Effect Criticality Analysis (FMECA) [9];
– Markov analysis methods (Markov) [10];
– Goals Means Task Analysis (GMTA) [11];
– CCTA Risk Analysis and Management Methodology (CRAMM) [12].
These methods are to a large extent complementary. They address confidentiality, integrity and availability as well as accountability; in fact, all types of risks, threats and hazards associated with the target system can potentially be revealed and dealt with. They also cover all phases in the system development and maintenance process. In addition to the selected methods, other methods may also be needed to implement the different sub-processes in the CORAS risk management process. So far two additional methods have been identified: Cause-Consequence Analysis (CCA) [13] and Event-Tree Analysis (ETA) [13]. The CORAS risk management process tightly integrates state-of-the-art technology for semiformal object-oriented modelling. Modelling is not only used to provide a precise description of the target system, but also to describe its context and possible threats. Furthermore, description techniques are employed to document the risk assessment results and the assumptions on which these results depend.
Fig. 3. The meta-model of the CORAS platform (XML internal representation at the centre; XML tools providing basic functionality; XSL mappings to commercial modelling tools, commercial risk analysis tools, and commercial vulnerability and threat management tools)
Finally, graphical UML-based modelling provides a medium for communication and interaction between different groups of stakeholders involved in risk identification and assessment. The following table gives a brief summary and a preliminary guideline as to which methods should be applied for which sub-process in the CORAS model-based risk management process. This guideline will be updated further during the progress of the CORAS project.

Risk management requires a firm but nevertheless easily understandable basis for communication between different groups of stakeholders. Graphical object-oriented modelling techniques have proved well suited in this respect for requirements capture and analysis. We believe they are equally suited as a language for communication in the case of risk management. Entity relation diagrams, sequence charts, dataflow diagrams and state diagrams represent mature paradigms used daily in the IT industry throughout the world. They are supported by a wide set of sophisticated case-tool technologies, they are to a large extent complementary and, together, they support all stages in a system development. Policies related to risk management and security are important input to risk assessment of security critical systems. Moreover, results from a risk assessment will often indicate the need for additional policies. Ponder [14] is a very expressive declarative, object-oriented language for specifying security and management policies for distributed systems. Ponder may benefit from an integration with graphical modelling techniques.
Table 1. How the RA methods apply to the CORAS risk management process
  Context Identification – Goal: Identify the context of the analysis (e.g. areas of concern, assets and security requirements). Recommended methods: CRAMM, HAZOP.
  Risk Identification – Goal: Identify threats. Recommended methods: HAZOP.
  Risk Analysis – Goal: Find consequence and likelihood of occurrence. Recommended methods: FMECA, CCA, ETA.
  Risk Evaluation – Goal: Evaluate risk (e.g. risk level, prioritise, categorise, determine interrelationships). Recommended methods: CRAMM.
  Risk Treatment – Goal: Identify treatment options and assess alternative approaches. Recommended methods: HAZOP.
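If a supporting tool were to hold this guideline, it could be encoded directly as a lookup table; the following Python sketch is purely illustrative and not part of the CORAS platform:

    # Table 1 as a lookup table (illustrative encoding).
    RECOMMENDED = {
        "Context Identification": ["CRAMM", "HAZOP"],
        "Risk Identification":    ["HAZOP"],
        "Risk Analysis":          ["FMECA", "CCA", "ETA"],
        "Risk Evaluation":        ["CRAMM"],
        "Risk Treatment":         ["HAZOP"],
    }

    def methods_for(sub_process):
        return RECOMMENDED.get(sub_process, [])

    print(methods_for("Risk Analysis"))   # ['FMECA', 'CCA', 'ETA']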
Although the four kinds of graphical modelling techniques and Ponder are very general paradigms, they do not always provide the required expressiveness. Predicate logic based approaches like OCL [15], in addition to contract-oriented modelling, are therefore also needed.

2.5 The CORAS System Development and Maintenance Process
The CORAS system development and maintenance process is based on an integration of the AS/NZS 4360 standard for risk management and an adaptation of the RUP for system development. RUP is adapted to support RM-ODP inspired viewpoint oriented modelling. Emphasis is placed on describing the evolution of the correlation between risk management and viewpoint oriented modelling throughout the system’s development and maintenance lifecycle. In analogy to RUP, the CORAS process is both stepwise incremental and iterative. In each phase of the system lifecycle, sufficiently refined versions of the system (or its model) are constructed through subsequent iterations. Then the system lifecycle moves from one phase into another. In analogy to the RM-ODP viewpoints, the viewpoints of the CORAS framework are not layered; they are different abstractions of the same system focusing on different areas of concern. Therefore, information in all viewpoints may be relevant to all phases of the lifecycle.
3 Standard Modeling Components
Much is common from one risk assessment to the next. CORAS aims to exploit this by providing libraries of reusable specification fragments targeting the risk management process and risk assessment. These reusable specification fragments are in the following referred to as standard modelling components. They will typically be UML diagrams annotated with constraints expressed in OCL
(Object Constraint Language), or in other specification languages suitable for this purpose. The process of developing standard modelling components will continue in the CORAS project. In this phase the focus has been to:
1. Build libraries of standard modelling components for the various security models developed in CORAS.
2. Provide guidelines for the structuring and maintenance of standard modelling components.
The following preliminary results and conclusions have been reached:
1. Standard modelling components may serve multiple purposes in a process for model-based risk management. They can represent general patterns for security architectures, or security policies. They can also represent the generic parts of different classes of threat scenarios, as well as schemes for recording risk assessment results and the assumptions on which they depend.
2. In order to make effective use of such a library, there is a need for a computerised repository supporting standard database features like storage, update, rule-based use, access, search, maintenance, configuration management, version control, etc.
3. XMI offers a standardised textual XML-based representation of UML specifications. Since UML is the main modelling methodology of the CORAS framework and XML has been chosen as the main CORAS technology for tool integration, the repository should support XMI-based exchange of models.
4. The UML meta-model is defined in the Meta Object Facility (MOF) [16]. In relation to a CORAS repository, MOF may serve as a means to define a recommended subset of UML for expressing standard modelling components, required UML extensions to support a model-based risk management process, as well as the grammar of component packets. The repository should therefore be MOF based.
5. To support effective and smooth development of a consistent library, a single CORAS repository that all partners in the consortium may access via the Internet would be useful.
6. The OMG standards MOF and XMI ensure open access to the library and flexible exchange of standard modelling components between tools. There are already commercial tools for building repositories supporting MOF and XMI on the market; others are under development. The consortium will formalise the library of standard modelling components in terms of MOF and XMI, using a UML CASE-tool suitable for this purpose.
4 The CORAS Trials
The trials in CORAS are performed within two different areas: e-commerce and telemedicine. The purpose of the trials is to experiment with all aspects of the framework during its development, provide feedback for improvements and offer an overall assessment.
Fig. 4. An example of the UML sequence diagram used in the trial (lifelines: Web Client, Web Server, Application Server, Data Storage; messages: reqLogin(), Create(sn), Add(sn), return(status), retLoginPage(sn))
The first e-commerce trial was based on the authentication mechanism. Among other models, an indicative UML sequence diagram for starting the FMECA method (see Figure 4) was used. It is important to stress that the sequence diagram presented here is only one example of the typical possible behaviours of the system. Scenarios like unsuccessful login, a visitor accessing the platform, registration of a new user, etc., could also be modelled. A more detailed description of the CORAS trials will be provided in the reports from the CORAS project. This trial focused on sub-process 2 (identify risks) and on gaining familiarity with the use of CRAMM, HAZOP, FTA and FMECA for this purpose. The results from the first e-commerce trial are divided into four partly overlapping classes:
1. Experiences with the use of the specific risk analysis methods.
2. Experiences from the overall process.
3. Input to changes to the way the trials are performed.
4. Input to minor changes of the CORAS risk management process.

4.1 Experiences with the Use of the Specific Risk Analysis Methods
The individual methods used during the first e-commerce risk analysis trial session provided the following main results:
– CRAMM was useful for identification of important system assets.
– HAZOP worked well with security-related guidewords/attributes [17] that reflected the security issues addressed.
– FTA was useful for structured/systematic risk analysis, but was time-consuming and might present scalability problems.
– FMEA worked well, but has to be well organised before it is applied; it may even be prepared beforehand by the system developers.
The trial also demonstrated, through the interactions between the models on the board and the risk analysis methods, that model-based risk assessment provides an effective medium for communication and interaction between different groups of stakeholders involved in a risk assessment.

4.2 Experiences from the Overall Process
The CORAS risk management process was initially difficult to follow without guidance from experienced risk analysts. In particular, the interfacing between models and the objective of using each method were not initially clear. During the process it became obvious that sufficient input of documentation, including models, was critical to obtaining valuable results. The process did, however, provide identification of threats, and some important issues were discovered despite time limitations. The different methods provided complementary results, and the application of more than one method was very successful.

4.3 Input to Changes to the Way the Trials Are Performed
One of the objectives of the first e-commerce trial was to provide input on how the following trials should be performed. Four major issues were addressed:
1. The trials should be more realistic, regarding the people that participate, the duration and the functionality that is analysed.
2. The CORAS risk management process should be followed more formally.
3. Documentation, including models, should be provided in sufficient time before the trial so that clarifications can be provided in time.
4. Tool support for the different risk analysis methods would make the application of the methods more productive.

4.4 Input to Minor Changes of the CORAS Risk Management Process
The major results from this trial for the subsequent updates of the CORAS risk management process are:
1. Guidelines for the application of the CORAS risk management process need to be provided;
2. The terminology in use needs to be defined in more detail; and
3. Templates for the different risk analysis methods need to be available.
5 Conclusions
This paper presents the preliminary CORAS model-based risk management process. The main objective of the CORAS project is to develop a framework to support risk assessment of security critical systems, such as telemedicine or e-commerce systems. The hypothesis that risk analysis methods traditionally used in a safety context can also be applied in a security context has been evaluated, and will be evaluated further during the forthcoming trials. This paper also presents the experiences from the first trial in the project. The different methods provided complementary results, and the use of more than one method seemed to be an effective approach. The first trial experiences also demonstrated the advantages of the interactions between the models on the board and the risk analysis methods. In addition, the trial provided the identification of threats and some important issues for further follow-up. The trials to be performed during spring 2002 will provide feedback for updated versions of the recommendations developed in the CORAS project.
References
[1] CORAS: A Platform for Risk Analysis of Security Critical Systems. IST-2000-25031 (2000). http://www.nr.no/coras/
[2] OMG: UML Proposal to the Object Management Group (OMG), Version 1.4, 2000.
[3] ISO/IEC 10746: Basic Reference Model of Open Distributed Processing, 1999.
[4] AS/NZS 4360: Risk Management. Australian/New Zealand Standard, 1999.
[5] Kruchten, P.: The Rational Unified Process: An Introduction. Addison-Wesley (1999)
[6] W3C: Extensible Markup Language (XML) 1.0, October 2000.
[7] Redmill, F., Chudleigh, M., Catmur, J.: Hazop and Software Hazop. Wiley, 1999.
[8] Andrews, J. D., Moss, T. R.: Reliability and Risk Assessment, 1st Ed. Longman Group UK, 1993.
[9] Bouti, A., Kadi, A. D.: A State-of-the-Art Review of FMEA/FMECA. International Journal of Reliability, Quality and Safety Engineering, vol. 1, no. 4, pp. 515–543, 1994.
[10] Littlewood, B.: A Reliability Model for Systems with Markov Structure. Appl. Stat., 24 (2), pp. 172–177, 1975.
[11] Hollnagel, E.: Human Reliability Analysis: Context and Control. Academic Press, London, UK, 1993.
[12] Barber, B., Davey, J.: Use of the CRAMM in Health Information Systems. In: MEDINFO 92, ed. Lun, K. C., Degoulet, P., Piemme, T. E. and Rienhoff, O., North Holland Publishing Co, Amsterdam, pp. 1589–1593, 1992.
[13] Henley, E. J., Kumamoto, H.: Probabilistic Risk Assessment and Management for Engineers and Scientists, 2nd Ed. IEEE Press, 1996.
[14] Damianou, N., Dulay, N., Lupu, E., Sloman, M.: Ponder: A Language for Specifying Security and Management Policies for Distributed Systems. The Language Specification, Version 2.2. Research Report DoC 2000/1, Department of Computing, Imperial College, London, April 2000.
[15] Warmer, J. B., Kleppe, A. G.: The Object Constraint Language: Precise Modeling with UML. Addison-Wesley, 1999.
[16] OMG: Meta Object Facility. Object Management Group (OMG), http://www.omg.org
[17] Winther, R., Johansen, O.-A., Gran, B. A.: Security Assessments of Safety Critical Systems Using HAZOPs. In: Voges, U. (Ed.): SAFECOMP 2001, LNCS 2187, pp. 14–24, Springer-Verlag, Berlin Heidelberg, 2001.
Software Challenges in Aviation Systems

John C. Knight
Department of Computer Science, University of Virginia, 151 Engineer's Way, P.O. Box 400740, Charlottesville, VA 22904-4740, USA
[email protected]

Abstract. The role of computers in aviation is extensive and growing. Many crucial systems, both on board and on the ground, rely for their correct operation on sophisticated computer systems. This dependence is increasing as more and more functionality is implemented using computers and as entirely new systems are developed. Several new concepts are being developed specifically to address current safety issues in aviation, such as runway incursions. This paper summarizes some of the system issues and the resulting challenges to the safety and software engineering research communities.
1 Introduction
The operation of modern commercial air transports depends on digital systems for a number of services. Some of these services, e.g., autopilots, operate on board, and others, e.g., current air-traffic management systems, operate on the ground. In many cases, the systems interact with each other via data links of one form or another, e.g., ground-system interrogation of on-board transponders, and aircraft broadcast of position and other status information [1]. This dependence on digital systems also extends to general aviation aircraft in a significant way. In most cases, digital systems in aviation are safety-critical. Some systems, such as a primary flight-control system [2], are essential for normal aircraft operation. Others, such as some displays and communications systems, are important but only crucial under specific circumstances or at specific times. Any complex digital system will be software intensive, and so the correct operation of many aviation systems relies upon the correct operation of the associated software. The stated requirement for the reliability of a flight-crucial system on a commercial air transport is 10^-10 failures per hour, where a failure could lead to loss of the aircraft [2, 3]. This is a system requirement, not a software requirement, and so it is not the case that software need only meet this goal; software must exceed it, because hardware components of the system will not be perfect. The development of digital aviation systems presents many complex technical challenges because the dependability requirements are so high. Some of the difficulties encountered are summarized in this paper. In the next section, aviation systems are reviewed from the perspectives of enhanced functionality and enhanced safety, and the characteristics of such systems are discussed. In Section 3, some of the challenges that arise in software engineering are presented.
2 Aviation Systems

2.1 Enhanced Functionality
The trend of reduced digital hardware costs and the coincident reduction in hardware size and power consumption has led to an increasing use of digital systems in aviation. In some cases, digital implementations have replaced older analog-based designs. In other cases, entirely new concepts become possible thanks to digital systems. An example of the former is autopilots. Autopilots used to be based on analog electronics but are now almost entirely digital. The basic ideas behind the operation of an autopilot have remained the same through this transition, but modern digital autopilots are characterized by greater functionality and flexibility. Examples of entirely new concepts are modern full-authority digital engine controllers (FADECs) and envelope protection systems. FADECs manage large aircraft engines and monitor their performance with a sophistication that would be essentially impossible in anything but a digital implementation. Similarly, comprehensive envelope protection is only possible using a digital implementation. Functionality enhancement is taking place in both on-board and ground-based systems. Flight deck automation is very extensive, and this has led to the use of the term “glass cockpit”, since most information displays are now computer displays [4, 5]. Ground-based automation is extensive and growing. Much of the development that is taking place is designed to support Free Flight [6] and the Wide Area Augmentation System (WAAS) [7], a GPS-based precision guidance system for aircraft navigation and landing. Both Free Flight and WAAS depend heavily on computing and digital communications.

It is difficult to obtain accurate estimates of the number of processing units, the precise communications architecture, and the amount of software in an aviation system, for many reasons. It is sometimes not clear what constitutes “a processor”, for example, because so much specialized electronics is involved. Similarly, software is sometimes held in read-only memories and called “firmware” rather than software. In addition, digital systems are often used for non-safety-related functions and so are not of interest. Finally, many of the details of digital systems in aviation applications are considered proprietary and are not made available. Although some details are not available, it is clear that there are many safety-critical digital systems in present aviation applications. It is also clear that these systems are extremely complex in many cases. Both aircraft on-board systems and ground-based systems are often sophisticated computer networks, and these systems also interact. In some cases, such as WAAS, the architecture is a wide-area network with very high dependability and real-time performance requirements. Given the continuing technological trends, it is to be expected that there will be many more such systems in the future.

2.2 Enhanced Safety
The stimulus for developing new and enhanced digital systems is evolving. While the change from analog to digital implementation of major systems will no doubt continue, there are major programs underway to develop techniques that will address safety issues explicitly [8].
Three of the major concerns in aviation safety are: (1) accidents caused by Controlled Flight Into Terrain (CFIT); (2) collisions during ground operations, takeoff, or landing; and (3) mechanical degradation or failure. CFIT occurs when a perfectly serviceable aircraft under the control of its pilots impacts the ground, usually because the crew was distracted. CFIT was involved in 37% of 76 approach and landing accidents or serious incidents from 1984-97 [9, 10, 11], and CFIT incidents continue to occur [12]. The prevention of collisions on the ground is a major goal of the Federal Aviation Administration [13]. During the decade of the 1990's, 16 separate accident categories (including “unknown”) were identified in the world-wide commercial jet fleet [14]. The category that was responsible for the most fatalities (2,111) was CFIT. An analysis of these categories by Miller has suggested that nine of the categories (responsible for 79% of the accidents) might be addressable by automation [15]. Thus, there is a very strong incentive to develop new technologies to address safety explicitly, and this, together with the rapidly rising volume of commercial air traffic, is the motivation for the various aviation safety programs [8].

These new programs are expected to yield entirely new systems that will enhance the safe operation of aircraft. The Aircraft Condition Analysis and Management System (ACAMS), for example, is designed to diagnose and predict faults in various aircraft subsystems so as to assess the flight integrity and airworthiness of those aircraft subsystems [16]. The ACAMS system operates with on-board components that diagnose problems and ground-based components that inform maintenance and other personnel. Another important new direction in aviation safety is structural health monitoring. The concept is to develop systems that will perform detailed observation of aircraft structures in real time during operation. They are expected to provide major benefits by warning of structural problems such as cracks while they are of insignificant size. The approach being followed is to develop sensors that can be installed in critical components of the airframe and to use computers to acquire and analyze the data returned by the sensors. For an example of such a system, see the work of Munns et al. [17]. A significant innovation in ground-based systems is automatic alerts of potential runway incursions. In modern airports, the level of ground traffic is so high that various forms of traffic entering runways being used for flight operations are difficult to prevent. The worst accident in aviation history, with 583 fatalities, occurred in Tenerife, Canary Islands in March 1977, and was the result of a runway incursion. Research is underway to develop systems that will warn pilots of possible incursions so that collisions can be avoided [18].

2.3 Characteristics of Enhanced System
Inevitably, new aviation systems, whether for enhanced functionality or enhanced safety, will be complex—even more so than current systems. Considerable hardware will be required for the computation, storage and communication that will be
required, and extensive hardware replication will be present to address dependability goals. Replication will, in most cases, have to go beyond simple duplication or triplication because the reliability requirements cannot be met with these architectures. Replication will obviously extend also into power and sensor subsystems. The functional complexity of the systems being designed is such that they will certainly be software intensive. But functionality is not the only requirement that will be addressed by software. Among other things, it will be necessary to develop extensive amounts of software to manage redundant components, to undertake error detection in subsystems such as sensors and communications, and to carry out routine health monitoring and logging. The inevitable conclusion of a brief study of the expected system structures is that very large amounts of ultra-dependable software will be at the heart of future aviation systems. It is impossible to estimate the total volume of software that might be expected in a future commercial transport, but it is certain that the number of lines will be measured in hundreds of millions. Not all of that software will be flight crucial, but much of it will be.
3 Software Challenges
The development of software for future aviation applications will require that many technical challenges be addressed. Most of these challenges derive from the required dependability goal and the approaches that might be used to meet it. An important aspect of the goal is assurance that the goal is met. In this section six of the most prominent challenges are reviewed. These six challenges are:

• Requirements Specification
Erroneous specification is a major source of defects and subsequent failures of safety-critical systems. Many failures occur in systems whose software is perfect; it is just not the software that is needed, because the specification is defective. Vast amounts of research have been conducted in specification technology, but errors in specifications continue to occur. It is clear that the formal languages which have been developed offer tremendous advantages, yet they are rarely used even for the development of safety-critical software.
• Verification
Verification is a complex process. Testing remains the dominant approach to verification, but testing is able to provide assurance only in the very simplest of systems. It has been shown that it is impossible to assess ultra-high dependability using testing in a manner reminiscent of statistical sampling, a process known as life testing [19, 20]; a short calculation after this list illustrates the point. The only viable alternative is to use formal verification, and case studies in the use of formal verification have been quite successful. However, formal verification presently has many limitations, such as floating-point arithmetic and concurrent systems, that preclude its
comprehensive and routine use in aviation systems. In addition, formal verification is usually applied to a relatively high-level representation of the program, such as a high-level programming language. Thus it depends upon a comprehensive formal semantic definition of the representation and an independent verification of the process that translates the high-level representation to the final binary form.

• Application Scale
Building the number of ultra-dependable systems that will be required in future aviation systems will not be possible with present levels of productivity. The cost of developing a flight-crucial software system is extremely high because large amounts of human effort are employed. Far better synthesis and analysis tools and techniques are required that provide the ability to develop safety-critical software having the requisite dependability with far less effort.
• Commercial Off The Shelf Components
The use of commercial-off-the-shelf (COTS) components as a means of reducing costs is attractive in all software systems. COTS components are used routinely in many application domains, and the result is a wide variety of inexpensive components with impressive functionality, including operating systems, compilers, graphics systems and network services. In aviation systems, COTS components could be used in a variety of ways were it not for the issue of dependability. If an aviation system is to meet the required dependability goals, it is necessary to base any dependability argument on extensive knowledge of everything used in building the system. This knowledge must include knowledge of the system itself as well as of all components in the environment that are used to produce the final binary form of the software. COTS components, no matter what their source, are built for a mass market. As such, they are not built to meet the requirements of ultra-dependable applications; they are built to meet the requirements of the mass market. Making the situation worse is the fact that COTS components are sold in binary form only. The source code and details of the development process used in creating a COTS component are rarely available. Even if they are available, they usually reflect a development process that does not have the rigor necessary for ultra-dependable applications. If COTS components are to be useful in safety-critical aviation applications, it will be necessary to develop techniques to permit complete assurance that defects in the COTS components cannot affect safety.
• Development Cost And Schedule Management
Managing the development of major software systems and estimating the cost of that development have always been difficult, but they appear to be especially difficult for aviation systems. Development of the WAAS system, for example, was originally estimated to cost $892.4M, but the current program cost estimate is $2,900M. Deployment of WAAS was originally expected to begin in 1998 and finish in 2001; the current deployment schedule is to start in 2003, and no date for completion has been projected. WAAS is not an isolated
example [21]. The need to develop many systems of the complexity of WAAS indicates that success will depend on vastly improved cost estimation and project management.

• System Security
Many future aviation systems will be faced with the possibility of external threats. Unless a system is entirely self-contained, any external digital interface represents an opportunity for an adversary to attack the system. It is not necessary for an adversary to have physical access: of necessity, many systems will communicate by radio, and digital radio links present significant opportunities for unauthorized access. Present critical networks are notoriously lacking in security, and this problem must be dealt with for aviation systems. Even something as simple as a denial-of-service attack, effected by swamping data links or by jamming radio links, could have serious consequences if the target were a component of the air-traffic network. Far worse is the prospect of intelligent tampering with the network so as to disrupt service. Dealing with tampering requires effective authentication. Again, this is not a solved problem, and it must be dealt with if aviation systems are to be trustworthy.
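To see why life testing cannot establish the 10^-10 per hour requirement quoted in the introduction, consider the standard zero-failure demonstration model. The following Python fragment is our own back-of-the-envelope illustration (the exponential model and the confidence level are assumptions), not a calculation from the paper or from [19, 20]:

    # Failure-free test time needed to demonstrate a failure rate with a given
    # confidence, under the usual exponential (zero-failure) demonstration model:
    #   t = -ln(1 - C) / lambda
    import math

    def required_test_hours(failure_rate_per_hour: float, confidence: float) -> float:
        return -math.log(1.0 - confidence) / failure_rate_per_hour

    hours = required_test_hours(1e-10, 0.99)
    print(f"{hours:.3g} test hours, about {hours / 8760:.3g} years")
    # ~4.61e10 hours, i.e. on the order of five million years of failure-free testing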
4 Summary
The application of computers in aviation systems is increasing, and the range of applications being developed is expanding. If the requisite productivity and dependability goals for these systems are to be met, significant new technology will be required. Further details about many aspects of aviation in general and safety in particular can be found from many sources, including the Federal Aviation Administration [22], the National Transportation Safety Board [23], NASA's Aviation Safety program [8], NASA's Aviation Safety Reporting System [24], Honeywell International, Inc. [25], and Rockwell Collins, Inc. [26].
Acknowledgments It is a pleasure to thank Ms. Kelly Hayhurst of NASA’s Langley Research Center for her suggestions for the content of this paper. This work was supported in part by NASA under grant number NAG-1-2290.
References
1. Automatic Dependent Surveillance – Broadcast (ADS-B) System. http://www.ads-b.com
2. Yeh, Y.C.: Design Considerations in Boeing 777 Fly-By-Wire Computers. 3rd IEEE International High-Assurance Systems Engineering Symposium (1998)
3. RTCA Incorporated: Software Considerations in Airborne Systems and Equipment Certification. RTCA document number RTCA/DO-178B (1992)
4. Swenson, E.H.: Into The Glass Cockpit. Navy Aviation News (May-June 1998). http://www.history.navy.mil/nan/1998/0598/cockpit.pdf
5. Inside The Glass Cockpit. IEEE Spectrum. http://www.spectrum.ieee.org/publicaccess/0995ckpt.html
6. Federal Aviation Administration: Welcome to Free Flight. http://ffp1.faa.gov/home/home.asp
7. Federal Aviation Administration: Wide Area Augmentation System. http://gps.faa.gov/Programs/WAAS/waas.htm
8. NASA Aviation Safety Program. http://avsp.larc.nasa.gov/
9. Aviation Week and Space Technology, Industry Outlook (January 15, 2001)
10. Aviation Week and Space Technology, Industry Outlook (November 27, 2000)
11. Aviation Week and Space Technology, Industry Outlook (July 17, 2000)
12. Bateman, Donald: CFIT Accident Statistics. Honeywell International Incorporated. http://www.egpws.com/general_information/cfitstats.htm
13. Aviation Week and Space Technology, Industry Outlook (June 26, 2000)
14. Aviation Week and Space Technology (July 2001)
15. Miller, S.: personal communication (2002)
16. ARINC Engineering Services LLC: Aircraft Condition Analysis and Management System. http://avsp.larc.nasa.gov/images_saap_ACAMSdemo.html
17. Munns, T.E. et al.: Health Monitoring for Airframe Structural Characterization. NASA Contractor Report 2002-211428, February 2002
18. Young, S.D., Jones, D.R.: Runway Incursion Prevention: A Technology Solution. Flight Safety Foundation's 54th Annual International Air Safety Seminar / International Federation of Airworthiness' 31st International Conference, Athens, Greece (November 2001)
19. Finelli, G.B., Butler, R.W.: The Infeasibility of Quantifying the Reliability of Life-Critical Real-Time Software. IEEE Transactions on Software Engineering, pp. 3-12 (January 1993)
20. Ammann, P.A., Brilliant, S.S., Knight, J.C.: The Effect of Imperfect Error Detection on Reliability Assessment via Life Testing. IEEE Transactions on Software Engineering, pp. 142-148 (February 1994)
21. U.S. Department of Transportation, memorandum from the Inspector General to various addressees: Status of Federal Aviation Administration's Major Acquisitions (February 22, 2002). http://www.oig.dot.gov/show_pdf.php?id=701
22. Federal Aviation Administration. http://www.faa.gov
23. National Transportation Safety Board. http://www.ntsb.gov
24. NASA Aviation Safety Reporting System. http://asrs.arc.nasa.gov/
25. Honeywell International Incorporated: Enhanced Ground Proximity Warning Systems. http://www.egpws.com/
26. Rockwell Collins Incorporated. http://www.rockwellcollins.com
A Strategy for Improving the Efficiency of Procedure Verification

Wenhui Zhang
Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, P.O. Box 8718, 100080 Beijing, China
[email protected]

Abstract. Verification of operating procedures by model checking has been discussed in [11, 12]. As an execution of a procedure may affect or be affected by many processes, a model of the procedure together with its related processes could be very large. We modify the procedure verification approach of [11, 12] by introducing two strategies that make use of detailed knowledge of procedures in order to reduce the complexity of model checking. A case study demonstrates the potential advantages of the strategies and shows that they may improve the efficiency of procedure verification significantly and therefore scale up the applicability of the verification approach.
1 Introduction
Operating procedures are documents telling operators what to do in various situations. They are widely used in process industries, including the nuclear power industry. The correctness of such procedures is of great importance to the safe operation of power plants [7, 8]. Verification of operating procedures by model checking has been discussed in [11, 12]. For the verification of a procedure, the basic approach is to formalize the procedure specification, formulate logic formulas (or assertions) for correctness requirements (such as invariants and goals), create an abstract model of relevant plant processes, and specify a set of the possible initial states of the plant. In order to obtain a compact model, we may use different techniques including cone of influence reduction [1], semantic minimization [10], state information compression [4], and other types of abstraction techniques [2, 9]. After we have created these specifications, we use a model checker to verify the specification against the logical formulas, the plant processes and the initial states. As an execution of a procedure may affect or be affected by many processes, a model of the procedure with its related processes could be very large. It is necessary to have strategies for reducing this complexity in order to scale up the applicability of model checking. In this paper, we introduce two strategies that make use of detailed knowledge of procedures in order to reduce the complexity of model checking. Let the model be a parallel composition of the processes P, S1, ..., Sn, where P is the model of the procedure and S1, ..., Sn are the related processes referred to as the environment
processes. Unlike models where all processes are equally important, in models of this type we are only interested in the correctness of P. This simplifies the verification task, and we can take advantage of this fact and use specialized strategies for procedure verification. Formally, the problem could be stated as P || S1 || ... || Sn |= ϕ, where we are interested in the correctness of P with respect to ϕ in the environment consisting of the processes S1, ..., Sn running in parallel with P. The two strategies (where one is a modification of the other) for increasing the efficiency of the verification of such problems are presented in Section 2. In Section 3, we propose a modification of the procedure verification approach presented in [11, 12] and present a case study to demonstrate the potential advantages of the strategies. In Section 4, we present a discussion of the strategies and the conclusion that the strategies may improve the efficiency of procedure verification significantly and scale up the applicability of the procedure verification approach presented in [11, 12].
2 Verification Strategies
Let T be a system and x be the global variable array of T. The system is in state v if the value of x at the current moment is v. A trace of T is a sequence of states. The property of such a trace can be specified by propositional linear temporal logic (PLTL) formulas [3].
– ϕ is a PLTL formula if ϕ is of the form z = w, where z ∈ x and w is a value.
– Logical connectives of PLTL include ¬, ∧, ∨ and →. If ϕ and ψ are PLTL formulas, then so are ¬ϕ, ϕ ∧ ψ, ϕ ∨ ψ, and ϕ → ψ.
– Temporal operators include X (next-time), U (until), <> (future) and [] (always). If ϕ and ψ are PLTL formulas, then so are X ϕ, ϕ U ψ, <>ϕ, and []ϕ.
Let t be a trace of T. Let HEAD(t) be the first element of t and TAILi(t) be the trace constructed from t by removing the first i elements of t. For convenience, we write TAIL(t) for TAIL1(t). The relation "t satisfies ϕ", written t |= ϕ, is defined as follows:
t |= x = v    iff  the statement x = v is true in HEAD(t).
t |= ¬ϕ       iff  t ⊭ ϕ.
t |= ϕ ∧ ψ    iff  t |= ϕ and t |= ψ.
t |= ϕ ∨ ψ    iff  t |= ϕ or t |= ψ.
t |= ϕ → ψ    iff  t |= ϕ implies t |= ψ.
t |= X ϕ      iff  TAIL(t) |= ϕ.
t |= ϕ U ψ    iff  ∃k such that TAILk(t) |= ψ and TAILi(t) |= ϕ for 0 ≤ i < k.
t |= <>ϕ      iff  ∃k such that TAILk(t) |= ϕ.
t |= []ϕ      iff  t |= ϕ and TAIL(t) |= []ϕ.
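To make the definition above concrete, the following sketch (in Python; not part of the original paper) evaluates PLTL formulas over a finite prefix of a trace, with the simplifying assumption that the last state repeats forever. The tuple-based formula representation and the dictionary-based states are illustrative choices, not anything prescribed by the paper.

# Minimal sketch of the PLTL satisfaction relation over (finite prefixes of) traces.
# A state is a dict {variable: value}; a trace is a non-empty list of states whose
# last state is assumed to repeat forever. Formulas are nested tuples, e.g.
# ('U', phi, psi), ('[]', phi), ('<>', phi), ('=', 'x', v).

def head(t):
    return t[0]

def tail(t, i=1):
    # TAIL_i(t): drop the first i states, but never return an empty trace.
    return t[i:] if len(t) > i else t[-1:]

def sat(t, f):
    op = f[0]
    if op == '=':                     # atomic proposition x = v
        _, x, v = f
        return head(t).get(x) == v
    if op == 'not':
        return not sat(t, f[1])
    if op == 'and':
        return sat(t, f[1]) and sat(t, f[2])
    if op == 'or':
        return sat(t, f[1]) or sat(t, f[2])
    if op == '->':
        return (not sat(t, f[1])) or sat(t, f[2])
    if op == 'X':
        return sat(tail(t), f[1])
    if op == 'U':                     # phi U psi
        for k in range(len(t)):
            if sat(tail(t, k), f[2]):
                return True
            if not sat(tail(t, k), f[1]):
                return False
        return False
    if op == '<>':                    # future (eventually)
        return any(sat(tail(t, k), f[1]) for k in range(len(t)))
    if op == '[]':                    # always
        return all(sat(tail(t, k), f[1]) for k in range(len(t)))
    raise ValueError(f"unknown operator {op!r}")

# Example: on this trace, [](r1 != 1) holds, i.e. not <>(r1 = 1).
trace = [{'r1': 0}, {'r1': 0}, {'r1': 0}]
assert sat(trace, ('not', ('<>', ('=', 'r1', 1))))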
Let Tr(T) be the set of the traces of T and ϕ be a propositional linear temporal logic formula. T satisfies ϕ, written T |= ϕ, if and only if ∀t ∈ Tr(T). (t |= ϕ). Let e(T) be a modification of T which extends the global variable array of T and adds assignment statements of the form y = expr, where y ∈ x, to the processes of T. Let ϕ(x) be an X-free (X for next-time) propositional linear temporal logic formula that only involves variables of x. The following rule is sound:
e(T) |= ϕ(x) → T |= ϕ(x)    (R1)
The set of traces of T can be constructed from that of e(T) by deleting the variables not in x and their respective values from the states of e(T), and deleting (some of) the stuttering states. These actions do not affect the validity of ϕ(x), since the formula does not involve variables outside x and does not involve the temporal operator X. The condition that ϕ(x) be an X-free formula can be relaxed if the additional assignments can be attached to existing statements with atomic constructions. The intention is to verify e(T) |= ϕ(x) instead of T |= ϕ(x). However, additional effort is necessary in order to get benefit out of this construction, because the problem e(T) |= ϕ(x) is more complicated than T |= ϕ(x), as there are additional variables and additional assignment statements. The idea is therefore to utilize the additional variables in e(T) to add procedure knowledge in order to improve the efficiency of model checking. We consider two verification strategies:
– The first one is referred to as STR-1 reduction; it utilizes the knowledge of the process P in order to discard traces with irrelevant execution statements.
– The second is referred to as STR-2 reduction; it is a modification of the first strategy in which the knowledge is inserted into the environment processes.
2.1 STR-1 Reduction
This strategy is based on deduction rule (R1). Instead of verifying P || S1 || · · · || Sn |= ϕ we consider two steps:
1. Extend P to e(P). The problem to be verified after this step is: e(P) || S1 || · · · || Sn |= ϕ. According to the deduction rule, a verification of e(P) || S1 || · · · || Sn |= ϕ is also a verification of P || S1 || · · · || Sn |= ϕ.
2. Write a formula ψ to represent some useful knowledge of the procedure execution, to be used to improve the efficiency of model checking. The problem to be verified after this step is: e(P) || S1 || · · · || Sn |= ψ → ϕ. The purpose is to use ψ to discard (or eliminate at an early point of the model checking process) traces that do not affect the validity of ϕ (provided that the procedure knowledge is correctly represented).

Example: Consider an example where we have two processes S1 and S2 specified in Promela (the process meta-language provided with the model checker Spin [5, 6]) as follows:

proctype S1() {
  byte i, k, num;
  atomic{ c0?num; c0!num; }
  do
  :: k < num  -> i = i+1; k = k+i;
  :: k > num  -> c1!0;
  :: k == num -> c1!1;
  od;
}

proctype S2() {
  byte i, num;
  atomic{ c0?num; c0!num; }
  do
  :: i*i < num  -> i = i+1;
  :: i*i > num  -> c2!0;
  :: i*i == num -> c2!1;
  od;
}

S1 is a process that reads a value from the channel c0 and tests whether it equals the sum 1 + 2 + ... + k for some k. It reports through the channel c1, with 1 meaning that it has found a k such that n1 = 1 + 2 + ... + k. S2 is a process that reads a value and tests whether it equals k^2 for some k. It reports through the channel c2 in a similar manner. Suppose that we have a procedure which first puts a number in the channel c0 and then uses S1 or S2 for performing a test to determine the property of the number. Suppose further that the choice of using S1 or S2 is determined by an input from the environment, modeled by the process E as follows.
proctype P() {
  c0!n1;
  e0?a1;
  if
  :: a1 == 1; do :: c1?r1; od;
  :: a1 == 2; do :: c2?r1; od;
  fi;
}

proctype E() {
  if
  :: e0!1;
  :: e0!2;
  fi;
}

The process E puts randomly 1 or 2 into e0. The process P puts the number n1 into c0, reads from e0 and gets the reported values through channel c1 or c2 according to the input from e0. Assume that we want to verify that there is no k such that n1 = 1 + 2 + ... + k or n1 = k^2, and that we represent the property by ϕ : [](r1 ≠ 1), i.e. we want to verify: P || E || S1 || S2 |= ϕ. The knowledge we gained from the construction of the Promela model tells us that Si (with i = 1, 2 respectively) is not relevant with respect to the execution of P unless P executes along the path with a1 == i. We apply the STR-1 reduction strategy and specify P, i.e. e(P), as follows (the changes are the statements involving the new variable b):

proctype P() {
  c0!n1;
  e0?a1;
  if
  :: atomic{ a1 == 1; b = 1 }; do :: c1?r1; od;
  :: atomic{ a1 == 2; b = 2 }; do :: c2?r1; od;
  fi;
}

In this specification, b is a new global variable used to hold the path information. Let ψ be (b ≠ 2 → last ≠ 2) ∧ (b ≠ 1 → last ≠ 1), where last ≠ i means that process i (i.e. Si in the above Promela specification) is not the last process that has just executed an action. We verify P || E || S1 || S2 |= []ψ → ϕ instead of P || E || S1 || S2 |= ϕ. As there are additional conditional statements, the number of relevant traces is reduced. For n1 = 2, the number of transitions in model checking using this strategy is 228. The number of transitions in model checking without using this strategy is 506 (with Spin 3.4.2). Note that the strategy is not compatible with partial order reduction, so the model used with the strategy was compiled with the option -DNOREDUCE, while the model without the strategy did not use this option.
Remarks: The use of the formula ψ is not compatible with fairness constraints (with the implication that one cannot exclude any executable process). In the next subsection, we introduce a modified version which does not suffer from this limitation.
2.2 STR-2 Reduction
In this approach, we code the knowledge into the processes instead of adding it to the formula to be proved. For instance, the knowledge [](b ≠ 1 → last ≠ 1) can be coded into process S1 by inserting the guard b == 1 at appropriate places. So instead of verifying P || S1 || · · · || Sn |= ϕ we verify
e(P) || S1' || · · · || Sn' |= ϕ
where S1', ..., Sn' are modifications of, respectively, S1, ..., Sn obtained by adding guards to appropriate statements. The soundness relies on the analysis of the main process and the environment processes, and on whether one puts the guards correctly.
Example: Let E, P, S1 and S2 be as previously specified. To use the STR-2 reduction strategy we specify S1' and S2' as follows:

proctype S1() {   /* S1': guarded by b == 1 */
  byte i, k, num;
  b == 1;
  atomic{ c0?num; c0!num; }
  do
  :: k < num  -> i = i+1; k = k+i;
  :: k > num  -> c1!0;
  :: k == num -> c1!1;
  od;
}

proctype S2() {   /* S2': guarded by b == 2 */
  byte i, num;
  b == 2;
  atomic{ c0?num; c0!num; }
  do
  :: i*i < num  -> i = i+1;
  :: i*i > num  -> c2!0;
  :: i*i == num -> c2!1;
  od;
}

The problem P || E || S1 || S2 |= ϕ is now reduced to P || E || S1' || S2' |= ϕ. As there are additional conditional statements in S1' and S2', the number of potential execution paths of S1' and S2' is less than that of S1 and S2. For n1 = 2, the number of transitions in model checking using this strategy is 38.
2.3 Summary
The following table presents the number of transitions, the number of states and the peak memory usage in model checking the example specification for n1 = 2 with the two reduction strategies:

Strategy          States  Transitions  Memory
No Strategy       396     506          1.493 MB
STR-1 Reduction   146     228          1.493 MB
STR-2 Reduction   34      38           1.493 MB
The use of the reduction strategies can significantly reduce the model checking complexity (however, in this example, no advantage with respect to the memory usage has been achieved, because the example is too simple). The applicability of the strategies depends on the property to be proved as well as on the structure of the specification, in particular, the first strategy is not compatible with fairness constraints. The idea to simplify proofs by extending the proposition to be proved has for long been a very important principle in automated reasoning. We are applying this idea to the verification of procedures. In this particular application, the applicability of this idea is restricted to properties where fairness is not required and it is therefore necessary to modify this idea in order to deal with properties with fairness constraints, i.e. instead of extending the proposition, we code the procedure knowledge (i.e. add appropriate guards) into environment processes.
3 Application – A Case Study
An approach to the verification of operating procedures has been discussed in [11, 12]. In the light of the above discussion, we modify the approach to include the following steps:
– create a process representing the procedure;
– create abstract processes representing the plant processes;
– create processes for modeling the interaction between the procedure process and the plant processes;
– create a process for modeling the initial state of the plant;
– formulate and formalize the correctness requirements of the procedure;
– analyze different paths of the procedure and determine relevant plant processes with respect to the paths, in order to use the proposed strategies;
– verify the procedure by model checking.
The potential benefits of the strategies are illustrated by a case study, which is the verification of an operating procedure with seeded errors. The operating procedure considered here is "PROCEDURE D-YB-001 — The steam generator control valve opens and remains open" [11]. It is a disturbance procedure to be applied when one of the steam generator control valves (located on the feed water side) opens and remains open.
Description of the Procedure: The primary goal of the procedure is to check the control valves (there are 4 such valves), identify and repair the valve that has a problem (i.e. opens and remains open). After the defective valve is identified, the basic action sequence of the operator is as follows: Start isolating the steam generator. Manipulate the primary loop. Continue isolation of the steam generator. Repair the steam generator control valve and prepare to increase power. Increase the power.
There are 93 instructions (each of the instructions consists of one or several actions) involving around 40 plant units, including 4 control valves, 4 cycling valves, 4 steam level meters, 4 pumps, 4 pump speed meters and 4 protection signals.
Seeded Errors: In order to demonstrate the benefits (with respect to model checking times) of the reduction strategies for detection of errors and for verification, we seed 5 errors into the procedure. These seeded errors are as follows:
1. A wrong condition in a wait-statement - leading to a fail stop.
2. A wrong condition in a conditional-jump - leading to a loop.
3. A missing instruction - leading to a loop.
4. A wrong reference - leading to an unreachable instruction.
5. A missing instruction - leading to an unreached goal.
Creating Models: In this step, we create the model of the procedure and the related processes. The model in this case study consists of 19 processes in total:
– 1 procedure process for modeling the procedure (with the 5 seeded errors).
– 14 simple plant processes: 1 for each of the 4 cycling valves, 1 for each of the 4 pump speed meters, 1 for each of the 4 steam level meters, 1 for modeling notification of the supervisor, and 1 for modeling opening and closing valves with protection signals (the consequence is that a valve may not be opened, although an instruction for opening the valve is executed).
– 4 other processes: 3 processes for modeling the interactions between the procedure process and the plant processes (dealing with the 3 main types of procedure elements: checking the value of a system state, waiting for a system state, and performing an action), and 1 initialization process for choosing an initial state from the set of possible initial states of the plant.
Formulating Correctness Requirements: For the purpose of this paper, it is sufficient to consider one correctness requirement, which is as follows: "Every execution of the procedure terminates and upon the completion of the execution, all control valves (identified by the names: RL33S002, RL35S002, RL72S002 and RL74S002) are normal or the supervisor is notified". For verification, correctness requirements are specified using propositional linear temporal logic, and we specify the given requirement by the following formulas (referred to as ϕ0 and ϕ1, respectively):

ϕ0:  [] (ProcedureCompleted==Yes ->
          ((Valve[RL33S002]==Normal && Valve[RL35S002]==Normal &&
            Valve[RL72S002]==Normal && Valve[RL74S002]==Normal) ||
           Supervisor==Notified))

ϕ1:  <> (ProcedureCompleted==Yes)
where ProcedureCompleted is a variable to be assigned the value "Yes" right before the end of an execution of the procedure, RL33S002, RL35S002, RL72S002 and RL74S002 are identifiers of the control valves, which may have values of the type {"Normal", "Defective"}, and Valve is an array that maps a valve identifier to the current state of the valve.
Analyzing Procedure Paths: In this step, we analyze the procedure and the plant processes in order to group different execution paths and find out the plant processes relevant to the different paths. The procedure starts with identifying the symptoms of the problem and a defective valve. There are 5 branches of execution depending on whether the valve RL33S002, RL35S002, RL72S002 or RL74S002 is defective or none of them is defective. The plant process for opening and closing of valves is relevant for all of the executions except in the case where no valves are defective. The other relevant plant processes for the 5 branches are as follows:
– None of the valves are defective: In this case, none of the other processes is relevant.
– The valve RL33S002 is defective: 1 process for the relevant (1 of 4) pump speed meter, 1 process for the relevant (1 of 4) steam level meter and 1 process for the relevant (1 of 4) cycling valve.
– The cases where the valve RL35S002, RL72S002 or RL74S002 is defective are similar to the previous case.
3.1 STR-1 Reduction
We modified the procedure process by adding a variable i and assigning the path information to i at the entrance of each of the 4 main branches (i.e. except the branch dealing with the case where no valves are defective). Let b(i) be the proposition representing that the procedure process is executing in the i-th (i ∈ {1, 2, 3, 4}) branch. Let ps(i), sl(i) and cv(i) represent respectively the i-th pump speed process, the i-th steam level process and the i-th cycling valve process. We define ψ as the conjunction of the following set of formulas: {¬b(k) → last ≠ ps(k) ∧ last ≠ sl(k) ∧ last ≠ cv(k) | k ∈ {1, 2, 3, 4}}. In the verification, the test results were generated by Spin 3.4.2. The option for the verification of ϕ0 is -a, for checking acceptance cycles. The options for the verification of ϕ1 are -a and -f, for checking acceptance cycles with weak fairness. The models were compiled with the option -DNOREDUCE. The error detection and verification steps using this approach were as follows:
– Instead of verifying ϕ0, we verify []ψ → ϕ0.
– The verification detects errors 5 and 4 with model checking times of 0.4 and 9.9 seconds, respectively.
– The model checking time for re-checking []ψ → ϕ0 is 10.2 seconds.
The subgoal ϕ1 is not verified here, because its verification requires the weak fairness constraint and we therefore cannot use STR-1 reduction (i.e. verify []ψ → ϕ1 instead of ϕ1).
3.2 STR-2 Reduction
We modified the processes ps(i), sl(i), cv(i) for i = 1, 2, 3, 4 by coding the knowledge represented by the formula []ψ into the plant processes (in other words, putting the condition b(i) as a guard on appropriate statements) and tried to verify whether ϕ0 and ϕ1 hold. The error detection and verification steps using this approach were as follows:
– Verification of ϕ0 detects errors 5 and 4 with model checking times of 0.3 and 1.4 seconds, respectively. The time for re-checking ϕ0 is 1.5 seconds.
– Verification of ϕ1 detects errors 2, 3 and 1 with model checking times of 1.3, 1.4 and 1.4 seconds, respectively. The time for re-checking ϕ1 is 80.4 seconds.
– After that, the formula ϕ0 was re-checked, with a model checking time of 1.8 seconds.
3.3 Summary
The following table sums up the model checking times in the different verification tasks with the different verification strategies. For simplicity, the re-checking of ϕ0 after the subtask of checking ϕ1 is not shown in the table. For comparison, we have also carried out verification without using the proposed strategies (the data is presented in the column marked "No Strategy").

Task  Error  No Strategy  Strategy 1  Strategy 2
ϕ0    5      2.6          0.4         0.3
ϕ0    4      207.4        9.9         1.4
ϕ0    0      214.1        10.2        1.5
ϕ1    2      4.4          -           1.3
ϕ1    3      27.5         -           1.4
ϕ1    1      5.6          -           1.4
ϕ1    0      7134.5       -           80.4

The first column is the two verification subtasks. The second column is the number of the error detected in the verification; the item "0" means that no errors were detected at that point of the verification. The third column is the model checking times (all in seconds) with the original verification approach, the fourth column is the model checking times with STR-1 reduction, and the fifth column is the model checking times with STR-2 reduction. This table shows that the use of STR-1 reduction can significantly reduce the model checking time (compared with the verification where no strategy was used) in the task where STR-1 reduction is applicable. STR-2 reduction is better than STR-1 reduction in the
case study. STR-2 reduction also has the advantage that it can be applied to verify properties that need fairness constraints. For further comparison, the number of visited states, the number of transitions, and the memory usage for the verification of ϕ0 and ϕ1 when no errors were found are presented in the following table (with k for thousand and mb for megabyte). The data in the table are also clearly in favor of the proposed reduction strategies.

Task  Type of Data  No Strategy  Strategy 1  Strategy 2
ϕ0    States        117.6k       21.2k       2.1k
ϕ0    Transitions   1450.1k      60.9k       7.2k
ϕ0    Memory        19.1mb       6.1mb       3.4mb
ϕ1    States        134.8k       -           2.5k
ϕ1    Transitions   57706.2k     -           482.3k
ϕ1    Memory        39.8mb       -           20.7mb

As far as this case study is concerned, the use of the strategies is not time-saving, taking into account the time used for the preparation and analysis of the problem. However, the main purpose of the strategies is not to save a few hours of model checking time; it is to scale up the applicability of model checking, such that problems that are originally infeasible for model checking may become feasible with the strategies. For instance, we may have problems that cannot be solved with the available memory using the original approach, but can be solved by using the proposed strategies.
4 Discussion and Concluding Remarks
We have modified the procedure verification approach [11, 12] by adding a step for analyzing the paths of procedures and using the proposed strategies. This has improved the efficiency of procedure verification significantly and therefore has scaled up the applicability of the verification approach. The strategies are suitable for procedure verification (however may not be easily generalizable to other types of models), because procedures normally have many paths and the relevance of environment processes could be different for different paths. The strategies are based on an analysis of the main procedure process and the environment processes. After the analysis of the procedure paths, for STR-1 reduction, all we need to do is to register some path information in the main process and add procedure knowledge to the property we want to verify. The procedure knowledge considered in this paper is simple. They are of the form “if a condition is met, then the execution of a given process is irrelevant” and are easy to formulate. For the modified version, instead of adding procedure knowledge to the property we add it to the environment processes. The reliability of the strategies depends on the correct analysis of the main process and the environment processes. For STR-1 reduction, one problem is the complexity of the representation of the procedure knowledge. The conversion of a formula (such as []ψ → ϕ0 ) to the
corresponding never-claim (using the converter provided by Spin) may require much time and memory when a formula is long. The second problem is that it is not compatible with fairness constraints. For STR-2 reduction, there are no such problems. On the other hand, the additional complexity of the modified version is the modification of the environment processes which may be a greater source for errors than adding the formula []ψ. The strategies are flexible and can be combined with other complexity reduction approaches which do not affect the execution structure of the processes. We may use some of the techniques mentioned in the introduction (e.g. various abstraction techniques) to reduce the complexity of a model and then use the strategies (in the cases where they are applicable) to further reduce model checking time.
Acknowledgment. The author thanks Huimin Lin, Terje Sivertsen and Jian Zhang for reading an earlier draft of this paper and providing many helpful suggestions and comments. The author also thanks the anonymous referees for their constructive criticism, which helped improve this paper.
References [1] S. Berezin and S. Campos and E. M. Clarke. Compositional Reasoning in Model Checking. Proceedings of COMPOS’97. Lecture Notes in Computer Science 1536: 81-102. 1998. 113 [2] E. M. Clarke, O. Grumberg and D. E. Long. Model Checking and Abstraction. ACM Transactions on Programming Languages and Systems 16(5): 1512-1542, 1994. 113 [3] E. A. Emerson. Temporal and Modal Logic. Handbook of Theoretical Computer Science (B):997-1072. 1990. 114 [4] J. Gregoire. Verification Model Reduction through Abstraction. Formal Design Techniques VII, 280-282, 1995. 113 [5] G. J. Holzmann. Design and Validation of Computer Protocols. Prentice Hall, New Jersey, 1991. 116 [6] G. J. Holzmann. The model checker Spin. IEEE Transactions on Software Engineering 23(5): 279-295. May 1997. 116 [7] J. G. Kemeny. Report of the President’s Commission on the Accident at Three Mile Island. U. S. Government Accounting Office. 1979. 113 [8] N. G. Leveson. Software System Safety and Computers. Addison-Wesley Publishing Company. 1995. 113 [9] C. Loiseaux, S. Graf, J. Sifakis, A. Bouajjani and S. Bensalem. Property preserving abstractions for the verification of concurrent systems. Journal of Formal methods in System Design 6:1-35. 1995. 113 [10] V. Roy and R. de Simone. Auto/Autograph. In Computer Aided Verification. DIMACS series in Discrete Mathematics and Theoretical Computer Science 3: 235-250, June 1990. 113
[11] W. Zhang. Model checking operator procedures. Lecture Notes in Computer Science 1680:200-215. SPIN 1999. Toulouse, France. 113, 114, 119, 123 [12] W. Zhang. Validation of control system specifications with abstract plant models. Lecture Notes in Computer Science 1943:53-62. SAFECOMP 2000. Rotterdam, The Netherlands. 113, 114, 119, 123
Verification of the SSL/TLS Protocol Using a Model Checkable Logic of Belief and Time

Massimo Benerecetti (1), Maurizio Panti (2), Luca Spalazzi (2), and Simone Tacconi (2)

(1) Dept. of Physics, University of Naples "Federico II", Napoli, Italy
(2) Istituto di Informatica, University of Ancona, Ancona, Italy
[email protected] {panti,spalazzi,tacconi}@inform.unian.it
Abstract. The paper shows how a model checkable logic of belief and time (MATL) can be used to check properties of security protocols employed in computer networks. In MATL, entities participating in protocols are modeled as concurrent processes able to have beliefs about other entities. The approach is applied to the verification of TLS, the Internet Standard Protocol that the IETF derived from Netscape's SSL 3.0. The results of our analysis show that the protocol satisfies all the security requirements for which it was designed.
1 Introduction
In this paper we apply a symbolic model checker (called NuMAS [5, 2]) for a logic of belief and time to the verification of the TLS protocol. TLS is the Internet Standard Protocol that the IETF derived [8] from Netscape's SSL 3.0. The verification methodology is based on our previous work [3, 4, 6]. The application of model checking to security protocol verification is not new (e.g., see [11, 13, 14]). However, in previous work, security protocols are verified by introducing the notion of an intruder and then verifying whether the intruder can attack a given protocol. This approach allows one to directly find a trace of a possible attack, but it may not make clear what the protocol flaw really is. This line of work usually employs temporal logics or process algebras. A different approach makes use of logics of belief or knowledge to specify and verify security protocols (see, e.g., [1, 7, 16]). The use of such logics requires no model of the intruder, and it allows one to find what the protocol flaw is and to specify (and check) security properties in a more natural way. However, in this approach verification is usually performed proof-theoretically. Our approach can be seen as a combination of the above two: we employ a logic called MATL (MultiAgent Temporal Logic) able to express both temporal aspects and beliefs (thus following the line of work based on logics of belief or knowledge, which does not use a model of the intruder); verification, on the other hand, is performed by means of a symbolic model checker (called NuMAS [2]). NuMAS is built on the work described in [5], where model checking is applied to the BDI attitudes (i.e., Belief, Desire, and Intention) of agents. Our work aims
at the use of MATL to model security protocols, and uses NuMAS for their verification. The paper is structured as follows. In Section 2 we shortly describe the SSL/TLS protocol. Section 3 provides a brief introduction to MATL. The use of MATL as a logic for security protocols is described in Section 4. Section 5 describes the formal specifications for the usual requirements of security protocols. The results of the verification of the SSL/TLS protocol are reported in Section 6. Finally, some conclusions are drawn in Section 7.
2 The SSL/TLS Protocol
The Secure Sockets Layer, commonly referred to as SSL, is a cryptographic protocol originally developed by Netscape in order to protect the traffic conveyed by HTTP applications and, potentially, by other types of applications. The first two versions of this protocol had several flaws that limited its application. Version 3.0 of the protocol [10] was published as an Internet Draft document, and the efforts of the Internet Engineering Task Force (IETF) led to the definition of an Internet standard protocol, named Transport Layer Security (TLS) [8]. As a consequence, the standard version is often mentioned under the name SSL/TLS. TLS is not a monolithic protocol, but has a two-layer architecture. The lower protocol, named TLS Record Protocol, is placed on top of a reliable transport protocol (i.e., TCP) and aims at guaranteeing the privacy and the integrity of the connection. The higher protocol, named TLS Handshake Protocol, is layered over the TLS Record Protocol and allows entities to mutually authenticate and agree on an encryption algorithm and a shared secret key. After the execution of this protocol, entities are able to exchange application data in a secure manner by means of the Record Protocol. The Handshake Protocol is the most complex and crucial part of TLS, since the achievement of the privacy and integrity requirements provided by the Record Protocol depends upon the cryptographic parameters negotiated during its execution. For this reason, we concentrate our analysis on the TLS Handshake Protocol. In order to reduce the high degree of complexity of the protocol, we conduct the formal analysis on an abstract version of TLS. This simplification allows us to leave the implementation details of the protocol out of the verification, so as to concentrate on its intrinsic design features. We based our verification on the description provided by [14], depicted in Figure 1. In the figure, C and S denote the client and server, respectively. {m}KX is a component m encrypted with the public key of the entity X, SX{m} is a component m signed by the entity X, and H(m) is the hash of the component m. Moreover, CertX is a digital certificate of the entity X, VerX the protocol version number used by the entity X, SuiteX the cryptographic suite preferred by the entity X, and NX a random number (nonce) generated by the entity X. Finally, SID is the session identifier, PMS is the so-called pre-master secret, Mex the concatenation of all messages exchanged up to this step, and KSC is the shared secret key agreed between the entities.
(1) C → S : C, VerC, SuiteC, NC, SID                         Client Hello
(2) S → C : S, VerS, SuiteS, NS, SID, CertS                  Server Hello
(3) C → S : CertC, {VerC, PMS}KS, SC{H(KSC, Mex)}            Client Verify
(4) S → C : {H(S, KSC, Mex)}KSC                              Server Finished
(5) C → S : {H(C, KSC, Mex)}KSC                              Client Finished

Fig. 1. The simplified SSL/TLS handshake protocol
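As an illustration of the notation in Fig. 1, the sketch below (Python; not from the paper) assembles the five handshake messages as plain data structures. The helper functions enc, sign and H only stand in for public-key encryption, signing and hashing, the field values are made up, and the exact point at which Mex is sampled for the finished messages is simplified.

import hashlib

def enc(key, payload):
    # placeholder: labels the payload as encrypted under `key`
    return ('enc', key, payload)

def sign(signer, payload):
    # placeholder: labels the payload as signed by `signer`
    return ('sig', signer, payload)

def H(*parts):
    return hashlib.sha256(repr(parts).encode()).hexdigest()

def handshake(VerC, VerS, SuiteC, SuiteS, NC, NS, SID, CertC, CertS, PMS, KS, KSC):
    msgs = []
    msgs.append(('ClientHello',  'C->S', ('C', VerC, SuiteC, NC, SID)))
    msgs.append(('ServerHello',  'S->C', ('S', VerS, SuiteS, NS, SID, CertS)))
    Mex = tuple(msgs)  # messages exchanged so far
    msgs.append(('ClientVerify', 'C->S',
                 (CertC, enc(KS, (VerC, PMS)), sign('C', H(KSC, Mex)))))
    Mex = tuple(msgs)
    msgs.append(('ServerFinished', 'S->C', enc(KSC, H('S', KSC, Mex))))
    msgs.append(('ClientFinished', 'C->S', enc(KSC, H('C', KSC, Mex))))
    return msgs

for m in handshake('3.0', '3.0', 'suiteC', 'suiteS', 'nC', 'nS', 'sid42',
                   'certC', 'certS', 'pms', 'pkS', 'kSC'):
    print(m)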
3 A Brief Introduction to MATL
In this section we briefly introduce MATL [4, 5], a model checkable logic of belief and time. The intuitive idea underlying MATL is to model the entities engaged in a protocol session as finite state processes. Suppose we have a set I of entities. Each entity is seen as a process having beliefs about (itself and) other entities. We adopt the usual syntax for beliefs: Bi φ means that entity i believes φ, and φ is a belief of i. Bi is the belief operator for i. The idea is then to associate to each (level of) nesting of belief operators a process evolving over time, each of which intuitively corresponds to a "view" of that process.
View Structure. Let B = {B1, ..., Bn}, where each index 1, ..., n ∈ I corresponds to an entity. Let B* denote the set of finite strings of the form Bi1 ... Bik with each Bij ∈ B. We call any α ∈ B* a view. Each view in B* corresponds to a possible nesting of belief operators. We also allow for the empty string ε. The intuition is that ε represents the view of an external observer (e.g., the designer) which, from the outside, "sees" the behavior of the overall protocol.
Example 1. Figure 2 depicts the structure of views for the SSL/TLS protocol. The beliefs of entity C are represented by view BC and are modeled by a process playing the client's role in the protocol. The beliefs that C has about (the behavior of) entity S are represented by view BC BS and are modeled by a process playing S's role in the protocol. Things work in the same way for any arbitrary nesting of belief operators. The beliefs of entity S (the view BS) are modeled similarly.
Language. We associate a language Lα to each view α ∈ B*. Intuitively, each Lα is the language used to express what is true (and false) about the process of view α. We employ Computation Tree Logic (CTL) [9], a well-known propositional branching-time temporal logic widely used in formal verification. For each α, let Pα be the set of propositional atoms (called local atoms) expressing the atomic properties of the process α. Each Pα allows for the definition of a different language, called a MATL language (on Pα). A MATL language Lα on Pα is the smallest CTL language containing the set of local atoms Pα and the belief atoms Bi φ, for any formula φ of LαBi. In particular, Lε is used to speak about the whole protocol. The language LBi (LBj) is the language adopted to represent i's (j's) beliefs. i's beliefs about j's beliefs are specified by the language of the view Bi Bj. Given a family {Pα} of sets of local atoms, the family of MATL
Verification of the SSL/TLS Protocol
129
ε Bc
Bc Bs rec X
Bc
Bs
Bs rec X Bc
Bs
Bs
BcBc
BcBs rec X
. . .
. . .
languages on {Pα} is the family of CTL languages {Lα}. We write α : φ (called a labeled formula) to mean that φ is a formula of Lα. For instance, the formula AG (p → Bi ¬q) ∈ Lε (denoted by ε : AG (p → Bi ¬q)) intuitively means that in every future state (the CTL operator AG), if p is true then entity i believes that q is false. In order to employ MATL as a logic suitable for security protocols, we need to define appropriate sets of local atoms Pα, one for each process α. First of all, a logic for security protocols has propositions about which messages a generic entity P sends or receives. Therefore, we introduce atoms of the form rec X and sendQ X (where X can be a full message of a given protocol but not a fragment of a message) that represent P's communicative acts of receiving X and sending X to Q, respectively. This means that we need to introduce the local atoms rec X and sendQ X in PBP (see Figure 2). Furthermore, we introduce the notions of seeing and saying (fragments of) messages by means of local atoms such as sees X and said X. This allows us to take into account the temporal aspects of a protocol. Indeed, rec and send represent the acts of receiving and sending a message during a session. They allow us to capture the occurrence of those events by looking at the sequence of states. This is different from the notion of the fragments of messages that an entity has (atom sees) or uses when composing its messages (atom said). Finally, the atom sees represents both the notion of possessing (what an entity has because it is initially available or newly generated) and seeing (what has been obtained from a message sent by another entity). A logic for security protocols also has local atoms of the form fresh(X), expressing the freshness of the fragment/message X. The intuitive meaning is that X has been generated during the current protocol session. Local atoms of the form pubkP K and prikP K^-1 mean that K is the public key of P and K^-1 the corresponding private key, and they can be directly added as local atoms to the languages of MATL. Finally, a logic for security protocols also has local atoms such as P says X to express that entity P has sent X recently. This can be expressed in MATL by the formula BP says X.

Fig. 2. The structure of views for the SSL/TLS protocol and the proposition C believes S sees X in MATL
Example 2. We can set the atoms Pα for the views BC and BS as follows:

PBC = { said SID, sees NS, said {VerC, PMS}KS, SC{H(KSC, Mex)},
        fresh NC, fresh SID, pubkS KS, ... }

PBS = { sees SID, said NS, sees {VerC, PMS}KS, SC{H(KSC, Mex)},
        fresh {H(S, KSC, Mex)}KSC, prikS KS^-1, ... }
For instance, the atom said SID in view B C represents C sending SID to S (message of step (1) in the SSL/TLS protocol). The local atoms of the other views can be defined similarly. Since each view αB i (with i = C, S) models the (beliefs about the) behavior of entity i, the set of local atoms will be that of view B i (see [4]). Semantics. To understand the semantics of the family of languages {Lα }α∈B ∗ (hereafter we drop the subscript), we need to understand two key facts. On the one hand the semantics of formulae depend on the view. For instance, the formula p in the view B i expresses the fact that i believes that p is true. The same formula in the view B j expresses the fact that j believes that p is true. As a consequence, the semantics associates locally to each view α a set of pairs m, s, where: m = S, J, R, L is a CTL structure, with S a set of states, J ⊆ S the set of initial states, R the transition relation, and L : S → 2P the labeling function; and s is a reachable state of m (a state s of a CTL structure is said to be reachable if there is a path leading from an initial state of the CTL structure to state s). On the other hand there are formulae in different views which have the same intended meaning. For instance Bj p in view B i , and p in view B i B j both mean that i believes that j believes that p is true. This implies that only certain interpretations of different views are compatible with each other, and these are those which agree on the truth values of the formulae with the same intended meaning. To capture this notion of compatibility we introduce the notion of chain as a finite sequence c , ..., cβ , ..., cα of interpretations m, s of the language of the corresponding view. A compatibility relation C on the MATL languages {Lα } is then a set of chains passing through the local interpretations of the views. Intuitively, C will contain all those chains c, whose elements cα , cβ (where α, β are two views in B ∗ ) assign the same truth values to the formulae with the same intended meaning. The notion of satisfiability local to a view is the standard satisfiability relation between CTL structures and CTL formulae. The (global) satisfiability of formulae by a compatibility relation C needs to take into account the chains, and is
said X
defined as follows: for any chain c ∈ C and for any formula φ ∈ Lβ, the satisfiability relation |= is defined as cβ |= φ if and only if φ is satisfied by the local interpretation cβ = <m, s> of view β.

Fig. 3. The notion of compatibility in the SSL/TLS protocol

Example 3. Let us consider the situation of Figure 3, where chains are represented by dotted lines. The formula Bc (Bs said X ∧ sees X) is satisfied by the interpretation in view ε. This means that such an interpretation must be compatible with interpretations in view Bc that satisfy Bs said X ∧ sees X. This is indeed the case for both <mc, S'c> and <mc, S''c>. Therefore both of them must be compatible with interpretations in view Bc Bs that satisfy said X. This is the case for both <ms, S's> and <ms, S''s>. We are now ready to define the notion of a model for MATL (called a MATL structure).

Definition 1 (MATL structure). A nonempty compatibility relation C for a family of MATL languages on {Pα} is a MATL structure on {Pα} if for any chain c ∈ C:
1. cα |= Bi φ iff for every chain c' ∈ C, c'α = cα implies c'αBi |= φ;
2. if cα = <m, s>, then for any state s' of m, there is a chain c' ∈ C such that c'α = <m, s'>.
Briefly: the nonemptiness condition for C guarantees that there is at least a consistent view in the model. The only-if part of condition 1 guarantees that each view has correct beliefs, i.e., any time Bi φ holds at a view then φ holds in the view one level down in the chain. The if part is the dual property and ensures the completeness of each view. Notice that the two conditions above give belief operators in MATL the same strength as modal K(m), where m is the number of entities (see [5]).
4 MATL as a Logic for Security Protocols
MATL is expressive enough to be used as a logic for security protocols. Furthermore, it has a temporal component that usually is not present in the other
logics of authentication (e.g., see [1, 7, 16]). In order to show how the properties of security protocols can be expressed within MATL, we shall now impose some constraints on the models in order to capture the intended behavior of a protocol. These constraints can be formalized with a set of sound axioms. This is similar to what happens with several logics of authentication (see for example [1, 16]); indeed MATL encompasses such logics. Here, for the sake of readability, we show how it is possible to translate into MATL some of the most significant axioms that have been proposed in most logics of authentication. As a first example, let us consider the message meaning axioms. Usually, such axioms correspond to the following schema:

shkP,Q K ∧ P sees {X}K → Q said X

Intuitively, it means that when an entity P receives a (fragment of a) message encrypted with K, and K is a key shared by P and Q, then it is possible to conclude that the message comes from Q. The above axiom schema can be formulated in MATL as follows:

P : shkP,Q K ∧ sees {X}K → BQ said X
(1)
where with P : Ax we also emphasize which view (P) the axiom Ax belongs to. Message meaning is often used together with nonce verification, which has the following schema:

Q said X ∧ fresh(X) → Q says X

This schema expresses that when an entity Q has sent X (i.e., Q said X) recently (i.e., fresh(X)), then we can assert that Q says X. In MATL, this becomes

P : BQ said X ∧ fresh(X) → BQ says X
(2)
As a consequence, it is important to establish whether a fragment of a message is fresh. The following axioms help with this task:

fresh(Xi) → fresh(X1, . . . , Xn)
fresh(X) → fresh({X}K)

Intuitively, they mean that when a fragment is fresh, then the message containing such a fragment (respectively, the encryption of the fragment) is fresh as well. In MATL, they can be inserted in the appropriate views without modification. Another important set of axioms establishes how a message can be decomposed. For example, in [1] we have the following schemata:

P sees (X1 . . . Xn) → P sees Xi
P sees {X}K ∧ P has K → P sees X

The first schema states that an entity sees each component of any compound unencrypted message it sees, while the second schema states that an entity can
decrypt a message encrypted with a given key when it knows the key. In MATL the above axiom schemata can be expressed in each view without modification. The following axiom schemata relate sent (received) messages to what an entity says (sees):

P : sendQ X → said X    (3)

P : rec X → sees X
(4)
The next axiom schemata capture the idea that an entity sees what it previously said or what it says:

P : said X → sees X    (5)
P : says X → sees X    (6)

The following schema expresses the ability of an entity to compose a new message starting from the (fragments of) messages it already sees:

P : sees X1 ∧ . . . ∧ sees Xn → sees (X1 . . . Xn)
(7)
In modeling security protocols such as SSL/TLS, we also need to take into account hash functions and signatures. We assume that a signature is reliable, without considering what such a schema looks like (see footnote 1). This allows us to focus on whether the protocol is trustworthy, and is a usual assumption in security protocol verification. The corresponding axiom schemata are the following:

P : fresh(X) → fresh(H(X))
(8)
P : fresh(X) → fresh(SQ{X})    (9)

P : sees X → sees H(X)    (10)
P : sees X ∧ sees KP^-1 ∧ prikP KP^-1 → sees SP{X}    (11)

P : sees X ∧ sees KQ ∧ pubkQ KQ → sees {X}KQ    (12)
where Q and P can be instantiated with the same entity or with different ones. The above schemata are the obvious extension to hash functions and signatures of the axioms about freshness and the capability of composing messages. The following axiom schema expresses the capability to extract the original message from its signed version when the corresponding public key is known:

P : sees SQ{X} ∧ sees KQ → sees X
(13)
The next schema corresponds to the message meaning axiom for signed messages:

P : sees SQ{X} ∧ pubkQ KQ → BQ said X
(14)
Intuitively, it means that when an entity sees a message signed with the key of Q, then it believes that Q said such a message. Finally, the following axiom expresses that if P sees a secret and Q sees it as well, then the two entities share such a secret:

P : sees X ∧ BQ sees X → shsecP,Q X    (15)
1 The most common schema is the following: SQ{X} = (X, {H(X)}KQ^-1)
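The decomposition axioms above can be read as a closure operation on the set of fragments an entity sees. The following sketch (Python; not part of the paper) computes such a closure for a toy term representation; the tagged-tuple encoding of terms and the key-naming convention are assumptions made purely for illustration, and the composition axioms (7) and (10)-(12) are omitted.

# Terms: atoms are strings; compound terms are tagged tuples:
#   ('pair', t1, t2)  concatenation,  ('enc', K, t)  encryption under K,
#   ('sig', K, t)     signature made with key K.

def inverse(key):
    # purely syntactic pairing of a key with its inverse
    return key[:-3] if key.endswith('^-1') else key + '^-1'

def close_sees(seen):
    """Close a set of seen terms under the decomposition axioms."""
    seen = set(seen)
    changed = True
    while changed:
        changed = False
        for t in list(seen):
            new = set()
            if isinstance(t, tuple):
                tag = t[0]
                if tag == 'pair':                        # sees (X1 X2) -> sees Xi
                    new |= {t[1], t[2]}
                elif tag == 'enc' and inverse(t[1]) in seen:
                    new.add(t[2])                        # sees {X}K and has K^-1 -> sees X
                elif tag == 'sig' and inverse(t[1]) in seen:
                    new.add(t[2])                        # sees S_Q{X} and sees K_Q -> sees X
            if not new <= seen:
                seen |= new
                changed = True
    return seen

# Example: the server sees the Client Verify payload and its own private key,
# so the closure makes PMS visible to the server.
initial = {('enc', 'K_S', ('pair', 'Ver_C', 'PMS')), 'K_S^-1'}
print(close_sees(initial))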
5 Security Requirements for SSL/TLS
SSL/TLS is a cryptographic protocol belonging to the family of the so-called "authentication and key-exchange" protocols. This means that it must guarantee to each entity participating in the protocol assurance of the identities of the other ones. In other words, it must achieve the goal of mutual authentication. Authentication typically depends upon secrets, such as cryptographic keys, that an entity can reveal or somehow use to prove its identity to the other ones. For this reason, such protocols must provide entities with a secure way to agree on a secret session key for encrypting subsequent messages. In order to express these security goals, we introduce the authentication of secrets requirement, where the secret is composed of the pre-master secret PMS and the shared key KSC together. Moreover, such a protocol must guarantee that the exchanged secrets are new as well, i.e., it must achieve the goal of freshness. In order to formalize this security goal, we introduce the freshness of secrets requirement. In MATL, the requirements of authentication and freshness of secrets can be expressed as a set of formulae in the view of each entity.
CLIENT REQUIREMENTS
Authentication of Secrets. The client C is not required to satisfy possession of PMS, since the client generates this secret term itself. Conversely, in every possible protocol execution, if C sends to S the Client Finished message, then PMS has to be a shared secret between C and S. Formally, we write this as follows:
(16)
Similarly, being the shared key KSC calculated by C starting from P M S and the two nonces after Step (2), it is not necessary to verify its possession. On the other hand, if the client C sends the Client Finished message to S, then KSC needs to be a shared key between C and S. This can be expressed by: C : AG (sendS ClientF inished → shkC,S KSC )
(17)
Another important authentication requirement is that, if C sends the Client Finished message to S, then C must believe that S shares with it both P M S and KSC , as expressed by the following property: C : AG (sendS ClientF inished → BS shkC,S KSC ∧ BS shsecC,S P M S) (18) Freshness of Secrets. It amounts to the requirement that if C receives the Server Finished message, then C must believe that S has recently sent both KSC and P M S. In formulas this can be expressed as follows: C : AG (rec ServerF inished → BS says KSC ∧ BS says P M S)
(19)
SERVER REQUIREMENTS Authentication of Secrets. In the view of S, differently from the case of view C, we need to verify possession by S of P M S, since this term is not locally
Verification of the SSL/TLS Protocol
135
generated, but sent by C. Therefore, we require that if S sends the Server Finisher message to C, then S must possess P M S: S : AG (sendC ServerF inished → sees P M S)
(20)
Moreover, under the same condition, S must share P M S with C: S : AG (sendC ServerF inished → shsecC,S P M S)
(21)
Similarly to the case of C’s view, since the shared key KSC is calculated by S starting from P M S and the nonces, it is not necessary to verify its possession. On the other hand, we need to verify that if S sends the Server Finished message to C, then KSC must be a shared key between C and S: S : AG (sendC ServerF inished → shkC,S KSC )
(22)
Finally, if S sends the Server Finished message to C, then S must believe that C shares with it both P M S and KSC : S : AG (sendC ServerF inished → BC shkC,S KSC ∧ BC shsecC,S P M S)(23) Freshness of Secrets. This requirement is exactly as for the client C. It amounts to check that if S receives the Client Finished message, then S must believe that C has recently sent both KSC and P M S: S : AG (rec ClientF inished → BC says KSC ∧ BC says P M S)
6
(24)
Verification of the SSL/TLS Protocol
The core of the verification process performed by NuMAS is a model checking algorithm for the MATL logic. This algorithm is built on top the CTL model checking and is based on Multi Agent Finite State Machine (MAFSM). Since a CTL model checking algorithm employs Finite State Machine (FSM), thus we have to extend the notion of FSM to accomodate beliefs using the notion of MAFSM. Following the above approach, in order to verify a protocol by means of NuMAS, we need to describe it as a MAFSM. To specify a MAFSM we have to describe the finite state machine of each view that models the behavior (i.e., the temporal evolution) of the corresponding entity that partecipates to the protocol. As consequence, we need to specify the propositional atoms (i.e., message variables and freshness variables) and the explicit beliefs atoms, establishing what are the local atoms of each view and specifying the compatibility relation among the views by means the set of belief atoms for each view. Moreover, we have to specify how atoms vary during the protocol execution. In particular, we need to model entity sending and receiving messages, by means of local atoms as sendP X and rec X, derived directly from the protocol description. These atoms follow the sequence of messages in the protocol and, once they become true. The behavior of other atoms (for example, atoms as sees and says) derives
136
Massimo Benerecetti et al.
Table 1. Summary of the verification of the whole SSL/TLS Parameter Views in the model State variables client/server Security specifications checked Time to build the model Time to check all the properties Total time required for the verification Memory required for the verification
Value 2 120/ 119 18 21.2 s. .21 s. 21.41 s. 2.1 Mb
from the axioms described in Section 4. In the MASFM associated to protocol to verify, these axioms are introduced as invariant, namely they are intended as MATL formulas that must be true in every state. Furthermore, we need to express beliefs about other principals. Also in this case, we use boolean variables and we constraint their behavior by means of axioms. For more details about the process verification, the reader can refer to our previous works. In particular, in [5, 3] is shown the semantics of MATL, briefly sketched in Section 3, can be finitely presented so as to allow model checking of MATL formulae. In [3, 4] can be found a detailed description of how the finite state MATL structure can be specified, and in [2] is described the symbolic model checking algorithm for MATL implemented within the NuMAS model checker. We ran the verification of SSL/TLS protocol with NuMAS on a PC equipped with a Pentium III (frequency clock 1GHz) and 256 MB RAM. Notice that, due to space constraints, in the paper we have only described the analysis of the Handshake Protocol. On the other hand we have verified the whole SSL/TLS protocol suite, including the Record Layer and the Resumption Protocol. Table 1 reports some parameters characterizing the size of the model devised for the entire version of SSL/TLS and some runtime statistics of the verification. It is worth noticing that, despite the fairly big size of the model (more than one hundred state variables per view, i.e., a state-space of more than 2240 states), NuMAS could build and verify the protocol in few seconds. All the expected security requirements turned out to be satisfied. This means that the specification of the protocol provided by IETF is sound. Therefore, we can assert that the design of this protocol is not afflicted with security flaws and so, if correctly implemented, the SSL/TLS protocol can be considered secure.
7
Conclusions
In this paper we have described how a logic of belief and time (MATL) has be used for the verification of the SSL/TLS security protocol. The verification has been performed with NuMAS, a symbolic model checker for MATL. The verification methodology has been applied to the SSL/TLS protocol, the new Internet Standard Protocol proposed by IETF. From the analysis arises that the
Verification of the SSL/TLS Protocol
137
protocol satisfies all the desired security requirements. Our results agree with those found by other researchers that have led on this protocol a formal [14, 15] or informal [17] analysis. The same verification approach to security protocols has also been applied to other protocols, in particular the Lu and Smolka variant of SET [12]. The complete verification of this variant of SET required .6 seconds with a normally equipped PC, and allowed us to find a flaw in the protocol, causing to a possible attack (see [6]). For lack of space, in this paper we have described only the verification of the SSL/TLS protocol.
References [1] M. Abadi and M. Tuttle. A semantics for a logic of authentication. In Proceedings of the 10th Annual ACM Symposium on Principles of Distributed Computing, pages 201–216, 1991. 126, 132 [2] M. Benerecetti and A. Cimatti. Symbolic Model Checking for Multi–Agent Systems. In CLIMA-2001, Workshop on Computational Logic in Multi-Agent Systems, 2001. Co-located with ICLP’01. 126, 136 [3] M. Benerecetti and F. Giunchiglia. Model checking security protocols using a logic of belief. In Proceedings of the 6th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2000), 2000. 126, 136 [4] M. Benerecetti, F. Giunchiglia, M. Panti, and L. Spalazzi. A Logic of Belief and a Model Checking Algorithm for Security Protocols. In Proceedings of IFIP TC6/WG6.1 International Conference FORTE/PSTV 2000, 2000. 126, 128, 130, 136 [5] M. Benerecetti, F. Giunchiglia, and L. Serafini. Model Checking Multiagent Systems. Journal of Logic and Computation, 8(3):401–423, 1998. 126, 128, 131, 136 [6] M. Benerecetti, M. Panti, L. Spalazzi, and S. Tacconi. Verification to Payment Protocols via MultiAgent Model Checking. In Proceedings of the 14th International Conference on Advanced Information Systems Engineering (CAiSE ’02), 2002. 126, 137 [7] Michael Burrows, Martin Abadi, and Roger Needham. A logic of authentication. ACM Transactions on Computer Systems, 8(1):18–36, feb 1990. 126, 132 [8] T. Dierks and C. Allen. The TLS Protocol Version 1.0. IETF RFC 2246, 1999. 126, 127 [9] E. A. Emerson. Temporal and Modal Logic. In Handbook of Theoretical Computer Science, volume B, pages 996–1072, 1990. 128 [10] A. Frier, P. Karlton, and P. Kocher. The SSL 3.0 Protocol. Netscape Communications Corp., 1996. 127 [11] G. Lowe. Finite-State Analysis of SSL 3.0. In Proceedings of the 4th Conference Tools and Algorithms for the Construction and Analysis of Systems, pages 147– 166, 1996. 126 [12] S. Lu and S. A. Smolka. Model Checking the Secure Electronic Transaction (SET) Protocol. In Proceedings of 7th International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems, pages 358–365. IEEE Computer Society, 1999. 137
[13] W. Marrero, E. Clarke, and S. Jha. Model Checking for Security Protocols. In Proceedings of the DIMACS Workshop on Design and Formal Verification of Security Protocols, 1997. 126 [14] C. Mitchell, V. Shmatikov, and U. Stern. Finite-State Analysis of SSL 3.0. In Proceedings of the 7th USENIX Security Symposium, pages 201–216, 1998. 126, 127, 137 [15] L. C. Paulson. Inductive Analysis of the Internet Protocol TLS. ACM Transactions on Computer and System Security, 2(3):332–351, 1999. 137 [16] P. Syverson and P. C. van Oorschot. On Unifying Some Cryptographic Protocol Logics. In Proceedings of the IEEE Symposium on Research in Security and Privacy, pages 14–28, 1994. 126, 132 [17] D. Wagner and B. Schneier. Analysis of the SSL 3.0 Protocol. In Proceedings of the 2nd USENIX Workshop on Electronic Commerce Proceedings, pages 29–40, 1996. 137
Reliability Assessment of Legacy Safety-Critical Systems Upgraded with Off-the-Shelf Components

Peter Popov
Centre for Software Reliability, City University, Northampton Square, London, UK
[email protected]

Abstract. Reliability assessment of upgraded legacy systems is an important problem in many safety-related industries. Some parts of the equipment used in the original design of such systems are either not available off-the-shelf (OTS) or have become extremely expensive as a result of being discontinued as mass-production components. Maintaining a legacy system, therefore, demands using different OTS components. Trustworthy reliability assurance after an upgrade with a new OTS component is needed, which combines the evidence about the reliability of the new OTS component with the knowledge about the old system accumulated to date. In these circumstances the Bayesian approach to reliability assessment is invaluable. Earlier studies have used Bayesian inference under simplifying assumptions. Here we study the effect of these assumptions on the accuracy of the predictions and discuss the problems, some of them open for future research, of using Bayesian inference for practical reliability assessment.
1 Introduction
The use of off-the-shelf (OTS) components containing software is becoming an increasingly widespread practice, both in the development of new systems and in upgrading existing (i.e. legacy) systems as part of their maintenance. The main reasons for the trend are the low cost of OTS components compared with a bespoke development, and the discontinuation of older components as mass-production units. In this paper we focus on reliability assessment of a legacy system upgraded with an OTS component which contains software. Two factors make the reliability assessment in this case significantly different from the assessment of a bespoke system. First, reliability data about the OTS component, if available at all, comes, as a rule, from an unknown environment, possibly different from the target one. Evidence of high reliability in a different environment will only give modest confidence in the reliability of the OTS component in the target environment. Second, acceptance testing of the upgraded system must, as a rule, be short. In some cases postponing the deployment of an upgraded system to undertake a long V&V procedure will simply prevent the owner from gaining a market advantage. In other cases, e.g. upgrading a nuclear plant with smart sensors, it is simply prohibitively expensive or even impossible to run long acceptance testing on the upgraded system before it is deployed. And yet in many
cases, e.g. in safety-critical systems, there are very stringent requirements for demonstrably high reliability of systems in which OTS components are used. In these circumstances the Bayesian approach to reliability assessment is very useful. It allows one to combine rigorously the a priori knowledge about the reliability of a system and its components with the new (possibly very limited) evidence coming from observing the upgraded system in operation.

The simplest way to assess the reliability of a system is to observe its failure behaviour in (real or simulated) operation. If we treat the system as a black box, i.e. ignore its internal structure, standard techniques of statistical inference can be applied to estimate its probability of failure on demand (pfd) on the basis of the amount of realistic testing performed and the number of failures observed. However, this 'black-box' approach to reliability assessment has severe limitations [1], [2]: if we want to demonstrate very small upper bounds on the pfd, the amount of testing required becomes very expensive and eventually infeasible. It is then natural to ask whether we can use additional knowledge about the structure of the system to reduce this problem, that is, to achieve better confidence for the same amount of testing. This is the problem which we address in this paper. We present a model of reliability assessment of a legacy system upgraded with a single OTS component and discuss the difficulties and limitations of its practical use. In detail, section 2 presents the problem studied in the paper, and section 3 presents the main result. In section 4 we discuss the implications of our results and the difficulties in applying the Bayesian approach to practical reliability assessment, some of them open research questions. Finally, conclusions are presented in section 5.
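As a rough, self-contained illustration of this black-box route (not part of the paper), the sketch below uses the standard conjugate Beta-binomial inference: with a Beta prior on the pfd and r failures observed in n independent demands, the posterior is again a Beta distribution, from which an upper bound on the pfd can be read off directly. The Beta(1, 1) prior and the demand counts are arbitrary illustrative choices.

```python
# A minimal sketch (not from the paper) of 'black-box' Bayesian inference for the
# probability of failure on demand (pfd): conjugate Beta prior + binomial likelihood.
from scipy.stats import beta

def posterior_pfd(a, b, n, r):
    """Beta(a, b) prior and r failures in n independent demands -> Beta posterior."""
    return beta(a + r, b + n - r)

prior_a, prior_b = 1.0, 1.0                       # assumed 'indifference' prior
for n in (10_000, 100_000, 1_000_000):
    post = posterior_pfd(prior_a, prior_b, n, r=0)  # failure-free testing
    print(n, post.ppf(0.99))                      # 99% upper bound on the pfd
```

With failure-free testing the 99% upper bound falls roughly as 4.6/(n + 1), i.e. it is still of the order of 5×10⁻⁶ after a million failure-free demands, which illustrates why demonstrating very small pfd bounds by black-box testing alone quickly becomes infeasible.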
2 The Problem
For simplicity we assume that the system under consideration is an on-demand system, i.e. it is called upon when certain predefined circumstances occur in the environment. A typical example of an on-demand system is a safety protection system intended to shut down a plant if the plant leaves its safety envelope.

Fig. 1. Upgrading a legacy system with an OTS component: the legacy system consists of the rest of the system (ROS) plus the old component; in the upgraded system the old component is replaced by the new OTS component.
We analyse the simplest possible case of system upgrade – the replacement of a single component with an OTS component which interacts with the rest of a legacy system (ROS), as illustrated in Fig. 1. In the rest of the paper we refer to ROS as sub-system A and to the new OTS component as sub-system B. The paper analyses a special case of a system in which both sub-systems are used exactly once per demand.
3 Reliability Assessment: Bayesian Approach
The Bayesian approach to reliability assessment of an upgraded on-demand system is used. The probability of failure on demand (pfd) is the measure of interest.

Fig. 2. Black-box model of a system: demands are processed by sub-system A and sub-system B, whose outputs are combined (OR) into the system output. The internal structure of the system is unknown. The outputs of the sub-systems are not used in the inference; only the system output is recorded on each demand and fed into the inference.
If the system is treated as a black box, i.e. we can only distinguish between system failures and successes (Fig. 2), the inference proceeds as follows. Denoting the system pfd by p, the posterior distribution of p after seeing r failures in n demands is:

  f_p(x | r, n) ∝ L(n, r | x) f_p(x),        (1)

where f_p(·) is the prior distribution of p, which represents the assessor's belief about p before seeing the result of the test on n demands, and L(n, r | x) is the likelihood of observing r failures in n demands if the pfd were exactly x. For independent demands this likelihood is given by the binomial distribution, L(n, r | x) = C(n, r) x^r (1 − x)^(n−r), where C(n, r) is the binomial coefficient.

The details of system behaviour which may be available but are ignored in the black-box model, such as the outcomes of the individual sub-systems on a demand, are taken into account in the clear-box model. As a rule the predictions obtained with the clear-box and black-box models differ. We have shown elsewhere, in the specific context of a parallel system [3], that the black-box predictions can be over-optimistic or over-pessimistic, and that the sign of the error cannot be known in advance: it depends on the prior and on the observations.

Bayesian inference with a clear-box model is more complex than with the black-box model because a multivariate prior distribution and likelihood are used. The dimension of the prior distribution depends on the number of sub-systems which make up the system and on whether the system score (a success or a failure) is a deterministic or a non-deterministic function of the scores of the sub-systems involved (see the remark below). For instance, modelling a system with two sub-systems (e.g. Fig. 1) and a deterministic system score as a clear box requires a 3-variate prior distribution/likelihood.
(Examples of systems with a deterministic system score are parallel systems [3] and serial systems, i.e. systems which fail if at least one of their sub-systems fails. An example of a system with a non-deterministic system score is the system in Fig. 1 if, for the same sub-system scores (e.g. ROS fails, OTS component succeeds), it fails on some demands but succeeds on others. A non-deterministic system score must be explicitly modelled as a separate binary random variable.)
A clear-box model of a system with the same number of sub-systems but a non-deterministic system score requires a 7-variate prior and likelihood. A clear-box model of a system with 3 sub-systems requires a 7- or 15-variate distribution/likelihood for a deterministic or a non-deterministic system score, respectively, and so on. Such an exponential explosion of the complexity of the prior/likelihood with the number of sub-systems poses two difficulties for clear-box Bayesian inference:

- Defining a multidimensional prior is difficult. It is widely reported that humans are not very good at using probabilities [4]. Increasing the dimension of the prior distribution makes it very difficult for an assessor to justify a particular prior as matching their informal belief about the reliability of the system and its sub-systems.
- Unless the prior and the likelihood form a conjugate family [5], the complexity of the Bayesian inference itself increases with the number of dimensions of the prior used in the inference, because multiple integrals must be calculated.

These two difficulties are a good incentive to try to simplify the multivariate prior distribution. One way of simplification is to assume various forms of independence between the variates of the distribution which describes the assessor's knowledge (belief) about system reliability. Recently Kuball et al. [6], in a different context, used the assumption of independence between the failures of the sub-systems, which is attractive: it allows one to avoid the difficulties of defining the dependencies that may exist between several failure processes, the most difficult part of defining a multivariate distribution. Once the failures of the sub-systems are assumed independent, however, they will stay so regardless of what is observed in operation, even if overwhelming evidence is received that the sub-system failures are correlated (positively or negatively). Such evidence of failure dependence is simply ignored; the only uncertainty affected by the inference is that associated with the pfds of the sub-systems. The multivariate Bayesian inference collapses to a set of univariate inferences, which are easily tractable.

Kuball et al. assert that the predictions derived under the assumption of independence will be pessimistic, at least in the case that no failure is observed. Even if this is true, there is no guarantee that 'no failure' will be the only outcome observed in operation, e.g. during acceptance testing after the upgrade. The justification that the 'no failure' case is the only one of interest for the assessor (since any other outcome would imply restarting the acceptance testing afresh) is known to have a problem: Littlewood and Wright have shown [7] that ignoring previous observations (i.e. rounds of acceptance testing which ended with failures) can produce over-optimistic predictions. It is worth, therefore, studying the consequences of assuming independence between the failures of the sub-systems for a broader range of observations, not just for the 'no failure' case.
3.1 Experimental Setup
Now we formally describe the clear-box model of the system (Fig. 1). Sub-systems A and B are assumed to be imperfect and their probabilities of failure uncertain. The scores of the sub-systems, which can be observed on a randomly chosen demand, are summarised in Table 1.
Table 1. The combinations of sub-system scores which can be observed on a randomly chosen demand (columns 1-2; a score of 1 denotes a failure of the corresponding sub-system), the notation used for the probabilities of these combinations (column 3), and the numbers of times the score combinations are observed in N demands, r0, r1, r2 and r3, with N = r0 + r1 + r2 + r3 (column 4).

  Sub-system A   Sub-system B   Probability   Observed in N demands
  0              0              p00           r0
  0              1              p01           r1
  1              0              p10           r2
  1              1              p11           r3
Clearly, the probabilities of failure of sub-systems A and B, p_A and p_B respectively, can be expressed as p_A = p10 + p11 and p_B = p01 + p11. p11 represents the probability of coincident failure of both sub-systems, A and B, on the same demand, and hence the notation p_AB ≡ p11 captures better the intuitive meaning of the event it is assigned to. The joint distribution f_pA,pB,pAB(·, ·, ·) completely describes the a priori knowledge of an assessor about the reliability of the upgraded system. It can be shown that for a given observation (r1, r2 and r3 in N demands) the posterior distribution can be calculated as:

  f_pA,pB,pAB(x, y, z | N, r1, r2, r3) = f_pA,pB,pAB(x, y, z) L(N, r1, r2, r3 | x, y, z) / ∫∫∫ f_pA,pB,pAB(x, y, z) L(N, r1, r2, r3 | x, y, z) dx dy dz,        (2)

where

  L(N, r1, r2, r3 | p_A, p_B, p_AB) = [N! / (r1! r2! r3! (N − r1 − r2 − r3)!)] (p_A − p_AB)^r2 (p_B − p_AB)^r1 (p_AB)^r3 (1 − p_A − p_B + p_AB)^(N − r1 − r2 − r3)

is the likelihood of the observation. Up to this point the inference is the same no matter how the event 'system failure' is defined, but calculating the marginal distribution of the system pfd, p_S, is affected by how the event 'system failure' is defined. We proceed as follows:

1. A serial system: a failure of either of the sub-systems leads to a system failure. The posterior distribution f_pA,pB,pAB(·, ·, · | N, r1, r2, r3) must be transformed to a new distribution f_pA,pB,pS(·, ·, · | N, r1, r2, r3), where p_S is defined as p_S = p_A + p_B − p_AB, from which the marginal distribution of p_S, f_pS(· | N, r1, r2, r3), is calculated by integrating out the nuisance parameters p_A and p_B. If the system is treated as a black box (Fig. 2), the system pfd can be inferred using formula (1) above, with the marginal prior distribution of p_S, f_pS(·), and a binomial likelihood of observing r1 + r2 + r3 system failures in N trials. If the failures of sub-systems A and B are assumed independent, then for any values of p_A and p_B the probability of joint failure of both sub-systems is p_AB = p_A p_B. Formally, the joint distribution can be expressed as:
  f*_pA,pB,pAB(x, y, z) = f_pA(x) f_pB(y) δ(z − xy),

i.e. the prior is concentrated on the surface z = xy and is zero elsewhere. The failures of the two sub-systems remain independent in the posterior:

  f*_pA,pB,pAB(x, y, z | N, r1, r2, r3) = f*_pA(x | N, r2 + r3) f*_pB(y | N, r1 + r3) δ(z − xy),

where f*_pA(· | N, r2 + r3) and f*_pB(· | N, r1 + r3) are the marginal posterior distributions of sub-systems A and B, respectively, inferred under independence. The inference for sub-system A proceeds according to (1), using the marginal prior of sub-system A, f_pA(·), and a binomial likelihood of observing r2 + r3 failures of sub-system A in N trials. Similarly, f*_pB(· | N, r1 + r3) is derived with the prior f_pB(·) and a binomial likelihood of observing r1 + r3 failures of sub-system B in N trials. The posterior marginal distribution of the system pfd, f*_pS(· | N, r1, r2, r3), can be obtained from f*_pA,pB,pAB(x, y, z | N, r1, r2, r3) as described above: first the joint posterior is transformed to a form which contains p_S as a variate of the joint distribution, and then p_A and p_B are integrated out.

2. The system fails only when sub-system A fails. In this case the probability of system failure is merely the posterior pfd of sub-system A (ROS). The marginal distribution f_pA(· | N, r1, r2, r3) can be calculated from f_pA,pB,pAB(·, ·, · | N, r1, r2, r3) by integrating out p_B and p_AB. With black-box inference another marginal posterior can be obtained, f*_pA(· | N, r2 + r3), using (1) with the marginal prior of the pfd of sub-system A, f_pA(·), and a binomial likelihood of observing r2 + r3 failures of sub-system A in N trials. Notice that the marginal distribution f_pA(· | N, r1, r2, r3) is, as a rule, different from the marginal distribution f*_pA(· | N, r2 + r3) obtained with the black-box inference.
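The following rough numerical sketch (our illustration, not code from the paper) evaluates the clear-box posterior (2) on a coarse grid and extracts percentiles of the serial-system pfd p_S = p_A + p_B − p_AB. The prior mirrors the illustrative choice of the numerical example in Section 3.2 (scaled Beta marginals for p_A and p_B on [0, 0.01], and a conditional Beta on [0, min(p_A, p_B)] for p_AB); the grid resolution, the variable names and the use of numpy/scipy are our assumptions.

```python
# Sketch: brute-force grid evaluation of the trivariate clear-box posterior (2) and of the
# marginal posterior percentiles of the serial-system pfd pS = pA + pB - pAB.
import numpy as np
from scipy.stats import beta

def scaled_beta_pdf(x, a, b, hi):
    return beta.pdf(x / hi, a, b) / hi              # Beta(a, b) rescaled to [0, hi]

N, r1, r2, r3 = 4000, 20, 20, 0                     # the first observation used in Section 3.2
pA = np.linspace(1e-4, 0.01, 40)                    # coarse grids over (0, 0.01]
pB = np.linspace(1e-4, 0.01, 40)
u = np.linspace(1e-3, 1 - 1e-3, 40)                 # pAB = u * min(pA, pB), u in (0, 1)

A, B, U = np.meshgrid(pA, pB, u, indexing="ij")
AB = U * np.minimum(A, B)
# u ~ Beta(5, 5) on [0, 1] is equivalent to a Beta(5, 5) for pAB rescaled to [0, min(pA, pB)];
# using the u-grid keeps the cell volumes constant, so they cancel on normalisation.
prior = (scaled_beta_pdf(A, 2, 2, 0.01) * scaled_beta_pdf(B, 3, 3, 0.01) * beta.pdf(U, 5, 5))
loglik = (r1 * np.log(B - AB) + r2 * np.log(A - AB) + r3 * np.log(np.maximum(AB, 1e-300))
          + (N - r1 - r2 - r3) * np.log(1 - A - B + AB))
post = prior * np.exp(loglik - loglik.max())        # unnormalised posterior weights
post /= post.sum()

pS = (A + B - AB).ravel()                           # serial-system pfd at each grid point
order = np.argsort(pS)
cdf = np.cumsum(post.ravel()[order])
for q in (0.5, 0.9, 0.99):
    print(q, pS[order][np.searchsorted(cdf, q)])    # posterior percentiles of pS
```

The same machinery gives the other cases of interest by changing only the last block: integrating out p_B and p_AB (i.e. marginalising the weights over p_A alone) gives the "system fails only when A fails" case, and replacing the conditional Beta factor by the constraint p_AB = p_A p_B gives the inference under the independence assumption.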
3.2 Numerical Example
Two numerical examples are presented below which illustrate the effect of various simplifying assumptions used in the inference on the accuracy of the predictions. The prior f_pA,pB,pAB(·, ·, ·) was constructed under the assumption that f_pA(·) and f_pB(·) are both Beta distributions, B(·, a, b), on the interval [0, 0.01] and are independent of each other, i.e. f_pA,pB(·, ·) = f_pA(·) f_pB(·). The parameters a and b for the two distributions were chosen as follows: aA = 2, bA = 2 for sub-system A and aB = 3, bB = 3 for sub-system B. If the sub-systems are assumed to fail independently, the parameters above are a sufficient definition of the prior distribution. If the sub-systems are not assumed to fail independently, we specify the conditional distributions f_pAB|pA,pB(· | p_A, p_B), for every pair of values of p_A and p_B, as Beta distributions B(·, a, b) on the range [0, min(p_A, p_B)] with parameters aAB = 5, bAB = 5, which completes the definition of the trivariate distribution f_pA,pB,pAB(·, ·, ·).

We do not claim that the priors used in the examples should be used in practical assessment. They serve illustrative purposes only and yet have been chosen from a reasonable range. Each of the sub-systems, for instance, has an average pfd of 5×10⁻³, a value from a typical range for many applications. Two sets of observations were used for the calculations, both with the same number of trials, N = 4000:
- Observation 1 (the sub-systems never failed together): r3 = 0, r1 = r2 = 20;
- Observation 2 (the sub-systems always failed together): r3 = 20, r1 = r2 = 0.

The numbers of failures of the sub-systems have been chosen so that the two observations are indistinguishable under the assumption of failure independence: in both cases each of the sub-systems failed 20 times, equal to the expected number of failures of each sub-system in 4000 demands as defined by the prior. The observations, however, provide evidence of different correlation between the failures of the sub-systems: the first observation gives evidence of strong negative correlation, while the second gives evidence of strong positive correlation. The inference results under various assumptions for both observations are summarised in Tables 2 and 3, respectively, which show the percentiles of the marginal prior/posterior distributions of the system pfd.

Table 2. Observation 1: strong negative correlation between the failures of the sub-systems (N = 4000, r3 = 0, r1 = r2 = 20). The upper part of the table shows the posteriors if the upgraded system were a 'serial' system, while the lower part shows the posteriors if the system failed only when sub-system A failed.
                                                            50%      75%      90%      95%      99%
Serial system
  Prior system pfd, f_pS(·)                                 0.0079   0.0096   0.0114   0.0124   0.0144
  'Proper' posterior pfd, f_pS(·|N, r1, r2, r3)             0.01     0.0118   0.012    0.0126   0.0137
  Posterior pfd with independence, f*_pS(·|N, r1, r2, r3)   0.0103   0.0112   0.0122   0.0128   0.01393
  Black-box posterior with independence                     0.01     0.011    0.012    0.0125   0.0136
  Black-box posterior without independence                  0.0095   0.01035  0.0113   0.0118   0.0128
Failure of sub-system A (ROS) only leads to a system failure
  Prior system pfd, f_pA(·)                                 0.0049   0.0066   0.0080   0.0086   0.0093
  'Proper' posterior pfd, f_pA(·|N, r1, r2, r3)             0.0051   0.0059   0.0066   0.0071   0.0079
  Posterior system pfd with independence, f*_pA(·|N, r2+r3) 0.005    0.0058   0.0065   0.0069   0.0078
The results in Table 2 reveal that the black-box inference produces optimistic posteriors: there is stochastic ordering between the posteriors obtained with the clear-box model, no matter whether independence of failures is assumed or not. Comparing the clear-box predictions with and without failure independence reveals another stochastic ordering: the predictions with independence are conservative. This is in line with the result by Kuball et al. The differences between the various posteriors are minimal. The tendency remains the same (the same ordering between the posteriors was observed) for a wide range of observations with negative correlation between the failures of the sub-systems. Finally, for a non-serial system the independence assumption produces more optimistic predictions than without independence (the last two rows of the table). In other words, independence is not guaranteed to produce conservative predictions; the ordering depends on how the event 'system failure' is defined.

Table 3. Observation 2: strong positive correlation between the failures of the two sub-systems (N = 4000, r3 = 20, r1 = r2 = 0). The arrangement of the results is the same as in Table 2.
                                                            50%      75%      90%      95%      99%
Serial system
  Prior system pfd, f_pS(·)                                 0.0079   0.0096   0.0114   0.0124   0.0144
  'Proper' posterior pfd, f_pS(·|N, r1, r2, r3)             0.0051   0.0058   0.0065   0.0069   0.0076
  Posterior pfd with independence, f*_pS(·|N, r1, r2, r3)   0.0103   0.0113   0.0123   0.0128   0.0139
  Black-box posterior with independence                     0.0055   0.0063   0.0071   0.0075   0.0084
  Black-box posterior without independence                  0.0059   0.0066   0.0073   0.0078   0.0088
Failure of sub-system A (ROS) only leads to a system failure
  Prior system pfd, f_pA(·)                                 0.0049   0.0066   0.0080   0.0086   0.0093
  'Proper' posterior pfd, f_pA(·|N, r1, r2, r3)             0.00495  0.0056   0.0063   0.0066   0.0074
  Posterior system pfd with independence, f*_pA(·|N, r2+r3) 0.005    0.0058   0.0065   0.0069   0.0078
The results from the black-box inference in Table 3 reveal a pattern different from Table 2. Black-box predictions here are more pessimistic than the 'proper' posteriors, i.e. those obtained with the clear-box model without assuming independence. This is true both for the serial system and for the system which only fails when sub-system A fails. The fact that the sign of the error of the black-box predictions changes (from over-estimation of reliability in Table 2 to under-estimation in Table 3) is not surprising; it is in line with our result for parallel systems [3]. If we compare the clear-box predictions, with and without independence, the same stochastic ordering is observed no matter how the event 'system failure' is defined. If system failure is equivalent to a failure of sub-system A, the predictions with independence (i.e. if the effect of sub-system B on sub-system A is neglected) are more pessimistic than the predictions without independence. In other words, the ordering for this type of system is the opposite of what we saw in Table 2. For serial systems, the predictions shown in Table 3 obtained with independence are, as in Table 2, more pessimistic than without independence. The pessimism, however, is in this case much more significant than it was in Table 2: the values predicted under independence are almost twice the values without independence, so the conservatism for a serial system may become significant. With the second observation (Table 3) the assumption of statistical independence of the failures of the sub-systems is clearly unrealistic: if independence were true, the expected number of joint failures would be about 0.1 in 4000 trials (4000 × (20/4000)² = 0.1), while 20 were actually observed.
4 Discussion
The results presented here are hardly surprising. They simply confirm that simplifications of models or of model parameterisation may lead to errors. If the errors caused by the simplifications were negligible, or at least consistently conservative, reporting on them would not have made much sense. What seems worrying, however, and therefore worth pointing out, is that the errors are neither guaranteed to be negligible nor consistently conservative. Simplifying the model and using black-box inference may lead to over- or under-estimation of system reliability. We reported elsewhere on this phenomenon with respect to a parallel system; here we present similar results for alternative system architectures.

One seems justified in concluding that using a more detailed clear-box model always pays off by making the predictions more accurate. In some cases the improved accuracy may imply less effort to demonstrate that a reliability target has been reached, i.e. it makes the V&V less expensive. In other cases it prevents over-confidence in system reliability, although at the expense of longer acceptance testing. In the particular context of this study, reliability assessment of an upgraded system, there is another reason why the clear box must be preferred to the black box. We would like to reuse the evidence available to date in the assessment of the upgraded system. This evidence, if available at all, is given for the sub-systems: for sub-system A and, possibly, for sub-system B, but not for the upgraded system as a whole. Using the available evidence seems only possible by first modelling the system explicitly as a clear box and plugging the pieces of evidence into the definition of the joint prior. From the multivariate prior, the marginal prior distribution of the system pfd can be derived and used in a marginal Bayesian inference. It does not seem very sensible, however, to use the marginal inference after carrying out the hard work of identifying a plausible multivariate prior: the gain will be minimal, only in terms of making the inference itself easier, which is too little at the expense of inaccurate predictions.

The results with the simplified clear-box inference seem particularly worth articulating. Our two examples indicated conservative predictions obtained for a serial system under the independence assumption. One may think that this is universally true for serial systems, as Kuball et al. asserted. Unfortunately, this is not the case: we have found examples of priors/observations for which the conservatism does not hold.
An example of such a prior/observation set is the following: f_pA(·) and f_pB(·) are assumed to be independent Beta distributions on the range [0, 1] with parameters aA = 2, bA = 20 and aB = 2, bB = 2, respectively; the conditional distribution f_pAB|pA,pB(· | p_A, p_B) for a pair of values p_A and p_B is assumed to be a Beta distribution on the range [0, min(p_A, p_B)] with parameters aAB = 5, bAB = 5; and the observations are N = 40, r1 = 0, r2 = 12, r3 = 12. In this case the posterior system pfd under the assumption of independence is more optimistic than the posterior without independence. The point of this counterexample is that there exist cases in which the assumption of independence leads to over-optimism. Since the exact conditions under which the predictions become over-optimistic are unknown, assuming independence between the failures of the sub-systems may be dangerous: it may lead to unacceptable errors such as over-confidence in the achieved system reliability.

A second problem with the independence assumption is that even when the predictions under this assumption are conservative, the level of conservatism may be significant, which is expensive. This tendency seems to escalate as the number of sub-systems increases. In the worst case it seems that the level of conservatism in the predicted system reliability is proportional to the number of sub-systems used in the model. For a system of 10 sub-systems, for example, the worst-case underestimation of system reliability can reach an order of magnitude. The implication is that, by using predictions based on the independence assumption, the assessor may insist on unnecessarily long acceptance testing until unnecessarily conservative targets are met. We call them unnecessary because the conservatism is merely due to the error caused by the independence assumption. Using the independence assumption in Bayesian inference is in a sense ironic, because it is against the Bayesian spirit of letting the data 'speak for itself'. Even if the observations provide overwhelming evidence of dependence between the failures of the sub-systems, the strong assumption of independence precludes taking this knowledge into account: in the posteriors, the failures of the sub-systems will continue to be modelled as independent processes.

Pointing out problems with the simplified solutions does not, of course, solve the problem of reliability assessment of a system made up of sub-systems. The full inference requires a full multivariate prior to be specified, which for a system with more than 3 components seems extremely difficult unless a convenient parametric distribution, e.g. a Dirichlet distribution [5], is used, which in turn is known to be problematic, as reported in [3]. In summary, with the current state of knowledge it does not seem reasonable to go into detailed structural reliability modelling, because of the intrinsic difficulties in specifying the prior without unjustifiable assumptions.

Our example of a system upgraded with an OTS component is ideally suited for a 'proper' clear-box Bayesian inference, because only a few sub-systems are used in the model. It is applicable if one can justify that the upgrade is 'perfect', i.e. that there are no (design) faults in integrating the new OTS component with the ROS. If this is the case, the following assumptions seem plausible in defining the joint prior after the upgrade:
- the marginal distribution of the pfd of sub-system A (ROS) is available from the observations of the old system before the upgrade;
- the marginal distribution of the pfd of the OTS component will generally be unknown for the new environment (in interaction with sub-system A and the system environment). We should take a conservative view here and assume that the new OTS component is no better in the new environment than it is reported to have been in other environments. It may even be worth assuming it to be less reliable than the component it replaces, unless we have very strong evidence to believe otherwise. Such strong evidence can only come from the new OTS component having been used extensively in an environment similar to the environment created for it by the system under consideration. The new OTS component may have a very good reliability record in various environments; this, however, cannot be used 'automatically' as strong evidence about its reliability in the upgraded system;
- the pfds of sub-systems A and B are independently distributed (as we assumed in the examples), unless there is evidence to support assuming otherwise. In the latter case, the supporting evidence will have to be used in defining the dependence between the pfds of the two sub-systems;
- in specifying the probability of joint failure of sub-systems A and B we can use 'indifference' within the range of possible values, but sensitivity analysis is worth applying, to detect whether 'indifference' leads to gaining high confidence in high system reliability too quickly.

If justifying a 'perfect' upgrade is problematic, at least a 7-variate prior distribution must be used, so that system failures observed in operation which are neither failures of the ROS nor of the new OTS component can be accommodated in the inference. In this case the marginal distributions of the pfds of the two sub-systems, A and B, are the only 'obvious' constraints which can be used in defining the prior. These, however, are likely to be insufficient to set the parameters of the 7-variate joint prior distribution, and additional assumptions are needed which may be difficult to justify.
5 Conclusions
We have studied the effect of the model chosen, and of a simplifying assumption in parameterising a clear-box model, on the accuracy of Bayesian reliability assessment of a system upgraded with a single OTS component. We have shown:
- that simplifications may lead to overestimation or underestimation of system reliability, and the sign of the error is not known in advance. The simplified inference, therefore, cannot be recommended for predicting the reliability of safety-critical systems;
- that even when the simplified predictions are conservative, e.g. the predictions for a serial system under the assumption of independence of failures of the sub-systems, they may be too conservative. In the worst case the conservatism is proportional to the number of sub-systems used in the model. This leads to unjustifiably conservative reliability targets which are expensive to achieve;
- that detailed clear-box modelling of a system is intrinsically difficult because: i) the full inference without simplifications requires specifying a multivariate prior, which is difficult with more than 3 variates; ii) the simplified inferences (black-box, or clear-box with simplifications) have problems with the accuracy of the predictions;
- how the available knowledge about the reliability of the sub-systems before the upgrade can be reused in constructing a multivariate prior when the upgrade is free of design faults.

The following problems have been identified and are open for further research:
- clear-box inference with the simplifying assumption that the sub-systems fail independently has been shown to lead to over- or underestimation of system reliability. Identifying the conditions under which the simplified clear-box inference produces conservative results remains an open research problem;
- further studies are needed into multivariate distributions which can be used as prior distributions in a (non-simplified) clear-box Bayesian inference.
Acknowledgement. This work was partially supported by the UK Engineering and Physical Sciences Research Council (EPSRC) under the 'Diversity with Off-the-shelf components (DOTS)' project and the 'Interdisciplinary Research Collaboration in Dependability of Computer-Based Systems (DIRC)'.
References
1. Littlewood, B. and L. Strigini, Validation of Ultra-High Dependability for Software-based Systems. Communications of the ACM, 1993. 36(11): p. 69-80.
2. Butler, R. W. and G. B. Finelli, The Infeasibility of Experimental Quantification of Life-Critical Software Reliability. In ACM SIGSOFT '91 Conference on Software for Critical Systems, in ACM SIGSOFT Software Eng. Notes, Vol. 16(5). 1991. New Orleans, Louisiana.
3. Littlewood, B., P. Popov, and L. Strigini, Assessment of the Reliability of Fault-Tolerant Software: a Bayesian Approach. In 19th International Conference on Computer Safety, Reliability and Security, SAFECOMP 2000. 2000. Rotterdam, The Netherlands: Springer.
4. Strigini, L., Engineering judgement in reliability and safety and its limits: what can we learn from research in psychology? 1994. http://www.csr.city.ac.uk/people/lorenzo.strigini/ls.papers/ExpJudgeReport/
5. Johnson, N. L. and S. Kotz, Distributions in Statistics: Continuous Multivariate Distributions. Wiley Series in Probability and Mathematical Statistics, ed. R. A. Bradley, J. S. Hunter, D. G. Kendall, G. S. Watson. Vol. 4. 1972: John Wiley and Sons, Inc. 333.
6. Kuball, S., J. May, and G. Hughes, Structural Software Reliability Estimation. In SAFECOMP '99, 18th International Conference on Computer Safety, Reliability and Security. 1999. Toulouse, France: Springer.
7. Littlewood, B. and D. Wright, Some conservative stopping rules for the operational testing of safety-critical software. IEEE Transactions on Software Engineering, 1997. 23(11): p. 673-683.
Assessment of the Benefit of Redundant Systems

Luping Chen, John May, and Gordon Hughes
Safety Systems Research Centre, Department of Computer Science, University of Bristol, Bristol, BS8 1UB, UK
{chen,jhrm,hughes}@cs.bris.ac.uk
Abstract. The evaluation of the gain in reliability of multi-version software is one of the key issues in the safety assessment of high-integrity systems. Fault simulation has been proposed as a practical method to estimate the diversity of multi-version software. This paper applies data-flow perturbation as an implementation of the fault injection technique to evaluate redundant systems under various conditions. A protection system is used as an example to illustrate the evaluation of software structural diversity, the optimal selection of channel-pairs, and the assessment of different designs.
1 Introduction
The potential benefit of voted redundancy incorporating multi-channel software is improved reliability for safety-critical systems [1]. The measurement and assessment of multi-version software is a longstanding topic of research. However, to date no effective and realistic metric or model exists for describing software diversity and evaluating the gain in reliability when diversity/redundancy is employed. Probabilistic models, such as those of Eckhardt and Lee [2] and Littlewood and Miller [3], depend upon parameters that are very difficult to estimate. The main obstacle is the difficulty of observing realistic failure behaviours of a safety-critical system with extremely high reliability; such observations are needed to estimate the values required in the corresponding probability models. However, fault injection has been proposed as a means of assessing software diversity using simulated faults and the associated failure behaviours [4] [5]. The fault injection approach can provide a quantitative estimate of diversity as a basis for assessing the degree of fault tolerance achieved by redundant system elements. In turn this allows investigation of the effectiveness of different redundancy design strategies.

To estimate software diversity it is not enough to know only the failure probability of each single version. The fault distribution in the software structure greatly influences diversity, because diversity is determined not only by the failure rates but also by the positions of the failure regions in the input space (the input space is assumed to be shared by the versions). Specifically, it is necessary to understand the overlaps between the failure domains of the different versions caused by various faults. Effective searching methods are required to reveal the failure regions because the
searching tasks are very intensive. An intuitive quantitative index of failure diversity for specific faults is also needed, based on a comparison of the different failure domains of the versions under test. Both of these requirements have been addressed in [5]. The index requires two types of information: the individual version failure probability estimates and the likely fault distribution pattern of each single version [6].

This paper applies these techniques for assessing software diversity to some realistic systems incorporating redundant elements. It outlines the implementation, and then a diversity index table is used to record and compute the software diversity under various conditions. Furthermore, such quantitative estimates are used to observe the relative effectiveness of design strategies that are commonly regarded as having an influence on diversity; these design factors (data, algorithms, design methods, etc.) are potentially important for forcing diversity between versions. Finally, some results from the experiments are presented and a discussion is provided of further applications and enhancements of the approach.
2 Fault Injection and Diversity Measurement
The feasibility of the fault injection approach to assessing software diversity depends on the development of the following technical elements:

1. methods to reveal how faults manifest themselves in the input space (the failure rates and positions);
2. methods to measure the degree of coincident failure resulting from the faults naturally occurring in 'diverse' versions;
3. the introduction of faults into a program, or the formulation of a library of 'real' faults, and the categorisation of 'possible' faults so that the scale of any study can be controlled.

The first two elements represent the primary objectives, in that with this ability it would be possible to establish the overlapping relation between the failures of versions. However, the third element is important to allow the study to be carried out with limited resources and to understand the fault-failure relation of the software. The following sections summarise the methods and tools that have been developed for the practical utilisation of these elements.

2.1 Diversity Quantification
The concept of 'version correlation', as clarified in references [2, 3], has been used to explain the difficulty of diversity estimation. The positive covariance caused by 'common faults' was regarded as a significant cause of dependence between versions. Theoretically, if the failures of two systems A and B are independent, their common failure probability is given by P(AB) = P(A) · P(B), where P(A) and P(B) are the failure probabilities of versions A and B respectively, and P(AB) is the common failure probability of both versions, i.e. the probability that both fail together. In practice, however, independence cannot be assumed, and a measure of the degree of dependence is given by:

  Cov(A, B) = P(AB) − P(A) · P(B)        (1)

To reflect the reliability gain of the two-version design, the diversity of a pair of versions A and B for an injected fault is defined as:

  Div(AB) = 1 − P(AB) / min{P(A), P(B)}        (2)

It is obvious that 0 ≤ Div ≤ 1, where 1 means that the two versions have no coincidental failure for the fault, which is the ideal case, and 0 means there is no improvement over a single version, because the coincidental failure area is no smaller than the smaller of the two failure areas of the individual versions. Where P(A) or P(B) is 0, Div(AB) is defined to be 1. Based on this index, we can set up a 'Div table' to record the diversity of the versions under various faulty conditions, which acts as a basis for statistical measures of redundant system behaviour.
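For concreteness, the index can be computed directly from the failure regions established for two versions under a given injected fault. The helper below is our own sketch (not the authors' tool); it estimates P(A), P(B) and P(AB) from sets of failing input points over a finite, discretised input space and returns the covariance (1) and the diversity (2).

```python
# Illustrative helper: compute Cov (1) and Div (2) from the sets of input points on
# which each faulty version fails, over a discretised input space of known size.
def cov_and_div(fail_a, fail_b, input_space_size):
    pa = len(fail_a) / input_space_size
    pb = len(fail_b) / input_space_size
    pab = len(fail_a & fail_b) / input_space_size   # coincident failure points
    cov = pab - pa * pb                              # equation (1)
    if min(pa, pb) == 0:
        div = 1.0                                    # defined as 1 when P(A) or P(B) is 0
    else:
        div = 1.0 - pab / min(pa, pb)                # equation (2)
    return cov, div

# Example with two small, partly overlapping failure regions in a 10,000-point space:
cov, div = cov_and_div({1, 2, 3, 4}, {3, 4, 5, 6}, 10_000)
print(cov, div)   # positive covariance, Div = 0.5
```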
2.2 Locating Failure Region Overlaps
Testing multi-version software differs from single-version testing in that it is not simply the number of failure points in the input space which matters, but also their position [15]. Only with this extra information can we determine coincidental failures and calculate diversity as described in 2.1. The proposed approach to assessing diversity crucially depends on an ability to search a multi-dimensional input space to establish the magnitudes and locations of the regions which stimulate the system to fail. The necessary resolution of the search methods is related to the accuracy of the coincident-failure predictions that we wish to claim for the study. The developed searching methods address two distinct problems:

i) determining an initial failure point given a specific fault;
ii) searching from this point to establish the size and boundary of any associated failure region.
In the case of software with known faults, the first form of search does not have to be used. Generally, test cases should be effective at finding failures of a program, and preferably should satisfy some requirement or criterion for testing that is repeatable, automatic, and measurable. A review of the possible approaches indicated that each has a different effectiveness for different problems and testing objectives [5]. In our automatic test tool, the neighbourhood of a known failure point is converted into state-space search branches and then a branch-and-bound algorithm is used for searching contiguous areas [5]. Generally, for searching we want to find the approach that results in the lowest number of iteration steps (and hence the least computing time). Branch-and-bound techniques rely on the idea that we can partition our choices into sets using some domain knowledge, and ignore a set when we can determine that the searched-for element cannot be within it. In this way we can avoid examining most elements of most sets [7].
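The sketch below is a simplified stand-in for the second search problem (it is not the authors' branch-and-bound tool): starting from a known failure point, it grows the contiguous failure region by breadth-first exploration of neighbouring points in a discretised input space. The 'fails' argument is an assumed oracle that runs the faulty version against its golden version for a single input point; 'neighbours' and the grid step are likewise our assumptions.

```python
# Simplified failure-region growth from a known failure point over a discretised input space.
from collections import deque

def failure_region(seed, fails, neighbours):
    """Return the set of contiguous input points (tuples) on which 'fails' is True."""
    region, frontier = {seed}, deque([seed])
    while frontier:
        point = frontier.popleft()
        for nxt in neighbours(point):
            if nxt not in region and fails(nxt):
                region.add(nxt)
                frontier.append(nxt)
    return region

def grid_neighbours(point, step=1):
    # axis-aligned neighbours of an integer-valued input vector
    for i in range(len(point)):
        for d in (-step, step):
            yield point[:i] + (point[i] + d,) + point[i + 1:]
```

A branch-and-bound formulation, as used in the actual tool, would additionally prune whole sub-regions that domain knowledge shows cannot contain failure points, rather than visiting every neighbouring point.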
2.3 Fault Injection by Data-Flow Perturbation
Two fault injection techniques have been used in the experiments. Data-flow perturbation (DFP) is based on locations in the data flow of the software, which are easily controlled to satisfy test adequacy criteria covering all program branches [5,9]. Constant perturbation is based on special constants in quantitative requirement specifications and can be regarded as a special type of DFP [6]. The previous work shows that diversity, as measured by the mapping of common failure regions corresponding to injected faults, (not surprisingly) depends on the pattern of the inserted fault set. Ideally one would wish to use a 'realistic' fault set, but the definition of realism is deferred for the present.

Fault simulation methods have been increasingly used to assess which methods might reveal software defects or which software properties might cause defects to remain hidden [8]. Their application in software testing takes two different forms: the physical modification of the program's source code to make it faulty, and the modification of some part of the program's data state during execution. Some works have shown that the main difficulties and limitations of the first method lie in the huge variety of possible code alterations. The idea of data-state errors was suggested to improve the effectiveness of fault injection techniques, and data-flow perturbation and constant perturbation have been proposed and applied in our empirical approach to software diversity assessment [6]. A data-state error can propagate because one data error can transfer to other data states via the data flow and may finally change the output of the software. The importance of data flow is that it is the only vehicle for data-state error propagation. Constant perturbation can simulate faults in data flows because the data flow is fundamentally controlled by these constants. Many results have been published to support the application of the data-state error method for fault injection. These works began by studying the probability that data-state errors would propagate to the output of the program [9][10], then whether the behaviour of each real error was simulated by the behaviour of such data-mutant errors, and finally whether all artificially injected errors at a given location behave the same [11]. In constant perturbation, the selected constants are distributed at locations over the data-flow chart and are key parameters that decide the data flow. The advantage of this method is that we can control the test scale easily while satisfying the test adequacy criteria to cover all program branches.
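As a toy illustration of constant perturbation (entirely ours; the constant, the threshold value and the trip function are hypothetical and are not taken from the protection-system code), a requirement constant in a simple trip decision is perturbed by a chosen amplitude and the mutant is compared with the golden version across a discretised input range, which maps the failure region induced by that single injected fault.

```python
# Toy constant-perturbation example: perturb a requirement constant, then compare the
# mutant against the golden version over the whole discretised input range.
TRIP_THRESHOLD = 350          # hypothetical constant from a quantitative requirement

def golden(pressure):
    return pressure > TRIP_THRESHOLD

def make_mutant(delta):
    def mutant(pressure):
        return pressure > TRIP_THRESHOLD + delta     # negative or positive perturbation
    return mutant

mutant = make_mutant(delta=+10)                      # a 'positive perturbation' pattern
failure_points = {p for p in range(0, 1000) if mutant(p) != golden(p)}
print(len(failure_points), min(failure_points), max(failure_points))
```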
3 Protection System Evaluation

3.1 Examples of Redundant Systems
Evaluation of software diversity using the fault injection approach was illustrated, as previously introduced in [6], by testing the software of a nuclear plant protection system project [12]. In the previous work we mainly focused on developing the techniques needed to realise the approach, such as data perturbation, fault selection and diversity measurement. This paper aims to introduce its application to the assessment of redundant systems. The software of the protection system is also used here for an extended application of the approach. The software includes four channels programmed in different languages, all matching the same requirement specification. Two channels, programmed in C and Ada, have been used for our experiments. In each case, the final code resulting from the software development process was used as the 'golden version'. On a given test, whether 'success' or 'failure' occurred with the versions derived by injecting faults into the golden versions was determined by comparing the outputs of these versions against their respective golden versions.

Redundant systems were built using different combinations of the two channels. Firstly, the C and Ada channels were used to constitute a channel-pair, C-Ada, as in the original system. This pair is used to test how the difference in software structure between channels influences diversity. Secondly, we used two identical copies of the C channel or of the Ada channel respectively to constitute two new systems, denoted the C-C and Ada-Ada pairs. In reality, no one would design a system in this way, because direct copies of the same prototype mean that the two channels will contain coincident faults and show no diverse behaviour. In our experiments these two systems are used to study how diversity is influenced by different fault distributions alone. The significance of this simulation is to investigate the situation in which two teams develop channels without deliberately different design strategies: the two channels can still show diverse failure behaviour because the two development teams might introduce different faults. The randomness of development errors can be simulated by inserting manifestly different faults in the two channels.

The application of the fault injection approach to analysing redundant system behaviour included three experiments. In 3.2, the experiment illustrates the measurement of software diversity when considering different influencing factors. Failure diversity arises via the different structural designs of the channels and via different faults in the channels. Even without attempting to design diverse structures, the approach of using different development methods to avoid the same fault classes has merit, irrespective of the methods' ability to produce structural diversity. In 3.3, the experiment compares diversity differences among various channel-pairs. This application provides a way to select an optimal channel-pair, or to build a redundant system with the most diversity, under some specified conditions. In 3.4, the effectiveness of diversity design strategies is assessed by the fault injection approach. The experiments, based on a new deliberately designed channel-pair, showed that a forced diverse structural design is feasible, that the designed structures do influence software diversity, and that the design strategies can be assessed by the fault injection approach.
3.2 Diversity Measurement of Different Channel-Pairs
To assess the factors influencing software diversity, all three channel-pairs were tested under various faulty conditions. In each channel (C or Ada version), four patterns of fault (denoted NP-10_1, NP-10_2, PP-10_1 and PP-10_2) were simulated by constant perturbation. The patterns included different amplitudes and negative or positive perturbations (perturbing a data item by decreasing or increasing its value) [4]. Building a fault set involves selecting the locations (or constants) for the inserted faults, and deciding how many times and how large to make the perturbation.
Some locations (constants) are very sensitive to the perturbation, and some large perturbations will cause the software to fail at a very high rate. Such unrealistic faults were excluded from the fault sets for testing diversity, since in practice they would be found by testing. Finally, therefore, each set includes 10 sifted faults, selected on the basis of their locations in the versions and the size of the resulting failure rate. In total, eight fault sets for the four fault patterns were set up for the two channels. The calculation of the Div between any pair of faults from two similar sets (with the same perturbation at the same locations, or for the same constants), respectively injected into the two channels, forms one value of the Div matrix.
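A Div matrix of the kind plotted in Figs. 3.1 to 3.3 can be assembled mechanically once the failure region of each sifted fault has been mapped in each channel. The sketch below is ours; it assumes, as an illustrative data structure, one set of failing input points per fault per channel.

```python
# Sketch: build the Div matrix for a channel-pair from per-fault failure regions.
def div(fail_1, fail_2, space_size):
    p1, p2 = len(fail_1) / space_size, len(fail_2) / space_size
    pc = len(fail_1 & fail_2) / space_size
    return 1.0 if min(p1, p2) == 0 else 1.0 - pc / min(p1, p2)

def div_matrix(regions_ch1, regions_ch2, space_size):
    # regions_chX[i] is the set of input points on which fault i makes channel X fail
    return [[div(r1, r2, space_size) for r2 in regions_ch2] for r1 in regions_ch1]

# Hypothetical failure regions for two faults per channel over a 10,000-point input space:
regions_c   = [{1, 2, 3}, {10, 11}]
regions_ada = [{2, 3, 4}, {50, 51, 52}]
print(div_matrix(regions_c, regions_ada, 10_000))
```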
Fig. 3.1. Div values for the C-Ada pair (Div matrix over fault pairs S1-S10 × 1-10, shaded in bands from 0-0.2 to 0.8-1).
Fig. 3.2. Div values for the Ada-Ada pair (Div matrix over fault pairs S1-S10 × 1-10, shaded in bands from 0-0.2 to 0.8-1).
Fig. 3.3. Div values for the C-C pair (Div matrix over fault pairs S1-S10 × 1-10, shaded in bands from 0-0.2 to 0.8-1).
Figures 3.1 to 3.3 are graphical representations of the Div matrix values for C-Ada, Ada-Ada and C-C with a similar fault pattern. Inserting faults with the other patterns gave the same result, so those plots are omitted for brevity. In all three situations (Fig. 3.1 to Fig. 3.3) the fault numbers are arranged in the same order for both channels; for example, in Figure 3.1, No. 1 on the X-axis (a fault in the Ada channel) is the same as No. S1 on the Y-axis (a fault in the C channel). The C-Ada pair was tested mainly to explore the effects of structural diversity. In Fig. 3.1 it can be seen that the Div values form an unsymmetrical matrix and that the diagonal elements are not always zero. This means that, even for the same fault, the C and Ada channels may have different failure regions. This is direct evidence of the design diversity of the two versions, because 'diversity by different faults' has been precluded. From the Div values of the C-C and Ada-Ada pairs in Fig. 3.2 and Fig. 3.3, we can observe the effects on diversity of varying fault distributions alone. Their Div values form symmetrical matrices with all diagonal elements zero, but the off-diagonal elements are mostly greater than zero. This shows that different faults in the two channels of a C-C or Ada-Ada pair can cause diverse failure behaviour, which is the main factor to consider in diversity design for such channel-pairs. It also shows that the C-C and Ada-Ada pairs may have different diversities, because the structures of the C and Ada channels are different; in the next section we discuss how to use this difference to select a channel-pair. By synthesising the test results of all three channel-pairs, we can also conclude that:
• Statistically, the common failure probability observed in the tests is higher than the value obtained by assuming failure independence (i.e. by multiplying the failure probabilities of the single versions). This is consistent with the general belief that the common failure probability will be underestimated if failure independence is assumed.
• By varying the size of the perturbation (same constants in each version), the diversity achieved was observed to increase as the single-version failure probabilities decreased. This result is used below in the assessment of design strategies.

3.3 The Selection of Version-Pairs
Similar to the experiments in 3.2, we can compare the results for all pairs to investigate which pair has the better diverse behaviour, as shown in Fig. 3.4. No pair shows an obvious advantage, perhaps because no special design effort was put into ensuring diversity. Using our 'same faults' analysis, on average the C-Ada and C-C pairs gave higher diversity. One possible reason is that the Ada version used more global variables, so that there is higher branch coupling, which naturally tends to result in more common failure regions.
Fig. 3.4. Comparison of version-pair
These specific conclusions are not suggested as being generally true, because of the restricted nature of the injected faults used. However, the experiments have demonstrated the feasibility of the approach for comparing channel-pairs and assessing design strategies given more realistic fault conditions, i.e. the same experimental procedures would be repeated for different types of injected fault set.
3.4 Assessment of System Designs on Diversity
Using development teams with different backgrounds to design multi-version software has been a traditional approach to claiming or enhancing diversity [1] [13]. As discussed before, two factors play essential roles in the production of diversity. One concerns the distribution of faults, i.e. random uncertainties relating to human mistakes; to exploit this factor to increase diversity, the aim would be to reduce the number of faults and to avoid the 'same' faults occurring in different versions. The second factor concerns using different software structures, possibly resulting from the use of different programming languages and design/development methods. The experiments showed that the structures existing in the diverse versions used could protect the software from coincidental failures even assuming the same faults are present in the versions. In the present case, differences in the software structure are entirely 'coincidental'. In practice, to force version diversity, it is plausible that special strategies can be employed at the software design stage [14]. In this section, the experiment aims to use the fault injection approach to assess the change introduced by the deliberate design of a new version. In particular, we want to show three points:
Design structure can influence software diversity. This influence can be controlled. Design strategies can be assessed by the fault injection approach.
The way software structure can influence failure diversity is currently poorly understood. Some research suggests that the branch coupling in a single channel can influence failure areas [10]. This is because the more branches are coupled, the more inputs are connected by the same data-flow. Therefore, a fault contained in such a data-flow can cause more inputs to manifest as failure points. Our experiment investigates the use of branch coupling. To introduce a clear change, we purposely attempt to design a channel-pair with lower diversity by increasing the branch coupling. Therefore, in this experiment, we select a channel, re-design its branch coupling, and observe the failure behaviour of the new channel-pair. A piece of source code of a module named "analyse_values.c" from the C channel of the protection system [12] was selected and re-written. Part of the calculation of "combined trips" was changed to introduce more data-flow branches. The correctness of the new version was verified and validated by back-to-back testing with the original version of the channel. Two version pairs were used for comparison: one is the original C-C pair, the second is the Cnew-Cnew pair. The same fault patterns as in 3.3 were used for the injection test on the Cnew-Cnew pair, which produced the results shown in Figure 3.5 for comparison with Figure 3.3 for the C-C pair.
Fig. 3.5. Div of Cnew-Cnew pair to DFP set 1
A general comparison is given in Figure 3.6. We compare the behaviours of the C-C pair and the Cnew-Cnew pair. Two fault sets (F1, F2) are used on the C-C pair and a third fault set (F3) on the Cnew-Cnew pair. In all cases the 'same type' of fault is used (same location for inserting faults), but the magnitude of perturbation varied. The fault sets are chosen so that, for the single version failure probability: APS(F1, C-C) > APS(F3, Cnew-Cnew) > APS(F2, C-C). The fact that the Div of the Cnew-Cnew pair was lower than that of the C-C pair, irrespective of the APS values, shows that we have succeeded in controlling diversity using design. Here APS denotes the average failure probability of a single version.
Fig. 3.6. Assessment of new channel-pair
4  Conclusion
This paper applies a fault injection approach to see how the measured diversity of multi-version software is affected by the code structure in version pairs and by fault types. The approach uses mechanistic failure models based on the concept of input space testing/searching, linked to the introduction of representative faults by the methods of data-flow and constant perturbation. The techniques have been demonstrated on industrial software, representative of two 'diverse' software channels. This approach allows quantification of the degree of diversity between the two versions, and it allows the design and programming factors that influence diverse failure behaviour to be studied in a quantitative way. From analyses of these experiments, we can derive the following initial conclusions:
•	Under all four injected fault patterns, Div was found to be lower than the diversity estimate produced if version failure was assumed to be independent;
•	Even when they contain the same fault, versions can possess different failure domains in the input space, so design for diversity using software structure is possible;
•	One way to achieve failure diversity is by ensuring different faults in the versions, as illustrated by testing the same versions (C-C, Ada-Ada) with different fault distributions;
•	Div measurement can be used to assess different design strategies for software diversity.
These experimental results show stable trends in their sensitivity to different test environments, considering different diversity strategies, different fault patterns, different fault profiles, etc. Therefore, in general, this approach can be used to assess the actual diversity, rather than assuming independence, and thus establish the effectiveness of the various factors (e.g. different version designs and combinations) in improving diversity. Further research will consider how to extend the experimental scale, e.g. to more complex fault conditions and to lower APS, and will use such experiments to check theoretical models and common beliefs as a means of defining effective strategies for achieving diversity.
Acknowledgements

The work presented in this paper comprises aspects of a study (DISPO) performed as part of the UK Nuclear Safety Research programme, funded and controlled by the Industry Management Committee, together with elements from the SSRC Generic Research Programme funded by British Energy, National Air Traffic Services, Lloyd's Register, and the Health and Safety Executive.
References

1. Avizienis, A., Kelly, J. P. J.: Fault tolerance by design diversity: concepts and experiments. IEEE Computer, Aug. 1984, pp. 67-80
2. Eckhardt, D., Lee, L.: "A theoretical basis for the analysis of multiversion software subject to coincident errors", IEEE Trans. Software Eng., Vol. SE-11, 1985
3. Littlewood, B., Miller, D.: "A conceptual model of multi-version software", Proc. of FTCS-17, IEEE, 1987
4. Arlat, J. et al.: Fault injection for dependability validation - a methodology and some applications. IEEE Trans. Software Eng., Vol. 16, No. 2, Feb. 1990, pp. 166-182
5. Chen, L., Napier, J., May, J., Hughes, G.: Testing the diversity of multi-version software using fault injection. Procs of Advances in Safety and Reliability, SARSS (1999) 13.1-13.10
6. Chen, L., May, J., Hughes, G.: A Constant Perturbation Method for Evaluation of Structural Diversity in Multiversion Software. Lecture Notes in Computer Science 1943: Computer Safety, Reliability and Security, Floor Koornneef & Meine van der Meulen (Eds.), Springer, Oct. 2000
7. Kumar, V., Kanal, L.N.: "A General Branch and Bound Formulation for Understanding And/Or Tree Search Procedures", Artificial Intelligence, 21, pp. 179-198, 1983
8. Voas, J. M., McGraw, G.: Software Fault Injection: Inoculating Programs Against Errors. Wiley Computer Publishing, 1998
9. Voas, J. M.: A dynamic failure model for performing propagation and infection analysis on computer programs. PhD Thesis, College of William and Mary, Williamsburg, VA, USA, 1990
10. Murill, B.W.: Error flow in computer program. PhD Thesis, College of William and Mary, Williamsburg, VA, USA, 1991
11. Michael, C.C., Jones, R.C.: On the uniformity of error propagation in software. Technical Report RSTR-96-003-4, RST Corporation, USA
12. Quirk, W.J., Wall, D.N.: "Customer Functional Requirements for the Protection System to be used as the DARTS Example", DARTS consortium deliverable report DARTS-032-HAR-160190-G, supplied under the HSE programme on Software Reliability, June 1991
13. Mitra, S., Saxena, N.R., McCluskey, E.J.: "A Design Diversity Metric and Reliability Analysis for Redundant Systems", Proc. 1999 Int. Test Conf., pp. 662-671, Atlantic City, NJ, Sep. 28-30, 1999
14. Geoghegan, S.J., Avresky, D.R.: "Method for designing and placing check sets based on control flow analysis of programs", Proceedings of the International Symposium on Software Reliability Engineering, ISSRE, pp. 256-265, 1996
15. Bishop, P.G.: The variation of software survival time for different operational input profiles (or why you can wait a long time for a big bug to fail). Proc. 23rd IEEE Int. Symp. on Fault-Tolerant Computing (FTCS-23), Toulouse, France, pp. 98-107, 1993
Estimating Residual Faults from Code Coverage

Peter G. Bishop

Adelard and Centre for Software Reliability, City University, Northampton Square, London EC1V 0HB, UK
[email protected], [email protected]

Abstract. Many reliability prediction techniques require an estimate for the number of residual faults. In this paper, a new theory is developed for using test coverage to estimate the number of residual faults. This theory is applied to a specific example with known faults and the results agree well with the theory. The theory is used to justify the use of linear extrapolation to estimate residual faults. It is also shown that it is important to establish the amount of unreachable code in order to make a realistic residual fault estimate.
1  Introduction
Many reliability prediction techniques require an estimate for the number of residual faults [2]. There are a number of different methods of achieving this; one approach is to use the size of the program combined with an estimate for the fault density [3]. The fault density measure might be based on generic values, or past experience, including models of the software development process [1,7]. An interesting alternative approach was suggested by Malaiya, Denton and Li [4,5] who showed that the growth in the number of faults detected was almost linearly correlated with the growth in coverage. This was illustrated by an analysis of the coverage data from an earlier experiment performed on the PREPRO program by Pasquini, Crespo and Matrella [6]. This paper develops a general theory for relating code coverage to detected faults that differs from the one developed in [4]. The results of applying this model are presented, and used to justify a simple linear extrapolation method for estimating residual faults.
2  Coverage Growth Theory
We have developed a coverage growth model that uses different modelling assumptions from those in [4]. The assumptions in our model are that:
•	there is a fixed execution rate distribution for the segments of code in the program (for a given input profile)
•	faults are evenly distributed over the executable code (regardless of execution rate)
•	there is a fixed probability of failure per execution of the faulty code, f
The full theory is presented in the Appendix, but the general result is that:
M(t) = N0 ⋅ C(f⋅t)                                                    (1)
where the following definitions are used:
N0	initial number of faults
N(t)	residual faults at time t
M(t)	detected faults at time t, i.e. N0 − N(t)
C(f⋅t)	fraction of covered code at time f⋅t
f	probability of failure per execution of the faulty code
Note that this theory should work with any coverage measure; a fine-grained measure like MCDC coverage should have a higher value of f than a coarser measure like statement coverage. Hence a measure with a larger f value will detect the same number of faults at a lower coverage value. With this theory, the increase in detected faults M is always linear with increase in coverage C if f = 1. This is true even if the coverage growth curve against time is nonlinear (e.g. an exponential rise curve) because the fault detection curve will mirror the coverage growth curve; as a result, a linear relationship is maintained between M and C. If f < 1, a non-linear coverage growth curve will lead to a non-linear relationship between M and C. In the Appendix one particular growth model is analysed where the coverage changes as some inverse power of the number of tests T, i.e. where:
1 − C(t) = 1 / (1 + k⋅T^p)                                            (2)
Note that 1 − C(t) is equivalent to the proportion of uncovered code, U(t); this was used in the Appendix as it was mathematically more convenient. With this growth curve, the Appendix shows that the relationship between detected faults M and coverage C is:
M / N0 = 1 − (1 − C) / (f^p + (1 − C)(1 − f^p))                       (3)
The normalised growth curves of detected faults M/N0 versus covered code C from equation (3) are shown in the figure below for different values of f^p. Analysis of equation (3) shows that the slope approximates to f^p at U = 1, while the final slope is 1/f^p at U = 0, so if, for example, f^p = 0.1 then the initial slope is 0.1 and the final slope is 10. This would mean that the relationship between N and U is far from linear (i.e. the initial and final slopes differ by a factor of 100). If the value of p is small (i.e. growth in coverage against tests has a "long tail"), this reduces the effect of having a low value for f, as is illustrated in the table below.
Fig. 1. Normalised graph of faults detected versus coverage (inverse power coverage growth)
Table 1. Example values of f^p

  f      p=1     p=0.5    p=0.1
  1.0    1.0     1.0      1.0
  0.5    0.5     0.71     0.93
  0.1    0.1     0.42     0.79
This suggests that non-linearity can be reduced and hence permit more accurate predictions of the number of residual faults. To evaluate the theory, the assumptions and its predictions in more detail, we applied the model to the PREPRO program.
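As a quick numerical check of this behaviour, the sketch below (added for illustration; it is not part of the original paper) evaluates equation (3) and the limiting slopes for the f^p values used in Fig. 1.

def detected_fraction(C, fp):
    # Equation (3): M/N0 = 1 - U / (f**p + U*(1 - f**p)), with U = 1 - C
    U = 1.0 - C
    return 1.0 - U / (fp + U * (1.0 - fp))

for fp in (1.0, 0.5, 0.2, 0.1):                  # the f**p values plotted in Fig. 1
    eps = 1e-6
    slope_start = (detected_fraction(eps, fp) - detected_fraction(0.0, fp)) / eps
    slope_end = (detected_fraction(1.0, fp) - detected_fraction(1.0 - eps, fp)) / eps
    print(fp, round(slope_start, 2), round(slope_end, 2))   # initial slope ~ f**p, final slope ~ 1/f**p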
3  Evaluation of the Theory on the PREPRO Example
PREPRO is an off-line program written in C that was developed for the European Space Agency. It computes parameters for an antenna array. It processes an input file containing a specification for the antenna. The antenna description is parsed by the program and, if the specification is valid, a set of antenna parameters computed and sent to the standard output. The program has to detect violations of the specification syntax and invalid numerical values for the antenna. This program is quite complex, containing 7555 executable lines of code. The original experiment did not provide the data necessary to evaluate our model so we obtained a copy of the program and supporting test harness from the original
experimenters [6], so that additional experiments could be performed, namely the measurement of:
•	the growth in coverage of executable lines, C(t)
•	the mean growth in detected faults, M(t)
•	the probability of failure per execution of a faulty line, f
3.1  Measurement of Growth in Line Coverage
In our proposed model, we assumed that there was an equal chance of a fault being present in any executable line of code. We therefore needed to measure the growth in line coverage against tests (as this was not measured in the original experiment). The PREPRO software was instrumented using the Solaris statement coverage tool tcov, and statement coverage was measured after different numbers of tests had been performed. The growth in line coverage against time is shown below.
Fig. 2. Coverage growth for PREPRO (line coverage)
It can be seen that around 20% (1563 out of 7555) of executable lines are uncovered even after a large number of tests. The tcov tool generates an annotated listing of the source showing the number of times each line has been executed. We analysed the entry conditions to the blocks that were not covered during the tests. The results are shown in the table below.

Table 2. Analysis of uncovered code blocks

  Entry condition to block                                           Number of blocks   Number of lines
  Uncallable ("dangling" function)                                          1                  6
  Internal error (e.g. insufficient space in internal table
    or string, or malloc memory allocation problem)                        47                248
  Unused constructs, detection of invalid syntax                          231               1309
  Total                                                                   279               1563
The first type of uncovered code is definitely unreachable. The second is probably unreachable because the code detecting internal errors cannot be activated as this would require changes to the code to reduce table sizes, field sizes, etc. The final class of uncovered code is potentially reachable given an alternative test harness that covered the description syntax more fully and also breaks the syntax rules. The “asymptote” observed in Fig.3 of 1563 uncovered lines is assumed to represent unreachable code under the test profile generated by the harness. This stable value was achieved after 10 000 tests and was unchanged after 100 000 tests. If we just consider the 5992 lines that are likely to be reachable with the test profile we get the following relationship between uncovered code and the number of tests.
Fig. 3. Uncovered lines (excluding unreachable code) vs. tests (log-log graph)
This linear relationship on the log-log graph suggests that the inverse power law model of coverage growth (equation 2) can be applied to the PREPRO example. From the slope of the line, it is estimated that the power law value for coverage growth in PREPRO is p = 0.4.
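A minimal sketch of how p could be estimated from such a log-log relationship is shown below; this is an illustration under stated assumptions rather than the authors' procedure, and the data points are invented. For large T, U(T) = 1/(1 + kT^p) behaves approximately like (1/k)T^(-p), so the slope of a least-squares line through (log T, log U) gives -p.

import math

def estimate_p(tests, uncovered_fraction):
    xs = [math.log(t) for t in tests]
    ys = [math.log(u) for u in uncovered_fraction]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return -slope

# Hypothetical (tests, uncovered fraction of reachable lines) pairs:
print(estimate_p([100, 1000, 10000, 100000], [0.50, 0.20, 0.08, 0.032]))   # approximately 0.4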
3.2  Measurement of the Mean Growth in Detected Faults
The fault detection times will vary with the specific test values chosen. To establish the mean growth in detected faults, we measured the failure rate of each fault inserted individually into PREPRO, using a test harness where the outputs of the “bugged” version were compared against the final version. This led to an estimate for the failure probability per test, λi of each fault under the given test profile. These probabilities were used to calculate the mean number of faults detected after a given number of tests using the following equation:
M(t) = Σi [1 − exp(−λi⋅t)]                                            (4)
where t is the number of tests. This equation implies that several faults could potentially be detected per test—especially during the early test stages when the faults detected have failure probabilities close to unity. In practice, several successive tests might be needed to remove a set of high probability faults. This probably happened in the original PREPRO experiment where 9 different faults caused observable failures in the first 9 tests. We observed that 4 of the 33 faults documented within the code did not appear to result in any differences compared to the “oracle”. These faults were pointer-related and for the given computer, operating system and compiler, the faulty assignments might not have had an effect (e.g. if temporary variables are automatically set to zero, this may be the desired initial value, or if pointers are null, assignment does not overwrite any program data). The mean growth in detected faults, M(t), is shown below.
Fig. 4. Mean growth in detected faults versus tests
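For illustration (this code is not from the paper, and the λi values are invented), equation (4) can be evaluated directly from a set of per-test failure probabilities:

import math

def mean_detected(lambdas, t):
    # Equation (4): expected number of faults detected after t tests
    return sum(1.0 - math.exp(-lam * t) for lam in lambdas)

lambdas = [0.5, 0.1, 0.01, 0.001, 0.0001]        # hypothetical per-test failure probabilities
for t in (1, 10, 100, 1000, 10000):
    print(t, round(mean_detected(lambdas, t), 2))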
3.3  Measurement of Segment Failure Probability f
In the theory it is assumed that f is a constant. To evaluate this we measured the number of failures for each of the known faults (activated individually) under extended testing. Using the statement coverage tool, tcov, we also measured the associated number of executions of the faulty line. Taking the ratio of these two
values we computed the failure probability per execution of the line, f. It was found that f was not constant for all faults; the distribution is shown in the figure below.
Fig. 5. Distribution of segment failure probabilities
There is clearly a wide distribution of values of f, so to apply the theory we should know the failure rates of individual faults (or the likely distribution of f). However, we know that the range of values for f^p is more restricted than that of f. We can also take the geometric mean of the individual f values to ensure that all values are given equal weight, i.e. fmean = (Π fi)^(1/Nf), where Nf is the number of terms in the product. The geometric mean of the values was found to be fmean = 0.0475.
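A small sketch of this geometric mean calculation (illustrative only; the f values below are invented) is:

import math

def geometric_mean(values):
    # fmean = (product of f_i) ** (1/Nf), computed via logs for numerical stability
    return math.exp(sum(math.log(v) for v in values) / len(values))

f_values = [0.9, 0.2, 0.05, 0.01, 0.002]          # hypothetical per-execution failure probabilities
f_mean = geometric_mean(f_values)
print(round(f_mean, 4), round(f_mean ** 0.4, 3))  # f_mean, and f_mean**p for p = 0.4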
4  Comparison of the Theory with Actual Faults Detected
The estimated fault detection curve, M(t), based on known fault failure probabilities, can be compared with the growth predicted using equation (3) with the values of f and p derived earlier. The earlier analysis gave the following model parameters: f = 0.0475 and p = 0.40, so the value for f^p is f^p = 0.0475^0.4 = 0.295. The comparison of the actual and predicted growth curves is shown in Figure 6 below. Note that the theoretical curve is normalised so that 5992 lines covered is viewed as equivalent to C = 1 (as this is the asymptotic value under the test conditions).
Fig. 6. Faults detected versus coverage (theory and actual faults)
It can be seen that the theory prediction and mean number detected (based on the failure rates of actual faults) are in reasonable agreement.
5  Estimate of Residual Faults
The coverage data obtained for PREPRO are shown below:

Table 3. Coverage achieved and faults detected

  Total executable lines         7555
  Covered after 100 000 tests    5992
  Unreachable                     254
  Potentially reachable          1309
  Faults detected (M)              29
Taking account of unreachable code, the final fraction of uncovered code is: C = 5992 / (7555-254) = 0.785 If we used equation (3) and took C=0.785 and f p = 0.295, we would obtain a prediction that the fraction of faults detected is 0.505. However this would be a
misuse of the model, as the curve in figure 6 is normalised so that 5992 lines is regarded as equivalent to C=1. Using the model to extrapolate backwards from C=1, only one fault would be undetected if the coverage was 0.99 (5932 lines covered out of 5992). As this level of coverage was achieved in less than 10 000 tests and 100 000 tests were performed in total, it is reasonable to assume that a high proportion of the faults have been detected for the 5992 lines covered. The theory shows that there can be a non-linear relationship between detected faults and coverage, even if there is an equal probability of a fault in each statement. Indeed, given the measured values of f and p, good agreement is achieved using an assumption of constant fault density. Given a constant fault density and a high probability that faults are detected in the covered code, we estimate the density of faults in the covered code to be: 29/5992 =0.0048 faults/line and hence the number of faults in uncovered code is: N = 1563 ⋅ 0.0048 =7.5 or if we exclude code that is likely to be always unreachable: N = 6.3 So from the proportion of uncovered code, the estimate for residual faults lies between 6 and 8. The known number of undetected faults for PREPRO is 5, so the prediction is close to the known value.
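A worked check of this estimate (added purely for illustration) using the figures quoted above:

covered_lines = 5992
faults_detected = 29
uncovered_total = 1563           # all uncovered lines
uncovered_reachable = 1309       # excluding code judged always unreachable

density = faults_detected / covered_lines        # about 0.0048 faults per covered line
print(round(density, 4))                         # 0.0048
print(round(uncovered_total * density, 1))       # about 7.6 (7.5 in the text, which rounds the density first)
print(round(uncovered_reachable * density, 1))   # about 6.3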
6  Discussion
The full coverage growth theory is quite complex, but it does explain the shape of the fault-coverage growth curves observed in [4,5] and in our evaluation experiment. Clearly there are difficulties in applying the full theory. We could obtain a coverage versus time curve directly (rather than using p), but there is no easy way of obtaining f. In addition, we have a distribution of f values rather than a constant, which is even more difficult to determine. However, our analysis indicates that the use of the full model is not normally necessary. While we have only looked at one type of coverage growth curve, the theory suggests that any slow growth curve would reveal a very high proportion of the faults in the code covered by the tests. It could therefore be argued that the number of residual faults is simply proportional to the fraction of uncovered code (given slow coverage growth). This estimate is affected by unreachable code (e.g. defensive code). To avoid overestimates of N, the code needs to be analysed to determine how many lines of code are actually reachable. In the case of PREPRO the effect is minor; only one fewer fault is predicted when unreachable code is excluded, so it may be simpler to conservatively assume that all lines of code are reachable.
Ideally, the fault prediction we have made should be assessed by testing the uncovered code. This would require the involvement of the original authors, which is impractical as development halted nearly a decade ago. Nevertheless, the fault density values are consistent with those observed in other software, and deriving the expected faults from a fault density estimate is a well-known approach [1,3,7]. The only difference here is that we are using coverage information to determine density for covered code, rather than treating the program as a “black box”. Clearly, there are limitations in this method (such as unreachable code), which might result in an over-estimate of N, but this would still be a useful input to reliability prediction methods [2].
7  Conclusions

1. A new theory has been presented that relates coverage growth to residual faults.
2. Application of the theory to a specific example suggests that simple linear extrapolation of code coverage can be used to estimate the number of residual faults.
3. The estimate of residual faults can be reduced if some code is unreachable, but it is conservative to assume that all executable code is reachable.
4. The method needs to be evaluated on realistic software to establish what level of accuracy can be achieved in practice.
Acknowledgements

This work was funded by the UK (Nuclear) Industrial Management Committee (IMC) Nuclear Safety Research Programme under British Energy Generation UK contract PP/114163/HN with contributions from British Nuclear Fuels plc, British Energy Ltd and British Energy Group UK Ltd. The paper reporting the research was produced under the EPSRC research interdisciplinary programme on dependability (DIRC).
References

1. R.E. Bloomfield, A.S.L. Guerra, "Process Modelling to Support Dependability Arguments", DSN 2002, Washington, DC, 23-26 June, 2002
2. W. Farr, Handbook of Software Reliability Engineering, M. R. Lyu, Editor, chapter: Software Reliability Modeling Survey, pages 71-117, McGraw-Hill, New York, NY, 1996
3. M. Lipow, "Number of Faults per Line of Code", IEEE Trans. on Software Engineering, SE-8(4):437-439, July 1982
4. Y. K. Malaiya and J. Denton, "Estimating the number of residual defects", HASE'98, 3rd IEEE Int'l High-Assurance Systems Engineering Symposium, Maryland, USA, November 13-14, 1998
5. Y.K. Malaiya, J. Denton and M.N. Li, "Estimating the number of defects: a simple and intuitive approach", Proceedings of the Ninth International Symposium on Software Reliability Engineering, Paderborn, Germany, November 4-7, 1998, pp. 307-315
6. A. Pasquini, A. N. Crespo and P. Matrella, "Sensitivity of reliability growth models to operational profile errors", IEEE Trans. Reliability, vol. 45, no. 4, pp. 531-540, Dec. 1996
7. K. Yasuda, "Software Quality Assurance Activities in Japan", Japanese Perspectives in Software Engineering, pp. 187-205, Addison-Wesley, 1989
Appendix: Coverage Growth Theory

In this analysis of coverage growth theory, the following definitions will be used:
N0	initial number of faults
N(t)	residual faults at time t
M(t)	detected faults at time t, i.e. N0 − N(t)
C(t)	fraction of covered code at time t
U(t)	fraction of uncovered code, i.e. 1 − C(t)
Q	execution rate of a line of executable code
f	probability of failure given execution of fault in line of code
We assume the faults are evenly distributed over all lines of code. If we also assume there is a constant probability of failure f per execution of an erroneous line, then the mean number of faults remaining at time t is:

N(t) = N0 ∫₀^∞ pdf(Q) ⋅ e^(−fQt) dQ                                   (5)
Similarly, the uncovered lines remaining at time t are:

U(t) = ∫₀^∞ pdf(Q) ⋅ e^(−Qt) dQ                                       (6)
The equations are very similar, apart from the exponential term. Hence we can say by simple substitution
N(t) = N0 ⋅ U(f⋅t)                                                    (7)
Substituting N(t) = N0 – M(t), and U(t) = 1 – C(t) and rearranging we obtain:
M(t) = N0 ⋅ C(f⋅t)                                                    (8)
So when f = 1, we get linear growth of faults detected, M, versus coverage, C, regardless of the distribution of execution rates, pdf(Q). For the case where f < 1,
Fig. 13. An optimal safe trajectory. The total time for aircraft a was 3.5 units of time. Note that at that time, the aircraft are immediately before the conflict boundary
Appendix: Glossary

ACC: Air Traffic Control Centre. An area control centre established to provide air traffic control service.
APP: Approach Control. A terminal control centre established to provide air traffic control service.
ATM: Air Traffic Management.
CNS: Communication, Navigation and Surveillance.
Flight level: Band of altitude of the aircraft. The international standard assigns 100 ft for each level. So, an aircraft flying at level 150 has altitudes between 15000 and 15099 ft.
Location: In a hybrid automaton, it represents a discrete state. Every transition between locations is done by edges, graphically represented as arrows in the automaton.
Navigation chart: Pre-defined sequence of reference points which the aircraft must pass over. A chart determines the route that an aircraft must follow.
Heading: Angle of direction of the aircraft in the horizontal plane, measured from the North.
Route: Navigation tube based on a navigation chart.
Trajectory: Path performed by the aircraft. We assume in this model that the trajectories are the union of linear segments, whose vertices belong to the transition windows.
Transition window: A transition window Wi is defined as a vertical rectangle, with defined width and height. The transition windows work as boundaries between locations. In other words, each location of the automaton that is not final represents a set like convex(Wi ∪ Wi + 1), in space. A safe trajectory must go through every transition window, at a given sequence.
Wake turbulence: a trail of small horizontal tornadoes left behind the aircraft due to its movement.
Model-Based On-Line Monitoring Using a State Sensitive Fault Propagation Model

Yiannis Papadopoulos

Department of Computer Science, University of Hull, Hull, HU6 7RX, UK
[email protected]

Abstract. As the safety analyses of critical systems typically cease, or reduce in their utility, after system certification, useful knowledge about the behaviour of the system in conditions of failure remains unused in the operational phase of the system lifecycle. In this paper, we show that this knowledge could be usefully exploited in the context of an on-line, hazard-directed monitoring scheme in which a suitable specification derived from design models and safety analyses forms a reference monitoring model. As a practical application of this approach, we propose a safety monitor that can operate on such models to support the on-line detection, diagnosis and control of hazardous failures in real-time. We discuss the development of the monitoring model and report on a case study that we performed on a laboratory model of an aircraft fuel system.
1  Introduction
The idea of using a model derived from safety assessment as a basis for on-line system monitoring dates back to the late seventies when STAR, an experimental monitoring system, used Cause Consequence Diagrams (CCDs) to monitor the expected propagation of disturbances in a nuclear reactor [1]. CCDs are composed of various interconnected cause and consequence trees, two models which are fundamentally close to the fault tree and event tree, i.e. the causal analyses that typically form the spine of a plant Probabilistic Risk Assessment. In STAR, such CCDs were used as templates for systematic monitoring, detection of abnormal events and diagnosis of the root causes of failures through extrapolation of anomalous event chains. In the early eighties, another system, the EPRI-DAS monitor, used a similar model, the Cause Consequence Tree [2], to perform early matching of anomalous event patterns that could signify the early stages of hazardous disturbances propagating into the system. Once more the monitoring model was a modified fault tree in which delays have been added to determine the precise chronological relationships between events. More recently, the experimental risk monitors developed for the Surry and San Onofre nuclear power plants in the USA [3] also employed a fault-propagation model
to assist real-time monitoring and prediction of plant risks. This is a master fault tree synthesised from the plant safety assessment. Recently, research has also shown that it is possible to synthesise the use of such fault propagation models with other contemporary developments in the primary detection of failures. Varde et al [4], for example, describe a hybrid system in which artificial neural networks perform the initial primary detection of anomalies and transient conditions, while a rule-based monitor then performs diagnoses using a knowledge base that has been derived from the system fault trees. The knowledge base is created as fault trees are translated into a set of production rules, each of which records the logical relationship between two successive levels of the tree. One important commonality among the above prototypes is that they all use causal models of expected relationships between failures and their effects which are typically similar to a fault tree. Experience from the use of such models, however, has shown that substantial problems can be caused in monitoring by incomplete, inaccurate, nonrepresentative and unreliable models [5]. Indeed, difficulties in the development of representative models have prevented so far the widespread employment of such models. One substantial such difficulty and often the cause of omissions and inaccuracies in the model is the lack of sufficient mechanisms for representing the effects that changes in the behaviour or structure of complex dynamic systems have in the fault propagation in those systems. Such changes, often caused by complex mode and state transitions of the system, can in practice confuse a monitoring system and contribute to false, irrelevant and misleading alarms [6]. To illustrate this problem, let us consider for example the case of a typical aircraft fuel system. In this system, there are usually a number of alternative ways of supplying the fuel to the aircraft engines. During operation, the system switches between different states in which it uses different configurations of fuel resources, pumps and pipes to maintain the necessary fuel flow. Initially, for example, the system may supply the fuel from the wing tanks and when these resources are depleted it may continue providing the fuel from a central tank. The system also incorporates complex control functions such as fuel sequencing and transfer among a number of separated tanks to maintain an optimal centre of gravity. If we attempt to analyse such a system with a technique like fault tree analysis we will soon be faced with the difficulties caused by the dynamic behaviour of the system. Indeed, as components are activated, deactivated or perform alternative fuel transfer functions in different operational states, the set of failure modes that may have adverse effects on the system, as well as those effects, change. Inevitably, the causes and propagation of failure in one state of the system are different from those in other states. Representing those state dependencies is obviously crucial in developing accurate, representative and therefore reliable monitoring models. But how can such complex state dependencies be taken into account during the fault tree analysis, and how can they be represented in the structure of fault trees? In this paper, we draw from earlier work on monitoring using fault propagation models that are similar to fault trees. 
Indeed, we propose a system in which a set of fault trees derived from safety assessment is used as a reference model for on-line monitoring. However, to address the problem of complex state dependencies, we complement those fault trees with a dynamic model of system behaviour that can capture the behavioural transformations that occur in complex systems as a hierarchy
of state machines. This model acts as a frame that allows fault trees to be placed and then interpreted correctly in the context of the dynamic operation of a system. In section two we discuss the form of the monitoring model and its development process. In section three we discuss the algorithms required to operate on such models in order to deliver monitoring and diagnostic functions in real time. In section four, we outline a case study that we performed on a laboratory model of an aircraft fuel system and finally in section five we draw conclusions and outline further work.
2  Modelling
The general form of the proposed monitoring model is illustrated in Fig.1. The spine of the model is a hierarchy of abstract state-machines which is developed around the structural decomposition of the system. The structural part of that model records the physical or logical (in the case of software) decomposition of the system into composite and basic blocks (left hand side of Fig.1). This part of the model shows the topology of the system in terms of components and connections, and can be derived from architectural diagrams of varying complexity that may include abstract engineering schematics, piping/instrumentation diagrams and detailed data flow diagrams. The dynamic part of the model is a hierarchy of state machines that determine the behaviour of the system and its subsystems (right hand side of Fig.1). This part of the model can be developed in a popular variant of hierarchical state automata such as state-charts. The model identifies normal states of the system and its subsystems. Beyond normal behaviour, though, the model also identifies transitions to deviant or failed states each representing a loss or a deviation from the normal functions delivered by the system. A HAZOP (Hazard and Operability) style functional hazard analysis technique is used to systematically identify and record such abnormal functional states. Following the application of this technique, the lower layers of the behavioural model identify transitions of low-level subsystems to abnormal functional states, in other words states where those subsystems deviate from their expected normal behaviour. As we progressively move from the leaf nodes towards the higher layers of the behavioural model, the model shows how logical combinations or sequences of lower-level subsystem failures (transitions to abnormal states) propagate upwards and cause functional failure transitions at higher levels of the design. The model also records potential recovery measures at different levels of the design, and the conditions that verify the success, omission or failure of such measures. As Fig.1. illustrates, some of the failure transitions at the low-levels of the design represent the top events of fault trees which record the causes and propagation of failure through the architectures of the corresponding subsystems. Classical fault tree analysis techniques can be used to derive those fault trees. However, a methodology for the semi-automatic construction of such fault trees has also been presented in SafeComp’99 and is elaborated in [7]. According to that methodology (see also [8]) the fault trees which are attached to the state-machines can be semi-mechanically synthesised by traversing the structural model of the system, and by establishing how the local effects of failure (specified by analysts at component level) propagate
through connections in the model and cause failure transitions in the various states of system. One difference between the proposed model and a master fault tree is that the proposed model records sequences of failures taking into account the chronological order of the sequence. The model also shows the gradual transformation of lowerlevel failures into subsystem failures and system malfunctions. Thus, the model not only situates the propagation of failure in the evolving context of the system operation, but also provides an increasingly more abstract and simplified representation of failure in the system. This type of abstraction could help to tackle the problem of state explosion in the representation of large or complex systems. Moreover, in real time, a monitoring model that embodies such a layered view of failure and recovery could assist the translation of low level failures into system malfunctions and the provision of higher level functional alarms where appropriate. The development of the proposed model relies on the application of established and widely used design and safety analysis techniques and notations, such as flow diagrams, state-charts and fault trees. Thus, there would be little value in expanding here on the application of those techniques which are generally well understood. Also, technical details about how to produce and integrate the various models into the overall specification of Fig.1 can be found in [9]. In the reminder of this section, we focus on one aspect of modelling which is particularly relevant to the monitoring problem. We discuss how the model, effectively a set of diagrams derived from the design and safety analysis can be transformed into a more precise specification that could be used for the on-line detection and control of failures in real-time. In their raw form, the designs and analyses that compose the proposed model (i.e. flow diagrams, state-charts and fault trees) would typically contain qualitative descriptions of events and conditions. However, the on-line detection of those events and conditions would clearly require more precise descriptions. One solution here would be to enhance the model with appropriate expressions that an automated system could evaluate in real-time. Since many of the events and conditions in the model represent symptoms of failures, though, it would be useful to consider first the mechanisms by which system failures are manifested on controlled processes. Structural Model Architectural diagrams of the system and its subsystems
Fig. 1. General form of monitoring model
In a simple case a failure would cause a violation of a threshold in a single parameter of the system. In that case, the symptom of failure could be described more formally with a single constraint. The deviation “excessive flow”, for example, could be described with an expression of the type “flow>high”, where flow is a flow sensor measurement and high is the maximum allowable value of flow. In general, though, failures can cause alternative or multiple symptoms, and therefore an appropriate syntax for monitoring expressions should allow logical combinations of constraints, where each constraint could describe, for example, one of the symptoms of failure on the monitored process. One limitation of this scheme is that, in practice, constraints may fire in normal operating conditions and in the absence of failures. Indeed, in such conditions, parameter values often exhibit a probabilistic distribution. That is, there is a non-zero probability of violation of normal operating limits in the absence of process failures, which would cause a monitor to raise false alarms. Basic probability theory, however, tells us that the latter could be minimised if there was a mechanism by which we could instruct the monitor to raise an alarm only if abnormal measurements persisted over a period of time and a number of successive readings. The mechanism that we propose for filtering spurious abnormal measurements is, precisely, a monitoring expression that fires only if it remains true over a period of time. To indicate that a certain expression (represented as a logical combination of constraints) is bound by the above semantics we use the syntax T(expression, ∆t) where ∆t is the period in seconds for which expression has to remain true in order for T(expression, ∆t) to fire. ∆t is an interval which always extends from time t-∆t in the past to the present time t. The above mechanism can also be used for preventing false alarms arising from normal transient behaviour. Consider for example, that we wish to monitor a parameter in closed loop control and raise an alarm every time a discrepancy is detected between the current set-point and the actual value of the parameter. We know that a step change in the set-point of the loop is followed by a short period of time in which the control algorithm attempts to bring the value of the parameter to a new steady state. Within this period, the value of the parameter deviates from the new set-point value. To avoid false alarms arising in this situation, we could define a monitoring expression that fires only when abnormal measurements persist over a period that exceeds the time within which the control algorithm would normally correct a deviation from the current set-point. We must point out that even if we employ this type of “timed expression”, persistent noise may still cause false alarms in certain situations, when for example the value of the parameter lies close to the thresholds beyond which the expression fires. To prevent such alarms we need to consider the possible presence of noise in the interpretation of the sensor output. A simple way to achieve this is by relaxing the range of normal measurements to tolerate a reasonable level of noise. The expressions that we have discussed so far allow detection of momentary or more persistent deviations of parameters from intended values or ranges of such values. 
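A minimal sketch of how such a timed expression might be evaluated is given below; this implementation is assumed for illustration and is not the monitor described in the paper.

class TimedExpression:
    def __init__(self, constraint, dt):
        self.constraint = constraint     # e.g. lambda sample: sample["flow"] > HIGH
        self.dt = dt                     # persistence period in seconds
        self.true_since = None           # time at which the constraint last became true

    def update(self, sample, now):
        """Call periodically with the latest sensor sample; returns True when the expression fires."""
        if self.constraint(sample):
            if self.true_since is None:
                self.true_since = now
            return (now - self.true_since) >= self.dt
        self.true_since = None           # any false reading resets the persistence timer
        return False

# Hypothetical use: "excessive flow" must persist for 3 seconds before an alarm is raised.
HIGH = 100.0
excessive_flow = TimedExpression(lambda s: s["flow"] > HIGH, dt=3.0)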
Such expressions would be sufficient for detecting anomalous symptoms in systems where the value of parameters is either stable, moves from one steady state to another, or lies within well-defined ranges of normal values. This, however, is certainly not the case in any arbitrary system. Indeed, parameters in controlled
processes are often manipulated in a way that forces their value to follow a particular trend over time. To detect violations of such trends we have also introduced a set of primitives in the model that allow expressions to reference historical values and calculate parameter trends. Such primitives include, for example, a history operator P(∆t) which returns the value of parameter P in time ∆t in the past, as well as more complex differentiation and integrator operators that calculate trends over sequences of historical parameter values. The differentiation operator D(expression,∆t), for example, when applied to an expression over time ∆t, returns the average change in the value of the expression during an interval which extends from time t-∆t in the past to the present time t. Such operators were sufficient to monitor trends in the example fuel system. However, more complex statistical operators may be required for long term monitoring of other control schemes.
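The following sketch (an assumed implementation, offered only for illustration) shows how the history operator P(∆t) and the differentiation operator D(expression, ∆t) could be realised over a ring buffer of timestamped samples.

from collections import deque

class ParameterHistory:
    def __init__(self, max_samples=1000):
        self.samples = deque(maxlen=max_samples)    # (timestamp, value) pairs

    def record(self, t, value):
        self.samples.append((t, value))

    def value_at(self, t, dt):
        """P(dt): the most recent value recorded at or before time t - dt."""
        target = t - dt
        for ts, v in reversed(self.samples):
            if ts <= target:
                return v
        return None                                 # not enough history yet

    def average_change(self, t, dt):
        """D(P, dt): average change of the parameter over the last dt time units."""
        old = self.value_at(t, dt)
        current = self.samples[-1][1] if self.samples else None
        if old is None or current is None:
            return None                             # "unknown" under three-valued logic
        return (current - old) / dt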
3  Monitoring Algorithms
Given that information from the design and safety analysis of a system has been integrated to form the specification of Fig.1, and that this specification has been annotated with monitoring expressions as explained above, then it is possible for an automated monitor to operate on this model to detect, diagnose and control hazardous failures in real-time. Fig.2 (next page) gives an outline of the general position and architecture of such a monitor. The monitor lies between the system and its human operators. It also lies between the system and its monitoring model. Parts of that model could in principle be developed in widely used modelling and analysis tools such as Statemate and Fault Tree Plus. However, for the purposes of this work, we have also developed a prototype tool that provides integrated state modelling and analysis capabilities and provides a self-contained environment for the development of the model. Once developed in this tool, the monitoring model is then exported into a model file. Between that file and the on-line monitor lies a parser that can perform syntactical analysis and regenerate (in the memory of the computer) the data structures that define the monitoring strategies for the given system. The on-line monitor itself incorporates three mechanisms that operate on those data structures to complete the various stages of the safety monitoring process. The first of those mechanisms is an event monitor. The role of this mechanism is to detect the primary symptoms of failure or successful recovery on process parameters. This is accomplished as the monitor continuously (strictly speaking, periodically) evaluates a list of events using real-time sensory data. This list contains all the events that represent transitions from the current state of the system and its subsystems. Such events are not restricted to system malfunctions and can also be assertions of operator errors or the undesired effects of such errors on the monitored process. This list of monitored events changes dynamically as the system and its subsystems move from one state to another, and as such transitions are recognised by the monitor. Historical values of monitored parameters are stored and accessed by the monitor from ring buffers the size of which is determined at initialisation time when the monitor analyses the expressions that reference historical values. A system of threevalue logic [10] that introduces a third “unknown” truth value in addition to “true”
and “false” is also employed to enable evaluation of expressions in the context of incomplete information. In practice, this gives the monitor some ability to reason in the presence of (detected) sensor failures and to produce early alarms on the basis of incomplete process data. One limitation of the above scheme is that the conditions detected by the event monitor generally reflect the symptoms of failures in the system, and do not necessarily point out underlying causes of failure. If we assume that the monitor operates on a complex process, however, then we would expect that some of those symptoms would require further diagnosis before appropriate corrective action can be taken. The second mechanism of the safety monitor is precisely a diagnostic engine. In each cycle of the monitor, the engine locates the root failures of detected anomalous symptoms by selectively traversing branches of the fault trees in which the initial symptoms appear as top events. The diagnostic algorithm combines heuristic search and blind depth-first traversal strategies. As the tree is parsed from top to bottom, the engine always checks the first child of the current node.
Fig. 2. The position and architecture of the safety monitor
If there is an expression that can be evaluated, a heuristic search is initiated among the siblings of the child node to decide which child will become the current node, i.e. in which branch the diagnosis will proceed. If there is no such expression, a blind depth first search strategy is initiated with an objective to examine all branches until expressions are found in lower levels or a root failure is diagnosed. At the end of a successful diagnosis, the algorithm returns a set of root causes (or a single cause if the tree contains only “OR” gates) that satisfies the logical structure of the fault tree. It is perhaps important to point out that the diagnosis proceeds rapidly as the value of nodes is calculated instantly from current or historical process information without the need to initiate monitoring of new trends that would postpone the traversal of the tree. In fact, this method is not only faster but also consistent with the semantics of the fault tree as a model. Indeed, as we move downwards in the tree we always move from effects to causes which of course precede their effects in chronological order. However, since the events that we visit during the traversal form a reversed causal chain, they can only be evaluated correctly on the basis of historical process data. In each cycle of the monitor, detected and diagnosed events are finally handled by an event processor, which examines the impact of those events on the state-charts of the system and its subsystems. By effectively executing those state-charts, the event processor initially infers hazardous transitions caused by those events at low subsystem level. Then, it recursively infers hazardous transitions triggered by combinations of such low-level transitions at higher levels of the system representation. It thus determines the functional effects of failure at different levels, provides high level functional alarms and guidance on corrective measures specified at various levels of abstraction in the model. The event processor also keeps track of the current state of the system and its subsystems, and determines which events should be monitored by the event monitor in each state. The processor can itself take action to restore or minimise the effects of failure, assuming of course that the monitor has control authority and corrective measures have been specified in a way that can be interpreted by an automated system. In the current implementation, simple corrective measures can be specified as assignments of values to parameters that define the state of on-off controllers or the set-point of control loops.
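A simplified sketch of such a traversal is given below; it is an illustrative reconstruction under stated assumptions, not the authors' implementation, and it uses three-valued expression results (True, False, or None for "unknown").

class Node:
    def __init__(self, name, gate="OR", children=None, expression=None):
        self.name = name
        self.gate = gate                   # "OR" or "AND"
        self.children = children or []     # empty list => basic event (root cause)
        self.expression = expression       # callable(history) -> True/False/None, or None

def diagnose(node, history):
    """Return basic events that could explain the detected top event (simplified)."""
    if not node.children:
        return [node.name]                                     # reached a root cause
    verdicts = [(c, c.expression(history) if c.expression else None) for c in node.children]
    # Heuristic: visit children whose expressions fired before those with no verdict;
    # children whose expressions evaluate to False are pruned from the search.
    ordered = [c for c, v in verdicts if v is True] + [c for c, v in verdicts if v is None]
    causes = []
    for child in ordered:
        found = diagnose(child, history)
        causes.extend(found)
        if found and node.gate == "OR":
            break                                              # one branch explains an OR gate
    return causes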
4
Case Study
For the purposes of this work, an experimental monitor that can operate on models conforming to the proposed specification was developed, and this monitoring approach was applied to a laboratory model of an aircraft fuel system. Our analyses of this system were intentionally based on a very basic control scheme with no safety monitoring or fault tolerance functions. This gave us the opportunity to combine the results into a comprehensive model that provides an extensive specification of potential failure detection and control mechanisms that improve the safety of that system. This specification formed the monitoring model, which was then used as a knowledge base for the on-line detection, diagnosis and automatic correction of failures that we injected into the system in a number of monitoring experiments. Such failures included fuel leaks, pipe blockages, valve and pump malfunctions and
corruption of computer control commands that, in reality, could be caused, for example, by electromagnetic interference. In the space provided here, it is only possible to include a brief discussion of the study and to highlight the main conclusions that we have drawn in the light of our experiments. For further information the reader is referred to an extensive technical report on this work [9].

The prototype fuel system is a model of the fuel storage and distribution system of a twin-engine aircraft. The configuration of the system is illustrated in Fig. 3 and represents a hypothetical design in which the fuel resources of the aircraft are symmetrically distributed in seven tanks that lie along the longitudinal and lateral axes of the system. A series of pumps can transfer fuel through the system and towards the engines. The direction and speed of those pumps, and hence of flows in the system, are computer controlled. The figure also shows a number of valves which can be used to block or activate paths and to isolate or provide access to fuel resources. Finally, analogue sensors that measure the speed of pumps, fuel flows and fuel levels provide indications of the controlled parameters in the system.

During normal operation, the central valve VC2 is closed and each engine is fed from a separate tank at a variable rate x, which is defined by the current engine thrust setting. Other fuel flows in the system are controlled so that fuel resources are always symmetrically distributed and the centre of gravity always lies near the centre of the system. This is achieved by a control scheme in which each pump in the system is controlled in a closed loop with one or more local flow meters to achieve a flow that is always proportional to the overall demand x, as illustrated in Fig. 3.
[Figure: schematic of the fuel system. Legend: P - pump and speed sensor, V - valve, F - flow meter, L - level sensor. The seven tanks (front tank, rear tank, left and right wing tanks and the engine-feed tanks) are connected by pumps (PL1, PL3-PL5, PR1, PR3-PR5) and valves (VL1-VL4, VR1-VR4, VC2), and instrumented by flow meters (FL1-FL5, FR1-FR5) and level sensors. The port and starboard engines each draw a flow x, while other flows are held at set-points proportional to x, e.g. x/7, 4x/7, (FL2-FL4)/2 and (FR2-FR4)/2.]

Fig. 3. The fuel system
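As a purely illustrative reading of this control scheme (the paper does not give the control law explicitly, so the mapping of set-points to pumps and the controller gain below are assumptions inferred from the labels in Fig. 3), the proportional set-points and a simple closed-loop pump adjustment might look as follows.

# Hypothetical sketch of the normal-operation control scheme: every pump tracks
# a flow set-point proportional to the overall engine demand x (see Fig. 3).

def flow_setpoints(x: float, FL2: float, FL4: float, FR2: float, FR4: float) -> dict:
    """Set-points inferred from the figure labels; the exact assignment of
    set-points to pumps is an assumption made for the illustration."""
    return {
        "PL1": x,                  # feed to one engine
        "PR1": x,                  # feed to the other engine
        "PL3": x / 7,              # transfers kept proportional to demand
        "PR3": x / 7,
        "PL4": 4 * x / 7,
        "PR4": 4 * x / 7,
        "PL5": (FL2 - FL4) / 2,    # wing-tank balancing flows
        "PR5": (FR2 - FR4) / 2,
    }

def adjust_pump_speed(speed: float, measured_flow: float, setpoint: float,
                      gain: float = 0.5) -> float:
    """One step of a simple proportional controller closing the loop between a
    pump and its local flow meter (the gain is chosen arbitrarily for the sketch)."""
    return speed + gain * (setpoint - measured_flow)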
The first step in the analysis of this system was to develop a structural hierarchy in which the system was first represented as an architecture containing four subsystems (engine feeding subsystem, left and right wings, central subsystem), and each subsystem was then refined into an architectural diagram that defined basic components and flows. A state-chart was then developed for each subsystem to identify its main normal functional states. The engine feeding subsystem, for example, has only one such state, in which it maintains supply to both engines at a variable flow rate x that always equals the current fuel demand. A HAZOP-style functional failure analysis assisted the identification of deviations from that normal state, for example conditions in which the subsystem delivers "no flow", "less flow" or "reverse flow". Such deviations, in turn, pointed out transitions into temporary or permanently failed states, which we recorded in the model of the subsystem.

As part of the modelling process, deviations that formed such transitions were also augmented with appropriate expressions to enable their detection in real-time. The deviation "more flow" in the line towards the port engine, for example, was described with the expression T(FR1>1.03*x,3), which fires only if an abnormal measurement of flow in the line, 3% above the normal demand x, persists for more than 3 sec. The causes of such deviations, such as valve and pump failures, leaks, blockages and omissions or commissions of control signals, were determined in the structure of fault trees that were semi-automatically constructed by traversing the structural model of the system and by using the approach described in [7]. Nodes of those fault trees were also augmented with monitoring expressions, and the trees were then used in real-time for the diagnosis of root causes of failure.

Recovery measures were also specified in the structure of the model to define appropriate responses to specific failure scenarios. If, for example, there was an interruption of flow in the line towards the starboard engine, and this was due to a blockage of valve VL2 (which could be confirmed via fault tree diagnosis), then one way to restore the flow in the line would be to open valve VC2. However, to maintain the equal distribution of fuel among the various tanks, one would also have to redirect flows in the system. By solving a set of simultaneous equations derived from the new topology of the system and the relationship between volume reduction and input/output flows in each tank, it can easily be inferred that the direction of flow between the rear and central tank should be reversed and the set-points of pumps PL3 and PR3 should be changed from {x/7, x/7} to {-6x/7, 8x/7}. The three measures that form this recovery strategy can be described with the following expression, which an automated monitor could interpret and execute in real-time: {VC2:=1 and PL3:=-6x/7 and PR3:=8x/7}. Such expressions were indeed introduced in the model and enabled the monitor to take automatic responses to deviations and root causes of failure that were detected or diagnosed in real-time.

Finally, once we had completed the analysis of the four subsystems, we used the results to synthesise a state-chart for the overall fuel system. In general, failures that are handled in the local scope of a subsystem do not need to be considered in a higher level chart.
Wing subsystems, for example, are obviously able to maintain the supply of fuel and the balance between wing tanks in cases of single valve failures. Since such failures have no effects at system level, they do not need to be considered in the fuel system chart. Generalising this discussion, we could say that
only non-recoverable transitions of subsystems to degraded or failed states need to be considered at a higher level, and this thankfully constrains the size of higher level charts. Indeed, in our case study, the chart of the fuel system incorporated twenty states - approximately as many states as that of the engine feeding subsystem.

Our monitoring experiments generally confirmed the capacity of the monitor to detect the transient and permanent failures that we injected into the system. Using appropriately tuned timed expressions, the monitor was able to judge the transient or permanent nature of the disturbances caused by valve or pump failures, and thus to decide whether to trigger or filter alarms in response. Complex failure conditions like structural leaks were also successfully detected using complex expressions that fired when significant discrepancies were detected between the reduction of level in tanks and the net volume of fuel that had flowed out of the tanks over specified periods of time.

Although the injected failures did not include sensor failures, sensors proved in practice to be the most unreliable element in the system. On many occasions, therefore, we encountered unplanned transient and permanent sensor failures, which gave us the opportunity to examine, to some extent, the response of the event monitor to this class of failures. With the aid of timed expressions the monitor generally responded well to transient sensor failures by filtering transient abnormal measurements. However, in the absence of a generic validation mechanism, permanent sensor failures often misled the monitor into raising false alarms, and into taking unnecessary and sometimes even hazardous action.

In the course of our experiments, we also had the opportunity to examine how the monitor detects and responds to multiple failures (caused by simultaneously injected faults). In general, the detection of multiple failures does not differ significantly from that of single failures. If those failures cause different symptoms on process parameters, the monitor raises a set of alarms that identify those symptoms and takes any recovery measures associated with those events. When a combination of failures manifests itself through common symptoms, however, the event monitor detects the symptoms, but leaves the location of the underlying causal faults and further action to the diagnostic engine.

One issue raised in our experiments is the treatment of dependent faults. Imagine a situation where two seemingly independent faults cause two different symptoms and those symptoms are accompanied by two conflicting remedial procedures. If those symptoms occur simultaneously, the monitor will apply both procedures, and this in turn will cause unpredictable and potentially hazardous effects. Assuming the independence of the two faults, analysts have designed corrective measures to treat each symptom separately. Individually, those procedures are correct, but together they define the wrong response to the dependent failure. That problem, of course, could have been avoided if the monitoring model had anticipated the simultaneous occurrence of the two symptoms and contained an appropriate third procedure for the combined event.

Before we close this discussion on the monitor, we would like to emphasise one property that, we believe, could make this mechanism particularly useful in environments with potentially large volumes of plant data.
As we have seen, the monitor can track down the current functional state of the system and its subsystems, and thus determine the scope in which an event is applicable and should, therefore, be monitored. In practice and in the context of our experiments, this property meant a
significant reduction in the workload on the monitoring system. Beyond helping to minimise the workload of the monitor, this state-sensitive monitoring mechanism has also helped to avoid misinterpretations of the process feedback that often occur in complex and evolving contexts of operation. This, in turn, prevented, we believe, a number of false alarms that might otherwise have been triggered if the monitor had been unable to determine whether and when seemingly abnormal events become normal events and vice versa.
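As an illustration of the two kinds of expressions used throughout the case study (timed detection expressions such as T(FR1>1.03*x,3) and corrective-measure assignments such as {VC2:=1 and PL3:=-6x/7 and PR3:=8x/7}), the following sketch shows one possible way to evaluate them over sampled process data. It is a minimal, hypothetical reading of the notation, not the implementation used in the experiments.

# Hypothetical evaluator for timed monitoring expressions and simple corrective
# measures; the sampling period and the process interface are assumptions.

from typing import Callable, Dict

SAMPLE_PERIOD = 0.5  # seconds between monitor cycles (assumed)

class TimedExpression:
    """T(condition, t): fires only if `condition` holds continuously for t seconds."""
    def __init__(self, condition: Callable[[Dict[str, float]], bool], t: float):
        self.condition = condition
        self.t = t
        self.elapsed = 0.0

    def update(self, process: Dict[str, float]) -> bool:
        if self.condition(process):
            self.elapsed += SAMPLE_PERIOD
        else:
            self.elapsed = 0.0            # the abnormal reading did not persist
        return self.elapsed >= self.t

def apply_corrective_measures(process: Dict[str, float], x: float) -> None:
    """Example: {VC2:=1 and PL3:=-6x/7 and PR3:=8x/7} read as plain assignments."""
    process["VC2"] = 1                    # open the central valve
    process["PL3"] = -6 * x / 7           # reverse flow between rear and central tank
    process["PR3"] = 8 * x / 7

# Usage: the "more flow" deviation T(FR1 > 1.03*x, 3) from the text.
more_flow = TimedExpression(lambda p: p["FR1"] > 1.03 * p["x"], t=3.0)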
5
Conclusions
In this paper, we explored the idea of using a specification drawn from design models and safety analyses as a model for on-line safety monitoring. We proposed an automated monitor that can operate on such specifications to draw inferences about the possible presence of anomalies in the controlled process, the root causes of those anomalies, the functional effects of failures on the system, and corrective measures that minimise or remove those effects.

Our experiments demonstrated to some extent that the monitor can deliver those potentially useful monitoring functions. At the same time, they pointed out some weaknesses and limitations of the monitor. Firstly, they highlighted the vulnerability of the monitor to sensor failures and indicated the need for a generic sensor validation mechanism. Such a mechanism could be based on well-known techniques that exploit hardware replication or analytical redundancy. Secondly, we saw that, in circumstances of (unanticipated) dependent failures that require conflicting corrective procedures, the monitor treats those failures independently, with unpredictable and potentially hazardous consequences. We pointed out that the real problem in those circumstances lies in the failure of the model to predict the synchronous occurrence of those failures. This, however, highlights once more an important truth: the quality of monitoring and the correctness of the inferences drawn by the monitor are strongly contingent on the completeness and correctness of the model. The validation of the monitoring model, therefore, is clearly an area that creates scope for further research.

In summary, the monitoring concepts and algorithms proposed in this paper create opportunities for exploiting in real-time an enormous amount of knowledge about the behaviour of the system that is typically derived during design and safety analysis. However, although in this paper we demonstrated that this is possible, the limitations revealed so far and insufficient project experience mean that substantial work will be required before a conclusive evaluation of the real value and scalability of this approach is possible.
Acknowledgements This study was partly supported by the IST project SETTA (IST contract number 10043). The author would like to thank John Firth, John McDermid (University of York), Christian Scheidler, Günter Heiner (Daimler Chrysler) and Matthias Maruhn (EADS Airbus) for supporting the development of those ideas at various stages of the work.
References

1. Felkel, L., Grumbach, R., Saedtler, E.: Treatment, Analysis and Presentation of Information about Component Faults and Plant Disturbances, Symposium on Nuclear Power Plant Control and Instrumentation, IAEA-SM-266/40, U.K. (1978) 340-347.
2. Meijer, C. H.: On-line Power Plant Alarm and Disturbance Analysis System, EPRI Technical Report NP-1379, EPRI, Palo Alto, California, USA (1980) 2.13-2.24.
3. Puglia, W. J., Atefi, B.: Examination of Issues Related to the Development of Real-time Operational Safety Monitoring Tools, Reliability Engineering and System Safety, 49:189-199, Elsevier Science (1995).
4. Varde, P. V., Shankar, S., Verma, A. K.: An Operator Support System for Research Reactor Operations and Fault Diagnosis Through a Connectionist Framework and PSA based Knowledge-based Systems, Reliability Engineering and System Safety, 60(1):53-71 (1998).
5. Davies, R., Hamscher, W.: Model Based Reasoning: Troubleshooting, in Hamscher et al. (eds.), Readings in Model-based Diagnosis, ISBN 1-55860-249-6 (1992) 3-28.
6. Kim, I. S.: Computerised Systems for On-line Management of Failures, Reliability Engineering and System Safety, 44:279-295, Elsevier Science (1994).
7. Papadopoulos, Y., McDermid, J. A., Sasse, R., Heiner, G.: Analysis and Synthesis of the Behaviour of Complex Programmable Electronic Systems in Conditions of Failure, Reliability Engineering and System Safety, 71(3):229-247, Elsevier Science (2001).
8. Papadopoulos, Y., Maruhn, M.: Model-based Automated Synthesis of Fault Trees from Simulink Models, in Proc. of DSN 2001, the Int. Conf. on Dependable Systems and Networks, Gothenburg, Sweden, ISBN 0-7695-1101-5 (2001) 77-82.
9. Papadopoulos, Y.: Safety Directed Monitoring Using Safety Cases, D.Phil. Thesis, Technical Report YCST-2000-08, University of York, U.K. (2000).
10. Yamashima, H., Kumamoto, H., Okumura, S.: Plant Failure Diagnosis by an Automated Fault Tree Construction Consistent with Boundary Conditions, in Proc. of the Int. Conf. on Probabilistic Safety Assessment and Management (PSAM), Elsevier Science (1991).
On Diversity, and the Elusiveness of Independence

Bev Littlewood
Centre for Software Reliability, City University, Northampton Square, London EC1V 0HB
[email protected]

1

Extended Abstract
Diversity, as a means of avoiding mistakes, is ubiquitous in human affairs. Whenever we invite someone else to check our work, we are taking advantage of the fact that they are different from us. In particular, we expect that their different outlook may allow them to see problems that we have missed. In this talk I shall look at the uses of diversity in systems dependability engineering.

In contrast to diversity, redundancy has been used in engineering from time immemorial to obtain dependability. Mathematical theories of reliability involving redundancy of components go back over half a century. But redundancy and diversity are not the same thing. Redundancy, involving the use of multiple copies of similar (‘identical’) components (e.g. in parallel), can be effective in protecting against random failures of hardware. In some cases, it is reasonable to believe that failures of such components will be statistically independent: in that case very elementary mathematics can show that systems of arbitrarily high reliability can be built from components of arbitrarily low reliability. In practice, assumptions of independence need to be treated with some scepticism, but redundancy can nevertheless still bring benefits in reliability.

What redundancy cannot protect against, of course, is the possibility of different components containing common failure modes – for example, design defects which will show themselves on every component of a particular type whenever certain conditions arise. Whilst this problem has been familiar to reliability and safety engineers for decades, it became particularly acute when systems dependability began to depend heavily on the correct functioning of software.

Clearly, there are no software reliability benefits to be gained by the use of simple redundancy, i.e. merely exact replication of a single program. Since software, unlike hardware, does not suffer from ‘random failures’ – in the jargon its failures are ‘systematic’ – failures of identical copies will always be coincident. Design diversity, on the other hand – creating different versions using different teams and perhaps different methods – may be a good way of making software reliable, by providing some protection against the possibility of common design faults in different versions. Certainly, there is some industrial experience of design-diverse fault tolerant systems exhibiting high operational reliability (although the jury is out on the issue of whether this is the most cost-effective way of obtaining high reliability).
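The "very elementary mathematics" alluded to above is the standard textbook calculation for a parallel arrangement of independently failing components; it is reproduced here only as an illustration.

% A 1-out-of-N parallel system of components that fail independently,
% each with failure probability p, fails only if all N components fail:
\[
  P(\text{system fails}) \;=\; p^{N} \;\longrightarrow\; 0
  \quad \text{as } N \to \infty, \text{ for any } p < 1,
\]
% so arbitrarily high system reliability can, in principle, be obtained from
% arbitrarily unreliable components, provided the independence assumption holds.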
What is clear, however – from both theoretical and experimental evidence – is that claims for statistical independence of diverse software versions¹ are not tenable. Instead, it is likely that two (or more) versions will show positive association in their failures. This means that the simple mathematics based on independence assumptions will be incorrect – indeed it will give dangerously optimistic answers. To assess the system reliability we need to estimate not just the version reliabilities, but the level of dependence between versions as well.

¹ Whilst the language of software will be used here, these remarks about design faults apply equally well to dependability issues arising from design defects in any complex systems – including those that are just hardware-based.

In recent years there has been considerable research into understanding the nature of this dependence. It has centred upon probabilistic models of variation of ‘difficulty’ across the demand (or input) space of software. The central idea is that different demands vary in their difficulty – to the human designer in providing a correct ‘solution’ that will become a part of a program, and thus eventually to the program when it executes. The earliest models assume that what is difficult for one programming team will be difficult for another. Thus we might expect that if program version A has failed on a particular demand, then this suggests the demand is a ‘difficult’ one, and so program version B becomes more likely to fail on that demand. Dependence between failures of design-diverse versions therefore arises as a result of variation of ‘difficulty’: the more variation there is, the greater the dependence.

This model, and subsequent refinements of it, go some way to providing a formal understanding of the relationship between the process of building diverse program versions and their subsequent failure behaviour. In particular, they shine new light on the different meanings of that over-used word ‘independence’. They show that even though two programs have been developed ‘independently’, they will not fail independently. The apparent goal of some early work on software fault tolerance – to appeal to ‘independence’ in order to claim high system reliability using software versions of modest reliability, as had been done for hardware – turns out to be illusory. On the other hand, the models tell us that diversity is nevertheless ‘a good thing’ in certain precise and formally-expressed ways.

In this talk I shall briefly describe these models, and show that they can be used to model diversity in other dependability-related contexts. For example, the formalism can be used to model diverse methods of finding faults in a single program: it provides an understanding of the trade-off between ‘effectiveness’ and ‘diversity’ when different fault-finding methods are available (as is usually the case). I shall also speculate about their applicability to the use of diversity in reliability and safety cases: e.g. ‘independent’ argument legs; e.g. ‘independent’ V&V.

As will be clear from the above, much of the talk will concern the vexed question of ‘independence’. These models, for the most part, are bad news for seekers after independence. Are there novel ways in which we might seek, and make justifiable claims about, independence? Software is interesting here because of the possibility that it can be made ‘perfect’, i.e. fault-free and perfectly reliable, in certain circumstances. Such a claim for perfection is rather different from a claim for high reliability. Indeed, I might believe a claim that a program has a zero failure rate, whilst not believing a claim that another program has a failure rate of less than 10⁻⁹ per hour. The reason for my apparently paradoxical view is that the arguments here are very different. The claim for perfection might be based upon utter simplicity, a formal specification of the engineering requirements, and a formal verification of the program. The claim for better than 10⁻⁹ per hour, on the other hand, seems to accept the presence of faults (presumably because the program’s complexity precludes claims of perfection), but nevertheless asserts that the faults will have an incredibly small effect. I shall discuss a rather speculative approach to design diversity in which independence may be believable between claims for fault-freeness and reliability.
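A compact formal statement of the difficulty-variation argument sketched above, in the style of the standard Eckhardt-Lee and Littlewood-Miller models, is added here purely as background; the abstract itself gives no formulas.

% Let \theta(x) be the probability that a version produced by the development
% process fails on demand x, and let X be a randomly selected demand. For two
% versions A and B developed "independently" under the same process:
\[
  P(\text{A and B both fail})
    \;=\; E\!\left[\theta(X)^{2}\right]
    \;=\; \bigl(E[\theta(X)]\bigr)^{2} + \operatorname{Var}\bigl[\theta(X)\bigr]
    \;\ge\; \bigl(E[\theta(X)]\bigr)^{2}.
\]
% The excess over the independence value (E[\theta(X)])^2 is the variance of the
% difficulty function: the more the difficulty varies across the demand space,
% the greater the positive association between failures of the two versions.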
An Approach to a New Network Security Architecture for Academic Environments

MahdiReza Mohajerani and Ali Moeini
University of Tehran, n. 286, Keshavarz Blvd, 14166, Tehran, Iran
{mahdi, moeini}@ut.ac.ir
Abstract. The rapidly growing interconnectivity of IT systems, and the convergence of their technology, renders these systems increasingly vulnerable to malicious attacks. Universities and academic institutions also face concerns about the security of computing resources and information; however, traditional security architectures are not effective for academic or research environments. This paper presents an approach to a new security architecture for universities and academic centers. While still protecting information and computing resources behind a security perimeter, this architecture supports information dissemination and allows users to develop and test insecure software and protocols. We also propose a method for auditing the security policy, based on a fuzzy logic intrusion detection system, to check the network for possible violations.
1
Introduction
With the growth of IT systems, computer security is rapidly becoming a critical business concern. Security in computer networks is important in order to maintain reliable operation and to protect the integrity and privacy of stored information. Network attacks cause organizations several hours or days of downtime and serious breaches in data confidentiality and integrity. Depending on the level of the attack and the type of information that has been compromised, the consequences of network attacks vary in degree from mildly annoying to completely debilitating, and the cost of recovery from attacks can range from hundreds to millions of dollars [1][2].

One technological tool that is widely vulnerable to these threats is the Internet. Because of this, security has become one of the primary concerns when an organization connects its private network to the Internet, in order to prevent destruction of data by an intruder, maintain the privacy of local information, and prevent unauthorized use of computing resources. To provide the required level of protection, an organization needs to prevent unauthorized users from accessing resources on the private network and to protect against the unauthorized export of private information [3]. Even if an organization is not connected to the Internet, it may still want to establish an internal security policy to manage user access to portions of the network and protect sensitive or secret information. Academic centers, as one of the major
users of the Internet, also need security; however, because of their special structure and requirements, traditional solutions and policies for limiting access to the Internet are not effective for them. This paper presents the basic rules of a new security architecture for academic centers. The aim of this architecture is to support information dissemination and to allow users to develop and test insecure software and protocols, while protecting information and computing resources behind the security perimeter. To this end, the general structure is presented in Section 2. In Section 3, the differences between security solutions in corporate and academic environments are discussed. In Section 4, the proposed structure of the security policy and architecture for an academic center is presented. In Section 5, the results of evaluating the auditing of the security policy are shown. Section 6 concludes the paper.
2
Security Architecture and Policy
The objective of a network security architecture is to provide the conceptual design of the network security infrastructure, related security mechanisms, and related security policies and procedures. The security architecture links the components of the security infrastructure as one cohesive unit, whose goal is to protect corporate information [4]. The security architecture should be developed by both the network design and the IT security teams. It is typically integrated into the existing enterprise network and is dependent on the IT services that are offered through the network infrastructure. The access and security requirements of each IT service should be defined before the network is divided into modules with clearly identified trust levels. Each module can be treated separately and assigned a different security model. The goal is to have layers of security so that a "successful" intruder's access is constrained to a limited part of the network. Just as the bulkhead design in a ship can contain a leak so that the entire ship does not sink, the layered security design limits the damage a security breach has on the health of the entire network. In addition, the architecture should define common security services to be implemented across the network.

Usually, the primary prerequisite for implementing network security, and the driver for the security design process, is the security policy. A security policy is a formal statement, supported by a company's highest levels of management, regarding the rules by which employees who have access to any corporate resource must abide. The security policy should address two main issues: the security requirements as driven by the business needs of the organization, and the implementation guidelines regarding the available technology. This policy covers senior management's directives to create a computer security program, establish its goals and assign responsibilities, as well as low-level technical security rules for particular systems [5]. After the key decisions have been made, the security architecture should be deployed in a phased manner, addressing the most critical areas first. The most important security services are those which enforce the security policy (such as perimeter security) and those which audit the security policy (such as intrusion detection systems).
2.1
The Perimeter Security
Perimeter security solutions control access to critical network applications, data, and services so that only legitimate users and information can pass through the network. This access control is handled by routers and switches with access control lists (ACLs) and by dedicated firewall appliances. A firewall is a safeguard one can use to control access between a trusted network and a less trusted one [6]. A firewall is not a single component; it is a strategy comprising a system or a group of systems that enforces a security policy between two networks (in our case, between the organization's network and the Internet). For the firewall to be effective, all traffic to and from the Internet must pass through the firewall [6].

A firewall normally includes mechanisms for protection at the network layer, transport layer and application layer. At the network layer, IP packets are routed according to predefined rules. Firewalls can usually translate internal IP addresses to valid Internet IP addresses (NAT, or Network Address Translation). They can also replace all internal addresses with the firewall address (also called Port Address Translation). At the transport layer, access to TCP and UDP ports can be granted or blocked, depending on the IP address of both sender and receiver. This allows access control for many TCP services, but does not work at all for others. At the application layer, proxy servers (also called application gateways) can be used to accept requests for a particular application and either forward the request to the final destination or block it. Ideally, proxies should be transparent to the end user. Proxies are stripped-down, reliable versions of standard applications with access control and forwarding built in. Typical proxies include HTTP (for WWW), telnet, ftp, etc. [7].

Firewalls can be configured in a number of different architectures, providing various levels of security at different costs of installation and operation [12]. The simplest firewall architecture is the basic filter architecture (screening router), the cheapest (and least secure) setup, which involves using a router (which can filter inbound and outbound packets on each interface) to screen access to one or more internal servers. At the other end of the spectrum there is the DMZ architecture, an extension of the screened host architecture. The classical firewall setup is a packet filter between the outside and a "semi-secure" or De-Militarized Zone (DMZ) subnet where the proxies lie (this allows the outside only restricted access to services in the DMZ). The DMZ is further separated from the internal network by another packet filter, which only allows connections to/from the proxies. Organizations should match their risk profile to the type of firewall architecture selected; usually, one of the above architectures, or a composition of them, is selected.
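As a purely illustrative sketch of the DMZ idea described above (the zone names, ports and rule encoding are invented for the example and are not from the paper), the two packet filters can be thought of as small rule tables evaluated per connection.

# Hypothetical encoding of the two packet filters that bound a DMZ:
# outer filter: Internet <-> DMZ; inner filter: DMZ <-> internal network.

OUTER_FILTER = [
    # (source zone, destination zone, destination port, action)
    ("internet", "dmz",      80,   "allow"),   # restricted services offered in the DMZ
    ("internet", "dmz",      25,   "allow"),
    ("dmz",      "internet", None, "allow"),
    ("internet", "internal", None, "deny"),    # the outside never reaches the inside directly
]

INNER_FILTER = [
    ("dmz",      "internal", None, "deny"),    # only proxy traffic crosses this filter
    ("internal", "dmz",      3128, "allow"),   # e.g. internal clients to an HTTP proxy in the DMZ
]

def decide(rules, src_zone, dst_zone, dst_port):
    """First matching rule wins; deny by default."""
    for rule_src, rule_dst, rule_port, action in rules:
        if rule_src == src_zone and rule_dst == dst_zone and rule_port in (None, dst_port):
            return action
    return "deny"

# Example: a web request from the Internet to a public server in the DMZ.
print(decide(OUTER_FILTER, "internet", "dmz", 80))   # -> "allow"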
2.2

The Intrusion Detection System
An Intrusion Detection System (IDS) is a computer program that attempts to perform intrusion detection by either misuse or anomaly detection, or a combination of the two techniques. An IDS should preferably perform its task in real time. IDSs are usually classified as host-based or network-based. Host-based systems base their decisions on information obtained from a single host (usually audit trails), while network-based systems obtain data by monitoring the traffic in the network to which the hosts are connected. Notice that the definition of an IDS does not include preventing the intrusion from occurring, but only detecting it and reporting it to an operator.
3
Network Security in Academic Environments
Most corporate environments have deployed firewalls to block (or heavily restrict) access to internal data and computing resources from untrusted hosts, and to limit access to untrusted hosts from inside. A typical corporate firewall is a strong security perimeter around the employees who collaborate within the corporation.

Academic institutions also face concerns about the security of computing resources and information. The security problems in these environments fall into two categories: problems with research information and problems with administrative information. Research groups often need to maintain the privacy of their work, ideas for future research, or results of research in progress. Administrative organizations need to prevent leakage of student grades, personal contact information, and faculty and staff personnel records. Moreover, the cost of security compromises is high: a research group could lose its competitive edge, and administrative organizations could face legal proceedings for unauthorized information release. On the other hand, academic and research institutions are ideal environments for hackers and intruders; many of them are physically located in these places and are highly motivated to access and modify grades and other information. There are several reports of break-ins and deletion of data from educational institutions [8].

Although corporate and academic environments face common security problems, they cannot choose similar methods to solve them, because of their different structures. In a corporate environment, the natural place to draw a security perimeter is around the corporation itself. However, in an academic environment, it is very difficult to draw a perimeter surrounding all of the people who need to access information resources, and only those people. This is mainly because of the different types of information resources in these environments and also the different users who want to access them. If the firewall perimeter is chosen too big, it includes untrusted people; if it is chosen too small, it excludes some of the authorized people [9].

In addition, corporations can put serious limitations on Internet connectivity in the name of security, but research organizations simply cannot function under such limitations. First, trusted users need unrestricted and transparent access to Internet resources (including World-Wide-Web, FTP, Gopher, electronic mail, etc.) located outside the firewall. Researchers rely on fingertip access to on-line library catalogs and bibliographies, preprints of papers, and other network resources supporting collaborative work. Second, trusted users need the unrestricted ability to publish and disseminate information to people outside the firewall via anonymous FTP, World-Wide-Web, etc. This dissemination of research results, papers, etc. is critical to the research community. Third, the firewall must allow access to protected resources from trusted users located outside the firewall. An increasing number of users work at home or while traveling, and research collaborators may also need to enter the firewall from remote hosts [8]. Consequently, traditional firewalls do not meet the requirements of academic environments.
4
The Perimeter Security Architecture
A high percentage of security efforts within organizations rely exclusively on perimeter network access controls. Perimeter security protects a network by controlling access to all entry and exit points. As mentioned before, traditional solutions cannot meet the requirements of academic environments because of the special needs of these institutions. One solution is to design different layers and zones in the firewall. Based on the resources, and on the people who want to access them, the firewall can have different zones. This helps to solve one of the basic problems of academic environments, which is information dissemination. There are three categories of information in a university:

• The information that is officially disseminated by the university (such as news and events, articles, ...)
• The information that is gathered and used by network users.
• The information that is not allowed to be disseminated publicly.
Based on the above categories, three types of servers may be proposed in the university: public servers, which are used to support information dissemination; experimental servers, which are used by researchers and students to develop and test their own software and protocols; and trusted servers, which are used for administrative purposes or for keeping confidential information.

Another requirement of an academic environment is to let its trusted members access the resources of the network from outside the firewall (for example from home or while travelling). A further problem that causes serious trouble for the university is network viruses, which are distributed through the network after users access particular sites. Proxy servers can be used to control this problem; of course, these proxy servers should be transparent.

To achieve those goals, the proposed network security policy was designed based on six basic rules:

i. Packets to or from the public servers are unrestricted if they are from authorized ports. The authorized port is the port on which the particular service runs. Of course, each public server should be protected itself; server-level security means enforcing stronger access controls at that level.
ii. Packets to or from the experimental servers are unrestricted. These servers can be located outside the firewall perimeter.
iii. Packets to or from the authorized ports of trusted servers are allowed only from or to the authorized clients inside the firewall.
iv. All outgoing packets are allowed to travel outside after port address translation. Incoming packets are allowed if they can be determined to be responses to an outbound request.
v. Packets to or from trusted users of hosts outside the firewall are allowed.
vi. All requests from particular applications, such as HTTP, should be passed through the proxy server.
Rule i is based on our need to support information dissemination in a research environment. We have to separate the public servers from our trusted hosts and protect them at server level, accepting the fact that they may be compromised; we should therefore have a plan to recover them from information kept securely behind the firewall. Rule ii follows from our recognition that researchers and students sometimes need to develop and test insecure software and protocols on the Internet; of course, they should be alerted that their server is not secure and their information may be corrupted. Rule iii is based on the fact that we want to protect confidential information. These servers are our most important resources to be protected and we put them in a special secure zone. Rule iv follows from our recognition that open network access is a necessary component of a research environment. On the other hand, we do not want to allow users to set up Internet servers without permission; the address translation prevents outside systems from accessing internal resources, except those listed as public servers. Rule v grants access to protected resources to users as they work from home or while traveling, as well as to collaborators located outside the research group. Rule vi is based on the need to block some Internet sites that contain viruses.

This security policy addresses the needs of academic environments - and indeed the needs of many corporate environments. In the above policy the experimental servers are different from the others. Because they involve interactions with unauthenticated hosts and users, they pose considerable security risks, and since they are used without restrictions, various types of services are presented on them. These servers should therefore be located outside the firewall, and a physical separation should be created between them and other servers. These servers can be recovered periodically to make sure that they have not been hacked. The public servers are also vulnerable to threats, because they too involve interactions with public users and hosts. The difference is that only restricted services are presented on them, so they can be protected more easily and internal users can be kept off these servers. However, since we assume that these servers may get corrupted, each server should be automatically restored regularly from uncompromised sources.

The firewall enforces the security policy between the internal network and the Internet, so the firewall structure is designed based on the security policy. According to our security policy, our firewall structure is a combination of the structures mentioned in Section 2. We have a router for the connection to the Internet, which can be used as a screening router. The most important part of the firewall is the bastion host; this is where our filtering rules and the different zones of the firewall are defined, and our proxy server is installed on it. The bastion host can be a single-purpose hardware device or a PC with a normal operating system such as Linux. The proposed firewall architecture is shown in Fig. 1.
[Figure: the proposed perimeter security architecture, comprising a screening router, the firewall, public, trusted and experimental servers, and trusted and untrusted users; the legend marks the security perimeter, the trusted zone and the connections.]

Fig. 1. The proposed perimeter security architecture
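To make the six policy rules of Section 4 concrete, the sketch below encodes them as a small decision function. It is only an illustration of the policy's logic; the zone names, port sets and packet fields are assumptions, not part of the paper.

# Hypothetical encoding of the six-rule policy; all names and helpers are invented.

PUBLIC_PORTS  = {80, 21, 25}     # authorized service ports on public servers (rule i)
TRUSTED_PORTS = {443}            # authorized ports on trusted servers (rule iii)
PROXIED_APPS  = {"http"}         # applications forced through the proxy (rule vi)

def allow(packet, state):
    """Return True if the firewall should pass `packet`.
    `packet` carries zone/port/app fields; `state` tracks outbound connections."""
    src, dst = packet["src_zone"], packet["dst_zone"]

    if packet["app"] in PROXIED_APPS and dst == "internet":
        return packet["via_proxy"]                        # rule vi

    if "public_server" in (src, dst):                     # rule i
        return packet["dst_port"] in PUBLIC_PORTS or packet["src_port"] in PUBLIC_PORTS

    if "experimental_server" in (src, dst):               # rule ii
        return True

    if dst == "trusted_server":                           # rule iii
        return src == "internal" and packet["dst_port"] in TRUSTED_PORTS

    if src == "internal" and dst == "internet":           # rule iv (outbound, after PAT)
        state["outbound"].add(packet["flow_id"])
        return True
    if src == "internet" and dst == "internal":           # rule iv (replies) and rule v
        return packet["flow_id"] in state["outbound"] or packet.get("trusted_user", False)

    return False                                          # default deny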
5
Auditing the Security Policy
After implementing the perimeter security based on the proposed security policy, it is important to check the network for possible violations of the policy. The key point in our network security policy is the type of servers: the auditing system should be designed to detect unauthorized servers in the network, so that they can be blocked by the firewall. A fuzzy recognition engine (based on the FIRE engine developed by J. E. Dickerson et al. [14]) is used for this purpose, in combination with a normal detector, to detect the unauthorized servers.

The detection system is based on the autonomous agent architecture developed at Purdue by Zamboni et al. [11], which uses independent entities working collectively. There are three distinct independent components in this architecture, called agents, transceivers and monitors. They each play different roles as part of the system, but their function can be altered only by the operating system, not by the other processes; thus they are autonomous. An agent monitors the processes of a host and reports abnormal behavior to a transceiver; it can communicate with another local agent only through a transceiver. A transceiver controls local agents and acts as the external communication channel between a host and a monitor; it can perform appropriate data processing and report to the monitors or other agents. A monitor is similar to a transceiver, but it
also controls entities in several hosts. Monitors combine higher-level reports, correlate data, and send alarms or reports to the User Interface [11][14]. The fuzzy recognition engine can be used as the correlation engine of the intrusion detection system (Fig. 2), and several agents can be developed under this system. Since choosing the best data elements to monitor in the network stream is critical to the effectiveness of the detection system, and in order to conserve storage space, the system records only information about the services used, the length of the connection, the type of the connection, the source and the destination.

In order to identify an unauthorized server, we should identify unusual service ports in use on the network, unusual numbers of connections from foreign hosts, and unusual amounts of network traffic to/from a host on the network. The main difference between our work and FIRE is that we use a normal detector alongside the fuzzy system [10]. While the normal detector identifies the unusual service ports, the fuzzy recognition system identifies the unusual numbers of connections from foreign hosts and the unusual amounts of network traffic to/from a host on the network. The combination of the results of the two systems detects the unauthorized server. To test this fuzzy system we gathered data for 10 days; the results show that, using this system, 93% of the unauthorized servers were detected [13].
Fig. 2. The Intrusion Detection System
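As an illustration of how the normal detector and the fuzzy recognition engine described above might be combined (the membership functions, thresholds and field names are assumptions made for the sketch, not values from the paper), consider the following.

# Hypothetical sketch: flag a host as a likely unauthorized server by combining
# a crisp check on service ports with fuzzy measures of connection activity.

AUTHORIZED_PORTS = {22, 25, 80, 443}   # assumed list of sanctioned service ports

def ramp(value, low, high):
    """Simple fuzzy membership rising from 0 at `low` to 1 at `high`."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

def unauthorized_server_score(host_stats):
    """host_stats: aggregated per-host records of services used, connection
    counts from foreign hosts and traffic volume (field names are assumptions)."""
    # Crisp "normal detector": is any service offered on an unusual port?
    unusual_port = any(p not in AUTHORIZED_PORTS for p in host_stats["listening_ports"])

    # Fuzzy measures of unusual activity (thresholds chosen arbitrarily here).
    many_foreign_conns = ramp(host_stats["foreign_connections_per_hour"], 20, 200)
    heavy_traffic      = ramp(host_stats["bytes_per_hour"] / 1e6, 50, 500)

    # One of several possible combinations: fuzzy AND (minimum) of the activity
    # measures, gated by the crisp port condition from the normal detector.
    fuzzy_evidence = min(many_foreign_conns, heavy_traffic)
    return fuzzy_evidence if unusual_port else 0.0

# A host scoring above, say, 0.5 would be reported for blocking at the firewall.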
6
Conclusions
In this paper, a security policy and firewall architecture for an academic center were presented. This firewall meets the needs of academic environments. The firewall is based on six security policy rules and is largely transparent to trusted users, and it therefore retains the sense of "openness" critical in a research environment. This transparency and perceived openness actually increase security by eliminating the temptation for users to bypass our security mechanisms. In designing the firewall, we recognized that network security for research institutions is a problem in its own right and that traditional corporate firewalls impose excessive restrictions. We therefore categorized the university information and, based on that, designed the layers and zones of the firewall. In addition, each server inside or outside the firewall
should have its own server-level security. We also proposed an auditing system based on fuzzy logic recognition to detect violations of our security policy.
Acknowledgements The Informatics Center of the University of Tehran (UTIC) has been studying the appropriate structure of the network security policy and firewall of the university since 1997 [15], and this paper is the result of that research. We would like to thank UTIC for providing the testing environment for this project.
References

1. Ramachandran, J.: Designing Security Architecture Solutions, John Wiley and Sons (2002)
2. Benjamin, R., Gladman, B.: Protecting IT Systems from Cyber Crime, The Computer Journal, Vol. 4, No. 7 (1998)
3. Goncalves, M.: Firewall Complete, McGraw-Hill (1999)
4. Arconati, N.: One Approach to Enterprise Security Architecture, SANS Institute, http://www.sans.org (2002)
5. Cisco Systems: Network Security: An Executive Overview, http://www.cisco.com/warp/public/cc/so/neso/sqso/netsp_pl.htm
6. Semeria, C.: Internet Firewalls and Security, 3Com Technical Paper (1996)
7. Boran Consulting: The IT Security Cookbook, http://secinf.net/info/misc/boran (1999)
8. Greenwald, M., et al.: Designing an Academic Firewall: Policy, Practice and Experience with SURF, in Proceedings of the 1996 Symposium on Network and Distributed Systems Security, IEEE (1996)
9. Nelson, T.: Firewall Benchmarking with firebench II, Exodus Performance Lab (1998)
10. Molitor, A.: Measuring Firewall Performance, http://web.ranum.com/pubs/fwperf/molitor.htm (1999)
11. Zamboni, D.: An Architecture for Intrusion Detection using Autonomous Agents, COAST Technical Report 98/05, COAST Lab, Purdue University (1998)
12. Guttman, B., Bagwill, R.: Implementing Internet Firewall Security Policy, NIST Special Publication (1998)
13. Mohajerani, M. R., Moeini, A.: Designing an Intelligent Intrusion Detection System, Internal Technical Report (in Persian), University of Tehran Informatics and Statistics Center (2001)
14. Dickerson, J. E., Juslin, J., Koukousoula, O., Dickerson, J. A.: Fuzzy Intrusion Detection, Proc. of the 20th NAFIPS International Conference (2001)
15. Mohajerani, M. R.: To Design the Network Security Policy of the University of Tehran, Internal Technical Report (in Persian), University of Tehran Informatics and Statistics Center (2000)
A Watchdog Processor Architecture with Minimal Performance Overhead

Francisco Rodríguez, José Carlos Campelo, and Juan José Serrano
Grupo de Sistemas Tolerantes a Fallos - Fault Tolerant Systems Group
Departamento de Informática de Sistemas y Computadoras
Universidad Politécnica de Valencia, 46022-Valencia, Spain
{prodrig,jcampelo,jserrano}@disca.upv.es
http://www.disca.upv.es/gstf
Abstract. Control flow monitoring using a watchdog processor is a well-known technique to increase the dependability of a microprocessor system. Most approaches embed reference signatures for the watchdog processor into the processor instruction stream, creating noticeable memory and performance overheads. A novel watchdog processor architecture using embedded signatures is presented that minimizes the memory overhead and nullifies the performance penalty on the main processor without sacrificing error detection coverage or latency. This scheme is called Interleaved Signature Instruction Stream (ISIS) in order to reflect the fact that signatures and main instructions are two independent streams that co-exist in the system.
1
Introduction
In the "Model for the Future" foreseen by Avizienis in [1], the urgent need to incorporate dependability into everyday computing is clear: "Yet, it is alarming to observe that the explosive growth of complexity, speed, and performance of single-chip processors has not been paralleled by the inclusion of more on-chip error detection and recovery features".

Efficient error detection is of fundamental importance in dependable computing systems. As the vast majority of faults are transient, the use of a concurrent Error Detection Mechanism (EDM) is of utmost interest, since high coverage and low detection latency are needed to recover the system from the error. And as experiments demonstrate [2, 3, 4, 5], a high percentage of non-overwritten errors results in control flow errors.

Siewiorek states in [6] that "To succeed in the commodity market, fault-tolerant techniques need to be sought which will be transparent to end users". A fault-tolerant technique can be considered transparent only if it results in minimal overhead in silicon, memory size or processor speed. Although redundant systems can achieve the best degree of fault-tolerance, the high overheads implied limit their applicability in everyday computing elements.
The work presented here provides concurrent detection of control flow errors with no performance penalty and minimal memory and silicon overheads. No modifications are needed in the instruction set of the processor used as a testbed, and the architectural modifications are so small that they can be enabled and disabled under software control to allow binary compatibility with existing software. The watchdog processor is very simple, and its design can be applied to other processors as well.

The paper is structured as follows: the next section presents a set of basic definitions and is followed by an outline of related work in the literature. Section 4 presents the system architecture in which the watchdog is embedded. Section 5 discusses error detection capabilities, signature characteristics and placement, and the modifications needed to the original architecture of the processor. A memory overhead comparison with similar work is performed afterwards, followed by the conclusions.
2
Basic Definitions
The following definitions are taken from [5]:

1. A branch instruction is an instruction that can break the sequential flow of execution, such as a procedure call, a conditional jump or a return-from-procedure instruction.
2. A branch-in point is an instruction used as the destination of a branch instruction, or the entry point of, for example, an interrupt handler.
3. A program is partitioned into branch-free intervals and branch instructions. The beginning of a branch-free interval is a branch-in instruction or the instruction following a branch. A branch-free interval is ended by a branch or a branch-in instruction.
4. A basic block is only a branch-free interval if it is ended by a branch-in. Otherwise, it is the branch-free interval together with its following branch instruction.

With the definitions above, a program can be represented by a Control Flow Graph (CFG). Vertices in this graph represent basic blocks and directed arcs represent legal paths between blocks. Figure 1 shows some examples for simple High Level Language constructs. We use the term block fall-through for the situation where two basic blocks are separated with no branch-out instruction in between; the blocks are divided only because the first branch-free interval is ended by a following branch-in instruction that starts the second block.

In [7], a block that receives more than two transfers of control flow is said to be a branch fan-in block. We distinguish whether the control flow transfer is due to a non-taken conditional branch (that is, both blocks are contiguous in memory) and say that a multiple fan-in block is reachable from more than one out-of-sequence vertex in the CFG. A branch instruction with more than one out-of-sequence target is represented in the CFG by two or more arcs departing from the same vertex, where
at least two of them are targeted to out-of-sequence vertices. These are said to be multiple fan-out blocks.

[Figure: CFGs for an if-then-else construct (if-block with a conditional branch, then-block, else-block, and a fall-through next-block acting as a multiple fan-in block) and a switch construct (a multiple fan-out switch-block with case 0 ... case n blocks).]

Fig. 1. CFG's for some HLL constructs

A derived signature is a value assigned to each instruction block. The term derived means the signature is not an arbitrarily assigned value but is calculated from the block's instructions. Derived signatures are usually obtained by XORing the instruction opcodes or by using those opcodes to feed a Linear Feedback Shift Register (LFSR). These values are calculated at compile time and used as references by the EDM to verify the correctness of executed instructions. If signatures are interspersed or hashed with the processor instructions, the method is generally known as Embedded Signature Monitoring (ESM).

A watchdog processor is a hardware EDM used to detect Control Flow Errors (CFE) and/or corruption of the instructions executed by the processor, usually employing derived signatures and an ESM technique. In this case it performs signature calculations from the instruction opcodes that are actually executed by the main processor, checking these run-time values against their references. If any difference is found, the error in the main processor instruction stream is detected and an Error Recovery Mechanism (ERM) is activated.
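To illustrate how a derived signature of the kind described above can be computed at compile time and recomputed at run time, here is a minimal sketch; the 16-bit LFSR polynomial, the 32-bit opcode width and the example opcodes are arbitrary choices for the illustration, not those of the architecture presented in this paper.

# Illustrative computation of a derived signature over a basic block's opcodes,
# either by XOR-folding or by feeding them into a 16-bit LFSR (polynomial chosen
# arbitrarily for the sketch).

def xor_signature(opcodes):
    """Simplest derived signature: XOR of all instruction words in the block."""
    sig = 0
    for op in opcodes:
        sig ^= op
    return sig

def lfsr_signature(opcodes, poly=0x1021, width=16):
    """Feed each opcode, bit by bit, into an LFSR/CRC-style register."""
    sig = 0
    mask = (1 << width) - 1
    for op in opcodes:
        for bit in range(31, -1, -1):              # 32-bit opcodes, MSB first
            fb = ((sig >> (width - 1)) ^ (op >> bit)) & 1
            sig = ((sig << 1) & mask) ^ (poly if fb else 0)
    return sig

# The compiler would store the reference value; the watchdog recomputes the
# signature from the opcodes actually fetched and compares it at the block's end.
block = [0x8C820000, 0x00431021, 0xAC820004]       # hypothetical opcodes
assert lfsr_signature(block) == lfsr_signature(block)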
3
Related Work
Several hardware approaches using a watchdog processor and derived signatures for concurrent error detection have been proposed. The most relevant works are outlined below: Ohlsson et al. present in [5] a watchdog processor built into a RISC processor. A specialized tst instruction is inserted in the delay slot of every branch instruction, testing the signature of the preceding block. An instruction counter is also used to time-out an instruction sequence when a branch instruction is not executed in the specified range. Other watchdog supporting instructions are added to the processor instruction set to save and restore the value of the instruction counter on procedure calls. The watchdog processor used by Galla et al. in [8] to verify correct execution of a communications controller of the Time-Triggered Architecture uses a similar
approach. A check instruction is inserted at appropriate places to trigger the checking process with the reference signature that is stored in the subsequent word. In the case of a branch, the branch delay slot is used to insert an adjustment value for the signature, to ensure that the run-time signature is the same at the check instruction independently of the path followed. An instruction counter is also used by the watchdog. The counter is loaded during the check instruction and decremented for every instruction executed; a time-out is issued if the counter reaches zero before a new check instruction is executed. Due to the nature of the communications architecture, no interrupts are processed by the controller, so saving the run-time signature or instruction counter is not necessary.
The ERC32 is a SPARC processor augmented with parity bits and a program flow control mechanism, presented by Gaisler in [9]. In the ERC32, a test instruction to verify the processor control flow is also inserted in the delay slot of every branch to verify the instruction bits of the preceding block. In this work, the test instruction is a slightly modified version of the original nop instruction and no other modifications to the instruction set are needed.
A different error detection mechanism is presented by Kim and Somani in [10]. The decoded signals for the pipeline control are checked on a per-instruction basis and their references are retrieved from a watchdog private cache. If the run-time signature of a given instruction cannot be checked because its reference counterpart is not found in the cache, it is stored in the cache and used as a reference for future executions. No signatures or program modifications are needed because reference signatures are generated at run time, thus creating no overhead. The drawback of this approach is that the watchdog processor cannot check all instructions. An instruction can be checked only if it has been previously executed and its reference has not been displaced from the watchdog private cache to store other signatures. Although the error is detected before the instruction is committed and no overheads are created, the error coverage is poor.
More recently, hardware additions to modern processor architectures have been proposed to re-execute instructions and perform a comparison to verify that no errors have been produced before instructions are committed. Some of these proposals are outlined below for the sake of completeness, but they are out of the scope of this work because: i) hardware additions, spare components and/or time redundancy are used to detect all possible errors by re-execution of all instructions, so not only errors in the instruction bits or execution flow are detected but data errors as well; ii) they require either a complete redesign of the processor control unit or the addition of a complete execution unit capable of carrying out the same set of instructions as the main processor, although its control unit can be simpler. These include, to name a few: – REESE (Nickel and Somani, [11]) and AR-SMT (Rotenberg, [12]). Both works take advantage of the simultaneous multi-threading architecture to execute every instruction twice. The instructions of the first thread, along with their operands and results, are stored in a queue (a delay buffer in
Rotenberg’s work) and re-executed. Results of both executions are compared before the instructions are committed. – The microprocessor design approach of Weaver and Austin in [13] to achieve fault tolerance is the substitution of the commitment stage of a pipeline processor with a checker processor. Instructions along with their inputs, addresses and the results obtained are passed to the checker processor where instructions are re-executed and results can be verified before they are committed. – The O3RS design of Mendelson and Suri in [14] and a modified multiscalar architecture used by Rashid et al. in [15] use spare components in a processor capable of issuing more than one instruction per cycle to re-execute instructions.
4 System Architecture
The system (see Fig. 2) is built around a soft core of a MIPS R3000 processor clone developed in synthesizable VHDL [16]. It is a 4-stage pipelined RISC processor running the MIPS-I and MIPS-II Instruction Set Architecture [17]. Instruction and data bus interfaces are designed as AMBA AHB bus masters providing external memory access. The processor is provided with a Memory Management Unit (MMU) inside the System Control Coprocessor (CP0 in the MIPS nomenclature) to perform virtual-to-physical address mapping, to isolate memory areas of different processes and to check correct alignment of memory references. To minimize the performance penalty, the instruction cache is designed with two read ports that can provide two instructions simultaneously, one for each processor. On a cache hit, no interference exists even if the other processor is waiting for a cache line refill because of a cache miss.
Fig. 2. System architecture
To reduce the instruction cache complexity, a single write port is provided that must be shared by both processors. When simultaneous cache misses happen, cache refills are served in a first-come first-served fashion; if they happen in the same clock cycle, the main processor is given priority. This arrangement takes advantage of spatial and temporal locality in the application program to increase cache hits for signatures. As we use an ESM technique and signatures are interleaved with processor instructions, when both processors produce a cache miss they request the same memory block most of the time, as both reference words in the same program area.
No modification of the processor instruction set is needed, because signature instructions are neither fetched nor executed by the main processor. This allows us to maintain binary compatibility with existing software. If access to the source code is not possible, the program can be run without modification (and no concurrent flow error detection capability will be provided). This is possible because the watchdog processor and the processor's architecture modifications can be enabled and disabled under software control with superuser privileges. If these features are disabled, our processor behaves as an off-the-shelf MIPS processor. Thus, if binary compatibility is needed for a given task, these features must be disabled by the OS every time the task resumes execution.
The watchdog processor is fed with the instructions from the main processor pipeline as they are retired. When these instructions enter the watchdog, the run-time signatures and address bits are calculated at the same rate as the instructions arrive. When a block ends, these values are stored in a FIFO memory to decouple the signature checking process. This FIFO allows a large set of instructions to be retired from the pipeline while the watchdog is waiting for a cache refill in order to get a reference signature instruction. In a similar way, the FIFO can be emptied by the watchdog while the main processor pipeline is stalled due to a memory operation. When this memory is full, the pipeline is forced to wait for the watchdog checking process to read some data from the FIFO.
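As a software analogy of this decoupling (our own sketch, not the authors' hardware; the record layout, the depth and all names are assumptions made only for illustration):

/* Illustrative sketch of the retire/check decoupling FIFO. */
#define FIFO_DEPTH 16                      /* depth is an assumption */

struct block_record {
    unsigned short runtime_sig;            /* signature accumulated while retiring */
    unsigned int   block_addr;             /* address bits needed for the checks   */
    unsigned char  length;                 /* number of instructions retired       */
};

struct block_fifo {
    struct block_record buf[FIFO_DEPTH];
    int head, tail, count;
};

/* Retirement side: returns 0 (pipeline must stall) when the FIFO is full. */
int fifo_push(struct block_fifo *f, struct block_record r)
{
    if (f->count == FIFO_DEPTH)
        return 0;
    f->buf[f->tail] = r;
    f->tail = (f->tail + 1) % FIFO_DEPTH;
    f->count++;
    return 1;
}

/* Checking side: the watchdog pops a record once its reference signature
   has been fetched (possibly after a cache refill). */
int fifo_pop(struct block_fifo *f, struct block_record *r)
{
    if (f->count == 0)
        return 0;
    *r = f->buf[f->head];
    f->head = (f->head + 1) % FIFO_DEPTH;
    f->count--;
    return 1;
}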
5 Interleaved Signature Instruction Stream
Block signatures are placed at the beginning of every basic block in our scheme. These reference signatures are used by the watchdog processor only and are not processed in any way by the main processor. Two completely independent, interleaved instruction streams coexist in our system: the application instruction stream, which is divided into blocks and executed by the main processor, and the signature stream, used by the watchdog processor. We call our technique Interleaved Signature Instruction Stream (ISIS) due to this fact. The signature word (see Fig. 3 for a field description) provides enough information for the watchdog processor to check the following block properties:
1. Block Length. The watchdog processor checks a block's signature when the last instruction of the block is retired from the processor pipeline.
Instead of relying on the branch instruction at the end of the block to perform the signature checking, the watchdog counts the instructions as they are retired. In this way, the watchdog can anticipate when the last instruction comes and detect a CFE if a branch occurs too early or too late.
2. Block Signature. The block instructions are compacted using a 16-bit LFSR, which is used by the watchdog to verify that the correct instructions have been retired from the processor pipeline.
3. Block Target Address. In the case of a non-multiple-fan-out block with a target address that can be determined at compile time, a 3-bit signature is computed from the address difference between the branch and the out-of-sequence target instruction. These parity bits are used at run time to provide some confidence that the instruction reached after the branch is the correct one.
4. Block Origin Address. When the branch of a multiple fan-out block is executed, the watchdog cannot check all possible destinations even if they are obtainable at compile time. In our scheme, every possible destination block is provided with a 3-bit signature of the address difference between the originating branch and the start of the block, much the same as the previous Block Target Address check. Thus, instead of checking that the target instruction is the correct one, the watchdog processor checks (at the target block) that the originating branch is the correct one.
The signature instruction encoding has been designed in such a way that a main processor instruction cannot be misinterpreted as a watchdog signature instruction. This provides an additional check when a branch instruction is executed by the main processor: a signature instruction must be found immediately preceding the first instruction of every block. This also helps to detect a CFE if a branch erroneously reaches a signature instruction, because the encoding used will force an illegal instruction exception to be raised. Furthermore, the block type helps the watchdog processor to check whether the execution flow is correct. For example, in the case of a multiple fan-out block, the block type reflects the need to check the address signature at the target block. Even if an incorrect branch is taken to the initial instruction of a block, the target's signature instruction must have coded into its type that it is a block where the origin address must be checked, or a CFE exception will be raised.
Instructions in the MIPS processor must be placed at word boundaries; a memory alignment exception is raised if this requirement is not met.
Block signature instruction (32 bits): Type (6 bits) | Block Target Add (3 bits) | Block Origin Add (3 bits) | Length (4 bits) | Opcode Signature (16 bits)
Fig. 3. Block signature encoding
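For illustration, the field layout of Fig. 3 can be written as a small C packing routine; only the field widths come from the figure, while the bit ordering (most-significant field first) and the function name are assumptions of this sketch.

/* Illustrative packing of the 32-bit block signature word of Fig. 3.
   Field widths (6+3+3+4+16 = 32) are from the figure; ordering is assumed. */
unsigned int pack_signature_word(unsigned int type,        /* 6 bits  */
                                 unsigned int target_sig,  /* 3 bits  */
                                 unsigned int origin_sig,  /* 3 bits  */
                                 unsigned int length,      /* 4 bits  */
                                 unsigned int opcode_sig)  /* 16 bits */
{
    return ((type       & 0x3Fu)   << 26) |
           ((target_sig & 0x07u)   << 23) |
           ((origin_sig & 0x07u)   << 20) |
           ((length     & 0x0Fu)   << 16) |
            (opcode_sig & 0xFFFFu);
}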
Fig. 4. Example of an address checking uncovered case
Taking advantage of this mechanism, the watchdog processor computes address differences as 30-bit values. Given that the branch instruction type used most of the time by the compiler uses a 16-bit offset to reach the target instruction, the differences obtained at run time for the Block Target Address and Block Origin Address checks are usually half empty, so every parity bit protects 5 (10 in the worst case) of such bits. To our knowledge, the Block Origin Address check has never been proposed in the literature. The solutions offered so far to manage jumps with multiple targets use justifying signatures (see [7] for an example) to patch the run-time signature and delay the check process until a common branch-in point is encountered, increasing the error detection latency.
Not all jumps can be covered with address checking, however: neither jumps with run-time-computed addresses nor jumps to a multiple fan-in block that is shared by several multiple fan-out blocks (see Fig. 4 for an example). In the latter case, an address signature per origin would be needed in the fan-in block in order to maintain the address checking process, which is not possible. Currently, only Block Origin Address checks from non-multiple-fan-out blocks can be covered for such shared blocks.
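Purely as an illustration of how such a 3-bit address signature could be formed (the partition of the 30 difference bits into three parity groups and all names are assumptions of this sketch; the paper only states that each parity bit covers 5 to 10 bits):

/* Illustrative sketch: a 3-bit parity signature over a 30-bit, word-aligned
   address difference. Splitting into three 10-bit groups is an assumption. */
unsigned int address_signature(unsigned int branch_addr, unsigned int target_addr)
{
    unsigned int diff = ((target_addr - branch_addr) >> 2) & 0x3FFFFFFFu; /* 30 bits */
    unsigned int sig = 0;
    for (int group = 0; group < 3; group++) {
        unsigned int bits = (diff >> (10 * group)) & 0x3FFu;   /* 10 bits per group */
        unsigned int parity = 0;
        while (bits) {                                          /* parity of the group */
            parity ^= (bits & 1u);
            bits >>= 1;
        }
        sig |= parity << group;
    }
    return sig;                        /* 3-bit value, compared at run time */
}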
5.1 Processor Architecture Modifications
Isolating the reference signatures from the instructions fed into the processor pipeline results in a minimal performance overhead for the application program. Slight architectural modifications are needed in the main processor in order to achieve this. First of all, when a conditional branch instruction ends a basic block, a second block follows immediately. The second block's signature sits between them, and the main processor must skip it. In order to effectively jump over the signature, the signature size is added to the Program Counter if the branch is not taken. In the same way, when a procedure call instruction ends a basic block, the block to be executed after the procedure returns immediately follows the first one.
Fig. 5. An if-then-else example (a). After block signatures and jump insertion (b)
Again, the second block's signature must be taken into account when calculating the procedure return address, and again this is achieved by an automatic addition of the signature size to the PC. The additions to the PC mentioned above can be generated automatically at run time because the control unit decodes a branch or procedure call instruction at the end of the block; such an instruction is a clear indication that the block end will arrive soon. As the processor has a pipelined architecture, the next instruction is executed in all cases (this is known as the branch delay slot), so the control unit has a clock cycle to prepare for the addition. Despite the fact that the instruction in the delay slot is placed after the branch, it logically belongs to the same block, as it is executed even if the branch is taken.
However, in the case of a block fall-through the control unit has no clue to determine when the first block ends, so the signature cannot be automatically jumped over. In this case, the compiler explicitly adds an unconditional jump to skip it. This is the only case where a processor instruction must be added in order to isolate the main processor from the signature stream. Figure 5a shows an example of an if-then-else construct with a fall-through block that needs such an addition (shown in Fig. 5b).
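A minimal sketch of the next-PC rules implied by these modifications, as we read them (the 4-byte instruction and signature sizes and the MIPS-style return-address convention are assumptions of this illustration, not the authors' RTL):

/* Illustrative next-PC logic with interleaved signatures. */
#define INSN_BYTES 4
#define SIG_BYTES  4    /* one signature word per block (assumed) */

unsigned int next_pc_after_delay_slot(unsigned int delay_slot_pc,
                                      int branch_taken, unsigned int branch_target)
{
    if (branch_taken)
        return branch_target;                       /* jump to the target block        */
    /* not taken: fall into the next block, skipping its signature word */
    return delay_slot_pc + INSN_BYTES + SIG_BYTES;
}

unsigned int return_address(unsigned int call_pc)
{
    /* skip the delay slot and the signature of the block following the call */
    return call_pc + 2 * INSN_BYTES + SIG_BYTES;
}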
6 Overhead Analysis
Although we do not yet have enough experimental data to assess the memory and performance overhead of our system, a qualitative analysis of the memory overhead based on related work is possible. A purely software approach to concurrent error detection was evaluated by Wildner in [18]. This control flow EDM is called Compiler-Assisted Self-Checking of Structural Integrity (CASC) and is based on address hashing and signature justifying to protect the return address of every procedure. At the procedure entry, the return address of the procedure is extracted from the link register into a general-purpose register to be operated on. The first operation is the inversion of the LSB of the return address, to provoke a misalignment exception in the
case of a CFE. An add instruction is inserted at each basic block to justify the procedure signature and, at the exit point, the final justification and re-inversion of the LSB are performed and the result is transferred to the link register before returning from the procedure. In the case of a CFE, the return address obtained is likely to cause a misalignment exception, thus catching the error. The experiments carried out on a RISC SPARC processor resulted in a memory code-size overhead for the SPECint92 benchmarks varying from 0% to 28% (18.76% on average), depending on the run-time library used and the benchmark itself.
The hardware watchdog of Ohlsson et al. presented in [5] uses a tst instruction per basic block, taking advantage of the branch delay slot of a pipelined RISC processor called TRIP. One of the detection mechanisms used by the watchdog is an instruction counter to issue a time-out exception if a branch instruction is not executed during the specified interval. When a procedure is called, two instructions are inserted to save the block instruction counter, and another instruction is inserted at the procedure end to restore it. Their watchdog code size overhead is evaluated to be between 13% and 25%; the latter value comes from the heap sort algorithm, which shows a mean basic block of 4.8 instructions.
ISIS inserts a single word per basic block, without special treatment for procedure entry and exit blocks, so the CASC or TRIP overhead can be taken as an upper bound of the ISIS memory overhead. Hennessy and Patterson state in [19] that the average length of a basic block for a RISC processor sits between 7 and 8 instructions. The reasoning that evaluates memory overhead as 1/L, where L is the basic block length, is used by Ohlsson and Rimén in [20] to evaluate the memory overhead of their Implicit Signature Checking (ISC) method. The same value (7-8 instructions per block) is used by Shirvani and McCluskey in [21] to perform the same analysis on several software signature checking techniques. Applying this evaluation method to ISIS results in a mean of about 12%-15% memory overhead. An additional word must be accounted for to eliminate fall-through blocks. The overhead of these insertions has yet to be methodically studied, but initial experiments show a negligible impact on overall memory overhead.
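As a quick sanity check (our own arithmetic, not a figure taken from the cited works), the 1/L estimate with one signature word per block gives:

overhead ≈ 1/L, i.e. between 1/8 = 12.5% (for L = 8) and 1/7 ≈ 14.3% (for L = 7),

which is where the 12%-15% range quoted above comes from.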
7 Conclusion
We have presented a novel technique to embed signatures into the execution flow of a RISC processor; it provides a set of error checking procedures to assess that the flow of executed instructions is correct. These checking procedures include a block length count, a signature of the instruction opcodes computed with an LFSR, and address checking when a branch is executed. All these checks are performed on a per-block basis in order to reduce the error detection latency of our hardware Error Detection Mechanism. One of those address checking procedures has not been published before: the Block Origin Address check, used when a branch has multiple valid targets, consists of delaying the branch checking until the target instruction is reached and verifying that the branch comes from the correct origin vertex
in the CFG. This technique solves the address checking problem that arises if a branch has multiple valid destinations, for example the table-based jumps used when the OS dispatches a service request. Not all software cases can be covered with address checking, however: when a CFG vertex is targeted from two or more multiple fan-out vertices, the Block Origin Address check becomes ineffective.
We call our signature embedding technique Interleaved Signature Instruction Stream (ISIS) to reflect the important fact that the signature instructions processed by the watchdog processor and the main processor instructions are two completely independent streams. ISIS has been implemented in a RISC processor and the modifications demanded by signature embedding to the original architecture have been discussed. These modifications are very simple and can be enabled and disabled by software with superuser privileges to maintain binary compatibility with existing software. No specific features of the processor have been used, so porting ISIS to a different processor architecture is quite straightforward. Memory overhead has been studied by comparison with other methods, and the analysis shows a memory overhead between 12% and 15%, although we have not performed a methodical study yet. As a negligible number of instructions is added to the original program, the performance is expected to remain basically unaltered.
Acknowledgements This work is supported by the Spanish Government Comisión Interministerial de Ciencia y Tecnología under project CICYT TAP99-0443-C05-02.
References
[1] Avizienis, A.: Building Dependable Systems: How to Keep Up with Complexity. Proc. of the 25th Fault Tolerant Computing Symposium (FTCS-25), 4-14, Pasadena, California, 1995.
[2] Gunneflo, U., Karlsson, J., Torin, J.: Evaluation of Error Detection Schemes Using Fault Injection by Heavy-ion Radiation. Proc. of the 19th Fault Tolerant Computing Symposium (FTCS-19), 340-347, Chicago, Illinois, 1989.
[3] Czeck, E. W., Siewiorek, D. P.: Effects of Transient Gate-Level Faults on Program Behavior. Proc. of the 20th Fault Tolerant Computing Symposium (FTCS-20), 236-243, Newcastle upon Tyne, U. K., 1990.
[4] Gaisler, J.: Evaluation of a 32-bit Microprocessor with Built-in Concurrent Error Detection. Proc. of the 27th Fault Tolerant Computing Symposium (FTCS-27), 42-46, Seattle, Washington, 1997.
[5] Ohlsson, J., Rimén, M., Gunneflo, U.: A Study of the Effects of Transient Fault Injection into a 32-bit RISC with Built-in Watchdog. Proc. of the 22nd Fault Tolerant Computing Symposium (FTCS-22), 316-325, Boston, Massachusetts, 1992.
[6] Siewiorek, D. P.: Niche Successes to Ubiquitous Invisibility: Fault-Tolerant Computing Past, Present, and Future. Proc. of the 25th Fault Tolerant Computing Symposium (FTCS-25), 26-33, Pasadena, California, 1995.
[7] Oh, N., Shirvani, P. P., McCluskey, E. J.: Control Flow Checking by Software Signatures. IEEE Transactions on Reliability - Special Section on Fault Tolerant VLSI Systems, March 2001.
[8] Galla, T. M., Sprachmann, M., Steininger, A., Temple, C.: Control Flow Monitoring for a Time-Triggered Communication Controller. Proceedings of the 10th European Workshop on Dependable Computing (EWDC-10), 43-48, Vienna, Austria, 1999.
[9] Gaisler, J.: Concurrent Error-Detection and Modular Fault-Tolerance in a 32-bit Processing Core for Embedded Space Flight Applications. Proc. of the 24th Fault Tolerant Computing Symposium (FTCS-24), 128-130, Austin, Texas, 1994.
[10] Kim, S., Somani, A. K.: On-Line Integrity Monitoring of Microprocessor Control Logic. Proc. Intl. Conference on Computer Design: VLSI in Computers and Processors (ICCD-01), 314-319, Austin, Texas, 2001.
[11] Nickel, J. B., Somani, A. K.: REESE: A Method of Soft Error Detection in Microprocessors. Proc. of the 2001 Intl. Conference on Dependable Systems and Networks (DSN-2001), 401-410, Goteborg, Sweden, 2001.
[12] Rotenberg, E.: AR-SMT: A Microarchitectural Approach to Fault Tolerance in Microprocessors. Proc. of the 29th Fault Tolerant Computing Symposium (FTCS-29), 84-91, Madison, Wisconsin, 1999.
[13] Weaver, C., Austin, T.: A Fault Tolerant Approach to Microprocessor Design. Proc. of the 2001 Intl. Conference on Dependable Systems and Networks (DSN-2001), 411-420, Goteborg, Sweden, 2001.
[14] Mendelson, A., Suri, N.: Designing High-Performance & Reliable Superscalar Architectures: The Out of Order Reliable Superscalar (O3RS) Approach. Proc. of the 2000 Intl. Conference on Dependable Systems and Networks (DSN-2000), 473-481, New York, USA, 2000.
[15] Rashid, F., Saluja, K. K., Ramanathan, P.: Fault Tolerance Through Re-execution in Multiscalar Architecture. Proc. of the 2000 Intl. Conference on Dependable Systems and Networks (DSN-2000), 482-491, New York, USA, 2000.
[16] IEEE Std. 1076-1993: VHDL Language Reference Manual. The Institute of Electrical and Electronics Engineers Inc., New York, 1995.
[17] MIPS32 Architecture for Programmers, Volume I: Introduction to the MIPS32 Architecture. MIPS Technologies, 2001.
[18] Wildner, U.: Experimental Evaluation of Assigned Signature Checking With Return Address Hashing on Different Platforms. Proc. of the 6th Intl. Working Conference on Dependable Computing for Critical Applications, 1-16, Germany, 1997.
[19] Hennessy, J. L., Patterson, D. A.: Computer Architecture: A Quantitative Approach, 2nd edition, Morgan Kaufmann Publishers, Inc., 1996.
[20] Ohlsson, J., Rimén, M.: Implicit Signature Checking. Proc. of the 25th Fault Tolerant Computing Symposium (FTCS-25), 218-227, Pasadena, California, 1995.
[21] Shirvani, P. P., McCluskey, E. J.: Fault-Tolerant Systems in a Space Environment: The CRC ARGOS Project. Center for Reliable Computing, Technical Report CRC-98-2, Stanford, California, 1998.
Model-Checking Based on Fluid Petri Nets for the Temperature Control System of the ICARO Co-generative Plant
Marco Gribaudo1, A. Horváth2, A. Bobbio3, Enrico Tronci4, Ester Ciancamerla5, and Michele Minichino5
1 Dip. di Informatica, Università di Torino, 10149 Torino, Italy, [email protected]
2 Dept. of Telecommunications, Univ. of Technology and Economics, Budapest, Hungary, [email protected]
3 Dip. di Informatica, Università del Piemonte Orientale, 15100 Alessandria, Italy, [email protected]
4 Dip. di Informatica, Università di Roma "La Sapienza", 00198 Roma, Italy, [email protected]
5 ENEA, CR Casaccia, 00060 Roma, Italy, {ciancamerla,minichino}@casaccia.enea.it
Abstract. The modeling and analysis of hybrid systems is a recent and challenging research area which is currently dominated by two main lines: a functional analysis based on the description of the system in terms of discrete state (hybrid) automata (whose goal is to ascertain conformity and reachability properties), and a stochastic analysis (whose aim is to provide performance and dependability measures). This paper investigates a unifying view between formal methods and stochastic methods by proposing an analysis methodology for hybrid systems based on Fluid Petri Nets (FPN). It is shown that the same FPN model can be fed to a functional analyser for model checking as well as to a stochastic analyser for performance evaluation. We illustrate our approach and show its usefulness by applying it to a "real world" hybrid system: the temperature control system of a co-generative plant.
1 Introduction
This paper investigates an approach to model checking, starting from a fluid Petri net (FPN) model, for formally verifying the functional and safety properties of hybrid systems. The paper shows that FPNs [1, 11, 9] constitute a suitable formalism for modeling hybrid systems like the system under study, where a discrete state controller operates according to the variation of suitable continuous quantities (temperature, heat consumption). The parameters of the models are usually affected by uncertainty. A common and simple way to account for parameter uncertainty is to assign to each parameter a range of variation (between a minimum and a maximum value), without any specification of the actual value
assumed by the parameter in a specific realization (non-determinism). Hybrid automata [3] and discretized model checking tools [6] operate along this line. If a weight can be assigned to the parameter uncertainty through a probability distribution, we resolve the non-determinism by defining a stochastic model: the FPN formalism [11, 8] has been proposed to include stochastic specifications. However, the paper intends to show that an FPN model of a hybrid system can be used as an input model both for functional analysis and for stochastic analysis. In particular, the paper shows that the FPN model can be translated into a hybrid automaton [2, 15] or a discrete model checker [5].
FPNs are an extension of Petri nets able to model systems with the coexistence of discrete and continuous variables [1, 11, 9]. The main characteristic of FPNs is that the primitives (places, transitions and arcs) are partitioned into two groups: discrete primitives that handle discrete tokens (as in standard Petri nets) and continuous (or fluid) primitives that handle continuous quantities (referred to as fluid). Hence, in a single formalism, both discrete and continuous variables can be accommodated and their mutual interaction represented. Even if Petri nets and model checking rely on very different conceptual and methodological bases (one coming from the world of performance analysis and the other from the world of formal methods), the paper nevertheless attempts to gain cross-fertilization from the two areas. The main goal of the research work presented in this paper is to investigate the possibility of defining a methodology which allows a common FPN model to be used both for formal specification and verification with model checking tools and for performance analysis.
We describe our approach and show its usefulness by means of a meaningful "real world" application. Namely, we assume as a case study the control system for the temperature of the primary and secondary circuits of the heat exchange section of the ICARO co-generative plant [4], in operation at the ENEA CR Casaccia centre. The plant under study is composed of two sections: the gas turbine section for producing electrical power and the heat exchange section for extracting heat from the turbine exhaust gases.
The paper is organized as follows. Section 2 describes our case study. Section 3 introduces the main elements of the FPN formalism, provides the FPN model of the case study, and its conversion into a hybrid automaton. Section 4 shows how the same FPN model can be translated into a discrete model checker (NuSMV [14]) and provides some of our experimental results. Section 5 gives the conclusions.
2 Temperature Control System
The ICARO co-generative plant is composed of two sections: the electrical power generation and the heat extraction from the turbine exhaust gases. The exhaust gases can be conveyed to a re-heating chamber to heat the water of a primary circuit and then, through a heat exchanger, to heat the water of a secondary circuit, which is actually the heating circuit of the ENEA Research Center. If the
thermal energy required by the end user is higher than the thermal energy of the exhaust gases, fresh methane gas can be fired in the re-heating chamber where the combustion occurs. The flow of the fresh methane gas is regulated by the control system through the position of a valve. The block diagram of the temperature control of the primary and secondary circuits is depicted in Figure 1. The control of the thermal energy used to heat the primary circuit is performed by regulating both the flow rate of the exhaust gases, through the diverter D, and the flow rate of the fresh methane gas, through the valve V. T1 is the temperature of the primary circuit, T2 is the temperature of the secondary circuit, and u is the thermal request by the end user.
The controller has two distinct regimes (two discrete states), represented by position 1 or 2 of the switch W in Figure 1. Position 1 is the normal operational condition, position 2 is the safety condition. In position 1, the control is based on a proportional-integrative measure (performed by block PI1) of the error of temperature T2 with respect to a (constant) set point temperature Ts. Conversely, in position 2, the control is based on a proportional-integrative measure (performed by block PI2) of the error of temperature T1 with respect to a (constant) set point temperature Ts. Normally, the switch W is in position 1 and the control is performed on T2 to keep the temperature delivered to the end user constant. Switching from position 1 to position 2 occurs for safety reasons, when the value of T2 is higher than a critical value defined as the set point Ts augmented by a hysteresis value Th; the control is then locked to the temperature of the primary circuit T1 until T1 becomes lower than the set point Ts. The output of the proportional-integrative block (either PI1 or PI2, depending on the position of the switch W) is the variable y, which represents the request for thermal energy.
Fig. 1. Temperature control of the primary and secondary circuits of the ICARO plant
When y is lower than a split point value Ys, the control acts only on the diverter D (flow of the exhaust gases); when the diverter is completely open and the request for thermal energy y is greater than Ys, the control also acts on the flow rate of the fresh methane gas by opening the valve V. The heating request is computed by the function f(y) represented in Figure 2. Since the temperature T2 is checked when W is in position 1, and the temperature T1 is checked in state 2, the function f(y) depends on y2 when W = 1 and on y1 when W = 2. The function f(y) is defined as the sum of two non-deterministic components, namely: g1(y), which represents the state of the valve V, and g2(y), which represents the state of the diverter D. The non-determinism is introduced by the parameters αmin, αmax, which give the minimal and maximal heat induced by the fresh methane gas, and βmin, βmax, which define the minimal and maximal heat induced by the exhaust gases. Finally, the heat exchange between the primary and the secondary circuit is approximated by the linear function γ(T1 − T2), proportional (through a constant γ) to the temperature difference.
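To make the two control regimes and the heating request more concrete, here is a small C sketch written from the description above; the piecewise-linear shapes of g1 and g2 (ramps around the split point Ys, as suggested by Figure 2), the numeric constants and all names are assumptions of this illustration, not part of the plant model.

/* Illustrative sketch of the mode switching and the heating request f(y).
   Linear ramp shapes and parameter values are assumptions; only the
   hysteresis rule and the split point Ys come from the text. */
#include <stdlib.h>

#define TS 80.0   /* set point (assumed value)   */
#define TH  5.0   /* hysteresis (assumed value)  */
#define YS  0.5   /* split point (assumed value) */

static double rand_in(double lo, double hi)     /* non-determinism as a random draw */
{
    return lo + (hi - lo) * ((double)rand() / RAND_MAX);
}

int update_switch(int w, double t1, double t2)  /* W = 1 normal, W = 2 safety */
{
    if (w == 1 && t2 > TS + TH) return 2;       /* T2 exceeded Ts + Th */
    if (w == 2 && t1 < TS)      return 1;       /* T1 fell below Ts    */
    return w;
}

double heating_request(double y,
                       double a_min, double a_max,   /* methane gas bounds  */
                       double b_min, double b_max)   /* exhaust gas bounds  */
{
    double beta  = rand_in(b_min, b_max);
    double alpha = rand_in(a_min, a_max);
    double g2 = (y <= YS) ? beta * (y / YS) : beta;                 /* diverter D */
    double g1 = (y <= YS) ? 0.0 : alpha * ((y - YS) / (1.0 - YS));  /* valve V    */
    return g1 + g2;
}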
3 Fluid Petri Nets
Fluid Petri Nets (FPN) are an extension of standard Petri Nets [13], where, beyond the normal places that contain a discrete number of tokens, new places are added that contain a continuous quantity (fluid). Hence, this extension is suitable to be considered for modeling and analyzing hybrid systems. Two main formalisms have been developed in the area of FPN: the Continuous or Hybrid Petri net (HPN) formalism [1], and the Fluid Stochastic Petri net (FSPN) formalism [11, 9]. A complete presentation of FPN is beyond the scope of the present paper and an extensive discussion of FPN in performance analysis can be found in [8]. Discrete places are drawn according to the standard notation and contain a discrete amount of tokens that are moved along discrete arcs. Fluid places are drawn by two concentric circles and contain a real variable (the fluid level). The fluid flows along fluid arcs (drawn by a double line to suggest a pipe) according to an instantaneous flow rate. The discrete part of the FPN regulates the flow of the fluid through the continuous part, and the enabling conditions of a transition depend only on the discrete part.
Fig. 2. The heating request function f(y) = g1(y) + g2(y) (g1 for the valve V, g2 for the diverter D, each between a minimum and a maximum variant, with split point Ys)
3.1 A FPN Description of the System
The FPN modeling the case study of Figure 1 is represented in Figure 3. The FPN contains two discrete places: P1, which is marked when the switch W is in state 1, and P2, which is marked when the switch W is in state 2. Fluid place Primary (whose marking is denoted by T1 and has a lower bound at Ts) represents the temperature of the primary circuit, and fluid place Secondary (whose marking is denoted by T2 and has an upper bound at Ts + Th) represents the temperature of the secondary circuit. The fluid arcs labeled with γ(T1 − T2) represent the heat exchange between the primary and the secondary circuit. The system jumps from state 1 to state 2 due to the firing of the immediate transition Sw12. This transition has an associated guard T2 > Ts + Th that makes the transition fire (inducing a change of state) as soon as the temperature T2 exceeds the set point Ts augmented by a hysteresis value Th. The change from state 2 to state 1 is modeled by the immediate transition Sw21, whose firing is controlled by the guard T1 < Ts that makes the transition fire when the temperature T1 goes below the set point Ts. In order to simplify the figure, we have connected the fluid arcs directly to the immediate transitions. The meaning of this unusual feature is that fluid flows across the arcs as long as the immediate transitions are enabled, regardless of the value of the guards. The fluid arc in output from place Secondary represents the end user demand. The label on this arc is [u1, u2], indicating the possible range of variation of the user demand. Fluid place CTR1, whose marking is denoted by y1, models the output of the proportional-integrator PI1. This is achieved by connecting to place CTR1 an input fluid arc, characterized by a variable flow rate equal to T2, and an output fluid arc with a constant fluid rate equal to the set point Ts. In a similar way, the output of the proportional-integrator PI2 is modeled by fluid place CTR2 (whose marking is denoted by y2). The fluid arcs that connect transitions Sw12 and Sw21 to fluid place Primary represent the heating up of the primary circuit.
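A rough discrete-time reading of these fluid rates, written by us to make the dynamics explicit (the Euler step, the choice of which integrator drives the heating, and the reuse of TS, rand_in(), update_switch() and heating_request() from the previous sketch are assumptions of this illustration):

/* Illustrative Euler step over the fluid levels described in the text:
   heating f(y) into Primary, exchange gamma*(T1-T2) between the circuits,
   user demand in [u1,u2] out of Secondary, integrator places dy/dt = T - Ts. */
struct plant {
    double t1, t2;     /* markings of Primary and Secondary              */
    double y1, y2;     /* markings of CTR1 and CTR2                      */
    int    w;          /* discrete state: P1 marked (1) or P2 marked (2) */
};

void euler_step(struct plant *p, double dt, double gamma,
                double u1, double u2,
                double a_min, double a_max, double b_min, double b_max)
{
    double u        = rand_in(u1, u2);               /* end user demand       */
    double exchange = gamma * (p->t1 - p->t2);       /* primary -> secondary  */
    /* as stated in the text: f depends on y2 in state 1 and on y1 in state 2 */
    double y        = (p->w == 1) ? p->y2 : p->y1;
    double heat     = heating_request(y, a_min, a_max, b_min, b_max);

    p->t1 += dt * (heat - exchange);
    p->t2 += dt * (exchange - u);
    p->y1 += dt * (p->t2 - TS);                      /* PI1 integrates the T2 error */
    p->y2 += dt * (p->t1 - TS);                      /* PI2 integrates the T1 error */
    p->w   = update_switch(p->w, p->t1, p->t2);
}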
Fig. 3. (FPN model of the case study: discrete places P1 and P2; fluid places Primary, Secondary, CTR1 and CTR2; immediate transitions Sw12, with guard T2 > Ts + Th, and Sw21, with guard T1 < Ts; fluid rates f(y1), f(y2), γ(T1 − T2) and [u1, u2])
Table 2. Translation of Bit-Fields
C code                                      SPL code
struct [struct_tag] {
  unsigned int ident_list1 : 1;             ident_list1 : bool
  unsigned int ident_list2 : constant;      ident_list2 : [0..2^constant - 1]
  ...
} ident_list;                               local ident_list : [struct tag | str <struct Counter>]
struct struct_tag ident_list;               local ident_list : struct_tag
Translation of Data-Types. SPL has the data-types int, rat (rational) and bool. The mapping of the basic data-types from C to SPL is shown in Table 1. For C data types which are unsigned, an axiom stating that the value of the variable is always positive is generated by c2spl during translation. This is shown in the last column of Table 1, where ✷ is a temporal operator read as 'always'. A variable in SPL can have mode in, out or local. Variables of mode in cannot be assigned any value, out variables cannot be read, and local variables can be both read and written. Variables qualified with the const keyword in C are declared to have mode in in the SPL code. A dummy variable RET is introduced by c2spl to model the return statement of C. This variable is declared as an out variable. All other variables are declared as local variables. Arrays, structures, unions, enumerations and typedefs are also translated by c2spl into semantically equivalent constructs. Table 2 shows the translation of bit-fields. An extra variable str <struct Counter> is introduced if the structure tag is missing in the C declaration. c2spl generates additional axioms to handle unions of bit-fields. The translation of pointers in the absence of aliasing is also handled.
4.2 Formal Annotations and Their Translation
ACE provides syntax for inserting seven types of formal annotations. Annotations are enclosed in /* and */ and hence are treated as comments by C compilers. The annotations specifying the functional requirements are written in the syntax shown in the first column of Table 3; their translations, as performed by c2spl, are shown alongside in the table.
Table 3. Translation of Formal Annotations
Annotation                 SPL code       Specification
/*pre formula1 end*/       pre1:skip      AXIOM a1 : ✷(pre1 → formula1)
/*post formula2 end*/      post2:skip     PROPERTY p2 : ✷(post2 → formula2)
/*assert formula3 end*/    assert3:skip   PROPERTY p3 : ✷(assert3 → formula3)
syntax shown in column 1 of table 3. Their translations, as performed by c2spl, are also shown in the table. Here formula1 denotes a pre-condition, formula2 a post-condition and formula3 an intermediate assertion. The symbols pre1, post2 and assert3 are labels denoting control locations, generated by c2spl, to establish the correspondence between formulae and the locations at which they hold. The formulae are written in linear temporal logic in the syntax of STeP. The effect of calls to library functions can be expressed using prefunc and postfunc annotations. These annotations are inserted before and after the function-call as shown in table 4. The translation done by c2spl is shown along side. ACE provides var type of annotation to support declaration of auxillary variables. It also provides always type of annotation to specify a global invariant which is known to be true. The always annotation is translated to an axiom by c2spl and is mainly used to handle union of bit-fields.
5 Example
The C functions given below read data (InputX and InputY) from 12-bit registers and convert them to values (final_data) suitable for display. The specification file generated for the function get_inputsXY follows the C code. Due to lack of space the SPL code generated by c2spl is not given. The function read_from_reg is a library function whose specification is inserted in the body of function get_inputsXY as a postfunc type assertion, while for the functions change_to_v and convert_to_d the proofs are carried out separately and their specifications are automatically inserted in the SPEC file as shown below.
C Code
#include
#define RCD3_X 1
#define RCD3_Y 2
#define RCD2 2 #define RCD3 3 typedef unsigned short WORD; typedef unsigned char BYTE; struct RCD3_data { double X, Y; }; extern int read_from_reg(int regnum, WORD *D); /****** Function prototype declarations *******/ void get_inputsXY(struct RCD3_data *final_data) { WORD InputX, InputY; BYTE input_src = RCD3; double tempX, tempY; int ret1, ret2; ret1 = read_from_reg( 1, &InputX ); /*postfunc ( InputX >= 0 /\ InputX = 0 /\ InputY 5) end*/ change_to_v(InputY, input_src, &tempY); final_data->X= tempX; final_data->Y= tempY; convert_to_d(1, tempX, final_data); /*assert (#X final_data >= -180) /\ (#X final_data X > 80) || (final_data->X < -80)){ final_data->X = 0; } /*post (#Y final_data = ( (5.0/4096.0) * InputY )) /\ ( !(#Y final_data < 0 \/ #Y final_data > 5)) /\ (!((#X final_data > 80) \/ (#X final_data < -80))) end*/ } /******************************************************/ void change_to_v(WORD D_input, BYTE input_src, double *ptr) { /*pre (input_src = RCD2) \/ (input_src = RCD3) end*/ switch( input_src ){ case RCD2 : *ptr = ( (5/2048) * D_input - 5.0 ); break; case RCD3 : *ptr = ( (5/4096) * D_input ); break; default : break; } /*post ((input_src = RCD3) /\ (ptr = ((5/4096) * D_input))) \/ ((input_src=RCD2) /\ (ptr = ((5/2048) * D_input - 5.0))) end*/ } /******************************************************/ void convert_to_d(WORD src, double input, struct RCD3_data *deg) { /*pre (src = RCD3_X) \/ (src = RCD3_Y) end*/
switch (src) { case RCD3_Y : deg->Y = ((180 / 5.0) * input - 90.0); break ; case RCD3_X : deg->X = ((360.0 / 5.0) * input) - 180.0; break; default : break; } /*post (src = RCD3_X /\ #X deg = (360.0 / 5.0) * input - 180.0) \/ (src = RCD3_Y /\ #Y deg = (180 / 5.0) * input - 90.0) end*/ } SPEC File for Function get inputsXY: SPEC AXIOM AXIOM
a1 : postf1 ==> a2 : postf2 ==>
( InputX >= 0 /\ InputX = 0 /\ InputY (input_src = 2) \/ (input_src = 3) AXIOM a4 : postf4 ==> ((input_src = 3) /\ (tempX = ((5.0/4096.0) * InputX))) \/ ((input_src = 2) /\ (tempX = ((5.0/2048.0) * InputX - 5.0))) PROPERTY p5 : assert5 ==> !(tempX < 0
\/ tempX > 5)
PROPERTY p6 : prefunc6 ==> (input_src = 2 ) \/ (input_src = 3) AXIOM a7 : postf7 ==> ((input_src = 3) /\ (tempY = ((5.0/4096.0) * InputY))) \/ ((input_src = 2) /\ (tempY = ((5.0/2048.0) * InputY - 5.0))) PROPERTY p8 : prefunc8 ==> (1 = 1) \/ (1 = 2) AXIOM a9 : postf9 ==> ( (1 = 1) /\ (#X final_data = ((360.0 / 5.0) * tempX) - 180.0)) \/ ((1 = 2) /\ (#Y final_data = (180 / 5.0) * tempX - 90.0)) PROPERTY p10 : assert10 ==> (#X final_data >= -180) /\ (#X final_data (#Y final_data = ((5.0/4096.0) * InputY)) /\ ( !(#Y final_data < 0 \/ #Y final_data > 5) ) /\ (!((#X final_data > 80) \/ (#X final_data < -80))) In the SPEC file given above axioms a1 and a2 correspond to the postconditions of the library function read from reg. Identifiers postf1 and postf2 denote control-locations in the SPL code. Property p3 and axiom a4 are the preand post-conditions of function change to v respectively. Properties p5, p10 and p11 are the proof-obligations supplied by the user. The post-conditions of functions convert to d and change to v were discharged automaticaly using the tactic
repeat(B-INV;else[Check-Valid,skip;WLPC]) and by invoking the corresponding pre-condition. The post-condition and assertions of the function get inputsXY were proved by repeatedly applying B-INV and WLPC rules and invoking the postfuncs at the place of function-calls.
6
Initial Operational Experience
ACE in its current form has been used in the verification of many real programs from safety-critical embedded systems performing control, process interlock and data-acquisition and display functions. The process interlock software analysed by ACE was generated using a tool. The tool took logic diagrams, composed of function blocks and super-blocks as input to generate C code that becomes part of the runtime code. It was required to verify that the generated code implemented the logic specified in the diagram input to the code-generation tool. In this case the post-conditions were obtained from the logic diagrams and the emitted C code was annotated. The post-conditions were then proved, thus validating the translation of diagrams into C code. The software had approximately 6000 lines of code and 54 C functions. Around 500 properties were proved automatically using tactics. In yet another system where code was manually developed, the formal specifications were arrived at from the design documents and with discussions with the system designers. Around 110 properties were derived for the software made up of 4000 lines of code and roughly 40 C functions. For proving properties of some functions human-assistance in the form of selecting the appropriate invariants and axioms was required. The typical properties verified were of following nature: – – – –
7
Range checks on variables Arithmetic computations Properties specifying software controlled actions Intermediate asserts on values of variables
Conclusion and Future Work
Initial experience with ACE has shown that we could verify embedded sytem software, developed to comply with sringent quality standards, with relative ease and within reasonable time. The technique of compositional verification helps in proving higher level properties by splitting the task of verification into small, manageable program units. The properties of small program units, where size and complexity has been controlled, can generally be obtained and proved cheaply. The SPARK Examiner [6] which also supports static assertion checking, is a commercially available tool, for programs written in SPARK-Ada, a subset of Ada. It uses a proprietary Simplifier and Proof-checker, while we have interfaced ACE with STeP, a powerful theorem proving tool. The use of proof tactics and
Assertion Checking Environment for Formal Verification of C Programs
295
other features of STeP such as built-in theories, decision procedures, simplification rules and local invariance generation, makes the verification of most of embedded system programs easy, which is a great advantage. There are tools such as Bandera [8], Java Pathfinder [9] and Automation Extractor(AX) [10] that take source code of the system as input and generate a finite-state model of the system. The system properties are then verified by applying model-checking techniques. The work presented in this paper is targeted towards proving functional correctness of sequential program code and adopts the theorem-proving approach to formal verification. In our future work we propose to use property-guided slicing of C programs prior to translation to SPL. This can further reduce the model size and hence the effort and time required in deducing properties using the theorem prover.
Acknowledgement The authors wish to acknowledge the BRNS, DAE for supporting this work. Thanks are also due to A.K.Bhattacharjee for comments and help with the manuscript.
References
[1] Guidelines for the Use of the C Language in Vehicle Based Software. The Motor Industry Software Reliability Association, 1998.
[2] C. A. R. Hoare: An Axiomatic Basis for Computer Programming. Communications of the ACM, 12:576-580, 1969.
[3] Nikolaj Bjorner et al.: The Stanford Temporal Prover User's Manual. Stanford University, 1998.
[4] E. W. Dijkstra: A Discipline of Programming. Prentice-Hall, 1976.
[5] Ken Wong, Jeff Joyce: Refinement of Safety-Related Hazards into Verifiable Code Assertions. SAFECOMP'98, Heidelberg, Germany, Oct. 5-7, 1998.
[6] John Barnes: High Integrity Ada - The SPARK Approach. Addison Wesley, 1997.
[7] Shawn Flisakowski: Parser and Abstract Syntax Tree Builder for the C Programming Language. ftp site at ftp.cs.wisc.edu:/coral/tmp/spf/ctree 14.tar.gz
[8] J. Corbett, M. Dwyer, et al.: Bandera: Extracting Finite State Models from Java Source Code. Proc. ICSE 2000, Limerick, Ireland.
[9] G. Brat, K. Havelund, S. Park and W. Visser: Java PathFinder - A Second Generation of a Java Model Checker. Workshop on Advances in Verification, July 2000.
[10] G. J. Holzmann: Logic Verification of ANSI-C Code with SPIN. Bell Laboratories, Lucent Technologies.
Safety Analysis of the Height Control System for the Elbtunnel
Frank Ortmeier1, Gerhard Schellhorn1, Andreas Thums1, Wolfgang Reif1, Bernhard Hering2, and Helmut Trappschuh2
1 Lehrstuhl für Softwaretechnik und Programmiersprachen, Universität Augsburg, 86135 Augsburg, Germany, {ortmeier,schellhorn,thums,reif}@informatik.uni-augsburg.de
2 Siemens – I&S ITS IEC OS, 81359 München, Germany, {[email protected],helmut.trappschuh@abgw}.siemens.de
Abstract. Currently a new tunnel tube crossing the river Elbe is being built in Hamburg. Therefore a new height control system is required. A computer examines the signals from light barriers and overhead sensors to detect vehicles which try to drive into a tube with insufficient height. If necessary, it raises an alarm that blocks the road. This paper describes the application of two safety analysis techniques to this embedded system: model checking has been used to prove functional correctness with respect to a formal model; fault tree analysis has validated the model and considered technical defects. Their combination has uncovered a safety flaw, led to a precise requirement specification for the software, and showed various ways to improve system safety.
1 Introduction
This paper presents the safety analysis of the height control for the Elbtunnel. It is a joint project of the University of Augsburg with Siemens, department 'Industrial Solutions and Services' in Munich, which is a sub-contractor in the Elbtunnel project responsible for the traffic engineering. The Elbtunnel is located in Hamburg and goes beneath the river Elbe. Currently this tunnel has three tubes, through which vehicles with a maximum height of 4 meters may drive. A new, fourth tube will go into operation in the year 2003. It is a larger tube and can be used by overhigh vehicles. A height control should prevent these overhigh vehicles from driving into the smaller tubes. It avoids collisions by triggering an emergency stop and locking the tunnel entrance. Because the system consists of software and hardware components, we combine two orthogonal methods, model checking and fault tree analysis, from the domains of software development and engineering, respectively. Model checking is used to prove safety properties like 'no collision'. Fault tree analysis (FTA) examines sensor failure and reliability.
We will briefly describe the layout of the tunnel, the location of the sensors, and its functionality in Sect. 2. The formalization of the system and model checking of safety properties are presented in Sect. 3 and Sect. 4. The fault tree analysis in Sect. 5 completes the safety analysis. Some weaknesses of the system have been discovered which led to the proposals for improvements given in Sect. 6. Finally, Sect. 7 concludes the paper.
2 The Elbtunnel Project
The Elbtunnel project is very complex. Besides building the tunnel tube, it contains traffic engineering aspects like dynamic route control, locking of tunnel tubes, etc. We will consider only a small part of the whole project, the height control. Currently a height control exists for the 'old' three tubes: light barriers scan the lanes for vehicles which are higher than 4 meters and trigger an emergency stop. The existing height control has to be enhanced such that it allows overhigh vehicles to drive through the new, higher tube, but not through the old ones. In the following, we will distinguish between high vehicles (HVs), which may drive through all tubes, and overhigh vehicles (OHVs), which can only drive through the new, fourth tube.
Figure 1 sketches the layout of the tunnel. The fourth tube is driven through from north to south and the east-tube from south to north. We focus our analysis on the northern entrance, because OHVs may only drive from north to south. The driving direction on each of the four lanes of the mid- and west-tube can be switched, depending on the traffic situation. Flexible barriers, signals and road fires guide drivers to the tubes which are open in their direction. The system uses two different types of sensors. Light barriers (LB) scan all lanes of one direction to detect whether an OHV passes. For technical reasons they cannot be installed in such a way that they only supervise one lane.
Fig. 1. Layout of the northern tunnel entrance. Legend: HV = high vehicle; OHV = overhigh vehicle; LB = light barrier, detecting OHVs (LBpre, LBpost); OD = overhead detector, detecting HVs and OHVs without distinguishing them (ODright, ODleft, ODfinal); west-, mid- and east-tube = existing tubes, HVs may drive through but not OHVs; 4. tube = new, higher tube, OHVs may drive through.
Therefore, overhead detectors (OD) are necessary to detect on which lane an HV passes. The ODs can distinguish vehicles (e.g. cars) from high vehicles (e.g. buses, trucks), but not HVs from OHVs (the light barriers, however, can). If the height control detects an OHV heading towards a tube other than the fourth, an emergency stop is signaled, locking the tunnel entrance.
The idea of the height control is that detection starts when an OHV drives through the light barrier LBpre. To prevent unnecessary alarms caused by faulty triggering of LBpre, the detection is switched off after expiration of a timer (30 minutes). Road traffic regulations require that after LBpre both HVs and OHVs drive on the right lane through tunnel 4. If an OHV nevertheless drives on the left lane towards the west-tube, detected through the combination of LBpost and ODleft, an emergency stop is triggered. If the OHV drives on the right lane through LBpost, it is still possible for the driver to switch to the left lanes and drive to the west- or mid-tube. To detect this situation, the height control uses the ODfinal detector. To minimize undesired alarms (remember that normal HVs may also trigger the ODs), a second timer switches off detection at ODfinal after 30 minutes. For safe operation it is necessary that after the location of ODfinal it is impossible to switch lanes. Infrequently, more than one OHV drives on the route; therefore the height control keeps track of several, but at most three, OHVs.
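To summarise the intended behaviour in one place, the following C sketch restates the detection rules as we read them from the description above; the signal names, the single-OHV simplification and the timer granularity are assumptions of this illustration, not part of the actual system design.

/* Illustrative sketch of the height-control rules described in the text.
   Sensor flags are sampled each step; TIMEOUT is the 30-minute timer in
   steps (value assumed). The real system tracks up to three OHVs. */
#define TIMEOUT 1800   /* e.g. 30 minutes at one step per second (assumed) */

struct height_control {
    int pre_active,  pre_timer;    /* OHV expected between LBpre and LBpost   */
    int post_active, post_timer;   /* OHV expected between LBpost and ODfinal */
};

/* returns 1 if an emergency stop must be triggered */
int step(struct height_control *hc,
         int lb_pre, int lb_post, int od_left, int od_final)
{
    int alarm = 0;

    if (lb_pre) {                          /* OHV entered the controlled area  */
        hc->pre_active = 1;
        hc->pre_timer  = TIMEOUT;
    }
    if (lb_post && od_left)                /* OHV on the left lane at LBpost   */
        alarm = 1;
    if (lb_post && hc->pre_active) {       /* OHV passed LBpost on the right   */
        hc->pre_active  = 0;
        hc->post_active = 1;
        hc->post_timer  = TIMEOUT;
    }
    if (od_final && hc->post_active)       /* OHV heading to west-/mid-tube    */
        alarm = 1;

    /* both detection phases are switched off when their timers expire */
    if (hc->pre_active  && --hc->pre_timer  == 0) hc->pre_active  = 0;
    if (hc->post_active && --hc->post_timer == 0) hc->post_active = 0;

    return alarm;
}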
3
Formal Specification
In this section we define a formal specification of the Elbtunnel using timed automata over a finite set of states. An automaton is shown as a directed graph whose nodes are states. A transition from s1 to s2 is visualized as an arrow marked with a condition c and a time interval [t1,t2] (Fig. 2). Such a transition may be taken nondeterministically at any time between t1 and t2, provided the condition c holds from now until that time. The time interval is often just [t,t] (deterministic case), which we abbreviate to t. If t = 1, a transition happens in the next step (provided the condition holds) and we have the behavior of an ordinary, untimed automaton. The always-true condition is indicated as '−'. A sketch of this transition format is given below.

Fig. 2. Transition

The automata defined below are graphical representations of the textual specification that we used in the model checker RAVEN [7], [8], [6]. We also tried the model checker SMV [4], which supports untimed automata only, but is more efficient. Timed automata are translated to untimed ones using intermediate states which say "the system has been in state s1 for n steps", see [6]. The specification consists of two parts. The first specifies the control system that is realized in software and hardware; the second describes the environment, i.e. which possible routes HVs and OHVs can follow. Our aim is to prove (Sect. 4) that the control system reacts correctly to the environment: for example, we will prove that if an OHV tries to enter the mid- or west-tube (behavior of the environment), then the control system goes into a state that signals an emergency stop (reaction of the control system).
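To illustrate the transition notation c/[t1,t2], the following sketch encodes it as a small Python structure; it is our own explanatory encoding, not the RAVEN input language, and the helper names are invented for the example.

```python
# Our own encoding of the transition notation c/[t1,t2]; it is meant to explain
# the notation, not to reproduce the RAVEN input language.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Transition:
    source: str
    target: str
    condition: Callable[[dict], bool]   # condition c over the current inputs
    t_min: int = 1                      # t1: earliest time the edge may be taken
    t_max: int = 1                      # t2: latest time the edge may be taken

def may_fire(tr: Transition, inputs: dict, time_in_state: int) -> bool:
    """The edge may be taken nondeterministically whenever t1 <= time <= t2.
    (In the full semantics c must hold from entering the state until now;
    this sketch only checks the current step.)"""
    return tr.t_min <= time_in_state <= tr.t_max and tr.condition(inputs)

# Example: the deterministic timer edge running --(-)/runtime--> alarm,
# where '-' is the always-true condition.
runtime = 30
to_alarm = Transition("running", "alarm", lambda inputs: True, runtime, runtime)
```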
Both parts of the specification consist of several automata. The control system consists of two automata COpre and COpost, which will be implemented in software. The first counts OHVs between LBpre and LBpost; the second checks whether there are any after LBpost. Each uses a timer (TIpre and TIpost), modeled as an instance of the same generic automaton with different input and output signals. The environment consists of three identical automata OHV1, OHV2, OHV3 for OHVs. This is sufficient, since it is assumed that at most three OHVs pass simultaneously through the tubes. Finally, three automata HVleft, HVright, HVfinal model HVs that trigger the sensors ODleft, ODright, and ODfinal; they are instances of a generic automaton describing HVs. Altogether the system consists of the following automata running in parallel:

SYS = COpre ‖ COpost ‖ TIpre ‖ TIpost ‖ OHV1 ‖ OHV2 ‖ OHV3 ‖ HVleft ‖ HVright ‖ HVfinal

The following sections describe each automaton in detail.

Specification of a Timer. The generic specification of a timer is shown in Fig. 3. Initially the timer is in state 'off', marked with the ingoing arrow. When it receives a 'start' signal, it starts 'running' for time 'runtime'. During this time, another 'start' signal is interpreted as a reset. After time 'runtime' the timer signals 'alarm' and finally turns off again. Two timers with a runtime of 30 minutes are used in the control system. The first (TIpre) is started when an OHV passes through LBpre, i.e. 'start' is instantiated with LBpre. After 30 minutes it signals TIpre.alarm to COpre. Similarly, the second timer is triggered when an OHV passes through LBpost; it signals TIpost.alarm to COpost. A sketch of this timer as a small state machine is given below.

Fig. 3. Timer TI

Specification of Control for LBpre. The control automaton COpre (shown in Fig. 4) counts the number of OHVs between the two light barriers LBpre and LBpost. Starting with a count of 0, every OHV passing LBpre increments the counter and every OHV passing LBpost decrements it. If COpre receives TIpre.alarm, i.e. if for 30 minutes no OHV has passed through LBpre, the counter is reset. The automaton shown in Fig. 4 is not completely given; some edges, which correspond to simultaneous events, are left out for better readability.

Fig. 4. Control COpre for LBpre
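The generic timer can be rendered as a simple state machine; the following Python sketch is our own reading of Fig. 3 (one call to step() corresponds to one time unit, and the 30-minute runtime is a parameter).

```python
# Sketch of the generic timer automaton TI (states off/running/alarm), as we
# read it from Fig. 3.  Our own illustration; one call to step() is one time unit.
class Timer:
    def __init__(self, runtime=30):
        self.runtime = runtime
        self.state = "off"
        self.elapsed = 0

    def step(self, start: bool) -> bool:
        """Advance one time unit.  'start' is the instantiated start signal
        (LBpre for TIpre, LBpost for TIpost).  Returns True when 'alarm' is signaled."""
        if self.state == "off":
            if start:
                self.state, self.elapsed = "running", 0
            return False
        if self.state == "running":
            if start:                       # a new start signal acts as a reset
                self.elapsed = 0
                return False
            self.elapsed += 1
            if self.elapsed >= self.runtime:
                self.state = "alarm"
            return False
        # state == "alarm": signal once, then turn off again
        self.state = "off"
        return True
```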
Specification of Control for LBpost. Figure 5 shows the automaton COpost, which controls OHVs in the area after LBpost. It has fewer states than COpre, since it only needs to know whether there is at least one OHV in the critical section between LBpost and the entries of the tubes, but not how many. If at least one OHV is in the critical section, the automaton is in state 'active', otherwise in 'free'. To signal an emergency stop, the automaton goes to state 'stop'. To avoid false alarms, COpost interprets an interruption of LBpost as a misdetection if COpre has not detected the OHV before (i.e. when COpre = 0). There are two reasons for an emergency stop: either ODfinal signals that an OHV (or a HV) tries to enter the mid- or west-tube while an OHV is in the critical section (i.e. we are in state 'active'), or an OHV tries to drive through LBpost but has not obeyed the signs to drive on the right lane. This rule must be refined with two exceptions. If tube 4 is not available, an emergency stop must be caused even if the OHV is on the right lane. Conversely, no emergency stop must be signalled if only tube 4 is available, even if the OHV does not drive on the right lane. This means that the conditions LBstop and LBok for going to the 'stop' resp. 'active' state must be defined as:

Fig. 5. Control COpost for LBpost
LBok := LBpost and COpre ≠ 0 and (only tube 4 open or (ODright and not ODleft))
LBstop := LBpost and COpre ≠ 0 and (tube 4 closed or not (ODright and not ODleft))
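Read as predicates over the current sensor values and the COpre counter, the two conditions can be transcribed as follows; the function and parameter names are ours and only mirror the signals named in the text.

```python
# Transcription of the LBok / LBstop conditions as boolean functions.
# Parameter names mirror the signals in the text; this is explanatory only.
def lb_ok(lb_post, co_pre_count, only_tube4_open, od_right, od_left):
    """OHV correctly announced and on the right lane (or only tube 4 is open)."""
    return (lb_post and co_pre_count != 0
            and (only_tube4_open or (od_right and not od_left)))

def lb_stop(lb_post, co_pre_count, tube4_closed, od_right, od_left):
    """OHV announced but tube 4 closed, or OHV not on the right lane."""
    return (lb_post and co_pre_count != 0
            and (tube4_closed or not (od_right and not od_left)))
```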
Specification of Overhigh Vehicles. The specification given in Fig. 6 shows the possible behavior of one OHV that drives through the Elbtunnel. It is the core of the environment specification. We can implement the control system such that it behaves exactly as the specification prescribes, but we have no such influence on reality. Whether the real environment behaves according to our model must be validated carefully (e.g. with fault tree analysis, see Sect. 5). The model describes the possible movement of an OHV through the light barriers and the tunnel: initially the OHV is 'absent'. It remains there for an unknown period of time (idling transition of the 'absent' state). At any time it may reach
LBpre. In the next step the OHV will be 'between' the two light barriers and will remain there for up to 30 minutes. Then it will drive through LBpost: either it does so on the right lane, which triggers ODright, or on the left lane, causing signal ODleft. After passing LBpost it reaches the 'critical' section where the lanes split into the lane to tube 4 and the lane to the mid- and west-tube. The OHV may stay in this critical section for up to 30 minutes. Then it will either drive correctly to the entry of 'tube 4' and finally return to state 'absent', or it will pass through ODfinal and reach the entry of the mid- or west-tube (state 'MW'). In this case the control system must be designed such that an emergency stop has been signalled, and we assume that the OHV will then be towed away (return to state 'absent').

Fig. 6. Overhigh vehicle OHV

Our complete specification runs three automata OHV1, OHV2, OHV3 in parallel. The signal LBpre that is sent to COpre is the disjunction of the OHVi being in state 'LBpre'. The signals LBpost, ODleft, etc. are computed analogously.

Specification of High Vehicles. High vehicles which are not overhigh are relevant for the formal specification only because they can trigger the ODs. Therefore it is not necessary to specify which routes they may drive or how many of them there may be. Instead we only define three automata ODleft, ODright and ODfinal, which state that at any time the corresponding 'OD' signal may be triggered. Fig. 7 shows a generic automaton that randomly switches between the two states ('OD' and 'not OD').

Fig. 7. High vehicle HV
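To make the environment model tangible, the following sketch plays out one random run of the OHV automaton of Fig. 6; it is our own illustration, where random choices merely stand in for the nondeterminism that the model checker explores exhaustively, and the dwell times follow the [1,30] intervals.

```python
# Random simulation of the environment automaton for one OHV (Fig. 6).
# Our own sketch; a model checker would consider every possible resolution
# of these choices instead of a single random one.
import random

def ohv_run():
    trace = ["absent"]
    trace.append("LBpre")                                    # OHV reaches LBpre
    trace.append(f"between ({random.randint(1, 30)} min)")   # up to 30 min to LBpost
    trace.append(random.choice(["LBpost and ODright",        # right lane
                                "LBpost and ODleft"]))       # wrong (left) lane
    trace.append(f"critical ({random.randint(1, 30)} min)")  # before the lanes split
    if random.random() < 0.5:
        trace.append("tube 4")                               # correct route
    else:
        trace.extend(["ODfinal", "MW"])                      # towards mid-/west-tube
    trace.append("absent")                                   # leaves or is towed away
    return trace

print(" -> ".join(ohv_run()))
```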
4
Proving Safety Properties
The formal specification of the Elbtunnel allows us to state safety properties and to prove them rigorously as theorems over the specification. Since the model has only finitely many states, we can use model checkers, which are able to prove such theorems automatically. We used two model checkers, SMV and RAVEN. Both can generate counterexamples, i.e. if a theorem we try to prove turns out to be wrong, they return a run of the system which is a counterexample to the theorem. To keep the number of states in the model small, we have also avoided an exact definition of the duration of one step of the automaton in real time. Of course the value of 30 (minutes) that we used for the maximal time an OHV stays between the two LBs is much more than 30 times the time the OHV needs to cross an LB. For the real experiments we even used a value of 5 or 6 to keep the runtimes of the proofs short (with a value of 30, proofs take several hours, while proofs using 5 go through in seconds). The exact value used in the proofs does not influence the results, as long as the maximum times and the runtimes of the timers agree.
Despite these inadequacies the model can serve to analyze all safety properties we found relevant. The most important is of course that whenever an OHV is driving to the entry of the mid- or west-tube, an emergency stop is raised. This is formalized by the following formula in temporal logic:

AG((OHV1 = MW ∨ OHV2 = MW ∨ OHV3 = MW) → COpost = stop)

The formula is read as: for every possible run of the system it is always the case (temporal operator AG) that if one of OHV1, OHV2, OHV3 is in state 'MW', then COpost is in state 'stop'. The first verification results were negative due to several specification errors. There were basically two classes: simple typing errors, and errors due to simultaneous signals. E.g. our initial model included only transitions for one signal in the control automaton shown in Fig. 4 and left out the case that one OHV passes LBpre while another passes LBpost at the same time. Additional transitions are necessary when two signals occur simultaneously (but they are left out in Fig. 4 for better readability). Both types of errors were easily corrected, since each failed proof attempt resulted in a trace that clearly showed what went wrong. After we had corrected the errors, we found that we still could not prove the theorem. It does not hold, as demonstrated by the following run (a replay on a simplified model is sketched below):
1. Initially, all OHVs are 'absent', COpre is in state '0', and COpost signals 'free'.
2. Then two OHVs drive through the light barrier LBpre at the same time.
3. LBpre cannot detect this situation, so COpre counts only one OHV and switches into state '1'.
4. The first of the two OHVs drives through the second light barrier quickly, resetting the state of COpre to '0'. COpost switches into the 'active' state and starts timer TIpost.
5. The other OHV takes some time to reach LBpost. COpost assumes the signal from LBpost to be a misdetection, since COpre is in state '0'. Therefore it does not restart timer TIpost.
6. The second OHV now needs longer than the remaining runtime of TIpost to reach ODfinal. TIpost signals 'alarm' to COpost, which switches to state 'free' again.
7. The OHV triggering ODfinal while COpost is in state 'free' is assumed to be a misdetection and does not cause an emergency stop.
8. The OHV can now enter the mid- or west-tube without having caused an emergency stop.
This run shows that our system is inherently unsafe, since two OHVs that pass simultaneously through LBpre are recognized as one. Whether the flaw is relevant in practice depends on the probability of its occurrence, which is currently being analyzed. To get a provable safety property, we now have two possibilities: either we modify the model (possible modifications for which the safety property can be proved are discussed in Sect. 6), or we weaken the safety property to exclude this critical scenario. Then we can prove that the critical scenario described above is the only reason that may cause safety to be violated:

Theorem 1 (safety). System SYS has the following property: If two OHVs never pass simultaneously through LBpre, then any OHV trying to enter the middle or western tube will cause an emergency stop.
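The critical run can be replayed on a much simplified model of the control logic; the sketch below is our own reduction (a bare counter and one timer) and only serves to show why two simultaneous OHVs defeat the counting scheme.

```python
# Replaying the counterexample trace on a much simplified model (our own sketch):
# two OHVs cross LBpre in the same step, so the counter CO_pre is incremented once.
co_pre = 0          # number of OHVs believed to be between LBpre and LBpost
ti_post = 0         # remaining runtime of TI_post (0 = off)
stop = False

def lb_pre(number_of_ohvs):
    """The light barrier registers one interruption, no matter how many OHVs pass."""
    global co_pre
    co_pre += 1

def lb_post():
    global co_pre, ti_post
    if co_pre == 0:
        return                          # treated as a misdetection
    co_pre -= 1
    ti_post = 30                        # ODfinal detection armed

def od_final():
    global stop
    if ti_post > 0:
        stop = True                     # emergency stop

def wait(minutes):
    global ti_post
    ti_post = max(0, ti_post - minutes)

lb_pre(2)      # steps 2/3: two OHVs pass LBpre simultaneously, counted as one
lb_post()      # step 4: first OHV passes LBpost quickly, CO_pre back to 0
lb_post()      # step 5: second OHV at LBpost is taken for a misdetection
wait(31)       # step 6: TI_post expires before the second OHV reaches ODfinal
od_final()     # step 7: ODfinal triggered while detection is off
print(stop)    # -> False: the OHV enters the mid-/west-tube without an emergency stop
```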
In practice this means that if the height control is implemented following the description of Sect. 3, it will be safe except for the safety flaw described above. It is interesting to note that we detected the flaw only in the RAVEN specification. We did not find the problem with SMV, since we had made an error in the translation of the timed automata to untimed automata. This error resulted in a specification in which all OHVs had to stay for the full 30 minutes between LBpre and LBpost. This prevented the problem from showing up: two OHVs driving simultaneously through LBpre also had to drive through LBpost at the same time. The incident shows that using a specification language which does not explicitly support time is a source of additional specification errors. Safety is not the only important property of the height control in the Elbtunnel (although it is the most critical one). Another desirable property is the absence of unnecessary alarms. Here we could prove:

Theorem 2 (emergency stops). If tube 4 is open and not the only open tube, then an emergency stop can only be caused by a) an OHV driving on the left lane through LBpost, or b) an OHV driving through ODfinal, or c) an OHV driving on the right lane through LBpost while a high vehicle is driving on the left lane, or d) a high vehicle at ODfinal while timer TIpost is running.

The first two causes are correctly detected alarms; the other two are false alarms inherent in the technical realization of the system (and of course already known). Formal verification proves the absence of other causes of false alarms. Finally, we also analyzed how the availability of the tubes influences the situation. This led to the following theorem:

Theorem 3 (tube 4 availability). If tube 4 is not available, then any OHV trying to drive through will cause an emergency stop. If only tube 4 is available, then an emergency stop will never occur.

Summarizing, the formal analysis has led to a precise requirements specification for the automata that are to be used in the height control. A safety flaw has been discovered and proven to be the only risk for safety. False alarms due to reasons other than the known ones mentioned in Theorem 2 have been ruled out. The results of the formal analysis show that the control system does not have inherent logic faults. The formal proofs of this section do not give an absolute safety guarantee, but only one relative to the aspects considered in the formal model. For example, we did not consider technical defects, which are covered by the fault tree analysis presented in the next section.
5
Fault Tree Analysis
Another approach to increasing overall system safety is fault tree analysis (FTA) [10]. FTA is a technique for analyzing the possible basic causes (primary
failures) for a given hazard (top event). The top event is always the root of the fault tree and the primary failures are its leaves. All inner nodes of the tree are called intermediate events (see Fig. 8). Starting with the top event, the tree is generated by determining the immediate causes that lead to the top event. They are connected to their consequence through a gate. The gate indicates whether all (AND-gate) or any (OR-gate) of the causes are necessary to make the consequence happen. This procedure is applied recursively to all causes until the desired level of granularity is reached (this means all causes are primary failures that will not be investigated further).

Fig. 8. Fault tree symbols (event, AND-gate, OR-gate, primary failure)

We analyzed two different hazards for the Elbtunnel height control: the collision of an OHV with the tunnel entrance and the triggering of a false alarm. We use the hazard collision to illustrate FTA (see Fig. 9). The immediate causes of the top event, the collision of an OHV with the tunnel entrance, are that either the driver ignores the stop signals OR (this means the causes are connected through an OR-gate) the signals are not turned on. The first cause is a primary failure; we can do nothing about it but revoke the driver's license. The second cause is an intermediate event. Its immediate causes are a) that the signal lights are broken or b) that the signals were not activated. Again the first one is a primary failure and the second is an intermediate event, which has to be investigated further. A minimal set of primary failures whose joint occurrence makes the hazard happen is called a minimal cut set. Cut sets which consist of only one element are called single point failures; they mean the system is very susceptible to this primary failure.

Fig. 9. Fault tree for hazard collision (top event: collision; causes: OHV ignores signal, signal not on; signal not on: signal out of order, signal not activated, ...)

We will not present the whole tree here, but only discuss the most interesting results. One of these is the fact that the original control system had a safety gap. This gap can be seen in one of the branches of the fault tree (see Fig. 10). The direct causes for the (intermediate) event 'OHV not detected at LBpre' are a malfunction of LBpre or the synchronous passing of two OHVs through the light barrier LBpre. The malfunction is a primary failure. But the second cause represents a safety gap in the system design. In contrast to all other primary failures in the fault tree, this event is
a legal scenario: although all components of the system work according to their specification, the hazard collision may still occur. This must not happen in a safe control system.

Fig. 10. Safety gap (causes of 'OHV not detected at LBpre': LBpre out of order, or two OHVs passing LBpre simultaneously)

The FTA of the control system examines the system's sensitivity to component failures. There are no AND-gates in the collision fault tree. This means that there is no redundancy in the system, making it effective on the one hand but susceptible to the failure of every single component on the other. The false alarm fault tree is different: it has several AND-gates. In particular, a misdetection of the pre-control light barrier LBpre appears in almost all minimal cut sets (at least in all those scenarios where no OHV is involved at all). This means that the system is resistant to single point failures (i.e. only one primary failure occurring) with regard to the triggering of false alarms. Most failure modes in this fault tree are misdetections. These failures are by far more probable than an undetected OHV: e.g. a light barrier can easily interpret the interruption by a passing bird as an OHV, but it is very improbable that it still detects the light beam while an OHV is passing through. Fault tree analysis yields some other interesting results. We could show that all the measures taken are complementary in their effects on the two main hazards discussed, collision and false alarm, as shown in Fig. 11: e.g. the pre-control LBpre decreases the risk of false alarms, but, less obviously, it increases the risk of collisions as well. However, this is a qualitative proposition; of course it decreases the first probability much more than it increases the second one. These results correlate with the intention of the system: to decrease the high number of false alarms significantly, while still keeping the height control safe in terms of collision detection (the existing height control triggers about 700 false alarms per year).

Fig. 11. Complementary effects of the measures (runtime of timer TI1, runtime of timer TI2, pre-control LB, ODleft/right) on the risks of collision and false alarm
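The gate structure and the minimal-cut-set computation can be illustrated on the tree fragment described above; the following sketch is ours, uses only the causes named in the text, and does not reproduce the full project fault tree.

```python
# Sketch of a fault tree with AND/OR gates and a naive minimal-cut-set
# computation, using the fragment of the collision tree described in the text.
from itertools import product

def cut_sets(node):
    """Return the cut sets of a fault tree node as a set of frozensets."""
    if isinstance(node, str):                       # primary failure
        return {frozenset([node])}
    gate, children = node
    child_sets = [cut_sets(c) for c in children]
    if gate == "OR":                                # any single cause suffices
        return set().union(*child_sets)
    # AND: one cut set from every child must occur together
    return {frozenset().union(*combo) for combo in product(*child_sets)}

def minimal(sets):
    """Keep only cut sets that contain no smaller cut set."""
    return [s for s in sets if not any(t < s for t in sets)]

collision = ("OR", [
    "driver ignores stop signal",
    ("OR", [
        "signal lights out of order",
        ("OR", [
            "LBpre out of order",
            "two OHVs pass LBpre simultaneously",   # the safety gap of Fig. 10
        ]),
    ]),
])

for cs in minimal(cut_sets(collision)):
    print(sorted(cs))
# Every minimal cut set here has one element: the collision tree, as the text
# notes, contains only OR-gates and therefore only single point failures.
```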
6
Improvements
The safety analysis led to some suggestions for improvements and changes in the control logic as well as in the physical placement of the sensors. These changes were handed back as analysis feedback to our partners. Their cost-benefit ratio is being discussed and it is very likely that they will be implemented. In the first part of this section we describe two possible solutions to close the safety gap. Then we explain some improvements to the overall performance and quality of the system which were discovered through the safety analysis.

Measures to Close the Safety Gap. The first suggestion is the better one in terms of failure probability, but the second one can be implemented without any additional costs.
Installing additional ODs at LBpre. The first possibility of closing the safety gap (the simultaneous passing of two OHVs at LBpre) is to install additional ODs at the pre-control signal bridge. With ODs above each lane one can keep track of the actual number of HVs that are simultaneously passing the pre-control checkpoint. Every time the light barrier is triggered, the counter COpre (see Fig. 4) is increased not by one but by the number of HVs detected by the (new) ODs. With this information it is assured that the counter is always at least equal to the number of OHVs in the area between pre- and post-control. The counter can still be incorrect, e.g. the simultaneous passing of an OHV and a HV through the pre-control LBpre will increase it by two (instead of one), but it can only be higher than the actual number of OHVs. This closes the safety gap, and it increases the probability of false alarms only insignificantly.

Never stop TIpre. An alternative to the solution described above is to keep TIpre active for its maximum running time and to restart it with every new OHV entering the controlled area. If this solution is chosen, the counter COpre becomes redundant; it is replaced by an automaton similar to COpost. The advantage of this measure is of course that no additional sensors must be installed; only a change in the control logic is required. On the other hand, it leads to a higher increase in false-alarm probability than the option with ODs (as the alarm-triggering detectors are kept active longer).

Changes to Improve Overall System Quality. We now give useful advice for changes which increase the overall performance and quality of the system. Unfortunately, quantitative information on failure probabilities and hazard costs was not available to us, but the presented changes will improve the overall quality for all realistic failure rates. Both measures described here aim at reducing the number of false alarms.

Additional light barrier at the entrance of tube 4. The FTA showed that one important factor for the hazard false alarm is the total activation time of ODfinal. This is because each HV passing one of the ODfinal sensors immediately leads to a false alarm if the sensor is activated. As described above, these detectors are active while TIpost is running. An additional light barrier at the entrance of tube 4 can be used to detect OHVs that are leaving the endangered area between post-control and tunnel entrance. This can be used to stop TIpost and keep it running only while there are OHVs left in the last section. It will be necessary to use a counter to keep track of the number of OHVs in the post sector. Another advantage is that the timeout for TIpost may be chosen much more conservatively, without significantly increasing the risk of false alarms but decreasing the risk of collisions considerably. It is important to make the risk of misdetections of this additional light barrier as low as possible, as misdetections could immediately lead to collisions. This can be done by installing a pair of light barriers instead of a single one and combining them with an AND-connection.

Distinguished alarms at post-control. To further decrease the risk of false alarms, one may trigger an alarm at post-control only if ODleft detects a HV. If neither ODleft nor ODright detects a HV (or OHV), ODfinal will be activated without
triggering an alarm. This means the system assumes that the post-control light barrier had a misdetection if neither OD detects a high vehicle, but it still activates ODfinal just in case (e.g. if ODleft or ODright is defective). This measure increases the risk of collisions almost unnoticeably, but the reaction time between the alarm signal and a potential collision will decrease.
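The first of the measures above amounts to a one-line change in the counter update; the sketch below only illustrates the counting rule, with hypothetical names, and adds a guard of our own so that an interruption always counts at least one vehicle.

```python
# Sketch of the first proposed fix: with ODs above each lane at the pre-control,
# the counter is increased by the number of detected HVs instead of by one.
# Hypothetical names; this only illustrates the counting rule.
def on_lb_pre(co_pre_count: int, hvs_detected_by_new_ods: int) -> int:
    """Counter update when LBpre is interrupted.  The count may overestimate the
    number of OHVs (e.g. an HV passing next to an OHV), but can never underestimate
    it, which closes the safety gap."""
    # max(1, ...) is our own guard: the light barrier was interrupted, so at least
    # one vehicle must be counted even if the new ODs report nothing.
    return co_pre_count + max(1, hvs_detected_by_new_ods)
```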
7
Conclusion
The safety analysis of the height control for the Elbtunnel has shown the benefit of combining formal verification and fault tree analysis. Both formal model checking and fault tree analysis are important for examining a complex system. Formal verification gives a precise requirements specification for the control system and discovers logical flaws in the control software. In our case study it revealed the safety problem of two OHVs passing LBpre at the same time. On the other hand, building an adequate model of the system and its environment is not a simple process. To minimize specification errors, an orthogonal analysis technique like FTA is needed. FTA also addresses the issue of component failures. This analysis gives many useful hints for improving the system. The revised system can be formally model checked again, leading to a synergy effect. The presented case study, which required an effort of 3 person-months, combines both techniques by exchanging results. A short overview of how to combine FTA and formal methods can be found in [5]; a detailed paper is in preparation. For a tighter integration, an FTA semantics was developed [9] and used to formally verify the completeness of fault trees. Proof support [2] is integrated into the interactive specification and verification system KIV [1], using statecharts [3] as the formal modeling language. In conclusion, we find that the combination of formal methods and FTA is a suitable analysis technique for embedded systems. Our analysis has made the system safer (by detecting and closing the safety gap), led to design improvements, and increased overall system quality.
References
[1] M. Balser, W. Reif, G. Schellhorn, K. Stenzel, and A. Thums. Formal system development with KIV. In T. Maibaum, editor, Fundamental Approaches to Software Engineering, number 1783 in LNCS. Springer, 2000.
[2] M. Balser and A. Thums. Interactive verification of statecharts. In Integration of Software Specification Techniques (INT'02), 2002.
[3] D. Harel. Statecharts: A visual formalism for complex systems. Science of Computer Programming, 8(3), 1987.
[4] K. L. McMillan. Symbolic Model Checking. Kluwer Academic Publishers, 1990.
[5] W. Reif, G. Schellhorn, and A. Thums. Safety analysis of a radio-based crossing control system using formal methods. In 9th IFAC Symposium on Control in Transportation Systems 2000, 2000.
[6] J. Ruf. RAVEN: Real-time analyzing and verification environment. Technical Report WSI 2000-3, University of Tübingen, Wilhelm-Schickard-Institute, January 2000.
[7] Jürgen Ruf and Thomas Kropf. Symbolic Model Checking for a Discrete Clocked Temporal Logic with Intervals. In E. Cerny and D. K. Probst, editors, Conference on Correct Hardware Design and Verification Methods (CHARME), pages 146–166, Montreal, 1997. IFIP WG 10.5, Chapman and Hall.
[8] Jürgen Ruf and Thomas Kropf. Modeling and Checking Networks of Communicating Real-Time Systems. In Correct Hardware Design and Verification Methods (CHARME 99), pages 265–279. IFIP WG 10.5, Springer, September 1999.
[9] G. Schellhorn, A. Thums, and W. Reif. Formal fault tree semantics. In The Sixth World Conference on Integrated Design & Process Technology, 2002. (to appear).
[10] W. E. Vesely, F. F. Goldberg, N. H. Roberts, and D. F. Haasl. Fault Tree Handbook. NUREG-0492, Washington, D.C., 1981.
Dependability and Configurability: Partners or Competitors in Pervasive Computing?

Titos Saridakis

NOKIA Research Center, PO Box 407, FIN-00045, Finland
[email protected]

Abstract. To foster commercial-strength pervasive services, dependability and configurability concerns must be integrated tightly with the offered functionality in order for the pervasive services to gain the end-user's trust while keeping their presence transparent to him. This paper presents the way dependability and configurability correlate with pervasive services and analyzes their common denominators and their competing forces. The common denominators are used to derive a set of design guidelines that promote the integration of dependability and configurability aspects. The competing forces are used to reveal a number of challenges that software designers must face in pervasive computing.
1
Introduction
The explosive evolution of wireless communications in the past decade, combined with the equally impressive improvements in hand-held device technology (mobile phones, personal digital assistants or PDAs, palmtop computers, etc.), has opened the way for the development of ubiquitous applications and pervasive services. In [12], where pervasive computing is seen as the incremental evolution of distributed systems and mobile computing (see Fig. 1), four research thrusts are identified which distinguish pervasive computing from its predecessors: smart spaces (i.e. context awareness), invisibility (i.e. minimal user distraction), localized scalability (i.e. scalability within the environment where a pervasive service is used), and masking of uneven conditioning (i.e. graceful service quality degradation when space "smartness" decreases). All four of these characteristics of pervasive computing imply directly or indirectly some form of configurability. A pervasive service should be able to adapt to different contexts and to different user profiles. It should also be reconfigurable in order to adjust to changes in its execution environment (e.g. an increase of network traffic or a decrease of space "smartness"). Hence, the quality attribute of configurability is inherently found in all pervasive services. On the other hand, different dependability concerns (i.e. availability, reliability, safety and security, according to [8]) are tightly related to the four characteristics of pervasive computing. Availability of a service is of crucial importance when dealing with the scalability of that service. In a similar manner, reliability goes hand in hand
Fig. 1. The incremental evolution from distributed systems and mobile computing to pervasive computing (distributed systems: remote communication, fault tolerance, high availability, distributed security; mobile computing adds: mobile networking, mobile information access, adaptive applications, energy-aware systems, location sensitivity; pervasive computing adds: smart spaces, invisibility, localized scalability, uneven conditioning)
with minimal user distraction. And last, but most certainly not least, different aspects of the quality attribute of security are related to all four characteristics of pervasive computing (user privacy, authentication and authorization that are invisible to the user, and accountability in spaces with low "smartness" are only a few examples). The dependability and configurability concerns in the context of pervasive computing are not entirely new; rather, most of them are rooted in distributed systems and mobile computing. However, the intensive needs of pervasive computing accentuate and transform these concerns in a way that makes their integration with the functionality provided by pervasive services indispensable. The main objective of this paper is to analyze the roles played by dependability and configurability in pervasive computing. The analysis of these roles reveals the common points of configurability, availability, reliability, safety and security, which support the integration of these five quality attributes in the development of pervasive services. On the other hand, the analysis uncovers a number of discriminating factors among the aforementioned quality attributes which resist their integration. The analysis also identifies critical areas in the design and development of pervasive services that need to be carefully dealt with in order to result in successful products. The remainder of this paper is structured as follows: the next section describes the system model that is used in this paper, followed by a quick revision of the dependability and configurability quality attributes in Section 3.
Section 4 presents the common denominators of the aforementioned quality attributes and discusses how they fit with the needs of pervasive services. Section 5, on the antipode, presents the competing forces that appear when trying to combine the same five quality attributes in the context of pervasive services. A discussion of the issues related to the integration of dependability and configurability concerns in pervasive computing takes place in Section 6. The paper concludes in Section 7 with a summary of the presented challenges in the development of pervasive services and a mention of the open issues.
2
System Model
The purpose of our system model is to provide the abstractions that describe the entities which participate in a pervasive computing environment. A number of scenarios have been described in the literature (e.g. see [4, 12]) giving a coarse-grained view of pervasive computing environments. In brief, these scenarios describe users who use personal, office and commercial information while moving from one physical location to another. The way they use this information depends on the devices they are carrying (personal and wearable computers) and the devices offered by the physical location in which the information access takes place (screens, projectors, printers, etc.). Hence, these scenarios can be abstractly described in terms of devices, information, use of information, and the physical locations where the information is used. Our system model provides four abstractions for capturing the above terms describing a pervasive computing environment. The asset abstraction captures all kinds of information that a user might want to use, which can be personal or office data, advertisements, flight-schedule updates, etc. The service abstraction captures the manipulation of assets when a user uses the corresponding information. The capability abstraction captures the devices that a user may use to manipulate and perceive a set of assets, which usually corresponds to the devices where information is manipulated and the devices that present to the user the outcome of that manipulation. Finally, the context abstraction captures the physical location in which assets are manipulated by services using the available capabilities. However, the context abstraction goes beyond the physical location by also covering other factors that can parametrize the interaction of assets, services and capabilities. These factors include time, the occurrence of pre-specified events, the emotional condition of the user, etc. Besides the above abstractions, our system model also defines four relations that capture the interactions among these abstractions. These relations are informally described below. In a given context, a service uses a capability to access an asset when the software that implements the service can manipulate the information corresponding to the given asset while executing on the device that corresponds to the given capability. When a service accesses an asset in a given context, it may employ some other service in order to accomplish its designated manipulation of the asset, in which case we say that the former service depends-on the latter. Finally, the contains relation is defined for contexts and
Fig. 2. Entities and relations defined by the presented system model: capabilities, assets, services and contexts, connected by the uses, accesses and depends-on relations
all four abstractions defined above, i.e. a context contains the services, capabilities, assets and other contexts that are available in it. Hence, a context is a container of services, capabilities, assets and other contexts, and it constrains them and their interactions according to the policies¹ associated with the factors that parametrize the context. The accesses, uses and depends-on relations may cross context boundaries as long as one of the contexts is (directly or indirectly) contained in the other or there exists a wider context which (directly or indirectly) contains both contexts. Fig. 2 illustrates graphically the entities and the relations defined by the presented system model. Based on this model, a pervasive service is nothing more than an instantiation of the service abstraction presented above, i.e. a case where, in a given context, the software implementing the functionality captured by a service runs on a device represented by a given capability and operates on a given set of assets. Most of the properties that characterize a pervasive service are inherited from distributed systems and mobile computing (see Fig. 1), including fault tolerance, high availability, distributed security and adaptability. Pervasive computing adds the invisibility property and strengthens the requirements regarding availability, adaptability (graceful degradation of service provision), scalability, and security
¹ The word "policy" has no particular meaning in this paper, other than describing the constraints that a given context imposes on the contained assets, services and capabilities. In practice, such policies can be represented as context-specific assets which participate in every interaction among the assets, services and capabilities contained in the given context.
(user privacy and trust) properties. All these properties are directly related to the dependability and configurability quality attributes of a system, which are briefly summarized in the next section. A sketch of the system model's abstractions as plain types is given below.
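As an illustration of the system model, the four abstractions and the contains relation can be written down as plain types; the following sketch is ours, the field names are assumptions, and the access check is deliberately simplified to direct or indirect containment in a single context.

```python
# Sketch of the system model's abstractions (asset, service, capability, context)
# and relations as plain Python types.  Our own illustration of the informal model;
# names and fields are assumptions made for the example.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Asset:
    name: str                      # information a user might want to use

@dataclass
class Capability:
    name: str                      # a device on which services may run

@dataclass
class Service:
    name: str
    depends_on: List["Service"] = field(default_factory=list)

    def accesses(self, asset: Asset, using: Capability, context: "Context") -> bool:
        """Simplified access rule: the enclosing context must (directly or
        indirectly) contain the service, the asset and the capability."""
        return all(context.contains(x) for x in (self, asset, using))

@dataclass
class Context:
    name: str
    services: List[Service] = field(default_factory=list)
    assets: List[Asset] = field(default_factory=list)
    capabilities: List[Capability] = field(default_factory=list)
    subcontexts: List["Context"] = field(default_factory=list)

    def contains(self, entity) -> bool:
        """Direct or indirect containment, following the 'contains' relation."""
        if entity in self.services or entity in self.assets or entity in self.capabilities:
            return True
        return any(sub.contains(entity) for sub in self.subcontexts)
```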
3
Dependability and Configurability
Dependable systems have been a domain of research for the past four decades. In general, dependability is a property that describes the trustworthiness of a system; it consists of the following four quality attributes [8]: availability, which describes the continuous accessibility of a system's functionality; reliability, which describes the uninterrupted delivery of a system's functionality; safety, which describes the preservation of a correct system state; and security, which describes the prevention of unauthorized operations on a system. Configurability is another research domain with a long history, especially in the domain of distributed systems. The quality attribute of configurability describes the capability of a system to change the configuration of its constituents as a response to an external event (e.g. user input), as a reaction to an internal event (e.g. failure detection), or as part of its specified behavior (i.e. programmed reconfiguration). In the system model presented in the previous section, all five quality attributes (availability, reliability, safety, security, and configurability) can be used to qualify assets, services and capabilities, but not contexts. In the area of distributed systems, dependability has been very closely associated with fault tolerance techniques (e.g. see [9] and [3]). In fact, fault tolerance refers to techniques for dealing with the occurrence of failures in the operation of a system and covers mostly safety and reliability issues. Availability is partially addressed by fault tolerance techniques and is more related to load balancing issues. Security is even less related to fault tolerance techniques² and deals mostly with access control, intrusion detection and communication privacy. On the other hand, a wide variety of fault tolerance techniques are closely related to configurability (e.g. after the occurrence of a failure is detected, the system reconfigures itself to isolate the failed part). Hence, in distributed systems the relation between reliability and safety on one hand and configurability on the other is quite strong. The relation between availability and configurability is less direct; it mainly relates to the establishment of a configuration where the part of the system that needs to access some remote functionality gets in contact with the part of the system that provides it. Finally, security appears not to relate to the other four quality attributes addressed in this paper. Mobile computing extends the problems that the designers of distributed systems face with concerns regarding the poor resources of the capabilities, the high fluctuations in connectivity characteristics, and an unpredictable factor of hazardousness [11]. This brings new dimensions to dependability and
² Fault tolerance techniques which deal with Byzantine failures consider security attacks as a special kind of failure where the failed part of the system exhibits unpredictable behavior.
configurability, which are further accentuated in pervasive computing. Failures, load balancing and security attacks are not the only concerns that influence the availability, reliability, safety and security of services, assets and capabilities. Context composition (i.e. containment under the same wider context), proximity issues in short-range wireless networks, the absence of a centralized point of information (e.g. one that may play the role of a security server) and the limited physical resources of the capabilities are only a few of the factors that make dependability an indispensable constituent quality in pervasive computing. In addition, the very nature of pervasive computing entails the need for dynamic reconfiguration and adaptation of the services, assets and capabilities of a given context to a wide range (possibly not defined a priori) of services, assets and capabilities in other contexts with which the former may come into contact.
4
Common Denominators
The first common denominator across all five quality attributes presented in the previous section is the detection of conditions which call for some reaction that will prevent any behavior of the system that lies outside its specification. Such conditions can be different for each quality attribute, e.g. a failure occurrence for reliability, exceeded load for availability, an unauthorized access for security, and a user action that leads to reconfiguration for configurability. Nevertheless, in every case the system reaction and the triggering of the mechanisms that ensure the "correct" behavior of the system rely on the detection of the conditions which call for some system reaction. The detection of such conditions can be differentiated depending on whether it is set on assets, services or capabilities, as well as with respect to the quality attribute with which it is associated. Still, mechanisms that guarantee availability, reliability, safety, security and configurability are all based on the detection of some type of condition. Closely related to the detection of conditions that can lead to specification violations is the "design for adaptation" characteristic that can be found in all five quality attributes studied in this paper. A direct consequence of detecting a condition that leads to a specification violation is to take appropriate actions for preventing the violation. The action which expresses this design for adaptation can be different depending on the case (e.g. stop unauthorized accesses, connect to an unoccupied resource, retrieve the assets or service results from a non-faulty replica, etc.). This common characteristic of dependability and configurability aligns perfectly with the property of self-tuning and the invisibility thrust, which are fundamental elements of pervasive computing [12]. Another common denominator for the quality attributes of availability and reliability is redundancy, which appears in the form of replication of assets, services or devices and is used to mask resource overload or failure (the common cases in distributed systems) as well as inaccessibility, ad hoc changes in the composition of contexts, etc. A different form of redundancy can be found in mechanisms that ensure the security of assets, and more specifically their integrity. This redundancy concerns some extra information which is represented as CRC
codes, digital signatures and certification tokens. For reasons that will be revealed in the following section, it is worth mentioning that this kind of redundancy has also been used in fault tolerance techniques (CRC codes are used to detect corrupted data).
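As a small illustration of redundancy in the form of extra information, a CRC attached to an asset lets the receiver detect corruption, although, as discussed in the next section, it cannot tell an attack from a transient fault; the example below is generic Python, not taken from the paper.

```python
# Illustration of redundancy as extra information: a CRC attached to an asset
# allows corruption to be detected (though not what caused it).
import zlib

def attach_crc(payload: bytes) -> bytes:
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_crc(message: bytes) -> bool:
    payload, crc = message[:-4], int.from_bytes(message[-4:], "big")
    return zlib.crc32(payload) == crc

asset = attach_crc(b"flight-schedule update")
corrupted = b"X" + asset[1:]   # transient fault or attack? the CRC only says "changed"
print(check_crc(asset), check_crc(corrupted))   # -> True False
```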
5
Competing Forces
On the antipode of the common denominators of the five quality attributes discussed in this paper, a number of competing forces strive to complicate the development of pervasive services. An interesting remark is that these competing forces stem directly or indirectly from the common denominators identified above. The first common characteristic of availability, reliability, safety, security and configurability identified in the previous section is the detection of conditions that trigger the mechanisms which guarantee the provision of the five aforementioned quality attributes. The same characteristic yields an important force that complicates the development of pervasive services: the interpretation of a phenomenon as a certain type of condition. For example, the interception of a corrupted message by a security detection mechanism will be interpreted as a security attack on the service or asset integrity and, depending on the security policy, it may result in prohibiting any communication with the sender of the message. However, if the message was corrupted due to some transient communication failure, the most efficient reaction of the receiver would be to request from the sender the re-transmission of the message. But without any means to reveal the true nature of the message corruption, the interpretation of this phenomenon will be decided by whichever detection mechanism first intercepts it. The problem of the wrong detection mechanism intercepting a condition (or a detection mechanism intercepting the wrong condition) is not specific to pervasive computing. The same situation can appear in distributed systems and mobile computing, only in those cases the impact is smaller. Based on the above example, in distributed systems a central security authority can be employed to resolve the misunderstanding created by the misinterpretation of the corrupted message as a security attack. In pervasive computing, however, such central authorities do not exist, and even if they do in some exceptional case, they are not accessible at any given moment, mainly due to the lack of a global connectivity network. This situation is closer to mobile computing, with the exception of the minimal user distraction constraint which is a characteristic of pervasive computing. The implication of this latter fact is that in pervasive computing the misunderstanding must be resolved without the intervention of a central authority (or the user) and should result in the minimum possible inconvenience for him. The second common denominator, the "design for adaptation", is probably the factor that produces the fiercest obstacles to integrating all five quality attributes discussed in this paper into the design of pervasive services. The "design for adaptation" reflects the different strategies which are employed to deal
with the occurrence of conditions that affect the availability, reliability, safety, security or configurability of a pervasive service. The combination of security and reliability alone is known to be notoriously difficult (e.g. see [7]). The same statement holds for the combination of security with any of the three other quality attributes above. Increasing security checks results in an increase in the time to access an asset or to use a capability, which in turn decreases the availability of the latter. Similarly, the availability of a service is inversely proportional to the number of checkpoints put in place by the mechanism which guarantees the reliability of a service or an asset. The problem of composing the design decisions regarding different quality attributes is not met for the first time in pervasive computing. The development of different views of a system's architecture, in order to better comprehend the implications of the requirements regarding different quality attributes, has already been suggested and tried out (e.g. see [5]). However, in pervasive computing the alternative design choices for resolving the requirements regarding availability, reliability, safety, security and configurability are not as numerous as in distributed systems and mobile computing. For example, replication in space (e.g. the state machine approach or active replication) will rarely be affordable as a reliability solution, due to the limited resources of the capabilities. In the same manner, centralized security is not an option in pervasive computing and security solutions must be fundamentally distributed (e.g. distributed trust [6]). The same argument applies to configuration mechanisms based on a central reconfiguration manager (e.g. see [1]). The bottom line is that the composability of different architectural views was already a difficult issue in distributed systems, where a variety of design alternatives exist for different quality attributes. In pervasive computing this difficulty is amplified by the lack of alternative design solutions, which is the result of the limited resources of hand-held devices. This brings us to the third force that opposes the integration of availability, reliability and security: redundancy. Each of these three quality attributes uses some form of redundancy but, as imposed by the differences in their nature, these redundancy forms are different and in many cases contradictory. For example, replicating an asset to guarantee its availability runs contrary to traditional security practices for protecting secrets. Similarly, encrypting and certifying an asset, which adds some redundant information, increases the time needed to access that asset, which fatally decreases its availability. When redundancy for reliability purposes enters the above picture, the result is a mess of conflicting redundancy types and strategies on how to employ them to serve all three quality attributes in question. In addition, the choices of redundancy types that can be employed in pervasive computing are few. Limited resources in terms of memory, computing power and energy do not permit solutions based on cryptographic techniques at the leading edge of distributed system technology (high consumption of computing and energy power), nor do they permit greedy replication schemes for availability and active replication for reliability (high memory consumption). For the same reasons, fat
proxy solutions used in mobile computing for disconnected mode operation are also excluded.
6
Discussion
The issues of balancing the competing forces in the integration of dependability and configurability concerns are not specific to pervasive computing. Distributed systems and mobile computing face the same problem, only on a different scale (e.g. limited-resource devices, a number of services, assets and capabilities in a context that may vary with time, etc.) and under assumptions which are not valid in pervasive computing (e.g. a central authority, presence of capabilities, services and assets in the surrounding environment of a service, etc.). The traditional approach in distributed systems, which also applies in mobile computing, is to sacrifice one of the system aspects related to dependability or configurability in order to guarantee the others, or to compromise the guarantees provided by many of these aspects in order to provide a little bit of everything. However, both of the above alternatives are just not good enough for pervasive computing. Pervasive services that neglect security issues (e.g. integrity or privacy of services, assets and capabilities) will have significantly limited acceptance by end-users, service providers and device manufacturers, regardless of the degree of availability, reliability and configurability they offer. Similar cases hold when any of the constituent attributes of dependability or configurability is neglected. For example, neglecting availability issues in favor of reliability, i.e. assuring that the user of a service will receive the expected result but providing no guarantees about when the service will be accessible, results in very weak user trust in the service. On the other hand, compromising the guarantees regarding some of the aforementioned quality attributes does not necessarily increase the trust of service users and providers. Such approaches can easily result in degraded systems which are neither secure, nor reliable, nor configurable. In order to ensure the integration of dependability and configurability concerns in the development of pervasive services, the designer must resolve the conflicts arising from the competing forces presented in Section 5. This integration is a very challenging task which will push the system modeling and software architecture domains to their limits and probably foster their evolution. Still, for each of the three identified competing forces there is a simple guideline that can be used as a starting point when dealing with the integration of dependability and configurability in the design of pervasive services. Regarding the detection of conditions that activate the mechanisms responsible for the dependability and configurability guarantees in a system, the designer must dissociate the detection of an event from its interpretation as a condition of a certain type, since it is strongly probable that the same event in different contexts will have to be interpreted as a different condition. In distributed systems an event has a pre-assigned meaning as a condition of some type (e.g. if a timeout expires in communication over a network, it is taken to mean that the remote server is down or overloaded). This is a consequence of the assumption
that the network composition is more or less fixed and nodes do not enter and exit the network all the time. Hence, the appearance and removal of nodes happen under the supervision of some system administrator (human or software component). So, delays in responses to network communication are assumed to be events that signify node overload or failure. Mobile computing deals with the events associated with network communication in a more flexible way, since network disconnections are part of the system specification. However, this is not the case with security issues, where the usual approach is to trust either a central authority which is occasionally accessible over the network or the information kept in some security profile on the mobile terminal. This results in rigid security policies which fail to adapt to the variety of circumstances that may arise in different contexts in pervasive computing. Following the dissociation of event detection from event interpretation as a condition of a specific type, the second force identified in Section 5 (i.e. design for adaptation) must be adjusted. In a similar way that the same event may signify different conditions in different contexts, the adaptation policy for the same condition must be parametrized by the context in which it applies. The first impact of this flexibility of adaptation policies is that the quality attributes of availability, reliability, safety, security and configurability must have more than one mechanism which guarantees them. For example, privacy may depend on an RSA encryption mechanism in a context where communication takes place over Ethernet, but be content with the ciphering provided by the modulation and compression performed by CDMA-based radio communication. Similarly, reliability may be based on an active-replication fault tolerance mechanism in contexts with rich capabilities, but be content with a primary-backup mechanism where capabilities are scarce and communication delays high. Design for adaptation in pervasive computing must take into consideration context-dependent parameters that may or may not be known at the design phase. Hence, adaptability in pervasive computing is not only adjusting the system behavior according to the conditions to which event occurrences are translated; it is also adjusting the adaptation policies to the specific characteristics of the context in which a given pervasive service operates. Since all the adaptation policies that may apply in different contexts cannot be known in advance, the adaptation policies must be adaptive themselves. The design issues related to the use of redundancy in pervasive services are directly related to the adaptability of the adaptation policies. For example, using replication of services and/or assets to achieve availability and reliability must not be a choice fixed at design time; rather, it should be possible to select the most appropriate form of redundancy for a given context. Security-related redundancy must also be adjustable to context characteristics. Encryption and digital signatures might not be necessary for guaranteeing the integrity of an asset when the given asset is accessed in an attack-proof context. Redundancy schemes with conflicting interests for the quality attributes of availability, reliability and security must be prioritized on a per-context basis. This will allow the graceful degradation of the qualities guaranteed by a pervasive service to
adapt to the characteristics of different contexts. For example, while assuring maximum security guarantees, lower reliability and availability guarantees can be provided for accessing an asset in a given context (e.g. an uncertified context). In a different context, where physical security measures allow the security guarantees provided by the system to be relaxed, the reliability and availability guarantees for access to the same asset can be maximized.
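To make the first and third guidelines concrete, the following sketch separates the detection of an event from its context-specific interpretation, and lets the same context drive the choice of redundancy scheme. All names, thresholds and context attributes are hypothetical illustrations, not part of any system described here.

```java
// Illustrative sketch only: interfaces and names are hypothetical.
enum Condition { NODE_FAILURE, NETWORK_PARTITION, BENIGN_DELAY }
enum RedundancyScheme { ACTIVE_REPLICATION, PRIMARY_BACKUP, NONE }

// A context captures the capabilities and assumptions of the current environment.
record Context(boolean richCapabilities, boolean physicallySecure, long typicalDelayMillis) { }

// The same event (e.g. a communication timeout) is first detected,
// and only afterwards interpreted as a condition in a context-specific way.
interface EventInterpreter {
    Condition interpret(String event, Context context);
}

class ContextSensitivePolicy implements EventInterpreter {
    @Override
    public Condition interpret(String event, Context ctx) {
        if ("timeout".equals(event)) {
            // In a fixed network a timeout would signal overload or failure;
            // in a context with long typical delays it may be benign.
            return ctx.typicalDelayMillis() > 500 ? Condition.BENIGN_DELAY
                                                  : Condition.NODE_FAILURE;
        }
        return Condition.NETWORK_PARTITION;
    }

    // The adaptation policy itself is parametrized by the context:
    // the redundancy scheme is selected per context rather than fixed at design time.
    RedundancyScheme selectRedundancy(Context ctx) {
        if (ctx.richCapabilities()) return RedundancyScheme.ACTIVE_REPLICATION;
        if (!ctx.physicallySecure()) return RedundancyScheme.PRIMARY_BACKUP;
        return RedundancyScheme.NONE;
    }
}
```

The point of the sketch is the separation of concerns: the interpreter and the redundancy selector can each be replaced or re-parametrized per context without changing the code that detects raw events.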
7 Summary
The tight relation of pervasive computing with the quality attributes of dependability and configurability suggests that it is impossible to deliver successful pervasive services without integrating dependability and configurability considerations in their design. Pervasive computing does not introduce the integration of functional system properties and quality attributes as a new design concern. This is a concern already addressed before (e.g. see [2]). What is new in pervasive computing is that this integration is no longer a desirable action in order to increase the quality of a system; rather, it becomes an absolute necessity in order to provide successful pervasive services. This means that pervasive computing characteristics (smart-spaces, invisibility, localized scalability and graceful degradation) must be harmoniously interwoven with the condition detection, adaptability and redundancy characteristics that constitute a double-edged sword for the dependability and configurability quality attributes.
Although there is no widely applicable, systematic method to support the system designer in the aforementioned challenging integration task, there are three simple guidelines which can serve as a starting point for attempting the harmonious integration of dependability and configurability in pervasive services. First, the dissociation of the detection of events in a context from their interpretation as conditions of a certain type. The interpretation must happen separately and in a context-specific way, which will allow the same event to signify different conditions (and hence trigger different mechanisms) in different contexts. Second, the design provisions which enable the adaptation of adaptation policies to the characteristics specific to each context where the policies are applied. Finally, the inclusion of redundancy schemes as part of the adaptation policies, which implies that different redundancy schemes must exist for assets and services, and the selection of the one to be applied in a given context will depend on the characteristics of the context in question. These guidelines are very much in line with the self-tuning property of pervasive computing [12]. In fact, the first guideline is an enabler for self-tuning, while the second and the third elaborate on how to put self-tuning considerations into the design of dependable and configurable pervasive services. We anticipate substantial support from the system modeling and software architecture activities for the integration of dependability and configurability in the design of pervasive services.
Finally, the quality attribute of timeliness must be considered in conjunction with dependability and configurability. This is another big design challenge
since, on the one hand, real-time embedded devices account for a significant share of the capabilities in pervasive computing and, on the other, the integration of timeliness, dependability and adaptability has been shown to be far from trivial (e.g. see [10]).
References
[1] C. Bidan, V. Issarny, T. Saridakis, and A. Zarras. A Dynamic Reconfiguration Service for CORBA. In Proceedings of the 4th International Conference on Configurable Distributed Systems, pages 35–42, 1998. 316
[2] J. Bosch and P. Molin. Software Architecture Design: Evaluation and Transformation. In Proceedings of the Conference on Engineering of Computer-Based Systems, pages 4–10, 1999. 319
[3] F. Cristian. Understanding Fault-Tolerant Distributed Systems. Communications of the ACM, 34(2):56–78, February 1991. 313
[4] R. Grimm, T. Anderson, B. Bershad, and D. Wetherall. A System Architecture for Pervasive Computing. In Proceedings of the 9th ACM SIGOPS European Workshop, pages 177–182, September 2000. 311
[5] V. Issarny, T. Saridakis, and A. Zarras. Multi-View Description of Software Architectures. In Proceedings of the 3rd International Workshop on Software Architecture, pages 81–84, 1998. 316
[6] L. Kagal, T. Finin, and A. Joshi. Trust-Based Security in Pervasive Computing Environments. IEEE Computer, 34(12):154–157, December 2001. 316
[7] K. Kwiat. Can Reliability and Security Be Joined Reliably and Securely? In Proceedings of the IEEE Symposium on Reliable Distributed Systems, pages 72–73, 2001. 316
[8] J. C. Laprie, editor. Dependability: Basic Concepts and Terminology, volume 5 of Dependable Computing and Fault-Tolerant Systems. Springer-Verlag, 1992. 309, 313
[9] V. P. Nelson. Fault-Tolerant Computing: Fundamental Concepts. IEEE Computer, 23(7):19–25, July 1990. 313
[10] P. Richardson, L. Sieh, and A. M. Elkateeb. Fault-Tolerant Adaptive Scheduling for Embedded Real-Time Systems. IEEE Micro, 21(5):41–51, September-October 2001. 320
[11] M. Satyanarayanan. Fundamental Challenges in Mobile Computing. In Proceedings of the 15th Annual ACM Symposium on Principles of Distributed Computing, pages 1–7, 1996. 313
[12] M. Satyanarayanan. Pervasive Computing: Vision and Challenges. IEEE Personal Communications, 8(4):10–17, August 2001. 309, 311, 314, 319
Architectural Considerations in the Certification of Modular Systems Iain Bate and Tim Kelly Department of Computer Science University of York, York, YO10 5DD, UK {iain.bate,tim.kelly}@cs.york.ac.uk
Abstract. The adoption of Integrated Modular Avionics (IMA) in the aerospace industry offers potential benefits of improved flexibility in function allocation, reduced development costs and improved maintainability. However, it requires a new certification approach. The traditional approach to certification is to prepare monolithic safety cases as bespoke developments for a specific system in a fixed configuration. However, this nullifies the benefits of flexibility and reduced rework claimed of IMA-based systems and will necessitate the development of new safety cases for all possible (current and future) configurations of the architecture. This paper discusses a modular approach to safety case construction, whereby the safety case is partitioned into separable arguments of safety corresponding with the components of the system architecture. Such an approach relies upon properties of the IMA system architecture (such as segregation and location independence) having been established. The paper describes how such properties can be assessed to show that they are met, and how trade-offs can be performed during architecture definition, reusing information and techniques from the safety argument process.
1 Introduction
Integrated Modular Avionics (IMA) offers potential benefits of improved flexibility in function allocation, reduced development costs and improved maintainability. However, it poses significant problems in certification. The traditional approach to certification relies heavily upon a system being statically defined as a complete entity and the corresponding (bespoke) system safety case being constructed. However, a principal motivation behind IMA is that there is through-life (and potentially runtime) flexibility in the system configuration. An IMA system can support many possible mappings of the functionality required to the underlying computing platform. In constructing a safety case for IMA an attempt could be made to enumerate and justify all possible configurations within the architecture. However, this approach is unfeasibly expensive for all but a small number of processing units and functions. Another approach is to establish the safety case for a specific configuration within the
architecture. However, this nullifies the benefit of flexibility in using an IMA solution and will necessitate the development of completely new safety cases for future modifications or additions to the architecture. A more promising approach is to attempt to establish a modular, compositional, approach to constructing safety arguments that has a correspondence with the structure of the underlying system architecture. However, to create such arguments requires a system architecture that has been designed with explicit consideration of enabling properties such as independence (e.g. including both non-interference and location ‘transparency’), increased flexibility in functional integration, and low coupling between components. An additional problem is that these properties are nonorthogonal and trade-offs must be made when defining the architecture.
2 Safety Case Modules
Defining a safety case ‘module’ involves defining the objectives, evidence, argument and context associated with one aspect of the safety case. Assuming a top-down progression of objectives-argument-evidence, safety cases can be partitioned into modules both horizontally and vertically:
Vertical (Hierarchical) Partitioning - The claims of one safety argument can be thought of as objectives for another. For example, the claims regarding software safety made within a system safety case can serve as the objectives of the software safety case.
Horizontal Partitioning - One argument can provide the assumed context of another. For example, the argument that “All system hazards have been identified” can be the assumed context of an argument that “All identified system hazards have been sufficiently mitigated”.
In defining a safety case module it is essential to identify the ways in which the safety case module depends upon the arguments, evidence or assumed context of other modules. A safety case module should therefore be defined by the following interface:
1. Objectives addressed by the module
2. Evidence presented within the module
3. Context defined within the module
4. Arguments requiring support from other modules
Inter-module dependencies:
5. Reliance on objectives addressed elsewhere
6. Reliance on evidence presented elsewhere
7. Reliance on context defined elsewhere
The principal need for having such well-defined interfaces for each safety case module arises from being able to ensure that modules are being used consistently and correctly in their target application context (i.e. when composed with other modules).
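As a rough illustration (not taken from the paper), the seven interface items above can be read as a rely-guarantee style interface. The following hypothetical Java sketch simply names them explicitly; all types are invented for illustration.

```java
// Hypothetical sketch of the module interface described above (items 1-7).
import java.util.List;
import java.util.Set;

record Claim(String id, String statement) { }
record Evidence(String id, String description) { }
record ContextItem(String id, String assumption) { }

interface SafetyCaseModule {
    // Guarantee condition (item 1): objectives this module discharges.
    Set<Claim> objectivesAddressed();
    // Items 2 and 3: evidence and context defined locally, which must not be
    // contradicted when modules are composed.
    List<Evidence> evidencePresented();
    List<ContextItem> contextDefined();
    // Rely conditions (items 4-7): what this module needs from its peers.
    Set<Claim> argumentsRequiringSupport();
    Set<Claim> reliedUponObjectives();
    Set<Evidence> reliedUponEvidence();
    Set<ContextItem> reliedUponContext();
}
```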
2.1 Safety Case Module Composition
Safety case modules can be usefully composed if their objectives and arguments complement each other – i.e. one or more of the objectives supported by a module match one or more of the arguments requiring support in the other. For example, the software safety argument is usefully composed with the system safety argument if the software argument supports one or more of the objectives set by the system argument. At the same time, an important side-condition is that the collective evidence and assumed context of one module is consistent with that presented in the other. For example, an operational usage context assumed within the software safety argument must be consistent with that put forward within the system level argument. The definition of safety case module interfaces and satisfaction of conditions across interfaces upon composition is analogous to the long-established rely-guarantee approach to specifying the behaviour of software modules. Jones in [1] talks of ‘rely’ conditions that express the assumptions that can be made about the interrelations (interference) between operations and ‘guarantee’ conditions that constrain the end-effect assuming that the ‘rely’ conditions are satisfied. For a safety case module, the rely conditions can be thought of as items 4 to 7 (at the start of Section 2) of the interface whilst item 1 (objectives addressed) defines the guarantee conditions. Items 2 (evidence presented) and 3 (context defined) must continue to hold (i.e. not be contradicted by inconsistent evidence or context) during composition of modules. The defined context of one module may also conflict with the evidence presented in another. There may also simply be a problem of consistency between the system models defined within multiple modules. For example, assuming a conventional system safety argument / software safety argument decomposition (as defined by U.K. Defence Standards 00-56 [2] and 00-55 [3]) consistency must be assured between the state machine model of the software (which, in addition to modelling the internal state changes of the software, will almost inevitably model the external – system – triggers to state changes) and the system level view of the external stimuli. As with checking the consistency of safety analyses, the problem of checking the consistency of multiple, diversely represented, models is also a significant challenge in its own right.
2.2 The Challenge of Compositionality
It is widely recognised (e.g. by Perrow [4] and Leveson [5]) that relatively low risks are posed by independent component failures in safety-critical systems. However, even in a safety case architecture where modules are defined to correspond with a modular system structure, it is not expected that a complete, comprehensive and defensible argument can be achieved by merely composing the arguments of safety for individual system modules. Safety is a whole system, rather than a ‘sum of parts’, property. Combination of effects and emergent behaviour must be additionally addressed within the overall safety case architecture (i.e. within their own modules of the safety case). Modularity in reasoning should not be confused with modularity (and assumed independence) in system behaviour.
2.3 Safety Case Module ‘Contracts’
Where a successful match (composition) can be made of two or more modules, a contract should be recorded of the agreed relationship between the modules. This contract aids in assessing whether the relationship continues to hold and the (combined) argument continues to be sustained if at a later stage one of the argument modules is modified or a replacement module substituted. This is a commonplace approach in component-based software engineering where contracts are drawn up of the services a software component requires of, and provides to, its peer components, e.g. as in Meyer’s Eiffel contracts [6]. In software component contracts, if a component continues to fulfil its side of the contract with its peer components (regardless of internal component implementation detail or change) the overall system functionality is expected to be maintained. Similarly, contracts between safety case modules allow the overall argument to be sustained whilst the internal details of module arguments (including use of evidence) are changed or entirely substituted for alternative arguments, provided that the guarantees of the module contract continue to be upheld.
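Continuing the hypothetical sketch begun in Section 2 (same assumed types; not part of the paper), a module contract check might test that one module's guarantees cover another's rely conditions and that their defined and relied-upon contexts do not contradict each other:

```java
// Minimal sketch, assuming the SafetyCaseModule, Claim and ContextItem types
// sketched earlier; purely illustrative.
import java.util.HashSet;
import java.util.Set;

final class ModuleContract {
    // A composition is useful if the supplier's guaranteed objectives cover
    // one or more of the client's rely conditions (arguments requiring support).
    static boolean usefullyComposes(SafetyCaseModule client, SafetyCaseModule supplier) {
        Set<Claim> covered = new HashSet<>(client.argumentsRequiringSupport());
        covered.retainAll(supplier.objectivesAddressed());
        return !covered.isEmpty();
    }

    // Side-condition: the context one module defines must not contradict the
    // context the other relies upon (represented here as a simple identity check).
    static boolean contextConsistent(SafetyCaseModule a, SafetyCaseModule b) {
        return a.contextDefined().stream()
                .noneMatch(c -> b.reliedUponContext().stream()
                        .anyMatch(r -> r.id().equals(c.id())
                                && !r.assumption().equals(c.assumption())));
    }
}
```

As long as a substituted module still satisfies such a contract, the composed argument can be retained without re-examining the internals of its peers, mirroring the software component analogy in the text.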
2.4 Safety Case Architecture
We define safety case architecture as the high level organisation of the safety case into modules of argument and the interdependencies that exist between them. In deciding upon the partitioning of the safety case, many of the same principles apply as for system architecture definition, for example:
High Cohesion/Low Coupling – each safety case module should address a logically cohesive set of objectives and (to improve maintainability) should minimise the amount of cross-referencing to, and dependency on, other modules.
Supporting Work Division & Contractual Boundaries – module boundaries should be defined to correspond with the division of labour and organisational / contractual boundaries such that interfaces and responsibilities are clearly identified and documented.
Isolating Change – arguments that are expected to change (e.g. when making anticipated additions to system functionality) should ideally be located in modules separate from those modules where change to the argument is less likely (e.g. safety arguments concerning operating system integrity).
The principal aim in attempting to adopt a modular safety case architecture for IMA-based systems is for the modular structure of the safety case to correspond as far as is possible with the modular partitioning of the hardware and software of the actual system.
2.5 Reasoning about Interactions and Independence
One of the main impediments to reasoning separately about individual applications running on an IMA based architecture is the degree to which applications interact or interfere with one another. The European railways safety standard CENELEC ENV
50129 [7] makes an interesting distinction between those interactions between system components that are intentional (e.g. component X is meant to communicate with component Y) and those that are unintentional (e.g. the impact of electromagnetic interference generated by one component on another). A further observation made in ENV 50129 is that there is a class of interactions that are unintentional but created through intentional connections. An example of this form of interaction is the influence of a failed processing node that is ‘babbling’ and interfering with another node through the intentional connection of a shared databus. Ideally ‘once-for-all’ arguments are established by appeal to the properties of the IMA infrastructure to address unintentional interactions. For example, an argument of “non-interference through shared scheduler” could be established by appeal to the priority-based scheduling scheme offered by the scheduler. It is not possible to provide “once-for-all” arguments for the intentional interactions between components – as these can only be determined for a given configuration of components. However, it is desirable to separate those arguments addressing the logical intent of the interaction from those addressing the integrity of the medium of interaction. The following section describes how properties of the system architecture, such as those discussed above, can be explicitly considered as part of the architecture definition activity.
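One kind of evidence that could back a "non-interference through shared scheduler" claim is a schedulability check. The sketch below applies the classic Liu and Layland utilisation bound for rate-monotonic scheduling; the task set is invented for illustration and the test is a sufficient (not necessary) condition, so it is only one possible form such evidence could take.

```java
// Illustrative schedulability check; task data are hypothetical.
final class RateMonotonicCheck {
    record Task(String name, double worstCaseExecTimeMs, double periodMs) { }

    static boolean fitsUtilisationBound(Task[] tasks) {
        double utilisation = 0.0;
        for (Task t : tasks) {
            utilisation += t.worstCaseExecTimeMs() / t.periodMs();
        }
        int n = tasks.length;
        double bound = n * (Math.pow(2.0, 1.0 / n) - 1.0);  // n(2^(1/n) - 1)
        return utilisation <= bound;
    }

    public static void main(String[] args) {
        Task[] partition = {
            new Task("applicationA", 2.0, 10.0),
            new Task("applicationB", 5.0, 40.0),
            new Task("healthMonitor", 1.0, 100.0)
        };
        System.out.println("Within RM bound: " + fitsUtilisationBound(partition));
    }
}
```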
3 Evaluating Required Qualities during System Architecture Definition
In defining system architecture it is important to consider the following activities:
1. derivation of choices – identifies where different design solutions are available for satisfying a goal.
2. manage sensitivities – identifies dependencies between components such that consideration of whether and how to relax them can be made. A benefit of relaxing dependencies could be a reduced impact of change.
3. evaluation of options – allows questions to be derived whose answers can be used for identifying solutions that do or do not meet the system properties, judging how well the properties are met and indicating where refinements of the design might add benefit.
derivation of choices – identifies where different design solutions are available for satisfying a goal. manage sensitivities – identifies dependencies between components such that consideration of whether and how to relax them can be made. A benefit of relaxing dependencies could be a reduced impact to change. evaluation of options – allows questions to be derived whose answers can be used for identifying solutions that do/do not meet the system properties, judging how well the properties are met and indicating where refinements of the design might add benefit. influence on the design – identifies constraints on how components should be designed to support the meeting of the system’s overall objectives.
A technique (the Architecture Trade-Off Analysis Method – ATAM [8]) for evaluating architectures for their support of architectural qualities, and trade-offs in achieving those qualities, has been developed by the Software Engineering Institute. Our proposed approach is intended for use within the nine-step process of ATAM. The differences between our strategy and other existing approaches, e.g. ATAM, include the following.
326
Iain Bate and Tim Kelly
1. the techniques used in our approach are already accepted and widely used (e.g. nuclear propulsion system and missile system safety arguments) [2], and as such processes exist for ensuring the correctness and consistency of the results obtained. 2. the techniques offers: (a) strong traceability and a rigorous method for deriving the attributes and questions with which designs are analysed; (b) the ability to capture design rationale and assumptions which is essential if component reuse is to be achieved. 3. information generated from their original intended use can be reused, rather than repeating the effort. 4. the method is equally intended as a design technique to assist in the evaluation of the architectural design and implementation strategy as it is for evaluating a design at a particular fixed stages of the process. 3.1
Analysing Different Design Solutions and Performing Trade-Offs
Figure 1 provides a diagrammatic overview of the proposed method. Stage (1) of the trade-off analysis method is producing a model of the system to be assessed. This model should be decomposed to a uniform level of abstraction. Currently our work uses UML [9] for this purpose, however it could be applied to any modelling approach that clearly identifies components and their couplings. Arguments are then produced (stage (2)) for each coupling to a corresponding (but lower so that impact of later choices can be made) abstraction level than the system model. (An overview of Goal Structuring Notation symbols is shown in Figure 2, further details of the notation can be found in [10]) The arguments are derived from the top-level properties of the particular system being developed. The properties often of interest are lifecycle cost, dependability, and maintainability. Clearly these properties can be broken down further, e.g. dependability may be decomposed to reliability, safety, timing (as described in [11]). Safety may further involve providing guarantees of independence between functionality. In practice, the arguments should be generic or based on patterns where possible. Stage (3) then uses the information in the argument to derive options and evaluate particular solutions. Part of this activity uses representative scenarios to evaluate the solutions. Based on the findings of stage (3), the design is modified to fix problems that are identified – this may require stages (1)-(3) to be repeated to show the revised design is appropriate. When this is complete and all necessary design choices have been made, the process returns to stage (1) where the system is then decomposed to the next level of abstraction using guidance from the goal structure. Components reused from another context could be incorporated as part of the decomposition. Only proceeding when design choices and problem fixing are complete is preferred to allowing tradeoffs across components at different stages of decomposition because the abstractions and assumptions are consistent.
M
By
Stage 2 – Arguing about key properties
327
e fin Re sign De
Stage 1 – Modelling the system e ov pr gn Im esi D
ak e M Ch D O ulti oic esi pt pl e gn im e- s isa Cr tio iter n ia
Architectural Considerations in the Certification of Modular Systems
Stage 3(b) - Extracting Stage 3(a) – Elicitation and evaluation of choices questions from the arguments SCENARIOS
Stage 3(c) – Evaluating whether claims are satisfied
Fig. 1. Overview of the Method
A
Goal
Solution
Context
Assumption SolvedBy InContextOf
Choice
Fig. 2. Goal Structuring Notation (GSN) Symbols
Sensor
Actuator
Calculations
-value -health +read_data() +send_data()
-sensor_data -actuator_data -health +read_data() +send_data() +transform_data()
-value -health +read_data() +send_data()
Health Monitoring -system_health +read_data() +calculate_health() +perform_health() +update_maintainenance_state()
Fig. 3. Class Diagram for the Control Loop
3.2
Example – Simple Control System
The example being considered is a continuous control loop that has health monitoring to check for whether the loop is complying with the defined correct behaviour (i.e. accuracy, responsiveness and stability) and then takes appropriate actions if it does not. At the highest level of abstraction the control loop (the architectural model of which is shown in Figure 3) consists of three elements; a sensor, an actuator and a calculation stage. It should be noted that at this level, the design is abstract of whether the implementation is achieved via hardware or software. The requirements (key safety properties to be maintained are signified by (S), functional properties by (F) and non-functional properties by (NF), and explanations, where needed, in italics) to be met are: 1. the sensors have input limits (S) (F); 2. the actuators have input and output limits (S) (F);
328
Iain Bate and Tim Kelly
3. the overall process must allow the system to meet the desired control properties, i.e. responsiveness (dependent on errors caused by latency (NF)), stability (dependent on errors due to jitter (NF) and gain at particular frequency responses (F)) [6] (S); 4. where possible the system should allow components that are beginning to fail to be detected at an early stage by comparison with data from other sources (e.g. additional sensors) (NF). Early recognition would allow appropriate actions to be taken including the planning of maintenance activities. In practice as the system development progresses, the component design in Figure 3 would be refined to show more detail. For reasons of space only the calculation-health monitor coupling is considered. Stage 2 is concerned with producing arguments to support the meeting of objectives. The first one considered here is an objective obtained from decomposing an argument for dependability (the argument is not shown here due to space reasons) that the system’s components are able to tolerate timing errors (goal Timing). From an available argument pattern, the argument in Figure 4 was produced that reasons “Mechanisms in place to tolerate key errors in timing behaviour” where the context of the argument is health monitor component. Figure 4 shows how the argument is split into two parts. Firstly, evidence has to be obtained using appropriate verification techniques that the requirements are met in the implementation, e.g. when and in what order functionality should be performed. Secondly, the health monitor checks for unexpected behaviour. There are two ways in which unexpected behaviour can be detected (a choice is depicted by a black diamond in the arguments) – just one of the techniques could be used or a combination of the two ways. The first way is for the health-monitor component to rely entirely on the results of the internal health monitoring of the calculation component to indicate the current state of the calculations. The second way is for the health-monitor component to monitor the operation of the calculation component by observing the inputs and outputs to the calculation component. In the arguments, the leaf goals (generally at the bottom) have a diamond below them that indicates the development of that part of the argument is not yet complete. The evidence to be provided to support these goals should be quantitative in nature where possible, e.g. results of timing analysis to show timing requirements are met. Next an objective obtained from decomposing an argument for maintainability (again not shown here due to space reasons) that the system’s components are tolerant to changes is examined. The resultant argument in Figure 5 depicts how it is reasoned the “Component is robust to changes” in the context of the health-monitor component. There are two separate parts to this; making the integrity of the calculations less dependent on when they are performed, and making the integrity of the calculations less dependent on the values received (i.e. error-tolerant). For the first of these, we could either execute the software faster so that jitter is less of an issue, or we could use a robust algorithm that is less susceptible to the timing properties of the input data (i.e. more tolerant to jitter or the failure of values to arrive).
Architectural Considerations in the Certification of Modular Systems
C0010 Mechanism = Healthmonitoring component
Timing Mechanisms in place to tolerate key errors in timing behaviour
329
A0004 Appropriate steps taken when system changes
A
G0020 Sufficient information about the bounds of expected timing operation is obtained
C0009 Appropriate = correct, consistent and completeness
G0015 Timing requirements are specified appropriately
G0016 System implemented in a predictable way
C0010 Expected temporal behaviour concerns when and the order in which functionality is performed
G0017 Verification techniques available to prove the requirements are met
G0021 Operation is monitored and unexpected behaviour handled
G0022 Health monitor relies on health information provided to it
G0023 Health monitor performs checks based on provided information
Fig. 4. Timing Argument
C0012 Component = health monitoring
A0002 The integrity is related to frequency, latency and jitter
G0002 Component is robust to changes
G0011 Make operations integrity less susceptible to time variations
G0012 Make operations integrity less dependent on value
A
C0007 Plant = system under control
G0013 Perform functionality faster than the plant's fastest frequency
G0014 Make calculations integrity less dependent on input data's timing properties
C0008 Robust algorithms e.g. H-infinity
Fig. 5. Minimising Change Argument
The next stage (stage 3(a)) in the approach is the elicitation and evaluation of choices. This stage extracts the choices, and considers their relative pros and cons. The results are presented in Table 1. From Table 1 it can be seen that some of the choices that need to be made about individual components are affected by choices made by other components within the system. For instance, Goal G0014 is a design option of having a more complicated algorithm that is more resilient changes to and variations in the system’s timing properties. However Goal G0014 is in opposition to Goal G0023 since it would make the health-monitoring component more complex. Stage 3(b) then extracts questions from the argument that can then be used to evaluate whether particular solutions (stage 3(c)) meets the claims from the arguments generated earlier in the process. Table 2 presents some of the results of extracting questions from the arguments for claim G0011 and its assumption A0002 from Figure 5. The table includes an evaluation of a solution based on a PID (Proportional Integration Differentiation) loop.
330
Iain Bate and Tim Kelly Table 1. Choices Extracted from the Arguments Content
Choice Goal G0022 Health monitor relies on health information provided to it Goal G0023- Health monitor performs checks based on provided information
Pros Simplicity since health monitor doesn’t need to access and interpret another component’s Goal G0021 state. Operation is monitored and Omission failures unexpected easily detected and behaviour integrity of handled calculations maintained assuming data provided is correct. Goal G0013 – Perform Simple algorithms can functionality faster be used. than the plant’s fastest These algorithms take Goal G0011 less execution time. Make operations frequency. integrity less Goal G0014 - Make Period and deadline susceptible to time calculations’ integrity constraints relaxed. variations less dependent on Effects of failures may input data’s timing be reduced. properties.
Cons Can a failing/failed component be trusted to interpret error-free data. Health monitor is more complex and prone to change due to dependence on the component. Period and deadline constraints are tighter. Effects of failures are more significant. More complicated algorithms have to be used. Algorithms may take more execution time.
Table 2 shows how questions for a particular coupling have different importance associated (e.g. Essential versus Value Added). These relate to properties that must be upheld or those whose handling in a different manner may add benefit (e.g. reduced susceptibility to change). The responses are only partially for the solution considered due to the lack of other design information. As the design evolves the level of detail contained in the table would increase and the table would then be populated with evidence from verification activities, e.g. timing analysis. With the principles that we have established for organising the safety case structure “in-the-large”, and the complementary approach we have described for reasoning about the required properties of the system architecture, we believe it is possible to create a flexible, modular, certification argument for IMA. This is discussed in the following section. Table 2. Evaluation Based on Argument
Question Goal G0011 - Can the integrity of the operations be justified? Assumption A0002 - Can the dependency between the operation’s integrity and the timing properties be relaxed?
Importance Response Design Mod. Essential More design Dependent on information response to needed questions Value Only by changing Results of other Added control algorithm trade-off used analysis needed
Architectural Considerations in the Certification of Modular Systems
4
331
Example Safety Case Architecture for a Modular System
The principles of defining system and safety case architecture discussed in this paper are embodied in the safety case architecture shown in Figure 6. (The UML package notation is used to represent safety case modules.) The role of each of the modules of the safety case architecture shown in Figure 6 is as follows: • • • • •
• • •
•
•
•
ApplnAArg - Specific argument for the safety of Application A (one required for each application within the configuration) CompilationArg - Argument of the correctness of the compilation process. Ideally established once-for-all. HardwareArg - Argument for the correct execution of software on target hardware. Ideally an abstract argument established once-for-all leading to support from specific modules for particular hardware choices. ResourcingArg - Overall argument concerning the sufficiency of access to, and integrity of, resources (including time, memory, and communications) ApplnInteractionArg - Argument addressing the interactions between applications, split into two legs: one concerning intentional interactions, the second concerning unintentional interactions (leading to the NonInterfArg Module) InteractionIntArg - Argument addressing the integrity of mechanism used for intentional interaction between applications. Supporting module for ApplnInteractionArg. Ideally defined once-for-all. NonInterfArg - Argument addressing unintentional interactions (e.g. corruption of shared memory) between applications. Supporting module for ApplnteractionArg. Ideally defined once-for-all PlatFaultMgtArg - Argument concerning the platform fault management strategy (e.g. addressing the general mechanisms of detecting value and timing faults, locking out faulty resources). Ideally established once-for-all. (NB Platform fault management can be augmented by additional management at the application level). ModeChangeArg - Argument concerning the ability of the platform to dynamically reconfigure applications (e.g. move application from one processing unit to another) either due to a mode change or as requested as part of the platform fault management strategy. This argument will address state preservation and recovery. SpecificConfigArg - Module arguing the safety of the specific configuration of applications running on the platform. Module supported by once-for-all argument concerning the safety of configuration rules and specific modules addressing application safety. TopLevelArg - The top level (once-for-all) argument of the safety of the platform (in any of its possible configurations) that defines the top level safety case architecture (use of other modules as defined above).
332
• •
Iain Bate and Tim Kelly
ConfigurationRulesArg - Module arguing the safety of a defined set of rules governing the possible combinations and configurations of applications on the platform. Ideally defined once-for-all. TransientArg - Module arguing the safety of the platform during transient phases (e.g. start-up and shut-down).
An important distinction is drawn above between those arguments that ideally can be established as ‘once-for-all’ arguments that hold regardless of the specific applications placed on the architecture (and should therefore be unaffected by application change) and those that are configuration dependent. In the same way as there is an infrastructure to the IMA system itself the safety case modules that are established once for all possible application configurations form the infrastructure of this particular safety case architecture. These modules (e.g. NonInterfArg) establish core safety claims such as non-interference between applications by appeal to properties of the underlying system infrastructures. These properties can then be relied upon by the application level arguments. TopLevelArg Top Level System Argument for the platform + configured applications
SpecificConfigArg Safety argument for the specific configuration of the system
ApplnAArg
ApplnBArg
Specific safety arguments concerning the functionality of Application A
Specific safety arguments concerning the functionality of Application B
ConfigRulesArg ApplnInteractionArg Argument for the safety of interactions between applications
Safety argument based upon an allowable set of configurations
Hardware Arg
CompilationArg (As Example) Arguments of the integrity of the compilation path
NonInterfArg
InteractionIntArg
Arguments of the absence of non-intentional interference between applications
Arguments concerning the integrity of intentional mechanisms for application interaction
Arguments of the correct execution of software on target hardware
PlatformArg Arguments concerning the integrity of the general purpose platform
PlatFaultMgtArg Argument concerning the platform fault management strategy
ResourcingArg Arguments concerning the sufficiency of access to, and integrity of, resources
TransientArg Arguments of the safety of the platform during transient phases
Fig. 6. Safety Case Architecture of Modularised IMA Safety Argument
5
Conclusions
In order to reap the potential benefits of modular construction of safety critical and safety related systems a modular approach to safety case construction and acceptance is also required. This paper has addressed a method to support architectural design and implementation strategy trade-off analysis, one of the key parts of component-based development. Specifically, the method presented provides guidance when decomposing systems so that the system’s objectives are met and deciding what functionality the components should fulfil in-order to achieve the remaining objectives.
Architectural Considerations in the Certification of Modular Systems
333
References 1.
Jones, C. Specification and design (parallel) programs. in IFIP Information Processing 83. 1983: Elsevier. 2. MoD, 00-56 Safety Management Requirements for Defence Systems. 1996, Ministry of Defence. 3. MoD, 00-55 Requirements of Safety Related Software in Defence Equipment. 1997, Ministry of Defence. 4. Perrow, C., Normal Accidents: living with high-risk technologies. 1984: Basic Books. 5. Leveson, N.G., Safeware: System Safety and Computers. 1995: Addison-Wesley. 6. Meyer, B., Applying Design by Contract. IEEE Computer, 1992. 25(10): p. 4052. 7. CENELEC, Safety-related electronic systems for signalling, European Committee for Electrotechnical Standardisation: Brussels. 8. Kazman, R., M. Klein, and P. Clements, Evaluating Software Architectures Methods and Case Studies. 2001: Addison-Wesley. 9. Douglass, B., Real-Time UML. 1998: Addison Wesley. 10. Kelly, T.P., Arguing Safety - A Systematic Approach to Safety Case Management. 1998, Department of Computer Science, University of York. 11. Laprie, J.-C. Dependable Computing and Fault Tolerance: Concepts and Terminology. 1985. 15th International Symposium on Fault Tolerant Computing (FTCS-15).
A Problem-Oriented Approach to Common Criteria Certification Thomas Rottke1 , Denis Hatebur1 , Maritta Heisel2 , and Monika Heiner3 1
3
¨ TUViT GmbH, System- und Softwarequalit¨ at Am Technologiepark 1, 45032 Essen, Germany {t.rottke,d.hatebur}@tuvit.de 2 Institut f¨ ur Praktische Informatik und Medieninformatik Technische Universit¨ at Ilmenau 98693 Ilmenau, Germany
[email protected] Brandenburgische Technische Universit¨ at Cottbus, Institut f¨ ur Informatik 03013 Cottbus, Germany
[email protected] Abstract. There is an increasing demand to certify the security of systems according to the Common Criteria (CC). The CC distinguish several evaluation assurance levels (EALs), level EAL7 being the highest and requiring the application of formal techniques. We present a method for requirements engineering and (semi-formal and formal) modeling of systems to be certified according to the higher evaluation assurance levels of the CC. The method is problem oriented, i.e. it is driven by the environment in which the system will operate and by a mission statement. We illustrate our approach by an industrial case study, namely an electronic purse card (EPC) to be implemented on a Java Smart Card. As a novelty, we treat the mutual asymmetric authentication of the card and the terminal into which the card is inserted.
1
Introduction
In daily life, security-critical systems play a more and more important role. For example, smart cards are used for an increasing number of purposes, and e-commerce and other security-critical internet activities become increasingly common. As a consequence, there is a growing demand to certify the security of systems. The common criteria (CC) [1] are an international standard that is used to assess the security of IT products and systems. The CC distinguish several evaluation assurance levels (EALs), level EAL7 being the highest and requiring the application of formal techniques even in the high-level design. Whereas the CC state conditions to be met by secure systems, they do not assist in constructing the systems in such a way that the criteria are met. In this paper, we present a method for requirements engineering and (semi-formal or formal) modeling of systems to be certified according to the higher evaluation S. Anderson et al. (Eds.): SAFECOMP 2002, LNCS 2434, pp. 334–346, 2002. c Springer-Verlag Berlin Heidelberg 2002
A Problem-Oriented Approach to Common Criteria Certification
335
¨ assurance levels of the CC. This method is used by TUViT Essen1 in supporting product evaluations. The distinguishing feature of our method is its problem orientation. First, problem orientation means that the starting point of the system modeling is an explicit mission statement, which is expressed in terms of the application domain. This approach is well established in systems engineering [3], but in contrast to other requirements engineering approaches, especially in software engineering [9]. Such a mission statement consists of the following parts: 1. 2. 3. 4.
external actors and processes objective/mission of the system system services quality of services
From our experience, the mission statement provides the main criteria for assessing, prioritizing and interpreting requirements. Second, problem orientation means that we do not only model the system to be constructed, but also its environment, as proposed by Jackson [6]. This approach has several advantages: – Without modeling the environment, only trivial security properties can be assessed. For example, an intruder who belongs to the environment must be taken into account to demonstrate that a system is secure against certain attacks. – With the model of the environment, a test system can be constructed at the same time as the system itself. – The problem oriented approach results in a strong correspondence between the reality and the model, which greatly enhances validation and verification activities as they are required for certification. In the CC, the environment of the system is contained only indirectly via subjects, which must be part of the security policy model. Another difference to the CC is that our method not only takes into account the new system to be constructed, but also performs an analysis of the current system in its environment. Figure 1 shows the most important documents that have to be constructed and evaluated for certification: – The objective of the security target (ST) is to specify the desired security properties of the system in question by means of security requirements and assurance measures. – The security policy model (SPM) shows the interaction between the system and its environment. This model provides a correspondence between the functional requirements and the functional specification enforced by the security policy. 1
¨ TUViT is an independent organization that performs IT safety and security evaluation.
336
Thomas Rottke et al.
ST (Security Target)
RCR
Environment
FSP Functional Specification
TOE-Description Security Security Objectives Objectives
SPM
Security Policy Model
High-Level -Level-Design -
Functional Functional Requirements
LLD
Summary Summary Specification
IMP IMP
formal
RCR
HLD HLD
RCR
Low-Level-Design Low-Level-
Implementation
semi-formal
informal
Fig. 1. CC documents for development – The functional specification (FSP), high-level-design (HLD), low-level-design (LLD) and implementation (IMP) are development documents that are subject to evaluation. – In addition to the development documents, representation correspondences (RCR) documentation is required to ensure that appropriate refinement steps have been performed in the development documents. In the following, we describe our problem oriented approach in Section 2, and then illustrate it in Section 3 by an industrial case study, namely an EPC to be implemented on a Java Smart Card. As a novelty, we treat the mutual asymmetric authentication of the card and the terminal into which the card is inserted. In this case study, we use the notations SDL [2], message sequence charts (MSCs) [5], and colored Petri nets [7]. Finally, we sum up the merits of our method and point out directions for future work.
2
The Method
Our method gives guidance how to develop the documents required for a CC certification in a systematic way. Because of the systematic development and the use of semi-formal and formal notations, the developed documents can readily be evaluated for conformance with the CC. To express our method, we use the agenda concept [4]. An agenda is a list of steps or phases to be performed when carrying out some task in the context of software engineering. The result of the task will be a document expressed in some language. Agendas contain informal descriptions of the steps, which may depend on each other. Agendas are not only a means to guide software development activities. They also support quality assurance because the steps may have validation conditions associated with them. These validation conditions state necessary semantic conditions that the developed artifact must fulfill in order to serve its purpose properly. Table 1 gives an overview of the method. Note that the method does not terminate with Phase 4. There are two more phases that are beyond the scope
A Problem-Oriented Approach to Common Criteria Certification
337
Table 1. Agenda for problem oriented requirements engineering and system modeling Phase 1. Problem oriented requirements capture 2. Analysis of current system
Content list of requirements
Format CC Documents informal —
description of current informal system status or semiformal 3. Problem description of desired informal oriented require- system status, mis- or semiments analysis sion statement formal
—
Validation reviews
reviews
ST: environment, TOE description,security objectives
each statement of phase 1 must be incorporated; internal consistency must be guaranteed. 4. Problem ori- context diagram, sys- possibly ST: functional see sub-agenda, ented modeling tem interface descrip- formal requirements, Table 2. tions, system environsummary specment description ification; FSP, SPM
of this paper. In Phase 5, the model constructed in Phase 4 is validated. Finally, the model is refined by constructing a high-level design, a low-level design and an implementation (Phase 6). In this paper, however, we concentrate on the systematic development of the requirements and specification documents. Validation and refinement issues will be treated in separate papers. Setting up the documents required by the CC need not necessarily proceed in the order prescribed by the CC outline. Our process proceeds in a slightly different order. The “CC Documents” column in Table 1 shows in which phases which CC documents are developed. The purpose of Phase 1 is to collect the requirements for the system. These requirements are expressed in the terminology of the system environment or the application domain, respectively. Requirements capture is performed by conducting interviews and studying documents. The results of Phase 1 are validated by reviewing the minutes of the interviews together with the interview partner and by reviewing the used documents with their authors. In Phase 2, the current state of affairs must be described, analyzed and assessed. External actors and entities must be identified; functionalities must be described and decomposed. The result of this phase are domain-specific rules, as well as descriptions of the strengths and weaknesses of the current system. As in Phase 1, the validation of the produced results is done by reviews. The results of Phases 1 and 2 are not covered by the CC. However, they are needed to provide a firm basis for the preparation of the CC documents.
338
Thomas Rottke et al.
Table 2. Agenda for problem oriented system modeling Phase 4.1. Context modeling 4.2. Define constraints and system properties
Content
Format
structure of system em- context bedded in its environment diagram TOE security require- instanments, security require- tiated ments of environment text from CC part 2 catalogue 4.3. In- data formats, system be- data dicterface havior at external inter- tionary, definition faces MSCs
CC Docu- Validation ments SPM part 1 must be compatible with Phase 3 ST: func- see Phase 3 tional requirements, summary specification FSP, part 1 each service contained in the mission statement must be modeled SPM, part 2 must be compatible with Phases 3 and 4.1
4.4. Model- external components and CEFSM ing of system their behavior, environenvironment mental constrains and assumptions 4.5 Model- service specifications informal FSP, part 2 see Phase 4.3 ing of system text and services CEFSM
The goal of the requirements analysis, i.e. Phase 3, is to qualitatively describe which purpose the new system serves in its environment, and which services it must provide. Strict requirements and constraints for the new system are set up. As in Phase 2 for the existing system, external actors and entities are identified for the desired system. The requirements captured in Phase 1 can now be made more concrete. Thus, the mission statement is set up. The validation condition associated with Phase 3 requires that all requirements captured in Phase 1 be taken into account in the mission statement. In contrast to Phase 2, which makes descriptive statements, Phase 3 makes prescriptive statements. The purpose of Phase 4 is to define the system and its environment. The system entities and their attributes are defined, as well as the processes and procedures they are involved in. Case distinctions imposed by domain rules are identified with respect to the entities and processes. This phase consists of five sub-phases, which are also represented as an agenda, see Table 2. In Phase 4.1, the boundary between the system and its environment is defined. In Phase 4.2, the security requirements for the target of evaluation (TOE) and for the system environment are instantiated from the CC by defining constraints or properties.
A Problem-Oriented Approach to Common Criteria Certification
339
In Phase 4.3, the interface of the system is specified in detail. MSCs are used to represent traces of system services. In Phases 4.4 and 4.5, communicating extended finite state machines (CEFSMs) are set up for the environment as well as for each system service identified in the mission statement. For each service, functional as well as non-functional properties must be defined.
3
Case Study: Electronic Purse Card (EPC)
We now illustrate the method presented in Section 2 by an industrial case study2 . EPCs are introduced to replace Eurocheque (EC) cards for the payment of goods purchased in retail stores. For reasons of space, we can present only parts of the documents produced when performing our method. As a novelty, we define an EPC system that uses mutual asymmetric authentication. This security protocol guarantees mandatory authenticity of two communication partners. In contrast to the symmetric authentication, the asymmetric authentication procedures have the advantage that they do not need a common secret between the partners. In case of asymmetric authentication, each communication partner has its own key pair which is generated independently from other key pairs within the terminals and the cards, respectively. A personalization authority initiates the key generation within the components and the signing of the public key by a certification authority to ensure the correctness of the key generation procedure. By using asymmetric authentication, the e-cash procedure becomes open to other terminal manufacturers and card emitters, as long as their components are personalized by the personalization authority. In the following, we sketch each of the development phases introduced in Section 2. Phase 1: problem oriented requirements capture. Requirements for the EPC system include: – Payment must be simple for all participants. – Payment is anonymous, but at the same time authentic; non-repudiation is guaranteed. – Stolen EPCs cannot be used. – EPCs and terminals can be neither intercepted nor forged. Phase 2: analysis of current system. In this phase, it is described how payment with EC cards proceeds. Figure 2 shows the different stages. Examples of domain-specific rules are that a personal identification number (PIN) has four digits, and that it must be counted how often a wrong PIN has been entered. Some weaknesses of the EC card system are that payments are not anonymous, that the access to the customer’s account is protected only by the PIN, 2
¨ Similar systems have been evaluated by TUViT, Essen.
340
Thomas Rottke et al.
printing receipt
close connection to transaction system
remove EC card from terminal
posting transaction
PIN verification
establish connection to transaction system
commit amount
input PIN
input amount in terminal
insert EC card in card reader
buy with EC card
Fig. 2. Payment with Eurocheque card buy with EPC
printing receipt
remove EPC from terminal
pay with EPC
transfer amount
commit amount
input PIN
authentication of card and terminal
input amount in terminal
insert EPC into terminal
PIN verification
account money on EPC
personalize EPC
Fig. 3. EPC system
and that the connection between the terminal and EC card is insecure. Hence, customer profiles can be constructed, the customer can be damaged by revelation of the PIN, and the system is not protected against man-in-the-middle-attacks. Phase 3: problem oriented requirements analysis. EPCs function differently from EC cards. Before the customer can pay with the EPC, the card must be loaded. For this purpose, it must be inserted into a bank terminal, and the PIN and the desired amount must be entered. Purchasing goods with the EPC proceeds similarly as with the EC card, but the amount is debited from the card instead of from the customer’s account. Moreover, the bank and cash terminals and the EPC must be personalized by a personalization authority. This means that a pair of keys (a public and a private one) is generated for each component, where the public key is certified by a certification authority. Figure 3 shows the desired services of the EPC system. The EPC system, too, has potential weaknesses. For example, a man-inthe-middle attack or spying out the PIN may be possible. An analysis of such weaknesses and the corresponding attack scenarios lead to the following security goals: – Debiting the customer account is done in a secure environment (bank terminal). – An EPC is useless if its secrets are unknown. – Neither the cards nor the terminals can be copied or forged.
A Problem-Oriented Approach to Common Criteria Certification
341
– The connections between the terminals and the EPC are encrypted, so that intercepting those connections is useless. – Transactions take place only between authenticated components. The following assumptions concerning the environment must be made: – The personalization of the card is secure. – The bank terminals are installed in a protected area and cannot be intercepted. Now, the requirements set up in Phase 1 can be made more concrete. Simplicity of payment means that a payment is performed just by debiting the EPC. The customers only need to type their PIN and to confirm the transaction. The store personnel only needs to specify the amount and to hand the printed receipt to the customer. Anonymity is guaranteed, because the only documentation of the payment is the customer receipt. Because of the authentication mechanism, authenticity and non-repudiation are guaranteed. Stolen cards cannot be used, because the PIN is known only to the card holder, and the card secret will not be revealed by an authenticated terminal. Interception is made useless by encryption, and copying cards is prevented by preventing direct access to the physical card storage. Inh this paper, we have only given an informal sketch of Phase 3. In real-life projects, this phase is performed much more thoroughly. Phase 4.1: context modeling. We present two different documents that show the system and its embedding in its environment. Figure 4 shows the security policy model for the EPC system in SDL notation. It shows the EPC in its environment consisting of an intruder, a terminal, a personalization and a certification authority (CA). The personalization and CA components are not the main concern of our discourse and are therefore drawn with dotted lines. The terminal is used for e-cash transactions. It is personalized, which means that it has a key pair, and its public key is signed by certification authority. The intruder models a man-inthe-middle attack, i.e. the intruder intercepts the communication between card and terminal and can therefore attack both the card and the terminal. The EPC is the target of evaluation (TOE). The card application includes functionality for mutual asymmetric authentication, PIN check, credit and debit transactions. It is assumed that the card is personalized. The components in the SPM are interconnected by channels, shown in the diagram by inscripted arcs. The external channels chU ser and chCard represent the interactions between T erminal and U ser and between T erminal and CardReader. The internal channels connect T erminal and Intruder or Intruder and EP C, respectively. If a system is to be certified according to EAL7, we need a completely formal model, which we express as a colored Petri net (CPN). Phase 4.2: define constraints and system properties. As an example for the CC part 2 catalogue we take the component FCS COP.1.1 (Cryptographic operation from class cryptographic support):
FCS_COP.1.1 The TSF (TOE security function) shall perform [assignment: list of cryptographic operations] in accordance with a specified cryptographic algorithm [assignment: cryptographic algorithm] and cryptographic key sizes [assignment: cryptographic key sizes] that meet the following: [assignment: list of standards].

For our EPC system, this component is instantiated as follows:

FCS_COP.1.1 The TSF shall perform the mutual authentication procedure in accordance with the specified cryptographic algorithm RSA and cryptographic key sizes of 1024 bits that meet the following: IEEE 1363.

In addition, it is necessary to follow all dependencies between the components. In this case, the FCS_COP.1.1 component requires the cryptographic key generation component to be included as well.
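As an illustration only (not part of the method itself), such an instantiated component could be recorded as structured data, for example for traceability in the certification documents. The following minimal Python sketch is our own; the record type, the field names, and the dependency identifier FCS_CKM.1 are assumptions and are not prescribed by the CC or by this paper.

from dataclasses import dataclass, field
from typing import List

@dataclass
class SFRInstance:
    """One instantiated security functional requirement (SFR) from CC part 2."""
    component: str           # CC component identifier
    operations: List[str]    # [assignment: list of cryptographic operations]
    algorithm: str           # [assignment: cryptographic algorithm]
    key_size_bits: int       # [assignment: cryptographic key sizes]
    standards: List[str]     # [assignment: list of standards]
    depends_on: List[str] = field(default_factory=list)

# Instantiation of FCS_COP.1.1 for the EPC system, as stated in the text.
fcs_cop_1_1 = SFRInstance(
    component="FCS_COP.1.1",
    operations=["mutual authentication procedure"],
    algorithm="RSA",
    key_size_bits=1024,
    standards=["IEEE 1363"],
    depends_on=["FCS_CKM.1"],  # cryptographic key generation (assumed identifier)
)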
Phase 4.3: interface definition. As an example, we consider the asymmetric authentication protocol. It is specified by means of a message sequence chart (MSC), which is in fact the common specification technique for technical protocols.

Fig. 5. MSC of authentication protocol (a successful mutual authentication between Terminal and EPC, relayed by the Intruder: the sTransferKey, sGetRandom, and sMutualAuth exchanges with their signature checks and the calculation of the session key)
Figure 5 shows a successful asymmetric mutual authentication between a terminal and an EPC, intercepted by an intruder. The sequence of actions can be divided into three phases:

– public key exchange and check of the public key signatures using the public key of the CA
– random number request by the terminal
– authentication and session key generation

Each phase starts with a command issued by the terminal, followed by an EPC response; an executable sketch of this exchange is given below.
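The following is a minimal Python illustration of the message flow of Fig. 5, not of its cryptography: the primitives makeSig, checkSig, encr, decr, and calcSK are replaced by trivial stand-ins (tagged tuples and XOR instead of RSA), the intruder is omitted, and all names are our own renderings of the identifiers in the figure.

import secrets

# --- toy stand-ins for the primitives of Fig. 5 (NOT real cryptography) ---
def gen_keypair():
    k = secrets.token_hex(4)
    return ("prv", k), ("pub", k)

def make_sig(prv, data):                    # makeSig_prv(data)
    return ("sig", prv[1], data)

def check_sig(pub, sig, data):              # checkSig_pub(sig, data)
    assert sig == ("sig", pub[1], data), "signature check failed"

def encr(pub, data):                        # encr_pub(data)
    return ("enc", pub[1], data)

def decr(prv, ct):                          # decr_prv(ct)
    tag, k, data = ct
    assert tag == "enc" and k == prv[1], "decryption failed"
    return data

def gen_random():
    return secrets.randbits(32)

def calc_sk(a, b):                          # calcSK(a, b)
    return a ^ b

# Personalization: each side holds a key pair; both know the CA public key.
ca_prv, ca_pub = gen_keypair()
term_prv, term_pub = gen_keypair()
card_prv, card_pub = gen_keypair()
sig_term_pub = make_sig(ca_prv, term_pub)   # certificates issued by the CA
sig_card_pub = make_sig(ca_prv, card_pub)

# Phase 1: public key exchange and certificate check (sTransferKey)
check_sig(ca_pub, sig_term_pub, term_pub)   # card checks the terminal certificate
check_sig(ca_pub, sig_card_pub, card_pub)   # terminal checks the card certificate

# Phase 2: random number request by the terminal (sGetRandom)
rnd_card = gen_random()

# Phase 3: authentication and session key generation (sMutualAuth)
rnd_term, sk_rnd_term = gen_random(), gen_random()
sig_rnd_card = make_sig(term_prv, rnd_card)              # terminal signs the card's challenge
msg = (sig_rnd_card, encr(card_pub, rnd_term), encr(card_pub, sk_rnd_term))

check_sig(term_pub, msg[0], rnd_card)                    # card verifies the terminal
sk_rnd_card = gen_random()
sig_rnd_term = make_sig(card_prv, decr(card_prv, msg[1]))
reply = (sig_rnd_term, encr(term_pub, sk_rnd_card))
card_session_key = calc_sk(sk_rnd_card, decr(card_prv, msg[2]))

check_sig(card_pub, reply[0], rnd_term)                  # terminal verifies the card
term_session_key = calc_sk(decr(term_prv, reply[1]), sk_rnd_term)
assert card_session_key == term_session_key              # both ends share the session key

Running the sketch only confirms that, under these simplifying assumptions, both sides derive the same session key after the three phases; any statement about the real protocol still requires the formal SPM and CPN analysis described in this paper.

Phase 4.4: modeling of system environment. For reasons of space, we cannot present the CEFSMs representing the environment.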
Fig. 6. SDL definition of authentication (the process CardMutualAuthentication with the states PersonalizedAndInserted, WaitForGetRandom, WaitForMutualAuth, and TermAuthenticated, and the transitions triggered by sTransferKey, sGetRandom, and sMutualAuth)
Phase 4.5: modeling of system services. We consider the authentication service, for which we present an SDL and a CPN version. The EPC part of the asymmetric authentication protocol is modeled as an SDL state machine, see Figure 6. The three phases of the MSC are reflected by the three parts of the state machine. The state machine starts in the state PersonalizedAndInserted.

1. The phase "Public Key Exchange and Check of Public Key Signature" leads to the state WaitForGetRandom or remains in the state PersonalizedAndInserted.
2. The phase "Random Number Request by Terminal" leads to the state WaitForMutualAuth or back to the state PersonalizedAndInserted.
3. The phase "Authentication and Session Key Generation" leads to the state TermAuthenticated or back to the state PersonalizedAndInserted.

An executable sketch of this protocol machine is given below.
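Purely as an illustration of the state and transition structure just described (and not of the authors' SDL or CPN models), the EPC protocol machine of Fig. 6 can be sketched as a small table-driven state machine in Python; the dictionary encoding and the simplified handling of unexpected signals are our own assumptions.

# States and signals follow Fig. 6; any signal that is not expected in the
# current state (the '*' transitions) returns the machine to the initial state.
TRANSITIONS = {
    ("PersonalizedAndInserted", "sTransferKey"): {
        True:  ("sTransferKeyReturn", "WaitForGetRandom"),       # CheckSig = OK
        False: ("sTransferKeyError", "PersonalizedAndInserted"), # CheckSig = Err
    },
    ("WaitForGetRandom", "sGetRandom"): {
        True:  ("sGetRandomReturn", "WaitForMutualAuth"),        # generate random
    },
    ("WaitForMutualAuth", "sMutualAuth"): {
        True:  ("sMutualAuthReturn", "TermAuthenticated"),       # CheckSig = OK
        False: ("sMutualAuthError", "PersonalizedAndInserted"),  # CheckSig = Err
    },
}

def step(state, signal, check_ok=True):
    """Return (response, next_state) for one input signal."""
    options = TRANSITIONS.get((state, signal))
    if options is None or check_ok not in options:
        return (None, "PersonalizedAndInserted")
    return options[check_ok]

# A successful run through the three phases:
state = "PersonalizedAndInserted"
for sig in ("sTransferKey", "sGetRandom", "sMutualAuth"):
    response, state = step(state, sig)
assert state == "TermAuthenticated"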
The state machine is now modeled formally in CPN, see Figure 7. It is just a translation of the SDL protocol machine into CPN. Therefore the CPN model also contains the same three phases, states, and transitions. In CPN, the states of the state machine are modeled by places for simple tokens.

Fig. 7. CPN model of authentication (places for the states PersonalizedAndInserted, WaitForGetRandom, WaitForMutualAuth, and TerminalAuthenticated, the channel places tslIntr2CC and tslCC2Intr, and transitions guarded by the CheckSig conditions)
The channels are also modeled by places, but using more complex tokens (colors). Arc inscriptions are used to model the functionality of the transitions, and conditions are modeled with the CPN guard mechanism. In the subsequent phases of the method, these documents will be further validated and refined. Because of the formal nature of the documents, the required security properties can be demonstrated in a routine way [8].
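To make the place/token/guard reading of Fig. 7 more concrete, the following is a small, hypothetical Python sketch of how a single transition (the successful sTransferKey step) could be simulated: places hold multisets of tokens, the guard models the CheckSig = OK condition, and the arc inscriptions determine which tokens are consumed and produced. This is our own illustration, not the CPN semantics or tooling used by the authors.

from collections import Counter

# Places: the channel places hold signal tokens ("colors"); the state place
# holds simple state tokens, as described in the text.
places = {
    "tslIntr2CC": Counter({("sTransferKey", "good_signature"): 1}),  # incoming channel
    "tslCC2Intr": Counter(),                                         # outgoing channel
    "ActualState": Counter({"PersonalizedAndInserted": 1}),
}

def fire_check_sig_ok(p):
    """Transition 'CheckSig = OK': consumes an sTransferKey token and the state
    token, produces sTransferKeyReturn and the new state (guard must hold)."""
    token = next((t for t in p["tslIntr2CC"] if t[0] == "sTransferKey"), None)
    if token is None or p["ActualState"]["PersonalizedAndInserted"] == 0:
        return False                       # transition not enabled
    if token[1] != "good_signature":       # guard: CheckSig = OK
        return False
    p["tslIntr2CC"][token] -= 1            # consume input tokens
    p["ActualState"]["PersonalizedAndInserted"] -= 1
    p["tslCC2Intr"][("sTransferKeyReturn",)] += 1   # produce output tokens
    p["ActualState"]["WaitForGetRandom"] += 1
    return True

assert fire_check_sig_ok(places)
assert places["ActualState"]["WaitForGetRandom"] == 1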
4 Conclusions
We have presented a method to model systems in such a way that their certification according to the higher levels of the CC is well prepared. This method shows that problem analysis can be performed in an analytic and systematic way, even though problem analysis is often regarded as an unstructured task that needs, above all, creative techniques. In contrast, we are convinced that problem
analysis requires sound engineering techniques to achieve non-trivial and high-quality results. Using our method, the formal documents required by the CC can be developed in an appropriate way.

The problem orientation of the method ensures a high degree of correspondence between the system model and reality. This is due to the facts that the modeling process is oriented on the system mission and that the requirements are analyzed in terms of the application domain. Such a correspondence is crucial: if it is not given, inadequate models may be set up, and such inadequate models may have serious consequences. Relevant properties may be impossible to prove; instead, irrelevant properties may be proven, which would lead to unjustified trust in the system.

Our method is systematic and thus repeatable, and it gives guidance on how to model security properties. The risk of omissions is reduced, because the agenda directs the attention of the system engineers to the relevant points. Because of our method, we are now able to suggest some improvements to the CC. Until now, the CC has required security models only for access control policies and information flow policies, because only these belonged to the state of the art. By modeling the system environment, we have succeeded in setting up a formal model also for authentication. To the best of our knowledge, we are the first to propose such a systematic, problem-oriented approach to CC certification. In the future, we will work on validation and refinement in general, and on a complete validation of the authentication SPM in particular.
References

[1] Common Criteria. See http://www.commoncriteria.org/. 334
[2] F. Belina and D. Hogrefe. The CCITT Specification and Description Language SDL. Computer Networks and ISDN Systems, 16(4):311–341, March 1989. 336
[3] B. Blanchard and W. Fabrycky. Systems Engineering and Analysis. Prentice Hall, 1980. 335
[4] M. Heisel. Agendas – a concept to guide software development activities. In R. N. Horspool, editor, Proc. Systems Implementation 2000, pages 19–32. Chapman & Hall, London, 1998. 336
[5] ITU-TS. ITU-TS Recommendation Z.120 Annex B: Formal Semantics of Message Sequence Charts. Technical report, ITU-TS, Geneva, 1998. 336
[6] M. Jackson. Problem Frames. Analyzing and Structuring Software Development Problems. Addison-Wesley, 2001. 335
[7] K. Jensen. Coloured Petri nets. Lecture Notes in Computer Science: Advances in Petri Nets, 254:248–299, 1986. 336
[8] K. Jensen. Coloured Petri Nets, Vol. II. Springer, 1995. 345
[9] G. Kotonya and I. Sommerville. Requirements Engineering. Wiley, 1997. 335
Author Index

Androutsopoulos, Kelly 82
Bate, Iain 321
Benerecetti, Massimo 126
Bishop, Peter G. 163, 198
Bloomfield, Robin 198
Bobbio, Andrea 212, 273
Bologna, Sandro 1
Born, Bob W. 186
Bredereke, Jan 19
Campelo, José Carlos 261
Chen, Luping 151
Ciancamerla, Ester 212, 273
Clark, David 82
Clement, Tim 198
Cugnasca, Paulo Sérgio 224
Dafelmair, Ferdinand J. 61
Dhodapkar, S. D. 284
Dimitrakos, Theo 94
Droste, Thomas 53
Franceschinis, Giuliana 212
Fredriksen, Rune 94
Gaeta, Rossano 212
Gran, Bjørn Axel 94
Gribaudo, Marco 273
Guerra, Sofia 198
Hartswood, Mark 32
Hatebur, Denis 334
Heidtmann, Klaus 70
Heiner, Monika 334
Heisel, Maritta 334
Hering, Bernhard 296
Hollnagel, Erik 1, 4
Horváth, A. 273
Hughes, Gordon 151
Jacobs, Jef 175
Kelly, Tim 321
Kim, Tai-Yun 44
Knight, John C. 106
Kristiansen, Monica 94
Lankenau, Axel 19
Lano, Kevin 82
Littlewood, Bev 249
May, John 151
Minichino, Michele 212, 273
Moeini, Ali 252
Mohajerani, MahdiReza 252
Oliveira, Ítalo Romani de 224
Opperud, Tom Arthur 94
Ortmeier, Frank 296
Panti, Maurizio 126
Papadopoulos, Yiannis 236
Paynter, Stephen E. 186
Popov, Peter 139
Portinale, Luigi 212
Procter, Rob 32
Ramesh, S. 284
Reif, Wolfgang 296
Rhee, Yoon-Jung 44
Rodríguez, Francisco 261
Rottke, Thomas 334
Rouncefield, Mark 32
Saridakis, Titos 309
Schellhorn, Gerhard 296
Serrano, Juan José 261
Servida, Andrea 10
Sharma, Babita 284
Slack, Roger 32
Spalazzi, Luca 126
Stølen, Ketil 94
Tacconi, Simone 126
Thums, Andreas 296
Trappschuh, Helmut 296
Trienekens, Jos 175
Tronci, Enrico 273
Voß, Alexander 32
Williams, Robin 32
Zhang, Wenhui 113