FEATURE INTERACTIONS IN SOFTWARE AND COMMUNICATION SYSTEMS IX
Proceedings of the International Workshop on Feature Interactions previously published by IOS Press:
Feature Interactions in Telecommunications and Software Systems VIII, Edited by S. Reiff-Marganiec and M.D. Ryan
Feature Interactions in Telecommunications and Software Systems VII, Edited by D. Amyot and L. Logrippo
Feature Interactions in Telecommunications and Software Systems VI, Edited by M. Calder and E. Magill
Feature Interactions in Telecommunications and Software Systems V, Edited by K. Kimbler and L.G. Bouma
Feature Interactions in Telecommunication Networks IV, Edited by P. Dini, R. Boutaba and L. Logrippo
Feature Interactions in Telecommunications III, Edited by K.E. Cheng and T. Ohta
Feature Interactions in Telecommunications Systems, Edited by L.G. Bouma and H. Velthuijsen
Feature Interactions in Software and Communication Systems IX
Edited by
Lydie du Bousquet Laboratoire d’Informatique de Grenoble (LIG), Université Joseph Fourier, France
and
Jean-Luc Richier Laboratoire d’Informatique de Grenoble (LIG), CNRS, France
Amsterdam • Berlin • Oxford • Tokyo • Washington, DC
© 2008 The authors and IOS Press. All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without prior written permission from the publisher.

ISBN 978-1-58603-845-8
Library of Congress Control Number: 2008922182

Publisher
IOS Press, Nieuwe Hemweg 6B, 1013 BG Amsterdam, Netherlands
fax: +31 20 687 0019
e-mail: [email protected]

Distributor in the UK and Ireland
Gazelle Books Services Ltd., White Cross Mills, Hightown, Lancaster LA1 4XS, United Kingdom
fax: +44 1524 63232
e-mail: [email protected]

Distributor in the USA and Canada
IOS Press, Inc., 4502 Rachael Manor Drive, Fairfax, VA 22032, USA
fax: +1 703 323 3668
e-mail: [email protected]

LEGAL NOTICE
The publisher is not responsible for the use which might be made of the following information.

PRINTED IN THE NETHERLANDS
Organised and hosted by Laboratoire d’Informatique de Grenoble, France
Sponsored by Institut IMAG
Université Joseph Fourier – Grenoble I
Institut National de Recherche en Informatique
Institut Polytechnique de Grenoble
Centre National de Recherche en Informatique
Grenoble Alpes Métropole
Ville de Grenoble
Conseil Général de l’Isère
Preface

These proceedings record the papers presented at the ninth International Conference on Feature Interactions in Software and Communication Systems (ICFI 2007), held in Grenoble, France. This conference builds on the success of the previous conferences in the series. The first edition, known then as the Feature Interaction Workshop (FIW), was held in St. Petersburg, Florida, USA, in 1992. It was then held in Amsterdam, The Netherlands (1994), Kyoto, Japan (1995), Montreal, Canada (1997), Lund, Sweden (1998), Glasgow, UK (2000), Ottawa, Canada (2003), and Leicester, UK (2005). FIW became ICFI in 2005.

The Feature Interaction Workshop was originally created for discussion and reporting on the feature interaction problem in telecommunication systems. In this domain, an interaction occurs when one telecommunications feature or service modifies or subverts the operation of another. Undesired interactions can both lower service quality and delay service provisioning. The problem of feature interactions in telecommunications is therefore of great importance. In the past decade, a lot of attention has been devoted to the development of methods for the detection and resolution of feature interactions. However, the feature interaction phenomenon is not unique to the domain of telecommunications systems; it can also occur in any large software system that is subject to continuous change.

For this edition, the conference has a range of contributions by distinguished speakers drawn from both telecommunications and other software systems domains. Besides its formal sessions, the conference included a doctoral symposium, a panel and two invited talks. All the submitted papers in these proceedings have been peer reviewed by at least two reviewers drawn from industry or academia. Reviewing and selection were undertaken electronically.

ICFI 2007 was sponsored by IMAG, Université Joseph Fourier, INPG, INRIA, the City of Grenoble, Grenoble Alpes Métropole (Communauté d'agglomération de Grenoble), le Conseil Général de l'Isère and la Région Rhône-Alpes. Université Joseph Fourier (Grenoble I) and the Laboratoire d'Informatique de Grenoble (LIG) provided all local organization and financial backing for the conference. We would like to thank Jean-Luc Richier, Didier Bert, Pascale Poulet and Frédérique Chrétiennot for their help in organizing this event.

Online information concerning the conference is available at the following URL: http://www-lsr.imag.fr/ICFI2007/

Lydie du Bousquet, Farid Ouabdesselam
Message from the Doctoral Symposium Co-chairs

The following pages contain the proceedings of the Doctoral Symposium that was held in conjunction with the International Conference on Feature Interactions in Software and Communication Systems in Grenoble in September 2007. Five papers were presented at the symposium, discussing emerging research on different aspects of feature interaction.

Gavin Campbell of the University of Stirling shows how feature interactions can occur in sensor networks, in the form of conflicts between policies. Resolution possibilities are discussed with respect to several examples.

Ben Yan of the Nara Institute of Science and Technology studies feature interactions in home networks from the point of view of different types of system safety. His work demonstrates how these concepts can be formalized and validated, leading to safety assurance.

Andreas Classen of the University of Namur studies feature interactions for systems that are closely integrated in their environment. For such systems, the environment can be the source of interactions. A formalization of the concepts in the event calculus leads to the possibility of automated feature interaction detection.

Lionel Touseau of the University of Grenoble addresses the problem of service cooperation in inter-organizational service-oriented computing. In the resulting dynamically changing environments, service availability must be guaranteed. This can be achieved by the use of appropriate service-level agreements and related arrangements.

Romain Delamare of IRISA/INRIA Rennes considers Aspect-Oriented Programming, where new features or aspects can be added to programs, thus requiring changes in existing test cases. He outlines a method for determining which test cases are impacted, and thus need to be rewritten, because of new aspects.

These interesting presentations on current and future advances on the feature interaction problem promise significant developments in our research area. We look forward to seeing full papers from these authors at the next Feature Interaction Conference.

Luigi Logrippo, Université du Québec en Outaouais
David Marples Technolution BV
Lydie du Bousquet Laboratoire d'Informatique de Grenoble (LIG)
Programme Committee
The following people were members of the ICFI 2007 programme committee and reviewed papers for the conference:
Conference Co-chairs:
Lydie du Bousquet, Université Joseph Fourier, Grenoble, France
Farid Ouabdesselam, Université Joseph Fourier, Grenoble, France
Daniel Amyot, University of Ottawa, Canada
Lynne Blair, University of Lancaster, UK
Muffy Calder, University of Glasgow, UK
Krzysztof Czarnecki, University of Waterloo, Canada
Michael Fisher, University of Liverpool, UK
Tom Gray, PineTel, Canada
Jean-Charles Grégoire, INRS-Telecommunications, Canada
Dimitar Guelev, Bulgarian Academy of Sciences, Bulgaria
Robert J. Hall, AT&T Labs Research, USA
Mario Kolberg, University of Stirling, UK
Pascale Le Gall, LaMI, Université d'Evry Val d'Essonne, France
Yves Le Traon, IRISA, France
Fuchun Joseph Lin, Telcordia Technologies, USA
Luigi Logrippo, Université du Québec en Outaouais, Canada
Evan Magill, University of Stirling, UK
Dave Marples, Global Inventures, USA
Alice Miller, University of Glasgow, UK
Masahide Nakamura, Nara Institute of Science and Technology, Japan
Tadashi Ohta, Soka University, Tokyo, Japan
Klaus Pohl, LERO, University of Limerick, Ireland and Software Systems Engineering, Univ. Duisburg-Essen, Germany
Stephan Reiff-Marganiec, University of Leicester, UK
Jean-Luc Richier, CNRS, LIG, France
Mark Ryan, School of Computer Science, University of Birmingham, UK
Pierre-Yves Schobbens, University of Namur, Belgium
Henning Schulzrinne, Columbia University, USA
Ken Turner, University of Stirling, UK
Pamela Zave, AT&T, USA
External Referees
We are grateful to the following people who aided the programme committee in the reviewing of papers, providing additional specialist expertise:
Erwan Brottier, IRISA, France
Kim Lauenroth, Software Systems Engineering, Univ. of Duisburg-Essen, Germany
Clementine Nebut, IRISA, France
Thorsten Weyer, Software Systems Engineering, Univ. of Duisburg-Essen, Germany
Contents
Preface Lydie du Bousquet and Farid Ouabdesselam
vii
Message from the Doctoral Symposium Co-Chairs Luigi Logrippo, David Marples and Lydie du Bousquet
viii
Programme Committee
ix
Quality Issues in Software Product Lines: Feature Interactions and Beyond (Invited Talk) Andreas Metzger
1
Service Broker for Next Generation Networks Fuchun Joseph Lin and Kong Eng Cheng
13
A Feature Interaction View of License Conflicts G.R. Gangadharan, Michael Weiss, Babak Esfandiari and Vincenzo D’Andrea
21
Managing Feature Interaction by Documenting and Enforcing Dependencies in Software Product Lines Roberto Silveira Silva Filho and David F. Redmiles
33
Towards Automated Resolution of Undesired Interactions Induced by Data Dependency Teng Teng, Gang Huang, Xingrun Chen and Hong Mei
49
Policy Conflicts in Home Care Systems Feng Wang and Kenneth J. Turner
54
Conflict Detection in Call Control Using First-Order Logic Model Checking Ahmed F. Layouni, Luigi Logrippo and Kenneth J. Turner
66
Policy Conflict Filtering for Call Control Gavin A. Campbell and Kenneth J. Turner
83
Towards Feature Interactions in Business Processes Stephen Gorton and Stephan Reiff-Marganiec
99
Resolving Feature Interaction with Precedence Lists in the Feature Language Extensions L. Yang, A. Chavan, K. Ramachandran and W.H. Leung
114
Composing Features by Managing Inconsistent Requirements Robin Laney, Thein Than Tun, Michael Jackson and Bashar Nuseibeh
129
Artificial Immune-Based Feature Interaction Detection and Resolution for Next Generation Networks Hua Liu, Zhihan Liu, Fangchun Yang and Jianyin Zhang
145
Model Inference Approach for Detecting Feature Interactions in Integrated Systems Muzammil Shahbaz, Benoît Parreaux and Francis Klay
161
Considering Side Effects in Service Interactions in Home Automation – An Online Approach Michael Wilson, Mario Kolberg and Evan H. Magill
172
Detecting and Resolving Undesired Component Interactions by Runtime Software Architecture Gang Huang
188
Doctoral Symposium
Sensor Network Policy Conflicts Gavin A. Campbell
195
Considering Safety and Feature Interactions for Integrated Services of Home Network System Ben Yan
199
Problem-Oriented Feature Interaction Detection in Software Product Lines Andreas Classen
203
How to Guarantee Service Cooperation in Dynamic Environments? Lionel Touseau
207
Impact of Aspect-Oriented Software Development on Test Cases Romain Delamare
211
Subject Index
215
Author Index
217
Quality Issues in Software Product Lines: Feature Interactions and Beyond (Invited Talk) Andreas METZGER 1 Software Systems Engineering, University of Duisburg-Essen Schützenbahn 70, 45117 Essen, Germany Abstract In software product line engineering, reusable artifacts are pro-actively created such that they can efficiently be reused in order to build customer-specific software products. To support this efficient reuse, variability is explicitly defined and introduced into the reusable artifacts. This variability implies that the reusable artifacts do not define a single software product but a set of such products. Specifically, the reusable artifacts do not constitute an executable system which could be tested. Thus, in order to check the reusable artifacts of a software product line for defects, the variability in those artifacts has to be handled. This invited talk will elaborate on different strategies for how to handle the variability in the reusable artifacts, and how existing quality assurance techniques for software product lines, including feature interaction analysis, address the specific challenges that are posed by those strategies. Keywords. Software product line engineering, Testing, Feature interactions, Formal reasoning
1 Corresponding Author: Andreas Metzger, Software Systems Engineering, University of Duisburg-Essen, Schützenbahn 70, 45117 Essen, Germany, E-mail: [email protected].
1. Motivation
Software product line engineering (SPLE [1][2][3]) has proven to be a very successful paradigm for developing a diversity of similar software products at low cost, in short time, and with high quality. Numerous success stories report on the significant achievements of introducing software product lines in industry (see [3]). There are two essential differences between SPLE and the development of single software products:
• Variability is explicitly defined and managed: Product line variability describes the variation between the products that belong to a software product line in terms of properties and qualities, like features that are provided or requirements that are fulfilled [4]. The central concepts for defining and documenting the variability of a software product line are variation point and variant. A variation point describes what varies between the products of a software product line; e.g., the products of
an on-line store product line can vary in terms of the payment options that are offered. A variant describes a concrete instance of a variation point; e.g., an on-line store can offer payment by credit card or by debit card. • The development process of a software product line is divided into two interrelated sub-processes: ∗ In domain engineering, the commonalities and the variability of the software product line are defined and reusable artifacts are created. Commonalities are properties and qualities that are shared by all products of the software product line [5]. Reusable artifacts include requirements, design models, components, code, test cases, and documentation. ∗ In application engineering, customer-specific software products are derived from the reusable artifacts by binding the variability, i.e., by selecting the desired variants for the variation points. Like in the development of single software products, quality assurance activities are essential in SPLE to guarantee the desired quality of the derived software products. These quality assurance activities can include – besides many others – inspection, formal verification, static analysis, as well as code- and model-based testing. One key aim of those quality assurance activities is to uncover the evidence of defects in the development artifacts. In SPLE, a defect in a reusable artifact can affect all software products that are derived from this artifact. As an example, the ‘place a call’ feature is a commonality of a mobile phone product line. Thus, a defect in the components which realize this feature can lead to failures in all mobile phones of the product line. As a further example, let us assume an undesired feature interaction between the features ‘silent mode’ and ‘sound alarm when battery low’, i.e., let us assume that those features interact in such a way that the mobile phone will make a sound even if in silent mode. Such a feature interaction will occur in all mobile phones that provide both features. Similar to the development of single software products, defects should be uncovered as early as possible in the SPLE process, as uncovering a fault late in the development process can lead to very high correction costs. Uncovering a defect late in the SPLE process can be very costly especially when several products of the software product line have already been developed and deployed, because all those products might have to be corrected. The earliest phase in SPLE is domain engineering, during which the reusable artifacts are constructed. However, existing quality assurance techniques from the development of single software products cannot be applied directly to the reusable artifacts, because those artifacts contain variability. This means that those artifacts do not define a single software product but a set of such products. Specifically, no executable system exists in domain engineering that could be tested. Quality assurance techniques which consider the specifics of SPLE are thus needed. This talk will elaborate on different strategies and techniques for checking the reusable artifacts in domain engineering while handling the variability in the reusable artifacts.
2. Strategies and Techniques for Quality Assurance in Domain Engineering Existing techniques for quality assurance in domain engineering generally follow three different strategies for handling the variability in the reusable artifacts: • Commonality Strategy: When following the commonality strategy, only the common parts, which are shared by all products of the software product line, will be covered by the quality assurance technique (cf. [6][3]). • Sample Strategy: When following the sample strategy, sample products (a subset of all products of the software product line) are checked (cf. [6][3]). This implies that the common parts are checked as well as the variants which have been bound in the sample products. • Comprehensive Strategy: When following the comprehensive strategy, all products of the software product line are checked for defects. This implies that the common parts as well as all the variants of the software product line are covered by the quality assurance technique. In the following sub-sections, those strategies and the challenges for realizing them in concrete techniques will be elaborated. Examples of concrete techniques that have been developed within our research group and in collaboration with other researchers will be given to illustrate how those challenges can be addressed. 2.1. Commonality Strategy Quality assurance techniques that follow the commonality strategy aim at checking only the common parts of a software product line. Typically, the variants are either ignored during the checking of the reusable artifacts or they are replaced by placeholders that abstract from the variants or that simulate them. As an example for the first case, an inspection of a reusable requirements specification for a software product line could focus on common requirements only. As an example for the second case, variable code fragments could be replaced by a single code fragment that implements some basic behavior or at least guarantees that the code will compile. 2.1.1. Benefits The benefits of the commonality strategy are that early testing in domain engineering is enabled and that quality assurance activities can be performed even if no or only a few variants have been realized. 2.1.2. Challenges Techniques that follow the commonality strategy must at least address the following challenges: 1. How to keep the effort for creating the placeholders to a minimum? Creating placeholders usually requires development effort. Thus, the number of placeholders should be kept as small as possible. 2. How to guarantee an adequate coverage of the domain artifacts? Variants are not checked when following the commonality strategy. Thus, quality assurance activities should be planned that complement the commonality strategy.
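Before turning to ScenTED, the placeholder idea above can be made concrete with a small sketch. The Python listing below is purely illustrative: the names PaymentOption, PaymentPlaceholder and process_order are invented for this example and are not part of any technique discussed in this talk; the point is only that a minimal stand-in lets the common parts be exercised in domain engineering even though no payment variant has been bound yet.

    # Illustrative sketch only: the classes and functions below are hypothetical.
    class PaymentOption:
        def pay(self, amount):
            raise NotImplementedError  # real variants (credit card, debit card) are bound later

    class PaymentPlaceholder(PaymentOption):
        # Minimal stand-in that merely satisfies the interface so the common code runs.
        def pay(self, amount):
            return True

    def process_order(total, payment):
        # Common functionality shared by all products of the on-line store product line.
        if total <= 0:
            raise ValueError("empty order")
        return payment.pay(total)

    # Domain-engineering check of the common part, using the placeholder for the variants:
    assert process_order(10.0, PaymentPlaceholder())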
2.1.3. Example Technique: “Testing”
An example of a technique that implements the commonality strategy is the model-based testing technique ScenTED (see [6][7][8]).
Figure 1. Model-based Testing with ScenTED
In ScenTED, placeholders for the variants are developed (see (1) in Fig. 1) and test cases are generated while considering these placeholders (see (2) in Fig. 1). The result of the test cases generation process is a set of test cases, which guarantees that the common functionality of the software product line is covered and that the number of placeholders, which are needed to execute the set of test cases, is minimized (cf. Challenge 1). Besides testing in domain engineering, ScenTED supports the reuse of test cases for testing in application engineering. Thereby, ScenTED can be used to complement the test of the commonalities in domain engineering by performing product-specific tests in application engineering (cf. Challenge 2). 2.2. Sample Strategy Quality assurance techniques that follow the sample strategy aim at checking the common parts as well as selected variants. The basic steps of this strategy are typically as follows: 1. Determine the sample products (defined in terms of variants that are bound). 2. For each of the sample products: (a) Derive product-specific artifacts by binding the variability in the domain artifacts. (b) Apply quality assurance techniques from the development of single software products to the derived artifacts.
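Read as pseudo-code, the basic steps of the sample strategy above amount to a small driver loop. The sketch below (Python, purely illustrative; the product representation and the check_single_product stand-in are assumptions of this sketch, not part of ScenTED or any standard) binds the variability for each selected sample product and then applies an ordinary single-product check to the derived artifacts.

    # Hypothetical sketch of the sample strategy; names are invented for illustration.
    def derive_artifacts(domain_artifacts, bound_variants):
        # Step 2(a): bind the variability in the domain artifacts.
        return {vp: bound_variants[vp] for vp in domain_artifacts}

    def check_single_product(product_artifacts):
        # Step 2(b): stand-in for any single-product technique (testing, inspection, ...).
        return []  # list of uncovered defects

    domain_artifacts = {"payment": ["credit card", "debit card"], "media": ["audio", "text"]}
    sample_products = [                      # Step 1: determine the sample products
        {"payment": "credit card", "media": "audio"},
        {"payment": "debit card", "media": "text"},
    ]
    for product in sample_products:
        defects = check_single_product(derive_artifacts(domain_artifacts, product))
        print(product, "->", defects)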
2.2.1. Benefit
The benefit of this approach is that existing techniques from the development of single software products can be used as they are.

2.2.2. Challenges
In order to implement the sample strategy, the following challenges have to be faced:
1. How to determine representative sample products? The sample products should be chosen in such a way that the results of checking those sample products allow drawing conclusions about the overall quality of the software product line.
2. How to keep the number of selected sample products manageable? The number of sample products should be kept as small as possible while guaranteeing a representative coverage of the software product line. Otherwise, the effort for checking the sample products will become infeasible.

2.2.3. Example Technique: “Testing”
As mentioned in Section 2.1, the ScenTED technique supports testing individual software products in application engineering. Thus, ScenTED can also be used to test sample software products in domain engineering. To select representative sample products for testing, products should be chosen which include variants that are likely to be used in many software products (cf. Challenges 1 and 2). The rationale behind this selection is that if a variant is used in most of the products of the software product line, an undiscovered defect in this variant can have an almost as severe effect on the quality of the software product line as a defect in a commonality (also see [6]).

2.2.4. Example Technique: “Feature Interaction Analysis”
A more refined approach for selecting sample products has been implemented in the RAFINA technique (see [9][10]). RAFINA has been developed to analyze a software product line with respect to (undesired) feature interactions. It builds on a previous technique for detecting feature interactions in single software products (cf. [11]). To determine the sample products in RAFINA, we assume that if an interaction between the features F = {f1, ..., fr} is observed, there will also be interactions between all features f′ ∈ F′, where F′ ⊂ F with 1 < |F′| < r. Stated differently, this assumption means that there will be no m-way feature interactions (with m > 2) in the products. In general, an m-way feature interaction is a feature interaction that does not occur between 1 < i < m features but occurs among m features [12]. We presuppose that each feature relates to a variant in the software product line. Thereupon, in order to keep the number of sample products to a minimum, we select the products which provide the maximum number of variants and thus features. If a feature interaction is uncovered in such a maximal sample product, this implies that feature interactions will be present in all smaller products that provide a subset of the interacting features of the sample product (cf. Challenge 1). It should be noted that even if an m-way interaction (with m > 2) exists in one of the sample products, RAFINA will detect that m-way interaction. However, in addition to the true m-way feature interaction, RAFINA will falsely uncover interactions between
1 < i < m features. Yet, as m-way interactions have been shown to be very rare, the number of those false positives is negligible.
To illustrate how the number of sample products relates to the number of the potential products of the software product line (cf. Challenge 2), let us examine a single variation point. Let n be the number of variants of that variation point, of which at most k ≤ n and at least j ≤ k variants may be bound (for a further discussion on the potential constraints on variability see [3] or [9]). This variation point allows the derivation of C(n,j) + C(n,j+1) + ... + C(n,k) products, where C(n,i) denotes the binomial coefficient "n choose i". Following the RAFINA approach for selecting the sample products, it would suffice to only consider the products with the maximum number of variants bound, which leads to C(n,k) sample products in total. The extent to which the number of sample products can be reduced depends on k, i.e., the maximum number of variants that can be chosen for a variation point. The closer k is to the number of the variants per variation point (n), the smaller the number of sample products that have to be considered will be. The same holds when k comes closer to 1. However, in the latter case even a brute-force approach, which checks all the possible products (cf. Section 2.3), would be feasible. Figure 2 shows the numbers of sample products for varying values of k that need to be checked for a variation point with 13 variants (n = 13).
Figure 2. Comparison of the Number of Sample Products With the Number of All Products
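The comparison plotted in Figure 2 can be reproduced with a few lines of code. The sketch below (Python, illustration only; it assumes j = 1, i.e., at least one variant must be bound) computes, for n = 13 and each k, the total number of derivable products C(n,1) + ... + C(n,k) and the number of maximal sample products C(n,k) that RAFINA would consider.

    from math import comb

    n = 13                                   # variants of the variation point, as in Figure 2
    j = 1                                    # assumed minimum number of bound variants
    for k in range(1, n + 1):
        all_products = sum(comb(n, i) for i in range(j, k + 1))
        rafina_samples = comb(n, k)          # only products with the maximum number of variants
        print(k, all_products, rafina_samples)
    # rafina_samples peaks for k around n/2 (the gray area discussed below),
    # while it collapses to 1 for k = n and to n for k = 1.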
The number of sample products can become very high if k is around n/2 (gray area in the figure). Thus, this can pose a scalability problem especially if a software product line has many variation points with k ≈ n/2. As a solution, the value of k could be modified for the purpose of feature interaction detection as follows:
• k → 1: The value of k should only be reduced to k = 2, as otherwise feature interactions between two variants of the same variation point would go unnoticed. This results in C(n,2) = n·(n−1)/2 variant combinations per variation point.
• k → n: Increasing k to n promises the largest reduction of the number of sample products, because all variants per variation point could be selected. This leads to C(n,n) = 1 variant combination per variation point. However, the violation of the constraint on the maximum number of variants per variation point can lead
to the identification of feature interactions that would never exist in any of the software product line's products. In order to eliminate those feature interactions, a subsequent step is performed: RAFINA checks whether an actual product of the software product line can offer the features that are involved in the interaction.
Modifying k requires that the reusable artifacts allow for binding an unplanned number of variant combinations. Thus, if the modification of k is not possible, an alternative approach for selecting the sample products can be followed: If one relaxes the requirement that all kinds of feature interactions (including m-way interactions with m > 2) have to be detected, it suffices to check all pair-wise feature combinations. If a software product line has l variants (resp. features) in total, this results in l·(l−1) sample products. The product l·(l−1) can be considerably smaller than the number of sample products that need to be considered with the initial RAFINA approach for k ≈ n/2.

2.3. Comprehensive Strategy
The comprehensive strategy aims at checking all potential products of the software product line for defects. A ‘brute-force’ realization (cf. [3]) of the comprehensive strategy could be as follows:
1. Bind the variability in the reusable artifacts for each of the potential products of the software product line.
2. Apply techniques from the development of single systems to the derived artifacts of each of those products.

2.3.1. Benefit
The comprehensive strategy is the strategy that leads to the best coverage of the domain artifacts. Although the sample strategy (see the previous section) allows checking all variants of the software product line by determining representative sample products, those variants are not checked in all potential reuse contexts, i.e., they are not checked for all products of the software product line.

2.3.2. Challenge
The number of potential products in a software product line of industry-relevant size prevents any ‘brute-force’ approach from being used for realizing the comprehensive strategy in practice. To illustrate, if the reusable artifacts contain 15 variation points with 2 variants each, approximately 1 billion possible software products can be derived from those artifacts if there are no further constraints for combining the variants. For each variation point, C(2,0) + C(2,1) + C(2,2) = 1 + 2 + 1 = 4 variant combinations are possible, leading to 4^15 ≈ 10^9 potential products of the software product line. Industry reports describe software product lines with up to tens of thousands of variation points and variants (see [13][14]). A significant challenge for realizing the comprehensive strategy thus is how to deal with the complexity that is involved in checking all potential products.
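The figure of roughly one billion products can be checked directly; the following lines (Python, illustration only) reproduce the arithmetic.

    from math import comb

    per_variation_point = sum(comb(2, i) for i in range(3))   # C(2,0) + C(2,1) + C(2,2) = 4
    print(per_variation_point ** 15)                          # 1073741824, i.e. about 10**9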
2.3.3. Example Technique: “Formal Reasoning”
The AVIP technique (see [4,15]) allows formal reasoning about the reusable artifacts in domain engineering in order to identify inconsistencies (a specific kind of defect) in those artifacts. Inspired by the work of Czarnecki and Pietroszek [16] and Thaker et al. [17], the AVIP technique follows the comprehensive strategy while addressing the complexity challenge involved with this strategy. The key to handling the complexity of checking each potential product of the software product line is to exploit the power of state-of-the-art verification tools, like SAT Solvers [18], Constraint Programming Systems [19], or Model Checkers [20]. Those tools have reached a level of efficiency that allows them to be applied to problems of industry-relevant size.
In AVIP, the domain artifacts to be checked as well as the consistency constraints that the artifacts must satisfy are expressed as inputs to a SAT Solver. The SAT Solver then efficiently computes whether the artifacts violate the consistency constraints for any valid product of the software product line. The valid products of the software product line are defined by an Orthogonal Variability Model (OVM [3]). An OVM is a dedicated model that documents the variation points and the variants of a software product line together with potential constraints on selecting the variants. The variants in the OVM are related to variable elements in the reusable artifacts via cross-links (x-links [4]). Whenever a variant is selected for a concrete product, the x-linked elements in the reusable artifacts will be included in the derived artifacts. Figure 3 shows an example of a simple OVM which is x-linked to a component diagram as a reusable artifact.
Figure 3. Example of OVM, Reusable Artifact, X-Links and Propositional Formulae
In AVIP the semantics of the OVM, the reusable artifacts as well as the x-links are formalized (for further details on the formal semantics see [4]). This formalization
is used to map the artifacts and the consistency constraints to inputs for a SAT Solver, whereby the consistency checks are automated. A SAT Solver requires a propositional formula as input and, if the formula is satisfiable, delivers an assignment for the Boolean variables such that the input formula evaluates to true. Many of the off-the-shelf SAT Solvers require the input formula to be in Conjunctive Normal Form (CNF). However, CNF is not a very natural representation for our problem. Therefore, we have started to use the non-clausal solver NoClause [21], whereby we can avoid the complex translation of a propositional formula into CNF.
In AVIP, an OVM O is mapped to a propositional formula O such that O evaluates to true for each valid product of the software product line. A Boolean variable v ∈ Var(O) corresponds to a variant of the software product line. Figure 3 shows the result of such a mapping for an exemplary OVM. O = V1 ∨ V2 ∨ V3 only evaluates to true when at least one variant has been chosen for the variation point.
A reusable artifact A together with its consistency constraints is mapped to the propositional formula A. Each Boolean variable w ∈ Var(A) represents a variable element in the reusable artifact. If the Boolean variable w is set to true, this means that the variable element will be contained in the artifact that is derived from the reusable artifact. A is defined in such a way that it will only evaluate to true if the combination of variable elements creates an artifact which satisfies the consistency constraint, i.e., if the artifact that is derived from A is free from inconsistencies. The result of such a mapping is shown for the component diagram in Figure 3. The multiplicity of 1..1 at the PlayMedia port of the UserInterface component requires that exactly one component instance is plugged in at this port. This leads to the propositional formula A = (W1 ∧ ¬W2) ∨ (¬W1 ∧ W2).
To determine any consistency violations, the satisfiability of G = ¬(O ⇒ A′) is checked. A′ is the propositional formula A in which the Boolean variables in Var(A) have been replaced by propositional formulae over Boolean variables in Var(O). More specifically, if the variants represented by the Boolean variables v1, ..., vn are x-linked to the variable elements represented by w1, ..., wm, each of those Boolean variables wi is replaced by (v1 ∨ ... ∨ vn). In the example of Figure 3 this results in the propositional formula A′ = (V1 ∧ ¬(V2 ∨ V3)) ∨ (¬V1 ∧ (V2 ∨ V3)).
Whenever the SAT Solver finds a solution for the formula G, this points to a consistency violation in the reusable artifacts: when G evaluates to true, O ⇒ A′ must have evaluated to false. Due to the implication (⇒), this requires that O has evaluated to true while A′ has evaluated to false. This means that for a valid product of the software product line (defined by the assignment of Boolean variables which made O evaluate to true) an inconsistent artifact can be derived.
In the example shown in Figure 3, the overall formula to be checked by the SAT Solver is: ¬((V1 ∨ V2 ∨ V3) ⇒ ((V1 ∧ ¬(V2 ∨ V3)) ∨ (¬V1 ∧ (V2 ∨ V3)))). This formula has at least one solution: it will evaluate to true for V1 = true, V2 = true and V3 = true, as an example. This points to an inconsistency in the reusable artifact: when those variants are bound, two component instances are bound in the component diagram, where only one instance is allowed at a time.
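For a formula of this size the satisfiability check can even be done by brute force; the sketch below (Python, purely illustrative, enumerating all truth assignments instead of invoking an actual SAT solver such as NoClause) evaluates G for every assignment of V1, V2 and V3 and reports the assignments that expose a consistency violation.

    from itertools import product

    def O(v1, v2, v3):
        return v1 or v2 or v3                     # O = V1 ∨ V2 ∨ V3

    def A_prime(v1, v2, v3):
        # A′ = (V1 ∧ ¬(V2 ∨ V3)) ∨ (¬V1 ∧ (V2 ∨ V3))
        return (v1 and not (v2 or v3)) or (not v1 and (v2 or v3))

    def G(v1, v2, v3):
        # G = ¬(O ⇒ A′); an implication p ⇒ q is equivalent to (not p) or q
        return not ((not O(v1, v2, v3)) or A_prime(v1, v2, v3))

    violations = [vs for vs in product([False, True], repeat=3) if G(*vs)]
    print(violations)   # contains (True, True, True), the assignment mentioned in the text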
Our first experiments have shown the efficiency of the AVIP approach (cf. [4]) and we are confident that it will scale to very complex models as well.

3. Conclusion and Perspectives
Quality assurance for software product lines is an important field of research. This talk has reviewed some of the challenges that need to be addressed by quality assurance techniques for software product lines and it has presented how existing techniques address those challenges. This talk has focused on analytical quality assurance techniques, i.e., on techniques which check the artifacts after they have been built. However, there also exists a wide range of principles and techniques that can be applied during the construction of the artifacts such that they will be built without certain kinds of defects. Specifically, code generators or configurators (e.g., see [22] [23]) can be used for model-driven product line development.
The software product line community has achieved impressive results for quality assurance in software product lines. Still, quite a few research issues remain open. Some of those issues, which can be potential topics for future research, are presented below:
‘Debugging’: The presented quality assurance techniques for software product lines that comprehensively check the reusable artifacts (comprehensive strategy) determine whether the reusable artifacts comply with some pre-defined quality constraint. If this constraint is violated, they list the variants for which this violation will occur in an actual software product. However, in order to correct the reusable artifacts, the reasons for the violation of the quality constraint, i.e., the actual defects, have to be located. As an example, a quality constraint might be violated because the constraints on variability have been defined too loosely, thus allowing the derivation of unwanted products. Currently, the product line engineers have to find such defects manually. This can be a very challenging task when the models become large and complex. Thus, automated techniques that support the product line engineers in ‘debugging’ the reusable artifacts need to be developed (cf. [24]).
Empirical Evaluation: The presented quality assurance techniques are promising. They efficiently address the problem of complexity when checking the reusable artifacts in domain engineering. Yet, the effectiveness of those techniques, i.e., their ability to uncover defects, needs further investigation. We expect that we can uncover a significant number of defects in domain engineering, which – if they went uncovered – would imply huge correction costs in application engineering.
Applying product line techniques to other paradigms: First publications report on similarities between software product line engineering and service-based systems engineering (e.g., [25]). Quality assurance of a service-based system, for instance, faces a similar – if not worse – complexity problem. Due to the loose coupling of services, they can be composed into a potentially unbounded number of different service-based systems. It should be interesting to see how far the solutions for handling the complexity in checking the reusable artifacts of a software product line can be applied to the
complexity problem of checking the potential service compositions in service-based systems engineering.

Acknowledgments
Parts of this work have been sponsored by the German Research Foundation (DFG) under grant Po 607/1-1 PRIME and Po 607/2-1 IST-SPL. I cordially thank Kim Lauenroth and Ernst Sikora for fruitful discussions on formal reasoning in software product line engineering, Klaus Pohl for the joint research on variability management in software product line engineering, and Heiko Stallbaum for helpful comments on earlier drafts of this contribution.
References
[1] Weiss, D.M., Lai, C.T.: Software Product Line Engineering - A Family-Based Software Development Process. Addison-Wesley, Reading, Mass. (1999)
[2] Clements, P., Northrop, L.: Software Product Lines: Practices and Patterns. Addison-Wesley Professional, Reading, Mass. (2001)
[3] Pohl, K., Böckle, G., van der Linden, F.: Software Product Line Engineering: Foundations, Principles and Techniques. Springer, Heidelberg (2005)
[4] Metzger, A., Heymans, P., Pohl, K., Schobbens, P.Y., Saval, G.: Disambiguating the documentation of variability in software product lines: A separation of concerns, formalization and automated analysis. In Sutcliffe, A., ed.: 15th IEEE International Conference on Requirements Engineering (RE 2007), 15-19 October 2007, New Delhi, India, Proceedings, IEEE Computer Society (2007)
[5] Coplien, J., Hoffman, D., Weiss, D.: Commonality and variability in software engineering. IEEE Softw. 15(6) (1998) 37–45
[6] Pohl, K., Metzger, A.: Software product line testing. Commun. ACM 49(12) (2006) 78–81
[7] Reis, S., Metzger, A., Pohl, K.: Integration testing in software product line engineering: A model-based technique. In Dwyer, M.B., Lopes, A., eds.: Fundamental Approaches to Software Engineering (FASE), 26-30 March 2007, Braga, Portugal, Proceedings. Volume 4422 of LNCS., Springer (2007) 321–335
[8] Reuys, A., Kamsties, E., Pohl, K., Reis, S.: Model-based system testing of software product families. In Pastor, O., e Cunha, J.F., eds.: Advanced Information Systems Engineering, 17th International Conference (CAiSE 2005), 13-17 June 2005, Porto, Portugal, Proceedings. Volume 3520 of LNCS., Springer (2005) 519–534
[9] Metzger, A., Bühne, S., Lauenroth, K., Pohl, K.: Considering feature interactions in product lines: Towards the automatic derivation of dependencies between product variants. In Reiff-Marganiec, S., Ryan, M., eds.: Feature Interactions in Telecommunications and Software Systems VIII (ICFI'05), 28-30 June 2005, Leicester, UK, IOS Press (2005) 198–216
[10] Metzger, A., Pohl, K.: Anforderungsbasierte Erkennung von Feature-Interaktionen in der Produktlinienentwicklung. In Biel, B., Book, M., Gruhn, V., eds.: German Conference on Software Engineering (SE 2006), 28-31 March 2006, Leipzig, Germany, Proceedings. Volume P-79 of LNI., Köllen Druck und Verlag GmbH, Bonn (2006) 53–58
[11] Metzger, A.: Feature interactions in embedded control systems. Computer Networks 45(5) (2004) 625–644
[12] Kawauchi, S., Ohta, T.: Mechanism for 3-way feature interactions occurrence and a detection system based on the mechanism. In Amyot, D., Logrippo, L., eds.: Feature Interactions in Telecommunications and Software Systems VII (FIW 2003), 11-13 June 2003, Ottawa, Canada, Proceedings., IOS Press (2003) 313–328
[13] Deelstra, S., Sinnema, M., Bosch, J.: Product derivation in software product families: a case study. Journal of Systems and Software 74(2) (2005) 173–194
[14] Maccari, A., Heie, A.: Managing infinite variability in mobile terminal software. Softw., Pract. Exper. 35(6) (2005) 513–537
[15] Lauenroth, K., Pohl, K.: Towards automated consistency checks of product line requirements specifications. In Egyed, A., Fischer, B., eds.: 22nd IEEE/ACM International Conference on Automated Software Engineering (ASE), 5-9 November, Atlanta, GA, USA, Proceedings. (2007)
[16] Czarnecki, K., Pietroszek, K.: Verifying feature-based model templates against well-formedness OCL constraints. In Jarzabek, S., Schmidt, D.C., Veldhuizen, T.L., eds.: Generative Programming and Component Engineering, 5th International Conference (GPCE 2006), 22-26 October 2006, Portland, Oregon, USA, Proceedings, ACM (2006) 211–220
[17] Thaker, S., Batory, D., Kitchin, D., Cook, W.: Safe composition of product lines. In: Generative Programming and Component Engineering, 6th International Conference (GPCE 2007), 1-3 October 2007, Salzburg, Austria, Proceedings, ACM (2007)
[18] Zhang, L., Malik, S.: The quest for efficient boolean satisfiability solvers. In Brinksma, E., Larsen, K.G., eds.: Computer Aided Verification, 14th International Conference (CAV 2002), 27-31 July 2002, Copenhagen, Denmark, Proceedings. Volume 2404 of LNCS., Springer (2002) 17–36
[19] Dechter, R.: Constraint Processing. Elsevier, Oxford, UK (2003)
[20] Clarke, E.M., Grumberg, O., Peled, D.A.: Model Checking. MIT Press, Cambridge, Mass. (2000)
[21] Thiffault, C., Bacchus, F., Walsh, T.: Solving non-clausal formulas with DPLL search. In Wallace, M., ed.: Principles and Practice of Constraint Programming, 10th International Conference (CP 2004), 27 September - 1 October 2004, Toronto, Canada, Proceedings. Volume 3258 of LNCS., Springer (2004) 663–678
[22] Muthig, D., Atkinson, C.: Model-driven product line architectures. In Chastek, G.J., ed.: Software Product Lines, Second International Conference (SPLC 2), 19-22 August 2002, San Diego, CA, USA, Proceedings. Volume 2379 of LNCS., Springer (2002) 110–129
[23] Czarnecki, K., Eisenecker, U.: Generative programming: methods, tools, and applications. ACM Press/Addison-Wesley Publishing Co., New York, NY, USA (2000)
[24] Benavides, D., Ruiz-Cortes, A., Trinidad, P., Segura, S.: A survey on the automated analyses of feature models. XV Jornadas de Ingeniería del Software y Bases de Datos, JISBD 2006 (2006)
[25] Helferich, A., Jesse, S., Mikusz, M.: Software product lines, service-oriented architecture and frameworks: Worlds apart or ideal partners? In Draheim, D., Weber, G., eds.: 2nd International Conference on Trends in Enterprise Application Architecture, 29 November - 1 December 2006, Berlin, Proceedings. (2006) 143–157
Service Broker for Next Generation Networks Fuchun Joseph Lin and Kong Eng Cheng Telcordia Technologies 1 Telcordia Drive Piscataway, NJ 08854, U.S.A. {fjlin, kcheng}@research.telcordia.com
Abstract. This paper describes the emerging need for feature interaction managers in Next Generation Networks, which are based on a convergent IP architecture to support voice, data, and multimedia services. Though this need has been addressed by the telecom industry with various architectural components under different names, such as the Service Capability Interaction Manager (SCIM) in the 3GPP (3rd Generation Partnership Project) IP Multimedia Subsystem (IMS), the consensus of the industry is to call such a network function Service Brokering and the network component fulfilling this function the Service Broker. This paper reports the current industrial status in architecting the Service Broker, discusses the limits of the Service Brokering functions defined in 3GPP, and points out open issues for further research.
Keywords. Feature interaction management, Next Generation Networks, Service Capability Interaction Manager (SCIM), Service Broker, 3GPP IP Multimedia Subsystem (IMS)
1. Feature Interactions Management in Next Generation Networks
Next Generation Networks (NGN) based on a convergent IP architecture revolutionize the traditional approach of building special-purpose networks for specific vertical services (e.g. the PSTN [Public Switched Telephone Network] for voice services, cable networks for video delivery, and the Internet for data services). The idea is to use one network (i.e. an IP network) to offer all services that span voice, data, and multimedia communications. In such next generation networks, access networks can take any of the following forms: DSL, cable, fixed wireless or mobile wireless, while there is only one IP core network shared by all access networks. This is a total integration of all the existing networks, with a high-speed IP core network and various access gateways on the edge to interface with the different access networks. This architecture allows independent evolution of core and access networks and also shields the changes of one from impacting the other. Moreover, service or application technologies developed above the IP transport layer can also be
made independent of the network technologies below the IP layer. This shields services and applications from the constant change and evolution of the underlying network technologies. With applications all built on top of an IP transport layer, there is no need to maintain multiple service networks. This greatly reduces the capital and operational expense associated with maintaining service development and operations support for multiple networks.
Feature interactions occur in Next Generation Networks for the following reasons:
1. All NGN services compete for the underlying shared network resources below the IP transport layer in core and access networks. As a result, an NGN service may inhibit another NGN service due to the constraints of the underlying transport resources.
2. Above the IP transport layer, NGN services are mostly based on SIP (Session Initiation Protocol) [5] as the signaling protocol and IMS (IP Multimedia Subsystem) [1][2][3][4] as the session control. As a result, NGN services may interact with one another either via SIP signaling or via IMS sessions. For example, two NGN services may be simultaneously triggered by the same SIP method in an IMS session. Thus there is a need to manage which service will have precedence over the other.
3. Moreover, it is also possible for IMS-based NGN services to interact with non-IMS-based NGN services. For example, a non-IMS service such as a calendar service can be used to decide whether an IMS service such as call forwarding need be triggered.
2. Service Broker as Feature Interactions Manager in NGN
In this section, we survey the 3GPP effort in defining an architectural framework for service brokering in the IMS. The 3GPP is currently conducting a feasibility study of IMS Service Brokering in Release 8 in order to deal with feature interaction problems. The 3GPP IMS [1][2][3][4][5][6][7] already supports some selected Service Brokering functions via two IMS functional components and the interactions between them:
• Serving Call Session Control Function (S-CSCF) and its Filter Criteria
• Application Server (AS) and its Service Capability Interaction Manager (SCIM)
In 3GPP, the S-CSCF provides call session control while services can be provisioned on three types of Application Servers [4], as depicted in Figure 1. The S-CSCF communicates with the Application Server via the IP multimedia Service Control (ISC) interface, which is based on SIP. The three types of Application Servers [4] are:
1. SIP Application Servers
2. The IM-SSF (IMS Service Switching Function) Application Server for hosting the CAMEL (Customized Applications for Mobile Enhanced Logic) network features [8].
3. The OSA (Open Service Access) Service Capability Server (SCS) that interfaces to the OSA Application Server [9] for third party service creation.
Figure 1. Service Provision for 3GPP IMS (From 3GPP TS 23.218)
Additionally, there is a specialized type of SIP Application Server, the Service Capability Interaction Manager (SCIM), that performs feature interaction management between application servers. In summary, the Service Brokering functions in 3GPP exist in either the S-CSCF or the SCIM. Below we give further details on each of these functions.
2.1. S-CSCF and its Filtering Criteria
Figure 2 below shows how S-CSCF utilizes Filter Criteria to mediate the execution of service logic in the Application Server.
Figure 2. Filter Criteria in S-CSCF (From 3GPP TS 23.218)
Filter Criteria (FC) are defined as the information which the S-CSCF receives from the HSS (Home Subscriber Server) or the AS (Application Server) that defines the relevant SPTs (Service Point Triggers) for a particular application. They define the subset of SIP requests received by the S-CSCF that should be sent or forwarded to a particular application in the Application Server. The SPTs are the points in the SIP signaling that may cause the S-CSCF to send/proxy the SIP message to an SIP AS/OSA SCS/IM-SSF. The subsets of all possible SPTs which are relevant to a particular application are defined by means of Filter Criteria. SPTs may potentially include:
• any initial known or unknown SIP method (e.g., REGISTER, INVITE, SUBSCRIBE, MESSAGE)
• presence or absence of any header
• content of any header
• direction of the request with respect to the served user
• session description information (i.e. SDP)
Multiple SPTs can be linked via logical expressions (e.g., AND, OR, NOT). Initial Filter Criteria (iFC) are the filter criteria that are stored in the HSS as part of the user profile and are downloaded to the S-CSCF upon user registration. They represent a provisioned subscription of a user to an application. Subsequent Filter Criteria (sFC) are the filter criteria that are signaled from the SIP AS/OSA SCS/IM-SSF to the S-CSCF. They allow for dynamic definition of the relevant SPTs at application execution time.
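As a purely illustrative sketch of how such criteria could be evaluated (the data structures below are invented for this example and are not the XML user-profile format actually used for Filter Criteria in the 3GPP specifications), a Filter Criterion can be modelled in Python as a predicate over the initial SIP request together with the Application Server that should receive matching requests:

    # Hypothetical model of a Filter Criterion; names and structures are assumptions.
    class FilterCriterion:
        def __init__(self, priority, predicate, application_server):
            self.priority = priority                  # lower value = evaluated first
            self.predicate = predicate                # SPTs combined with AND/OR/NOT
            self.application_server = application_server

    # Service Point Triggers expressed as small predicates over a parsed request.
    def is_invite(req):
        return req["method"] == "INVITE"

    def has_header(name):
        return lambda req: name in req["headers"]

    def is_originating(req):
        return req["direction"] == "originating"

    # Example: trigger an AS for originating INVITEs that carry a session description.
    fc = FilterCriterion(
        priority=1,
        predicate=lambda req: is_invite(req) and is_originating(req)
                              and has_header("Content-Type")(req),
        application_server="sip:as1.example.com",
    )

    request = {"method": "INVITE", "direction": "originating",
               "headers": {"Content-Type": "application/sdp"}}
    if fc.predicate(request):
        print("forward to", fc.application_server)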
Figure 3. S-CSCF as Service Broker across Different IMS Application Servers
On the Application Server, Service Platform Trigger Points (STPs) are the points in the SIP signaling that instruct the SIP AS, OSA SCS and IM-SSF to execute the service logic.
The S-CSCF may receive a set of Filter Criteria in the iFC or sFC. In order to allow the S-CSCF to handle different Filter Criteria in the right sequence, a priority shall be assigned to each of them. The S-CSCF will then sequence the handling of these Filter Criteria based on this priority. The mechanism of Filter Criteria thus enables the S-CSCF to perform brokering functions as depicted in Figure 3. However, the actual interaction logic for managing the interactions still needs to be developed. Figure 3 shows that a SIP INVITE sequentially triggers Prepaid, Push to Talk (PTT), and Call Restriction Applications residing in IN SCP, SIP, and OSA Application Servers, respectively, before it is routed to its destination.
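Continuing the sketch from Section 2.1 (still hypothetical Python, not behaviour defined by 3GPP), the priority-based sequencing described above amounts to sorting the matching Filter Criteria and involving the corresponding Application Servers one after another, much like the Prepaid, PTT and Call Restriction chain of Figure 3; forward_to_as and route_to_destination are placeholders for ISC signalling and normal SIP routing.

    # Hypothetical sketch of priority-based brokering in an S-CSCF-like function.
    def forward_to_as(request, application_server):
        return request            # placeholder: the AS may modify the request over ISC

    def route_to_destination(request):
        return request            # placeholder: normal SIP routing towards the callee

    def broker(request, filter_criteria):
        # Handle the Filter Criteria in priority order, chaining the Application Servers.
        for fc in sorted(filter_criteria, key=lambda f: f.priority):
            if fc.predicate(request):
                request = forward_to_as(request, fc.application_server)
        return route_to_destination(request)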
2.2. Application Server and its SCIM
In the 3GPP IMS service provision architecture in Figure 4, the SIP AS contains a SCIM (Service Capability Interaction Manager) to manage feature interactions and do ‘work flow management’ between SIP Application Servers. The SCIM thus can provide service brokering functions for the services on the 3GPP SIP Application Server as it will arbitrate the execution of service logic across multiple SIP applications. This brings up the possibility of combining the Filter Criteria in S-CSCF and the SCIM in the Application Server to create multiple levels of service brokering as indicated in Figure 4. Figure 4 shows that three services PTT (Push to Talk), GLM (Geographical Location Manager), and Presence residing on the SIP AS (Application Server-B) are managed by the SCIM while the services residing on three Application Servers are managed by the Filtering Criteria in the S-CSCF. This in essence creates a new challenge of managing interaction logic of “distributed service brokering functions”.
Figure 4. Combined Use of Filtering Criteria and SCIM for Service Brokering
3. Limitation of Existing Service Brokering Functions
This section points out the limitations of the existing service brokering functions defined in the 3GPP IMS.

1. Brokering only at the SIP Protocol Level. As the analysis in Section 2 indicates, the service brokering functions currently in the 3GPP IMS operate strictly at the SIP protocol level. This imposes a very severe limitation on the types of services that can be managed by the Service Broker. For example, all non-SIP applications such as HTTP web browsing, LDAP directory, and SOAP web services will be excluded from consideration.

2. Limits on Filter Criteria. The Filter Criteria currently defined by the 3GPP IMS are conditions based on the SIP REQUEST-URI, method, and headers, the direction of the request (incoming or outgoing call), and the content of the SDP, as well as logical expressions of these conditions. Thus their expressive power is very limited. For example, if an application is to be triggered by comparing the contents of two SIP headers, this is not supported by the current Filter Criteria (a sketch of such a cross-header condition follows this list).

3. Limits and Lack of Requirements on SCIM. The SCIM as currently defined cannot arbitrate service logic across the SIP AS, OSA SCS, and IM-SSF, as it is embedded in the SIP AS. Furthermore, the requirements for the SCIM are not currently specified at all by 3GPP; as indicated in 3GPP TS 23.003, Section 5.5, "the internal structure of the application server is outside the standards." Basically, the only service brokering function specified by 3GPP now is the Initial Filter Criteria of the S-CSCF.

4. Weak Support of Service Broker as a Stand-Alone Component. The 3GPP has yet to define a stand-alone Service Broker functional component for the integration of SIP services, since its SCIM is embedded in the SIP Application Server.

5. Little Support for Dynamic Interactions Management. Though the 3GPP IMS defines Subsequent Filter Criteria to enable dynamic feature interaction management (Section 2), the Filter Criteria in active use now are mostly Initial Filter Criteria, which define only a static priority order among multiple services. As a result, dynamic feature interaction management at runtime, such as modifying the priority sequence of services or inserting new services, is still not well understood.

6. No Support for Interactions across Multiple Users or Multiple Sessions. The 3GPP has not addressed feature interaction management issues across multiple users or across multiple sessions of a user.
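To make the second limitation concrete, the condition below compares the contents of two headers of the same request, something the standardized Filter Criteria grammar cannot express, but which a predicate-based trigger model (such as the hypothetical one sketched earlier) handles trivially. The header names are only an example.

```java
import java.util.Objects;
import java.util.function.Predicate;

class ExtendedTriggers {
    // Trigger only when two headers of the same request carry the same value,
    // e.g. when P-Asserted-Identity matches the From header. The current iFC
    // conditions can test each header in isolation, but not their relationship.
    static Predicate<SipRequest> headersMatch(String headerA, String headerB) {
        return r -> Objects.equals(r.headers().get(headerA), r.headers().get(headerB));
    }
}
```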
4. Open Issues for Further Research
It is clear that the current service interaction management architecture in the 3GPP IMS is not sufficient to manage interactions between NGN application servers. The
open problem is what functional architecture enhancement is required to better support service interaction management, based on suitable extensions of the existing IMS/NGN protocols and procedures. The current draft of 3GPP TR 23.810, "Architecture Impacts of Service Brokering", defines what is required for service brokering in IMS/NGN: "The service brokering functions are to provide an end user a coherent and consistent IP multimedia service experience when multiple IP multimedia applications are invoked in a session. Such support involves identifying which applications are invoked per subscriber, understanding the appropriate order of the set of applications, and resolving application interactions during the session [TS 22.228]. The applications can reside in any type of IMS Application Servers including an IM-SSF, SIP AS, OSA SCS or other (e.g. OMA enabler) or any combination of the above." [10]

Based on the limitations of the existing NGN service brokering functions discussed in Section 3, we summarize the open issues faced by the industry right now:

1. What service brokering functions can be standardized? We believe service brokering functions can be divided into two categories: on-line and off-line. Off-line functions include the following tasks:
• Identify all applications subscribed to by a user.
• Understand how many ways these applications may work together by resolving their potential interactions.
• Decide on one or more service behaviors of the combined applications (based on the user's expectations) for provisioning.
On-line functions are then to ensure that, in a live session, when these multiple applications are invoked by the user, they work as the user expects. We believe the only service brokering functions that can be standardized are those on-line functional architecture elements that provide architectural support for enforcing the appropriate order of interacting applications.

2. How much impact on the IMS core network and AS is acceptable when introducing more capable service brokering functions? We believe the architecture introduced should have as little impact as possible but, on the other hand, should provide as much flexibility as possible in order to accommodate new applications. As these two are competing tradeoffs, the architecture needs to be carefully designed to meet both requirements with maximum benefit.

3. How to accommodate all applications deployed over the three types of IMS application servers, including integration with existing IN services such as CAMEL? Note that these IN services are not SIP-based and need to be mapped to the corresponding SIP SPTs (Section 2).

4. How to accommodate service integration across different access networks such as UMTS, WLAN, WiMAX, and cable? Ideally, there should be no issues, because all services are developed on top of a common IP layer. But in reality, each access network has its own specific QoS, security, and charging methods and also interacts differently with the core network. As a result, service integration across these various networks will need to consider the integration of the heterogeneous QoS, security, and charging mechanisms brought in by each network.
5. How to support service integration between SIP and non-SIP applications and accommodate both in the IMS/NGN service architecture? Many IP services are not SIP-based, and service integration between SIP and non-SIP applications seems to provide the most fertile field for new NGN applications. For example, many emerging IPTV services are not SIP-based; however, integration of IPTV and SIP-based communication services can enable many attractive triple-play services.

6. How to support service integration across multiple providers? One type of IMS Application Server is the OSA Application Server, which, via the OSA SCS interface, provides an open platform for any third party to become an IMS service provider through a secure interface. Ideally, the Service Broker should allow service integration over application servers of different providers without requiring each provider to expose the internal details of its services.

7. How to deal with distributed interaction management between multiple service brokers within the same or across different administrative domains, with both security and charging considerations? Such a distributed service brokering function is essential when multiple service providers are involved in the IMS services.
References

[1] 3GPP TS 22.228, 3GPP Technical Specification Group (TSG) Services and System Aspects (SA); Service requirements for the Internet Protocol (IP) multimedia core network subsystem; Stage 1.
[2] 3GPP TS 23.002, 3GPP TSG SA; Network architecture.
[3] 3GPP TS 23.228, 3GPP TSG SA; IP Multimedia Subsystem (IMS); Stage 2.
[4] 3GPP TS 23.218, 3GPP TSG CN; IP Multimedia (IM) session handling; IM call model; Stage 2.
[5] 3GPP TS 24.229, 3GPP TSG CN; IP Multimedia Call Control Protocol based on Session Initiation Protocol (SIP) and Session Description Protocol (SDP); Stage 3.
[6] 3GPP TS 29.228, IP Multimedia (IM) Subsystem Cx and Dx Interfaces; Signaling flows and message contents.
[7] 3GPP TS 29.229, Cx and Dx Interfaces based on the Diameter protocol; Protocol details.
[8] 3GPP TS 29.078, Customized Applications for Mobile network Enhanced Logic (CAMEL) Phase X; CAMEL Application Part (CAP) specification.
[9] 3GPP TS 29.198, Open Service Access (OSA); Application Programming Interface (API); Part 1: Overview.
[10] 3GPP TR 23.810 Draft, V0.5.0 (2007-05), "Architecture Impacts of Service Brokering".
A Feature Interaction View of License Conflicts

Gangadharan G.R.(a), Michael WEISS(b,1), Babak ESFANDIARI(b), and Vincenzo D'ANDREA(a)
(a) Department of Information and Communication Technology, University of Trento, Via Sommarive 14, Trento, 38050, Italy
(b) Systems and Computer Engineering, Carleton University, 1125 Colonel By Drive, Ottawa, K1S 5B6, Canada
(1) Corresponding Author: Department of Systems and Computer Engineering, Carleton University, 1125 Colonel By Drive, Ottawa, K1S 5B6, Canada; E-mail: [email protected].

Abstract. In this paper, we introduce the problem of license conflicts, which occurs when information assets (such as software, data, or multimedia files) are composed, derived or versioned. A license specifies a set of permissions granted by an asset owner to an asset consumer (as expressed in the form of licensing clauses), effectively waiving what would otherwise be an infringement of the owner's intellectual rights. Thus, a license allows producers to control how consumers may use and extend the asset. New assets can be produced by composing multiple assets or deriving an asset from an existing asset, as governed by their licenses. Licenses interact with each other either directly or indirectly during the composition or derivation of assets. Licenses can also interact with other versions of the same license during the evolution of an asset. We view interactions of licenses as feature interactions, especially if those interactions result in conflicts. Here, features correspond to licensing clauses. In this paper, we identify and analyze feature interactions of licenses during the composition, derivation, and evolution of assets.

Keywords. Feature Interactions, Information Assets, License Conflicts
1. Introduction

Information assets (referred to simply as assets in this paper) are described as information that is of value to an organization. An asset can be software, a component, a service, a process or content that holds intellectual value. It can be combined with other assets. A new asset can also be derived from an existing asset. New versions of an asset can, furthermore, be released as a representation of enhancements to its functional or non-functional specification. However a new asset is produced, its distribution involves a license that must represent the unified view of the licenses of the composed assets, or of the parent asset. During the formation of a new asset, the licensing clauses of one asset may conflict with the licensing clauses of other assets. These conflicts are similar to the conflicts observed in feature interactions. Hence, licensing clauses are modeled as features in this paper. Understanding these interactions, and developing techniques for their detection and resolution, will be critical for the legally authorized use of assets on any significant scale. Furthermore, the detection of these feature interactions will be the cornerstone for developing a framework for the semi-automatic composition of licenses. In this paper, we take first steps towards such a framework by providing a conceptualization of license conflicts as feature interactions, and a classification of license feature interactions.

This paper introduces license conflicts as feature interactions, which may occur when information assets are composed, derived or versioned. It is organized as follows. Section 2 introduces the fundamentals of asset licensing and licensing clauses. Section 3 frames the problem of feature interactions for licenses. We classify licensing conflicts that can arise from these interactions in Section 4. In Section 5, we illustrate the various scenarios of feature interactions in the context of licenses specific to services, music or software assets. Section 6 discusses related work in this field, followed by our conclusions in Section 7.
2. Basics of Asset Licensing

The distribution of an asset is always accompanied by a license, which describes the terms and conditions imposed by its producer. A license reflects the overall business value of the asset to its producers and consumers. Licensing is often used to protect the intellectual rights of asset producers, thereby turning the assets into a source of revenue, and licenses into a tool for business strategy. Also, licenses give developers control over how consumers can use the licensed assets. Consequently, asset producers rely on licenses to protect their assets from unauthorized consumption. An asset producer (the licensor) never transfers ownership of the asset to the consumer. Instead, the consumer (the licensee) merely obtains the right to use/extend the asset subject to the restrictions imposed by the license [1]. Thus, asset licensing is considered to include all transactions between a licensor and a licensee, in which the licensor agrees to grant the licensee the right to use and/or extend (by deriving from it) the asset under predefined terms. More broadly, an asset license is expected to have these elements [2]:

1. Subject of the License: The subject of the license relates to the definition of the asset being licensed, such as a unique identification code for the asset, a name for the asset, and other additional information.

2. Scope of Rights: The scope of rights reflects on what the licensee can do with the licensed asset [3]. This defines the extent to which the asset can be used, accessed, and value added to it (composition or derivation). Several different grants of rights are described, including the right to reproduce, display, access, modify, make derivative works, sell or distribute, import, and sub-license to another party, who can do any of the above. The Scope of Rights falls into four types: Usage, Reuse, Manage, and Transfer.
• Usage: Usage pertains to the end use of the asset. Usage rights are generally rendering actions like execute, play, display or print.
• Reuse: Set of rights pertaining to the reuse of an asset by modifying, excerpting, or aggregating. Reuse can be in full or in part.
• Manage: Rights pertaining to the digital management of an asset. This includes housekeeping actions such as back up, install, or uninstall.
• Transfer: Transfer rights apply to the actions that allow a person or agent to transfer some specific rights to another person or agent. In general, transfer rights include the right to sell, lend, or lease. Transfer rights may involve ownership transfer, and may allow the asset to be used in perpetuity with or without exchange of value.

3. Financial Terms: Describe how the licensee will pay for the use of an asset. Consumers make payments either through royalties or a lump sum payment. Generally royalties are based on per-unit sales. Lump sum payments are an alternative to royalties. Sometimes, lump sum payments are also used in addition to royalties. A lump sum payment can be paid by the consumer in advance of using the service (prepaid) or at a later stage (post-paid). Alternatively, the producer can make the asset available free of charge.

4. Warranties, Indemnification, and Limitations: Address issues of who bears the financial risk of asset defects or the legal risk of a third party claiming that the asset infringes on or violates their intellectual rights.
• Warranties: A warranty is a promise regarding the description of the assets and their quality, stated by the producer.
• Indemnification: Provision of defense by the licensor for the licensee if a third party sues the licensee, alleging that the licensee's use of the licensed asset infringes on or violates their intellectual rights [4].
• Limitation of liability: Limitation of liability deals with the liability of each of the parties under the license agreement.

5. Evolution: Pertains to the rights over future releases or versions of an asset.

Furthermore, there are licensing clauses that provide moral support to consumers and providers.
• Attribution: An asset may expect attribution for its use in any form by another asset. Thus, attribution is ascribing an asset to its creator.
• Non-Commercial Use: An asset can allow or deny other assets to use it either for non-commercial purposes or for commercial purposes.
• Sharealike: An asset may expect another asset to reflect the same terms and conditions (similar to the Copyleft of GNU, http://www.gnu.org/copyleft/, or the Sharealike of Creative Commons [5]).

As rights for assets vary based on the nature and context of the assets involved, the expression of rights for a particular asset will be more specific. For example, one of the rights for a multimedia asset can be to play it. The concept of playing cannot be directly applied as a right to web service assets. Similarly, the rights for a web service differentiate between the levels of interface and implementation, which are not separate for multimedia or software assets.
3. Feature Interaction Problem for Asset Licenses

In software, a feature is a component of additional functionality, i.e., it extends the core functionality of the software [6]. Features are added incrementally. This can happen at different stages in the lifecycle of the software, and changes are usually made by different
developers. Features are often developed and tested independently, or within a particular context. However, when several features are combined, there may be interactions between the features. Interactions are behavioral modifications where, in the context of software, one feature affects the behavior of another. Such effects can be benign and even required, or adverse. Thus, the feature interaction problem concerns the coordination of features such that they cooperate towards a desired result at the application level.

Applied to asset licenses, licenses can be thought of as having several features (usually referred to as licensing clauses), such as allowing users to create derivative works from the asset. When licenses are used together (in some way) within the same context (we intentionally avoid the term "combined" here since, as we shall see later, composition has a specific meaning for licenses), there can be conflicts between the licenses. Such conflicts take the form of clauses (i.e., features) of one license affecting clauses of another license in an adverse way.

We can conceptualize the relationship between assets and their associated licenses and the interactions of licenses as shown in Figure 1. Assets are represented as circles, and their associated licenses using earmarked rectangles. Solid lines between assets show relationships between assets. Associations between assets and their licenses are shown as directed lines, pointing from licenses to assets. Interactions between licenses are shown as bidirectional dashed lines.
Figure 1. Assets, Licenses, and Interactions
Licensing clauses can be classified into the following three categories, based on how the interactions among them affect one another:

1. Independent clauses: These licensing clauses do not affect the resulting license. For example, a software component with a license clause similar to No-Attribution of a Creative Commons (CC) license will not affect the resulting license. A No-Attribution clause leaves the choice of Attribution clause open to the resulting license, and thus has no impact on it.

2. Compelling clauses: The presence of certain clauses in a license may restrict the clauses of a resulting license, and forces the resulting license to adhere to the compelling clauses. For example, the copyleft clause of the GNU GPL makes the resulting license viral. Licensees must distribute the asset to other parties under the same terms as the GPL.

3. Repelling clauses: Certain clauses will not allow the combination with certain other licensing clauses. For example, a component with a licensing clause similar
to the Non-Commercial clause of a CC license will deny the component the ability to interact with another component under a licensing clause that allows commercial use. The Non-Commercial clause states that the licensee may not use the component for commercial purposes.

There are certain license clauses which are broader in their scope of operation than certain other clauses. Assume two assets with different license clauses, e.g., composition and derivation. If one asset allows composition, a license allowing derivation can also be used, because derivation subsumes composition. We say that derivation and composition are compatible, or that derivation can be redefined as composition. The concept of redefinition (at the license clause level) is similar to the concept of redefinition of a method in a subclass [7]. Redefinition implies that two license clauses are compatible if the given license clause is more permissive (accepts more) than the corresponding clause in the other license. (Two license clauses are trivially compatible if they are identical.)

License conflicts occur when licenses with incompatible clauses are combined. In certain cases, the absence of one or several of these clauses will not cause conflicts. Table 1 lists rules to determine the compatibility of license clauses with unspecified ("don't care") license clauses. Together with redefinition, Table 1 allows us to determine when different types of license clauses are, in effect, compatible with one another. The details of checking license compatibility are, however, out of scope for this paper, and are described elsewhere.

Table 1. Rules for determining compatibility with unspecified licensing clauses

Specified Clause       Compatible   Rationale
Composition            NO           A license denying composition cannot be compatible with a license allowing composition.
Derivation             NO           Derivation specifies the creation of an asset based on one or more existing assets.
Attribution            YES          The requirement to specify attribution will not affect the compatibility when unspecified.
Sharealike             YES          The composite license must be similar to the license with the Sharealike clause.
Non-commercial use     NO           Commercial Use is denied by Non-commercial use.
Payment                YES          Payment clauses do not affect compatibility directly, if unspecified.
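A minimal sketch of such a compatibility check is given below. It deliberately simplifies the model of this paper: each license is reduced to the set of clauses it specifies (the allow/deny valence of a clause is ignored), the clause vocabulary is the one of Table 1, and the only redefinition rule encoded is that derivation subsumes composition. The class and method names are invented for illustration.

```java
import java.util.Map;
import java.util.Set;

enum Clause { COMPOSITION, DERIVATION, ATTRIBUTION, SHAREALIKE, NON_COMMERCIAL_USE, PAYMENT }

final class ClauseCompatibility {

    // Table 1: may a clause specified in one license remain unspecified in the other?
    private static final Map<Clause, Boolean> COMPATIBLE_WHEN_UNSPECIFIED = Map.of(
            Clause.COMPOSITION, false,
            Clause.DERIVATION, false,
            Clause.ATTRIBUTION, true,
            Clause.SHAREALIKE, true,
            Clause.NON_COMMERCIAL_USE, false,
            Clause.PAYMENT, true);

    // Redefinition: a more permissive clause also satisfies a narrower one,
    // e.g. a license allowing derivation subsumes one allowing only composition.
    private static boolean subsumes(Clause broader, Clause narrower) {
        return broader == narrower
                || (broader == Clause.DERIVATION && narrower == Clause.COMPOSITION);
    }

    // Two licenses are compatible when every clause specified in one is either
    // related by redefinition to a clause of the other, or is allowed to stay
    // unspecified according to Table 1.
    static boolean compatible(Set<Clause> licenseA, Set<Clause> licenseB) {
        return covered(licenseA, licenseB) && covered(licenseB, licenseA);
    }

    private static boolean covered(Set<Clause> specified, Set<Clause> other) {
        return specified.stream().allMatch(c ->
                other.stream().anyMatch(o -> subsumes(o, c) || subsumes(c, o))
                        || COMPATIBLE_WHEN_UNSPECIFIED.get(c));
    }
}
```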
4. Types of Feature Interactions in Asset Licenses

Licenses associated with assets interact in three cases:

1. When an asset is combined with other assets, their associated licenses also need to be combined, and these licenses interact.

2. When new assets are derived from an existing asset, the license of the existing asset interacts with that of the derived one.
3. When a new version or release of an asset is published, the license of the evolved asset interacts with the license of the previous version.

We describe these feature interactions in more detail below, and provide a general template for each type. Examples of using them are given in Section 5.

4.1. Feature Interactions During the Composition of Asset Licenses

Composition is a form of integrating assets in a way that adds value to the assets taken individually. When assets are composed, their licenses are also composed. If composition results in incompatibilities, then the corresponding assets cannot be composed. The composition of licenses can produce a set of compatible licenses for the composite asset (that is, there are multiple licenses to choose from for the composite asset that are all compatible with the composed licenses; this aspect is further explored elsewhere). The composite license can contain licensing clauses which need not be present in the licenses of the assets that are composed. Assume P and Q are assets composed to form a composite asset R, as shown in Figure 2. In the figure, I_XY represents the interaction between the assets X and Y. It is expected that the licenses L(P) and L(Q) are compatible, which, in turn, requires their license clauses to be compatible. A detailed algorithm for checking license compatibility is provided in another paper [8].
Figure 2. License Interactions during Composition
This composition can be represented as:

LC(R) ⊃ (LC(P) ∩ LC(Q)) ∪ LC_NEW

where LC_NEW is a set of licensing clauses exclusive to the composite asset. We can distinguish two types of interactions between the licenses:

1. Interactions between the licenses being composed (I_PQ)
2. Interactions between the composite license and each of the licenses being composed (I_RP and I_RQ)

In addition to those direct interactions, there can also be indirect license interactions. Figure 3 provides an example. The indirect interaction IND_Rγ is shown by a dot-dashed line. Consider an asset Y that composes another asset X. For example, Y is a service that provides weather information, such as Yahoo!Weather. In turn, it gets its weather data from another service X. Y has a licensing agreement, L(X), with X for receiving the weather data. However, the copyright over the data that Y is offering as a service remains with X.
Figure 3. Indirect Licensing Clauses Interactions
An end user γ that wants to use the service Y is bound to the terms of a license, L(Y). For example, the clauses in this license could include:
• Not to reproduce, (re)sell or exploit the service for commercial purposes.
• Not to modify, distribute or create derivative works based on the service.
• Not to access the service by any means other than through the interface provided by Y for accessing the service.

These terms restrict γ's use of Y. γ cannot use the service for any commercial purposes, nor can it derive or distribute the service. However, assume that the license of X allows any party to use the service and to create derivative works based on X. The license terms imposed by Y thus restrict γ from doing something that L(X) permits. For example, if γ were to create a derivative work of X, its license L(γ) would now be in conflict with L(Y).

4.2. Feature Interactions During the Derivation of Asset Licenses

For a new asset, a new license is given by its developer. This new license might be an existing standard license like the GNU GPL. The new license can also be derived from an existing license. The concept of a derivative license is similar to that of derivative software in Free/Open Source Software (FOSS) [9].
A derived license must include all licensing clauses from its parent license, but can add new clauses. The template for license derivation is shown in Figure 4. The derivation of a license L(T) from L(S) can be represented as:

LC(T) ⊇ LC(S) ∪ LC_NEW

where LC_NEW is a set of licensing clauses exclusive to the derivative asset.
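Both clause-set relations, LC(R) ⊃ (LC(P) ∩ LC(Q)) ∪ LC_NEW for composition and LC(T) ⊇ LC(S) ∪ LC_NEW for derivation, can be written directly as set operations. The sketch below (reusing the hypothetical Clause type from the earlier compatibility example) only constructs the minimal clause set that the respective template requires.

```java
import java.util.EnumSet;
import java.util.Set;

final class LicenseTemplates {

    // Composition: the composite license must contain at least the clauses common
    // to both composed licenses, plus any clauses exclusive to the composite asset.
    static Set<Clause> minimalCompositeClauses(Set<Clause> lcP, Set<Clause> lcQ, Set<Clause> lcNew) {
        EnumSet<Clause> result = EnumSet.noneOf(Clause.class);
        result.addAll(lcP);
        result.retainAll(lcQ);   // LC(P) ∩ LC(Q)
        result.addAll(lcNew);    // ∪ LC_NEW
        return result;
    }

    // Derivation: the derived license keeps every clause of its parent and may add new ones.
    static Set<Clause> minimalDerivedClauses(Set<Clause> lcS, Set<Clause> lcNew) {
        EnumSet<Clause> result = EnumSet.noneOf(Clause.class);
        result.addAll(lcS);      // LC(S)
        result.addAll(lcNew);    // ∪ LC_NEW
        return result;
    }
}
```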
Figure 4. License Interactions during Derivation
There can be interactions between the newly added clauses LC_NEW and the existing clauses of L(S). We expect LC_NEW to be compatible with L(S).

4.3. Feature Interactions During the Evolution of Asset Licenses

Over time, an asset can evolve in the following ways:
• Modifications by the producer of functional or non-functional properties of the asset, represented by new releases or new versions.
• Termination of the currently running asset and substitution by a new asset with different behavior.
• The same asset, but switching to a different asset license.

When an asset is released in several versions, each version of the asset can have a different license. However, the license of a particular version must not contradict that of a previous version. Here, the licensor is not creating a new license based on an existing one (different from a derivative license). The versions of licenses interact as shown in Figure 5 as the asset evolves over time.
5. License Conflict Scenarios as Feature Interactions

As assets are composed, derived or evolved, license conflicts can arise as a result of feature interactions of licensing clauses. Below, we describe three scenarios of licensing interactions for different types of assets. We analyze each scenario to determine the cause of the license conflict in terms of the type of feature interaction it represents, and instantiate the corresponding template.
Figure 5. License Interactions during Evolution
5.1. License Conflicts of Web Services

Example 1 (Web service composition). Consider a restaurant service R that composes a map service M and a resource allocation service I. Assume that M allows composition and permits the service to be used for any purpose, and that I allows derivation (which subsumes composition, as per Table 1) but can be used only for non-commercial purposes. When M and I interact during composition, a license conflict occurs, because I denies commercial use. Based on Table 1, these license interactions cause a conflict, resulting in incompatible licenses. The Non-Commercial Use feature in the license of I causes a conflict with the unspecified Non-Commercial Use feature in the license of M. If Non-Commercial Use is not specified, Commercial Use is deemed to be incompatible. Figure 6 instantiates the template for Composition of Asset Licenses.
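Example 1 can be replayed against the hypothetical checker sketched after Table 1: M specifies composition only, while I specifies derivation and non-commercial use. The derivation/composition pair is reconciled by redefinition, but the non-commercial clause of I has no counterpart in M and may not remain unspecified, so the two licenses are reported as incompatible.

```java
import java.util.EnumSet;

class Example1 {
    public static void main(String[] args) {
        var m = EnumSet.of(Clause.COMPOSITION);                           // map service M
        var i = EnumSet.of(Clause.DERIVATION, Clause.NON_COMMERCIAL_USE); // resource allocation service I
        // Fails on NON_COMMERCIAL_USE, exactly as Table 1 predicts.
        System.out.println(ClauseCompatibility.compatible(m, i));        // prints: false
    }
}
```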
Figure 6. Web Service Composition and License Interactions
5.2. License Conflicts in Multimedia Files

Example 2 (Modification and re-licensing of a music file). Consider a music file S with a license L_Play,Derive that allows users to play the file (render the asset in audio
form), and to create derivative works based on it. Assume that another music file T is derived from this file. If the owner of the derived music file issues the file under a license L_Play,Save that allows users to save the file (save a copy, including any changes, to permanent storage), this license conflicts with the license of the parent music file S. The play feature in the license of the parent music file does not, by itself, grant the right to store the file in modified form, although the derive feature allows modification. Thus, there is a conflict with the save feature in the license of the derived file. The interaction is the result of a Derivation of Asset Licenses. Different from Evolution of Asset Licenses, the modifications of the file and its license are not made by their original creator. Figure 7 instantiates the template for Derivation of Asset Licenses interactions as applied to this scenario.
Figure 7. Modification and re-licensing of a music file as license derivation
5.3. License Conflicts in Software Assets

Example 3 (Re-releasing an asset under a new license). Consider a software component S1 released under the GNU General Public License (GPL). At some point in the future, the licensor may decide to release a new version S2 under two different licenses, say, the GNU GPL (http://www.gnu.org/copyleft/gpl.html) and the Affero GPL (http://www.affero.org/oagpl.html). However, the Affero GPL is incompatible with GNU GPL version 2 because of its section 2(d), which covers the distribution of application programs via web services or computer networks. Thus, the release of S2 under the Affero GPL conflicts with the license of the previous version S1. Software released under the GNU GPL cannot be re-released under a GPL-incompatible license. This conflict is due to changes made to the license of a component. It does not fall under Derivation of Asset Licenses, however, as the licensor did not create a new license based on an existing one. Instead, the licensor re-released a new version S2 of a component S1 under a license that was incompatible with the existing license. This situation is shown in Figure 8, which instantiates the template for Evolution of Asset Licenses interactions. Here, the existing license was the GNU GPL, which requires that software released under it cannot be re-released under an incompatible license such as the Affero GPL.
Figure 8. Re-releasing an asset under a new license as license evolution
6. Related Work and Discussions

An asset is generally distributed with a license that governs what asset licensees can do. A license [1] includes all transactions between the licensor and the licensee, in which the licensor agrees to grant the licensee the right to use and access the asset under predefined terms and conditions. Asset licenses interact with one another during the course of the composition, derivation and evolution of assets, and conflicts may arise due to incompatibilities between license clauses. In our own work, we have studied licensing of services [10,11], and formalized the licensing clauses for services [12].

There has been related work on policy conflicts [13,14,15,16]. Policies and licenses are similar in that they govern what an asset does, but are not the same. Policies are commonly used for access control, quality of service, or other management tasks [16]. They capture high-level goals that can be enforced automatically. Policies are meant to be defined by users, allowing them to customize the behavior of a system. Policies provide the means for specifying and modulating the behavior of a feature to align its capabilities and constraints with the requirements of its users [15]. Licenses primarily focus on usage terms and access methods for assets, thus governing what users can do with an asset. They similarly modulate the use (not the behavior) of the asset.

There are important similarities and differences between policy conflicts and license conflicts. A policy conflict occurs if there are policies (for example, authentication or privacy policies) specified on two features that refer to their corresponding operations, and the policies are not compatible [13]. Policy conflicts are particularly prone to cause user confusion, as policies are often specified by the users as part of customizing a feature [14]. License conflicts occur in the following scenarios:
• The license of a composite asset (an aggregation of two or more assets) should be compatible with the licenses of all the assets being composed.
• As assets evolve over time, the changes introduced in the licensing clauses (addition of a new clause or modification of an existing clause) should be compatible with the other existing licensing clauses.
• The license of a derivative asset (as it is inherited from a parent asset license) should be compatible with the parent license.

License conflicts directly preclude the making of composite, modified or derivative assets, thus causing a loss for business.
7. Concluding Remarks

When assets are combined with other assets, whether or not they can be combined is determined by the compatibility of their associated licenses. New assets cannot be derived from existing assets unless the license of the existing asset is compatible with that of the derived one. The evolution of assets should be consistent with the corresponding licenses. In this paper, we have modeled interactions of licenses as feature interactions, especially if those interactions result in conflicts. Using this feature interaction view, we have detected license conflicts during composition, derivation, and evolution. We are continuing our work to resolve license conflicts by feature interaction approaches.
References

[1] Classen, W.: Fundamentals of Software Licensing. IDEA: The Journal of Law and Technology 37(1) (1996)
[2] World Intellectual Property Organization: Successful Technology Licensing. WIPO Publishers, Geneva, Switzerland (2004)
[3] Garcia, R., Gil, R., Delgado, J.: A Web Ontologies Framework for Digital Rights Management. Journal of Artificial Intelligence and Law, Online First (http://springerlink.metapress.com/content/03732x05200u7h27) (2007)
[4] Chavez, A., Tornabene, C., Wiederhold, G.: Software Component Licensing: A Primer. IEEE Software 15(5) (1998) 47–53
[5] Fitzgerald, B., Oi, I.: Free Culture: Cultivating the Creative Commons. Media and Arts Law Review (2004)
[6] Calder, M., Kolberg, M., Magill, E., Reiff-Marganiec, S.: Feature Interaction: A Critical Review and Considered Forecast. Computer Networks 41(1) (2003) 115–141
[7] Jezequel, J.M., Train, M., Mingins, C.: Design Patterns and Contracts. Addison-Wesley (1999)
[8] Gangadharan, G.R., Weiss, M., D'Andrea, V., Iannella, R.: Service License Composition and Compatibility Analysis. In: Proceedings of the International Conference on Service Oriented Computing (ICSOC'07) (2007)
[9] Feller, J., Fitzgerald, B.: A Framework Analysis of the Open Source Software Development Paradigm. In: Proc. of the 21st Annual International Conference on Information Systems (2000) 58–69
[10] D'Andrea, V., Gangadharan, G.R.: Licensing Services: The Rising. In: Proceedings of the IEEE Web Services Based Systems and Applications (ICIW'06), Guadeloupe, French Caribbean (2006) 142–147
[11] Gangadharan, G.R., D'Andrea, V., Weiss, M.: Free/Open Services: Conceptualization, Classification, and Commercialization. In: Proceedings of the Third IFIP International Conference on Open Source Systems (OSS), Limerick, Ireland (2007)
[12] Gangadharan, G.R., D'Andrea, V.: Licensing Services: Formal Analysis and Implementation. In: Proceedings of the Fourth International Conference on Service Oriented Computing (ICSOC'06), Chicago, USA (2006) 365–377
[13] Sahai, A., Thompson, C., Vambenepe, W.: Specifying and Constraining Web Services Behaviour through Policies. In: Proceedings of the W3C Workshop on Constraints and Capabilities for Web Services (2004)
[14] Reiff-Marganiec, S., Turner, K.: Feature Interaction in Policies. Computer Networks 45(5) (2004) 569–584
[15] Kamoda, H., Yamaoka, M., Matsuda, S., Broda, K., Sloman, M.: Policy Conflict Analysis Using Free Variable Tableaux for Access Control in Web Services Environments. In: Proceedings of the 14th Intl. World Wide Web Conference (WWW) (2005)
[16] Turner, K., Blair, L.: Policies and Conflicts in Call Control. Computer Networks 51 (2007) 496–514
Managing Feature Interaction by Documenting and Enforcing Dependencies in Software Product Lines

Roberto Silveira SILVA FILHO and David F. REDMILES
Bren School of Information and Computer Sciences, Department of Informatics
5029 Donald Bren Hall, Irvine, CA 92697-3440
{rsilvafi, redmiles}@ics.uci.edu
Abstract. Software product line engineering provides a systematic approach for the reuse of software assets in the production of similar software systems. To this end, it employs different variability modeling and realization approaches in the development of common assets that are extended and configured with different features. The result is usually a generalized and complex implementation that may hide important dependencies and design decisions. Therefore, whenever software engineers need to extend the software product line assets, there may be dependencies in the code that, if not made explicit and adequately managed, can lead to feature interference. Feature interference happens when a combined set of features that extend a shared piece of code fail to behave as expected. Our experience in the development of YANCEES, a highly extensible and configurable publish/subscribe infrastructure product line, shows that the main sources of feature interference in this domain are the inadequate documentation and management of software dependencies. In this paper, we discuss those issues in detail, presenting the strategies adopted to manage them. Our approach employs a contextual plug-in framework that, through the explicit annotation and management of dependencies in the software product line assets, better supports software engineers in their extension and configuration.

Keywords: Feature interaction, software product lines, product line documentation, contextual component frameworks, software dependencies, and publish/subscribe infrastructures.
Introduction

The need for faster software development cycles that meet the constantly evolving requirements of a problem domain has driven industrial and academic research in the area of Software Product Lines (SPL for short). The goal of SPL engineering is "to capitalize on commonality and manage variability in order to reduce the time, effort, cost and complexity of creating and maintaining a product line of similar software
systems" [1]. In SPLs, reuse of commonality allows the reduction of the costs of producing similar software systems, while variability permits the customization of software assets to fit different requirements of the problem domain [2]. SPLs are usually designed using the concept of features and variation points [3]. Variation points represent the locations in the software that enable choices, while features represent user-observable units of variability associated with one or more of those points. In many industrial settings, commonality is implemented in the form of large pieces of software such as object-oriented frameworks, whereas features implement new behavior by direct extension and source code configuration [4]. In such approaches, features can interact in a positive way, by the combined extension of the common code in different variation points. They can also interact in a negative way, by defining behaviors that are incompatible with other features installed in the same infrastructure. In the latter case, the interaction is also called interference. A feature interference occurs when the addition of a new feature affects or modifies the operation of the system in an unpredicted way [5]. In fact, many nontrivial feature interferences in software are a result of conflicting assumptions about service operations and system capabilities that are not explicitly documented or exposed to the programmers of those features [6]. SPLs are no exception. The dimensions of extensibility in SPLs are not always orthogonal, and their dependencies are not always explicit. As a consequence, whenever software engineers need to extend an SPL with new features, there may be dependencies within and among variation points and features that, if not documented and managed, can lead to feature interference.

Our experience in the development of YANCEES [7], a highly extensible and configurable publish/subscribe SPL, makes evident some of those issues, in particular issues associated with the lack of management and documentation of fundamental, configuration-specific, and incidental dependencies, and of emerging system properties. Those dependencies are further explained as follows.

Fundamental (or problem domain) dependencies encompass the logical relationships that are common to all software product line members. For example, in the publish/subscribe domain, the process of publication of events, followed by their routing based on subscription expressions, and the subsequent notification to subscribers, defines a common behavior shared by all publish/subscribe infrastructures. The fundamental dependencies that involve this common behavior restrict variability in the problem domain and create configuration rules that must be obeyed in the extension and configuration of an SPL. Moreover, they restrict some configuration-specific dependencies.

Configuration-specific dependencies include the compatibility relations between features that extend or refine the common SPL behavior in the implementation of the different SPL members. Those dependencies are expressed in the form of inter-feature relations such as "compatible", "incompatible", "optional", "exclusive", "alternative", and others. For example, 'content-based' filtering, a 'tuple-based' message format and 'push' notifications define a compatible combination of features present in many content-based publish/subscribe infrastructures, whereas 'content-based' filtering and events represented as 'objects' are usually incompatible.
Incidental (or technological) dependencies are a consequence of the variability realization approaches employed in the construction of the SPL. Examples of such
approaches include design patterns, parameterized classes, aspects, mixins and others [8]. The benefits provided by each of these approaches come with extra costs: an increase in the overall software complexity and the need to comply with their configuration and extension rules. For example, the use of software patterns such as Strategy requires the proper implementation of interfaces and of the selection criteria. Moreover, these approaches usually introduce indirections in the code that, when applied in combination, may hinder its legibility and extension ([9] pp. 295).

Dependencies on emerging system properties represent assumptions about system-wide guarantees, for example: security, guaranteed event delivery, total order of events, and other properties that depend on different configuration parameters of the infrastructure. These system-wide properties may vary due to complex dependencies between the system components and parameters. In YANCEES, for example, the total order of events is a function of the distribution of the system. In peer-to-peer settings the total order of events is not preserved, whereas in centralized settings it is assured by the infrastructure.

The inherent variability in software product lines, together with the need to cope with fundamental, configuration-specific, and incidental dependencies and with emerging system properties, not only creates a configuration management problem, but also hinders the reuse and the proper extension of software product lines. It makes it possible for changes in different parts of the software to break implicit system assumptions, leading to feature interference. In fact, our experience in the design, implementation and use of YANCEES shows that software engineers lack appropriate knowledge of those dependencies and assumptions, what we call the variability context: the information necessary to understand, extend and customize the software product line. They also lack automated support in the form of tools and mechanisms that enforce those relations in the SPL, providing runtime and configuration-time guarantees.

This paper describes in detail those issues in the design and development of YANCEES, a publish/subscribe infrastructure SPL, and discusses the strategies used in supporting software engineers in extending and configuring this infrastructure. In particular, we argue for the use of dependency models in both design and implementation, with the elucidation and enforcement of those dependencies in the product line code. Our approach represents dependencies in the code artifacts, allowing their automatic enforcement at both load time and run time, while supporting software engineers in extending and configuring the product line by improving their understanding of the hidden dependencies and configuration rules of the software.

The contributions of this paper are on different fronts. From a feature interaction perspective, we provide a case study that shows how the lack of documentation and enforcement of fundamental, configuration-specific, and incidental dependencies and of emerging system properties can interfere with feature reuse and the extension of SPLs. From a feature interaction research perspective, we show how the explicit documentation of those dependencies in the SPL, combined with the use of contextual component frameworks and configuration managers, can help in the detection and prevention of feature interference.

This paper is organized as follows: Section 1 presents the technological background of our approach.
Section 2 discusses our experience in the design,
implementation and extension of YANCEES. Section 3 discusses our approach to managing those issues. Section 4 discusses some related work, and we conclude in Section 5.
1. Background

The work presented in this paper relies on concepts from the areas of SPL variability modeling and software component frameworks. We introduce these concepts here.

1.1. Variability modeling

Variability modeling approaches provide a notation for representing choices and constraints (dependencies and rules) involving units of variability (features, variants, components) in SPLs. First-generation modeling languages such as FODA [10] represent variability in terms of features and their compatibilities (alternative, multiple, optional and mandatory) and incompatibilities (exclusive or excludes) around predefined variation points. Researchers soon realized the importance of representing other kinds of dependencies in these models, proposing different extensions. For example, Ferber et al. [11] introduce the notions of "intentional", "environmental", and "usage" dependencies, whereas Lee and Kang [12] propose the representation of runtime feature interactions such as "activation" and "modification" dependencies. Those models, however, suffer from a fundamental problem: the lack of representation of dependencies as first-class entities, and of their traceability to implementation concerns.

The inadequate management and representation of dependencies in SPLs [13] motivated the development of second-generation variability modeling approaches [14]. These approaches represent dependencies as first-class entities, and support variability management through the use of constraint checkers. Together, they provide software engineers with an overview of the variability in the system, supporting their navigation through the space of valid product configurations, and the derivation of individual product members that meet a valid set of quality and feature attributes. An example of such a variability model and environment is COVAMOF [15], which also represents overall system quality attributes and tacit knowledge in the documentation of more complicated relations between features. While very useful in the representation of system variability, commonality, and the interaction between features, these models fail to: (1) support source code-level maintenance and evolution, and (2) support the runtime configuration management of features [16].

In this paper, we propose an approach that, by integrating design models into the code, allows software engineers to better understand the underlying assumptions in the code implementation, while allowing the infrastructure to automatically enforce those relations, preventing feature interference.
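The kind of dependency a feature model can state, and that a constraint checker can enforce over a chosen configuration, can be illustrated with a small sketch. The checker and the rule encoding below are invented for illustration (they are not FODA or COVAMOF notation); the feature names mirror the publish/subscribe examples used in the introduction.

```java
import java.util.List;
import java.util.Set;

// Two classic feature-model dependencies: "requires" and "excludes".
enum DependencyKind { REQUIRES, EXCLUDES }

record Dependency(String feature, String other, DependencyKind kind) {
    boolean satisfiedBy(Set<String> configuration) {
        if (!configuration.contains(feature)) return true; // rule applies only if the feature is selected
        boolean otherSelected = configuration.contains(other);
        return kind == DependencyKind.REQUIRES ? otherSelected : !otherSelected;
    }
}

class ConfigurationChecker {
    // A product configuration is valid when every declared dependency holds.
    static boolean valid(Set<String> configuration, List<Dependency> dependencies) {
        return dependencies.stream().allMatch(d -> d.satisfiedBy(configuration));
    }

    public static void main(String[] args) {
        var rules = List.of(
                new Dependency("content-based-filtering", "tuple-events", DependencyKind.REQUIRES),
                new Dependency("content-based-filtering", "object-events", DependencyKind.EXCLUDES));
        System.out.println(valid(Set.of("content-based-filtering", "tuple-events"), rules));  // true
        System.out.println(valid(Set.of("content-based-filtering", "object-events"), rules)); // false
    }
}
```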
1.2. Contextual software component frameworks

Component models define the basic encapsulation, communication and composition rules that support the development of component-based software. Contextual Component Frameworks (CCF) [17] implement these models and support the automatic creation and composition of objects based on user-defined properties (or context). A CCF uses the inversion of control (IoC) and dependency injection principles [18] to transparently provide user-requested services and properties to the components in the system. Dependency injection is a form of IoC that removes explicit dependence on container APIs, separating those concerns from the component implementation. Property-based contextual composition allows software engineers to select environmental characteristics and crosscutting concerns required by a component. This is achieved with the use of properties, usually expressed in the code or in associated manifest configuration files. Common properties include transactional communication, persistency, security and other crosscutting concerns. Examples of well-known component frameworks include the CORBA Component Model, COM/ActiveX and Enterprise JavaBeans.

YANCEES uses this approach to separate configuration management concerns from feature implementations and to support software engineers in the extension and configuration of software product lines. It explicitly represents variation points and inter-feature dependencies in the software source code, with the specific goal of supporting variability and preventing feature interaction caused by the lack of representation and enforcement of dependencies.
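A minimal sketch of property-based contextual composition in the spirit described above follows. The annotation, container and service names are invented for illustration; this is neither YANCEES code nor the EJB API. The component declares the property it needs, and a tiny container injects an implementation chosen from its registry, instead of the component looking it up itself.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.Map;

// Hypothetical property annotation: "inject a collaborator providing this property".
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface Requires { String property(); }

interface Persistence { void store(String event); }

class InMemoryPersistence implements Persistence {
    public void store(String event) { /* keep events in memory only */ }
}

// The component never instantiates its dependency; the container injects it (IoC).
class EventLogger {
    @Requires(property = "persistence")
    Persistence persistence;
}

class TinyContainer {
    private final Map<String, Object> registry = Map.of("persistence", new InMemoryPersistence());

    // Inspect the component's declared properties and inject matching services.
    <T> T assemble(T component) throws IllegalAccessException {
        for (Field field : component.getClass().getDeclaredFields()) {
            Requires required = field.getAnnotation(Requires.class);
            if (required != null) {
                field.setAccessible(true);
                field.set(component, registry.get(required.property()));
            }
        }
        return component;
    }
}
```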
2. Case study: YANCEES, a publish/subscribe product line

This section describes our experience in the design and implementation of YANCEES, a highly configurable and extensible publish/subscribe SPL, and discusses the main variability management issues faced. Publish/subscribe infrastructures implement a distributed version of the Observer design pattern [19], as shown in the top level of Figure 1. In its initial stage the pattern is very simple: it provides an interface (IPubSub) which allows publishers (IPublisher) to send events to the infrastructure, whereas subscribers express interest in those events through the use of subscriptions, using the subscribe(Subscription exp) command. A subscription is a logical expression over the content or order of the events. When a subscription is satisfied, a notification with the message matching this expression is sent to the subscribers (ISubscriber) through a call to the notify(Event: evt) command in their interfaces. This pattern is used in the implementation of different publish/subscribe infrastructures in different domains. For a survey of existing pub/sub systems please refer to [20].

The majority of publish/subscribe research and commercial infrastructures fall short of mechanisms that allow their customization and configuration to comply with the evolving requirements demanded by event-driven applications [20]. Motivated by this fact, we developed a flexible publish/subscribe infrastructure called YANCEES (Yet ANother Configurable and Extensible Event Service) [7] that allows the different aspects of the publish/subscribe pattern to be extended and customized.
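The interfaces named in this description can be written down directly. The Java rendering below follows the operation signatures quoted in the text (publish, subscribe, unsubscribe, notify); the simple in-memory implementation and the modelling of a subscription as a predicate over events are assumptions added for illustration, not YANCEES code.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Predicate;

// Events are left abstract; a subscription is, per the text, a logical
// expression over the content (or order) of events.
interface Event {}
interface Subscription extends Predicate<Event> {}

interface ISubscriber { void notify(Event event); }

interface IPubSub {
    void publish(Event event);
    void subscribe(Subscription sub, ISubscriber subscriber);
    void unsubscribe(Subscription sub);
}

// Minimal, centralized illustration of the routing step: match each published
// event against the stored subscriptions and notify the interested subscribers.
class SimplePubSub implements IPubSub {
    private final Map<Subscription, List<ISubscriber>> subscriptions = new ConcurrentHashMap<>();

    public void publish(Event event) {
        subscriptions.forEach((sub, subscribers) -> {
            if (sub.test(event)) subscribers.forEach(s -> s.notify(event));
        });
    }

    public void subscribe(Subscription sub, ISubscriber subscriber) {
        subscriptions.computeIfAbsent(sub, k -> new CopyOnWriteArrayList<>()).add(subscriber);
    }

    public void unsubscribe(Subscription sub) {
        subscriptions.remove(sub);
    }
}
```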
In the coming sections, we briefly present the main elements of the YANCEES design and implementation.

2.1. YANCEES design and implementation

Different principles and strategies were applied in the design and implementation of YANCEES. These are: the use of a micro-kernel architecture, supporting variability on different publish/subscribe dimensions; the application of different variability mechanisms such as abstract classes and interfaces, extensible languages, dynamic and static plug-ins, and generic events; and the wide use of static and dynamic configuration managers. These principles and strategies are further discussed as follows.

Table 1. Publish/subscribe infrastructure variability dimensions and examples.

• Event model. Description: specifies how events are represented. Examples: tuple-based; object-based; record-based; others.
• Publication model. Description: permits the interception and filtering of events as soon as they are published, supporting the implementation of different features and global infrastructure policies. Examples: elimination of repeated events; persistency; publication to peers (through protocol plug-ins).
• Subscription model. Description: allows end-users to express their interest in subsets of events and the way they are combined and processed. Examples: filtering (content-based, topic-based, channel-based); advanced event correlation capabilities.
• Notification model. Description: specifies how subscribers are notified when subscriptions match published events. Examples: push; pull; both; others.
• Protocol model. Description: deals with necessary infrastructure interactions other than publish/subscribe. These are subdivided into interaction protocols (that mediate end-user interaction) and infrastructure protocols (that mediate the communication between infrastructure components). Examples: interaction protocols (mobility, security, authentication, advanced notification policies); infrastructure protocols (federation, replication, peer-to-peer integration).
Variability dimensions. Around a common generalized publish/subscribe micro-kernel, different variability dimensions were implemented in YANCEES, as listed in Table 1. The YANCEES variation points were selected according to the main publish/subscribe design concerns described by Rosenblum and Wolf's model [21], extended to include the notion of protocols, a design concern that captures the different kinds of infrastructure distribution strategies and other sorts of user interactions outside of the publication and subscription of events. Extensible languages and plug-ins. Publish/subscribe infrastructures have special requirements of dynamism, driven by their interactive nature. Subscriptions are dynamic in essence: they are posted and removed at runtime by their users, and are expressed in terms of commands in a subscription language. As a consequence, variability in this domain requires the simultaneous evolution of language and infrastructure capabilities. These requirements led us to choose extensible languages and plug-ins [22] as the main variability realization approach for the
subscription and notification models. In particular, YANCEES is implemented as a composition framework ([17], chapter 21.1) where component instances (in our case plug-ins) are created and combined at runtime in response to composition operators (subscription and notification commands in the user's posted subscriptions), with the help of parsers (or mediators). The extensible language is implemented in XML, with its grammar defined using the W3C XML Schema standard. Static plug-ins. Non-interactive characteristics are implemented by static plug-ins and filters, installed at load time (i.e., when the infrastructure is bootstrapped). The publication model, for example, is implemented as a Chain of Responsibility design pattern (see [23]), where filters (as static plug-ins) are composed into event processing queues that intercept the publication of events, implementing global system policies. Features that are shared by different variation points are implemented as static plug-ins, a.k.a. services.
Figure 1. Overview of the YANCEES core architecture, with its main components and interfaces. [Class diagram omitted: a PubSubFaçade implements IPubSub (publish, subscribe, unsubscribe) for publishers (IPublisher) and notifies subscribers through ISubscriber.notify; an ArchitectureManager builds the configuration and creates components; events flow through an EventQueue and adapters (IAdapter); Publish, Subscription, Notification and Protocol mediators (IMediator) parse subscriptions, query a PlugInRegistry and manage plug-ins (IStaticPlugin, IFilter with doFilter/addSuccessor, ISubscriptionPlugin, INotificationPlugin, IProtocolPlugin).]
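The publication chain can be pictured with the short sketch below. It assumes only the IFilter operations shown in Figure 1 (doFilter and addSuccessor) and reuses the Event placeholder from the earlier sketch; it is an illustration, not YANCEES source code.

```java
// Chain of Responsibility over publication filters (operations from Figure 1).
interface IFilter {
    void doFilter(Event event);          // intercept a published event
    void addSuccessor(IFilter filter);   // next filter in the publication chain
}

abstract class ChainedFilter implements IFilter {
    private IFilter successor;

    public void addSuccessor(IFilter filter) {
        this.successor = filter;
    }

    // Subclasses call this to hand the event to the rest of the chain.
    protected void forward(Event event) {
        if (successor != null) {
            successor.doFilter(event);
        }
    }
}

// A trivial filter that lets every event pass through unchanged.
class PassThroughFilter extends ChainedFilter {
    public void doFilter(Event event) {
        forward(event);
    }
}
```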
Generic events. The variability in the event model is supported by the use of object wrappers that can hold different content formats (attribute/value pairs, objects, XML files or plain text) under a generic interface (IEvent in Figure 1). Configuration managers and dynamic parsers. The final design decision is the implementation of variability management in the system itself, through configuration managers that install static plug-ins and filters, and mediators that allocate plug-ins at runtime. By applying those strategies, the original publish/subscribe design pattern was extended as presented in Figure 1, which shows the main YANCEES core
components and interfaces. Due to space limitations, other classes such as exceptions and auxiliary objects are not represented in Figure 1. The PublishMediator handles the publication of events, allowing the extension of this dimension through the use of filters (implementing the IFilter interface). The NotificationMediator utilizes notification plug-ins to implement different notification policies, whereas the SubscriptionMediator handles the interpretation of different subscription language expressions, allocating appropriate subscription plug-ins. The dynamic allocation and discovery of plug-ins at runtime is supported by the use of a PlugInRegistry component. After passing through the publication filters, events are placed on the internal EventQueue and/or sent to adapters (implementing the IAdapter interface) that allow the integration with existing pub/sub systems. The ArchitectureManager installs static and dynamic plug-ins in the infrastructure based on a configuration file describing the features and their implementation files. The YANCEES core, composed of all the mediators, queue, registry and interfaces presented in Figure 1, is about 6000 LOC of Java code. The plug-ins and extensions used in different projects comprise another 3500 LOC.

2.2. Extending and Configuring YANCEES

This section presents an example of how YANCEES can be extended and configured to support different application domains. In particular, we show how it was extended to support Impromptu [24], a peer-to-peer (P2P) file sharing tool. Impromptu provides an interface and a repository that allow users to share files in an ad-hoc peer-to-peer way. In Impromptu, events are used to monitor the activity of a local file repository on each peer, to signal the arrival or departure of peers in the network, and to synchronize the shared visualization of user interfaces across peers. The peer discovery protocol is implemented using the IETF multicast DNS (mDNS) protocol. Every Impromptu peer executes a local YANCEES instance that is connected to the YANCEES instances of every other peer in the network, thus forming a virtual P2P event bus. In this configuration, YANCEES provides both local and global event-based communication. Locally, it decouples the file repository from the GUI. Globally, it allows the monitoring of events from the file repositories of other peers, keeping their visualizations synchronized. To support Impromptu, YANCEES was extended and configured with plug-ins, filters and a tuple-based event format, as illustrated in Figure 2. In this example, events are represented as attribute/value pairs of variable length and number. This is achieved by extending the GenericEvent interface. The subscription language is also extended to support two kinds of filtering: content-based filtering, which filters events based on the content of all their fields; and topic-based filtering, which allows the fast switching of events based on a single field. It also supports event sequence detection that operates over each one of those filters. The subscription language extension requires two steps: (1) the implementation of the ISubscriptionPlugin interface, and (2) the extension of the XML Schema of the subscription language for every new command. The notification policy is push, extended in the same way as the subscription plug-ins, i.e. by implementing the INotificationPlugin interface and extending the notification language.
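A notification plug-in extension can be sketched as follows. The sendNotification signature is taken from Figure 1, and the Event and ISubscriber types are the placeholders from the earlier sketch; the rest (a push policy that immediately calls back the subscriber) is an assumed, illustrative implementation rather than the actual Impromptu plug-in.

```java
// Illustrative push notification plug-in; sendNotification follows Figure 1.
interface INotificationPlugin {
    void sendNotification(Event evt);
}

class PushNotificationPlugin implements INotificationPlugin {
    private final ISubscriber subscriber;   // the subscriber this plug-in serves

    PushNotificationPlugin(ISubscriber subscriber) {
        this.subscriber = subscriber;
    }

    public void sendNotification(Event evt) {
        subscriber.notify(evt);             // push: deliver as soon as it matches
    }
}
```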
The protocol model supports the mDNS peer discovery, detecting the arrival and departure of YANCEES instances in the local network. It
also supports the publication of events between YANCEES peers, creating a virtual bus, with the help of the PeerPublisher plug-in that publishes events to and receives events from other peers. Events from other peers are placed directly into the event queue, skipping the publication model filtering. Finally, the publication model is extended with two filters: one that removes repeated events as they are published, thus saving network bandwidth; and another that forwards the events to the protocol plug-in. Publication filters must implement the IFilter interface. These extensions are put together with the help of the ArchitectureManager, which assembles a valid infrastructure based on a configuration file that defines the feature names, their implementation files, and the variability dimension they extend.
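Continuing the filter sketch given earlier, the Impromptu publication chain could be assembled roughly as follows. The class and method names (including IProtocolPlugin.handle) are hypothetical assumptions made for the example, and the wiring would in practice be derived by the ArchitectureManager from the configuration file; the important point is the ordering constraint between the two filters, which is discussed further in section 2.3.

```java
interface IProtocolPlugin { void handle(Event evt); }   // assumed signature

// Drops duplicates; assumes Event defines content-based equality.
class RepeatedEventsFilter extends ChainedFilter {
    private final java.util.Set<Event> seen = new java.util.HashSet<>();
    public void doFilter(Event event) {
        if (seen.add(event)) {
            forward(event);                 // only unseen events continue
        }
    }
}

// Forwards events to the protocol plug-in that publishes to other peers.
class SendToPeersFilter extends ChainedFilter {
    private final IProtocolPlugin peerPublisher;
    SendToPeersFilter(IProtocolPlugin peerPublisher) { this.peerPublisher = peerPublisher; }
    public void doFilter(Event event) {
        peerPublisher.handle(event);        // publish to the virtual P2P bus
        forward(event);
    }
}

class ImpromptuPublicationChain {
    static IFilter build(IProtocolPlugin peerPublisher) {
        ChainedFilter duplicates = new RepeatedEventsFilter();
        ChainedFilter toPeers = new SendToPeersFilter(peerPublisher);
        // Reversing this order would resend duplicate events to all peers.
        duplicates.addSuccessor(toPeers);
        return duplicates;
    }
}
```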
Figure 2. YANCEES configuration with the functionality required by Impromptu. [Diagram omitted: publishers and mDNS notifications enter the publication model, whose filters remove repeated events and send events to peers; the protocol model hosts the mDNS and PeerPublisher plug-ins that exchange events with other YANCEES instances; subscriptions are dynamically built from content filters, topic filters and sequence detection; notifications are pushed to subscribers; mediators and parsers connect the models through the event queue.]
The generality and variability approaches employed in the design and implementation of YANCEES, while providing the required flexibility, resulted in different issues that lead to feature interference. Those issues are discussed in the following sections.

2.3. Feature interference in the YANCEES variability model

Fundamental (or problem domain) dependencies. In the absence of appropriate documentation, software engineers may assume that certain variable characteristics of the infrastructure are constant. For example, plug-ins and filters may be implemented with specific timing and event formats in mind. A change in the event representation, for example from variable attribute/value pairs to fixed records, can completely invalidate the subscription language ContentFilter, the SequenceDetector, or even the input filters and protocol plug-ins of the system in Figure 2. Configuration-specific dependencies. Some features in YANCEES have their functionality implemented through the integration of different components spanning more than one variation point. In the example of Figure 2, the SendToPeers filter will only work properly if the PeerPublisher and mDNS plug-ins are both installed in the protocol variation point. This characteristic creates a dependency between these features. Moreover, changes in any of those components due to natural software evolution may invalidate the implementation of the whole feature. Incidental (or technological) dependencies. Each variability realization approach introduces specific configuration rules which, if not accounted for, can lead to interference. In the example of Figure 2, the accidental inversion of the order of
the SendToPeers and RepeatedEventsFilter plug-ins would result in the erroneous publication of repeated events to all peers, interfering with the overall system performance. Dependencies on emerging system properties. Implicit assumptions about existing system attributes also permeate the implementation of features in our model. The order of events, for example, is a function of the distribution and of the protocol algorithms used. In a centralized setting, event order is usually guaranteed, whereas in distributed settings such as this P2P model, events can arrive earlier or later than others (coming from the PeerPublisher plug-in, for example), invalidating the SequenceDetector subscription command (‘Seq.’ in Figure 2). Those assumptions may also directly impact the behavior of the RepeatedEventsFilter. Generality issues. One of the main strategies of reuse in YANCEES is the implementation of a generalized common core. The use of generic interfaces throughout the system permits specific extensions to be developed while the common pub/sub process is preserved and reused. This approach, however, has the disadvantage of hiding implicit dependencies and assumptions. For example, the filter interface only prescribes a doFilter(IEvent: evt) method that must be implemented by all filter components. It does not prescribe any timing or control dependencies with respect to the other filters installed together in the publication model, nor does it explicitly represent environmental assumptions such as the impact a filter may have on other parts of the system if events are removed, modified or added by this component. The same is true for the subscription and notification models, where plug-ins implement generic interfaces that depend on the IEvent generic event representation. As a result, syntactically sound expressions can be incompatible with the current system configuration with respect to event order, format or timing.
3. Managing feature interaction in YANCEES

In order to address and prevent the different kinds of feature interaction discussed in the last section, software engineers need a way to better understand and enforce the fundamental, configuration-specific and incidental dependencies in the SPL without jeopardizing its flexibility. In YANCEES, these goals are achieved by the documentation and enforcement of design- and implementation-level dependencies in the code. This information is exposed to software engineers in context, i.e. at the variation points of the system, in a way that is both human and machine readable, supporting engineers in understanding these dependencies and allowing the infrastructure itself to enforce them at both load time and runtime.

3.1. Modeling dependencies

The first step in the management of feature interaction is the proper modeling of dependencies. One of the most important kinds of dependencies in SPLs are the fundamental dependencies. They usually become implicit in the common SPL assets and impact the other dependency types previously discussed. Because they are common to all product line members, these dependencies can be analyzed during the design of the system and further refined as the infrastructure is implemented.
We represent the fundamental dependencies of our model in Figure 3, using a notation similar to Ferber and Haag's approach [11]. Note that, in the diagram of Figure 3, we also introduce new dimensions (written in italics) to represent emerging properties of the SPL. In YANCEES, these properties are the timing, routing and resource concerns that change as a consequence of the parameters selected in different variation points. Besides the problem domain dependencies between variation points, dependencies also exist between features within the same variation point and across variation points (as exemplified in Figure 2). For lack of space, we do not provide a diagram for these dependencies in this paper.
Figure 3. A dependency model of the publish/subscribe main variation points and concerns. [Diagram omitted: it relates the Event, Publication, Subscription (content and order operators), Notification, User Protocol and Infrastructure Protocol variation points to the emerging Routing, Timing and Resource concerns; for example, the event format is a concern for the subscription operators, routing is a consequence of distribution, and timing is guaranteed by the protocols and affects the order operator.]
In the diagram of Figure 3, the event model and its representation directly impact the subscription and routing models. Timing is another crucial concern in the model. A change in the way YANCEES routers are federated may affect the timing guarantees of the system (guaranteed delivery or total order of events), which will impact the subscription language semantics. A change in the resource model may also affect the timing model. For example, in a hierarchical distributed system, the total order of events may not be feasible. Finally, the notification model is orthogonal to the other features. Since it manages only events, it can vary independently from the other features.

3.2. Representing and managing dependencies in the product line assets

Once the dependencies are identified, they must be formally incorporated in the implementation of the infrastructure. In YANCEES, this is achieved by the use of source code annotations in both the common code and the feature implementations. In particular, we use the Java annotations API (available since JDK 1.5), which permits the creation of custom properties associated with classes, methods and fields. Figure 4 illustrates, in general terms, the main strategies of our approach. The dependencies between the many variation points (emerging properties and fundamental dependencies) of the system are represented by annotations at the variation points of the code (arrows between variation points and properties in the picture).
Figure 4. Summary of the approach: managing dependencies with context annotations. [Diagram omitted: features (f1, f3, f5, f7) extend variation points (V1–V5); variation point annotations declare the emerging properties they provide and the variation points they fundamentally depend on, while feature annotations declare the properties and features they require, their compatibility, and the incidental configuration rules they satisfy or require; a VariabilityModel.java class references the variation points and encodes the emerging properties as functions of the installed features; the composition framework matches and enforces these declarations through composition filters.]
The variation point V1, for example, is extended with feature f1, which has specific emerging, fundamental, configuration-specific and incidental dependencies, as described in Figure 4. Those values are matched against the provided properties of the system. The composition framework, based on the annotations in the code, guarantees that the feature's requirements are met; in other words, that all required and provided dependencies are satisfied.

Table 2. Summary of the contextual annotations used in YANCEES.

Dependency | Annotations | Description
Fundamental | @DependsOnVP, @DependsOnProperty | Expresses a general dependency existing between variation points and between properties
Configuration-specific | @RequireFeature | Expresses a dependency on a specific feature in a variation point
Configuration-specific | @CompatibleWithFeature, @CompatibleWithProperty | Expresses compatibility with existing features and emerging properties
Traceability | @ImplementsFeature, @ImplementsVariationPoint | Marks classes that implement variation points and features in the code
Incidental | @ProvidedGuarantees, @RequiredGuarantees | Specifies the provided and required guarantees of the extension
In our implementation, the VariabilityModel class (top right of Figure 4) provides a single point of access to the dependency meta-model and the emerging system properties. The emerging properties are encoded in the variability model as rules based on the features installed in each variation point. The main variation points in the infrastructure have their implementation classes referenced in this model,
allowing navigation through their dependencies by following their annotations. The dependencies between the variation points are encoded in their respective classes using the annotations described in Table 2.

Table 3. Sample annotations for the AbstractFilter variation point and the SendToPeers input filter.

```java
//--- Indicates fundamental dependencies on other variation points ---
@DependsOnVP(VariabilityModel.VariationPoints.EVENT)
//--- Indicates what variation point this class implements ---
@ImplementsVariationPoint(VariabilityModel.VariationPoints.PUBLICATION)
public abstract class AbstractFilter implements FilterInterface {
    //--- Abstract implementation goes here ---
}

//--- Local configuration concerns ---
@ProvidedGuarantees(modifyEventContent = false, modifyEventOrder = false,
                    modifyEventType = false)
@RequiredGuarantees(intactEventContent = false, intactEventOrder = false,
                    intactEventType = false)
//--- Compatibility with features and emerging properties ---
@CompatibleWithFeature(
    variationPointType = VariabilityModel.VariationPoints.EVENT,
    featureClass = edu.uci.isr.yancees.YanceesEvent.class,
    featureName = "Event.AttributeValueEvent")
@CompatibleWithProperties(
    resource = VariabilityModel.Resource.ANY,
    routing = VariabilityModel.Routing.ANY,
    timing = VariabilityModel.Timing.ANY)
//--- Feature unique ID ---
@ImplementsFeature(name = "Publication.PublishToPeers", version = "1.0")
public class SendToPeersInputFilter extends AbstractFilter {
    //--- plug-in implementation ---
}
```
An example of the use of code annotations is presented in Table 3. In this example, two classes are shown: the AbstractFilter class, which implements the publication variation point, and the SendToPeersInputFilter, which implements the "Publication.PublishToPeers" feature in the publication model, as discussed in section 2.2. These classes are annotated with different tags expressing the local and global dependencies and configuration concerns of this feature. In particular, the annotations express the filter's intent to preserve the existing order, content and type of the events. They also express the guarantees this component requires from the publication variation point. This allows such extensions to require, in this example, that no other component in the chain of responsibility in which this filter participates will be able to modify the attributes and content of the events. Annotations also describe the component's compatibility with existing concerns and variation point extensions. The enforcement of the properties specified in the component annotations is guaranteed, at load time, by the YANCEES architecture manager, which checks for coherent sets of components using the dependency annotations and the information in the architecture configuration file. At runtime, the YANCEES composition
framework, with the help of the subscription and notification mediators, checks for compatibility dependencies and enforces the required and provided guarantees. To do so, the framework uses composition filters [25] to wrap plug-ins and data elements (events), controlling their access according to the properties provided and required by the filters. This approach has been used to annotate features and variation points in YANCEES, reducing the feature interference issues discussed in this paper and helping software engineers in the implementation of more robust extensions. One of the advantages of our approach is the ability software engineers have to narrow or broaden the compatibility of a component through more restrictive or broader compatibility declarations and, in doing so, to control the level of enforcement provided by the infrastructure.
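The paper does not list the declarations of the custom annotation types themselves. A minimal sketch of how one of them could be declared, and read at load time through the standard Java annotations and reflection APIs, is given below; the attribute names follow the usage in Table 3, but the declaration and the check are our assumptions, not the YANCEES implementation.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Assumed declaration of one of the Table 2 annotations.
@Retention(RetentionPolicy.RUNTIME)   // retained so the framework can read it at load time
@Target(ElementType.TYPE)
@interface ImplementsFeature {
    String name();
    String version();
}

// Sketch of a load-time check: reject a plug-in class that does not declare
// which feature it implements.
class FeatureIdCheck {
    static String requireFeatureId(Class<?> pluginClass) {
        ImplementsFeature f = pluginClass.getAnnotation(ImplementsFeature.class);
        if (f == null) {
            throw new IllegalStateException(
                pluginClass.getName() + " does not declare @ImplementsFeature");
        }
        return f.name() + " v" + f.version();
    }
}
```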
4. Related work

In the field of publish/subscribe infrastructures, different approaches are used to provide flexibility to software [20]. The management of feature interaction in this domain has been, to the best of our knowledge, ad hoc and not well described. In systems such as FACET [26], for example, the configuration management of features does not directly support software engineers in extending the system or in managing feature interaction induced by dependencies. In software product lines, variability management approaches, as described in the background section and surveyed in [14], strive to enforce configuration rules and dependencies. Unlike those approaches, we integrate both the dependency model and the runtime guarantees in the system itself. To this end, we employ a contextual framework that is part of the product line, bundling in the source code the information necessary for its extension and customization, together with runtime and load-time tools that enforce those constraints. The use of annotations to elucidate design concerns in the code has been studied as a way to separate and integrate design concerns [27]. Our model applies a similar approach to software product line concerns, with explicit runtime support for the enforcement of those dependencies. Finally, in the feature interaction community, Metzger et al. [28] propose an approach for systematically and semi-automatically deriving variant dependencies. Our work complements this approach by providing a practical way of incorporating those dependencies in the management of feature interaction.
5. Conclusions and future work

Dependencies restrict the variability of a system, and variability makes managing dependencies difficult. When improperly documented and managed, dependencies lead to feature interference. As a consequence, the benefits of variability require extra configuration management measures. The gains in software reuse and variability obtained by the use of software product lines usually come with an increase in software complexity. This
complexity is a function of the dependencies between the many variation points and of the variability realization approaches employed. Moreover, the use of software frameworks and other approaches that require direct access to source code usually suffers from a lack of documentation of these dependencies, with no automated support for users in managing those issues. As a consequence, these issues represent an important source of programming and design errors in software product line engineering, which can lead to feature interference. In this paper, we showed how those issues may lead to feature interference in publish/subscribe SPLs, and discussed the strategies used to manage feature interaction in YANCEES. In particular, our approach is based on a contextual component framework that uses source code annotations expressing dependencies and configuration rules to support software engineers in extending and configuring SPLs, preventing feature interaction. This approach allows for both static and runtime configuration of components, coping with the dynamism requirements of the publish/subscribe domain. Currently, the modeling of dependencies in our approach comes from the SPL engineers' expertise and the manual analysis of dependencies in the code. In the future, we plan to automate the generation of those dependencies through static analysis of the SPL source code, using approaches such as the one proposed in [28]. Future work also includes broadening the scope of our approach, applying it to other flexible software implementations, for example Apache Tomcat.
Acknowledgments. This research was supported by the U.S. National Science Foundation under grant numbers 0534775, 0205724 and 0326105, an IBM Eclipse Technology Exchange Grant, and by the Intel Corporation.
References
[1] C. Krueger, "Software Product Line Concepts: www.softwareproductlines.com/introduction/concepts.html," The Software Product Lines site, 2006.
[2] J. Coplien, D. Hoffman, and D. Weiss, "Commonality and Variability in Software Engineering," IEEE Software, vol. 15, 1998, pp. 37-45.
[3] I. Jacobson, M. Griss, and P. Jonsson, Software Reuse. Architecture, Process and Organization for Business Success: Addison-Wesley, 1997.
[4] J. Bosch, "Evolution and Composition of Reusable Assets in Product-Line Architectures: A Case Study," presented at the TC2 First Working IFIP Conference on Software Architecture (WICSA1), 1999.
[5] T. F. Bowen, F. S. Dworack, C. H. Chow, N. Griffeth, G. E. Herman, and Y.-J. Lin, "The feature interaction problem in telecommunications systems," presented at Software Engineering for Telecommunication Switching Systems, 1989.
[6] I. Zibman, C. Woolf, P. O'Reilly, L. Strickland, D. Willis, and J. Visser, "An architectural approach to minimizing feature interactions in telecommunications," IEEE/ACM Transactions on Networking, vol. 4, pp. 582-596, 1996.
[7] R. S. Silva Filho and D. Redmiles, "Striving for Versatility in Publish/Subscribe Infrastructures," presented at the 5th International Workshop on Software Engineering and Middleware (SEM 2005), Lisbon, Portugal, 2005.
[8] M. Svahnberg, J. v. Gurp, and J. Bosch, "A Taxonomy of Variability Realization Techniques," Software Practice and Experience, vol. 35, pp. 705-754, 2005.
[9] K. Czarnecki and U. W. Eisenecker, Generative Programming - Methods, Tools, and Applications: Addison-Wesley, 2000.
[10] K. C. Kang, S. G. Cohen, J. A. Hess, W. E. Novak, and A. S. Peterson, "Feature-Oriented Domain Analysis (FODA) Feasibility Study," CMU/SEI-90-TR-021, Carnegie Mellon Software Engineering Institute, Pittsburgh, PA, 1990.
[11] S. Ferber, J. Haag, and J. Savolainen, "Feature Interaction and Dependencies: Modeling Features for Reengineering a Legacy Product Line," Lecture Notes in Computer Science, Second International Conference on Software Product Lines (SPLC'02), vol. 2379, pp. 235-256, 2002.
[12] K. Lee and K. C. Kang, "Feature Dependency Analysis for Product Line Component Design," Lecture Notes in Computer Science, 8th International Conference on Software Reuse (ICSR'04), vol. 3107, pp. 69-85, 2004.
[13] S. Deelstra, M. Sinnema, J. Nijhuis, and J. Bosch, "Experiences in Software Product Families: Problems and Issues during Product Derivation," Proceedings of the Third Software Product Line Conference (SPLC 2004), Springer Lecture Notes in Computer Science, vol. 3154, pp. 165-182, 2004.
[14] M. Sinnema and S. Deelstra, "Classifying variability modeling techniques," Information and Software Technology, vol. 49, pp. 717-739, 2007.
[15] M. Sinnema, S. Deelstra, J. Nijhuis, and J. Bosch, "COVAMOF: A Framework for Modeling Variability in Software Product Families," Lecture Notes in Computer Science, vol. 3154, pp. 197-213, 2004.
[16] C. W. Krueger, "Software product line reuse in practice," presented at the 3rd IEEE Symposium on Application-Specific Systems and Software Engineering Technology, Richardson, TX, USA, 2000.
[17] C. Szyperski, Component Software: Beyond Object-Oriented Programming, 2nd edition: ACM Press, 2002.
[18] M. Fowler, "Inversion of Control Containers and the Dependency Injection Pattern," http://www.martinfowler.com/articles/injection.html, 2004.
[19] J. Dingel, D. Garlan, S. Jha, and D. Notkin, "Reasoning about implicit invocation," presented at the 6th International Symposium on the Foundations of Software Engineering (FSE-6), Lake Buena Vista, FL, USA, 1998.
[20] R. S. Silva Filho and D. F. Redmiles, "A Survey on Versatility for Publish/Subscribe Infrastructures," Technical Report UCI-ISR-05-8, Institute for Software Research, Irvine, CA, May 2005.
[21] D. S. Rosenblum and A. L. Wolf, "A Design Framework for Internet-Scale Event Observation and Notification," presented at the 6th European Software Engineering Conference / 5th ACM SIGSOFT Symposium on the Foundations of Software Engineering, Zurich, Switzerland, 1997.
[22] D. Birsan, "On Plug-ins and Extensible Architectures," ACM Queue, vol. 3, 2005, pp. 40-46.
[23] E. Gamma, R. Helm, R. Johnson, and J. Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software: Addison-Wesley, 1995.
[24] R. DePaula, X. Ding, P. Dourish, K. Nies, B. Pillet, D. Redmiles, J. Ren, J. Rode, and R. S. Silva Filho, "In the Eye of the Beholder: A Visualization-based Approach to Information System Security," International Journal of Human-Computer Studies, Special Issue on HCI Research in Privacy and Security, vol. 63, pp. 5-24, 2005.
[25] L. Bergmans and M. Aksit, "Composing Crosscutting Concerns Using Composition Filters," Communications of the ACM, vol. 44, pp. 51-58, 2001.
[26] F. Hunleth and R. K. Cytron, "Footprint and feature management using aspect-oriented programming techniques," presented at the Joint Conference on Languages, Compilers and Tools for Embedded Systems, Berlin, Germany, 2002.
[27] A. Bryant, A. Catton, K. D. Volder, and G. C. Murphy, "Explicit Programming," presented at the 1st International Conference on Aspect-Oriented Software Development, Enschede, The Netherlands, 2002.
[28] A. Metzger, S. Bühne, K. Lauenroth, and K. Pohl, "Considering Feature Interactions in Product Lines: Towards the Automatic Derivation of Dependencies between Product Variants," presented at Feature Interactions in Telecommunications and Software Systems VIII, Leicester, UK, 2005.
Towards Automated Resolution of Undesired Interactions Induced by Data Dependency

Teng TENG, Gang HUANG, Xingrun CHEN and Hong MEI
Key Laboratory of High Confidence Software Technologies, Ministry of Education, School of Electronics Engineering and Computer Science, Peking University, 100871 Beijing, China
(Corresponding author: Gang HUANG; Email: [email protected]; Tel: 86-10-62757670; Fax: 86-10-62751792)
Abstract. The application-specific mode of data sharing and usage, called data pragmatics, leads to many undesired interactions related to data dependency between applications. Our previous work focuses on the automatic detection of these undesired interactions in the context of J2EE (Java 2 Platform Enterprise Edition). In this paper, we propose a set of automated solutions based on middleware for this problem. Keywords. data dependency, middleware, feature interaction.
1. Introduction

For modern data-centric applications, if different applications are bound to the same data source, and if their objects are mapped to the same data tables, or to tables that have explicit or implicit relationships, such data-related interactions between subsystems are called data dependencies, as shown in Figure 1(a). When a certain application fails to manipulate the data in a correct way, other applications may not work as expected, and undesired interactions induced by data dependency (UIDD) occur. We argue that the occurrence of UIDD is due precisely to the application-specific data sharing and usage mode, called data pragmatics (DP), which reflects the data semantics in the application-specific context. From the angle of DP, if the DPs of different applications overlap, and if their data manipulations conflict with each other, UIDD will occur. Our previous paper [2] illustrated a realistic example of this type of interaction: JPS and JST. As shown in Figure 1(b), in this scenario, when application A creates a new instance of a persistent object ‘a’, the middleware actually inserts a new row into the common table ‘CT’. As the attributes of ‘a’ are not mapped to all of the columns of ‘CT’, the other columns, which are not associated with ‘a’, are filled with ‘NULL’. Then, if the attributes of persistent object ‘b’ of application B are also mapped to ‘CT’, and the primary key column set of ‘b’ is not the same as that of ‘a’, there exists a certain row which is used by ‘b’, and some of its primary key
columns for ‘b’ are filled with ‘NULL’, so the proper execution of B is interrupted and undesired interactions occur.
Figure 1. Undesired interactions caused by data dependency in data-centric applications. [(a) Data-centric application mode: applications app1…appn bound to a common data source DS; (b) UIDD sample: persistent objects ‘a’ and ‘b’ of applications A and B mapped to overlapping columns of the common table CT with different primary key column sets.]
Currently, application persistence is usually implemented by middleware. As a mediator that transforms application object invocations into data manipulations, middleware conceals the technical details of the DBMS from application objects, as well as the implementation and runtime details of application objects from the DBMS. Furthermore, the development, deployment and management of modern data-centric applications mainly depend on middleware instead of the DBMS. Since the DBMS and the DBA lose the global understanding and control of the whole system, the critical issue of UIDD between applications, which has been well resolved in the classical DBMS setting, emerges again. Therefore, it is natural and feasible to resolve UIDD in the middleware. Middleware is capable of collecting the data usage information of all applications, and according to the information collected it can detect the existence of UIDD. This has been addressed in our previous work [2]. But, to the best of our knowledge, how to eliminate UIDD remains unresolved. We therefore revisit the problem of [2] and propose a middleware-based approach to automatically eliminate UIDD.
2. Solutions of UIDD

Based on the middleware-based approach to collecting data usage information and discovering UIDD in our previous work [2], this paper focuses on how to eliminate UIDD.

2.1. Restraint Solution

Restraint is a simple solution for feature interactions, i.e., it avoids the situation in which one feature interferes with the other [1]. For the UIDD described above, we can adopt a restraint solution that automatically prevents some dangerous creation manipulations from executing. If the relative importance of A and B can be judged, the data manipulations of the less important application should be restrained. But how can the middleware judge which application is relatively more important? This is a vital challenge. This paper proposes five criteria:
1. Correctness of the mapping policies: the application whose primary key attributes are properly mapped to the primary key columns is considered more important, as this arrangement is not devastating for data consistency.
2. The referenced degree: the number of times a certain application is referenced by others reflects its influence on its counterparts, so this number can be a guideline for an application's relative importance.
3. Data access frequency: this reflects the data usage degree of the application; for an application, a higher access frequency means greater importance.
4. Deployment order: applications deployed later usually meet the newer requirements of users, so we consider an application deployed later to be more important than those deployed earlier.
5. Importance specified by users: the middleware should support manually setting an importance degree for applications that may need special protection.
Let us review the example of JPS and JST in [2]. Following the above five criteria, we can conclude that JPS is relatively more important than JST. The analysis result for JPS and JST in our illustration is listed in Table 1. The scoring process is: 1) an absolutely correct mapping strategy scores 100 points; 2) the referenced degree of an application is based on the number of times it is referenced by other applications, with one point per reference; 3) data access frequency is segmented into several levels by the frequency difference, and 50 points are added for each higher level of frequency; 4) every later-deployed application gets an extra 100 points; 5) the points specified by users are added. Finally, all points are summed up; a lower total means less importance. As shown in Table 1, JST is the application to be restrained.

2.2. Coordination Solution

The restraint approach usually sacrifices one side in order to avoid the interactions. Compared with the restraint approach, the coordination approach tries to find a solution or compromise that satisfies both of the conflicting sides [1]. For the UIDD existing in the object attribute mapping, while the middleware inserts values for A, it should also insert values with proper meaning for the columns used by B. These values may be meaningless for A, but they prevent the proper execution of B from being disturbed by UIDD. In the example of JPS and JST in [2], when JST creates a new instance of ‘AccountBean’, a new row is inserted into the target table ‘CustomerTB’. In this insertion, all 11 columns associated with ‘AccountBean’ should be filled with the attribute values, while the other columns should be filled with default unique values, such as ‘userID’, which is associated with the primary key attribute of ‘ContactInfoEJB’ in JPS. Then JPS will not be interrupted by ‘NULL’ values in columns that may be associated with the primary key attributes of the CMP EJBs of JPS.

2.3. Implementation Mechanism for Solutions

Both the restraint and coordination solutions proposed in this paper can be implemented by middleware through its mechanism for handling the data usage of applications. In current development practice, business object persistence is implemented by middleware. Middleware builds the binding and mapping between business objects and target tables at runtime according to user-specific persistence configuration files. It receives invocations from the business layer to objects, and transforms these
invocations into data manipulations on the corresponding target tables. So middleware can accurately control the execution of data manipulations. As object/relational mapping (ORM) middleware is a promising approach to enabling object-oriented programs (OOP) to access relational database management systems (RDBMS) in an object-oriented style, our work is illustrated with CMP EJB [4], a typical ORM technology widely used in building large-scale enterprise applications. This paper gives five criteria and extends PKUAS (a J2EE application server that provides a CMP EJB container) [3] to resolve UIDD automatically through ORM; this requires extending the Entity EJB Container, as shown in Figure 2(a).
Figure 2. Solution framework. [(a) Extended CMP EJB Container: a persistence coordinator mediates between the client, the CMP EJB container, the persistence manager and the database; (b) performance evaluation.]
The standard actions performed by the CMP EJB Container include: 1) waiting for incoming requests from clients; 2) delegating the request to the CMP EJB implementation for the pre-processing preferred by EJB developers; 3) waiting for the result of the CMP EJB pre-processing; 4) invoking the persistence manager for 5) accessing the database; 6) waiting for the result of the data manipulations; and 7) returning the final reply to the client. Based on these standard actions, we design and implement an extension of the container that modifies the normal execution of data manipulations, namely a persistence coordinator. Since the UIDD detected previously is recorded by the middleware [2], the coordinator can determine how to adapt the execution of the data manipulations of applications based on the detection result. In this paper, we implement two automatic solutions, restraint and coordination, which have been discussed previously. Under the direction of the policies of these two solutions, the coordinator can modify the semantics of the normal data manipulations that may result in data dependencies, and selectively execute them on CMP EJBs by controlling the actions performed by the container and the persistence manager, as shown in Figure 2(a).
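To make the role of the coordinator concrete, the following heavily simplified Java sketch shows the kind of decision it makes before a create (INSERT) manipulation is executed. All types and method names here are illustrative assumptions, not actual PKUAS or CMP EJB container APIs, and the combination of the two solutions in a single decision is only for the sake of a compact example.

```java
import java.util.List;
import java.util.Map;

// Illustrative decision logic of a persistence coordinator; not PKUAS code.
enum Resolution { EXECUTE, RESTRAIN, COORDINATE }

// Assumed registry of previously detected UIDD; all methods are placeholders.
interface UiddRegistry {
    boolean hasConflict(String application, String table);
    int importanceOf(String application);
    int importanceOfConflictingApp(String application, String table);
    List<String> columnsRequiredByOthers(String table);
    Object defaultUniqueValueFor(String table, String column);
}

class PersistenceCoordinator {
    private final UiddRegistry registry;

    PersistenceCoordinator(UiddRegistry registry) { this.registry = registry; }

    Resolution decide(String application, String table) {
        if (!registry.hasConflict(application, table)) {
            return Resolution.EXECUTE;                      // no UIDD: run unchanged
        }
        if (registry.importanceOf(application)
                < registry.importanceOfConflictingApp(application, table)) {
            return Resolution.RESTRAIN;                     // restraint solution (section 2.1)
        }
        return Resolution.COORDINATE;                       // coordination solution (section 2.2)
    }

    // Coordination: fill columns used by other applications with default unique
    // values instead of NULL before the row is inserted.
    void coordinate(Map<String, Object> row, String table) {
        for (String column : registry.columnsRequiredByOthers(table)) {
            row.putIfAbsent(column, registry.defaultUniqueValueFor(table, column));
        }
    }
}
```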
Table 1. Applying the criteria to JPS and JST.

App | Correctness | Referenced degree | Access frequency | Deployment order | Importance specified by users | Total score
JPS | 100 | 0 | 100 | 0 | 50 | 250
JST | 0 | 0 | 50 | 100 | 0 | 150
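The scoring procedure of section 2.1 can be written down directly. The following small Java sketch reproduces the totals in Table 1; the weights are those stated in the text, while the encoding of frequency levels as "levels above the lowest" is our reading of the description and is only illustrative.

```java
// Importance score following the five criteria of section 2.1.
class ImportanceScore {
    static int score(boolean correctMapping, int referencedTimes,
                     int frequencyLevelsAboveLowest, boolean deployedLater,
                     int userSpecifiedPoints) {
        int total = 0;
        total += correctMapping ? 100 : 0;          // 1) correctness of mapping policies
        total += referencedTimes;                   // 2) one point per reference
        total += 50 * frequencyLevelsAboveLowest;   // 3) 50 points per higher frequency level
        total += deployedLater ? 100 : 0;           // 4) later deployment
        total += userSpecifiedPoints;               // 5) importance specified by users
        return total;
    }

    public static void main(String[] args) {
        // Values taken from Table 1.
        System.out.println("JPS = " + score(true, 0, 2, false, 50));   // 250
        System.out.println("JST = " + score(false, 0, 1, true, 0));    // 150
    }
}
```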
To assess the actual effect (negative and positive) of our solutions, we measured the time overhead caused by the introduction of the restraint and coordination solutions. As shown in Figure 2(b), the ‘Res’ segment shows that the performance of the creation manipulation extended with the ‘Restraint’ mechanism is about 45.4% lower than
the normal one. The corresponding penalty of the creation manipulation is about 66.6% for the ‘Coordination’ mechanism, as shown in the ‘Coor’ segment. As predicted, due to the complicated run-time logic of these two mechanisms, they do impose a considerable cost on the execution of creation manipulations. But this cost is necessary to eliminate destructive UIDD and to guarantee the proper execution of applications. A positive result also indicates that these two mechanisms have no substantial impact on normal application execution if no UIDD exists: the ‘Normal’ segment shows that, in an environment without UIDD, the application performance penalty caused by the extended container is only 3.92%.
3. Future Work

There are some open issues to be addressed. Firstly, we have only considered the UIDD arising from middleware-managed object attribute mapping; other types of UIDD may exist and need to be detected and resolved. Secondly, our current solutions are patterns rather than a formal guide for developers.
Acknowledgments This effort is sponsored by the National Basic Research Program (973) of China under Grant No. 2002CB312003; the National Natural Science Foundation of China under Grant No. 90412011, 90612011 and 60403030.
References
[1] Dirk O. Keck and Paul J. Kuehn, "The Feature and Service Interaction Problem in Telecommunications Systems: A Survey," IEEE Transactions on Software Engineering, vol. 24, no. 10, 1998, pp. 779-796.
[2] Teng, T., Huang, G., Li, R., Zhao, D., and Mei, H., "Feature Interactions Induced by Data Dependencies among Entity Components," 8th International Conference on Feature Interactions in Telecommunications and Software Systems, Leicester, UK, 2005, pp. 252-269.
[3] Mei, H. and Huang, G., "PKUAS: An Architecture-based Reflective Component Operating Platform," invited paper, 10th IEEE International Workshop on Future Trends of Distributed Computing Systems, 2004, pp. 163-169.
[4] SUN Microsystems, Enterprise JavaBeans Specification, Version 2.0, 2001.
Policy Conflicts in Home Care Systems

Feng WANG and Kenneth J. TURNER
Computing Science and Mathematics, University of Stirling, Stirling, FK9 4LA, UK
[email protected], [email protected]

Abstract. Technology to support care at home is a promising alternative to traditional approaches. However, home care systems present significant technical challenges. For example, it is difficult to make such systems flexible, adaptable, and controllable by users. The authors have created a prototype system that uses policy-based management of home care services. Conflict detection and resolution for home care policies have been investigated. We identify three types of conflicts in policy-based home care systems: conflicts that result from apparently separate triggers, conflicts among policies of multiple stakeholders, and conflicts resulting from apparently unrelated actions. We systematically analyse the types of policy conflicts, and propose solutions to enhance our existing policy language and policy system to tackle these conflicts. The enhanced solutions are illustrated through examples.

Keywords: Policy-based management, policy conflict, home care system.
1. Introduction

Policies have emerged as a promising and more flexible alternative to features. Among the benefits of policies, they are much more user-oriented. However, policies are prone to conflicts, much as features are prone to interactions. This paper examines the issues of policy conflict in a novel application domain: home care. It is predicted that the growing percentage of older people will have an enormous impact on the demand for care services. This will exert huge pressure on the resources of existing care services [1]. Increasingly, providing care at home is seen as a promising alternative to traditional healthcare solutions. By making use of sensors, home networks and communications, older people can prolong independent living in their own homes. Remaining in a familiar environment while being taken care of also improves their quality of life. Their families and informal carers can also be relieved of the constant worry of whether those in care are well. The hardware to enable home care services, such as sensor technologies and communications, has matured in terms of cost and availability. Providing software solutions to deliver home care services, however, remains a challenging task. Most home care systems have been created in an ad hoc way. The systems are usually handcrafted and manually customised to the needs of individual scenarios. Because the solutions for home care services are hard-coded, even simple changes in services require an on-site visit by specially trained personnel. They are therefore costly to change. Proprietary, off-the-shelf telecare products suffer from similar problems. The functions of a product are typically fixed in special-purpose devices. Data from these devices cannot easily be accessed, and the devices work only with products from the
same company. Domestic health monitoring and home automation are currently very limited. The major issues in home care delivery are flexibility, adaptability, customisability and cost. We have successfully demonstrated that it is possible to use a policy-based system to integrate data from a variety of home sensors. Sensor data is used to support a variety of home automation and home care services [1]. Considerable research remains to realise the potential of this work and to demonstrate its value in supporting care of older people. One major issue is the detection and resolution of policy conflicts, which is the focus of this paper. Essentially, policies are rules that define the behaviour of a system. A typical policy consists of a trigger, a condition and an action. There are two basic types of policies: authorization policies and obligation policies. Authorization policies give a set of subjects the authority to carry out some actions upon a set of target objects; in negative form, they require subjects to refrain from doing so. Obligation policies specify that a set of subjects is responsible for taking some actions upon the target objects when a certain trigger event is received and some conditions are satisfied. When enforcing the policies, it is possible that multiple policies conflict with each other. We use the following general definition of policy conflict: two policies are said to conflict with each other if there is an inconsistency between them. The classification of conflicts by Moffet et al. [2] is discussed later. When applying policy-based management to home care systems, we observe that certain classes of policy interaction are unique to this domain:
- policy rules of multiple stakeholders may conflict
- policy actions resulting from apparently different triggers may interact according to changing situations
- policy actions may conflict over time.
The issues in a policy-based home care system are as follows. What types of policy interaction should be tackled inside the policy system? What types should be tackled outside the policy system? If a policy interaction is tackled by the policy system, how should it be handled? Based on the analysis of these problems, we propose a solution to tackle the above issues. Our solution is built on top of our previous work on the ACCENT project [3]. In order to explain our solution, we first introduce the previous work on ACCENT. The paper is organized as follows. Section 2 briefly describes the policy language for home care. Section 3 presents how policies are deployed and enforced inside the home care system. In section 4, policy conflict issues in home care systems are identified and analysed. A solution for resolving these conflicts is proposed in section 5. Related work is discussed in section 6. Finally, in section 7 we describe the current status of the work.
2. Policy Language for Home Care Systems

The policy language for care at home builds on our previous work for call control [3]. To allow use in different application domains, the policy language has two parts:
- the domain-independent core policy language defines the structure of policies (e.g. their combinations) and general attributes of policies (e.g. metadata)
- domain-specific extensions reflect specialisations for each kind of application.
A policy rule consists of three parts: trigger, condition and action. Although the core language defines some of these, specific elements are normally defined per domain. The core policy language is defined in [3], with its specialization for home care in [1]. A home care policy consists of a set of policy attributes and a set of policy rules. The attributes of a home care policy include the following:
- id uniquely identifies the policy in the policy store.
- description explains the purpose of the policy in plain text.
- owner indicates the entity that the policy belongs to. A notation similar to email addresses is used, e.g. [email protected].
- applies_to identifies the entities to which a policy applies (e.g. sensors, people, virtual entities such as computer programs). An email-like notation is used for entities: [email protected] means movement sensor 1 in the kitchen of house1. Omitting ‘1’ means any movement sensor in this kitchen.
- preference states how strongly the policy definer feels about the policy, and represents its modality. Examples are should and should not. Internally the value of this attribute is represented as an integer (which may be positive or negative).
- valid_from and valid_to specify the time period during which a policy is valid.
- profile is used to group the policies. A policy with an empty profile is always applicable, while one with a non-empty profile must match the user's current profile.
- enabled states whether the policy system should consider a policy or not.
- changed indicates the last-modified time of a policy.
For home care policy rules, a generic trigger device_in is used: its arguments indicate the trigger type and the sensor that caused it. A trigger sets environment variables to reflect the current state of the environment. A policy condition can make use of these variables to check whether it is eligible for execution. A generic action device_out is defined to instruct actuators to execute actions. This action has arguments to indicate the actuator, the action to be executed and the parameters of the action. In our home care system, a home care service is a rule-based application described by policy rules. An example policy for home care is shown in figure 1. Dementia patients often wander at night, and this worries their relatives. The policy in figure 1 states that, if movement is detected in the bedroom when it is night (10PM–7AM), remind the patient to go back to bed. The obvious closing tags are omitted in the XML definition.

    <policy_rule>
      device_in(arg1,arg2)
      <parameter>time in 22:00:00..07:00:00
      speak(arg1,arg2)

Figure 1. Night-Time Wandering Reminder Policy Example
Figure 2. Policy deployment and enforcement in a home care system. [Diagram omitted: at design time, policies pass from the policy deployment module through static analysis to the policy store; at run time, events from the event service trigger the policy enforcement module, which retrieves policies from the store, consults the dynamic analysis module, and sends commands back through the event service to actuators.]
3. Deployment and Enforcement of Policies

Figure 2 illustrates how policies are deployed and enforced in the policy system.

3.1. Policy Deployment

At design time, a policy is defined using editing tools such as a policy wizard. This policy is then passed to the policy deployment module (step 1). Since the policy may conflict with the existing policies in the policy store, it is passed to the static analysis module to check for conflicts (step 2). The static analysis module retrieves related policies from the policy store (step 3), performs conflict detection analysis, and returns the result to the policy deployment module (step 4). If there is a conflict, the user is notified. If there is no conflict, the policy is saved in the policy store (step 5).

3.2. Policy Enforcement

The policy enforcement module makes decisions on which actions should be executed and issues them for execution. At run time, a sensor sends out an event through the event service (step 1). The event is passed to the policy enforcement module (step 2). The policy enforcement module retrieves relevant policies from the policy store (step 3). For each retrieved policy, the policy enforcement module checks the trigger part and the condition part of the policy against the input triggers and the current environment setting. If the trigger matches and the policy conditions hold, the corresponding policy action is added to the set of potential actions. Once the policy enforcement module
finishes checking the relevant policies, there will be a set of potential actions. If there is more than one action in the set, this set is passed to the dynamic analysis module for detection and resolution of conflicts (step 4). Our existing policy system uses resolution policies to detect and resolve conflicts among actions. The structure of a resolution policy is very similar to that of an ordinary policy. The major difference is that in a resolution policy, the triggers are the actions of regular policies rather than ordinary triggers from sensors. A detailed description of resolution policies can be found in [4]. The dynamic analysis module retrieves resolution policies from the policy store (step 5), and applies these to the set of potential actions to select the most appropriate actions if there are conflicts. The selected actions are then passed back to the policy enforcement module (step 6). The policy enforcement module sends the actions to the event service (step 7). The event service acts as a broker, passing commands to actuators for execution (step 8).
A resolution policy supports two types of resolution actions: specific actions and generic actions. For specific actions, a resolution policy specifies what to do when there are conflicting actions. The outcome is not limited to the set of conflicting actions. For generic actions, the resolution is chosen from among the conflicting actions. This relies on comparing the attributes of conflicting policies. Borrowing from our previous work, the following generic actions are used in home care:
• apply_newer, apply_older: decide whether the newer or older policy is chosen.
• apply_one: chooses some action from the set of potential actions.
• apply_negative, apply_positive, apply_stronger, apply_weaker: decide the action by checking the value of policy preferences.
• apply_inferior, apply_superior: use the applies_to attribute to decide within one hierarchy whether the superior’s policy or the inferior’s policy is chosen.
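As an illustration of the dynamic analysis step, the following sketch (in Python; not the actual policy server code, and the class and attribute names are assumptions that merely mirror the policy attributes described earlier) shows how generic resolution actions such as apply_stronger, apply_newer and apply_positive could select one action from the set of potential actions by comparing attributes of the policies that proposed them.

  from dataclasses import dataclass
  from datetime import datetime

  @dataclass
  class Policy:
      owner: str
      preference: int      # signed strength, as in the preference attribute
      changed: datetime    # last-modified time, as in the changed attribute

  @dataclass
  class ProposedAction:
      policy: Policy       # the policy that proposed this action
      actuator: str
      operation: str

  # Generic resolution actions expressed as selection functions over the
  # set of potential actions (illustrative only).
  def apply_stronger(actions):
      # choose the action whose policy has the strongest (largest absolute) preference
      return max(actions, key=lambda a: abs(a.policy.preference))

  def apply_newer(actions):
      # choose the action proposed by the most recently changed policy
      return max(actions, key=lambda a: a.policy.changed)

  def apply_positive(actions):
      # choose an action whose policy preference is positive, if there is one
      positives = [a for a in actions if a.policy.preference > 0]
      return positives[0] if positives else None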
4. Policy Conflicts in Home Care Systems

4.1. Detection and Resolution of Policy Conflicts in General

To simplify our analysis, for now we only consider a policy with a single rule. A home care policy has the following elements: subject, target, trigger, condition, action, owner and preference. Much as for Ponder (http://ponder2.net), we consider two types of policies: authorisation (A) and obligation (O). For authorisation policies, the subject is authorised to take an action on a target object. For obligation policies, the subject is obliged to take action on the target when a trigger is received and the condition is satisfied. If we combine the type of policy with the modality, we get the following policy modes: positive authorisation (A+), negative authorisation (A-), positive obligation (O+), and negative obligation (O-).

4.1.1. Types of Policy Conflicts in Home Care

According to Moffett and Lupu’s classification [2] [5], there are two types of policy conflicts: modality conflicts and goal conflicts. Modality conflicts can be detected by looking at the policies alone. For modality conflicts, the following attributes (subject, target, action) of two policies overlap, but
the modes of the two policies contradict. The other attributes of the policies may differ, including trigger, condition and owner. There are three possible modality conflicts:
• A+, A-: one policy states that the subject is authorised to take some action, but the other policy prohibits the subject from performing this action.
• O+, O-: one policy states that the subject is obliged to take some action, but the other policy states that the subject is obliged not to take this action.
• A-, O+: one policy states that the subject is obliged to take some action, but the other policy states that the subject is not authorised to do this.
A+/O- is not a policy conflict, since no actions result from this combination: the subject is authorised to take some actions, but must refrain from taking these [2] [5].
Goal conflicts need application-specific information to be detected. Moffett et al. [2] identify four types of conflicts: conflicts of imperative goals, in particular for resources; conflicts of authority goals, including conflict of duty and conflict of interest; multiple managers; and self-management.
In the home care domain, we consider actuators as the targets of policies. A sensor, person or computer program is considered as a subject of policy-based management. Inside the policy system, there will be an agent for each such entity to act on its behalf. Due to the lack of computation power on sensors, our policy system employs a centralised server for enforcing policies. This implies that the policy server acts as an agent for all subjects of the policy system.
In home care systems, modality conflicts may arise in one owner’s policies due to overlapping situations. They may also arise between multiple owners’ policies. For goal conflicts, we are currently particularly interested in multiple managers and conflicts for resources, since many care services are represented as obligation policies. These services are triggered by events from sensors.

4.1.2. Detection of Conflicts: Statically vs. Dynamically

Modality conflicts can be detected at definition time or at run time. Detection is achieved by comparing the subject, target, action and preference of two policies. This indicates whether there are potential conflicts. For modality conflicts, if the situations of two policies are exactly the same, this potential conflict becomes definite; the conflict should be eliminated at definition time. If conflict depends on the evaluation of a run-time situation, this type of potential conflict may still need to be detected and resolved at run time.
Detecting goal conflicts needs application-specific information. This may use an explicit definition of conflict situations. It may also use automatic reasoning about the effects on goals if the semantics of these is properly specified. Our policy system supports the specification of conflicting situations by the user.
As seen in Section 3, our policy system supports both static analysis and dynamic analysis. Dynamic analysis is performed when a trigger from sensors is received and processed. It requires resources and time, and may slow down the decision making process of the policy system. Compared with dynamic analysis, static analysis is more desirable as it reduces the burden of dynamic conflict detection. However, not all conflicts can be detected by static analysis, especially potential conflicts.
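To make the static check concrete, here is a minimal sketch (Python; the dictionary representation of a policy and the wildcard overlap test are assumptions, not part of the actual system) that flags a potential modality conflict when two policies overlap in subject, target and action but have contradictory modes.

  # Policy modes: positive/negative authorisation and obligation
  A_POS, A_NEG, O_POS, O_NEG = "A+", "A-", "O+", "O-"

  # Mode pairs regarded as modality conflicts; A+/O- is excluded because
  # no action results from that combination.
  CONFLICTING_MODES = {(A_POS, A_NEG), (O_POS, O_NEG), (A_NEG, O_POS)}

  def overlaps(x, y):
      # crude overlap test: equal values, or a wildcard on either side
      return x == y or x == "*" or y == "*"

  def modality_conflict(p1, p2):
      # p1, p2: dicts with keys 'subject', 'target', 'action', 'mode'
      same_scope = all(overlaps(p1[k], p2[k]) for k in ("subject", "target", "action"))
      modes = (p1["mode"], p2["mode"])
      return same_scope and (modes in CONFLICTING_MODES
                             or modes[::-1] in CONFLICTING_MODES)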
4.1.3. Resolving Conflicts

Once a conflict is detected, the conflicting policies need to be resolved. This can be achieved by notifying the user and asking for a decision, or it can be done automatically. For automated resolution, our policy system supports both specific actions and generic actions. For policies that belong to a single owner, several policy attributes can be used to choose the resolution action. For example, the policy language supports choosing an action with the strongest preference. For policies that belong to different users in one organization, the ‘distance’ between a policy and the managed object can be used to choose the resolution action. In our policy language, this distance is derived from the applies_to attribute. Suppose one policy applies to @cs.stir.ac.uk and the other policy applies to
[email protected]. An apply_superior resolution action will choose the policy which applies to @cs.stir.ac.uk as the higher domain in the hierarchy.

4.2. Special Issues for Policy Conflicts in Home Care

In home care systems, besides the modality conflicts discussed above, we observe the following three special types of conflicts between policy actions. How to deal with these conflicts is the focus of this paper. Should the interaction be tackled inside the policy system, or should it be dealt with outside the policy system (e.g. by the actuators)? If the interaction is tackled by the policy server, how can we enhance our existing policy system to handle it? If the interaction is tackled outside the policy server, what functionalities are required from the external system?

4.2.1. Dependency among Situations

The actions resulting from different situations may conflict with each other, and situations may have interdependencies. The situation of obligation is common in a home setting. These situations rely on context information. As Dey points out [6], there are different levels of context information. High-level situations can be inferred from low-level sensor data, and the trigger from one sensor can be used to infer multiple situations.
As an example, a ‘door open’ sensor can detect the situation of the door being left open. Suppose a policy states that when the front door is left open, a reminder should be given to the resident to close the door. Combining the door sensor and the sensor in the door lock, a new situation can be detected: the door has been broken open. Suppose another policy states that, when the door is broken open, the resident should be advised to stay in his/her room to call for help. When a door is broken into, which action should be taken [7]?
In the above case, if we specify the two situations as two separate triggers, there will be no policy conflict and two actions will be executed. It does not make sense to remind the user to lock the door while at the same time stating that there has been a break-in. In this example, we can see that situations in home care can have logical relationships between them. One situation may be implied by another, or two situations may be implied by triggers originating from the same sensor. Besides logical relationships, there are other relationships such as containment. For example, one policy reacts to movement in the bedroom, while another reacts to movement in any room of the house. The situation of the second policy contains that of the first.
A policy system needs to be able to detect and resolve policy conflicts due to dependent situations. The issue is how to specify the triggers and conditions of the policies properly so that the conflicts are detected.

4.2.2. Multiple Stakeholders

In home care systems, policies may be defined by multiple organizations (e.g. a social work department, a surgery, a clinic). Their policies may conflict with each other. How can the conflicts of multiple stakeholders be handled? In fact, conflicts among multiple stakeholders are not much different from the case of a single stakeholder. The difference is in the resolution of the conflicts. If the resolution action is chosen from one of the conflicting actions, dealing with multiple stakeholders is an issue. When policies are defined by different organizations, there is no hierarchy among the stakeholders. Some solution is needed to decide how one stakeholder’s policies should be evaluated compared to other stakeholders’ policies.

4.2.3. Interactions between Actions over Time

The policy actions in a home care system take time to complete. It is therefore possible for new actions to conflict with ongoing actions. Suppose a medical reminder service can alert a patient to take medicine at certain times. This system will remind the patient again if there is no response to the first reminder. While the medical reminder is running, a more urgent situation such as a fire may be detected in the house. Following the fire alarm policy, the system will remind the user to leave the house immediately. How should these interactions be dealt with in a policy-based system? A fundamental question is whether they should be dealt with inside the policy system or not.
5. Enhancement to the Home Care Policy System

5.1. Tackling Dependencies among Situations

A situation is specified jointly by the trigger and the condition of a policy. To tackle dependencies among situations, we introduce a situation dependency graph (see Figure 3). The nodes on the left are the sensors. The nodes in the middle and on the right are situation nodes. A situation node receives inputs from the nodes on its left and evaluates its function to get a new value. If the value of a situation node has changed, it will send the update to other situation nodes that depend on it. Each situation node also supports queries for its current value. In Figure 3, situation B depends on sensor A, thus there is a directional link from A to B.
For a policy system to detect conflicts among dependent situations, the trigger part of a policy must specify all the sensors that are used to derive a situation. In the dependency graph, these sensors are the root nodes of the situation node. In the condition part of the policy, an environment variable with the name of the situation node is used. This environment variable is set up by the policy system to keep the most current value of the situation node. In Figure 3 for example, if a policy requires situation F, then the trigger part of the policy is the list {A, C, E}. The parameter of the condition is F.
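A minimal sketch of such a dependency graph follows (Python; the class and method names are illustrative assumptions, not the paper’s implementation): a situation node re-evaluates its function when an input changes, pushes the new value to dependent situations, and can report the root sensors from which it is derived, which form the trigger list that a policy using the situation must declare.

  class SituationNode:
      def __init__(self, name, inputs, evaluate):
          self.name = name
          self.inputs = inputs        # sensor names (str) or other SituationNodes
          self.evaluate = evaluate    # function of a dict of current values
          self.dependents = []
          self.value = None
          for node in inputs:
              if isinstance(node, SituationNode):
                  node.dependents.append(self)

      def update(self, values):
          # re-evaluate this situation and propagate any change downstream
          new_value = self.evaluate(values)
          if new_value != self.value:
              self.value = new_value
              values[self.name] = new_value
              for node in self.dependents:
                  node.update(values)

      def root_sensors(self):
          # the sensors this situation is ultimately derived from; they form
          # the trigger list of any policy that uses the situation
          roots = set()
          for node in self.inputs:
              if isinstance(node, SituationNode):
                  roots |= node.root_sensors()
              else:
                  roots.add(node)
          return roots

For the graph of Figure 3, root_sensors() of F would return {A, C, E}, matching the trigger list above.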
Figure 3. Situation Dependency Graph
The dependency graph is designed and maintained by the service designer. If new sensors or new software modules are installed to detect new situations, the dependency graph is updated to reflect the changes. This may affect the triggers and conditions available to the policy definer when using the policy editing tool. The example of ‘door open’ vs. ‘door broken into’ will show how this works. The ‘door open’ reminder policy has the following elements:
  applies_to: [door1]
  trigger: [door1:]
  condition: door_open eq true
  action: remind(reminder_bedroom, door left open)
  preference: 3
The ‘door broken into’ policy has the following elements:
  applies_to: [door1, lock2]
  trigger: [door1:open, lock2:broken]
  condition: broken_into eq true
  action: remind(reminder_bedroom, door broken into)
  preference: 5
In the above policies, the trigger of the first policy must originate from a specific door sensor, but does not require a particular type of trigger as this is implied by the condition. The trigger of the second policy originates from door1 with type open, and from lock2 with type broken. When the door is broken into, the policy server will receive triggers from the door sensor and the door lock sensor at the same time. It will retrieve the policies that apply to any of these sensors. Since the triggers and conditions of both policies are satisfied, there will be two actions. These two actions compete for the same reminder service, so there is a conflict. In our system, these conflict conditions are handled by the condition part of a resolution policy. Other parts of the resolution policy decide which action to choose. For example, a policy preference may be used in the generic action apply_stronger. In this case, the action from the second policy would be executed.
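The run-time behaviour of this example can be pictured with a small sketch (Python; the data layout is an assumption, not the actual policy server): the server keeps the policies whose applies_to mentions a sensor that fired, checks their trigger and condition parts, and hands the surviving actions to conflict resolution, where apply_stronger on the preference would pick the break-in reminder.

  def applicable_actions(policies, incoming, environment):
      # policies: dicts with 'applies_to', 'trigger', 'condition', 'action', 'preference'
      # incoming: sensor id -> trigger type, e.g. {"door1": "open", "lock2": "broken"}
      # environment: situation variables, e.g. {"door_open": True, "broken_into": True}
      selected = []
      for p in policies:
          # the policy must apply to at least one sensor that fired
          if not any(sensor in incoming for sensor in p["applies_to"]):
              continue
          # every declared trigger must match; an empty type matches any type
          trigger_ok = all(sensor in incoming and ttype in ("", incoming[sensor])
                           for sensor, ttype in p["trigger"])
          if trigger_ok and p["condition"](environment):
              selected.append(p)
      return selected

  door_open = {"applies_to": ["door1"], "trigger": [("door1", "")],
               "condition": lambda env: env.get("door_open"), "preference": 3,
               "action": "remind(reminder_bedroom, door left open)"}
  broken_into = {"applies_to": ["door1", "lock2"],
                 "trigger": [("door1", "open"), ("lock2", "broken")],
                 "condition": lambda env: env.get("broken_into"), "preference": 5,
                 "action": "remind(reminder_bedroom, door broken into)"}
  both = applicable_actions([door_open, broken_into],
                            {"door1": "open", "lock2": "broken"},
                            {"door_open": True, "broken_into": True})
  # both policies are selected, so a resolution policy must choose between them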
5.2. Resolving Conflicts of Multiple Stakeholders

Detecting conflicts among multiple stakeholders is the same as for a single stakeholder, except that the owners of the policies differ. We therefore add a new generic resolution action: apply_stakeholder. This relies on a partial ordering among stakeholders to choose one action among the conflicting ones, according to a predefined order among stakeholders. We believe that a total ordering of stakeholders in all situations is not sensible in home care. In a multiple organization setting, the ordering among stakeholders is not fixed and is valid only under certain conditions (e.g. when performing certain actions).

5.2.1. Specifying the Order among Stakeholders

We make use of resolution policies to specify the order among stakeholders. The condition of the ordering rule can compare the attributes of the action and the policy. The new action set_order has been introduced to specify the ordering among stakeholders. This action has three parameters: the two different owners of the policies, and the relational operator between the owners (gt, lt, eq, unspecified). gt means the first owner ranks higher than the second, with the other operators having the obvious interpretation. The example of Figure 4 shows that the warden has higher priority than the tenant for setting the TV volume at night time (from 23:00 to 7:00).

  <parameter>time in 23:00..07:00
  <parameter>action eq device_out(TV, setVolume)
  set_order(arg1,arg2, arg3)

Figure 4. Example of specifying Order among Stakeholders
Multiple orders among stakeholders can be specified under the same conditions. The relative orders among stakeholders are transitive. That is, if owner A is ranked higher than owner B and owner B is ranked higher than owner C, then owner A is ranked higher than owner C. This can help to simplify the specification of the ordering. The stakeholder parameters of the set_order action can also be roles. In the above example, a policy owner whose role is warden has higher priority than a policy owner whose role is tenant. Our policy system supports roles through policy variables, which can contain a single value or a list of values.
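A sketch of how the set_order information might be combined and then exploited by apply_stakeholder follows (Python; representing the ordering as owner pairs is an assumption): the pairs are closed under transitivity, and the action whose owner is not outranked by any other owner in the conflict set is chosen.

  def transitive_closure(orderings):
      # orderings: set of (higher, lower) owner pairs, one per set_order(..., gt)
      closure = set(orderings)
      changed = True
      while changed:
          changed = False
          for (a, b) in list(closure):
              for (c, d) in list(closure):
                  if b == c and (a, d) not in closure:
                      closure.add((a, d))
                      changed = True
      return closure

  def apply_stakeholder(actions, orderings):
      # actions: dicts with at least an 'owner' key; returns the chosen action,
      # or None if the ordering does not decide
      ranks = transitive_closure(orderings)
      for a in actions:
          outranked = any((b["owner"], a["owner"]) in ranks
                          for b in actions if b is not a)
          if not outranked:
              return a
      return None

  # Example: the warden outranks the tenant for night-time TV volume,
  # so the warden's setting is chosen.
  order = {("warden", "tenant")}
  chosen = apply_stakeholder([{"owner": "tenant", "volume": 30},
                              {"owner": "warden", "volume": 10}], order)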
5.2.2. Applying Order among Stakeholders

When conflicts are detected between policies of different owners, the resolution action apply_stakeholder can be used. Suppose there are conflicting policies that want to set the volume of the TV differently. The condition part of a resolution policy would check whether both their targets are the same TV, whether both actions are set_volume, and whether the volume levels are different. The policy preferences would indicate whether both policies are positive obligation policies. The action part of the resolution would use apply_stakeholder to ensure that the warden’s policy is respected.

5.3. Handling Interactions of Policy Actions over Time

To handle policy interactions over time inside the policy system, we need to be able to detect the conflicts and then resolve them. According to Moffett’s classification of policy conflicts, the interactions between a new action and the existing actions are goal conflicts (or resource conflicts) rather than modality conflicts. These are action conflicts, not policy conflicts. Detecting conflicts between new actions and existing running actions requires the policy system to keep a record of all running actions. To find out whether a new action conflicts with ongoing actions, application-specific information is needed. This can be achieved by asking the user to specify the specific conditions to check. Similarly, resolving conflicts can be achieved by asking the user to resolve the choice of action manually. Even after an action is stopped due to a conflict, how to deal with it later also needs to be specified. In addition, only certain kinds of actions may suffer from this kind of conflict. For simplicity, we therefore move the handling of action conflicts from the policy server to the actuators in our system. In the example given earlier, the alarm system would be an actuator and would use a priority-based approach to handle actions that conflict over time. A new alarm with a higher priority would stop an existing alarm with a lower priority.
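Since conflicts between actions over time are delegated to the actuators in this design, an alarm or reminder actuator could behave roughly as in the following sketch (Python; the priority values and method names are assumptions): a new announcement preempts the current one only if it carries a higher priority.

  class AlarmActuator:
      # Toy actuator that resolves conflicts over time by priority.

      def __init__(self):
          self.current = None              # (priority, message) of the running alarm

      def request(self, priority, message):
          if self.current is None or priority > self.current[0]:
              if self.current is not None:
                  self.stop()              # preempt the lower-priority alarm
              self.current = (priority, message)
              self.play(message)
          # otherwise the new request is ignored (it could also be queued)

      def play(self, message):
          print("ANNOUNCE:", message)

      def stop(self):
          print("STOP:", self.current[1])

  # A fire alarm (high priority) preempts a running medication reminder.
  actuator = AlarmActuator()
  actuator.request(2, "Time to take your medicine")
  actuator.request(9, "Fire detected - please leave the house")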
6. Related Work

Policy-based management has been applied in many areas, for example network and distributed systems management [8], telecommunications [4, 9], pervasive computing environments [10, 11, 12], semantic web services [13], and large evolving enterprises [14]. The present paper has used the taxonomy of policy conflicts in [2] that describes how policy conflicts can be detected and resolved at definition time or at run time. [5] reviews policies in distributed systems and proposes using meta-policies to detect and resolve conflicts in one organization. However, this work does not tackle the problem of conflicting policies among multiple stakeholders. In addition, the solution in [5] is mostly for static detection and resolution of policy conflicts. Dunlop et al. have proposed a solution to detect conflicts dynamically using deontic logic, but this tackles only modality conflicts [14]. [10, 11] propose a solution based on reasoning about the effects of actions to detect and resolve goal conflicts in pervasive computing environments. The aim is to guarantee the execution order of actions resulting from a single trigger. However, this work also does not deal with policy conflicts among multiple stakeholders.
7. Conclusion

The paper has focused on conflict issues when using policy-based management in home care systems. Three specialised kinds of conflict have been identified in home care: multiple stakeholder conflicts, policy conflicts due to dependency among
situations, and conflicts among actions over time. Based on the analysis of these conflicts, we have proposed a solution to enhance our existing policy system to handle these conflicts. This has included extensions to the policy language and the policy system. The enhancements also have implications for other parts of the home care system, including sensors and actuators. We plan to evaluate the approach through field trials in actual homes.
References
[1] F. Wang, L. S. Docherty, K. J. Turner, M. Kolberg and E. H. Magill. Service and Policies for Care At Home. Proc. Int. Conf. on Pervasive Computing Technologies for Healthcare, pp. 7.1–7.10, Nov. 2006.
[2] J. D. Moffett and M. Sloman. Policy Conflict Analysis in Distributed System Management. Organizational Computing, 4(1):1–22, 1994.
[3] K. J. Turner, S. Reiff-Marganiec, L. Blair, J. Pang, T. Gray, P. Perry and J. Ireland. Policy Support for Call Control. Computer Standards and Interfaces, 28(6):635–649, Jun. 2006.
[4] K. J. Turner and L. Blair. Policies and Conflicts in Call Control. Computer Networks, 51(2):496–514, Feb. 2007.
[5] E. C. Lupu and M. Sloman. Conflicts in Policy-Based Distributed Systems Management. IEEE Trans. on Software Engineering, 25(6), 1999.
[6] A. K. Dey, D. Salber and G. D. Abowd. A Context-based Infrastructure for Smart Environments. Proc. 1st Int. Workshop on Managing Interactions in Smart Environments, pp. 114–128, Dublin, Dec. 1999.
[7] M. Perry, A. Dowdall, L. Lines and K. Hone. Multimodal and ubiquitous computing systems: supporting contextual interaction for older users in the home. IEEE Trans. on IT in Biomedicine, 8(3):258–270, 2004.
[8] N. Damianou, N. Dulay, E. Lupu and M. Sloman. Ponder: A Language for Specifying Security and Management Policies for Distributed Systems. Technical Report, Imperial College, London, UK, 2000.
[9] Special Issue on Feature Interactions in Telecommunications Systems. IEEE Communications Magazine, 31(8), 1993.
[10] C. Shankar, A. Ranganathan and R. Campbell. An ECA-P Policy-based Framework for Managing Ubiquitous Computing Environments. Proc. 2nd Int. Conf. on Mobile and Ubiquitous Systems, 2005.
[11] C. Shankar and R. Campbell. Ordering Management Actions in Pervasive Systems using Specification-enhanced Policies. Proc. 4th Int. Conf. on Pervasive Computing and Communications, Pisa, Mar. 2006.
[12] L. Kagal, T. Finin and A. Joshi. A Policy Language for Pervasive Systems. Proc. 4th Int. Workshop on Policies for Distributed Systems and Networks, Lake Como, Jun. 2003.
[13] A. Uszok, J. Bradshaw, R. Jeffers, M. Johnson, A. Tate, J. Dalton and S. Aitken. KAoS Policy Management for Semantic Web Services. Intelligent Systems, 19(4):32–41, 2004.
[14] N. Dunlop et al. Dynamic Conflict Detection in Policy-Based Management Systems. Proc. EDOC ’02, 2002.
Conflict Detection in Call Control Using First-Order Logic Model Checking

Ahmed F. LAYOUNI 1, Luigi LOGRIPPO 1, Kenneth J. TURNER 2

1 Université du Québec en Outaouais, Département d’informatique et ingénierie, Gatineau, QC, Canada J8X 3X7 (Email: laya01 | luigi @uqo.ca)
2 University of Stirling, Department of Computing Science and Mathematics, Stirling FK9 4LA, Scotland, UK (Email: [email protected])
Abstract. Feature interaction detection methods, whether online or offline, depend on previous knowledge of conflicts between the actions executed by the features. This knowledge is usually assumed to be given in the application domain. A method is proposed for identifying potential conflicts in call control actions, based on analysis of their pre/post-conditions. First of all, pre/post-conditions for call processing actions are defined. Then, conflicts among the pre/post-conditions are defined. Finally, action conflicts are identified as a result of these conflicts. These cover several possibilities where the actions could be simultaneous or sequential. A first-order logic model-checking tool is used for automated conflict detection. As a case study, the APPEL call control language is used to illustrate the approach, with the Alloy tool serving as the model checker for automated conflict detection. This case study focuses on pre/post-conditions describing call control state and media state. The results of the method are evaluated by a domain expert with pragmatic understanding of the system’s behavior. The method, although computationally expensive, is fairly general and can be used to study conflicts in other domains.
Keywords: Call control, conflict detection, feature interaction, policy, APPEL, Alloy, logic model checking.
1 Introduction
1.1 Features and Policies for Call Control

Feature interactions have been discussed with respect to many types of systems, although a good part of the literature has concentrated on call processing systems. A survey of the literature on the subject can be found in [2]. Feature interaction is a complex phenomenon and can be analyzed from different points of view. Much research in the area has emphasized the behavioral aspect of the phenomenon. In this perspective, feature interactions are often seen as the result of complex behavior interleaving for the state machines that represent the features. In two feature interaction contests [10,12] the contestants were given what essentially
were state models for features. These had to be composed, and their composition had to be modeled, looking for behavioral traces showing undesirable behavior such that, for example, one feature was not allowed to run to completion due to the intervention of another feature.
In the world of VoIP, users are allowed to program their own features. However, most users do not program them from scratch using VoIP facilities directly. Rather, each VoIP system offers a set of basic features that can be combined by users and enterprises, by using specifically designed languages, to implement different policies. CPL (Call Processing Language [15]) is a well-known, early embodiment of this idea. Other policy languages with different purposes are LESS [22,23] and APPEL [20, 21]. In these approaches, users can specify policies such as: ‘if a call arrives from Alice during work hours, treat it as urgent’, or ‘calls to Bob should be tried at all addresses where Bob normally works’. The familiar rule paradigm is at the heart of these systems, and we conjecture that it will continue to be used. This paradigm is essentially identical to the ECA, or <event, condition, actions>, paradigm that has been applied extensively in areas such as reactive databases, agent systems, access control systems and the semantic web.
Generally speaking, a rule is enabled when its trigger occurs and its condition holds. Note the difference between trigger and condition. The trigger can be an external or internal event. A trigger can convey parameters for use in conditions and actions. Conditions can check database or ‘context’ information, such as the time of day or the role of the user in an enterprise ontology. Application of the rule leads to one or more actions. This apparently simple paradigm allows many variations, and is a good match to the many requirements of call control. A policy can expand into a number of such rules. By means of policies and rules one can define the correspondent of traditional features, though policies can be higher-level, user-oriented and more declarative.
Several actions can be proposed simultaneously, for example when one rule defines multiple actions or multiple rules are activated by the same trigger. When this happens, the different actions can direct the system to do incompatible things. Actions may also set conditions that can block other actions that should follow. Conflicts between actions imply potential conflicts between the policies that invoke the actions, and are the main manifestation of feature interactions in policy systems. In this paper the terms conflict and incompatibility will be synonyms, and conflicts and incompatibilities will be seen as the consequences of logical inconsistencies.
In policy systems there are resolution methods to ensure that only one action for each event is executed. For example, this is the situation for firewalls. Here, the rule file is typically scanned top-down and only the first applicable rule is used. This leads to just one action that accepts or rejects the proposed access. Some policy languages allow the user to include meta-rules for resolving cases where several actions may become simultaneously enabled. Often these meta-rules are based on priorities. The situation is complicated by the fact that for certain events, several actions may be needed. Nonetheless, for the validation of a policy set, all rules and actions that can become enabled for a given trigger and condition should be examined without considering resolution methods.
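The ECA reading of such rules, and the difference between firewall-style resolution (only the first enabled rule fires) and the validation view in which every enabled action is collected, can be sketched as follows (Python; the rule representation is an assumption, not CPL, LESS or APPEL syntax).

  def enabled(rule, event, context):
      # a rule is enabled when its trigger matches the event and its condition holds
      return rule["trigger"] == event["type"] and rule["condition"](event, context)

  def first_match(rules, event, context):
      # firewall-style resolution: only the first applicable rule is used
      for rule in rules:
          if enabled(rule, event, context):
              return [rule["action"]]
      return []

  def all_enabled(rules, event, context):
      # validation view: collect every action that could become enabled
      return [r["action"] for r in rules if enabled(r, event, context)]

  rules = [
      {"trigger": "incoming_call",
       "condition": lambda e, c: e["from"] == "Alice" and c["work_hours"],
       "action": "treat_as_urgent"},
      {"trigger": "incoming_call",
       "condition": lambda e, c: True,
       "action": "forward_to_voicemail"},
  ]
  event = {"type": "incoming_call", "from": "Alice"}
  print(first_match(rules, event, {"work_hours": True}))   # ['treat_as_urgent']
  print(all_enabled(rules, event, {"work_hours": True}))   # both actions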
Indeed, several cases of interest can be found in this way. For example, an important policy might be ‘shadowed’ by a more general but
contradictory policy, or a specific case might have been added in contradiction to an important general policy. This can happen because users in these systems may be allowed to add and delete rules when they see the need for them. When they do this, they may not have a global view of all the consequences of the changes. Such situations could lead to unwanted system behavior, even though it may be technically correct. Users should be notified with a request, and possibly suggestions, for resolution.

1.2 Related Work

Several authors have suggested that many undesirable feature interactions can be understood as the result of inconsistency in specifications. Perhaps the earliest and clearest statements in this sense can be found in [3,8], where feature interactions are modeled as inconsistencies among temporal logic specifications. According to this work, features A and B conflict if and only if a program realizing their joint specification A∧B does not exist. The detection method uses the model checker Cospan. A similar view is given a theoretical justification in [1]. But already the first classical paper on this subject [2] lists ‘conflicting assumptions’ as one of the main causes of feature interaction. Among others, [5, 9, 13] are based on the idea that feature interactions are the result of conflicting actions becoming enabled. But how to tell that actions can conflict? [22, 23] push the analysis to higher granularity by considering the pre/post-conditions of actions. For example, two actions having incompatible post-conditions can cause a feature interaction if they are simultaneously enabled, or two actions for which the first falsifies the pre-condition of the second can cause a feature interaction if they are enabled one after the other. Conflicts of pre/post-conditions in systems of ECA rules have also been studied in [18].
We extend the conflict identification method of [22, 23] to the language APPEL [20, 21], and we refine and adapt the definitions used in these papers. We automate the conflict detection method using the first-order formal language Alloy [11]. The associated Alloy tool is used to identify the conflicts.
A pragmatic approach to handling conflicts in APPEL is described in [19]. This work provides run-time support assuming that the conflicts have already been identified in some independent way. Another very recent contribution for the same language [16] provides a denotational semantics framework for APPEL, as well as a method to address feature interaction, but again assuming that conflicts between actions have already been identified. The method described in this paper can be used in conjunction with the techniques proposed in these two other papers to provide the information that they need, concerning the conflicts existing between specific actions. This method is a contribution towards a formal semantics for APPEL, as well as to feature interaction handling in APPEL.
In a related paper [4], a technique has been developed for filtering conflicts in the same APPEL language. This other approach is founded on the intuitive notion that actions may conflict if they share a common effect. In contrast, the work reported here has a higher degree of precision. Pre/post-conditions are considered, as well as the ordering of actions. This leads to a formal model that allows semantically-based inferences to be drawn about the compatibility of actions. Still, because of our level of
precision, the high-level analysis possible in [4] would be difficult with our method, and several aspects that can be considered with that method would be difficult to consider with ours. For the time being, we must consider these two methods as both useful and complementary. Future research will have to deal with the problem of reconciling and integrating them.
2 Ordering and conflicts between actions

In this method, the mutual consistency of actions is determined on the basis of their pre/post-conditions. We consider a system state to be characterized by a set of variables and their values. Pre/post-conditions are predicates that describe these values. The pre-condition of an action describes the state(s) in which the system must be in order for the action to execute. The post-condition of an action describes the state(s) that can result from its execution. We shall see below that pre/post-conditions can be consistent or inconsistent, leading to mutual consistency or inconsistency of states.
The following timing relationships can apply between actions:
• simultaneous execution: one action starts executing at a time when the other action has not completed.
• sequential execution: one action starts executing after the other action has completed, i.e. one action strictly precedes another.
If two actions start from or lead to mutually inconsistent system states, they are incompatible and should not be simultaneously executed. Even the case in which such actions are sequentially executed could be suspect, because the second action contradicts the results of the first (although this is normal in the evolution of a system). If an action establishes a post-condition which contradicts the pre-condition of another action, then the second action cannot immediately follow the first.
In more detail, the following relations are of interest between the pre/post-conditions of two actions A and B (this is not meant to be an exhaustive list):
1. Relationship between the pre-conditions of A and the pre-conditions of B:
(a) The conjunction of the pre-conditions of these two actions is always true. The two actions can thus always be executed simultaneously. This is perhaps a rare situation.
(b) The conjunction is satisfiable. In certain system states, A and B can both be executed.
(c) The pre-conditions of the two actions are not simultaneously satisfiable. There are no system states for which A and B can be executed simultaneously. For example, they both might require the same device or they can be executed only in different connection states.
2. Relationship between the post-conditions of A and the pre-conditions of B (or vice versa). The cases are similar:
(a) The conjunction is always true: then the second action can always start after the first.
(b) The post-conditions of A are simultaneously satisfiable with the pre-conditions of B. B can follow A in the case of simultaneous truth. (A more general case of these two situations is the case in which the post-condition of A implies the pre-condition of B.)
(c) The post-condition of A is not simultaneously satisfiable with the pre-condition of B. In other words, B cannot follow A, or A disables B. For example, A might free a device that B needs to find reserved, or A might leave the system in a connection state that is different from the one B requires.
3. Relationship between the post-conditions of A and B:
(a) Simultaneous truth: no problem for concurrent execution.
(b) The post-conditions of A and B are simultaneously satisfiable. This means that the results of A and B can be compatible.
(c) The post-conditions of A and B are not simultaneously satisfiable. This means that the results of A and B are incompatible in principle. For example, one of them disconnects the call while the other continues it. Simultaneously executing the two actions would leave the system in an inconsistent, i.e. impossible, state.
Doing a thorough analysis of all the cases above would be rather complicated, and to our knowledge this has never been done for realistic call control systems. In this work, we are interested in a partial analysis of conflicts, and we identify three situations of conflict between actions (Figure 1):
• concurrency conflicts: two actions have inconsistent pre-conditions, and thus cannot be executed in the same system state.
• disabling conflicts: an action leaves the system in a state where a second action cannot be executed.
• result conflicts: two actions would leave the system in an inconsistent (impossible) state, and thus cannot be executed simultaneously.
Further, the two aspects of pre/post-conditions to be considered are the connection state and the media state.
Figure 1. Three types of conflicts
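The three situations can be phrased directly as checks over pre/post-conditions, as in this sketch (Python; the dictionary encoding of an action and the incompatible predicate are assumptions): concurrency conflicts compare two pre-conditions, disabling conflicts compare the post-condition of the first action with the pre-condition of the second, and result conflicts compare two post-conditions.

  def concurrency_conflict(a, b, incompatible):
      # the two actions cannot start from a common system state
      return incompatible(a["pre"], b["pre"])

  def disabling_conflict(a, b, incompatible):
      # a leaves the system in a state from which b cannot execute
      return incompatible(a["post"], b["pre"])

  def result_conflict(a, b, incompatible):
      # executing both would leave the system in an impossible state
      return incompatible(a["post"], b["post"])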
Conflicts among pre/post-conditions of more than two actions are also possible. However this kind of analysis is rarely performed because it becomes complex and
very few concrete examples (where three actions can be in conflict without any two of them being in conflict) are known. In addition, our case study will be on APPEL, and run-time conflict handling for APPEL is designed so that only pairwise combinations of actions need be considered.
3 The APPEL Policy Language

APPEL (ACCENT Project Policy Environment/Language) is a general-purpose language for expressing policies. The language is defined in [20], and its use for call control is described in [21]. APPEL conforms to the ECA model for policy rules. APPEL is supported by a policy system that interfaces to some system under control (e.g. a SIP server). When a trigger is received (e.g. there is an incoming call or a new party is being added to the call), the policy server retrieves all policies that apply. These are typically policies of the caller and the callee, but higher-level policies may also be retrieved (e.g. of the user’s organizations). Policies are then checked for applicability. Apart from explicit policy conditions, other factors that determine applicability include the profile of a policy and its period of validity. The result is a set of actions. Triggers, conditions and actions may all be composite. Triggers and conditions may be combined by logical operators, and actions may be conditional, sequential or concurrent.
Although APPEL resembles a number of other policy languages, it differs in a number of important respects. It was specifically oriented towards the need for call control, as other approaches do not relate well to this application. For example, the Ponder policy language [6] assumes that the subject and target of a policy can be identified. However, in call control and other applications these concepts often are not present. APPEL was designed so that ordinary end users can formulate policies, unlike other languages that require a high degree of technical expertise. Since APPEL is XML-based, policies cannot be defined directly by a non-technical user. APPEL is therefore supported by a user-friendly policy wizard that allows creation and editing of policies using near-natural language.
Although APPEL was originally developed for call control, it is of wider applicability. For example, it has also been used for policy-based management of home care and sensor networks. This wide range of applications is possible because APPEL has a core language that is supplemented by domain-specific extensions. This is reflected in the language schemas and also in the ontologies that define domain vocabularies.
APPEL was designed with conflict handling in mind. As described in [19], the actions resulting from a trigger are filtered for compatibility. Special resolution policies are used to detect and to resolve conflicts. These policies resemble regular policies, but the trigger of a resolution policy is the action of a regular policy. Since resolutions are defined rather than being built into the policy system, there is considerable flexibility in how conflicts are handled. Generic resolutions choose among the conflicting actions, while specific resolutions propose domain-specific actions (that may differ from the conflicting ones). Although the approach supports automated run-time resolution of conflicts, it relies on resolution policies having been
already defined. That is, as mentioned, the approach is dependent on already knowing what the conflicts are. In previous work, conflicts were determined manually – a tedious and error-prone task. The new work reported here provides a systematic, automated and semantically-based way of discovering conflicts that can then be used to define resolution policies.
4 APPEL Actions and Their Conflicts
4.1 APPEL Actions

Although our approach could be used with APPEL in other domains, for concreteness and familiarity we use call control as the application domain. The call control actions in APPEL are defined by [20]. Some of these depend on particular communications protocols (e.g. H.323) and on particular parameters. We choose to abstract the key call control actions as follows:
• connect_to initiates a new and independent call
• reject_call rejects a call, i.e. prevents it from completing
• forward_to changes the destination of the call
• fork_to adds an alternative leg to the call
• add_party adds a new party to an existing call
• remove_party removes a party from the call
• add_medium adds a new medium to the call
• remove_medium removes a medium from the call
• remove_default removes the default medium from the call
• disconnect disconnects the call
This list of actions provides an abstract view of the call processing cycle in APPEL: an initial connection action can be followed by reject, forward or fork. During the call, parties can be added or removed. Media can be added or removed. The call can then be disconnected. Note that ‘disconnect’ is not an action in APPEL at present; however, our analysis has led to the conclusion that it should be added.
The action remove_default deserves mention, especially since there is no add_default. Certain actions, such as connect_to, implicitly reserve the default medium for the call (usually audio). Although the remove_default action also does not exist in APPEL, it is implicit. We have made it explicit because we will see later that it is useful to consider the availability of the default device in the pre/post-conditions.
All these actions have parameters, which can themselves cause interactions. However the treatment of parameters would add considerable complexity to our analysis. We have abstracted away from parameters in our initial analysis of conflicts. We have also omitted actions that do not directly relate to call control (e.g. those that log or send messages). Our method can be applied to them, but this has not been done here because it would have complicated the presentation of the approach with little additional insight. For one thing, our tables would have had to be much larger.
4.2 Pre/Post-Conditions for APPEL Actions

Like all real-life distributed systems, call processing systems are complex and the conditions involved are correspondingly complex. In practical terms, with current means, analysis must be limited to a few important characteristics. Following the example of [22, 23], we have decided to concentrate our analysis on two aspects: connection (or call) state and media state. We therefore characterize the state of a system as a pair <connection state, media state>.
Table 1 shows the table of pre/post-conditions that was developed for this study. It represents a simplified and abstract view of call processing in APPEL. Setting up this table is a delicate task which determines the results of the analysis. Call processing progresses through three mutually exclusive connection states: NoCall, CallSetup, MidCall. Note that Table 1 does not describe a state machine, i.e. transitions and associated actions from state to state. For example, there is no action that leads from CallSetup to MidCall. It is assumed that this state transition will occur as a consequence of events that are not shown in the table. That is, the table intentionally does not describe how the real system works ‘behind the scenes’. The table identifies two categories of media: the default medium (e.g. audio) and media in general (e.g. video, messaging). It is useful to make this distinction because a call is always initiated with a default medium. This may later be augmented or replaced by something else (e.g. video may be added, or the call may be reduced to messaging only).
The analysis presented in the following sections identifies six cases of conflict, in the three major categories we have identified:
1: Concurrency or Pre-Condition - Connection State
2: Concurrency or Pre-Condition - Media State
3: Disabling - Connection State
4: Disabling - Media State
5: Result or Post-Condition - Connection State
6: Result or Post-Condition - Media State

Action          | Pre-conditions                               | Post-conditions
                | Connection State           | Media State     | Connection State           | Media State
connect_to      | NoCall                     | DefaultAvailable | CallSetup                  | DefaultReserved
reject_call     | CallSetup                  | DefaultReserved  | NoCall                     | DefaultAvailable
forward_to      | CallSetup                  | DefaultReserved  | CallForwarded              | DefaultAvailable
fork_to         | CallSetup                  | DefaultReserved  | CallForked                 | DefaultReserved
add_party       | MidCall                    | DefaultAvailable | PartyAddedToCall, MidCall  | DefaultReserved
remove_party    | MidCall, PartyAddedToCall  | DefaultReserved  | MidCall                    | DefaultAvailable
add_medium      | MidCall                    | MediumAvailable  | MidCall                    | MediumReserved
remove_medium   | MidCall                    | MediumReserved   | MidCall                    | MediumAvailable
remove_default  | MidCall                    | DefaultReserved  | MidCall                    | DefaultAvailable
disconnect      | MidCall                    | DefaultReserved  | NoCall                     | DefaultAvailable

Table 1. Pre/post-conditions for APPEL actions
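Table 1 can be encoded directly as data, which makes the pairwise analysis of the following sections mechanical. The sketch below (Python; an independent re-encoding for illustration, not the authors' Alloy model, and it simplifies the add_party/remove_party states to MidCall) lists each action's pre/post connection and media states and enumerates case-1 conflicts, i.e. pairs whose pre connection states are incompatible.

  # (pre_conn, pre_media, post_conn, post_media) per action, after Table 1;
  # the PartyAddedToCall component of add_party/remove_party is omitted here.
  ACTIONS = {
      "connect_to":     ("NoCall",    "DefaultAvailable", "CallSetup",     "DefaultReserved"),
      "reject_call":    ("CallSetup", "DefaultReserved",  "NoCall",        "DefaultAvailable"),
      "forward_to":     ("CallSetup", "DefaultReserved",  "CallForwarded", "DefaultAvailable"),
      "fork_to":        ("CallSetup", "DefaultReserved",  "CallForked",    "DefaultReserved"),
      "add_party":      ("MidCall",   "DefaultAvailable", "MidCall",       "DefaultReserved"),
      "remove_party":   ("MidCall",   "DefaultReserved",  "MidCall",       "DefaultAvailable"),
      "add_medium":     ("MidCall",   "MediumAvailable",  "MidCall",       "MediumReserved"),
      "remove_medium":  ("MidCall",   "MediumReserved",   "MidCall",       "MediumAvailable"),
      "remove_default": ("MidCall",   "DefaultReserved",  "MidCall",       "DefaultAvailable"),
      "disconnect":     ("MidCall",   "DefaultReserved",  "NoCall",        "DefaultAvailable"),
  }

  # Table 2: the three basic connection states are pairwise incompatible
  BASIC = {"NoCall", "CallSetup", "MidCall"}
  def conn_incompatible(s1, s2):
      return s1 in BASIC and s2 in BASIC and s1 != s2

  # Case 1: concurrency conflicts on connection state (pre vs pre)
  case1 = sorted((a, b) for a in ACTIONS for b in ACTIONS
                 if a < b and conn_incompatible(ACTIONS[a][0], ACTIONS[b][0]))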
Connection State 1 | Connection State 2
NoCall             | MidCall
NoCall             | CallSetup
CallSetup          | MidCall
CallSetup          | NoCall
MidCall            | NoCall
MidCall            | CallSetup

Table 2. Connection State incompatibilities

4.3 Concurrency Conflicts
As mentioned, in this case, the question is whether two actions can be executed starting from the same system state. This will not apply if they require states that are incompatible. For example, action connect_to cannot be concurrent with any other action, since it is the only action that can be executed before a call exists. Similarly, add_party requires the system to be in a state where the default medium is available, while remove_party instead requires the default medium to have been reserved. Note that this does not mean that the two actions are necessarily incompatible. Our analysis
is not sufficiently detailed for such certitude. Indeed in every method reported in the literature, feature interaction detection only suggests the possibility of an interaction, which must be confirmed by domain experts, in consideration also of specific contexts.
The approach requires incompatibilities in state to be defined. Table 2 shows the incompatibilities between connection states that we have used. Essentially, the table says that the three connection states are mutually incompatible.

Table 3. Pre-condition conflicts for Connection State (case 1):
• connect_to conflicts with every other action (reject_call, forward_to, fork_to, add_party, remove_party, add_medium, remove_medium, remove_default, disconnect);
• reject_call, forward_to and fork_to each conflict with add_party, remove_party, add_medium, remove_medium, remove_default and disconnect;
• no other action pair conflicts on this criterion.
As a consequence of this, we obtain the results shown in Table 3 for incompatibilities among connection states. We can see here that reject_call and add_party are incompatible because each requires the system to be in a different state than the other. Two different connect_to actions are not incompatible for this reason, although they will be incompatible for other criteria, see below. Obviously the table is symmetric.
The other aspect to be considered is media state. The table of media state incompatibilities is not shown here because it is rather simple. It indicates potential conflicts if the actions require some medium (including the default) to be both reserved and available. Here again, the necessary simplification should be understood. A call system will have a variety of selectable media and default media. To be complete and precise, one would have to consider the specific media and defaults in the system under consideration, as well as specific operations that reserve and release them. This type of detail is possible in practice, but is irrelevant for the purpose of this paper, which is illustrating the method.

4.4 Disabling Conflicts
As mentioned, it is possible for an action to leave the system in a state where another action is impossible. This can be determined by checking post-conditions against preconditions. Concerning the connection state, the incompatibilities to be considered are the same as earlier: the three states are incompatible. Thus, an action that must find the system in state MidCall cannot immediately follow an action that leaves the system in state CallSetup. Similarly for media state, an action that requires default media to be reserved cannot follow an action that sets default media available, and so on.
Table 4. Disabling conflicts for connection state (case 3)
Table 4 shows the result obtained with respect to connection state. It is not symmetric because the disable relation is not symmetric.

4.5 Result Conflicts
Two actions are also incompatible if they lead to incompatible post-conditions. Again, these can refer to connection state or to media state. In the case of connection state, if an action leads to a certain connection state, another compatible action must lead to either the same state or to the next state. As mentioned, the cycle of states is as follows: NoCall leads to CallSetup which leads to MidCall, which leads again to NoCall. An action which leads to one of these states is incompatible with an action which jumps one link in the sequence. As an example, reject_call leads to NoCall, while add_medium leads to MidCall. Clearly a link is skipped here, since between the two we need an operation that establishes CallSetup. Hence the incompatibility. The complete incompatibility table between connection states will not be given for brevity, since essentially it reflects this reasoning. Note that this definition of state incompatibility is perhaps disputable, but this does not affect the validity of the method, which can be adapted to other definitions. Table 5 shows conflicts according to this criterion.
Table 5. Post-condition conflicts for connection state (case 5)

For media state, the incompatibilities are again simple. If the actions lead to some media being available and reserved, or the default media being available and reserved, there is a post-condition incompatibility because of media. To save space, the results of this analysis are given in Table 6, the recapitulative table.
4.6 Overall Results

Table 6 shows the complete results for the six types of conflicts we have discussed. We have also analyzed other situations, for example the case where an action enables, or sets the pre-conditions of, another action [14]. In this case, the post-condition of the first action implies the pre-condition of the second one. These situations cannot be discussed for lack of space.

4.7 Assessment
How would a domain expert in call control (or APPEL) view these results? An expert is guided by a pragmatic understanding of the system’s behavior, while the approach of this paper is formal and systematic, at a high level of abstraction. As mentioned, the parameters of actions are disregarded and the view of system state is much simplified, which means it is not said, for example, which specific party or medium is being added or removed. As a consequence, the method discussed here is intentionally pessimistic. However, since the goal of the work is to identify action pairs that require closer study because of potential conflicts, the approach is successful.
Table 6. Summary of conflicts
5 Detecting Conflicts in APPEL with Alloy

The method described in the previous sections could be implemented in different programming languages. Instead of using a conventional programming language, we decided to experiment with the model checker Alloy. This decision was taken for two
reasons: Alloy allows high-level, conceptual modeling of systems architectures and their properties. Further, it has the capability of checking logical models, and thus is open to the possibility of extending our method to logically more complex pre/post-conditions.

5.1 Alloy language and tool

Alloy [11] is a formal method that includes a logic, a language, and a tool. The logic is primarily a relational logic. The language provides a user-friendly representation for the logic. It supports several specification styles, called predicate calculus style, relational style and navigational style (the last one being the most expressive and most commonly used). It includes a type system and mechanisms to favor reusability. The tool is essentially a first-order logic model-checking tool, based on the use of off-the-shelf satisfaction software. Alloy allows one to describe a system model, and will check it for consistency. It is also able to check whether certain properties are true for the system. However, the user of Alloy is required by the execution system to specify a finite size for the model, and inconsistencies not found for the size specified could, at least in theory, appear for different sizes.
Signatures are used in Alloy to define types, e.g.

  abstract sig Rules {
    trigger : one OBtrigger,        // there is one trigger
    condition : lone OBcondition,   // zero or one condition
    action : some OBaction          // the set of actions is non-empty
  } { #action = 2 }
defines a rule and at the same time states that we are interested in generating exactly two objects of type action (of which there could otherwise be several, some), since we consider only conflicts between pairs of actions. Inheritance relationships can exist between signatures.

Facts constitute a database of statements that are known to hold in the system, e.g. the pre/post-conditions of the actions (see Table 1):

  fact {
    connect_to.PreConnState = NoCall
    connect_to.PreMediaState = DefaultAvailable
    reject_call.PreConnState = CallSetup
    reject_call.PreMediaState = DefaultReserved
    . . .
  }
Or the fact that connection states are pairwise incompatible (encoding Table 2):

  fact AC {
    IncompSet.ConcConflict_Incomp_ConnState =
      MidCall -> NoCall + MidCall -> CallSetup +
      NoCall -> MidCall + NoCall -> CallSetup +
      CallSetup -> MidCall + CallSetup -> NoCall
  }
Predicates are properties that can be true or false. Assertions are properties that can be checked by the tool, and for which the tool will try to find a counterexample. For
example, the following predicate is true if two actions are in concurrency conflict because of the connection state in their pre-conditions:

  pred Conc_Conflict_ConnState ( a1 : OBaction, a2 : OBaction ) {
    some v : a1.PreConnState, w : a2.PreConnState |
      (v -> w) in IncompSet.ConcConflict_Incomp_ConnState
  }
The assertion C12 states that predicate Conc_Conflict_ConnState is true for the two objects connect_to and reject_call:

  assert C12 {
    Conc_Conflict_ConnState ( connect_to, reject_call )
  }
The Alloy tool is asked to check this assertion with:

  check C12
The result is that there is no counterexample to the assertion, thus the assertion is valid: the two actions conflict in their pre-conditions, making them unsuitable for concurrent execution.

The core specification of this problem is about 3 pages of Alloy code. A further 22 pages are required for the check and assert statements needed to determine the presence of conflicts in all cases of interest.

5.2 Alloy Execution

Internally, the Alloy tool expresses the constraints as Boolean formulas and then tries to solve these by invoking off-the-shelf SAT solvers. This problem is of exponential complexity. However, SAT solvers are improving in efficiency and many non-trivial problems can be treated. Current solvers can handle thousands of Boolean variables and hundreds of expressions, although of course much depends on the type of the expressions [11]. Thus, the Alloy user must find a judicious compromise between detail and abstraction, as well as the size of the model to be checked. Too many details or too large a model will cause the tool to run out of memory or time. The Alloy tool provides a number of useful representations of its results: graphical, tree, XML.

Alloy models can be checked in one of two ways:

• With the function VerifActions, which will check the whole model but will find at most one (arbitrarily chosen) conflict per execution. Unfortunately Alloy cannot be asked to continue finding further solutions, as Prolog can.
• By systematically checking assertions. To consider all cases for our model requires 600 executions (10 actions × 10 actions × 6 predicates). Each assertion takes about 2.5 minutes to check, for a total of around 25 hours.

The analysis was performed on a Pentium with dual 2.80 GHz CPUs and 1 GB of main memory. We used Alloy version 3. Version 4 offers improvements in usability, but it became available late in the progress of this work.
We look forward to improvements in the Alloy tool that simplify and expedite its use in a case like ours, where several hundred assertions have to be checked. It should be underlined that our algorithm would be much more efficient if implemented in a procedural programming language; however, we wanted to work with a formal technique that allows a view close to the problem specification.
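Most of those check and assert statements are mechanical, so they lend themselves to being generated rather than written by hand. The following Java sketch illustrates one way to emit them; the action names follow the paper, but apart from Conc_Conflict_ConnState the predicate names are placeholders, and this is not a generator used by the authors.

  import java.util.List;

  // Sketch only: emit the Alloy assert/check statements for every action pair
  // and conflict predicate, one assertion per pair and predicate.
  public class GenerateAlloyChecks {
      public static void main(String[] args) {
          List<String> actions = List.of(
              "connect_to", "reject_call", "forward_to", "fork_to", "add_party",
              "remove_party", "add_medium", "remove_medium", "remove_default", "disconnect");
          // One predicate per conflict type; only the first name appears in the paper.
          List<String> predicates = List.of(
              "Conc_Conflict_ConnState", "ConflictType2", "ConflictType3",
              "ConflictType4", "ConflictType5", "ConflictType6");
          int n = 0;
          for (String pred : predicates) {
              for (String a1 : actions) {
                  for (String a2 : actions) {
                      String id = "C" + (++n);  // 10 x 10 x 6 = 600 assertions in total
                      System.out.printf("assert %s { %s ( %s, %s ) }%n", id, pred, a1, a2);
                      System.out.printf("check %s%n", id);
                  }
              }
          }
      }
  }

Each generated assertion still has to be checked in a separate Alloy run, so generating them reduces the manual effort but not the checking time reported above.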
6 Conclusions

We have described and justified a method for finding conflicts between call processing actions in a VoIP context, extending and adapting ideas in the work of [22,23] and others. We have demonstrated the effective application of this method to the actions of APPEL. Verification was undertaken using Alloy for first-order model checking. We have focused on APPEL and Alloy mainly because we are familiar with them. We plan experimentation and comparison with other applications, other policy languages and other formal tools. In another case study, the method was used to check the results of [23] with regard to LESS, and happily we were able to confirm them, as well as to complete them with the detection of some additional conflicts [14].

The contributions of this work are as follows:

• The approach allows potential conflicts among policies to be determined by analyzing the pre/post-conditions of their actions. This is a general idea that is not restricted to call control, APPEL or Alloy.
• As has been seen with APPEL, the method is successful in identifying genuine conflicts that need to be resolved by a domain expert.
• The approach provides a (partial) model of policy actions by defining their pre/post-conditions. In the context of this paper, this gives a more precise meaning to APPEL.

Note that the usefulness of this method is not limited to static feature interaction filtering. Understanding which actions conflict and why is useful in a number of areas of feature interaction research: for feature interaction avoidance, detection, and resolution. Most of the methods that have been proposed in these areas assume that it has been previously determined, by some other method, that certain actions conflict. Neither is the method limited to single-user interactions, since in principle conflicting actions can be in different users' policies [17]. Our method can also be integrated into other methods, e.g. the merge algorithm used in LESS. A more detailed presentation of these results can be found in [14].

Future work will deal with various generalizations mentioned in the paper. A more complete model should be developed for APPEL and the pre/post-conditions of its actions. In particular, action parameters and more complete state descriptions should be taken into consideration. We plan to extend the approach to other policy languages, as well as to investigate other tool support besides Alloy.
Acknowledgments

This work was funded in part by the Natural Sciences and Engineering Research Council of Canada, the UK Royal Society, and the Royal Society of Edinburgh. The authors thank Gemma Campbell (University of Stirling) for discussions about detecting conflicts in APPEL, and Xiaotao Wu for discussion about his method.

References

1. Aiguier, M., Berkani, K. and Le Gall, P.: Feature specification and static analysis for interaction resolution. Proc. Formal Methods'06, LNCS 4085, 364–379, 2006.
2. Calder, M., Kolberg, M., Magill, E.H. and Reiff-Marganiec, S.: Feature interaction: A critical review and considered forecast, Computer Networks, 41:115–141, Jan. 2003.
3. Cameron, E.J., Griffeth, N.D., Lin, Y.-J., Nilson, M.E., Schnure, W.K. and Velthuijsen, H.: A feature-interaction benchmark for IN and beyond, IEEE Communications Magazine, 31(8):18–23, Aug. 1993.
4. Campbell, G. and Turner, K.J.: Policy conflict filtering for call control. These proceedings.
5. Crespo, R.G., Carvalho, M. and Logrippo, L.: Distributed resolution of feature interactions for internet applications, Computer Networks, 51(2):382–397, Feb. 2007.
6. Damianou, N., Dulay, N., Lupu, E. and Sloman, M.: The Ponder specification language. Workshop on Policies for Distributed Systems and Networks (Policy2001), Jan. 2001.
7. Felty, A.P. and Namjoshi, K.S.: Feature specification and automatic conflict detection, in Calder, M. and Magill, E.H. (eds.), Proc. 6th Feature Interactions in Telecommunications and Software Systems, 179–192, IOS Press, May 2000.
8. Felty, A.P. and Namjoshi, K.S.: Feature specification and automated conflict detection, ACM Trans. on Software Engineering and Methodology, 12(1):3–27, Jan. 2003.
9. Gorse, N., Logrippo, L. and Sincennes, J.: Formal detection of feature interactions with logic programming and LOTOS, Software and System Modeling, 5(2):121–134 (mistakenly published as Detecting feature interactions in CPL), Jun. 2006.
10. Griffeth, N.D., Blumenthal, R., Gregoire, J.-C. and Ohta, T.: Feature interaction detection contest of the fifth international workshop on feature interactions, Computer Networks, 32(4):487–510, April 2000.
11. Jackson, D.: Software Abstractions: Logic, Language, Analysis, MIT Press, 2006.
12. Kolberg, M., Magill, E., Marples, D. and Reiff, S.: Second feature interaction contest. In: Calder, M. and Magill, E. (eds.), Feature Interactions in Telecommunications and Software Systems VI, IOS Press, 2000.
13. Kolberg, M., Magill, E.H. and Wilson, M.E.: Compatibility issues between services supporting networked appliances, IEEE Communications Magazine, 41(11):136–147, Nov. 2003.
14. Layouni, A.F.: Méthode formelle pour la détection d'interactions de fonctionnalités dans les systèmes de politiques. Mémoire de maîtrise, Université du Québec en Outaouais, Département d'informatique et ingénierie, 2007 (forthcoming).
15. Lennox, J., Wu, X. and Schulzrinne, H.: CPL: A language for user control of Internet telephony services, RFC 3880, Internet Engineering Task Force, Oct. 2004.
16. Montangero, C., Reiff-Marganiec, S. and Semini, L.: Logic-based detection of conflicts in APPEL policies, Proc. Symposium on Fundamentals of Software Engineering (FSEN'07), Feb. 2007.
17. Nakamura, M., Leelaprute, P., Matsumoto, K. and Kikuno, T.: On detecting feature interactions in the programmable service environment of Internet telephony, Computer Networks, 45(5):605–624, 2004.
18. Shankar, C., Ranganathan, A. and Campbell, R.: An ECA-P policy-based framework for managing ubiquitous computing environments. Mobiquitous 2005, July 2005.
19. Turner, K.J. and Blair, L.: Policies and conflicts in call control, Computer Networks, 51(2):496–514, Feb. 2007.
20. Turner, K.J., Reiff-Marganiec, S. and Blair, L.: APPEL: The ACCENT project policy environment/language. Technical Report CSM-161, University of Stirling, UK, Dec. 2005.
21. Turner, K.J., Reiff-Marganiec, S., Blair, L., Pang, J., Gray, T., Perry, P. and Ireland, J.: Policy support for call control, Computer Standards and Interfaces, 28(6):635–649, 2006.
22. Wu, X. and Schulzrinne, H.: Handling feature interactions in the Language for End System Services, in Reiff-Marganiec, S. and Ryan, M.D. (eds.), Proc. 8th Feature Interactions in Telecommunications and Software Systems, 270–287, IOS Press, 2005.
23. Wu, X. and Schulzrinne, H.: Handling feature interactions in the Language for End System Services, Computer Networks, 51(2):515–535, 2007.
Policy Conflict Filtering for Call Control

Gavin A. Campbell and Kenneth J. Turner
Computing Science and Mathematics, University of Stirling, Stirling FK9 4LA, UK
e-mail: gca | kjt @cs.stir.ac.uk

Abstract. Policies exhibit conflicts much as features exhibit interaction. Since policies are defined by end users, the combinatorial problems involved in detecting conflicts are substantially worse than for detecting feature interactions. A new, ontology-driven method is defined for automatically identifying potential conflicts among policies. This relies on domain knowledge to annotate policy actions with their effects. Conflict filtering is performed offline, but supports conflict detection and resolution online. The technique has been implemented in the RECAP tool (Rigorously Evaluated Conflicts Among Policies). Subject to user guidance, this tool filters conflicting pairs of actions and automatically generates resolutions. The approach is generic, but is illustrated with the APPEL policy language for call control. The technique has improved the scalability of conflict handling, and has reduced the effort required of the previous manual approach.

Keywords. Call Control, Conflict Detection, Ontology, OWL, Policy
1. Introduction

1.1. Policies and Features

Policies are rules used to control a system dynamically through a set of actions to be performed in specified circumstances. Policies are typically defined by an event, a condition and an action. Historically, policy-based systems have been developed in domains such as access control, quality of service, security and system management. In all these applications, policies are typically created and maintained by administrators. However, the authors' approach is unusual in being designed for ordinary system users.

During the past decade, many policy languages and systems have been developed to decentralise the control of system behaviour, to automate system management, and to give more control to end users. This added flexibility has the advantage that users can tailor services more accurately to their needs, reducing reliance on generic system facilities.

Traditional feature-based approaches lack flexibility. In telephony, for example, the features are mostly defined by the network operator. Users have little choice except to select the features they wish and to define a few feature parameters. Systems that offer multiple, independently-defined features are prone to interactions – a well-known situation where the behaviour of one feature may affect another. Many feature interactions have been identified in call control. Detecting these interactions is often problematic due to the large numbers of features (several hundred in a
typical PBX). Resolving the interactions can also be problematic because features are low-level units of functionality. It is often necessary to understand the user's true intention before obtaining a satisfactory resolution. For example, consider the well-known interaction between Do Not Disturb and Alarm Call. The user's intention was presumably to avoid calls from others, but not the alarm call from the exchange. Policies are closer to user goals (e.g. 'I do not wish to be called by anyone') and so more faithfully reflect user intentions. Resolving interactions or conflicts is facilitated by the higher-level approach of policies.

This paper presents an approach to conflict handling using domain knowledge captured in an ontology. Collecting this knowledge is a manual step. However, conflict detection is then fully automated using the RECAP tool (Rigorously Evaluated Conflicts Among Policies). Conflict resolution is partially automated by RECAP – outline resolution policies are automatically generated, for completion by the domain expert using a policy wizard. The general idea is that conflicts are identified and specified through offline filtering. The resulting conflict resolution policies are then used online.

1.2. Ontology Support for Policies

The authors use a policy system called ACCENT (Advanced Component Control Enhancing Network Technologies). This includes a policy server that supports the APPEL policy language, a wizard for creating and editing policies, and a variety of supporting interfaces for various application domains. In recent research, the authors have extended APPEL to support new and multiple domains. As the core schema of APPEL is generic, it can be extended for different applications by adding further schemas. However, this does not adequately deal with concepts in the application domains. The authors have therefore developed additional support for APPEL through a range of ontologies.

The new approach uses OWL (Web Ontology Language) to describe the core APPEL language. The core ontology is then extended hierarchically to define user interface information and to specialise the language for particular domains. This has increased the extensibility and precision of the policy language. APPEL is supported by a wizard that offers a web-based interface for creating and editing policies. This has been re-engineered to replace hard-coded domain information (for call control) with information stored within the ontologies. The result is a highly flexible user interface, easily adaptable to reflect new application domains.

1.3. Related Work

Policy conflict is the equivalent of feature interaction in telephony and related domains. Since policies are defined in a decentralised manner, the potential for unwanted interaction is far greater than in conventional feature-based systems. The increased flexibility that policies offer to users is offset by more pervasive, complex and subtle conflicts among policies.

Conflicts in a policy-based environment are often caused by the simultaneous execution of policies with contradictory actions. (Conflicts can also arise between actions and system state, i.e. the result of previous actions.) Policy conflict requires study of three different aspects: filtering conflict-prone policies, defining conflict detection mechanisms,
and defining a conflict resolution strategy. Although policy filtering is a new departure, conflict detection and resolution have already been studied. In system management, for example, conflict detection and resolution techniques include [1,2]. Enhancements to COPS (Common Open Policy Service, RFC 2748) are aimed at managing policy conflict through rigorous definition of actions.

Many techniques have been developed to automate feature interaction detection at the specification stage. Techniques in feature interaction detection have focused heavily on a variety of formal methods such as process algebras, automata and (temporal) logic. Of these, techniques for filtering interaction-prone features are the most relevant. However, few are directly relevant to policy-based control. Nonetheless, the ideas have influenced the work reported here.

The notion of interaction filtering was initially presented in [3]. The filtering process is followed by detailed checking and refinement of conflicts. Several tools support an automated approach to filtering feature interactions. One example is a prototype designed to detect interactions in a call environment [4]. This filters interactions among IN services, using simple descriptions of the static structure for each service. Interactions are detected for groups of services used in particular call scenarios.

Formal approaches have been followed by a number of researchers. FIX (Feature Interaction Extractor [5]) is an example of a domain-independent approach, although only application to telephony has been reported. This uses the model checker COSPAN to run consistency tests on feature specifications. In a further stage, the tool user can investigate the generated scenarios and decide on their accuracy. [6] presents a filtering technique based on Use Case Maps and applies it to telephony features. [7] uses preconditions and postconditions to identify inconsistencies in features for LESS (Language for End System Services). [8] describes work that is directly relevant to this paper as it uses temporal logic to formalise the semantics of APPEL. This leads to a formal basis for automated detection of conflicts. In other work on APPEL, [9] presents a method for discovering conflicts based on the pre/post-conditions of actions. This allows semantically-based inferences to be drawn about the compatibility of actions. However, it is technically more complex than the simple and intuitive approach of the work reported here. As complementary techniques, future study will investigate how [8,9] can be reconciled and integrated with the authors' approach.

The work reported here differs in important respects from the foregoing:

• Policies rather than features are used for control. These support higher-level statements of user intentions, and facilitate the resolution of conflicts.
• The approach is adapted to many domains, including ones outside telephony. For example, the authors use it to detect conflicts in home care and in sensor networks.
• A formal specification of the system and its policies is not required. In practice a precise specification is usually infeasible because the system is too complex, is proprietary, or is open-ended because users can define their own features or policies.
• The approach is intentionally less formal. This has the advantages of being simpler to set up and more intuitive, i.e. relying only on domain knowledge. Domain experts, rather than formalists, can define the information needed for conflict filtering.
• The analysis is efficient and domain-oriented.
1.4. Paper Outline

Section 2 presents an overview of the ACCENT policy system, the APPEL policy language, and its approach to conflict detection and resolution. Section 3 introduces ontologies, and outlines how they were used to model APPEL. Section 4 explains how ontologies are used to identify policy conflicts. Section 5 discusses the approach to conflict filtering and the associated tool support. Section 6 evaluates the results.
2. The ACCENT Policy Approach

2.1. Policy System and Language

The ACCENT policy system (Advanced Component Control Enhancing Network Technologies, www.cs.stir.ac.uk/accent) was originally designed to allow users to tailor (Internet) call handling to their own preferences. As illustrated in figure 1, the ACCENT system is split across three layers. At the lowest level, the system layer connects the policy system to its external environment. Policy enforcement is handled by the policy system layer that incorporates the policy server, policy store (where policies reside) and policy database (containing user login and server configuration data). At the top level, the user interface layer is where users create policies and where contextual information is obtained. Policies are defined and edited via a web-based policy wizard [10]. Each policy is saved as an XML document and uploaded to the policy store. The general approach of ACCENT is described in [11].

[Figure 1. ACCENT System Architecture: the user interface layer (policy wizard, user interface, context system), the policy system layer (policy server, policy store, policy database) and the communications system layer (communications network server).]

APPEL (ACCENT Project Policy Environment/Language [12]) is a comprehensive and flexible language, designed to express policies within the ACCENT system. Key factors in the design of APPEL include a simple but concise structure, ease of extension, and orientation towards ordinary users. APPEL comprises a core language and its specialisations for different application domains. The original specialisations were for call control and conflict resolution, but new specialisations have been developed for home care and sensor networks.

APPEL defines the overall structure of a policy document: regular policies, resolution policies, and policy variables. A policy consists of one or more rules in ECA form (Event-Condition-Action). Each rule has a combination of triggers (optional), conditions
(optional), and actions (mandatory). The core language constructs are extended through specialisation for new applications. A policy is eligible for execution if its triggers occur simultaneously and its conditions apply. Additional conditions may be imposed, such as the period during which the policy applies, or the profile to which the policy belongs. When the policy system is informed of an event, the applicable policies are retrieved and applied if eligible. As multiple policies can be triggered, conflicts may arise among their actions.

2.2. Conflict Detection and Resolution

Conflicts result from clashes between pairs of policy actions. As an example from call control, the caller may wish to conference in a third party whom the callee does not wish to speak to. The caller/callee policies propose add/remove party(person) for some individual. These contradictory actions must be identified as conflicting. They must also be resolved, e.g. by giving the caller (as the bill payer) priority.

The ACCENT system allows for both static and dynamic conflict detection. Static detection is performed when a policy is defined and uploaded to the policy system, while dynamic detection occurs at run-time. Although both methods are permitted, only dynamic detection is currently implemented. This focus was intentional since run-time conflict handling is the more challenging task. Dynamic conflicts also subsume static conflicts. The actions resulting from a policy trigger are checked pairwise for conflicts. (The design of the language means that the order of comparison is irrelevant, and that only pairs need be checked.) The outcome is a set of non-conflicting actions.

Human guidance is almost inevitably required to determine how best to handle conflicts. Only certain 'technical' conflicts might be detected fully automatically. Even then, the treatment of a conflict requires judgment. As an example, suppose one user wishes to add video to a call but the other user wishes to avoid this. This is clearly an add/remove conflict. A trivial resolution would be to permit one or other policy to prevail. However, an acceptable resolution might be much more complex, e.g. using a third party to adjudicate the conflict. As a further example, suppose one user wishes to add the G.723 audio codec to a call but the other user wishes to avoid it. This appears to be an identical kind of add/remove conflict. In fact it is not, because both parties (in H.323) must be willing to support the G.711 audio codec. There is therefore no need to treat this as a conflict. This illustrates that conflict detection requires domain knowledge and human intuition.

Conflict handling in ACCENT is defined by resolution policies that are distinct from regular policies. Resolution policies express when and how the system should respond to conflicts. Their effect is to process a set of proposed policy actions, selecting those that are compatible with the conflict handling rules. Resolution policies are specified as an extension of the core APPEL language, and therefore use the same syntax as regular policies. However, resolution policies use a different vocabulary because they serve a different purpose. The domain-specific actions of regular policies are the triggers of resolution policies. Resolution policies can dictate generic outcomes (selecting among the proposed actions) or specific outcomes (dictating domain-specific actions).
APPEL has a built-in notion of policy preference which allows a user to indicate how strongly they wish a policy to be applied. This allocates priorities to policies as one means of resolving conflicts. However, other resolutions may be used, such as choosing
the policy of a superior user, or choosing a longer-standing policy. Resolution policies give considerable flexibility in that conflict handling is not hard-coded into the policy system. It is defined externally and can be domain-specific. To avoid infinite regress, resolution is performed just once. The approach ensures that the outcome is conflict-free, and does not require resolutions to be checked again for conflicts. Conflict handling within ACCENT is described in [13].

The main limitation of this previous work was that resolution policies had to be defined manually. This was tedious and error-prone. The new work reported here describes an ontology-driven mechanism to automate conflict detection. The RECAP tool provides automated support for detecting conflicts and for creating outline resolution policies. The details of resolution require human judgment and are added in a further manual step.
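As a rough illustration of how a set of proposed actions can be reduced pairwise to a conflict-free set, the following Java sketch resolves conflicts by policy preference. It is not the ACCENT implementation: the conflict test, the action names and the numeric preferences are assumptions made only for this example.

  import java.util.*;

  // Sketch: each proposed action carries the preference of the policy that
  // proposed it; when two actions conflict, the lower-preference one is dropped.
  final class ProposedAction {
      final String name;
      final int preference;   // higher value = stronger wish to apply
      ProposedAction(String name, int preference) { this.name = name; this.preference = preference; }
  }

  final class Resolver {
      // Placeholder conflict test; in ACCENT this is governed by resolution policies.
      static boolean conflicts(ProposedAction a, ProposedAction b) {
          return (a.name.equals("add party") && b.name.equals("remove party"))
              || (a.name.equals("remove party") && b.name.equals("add party"));
      }

      // Pairwise check of the proposed actions; the result is a conflict-free subset.
      static List<ProposedAction> resolve(List<ProposedAction> proposed) {
          List<ProposedAction> kept = new ArrayList<>();
          for (ProposedAction candidate : proposed) {
              boolean loses = false;
              for (ProposedAction k : kept) {
                  if (conflicts(candidate, k) && candidate.preference <= k.preference) {
                      loses = true;                               // an already-kept action wins
                      break;
                  }
              }
              if (!loses) {
                  kept.removeIf(k -> conflicts(candidate, k));    // candidate wins over weaker actions
                  kept.add(candidate);
              }
          }
          return kept;
      }

      public static void main(String[] args) {
          List<ProposedAction> proposed = List.of(
              new ProposedAction("add party", 70),                // caller's policy (bill payer)
              new ProposedAction("remove party", 40));            // callee's policy
          Resolver.resolve(proposed).forEach(a -> System.out.println(a.name));  // prints: add party
      }
  }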
3. Ontology Support for Policies

3.1. Ontology Background

An ontology is the set of terms used to describe and represent an area of knowledge, together with the logical relationships among these [14]. It provides a common vocabulary to share information in a domain, including the key terms, their semantic interconnections, and the rules of inference. Ontologies enable separation of domain knowledge from common operational knowledge in a system.

A variety of specialised languages are used to define ontologies. OWL (Web Ontology Language [15]) is a standard XML-based language. It is supported by a wide range of software, and can be integrated with other techniques. In addition, OWL provides a larger function range than any other ontology language to date. For these reasons, OWL was used to define the ontologies in the work reported here.

An OWL ontology defines classes, properties and individuals. A class represents a particular term or concept in a domain, while a property is a named relationship between two classes. An individual is an instance or member of a class, usually representing real data content within an ontology. Properties are defined for classes in the form of restrictions that specify the nature of a relationship between two classes. OWL supports inheritance within class and property structures. OWL can also import shared ontologies. The ontological basis for APPEL exploits this, using multiple documents for different aspects of the core language and its specialisation in various domains.

Ontology support for policies is provided by POPPET (Policy Ontology Parser Program Extensible Translation [16]). This uses the PELLET ontology reasoning engine (pellet.owldl.com) and the Jena ontology parser (jena.sourceforge.net). POPPET parses and integrates ontologies on behalf of the ACCENT system. Figure 2 illustrates the relationship between ACCENT and POPPET.

[Figure 2. Ontology Support by POPPET for ACCENT Policies: the ACCENT policy wizard and policy server communicate via RMI with the POPPET server, which uses the PELLET reasoner to process the OWL ontology.]

3.2. Ontologies for Policies

Ontologies were defined for the core of APPEL and its domain specialisations. Using OWL, three layers of ontologies were developed [16]. At the lowest level, GenPol (generic policy) defines core language elements such as variables, rules, triggers, conditions and actions. This includes the basic elements
of a policy and the cardinality rules relating these. Each core element is defined as an ontology class. Relationships between classes are defined using ontology properties that link them. Using properties to describe the associations between concepts is a powerful means of modelling the structure of APPEL. The GenPol ontology contains no domain knowledge, only a definition of how high-level concepts may be combined to form a regular policy or resolution policy.

The ACCENT policy wizard [10] is a user-friendly front-end for creating and editing policies. Such a facility is key in supporting policy definition by non-technical users of the system. The wizard presents policy and domain information using near natural language. The user interface is not part of APPEL proper, but is essential for the system to be usable. Additional, wizard-related knowledge is therefore defined in WizPol (wizard policy) as an extension of GenPol. This specialises the core language for use with the wizard. Examples of wizard-specific facilities include the categorisation of triggers, conditions, actions and operators. In addition, a subset of the language functionality is matched to the skill or authorisation level of a user.

The GenPol and WizPol ontologies define domain-independent aspects of regular policies and resolution policies. To specialise the language for a new domain, a further ontology is created to import and extend these base ontologies; importing WizPol implicitly imports GenPol as well. A domain-specific ontology can contain arbitrary new concepts, but all policy language concepts must be subclasses within the hierarchy defined by the base ontologies. Consequently, as these ontologies are combined through an import mechanism only, they do not suffer incompatibility issues.

The CallControl domain ontology specialises APPEL for call handling. Significant extensions include call control triggers, conditions and actions. Using properties defined in GenPol, constraints may be placed on individual triggers, conditions and actions. This defines their use for certain user levels and for display categories within the wizard. In addition, properties define which actions and conditions are permitted with a particular trigger, and the valid range of operators associated with each condition parameter. Further user interface and data type aspects may be included in a domain-specific ontology.
90
G.A. Campbell and K.J. Turner / Policy Conflict Filtering for Call Control
4. Automated Conflict Detection

4.1. Action Effects

Conflicts arise between policy actions with certain parameters. When two actions with a similar effect are executed simultaneously, the result may be a conflict. For example, actions that add and remove the same aspect are potentially in conflict. Thus, the call control actions add party and remove party are likely to contradict each other. Other conflicts are far more subtle, and cannot easily be identified by naming alone.

Action parameters may use enumerated types, e.g. call control parameter medium has possible values audio, video and whiteboard. Actions plus selected parameters allow a deeper exploration of conflicts. Where an action has an enumerated parameter type, conflicts between instances of the same action are likely only if the parameters are the same. For example, call control action add medium(audio) could be considered to conflict with a second add medium(audio). However, if the second action wished to add video then this would not be an obvious conflict. For this reason, actions with distinct values in an enumerated parameter set are treated as distinct actions. In general, an action must be considered along with a subset of its parameters.

In a domain like call control, there is a rich set of action names that suggest conflicts in themselves. Even there, it is often necessary to take parameters into account. For example, adding one party and removing a different party is not problematic. In other domains such as home care and sensor networks, a much more limited selection of action names is used. This is because actions are mainly differentiated by their parameters. A simple device out action, for example, carries parameters that indicate the action type, device class, device instance and action parameters. Conflict detection has to work with the domain policy language as defined. In general, a subset of parameters must therefore be considered for conflict along with the basic action name. However, for simplicity the following text mainly refers to comparing actions.

Policy actions are defined to have one or more effects on the execution environment. These effects range from the technical (e.g. bandwidth) to the social (e.g. privacy). Internal policy actions affect the policy system itself, such as setting system properties or accessing system resources. Conflicts are likely where two actions share a common effect. Any action may potentially conflict with itself. However, all action pairs must be considered too. (As noted earlier, only two-way and not n-way conflicts need be considered.) Figure 3 shows the effects of internal policy actions, while figure 4 shows the effects of call control actions. Call control actions with enumerated parameters are listed separately. Effects for internal policy actions are distinct from those of domain actions, as internal and external actions do not (normally) conflict. Effect categories differ depending on the language domain.

As discussed in section 3.2, ontologies have been used to model policy language concepts. It is therefore convenient to define action effects in these ontologies. However, the ontologies play no role in conflict detection or resolution. As conflict detection is not an integral part of APPEL, the concept of action effect is defined in the WizPol ontology. This allows conflict information to be specified outside the core language, while maintaining the advantage of further specialisation in domain-specific ontologies.
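Since the action effects are held in the ontology (through the hasActionEffect property described in the next paragraph), a tool can read them programmatically. The following Java sketch uses the Jena API, on which POPPET builds, to list action-effect pairs. It is not POPPET or RECAP code; the ontology file name, the namespace URI and the treatment of hasActionEffect as a plain RDF property are assumptions made for illustration.

  import org.apache.jena.rdf.model.*;

  // Sketch: read hasActionEffect statements from a domain ontology with Jena.
  public class ListActionEffects {
      public static void main(String[] args) {
          Model model = ModelFactory.createDefaultModel();
          model.read("file:callcontrol.owl");                  // assumed ontology file
          String ns = "http://example.org/wizpol#";            // assumed namespace
          Property hasActionEffect = model.createProperty(ns, "hasActionEffect");
          StmtIterator it = model.listStatements(null, hasActionEffect, (RDFNode) null);
          while (it.hasNext()) {
              Statement s = it.nextStatement();
              System.out.println(s.getSubject().getLocalName()            // the action
                  + " -> " + s.getObject().asResource().getLocalName());  // its effect category
          }
      }
  }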
Effect information is defined in WizPol through the ActionEffect class and the hasActionEffect property. The ActionEffect class is a superclass of all effect categories for both internal and domain-specific policy actions. Generic action effects are defined as subclasses of this class in WizPol. Domain-specific action effects are defined as subclasses within a separate domain ontology that imports WizPol. Each policy action is linked to the appropriate effect category class using the hasActionEffect property. This relates actions and effects, allowing a tool to infer overlapping actions.

  Action                      Effect
  log event(arg1)             file
  restart timer(arg1)        timer
  send message(arg1,arg2)    channel
  set variable(arg1,arg2)    variable
  start timer(arg1,arg2)     timer
  stop timer(arg1)           timer
  unset variable(arg1)       variable

  Figure 3. Internal Action Effects

  Action                      Effect
  add caller(conference)     party, privacy
  add caller(hold)           party, privacy
  add caller(monitor)        party, privacy
  add caller(release)        party, privacy
  add caller(wait)           party, privacy
  add medium(audio)          medium, privacy
  add medium(video)          medium, privacy
  add medium(whiteboard)     medium, privacy
  add party                  party, privacy
  confirm bandwidth          bandwidth
  connect to                 route
  fork to                    route
  forward to                 route
  note availability          availability
  note presence              presence
  play clip                  medium
  reject call                call
  reject bandwidth           bandwidth
  remove medium(audio)       medium
  remove medium(video)       medium
  remove medium(whiteboard)  medium
  remove party               party

  Figure 4. Call Control Action Effects

4.2. Conflict Detection Algorithm

Only pairs of actions need to be considered in the analysis; there are no three-way conflicts. Potential conflicts between actions can be inferred from the ontology-defined effect categories through a two-stage algorithm. Firstly, any two actions sharing at least one
common effect are identified as potentially conflicting. Secondly, actions with enumerated parameter types are analysed. Where two actions share the same parameter value they potentially conflict; otherwise it is assumed that no conflict exists.

The total number of action pairs, including self-conflicts, is n(n+1)/2, where n is the number of possible policy actions. The policy language for call control has 21 possible actions and therefore a total of 231 action pairs. Conflict handling is commutative (if A1 and A2 conflict, then so do A2 and A1) and associative (the way in which actions are paired is irrelevant).

The ontologies allow a list of actions to be inferred for each effect category. If two actions are present in some category, they can be marked as potentially conflicting. For example, the call control actions fork to and forward to potentially conflict as they both affect the route. All action pairs deemed to conflict in this way are then automatically reviewed with respect to their parameters. As explained earlier, actions with enumerated parameter types are considered in more detail. This increases the total number of action pairings as an action may be instantiated multiple times with different parameter values. For example, the action add medium with its parameter is equivalent to three distinct actions. This allows more accurate analysis of potential conflicts. Where actions might be treated as potentially conflicting based on a shared effect, this might not be the case when particular parameters are considered.

To explain this more concretely, some examples for medium are shown in figure 5. An action may conflict with itself if there is a common parameter (e.g. both instances wish to add video), and may not conflict if the parameters are different (e.g. they wish to add video and whiteboard respectively). Different actions with a common effect and the same parameter indicate potential conflict (e.g. attempting to add and remove audio simultaneously). Actions with a common effect and dissimilar parameters are assumed not to conflict (e.g. altering the medium by adding video and removing whiteboard).

  Action1             Action2                     Conflict
  add medium(audio)   remove medium(audio)        ✓
  add medium(audio)   add medium(video)           ×
  add medium(video)   add medium(video)           ✓
  add medium(video)   remove medium(whiteboard)   ×

  Figure 5. Sample Call Control Conflicts with Action Parameters
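The two-stage check described above can be stated compactly in code. The following Java sketch is an illustration only, not the RECAP source: the Action representation and the sample data, taken from figures 3 to 5, are assumptions made for the example.

  import java.util.*;

  // Sketch of the two-stage filter: a shared effect marks a pair as potentially
  // conflicting; if both actions carry an enumerated parameter, the pair is kept
  // only when the parameter values coincide.
  final class Action {
      final String name;            // e.g. "add medium"
      final String parameter;       // enumerated parameter value, or null if none
      final Set<String> effects;    // e.g. {"medium", "privacy"}
      Action(String name, String parameter, Set<String> effects) {
          this.name = name; this.parameter = parameter; this.effects = effects;
      }
  }

  final class ConflictFilter {
      static boolean potentialConflict(Action a1, Action a2) {
          boolean sharedEffect = !Collections.disjoint(a1.effects, a2.effects);  // stage 1
          if (!sharedEffect) return false;
          if (a1.parameter != null && a2.parameter != null) {                    // stage 2
              return a1.parameter.equals(a2.parameter);
          }
          return true;
      }

      public static void main(String[] args) {
          Action addAudio = new Action("add medium", "audio", Set.of("medium", "privacy"));
          Action remAudio = new Action("remove medium", "audio", Set.of("medium"));
          Action addVideo = new Action("add medium", "video", Set.of("medium", "privacy"));
          System.out.println(potentialConflict(addAudio, remAudio));  // true  (figure 5, row 1)
          System.out.println(potentialConflict(addAudio, addVideo));  // false (figure 5, row 2)
      }
  }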
5. The RECAP Conflict Filtering Tool

5.1. Automated Support for Conflict Filtering

The RECAP tool (Rigorously Evaluated Conflicts Among Policies) has been developed to automate the algorithm in section 4 for identifying conflict-prone actions. Figure 6 illustrates what the tool looks like on-screen. Taking the first line as an example, the tool shows pairs of actions (add medium(audio) and add medium(audio)), why they conflict (shared effect on medium and privacy), and when this conflict was last modified (automatically or manually).
[Figure 6. Screenshot of RECAP]
Depending on the domain, the conflicts identified by RECAP may or may not be complete and correct. Conversely, subtle conflicts that are not automatically flagged can be added by hand. As noted earlier, conflict handling will always require human judgment and cannot be fully automated. Based on human guidance, RECAP produces conflict resolution policies.

RECAP is started by pointing at the relevant domain ontology. Using the action effects, the tool automatically constructs a matrix of all policy action pairs and highlights those deemed to be potential conflicts. The tool user may explore the matrix, confirming or refining each conflicting action pair. If closer inspection reveals that there is no real conflict, this pairing can be flagged as conflict-free. If an action is linked in an ontology to some effect, this may not be true of the actual implementation. Conflicts arising from this cause can be dismissed using the tool to undo the linking.

Potential conflicts are displayed in the tool matrix by noting the common effects in the appropriate cell. For convenience, internal and domain-specific actions are described here in separate figures though in practice they are combined by RECAP. The result of filtering internal conflicts for APPEL is shown in figure 7. Conflicts are numbered in the figure according to the underlying effect. As an example of conflict, actions start timer and stop timer are in conflict because they both have a timer effect as indicated at their intersection. Some conflicts are non-obvious (e.g. add caller and add medium). Detailed study by a domain expert confirmed that all conflicts discovered are real, and that no conflicts had been missed. No changes were therefore needed in the analysis.
Figure 7. Internal Conflicts identified by RECAP for APPEL (conflict effects: 1 channel, 2 file, 3 timer, 4 variable). The figure is a matrix over the internal actions log event, restart timer, send message, set variable, start timer, stop timer and unset variable; the conflicting pairs and their effects are: log event/log event (2); send message/send message (1); restart timer/restart timer, restart timer/start timer, restart timer/stop timer, start timer/start timer, start timer/stop timer, stop timer/stop timer (3); set variable/set variable, set variable/unset variable, unset variable/unset variable (4).
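The entries of figure 7 follow directly from the effects in figure 3: two internal actions conflict exactly when their effect sets overlap. The following Java sketch recomputes the conflicting pairs from that data; it is an illustration, not the RECAP source.

  import java.util.*;

  // Sketch: rebuild the internal conflict pairs of figure 7 from the action
  // effects of figure 3 (each internal action has a single effect).
  public class InternalConflicts {
      public static void main(String[] args) {
          Map<String, String> effectOf = new LinkedHashMap<>();
          effectOf.put("log event", "file");
          effectOf.put("restart timer", "timer");
          effectOf.put("send message", "channel");
          effectOf.put("set variable", "variable");
          effectOf.put("start timer", "timer");
          effectOf.put("stop timer", "timer");
          effectOf.put("unset variable", "variable");
          List<String> actions = new ArrayList<>(effectOf.keySet());
          for (int i = 0; i < actions.size(); i++) {
              for (int j = i; j < actions.size(); j++) {   // pairs, including self-conflicts
                  String a1 = actions.get(i), a2 = actions.get(j);
                  if (effectOf.get(a1).equals(effectOf.get(a2))) {
                      System.out.println(a1 + " / " + a2 + " : " + effectOf.get(a1));
                  }
              }
          }
      }
  }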
Call control actions deemed conflicting by RECAP are shown in figure 8. For simplicity, this figure shows conflicts between actions without parameters. In the tool, actions with enumerated parameter types are displayed and compared distinctly. Conflicts are numbered in the figure according to the underlying effect.
Detailed study by a domain expert confirmed that all detected conflicts but one are real, and that no conflicts have been missed. There is a possible problem in that confirm bandwidth is indicated to conflict with itself due to a shared bandwidth effect. This could indeed be an error, as it might lead to bandwidth being allocated twice. As it happens, in the ACCENT system it is harmless to confirm bandwidth twice. Without human guidance, this action pair would be flagged as a conflict. It should be noted that the bandwidth effect is still required as it correctly identifies the conflict between confirm bandwidth and reject bandwidth.
Figure 8. Call Control Conflicts identified by RECAP for APPEL (conflict effects: 1 availability, 2 bandwidth, 3 call, 4 medium, 5 party, 6 presence, 7 privacy, 8 route). The figure is a matrix over the actions add caller, add medium, add party, confirm bandwidth, connect to, fork to, forward to, note availability, note presence, play clip, reject bandwidth, reject call, remove medium and remove party; the conflicting pairs and their effects are: add caller/add caller (5,7); add caller/add medium (7); add caller/add party (5,7); add caller/remove party (5); add medium/add medium (4,7); add medium/add party (7); add medium/play clip (4); add medium/remove medium (4); add party/add party (5,7); add party/remove party (5); confirm bandwidth/confirm bandwidth (2); confirm bandwidth/reject bandwidth (2); connect to/connect to (8); connect to/fork to (8); connect to/forward to (8); fork to/fork to (8); fork to/forward to (8); forward to/forward to (8); note availability/note availability (1); note presence/note presence (6); play clip/play clip (4); play clip/remove medium (4); reject bandwidth/reject bandwidth (2); reject call/reject call (3); remove medium/remove medium (4); remove party/remove party (5).
As demonstrated by figures 7 and 8, the automated conflict analysis (for call control) is very accurate. However, it confirms that human guidance is still needed in a small number of cases. RECAP is mainly intended to analyse conflicts when a domain policy language is initially defined, using an ontology as the source of action effects. This initial analysis is saved to file and can subsequently be reloaded into the tool. This saves the user and the tool from having to repeat a prior analysis, particularly if the user has manually modified the conflict list.
5.2. Automated Support for Resolution

RECAP turns the conflict list into a set of outline APPEL resolution policies that define the detection part of conflict handling. These policies define the conflicting triggers and parameter conditions, but resolution actions must be completed manually. The policies are automatically uploaded to the policy system, where the wizard is used to define the resolutions. Conversely, RECAP reads existing resolution policies and annotates the matrix with conflicts derived from these. This is a useful feature which allows conflicts defined manually via the policy wizard to be used in conjunction with conflicts identified by RECAP.

Resolution policies can be simple or complex, specific or generic, and dependent on many factors including the conflicting policies and their parameters. One or more actions may be required of a resolution. See [13] for a list of typical resolution policies. As an example, suppose one party wishes to add video to the call with add medium(video), while the other party wishes to conference in a third person with add party(person). This is correctly flagged as a conflict since the third party would be able to view the call parties and their workplaces (affecting privacy). Using human judgment, it might be decided to allow video and the third party. However, someone (e.g. a manager) should be included in the call to oversee it.

In view of this complexity, RECAP generates only outline resolution policies that specify default policy attributes, triggers corresponding to the conflicting actions, and default actions to resolve the conflict. The outline resolutions are then uploaded and customised using the wizard as normal. Resolution policy editing is dealt with by the wizard and not by RECAP. This allows RECAP to remain domain-independent and not be constrained to a particular resolution technique or policy language. An additional advantage is that resolution policies are then edited through the same interface as regular domain policies.

All default resolution parameters are defined by a properties file, and can therefore be readily modified according to local practice. The properties file allows any structural components of outline resolutions to be altered. Resolution policies are normally disabled on upload. This ensures they are ignored by the policy server until they have been edited to include a specific resolution. This prevents incomplete or inconsistent resolutions from being used accidentally.

RECAP could be given a more user-friendly interface to change the default resolution policy structure and parameters. Currently this is achieved by manually editing the properties file. Although the tool is mainly intended for use during definition of a new application domain, there could be some value in easing later changes.

Policies in general are distinguished by unique identifiers, typically some phrase chosen by the user. Resolution policies automatically created by RECAP have machine-generated (but human-usable) identifiers. If the identifier of such a policy is changed manually, this could lead to duplication. The tool could detect this situation by looking for overlap of resolution triggers and conditions.
6. Conclusion

A technique and a tool have been introduced for (semi-)automated filtering of conflict-prone policies. Ontologies have been used to model the core and domain-specific aspects
of APPEL – for regular as well as resolution policies. Conflicts between policy actions are handled in ACCENT by resolution policies. Action effects defined in ontologies allow conflicting action pairs to be discovered as potential conflicts. As has been seen, the analysis leads to very accurate results (for call control). Nonetheless, RECAP allows potential conflicts to be refined manually since a fully automated approach is impossible due to the complexity and subtlety of policy interactions. Following filtering, outline resolution policies are generated and uploaded for completion with the policy wizard.

RECAP offers an automated approach to conflict analysis and resolution where previously this was achieved manually. This has improved the scalability of APPEL, and has substantially reduced the time and complexity of dealing with conflicts. Associating actions with their effects is very simple compared to formal methods, but yields very good results. The straightforward and domain-oriented approach is much less expensive to use than one that requires a complete formal model.

RECAP provides a way of visually identifying conflicts within an arbitrary collection of policy actions. Unlike many existing approaches and tools, policies in any domain may be analysed easily by RECAP, and not just those for call control. The tool is also useful for policy applications where action parameters play a bigger role.

RECAP has been designed for stand-alone use. Although conflict data is mainly expected to derive from an ontology, conflict information may be input from a local file. Consequently, data generated by other tools or systems may be used by RECAP for conflict filtering. The only requirement is knowledge of the conflict data format used. Although RECAP is aimed at filtering conflicts in the initial stages of specifying a new policy language, it may be used in later revisions of the language to refine conflicts and to generate resolutions.

Acknowledgements

The authors thank their colleagues Stephan Reiff-Marganiec (now at the University of Leicester) and Lynne Blair (who was on leave from Lancaster University during the development of ACCENT). Both contributed substantially to the design of the policy system that lies at the foundation of the work reported in this paper. Gavin Campbell's work on the PROSEN project was supported by grant C014804 from the UK Engineering and Physical Sciences Research Council.

References
[1] J. Chomicki, Jorge Lobo, and S. Naqvi. A logical programming approach to conflict resolution in policy management. In Anthony G. Cohn, Fausto Giunchiglia, and Bart Selman, editors, Proc. Principles of Knowledge Representation and Reasoning, pages 121–132. Morgan Kaufmann, 2000.
[2] Emil C. Lupu and Morris Sloman. Conflict analysis for management policies. In Proc. 5th. International Symposium on Integrated Network Management, pages 430–443. Chapman-Hall, London, UK, 1997.
[3] Kristofer Kimbler. Addressing the interaction problem at the enterprise level. In Petre Dini, Raouf Boutaba, and Luigi M. S. Logrippo, editors, Proc. 4th. International Workshop on Feature Interactions in Telecommunication Networks, pages 13–22. IOS Press, Amsterdam, Netherlands, June 1997.
[4] Dirk O. Keck. A tool for the identification of interaction-prone call scenarios. In Kristofer Kimbler and Wiet Bouma, editors, Proc. 5th. Feature Interactions in Telecommunications and Software Systems, pages 276–290. IOS Press, Amsterdam, Netherlands, September 1998.
[5] Amy P. Felty and Kedar S. Namjoshi. Feature specification and automated conflict detection. ACM Transactions on Software Engineering and Methodology, 12(1):3–27, January 2003.
[6] Masahide Nakamura, Tohru Kikuno, J. Hassine, and Luigi M. S. Logrippo. Feature interaction filtering with Use Case Maps at requirements stage. In Muffy H. Calder and Evan H. Magill, editors, Proc. 6th. Feature Interactions in Telecommunications and Software Systems, pages 163–178. IOS Press, Amsterdam, Netherlands, May 2000.
[7] Xiaotao Wu and Henning Schulzrinne. Handling feature interactions in language for end systems services. Computer Networks, 51:515–535, January 2007.
[8] Carlo Montangero, Stephan Reiff-Marganiec, and Laura Semini. Logic based detection of conflicts in APPEL policies. In Ali Movaghar and Jan Rutten, editors, Proc. Int. Symposium on Fundamentals of Software Engineering. Springer, Berlin, Germany, February 2007.
[9] Ahmed F. Layouni, Luigi Logrippo, and Kenneth J. Turner. Conflict detection in call control using first-order logic model checking. In Farid Ouabdesselam and Lydie du Bousquet, editors, Proc. 9th. Feature Interactions in Telecommunications and Software Systems. Springer, Berlin, Germany, July 2007. In press.
[10] Kenneth J. Turner. The ACCENT policy wizard. Technical Report CSM-166, Department of Computing Science and Mathematics, University of Stirling, UK, December 2005.
[11] Kenneth J. Turner, Stephan Reiff-Marganiec, Lynne Blair, Jianxiong Pang, Tom Gray, Peter Perry, and Joe Ireland. Policy support for call control. Computer Standards and Interfaces, 28(6):635–649, June 2006.
[12] Stephan Reiff-Marganiec, Kenneth J. Turner, and Lynne Blair. APPEL: The ACCENT project policy environment/language. Technical Report CSM-161, Department of Computing Science and Mathematics, University of Stirling, UK, December 2005.
[13] Kenneth J. Turner and Lynne Blair. Policies and conflicts in call control. Computer Networks, 51(2):496–514, February 2007.
[14] N. F. Noy and D. L. McGuinness. Ontology development 101: A guide to creating your first ontology. Technical Report KSL-01-05, Stanford Knowledge Systems Laboratory, Stanford, USA, March 2001.
[15] World Wide Web Consortium. Web Ontology Language (OWL) – Reference. Version 1.0. World Wide Web Consortium, Geneva, Switzerland, February 2004.
[16] Gavin A. Campbell. Ontology for call control. Technical Report CSM-170, Department of Computing Science and Mathematics, University of Stirling, UK, June 2006.
Towards Feature Interactions in Business Processes

Stephen GORTON a,b and Stephan REIFF-MARGANIEC b
a ATX Technologies Ltd, MLS Business Centres, 34-36 High Holborn, London WC1V 6AE, United Kingdom
b Department of Computer Science, University of Leicester, University Road, Leicester LE1 7RH, United Kingdom
email: {smg24,srm13}@le.ac.uk

Abstract. The feature interaction problem is generally associated with conflicting features causing undesirable effects. However, in this paper we report on a situation where the combination of features (as policies) and service-targeted business processes yields non-negative effects. We consider business processes as base systems and policies as a feature mechanism for defining user-centric requirements and system variability. The combination of business processes and a diverse range of policies leads to refinement of activities and possible reconfiguration of processes. We discuss the ways in which policies can interact with a business process and how these interactions are different from other approaches such as the classical view of POTS or telecommunications features. We also discuss the conflicts that can arise and potential resolutions.

Keywords. feature interaction, business processes, policy conflict, service oriented architecture
1. Introduction

Feature Interaction [5] was first identified as a problem in telecommunications, where additional units of functionality (features) would interfere with each other and cause unpredictable behaviour. Telecommunications have become increasingly complex, including the birth of Internet Telephony services. Removing the "Telephony" part, feature interaction has also been identified in Web Services [38,39,40,41]. Web Services [1] are an implementation of Service Oriented Architecture (SOA), where the design of systems shifts from overall development to the orchestration of services. At a more abstract level, workflows are used to specify business processes. Each task in a workflow represents a unit of activity and can be completed by using a service. Business Process Management (BPM) research has often reported that it pairs well with SOA to produce flexible business software solutions (e.g. [26]). Policies are generally agreed to be information that can modify a system's behaviour at runtime, without the need for recompilation or redeployment [19]. In
our work, we use policies to express essential requirements and system variability by combining them with workflows [12] – the latter essentially means treating policies as features. The combination of policies and workflows in the context of Service Oriented Computing (SOC) can lead to feature interactions. What has not been studied is the nature of these interactions. This paper discusses and classifies the types of interactions that can occur, while also showing how the interactions occurring here are different from those in the more traditional domain of telecommunications, and even the more recent domain of policies. Throughout the paper the terms conflict and interaction are used with specific meaning. Interaction will be used to describe points of contact between workflows and policies; interactions are by and large desirable and hence a positive thing. Conflict will be used for issues that arise between policies or when selecting services, which are usually undesirable. The only exceptions are that when we consider the traditional area of policy conflict we will use the term policy conflict, and when considering traditional feature interaction we will use that term.

Overview. The remainder of this paper is structured as follows: Section 2 presents some background material on workflows, Service Oriented Computing and policies. Sections 3 and 4 contain the two major contributions. First we present an analysis of the differences and similarities of feature interaction in its more traditional contexts and in the context of workflows, and then consider the types of interaction that can occur in the new setting with suggestions towards solutions. The paper is rounded off with related work in Section 5 and a discussion in Section 6.
2. Background

There are three main ingredients to our work: workflows, Service Oriented Computing (SOC) and policies. Each serves a distinct purpose. Workflows describe the basic process model defining the main functionality. SOC is the foundation of the implementation. Policies augment the workflow to customise it to a particular user's preference. The combination of the three provides us with the Service-Targeted, Policy-Driven Workflow approach that we call StPowla. An overview showing the relations between the StPowla elements is shown in Figure 1. Policies are considered an overlay mechanism (including monitoring and enforcement) for business processes. Business processes are the workflow models viewed from the business or application domain. Often, the authors of such workflows are the end users, i.e. business analysts rather than software engineers. Business process models may then be mapped to more precise service models (e.g. SRML [8]) and then to concrete orchestration models. These are then mapped to services via platform-independent middleware.

2.1. Workflows

A workflow is a connected graph of activities, or tasks. Each task represents a unit of work that contributes to a wider goal. Workflows are the accepted mechanism
Figure 1. StPowla overall architecture.
for describing business processes, where a defined sequence of tasks contributes to the satisfaction of a business objective. Workflow description languages exist to describe processes in either code-based or graphic-based notations. Examples of the former include YAWL [33] and ebXML [32], while examples of the latter include BPMN [13] and UML Activity Diagrams [7]. In terms of SOC, BPEL [24] is the most widely accepted business process language, in that it can describe the process and orchestrate a number of services into the process. In our approach, a workflow is a core business model containing enough functional requirements to map each task to a service (Figure 2). At runtime, a task is automatically assigned a service. This assignment includes the discovery of the service, binding to it and invoking it (thus each task is a distinct computation from all other tasks). A workflow should have enough information attached to it to run successfully on its own. Policies are used to alter this process through either refinement or reconfiguration of the workflow. This kind of intervention is required either to maintain a current state (e.g. keep costs to a specified level) or to execute a different path of processing.
Figure 2. Tasks and their relation to services.
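To illustrate the runtime assignment described above (and depicted in Figure 2), the following Java fragment sketches one possible reading: a task carries a functional requirement, a matching service is discovered from a registry, and that service is then bound and invoked. The class and method names are our own illustrative assumptions, not part of StPowla.

// Hypothetical sketch of runtime task-to-service assignment (discover, bind, invoke).
// All names are illustrative; StPowla itself does not prescribe this API.
import java.util.List;
import java.util.Optional;

class TaskAssignmentSketch {
    record Task(String name, String functionalRequirement) {}
    interface Service {
        String provides();                 // functional capability offered
        String invoke(String input);       // invocation after binding
    }

    static Optional<Service> discover(List<Service> registry, Task task) {
        // naive match-making: the offered capability must equal the task's requirement
        return registry.stream()
                .filter(s -> s.provides().equals(task.functionalRequirement()))
                .findFirst();
    }

    public static void main(String[] args) {
        List<Service> registry = List.of(new Service() {
            public String provides() { return "getQuote"; }
            public String invoke(String input) { return "quote for " + input; }
        });
        Task task = new Task("requestQuote", "getQuote");
        discover(registry, task).ifPresentOrElse(
            s -> System.out.println(s.invoke("supplier A")),   // bind and invoke
            () -> System.out.println("no service satisfies the task requirements"));
    }
}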
The key difference between a workflow for a business process and a workflow for another purpose (e.g. telecoms or home networks) is time. Business processes execute over a long period of time (perhaps hours or days). They include error handling or compensation actions for the recovery of the workflow in the case of
failures. Also, policies may not have an effect immediately. Once triggered, they may take a period of time before the effects become evident.

2.2. Service Oriented Computing

In SOC, software exists as separate entities, developed in isolation as services that are loosely coupled, platform independent, composable and based on open standards. In addition, they may be discoverable and self-describing. A Web Service is discoverable through directory services such as UDDI [23], is self-describing through WSDL [36], is composable through a variety of mechanisms (the de facto standard is WS-BPEL [24]) and is based on XML as the open standard (e.g. messaging is often done through SOAP [35]). Services are key to this work. They are reusable software components that take part in a wider process. Our aim is to develop a policy-driven process model that is satisfied by services. Thus, the author of a process is not expected to write functional code, but rather to specify enough requirements such that they can be mapped to an existing service (we note that there is the need for syntactic match-making between the process author and services). Services provide agility to processes in that a system is no longer confined to one individual implementing component. One service can be substituted for another, provided it takes the same inputs and returns the same type of outputs. IBM's business model is based on the Service Component Architecture (SCA) [16], which is based on SOA. A client's requirements are satisfied by a composition of IBM's services (if a service doesn't exist, they create it), thus the product supplied to a client is actually a composition 1.

2.3. Policies

Policies are end-user defined rules for the management of a system. Our policies are either Event-Condition-Action rules (ECAs), or goals (e.g. constraint rules). Policies are a proven integrated software management technique. They force a system into dynamic behaviour as the system must react to given rules at runtime. Policies can be added incrementally, with (theoretically) no limit on the number that can be applied at once. However, we do note that the probability of policy conflict grows as the number of policies increases. A policy conflict occurs when two or more policies contradict each other in terms of what the system is instructed to do or what state it should maintain. There are broadly three categories of conflict: goal conflicts, functional conflicts and combined goal/function conflicts. A goal conflict is when two goals are in contradiction of each other. A functional conflict is when two policies state two different (non-compatible) paths of system execution. A combined conflict occurs when a functional policy chooses a system execution path that would violate a goal. We use the Appel policy description language [28] to define our policies. Appel is an XML-based language, which has recently gained formal semantics via a mapping to ΔDSTL [20] (formerly Appel only had a natural language

1 Keith Goodman's (IBM) recent keynote at IM2007.
semantics). It was developed initially as a call control language for the Internet Telephony domain, but is based on a core language with domain specialisations. In our research, we are working towards a customisation of Appel as a policy mechanism for service-oriented business modelling. This requires some knowledge of the target domain through ontologies.
3. Feature interactions in SOC workflows

Feature interactions in the context of policies applied to workflows show all the characteristics of traditional feature interaction, especially that they may hinder advancement of the system at runtime or at least violate user expectations. However, they differ in two significant aspects: one being an assumption about the knowledge of the main system available to policy designers and the other an assumption about the lifespan of the effect of an action. This section discusses both in more detail.

3.1. Details of Base System

Considering traditional feature interaction, e.g. in the domain of telecommunications, we notice that there are two fundamental components: a base system and the features. In this domain features have been written by programmers with a sound knowledge of the base system and in general one would always expect a feature deployed on a base system to work correctly in the absence of other features. This notion is fundamental in the definition of feature interaction: if a feature f1 satisfies a property φ1 (written f1 |= φ1) and f2 |= φ2, a feature interaction is said to occur if, when the features are composed (denoted f1 ⊕ f2), we do not have f1 ⊕ f2 |= φ1 ∧ φ2. We have argued [27] that in the context of policy conflict there is no explicit base system and that conflict emerges between a number of policies. This proved fruitful for addressing policy conflict in a structured way [20]. Considering workflows we are in a setting that differs from both of the above: the workflow presents a base system onto which policies are applied – however, the authors of policies do not need to be aware of the workflow (it would help if they were). For example, a business might change its overarching business policy regarding communication by email for security purposes, not realising that several of the workflows that are conducted within the business rely to some extent on email communication. It is clear that we have a number of stakeholders in this setting, some that are involved in formulating the business process and some that are involved in formulating policies applicable to the same. Clearly this breaks the fundamental assumption in feature interaction that a feature will operate as expected if it is put together with the base system.

3.2. Future Effects

In traditional telecommunications systems the effect of a feature is relatively immediate: that is, when a feature gets invoked it will perform some action. This has been used extensively in the approach by Calder et al. [4] where a feature
manager was exploring the next responses and would use a commit and rollback mechanism to select a solution. In particular, reorderings of features were explored in their work to allow for the fact that executing A followed by B might be acceptable whereas B followed by A leads to a conflict. Business processes differ fundamentally in this aspect in that the execution of a service might be performed over long periods of time. Using compensation actions (as described using the Sagas calculus in [3]) for workflow recovery, these business processes are regarded as long running transactions (LRTs). The effect of an LRT is that a feature that has no initial effect may have an effect sometime in the future. Let us consider a simple market trader. Their core business process involves selling products to buyers, reflected in a workflow whose details we can omit here. The trader adds two policies to their business model: the first specifies that new stock from a supplier is ordered once a trade has occurred and the second negotiates the price with the supplier of the product that was just sold. By adding these policies to the workflow the business process can be streamlined. An analysis of the example shows that if the reordering feature executes first, then the trader reorders supply at a previously agreed price. The second feature is then activated and a new price is negotiated. While the new price will not have any effect on this transaction (since the purchase has already been made), it will have an effect on future transactions. If the price negotiation feature executes first, then the price is renegotiated and then the reordering occurs with the new price. This example highlights very clearly that the order of execution of the policies matters – something that was to be expected and is very much in line with the observation from [4]. However, what is novel is the fact that the result of the execution of the policies, in either order, has a lasting effect on a future transaction.
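To make the lasting effect concrete, the following small Java sketch simulates the two trader policies reacting to a single trade event in both orders. The policy names, the prices and the SupplierAccount state are our own illustrative assumptions, not part of StPowla; the point is only that the order of execution changes the state that later transactions will see.

// A small simulation (our own illustration, not part of StPowla) of the market
// trader example: the same trade event handled by the two policies in both orders.
import java.util.List;

class TraderPolicyOrdering {
    // State that outlives a single transaction (hypothetical prices).
    static class SupplierAccount {
        double agreedPrice = 10.0;   // price currently agreed with the supplier
        double lastOrderPrice = 0.0; // price actually paid on the most recent reorder
    }

    interface Policy { void onTradeCompleted(SupplierAccount s); }

    static final Policy REORDER_STOCK = s -> s.lastOrderPrice = s.agreedPrice;    // reorder at the agreed price
    static final Policy RENEGOTIATE   = s -> s.agreedPrice = s.agreedPrice * 0.9; // negotiate a 10% discount

    static void run(String label, List<Policy> order) {
        SupplierAccount supplier = new SupplierAccount();
        order.forEach(p -> p.onTradeCompleted(supplier));   // a single trade triggers both policies
        System.out.printf("%s: reordered at %.2f, agreed price carried forward %.2f%n",
                label, supplier.lastOrderPrice, supplier.agreedPrice);
    }

    public static void main(String[] args) {
        run("reorder then renegotiate", List.of(REORDER_STOCK, RENEGOTIATE));
        run("renegotiate then reorder", List.of(RENEGOTIATE, REORDER_STOCK));
    }
}

Running the sketch shows that the price paid on the immediate reorder depends on which policy fired first, while the renegotiated price persists in the supplier state either way and so shapes future transactions.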
4. Types of Interaction

When considering workflows that are enhanced with policies in the context of service oriented computing, we can distinguish three broad types of interactions: conflicts between a number of applicable policies, conflicts between policies and workflows, as well as conflicts in service level agreements. In this section we consider all three types and will show that they are, while of course all being interactions, different in the way they emerge and how they need to be dealt with.

4.1. Policy Conflict

Policy conflict occurs when two or more policies can be active at the same time and lead to conflicting actions being requested. Policy conflict has been defined in [27], where it was also pointed out that this definition is based on a specific application domain: only by considering a domain can a clear statement be made about which actions conflict. However, in addition there might be types of conflict that exist within the policies, independent of the application domain. These have
been discussed in detail in [27] and we have mentioned these in the background section. Let us consider an example: in a bank loan approval process the workflow has a task of making an offer followed by a task to vet the offer. Typical policies attached to this might be "the vetter must be different from the offer maker" and "managers might make offers and vet these". Clearly the execution of the second task (the vetting) might be allowed or blocked depending on how the two policies are interpreted. Cases like this have been discussed in the taxonomy in [27], and we can identify elements of Roles, Domain Entities and Policy Relation here: the employees, including the manager, have places in the domain hierarchy and also play specific roles (vetter, offer maker). Furthermore, one can argue that the policy allowing the manager to make offers and vet them is a refinement of the more general rule of having two people involved in the process. As the previous example shows, the conflicts between policies do not show any new characteristics; they do, however, continue to exist in this new domain. Detection and resolution methods fall into the categories discussed in [27], with a desire to detect and resolve as many issues as possible at design time, but keeping in mind that this is not always possible and hence that decisions will need to be taken at runtime. Generally, design-time methods will apply when the policies are under the control of the same person or the details of the overarching policies are known to the policy author (that is, e.g. within a group or enterprise); detection then involves some reasoning on a logical level (as e.g. in [20]) and resolution would involve policy redesign. However, if the workflow spans a range of businesses or the services are outsourced, then detection of conflicts will only be possible by runtime methods and resolution will usually involve negotiation or some other dynamic means. What is, however, interesting to note for the purpose of this paper is the aspect about domains that was not considered in the taxonomy. When considering the policies in relation to workflows, which themselves are implemented by services, we obtain several levels of 'application domain': on one hand we can consider the workflow to be the application domain, on the other hand the services can be seen as the application domain (and then one could further investigate which domain the services belong to). The next two subsections address these issues respectively.

4.2. Policies on Workflows

A policy has the ability to manipulate a workflow in two ways. Firstly, it can refine the workflow by expressing further requirements for each task. An implementing service is restricted by all requirements in the task; thus the more requirements are stated, the more precise the service selection process becomes and the closer it is to the user's needs. Secondly, a policy can reconfigure a workflow. This involves stating rules for the insertion or deletion of process components. This second concept is explained in the following example:

Example 1. Consider a simple purchase process, where you request quotes from 3 suppliers and then you purchase from the cheapest. Suppose we add a policy that states "If the quote from A is below £100, cancel the other quotes and purchase
directly from A". If the price from A comes in below the given amount, then the workflow changes (Figure 3).

4.2.1. Refinement Policies

A refinement policy can be created by multiple stakeholders. This implies that a policy can be directed at different levels of process complexity. For example, an IT director may write a policy that overarches a set of processes whereas a project team member may write a policy solely for a single task of a process. Refinement is done through policies specifying constraints over tasks through SLA dimensions 2. This effectively enables stakeholder (or goal-based) conflicts, where different levels of stakeholders can add their own policies without realising they conflict with others. Furthermore, there is a need to specify priorities over policies.

Refinement Conflicts. Policy authors are already able to specify modalities (must, should, prefer and their negations), but in the case of two conflicting policies that both have the same modality (must in the worst case), a resolution is required. Possible solutions include the prioritisation of stakeholders: higher stakeholders have priority over lower stakeholders (e.g. directors over project team members). This method requires robust selection that will ensure that only specific stakeholders are allowed to create policies. Even then, policies should be agreed in advance and published to other stakeholders. In this situation, only generic policies (i.e. goals) can be expressed. Another solution is forced interactive negotiation. In this simple situation, two conflicting policy authors must be put in contact in order to negotiate and
Figure 3. Example 4.2 basic workflow
4.2.1. Refinement Policies A refinement policy can be created by multiple stakeholders. This implies that a policy can be directed at different levels of process complexity. For example, an IT director may write a policy that overarches a set of processes whereas a project team member may write a policy solely for a single task of a process. Refinement is done through policies specifying constraints over tasks through SLA dimensions2 . This effectively enables stakeholder (or goal-based) conflicts, where different levels of stakeholders can add their own policies, without realising they conflict with others. Furthermore, there is a need to specify priorities over policies. Refinement Conflicts Policy authors are already able to specify modalities (must, should, prefer and their negations), but in the case of two conflicting policies that both have the same modality (must in the worst case), then a resolution is required. Possible solutions include the prioritisation of stakeholders: higher stakeholders have priority over lower stakeholders (e.g. directors over project team members). This method requires robust selection that will ensure that only specific stakeholders are allowed to create policies. Even then, policies should be agreed in advance and published to other stakeholders. In this situation, only generic policies (i.e. goals) can be expressed. Another solution is forced interactive negotiation. In this simple situation, two conflicting policy authors must be put in contact in order to negotiate and 2 more
information in section 4.3
find a resolution between their conflicting policies. Intuitively, this is not a good solution if the end user wishes to have an automated process.

4.2.2. ECAs

ECA rules can also specify goal constraints or functional rules. In either case, ECA rules need triggers. We have identified, through the mapping of services to tasks, the following events that are applicable:

Workflow entry/completion/failure: Policies may be applied at the workflow level (including sets of workflows). This level includes the commencement of a workflow, its successful completion and an abnormal completion with no compensation, i.e. an error result.

Task entry/completion/failure: Similar to the workflow level but this time based on tasks. A task failure does not imply a workflow failure, but instead a choice of control flow outputs from the task.

Service entry/completion/failure: Again similar to the previous, but based on services. A service failure does not imply a failed task as a policy here can recover the task processing. Conversely, a service success can theoretically lead to task failure.

It is our opinion that these are the most relevant and interesting triggers in a workflow from a control-flow perspective. A service is a black box, thus we cannot see inside it to recognise any triggers. Conversely, the workflow is the highest level at which we can inspect the system, since all policies can be applied no higher than this. We do, however, recognise that there may be further triggers available, especially if one considers data, constraints and resources, which are out of the scope of this paper. To demonstrate the use of trigger points and error handling (with policies) in a long running transaction, we use a simple example as follows:

Example 2. Consider a workflow to make a drink and then consume it, plus a separate workflow to purchase coffee granules (Figure 4). The workflow is augmented with policies that state: "if it is morning, I would like a coffee. Otherwise I would like tea"; "if there is no coffee, I would like tea"; and "if there is no coffee or tea, buy some more coffee granules". The time of day is thus important to the final objective of the initial task (makeCoffee or makeTea), but of small significance in this example (we include it to make a point about time being a factor in business processes). If it is morning, we will try makeCoffee. If this fails, we will try makeTea. If this succeeds, then the task completes successfully. If not, then we execute the extra workflow to purchase coffee granules. If this completes successfully, we can go back and try makeCoffee, which will hopefully work now. Otherwise, should this extra workflow fail, then the main task makeDrink has not been compensated and the task ends in an error state.
Figure 4. Workflow for making and consuming a drink.
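As a rough sketch of how the three policies of Example 2 hang together around task-failure triggers, consider the following Java fragment. The task names follow the example; the boolean availability flags and the control structure are our own simplification of the ECA behaviour, not the StPowla runtime.

// Simplified reading of Example 2 (our assumption, not the StPowla runtime):
// task failures trigger the next applicable policy.
class MakeDrinkSketch {
    static boolean morning = true;
    static boolean coffeeAvailable = false;
    static boolean teaAvailable = false;

    static boolean makeCoffee() { return coffeeAvailable; }
    static boolean makeTea()    { return teaAvailable; }
    static boolean buyCoffeeGranules() { coffeeAvailable = true; return true; } // separate workflow

    // Policy 1: if it is morning, try coffee, otherwise tea.
    // Policy 2: if there is no coffee, try tea (triggered by makeCoffee failing).
    // Policy 3: if there is no coffee or tea, run the purchase workflow and retry.
    static boolean makeDrink() {
        boolean ok = morning ? makeCoffee() : makeTea();
        if (!ok && morning) ok = makeTea();                  // policy 2 on task failure
        if (!ok && buyCoffeeGranules()) ok = makeCoffee();   // policy 3, then retry makeCoffee
        return ok;                                           // false: makeDrink ends in an error state
    }

    public static void main(String[] args) {
        System.out.println(makeDrink() ? "drink made" : "makeDrink failed");
    }
}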
ECA Conflicts arise when an event triggers two incompatible policies (i.e. a functional policy conflict, or combined functional/goal conflicts). A functional conflict is one where at least two paths of execution exist, but only one can be chosen. In a state-based system such as a workflow, it exists when the current state is X and two transitions are implied by policies (P1: X −a→ Y and P2: X −b→ Z). This is an example of a shared trigger interaction. At design time, this can be detected if the policy triggers are not dependent on runtime information. Otherwise, an online detection and resolution method is required. This may include priority sequences as an offline solution or user interaction as an online solution.

Missed trigger interactions occur when a policy forces a workflow reconfiguration, and this avoids the desirable effects of another policy. For example, the cancelling of a doctor's appointment may also inadvertently cancel the task of picking up a prescription, since the journey to the surgery is not made.

Sequential action interactions occur when one policy triggers another. For example, we define a simple fail() function that declares a task to have completed abnormally. By calling this function, we might trigger any policies that exist whose trigger is the current task's failure, even if this is not what was desired (e.g. the failure policy may try to compensate but we might not want that if we have explicitly declared the task to have failed).

Looping interactions occur when one policy triggers another, which triggers the first, etc. Again, provided that the policies are not based on runtime information, these can be avoided at design time. Otherwise, it is difficult to detect and resolve any loops, especially if runtime information is due to change.

4.3. Service Selection

Each task inside a workflow has a functional requirement description. In addition it has a default policy. This policy is represented as follows:

    appliesTo taskId when task_entry do req(main, Inv, SLA)

The function req takes three parameters: main is the functional requirement of the task, Inv is the service invocation parameters and SLA is a set of Service Level Agreement (SLA) dimensions. It essentially says that when a task is reached in the control flow, it should execute according to the stated requirements (including finding, binding to and invoking a service).
The primary basis of service selection is the functional requirement. The secondary basis is SLA dimensions. In this set, the policy author can add various non-functional requirements of the service, provided they are measurable in some meaningful way. For example, consider a task makeCoffee with a particular requirement that the served cup should be warm. Then, the policy would refine service selection as follows:

    appliesTo makeCoffee when task_entry do req(main, Inv, [cupTemperature="warm"]) 3

Furthermore, since policies can be added incrementally, Appel includes composition operators such that policies can be added at runtime 4. Therefore, many policies stating many different SLA dimensions can be added to even a single task. This is also a method for adding general SLA dimensions across workflows.

SLA Conflicts are easily identifiable conflicts. If two or more SLA dimensions address the same service attribute and require different values, then a conflict may exist. These conflicts can be resolved by prioritisation of policies (perhaps the most specific policy first), or by the addition of policy strength indicators. Even then, with two policies conflicting and being as strong as each other, there is still a conflict and a need for resolution.

Brokerage services can lead to a feature interaction problem, under the auspices of an SLA conflict. For example, suppose a user does not wish to use service X. Instead at runtime, service Y is found, bound to and invoked. However, Y is a broker service and delegates its task to X, returning X's results to the user. The user is unaware of the involvement of X despite their requirement against using X, and thus a feature interaction has occurred. This situation is analogous to the traditional telecoms example of a feature interaction between Call Forwarding and Call Barring features. The most direct route to resolution in this case is to specify further SLA constraints that require a service to not be a broker, or to provide assurances that the SLA requirements are passed down the brokerage chain.

5. Related Work

There has been extensive literature published about policies. They are gaining increasing recognition from implementers as a tool for creating system variability. In addition, there is extensive literature on workflows. Whilst the business processes we discuss can be described in policies or workflows separately, the former method demands too much variable specification and the latter too much static specification.

3 We expect some knowledge of the service through ontology.
4 Policy composition algorithms are not used as they are design-time only solutions.
Feature Interactions have, to our knowledge, yet to be reported in the domain of business processes. Weiss et al. [38,39,40,41] have reported on feature interactions in Web Services, but in general the subject is confined to telecoms and other network systems. A more formal analysis of feature interaction in processes may lead to specification in a temporal logic in order to provide analysis such as in [6]. In [21], the authors do apply feature interaction to workflows. However, their approach does not take into account the instance-based changes via refinement and reconfiguration that we have considered here.

Workflow Specification. Apart from natural English, structured languages are often used for expressing processes. BPEL [24] is considered the de facto standard for SOA-based business processes, despite its initial purpose as a service composition language. More traditional workflow languages are more appropriate for modelling processes. YAWL [33] is a powerful workflow language with semantics based on Petri nets. There are alternatives, including SMAWL [31] and others. These solutions may be considered better in terms of describing processes since they abstract away composition details that would be included in those solutions previously discussed. However, they are unable to define high-level requirements for activities or events that occur in the workflow. As a sister approach to the code-based approaches, process calculi and Petri nets offer a formal method in which to express workflows as processes. The formalisms provide operational semantics allowing for reasoning about the process, as used in e.g. [15] and [9]. The most widely-accepted universal process notation for business processes is the Business Process Modelling Notation [13] (BPMN). This graphical notation also describes process flows, though somewhat more structured through the use of swimlanes to describe different roles in the process. One particular advantage of BPMN is that it can be used to model a BPEL process [42]. However, BPMN is still limited by its inability to express service selection criteria including non-functional service properties [25].

Workflow Adaptation is normally viewed at the overall workflow level. Despite the reported need for flexibility in executing workflows, this is generally achieved through some process reengineering, such as in [18]. Workflow Patterns [17], a common tool for expressing frequently-occurring patterns in workflows, do allow a certain degree of adaptation. Of particular interest are the insert case and delete case patterns. We consider workflow patterns as relevant work, but differences exist between their offline design nature and our online approach to analysis of feature interactions and workflow configuration.

Policies are descriptive and essentially provide information that is used to adapt the behaviour of a system. Most work deals with declarative policies. Examples are the formalisms to define access control policies and to detect conflicts [30,14]; formalisms for modelling the more general notion of usage control [43]; and formalisms for Service Level Agreements, i.e. to specify client requirements and service guarantees, and to sign a contract with an agreement between them [22]. RuleML (www.ruleml.org) is a language for rule-based and knowledge-based systems, and allows Web-based rule storage, retrieval and interchange. Like Appel,
it is XML-based and allows for the definition of ECA rules (note that for readability we have not used Appel's XML syntax in this paper). These rules can be translated through XSL transformations, depending on the application being used. None of this has been linked to workflows; there has been an initial discussion on linking policies with workflows, presenting the fundamental ideas in [11,10].

Workflows and Policies are combined by Wang [37] in the Policy-driven Process Design (PPD) methodology. Policies are linked to workflows by extracting processes from real business policies and using a common logic to unify them. However, the work is more focused towards extracting new processes, rather than affecting current ones. Though Wang does mention insertion and deletion with respect to control flows, it is only in an overview of the effects on all aspects of workflows (including constraints, data and resources). Furthermore, Wang makes no use of Service Oriented Architecture. Verlaenen et al. [34] have a similar approach, in that policies are used to change workflows. However, the authors use a weaver and policy composition algorithm, indicating an offline approach. Our work specifically addresses the online setting.
6. Summary and Further Work

In this paper we considered interactions in the context of Service Oriented Computing – in particular we considered systems that are described by a workflow that is subject to a number of policies capturing variability. The tasks in the workflow are implemented by services. The two main contributions are a description of the problem domain and a classification of conflicts in that domain. With respect to the former, we identified differences in two major aspects with respect to traditional FI settings: the role of the base system and the longevity of effects of policies. With regard to the latter, we presented three classes of interactions: one between policies (policy conflict), one between policies and workflows and one dependent on service selection. Future work includes the formalisation of StPowla, that is the development of a formal semantics for the workflow part, which will allow the conflict reasoning techniques for Appel to be extended to the interaction of policies and workflows.
Acknowledgements

This work has been partially sponsored by the project SENSORIA, IST-2005-016004.
References

[1] Gustavo Alonso, Fabio Casati, Harumi Kuno, and Vijay Machiraju. Web Services: Concepts, Architectures and Applications. Springer-Verlag, 2003.
[2] Daniel Amyot and Luigi Logrippo, editors. Feature Interactions in Telecommunications and Software Systems VII, June 11-13, 2003, Ottawa, Canada. IOS Press, 2003.
[3] Roberto Bruni, Hernán C. Melgratti, and Ugo Montanari. Theoretical foundations for compensations in flow composition languages. In Jens Palsberg and Martín Abadi, editors, POPL, pages 209–220. ACM, 2005.
[4] Muffy Calder, Mario Kolberg, Evan H. Magill, Dave Marples, and Stephan Reiff-Marganiec. Hybrid solutions to the feature interaction problem. In Amyot and Logrippo [2], pages 295–312.
[5] Muffy Calder, Mario Kolberg, Evan H. Magill, and Stephan Reiff-Marganiec. Feature interaction: a critical review and considered forecast. Computer Networks, 41(1):115–141, 2003.
[6] Muffy Calder and Alice Miller. Using SPIN for feature interaction analysis – a case study. In Matthew B. Dwyer, editor, SPIN, volume 2057 of Lecture Notes in Computer Science, pages 143–162. Springer, 2001.
[7] Marlon Dumas and Arthur H. M. ter Hofstede. UML activity diagrams as a workflow specification language. In Martin Gogolla and Cris Kobryn, editors, UML, volume 2185 of Lecture Notes in Computer Science, pages 76–90. Springer, 2001.
[8] José Luiz Fiadeiro, Antónia Lopes, and Laura Bocchi. A formal approach to service component architecture. In Mario Bravetti, Manuel Núñez, and Gianluigi Zavattaro, editors, WS-FM, volume 4184 of Lecture Notes in Computer Science, pages 193–213. Springer, 2006.
[9] Xiang Fu, Tevfik Bultan, and Jianwen Su. Formal verification of e-services and workflows. In C. Bussler, R. Hull, S. A. McIlraith, M. E. Orlowska, B. Pernici, and J. Yang, editors, WES, volume 2512 of Lecture Notes in Computer Science, pages 188–202. Springer, 2002.
[10] Stephen Gorton and Stephan Reiff-Marganiec. Policy support for business-oriented web service management. In J. Alfredo Sánchez, editor, LA-WEB, pages 199–202. IEEE Computer Society, 2006.
[11] Stephen Gorton and Stephan Reiff-Marganiec. Towards a task-oriented, policy-driven business requirements specification for web services. In Schahram Dustdar, José Luiz Fiadeiro, and Amit P. Sheth, editors, Business Process Management, volume 4102 of Lecture Notes in Computer Science, pages 465–470. Springer, 2006.
[12] Stephen Gorton and Stephan Reiff-Marganiec. Policy driven business management over web services. In Rodosek and Aschenbrenner [29], pages 721–724.
[13] Object Management Group. Business Process Modelling Notation (BPMN) specification. http://www.bpmn.org, Feb 2006.
[14] Joseph Y. Halpern and Vicky Weissman. Using first-order logic to reason about policies. In CSFW, pages 187–201. IEEE Computer Society, 2003.
[15] Rachid Hamadi and Boualem Benatallah. A Petri net-based model for web service composition. In Klaus-Dieter Schewe and Xiaofang Zhou, editors, ADC, volume 17 of CRPIT, pages 191–200. Australian Computer Society, 2003.
[16] IBM. Service component architecture. http://www.ibm.com/developerworks/library/specification/ws-sca/, 2007. Last accessed 4 June 2007.
[17] Workflow Patterns Initiative. Workflow patterns, 2007. Accessed 24 July 2007.
[18] Beat Liver, Jeannette Braun, Beatrix Rentsch, and Peter Roth. Developing flexible service portals. In CEC '05: Proceedings of the Seventh IEEE International Conference on E-Commerce Technology, pages 570–573, Washington, DC, USA, 2005. IEEE Computer Society.
[19] Emil Lupu and Morris Sloman. Conflicts in policy-based distributed systems management. IEEE Trans. Software Eng., 25(6):852–869, 1999.
[20] Carlo Montangero, Stephan Reiff-Marganiec, and Laura Semini. Logic-based detection of conflicts in Appel policies. In FSEN 2007, Lecture Notes in Computer Science. Springer-Verlag, 2007.
[21] Y. C. Ngeow, D. Chieng, A. K. Mustapha, E. Goh, and H. K. Low. Web-based device workflow management engine. In MUE, pages 914–919. IEEE Computer Society, 2007.
[22] Rocco De Nicola, Marzia Buscemi, Laura Ferrari, Fabio Gadducci, Ivan Lanese, Roberto Lucchi, Ugo Montanari, and Emilio Tuosto. Process calculi and coordination languages with costs, priority and probability. SENSORIA Technical Report, 2006.
[23] OASIS. UDDI: Universal Description, Discovery and Integration. http://www.uddi.org, 2007. Last accessed 4 June 2007.
[24] OASIS. Web Services Business Process Execution Language. http://docs.oasis-open.org/wsbpel/2.0/wsbpel-v2.0.pdf, 2007. Last accessed 4 June 2007.
[25] J. O'Sullivan, D. Edmond, and A. H. M. ter Hofstede. Formal description of non-functional service properties. Technical Report FIT-TR-2005-01, Queensland University of Technology, Brisbane, Feb 2005.
[26] Nathaniel Palmer. BPM & SOA. http://aiim.org/article-docrep.asp?ID=30562, 2005. Last accessed 4 June 2007.
[27] Stephan Reiff-Marganiec and Kenneth J. Turner. Feature interaction in policies. Computer Networks, 45(5):569–584, 2004.
[28] Stephan Reiff-Marganiec, Kenneth J. Turner, and Lynne Blair. Appel: the ACCENT project policy environment/language. Technical Report TR-161, University of Stirling, 2005.
[29] Gabi Dreo Rodosek and Edgar Aschenbrenner, editors. IM2007: 10th IFIP/IEEE Symposium on Integrated Network Management. IEEE, 2007.
[30] François Siewe, Antonio Cau, and Hussein Zedan. A compositional framework for access control policies enforcement. In Michael Backes and David A. Basin, editors, FMSE, pages 32–42. ACM, 2003.
[31] Christian Stefansen. SMAWL: a small workflow language based on CCS. In Orlando Belo, Johann Eder, João Falcão e Cunha, and Oscar Pastor, editors, CAiSE Short Paper Proceedings, volume 161 of CEUR Workshop Proceedings. CEUR-WS.org, 2005.
[32] UN/CEFACT and OASIS. Electronic business using extensible markup language. http://www.ebxml.org/, 2007. Last accessed 4 June 2007.
[33] Wil M. P. van der Aalst and Arthur H. M. ter Hofstede. YAWL: yet another workflow language. Inf. Syst., 30(4):245–275, 2005.
[34] Kris Verlaenen, Bart De Win, and Wouter Joosen. Towards simplified specification of policies in different domains. In Rodosek and Aschenbrenner [29].
[35] W3C. SOAP. http://www.w3.org/TR/soap12-part1/, 2007. Last accessed 4 June 2007.
[36] W3C. WSDL: Web Service Description Language v2.0. http://www.w3.org/TR/wsdl20/, 2007. Last accessed 4 June 2007.
[37] Harry Jiannan Wang. A Logic-based Methodology for Business Process Analysis and Design: Linking Business Policies to Workflow Models. PhD thesis, University of Arizona, 2006.
[38] Michael Weiss. Feature interactions in web services. In Amyot and Logrippo [2], pages 149–158.
[39] Michael Weiss and Babak Esfandiari. On feature interactions among web services. In ICWS, pages 88–95. IEEE Computer Society, 2004.
[40] Michael Weiss, Babak Esfandiari, and Yun Luo. Towards a classification of web service feature interactions. In Boualem Benatallah, Fabio Casati, and Paolo Traverso, editors, ICSOC, volume 3826 of Lecture Notes in Computer Science, pages 101–114. Springer, 2005.
[41] Michael Weiss, Babak Esfandiari, and Yun Luo. Towards a classification of web service feature interactions. Computer Networks, 51(2):359–381, 2007.
[42] Stephen A. White. Using BPMN to model a BPEL process. BPTrends, 2005. http://www.bptrends.com, accessed on 15/03/06.
[43] Xinwen Zhang, Francesco Parisi-Presicce, Ravi S. Sandhu, and Jaehong Park. Formal model and policy specification of usage control. ACM Trans. Inf. Syst. Secur., 8(4):351–387, 2005.
Resolving Feature Interaction with Precedence Lists in the Feature Language Extensions

L. Yang, A. Chavan, K. Ramachandran, W. H. Leung 1
Computer Science Department, Illinois Institute of Technology, Chicago, IL 60616
Abstract. With existing general purpose programming languages, interacting features executed in the same process must be implemented by changing the code of one another [1]. The Feature Language Extensions (FLX) is a set of programming language constructs that enables the programmer to develop interacting features as separate and reusable program modules. Features are integrated and have their interactions resolved in feature packages. FLX provides the precedence list facilities for the programmer to specify the execution order of the features in a feature package. While not applicable in all situations, precedence lists can be used to resolve many interaction conditions in a single statement. This paper describes the two types of precedence lists supported by FLX and their usage. We give the contradiction conditions that may occur when multiple precedence lists are used in a feature package and show how to resolve them. Finally, we show that the two types of FLX precedence lists are primitive: they can be used to implement arbitrary precedence relations among features that do not exhibit contradictions. Keywords: Feature interaction, program entanglement, feature interaction resolution, reusable feature modules, Feature Language Extensions.
1 Introduction
In software engineering literature, the terms feature, aspect and concern are often used synonymously to denote certain functionality of a software system. For example, reliable data transport and congestion control are two features of the Internet TCP protocol. Features are implemented by computer programs. Two features interact if their behaviors change when their programs are integrated together. The behavior of a computer program is manifested in the sequence of program statements that gets executed and its output for a given input. Consider TCP again. Without congestion control, reliable data transport will retransmit when a duplicated acknowledgement is received. After congestion control is added, the same message may cause the sender to retreat to slow start. Thus these two features interact. The term feature interaction was 1
1 Corresponding Author: W. H. Leung, Computer Science, Illinois Institute of Technology, 10 West 31st Street, Chicago, Illinois 60616, USA; E-mail: [email protected].
coined by developers of telecommunications systems, but its occurrence is commonplace: when a software system evolves, it usually means that new features are added to the system, changing the behavior of existing features. We showed earlier [1] that if (C1) two features interact, (C2) they are executed by the same sequential process, and (C3) they are implemented by a programming language that requires the programmer to specify execution flows, then the programs of the two features will inevitably entangle in the same reusable program unit of the programming language. If the features do not interact, then program entanglement is not necessary. Program entanglement implies that features are implemented by changing the code of one another. Besides making it difficult to develop features, entangled programs are difficult to reuse, maintain and tailor to different user needs. And feature interaction is the root cause of program entanglement. (C1) and (C2) are generally dictated by the application, such as in the TCP examples given earlier. Today's general purpose programming languages require (C3). Existing TCP implementations are notoriously entangled (e.g. see [2]). It is not because the programmers lacked skill; they could not help it. The Feature Language Extensions (FLX) is a set of programming language constructs developed to solve the program entanglement problem. An FLX program unit consists of a condition part and a program body. The program body gets executed when its corresponding condition part becomes true. The programmer does not specify the execution flows of program units; hence FLX relaxes (C3). A feature is composed of a set of program units; it is designed according to a model instead of the code of other features. Features are integrated in a feature package. Features and feature packages are reusable. Different combinations of them can be packaged to meet different needs. We have added the foundation FLX constructs to Java. A research version of the FLX to Java compiler is downloadable from [3]. We call the conditions under which two interacting features change their behavior their interaction conditions, and the interaction is resolved with a specification of the new behavior. Presently, the programmer reads code to determine when the interaction conditions may become true, and changes code to resolve the interaction conditions. This is a labor-intensive and error-prone process, and a main reason why software development is complex. Due to the way that the FLX compiler generates code, two program units written in FLX interact if the conjunction of their condition parts is satisfiable, or equivalently, if the condition parts of the two program units can become true at the same time. Two features interact when some of their program units interact. The satisfiable condition is their interaction condition. Several other researchers have constructed systems with this property (e.g. see [15]). As we shall see later, the condition part of a program unit is a set of quantifier-free first-order predicate formulas. Detecting feature interaction in programs written in FLX then requires an algorithm, often called a satisfiability solver, which determines the satisfiability of such formulas. The first-order predicate satisfiability solver of FLX does not require the iterations of trial and error incurred in prior art and is overviewed in [4]. This paper focuses on using FLX to integrate features and resolve their interaction without changing their code.
In particular, we discuss usage of the precedence list facilities provided by FLX.
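The idea that two program units interact exactly when their condition parts can be true at the same time can be illustrated with a deliberately naive check. The Java sketch below represents condition parts as predicates over a small enumerated state space and searches for a state satisfying their conjunction; FLX itself uses a first-order satisfiability solver [4], so this brute-force search is our simplification, and the feature conditions are invented, not taken from FLX code.

// Naive illustration (our simplification, not the FLX solver): two condition
// parts "interact" if some state satisfies their conjunction.
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;

class InteractionConditionSketch {
    record PhoneState(boolean called, boolean doNotDisturb) {}

    // Hypothetical condition parts of two program units.
    static final Predicate<PhoneState> POTS_RING  = s -> s.called();
    static final Predicate<PhoneState> DND_REJECT = s -> s.called() && s.doNotDisturb();

    static Optional<PhoneState> interactionWitness(Predicate<PhoneState> c1, Predicate<PhoneState> c2) {
        List<PhoneState> states = List.of(
            new PhoneState(false, false), new PhoneState(false, true),
            new PhoneState(true, false),  new PhoneState(true, true));
        return states.stream().filter(c1.and(c2)).findFirst(); // a satisfying state, if any
    }

    public static void main(String[] args) {
        System.out.println("interaction condition witness: "
                + interactionWitness(POTS_RING, DND_REJECT).orElse(null));
    }
}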
A precedence list establishes a strict partial ordering 2 among a set of features in a feature package. FLX supports two types of precedence lists: a straight precedence list specifies that, if the interaction condition for some of the features becomes true, the programs of the features with higher precedence will get executed before the programs of features with lower precedence; and a priority precedence list specifies that only the program unit belonging to the feature with the highest precedence will get executed. The precedence list is a powerful facility. For example, in a telephony application written in FLX, the feature DoNotDisturb interacts with the plain old telephone service (POTS) whenever the phone is called. The interaction conditions of the two features are resolved in a single precedence list statement in a feature package. One of the authors came from the telecommunication industry and was involved in the development of DoNotDisturb in a production digital switch. The programmers in that project needed to go through hundreds of thousands of lines of code to find several hundred places to insert code for the feature. Later, as new features were added to the system, they had to remember to include the code for the feature. We first introduced precedence lists in [5]. A more detailed discussion is given in this paper. We review briefly the FLX constructs to specify features and feature packages in Section 2. In Section 3, we describe the two different types of precedence lists implemented in FLX. We also show there that precedence lists alone are not sufficient in certain situations. When that happens, the interaction condition is resolved by program units in the feature package. In Section 4, we discuss the integration of multiple precedence lists. This can happen, for example, when two feature packages, each with its own precedence list, are integrated in a feature package. Multiple precedence lists can lead to contradictions that need to be resolved. In the same section, we introduce the compound precedence statement which specifies the precedence relations among precedence lists. It is a shorthand for multiple precedence lists. In Section 5, we show that the two types of precedence lists supported by FLX are primitive in the sense that they can be used to specify arbitrary precedence relationships that do not contain contradictions. We review related work in Section 6. Our method to integrate interacting features without changing feature code appears to be new. The use of precedence lists as language mechanisms to resolve interaction is also new. We conclude the paper in Section 7.
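One way to read the informal description of the two list types is sketched below in Java: given the set of features whose interaction condition currently holds, a straight precedence list orders the execution of all of them, whereas a priority precedence list selects only the highest-precedence one. This is our paraphrase of the description above, not the FLX compiler's actual mechanism, and the feature names are invented.

// Illustrative reading of straight vs. priority precedence lists (not FLX itself).
import java.util.ArrayList;
import java.util.List;

class PrecedenceListSketch {
    // Precedence list, highest precedence first (invented feature names).
    static final List<String> PRECEDENCE = List.of("DoNotDisturb", "CallForwarding", "POTS");

    /** Straight list: all active features run, higher precedence first. */
    static List<String> straight(List<String> active) {
        List<String> order = new ArrayList<>();
        for (String f : PRECEDENCE) if (active.contains(f)) order.add(f);
        return order;
    }

    /** Priority list: only the active feature with the highest precedence runs. */
    static String priority(List<String> active) {
        for (String f : PRECEDENCE) if (active.contains(f)) return f;
        return null;
    }

    public static void main(String[] args) {
        List<String> active = List.of("POTS", "DoNotDisturb"); // interaction condition holds for both
        System.out.println("straight order: " + straight(active));  // [DoNotDisturb, POTS]
        System.out.println("priority winner: " + priority(active)); // DoNotDisturb
    }
}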
2 Some FLX basics

FLX supports the view that complex software should be organized as a collection of components, and FLX is meant for the development of feature-rich components called feature packages. In a telephone system developed using FLX, each telephone object is associated with two feature packages: a call processing feature package for features like call forwarding, and a digit analysis feature package for features like speed calling. Different telephone objects can be associated with different feature packages containing different sets of features, or the set of features can be the same but the
2 A strict partial order is an irreflexive, asymmetric and transitive relation between two elements of a set, denoted by “