Lecture Notes in Computer Science Edited by G. Goos, J. Hartmanis and J. van Leeuwen
1844
Springer Berlin Heidelberg New York Barcelona Hong Kong London
Milan Paris Singapore Tokyo
William B. Frakes (Ed.)
Software Reuse: Advances in Software Reusability
6th International Conference, ICSR-6
Vienna, Austria, June 27-29, 2000
Proceedings
Springer
Series Editors
Gerhard Goos, Karlsruhe University, Germany
Juris Hartmanis, Cornell University, NY, USA
Jan van Leeuwen, Utrecht University, The Netherlands

Volume Editor
William B. Frakes
Virginia Tech, Computer Science Department
7054 Haycock Road, Falls Church, VA 22043-2311, USA
E-mail: [email protected]

Cataloging-in-Publication Data applied for

Die Deutsche Bibliothek - CIP-Einheitsaufnahme
Software reuse: advances in software reusability : 6th international conference ; proceedings / ICSR-6, Vienna, Austria, June 27 - 29, 2000. William B. Frakes (ed.). - Berlin ; Heidelberg ; New York ; Barcelona ; Hong Kong ; London ; Milan ; Paris ; Singapore ; Tokyo : Springer, 2000
(Lecture notes in computer science ; Vol. 1844)
ISBN 3-540-67696-1
CR Subject Classification (1998): D.2, K.6, D.1, J.1
ISSN 0302-9743
ISBN 3-540-67696-1 Springer-Verlag Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law.
Springer-Verlag is a company in the BertelsmannSpringer publishing group.
© Springer-Verlag Berlin Heidelberg 2000
Printed in Germany
Typesetting: Camera-ready by author, data conversion by DA-TeX Gerd Blumenstein
Printed on acid-free paper
SPIN: 10722052 06/3142 5 4 3 2 1 0
Message from the Program Chair

People like to make predictions about software reuse. Some of us have predicted that systematic reuse via domain engineering will produce a paradigm shift in software engineering. Others have predicted that reuse will be universally accepted as good practice, and therefore die as a research discipline. While significant progress has been made in systematic reuse, the paradigm shift is not yet complete, nor has reuse research died. What has happened in the past few years is that the concerns of the reuse research community have fragmented into many subareas. These include componentry, product line architectures, design patterns, functional languages, economics, and object-oriented methods. The ICSR6 program reflects this diversity. This fragmentation means that it is ever harder to keep up with developments in reuse. To address this problem, the ICSR6 program contains several summary talks on various areas of reuse research. There is also diversity in the origins of the papers: there are contributions from Europe, North and South America, Asia, and Australia. ICSR6 has many new contributors and committee members, along with many long-time contributors. The many kinds of diversity should make for an interesting and productive conference. Enjoy ICSR6 in Vienna.
June, 2000
Bill Frakes
Message from the General Chair
The International Conference in Software Reuse has always had a special relationship with Europe, dating back to its modest beginnings as a workshop in 1991 in Germany. During this first edition, and again in its second edition in Italy two years later, the workshop matured and acquired a critical mass of visibility in the software engineering community. These formative experiences in Europe permitted the workshop to reincarnate itself as a full-fledged conference in a number of locations in North and South America in the decade that followed. Now, nearly ten years after that first workshop, we are pleased to welcome ICSR back to Europe for the first time since its transformation into the world’s premier conference on software reuse, to Vienna, at the very center of Europe.

The past decade has seen many concepts first articulated by that small group of original participants, such as component-oriented development, pass into the general vocabulary. Yet as satisfying as it is for us to see the validation of reuse as the dominant software development paradigm today, much remains to be done. Product line architectures are only beginning to reach their full potential. Systematic reuse processes are only beginning to be codified and implemented. The full impact of reuse on the economics of enterprise information technology is only now beginning to be analyzed. No other conference can address these issues as directly as ICSR.

ICSR owes an immense debt of gratitude to ARCS, the Austrian Research Centers, whose extraordinary generosity has literally made this conference possible. Special thanks for this opportunity are due to my longtime colleague Professor Günter Koch, a prominent champion of European software engineering for the past twenty years, and now a leading figure in the transformation of ARCS from its traditional role in support of government research into a dynamic participant in the commercial IT industry. As part of that transformation, ARCS has made a deep commitment to the very technologies that are the subject of our conference.

The Executive Committee thanks all of the volunteers who have contributed their efforts to the organization of this conference. Most of all, however, our sincerest thanks are reserved for Dr. Dieter Donhoffer of ARCS, whose seemingly inexhaustible supply of energy and resourcefulness has created an event that will not only be a useful contribution to the software engineering community but a memorable one for its participants as well.
June, 2000
John Favaro
Committees
General Chair: John Favaro
Program Chair: William B. Frakes
Tutorial Co-chairs: Patricia Collins, Kyo Kang
Corporate Support Co-chairs: Juan Llorens, Mike Mannion
Publicity Co-chairs: Roland T. Mittermeir, Giancarlo Succi
Exhibitions/Demos Co-chairs: Sidney Bailin, Dieter Donhoffer
Local Arrangements Chair, Registrations Chair, Finance Chair: Dieter Donhoffer
Program Committee
O. Alonso (USA), D. Batory (USA), S. Bailin (USA), I. Baxter (USA), T. Biggerstaff (USA), C. Boldyreff (UK), J. Bosch (Sweden), S. Castano (Italy), S. Cohen (USA), P. Collins (USA), J. Cybulski (Australia), K. Czarnecki (Germany), P. Devanbu (USA), J. Favaro (Italy), C. Fox (USA), B. Frakes (USA) (Chair), S. Fraser (Canada), H. Gall (Austria), H. Gomaa (USA), M. Griss (USA), E. Guerrieri (USA), S. Isoda (Japan), K. Kang (Korea), P. Koltun (USA), G. Kovacs (Hungary), J. Kuusela (Finland), M. Lacroix (Belgium), J. Lerch (USA), J. Leite (Brazil), C. Lillie (USA), J. Llorens (Spain), M. Mannion (UK), Y. Maarek (Israel), M. Matsumoto (Japan), Y. Matsumoto (Japan), R. Mittermeir (Austria), M. Morisio (USA), J. Neighbors (USA), H. Obbink (Austria), R. Prieto-Diaz (USA), S. Robertson (UK), G. Succi (Canada), S. Wartik (USA), B. Weide (USA)
Organizing Committee
D. Donhoffer (Chair), G. Holzer, M. Strasser, E. Voehringer, G. Zoffmann
Sixth International Conference on Software Reuse
ICSR6, Vienna, Austria, June 27-29, 2000

Sponsored by
Austrian Federal Ministry for Traffic, Innovation, and Technology
Austrian Computer Society
Deloitte & Touche
DTInf Desarrollos para las Tecnologías de la Información
Fraunhofer Institute for Experimental Software Engineering
Intecs Sistemi
Sodalia
TCP Sistemas e Ingeniería
University of Madrid

Organized and supported by ARCS, the Austrian Research Centers
Table of Contents
Generative Reuse and Formal Domain Languages A New Control Structure for Transformation-Based Generators . . . . . . . . . . . . . 1 Ted J. Biggerstaff Palette: A Reuse-Oriented Specification Language for Real-Time Systems . . 20 Binoy Ravindran and Stephen Edwards From Application Domains to Executable Domains: Achieving Reuse with a Domain Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 Ulf Bergmann and Julio Cesar Sampaio do Prado Leite Reuse of Knowledge at an Appropriate Level of Abstraction: Case Studies Using Specware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 Keith E. Williamson, Michael J. Healy and Richard A. Barker
Object Oriented Methods 1 Building Customizable Frameworks for the Telecommunications Domain: A Comparison of Approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74 Giovanni Cortese, Marco Braga and Sandro Borioni Object Oriented Analysis and Modeling for Families of Systems with UML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89 Hassan Gomaa Framework-Based Applications: From Incremental Development to Incremental Reasoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100 Neelam Soundarajan and Stephen Fridella
Product Line Architectures Achieving Extensibility Through Product-Lines and Domain-Specific Languages: A Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 Don Batory, Clay Johnson, Bob MacDonald and Dale von Heeder Implementing Product-Line Features with Component Reuse . . . . . . . . . . . . . .137 Martin L. Griss Representing Requirements on Generic Software in an Application Family Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153 Mike Mannion, Oliver Lewis, Hermann Kaindl, Gianluca Montroni and Joe Wheadon
Implementation Issues in Product Line Scoping . . . . . . . . . . . . . . . . . . . . . . . . . . . 170 Klaus Schmid and Cristina Gacek
Requirements Reuse and Business Modeling Requirements Classification and Reuse: Crossing Domain Boundaries . . . . . 190 Jacob L. Cybulski and Karl Reed Reuse Measurement in the ERP Requirements Engineering Process . . . . . . . 211 Maya Daneva Business Modeling and Component Mining Based on Rough Set Theory . . 231 Yoshiyuki Shinkawa and Masao J. Matsumoto
Components and Libraries Visualization of Reusable Software Assets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251 Omar Alonso and William B. Frakes Reasoning about Software-Component Behavior . . . . . . . . . . . . . . . . . . . . . . . . . . 266 Murali Sitaraman, Steven Atkinson, Gregory Kulczycki, Bruce W. Weide, Timothy J. Long, Paolo Bucci, Wayne Heym, Scott Pike and Joseph E. Hollingsworth Use and Identification of Components in Component-Based Software Development Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284 Marko Forsell, Veikko Halttunen and Jarmo Ahonen Promoting Reuse with Active Reuse Repository Systems . . . . . . . . . . . . . . . . . . 302 Yunwen Ye and Gerhard Fischer
Design Patterns A Method to Recover Design Patterns Using Software Product Metrics . . . 318 Hyoseob Kim and Cornelia Boldyreff Object Oriented Design Expertise Reuse: An Approach Based on Heuristics, Design Patterns and Anti-Patterns . . . . 336 Alexandre L. Correa, Cláudia M. L. Werner and Gerson Zaverucha Patterns Leveraging Analysis Reuse of Business Processes . . . . . . . . . . . . . . . . . 353 Marco Paludo, Robert Burnett and Edgard Jamhour Constructional Design Patterns as Reusable Components . . . . . . . . . . . . . . . . . 369 Sherif Yacoub, Hany H. Ammar and Ali Mili
Object Oriented Methods 2 A Two-Dimensional Composition Framework to Support Software Adaptability and Reuse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388 Constantinos A. Constantinides, Atef Bader and Tzilla Elrad Structuring Mechanisms for an Object-Oriented Formal Specification Language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402 Márcio Cornélio and Paulo Borba Software Reuse in an Object Oriented Framework: Distinguishing Types from Implementations and Objects from Attributes . . . . . . . . . . . . . . . . . . . . . . . 420 J. Leslie Keedy, K. Espenlaub, G. Menger, A. Schmolitzky and M. Evered Compatibility Elements in System Composition . . . . . . . . . . . . . . . . . . . . . . . . . . . 436 Giancarlo Succi, Paolo Predonzani and Tullio Vernazza Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 449
A New Control Structure for Transformation-Based Generators Ted J. Biggerstaff
[email protected] Abstract. A serious problem of most transformation-based generators is that they are trying to achieve three mutually antagonistic goals simultaneously: 1) deeply factored operators and operands to gain the combinatorial programming leverage provided by composition, 2) high performance code in the generated program, and 3) small (i.e., practical) generation search spaces. The hypothesis of this paper is that current generator control structures are inadequate and a new control structure is required. To explore architectural variations needed to address this quandary, I have implemented a generator in Common LISP. It is called the Anticipatory Optimization Generator (AOG1) because it allows programmers to anticipate optimization opportunities and to prepare an abstract, distributed plan that attempts to achieve them. The AOG system introduces a new control structure that allows differing kinds of knowledge (e.g., optimization knowledge) to be anticipated, placed where it will be needed, and triggered when the time is right for its use.
1 Problems
A serious problem of most transformation-based generators is that they are trying to achieve three mutually antagonistic goals simultaneously: 1) deeply factored operators and operands to gain the combinatorial programming leverage provided by composition, 2) high performance code in the generated program, and 3) small (i.e., practical) generation search spaces. This paper will make the argument that this quandary is due in large measure to the control structure characteristics of large global soups of pattern-directed transformations. While pattern-directed transformations make the specification of the transformations easy, they also explode the search space when one is trying to produce highly optimized code from deeply factored operators and operands. Since giving up the deep factoring also gives up the combinatorial programming leverage provided by the composition, that is not a good trade-off.

There are other problems in addition to the search space explosion. Pattern-directed transforms provide relatively few tools for grouping sets of transformations, coordinating their operation, and associating them with a large grain purpose that transcends the fine grain structural aspects of the target program. For example, there are few and crude tools to express the idea that some subset of tightly related, cooperating transformations is designed for the narrow purpose of creating, placing and merging loops in the target program.
¹ Much of this work was done at Microsoft Research and the author gratefully acknowledges the support of Microsoft.
Along a similar vein, it is difficult to generate target code optimized for differing machine architectures from a single canonical specification. For example, one would like to be able to generate code optimized for a non-SIMD architecture and with the flip of a switch generate code optimized for a SIMD architecture. With conventional systems this is difficult. There is no way currently to record knowledge about the reusable components that may lead to optimization opportunities during the generation process. For example, the writer of a reusable component may know that a property of his component ensures that part of its code can be hoisted above a loop that will be generated to host the component. Conventional transformation systems allow no easy way to express and then later exploit such information. Similarly, most conventional transformation systems provide no mechanism to record knowledge of future optimization opportunities that become known in the course of executing early transformations. Further, there is no way to condition such optimization opportunities upon optimization events (e.g., a substitution of a particular structure) or expected properties of the generated code (e.g., the requirement that an expression must have been simplified to a constant). Finally, there is no way to intermix (abstractly) planned optimization operations with opportunistic optimization operations that cannot be planned because the opportunities for these optimizations arise unpredictably as a consequence of earlier transformations manipulating the target program code.

We will introduce a new kind of transformation and a new control structure (called a tag-directed control structure) that can overcome much of the search space explosion problem and also can address these other problems. Let us review conventional transformation architectures, contrast the new architecture with respect to these conventional systems, and then examine how this new architecture addresses these problems.
2 The New Control Structure

2.1 Overview
Conventional Transformation Systems. Conventionally, generic transformation systems store knowledge as a single global soup of transformations represented as rules of the form

    syntactic pattern ⇒ reformulated structure

The left hand side of the rule recognizes the syntactic form and binds matching elements of the program being transformed to transformation variables (e.g., ?operator) in the pattern. If that is successful, then the right hand side of the rule (instantiated with the variable bindings) replaces the matched portion of the program. Operationally, rules are chosen (i.e., triggered) based largely on the syntactic pattern of the left hand side, which may include type constraints as well as purely syntactic patterns. Rules may include some set of additional constraints (often called enabling conditions) that must be true before the rule can be triggered. However, all of this is not entirely sufficient since pure declarative forms are often inadequate to express
complex procedural transformations. Therefore, the rules of some systems also allow for arbitrary computations to occur during the execution of the rules. The operational form of the program being transformed may be text or, more typically in modern systems, an Abstract Syntax Tree (AST). In summary, the major control structure of generic transformation systems is based largely on the pattern of the program portion being transformed. Hence, we call such systems pattern-directed.

Draco. Some transformation systems add control variations to enhance efficiency. For example, Draco [10] separates its rules into two kinds: 1) refinement rules, and 2) optimization rules. Refinement rules add detail by inlining definitions. Optimization rules reorganize those details for better performance. Further, the refinement rules are grouped into subsets that induce a set of translation “stages” by virtue of the fact that each group translates from some higher level domain specific language (e.g., the language of the relational algebra) into one or more lower level domain specific languages (i.e., ones that are closer to conventional programming languages, such as a tuple language). After each such refinement stage, optimization rules are applied to reorganize the current expression of the program into a more optimum form within the current domain specific language. The final output is a form of the program expressed in a conventional programming language such as C. While such generators usually produce programs with adequate performance and do so within an acceptable period of time, in some domains, the search space of the optimization phases tends to explode.

Tag-Directed. A key insight of AOG (i.e., the use of tags to identify and trigger optimizing transformations) arose from an analysis of several transformation-based derivations of graphics functions. The derivations were quite long with a series of carefully chosen and carefully ordered transformations that prepared the program for the application of key optimizing transforms. The need to choose exactly the right transform at exactly the right point over and over again seemed like a long series of miracles that implied a huge underlying search space. Further, I noticed that the choice of the various transformations depended only weakly on the patterns in the program. Rather they depended on other kinds of knowledge and relationships. Some of this was knowledge specific to the reusable components and could be attached to the components at the time of their creation. Other knowledge arose in the course of transformation operation (e.g., the creation of a data flow introduced by a preparatory transformation). Tags serve to capture such knowledge and thereby they became a key element of the AOG control structure. Such use of tags motivates the moniker of tag-directed transformations for those optimizing transformations that are triggered largely because of the tags attached to the AST. In triggering these optimizing transformations, patterns play a lesser role.

Event Triggering. The transformation name in a tag indicates what transformation to fire but not when. Thus, the tag control structure includes the idea of local and global events associated with the tag structure that indicate when to fire the transformations. Global events, which apply to the whole subtree being operated on, provide a way to induce a set of optimization stages.
Each stage has a general optimization purpose, which assures certain global properties are true upon completion of the stage. For example, all loop merging is complete before commencing the stage that triggers certain kinds of code hoisting outside of loops. Local events, on the other hand, are events specific to an AST subtree such as its
movement or substitution. Local events occur in the course of some other transformation’s operation. Local events allow opportunistic transformations to be interleaved among the transformations triggered during a stage.

Components. Another control structure innovation is to represent passive and active components by different mechanisms. Passive components (just called components) are those for which one can write a concrete, static definition. These comprise the library of reusable piece parts from which the target program will be built and they are represented in an Object Oriented hierarchy. For example, one of the data types specific to the graphics domain language that I use is a Neighborhood, which is a subset of pixels in a larger image centered on some specific pixel within that larger image. Specific instances of neighborhoods are defined via functional methods that compute: 1) the indexes of neighborhood pixels in terms of the image’s indexes (Row and Col), 2) a set of convolution weights associated with individual pixel positions within the neighborhood (W), and 3) the range of the relative offsets from the centering pixel (PRange, QRange). These methods, like conventional refinement rules, will be inlined and thereby will form portions of the target program. The operator components (e.g., the convolution operator ⊕) are defined in the operator subtree of this OO hierarchy and act like multi-methods whose definition is determined by the type signature of the operator expression. For example, “(⊕ array [iterator, iterator], neighborhood)” is a signature that designates a method of ⊕ with a static, inline-able definition which will become the inner loop of a convolution.

Transformations. The transformations are the active components and are represented as executable functions rather than isolated declarative rules. This allows them to handle high degrees of AST variation; compute complex enabling conditions; and recognize why enabling conditions are failing and take actions to fix them (e.g., by directly calling another transformation). Representing such transformations as conventional rules would require splitting them up into a number of individual transformations which would thereby explode the generator search space. Larger grain transformations that are implemented as programmable functions prevent this explosion.

Kinds of Transformations. AOG transformations come in two flavors – pattern-directed and tag-directed. Pattern-directed transformations are used for program refinement stages and tag-directed are used for optimization stages. Pattern-directed transformations are organized into the OO hierarchy to help reduce the number of candidate transformations at each point in the AST. That is to say, the OO hierarchy captures a key element of the pattern – the type of the subtree being processed, which saves looking at a large number of transformations that might syntactically match but semantically fail.

Partial Evaluation. Like mathematical equations, reformulations of program parts require frequent simplification. If simplifications are represented as isolated transformations in a global soup of transformations, they explode the search space because they can be applied at many points, most of which are inappropriate. Therefore, AOG contains a partial evaluator that is called for each new AST subtree to perform that simplification. A partial evaluator is a specialized agent that simplifies expressions without exploding the search space.
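To make the pattern-directed baseline concrete, here is a minimal sketch of a single rewrite step over a tuple-encoded AST. It is written in Python purely for illustration (AOG itself is implemented in Common LISP), and the rule representation, function names, and example rule are assumptions rather than AOG's actual machinery.

    # Minimal sketch of one conventional pattern-directed rewrite step, assuming
    # ASTs are nested tuples and pattern variables are strings starting with '?'.
    def match(pattern, tree, bindings):
        """Try to bind pattern variables to subtrees; return bindings or None."""
        if isinstance(pattern, str) and pattern.startswith('?'):
            if pattern in bindings and bindings[pattern] != tree:
                return None
            return {**bindings, pattern: tree}
        if isinstance(pattern, tuple) and isinstance(tree, tuple) \
                and len(pattern) == len(tree):
            for p, t in zip(pattern, tree):
                bindings = match(p, t, bindings)
                if bindings is None:
                    return None
            return bindings
        return bindings if pattern == tree else None

    def instantiate(template, bindings):
        """Build the right-hand side with the variables bound by the match."""
        if isinstance(template, str) and template.startswith('?'):
            return bindings[template]
        if isinstance(template, tuple):
            return tuple(instantiate(t, bindings) for t in template)
        return template

    # Hypothetical rule: (+ ?x 0)  =>  ?x
    rule = (('+', '?x', 0), '?x')
    tree = ('+', ('*', 'a', 'b'), 0)
    b = match(rule[0], tree, {})
    print(instantiate(rule[1], b) if b is not None else tree)   # ('*', 'a', 'b')

As the text notes, the weakness of this scheme is not the matching itself but the fact that every rule in the global soup is a candidate at every point, which is what the tag-directed control structure is meant to avoid.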
Now let us examine these ideas in the context of an example.
2.2 Related Interdependent Knowledge
Sometimes a priori knowledge about the interdependence of several aspects of a problem implies a future optimization opportunity and using this knowledge in the course of generation can reduce the generator’s search space. For example, a component writer may know that in a particular architectural context (e.g., a CPU without parallel processing instructions) a specific optimization will be enabled within a component and will be desirable at a particular stage of generation. How should such knowledge be captured? Where should it be kept? And how should the optimization be triggered? Let us look at a concrete example of this situation. Suppose that the component writer is creating a description of a pixel neighborhood (call it s) within a graphics image and that description will be used in the context of graphics operators such as the convolution² operator. This neighborhood description comprises a set of methods of s that describe the size and shape of the neighborhood, the weights associated with each relative position in the neighborhood, how to compute the relative positions in terms of the [i,j] pixel of the image upon which the neighborhood is centered, and any special case processing such as the special case test for the neighborhood hanging off the edge of the image. Let us consider the method that defines the weights for a particular neighborhood s. If any part of the neighborhood is hanging off the edge of the image, the weight will be defined as 0. Otherwise, the weights will be defined by the matrix:
              q = -1   q = 0   q = 1
    p = -1      -1       0       1
    p =  0      -2      〈0〉      2
    p =  1      -1       0       1

where p and q define the pixel offset from the image pixel upon which the neighborhood is centered. The diamond bracketed entry indicates the center of the neighborhood. Then the definition of the weight method w for pixel [p,q] in the neighborhood s is defined by the following pseudo-code, where s is centered on the [i,j] pixel of an m x n image:

    w.s(i,j,m,n,p,q) ⇒
        {if ((i==0) || (j==0) || (i==(m - 1)) || (j==(n - 1))) then 0;
         else {if ((p!=0) && (q!=0)) then q;
               else {if ((p==0) && (q!=0)) then (2 * q);
                     else 0 }}}
² A convolution is a graphics operator that computes an output image b from an input image a by operating on each neighborhood around each pixel a[i,j] to produce the corresponding b[i,j] pixel. Specifically, the operation is a sum of products of each of the pixels in the neighborhood around a[i,j] times the weight value defined for that pixel in the neighborhood.
What does the component writer know about this component that might be helpful in the course of code generation? He knows that the eventual use of the weight calculation will be in the context of some image operator (e.g., a convolution). Further, that context will comprise a 2D loop iterating over the neighborhood that will be nested within another 2D loop iterating over the whole image. Further, the component writer knows that the special case test is not dependent on the neighborhood loop and, therefore, the test can be hoisted outside of the neighborhood loop. Or equivalently, the loop can be distributed over the if statement producing an instance of the loop in both the then and else clauses. Since the specific instance of the image operation has not been chosen and these loops have not yet been generated, the component writer cannot execute the potential transformation yet. The component writer can only use the abstract knowledge that defines, in general terms, the eventual solution envelope. What he would like to do is to associate a piece of information with the if statement that would indicate the transformation to be invoked and some indication of when it should be invoked. AOG provides a mechanism for accomplishing this by allowing the component writer to attach a tag (to be used by the generator) to the if statement. In this case, the tag has the form:

    (_On SubstitutionOfMe (_PromoteConditionAboveLoop ?p ?q))

This tag is like an interrupt that is triggered when the if statement gets substituted in some context, i.e., when a local substitution event happens to the if statement. When triggered, the tag will cause the _PromoteConditionAboveLoop transform to be called with the name of the target loop control variables (i.e., the values p and q, which will be bound to the generator variables ?p and ?q) as parameters. The transform will find the loops controlled by the values of p and q, check the enabling conditions, and if enabled, perform the distribution of the loop over the if statement. Thus, _PromoteConditionAboveLoop will transform a form like
    {Σp,q (p ∈ [-1:1]) (q ∈ [-1:1]) :
        { if (i==0 || j==0 || i==m-1 || j==n-1)     /* Off edge */
             then <special case processing>;
             else <general case processing> }}
into a form like

    { if (i==0 || j==0 || i==m-1 || j==n-1)         /* Off edge */
         then {Σp,q (p ∈ [-1:1]) (q ∈ [-1:1]) : <special case processing>}
         else {Σp,q (p ∈ [-1:1]) (q ∈ [-1:1]) : <general case processing>} }

Thus, with these ideas, we have introduced a new kind of transformation, which we will call a tag-directed transformation. Such transformations are triggered by events, in contrast to conventional pattern-directed transformations that are triggered largely
by patterns in the AST (Abstract Syntax Tree). Another difference from pattern-directed transformations is that tag-directed transformations and their host tags can capture knowledge that is not easily derivable from the AST patterns or operator/operand semantics. Such knowledge would require some deep inference, some sense of the optimization opportunities particular to the evolving program or some knowledge that is fleetingly available in the course of transformation execution. For example, the tag-directed example we examined takes advantage of several interrelated knowledge nuggets that have little to do with the structure of the AST: 1) the knowledge of the case structure of the reusable component s where the general purpose of the branching is known (i.e., one branch is a special case), 2) the knowledge that the reusable component will be used in the context of neighborhood loops whose general purpose is known a priori, 3) the knowledge that the if condition within the component is independent of the anticipated neighborhood loops, 4) the optimization knowledge that executing an if test outside of the loop instead of within the loop is more computationally efficient, and 5) the generation knowledge that attaching an interrupt-like tag (incorporating all of this special knowledge) to the reusable component will produce search space reduction advantages by eliminating the search for which transform to fire, where to apply it in the AST, and when to fire it. The key objective of tag-directed transformations is to reduce the generator search space explosion that arises when each transformation is just one of many in a global soup of transformations. By using all of this knowledge together, we can eliminate much of the branching in the search space (i.e., eliminate all of the alternative transformations that might apply at a given point in the generation) because tags supply all of the information needed to make the choice of transformation. They determine: 1) which transformation is called (i.e., it is explicitly named in the tag), 2) when it is called (i.e., when the named event occurs), and 3) where in the AST to apply the transform (i.e., it is applied to the structure to which the tag is attached).
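As a rough illustration of this triggering discipline, the sketch below (Python, with hypothetical names, not AOG's actual Common LISP implementation) attaches an interrupt-like tag to an AST node and fires the named transform when a local substitution event is signalled on that node.

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List, Tuple

    @dataclass
    class Node:
        op: str
        children: List["Node"] = field(default_factory=list)
        # event name -> list of (transform name, arguments) waiting on that event
        tags: Dict[str, List[Tuple[str, tuple]]] = field(default_factory=dict)

    TRANSFORMS: Dict[str, Callable] = {}

    def promote_condition_above_loop(node: Node, loop_vars: tuple) -> None:
        """Stand-in for _PromoteConditionAboveLoop: find the loops controlled by
        loop_vars, check the enabling conditions, and distribute them over the if."""
        print(f"distributing loops over '{node.op}' for variables {loop_vars}")

    TRANSFORMS["_PromoteConditionAboveLoop"] = promote_condition_above_loop

    def signal_local_event(node: Node, event: str) -> None:
        """Fire every transform whose tag waits on this local event of the node."""
        for name, args in node.tags.get(event, []):
            TRANSFORMS[name](node, args)

    # The component writer attaches the tag when the component is written:
    if_stmt = Node("if")
    if_stmt.tags["SubstitutionOfMe"] = [("_PromoteConditionAboveLoop", ("p", "q"))]

    # Later, when another transform substitutes the if statement into a context,
    # the substitution event triggers the tagged transform at exactly that node:
    signal_local_event(if_stmt, "SubstitutionOfMe")

The point of the sketch is that the which/when/where questions are all answered by the tag itself, so no search over alternative transformations is needed at this point.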
2.3 The Likelihood of Optimization Opportunities
Not all knowledge that one might want to use is deterministic. Often, there is just the likelihood that some optimization friendly condition may occur as a result of code manipulation. This knowledge too is valuable in keeping the search space from exploding while simultaneously achieving important generation goals such as eliminating redundant code. An example of this situation is illustrated by the expression for Sobel edge detection in bitmapped images.

    DSDeclare image a, b :form (array m n) :of bwpixel;
    b = [ (a ⊕ s)² + (a ⊕ sp)² ]^(1/2) ;
where a and b are (m × n) grayscale images and ⊕ is a convolution operator that applies the template matrices s and sp to each pixel a[i,j] and its surrounding neighborhood in the image a to compute the corresponding pixel b[i,j] of b. s (whose w method was defined earlier) and sp are OO abstractions that define the specifics of the pixel neighborhoods. It is possible, and indeed even likely, that the special case processing seen in the w method of s will be repeated in the w method of sp. If so, it would be desirable to share the condition test code if possible by applying the _MergeCommonCondition transformation:

    { if ?a then ?b else ?c; if ?a then ?d else ?e }  =>  if ?a then {?b; ?d} else {?c; ?e}

where the ?x syntax represents generator variables that will be bound to subtrees of the AST. In the above example, ?a will be bound to the common special case condition code that tests for the neighborhood hanging partially off the edge of the image. Because the component writer anticipates this possibility, he would like to hang a tag on the if statement in the w definitions of s and sp that will cause the _MergeCommonCondition transformation to be called. If the condition code is common and all other enabling conditions are met, it will perform the transformation. However, this raises a question. What event should this transformation be triggered on? Local events like substitution of the subtree would be a bad choice because there is no easy way to assure ordering of the strategic computational goals of the overall optimization process. For example, we want all code sharing across domain specific subexpressions (i.e., global code manipulations) to be completed before any in-place optimizations (i.e., mostly local manipulations) begin. The approach used by AOG is to separate the strategic processing into phases that each have a general purpose such as:
1. inlining definitions (e.g., substituting the method definition of w.s),
2. sharing code across expressions (e.g., sharing common test conditions),
3. performing in-place optimizations (e.g., unrolling loops), and
4. performing clean-up optimizations (e.g., eliminating common subexpressions).
These phases behave like an abstract algorithm where the details of the algorithmic steps come from the transformations mentioned in the tags attached to the components. The start of each stage is signaled by the generator posting a global event that will cause all tags waiting on that event to be scheduled for execution. This is how _MergeCommonCondition gets called to share the condition test code common to w.s and w.sp. It is scheduled when the global event signalling the start of the cross expression code sharing is posted by AOG. So, here we have a new control structure construct that allows a useful separation of concerns. The generator writer provides the broad general optimization strategy by defining stages, each with a narrow optimization goal. The component writer at library creation time (or a transformation during generator execution) adds tags that supply the details of the steps. This means that the generator writer does not have to account for all the possible combinations of purposes, sequences, enabling conditions, etc. He only has to create one (or more) abstract algorithms (i.e., define a set of stages) that are suitable for the classes of optimization that might occur. Similarly, the
component writer (or transformation writer in the case where the tags are dynamically added to the AST) can add tags that take advantage of every bit of knowledge about the components, even possibilities that may or may not arise in any specific case. This kind of separation of concerns avoids much of the search space explosion that occurs when all of the strategic goals, component-specific details, and their interdependencies reside in one central entity such as the generator’s algorithm or in a global soup of transformations. The view that tag-directed transforms are like interrupts is an apt simile because they mimic both the kind of design separation seen in interrupt-driven systems as well as the kind of operational behavior exhibited by interrupt-driven systems.
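A rough sketch of this separation of concerns, with the generator writer supplying the stage sequence and the component writer's tags supplying the steps, might look as follows. The stage names follow the four phases listed above; everything else (data structures, function names) is a hypothetical illustration in Python, not AOG's actual code.

    from collections import defaultdict

    # global event name -> list of (ast_node, transform_name, args) waiting on it
    WAITING_TAGS = defaultdict(list)

    def register_tag(event, node, transform_name, args):
        WAITING_TAGS[event].append((node, transform_name, args))

    def post_global_event(event, transforms):
        """Schedule and run every tag waiting on this stage-starting event."""
        for node, name, args in WAITING_TAGS.pop(event, []):
            transforms[name](node, args)

    def generate(ast, transforms):
        # The generator writer fixes the broad optimization strategy as stages;
        # the component writer's tags supply the detailed steps within each one.
        for stage_event in ("InlineDefinitions",
                            "CrossExpressionCodeSharing",   # _MergeCommonCondition fires here
                            "InPlaceOptimizations",
                            "CleanUpOptimizations"):
            post_global_event(stage_event, transforms)
        return ast

    # Example: a tag on the if statement of w.s waits on the code-sharing stage.
    register_tag("CrossExpressionCodeSharing", "w.s if-node",
                 "_MergeCommonCondition", ("?a",))
    generate("ast-root",
             {"_MergeCommonCondition": lambda node, args: print("merging at", node)})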
2.4 Simplification Opportunities
Not all optimizations fit the tag-directed model. There are many opportunities for simplification by partial evaluation. In fact, any newly formed AST subtree is a candidate for such simplification and each transformation that formulates new subtrees immediately calls the partial evaluator to see if the subtrees can be simplified. For example, during the in-place optimization phase, one of the neighborhood loops is unrolled by a tag-directed transformation. The pseudo-code of the internal form of the loop looks like:

    {_sum (p q) (_suchthat (_member p (_range -1 1)) (_member q (_range -1 1)))
        {if ((p!=0) && (q!=0)) then (a[(i + p),(j + q)]*p);
         else {if ((p!=0) && (q==0)) then (a[(i + p),(j + q)]*(2*p));
               else 0;}}}

where the _sum operator indicates a summation loop and the _suchthat clause indicates that p and q range from -1 to +1. This produces two levels of loop unwrapping. In the course of the first level of unwrapping (i.e., the loop over p), one of the terms (i.e., the one for (p==1)) has the intermediate form:

    {_sum (q) (_suchthat (_member q (_range -1 1)))
        {if (q!=0) then a[(i+1),(j+q)];
         else {if (q==0) then (a[(i+1),(j+q)]*2);
               else 0;}}}

When the remaining loop over q is subsequently unrolled, we get the expression

    ({if (-1!=0) then a[(i+1),(j-1)];
      else {if (-1==0) then (a[(i+1),(j-1)]*2); else 0;}}
     + {if (0!=0) then a[(i+1),(j+0)];
        else {if (0==0) then (a[(i+1),(j+0)]*2); else 0;}}
     + {if (1!=0) then a[(i+1),(j+1)];
        else {if (1==0) then (a[(i+1),(j+1)]*2); else 0;}})
The unroll transform calls the partial evaluator on this expression, which produces the final result for this subloop:

    (a[(i+1),(j-1)] + (a[(i+1),j]*2) + a[(i+1),(j+1)])

In the same way, the other derived, nested loops produce zero or more analogous expressions, for a total of six terms for the original loop over p and q. Partial evaluation is critical because it allows future transformations to execute. Without it, many future transformations would fail to execute simply because of the complexity of detecting their enabling conditions or the complexity of manipulating un-simplified code. Partial evaluation is the most executed transformation. For the Sobel edge detection expression, 44 of the 92 transformations required to generate the code are partial evaluations.
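The kind of simplification involved can be sketched with a small constant-folding evaluator over tuple-encoded expressions. The sketch below, in Python, is purely illustrative and far simpler than AOG's actual partial evaluator.

    def partial_eval(expr):
        """Fold constant tests and arithmetic in a tuple-encoded expression tree."""
        if not isinstance(expr, tuple):
            return expr                           # constants and variables stay as-is
        op, *args = expr
        args = [partial_eval(a) for a in args]
        if op == "if" and isinstance(args[0], bool):
            return args[1] if args[0] else args[2]          # fold constant condition
        if all(isinstance(a, (int, float)) for a in args):
            if op == "!=": return args[0] != args[1]
            if op == "==": return args[0] == args[1]
            if op == "+":  return sum(args)
            if op == "*":  return args[0] * args[1]
        if op == "*" and 1 in args:               # algebraic identity: x*1 -> x
            return [a for a in args if a != 1][0]
        return (op, *args)

    # One term of the unrolled loop: if (1 != 0) then a[i+1,j+1]*1 else a[i+1,j+1]*2
    term = ("if", ("!=", 1, 0), ("*", "a[i+1,j+1]", 1), ("*", "a[i+1,j+1]", 2))
    print(partial_eval(term))                     # -> a[i+1,j+1]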
2.5 Architectural Knowledge
No aspect can have a larger effect on the final form of the generated code than the architecture of the CPU. Consider the two different sets of code produced by AOG for a CPU without parallel instructions and one with parallel instructions (i.e., the MMX instructions of the Pentium™ processor). For a single CPU Pentium machine without MMX instructions (which are SIMD instructions that perform some arithmetic in parallel), the AO generator will produce code that looks like

    for (i=0; i < m; i++)                              /* Version 1 */
      {im1 = i-1; ip1 = i+1;
       for (j=0; j < n; j++)
         { if (i==0 || j==0 || i==m-1 || j==n-1)
              then b[i,j] = 0;                         /* Off edge */
           else
             {jm1 = j-1; jp1 = j+1;
              t1 = a[im1,jm1]*(-1) + a[im1,j]*(-2) + a[im1,jp1]*(-1)
                 + a[ip1,jm1]*1 + a[ip1,j]*2 + a[ip1,jp1]*1;
              t2 = a[im1,jm1]*(-1) + a[i,jm1]*(-2) + a[ip1,jm1]*(-1)
                 + a[im1,jp1]*1 + a[i,jp1]*2 + a[ip1,jp1]*1;
              b[i,j] = sqrt(t1*t1 + t2*t2)}}}

This result requires 92 large grain transformations and is produced in a few tens of seconds on a 400 MHz Pentium. In contrast, if the machine architecture is specified to be MMX, the resultant code is quite different:
    {int s[(-1:1), (-1:1)] = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};     /* Version 2 */
     int sp[(-1:1), (-1:1)] = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};
     for (j=0; j

〈a0, a1, ..., an〉 such that (1) ((ak−1, ak) ∈ Γ(Pi), ∀k : 1 ≤ k ≤ n) and (2) (ai,j = a0 ∧ ai,k = an) holds. Further, ROOT(Pi) defines the root node of path Pi, i.e., the only node of the path that does not have an incoming edge from any other nodes of the path.
Fig. 6. Language interface with QoS techniques. (The diagram relates the specification of system characteristics in the system description language, the compiler, the static IR holding static information of the system such as software and hardware compositions, stream properties and QoS requirements, the run-time system and its dynamically measured performance data, the dynamic IR, the platform-independent resource abstractions, the QoS management middleware, its QoS management actions, and the real-time C2 system application.)
4 Application of the Specification Language
To illustrate the use of the specification language and the intermediate representations, we describe QoS management techniques that deliver the desired QoS of real-time control systems. The techniques use the IRs that are constructed from the language specifications and augmented with dynamically instrumented performance data. The interface of the language implementation—compiler and run-time system—with QoS management middleware is illustrated in Figure 6. As part of our prior work, we have implemented the QoS management middleware. Details of the middleware can be found in [30]. The steps involved in the QoS management process are as follows: The real-time system components are monitored by the middleware for conformance to specified QoS requirements. QoS violations such as path latencies exceeding their slack values on deadlines are reported to diagnosis functions of the middleware
that identify the causes of poor QoS. Further analysis by the middleware identifies possible reallocation actions to improve the QoS, and selects the “best” set of these possible actions (e.g., identifying the optimal or sub-optimal bottleneck application program of the path to replicate). Resource allocation is performed by the middleware to determine the optimal or sub-optimal way to execute the selected actions (e.g., determining the optimal or sub-optimal host to execute an application replica). These QoS management steps are summarized in the following subsections. For brevity, we focus here on managing slack values on simple deadlines for periodic, continuous functions such as assessment. More complete details of the middleware algorithms, their implementation, and performance validation are available [28]. A complete implementation of the middleware is currently in the process of commercialization.
4.1 QoS Monitoring
Any algorithm that performs run-time QoS monitoring consumes resources and incurs overhead. We have implemented the middleware algorithms described here and have experimentally measured the run-time overhead [28]. The observed increase in path latency due to monitoring and management was less than 4% with very tight standard deviations for a number of experimental scenarios.

Monitoring of real-time QoS involves collection of time stamped events that are sent from the application programs, and synthesis of the events into path-level QoS metrics. A time stamped event tag from an application includes the start and end times of processing the data elements and data batches by the application. The start and end times of processing data elements and data batches are defined as follows: s(Pi.DS(c, ai,j)k) and e(Pi.DS(c, ai,j)k) define the start time and end time of processing the k-th data stream element by application ai,j during path cycle c. s(Pi.DS(c, ai,j, k)) and e(Pi.DS(c, ai,j, k)) define the start time and end time of processing the k-th data stream batch by application ai,j during path cycle c.

From these basic events, the observed real-time QoS metrics of path latency, throughput, and data inter-processing time are defined as follows: The observed latency of path Pi during cycle c is the maximum of the set of latencies that are incurred in processing each of the data batches. It is defined as λOBS(Pi, c) = MAX(e(Pi.DS(c, ai,m, j)) − s(Pi.DS(c, Root(Pi), 1))), ∀j : 1 ≤ j ≤ TotalBatches(Sink(Pi), c), ∀ai,m ∈ Replicas(Sink(Pi)), where Sink(Pi) defines the sink node of path Pi, i.e., the only node of the path that does not have an outgoing edge to any other nodes of the path. The observed throughput of path Pi during cycle c is defined as θOBS(Pi, c) = |Pi.DS(c)| / λOBS(Pi, c). The observed data inter-processing time of path Pi during cycle c is δOBS(Pi, c) = f(s(Pi.DS(c, ai,j)k) − s(Pi.DS(c − 1, ai,j)k)), ∀k, ∀ai,j, and for c > 1, where f(·) is an averaging function over all the inter-processing times.

Analyzing a time series of real-time QoS metrics enables detection of QoS violations. Examples of QoS violations include the latency of the path exceeding its minimum slack value for a simple deadline (called a path overload) or the latency of the path decreasing below its maximum slack value for a simple
deadline (called a path underload). An overload of a path occurs in any cycle c when the observed path latency violates the required latency in at least υ(λREQ(Pi)) out of the previous ω(λREQ(Pi)) cycles. That is, υ(λREQ(Pi)) ≤ |{d : c − d < ω(λREQ(Pi)) ∧ (λREQ(Pi) − λOBS(Pi, d)) / λREQ(Pi) × 100 < ψmin(λREQ(Pi))}|. Similarly, an underload of a path occurs in any cycle c when υ(λREQ(Pi)) ≤ |{d : c − d < ω(λREQ(Pi)) ∧ (λREQ(Pi) − λOBS(Pi, d)) / λREQ(Pi) × 100 > ψmax(λREQ(Pi))}|.
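For concreteness, the following Python sketch shows how the observed latency and throughput and the windowed overload test could be computed from time-stamped data; the data layout and function names are assumptions for illustration, not the middleware's actual interfaces.

    def observed_latency(batch_end_times, path_start_time):
        # lambda_OBS: maximum, over the batches of the cycle, of
        # (finish time at the sink) - (start time at the root)
        return max(end - path_start_time for end in batch_end_times)

    def observed_throughput(num_elements, latency):
        # theta_OBS = |P_i.DS(c)| / lambda_OBS(P_i, c)
        return num_elements / latency

    def is_overload(observed, required, psi_min, upsilon, omega, cycle):
        """True if, in at least upsilon of the last omega cycles, the observed
        slack percentage fell below the minimum slack psi_min."""
        window = range(max(0, cycle - omega), cycle)
        violating = [d for d in window
                     if (required - observed[d]) / required * 100 < psi_min]
        return len(violating) >= upsilon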
4.2 QoS Diagnosis
When an end-to-end, path-level QoS violation such as a path overload or path underload occurs, QoS diagnosis is performed to determine the cause(s) of the violation. Here, we focus on QoS diagnosis for a path overload. The objective of diagnosis is to identify applications or replicas that are experiencing significant slowdown. The illustration here presents a technique to perform path local diagnosis, which considers only the applications in a path exhibiting poor QoS. The local QoS diagnosis technique compares current performance of an application to its best performance in the same application-to-host mapping and at the same data stream size. An application ai,j of path Pi is said to be unhealthy during a cycle c if there exists another cycle d such that the processing latencies are significantly worse for the data batches processed by ai,j during cycle c than the processing latencies for the batches processed by ai,j during cycle d, for the same data stream size and host resource. Further, during cycle d, ai,j exhibited its least latency over all cycles. That is, the application ai,j is said to be unhealthy if ∃d : (d < c) ∧ (HOST(ai,j, c, Pi) = HOST(ai,j, d, Pi)) ∧ (|Pi.DS(c, ai,j)| = |Pi.DS(d, ai,j)|) ∧ (∀f : f < c ∧ HOST(ai,j, c, Pi) = HOST(ai,j, f, Pi) ∧ |Pi.DS(c, ai,j)| = |Pi.DS(f, ai,j)| ∧ λOBS(Pi, f) > λOBS(Pi, d)) ∧ (λOBS(Pi, d) < λOBS(Pi, c) − Δ), where Δ defines the minimal difference between cycle latencies that is considered significant.
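A minimal sketch of this local diagnosis test, assuming a per-application history keyed by cycle (not the middleware's actual representation), is given below.

    def is_unhealthy(history, c, delta):
        """history[cycle] = (host, stream_size, latency) for one application.
        Flag the application if some earlier cycle with the same host mapping and
        the same data stream size had a latency better by more than delta."""
        host_c, size_c, latency_c = history[c]
        comparable = [lat for d, (host, size, lat) in history.items()
                      if d < c and host == host_c and size == size_c]
        if not comparable:
            return False
        best = min(comparable)        # the cycle d with the least latency
        return best < latency_c - delta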
4.3 Identifying Recovery Actions
QoS diagnosis functions produce a set of unhealthy applications as their output. The unhealthy applications are analyzed to identify recovery actions that will improve the QoS of the path. The first step of the analysis is to determine whether the data stream size of an unhealthy application has significantly increased in the recent past. This is determined from the abstraction in the dynamic IR that describes the data stream size processed by an application within the past α cycles, where α defines the size of a window of recent cycles. The data stream size of an application ai,j is said to have increased by at least a significant amount δ when ∃d : c > d ∧ c − d < α ∧ (|Pi.DS(c, ai,j)| > |Pi.DS(d, ai,j)| + δ). The data stream analysis is used to determine the appropriate recovery action for an unhealthy application. An unhealthy application a is replicated when its data stream size has increased recently by a significant amount. An unhealthy application a is migrated from its current host when its data stream size has not increased recently but the load of its host resource has increased significantly.
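A hedged sketch of this decision rule follows; the argument layout and the host-load test are assumed for illustration.

    def stream_size_increased(sizes, c, alpha, delta):
        # True if the size in cycle c exceeds the size of some cycle d within the
        # window of the last alpha cycles by more than delta
        return any(sizes[c] > sizes[d] + delta
                   for d in range(max(0, c - alpha), c))

    def choose_action(app, sizes, host_load_increased, c, alpha, delta):
        if stream_size_increased(sizes, c, alpha, delta):
            return (app, "replicate")
        if host_load_increased(app, c):
            return (app, "migrate")
        return None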
We define the set A = {act1, act2, · · ·} as the set of actions selected, where acti is a pair (a, action) that denotes the action that is recommended for application a. Actions that address the same cause of poor path QoS are grouped together. A group of related actions is a set gi(·, ·) = {(a, action-a), (b, action-b), · · ·}. All actions that migrate applications (or replicas) from a particular host are grouped together, since any one of those actions will reduce the contention experienced by applications on the host. The group of all actions that migrate applications from a host hk to recover from poor QoS detected in cycle c is identified as gm(hk, c) = {acti = (ai, action) : action = ‘migrate’ ∧ HOST(ai, c, Pi) = hk}. Actions that involve replication of a particular application ai are also grouped together, since the addition of another replica of ai will cause a redistribution of load processed by the existing replicas of ai. The group of all actions that involve replication of an application ai,j can be identified as follows: gr(ai,j, c) = {acti = (ai,j,k, action) : action = ‘replicate’ ∧ ai,j,k = REPLICAS(ai,j, c)}.
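The grouping itself amounts to partitioning the selected actions by host (for migrations) and by application (for replications), as in the small sketch below; names are illustrative only.

    from collections import defaultdict

    def group_actions(actions, host_of, cycle):
        """actions: iterable of (application, kind) pairs."""
        migrate_groups = defaultdict(list)    # g_m(h_k, c): migrations off host h_k
        replicate_groups = defaultdict(list)  # g_r(a_ij, c): replications of a_ij
        for app, kind in actions:
            if kind == "migrate":
                migrate_groups[host_of(app, cycle)].append((app, kind))
            elif kind == "replicate":
                replicate_groups[app].append((app, kind))
        return migrate_groups, replicate_groups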
4.4 Resource Allocation
Once the recovery actions to improve QoS are determined, the next step is to allocate hardware resources—CPUs and networks—to the actions. We summarize a heuristic algorithm that uses the dynamic IR to determine the hardware resources to be used by an application replica or a migrant application program. Details of the algorithm can be found in [30]. The technique uses a fitness function that considers both host and network load indices. The algorithm uses trend values of load indices of hosts and networks over a (moving) set of sample values that is instrumented and represented in the dynamic IR. The trend values are determined as the slope of regression lines that plot the load index values as a function of time. A fitness value is determined for each of the hosts by the algorithm. The fitness value of a host is computed as a function of the load index value of the host, the load index value of the host network that has the least load among all networks of the host, a host weight, a network weight, a host load index weight, and a network load index weight. The host for an application replica or migrant application is determined as the host that has the minimum fitness value among all the eligible hosts of the application. Once the recovery actions and hardware resources are determined, they are enacted on the application to improve the delivered QoS.
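The host selection can be sketched as follows. The regression-slope trend and the minimum-fitness choice follow the description above; the exact combining function for the fitness value is not given in the text, so a simple weighted sum is assumed here for illustration.

    def trend(samples):
        """Slope of the least-squares regression line over (time, load) samples."""
        n = len(samples)
        mean_t = sum(t for t, _ in samples) / n
        mean_v = sum(v for _, v in samples) / n
        num = sum((t - mean_t) * (v - mean_v) for t, v in samples)
        den = sum((t - mean_t) ** 2 for t, _ in samples)
        return num / den if den else 0.0

    def fitness(host_load, net_loads, w_host, w_net, w_host_idx, w_net_idx):
        # Assumed combining function: weighted sum of the host load index and the
        # load index of the least-loaded network attached to the host.
        least_net = min(net_loads)
        return w_host * w_host_idx * host_load + w_net * w_net_idx * least_net

    def pick_host(eligible_hosts):
        """eligible_hosts: {host: (host_load, net_loads, w_host, w_net,
        w_host_idx, w_net_idx)}; return the host with the minimum fitness."""
        return min(eligible_hosts, key=lambda h: fitness(*eligible_hosts[h]))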
5 Related Efforts
Palette is significantly different from other real-time languages, which can be divided into two groups: application development languages, and specification languages (or formalisms) used to describe timing constraints at the application or system level. Examples of application programming languages are Tomal [20], Pearl [24], Real-Time Euclid [21], RTC++ [16], Real-Time Concurrent C [7], Dicon [22], Chaos [3], Flex [23], TCEL [8], Ada95 [1], MPL [27], and CaRTSpec [41]. These languages include a wide variety of features that allow the
compiler (and possibly the run-time system) to check assertions or even to transform code to ensure adherence to timing constraints. Specification languages or metalanguages, such as ACSR [4], GCSR [2,32], and RTL [17], formalize the expression of different types of timing constraints and in some cases allow proofs of program properties based on these constraints. In some cases, such as RTL, these features have been folded into an application development language [9]. Palette is a specification metalanguage that is independent of any particular application language. Rather than providing real-time support within a particular application language, it provides support for expressing timing constraints for families of application programs written in a wide variety of programming languages. Unlike previous work in which timing constraints are described at a relatively small granularity, such as at the individual task level, Palette allows timing constraints to be expressed at a larger granularity, i.e., to span multiple programs.

Real-time languages also differ in the way they characterize a system’s interactions with its environment. Prior work has typically assumed that the effects of the environment on the system can be modeled deterministically. Palette expands this to include systems that interact with environments that are deterministic, stochastic, and dynamic. This is accomplished by modeling interactions through data and event streams that may have stochastic or dynamic properties. In order to handle dynamic environments, it is useful if the language includes features that can be related to dynamic mechanisms for monitoring, diagnosis and recovery [18,15,31,33]. Language support for run-time monitoring of real-time programs has been addressed in [9], [19], Real-Time Euclid [21], and Ada95 [1]. However, this prior work provides limited support for diagnosis and recovery actions. Palette extends the language features pertaining to diagnosis of timing problems, and to the migration or replication of software components to handle higher data stream or event stream loads (scalability).

Previous real-time languages allow the description of behaviors that are purely periodic or aperiodic. Palette extends language support to describe hybrid behaviors such as the transient-periodic behaviors [35]. It also allows for dynamic multi-dimensional timing constraints—deadlines that are expressed on an aggregation of execution cycles of functions—that, to our knowledge, cannot be described in any existing real-time language.

Fault-tolerance is an issue that has not yet been addressed in many real-time languages. This is an important QoS objective of systems that operate in hostile environments, however. Palette addresses fault-tolerance by providing abstractions to describe minimum redundancy levels of software components, fault-tolerance strategies, and hardware clusters that exhibit group failure modes.

With respect to other specification approaches designed for supporting reuse, Palette builds on the parameterized programming approach to software composition. This approach was originally introduced by Goguen with OBJ [12,13], and has been successfully used in a number of other programming languages and specification languages, such as LILEANNA [36], Ada [14], Ada95 [10], Standard ML [26], and RESOLVE [34]. While each of these languages contains some
features supporting software reuse or parameterized programming, none are appropriate for use in specifying real-time systems and their QoS constraints in a programming-language-independent fashion. Palette also builds on work with object-oriented languages, such as Eiffel [25], FOOPS [11], Ada95, and RESOLVE, that combine reuse-oriented language mechanisms with object-oriented features such as inheritance and dynamic binding. None of these languages, however, combines the features necessary to support the three elements of design for reuse, object-oriented design, and real-time QoS specification. Further, no other language research effort has aimed to develop a set of reusable, parameterized abstractions that characterize the core behaviors shared by applications in the real-time control systems domain.
6 Conclusions
We have presented a domain-specific specification language for the domain of real-time control systems. The core reusable abstraction of the language is the notion of a path: a functional abstraction that can be customized with environmental attributes, including properties of data and event streams. The path abstraction is used to define the composition of control system functions in terms of application programs and devices, and to express their non-functional performance objectives. Further, the intermediate representations that are constructed from the language specifications can be used by path-based QoS management techniques. This promotes reuse of the underlying techniques (e.g., QoS management strategies), since such techniques are tied to the reusable abstractions of the language itself. We have built a prototype compiler and a run-time system for the DeSiDeRaTa specification language. The language was used to specify the Anti-Air Warfare surface combatant system of the U.S. Navy and to ensure that its QoS requirements were met through the use of a resource management middleware system. Furthermore, experimental characterizations of the middleware illustrated its effectiveness in ensuring the desired QoS requirements. The language and the middleware technology have been transitioned to the Navy [37,6]. Ongoing efforts include using Palette to construct the emerging generation of real-time surface combatants of the Navy by reusing existing components as well as QoS management strategies.
References
1. International Standard ANSI/ISO/IEC-8652:1995. Ada 95 Reference Manual. Intermetrics, Inc., January 1995. 35, 36
2. H. B-Abdallah, I. Lee, and J-Y. Choi. A graphical language with formal semantics for the specification and analysis of real-time systems. In Proceedings of the IEEE Real-Time Systems Symposium, pages 276–286, December 1995. 36
3. T. Bihari and P. Gopinath. Object-oriented real-time systems. IEEE Computer, 25(12):25–32, December 1992. 35
4. J-Y. Choi, I. Lee, and H-L. Xie. The specification and schedulability analysis of real-time systems using ACSR. In Proceedings of the IEEE Real-Time Systems Symposium, pages 266–275, December 1995. 36
5. R. K. Clark, E. D. Jensen, and F. D. Reynolds. An architectural overview of the Alpha real-time distributed kernel. In Proceedings of the USENIX Workshop on Microkernel and Other Kernel Architectures, Seattle, April 1992. 24
6. Quorum. Available at http://www.darpa.mil/ito/research/quorum/index.html, 1999. 37
7. N. Gehani and K. Ramamritham. Real-time concurrent C: A language for programming dynamic real-time systems. Journal of Real-Time Systems, 3(4):377–405, December 1991. 35
8. R. Gerber and S. Hong. Semantics-based compiler transformations for enhanced schedulability. In Proceedings of the IEEE Real-Time Systems Symposium, pages 232–242, December 1993. 35
9. M. Gergeleit, J. Kaiser, and H. Streich. Checking timing constraints in distributed object-oriented programs. In Proceedings of the Object-Oriented Real-Time Systems (OORTS) Workshop, October 1995. Seventh IEEE Symposium on Parallel and Distributed Processing (SPDP). 36
10. D. S. Gibson. An introduction to RESOLVE/Ada95. Technical Report OSU-CISRC-4/97-TR23, The Ohio State University, April 1997. 36
11. J. A. Goguen and J. Meseguer. Extensions and foundations of object-oriented programming. SIGPLAN Notices, 21(10):153–162, October 1986. 37
12. J. A. Goguen, J. Meseguer, and D. Plaisted. Programming with parameterized abstract objects in OBJ. In D. Ferrari, M. Bolognani, and J. Goguen, editors, Theory and Practice of Software Technology, pages 163–193. North-Holland, Amsterdam, The Netherlands, 1983. 36
13. Joseph A. Goguen. Principles of parameterized programming. In Ted J. Biggerstaff and Alan J. Perlis, editors, Software Reusability, Volume I: Concepts and Models, pages 159–225. ACM Press, New York, NY, 1989. 36
14. Joseph Hollingsworth. Software Component Design-for-Reuse: A Language Independent Discipline Applied to Ada. PhD thesis, Dept. of Computer and Information Science, The Ohio State University, Columbus, OH, 1992. 36
15. D. Hull, A. Shankar, K. Nahrstedt, and J. W. S. Liu. An end-to-end QoS model and management architecture. In Proceedings of the IEEE Workshop on Middleware for Distributed Real-Time Systems and Services, pages 82–89, December 1997. The 18th IEEE Real-Time Systems Symposium. 36
16. Y. Ishikawa, H. Tokuda, and C. M. Mercer. An object-oriented real-time programming language. IEEE Computer, 25(10):66–73, October 1992. 35
17. F. Jahanian and A. K.-L. Mok. Safety analysis of timing properties in real-time systems. IEEE Transactions on Software Engineering, 12(9):890–904, 1986. 36
18. F. Jahanian, R. Rajkumar, and S. Raju. Run-time monitoring of timing constraints in distributed real-time systems. Journal of Real-Time Systems, 1994. 36
19. K. B. Kenny and K. J. Lin. Building flexible real-time systems using the Flex language. IEEE Computer, pages 70–78, May 1991. 36
20. R. B. Kieburtz and J. L. Hennessy. TOMAL—a high-level programming language for microprocessor process control applications. ACM SIGPLAN Notices, pages 127–134, April 1976. 35
21. E. Kligerman and A. D. Stoyenko. Real-Time Euclid: A language for reliable real-time systems. IEEE Transactions on Software Engineering, 12(9):941–949, September 1986. 35, 36
22. I. Lee and V. Gehlot. Language constructs for distributed real-time systems. In Proceedings of the IEEE Real-Time Systems Symposium, December 1985. 35
23. K. J. Lin and S. Natarajan. Expressing and maintaining timing constraints in FLEX. In Proceedings of the 9th IEEE Real-Time Systems Symposium, pages 96–105, December 1988. 35
24. T. Martin. Real-time programming language PEARL—concept and characteristics. In Proceedings of the IEEE Computer Society Second International Computer Software and Applications Conference (COMPSAC), pages 301–306, 1978. 35
25. Bertrand Meyer. Object-Oriented Software Construction, 2nd Ed. Prentice Hall, New York, NY, 1997. 37
26. Robin Milner, Mads Tofte, and Robert Harper. The Definition of Standard ML. MIT Press, Cambridge, MA, 1990. 36
27. V. M. Nirkhe, S. K. Tripathi, and A. K. Agrawala. Language support for the Maruti real-time system. In Proceedings of the 11th IEEE Real-Time Systems Symposium, pages 257–266, 1990. 35
28. B. Ravindran. Modeling and Analysis of Complex, Dynamic Real-Time Systems. PhD thesis, The University of Texas at Arlington, August 1998. 33
29. B. Ravindran, L. R. Welch, and C. Kelling. Building distributed, scalable, dependable real-time systems. In Proceedings of the Tenth IEEE International Conference on Engineering of Computer Based Systems, pages 452–459, March 1997. 25
30. B. Ravindran, L. R. Welch, and B. Shirazi. Resource management middleware for dynamic, dependable real-time systems. Journal of Real-Time Systems, to appear 1999. 32, 35
31. D. Rosu, K. Schwan, S. Yalamanchili, and R. Jha. On adaptive resource allocation for complex real-time applications. In Proceedings of the 18th IEEE Real-Time Systems Symposium, pages 320–329, December 1997. 36
32. A. Shaw. Reasoning about time in higher-level language software. IEEE Transactions on Software Engineering, 15(7):875–889, July 1989. 36
33. K. G. Shin and C.-J. Hou. Design and evaluation of effective load sharing in distributed real-time systems. IEEE Transactions on Parallel and Distributed Systems, 5(7):704–719, July 1994. 36
34. M. Sitaraman and B. W. Weide, editors. Special feature: Component-based software using RESOLVE. ACM SIGSOFT Software Engineering Notes, 19(4):21–67, October 1994. 36
35. S. Sommer and J. Potter. Operating system extensions for dynamic real-time applications. In Proceedings of the IEEE Real-Time Systems Symposium, pages 45–50, December 1996. 36
36. William Tracz. Formal Specification of Parameterized Programs in LILEANNA. PhD thesis, Dept. of Electrical Engineering, Stanford University, Stanford, CA, 1997. 36
37. High performance distributed computing (HiPer-D). Available at http://www.nswc.navy.mil/hiperd/index.shtml. 37
38. L. Welch, B. Ravindran, B. Shirazi, and C. Bruggeman. Specification and modeling of dynamic, distributed real-time systems. In Proceedings of the 19th IEEE Real-Time Systems Symposium, December 1998. To appear. 25
39. L. R. Welch, B. Ravindran, R. D. Harrison, L. Madden, M. W. Masters, and W. Mills. Challenges in engineering distributed shipboard control systems. In Proceedings of the Work-In-Progress Session, December 1996. The 17th IEEE Real-Time Systems Symposium. 25
40. L. R. Welch, B. A. Shirazi, B. Ravindran, and F. Kamangar. Instrumentation, modeling and analysis of dynamic, distributed real-time systems. International Journal of Parallel and Distributed Systems and Networks, 2(3):105–117, 1999. 31
41. L. R. Welch, A. D. Stoyenko, and T. J. Marlowe. Response time prediction for distributed periodic processes specified in CaRT-Spec. Control Engineering Practice, 3(5):651–664, May 1995. 35
42. L. R. Welch, P. V. Werme, L. A. Fontenot, M. W. Masters, B. A. Shirazi, B. Ravindran, and D. W. Mills. Adaptive QoS and resource management using a posteriori workload characterizations. In The Fifth IEEE Real-Time Technology and Applications Symposium, June 1999. 24
From Application Domains to Executable Domains: Achieving Reuse with a Domain Network ∗
Ulf Bergmann1,+ and Julio Cesar Sampaio do Prado Leite2
1 Departamento de Engenharia de Sistemas – Instituto Militar de Engenharia, Praça General Tibúrcio 80, Rio de Janeiro, 22290-240, Brasil
2 Departamento de Informática – Pontifícia Universidade Católica do Rio de Janeiro, Rua Marquês de São Vicente 225, Rio de Janeiro, 22453-900, Brasil
{bergmann,julio}@inf.puc-rio.br
∗ This work has been partially supported by CNPq and by the Pronex Project of Ministério da Ciência e Tecnologia.
+ This work was performed while this author was a Ph.D. student at Pontifícia Universidade Católica do Rio de Janeiro.
Abstract. Software generators are among the most effective methods for achieving software reuse. Describing an application using a domain specific language and then performing a single refinement step to an executable language is the approach used by most generators. This paper shows how to use the Domain Network (DN) concept, a set of interconnected domains, as a way to improve reuse in the context of software generation.
1 Introduction
We have been using Neighbors' Domain Network (DN) idea [1] in our work with the transformation engine Draco-PUC [2]. We believe that DN allows more effective reuse than that provided by usual software generators. A Domain Network is a set of domains interconnected by transformations. Each domain encapsulates the knowledge of a specific area and is described by a domain specific language (DSL). Transformations are constructed to allow a program written in a source DSL to be implemented by a different DSL. The use of DN in software generation has two main advantages. First, while a usual software generator uses a single refinement step, a DN-based generator provides more effective reuse by using several refinement steps. Second, we do not need to implement a new DSL from scratch, because we create transformations to other domains that are already represented as DSLs. This paper describes how we have used the concept of domain network, a core concept in the Draco paradigm [1]. In Section 2 we briefly present an overview of the paradigm and of the Draco-PUC machine. In Section 3 we describe the domain network we have been using, with focus on the partial DN used in the generation of
reverse engineering tools described in Section 4 and an example of its use described in Section 5. We conclude by comparing our results with the literature and suggesting future work.
2 The Draco Paradigm
The Draco paradigm may be seen and characterized from several viewpoints [3]. The original motivation for its development was to provide a way of building software from pre-existing components. It may also be seen as a generator, able to build and maintain a class of similar systems. A third viewpoint is that of high-level domain languages to be used in the systems construction process, supported by a transformation mechanism capable of generating executable programs from a given DSL specification. Behind all of these is a way of organizing components for reuse that is different from the approaches based on libraries. In the Draco paradigm, the reuse elements are the formal languages named domains. These languages are built with the objective of encapsulating the objects and operations of a given domain. The programs written in the domain languages need a refinement process to reach an executable language. In order to build software from the point of view of the Draco paradigm, it is first necessary that the domain knowledge be formalized, with well-defined syntax and semantics. Once these domain languages are available, it is possible to achieve reuse at the domain level of abstraction. Clearly, the paradigm has a very high initial cost, but on the other hand it will considerably lower the cost of writing software in a domain that has been characterized by a domain language. The paradigm pays off when a domain is reused several times.
The Draco-PUC machine, in turn, is a software system that aims to implement the paradigm. The role of the machine is to construct domains described by their specific languages. A domain must contain syntax and semantics. Draco-PUC is composed of a powerful general parser generator, a prettyprinter generator, and a transformation engine. Transformations can be intra-domain (horizontal) or inter-domain (vertical). The semantics of a domain are expressed by inter-domain transformations. The executable domains (like C) do not need reduction semantics, since an available compiler/interpreter already exists.
Figure 1 shows an example of domain construction in Draco-PUC. To create the lexical and syntactic specifications we use a Draco domain called GRM (grammar domain). This domain uses an extended BNF. Embedding commands from the Draco domain PPD (prettyprinter domain) in the syntactic specification provides the prettyprinter specification. The file cobol.grm in Figure 1 shows a partial implementation of the Cobol domain. The semantics for the domain are specified by transformations to other domains. These transformations are described using the Draco domain TFM (transformation domain). Basically, a transformation has a recognition pattern (the lhs) and a replacement pattern (the rhs): when the lhs matches any part of the original program, it is replaced by the rhs. The file cobol2c.tfm shows an example of the Cobol-to-C transformation.
[Figure 1 (domain construction in Draco-PUC). The left panel, cobol.grm, gives part of the lexical/syntactic specification of the Cobol domain, with embedded PPD prettyprinter commands:

/* . . . */
paragraph : label '.' .NL stat**'.' .( , , .NL , ) end_par ;
end_par : 'end' .sp label '.' | ;
label : ID | INT ;
stat : perform_stat | conditional_stat | atomic_stat | copy_file ;
/* . . . */
ID : [a-zA-Z0-9][a-zA-Z0-9]* ;
INT : [0-9]* ;

The right panel, cobol2c.tfm, gives one Cobol-to-C transformation (in the LHS, ID is an identifier of the Cobol grammar; in the RHS, IDENTIFIER is an identifier of the C++ grammar):

TRANSFORM GoTo
LHS: {{dast cobol.statement            /* recognition pattern */
GO TO [[ID L1]]
}}
POST-MATCH: {{dast cpp.stmt_list       /* Post-Match control point */
COPY_LEAF_VALUE_TO(str1,"L1");
FixName(str1);
SET_LEAF_VALUE("LABEL",str1);
}}
RHS: {{dast cpp.dstatement
goto [[IDENTIFIER LABEL]];             /* replacement pattern */
}}

Draco-PUC's parser generator, prettyprinter generator, and transformer generator turn these specifications into the Cobol domain and the Cobol2C transformer.]

Fig. 1. Domain Construction in Draco-PUC
Figure 2 shows the entire transformation structure. Before and after the usual matching sides of a production rule we have 5 possible control points. Pre-Match is activated each time the lhs is tested against the program. Match-Constraint is activated when there is a bind between a variable in lhs and a part of the program. Post-Match is activated after a successful match between the program and the lhs. Pre-Apply is activated immediately before the matched program section is substituted by the rhs. Post-Apply is activated after the program has been modified.
Fig. 2. Transformation Structure
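To make the role of these control points concrete, the following C++ sketch (our own illustration; the class and member names are hypothetical and are not Draco-PUC's actual interfaces) models a transformation as a pattern pair with optional hooks fired at the five points just described.

#include <functional>

struct DastNode;                                   // a node of the Draco abstract syntax tree
using Hook = std::function<bool(DastNode&)>;       // returning false aborts this application

struct Transformation {
    std::function<bool(DastNode&)> matchesLhs;     // recognition pattern (lhs)
    std::function<void(DastNode&)> applyRhs;       // replacement pattern (rhs)
    Hook preMatch;        // fired each time the lhs is tested against the program
    Hook matchConstraint; // fired when a pattern variable binds to a program fragment
    Hook postMatch;       // fired after a successful match
    Hook preApply;        // fired just before the matched section is replaced
    Hook postApply;       // fired after the program has been modified
};

// One application attempt at a single node, honoring the control points.
// (matchConstraint would be invoked from inside matchesLhs as variables bind.)
bool applyAt(Transformation& t, DastNode& node) {
    if (t.preMatch && !t.preMatch(node)) return false;
    if (!t.matchesLhs || !t.matchesLhs(node)) return false;
    if (t.postMatch && !t.postMatch(node)) return false;
    if (t.preApply && !t.preApply(node)) return false;
    t.applyRhs(node);
    if (t.postApply) t.postApply(node);
    return true;
}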
[Figure 3: a program is parsed into a DAST; transformers map one DAST to another; the prettyprinter unparses the result.]

Fig. 3. Overview of Draco-PUC use
The use of Draco-PUC is shown in Figure 3. The parser for the original program is used by Draco-PUC to convert the program into the internal form used by Draco-PUC (DAST, the Draco abstract syntax tree). Transformations are applied to modify the DAST and, at any time, we can use the pretty-printer to unparse the DAST.
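As a rough sketch of that cycle (ours; the names are hypothetical stand-ins, not Draco-PUC's real interfaces), the use of the machine can be pictured as parse, repeatedly transform, then unparse:

#include <string>

struct Dast { std::string text; };                     // stand-in for a full Draco abstract syntax tree

Dast parse(const std::string& src)           { return Dast{src}; }   // generated parser
bool applySomeTransformation(Dast& /*tree*/) { return false; }       // placeholder transformer step
std::string unparse(const Dast& tree)        { return tree.text; }   // generated prettyprinter

std::string refine(const std::string& program) {
    Dast tree = parse(program);                 // program -> DAST
    while (applySomeTransformation(tree)) {     // horizontal or vertical steps modify the DAST
    }
    return unparse(tree);                       // the DAST can be unparsed at any time
}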
3 Domain Networks
The process by which a specification written in a source domain reaches a target domain traverses several other domains. A set of domains that are linked together is called a domain network. This property distinguishes Draco-PUC from the usual generator generators, which typically use a single refinement step. By reusing previously built domain languages, Draco applies a divide-and-conquer strategy for building complex domains. Given this possibility, we decided to build a network of domains to support the generation of applications, starting with a high-level domain language, and chose to avoid strategies like the one proposed in [4], where the software engineer needs to write the transformations (even if eventually reusing the ones available in a transformation library). We have used the classification proposed by Neighbors [1], which divides domains into:
• application domains – which express the semantics of real-world domains or problem classes,
• modeling domains – which are domains built to help the implementation of application domains, by encapsulating concepts with a broad range of reusability, and
• executable domains – which are domains for which a generally accepted translator exists, as is the case for well-known programming languages.
We add one more type, the Draco domains, which are used to specify the parsers, prettyprinters, and transformations used in Draco-PUC.
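One way to picture a domain network concretely (our own illustration, not part of Draco-PUC) is as a directed graph whose vertices are domains and whose edges are the available inter-domain transformations; generating code then corresponds to following a path from an application domain to an executable domain, for example RDOO to EXL to C in the network described later in this paper.

#include <map>
#include <queue>
#include <string>
#include <vector>

// Edges: source domain -> domains reachable by one inter-domain transformation.
using DomainNetwork = std::map<std::string, std::vector<std::string>>;

// Breadth-first search for a refinement path from an application domain
// down to an executable domain (an empty result means no path exists).
std::vector<std::string> refinementPath(const DomainNetwork& dn,
                                        const std::string& from,
                                        const std::string& to) {
    std::map<std::string, std::string> parent{{from, from}};
    std::queue<std::string> frontier;
    frontier.push(from);
    while (!frontier.empty()) {
        std::string d = frontier.front();
        frontier.pop();
        if (d == to) {
            std::vector<std::string> path{d};
            while (d != from) { d = parent[d]; path.insert(path.begin(), d); }
            return path;
        }
        auto it = dn.find(d);
        if (it == dn.end()) continue;
        for (const auto& next : it->second)
            if (!parent.count(next)) { parent[next] = d; frontier.push(next); }
    }
    return {};
}

// Example: refinementPath({{"RDOO", {"EXL"}}, {"EXL", {"TFM", "C"}}, {"TFM", {"C"}}},
//                         "RDOO", "C") yields the path RDOO, EXL, C.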
[Figure 4 traces one program through part of the network: an application-domain program (prog.rdoo) is transformed by rdoo2exl into a modeling-domain (EXL) program whose data definition section declares the table moddep and whose extraction section contains an IF_MATCH rule for #include directives; the exl2tfm, tfm2c, and exl2c transformations then produce the corresponding representations in the executable domain C.]

Fig. 4. A Partial View of a Domain Network and its Connections
A traditional domain network has as a starting point an application domain, and may involve several application domains and modeling domains in order to be grounded in an executable domain. Our use of Draco-PUC already produced many
executable domains and transformations between them [2][5]. Recently [6] we produced our first two modeling domains: EXL (Extraction Language) and MdL (Model Language), used to help in the extraction and visualization of source code information. Here, we report on two application domains for design recovery (RDOO – Recover Design OO – and RDS – Recover Design Structured). A partial example of the domain network in use is presented in Figure 4. Starting with a program written in RDOO (prog.rdoo) and applying successive transformations, we obtain a program described in the executable domain C. The first transformation (rdoo2exl) creates the representation of the original program (application domain) in the modeling domain. The next three transformations (exl2tfm, tfm2c, and exl2c) produce the representation in the executable domain C. Note that the knowledge about table manipulation and transformation structure is embedded in the modeling domain and can be reused in the construction of many application domains.
[Figure 5 depicts the domain network currently available in the Draco-PUC machine, grouping its Draco domains, application domains, modeling domains, and executable domains (TFM, PPD, GRM, KB, EXL, RDOO, RDS, DSF, MdL, SDL, Fusion, Cobol, Prolog, Pascal, Clipper, VBasic, MBase, Fortran, Estelle, C, Math, Java, Mondel, Http, Txt, and C++), connected by transformations.]

Fig. 5. Draco-PUC Domain Network
Figure 5 shows the domain network currently available in the Draco-PUC machine. The main domains are briefly described in Figure 6. In the next section we focus on the domain network used in the generation of design recovery tools.

Domain       Description
GRM          Describes the lexical and syntactic specification of a domain language
TFM          Specifies transformations
PPD          Specifies the prettyprinter
RDOO         Recovers design from OO systems
RDS          Recovers design from structured systems
KB           Manipulates a knowledge base
EXL          Specifies source code extractions
MdL          Specifies the visualization of information extracted from source code
SDL          Defines a graphical user interface
Fusion       Describes models from the Fusion method [9]
C++, C, ...  Programming languages

Fig. 6. Main Domains
4 A Domain Network for the Generation of Design Recovery Tools
To be able to generate reverse engineering tools we have been using the domain network shown in Figure 7. It consists of two application domains in which the source code for design recovery is described. These domains encapsulate the knowledge necessary for recovering the design from structured (RDS) and from object-oriented (RDOO) systems. Starting with the original source code in RDS or RDOO and applying the transformations shown in Figure 7, we obtain the target code that will perform the design recovery from the original source files. This code is written in the following domains:
• TFM: an intermediate (Draco) domain that describes the extraction transformations used to populate the database with the information specified in the original source code. In order to be used by Draco-PUC, this code is transformed to C.
• C: contains two kinds of code: the code that exports information from the database to files that can be read by the viewer (VG files), and the extraction transformations previously described in TFM.
• MBase: contains the MetalBase [10] schema that is used to construct the tables needed to store the extracted information.
• C++: contains the code used to visualize the extracted information in a particular style.
[Figure 7: the application domains RDS and RDOO are transformed (rds2exl, rdoo2exl) into the modeling domains EXL and MdL; EXL programs are further transformed by exl2tfm (then tfm2c) and by exl2c, and MdL programs by mdl2cpp, yielding code in the executable domains C, the MBase schema, and C++; the Draco domain GRM and source languages such as Cobol and Java also appear in the network.]

Fig. 7. Domain Network for the generation of reverse engineering tools
Figure 8 shows an example of how the generated code is used to extract information from source files. The file ext.tfm, which contains the extractions specified in one of the application domains (RDOO or RDS) and translated to TFM, is used by Draco-PUC to search the syntax tree of the source files and populate the database. The database tables have been defined by the MBase schema generated by the exl2c transformation (Figure 7). Then the filter, specified in EXL and translated to C, exports partial information to files that can be read by the viewer. The files booch.cpp
and jsd.cpp specify the style used to show the information to the user. These files are created by a transformation from MdL to C++.
[Figure 8: Draco-PUC applies ext.tfm to the source files to populate the extraction database; the filter (filter.c) exports VG files from the database, which the viewer displays using the styles defined in booch.cpp and jsd.cpp.]

Fig. 8. Information extraction example
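To give a rough idea of what such a filter does, the following C++ sketch (our own illustration; the real filters are generated C code and the paper does not show the actual VG file format) exports module-dependence records from the extraction database as simple node and edge lines for a viewer.

#include <fstream>
#include <set>
#include <string>
#include <vector>

struct ModuleDependence { std::string from, to; };   // module `from` includes module `to`

// Write one hypothetical VG-style file: a node line per module, an edge line per dependence.
void exportForViewer(const std::vector<ModuleDependence>& table, const std::string& path) {
    std::ofstream out(path);
    std::set<std::string> modules;
    for (const auto& d : table) { modules.insert(d.from); modules.insert(d.to); }
    for (const auto& m : modules) out << "node " << m << "\n";
    for (const auto& d : table)   out << "edge " << d.from << " " << d.to << "\n";
}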
By using a domain network to generate this kind of tool, we obtain high-level reuse each time a transformation is applied: from RDS or RDOO to EXL (by the rds2exl or rdoo2exl transformations), we reuse the transformation, database, and programming language syntax knowledge; from EXL to TFM (by the exl2tfm transformation), we reuse the transformation syntax and the database manipulation language; from EXL to C (by the exl2c transformation), we reuse the database manipulation used to create filters; and from MdL to C++ (by the mdl2cpp transformation), we reuse the view structure and manipulation.
4.1 Application Domains
The two application domains encapsulate the knowledge about information extraction from source files, one for structured languages and the other for object-oriented languages. They are used to specify what kind of information is to be extracted from source files in a particular class of programming language. Basically, these domains allow the user to specify an extraction at a very high level of abstraction using syntax description and database transactions. The extraction languages are examples of little languages [11]; consequently, the languages for these domains are very small. By using these application domains, the user does not need to specify extractions for all the features of the source files; he or she can specify only the extractions needed for a particular problem. This is a very useful characteristic, since it yields smaller extraction code, better extraction performance, and a smaller database. An example of extraction code in RDS for structured source code follows.
program Y2k Source Cobol;
Extraction_Rules Begin
  module:   name , dependence;
  method:   name , call , variable_access;
  variable: name , type , struct;
End

This example shows extraction specifications for programs written in Cobol. What is to be extracted is specified in the extraction rules. In this case, the user wants to extract: module names, dependences between modules, method (paragraph) names, the static call graph, the variables accessed by each method, variable names, variable types, and the structure of each variable. The next example shows a program in RDOO for Java. Here we are interested in class names and in the name, type, and access type of each variable declared in a class.

Program extractPartial source Java;
Extraction_Rules Begin
  class: name, variable(name , access , type);
End

Figures 9 and 10 give an idea of what kind of information can be extracted using RDOO and RDS.

Property   Type         Description
Module     Name         Module name
           Dependence   Dependencies between modules (import, include, ...)
Class      Name         Class name
           Dependence   Module where the class is defined
           Inheritance  Superclass
           Aggregation  Aggregation between classes (when a class has a variable that is an instance of another class)
           Association  Association between classes (when a class has a variable that is a pointer to another instance)
Method     Name         Method name
           Access       Method access (public, private, ...)
           Call         Methods called statically
Variable   Name         Variable name
           Access       Variable access (public, private, ...)
           Type         Variable type

Fig. 9. Extraction in RDOO

Property   Type         Description
Module     Name         Module name
           Dependence   Dependencies between modules (import, include, ...)
Variable   Name         Variable name
           Access       Variable access (public, private, ...)
           Type         Variable type
Method     Name         Method name
           Access       Method access (public, private, ...)
           Call         Methods called statically

Fig. 10. Extraction in RDS
The programs in these domains are transformed to the equivalent program in the modeling domain EXL through the rdoo2exl or rds2exl transformations. These transformations are written in the transformation language (TFM domain) used by Draco-PUC.
4.2 Modeling Domains
Extraction Domain (EXL). The domain EXL [6] provides an abstract language for defining transformations for information extraction, information storage in tables, and the manipulation of this information. The EXL domain language has three sections: a data definition section, an extraction method definition section, and an exportation section. The sections are interconnected. For instance, if we include a table record, the semantics for inclusion are automatically updated in the extraction section and the semantics for reading also change in the exportation section. The design of this language was strongly influenced by fourth-generation database management languages; for this reason it contains high-level commands such as FOR_EACH and EXIST. In the extraction section, on the other hand, EXL aims to simplify the extraction work by means of commands better suited to these tasks, such as the IF_MATCH command. An example of the extraction section follows; it discovers the dependencies between C++ modules (source files). The result is stored in the MetalBase [10] database.

EXTRACTION SECTION
PUBLIC BOTTOM_UP_METHOD Analysis
IF_MATCH {{dast cpp.preprocessor
  #include [[_FILENAME FN]]
}}
DO
  GET_VAR FN AT moddep.nomedependente;
  strcpy(moddep.nome,module_name);
  INSERT_REG moddep;
END_IF

This example shows EXL code with a search method (PUBLIC BOTTOM_UP_METHOD Analysis) and a pattern-matching command (IF_MATCH ... DO ... END_IF). The command recognizes the pattern in the C++ DAST (dast cpp.preprocessor) and copies the value of FN to the field nomedependente of the table moddep. The other field of this table, nome, is filled with the value of the global variable module_name, which stores the current module name. After that, the current record is inserted into the table moddep, asserting the existence of the dependence between the current module and the module specified in the include statement. EXL is not a code analyzer language [12][13]; EXL programs traverse a DAST (representing the system to be recovered), find specific patterns, and group them in a repository. The reverse engineer specifies the traversal strategy and the grouping according to the EXL syntax.
Model Domain (MdL). The domain MdL helps in the construction of the recovered models. These models are composed of objects that can be nodes, edges, and pages. A page contains the edges and nodes, and each edge has a source node and a destination node. Since objects model these diagrams naturally, the object paradigm was the one we chose when writing the MdL domain. The MdL domain was written using a very useful characteristic of the Draco paradigm: the ability to write programs mixing several languages. The MdL domain works together with the C++ domain, in which we write the methods of the MdL classes. In the MdL domain we describe the behavior of the three main objects, so the MdL domain was designed as a class definition language. In the following example we show the node "Sequential" of the JSD entity structure diagram [16].

NODE Sequential {
  METHOD: {{method_body CPP.function_definition .RC("vg.ctx")
    virtual void Draw(VGDC *dc) {
      float strw,strh;
      int strx,stry;
      dc->DrawRectangle(x, y, w, h);
      // keep the drawing inside the node region
      dc->SetClippingRegion(x,y,w,h);
      // calculate the title position
      dc->GetTextExtent(title,&strw,&strh);
      strx=x+(w-strw)/2;
      stry=y+(h-strh)/2;
      DrawText(dc,title, strx,stry);
      // draw the special node character
      DrawText(dc,"*", x+w-13,y+1);
      dc->DestroyClippingRegion();
    }
  }}
};

In this example we describe the "Sequential" class, which embodies the "Draw" method. The syntax for the domain change appears after the word "METHOD:", where there is an indication of a change from the MdL domain, by the rule "method_body", to the CPP (C++) domain, by the rule "function_definition", using the context file "vg.ctx". The drawing method draws the rectangle, followed by the title and then by the indication of iteration, selection, or sequence. The methods "SetClippingRegion" and "DestroyClippingRegion" take care of not drawing outside the established region. The great advantage of MdL is that the defined classes completely encapsulate the dynamic loading of models, a characteristic not available in C++ and very important for our reverse engineering architecture.
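A minimal C++ model of that object structure (ours, not the generated viewer code; only the Draw interface and the page/node/edge composition are taken from the description above) could look like this:

#include <memory>
#include <vector>

struct VGDC;                              // drawing context, as used by the MdL-generated classes

struct Node {                             // e.g. the "Sequential" node overrides Draw
    float x = 0, y = 0, w = 0, h = 0;
    virtual ~Node() = default;
    virtual void Draw(VGDC* dc) = 0;
};

struct Edge {                             // each edge has a source node and a destination node
    Node* source = nullptr;
    Node* destination = nullptr;
    virtual ~Edge() = default;
    virtual void Draw(VGDC* dc) = 0;
};

struct Page {                             // a page contains the nodes and edges of one diagram
    std::vector<std::unique_ptr<Node>> nodes;
    std::vector<std::unique_ptr<Edge>> edges;
    void Draw(VGDC* dc) {
        for (auto& e : edges) e->Draw(dc);
        for (auto& n : nodes) n->Draw(dc);
    }
};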
5 Example
Our analysis of the literature on reverse engineering concludes that there are four major aspects any reverse engineering tool must address: how to extract information, how to store the information, how to visualize the information, and how to selectively visualize parts of the recovered information. Based on that, Freitas and Leite [6] designed an approach that implements these requirements using a transformation tool to generate this kind of tool. Freitas and Leite [6] described all the necessary tool specifications using the modeling domains EXL and MdL. We extended this approach using the application domains described in Section 4.1. Figure 11 shows the three main activities of our approach: Extraction, Visualization, and Execution. The Extraction is performed by creating two source files: one using the domain RDOO to specify the needed extractions, and another using the domain EXL to describe a filter that will be applied to export a subset of the extracted information. By applying transformations on these files we obtain: a transformation file (.tfm file) that can be used to perform the extractions and populate the database; a file containing the MetalBase schema (.s file) that will be used to create the necessary database tables; and a C source file that selects information from the database and creates several files (VG files) that can be read by the viewer.
[Figure 11: in the Extraction Building phase, the RDOO specification (rdtool.rdoo) and the EXL filter specification (filter.exl) are transformed by rdoo2exl, exl2tfm, and exl2c into rdtool.tfm, the MBase schema, and filter.c; in the Visualization Building phase, mod.mdl, cdg.mdl, and jsd.mdl are transformed by mdl2cpp into mod.cpp, cdg.cpp, and jsd.cpp; at run time the generated rdtool transformer populates the extraction database, the compiled filter produces VG files, and the VG viewer displays them.]

Fig. 11. Overview of Recover Design C++ Tool
In the Visualization phase we use the MdL domain to specify the particular presentation style that will be used by the viewer. This phase produces the C++ source files that are used by the viewer. The Execution uses the results of the earlier phases to extract the information and show it to the user. We have used this approach to generate a tool that recovers three types of information from C++ code: the system structure, the relationships between objects, and the internal structure of a class. For that we have used three well-known software engineering diagrams: the module diagram [14], the association diagram [15], and the entity structure diagram [16]. The generated tool was applied to recover design information of the same real system used in [6], a medium-size system called wxWeb [17] that implements an HTML browser. This public-domain browser has the major functionality of the well-known Netscape and Internet Explorer, with features like mail and authentication. The system wxWeb is written in C++ and uses Internet technology. To recover the structure of wxWeb we wrote one extraction specification in RDOO, one filter specification in EXL, and three specifications in MdL. The sizes of the original, intermediate, and final files are shown in Figure 12. We started with 540 lines of code (original files) and ended up with 2280 lines of code (final files).

Original files          Intermediate files      Final files
Rdtool.rdoo      10     Rdtool.exl     630      Rdtool.tfm     980
Filter.exl      170                             Rdtool.s        50
Cdg.mdl         120                             Filter.c       430
Mod.mdl         120                             Booch.cpp      250
Jsd.mdl         120                             Jsd.cpp        320
                                                Mod.cpp        250

Fig. 12. File sizes (lines of code)
Figure 13 shows a partial example of a command described in RDOO and the corresponding commands in EXL and TFM generated by the transformations applied by Draco-PUC. The command in RDOO specifies that information about module names and dependences must be extracted from C++ source files. The first transformation (rdoo2exl) produces a new file (rdtool.exl) that contains the definition of the tables where the information will be stored and the extraction transformation that looks for module dependences in the C++ source files (#include directives) and inserts this information into the tables. The next transformation (exl2tfm) produces a new file (rdtool.tfm) with the extraction transformations that are used by Draco-PUC to perform the extractions. The transformation exl2c creates the MBase schema. Note that in RDOO we are reusing the extraction knowledge encapsulated by the EXL domain, which in turn reuses transformation and database knowledge. Comparing our approach with the one used in [6], we have generated a very similar design recovery tool, but from a more abstract specification, achieving reuse of the EXL code encapsulated in the application domain.
[Figure 13: the RDOO program Rdtool.rdoo (extracting module name and dependence; class name and inheritance; method call and access; and variable name and type) is transformed by rdoo2exl into Rdtool.exl, whose data definition section declares the tables modulo and moddep and whose extraction section contains the IF_MATCH rule for #include directives; exl2tfm then generates the transformation set in Rdtool.tfm, and exl2c generates the MetalBase schema Rdtool.s defining the relations modulo and moddep.]

Fig. 13. Transformations Example
We have also used the same approach to generate a tool that detects Y2K errors in Cobol source files. To do this, we wrote one extraction specification in RDS and one error locator specification in EXL. The sizes of these source, intermediate, and final files are shown in Figure 14. We started with 448 lines of code (original files) and ended up with 1810 lines of code (final files).

Original files          Intermediate files      Final files
Y2k.rds           8     Y2k.exl        620      Y2k.tfm       1100
Locerr.exl      440                             Y2k.s           50
                                                LocErr.c       660

Fig. 14. File sizes (lines of code)
6 Conclusion
Comparing the use of domain networks with software generators that add domain-specific extensions to industrial programming languages [18][19], we believe that our approach has several advantages. First, it simplifies the task of creating a DSL: we can create the necessary translation from a new DSL to an executable language through transformations to a set of intermediate domains that already have a translation to an executable language. Using intermediate domains, we can describe the needed transformations at a more abstract level. Second, by using a DSL we use the syntax of the application domain and not that of the host language. Observing how multiple refinement steps occur through the domain network gives an idea of how reuse is achieved. For instance, once a domain A has been created by a language specification and its translation into another domain B has been specified, we can create several other domains that will be translated to domain A. In this way, all information about how domain A works is reused. We believe that this approach is a more effective way of achieving reuse than using a single refinement step from a DSL directly to an executable language, the technique most current software generators still use. We used the domain network concept in the context of a transformation system implemented by the Draco-PUC machine. This powerful combination allows us not only to compose components, like JTS [18], but also to generate individual pieces of code that, once separately compiled, communicate during their execution (e.g., the MetalBase schema generation). Our solution to the specific problem of generating reverse engineering tools (Section 5) is similar to the one proposed by Canfora [20]. However, we use a language based on the application domain (RDOO) instead of a language that is "strange" to the user, like the algebraic representation proposed by Canfora; this provides a higher abstraction level, simplifying the reverse engineer's task. Future work aims at the production of more domains, with special emphasis on applying reverse engineering in conjunction with the concept of reengineering [21].
7 Acknowledgement
We would like to express special thanks to James M. Neighbors for his valuable comments. We also thank the anonymous reviewers for their helpful comments.
8 References
1. Neighbors, J., Software Construction Using Components, PhD thesis, University of California at Irvine, 1980. http://www.BayfrontTechnologies.com/thesis.htm
2. Leite, J.C.S.P., Sant'Anna, M., Freitas, F.G., Draco-PUC: a Technology Assembly for Domain Oriented Software Development, Proceedings of the Third International Conference on Software Reuse, IEEE Computer Society Press, 94-100, 1994.
3. Freeman, P., A Conceptual Analysis of the Draco Approach to Constructing Software Systems, IEEE Transactions on Software Engineering, SE-13(7):830-844, July 1987.
4. Harris, D.R., Yeh, A.S., Reubenstein, H.B., Extracting Architectural Features from Source Code, Automated Software Engineering, 3(1/2), Jul. 1996, 109-138.
5. Leite, J.C.S.P., Sant'Anna, M. and Prado, A.F., Porting Cobol Programs Using a Transformational Approach, Journal of Software Maintenance: Research and Practice, John Wiley & Sons Ltd., Vol. 9, pp. 3-31, 1997.
6. Freitas, F.G. and Leite, J.C.S.P., Reusing Domains for the Construction of Reverse Engineering Tools, Proceedings of the 6th Working Conference on Reverse Engineering, IEEE Computer Press, 1999.
7. Leite, J.C.S.P., Sant'Anna, M. and Prado, A.F., Porting Cobol Programs Using a Transformational Approach, Journal of Software Maintenance: Research and Practice, John Wiley & Sons Ltd., Vol. 9, pp. 3-31, 1997.
8. Leite, J.C.S.P. and Freitas, F.G., Reusing Domains for the Construction of Reverse Engineering Tools, Proceedings of the 6th Working Conference on Reverse Engineering, IEEE Computer Press, 1999.
9. Coleman, D. et al., Object-Oriented Development: The Fusion Method, Prentice-Hall, 1994.
10. http://cui.unige.ch/~scg/FreeDB/FreeDB.6.html
11. Bentley, J., Little Languages, Communications of the ACM, 29(8), Aug 1986, pp. 711-721.
12. Devanbu, P.T., GENOA - A Customizable Language and Front-End Independent Code Analyzer, Proceedings of the 14th International Conference on Software Engineering, IEEE Computer Society Press, pp. 307-319, 1992.
13. Welty, C.A., Augmenting Abstract Syntax Trees for Program Understanding, Proceedings of Automated Software Engineering, IEEE Computer Society Press, 1997.
14. Booch, G., Object Oriented Analysis and Design with Applications, second edition, Benjamin/Cummings, 1994.
15. Rumbaugh, J., et al., Object Oriented Modeling and Design, Prentice Hall, ISBN 0-13-629841-9, 1991.
16. Jackson, M., System Development, Prentice-Hall, 1983.
17. http://www.aiai.ed.ac.uk/~jacs/wxwin.html
18. Batory, D. and Smaragdakis, Y., JTS: Tools for Implementing Domain-Specific Languages, Proceedings of the Fifth International Conference on Software Reuse, IEEE Computer Society Press, 1998.
19. Hudak, P., Modular Domain Specific Languages and Tools, Proceedings of the Fifth International Conference on Software Reuse, IEEE Computer Society Press, 1998.
20. Canfora, G. et al., An Extensible System for Source Code Analysis, IEEE Transactions on Software Engineering, SE-24(9):721-740, September 1998.
21. Penteado, R., et al., Reengineering of Legacy Systems Based on Transformation Using the Object Oriented Paradigm, Proceedings of the 5th Working Conference on Reverse Engineering, IEEE Computer Press, 1998.
Reuse of Knowledge at an Appropriate Level of Abstraction – Case Studies Using Specware
Keith E. Williamson, Michael J. Healy, and Richard A. Barker
The Boeing Company, Seattle, Washington
[email protected]
Abstract. We describe an alternative paradigm for software reuse that attempts to reuse software derivation knowledge at an appropriate level of abstraction. Sometimes that level is a domain theory that is involved in stating system requirements. Sometimes it is a design pattern. Sometimes it is a software component. Often it is a combination of these. We describe our experiences using Specware for deriving software and reusing software derivations.
1 Introduction
With the advent of intelligent computer-aided design systems, companies such as Boeing are embarking on an era in which core competitive engineering knowledge and design rationale are being encoded in software systems [25]. The promise of this technology is that this knowledge can be leveraged across many different designs, product families, and even different uses (e.g., manufacturing process planning). However, this promise has been hard to achieve. The reasons for this are complex, but a large challenge arises from the attempt to reuse software. Programmers who try to reuse software components written by other people have run into several problems. First, software components often have assumptions or constraints on their use that are not clearly or explicitly stated. Second, even when these assumptions are clearly and explicitly stated, the assumptions that were applied when the software was originally written may turn out to be different from the assumptions that apply when someone else tries to reuse that component at some point in the future. Third, when this happens, it may be difficult to adapt the software component to different requirements, since the original software design rationale is often not stated clearly and explicitly. Generally, these problems arise from a lack of traceability of requirements, through the design process, to software. A fundamental problem in this paradigm of reuse is that what we are trying to reuse is software: the end artifact in a long and complicated process that goes from requirements, through a process of design, to an implementation built on top of some virtual machine. Knowledge sharing and reuse cannot easily and uniformly occur at the software level alone.
1.1 Motivations for Software Reuse
In attempting to improve the state of the art in reuse, let us back up and ask, "Why are we attempting to reuse software in the first place?" From a software engineering standpoint, it is desirable to:
• Increase software system quality
• Decrease software development time and costs
• Decrease software maintenance time and costs
Industry demands that software development and maintenance be made faster, cheaper, and better. Conspiring against this are high labor turnover rates in the software industry, and increasingly in American industry more broadly. Institutional memory concerning software development and maintenance is being lost in this turnover. Is there another way to capture, structure, use, and then reuse this knowledge for the purposes of improved software development and maintenance?
1.2 Reuse of Software Derivations
Within the field of automated software engineering, there is an approach to software development and maintenance that appears to solve some of these problems [19,21,22,25,1,2]. In essence, this paradigm for software development and maintenance is one that allows the capture and structuring of formal requirement specifications, design specifications, implementation software, and the refinement processes that lead from requirements to software. In this approach, the refinement process can guarantee correctness of the synthesized software. By recording, modifying, and then replaying the refinement history, we are able to more easily maintain the software. By capturing, abstracting, and structuring requirement, design, and implementation knowledge in a modular fashion, we are able to more easily reuse this knowledge for other applications. Knowledge reuse occurs at whatever level of abstraction is most appropriate. Sometimes that level is a domain theory that is involved in stating requirements. Sometimes it is a design pattern. Sometimes it is a software component. Often it is a combination of these. Even within these different levels – requirement, design, and implementation – knowledge can be teased apart, abstracted, and structured in such a way as to make knowledge components that can be more easily reused in other, related applications. As an example, a simplified theory of physics can be stated once and used in many different contexts. As another example, a specification of optimization problems can be stated once and used for stating requirements, designs, and implementations. In this way, software derivation knowledge is captured, abstracted, and structured so that the intellectual effort that goes into system development can be leveraged for other purposes in the future. It is interesting to note that when this technology is applied to software systems whose outputs are designs for airplane parts, the design rationale that is captured is not only software engineering design rationale, but also design rationale from other, more traditional, engineering disciplines (e.g., mechanical, material, manufacturing, etc.). This suggests the technology provides an approach to general systems engineering that enables one to structure and reuse engineering knowledge broadly.
1.3 Overview of Paper
We begin this paper with some formal preliminaries, followed by a brief description of the software development tool Specware™. We then describe our experiences using this tool for the derivation and maintenance of Boeing engineering software. Along the way, we describe an excellent example of the challenges of reuse at the software level. Finally, we describe our current work on extending this technology into the realm of requirements elicitation.
2 Formal Preliminaries
The field of category theory [13,16] provides the foundational theory, which was applied to systems theory [5,9] and embodied in the software development tool Specware™ [19,21,22].
2.1 Category of Signatures
A signature consists of the following:
1. A set S of sort symbols
2. A triple O = <C, F, P> of operators, where:
   a. C is a set of sorted constant symbols,
   b. F is a set of sorted function symbols, and
   c. P is a set of sorted predicate symbols.
Sorts correspond to types. Thus, a signature gives typing information about the various symbols used in a specification. A signature morphism is a consistent mapping from one signature to another (from sort symbols to sort symbols, and from operator symbols to operator symbols). The category Sign consists of objects that are signatures and morphisms that are signature morphisms. Composition of two morphisms is the composition of the two mappings.
2.2 Category of Specifications
A specification consists of:
1. A signature Sig = <S, O>, and
2. A set Ax of axioms over Sig.
Given two specifications <Sig1, Ax1> and <Sig2, Ax2>, a signature morphism M between Sig1 and Sig2 is a specification morphism between the specifications iff
∀a ∈ Ax1, Ax2 ⊢ M(a).
That is, every one of the axioms of Ax1, after having been translated by M, can be proved to follow from the axioms in Ax2. This assures that everything that is provable from Ax1 is provable from Ax2 (modulo translation). Of course, Ax2 may be a stronger theory. The category Spec consists of objects that are specifications and morphisms that are specification morphisms.
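As a small worked example (ours, not taken from the paper): let Spec1 consist of a single sort E, a predicate r : E × E, and the axiom ∀x. r(x,x), and let Spec2 be a specification of the integers Int with the predicate ≤ and its usual axioms. The signature morphism M with E ↦ Int and r ↦ ≤ is a specification morphism, since the only proof obligation, Ax2 ⊢ M(∀x. r(x,x)), i.e., Ax2 ⊢ ∀x. x ≤ x, holds by the reflexivity of ≤. Spec2 is of course a stronger theory; the morphism only requires that the translated axioms of Spec1 be provable in it.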
2.3 Diagrams and Colimits

A diagram in a category C is a collection of vertices and directed edges consistently labeled with objects and morphisms of C. A diagram in the category Spec can be viewed as expressing structural relationships between specifications. The colimit operation is the fundamental method in Specware™ for combining specifications. This operation takes a diagram of specifications as input and yields a specification, commonly referred to as the colimit of the diagram. See figures 1 and 2. The colimit specification contains all the elements of the specifications in the diagram, but elements that are linked by arrows in the diagram are identified in the colimit. Informally, the colimit specification is a shared union of the specifications in the original diagram. Shared here means that sorts and operations that are linked by the morphisms of the diagram are identified as a single sort and operation in the colimit. For example, in figure 2, the occurrences of the function weight in the panel specification and the manufactured parts specification are linked by the arrows emanating from the part specification (which introduces the weight function). The colimit operation can be used to compose any specifications, which can represent problem statements, theories, designs, architectures, or programs. The fact that specifications can be composed via the colimit operation allows us to build specifications by combining simpler specifications modularly, just as systems can be composed from simpler modules.
Figure 1. Colimit of a Specification Diagram. The diagram relates specifications for Real Numbers (imported by the others), Physics, Geometry, and Materials:
Physics: physical-object, g, weight, mass, volume, density; weight(p) = mass(p) * g; mass(p) = volume(p) * density(p)
Geometry: geometry, volume, box, height, length, width, box-volume, cylinder, radius, depth, cylinder-volume; box-volume(b) = height(b) * length(b) * width(b); cylinder-volume(c) = depth(c) * pi * radius(c)^2
Materials: material, aluminum-7075; if material(p) = aluminum-7075 then density(p) = 20
Their colimit is the Parts specification, containing all of these sorts, operations, and axioms: part, g, weight, mass, volume, ..., material, aluminum-7075, geometry, box, box-volume, ..., weight(p) = mass(p) * g, ..., if material(p) = aluminum-7075 ..., box-volume(b) = ..., ...
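The "shared union" reading of the colimit can be illustrated with a small sketch that mirrors the weight example from Figure 2: specifications are reduced to bare symbol sets, the diagram's morphisms to pairs of symbols to be identified, and a union-find structure glues linked symbols together so that each shared symbol appears only once in the result. This is illustrative Python, not Specware, and the symbol sets are abbreviated.

# Illustrative colimit of specifications-as-symbol-sets (a shared union).
# 'links' are the pairs of symbols identified by the diagram's morphisms.

def colimit(specs, links):
    symbols = [(name, sym) for name, syms in specs.items() for sym in syms]
    parent = {s: s for s in symbols}

    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    for a, b in links:
        parent[find(a)] = find(b)

    # One representative per equivalence class = one element of the colimit.
    return {find(s) for s in symbols}

specs = {
    "Parts":             {"weight", "volume"},
    "Panels":            {"weight", "volume", "number-of-holes"},
    "ManufacturedParts": {"weight", "volume", "manufacturing-cost"},
}
# Arrows from the Parts specification into Panels and Manufactured Parts.
links = [(("Parts", "weight"), ("Panels", "weight")),
         (("Parts", "weight"), ("ManufacturedParts", "weight")),
         (("Parts", "volume"), ("Panels", "volume")),
         (("Parts", "volume"), ("ManufacturedParts", "volume"))]

print(len(colimit(specs, links)))   # 4: weight and volume each appear once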
Figure 2. Another Colimit followed by a Specification Morphism. The Parts specification (part, weight, mass, volume, height, ...; weight(p) = mass(p) * g, ...; box-volume(b) = ..., ...) is imported by two specifications:
Panels: panel, boundary, hole, number-of-holes, vertical separation, horizontal separation; volume(p) = box-volume(boundary(p)) - (number-of-holes(p) * cylinder-volume(hole(p))); material(p) = aluminum-7075
Manufactured Parts: part, manufacturing-cost, cost-of-raw-stock, cost-of-drilling-hole; if material(p) = aluminum-7075 then cost-of-drilling-hole(p,h) = 2 * cylinder-volume(h); cost-of-raw-stock(p) = 5 * raw-stock-volume(p)
The colimit of this diagram is imported into the Manufactured Panels specification: panel, cost; raw-stock-volume(p) = box-volume(boundary(p)); manufacturing-cost(p) = cost-of-raw-stock(p) + number-of-holes(p) * cost-of-drilling-hole(p, hole(p)); cost(p) = (5 * manufacturing-cost(p)) + (2 * weight(p))
3 Specware - A Software Development Tool
Specware™ is a software development and maintenance environment supporting the specification, design, and semi-automated synthesis of correct-by-construction software. It represents the confluence of capabilities and lessons learned from Kestrel's earlier prototype systems (KIDS [18], REACTO [23], and DTRE [4]), grounded on a strong mathematical foundation (category theory). The current version of Specware™ is a robust implementation of this foundation. Specware™ supports automation of:
• Component-based specification of programs using a graphical user interface
• Incremental refinement of specifications into correct software in various target programming languages (e.g., currently C++ and LISP)
• Recording and experimenting with alternate design decisions
• Domain-knowledge capture, verification and manipulation
• Design and synthesis of software architectures
• Design and synthesis of algorithm schemas
• Design and synthesis of reactive systems
• Data-type refinement
• Program optimization
The Specware™ system has some of its roots in the formal methods for software development community. Within this community, there are numerous languages that have been used for specifying software systems; e.g., Z [20], VDM [3], and Larch among many others [6]. Of the many formal specification languages, the Vienna Development Method (VDM) is one of the few that tries to formally tie software requirement specifications to their implementations in programming languages. This system is perhaps the closest to Specware™ in that it allows an engineer to specify a software system at multiple levels of refinement. Each step of the refinement process can be formally proven. VDM tools allow for the capture and discharging (sometimes manually) of “proof obligations” that arise in the refinement of a specification from one level to another. Specware™ differs from VDM by having category theory as its fundamental underlying theory. This appears to give several benefits. It allows for a greater level of decomposition, and then composition, of specifications (the use of diagrams and colimits provides a general framework for this). It provides a solid basis [10,15] for preserving semantics in all refinement operations - not only within Slang (Specware’s specification language), but also across different logics (e.g., from the logic of Slang into the logics underlying target programming languages). It allows for parallel refinement of diagrams [22], which helps with the scalability of the technology. Multiple categories underlie Specware™ - signatures, specifications, shapes, interpretations, and so forth. Within each category, the notions of morphism, diagram, and colimit play an analogous role in structuring and composing objects. The utility of category theory lies in its general applicability to many contexts.
4 Stiffened Panel Layout
We began our evaluation of this technology with an example of a software component that was being considered for inclusion in a component library for structural engineering software [24,25]. The software component solves a structural engineering layout problem of how to space lightening holes in a load-bearing panel. The component was originally part of an application that designs lay-up mandrels, which are tools that are used in the manufacturing process for composite skin panels.

At first glance, the software component appears to solve the following one-dimensional panel layout task (this was pulled from a comment in the header of this component). Given a length of a panel, a minimal separation distance between holes in the panel, a minimal separation distance between the end holes and the ends of the panel, and a minimum and maximum width for holes, determine the number of holes, and their width, that can be placed in a panel. See Figure 3.

This software component solves a specific design task that is part of a broader design task. Prior to the invocation of this function, a structural engineer has determined a minimum spacing necessary to assure structural integrity of the panel. Upon closer inspection of the software, one realizes that this function actually minimizes the number of holes subject to the constraints specified by the input parameter values. The original set of constraints defines a space of feasible solutions. Given a set of parameter values for the inputs, there may be more than one solution to
picking the number of holes and their width so that the constraints are satisfied. So, the software documentation is incomplete. However, going beyond this, one is inclined to ask, "Why did the programmer choose to minimize the number of holes?" Is there an implicit cost function defined over the feasible solutions to the original set of constraints? If so, what is it? Presumably, this is all part of the engineering design rationale that went into coming up with the (not fully stated) specification for the software component in the first place. If we were to use this component to design a panel that was to fly on an airplane, the panel would be structurally sound, but not necessarily of optimal cost (e.g., not making the best trade-off between manufacturing cost and overall weight of the panel).
Figure 3. Stiffened Panel Layout Problem (the figure shows a panel of a given width, end separations between the end holes and the panel edges, and separation distances between adjacent holes)
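The layout task can be restated as an explicit search over the feasible space, which makes the point raised above visible: several combinations of hole count and hole width may satisfy the constraints, so "minimize the number of holes" is an additional, undocumented design decision. The sketch below is our own illustration with hypothetical parameter names and a coarse width grid, not the original component.

# Enumerate feasible (number-of-holes, hole-width) layouts for the 1-D panel task.
# Parameter names are hypothetical; hole widths are searched on a coarse grid.

def feasible_layouts(panel_length, min_sep, end_sep, min_width, max_width, step=0.5):
    layouts = []
    n = 1
    while 2 * end_sep + n * min_width + (n - 1) * min_sep <= panel_length:
        w = min_width
        while w <= max_width:
            if 2 * end_sep + n * w + (n - 1) * min_sep <= panel_length:
                layouts.append((n, w))
            w += step
        n += 1
    return layouts

layouts = feasible_layouts(panel_length=100, min_sep=5, end_sep=10,
                           min_width=4, max_width=8)
print(len(layouts))   # typically many feasible solutions, not one
print(min(layouts))   # fewest holes (ties broken here by smallest width); the
                      # original component minimizes the number of holes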
Rather than put this incompletely documented software component into a reuse library, we seek to explicate the engineering design rationale and tie it directly to the software. For this purpose, we used the Specware™ system to first document the structural and manufacturing engineering design rationale leading to the software component specification, and then generate software that provably implements the specification (which in turn requires documenting software design rationale). Specware™ allows specifications to be composed in a very modular fashion (using the colimit construction from category theory). In this example, we were able to generate a specification for basic structural parts by taking a colimit of a diagram, which relates specifications for basic physical properties, material properties, and geometry (see Figure 1). A specification for stiffened panels was derived by taking the colimit of another diagram, this time relating basic structural parts to panel parts and manufactured parts (see Figure 2). This specification was then imported into another, which added manufacturing properties that are specific to stiffened panels (it is here that we finally state the (originally implicit) cost function). From this specification, and another describing basic optimization problems, we are able to formally state the panel layout problem. This specification was then refined into Lisp software [25].
4.1 The Challenge of Software Maintenance

As business processes change, software requirements must change accordingly. Some software changes are straightforward. Other changes are harder to make, and
the inherent complexity is not always obvious. In the stiffened panel layout example, suppose there is a change to the material of the panel. If the density of the new material is less than five, then the search algorithm that was used is no longer applicable [25]. In fact, with this single change to the cost function, it is more cost effective to have no holes in the panel at all! What is fundamentally missing in the software (the end artifact in the software development process) is the fact that a design decision, that of picking a particular algorithm to solve a class of optimization problems, is reliant on a subtle domain constraint. Indeed, in the original software component, the cost function is nowhere to be seen in the software, nor in the documentation that was associated with it. Knowledge sharing and reuse cannot easily and uniformly occur at the software level.

If we place requirement specifications, design specifications, and software derivations in a repository, we can reuse them to derive similar engineering software. When requirements change (e.g., in response to a change in manufacturing processes), we are able to change the appropriate specifications, and attempt to propagate those changes through the derivation history. Sometimes the software can be automatically regenerated. Other times, some of the original software design will break down (due to some constraints no longer holding). In this case, we need to go back to the drawing board and come up with a new design to satisfy the requirements. But even in these cases, presumably only some portions of the software will need to be redesigned. We leverage those parts of the software design that we can. In this way, we reuse knowledge at the appropriate level of abstraction, and not solely at the software level.
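To see where the threshold of five comes from, one can read the numbers off the simplified cost model of Figures 1 and 2 (taking g = 1 in the simplified physics theory). Drilling one hole of volume V adds 2V to the manufacturing cost, which enters the overall cost with weight 5, i.e. +10V; the same hole removes material of mass density·V, which enters the overall cost with weight 2, i.e. -2·density·V. The net change per hole is therefore (10 - 2·density)·V: for aluminum-7075, with density 20 in the simplified materials theory, each hole lowers the overall cost, but once the density drops below 5 the sign flips, the cost-optimal panel has no holes at all, and the search algorithm built around the original material no longer applies.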
5 Equipment Locator Problem
After the successful experience of using Specware™ on the stiffened panel layout component, we decided to see if the technology would scale up to industrial strength applications. There were various criteria that we used to pick this application. The application should:
• Be large enough to test scalability.
• Be a real problem of importance to Boeing.
• Already have requirement documents written.
• Be an engineering application with relatively simple geometry.
• Have requirements that change over time.
• Be in a family of related applications.
• Have overlap with the panel layout problem.
• Be functional in nature (i.e., no real time constraints).
Some of these requirements were chosen in an effort to maximize reuse of knowledge over time and across applications. We have felt that the additional up-front costs (associated with rigorously defining requirement specifications and design specifications) can more easily be justified if there is a high probability that those costs can be amortized over a relatively short period of time (i.e., two or three years). Only the last requirement is due to the current state of the technology (although work is being done in this area).
After some searching, we found the equipment locator problem, which satisfied all of the criteria listed above. This is the problem of determining optimal placements of electronic pieces of equipment (e.g., the flight data recorder, inertial navigation systems, flight computers, etc.) on shelves of racks in commercial airplanes. The purpose of the equipment locator application is to support the equipment installation design process for determining optimal locations for electrical equipment. The application supports the designers in determining optimal locations for equipment on a new airplane model, as well as finding a suitable location for new electrical equipment on an existing airplane model. The application is intended to reduce the process time required to determine equipment locations, and also improve the quality of the equipment location designs.

Numerous specifications are needed for stating and solving this problem; e.g.:
• Theory of geometry,
• Global and relative part positioning,
• Major airplane part and zone definitions,
• Operations and properties for pieces of equipment, shelves, and racks,
• Separation and redundancy requirements for equipment,
• An assignment of a piece of equipment to a position on a shelf,
• A layout (a set of assignments),
• Hard constraints on layouts,
• Cost function on layouts,
• Theories of classes of optimization problems,
• Theories of search algorithms for classes of optimization problems,
• The equipment locator problem statement,
• An algorithmic solution to the problem (instantiating a branch and bound algorithm)
All in all, about 7,000 lines of requirement and design specifications are needed to state and solve this problem. The generated Lisp software exceeds 7,000 lines. The fact that the number of lines of specifications roughly equals the number of lines of code is coincidental. In other situations, the code may far exceed the specifications.
5.1 General Problem Statement

With some simplification, the equipment locator problem has as inputs:
1. A set of shelves
2. A set of equipment
3. A partial layout of equipment to shelves
And produces as output the set of all layouts of equipment to positions on shelves, such that:
1. The partial layout is preserved/extended,
2. All pieces of equipment are placed in some position on some shelf,
3. All hard constraints are satisfied,
4. Layout costs (e.g., wiring distances between pieces of equipment) are minimized.
The hard constraints, which define feasible layouts, are things like:
• Equipment assignments can not overlap,
• Equipment must be placed on shelves with appropriate cooling properties,
• Redundant pieces of equipment must be placed on separate cooling systems,
• Critical pieces of equipment have certain restricted regions in space,
• Equipment with low mean time to failure must be easily accessible,
• Voice and flight data recorders must be placed in the front electrical bay,
• Equipment sensitive to electromagnetic interference must be separated (by a certain distance) from other equipment emitting that interference.
The cost function on layouts includes such things as:
• Equipment wiring distances are minimized,
• Heavy equipment should be placed as low as possible (for ergonomics),
• Voice and flight data recorders should be placed as far aft as possible.
The following portion of the requirement specifications gives an illustrative example of the specification language (Specware™ does have an infix notation, but we did not use it). The first axiom states that for any two assignments in a layout, if they contain redundant pieces of equipment, then the minimal distance between them must be greater than the required separation distance for redundant pieces of equipment. The second axiom states that for any two assignments in a layout, if they contain redundant pieces of equipment, then the cooling properties of the shelves that contain them must be different. Both are hard constraints that must be satisfied to assure safety of the aircraft configuration under unusual circumstances.
op redundant-separated-enough : layout -> boolean
axiom (iff (redundant-separated-enough l)
           (fa (a1:assignment a2:assignment)
               (implies (and (in a1 l) (in a2 l))
                        (implies (redundant (equipment-of a1) (equipment-of a2))
                                 (gt (min-distance (assigned-geometry a1)
                                                   (assigned-geometry a2))
                                     (redundant-sep (equipment-of a1)
                                                    (equipment-of a2)))))))

op redundant-separate-cooling : layout -> boolean
axiom (iff (redundant-separate-cooling l)
           (fa (a1:assignment a2:assignment)
               (implies (and (in a1 l) (in a2 l))
                        (implies (redundant (equipment-of a1) (equipment-of a2))
                                 (not (equal (cooling (shelf-of a1))
                                             (cooling (shelf-of a2))))))))
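For readers less used to the prefix axiom notation, the two constraints can be paraphrased in executable form roughly as follows. This is an illustrative Python rendering over a hypothetical record type, not part of the actual system, which works from the formal specification rather than from code like this.

from itertools import combinations
from collections import namedtuple

# Hypothetical record type standing in for the 'assignment' sort.
Assignment = namedtuple("Assignment", ["equipment", "geometry", "shelf"])

def redundant_separated_enough(layout, redundant, min_distance, redundant_sep):
    """Every pair of redundant pieces of equipment is farther apart than required."""
    return all(
        min_distance(a1.geometry, a2.geometry) > redundant_sep(a1.equipment, a2.equipment)
        for a1, a2 in combinations(layout, 2)
        if redundant(a1.equipment, a2.equipment)
    )

def redundant_separate_cooling(layout, cooling, redundant):
    """Redundant pieces of equipment never share a cooling system."""
    return all(
        cooling(a1.shelf) != cooling(a2.shelf)
        for a1, a2 in combinations(layout, 2)
        if redundant(a1.equipment, a2.equipment)
    )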
5.2 Process for Technology Use

So how does one go about using this technology for industrial applications? After having learned some of the underlying theory, and then Specware™ (from working on the stiffened panel layout problem), we proceeded to learn the domain of our new
application. We had three English requirement documents to work from. These comprised about 20 pages of writing, drawings, etc. In addition, we had several pieces of supporting material (tables, drawings, etc). Only two discussions with a domain expert were needed, since the requirement documents were fairly clear and complete. Once understood, we formalized the requirements. Part of this involved figuring out how to best decompose and abstract various portions of the problem domain. We estimate that we captured roughly 98% of the requirements found in the informal material. The remaining 2% of the requirements dealt with interfacing with other software systems (of which we had insufficient information at the time).

Next, we went through a manual validation process in which we compared the formal requirements with the informal ones. We wrote a brief document noting places where either:
• Requirements were not formalized (the 2% mentioned above),
• Additional detail was needed to formalize the requirements (due to some degree of ambiguity in the English documents), or
• Some choice was made between alternate interpretations of the written material (since the three English documents were written at different times, there were minor inconsistencies).

Once the requirements were formalized, we made and then encoded our design decisions. Again, there were decisions to be made about decomposition and abstraction. For each design decision, we needed to choose data structures for sorts and algorithms for operators. Specware™ comes with many built-in specifications that can be used for this (and other purposes). For example, there are specifications for sets, lists, and sets interpreted as lists. These design decisions then had to be verified to ensure that requirement properties were upheld. Specware™ has a built-in resolution based theorem prover. This was used to prove roughly 25% of the proof obligations. The other 75% were proven by hand. Since these proofs were done informally, some errors may be present (most proofs were done mentally, and not actually written down). Eventually, every sort and operation had to be refined down into some data structure and operation provided by the Lisp programming language.

Finally, once the software was initially generated, we maintained the software with Specware™. As we learned more about the problem domain, several changes were made to the requirement specifications, and the software was easily regenerated. None of these changes required significant redesign efforts, fortunately. However, one other change did. The initial optimization algorithm used an exhaustive search. For purposes of rapid prototyping, we had used the optimization problem theories and search theories from the stiffened panel layout problem. Since these were inefficient when applied to this problem, we encoded an additional theory of branch and bound optimization problems, and applied a corresponding search theory to the domain of the equipment locator problem. The general branch and bound theories are completely reusable (i.e., independent of the domain in which they are instantiated).
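The reusability of the branch and bound theories comes precisely from their domain independence: the domain only has to supply how to extend a partial solution, how to recognize a complete one, and how to bound the cost of a partial one. The skeleton below is our own illustration of that shape, in Python; the paper's version is a formal theory refined by Specware™, not code like this.

# Generic branch-and-bound skeleton; branch/is_complete/cost/lower_bound are
# the domain-specific parameters. Illustrative only.

def branch_and_bound(root, branch, is_complete, cost, lower_bound):
    best, best_cost = None, float("inf")
    stack = [root]
    while stack:
        node = stack.pop()
        if lower_bound(node) >= best_cost:
            continue                      # prune: cannot beat the incumbent
        if is_complete(node):
            if cost(node) < best_cost:
                best, best_cost = node, cost(node)
        else:
            stack.extend(branch(node))    # expand a partial solution
    return best

# Instantiation sketch for the equipment locator: a node is a partial layout,
# branch() places the next piece of equipment in each shelf position that keeps
# the hard constraints satisfied, and lower_bound() returns the wiring cost
# already committed (a valid underestimate of any completion's cost).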
5.3 Getting the Technology into Broader Use

Our overall impression is that this technology does work for industrial strength applications, but that it needs additional work to make it more usable [26]. We briefly describe some suggestions in this subsection. For maximum ease of use, the user interface of any system should reflect and reinforce the user's mental model of the artifacts and processes involved. The interface of Specware™ needs to be designed with this in mind. Some other, specific suggestions for improvement of Specware™ include:
• Support for linking nonformal requirements to formal requirements (e.g., perhaps hyperlinks between English documents and formal requirement specifications).
• Better support for viewing dependencies between requirements, designs, and software. While those dependencies are present in proofs, it would be handy to view them directly.
• Better proof support (e.g., better theorem provers, enhanced with constraint solvers and other types of theorem proving).
• Support for program level optimization (e.g., compiler-like optimizations and finite differencing as found in KIDS [18]).
• Better support for software derivation replay.
Some of this work is already underway at Kestrel Institute. Creating an environment surrounding Specware™ that is tailored to specific domains of application would also enhance the effectiveness of the technology. This, together with improved theorem proving, would make the technology more suitable for safety critical applications (e.g., avionics software systems).
6 Requirements Elicitation
Systems development and engineering does not often begin with a clear and formal statement of requirements. A system such as Specware™ allows reuse of knowledge from that point forward, but preceding this, are there opportunities for knowledge reuse? If so, can one find a common underlying theory for knowledge structuring and reuse that allows an integration of requirements elicitation techniques with the systems development techniques described previously in this paper? These are some of the questions underlying our current work.
6.1 Human Factors Engineering

The most common causes of failure of software development projects are a lack of good requirements and a lack of proper understanding of users' needs (www.standishgroup.com and [8]). System developers have long recognized how difficult it is to elicit good requirements from users and customers. Even experienced developers, who are intelligent people with at least a normal amount of common sense, still fall prey to this problem. Why?
There are two parts to this problem: first, eliciting the requirements from users; second, inserting those requirements into the system development process. People have difficulty verbalizing their inner thought processes. This is called "The Fallacy of Conscious Access" [7]. To make matters even more challenging, some of the domain knowledge may exist only as collective knowledge of a group of people. This "tribal" knowledge is not usually written down nor is it formally encapsulated.

One solution is to use the tools and techniques from a specialty engineering discipline called Human Factors Engineering (HFE). HFE uses the principles of cognitive psychology to elicit knowledge from users and customers, and designs interfaces that allow users to interact with the system so as to perform their tasks efficiently and effectively. Human Factors engineers have professional training in techniques for interviewing, observation, questionnaires, surveys, human testing, and the building of task models of user-system interactions. These techniques and processes are outlined in [14,27]. Data obtained using these techniques specify not only the users' work processes, but also the data and system behavior the user needs to accomplish those work processes. The data and behavior are grouped into user-centered Information Control Groups (ICG). A study of the users' cognitive tasks has been shown to lead to shorter development projects and better products [8]. Although the requirements elicitation phase of the project is lengthened, the system construction phase is shortened. The overall length (and cost) of the project is less. The resulting system better meets customer needs and therefore requires less maintenance.

The second part of the system development problem, inserting user requirements into the system development process, requires that the users' task models be formalized and inserted into the system development process at the appropriate time. The task models can be described as a series of steps in which the user accesses an ICG, executes some behavior on the data it contains, and goes on to the next step in the work process. The data and behavior requirements encapsulated in an ICG therefore specify a large proportion of the system behavior in most business applications. Further, a description of the number of users and the frequency with which tasks are executed specifies performance requirements of the system. The challenge is to capture the data and behavior encapsulated in an ICG using more formal methods.

The ability to represent domain knowledge as a user interface allows us to do two key things. First, using a user interface prototype as a communication tool can iteratively refine our knowledge of the domain. User and domain experts often have difficulty expressing highly technical domain knowledge in terms an expert in another domain, such as software engineering, can grasp. Second, the user interface can be used to allow the users and domain experts to interact with the to-be system (albeit in a limited way) to perform common tasks. Customers can then "buy off" on the to-be system before system construction begins since, to the customers, domain experts, and users, the user interface is the system [14]. Buying off on the system, before system construction, is very effective at stabilizing requirements. A major cause of high software maintenance costs is "changing customer requirements." Closer examination shows that the customer didn't change requirements - the requirements were poorly understood until system delivery, when the user could first get hands-on experience with the system.
6.2 Algebraic Semiotics

We propose that category theory provides a mathematical foundation for translating user task models and domain knowledge into formal system requirements. Possible techniques for doing this have been proposed under the term algebraic semiotics [11]. Semiotics is the study of systems of symbols and their meaning. Symbol systems include, for example, Object Oriented class models, user interfaces, task models, and even the users' internal cognitive representation of domain knowledge (i.e., mental models [17]). Formalizing these representations of knowledge in category theoretic systems allows a well-defined, mathematical approach to understanding reuse of engineering domain knowledge at higher levels of abstraction.

The process for moving from domain knowledge to software proceeds through several discrete stages. The highly informal and unstructured domain knowledge that exists in the minds of domain experts and users is represented as a task model. The task model can be represented as a symbol system whose structure is defined in category theory. The users' interface to the system is some translation of the users' perception of the task model, and is itself a symbol system. The symbol systems as well as the translations are represented in category theory [11]. The groupings of controls and displays in the user interface, and the order in which users access them to perform tasks, determine the information processing demands imposed on the system by the user. This defines the system requirements for most business systems. This does not mean that all aspects of system functions are in the user interface, only that the need for the system functions is implied by the user interface. For example, in the equipment locator problem above, the formula for calculating RFI is not in the user interface. However, the user interface does show the need for making this calculation, and the requirement is defined.
7 Summary
Reuse cannot easily and uniformly occur at the software level alone. In this paper, we have described an alternative paradigm for reuse that attempts to reuse knowledge at an appropriate level of abstraction. Sometimes that level is a domain theory that is involved in stating system requirements. Sometimes it is a design pattern. Sometimes it is a software component. Often it is a combination of these. We have described our experiences in applying a category theory based approach to industrial strength software specification, synthesis, and maintenance. This paradigm is one that allows the capture and structuring of formal requirement specifications, design specifications, implementation software, and the refinement processes that lead from requirements to software. In this approach, the refinement process can guarantee correctness of the generated software. By recording, modifying, and then replaying the refinement history, we are able to more easily maintain the software. By capturing, abstracting, and structuring knowledge in a modular fashion, we are able to more easily reuse this knowledge for other applications.
It is interesting to note that when this technology is applied to software systems whose outputs are designs for airplane parts, the design rationale that is captured is not only software engineering design rationale, but also design rationale from other, more traditional, engineering disciplines (e.g., mechanical, material, manufacturing, etc.). This suggests the technology provides an approach to general systems engineering that enables one to structure and reuse engineering knowledge broadly.
Bibliography

1. Balzer, R., A 15 Year Perspective on Automatic Programming, in IEEE Transactions on Software Engineering, Vol. SE-11, no. 11, November 1986, pp. 1257-1268.
2. Baxter, I., Design Maintenance System, in Communications of the ACM, vol. 35, no. 4, April 1992, pp. 73-89.
3. Bjorner, Dines and Jones, Cliff, Formal Specification & Software Development, Prentice-Hall International, 1982.
4. Blaine, Lee and Goldberg, Allen, DTRE – A Semi-Automatic Transformation System, in Constructing Programs from Specifications, ed. B. Moller, North Holland, 1991.
5. Burstall, R. M. and Goguen, J. A., The Semantics of Clear, a Specification Language, in Proceedings of the Copenhagen Winter School on Abstract Software Specification, Lecture Notes in Computer Science, 86, Springer-Verlag, 1980.
6. Gannon, John et al., Software Specification - A Comparison of Formal Methods, Ablex Publishing.
7. Gardiner, M. and Christie, B., Applying Cognitive Psychology to User-Interface Design, John Wiley & Sons, 1987.
8. Gibbs, W., Taking Computers to Task, Scientific American, July 1997.
9. Goguen, J. A., Mathematical Representation of Hierarchically Organized Systems, in Global Systems Dynamics, ed. E. Attinger and S. Karger, 1970, pp. 112-128.
10. Goguen, J. A. and Burstall, R. M., Institutions: Abstract Model Theory for Specification and Programming, Journal of the Association of Computing Machinery, 1992.
11. Goguen, J. A., An Introduction to Algebraic Semiotics, with Applications to User Interface Design, in Computation for Metaphor, Analogy and Agents, edited by Chrystopher Nehaniv, Springer Lecture Notes in Artificial Intelligence, 1999.
12. Jullig, R. and Srinivas, Y. V., Diagrams for Software Synthesis, Proceedings of the 8th Knowledge-Based Software Engineering Conference, Chicago, IL, 1993.
13. MacLane, Saunders, Categories for the Working Mathematician, Springer-Verlag, 1971.
14. Mayhew, Debra J., The Usability Engineering Lifecycle, Academic Press/Morgan Kaufmann, 1999.
15. Meseguer, Jose, General Logics, Logic Colloquium '87, Eds. Ebbinghaus et al., Elsevier Science Publishers, 1989.
16. Pierce, Benjamin C., Basic Category Theory for Computer Scientists, MIT Press, 1994.
17. Rogers, Yvonne, et al., Models in the Mind – Theory, Perspective, and Application, Academic Press, 1992.
18. Smith, Doug, KIDS: A Knowledge Based Software Development System, in Automating Software Design, Eds. M. Lowry and R. McCartney, MIT Press, 1991.
19. Smith, Doug, Mechanizing the Development of Software, in Calculational System Design, Ed. M. Broy, NATO ASI Series, IOS Press, 1999.
20. Spivey, J. M., The Z Notation: A Reference Manual, Prentice-Hall, New York, 1992.
21. Srinivas, Y. V. and Jullig, Richard, Specware™: Formal Support for Composing Software, in Proceedings of the Conference of Mathematics of Program Construction, Kloster Irsee, Germany, 1995.
22. Waldinger, Richard et al., Specware™ Language Manual 2.0.1, Suresoft, 1996.
23. Wang, T. C. and Goldberg, Allen, A Mechanical Verifier for Supporting the Design of Reliable Reactive Systems, International Symposium on Software Reliability Engineering, Austin, Texas, 1991.
24. Williamson, K. and Healy, M., Formally Specifying Engineering Design Rationale, in Proceedings of the Automated Software Engineering Conference, 1997.
25. Williamson, K. and Healy, M., Deriving Engineering Software from Requirements, Journal of Intelligent Manufacturing, to appear, 1999.
26. Williamson, K. and Healy, M., Industrial Applications of Software Synthesis via Category Theory, in Proceedings of the Automated Software Engineering Conference, 1999.
27. Human Engineering Program – Processes and Procedures, US Department of Defense, Handbook MIL-HDBK-46855A, 1996.
Building Customizable Frameworks for the Telecommunications Domain: A Comparison of Approaches

Giovanni Cortese (1), Marco Braga (2), and Sandro Borioni (3)

(1) Processes, Reuse & Technologies, Sodalia S.p.A., Via V. Zambra, 1, Trento, TN 38100, Italy
[email protected]
(2) TDM Business Unit, Sodalia S.p.A.
[email protected]
(3) IS & PL Architecture, Sodalia S.p.A.
[email protected]

Abstract. Based on the experience of developing industrial application frameworks for the telecommunications domain, we compare different approaches and techniques for achieving software reuse. In particular, we describe an assessment we performed on two application frameworks for the telecommunications domain developed in Sodalia. The assessment had the objective of collecting evidence on the benefits and implications of different techniques which can be adopted for achieving large-grain reuse in product line development.

Keywords: Software Architecture, Object-Oriented Framework, Design Patterns, Software Reuse, Distributed Systems, Product Lines, Network and Service Management, Network Traffic Data Analysis.
1 Introduction
Successful programs evolve to become more reusable. As they evolve, domain abstractions, originally embedded in the code, tend to surface either in the software or in the data architecture. If a business driver exists, this evolution may lead to the development of an application framework for the domain, which can be deployed in different custom projects with a limited customization effort. 'Reusability', i.e. the ability to reuse functionalities in a variety of technical and business contexts, can be obtained with several technical approaches:
• configuration mechanisms
• domain-specific scripting languages
• object-oriented frameworks
• metadata
This evolutionary path (see [18] for a more extended introduction to the subject) has been occurring quite regularly in our software development experience, both when reengineering for reuse medium-grain software components, and, on a much larger scale, when evolving custom applications to application frameworks.

This paper describes an analysis we performed on two application frameworks developed within Sodalia. We surveyed the mechanisms and techniques which our engineers adopted to build large-grain reusable software assets, in order to assess their effectiveness in our business and their relevance to our asset base. Specifically, we wanted to get some insight into how these mechanisms relate to productivity, time-to-market, and other architectural qualities, and into the degree of penetration of specific reuse techniques in our software developers' community. The paper introduces the method we followed in the analysis, provides an architectural analysis of the application frameworks from the specific point of view of the adopted reuse techniques, and summarizes the findings of this assessment.
1.1 Business and Organizational Context

Sodalia S.p.A. is an Italian company in its sixth year of activity. Its mission is the development of innovative telecommunications software products for the management, administration, and maintenance of telecommunication networks and services.

Reuse Program. From its founding, Sodalia has adopted a Reuse Program whose goal is to make software reuse a significant and systematic part of the software process. A dedicated group, over the years accounting for roughly 3-4 percent of the company's development staff, has acted as facilitator in the adoption of reuse practices, by taking actions such as development of reusable components, development of methodology, training, and dissemination of information.

Product Lines and Application Frameworks. The company delivers custom solutions to its customers by reusing whenever possible an Application Framework for a specific domain (e.g. Service Fulfillment, Service Assurance, Network Data Management). According to [15], "A software product line is a set of software-intensive systems sharing a common, managed set of features that satisfy the specific needs of a particular market segment or mission. Substantial economies can be achieved when the systems in a software product line are developed from a common set of core assets. A new product, then, is formed by taking applicable components from the asset base, tailoring them as necessary through pre-planned variation mechanisms such as parameterization, adding any new components that may be necessary, and assembling the collection under the umbrella of a common, product-line-wide architecture. Building a new product (system) becomes more a matter of generation than creation; the predominant activity is integration rather than programming."
Sodalia's Application Frameworks are developed as the core of a product line, from a development process point of view. They have been built mostly in a bottom-up fashion, in an iterative process of generalizing software components developed for a previous project to accommodate requirements for a subsequent custom project (or new releases of existing ones). In a few cases, investments have been made to rearchitect important assets in order to introduce the flexibility needed to deal with 'likely' customer scenarios. For each product line two development groups exist, one for engineering the core and one for delivering the solutions to the customer. A product line architect guarantees technical coordination between the two groups.

Reusable Asset Repository and Sodalia Architectural Framework. The Application Frameworks presented in this paper share a number of reusable domain-independent components, which provide a common application infrastructure (available in a Reusable Asset Repository), and a common set of architectural guidelines (Sodalia Architectural Framework [16]), defined to increase the interoperability between software developed by different teams. The common infrastructure and architecture is developed by the Reuse Support Organization.

The remainder of this paper is structured as follows. First, we recap some useful concepts on customization mechanisms and introduce the classification criteria which we adopted for this analysis. The bulk of the paper is represented by the two case studies, where each application framework is described with emphasis on the customization mechanism used for each of its subsystems, and by the assessment summary, where the results of the analysis are shown. Last, conclusions and further directions of work are presented.
2 A Taxonomy of Techniques for Design of Customizable Software

In this section, we present a simple taxonomy of approaches for designing adaptable/customizable software. Each of them represents an alternative for introducing new domain-specific behavior into a customizable component. Though there might be more accurate or fine-tuned ways of making this taxonomy, we found that these five categories fit the descriptive needs of our analysis.
2.1 Object-Oriented Frameworks

OOFs are application frameworks designed according to a number of well documented object-oriented techniques (see for example [20]). To extend the capabilities of an OOF [17], a combination of the following techniques can be used:
• In white-box frameworks, the existing functionality is reused and extended by (1) inheriting from the framework base classes and (2) overriding predefined hook methods via patterns [5] like the Template Method.
• Black-box frameworks support extensibility by defining interfaces for components which can be plugged into the framework via object composition. Existing functionality is reused by (1) defining components which adhere to a particular interface, and (2) integrating these components into the framework using patterns such as Strategy and Functor.
Though the actual mechanisms may vary depending on the chosen implementation techniques and language, OOF customization often requires availability of the development environment.
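The two styles can be illustrated compactly as follows. The sketch is in Python rather than the C++ and Java used by the frameworks discussed in this paper, and the class and method names are invented: the white-box variant extends a framework base class and overrides a hook method (Template Method), while the black-box variant plugs a ready-made object into a defined interface (Strategy).

# White-box reuse: inherit from a framework class and override a hook method.
class ReportGenerator:                        # framework base class
    def generate(self, records):
        body = self.format_body(records)      # template method calls the hook
        return "<report>" + body + "</report>"

    def format_body(self, records):           # hook to be overridden
        raise NotImplementedError

class CsvReportGenerator(ReportGenerator):    # application-specific extension
    def format_body(self, records):
        return ";".join(str(r) for r in records)

# Black-box reuse: compose a component that satisfies a defined interface.
class Collector:                              # framework component
    def __init__(self, cleanup_strategy):     # strategy plugged in, not inherited
        self.cleanup = cleanup_strategy

    def run(self, raw_records):
        return [self.cleanup(r) for r in raw_records]

print(CsvReportGenerator().generate([1, 2, 3]))     # <report>1;2;3</report>
print(Collector(str.strip).run(["  a ", " b"]))     # ['a', 'b']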
2.2 Active Object Models

Metamodels are another technique for designing highly customizable and reusable software. With this approach, the focus is on making data more reusable, by including in the object model of the application a number of objects which provide explicit representation of other objects and their relationships. Data that describe other data, rather than aspects of the application domain itself, are called metadata. An important class of applications whose design exploits metadata can be described according to the Active Object Model (AOM) pattern [18]: "An ACTIVE OBJECT-MODEL is an object model that provides 'meta' information about itself so that it can be changed at runtime ... A system with an ACTIVE OBJECT-MODEL has an explicit object model that it interprets at run-time ... As this evolutionary process unfolds, and the architecture of a system matures, knowledge about the domain becomes embodied more and more by the relationships among the objects that model the domain, and less and less by logic hardwired into the code ... In the end, this design allows some application configuration decisions to be pushed to the user ..."
An AOM implementation typically comprises:
• the metamodel (typically data; it may include a software interface, e.g. through CORBA IDL, which is used by applications to interact with the metamodel)
• a number of generic applications, which reason on the domain metamodel
• a parser of the domain-specific language, which can be used to load and extend the metamodel
• a visual builder, for interactive manipulation of the metamodel
Examples follow:
• a software application which supports the provisioning process of Telecom/Internet Services [19]. The business rules which describe how to combine simple service elements (for example, a Frame Relay virtual circuit) and network resources (for example, a customer premises router) to deliver aggregated, end-to-end services according to commercial offers are represented in the object model. Entities describing new types of services, network resources and commercial offers, together with their relations, can be added dynamically to the model (by the marketing staff itself). Generic applications, such as the resource reservation
algorithm which is executed to fulfill a new customer order, are not impacted by the extensions to the model.
• a workflow engine, which is able to execute different workflow schemas.
Customization of an AOM can be made at run-time, by using the parser of the associated domain-specific (meta)language or a visual builder. All the AOM software components are reused black-box. Also, metadata are often reused in different customizations.
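The essence of the pattern can be conveyed with a very small sketch: the object model is data that a generic application interprets at run-time, so adding a new entity type is a data change rather than a code change. This is illustrative Python with invented type names; the real metamodels described in this paper live in a repository and are manipulated through IDL interfaces and visual builders.

# A toy active object model: entity types and their attributes are metadata,
# and a generic application interprets them at run-time.

metamodel = {
    "FrameRelayCircuit": {"attributes": ["bandwidth", "endpoint_a", "endpoint_b"]},
    "Router":            {"attributes": ["model", "location"]},
}

def create_instance(metamodel, type_name, **values):
    """Generic application code: it knows nothing about specific service types."""
    allowed = set(metamodel[type_name]["attributes"])
    unknown = set(values) - allowed
    if unknown:
        raise ValueError("unknown attributes: " + ", ".join(sorted(unknown)))
    return {"type": type_name, **values}

# Extending the model is a run-time data change (e.g. made via a visual builder):
metamodel["DslLine"] = {"attributes": ["downstream_kbps", "endpoint"]}
print(create_instance(metamodel, "DslLine", downstream_kbps=1024, endpoint="CPE-42"))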
2.3 Property-Based Customization

Components may be customized by having them expose a number of properties, i.e. attribute-value pairs. Java Beans represent an example of components which can be customized by setting values for properties. This category may include components whose degree of customizability varies widely. The power and flexibility in customization depends obviously on how much semantics can be associated with the property values. We chose to classify in this category components with relatively simple configuration options, where the property values are used mostly to select between available logic/behavior. Customization of this type of component is usually done when packaging the product or at installation time.
2.4 Macro Preprocessors/Code Generators

Borrowing again from [18], "Often, users will want to specify complex new behaviors that are beyond what can be specified using traditional properties or tables. Therefore, have a program generate the code, based upon some description, and compile or interpret it. Macro systems write code based on their arguments and on expansion rules given by their (human) authors. Some macro systems make heavy use of metadata. Macros may or may not be expanded at run-time."
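A minimal flavour of this approach is sketched below: a generator turns a declarative description, which could itself come from metadata, into source text that is then compiled or interpreted. The example is hypothetical Python and much simpler than the macro systems referred to in [18].

# Tiny code generator: emit a record class from a declarative field description.

def generate_record_class(name, fields):
    lines = ["class {}:".format(name),
             "    def __init__(self, {}):".format(", ".join(fields))]
    lines += ["        self.{0} = {0}".format(f) for f in fields]
    return "\n".join(lines)

source = generate_record_class("TrafficSample", ["link", "timestamp", "octets"])
namespace = {}
exec(source, namespace)                  # "compile or interpret it"
sample = namespace["TrafficSample"]("link-1", 0, 4096)
print(sample.octets)                     # 4096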
2.5 Domain-Specific Scripting Languages

As a customization mechanism, scripting languages can be used to add new business rules and algorithms to an existing component at run-time. (In a sense, this approach is very similar to the macro preprocessor: both are driven by the desire to embed domain-oriented language primitives into a fully featured programming language. With scripting languages, the focus is on an interpreted run-time environment.) While AOMs also often rely on a language which is used for the purpose of interacting with the metamodel, and often include a parser, scripting languages (e.g. domain-specific scripting languages such as Tcl-based Scotty, an interpreter oriented to SNMP, the Simple Network Management Protocol, an application-level protocol for the development of network management applications) are more procedural in nature. They represent very powerful ways to program new algorithms into an extensible component, and can complement the other approaches mentioned in this chapter, e.g.:
• be configured into a component via properties
• implement 'Strategy' hooks in an OOF
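A small illustration of such a hook, with Python's eval standing in for an embedded scripting interpreter such as Tcl (the component and rule names are invented): the component accepts new domain rules as interpreted text at run-time and evaluates them against its data.

# A component that accepts new surveillance rules as interpreted text at run-time.
# In practice the rule text would arrive via configuration, not be hard-coded.

class SurveillanceEngine:
    def __init__(self):
        self.rules = []                       # (name, rule source text) pairs

    def add_rule(self, name, expression):
        self.rules.append((name, expression))

    def evaluate(self, sample):
        alerts = []
        for name, expression in self.rules:
            if eval(expression, {}, {"sample": sample}):   # interpret the rule
                alerts.append(name)
        return alerts

engine = SurveillanceEngine()
engine.add_rule("congestion", "sample['utilization'] > 0.9")
print(engine.evaluate({"utilization": 0.95}))   # ['congestion']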
3 Case Study: STM/AF
3.1 Main Functionalities

Sodalia Traffic Manager (STM/AF) is an application framework for network traffic data collection and analysis. The most important functionalities provided by the system are:
• Data Collection: traffic data are collected and stored in a repository after being mapped to a common representation that hides technology specificity.
• Network surveillance: collected data are evaluated against predefined thresholds to generate alerts on network congestion.
• Traffic engineering: collected data are aggregated across different dimensions to spot trends on a daily, monthly, and yearly time scale.
• Reporting: the system is completed by a report environment that provides a mechanism able to schedule and generate reports according to user requests.
3.2 Variability Requirements

The framework has been used to address the needs of a number of custom projects. While the functionalities each customization must provide are typically very similar, the framework has to be able to manage different network technologies (voice and data), e.g. Circuit Switch (CS) technology, ATM, IP, FTTC. Specifically, it has to deal with variability in:
• network protocols
• data formats
• interfaces to customer legacy Operations Support Systems (OSS)
3.3 Architectural Overview and Reuse Techniques

To address the variability requirements presented so far, STM/AF design relies mostly on a powerful AOM-based repository, and on an easily extensible subsystem acting as an adaptation layer to external data sources. STM/AF architecture is summarized by Figure 1.
Fig. 1. STM/AF Architecture. The figure groups the STM/AF components by customization technique: an OOF-based DC Custom component; the AOM kernel (MDR, with VBC and MCF); generic AOM applications (DCN, DCCC, TEDAC, NSEC, NSEB, NSTE, CNE, CNET, CFAT); MVC-based and property-based GUI applications (NSEBV, NSTEV, CNEV, CFATV); and a Macro Processor serving the report subsystem (RSRE, RSRS).
At the heart of the system lies the Metadata and Data Repository (MDR). The repository is based on a relational database and is optimized for dealing with very large sets of data. The repository implements a meta-model for objects such as network topologies, network elements and network performance measurements. The meta-model can be instantiated through a visual builder tool (VBC), which allows the user of the system to create descriptions of ‘real world’ network objects. Metadata configuration files (MCF) are then created through a generation process. Within the Data Collection subsystem, the DC Custom component is in charge of the interface with network elements or external OSSs. The DC Custom is based on Bulk Data Transfer (BDT). BDT is an OOF, following the ‘Pipe&Filter’ ([1]) architectural pattern, which can be customized to implement a number of different data transfer, compression, and ‘cleanup’ strategies. The data collection subsystem is completed by a few ‘generic applications’ whose behavior is completely driven by the metamodel, hence are reused ‘black-box’, such as Data Collector Normalizer (DCN) and the Consistency Checker (DCCC). Similarly, ‘generic applications’ represent a significant portion of the Network Surveillance subsystem (Exception Calculator Engine - NSEC, Exception Browser Model - NSEB, Threshold Editor Browser Model - NSTE), of the Traffic Engineering subsystem (Data aggregation and calculator engine - TEDAC), and of the Configuration subsystem (Network Element Configuration Model - CNE, Network Element Type Configuration Model - CNET, Formula Aggregation Type Configuration Model - CFAT). At the architectural/ infrastructural level, the Model-View- Controller pattern ([7), together with an OOF which provides its implementation in a Java-CORBA ([9],[10])
Building Customizable Frameworks for the Telecommunications Domain
81
environment, provides a uniform method of interfacing the ‘model’ component of the applications already mentioned with their GUIs. The GUIs (views) are configurable via properties (Exception Browser View NSEBV, Threshold Editor View - NSTEV, Network Element Configuration View CNEV, Formula Aggregation Type Configuration View (CFATV). Finally, the Report Subsystem is supported by a Macro Processor which is able to generate at run-time HTML pages based on templates.
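The idea behind the report templates can be sketched as follows; the keyword syntax and the expansion code here are invented for illustration, since the actual macro format is not described in this paper.

import re

# Expand macro keywords of the form @NAME@ in an HTML report template at run-time.
def expand(template, values):
    return re.sub(r"@(\w+)@", lambda m: str(values[m.group(1)]), template)

template = "<h1>Traffic report for @NETWORK@</h1><p>Peak load: @PEAK@%</p>"
print(expand(template, {"NETWORK": "ATM backbone", "PEAK": 87}))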
4 Case Study: SISM/AF
4.1 Main Functionalities

SISM/AF, an integrated service management application framework [4], offers integrated customer-focused service and network management capabilities for heterogeneous, multi-domain networks [3]. The application framework aims to isolate service problems across a broad range of modern network technologies for diverse types of services (ATM, FR, IP, etc.). This requires a complete set of fault, performance, and configuration management functionalities organized into customer views.
4.2 Variability Requirements

SISM/AF is organized as an application framework so that it can be quickly customized to handle new network technologies and differences in customer environments. The requirements which shaped the architectural strategy oriented to large-grain reusability of SISM can be summarized as the ability to deal with the following variations:
• Business Process variations. Software applications, such as SISM, whose target customers are telecommunication service providers, should flexibly adapt to peculiarities in operational workflow and organizational roles.
• Network Technology and Management Protocol variations. The main goal of SISM is to provide integrated and homogeneous management capabilities across a wide variety of network technologies and management protocols.
• Service variations. The current trend in the telecommunication market is to compete through increased differentiation in the service offering portfolio and reduced time-to-market of new services. SISM must allow the user to define new services through system configuration.
• OSS interface variations. Deploying SISM in a specific customer environment requires taking into account integration with the existing, legacy OSSs of the customer.
4.3 Architectural Overview and Reuse Techniques

The SISM framework addresses these challenges by applying different customizability techniques at different levels of its architecture.
Fig. 2. SISM/AF Architecture. The figure shows client applications (service, alarm, and topology views), the SISM management applications server tier (network and service inventory, fault, and QoS components over the network and service instance and catalog/metadata databases, plus an External OSS Interface Framework), the SISM management adaptation tier (Core VNM adapters with Core SNMP and Core TL1 layers and vendor-specific customizations), and the underlying network/network element layer (e.g. Cisco, IBM, Nortel, Lucent equipment).
Figure 2 presents the main architectural subsystems of SISM:
• Virtual Network Manager (VNM) Core. VNM components are adapters for interfacing different types of network equipment. They provide a uniform CORBA management interface between SISM applications and the managed environment. VNMs are developed by specializing via inheritance a C++ OOF.
• VNM Core SNMP, Core TL1. An additional layer of customizable software is aimed at managing two broad classes of network devices, specifically those manageable through the SNMP and TL/1 protocols. Variations are handled through algorithms written in a scripting language (Tcl).
• Network and Service Inventory (NSI). A persistent, high performance repository of objects describing the managed environment (e.g. devices, connections, service agreements) implements a model of the managed environment. The design of this component is based on the AOM approach, that is to say service and network configuration data are modelled through a generalized object model of the network. The model can be extended via scripts and a visual builder. All SISM applications are based on the NSI metamodel. This allows easy extension of the management scope (i.e., adding new service types) at run-time, avoiding system recompilation and code changes.
Some of these applications combine this ability with additional mechanisms which allow adaptability and reuse. For example, the Event Correlation engine can interpret a scripting language through which an experienced user can add new rules for diagnosing service
problems. In addition, most applications reuse infrastructural components (see below). A few applications, which present network events and alarms to the user, are derived by customization of the Alarm Browser OOF (ALB). ALB is a C++ framework which allows persistent storage, manipulation, and display of alarm records. Within the software infrastructure of the system (not shown in the figure), smaller-grain frameworks allow additional reuse and promote a common architecture. Frameworks from which server and GUI components are derived provide for architectural standardization and reuse of implementation for infrastructural services (e.g. events, directory services, application management).
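As an illustration of the OOF-style customization by inheritance described above, the following sketch shows how a device adapter might be derived from a framework base class by overriding hook methods. The class and method names are hypothetical and are not taken from the SISM code base; the real VNM interfaces are CORBA/C++ and are not reproduced here.

// Hypothetical sketch of OOF customization via inheritance (illustration only).
abstract class VnmAdapter {
    // Template method: uniform management operation offered to SISM applications.
    public final AlarmRecord pollAlarms(String elementId) {
        byte[] raw = readRawAlarms(elementId);   // hook: protocol-specific retrieval
        return normalize(raw);                   // hook: mapping to the common model
    }
    // Hook methods: each network technology supplies its own implementation.
    protected abstract byte[] readRawAlarms(String elementId);
    protected abstract AlarmRecord normalize(byte[] raw);
}

// One possible customization, e.g. for an SNMP-manageable device family.
class SnmpVnmAdapter extends VnmAdapter {
    protected byte[] readRawAlarms(String elementId) {
        // ... issue the SNMP requests for the element (omitted in this sketch) ...
        return new byte[0];
    }
    protected AlarmRecord normalize(byte[] raw) {
        return new AlarmRecord(raw.length);      // trivial placeholder mapping
    }
}

class AlarmRecord {
    final int severity;
    AlarmRecord(int severity) { this.severity = severity; }
}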
5
Assessment Summary
The next step in our research was to gather data regarding the size of the software components which constitute the application frameworks under assessment. The analysis was then discussed with the project managers responsible for framework development in order to draw conclusions and add qualitative considerations.
5.1 Quantitative Reuse Analysis
The data we present in this section try to answer two questions:
– Which reuse techniques have been most widely used to develop the company’s application frameworks (i.e. within the ‘core part’)?
– What is the relevance of each technique to the ‘typical’ solution delivered to the customer? What is the customization effort for the different techniques in relation to the customer-perceived value?3
To answer the first question, we have listed the major subsystems or components, grouped by the customization technique they rely on, and provided a size indication (first three columns of Table 1 and Table 2). To measure the size in a programming-language-independent way, Function Points, calculated through backfiring, have been used [21]. Then, to assess the ‘business relevance’ of each technique, we measured the size of the customized solution for an average installation (fourth column in the tables). The Function Points are again measured through backfiring. The last column is intended to provide further insight into the customization activities required, thus helping the reader interpret the reported figures and relate them to the effort required to perform the customization. The reuse analysis for the two case studies follows (see Table 1, “STM/AF Reuse Analysis” and Table 2, “SISM/AF Reuse Analysis”). The section is closed by a number of remarks which provide conclusions and additional qualitative considerations from the assessment team.
This specifically represents a starting point for further analysis on productivity and overall business effectiveness of reuse techniques
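For readers unfamiliar with backfiring [21], the conversion is essentially a division of the measured code size by a language-dependent ratio. The ratio used below (roughly 53 C++ source statements per function point, the value commonly attributed to Jones) is an assumption for illustration only; the paper does not report which ratios were actually applied.

    FP ≈ LOC / (LOC per FP for the language)
    e.g.  37,100 lines of C++ / 53 LOC per FP ≈ 700 FP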
Table 1. STM/AF Reuse Analysis

| Customization Technique | Subsystem/Component | Core Asset Size (FPs) | Size of Typical Custom Solution (FPs) | Notes |
| OOF | DCC (BDT) | 350 | 625 | Customization requires C++ development of framework plug-ins. See remark in section ‘Component Libraries’. |
| OOF | MVC | 100 | 100 | MVC is reused 4 times in the core itself (see ‘OOFs in the infrastructure’). |
| AOM | VBC + MFC | 400 | 8200 | Customization consists of instantiation of the metamodel. The size of the custom solution seems very high; though a relevant analysis effort is behind this customization, the figures are influenced by code generation (see ‘Effects of Code Generation’). See also remark on ‘Component Libraries’. |
| AOM | MDR | 4000 | 4000 | The metamodel ‘engine’, reused black box. |
| AOM Applications | DCN, DCC, NSEC, TEDAC, ... | 1575 | 1575 | Generic applications, reused black box. |
| Property Based | All Views | 725 | 845 | Graphical user interfaces. Customization is marginal rework specific to the customer process, and configuration file definition. |
| Macro Processor | RSRE | 700 | 800 | Customization through creation of templates based on macro keywords. Macro expansion is done at run-time (see ‘Effects of Code Generation’). |
| Macro Processor | RSRS | 125 | 175 | |
| Totals | | 7975 | 16320 | |
Table 2. SISM/AF Reuse Analysis

| Customization Technique | Subsystem/Component | Core Asset Size (FPs) | Size of Typical Custom Solution (FPs) | Notes |
| OOF | VNM | 770 | 2645 | Customization requires C++ development of framework plug-ins. See remark on ‘Component Libraries’. |
| OOF | Infrastructure | 820 | 820 | See note ‘OOFs in the infrastructure’. |
| OOF | ALB | 452 | 691 | There are three applications based on ALB. |
| AOM | NSI | 1316 | 2150 | The repository implementing the (meta)model, and all customizations of the model. |
| AOM Applications | All (PerfMgr, TopoMgr, ...) | 3043 | 3043 | Generic applications, reused black box. |
| Domain Specific Scripting Languages | VNM SNMP, VNM TL1 | 350 | 628 | |
| Domain Specific Scripting Languages | Event Correlator | 280 | 560 | Customization requires writing diagnostic rules for each network technology. |
| Property Based, Macro Processor | – | n.a. | n.a. | Considered marginal; all applications have already been accounted for in previous categories. |
| Totals | | 7031 | 10357 | |
5.2 Conclusions
Metamodels for large grain functional subsystems. The basic evidence derived from this analysis is that AOMs have independently emerged as the preferred architectural approach for the design of large-grain reusable components in different product lines. However rough these figures may be, metamodels and generic applications based on the metamodel account for more than 65 percent of the software developed in the core of the two product lines (the relevance in custom solutions reaches higher than 70 percent). This trend is further confirmed by what is happening in the recently established third company product line for Service Provisioning, whose architecture is centered on a workflow engine and a Network Service Inventory subsystem, both conceived as AOMs. Based on interviews with project managers, a strong driver for the choice of this technique is its ability to enable the design of an application which is highly customizable at run-time, either by the end user or by Sodalia field staff.
OOFs in the infrastructure. The analysis has shown that OOFs are more widely adopted in the infrastructure (with a few important exceptions, e.g. VNM) than in the design of application subsystems, where the metamodel approach has found wider application. Examples include BDT, the GUI framework, the Generic Server Framework, and MVC; they have been conceived to relieve application components from infrastructural tasks. The Reuse Analysis shows that the relevance of OOFs in our assets is 16 percent in the core and 18 percent in the custom solution. It must be noted that the Reuse Analysis tables, being more focused on functional subsystems, are not fair to this type of component. In fact, infrastructural OOFs are actually reused many times in the framework itself, though this is not shown in the tables. In addition, their importance has been rated very high by the project manager due to the ‘enforcement of architecture’ they perform on the software system.
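The following fragment sketches the kind of metamodel (‘active object model’) design referred to above, in which the types and attributes of the managed objects are themselves ordinary data that can be extended at run-time. It is a generic illustration of the technique; the names are invented and the code is not taken from the Sodalia repositories.

// Generic sketch of the AOM/metamodel idea (illustrative only): the "schema" of the
// managed environment is itself data, so new service or device types can be added
// at run-time without recompiling the applications built on it.
import java.util.HashMap;
import java.util.Map;

class EntityType {                         // metadata: describes a kind of managed object
    final String name;
    final Map<String, String> attributeTypes = new HashMap<>();
    EntityType(String name) { this.name = name; }
    void defineAttribute(String attrName, String typeName) {
        attributeTypes.put(attrName, typeName);
    }
}

class Entity {                             // an instance, described by its EntityType
    final EntityType type;
    final Map<String, Object> values = new HashMap<>();
    Entity(EntityType type) { this.type = type; }
    void set(String attrName, Object value) {
        if (!type.attributeTypes.containsKey(attrName))
            throw new IllegalArgumentException("Unknown attribute: " + attrName);
        values.put(attrName, value);
    }
}

class AomDemo {
    public static void main(String[] args) {
        // Extending the model at run-time: define a new service type and populate it.
        EntityType vpnService = new EntityType("VPNService");
        vpnService.defineAttribute("customer", "String");
        vpnService.defineAttribute("bandwidthMbps", "Integer");

        Entity contract = new Entity(vpnService);
        contract.set("customer", "ACME");
        contract.set("bandwidthMbps", 34);
        System.out.println(contract.type.name + " for " + contract.values.get("customer"));
    }
}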
Trend towards configurability in the user environment The assessment shows a ‘marginal’ dependency of the core and custom solution on scripting languages. Based on interviews with project managers, however, the importance of this mechanism is much higher than shown by the quantitative analysis. The current trend is to implement customizations, whenever feasible, through a mechanism which does not require availability of a complex development environment (e.g. C++). It should be possible to perform most configurations in the user environment (by properly ‘authorized’ personnel, such as an application administrator or support staff). Scripting languages are perceived as another important mechanism in addition to AOMs to address this business driver. Although not as powerful as the AOM approach, they result in a simpler design, thus enabling some degree of customization in the application architecture with limited risk and cost.
5.3 Other Remarks
A few other comments, more related to the method we used to analyse and present the data, follow.
Effects of Code Generation. Being calculated through function point backfiring, the figures related to ‘custom solution size’ in the tables are sometimes biased by the effects of code generation. To interpret them properly, the following considerations may help:
– when the technique used for customization relies on run-time expansion or interpretation of powerful domain abstractions, the reported size may underestimate the size in terms of ‘user-perceived functionality’;
– on the contrary, when code generation is involved, the reported figure may be higher than expected.
Component Libraries. When creating a customization (e.g. an adapter for a new network technology), the resulting component itself often becomes a reusable part of the application framework. This applies both when the component is encoded as a C++ plug-in for an OOF and when it is a new metamodel configuration. In the tables, however, for the sake of simplicity, we always kept such components separate from the ‘core’ and counted them in the ‘custom’ part.
6
Future Research
The expected follow-up of this assessment will be in two main directions.
Metamodelling infrastructure. This analysis provides strong evidence that software and patterns which assist in the development of customizable software are themselves a strategic reusable asset in our business. We plan to generalize the patterns and software implementations discovered in the assessment, specifically those in support
of metamodelling, with the objective of extending the company’s architectural guidelines and Reusable Asset Repository (see “Reusable Asset Repository and Sodalia Architectural Framework”) with a common flexible infrastructure for this type of application.
Productivity and Cost Estimates. One finding that emerged from this assessment is the unavailability of reliable effort data that would allow us to calculate the implementation costs of the reusable core components. As these components have been developed over a number of project iterations, it proved too difficult to analyze the collected effort data in a coherent way. As product line development emerges as the most widely adopted software development process in the company, with a corresponding dramatic growth of the impact of reuse on software process metrics, we need to refine our practices in this area. A simple but effective framework for calculating software process metrics is a target for our future activities.
Acknowledgements. The authors wish to thank M. Banzi for the help provided.
References
1. Buschmann, F., et al. Pattern-Oriented Software Architecture. John Wiley & Sons, 1996.
2. Coplien, J.O., Schmidt, D.C. Pattern Languages of Program Design, vol. I. Addison-Wesley, 1995.
3. Borioni, S., Marini, M. A Service Management Application Framework: Business, Interoperability and Architectural Requirements. Proceedings of Interworking, 1998.
4. Feldkhun, L., Marini, M., Borioni, S. Integrated Customer-Focused Network Management: Architectural Perspectives. Proceedings of the Fifth IFIP/IEEE International Symposium on Integrated Network Management, San Diego, California, USA, May 12-16, 1997.
5. Gamma, E., Helm, R., Johnson, R., Vlissides, J. Design Patterns - Elements of Reusable Object-Oriented Software. Addison-Wesley, Reading, MA, 1994.
6. Johnson, R.E. Frameworks = (Components + Patterns). Communications of the ACM, October 1997, pp. 71-77.
7. Krasner, G.E., Pope, S.T. A Cookbook for Using the Model View Controller User Interface Paradigm in Smalltalk-80. Journal of Object Oriented Programming, August 1988.
8. Martin, R., Riehle, D., Buschmann, F. Pattern Languages of Program Design, vol. III. Addison-Wesley, 1997.
9. Object Management Group. The Common Object Request Broker: Architecture and Specification, Revision 2.0, July 1995.
10. Object Management Group. Common Object Services Specification, Volume 1. OMG Document Number 91-1-1, Revision 1.0, March 1, 1994.
11. Taligent, Inc. Building Object-Oriented Frameworks, 1994.
12. Vlissides, J., Coplien, J.O., Kerth, N.L. Pattern Languages of Program Design, vol. II. Addison-Wesley, 1996.
13. Willars, H. Amplification of Business Cognition through Modelling Techniques. Proceedings of the 11th IEA Congress, Paris, July 1991.
14. Wirfs-Brock, R.J., Johnson, R.E. Surveying Current Research in Object-Oriented Design. Communications of the ACM, 33(9), pp. 104-124, September 1990.
15. Clements, P., Northrop, L. A Framework for Software Product Line Practice, Version 2.0, July 1999.
16. Cortese, G., Fregonese, G., Zorer, A. Architectural Framework Modeling in the Telecommunication Domain. Proc. ICSE 99, May 1999.
17. Fayad, M., Schmidt, D. Object-Oriented Application Frameworks. Communications of the ACM, Vol. 40, No. 10, October 1997.
18. Foote, B., Yoder, J. Metadata and Active Object Models. PLoP '98, 1998.
19. Sodalia DE/NAM. Service Provisioning Tool System Architecture. Sodalia internal document, 1998.
20. Pree, W. Framework Patterns. SIGS Books and Multimedia, 1996.
21. Jones, Capers. Backfiring: Converting Lines of Code to Function Points. Computer, Vol. 28, No. 11, Nov. 1995, pp. 87-88.
Object Oriented Analysis and Modeling for Families of Systems with UML
Hassan Gomaa
Department of Information and Software Engineering, George Mason University, Fairfax, VA 22030-4444, USA
(703) 993 1652
[email protected] Abstract. This paper describes how the Unified Modeling Language (UML) notation can be used to model families of systems. The use case model for the family is used to model kernel and optional use cases. The static model for the family is used to model kernel, optional and variant classes, as well as their relationships. The dynamic model for the family is used to model object interactions in support of kernel and optional use cases, and for modeling all state dependent kernel, optional, and variant objects using statecharts.
1
Introduction
A software reuse area of growing importance is that of domain engineering of families of systems [Parnas79], also referred to as software product lines, where an application domain is modeled by analyzing the common and variant aspects of the family [Coplien98, DeBaud99, Dionisi98, Kang90]. At George Mason University, a project is underway to develop software engineering lifecycles, methods, and environments that support software reuse at the requirements and design phases of the software lifecycle, in addition to the coding phase. A reuse-oriented software lifecycle, the Evolutionary Domain Lifecycle [Gomaa95], has been proposed, which is a highly iterative lifecycle that takes an application domain perspective, allowing the development of families of systems. Earlier research addressed the analysis and specification phases of the Evolutionary Domain Life Cycle for developing families of concurrent and distributed applications. It also addressed the domain modeling environment developed at GMU to support configuring target systems from a domain model [Gomaa94, Gomaa96, Gomaa97]. The domain analysis and modeling method described in [Gomaa95] emphasizes analyzing the common and variant aspects of an application domain. This paper describes a new approach for analyzing and modeling a family of systems, which uses the Unified Modeling Language (UML) notation [Booch98, Rumbaugh99]. This paper starts by briefly describing the earlier Domain Analysis and Modeling method. It then describes how the UML notation for use case modeling, static modeling, and dynamic modeling can be used to model families of systems. Examples are given from a factory automation domain.
2
Domain Modeling Method
A Domain Model is a multiple view object-oriented analysis model for the application domain that reflects the common aspects and variations among the members of the family of systems that constitute the domain. The domain modeling method [Gomaa95, Gomaa96] is similar to other object-oriented methods when used for analyzing and modeling a single system [e.g., Rumbaugh91]. Its novelty is the way it extends object-oriented methods to model families of systems. The method allows the explicit modeling of the similarities and variations in a family of systems. In a domain model, an application domain is represented by means of multiple views, such that each view presents a different perspective on the application domain. Four of the views, the aggregation hierarchy, the object communication diagrams, the generalization/specialization hierarchy, and the state transition diagrams have similar counterparts in other object-oriented methods used for modeling single systems. However, in the domain modeling method, the aggregation hierarchy is also used to model optional classes, which are used by some but not necessarily all members of the family of systems. Furthermore, the generalization / specialization hierarchy is also used to model variants of a class, which are used by different members of the family of systems. The fifth view, the feature/class dependency view, is used to represent explicitly the variations captured in the domain model. This view relates the end-user's perspective of the domain, namely the features (functional requirements) supported by the domain, to the classes in the domain model. It shows for each feature the classes required to support the feature. Also defined are any prerequisite features required and any mutually exclusive features. This view is particularly important for optional features, since it is the selection of the optional features, and the classes required to support them, that determine the nature of the desired target system.
3
Domain Modeling for Families of Systems with UML
Object-oriented concepts are considered important in software reuse and evolution because they address fundamental issues of adaptation and evolution. Object-oriented methods are based on the concepts of information hiding, classes, and inheritance. With the proliferation of notations and methods for the object-oriented analysis and design of software systems [Jacobson92, Rumbaugh91, etc.], the Unified Modeling Language (UML) is an attempt to provide a standardized notation for describing object-oriented models [Booch98, Jacobson99, Rumbaugh99]. However, for the UML notation to be effectively used, it needs to be used in conjunction with an object-oriented analysis and design method. The Object Oriented Analysis and Modeling method for single systems [Gomaa00] uses a combination of use cases [Jacobson92], object modeling [Rumbaugh91], statecharts [Harel96, Rumbaugh91], and event sequence diagrams used by several methods [Jacobson92, Rumbaugh91, etc.]. Next, consider how the UML notation may be used to model families of systems.
3.1
Use Case Model for Families of Systems
The functional requirements of the system are defined in terms of use cases and actors [Jacobson92]. An actor is a user type. A use case describes the sequence of interactions between the actor and the system, considered as a black box, to satisfy a functional requirement. For a single system, all use cases are required. When modeling a family of systems, only some use cases are required by all members of the family. These use cases are referred to as kernel use cases. Optional use cases are those use cases required by some but not all members of the family. Some use cases may be variant, that is different versions of the use case are required by different members of the family. The variant use cases are usually mutually exclusive. With the Kernel First Approach [Gomaa95], the kernel of the application domain is determined first. In other words, the common use cases that are used by all members of the family are determined first. The optional use cases are then determined iteratively. With the View Integration Approach [Gomaa95], which is most applicable when there are existing systems that can be analyzed, each system is considered a view of the application domain and the different views are integrated. In terms of use cases, this means that the use cases for each member of the family are specified. If the use cases do not exist, then it is necessary to develop them; in effect this is a reverse engineering exercise. After developing the use cases for each member of the family, the use cases from the different systems are compared. Use cases common to all members of the family constitute the kernel of the application domain. Use cases that are only used by an individual system or a subset of the systems constitute the optional use cases. A kernel or optional use case may also be parameterized using variation points [Jacobson97]. An example of a use case model using the UML notation is given in Fig. 1. The Factory Operator actor uses two use cases, View Alarms and Generate Alarm and Notify. Briefly, in the View Alarms use case the actor is the factory operator, who views outstanding alarms and acknowledges that the cause of an alarm is being attended to. In Generate Alarm and Notify, a robot generates an alarm and the operator is notified.
3.2 Static Model for Families of Systems
The static modeling notation is a rich notation for modeling classes and their relationships on class diagrams. It can thus be used for modeling the associations between classes (as done for single systems), as well as for modeling the hierarchies used in domain models for families of systems, namely the Aggregation Hierarchy and the Generalization / Specialization hierarchies (Section 2 and [Gomaa95]). A static model for an application domain has kernel classes, which are used by all members of the family, and optional classes that are used by some but not all members of the family of systems. Furthermore, a generalization / specialization hierarchy is used to model variants of a class, which are used by different members of the family of systems. UML stereotypes are used to allow new modeling elements,
tailored to the modeler’s problem, which are based on existing modeling elements [Booch98, Rumbaugh99]. Thus, the stereotypes «kernel» and «optional» are used to distinguish between kernel and optional classes.
[Fig. 1. Example of Use Case Model: actors Pick and Place Robot, Assembly Robot, and Factory Operator; use cases Generate Alarm and Notify and View Alarms.]
An example of an Aggregation Hierarchy from the factory automation domain is given in Fig. 2. The kernel factory aggregate class is composed of a kernel factory workstation class and an optional Automated Guided Vehicle Manager. An example of a generalization/specialization hierarchy is given in Fig. 3, in which the factory workstation class is specialized to support three variants, the flexible workstation, the high volume workstation, and the factory monitoring workstation. The way the static model for the family is developed depends on the strategy used for developing the domain model. With the View Integration Approach, a separate static model is developed for each member of the family. Each static model is considered a separate view of the application domain. The static model for the application domain is produced by integrating the individual views. If the Kernel First Approach is used, then the static model for the kernel is developed first and evolved as variations are considered for inclusion in the domain model. When static models for the different members (views) of the family are integrated, classes that are common to all members of the family are kernel classes of the integrated static model, which is referred to as the domain static model. Classes that are in one view but not others are optional classes of the domain static model. Classes that have some common aspects but also some differences are generalized. The common attributes and operations are captured in the superclass while the differences are captured in the variant subclasses of the superclass.
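One possible code-level reading of such a generalization/specialization hierarchy (purely illustrative; the paper stays at the modeling level and does not prescribe these members) is a superclass carrying the common attributes and operations, with each variant captured in a subclass:

// Illustrative mapping of the variant hierarchy of Fig. 3 to code (hypothetical members).
abstract class FactoryWorkstation {
    protected String stationId;                // common attribute shared by all variants
    FactoryWorkstation(String stationId) { this.stationId = stationId; }
    abstract void processPart(String partId);  // common operation, variant behaviour
}

class HighVolumeWorkstation extends FactoryWorkstation {
    HighVolumeWorkstation(String id) { super(id); }
    void processPart(String partId) { /* fixed sequence of assembly operations */ }
}

class FlexibleWorkstation extends FactoryWorkstation {
    FlexibleWorkstation(String id) { super(id); }
    void processPart(String partId) { /* operations selected per part type */ }
}

class FactoryMonitoringWorkstation extends FactoryWorkstation {
    FactoryMonitoringWorkstation(String id) { super(id); }
    void processPart(String partId) { /* monitoring only, no assembly */ }
}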
3.3 State Machine Model for Families of Systems
A statechart is developed for each state dependent object, including kernel, optional, and variant objects. There have been some attempts to model variations using
hierarchical statecharts, where the superstate is used to model the generalized aspects of the superclass while the substates are used to model the variations used in each subclass [Harel96].
[Fig. 2. Example of Aggregation Hierarchy: the Factory aggregate class is composed of the Factory Workstation class and the Automated Guided Vehicle (AGV) Manager class.]
[Fig. 3. Example of Generalization / Specialization Hierarchy: Factory Workstation is specialized into Flexible Workstation, High Volume Workstation, and Factory Monitoring Workstation.]
State dependent control objects are depicted using statecharts. Since there can be variants of a control object, each variant is modeled using its own statechart. In this situation, the statecharts are all mutually exclusive as they model different variants. Each state dependent object would support a different feature and all the features would be mutually exclusive. For example, an automobile cruise control system may have a controlled deceleration feature or may not. Each of the alternative features is supported by a different Cruise Control object with a different statechart. The features (cruise control with controlled deceleration, cruise control without controlled deceleration) are mutually exclusive. However, it is possible in some applications for the variant features not to be mutually exclusive. Consider an Elevator Control System where there are multiple elevators, and where some elevators have a high speed feature for bypassing the first twenty floors and others do not. This case would again be represented by two different features, where each type of elevator is modeled using a different statechart. However, since the two elevators can co-exist in the same
system, in this case it is possible to select both features (for different elevators) and hence to have co-existing variant statecharts. An example of a statechart is given in Fig. 4 for a state dependent control class, namely the High Volume Workstation variant class (depicted in Fig. 3).
[Fig. 4. Example of statechart: states include Awaiting Part from Predecessor Workstation, Part Arriving, Robot Picking, Assembling Part, Robot Placing, and Awaiting Part Request from Successor Workstation, with transitions such as Workstation Startup / Part Request, Part Arrived / Robot Pick, Part Ready / Start Assembly, Operation End [Part Requested] / Robot Place, and Part Placed / Part Coming, Part Request.]
3.4 Collaboration Model for Families of Systems
As with single systems, the collaboration model is used to depict the objects that participate in each use case, and the sequence of messages passed between them [Booch98, Jacobson99]. In families of systems, the collaboration model is developed for each use case, kernel or optional. Once the use cases have been determined and categorized as kernel, optional, or variant, the collaboration diagrams can be developed. Fig. 5 shows the collaboration diagram for the View Alarms use case depicted in Fig. 1. This is a simple client/server use case [Gomaa98] involving a client object and a server object.
[Fig. 5. Example of Collaboration Diagram: the Factory Operator actor sends S1: Operator Request to the Operator Interface object, which sends S1.1: Alarm Request to the Alarm Handling Server, receives S1.2: Alarm Data, and returns S1.3: Display Info to the actor.]
In the domain modeling method [Gomaa95], for each feature, the objects that are needed to support the feature are determined and depicted on a feature based object communication diagram, which is similar to a collaboration diagram. With this UML based approach, the objects are determined from the use cases. The relationship between use cases and features is described in Section 5.
4
Feature Analysis
Feature analysis is an important aspect of domain analysis [Cohen98, Gomaa95, Griss98, Dionisi98, Kang90]. In the FODA (Feature-Oriented Domain Analysis) method [Cohen98, Kang90], features are organized into a feature tree. In FODA, features may be mandatory (kernel), optional, or mutually exclusive. The tree is an aggregation hierarchy of features, where some branches are kernel, some are optional, and others are mutually exclusive. In FODA, features may be functional features (hardware or software), non-functional features (e.g., relating to security or performance), or parameters (e.g., red, yellow, or green). Features higher up in the tree are composite features if they contain other lower level features. In the domain modeling method, for each feature, this view shows the classes required to support the feature [Gomaa95]. In domain analysis, features are analyzed and categorized as kernel features (must be supported in all target systems), optional features (only required in some target systems), and prerequisite features (dependent upon other features). There may also be dependencies among features, as described below. This view emphasizes optional features, because it is the selection of the optional features, and the classes required to support them, that determines the nature of the desired target system. Feature/class dependencies may be modeled in UML using the package notation, where a package is a grouping of model elements [Booch98]. An example of feature/class dependencies for the factory automation domain is given in Fig. 6. The High Volume Manufacturing feature is depicted as a package with the stereotype «feature», which groups the three optional classes that support this feature, the Receiving Workstation, the Line Workstation, and the Assembly Workstation.
[Fig. 6. Example of Feature/Class Dependency: the «feature» High Volume Manufacturing package groups the «optional» classes Receiving Workstation, Line Workstation, and Shipping Workstation.]
5
Features and Use Cases
In the object-oriented analysis of single systems, use cases are used to determine the functional features of a system. They can also serve this purpose in families of systems. Griss [Griss98] has pointed out that the goal of the use case analysis is to get a good understanding of the functional requirements whereas the goal of feature
analysis is to enable reuse. The emphasis in feature analysis is on the optional features, since the optional features differentiate one member of the family from the others. Use cases and features may be used to complement each other. In particular, use cases can be mapped to features based on their reuse properties. In the Domain Modeling Method [Gomaa94, Gomaa95], functional requirements that are required by all members of the family are packaged into a kernel feature. From a use case perspective, this means that the kernel use cases, which are required by all members of the family, constitute the kernel feature. Optional use cases, which are always used together, may also be packaged into an optional feature. For example, the two use cases shown in Fig. 1 could be packaged into an Alarm Handling feature, as shown in Fig. 7.
[Fig. 7. Feature As Use Case Package: the Alarm Handling package groups the Generate Alarm and Notify and View Alarms use cases, with actors Pick and Place Robot, Assembly Robot, and Factory Operator.]
In the UML, use case relationships can also be specified. Thus, common functionality among several use cases can be split off into an abstract use case, which can then be included in other use cases. This dependency among use cases is analogous to feature dependency, where one feature may require another feature as a prerequisite. Thus in the factory automation domain, High Volume Manufacturing and Flexible Manufacturing are optional use cases. However, the common functionality among the two use cases is split off into an abstract use case called Factory Production. The abstract use case is used by the other two, now more concise, use cases (Fig. 8). Based on this, we can choose to have features corresponding to each of these use cases, where the High Volume Manufacturing and Flexible Manufacturing features both require the Factory Production feature. Another form of use case relationship is the extend relationship, where one use case may extend another when certain conditions hold. In the example, the Flexible Manufacturing with Storage use case can be used to extend the Flexible Manufacturing use case (Fig. 8). Thus, introducing intermediate part storage in the
factory is an extension of the case where no intermediate part storage is provided. As before, each use case is mapped to a feature where the Flexible Manufacturing with Storage feature requires the Flexible Manufacturing feature.
[Fig. 8. Use Case Dependencies and Feature Dependencies: High Volume Manufacturing and Flexible Manufacturing include the abstract Factory Production use case; Flexible Manufacturing with Storage extends Flexible Manufacturing.]
6
Target System Configuration
The analysis model for the target system is configured from the domain model by selecting the desired optional features subject to the feature/feature constraints [Gomaa94, Gomaa96]. Kernel features are automatically included. With the UML based domain modeling method, the analysis model for the target system consists of all kernel use cases and the optional use cases that correspond to the optional features. In addition, the analysis model for the target system includes the object collaboration diagrams that correspond to the selected use cases and a tailored class diagram that includes all kernel classes and those optional classes that support the selected features.
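The selection step can be pictured as a simple constraint check over the chosen optional features. The sketch below (hypothetical feature names and a deliberately minimal rule set, not part of the GMU environment) illustrates prerequisite and mutual-exclusion checking of the kind described above.

import java.util.*;

// Minimal illustration of checking a target-system feature selection against
// prerequisite and mutual-exclusion constraints (feature names are examples only).
class FeatureConfigurator {
    static final Map<String, List<String>> PREREQS = Map.of(
        "HighVolumeManufacturing", List.of("FactoryProduction"),
        "FlexibleManufacturing", List.of("FactoryProduction"),
        "FlexibleManufacturingWithStorage", List.of("FlexibleManufacturing"));
    static final List<Set<String>> MUTEX = List.of();  // no mutually exclusive groups in this tiny example

    static List<String> validate(Set<String> selected) {
        List<String> errors = new ArrayList<>();
        for (String f : selected)
            for (String pre : PREREQS.getOrDefault(f, List.of()))
                if (!selected.contains(pre))
                    errors.add(f + " requires " + pre);
        for (Set<String> group : MUTEX)
            if (selected.stream().filter(group::contains).count() > 1)
                errors.add("mutually exclusive features selected: " + group);
        return errors;
    }

    public static void main(String[] args) {
        Set<String> choice = Set.of("FlexibleManufacturingWithStorage", "FlexibleManufacturing");
        System.out.println(validate(choice));  // reports the missing FactoryProduction prerequisite
    }
}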
7
Conclusions
This paper has described how the Unified Modeling Language (UML) notation can be used to model families of systems. The use case model for the family is used to model kernel and optional use cases. The static model for the family is used to model kernel, optional and variant classes, as well as their relationships. It is also used for modeling feature dependencies. The dynamic model for the family is used to model object interactions in support of kernel and optional use cases, and for modeling all state dependent kernel, optional, and variant objects using statecharts. Future plans include developing a second generation knowledge based software engineering environment [Gomaa94, Gomaa96, Gomaa97] for generating target system analysis models and architectures from the UML based domain model for the family of systems.
References
[Booch98] G. Booch, J. Rumbaugh, I. Jacobson, “The Unified Modeling Language User Guide”, Addison Wesley, Reading MA, 1999.
[Cohen98] S. Cohen and L. Northrop, “Object-Oriented Technology and Domain Analysis”, Proc. International Conference on Software Reuse, Victoria, June 1998.
[Coplien98] J. Coplien, D. Hoffman, D. Weiss, “Commonality and Variability in Software Engineering”, IEEE Software, Nov/Dec 1998.
[DeBaud99] J.M. DeBaud and K. Schmid, “A Systematic Approach to Derive the Scope of Software Product Lines”, Proc. IEEE Intl. Conf. Soft. Eng., LA, May 1999.
[Dionisi98] A. Dionisi et al., “FODAcom: An Experience with Domain Modeling in the Italian Telecom Industry”, Proc. Intl. Conf. Software Reuse, Victoria, June 1998.
[Gomaa94] H. Gomaa, L. Kerschberg, V. Sugumaran, C. Bosch, I. Tavakoli, “A Prototype Domain Modeling Environment for Reusable Software Architectures,” Proc. IEEE Intl. Conf. on Software Reuse, Rio de Janeiro, Brazil, November 1994.
[Gomaa95] H. Gomaa, “Reusable Software Requirements and Architectures for Families of Systems”, Journal of Systems and Software, May 1995.
[Gomaa96] H. Gomaa, L. Kerschberg, V. Sugumaran, C. Bosch, I. Tavakoli, “A Knowledge-Based Software Engineering Environment for Reusable Software Requirements and Architectures,” J. Auto. Softw. Eng., Vol. 3, Nos. 3/4, Aug. 1996.
[Gomaa97] H. Gomaa and G. Farrukh, “Automated Configuration of Distributed Applications from Reusable Software Architectures”, Proc. IEEE International Conference on Automated Software Engineering, Lake Tahoe, November 1997.
[Gomaa00] H. Gomaa, “Designing Concurrent, Distributed, and Real-Time Applications with UML”, Addison Wesley, Reading MA, 2000.
[Griss98] M. Griss, J. Favaro, M. D’Alessandro, “Integrating Feature Modeling with the RSEB”, Proc. International Conference on Software Reuse, Victoria, June 1998.
[Harel96] D. Harel and E. Gery, “Executable Object Modeling with Statecharts”, Proc. 18th International Conference on Software Engineering, Berlin, March 1996.
[Jacobson92] I. Jacobson et al., Object-Oriented Software Engineering, Addison Wesley, Reading MA, 1992.
[Jacobson97] I. Jacobson, M. Griss, P. Jonsson, “Software Reuse - Architecture, Process and Organization for Business Success”, Addison Wesley, 1997.
[Kang90] K.C. Kang et al., “Feature-Oriented Domain Analysis,” Technical Report No. CMU/SEI-90-TR-21, Software Engineering Institute, November 1990.
[Parnas79] D. Parnas, “Designing Software for Ease of Extension and Contraction,” IEEE Transactions on Software Engineering, March 1979.
[Rumbaugh91] J. Rumbaugh et al., “Object-Oriented Modeling and Design,” Prentice Hall, 1991.
[Rumbaugh99] J. Rumbaugh, G. Booch, I. Jacobson, “The Unified Modeling Language Reference Manual,” Addison Wesley, Reading MA, 1999.
Framework-Based Applications: From Incremental Development to Incremental Reasoning
Neelam Soundarajan and Stephen Fridella
Computer and Information Science, The Ohio State University, Columbus, OH 43210
{neelam,fridella}@cis.ohio-state.edu
Abstract. Object-oriented frameworks provide a powerful technique for developing groups of related applications. The framework includes one or more template methods that at appropriate points call various hook methods. Each application built on a framework reuses the template methods provided by the framework; the application developer provides definitions, tailored to the needs of the particular application, for only the hook methods. Our goal is to develop a technique for reasoning in which we reason about the framework behavior just once; whenever a new application A is developed, we arrive at its behavior by composing the behavior of hook methods defined in A with the behavior of the framework. Just as the template methods allow the application developers to reuse the code of the framework, our technique allows them to reuse the effort involved in reasoning about the framework in understanding each application built on the framework. We illustrate the technique by applying it to a simple example, and contrast our approach with a more standard approach based on behavioral subtyping.
1
Introduction
Object-oriented frameworks [5,12] provide a powerful technique for developing groups of similar or related applications. The framework designer designs the main classes and methods, including in particular the key methods that direct the flow of control, and call appropriate methods of various classes. One or more of the called methods will be virtual1 or pure virtual; indeed, one or more of the classes may just be interfaces (abstract base classes in C++ terminology), i.e., consist entirely of pure virtual methods. In the patterns literature [6], the method(s) directing the flow of control are called template methods, and the (pure) virtual methods that they invoke, hook methods. In order to implement a complete application based on this framework, the application developer would provide suitable definitions for all the hook methods; a different developer could 1
For concreteness we often use C++/Java-terminology, but our approach is not language specific.
produce a (possibly very) different application by providing an alternate set of definitions for the hook methods. If the framework has been designed carefully and provides the right hooks, an entire new application can be developed with just the incremental effort involved in designing a new set of hook methods. In this paper we develop a similarly incremental technique for reasoning about the framework and the applications. Using our technique, we would reason about the framework behavior just once; every time a new application is developed using this framework, we would incrementally arrive at the behavior of this application by first reasoning about the behavior of the newly defined hook methods, and then plugging this behavior into the behavior of the framework. The key idea underlying our approach that makes this possible is the notion of the hook method call trace; with each template method we associate such a trace and record on it, information about every call the template method makes to a hook method. As we will see, specifying the behavior of the template method not just in terms of the member variables of the class but also in terms of this trace, allows us to plug-in the behavior of the hook methods as defined in the derived class, to arrive at the application-specific behavior. By contrast, the standard approach to dealing with applications built on OO frameworks, based on the notion of behavioral subtyping [9,10], focuses on just the behavior provided by the framework. The main contributions of this paper may be summarized as follows: – It makes the case that behavioral subtyping by itself is not sufficient to deal with framework-based applications since it is concerned only with behavior that is common to all the applications based on the given framework, completely ignoring application-specific behavior. – It presents a specification notation and reasoning technique using which we can include sufficient information in the specifications of template methods, so that information about the behavior of hook methods –as defined in the application– can be plugged-in, to arrive at specifications appropriate to the particular application. – It illustrates the approach by applying it to a simple example. The rest of the paper is organized as follows: In Section 2, we summarize the behavioral subtyping approach to dealing with framework-based applications. In Section 3 we consider, via a simple framework and several applications that might be built on it, what additional information needs to be included in specifications of template methods, and how this may be done. In Section 4 we see how application-specific information about the hook methods may be plugged into the specification of the template methods. Throughout, our focus is on motivating and intuitively justifying our approach rather than just the formal details. In Section 5 we apply the approach to a simple example, a continuation of the one used in Section 3. In the final section we reiterate the need for going beyond behavioral subtyping in dealing with frameworks, summarize how our approach does so, and point out problems that remain to be addressed.
2
Behavioral Subtyping
Consider a simple situation: the framework consists of a single class C that has one non-virtual method t() and one virtual2 method h(); C may also include one or more data members. In the design patterns terminology, t() is a template method and it invokes (one or more times) the hook method h(). In order to develop an ‘application’ using this framework, the developer would define a derived class D of C that would provide a definition for h(). A client programmer who wishes to use this application would create an object X of type D and invoke the method t() on it. OO polymorphism will ensure that during this execution of t(), the invocations of h() will in fact call the method defined in D, so that this call to t() will exhibit the behavior appropriate for the particular application. The standard approach, based on the notion of behavioral subtyping [9,10], to specifying such a framework may be summarized as follows: First arrive at a specification St for the template method t(). This specification, consisting in the usual manner of pre- and post-conditions, will give us the effect that an execution of the method has on the values of member variables of C. Also provide a specification Sh for the hook method h(), specifying its effect on the values of the member variables of C. Next, validate the specification St –either formally or informally– by analyzing the body of t() as given by its definition in the class C. During this analysis, make use of the specification Sh for analyzing the effects of the calls to h() that appear in t(). If C does not provide a definition for h() (so h() is a pure virtual method in C), then we are done with reasoning about C. If C does include a definition for h() (so the framework C is, by itself, a complete application), then ensure that this definition satisfies the specification Sh. Next, when we develop an application by defining a derived class D of C and providing a definition in D for the method h(), arrive at a specification S′h corresponding to this h(), check that the h() defined in D satisfies S′h, and show that S′h is consistent with Sh; more precisely, show that any method that satisfies S′h must necessarily also satisfy Sh. The last step of showing that S′h is consistent with Sh is crucial to the approach. It ensures that D is a behavioral subtype of C, in other words that the behavior that methods of D exhibit is consistent with the behavior of the methods of C as specified by St and Sh. As a result, we can be sure, without any reanalysis, that t() will satisfy St even though the calls to h() that this t() makes will be to the h() defined in D, rather than the one defined in C. This
In Java and Eiffel all methods are virtual by default, and the non-virtual methods, that is those that are not to be redefined in the application, are flagged as final; in C++, final is the default, and virtual methods are so flagged. Even methods that are not flagged as virtual in C++ can be redefined in the derived classes but these redefinitions will not be visible in the base class C; i.e., any calls in t() to such an h() will invoke the one defined in C, not the redefinition in the derived class.
is because in our previous analysis of t() to check that it satisfies St , the only assumption we made about h() was that it would satisfy the specification Sh . While this is valid, it ignores an important point: the power of frameworks derives from the fact that the application developer can, when he3 defines the hook method h() in the class D, tailor the definition to suit the needs of the particular application; OO polymorphism ensures that although the definition of t() is inherited from the framework, as far as clients of this application are concerned, the h() that will be invoked during the execution of t() will be the one defined in D; hence t() will exhibit the behavior tailored to this application. But the specification St is not so tailored; rather, it is written to be valid for all applications that are built on this framework. In other words, as far as the behavioral subtyping approach is concerned, there is no difference between the different applications that may be built on this framework! What we need, then, is a specification of t() that can be tailored to the individual D by plugging-in the specification of h() corresponding to that particular D. Such a specification will capture the unique (as well as the common) behavior provided by that application. This is very important to the client because the reason he chose to use this application rather than one of the others is to exploit the unique behavior provided by this application. Thus a specification technique that brings out the unique behaviors in the different applications built on a common framework is key to utilizing the power of the framework approach; ‘vive la difference’, so to speak! We conclude this section with a note about one point that we have ignored so far – abstraction. Clients of classes should not be concerned with internal details of member variables of the class, or how the methods modify the values of these variables. The standard way [7] of addressing this is to have two specifications, a concrete one for the class designer/implementor, and an abstract one for the client. All of this applies in a standard manner to the reasoning technique we develop, and hence we omit further discussion of it, using only concrete specifications throughout. Once the concrete specification has been obtained for an application, we can provide an abstract specification for it and relate it to the concrete specification in the standard manner.
3
Specifying the Framework
What extra information do we need to include in the specification of a template method t() so that we can plug in appropriate information about the hook method from the application? The example framework we consider next and the applications that may be built on it, will help us answer this question. Our example is a simple banking framework, specifically a general type of bank account. The definition4 of the BankAccount class appears in Figure 1. Deposit and Withdraw, as their names suggest, are used to deposit money into and withdraw 3 4
Following standard practice, we use ‘he’, ‘his’, etc. as abbreviations for ‘he or she’, ‘his or her’ etc. For concreteness, we use a Java-style syntax in examples.
class BankAccount {
  protected int bal;           // current account balance
  protected int nmonthlies;    // no. of monthly transactions
  protected Array monthlies;   // transactions which happen every month

  BankAccount(int b) { bal = b; monthlies.init(0); nmonthlies = 0; }
  public void Deposit(int amt) { bal = bal + amt; }
  public void Withdraw(int amt) { bal = bal - amt; }
  public final int GetBalance() { return bal; }
  public final void AddMonthly(int amt) {
    if (monthlies.size() > nmonthlies) { monthlies[nmonthlies] = amt; nmonthlies++; }
  }
  public void NewMonth() {
    for (int i = 0; i < nmonthlies; i++) {
      if (monthlies[i] > 0) { this.Deposit(monthlies[i]); }
      else { this.Withdraw(-monthlies[i]); }
    }
  }
}
Fig. 1. Class BankAccount
money from the account. It is expected that the application designer will provide alternate definitions for these two methods tailored to the kind of bank account needed for his particular application, which is why these are not flagged as final. monthlies[] is an array of transactions that are expected to be performed once every month; new transactions may be added to this array by calling the AddMonthly method. NewMonth is the method that will perform all the transactions in this array and is expected to be invoked once a month. When NewMonth is invoked, it does not directly update bal, but instead calls Deposit and Withdraw to perform the transactions; so if the application developer provides new definitions for Deposit and/or Withdraw, NewMonth will invoke these newly defined methods. Thus NewMonth is the template method of our framework and Deposit and Withdraw the hook methods. How do we specify the behavior of these methods? We will use f(x).pre and f(x).post to denote the pre-condition and post-condition respectively of a method f(), x being the parameter to the method. Consider the Deposit method:

Deposit(amt).pre ≡ true
Deposit(amt).post ≡ (bal = #bal + amt) ∧ !{nmonthlies, monthlies, amt}     (1)
As usual, pre-conditions are assertions on the state of the object and parameters immediately before the method is invoked5; post-conditions are assertions over the state immediately before and immediately after a method is invoked. The specification states that if the pre-condition holds for the state before a method invocation, then the post-condition is guaranteed to hold over that same state and the state when the method finishes. In our post-conditions, we use the notation #var to refer to the value of a variable in the state before invocation and var to refer to the variable’s value in the final state. Writing the name of a variable within braces preceded by “!” is a shorthand way of specifying that the value of that variable is the same in the initial and final states. Thus the post-condition above asserts that Deposit updates bal as expected, and does not affect the array of monthly transactions, the number of those transactions, or the amt parameter. Withdraw can be specified similarly. What of the template method? How do we specify its behavior? A specification similar to that of Deposit would look like:

NewMonth().pre ≡ true
NewMonth().post ≡ (bal = #bal + Σ_{i=0}^{nmonthlies−1} monthlies[i]) ∧ !{nmonthlies, monthlies}     (2)
This states that bal will be updated to account for all the deposits and withdrawals in monthlies[], and the other variables unaffected. But this specification will not be sufficient from the point of view of the application designer. It does summarize the effect of the NewMonth method on the variables of BankAccount, but it makes no mention of the fact that NewMonth calls the hook methods Deposit and Withdraw. Why is this information important? An example ‘application’ will tell us. Suppose the application designer wants to create a promotional account. Perhaps the bank has entered into a promotional agreement with an airline to offer frequent flier miles to the bank’s customers; the more a customer uses his account, the more miles he earns. The designer can implement this as class PromoAccount that inherits from BankAccount, adds a data member miles which stores the number of miles accumulated by a customer, and methods for getting and resetting this value. Most importantly, PromoAccount would include new versions of Deposit and Withdraw which update miles appropriately. A simple scheme that awards twenty-five miles for each transaction is easily implemented, as in Figure 2. PromoAccount will inherit BankAccount’s facilities for monthly scripted transactions. Moreover, since NewMonth invokes the hook methods Deposit and Withdraw to perform these transactions, the frequent flier miles total for the account will be properly updated which, of course, is the whole point of the example.
In a real system, we would have included the clause amt > 0 in the pre-condition of Deposit. In order to keep specifications simple, we omit this clause. This does not compromise correctness, it just means that our Deposit method does not assume that the amt to be deposited will necessarily be positive.
public class PromoAccount extends BankAccount {
  protected int miles;   // no. of miles earned

  PromoAccount(int b) { super(b); miles = 0; }
  public void Deposit(int amt) { super.Deposit(amt); miles += 25; }
  public void Withdraw(int amt) { super.Withdraw(amt); miles += 25; }
}
Fig. 2. Class PromoAccount
But we cannot arrive at this conclusion from the specification (2) since it does not tell us that NewMonth invokes Deposit or Withdraw, so there is no way to know what effect NewMonth may have on miles. The application designer might even conclude (incorrectly) from (2) that NewMonth will have no effect on miles, and provide a new body for this method in PromoAccount to ensure correct updating of miles. (This would not just lead to unnecessary work – if the new body first calls the base class version to perform the transactions, and then updates the mileage total, the account would receive twice as many miles as it should . . . a bug!) Hence, specifications of template methods should contain at least the following information:
– Names of the hook methods it invokes.
– Number of times it invokes each hook method.
Given this information, the designer could mentally “plug-in” the behavior of the new versions of the hook methods to conclude that NewMonth, as defined in the framework, will update miles appropriately. Are these two pieces of information always sufficient? Unfortunately, the answer is no. Suppose the application designer wanted to use a more complicated scheme for awarding miles; suppose, for example, that in order to encourage deposits and discourage withdrawals, the designer deems that each successive deposit would result in an award of double the number of miles awarded by the previous deposit. However, a withdrawal results in no miles being awarded and a resetting of the “deposit award” to twenty-five. Such a scheme could be easily coded, as sketched below, but now even if the specification of NewMonth said that Deposit and Withdraw are called once for each positive and negative element respectively in the monthlies array, the application designer would be unable to reliably conclude how many miles are awarded by a particular call to NewMonth. The problem is that, quite possibly, NewMonth might perform all the deposits before all the withdrawals (the application designer cannot rule this out based on the specification we just considered), and a much different number of miles may be awarded than if NewMonth, as is actually the case, performs the deposits and withdrawals in the order that they appear in the monthlies array. Hence, in general, we need another piece of information to be added to the specification of a template method:
– The interleaving of the calls to the hook methods.
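For instance, the doubling scheme just described could be coded along the following lines (our sketch, not a figure from the paper):

// Sketch of the "doubling" promotional scheme described above (illustrative only):
// each successive deposit earns twice the miles of the previous one; a withdrawal
// earns nothing and resets the per-deposit award to twenty-five.
public class DoublingPromoAccount extends BankAccount {
  protected int miles = 0;        // no. of miles earned
  protected int nextAward = 25;   // miles the next deposit will earn

  DoublingPromoAccount(int b) { super(b); }

  public void Deposit(int amt) {
    super.Deposit(amt);
    miles += nextAward;
    nextAward *= 2;               // the following deposit earns double
  }

  public void Withdraw(int amt) {
    super.Withdraw(amt);
    nextAward = 25;               // a withdrawal resets the deposit award
  }
}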
And we are still not quite finished. Still more mileage schemes can be envisioned. For example, suppose twenty-five miles are given for both deposits and withdrawals, but not if the withdrawal overdraws the account. In such a case, knowing only the interleaving of the calls to Deposit and Withdraw would not allow the designer to figure out whether one of the calls to Withdraw would result in an overdraft. For this, the specification of NewMonth would have to include information about the parameters of each call to the hook method (e.g., the parameters to each call are the corresponding values in the monthlies array) as well as information about the value of bal – in general, the state of the object – immediately preceding each call to a hook method. Similar arguments can be made to show that we may also need to know about the state immediately after the hook method’s return. So we have a fourth kind of information that should be present in the specification of a template method:

– The state of the object and parameters immediately before and after each hook method call.

How do we include information about these four items in the specification of a template method? With each such method t, we associate a hook method call trace, τ. In this variable we record the interactions between the template method t and the hook methods it invokes. τ will be a sequence; each of its elements will represent a single call to (and return from) a hook method; these elements will be in the same order as the hook method calls. A call to the hook method h() will be represented as the element (h, σ′, aa′, σ, aa) in the trace τ: the first component of this element is the identity of the hook method being called; the second component, σ′, is the state immediately prior to the call to h(); the third, aa′, is the set of values of the arguments passed to h() in this call; the fourth, σ, is the state immediately after the return from h(); and the fifth, aa, is the set of values of the arguments upon return from h(). τ will be the empty sequence at the start of t (since at its start, t has not yet invoked any hook methods). The specification of t will include assertions about the form and content of τ as well as relations among the various elements of τ, and with the final state of the class when t finishes as well as any values t may return. In the worst case the resulting specifications can be unwieldy, but in practice for most frameworks the needed information can be expressed in a fairly concise and comprehensible manner.

It is useful to note what is not included in τ. No information about the sequence of operations that the template method performs between successive calls to the hook methods is recorded; also not recorded are calls to non-hook methods that the template method might make. In other words, information about the state of the object is recorded only at those points where the template method interacts with the hook methods. This is precisely the information the application designer needs in order to arrive at the complete behavior of the template method once he has specified the behavior of the hook method bodies as defined in his specific application.
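Although τ is purely a specification-level construct, it may help to picture one of its elements as a plain data record. The following encoding is ours and only illustrative; it is not part of the reasoning system:

    // Illustrative only: one trace element (h, sigma', aa', sigma, aa) recording a
    // single hook-method call; the trace tau is then simply a sequence of these.
    class HookCall {
        String   hookName;   // h      : which hook method was called
        Object   statePre;   // sigma' : object state immediately before the call
        Object[] argsPre;    // aa'    : argument values passed to the call
        Object   statePost;  // sigma  : object state immediately after the return
        Object[] argsPost;   // aa     : argument values upon return

        HookCall(String h, Object sPre, Object[] aPre, Object sPost, Object[] aPost) {
            hookName = h; statePre = sPre; argsPre = aPre;
            statePost = sPost; argsPost = aPost;
        }
    }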
4 Incremental Reasoning
In this section we present our reasoning technique. We discuss both the reasoning performed by the framework designer and that performed by the application developer who uses the framework. For his part, the framework designer must check that the pre- and post-condition specification for each method of the framework is implemented correctly by the code, in the framework, for that method. Similarly, the application developer must check that each method body defined in his derived class meets its specification. But this developer has two additional tasks: he must ensure that each of these methods continues to satisfy its base class specification; and he must arrive at the richer specification for template methods by plugging in the enriched behavior of the hook methods. Our approach is essentially an extension of standard axiomatic reasoning techniques; see, for example, [1].

Considering first the task of the framework designer, we need a rule to deal with calls to hook methods. The standard rule for method calls is adequate for dealing with calls to non-hook methods, but for calls to hook methods we need a new rule – R1 below – that takes account of recording the call on the trace τ. In this rule, σ represents the state of the object, i.e., the values of all member variables of the class; last(τ) is the last element of τ; abl(τ) is the sequence that contains all elements of τ but the last one; and P[e/v] denotes the assertion P with e substituted for v.

R1. Hook Method Call Rule

    p ⇒ h(x).pre[aa/x]
    [ (∃σ′, aa′). [ p[abl(τ)/τ, σ′/σ, aa′/aa]
                    ∧ h(x).post[σ′/#σ, aa′/#x, aa/x]
                    ∧ last(τ) = (h, σ′, aa′, σ, aa) ] ] ⇒ q
    ------------------------------------------------------
    { p } this.h(aa); { q }

The first antecedent of R1 requires that p (which is an assertion summarizing what we know about the state immediately before h is invoked) must imply the pre-condition of h (with the actual parameter aa substituted for the formal parameter x). The second antecedent ensures that we have added a new element to τ corresponding to this call and that the state at this point (and the returned values of the arguments) satisfy the post-condition of h. In more detail, the left side of this antecedent asserts that the state (and argument values) that existed immediately before the call, together with the trace less its last element, satisfy the pre-condition; that the post-condition of h is satisfied with appropriate substitutions for the before- and after-states and argument values; and that the (newly added) last element of τ consists of the name of the called method (h), the state (σ′) immediately before the call, the initial value (aa′) of the parameter, the state (σ) immediately after the return from h, and the final value of the parameter (aa). Note that the post-condition of h that this rule refers to is the one provided with the framework, not any application-specific post-condition (which we do not yet have, since we do not yet have any applications).

How can we be sure that this reasoning remains valid when the application designer redefines the hook method h()? This is part of the reasoning responsibility of the application designer, and the behavioral subtyping rule R2 will
allow this designer to ensure this. We first introduce some notation: Let F denote the framework and A the application we are interested in. Let h.f.pre, h.f.post
denote the pre- and post-conditions of the hook method h as specified in the framework, and h.a.pre, h.a.post the pre- and post-conditions we wish to establish for h in the application. We will use F :: h.f.pre, h.f.post to denote that this specification has been established for the framework F, and a similar notation for the application.

R2. Behavioral Subtyping (for Hook Methods)

    F :: h.f.pre, h.f.post
    { h.a.pre } A.H { h.a.post }
    h.f.pre ⇒ h.a.pre,   h.a.post ⇒ h.f.post
    ------------------------------------------
    A :: h.a.pre, h.a.post
The first antecedent requires the appropriate specification to have been established for the framework. A.H is the body of the hook method as defined in the application; the second antecedent requires us to reason about this body using the application-specific pre- and post-conditions. The third antecedent imposes the behavioral subtyping requirements. In the interest of keeping the presentation simple, we have not allowed for parameters to h in this rule, but it would be straightforward to do so. Also in the interest of simplifying the presentation, we have not included class invariants. Note, however, that class invariants are especially important if h.a.pre imposes any conditions on member variables introduced in the derived class, since h.f.pre cannot ensure such conditions.

Our final rule will allow the application designer to establish an application-specific behavior for a template method t by plugging into the framework specification of t the application-specific behaviors of the hook methods that t invokes. This “plugging-in” can often be done informally by the designer as long as a few key facts are kept in mind. First, the behavior of the hook method as defined in the application must be consistent with its behavior specified in the framework; this is the behavioral subtyping condition imposed by R2. Second, the application-specific data members introduced in a derived class cannot be affected by t except by calls to hook methods; in other words, the only changes that happen to the values of these data members during the execution of t are because of calls that t makes to the hook methods defined in the derived classes. Third, the state immediately before a call to a hook method h and the state immediately after the call will be related by the application-specific post-condition for h. It is this last point that will allow us to establish an application-specific behavior for t, since h.a.post will include information about the new data members introduced in the derived class.

The rule R3 formalizes these notions. We use σ to denote the state of the application object, i.e., the (current) values of all the member variables defined in the framework (that is, the base class), as well as those defined in the application (the derived class). σ↓(F) will represent the framework portion of σ; and τ↓(F) represents the trace obtained by projecting out the framework portion of each of the states recorded in τ.
R3. Rule for Plugging-in Application

    F :: t.f.pre, t.f.post
    t.a.pre ⇔ t.f.pre
    [ t.f.pre[#σ↓(F)/σ] ∧ t.f.post[#σ↓(F)/#σ, σ↓(F)/σ, τ↓(F)/τ] ∧ TBR(#σ, σ, τ) ] ⇒ t.a.post
    ----------------------------------------------------------------------------------------
    A :: t.a.pre, t.a.post
The first antecedent requires us to have established, in the framework, the appropriate specification for t. The second requires the framework-level pre-condition of t to be equivalent to its application-specific pre-condition. The next antecedent is the crucial one; it will, as we will see next, justify t.a.post. Given these three antecedents, R3 allows us to derive, as its conclusion, the specification for t that is appropriate to this application.

Now consider the third antecedent. Recall that our post-conditions involve both the state at the start of the operation as well as the state when the operation finishes; in addition, this being a template method, its specification will also include references to its trace. The first conjunct on the left side of this implication tells us that in t.a.post we may include information about the initial state, i.e., the state when t begins execution, provided to us by t.f.pre, except that where t.f.pre refers to the state, we must refer to that state using the ‘#’ notation. Further, the state that t.f.pre refers to is not the complete state but just the framework portion; hence we replace σ in t.f.pre by #σ↓(F). Similarly, the second conjunct allows us to include, in t.a.post, information about the initial and final states provided by t.f.post (with substitutions of #σ↓(F) for #σ and σ↓(F) for σ). The third conjunct on the left side of the implication tells us that further information, provided to us by the TBR relation, may also be included in t.a.post; indeed this is the part that will allow us to add application-specific information to t.a.post, and we consider that now.

The Trace Behavior Relation (TBR) captures the following facts: the information recorded in elements of τ which represent calls to and returns from the hook methods is consistent with the specifications of these methods as defined in the application; and the only change in the state, as far as the portion that is introduced in the application is concerned, is because of these calls. (We use the notation σ↓(A) to denote the portion of the state introduced in this derived class, paralleling the σ↓(F) notation which we use to denote the framework portion of the state.)

    TBR(#σ, σ, τ) ≡
      (∀k ∈ {0, . . . , |τ|−1}).
        [ (k = 0) ⇒ (τk.is↓(A) = #σ↓(A))
        ∧ (k = |τ|−1) ⇒ (τk.fs↓(A) = σ↓(A))
        ∧ (0 < k ≤ |τ|−1) ⇒ (τk.is↓(A) = τk−1.fs↓(A))
        ∧ (τk.n = hi(x)) ⇒ hi(x).a.post[τk.is/#σ, τk.ip/#x, τk.fs/σ, τk.fp/x] ]

|τ| denotes the length, i.e., the number of elements in τ. τi is the ith element of τ,
the elements being numbered starting at 0. τi.n is the name of the hook method being invoked in this call; τi.is is the state immediately before this call, τi.ip the value of the parameter passed to the call, τi.fs the state immediately after the call returns, and τi.fp the final value of the parameter. Thus the first clause of TBR says that the portion of the state that was introduced in the application must be the same at the time of the first call to a hook method as it was when t was called, since the only way that this portion of the state can change is as a result of a hook method being executed. Similarly, the second clause states that this portion of the state does not change after the last call to a hook method has completed, and the third clause says that this portion of the state is unchanged between calls to hook methods. The fourth clause says that the state immediately before a call to hi and the state immediately after that call returns must be related by the application-specific post-condition for hi. It is this clause that allows us to plug in the application-specific behavior of the hook method hi into the behavior of t and enables us to include, in t.a.post, information about how the portion of the state introduced in the application changes as a result of the execution of t().

It may seem that there ought to be an analogous clause that requires that the state at the time of the call to hi satisfies hi(x).a.pre. But such a clause is not needed, since the behavioral subtyping requirement imposed by R2 ensures that hi(x).a.pre will be satisfied at the time of the call, given that hi(x).f.pre must have been checked as part of the reasoning to show that the body of t, defined in the framework, satisfies its (framework) specification.
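To preview what this machinery buys us, consider the simple PromoAccount of Figure 2 and suppose (this is our reading of the example, not a specification quoted from the paper) that the application-level post-conditions of its hooks imply, roughly,

    Deposit.a.post  ⇒ (miles = #miles + 25)
    Withdraw.a.post ⇒ (miles = #miles + 25)

Then, since TBR chains the application portion of the state from each hook call to the next and every call recorded in τ adds exactly twenty-five to miles, the application-specific post-condition one would expect to derive for NewMonth includes, in addition to the framework-level clauses, a conjunct of the form miles = #miles + 25 · |τ|, i.e., twenty-five miles for each scripted transaction. The framework-level specification given in the next section is the starting point for such a derivation.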
5 Example
Recall the BankAccount class defined in Figure 1. The framework-level specification of the template method NewMonth may be written as follows:

    NewMonth().f.pre ≡ true
    NewMonth().f.post ≡
        (bal = #bal + Σ i=0..nmonthlies−1 monthlies[i]) ∧ !{nmonthlies, monthlies}
      ∧ (|τ| = nmonthlies)
      ∧ (∀i, 0 ≤ i < |τ|:
            ( (monthlies[i] > 0) ⇒ (τi.n = Deposit ∧ τi.p = monthlies[i]) )
          ∧ ( (monthlies[i] ≤ 0) ⇒ (τi.n = Withdraw ∧ τi.p = −monthlies[i]) ) )
      ∧ (τ0.is(bal) = #bal) ∧ (τ|τ|−1.fs(bal) = bal)
      ∧ (∀i, 0 ≤ i

100) and unwieldy. Moreover, different missions might use the same message type at an OPFAC for slightly different purposes. Simpler rules that once sufficed often had to be factored to disambiguate their applicability to newer, more specialized missions. In worse cases, large subsets of rules had to be duplicated, resulting in a non-linear increase in rules and interactions. Moreover, the relationship between rules of different OPFACs, and the missions to which they applied, was lost. Modifying OPFAC rules became perilous without laborious analysis to rediscover and reassess those dependencies. The combinatorial effect of rule set interactions made analysis increasingly difficult and time-consuming.
FSATS management realized that the current implementation was not sustainable in the long term, and a new approach was sought. FSATS would continue to evolve through the addition of new mission types and by varying the behavior of an OPFAC or mission to accommodate doctrinal differences over time or between different branches of the military. Thus, the need for extensible simulators was clearly envisioned. The primary goals of a redesign were to:
• disentangle the logic implementing different mission types to make implementation and testing of a mission independent of existing missions,
• reduce the “conceptual distance” from logic specification to its implementation so that implementations are easily traced back to requirements and verified, and
• allow convenient switching of mission implementations to accommodate requirements from different users and to experiment with new approaches.

2.3 GenVoca

The technology chosen to address problems identified in the first-generation FSATS simulation was a GenVoca PLA implemented using the Jakarta Tool Suite (JTS) [3]. At its core, GenVoca is a design methodology for creating product-lines and building architecturally-extensible software — i.e., software that is extensible via component additions and removals. GenVoca is a scalable outgrowth of an old and practitioner-ignored methodology called step-wise refinement, which advocates that efficient programs can be created by revealing implementation details in a progressive manner. Traditional work on step-wise refinement focussed on microscopic program refinements (e.g., x+0 ⇒ x), for which one had to apply hundreds or thousands of refinements to yield admittedly small programs. While the approach is fundamental and industrial infrastructures are on the horizon [6][24], GenVoca extends step-wise refinement largely by scaling refinements to a component or layer (i.e., multi-class-modularization) granularity, so that applications of great complexity can be expressed as a composition of a few large-scale refinements [2][9]. There are many ways in which to implement GenVoca refinements; the simplest is to use templates called mixin-layers.

Mixin-Layers. A GenVoca component typically encapsulates multiple classes. Figure 3a depicts component X with four classes A-D. Any number of relationships can exist among these classes; Figure 3a shows only inheritance relationships. That is, B and C are subclasses of A, while D has no inheritance relationship with A-C. A subclass is a refinement of a class: it adds new data members, methods, and/or overrides existing methods. A GenVoca refinement simultaneously refines multiple classes. Figure 3b depicts a GenVoca component Y that encapsulates three refining classes (A, B, and D) and an additional class (E). Note that the refining classes (A, B, D) do not have their superclasses specified; this enables them to be “plugged” underneath their yet-to-be-determined superclasses.2 Figure 3c shows the result of composing Y with X (denoted Y<X>). (The classes of Y are outlined in darker ovals to distinguish them from classes of X.)
Figure 3: GenVoca Components and their Composition

Note that the obvious thing happens to classes A, B, and D of component X — they are refined by classes in Y as expected. That is, a linear inheritance refinement chain is created, with the original definition (from X) at the top of the chain, and the most recent refinement (from Y) at the bottom. As more components are composed, the inheritance hierarchies that are produced get progressively broader (as new classes are added) and deeper (as existing classes are refined). As a rule, only the bottom-most class of a refinement chain is instantiated and subclassed to form other distinct chains. (These are indicated by the shaded classes of Figure 3c.) The reason is that these classes contain all of the “features” or “aspects” that were added by “higher” classes in the chain. These “higher” classes simply represent intermediate derivations of the bottom class [3][13][25].

Representation. A GenVoca component/refinement is encoded in JTS as a class with nested classes. A representation of component X of Figure 3a is shown below, where $TEqn.A denotes the most refined version of class A (e.g., classes X.B and X.C in Figure 3a have $TEqn.A as their superclass). We use the Java technique of defining properties via empty interfaces; interface F is used to indicate the “type” of component X:

    interface F { } // empty

    class X implements F {
        class A { ... }
        class B extends $TEqn.A { ... }
        class C extends $TEqn.A { ... }
        class D { ... }
    }
2. More accurately, a refinement of class A is a subclass of A with name A. Normally, subclasses must have distinct names from their superclass, but not so here. The idea is to graft on as many refinements to a class as necessary — forming a linear “refinement” chain — to synthesize the actual version of A that is to be used. Subclasses with names distinct from their superclass define entirely new classes (such as B and C above), which can subsequently be refined.
Components like Y that encapsulate refinements are expressed as mixins — classes whose superclass is specified via a parameter. A representation of Y is a mixin-layer [13][25], where Y’s parameter can be instantiated by any component that is of “type” F:

    class Y extends S implements F {
        class A extends S.A { ... }
        class B extends S.B { ... }
        class D extends S.D { ... }
        class E { ... }
    }
The composition of Y with X, depicted in Figure 3c, is expressed by:

    class MyExample extends Y<X>;
where $TEqn is replaced by MyExample in the instantiated bodies of X and Y. Readers familiar with the GenVoca model will recognize that F corresponds to a realm interface3, X and Y are components of realm F, and MyExample is a type equation [2]. Extensibility is achieved by adding and removing mixin-layers from applications; product-line applications are defined by different compositions of mixin layers.
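As an illustration of this point (Z below is a hypothetical third component of realm F, introduced here only for exposition; it is not part of the FSATS design), different products correspond to different type equations:

    class Z extends S implements F {
        class B extends S.B { ... }   // further refines B
        class G { ... }               // adds a new class G
    }

    class Product1 extends Y<X>;      // X refined by Y
    class Product2 extends Z<Y<X>>;   // X refined by Y, then by Z
    class Product3 extends Z<X>;      // Y removed: X refined by Z alone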
3 The Implementation

The GenVoca-FSATS design was implemented using the Jakarta Tool Suite (JTS) [3], a set of Java-based tools for creating product-line architectures and compilers for extensible Java languages. The following sections outline the essential concepts of our JTS implementation.

3.1 A Design for an Extensible Fire-Support Simulator

The Design. The key idea behind the GenVoca-FSATS design is the encapsulation of individual mission types as components. That is, the central variabilities in FSATS throughout its history (and projected future) lie in the addition, enhancement, and removal of mission types. By encapsulating mission types as components, evolution of FSATS is greatly simplified. We noted earlier that every mission type has a “cross-cutting effect”, because the addition or removal of a mission type impacts multiple OPFAC programs. A mission type is an example of a more general concept called a collaboration — a set of objects that work collectively to achieve a certain goal [22][25][28]. Collaborations have the desirable property that they can be defined largely in isolation from other collaborations, thereby simplifying application design. In the case of FSATS, a mission is a collaboration

3. Technically, a realm interface would not be empty but would specify class interfaces and their methods. That is, a realm interface would include nested interfaces of the classes that a component of that realm should implement. Java (and current JTS extensions of Java) do not enforce that class interfaces be implemented when interface declarations are nested [25].
of objects (OPFACs) that work cooperatively to prosecute a particular mission. The actions taken by each OPFAC are defined by a protocol (state diagram) that it follows to do its part in processing a mission thread. Different OPFACs follow different protocols for different mission types.

An extensible, component-based design for FSATS follows directly from these observations. One component (Basic) defines an initial OPFAC class hierarchy and routines for sending and receiving messages, routing messages to appropriate missions, reading simulation scripts, etc. Figure 4 depicts the Basic component encapsulating multiple classes, one per OPFAC type. The OPFACs that are defined in Basic do not know how to react to external stimuli. Such reactions are encapsulated in mission components.
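To connect this design to the mixin-layer encoding of Section 2.3, the following is a deliberately simplified, hypothetical sketch (class names, realm interface, and protocol bodies are ours, not the actual FSATS source) of a mission layer refining OPFAC classes defined in Basic, and of a composition yielding a simulator that supports two mission types:

    interface FsatsRealm { }   // assumed realm interface

    class Basic implements FsatsRealm {
        class ForwardObserver { ... }   // message routing, script reading, ...
        class Fist            { ... }
        class BrigadeFse      { ... }
    }

    // Mission layer: grafts the WRFFE-artillery protocol onto each participating OPFAC.
    class WrffeArtilleryL extends S implements FsatsRealm {
        class ForwardObserver extends S.ForwardObserver { ... }  // WRFFE-artillery protocol
        class Fist            extends S.Fist            { ... }
        class BrigadeFse      extends S.BrigadeFse      { ... }
    }

    class TotArtilleryL extends S implements FsatsRealm { ... }

    class MySimulator extends TotArtilleryL<WrffeArtilleryL<Basic>>;  // two mission types composed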
Figure 4: OPFAC Inheritance Refinement Hierarchy

Each mission component encapsulates protocols (expressed as state diagrams) that are added to each OPFAC that could participate in a thread of this mission type. Composing a mission component with Basic extends each OPFAC with knowledge of how to react to particular external stimuli and how to coordinate its response with other OPFACs. For example, when the WRFFE-artillery component is added, a forward observer now has a protocol that tells it how to react when it sees an enemy tank — it creates a WRFFE-artillery message which it relays to its FIST. The FIST commander, in turn, follows his WRFFE-artillery protocol to forward this message to his brigade FSE, and so on. Figure 4 depicts the WRFFE-artillery component encapsulating multiple classes, again one per OPFAC type. Each enclosed class encapsulates a protocol which is added to its appropriate OPFAC class. Component composition is accomplished via inheritance, and is shown by dark vertical lines between class ovals in Figure 4. The same holds for other mission components (e.g., TOT-artillery). Note that the classes that are instantiated are the bottom-most classes of these linear inheritance chains, because they embody all the protocols/features that have been grafted onto each OPFAC. Readers will recognize this as an example of the JTS paradigm of Section 2.3, where components are mixin-layers.

The GenVoca-FSATS design has distinct advantages:
• it is mission-type extensible (i.e., it is comparatively easy to add new mission types to an existing GenVoca-FSATS simulator),4
• each mission type is defined largely independently of others, thereby reducing the difficulties of specification, coding, and debugging, and
• understandability is improved: OPFAC behavior is routinely understood and analyzed as mission threads. Mission-type components directly capture this simplicity, avoiding the complications of knowledge acquisition and engineering of rule sets.

Implementation. There are presently 25 different mixin-layer components in GenVoca-FSATS, all of which we are composing now to form a “fully-loaded” simulator. There are individual components for each mission type, just like Figure 4. However, there is no monolithic Basic component. We discovered that Basic could be decomposed into ten largely independent layers (totalling 97 classes) that deal with different aspects of the FSATS infrastructure. For example, there are distinct components for:
• OPFACs reading from simulation scripts, • OPFAC communication with local and remote processes, • OPFAC proxies (objects that are used to evaluate whether OPFAC commanders are supported by desired weapons platforms),
• different weapon OPFACs (e.g., distinct components for mortar, artillery, etc.), and
• GUI displays for graphical depiction of ongoing simulations.

Packaging these capabilities as distinct components both simplifies specifications (because no extraneous details need to be included) and debugging (as components can largely be debugged in isolation). An important feature of our design is that all OPFACs are coded as threads executing within a single Java process. There is an “adaptor” component that refines (or maps) locally executing threads to execute in distributed Java processes [5]. The advantage here is that it is substantially easier to debug layers and mission threads within a single process than to debug remote executions. Furthermore, only when distributed simulations are needed is the adaptor included in an FSATS simulator.

Perspective. It is worth comparing our notion of components with those that are common in today’s software industry. Event-based distributed architectures, where DCOM or CORBA components communicate via message exchanges, are likely to be a dominant architectural paradigm of the future [26]. FSATS is a classic example: OPFAC
4. Although a product-line of different FSATS simulators is possible, presently the emphasis of FSATS is on extensibility. It is worth noting, however, that exponentially-large product-lines of FSATS simulators could be synthesized — i.e., if there are m mission components, there can be up to 2^m distinct compositions/simulators.
programs are distributed DCOM/CORBA “components” that exchange messages. Yet the “components” common to distributed architectures are orthogonal to the components in the GenVoca-FSATS design. Our components (layers) encapsulate fragments of many OPFACs, instead of encapsulating an individual OPFAC. (This is typical of approaches based on collaboration-based or “aspect-based” designs.) Event-based architectures are clearly extensible by their ability to add and remove “component” instances (e.g., adding and removing OPFACs from a simulation). This is (OPFAC) object population extensibility, which FSATS definitely requires. But FSATS also needs software extensibility — OPFAC programs must be mission-type extensible. While these distinctions seem obvious in hindsight, they were not so prior to our work. FSATS clearly differentiates them.

3.2 A Domain-Specific Language for State Machines

We discovered that OPFAC rule sets were largely representations of state diagrams. We found that expressing OPFAC actions as state diagrams was a substantial improvement over rules; they are much easier to explain and understand, and require very little background to comprehend. One of the major goals of the redesign was to minimize the “conceptual distance” between architectural abstractions and their implementation. The problem we faced is that encodings of state diagrams are obscure, and given that our specifications often refined previously created diagrams, expressing state diagrams in pure Java code was unattractive. To eliminate these problems, we used JTS to extend Java with a domain-specific language for declaring and refining state machines, so that our informal state diagrams (nodes, edges, etc.) had a direct expression as a formal, compilable document. This extended version of Java is called JavaSM.

Initial Declarations. A central idea of JavaSM is that a state diagram specification translates into the definition of a single class. There is a generated variable (current_state) whose value indicates the current state of the protocol (i.e., of the state-diagram-class instance). When a message is received by an OPFAC mission, a designated method is invoked with this message as an argument; depending on the state of the protocol, different transitions occur. Figure 5a shows a simple state diagram with three states and three transitions. When a message arrives in the start state, if method booltest() is true, the state advances to stop; otherwise the next state is one. Our model of FSATS required that the boolean conditions that trigger a transition be arbitrary Java expressions with no side-effects, and that the actions performed by a transition be arbitrary Java statements.

Figure 5b shows a JavaSM specification of Figure 5a. (1) defines the name and formal parameters of the void method that delivers a message to the state machine. In case the actions have corrupted the current state, (2) defines the code that is to be executed upon error discovery. When a message is received and no transition is activated, (3) defines the code that is to be executed (in this case, ignore the message). The three states in Figure 5a are declared in (4). Edges are declared in (5): each edge has a name, start state, end state, transition condition, and transition action.
(a) A state diagram with states start, one, and stop; edge t1: start → one when ¬booltest(), edge t2: start → stop when booltest(), edge t3: one → stop unconditionally.
(b) state_diagram exampleJavaSM {
        event_delivery receive_message(M m);          // (1)
        on_error { error(-1,m); }                     // (2)
        otherwise_default { ignore_message(m); }      // (3)

        states start, one, stop;                      // (4)

        edge t1 : start -> one                        // (5)
            conditions !booltest()
            do { /* t1 action */ }

        edge t2 : start -> stop
            conditions booltest()
            do { /* t2 action */ }

        edge t3 : one -> stop
            conditions true
            do { /* t3 action */ }

        // methods and data members from here on...   (7)
        boolean booltest() { ... }
        exampleJavaSM() { current_state = start; }
        ...
    }
Figure 5: State Diagram Specification

Java data member declarations and methods are introduced after edge declarations. When the specification of Figure 5b is translated, the class exampleJavaSM is generated. Additional capabilities of JavaSM are discussed in [3].

Refinement Declarations. State diagrams can be progressively refined in a layered manner. A refinement is the addition of states, actions, and edges to an existing diagram. A common situation in FSATS is illustrated in Figure 6. Protocols for missions of the same general type (e.g., WRFFE) share the same protocol fragment for initialization (Figure 6a). A particular mission type (e.g., WRFFE-artillery) grafts on states and edges that are specific to it (Figure 6b). Additional missions contribute their own states and edges (Figure 6c), thus allowing complex state diagrams to be built in a step-wise manner.
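Before turning to refinement, it may help to picture the translation of Figure 5b. The following is a rough, plain-Java sketch of the kind of class such a specification could translate into; the actual JTS-generated code is not shown in the paper and will differ in detail:

    // Rough sketch only: a hand-written equivalent of the exampleJavaSM machine.
    class ExampleStateMachine {
        static final int START = 0, ONE = 1, STOP = 2;
        int current_state = START;

        void receive_message(M m) {                        // event_delivery method
            switch (current_state) {
                case START:
                    if (booltest()) { /* t2 action */ current_state = STOP; }
                    else            { /* t1 action */ current_state = ONE;  }
                    break;
                case ONE:
                    /* t3 action */ current_state = STOP;  // condition is 'true'
                    break;
                case STOP:
                    ignore_message(m);                     // otherwise_default
                    break;
                default:
                    error(-1, m);                          // on_error
            }
        }

        boolean booltest() { return false; }               // placeholder condition
        void ignore_message(M m) { }
        void error(int code, M m) { }
    }

    class M { }                                            // placeholder message type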
The original state diagram and each refinement are expressed as separate JavaSM specifications that are encapsulated in distinct layers. When these layers are composed, their JavaSM specifications are translated into a Java class hierarchy. Figure 6d shows this hierarchy: the root class was generated from the JavaSM specification of Figure 6a; its immediate subclass was generated from the JavaSM refinement specification of Figure 6b; and the terminal subclass was generated from the JavaSM refinement specification of Figure 6c. Figure 7 sketches a JavaSM specification of this refinement chain.
(a) original diagram   (b) first refinement   (c) second refinement   (d) inheritance hierarchy
Figure 6: Refining State Diagrams

    state_diagram black {
        states one_black, two_black, three_black;
        edge a : one_black -> two_black ...
        edge b : one_black -> three_black ...
    }

    state_diagram shaded refines black {
        states one_shaded;
        edge c : one_black -> one_shaded ...
        edge d : one_shaded -> three_black ...
        edge e : two_black -> three_black ...
        edge f : two_black -> two_black ...
    }

    state_diagram white refines shaded {
        states one_white;
        edge g : two_black -> one_white ...
        edge h : one_white -> three_black ...
    }
Figure 7: A JavaSM Refinement-and-Inheritance Hierarchy

Inheritance plays a central role in this implementation. All the states and edges in Figure 6a are inherited by the diagram refinements of Figure 6b, and these states, edges, etc. are in turn inherited by the diagram refinements of Figure 6c. The diagram that is executed is created by instantiating the bottom-most class of the refinement chain of Figure 6d. Readers will again recognize this as an example of the JTS paradigm of Section 2.3.
Perspective. Domain-specific languages for state diagrams are common (e.g., [7][12][17][18][21]). Our way of expressing state diagrams — namely as states with enter and exit methods, and edges with conditions and actions — is an elementary subset of Harel’s Statecharts [17][18] and of SDL extended finite state machines [12]. The notion of “refinement” in Statecharts is the ability to “explode” individual nodes into complex state diagrams. This is very different from the notion of refinement explored in this paper. Our work is closer to the “refinement” of extended finite state machines in SDL, where a process class (which encodes a state machine) can be refined via subclassing (i.e., new states and edges are added to extend the parent machine’s capabilities). While the idea of state machine refinements is not new, it is new in the context of a DSL addition to a general-purpose programming language (Java), and it is fundamental in the context of component-based development of FSATS simulators.
4 Preliminary Results

Our preliminary findings are encouraging: the objectives of the redesign are met by the GenVoca-FSATS design:
• it is now possible to specify, add, verify, and test a mission type independent of other mission types (because layers/aspects encapsulate code by mission type, the same unit by which it is specified),
• it is now possible to remove and replace mission types to accommodate varying user requirements, and
• JavaSM allows a direct implementation of a specification, thereby reducing the “conceptual distance” between specification and implementation.

As is common in re-engineering projects, detailed statistics on the effort involved in the original implementation are not available. However, we can make some rough comparisons. We estimate the time to add a mission to the original FSATS simulator at about 1 month. A similar addition to GenVoca-FSATS was accomplished in about 3 days, including one iteration to identify and correct an initial misunderstanding of the protocols for that mission.

To evaluate the redesign in a less anecdotal fashion, we collected statistics on program complexity. We used simple measures of class complexity: the number of methods (nmeth), the number of lines of code (nloc), and the number of tokens/symbols (nsymb) per class. (We originally used other metrics [10], but found they provided no further insights.) Because of our use of JTS, we have access to both component-specification code (i.e., layered JavaSM code written by FSATS engineers) and generated non-layered pure-Java code (which approximates code that would have been written by hand). By using metrics to compare pure-Java code vs. JavaSM code and layered vs. non-layered code, we can quantitatively evaluate the impact of layering and JavaSM on reducing program complexity, a key goal of our redesign.
Complexity of Non-Layered Java Code. Consider a non-layered design of FSATS. Suppose all of our class refinement chains were “squashed” into single classes — these would be the classes that would be written by hand if a non-layered design were used. Consider the FSATS class hierarchy that is rooted by MissionImpl; this class encapsulates methods and an encoding of a state diagram that is shared by all OPFACs. (In our prototype, we have implemented different variants of WRFFE missions.) Class FoMission, a subclass of MissionImpl, encapsulates the additional methods and the Java equivalent of state diagram edges/states that define the actions that are specific to a Forward Observer. Other subclasses of MissionImpl encapsulate additions that are specific to other OPFACs. The “Pure Java” columns of Table 1 present complexity statistics of the FoMission and MissionImpl classes. Note that our statistics for subclasses, by definition, must be no less than those of their superclasses (because the complexity of superclasses is inherited).
                          Pure Java                 JavaSM
    Class Name        nmeth   nloc   nsymb     nmeth   nloc   nsymb
    MissionImpl         117    461    3452        54    133    1445
    FoMission           119    490    3737        56    143    1615
Table 1. Statistics for Non-Layered Implementation of Class FoMission

One observation is immediately apparent: the number of methods (117) in MissionImpl is huge. Different encoding techniques for state diagrams might reduce the number, but the complexity would be shifted elsewhere (e.g., methods would become more complicated). Because our prototype only deals with WRFFE missions, we must expect that the number of methods in MissionImpl will increase as more mission types are added. Consider the following: there are 30 methods in MissionImpl alone that are specific to WRFFE missions. When we add a WRFFE mission that is specialized for a particular weapon system (e.g., mortar), another 10 methods are added. Since WRFFE is representative of mission complexity, as more mission types are added with their weapon specializations, it is not inconceivable that MissionImpl will have several hundred methods. Clearly, such a class would be both incomprehensible and unmaintainable.5

Now consider the effects of using JavaSM. The “JavaSM” columns of Table 1 show corresponding statistics, where state exit and enter declarations and edge declarations are treated as (equivalent in complexity to) method declarations. We call such declarations method-equivalents. Comparing the corresponding columns in Table 1, it is clear
5. It would be expected that programmers would introduce some other modularity, thereby decomposing a class with hundreds of methods into multiple classes with smaller numbers of methods. While this would indeed work, it would complicate the “white-board”-to-implementation mapping (which is what we want to avoid) and there would be no guarantee that the resulting design would be mission-type extensible.
that coding in JavaSM reduces software complexity by a factor of 2. That is, the number of method-equivalents is reduced by a factor of 2 (from 119 to 56), the number of lines of code is reduced by a factor of 3 (from 490 to 143), and the number of symbols is reduced by a factor of 2 (from 3737 to 1615). However, the problem that we noted in the pure-Java implementation remains. Namely, the generic WRFFE mission contributes over 10 method-equivalents to MissionImpl alone; when WRFFE is specialized for a particular weapon system (e.g., mortar), another 3 method-equivalents are added. While this is substantially better than its non-layered pure-Java equivalent, it is not inconceivable that MissionImpl will have over a hundred method-equivalents in the future. While the JavaSM DSL indeed simplifies specifications, it only delays the onset of design fatigue. Non-layered designs of FSATS may be difficult to scale and ultimately hard to maintain even if the JavaSM DSL is used.

Complexity of Layered Java Code. Now consider a layered design implemented in pure Java. The “Inherited Complexity” columns of Table 2 show the inheritance-cumulative statistics for each class of the MissionImpl and FoMission refinement chains. The rows where MissionImpl and FoMission data are listed in bold represent classes that are the terminals of their respective refinement chains. These rows correspond to the rows in Table 1. The “Isolated Complexity” columns of Table 2 show complexity statistics for individual classes (i.e., we are measuring class complexity without including the complexity of superclasses). Note that most classes are rather simple. The MissionAnyL.MissionImpl class, for example, is the most complex, with 43 methods. (This class encapsulates “infrastructure” methods used by all missions.)

Table 2 indicates that layering disentangles the logic of different aspects/features of the FoMission and MissionImpl classes into units that are small enough to be comprehensible and manageable by programmers. For example, instead of having to understand a class with 117 methods, the largest layered subclass has 43 methods; instead of 461 lines of code there are 149 lines, etc.

To gauge the impact of a layered design in JavaSM, consider the “Inherited Complexity” columns of Table 3, which show statistics for the MissionImpl and FoMission refinement chains written in JavaSM. The “Isolated Complexity” columns of Table 3 show corresponding statistics for individual classes. They show that layered JavaSM specifications are indeed compact: instead of a class with 43 methods there are 24 method-equivalents, instead of 149 lines of code there are 65 lines, etc. Thus, a combination of domain-specific languages and layered designs greatly reduces program complexity.

The reduction in program complexity is a key goal of our project; these tables support the observations of FSATS engineers: the mapping between a “white-board” design of FSATS protocols and an implementation is both direct and invertible with layered JavaSM specifications. That is, writing components in JavaSM matches the informal designs that domain experts use; it requires fewer mental transformations from design to implementation, which simplifies maintenance and extensibility, and makes for a much less error-prone product. In contrast, mapping from the original FSATS implementation back to the design was not possible, due to the lack of an association of any particular rule or set of rules with a specific mission.
                                     Inherited Complexity      Isolated Complexity
    Class Name                       nmeth   nloc   nsymb      nmeth   nloc   nsymb
    MissionL.MissionImpl                 9     25     209          9     25     209
    ProxyL.MissionImpl                  11     30     261          2      5      52
    MissionAnyL.MissionImpl             51    179    1431         43    149    1170
    MissionWrffeL.MissionImpl           83    314    2342         35    135     911
    MissionWrffeMortarL.MissionImpl     93    358    2677         13     44     335
    MissionWrffeArtyL.MissionImpl      109    425    3187         19     67     510
    MissionWrffeMlrsL.MissionImpl      117    461    3452         11     36     265
    BasicL.FoMission                   117    461    3468          0      0      16
    MissionWrffeMortarL.FoMission      117    468    3547          4      7      79
    MissionWrffeArtyL.FoMission        119    484    3687          7     16     140
    MissionWrffeMlrs.FoMission         119    490    3737          3      6      50
Table 2. Statistics for a Layered Java Implementation of Class FoMission

                                     Inherited Complexity      Isolated Complexity
    Class Name                       nmeth   nloc   nsymb      nmeth   nloc   nsymb
    MissionL.MissionImpl                 8     20     169          8     20     169
    ProxyL.MissionImpl                  10     25     221          2      5      52
    MissionAnyL.MissionImpl             34     90     877         24     65     656
    MissionWrffeL.MissionImpl           45    115    1132         11     25     255
    MissionWrffeMortarL.MissionImpl     48    121    1231          3      6      99
    MissionWrffeArtyL.MissionImpl       52    129    1383          4      8     152
    MissionWrffeMlrsL.MissionImpl       54    133    1445          2      4      62
    BasicL.FoMission                    54    133    1461          0      0      16
    MissionWrffeMortarL.FoMission       54    136    1518          2      3      57
    MissionWrffeArtyL.FoMission         55    140    1586          3      4      68
    MissionWrffeMlrs.FoMission          56    143    1615          2      3      29
Table 3. Statistics on a Layered JavaSM Implementation of Class FoMission
5 Conclusions

Extensibility is the property that simple changes to the design of a software artifact require a proportionally simple effort to modify its source code. Extensibility is a result of premeditated engineering, whereby anticipated variabilities in a domain are made simple by design. Two complementary technologies are emerging that make extensibility possible: product-line architectures (PLAs) and domain-specific languages (DSLs). Product-lines rely on components to encapsulate the implementation of basic features or “aspects” that are common to applications in a domain; applications are extensible through the addition and removal of components. Domain-specific languages enable applications to be programmed in high-level domain abstractions, thereby allowing compact, clear, and machine-processable specifications to replace detailed and abstruse code. Extensibility is achieved through the evolution of specifications.

FSATS is a simulator for Army fire support and is representative of a complex domain of distributed command-and-control applications. The original implementation of FSATS had reached a state of design fatigue, where anticipated changes/enhancements to its capabilities would be very expensive to realize. We undertook the task of redesigning FSATS so that its inherent and projected variabilities — that of adding new mission types — would be easy to introduce. Another important goal was to minimize the “conceptual distance” from “white-board” designs of domain experts to actual program specifications; because of the complexity of fire support, these specifications had to closely match these designs to make the next-generation FSATS source understandable and maintainable.

We achieved the goals of extensibility and understandability through an integration of PLA and DSL technologies. We used a GenVoca PLA to express the building blocks of fire support simulators as layers or aspects, whose addition or removal simultaneously impacts the source code of multiple, distributed programs. But a layered design was insufficient, because our components could not be written easily in pure Java. The reason is that the code expressing state diagram abstractions was so low-level that it would be difficult to read and maintain. We addressed this problem by extending the Java language with a domain-specific language to express state diagrams and their refinements, and wrote our components in this extended language. Preliminary findings confirm that our component specifications are substantially simplified; “white-board” designs of domain experts have a direct and invertible expression in our specifications. Thus, the combination of PLAs and DSLs was essential in creating extensible fire support simulators.

While fire support is admittedly a domain with very specific and unusual requirements, there is nothing domain-specific about the need for PLAs, DSLs, and their benefits. In this regard, FSATS is not unusual; it is a classical example of many domains where both technologies naturally complement each other to produce a result that is better than either technology could deliver in isolation. Research on PLA and DSL
technologies should focus on infrastructures (such as IP and JTS) that support their integration; research on PLA and DSL methodologies must be more cognizant that synergy is not only possible, but desirable.

Acknowledgments. We thank Dewayne Perry (UTexas) for his insightful comments on an earlier draft and for bringing SDL to our attention, and Frank Weil (Motorola) for clarifying discussions on SDL state machines. We also thank the referees for their helpful suggestions that improved the final draft of this paper.
6 References

1. “System Segment Specification (SSS) for the Fire Support Automated Test System (FSATS)”, Applied Research Laboratories, The University of Texas, 1999. See also URL http://www.arlut.utexas.edu/~fsatswww/fsats.shtml.
2. D. Batory and S. O’Malley, “The Design and Implementation of Hierarchical Software Systems with Reusable Components”, ACM TOSEM, October 1992.
3. D. Batory, B. Lofaso, and Y. Smaragdakis, “JTS: Tools for Implementing Domain-Specific Languages”, 5th International Conference on Software Reuse, Victoria, Canada, June 1998. Also URL http://www.cs.utexas.edu/users/schwartz/JTS30Beta2.htm.
4. D. Batory, “Product-Line Architectures”, Smalltalk and Java Conference, Erfurt, Germany, October 1998.
5. D. Batory, Y. Smaragdakis, and L. Coglianese, “Architectural Styles as Adaptors”, Software Architecture, Kluwer Academic Publishers, Patrick Donohoe, ed., 1999.
6. I. Baxter, “Design Maintenance Systems”, CACM, April 1992.
7. G. Berry and G. Gonthier, “The Esterel Synchronous Programming Language: Design, Semantics, and Implementation”, Science of Computer Programming, 1992, 87-152.
8. J. Bosch, “Product-Line Architectures in Industry: A Case Study”, ICSE 1999, Los Angeles.
9. K. Czarnecki and U.W. Eisenecker, “Components and Generative Programming”, ACM SIGSOFT, 1999.
10. S.R. Chidamber and C.F. Kemerer, “Towards a Metrics Suite for Object Oriented Design”, OOPSLA 1991.
11. A. van Deursen and P. Klint, “Little Languages: Little Maintenance?”, SIGPLAN Workshop on Domain-Specific Languages, 1997.
12. J. Ellsberger, D. Hogrefe, and A. Sarma, SDL: Formal Object-Oriented Language for Communicating Systems, Prentice-Hall, 1997.
13. R.B. Findler and M. Flatt, “Modular Object-Oriented Programming with Units and Mixins”, ICFP 98.
14. E. Gamma et al., Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley, Reading, Massachusetts, 1995.
15. T. Graves, “Code Decay Project”, URL http://www.bell-labs.com/org/11359/projects/decay/.
16. J.A. Goguen, “Reusing and Interconnecting Software Components”, IEEE Computer, February 1986.
17. D. Harel, “Statecharts: A Visual Formalism for Complex Systems”, Science of Computer Programming, 1987, 231-274.
18. D. Harel and E. Gery, “Executable Object Modeling with Statecharts”, ICSE 1996.
19. G. Kiczales, J. Lamping, A. Mendhekar, C. Maeda, C. Lopes, J. Loingtier, and J. Irwin, “Aspect-Oriented Programming”, ECOOP 97, 220-242.
20. “System Segment Specification (SSS) for the Advanced Field Artillery Tactical Data System (AFATDS)”, Magnavox, 1999.
21. J. Neighbors, “DataXfer Protocol”, BayFront Technologies, 1997, URL http://bayfronttechnologies.com.
22. T. Reenskaug et al., “OORASS: Seamless Support for the Creation and Maintenance of Object-Oriented Systems”, Journal of Object-Oriented Programming, 5(6), October 1992, 27-41.
23. Software Engineering Institute, “The Product Line Practice (PLP) Initiative”, URL http://www.sei.cmu.edu/plp/plp_init.html.
24. C. Simonyi, “The Death of Computer Languages, the Birth of Intentional Programming”, NATO Science Committee Conference, 1995.
25. Y. Smaragdakis and D. Batory, “Implementing Layered Designs with Mixin Layers”, ECOOP 1998.
26. R. Taylor, Panel on Software Reuse, Motorola Software Engineering Symposium, Ft. Lauderdale, 1999.
27. L. Tokuda and D. Batory, “Evolving Object-Oriented Architectures with Refactorings”, Conf. on Automated Software Engineering, Orlando, Florida, 1999.
28. M. Van Hilst and D. Notkin, “Using Role Components to Implement Collaboration-Based Designs”, OOPSLA 1996, 359-369.
29. D.M. Weiss and C.T.R. Lai, Software Product-Line Engineering, Addison-Wesley, 1999.
Implementing Product-Line Features with Component Reuse

Martin L. Griss
Hewlett-Packard Company, Laboratories
Palo Alto, CA, USA
+1 650 857 8715
[email protected]

Abstract. In this paper, we show how the maturation of several technologies for product-line analysis and component design, implementation, and customization provides an interesting basis for systematic product-line development. Independent work in the largely separate reuse ("domain analysis") and OO ("code and design") communities has reached the point where integration and rationalization of the activities could yield a coherent approach. We outline a proposed path from the set of common and variable features supporting a product-line, to the reusable elements to be combined into customized feature-oriented components and frameworks to implement the products.
1 Introduction
Today, it is increasingly important to manage related products as members of a product-line. Rapid development, agility, and differentiation are key to meeting increased customer demands. Systematic component reuse can play a significant role in reducing costs, decreasing schedule, and ensuring commonality of features across the product-line. Business managers can make strategic investments in creating and evolving components that benefit the whole product-line, not just a single product. Most often, successful reuse is associated with a product-line. Common components are reused multiple times, and defect repairs and enhancements to one product can be rapidly propagated to other members of the product-line through shared components.

Many different authors have worked on this area from multiple perspectives, resulting in somewhat confusing and overlapping terminology. We will define some terms as we go, and collect other terms at the end of the paper.

A product-line is a set of products that share a common set of requirements, but also exhibit significant variability in requirements. This commonality and variability can be exploited by treating the set of products as a family and decomposing the design and implementation of the products into a set of shared components that separate concerns [1,2,3].

A feature is a product characteristic that users and customers view as important in describing and distinguishing members of the product-line. A feature can be a specific requirement, a selection amongst optional or alternative requirements, or related to
certain product characteristics, such as functionality, usability, and performance, or implementation characteristics, such as size, execution platform, or compatibility with certain standards. Members of a product-line might all exist at one time, or be related to the evolution of the product-line.

A product-line can be built around a set of reusable components. The approach is based on analyzing the products to determine common and variable features, and then developing a product structure, software architecture, and implementation strategy that expresses this commonality in terms of a set of reusable components. The decision to do so involves a combination of economic and technical issues: there must be a sufficient number of different members of the product-line, closely enough related, to justify the extra work of developing appropriately customizable reusable components.

For example, each different member of a word processing product-line might optionally provide an outline mode, spell-checking, grammar correction, or diagrams, may run in Windows or Unix, be large or small, support a small number of files or a large number of files, etc. Some word processors might have a minimal spell checker, while others could have a large spell checker integrated with a grammar corrector.

As another example, consider all of the many kinds of printers sold by HP. There are black and white printers, and color printers. Some use laser technology, others use inkjet technology. Some are small, while others are large, with multiple paper handlers and so on. Marketing decides which families to treat as a product-line, while engineering decides which should share a common firmware architecture and components.

For a final example, [4] shows how a particular large-scale product-line for a communications product is built not by composing black-box components, but by assembling customized subsystems, each based on a "framework" which evolves slowly. Each framework is characterized by a cluster of features, and the subsystem is implemented by modifying parts of the framework and adding code corresponding to the variable features.

Each component will capture a subset of the features, or allow individual features to be built up by combining components. For example, each word processor in the product-line above might share a common editing component, and optionally one or more components for spell-checking, diagramming, etc. These components may be combined to produce a variety of similar word processors, only some of which would be viable products. Often, these components are used directly without change; in other cases the components are adapted in some way to account for differences in the products that are not expressible by just selecting alternative components; and in yet other cases, the components are substantially modified before inclusion into the product. Thus, some components would be used as is, some would be parameterized, and others would come in sets of plug-compatible alternatives with different characteristics to satisfy the features.
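As a toy illustration of these three cases (all names and interfaces below are invented for exposition; they do not come from any product described in this paper):

    // A shared editing component used as-is by every member of the product-line.
    class EditingCore { /* common editing behavior */ }

    // Plug-compatible alternatives for the spell-checking feature.
    interface SpellChecker { boolean check(String word); }

    class MinimalSpellChecker implements SpellChecker {
        public boolean check(String word) { return true; }       // tiny stub dictionary
    }

    class SpellGrammarChecker implements SpellChecker {
        private final int dictionarySize;                         // a customization parameter
        SpellGrammarChecker(int dictionarySize) { this.dictionarySize = dictionarySize; }
        public boolean check(String word) { return word.length() <= dictionarySize; }  // placeholder logic
    }

    // A product is a particular combination of components; the spell checker is optional.
    class WordProcessor {
        private final EditingCore core = new EditingCore();
        private final SpellChecker speller;                       // may be null if the feature is omitted
        WordProcessor(SpellChecker speller) { this.speller = speller; }
    }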
1.1 Domain Analysis Helps Find Features
Domain analysis and domain engineering techniques [5] are used to systematically extract features from existing or planned members of a product-line. Feature trees are used to relate features to each other in various ways, showing sub-features, alternative
features, optional features, dependent features or conflicting features. These methods, such as FODA [6], FeatuRSEB [7], FORM [8,9,10], FAST, ODM, and PuLSE, are used to cluster sets of features to shape the design of a set of components that cover the product-line, and optionally carry this into design and implementation. Once designed, each component is assigned to a component development team which develops and maintains it. Products are assigned to feature teams, who select a set of features that define the products and then select and adapt the appropriate components to cover the features and produce the desired product. This approach is, in general, a good plan. It works extremely well when the set of features decomposes nicely into a set of almost independent, fixed components, where very few features span multiple components. For example, products dominated by mathematical and statistical computation are easier to implement from relatively fixed components, since the interactions between features, and hence components, are less complex and better understood than in some other domains.
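As a hedged illustration of such a feature tree (the node kinds and the word-processor features below are assumptions made for this sketch, not notation taken from FODA or FeatuRSEB), the tree can be recorded as a simple recursive data structure:

// Minimal sketch of a FODA-style feature tree as data; kinds and features are illustrative.
#include <iostream>
#include <string>
#include <vector>

enum FeatureKind { MANDATORY, OPTIONAL, ALTERNATIVE_GROUP, ALTERNATIVE };

struct Feature {
    std::string name;
    FeatureKind kind;
    std::vector<Feature> children;           // sub-features
    Feature(const std::string& n, FeatureKind k) : name(n), kind(k) {}
};

void print(const Feature& f, int depth = 0) {
    static const char* kinds[] = { "mandatory", "optional", "alternative-group", "alternative" };
    std::cout << std::string(depth * 2, ' ') << f.name << " [" << kinds[f.kind] << "]\n";
    for (size_t i = 0; i < f.children.size(); ++i)
        print(f.children[i], depth + 1);
}

int main() {
    Feature root("WordProcessor", MANDATORY);
    Feature editing("Editing", MANDATORY);
    Feature spell("SpellChecking", OPTIONAL);
    spell.children.push_back(Feature("GrammarCorrection", OPTIONAL));  // dependent feature
    Feature platform("Platform", ALTERNATIVE_GROUP);
    platform.children.push_back(Feature("Windows", ALTERNATIVE));
    platform.children.push_back(Feature("Unix", ALTERNATIVE));
    root.children.push_back(editing);
    root.children.push_back(spell);
    root.children.push_back(platform);
    print(root);                              // prints the tree with each node's kind
    return 0;
}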
1.2 Tracing Features to Components is Complex
Things are not always this simple. Things get a little more complex when components have compile-time or run-time parameters that essentially generate a customized component. Here one has to keep track of which parameter settings relate to which version, and to which set of selected features. Things get much more complicated when the code that implements a particular feature, or several closely related features, needs to be spread across multiple products, subsystems, modules or classes and is intertwined or tangled with code implementing other (groups of) features. In many (ideal) cases, a particular feature will be implemented as code that is mostly localized to a single module; but in many other cases features cut across multiple components. Such "crosscutting" concerns make it very difficult to associate separate concerns with separate components that hide the details. Examples of crosscutting features include end-to-end performance, transaction or security guarantees, or end-to-end functional coherence. Producing a member of the product-line that meets a particular specification of such a crosscutting feature requires that several (or even all) subsystems and components make compatible choices and include specifically related fragments of code.

The RSEB [11] and FeatuRSEB [7] approaches to product-line development are "usecase-driven." This means that we first structure the requirements as a set of distinct usecases that are largely independent. Design and implementation of each usecase typically gives rise to a collaboration of multiple classes, resulting in the specification of a set of compatible responsibilities for these classes. Some classes participate in multiple collaborations, requiring the merging of the cross-cutting contributions to each class. For product-line development, commonality and variability analysis in RSEB and FeatuRSEB structures each usecase in terms of core and variant parts of the usecase. FeatuRSEB augments the RSEB usecases with an explicit FODA-derived feature model to highlight commonality and variability of features. The variability structure in the usecase (and feature) models then gives rise to variant parts of the corresponding classes, to be attached at designated "variation points." Thus selection of a
single variant in a usecase gives rise to multiple variant selections in the resulting classes. These choices need to be made compatibly to match the high-level crosscutting concern. Several classes and some variants are packaged into a single component, and thus variability in a single usecase translates into crosscutting variability in multiple design and implementation components.

Kang [8,9,10] describes FORM (the feature-oriented reuse method) as an extension of FODA [6] to better support feature-driven development. FORM extends FODA into the design and implementation space. FORM includes parameterization of reusable artifacts using features. The FORM feature model is structured in four layers, to better separate application-specific features from domain, operating-environment or implementation features. These layers reflect a staged transition from product-line features through key domains to detailed implementation elements. FORM encourages synthesis of domain-specific components from selected features. However, no systematic technique for composing the code from related aspects is provided.

In the case of significant crosscutting variability, maintenance and evolution become very complicated, since a change in a particular (usecase) feature may require changes all over the resulting software design and implementation. The problem is that individual features do not typically trace directly to an individual component or cluster of components -- this means that, as a product is defined by selecting a group of features, a carefully coordinated and complicated mixture of parts of different components is involved. This "crosscutting" interaction is also referred to as the requirements traceability or feature-interaction problem.

How should we address this problem? The conventional approach is to build fairly complex maps (essentially hyper-webs) from features to products, and manage the evolution of the features by jointly managing the evolution of the individual components (e.g., see [31]). In the past, this has been done manually, but more recently powerful requirements tracing and management tools have made this somewhat easier. Even with such traceability tools, what makes this task so complex is that when one feature changes, code in many modules will have to change; to find and decide how to change this code requires the code corresponding to this feature to be disentangled from the code for other features. Typically, individual features (or small sets of related features) change more or less independently from other features, and thus a group of code changes can be related to threads drawn from a few related features. This is the same observation that led to usecase-driven development; it is complete usecases that change or are selected almost independently. Thus separation of concerns is most effective at the higher level, before "mangling" and "crosscutting" into design or implementation.
2 Untangling the Feature Web
There are three different approaches that can be used to address this problem. Typically, some combination of these is used in practice. Each approach has benefits and drawbacks.
1. Traceability - using hyperweb-like tools to help find and manage threads corresponding to one feature (e.g., program slicing, browsers, code coloring, or requirements management tools such as DOORS or Calibre RM). See also UML language- and tool-based traceability [12] and the discussion by Tarr [31] of a traceability hyper-web. In some cases, the threads from requirements through features to design and code are maintained as the software is built; in others, the threads are discovered using maintenance and analysis tools. This is often hard to do, and leads to a complex set of multiple versions of multiple components. When a feature is changed, code in many components is changed, and many components have to be retested and revalidated.

2. Architecture - using tools and techniques such as patterns [13,14], styles [18], and flexibility-enhancing technologies such as software buses and middleware to provide a good decomposition into separate pieces that are more orthogonal, localizing changes. This often leads to a better structure and a more maintainable, flexible system. However, discovering and maintaining a good design requires a lot of skill (and some luck) in choosing an appropriate decomposition that anticipates the kinds of changes that will arise during the evolution of a system. Different decompositions are more or less robust against different classes of change. For example, consider the well-known differences in flexibility of the functional decomposition or the object-oriented decomposition of a system, in terms of how they encapsulate or hide decisions. In either case, adding new behavior or new information fields might result in only a local change; in a more radical global change that affects multiple methods in one class; or in a single change in one data structure that affects many procedures all over the system. This kind of frequent non-local change often requires significant framework refactoring to change the dominant decomposition so as to (re-)group together or further factor those parts of the design that seem to be dependent, moving the implementation of key design decisions to different classes or modules.

3. Composition or "weaving" of program fragments - using language extensions or a preprocessor, the feature decomposition is used directly to produce pieces of software ("aspects") that are assembled in some manual or mechanical way into components, subsystems or complete products. The traditional approach, which uses a combination of manual techniques to select and combine features, supported by mechanisms based on macros or compile-time parameters, is somewhat clumsy and error-prone (a small sketch of this macro-based style follows the list).
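The macro-based style mentioned in the third approach can be sketched as follows. This is a hypothetical C++ fragment, not drawn from any real product; it hints at why the style scales poorly, since every module that touches a feature must repeat, and keep consistent, the same conditional logic.

// Hypothetical sketch of compile-time feature selection with the preprocessor.
#include <iostream>

#define FEATURE_SPELL_CHECK 1
#define FEATURE_GRAMMAR     0   // grammar correction only makes sense if spell-checking is on

void checkDocument(const char* text) {
#if FEATURE_SPELL_CHECK
    std::cout << "spell-checking: " << text << "\n";
#  if FEATURE_GRAMMAR
    std::cout << "grammar-checking: " << text << "\n";
#  endif
#endif
}

int main() {
    checkDocument("helo world");   // behaviour depends entirely on the flag settings above
    return 0;
}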
3 Feature-Driven, Aspect-Based Product-Line Engineering
A systematic, automated approach is emerging. Over the last several years, several techniques have been developed to make the "composition and weaving" approach to feature-driven design and development of software practical. The idea is to directly implement some feature (sometimes also called aspect) as several fragments of code, and then use some mechanism or tool to combine these code fragments together into complete modules.
The key idea is to identify a useful class of program fragments that can be mechanically combined using some tool, and to associate these fragments with distinct features of the software. One does not directly manage the resulting components, but instead directly manages only the common and variable features, the design aspects and their corresponding implementation code fragments. One then dynamically generates a custom version of the component as needed. There are several techniques, mechanisms and tools that can be used. These differ primarily in the granularity and language level of fragments and the legal ways in which they can be combined or woven together. For example, code could be woven at the module or class level, at the method or procedure level, or even within individual statements. The approach could be expressed only as a "method" (a disciplined way of using standard language features such as templates or inheritance) or can be expressed and enforced by a new tool that effectively changes the language (such as a preprocessor, language extension or macro).

The following brief summary of these techniques is not meant to give a complete history of the many threads of work in the field, but just to highlight some of the developments. This paper can only provide a brief summary of several "intuitively" related techniques. While this is not a coherent, comprehensive analysis or integration, we hope it will inspire others to push the goal of integration further.

Starting from Parnas' papers on separation of concerns and information hiding [1,2,3], there have been numerous efforts to design methods, languages, macros, tools and environments to make the specification and implementation of layered abstractions easier. Languages such as Modula, Algol-68, and EL/1, Goguen's work on parameterized programming [16], tools such as the Programmer's Apprentice, and many program transformation systems are notable. Meta-programming was first popularized in the LISP community in the form of LISP computational macros, followed by the work of Kiczales and colleagues on the meta-object protocol in CLOS and other languages [15,17]. "Mixin" classes, and "before" and "after" methods, were also used in LISP through stylized multiple inheritance.

Earlier techniques used procedures, macros, classes, inheritance and templates in an ad hoc manner. Macros look like procedure calls, but create inline code, overcoming a concern about the performance cost of breaking a system into many small procedures. Some macros can generate alternative pieces of code using conditional expansion and testing of compile-time parameters. There are several distinct models (or paradigms) for using OO inheritance and C++ templates - parameterized data types or generics (e.g., STL), structure inheritance (e.g., frameworks), and parameterized algorithms and/or compile-time computation (e.g., for tailoring matrix algorithms). In the case of (C++) frameworks, subclasses define new methods that "override" the definition of some default methods inherited from superclasses.

More recent work takes a more systematic approach to identifying and representing crosscutting concerns and component (de-)composition. Stepanov and Musser converted their earlier Ada Generic Programming work to C++ [19,20], at a time when C++ template technology was weaker and still largely untried. Batory and his students developed the GenVoca system generator and the P++ preprocessor language for component composition [21].
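To illustrate the "structure inheritance (frameworks)" paradigm mentioned above, here is a minimal, assumed C++ sketch (class and method names are invented for illustration) in which the framework base class supplies the control flow and default methods, and the completing subclass overrides only what it needs to change:

// Minimal framework sketch: the base class owns the control structure and
// "calls" the additions supplied by subclasses.
#include <iostream>

class ReportFramework {
public:
    virtual ~ReportFramework() {}
    void run() {                              // fixed control flow supplied by the framework
        header();
        body();
        footer();
    }
protected:
    virtual void header() { std::cout << "== default header ==\n"; }
    virtual void body() = 0;                  // must be supplied by the completing subclass
    virtual void footer() { std::cout << "== end ==\n"; }
};

class SalesReport : public ReportFramework {  // completion of the framework
protected:
    void body() { std::cout << "sales figures...\n"; }
    void footer() { std::cout << "== confidential ==\n"; }   // overrides a default method
};

int main() {
    SalesReport r;
    r.run();                                  // the framework drives; the subclass fills in hooks
    return 0;
}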
Subsequent work on Generic Programming (GP) by Stepanov [19,20,22] and Jazayeri [23], C++ template role mixins by Van Hilst and Notkin [24,25,26], C++ template meta-programming by Czarnecki and Eisenecker [27,28,29], and others showed the power of these techniques. Recently, GP has started to change from meaning Generic Programming to meaning Generative Programming, including domain analysis [29]. In discussing several of these techniques in more detail, the intent is not to advocate or denigrate any approach; rather, the goal is to show how they are similar attempts to explicitly express the code corresponding to features as separate, composable software fragments. Each approaches the many issues from different starting points and perspectives. Each technique is also changing over time, as experience grows and the groups influence each other. When components entered the picture, several different technologies began to come together.

• C++ templates and generic programming - the body of a method, or of several methods, is created by composing fragments. This was heavily exploited in the C++ Standard Template Library (STL), in which a class is constructed from a composition of an Algorithm, a Container, an Element, and several Adapter elements [19,20]. STL is noteworthy for its lack of inheritance (a small sketch of this style follows the list).

• C++ template role and layer "mixins" - by using multiple inheritance or layering of "mixin" classes, the subclasses are essentially a method-level combination of methods (as class fragments) drawn from several different sources. The base classes and mixins can be thought of as component fragments being woven into complete components or complete applications. It is worth noting that many design patterns also essentially help encapsulate some crosscutting features [13,14,31].

• Frame-based generators - Paul Bassett [30] shows how a simple frame-based "generator" (he calls it an "assembler") composes code fragments from several frames, linked into a form of "inheritance" or "delegation" hierarchy. A frame is a textual template that includes a mixture of code fragments, parameterized code, and generator directives to select and customize lower-level frames, and to incorporate additional code fragments generated by the processing of lower frames. While quite ad hoc frame structures can be used to achieve quite complex code transformation and generation, a good decomposition results in each frame dealing with only one or a small number of distinct aspects of the system being generated. Frame technology has been used with impressive results by Netron and its clients to implement several product-lines rapidly and efficiently. It has also been used by HP Labs in our customizable component framework work, discussed in more detail below.

• Subject-oriented programming (SOP) - Ossher, Harrison and colleagues [31,32,33] represent each concern or feature as a separate package, implemented or designed separately. SOP realizes a model of object-oriented code artifacts whose classes, methods and variables are then independently selected, matched and finally combined together non-invasively to produce the complete program. Systems are built up as a composition of subjects -- hyperslices -- each a class hierarchy modeling its domain from a particular point of view. A subject is a collection of classes defining a particular view of a domain or a coherent set of functionality. For example, a subject might be a complete class or class fragment, a pattern, a feature, a component or a complete (sub)system. Composition rules are specified textually in C++ as templates. SOP techniques are being integrated with UML to align and structure requirements, design and code around subjects, representing the overlapping design subjects as stereotyped packages [45].

• Aspect-oriented programming (AOP) - developed by Kiczales and colleagues [36], AOP expands on SOP concepts by providing modular support for direct programming of crosscutting concerns [44]. Early work focused on domain-specific examples, illustrating how several non-functional, crosscutting concerns, such as concurrency, synchronization, security and distribution properties, could be localized. AOP starts with a base component (or class) that cleanly encapsulates some application function in code, using methods and classes. AOP then applies one or more aspects (which are largely orthogonal if well designed) to components to perform large-scale refinements which add or change methods, primarily as design features that modify or crosscut multiple base components. Aspects are implemented using an aspect language which makes insertions and modifications at defined join points. Join points are explicitly defined locations in the base code at which insertions or modifications may occur, similar to UML extension points or RSEB variation points. These join points may be as generic as constructs in the host programming language or as specific as event patterns or code markers unique to a particular application. One such language, AspectJ [32], extends Java with statements such as "crosscut" to identify places in Java source or event patterns. The statement "advise" then inserts new code or modifies existing code wherever it occurs in the program. AspectJ weaves aspect extensions into the base code, modifying and extending a relatively complete program into a newer program. Thus AspectJ deals with explicit modification or insertion of code into other code, performing a refinement or transformation.

• System generators - in Batory's GenVoca and P++ [21,34], a specialized preprocessor assembles systems from specifications of components, with composition as layers. GenVoca eschews the arbitrary code transformations and refinements possible with AOP and SOP, instead carefully defining (e.g., with a typing expression), composing, and validating layered abstractions. These techniques have been applied to customizable databases, compilers, data structure libraries and other domains. Related work has been done by Lieberherr and colleagues [34,43] on Demeter's flexible composition of program fragments, and by Aksit [37,38] and others.
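As a small, assumed illustration of the first technique above (behaviour composed from an algorithm, a container and adapters, with no inheritance), consider:

// Sketch of STL-style composition: an Algorithm (copy, sort), a Container
// (vector, list) and Adapters (back_inserter, ostream_iterator) are combined
// by templates rather than by inheritance.
#include <algorithm>
#include <iostream>
#include <iterator>
#include <list>
#include <vector>

int main() {
    std::vector<int> source;
    source.push_back(3); source.push_back(1); source.push_back(4);

    std::list<int> target;
    std::copy(source.begin(), source.end(), std::back_inserter(target));   // adapter

    std::sort(source.begin(), source.end());                               // algorithm
    std::copy(source.begin(), source.end(),
              std::ostream_iterator<int>(std::cout, " "));                 // adapter
    std::cout << "\n";
    return 0;
}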
3.1 Evolution and Integration of the Techniques
As experience with C++ templates has grown, some of the weaving originally thought to need a system generator or special aspect-oriented language can in fact be accomplished by clever decomposition into template elements, using a combination of nested C++ templates and inheritance. VanHilst and Notkin [24,25,26] use templates to implement role mixins. They show how to implement a collaboration between several OO components using role mixins. Batory and Smaragdakis simplified and extended the role mixin idea into mixin layers [39,40,41]. Using parameterized sets of classes and templates, groups of application classes are built up from layers of super-classes that can be easily composed. They show how P++ and GenVoca style composition can use C++ template mixin layers, and so avoid the need for a special generator. However, a disciplined use of the C++ templates seems needed in order to maintain the layering originally ensured by the specialized P++ language.

As in RSEB and FeatuRSEB, these role and mixin layer techniques express an application as a composition of independently defined collaborations, and a portion of the code in a class is derived from the specific role that the class plays in a collaboration of several roles. The responsibilities associated with each role played by an object give rise to separate code that must then be woven into the complete system. The mixin-layer approach to implementation makes the contribution of each role (feature) distinct and visible across multiple classes, all the way to detailed implementation, rather than combining the code in an untraceable manner as was done in the past.

Cardone [42] shows how both aspect-oriented programming and GenVoca begin with some encapsulated representation of system function (AOP component, GenVoca realm) and then provide a mechanism to define and apply large-scale refinements (AOP aspects, GenVoca components) at specific sites (AOP join points, parameters in GenVoca components). The AOP weaver and the GenVoca generator perform transformations from the aspect languages or type expressions. Both have been implemented as pre-processors (AspectJ and P++), but more recently, GenVoca has moved to a stylized use of C++ templates.

Tarr [31] shows how separation of concerns can lead to a hyperweb tracing from specific concerns through parts of multiple components. Each slice (hyperslice) through this web corresponds to a different composition/decomposition of the application. She suggests that there are a variety of different ways to decompose the system, and that this stresses any particular choice of decomposition. This is the "tyranny of the dominant decomposition" - no single means of decomposition, including features, is always adequate, as developers may have other concerns instead of, or in addition to, features (or functions, or classes, or whatever). Neither AOP nor SOP provides true support for multiple, changing decomposition dimensions; AOP supports essentially one primary decomposition dimension (e.g., the class structure of the base program) with modifying aspects, while SOP supports only a fixed number of multiple dimensions, based on the original packaging decomposition. In both cases, once a dominant dimension(s) and corresponding primary decomposition of the software is chosen, all subsequent additions and modifications corresponding to a particular aspect will have to be structured correspondingly. This means that in some cases, what is conceptually a completely orthogonal aspect will result in some code fragments that will be aware of the existence and structure of the other aspects. See also Van Hilst's discussion of the Parnas KWIC example [26]. Tarr suggests that these multiple decompositions can be best represented as hyperslices in a hyper-web across the aspects of the components. Each aspect is a hyperslice, and a set of aspects together with core classes approximate a hypermodel. This is essentially equivalent to the traceability expressed in UML [12], RSEB [11] and FeatuRSEB [7]. It allows tracing from features to crosscutting aspects within multiple components.
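The role-mixin and mixin-layer composition described at the start of this subsection can be sketched as follows. This is a hypothetical word-processor example written in the spirit of the cited work, not code taken from it: each feature is a class template parameterized by its superclass, and a product-line member is simply a particular stack of such layers.

// Hypothetical sketch of role mixins / mixin layers: each optional feature is
// a template parameterized by its superclass; a product is a stack of layers.
#include <iostream>
#include <string>

class Document {                               // core, shared by all products
public:
    explicit Document(const std::string& text) : text_(text) {}
    const std::string& text() const { return text_; }
private:
    std::string text_;
};

template <class Super>
class SpellChecking : public Super {           // feature as a mixin layer
public:
    explicit SpellChecking(const std::string& text) : Super(text) {}
    bool spellOk() const { return this->text().find("helo") == std::string::npos; }
};

template <class Super>
class Outlining : public Super {               // another feature layer
public:
    explicit Outlining(const std::string& text) : Super(text) {}
    void outline() const { std::cout << "outline of: " << this->text() << "\n"; }
};

// Two members of the product-line: different feature stacks, same core.
typedef SpellChecking<Document> BasicEditor;
typedef Outlining<SpellChecking<Document> > FullEditor;

int main() {
    FullEditor full("helo world");
    full.outline();
    std::cout << (full.spellOk() ? "spelling ok" : "misspelling found") << "\n";

    BasicEditor basic("hello");
    std::cout << (basic.spellOk() ? "spelling ok" : "misspelling found") << "\n";
    return 0;
}

Note that changing the feature selection changes only the typedef that stacks the layers, so the contribution of each feature remains visible in the code rather than being tangled into it.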
3.2 Disentangling the Terminology
Many different people have worked on this problem from different perspectives, without being aware of or taking account of each other's work. Hence there is a considerable amount of overlapping terminology, and great care must be taken to be sure the right interpretation of words such as feature, aspect, framework, mixin or role is used for any given paper. It is not the goal of this paper to create a new and completely consistent set of terms, nor to completely describe the subtle nuances between related terms. Unfortunately the paper cannot avoid using the terms as used by the original authors. Many of the terms are used to indicate various forms of packaging, management or granularity of different collections or slices of software workproducts created during the development of a software system. More work needs to be done to clarify the exact relationship between the terms feature, aspect, refinement, and mixin. In some cases, the differences are not very significant, while in others one could either be referring to the design concept or instead to its implementation (like the confusion between the terms "architecture" and "framework"). As far as possible, we use the UML [12] and RSEB [11] definitions.

• System - A complete set of software workproducts, mostly self-contained, and able to be understood largely independently of other systems. May be formally described by a set of UML models. Some of the workproducts in a system may be packaged for sale and deployment as a product, in the form of a set of applications and processes that work together in a distributed context.

• Workproduct - Any document used or produced during the production of a software system. These include models, designs, architectures, templates, interfaces, tests, and of course code.

• Product - The subset of the system workproducts that are packaged for sale or distribution, and managed as a complete unit by the development organization.

• Product-line - A set of products that share a common set of requirements and significant variability. Members of a product-line can be treated as a program family, and developed and managed together to achieve economic, marketing and engineering coherence and efficiency.

• Subsystem - A subset of the workproducts produced during the hierarchical decomposition of a system, packaged as a self-contained system in its own right. It is explicitly identified as a part of the larger system, encapsulating key software elements into a unit.

• Component - The smallest unit of software packaged for reuse and independent deployment, though typically it must be combined with other components and a framework to yield a complete product. Components should be carefully packaged, well-documented, well-designed units of software, with defined interfaces, some of which are prescribed for compliance with a specific framework. For example, COM components, CORBA components and JavaBeans are three different standard component models. Each uses a different style of interface specification and has a different set of required interfaces. Other component models can be defined by a particular product-line architecture.
• Component system - An RSEB term reflecting that components are most often not used alone, but with other compatible components, and thus will be designed, developed and packaged together as a system, with the individual components represented by subsystems in the component system.

• Framework - Loosely speaking, a framework is a reusable "skeletal" or "incomplete" system that is completed by the addition of other software elements. Note that a subroutine or class library also provides reusable software elements, but plays a subservient role to the additional software, while a framework plays a dominant role as a reusable software element that captures and embodies the architecture and key common functionality of all systems built from the framework. Typically, the framework supplies the major control structures of the resulting system, and thus "calls" the additions, while the subroutine or class library provides elements designed to be "called" by the supplied additions. Different kinds of framework use different kinds of additional elements and completion mechanisms. For example, the well-known Johnson and Foote [46] framework provides the top levels of a class structure, providing abstract and concrete base classes. Additions are provided in the form of added subclasses that inherit from the supplied base classes. These added subclasses sometimes override default behavior provided by supplied methods. In a well-defined framework, key collaborations and relationships are built into the supplied classes, so that the added classes can be simpler and largely independent. Other framework styles depend on different mechanisms, such as templates and "plug-in" interfaces, to allow the additions to be attached and to use the services provided by the framework.

• Feature - A product characteristic used to describe or distinguish members of a product-line. Some features relate to end-user visible characteristics, while others relate more to system structure and capabilities.

• Aspect - In AOP, the set of class additions or modifications to a base class; also, the orthogonal concept giving rise to the set of adding or modifying software elements.

• Mixin - Usually refers to a class that is used in a multiple-inheritance situation only to augment a base class, and not for use by itself. The term was introduced in the LISP Flavors object system.

• Refinement - The application of a transformation step to some software workproduct (a model or code) to produce a more detailed, less generic and less abstract software product.

• Domain - A coherent problem area. This is often characterized by a domain model, perhaps in the form of a feature diagram, showing common, alternative and optional features. Domain analysis is concerned with identifying the domain(s) present in a problem and its solution(s) and producing a model of the domain, its commonality and variability structure, and its relationship to other domains.
4 Feature Engineering Drives Aspect Engineering
Thus the systematic approach becomes quite simple in concept:
a. Use a feature-driven analysis and design method, such as FeatuRSEB, to develop a feature model and high-level design with explicit variability and traceability. The feature diagram is developed in parallel with other diagrams as an index and structure for the product-line family usecases and class models. Features are traced to variability in the design and implementation models [7]. The design and implementation models will thus show these explicit patterns of variability. Recall that a usecase maps into a collaboration in UML. FeatuRSEB manages both a usecase model (with variation points and variants) and a feature model, and through traceability relates the roles and variation points in the usecase to feature model variants, and then to design and implementation elements.

b. Select an aspect-oriented implementation technique, depending on the granularity of the variable design and implementation features, the patterns of their combination, and the process/technology maturity of the reusers.

c. Express the aspects and code fragments using the chosen mechanism.

d. Design complete applications by selecting and composing features, which then select and compose aspects, weaving the resulting components and complete application out of corresponding fragments.

One can view the code implementing specific features as a type of (fine-grain) component, just as templates can be viewed as reusable components [26]. In general, though, we focus on large-grain, customizable components and frameworks (aka Component Systems), in which variants are "attached" at "variation points." We take the perspective that highly customized, fine-grained aspect-based development can be seen as complementary to large-grain component-based development. Large systems are made of large-grained customizable components (potentially distributed, communicating through COM/CORBA/RMI interfaces and tied together with a workflow glue). It is well known that integrated systems cannot be built from arbitrary [C]OTS components [47]. On the one hand, these components must be built using consistent shared services, mechanisms, and interface styles, suggesting a common decomposition and macro-architecture. On the other hand, many of the components must be built from scratch or tailored to meet the specific needs of the application. This needs to be done using compatible fine-grained approaches, since many of the components must be customized compatibly to satisfy the constraints of the crosscutting concerns. This means that feature-driven, aspect-based development should not be limited only to smaller products, but should also be used for the (compatible) customization of the components of large products.
4.1 Hewlett-Packard Laboratories Component Framework
At Hewlett-Packard Laboratories we have prototyped several large-scale domain-specific component frameworks, using CORBA and COM as the distribution substrates, and a mixture of Java and C++ for components. The same component framework architecture was used for two different domains: a distributed medical system supporting doctors in interaction with patient appointments and records, and an online commercial banking system for foreign exchange trades.
The applications in each domain comprise several large distributed components, running in a heterogeneous Unix and NT environment. Different components handle such activities as doctor or trader authentication, interruptible session management (for example, as doctors switch from patient to patient, or move from terminal to terminal), task list and workflow management, and data object wrappers. The overall structure of all the components is quite similar, both across the domains and across roles within the domains. For example, all of the data object wrapper components are quite similar, differing only in the SQL commands they issue to access the desired data from multiple databases, and in the class-specific constructors they use to consolidate information into a single object.

Each component has multiple interfaces. Some interfaces support the basic component lifecycle; others support the specific function of the component (such as session key). Several of the interfaces work together across components to support crosscutting aspects such as transactions, session key management, and data consistency. Each component is built from one of several skeletal or basic components to provide a shell with standard interfaces and supporting classes. Additional interfaces and classes are added for some of the domain- and role-specific behavior, and then some code is written by hand to finish the component. It is important that the customization of the components corresponding to the crosscutting aspects be carried out compatibly across the separate components. The skeletal component is essentially a small framework for a set of related components.

Basic and specific component skeletons are generated by customizing and assembling different parts from a shared parts library. Some parts are specific to one of the crosscutting aspects, while others are specific to implementation on the target platform. We did a simplified feature-oriented domain analysis to identify the requisite parts and their relationships, and used this to structure a component generator which customizes and assembles the parts and components from C++ fragments. We used a modified Bassett frame generator, written in Perl. The input is expressed as a set of linked frames, each a textual template that includes C++ fragments and directives to select and customize lower-level frames and generate additional fragments of code. The frames were organized mostly to represent the code corresponding to different aspects, such as security, workflow/business process interfaces, transactions, sessions and component lifecycle. Each component interface is represented as a set of C++ classes and interfaces, with interactions represented as frames which are then used to generate and compose the corresponding code fragments. The components are then manually finished by adding code fragments and complete subclasses at indicated points in the generated code.
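The following sketch is a hypothetical reconstruction of the skeletal-component idea just described, not the actual HP Labs code: a generated shell supplies the standard lifecycle interface and a crosscutting session hook, and the hand-written completion supplies only the domain-specific query logic. All names are illustrative assumptions.

// Hypothetical illustration: a generated component skeleton with standard
// interfaces, completed by a small amount of hand-written, domain-specific code.
#include <iostream>
#include <string>

struct Lifecycle {                               // standard interface on every component
    virtual ~Lifecycle() {}
    virtual void start() = 0;
    virtual void stop() = 0;
};

class DataWrapperSkeleton : public Lifecycle {   // generated shell
public:
    void start() { std::cout << "component started, session " << session_ << "\n"; }
    void stop()  { std::cout << "component stopped\n"; }
    void setSession(const std::string& key) { session_ = key; }   // crosscutting session aspect
    std::string fetch(const std::string& id) { return runQuery(buildQuery(id)); }
protected:
    virtual std::string buildQuery(const std::string& id) = 0;    // finished by hand
    std::string runQuery(const std::string& sql) { return "<rows for: " + sql + ">"; }
private:
    std::string session_;
};

class PatientRecordWrapper : public DataWrapperSkeleton {         // hand-written completion
protected:
    std::string buildQuery(const std::string& id) {
        return "SELECT * FROM patients WHERE id = '" + id + "'";
    }
};

int main() {
    PatientRecordWrapper w;
    w.setSession("doctor-42");
    w.start();
    std::cout << w.fetch("p-1001") << "\n";
    w.stop();
    return 0;
}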
5 Summary and Conclusions
Feature-oriented domain analysis and design techniques for product-lines and aspect-oriented implementation technologies have matured to the point that it seems possible to create a new, clear and practical path for product-line implementation. Starting from the set of common and variable features needed to support a product-line, we can systematically develop and assemble the reusable elements needed to produce the customized components and frameworks to implement the products. The simple prescription given in section 4 is not a complete method, but it does suggest that product-line architects, domain engineers and component and framework designers should now become aware of subject and aspect engineering (and vice versa) and begin to use these ideas to structure their models and designs. Then they can structure the implementation, using OO inheritance and templates in a disciplined way, or perhaps using one of the generators, to explicitly manage and compose aspects traceable from features into members of the product family. More work also needs to be done on clarifying the terminology and unifying several of the approaches.
Acknowledgements
I am grateful for several useful and extremely clarifying suggestions and comments on revisions of this paper by Don Batory, Gregor Kiczales, Peri Tarr and Michael Van Hilst.
References
1. D. L. Parnas, "On the criteria to be used in decomposing systems into modules," CACM, 15(12):1053-1058, Dec 1972.
2. D. L. Parnas, "On the Design and Development of Program Families," IEEE Transactions on Software Engineering, vol. 2, no. 1, pp. 1-9, Mar 1976.
3. D. L. Parnas, "Designing Software for Ease of Extension and Contraction," IEEE Transactions on Software Engineering, vol. 5, no. 6, pp. 310-320, Mar 1979.
4. J. Bosch, "Product-Line Architectures and Industry: A Case Study," Proceedings of ICSE 99, 16-22 May 1999, Los Angeles, California, USA, ACM Press, pp. 544-554.
5. G. Arango, "Domain Analysis Methods," in W. Schäfer et al., Software Reusability, Ellis Horwood, Hemel Hempstead, UK, 1994.
6. K. C. Kang et al., "Feature-Oriented Domain Analysis Feasibility Study," SEI Technical Report CMU/SEI-90-TR-21, November 1990.
7. M. Griss, J. Favaro and M. d'Alessandro, "Integrating Feature Modeling with the RSEB," Proceedings of ICSR98, Victoria, BC, IEEE, June 1998, pp. 36-45.
8. K. C. Kang et al., "FORM: A feature-oriented reuse method with domain-specific architectures," Annals of Software Engineering, vol. 5, pp. 143-168, 1998.
9. K. C. Kang, "Feature-oriented Development of Applications for a Domain," Proceedings of the Fifth International Conference on Software Reuse, June 2-5, 1998, Victoria, British Columbia, Canada, IEEE Computer Society Press, pp. 354-355.
10. K. C. Kang, S. Kim, J. Lee and K. Lee, "Feature-oriented Engineering of PBX Software for Adaptability and Reusability," Software - Practice and Experience, Vol. 29, No. 10, pp. 875-896, 1999.
11. I. Jacobson, M. Griss, and P. Jonsson, Software Reuse: Architecture, Process and Organization for Business Success, Addison-Wesley-Longman, 1997.
12. G. Booch, J. Rumbaugh and I. Jacobson, The Unified Modeling Language User Guide, Addison-Wesley-Longman, 1999.
13. E. Gamma, R. Helm, R. Johnson and J. Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley, 1994.
14. F. Buschmann et al., Pattern-Oriented Software Architecture - A System of Patterns, John Wiley & Sons, 1996.
15. G. Kiczales, J. M. Ashley, L. Rodriguez, A. Vahdat, and D. G. Bobrow, "Metaobject protocols: Why we want them and what else they can do," in A. Paepcke, editor, Object-Oriented Programming: The CLOS Perspective, pp. 101-118, The MIT Press, Cambridge, MA, 1993.
16. J. A. Goguen, "Parameterized Programming," IEEE Trans. Software Eng., SE-10(5), Sep 1984, pp. 528-543.
17. G. Kiczales, J. des Rivières, and D. G. Bobrow, The Art of the Metaobject Protocol, MIT Press, 1991.
18. D. Garlan and M. Shaw, Software Architecture: Perspectives on an Emerging Discipline, Prentice-Hall, 1996.
19. D. R. Musser and A. A. Stepanov, "Algorithm-Oriented Generic Libraries," Software - Practice and Experience, Vol. 24(7), 1994.
20. D. R. Musser and A. Saini, STL Tutorial and Reference Guide, Addison-Wesley, 1996.
21. D. Batory and S. O'Malley, "The design and implementation of hierarchical software systems with reusable components," ACM Transactions on Software Engineering and Methodology, 1(4):355-398, October 1992.
22. J. Dehnert and A. Stepanov, "Fundamentals of Generic Programming," Proc. Dagstuhl Seminar on Generic Programming, April 27-May 1, 1998, Schloß Dagstuhl, Wadern, Germany.
23. M. Jazayeri, "Evaluating Generic Programming in Practice," Proc. Dagstuhl Seminar on Generic Programming, April 27-May 1, 1998, Schloß Dagstuhl, Wadern, Germany.
24. M. Van Hilst and D. Notkin, "Using C++ Templates to Implement Role-Based Designs," JSSST Symposium on Object Technologies for Advanced Software, Springer-Verlag, 1996, pp. 22-37.
25. M. Van Hilst and D. Notkin, "Using Role Components to Implement Collaboration-Based Designs," Proc. OOPSLA 96, 1996, pp. 359-369.
26. M. Van Hilst and D. Notkin, "Decoupling Change From Design," ACM SIGSOFT, 1996.
27. K. Czarnecki and U. W. Eisenecker, "Components and Generative Programming," ACM SIGSOFT 1999 (ESEC/FSE), LNCS 1687, Springer-Verlag, 1999. Invited talk.
28. K. Czarnecki and U. W. Eisenecker, Template-Metaprogramming. Available at http://home.t-online.de/home/Ulrich.Eisenecker/meta.htm
29. U. Eisenecker, "Generative Programming: Beyond Generic Programming," Proc. Dagstuhl Seminar on Generic Programming, April 27-May 1, 1998, Schloß Dagstuhl, Wadern, Germany.
30. P. G. Bassett, Framing Reuse: Lessons from the Real World, Prentice Hall, 1996.
31. P. Tarr, H. Ossher, W. Harrison and S. M. Sutton, Jr., "N Degrees of Separation: Multi-dimensional Separation of Concerns," Proc. ICSE 99, IEEE, Los Angeles, May 1999, ACM Press, pp. 107-119.
32. C. V. Lopes and G. Kiczales, "Recent Developments in AspectJ," in ECOOP'98 Workshop Reader, Springer-Verlag, LNCS 1543.
33. W. Harrison and H. Ossher, "Subject-Oriented Programming (a critique of pure objects)," Proc. OOPSLA 93, Washington, DC, Sep 1993, ACM, pp. 411-428.
34. D. Batory, "Subjectivity and GenVoca Generators," Proc. ICSR 96, Orlando, FL, IEEE, April 1996, pp. 166-175.
35. K. J. Lieberherr and C. Xiao, "Object-oriented Software Evolution," IEEE Transactions on Software Engineering, vol. 19, no. 4, pp. 313-343, April 1993.
36. G. Kiczales, J. Lamping, A. Mendhekar, C. Maeda, J.-M. Loingtier and J. Irwin, "Aspect-Oriented Programming," Proc. ECOOP 1997, Springer-Verlag, June 1997, pp. 220-242.
37. M. Akşit, L. Bergmans, and S. Vural, "An Object-Oriented Language-Database Integration Model: the Composition-Filters Approach," in Proceedings of the 1992 ECOOP, pp. 372-395, 1992.
38. M. Akşit, "Composition and Separation of Concerns in the Object-Oriented Model," ACM Computing Surveys, 28A(4), December 1996.
39. Y. Smaragdakis and D. Batory, "Implementing Reusable Object-Oriented Components," Proc. of ICSR 98, Victoria, BC, June 1998, pp. 36-45.
40. Y. Smaragdakis and D. Batory, "Implementing Layered Designs with Mixin Layers," Proc. of ECOOP 98, 1998.
41. Y. Smaragdakis, "Reusable Object-Oriented Components," Proceedings of the Ninth Annual Workshop on Software Reuse, Jan. 7-9, 1999, Austin, Texas.
42. R. Cardone, "On the Relationship of Aspect-Oriented Programming and GenVoca," Proc. WISR, Austin, Texas, 1999.
43. M. Mezini and K. Lieberherr, "Adaptive Plug-and-Play Components for Evolutionary Software Development," Proceedings of OOPSLA '98, pp. 97-116.
44. R. J. Walker, E. L. A. Baniassad and G. C. Murphy, "An Initial Assessment of Aspect-Oriented Programming," Proc. ICSE 99, IEEE, Los Angeles, May 1999, pp. 120-130.
45. S. Clarke, W. Harrison, H. Ossher and P. Tarr, "Towards Improved Alignment of Requirements, Design and Code," Proceedings of OOPSLA 1999, ACM, 1999, pp. 325-339.
46. R. Johnson and B. Foote, "Designing Reusable Classes," Journal of Object-Oriented Programming, pp. 22-30, 35, June 1988.
47. D. Garlan, R. Allen, and J. Ockerbloom, "Architectural Mismatch, or Why It's Hard to Build Systems out of Existing Parts," in Proc. ICSE, 1995.
Representing Requirements on Generic Software in an Application Family Model

Mike Mannion1, Oliver Lewis2, Hermann Kaindl3, Gianluca Montroni4, and Joe Wheadon4

1 Department of Computing, Glasgow Caledonian University, Scotland, UK
tel: +44 141 331 3285, [email protected]
2 School of Computing, Napier University, Edinburgh, Scotland, UK
3 Siemens AG Österreich, Vienna, Austria
4 European Space Operations Centre, Darmstadt, Germany
Abstract. Generic software is built in order to deal with the variability of a set of similar software systems and to make their construction cheaper and more efficient. A typical approach to representing requirements variability in generic software is through the use of parameters, i.e. quantitative variability. Qualitative variability, however, is often dealt with in an implicit and ad hoc manner. In previous work, we used discriminants for representing qualitative variability in a model of application family requirements. In this paper we extend this approach by combining discriminants and parameters for modelling qualitative and quantitative variability. Using this approach, we present a case study in the domain of spacecraft control operating systems and focus on building an application family model. The experience suggests that our approach provides a clean and well-defined way of representing the variability of generic software.
1 Introduction
Some organisations set out to write generic software, i.e. software that can be parameterised and configured for reuse in a set of similar software systems to be derived from it. In writing the requirements for this software, some provision for future variation is often made in anticipation that the generic software will be slightly modified or that additional software will be bolted on to adapt it. However, the variability of generic software is not usually made explicit and is available only to an expert reader. Variability also occurs when organisations deliberately set out to develop a family of applications, i.e. a group of similar products within a market segment, e.g. mobile phones. In such an application family, the issue is to reuse as much as possible from already existing software systems in the course of building the next family member. To achieve large-scale reuse within an application family, most application family methods advocate deriving implemented reusable components from early lifecycle workproducts, including requirements.
If the requirements of a domain are not well understood, there is no basis for making intelligent decisions on reusable system architectures, designs and components. To support the process of application family requirements engineering we have developed a Method for Requirements Authoring and Management (MRAM) [12]. Its main focus is on representing the variability of an application family in an application family model, which consists of a pool of numbered, atomic, natural language core and variable requirements and a domain dictionary. The model contains all requirements in all the existing systems in the application family and is constructed as a lattice of parent-child relationships. The requirements in the model are all related to each other in parent-child relationships or through their relationship with discriminants. A discriminant is any requirement which differentiates one system from another, and it is the root of a tree in the lattice. The nodes below the root are the variants. Requirements belonging to each variant appear in the tree beneath the variant. To support MRAM we have developed a Tool for Requirements Authoring and Management (TRAM).

Due to the common theme of variability, the requirements of generic software might be represented in an application family model. In fact, MRAM and TRAM as described in [12] can represent configurability through discriminants. However, they have previously not covered parameterisation. So, we extended both MRAM and TRAM to combine parameters with discriminants. Typically, discriminants enable the specification of qualitative variability whilst parameterisation enables the specification of quantitative variability. In building a new instance of the generic software, the first task is to select desired requirements from the application family model of requirements for the generic software. Making choices at variable requirement points drives selection from the model. Requirements belonging to the chosen variant appear in the new instance. Whenever a parameter value is needed during the course of selection, TRAM prompts the requirements engineer.

The European Space Operations Centre (ESOC) of the European Space Agency is developing a new generation of spacecraft control operating system, SCOS-2000, which was designed as generic software. Spacecraft control operating systems are increasing in complexity, placing greater demands upon mission control centres. Increased budgetary constraints mean that these systems must be developed and operated with greater efficiency. The trend is to move towards systems that are easily customised across a range of missions with a high level of reuse across systems. The aim of SCOS-2000 is to define a configurable spacecraft control operating system infrastructure from which it is possible to define instances of SCOS-2000, called Mission Control Systems. The SCOS-2000 User Requirements Definition document [18] describes the requirements for the entire generic system from a user's perspective. In ESOC, requirements definitions consist of numbered atomic natural language requirements, and software systems are defined according to the PSS-05 [3] software development process model. SCOS-2000 has several generic sub-systems, e.g. Commanding, Procedure Execution, On-Board Software Maintenance, Telemonitoring and Display. Differences between Mission Control Systems are caused by variations to these sub-systems. Each sub-system has its own Software Requirements Document (SRD), a software requirements specification primarily written in natural
language. We have built an application family model of the Commanding sub-system, based on its SRD [19]. Commanding is concerned with the transmission and receipt of commands to control the spacecraft. As reported in [12], we previously built an application family model of an earlier version of the Commanding SRD in which there were 328 commanding requirements. In this earlier case study, we used only discriminants as a variable requirement specification technique and described our first efforts in using a discriminant-based selection method for generating the requirements for new commanding systems. Since then ESOC have expanded, refined and re-organised the Commanding SRD into 778 requirements and we have observed that discriminants are insufficient for representing each type of variability. In this paper, we report on a new case study, making use of the extended range of techniques to include parameterised requirements and especially parameterised discriminants. This paper is organised as follows. First, we present our extended approach for representing variability in an application family model. Then we propose a process for writing requirements for generic software and explain how to generate instances of generic software by selecting from an application family model of reusable requirements. Then we present the commanding requirements case study. Finally, we discuss important issues and related work.
2 Variability in an Application Family Model of Requirements
Considerable work has been undertaken on analysis, design and code reuse [4, 5, 8, 11, 13, 14, 16]. Outstanding technical questions on these issues have recently been reported in [15]. However, there has been little work on requirements reuse [17, 21]. In many application family engineering methods [1, 7, 16, 20], the process is to map application family requirements and requirements clusters onto analysis and design modelling structures, e.g. use cases, entity-relationship diagrams, and functional models. These are then mapped to reusable code components. When a new application is built, the desired requirements are selected from the application family requirements and compromises are made when parts of clusters are desired. However, typically there is little explanation about the specification of application family requirements. Requirements engineers often recognise similarities in systems they specify but usually have no motivation for making these requirements reusable due to budgetary considerations. When they can afford to take a longer view, few guidelines are available for writing reusable requirements. In practice, natural language continues to be the most common method for expressing requirements and will continue to be so for the foreseeable future. One simple, application-family-independent and effective classification [9] of natural language, atomic requirements is:

• non-reusable;
• directly reusable (composition);
• parameter based (generation).
We have replaced the third category by variable requirement, to include discriminants and parameterised requirements, and we have added a fourth category called obsolete.

2.1 Non-Reusable Requirements
Some requirements apply only to the system being built and are hence non-reusable. Examples from the mobile phones domain include requirements that refer to deadlines, to hardware unique to that phone or to any unique ground receiver facilities. The reason for permitting non-selectable non-reusable requirements into an application family model at creation time is to make backward traceability to the existing systems in the family more visible.

2.2 Directly Reusable Requirements
Some requirements are generic as they apply to all systems in a family and hence are directly reusable. Examples include requirements about the format of a telecommunications command or the logging of command history data. In a new document these types of requirements will be written as they were, without a reuse effort. However, the following guidelines will assist with subsequent analysis of non-reusable requirements to make them reusable:

• removing specific references;
• deriving common terms;
• splitting specific from generic.
Removing Specific References
It is very common for a subject name to be constantly referred to in a requirement:

R1: The Mobile Phone System PQR shall be able to …

This reference to PQR is often unnecessary. Removing specific references will increase the number of reusable requirements.

Commonality of Terms
Even in a tightly coupled domain, it is common for different terms to be used for essentially the same element. For example, a short-term plan may be described as a "short term plan", "detailed plan" or even "daily plan". It is possible to standardise on one term for writing requirements and leave the subtle differences to be detailed in a glossary. This greatly increases the level of reuse.

Splitting Specific from Generic
Often, a requirement is written that contains generic and specific parts. Although this can be correct, it can reduce the level of reuse, e.g.

R2: When receiving an incoming call the mobile phone shall respond with a bell tone within 10 seconds.

This is better written as one completely reusable requirement and one parameter-based requirement, i.e.

R2: When receiving an incoming call the mobile phone shall respond with a bell tone.
R3: When receiving an incoming call the bell tone shall sound within N seconds.
2.3 Variable Requirements Discriminants One technique for specifying variable requirements is to label those requirements as discriminants. A discriminant is any requirement that differentiates between systems. Discriminants can take three forms: •
• mutual exclusion (single adaptor): a set of mutually exclusive requirements from which only one can be chosen in any system in the application family. For example, a mobile phone requires the network services of a telecommunications carrier but it will only ever use one carrier at a time. A mobile phone user can switch from company A to company B without having to buy a new phone, i.e. the phone can deal with either carrier, but the user is a customer of only one of them at any given time. In a requirements specification this might be written as:
<SA> R4 The mobile phone requires the network services of one telecommunications carrier at a time.
  R4.1 The telecommunications carrier shall be X1;
  R4.2 The telecommunications carrier shall be X2;
• a list of alternatives (multiple adaptor): a set of requirements which are not mutually exclusive, but from which at least one must be chosen. For example, on any mobile phone there can be several ways of making a phone call, but there must be at least one. In a requirements specification this might be written as:
<MA> R5 There shall be the facility to make a telephone call.
  R5.1 A telephone call shall be made by pressing the numeric digits.
  R5.2 A telephone call shall be made by storing the number in memory and then pressing a memory recall button.
  R5.3 A telephone call shall be made by pressing a ringback facility to dial the number of the last incoming call.
  R5.4 A telephone call shall be made by using speech recognition technology to say the numbers aloud.
• option: a single optional requirement that may or may not be chosen. For example, a mobile phone may or may not have an Internet connection. In a requirements specification this might be written as:
R6 The mobile phone shall have an Internet connection.

Dependencies between discriminants can be represented in the hierarchy. For example, while email may be an option, the list of email protocols it supports will be a list of alternatives from which at least one will be chosen. In a requirements specification this might be written as a hierarchy of requirements:

R7 The mobile phone shall support email.
  <MA> R7.1 There shall be the facility to use one or more email protocols.
    R7.1.1 There shall be the facility to use the Post Office Protocol.
    R7.1.2 There shall be the facility to use the Internet Message Access Protocol.
    R7.1.3 There shall be the facility to use the Simple Mail Transfer Protocol.

Parameters

Another technique to introduce variability into requirements is to use parameters. Some requirements specify a level of performance, an element of time, or a list of required functionality. Often these requirements can be reused if the actual measure of performance, the time value or the list of items is changed. Example R3 above illustrates that mobile phones may respond to incoming calls within different times. Some requirements may contain more than one parameter. Consider:

R8: The mobile phone shall respond to X commands simultaneously within Y seconds.
We distinguish between two types of parameters:
• local parameters: the scope of a local parameter is exclusive to the single requirement within which that parameter is contained. In requirement text we can use $ to denote a local parameter.
• global parameters: the scope of a global parameter is the set of application family requirements. Any change to the existence, meaning or value of a global parameter will affect every requirement containing that parameter. In requirement text we can use @ to denote a global parameter.
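A minimal sketch of how the two notations might be resolved when a system instance is generated is given below; the helper is hypothetical (not TRAM functionality), R8 is written here with explicit $ markers for its local parameters, and the global value shown is only an example.

```python
import re

# Hypothetical example value; a global such as @MAXNUMCMD appears in the case study (Sect. 4).
GLOBAL_VALUES = {"MAXNUMCMD": 255}   # fixed once, before requirement selection starts

def resolve_text(text, local_values):
    """Substitute @GLOBAL and $LOCAL markers in a single requirement's text."""
    text = re.sub(r"@(\w+)", lambda m: str(GLOBAL_VALUES[m.group(1)]), text)
    text = re.sub(r"\$(\w+)", lambda m: str(local_values[m.group(1)]), text)
    return text

print(resolve_text(
    "The mobile phone shall respond to $X commands simultaneously within $Y seconds.",
    {"X": 3, "Y": 2}))
```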
The value of any parameter may change across different instances of the generic software.

Parameterised Discriminants

One perspective on the different approaches is to consider discriminants as permitting the specification of qualitative variability whereas parameters permit quantitative variability. We can combine discriminants and parameters to model both kinds of variability, which is valuable when complex variability must be expressed. A parameterised discriminant is a mechanism for combining the two variability types in a single requirement: it is a discriminant that also happens to contain parameters.
If the parameters are removed, the requirement remains a discriminant. Consider the single adaptor discriminant example R4 above. We can change R4 into a parameterised single adaptor discriminant R9:

<SA> R9 The mobile phone requires the network services of 1 from @N telecommunications carriers at a time.
  R9.1 The telecommunications carrier shall be X1;
  R9.2 The telecommunications carrier shall be X2;
  ...
  R9.N The telecommunications carrier shall be XN;

We are combining two elements of mutual exclusivity in one requirement but using two different techniques to do so. First we want to convey that for any given system there can be a set of telecommunications carriers from which we can only choose one. Here we use the single adaptor discriminant technique. Second we want to convey that the size of the set of telecommunications carriers can also vary between systems. Here we use the parameter technique.

As another example, R10.1 is a parameterised multiple adaptor discriminant, which is a modified form of R7.1 above:

<MA> R10.1 There shall be the facility to use 1 up to @M email protocols, choosing from:
  R10.1.1 There shall be the facility to use Post Office Protocol.
  R10.1.2 There shall be the facility to use Internet Message Access Protocol.
  R10.1.3 There shall be the facility to use Simple Mail Transfer Protocol.
  ...
  R10.1.N There shall be the facility to use ABC Protocol.

We are combining a list of alternatives with a quantitative element. We want to convey that for any given system there can be a set of email protocols from which we must choose between 1 and M. The size of the set from which we choose is equal to N, where 1 ≤ M ≤ N. We use the multiple adaptor discriminant technique. We also want to convey that the size of the set of email protocols can vary between systems. Here we use the parameter technique.

As a third example, R11 is a parameterised option discriminant:

R11: There shall be a Short Message Service permitting short messages of up to @N characters.

We are combining an option with a quantitative element. We want to convey the optional requirement that for any given system there can be a short message service. We use the option discriminant technique. We also want to convey that the number of characters in a message can vary between systems. We use the parameter technique.

In each case we can rewrite these requirements by apportioning the two variability elements into two separate requirements. There is always a balance during the specification of requirements between conciseness and clarity.
Parameterised discriminants are a convenient form to express complex variability. Sometimes the value that a parameter takes can affect future selection decisions. Consider the requirement:

R12: All mobile phones have a range of operation of @N miles.

The value of this range, e.g. a 100-mile or a 900-mile radius, will determine the set of features available to the user. To make this variability more explicit, R12 can be re-written as separate requirements contained in a single adaptor, each distinguished by different values for the range of operation, e.g.:

<SA> R12: All mobile phones have a defined range of operation.
  R12.1: The range of operation shall be ≤ @U miles.
  R12.2: The range of operation shall be > @L and ≤ @U miles.
  R12.3: The range of operation shall be > @L miles.

2.4 Obsolete Requirements

As an application family evolves, some reusable requirements may no longer be needed in the systems to be built in the future, regardless of whether or not they have been included in existing systems. It can be helpful to leave these requirements in the application family, to record which systems they have been reused within, but to mark them as obsolete and hence no longer available for selection into subsequent systems. Obsolete requirements can be any of the reusable requirement types above.
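Pulling together the variability types of this section, a hypothetical data model (an illustration only, not the authors' tool) could represent each atomic requirement with an optional discriminant kind plus a list of parameter names, so that R9 becomes a parameterised single adaptor discriminant:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class Kind(Enum):
    SINGLE_ADAPTOR = "SA"     # choose exactly one child
    MULTIPLE_ADAPTOR = "MA"   # choose at least one child
    OPTION = "OP"             # choose the requirement or not

@dataclass
class Requirement:
    req_id: str
    text: str
    kind: Optional[Kind] = None                              # None: not a discriminant
    parameters: List[str] = field(default_factory=list)      # names of $/@ parameters in the text
    children: List["Requirement"] = field(default_factory=list)
    obsolete: bool = False

    @property
    def parameterised_discriminant(self) -> bool:
        return self.kind is not None and bool(self.parameters)

# R9 as a parameterised single adaptor discriminant (carrier names X1, X2 are placeholders)
r9 = Requirement(
    "R9", "The mobile phone requires the network services of 1 from @N carriers at a time.",
    kind=Kind.SINGLE_ADAPTOR, parameters=["N"],
    children=[Requirement("R9.1", "The telecommunications carrier shall be X1."),
              Requirement("R9.2", "The telecommunications carrier shall be X2.")])
assert r9.parameterised_discriminant
```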
3 Application Family Requirements Engineering for Generic Software

For representing the requirements for generic software in an application family model, and for generating instances of those requirements from the model, we propose detailed process guidance.

3.1 Writing Requirements for Generic Software

During the creation of the model, for each new requirement to be introduced the order of consideration is: directly reusable requirement, discriminant, parameterised discriminant, parameterised requirement. At creation time there will be no non-reusable requirements (because there are no existing systems at the time of writing the requirements) and no obsolete requirements. Fig. 1 shows the steps for this process. The assumption is that every new requirement introduced into an application family model must be used at least once.
Even when it is known that a new requirement will only be used once, it is marked as reusable. It can be marked as obsolete after it has been selected for the one system of which it is to be a part. The marking of a requirement as obsolete needs consideration, as all the requirements that depend on it will also be marked as obsolete. If a part of the requirement tree beneath the obsolete requirement is to remain available for selection, then the model needs to be reorganised and a new point of variability introduced.

3.2 Generating Requirements for Instances of Generic Software

From an application family model of requirements for generic software, we can generate the requirements for a new instance of the generic software.

Step 1 Directly reusable requirement? If yes, introduce the requirement into the model, go to Step 7.
Step 2 Is the variability introduced by the new requirement qualitative? If no, go to Step 5.
Step 3 Determine if the requirement is a single adaptor, multiple adaptor or option.
Step 4 Determine if the requirement can be the leaf node of an existing discriminant in the hierarchy or whether a new parent requirement must be formed.
Step 5 Is the variability introduced by the new requirement quantitative? If no, go to Step 7.
Step 6 Determine how many parameters there should be and whether they are local or global.
Step 7 End.

Fig. 1. Introducing New Requirements into an Application Family Model
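The Fig. 1 procedure could be sketched as a small classification routine; the boolean answers stand in for the judgements of the requirements engineer, and the function is purely illustrative:

```python
def classify_new_requirement(is_directly_reusable, has_qualitative_variability,
                             has_quantitative_variability):
    """Sketch of Fig. 1: decide how a new requirement enters the family model."""
    decisions = []
    if is_directly_reusable:                      # Step 1
        return ["directly reusable"]
    if has_qualitative_variability:               # Steps 2-4
        decisions.append("discriminant: choose single adaptor, multiple adaptor or option")
        decisions.append("attach as leaf of an existing discriminant or form a new parent")
    if has_quantitative_variability:              # Steps 5-6
        decisions.append("parameterised: fix number of parameters and local/global scope")
    return decisions                              # Step 7

# e.g. a requirement with both kinds of variability -> a parameterised discriminant
print(classify_new_requirement(False, True, True))
```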
A common approach to the selection of requirements for reuse is free selection, i.e. allowing a requirements engineer to browse the application family model and simply copy and paste a single requirement from anywhere in the model to the new instance. However, this has limitations, including:
• random choice can mean illegal choice, e.g. two mutually exclusive requirements are selected, or requirements which must be included are not selected;
• there can be an untenable number of choices.
In [12] we described a discriminant-based selection method that assumed that the only variable requirement specification technique being deployed was discriminants. We extend this approach to a variable requirement-based selection method that assumes that parameterisation, discriminants and parameterised discriminants are all deployed as variable requirement specification techniques. Before the requirements selection process begins, the values of global parameters are defined. Then starting from the top of each tree, the lattice is traversed depth-first and selected requirements are added to the new instance. During traversal not every requirement will be visited. As the model is arranged as a dependency hierarchy, visitation will depend on prior selection. Directly reusable requirements are automatically selected. At variable requirements, a requirements engineer has to make choices. At single adaptor discriminants
the choice is one requirement from many. At multiple adaptor discriminants the choice is at least one requirement (but it can be several) from many. At option discriminants the choice is to select the requirement or not. After choices are made, traversal proceeds down routes of selection. At parameterised requirements and parameterised discriminants, choices must be made for each local parameter. Parameterised requirements containing only global parameters are automatically selected. Table 1 summarises the selection decisions to be made when the different reusable requirement types are visited.
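A simplified sketch of this depth-first selection pass is shown below. It builds on the hypothetical Requirement/Kind model sketched at the end of Section 2; the choose_* callbacks stand in for the engineer's decisions and none of this is TRAM code.

```python
def select(req, instance, choose_one, choose_some, choose_option):
    """Depth-first traversal: only requirements reachable via prior selections are visited."""
    if req.obsolete:
        return                                    # obsolete requirements are never selectable
    if req.kind is Kind.OPTION and not choose_option(req):
        return                                    # option rejected: skip requirement and subtree
    instance.append(req)
    if req.kind is Kind.SINGLE_ADAPTOR:
        chosen = [choose_one(req.children)]       # exactly one alternative
    elif req.kind is Kind.MULTIPLE_ADAPTOR:
        chosen = choose_some(req.children)        # at least one alternative
    else:
        chosen = req.children                     # directly reusable, or option accepted
    for child in chosen:
        select(child, instance, choose_one, choose_some, choose_option)

picked = []
select(r9, picked, choose_one=lambda xs: xs[0],
       choose_some=lambda xs: xs[:1], choose_option=lambda r: True)
print([r.req_id for r in picked])   # e.g. ['R9', 'R9.1']
```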
4 Case Study

We were given the software requirements definition document [19] for the commanding sub-system of the SCOS-2000 spacecraft control operating system. The Commanding SRD consisted of 778 requirements and is concerned with the transmission and receipt of commands to control the spacecraft. To evaluate MRAM and TRAM we built an application family model of these commanding requirements. Spacecraft commands are organised into tasks. Tasks can be grouped together into command sub-systems. Tasks have parameters, which have attributes of type and length so that each task can be executed with varying amounts of information. Parameter types are represented by Parameter Type Codes (PTC). Parameter lengths are represented by Parameter Format Codes (PFC).

Table 1. Selection Decisions for Reusable Requirement Types

Directly reusable requirement: Always selected.
Parameterised requirement: Always selected, but a user is prompted for the value of each local parameter in the requirement. NB. The values of global parameters are set prior to the requirements selection process.
Single adaptor discriminant: A user must select one requirement from a list. Traversal proceeds down the route of selection.
Multiple adaptor discriminant: A user can select several requirements, but must select at least one. Traversal proceeds depth-first down each route of selection.
Option discriminant: A user can select the requirement or not. Traversal proceeds downwards if the option is selected; otherwise traversal proceeds back to the next selection decision point in the lattice.
Parameterised discriminant: If a particular requirement is selected as one of the alternatives of a discriminant and it contains local parameters, a user is prompted for the value of each of these parameters. NB. The values of global parameters are set prior to the requirements selection process.
Obsolete requirement: Not selectable.
The commanding SRD was organised into a hierarchy of sections, each section containing a set of requirements, e.g. command interlocking, task parameters. Each requirement had an identifier of the form CMD-99999. Requirements in the document were numbered in increasing sequential order. The requirements were generally well written, but about 10% of them had to be re-written to make the implicit variability more explicit, using the techniques described in Section 2. Examples of unparameterised single adaptor, multiple adaptor and option discriminants of an earlier version of those requirements are provided in [12].

4.1 Parameterised Requirements

There were no requirements as given in [19] that contained parameters to make quantitative variability explicit. Some requirements did contain "magic" numbers where the intention was to convey variability, e.g.:

CMD-01220 It shall be possible to define up to 255 command sub-systems.
Comment: Within each application in the commanding application family there shall be a facility to define a set of command sub-systems. The maximum value of this set will change from system to system.

This type of requirement was re-written to become a parameterised requirement, e.g.:

CMD-01220 It shall be possible to define up to a maximum of @MAXNUMCMD command sub-systems, where MAXNUMCMD cannot be greater than 255.

4.2 Parameterised Discriminants

The only technique used to explicitly represent qualitative variability was the use of a suffix to a requirement's status. Requirements were marked as being Essential (E) or Desirable (D). Some Essential and Desirable requirements were suffixed to indicate that they were to be included only in specific mission control systems. For example, a requirement marked as E-I was an essential requirement to be implemented only as part of the Integral Mission Control System. Such requirements were rewritten as discriminants.

Parameterised Single Adaptor Discriminant

Requirement variability was often hidden in a requirement's corresponding commentary. Consider:

CMD-92610 A task parameter can be of type Record, containing an Array of up to 6 elements.
Comment: For elements of type PTC = 6 (BitString), the PFC = 0 for variable length, PFC > 0 for a fixed length string.

The term "TPCR" is coined to mean "Task Parameter Component Record". A choice must be made about the value of the PFC (Parameter Format Code), making the requirement a single adaptor discriminant.
However, the choice of TPCR Element-Type varies between systems in the application family, so a global parameter TPCR_El_Typ was introduced to represent the type. The definition of "6:BitString" as the value of TPCR_El_Typ can be made for the system selection (the type code for BitString is fixed by the ESA PUS standard). The requirement was re-written as a parameterised single adaptor discriminant:

<SA> CMD-92610 For TPCR = @TPCR_El_Typ the PFC shall assume one of the following values:
  CMD-92620 The PFC shall be 0.
  CMD-92630 The PFC shall be > 0.
Parameterised Multiple Adaptor Discriminant

A similar case could arise for a multiple adaptor:

CMD-02400 For OBDH commanding, a task parameter can be of type Record, containing an Array of a defined subset of PUS Parameter Types.

When selecting a system, a choice must be made about the permitted Parameter Types, and the maximum number in a command, allowed by the OBDH. This can be expressed by a parameterised multiple adaptor discriminant:

<MA> CMD-02400 For OBDH commanding, a task parameter can be of type Record containing an Array of a maximum of @TPCR_OBDH_Max_El elements, of the following possible PTC Types and PFC:
  CMD-02410 PTC = 3: UnsignedInteger, with PFC = 14
  CMD-02420 PTC = 4: SignedInteger, with PFC = 14
  CMD-02430 PTC = 3: UnsignedInteger, with PFC = 16
  CMD-02440 PTC = 5: Real, with PFC = 2
  CMD-02450 PTC = 6: BitString, with PFC = 16
  CMD-02460 PTC = 8: CharacterString, with PFC = 128
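As a small illustrative check (a hypothetical helper, not part of TRAM), a selection from such a parameterised multiple adaptor discriminant can be validated against the "at least one, at most @TPCR_OBDH_Max_El" constraint:

```python
def valid_multiple_adaptor_selection(chosen_ids, alternatives, max_elements):
    """At least one alternative must be chosen, and no more than the global maximum."""
    return (0 < len(chosen_ids) <= max_elements
            and all(cid in alternatives for cid in chosen_ids))

alternatives = {"CMD-02410", "CMD-02420", "CMD-02430",
                "CMD-02440", "CMD-02450", "CMD-02460"}
# Assume the global parameter @TPCR_OBDH_Max_El was set to 4 for this instance (example value).
print(valid_multiple_adaptor_selection({"CMD-02410", "CMD-02450"}, alternatives, 4))  # True
```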
Parameterised Option Discriminant

Qualitative requirement variability often tended to be implicit and only available to a knowledgeable reader. For example, the parameterised requirement CMD-52700 is an option discriminant:

CMD-52700 The system shall be able to support multiple on-board queue models.

4.3 Tool Support

MRAM is supported by TRAM. Requirements can be constructed and browsed using an interface based upon the standard Microsoft Explorer interface. Fig. 2 shows an example of a parameterised multiple adaptor discriminant.
Fig. 2. Example of Parameterised Multiple Adaptor Discriminant in TRAM

Table 2 describes the information that can be stored in TRAM for each requirement.

Table 2. Requirement Information in TRAM

Parameters: The definition of the local parameters contained in the requirement.
Lattice: A brief description of any parent and child requirements.
Detail: Attribute information including type (e.g. single adaptor), rationale, status.
Linked To: Details of links to other requirements, other than parent-child links.
Viewpoint: The stakeholders that are interested in this requirement.
Glossary: A list of terms contained in the requirement that are defined in the glossary.
The lattice can be restructured using 'drag and drop'. Once a node in the lattice has been selected, it can be dragged to another node. The user is then prompted to confirm or cancel the move. Requirements and discriminants are also identified by icons. An icon is coloured green if a requirement is still in use and red if it is obsolete. The icons used are:
• R - requirement;
• PR - parameterised requirement;
• SA - single adaptor discriminant;
• MA - multiple adaptor discriminant;
• OP - option discriminant;
• PSA - parameterised single adaptor discriminant;
• PMA - parameterised multiple adaptor discriminant;
• POP - parameterised option discriminant.
Building the application family model took about 200 man-hours. This included reading and understanding the Commanding SRD, consulting domain experts, rewriting requirements, restructuring the requirements hierarchy, entering requirements into TRAM, defining the glossary items, and project reporting. Table 3 shows the variability in the commanding requirements.

Table 3. Variability in Commanding Requirements

Directly Reusable: 737
Parameterised Requirement: 14
Single Adaptor Discriminant: 3
Multiple Adaptor Discriminant: 8
Option Discriminant: 15
Parameterised Discriminant: 1
A demonstration of the case study was well received by ESOC requirements engineers from different departmental sections, e.g. flight dynamics and mission planning, largely because they were being offered a solution to a problem that they each face, namely introducing and representing requirements variability, and because the MRAM requirements definition process complements their existing requirements definition process. The TRAM explorer interface gives a good view at any desired level of the requirements organisation, and provides a convenient means for changing and refining this organisation. It helps to focus attention on the organisational aspects of the application family. This is important because there can be several different ways of organising these requirements and hence the systems that can be selected from them. The aim of the application family modeller is to create a structure that is most appropriate for re-use. We used the variable requirement selection method to generate a hypothetical instance of the generic software that contained 716 of the 778 application family requirements. This took about 30 minutes because only 20 discriminant selection decisions and 12 parameter value entries had to be made. On completion we could be sure that the new instance did not contain conflicting requirements. This compares to 8-12 hours using a free selection method, because there were 778 selection decisions to be made and each selection involved checking that conflicting requirements had not been selected.
5 Discussion and Related Work

Our specification method assumes that requirements are written as numbered atomic natural language requirements. Documentation not written in this way will require additional effort to isolate requirements that can be reused. The method also assumes that requirements are structured as a lattice of parent-child relationships. This worked well in this case study as it maintained the existing hierarchical structure of requirements in the Commanding SRD.
However, if a requirements specification has to be reverse engineered from generic software then it may not always be easy to see how the specification might be restructured into a lattice. Although defining commonality and variability is a step in most application family methods, the detail is sparse. The Scope, Commonality and Variability (SCV) analysis approach [2] addresses this gap by refining the process steps but does not define specific techniques to use at each step. The techniques within MRAM for identifying and specifying variable requirements can be used within SCV. FODA [1] and FeatuRSEB [6] take a similar approach to modelling as MRAM but use features rather than requirements. There are three phases in FODA: feature modelling, information analysis and operational analysis. A feature model is a hierarchical tree diagram in which the features are typically one-word terms meaningful to stakeholders. Variability is modelled by permitting features to be mandatory, alternative and optional. Information analysis captures and defines the data requirements of the applications in the domain. The output of this task may take a combination of forms including entity-relationship diagrams, class diagrams, and structure charts. Operational analysis identifies the control and data flow of the applications in the domain. The output of this task may also take a combination of forms including data flow diagrams, class interaction diagrams and state transition diagrams. The models are used to assist a domain analyst in the derivation of a requirements specification. However, FODA does not provide support for the representation of requirements variability within the specification. In addition, whilst FODA describes mechanisms similar to discriminants for modelling qualitative variability, it does not discuss how these might be combined with parameters. We do not claim that our discriminant categories are necessary and sufficient for representing qualitative variability, but we do believe them to be intuitive. We observed that the determination of variable requirements using a combination of discriminants and parameters proved straightforward. Without an application family model to select from, the typical approach to generating the requirements for a new instance would be "copy, paste then edit". The advantage of a model is that requirements engineers can make better use of their time, focusing on the implications of a requirement's inclusion in a new instance without thinking about additional definition, specification, linkage and traceability issues. Further work is required on the selection facilities available within TRAM. We observed that even with a modest number of selection decisions to be made, e.g. 20, engineers can easily forget a decision made at the start of the selection process that is pertinent to a later decision. Recording selection decisions will be a marked improvement. Secondly, constraining the traversal of the lattice to be depth-first may not always be appropriate. A better solution is to permit the order of selection to be more flexible whilst ensuring that selection decisions continue to satisfy the constraints imposed by the discriminants and parameters. Such flexibility will also increase the need for selection recording. In principle, since the aim of application family modelling is usually to save time or money through reuse of the one model, there is little to be gained by maintaining several versions of it.
However, it is often desirable to maintain traceability between the application family model and the systems selected from it. When an application family
model is updated, it is important to make visible that some systems were selected from previous versions of the model. An advantage of marking requirements as obsolete is that it allows this traceability to be maintained without defining several versions of the application family model. Database size will increase but, as cost-performance ratios for computer hardware decrease, functional performance is not likely to be a big issue.
6 Conclusion

We have developed a Method for Requirements Authoring and Management (MRAM) that supports the process of application family requirements engineering by explaining:
• how to produce a model of application family requirements;
• how to generate a system model for each new system in the family, derived from the application family model.
The focus of this paper has been the articulation of the techniques within MRAM that permit the clean specification of variable requirements using a combination of parameters and discriminants. These techniques can be used to represent the requirements of generic software in an application family model. This is important because the variation in generic software requirements is typically implicit and available only to an expert reader. By making this variation explicit, the instantiation and traceability of systems from the generic software is made simpler and more visible. We have demonstrated this by generating a set of commanding requirements for a spacecraft control operating system from a pool of application family requirements.
Acknowledgements This work was carried out under ESOC Study Contract 12171/97/D/IM/. We would like to thank ESOC domain experts for spending time explaining the spacecraft control system domain.
References

[1] Cohen, S., Hess, J., Kang, K., Novak, W., and Peterson, A.: Feature-Oriented Domain Analysis (FODA) Feasibility Study, Technical Report CMU/SEI-90-TR-21, Carnegie Mellon University (1990).
[2] Coplien, J., Hoffman, D., and Weiss, D.: Commonality and Variability in Software Engineering, IEEE Software, 15, 6, November/December (1998) 37–45.
[3] European Space Agency PSS-05-1, 1991–1994.
[4] Finkelstein, A.: Reuse of Formatted Requirements Specifications, Software Engineering Journal, 3, 5, May (1988) 186–197.
[5] Gomaa, H.: Reusable Software Requirements and Architectures for Families of Systems, Journal of Systems and Software, 28, 11 (1995) 189–202.
[6] Griss, M., Favaro, J., and d'Alessandro, M.: Integrating Feature Modeling with the RSEB, in Proc. of the IEEE Int'l Conf. on Software Reuse (ICSR5), Vancouver, June (1998) 76–85.
[7] Jacobson, I., Griss, M., and Jonsson, P.: Software Reuse: Architecture, Process and Organization for Business Success, Addison-Wesley, ISBN 0-201-92476-5 (1997).
[8] Jeng, J-J., and Cheng, B.H.C.: Specification Matching for Software Reuse: A Foundation, in Proc. of the ACM Symp. on Software Reuse, Seattle, Washington, April (1995) 97–105.
[9] Keepence, B., Mannion, M., and Smith, S.: SMARTRe Requirements: Writing Reusable Requirements, in Proc. of the IEEE Symp. on Eng. of Computer-based Systems, Tucson, Arizona, March (1995) 27–34, ISBN 0-7803-2531-1.
[10] Lam, W., McDermid, J. A., and Vickers, A. J.: Ten Steps Towards Systematic Requirements Reuse, Requirements Engineering Journal, 2, 2 (1997) 102–113.
[11] Maiden, N., and Sutcliffe, A.: Exploiting Reusable Specifications Through Analogy, CACM, 35, 4 (1992) 55–64.
[12] Mannion, M., Keepence, B., Kaindl, H., and Wheadon, J.: Reusing Single System Requirements From Application Family Requirements, in Proc. of the 21st IEEE International Conference on Software Engineering (ICSE'99), May (1999) 453–462.
[13] Massonet, P., and van Lamsweerde, A.: Analogical Reuse of Frameworks, in Proc. of the 3rd IEEE Int'l Symp. on Requirements Engineering, Annapolis, Maryland, USA, January (1997) 26–33.
[14] Mili, H., Mili, F., and Mili, A.: Reusing Software: Issues and Research Directions, IEEE Trans. on Software Engineering, 21, 6 (1995) 528–561.
[15] Mili, A., Yacoub, S., Addy, E., and Mili, H.: Toward an Engineering Discipline of Software Reuse, IEEE Software, 16, 5, September/October (1999) 22–31.
[16] Organization Domain Modeling (ODM) Guidebook Version 2.0, STARS-VC-A025/001/00, June 14 (1996), Electronic Systems Center, Air Force Systems Command, USAF, Hanscom AFB, MA 01731-2816.
[17] Ryan, K., and Matthews, B.: Matching Conceptual Graphs as an Aid to Requirements Reuse, in Proc. of the IEEE Symp. on Requirements Engineering, San Diego, ISBN 0-8186-3120-1, January (1993) 112–120.
[18] SCOS-2000 User Requirements Document, SCOSII-URD-4.0, Issue 4, Draft 1, October 20 (1995).
[19] SCOS-2000 Commanding Software Requirements Document, S2K-MCS-SRD-0002-TOS-GCI, Issue 2.0, May 21 (1999).
[20] Software Productivity Consortium Services Corporation, Reuse-Driven Processes Guidebook, SPC-92019-CMC, November 1993, SPC Bldg 2214 Rock Hill Rd, Herndon, Virginia.
[21] Sutcliffe, A.: A Conceptual Framework for Requirements Engineering, Requirements Engineering Journal, 1, 3 (1996) 170–189.
Implementation Issues in Product Line Scoping

Klaus Schmid and Cristina Gacek

Fraunhofer Institute for Experimental Software Engineering (IESE), Sauerwiesen 6, D-67661 Kaiserslautern, Germany
{schmid, gacek}@iese.fhg.de

Abstract. Often product line engineering is treated similarly to the waterfall model in traditional software engineering, i.e., the different phases (scoping, analysis, architecting, implementation) are treated as if they could be clearly separated and would follow each other in an ordered fashion. However, in practice strong interactions between the individual phases become apparent. In particular, how implementation is done has a strong impact on the economic aspects of the project and thus on how to adequately plan it. Hence, assessing these relationships adequately in the beginning has a strong impact on performing a product line project right. In this paper we present a framework that helps in exactly this task. It captures on an abstract level the relationships between scoping information and implementation aspects and thus allows us to provide rough guidance on implementation aspects of the project. We also discuss the application of our framework to a specific industrial project.
1 Introduction
Recently, the importance of an architecture-based approach to software reuse has been recognized as being a key driver to achieving high reuse levels. Product Line Software Engineering combines this with ideas from domain engineering [1, 2] like the up-front analysis of commonalities and variabilities present in the product line. A specific method for Product Line Software Engineering is PuLSE™, an approach developed at the Fraunhofer IESE [3]. This approach provides technologies for addressing all life cycle stages of a product line engineering project: scoping, domain analysis, architecture development, system implementation, and system maintenance and evolution. In this paper we will concentrate on the approach proposed for scoping reuse infrastructures in the context of product line engineering. This approach is called PuLSE-Eco [4]. PuLSE-Eco is based on the idea of using the business objectives in deriving the product line scope. That is, they are the key criteria for deciding whether something should be part of the scope or not. Thus, depending on the specific business objectives relevant to the company, the scope may vary, even for the same line of products. As part of our exposure to industrial projects we had to recognize that the different phases of product line development are strongly interrelated with each other. Already during scoping, some implementation level information is needed in order to adequately analyze the economics of the product line project that has to be planned. For example, when
comparing a generator-based approach with component-based reuse, up-front investment in the reuse infrastructure will be considerably higher, while costs for creating individual systems will be much lower, thus completely altering the economics of the product line project. This fact led us to analyze more deeply the relationship between the general product line situation and the implementation aspects. In this paper we present the results of this analysis in terms of a framework. This framework can be used to provide initial guidance to architects and implementors by proposing possible solutions to key implementation decisions based on characteristics of the product line.

The paper is structured as follows: in the remainder of this section we will discuss in more detail the type of relations between scoping and implementation we are looking at in this paper. In Section 2 we will describe our framework in some detail. First, we will discuss in Section 2.1 the environmental factors on which we base our framework, and in Section 2.2 the implementation aspects for which we want to derive some information. Section 2.3 will then describe in some detail the relationships we found. In Section 3 we will then apply this framework to a case study and describe the insights we could gain from this. Finally, Section 4 concludes.

1.1 Product Line Scoping

Scoping is a key step in any product line or domain engineering project. It determines which parts of the systems will actually be supported by reusable assets:
• If too much is supported in a reusable way, this may impair the overall pay-off, as engineering for reuse is usually more expensive than single-system development.
• If too little is supported, the situation may actually be even more complicated, as assets that do not support the necessary range of variability can only be used in a subset of the required products.

This shows that it is both very important to do scoping right and at the same time not easy to do it right. In particular, scoping impacts the following sub-decisions:
• Should we build a product line for this particular system family?
• What functionalities to include in the domain analysis?
• What functionalities should be directly (e.g., by a component of the architecture) supported by the reference architecture?
• For which functions should reusable lower level assets be implemented?

Often scoping is performed to analyze whether a certain domain should be supported in terms of a reuse infrastructure. This view is especially common with domain engineering approaches [1, 2, 5]. This is typically extended in a straightforward manner to product lines, i.e., the union of the functionality of the systems is regarded as "the domain". PuLSE-Eco builds on a somewhat different concept. Domains are conceptual domains that describe a certain type of functionality (e.g., report writing, internet data transfer, etc.); several of them may be relevant to a single system. This situation is depicted in Figure 1. Scoping then aims at identifying the particular functionality (e.g., in terms of features) that should be developed in a generic manner.
1.2 Scoping Impacts Architecture and Implementation

As described above, some important decisions for the reference architecture are already determined during scoping, strongly influencing the range of functionality supported by the reference architecture. While later activities may change this scope based on additional information and better insight gained during the latter stages, usually the basic scope will remain intact. Among the architecture decisions that are usually (implicitly) made during scoping are the following:
• The functionality explicitly supported by the reference architecture is roughly defined. This has the consequence that certain functionalities, although part of the product family, will not be explicitly supported by the reference architecture.
• The major variabilities that are explicitly supported by the reference architecture are identified. In particular this entails some minimum requirements on component interfaces (e.g., what functionality to encapsulate, whether the absence of components needs to be handled, as is the case for optional features).
• Similarly, some basic services are already identified that will be relevant to all the systems and will thus be of general importance.

Further, some constraints are identified which are not so obvious:
• Initial assumptions about the underlying architecture are made, since they determine cost and effort estimates.
• An initial guess about the major architectural style can be made, as the major domain characteristics are known (e.g., event-based architecture, layered architecture).
• Assumptions on the implementation schema need to be made (e.g., generator, component-based, etc.), as the economics of the product line, and thus the decision on what to support by reuse, strongly depend on this (e.g., for a generator-based implementation the up-front investment is much higher, but the costs of individual systems are lower than in the nominal case).

These decisions can be made that early because they largely depend on easy to elicit aspects of the product line like: number of systems, maturity of the domain, etc. However, while these assumptions are assumed to be incorporated in the product line reference architecture, in situations where one or more of these assumptions cannot be properly or efficiently accommodated in the reference architecture, the results of scoping must be revisited accordingly.
Figure 1. Relationship between domains and systems
1.3 Architecture and Implementation Impact Scoping

In Section 1.2 we discussed the impact of scoping on architecture and implementation definition. In this section, we will describe the feedback from architecture and implementation to scoping level decisions. One influence from architecting to scoping has just been illustrated: in order to derive the scope and the economic viability of a scope, assumptions about the implementation technology need to be made. However, some more obvious uses of an architecture are:
• An existing architecture (e.g. for a single system which shall be expanded towards a product line) needs to be taken into account, as it will influence the future architecture (cf. [6]).
• Even in cases where no system architecture exists, there will be inherent assumptions in defining the features of the system and what is exactly meant by a certain functionality. These include things like whether the feature definition encompasses all the computation down to the database access layer or whether intermittent features exist that perform some pre-processing. Initial decisions and structuring of interacting features of this type are called a conceptual architecture, as it provides a conceptual structuring of the system functionality. Such a conceptual structure – whether implicit or explicit – will always influence the resulting scope definitions, as it provides the vocabulary for defining the scope. Thus, we found it better to make this basis explicit.
• Especially for identifying the economics of the scope, it is important to make the conceptual architecture explicit, as an estimate of the amount of effort needed for a feature can only be given after it has been clarified what behavior shall be regarded as part of this feature.
• The way the product line is supposed to be implemented is very important in determining the economics of the scope definition and hence what the most appropriate scope is.

As described above, there are quite a few aspects that become visible during scoping which lead to serious constraints on the architecture and the implementation of a software product line. In turn, these constraints and the solutions chosen actually have a large impact on the economics of the product line situation and thus on how the scope should be derived. Consequently, they need to be made explicit as early as possible, just like the conceptual architecture. In order to do so, we developed a framework that makes the relationship between the high-level aspects (e.g., business goals), the architectural issues, and the chosen implementation method explicit. As product line development is still in its early days and only little validated information on this relationship is documented, we could not rely as much on experience as we would have liked to. However, quite a few relations can be deduced from technological knowledge in the field with sufficient confidence. Additionally, we have been able to partially validate our framework in the context of an industrial project. At this point we have to focus on this type of information for the framework presented here. The following benefits are expected of the framework:
• Help in performing the scoping activity right, as it allows relevant information to be deduced.
• Restrict (in an adequate way) the solution space for the implementation aspects of the product line, similar to the framework provided in [7] for architectures based on domain characteristics.
• Support validation of existing reference architectures.
• Be a starting point for aggregating further knowledge, as more and more lessons are learned.

The relationships are briefly summarized in Figure 2.

Figure 2. Relationship between scoping and implementation

1.4 Related Work

The idea that high-level inputs have important implications for architecture and implementation decisions is not new, to the contrary [7]. Consequently, we will not reiterate this type of material. Instead we will concentrate on PL aspects. Within the realm of scoping approaches we do not know at this point of any other approach which tries to explicitly fill the gap of determining which implementation assumptions should be used in the scoping process. Some assessment-based approaches to scoping [5, 8] use criteria like maturity of the domain directly to assist in scoping; however, they do not make the feedback loop that exists explicit. On the other hand, economic models [9] usually use the implementation characteristics we identified here in order to determine the reuse project economics. A framework for assisting an architectural style choice based on certain domain characteristics has been previously defined [10] and later refined [7]. That work provides a characterization of architectural styles based on a set of features focusing on control and data issues. It also provides rules of thumb for architectural style selection, considering the required characteristics of the architectural solution being built vs. the architectural styles' intrinsic characteristics. Although this work is quite useful, it addresses the construction of one-of-a-kind software system architectures only. These considerations are also relevant for the construction of product line reference software architectures, but are not enough. There are many product line environmental factors that must also be considered. Classifications of reuse approaches have existed for a while [11, 12]. These usually compare and contrast various approaches, yet fail to provide guidelines on their selection
depending on the situation. Similarly, an overview of variability mechanisms is given in [13]. In this paper we provide support for architectural style selection within product lines, as well as define clear guidelines for the resolution of reuse infrastructure implementation decisions. The underlying approach used for selecting a specific technology for a specific situation is similar to the selection of technology packages in the context of experience factories [14]. It is important to note that architectural style choice is just one of the issues relevant to the definition of a product line reference architecture [15]. Other considerations are less dependent on the product line environmental factors and hence not in the focus of this paper. These other considerations are addressed elsewhere [16, 17].
2 The Framework
In this section, we will describe the basic framework for relating business factors and implementation factors. Note, however, that we refrained from including all possible such relationships. Instead, we concentrated on those aspects that are particular to product lines, as other aspects like the relationship between domain aspects and architecture have been described elsewhere [7]. Below, we will first discuss the business (or environmental) factors and the reasons for choosing these particular factors. Then, we will similarly discuss the implementation level aspects. In a third subsection we will then describe the relationships we found.

2.1 Environmental Factors

There are quite a few characteristics of a product line project that can be easily surveyed with the help of domain experts early on with little effort and that have a quite strong impact on the technical solutions one should be aiming for. Here we will describe the ones we could identify so far, both from the literature [5, 1] as well as from our experience and thorough study of the topic area. Below we will discuss each of the factors we could so far identify as being relevant. Note that the scales we propose are clearly subjective. This is not assumed to be a problem since only the magnitude of the values is relevant.

Number of independent features. How many features relevant to distinguishing the various members of the product line can be identified? The measure is relative to the overall size of the functional area, meaning that larger functional areas can also be expected to have more features without changing the value of the measure. The scale has the values low, medium, high (e.g., for a domain estimated at 100 kLoC, 10 features would be low, while 100 would be high).

Structure of the product line. This captures whether variabilities among systems are dominated by optionality or alternatives. Variabilities can basically be classified in two types: optionalities, i.e., features which can be present or absent, and alternatives, i.e., features for which various alternative behaviors can exist, but which have to be present in principle. Usually, both of them will exist, thus we are looking here at
the predominant type of variability. The scale is: optional, neutral, alternative (e.g., 20% optionalities, 80% alternatives would still be captured as alternative).

Variation degree. What percentage of a system is expected to be covered by the core (i.e., the overall common) part? Scale: low, medium, high (low ~ 40%, high ~ 80%).

Number of products. What is the number of products the product line is expected to contain? Scale: low, medium, high (e.g., high ≥ 12).

Complexity of feature interactions. This describes how interrelated features are on average. Two features are called interrelated if one modifies the behavior of the other (i.e., the functionality is not just the sum of the two). This is again measured as low, medium, high.

Feature size. The size of a feature is basically the amount of code relevant to implementing it. It is measured on a scale ranging from low (approx. one procedure/method/object) to high (a complete subsystem).

Performance requirements. The performance requirements (memory, runtime) are measured relative to what is not easy to provide. Thus, the performance requirements are called strict if they are expected to be a high-priority design rationale to squeeze out the required performance level. Otherwise (i.e., it is obvious that the required performance levels can be achieved) the performance requirements are called loose.

Coverage. This basically measures to what extent the potential feature combinations will actually occur. For example, if 100 optional features exist then the domain contains 2^100 possible combinations; if actually only a small number of products (10) will be developed then the coverage is obviously low. Conversely for high coverage.

Maturity/Stability. If the domains relevant to the product line are not expected to change and are well understood (e.g., as shown by standardization) then they can be regarded as being of high maturity/stability. Scale: low, medium, high.

Entry points. Three different starting situations can be distinguished for the product line project (cf. [18]):
• Independent PL: a new product line is developed from scratch.
• Integrating PL: the product line is introduced while some products are already under development.
• Reengineering-driven PL: the core product line assets are reengineered from legacy systems.

Openendedness. This describes the range of functionality that may be relevant to the systems now and in the future (i.e., can it be expected that the currently identified set of features will also cover future systems well, or is there an expectation that future product line members may need other features?). As opposed to maturity/stability, this
doesn’t address the change in the features that are relevant to a domain, but with respect to the domains that are relevant to the system family. Scale: open, neutral, bounded. 2.2 Implementation Aspects Similarly to our discussion of the environmental factors as input to the framework, we are now going to describe the aspects we want to derive values for as results from our framework. So far we could identify four aspects for which recommendations (i.e., constraints) can be derived from the input factors described above. Each of the four aspects can be seen as independent in the sense that the values for the various factors can be independently derived and used. Further, especially the values for the first three categories should be seen as describing a continuum, within which only some extreme have been identified similar to the environmental factors. 2.2.1 Type of reuse infrastructure What kind of technology should be the basis for the resulting infrastructure construction (i.e., what is the aimed-at result)? At this point we distinguish three different categories. Software Platform. While in the literature many different meanings are given to the term software platform, we use it here explicitly to refer to groups of assets that only address the commonalities within a product line (i.e., the assets contained in the software platform will be incorporated in every product in the product line). Product Line/Reference Architecture. As opposed to a platform a reference architecture provides the concepts for the complete products, including also variabilities, as part of the architectural description. Domain-Specific Language (DSL). A DSL is provided that completely abstracts from all implementation details and covers all characteristics that may be relevant to systems. 2.2.2 Variability Representation This describes how variability is mapped to code from an implementation point of view. Pure Code. The code assets are expected to contain only code. No parametrization except for the selection of code components and the run-time parameters are expected to exist. Parametrization. In this case some compile-time parameters are expected to exist. Conditional compilation is one approach for implementing this approach. Template. In this case, assets are more generalized and code fragments may be reassembled in a rather sophisticated way during compilation. Frames or aspect-oriented programming are examples of this kind of approach.
DSL. Here, the full capabilities of a language can be used to describe variabilities in the domain. (Note that, as opposed to the previous subsection, here we use DSLs not as a means to describe the coverage, but as a means to describe variability.)

2.2.3 Level of Detailedness

There are many different interpretations of what level of detail is implied by the term product line architecture [15]. Similarly, many different interpretations can be given to the term reuse infrastructure. Here, we simplify the discussion by distinguishing only three main levels of detailedness of the reuse infrastructure. (Below, code is meant to include also templates and parametrized code.)

Reference Architecture Description. Only a reference architecture description is developed but no code is actually produced.

Core Code. On top of the architecture description, code components to cover the core functionality are developed.

Full Range. Here, most code components (except for system-specific parts) are actually developed.

2.2.4 Architectural Concept

Unlike the other implementation aspects, the architectural concept cannot be precisely defined based only on the environmental factors described earlier. The style choice for the overall product line reference architecture and its subsystems also depends on external factors. The most influential external factors that must be taken into account are domain drivers, pre-existing architectures of systems and/or subsystems that are legacy or already under development, as well as the architectural assumptions made while scoping was being performed. How domain drivers impact the architectural style choice has already been described by Mary Shaw and Paul Clements [10]. During the construction of a product line reference architecture, just as for single system architectures, their approach for selecting an architectural style should always be taken into account. Additionally, if the entry point is a reengineering-driven or integrating product line, pre-existing architectures or architectural parts do heavily influence the architectural style choice. The reason for this is twofold: the existing architecture already has its own style [19], and, while using existing parts, architectural mismatches must be either avoided or handled appropriately [20]. The impacts suggested by the environmental factors discussed in Section 2.1 will be discussed shortly.

2.3 The Relationships

Above we described both environmental factors and implementation aspects for software product lines. However, as we discussed previously, there is a close relationship between the identified factors. The relationships we could identify so far are summarized in this section. They had to be derived mostly from our background knowledge on
the subject matter, as, to our knowledge, no comparable studies exist so far. Consequently, these relationships should be regarded as being of a preliminary nature. Each of the relationships is described below in the form of a prototypical situation (in terms of the environmental factors) in which the corresponding value for the implementation factors should be chosen. As a real situation will usually not correspond exactly to a certain prototypical situation, the results may not correspond exactly to one of the prototypical results. Where we found that several values can occur in the prototypical situation and warrant the same solution, we give a range value1..value2; if a certain factor is not relevant to the decision, we mark it with a '*'. A further discussion on how to use the results of applying the relationships in a real situation is given in Section 3.

2.3.1 Type of reuse infrastructure
Software Platform. A software platform, especially in the sense of a platform that supports several product lines (e.g., cellular phones as well as wired phones), is usually a high investment that is only worthwhile if a large number of products will be supported by it. For such an investment to pay off, a high stability and maturity of the domain is also required.
1. Number of independent features: *
2. Structure of the product line: *
3. Variation degree: low..medium
4. Number of products: high
5. Complexity of feature interactions: low
6. Feature size: *
7. Performance Requirements: loose
8. Coverage: low
9. Maturity/Stability: high
10. Entry Points: *
11. Openendedness: *

Reference Architecture. Here, the idea is to get a single line of products under control. This is worthwhile if the overall variations are restricted, so that a common architecture can realistically be provided (i.e., all systems have a similar structure).
1. Number of independent features: *
2. Structure of the product line: *
3. Variation degree: *
4. Number of products: *
5. Complexity of feature interactions: low..medium
6. Feature size: medium..high
7. Performance Requirements: *
8. Coverage: low..medium
9. Maturity/Stability: *
10. Entry Points: *
11. Openendedness: *
DSL. A complete coverage of all software that might possibly be relevant to the product line is only worthwhile if the problem domain is clearly bounded and stable, and if a large number of products is expected, so that a positive return on investment can be achieved.
1. Number of independent features: low..medium
2. Structure of the product line: *
3. Variation degree: medium..high
4. Number of products: high
5. Complexity of feature interactions: medium..high
6. Feature size: low..medium
7. Performance Requirements: loose
8. Coverage: medium..high
9. Maturity/Stability: medium..high
10. Entry Points: independent
11. Openendedness: bounded

2.3.2 Variability Representation
Pure Code. This is meaningful in particular if features do not have a large impact on the implementation of other features and if only a small subset of the feature combinations will actually be implemented.
1. Number of independent features: *
2. Structure of the product line: opt
3. Variation degree: *
4. Number of products: *
5. Complexity of feature interactions: low
6. Feature size: *
7. Performance Requirements: *
8. Coverage: low
9. Maturity/Stability: *
10. Entry Points: *
11. Openendedness: *

Parametrization. This allows more variabilities and interactions to be represented, but has the downside of requiring more effort and being more difficult to get right, which implies that the domain(s) should be rather stable and mature.
1. Number of independent features: *
2. Structure of the product line: alt
3. Variation degree: *
4. Number of products: *
5. Complexity of feature interactions: low..medium
6. Feature size: *
7. Performance Requirements: *
8. Coverage: low..medium
9. Maturity/Stability: medium..high
10. Entry Points: *
11. Openendedness: *
Templates. This allows for an even larger degree of variation, but is also more difficult to do and requires a larger number of systems to pay off.
1. Number of independent features: *
2. Structure of the product line: *
3. Variation degree: medium..high
4. Number of products: medium..high
5. Complexity of feature interactions: medium..high
6. Feature size: low..medium
7. Performance Requirements: loose
8. Coverage: medium..high
9. Maturity/Stability: medium..high
10. Entry Points: independent, integrating
11. Openendedness: bounded

DSL. Again, a larger degree of variability can be represented, but a larger effort is also required. As the underlying economics are the same as in Section 2.3.1, the same situational characteristics apply.

2.3.3 Level of Detailedness

How much up-front investment is meaningful depends mainly on how strongly the systems will overlap and thus on how much of the investment can be recovered over the various systems. Scoping itself will then be used to refine this a priori expectation.

Reference Architecture Description. One will usually restrict oneself to this solution if the situation is rather unclear and only few systems are expected over which to recover the up-front investment.
1. Number of independent features: *
2. Structure of the product line: *
3. Variation degree: low
4. Number of products: low
5. Complexity of feature interactions: *
6. Feature size: *
7. Performance Requirements: *
8. Coverage: low
9. Maturity/Stability: low..medium
10. Entry Points: *
11. Openendedness: *
Core Code. If variability is high, so that covering all combinations will not pay off, but the core can be expected to be sufficiently stable, this approach can be assumed to be adequate.
1. Number of independent features: *
2. Structure of the product line: *
3. Variation degree: low..medium
4. Number of products: low..medium
5. Complexity of feature interactions: *
6. Feature size: *
7. Performance Requirements: *
8. Coverage: low
9. Maturity/Stability: medium
10. Entry Points: *
11. Openendedness: *

Full Range. If variability over the whole range of product features is sufficiently stable, under control, and covered by products, this approach can be regarded as most adequate.
1. Number of independent features: *
2. Structure of the product line: *
3. Variation degree: medium..high
4. Number of products: medium..high
5. Complexity of feature interactions: low..medium
6. Feature size: *
7. Performance Requirements: *
8. Coverage: medium..high
9. Maturity/Stability: medium..high
10. Entry Points: *
11. Openendedness: *

Table 1 gives a summary of the relationships we described above.

2.3.4 Architectural Concept

As mentioned previously (see Section 2.2.4), the directions to be considered by the reference architecture are dictated both by environmental factors (Section 2.1) and by external factors such as domain drivers. In this section we will simply describe how each of the environmental factors contributes to architectural decisions.
Table 1: Summary of the Framework Relationships
(Factors: 1 Number of independent features, 2 Structure of the product line, 3 Variation degree, 4 Number of products, 5 Complexity of feature interactions, 6 Feature size, 7 Performance requirements, 8 Coverage, 9 Maturity/Stability, 10 Entry points, 11 Openendedness; '*' = not relevant)

Type of reuse infrastructure:
  Platform:     1 *; 2 *; 3 low..med; 4 high; 5 low; 6 *; 7 loose; 8 low; 9 high; 10 *; 11 *
  Ref. Arch.:   1 *; 2 *; 3 *; 4 *; 5 low..med; 6 med..high; 7 *; 8 low..med; 9 *; 10 *; 11 *
  DSL:          1 low..med; 2 *; 3 med..high; 4 high; 5 med..high; 6 low..med; 7 loose; 8 med..high; 9 med..high; 10 indep.; 11 bounded

Representation of variability:
  Code:         1 *; 2 opt; 3 *; 4 *; 5 low; 6 *; 7 *; 8 low; 9 *; 10 *; 11 *
  Param.:       1 *; 2 alt; 3 *; 4 *; 5 low..med; 6 *; 7 *; 8 low..med; 9 med..high; 10 *; 11 *
  Templ.:       1 *; 2 *; 3 med..high; 4 med..high; 5 med..high; 6 low..med; 7 loose; 8 med..high; 9 med..high; 10 indep./integr.; 11 bounded
  DSL:          1 low..med; 2 *; 3 med..high; 4 high; 5 med..high; 6 low..med; 7 loose; 8 med..high; 9 med..high; 10 indep.; 11 bounded

Level of detailedness:
  Arch. Repr.:  1 *; 2 *; 3 low; 4 low; 5 *; 6 *; 7 *; 8 low; 9 low..med; 10 *; 11 *
  Core Code:    1 *; 2 *; 3 low..med; 4 low..med; 5 *; 6 *; 7 *; 8 low; 9 med; 10 *; 11 *
  Full Range:   1 *; 2 *; 3 med..high; 4 med..high; 5 low..med; 6 *; 7 *; 8 med..high; 9 med..high; 10 *; 11 *

The weight given to specific factors, both environmental and external, varies from situation to situation, depending on the major risks and priorities at hand. Consequently, we cannot prescribe here how the combination of differing influencing factors should be dealt with. A method such as ATAM [21] should be used for resolution support.
Number of Independent Features. When the number of independent features is high, it is best to use architectural styles in which components are self-contained and ignore the overall context in which they are being used. Some of the recommended styles are event based, blackboard, C-2, database centric, pipe and filter, and communicating processes.

Structure of the Product Line. Having alternatives or not has no impact on the architectural style choice. The impact is only on making sure that the various alternatives implement the same interface and that their underlying assumptions do not clash with the rest of the architecture. The use of layering or abstract data types may help by localizing considerations on the various alternatives. The existence of optional items does have a stronger architectural impact. Components and connectors interacting with the optional parts must be able to handle both their presence and their absence. The best way to deal with optional architectural items is to use styles where components are self-contained and ignore the existence of others, by assuming that required services will somehow be performed elsewhere. Examples of these styles are event based, blackboard, C-2, database centric, pipe and filter, and communicating processes.

Variation Degree. If the variation degree is low, the architecture for every system instance will have to have many components and connectors added as system-specific parts. Hence, one must already plan for ease of component and connection addition. The styles best suited for this purpose are event based, blackboard, C-2, database centric, pipe and filter, and communicating processes.

Number of Products. Integrating instance-specific parts has inherent costs. Differing architectural styles may provide lower instance-specific integration costs, yet higher set-up costs (e.g., blackboard). The larger the number of products the product line is expected to contain, the better the justification for adopting such architectural styles.

Complexity of Feature Interactions. Unless this is extremely low, styles like event based, blackboard, C-2, database centric, pipe and filter, and communicating processes are not very useful.

Feature Size. This factor has no architectural impact as long as features are properly encapsulated in the architecture.

Performance Requirements. These are domain drivers that were considered by Shaw and Clements. For considerations on this aspect please refer to their work [10].

Coverage. This factor plays only a very subjective role in the architectural context. If coverage is low, one must be careful not to over-engineer the architectural solution.

Maturity/Stability. A highly mature domain implies that there is a known (set of) solution(s) to the problem. A known architectural solution should then be used for the reference architecture, while still considering the constraints at hand.
A mature domain composed of subdomains that are still evolving can best be supported by layering or abstract data types. A less mature domain implies that components will evolve, new ones will be added, and existing ones removed. The best solutions for this are the styles event based, blackboard, C-2, database centric, pipe and filter, and communicating processes.

Entry Points. Both integrating and reengineering-driven product lines must consider pre-existing assets. The best ways of dealing with pre-existing parts are using layers, simply using the pre-existing architecture, implementing some wrapping scheme [19], and/or using instrumented connectors [22].

Openendedness. The values open and neutral imply that components will evolve, new ones will be added, and existing ones removed. The best solutions for this are the styles event based, blackboard, C-2, database centric, pipe and filter, and communicating processes.
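The guidance above can be read as a set of per-factor hints. The following minimal sketch (our own illustration, not part of the framework itself; the factor names and style lists are taken from the text, while the encoding and function names are assumptions) shows how such hints could be collected for a concrete situation, leaving conflicting hints to be resolved, e.g., with ATAM [21]:

# Illustrative sketch only: collect the per-factor architectural style hints of
# Section 2.3.4 for one project situation so that conflicts become visible.

DECOUPLED = ["event based", "blackboard", "C-2", "database centric",
             "pipe and filter", "communicating processes"]

def style_hints(factors):
    hints = {}                                    # influencing factor -> suggested styles
    if factors.get("number of independent features") == "high":
        hints["independent features"] = DECOUPLED
    if factors.get("structure of the product line") == "opt":
        hints["optional items"] = DECOUPLED
    if factors.get("complexity of feature interactions") not in ("low", None):
        hints["feature interactions"] = ["avoid " + s for s in DECOUPLED]
    if factors.get("maturity/stability") == "low":
        hints["immature domain"] = DECOUPLED
    if factors.get("openendedness") in ("open", "neutral"):
        hints["openendedness"] = DECOUPLED
    if factors.get("entry points") in ("integrating", "reengineering-driven", "integration"):
        hints["pre-existing assets"] = ["layering", "wrapping", "instrumented connectors"]
    return hints

for factor, styles in style_hints({"structure of the product line": "opt",
                                   "openendedness": "open",
                                   "entry points": "integration"}).items():
    print(factor, "->", ", ".join(styles))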
3 Applying the Framework
In the previous section, we concentrated on a description of the relationships between the environmental factors and the implementation aspects we found. As we discussed earlier, these relationships can be read both ways: the factors are identified in the scoping phase, but have an impact on the implementation; on the other hand, the implementation aspects need to be taken into account already during scoping. Here, we concentrate on the first direction and discuss how a description of a project environment can be used to derive a first estimate of the implementation aspects. The situation we discuss is derived from a real industrial project. A brief assessment of the situation led to the following characterization of the product line situation:1
1. Number of independent features: medium
2. Structure of the product line: opt
3. Variation degree: high
4. Number of products: low..medium
5. Complexity of feature interactions: low..medium
6. Feature size: low..medium
7. Performance Requirements: loose
8. Coverage: low
9. Maturity/Stability: medium
10. Entry Points: integration
11. Openendedness: open
1. In the meantime, the authors also applied this framework jointly with project personnel to other projects, leading to similar results.
As we discussed before, the domain description is not part of this characterization, as we concentrate here on those aspects that are specific to the product line approach, as opposed to those that are specific to the type of systems.

3.1 Implementation Aspects

When discussing the various relationships in Section 2.3, the aspects type of reuse infrastructure, variability representation, and level of detailedness were described in terms of prototypical situations. To determine the type of solution that is most appropriate for our example, we use a simple similarity measure. The solution with the highest similarity is the one considered most appropriate for the situation. The fact that the different values we identified only mark specific points in a continuum shows up in situations where two values have a similar rating; in these situations we assume that the "true" value lies somewhere in between.

The similarity measure we use here is defined very simply. We add up a score over all attributes and divide the result by the number of non-'*' attributes. If a relationship says nothing about an attribute ('*'), the value 0 is used; likewise if the value range in the situation and the value range in the relationship do not overlap. If the value range in the situation is fully contained in the range given in the relationship, 1 is added; if there is only an overlap, 0.5 is added. This approach leads to the similarity values given in Table 2.

When we look at the results, we can make several interesting observations. For the type of reuse infrastructure, the reference architecture has the highest value. This fits well with our initial assumptions about the product line project, where, based on the initial contacts, we had assumed (without the framework) that this would be appropriate. For representing variability, code has by far the highest value, with a still very high value for parametrization. Again this fits well with our initial ideas about the product line. Additionally, the hint towards parametrization (although we had not originally considered it) is regarded as an interesting idea that deserves further investigation, while the other options are clearly inappropriate. With respect to the level of detailedness, we can see that both core code and full range received similar values. This hints at the most appropriate resolution lying somewhere in between these two values. Again, this fits well with our intuition about the project: we strive towards rather complete reusability of the code components, while some components may just not be appropriate.

Having identified this information, we have a good starting point for performing a more reliable scoping, as we now have an informed estimate of how implementation will be done.
Table 2: Similarity Values for Case Study

Type of reuse infrastructure:    Platform 0.625     Ref. Arch. 0.8222   DSL 0.6
Representation of Variability:   Code 1             Param. 0.8          Templ. 0.6666   DSL 0.45
Level of Detailedness:           Arch. Repr. 0.625  Core Code 0.75      Full Range 0.7
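To make the measure concrete, the following sketch (ours, not the authors' implementation; the helper names and the ordinal encoding are assumptions) scores a project situation against a prototypical situation exactly as described above: 1 for containment, 0.5 for mere overlap, 0 otherwise, normalised by the number of non-'*' factors. Because the description above leaves some scoring details open, it will not necessarily reproduce the values in Table 2 exactly.

# Sketch of the similarity measure described in Section 3.1 (not the authors' tool).
# Factor values are modelled as ranges on an ordered scale; '*' means "not relevant".

ORDER = {"low": 0, "medium": 1, "high": 2}       # ordinal scale used by most factors

def as_range(value):
    """Turn 'low', 'low..medium', or a categorical value into a set of points."""
    if ".." in value:
        lo, hi = value.split("..")
        return set(range(ORDER[lo], ORDER[hi] + 1))
    return {ORDER.get(value, value)}             # categorical values compare by equality

def score(situation, prototype):
    """Add 1 for containment, 0.5 for overlap, 0 otherwise; divide by non-'*' factors."""
    total, relevant = 0.0, 0
    for factor, proto_value in prototype.items():
        if proto_value == "*":
            continue
        relevant += 1
        s, p = as_range(situation[factor]), as_range(proto_value)
        if s <= p:
            total += 1.0
        elif s & p:
            total += 0.5
    return total / relevant if relevant else 0.0

# Example: part of the case-study characterization against the Reference Architecture prototype.
situation = {"interactions": "low..medium", "feature size": "low..medium", "coverage": "low"}
ref_arch  = {"interactions": "low..medium", "feature size": "medium..high", "coverage": "low..medium"}
print(round(score(situation, ref_arch), 3))      # 0.833 with this scoring variant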
This estimate is crucial because, for example, the distinction between domain-specific languages, templates, and pure code has a strong impact on the overall economics of the product line project. Similarly, the amount of code that will be shared has a strong impact. While it would be possible to derive values for each of the aspects from scratch whenever the situation arises, it is very helpful to use this framework, as it saves much time and allows a neutral judgement. (In the example above we would not always have drawn exactly these conclusions without the framework, and wherever the framework provided a different hint, it was worthwhile to consider the framework's proposal in depth.)

3.2 Architectural Concept

Within this same project, we are currently working towards deriving a product line reference architecture. The facts that the structure of the product line is dominated by optionalities, that it is composed of domains of medium maturity, and that it is open-ended suggest that styles such as event based, blackboard, C-2, database centric, pipe and filter, and communicating processes be considered. Since the number of products expected to be in the product line is low to medium, a blackboard style is not really an option. Based on the domain drivers we have also been able to discard the usage of a pipe and filter style. This product line is being built using a couple of pre-existing systems. We are currently in the process of retrieving their existing architectures. We already know that the overall systems are layered, but it is not yet clear which other styles are present where. Additionally, the decision on which parts to reengineer, which ones to redevelop, and which ones to incorporate as-is has not yet been made. All the factors above and the current product line requirements will be taken into account as we apply PuLSE-DSSA [16] and ATAM [21] to derive the reference architecture. This clearly identifies how environmental factors impact architectural decisions, yet also stresses that they are only part of the process.
4 Conclusions and Future Work
Because there is little existing experience with product lines, guidance on their key decisions is needed. In this paper, we highlighted the fact that information elicited during scoping also has clear impacts on architectural and implementation decisions, and we have shown that the converse relation exists as well. Our most important contribution is a framework to support the resolution of implementation and architectural decisions based on environmental factors elicited while scoping is performed. We have illustrated how to use this framework with a real-life case study. This case study showed our approach to be appropriate, providing a good fit and saving a substantial amount of time. As future work we shall apply this framework to other environments, allowing us to further validate and improve the concepts as needed [23]. Additionally, we shall provide a more formal mechanism for information capture and exchange between the scoping and architecting efforts, as well as specify configuration management rules and policies to be applied in this context.
5 Bibliography

[1] Reuse-Driven Software Processes Guidebook, Software Productivity Consortium Services Corporation, Technical Report SPC-92019-CMC, 1993.
[2] Organization Domain Modeling (ODM) Guidebook, Version 2.0, Software Technology for Adaptable, Reliable Systems (STARS), Technical Report STARS-VC-A025/001/00, 1996.
[3] J. Bayer, O. Flege, P. Knauber, R. Laqua, D. Muthig, K. Schmid, T. Widen, and J.-M. DeBaud. "PuLSE: A methodology to develop software product lines," in Proceedings of the Symposium on Software Reusability '99 (SSR'99), May 1999.
[4] J.-M. DeBaud and K. Schmid. "A systematic approach to derive the scope of software product lines," in Proceedings of the 21st International Conference on Software Engineering (ICSE 99), 1999.
[5] Department of Defense - Software Reuse Initiative, Domain Scoping Framework, Version 3.1, Volume 2, Technical Description, 1995.
[6] J.-M. DeBaud and J.-F. Girard. "The Relation between the Product Line Development Entry Points and Reengineering," in Proceedings of the Workshop on Development and Evolution of Software Architecture for Product Families, Las Palmas de Gran Canaria, Spain, Feb. 1998.
[7] L. Bass, P. Clements, and R. Kazman. Software Architecture in Practice, Addison Wesley, 1998.
[8] Reuse Adoption Guidebook, Software Productivity Consortium Services Corporation, 1993.
[9] J. Poulin. Measuring Software Reuse. Addison Wesley, 1997.
[10] M. Shaw and P. Clements. "A Field Guide to Boxology: Preliminary Classification of Architectural Styles for Software Systems," in Proceedings of COMPSAC 1997, Washington, DC, August 1997.
[11] C. Krueger. "Software Reuse," ACM Computing Surveys, vol. 24, no. 2, June 1992, pp. 131-183.
[12] J.-M. DeBaud. The Construction of Software Systems using Domain-Specific Reuse Infrastructures, Ph.D. Dissertation, Georgia Institute of Technology, Atlanta, GA, USA, 1996.
[13] I. Jacobson, M. Griss, and P. Jonsson. Software Reuse: Architecture, Process and Organization for Business Success. ACM Press, 1997.
[14] A. Birk and F. Kröschel. A Knowledge Management Lifecycle for Experience Packages on Software Engineering Technologies. IESE-Report No. 007.99/E, February 1999.
[15] D. Perry. "Generic Architecture Descriptions for Product Lines," in Proceedings of the Workshop on Development and Evolution of Software Architecture for Product Families, Las Palmas de Gran Canaria, Spain, Feb. 1998, pp. 51-56.
[16] J. Bayer, O. Flege, and C. Gacek. "Creating Product Line Architectures," in Proceedings of the Third International Workshop on Software Architectures for Product Families (IWSAPF-3), March 2000.
[17] J. Bayer, C. Gacek, D. Muthig, and T. Widen. "PuLSE-I: Deriving Instances from a Product Line Infrastructure," to appear in Proceedings of the 7th Annual IEEE International Conference on the Engineering of Computer Based Systems (ECBS), April 2000.
[18] J. Bayer, O. Flege, P. Knauber, R. Laqua, D. Muthig, K. Schmid, and T. Widen. PuLSE(TM): Product Line Software Engineering. Fraunhofer Institute for Experimental Software Engineering, IESE-Report No. 020.99/E, 1999.
[19] J. Bayer, J.-F. Girard, M. Würthner, J.-M. DeBaud, and M. Apel. "Transitioning Legacy Assets to a Product Line Architecture," in Proceedings of the 7th European Software Engineering Conference (ESEC) / 7th ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE), 1999.
[20] C. Gacek. Detecting Architectural Mismatches During System Composition, Ph.D. Dissertation, Center for Software Engineering, University of Southern California, Los Angeles, CA 90089-0781, USA, 1998.
[21] R. Kazman, M. Barbacci, M. Klein, S.J. Carriere, and S.G. Woods. "Experience with Performing Architecture Tradeoff Analysis," in Proceedings of the 21st International Conference on Software Engineering (ICSE 99), 1999, pp. 54-63.
[22] R. Balzer. "An Architectural Infrastructure for Product Families," in Proceedings of the Workshop on Development and Evolution of Software Architecture for Product Families, Las Palmas de Gran Canaria, Spain, Feb. 1998, pp. 158-160.
[23] K.-D. Althoff, A. Birk, S. Hartkopf, W. Müller, M. Nick, D. Surmann, and C. Tautz. "Managing Software Engineering Experience for Comprehensive Reuse," in Proceedings of the 11th Software Engineering and Knowledge Engineering Conference (SEKE 11), 1999, pp. 10-19.
Requirements Classification and Reuse: Crossing Domain Boundaries

Jacob L. Cybulski1 and Karl Reed2

1 Dept of Information Systems, University of Melbourne, Parkville, Vic 3052, Australia
[email protected]
2 School of Computer Science and Computer Engineering, La Trobe University, Bundoora, Vic 3083, Australia
[email protected]
Abstract. A serious problem in the classification of software project artefacts for reuse is the natural partitioning of classification terms into many separate domains of discourse. This problem is particularly pronounced when dealing with requirements artefacts that need to be matched with design components in the refinement process. In such a case, requirements may be described with terms drawn from a problem domain (e.g. games), whereas designs are described with terms characteristic of the solution domain (e.g. implementation). The two domains have not only distinct terminology, but also different semantics and use of their artefacts. This paper describes a method of cross-domain classification of requirements texts with a view to facilitating their reuse and their refinement into reusable design components.
1 Introduction
Reuse of development work-products in the earliest phases of the software life-cycle, e.g. requirements engineering and architectural design, is recognised to be beneficial to software development [1]. In the past, claims were made that early reuse improves resource utilisation [24] and produces higher-quality software products [20]. It was also suggested that early reuse encourages a software development environment that facilitates a more systematic approach to software reuse [14,31]. Such an environment will most often be automated, leading to tool support in the early phases of a project [34] and hence yielding improved reuse in the subsequent development phases [34,43]. The objective of requirements reuse is to identify descriptions of (large-scale) systems that could be reused either in their entirety or in part, with minimum modifications, thus reducing the overall development effort! Informal requirements specifications are commonly used in the early phases of software development. Such documents are usually produced in natural language (such as English), and in spite of many problems in their handling, they are still regarded as one of the most important means of communication between developers and their clients [11]. The lack of formality and structure, and the ambiguity, of natural
Table 1. Approaches to Requirements Reuse (text-, knowledge-, and process-oriented)

Approach                        Researchers/Groups
parsing specifications          Allen & Lee [3]
natural languages processing    Naka [33]; Girardi & Ibrahim [19]
use of hypertext                Kaindl, Kaiya [21, 22]; Garg & Scacci [18]
finding repeated phrases        Aguilera & Berry [2]
assessment of similarity        Fugini, et al. [5, 17]
taxonomic representation        Wirsing, et al. [42]; Johnson & Harris [20]
logic-based specifications      Puncello, et al. [37]
analogical reasoning            Maiden & Sutcliffe [30]
knowledge-based systems         Lowry [26]; Tamai [41]; Borgida [6]; Lubars & Harandi [28, 29]; Zeroual [44]
analysis patterns               Fowler [13]
domain mapping                  Simos, et al. [39]; Cybulski & Reed [10]
domain analysis                 Prieto-Diaz [35]; Frakes, et al. [15]; Kang, et al. [23]; Simos [40]
reuse-based process             Kang, et al. [24]
meta- and working models        Castano, Bubenko [7, 8]
wide-spectrum reusability       Lubars [27]
family of requirements          Lam [25]
reuse-based maintenance         Basili [4]
CASE-support of early reuse     Poulin [34]
language makes requirements documents difficult to represent and process, not to mention their effective reuse. To overcome this problem, requirements statements need to be processed in a unique fashion to accommodate reuse tasks, which include analysis of existing requirements, their organization into a repository of reusable requirements artefacts, and their synthesis into new requirements documents. Several methods, techniques, tools and methodologies were suggested as useful in supporting these tasks. There are three major approaches to requirements reuse (see Table 1), i.e. text processing, knowledge management and process improvement. The first approach focuses on the text of requirements, its parsing, indexing, access and navigation. Such
approaches rely heavily on the natural language grammars and lexicons, statistical text analysers, and hypertext. The second approach aims at elicitation, representation and use of knowledge contained in requirements documents, and reasoning about this captured knowledge. These methods commonly focus on the modelling of a problem domain, they utilise knowledge acquisition techniques and elaborate modelling methods. Sometimes they also utilise knowledge-based systems and inference engines. The last approach aims at changing development practices to embrace reuse. We believe that the most successful method of requirements reuse should address all three above-mentioned aspects of requirements handling. Informal requirements do not impose any rigid syntax or semantics of their texts. Their form is natural and easily understood by all of the stakeholders, despite their potential complexity. Their content is very rich, however, it may also be ambiguous, imprecise and incomplete, and hence confusing. We, therefore, suggest that reuse of informal requirements texts should not overly rely on the informal documents' grammar or their semantics, which may both be very difficult to deal with. Instead we suggest that the methods applicable to informal requirements documents should focus on their lexical and structural properties. A method embracing these principles, RARE (Reuse-Assisted Requirements Engineering), was therefore proposed and subsequently implemented in a prototype tool, IDIOM (Informal Document Interpreter, Organiser and Manager). In what follows, we illustrate the RARE IDIOM approach by identifying commonalities in two simple "requirements documents" for two common games of chance. This will provide both a short description of the RARE method and will demonstrate the power of an IDIOM tool.
2 The RARE Method
The RARE method centres on the tasks of requirements recording, analysis and refinement. It is based on the commonly accepted understanding that the main purpose of requirements engineering is to prepare a high-quality requirements specification document.1 RARE's key principle is to promote the use of requirements analysis techniques that lead to the discovery and addition of reuse information to the requirements documents, with a view to enhancing the subsequent development stages. RARE suggests a focus on three aspects of requirements analysis.
1. Identification and replacement of new requirements with those drawn from existing documents that have already been processed and refined into a collection of designs (Cf. Fig. 1a).
2. Identification of requirements for the new software product that can be refined with reusable design components, created in the process of developing some other software system (Cf. Fig. 1b).
3. Identification of similarities between requirements drawn from specifications in the same application domain or software system, indicating either reuse potential,
1 Which means the document is clear, precise, unambiguous and consistent.
Fig. 1. Identification of reuse in RARE: (a) similarity of a new requirement to an existing requirement in the problem domain that has already been refined into a design component; (b) potential refinement of a new requirement with a design component created for another system; (c) affinity between new requirements and design components, indicating potential reuse, conflicting components, or redundancy.
conflict or redundancy. Similarity cross-referencing may contribute to the improved quality of the resulting designs and ultimately of the entire software product (Cf. Fig. 1c). To address these aspects, we propose to utilise:
− cross-domain requirements classification, to detect requirements similarity, and
− affinity of requirements to design artefacts, to support the refinement of requirements.
We will explain the need for cross-domain classification in the following sections.
3 Keywords or Facets
The main aim of RARE is to establish a collection of informal requirements text that could be reused from one project to another, and a collection of formal designs that could be used to refine them. Together, requirements and designs form a repository of reusable artefacts. For each requirement statement there may be a number of design artefacts perceived as useful in its refinement. IDIOM's role is to assist designers in
selecting candidate artefacts that could be combined to form a single design, and in rejecting those artefacts found to be inappropriate for further refinement. In this work, we seek to process requirements text with a view to identifying requirements, representing their features, and characterising them by reference to domain concepts. Such analysis may involve statistical analysis of the terminology used in requirements documents, parsing and skimming of text, representation and manipulation of the knowledge introduced in these documents, etc. In our work, we focus on nominating lexically and syntactically interesting concepts to become members of a list of domain terms characteristic of each requirement.
Table 2. Dice-game requirements characterised with domain terms (weighed by relevance)
(The Dice-Game System; term relevance: Term 1 = 0.4, Term 2 = 0.3, Term 3 = 0.2, Term 4 = 0.1)

D1. The system shall allow players to specify the number of dice to "roll".            specify | the number of | dice | player
D2. The player shall then roll dice.                                                   roll | dice | player | then
D3. Each die represents numbers from 1 to 6.                                           represent | dice | number | each
D4. The dice are assigned their values randomly.                                       assign | value | dice | random
D5. Every time the dice are rolled, their values are assigned in a random fashion.     assign | value | random | every
D6. If the total of both dice is even, the user shall win.                             total | even | win | if
D7. Otherwise the user looses.                                                         loose | user | otherwise
Table 2 and Table 3 show a number of simple requirements statements drawn from the Dice Game (D1 to D7) and the Coin Game (C1 to C7) systems. Each statement can be characterised with a list of standardised domain terms, which were selected from their text. Some of the terms are specific to the domain of game playing, e.g. "dice" and "coin", "player", "roll" and "flip", "win" and "loose". Others are common to many application domains, e.g. "specify" and "depict", "the number of", "represent", "random", etc. The terms are ordered by their relevance to the meaning of their respective requirement. Some researchers show that the terms characterising the requirements may be used to determine the similarity between requirements documents [5, 17, 19]. Finding the relevant classification terms is easy and the automation of the process is also straightforward [38]. In our example, the documents found to be similar to Dice and Coin games could also describe some gaming systems. The Dice Game requirements will match requirements documents that refer to "dice", "rolling", "players",
"random", "winning" and "loosing". The Coin Game requirements, on the other hand, will match the documents that make statements about "coins", "heads" and "tails", "flipping", but also "players", "random", "winning" and "loosing". It is also clear that there exist similarities between Coin and Dice gaming systems as well, as they both make use of some common terms, e.g. "players", "random", "winning" and "loosing". To facilitate such document matching, we could use many commercially available information retrieval systems. Such systems include full-text databases such as Knowledge Engineering Texpres2 or askSam Professional3, or text analysis programs such as Concordance4 and SPSS TextSmart5. General-purpose document cataloguing systems such as dtSearch6 and ConSearch (Readware Intelligence Warehouse)7 can also be used very effectively. We could also utilise some of the web search engines, e.g. AltaVista Discovery8 and Ultraseek Server9. Some document-classifying software are available for retrieval of both disk and web-based texts.10 Table 3. Coin-game requirements characterised with domain terms (weighed by relevance) Term list: Relevance:
Table 3. Coin-game requirements characterised with domain terms (weighed by relevance)
(The Coin-Game System; term relevance: Term 1 = 0.4, Term 2 = 0.3, Term 3 = 0.2, Term 4 = 0.1)

C1. The system shall depict two coins that can be "flipped".                            depict | coin | two | flip
C2. At each turn the player flips both coins.                                           flip | coin | player | both
C3. Each coin has two sides, the head and the tail.                                     coin | side | head | tail
C4. The coins are placed on their randomly selected sides.                              place | coin | side | random
C5. Every time the coins are flipped, their face values are assigned in a random fashion.   assign | value | random | every
C6. If both coins have identical face values, the user shall win.                       identical | face | win | if
C7. Otherwise the user looses.                                                          loose | user | otherwise
Use of document retrieval systems leads to a number of problems. The first problem is that such systems commonly use a document rather than its parts as a 2
See http://www.ke.com.au/texpress/index.html. askSam Systems, askSam Professional, http://www.askSam.com/. 4 R. J.C. Watt, Concordance, http://www.rjcw.freeserve.co.uk/. 5 SPSS, TextSmart, http://www.spss.com/software/textsmart/. 6 DT Software, Inc., dtSearch, http://www.dtsearch.com/. 7 ConSearch, http://www.readware.com/. 8 DEC AltaVista, Discovery, http://discovery.altavista.com/. 9 Infoseek, Ultra Seek Server, http://software.infoseek.com/. 10 For example, dtSearch and AltaVista Discovery. 3
retrieval entity. In requirements reuse, where requirements documents tend to be very large, it is an individual requirement or a group of related requirement statements that are of interest to a reuser. Another problem is that a simplistic lexical matching of requirements statements is doomed to fail, as it ignores the semantic similarity of words used in document indexing. A knowledge-based approach could be more successful here, as a large knowledge base of common-sense facts could assist in assessing the semantic similarity of related concepts. A simpler approach could rely on the use of a domain thesaurus able of cross-referencing similar terms. Terms extracted from the requirements text of two gaming systems (see Table 2 and Table 3) clearly illustrate the potential semantic disparity between requirements documents.11 Since both documents belong to the same domain of discourse, and both were produced according to the same requirements template, we could expect their characteristic terms to be similar across the documents, e.g. D5-C5 or D7-C7. However, some of the corresponding requirements have completely different term characterisation, e.g. D1-C1, D2-C2 or D4-C4. We could use of a thesaurus to deal with these disparities, e.g. by defining some of the terms as synonyms, e.g. "specify" and "depict", "dice" and "coin", "roll" and "flip", "assign" and "place". None of these "synonyms", however, make any sense lexically or semantically. This is because the equivalence between these concepts exist only in terms of a functional design of both games, whereby "dice" and "coin" are game's "instruments"; whereas "rolling" and "flipping" or "placing" are actions assigning a value to the "instrument". To capture the functional aspects of requirements statements, we need to explicitly model the required system context, its function, the data manipulated and the constraining methods. The simplistic keyword-based approach to text classification may not be appropriate, in spite of its many advantages. Methods more suitable to the task of functional requirements classification are those which can identify and categorise different aspects of classified artefacts - these include the attribute-value, enumerative and faceted classification techniques [12]. Although these methods provide similar usability and effectiveness [16], we believe that the faceted classification is most appropriate for our purpose. Its classification procedure allows a natural, multi-attribute view of artefact characteristics, it is simple to enforce, its search method is easy to implement, and its storage facilities can utilise a standard database technology [36]. Table 4 and Table 5 show the faceted classification of requirements drawn from the Dice and Coin game examples. The classification scheme uses four facets, i.e. function, data, method and environment, which all define terms from a design domain. The example clearly illustrates that comparing requirements in terms of their functional facets values (from the design domain) could be more effective than by matching the keywords found in the body of these requirements (from the problem domain). According to the faceted classification, the requirements that were 11
Such disparity in small requirements documents may not present any problems to an experienced analyst, who would immediately identify an opportunity to abstract the required system functions into a description of a more general problem. For large sets of requirements, tool-support becomes essential!
previously found to be dissimilar, i.e. D2-C2 and D4-C4, can now be determined to be very much alike. As faceted classification also allows fuzzy-matching of terms, so that requirements statements can also be compared for their semantic distance, even if their classifying terms are not identical [36], e.g. D1 with C1. Table 4. Faceted classification of "Dice Game" requirements (with facet weights) Function 0.4
Data 0.3
Method 0.2
Enviro n. 0.1
D1. The system shall allow players to specify the number of dice to “roll”.
define
multiplicity
elaboration
user
D2. The player shall then roll dice.
assign
value
random
instrum ent
D3. Each die represents numbers from
define
value
iteration
instrum ent
D4. The dice are assigned their values randomly.
assign
value
direct
instrum ent
D5. Every time the dice are rolled, their values are assigned in a random fashion.
assign
value
direct
instrum ent
D6. If the total of both dice is even, the user shall win.
add
collection
iteration
success
D7. Otherwise the user looses.
end
boolean
choice
failure
Facet name: Weight: Dice Game Requirements
1 to 6.
Table 5. Faceted classification of "Coin Game" requirements (with facet weights) Function 0.4
Data 0.3
Method 0.2
Environ. 0.1
output
value
direct
user
C2. At each turn the player flips both coins.
assign
value
random
instrument
C3. Each coin has two sides, the head and the tail.
any
value
any
instrument
C4. The coins are placed on their
any
value
random
instrument
assign
value
direct
any
end
boolean
choice
success
end
boolean
choice
failure
Facet name: Weight: Coin Game Requirements C1. The system shall depict two coins that can be "flipped".
randomly selected sides. C5. Every time the coins are flipped, their face values are assigned in a random fashion. C6. If both coins have identical face values, the user shall win. C7. Otherwise the user looses.
The advantage of the functionally based faceted classification is that requirements can be compared regardless of terminology used in their expression. The classifying terms can be drawn from the solution domain of discourse. This means that requirements can be classified in the same manner as design artefacts (see Table 6) and hence can be compared against them to facilitate the process of requirements refinement. By visual inspection,12 we can see that faceted descriptors of requirements D2 and D4 match those of artefacts A8 and A9. Since requirements and designs are significantly different in their form and nature their similarity is rooted in the design process, hence, we will refer to it as their affinity (or tendency to combine). Table 6. Faceted classification of design artefacts (with facet weights) Design Artefacts
Function 0.4
Data 0.3
Method 0.2
Environ. 0.1
A1. array
define
collection
iteration
machine
A2. number
define
number
direct
machine
A3. string
define
string
iteration
machine
A4. command
control
data
direct
user
A5. read
input
data
query
user
A6. write
output
data
report
user
A7. set dimension
define
multiplicity
elaboration
machine
A8. set numeric value
assign
number
direct
machine
A9. random number generator
calculate
number
random
machine
A10. sum of numbers
add
number
iteration
machine
A11. compare number
compare
number
choice
machine
A12. quit program
end
data
direct
machine
A13. you've won dialogue box
output
boolean
report
success
A14. you've failed dialogue box
output
boolean
report
failure
One of the disadvantages of this approach lies in the difficulty of constructing requirements descriptors, which are no longer based on the body of their text. Another, a harder problem, is in the allocation of facet values to the requirement descriptor, which may necessitate making certain design decisions, and is hence going beyond a simple classification process. Both of these issues imply the need for the manual classification of requirements by a skilled analyst or a designer, thus, incurring significant labour costs and demanding considerable time to complete the task - the problems commonly associated with faceted classification in general [32]. It should be noted, however, that simply changing the classification scheme would not
12
A critical reader may wish to calculate the similarity of these descriptors using the formulae defined in section 5. See also Cybulski and Reed [10] for the details of calculating affinities between requirements and designs.
eliminate our problems, which find their source in the cross-domain nature of the classification process!
4 Domain Mapping
To resolve our cross-domain classification problems, we developed the concept of a domain-mapping thesaurus. The thesaurus classifies all of the terms commonly found in the lexicon of a problem domain into the facets of a solution domain. Such term pre-classification helps us in automating the classification of requirements, which use this pre-classified terminology. The motivation behind this approach stems from our belief that the problem domain lexicon is much smaller as compared with the space of requirements that use its terminology. It means that the effort of classifying such a lexicon would be far smaller than the effort of classifying the great many requirements statements themselves. Table 7. Domain-mapping thesaurus terms (with sense strengths) Thesaurus Game Domain
Weighed Facet Value Senses Data
0.3
1. card
value
2. coin
value
3. deal
Function
start
0.4
Environ.
0.1
1.0
instrument
0.5
1.0
instrument
0.5
game
0.3
instrument
0.5
1.0
4. dice
Method
iteration value
1.0
value
1.0
0.2
0.7
5. flip
assign
1.0
6. loose
end
0.7
failure
1.0
7. player
interact
0.7
user
1.0
8. roll
assign
1.0
value
1.0
random
0.5
9. shake
arrange
1.0
value
0.7
random
0.9
10. shuffle
arrange
1.0
value
0.7
random
0.9
11. win
end
0.7
success
1.0
General Domain
Function
0.4
Data
0.3
Method
0.2
Environ.
0.1
12. assign
assign
1.0
value
1.0
direct
0.5
pair
1.0
sequence
0.7
direct
0.5
user
0.7
15. each
collection
0.5
iteration
1.0
16. every
collection
0.3
iteration
1.0
17. if
boolean
0.3
choice
1.0
18. number
number
1.0
13. both 14. depict
output
1.0
random
0.5
19. otherwise
boolean
0.3
20. random
choice
1.0
random
1.0
elaboration
0.4
21. represent
define
1.0
22. specify
define
1.0
23. the number of
count
0.7
multiplicity
1.0 choice
1.0
add
1.0
collection
0.7
iteration
0.4
pair
1.0
sequence
0.7
value
1.0
24. then 25. total 26. two 27. user 28. value
interact
0.7
user
0.7
user
1.0
The thesaurus was designed as a repository of problem-domain concept descriptors sensing (associating or linking) facet values drawn from the solution domain. Table 7 shows an example of such a thesaurus. It describes two problem domains, i.e. the domain of software games and a general application domain. The "game" domain defines terminology specific to the objects and actions observed in card (e.g. "card", "deal" or shuffle"), dice (e.g. "dice", "roll" or "shake"), coin (e.g. "coin" and "flip"), and other generic games (e.g. "player", "win" or "loose"). The "general" domain lists common-sense concepts that occur in many different types of requirements documents (these will include such terms as "represent", "assign" or "user"). Each thesaurus term senses several facet values, which provide the interpretation of the term, give its semantics, provide hints on its design and implementation, and at the same time classify it. For instance, a "coin" represents a value in a game and it is the game's instrument. To "flip" the coin means to assign its value in a random fashion. The game can end either by "winning" or "loosing", which represent the user's "success" or "failure", etc. The thesaurus terms sense each of their facet values with different strength, which is measured as a number in the interval [0, 1]. The sense strength indicates the relevance of a given facet value to the interpretation of a problem domain term. In our experimental work, a domain expert manually allocated the strength values. It is, however, more practical to automatically calculate these values based on the frequency of senses use in already classified requirements. The domain-mapping thesaurus is used to translate problem domain keywords into solution domain facets. Consider a single requirement for the dice game system (see Table 2), i.e. "The system shall allow players to specify the number of dice to “roll”" (D1). The requirement statement was initially assigned a list of problem domain terms, which include "specify", "the number of", "dice" and "player". Each of these terms senses a few facet values from the solution domain (cf. Table 7), e.g. "specify" senses the facet values function="define", environment="user" and method="elaboration", whereas "dice" senses the facet values data="value" and environment="instrument". Since each of the problem domain terms senses several facet values, the mapping of the terms from two domains is not one-to-one. To resolve the mapping, we take advantage of a number of factors stored and available to the domain-mapping process, i.e.
• the relevance of the terms extracted from the body of the requirements text to the classification of this requirement (w_i); and
• the strength of each term's sense (s_{i,j}) as defined by the domain-mapping thesaurus.

The two factors can be combined to calculate the facet value priority for the classification, according to the following formula [9]:

    p_{i,j} = \alpha \times w_i + \beta \times \frac{s_{i,j}}{\sum_k s_{i,k}}        (1)

where:
  t_i     - the i-th problem domain term of some requirement
  w_i     - relevance of the i-th term in some requirement
  s_{i,j} - sense strength of facet value j in term i
  p_{i,j} - priority of a facet value f_{i,j} for a facet j in term i
  \alpha  - importance of the relevance factor (0.9)
  \beta   - importance of the sense strength (0.1), with \alpha + \beta = 1.
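A minimal sketch of how formula (1) can be applied is given below. This is our own illustration, not the authors' IDIOM implementation: the thesaurus excerpt is abridged and its sense strengths are only loosely based on Table 7, and the function and variable names are assumptions.

# Sketch of the facet-value priority computation of formula (1).

ALPHA, BETA = 0.9, 0.1          # importance of term relevance vs. sense strength

# thesaurus[term][facet] = (facet value, sense strength) -- abridged, illustrative values
THESAURUS = {
    "roll":   {"function": ("assign", 1.0), "data": ("value", 1.0), "method": ("random", 0.5)},
    "dice":   {"data": ("value", 1.0), "environment": ("instrument", 0.5)},
    "player": {"function": ("interact", 0.7), "environment": ("user", 1.0)},
    "then":   {"method": ("choice", 1.0), "data": ("sequence", 0.7)},
}

def facet_priorities(terms_with_relevance):
    """Rank candidate facet values for one requirement (cf. Table 8)."""
    ranking = {}                                   # facet -> list of (priority, value)
    for term, w in terms_with_relevance:
        senses = THESAURUS.get(term, {})
        total_strength = sum(s for _, s in senses.values()) or 1.0
        for facet, (value, s) in senses.items():
            p = ALPHA * w + BETA * s / total_strength
            ranking.setdefault(facet, []).append((round(p, 2), value))
    for facet in ranking:
        ranking[facet].sort(reverse=True)
    return ranking

# D2. "The player shall then roll dice." with the term relevances of Table 2.
print(facet_priorities([("roll", 0.4), ("dice", 0.3), ("player", 0.2), ("then", 0.1)]))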
(Schematic: each term t_i with relevance w_i senses the facet values f_{i,1} ... f_{i,m} with strengths s_{i,1} ... s_{i,m}.)
Table 8. Resolution of requirements classification terms (ranked by facet value priority) Requirements (with selected keywords)
Function
Data
Method
Environment
1. The system shall allow players to specify the number of dice to “roll”. (specify, the number of, dice, player)
define count interact
0.39 0.41 multiplicity 0.33 elaboration 0.38 user 0.25 instrument 0.21 0.31 value user 0.15 0.13
2. The player shall then roll dice. (roll, dice, player, then)
assign interact
0.40 value 0.22 value
0.40 random 0.34 choice
0.38 instrument 0.30 0.19 user 0.24
3. Each die represents numbers from 1 to 6. (represent, dice, number, each)
define
0.46 value number collection
0.34 iteration 0.28 0.12
0.16 instrument 0.30
4. The dice are assigned their values randomly. (assign, value, dice, random)
assign
0.40 value value value
0.40 direct 0.37 random 0.25
0.38 instrument 0.21 0.19
5. Every time the dice are rolled, their values are assigned in a random fashion. (assign, value, random, every)
assign
0.40 value value collection
0.40 direct 0.37 random 0.11 iteration
0.38 0.28 0.17
6. If the total of both dice is even, the user shall win. (total, even, win, if)
add end
0.41 collection 0.22 boolean
0.39 iteration 0.11 choice
0.38 success 0.17
0.24
7. Otherwise the user looses. (loose, user, otherwise)
end interact
0.40 boolean 0.31
0.20 choice
0.26 failure user
0.42 0.33
The priorities are then used to rank all possible classifications of each requirement (see Table 8). In those cases when a number of facet values share the highest priority any one of them will be used in further processing. The requirements engineer may either accept the proposed classification terms (shown in italic), change the suggested prioritization, or to allocate to the facet a term, which has not been previously selected by the domain-mapping process, but which is defined in the respective facet. It should be noted that resolution of facet values may lead to the loss of potentially valuable query terms, which could result in the unnecessary narrowing of a set of design artifacts suitable for refinement (we call such queries narrow). An alternative approach is to use all keyword senses in the construction of an affinity query (we call such queries broad). The latter method, though less accurate, can be used in batch (no feedback) classification of requirements. We used both narrow and broad queries, and we found them to be valuable in the process of requirements refinement.
5 Requirements Similarity
So far, we have developed all instruments necessary to compare requirements with design artifacts. We are now able to characterize requirements in terms of their problem domain attributes, to automatically translate these terms into a solution domain, and to compare such requirements using a faceted classification method. For completeness reasons we provide the reader with details of our faceted classification (a working model of the RARE method is also available as an Excel spreadsheet that can be obtained from the web).13 Our classification method defines four facets (for this small example), i.e. function (Fig. 2), data (Fig. 3), method (Fig. 4) and environment (Fig. 5) (some of these figures can be found at the end of this paper). Each facet defines a collection of classification terms or values, and also defines a conceptual distance measure between facet values. A small conceptual distance value indicates the facet values to be very close, conversely, large distance represents the two facet values to be far apart. During the development of the facet structure, facet values are represented as an associative network, or a connected weighted graph, of inter-related concepts. In this model, the measure of closeness between two concepts can be defined as an 13
http://www.dis.unimelb.edu.au/staff/jacob/seminars/DiceGame.xls.
associative distance, i.e. the weight attached to each network link. Any two concepts x and y in a facet f have an associative distance A_f(x, y) \in [0, 1], with A_f(x, x) = 0. For concepts that are not directly associated, the length of a path leading from one concept to another determines their proximity [36]. In such a case, the distance between two distant facet values can be defined as the length of the shortest path between the nodes in the facet value network, i.e.

    D_f(x_1, x_n) = 0                                                    if x_1 = x_n
    D_f(x_1, x_n) = \min \sum_{i=1}^{n-1} A_f(x_i, x_{i+1})              over paths with A_f(x_i, x_{i+1}) \neq 0 for all 1 \le i < n
    D_f(x_1, x_n) = \sum_{x, y \in F_f} A_f(x, y)                        otherwise        (2)

    d_f(q, a) = D_f(q, a) / \max_{x, y} D_f(x, y)        (3)
where:
  D_f(x, y) - distance between x and y in facet f
  d_f(x, y) - normalised distance between x and y in f
  A_f(x, y) - associative distance between x and y in f
  F_f       - the set of values x_i in facet f
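The computation behind formulas (2) and (3) is essentially an all-pairs shortest-path problem over the facet's associative network. The following sketch is our own illustration (the facet values and link weights below are invented and do not reproduce Fig. 2):

# Sketch of the conceptual distance of formulas (2)-(3).
import itertools

def conceptual_distances(values, assoc):
    """All-pairs shortest-path distances D_f; unreachable pairs get the sum of all weights."""
    big = sum(assoc.values())                      # the 'otherwise' case in formula (2)
    dist = {(x, y): (0.0 if x == y else big) for x in values for y in values}
    for (x, y), w in assoc.items():                # associative links are symmetric here
        dist[(x, y)] = dist[(y, x)] = min(dist[(x, y)], w)
    for k, x, y in itertools.product(values, repeat=3):   # Floyd-Warshall relaxation
        if dist[(x, k)] + dist[(k, y)] < dist[(x, y)]:
            dist[(x, y)] = dist[(x, k)] + dist[(k, y)]
    return dist

def normalised(dist):
    """Formula (3): divide by the largest distance within the facet."""
    longest = max(dist.values())
    return {pair: d / longest for pair, d in dist.items()}

facet = ["assign", "arrange", "calculate", "output"]
links = {("assign", "arrange"): 2, ("assign", "calculate"): 2, ("calculate", "output"): 4}
d = normalised(conceptual_distances(facet, links))
print(round(d[("arrange", "output")], 2))          # path: arrange -> assign -> calculate -> output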
start
Inability to reach one facet value from another would be represented by the length of the longest possible path in a facet network. After calculating the distances between all the facet values, we represent the facet structure as a conceptual distance matrix (effectively a path matrix, see Fig. 2). To counter the facet size differences, this distance is also normalised to the interval of [0, 1] by dividing the conceptual distance between facet values by the maximum path length within the facet.
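A minimal sketch of equations (2) and (3), computing conceptual distances over a toy associative network via all-pairs shortest paths and normalising them to [0, 1]; the facet values reuse names from the "function" facet, but the weights are invented for illustration.

```python
import itertools

# Toy associative network for a small facet: A_f(x, y) weights in [0, 1].
# The weights below are illustrative, not the published facet data.
A = {("add", "calculate"): 0.2, ("calculate", "count"): 0.2, ("do", "execute"): 0.3}
values = ["add", "calculate", "count", "do", "execute"]

# Distance assigned to unreachable pairs: the sum of all associative weights (eq. 2).
unreachable = sum(A.values())

# Initialise D_f with direct associations (symmetric, zero on the diagonal).
D = {(x, y): 0.0 if x == y else unreachable for x in values for y in values}
for (x, y), w in A.items():
    D[(x, y)] = D[(y, x)] = w

# Floyd-Warshall relaxation yields the shortest-path distances of eq. (2).
for k, i, j in itertools.product(values, repeat=3):
    D[(i, j)] = min(D[(i, j)], D[(i, k)] + D[(k, j)])

# Normalising by the largest distance in the facet gives d_f of eq. (3).
max_D = max(D.values())
d = {pair: dist / max_D for pair, dist in D.items()}

print(round(d[("add", "count")], 3))  # reachable via "calculate"
print(round(d[("add", "do")], 3))     # unreachable -> normalised to 1.0
```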
[Fig. 2 presents the 17 x 17 conceptual distance matrix for the "function" facet over the values add, any, arrange, assign, calculate, compare, control, count, define, do, end, execute, input, interact, none, output, start; distances range from 0 (identical values) to 6 (e.g. the unreachable value "none").]
Fig. 2. Conceptual distance matrix for the "function" facet
Requirement descriptors are used as the basis for the construction of artefact vectors and queries. The match between a query and an artefact is defined in terms of their similarity/affinity, given as a weighted geometric distance metric, i.e.
$$
\mathrm{dist}(q, a) = \sqrt{\frac{\sum_i \left( w_i \times d_i(q_i, a_i) \right)^2}{\sum_i w_i^2}} \qquad (4)
$$

$$
\mathrm{sim}(q, a) = 1 - \mathrm{dist}(q, a) \qquad (5)
$$
where:
dist(q, a) - normalised distance between "q" and "a"
sim(q, a) - similarity between artefacts "q" and "a"
q_i - i-th facet value in the query vector
a_i - i-th facet value in the artefact vector
w_i - importance of the i-th facet
d_i(a, b) - normalised distance between "a" and "b" in facet "i"
Consider the two sets of user requirements for the previously discussed Dice game and a Coin game (see Table 9). Assume that the dice-game requirements have already been refined and implemented, hence they have all already been classified (see Table 2 and Table 4). The coin-game requirements define a new system to be implemented (see Table 3); these requirements are mapped into design facets using a domain-mapping thesaurus (see Table 5). During the coin-game requirements affinity analysis, the new requirements are matched against all stored artifacts, which include both design and requirements artifacts. Knowing that the two sets of requirements match almost line-by-line (C1-D1, C2-D2, etc.), we would expect the best matches and identification of reuse opportunities to fall along the matrix diagonal. And so they do (see Table 9) - a clear indication of their reuse value!
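The affinity computation of equations (4) and (5) can be sketched as follows; the facet weights and normalised distances are illustrative values, not those of the dice/coin example.

```python
import math

# Normalised facet distances d_i(q_i, a_i) between a query requirement q and a
# stored artefact a, plus facet importance weights w_i (illustrative values).
facet_distances = {"function": 0.2, "data": 0.0, "method": 0.5, "environment": 0.1}
weights = {"function": 1.0, "data": 1.0, "method": 0.5, "environment": 0.5}

def dist(facet_distances, weights):
    """Weighted geometric distance between query and artefact vectors (eq. 4)."""
    num = sum((weights[f] * d) ** 2 for f, d in facet_distances.items())
    den = sum(w ** 2 for w in weights.values())
    return math.sqrt(num / den)

def sim(facet_distances, weights):
    """Similarity as the complement of the normalised distance (eq. 5)."""
    return 1.0 - dist(facet_distances, weights)

print(round(sim(facet_distances, weights), 3))  # close to 1.0 for similar artefacts
```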
Table 9. Requirements affinity and opportunity to reuse

Coin Requirements (New):
C1 The system shall depict two coins that can be "flipped".
C2 At each turn the player flips both coins.
C3 Each coin has two sides, the head and the tail.
C4 The coins are placed on their randomly selected sides.
C5 Every time the coins are flipped, their face values are assigned in a random fashion.
C6 If both coins have identical face values, the user shall win.
C7 Otherwise the user looses.

Dice Requirements (Reused):
D1 The system shall allow players to specify the number of dice to "roll".
D2 The player shall then roll dice.
D3 Each die represents numbers from 1 to 6.
D4 The dice are assigned their values randomly.
D5 Every time the dice are rolled, their values are assigned in a random fashion.
D6 If the total of both dice is even, the user shall win.
D7 Otherwise the user looses.

[The body of Table 9 is a 7 x 7 matrix of affinity values (between 0.301 and 1.000) relating each coin requirement C1-C7 to each dice requirement D1-D7; the highest affinities fall along the C1-D1 ... C7-D7 diagonal.]
Requirement C3 does not have a clear match with any of D1-D7; it matches the majority of artifacts in the repository (dashed line in Table 9). Since both its "function" and "method" facets are undefined (see Table 5), they result in small conceptual distances from every other facet value, hence leading to a high affinity with every design artifact. This vagueness results from our selection of characteristic terms for requirement C3 ("side", "head", "tail"), which are not in the thesaurus.

[Fig. 3 presents the conceptual distance matrix for the "data" facet over the values any, boolean, character, collection, data, matrix, multiplicity, none, number, pair, single, string, value, variable, vector; distances range from 0 to 5.]

Fig. 3. "Data" facet
Such situations are easy to detect and may need manual correction (by defining a new domain term or by rephrasing the requirement). Similar problems may also occur due to the non-functional nature of a requirement, which may lead to certain facets being left unfilled (also C3).

[Fig. 4 presents the conceptual distance matrix for the "method" facet over the values any, choice, direct, elaboration, iteration, none, query, random, report, sequence; distances range from 0 to 5.]

Fig. 4. "Method" facet
[Fig. 5 presents the conceptual distance matrix for the "environment" facet over the values any, failure, game, instrument, machine, money, none, result, success, user; distances range from 0 to 5.]
Fig. 5. "Environment" facet

Another interesting phenomenon can be observed by studying the similarity of requirements D2, D4 and D5 vs. all other coin-game requirements. The results indicate that the descriptors of these requirements overlap (or that the requirements are redundant). Such cases can be detected in a simple way, i.e. by comparing the requirements of a single document with one another and determining their relative similarity (we do not conduct this analysis here due to the shortage of space). Note also that, because the classification of D6 emphasizes its calculation aspect rather than the game's exit condition (as is the case with D7), C6 unexpectedly matches D7. Requirements C6 and D7 both refer to the end of a game, though with different consequences for the player. These cases of "misclassification" could be dealt with by allowing multiple classification vectors per requirement, in which case D6 could be classified with two vectors, one covering its calculation aspects and another its exit condition, thus improving the match!
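A sketch of that multiple-vector idea, assuming each requirement may carry several classification vectors and that a pairwise similarity (such as sim() above) has already been computed for each pairing; all numbers are illustrative.

```python
# Pairwise similarities between the classification vector of a query requirement
# (e.g. C6) and the vectors of a stored requirement (e.g. D6 classified by two
# vectors: its calculation aspect and its exit condition). Values are illustrative.
pairwise_similarities = [
    0.60,  # C6 vector vs. D6 "calculation" vector
    0.88,  # C6 vector vs. D6 "exit condition" vector
]

def best_match(pairwise_similarities):
    """With multiple classification vectors per requirement, keep the best pairing,
    so a requirement can be matched on whichever aspect it shares with the query."""
    return max(pairwise_similarities)

print(best_match(pairwise_similarities))  # 0.88
```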
6
Conclusions
Reuse of software requirements leads to the effective reuse of all software work-products derived from these requirements downstream in the development process. Requirements reuse can, therefore, provide significant gains in development productivity and in the quality of the resulting software product. In software development, the semantics of requirements can be found either in the knowledge of their problem domain or in the designs derived from these requirements. Either of these approaches could be used to determine requirements similarity. In this paper, we proposed a method of requirements classification that takes advantage of design-based semantics for requirements. Our approach suggests combining keyword-based and faceted classifications of requirements and designs. The keywords are (efficiently) extracted from the body of the requirements text, hence they represent the requirements characterisation in the problem domain. With the use of a domain-mapping thesaurus, keywords are then
translated into the design terms of a faceted classification. Facets are subsequently used to determine the affinity between requirements and design artefacts, which can be used as a basis for assessing requirements similarity and for reuse-based refinement of requirements documents. Cross-domain classification is an integral part of the RARE method of requirements engineering, proposed by the authors, and supported by IDIOM, a prototype software tool. Our experimental studies (to be reported elsewhere) show that IDIOM offers some superiority over simple document/text classification and retrieval software, such as web search engines, which have recently been promoted by other researchers as suitable to facilitate software reuse. We have also conducted a number of RARE IDIOM usability experiments, which have drawn our attention to the features required of IDIOM should the tool be considered for further commercial exploitation. Overall, we are satisfied that the RARE IDIOM classification method addresses the problem of the domain "boundary" in requirements processing. The proposed method of domain-mapping complements other approaches to the classification, matching and retrieval of requirements and designs, thus leading to enhanced reuse of requirements and to their refinement into reusable design components.
References
1. Agresti, W. W. and McGarry, F. E.: The Minnowbrook Workshop on Software Reuse: A summary report. In W. Tracz (ed): Software Reuse: Emerging Technology. Computer Society Press: Washington, D.C. (1988) 33-40
2. Aguilera, C. and Berry, D. M.: The use of a repeated phrase finder in requirements extraction. Journal of Systems and Software 13, 3 (1990) 209-230
3. Allen, B. P. and Lee, S. D.: A knowledge-based environment for the development of software parts composition systems. In 11th International Conference on Software Engineering. Pittsburgh, Pennsylvania: IEEE Computer Society Press (1989) 104-112
4. Basili, V. R.: Viewing maintenance as reuse-oriented software development. IEEE Software, (1990) 19-25
5. Bellinzona, R., Fugini, M. G., and Pernici, B.: Reusing specifications in OO applications. IEEE Software 12, 2 (1995) 65-75
6. Borgida, A., Greenspan, S., and Mylopoulos, J.: Knowledge representation as the basis for requirements specifications. IEEE Computer, (1985) 82-90
7. Bubenko, J., Rolland, C., Loucopoulos, P., and DeAntonellis, V.: Facilitating "Fuzzy to Formal" requirements modelling. In The First International Conference on Requirements Engineering. Colorado Springs, Colorado: IEEE Computer Society Press (1994) 154-157
8. Castano, S. and De Antonellis, V.: The F3 Reuse Environment for Requirements Engineering. ACM SIGSOFT Software Engineering Notes 19, 3 (1994) 62-65
9. Cybulski, J.: Application of Software Reuse Methods to Requirements Elicitation from Informal Requirements Texts, PhD Thesis Draft, La Trobe University, Bundoora (1999)
10. Cybulski, J. L. and Reed, K.: Automating Requirements Refinement with Cross-Domain Requirements Classification. Australian Journal of Information Systems, Special Issue on Requirements Engineering (1999) 131-145
11. Davis, A. M.: Predictions and farewells. IEEE Software 15, 4 (1998) 6-9
12. DoD: Software Reuse Initiative: Technology Roadmap, V2.2, Report http://sw-eng.fallschurch.va.us/reuseic/policy/Roadmap/Cover.html, Department of Defense (1995)
13. Fowler, M.: Analysis Patterns: Reusable Object Models. Menlo Park, California: Addison-Wesley (1997)
14. Frakes, W. and Isoda, S.: Success factors of systematic reuse. IEEE Software 11, 5 (1994) 15-19
15. Frakes, W., Prieto-Diaz, R., and Fox, C.: DARE: domain analysis and reuse environment. Annals of Software Engineering 5, (1998) 125-141
16. Frakes, W. B. and Pole, T. P.: An empirical study of representation methods for reusable software components. IEEE Transactions on Software Engineering 20, 8 (1994) 617-630
17. Fugini, M. G. and Faustle, S.: Retrieval of reusable components in a development information system. In R. Prieto-Diaz and W. B. Frakes (eds): Advances in Software Reuse: Selected Papers from the Second International Workshop on Software Reusability. IEEE Computer Society Press: Los Alamitos, California (1993) 89-98
18. Garg, P. K. and Scacchi, W.: Hypertext system to manage software life-cycle documents. IEEE Software 7, 3 (1990) 90-98
19. Girardi, M. R. and Ibrahim, B.: A software reuse system based on natural language specifications. In 5th Int. Conf. on Computing and Information. Sudbury, Ontario, Canada (1993) 507-511
20. Johnson, W. L. and Harris, D. R.: Sharing and reuse of requirements knowledge. In 6th Annual Knowledge-Based Software Engineering Conference. Syracuse, New York, USA: IEEE Computer Society Press (1991) 57-66
21. Kaindl, H.: The missing link in requirements engineering. ACM SIGSOFT Software Engineering Notes 18, 2 (1993) 30-39
22. Kaiya, H., Saeki, M., and Ochimizu, K.: Design of a hyper media tool to support requirements elicitation meetings. In Seventh International Workshop on Computer-Aided Software Engineering. Toronto, Ontario, Canada: IEEE Computer Society Press, Los Alamitos, California (1995) 250-259
23. Kang, K., Cohen, S., Hess, J., Novak, W., and Peterson, S.: Feature-Oriented Domain Analysis (FODA) Feasibility Study, Technical Report CMU/SEI-90-TR-21, Software Engineering Institute, Carnegie Mellon University (1990)
24. Kang, K. C., Cohen, S., Holibaugh, R., Perry, J., and Peterson, A. S.: A Reuse-Based Software Development Methodology, Technical Report CMU/SEI-92-SR-4, Software Engineering Institute (1992)
25. Lam, W.: A case study of requirements reuse through product families. Annals of Software Engineering 5, (1998) 253-277
26. Lowry, M. and Duran, R.: Knowledge-based software engineering. In A. Barr, Cohen, P. R., and Feigenbaum, E. A. (eds): The Handbook of Artificial Intelligence. Addison-Wesley Publishing Company, Inc.: Reading, Massachusetts (1989) 241-322
27. Lubars, M. D.: Wide-spectrum support for software reusability. In W. Tracz (ed): Software Reuse: Emerging Technology. Computer Society Press: Washington, D.C. (1988) 275-281
28. Lubars, M. D.: The ROSE-2 strategies for supporting high-level software design reuse. In M. R. Lowry and McCartney, R. D. (eds): Automatic Software Design. AAAI Press / The MIT Press: Menlo Park, California (1991) 93-118
29. Lubars, M. D. and Harandi, M. T.: Addressing software reuse through knowledge-based design. In T. J. Biggerstaff and Perlis, A. J. (eds): Software Reusability: Concepts and Models. ACM Addison Wesley Publishing Company: New York, New York (1989) 345-377
30. Maiden, N. and Sutcliffe, A.: Analogical matching for specification reuse. In 6th Annual Knowledge-Based Software Engineering Conference. Syracuse, New York, USA: IEEE Computer Society Press (1991) 108-116
31. Matsumoto, Y.: Some experiences in promoting reusable software: presentation in higher abstract levels. In T. J. Biggerstaff and Perlis, A. J. (eds): Software Reusability: Concepts and Models. ACM Addison Wesley Publishing Company: New York, New York (1989) 157-185
32. Mili, H., Ah-Ki, E., Godin, R., and Mcheick, H.: Another nail to the coffin of faceted controlled-vocabulary component classification and retrieval. Software Engineering Notes 22, 3 (1997) 89-98
33. Naka, T.: Pseudo Japanese specification tool. Faset. 1, (1987) 29-32
34. Poulin, J.: Integrated support for software reuse in computer-aided software engineering (CASE). ACM SIGSOFT Software Engineering Notes 18, 4 (1993) 75-82
35. Prieto-Diaz, R.: Domain analysis for reusability. In W. Tracz (ed): Software Reuse: Emerging Technology. IEEE Computer Society Press (1988) 347-353
36. Prieto-Diaz, R. and Freeman, P.: Classifying software for reusability. IEEE Software 4, 1 (1987) 6-16
37. Puncello, P. P., Torrigiani, P., Pietri, F., Burlon, R., Cardile, B., and Conti, M.: ASPIS: a knowledge-based CASE environment. IEEE Software, (1988) 58-65
38. Salton, G.: Automatic Text Processing: The Transformation, Analysis, and Retrieval of Information by Computer. Reading, Massachusetts: Addison-Wesley Pub. Co. (1989)
39. Simos, M.: WISR 7 Working Group Report: Domain Model Representations Strategies: Towards a Comparative Framework. Andersen Center, St. Charles, Illinois (1995) http://www.umcs.maine.edu/~ftp/wisr/wisr7/dawg-nps/dawg-nps.html
40. Simos, M. A.: The growing of organon: a hybrid knowledge-based technology and methodology for software reuse. In R. Prieto-Diaz and Arango, G. (eds): Domain Analysis and Software Systems Modeling. IEEE Computer Society Press: Los Alamitos, California (1991) 204-221
41. Tamai, T.: Applying the knowledge engineering approach to software development. In Y. Matsumoto and Ohno, Y. (eds): Japanese Perspectives in Software Engineering. Addison-Wesley Publishing Company: Singapore (1989) 207-227
42. Wirsing, M., Hennicker, R., and Stabl, R.: MENU - an example for the systematic reuse of specifications. In 2nd European Software Engineering Conference. Coventry, England: Springer-Verlag (1989) 20-41
43. Yglesias, K. P.: Information reuse parallels software reuse. IBM Systems Journal 32, 4 (1993) 615-620
44. Zeroual, K.: KBRAS: a knowledge-based requirements acquisition system. In 6th Annual Knowledge-Based Software Engineering Conference. Syracuse, New York, USA: IEEE Computer Society Press (1991) 38-47
Reuse Measurement in the ERP Requirements Engineering Process Maya Daneva Clearnet Communications 200 Consilium Place, Suite 1600, Toronto, Ontario M1H 3J3, Canada
[email protected] Abstract. Reusing business process and data requirements is fundamental to the development of modern Enterprise Resource Planning systems. Currently, measuring the benefits from ERP reuse is attracting increasing attention, yet most organizations have very little familiarity with approaches to quantitatively evaluating the requirements reuse their customers have achieved. This paper is intended to provide a solid understanding of ERP requirements reuse measurement. It describes the first results obtained from the integration of explicit and systematic reuse measurement with standard requirements engineering (RE) practices in ERP implementation projects. Aspects of measurement planning, execution and reuse data usage are analyzed in the context of SAP R/3 implementation. As a preliminary result of our study, the paper concludes that the proposed approach shows considerable promise.
1
Introduction
Engineering the business requirements for an Enterprise Resource Planning (ERP) solution is one of the most critical processes in implementing standard software packages. It is concerned with (i) the analysis and the comparison of predefined reusable assets and (ii) the creation of a large number of artifacts, or descriptions, based on these assets. It requires the full-time commitment of a cross-functional team including business process owners, internal process and data architects, and external ERP consultants to produce one of the most critical project deliverables, namely the business blueprint. The ERP RE process begins with the identification and the documentation of the company's organizational units, their business rules and constraints, and continues throughout the entire implementation cycle in the form of tracing the life history of any particular requirement and business issue. The better the resulting business blueprint is formulated, the faster the progress in subsequent phases, because the necessary decisions concerning the future ERP solution have been taken and agreed upon [2]. As the impact of the RE process on project success is significant, an increasingly large number of ERP software producers are motivated to make the RE process more efficient in the face of pressures to create carefully engineered ERP solutions rapidly. Traditionally, requirements reuse is the mechanism preferred by vendors to achieve this goal. Today, a variety of requirements reuse initiatives [12] have been launched
ranging from systematic reuse approaches, reuse process methods, and reuse knowledge transfer models to complex domain-specific frameworks. Moreover, many vendors are now delivering component-based solutions to common business process and data requirements derived from numerous industry-specific business cases. These generally involve architectures defining the structure of integrated IS within the business problem domain, sets of business application components engineered to fit the architecture, and tools that assist the consultant in building component-based solutions using the domain knowledge within the architecture. The increased emphasis on ERP reuse suggests that quantifying and estimating the requirements reuse that customers achieve is essential for project organizations that wish to manage reuse expectations of pre-configured ERP solutions, to plan reuse levels throughout their implementation projects and to set achievable reuse goals. As Pfleeger points out, we 'can not do effective reuse without proper measurement and planning' [17]. Although there are many publications focused on the strengths of today's ERP reuse strategies and the way they benefit client organizations, to the best of our knowledge there is no widely accepted execution plan or method available to support quantitative requirements reuse measurement in ERP projects. The model for integrating requirements reuse measurement and RE presented in this paper partially fills this void. It defines a set of practices the team should put in place in order to successfully establish a disciplined and systematic measurement process that links reuse measurement needs to ERP reuse goals and reuse benefits. The objective of our effort is to add visibility into the ways in which reuse processes, reusable assets, resources, reuse methods and techniques of ERP RE relate to one another and, thus, to provide a sound and consistent basis for strategic planning of requirements reuse metrics in ERP implementation projects. For the purpose of this research, we place the requirements reuse measurement concepts in the context of implementing the SAP R/3 System, a leading product in the ERP software market [22]. However, our concept of incorporating reuse measurement practices in the RE process is generic enough and could easily be applied to any other ERP implementation project. The paper is structured as follows: In Sections 2, 3, 4 and 5 we describe the reuse measurement process areas that, we think, are necessary for an organization to successfully integrate reuse measurement activities into their RE process. Sections 6, 7 and 8 report on results from early analysis we did to assess the cost-effectiveness of the approach, the soft benefits of the integration effort and the applicability of the approach across multiple projects. Section 9 concludes the paper and indicates areas of future research.
2
Requirements Reuse Measurement Practice Areas
An ERP requirements reuse measurement process is a systematic method of adopting or adapting standard reuse counting practices in ERP RE, measuring the ERP reuse goals, and indicating the reuse levels targeted at the beginning and achieved at the end of each stage of the ERP implementation cycle. Establishing a requirements reuse measurement process as part of a larger organizational process involves solving significant business and technical problems:
• understanding the context for ERP requirements reuse,
• identifying reuse issues and goals,
• mapping goals and reuse measures to the context,
• defining suitable reuse counting standards,
• assembling a metrics toolset,
• establishing procedures for using reuse metrics data as part of the regular RE work packages.
In systematically addressing these problems, we have to consider each step toward the adoption of requirements reuse measurement in terms of inputs, outputs, control mechanisms and supporting tools [7,17,18]. In this paper, we use the concept of a measurement process area to characterize the competence an ERP project team must build and the necessary assets and conditions that must be put in place before successful requirements reuse measurement can proceed. Process areas indicate the areas the team should focus on to properly integrate requirements reuse measurement into the RE process. In essence, the integration involves three areas: requirements measurement planning, requirements measurement implementation and measurements usage. The first area includes those practices necessary to define the context for reuse. The second area involves practices that the team needs in order to have the planned activities carried out. Finally, the third area consists of practices that help the team select action items based on the reuse measurements. Issues pertinent to these activities are now explained in more detail.
3
Requirements Reuse Measurement Planning
The objective of this process area is to have a reuse measurement process documented in the form of a requirements reuse measurement plan (Fig. 1).

[Fig. 1 depicts the planning activity with its inputs (organizational goals, project plans, project deliverables), its output (the requirements reuse measurement plan) and its supporting mechanisms (business process modelling tools and ASAP Accelerators).]

Fig. 1. Area 1: Requirements Reuse Measurement Planning
The area includes collecting and structuring information on the stakeholders involved, measurement frequency, sources of metrics data, counting standards, tool support, the reports to be produced and the action items that could be taken based on the metrics data. Essential activities in this process area are:
• documenting multiple views of ERP reuse based on stakeholders' goals, issues and questions,
• analyzing the ERP RE process,
• selecting valid counting standards,
• devising metrics data processing tools,
• developing high-level models presenting typical uses of reuse data.
These activities are reviewed in the next subsections.
3.1
Multiple Views of Requirements Reuse
Critical to the reuse metrics success is a clear understanding of the reuse expectations of the parties interested in ERP reuse. To identify the stakeholders, the approach developed by Sharp, Finkelstein and Galal in [20] may be applied. Given early project deliverables, such as the SAP project organization structure, the project resources, the project plan, the project management standards and procedures, and the SAP implementation standards and procedures, we developed stakeholder interaction diagrams that document three important aspects of our team's working environment: the relationships between stakeholders, the relationship of each stakeholder to the system, and the priority to be given to each stakeholder's view (Fig. 2).
[Fig. 2 shows a stakeholder interaction diagram relating the Business Process Architect, SAP Project Administrator, Business Systems Analyst (SAP), Business Area Analyst, Business Process Owner and SAP Configuration Team Members through numbered interactions such as creating and maintaining process components, publishing process components, scoping business cases, validating process models and implementing system enhancements.]

Fig. 2. An Example of a Stakeholder Interaction Diagram
The organizational knowledge represented in the diagrams is needed to manage, interpret, balance and process the stakeholders' input into the SAP requirements reuse measurement process. It helped us structure the SAP project team members into four groups:
• business decision makers, who are corporate executives from the steering committee responsible for the optimization, standardization and harmonization of the business processes across multiple locations;
• business process owners, who are department managers responsible for providing the necessary line know-how and designing new processes and procedures to be supported by the R/3 business application components;
• technical decision makers, who are SAP project managers responsible for planning, organizing, coordinating and controlling the implementation project;
• configurators, who are both internal IT staff members and external consultants, e.g. SAP process analysts, data architects, configuration specialists, ABAP programmers, system testers, documentation specialists.
For each group, we identified and documented a set of relevant questions that should be answered by using the metrics data. For example, business decision makers would like to know:
• What costs and benefits does the business process harmonization bring to the organization if ERP assets are reused?
• What are the costs of maintaining a standard ERP software solution?
• What are our process standardization priorities?
• What implementation strategy fits better with the project?
Business process owners might ask:
• What strengths and limitations do we have in practicing reuse?
• How much reuse are similar organizational units practicing?
• Why do others practice more reuse than our organization?
• How does ERP reuse work with volatile process requirements?
• How much customization effort is required to implement minor/major changes in the business application components?
• Which processes have the greatest potential for practicing reuse?
Technical decision-makers need to know:
• What on-site and off-site help desk resources are needed to support our ERP users?
• How much effort is required to produce the project deliverables associated with the customized components?
• How much reuse did the team achieve?
Configurators might ask:
• Are there any rejected requirements that should be re-analyzed because of reuse concerns?
• What implementation alternative fits best?
• Which segments of the requirements are likely to cause difficulties later in the implementation process?
3.2
Understanding the Reuse-Driven RE Process
The AcceleratedSAP (ASAP) standard methodology for rapid R/3 implementation provides the principles and practices underlying ERP reuse and is intended to help the customer's organization coordinate, control, configure and manage changes of the R/3 business application components [10]. To analyze the ASAP RE process, we decomposed it into sets of activities, looked at the participating roles, and determined the ways in which reusable assets support these activities. The SAP RE process includes four iterations that collect increasingly detailed information through three types of activities: (i) requirements elicitation activities, which deliver the foundation for the business blueprint and are concerned with finding, communicating and validating facts and rules about the business, (ii) enterprise modelling activities, which are concerned with business process and data analysis and representation, and (iii) requirements negotiation activities, which are concerned with the resolution of business process and data issues, the validation of process and data architectures and the prioritization of the requirements. In this process, the very first iteration results in a clear picture of the company's organizational structure based on the pre-defined organization units in the R/3 System. The second iteration is to define aims and scope for business process standardization based on the R/3 application components. The third iteration leads to a company-specific business process architecture based on scenarios from the standard SAP process and data architecture components. Finally, the fourth iteration results in the specification of data conversion, reporting and interface requirements. The major actors in these activities are business process owners who are actively supported by the SAP consultants and the internal SAP process analysts and data architects. Next, there are three types of tools in support of the ASAP RE process: (i) the ASAP Implementation Assistant and the Questions and Answers Database, which provide reusable questionnaires, project plans, cost estimates, blueprint presentations, blueprint templates, project reports and checklists, as well as manage the project documentation; (ii) the SAP Business Engineer, a platform including a wide range of business engineering tools fully integrated into the R/3 System [10]; (iii) enterprise modelling tools (e.g. ARIS-Toolset, LiveModel and Visio), which have rich model management capabilities and assist in analyzing, building and validating customer-specific process and data architectures based on the reusable reference process and data models. Due to this tool support, the team can (i) take advantage of proven reuse standards, reuse process methods and techniques, and (ii) ensure that the requirements are correct, consistent, complete, realistic, well prioritized, verifiable, traceable and testable. These requirements reuse benefits are realized by using the R/3 Reference Model [2], a comprehensive architectural description of the R/3 System including four views: business process view, function view, data view and organizational view. Instead of building an integrated information system from scratch, with the R/3 Reference Model we build a solution from reusable process and data architectures based on SAP's business experience collected on a large scale.
Specifically, our analysis indicates that the R/3 Reference Model benefits the RE process in three ways: (i) in requirements elicitation, it provides a way for process owners and consultants to agree on what the SAP business application components are to do, (ii) in requirements modelling, it serves two separate but related purposes. It helps to quickly develop a requirement definition
document that shows the business owners the process flow the solution is expected to support. Beyond that, it can be seen as a design specification document that restates the business specification in terms of the R/3 transactions to be implemented, and (iii) in requirements negotiation, the R/3 Reference Model serves as a validation tool: it makes sure that the solution will meet the owners' needs, is technically implementable and will be maintainable in future releases.
3.3
Integration Concept
The most important component of reuse measurement planning is the development of a strategy for integrating measurement into regular RE process activities [17]. This should be done by mapping reuse measurement practices into the components of the RE cycle. A diagram summarizing our integration concept appears in Fig. 3.
[Fig. 3 relates the RE activities (requirements management, elicitation, modelling and negotiation) to the reuse measurement activities (reuse measurement planning, implementation and usage, and requirements reuse metrics management), with the reuse measurement plan, process/object models, reuse data and action items flowing between them.]

Fig. 3. Integration of requirements reuse measurement in RE
It shows where in the RE process reuse measurement data will be collected, analyzed, reported and used. The concept relies on the process of measuring reuse of SAP business requirements specified in detail in [3]. It involves several basic assumptions:
• a set of custom reuse counting practices is devised for the SAP team's context;
• reuse data collection and extraction are carried out by a staff-member with expertise in both SAP process/data modelling and software metrics; this could be either an internal SAP process analyst or an external SAP consultant;
• the reuse measurement focus is strictly on specific itemized functionality based on two major RE deliverables: business scenario models and business object models;
• reuse metrics data analysis is based on quantitative indicators;
• reuse metrics data is used to support stakeholders' decisions during requirements negotiation and elicitation;
• reuse metrics data is reused at a later stage to support decision making in planning for future releases, upgrades and major enhancements.
We suggest reuse measurement be applied once the modelling activities of the third RE process iteration are completed and the customer-specific process and data architectures are built.
Given the reuse metrics data, the SAP process analyst may decide which negotiation and elicitation activities should take place next.
3.4
Defining Counting Standards
As per the recommendation of software metrics practitioners [7,17,18], the selection of counting standards must be based on what is visible to the SAP project team in the requirements modelling stage of the third iteration. In this paper, we use published results of our previous research on the derivation of reuse indicators from SAP scenario process models and business object models [3]. Given the extensively used measure of "reuse percents" [18], we specified a reuse indicator that included reused requirements as a percentage of the total requirements delivered [3]:
SAP_Reuse = ( RR / TR ) * 100%
where RR represents reused requirements, and TR represents the total requirements delivered. A requirement borrowed from the R/3 Reference Model is called reused if it does not require modification. If a borrowed requirement does require a minor or major enhancement before use, we term it a 'customized requirement'. To provide a consistent and reliable means for structuring and collecting the data that make up reuse metrics, we suggest the team adopt a standard functional size measurement methodology and then adapt it to the ERP business requirements. We chose to use Function Point Analysis (FPA), because of its appropriateness to the software artifact being measured [8,11] and its proven usage and applicability in software reuse studies [13,18]. The step-by-step procedure for counting Function Points (FP) from scenarios and business object models is described in great detail in [3, 4]. Furthermore, we have derived three levels of SAP requirements reuse [3]:
• Level 3: It refers to processes and data entities that were reused without any changes. This category of reuse would bring the greatest savings to the SAP customer's organization. Scenarios with a higher reuse rate at this level have greater potential for practicing reuse.
• Level 2: It refers to minor enhancements applied to reference processes and data entities. A minor enhancement is defined as a change of a certain parameter of a business process or a data entity that does not result in a change of the process logic. This category of reuse refers to those processes and data entities of the R/3 Reference Model that logically match the business requirements but whose parameters need to be changed at code level to achieve their business purpose. Level 2 reuse does not save as much effort as Level 3 reuse; however, in most projects, it is as desirable as Level 3 reuse.
• Level 1: It refers to major enhancements applied to reference processes and data entities. A major enhancement is any considerable modification in the definition of a process or a data entity that affects the process logic from the business user's point of view. This category of reuse refers to those processes and data entities that do not match the business requirements and require changes at the conceptual level, as well as at the design and code level, to achieve their business purpose.
Level 1 reuse often turns out to be very expensive [14] and is generally considered the least desirable. Next, we introduce a level of new requirements, No_Reuse, to acknowledge the fact that reuse is not practiced at all. It refers to newly introduced processes and data entities. This is not a reuse category; it just helps us partition the overall requirements and understand how many requirements are not covered by the standard scenario processes and business objects. Given our definition of what to count as reuse and how to count it, we have formulated three reuse indicators [3]:
Level i SAP_Reuse = ( RR_i / TR ) * 100%
where i = {1, 2, 3}, RR_i represents reused requirements at Level i, and TR represents the total requirements delivered. The indicator No_Reuse = ( NR / TR ) * 100%, where NR represents the new requirements and TR has the above meaning, reports the percentage of requirements that cannot be met by the R/3 application package unless customer-specific extensions are developed. Our reuse levels do not assume that the individual requirements associated with a certain level have the same costs. We analyzed typical ERP requirements reuse cost drivers and identified 11 factors that can contribute to the cost of reusing an individual requirement at any reuse level. These are: (i) technical difficulty of implementing the transactions needed to build a process or data component, (ii) the number of activities in the standard SAP implementation guide that need to be performed in order to configure a process component, (iii) the team members' level of SAP expertise, (iv) adequate hours of training needed for the team, (v) the need for bilingual support, (vi) the need for R/3 installation at multiple sites for multiple organizations, (vii) the capability of the process and data architecture to facilitate change, (viii) the deployment of new standards, tools and process methods in the project, (ix) the involvement of new requirements for user interfaces, (x) the development of a complex and large database, and (xi) the level of requirements volatility. However, this list reports preliminary results and does not pretend to be complete. Currently, case studies are being carried out to validate empirically our counting model and its application procedure [5]. This exercise is being done on the basis of Jacquet's and Abran's framework [9] for investigating measure validation issues.
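A minimal sketch of the reuse indicators defined above, assuming each requirement has simply been labelled with its reuse level (in practice the counts would be derived from the FP-based counting forms):

```python
from collections import Counter

# Illustrative classification of one scenario's requirements into the reuse levels.
requirements = {
    "R1": "level3", "R2": "level3", "R3": "level2",
    "R4": "level2", "R5": "level1", "R6": "no_reuse",
}

def reuse_indicators(requirements):
    """Level_i SAP_Reuse = (RR_i / TR) * 100% and No_Reuse = (NR / TR) * 100%."""
    counts = Counter(requirements.values())
    total = len(requirements)
    return {level: 100.0 * counts.get(level, 0) / total
            for level in ("level3", "level2", "level1", "no_reuse")}

print(reuse_indicators(requirements))
# e.g. {'level3': 33.3..., 'level2': 33.3..., 'level1': 16.6..., 'no_reuse': 16.6...}
```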
3.5
Reuse Data Collection Instruments
Reuse measurements are only as good as the data pieces that are collected and analyzed [7]. To assure the quality of the reuse data, the team may consider both general-purpose tools that help to collect many types of project data and special-purpose tools that are more useful for certain reuse measurement tasks. Business process engineering tools that generally support RE activities in ERP projects can be of great help to
identify the counting process and data components making up the size metrics and to determine the level of customization of each data and process element of the SAP customer-specific architecture. Examples of such tools include the SAP Business Engineer, the ARIS toolset, the LiveModel tool, and the ASAP Implementation Assistant. Next, database and spreadsheet tools help to quickly set up forms for recording, analyzing and reporting reuse data. Finally, commercial metrics tools focusing on FPA-specific counting activities and reporting requirements are definitely of great help to the team. In cases when the ERP organization does not have a budget for developing a sophisticated reuse metrics infrastructure, the following low-cost approach to assembling metrics tools proved to be useful. It consists of three essential steps:
• design a standard reuse counting form based on published FP forms [8,11],
• build a reuse metrics database by using tools available at the company, and
• maintain a current business process knowledge repository.
We adapted the FP counting form presented in [8] to the needs of reuse measurement. It was extended to include the information needed to calculate the reuse indicators. Finally, we used the SAP Business Engineer, Excel spreadsheet software, MS Access and the corporate SAP project repository as the components of our reuse measurement infrastructure.
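The counting form can be pictured as a simple record per counted component; the field names and the component entries below are hypothetical, while the scenario name is taken from Table 1:

```python
from dataclasses import dataclass

@dataclass
class CountingRecord:
    """One row of a hypothetical reuse counting form."""
    scenario: str           # business scenario the component belongs to
    component: str          # process or data component being counted
    function_points: float  # functional size contributed by the component
    reuse_level: str        # "level3", "level2", "level1" or "no_reuse"

records = [
    CountingRecord("External Service Management", "Service entry sheet", 6.0, "level3"),
    CountingRecord("External Service Management", "Vendor invoice check", 4.0, "level2"),
    CountingRecord("External Service Management", "Custom service report", 3.0, "no_reuse"),
]

def scenario_reuse(records, scenario, level):
    """Share of the scenario's functional size reused at the given level."""
    in_scope = [r for r in records if r.scenario == scenario]
    total = sum(r.function_points for r in in_scope)
    reused = sum(r.function_points for r in in_scope if r.reuse_level == level)
    return 100.0 * reused / total

print(round(scenario_reuse(records, "External Service Management", "level3"), 1))  # 46.2
```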
3.6
Modelling the Reuse Data Usage
Measuring reuse in ERP RE is an investment that should result in (i) a better understanding of what is happening during the RE process, and (ii) better control over what is happening on the project [7]. Requirements reuse measurement itself does not lead to any benefit; it leads to knowledge, and the team must think out strategies for maximizing the benefits of this process knowledge. One way to document the processes of using requirements reuse measurements is to build SAP process knowledge flow models. These are developed on the basis of the stakeholder interaction diagrams and define all business locations, the desired distribution of process knowledge onto these locations, the types of reports to be distributed and the action items that could be taken based on the metrics reports. Given the requirements reuse measurement tables in the Excel database (see Table 1 in Section 4), the process knowledge flow diagrams reflect the usage of two types of reuse data profiles that can be built: scenario-specific profiles, which present the levels of reuse pertinent to a given scenario, and level-specific profiles, which show how the requirements are reused at a specific level within a project. Both types of profiles can be passed on to business decision-makers, who would use them in at least three ways:
• multiple reuse profiles of two or more different ERP products (SAP, BAAN, PeopleSoft) can be compared to determine which package best serves the needs of the company and offers the greatest opportunity for reuse;
• multiple reuse profiles of different releases (SAP R/3 3.1, 4.0B, 4.5, 4.6) of one ERP package could be compared to determine which release brings the biggest benefits to the company;
• multiple reuse profiles of a single ERP package (e.g. SAP R/3) can build an assessment of the overall level of standardization of the ERP solution in the organization.
Reuse profiles of a single ERP package (e.g. SAP R/3) also support comparisons between similar projects and help in finding groups of ERP enhancement, maintenance or migration projects that might be treated together for effort estimation. These profiles can be provided to three audiences: (i) business decision makers who need to put ERP contract agreements in quantitative terms, (ii) technical decision-makers who would use the data to plan and control the reuse levels in the later phases of the ASAP implementation process, and (iii) business process owners and configurators who can track requirements reuse levels over time to control the changes in overall reuse during the iterations of the RE process.
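The two profile types can be read directly off the reuse measurement tables; a sketch, using two of the scenario rows from Table 1:

```python
# Reuse levels per scenario, as in Table 1 (two rows shown).
reuse_data = {
    "Procurement of Stock Materials": {"level3": 19, "level2": 50, "level1": 8, "no_reuse": 22},
    "External Service Management":    {"level3": 68, "level2": 22, "level1": 10, "no_reuse": 0},
}

def scenario_profile(reuse_data, scenario):
    """Scenario-specific profile: the reuse levels pertinent to one scenario."""
    return reuse_data[scenario]

def level_profile(reuse_data, level):
    """Level-specific profile: reuse at one level across all scenarios of a project."""
    return {scenario: levels[level] for scenario, levels in reuse_data.items()}

# The Level 3 profile points to the scenario with the greatest reuse potential.
level3 = level_profile(reuse_data, "level3")
print(max(level3, key=level3.get))  # 'External Service Management'
```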
4
Requirements Reuse Measurement Implementation
The objective of this process area is to ensure the successful initiation of reuse measurement on a project (Fig. 4).

[Fig. 4 depicts the implementation activity with its inputs (RE process model, reuse measurement process model, list of stakeholders' questions, business process models, business object models), its output (reuse data reports) and its supporting mechanisms (reuse counting form, reuse data store, business process modelling tools).]

Fig. 4. Area 2: Requirements Reuse Measurement Implementation
Key parts of the area include:
• selecting potential sites for reuse measurement, if metrics will be piloted prior to full implementation,
• checking that the reuse measurement infrastructure is ready for use,
• recording functional size and reuse data by using counting forms,
• collecting information about the measurement process itself (e.g. effort, resources).
In case the measurement plan is applied as a pilot program, we identified three issues which influence the selection of pilot scenarios: the level of the architects' expertise in the business area, the experience of the process owner in the business environment, and the volatility of the business requirements. Ideally, the safest way to pilot reuse metrics is to apply them to scenarios that meet three criteria: (i) the architect is most familiar with the business area; (ii) the partnering process owner is an experienced manager in the field,
(iii) the scenario is well known to be stable and unaffected by the fast-changing business environment. Furthermore, based on our FP counting model [4], we devised a counting form usage procedure that indicates at exactly what point each piece of data should be collected. To reduce the level of bias, only one SAP process analyst was involved in data collection and data processing. Next, the collected information has been stored and processed in MS Excel. Summarized and detailed reports have been extracted from the Excel tables. For example, Table 1 shows summarized results from measuring reuse of four SAP business scenarios. Finally, reuse data have been packaged, catalogued and published by using standard SAP business engineering tools, such as the ASAP Implementation Assistant or LiveModel [10]. Since reuse metrics provide knowledge about the business processes, reports on metrics data should be considered part of the SAP process documentation. They can be stored in a corporate intranet repository and, thus, made available for review and analysis to all interested parties. In this way, users of SAP documentation can easily navigate from scenario process models to functional size and reuse metrics data.
Table 1. Reuse levels for five SAP scenarios

Business Scenarios | Level 3 Reuse | Level 2 Reuse | Level 1 Reuse | No Reuse
Project-related Engineer-to-Order Production | 36% | 15% | 17% | 32%
Procurement of Stock Materials | 19% | 50% | 8% | 22%
Procurement of Consumable Materials | 31% | 59% | 0% | 10%
External Service Management | 68% | 22% | 10% | 0%

5
Using Requirements Reuse Measurements
The objective of this process area is to make sure that requirements measurements are used in a way that adds value to the RE process and the ERP organization (Fig. 5).

[Fig. 5 depicts the usage activity with its inputs (organizational goals, RE process model, list of stakeholders' questions, reuse data reports), its output (action items) and its supporting mechanisms (corporate/ERP project repository, reuse data store).]

Fig. 5. Area 3: Using Requirements Reuse Measurements
The area consists of the following activities:
• prepare for using the measurement data,
• maintain a current list of the action items the team takes based on the reuse data,
• ensure that metrics are used consistently and in compliance with the process knowledge flow models,
• collect feedback on data usage,
• update the knowledge flow models based on the feedback.
The next subsections provide insight into the action items we took on the basis of reuse data and into our approach to packaging experiences in using the metrics.
5.1.
Using Scenario-Specific Profiles
These profiles reveal information important to those team members who are responsible for planning for reuse and assigning the target reuse levels to be achieved for each scenario throughout the R/3 implementation project. Typically, the profiles are used in the requirements elicitation stage of the last RE process iteration in order to: (i) understand reuse constraints, (ii) clarify the motivation for package customization, (iii) analyze options for business process standardization, and (iv) collect input to modify the reference process and data models. For example, knowing whether a low level of reuse is an indication that the R/3 Reference Model does not match the requirements of the project, or that there are some impediments to reusing standard R/3 functionality, helps the team better understand what alternative process flows should be elaborated to avoid the need to modify the R/3 components. For these new process models, reuse levels can be determined and compared to select the best alternative. Furthermore, scenario-specific profiles can quickly reveal problems in planning, organizing and controlling the project deliverables to be produced in the later stages of the ASAP implementation, for example, test cases, SAP user documentation and training materials. Based on the measurement data, the technical decision-makers can get an understanding of what portion of their testing, training and documentation development plans needs further consideration and adjustment. If the reuse data show great deviation from the standard scenarios, the technical decision-makers should re-assess to what extent the team can rely on standard deliverables provided by external SAP consultants. Processes with high Level 1 Reuse or No_Reuse rates are likely to require additional resources (e.g. business process owners, internal training specialists, and documentation analysts) to get tested and documented. Finally, adhering to a consistent means of reuse counting optimizes the relationships between the stakeholders and facilitates on-time and on-budget upgrades of the SAP solution. In planning for new upgrades, reuse measurements can serve as input to the assessment of the customization risks. They help the technical decision-makers manage the risk on two counts: first, the business process owners can more readily accept the risk for a given scope of the project with specified target reuse levels; second, the decision-makers and configurators can more readily accept the risk for the cost of customization. In planning the migration of the existing SAP solution to a new release, scenario-specific profiles are very helpful in the identification of processes that
are likely to cause particular difficulties for the configurators. Processes with higher Level 1 and No_Reuse rates should be migrated with extra caution. The technical decision-makers have to budget and set resources aside for extra analysis of the gaps between those scenarios and the standard scenarios provided in the new SAP release. The gap analysis often leads to reengineering the business requirements with the purpose of achieving a higher level of standardization and avoiding unnecessary customization. If reuse data are collected from the reengineered requirements, the team can use the new set of measurements as a basis for system acceptance. In this way, it reduces the chances of dispute between process owners and external consultants about whether or not these requirements have been met by the system.
5.2.
Using Level-Specific Profiles
These profiles are usually useful in determining how much reuse the team achieved, as well as in identifying the business scenarios in which reusing SAP application components would potentially bring the greatest benefits to the company. According to the Level 3 reuse profile, the External Service Management scenario is the one which practices the most reuse (see Table 1). These profiles are important to requirements negotiation activities. Specifically, the profiles help the team (i) set reuse goals and expectations, (ii) define the scope for practicing reuse, (iii) rethink rejected requirements, and (iv) reprioritize requirements. As R/3 customization is one of the most risky matters to deal with in a package implementation, the team is likely to re-prioritize the requirements in order to maximize the benefits from reuse. The business process owners can start by reviewing those scenarios that have the lowest Level 3 and Level 2 reuse ratings. The team can validate these scenarios on a function-by-function basis to see where customization should and should not occur and how risky it may be. This process usually results in structuring the requirements into three categories: must-have, nice-to-have, and possible-but-could-be-eliminated. A recent study reports that 50% of the initial must-have requirements usually move to the nice-to-have category during the negotiation session [14]. As the process owners get a better understanding of SAP reuse, they become more conscious of avoiding unnecessary customization. Next, level-specific reuse profiles can help both business and technical decision makers decide on the SAP implementation strategy. If Level 1 reuse dominates and much customization effort is anticipated, the team is likely to adopt a step-by-step approach to a sequenced implementation of the SAP components. If Level 3 reuse rates are the highest ones, the customization risks are limited and a big-bang approach to implementing multiple components seems reasonable.
5.3.
225
Packaging Experiences
Collecting, documenting, reviewing and analyzing facts and observations about the context of reuse measurement means finding explanations of how and why the measurement process worked as part of the RE cycle [6,21]. It provides the team with a basis for understanding (i) the usefulness and the applicability of the requirements reuse measurement process, (ii) the areas of the process where improvements could be done, and (iii) the benefits of having reuse measurements integrated into the RE process. Our reuse measurement experience refers to four SAP projects: three new implementations and one upgrade. While applying the measurement process, we packaged facts and observations about its preconditions and outputs. Each package consists of characteristics of the project context, a logical conclusion about specific aspects of the measurement process and a set of facts and observations that support this conclusion. The conclusions represent either early lessons learnt that tell us what and how worked in the process or critical success factors that suggest why it worked. Section 6,7 and 8 discuss what could be derived from our early experience packages.
6
Cost-Effectiveness of Requirements Reuse Measurement
Experience packages were first used to assess the overall cost of practicing requirements reuse measurement as part of SAP RE. Table 2 shows the activities of a typical measurement exercise and lists the total effort needed for each step. Table 2. Efforts needed for reuse measurement in a typical SAP project Requirements reuse measurement activity
Effort (prs.-hrs)
Documenting multiple view of ERP reuse based on stakeholders’ goals, issues and questions
4
Analyzing the ERP RE process
2
Selecting valid counting standards
3
Devising metrics data processing tools
1
Developing high-level models presenting typical uses of reuse data.
1
Selecting potential sites for reuse measurement, if a pilot is needed
1
Checking that the reuse measurement infrastructure is ready for use
0.5
Record functional size and reuse data by using counting forms
32.5
Prepare for using measurement data
2
Maintain current the list of action items the team takes
3
Ensure that metrics are used according to the knowledge flow models
2
Collect feedback on data usage
4 3
Update the knowledge flow models based on the feedback
226
Maya Daneva
Due to the full integration of our reuse metrics and the RE process, the measurement process can be applied at low overhead cost. As the ASAP RE phase is limited to 6 weeks [10], a typical duration of a measurement cycle is 2 weeks. We anticipate to apply requirement reuse measurement three times: (i) at the end of the fourth week, when the requirements models are produced, (ii) at the end of the sixth week, when the last elicitation sessions are completed, and (iii) at the end of the RE process, when the blueprint is signed off. The total effort for a measurement in a project employing 11 external consultants, 16 process owners and 13 internal IT staff-members, and covering 5 business application components is 59 person-hours or approximately 7.5 person-days. The involvement of the business process owners is kept to a minimum: the process architect made 92% of the effort and the business process owners made 8% of the effort. This effort model assumes that (i) the requirements reuse measurement planning has been done well in advance and a measurement plan exists in the organization, and (ii) the team has to check the components of the measurement plan for their fitness for use in the project-specific environment and to revise them to the specific reuse measurement needs of the stakeholders.
7
Assessment of the Soft Benefits
As the experience packages reflect the actual usage of the measurements, we consider them as a starting point in developing multifaceted definitions of the benefits of the integration of the reuse measurement process into the RE process. Moreover, they enforce the implementation of a value-added approach to evaluating benefits [15]. Unlike other methods, it is an user-centered approach that focuses on the unique problems of each stakeholder in ERP RE (Fig. 6).
Stakeholders
Payoff
ERP Requirements Reuse Measurement
Fig. 6. Measuring value-added benefits of measurements in RE
Benefits of reuse measurement are realized only if measurement practices are adopted by team members as part of their regular implementation process. As of now, the analysis of our experience packages resulted in the identification of four groups of benefits the could be realized by an ERP implementation team: 1. Collaborative requirements engineering. These benefits refer to the impact the requirements reuse measurement process has on the RE process. The measurement process: • helps the team understand the newly designed business processes in the organization and the implications of an integrated view of the business. • facilitates the transfer of implementation knowledge from SAP consultants to the process owners.
Reuse Measurement in the ERP Requirements Engineering Process
2.
3.
4.
227
• provides a common ground for communication across working groups. • provides a foundation for building and reinforcing partnerships, increasing customers’ understanding of the ERP functionality, re-prioritizing the business requirements, communicating the value of ERP-reuse. • creates awareness of the integrated processes in SAP R/3. Maintenance of the ERP solution. These benefits show the impact of practicing requirements reuse measurement on the process of maintaining existing SAP solutions. Reuse measurements: • represent a form of documenting the implementation process. • provide structured planning for system enhancements, configuration and training. • help the team estimate and control costs as process requirements have to be reworked and customers report poor satisfaction. • help process owners understand what parts of the SAP R/3 System are flexible and what parts are rigid. • enable the exploration and the validation of implementation alternatives. • help the technical decision makers fine-tune the amount of consulting and implementation resources. • In upgrades, they help the team harmonize the solution with the SAP standard software; they allow to compare the baseline models with the new functionality in order to eliminate the custom code in the new system. • help identify bottlenecks in the system, spot reuse problems and evaluate their impact on the process workflow. Enhancement of ASAP. These benefits address how reuse measurements help the team overcome some deficiencies with the current ASAP implementation process. Unfortunately, as of now, the ASAP methodology does not provide sufficient support in setting up suitable training and user documentation plans [10]. Reuse level measurement: • compliments the ASAP methodology by providing systematic procedures for quantifying fits and gaps between the company‘s desired state and the R/3 System. • is a mechanism for tracking ASAP implementation deliverables. • is a tool for controlling the quality of the business blueprint. It helps the team detect errors or omissions in requirements in time to contain them before the solution is configured. Creation of process visions. These benefits refer to improved visibility in the ERP implementation processes. Requirements reuse measurement data: • enables the reuse process to be planned and reuse planning to be done as part of the RE process. • helps the team build business understanding of and ownership over the R/3 implementation . • provides a foundation for building and reinforcing partnerships, increasing customers’ understanding of the ERP functionality, re-prioritizing the business requirements, communicating the value of ERP-reuse.
228
Maya Daneva
• serves as a vehicle for faster resolution of requirements problems and conflict. Metrics data help focus requirements elicitation and negotiation meetings and resolve architectural problems that may arise. • helps identify parts of the process likely to be most expensive to change. • serves as an input to an effort estimation model. • provides a basis for linking SAP implementation /enhancement projects to the business model. The results reported in this section are preliminary benefits assessments only. Further research is planned to extend the set of experience packages and to conduct a formal evaluation of the benefits in qualitative and quantitative terms.
8
Applying Metrics across Multiple Projects
The experience packages have been analyzed to assess the extent to which the requirements reuse measurement practices are applicable beyond the ERP RE context of SAP R/3 implementation projects. Our findings confirm that the reuse measurements work best where there exist reference process and data models of the business application components being implemented. In case of model-driven implementation projects based on other ERP packages, for example BAAN, the approach does not need any adaptation. As the ASAP RE process and the BAAN Target Enterprise RE process [1] are similar in terms of process roles, activities, supporting tools (including reference process and data models) and business requirements modelling standards, the application of our measurement plan in BAAN implementation projects is straightforward. Furthermore, the approach can be used in business applications where business process modelling captured the interactions in the business workflow and the data objects from the business area. The issue that greatly impacts the applicability of our approach to other projects refers to the adaptation of the FPA counting rules to the business requirements modelling standards established by other ERP providers. The definitions of our reuse percentage ratios and reuse levels are stable but the custom rules for sizing the business requirements may vary. In case of dealing with multiple business requirements modelling standards and/or multiple versions of FP counting, the team should take extra actions to ensure the consistent application of the FPA counting rules across the projects. For example, the team can elaborate a crossreference table that explains what each FPA counting component means in the terms of the different requirements modelling languages used in the projects. When applying the plan to other projects, different company-specific, methodologyspecific or project-specific aspects might lead to shortening or eliminating of some stages in reuse metrics planning. Examples of such aspects are given in Table 3. Basically, the decision for how to handle reuse at requirements level is a risk-based one and depends on the assessment of the risk of having the customization of a standard package out of control versus the costs and the residual risk of each possible reuse handling option.
Reuse Measurement in the ERP Requirements Engineering Process
229
Table 3. Environment characteristics
9
Organizational characteristics
Methodology-related characteristics
Project-specific characteristics
• Level of requirements process maturity [19]
• Existing process modelling practices, standards and tools
• Commitment to reuse business processes and objects
• Communication esses and tools
proc-
• Level of utilization of ERP business engineering tools
• Significance of the business process harmonization exercise
• On-going project measurement
• Level of utilization of ERP reuse methods
• Existing reuse acceptance management practices
• Existing competency in enterprise integration architecture
• Adherence to a standard implementation process (for example ASAP)
• Adherence to a standard business language within the project
Conclusions
The present paper described three requirements reuse process areas that form the foundation for the successful integration of requirements reuse measurement process into the ERP RE cycle. Relevant issues in planning, implementation and using the measurements have been discussed and possible solutions to these issues have been demonstrated in the project context of SAP R/3. The experiences we gained in our requirements reuse measurement exercises has been analyzed to assess the costeffectiveness of the approach, the value-added benefits it brings, and its applicability across multiple project. We found that (i) the costs of applying the practices are low, (ii) the approach benefits the ERP team in four ways: it provides support in establishing a collaborative requirements engineering process, maintaining existing SAP solutions, enhancing the ASAP methodology and creating process visions, and (iii) the approach could be tailored at low cost to the needs of ERP projects based other packages. Moreover, we found that modelling and documenting stakeholders’ interactions, RE process steps and process knowledge flows forces the team to make different kind of analysis of the requirements reuse measurement process. These exercises are likely to reveal different types of problem, omission and inconsistency in planning, implementing and using measurements. The work reported in this article is only the beginning of an ongoing effort to develop better requirements reuse measurement practices. Further research will be focused on the development of a lessons learnt architecture by using product/process dependency models [6]. Second, it would be interesting to link the reuse measurements with some product attributes, such as maintainability and level of standardization of the resulting ERP solution, as well as to investigate some effort and project duration estimation models [16] based on our metrics data. Lastly, we plan to develop a systematic approach to evaluating the value-added benefits of ERP requirements reuse.
230
Maya Daneva
References 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16. 17. 18. 19. 20. 21. 22.
Brinkkemper, S., Requirements Management for the Development of Packaged Software, 4th IEEE International Symposium on Requirements Engineering, IEEE Computer Society Press, Limerick, Ireland, 1999 Curran, T., A. Ladd, SAP R/3 Business Blueprint, Understanding Enterprise Supply Chain Management, 2nd. Edition, Prentice Hall, Upper Saddle River, NJ (1999) Daneva M.: Mesuring Reuse of SAP Requirements: a Model-based Approach, Proc. Of 5th Symposium on Software Reuse, ACM Press, New York (1999) Daneva, M., Deriving Function Points from SAP Business Processes and Business Objects, Journal of Information and Software Technologies (1999). Accepted for publication Daneva M., Empirical Validation of Requirements Reuse Metrics. In preparation. ESPRIT Project PROFES, URL: http://www.ele.vtt.fi/profes Fenton, N., Pfleeger, S.L.: Software Metrics: Rigorous and Practical Approach, PWS Publishing, Boston Massachusetts (1997) Garmus D., D. Herron, Measuring the Software Process, Prentice Hall, Upper Saddle River, New Jersey (1996) Jacquet, J.-P., Abran, A.: Metrics Validation Proposals: a Structured Analysis. In: Dumke, R., Abran, A. (eds.): Software Measurement, Gabler, Wiesbaden (1999), 43-60. Keller, G., Teufel, T.: SAP R/3 Process Oriented Implementation, Addison-Wesley Longman, Harlow (1998) Jones, C.: Applied Software Measurement, McGraw Hill, New York (1996) Lam, W., J.A. McDermid, A.J. Vickers, Ten Steps Towards Systematic Requirements Reuse, Proceedingd of 3rd IEEE International Symposium on Requirements Engineering January 5-8 1997, Annapolis, USA Lim, W.: Managing Software Reuse, A Comprehensive Guide to Strategically Reengineering the Organization for Reusable Components, Prentice Hall, Upper Saddle River, NJ (1998) Lozinsky, S.: Enterprise-wide Software Solutions, Addison-Wesley, Reading MA (1998) Meyer N. D., M. E. Boone, The Information Edge, 2nd Ed., Gage Educational Publishing Co., Toronto Canada (1989) Oligny S., P. Burque, A. Abrain, B. Fournier, Developing Project Duration Models in Software Engineering, Journal of Systems and Software, 2000, submitted Pfleeger, S. L.: Measuring Reuse: a Cautionary Tale, IEEE Software, June (1997) Poulin, J. Measuring Software Reuse: Principles, Practices, and Economic Models, Addison-Wesley, Reading, MA (1997) Sawyer, P., I. Sommerville, S. Viller, Capturing the Benefits of RE, IEEE Software, March-April (1999), 78-85 Sharp, H., A. Finkelstein, G. Galal, Shakeholder Identification in the Requirements Engineering Process, Proceeding of the 1st Intl. Workshop on RE Processes/ 10th Intl Conf. on DEXA, 1-3 Sept., 1999, Florence, Italy Statz, J., Leverage Your Lessons, IEEE Software, March/April (1999), 30-33 Welti, N., Sussessful R/3 Implementation, Practical Management of ERP Projects, Addison-Wesley, Harlow, England (1999)
Business Modeling and Component Mining Based on Rough Set Theory Yoshiyuki Shinkawa1 and Masao J. Matsumoto2 1
2
IBM Japan, Ltd, Systems Laboratory 1-1, Nakase, Mihama-ku,Chiba-shi, Chiba Japan
[email protected] The University of Tsukuba, Graduate School of Systems Management, 3-29-1, Otsuka, Bunkyo-ku,Tokyo Japan
[email protected] Abstract. Model based approach and component based software development (CBSD) both contribute to software reuse. In this paper, we present a formal approach to combine them together for efficient software reuse. The approach consists of two phases. The first is the model construction phase. Rough set theory (RST) and colored Petri nets (CPN) are used in order to build the accurate model (hereafter, simply referred to “the model”) from various facts and requirements provided by various domain-experts. RST coordinates differences in knowledge and concepts among those experts, whereas CPN express the model rigorously and intuitively. The second is the component mining phase. We use Σ algebra and RST for retrieving such components as being adaptable to the model. Σ algebra evaluates functional equivalency between the model and the components, while RST coordinates differences in sorts or data types between them. We mainly focus on large scale enterprise back-office applications, however the approach can be easily extended to the other domains, since it does not depend on any specific domain knowledge. An application model example from an order processing is depicted to show how our approach works effectively.
1
Introduction
Component based software development (CBSD) is one of the key technologies for software reuse, and has achieved success in many small or medium sized software developments. In those sized developments, software components and their running environments would be merely required to implement whole software systems with components. However, when introducing existing CBSD technologies into large scale software developments, especially in enterprise back-office applications, we must deal with much more complex facts and requirements residing in the domain. In addition, we must deal with much coarser and more complex components which are difficult to evaluate adaptability to the requirements. Those components are often called business objects. W. B. Frakes (Ed.): ICSR-6, LNCS 1844, pp. 231–250, 2000. c Springer-Verlag Berlin Heidelberg 2000
232
Yoshiyuki Shinkawa and Masao J. Matsumoto
In order to evolve software reuse with business objects in such applications, we need accurate domain models to which adaptability of components are evaluated [11,13]. Those accurate domain models could be used in the similar domain as templates, and contribute to software reuse from the modeling viewpoint. When building domain models, we usually elicit the facts and requirements residing in the domain from people concerned, often referred to as domain experts. Since those facts and requirements are based on the enterprise knowledge and viewpoints of each domain expert, which might be biased by his/her mission and/or role in the domain, we must coordinate and integrate them for building the consistent knowledge base. After the completion of domain modeling, we need to select the software components based on several criteria upon developing particular system in CBSD environments. At this time, we are faced with the another problem that we have to identify and interpret conceptual differences between the requirements and the components. Commercial components, such as COTS (Commercial Off-TheShelf Software) and shrinkwrapped software, are not designed for the specific requirements that we have, but for generic ones that reside in the target domains they focus on. Therefore, there often are conceptual differences between the requirements and the components in terms of e.g. the meaning and the scope, in other words, the semantic and the taxonomic differences. Those differences make it difficult to select suitable or adaptable components. Traditional software development methods assume all the software components are implemented by decomposing the requirements, and the methods do not care about the conceptual gaps between the requirements and the components. In addition, those methods focus on specific stages in software development cycle such as requirements elicitation[12,3], requirements modeling [1,2], software specification [15] or components evaluation [5]. We need systematic and rigorous approach to bridge requirements and components for effective software component reuse. In this paper, we propose a formal approach for model construction and component mining based on the knowledge of the domain and the components. The approach can be effectively used in large scale enterprise back-office applications. The paper is composed as follows. Section 2 and 3 focus on the modeling phase, in which we construct the business process model for enterprise back-office applications. This model represents functional and behavioral requirements for the software to be developed. On the other hand, section 4 and 5 focus on component mining phase. In section 2, we discuss a method for integrating the knowledge of each expert through identifying the “basic model units” which are the elements used for composing the domain. Rough Set Theory (RST) is used for integrating the knowledge. Section 3 deals with a way to build a consistent enterprise model with the basic model units to clarify the structure of the requirements. Colored Petri nets (CPN) are used for representing the functional and behavioral aspects of the model in rigorous and intuitive way. Section 4 presents a way to express both the model and the components in the same notation. Σ algebra is used to express them. We discuss a method to transform the CPN model into Σ algebra
Business Modeling and Component Mining Based on Rough Set Theory
233
uniquely. Section 5 presents a formal method to retrieve the adaptable and also reusable components to the model. RST is used to resolve definitional differences of the concepts between the model and the components. Our approach only deal with the essential nature of the problems and do not refer to the scalability of the related aspect, however it is possible to extensively apply this approach to a large scale.
2
Identifying Basic Model Units and Their Relationships
The first step in building an enterprise model is to identify the basic model units and the relationships between them by analyzing the enterprise. Basic model units are the elements of the model, e.g. entities in entity-relationship modeling, processes and data stores in data flow diagrams, and objects in object oriented analysis. There have been many enterprise modeling frameworks proposed in order to understand complex enterprise business operations [8,14]. Although they provide us with various different viewpoints on enterprises, business processes are commonly recognized as one of the most important concepts. Therefore we focus on business processes for enterprise modeling and define the following basic modeling units and their relationships, by which we can describe the whole business process. 1. task: A basic unit of resource and /or information transformation, which could have multiple inputs and outputs 2. organization: A basic unit to perform and/or control the tasks 3. resource and information: A basic unit which flows between the tasks 4. organization - task relationship: Knowledge of “who performs the tasks with which resources/information?” 5. task - organization relationship: Knowledge of “who receives the results of the tasks?” The above relationships (item 4 and 5) could include the explanative statements, which would be the rationale and/or constraints of them. Those statements are often referred to as business rules. The first three items are used to determine the static or structural aspect of the business process, on the other hand, the last two items are used to determine the dynamic or behavioral aspect of it. 2.1
Rough Set Theory and Knowledge Representation
The basic model units and their relationships are elicited from domain-experts as concepts in the business process, which are based on their knowledge of the enterprise. The knowledge those experts have could be different from each other, because of differences between their roles or missions in the enterprise. Therefore, we have to integrate their individual knowledge into enterprise-wide knowledge in order to build the model units. We introduce Rough Set Theory (RST) for this purpose.
234
Yoshiyuki Shinkawa and Masao J. Matsumoto
RST provides us with theoretical aspects of reasoning about data, and deals with various knowledge and concepts based on set theory [6,7]. RST regards knowledge as the ability to classify objects, and regards concepts as the classes to which the objects belong. A set of objects, called universe U , is classified by knowledge into the classes X1 , . . . , Xn , where Xi ∩ Xj = ∅ and U = ∪Xi . Each Xi becomes a concept in U . This classification corresponds to an equivalence relation1 over U according to the classical set theory. Therefore knowledge in RST can be represented by equivalence relations. A family of equivalence relations and a universe U compose a knowledge base K = (U, R), where R = {R1 , R2 , . . . , Rn } is a family of equivalence relations over U. Partial knowledge in K is represented by P ⊆ R, and the finest classification by this P is obtained by the equivalence relation IN D(P) = ∩P which is called an indiscernibility relation2 over P. The class thatx ∈ U belongs to in this finest classification is [x]IN D(P) = [x]R R∈P
where [x]R means the class to which x belongs, when U is classified by the equivalence relation R. The family of all the equivalence relations definable in K is denoted by IN D(K) = {IN D(P) : ∅ = P ⊆ R}. Classes that are classified by an equivalence relation R is called R-basic categories, and we regard the categories as concepts. If X ⊆ U is the union of some R-basic categories, X is called R-definable, otherwise called R-undefinable. By an equivalence relation R derived from a given knowledge base K = (U, R), that is, by R ∈ IN D(K), any subset X ⊆ U is recognized in the following two ways. RX = ∪({Y ∈ U/R : Y ⊆ X} RX = ∪({Y ∈ U/R : Y ∩ X = ∅} where U/R means the family of all classes that is classified by R, that is, R-basic categories. The former is called the R-lower approximation of X and the latter is called R-upper approximation of X. BNR (X) = RX − RX is called the R-boundary of X. For example, suppose we are given a set of five cars as the universe U , and we have such three pieces of knowledge on cars as color, size and number of doors, then those cars can be characterized by the tuple of the attributes (color, size, number of doors ). Each attribute can classify or categorize the universe U , therefore they are pieces of knowledge in RST. If those tuples are (red, large, 4), (red, large, 4), (blue, large, 4), (blue, small, 2) and (green, medium, 2), then the knowledge base is denoted by K = (U, R), where U = {1, 2, 3, 4, 5} and R = 1 2
A relation R ⊆ U × U which is reflexive, symmetric and transitive. IN D(P) is an equivalence relation, since intersection of several equivalence relation is also an equivalence relation.
Business Modeling and Component Mining Based on Rough Set Theory
235
{R1 , R2 , R3 }. R1 , R2 and R3 are the equivalence relation which correspond to the attributes color, size and number of doors respectively. By those equivalence relations, U can be classified in three ways as follows. U/R1 = {{1, 2}, {3, 4}, {5}} U/R2 = {{1, 2, 3}, {4}, {5}} U/R3 = {{1, 2, 3}, {4, 5}} This knowledge base K = (U, R) can also be denoted by the following table, which is called Knowledge Representation System (KRS) with the attributes color, size and number of doors.
Table 1. Knowledge Representation System U 1 2 3 4 5
2.2
color red red blue blue green
size number of doors large 4 large 4 large 4 small 2 medium 2
Identifying the Static Aspect of Business Processes
The static aspect of a business process can be identified by specifying all the business process constituents. Those constituents can be regarded as the concepts that reside in the enterprise, and we can identify them by RST. For applying RST, we first define the following three universes as the basis of discussion 3 . 1. U1 : a set of all the work units, which cannot be decomposed further more by external observation 2. U2 : a set of all organizational objects related to the business process. (usually the objects represent the people involved in the business process) 3. U3 : a set of all related resources and information Each domain expert has knowledge of U1 , U2 and U3 in terms of RST, that is, the ability to classify those universes. On the assumption that there are p experts, named e1 , . . . , ep , and each ei has his/her own knowledge on U1 , U2 and U3 . As stated above, knowledge in RST is the ability to classify objects, and is represented by an equivalence relation over the set of the objects. This knowledge can also be regarded as a viewpoint on the objects. For example, if an expert has the knowledge reflecting a project viewpoint, this knowledge could then used to classify U2 (organizational objects) into the teams related to the project4 . 3 4
Each element in a universe Ui is not a concept, but an actual occurrence or an instance which is used for composing the concepts. people who are not involved in the project will be classified into the one class, say, named unrelated class
236
Yoshiyuki Shinkawa and Masao J. Matsumoto
Since each expert has multiple viewpoints on each Ui (i = 1, 2, 3), the knowledge of a domain expert ej can be denoted by 1 2 Rij = {Rij , Rij ,...} k where Rij is the k-th piece of knowledge of the expert ej on universe Ui . The total knowledge we can acquire from these experts is p p p R1 = R1j , R2 = R2j , R3 = R3j j=1
j=1
j=1
and they compose the knowledge bases in the enterprise, K1 = (U1 , R1 ), K2 = (U2 , R2 ), K3 = (U3 , R3 ). We can define the basic model units as the equivalence classes classified by the knowledge. The task units, the organization units and the resource/information units can be denoted by the following respectively. X1 = {X ∈ U1 /IN D(R1 )} X2 = {X ∈ U2 /IN D(R2 )} X3 = {X ∈ U1 /IN D(R3 )} where IN D(Ri ) = ∩Ri is the indiscernibility relation over Ri as defined in the previous section. In addition to those classes, we need two sets of relationships to identify the enterprise model structure : “organization - task relationship” and “taskorganization relationship”, which are also elicited from the domain experts. 2.3
Identifying the Dynamic Aspect of Business Processes
The concepts, or the basic model units identified in the previous section, can determine only the structure of the business process. We need to specify the behavior of it for accurate modeling. We defined two relationships which are representing enterprise behavior, that is, an organization-task relationship and a task-organization relationship. Those relationships express “who performs a task” and “who receives the result of a task” respectively. In addition each task would deal with several resources. Therefore, an organization-task relationship which is recognized by a domainm1 mr k l expert ej can be expressed by a tuple in the form of X2j , X1j , (X3j , . . . , X3j ), k l where X2j is an organization or group which performs a task X1j by dealing with mr mn k l a set of resources (X m1 , . . . , X3j ). X2j , X1j and X3j are basic model units of organization, tasks and resources respectively, which are identified for the expert ej . A task-organization relationship also expressed in a similar form to the above, that is: m1 mr l k , X2j , (X3j , . . . , X3j ).
X1j In addition to those relationships, domain-experts can provide us with “transformation rules” if the relationships, since each relationship is associated with a task and resources, and the task can be considered to transform the associated resources.
Business Modeling and Component Mining Based on Rough Set Theory
237
i i i i Let E(X4j and E(X5j be the transformation rules of X4j and X5j respectively. The above individual based relationships must be integrated into enterprisewide relationships in order to build an enterprise-model. This integration can be accomplished by set operations as follows: m1 mr k l 1. Select an element X2j , X1j , (X3j , . . . , X3j ) in X4j
m
m
q k l 1 2. Find an element X2j = j ) which satis , X1j , (X3j , . . . , X3j ) in X4j (j fies k k l l ∩ X2j = ∅, X1j ∩ Xij =∅ X2j
m
ms ∩ X3j t = ∅] ∀s ∈ {m1 , . . . , mq } ∃t ∈ {m1 , . . . , mq } [X3j This step integrates the knowledge on the task across an organizational boundary. If there is no such element, repeat 1 until no elements left in X4j . m1 mr k l 3. Replace X2j , X1j , (X3j , . . . , X3j ) by m ms k k l l
X2j ∩ X2j , X1j ∩ X1j , ( X3j ∩ X3j t ) ms m m ms ∩ X3j t means the sequence of (X3j ∩ X3j t ) = ∅ where X3j Repeat 2 and 3 for all of the elements satisfying the condition 4. repeat the above procedure over the all elements in X4j .
At the end of the above procedure, we obtain a new set of tuples ˜ 4 = { X ˜ k, X ˜ l , (X ˜ m1 , . . . , X ˜ mr )} X 2 1 3 3 mn k ˜l ˜ ˜ where X2 , X1 and X3 is an organization model unit, a task model unit and a resource model unit respectively, which can be recognized commonly by all the ˜ l , (X ˜ m1 , . . . , X ˜ mr ) can be regarded as enterprise˜ k, X domain-experts. Those X 2 1 3 3 wide organization-task model units. We can identify a set of enterprise-wide task-organization model units ˜ 5 = { X ˜l, X ˜ k , (X ˜ m1 , . . . , X ˜ mr )} X 1 2 3 3 ˜ similarly to X4 . ˜ 1 = {X ˜ l }, X ˜ 2 = {X ˜ k } and X ˜ 3 = {X ˜ mn } can also be regarded as enterpriseX 1 2 3 wide model units which are corresponding to tasks, organizations or resources respectively. X1 , X2 and X3 can be regarded as the concept based model units, on the ˜ 1, X ˜ 2 and X ˜ 3 can be regarded as usage-based model units. other hand, X In ideal modeling, they should be identical to each other. ˜ 4 and X ˜ 5 , we can translate {E4j (X) : X ∈ X4j } and By the above X ˜ 4} {E5j (X) : X ∈ X5j } into the enterprise-wide model units {E4 (X) : X ∈ X ˜ and {E5 (X) : X ∈ X5 }, respectively. In order to show how the method works, let us think of a reference model process, which is an order processing business process including order acceptance, order validation, production, shipment and so on. Assuming that there are two domain experts named e1 , and e2 from a sales department and a production department respectively, they have knowledge on the common universes U1 , U2 , and U3 , along with the individual universes U4i and U5i (i = 1, 2).
238
Yoshiyuki Shinkawa and Masao J. Matsumoto
Table 2. Integrating Knowledge of the Experts X11 1 X11 2 X11 ... 4 X11 ... 8 X11 ... X21 1 X21 2 X21 ... 9 X21 ... X31 1 X31 2 X31 3 X31 ... X41 1 X41 ... X51 1 X51 ... 9 X51 ...
e1 = U1 /IN D(R11 ) : order entry : inventory check
e2 X12 = U1 /IN D(R12 ) 1 X12 : sales activity 2 X12 : production planning ...
: evaluation : production = U2 /IN D(R21 ) X22 = U2 /IN D(R22 ) 1 : reception office X22 : sales office 2 : inventory management X22 : production planner ... : factory = U3 /IN D(R31 ) : product name : product number : customer name (organization → task) 1 1 1 : X21 , X11 , (X31 , . . . ) (task → organization) 1 2 2 : X11 , X21 , (X31 , . . . ) 4 10 2 : X11 , X21 , (X31 , . . . )
X32 = U3 /IN D(R32 ) 1 X32 : product number 2 X32 : quantity 3 X32 : customer information ... X42 (organization → task) 1 1 2 1 X42 : X22 , X12 , (X32 , . . . ) ... X52 (task → organization) 1 1 1 1 X52 : X12 , X22 , (X32 , . . . ) ...
Table 2 depicts a part of the knowledge elicited from those experts. Table 3 shows a part of the model units which are obtained by the knowledge integration discussed in this section 5 . For example, 8 2 2 X11 ∩ X12 = X11 = X18 is the result of the classification with the equivalence relation R11 ∩ R12 , that is, the integration of knowledge R11 and R12 . Another example 9 1 ∩ X52 X51 4 10 2 1 1 1 = X11 , X21 , (X31 , . . . ) ∩ X12 , X22 , (X32 , . . . ) 4 1 10 1 2 1 = X11 ∩ X12 , X21 ∩ X22 , (X31 ∩ X32 , . . . ) 4 1 2 = X11 , X22 , (X31 , . . . ) 4 9 2 = X1 , X2 , (X3 , X33 , . . . ) is the result of integrating X51 and X52 . 5
˜ 1, X ˜ 2 and X ˜ 3 happen to be identical to We assume the usage based model units X ˜ 1, X ˜ 2 and X ˜3 the concept based model units X1 , X2 and X3 , therefore we omit X from Table 3
Business Modeling and Component Mining Based on Rough Set Theory
239
Table 3. Enterprise-wide Model Units X1 X11 X13 ... X18 ... X2 X21 ... X29 ... X3 X31 ... ˜4 X X41 ... ˜5 X X51 ... X59 ...
= IN D(R11 ∩ R12 ) : order entry, X12 : inventory check : credit check, X14 : evaluation : production planning, X19 : production control = IN D(R21 ∩ R22 ) : reception office, X22 : inventory management : production planner, X19 : production control = IN D(R31 ∩ R32 ) : product name, X32 : product number (enterprise-wide organization → task ) : X21 , X11 , (X31 , . . . ), X42 : X22 , X12 , (X31 , . . . ) (enterprise-wide task → organization ) : X11 , X22 , (X32 , . . . ), X52 : X11 , X23 , (X32 , . . . ) : X14 , X29 , (X32 , . . . )
The granularity of the knowledge is not uniform over the universes U1 , U2 and U3 for each experts. For example, the expert e1 in the sales department has rough knowledge or little knowledge on the production department and exact knowledge or enough knowledge on the sales department, on the contrary, the expert e2 in the production department has them inversely. RST can deal with such differences in knowledge mathematically. The approach described in this section leads us to the homogenous model over the business process, which is not biased by any particular expert. In other words, we can obtain the homogenous models, from any system analysts, which are independent to their abilities and experiences, thanks to the nature of our approach which is formal enough to achieve this. Those enterprise-wide model units can be used as templates to analyze other enterprises in similar industries, and would evolve model reuse.
3
Enterprise Modeling in CPN
The model units and their relationships identified in the enterprise are too com˜ 1, X ˜ 2, X ˜ 3, X ˜ 4, X ˜ 5, plex to understand by the description on X1 , X2 , X3 , X {E4 (X)} and {E5 (X)}, even though they contain enough information to ex-
240
Yoshiyuki Shinkawa and Masao J. Matsumoto
plain the enterprise. Therefore, we need more graphical and intuitive notation for making easy to understand. Colored Petri-nets (CPN) are one of the best methods to express those two aspects simultaneously. Petri-nets, including CPN, have been applied originally to control applications, such as process control or manufacturing control. However, recent researches proved that Petri-nets are suitable to model business processes [1,2,10]. CPN are defined as follows [4]. CPN=(S, P, T, A, N, C, G, E, I) , where S : a finite set of non-empty types, called color sets, P : a finite set of places, T : a finite set of transitions, A : finite set of arcs P ∩ T = P ∩ A = T ∩ A = ∅, N : node function A → P × T ∪ T × P , C : a color function P → Σ, G : a guard function T → expression, E : an arc expression function A → expression and I : an initialization function : P → closed expression. In general, a business process is represented by a CPN model in the following way. 1. Each transition represents an activity in a business process, which transforms resources and/or information. The transformation rules are described by arc expression functions. 2. Each input place represents an organization or a person that performs an activity. 3. Structure of a CPN model, that is, connections between places and transitions, represents the business rules which control the business process. 4. Guard and initialization functions also represent the business rules. 5. Each token represents resource or information processed by the activities. A color set represents a type of those resources and information. From the units and the relationships found in section 2.1, we can easily make the CPN model of the enterprise by the following procedure. ˜ 1 (organization model units) as places, and X j ∈ X ˜ 2 (task model 1. Put X1i ∈ X 2 units) as transitions ˜ ∗ ∈ X ˜4 2. Draw arcs from a place X2j to a transition X1i if X2j , X1i , X3 ∈ X 3 (organization → task relationships) ˜ ∗ ∈ X ˜5 3. Draw arcs from a transition Xi1 to a place X2j if X1i , X2j , X3 ∈ X 3 (task → organization relationships) 4. Put E4 (X4k ) as an arc expression function on the arc from a place X2j to a transition X1i if ˜4 X4k = X2j , X1i , X3 ∈ X3 ∈ X i X1 / ∪ X3 becomes the guard function of the transition X1i
Business Modeling and Component Mining Based on Rough Set Theory
241
p1
t1
Order Entry
p3
p2
t2
t3 Inventory Check
Credit Check
p4
p5
t4
Evaluation
p7
p6
t6
t5 Rejection Manufacturing
p8
t7
p12
p9
t8 Biling
p10
Shipping p11
Fig. 1. Process Model Example 5. Put E5 (X5l ) as an arc expression function on the arc from a transition X1i to a place X2j if ˜5 X5l = X1i , X2j , X3 ∈ X3 ∈ X By the above five steps, we can define the structure of the CPN model. An initialization function I determines the initial state of the CPN model, and represents the initial state of the business process to be modeled. Since various initial states could be possible, and those initial states do not affect component mining in our approach, we do not refer to an initial function any more. A CPN model for an application example from an order processing is shown in Fig. 1 (CPN diagram) and Table 4 (the description of it). In this model the place p1 corresponds to X11 (reception office), p2 corresponds to X22 and transition t1 corresponds to X11 (order entry) which were identified in section 2. The CPN model constructed in the aforementioned way would be reused in the similar business domains, since the model expresses the essential structure, functions and behavior of a business process at the appropriate level.
242
Yoshiyuki Shinkawa and Masao J. Matsumoto
Table 4. Sample CPN Model Description Color sets S = {C1 , C2 , C3 , C4 , C5 , C6 , C7 , C8 , C9 , C10 , D1 , . . . } C1 : Product Name, C2 : Product Number, C3 : Quantity, C4 : Customer Name, C5 : Credit Number, C6 : Customer Address, C7 : Check Result, C8 : Order Number, C9 : Warehouse Number, C10 : Price D1 = C1 × C3 × C4 × C5 × C6 : Order Form D2 = C8 × C2 × C3 : Availability Check Form D3 = C8 × C4 × C5 × C6 : Credit Check Form D4 = C8 × C2 × C3 × C7 × C9 : Shipment Information D5 = C8 × C2 × C7 : Rejection Information ... Places P = {p1 , p2 , . . . , p12 } Transitions T = {t1 , t2 , . . . , t8 } Color Function C(p1 ) = D1 , C(p2 ) = D2 , C(p3 ) = D3 , C(p4 ) = D4 ∪ D5 , ... Arc Expression Function E(p1 → t1 ) = Id, E(p2 → t2 ) = Id, . . . Id : Identity function E(t1 → p2 ) = (h1 (x1 ), h2 (x2 ), P roj 2) h1 : assigns an Order Number h2 : transforms a Product Name to a Product Number P roj i : Projection E(t1 → p3 ) = (h1 (x1 ), h2 (x2 ), P roj3, P roj4, P roj5) E(t2 → p4 ) = if x2 ∈ X1 ⊆ C2 (the product is available) then 18 (P roj1, P roj2, P roj3, , h3 (x2 )) else 18 (P roj1, P roj2, ⊥) h3 : C2 → C9 assign warehouse number ... Initialization function I(p1 ) = 18 (x1 , x2 , x3 , x4 , x5 ) ∈ D1 I(p) = ∅ (p = p1 )
In addition, this model is intuitively understandable in both business realm and software realm, therefore it would be a template of enterprise models in other industrial organizations.
4
Σ Algebra Expressed Business Process and Components
After constructing the CPN model of the business process, we have to retrieve the adaptable components to the model from various sources, such as COTS (Commercial Off-The-Shelf Software), software packages, legacy codes and so
Business Modeling and Component Mining Based on Rough Set Theory
243
on. In order to retrieve those adaptable components efficiently, we need to have the common notation for specifications between the model and the components, along with an adaptability evaluation method. The common notation could provide us with a unified way to evaluate adaptability of components to the requirements. 4.1
Σ Algebra as the Common Notation
We use Σ algebra for the purpose of common notation, since it has the superiority in expressing both the business process and the components, in addition, we can evaluate functional equivalency between them rigorously, which would be the measure of adaptability [10]. Σ algebra provides an interpretation for the signature Σ = (S, Ω), where S is a set of sorts, and Ω is an S ∗ × S sorted set of operation names which transform (x1 , x2 , . . . ) ∈ S ∗ into y ∈ S [15]. Here, S ∗ is the set of finite sequences of elements of S. A Σ algebra is an algebra (A, F ), where A = ∪Aσ (a set of carriers) and fA : Aσ1 × Aσn → Aσ . F = {fA |f ∈ Ωσ1 ...σn ,σ } S-sorted function fA is said to have arity σ1 . . . σn and result sort σ. Functional equivalency between two Σ algebras are represented by the existence of Σ homomorphism. Σ homomorphism η = {ησ |σ ∈ S, ησ : Aσ → Bσ } is a family of functions such that ∀f ∈ Ωσ1 ...σn ,σ , ∀ai ∈ Aσi [ησ (fA (a1 , . . . , an )) = fB (ησ1 (a1 ), . . . , ησn (an ))] where A=(A, F ) and B=(B, G) are Σ algebras. fA and fB are the functions in F and G respectively, or elements of A and B if n = 0. Arc functions in CPN models could become S-sorted functions in Σ algebra. However, they are often defined in a complicated way to compose the model with fewer arcs. Therefore those functions might be reduced to more simplified ones in order to compose Σ algebra. This simplification can be achieved by composing an S-sorted function from a pair of an output arc function and the input arc functions relating to it [11]. We can derive S-sorted functions and carriers uniquely from a given CPN by this method. 4.2
Transforming the CPN Model and the Components into Σ Algebra
In order to construct Σ algebra from a CPN model CP N = (S, P, T, A, N, C, G, E, I), we first focus on one transition “t” with input places “p1 , . . . , pm ”, input arc functions 6 “f1 , . . . , fm ”, output places “p1 , . . . , pm ”, and output arc functions “f1 , . . . , fm ”. Each fi transforms a token xi ∈ C(pi ) in input place pi into a token or tokens fi (xi ). Similarly, each fj produces a token or tokens according to the 6
We refer to arc expression functions on input arcs to a transition as input arc functions, and those on output arcs to a place as output arc functions.
244
Yoshiyuki Shinkawa and Masao J. Matsumoto
input tokens to the transition produced by the input arc functions, hence a token or tokens produced by fj can be denoted by fj (f1 (x1 ), . . . , fm (xm )). Since those fi and fj are kinds of operations over the color sets, we can regard the color sets as carriers in Σ algebra. If each fi (xi ) and fj (f1 (x1 ), . . . , fm (xm )) represent single tokens, that is, fi (xi ) ∈ C(pi ) and fj (f1 (x1 ), . . . , fm (xm )) ∈ C(pj ), we can define the carriers as Aσi = C(pi ) and Aσ = C(pj ) respectively. S-sorted function gj with the arity σ1 . . . σm and the result sort σ is derived from them by defining gj (y1 , . . . , ym ) = fj (f (x1 ), . . . , fm (xm )), where yi = fi (xi ) ∈ Aσi (= C(pi )) and gi ∈ Aσ (= C(pj )). However, fi and fj could be a multi-set over C(pi ) and C(pj ) respectively, therefore in this case, fi and fj are in the form of: fi (xi ) = ni1 y1 + ni2 y2 + · · · fj (f1 (x1 ), . . . , fm (xm )) = nj1 z1 + nj2 z2 + · · · where yk ∈ C(pi ) and zk ∈ C(pj ). In such case, the carriers should be regarded as (C(pi ))ni1 ×(C(pi ))ni2 ×· · · and (C(pj ))nj1 ×(C(pj ))nj2 ×· · · . The term “ni1 y1 ” means that ni1 tokens which have the value y1 are yielded by the function. In addition, since a multi-set for each fi and fj could be different according to the value of xi , we must define multiple carriers for each fi and fj in this case. For example, if an arc expression function is in the form of E(a) = if (conditions f or xi ) then . . . else . . . there could be two different multi-sets, and hence two different carriers would be generated. From the above discussion, multiple S-sorted functions with different arities and result sorts could be derived from those fi (i = 1, . . . , m) and fj . Let {Atσ } be a set of carriers identified in transition t, and {g t } be a set of S-sorted functions identified in t. By identifying those sets of carriers and functions all over the transitions in the CPN model, we can construct Σ algebra A=(A, F ), where {Atσ }, F = {g t }. A= t∈T
t∈T
For example, if we focus on the transition “t2 ” (inventory check) of the sample application, S-sorted functions with carriers are derived as follows. First, we can define the carriers related to the output arc function E(t2 → p4 ) as Aσ1 =C8 (order number), Aσ2 =C2 (product number), Aσ3 =C3 (quantity), Aσ =D4 (shipment information), Aσ =D5 (rejection information), according to the definition of E(p2 → t2 ) and E(t2 → p4 ) in the Table 1. Then we can derive two S-sorted functions g1 and g2 in such forms as: g1 (x1 , x2 , x3 ) = y = (x1 , x2 , x3 , , h3 (x2)) and g2 (x1 , x2 , x3 ) = y = (x1 , x2 , ⊥) where x1 ∈ Aσ1 , x2 ∈ Aσ2 , x3 ∈ Aσ3 , y ∈ Aσ , y ∈ Aσ , and h3 is the function to transform “Product Number” to “Warehouse Number”. By repeating such operation over all the transitions in the CPN model of the sample application, we can construct the Σ algebra representing the application. As for expressing the components by Σ algebra, it is much easier than the business process. Each software component transforms inputs (or arguments)
Business Modeling and Component Mining Based on Rough Set Theory
245
to an output, and the functionality of it can be regarded as an operation in Σ algebra. Types of the inputs and the outputs compose the sorts in Σ algebra. Therefore, a Σ algebra can be composed from those operations and sorts identified in the components.
5
Retrieving the Adaptable Components to the Business Process
Adaptability of software components to the requirements can be evaluated by Σ homomorphism, if both the requirements and the components are expressed in Σ algebra [13]. Σ homomorphism η = {ησ |σ ∈ S, ησ : Aσ → Bσ } is a family of functions such that ∀f ∈ Ωσ1 ...σn ,σ , ∀ai ∈ Aσi [ησ (fA (a1 , . . . , an )) = fB (ησ1 (a1 ), . . . , ησn (an ))] (n ≥ 0) where A=(A, F ) and B=(B, G) are Σ algebras. fA and fB are the functions in F and G respectively, or elements of A and B if n = 0. In the previous section, we expressed the requirements and the components in the form of Σ algebra. Therefore, we could evaluate equivalency between them by using Σ homomorphism as a measure. However, there are two problems to identify Σ homomorphism between the above two algebras from practical viewpoints. The first one is that those algebras could involve unnecessary sorts which prevent us from identifying Σ homomorphism. The second is that each algebra could use semantically different sorts, which make the adaptability evaluation difficult. In this section, we deal with methods to overcome those problems. 5.1
Eliminating Unnecessary Sorts from Σ Algebra
Since the requirement model is often built based on incomplete knowledge of the enterprise, each S-sorted function which is extracted from the model could have unnecessary carriers. In other words, sorts which do not affect the transformation rule of the function could be included in its arity. By eliminating those unnecessary carriers, we could make more essential adaptability evaluation between the model and the components. Let an S-sorted function in Σ algebra of the requirements be gσ1 ...σn ,σ , and the involved carriers be Aσ1 , . . . , Aσn , Aσ . An S-sorted function can be regarded as a decision table, if all involved carriers are countable 7 . The decision table is in the form of Table 5. Each number in the column N is an index of the decision table. Each row represents data transformation of the S-sorted function gσ1 ...σn ,σ , that is 7
This assumption is realistic for back-office applications, since all the resources and information dealt with in business processes must be finite.
246
Yoshiyuki Shinkawa and Masao J. Matsumoto
Table 5. Decision Table of an S-sorted Function N 1 2 .. . x .. .
σ1 v11 v21 .. . vx1 .. .
σ2 v12 v22 .. . vx2 .. .
. . . σm . . . v1m . . . v2m .. . . . . vxm .. .
σ v1 v2 .. . vx .. .
gσ1 ...σn ,σ (vx1 , . . . , vxn ) = vx By RST we can find the unnecessary columns which have no meanings to determine the data transformation. For any x ∈ N , let Xi and Y be Xi = [x]σi and Y = [x]σ respectively. Xi and Y are equivalent classes in N which satisfy ∀x ∈ Xi , vx i = vxi and ∀x ∈ Y, vx = vx . If for all x ∈ N , the family F = {X1 , X2 , . . . } satisfies ∩F ⊆ Y and ∩{F − {Xi }} ⊆ Y Xi is called dispensable and the decision table shows the same data transformation even though we eliminate the column σi [9]. In this case, gσ1 ...σn ,σ and gσ1 ...σi−1 σi+1 ...σn ,σ are equivalent, that is : ∀a, b ∈ Aσi [g(x1 , . . . , xi−1 , a, xi+1 , . . . , xn ) = g(x1 , . . . , xi−1 , b, xi+1 , . . . , xn )] where Aσi is the corresponding carrier to the sort σi . By eliminating a dispensable σi one after another until there remains no dispensable attribute, we get the simplest decision table which is equivalent to the original one. This simplest one is called reduct of the original one. The tables like Table 5 could be huge in size, however by introducing subdomains of the carriers in an S-sorted function, we can reduce them to practical size. A subdomain of a carrier Aσi is a class in terms of RST, if we regard Aσi as the universe. For example, suppose there are such subdomains in the carriers of the S-sorted function g1 which we derived in section 4.2 as: Aσ1 = O1 ∪ O2 ∪ O3 (a carrier for order number) Aσ2 = P1 ∪ P2 (a carrier for product number) Aσ3 = Q1 ∪ Q2 (a carrier for quantity) Aσ = W1 ∪ W2 ∪ W3 ( a carrier for warehouse number) where Oi means a subdomain consisting of similar orders, e.g. those from the same region. Pi , Qi and Wi are also subdomains defined based on such similarity as the above. Assuming that those subdomains determine particular transformation rules of g1 , we can make a decision table by those subdomains instead of individual values, and can reduce the size of the table. For example, if we have the decision table like Table 6, we can identify the sort σ3 to be dispensable, by the set calculus mentioned above, and conclude (g1 )σ1 σ2 σ3 ,σ = (g1 )σ1 σ2 ,σ .
Business Modeling and Component Mining Based on Rough Set Theory
247
Table 6. Decision Table of a Sample S-sorted Function N 1 2 3 4 5 6 7 8
σ1 O1 O1 O2 O3 O1 O2 O1 O3
σ2 P1 P2 P2 P1 P1 P2 P2 P1
σ3 Q1 Q2 Q1 Q1 Q2 Q2 Q1 Q2
σ W1 W2 W3 W3 W1 W3 W2 W3
A reduct represents the essential part of the original S-sorted function, and therefore we use reducts instead of the original functions for components adaptability evaluation. By using reducts, we can find out the components which are not fully equivalent to the requirements, but are practically equivalent. Therefore, this method would expand the scope of adaptable components, and contributes to components reusability. 5.2
Sorts Adjustment and the Adaptable Component Retrieval
First, we focus on one reduct of an S-sorted function derived from the requirements. Let φ be a reduct with arity σ1 . . . σm and result sort σ, which have the corresponding carriers Aσ1 , . . . , Aσm and Aσ . Each carrier can be regarded as representing a set of resources or information in the enterprise, and hence corresponds to a class in the universe U discussed in the previous section. We refer to this class as home class of the carrier. Let K = (U, R) be the knowledge base which can make all the home classes for the reducts in the requirements. Assuming that the home classes are Xi ⊆ U and X ⊆ U for Aσi and Aσ respectively. Aσi and Aσ are derived by a function ImA , which means Aσi = ImA (Xi ) and Aσ = ImA (X) Similarly, we can express a carrier Bσi and Bσ in the components as Bσi = ImB (Yi ) and Bσ = ImB (Y ) , where Yi and Y are classes of U classified by the knowledge base K = (U, R ) in the components. The common sorts between the requirements and the components can be derived as follows. 1. For each Aσi , and Aσ , let Xi and X be the home classes in U , that is, Aσi = ImA (Xi ) and Aσ = ImA (X). Calculate the upper approximations of Xi and X by the knowledge base K = (U, R ). These approximations are denoted by R Xi = {Yi1 , . . . , Yipi }, R X = {Y1 , . . . , Yp } where R = IN D(R ).
Fig. 2. Sorts Adjustment
2. Derive the new carriers with the new sorts as follows:

Aρij = ImA(Xi ∩ Yij) ⊆ Aσi
Bρij = ImB(Xi ∩ Yij) ⊆ Bσij = ImB(Yij)
Aρk = ImA(X ∩ Yk) ⊆ Aσ
Bρk = ImB(X ∩ Yk) ⊆ Bσk = ImB(Yk)

where j = 1, . . . , pi and k = 1, . . . , p respectively. Fig. 2 shows this intuitively.

We can define a set of new S-sorted functions {φ̃} with arity ρ1i1 . . . ρmim and result sort ρk, where i1 = 1, . . . , p1, . . . , im = 1, . . . , pm, and k = 1, . . . , p. φ̃ can be defined as φ̃(x1, . . . , xm) = φ(x1, . . . , xm) for xi ∈ Aρij (the restriction of φ to Aρij). Those {φ̃}, {Aρij} and {Aρk} compose a Σ algebra Ã = (A, Φ̃).

On the other hand, if there is an S-sorted function ψ with arity σ1i1 . . . σmim and result sort σk, we can define a set of new S-sorted functions {ψ̃} with arity ρ1i1 . . . ρmim and result sort ρk, in the same way as {φ̃}. There could be multiple sets of {ψ̃} with the carriers {Bρij} and {Bρk}, and those are the candidates for the corresponding functions for {φ̃}. Those {ψ̃}, {Bρij} and {Bρk} compose a Σ algebra B̃ = (B, Ψ̃).

Therefore, we can determine the equivalency between the Σ algebra Ã = (A, Φ̃) and the Σ algebra B̃ = (B, Ψ̃) by examining the existence of a Σ homomorphism between A and B. If we can identify a Σ algebra B̃ = (B, Ψ̃) equivalent to the Σ algebra Ã = (A, Φ̃), the original function g of {φ̃} has functionally equivalent functions {ψ̃} with arity σ1i1 . . . σmim and result sort σk.

In order to show a simple example, we use a reduct of the S-sorted function g1 which was introduced in section 5.1. We denote the reduct by φ. The reduct has the arity σ1 σ2 and the result sort σ, which correspond to "Order Number",
"Product Number" and "Shipment Information" respectively. The discussion is based on the following assumptions.

1. We have already identified two S-sorted functions or their reducts ψ1, ψ2 from the components, which have arities and result sorts σ′1 σ′21, σ′3 and σ′1 σ′22, σ′3 respectively.
2. The home classes of σ′1 and σ′3 are identical to those of σ1 and σ. The home classes of σ′21 and σ′22 represent a set of "raw materials" and a set of "parts" respectively.

Let the home classes of σ2, σ′21 and σ′22 be X2, Y21 and Y22, which correspond to the carriers Aσ2, Bσ′21 and Bσ′22 respectively. If those home classes satisfy X2 ⊆ Y21 ∪ Y22 and X2 ∩ Y2i ≠ ∅ (i = 1, 2), then we can introduce the two new sorts ρ21 and ρ22 which correspond to the home classes X2 ∩ Y21 and X2 ∩ Y22. By using those sorts, we can define the pairs of S-sorted functions with the common sorts as:

(φ̃1)σ1 ρ21, σ ←→ (ψ̃1)σ1 ρ21, σ
(φ̃2)σ1 ρ22, σ ←→ (ψ̃2)σ1 ρ22, σ

where φ̃1 and ψ̃1 can be interpreted as the S-sorted functions that deal with "raw materials", while φ̃2 and ψ̃2 are those that deal with "parts". The sorts adjustment makes it possible to evaluate equivalency between two conceptually different S-sorted functions which represent the requirements and the components, and expands the scope of component reusability. By examining all S-sorted functions within the requirements model in the above way, we can determine whether the components are functionally adaptable to the requirements CPN model. The S-sorted functions in the requirements model which do not have corresponding functions in the components must be developed to construct the adaptable software system.
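The set calculus in this example is easy to mechanize. The following Python sketch is our own illustration (the element values are invented); it checks the coverage condition and derives the home classes of the two new sorts.

    # Home classes as sets (illustrative values, not from the paper).
    X2  = {"o-ring", "bolt", "gearbox", "panel"}    # home class of sigma2 (requirements)
    Y21 = {"o-ring", "bolt", "steel", "resin"}      # home class of sigma'21 ("raw materials")
    Y22 = {"gearbox", "panel", "axle"}              # home class of sigma'22 ("parts")

    # Preconditions for the sorts adjustment.
    assert X2 <= (Y21 | Y22)                        # X2 is covered by Y21 and Y22
    assert X2 & Y21 and X2 & Y22                    # both intersections are non-empty

    rho21 = X2 & Y21    # home class of the new sort rho21
    rho22 = X2 & Y22    # home class of the new sort rho22
    print(sorted(rho21), sorted(rho22))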
6
Conclusions
A formal approach dedicated to modeling business processes and to mining components adaptable to the models is presented in this paper. In the modeling phase, a systematic way based on RST and CPN helps us to identify the basic structure of the target domain. The structure includes the basic model units used for composing the domain and the relationships among them. We can identify the structure by observation of the domain and by knowledge elicitation and integration from various domain experts. Our method is formal enough that we can construct very homogeneous models independently of the ability or experience of domain experts or system analysts. This homogeneity of models increases their reusability. In the component mining phase, Σ algebra makes mathematically rigorous component selection possible. We showed a way to express both the model and the components in the form of Σ algebras. RST makes it possible to compare those Σ algebras even though they are composed of different signatures, that is, different sets of functions and sorts. The method extends the scope
of component reusability, since it makes components reusable in problem domains whose algebraic structure differs from their own, domains in which they would previously have been regarded as unadaptable.
References

1. W. M. P. van der Aalst and K. M. van Hee. Framework for Business Process Redesign. Proc. of the Fourth Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises (WET ICE '95), pp. 36-45, IEEE, (1995) 232, 240
2. G. Graw, V. Gruhn, and H. Krumm, Support of Cooperating and Distributed Business Processes. Proc. of 1996 International Conference on Parallel and Distributed Systems, pp. 22-31, IEEE, (1996) 232, 240
3. J. Grundy. Aspect-oriented Requirements Engineering for Component-based Software Systems. Proc. of Fourth International Conference on Requirements Engineering, pp. 84-91, IEEE, (1999) 232
4. K. Jensen, Coloured Petri Nets, Volume 1, Second corrected printing. Springer-Verlag, (1997) 240
5. N. A. Maiden and C. Ncube, Acquiring COTS Software Selection Requirements. IEEE Software, March/April, pp. 46-56, IEEE, (1998) 232
6. Z. Pawlak, Rough Sets: Theoretical Aspects of Reasoning About Data. Kluwer Academic Pub, (1992) 234
7. Z. Pawlak, J. Grzymala-Busse, R. Slowinski and W. Ziarko, Rough Sets. Communications of the ACM 38, pp. 89-95, ACM, (1995) 234
8. C. J. Petrie (Editor), Enterprise Integration Modeling: Proc. of the First International Conference. MIT Press, (1992) 233
9. A. W. Scheer, ARIS-Business Process Frameworks, Springer-Verlag, (1998) 246
10. Y. Shinkawa and M. J. Matsumoto, On Legacy System Reusability Based on CPN and CCS Formalism. Proc. of Ninth International Workshop on Database and Expert Systems Applications (DEXA'98), pp. 802-810, IEEE, (1998) 240, 243
11. Y. Shinkawa, A New Approach to Build Enterprise Information Systems for Global Competition. Research Report of IEICE, SGC99-20, pp. 29-42, IEICE, (1999) 232, 243
12. I. Sommerville, P. Sawyer and S. Viller, Viewpoints for requirements elicitation: a practical approach. Proc. of Third International Conference on Requirements Engineering, pp. 74-81, IEEE, (1998) 232
13. D. A. Taylor, Business Engineering with Object Technology, John Wiley and Sons Inc., (1995) 232, 245
14. F. B. Vernadat, Enterprise Modeling and Integration. Chapman and Hall, (1996) 233
15. W. Wechler, Universal Algebra for Computer Scientists. Springer-Verlag, (1992) 232, 243
Visualization of Reusable Software Assets

Omar Alonso1 and William B. Frakes2

1 Oracle Corp., 500 Oracle Parkway, Redwood Shores, CA 94065.
[email protected]
2 Computer Science Dept., Virginia Tech, 7054 Haycock Rd, Falls Church, VA 22053.
[email protected]

Abstract. This paper presents methods for helping users understand reusable software assets. We present a model and software architecture for visualizing reusable software assets. We describe visualization techniques, based on design principles, for helping the user understand and compare reusable components.

Keywords: Representation methods, Software assets, Information visualization, 3Cs, XML.
1
Introduction
This paper presents methods for helping users understand reusable software components. This is important because if software engineers cannot understand components, they will not be able to reuse them [10]. Current methods for representing reusable components are inadequate. A study of four common representation methods for reusable software components showed that none of the methods worked very well for helping users understand the components [13]. Our approach to helping potential users understand reusable software components is to use visualization techniques. We describe a model and system for storing, retrieving, and visualizing components in a software repository. We argue that having a tool to visualize components in different ways can help users understand and integrate them into applications. We explore the use of visualization techniques with a couple of examples from the information retrieval domain as a starting point. We also propose a software architecture that implements our ideas.
2
Related Work
Research on visualization is quite active. Much of the work focuses on visualization of scientific data from the physical sciences. Research on software visualization often concerns algorithms and code. Algorithm representations and animations, for example, are commonly used in teaching and research [22]. Aspects of code such as
function call and include relationships can be visually represented using tools such as CIA and CIA++ [6]. There has also been considerable work on the development of notations to support various software design methods. Baecker and Marcus applied human factors and typography techniques to source code [2]. They proposed a new software engineering approach called program visualization. In this approach they suggest the importance of enhancing program representation, presentation, and appearance. They provided seventeen design principles, several design variations, and developed a graphic design manual for C along with its graphic parser. Unfortunately they did not explore other program metrics that could go beyond improving program presentation. In a more recent paper they explored how graphical and auditory representations of programs can help the debugging process [1]. SeeSoft allows the analysis of statistical data from large systems [7], [3]. SeeSoft introduced new techniques for visualization and analysis of source code that can be summarized in four ideas: reduced representation, coloring by statistic, direct manipulation, and the capability to read actual code. Eick's group is part of Visual Insights1, which provides ADVIZOR, a set of components for interactive data visualization. Knuth proposed a paradigm called literate programming that combines documentation with program code in a way that is easy for humans to read and understand [16]. Knuth designed WEB, a system that implements the ideas of literate programming. CWEB is a version of WEB for documenting C, C++, and Java programs.
3
Find and Understand
Users typically search for components by submitting queries to systems that retrieve assets. The user evaluates the output and if necessary, refines the query. The reuse search and retrieval problem is well understood and there are several summaries of approaches [11], [19]. Our approach is to emphasize the understanding process assuming a known search method. As a basis of understanding we use the 3Cs model [18]. The 3Cs model of reuse design provides a high level framework that has been found useful in the design of reusable assets. The model indicates three aspects of a reusable component - its concept, its content, and its context. The concept specifies the abstract semantics of the component, the content specifies its implementation, and the context specifies the environment necessary to use the component. For a software component, the concept might correspond to an abstract data type (ADT), whose implementation might be a C program. This component's context might require a workstation running UNIX, and a GNU C compiler. Figure 1 shows a scenario in which a user needs assets. Using a search system, the user will query the repository and get, if found, a list of assets that could answer their
1 http://www.visualinsights.com
needs. At this point the user must understand the assets to perform a good evaluation of them. Using visualizations as a representation of the attributes and values of the assets, including concept, content, and context, the user can have a better understanding of the components. For example, let's assume that the user is looking for string searching components. After performing the query the search system returns a list of all the matching string searching assets found in the repository. Each asset has 3Cs (concept, content, and context) which are ordered: concept precedes content and content precedes context. If the user can't understand the concept of the component, there's no interest in its contents or its context. If the user's concept of a string searching component differs from the concepts in the string searching assets, the user should reformulate the query. If the asset concept is clear to the user, the next step is to understand its content (the type of algorithm the code implements, type of text target, etc.). Finally, if the content suits the initial requirement, the user will consider the context (the version of compiler, operating system, expected running time, etc.). The identification of concept, content, and context is not always easy and sometimes there are no clear boundaries between them. For example, an executable specification may be considered as both concept and content.
Fig. 1. Reuse understanding scenario
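The ordered evaluation of the 3Cs can be pictured with a small data structure. The following Python sketch is ours, not part of the paper's system; the field names and the evaluate helper are assumptions made only for illustration.

    # An asset described by its 3Cs, evaluated in the order concept -> content -> context.
    from dataclasses import dataclass

    @dataclass
    class Asset:
        name: str
        concept: str     # abstract semantics, e.g. "string searching"
        content: str     # implementation, e.g. "KMP algorithm in C"
        context: str     # environment, e.g. "UNIX, GNU C compiler, string.h"

    def evaluate(asset, wanted_concept):
        if wanted_concept.lower() not in asset.concept.lower():
            return "reformulate the query"   # concept mismatch: content and context are never examined
        # Only now is the content worth reading, and only after that the context.
        return f"inspect content ({asset.content}), then context ({asset.context})"

    kmp = Asset("KMP.c", "string searching", "Knuth-Morris-Pratt in C", "UNIX, GNU C, string.h")
    print(evaluate(kmp, "string searching"))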
4
Visualization Reference Model
Card et al. define visualization and information visualization as follows [5]. Visualization is the use of computer-supported, interactive, visual representations of data to amplify cognition. Information visualization is the use of computer-supported, interactive, visual representations of abstract data to amplify cognition. Visualization is the process of creating a visual representation for data and information that is not inherently spatial. In the rest of this section we describe the reference model for visualization that was proposed by Card et al. [5]. This reference model is useful because it is simple and supports comparison of different information visualization systems. Figure 2 shows the reference model. We can see that arrows flow from Raw Data to the human, indicating a series of data transformations. Each arrow might indicate multiple chained transformations. Arrows flow from the human at the right into the transformations themselves, indicating the adjustment of these transformations by the user. Data Transformations map Raw Data, that is, data in some specific format, into Data Tables, relational descriptions of data extended to include metadata. Visual Mappings transform Data Tables into Visual Structures, structures that combine spatial substrates, marks, and graphical properties. Finally, View Transformations create Views of the Visual Structures by specifying graphical parameters such as position, scaling, and clipping. The core of the reference model is the mapping of a Data Table to a Visual Structure.
Fig. 2. The reference model for visualization

The main goal behind transforming raw data into a data table is that it is easier to map a data table into a visual structure. A data table combines relational data with metadata that describes them. For example, a relation:
The basic idea behind this algorithm is that each time a mismatch is detected, the false start consists of characters that we have already examined
KMP, string searching, algorithms
25 C lines
2n + O(m) worst case
NA
NA
<usage> kmpsearch (text, pat)
NA
<makefile>yes
string.h

Shift-Or (C version). sosearch
R. Baeza-Yates
NA
Shift-Or string searching algorithm
<description> The basic idea is to represent the state of the search as a number. Each search step costs a small number of arithmetic and logical operations.
Shift Or, string searching, algorithms
23 C lines
O(m)
Yes
Yes
<usage> sosearch (text, pat)
NA
<makefile>yes
string.h
7.3 Visual Structures

The visual structure renders the 3CML documents to a visual metaphor. If the visual metaphor does not support XML, a transformer can translate the XML data into an internal format and then display the results. In our approach, the files in 3CML are mapped to a visual structure, which augments a spatial substrate with marks and graphical properties. We can say that a visualization metaphor is a kind of visual structure. It is important that all the data in 3CML is mapped to a visual structure.
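As a rough illustration of this mapping, the following Python sketch is ours; the 3CML tag names other than usage, makefile, and description are invented for the example. It reads a 3CML-like document into a data table and then into a tree-shaped visual structure keyed by the 3Cs.

    import xml.etree.ElementTree as ET

    # A 3CML-like asset description (tag names other than usage/makefile/description are assumed).
    doc = ("<asset><concept>string searching</concept>"
           "<content><description>Shift-Or string searching algorithm</description>"
           "<usage>sosearch (text, pat)</usage></content>"
           "<context><makefile>yes</makefile><include>string.h</include></context></asset>")

    root = ET.fromstring(doc)

    # Data transformation: raw XML -> data table of (attribute, value) rows.
    table = [(elem.tag, (elem.text or "").strip())
             for child in root
             for elem in ([child] if len(child) == 0 else list(child))]

    # Visual mapping: data table -> a tree visual structure rooted at the concept.
    tree = {root.findtext("concept"): {child.tag: {e.tag: e.text for e in child}
                                       for child in root if child.tag != "concept"}}
    print(table)
    print(tree)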
8
Examples
This section explains some visualization examples of string searching and stemming algorithms using two techniques: alternative geometry and trees.

8.1
Alternative Geometry
Inxight's Hyperbolic Tree2 is an implementation of a focus+context technique based

2 http://www.inxight.com/
on hyperbolic geometry for visualizing and manipulating large hierarchies. This technique assigns more display space to a portion of the hierarchy while still embedding it in the context of the entire hierarchy [17]. The Hyperbolic Tree browser initially displays a tree with its root at the center, but the display can be transformed to bring other nodes into focus. The user needs only to point to the node and drag it with the mouse to a particular place on the screen to put it in focus. With a double click on that node, the regular browser loads the asset. In all the scenarios, the amount of screen space available to a node falls off as a function of its distance in the tree from the node in focus. Figure 4 shows the visualization of two conflation (stemmer) components [9]. The concept is the root and it is highlighted. There are two implementations of the Porter algorithm available (content). One is a C version and the other is a Perl version. Each content represents a link from the root. For the C version we have more assets (context): the makefile, the header, the driver, and test data. In contrast, we do not have more assets for the Perl version. There is more information for the C version than for the Perl version of the stemmer.
Fig. 4. Using the Hyperbolic Tree to visualize stemming components
The user can issue a double click on the Porter C asset and see the source code in the Web browser. Figure 5 shows that example. This is an example of the Hyperbolic Tree as a navigation aid for a software assets repository.
Fig. 5. Hyperbolic Tree as a navigation aid for a software repository
Figure 6 shows a view of all the source code of an information retrieval library (based on [8]). The arrangement of the string searching components by book chapter is easy to see, but not much more than that. The visualization is basically a tree structure based on chapters. All those components share the same header file (context), but that is not apparent, since the contents do not represent the assets in terms of the 3Cs model.

8.2
Trees
Tree visual structures encode hierarchical data, typically by using connection or containment. Connection is used to create node-link diagrams that are useful for encoding relationships between cases. Figure 7 shows a tree-like structure of string searching assets. The book icon represents the library of components, and each folder represents one of the 3Cs. We can see a top folder for the concept "string searching", content, and context. There are three main algorithms as content (a C naïve, a Java naïve, and a C version of Knuth-Morris-Pratt). There are also context assets for KMP: a test case, the string.h import file, and the makefile.
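The containment hierarchy of Figure 7 can be written down directly as nested data. This Python sketch is ours; the node labels merely echo the assets named above.

    # The string searching library of Figure 7 as a containment tree (illustrative sketch).
    library = {
        "string searching (concept)": {
            "content": ["naive search (C)", "naive search (Java)", "KMP (C)"],
            "context": {"KMP (C)": ["test case", "string.h", "makefile"]},
        }
    }

    def show(node, depth=0):
        # Print the tree with indentation standing in for containment.
        if isinstance(node, dict):
            for label, child in node.items():
                print("  " * depth + label)
                show(child, depth + 1)
        elif isinstance(node, list):
            for label in node:
                print("  " * depth + label)

    show(library)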
Fig. 6. The Hyperbolic Tree as a site map for source code
9
Summary and Future Work
This paper has presented issues in the visualization of reusable assets, and a model and architecture for visualizing reusable components from a software library. This is an important area for helping users understand assets so they can reuse them. Linkages between this work and the general process of searching for and finding reusable assets were established, as were linkages to other work in software visualization. Our visualizations are grounded in reuse design principles, such as the 3Cs model, and in general principles of information design such as those of Tufte. We described how to use an extension of XML to describe assets in terms of the 3Cs model. This basic description allows the development of multiple representations via transformations. We plan to continue refining the 3CML language to provide richer descriptions of assets.
Fig. 7. A tree view of string searching algorithms
We described visualization methods that are implemented as Java applets. We plan to develop more applets that implement different visualization techniques in the next step of the project. We are also interested in integrating other commercial visualization products. They will provide more information for decision making. We plan to continue using the domain of information retrieval as a testbed and will add more components from other systems to the repository. We also want to run experiments to test our hypothesis that visualization tools can significantly improve user understanding of reusable components.
Acknowledgments We thank the anonymous reviewers for their comments and feedback.
References

1. R. Baecker, C. DiGiano, and A. Marcus. "Software Visualization for Debugging". Comm. of the ACM, Vol. 7, No. 4 (April 1997).
2. R. Baecker and A. Marcus. Human Factors and Typography for More Readable Programs. ACM Press, Addison-Wesley, Reading, MA (1990).
3. T. Ball and S. Eick. "Software Visualization in the Large". IEEE Computer 29(4), April (1996).
4. J. Bertin. Semiology of Graphics (English Translation). Univ. of Wisconsin Press (1978).
5. S. Card, J. Mackinlay, and B. Shneiderman. Readings in Information Visualization. Morgan Kaufmann, San Francisco, CA (1999).
6. Y.-F. Chen, M. Nishimoto, and C. V. Ramamoorthy. "The C Information Extractor". IEEE Transactions on Software Engineering, 16(3), 325-334 (1990).
7. S. Eick, J. Steffen, and E. Summer. "Seesoft - A Tool For Visualizing Line Oriented Software Statistics". IEEE Transactions on Software Engineering, 18(11), November (1992).
8. W. B. Frakes and R. Baeza-Yates (Eds.). Information Retrieval: Data Structures and Algorithms. Prentice-Hall, Englewood Cliffs, NJ (1992).
9. W. Frakes. "Stemming Algorithms". In W. B. Frakes and R. Baeza-Yates (Eds.), Information Retrieval: Data Structures and Algorithms. Prentice-Hall, Englewood Cliffs, NJ (1992).
10. W. Frakes and C. Fox. "Quality Improvement Using A Software Reuse Failure Modes Model". IEEE Transactions on Software Engineering, Vol. 22, No. 4 (April 1996).
11. W. Frakes and P. Gandel. "Representing reusable software". Inf. Software Technology, 32, 10 (1990).
12. W. Frakes and B. Nejmeh. "An Information System for Software Reuse". In Twentieth Annual Hawaii International Conference on Systems Sciences, Kona, Hawaii (1987).
13. W. Frakes and T. Pole. "An Empirical Study of Representation Methods for Reusable Software Components". IEEE Transactions on Software Engineering, Vol. 28, No. 8 (August 1994).
14. C. Goldfarb and P. Prescod. The XML Handbook. Prentice-Hall, Upper Saddle River, NJ (1998).
15. E. Guerrieri. "Software Document Reuse with XML". Proceedings of the Fifth International Conference on Software Reuse, BC, Canada (1998).
16. D. Knuth. Literate Programming. CSLI Lecture Notes No. 27, Stanford, CA (1992).
17. J. Lamping and R. Rao. "The Hyperbolic Browser: A Focus + Context Technique for Visualizing Large Hierarchies". Journal of Visual Languages and Computing 7(1) (1996).
18. L. Latour, T. Wheeler, and W. Frakes. "Descriptive and Prescriptive Aspects of the 3 C's Model: SETA1 Working Group Summary". Ada Letters, XI(3), 9-17 (1991).
19. A. Mili, R. Mili, and R. T. Mittermeir. "A Survey of Software Reuse Libraries". Annals of Software Engineering, Vol. 5 (1998).
20. Oracle 8i Reference Manual. Oracle Corp., Redwood Shores, CA (1999).
21. M. Shaw and D. Garlan. Software Architecture. Prentice-Hall, Upper Saddle River, NJ (1996).
22. J. Stasko, B. Price, and M. Brown (Editors). Software Visualization: Programming As a Multimedia Experience. MIT Press, Cambridge, MA (1998).
23. E. Tufte. The Visual Display of Quantitative Information. Graphics Press, Cheshire, CT (1982).
24. E. Tufte. Envisioning Information. Graphics Press, Cheshire, CT (1990).
25. W3C. "Extensible Markup Language (XML)". http://www.w3.org/TR/1998/REC-xml-19980210.
Reasoning about Software-Component Behavior
Murali Sitaraman1, Steven Atkinson1, Gregory Kulczycki1, Bruce W. Weide2, Timothy J. Long2, Paolo Bucci2, Wayne Heym2, Scott Pike2, and Joseph E. Hollingsworth3

1 Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV 26506, USA
2 Computer and Information Science, The Ohio State University, Columbus, OH 43210, USA
3 Computer Science, Indiana University Southeast, New Albany, IN 47150, USA

Abstract. The correctness of a component-based software system depends on the component client's ability to reason about the behavior of the components that comprise the system, both in isolation and as composed. The soundness of such reasoning is dubious given the current state of the practice. Soundness is especially troublesome for component technologies where source code for some components is inherently unavailable to the client. Fortunately, there is a simple, understandable, teachable, practical, and provably sound and relatively complete reasoning system for component-based software systems that addresses the reasoning problem.
Keywords: component-based software, reasoning, software component, software reuse, specification, verification.
1
Introduction
Both the object-oriented literature and common sense suggest that component-based software development, and the resulting software reuse, should improve programmer productivity and software quality because:

- less new code must be written to produce the same results, and
- off-the-shelf components should be "well-seasoned" and therefore more reliable than code written from scratch.

Both these observations are basically valid. But they are not the main reason why component-based software has the potential to dramatically improve
software engineering practice. The key feature of well-designed software components is that they, or more specifically, the mathematical models used to explain them, can help you understand, and reason soundly about, the execution-time behavior of component-based software systems. Don Knuth emphasizes the importance of such reasoning in an interview in Byte magazine [8]:

    [People in the object computing realm] haven't yet built a reliable way to reason about these programs, that is, we still lack the mathematical proofs to ensure a program will work. With object oriented programs, we have much less of an understanding of how we would ever prove that they don't have bugs. This is a huge gap. If people can understand OOP, they ought to be able to prove that the programs are correct.

How can this problem be addressed, especially when some components in the program are not available in source form? In this paper we describe how to use mathematical modeling to explain and to reason about software-component behavior, i.e., the computational states reached during execution. We also demonstrate why you must use appropriate mathematical models if you expect to be able to reason about the composite behavior of software systems built from such components.
2
The Reasoning Problem
Any robust software-development paradigm must provide an answer to the reasoning problem [12, 17]; namely, how can you reason soundly about the behavior of a statement without actually executing it on a computer? The argument for this claim is straightforward. Suppose you could not reason abstractly about what a statement does, that you had to run it on a computer to see what happens. Then how would you choose a statement to ask the computer to execute? Trial-and-error is a surprisingly common approach for newcomers to computing, but it cannot work for software professionals because clearly there are just too many possible statements to try them all. You must be able to do some reasoning just to prune the options. A practical solution to the reasoning problem must be effective and reliable, not mere guesswork, even if you never try to "prove" anything about your programs. Consider a common built-in programming type such as Integer. How, for example, do you reason about the effect of code involving objects (variables1) of type Integer? A hardware engineer might view the value of an Integer object as a boolean vector, and the high-level-language operators "+" and "-" as macros that stand for hardware control sequences which manipulate boolean vectors.
1 We use the word "object" throughout this paper to emphasize that the techniques illustrated apply to object-oriented programs involving inheritance, etc., as well as to the "object-based" program fragments that constitute this paper's examples. For instance, [17] shows how the approach works with component-based C++ software; see also http://www.cis.ohio-state.edu/~weide/sce/now.
"Boolean vector" is an example of a mathematical model for the value of an Integer object, i.e., something that defines a mental image for the object's value and provides a machine-processable notation that supports formal reasoning about that object's behavior. The boolean vector model for programming type Integer works well for the hardware designer who is implementing arithmetic circuits. But it is at best unnecessarily complex for the software engineer who is a client of that hardware. For a software engineering task, you normally view the value of an Integer object according to a more appropriate mathematical model: a mathematical integer. You also picture Integer operators such as "+" and "-" as performing additions and subtractions of mathematical integers. You don't think about Integer objects in terms of internal representations, but in terms of their representation-neutral (i.e., "abstract") mathematical models.
3
Reasoning with Software Components
Component-based software development aggravates the reasoning problem because it significantly widens the semantic gap between the kinds of real-world information you can write programs to process, and the bits that computer hardware ultimately is able to process. Appropriate mathematical models have long since been adopted for the built-in types provided by programming languages. But in component-based software development you use not only these built-in types, which are one or two levels removed from the hardware, but also much higher-level types defined by off-the-shelf software components with powerful operations whose exact behavior can be complex, and even mysterious if it is not very carefully described. What are appropriate mathematical models for these types? The burgeoning popularity of component technologies, from the early Booch components [2] through such distributed object technology contenders [10] as CORBA, DCOM, and Jini, makes it imperative that reasoning difficulties with component-based software be dealt with before they lead to a software disaster. Fortunately, software components present an opportunity along with the reasoning challenge. Every programming type gives you something to "wrap" with an appropriate mathematical model. In fact, researchers have already used this idea to tie formal mathematical models to some popular-technology components [9]. The models involved are more complex than simple mathematical integers. But they are far less complex than the underlying bits used in computer representations and the code that transforms them, which must remain the last resort for understanding program behavior. Mathematical modeling also provides guidance when trying to identify and design new domain-specific software components. Textbooks on the subject usually stop short of detailed component designs. They assume that the domain-specific concepts identified by analysis, if named appropriately, will be intrinsically understandable to domain experts through intuitive or metaphorical models (e.g., "a stack is like a stack of cafeteria trays"). But in complex domains
where system correctness is very important, such as air-traffic control, the precise behavioral details of software components must be so well-understood that specification by wishful naming and content-free explanations such as "a stack is like a stack" cannot suffice. Moreover, the software objects in a system often do not correspond one-to-one with actual physical objects, making it impossible to explain the behavior of some software objects by appealing to physical analogies. Implementations of complex domain-specific components usually are layered over other complex components, making it practically impossible to understand their behavior by sifting through their implementation code. Finally, in many component technologies no source code is available for some or all components.
4
An Example: "List"

We might have used a software component from a domain such as air-traffic control as an example of selecting and using appropriate mathematical models. But the point is clear (perhaps even clearer) when the component in question deals with something most software engineers seem to know and "understand" very well. So consider this piece of code that uses List objects, where List is a programming type defined by an off-the-shelf software component:

    procedure Reverse ( updates s: List )
    begin
        variable temp: Item
        if Length (s) > 0 then
            Remove (s, temp)
            Reverse (s)
            Insert (s, temp)
        end if
    end Reverse
Assuming you understand informally that the intended behavior of Reverse is to "reverse" a List object, how do you reason soundly about whether this body actually accomplishes that? You need to know exactly what a List object is, exactly what each of the operations Length, Remove, and Insert does, and exactly what Reverse is supposed to do. Mathematical modeling seems like an obvious approach. But is this answer really so obvious? To see how such a question might be answered in traditional documentation for clients of a "List" component, we examined several descriptions of off-the-shelf components involving "List" and "Insert". We found a wide range of explanations ranging from the content-free to the cryptic to the implementation-dependent to the nearly acceptable (i.e., the best we could find). Here, quoted directly but without attribution, are a few of the explanations we found for the behavior of an Insert operation for a List:

- a new item is inserted into a list
- postcondition: the list = the list + the item
- insert adds the item to the beginning of the pointer "pre" [accompanied by a figure showing a typical configuration of a linked list representation with a "pre" pointer, among others]
- put v at ith position... ensures insertion done: i_th(i) = v [accompanied by a separate definition of another programming operation called "i_th": item at ith position]

Evidently, if you want to know exactly what Insert does and does not do, you need to understand a specific linked list representation and the code for the body of the Insert operation. Then you need to apply the same sledgehammer to understand what Remove does. Finally, you can "manually execute" the Reverse code on multiple inputs, at which point you might make an educated guess about whether Reverse works as intended. Without an explicit mathematical model that abstractly specifies the state of a List object and the behavior of List operations, reasoning about List objects is reduced to speculation and guesswork. How should objects and their operations be explained, given that a basic objective of software engineering is to be able to reason about and understand the software? The next section illustrates an answer to this fundamental question using the List example. The issue at hand is one that you must address no matter which programming language or paradigm you use. But it is especially important for component-based software development, where source code for the components used often is not available to the client programmer.
5
Explaining the Values and Behavior of Lists

To arrive at an appropriate mathematical model that explains the behavior of a List object, we start by considering exactly what values (states) and state changes we are trying to model. Figure 1 shows a common singly linked list data structure consisting of a sequence of nodes chained together by next pointers. New nodes can be added to or removed from the sequence just after the node referenced by cur_pos. Other operations allow the sequence of data items to be traversed by following next pointers. Evidently, in Figure 1, a traversal has already visited the consecutive nodes containing items 3, 4, and 5, and has yet to visit the remaining nodes containing items 1 and 4. What is the essence of the information captured in this data structure, independent of its representation? We claim that it is simply the string2 of items already visited, namely <3, 4, 5>, and the string of items yet to be visited, namely <1, 4>. That is, you can view the value of a List object as an ordered pair of mathematical strings of items. As the Integer-as-boolean-vector example suggests, mathematical modeling does not by itself guarantee understandable specifications or ease of reasoning.
2 A string is technically simpler than a "sequence" because it is finite and does not explicitly involve the notion of a position. But thinking of a string as a sequence will not lead you astray.
Fig. 1. A typical singly linked list representation
Choosing a good mathematical model is a crucial but sometimes difficult task. For example, you might choose to think of the value of a List object as a single string of items (e.g., <3, 4, 5, 1, 4>) along with an integer current position (e.g., 3); as a function from integer positions to items, along with a current position; or even as a complex mathematical structure that captures the links and nodes of the above representation. Selection of a good mathematical model depends heavily on the operations to be specified, the choice of which should be guided by considerations of observability, controllability, and performance-influenced pragmatism [4, 16]. The pair-of-strings model suggested above leads (in our opinion) to the most understandable specification of the concept and makes it easy to reason about programs that use List objects, as we will see. Figure 2 shows the specification of a List component in a dialect of the RESOLVE language [13]. List_Template is a generic concept (specification template) which is parameterized by the programming type of items in the lists. As just stated, each List object is modeled by an ordered pair of mathematical strings of items. The operator "*" denotes string concatenation; "<x>" denotes the string consisting of a single item x; and "|s|" denotes the length of string s. Conceptualizing a List object as a pair of strings makes it easy to explain the behavior of operations that insert or remove from the "middle". A sample value of a List_Of_Integers object, for example, is the ordered pair (<3, 4, 5>, <1, 4>). Insertions and removals can be explained as taking place between the two strings, i.e., either at the right end of the left string or at the left end of the right string. The declaration of the programming type List introduces the mathematical model and says that a List object initially (i.e., upon declaration) is "empty": both its left and right strings are empty strings. Each operation is specified by a requires clause (precondition), which is an obligation for the caller; and an ensures clause (postcondition), which is a guarantee from a correct implementation. In the postcondition of Insert, for example, #s and #x denote the incoming values of s and x, respectively, and s and x denote the outgoing values. Insert has no precondition, and it ensures that the incoming value of x is concatenated onto the left end of the right string of the incoming value of s; the left string is not affected. Notice that the postcondition describes how the operation updates the value of s, but the return value of parameter x (which has the mode clears)
    concept List_Template (type Item)

        type List is modeled by (left: string of Item, right: string of Item)
            exemplar s
            initialization ensures |s.left| = 0 and |s.right| = 0

        operation Insert ( updates s: List, clears x: Item )
            ensures s.left = #s.left and s.right = <#x> * #s.right

        operation Remove ( updates s: List, replaces x: Item )
            requires |s.right| > 0
            ensures s.left = #s.left and #s.right = <x> * s.right

        operation Advance ( updates s: List )
            requires |s.right| > 0
            ensures s.left * s.right = #s.left * #s.right and |s.left| = |#s.left| + 1

        operation Reset ( updates s: List )
            ensures |s.left| = 0 and s.right = #s.left * #s.right

        operation Advance_To_End ( updates s: List )
            ensures |s.right| = 0 and s.left = #s.left * #s.right

        operation Left_Length ( restores s: List ) returns length: Integer
            ensures length = |s.left|

        operation Right_Length ( restores s: List ) returns length: Integer
            ensures length = |s.right|

    end List_Template

Fig. 2. RESOLVE specification of a List component
remains otherwise unspecified; clears means it gets an initial value for its type. For example, an Integer object has an initial value of 0. RESOLVE specifications use a combination of standard mathematical models such as integers, sets, functions, and relations, in addition to tuples and strings. The explicit introduction of mathematical models allows the use of standard notations associated with those models in explaining the operations. Our experience is that this notation, while precise and formal, is nonetheless fairly easy to learn, even for beginning computer science students. We leave to the reader the task of understanding the other List_Template operations. List_Template is just an example chosen to illustrate the features of explicit mathematical modeling as a specification approach. Other off-the-shelf RESOLVE components include general-purpose ones defining queues, stacks, bags, partial maps, sorting machines, solvers for graph optimization problems, etc.; and more complex domain-specific components.
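The pair-of-strings model is easy to execute directly. The following Python sketch is ours, not RESOLVE code from the paper; it models a List as a pair of Python lists and shows how Advance and Reset move items between the two strings, reproducing the situation pictured in Figure 1.

    # Pair-of-strings model of a List, with Advance and Reset (illustrative sketch).
    s = {"left": [], "right": [3, 4, 5, 1, 4]}

    def advance(s):
        # requires |s.right| > 0; moves one item from the right string to the left string,
        # so s.left * s.right is unchanged and |s.left| grows by 1.
        s["left"].append(s["right"].pop(0))

    def reset(s):
        # ensures |s.left| = 0 and s.right = #s.left * #s.right.
        s["right"] = s["left"] + s["right"]
        s["left"] = []

    advance(s); advance(s); advance(s)
    print(s)   # {'left': [3, 4, 5], 'right': [1, 4]}, the value shown in Figure 1
    reset(s)
    print(s)   # {'left': [], 'right': [3, 4, 5, 1, 4]}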
6
Reasoning About Reverse
Shown below is one possible formal specification of Reverse, i.e., this is what we intend to mean by "reversing" a List object:

    operation Reverse ( updates s: List )
        requires |s.left| = 0
        ensures s.left = reverse (#s.right) and |s.right| = 0
The only new notation here is reverse, a built-in mathematical function in the specification notation. Formally, its inductive definition is:

    reverse (empty_string) = empty_string
    reverse (a * <x>) = <x> * reverse (a)
Informally, its meaning is that, if s is a string (e.g., <1, 2, 3>), then reverse (s) is the string whose items are those in s but in the opposite order (e.g., <3, 2, 1>). Let's reconsider the reasoning question raised earlier (where Length has been replaced in the code with Right_Length to match exactly the component interface defined in Figure 2). Is the following implementation correct for the above specification of Reverse?

    procedure Reverse ( updates s: List )
        decreasing |s.right|
    begin
        variable temp: Item
        if Right_Length (s) > 0 then
            Remove (s, temp)
            Reverse (s)
            Insert (s, temp)
        end if
    end Reverse
You can reason about the correctness of this code with varying degrees of confidence through testing (computer execution on sample inputs), tracing (human execution on sample inputs), and/or formal symbolic reasoning (proof of correctness). But all of these must be based on mathematical modeling of Lists.
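One quick check, in the spirit of testing against the model, is to execute the body over the pair-of-strings model and compare the outcome with the ensures clause. The Python sketch below is ours, not RESOLVE code; note that direct execution differs from the trace in Table 1, which reasons about the recursive call through its specification, but both expose the same defect.

    # Execute the suspect Reverse body over the pair-of-strings model and check its ensures clause.
    def insert(s, x):
        s["right"].insert(0, x)    # ensures s.right = <#x> * #s.right (the clearing of x is not modeled)
    def remove(s):
        return s["right"].pop(0)   # requires |s.right| > 0; ensures #s.right = <x> * s.right

    def reverse_proc(s):           # the body under scrutiny
        if len(s["right"]) > 0:
            temp = remove(s)
            reverse_proc(s)
            insert(s, temp)

    s = {"left": [], "right": [3, 4, 6, 2]}
    incoming_right = list(s["right"])
    reverse_proc(s)
    meets_spec = (s["left"] == list(reversed(incoming_right)) and len(s["right"]) == 0)
    print(s, meets_spec)           # the ensures clause is not satisfied, so the body is incorrect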
Table 1. A tracing table for Reverse

State   Facts
0       s = (<>, <3, 4, 6, 2>) and temp = 0
        if Right_Length (s) > 0 then
1       s = (<>, <3, 4, 6, 2>) and temp = 0
        Remove (s, temp)
2       s = (<>, <4, 6, 2>) and temp = 3
        Reverse (s)
3       s = (<2, 6, 4>, <>) and temp = 3
        Insert (s, temp)
4       s = (<2, 6, 4>, <3>) and temp = 0
        end if
5       s = (<2, 6, 4>, <3>) and temp = 0
Although testing is clearly important, here we illustrate only the latter two approaches to show the power of mathematical modeling for human reasoning about program behavior.

Tracing. Tracing is sometimes part of code reviews, walkthroughs, and formal technical reviews [5]. It is helpful to use a conventional form when conducting a trace. Table 1 shows a tracing table for Reverse where the incoming value of the List_Of_Integers s is (<>, <3, 4, 6, 2>). The Facts column of this table records the values of objects in the corresponding state of the program listed in the State column. States are the "resting points" between statements at which values of objects might be observed. There are two states in Table 1 where the recording of facts calls for some explanation. The facts at state 2 are based on the postcondition of the Remove operation. However, you can assume the postcondition of Remove only if the precondition of Remove is satisfied before the call, i.e., in state 1. In this case, object values at state 1 can be seen by inspection to satisfy the precondition of Remove, so appealing to the postcondition of Remove to characterize state 2 represents valid reasoning. Also, the facts at state 3 use the postcondition of Reverse.
Assuming the postcondition of Reverse when tracing Reverse would represent circular, invalid reasoning without first verifying that the recursion is "making progress". In this case, progress is evident because the length of s.right at state 2 is less than the length of s.right at state 0. Again you can see this by inspection. The justification for appealing to the postcondition of Reverse in state 3 is, then, mathematical induction. (Note also that the precondition of Reverse holds at state 2.) Details of the remaining entries of the table are straightforward. Examination of the facts at state 5 reveals whether this implementation of Reverse is correct for the specific input value s = (<>, <3, 4, 6, 2>). You should be able to see from this trace and the specification that it is not correct.

Formal Symbolic Reasoning. This is a powerful generalization of tracing where the names of objects stand for arbitrary values of the mathematical models of their types, not for specific values. For example, instead of tracing Reverse using the specific values #s.left = <> and #s.right = <3, 4, 6, 2>, you simply let #s.left and #s.right denote some arbitrary incoming value of s. Our approach to symbolic reasoning is called natural reasoning, a verification technique proposed by Heym [7], who also proved conditions for its soundness and relative completeness. The general idea is called natural reasoning, like natural deduction in mathematics, because it is an operationally-based approach that is intuitively appealing to computer science students and experienced software engineers alike. It lets you formally represent the informal reasoning used by the author of the code, effectively encoding why he/she thinks the code "works". Heym compared his natural reasoning system with classical program verification techniques based on Hoare logic, and with two earlier proposals for similar reasoning methods (which lacked soundness proofs and, as it turned out, were actually unsound). We do not digress here to discuss this related theoretical work; see [7] for details. Some other features of natural reasoning are:
- Programs with loops are handled through the use of traditional loop invariants or loop specifications, which are not illustrated in the present example.
- The techniques used in our List Reverse example generalize to cover reasoning about the correctness of data representations (e.g., to decide whether a proposed List_Template implementation is correct), also not illustrated here.
- The soundness of natural reasoning depends on the absence of aliasing in the client code. In RESOLVE, we eliminate aliasing by using the swapping paradigm [6] as the basis for component design, implementation, and use. The consequences of this decision are illustrated in the detailed design of List_Template, most notably in the way Insert works (note in Figure 2 the parameter mode for x).

Natural reasoning about code correctness can be viewed as a two-step process:

1. Record local information about the code in a symbolic reasoning table, a generalization of a tracing table.
2. Establish the code's correctness by combining the recorded information into, and then proving, the code's verification conditions.
Step 1 is a symbol-processing activity no more complex than compiling. It can be done automatically. Consider an operation Foo that has two parameters and whose body consists of a sequence of statements (Figure 3). You first examine stmt-1 and record assertions that describe the relationship which results from it, involving the values of x and y in state 0 (call these x0 and y0) and in state 1 (x1 and y1). You similarly record the relationship which results from executing stmt-2, i.e., involving x1, y1, x2, and y2; and so on. You can do this for the statements in any order because these relationships are local, involving consecutive states of the program.
    operation Foo (x, y)
        requires pre [x, y]
        ensures post [#x, #y, x, y]

    procedure Foo (x, y)
    begin
        // state 0
        stmt-1
        // state 1
        stmt-2
        // state 2
        stmt-3
        // state 3
        stmt-4
        // state 4
    end Foo

Fig. 3. Relationships in symbolic reasoning
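Because the state numbering is purely mechanical, the skeleton of a symbolic reasoning table can be generated by a few lines of code. The sketch below is our own illustration, not part of the Software Composition Workbench; it only enumerates the states around straight-line statements and leaves the facts and obligations to be filled in.

    def reasoning_table_skeleton(statements):
        # One state before the first statement, one between each pair, one after the last.
        rows = []
        for i, stmt in enumerate(statements):
            rows.append({"state": i, "facts": [], "obligations": []})
            rows.append({"statement": stmt})
        rows.append({"state": len(statements), "facts": [], "obligations": []})
        return rows

    for row in reasoning_table_skeleton(["stmt-1", "stmt-2", "stmt-3", "stmt-4"]):
        print(row)   # states 0 through 4 interleaved with the four statements, as in Figure 3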
In addition to those arising from the procedure body statements, step 1 produces two special assertions. One is a fact (an assertion to be assumed in step 2 of natural reasoning): the precondition of Foo holds in state 0, i.e., pre[x0, y0]. Another is an obligation (an assertion to be proved in step 2): the postcondition of Foo holds in state 4 with respect to state 0, i.e., post[x0, y0, x4, y4]. Intuitively, this says that if you view the effect of the operation from the client program, as control appears to jump directly from state 0 to state 4, the net effect of the individual statements in the body is consistent with the specification. Step 2 of natural reasoning involves combining the assertions recorded in step 1 to show that all the obligations can be proved from the available facts. This task is generally an intellectually challenging activity in which computer-assisted theorem proving helps, but, given the current state-of-the-art, it is far from entirely automatic. The assertions recorded in step 1 arise from three questions about every state:

- Under what condition can the program get into this state?
- If the program gets into this state, what do we know about the values of the objects?
- If the program gets into this state, what must be true of the values of the objects in order that the program can successfully move to the next state?

Table 2 shows a completed symbolic reasoning table for Reverse. The columns to the right of the State column contain the answers to the above questions for a given state. Column Path Conditions records the condition under which execution reaches that state. Column Facts records assumptions (generally the postconditions of called operations) that can be made in that state. Column Obligations records assertions (generally the preconditions of called operations) that need to be true in that state in order for execution to proceed smoothly to the next state.
Table 2. A symbolic reasoning table for Reverse

State 0.   Path condition: true.   Facts: |s0.left| = 0 and is_initial(temp0).
           if Right_Length (s) > 0 then
State 1.   Path condition: |s0.right| > 0.   Facts: s1 = s0 and temp1 = temp0.   Obligation: |s1.right| > 0.
           Remove (s, temp)
State 2.   Path condition: |s0.right| > 0.   Facts: s2.left = s1.left and s1.right = <temp2> * s2.right.   Obligations: |s2.left| = 0 and |s2.right| < |s0.right|.
           Reverse (s)
State 3.   Path condition: |s0.right| > 0.   Facts: s3.left = reverse (s2.right) and |s3.right| = 0 and temp3 = temp2.
           Insert (s, temp)
State 4.   Path condition: |s0.right| > 0.   Facts: s4.left = s3.left and s4.right = <temp3> * s3.right and is_initial(temp4).
           end if
State 5a.  Path condition: |s0.right| = 0.   Facts: s5 = s0 and temp5 = temp0.
State 5b.  Path condition: |s0.right| > 0.   Facts: s5 = s4 and temp5 = temp4.
Obligation at state 5: s5.left = reverse (s0.right) and |s5.right| = 0.
In Table 2, si.left and si.right are the symbolic denotations of values for object s in state i; similarly for object temp. The facts at state 0 are obtained
by substituting the symbolic value of object s at state 0, namely s0, into the precondition of Reverse, and by recording initial values for all local objects. The obligation at state 5 is obtained by substituting the symbolic values of s at state 0 and at state 5 into the postcondition of Reverse. This is the goal obligation: when it is also proved, the correctness of Reverse is established. Notice how the path condition |s0.right| > 0 for states 1-4 records when these states are reached. Facts recorded for states 1-5 are based on the postconditions of operations and on the flow of control for an if statement. Obligations arise in state 1, because of the precondition of Remove, and in state 2, because of the precondition of Reverse and because Reverse is being called recursively. Natural reasoning includes a built-in induction argument here so recursion is nothing special, except that before a recursive call there is an obligation to show termination: the recursive operation's progress metric has decreased, in this case, |s2.right| < |s0.right|. A progress metric, like a loop invariant, is a claim that must be supplied by the programmer of a recursive body; hence, the decreasing clause in the body of Reverse. A proof obligation just before any recursive call is that the claim holds, i.e., that execution is making progress along this metric. Once all these assertions are recorded, you solve the reasoning problem by composing them appropriately to form the verification conditions and then showing that each of these conditions is satisfied. There is one verification condition for each obligation, of the form:

    assumptions implies obligation

The soundness of natural reasoning depends upon using only the following assumptions in the proof of the obligation for state k:

- (path condition for state i) implies (facts for state i), for every i satisfying 0 ≤ i ≤ k, and
- path condition for state k.
So, in order to discharge the proof of the obligation in state 1 of Table 2, i.e., |s1.right| > 0, you may assume:
    (true implies (|s0.left| = 0 and is_initial(temp0))) and
    (|s0.right| > 0 implies (s1 = s0 and temp1 = temp0)) and
    |s0.right| > 0
The first two conjuncts are the assumptions of the first form for states 0 and 1, respectively, and the third is the assumption of the second form for state 1. The proof of the obligation in state 1 is easy for humans who have had a bit of practice with such things. Assuming that |s0.right| > 0, you conclude from the second line that s1 = s0 and, therefore, s1.right = s0.right. Then since |s0.right| > 0 you conclude by substitution |s1.right| > 0, i.e., the assertion to be proved. In a similar manner, you can easily prove the obligation at state 2. Is Reverse correct? Table 1 shows a counterexample to a claim of correctness; indeed the obligation at state 5 cannot be proved from the allowable assumptions. If the code were correct, however, tracing could not show this whereas symbolic
reasoning could. Fixing the program is left as an exercise for the reader, as we would leave it for our students.
7
Experience
We routinely introduce mathematical modeling and the important role of specifications in reasoning, using the RESOLVE notation, in first-year CS course sequences at The Ohio State University (OSU) and West Virginia University (WVU). We have conducted formal attitudinal and content-based surveys as well as essay-style evaluations to assess the impact of teaching these principles. A detailed summary of the results to date is beyond the scope of this paper. But the evaluations with a sample size over 100 allow us to reach at least the following interim conclusions:
– Most students can learn to understand mathematical modeling as the basis for explanations of object behavior. This is illustrated by their ability to select reusable components and to act as clients of components, without any knowledge of those components' implementations. It is confirmed by their performance on exam questions asking them to write operation bodies and test plans, given only formal specifications (often involving quantifiers).
– After the course sequence, a statistically significant number of students have changed certain attitudes about programming. They tend to believe at the end of the sequence (but not before starting it) that natural language descriptions are inadequate descriptions of software components, and that it is possible to show that a software component works correctly without running it on a computer.

A prototype implementation of the tracing and natural reasoning systems described in this article is part of the Software Composition Workbench tool being developed by the Reusable Software Research Groups at WVU and OSU (for examples of symbolic reasoning tables generated by this system, see http://www.csee.wvu.edu/~resolve/scw). The tool generates symbolic reasoning tables automatically. It then uses the PVS theorem prover [11] to discharge the verification conditions. The prover typically requires human intervention and advice in this process.
8
Formalism: Necessity and Scalability
This section addresses two questions that we often get concerning the necessity for and scalability of formal mathematical modeling and formal natural reasoning. The first of these involves necessity: Given that non-trivial component-based software systems are routinely built and deployed without using mathematical models for describing component behavior, is such precision and care really
necessary? There is little doubt that component designers, implementers, and clients generally agree on certain unwritten conventions to go along with informal natural-language component documentation. The result is that usually components are used where they will not "break". In other words, most component-based software is written under wishful usage assumptions that more or less hold, but not always. An example of such a situation involves the inadvertent introduction of aliasing through repeated arguments, as noted in [3]: The most obvious forms of aliasing are similar to those found in any system involving references, pointers, or any other link-like construct. For example, in the matrix multiplication routine: op matMul(lft: Matrix, rgt: Matrix, result: Matrix);
It may be the case that two or more of lft, rgt, and result are actually connected to the same matrix object. If the usual matrix multiplication algorithm is applied, and result is bound to the same matrix as either of the sources, the procedure will produce incorrect results. The reason for using explicit mathematical modeling, and for adhering to a formalized reasoning method whose conditions for soundness have been established, is to limit or eliminate this dependence on wishful usage assumptions. Where the consequences of software failures are significant enough, the economics justify the added expense of more careful modeling and more careful reasoning. This brings us to the second question, regarding that expense and the resulting trade-off, which is often phrased in terms of scalability: Even granting that it is (sometimes) necessary, can formal mathematical modeling and formal natural reasoning scale up to handle practical, large, and complex components and systems? The first answer is that the techniques of using mathematical modeling for precise description of component behavior, and of using natural reasoning to predict the behavior of software built from such components, are independent of the complexity of the components and the size of the systems built from them. In fact, we (and many others) have developed formal specifications for quite a few components that are far more complex than Lists; e.g., see [1]. Such examples, however, clearly have required a serious amount of modeling and specification effort. Admittedly, the fact that such work appears in research papers (which is appropriate, given the current state of the art) does little to raise confidence that formal mathematical modeling and formal natural reasoning are practical approaches usable by "real programmers." A second answer can be obtained by revisiting the premise behind the first question: Somehow, the same people whose ability to reason formally about software system behavior is being questioned manage to build and deploy complex component-based software systems all the time. How do they do it? (A similar question can be asked about reverse engineering of large, complex software systems [15].) Somehow they must be reasoning, most of the time correctly, about the behavior of systems using some informal method based on some informal mental models of the components' behaviors and the behaviors of their
compositions. The Invariance Theorem [14] implies that every describable object (e.g., a particular model of component behavior, a particular argument about why a program works correctly, a method for reasoning about program behavior) has an intrinsic complexity that is independent of the means of description. So, whatever models and reasoning methods allow software engineers to develop large, complex systems now, if they can be described in the usual natural language of discourse of software professionals then they can be described in the formal language of mathematics without substantial impact on the complexity of description. Perhaps the notations involved in the formal models and reasoning method will be initially unfamiliar to those who have not yet been educated to understand them. But our experience with CS1/CS2 students suggests that this is not an inherent impediment to making the approach practical. Our claim for the scalability of formal mathematical modeling and formal natural reasoning, then, is that they are simply formalizations of the informal mental models and informal reasoning processes software professionals routinely use. If the resulting reasoning is unsound, then it just shouldn't be used! If it is sound, then in principle, encoding the same reasoning into formal mathematics need not add to the intrinsic complexity. Our specific contribution in this area is that added confidence comes from formalizing the natural reasoning method and then establishing conditions under which it is sound. Recall from Section 6 that two prior attempts to do this resulted in formal systems that were unsound. Ultimately, the practical importance of having a sound formal reasoning approach that mirrors how people think depends heavily on the ability to support it with machine-processable notation such as we have introduced, and on verification and/or proof-checking tools. The latter present a longer-term challenge.
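As an aside, returning to the matMul example quoted from [3] earlier in this section: the following small sketch, ours and in Python rather than in any notation from the paper, shows concretely how the usual algorithm goes wrong when result is bound to one of its sources.

```python
# Illustrative sketch (ours, not from [3] or RESOLVE): the usual matrix
# multiplication algorithm, written without regard for aliasing.
def mat_mul(lft, rgt, result):
    n = len(lft)
    for i in range(n):
        for j in range(n):
            total = 0
            for k in range(n):
                total += lft[i][k] * rgt[k][j]
            result[i][j] = total   # overwrites lft/rgt if result aliases them

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
c = [[0, 0], [0, 0]]

mat_mul(a, b, c)
print(c)            # [[19, 22], [43, 50]]: correct when result is distinct

mat_mul(a, b, a)    # result aliases lft: rows of a are clobbered mid-computation
print(a)            # [[19, 130], [43, 290]]: not the true product [[19, 22], [43, 50]]
```

Under a specification built on an explicit mathematical model of Matrix, such a repeated-argument call would have to be ruled out or reasoned about explicitly, rather than failing silently at run time.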
9
Conclusion
Mathematical modeling is essential for reasoning about component-based software. Without precise descriptions based on mathematical models, the benefits of component-based software development are unlikely to be fully realized because clients who use existing components will be unable to understand those components well enough to reason soundly about non-trivial programs that use them. Perhaps this situation is tolerable if software components are to be used only for prototyping and non-safety-critical applications. But for "industrial strength" software systems where there can be serious consequences to software failures, the ability to reason soundly about software behavior is undeniably critical. The implications of unsound reasoning for productivity and quality, the very attributes component-based software is supposed to improve, are ominous. Fortunately, introductory CS students can learn to read and use specifications based on mathematical modeling and can appreciate the significance of appropriate modeling in developing correct software. With open minds, a bit of continuing education, and tool support, software professionals also should be able to
understand and appreciate this important technique and know how to use it to reason about software system behavior.
10
Acknowledgment
We gratefully acknowledge financial support from our own institutions, from the National Science Foundation under grants DUE-9555062 and CDA-9634425, from the Fund for the Improvement of Post-Secondary Education under project number P116B60717, from the National Aeronautics and Space Administration under project NCC 2-979, from the Defense Advanced Research Projects Agency under project number DAAH04-96-1-0419 monitored by the U.S. Army Research Office, and from Microsoft Research. Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the National Science Foundation, the U.S. Department of Education, NASA, the U.S. Department of Defense, or Microsoft.
References
1. M. Aronszajn, M. Sitaraman, S. Atkinson, and G. Kulczyski, "A System for Predictable Component-Based Software Construction," in High Integrity Software, V. Winter and S. Bhattacharya, eds., Kluwer Academic Publishers, 2000.
2. G. Booch, Software Components With Ada, Benjamin/Cummings, Menlo Park, CA, 1987.
3. D. de Champeaux, D. Lea, and P. Faure, Object-Oriented System Development, Addison-Wesley, Reading, MA, 1993.
4. D. Fleming, Foundations of Object-Based Specification Design. Ph.D. diss., West Virginia University, Dept. Comp. Sci. and Elec. Eng., 1997.
5. D.P. Freedman and G.M. Weinberg, Handbook of Walkthroughs, Inspections, and Technical Reviews: Evaluating Programs, Projects, and Products, 3rd ed., Dorset House, New York, 1990.
6. D.E. Harms and B.W. Weide, "Copying and Swapping: Influences on the Design of Reusable Software Components," IEEE Trans. on Soft. Eng., Vol. 17, No. 5, May 1991, pp. 424-435.
7. W.D. Heym, Computer Program Verification: Improvements for Human Reasoning. Ph.D. diss., The Ohio State Univ., Dept. of Comp. and Inf. Sci., 1995.
8. D. Knuth, "Knuth Comments on Code: An Interview with D. Andrews," Byte, Sep. 1996, http://www.byte.com/art/9609/sec3/art19.htm (current Oct. 11, 1999).
9. G.T. Leavens and Y. Cheon, "Extending CORBA IDL to specify behavior with Larch," OOPSLA '93 Workshop Proc.: Specification of Behavioral Semantics in OO Info. Modeling, ACM, New York, 1993, pp. 77-80; also TR #93-20, Dept. of Comp. Sci., Iowa State Univ., 1993.
10. R. Orfali, D. Harkey, and J. Edwards, The Essential Distributed Objects Survival Guide, J. Wiley, New York, 1996.
11. S. Owre, J. Rushby, N. Shankar, and F. von Henke, "Formal Verification of Fault-Tolerant Architectures: Prolegomena to the Design of PVS," IEEE Trans. on Soft. Eng., Vol. 21, No. 2, Feb. 1995, pp. 107-125.
12. M. Sitaraman, An Introduction to Software Engineering Using Properly Conceptualized Objects, WVU Publications, Morgantown, WV, 1997.
13. M. Sitaraman and B.W. Weide, eds., "Component-Based Software Using RESOLVE," ACM Software Eng. Notes, Vol. 19, No. 4, 1994, pp. 21-67.
14. J. van Leeuwen, ed., Handbook of Theoretical Computer Science, Volume A: Algorithms and Complexity, Elsevier Science Publishers, Amsterdam, 1990.
15. B.W. Weide, J.E. Hollingsworth, and W.D. Heym, "Reverse Engineering of Legacy Code Exposed," Proc. 17th Intl. Conf. on Software Eng., ACM, Apr. 1995, pp. 327-331.
16. B.W. Weide, S.H. Edwards, W.D. Heym, T.J. Long, and W.F. Ogden, "Characterizing Observability and Controllability of Software Components," Proc. 4th Intl. Conf. on Software Reuse, IEEE CS Press, Los Alamitos, CA, 1996, pp. 62-71.
17. B.W. Weide, Software Component Engineering, OSU Reprographics, Columbus, OH, 1999.
Use and Identification of Components in Component-Based Software Development Methods Marko Forsell, Veikko Halttunen, and Jarmo Ahonen Information Technology Research Institute, University of Jyväskylä, P.O. Box 35, FIN-40351 Jyväskylä, Finland.
Abstract. New software systems are needed ever more, but to keep up with this demand software developers must learn to create quality software more efficiently. One approach is to (re-)use components as building blocks of new software. Recently there has been growing interest in component-based software development methods to support this. In this article we first set out requirements for reuse-based software development and then evaluate three component-based methods, namely Catalysis, OMT++, and Unified Process. We conclude that the evaluated methods produce prefabricated components and that, in these methods, "component-based" primarily means that software developers can replace components of existing systems with better ones. Reuse of components to create new software is neglected in these methods.
1
Introduction
While the users' requirements and needs for software products are rapidly increasing, the number of software professionals is not keeping pace with this trend. Every time a new software system is created, at least half of the developers are needed to keep it running; this is due to maintenance needs. One solution for producing quality software systems more efficiently is to use components as building blocks of the new software. This means that software components need to be more reusable and more widely reused. The very basic idea of reuse is: use a thing more than once [3,32,20]. A component is the common term for a reusable piece of software. Depending on the level of abstraction and the ways of selection, specialization to the specific situation, and integration into the whole, reuse technologies can be categorized as follows [20]:
1. high-level languages
2. design and code scavenging
3. source code components
4. software schemas
5. application generators
6. very high-level languages
7. transformational systems
8. software architectures

A more coarse division distinguishes two kinds of reuse technologies: composition and generation. In composition technologies the software is composed of its parts (components), whereas in generation technologies the software is generated from higher-level descriptions or specifications. Successful areas of software reuse, such as UNIX subroutines, mathematical or programming-language-specific program libraries, database management systems, or tools for graphical user interfaces, are well known and carefully explored, and these areas have well-established patterns [5]. A software developer can easily learn to use them through education and working experience. Reuse does not just happen; it requires careful planning and coordination [23,17,13]. However, research on software reuse usually concentrates on the properties of software components or their interfaces without a proper understanding of the whole process of software reuse. Dusink and van Katwijk [12] note that there is no integrated model available for reuse support in software development. Reuse-based software development requires specific methods. However, good "reuse methods" are not available. As Basili [2] puts it, current software development methods even hinder reuse in system development. Furthermore, Bailey and Basili [1] argue that reuse models which build upon expertise in the application domain, without clear instructions on how to do things, offer insufficient guidelines for the reuse process. In brief, such models burden the experts too much and thus make the process vulnerable. The objective of this paper is to assess how current methods support reuse in software development and, through the evaluation, to expose how methods aiming at software reuse could be improved. For the candidate methods we set the following two criteria:
1. the component usage viewpoint had to be explicitly taken into account, and
2. the method should have been published in sufficient breadth (i.e., as a book).
By these criteria we selected the following methods for deeper consideration: Catalysis [11], OMT++ [16], and Unified Process [18]. Each of these methods uses components in its process and is described in a recently published book (in 1999). In this paper we approach software reuse from the above-mentioned composition aspect. We consider 'a component' in a wider sense than traditionally. Thus, a component is not restricted to being a code component; it can also be some other artifact such as a software schema or a software architecture (see [20]). We also see many other reusable elements in software development, for example domain models, analysis and design documents, etc. When analyzing the support of a method for the reuse process, it is necessary to define the process model through which the analysis is done. For finding such a model we first scan for the essential features of a reuse process and then — according to these features — select a process model that can be considered "ideal".
2
Crucial Features of a Reuse Process
Traditional software development processes require substantial consideration when component-based reuse is adopted. In this part of the paper we review the literature to find the most crucial features of a reuse process that have gained the attention of contributors to the area. We have listed the features that have been noted by several contributors. These are:
1. Domain modeling [9,26,24,6,10,15,8,4]
2. Software architecture [17,25,8,20]
3. "More than code" [15,7,21,22,19,13]
4. Separate processes for component (re)use and creation [7,30,19,32,23,5,17]
5. (Component) repository [20,28,27,14,26,10]
There is a wide consensus that domain modeling and software architecture are prerequisites for the successful use of software components. It is absolutely necessary to know the context where the components are (re-)used, because it is not self-evident how well the components suit different contexts (see [7]). The identification of components starts with domain analysis, which defines the reusable components at the conceptual level. Therefore, domain analysis is an important phase in finding out what components are needed and how they interrelate. Software architecture, which defines how the software is composed of its subsystems, helps manage the software as a whole and makes the selection of suitable components easier. The domain analysis and the software architecture together form a sound basis for the use of components in a particular application domain (see [10] and [26]). Despite the fact that reuse has traditionally meant reuse of code, it is necessary to realize that reuse can — and should — be extended to many other artifacts, too. As Horowitz and Munson [15] state, reuse can happen at several levels, and the best benefits are achieved when, in addition to code, requirement analyses, designs, testing plans, etc. are reused. Caldiera and Basili [7] argue that experience is also an important reusable resource; reuse of experience enables other types of reuse. Lanergan and Grasso [21] report highly successful reuse of software logic structures. It is the size and the abstraction level of the components that decide the phase of the development process from which the components can be used [22]. When talking about reuse of components, two different processes can be separated: the production of components and their use [32,30,5]. In addition, some researchers see the maintenance of the component repository as a separate task [23,27,10,7,26]. Some emphasize the role of managing the reuse infrastructure (e.g., [17] and [23]). Moreover, some researchers have considered abstraction as a means to find a component (see [26] and [19]). The most extensive models for reuse include STARS [31] and the models by Lim [23] and Karlsson [19]. Karlsson's model considers reuse purely from the object-oriented point of view. The coverage of the STARS model and Lim's model is very similar. However, it seems that the STARS model is a little
more directed towards code components. Lim's model is not dependent on any particular software development paradigm. It covers the essential features of a reuse process and thus sees components as "more than code". We selected Lim's model for our evaluation.
3
Lim's Model of the Reuse Process
Lim [23] talks about 'assets' instead of 'components'. This shifts the emphasis from pure code orientation to conceiving 'a component' as a wide concept that covers designs and other things besides computer programs. We believe that this perspective is absolutely necessary when aiming at successful and widespread reuse of components in software development. Lim divides the reuse process into the following four major activities:
1. managing the reuse infrastructure
2. producing reusable assets
3. brokering reusable assets
4. consuming reusable assets
The first activity, managing the reuse infrastructure, plans and drives the other three activities; it manages the whole reuse process. Lim's [23] subtasks of the three latter activities are described more closely in Table 1. Lim names distinct roles for the developers in the three latter activities: producers create assets for reuse, brokers provide the repository and support for reusable assets, and consumers produce new software with reusable assets.

Table 1. The reuse process, its phases and explanations

Activity / Tasks: Explanation

Producing Reusable Assets (PRA): Reusable assets can be created either by prefabrication or retrofitting. In either case, the production of the reusable asset is preceded by domain analysis (DA). Two major elements of PRA are domain analysis and domain engineering.
Analyzing Domain: Tasks:
– DA attempts to create a domain model which generalizes all systems within a domain.
– DA is at a higher level of abstraction than systems analysis.
– The resulting assets from a DA possess the functionality necessary for applications developed in that domain.
Producing Assets: Tasks:
– Produced assets are those identified by the DA and include both components and architectures. The architecture is the structure and relationship among the constituent components.
– Having identified the set of assets that have a high number of future reuse instances, two approaches are available for producing these assets: prefabrication and retrofitting. Prefabricating is an approach that "builds" reusability into assets when they are created; this approach has been variously called "design for reuse" and "a posteriori reuse". Retrofitting is a second approach: examine existing assets, evaluate the feasibility of reengineering them for reuse, and, if viable, do so. This set of activities has been called salvaging, scavenging, mining, leveraging, or a priori reuse.

Maintaining & Enhancing Assets: Tasks:
– Maintenance involves changing the software system/product after it has been delivered. Maintenance can be perfective maintenance (enhancing the performance or other attributes), corrective maintenance (fixing defects), or adaptive maintenance (accommodating a changed environment).
– When assets are enhanced, fixed, or replaced, the consumers are notified of the changes and, in many cases for active projects, will require integration of the newer versions of the assets.
– Verification and validation (V&V) are performed throughout the life cycle. In software reuse, V&V are intended to demonstrate that the reusable asset will perform without fault under its intended conditions.

Brokering Reusable Assets (BRA): Aids the reuse effort by qualifying or certifying, configuring, maintaining, and promoting reusable assets. It also involves classifying and retrieving assets in the reuse library.
Assessing Assets: Tasks:
– Potential assets from both external and internal sources should be assessed before ordering.
– Potential assets are identified and brokers examine them, reviewing several factors.
Procuring Assets: The broker determines whether to purchase or license the asset, purchase and reengineer the asset to match the consumers' needs, produce the asset in-house, or reengineer an existing in-house non-reusable asset to meet consumer needs.

Certifying Assets: Tasks:
– Reusable assets should be certified before they are accepted into the repository.
– Certification involves examining an asset to ensure that it fulfills requirements, meets quality levels, and is accompanied by the necessary information. Once the asset is certified, the next step in the process is to accept the asset.

Adding Assets: Adding an asset involves formally cataloging, classifying, and describing it, and finally entering it into the list of reusable assets.

Deleting Assets: The broker should examine the inventory of assets and delete those which are not worth continuing to carry or which have been superseded by other reusable assets.
Consuming Reusable Assets (CRA): CRA involves using these assets to create systems and products, or to modify existing systems and products. CRA is also known as application engineering.

Identifying System & Assets: Tasks:
– End-users' needs are translated into system requirements.
– Requirements for assets are also determined as part of this analysis.
– In reuse-enabled businesses, system requirements are determined in part by the availability of reusable assets.
– In strategy-driven reuse, a deliberate decision is made to enter certain markets or product lines in order to economically and strategically optimize the creation and use of reusable assets which fulfill multiple system requirements.

Locating Assets: Consumers locate assets which meet or closely meet their requirements, using the reuse library, directory, or other means.
Assessing Assets for Consumption: Tasks:
– Consumers evaluate the assets.
– If a suitable asset cannot be found externally, the consumer must determine whether a reusable version should be requested from the producer group. In some circumstances, it may be more viable to create a non-reusable version for the project at hand.
– A modified asset may be valuable to other projects as well. Consequently, the consumer should consider submitting a request for a modification of the reusable asset, which would be supported by the broker group.

Adapting / Modifying Assets: The asset is adapted to the particular development environment. If modification is necessary, the consumer should carefully document the changes. Possible reuse strategies are black-box reuse and white-box reuse.

Integrating / Incorporating Assets: Reusable assets are incorporated with new assets created for the application.
Domain analysis and software architecture are considered in the phases of analyzing the domain and producing assets, respectively. In Lim's model it is very obvious that the target of reuse is much more than merely code, and that producing and using components are two separate processes. The repository and its use occupy a very central position in Lim's reuse model. To be successful, reuse has to be planned carefully. To cope with such a complicated process it is necessary to distinguish between the above-mentioned areas of the process. It is also important to know thoroughly the tasks of each phase.
4
Evaluation of Reuse Processes in Three Known Methods
The evaluation of the three methods, Catalysis, OMT++, and Unified Process, was started by carefully reading the following books: D'Souza and Wills [11], Jacobson et al. [18], and Jaaksi et al. [16]. Next, each method was evaluated in terms of its support for the activities and tasks of Lim's model. In this section we summarize the main findings of our analysis. We start with a brief description of each of the methods, after which we analyze their general features using Lim's model. 4.1
Catalysis
Catalysis is based on three modeling concepts: type, collaboration, and refinement. Furthermore, it uses frameworks to describe recurring patterns.
A collaboration defines a set of actions between a group of objects. A type defines the external behavior of an object; a precise description of that external behavior is given in the type model. Types serve as the basic means to identify and document components. A refinement describes how an abstract model maps to more concrete ones. Frameworks are used to describe recurring patterns in specifications, models, and designs. [11] Catalysis distinguishes between three levels of abstraction: the domain/business level, the component level, and the internal design level. At the domain level one identifies the problem, defines the domain terminology, and tries to understand the business processes. At the component level one specifies the system's boundary and distributes responsibilities among the identified/defined components. At the internal design level one implements the specifications of the system, defines the internal architecture, internal components, and collaborations, and designs the insides of the system and/or component. [11] Finally, Catalysis is founded on three principles: abstraction, precision, and pluggable parts. In Catalysis, abstraction means that one should focus on essential aspects one at a time while leaving others for later consideration. Precision means that one should be able to find inconsistencies between specifications as early as possible and trace requirements through specifications or models; it also allows support tools to be used at a semantic level. Pluggable parts enable one to use results from the development work in subsequent projects. [11] Catalysis does not prescribe any specific process for producing software, but it gives a number of process patterns. By combining the process patterns one can create a suitable process for the current development needs. 4.2
OMT++
OMT++ uses OMT [29] as the backbone of the approach. The notations and naming conventions of OMT are used as is in OMT++; even the method's name relies heavily on OMT. OMT++ consists of four phases, namely object-oriented analysis, object-oriented design, object-oriented programming, and testing. These phases are separate, and they can be arranged either in a waterfall or an iterative manner. In each of the phases there are activities that aim to model either the static or the functional properties of the system. Although the use cases are seen as a key to the successful development of software, for architectural and practical reasons the key abstractions are service blocks and components. A service block is a grouping of closely related components that provide a consistent set of reusable software assets to designers using the service block. A component is a configuration of files implementing a basic architectural building block, such as an executable program or a link library. 4.3
Unified Process
Rational Unified Process (RUP) is use-case driven, architecture-centric, iterative, and incremental. To serve its users the software system must correspond to user
needs. RUP uses use cases to capture the functional requirements which satisfy user needs. Additionally, use cases drive the development process. Based on the identified use cases, the developers of the software system create a series of design and implementation models that realize the use cases. Thus, use-case driven means that the development process follows a flow — it proceeds through a series of workflows that derive from the use cases. By architecture-centric, RUP means that the system's architecture is used as a primary artifact for conceptualizing, constructing, managing, and evolving the system under development. Full-scale construction of the software system is not started before the architecture designers can be sure that the developed architecture can carry the system through the software's lifecycle (i.e., maintenance and further development). The iterative and incremental process in RUP means that the software system is developed in many iterations and through small increments. Each iteration deals with a group of use cases that together extend the usability of the product, thus producing an increment to the whole software system. Key abstractions are service packages, service subsystems, and components. A service package provides a set of services to its customers. Service packages and use cases are orthogonal concepts, meaning that one use case is usually constructed with many service packages and one service package can be employed in several different use-case realizations. In RUP, service packages are primary candidates for reuse, both within a system and across related systems. A service subsystem is based on a service package, and there is usually a one-to-one mapping between them. Usually service subsystems provide their services in terms of interfaces. Often a service subsystem leads to a binary or executable component in the implementation. A component is the physical packaging of model elements, such as design classes in the design model. Stereotypes of components are, for example, executables, files, and libraries. 4.4
Evaluation of Methods
In the evaluation of the methods we analyzed the features of the methods against Lim's model of a reuse process. The phases of "the ideal model" were depicted earlier in Table 1. We created a scale for estimating the support of the methods for each phase of the reuse process (Table 2). We remind the reader that our evaluation is not based on experience of using the methods but only on the book reviews we have carried out. The results of our analysis are summarized in Table 3. A more detailed analysis can be found in Appendix 1. As Table 3 shows, the three evaluated methods place their emphasis on producing components. In OMT++ and Unified Process this means the production of code components, whereas in Catalysis other types of components are also considered. The use of components has gained some attention, which, however, largely focuses on the identification of the system. The coverage of the methods is quite similar: they include domain modeling, production of components, and identification of a system. In practice, these elements seem to be intertwined. So, it is
Table 2. The scale for evaluating the features of the methods

Symbol  Corresponding
-       Not mentioned at all or mentioned incidentally
+       Briefly considered
++      Distinctly considered
+++     Thoroughly considered
Table 3. Summary of the results Process / Phase Producing Reusable Assets (PRA) Analyzing domain Producing Assets Prefabricating Retrofitting Asset Maintenance Asset Enhancement Brokering Reusable Assets (BRA) Assessing Assets Procuring Assets Certifying Assets Adding Assets Deleting Assets Consuming Reusable Assets (CRA) Identifying System Identifying Assets Locating Assets Assessing Assets for Consumption Adapting / Modifying Assets Integrating / Incorporating Assets
Catalysis OMT++ Unified Process ++
++
+++
+++ + -
+++ +
+++ +
+ -
-
-
+++ + + + -
+++ + + -
+++ + + -
noteworthy that the methods actually integrate the production of components into their use. According to the analyzed methods, software production progresses as follows:
1. analyze the domain
2. identify the functionality of the system
3. define the software architecture
4. construct the software using components.
The methods would be more usable if the production of components were distinctly separated from the use of components. Although components should be produced keeping the reuse aspect in mind, it is necessary to realize that the
mentioned two tasks face different problems and different solutions and need to be managed as separate processes. For example, finding and adapting a component might get too little attention when these tasks are not seen as important processes of component reuse. This is because the person responsible for producing a component naturally sees it from the perspective of how it is to be implemented, whereas the user of the component sees the "service" it provides. What, then, is the reason for integrating the two tasks? Possible answers include:
– It is difficult to get acceptance for a method that suggests big changes to the prevailing practices.
– The developers of methods do not yet have a deep enough understanding of the information that is necessary when storing and retrieving components.
– If the producer and the user of a component are the same person, it may be difficult and even unnecessary to separate the two processes.
– Some methods use components purely for managing complexity, perhaps ignoring the reuse perspective.
The strength of the evaluated methods is their thoroughness in domain modeling and in describing the software architecture. These two tasks, which (1) bind components to the context and (2) define the connections between components, are a crucial part of successful component-based software development. So it seems that these areas are well covered by the current methods. It appears that the telecommunication backgrounds of OMT++ and Unified Process have affected the development of these methods: since telecommunication applications tend to be very complicated, the methods used to build such applications must help split the application into manageable parts. The software architecture plays a crucial role when maintenance of the application means replacing a component with a new one, or adding a new component to the old system. We have already noted that the term 'component' has a wider meaning in Catalysis than in OMT++ or Unified Process. There is also another remarkable point where Catalysis differs from the other two methods: the way of building a component. Whereas OMT++ and Unified Process talk about service blocks or service packages, respectively, Catalysis derives components from the functions of the system. There is a big difference here, because these two approaches should be seen as orthogonal: one service package can be utilized by several functions, which implies that a function normally consists of several service packages or vice versa.
5
Discussion and Further Research
In the current methods the tasks of producing and using components are intertwined. This makes the methods complicated and decreases their usability. For
example, storing and searching for a component can receive too little attention when production of components dominates the software process. A concrete result of this problem is that little or no information on the components is saved to help their further use. It is obvious that in many cases software people do not even know what information is necessary to help find a suitable component. Are the contemporary component-based methods, then, really component-based? In our opinion, the answer is 'Yes' and 'No'. They are component-based in the sense that they aim at well-structured architectures, where the purpose of each element can be distinctly defined and where an element can easily be replaced by another element. They are component-based also in that they aim at well-defined interfaces. However, they have several weak areas that need to be improved if we wish to fully benefit from the use of components. First, the methods should put much more emphasis on the sub-processes of component use, which include identifying and searching for a component. The methods should support viewing the use of components from a multi-purpose perspective. This means that the methods should not only help divide an application into its pieces but also help find more generic features to be implemented in the components. Furthermore, the methods should include tools to collect relevant information on the components to be saved into a repository that can be used effectively when searching for a suitable component. The current methods have little or no support for using such a repository. Apparently Basili's [2] statement about the hindrance of current methods is still true. It seems that current practices in component-based software development still rely on software experts' personal knowledge. Although this point of view is understandable, it contradicts the idea of using generic elements in software production. A real component orientation should, therefore, aim at practices and models that decrease the irreplaceability of individual knowledge. This can be reached by standardization and other agreements, but also by supporting the entire software process with information stored in repositories. Because standardization is usually a stony path, the importance of using repositories cannot be overestimated. To direct further research concerning component-based methods, we provide the themes that we see as important:
– We should explore how to document components at different levels so that people who are not experts in the domain could use them.
– We should explore what kind of repository would be most valuable in supporting the reuse process.
– We should explore how the different reuse-oriented activities (e.g., managing the reuse infrastructure, producing, brokering, and consuming reusable assets) can be adapted to a software development process.
These three themes are the most important ones seen in the light of our research. Because we based our analysis on a known model of a software process (Lim's model), some tasks did not receive much attention although we see them
as important parts of the software process. This concerns especially the testing and adapting of components. The current methods offer no component-specific means to test a software product or a piece of it. This issue deserves more effort from researchers.

Acknowledgements. This research was supported by Tekes (the National Technology Agency, Finland) and by the companies participating in the PISKO project. We would also like to thank the four anonymous reviewers for their useful comments.
References 1. Bailey, J., Basili, V.: The software-cycle model for re-engineering and reuse. Proceedings of the conference on Ada: today’s accomplishments; tomorrow’s expectations, ACM (1991), 267-281. 285 2. Basili, V.: Facts and myths affecting software reuse. Proceedings of the 16th International Conference on Software Engineering (1994), 269. 285, 295 3. Basili, V., Caldiera G., Cantone G.: A reference architecture for the component factory. ACM Transactions on Software Engineering and Methodology, Vol. 1, No. 1, January (1992), 53-80. 284 4. Batory, D., O’Malley, S.: The Design and Implementation of hierarchical software systems with reusable components. ACM Transactions on Software Engineering and Methodology, Vol. 1, No. 4, October (1992), 355-398. 286 5. Biggerstaff, T., Richter, C.: Reusability framework, assessment, and directions. IEEE Software, March (1987), 41-49. 285, 286 6. Burton, B., Aragon, R., Bailey, S., Koehler, K., Mayes, L.: The reusable software library. IEEE Software, July (1987), 25-33. 286 7. Caldiera, G., Basili, V.: Identifying and qualifying reusable software components. IEEE Computer, February (1991), 61-70. 286 8. Capretz, L.: A CASE of reusability. Journal of Object-Oriented Programming, June (1998), 32-37. 286 9. Davis, J., Morgan, T.: Object-oriented development at Brooklyn Union Gas. IEEE Software, January (1993), 67-74. 286 10. Devanbu, P., Brachman, R., Selfridge, P., Ballard, B.: LaSSIE: A knowledge -based software information system. Communications of ACM, Vol. 34, No. 5, May (1991), 33-49. 286 11. D’Souza, D., Wills, A.: Objects, Components, and Frameworks with UML: The Catalysis Approach. Addison-Wesley (1999). 285, 290, 291 12. Dusink, L, van Katwijk, J.: Reuse dimensions. Software Engineering Notes, August (1995), Proceedings of the Symposium on Software Reusability, Seattle, Washington, April 28-30 (1995), 137-149. 285 13. Fisher, G.: Cognitive view of reuse and redesign. IEEE Software, July (1987), 6172. 285, 286 14. Henninger, S.: An evolutionary approach to constructing effective software reuse repositories. ACM Transactions on Software Engineering and Methodology, Vol. 6, No. 2, April (1997), 111-140. 286 1
15. Horowitz, E., Munson, J.: An expansive view of reusable software. In Biggerstaff, T., Perlis, A. (eds.): Software Reusability, Volume I: Concepts and Models. ACM Press (1989), 19-41. 286 16. Jaaksi, A., Aalto, J-M., Aalto, A., V¨ att¨ o, K.: Tried & True Object Development: Industry-Proven Approaches with UML. Cambridge University Press (1999). 285, 290 17. Jacobson, I., Griss, M., Jonsson, P.: Software Reuse: Architecture, Process and Organization for Business Success. ACM Press (1997). 285, 286 18. Jacobson, I., Booch, G., Rumbaugh, J.: The Unified Software Development Process. Addison-Wesley (1999). 285, 290 19. Karlson, E.: Software Reuse: A Holistic Approach. John Wiley & Sons Ltd., (1995). 286 20. Krueger, C.: Software reuse. ACM Computing Surveys, Vol. 24, No. 2, June (1992), 131-183. 284, 285, 286 21. Lanergan, R., Grasso, C.: Software engineering with reusable designs and code. In Biggerstaff, T., Perlis, A. (eds.): Software Reusability, Volume II: Applications and Experience. ACM Press (1989), 187-195. 286 22. Lenz, M., Schmid H., Wolf, P.: Software reuse through building blocks. IEEE Software, July (1987) 35-42. 286 23. Lim, W.: Managing Software Reuse. Prentice Hall PTR (1998). 285, 286, 287 24. Neighbors, J.: Draco: A method for engineering reusable software systems. In Biggerstaff, T., Perlis, A. (eds.): Software Reusability, Volume I: Concepts and Models. ACM Press (1989), 295-319. 286 25. Nierstrasz, O., Meijler, D.: Research directions in software composition. ACM Computing Surveys, Vol. 27, No. 2, June (1995), 262-264. 286 26. Ostertag, E., Prieto-Daz, R., Braun C.: Computing similarity in a reuse library system: An AI-based approach. ACM Transactions on Software Engineering and Methodology, Vol. 1, No. 3, July (1992), 205-228. 286 27. Prieto-Daz, R.: Implementing faceted classification for software reuse. Communications of the ACM, Vol. 34, No. 5, May (1991), 88-97. 286 28. Prieto-Daz, R., Freeman, P.: Classifying software for reusability. IEEE Software, January (1987), 6-16. 286 29. Rumbaugh, J., Blaha, M., Premerlani, W., Eddy, F., Lorensen, W.: ObjectOriented Modeling and Design. Prentice-Hall, Inc., (1991). 291 30. Sommerville, I.: Software Engineering Fourth Edition. Addison-Wesley, (1992). 286 31. STARS Conceptual Framework for Reuse Processes (CFPR) Volume I: Definition, Version 3.0. STARS-VC-A018/001/00, October 25 (1993). Available at http://direct.asset.com/wsrd/product.asp?pf id=ASSET%5FA%5F495. 286 32. Taivalsaari, A.: A Critical View of Inheritance and Reusability in Object-oriented Programming. Ph.D. Dissertation, Jyv¨ askyl¨ a Studies in Computer Science, Economics and Statistics, No. 23, University of Jyv¨ askyl¨ a (1993). 284, 286
Appendix 1. The More Detailed Analysis of Catalysis, OMT++ and Unified Process. Table 4. More detailed results of the analysis of Catalysis Activity / Tasks
Catalysis
Producing Reusable Assets (PRA) Analyzing domain Domain analysis is performed by creating business models. Business models include use cases, joint actions and type models. In this phase vocabulary of the domain is created with the aid of the user. Producing Assets Components are mainly created from the domain model and are designed for reuse. There is briefly mentioned reengineering ex– Prefabricating isting systems. If third party components or legacy systems are – Retrofitting used, one should create type models of them. Brief description of component development. Description of components is done by type models which are created during business modeling. Basic course of action is as follows: 1. Describe every component from the one users point of view. 2. Combine view points. 3. Design largest components and according to the use cases distribute them to functional blocks. 4. Define interfaces and extract business logic from user interface. Create layered architecture for the software. 5. Extract middleware from business components. 6. Create draft design model where one class presents one type. 7. Distribute responsibilities and collaboration among objects so that you can create flexible design. 8. Define components connections, and attributes and document them. Asset
-
– Maintenance – Enhancement Brokering Reusable Assets (BRA) Assessing Assets Procuring Assets Certifying Assets Adding Assets Component management should be created through reuse group. Resources to build and maintenance repository should be given. Deleting Assets Consuming Reusable Assets (CRA) Continues on next page
Component-Based Software Development Methods
299
Continued from previous page (Catalysis) Activity / Tasks
Explanation
Identifying
Software system is identified through use cases (joint actions). First one should define technical architecture which includes infrastructure components and their relationships with physical and logical architectures.
– System – Assets
Locating Assets Assessing Assets First architecture is implemented which guides experienced defor Consumption veloper to find components. Adapting / Modi- Components interfaces can be altered or interfaces can be added, fying Assets but the component should be left unchanged. Integrating / In- corporating Assets
Table 5. More detailed results of the analysis of OMT++ Activity / Tasks
OMT++
Producing Reusable Assets (PRA) Analyzing domain You model domain by class diagram and that gives you and your customer common vocabulary. Use cases and analysis phase’s class diagram are also used to gather understanding of the context. Producing Assets Components which are created through OMT++ are prefabricated components. No discussion of retrofitting is done. Com– Prefabricating ponents are identified during architectural design which is done – Retrofitting via OMT++ own 3+1 views. Components which result from this process are code components. Component identification process is basically as follows: 1. Create software architecture according to OMT++’s 3+1 views to the software architecture. As an result you get the following layers to the software system: System Products, Application products / Platforms; Applications / Service Blocks; Components; Classes 2. Implement group of classes as components. Component is ’size of the human being’ (500 to 15 000 lines of code) and is a code component. Asset
Component maintenance is when you create better (in size, functionality, quality etc.) component for the new software system – Maintenance and replace old component with it. OMT++ sees this as normal – Enhancement development work.
Brokering Reusable Assets (BRA) Assessing Assets Procuring Assets Certifying Assets Continues on next page
300
Marko Forsell et al. Continued from previous page (OMT++)
Activity / Tasks
Explanation
Adding Assets Deleting Assets
-
Consuming Reusable Assets (CRA) Identifying Software system is identified through use cases and you create the software architecture according to the use cases. Reusable – System components are discovered from the old system by experienced – Assets developers. Locating Assets
Experienced developer locates reusable components from existing software system. He/she can use architecture descriptions to identify reusable components. Assessing Assets for Consumption Adapting / Modi- Guided by the software architecture (implicitly). fying Assets Integrating / In- corporating Assets
Table 6. More detailed results of the analysis of Unified Process Activity / Tasks
Unified Process
Producing Reusable Assets (PRA) Analyzing domain RUP advises to do domain analysis as a project of its own. DA can be done via business modeling or domain modeling (which is subset of business modeling). Producing Assets RUP concentrates to prefabricate components and typical components are binary or executable files, files, libraries, table, or – Prefabricating document. Application-specific and application-general layers are – Retrofitting separated from middleware and system-software layers at the analysis phase. In analysis main focus is describing application layers and in design the focus is describing more middleware and system-software. Basic course of action to create component: 1. Create use cases. 2. Do analysis phase’s class diagram and classify analysis classes to service packages. 3. Create service subsystems according to service packages. 4. Implement service subsystems. There should be straight mapping between service packages, service subsystems and implemented components. Components are identified and created by “reuse-enabled” developers. Continues on next page
Component-Based Software Development Methods
301
Continued from previous page (Unified Process) Activity / Tasks
Explanation
Asset
-
– Maintenance – Enhancement Brokering Reusable Assets (BRA) Assessing Assets Procuring Assets hline Certifying Assets Adding Assets Deleting Assets Consuming Reusable Assets (CRA) Identifying Software system is identified through business/domain models and use cases. Mock up user interface can be used to gather more – System specific information about requirements. Existing systems can be – Assets used as a basis for identifying assets. Locating Assets
Components are located through existing systems or by “reuseenabled” developers. Possible components can be found from third party, through corporate standards for using and creating components (i.e. frameworks or design patterns), or by using “team memory”. Assessing Assets for Consumption Adapting / Modi- Guided by software architecture (implicitly). fying Assets Integrating / In- corporating Assets
Promoting Reuse with Active Reuse Repository Systems Yunwen Ye1,2 and Gerhard Fischer1 1
Department of Computer Science, CB 430, University of Colorado at Boulder, Boulder, CO. 80309-0430, USA {yunwen,gerhard}@cs.colorado.edu 2 Software Engineering Laboratory, Software Research Associates, Inc. 3-12 Yotsuya, Shinjuku, Tokyo 160-0004, Japan
Abstract. Software component-based reuse is difficult for software developers to adopt because first they must know what components exist in a reuse repository and then they must know how to retrieve them easily. This paper describes the concept and implementation of active reuse repository systems that address the above two issues. Active reuse repository systems employ active information delivery mechanisms to deliver potentially reusable components that are relevant to the current development task. They can help software developers reuse components they did not even know existed. They can also greatly reduce the cost of component location because software developers need neither to specify reuse queries explicitly, nor to switch working contexts back and forth between development environments and reuse repository systems.
1
Introduction
Component-based software reuse is an approach to build new software systems from existing reusable software components. Software reuse is promising because complex systems evolve faster if they are built upon stable subsystems [34]. Empirical studies have also concluded that software reuse can improve both the quality and productivity of software development [1,23]. However, successful deployment of software reuse has to address managerial issues, enabling technical issues and cognitive issues faced by software developers. This paper is focused on the technical and cognitive issues involved in component-based software reuse. A reuse process generally consists of location, comprehension, and modification of needed components [13]. As a precondition to the success of reuse, a reuse repository system is indispensable. A reuse repository system has three connotations: a collection of reusable components, an indexing and retrieval mechanism, and an operating interface. Reuse repository systems suffer from an inherent dilemma: the more components they include, the more potentially useful they are, but also the more difficult they become for reusers to locate the needed components. Nevertheless, for reuse to pay off, a reuse repository with a large number of components is necessary. The success of reuse thus relies crucially on
the retrieval mechanism and the interface of reuse repository systems to facilitate the easy location of components. The existence of reusable components does not guarantee that they are reused. Reusable components help software developers think at higher levels of abstraction. Like the introduction of a new word into English, which increases our power of thinking and communication, reusable components expand the expressiveness of software developers and contribute to reducing the complexity of software development. However, software developers must learn the syntax and semantics of components if they are to develop with them. Learning components constitutes no small part of the cognitive barrier to reuse [4]. Another truism in reuse is that for software developers to reuse, locating reusable components must be easier than developing from scratch [20]. It is important to put reuse into the whole context of software development. For software developers, reuse is not a goal in itself; reuse is only a means for them to accomplish their tasks. Cognitive scientists have revealed that human beings are utility maximizers [30]; therefore, only when software developers perceive that the reuse approach has more value than cost will reuse be readily embraced. To diminish the above-mentioned two barriers to reuse faced by software developers, a developer-centered approach to the design of reuse repository systems is proposed in this paper. This approach stresses the importance of integrating reuse repository systems into the development environment and views reuse as an integral part of the development process. Drawing on empirical studies and cognitive theory, we first analyze the difficulties of reuse from the perspective of software developers in Sect. 2. We argue in Sect. 3 that active reuse repository systems–systems equipped with active information delivery mechanisms–are a solution because they increase developers' awareness of reusable components and reduce the cost of component location. A prototype of an active reuse repository system, CodeBroker, is described in Sect. 4.
2 Developer-Centered View of Reuse

2.1 Three Modes of Reuse
From the perspective of software developers, there are three reuse modes based upon their knowledge about a reuse repository: reuse-by-memory, reuse-by-recall and reuse-by-anticipation. In the reuse-by-memory mode, while developing a new system, software developers may notice similarities between the new system and reusable components they have learned in the past and know very well. Therefore, they can reuse them easily during the development, even without the support of a reuse repository system because their memory assumes the role of the repository system. In the reuse-by-recall mode, while developing a new system, software developers vaguely recall that the repository contains some reusable components with similar functionality, but they do not remember exactly which components they are. They need to search the repository to find what they need. In this mode,
developers are often determined to find the needed components. An effective retrieval mechanism is the main concern for reuse repository systems supporting this mode. In the reuse-by-anticipation mode, software developers anticipate the existence of certain reusable components. Although they do not know of relevant components for certain, their knowledge of the domain, the development environment, and the repository is enough to motivate them to search the reuse repository system in the hope of finding what they want. In this mode, if developers cannot find what they want quickly enough, they will soon give up on reuse [26]. Software developers have little resistance to the first two modes of reuse. As reported by Isoda [19], software developers reuse the same components repeatedly once they have reused them for the first time. This also explains why individual, ad hoc reuse has taken place while organization-wide systematic reuse has not achieved the same success: software developers have individual reuse repositories in their memories, so they can reuse-by-memory or reuse-by-recall [26]. For those components that have not yet been internalized into their memories, software developers have to resort to the reuse-by-anticipation mode. The activation of the reuse-by-anticipation mode relies on two enabling factors:
– Software developers anticipate the existence of reusable components.
– They perceive that the cost of the reuse process is lower than that of developing from scratch.

2.2 Information Islands
Unfortunately, software developers’ anticipation of available reusable components does not always match real repository systems. Empirical studies on the use of high-functionality computing systems (reuse repository systems being typical examples of them) have found there are four levels of users’ knowledge about a computing system (Fig. 1) [12]. Because users of reuse repository systems are software developers, we will substitute “software developers” for “users” in the following analysis. In Fig. 1, ovals represent levels of a software developer’s knowledge of a reuse repository system, and the rectangle represents the actual repository, labeled L4. L1 represents those components that are well known, easily employed, and regularly reused by a developer. L1 corresponds to the reuse-by-memory mode. L2 contains components known vaguely and reused only occasionally by a developer; they often require further confirmation when they are reused. L2 corresponds to the reuse-by-recall mode. L3 represents what developers believe, based on their experience, exists in the repository system. L3 corresponds to the reuse-by-anticipation mode. Many components fall in the area of (L4 - L3), which means their existence is not known to the software developer. Consequently, there is little possibility for the developer to reuse them because people generally cannot ask for what they
Fig. 1. Different levels of users' knowledge about a system: L1 (well known), L2 (vaguely known), L3 (belief), and L4, the actual repository, whose remaining components are unknown
do not know [15]. Components in (L4 - L3) thus become information islands [8], inaccessible to software developers without appropriate tools. Many reports about reuse experience in industrial software companies illustrate this inhibiting factor of reuse. Devanbu et al. [6] report that developers, unaware of reusable components, repeatedly re-implement the same function–in one case, this occurred ten times. This kind of behavior is also observed as typical among the four companies investigated by Fichman and Kemerer [10]. From their experience of promoting reuse in their organization, Rosenbaum and DuCastel conclude that making components known to developers is a key factor for successful reuse [33]. Reuse repository systems, most of which employ information access mechanisms only and operate in the style of "if you ask, I tell you," provide little help for software developers in exploring those information islands.

2.3 Reuse Utility
Human beings try to be utility maximizers in the decision-making process [30]; software developers are no exception. Reuse utility is the ratio of reuse value to reuse cost. Although there is considerable cost involved in setting up a reuse repository, from the perspective of software developers the perceived cost of reuse consists of the part associated with the reuse process–the cost of the location, comprehension, and modification of reusable components. Reducing the location cost increases reuse utility, which in turn makes the reuse approach more likely to be chosen. Location cost can be further broken down into the following items:
(1) The effort needed to learn about the components, or at least their existence, so that developers will initiate the reuse process.
(2) The cost associated with switching back and forth between development environments and reuse repository systems. This switching causes the loss of working memory and the disruption of workflow [27].
(3) The cost of specifying a reuse query based on the current task. Developers must be able to specify their needs in a way that is understood by the reuse repository system [13].
(4) The cost of executing the retrieval process to find what is needed.
Automating the execution of the retrieval process alone is not enough; reducing the costs of items (1), (2), and (3) should be given equal, if not more, consideration.
3
Active Information Delivery
In contrast to the conventional information access mechanism, in which users explicitly specify their information needs to a computer system that in turn returns retrieval results, the active delivery mechanism presents information to users without being given explicit specifications of their information needs. Active delivery systems that just throw a piece of decontextualized information at users, for example, Microsoft Office's Tip of the Day, are of little use because they ignore the working context. To improve the usefulness of delivered information so that users can apply it to their tasks, the relevance of the information to the task at hand, or to the current working context, must be taken into consideration. This context-sensitive delivery requires that active delivery systems have a certain understanding of what users are doing.

3.1 Active Information Delivery in Reuse Repository Systems
Equipping reuse repository systems with active information delivery not only makes it possible for software developers to reuse formerly unknown components, but also supports a seamless transition from development activities to reuse activities, which reduces the cost of reuse.
A Bridge to Information Islands. Not knowing of the existence of reusable components residing on information islands prevents reuse from taking place. In contrast with passive reuse repository systems–systems employing query-based information access only–active reuse repository systems are able to compare the current development task with reusable components and proactively present those that are relevant to developers, increasing the opportunity for reuse.
Well-Informed Decision Making. Studies of the human decision-making process have shown that the presence of problem-solving alternatives affects the final decision dramatically [30]. The presence of actively delivered reusable components reminds software developers of the alternative development approach–reuse–other than their current approach of developing from scratch. It prompts software developers to make well-informed decisions after giving due consideration to reuse.
Reduction of Reuse Cost. Active reuse repository systems reduce the cost of reuse by streamlining the transition from development activities to component-locating activities. Software developers can access reusable components without switching their working contexts. This is less disruptive to their workflow than using passive repository systems, because the latter involves a greater loss of working memory. As software developers switch from development to reuse, their working memory (whose capacity is very limited, holding about seven items) of the development activities decays with a half-life of 15 seconds [27]. Therefore, the longer they spend on locating components, the more working memory is lost. With the support of active reuse repository systems, software developers neither need to specify their reuse queries explicitly nor execute the searching process. All of these factors contribute to reducing the cost of reuse and increasing reuse utility, so that reuse is put in a more favored situation.

3.2 Capturing the Task of Software Developers
For an active reuse repository system to deliver components relevant to the task in which a software developer is currently engaged, it must be able to capture, to a certain extent, what the task is. Software development is a process of progressive transformation of requirements into a program. Inasmuch as software developers use computers, it is possible for reuse systems, if integrated with the development environment, to capture the task of software developers from their partially constructed programs, even though the reuse systems may not fully understand the task. A program has three aspects: concept, code, and constraint. The concept of a program is its functional purpose or goal; the code is the embodiment of the concept; and the constraint regulates the environment in which the program runs. This characterization is similar to the 3C model of Tracz [36], who uses concept, content, and context to describe a reusable component. A program includes more than its code part. Software development is essentially a cooperative process among many developers; therefore, programs must include both formal information for their executability and informal information for their readability by peer developers [35]. Informal information includes structural indentation, comments, and identifier names. Comments and identifier names are important beacons for the understanding of programs because they reveal the concepts of programs [2]. The use of comments and/or meaningful identifier names to index and retrieve software components has been explored [7,9]. Modern programming languages such as Java reinforce this self-explaining feature of programs by introducing the concept of doc comments. A doc comment begins with /** and continues until */. It immediately precedes the declaration of a module, which is either a class or a method. The contents of a doc comment describe the functionality of the following module. Doc comments are utilized by the javadoc program to create online documentation from Java
source code. Most Java programmers therefore do not need to write separate, extra documents for their programs. The constraint of a program is manifested by its signature. A signature is the type expression of a module that defines its syntactic interface. The signature of a function or a method specifies what types of inputs it takes and what types of outputs it produces. The signature of a class includes its data definition part and the collection of signatures of its methods. For a reusable component to be integrated, its signature should be compatible with the program to be developed. By combining the concepts revealed through comments with the constraints revealed through signatures, it is highly possible to find components that can be reused in the current development task, provided they show high relevance in concepts and high compatibility in constraints. Fortunately, in current development practices and environments, comments and signature definitions come before the code. This gives active repository systems a chance to deliver reusable components before the code is implemented, after the systems have captured the comments and signatures and used them to locate relevant components automatically.
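To make this concrete, the small Java fragment below is an illustrative sketch (the class name and method body are ours; the doc comment and signature anticipate the getRandomNumber example used in Sect. 4.3). The doc comment carries the concept of the module and the signature carries its constraint, and both are available before any code is written.

public class RandomUtil {
    /**
     * Create a random number between two limits.
     * The doc comment above can be extracted as a concept query; the
     * signature below yields the constraint query "int x int -> int".
     */
    public static int getRandomNumber(int from, int to) {
        return from + (int) (Math.random() * (to - from));
    }
}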
4
Implementation of an Active Reuse Repository System
A prototype of an active reuse repository system, CodeBroker, has been implemented. It supports Java programmers in reusing components during programming.

4.1 System Architecture
The architecture of CodeBroker is shown in Fig. 2. It consists of three software agents: Listener, Fetcher, and Presenter. A software agent is a software entity that functions autonomously in response to the changes in its running environment without requiring human guidance or intervention [3]. In CodeBroker, the Listener agent extracts and formulates reuse queries by monitoring the software developer's interaction with the program editor–Emacs. Those queries are then passed to Fetcher, which retrieves the matching components from the reuse repository. Reusable components retrieved by Fetcher are passed to Presenter, which uses user profiles to filter out unwanted components and delivers the filtered result in the Reusable Components Info-display (RCI-display). The reuse repository in CodeBroker is created by its indexing program, CodeIndexer, which extracts and indexes functional descriptions and signatures from the online documentation generated by running javadoc over Java source code.

4.2 Retrieval Mechanism
An effective retrieval mechanism is essential in any reuse repository system. CodeBroker uses the combination of latent semantic analysis (LSA) and signature matching (SM) as the retrieval mechanism. LSA is used to compute the
Fig. 2. The system architecture of CodeBroker: Listener analyzes the working products in the program editor and formulates concept and constraint queries; Fetcher retrieves components from the reuse repository through concept and signature indexing; Presenter refines the retrieved components with the user profile and delivers them in the RCI-display, from which developers can update the profile
concept similarity existing between the concepts of the program under development and the textual documents of reusable components in the repository. SM is used to determine the constraint compatibility existing between the signature of the program under development and those of components in the repository.
Latent Semantic Analysis. LSA is a technology based on free-text indexing. Free-text indexing suffers from the concept-based retrieval problem; that is, if developers use terms different from those used in the descriptions of components, they cannot find what they want, because free-text indexing does not take semantics into consideration. By constructing a large semantic space of terms to capture the overall pattern of their associative relationships, LSA facilitates concept-based retrieval. The indexing process of LSA starts with creating a semantic space from a large corpus of training documents in a specific domain–we use the Java language specification, Java API documents, and Linux manuals as training documents to acquire a level of knowledge similar to what a Java programmer most likely has. It first creates a large term-by-document matrix in which entries are normalized scores of the term frequency in a given document (high-frequency words are removed). The term-by-document matrix is then decomposed, by means of singular value decomposition, into the product of three matrices: a matrix of left singular vectors, a diagonal matrix of singular values, and a matrix of right singular vectors. These matrices are then reduced to k dimensions by eliminating small singular values; the value of k often ranges from 40 to 400, but the best value of k remains an open question. A new matrix, viewed as the semantic space of the domain, is constructed as the product of the three reduced matrices. In this new matrix, each row represents the position of a term in the semantic space; terms are thus re-represented in the newly created semantic space. The reduction of singular values is important because it captures only the major, overall pattern of associative relationships among terms, ignoring the noise that accompanies most automatic thesaurus construction based simply on the co-occurrence statistics of terms. After the semantic space is created, each reusable component is represented as a vector in the semantic space based on the terms it contains, and so is a query. The similarity of a query and a reusable component is thus determined by the Euclidean distance between the two vectors. A reusable component matches a query if their similarity value is above a certain threshold. Compared to traditional free-text indexing techniques, LSA can improve retrieval effectiveness by 30% in some cases [5].
Signature Matching. SM is the process of determining the compatibility of two components in terms of their signatures [37]. It is an indexing and retrieval mechanism based on constraints. The basic form of the signature of a method is

Signature: InTypeExp -> OutTypeExp

where InTypeExp and OutTypeExp are type expressions resulting from the application of a Cartesian product constructor to all the parameter types. For example, for the method

int getRandomNumber (int from, int to)

the signature is

getRandomNumber: int x int -> int

Two signatures

Sig1: InTypeExp1 -> OutTypeExp1
Sig2: InTypeExp2 -> OutTypeExp2

match if and only if InTypeExp1 is in structural conformance with InTypeExp2, and OutTypeExp1 is in structural conformance with OutTypeExp2. Two type expressions are structurally conformant if they are formed by applying the same type constructor to structurally conformant types. The above definition of SM is very restrictive because it misses components whose signatures do not exactly match but are similar enough to be reusable after slight modification. Partial signature matching relaxes the definition of structural conformance of types: a type is considered conformant to its more generalized form or its more specialized form. For procedural types, if there is a path from type T1 to type T2 in the type lattice, T1 is a generalized form of T2, and T2 is a specialized form of T1. For example, in most programming languages, Integer is a specialized form of Float, and Float is a generalized form of Integer. For object-oriented types, if T1 is a superclass of T2, T1 is a generalized form of T2, and T2 is a specialized form of T1. The constraint compatibility value between two signatures is the product of the conformance values between their types. The type conformance value is 1.0 if two types are in structural conformance according to the definition of the programming language. It drops by a certain percentage (the current system uses 5%, which will be adjusted as more usability experience is gained) if one type conversion is needed, or if there is an immediate inheritance relationship between them, and so forth. The constraint compatibility value is 1.0 if two signatures exactly match.
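As an illustration only, the following Java class is a minimal sketch of partial signature matching under the scoring rules just described; it is not CodeBroker's actual implementation. It treats a signature as the list of its parameter types plus its return type and multiplies the per-type conformance values; the 5% penalty and the oneStepApart check are simplifications.

import java.util.List;

public class SignatureMatcher {
    // Penalty applied when two types conform only after one type
    // conversion or an immediate inheritance relationship.
    private static final double STEP_PENALTY = 0.95;

    // Conformance of a single pair of types: 1.0 for identical types,
    // 0.95 for one conversion/inheritance step, 0.0 otherwise.
    static double typeConformance(String required, String offered) {
        if (required.equals(offered)) {
            return 1.0;
        }
        return oneStepApart(required, offered) ? STEP_PENALTY : 0.0;
    }

    // Constraint compatibility of two signatures is the product of the
    // conformance values of their corresponding types.
    static double compatibility(List<String> query, List<String> component) {
        if (query.size() != component.size()) {
            return 0.0;
        }
        double value = 1.0;
        for (int i = 0; i < query.size(); i++) {
            value *= typeConformance(query.get(i), component.get(i));
        }
        return value;
    }

    // Placeholder for the type-lattice / inheritance check, e.g.
    // int vs. float, or a class vs. its direct superclass.
    private static boolean oneStepApart(String a, String b) {
        return (a.equals("int") && b.equals("float"))
            || (a.equals("float") && b.equals("int"));
    }
}

Under this sketch, matching the query int x int -> int against an identical component signature yields a compatibility of 1.0, while a component taking a float in place of one int would score 0.95.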
4.3 Listener
The Listener agent runs continuously in the background of Emacs to monitor the input of software developers. Its goal is to capture the task software developers have at hand and construct reuse queries on behalf of them. Reuse queries are extracted from doc comments and signatures. Whenever a software developer finishes the definition of a doc comment, Listener automatically extracts the contents, and creates a concept query that reflects the concept aspect of the program to be implemented. HTML markup tags included in the doc comments are stripped, as well as tagged information such as @author, @version, etc. Figure 3 shows an example. A software developer wants to generate a random number between two integers. Before he or she implements it (i.e., writes the code part of the program), the task is indicated in the doc comment. As soon as the comment is written (where the cursor is placed), Listener automatically extracts the contents: Create a random number between two limits. This is used as a reuse query to be passed to Fetcher and Presenter, which will present, in the RCI-display (the lower part of the editor) those components whose functional description matches this query based on LSA.
Fig. 3. Reusable component delivery based on comments
Concept similarity alone is often not enough for a component to be reused, because the component also needs to satisfy type compatibility. Reusable components with incompatible types make the modification process considerably more difficult. For instance, in
Fig. 3, although component 3, the signature of which is shown in the message buffer (the last line of the window), could be modified to achieve the task, it is desirable to find a component that can be immediately integrated without modification. Type compatibility constraints are manifested in the signature of a module. As the software developer proceeds to declare the signature, it is extracted by Listener, and a constraint query is created out of it. As Fig. 4 shows, when the developer types the left bracket { (just before the cursor), Listener is able to determine it as the end of a module signature definition. Listener thus creates a constraint query: int x int -> int. Figure 4 shows the result after the query is processed. Notice that the first component in the RCI-display in Fig. 4 has exactly the same signature–shown in the second line of the pop-up window–as the one extracted from the editor, and therefore can be reused immediately.
Fig. 4. Reusable component delivery based on both comments and signatures
4.4
Fetcher
The Fetcher agent performs the retrieval process. When Listener passes a concept query, Fetcher computes, using LSA, the similarity value from each component in the repository to the query, and returns those components whose similarity value ranks in the top 20. The number 20 is the threshold value used by Fetcher to determine what components are regarded as relevant, and can be customized by software developers. If software developers are not satisfied with
the delivery based on comments only and proceed to declare the signature of the module, the SM part of Fetcher is invoked to rearrange the previously delivered components by moving the signature-compatible ones into higher ranks. The new combined similarity value is determined by the formula

Similarity = ConceptSimilarity * w1 + ConstraintCompatibility * w2

where w1 + w2 = 1, and the default values of w1 and w2 are 0.5. Their values can be adjusted by software developers to reflect their own perspectives on the relative importance of concept similarity and constraint compatibility. Figures 3 and 4 show the difference when the signature is taken into consideration. Component 1 in Fig. 4 was component 4 in Fig. 3, and the top three components in Fig. 3 do not even show up in the visible part of the RCI-display in Fig. 4 because they all have incompatible signatures.
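The ranking step can be pictured with the small, hypothetical sketch below; the class and field names are ours, not CodeBroker's, and the default weights and the top-20 cutoff follow the description above.

import java.util.Comparator;
import java.util.List;

public class Ranker {
    static class Candidate {
        String name;
        double conceptSimilarity;       // from LSA
        double constraintCompatibility; // from signature matching
        double combined;
    }

    // Combine the two scores (default weights 0.5 / 0.5) and keep the
    // top-ranked components (20 by default).
    static List<Candidate> rank(List<Candidate> candidates,
                                double w1, double w2, int limit) {
        for (Candidate c : candidates) {
            c.combined = c.conceptSimilarity * w1 + c.constraintCompatibility * w2;
        }
        candidates.sort(
            Comparator.comparingDouble((Candidate c) -> c.combined).reversed());
        return candidates.subList(0, Math.min(limit, candidates.size()));
    }
}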
4.5
Presenter
The retrieved components are then shown to software developers by the agent Presenter in the RCI-display, in decreasing order of similarity value. Each component is accompanied by its similarity rank, similarity value, name, and a short description. Developers who are interested in a particular component can launch, by a mouse click, an external HTML rendering program to jump to the corresponding place in the full Java documentation. Active delivery in CodeBroker is meant to inform software developers of those components that fall into L3 (reuse-by-anticipation) and the area of (L4 - L3) (information islands) in Fig. 1. Therefore, delivery of components from L2 (reuse-by-recall) and L1 (reuse-by-memory), especially L1, might be of little use, with the risk of making the unknown, really needed components less salient. Presenter uses a user profile for each software developer to adapt the components retrieved by Fetcher to the user's knowledge level of the repository, ensuring that components already known to the user are not delivered. A user profile is a file that lists all components known to a software developer. Each item on the list can be a package, a class, or a method. A package or a class indicates that no components from it should be delivered; a method indicates that only that method component should not be delivered. The user profile can be updated by software developers through interaction with Presenter. A right mouse click on a component delivered by Presenter brings up the Skip Components Menu, as shown in Fig. 4. A software developer can select the All Sessions command, which will update his or her profile so that the component, or components from that class or package, will not be delivered again in later sessions. The Skip Components Menu also allows programmers to instruct Presenter to remove, in this session only, the component, or the class or package it belongs to, by selecting the This Session Only command. This is meant to temporarily remove those components that are apparently not relevant to the current task in order to make the needed component easier to find.
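A minimal sketch of such profile-based filtering is given below; the entry format and names are hypothetical, but they follow the package/class/method granularity described above.

import java.util.HashSet;
import java.util.Set;

public class UserProfileFilter {
    // Entries such as "java.util", "java.util.Vector" or
    // "java.util.Vector.addElement", one per line in the profile file.
    private final Set<String> knownItems = new HashSet<>();

    void addKnown(String item) {
        knownItems.add(item);
    }

    // A component is suppressed if its package, its class, or the fully
    // qualified method itself appears in the profile.
    boolean shouldDeliver(String pkg, String cls, String method) {
        return !knownItems.contains(pkg)
            && !knownItems.contains(pkg + "." + cls)
            && !knownItems.contains(pkg + "." + cls + "." + method);
    }
}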
5 Related Work
This work is closely related to research on software reuse repository systems, as well as to active information systems that employ active information delivery mechanisms.

5.1 Software Reuse Repository Systems
Free-text indexing-based reuse repository systems take the textual description of components as the indexing surrogates. The descriptions may come from the accompanying documents [24], or be extracted from the comments and identifier names in source code [7,9]. Reuse queries are also written in natural language. The greatest advantage of this approach is its low cost in both setting up the repository and posing a query. However, this approach does not support concept-based retrieval. Faceted classification was proposed by Prieto-Diaz to complement the incompleteness and ambiguity of natural language documents [29]. Reusable components are described with multiple facets. For each facet, there is a set of controlled terms. A thesaurus list accompanies each term so that reusers can use any word from the thesaurus list to refer to the same term. Although this approach is designed to improve retrieval effectiveness, experiments have shown it does not perform better than the free-text based approach [17,25]. AI-based repository systems use knowledge bases to simulate the human process of locating reusable components in order to support concept-based retrieval. Typical examples include LaSSIE [6], AIRS [28], and CodeFinder [18]. LaSSIE and CodeFinder use frames, and AIRS uses facets, to represent components. A semantic network is constructed by experts to link components based on their semantic relationship. The bottleneck of this approach is the difficulty of constructing the knowledge base, especially when the repository becomes large. Although most reuse repository systems use the conceptual information of a component, the constraints, that is, the signatures, of components can also be used to index and retrieve components. Rittri first proposed the use of signatures to retrieve components in functional programming languages [32]. His work is extended in [37], which gives a general framework for signature matching in functional programming languages. CodeBroker is most similar to those systems adopting free-text indexing. It is also similar to the AI-based systems because the semantic space created by LSA could be regarded as a simulation of human understanding of the semantic relationship among words. Whereas the semantic networks used in AI-based systems are instilled with human knowledge, the semantic spaces of LSA are trained with a very large corpus of documents. After being trained on about 2,000 pages of English texts, LSA has scored as well as average test-takers on the synonym portion of TOEFL [21]. CodeBroker also extends signature matching to object-oriented programming languages. In terms of retrieval mechanisms, CodeBroker is unique because it combines both the concept and the constraint of programs in order to improve retrieval effectiveness, whereas all other systems use only one aspect.
Finally, the most distinctive feature of CodeBroker is its active delivery mechanism, which does not exist in any of the above systems.

5.2 Active Information Systems
Reuse repository systems are a subset of information systems that help users find the information needed to accomplish their work from a huge information space [12]. Many information systems have utilized the active information delivery mechanism to facilitate such information use. The most simple example of active information systems is Microsoft Office’s Tip of the Day. When a user starts an application of Microsoft Office, a tip is given on how to operate the application. More sophisticated active information systems exploit the shared workspace between working environments and information systems to provide context-sensitive information. Activists, which monitors a user’s use of Emacs and suggests better commands to accomplish the same task, is a context-sensitive and active help system [14]. LispCritic uses program transformation rules to recognize a less ideal code segment, and delivers a syntactical equivalent, but more efficient, solution [11]. Active delivery is also heavily utilized in the design of autonomous interface agents for WWW information exploration. Letizia, based on analyzing the current web page browsed by users and their past WWW browsing activities, suggests new web pages that might be of interest for their next reading [22]. Remembrance Agent utilizes the active delivery mechanism to inform users of those documents from their email archives and personal notes that are relevant to the document they are writing in Emacs [31]. CodeBroker is similar to those systems in terms of using current work products as retrieval cues to information spaces, and building a bridge to information islands with active delivery mechanisms.
6
Summary
Most existing reuse repository systems postulate that software developers know when to initiate a reuse process, although systematic analysis of reuse failures has indicated that no attempt to reuse is the biggest barrier to reuse [16]. When deployment of such repository systems fails, many blame managerial issues or the NIH (not invented here) syndrome, and call for education to improve the acceptance of reuse. Managerial commitment and education are indeed important for the success of reuse, but we feel it is equally important to design reuse repository systems that are oriented toward software developers and are integrated seamlessly into their current working environments. Ready access to reusable components from their current working environments makes reuse appealing directly to software developers. As we have witnessed from the relative success of so-called ad hoc, individual-based reuse-by-memory, active reuse repository systems can extend the memory of software developers by presenting relevant reusable components right in their working environments.
Ongoing work on the development of CodeBroker aims to extend the signature matching mechanism to the class level with more relaxed matching criteria. An evaluation of CodeBroker will also be performed to gain a better understanding of the difficulties encountered by software developers when they adopt reuse in their development activities, as well as to determine to what extent an active reuse repository system such as CodeBroker is able to help.
References 1. Basili, V., Briand, L., Melo, W.: How reuse influences productivity in objectoriented systems. Comm. of the ACM, 39(10):104–116, 1996. 302 2. Biggerstaff, T. J., Mitbander, B. G., Webster, D. E.: Program understanding and the concept assignment problem. Comm. of the ACM, 37(5):72–83, 1994. 307 3. Bradshaw, J. M.: Software Agents. AAAI Press, Menlo Park, CA USA, 1997. 308 4. Brooks, F. P.: The Mythical Man-Month: Essays on Software Engineering. Addison-Wesley, Reading, MA USA, 20th anniversary ed., 1995. 303 5. Deerwester, S., Dumais, S. T., Furnas, G. W., Landauer, T. K., Harshman, R.: Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391–407, 1990. 310 6. Devanbu, P., Brachman, R. J., Selfridge, P. G., Ballard, B. W.: LaSSIE: A knowledge-based software information system. Comm. of the ACM, 34(5):34–49, 1991. 305, 314 7. DiFelice, P., Fonzi, G.: How to write comments suitable for automatic software indexing. Journal of Systems and Software, 42:17–28, 1998. 307, 314 8. Engelbart, D. C.: Knowledge-domain interoperability and an open hyperdocument system. In Proc. of Computer Supported Cooperative Work ’90, 143–156, New York, NY USA, 1990. 305 9. Etzkorn, L. H., Davis, C. G.: Automatically identifying reusable OO legacy code. IEEE Computer, 30(10):66–71, 1997. 307, 314 10. Fichman, R. G., Kemerer, C. E.: Object technology and reuse: Lessons from early adopters. IEEE Software, 14(10):47–59, 1997. 305 11. Fischer, G.: A critic for Lisp. In Proc. of the 10th International Joint Conference on Artificial Intelligence, 177–184, Los Altos, CA USA, 1987. 315 12. Fischer, G.: User modeling in human-computer interaction. User Modeling and User-Adapted Interaction, (to appear). 304, 315 13. Fischer, G., Henninger, S., Redmiles, D.: Cognitive tools for locating and comprehending software objects for reuse. In Proc. of the 13th International Conference on Software Engineering, 318–328, Austin, TX USA, 1991. 302, 306 14. Fischer, G., Lemke, A. C., Schwab, T.: Knowledge-based help systems. In Proc. of Human Factors in Computing Systems ’85, 161–167, San Francisco, CA USA, 1985. 315 15. Fischer, G., Reeves, B. N.: Beyond intelligent interfaces: Exploring, analyzing and creating success models of cooperative problem solving. In Baecker, R., Grudin, J., Buxton, W., Greenberg, S., (eds.): Readings in Human-Computer Interaction: Toward the Year 2000, 822–831, Morgan Kaufmann Publishers, San Francisco, CA USA, 2nd ed., 1995. 305 16. Frakes, W. B., Fox, C. J.: Quality improvement using a software reuse failure modes models. IEEE Transactions on Software Engineering, 22(4):274–279, 1996. 315
17. Frakes, W. B., Pole, T. P.: An empirical study of representation methods for reusable software components. IEEE Transactions on Software Engineering, 20(8):617–630, 1994. 314 18. Henninger, S.: An evolutionary approach to constructing effective software reuse repositories. ACM Transactions on Software Engineering and Methodology, 6(2):111–140, 1997. 314 19. Isoda, S.: Experiences of a software reuse project. Journal of Systems and Software, 30:171–186, 1995. 304 20. Krueger, C. W.: Software reuse. ACM Computing Surveys, 24(2):131–183, 1992. 303 21. Landauer, T. K., Dumais, S. T.: A solution to Plato’s problem: The latent semantic analysis theory of acquisition, induction and representation of knowledge. Psychological Review, 104(2):211–240, 1997. 314 22. Lieberman, H.: Autonomous interface agents. In Proc. of Human Factors in Computing Systems ’97, 67–74, Atlanta, GA USA, 1997. 315 23. Lim, W. C.: Effects of reuse on quality, productivity and economics. IEEE Software, 11(5):23–29, 1994. 302 24. Maarek, Y. S., Berry, D. M., Kaiser, G. E.: An information retrieval approach for automatically constructing software libraries. IEEE Transactions on Software Engineering, 17(8):800–813, 1991. 314 25. Mili, H., Ah-Ki, E., Grodin, R., Mcheick, H.: Another nail to the coffin of faceted controlled-vocabulary component classification and retrieval. In Proc. of Symposium on Software Reuse ’97, 89–98, Boston, MA USA, 1997. 314 26. Mili, H., Mili, F., Mili, A.: Reusing software: Issues and research directions. IEEE Transactions on Software Engineering, 21(6):528–562, 1995. 304 27. Norman, D. A.: Cognitive engineering. In Norman, D. A., Draper, S. W., (eds.): User centered system design: New perspective on human-computer interaction, 31– 61, Lawrence Erlbaum Associates, Hillsdale, NJ USA, 1986. 305, 307 28. Ostertag, E., Hendler, J., Prieto-Diaz, R., Braun, C.: Computing similarity in a reuse library system: An AI-based approach. ACM Transactions on Software Engineering and Methodology, 1(3):205–228, 1992. 314 29. Prieto-Diaz, R.: Implementing faceted classification for software reuse. Comm. of the ACM, 34(5):88–97, 1991. 314 30. Reisberg, D.: Cognition. W. W. Norton & Company, New York, NY, 1997. 303, 305, 306 31. Rhodes, B. J., Starner, T.: Remembrance agent: A continuously running automated information retrieval system. In Proc. of the 1st International Conference on the Practical Application of Intelligent Agents and Multi Agent Technology, 487–495, London, UK, 1996. 315 32. Rittri, M.: Using types as search keys in function libraries. Journal of Functional Programming, 1(1):71–89, 1989. 314 33. Rosenbaum, S., DuCastel B.: Managing software reuse–an experience report. In Proc. of 17th International Conference on Software Engineering, 105–111, Seattle, WA USA, 1995. 305 34. Simon, H. A.: The Sciences of the Artificial. The MIT Press, Cambridge, MA USA, 3rd ed., 1996. 302 35. Soloway, E., Ehrlich, K.: Empirical studies of programming knowledge. IEEE Transactions on Software Engineering, SE-10(5):595–609, 1984. 307 36. Tracz, W.: The 3 cons of software reuse. In Proc. of the 3rd Workshop on Institutionalizing Software Reuse, Syracuse, NY USA, 1990. 307
37. Zaremski, A. M., Wing, J. M.: Signature matching: A tool for using software libraries. ACM Transactions on Software Engineering and Methodology, 4(2):146– 170, 1995. 310, 314
A Method to Recover Design Patterns Using Software Product Metrics

Hyoseob Kim1 and Cornelia Boldyreff2

1 Software Engineering Research Group (SERG), Department of Computing, City University, Northampton Square, London, EC1V 0HB, UK
[email protected]
2 RISE (Research Institute in Software Evolution), Department of Computer Science, University of Durham, South Road, Durham, DH1 3LE, UK
[email protected]

Abstract. Software design patterns are a way of facilitating design reuse in object-oriented systems by capturing recurring design practices. Many design patterns have been identified and, further, various usages of patterns are known, e.g., documenting frameworks and reengineering legacy systems [8, 15]. To benefit fully from using the new concept, we need to develop more systematic methods of capturing design patterns. In this paper, we propose a new method to recover the GoF1 patterns using software measurement skills. We developed a design pattern CASE tool to facilitate the easy application of our method. To demonstrate the usefulness of our approach, we carried out a case study, and its experimental results are reported.
Keywords: design pattern recovery, software product metrics, design reuse.

1 Introduction
The concept of design pattern originally came from Christopher Alexander, a professor of architecture at the University of California, Berkeley, and was enthusiastically adopted by the object technology community. The reason for this willing adoption was that the OO community needed a higher abstraction concept to facilitate design reuse beyond classes and objects. Gamma contributed to its current successful state through his PhD thesis work [6]. We can find the influence of Alexander's work on the pattern community in the names of some of the most popular patterns, such as Bridge, Decorator, and Facade. The GoF defines design patterns as [7, Chapter 1]: descriptions of communicating objects and classes that are customised to solve a general design problem in a particular context.
1 In the pattern community, GoF (Gang of Four) refers to the four authors who wrote the book "Design Patterns: Elements of Reusable Object-Oriented Software", i.e., Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides [7].
Fig. 1. Software engineering processes related to design patterns (design recovery, redocumenting, reengineering, restructuring, and forward engineering, relating existing, maintained, and new software to design patterns and pattern languages)
Rewording the above definition, we can say that design patterns convey valuable design information to users by capturing commonalities usable in different contexts to solve different problems. Through reusing previously and successfully practised designs, patterns enable us to build more reliable software more quickly. These two benefits are critical in today's highly competitive business world. Although we still do not know the full power and all potential applications of design patterns, a limited degree of success has already been reported in several application areas [2]. Five kinds of software engineering processes are associated with design patterns, as shown in Figure 1: the design recovery process, the redocumentation process, the restructuring process, the reengineering process, and the forward-engineering process. The first focusses on recovering design patterns, while the other four are about the usage of those recovered patterns or the usage of known patterns. Firstly, we need to recover design patterns from existing software by applying design recovery techniques. Having identified and catalogued them in our pattern repositories, we can use those patterns to redocument our software to improve program comprehensibility, and to restructure it into a more desirable shape in terms of the specific software quality issues that we are interested in, such as low coupling and high cohesion. We might also want to reengineer our software to adopt new technologies like Java and CORBA, or to meet the challenges of new business environments like e-commerce. Finally, those patterns can be used in the development of new software, i.e., the forward engineering process. The scope of this paper is mainly limited to the design recovery process. In this paper, we do not try to extend the set of known design patterns, but focus on a limited set, i.e., the 23 GoF patterns. By doing so, the usefulness and scalability of the proposed design pattern recovery method can be determined. This paper is organised as follows: Section 2 gives an overview of work on design patterns and software product metrics. In the next section, we detail our design pattern recovery (DPR) method. Section 4 reports a case study conducted on three systems mainly developed in the C++ language. We show the
results of our experiments along with an evaluation of the DPR method. Finally, we draw some conclusions and suggest further work to be carried out.

2 Background

2.1 Design Patterns for Design Reuse, not Code Reuse
Design reuse has been touted as a solution to overcome the software crisis for quite some time. However, we found that it would be difficult to achieve it based only on the low-abstraction mechanisms provided by programming language features. Object-oriented programming (OOP), championed by the C++ language, certainly improved this situation, but not yet to a satisfying degree. Patterns can be classified in many different ways. One is to divide them by software lifecycle artifacts, e.g., requirement patterns, analysis patterns, and, of course, design patterns. Another classification is made according to the level of abstraction and size. Architectural patterns (a.k.a. architectural styles) are the highest and the biggest; programming patterns (a.k.a. code idioms) are the lowest and the smallest. Design patterns are situated between them. Design patterns are not invented but discovered, because design patterns capture design information successfully employed in the past2. Perhaps this explains why people from industry are currently more eager than academia to exploit the benefits of patterns. In both cases of good patterns and antipatterns, it is important that we discover them correctly and efficiently.

2.2 Software Product Metrics
Software metrics are used to measure specific attributes of software products or software development processes. With regard to software products, we can classify software metrics in many different ways. One way is to divide metrics according to the programming paradigm in which the subject system was developed. Thus, for object-oriented (OO) programs, we have procedural metrics, structural metrics, and object-oriented metrics. Procedural metrics measure properties of software parts, such as the sizes of individual modules, whereas structural metrics are based on the relationships of each module with others. Other names cited for these two kinds of metrics in the software metrics literature are intra-module metrics and inter-module metrics, respectively. For example, "Lines of Code (LOC)" and "McCabe's Cyclomatic Number (MVG)" are two of the most representative procedural metrics, while "coupling" and "cohesion" belong to the structural metrics group. On top of these, we have to consider another kind of metrics, i.e., OO metrics. Among the many OO metrics proposed, Chidamber and Kemerer's [5]
2 The existence of antipatterns has also been observed; these represent badly practised software engineering and can serve as examples of practices to be avoided [4].
OO metrics suite, in short the CK metrics, is the most popular. They proposed six new metrics: "weighted methods per class (WMC)", "depth of inheritance tree (DIT)", "number of children (NOC)", "coupling between object classes (CBO)", "response for a class (RFC)", and, finally, "lack of cohesion in methods (LCOM)". With respect to the scope covered by each group of metrics, we can say that OO metrics are a superset of the other two, because OO systems contain the features found in traditional programming concepts as well as newly added ones, whereas the reverse is never true. Thus, specific attributes of OO systems are not reflected well using only procedural and structural metrics. In Section 3, we will discuss how these three kinds of software product metrics can be used to investigate design patterns in existing software.

2.3 Previous Work
A few pieces of work on recovering design patterns from existing applications have been reported from academia and industry [12, 3, 9]. However, most of the results are special cases or, if general, inefficient to apply to industrial applications. Further, some ways of identifying patterns are language-dependent, so that we cannot use those methods for applications built in other programming languages. A typical example is the work done with Smalltalk [3]. The method used by Kramer and Prechelt encodes the characteristics of each pattern in the form of Prolog rules and then matches those rules against facts representing the properties of potential pattern candidates [10]. Their method was only applied to the GoF structural patterns. Another piece of work on design pattern recovery was done by Lano and Malik in the context of reengineering legacy applications [11]. In their approach, firstly, procedural patterns are captured from source code; then they are matched with their corresponding OO patterns. By applying those patterns, legacy applications are reengineered while preserving their original functionality. VDM++ and Object Calculus were used in order to formalise design patterns and to show that the reengineered applications correctly refine those legacy applications. Their approach is meaningful in that they try to use procedural patterns, acknowledging that design patterns are not unique to OO systems. By doing so, they overcome one of the major drawbacks of current reengineering techniques, namely the loss of maintainer understanding.

3 The Design Pattern Recovery (DPR) Method

3.1 OO Software Development and Maintenance Model
In OO methods, the problem space and the solution space are linked directly through various features such as encapsulation and inheritance. Four different worlds are assumed in object-oriented systems, i.e., the real world, the abstract world, the technical world, and the normative world, as shown in Figure 2. Thus we can
say that developing OO systems is essentially an evolutionary process. All these worlds are potentially fruitful sources for discovering patterns. In most existing work, the emphasis has been on examining design artifacts of the abstract and technical worlds, and this is where our work has focussed.
Fig. 2. Four different worlds represented in OO systems: object-oriented analysis (the real world), object-oriented design (the abstract world), object-oriented programming (the technical world), and object-oriented quality assurance (the normative world)
Although many variants of the software life cycle exist, software is normally developed by starting from defining the problem, via finding a solution to the problem, and finally implementing it. Thus, recovering design information from the abstract and technical worlds requires reversing the software development steps. Shull et al. [14] identified three major parts comprising any design pattern on the basis of the descriptions used in the GoF pattern catalogue: "purpose", "structure", and "implementation". Figure 3 shows the opposite directions that software development steps and pattern identification steps take, respectively.
Fig. 3. Software development steps and pattern investigation steps: development proceeds from purpose through structure to implementation, while pattern identification proceeds in the opposite direction
Design pattern recovery can bring us bigger benefits than the normal reverse engineering process, as the former captures more fragments of design information than the latter. Further, the design information captured in the design
pattern recovery process is of larger granularity and more formal than the software knowledge obtained through traditional reverse engineering processes.

3.2 The Design Pattern Recovery (DPR) Method
In OO systems we can recover design information more easily than in systems programmed in procedural languages, because in the former semantic information and syntactic information are more closely associated. We conjecture that by measuring and analysing the syntactic characteristics of software, we can become aware of the semantic information embedded in those syntactic program structures. However, using only procedural metrics and structural metrics is not enough for this kind of task, as OO programs do not follow the traditional way of software building. OO metrics should be collected along with the other two groups of metrics to recover patterns properly.

Applying the Goal/Question/Metric (GQM) Paradigm

Establishing a proper measurement plan is important in order to achieve the goal we aim at and to measure what we intend. The GQM paradigm [1, 16] is one of the most popular ways to plan a measurement scheme. We use it for our purpose of recovering patterns from OO systems. In our research, the GQM plan can be established as follows:

Goal 1: Recover design patterns.
  Question 1: What are the main constituents of an OO system?
    Metrics 1: object-oriented metrics
    Metrics 2: structural metrics
    Metrics 3: procedural metrics
  Question 2: What are the building blocks that design patterns are implemented with?
    Metrics 1: object-oriented metrics
    Metrics 2: structural metrics
    Metrics 3: procedural metrics
Goal 2: Determine whether this method is an effective way to recover design patterns3.
  Question 3: How accurate is the method at picking out design patterns?
    Metrics 4: #filtered patterns / #pattern candidates
  Question 4: Does the method pick out patterns that are not there?
    Metrics 5: #positive false patterns
  Question 5: Does the method fail to find patterns that are there?
    Metrics 6: #negative true patterns

3 As in the most typical pattern-matching examples, we can think of four different occasions when dealing with recovered pattern candidates: positive true, positive false, negative true, and negative false. The first two concern deciding the trueness of identified pattern candidates, whereas the remaining two concern the trueness of pattern instances that we failed to recover as pattern candidates.
As indicated above, in our case the first goal is quite clear, i.e., recovering patterns efficiently. Then, to address the goal, we can ask two questions: one about an OO system, and the other about a pattern, as we want to identify patterns in OO systems. We have the same set of metrics for these two GQM questions. Software systems developed in languages like C++ and Java mainly consist of classes, their dynamic instances (i.e., objects), and the associations and interactions between these. If we investigate them further, we find that they are based on more traditional language features such as variables, operators, conditional branches, functions, and procedures. Thus, if we want to know the structure and behaviour of an OO system, we should consider the three kinds of metrics mentioned above. The following gives more detail on the metrics we have adopted in our pattern investigation.

1. OO metrics
- Weighted methods per class (WMC): This measures the sum of a weighting function over the functions of the module. Two different weighting functions are applied: WMC1 uses the nominal weight of 1 for each function, and hence measures the number of functions; WMCv uses a weighting function which is 1 for functions accessible to other modules and 0 for private functions.
- Depth of inheritance tree (DIT): This is the measure of the length of the longest path of inheritance ending at the current module. The deeper the inheritance tree for a module, the harder it may be to predict its behaviour. On the other hand, increasing depth gives the potential of greater reuse by the current module of behaviour defined for ancestor classes.
- Number of children (NOC): This counts the number of modules which inherit directly from the current module. Moderate values of this measure indicate scope for reuse; however, high values may indicate an inappropriate abstraction in the design.
- Coupling between objects (CBO): This is the measure of the number of other modules which are coupled to the current module either as a client or a supplier. Excessive coupling indicates weakness of module encapsulation and may inhibit reuse.
2. Structural metrics: There exist three variants of each of the structural metrics: a count restricted to the part of the interface which is externally visible (FIv, FOv and IF4v), a count which only includes relationships which imply the client module needs to be recompiled if the supplier's implementation changes (FIc, FOc and IF4c), and an inclusive count (FIi, FOi and IF4i), where FI, FO and IF4 are respectively defined as follows:
- Fan-in (FI): This measures the number of other modules which pass information into the current module.
- Fan-out (FO): This is obtained by counting the number of other modules into which the current module passes information.
- Information Flow measure (IF4): This is a composite measure of structural complexity, calculated as the square of the product of the fan-in and fan-out of a single module. It was originally proposed by Henry and Kafura (a small computation sketch follows this list).
3. Procedural metrics
- Lines of Code (LOC): This is one of the oldest measures; it is simply a count of the number of non-blank, non-comment lines of source code.
- McCabe's Cyclomatic Complexity (MVG): This was developed to overcome the weakness of LOC, and is a measure of the decision complexity of the functions which make up the program. The strict definition of this measure is that it is the number of linearly independent routes through a directed acyclic graph which maps the flow of control of a subprogram.
- Lines of Comments (COM): This is the count of the number of lines of comment, and can be used to estimate the quantity of design information contained within a specific portion of code.
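To make the composite IF4 measure concrete, the following is a minimal sketch of how a module's fan-in, fan-out and IF4 could be computed from a list of inter-module references. This is our own illustration, not CCCC's implementation; the ModuleRef type and the function name are assumptions made for the example.

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// One directed "passes information to" relationship between two modules
// (illustrative type; CCCC's internal representation differs).
struct ModuleRef {
    std::string from;
    std::string to;
};

struct FlowMetrics {
    std::size_t fanIn = 0;   // distinct modules passing information in
    std::size_t fanOut = 0;  // distinct modules receiving information
    std::uint64_t if4 = 0;   // (fanIn * fanOut)^2, Henry and Kafura
};

FlowMetrics informationFlow(const std::string& module,
                            const std::vector<ModuleRef>& refs) {
    std::vector<std::string> callers, callees;
    for (const ModuleRef& r : refs) {
        if (r.to == module && r.from != module) callers.push_back(r.from);
        if (r.from == module && r.to != module) callees.push_back(r.to);
    }
    auto distinct = [](std::vector<std::string>& v) {
        std::sort(v.begin(), v.end());
        v.erase(std::unique(v.begin(), v.end()), v.end());
        return v.size();
    };
    FlowMetrics m;
    m.fanIn = distinct(callers);
    m.fanOut = distinct(callees);
    const std::uint64_t product =
        static_cast<std::uint64_t>(m.fanIn) * m.fanOut;
    m.if4 = product * product;  // square of the fan-in/fan-out product
    return m;
}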
In addition to Goal 1, we have another goal: to judge the effectiveness of our DPR method. Because counting the cases of negative true is prohibitively difficult and time-consuming, especially for large software systems, we exclude it here. Also, negative false cases are perfectly correct ones; thus we do not need to consider them.

The Process of Extracting Pattern Signatures. Having established the GQM plan for our investigation, we extract the so-called pattern signatures by processing the published GoF patterns implemented in C++ using CCCC4. This is based on Question 2 of our GQM plan. We collected the data for each metric and performed some statistical analyses, e.g., calculating average values and standard deviations, to assign "A" to "D" in order from highest to lowest values according to the normal distribution5. In other words, given a metric, its corresponding signature can be calculated with the function Signature defined below:
4. CCCC (C and C++ Code Counter) is a metrics producer for the C and C++ languages. It was developed by Tim Littlefair in Australia.
5. Suppose y represents the normal variable; then the height of the probability distribution for a specific value of y is represented by f(y). For the normal distribution,

   f(y) = (1 / (σ√(2π))) e^{-(1/2)[(y-μ)/σ]^2},

where μ and σ are the mean and standard deviation, respectively, of the population of y values [13, chapter 3].
Signature(x) =
  A if x >= μ + σ
  B if μ <= x < μ + σ
  C if μ - σ <= x < μ
  D if x < μ - σ

where x is a software metric value of a class in an object-oriented system. For example, if a class belongs to the Abstract Factory pattern, it tends to have a high value of the DIT metric. Obviously, our mapping scheme for allotting real data to formal data, i.e., metrics, is comparative. As the size of classes varies greatly from program to program, it is meaningless to allot absolute metric values to each pattern. We put all the pattern signatures in Table 1, and Figure 4 shows the process of extracting each pattern signature.
Fig. 4. The process of extracting pattern signatures
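To illustrate the Signature mapping defined above, here is a small sketch of the letter-grading step; the thresholds follow the reconstruction of Signature(x) given earlier, and the function name is our own assumption (Pattern Wizard's actual code is not shown in the paper).

// Map a metric value x to a letter grade using the mean (mu) and
// standard deviation (sigma) of that metric over the class population:
//   A : x >= mu + sigma          (highest values)
//   B : mu <= x < mu + sigma
//   C : mu - sigma <= x < mu
//   D : x < mu - sigma           (lowest values)
char signatureGrade(double x, double mu, double sigma) {
    if (x >= mu + sigma) return 'A';
    if (x >= mu)         return 'B';
    if (x >= mu - sigma) return 'C';
    return 'D';
}
// A pattern signature is then the string of grades over the 17 metrics,
// e.g. the row "A A A B A C A A B B ..." for Abstract Factory in Table 1.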
Note that we have the same set of metrics for both Question 1 and Question 2 of our GQM plan. That means that we need to measure the subject system that we are interested in. The metrics are then compared with the pattern signatures in order to recover pattern candidates. Finally, our DPR method becomes complete by adding human intervention. For this kind of task, a certain level of human intervention is almost inevitable, and actually desirable as well.
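The comparison step can be pictured with a short sketch (ours, not Pattern Wizard's actual code): the 17-letter signature of a class is compared position by position with each stored pattern signature, and a pattern is reported as a candidate when the number of matching positions reaches the chosen precision threshold (for instance 12 out of 17, i.e., about 70.6%).

#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

struct PatternSignature {
    std::string pattern;  // e.g. "Abstract Factory"
    std::string grades;   // 17 letters, one per metric, e.g. "AAABACAABB..."
};

// Returns the names of patterns whose signature agrees with the class's
// signature in at least minMatches of the metric positions.
std::vector<std::string> candidatePatterns(
        const std::string& classGrades,
        const std::vector<PatternSignature>& signatures,
        std::size_t minMatches) {
    std::vector<std::string> candidates;
    for (const PatternSignature& sig : signatures) {
        const std::size_t n = std::min(classGrades.size(), sig.grades.size());
        std::size_t matches = 0;
        for (std::size_t i = 0; i < n; ++i)
            if (classGrades[i] == sig.grades[i]) ++matches;
        if (matches >= minMatches)  // e.g. 12 out of 17
            candidates.push_back(sig.pattern);
    }
    return candidates;
}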
3.3 Tool Support: Pattern Wizard
Learning lessons from existing design pattern tools, we developed a design pattern CASE tool, Pattern Wizard (PW). Basically, it implements the DPR method presented above. Below, we sketch the structure and functionality of the tool, and suggest some usage scenarios.

The Structure and Functionality of PW
Table 1. GoF pattern signatures
Pattern                  WMC1 WMCv DIT NOC CBO FOv FOc FOi FIv FIc
Abstract Factory          A    A    A   B   A   C   A   A   B   B
Builder                   A    A    A   B   A   B   A   A   A   B
Factory Method            B    B    A   B   A   C   A   B   B   B
Prototype                 A    A    A   B   A   A   A   A   A   B
Singleton                 C    C    C   C   C   D   C   C   D   C
Adapter                   C    C    C   C   C   C   A   C   B   C
Bridge                    C    C    D   D   D   C   D   D   B   D
Composite                 D    D    C   C   C   D   C   D   D   C
Decorator                 D    D    A   B   B   C   C   C   B   B
Facade                    B    B    C   C   B   C   C   B   B   C
Flyweight                 A    A    C   C   C   C   C   C   B   C
Proxy                     B    B    C   C   B   C   C   C   A   C
Chain of Responsibility   B    B    A   C   C   D   C   C   D   C
Command                   D    D    C   B   C   C   C   C   B   B
Interpreter               C    C    C   A   B   C   C   C   A   A
Iterator                  C    C    D   D   D   D   D   D   D   D
Mediator                  C    C    C   B   B   A   C   A   B   B
Memento                   B    B    D   D   C   B   D   C   B   D
Observer                  C    C    C   C   C   C   A   B   B   C
State                     A    A    C   B   B   D   C   C   D   B
Strategy                  C    C    C   B   C   B   C   C   B   B
Template Method           C    C    C   C   D   D   C   D   D   C
Visitor                   C    C    C   B   B   D   C   B   D   B
The remaining columns of Table 1 (FIi, IF4v, IF4c, IF4i, LOC, MVG, COM), for the same patterns in the same order, are:

FIi IF4v IF4c IF4i LOC MVG COM
A B A A B B C A A A B A C B A B A B B A A A A A B B C C C C C C C A A C C C C A B C D C C C D C D C C C C C C B B C A C C D A C C C A C C D C C C C C A B C C C B A B B B C A C B B B C C C C C C C B C C C B A B D C C C D D D B C C B C B B C C C C C C C C C C C C C C B C C B B D C B C C C A B A D C C C C B C C C C A D C D
Fig. 5. The structure of Pattern Wizard (modules shown in the figure: Main Module, DPR, PBR, CCCC, DocClass, Pattern Repository)
Figure 5 shows the overall structure of PW, drawn in the UML notation. It consists of four major modules. Main Module brings up the main menu from which we can select other commands. The DPR module is used to recover design patterns, and the PBR (Pattern-Based Redocumentation) module to redocument the software in which the patterns have been recovered. These two modules are supported by the two external modules, i.e., CCCC and DocClass6, respectively. PW also has a Pattern Repository so that users can check the details of each pattern by browsing through it. Finally, the Help command shows the help content on how to run PW. We indicated these relationships using the association and aggregation notations of the UML.

For the sake of portability and the speed of developing the prototype, we chose Tcl/Tk (Tool Command Language/Tool Kit) as the main implementation language. It has a rich prebuilt set of GUI facilities, a sort of "application framework", and provides powerful regular expressions that suit our purpose well. In addition, being an untyped language, it is quite suitable for our tasks of dealing with a lot of strings. Because we intend to reuse existing tools as much as we can, Tcl/Tk can play a very important role as a glue language to combine existing tools with our own contribution.

Usage Scenarios of PW

PW can be useful for various activities, although the most desirable one is design pattern recovery. In the following, we show the steps of using PW.
1. Produce a project file containing the program files and header files. The project files can easily be produced using any kind of editor. Because PW detects patterns by analysing software product metrics, it is desirable to include all files available.
2. Obtain product metrics. PW calls CCCC to produce the three kinds of product metrics, i.e., procedural, structural and OO metrics. Because the outputs of CCCC are in html format, we need to convert them into ASCII text format. We use the "Save As" command of Netscape Communicator to do this; we found that other web browsers did the conversion differently.
3. Recover pattern candidates (Figure 6). PW processes the metrics output in order to identify GoF patterns. Essentially, PW compares the metrics output of each individual application with the pattern signatures embedded in PW itself that we extracted earlier. PW then displays potential pattern candidates that are to be filtered with some degree of human intervention.
4. Produce class documentation. Using DocClass, class documentation is obtained. It contains various information about each class, e.g., the attributes and methods of each class, as well as comments.
6. DocClass is a simple C++ program which reads in C++ header files and outputs documentation describing the class hierarchy, methods, inherited methods, etc. It was developed by Trumphurst Ltd, and its website is located at <www.trumphurst.com/docclass.phtml>.
Fig. 6. PW recovering patterns
5. Add pattern documentation. We add pattern documentation to the class documentation produced in the previous step. Added to the information provided by class documentation, pattern documentation can help users understand how classes and objects collaborate with each other to solve subproblems.
6. Browse the pattern repository (Figure 7). We can use the pattern repository for three purposes. Firstly, when we have recovered patterns, we can use it to filter the pattern instances, checking whether they are real pattern instances. Secondly, when users have the pattern documentation, they can look into the repository for a detailed description. Thirdly and finally, users interested in design patterns can simply browse through the repository to learn them.
4 Case Study
In section 3, we discussed the DPR method, and we developed a prototype tool to show how our method can be implemented as a CASE tool. Experiments are needed to see whether the method and tool are suitable and sound. Here, we describe the experimental method and framework. We then report the experimental results obtained, and finish with a detailed experimental analysis of those results.
Fig. 7. The pattern repository

4.1 Experimental Method
Experimental Goals. The main goal of the experiments is to investigate the usefulness of our pattern recovery method and of the prototype tool, PW. Another goal is to investigate the correctness of our pattern recovery method.

Experimental Materials. We used three systems to experiment with. Table 2 gives a brief description of these systems with respect to their sizes, complexities, development years and some specific characteristics.

Table 2. The experimental materials

Software | Size in LOC | Complexity in MVG | Development Year | Characteristics
System1  | 4027        | 465               | 1997             | built using PCCTS
System2  | 295         | 79                | 1993             |
System3  | 3117        | 404               | 1996             | visual-programming based

System1 is a software product metrics tool that parses source code written in C, C++, Ada or Java, and displays its metrics outputs in html format so that users can view the metrics data using their favourite web browsers. System2 is a simple C++ program which parses C++ header files and outputs documentation describing the class hierarchy, methods, inherited methods, etc. The main difference between System1 and System2 is that the first uses a parser generator, i.e., PCCTS (the Purdue Compiler Construction Tool Set), to
produce a language recogniser, whereas the second has a hand-built language recogniser for C++ programs. Finally, System3 is a software reuse tool suite that offers functionality for reverse engineering inheritance charts from C++ header files and for extracting documentation from comments. It can also store software modules in a repository. Its facility for producing inheritance charts was implemented in Java so that the diagrams can be rendered using either the OMT7 tool or any Java-compatible web browser.

Experimental Framework. In section 3, we successfully extracted pattern signatures on the basis of a GQM plan. We intend to use those signatures to detect patterns in the three experimental materials. First we make, for each system, a project file containing its program files and header files. Then we obtain its metrics information as output in html format and save this metrics data in ASCII text format. After that, the remaining process is fairly straightforward. PW processes the metrics data by comparing it with the pattern signatures, and displays potential pattern candidates for users to filter manually. Finally, users intervene to check whether the candidates found are real instances of patterns. During checking, PW helps users perform this task with its Pattern Repository: users can browse the repository to compare the pattern candidates with real ones. If we cannot find any pattern instances at a certain precision, we can lower the precision; of course, in this case, the accuracy of the pattern recovery process decreases.

Experimental Results. Using the DPR method, we successfully recovered pattern candidates from the three pieces of software, as shown in Table 3. The precision we used was 70.59%, i.e., 12 matches out of 17 metrics, for System1 and System3, while that for System2 was 58.82%, i.e., 10 matches out of 17 metrics. In the case of System2, we had to lower the precision because, unlike for the other two systems, we could not recover pattern instances at a higher precision. Having recovered the pattern candidates, the next step is to filter them to check whether they are true instances of the GoF design patterns. Table 4 shows the patterns filtered by hand.

Experimental Analysis. Figure 8 depicts the data shown in Table 4. We can say that System1 is so-called pattern-rich software, and it has a good balance of patterns.
7. The Object Modelling Technique (OMT) was developed by James Rumbaugh and others, and later it was integrated into the UML (Unified Modelling Language).
Table 3. Recovered pattern candidates

Software | Creational Patterns | Structural Patterns | Behavioural Patterns | Pattern Instances
System1  | 4: Singleton (4) | 13: Adapter (5), Composite (2), Facade (2), Flyweight (4) | 16: Command (5), Observer (10), Template Method (1) | 33
System2  | 3: Abstract Factory (1), Builder (1), Prototype (1) | 0 | 0 | 3
System3  | 0 | 0 | 26: Command (13), Observer (13) | 26
Table 4. Patterns filtered by hand

Software | Creational Patterns | Structural Patterns | Behavioural Patterns | Pattern Instances
System1  | 3: Singleton (3) | 4: Adapter (1), Composite (2), Flyweight (1) | 6: Command (1), Observer (5) | 13
System2  | 2: Builder (1), Prototype (1) | 0 | 0 | 2
System3  | 0 | 0 | 12: Command (5), Observer (7) | 12
In the meantime, the other two contain ill-balanced sets of patterns: System2 has only creational patterns, while System3 has only behavioural patterns.
Fig. 8. Patterns contained in the three software systems
It is interesting to note that a large portion of the filtered pattern instances, i.e., 66.67% (18 out of 27 instances), belong to the behavioural patterns category. Bearing in mind that patterns are essentially classes and/or objects collaborating towards a specific goal or task, we can easily understand this result: behavioural patterns either use inheritance to describe algorithms and flow of control, or describe how a group of objects cooperate to perform a task that no single object can carry out alone.
Both Figures 9 and 10 support the observation that larger and/or more complex software systems tend to contain more patterns than smaller and/or simpler software.
Fig. 9. The relationship between size and the number of recovered patterns
Fig. 10. The relationship between complexity and the number of recovered patterns
With respect to Goal 2 of our GQM plan, we achieved a precision of 43.55% with the DPR method (i.e., 27 filtered pattern instances out of the 62 recovered candidates), which is higher than that reported in any existing research work on design pattern recovery. The method seemed to work well regardless of the sizes and complexities of the systems. In addition, we observed that our method produced almost the same precision across the three pattern categories.
5 Conclusions
In this paper, we developed a method to recover design patterns from OO systems. Our DPR method used three kinds of product metrics, and the measurement plan was established on the basis of the GQM paradigm. Through a case study, we evaluated the DPR method and the tool support we provided. We claim that our approach helps users achieve design reuse by capturing previous design information in the form of design patterns. During the course of our research, we identified some further work to be carried out. Firstly, we need to apply our method to some industrial-strength data sets; software engineering methods often fail to scale up to industrial applications beyond a primitive academic setting. Secondly, it will be useful to develop a recovery method for other types of patterns, such as architectural styles and analysis patterns. Thirdly, and finally, there are some software abstractions that are highly related to design patterns; examples are object-oriented frameworks, code idioms, and pattern languages. An investigation of the relationships between these remains for future work.

References
1. Victor R. Basili, Gianluigi Caldiera, and H. Dieter Rombach. Goal Question Metric Paradigm. In John J. Marciniak, editor, Encyclopedia of Software Engineering, volume 1, pages 528-532. John Wiley & Sons, 1994.
2. Kent Beck, James O. Coplien, Ron Crocker, Lutz Dominick, Gerard Meszaros, Frances Paulisch, and John Vlissides. Industrial experience with design patterns. In Proc. International Conference on Software Engineering, ICSE, Berlin, pages 103-114. IEEE Press, March 1996.
3. Kyle Brown. Design reverse-engineering and automated design pattern detection in Smalltalk. Master's thesis, Department of Computer Engineering, North Carolina State University, 1996.
4. William J. Brown, Raphael C. Malveau, Hays W. McCormick III, and Thomas J. Mowbray. AntiPatterns: Refactoring Software, Architectures, and Projects in Crisis. John Wiley, 1998.
5. Shyam R. Chidamber and Chris F. Kemerer. A metrics suite for object oriented design. IEEE Transactions on Software Engineering, 20(6):476-493, June 1994.
6. E. Gamma. Objektorientierte Software-Entwicklung am Beispiel von ET++: Design-Muster, Klassenbibliothek, Werkzeuge. PhD thesis, University of Zürich, 1991. Published by Springer Verlag, 1992.
7. Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley Publishing Company, Reading, Mass, 1995.
8. R. E. Johnson. Documenting frameworks with patterns. SIGPLAN Notices, 27(10):63-76, October 1992. OOPSLA'92.
9. Hyoseob Kim and Cornelia Boldyreff. Software reusability issues in code and design. ACM Ada Letters, XVII(6):91-97, November/December 1997. Originally presented in the "OOPSLA '96 Workshop on Object-Oriented Software Evolution and Re-engineering", San Jose, USA, October 1996.
10. Christian Kramer and Lutz Prechelt. Design recovery by automated search for structural design patterns in object-oriented software. In Proceedings of the Working Conference on Reverse Engineering, Monterey, USA, November 1996. IEEE CS Press.
11. K. Lano and N. Malik. Reengineering legacy applications using design patterns. In Proceedings of the 8th International Workshop on Software Technology and Engineering Practice, London, UK, July 1997.
12. Robert Martin. Discovering patterns in existing applications. In James O. Coplien and Douglas C. Schmidt, editors, Pattern Languages of Program Design, pages 365-393. Addison-Wesley, 1995.
13. Lyman Ott. An Introduction to Statistical Methods and Data Analysis. PWS-KENT Publishing Company, Boston, Massachusetts, third edition, 1988.
14. Forrest Shull, Walcelio L. Melo, and Victor R. Basili. An inductive method for discovering design patterns from object-oriented software systems. Technical Report CS-TR-3597, UMIACS-TR-96-10, Computer Science Department/Institute for Advanced Computer Studies, University of Maryland, 1996.
15. Perdita Stevens and Rob Pooley. Systems reengineering patterns. In Proceedings of the ACM SIGSOFT 6th International Symposium on the Foundations of Software Engineering (FSE-98), volume 23, 6 of Software Engineering Notes, pages 17-23, New York, November 3-5 1998. ACM Press.
16. Frank van Latum, Rini van Solingen, Markku Oivo, Barbara Hoisl, Dieter Rombach, and Gunther Ruhe. Adopting GQM-based measurement in an industrial environment. IEEE Software, 15(1):78-86, January/February 1998.
Object Oriented Design Expertise Reuse: An Approach Based on Heuristics, Design Patterns and Anti-patterns

Alexandre L. Correa1, Cláudia M. L. Werner1, and Gerson Zaverucha2

1 COPPE/UFRJ - Computer Science Department, Federal University of Rio de Janeiro, C.P. 68511, Rio de Janeiro, RJ, Brazil - 21945-970
{alexcorr, werner, gerson}@cos.ufrj.br
2 Department of Computer Science, University of Wisconsin - Madison
[email protected]

Abstract. Object Oriented (OO) languages do not guarantee that a system is flexible enough to absorb future requirements, nor that its components can be reused in other contexts. This paper presents an approach to OO design expertise reuse, which is able to detect certain constructions that compromise future expansion or modification of OO systems, and suggest their replacement by more adequate ones. Both the reengineering of legacy systems and the evaluation of systems that are still under development are considered by the approach. A tool (OOPDTool) was developed to support the approach, comprising a knowledge base of good design constructions, which correspond to heuristics and design patterns, as well as problematic constructions (i.e., anti-patterns).
1 Introduction
One of the main motivations to use the Object Oriented (OO) paradigm is the promise of being able to dramatically reduce maintenance efforts and increase development productivity, when compared to more traditional development paradigms. This promise would be accomplished by applying concepts such as encapsulation, inheritance, polymorphism, dynamic binding and interfaces, among others. However, it is possible to find many OO applications that are already hard to maintain. These applications present rigid and inflexible structures that make the addition of new features, due to inevitable requirement changes, a difficult task. Although OO concepts are known by the academic community for at least two decades, the design of a flexible and reusable system is still a challenge. It involves the identification of relevant objects, factoring them into classes of correct granularity level, hierarchically organizing them, defining class interfaces, and establishing the dynamics of collaboration among objects. The resulting design should solve the specific problem at hand and at the same time should be generic enough to be able to address future requirements and needs [8]. While performing these tasks, new OO designers have to decide among several possible alternatives. However, they often apply non-OO techniques, both due to their greater familiarity with these techniques, 1
The authors are partially financially supported by CNPq – Brazilian Research Council
and to time schedule pressures that do not allow them to search for the best OO solution. In fact, many of these problematic OO applications were developed by teams composed mostly by OO beginners, who do not have enough knowledge to define the best solution for a given problem, making it flexible and resilient to change. Many OO development methods were proposed in this last decade. Along with these methods, several OO CASE tools became available. The emphasis of these methods and tools has been on how to develop semantically correct OO models, according to the constructions available in modeling languages, such as UML. However, a correct model does not necessarily mean that it is flexible and reusable. Due to this fact, many organizations nowadays depend on reviews performed by experts to increase design quality and avoid too much effort on future maintenance. These reviews are very costly and such experts are not always available. This work presents an approach for the reuse of several sources of OO design expertise (heuristics, design patterns and anti-patterns), integrating them to OO CASE tools. This approach supports both reengineering problematic legacy OO systems, and evaluation of OO design models from systems that are still under development. This approach involves the detection of good and bad OO design constructions, i.e., constructions corresponding to standard solutions to recurring design problems (design patterns), or constructions that can result in future maintenance and reuse problems (anti-patterns). By doing so, it is possible to identify points in a system that need to be modified in order to make it more flexible and reusable, and to ease system understanding as a whole, including badly documented systems, by detecting existing design patterns. The identification of problematic OO software constructions is very difficult to be done manually. Some of the reasons for this difficulty are: • legacy systems that need to be reengineered are usually medium/large in size, making manual search for problems unfeasible; • developers often do not know what kind of problems they should be looking for. A database containing potential design problems provides a valuable support in this case. This paper is organized as follows: section 2 introduces some basic concepts regarding design patterns and heuristics for the construction of good OO designs. Section 3 discusses the anti-pattern concept and OO design problems. Section 4 presents the tool designed to support design expertise reuse and detection of good and bad OO design constructions, called OOPDTool. Section 5 briefly presents a practical case study using the proposed approach. In section 6, some related works are discussed, and in section 7 some conclusions and future directions of this work are presented.
2 Heuristics and Design Patterns
A design heuristic is some kind of guidance for making design decisions. It describes a family of potential problems and provides guidelines to aid the designer to avoid
them. Heuristics, such as "A base class should not know anything about its derived classes" [14], direct the designer's decisions towards a more flexible and reusable OO design. It is important to note that a heuristic should not be considered a law that must be followed in all circumstances. It should be seen as an element that, if violated, indicates a potential design problem. Apart from these heuristics, the "design patterns philosophy" [8] has recently emerged as one possible way to capture the knowledge of OO design experts, allowing the reuse of successful solutions to recurring problems regarding design and software architectures. A design pattern solves a recurring problem, in a given context, by providing a proven working solution (not speculations or theories), also indicating its consequences, i.e., the results and tradeoffs of its application, and providing information regarding its adaptation to a problem variant. Every pattern is identified by a name, forming a common vocabulary among designers. While heuristics represent generic directives for OO design, a design pattern is a solution to a specific design problem. One of the keys to maximizing reuse and minimizing software maintenance effort resides in trying to anticipate new requirements and possible future changes while designing a system. By using heuristics and design patterns, it is possible to avoid huge changes to the software structure, since they allow a certain design aspect to vary independently, making it more resilient to a particular kind of change. In [8], one can find common causes of design inflexibility and the corresponding design patterns that can be applied. For example, the instantiation of an object by the explicit specification of its class makes a design dependent on a particular implementation and not on an interface. In this case, the Abstract Factory, Factory Method, and Prototype design patterns can be applied, providing more flexibility to the instantiation process.
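As a generic illustration of this last point (a toy example of our own, not one taken from the paper), instantiating a concrete class directly couples the client to one implementation, whereas routing creation through a Factory Method lets the concrete product vary independently:

#include <memory>

struct Document { virtual ~Document() = default; };
struct TextDocument : Document {};

// Inflexible: the client is hard-wired to one concrete class.
std::unique_ptr<Document> openInflexible() {
    return std::make_unique<TextDocument>();
}

// Factory Method: subclasses decide which concrete Document to create.
struct Application {
    virtual ~Application() = default;
    virtual std::unique_ptr<Document> createDocument() = 0;  // factory method
    std::unique_ptr<Document> open() { return createDocument(); }
};

struct TextApplication : Application {
    std::unique_ptr<Document> createDocument() override {
        return std::make_unique<TextDocument>();
    }
};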
3 OO Design Problems and Anti-Patterns
Software design is an iterative task, and designers often need to reorganize its elements to make it flexible. However, current OO design methods and supporting tools focus on the development of new systems. The same happens with the use of heuristics and design patterns. Several organizations are already facing inflexible OO systems nowadays; many of which are critical, incorporating undocumented business knowledge. Moreover, it is often unfeasible, from an economical point of view, to throw these systems away and rebuild them from scratch adopting a more flexible and reusable design. A more interesting solution would be to reengineer those systems by looking for constructions that are responsible for the system inflexibility, and replacing them by others that are more flexible and resilient to change. The study of anti-patterns has recently emerged as a research area for detecting problematic constructions in OO designs [4]. An anti-pattern describes a solution to a recurrent problem that generates negative consequences to a project. An anti-pattern can be the result of either not knowing a better solution, or using a design pattern (theoretically, a good solution) in the wrong context. When properly documented, an anti-pattern describes its general format, the main causes of its occurrence, the
symptoms describing ways to recognize its presence, the consequences that may result from this bad solution, and what should be done to transform it into a better solution. However, the number of catalogued anti-patterns is still small, if compared to the number of design patterns available in the technical literature. In [4], there is a list of some anti-patterns, as for instance the Blob anti-pattern. This anti-pattern corresponds to an OO design solution that strongly degenerates into a structured design style. It can be recognized when responsibilities are concentrated on a single object, while the majority of other objects are used as mere data repositories, only providing access methods to their attributes (get/set methods). This kind of solution compromises the ease of maintenance and should be restructured by better distributing system responsibilities among objects, isolating the effects of possible changes. Since the number of anti-patterns available in the literature is still small, one possible way to guide the process of searching for problematic solutions in OO software design is by looking at design patterns catalogues. Usually, these catalogues not only describe good solutions applicable in a particular context, but also informally discuss bad solutions that could have been used instead. Those bad solutions can be formalized and catalogued, forming a database of OO design problems. OO design heuristics can be another source for the identification of design problems. Possible ways to violate a particular heuristic correspond to potential problems that can be found in a design. For instance, the definition of attributes in the public area of a class violates the principle of encapsulation, corresponding to a potential design problem. However, it is not always true that the violation of an OO design heuristic corresponds to a design problem, since a particular bad construction can be driven by some specific requirements such as efficiency, hardware/software constraints, among other non functional ones.
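A deliberately simplified C++ sketch of the Blob shape described above (our own illustration): one object concentrates the behaviour of the subsystem, while the other classes act only as data repositories with get/set methods.

#include <string>

// Passive data holder: nothing but accessor methods.
class Customer {
public:
    std::string name() const { return name_; }
    void setName(const std::string& n) { name_ = n; }
private:
    std::string name_;
};

// The "Blob": concentrates every responsibility of the subsystem and
// treats the other classes as mere data repositories.
class OrderManager {
public:
    void validateCustomer(const Customer&) { /* ... */ }
    void priceOrder(const Customer&)       { /* ... */ }
    void printInvoice(const Customer&)     { /* ... */ }
    void archiveOrder(const Customer&)     { /* ... */ }
    // ... behaviour that belongs with the data lives here instead
};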
4 OOPDTool
OO design heuristics, design patterns and anti-patterns represent ways of capturing OO design expertise. Current design tools do not provide support for the reuse of this kind of knowledge. Therefore, our goals are twofold: first, to support design expertise reuse; second, to provide automated support both for the reengineering of legacy OO systems and for the evaluation of OO systems that are still under development. This support means detecting design constructions that can make future system maintenance harder, and suggesting their replacement by more flexible ones, thus identifying points in the system that should be modified in order to make it more flexible and reusable. A tool, named OOPDTool, was designed to support these tasks. As shown in figure 1, there are four main modules composing the OOPDTool architecture:
• Design Extraction: automatically extracts design information from OO source code, generating a design model in an object oriented CASE tool.
• Facts Generation: generates a deductive database corresponding to facts extracted from an OO design. These facts are expressed in predicates, according to a metamodel for object-oriented design representation.
• Expertise Capture: captures knowledge about good and bad OO constructions, generating a deductive database of heuristics, design patterns and anti-patterns.
• Detection: analyzes a design model stored in the facts deductive database by using an inference machine, machine learning techniques and the heuristics, design patterns and anti-patterns knowledge base. This analysis detects design fragments that can result in future maintenance and reuse problems, showing hints for the refactoring of those problematic constructions into more flexible ones. This module also allows the identification of design patterns used by developers in an OO development.
Fig. 1. OOPDTool architecture (elements shown in the figure: Source Code, Design Extraction, OO CASE Tool / OO design, OO Design Literature and Experts, Expertise Capture, Design Patterns and Heuristics Deductive Database, Anti-Patterns Deductive Database, Facts Generation, Facts Deductive Database, Detection, Detected Problems and Hints for Better Solutions)
Those modules were designed to be integrated to an OO CASE tool. Although OOPDTool architecture does not impose a particular CASE tool, our current implementation considers a very popular OO CASE tool: Rational Rose [13]. Rose is
a CASE tool that supports OO analysis and design activities, being able to generate models according to Booch, OMT or UML notation. The information about an OO model can be accessed through OLE components, available in REI (Rose Extensibility Interface). Rose can also be customized, allowing the addition of new features as „Add-Ins“. By using the features described above, we added all OOPDTool modules to Rose, transforming it into an environment that supports the construction of object oriented design models, the evaluation of those models against candidate problematic constructions, and design expertise reuse. 4.1 Design Extraction Although reverse engineering modules for languages such as C++ and Java are available in Rose, the information extracted is limited to what can be obtained from a structural analysis of the source code. The C++ module, for instance, captures information only from the „header files“. Therefore, only structural information such as packages, classes, attributes, methods, and inheritance relationships can be retrieved, narrowing the scope of patterns and anti-patterns detection. Constructions related to object creation (Abstract Factory and Factory Method patterns, ManyPointsOfInstantiation anti-pattern, for example) cannot be detected using only structural information. To overcome this limitation, it was necessary to provide a design extraction module that was able to retrieve not only the structural information captured by Rose reengineering modules, but also information related to methods implementation. This extraction is done according to a metamodel that defines the concepts needed to the facts deductive database generation, as presented in section 4.2. Based on the extracted information, it is possible to know the attributes used by a particular method; how these attributes are manipulated (read, write, parameter, operation invocation); method stereotypes („constructor“, „destructor“, „read accessor“, „write accessor“, among others), and which collaborations are necessary to the implementation of a particular method. The analysis of method invocations is done in order to gather information about dependencies between types and not between objects, since the latter would require a run-time analysis of the system. For each method, OOPDTool generates a collaboration diagram with only the direct calls presented in the method. From each operation invocation identified in the implementation of a particular method, OOPDTool captures the following information: the called operation, the type (class or interface) of the object referenced by the method, and how this object is being accessed by the method (i.e., as a parameter, a locally created object, a global object, an attribute, or self). Considering the code fragment shown in figure 2, in aTape.price() invocation from addTape method of TapeRental class, the called object (aTape) is a parameter, while in theFilm.price invocation from price method of Tape class, the called object (theFilm) is an attribute of the invoker object. OOPDTool extracts only the operation calls directly invoked from each method implementation, as we are only interested in the direct dependency relationships between classifiers (classes or interfaces).
This module is responsible, therefore, for recovering design information from OO source code. This recovery is done according to a metamodel that defines all the elements necessary for the evaluation of an OO design. All information extracted from the source code is stored in the OO CASE tool repository.

class TapeRental {
public:
    void addTape(Tape aTape) { /* ... */ aTape.price(); /* ... */ }
};

class Tape {
private:
    Film theFilm;
public:
    currency price() { /* ... */ theFilm.price(); /* ... */ }
};

class Film {
public:
    void price() { /* ... */ }
};
Fig. 2. Collaboration example
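For illustration only, the call information described above can be pictured as one record per invocation. The layout below is hypothetical (it is not OOPDTool's repository schema or metamodel); for the two calls of Figure 2 it would hold roughly the following.

#include <string>
#include <vector>

// How the receiver of a call is reached from inside the calling method.
enum class ReceiverKind { Parameter, LocalObject, GlobalObject, Attribute, Self };

struct InvocationFact {
    std::string callerClass;     // classifier owning the calling method
    std::string callerMethod;
    std::string receiverType;    // class or interface of the called object
    std::string calledOperation;
    ReceiverKind how;
};

// The two calls of Figure 2 expressed as such records:
const std::vector<InvocationFact> figure2Facts = {
    {"TapeRental", "addTape", "Tape", "price", ReceiverKind::Parameter},
    {"Tape",       "price",   "Film", "price", ReceiverKind::Attribute},
};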
4.2 Facts Generation
Once the design information becomes available in the Rose repository, either as the result of a reverse engineering process performed by the Design Extraction module, or as a result of a forward engineering process (i.e., design of a new application), the Facts Generation module comes into action, generating a deductive database which represents the facts captured from the Rose repository. These facts are stated in predicates corresponding to constructions defined by a metamodel for object oriented software. This metamodel was defined based on UML semantics metamodel [1] and on other works in this field [7], [11]. This metamodel defines the entities and relationships that are relevant to design patterns and antipatterns identification, including not only structural elements, but also dynamic elements such as object instantiation and method calls, for instance. In Appendix A, all predicates defined by the metamodel are presented in detail. The metamodel defines the main entities of an OO design (package, classifier, attribute, operation, parameter), relationships between those entities (dependency, realization, inheritance), and also elements corresponding to the implementation of methods such as object instantiation, object destruction, method invocation, and attribute access (see Appendix A for details). From a structural point of view, the Facts Generation module generates a set of facts that describes all the classifiers found in a model (classes, interfaces, and basic data types), how those classifiers are organized into packages, their attributes and operations, including detailed information about each one (visibility, type, scope, parameters, among others). The associations, aggregations, and compositions are captured as pseudo-attributes [1]. The inheritance relationships between classifiers and their realizations are also captured by specific predicates.
From a behavioral point of view, this module generates facts about the implementation of each method. Each object instantiation and destruction, method invocation, and attribute access (read or write) are captured as predicates. Capturing these behavioral elements is essential for the identification of many design patterns and anti-patterns, such as those related to object creation. DCActor keyPressed(key : KeyEvent) GetDiagramBox() : DCDiagramBox
1: GetDiagramBox()
DCActorEdit keyPressed(key : KeyEvent)
: DCActor Edit
3: Repaint( ) F
diagramBox : DCDiagramBox
2: DeleteSelectedObjects()
Fig. 3. Rational Rose Model package("LogicalView", "Nil", "Nil"). classifier("LogicalView", "LogicalView::DCActor", "Nil", "Class", "Abstract", "NotLeaf", "NotRoot"). classifier("LogicalView", "LogicalView::DCActorEdit", "Nil", "Class", "Concrete", "NotLeaf", "NotRoot"). inheritsFrom("LogicalView::DCActorEdit", "LogicalView::DCActor"). attribute("LogicalView::DCActor","LogicalView::DCActor::diagramBox"," Instance","Private","LogicalView::DCDiagramBox","LogicalView::DCDiagr amBox","NonConst","1","Association"). operation("LogicalView::DCActor","LogicalView::DCActor::keyPressed"," keyPressed","Instance","Public","Nil","Virtual","Abstract","NonConst" ). operation("LogicalView::DCActor","LogicalView::DCActor::GetDiagramBox ","GetDiagramBox","Instance","Public","Nil","Virtual","Method","NonCo nst"). operation("LogicalView::DCActorEdit","LogicalView::DCActorEdit::keyPr essed","keyPressed","Instance","Public","Nil","Virtual","Method", "NonConst"). parameter("LogicalView::DCActor::keyPressed","LogicalView::DCActor::k eyPressed::key"," 1","In","LogicalView::KeyEvent"). parameter("LogicalView::DCActor::GetDiagramBox","LogicalView::DCActor ::GetDiagramBox::Return"," 1","Return","LogicalView::DCDiagramBox"). parameter("LogicalView::DCActorEdit::keyPressed","LogicalView::DCActo rEdit::keyPressed::key"," 1","In","LogicalView::KeyEvent"). invokes("LogicalView::DCActorEdit::keyPressed","LogicalView::DCActorE dit","LogicalView::DCActor::GetDiagramBox","Self"). invokes("LogicalView::DCActorEdit::keyPressed","LogicalView::DCDiagra mBox","LogicalView::DCDiagramBox::DeleteSelectedObjects","Attribute") . invokes("LogicalView::DCActorEdit::keyPressed","LogicalView::DCDiagra mBox","LogicalView::DCDiagramBox::Repaint","Attribute").
Fig. 4. Facts generated from the Rose model
Figures 3 and 4 illustrate an example of this process. Figure 3 shows a small fragment of an OO design model corresponding to a CASE tool that supports the elaboration of domain and application models [2]. This model could have been generated by the Design Extraction module or manually by the software designer. Class diagrams like the one on the left side are the source for all structural
information extracted by this module. All behavioral information is extracted from collaboration diagrams like the one on the right side of figure 3. This diagram shows the method calls necessary to the implementation of keyPressed operation of DCActorEdit class. Figure 4 shows the predicates generated by the Facts Generation module corresponding to the diagrams of figure 3. 4.3 Expertise Capture Heuristics, design patterns and anti-patterns are important sources of information for OO designers. There are hundreds of heuristics and patterns available in many books and papers, or even from the knowledge accumulated by OO experts in many projects. Since one of our goals is to provide support to OO design expertise reuse, an important requirement of OOPDTool is to allow the maintenance of an OO design knowledge database that integrates these concepts (heuristics, design patterns and anti-patterns). OOPDTool provides browsers, like the one shown in figure 5, where the developer can navigate through all heuristics, patterns and anti-patterns stored in the design knowledge database. All these concepts are organized into categories and subcategories.
Fig. 5. Heuristics browser
All information associated to each concept is stored in the database. The information about a design pattern, for example, is organized into many sections: Name, Problem, Context, Solution, Consequences, Related Patterns, Known Uses, and Author. The relationships among those concepts are also stored in OOPDTool. As an example, a particular heuristic can be associated to all anti-patterns corresponding to possible violations. It is also possible to associate a design pattern with all antipatterns corresponding to possible situations where this would be a better solution to a problem. OOPDTool also provides support to design patterns and anti-patterns detection in OO design models. Those constructions are formalized in deductive rules that define
the characteristics necessary for the recognition of a particular construction in an OO design model. OOPDTool allows the definition of new rules, so that new patterns and anti-patterns can be added to the database. Therefore, the knowledge base evolves as a result of the organization's experience in developing and maintaining OO systems. These rules are expressed using the same predicates employed in the OO design facts, as described in section 4.2. Figure 6 shows an example of design pattern formalization. This rule allows not only the detection of the Factory Method [8] pattern, but also the identification of all participants in a particular instance of this pattern. The participants correspond to the AbstractCreator, the ConcreteCreator, the AbstractProduct, the ConcreteProduct, and the FactoryMethod.

factoryMethodPattern(AbstractCreator, ConcreteCreator, AbstractProduct, ConcreteProduct, FactoryMethod) :-
    classifier(_, AbstractCreator, _, "Class", "Abstract", "NotLeaf", "Root"),
    operation(AbstractCreator, FactoryMethod, FactoryMethodAbrev, "Instance", _, _, "Virtual", "Abstract", _),
    parameter(FactoryMethod, _, _, "Return", AbstractProduct),
    operation(AbstractCreator, Oper, _, "Instance", _, _, _, "Method", _),
    invokes(Oper, _, FactoryMethod, "Self"),
    classifier(_, ConcreteCreator, _, "Class", "Concrete", _, "NotRoot"),
    descendant(ConcreteCreator, AbstractCreator),
    operation(ConcreteCreator, RedefinedFactoryMethod, RedefAbrevFactoryMethod, "Instance", _, _, _, "Method", _),
    parameter(RedefinedFactoryMethod, _, _, "Return", AbstractProduct),
    creates(RedefinedFactoryMethod, ConcreteProduct, _),
    descendant(ConcreteProduct, AbstractProduct),
    operationRedefinition(RedefinedFactoryMethod, FactoryMethod, RedefAbrevFactoryMethod, FactoryMethodAbrev),
    not(ConcreteProduct = AbstractProduct).
Fig. 6. Factory Method Pattern (Prolog rule)
In the same way, it is possible to catalogue anti-patterns with rules defined from predicates corresponding to the basic OO constructions. For each anti-pattern formalized using Prolog rules, we also capture information that describes its general format, the main causes of its appearance, the consequences that this bad solution can generate, and what should be done to replace this solution by a better one. This better solution often points to the application of a design pattern. Anti-patterns can be directly captured from the anti-patterns literature [4], or indirectly by looking for violations of well-known design heuristics and patterns. The ExpositionOfAuxiliaryMethod anti-pattern (figure 7) detects the definition of methods in the public interface of a class that are used only as auxiliary methods for the implementation of other methods of this class. This contradicts the design heuristic „Do not put implementation details such as common-code private functions into the public interface of a class“ [14]. Apart from the anti-patterns derived from object oriented design heuristics, we can also identify other potentially problematic constructions by analyzing the problems discussed in design patterns. As an example of this strategy, we present the formalization of two anti-patterns related to some creational design patterns [8]. The
rules defined in figure 8 detect the presence of an object with global scope, whose class would be better defined by applying the Singleton design pattern. The ManyPointsOfInstantiation anti-pattern (figure 9) indicates that many points of the design are coupled to a particular class and, therefore, to a particular implementation. The design would be more flexible if a creational design pattern such as Abstract Factory or Prototype were used instead of direct instantiation spread over many points of the software.

expositionOfAuxiliaryMethod(Class, Method) :-
    classifier(_, Class, _, _, _, _, _),
    operation(Class, Method, _, _, "Public", _, _, _, _),
    findAll(ClientClass, externalClient(Method, ClientClass), ClientClasses),
    listSize(ClientClasses, Size), Size = 0,
    findAll(ClientClass, internalClient(Method, ClientClass), InternalClients),
    listSize(InternalClients, InternalSize), InternalSize > 0.

externalClient(Method, ClientClass) :-
    operation(Class, Method, _, _, _, _, _, _, _),
    operation(ClientClass, Caller, _, _, _, _, _, _, _),
    invokes(Caller, Class, Method, _),
    not(sameHierarchy(ClientClass, Class)).

internalClient(Method, ClientClass) :-
    operation(Class, Method, _, _, _, _, _, _, _),
    operation(ClientClass, Caller, _, _, _, _, _, _, _),
    invokes(Caller, Class, Method, _),
    sameHierarchy(ClientClass, Class).

sameHierarchy(X, Y) :- X = Y.
sameHierarchy(X, Y) :- descendant(X, Y).
sameHierarchy(X, Y) :- ancestor(X, Y).
Fig. 7. ExpositionOfAuxiliaryMethod anti-pattern

globalScopeObject(Object, Class) :-
    classifier(_, "Logical View::Global", _, _, _, _, _),
    attribute("Logical View::Global", Object, _, "Public", _, Class, _, _, _).
globalScopeObject(Object, Class) :-
    classifier(_, X, _, _, _, _, _),
    attribute(X, Object, "Class", "Public", _, Class, _, _, _).
Fig. 8. Global Scope Object anti-pattern

manyPointsOfInstantiation(Class, NPoints) :-
    findAll(Creator, doInstantiation(Creator, Class), Creators),
    listSize(Creators, Size),
    Size > NPoints.
doInstantiation(Creator, Class) :-
    operation(Creator, Operation, _, _, _, _, _, _, _),
    creates(Operation, Class, _).
Fig. 9. ManyPointsOfInstantiation anti-pattern
4.4 Detection
This OOPDTool module is responsible for analyzing the facts deductive database corresponding to an OO design model currently opened in Rational Rose. OOPDTool searches for matches with the constructions (patterns/anti-patterns) captured by the Expertise Capture module. Figure 10 illustrates a typical use of this module. The user selects one anti-pattern or anti-pattern category from the browser, and OOPDTool identifies all the design fragments that satisfy the Prolog rules defined for the detection of those anti-patterns.
Fig. 10. Detection module
A report is generated indicating each identified problem, the elements responsible for its occurrence, and some possible ways to overcome it. These possible better solutions correspond to the anti-pattern information captured by the Expertise Capture module. For example, if the tool finds the PublicVisibility anti-pattern (an attribute defined in the public interface of a class), it points out to the attribute and the class where it occurs, showing that a possible solution would be to move the attribute to the private area of the class, and to create accessor methods (get and set methods) for retrieving and modifying this attribute. The user can also search for anti-patterns associated to a particular heuristic or heuristic category. This allows the identification of all possible violations of the selected heuristics in a given OO design model. Another result that can be achieved with this module is the identification of design patterns used in the evaluated design. By selecting the desired design patterns, the user is able to receive a report indicating the design patterns found, and all the elements matching each participant role involved in the pattern. For example, if OOPDTool detects an instance of the FactoryMethod design pattern, it would detect not only the presence of this pattern in the design but also all classes corresponding to the Abstract Creator, Concrete Creator, Abstract Product, and Concrete Product participants found in this design pattern instance.
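A minimal sketch of the refactoring suggested for the PublicVisibility anti-pattern (our own example; the class and attribute names are invented for illustration):

// Before: the attribute sits in the public interface of the class.
class Account {
public:
    double balance;  // PublicVisibility anti-pattern
};

// After: the attribute is private and reached through accessor methods.
class AccountRefactored {
public:
    double balance() const { return balance_; }          // get accessor
    void setBalance(double value) { balance_ = value; }  // set accessor
private:
    double balance_;
};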
5 Case Study
A practical case study using the proposed approach and tool was conducted in the context of a research project named Odyssey [2]. The major goal of Odyssey is the construction of a reuse infrastructure based on domain models. This infrastructure comprises a set of tools for component management, domain information acquisition, elaboration of domain and application models, among others. These tools were designed using Rational Rose and coded in Java. Since the development team was composed mostly by OO beginners, many common errors (anti-patterns) were detected in their work. OOPDTool was initially used as a support tool for design inspections. We populated the OOPDTool design knowledge database with 15 heuristics, 10 design patterns, and 25 anti-patterns. These anti-patterns correspond to possible heuristics violations and also to constructions that can be enhanced by applying a design pattern. Our main goal in this case study was to automatically detect patterns and antipatterns in a Rose model corresponding to one of the tools of the Odyssey infrastructure (the domain models editing tool). This model, composed by 150 classes, was submitted to OOPDTool analysis and also to a manual identification of patterns and anti-patterns. The results of the automatic and manual processes were compared. Although OOPDTool successfully detected all formalized patterns and anti-patterns present in the model, a number of false positives and negatives were initially reported due to incorrect or insufficient information associated to the pattern/anti-pattern detection rules. This was especially true for complex rules, such as the Factory Method pattern, and also for rules designed to detect situations where a particular design pattern is incorrectly applied, i.e., the designer should use another design pattern instead.
6 Related Works
Several works, related to the reengineering of legacy OO systems and design patterns detection, have appeared in the last five years. Cinnéide [5] describes a tool called Design Pattern Tool that is able to make some refactorings in programs written in Java. This tool is limited to refactorings related to object instantiation anti-patterns, and it does not provide support to automatic identification of the fragments that need to be refactored (anti-patterns). Zimmer [16] reports experiences using design patterns, general OO design rules, and metrics in reorganization of object oriented systems. He describes a method used in the restructuring of a hypermedia OO application with the incorporation of more flexible design constructions. However, no supporting tool is mentioned in this work. KT is a tool developed by Brown [3] that can reverse engineer OO designs from source code written in Smalltalk. KT detects three design patterns as described in [8]: Composite, Decorator, and Template Method. Algorithms specially designed to detect them are able to identify instances of those patterns. However, the tool does not provide support to the detection of other patterns, or even code written in other languages, because it searches for specific Smalltalk constructions using algorithms
that are also specific for the detection of each pattern. Unlike OOPDTool, KT does not support pattern identification in OO models built using a CASE tool. Krämer [10] presents a supporting tool, Pat, for the design recovery process which searches for structural design pattern in an object oriented design model. The design constructions are also represented in Prolog. However, the patterns detected by this tool are limited, since the reverse engineering task is done by a Case tool (Paradigm Plus) that is only able to recover structural design elements. Therefore, Pat cannot detect design patterns that require semantic information about behavior and collaboration between classes as, for instance, object instantiation, method invocation or attribute access. However, we consider that recovering information about object collaborations, including the use of polymorphism, is indispensable for effective detection of pattern-based design constructions. Crew [6] defines a language for analyzing syntactic artifacts in abstract syntax trees of C/C++ programs. Although we can also search for patterns using source code as input, our main focus is to detect constructions in OO design models and provide an OO design expertise repository. Unlike the approach described in this paper, none of these approaches provides an integrated support to the heuristics, design patterns and anti-patterns concepts.
7 Conclusions and Future Works
This paper described an approach to OO design expertise reuse, by incorporating knowledge not currently supported by OO CASE tools. This support was developed as an „add-in“ to a largely used OO CASE tool. As the developer builds an OO model, he can use OOPDTool to help him in detecting potentially problematic constructions, and also as a source of OO design knowledge, represented by heuristics, design patterns and anti-patterns databases. Moreover, the identification of design patterns helps the developer in understanding a legacy system at a higher level of abstraction and also allows the detection of anti-patterns related to inadequate uses of a particular design pattern. Since the constructions detected by OOPDTool are expressed in Prolog rules, it is easy to expand the scope of detection. It is possible to add new heuristics, patterns, and problematic constructions as new reports in the technical literature appear and developers gain more experience. Since our approach is based on a metamodel, comprising not only static elements of an OO model (classes, operations, and attributes) but also behavioral ones (method invocations, object instantiation, among others), it is possible to detect several design patterns and anti-patterns that are not currently detected by other approaches uniquely based on structural components and/or source code analysis. The main contributions of this paper are the integration of heuristics, design patterns and anti-patterns into an OO design workbench, allowing the reuse of knowledge about good and bad OO design practices, and also the support to the automatic detection of constructions corresponding to those practices. We are currently working on a new version of OOPDTool, incorporating machine learning techniques, in particular Inductive Logic Programming (ILP) and Case-
Reasoning (CBR) [12]. We have two main objectives: first, we would like to detect design constructions with structures similar to a particular pattern but with some variation that makes them undetectable by a perfect-matching algorithm using the Prolog inference engine. Our second goal is to make OOPDTool able to learn pattern definitions from examples, since the definition of detection rules proved to be very difficult, especially for complex patterns. Regarding the mentioned case study, we are now analyzing other tools constructed within the Odyssey project and incorporating more design patterns and anti-patterns into the knowledge base. However, it would be important to be able to analyze large and complex commercial systems to capture more data about the accuracy, performance, and scalability of the proposed approach.
References
1. Booch, G., Jacobson, I., Rumbaugh, J., "The Unified Modeling Language for Object Oriented Development – UML Semantics – version 1.1", URL: http://www.omg.org, 1997
2. Braga, R., Werner, C., Mattoso, M., "Odyssey: A Reuse Environment Based on Domain Models", Proc. of the 2nd IEEE Symposium on Application-Specific Systems and Software Engineering Technology, Richardson, March 1999
3. Brown, K., "Design Reverse-Engineering and Automated Design Pattern Detection in Smalltalk", Master Thesis, URL: http://www2.ncsu.edu/eos/info/tasug/kbrown/thesis2.htm, 1997
4. Brown, W., Malveau, R., McCormick III, H., Mowbray, T., "Anti-patterns – Refactoring Software, Architectures, and Projects in Crisis", Wiley Computer Publishing, 1998
5. Cinnéide, M., Nixon, P., "Program Restructuring to Introduce Design Patterns", Proc. of the Workshop on Object Oriented Software Evolution and Re-Engineering, ECOOP'98
6. Crew, R.F., "ASTLOG: A Language for Examining Abstract Syntax Trees", Usenix Conference on Domain-Specific Languages, 1997
7. Demeyer, S., Tichelaar, S., Steyaert, P., "FAMOOS – Definition of a Common Exchange Model", URL: http://www.iam.unibe.ch/~famoos/InfoExchFormat/
8. Gamma, E., Helm, R., Johnson, R., Vlissides, J., "Design Patterns: Elements of Reusable Object Oriented Software", Addison-Wesley, Reading, MA, 1995
9. Jacobson, I., Christerson, M., Jonsson, P., Overgaard, G., "Object Oriented Software Engineering", Addison-Wesley, Wokingham, England, 1992
10. Krämer, C., Prechelt, L., "Design Recovery by Automated Search for Structural Design Patterns in Object Oriented Software", Proc. of the Working Conf. on Reverse Engineering, IEEE CS Press, Monterey, November 1996
11. Maughan, G., Avotins, J., "A Meta-model for Object Oriented Reengineering and Metrics Collection", Eiffel Liberty Journal, Vol. 1, No. 4, 1998
12. Mitchell, T.M., "Machine Learning", McGraw Hill, 1997
13. Rational Software Inc., Santa Clara, CA, Rational Rose 98, URL: http://www.rational.com
14. Riel, A., "Object Oriented Design Heuristics", Addison-Wesley, 1996
15. Software Composition Group (SCG) – FAMOOS Project, URL: http://iamwww.unibe.ch/~famoos
16. Zimmer, W., "Experiences using Design Patterns to Reorganize an Object Oriented Application", Proc. of the Workshop on OO Software Evolution and Re-Engineering, ECOOP, 1998
Appendix A – Metamodel Predicates

The predicates used in the representation of OO design constructions are:
• package(Name, Stereotype, ParentName): represents a package definition. A package is a general mechanism used for organizing model elements in groups.
• packageDependency(PackageClient, PackageSupplier): represents a dependency relationship between two packages: one is a client and the other is a supplier.
• classifier(Package, ClassifierName, Stereotype, Type, Abstract/Concrete, isLeaf, isRoot): indicates a class, interface or basic data type definition.
• visibilityInPackage(Package, Classifier, Visibility): represents the classifier visibility (public, protected, private) in a package.
• realizes(Classifier, RealizedClassifier): represents the existence of a realization relationship between two classifiers as, for example, a class implementing the operations defined by an interface.
• inheritsFrom(SpecificClassifier, GenericClassifier): represents an inheritance relationship between two classifiers.
• attribute(Classifier, AttributeName, Scope, Visibility, Type, Modifiability, Multiplicity, Aggregation): indicates the presence of an attribute or a pseudo-attribute (an association with a classifier) [21] in a class definition. It also captures the attribute scope (class or instance), its visibility (public, protected or private), if its value can be modified or not, its type, multiplicity (1 or many), and the semantics of the association with the attribute type (association, aggregation or composition).
• operation(Classifier, OperationName, Scope, Visibility, Stereotype, Polymorphism, Abstract/Concrete, Const/NonConst): represents an operation defined in a class, indicating its scope (class or instance), its visibility (public, protected, private), stereotype (constructor, destructor, read accessor, write accessor, other), if the operation can be redefined by subclasses, if it modifies the object state, and whether it is only a declaration (Abstract) or a method implementation (Concrete).
• parameter(Operation, ParameterName, Order, Direction, ParameterType): represents a parameter expected by an operation. The direction indicates whether it is an input, output, input/output or a return value.
• creates(Caller, Classifier, Constructor): represents the invocation of a class constructor resulting in an object instantiation. This predicate indicates which operation is responsible for the invocation (caller).
• destroys(Caller, Classifier, Destructor): represents the invocation of an object destructor. This predicate indicates which operation is responsible for this invocation.
• invokes(Caller, Classifier, Operation, AccessType): represents the invocation of an operation. Caller corresponds to the method where this invocation occurs, Classifier is the type of the called object, Operation is the name of the called operation, and AccessType tells how the called object is known in the caller method. AccessType can be the object itself (self), a parameter, an object created in the caller method (local object), a global scope object or an attribute of the caller object.
• access(Operation, Attribute, AccessType): indicates that an operation accesses a particular attribute. This access can be a value retrieval or modification, an operation call or even passing this attribute as an argument in some operation call.
Patterns Leveraging Analysis Reuse of Business Processes

Marco Paludo1,2, Robert Burnett1, and Edgard Jamhour1

1 Pontifícia Universidade Católica do Paraná – PUC-PR, Graduate Course in Applied Information Science, Rua Imaculada Conceição, 1155, 80215-901, Curitiba, PR, Brazil
{paludo,robert,jamhour}@ppgia.pucpr.br
2
Banco do Estado do Paraná S.A.- R. Máximo João Kopp, 274, Santa Cândida, 80.630-900, Curitiba, PR, Brazil
[email protected]

Abstract. This paper proposes the use of patterns to help the software designer model business processes. It focuses on the initial phases of the software development life cycle, with the objective of promoting reuse of the components of these phases. Business processes are considered to have a critical analysis phase, which demands a significant portion of the development effort. Due to the emphasis on these phases, the proposed solution is to use patterns with two objectives: to model the business processes and to provide reuse of analysis elements. For that, the 'Strategies and Patterns' methodology is complemented with new patterns, diagrams and stages in its process. In addition, the pattern documentation structure is improved. This work intends to contribute by presenting new directions for using and obtaining patterns. To assess the propositions, a case study is presented and analyzed, aiming to demonstrate the applicability of patterns to business processes.
1
Introduction
As information systems become increasingly complex and disseminated, their development, maintenance, and operation are reaching the limits of material and economic feasibility. Reuse is receiving particular attention from the software community, acting as a lever in the development of better quality, lower cost systems. Though studies on the theme are not new in the community, in the past reuse was mostly carried out ad hoc, by small groups or individually. To be effective, however, a systematic approach to the reuse of resources throughout the life cycle (requirements, architectures, design, code, tests, etc.) must be emphasized. It is important to leverage and apply reuse to all products in the life cycle in order to reduce costs. According to Jay Reddy in [1], reuse efforts should not be spent only on coding and implementation, as is typically done, but rather on the whole life cycle, since code development is just a fraction of the cost of building and maintaining systems. Another important factor lies in the fact that reuse activities must be very well integrated into the system development process. This ensures that reuse is automatically planned, assessed and practiced, because it is considered in all phases of the process.
Considering those factors, this paper highlights the reuse of elements of the analysis phase, mainly the static modeling portion. Reinforcing this approach of reusing products of the initial software development phases, Tim Korson, in [2], affirms that "the reuse of class libraries and stand-alone components does not alter significantly the software development process". He adds that higher-level concepts and tools are needed to add more value to the development process. The proposed way to deal with the high-level reuse and integrated process requirements is to introduce patterns into the development process. Patterns act as plans or structures to be followed, derived from abstractions of reality. The scope selected for this paper is business processes, which represent the procedures that realize business objectives. The workflow concept, in turn, represents business process automation, where documents or tasks pass between participants according to predetermined rules. The objective of pattern utilization is to support the modeling of business processes, allowing them to become workflows. The proposed way to address the problem of analysis reuse in business processes is to complement a pattern methodology with new documentation structures. Also, business process patterns are identified and applied in a case study. Those issues are presented as follows: section 2 defines business processes; section 3 shows how to use patterns, initially by selecting the baseline methodology, followed by the proposed documentation structure, the pattern catalogue, and finishing with the Domain structures; section 4 presents a case study with the objective of validating the proposals made in the foregoing sections; conclusions are presented in section 5.
2
Business Processes
The Workflow Management Coalition (WfMC) presents the following definitions [3,4]:
- Business Process: a set of one or more procedures or activities that collectively realize a policy or business objective, normally within the context of an organizational structure defining functional roles and relationships.
- Workflow: the automation of a business process, in whole or part, during which documents, information or tasks are passed from one participant to another, according to a set of procedural rules.
Business processes are generally related to operational objectives and inter-company business relations, such as the processing of insurance applications, for instance. They can be contained within one organization or encompass several others, as in customer-supplier relations. Figure 1 depicts some examples of business processes.
Fig. 1. Examples of Business Processes: Purchase Request Management, External Customer Service, Travel Expense Control, Backlog and Working System, Insurance Request Procedures, Refund Claim, Project Management, Collaborative Report Creation, Elaboration of Biddings at Auctions, ...
3
Business Process Patterns
In this section, changes and additions are proposed to the Strategies and Patterns methodology presented in [5]. The resulting methodology is intended to adequately support business process modeling. It also allows the capture and recording of analysis experiences, pattern evolution, and their reuse in future developments. For that purpose, the rationale for the selection of the baseline methodology is presented in subsection 3.1, followed by the proposed documentation structure in subsection 3.2, on the basis of which six specific patterns were developed for one business process, making up subsection 3.3. Finally, subsection 3.4 presents the concepts of particular-domain models and proposes their application.

3.1 Selection of the Baseline Methodology

Example-driven methodologies are those in which patterns are applied directly from a catalogue. In such a catalogue, patterns are arranged in an organized and structured way, allowing easy and direct access to them. Process-driven methodologies, in turn, present the steps to be followed when conceiving and elaborating systems. This approach uses patterns as determined by the process. It therefore differs from the example-driven approach, where the pattern catalogue is the core structure of the methodology. Considering these approaches, the methodologies most widespread and most mentioned in the literature were chosen. The main aspects, considered differentials among the methodologies, are presented in Table 1: process-driven methodology, example-driven methodology, emphasis on the analysis phase, or emphasis on the design and implementation phase.
Based on this tabulation, the Strategies and Patterns methodology [5] was selected as the object of study and complementation, also because it fulfills the specific requirements proposed in the development of this work, which are: (i) achieve reuse of elements from the analysis phase; (ii) elaborate patterns which address business processes; and (iii) insert those patterns into the development process. The other methodologies mentioned are not, however, ruled out for modeling business processes. This survey did not intend to classify or rank the methodologies from any standpoint.

Table 1. Characteristics of Pattern Methodologies
Methodologies               | Process Driven | Example Driven | Emphasis: Analysis | Emphasis: Design and Implementation
Design Patterns [6]         |                | X              |                    | X
POSA [7]                    |                | X              |                    | X
Analysis Patterns [8]       |                | X              | X                  |
Taligent [9]                | X              |                |                    | X
Hot Spots [10]              | X              |                |                    | X
Strategies and Patterns [5] | X              | X              | X                  |
In the Strategies and Patterns methodology, the reuse of analysis elements is highly emphasized, fulfilling the requirements proposed here. From the pattern standpoint it can be considered example-driven, since it has a catalogue with thirty-one patterns ready to be instantiated. It can also be considered process-driven, since it has a proposed sequence of steps to be followed that make up the development process [11].

3.2 Proposed Pattern Documentation Structure

For an effective reuse of the solutions proposed by patterns, patterns must be documented and recorded in a uniform manner. There are basically three types of pattern collections: (i) pattern catalogues (e.g. [6]), represented by a set of patterns in a uniform format, regardless of domain, without a close relationship; (ii) pattern systems (e.g. [7]), represented by a collection of related patterns that help developing systems with particular properties; (iii) pattern languages (e.g. [12]), making up a closed set of strongly related patterns, linked by well-defined rules, that solve all aspects related to a problem domain. In every type of pattern representation, great care must be taken to describe each pattern well, and also the context where it is applied. It should also make evident occasional similarities or combinations with other patterns.
The patterns of the original methodology [5] are documented in a simple way, with little textual information on where and how to apply them. The original documentation elements are: pattern name, the two classes involved with the stereotyped services and attributes, the typical interactions between objects, examples, combinations, and notes.
• Pattern name (original): the pattern name is derived from the names of the classes composing it. For instance, the pattern called Participant-Transaction is composed of the Participant and Transaction classes.
• Problem (proposed): describes a recurrent problem in text form. Here some forces involved in the pattern application can be listed. The forces represent the desired requirements, restrictions or properties.
• Context (altered): describes the environment in which the problem exists. It can be considered a precondition for the application of the pattern.
• Solution (proposed): represents the action taken to solve the problem within the context. It is composed of three subdivisions:
• Object model (original): diagram with the classes involved (typically two) and the relationship proposed, including the stereotyped attributes and services that will guide the instantiation of those classes.
• Typical object interaction (original): lists the interactions between classes, through the services having class relationships.
• Examples (original): presents examples of occurrences of each class. For the Participant-Transaction pattern, examples of likely participants (agent, customer, etc.) and likely transactions (contract, payment, etc.) are provided.
• Combinations (altered): list of all the other patterns using the same classes as this pattern. In the original methodology, the existence of this item was justified to ensure that the pattern would be independent from any other diagram or document. However, since this paper proposes the creation of a pattern object model for each domain (subsection 3.4.1), the need for this item is questionable, because such information can be easily extracted from the Domain Object Model. It was kept so that the pattern developer can decide whether to fill it out or not.
• Notes (original): space reserved for general comments.
• Reference to the Domain Object Models (proposed): describes how to locate Domain Object Models related to the pattern, if any, preferably by means of the model name or code.
Fig. 2. Proposed pattern documentation
On the basis of the documentation patterns proposed by [6], [7], [9], [13], new items are proposed for the original methodology, complementing particularly the descriptive and textual part, but without adding excessive complexity since easy creation and handling, allied to simplicity, are key to pattern success. The result is shown in Figure 2, where each item is identified as original, altered or proposed and is followed by a brief description of its contents and the way to employ
it. That structure is used to assemble the documentation of the initial Business Process Pattern Catalogue presented in the next subsection. In this new documentation structure, at least two levels of abstraction are proposed for the patterns. The first level is represented by the original methodology catalogue, keeping its simplified documentation structure unaltered, as a more conceptual level independent from the application domain. The second level is represented by the specific Business Process Patterns, with a more complete documentation structure, as proposed in Figure 2. At this level the pattern abstraction is close to the processes they address, thereby rendering them more dependent on the domain where they are applied. This proposed pattern hierarchy, according to granularity and abstraction, enables the existence of more specialized patterns for certain problems, according to need and availability. The more levels there are and the more detailed the patterns, the closer they will be to the design/implementation phases.
3.3 Business Process Pattern Catalogue

According to the documentation structure proposed in the foregoing subsection, six Business Process Patterns were developed. To define the scope of the proposed pattern catalogue, an analogy was made with Fowler's remark in [8]: "An effective framework should not be too complex or too big. This book does not attempt to define frameworks for several industries. This book describes alternative ways of modeling one situation". Similarly, the pattern proposals here are limited to a specific domain, that is, the automation of business processes dealing with the creation, approval, and fulfillment of services or activities. In short, Business Process Patterns are patterns with a known scope, documented in a standardized way, and addressing specific problems within the context of the business process. The initial Business Process Pattern catalogue is composed of: Requester-Request, Approver-Request, Resource Manager-Developer, Developer-Request, Approver-Subsequent Approver, and Coordinator Agent-Request. Figure 3 presents the Requester-Request pattern as an example of one pattern of the catalogue.
1) Pattern name: Requester–Request
2) Problem: This pattern should be applied to business processes involving a requester for one activity and a request of that activity. Examples of activities are: a cost estimate, a job order, and a draft project. The requester participates in the preparation of the request with the information needed for its initial filing. That request follows the route set forth in the process, and can undergo steps such as approval, return, and fulfillment of the activity.
3) Context: workflow of a request elaboration.
4) Solution:
4.1) Object model: the classes Requester (participant) and Request (transaction), related by a one-to-many association (one Requester to n Requests). Requester has the attributes Name, Place, and GeneralData, and the operations MakeRequest, UpdateRequest, ConfirmRequest, CancelRequest, and GetRequestInformation. Request has the attributes ItemData, EstimatedBudget, RequestType, IssueDate, ConfirmationDate, RequestStatus, Deadline, and ReturnExpectation, and the operations MakeRequest, SubmitForApproval, ReturnRequest, NotifyRequester, and UpdateRequest. A rough Java sketch of this model is given below.
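The following minimal Java rendering of the object model above is an illustration only; it is not part of the methodology, the attribute types and method bodies are our assumptions, and only the member names come from the pattern.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the Requester-Request pattern's object model.
class Requester {                              // stereotype: participant
    String name;
    String place;
    String generalData;
    private final List<Request> requests = new ArrayList<>();  // 1 Requester -> n Requests

    Request makeRequest(String itemData) {     // prepare and file a new request
        Request r = new Request(itemData);
        requests.add(r);
        return r;
    }
    void updateRequest(Request r, String itemData) { r.updateRequest(itemData); }
    void confirmRequest(Request r) { /* confirm the initial filing of the request */ }
    void cancelRequest(Request r)  { requests.remove(r); }
    List<Request> getRequestInformation() { return requests; }
}

class Request {                                // stereotype: transaction
    String itemData;
    double estimatedBudget;
    String requestType;
    String issueDate;
    String confirmationDate;
    String requestStatus;
    String deadline;
    String returnExpectation;

    Request(String itemData) { this.itemData = itemData; }

    void makeRequest()       { requestStatus = "filed"; }
    void submitForApproval() { requestStatus = "waiting approval"; }
    void returnRequest()     { requestStatus = "returned"; }
    void notifyRequester()   { /* notify the requester of a status change */ }
    void updateRequest(String itemData) { this.itemData = itemData; }
}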
4.2) Object interaction:
ConfirmRequest –> MakeRequest
UpdateRequest –> UpdateRequest
CancelRequest

        0) {
            System.out.println("Book scheduling: BLOCK.\n");
            setIsBlocked();
            return AspectModerator.BLOCKED;
        } else {
            unsetIsBlocked();
            component.moderator.status[AspectModerator.BOOK][AspectModerator.RESUME]++;
            return AspectModerator.RESUME;
        }
    }

    public void postcondition() {
        component.moderator.status[AspectModerator.BOOK][AspectModerator.RESUME]--;
    }
    private void setIsBlocked() {
        if (!is_blocked) {
            is_blocked = true;
            component.moderator.status[AspectModerator.BOOK][AspectModerator.BLOCKED]++;
        }
    }

    private void unsetIsBlocked() {
        if (is_blocked) {
            is_blocked = false;
            component.moderator.status[AspectModerator.BOOK][AspectModerator.BLOCKED]--;
        }
    }
}
Listing 4. Implementation of the BookSchedulingAspect class

The completion of the method execution will initiate a call by the RoomProxy to the AspectModerator's postactivation phase. During postactivation, there is a call to the postcondition method of the synchronization and scheduling aspects. During the synchronization postcondition, synchronization variables are updated upon method completion. The postcondition of the scheduling aspect will identify and notify the next scheduled threads. The proxy-moderator object pair coordinates functional and aspectual behavior by handling their interdependencies. We stress that the activation order of the aspects is essential for preserving the semantics of the system: synchronization has to be verified before scheduling, and reversing the order of activation may violate the semantics. Other issues might also be involved; for example, if authentication is introduced to a shared object, it must be handled before synchronization. (A rough sketch of how the RoomProxy might drive these phases is given after Listing 5a.)

public class AspectModerator implements AspectModeratorIF {
    …
    public int preactivation(String methodID) {
        int result = this.ERROR;
        if (methodID.equals("Book")) {
            synchronized (synchronization_waiting_queue[WRITE][WAITING_TO_BOOK]) {
                synchronized (this) {
                    result = ((BookSynchronizationAspect) aspectArray[book][sync]).precondition();
                }
                while (result == BLOCKED) {
                    try {
                        ((java.lang.Object) synchronization_waiting_queue[WRITE][WAITING_TO_BOOK]).wait();
                        synchronized (this) {
                            result = ((BookSynchronizationAspect) aspectArray[book][sync]).precondition();
                        }
                    } catch (Exception exception) {
                        System.out.println(exception.toString() + "\n");
                        return AspectModerator.ABORT;
                    }
                }
            }
            synchronized (scheduling_waiting_queue[WRITE][WAITING_TO_BOOK]) {
                synchronized (this) {
                    result = ((BookSchedulingAspect) aspectArray[book][sched]).precondition();
                }
                while (result == BLOCKED) {
                    try {
                        scheduling_waiting_queue[WRITE][WAITING_TO_BOOK].wait();
                        synchronized (this) {
                            result = ((BookSchedulingAspect) aspectArray[book][sched]).precondition();
                        }
                    } catch (Exception exception) {
                        System.out.println(exception.toString() + "\n");
                        return AspectModerator.ABORT;
                    }
                }
            }
            return result;
        }
        // similarly for cancel and view
        return result;
    }
Listing 5a. Implementation of method preactivation in the AspectModerator class
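The RoomProxy itself is not reproduced in this excerpt; the following fragment is only our sketch, under assumed field names and an assumed Room interface, of how a proxy method could drive the preactivation/postactivation protocol around the functional call. Error handling and the cancel/view operations are omitted.

interface Room { void book(); }               // assumed functional interface of the Room component

public class RoomProxySketch {
    AspectModeratorIF moderator;              // evaluates aspect preconditions and postconditions
    Room room;                                // the purely functional component

    public int book() {
        // Preactivation: synchronization and scheduling preconditions (may block the caller).
        int decision = moderator.preactivation("Book");
        if (decision == AspectModerator.ABORT) {
            return decision;                  // aspect evaluation was aborted
        }
        room.book();                          // the functional behavior itself
        // Postactivation: postconditions plus notification of waiting threads.
        moderator.postactivation("Book");
        return AspectModerator.RESUME;
    }
}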
4.
The Use of Assertions to Support Software Quality
A major component of quality in software is reliability: a system's ability to perform its job according to the specification (correctness) and to handle abnormal situations (robustness). The concept of "Design by Contract" was introduced in the context of the Eiffel programming language [18]. Under this theory, a software system is viewed as a set of communicating components whose interaction is based on precisely defined specifications of their mutual obligations, known as contracts. These contracts govern the interaction of each element with the rest of the world. The Aspect Moderator Framework adopts this approach in a slightly different context: defining assertions (preconditions and postconditions) as a set of design principles. Another important issue is the verification of components and aspects in isolation from each other. One must be able to test the functionality of a component as well as to test that an aspect will align properly with the functional components. Otherwise, there can be no guarantee that components and aspects will cooperate. In other words, one must test and verify the collaboration of components and aspects; this constitutes an important phase in the design process.

public void postactivation(String methodID) {
    if (methodID.equals("Book")) {
        synchronized (scheduling_waiting_queue[WRITE][WAITING_TO_CANCEL]) {
            synchronized (this) {
                ((BookSchedulingAspect) aspectArray[book][sched]).postcondition();
            }
            scheduling_waiting_queue[WRITE][WAITING_TO_CANCEL].notify();
        }
        synchronized (synchronization_waiting_queue[WRITE][WAITING_TO_CANCEL]) {
            synchronized (this) {
                ((BookSynchronizationAspect) aspectArray[book][sync]).postcondition();
            }
            synchronization_waiting_queue[WRITE][WAITING_TO_CANCEL].notify();
        }
    }
    // similarly for cancel and view
}

public void registerAspect(String methodID, String aspect, Object aspectObject) {
    if (methodID.equals("Book") && aspect.equals("Sync")) {
        aspectArray[book][sync] = aspectObject;
    }
    // similarly for the rest of the aspects
}
Listing 5b. Implementation of methods postactivation and registerAspect in the AspectModerator class
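The AspectModeratorIF interface is not shown in this excerpt, but its shape can be read off the methods in Listings 5a and 5b; the following is a plausible reconstruction of ours, not the authors' original declaration.

public interface AspectModeratorIF {
    int preactivation(String methodID);       // e.g. RESUME or ABORT in the excerpt above
    void postactivation(String methodID);
    void registerAspect(String methodID, String aspect, Object aspectObject);
}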
5. Adaptability Adaptability is an important quality factor in software systems and the issue of it being explicitly engineered into a system is stressed in [11]. Incremental adaptability means coping with changing requirements without modifying previously defined software components. The conventional object-oriented model supports adaptability through composition, encapsulation, message passing and inheritance mechanisms. In general, lack of support of dynamic adaptability might lead to a re-engineering of the whole software system. The general architecture of the framework allows reusability and ensures adaptability of components and aspects as both are designed relatively separately from each other. This framework hooks components and aspects together, defining their semantic interaction. One advantage is that the aspect moderator class is extensible in order to make the overall system adaptable to addition of new aspects. If a new aspect of concern would have to be added to the system, we do not need to modify the moderator class. For static adaptability we can simply create a new class to inherit and re-define it, and reuse it for a new behavior. The inherited class can handle all previous aspects, together with the newly added aspect. Adaptability is also applied to components. In this framework, the moderator object has the capability to activate or drop aspects on the fly. For dynamic adaptability, we can create aspects and register them with the AspectModerator object. The Aspect Moderator Framework does not require some new syntactic structure for the representation of new aspects, but simply a new class for the new aspect. This technique makes it easy for an existing aspect to be removed from the overall system. Further, the semantic interaction between components and aspects in the framework is defined by a set of principles. Part of this semantic interaction is the order of activation of the aspects thus providing a criterion for aspect ordering. The order of execution can also be altered on the fly. This concept is not feasible with automatic weaver technologies. In this framework, components and aspects are designed relatively separately and they remain separate entities that may access each other freely without code transformation. In fact, functional components do not need to know about the aspect components in advance (before run-time) but only after an aspect has been created and registered by the moderator class. As a result, components and aspects discover each other at run-time if necessary. The interaction
of newly added aspects with the rest of the system is handled in a similar manner as the implementor must specify the contract that binds a new aspect to the rest of the system rather than having to re-engineer the whole system. On the other hand, automatic weavers must rely on language constructs that are hard coded into aspect code to provide the contact (join) points. 5.1
Providing Static Adaptability in the Room Reservation System
Let us assume that a new requirement has been introduced stating that a room is reserved subject to the security requirement that only employees at the level of technical manager or above may reserve conference rooms. To codify this requirement, we only need to add the security aspect to the aspect bank and extend the moderator to evaluate the security aspect, without modifying the functionality of the participating components. It is the moderator that will evaluate the security aspect during the preactivation phase. Therefore, the only modification needed is to invoke the registerAspect method on the moderator in order to register the new aspect. Of course, the aspect bank must be extended in order to create the new aspect object upon request (Listing 6).

public class AspectFactory2 extends AspectFactory implements AspectFactoryIF {
    …
    public Object create(String methodID, String aspect, RoomProxy component) {
        if (methodID.equals("Book"))
            if (aspect.equals("Sec"))
                return new BookSecurityAspect(component);
        // similarly for cancel and view
    }
}
Listing 6. Implementation of the extended factory class to support the introduction of a security aspect 5.2
Providing Dynamic Adaptability in the Room Reservation System
In a similar manner to static adaptability, the framework can support dynamic adaptability. At run time, once we ensure that the aspect definitions are available, we can call the factory to create an aspect instance that will be registered by the aspect moderator; a rough sketch of this registration sequence is given below. The policy within the proxy can then change in order to observe the new semantics.
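Under the assumption that the extended factory of Listing 6 and the existing proxy/moderator pair are available, the run-time registration just described might be wired roughly as follows; the wrapping class, method and variable names are ours, not part of the framework.

class SecurityAspectInstaller {
    // Hypothetical run-time registration of the newly introduced security aspect.
    void introduceSecurityAspect(RoomProxy proxy, AspectModeratorIF moderator) {
        AspectFactoryIF factory = new AspectFactory2();          // extended aspect bank (Listing 6)

        // Create the new security aspect for the Book operation and register it with the
        // moderator; from this point on, preactivation("Book") can also evaluate security.
        Object bookSecurity = factory.create("Book", "Sec", proxy);
        moderator.registerAspect("Book", "Sec", bookSecurity);
        // similarly for cancel and view
    }
}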
6.
Evaluation of the Aspect Moderator Framework
The separation of functional and aspectual code in the Aspect Moderator Framework results in program code that is more modular than it could otherwise have been with a traditional OOP approach. Furthermore, the framework follows a general-purpose approach in order to achieve composition of concerns. This way, it is not
confined to certain aspects but can address a number of aspects that can be expressed by a set of assertions. It is also language neutral. The level of weaving defines the point up to which one manages to achieve separation of concerns in the software system. The framework puts the system under one compilation phase where an executable code is produced. Intermingled code exists only at the binary (executable) level. On the other hand, linguistic technologies such as AspectJ require two phases of compilation, one for the weaver to produce an intermingled source code and another for the final compilation into an executable code. The concurrency facilities of the Java language provide a good choice to demonstrate the implementation of the concepts although the framework manages to remain language neutral. Particularly advantageous is the ability to express components and aspects in the same language as large-scale software systems are built based on COTS technology rather than domain specific languages. In general, there are tradeoffs between languages and frameworks. A language is ready to program but it is limited to the facilities (linguistic constructs) that it provides. On one hand, a language implementor can always hard code a set of constructs to support a number of pre-defined aspects. It would perhaps be impossible to predict all possible aspects that might come up and it would thus be impossible to predict their syntax and semantics as the language implementor would need to have the syntax in advance. On the other hand the Aspect Moderator Framework provides a general aspectual capability to the system which is independent of a language and it allows for an open language where new aspects (specifications) can be added and their semantics can be delivered to the compiler through the moderator. This framework can be viewed as an open implementation since the moderator provides a mechanism to support an open language. This approach has a good chance to reduce possible inconsistencies, although it cannot guarantee correctness.
7.
Conclusion
In this paper we presented architectural support for the design and development of aspect-oriented open software systems. We show how aspect code that would otherwise be spread across functional components can now be isolated, resulting in a number of benefits. First and foremost, it promotes reusability of the functional classes and the aspect classes. It also simplifies the design of complex systems, since the interaction code is separated from the functional code. Further, the testing of these systems becomes easier, since the functional components are tested in isolation from the interaction code that is maintained in one or more of the aspect classes. Experience shows that object-oriented techniques alone do not ensure that the resulting software is reusable, since code tangling may have a major impact on code reuse and programming by extension. Our approach is a step towards building reconfigurable software in order to engineer adaptability into software systems and thus to improve
software quality as it complements the object-oriented and component-oriented technologies with a set of design principles.
References
1. Mehmet Aksit. Composition and Separation of Concerns in the Object-Oriented Model. In ACM Computing Surveys, 28A(4), December 1996.
2. Mehmet Aksit. Issues in Aspect-Oriented Software Development. Position paper at the ECOOP '97 workshop on Aspect-Oriented Programming.
3. Atef Bader and Tzilla Elrad. Framework and Design Pattern for Concurrent Passive Objects. In Proceedings of IASTED/SE '98.
4. Atef Bader and Tzilla Elrad. The Adaptive Arena: Language Constructs and Architectural Abstractions for Concurrent Object-Oriented Systems. In Proceedings of ICPADS '98.
5. Ulrich Becker. D2AL: A Design-based Aspect Language for Distribution Control. Position paper at the ECOOP '98 workshop on Aspect-Oriented Programming.
6. L. Berger, A. M. Dery and M. Fornarino. Interactions Between Objects: An Aspect of Object-Oriented Languages. Position paper at the ECOOP '98 workshop on Aspect-Oriented Programming.
7. Kai Böllert. On Weaving Aspects. Position paper at the ECOOP '99 workshop on Aspect-Oriented Programming.
8. Constantinos Constantinides, Atef Bader, Tzilla Elrad. An Aspect-Oriented Design Framework for Concurrent Systems. Position paper at the ECOOP '99 workshop on Aspect-Oriented Programming.
9. Constantinos Constantinides, Atef Bader, and Tzilla Elrad. A Framework to Address a Two-Dimensional Composition of Concerns. Position paper at the OOPSLA '99 First Workshop on Multi-Dimensional Separation of Concerns in Object-Oriented Systems.
10. Kris De Volder. Aspect-Oriented Logic Meta Programming. Position paper at the ECOOP '98 workshop on Aspect-Oriented Programming.
11. Mohammed Fayad and Marshall P. Cline. Aspects of Software Adaptability. In Communications of the ACM, Vol. 39, No. 10, pp. 58-59, October 1996.
12. Walter L. Hürsch and Cristina Videira Lopes. Separation of Concerns. Technical Report NU-CCS-95-03, Northeastern University, Boston, February 24, 1995.
13. Gregor Kiczales, John Lamping, Anurag Mendhekar, Chris Maeda, Cristina Lopes, Jean-Marc Loingtier, and John Irwin. Aspect-Oriented Programming. In Proceedings of ECOOP '97, LNCS 1241, Springer-Verlag, pp. 220-242, 1997.
14. Cristina V. Lopes. D: A Language Framework for Distributed Programming. Ph.D. Thesis, Graduate School of the College of Computer Science, Northeastern University, Boston, Massachusetts, 1997.
15. Cristina Lopes and Gregor Kiczales. Recent Developments in AspectJ. Position paper at the ECOOP '98 workshop on Aspect-Oriented Programming.
16. Frank Matthijs, Wouter Joosen, Bart Vanhaute, Bert Robben, and Pierre Verbaeten. Aspects Should not Die. Position paper at the ECOOP '97 workshop on Aspect-Oriented Programming.
17. Satoshi Matsuoka and Akinori Yonezawa. Analysis of Inheritance Anomaly in Object-Oriented Concurrent Programming Languages. In Gul Agha, Peter Wegner and Akinori Yonezawa, editors, Research Directions in Concurrent Object-Oriented Programming, Chapter 4, pp. 107-150, The MIT Press, Cambridge, MA, 1993.
18. Bertrand Meyer. Applying Design by Contract. In IEEE Computer, pp. 40-52, October 1992.
19. Harold Ossher and Peri Tarr. Multi-Dimensional Separation of Concerns in Hyperspace. Position paper at the ECOOP '99 workshop on Aspect-Oriented Programming.
20. D. L. Parnas. On the Criteria to be Used in Decomposing Systems into Modules. In Communications of the ACM, Vol. 15, No. 12, pp. 1053-1058, December 1972.
21. Jane Pryor and Natalio Bastán. A Reflective Architecture for the Support of Aspect-Oriented Programming in Smalltalk. Position paper at the ECOOP '99 workshop on Aspect-Oriented Programming.
22. Bedir Tekinerdogan and Mehmet Aksit. Deriving Design Aspects from Canonical Models. Position paper at the ECOOP '98 workshop on Aspect-Oriented Programming.
23. A. Vogel and K. Duddy. Java Programming with CORBA. John Wiley, New York, NY, 1998.
Structuring Mechanisms for an Object-Oriented Formal Specification Language

Márcio Cornélio and Paulo Borba

Centro de Informática, Universidade Federal de Pernambuco, 50740-540 Recife PE Brasil
{mlc2,phmb}@di.ufpe.br
Abstract. In this work we propose an extension of the MooZ formal specification language with support for parameterized packages. This enhances MooZ’s capabilities for software reuse and maintenance in the large. We discuss several design issues for MooZ’s structuring mechanisms: the distinction between inheritance and subtyping, values and objects, vertical and horizontal composition, packages and classes. We also analyse the impact of our design decisions on software reuse. Keywords: Formal Methods, Z, Object-Oriented Specification, Parameterized Programming, Language design
1
Introduction
The integration of formal methods with object-oriented concepts can be quite useful for the development of high quality software. Formal methods are important when reliability is essential. On the other hand, object-oriented concepts such as objects and classes, subtype polymorphism, inheritance, and dynamic binding [2] favor software reuse and maintenance. Packages can enhance the support for reuse and maintenance offered by object-oriented concepts because they can group several (related) classes and data types, and so play an important role in structuring complex software. Parametric polymorphism [18] is another powerful concept for enhancing reuse and maintenance. It can be implemented by parameterization mechanisms at the level of packages, so that packages are parameterized by packages. Here we discuss several design issues of structuring mechanisms for MooZ [24,23], a formal specification language whose name stands for Modular object-oriented Z. MooZ is a conservative object-oriented extension of the Z [14,13] specification language. Since the beginning of its development MooZ has been widely used to formalize computational models in several different application areas such as Artificial Neural Networks [20] and Medical Systems [8]. We extended MooZ to support parameterized packages. This enhances MooZ's capabilities for reuse and maintenance in the large. In addition, we distinguish inheritance from subtyping, values from objects, vertical composition
from horizontal composition, and packages from classes. The extended version of MooZ is called MooZPP, an acronym for MooZ with Parameterized Packages. This paper is organized as follows. In Sect. 2, we present the features introduced in MooZPP, an extension of MooZ. In Sect. 3 and 4, respectively, we discuss the impact that these features have on reuse in the small and on reuse in the large. In Sect. 5, we compare MooZPP with other works. Finally, in Sect. 6 we summarise our results. In the following sections we assume basic knowledge of object-oriented concepts and of Z or a similar state-based formal specification language.
2
MooZPP
MooZPP [19] is an extension of MooZ that supports parameterized packages. A MooZPP specification consists of a set of packages. Each package can declare several (mutually) related classes and data types. In the following sections we discuss the main aspects considered during the design of MooZPP.
2.1 Inheritance and Subclassing
A key decision for MooZPP was to provide different mechanisms for inheritance and subtyping. Inheritance1 is a key reusability technique [2]. It avoids solving a problem from scratch and redoing what has already been done. It is a way of defining new classes based on already defined ones. By using inherits B in the definition of a class A, we indicate that A is an heir of B. MooZPP allows multiple inheritance and does not require the semantics of an ancestor class to be preserved by the heir class. So, an heir class does not have to refine the ancestor. Furthermore, an object of an heir cannot be used where an object of an ancestor class is expected, because inheritance does not establish a type relationship between heir and ancestor classes. As an example of inheritance, let us specify the class of bank accounts for which every withdrawal is taxed, assuming we already have a class of current accounts for which withdrawals are not taxed. Thus, TaxedAccount can be specified as an heir of CurrentAccount:

Class CurrentAccount
protected number, balance
state
number : N
balance :
balance ≥ 0
1 There is usually a confusion between the notions of inheritance and “subtyping with inheritance”.
operations
Credit
∆(balance)
m? :
m? ≥ 0
balance′ = balance + m?

Withdrawal
∆(balance)
m? :
balance − m? ≥ 0
balance′ = balance − m?
...
EndClass CurrentAccount.

The protected clause is present only in MooZPP and indicates that the attributes number and balance are visible to heir classes and subclasses of CurrentAccount. The private clause of MooZPP indicates that attributes are not visible to heir classes and subclasses. Both clauses can also change the visibility of operations. The predicate of the anonymous schema of CurrentAccount establishes that the balance of a current account will never be less than zero.

Class TaxedAccount
inherits CurrentAccount
operations
Withdrawal
∆(balance)
m? :
balance − (m? × 1.01) ≥ 0
balance′ = balance − (m? × 1.01)
...
EndClass TaxedAccount.

So (specification) code is reused, but objects of TaxedAccount cannot be used where objects of class CurrentAccount are expected. In fact, they should not, because they have incompatible behaviour (the operation Withdrawal of TaxedAccount behaves differently from Withdrawal of CurrentAccount). Therefore, it is possible to reuse the code of a class to define another, even when they cannot establish a type relationship. Subclassing is a classification technique, having a significant impact on extendibility. In MooZPP, subclassing really means “subtyping with inheritance”. Classes related by subclassing indicate that objects of a subclass are also objects of a superclass because they establish a subtype relationship. Moreover, objects of a subclass can be used in contexts where objects of a superclass are expected,
and objects of the subclass must have at least the same public attributes and methods as the superclass, preserving their behaviour. A method of a superclass can be overridden by a method of a subclass, since they establish a refinement relation. By using superclasses B in a class A, we indicate that A is a subclass of B. MooZPP allows a class to have multiple superclasses. For instance, the class of saving accounts is very similar to the class of current accounts, but the former has an operation that applies an interest rate to the account's balance.

Class SavingAccount
superclasses CurrentAccount
operations
ApplyInterest
∆(balance)
interest? :
balance′ = balance + (balance × interest?)
...
EndClass SavingAccount.

The attributes and operations of class CurrentAccount were all inherited by SavingAccount, preserving the behaviour of its superclass. So, objects of SavingAccount can be used where objects of CurrentAccount are expected. A rough Java analogy of this contrast between inheritance and subclassing is sketched below.
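For readers used to mainstream OO languages, the contrast can be approximated in Java, where extends always yields a subtype, so reuse without subtyping has to be expressed differently, for instance by delegation. The sketch below is only an illustration under that assumption; it is not MooZPP, and the types and method bodies are ours.

class CurrentAccount {
    protected int number;
    protected double balance;                          // invariant: balance >= 0

    void credit(double m)   { if (m >= 0) balance += m; }
    void withdraw(double m) { if (balance - m >= 0) balance -= m; }
}

// Reuse without a subtype relationship (the MooZPP "inherits"): TaxedAccount reuses
// CurrentAccount by delegation, so it cannot be used where a CurrentAccount is expected.
class TaxedAccount {
    private final CurrentAccount impl = new CurrentAccount();

    void credit(double m)   { impl.credit(m); }
    void withdraw(double m) { impl.withdraw(m * 1.01); }   // every withdrawal is taxed
}

// Subtyping with inheritance (the MooZPP "superclasses"): SavingAccount preserves the
// behaviour of CurrentAccount and only adds an operation, so substitution is safe.
class SavingAccount extends CurrentAccount {
    void applyInterest(double rate) { balance += balance * rate; }
}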
2.2 Data Types
Besides classes, MooZPP supports another mechanism for defining types. Abstract data types, which are sets of values together with operations over them, can also be defined. The main difference between classes and data types is that the first defines objects, whereas the second defines values. Values are immutable, whereas objects can have their state changed by method application. Basically, data types are related to the functional aspects of MooZPP directly inherited from Z. In fact, state-independent definitions like free types, schema types, abbreviations, and constants (introduced by axiomatic descriptions [14]) are used to define data types. For example, a trivial data type is defined by the following specification.

Datatype DTriv
EndDatatype DTriv.

which introduces the data type DTriv. As there are no specified constraints upon DTriv, nothing can be said about the values or operations of this type, achieving the same effect as the specification of a given set in Z. The following data type CmpValue specifies the concept of a comparable value.
Datatype CmpValue
= : CmpValue × CmpValue → B
< : CmpValue × CmpValue → B
EndDatatype CmpValue.

The first operation allows comparing whether two values are the same, and the second one allows comparing values with respect to an ordering relation like greater than or less than. The distinction between classes and data types avoids declaring stateless definitions (free types, schema types, abbreviations, and constants introduced by axiomatic definitions) in a class context, as that would be artificial and would hinder reuse of those definitions.
2.3 Packages
Packages are encapsulation units [5]. Each package in a MooZPP specification can group the declaration of several (mutually) related classes and data types. The relationship between the concepts described by classes and data types must be taken into account when deciding which type definitions should be grouped in a specific package. For example, a class that specifies the edges of a graph is closely related to a class that specifies the vertices. In fact, these classes are mutually dependent: the definition of edges depends on the definition of vertices, and vice versa. This is a strong indication that these classes must be declared in the same package. For example, consider the package NONTAXEDACCOUNTS. This package contains the classes CurrentAccount and SavingAccount declared so far. These classes are dependent since the concepts of accounts they describe are very close, which is why they are specified in the same package.
Package NONTAXEDACCOUNTS
Here should appear the specifications of CurrentAccount and SavingAccount.
EndPackage NONTAXEDACCOUNTS.

Package declarations are only available inside the package or by means of composition features. Package composition concerns the ways packages are combined to form new packages. In this way, a package can access declarations of other packages. Package import is a composition feature.
Import. Import allows package declarations to be available to other packages. So, if package B imports package A, all visible declarations contained in A will be available to B. Packages can be horizontally and vertically imported [9]. Basically, what differentiates horizontal and vertical import is the level of abstraction involved in the import and its accumulative aspect, as explained below. The horizontal mode of import is used when the concepts of a package inherently depend on the concepts of an imported package; they are said to be at the same level of abstraction. Through horizontal import, all visible declarations of an imported package are available to a package X and to all packages that import X. This process is transitive [11], so that if the package C includes the package B and B, in turn, includes the package A, then all visible declarations contained in A are also available to C. Horizontal import is indicated by the includes clause. For example, let us specify a package that contains a class that specifies taxed accounts. The package ALLACCOUNTS contains that class and those declared in the horizontally imported package.

Package ALLACCOUNTS
includes NONTAXEDACCOUNTS
Here should appear the specification of TaxedAccount
EndPackage ALLACCOUNTS.

Vertical import has to do with different levels of abstraction [11,9]. A package that vertically imports another package uses the functionality provided by the imported package so as to realize a given functionality. Through vertical import, all visible declarations of an imported package are available to a package X that imports it; however, these declarations are not available to packages that import package X. Vertical import is not a transitive process. The kind of dependence among packages that vertically import other packages is just circumstantial, not conceptual. The uses clause corresponds to the vertical mode of import. An example of vertical import is presented in Sect. 2.4.

Renaming. In order to adapt packages to new contexts, MooZPP allows declarations (classes, data types and operations) contained in a package to be given new names, thus favoring reuse. It avoids duplicating specification text to obtain declarations with the desired names. Package renaming in MooZPP is performed by the operator ‘#’ followed by a renaming list. For example,

NONTAXEDACCOUNTS#(CurrentAccount\RegularAccount)

results in a new package with the same functionality as NONTAXEDACCOUNTS, but with class CurrentAccount renamed to RegularAccount.
The renaming operator actually yields a new package, with the same functionality as its target, but with different declaration names. In the above example, the package NONTAXEDACCOUNTS remains the same after the application of the renaming operation. Renaming is also useful to solve name clashes between classes, data types and operations.

Slicing. MooZPP also supports an operation for slicing the functionality provided by a package. Slicing allows eliminating unwanted functionality from a package. If the functionality provided by a package exceeds what is needed to define another one, the former can be sliced to eliminate what is not necessary. Slicing is accomplished by the operator ‘\’ followed by a list of declaration names. For example,

NONTAXEDACCOUNTS\(Withdrawal.SavingAccount)

denotes the package that is obtained by discarding the operation Withdrawal from class SavingAccount contained in the package NONTAXEDACCOUNTS. The slicing operator actually yields a new package with a different functionality from the original one. The package used by the slicing operation remains the same after the application of the operation; in the above example, the package NONTAXEDACCOUNTS is unchanged.
2.4 Parameterized Packages
MooZPP supports the concept of parametric polymorphism [18] through parameterized packages. These packages generalize a particular behaviour across a family of related packages. A parameterized package, also called a generic package, is very similar to a non-parameterized package, except that it is parameterized by other packages, and so can refer to these parameters in its body. In order to be used, a parameterized package must be instantiated with arguments (packages). Each instantiation yields a new package. Thus, a generic package defines a set of related packages, one for each possible instantiation. Packages of such a set differ on the type of information they manipulate and present a common behaviour pattern [6]. A generic package usually must impose restrictions on its parameters, in the same way a typed method imposes type constraints on its parameters. Package Types. Package parameters have associated package types that state exactly what constraints valid arguments have to satisfy. This satisfaction corresponds to package refinement, as discussed in Sect. 2, which states that classes and data types present in the argument must refine those in the package type. We refine a formal specification by adding more information [15]. For example, we may be more precise about how data is to be stored, or about how certain calculations are to be carried out. A package type is similar to an ordinary package. For example, parameters having the following package type
Package DTRIV
Datatype DTriv
EndDatatype DTriv.
EndPackage DTRIV.

impose minimal restrictions on arguments, because the package type contains a data type that does not declare any associated operation. Clearly stated, it specifies that any package containing any data type can be used to instantiate a parameterized package whose parameter has DTRIV as package type. Like ordinary packages, package types can contain several mutually dependent classes and data types. This indicates that the package type imposes constraints on a group of mutually dependent concepts. For example, the package NONTAXEDACCOUNTS, when used as a package type, requires argument packages to contain at least two classes offering the functionality offered by those classes in NONTAXEDACCOUNTS.

Parameterization. Parameterization is a key technique for obtaining abstraction. There are two kinds of parameterization in MooZPP: horizontal and vertical. The difference between them is the kind of dependence a parameterized package establishes with the arguments used for instantiation, as exemplified below.

Horizontal. Horizontal parameterization involves parameters at the same level of abstraction as the parameterized package. It states that there is a conceptual dependence between a parameterized package and the arguments used for instantiation. For example, the package BANK, containing a class that defines a bank, is horizontally parameterized by a package that is supposed to define the classes that specify the accounts to be stored in the bank. The reason for horizontal parameterization is that the concept of accounts is inherent to the concept of bank; in other words, we cannot describe a bank without the concept of account. The symbol ‘::’ binds the formal parameter NTXACCT to its associated package type NONTAXEDACCOUNTS. We import the package SET, which contains the definition of a class of sets; this package is instantiated with the same argument used for instantiating BANK (we will discuss instantiation later).

Package BANK[NTXACCT :: NONTAXEDACCOUNTS]
uses SET[NTXACCT]
  Class Bank
    ...
  EndClass Bank.
EndPackage BANK.

BANK denotes a set of related packages that can be differentiated by the kind of accounts that are kept by banks. We obtain a different package for each different argument given to BANK, which is a parameterized package. In contrast to the specification of BANK given here, suppose this package were not parameterized, but horizontally imported the package NONTAXEDACCOUNTS. In this case, we would fix the kind of accounts a bank can keep, differently from the case of a parameterized package, in which the kind of information to be manipulated depends on the argument given to the package.
Vertical. Vertical parameterization is another abstraction feature, which involves packages at different levels of abstraction. For example, a vertically parameterized package depends on the representation of storage provided by an argument given to it, as will be discussed below. This kind of parameterization establishes a circumstantial dependence between a parameterized package and the arguments used for instantiation. Here we can point out the difference between horizontal and vertical parameterization. Horizontal parameterization states that a package inherently depends on the concepts defined by its parameters. For example, a bank inherently depends on the concept of accounts. On the other hand, vertical parameterization states that the dependence on the concepts defined by package parameters is not inherent. For example, we could specify banks using sets. In this case, the package BANK would be vertically parameterized by the package SET. Here we parameterize BANK by a package that contains the descriptions of sequences in order to show that the concepts of sets and sequences are not inherent to banks, which justifies the vertical mode of parameterization. The horizontal parameter has to do with the accounts to be stored in the bank, while the vertical parameter is the mathematical model of storage of a bank.

Package BANKSEQ[NTXACCT :: NONTAXEDACCOUNTS]{NTXACCTSEQ :: SEQUENCES[NTXACCT]}
  Class Bank
    ...
  EndClass Bank.
EndPackage BANKSEQ.
The package BANKSEQ has one horizontal parameter and one vertical parameter. The horizontal parameter is NTXACCT, which can only be bound to argument packages that satisfy the properties specified by the package NONTAXEDACCOUNTS. The vertical parameter is NTXACCTSEQ, whose package type is denoted by the package expression SEQUENCES[NTXACCT]. The parameter NTXACCTSEQ can be bound to argument packages that satisfy the requirements of the package type denoted by that expression. Notice that SEQUENCES² is instantiated with the horizontal parameter NTXACCT (we will discuss instantiation later). In order to be used, this package must be instantiated with two argument packages. The first provides the definition of the accounts that will be stored in the bank whose mathematical model is described by the second argument.
Views. An argument package may satisfy a package type in more than one way. So, it is necessary to describe the particular ways in which argument packages satisfy package types. For example, the following package contains a data type that is an abbreviation for the natural numbers.

Package NATURAL
  Datatype Natural
    Natural == N
  EndDatatype Natural.
EndPackage NATURAL.

This package can satisfy the package type CMPVALUE, which contains only the data type CmpValue (see Sect. 2.2), which requires a data type with an operation for equality and another one for ordering. In fact, the required order could be instantiated either with ≤ or with ≥, among other options. A view [10] is a mapping from the types (classes and data types) and operations of a package type to types and operations of an argument package, indicating how an argument package should satisfy a package type. In fact, views are used in order to express how an argument package satisfies a package type in a particular instantiation. The semantic aspect of views is determined by the relation between declarations of the argument package and declarations of the package type. The argument package must provide declarations that satisfy (refine) the declarations of the package type under the interpretation given by a view. In fact, a view is valid only if the related types and operations are in the refinement relation. More precisely, let D be the set of type declarations contained in a package type PT and D′ the set of type declarations contained in an argument package AP.
² This package contains the specification of different kinds of sequences, such as FiniteSequence and NonEmptySequence.
A view v : PT → AP maps each element d in D to an element d′ in D′, and each operation op present in d is mapped to an operation op′ in d′. Furthermore, the types of op and op′ should match. A default view is provided when there is an obvious view to use and it is not necessary to write out the view in full detail. For example, a view that consists of mapping classes and data types with the same names, since they present the same number of operations with the same names and types, is obvious. If the default view is not the desired one to map declarations from a package type to declarations of an argument package, it is necessary to declare the desired view explicitly.
Instantiation. Parameterized packages must be instantiated with actual arguments (packages) to be used. Instantiating a package requires views for its parameters. Any package that satisfies the requirements of a package type can be used to instantiate the corresponding parameterized package, yielding a new package that is obtained from the parameterized package body by replacing references to the parameter package by references to the argument package, using the views to bind types and operations of the package type to types and operations of the argument package. In this way, references to types and operations of a package parameter become references to types and operations of an argument package. For example, the package BANK should be instantiated in order to obtain a specific class of banks:

Package USUALBANK includes
  BANK[view to NONTAXEDACCOUNTS (CurrentAccount to CurrentAccount; Withdrawal to Withdrawal ...)]
EndPackage USUALBANK.

This is an ordinary package obtained from the instantiation of BANK using NONTAXEDACCOUNTS as argument. As we are using the default view, we could specify the package USUALBANK in the following way:

Package USUALBANK includes BANK[NONTAXEDACCOUNTS]
EndPackage USUALBANK.

Instantiations of a parameterized package yield different packages unless equivalent views are used. Two views are equivalent if, independent of type and operation names and order of appearance, they contain the same binding components [6]. In fact, considering that an argument package may satisfy the
requirements of a package type in different ways, there might be different mappings from types and operations of a package type into types and operations of an argument package that are described by views, and consequently different bindings between parameters and arguments. So, a different package is yielded for each different view.
Compatibility Rules. In order to obtain valid instantiations, argument packages must offer at least the same functionality as their associated package types. In order to guarantee that an instantiation is valid, some compatibility rules must be satisfied by the views used for instantiation. These rules attest that types and operations of the package type are mapped to “compatible” types and operations of the argument package. They are further discussed elsewhere [19]. Argument packages must satisfy the constraints imposed on package parameters by their associated package types. This notion of satisfaction corresponds to refinement, in the sense that an argument A satisfies a formal parameter P if P ⊑ A, where ⊑ denotes the refinement relation on specifications. The argument given at instantiation must refine the types of the package type. A view v can arbitrarily map type names contained in a package type to type names in an argument package; however, this mapping is valid only if it preserves the refinement relation between specifications. If A and B are types with A ⊑ B, and v is a valid view for instantiating a parameterized package, then v(A) ⊑ v(B), where v(A) is the image of A under v.
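To make the package-type idea concrete for readers coming from mainstream languages, the following rough Java sketch (not MooZPP notation; all names are illustrative, and the analogy is much weaker than refinement) uses a bounded type parameter in the role of a package type: only arguments that satisfy the constraint interface are valid, and each instantiation yields a different specialised version of the generic.

import java.util.ArrayList;
import java.util.List;

// Plays the role of the package type NONTAXEDACCOUNTS: it states what any argument must offer.
interface NonTaxedAccount {
    void withdraw(double amount);
    double balance();
}

// Plays the role of BANK[NTXACCT :: NONTAXEDACCOUNTS]; List<A> loosely stands in for SET[NTXACCT].
class Bank<A extends NonTaxedAccount> {
    private final List<A> accounts = new ArrayList<>();

    void open(A account) { accounts.add(account); }

    double totalAssets() {
        double sum = 0.0;
        for (A a : accounts) { sum += a.balance(); }
        return sum;
    }
}

// A valid "argument package": it satisfies (here, simply implements) the constraint.
class CurrentAccount implements NonTaxedAccount {
    private double balance;
    public void withdraw(double amount) { balance -= amount; }
    public double balance() { return balance; }
}

// Bank<CurrentAccount> then corresponds, very roughly, to the instantiated package USUALBANK.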
3 Reuse in the Small
MooZPP provides an inheritance mechanism that favors class reuse. This mechanism allows class reuse between different type hierarchies (families). It avoids specifying from scratch classes that present some similarities even if these classes belong to different type hierarchies. In fact, class inheritance does not require the notion of subtyping to hold between parent and heir classes. This allows code reuse among classes that should not be related by subtyping, but that have enough similarities to justify the use of inheritance. So inherited code can be freely modified. An heir class has the freedom to modify the inherited code so that it has a different behaviour from the original. Inheritance is more flexible, in terms of reuse, than subclassing because it does not require the preservation of semantics between classes. So, (specification) code can be reused when subclassing makes no sense. On the other hand, it might not be possible to prove that properties hold between classes related by inheritance, because semantic preservation is not compulsory. Data types play a relevant role in structuring a specification, since all operations related to the same set of values are grouped together and are syntactically delimited. This avoids scattering these operations over a specification text and thus improves the legibility and reuse of specifications. In MooZPP, data types are declared inside packages. So, they are available to other classes
and data types defined inside the same package and to classes and data types specified inside other packages by means of package composition features.
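Since mainstream object-oriented languages tie code inheritance to subtyping, the closest everyday counterpart of the inheritance-without-subtyping described above is reuse by delegation. The sketch below is only a hedged Java illustration of that idea (Notebook and Logbook are invented names, not taken from the paper): Logbook reuses Notebook's code and freely changes its behaviour without becoming a subtype of it.

import java.util.ArrayList;
import java.util.List;

class Notebook {
    private final List<String> entries = new ArrayList<>();
    void add(String entry) { entries.add(entry); }
    int size() { return entries.size(); }
}

class Logbook {                                    // deliberately not a subtype of Notebook
    private final Notebook store = new Notebook(); // the reused implementation
    void log(String event) { store.add("[LOG] " + event); } // reused behaviour, freely modified
    int entryCount() { return store.size(); }
}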
4 Reuse in the Large
In this section we evaluate the impact on reuse caused by the introduction of features like packages in MooZPP. Packages and classes are complementary concepts [11,21]. Classes only support reuse and maintenance in the small, whereas packages support reuse and maintenance in the large. This is based on the fact that whole class hierarchies and related data types can be reused together through composition features. In other words, whole type hierarchies can be reused, preserving the relations (dependencies) they have among each other. The horizontal and vertical modes of composition help to document the structure and dependencies of a specification. They can indirectly improve reusability, since a well-structured specification is easier to understand. A specification's structure is related to levels of abstraction. Import, which is a composition feature, can be classified into two modes: horizontal and vertical. The horizontal mode of import has to do with structuring a given level of abstraction, whereas the vertical mode has to do with different levels of abstraction. This allows differentiating inherent and circumstantial concept dependences between packages. Complex horizontal and vertical structures of packages can be imported by other packages. So, import allows not only structuring, but also reusing complex structures. In object-oriented programming languages such as Eiffel [2] there is no distinction between packages and classes. This is based on the idea that the encapsulation and abstraction provided by classes can substitute for packages. However, other object-oriented programming languages support the concept of packages (packages in Java [17] and namespaces in C++ [3]). In fact, packages can enhance software reuse [11] beyond that supported by the object-oriented paradigm, as we have seen so far. PJava [6] is a proposal of parameterized packages for Java which uses concepts of parameterized programming [10] such as views and package renaming. PJava does not support the horizontal import mode and the package slicing operation that are supported by MooZPP.
4.1 Parameterized Packages
Parameterized packages allow grouping several related classes and data types, which can refer to declarations present in parameters. Instantiating a parameterized package yields a new package which can be combined with other packages by means of composition features. A parameterized package can declare several classes and data types. Furthermore, parameterized packages put packages together to compose new packages. As each package can contain several related classes and data types, when packages are put together, several classes and data types are put together to compose several other related classes and data types.
A parameterized class puts classes together to compose a new class, whereas a parameterized package puts packages together to compose a new package. But note that a single package can group the declarations of several types. Consequently, we achieve a larger granularity for reuse. Parameterized packages are at least as expressive as parameterized classes. In fact, a parameterized class can be considered a special case of a parameterized package: a package containing just one class and parameterized by package types, each of them containing a single class as well. For software reuse to be effective, it is necessary to have the capability of adapting software components to contexts that are different from those in which they were specified. It is important to reuse software components differently in different contexts without having to modify the components themselves. In MooZPP, views and renaming allow adapting software components to particular contexts.
4.2 The Role of Views
Views are mappings from declarations (classes, data types, and associated operations) of a package type to declarations of an argument package. The mapped declarations may have different names in the source (package type) and in the target (argument package), but they must have the same type. This contrasts with other approaches that can only accept arguments with operations having the same names and types [2]. In such approaches, it is necessary to know the parameter operation names in advance in order to specify types with the same operation names so that they can be (valid) instantiation arguments. In MooZPP, views play the role of adapter packages. However, differently from adapters, views do not introduce any new concept that would be used only for instantiation. Moreover, views are simpler to write than adapter packages.
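As an illustration of the difference this makes, consider the following hedged Java sketch (illustrative names only, not MooZPP syntax): a view-like adapter lets an argument whose operations have other names satisfy a required interface, and two different adapters correspond to two different views of the same argument, as in the ≤ / ≥ option mentioned for CMPVALUE in Sect. 2.

// The required interface plays the role of a package type asking for an ordering.
interface Ordered<T> {
    boolean leq(T a, T b);
}

// The "argument package": its operations have different names from those required.
class Naturals {
    static boolean lessOrEqual(int a, int b) { return a <= b; }
    static boolean greaterOrEqual(int a, int b) { return a >= b; }
}

// Two different "views" of the same argument: one binds the required ordering to <=, the other to >=.
class LeqView implements Ordered<Integer> {
    public boolean leq(Integer a, Integer b) { return Naturals.lessOrEqual(a, b); }
}

class GeqView implements Ordered<Integer> {
    public boolean leq(Integer a, Integer b) { return Naturals.greaterOrEqual(a, b); }
}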
4.3 Renaming
Renaming is fundamental in adapting packages and package types to new contexts. This improves reuse because already specified packages can be adapted to new contexts; it avoids rewriting packages just to give contextually relevant names to classes and data types. Renaming also serves the purpose of improving specification readability. Through renaming, classes and data types (operation names can be renamed as well) can be given names that are relevant not only to the context of a specification itself, but also to the application domain. In this way, names present in a specification can be easily related to the entities of the part of the real world that they are expected to represent.
5 Related Work
GenVoca [7] shares many of our concerns and distinguishes between components and “realms”, which correspond to our package types, although without semantic
constraints. GenVoca is primarily based on vertical parameterization, although a limited form of horizontal parameterization allows constants and types without any horizontal composition. GenVoca does not involve specification, but is used, in fact, to generate systems supported by the P++ language, an extension of C++. Another related work, RESOLVE [26], also shares many concepts we have used in the design of MooZPP. The horizontal and vertical import modes of MooZPP are also supported by RESOLVE by means of addition and enhancement of abstract components. However, in RESOLVE there is no way to delete an operation from an interface as we have in MooZPP. RESOLVE involves both specification (abstract components) and implementation (concrete components). The latter is not supported by MooZPP. Larch [25,16] supports the horizontal and vertical modes of import. Parameterization is limited to the horizontal mode. There is no support for the concept of views or any other kind of adapter. Notice that this discussion is limited to components described using the Larch Shared Language. In RESOLVE and in Larch, specified components are in some way implemented. MooZPP is an object-oriented specification language and so it cannot be used in all phases of software development. A method for refining MooZPP specifications into a programming language is a way to bridge the gap between specification and implementation (a method for refining MooZ specifications into Eiffel programs was presented in [4]). Unlike OOZE [1], a general wide-spectrum language based on the notation and style of Z which presents modularization mechanisms similar to the ones presented here, MooZPP clearly distinguishes the vertical and horizontal modes of composition that help to document the structure and dependencies of specifications. Although OOZE presents three different kinds of modules (one each for encapsulating declarations of classes, theories, and data types), we can achieve the same functionality by using MooZPP packages. Both languages support views, which are used when instantiating a parameterized package. OOZE allows complex methods to be defined only as sequential and parallel compositions of already defined methods, whereas MooZPP preserves the Z schema calculus, allowing operations such as conjunction and sequential composition of operation (method) schemas. So, MooZPP supports a more powerful schema calculus than that of OOZE. MooZPP is an extension of the model-based language Z. MooZPP data types declare a set of values and operations on these values. In order to satisfy the constraints of a data type specified in a package type, we must provide a package that also contains a data type whose set of values has, at least, values corresponding to those of the data type of the package type. In OOZE, however, constraints on data types are not based on a single model. The functional module of OOZE consists of an order-sorted signature, which gives sort and function symbols, and a set of equations that relate symbols of the signature. In order to satisfy a functional theory in OOZE, argument modules must provide data types that satisfy the equations of the theory. In summary, data types in MooZPP and the functional module of OOZE have different semantics. This comes from the
fact that OOZE is a derivative of OBJ [12], which also supports all concepts of parameterized programming. MooZPP's reuse capabilities are more powerful than those supported by Object-Z [22], an extension of the formal specification language Z with object-oriented concepts, because MooZPP supports a modularization mechanism at the package level whereas Object-Z supports modularization only at the class level. In fact, a package in MooZPP can group several related classes and data types.
6 Conclusions
In this paper we discussed some design issues of an extension of the formal specification language MooZ. This extension supports concepts of parameterized programming [10]. The introduction of parameterized packages enhanced MooZ’s capabilities for reuse and maintenance. In fact, MooZPP's modularization capabilities are more expressive and offer better support for reuse and maintenance in the large than those provided by MooZ. MooZPP supports, as we have seen, concepts of parameterized programming that are implemented only with limitations in some programming languages. Object-oriented extensions of Z, e.g. Object-Z, do not support parameterized packages. Only OOZE supports parameterized packages, but it is an algebraic specification language (as it is a derivative of OBJ), not a model-based specification language like all other extensions of Z. Some specifications using MooZPP have been developed, and the first results show that these features behave satisfactorily, improving MooZ's capabilities and expressiveness. Specifying in MooZPP real systems already described in MooZ, such as those presented in [8,20], will serve to confirm our ideas through comparison between the specifications. We are now formally describing the semantics of the module language we presented here.
Acknowledgements Thanks to the referees for the excellent comments which helped to improve the final version.
References
1. A. J. Alencar and J. A. Goguen. OOZE: An Object Oriented Z Environment. In ECOOP’91 - V European Conference on Object-Oriented Programming, Geneva, Switzerland, 1991. Springer-Verlag. 416
2. B. Meyer. Object-Oriented Software Construction. Prentice-Hall, second edition, 1997. 402, 403, 414, 415
3. Bjarne Stroustrup. The C++ Programming Language. Addison-Wesley, 3rd edition, 1997. 414
4. V. A. de O. Cordeiro. From MooZ to Eiffel: A Rigorous Approach to System Development. Master’s thesis, Universidade Federal de Pernambuco, Departamento de Informática, 1994. In Portuguese. 416
5. D. A. Watt. Programming Languages Concepts and Paradigms. C. A. R. Hoare Series Editor. Prentice Hall International, 1989. 406
6. D. Aranha and P. Borba. Parameterized packages and Java. II Brazilian Symposium on Programming Languages, pages 208–218, September 1997. Campinas, Brazil. 408, 412, 414
7. D. Batory, V. Singhal et al. The GenVoca Model of Software-System Generators. IEEE Software, pages 89–94, September 1994. 415
8. G. H. M. B. Motta. Object-Oriented Formal Specifications: Application in the Development of an Effort Eletrocardiogram Processing System. Master’s thesis, Departamento de Informática, Universidade Federal de Pernambuco, 1992. In Portuguese. 402, 417
9. J. Goguen. Reusing and interconnecting software components. Computer, 19(2):16–28, February 1986. 407
10. J. Goguen. Principles of parameterized programming. In Ted Biggerstaff and Alan Perlis, editors, Software Reusability, volume I: Concepts and Models, chapter 7, pages 159–225. ACM Press, 1989. 411, 414, 417
11. J. Goguen and A. Socorro. Module composition and system design for the object paradigm. Journal of Object-Oriented Programming, 7(9), 1995. 407, 414
12. J. Goguen and T. Winkler. Introducing OBJ3. Technical Report SRI-CSL-889, SRI International, Computer Science Lab, August 1988. Revised version to appear with additional authors José Meseguer, Kokichi Futatsugi and Jean-Pierre Jouannaud, in Applications of Algebraic Specification Using OBJ, edited by Joseph Goguen. 417
13. J. M. Spivey. Understanding Z: A specification language and its formal semantics. Cambridge University Press, 1988. 402
14. J. M. Spivey. The Z Notation: a reference manual. C. A. Hoare Series Editor. Prentice Hall International, 2nd edition, 1992. 402, 405
15. J. Woodcock and J. Davies. Using Z - Specification, Refinement and Proof. C. A. R. Hoare Series Editor. Prentice-Hall International, 1996. 408
16. J. V. Guttag, J. J. Horning and A. Modet. Report on the Larch Shared Language Version 2.3. Technical report, Digital Equipment Corporation, 1990. SRC Research Report 58. 416
17. K. Arnold and J. Gosling. The Java Programming Language. Addison-Wesley, 1996. 414
18. L. Cardelli and P. Wegner. On understanding types, data abstraction and polymorphism. Computing Surveys, 17(4), December 1985. 402, 408
19. M. L. Cornélio. Design and evaluation of an object-oriented formal specification language. Master’s thesis, Universidade Federal de Pernambuco, Departamento de Informática, Recife - PE, 1998. 403, 413
20. P. D. de L. Machado. EASY — An Environment for Simulation of Artificial Neural Networks. Master’s thesis, Departamento de Informática, Universidade Federal de Pernambuco, 1994. In Portuguese. 402, 417
21. R. Borges and R. Ierusalimschy. Módulos em Linguagens Orientadas a Objetos. I Brazilian Symposium on Programming Languages, pages 371–384, September 1996. Belo Horizonte, Brazil. 414
22. R. Duke, P. King, G. A. Rose and G. Smith. The Object-Z Specification Language: Version 1. Technical Report 91-1, Department of Computer Science, University of Queensland, Software Verification Center, April 1991. 417
23. S. L. Meira and A. L. C. Cavalcanti. MooZ Case Studies. In S. Stepney, R. Barden and D. Cooper, editors, Object Orientation in Z, Workshops in Computing, chapter 5, pages 37–58. Springer-Verlag, 1992. 402
24. S. R. L. Meira and A. L. C. Cavalcanti. The MooZ Specification Language. Technical report, Universidade Federal de Pernambuco, Departamento de Informática, Recife - PE, 1992. Also available at http://www.di.ufpe.br/~mooz. 402
25. S. J. Garland, J. V. Guttag and J. J. Horning. An Overview of Larch. In Peter E. Lauer, editor, Functional Programming, Concurrency, Simulation and Automated Reasoning, volume 693, pages 329–348. Springer-Verlag Lecture Notes in Computer Science, 1993. 416
26. W. F. Ogden, M. Sitaraman et al. Special Feature: Component-Based Software Using RESOLVE. Software Engineering Notes, 19(4):21–67, October 1994. 416
Software Reuse in an Object Oriented Framework: Distinguishing Types from Implementations and Objects from Attributes
J. Leslie Keedy1, K. Espenlaub1, G. Menger1, A. Schmolitzky2, and M. Evered3
1 Department of Computer Structures, University of Ulm, D-89069 Ulm, Germany
{keedy,gisela,espenlaub}@informatik.uni-ulm.de
2 Peninsula School of Computing and Information Technology, Monash University, Victoria 3199, Australia
[email protected] 3School of Mathematical and Computer Sciences, University of New England Armidale, N.S.W. 2351, Australia
[email protected] Abstract. Almost no object oriented programming languages offer distinct language constructs for the definition of types and their implementations; instead these are united into a single class concept. Similarly object oriented programming languages do not normally distinguish between object types, which may be independently instantiated, and attribute types, which may not. The paper shows how these distinctions can be used to develop both a specialized and a generalized bracket technique, and how the ideas lead to interesting possibilities for reusing code in a flexible and modular way.
1 Introduction
The basic idea of object-oriented programming – that a program is decomposed into objects which are defined in terms of classes, the methods of which determine the behavior of the objects which are instances of a particular class – can contribute in a general way to the reuse of software. This idea, in the hands of an experienced designer, leads to modular software which can correspond to "real" objects in the "real world", and these often find an application in many different situations. More specifically the idea of inheritance, which is often regarded as the distinguishing characteristic of object oriented programming, is considered by many to be useful primarily because it allows the code ("methods") which has been developed to support a class of objects to be reused in the implementation of subclasses. In this paper the idea of code reuse in object-oriented programming is taken two steps further. First, a distinction is made between the definition of classes (as the central construct in most object oriented languages) on the one hand, and of types and implementations on the other hand. Conventionally, a "class" unites the concept of type and implementation in such a way that these are effectively inseparable. This conventional approach has many disadvantages. For example a class cannot have multiple
implementations without affecting its status as a type. By separating the concepts of an object type (i.e. a specification of the interface and behavior of a class of objects) and an object implementation (i.e. a code module or modules which actually implement the specification) and treating these as orthogonal concepts, it becomes possible to envisage that a type may have more than one implementation, even in a single program. In fact this distinction leads to a novel view [1] (for the object oriented world) of software reuse, in that for example one implementation of a type can (but need not) be reused in a different implementation of the same type. A further important effect of this distinction is that inheritance in the sense of code reuse (subclassing) can be clearly distinguished from inheritance in the sense of type specialization (subtyping), cf. e.g. [9, 19, 20] and this for example leads to clearer program designs. Second, a distinction is drawn between objects and attributes. The basic idea behind this distinction was first explained in [16]. In the present paper some aspects of the idea are described and their relevance to the reuse of software is highlighted. Each of these extensions to object oriented programming provides a (different) basis for introducing the idea of bracket routines, i.e., routines which are designed to bracket another routine or routines. In the case of the type/implementation distinction the bracket routines can be "specialized", whereas the object/attribute distinction provides a basis for "generalized" bracket routines. Each has its own uses, and both can add considerably to the range of potential reuses of software. The advantages to be gained from these two ideas cannot be realized using existing programming languages. For this and other reasons an experimental programming language L1 is being developed at the University of Ulm. This has its roots in object orientation and in procedural languages in the style of Pascal and its successors. More recently the ideas based on the distinction between type and implementation definitions were extended and incorporated into the programming language Tau [25], which is an experimental modification of the Java language [14]. The notation used in the examples is based on L1. In general this notation is selfexplanatory, but footnotes are used where appropriate to explain unusual ideas. Section 2 examines the idea of separating types from implementations in relation to the reuse of code and shows how this separation can lead to a technique for introducing specialized brackets. In section 3 the separation of object types from attribute types is motivated in terms of code reuse, and this leads to a technique for implementing more generalized bracket code. The basic differences between specialized and generalized brackets are outlined in section 4. Then in section 5 the advantages of both techniques for code reuse are summarized. Section 6 refers to related work and section 7 concludes the paper.
2 Separating Definitions of Types and Implementations
In this section we consider the consequences for code reuse of separating definitions of types from definitions of implementations.
2.1 Achieving Standard Code Reuse
The definition of an object type in L1 (known via the keyword objtype) can be viewed as a semi-formal specification of a class of objects, consisting of the signatures of the methods associated with the object class and possibly some natural language explanations. These signatures may include pre and post conditions and definitions of exceptions which the methods might raise, but such features are ignored in the examples in this paper for the sake of simplicity. We begin with a simple example of an object type book: objtype book op set_title (in title: string) enq get_title: string op set_author (in author:string) enq get_author: string op set_isbn (in isbn: isbn_num) enq get_isbn: isbn_num constr new end book
The keyword constr introduces a constructor. The keyword op introduces a routine known as an operation, which can modify the state of an instance of the class; the keyword enq introduces an enquiry, which can return information about the state of an object, but may not modify the state. L1 conveniently allows a pair of routines for setting and getting the state of a component to be defined together as a var. Thus the above definition is equivalent to: objtype book var title: string author: string isbn: isbn_num constr new end book
However this is merely syntactic sugar; it does not imply that such a component is actually represented as a variable in an implementation of the type. An object type can inherit from another object type, as indicated by the keyword isa, e.g. objtype loanable_book isa book var currently_loaned: bool due_date: date borrower: person enq days_overdue: int enq date_returned: date op put_on_loan (in b: person; d: date) op return_from_loan (in d: date) end loanable_book
An implementation is normally explicitly associated with a type definition, as follows: impl book_impl_1 for book .. data and code for the methods .. end book_impl_1
In the case of a simple type such as book, which contains only var definitions and a simple constr method, the compiler can create an implementation automatically, but if the programmer chooses he can provide an explicit implementation or
implementations (for example to provide consistency checking in the operation set_isbn). An implementation may reuse the code of another implementation. This technique can be used to achieve reuse for inheritance, as in conventional object orientation, e.g. impl loan_book_1 for loanable_book reuse book_impl_1 .. data and code for the methods not inherited from book .. end loan_book_1
The code of a supertype need not be reused in this way, but when it is, this does not exclude the possibility that some of the methods from the reused module might be overridden in the new implementation. In an inherited routine which is overridden (redefined) it is sometimes convenient to reuse the original code. This is possible in some object oriented languages using a technique such as super as found in Smalltalk-80 [13]. For this purpose L1 uses a "hat" symbol, as follows:
impl loan_book_1 for loanable_book reuse book_impl_1
  op set_isbn (in isbn: isbn_num)
  begin
    .. new code ..
    ^set_isbn
    .. new code
  end set_isbn
  .. data and code for other methods ..
end loan_book_1
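For readers more familiar with Java than with L1, the hat corresponds roughly to a super call inside an overriding method, as the following hedged sketch shows (BookImpl, CheckedBookImpl and the ISBN check are invented for illustration); the new code brackets the reused code before and after the call.

class BookImpl {
    protected String isbn;
    void setIsbn(String isbn) { this.isbn = isbn; }
}

class CheckedBookImpl extends BookImpl {
    @Override
    void setIsbn(String isbn) {
        if (isbn == null || isbn.isEmpty()) {          // new code before the reused code
            throw new IllegalArgumentException("empty ISBN");
        }
        super.setIsbn(isbn);                           // reuse of the original code (the "hat")
        System.out.println("ISBN set to " + isbn);     // new code after the reused code
    }
}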
2.2 Multiple Implementations of a Type
Because the concepts of type and implementation are united in most object oriented languages into a single class construct, a type may have only one implementation and the reuse of code is restricted to the cases described in section 2.1. However in L1 and Tau a type may have more than one implementation (and these may be used concurrently for different objects of the same type in a single program¹). This creates new possibilities for code reuse. For example an implementation of a type can reuse code from another implementation of the same type, e.g.
impl book_impl_2 for book reuse book_impl_1
  .. data and code for some methods ..
end book_impl_2
In this case the second implementation reuses the code of the first implementation for all the methods which are not explicitly overridden. The use of the hat symbol in this context provides a natural way of bracketing code in such a manner that it is not linked to the type inheritance hierarchy, e.g.
¹ The binding of implementations to objects is determined by a combination of defaults and pragmas.
impl book_impl_2 for book reuse book_impl_1
  op set_isbn (in isbn: isbn_num)
  begin
    .. new code ..
    ^set_isbn
    .. new code
  end set_isbn
  .. data and code for some methods ..
end book_impl_2
Notice that such "bracket" implementations are specialized, in the sense that the bracketing part and the bracketed part both have access to the parameters of the routine.
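A rough Java rendering of the same situation (hedged: Java offers no direct equivalent of selecting among several implementations of one type, so the interface/class split below is only an approximation, and all names are invented) is an interface standing for the type, with two coexisting implementations, the second reusing the first:

interface Book {
    void setTitle(String title);
    String getTitle();
}

class BookImpl1 implements Book {
    private String title;
    public void setTitle(String title) { this.title = title; }
    public String getTitle() { return title; }
}

// Reuses BookImpl1 for every method it does not override, wrapping the reused code where it does.
class BookImpl2 extends BookImpl1 {
    @Override
    public void setTitle(String title) {
        super.setTitle(title.trim());   // reused code, surrounded by new code
    }
}

// Both implementations can serve different Book objects in the same program:
//   Book a = new BookImpl1();
//   Book b = new BookImpl2();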
2.3 Type Independent Reuse
So far the examples have illustrated how code can be reused in the context of a type hierarchy (section 2.1) or a single type (section 2.2). In fact both in L1 and in Tau the concepts of type and of implementation are kept fully orthogonal, so that it is also possible for an implementation to be reused in an implementation of a completely independent type, or even in a case where the subtyping and subclassing relations run contrary to each other. An example of the second case was described in [2]. The scenario at the type level involves a double ended queue inheriting from a normal queue, e.g.
objtype queue
  op put_at_back (in e: element)
  op get_from_front (out e: element)
  constr new
end
objtype d_e_queue isa queue
  op put_at_front (in e: element)
  op get_from_back (out e: element)
end
Given an existing implementation d_e_q_1 of the type d_e_queue, it is attractive to reuse the code of this to implement the type queue. In L1 and Tau this is simple to achieve:
impl d_e_q_1
  op put_at_front (in e: element) begin ... end
  op put_at_back (in e: element) begin ... end
  op get_from_front (out e: element) begin ... end
  op get_from_back (out e: element) begin ... end
  constr new begin ... end
end d_e_q_1
impl q1 for queue reuse d_e_q_1 end
This is only possible because the separation of types from implementations is kept fully orthogonal. In section 3.3 we shall encounter an example of code reuse involving an implementation of two independent types.
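In Java, where subclassing always implies subtyping, the queue/double-ended-queue reversal can only be approximated by delegation; the hedged sketch below (SimpleQueue and QueueFromDeque are invented names) reuses the standard library's double-ended queue to implement a plain queue type that is not related to it by subtyping in the Java sense.

import java.util.ArrayDeque;
import java.util.Deque;

interface SimpleQueue<E> {
    void putAtBack(E e);
    E getFromFront();
}

class QueueFromDeque<E> implements SimpleQueue<E> {
    private final Deque<E> impl = new ArrayDeque<>();     // reused double-ended queue code
    public void putAtBack(E e) { impl.addLast(e); }
    public E getFromFront() { return impl.pollFirst(); }  // returns null if the queue is empty
}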
3 Attribute Types and Object Types
We now turn to the second key feature which allows the reuse of code to be increased in an object oriented context: the distinction between object types and attribute types. 3.1
The Basic Concept
The relationship between attribute types and object types is analogous to the relationship in natural language between adjectives and nouns. An object generally corresponds to a noun and an attribute to an adjective. Consider a (motor) car: in the English language the word "car" is a noun and in an object-oriented language a (particular) car can be viewed as an object. Similarly a "book" is a noun in English and potentially an object in a program. In natural languages nouns can be qualified by adjectives. Some examples of adjectives in English are "loanable" and "catalogued". The fact that nouns can be qualified by adjectives gives natural languages an enormous flexibility, because (a) the same adjective can be used to qualify many nouns, e.g. a "loanable book" or a "loanable car" and (b) a noun can be qualified by more than one adjective, e.g. a "catalogued, loanable book". Often we use set expressions for an association between a noun and one or more adjectives, e.g. we usually call a "loanable car" a "hire car" or "rental car" and a "catalogued, loanable book" a "library book". As we saw in section 2, types in L1 can be defined corresponding to (types of) objects, introduced by the keyword objtype. Types corresponding to (types of) attributes are introduced by the keyword attrtype, e.g. attrtype loanable var currently_loaned: bool due_date: date borrower: person enq days_overdue: int enq date_returned: date op put_on_loan (in b: person; d: date) op return_from_loan (in d: date) end loanable
Notice that this definition is almost identical to that which appears in section 2.1 in the definition of the objtype loanable_book. The key difference is that it has been separated from the type book, and can now be used in a more general way. This results in an alternative way of defining loanable_book as follows: objtype loanable_book isa loanable book end loanable_book
Here a new object type is defined by inheriting from an object type and from an attribute type. It is possible to add new methods when defining a type in this way, e.g.
objtype library_book isa loanable book
  var dewey_number: string
      section_name: string
      shelf_number: int
end library_book
One of the main advantages of attribute types is that the same attribute type can easily be combined with different object types. For example, given a further object type car:
objtype car
  var registration_number: string
      type: string
      number_of_seats: int
  constr new (in reg_num: string
              car_type: string
              seat_count: int)
end car
it is now easy to define a hire car: objtype hire_car isa loanable car end hire_car
Notice that this not only enhances the modularity of programs; it is also a powerful enhancement of polymorphism, because for example all loanable objects can be passed to a routine, e.g. op send_overdue_notice (in x: loanable) begin if x.days_overdue > 21 then .. endif end
Modularity, generality and polymorphism all favor the use of attribute types rather than conventional inheritance by extension. The definition of loanable_book in section 2.1 is much less flexible than the same type as defined in this section. In fact these arguments suggest that it might also have been advisable to introduce a further attribute type catalogued, rather than add new methods when defining the type library_book, as follows: attrtype catalogued var dewey_number: string section_name: string shelf_number: int end catalogued
and then objtype library_book isa catalogued, loanable book end library_book
In this way these methods can be associated with other catalogued items available in a library, such as compact disks, sheet music, theses, etc. This approach is advantageous when compared with languages which provide only single inheritance, because in such languages the methods of generalized attributes cannot be separately defined and therefore cannot be easily combined with quite different object types. For example the methods of loanable would have to be defined separately to allow hire_car to inherit from car and to allow library_book to inherit from book. Furthermore it is then impossible to use loanable polymorphically.
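A partial, much later Java analogue of attribute types is an interface with default methods (a feature added to Java well after this paper): the "adjective" carries reusable code and can qualify unrelated object types, although the state it needs must still be supplied by each implementing class. The sketch below is hedged accordingly; Loanable, Book, LoanableBook and HireCar are illustrative names only.

import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

interface Loanable {
    LocalDate dueDate();                       // accessor each qualified type must supply
    default long daysOverdue(LocalDate today) {
        return Math.max(0, ChronoUnit.DAYS.between(dueDate(), today));
    }
}

class Book { /* title, author, isbn ... */ }

class LoanableBook extends Book implements Loanable {
    private final LocalDate due = LocalDate.now().plusDays(21);
    public LocalDate dueDate() { return due; }
}

class HireCar implements Loanable {            // the same "adjective" qualifying a different "noun"
    private final LocalDate due = LocalDate.now().plusDays(3);
    public LocalDate dueDate() { return due; }
}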
3.2 Rules for Attribute Types
Attribute types can be viewed as a special kind of multiple inheritance which is easy to understand and to use. Although languages which support multiple inheritance can achieve a similar effect by defining attributes as objects, this leads to other well known problems. How does L1 avoid the usual problems of multiple inheritance? To understand this we consider the rules which distinguish attribute types from object types.
First, object types must, but attributes may not, include a constructor; consequently only object types can be instantiated. Second, an object type can inherit from at most one object type and from zero or more attribute types. Third, both attribute types and object types may have several implementations, but only implementations of attribute types may include bracket routines. The latter are discussed in the next section. One of the problems which occurs in relation to multiple inheritance is name clashes. L1 solves the problem of the same names occurring in different types by requiring a method name to be qualified by its type name in ambiguous cases. It avoids the "diamond" name clash problem (which occurs when a new type inherits more than once from a supertype) by not allowing an object type to inherit from more than one object type. Another standard problem is with constructors: in L1 an object type can inherit from only one object type, and attribute types do not have constructors. It is interesting that Java distinguishes between classes (which as in other objectoriented languages combine type and implementation definitions) and interfaces. The latter are effectively type definitions, but although these can be implemented in classes, the code of such classes cannot be multiply inherited by other classes. Thus Java interfaces can be reused in an ‘adjectival’ manner, but with the disadvantage that only the type but not the code can be multiply inherited – and therefore the code not easily reused. Java interfaces in fact reinforce the usefulness of the idea of attribute types, in that they are frequently used to represent adjectives such as "cloneable", "runnable", "synchronized" and the like. It is therefore interesting to consider why for example Java interfaces may not have separate implementations which can easily be reused in a general way. The answer appears to be that the available language techniques do not readily provide a mechanism which allows the code of attributes to be kept separate from the code of the classes which they qualify. For example, how can the code of an attribute such as "synchronized" be separated from the code of the classes which it qualifies? 3.3
Bracket Routines for Attributes
One possible approach to this question has been called "aspect oriented programming" [17], which proposes the use of "aspect weavers" to thread different code together; however, there is no universal mechanism called an aspect weaver. L1 also does not pretend to provide a universal aspect weaver capable of solving all potential problems which can arise when "adjectival" attributes are kept orthogonal to object classes. However it does provide a mechanism, generalized bracket routines, referred to in the rest of this section simply as "bracket routines", which can be effectively used in many situations. These are quite different from the specialized bracket routines described in section 2. Bracket routines in L1 can be included in an implementation of an attribute type (but not an object type). The basic idea is that in addition to the normal visible methods associated with an attribute, additional routines may be provided which are implicitly invoked when a client calls a method of the object which is qualified by the attribute. Such routines may include a statement body, which indicates the point(s) at which the explicitly invoked object method is actually invoked. When this method
exits, control is first returned to the bracket routine. The idea can be simply illustrated in the form of an attribute type mutually_exclusive and an implementation of it. attrtype mutually_exclusive end mutually_exclusive
Notice that this type definition is unusual in that it has no explicit methods.
impl mutex for mutually_exclusive
  var² database: semaphore
  bracket op, enq      -- defines the bracket code for operations
  begin                -- and enquiries of synchronized objects.
    database.P         -- claims exclusion.
    body               -- the body of the object's op or enq.
    database.V         -- releases exclusion.
  end
  bracket constr
  begin
    body               -- the body of the object's constructor.
    database:= 1       -- initialization of the exclusion semaphore.
  end
end mutex
L1 uses the distinction between the different method categories op, enq and constr to determine which object methods³ are bracketed by which bracket routines. This allows a further attribute read_write_synchronized to be defined and implemented as follows:
attrtype read_write_synchronized
end read_write_synchronized

impl Courtois_et_al for read_write_synchronized reuse mutex
  var reader_sem: semaphore
      read_count: int
  bracket enq          -- (re)defines the bracket code for enquiries of
  begin                -- synchronized objects
    reader_sem.P
    read_count:= read_count + 1
    if read_count = 1 then database.P endif
    reader_sem.V
    body
    reader_sem.P
    read_count:= read_count - 1
    if read_count = 0 then database.V endif
    reader_sem.V
  end
  bracket constr       -- (re)defines the constructor bracket code
  begin
    body               -- the body of the object's constructor
    read_count:= 0
    reader_sem:= 1
    database:= 1
  end
end Courtois_et_al
² This defines an instance field in an implementation as distinct from a pair of methods from a type definition.
³ As was indicated in Section 2.1, a var in a type definition is viewed as an op and an enq, and is treated as such for bracketing purposes.
This solution is based on [6]. It illustrates how an implementation of a type can reuse an implementation from the same or (in this case) a different type⁴. (In this example the code of bracket op is reused but the implementations of bracket enq and bracket constr are redefined. Explicit methods of an object or attribute can be similarly reused or redefined.) We now look at an example of an attribute type which has both its own independent methods and bracket routines which are invoked in association with calls to the methods of qualified objects.
attrtype modification_expiring
  op set_expiry_date (in ex_date: date)
  enq get_expiry_date: date
end modification_expiring
The intention here is to define an attribute type which does not allow its qualified objects to be modified after a defined expiry date (which can be changed). An implementation:
impl no_mod for modification_expiring
  var expiry_date: date
  op set_expiry_date (in ex_date: date)
  begin expiry_date:= ex_date end
  enq get_ex_date (out ex_date: date)
  begin ex_date:= expiry_date end
  bracket op
  begin
    if system.today ≤ expiry_date then body endif
  end
  bracket constr
  begin
    body
    expiry_date:= system.today
  end
end no_mod
As in all examples in this paper error handling has been ignored; in a real system it would possibly be appropriate for example to raise an exception in an else clause of the conditional statement. From this example we see that bracket routines have the potential to restrict calls to a qualified object, either by omitting a body statement, or as in this case by including it in a conditional statement. This opens up many possibilities for using brackets as a protection mechanism.
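To show concretely what the Courtois_et_al bracket routines do, the following hedged Java sketch writes the same reader-writer discipline out by hand around the methods of one invented class (SharedCatalogue and its methods are illustrative only); in L1 the qualifying attribute supplies this code once and it brackets any qualified object's operations and enquiries automatically.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Semaphore;

class SharedCatalogue {
    private final Semaphore database = new Semaphore(1);   // exclusion between writers and readers
    private final Semaphore readerSem = new Semaphore(1);  // protects readCount
    private int readCount = 0;
    private final Map<String, String> titlesByIsbn = new HashMap<>();

    // corresponds to "bracket op": writers take exclusive access
    void put(String isbn, String title) throws InterruptedException {
        database.acquire();
        try { titlesByIsbn.put(isbn, title); } finally { database.release(); }
    }

    // corresponds to "bracket enq": readers exclude writers but not each other
    String get(String isbn) throws InterruptedException {
        readerSem.acquire();
        if (++readCount == 1) database.acquire();
        readerSem.release();
        try {
            return titlesByIsbn.get(isbn);
        } finally {
            readerSem.acquire();
            if (--readCount == 0) database.release();
            readerSem.release();
        }
    }
}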
4 Generalized and Specialized Brackets
Although L1 is not the first object-oriented language to include a bracket mechanism, it is the first to support both generalized and specialized bracket mechanisms.
⁴ It can be argued that code reuse which gives access to the internal variables of another module is unattractive, as it violates the information-hiding principle [27] and hinders verification reuse [28].
Generalized bracket routines are clearly suitable for applications such as synchronization, protection, monitoring, etc. where the bracket routine(s) can be programmed entirely independently of the objects which they qualify. This means for example that they have no access to the parameters of the methods which they bracket. On the other hand specialized bracket routines are provided in implementations of specific types. They can therefore access the parameters of the bracketed method and can thus be useful in quite different circumstances from generalized bracket routines. For example specialized brackets can be used in a transaction processing system to write the parameters of each transaction to a log file before the transaction is processed in the body of the routine. They can be used to carry out consistency checks, to implement pre and post routines, etc. This technique also encourages the separation of aspects and provides a method for "weaving" them together. Notice that with generalized brackets it is generally the brackets themselves which are reusable, whereas in the case of specialized brackets it is the bracketed code which is reused.
5 Levels of Software Reuse
The driving motivation for the L1 design was not to make code more reusable as such, but rather to define clean orthogonal structures which simplify modeling of the real world and thus simplify program design, implementation and maintenance. But inevitably this leads to better modularity and to more possibilities for the reuse of software in an object-oriented environment. We now review some of the different possibilities for reuse of code. (i) The first major distinction between L1 and almost all other object-oriented languages is the separation of types from implementations. This means that types can easily be reused independently of implementations. This is not unimportant, because type definitions (together with appropriate comments describing the semantic intention) can be viewed as semi-formal specifications, and the reuse of specifications encourages software standardization, which means that separately developed software units can often be reused in combination. (ii) L1 takes the further step of allowing more than one implementation of the same type to exist (and even be used together in a single program). This makes it possible for one implementation of a type to reuse another implementation of the same type, without straining the type system. (iii) An implementation of a type can reuse (parts of) more than one other implementation of the same type. (iv) Reused code need not be an implementation of the same type nor of a supertype. This allows flexible code reuse in cases involving code not related by the type hierarchy (cf. mutually_exclusive and read_write_ synchronized), or where subclassing and subtyping run contrary to each other (cf. d_e_queue and queue) (v) The reuse mechanism is complemented by a "hat" mechanism, which allows the code of a different implementation to be bracketed by new code in a different implementation, thus making the bracketed code extensible and reusable. This is
considerably more flexible than the standard super technique, as it is orthogonal to the type system. (vi) The second major difference between L1 and other object-oriented programming languages is its distinction between object types and attribute types. Here the potential for reuse of code is very high. We saw how the attribute loanable could easily be combined with quite different object types, such as book and car. It is easy to envisage how such general purpose attributes can be reused in combination with very many kinds of objects both within a single system (e.g. the library might also have loanable CDs, theses, software packages) and in quite different systems (car hire firms, catering firms, etc.). (vii) Yet a further level of reusability of software is made possible by the idea of bracket routines for attribute types. A particularly good example of this is in the area of synchronization. We have illustrated two simple synchronization approaches (mutual exclusion, reader-writer synchronization), which were not only able to reuse code, but which have a wide range of application in many software systems. This list could easily be extended to include more complex synchronization mechanisms such as Hoare's monitor, path expressions and the like. Normally such code cannot be packed into separate modules and reused at will with any object type, so here the potential for code reuse is very high. All that is needed to synchronize any object is a type definition such as: objtype shareable_library_book isa read_write_synchronized library_book end shareable_library_book
This opens up the possibility of reusing generalized "adjectival" code in a way which is not otherwise possible in object-oriented programming languages.
6 Comparison with Other Work
The distinction between definitions of types and implementations can be traced back to ideas such as information-hiding [22, 23] and specification techniques for abstract data types [15]. More recent work on specification and verification (as found in e.g. OBJ [11], Resolve [26], GenVoca [1] and Lileanna [12]) emphasizes the advantages of this separation in conjunction with static parameterization as a technique which to some degree can be considered as equivalent to inheritance in object oriented languages. In this way, for example, a supertype used parametrically can achieve a similar effect to the "hat" mechanism mentioned in section 2. In fact appropriate uses of templates, which have their roots in Ada, can achieve equivalent results to our specialized brackets, for example as a means of checking pre- and post-conditions [8]. In contrast, our generalized brackets appear to be more flexible, because with these, the names of the bracketed operations need not be known. On the other hand, generalized brackets have a more limited application, being of relevance mainly for ‘system’ purposes, such as synchronization, monitoring and protection. In the object-based and object-oriented traditions the distinction between definitions of types and implementations appears in the languages Emerald [3, 4] and Theta [20]). However, in contrast with L1 and Tau, Theta does not support constructors in type definitions and it appears not to have a mechanism for selecting
implementations. Although the language Sather [29] does not have separate types and implementations as such, it does have separate subtype and code-reuse hierarchies. However, it does not support implementations which are not also types, and, in contrast with L1 and Tau, reused code must be recompiled. As mentioned earlier, a more limited form of specialized bracketing can be achieved in Smalltalk-80 [13] (and in almost all other object-oriented languages) by redefining the methods in a subclass and calling the original methods from within the redefined methods via the super construct. As discussed in section 3.2 languages such as Eiffel [21] with multiple inheritance can, in a rather less attractive way, model attribute types, but there is nothing comparable to generalized bracket routines in such languages. The language Beta [18] is close to the L1 approach for bracket routines. The methods of a class definition may include the special statement inner (similar to body). This results in the same method in a subclass being bracketed by the code of the superclass method. But this mechanism is not very useful for general attributes such as modification_expiring or mutually_exclusive. A class mutually_exclusive would need to know exactly which methods occur in its subclass shareable_library_book in order to bracket them and would therefore be of no use in bracketing shareable_library_cd. Even worse, since Beta supports only single inheritance, shareable_library_book could either inherit from mutually_exclusive or from library_book but not both. Mixins can be seen as a generalization of both the super and the inner constructs. The language CLOS [7] allows mixins as a programming technique without supporting them as a special language construct, but a modification of Modula-3 to support mixins explicitly has also been proposed [5]. A mixin is a class-like modifier which can operate on a class to produce a subclass in a way similar to L1 attribute types. So, for example, a mixin mutually_exclusive can be combined with a class library_book to create a new class shareable_library_book. Bracketing can be achieved by using the 'call-next-method' statement (or super in the Modula-3 proposal) in the code of the mixin methods. As with Beta, however, the names of the methods to be bracketed must be known in the mixin. This again prevents it from being used as the kind of general attribute we intend. Furthermore, it is proposed in [5] that mixins entirely replace classes and that 'classes are viewed as degenerate mixins'. This, together with the fact that mixins do not support a separation of type and implementation, impairs their use for modeling and software engineering. As was discussed in section 3.2 interface types of the language Java [14] are often used as a kind of attribute type. But since independent implementations of adjectival interfaces cannot be combined with the implementations of objects via multiple inheritance, this helps only on the type level and not for code reuse. In [24] encapsulators are described as a novel paradigm for Smalltalk-80 programming. The aim is to define general encapsulating objects (such as a monitor) which can provide pre- and post-actions when a method of the encapsulated object is invoked. This is similar to L1 generalized bracket routines but is based on the assumption that the encapsulator can trap any message it receives at run-time and pass this on to the encapsulated object. This is feasible only for a dynamically typed system. 
The L1 mechanism can be seen as a way of achieving the same result in a statically type-safe way via a limited form of multiple inheritance. The applications of encapsulators are also more limited than those of bracket routines, since there is no way for them to distinguish between constructors, enquiries and operations.
The similarity between our proposal and the approach of 'aspect-oriented' programming [17] was mentioned in section 3.3. Both the specialized and the generalized bracket routines can be viewed as 'aspect weaver' techniques for some aspects of software system design. Finally, the metaphor of adjectives as a design guide has also been emphasized recently in [10].
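The mixin-style bracketing discussed above can be sketched in a few lines of Python (a language not discussed in the paper); the sketch is ours, and the class and method names (LibraryBook, borrow) are illustrative only. It shows how a mixin brackets a method via the call-next-method style (super() in Python) and why the mixin must name the bracketed method in advance, which is exactly the limitation noted in the text.

import threading

class LibraryBook:
    """Hypothetical base class whose method we want to bracket."""
    def __init__(self, title):
        self.title = title
        self.on_loan = False

    def borrow(self):
        if self.on_loan:
            raise RuntimeError("already on loan")
        self.on_loan = True

class MutuallyExclusive:
    """Mixin that brackets borrow() with lock acquisition and release.
    The method name 'borrow' is hard-wired here: the mixin cannot
    bracket arbitrary methods of an unknown subclass, which is the
    limitation discussed in the text."""
    def __init__(self, *args, **kwargs):
        self._lock = threading.Lock()
        super().__init__(*args, **kwargs)

    def borrow(self):
        with self._lock:        # pre-action (entry to the bracket)
            super().borrow()    # 'call-next-method': the bracketed code
        # post-action (exit from the bracket) would go here

class ShareableLibraryBook(MutuallyExclusive, LibraryBook):
    """Combining the mixin with the class yields the new subclass."""
    pass

book = ShareableLibraryBook("Proceedings of ICSR-6")
book.borrow()

Running the fragment acquires the lock around the inherited borrow, mirroring the pre/post bracketing achieved with call-next-method in CLOS or super in the Modula-3 proposal.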
7 Conclusion
We have attempted to show how the reuse of software can be considerably enhanced by making two unusual distinctions in object oriented programming. Whereas in conventional object oriented programming the idea of a class unites the concepts of type and implementation, these concepts are kept separate and orthogonal in the experimental languages L1 and Tau. The advantages of making this separation for the reuse of code are described in section 2 and summarized in section 5, points (i) to (v). An unusual technique for achieving a limited form of multiple inheritance, based on a distinction between object types (which can be viewed as corresponding to nouns in natural language) and attribute types (corresponding to adjectives), and its implications for code reuse, were described in section 3 and summarized in section 5, points (vi) and (vii). Separating type and implementation definitions led to a technique for providing specialized bracket routines, and separating object types and attribute types led to a technique for providing generalized bracket routines. The differences between these are summarized in section 4 and their relevance for code reuse is mentioned in section 5, points (v) and (vii). It is particularly interesting to note that with generalized brackets it is generally the brackets themselves which are reused, whereas in the case of specialized brackets it is the bracketed code which is reused. Finally, in section 6 we briefly reviewed work related to that described in this paper. It appears that the potential for reusing code when writing object oriented programs based on the proposed techniques is considerably higher than is possible in current object oriented environments.
References

1. D. Batory, J. Singhal, J. Thomas, S. Dasari, B. Geraci and M. Sirkin, "The GenVoca Model of Software-System Generators", IEEE Software, pp. 89-94, 1994.
2. G. Baumgartner and V. F. Russo, "Signatures: A Language Extension for Improving Type Abstraction and Subtype Polymorphism in C++", Software - Practice and Experience, 25, 8, pp. 863-889, 1995.
3. A. Black, N. Hutchinson, E. Jul and H. Levy, "Object Structure in the Emerald System", in Proceedings of OOPSLA '86, Portland, Oregon, Vol. 21, ACM SIGPLAN Notices, 1986.
4. A. Black, N. Hutchinson, E. Jul, H. Levy and L. Carter, "Distribution and Abstract Types in Emerald", IEEE Transactions on Software Engineering, SE-13, 1, pp. 65-76, 1987.
5. G. Bracha and W. Cook, "Mixin-based Inheritance", in Proceedings of ECOOP/OOPSLA '90, pp. 303-311, 1990.
6. P. J. Courtois, F. Heymans and D. L. Parnas, "Concurrent Control with Readers and Writers", Communications of the ACM, 14, 10, pp. 667-668, 1971.
7. L. DeMichiel and R. Gabriel, "The Common Lisp Object System: An Overview", in Proceedings of ECOOP '87, pp. 151-170, 1987.
8. S. H. Edwards, G. Shakir, M. Sitaraman, B. W. Weide and J. E. Hollingsworth, "A Framework for Detecting Interface Violations in Component-Based Software", in Proceedings of the 5th International Conference on Software Reuse, pp. 46-55, IEEE, 1998.
9. M. Evered, J. L. Keedy, A. Schmolitzky and G. Menger, "How Well Do Inheritance Mechanisms Support Inheritance Concepts?", in Proceedings of the Joint Modular Languages Conference (JMLC) '97, Linz, Austria, Springer-Verlag, Lecture Notes in Computer Science 1204, 1997.
10. M. C. Feathers, "Factoring Class Capabilities with Adjectives", Journal of Object Oriented Programming, 12, 1, pp. 28-34, 1999.
11. J. A. Goguen, "Parameterized Programming", IEEE Transactions on Software Engineering, SE-10, 5, pp. 528-543, 1984.
12. J. A. Goguen and W. Tracz, "An Implementation-Oriented Semantics for Module Composition", in Foundations of Component-Based Systems, ed. G. Leavens and M. Sitaraman, Cambridge, 2000.
13. A. Goldberg and D. Robson, Smalltalk-80: The Language and its Implementation, Reading, Mass.: Addison-Wesley, 1983.
14. J. Gosling, B. Joy and G. Steele, The Java Language Specification, Reading, MA: Addison-Wesley, 1996.
15. J. Guttag and J. J. Horning, "The Algebraic Specification of Abstract Data Types", Acta Informatica, 10, 1, pp. 27ff., 1978.
16. J. L. Keedy, M. Evered, A. Schmolitzky and G. Menger, "Attribute Types and Bracket Implementations", in Proceedings of the 25th International Conference on Technology of Object Oriented Systems, TOOLS 25, Melbourne, pp. 325-337, 1997.
17. G. Kiczales, J. Lamping, A. Mendhekar, C. Maeda, C. Lopes, J.-M. Loingtier and J. Irwin, "Aspect-Oriented Programming", in Proceedings of ECOOP '97, pp. 220-242, 1997.
18. B. B. Kristensen, O. L. Madsen, B. Moller-Pedersen and K. Nygaard, "The Beta Programming Language", in Research Directions in Object-Oriented Programming, MIT Press, pp. 7-48, 1987.
19. G. T. Leavens, "Modular Specification and Verification of Object-Oriented Programs", IEEE Software, July, pp. 72-80, 1991.
20. B. Liskov, D. Curtis, M. Day, S. Ghemawat, R. Gruber, P. Johnson and A. C. Myers, "Theta Reference Manual", Report Number 88, MIT Laboratory for Computer Science, Cambridge, MA, 1994.
21. B. Meyer, Eiffel: the Language, New York: Prentice-Hall, 1992.
22. D. L. Parnas, "On the Criteria to be Used in Decomposing Systems into Modules", Comm. ACM, 15, 12, pp. 1053-1058, 1972.
23. D. L. Parnas, "A Technique for Module Specification with Examples", Comm. ACM, 15, 5, pp. 330-336, 1972.
24. G. A. Pascoe, "Encapsulators: A New Software Paradigm in Smalltalk-80", in Proceedings of OOPSLA '86, pp. 341-346, 1986.
25. A. Schmolitzky, "Ein Modell zur Trennung von Vererbung und Typabstraktion in objektorientierten Sprachen [A Model for Separating Inheritance and Type Abstraction in Object Oriented Languages]", Dr. rer. nat. [Ph.D.] Thesis, University of Ulm, 1999.
26. M. Sitaraman and B. Weide, "Component-Based Software Using Resolve", ACM SIGSOFT Software Engineering Notes, 19, 4, pp. 21-67, 1994.
27. A. Snyder, "Encapsulation and Inheritance in Object-Oriented Programming Languages", in Proceedings of OOPSLA '86, Portland, Oregon, Vol. 21, ACM SIGPLAN Notices, 1986.
28. N. Soundarajan and S. Fridella, "Inheriting and Modifying Behavior", in Proceedings of the 23rd International Conference on Technology of Object Oriented Systems, TOOLS 23, pp. 148-162, IEEE Computer Society Press, 1998.
29. C. Szyperski, S. Omohundro and S. Murer, "Engineering a Programming Language: The Type and Class System of Sather", in Programming Languages and System Architectures, ed. Jurg Gutknecht, Springer-Verlag, pp. 208-227, 1993.
Compatibility Elements in System Composition

Giancarlo Succi (1), Paolo Predonzani (2), and Tullio Vernazza (2)

(1) Department of Electrical and Computer Engineering, The University of Alberta, 238 Civil / Electrical Building, Edmonton, AB, Canada T6G 2G7
[email protected]
(2) Dipartimento di Informatica, Sistemistica e Telematica, Università di Genova, via Opera Pia 13, I-16145 Genova, Italy
{predo,tullio}@dist.unige.it
Abstract. Composing a system requires compatibility between its components. In today's software market the compatibility relations between components are complex: there is a variety of compatibility elements, which can be proprietary or standardized. Moreover, network externalities give higher value to compatible components, while transition costs impair the migration between incompatible products. The paper analyzes the technical and economic aspects of compatibility in system composition. It presents the different perspectives of system builders and component producers with respect to compatibility in the reference domain of email systems.
1 Introduction
Compatibility is a key factor in the composition of software systems. When several components are put together in a system, their interfaces need to be compatible. A component is not reusable if it is not compatible with the system into which it is integrated. The problem is more acute when components are sold as products made by different firms: not all firms agree on the forms of compatibility to support. The history of the software market shows that compatibility is an important factor in competition.

Compatibility affects the products within a domain. A domain contains the products that can be composed to build systems supporting the domain's functionalities. A domain also typically contains more than one product performing any given functionality: these products are alternative, i.e., they are substitutes for each other. Thus, a domain is the pool of products from which, through composition and substitution, users can build systems. Compatibility defines the rules for composition and substitution, and is implemented through compatibility elements, such as data formats, APIs, protocols, and user interfaces. Some compatibility elements are proprietary; others are standardized.

System builders choose the compatibility elements and the components complying with those compatibility elements. In this choice, they determine present and future compatibility and, consequently, the rules of composition. System builders need to maximize reuse and minimize the risks of migrating to incompatible products or the costs of such a migration.
Component producers face the problem of developing products that will be successful in the market. They need to consider the compatibility needs of system builders and the attitudes of the other producers in the domain.

The paper presents the rationale and the mechanisms of compatibility in system composition. It also shows how component producers can approach the problems of compatibility. To provide evidence for the concepts, compatibility in a real domain is analyzed: the reference domain is that of email systems. The perspectives of both the system builder and the component producer are presented.

This paper is structured as follows. Section 2 presents previous work in the area of compatibility. Section 3 discusses system composition and the problems introduced by compatibility. Section 4 sets the framework for the following sections, introducing the reference domain of email systems. Section 5 presents the system builder's perspective and section 6 the component producer's perspective. Section 7 draws the conclusions.
2 State of the Art
There are broadly two trends in approaching the subject of compatibility. One trend addresses the technical issues of verifying and ensuring compatibility, with the purpose of easing the composition of systems. The other trend focuses on the market relations between products and firms that derive from compatibility. The literature presented here is representative of both trends.

Yellin and Strom discuss the role of compatibility in the composition of object-oriented systems [1][2]. They show that compatibility is typically defined as compliance with object-oriented interfaces. They point out that this compliance should be enforced through precise specifications of further constraints, which they call "protocols". They also propose a technique for the verification of such protocols. Finally, they show how incompatibility can be overcome through the use of software adapters. Within the same object-oriented approach, Weber distinguishes two kinds of compatibility: conformance and imitation. The former enables the composition of systems; the latter enables component reuse [3].

Gandal [4] and Brynjolfsson and Kemerer [5] have analyzed the implications of compatibility and standards in the market for spreadsheet software. Their analysis focuses on the value of software products and is based on the theory of hedonic models. Hedonic models decompose value hierarchically into aspects and determine how much each aspect contributes to the total value of the product. The cited works attribute a significant part of the value to network externalities. Network externalities are the effects deriving from the size of the pool of users using a given product; this pool is the installed base of the product. Network externalities are evident as an increased value of the product as perceived by its users. As far as network externalities are concerned, a product X with ten thousand users is more valuable than a product Y with one thousand users: users of X have larger benefits than users of Y, including the sharing of information and experience with the larger pool of users. Farrell and Saloner [6][7][8], Economides [9], and Shapiro and Varian [10] provide an in-depth discussion of network externalities.
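The comparison of products X and Y can be stated compactly with a simple illustrative model (ours, not one taken from the cited works): if each additional user contributes roughly the same extra benefit b to every existing user, the value a user derives from a product with installed base n can be written as v(n) = a + b·n, where a is the stand-alone value of the product. Under this reading, each user of product X (n = 10,000) enjoys a value of about a + 10,000·b, against a + 1,000·b for users of product Y, which is why the larger installed base is the more attractive one even when the stand-alone functionality is identical.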
The presence of network externalities also determines the different approaches that entrants (firms entering the market) and incumbents (firms already in the market) have towards compatibility. Choi discusses network externalities and the problems of obsolescence in compatibility elements [11].

A product line is a group of products produced by a firm. Compatibility in product lines is a critical issue in modern software production. Poulin [12] and Simos [13] discuss product lines from a domain analysis perspective, highlighting the reuse potential across the product line. Baumol et al. complete the product line analysis from an economic perspective [14]. They focus on economies of scale and scope, and on equilibrium in markets. They consider different cases of monopoly, oligopoly, and competition, and analyze under which conditions and with what consequences new firms can enter the market.

Samuelson highlights the relevance of compatibility in the software market from a legal perspective [15]. She discusses several lawsuits between firms over compatibility issues. She points out the legal differences between user-interface compatibility and internal compatibility, presenting the possibilities of protecting compatibility elements with copyright and patents. The paper provides practical examples of de facto standards in software. The work emphasizes that compatibility and incompatibility are market strengths and shows how software firms are trying to gain control over them.
3 Compatibility and Composition of Systems
Software systems are usually the result of composition from components. The size of components can vary from basic building blocks to large subsystems. In most domains there is competition in the production of components, which are sold as products in the market. Products can be complementary or alternative. Systems are assembled from complementary products, chosen from among the many alternative ones.

Within a domain, compatibility is a key factor for composition. Compatibility can involve data formats, APIs, protocols, user interfaces, etc. It is more practically discussed in terms of compatibility elements. A compatibility element is a specific form of compatibility: the RTF format, the Netscape Plug-in API, the NNTP protocol, and the Windows look-and-feel are examples of compatibility elements. If two components are compatible, it is very likely that they can be parts of a common system. Each domain has a set of compatibility elements that define most of its possible forms of composition.

Compatibility is a matter of competition between software producers. Any compatibility element gives advantages to those who adopt it and disadvantages to those who do not. A proprietary compatibility element is a compatibility element proposed by a producer or by an alliance of producers. Competition occurs when there are many proprietary compatibility elements that perform a similar task: for instance, the many existing file formats for storing text documents are in competition. A standard is a compatibility element that has been adopted by a large number of producers, products and users. Some standards – de jure standards – are formalized by standardization organizations. Other standards – de facto standards – are the recognition of a popular proprietary compatibility element.
In new domains there are usually no standards, and many proprietary compatibility elements compete for the leading position. In stable domains, on the other hand, standards emerge and very few new compatibility elements are proposed. Economides shows that the promotion of a compatibility element to a standard is possible through subsidization [9]. With subsidization, a producer invites competitors to adopt its technology, considered as a compatibility element. Subsidization occurs, e.g., by licensing technology at no cost. Subsidizing competitors increases the expectations of sales and the network externalities of the technology: users buy more and are willing to pay more. In this scenario, both the firm promoting the technology and its competitors have advantages. However, the firm promoting the technology has the advantage of moving first, and is in a leading position when the technology is promoted to a standard.

The choice of compatibility elements affects the evolution of systems. Once a system adopts some compatibility elements, it is bound to them for a possibly long time. It is possible to change to a different compatibility element, but the costs of such a change – called transition costs – can be high. First, there is a need to update or replace the components that complied with the former compatibility element. Second, especially in the case of data formats, there is a need to port legacy data to the new compatibility element.

Converters are products that bridge across incompatible compatibility elements. Converters are useful for any type of compatibility. Converters of data formats are widespread: for instance, there are converters for the mailboxes stored by different email clients. Converters of APIs exist, for instance, to overcome incompatibilities between different versions of an API. Protocol converters are frequent in network bridges. Converters of user interfaces are sometimes used to ease the porting of products across different platforms. Converters prove two concepts:

• Transition costs exist and converters contribute to decreasing them. Suppose that a user of system A wants to switch to an incompatible system B. Rather than bearing the whole transition cost, the user may be better off buying a converter to ease the transition. Theoretically the price of the converter can be slightly less than the transition costs. In practice, no converter solves all the problems of transition: e.g., converters usually do not address the costs of re-training.

• Larger networks of users have higher value. Converters connect incompatible networks, producing an overall larger network. Note that this case is different from the previous one regarding transition costs. Here, a user of system A is not willing to switch to an incompatible system B. Rather, the user stays with A but wants to communicate with users of B; for users of B, the situation is symmetrical. The communication between the two networks would give tangible value to their users. A converter makes the communication possible and can claim a price as high as the added value it produces.

Converters solve many compatibility issues. When a system builder faces an incompatibility between products, he/she can overcome it through a converter. When the benefits make it profitable, the system builder can even produce the converter if none is commercially available.
This is frequent, for example, when legacy systems need to be integrated with newer systems or products, where incompatibility issues usually arise.
In these cases the only solution is to bridge the systems through converters. Practical examples are the numerous API converters interposed between legacy databases and more recent systems (OO systems, web services, etc.).
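To make the idea of a converter concrete, the following sketch (ours, not the paper's) shows a minimal data-format converter in Python. The legacy record layout and the field names are invented for illustration; only the target vCard keywords (BEGIN:VCARD, FN, EMAIL, TEL) correspond to a real format.

def legacy_to_vcard(legacy_record):
    """Convert a hypothetical 'Name|Email|Phone' legacy address-book
    entry into a minimal vCard, bridging two incompatible formats."""
    name, email, phone = legacy_record.split("|")
    return "\n".join([
        "BEGIN:VCARD",
        "VERSION:3.0",
        "FN:" + name,
        "EMAIL:" + email,
        "TEL:" + phone,
        "END:VCARD",
    ])

# A converter lets data created under one compatibility element be
# reused under another, lowering the transition cost discussed above.
print(legacy_to_vcard("Ada Lovelace|ada@example.org|+44 20 0000 0000"))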
4 Perspectives in Compatibility for System Composition
To better understand the attitudes the different parties may have, we use the domain of email systems as a reference for the discussion. The parties considered are the system builders and the component producers. In this domain, the purpose of the system builder is to assemble a system supporting the functionalities of a corporate email system: management of employees' accounts and email sending and delivery. The purpose of the component producer is to build products that fit into this system. There are a few basic assumptions about the corporate environment, such as the availability of an intranet and of Internet access, and the presence of a moderately high number of employees. According to current technology, the system should comprise at least two parts: a mail client and a mail server. At the time of this analysis the domain includes more than one hundred products available on the market. For this analysis we consider a subset of 12 email products (mail servers and mail clients), summarized in Table 1.

Table 1. List of considered products
Product                       Producer
Sendmail                      Sendmail, Inc.
Exchange Server               Microsoft Corporation
Outlook                       Microsoft Corporation
Sun Internet Mail Server      Sun Microsystems, Inc.
Domino Mail Server            Lotus Development Corporation
Notes                         Lotus Development Corporation
GroupWise                     Novell, Inc.
Infinite InterChange          Service Strategies Inc.
D-Mail Mail Server            NetWin, Ltd.
Eudora Internet Mail Server   Qualcomm, Inc.
Eudora Pro                    Qualcomm, Inc.
Communicator                  Netscape Communications Corporation
The domain comprises several compatibility elements. Most are related to email standards and protocols; others are related to the environment in which the products are used. A first group of compatibility elements includes:

• Connection protocols: SMTP (Simple Mail Transfer Protocol), ESMTP (Extended Simple Mail Transfer Protocol), POP3 (Post Office Protocol), IMAP4 (Internet Message Access Protocol), UUCP (Unix to Unix Copy Program), HTTP (HyperText Transfer Protocol), and X.400 (ITU-TS messaging standard).

• Directory access protocols: LDAP (Lightweight Directory Access Protocol).
• Message content formats: RFC822 (email format standard), MIME (Multipurpose Internet Mail Extensions), S/MIME (Secure MIME), HTML (HyperText Markup Language), HDML (Handheld Device Markup Language), Java, JavaScript, and OLE (Object Linking and Embedding).

• Security protocols and formats: X.509 (certificate standard), SSL (Secure Sockets Layer), and PGP (Pretty Good Privacy).

Table 2 summarizes the compatibility relations between the products and the first group of compatibility elements.
Table 2. First group of compatibility elements and relations with products
[Matrix of the twelve products of Table 1 against the first group of compatibility elements (connection protocols, directory access, message content formats, security protocols and formats), marking which product supports which element.]
The second group of compatibility elements comprises:

• Platforms: MS Win 3.x, MS Win32 (Win 95/98/NT), Sun Solaris, SunOS, Linux, HP-UX, Digital Unix, IBM AIX, MacOS.

• Administration user interfaces: command line, graphical, WWW-like, and SNMP (Simple Network Management Protocol).
• APIs (mainly for extension and customization): MAPI (Messaging Application Program Interface) and the proprietary API of the product.

• Address book formats and protocols: vCard (IMC standard) and Ph (Phonebook protocol).

• Notification protocols: DSN (Delivery Status Notification).

Table 3 summarizes the compatibility relations between the products and the second group of compatibility elements.
Table 3. Second group of compatibility elements and relations with products
[Matrix of the twelve products of Table 1 against the second group of compatibility elements (platforms, administration user interfaces, APIs, address book formats and protocols, notification protocols), marking which product supports which element.]
The general information provided so far is the basis for the discussion on the system builder’s and component producer’s perspectives.
5 The System Builder's Perspective
The system builder can be the final user of the system or another party that builds and delivers the system to the user. In both cases, the system builder should ensure interoperability between users and compatibility between products. Table 2 shows that all the products are compatible with SMTP, POP3, RFC822, and MIME; the compatibility applies to both clients and servers. This shows that widely accepted standards exist for the connection protocol and the message content. In domains where widely accepted standards exist, these usually cover the basic functionalities.

In addition to the widely accepted standards there are many compatibility elements that are not widely accepted. The reasons to use them can be the following:

• The compatibility element can support a new feature. This is the case, e.g., with HDML: while this compatibility element is not widespread, it is required to support email on handheld devices.

• The compatibility element gives network externalities with other products already in use. The choice of the platform, in terms of hardware and operating system, is a typical example. Unless a platform is preferable for some technical reason, system builders reuse previous platforms and previous experience.

The adoption of a compatibility element should consider the internal needs, the external needs and the future perspectives. In the email system domain, the internal needs are the interoperability requirements of the system within the corporate environment; a basic need is the compatibility between the client and the server. The external needs are the interoperability requirements with actors outside the corporate environment: for instance, if the firm communicates with its clients by email, the system must support any compatibility element in use by those clients. The future perspectives are the needs that will arise in the future. These depend on the scalability and adaptability of the system. "Open" systems generally scale and adapt easily. The availability of public and standard APIs also allows improvements of the system by various parties.

When no compatibility element is clearly dominant, the situation is uncertain and the risks of transition are high. On the other hand, when one compatibility element becomes dominant, network externalities push system builders to adopt it, possibly migrating from losing compatibility elements. If a system builder chooses the dominant compatibility element up front, he/she is rewarded: his/her systems will still be valid and reusable. On the contrary, choosing a losing compatibility element causes severe problems: the system builder can either keep an incompatible, outdated system or update it to the dominant compatibility element. Clearly, the losing compatibility element gives the system builder no reuse possibility. In addition to the risks, system builders should also consider the costs of transition. These depend heavily on the user's environment: for example, the cost of re-training the users during the transition depends on their particular background.

One final note concerns the exclusivity of the products in case of transition. The server part of the system is exclusive in the sense that, with minor exceptions, two servers can hardly coexist during the transition. On the contrary, the client side of the system is not exclusive, as many different clients can coexist without interfering.
The exclusive case localizes the cost in a short transition period. The non-exclusive case can distribute the costs over a long time span, through an incremental transition.
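The client/server compatibility check that underlies this section can be sketched as follows. The sketch is ours and the data are hypothetical placeholders rather than the actual relations of Table 2; it merely illustrates how a system builder might test whether a candidate client and server share at least one connection protocol.

# Hypothetical products and the connection protocols they support.
# The real relations are those summarized in Table 2.
SUPPORTED = {
    "client_a": {"SMTP", "POP3", "IMAP4"},
    "client_b": {"SMTP", "POP3"},
    "server_x": {"SMTP", "ESMTP", "POP3", "IMAP4"},
    "server_y": {"SMTP", "POP3", "UUCP"},
}

def shared_protocols(client, server):
    """Return the connection protocols that both products support."""
    return SUPPORTED[client] & SUPPORTED[server]

# A client/server pair is usable in the system only if this set is non-empty.
print(sorted(shared_protocols("client_a", "server_x")))   # ['IMAP4', 'POP3', 'SMTP']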
6 The Component Producer's Perspective
The goal of producers is to sell their products and to make a profit. Producers provide functionalities to meet the users' requirements and exploit compatibility to increase the value of their products. A sale occurs when a user purchases a product. The user may have previous experience with other products and generally "moves" between products according to his/her needs. We say that a user moving between products generates a user flow, and a sale is associated with every user flow. More generally, users with no previous experience of other products (at least in the domain) also generate a user flow.

Focusing on a specific FirmX, producing a specific ProductX in the domain, we can analyze the possible user flows that result from ProductX. The flows that positively affect FirmX's sales are the following:

• Flow of new users: ProductX attracts users approaching the domain for the first time. A careful study can determine why those users have not approached the domain before: sometimes there is a lack of functionality and users simply do not buy products that do not fulfill their requirements; at other times, there are commercial or technical barriers that make the approach difficult. The latter case can be addressed by providing the user with simple ways to start using the product. In the email systems domain an entry barrier could be the complexity of the system for users with no previous experience of the Internet or intranets. Most producers overcome such a barrier by providing support packages that users can buy.

• Flow from competitors: ProductX attracts users that previously used ProductX's competitors. The origin of such a transition is that users find the competitors unsatisfactory. However, to make the transition actually happen, ProductX needs to ensure that the transition costs are low. Compatibility is the tool to lower these costs. Compatibility can occur naturally if ProductX and the competitors adopt the same compatibility elements; if this is not the case, ad-hoc converters can be built to make the transition possible. In the email system domain most products actually adopt this technique, which is usually called a "competitive upgrade". Table 4, column (a), summarizes the competitive upgrades advertised by the products in the domain.

• Flow from previous versions: users of ProductX's previous versions upgrade their product by purchasing the new version. FirmX produces both versions, so there is no global loss or gain in the installed base for FirmX. This flow can be highly profitable because it exploits (a) network externalities and (b) reuse from the old to the new version. Compatibility allows the users to upgrade with little or no transition cost. Most products in the email system domain are the result of a long evolution through several versions, and upgrading is generally explicitly supported. Only Infinite InterChange and D-Mail Mail Server do not emphasize their versions and the related transition issues.
• Flows to accessories: accessories are products that are usable only in conjunction with ProductX; plug-ins are typical accessories. This flow increases the sales of the accessories. Accessories are usually made possible by the definition of an API for the extension of the product. FirmX has two choices: to keep the API secret and proprietary, or to make the API public. In the former case, FirmX is the exclusive producer of accessories for ProductX. In the latter case, other producers can become FirmX's competitors in the production of accessories; due to network externalities with ProductX, the global effect of this competition can nevertheless be profitable to FirmX. The only public API available in the email system domain is MAPI, and several products adopt it. Microsoft originated MAPI, although this API can now be considered an open standard.

• Flow from complementary products: ProductX interoperates with complementary products. In the email system domain, servers and clients are complementary. Network externalities make complementarity profitable to all the products involved. Every producer ensures compatibility between its own products; however, compatibility is also possible between products from different producers. In the email system domain there is compatibility between all the products through a set of widely accepted standards (SMTP, POP3, RFC822, and MIME). Compatibility can also be the result of agreements between producers. Most producers in the domain also make other products, with which they seek relations of complementarity. The details of such relations are too complex for this discussion; as a simplification, we summarize them here in terms of numbers of complementary products. Table 4, columns (b) and (c), shows, for each product, the number of complementary products (with respect to the email system domain) by the same producer and the total number of products by the same producer. These figures give a rough indication of the benefits each firm derives from compatibility between its own products.

User flows confirm that compatibility is a competitive issue. The analysis shows that all producers in the domain exploit compatibility to promote user flows. However, none of the producers supports all the compatibility elements. Rather, producers focus only on a certain market segment and provide the compatibility that the segment needs. As a matter of fact, no producer supports all platforms, which is in accordance with the fact that platforms define a major differentiation between market segments. The domain shows that there is a general correlation between the number of supported compatibility elements and the popularity of products. MS Exchange Server, Sun Internet Mail Server, Lotus Domino, and Novell GroupWise, which are major products, all support a great variety of compatibility elements. A significant exception is Sendmail: despite its popularity, its compatibility concerns mainly the supported platforms. However, Sendmail is an exception also because of its recent history as commercial software. Minor products – Infinite InterChange and D-Mail Mail Server – lack many compatibility elements. Infinite InterChange attempts to create a market niche by supporting HDML, a compatibility element not supported by any of its competitors.
Table 4. Supported competitive upgrades and number of products by the same producer
[For each product: (a) the competitive upgrades it supports, (b) the number of complementary products by the same producer, and (c) the total number of products by the same producer.]
7 Conclusions
This paper has shown that compatibility is a technical requirement in system composition and a very desirable feature because of the interoperability it allows. Moreover, the analyzed email domain and the history of the software market in general show that compatibility is a critical competitive issue. Both system builders and component producers seek advantages from compatibility. However, due to its competitive relevance, compatibility requires careful choices and implies risks and potential costs. To maximize the benefits while keeping risks and costs low, each party needs to be aware of the mechanisms of compatibility and of the attitude other parties have toward compatibility. Knowledge of the specific compatibility relations in a domain allows both system builders and component producers to make wiser compatibility choices and to obtain better results.
The presented email system study shows how such knowledge can be obtained in practice through an analysis of compatibility in the domain.
1 The Sun – Netscape alliance does not allow a clear boundary to be defined between the two producers.
References

1. Yellin, D. M., and R. E. Strom, "Interfaces, protocols, and the semi-automatic construction of software adaptors," Proceedings of the Ninth Annual Conference on Object-Oriented Programming Systems, Languages, and Applications, Portland, OR, October 23-28, 1994, pp. 176-190.
2. Yellin, D. M., and R. E. Strom, "Protocol specifications and component adaptors," ACM Transactions on Programming Languages and Systems, vol. 19, no. 2, 1997, pp. 292-333.
3. Weber, F., "Towards a discipline of class composition," Proceedings on Object-Oriented Programming Systems, Languages, and Applications (Addendum), Vancouver, B.C., Canada, October 18-22, 1992, pp. 149-151.
4. Gandal, N., "Hedonic Price Indexes for Spreadsheets and an Empirical Test of the Network Externalities Hypothesis," RAND Journal of Economics, vol. 25, no. 1, 1994.
5. Brynjolfsson, E., and C. F. Kemerer, "Network Externalities in Microcomputer Software: An Econometric Analysis of the Spreadsheet Market," Management Science, vol. 42, no. 12, December 1996, pp. 1627-1647.
6. Farrell, J., and G. Saloner, "Standardization, Compatibility, and Innovation," RAND Journal of Economics, vol. 16, no. 1, 1985.
7. Farrell, J., and G. Saloner, "Installed Base and Compatibility: Innovation, Product Preannouncements, and Predation," American Economic Review, vol. 76, no. 5, 1986.
8. Farrell, J., and G. Saloner, "Converters, Compatibility, and the Control of Interfaces," The Journal of Industrial Economics, vol. 40, no. 1, 1992.
9. Economides, N., "Network Externalities, Complementarities, and Invitations to Enter," European Journal of Political Economy, vol. 12, 1996, pp. 211-232.
10. Shapiro, C., and H. R. Varian, Information Rules: A Strategic Guide to the Network Economy, Harvard Business School Press, 1999.
11. Choi, J. P., "Network Externality, Compatibility Choice, and Planned Obsolescence," The Journal of Industrial Economics, vol. 42, no. 2, 1994.
12. Poulin, J. S., "Software Architectures, Product Lines, and DSSAs: Choosing the Appropriate Level of Abstraction," Proceedings of the 8th Workshop on Institutionalizing Software Reuse, Columbus, Ohio, 1997.
13. Simos, M. A., "Lateral Domains: Beyond Product-Line Thinking," Proceedings of the 8th Workshop on Institutionalizing Software Reuse, Columbus, Ohio, 1997.
14. Baumol, W. J., J. C. Panzar, and R. D. Willig, Contestable Markets and the Theory of Industrial Structure, Harcourt Brace Jovanovich, Inc., 1982.
15. Samuelson, P., "Software compatibility and the law," Communications of the ACM, vol. 38, no. 8, 1995, pp. 15-22.
Author Index
Ahonen, Jarmo  284
Alonso, Omar  251
Ammar, Hany H.  369
Atkinson, Steven  266
Bader, Atef  388
Barker, Richard A.  58
Batory, Don  117
Bergmann, Ulf  41
Biggerstaff, Ted J.  1
Boldyreff, Cornelia  318
Borba, Paulo  402
Borioni, Sandro  74
Braga, Marco  74
Bucci, Paolo  266
Burnett, Robert  353
Constantinides, Constantinos A.  388
Cornélio, Márcio  402
Correa, Alexandre L.  336
Cortese, Giovanni  74
Cybulski, Jacob L.  190
Daneva, Maya  211
Edwards, Stephen  20
Elrad, Tzilla  388
Espenlaub, K.  420
Evered, M.  420
Fischer, Gerhard  302
Forsell, Marko  284
Frakes, William B.  251
Fridella, Stephen  100
Gacek, Cristina  170
Gomaa, Hassan  89
Griss, Martin L.  137
Halttunen, Veikko  284
Healy, Michael J.  58
Heeder, Dale von  117
Heym, Wayne  266
Hollingsworth, Joseph E.  266
Jamhour, Edgard  353
Johnson, Clay  117
Kaindl, Hermann  153
Keedy, J. Leslie  420
Kim, Hyoseob  318
Kulczycki, Gregory  266
Leite, Julio Cesar Sampaio do Prado  41
Lewis, Oliver  153
Long, Timothy J.  266
MacDonald, Bob  117
Mannion, Mike  153
Matsumoto, Masao J.  231
Menger, G.  420
Mili, Ali  369
Montroni, Gianluca  153
Paludo, Marco  353
Pike, Scott  266
Predonzani, Paolo  436
Ravindran, Binoy  20
Reed, Karl  190
Schmid, Klaus  170
Schmolitzky, A.  420
Shinkawa, Yoshiyuki  231
Sitaraman, Murali  266
Soundarajan, Neelam  100
Succi, Giancarlo  436
Vernazza, Tullio  436
Weide, Bruce W.  266
Werner, Cláudia M. L.  336
Wheadon, Joe  153
Williamson, Keith E.  58
Yacoub, Sherif  369
Ye, Yunwen  302
Zaverucha, Gerson  336