REVERSE ENGINEERING
Edited by LINDA WILLS PHILIP NEWCOMB
KLUWER ACADEMIC PUBLISHERS
REVERSE ENGINEERING
edited by
Linda Wills
Georgia Institute of Technology

Philip Newcomb
The Software Revolution, Inc.

A Special Issue of AUTOMATED SOFTWARE ENGINEERING
An International Journal
Volume 3, Nos. 1/2 (1996)
KLUWER ACADEMIC PUBLISHERS Boston / Dordrecht / London
AUTOMATED SOFTWARE ENGINEERING
An International Journal
Volume 3, Nos. 1/2, June 1996

Special Issue: Reverse Engineering
Guest Editors: Linda Wills and Philip Newcomb

Preface
    Lewis Johnson    5

Introduction
    Linda Wills    7

Database Reverse Engineering: From Requirements to CARE Tools
    J.-L. Hainaut, V. Englebert, J. Henrard, J.-M. Hick and D. Roland    9

Understanding Interleaved Code
    Spencer Rugaber, Kurt Stirewalt and Linda M. Wills    47

Pattern Matching for Clone and Concept Detection
    K.A. Kontogiannis, R. De Mori, E. Merlo, M. Galler and M. Bernstein    77

Extracting Architectural Features from Source Code
    David R. Harris, Alexander S. Yeh and Howard B. Reubenstein    109

Strongest Postcondition Semantics and the Formal Basis for Reverse Engineering
    Gerald C. Gannod and Betty H.C. Cheng    139

Recent Trends and Open Issues in Reverse Engineering
    Linda M. Wills and James H. Cross II    165

Desert Island Column
    John Dobson    173
Distributors for North America: Kluwer Academic Publishers 101 Philip Drive Assinippi Park Norwell, Massachusetts 02061 USA Distributors for all other countries: Kluwer Academic Publishers Group Distribution Centre Post Office Box 322 3300 AH Dordrecht, THE NETHERLANDS
Library of Congress Cataloging-in-Publication Data: A C.I.P. Catalogue record for this book is available from the Library of Congress.
Copyright © 1996 by Kluwer Academic Publishers. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher, Kluwer Academic Publishers, 101 Philip Drive, Assinippi Park, Norwell, Massachusetts 02061. Printed on acid-free paper. Printed in the United States of America.
Automated Software Engineering 3, 5 (1996) © 1996 Kluwer Academic Publishers. Manufactured in The Netherlands.
Preface

This issue of Automated Software Engineering is devoted primarily to the topic of reverse engineering. This is a timely topic: many organizations must devote increasing resources to the maintenance of outdated, so-called "legacy" systems. As these systems grow older, and changing demands are made on them, they constitute an increasing risk of catastrophic failure. For example, it is anticipated that on January 1, 2000, there will be an avalanche of computer errors from systems that were not designed to handle dates larger than 1999.

In software engineering the term "legacy" has a negative connotation, meaning old and decrepit. As Leon Osterweil has observed, the challenge of research in reverse engineering and software understanding is to give the term the positive connotation that it deserves. Legacy systems ought to be viewed as a valuable resource, capturing algorithms and business rules that can be reused in future software systems. They are often an important cultural heritage for an organization, embodying the organization's collective knowledge and expertise. But in order to unlock and preserve the value of legacy systems, we need tools that can help extract useful information and renovate the code so that it can continue to be maintained. Thus automated software engineering plays a critical role in this endeavor.

Last year's Working Conference on Reverse Engineering (WCRE) attracted a number of excellent papers. Philip Newcomb and Linda Wills, the program co-chairs of the conference, and I decided that many of these could be readily adapted into journal articles, and so we decided that a special issue should be devoted to reverse engineering. By the time we were done, there were more papers than could be easily accommodated in a single issue, and so we decided to publish the papers as a double issue, along with a Desert Island Column that was due for publication. Even so, we were not able to include all of the papers that we hoped to publish at this time, and expect to include some additional reverse engineering papers in future issues.

I would like to express my sincere thanks to Drs. Newcomb and Wills for organizing this special issue. Their tireless efforts were essential to making this project a success.

A note of clarification is in order regarding the review process for this issue. When Philip and Linda polled the WCRE program committee to determine which papers they thought deserved consideration for this issue, they found that their own papers were among the papers receiving the highest marks. This was a gratifying outcome, but also a cause for concern, as it might appear to the readership that they had a conflict of interest. After reviewing the papers myself, I concurred with the WCRE program committee: these papers constituted an important contribution to the topic of reverse engineering, and should not be overlooked. In order to eliminate the conflict of interest, it was decided that these papers would be handled through the regular Automated Software Engineering submission process, and be published once that process is complete. One of these papers, by Rugaber, Stirewalt, and Wills, is now ready for publication, and I am pleased to recommend it for inclusion in this special issue. We expect to publish additional papers in forthcoming issues.

W.L. Johnson
Automated Software Engineering 3, 7-8 (1996) © 1996 Kluwer Academic Publishers. Manufactured in The Netherlands.
Introduction to the Special Double Issue on Reverse Engineering

LINDA M. WILLS
Georgia Institute of Technology
A central activity in software engineering is comprehending existing software artifacts. Whether the task is to maintain, test, migrate, or upgrade a legacy system or reuse software components in the development of new systems, the software engineer must be able to recover information about existing software. Relevant information includes: What are its components and how do they interact and compose? What is their functionality? How are certain requirements met? What design decisions were made in the construction of the software? How do features of the software relate to concepts in the application domain? Reverse engineering involves examining and analyzing software systems to help answer questions like these. Research in this field focuses on developing tools for assisting and automating portions of this process and representations for capturing and managing the information extracted. Researchers actively working on these problems in academia and industry met at the Working Conference on Reverse Engineering (WCRE), held in Toronto, Ontario, in July 1995. This issue of Automated Software Engineering features extended versions of select papers presented at the Working Conference. They are representative of key technological trends in the field. As with any complex problem, being able to provide a well-defined characterization of the problem's scope and underlying issues is a crucial step toward solving it. The Hainaut et al. and Rugaber et al. papers both do this for problems that have thus far been ill-defined and attacked only in limited ways. Hainaut et al. deal with the problem of recovering logical and conceptual data models from database applications. Rugaber et al. characterize the difficult problem of unraveling code that consists of several interleaved strands of computation. Both papers draw together work on several related, but seemingly independent problems, providing a framework for solving them in a unified way. While Rugaber et al. deal with the problem of interleaving, which often arises due to structure-sharing optimizations, Kontogiannis et al. focus on the complementary problem of code duplication. This occurs as programs evolve and code segments are reused by simply duplicating them where they are needed, rather than factoring out the common structure into a single, generalized function. Kontogiannis et al. describe a collection of new pattern matching techniques for detecting pairs of code "clones" as well as for recognizing abstract programming concepts. The recognition of meaningful patterns in software is a widely-used technique in reverse engineering. Currently, there is a trend toward flexible, interactive recognition paradigms, which give the user explicit control, for example, in selecting the type of recognizers to use
and the degree of dissimilarity to tolerate in partial matches. This trend can be seen in the Kontogiannis et al. and Harris et al. papers. Harris et al. focus on recognition of high-level, architectural features in code, using a library of individual recognizers. This work not only attacks the important problem of architectural recovery, it also contributes to more generic recognition issues, such as library organization and analyst-controlled retrieval, interoperability between recognizers, recognition process optimization, and recognition coverage metrics.

Another trend in reverse engineering is toward increased use of formal methods. A representative paper by Gannod and Cheng describes a formal approach to extracting specifications from imperative programs. They advocate the use of strongest postcondition semantics as a formal model that is more appropriate for reverse engineering than the more familiar weakest precondition semantics. The use of formal methods introduces more rigor and clarity into the reverse engineering process, making the techniques more easily automated and validated.

The papers featured here together provide a richly detailed perspective on the state of the field of reverse engineering. A more general overview of the trends and challenges of the field is provided in the summary article by Wills and Cross.

The papers in this issue are extensively revised and expanded versions of papers that originally appeared in the proceedings of the Working Conference on Reverse Engineering. We would like to thank the authors and reviewers of these papers, as well as the reviewers of the original WCRE papers, for their diligent efforts in creating high-quality presentations of this research. Finally, we would like to acknowledge the general chair of WCRE, Elliot Chikofsky, whose vision and creativity have provided a forum for researchers to share ideas and work together in a friendly, productive environment.
Automated Software Engineering 3, 9-45 (1996) © 1996 Kluwer Academic Publishers. Manufactured in The Netherlands.
Database Reverse Engineering: From Requirements to CARE Tools*

J.-L. HAINAUT    [email protected]
V. ENGLEBERT, J. HENRARD, J.-M. HICK AND D. ROLAND
Institut d'Informatique, University of Namur, rue Grandgagnage, 21, B-5000 Namur
Abstract. This paper analyzes the requirements that CASE tools should meet for effective database reverse engineering (DBRE), and proposes a general architecture for data-centered application reverse engineering CASE environments. First, the paper describes a generic, DBMS-independent DBRE methodology; then it analyzes the main characteristics of DBRE activities in order to collect a set of desirable requirements; finally, it describes DB-MAIN, an operational CASE tool developed according to these requirements. The main features of this tool described in this paper are its unique generic specification model, its repository, its transformation toolkit, its user interface, the text processors, the assistants, the methodological control and its functional extensibility. The paper ends with a description of five real-world projects in which the methodology and the CASE tool were applied.

Keywords: reverse engineering, database engineering, program understanding, methodology, CASE tools
1. Introduction

1.1. The problem and its context
Reverse engineering a piece of software consists, among others, in recovering or reconstructing its functional and technical specifications, starting mainly from the source text of the programs (IEEE, 1990; Hall, 1992; Wills et al., 1995). Recovering these specifications is generally intended to redocument, convert, restructure, maintain or extend old applications. It is also required when developing a Data Administration function that has to know and record the description of all the information resources of the company. The problem is particularly complex with old and ill-designed applications. In this case, not only can no decent documentation (if any) be relied on, but the lack of systematic methodologies for designing and maintaining them has led to tricky and obscure code. Therefore, reverse engineering has long been recognized as a complex, painful and prone-to-failure

*This is a heavily revised and extended version of "Requirements for Information System Reverse Engineering Support" by J.-L. Hainaut, V. Englebert, J. Henrard, J.-M. Hick, D. Roland, which first appeared in the Proceedings of the Second Working Conference on Reverse Engineering, IEEE Computer Society Press, pp. 136-145, July 1995. This paper presents some results of the DB-MAIN project. This project is partially supported by the Région Wallonne, the European Union, and by a consortium comprising ACEC-OSI (Be), ARIANE-II (Be), Banque UCL (Lux), BBL (Be), Centre de recherche public H. Tudor (Lux), CGER (Be), Cockerill-Sambre (Be), CONCIS (Fr), D'Ieteren (Be), DIGITAL, EDF (Fr), EPFL (CH), Groupe S (Be), IBM, OBLOG Software (Port), ORIGIN (Be), Ville de Namur (Be), Winterthur (Be), 3 Suisses (Be). The DB-Process subproject is supported by the Communauté Française de Belgique.
activity, so much so that it is simply not undertaken most of the time, leaving huge amounts of invaluable knowledge buried in the programs, and therefore definitively lost.

In information systems, or data-oriented applications, i.e., in applications whose central component is a database (or a set of permanent files), the complexity can be broken down by considering that the files or databases can be reverse engineered (almost) independently of the procedural parts. Splitting the problem in this way can be supported by the following arguments:
— the semantic distance between the so-called conceptual specifications and the physical implementation is most often narrower for data than for procedural parts;
— the permanent data structures are generally the most stable part of applications;
— even in very old applications, the semantic structures that underlie the file structures are mainly procedure-independent (though their physical structures are highly procedure-dependent);
— reverse engineering the procedural part of an application is much easier when the semantic structure of the data has been elicited.

Therefore, concentrating on reverse engineering the data components of the application first can be much more efficient than trying to cope with the whole application.

The database community considers that there exist two outstanding levels of description of a database or of a consistent collection of files, materialized into two documents, namely its conceptual schema and its logical schema. The first one is an abstract, technology-independent description of the data, expressed in terms close to the application domain. Conceptual schemas are expressed in some semantics-representation formalism such as the ERA, NIAM or OMT models. The logical schema describes these data translated into the data model of a specific data manager, such as a commercial DBMS. A logical schema comprises tables, columns, keys, record types, segment types and the like. The primary aim of database reverse engineering (DBRE) is to recover possible logical and conceptual schemas for an existing database.

1.2. Two introductory examples
The real scope of database reverse engineering has sometimes been misunderstood, and presented as merely redrawing the data structures of a database into some DBMS-independent formalism. Many early scientific proposals, and most current CASE tools, are limited to the translation process illustrated in figure 1. In such situations, some elementary translation rules suffice to produce a tentative conceptual schema. Unfortunately, most situations are actually far more complex. In figure 2, we describe a very small COBOL fragment from which we intend to extract the semantics underlying the files CF008 and PFOS. By merely analyzing the record structure declarations, as most DBRE CASE tools do at the present time, only schema (a) in figure 2 can be extracted. It obviously brings little information about the meaning of the data. However, by analyzing the procedural code, the user-program dialogs, and, if needed, the file contents, a more expressive schema can be obtained. For instance, schema (b) can be considered as a refinement of schema (a) resulting from the following reasonings:
create table CUSTOMER (
  CNUM numeric(6) not null,
  CNAME char(24) not null,
  CADDRESS char(48) not null,
  primary key (CNUM))

create table ORDER (
  ONUM char(8) not null,
  CNUM numeric(6) not null,
  ODATE date,
  primary key (ONUM),
  foreign key (CNUM) references CUSTOMER)

Figure 6. Transforming a relationship type into an entity type, and conversely.
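To make the notion of elementary translation rules concrete, here is a minimal sketch in Python (an illustration only, not DB-MAIN's extractor) that derives a tentative conceptual schema from SQL declarations such as those shown above: each table becomes an entity type and each foreign key clause a candidate one-to-many relationship type. The parsing is deliberately naive and handles only the constructs appearing in figure 1.

import re

def tentative_conceptual_schema(ddl):
    """Naive illustration: each table becomes an entity type with its columns,
    and each foreign key clause becomes a candidate one-to-many relationship
    type (referencing table, column, referenced table)."""
    entity_types, rel_types = {}, []
    # statements are assumed to be separated by ';' or by a blank line
    for stmt in re.split(r";|\n\s*\n", ddl):
        header = re.search(r"create\s+table\s+(\w+)\s*\((.*)\)", stmt,
                           re.IGNORECASE | re.DOTALL)
        if not header:
            continue
        name, body = header.group(1).upper(), header.group(2)
        columns = []
        for item in body.split(","):
            item = item.strip()
            if not item or item.lower().startswith(("primary", "foreign")):
                continue
            columns.append(item.split()[0])
        entity_types[name] = columns
        for fk_col, ref_table in re.findall(
                r"foreign\s+key\s*\((\w+)\)\s*references\s+(\w+)",
                body, re.IGNORECASE):
            rel_types.append((name, fk_col, ref_table.upper()))
    return entity_types, rel_types

ddl = """
create table CUSTOMER (
  CNUM numeric(6) not null,
  CNAME char(24) not null,
  CADDRESS char(48) not null,
  primary key (CNUM))

create table ORDER (
  ONUM char(8) not null,
  CNUM numeric(6) not null,
  ODATE date,
  primary key (ONUM),
  foreign key (CNUM) references CUSTOMER)
"""
print(tentative_conceptual_schema(ddl))
# ({'CUSTOMER': ['CNUM', 'CNAME', 'CADDRESS'], 'ORDER': ['ONUM', 'CNUM', 'ODATE']},
#  [('ORDER', 'CNUM', 'CUSTOMER')])

As the COBOL example of figure 2 illustrates, such declaration-only analysis misses most of the implicit constructs hidden in the procedural code and in the data.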
"CUST-" " DATE" -> " TIME"
replaces all prefixes "C-" with the prefix "CUST-"; replaces each substring " DATE", whatever its position, with the substring "TIME"; " ^CODE$" -> "REFERENCE" replaces all the names "CODE" with the new name "REFERENCE".
In addition, it proposes case transformation: lower-to-upper, upper-to-lower, capitalize and remove accents. These parameters can be saved as a name processing script, and reused later.
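As an illustration of how such a name-processing script might behave, the following Python sketch applies regex-style substitution rules followed by one of the case transformations listed above; the rule set and helper names are invented for the example and are not the DB-MAIN name processor.

import re
import unicodedata

# Illustrative substitution rules in the spirit of the name processor:
# "^" anchors a prefix, "^...$" a whole name.
RULES = [
    (r"^C-", "CUST-"),        # rename the prefix "C-" to "CUST-"
    (r"DATE", "TIME"),        # replace the substring "DATE" wherever it occurs
    (r"^CODE$", "REFERENCE"), # rename the whole name "CODE" to "REFERENCE"
]

def remove_accents(name):
    """Strip diacritics so that names become plain ASCII."""
    return "".join(c for c in unicodedata.normalize("NFD", name)
                   if unicodedata.category(c) != "Mn")

def process_name(name, case="upper"):
    """Apply the substitution script, then the selected case transformation."""
    for pattern, replacement in RULES:
        name = re.sub(pattern, replacement, name)
    name = remove_accents(name)
    if case == "upper":
        return name.upper()
    if case == "lower":
        return name.lower()
    if case == "capitalize":
        return name.capitalize()
    return name

for raw in ["C-NUM", "ORD-DATE", "CODE", "Qté-Livrée"]:
    print(raw, "->", process_name(raw))
# C-NUM -> CUST-NUM, ORD-DATE -> ORD-TIME, CODE -> REFERENCE, Qté-Livrée -> QTE-LIVREE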
Figure 12. Control panel of the Transformation assistant. The left-side area is the problem solver, which presents a catalog of problems (1st column) and suggested solutions (2nd column). The right-side area is the script manager. The worksheet shows a simplified script for conceptualizing relational databases.
9. The assistants
An assistant is a higher-level solver dedicated to coping with a special kind of problem, or to performing specific activities efficiently. It gives access to the basic toolboxes of DB-MAIN, but in a controlled and intelligent way. The current version of DB-MAIN includes three general-purpose assistants which can support, among others, the DBRE activities, namely the Transformation assistant, the Schema Analysis assistant and the Text Analysis assistant. These processors offer a collection of built-in functions that can be enriched by user-defined functions developed in Voyager-2 (Section 10).

The Transformation assistant (figure 12) allows applying one or several transformations to selected objects. Each operation appears as a problem/solution couple, in which the problem is defined by a precondition (e.g., the objects are the many-to-many relationship types of the current schema), and the solution is an action resulting in eliminating the problem (e.g., transform them into entity types). Several dozen problem/solution items are proposed. The analyst can select one of them and execute it automatically or in a controlled way. Alternatively, (s)he can build a script comprising a list of operations, execute it, save it and load it. Predefined scripts are available to transform any schema according to popular models (e.g., Bachman model, binary model, relational, CODASYL, standard files), or to perform standard engineering processes (e.g., conceptualization of relational and COBOL schemas, normalization). Customized operations can be added via Voyager-2 functions (Section 10). Figure 12 shows the control panel of this tool. A second generation of the Transformation assistant is under development. It provides a more flexible approach to building complex transformation plans thanks to a catalog of more than 200 preconditions, a library
of about 50 actions and more powerful scripting control structures, including loops and if-then-else patterns.

The Schema Analysis assistant is dedicated to the structural analysis of schemas. It uses the concept of submodel, defined as a restriction of the generic specification model described in Section 5 (Hainaut et al., 1992). This restriction is expressed by a boolean expression of elementary predicates stating which specification patterns are valid, and which ones are forbidden. An elementary predicate can specify situations such as the following: "entity types must have from 1 to 100 attributes", "relationship types have from 2 to 2 roles", "entity type names are less than 18 characters long", "names do not include spaces", "no name belongs to a given list of reserved words", "entity types have from 0 to 1 supertype", "the schema is hierarchical", "there are no access keys". A submodel appears as a script which can be saved and loaded. Predefined submodels are available: Normalized ER, Binary ER, NIAM, Functional ER, Bachman, Relational, CODASYL, etc. Customized predicates can be added via Voyager-2 functions (Section 10). The Schema Analysis assistant offers two functions, namely Check and Search. Checking a schema consists in detecting all the constructs which violate the selected submodel, while the Search function detects all the constructs which comply with the selected submodel.

The Text Analysis assistant presents in an integrated package all the tools dedicated to text analysis. In addition, it manages the active links between the source texts and the abstract objects in the repository.
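For illustration, a submodel of this kind can be pictured as a list of named elementary predicates evaluated over a schema. The following Python sketch is a toy rendition under that assumption; the classes and predicates are invented for the example and do not reflect DB-MAIN's repository structures.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Attribute:
    name: str

@dataclass
class EntityType:
    name: str
    attributes: List[Attribute] = field(default_factory=list)

@dataclass
class RelType:
    name: str
    roles: int = 2
    attributes: List[Attribute] = field(default_factory=list)

@dataclass
class Schema:
    entity_types: List[EntityType] = field(default_factory=list)
    rel_types: List[RelType] = field(default_factory=list)

# A submodel expressed as named elementary predicates, in the spirit of those quoted above.
BINARY_ER_SUBMODEL = [
    ("relationship types have from 2 to 2 roles",
     lambda s: all(r.roles == 2 for r in s.rel_types)),
    ("relationship types have from 0 to 0 attributes",
     lambda s: all(not r.attributes for r in s.rel_types)),
    ("entity types have from 1 to 100 attributes",
     lambda s: all(1 <= len(e.attributes) <= 100 for e in s.entity_types)),
    ("entity type names are less than 18 characters long",
     lambda s: all(len(e.name) < 18 for e in s.entity_types)),
]

def check(schema, submodel):
    """Check function: return the predicates the schema violates
    (a Search counterpart would instead return the complying constructs)."""
    return [label for label, holds in submodel if not holds(schema)]

schema = Schema(
    entity_types=[EntityType("CUSTOMER", [Attribute("CNUM"), Attribute("CNAME")]),
                  EntityType("A_VERY_LONG_ENTITY_TYPE_NAME", [Attribute("A1")])],
    rel_types=[RelType("places", roles=3)])
print(check(schema, BINARY_ER_SUBMODEL))
# -> ['relationship types have from 2 to 2 roles',
#     'entity type names are less than 18 characters long']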
10. Functional extensibility
DB-MAIN provides a set of built-in standard functions that should be sufficient to satisfy most basic needs in database engineering. However, no CASE tool can meet the requirements of all users in any possible situation, and specialized operators may be needed to deal with unforeseen or marginal situations. There are two important domains in which users require customized extensions, namely additional internal functions and interfaces with other tools. For instance, analyzing and generating texts in any language and according to any dialect, or importing and exchanging specifications with any CASE tool or Data Dictionary System, are practically impossible, even with highly parametric import/export processors. To cope with such problems, DB-MAIN provides the Voyager-2 tool development environment, allowing analysts to build their own functions, whatever their complexity. Voyager-2 offers a powerful language in which specific processors can be developed and integrated into DB-MAIN. Basically, Voyager-2 is a procedural language which proposes primitives to access and modify the repository through predicative or navigational queries, and to invoke all the basic functions of DB-MAIN. It provides a powerful list manager as well as functions to parse and generate complex text files. A user's tool developed in Voyager-2 is a program comprising possibly recursive procedures and functions. Once compiled, it can be invoked by DB-MAIN just like any basic function. Figure 13 presents a small but powerful Voyager-2 function which validates and creates a referential constraint with the arguments extracted from a COBOL/SQL program by the pattern defined in figure 11. When such a pattern instantiates, the pattern-matching engine passes the values of the four variables T1, T2, C1 and C2 to the MakeForeignKey function.
function integer MakeForeignKey(string : T1,T2,C1,C2);
explain(* title="Create a foreign key from an SQL join";
         help="if C1 is a unique key of table T1 and if C2 is a column of T2,
               and if C1 and C2 are compatible, then define C2 as a foreign key
               of T2 to T1, and return true, else return false" *);

/* define the variables; any repository object type can be a domain */
schema : S;
entity_type : E;
attribute : A,IK,FK;
list : ID-LIST,FK-LIST;

S := GetCurrentSchema();   /* S is the current schema */

/* ID-LIST = list of the attributes A such that: (1) A belongs to an entity type E
   which is in schema S and whose name is T1, (2) the name of A is C1,
   (3) A is an identifier of E (the ID property of A is true) */
ID-LIST := attribute[A]{of:entity_type[E]{in:[S] and E.NAME = T1}
                        and A.NAME = C1 and A.ID = true};

/* FK-LIST = list of the attributes A such that: (1) A belongs to an entity type E
   which is in S and whose name is T2, (2) the name of A is C2 */
FK-LIST := attribute[A]{of:entity_type[E]{in:[S] and E.NAME = T2}
                        and A.NAME = C2};

/* if both lists are non-empty, and if the attributes are compatible, then define
   the attribute in FK-LIST as a foreign key referencing the attribute in ID-LIST */
if not(empty(ID-LIST) or empty(FK-LIST)) then
   {IK := GetFirst(ID-LIST);
    FK := GetFirst(FK-LIST);
    if IK.TYPE = FK.TYPE and IK.LENGTH = FK.LENGTH
    then {connect(reference,IK,FK); return true;}
    else {return false;};}
else {return false;};
Figure 13. A (strongly simplified) excerpt of the repository and a Voyager-2 function which uses it. The repository expresses the fact that schemas have entity types, which in turn have attributes. Some attributes can be identifiers (boolean ID) or can reference (foreign key) another attribute (candidate key). The input arguments of the procedure are four names T1, T2, C1, C2 such as those resulting from an instantiation of the pattern of figure 11. The function first evaluates the possibility of attribute (i.e., column) C2 of entity type (i.e., table) T2 being a foreign key to entity type T1 with identifier (candidate key) C1. If the evaluation is positive, the referential constraint is created. The explain section illustrates the self-documenting facility of Voyager-2 programs; it defines the answers the compiled version of this function will provide when queried by the DB-MAIN tool.
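To suggest what the pattern of figure 11 and MakeForeignKey accomplish together, the following Python sketch scans program text for SQL join conditions and proposes foreign key candidates against a toy schema description. The regular expression, the schema dictionary and the omission of type/length compatibility checks are simplifications for illustration; this is neither the PDL engine nor the Voyager-2 function.

import re

# Simplified stand-in for the join pattern: find "WHERE Ta.Ca = Tb.Cb" conditions.
JOIN_PATTERN = re.compile(r"WHERE\s+(\w+)\.(\w+)\s*=\s*(\w+)\.(\w+)", re.IGNORECASE)

def propose_foreign_keys(source_text, schema):
    """schema maps table name -> {'columns': set, 'keys': set}; returns
    (referencing table, column, referenced table, candidate key) tuples."""
    candidates = []
    for ta, ca, tb, cb in JOIN_PATTERN.findall(source_text):
        for (t1, c1), (t2, c2) in [((ta, ca), (tb, cb)), ((tb, cb), (ta, ca))]:
            # Mirror MakeForeignKey's checks: C1 identifies T1, C2 is a column of T2.
            if (t1 in schema and c1 in schema[t1]["keys"]
                    and t2 in schema and c2 in schema[t2]["columns"]):
                candidates.append((t2, c2, t1, c1))
    return candidates

code = ("EXEC SQL SELECT CNAME INTO :C-NAME FROM CUSTOMER, ORDER "
        "WHERE CUSTOMER.CNUM = ORDER.CNUM END-EXEC.")
schema = {
    "CUSTOMER": {"columns": {"CNUM", "CNAME", "CADDRESS"}, "keys": {"CNUM"}},
    "ORDER": {"columns": {"ONUM", "CNUM", "ODATE"}, "keys": {"ONUM"}},
}
print(propose_foreign_keys(code, schema))
# -> [('ORDER', 'CNUM', 'CUSTOMER', 'CNUM')]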
11. Methodological control and design recovery⁸
Though this paper presents it as a CARE tool only, the DB-MAIN environment has a wider scope, i.e., data-centered applications engineering. In particular, it is intended to address the complex and critical problem of application evolution. In this context, understanding how the
engineering processes have been carried out when legacy systems were developed, and guiding today's analysts in conducting application development, maintenance and reengineering, are major functions that should be offered by the tool. This research domain, known as design (or software) process modeling, is still under full development, and few results have been made available to practitioners so far. The reverse engineering process is strongly coupled with these aspects in three ways. First, reverse engineering is an engineering activity of its own (Section 2), and therefore is subject to rules, techniques and methods, in the same way as forward engineering; it therefore deserves to be supported by methodological control functions of the CARE tool. Secondly, DBRE is a complex process, based on trial-and-error behaviours. Exploring several solutions, comparing them, deriving new solutions from earlier dead-end ones, are common practices. Recording the history of a RE project, analyzing it, completing it with new processes, and replaying some of its parts, are typical design process modeling objectives. Thirdly, while the primary aim of reverse engineering is (in short) to recover technical and functional specifications from the operational code of an existing application, a secondary objective is progressively emerging, namely to recover the design of the application, i.e., the way the application has (or could have) been developed. This design includes not only the specifications, but also the reasonings, the transformations, the hypotheses and the decisions the development process consists of.

Briefly stated, DB-MAIN proposes a design process model comprising concepts such as design product, design process, process strategy, decision, hypothesis and rationale. This model derives from proposals such as those of Potts and Bruns (1988) and Rolland (1993), extended to all database engineering activities. It describes quite adequately not only standard design methodologies, such as the Conceptual-Logical-Physical approaches (Teorey, 1994; Batini et al., 1992), but also any kind of heuristic design behaviour, including those that occur in reverse engineering. We now briefly describe the elements of this design process model.

Product and product instance. A product instance is any outstanding specification object that can be identified in the course of a specific design. A conceptual schema, an SQL DDL text, a COBOL program, an entity type, a table, a collection of user's views, an evaluation report, can all be considered product instances. Similar product instances are classified into products, such as Normalized conceptual schema, DMS-compliant optimized schema or DMS-DDL schema (see figure 3).

Process and process instance. A process instance is any logical unit of activity which transforms a product instance into another product instance. Normalizing schema S1 into schema S2 is a process instance. Similar process instances are classified into processes, such as CONCEPTUAL NORMALIZATION in figure 3.
Process strategy. The strategy of a process is the specification of how its goal can be achieved, i.e., how each instance of the process must be carried out. A strategy may be deterministic, in which case it reduces to an algorithm (and can often be implemented as a primitive), or it may be non-deterministic, in which case the exact way in which each of its
instances will be carried out is up to the designer. The strategy of a design process is defined by a script that specifies, among others, what lower-level processes must/can be triggered, in what order, and under what conditions. The control structures in a script include action selection (at most one, one only, at least one, all in any order, all in this order, at least one any number of times, etc.), alternate actions, iteration, parallel actions, weak condition (should be satisfied), strong condition (must be satisfied), etc.

Decision, hypothesis and rationale. In many cases, the analyst/developer will carry out an instance of a process with some hypothesis in mind. This hypothesis is an essential characteristic of this process instance, since it implies the way in which its strategy will be performed. When the engineer needs to try another hypothesis, (s)he can perform another instance of the same process, generating a new instance of the same product. After a while, (s)he is facing a collection of instances of this product, from which (s)he wants to choose the best one (according to the requirements that have to be satisfied). A justification of the decision must be provided. Hypotheses and decision justifications comprise the design rationale.

History. The history of a process instance is the recorded trace of the way in which its strategy has been carried out, together with the product instances involved and the rationale that has been formulated. Since a project is an instance of the highest-level process, its history collects all the design activities, all the product instances and all the rationales that have appeared, and will appear, in the life of the project. The history of a product instance P (also called its design) is the set of all the process instances, product instances and rationales which contributed to P. For instance, the design of a database collects all the information needed to describe and explain how the database came to be what it is.

A specific methodology is described in MDL, the DB-MAIN Methodology Description Language. The description includes the specification of the products and of the processes the methodology is made up of, as well as of the relationships between them. A product is of a certain type, described as a specialization of a generic specification object from the DB-MAIN model (Section 5), and more precisely as a submodel generated by the Schema Analysis assistant (Section 9). For instance, a product called Raw-conceptual-schema (figure 3) can be declared a BINARY-ER-SCHEMA. The latter is a product type that can be defined by a SCHEMA satisfying the following predicate, stating that relationship types are binary and have no attributes, and that the attributes are atomic and single-valued:

(all rel-types have from 2 to 2 roles)
and (all rel-types have from 0 to 0 attributes)
and (all attributes have from 0 to 0 components)
and (all attributes have a max cardinality from 1 to 1);

A process is defined mainly by its input product type(s), the internal product type, the output product type(s) and by its strategy. The DB-MAIN CASE tool is controlled by a methodology engine which is able to interpret such a method description once it has been stored in the repository by the MDL compiler. In this way, the tool is customized according to this specific methodology. When developing an application, the analyst carries out process instances according to chosen hypotheses, and builds product instances. (S)he makes decisions which (s)he can justify.
All the product instances, process instances, hypotheses, decisions and justifications related to the engineering of an application make up the trace, or history, of this application development. This history is also recorded in the repository. It can be examined, replayed, synthesized, and processed (e.g., for design recovery). One of the most promising applications of histories is database design recovery. Constructing a possible design history for an existing, generally undocumented database is a complex problem, which we propose to tackle in the following way. Reverse engineering the database generates a DBRE history. This history can be cleaned by removing unnecessary actions. Reversing each of the actions of this history, then reversing their order, yields a tentative, unstructured design history. By normalizing the latter, and by structuring it according to a reference methodology, we can obtain a possible design history of the database. Replaying this history against the recovered conceptual schema should produce a physical schema which is equivalent to the current database. A more comprehensive description of how these problems are addressed in the DB-MAIN approach and CASE tool can be found in Hainaut et al. (1994), while the design recovery approach is described in Hainaut et al. (1996).
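The history-reversal step can be pictured as follows; this Python sketch uses invented transformation names and a trivial pair encoding rather than DB-MAIN's actual history format.

# Each recorded action is paired with its inverse; the names are invented for illustration.
INVERSE = {
    "remove-technical-id": "add-technical-id",
    "transform-fk-into-rel-type": "transform-rel-type-into-fk",
}

def tentative_design_history(dbre_history):
    """Turn a cleaned DBRE history into a tentative forward design history:
    invert each action, then reverse their order."""
    inverted = [(INVERSE.get(action, "inverse-of-" + action), target)
                for action, target in dbre_history]
    return list(reversed(inverted))

# Actions performed while conceptualizing a legacy schema (most recent last).
dbre = [("remove-technical-id", "CUSTOMER.ROW_ID"),
        ("transform-fk-into-rel-type", "ORDER.CNUM")]
for step in tentative_design_history(dbre):
    print(step)
# -> ('transform-rel-type-into-fk', 'ORDER.CNUM')
#    ('add-technical-id', 'CUSTOMER.ROW_ID')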
12. DBRE requirements and the DB-MAIN CASE tool
We now examine the requirements described in Section 3 and evaluate how the DB-MAIN CASE tool helps satisfy them.

Flexibility. Instead of being constrained by rigid methodological frameworks, the analyst is provided with a collection of neutral toolsets that can be used to process any schema, whatever its level of abstraction and its degree of completion. In particular, backtracking and multi-hypothesis exploration are easily performed. However, by customizing the method engine, the analyst can build a specialized CASE tool that enforces strict methodologies, such as the one described in Section 2.

Extensibility. Through the Voyager-2 language, the analyst can quickly develop specific functions; in addition, the assistants and the name and text analysis processors allow the analyst to develop customized scripts.

Sources multiplicity. The most common information sources have a text format, and can be queried and analyzed through the Text Analysis assistant. Other sources can be processed through specific Voyager-2 functions. For example, data analysis is most often performed by small ad hoc queries or application programs, which validate specific hypotheses about, e.g., a possible identifier or foreign key. Such queries and programs can be generated by Voyager-2 programs that implement heuristics about the discovery of such concepts. In addition, external information processors and analyzers can easily introduce specifications through the text-based import-export ISL language. For example, a simple SQL program can extract SQL specifications from DBMS data dictionaries and generate their ISL expression, which can then be imported into the repository.
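For instance, a foreign key hypothesis can be checked against the data with a small ad hoc query; the following sketch, an illustration rather than a generated Voyager-2 program, builds such a query for a given pair of columns.

# Illustrative query generator: build an ad hoc SQL query that checks whether
# a hypothesized foreign key actually holds in the data.
def fk_validation_query(fk_table, fk_column, ref_table, ref_column):
    """Count orphan rows; a result of 0 supports the foreign key hypothesis."""
    return (
        "select count(*) from {ft} t\n"
        "where t.{fc} is not null\n"
        "  and not exists (select 1 from {rt} r where r.{rc} = t.{fc})"
    ).format(ft=fk_table, fc=fk_column, rt=ref_table, rc=ref_column)

print(fk_validation_query("ORDER", "CNUM", "CUSTOMER", "CNUM"))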
Text analysis. The DB-MAIN tool offers both general-purpose and specific text analyzers and processors. If needed, other processors can be developed in Voyager-2. Finally, external analyzers and text processors can be used, provided they can generate ISL specifications, which can then be imported into DB-MAIN to update the repository.

Name processing. Besides the name processor, specific Voyager-2 functions can be developed to cope with more specific name patterns or heuristics. Finally, the compact and sorted views can be used as powerful browsing tools to examine name patterns or to detect similarities.

Links with other CASE processes. DB-MAIN is not dedicated to DBRE only; it therefore includes, in a seamless way, supporting functions for the other DB engineering processes, such as forward engineering. Being neutral, many functions are common to all the engineering processes.

Openness. DB-MAIN supports exchanges with other CASE tools in two ways. First, Voyager-2 programs can be developed (1) to generate specifications in the input language of the other tools, and (2) to load into the repository the specifications produced by these tools. Secondly, ISL specifications can be used as a neutral intermediate language to communicate with other processors.

Flexible specification model. The DB-MAIN repository can accommodate specifications of any abstraction level, and based on various paradigms; if asked to be so, DB-MAIN can be fairly tolerant of incomplete and inconsistent specifications and can represent schemas which include objects of different levels and of different paradigms (see figure 5); at the end of a complex process the analyst can request, through the Schema Analysis assistant, a precise analysis of the schema to sort out all the structural flaws.

Genericity. Both the repository schema and the functions of the tool are independent of the DMS and of the programming languages used in the application to be analyzed. They can be used to model and to process specifications initially expressed in various technologies. DB-MAIN includes several ways to specialize the generic features in order to make them compliant with a specific context, such as processing PL/1-IMS, COBOL-VSAM or C-ORACLE applications.

Multiplicity of views. The tool proposes a rich palette of presentation layouts, both in graphical and textual formats. In the next version, the analyst will be allowed to define customized views.

Rich transformation toolset. DB-MAIN proposes a transformational toolset of more than 25 basic functions; in addition, other, possibly more complex, transformations can be built by the analyst through specific scripts or through Voyager-2 functions.

Traceability. DB-MAIN explicitly records a history, which includes the successive states of the specifications as well as all the engineering activities performed by the analyst and
by the tool itself. Viewing these activities as specification transformations has proved an elegant way to formalize the links between the specification states. In particular, these links can be processed to explain how a conceptual object has been implemented (forward mapping), and how a technical object has been interpreted (reverse mapping).
13. Implementation and applications of DB-MAIN
We have developed DB-MAIN in C++ for MS-Windows machines. The repository has been implemented as an object-oriented database. For performance reasons, we have built a specific OO database manager which provides very short access and update times, and whose disc and core memory requirements are kept very low. For instance, a fully documented 40,000-object project can be developed on an 8-MB machine.

The first version of DB-MAIN was released in September 1995. It includes the basic processors and functions required to design, implement and reverse engineer large-size databases according to various DMS. Version 1 supports many of the features that have been described in this paper. Its repository can accommodate data structure specifications at any abstraction level (Section 5). It provides a 25-transformation toolkit (Section 6), four textual and two graphical views (Section 7), parsers for SQL, COBOL, CODASYL, IMS and RPG programs, the PDL pattern-matching engine, the dataflow graph inspector, the name processor (Section 8), the Transformation, Schema Analysis and Text Analysis assistants (Section 9), the Voyager-2 virtual machine and compiler (Section 10), and a simple history generator and its replay processor (Section 11). Among the other functions of Version 1, let us mention code generators for various DMS. Its estimated cost was about 20 man-years.

The DB-MAIN tool has been used to carry out several government and industrial projects. Let us describe five of them briefly.

• Design of a government agricultural accounting system. The initial information was found in the notebooks in which the farmers record the day-to-day basic data. These documents were manually encoded as giant entity types with more than 1850 attributes and up to 9 decomposition levels. Through conceptualization techniques, these structures were transformed into pure conceptual schemas of about 90 entity types each. Despite the unusual context for DBRE, we followed the general methodology described in Section 2:

Data structure extraction. Manual encoding; refinement through direct contacts with selected accounting officers.

Data structure conceptualization.
— Untranslation. The multivalued and compound attributes have been transformed into entity types; the entity types with identical semantics have been merged; serial attributes, i.e., attributes with similar names and identical types, have been replaced with multivalued attributes;
— De-optimization. The farmer is requested to enter the same data at different places; these redundancies have been detected and removed; the calculated data have been removed as well;
— Normalization. The schema included several implicit IS-A hierarchies, which have been expressed explicitly.
The cost for encoding, conceptualizing and integrating three notebooks was about 1 person-month. This rather unusual application of reverse engineering techniques was a very interesting experience because it proved that data structure engineering is a global domain which is difficult (and sterile) to partition into independent processes (design, reverse). It also proved that there is a strong need for highly generic CASE tools.

• Migrating a hybrid file/SQL social security system into a pure SQL database. Due to a strict, disciplined design, the programs were based on rather neat file structures, and used systematic clichés for integrity constraint management. This fairly standard two-month project comprised an interesting piece of work on name patterns to discover foreign keys. In addition, the file structures included complex identifying schemes which were difficult to represent in the DB-MAIN repository, and which required manual processing.

• Redocumenting the ORACLE repository of an existing OO CASE tool. Starting from various SQL scripts, partial schemas were extracted, then integrated. The conceptualization process was fairly easy due to systematic naming conventions for candidate and foreign keys. In addition, it was performed by a developer having a deep knowledge of the database. The process was completed in two days.

• Redocumenting a medium-size ORACLE hospital database. The database included about 200 tables and 2,700 columns. The largest table had 75 columns. The analyst quickly detected a dozen major tables with which one hundred views were associated. It appeared that these views defined, in a systematic way, a 5-level subtype hierarchy. Entering the description of these subtypes by hand would have required an estimated one week. We chose to build a customized function in PDL and Voyager-2 as follows. A pattern was developed to detect and analyze the create view statements based on the main tables. Each instantiation of this pattern triggered a Voyager-2 function which defined a subtype with the extracted attributes (a sketch of this kind of view analysis is given at the end of this section). Then, the function scanned these IS-A relations, detected the common attributes, and cleaned the supertype, removing inherited attributes and leaving the common ones only. This tool was developed in 2 days, and its execution took 1 minute. However, a less expert Voyager-2 programmer could have spent more time, so these figures cannot be generalized reliably. The total reverse engineering process cost 2 weeks.

• Reverse engineering of an RPG database. The application was made of 31 flat files comprising 550 fields (2 to 100 fields per file), and 24 programs totalling 30,000 LOC. The reverse engineering process resulted in a conceptual schema comprising 90 entity types, including 60 subtypes, and 74 relationship types. In the programs, data validation was concentrated in well-defined sections. In addition, the programs exhibited complex access patterns. Obviously, the procedural code was a rich source of hidden structures and constraints. Due to the good quality of this code, the program analysis tools were of little help, except to quickly locate some statements. In particular, pattern detection could be done visually, and program slicing yielded program chunks that were too large. Only the dataflow inspector was found useful, though in some programs this graph was too large, due to the presence of working variables common to several independent program sections. At that time, no RPG parser was available, so a Voyager-2 RPG extractor was developed
in about one week. The final conceptual schema was obtained in 3 weeks. The source file structures were found rather complex. Indeed, some non-trivial patterns were largely used, such as overlapping foreign keys, conditional foreign and primary keys, overloaded fields, and redundancies (Blaha and Premerlani, 1995). Surprisingly, the result was judged unnecessarily complex as well, due to the deep type/subtype hierarchy. This hierarchy was reduced until it seemed more tractable. This problem triggered an interesting discussion about the limits of this inheritance mechanism. It appeared that the precision vs. readability trade-off may lead to unnormalized conceptual schemas, a conclusion which was often formulated against object class hierarchies in OO databases or in OO applications.
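To give a flavour of the ad hoc tooling used in the hospital database project described above, the following Python sketch detects create view statements defined over a main table and records each view as a candidate subtype. The table and column names are invented, and the supertype clean-up performed by the actual PDL/Voyager-2 combination is omitted.

import re

# Hypothetical, simplified rendition of the view analysis: detect
# "create view V as select ... from <main table> where <condition>" statements
# and record each view as a candidate subtype of that table.
VIEW = re.compile(
    r"create\s+view\s+(\w+)\s+as\s+select\s+(.+?)\s+from\s+(\w+)\s+where\s+(.+?);",
    re.IGNORECASE | re.DOTALL)

def subtype_candidates(sql_script, main_tables):
    """Return {supertype: [(subtype name, attributes, defining condition), ...]}."""
    hierarchy = {}
    for view, columns, table, condition in VIEW.findall(sql_script):
        if table.upper() in main_tables:
            attributes = [c.strip() for c in columns.split(",")]
            hierarchy.setdefault(table.upper(), []).append(
                (view.upper(), attributes, condition.strip()))
    return hierarchy

script = """
create view INPATIENT as select PID, NAME, WARD from PATIENT where KIND = 'I';
create view OUTPATIENT as select PID, NAME, CLINIC from PATIENT where KIND = 'O';
"""
print(subtype_candidates(script, {"PATIENT"}))
# -> {'PATIENT': [('INPATIENT', ['PID', 'NAME', 'WARD'], "KIND = 'I'"),
#                 ('OUTPATIENT', ['PID', 'NAME', 'CLINIC'], "KIND = 'O'")]}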
14. Conclusions
Considering the requirements outlined in Section 3, few (if any) commercial CASE/CARE tools offer the functions necessary to carry out the DBRE of large and complex applications in a really effective way. In particular, two important weaknesses should be pointed out. Both derive from over-simplistic hypotheses about the way the application was developed. First, extracting the data structures from the operational code is most often limited to the analysis of the data structure declaration statements. No help is provided for further analyzing, e.g., the procedural sections of the programs, in which essential additional information can be found. Secondly, the logical schema is considered a straightforward conversion of the conceptual schema, according to simple translation rules such as those found in most textbooks and CASE tools. Consequently, the conceptualization phase uses simple rules as well. Most actual database structures appear more sophisticated, however, resulting from the application of non-standard translation rules and including sophisticated performance-oriented constructs. Current CARE tools are completely blind to such structures, which they carefully transmit into the conceptual schema, producing, e.g., optimized IMS conceptual schemas instead of pure conceptual schemas.

The DB-MAIN CASE tool presented in this paper includes several CARE components which try to meet the requirements described in Section 3. The first version⁹ has been used successfully in several real-size projects. These experiments have also put forward several technical and methodological problems, which we describe briefly.

• Functional limits of the tool. Though DB-MAIN Version 1 already offers a reasonable set of integrity constraints, a more powerful model was often needed to better describe physical data structures or to express semantic structures. Some useful schema transformations were lacking, and the scripting facilities of the assistants were found very interesting, but not powerful enough in some situations. As expected, several users asked for "full program reverse engineering".

• Problem and tool complexity. Reverse engineering is a software engineering domain based on specific, and still unstable, concepts and techniques, and in which much remains to be learned. Not surprisingly, true CARE tools are complex, and DB-MAIN is no exception when used at its full potential. Mastering some of its functions requires intensive training, which can be justified for complex projects only. In addition, writing and testing specific PDL pattern libraries and Voyager-2 functions can cost several weeks.
• Performance. While some components of DB-MAIN proved very efficient when processing large projects with multiple sources, some others slowed down as the size of the specifications grew. That was the case for the pattern-matching engine when it parsed large texts for a dozen patterns, and for the dataflow graph constructor, which uses the former. However, no dramatic improvement can be expected, due to the intrinsic complexity of pattern-matching algorithms on standard machine architectures.

• Viewing the specifications. When a source text has been parsed, DB-MAIN builds a first-cut logical schema. Though the tool proposes automatic graphical layouts, positioning the extracted objects in a natural way is up to the analyst. This task was often considered painful, even on a large screen, for schemas comprising many objects and connections. In the same realm, several users found that the graphical representations were not as attractive as expected for very large schemas, and that the textual views often proved more powerful and less cumbersome.

The second version, which is under development, will address several of the observed weaknesses of Version 1, and will include a richer specification model and extended toolsets. Let us mention some important extensions: a view derivation mechanism, which will solve the problem of mastering large schemas; a view integration processor to build a global schema from extracted partial views; the first version of the MDL compiler, of the methodology engine, and of the history manager; and an extended program slicer. The repository will be extended to the representation of additional integrity constraints, and of other system components such as programs. A more powerful version of the Voyager-2 language and a more sophisticated Transformation assistant (evoked in Section 9) are planned for Version 2 as well. We also plan to experiment with the concept of design recovery on actual applications.

Acknowledgments

The detailed comments by the anonymous reviewers have been most useful to improve the readability and the consistency of this paper, and to make it as informative as possible. We would also like to thank Linda Wills for her friendly encouragements.

Notes

1. A table is in 4NF if all the non-trivial multivalued dependencies are functional. The BCNF (Boyce-Codd normal form) is weaker but has a more handy definition: a table is in BCNF if each functional determinant is a key.
2. A CASE tool offering a rich toolset for reverse engineering is often called a CARE (Computer-Aided Reverse Engineering) tool.
3. A Data Management System (DMS) is either a File Management System (FMS) or a Database Management System (DBMS).
4. Though some practices (e.g., disciplined use of COPY or INCLUDE meta-statements to include common data structure descriptions in programs), and some tools (such as data dictionaries), may simulate such centralized schemas.
5. There is no miracle here: for instance, the data are imported, or organizational and behavioural rules make them satisfy these constraints.
6. But methodology-aware if design recovery is intended. This aspect has been developed in Hainaut et al. (1994), and will be evoked in Section 11.
7. For instance, Belgium commonly uses three legal languages, namely Dutch, French and German. As a consequence, English is often used as a de facto common language.
8. The part of the DB-MAIN project in charge of this aspect is the DB-Process sub-project, fully supported by the Communauté Française de Belgique.
9. In order to develop contacts and collaboration, an Education version (complete but limited to small applications) and its documentation have been made available. This free version can be obtained by contacting the first author at [email protected].
References

Andersson, M. 1994. Extracting an entity relationship schema from a relational database through reverse engineering. In Proc. of the 13th Int. Conf. on ER Approach, Manchester: Springer-Verlag.
Batini, C., Ceri, S., and Navathe, S.B. 1992. Conceptual Database Design. Benjamin-Cummings.
Batini, C., Di Battista, G., and Santucci, G. 1993. Structuring primitives for a dictionary of entity relationship data schemas. IEEE TSE, 19(4).
Blaha, M.R. and Premerlani, W.J. 1995. Observed idiosyncracies of relational database designs. In Proc. of the 2nd IEEE Working Conf. on Reverse Engineering, Toronto: IEEE Computer Society Press.
Bolois, G. and Robillard, P. 1994. Transformations in reengineering techniques. In Proc. of the 4th Reengineering Forum, Reengineering in Practice, Victoria, Canada.
Casanova, M. and Amaral de Sa, J. 1983. Designing entity relationship schemas for conventional information systems. In Proc. of ERA, pp. 265-278.
Casanova, M.A. and Amaral de Sa, J. 1984. Mapping uninterpreted schemes into entity-relationship diagrams: Two applications to conceptual schema design. IBM J. Res. & Develop., 28(1).
Chiang, R.H., Barron, T.M., and Storey, V.C. 1994. Reverse engineering of relational databases: Extraction of an EER model from a relational database. Journ. of Data and Knowledge Engineering, 12(2):107-142.
Date, C.J. 1994. An Introduction to Database Systems. Vol. 1, Addison-Wesley.
Davis, K.H. and Arora, A.K. 1985. A methodology for translating a conventional file system into an entity-relationship model. In Proc. of ERA, IEEE/North-Holland.
Davis, K.H. and Arora, A.K. 1988. Converting a relational database model to an entity relationship model. In Proc. of ERA: A Bridge to the User, North-Holland.
Edwards, H.M. and Munro, M. 1995. Deriving a logical model for a system using recast method. In Proc. of the 2nd IEEE WC on Reverse Engineering, Toronto: IEEE Computer Society Press.
Fickas, S.F. 1985. Automating the transformational development of software. IEEE TSE, SE-11:1268-1277.
Fong, J. and Ho, M. 1994. Knowledge-based approach for abstracting hierarchical and network schema semantics. In Proc. of the 12th Int. Conf. on ER Approach, Arlington-Dallas: Springer-Verlag.
Fonkam, M.M. and Gray, W.A. 1992. An approach to eliciting the semantics of relational databases. In Proc. of the 4th Int. Conf. on Advanced Information Systems Engineering (CAiSE'92), pp. 463-480, LNCS, Springer-Verlag.
Elmasri, R. and Navathe, S. 1994. Fundamentals of Database Systems. Benjamin-Cummings.
Hainaut, J.-L. 1981. Theoretical and practical tools for data base design. In Proc. Intern. VLDB Conf., ACM/IEEE.
Hainaut, J.-L. 1991. Entity-generating schema transformation for entity-relationship models. In Proc. of the 10th ERA, San Mateo (CA), North-Holland.
Hainaut, J.-L., Cadelli, M., Decuyper, B., and Marchand, O. 1992. Database CASE tool architecture: Principles for flexible design strategies. In Proc. of the 4th Int. Conf. on Advanced Information System Engineering (CAiSE'92), Manchester: Springer-Verlag, LNCS.
Hainaut, J.-L., Chandelon, M., Tonneau, C., and Joris, M. 1993a. Contribution to a theory of database reverse engineering. In Proc. of the IEEE Working Conf. on Reverse Engineering, Baltimore: IEEE Computer Society Press.
Hainaut, J.-L., Chandelon, M., Tonneau, C., and Joris, M. 1993b. Transformational techniques for database reverse engineering. In Proc. of the 12th Int. Conf. on ER Approach, Arlington-Dallas: E/R Institute and Springer-Verlag, LNCS.
Hainaut, J.-L., Englebert, V., Henrard, J., Hick, J.-M., and Roland, D. 1994. Evolution of database applications: The DB-MAIN approach. In Proc. of the 13th Int. Conf. on ER Approach, Manchester: Springer-Verlag.
Hainaut, J.-L. 1995. Transformation-based database engineering. Tutorial notes, VLDB'95, Zürich, Switzerland (available from [email protected]).
Hainaut, J.-L. 1996. Specification preservation in schema transformations—Application to semantics and statistics. Elsevier: Data & Knowledge Engineering (to appear).
Hainaut, J.-L., Roland, D., Hick, J.-M., Henrard, J., and Englebert, V. 1996. Database design recovery. In Proc. of CAiSE'96, Springer-Verlag.
Halpin, T.A. and Proper, H.A. 1995. Database schema transformation and optimization. In Proc. of the 14th Int. Conf. on ER/OO Modelling (ERA), Springer-Verlag.
Hall, P.A.V. (Ed.) 1992. Software Reuse and Reverse Engineering in Practice. Chapman & Hall.
IEEE, 1990. Special issue on Reverse Engineering, IEEE Software, January, 1990.
Johannesson, P. and Kalman, K. 1990. A method for translating relational schemas into conceptual schemas. In Proc. of the 8th ERA, Toronto, North-Holland.
Joris, M., Van Hoe, R., Hainaut, J.-L., Chandelon, M., Tonneau, C., Bodart, F. et al. 1992. PHENIX: Methods and tools for database reverse engineering. In Proc. 5th Int. Conf. on Software Engineering and Applications, Toulouse, December 1992, EC2 Publish.
Kobayashi, I. 1986. Losslessness and semantic correctness of database schema transformation: Another look of schema equivalence. Information Systems, 11(1):41-59.
Kozaczynski and Lilien, 1987. An extended entity-relationship (E2R) database specification and its automatic verification and transformation. In Proc. of ERA Conf.
Markowitz, K.M. and Makowsky, J.A. 1990. Identifying extended entity-relationship object structures in relational schemas. IEEE Trans. on Software Engineering, 16(8).
Navathe, S.B. 1980. Schema analysis for database restructuring. ACM TODS, 5(2).
Navathe, S.B. and Awong, A. 1988. Abstracting relational and hierarchical data with a semantic data model. In Proc. of ERA: A Bridge to the User, North-Holland.
Nilsson, E.G. 1985. The translation of COBOL data structure to an entity-rel-type conceptual schema. In Proc. of ERA, IEEE/North-Holland.
Petit, J.-M., Kouloumdjian, J., Bouliaut, J.-F., and Toumani, F. 1994. Using queries to improve database reverse engineering. In Proc. of the 13th Int. Conf. on ER Approach, Springer-Verlag: Manchester.
Premerlani, W.J. and Blaha, M.R. 1993. An approach for reverse engineering of relational databases. In Proc. of the IEEE Working Conf. on Reverse Engineering, IEEE Computer Society Press.
Potts, C. and Bruns, G. 1988. Recording the reasons for design decisions. In Proc. of ICSE, IEEE Computer Society Press.
Rauh, O. and Stickel, E. 1995. Standard transformations for the normalization of ER schemata. In Proc. of the CAiSE'95 Conf., Jyväskylä, Finland, LNCS, Springer-Verlag.
Rock-Evans, R. 1990. Reverse engineering: Markets, methods and tools. OVUM report.
Rosenthal, A. and Reiner, D. 1988. Theoretically sound transformations for practical database design. In Proc. of ERA Conf.
Rosenthal, A. and Reiner, D. 1994. Tools and transformations—Rigorous and otherwise—for practical database design. ACM TODS, 19(2).
Rolland, C. 1993. Modeling the requirements engineering process. In Proc. of the 3rd European-Japanese Seminar in Information Modeling and Knowledge Bases, Budapest (preprints).
Sabanis, N. and Stevenson, N. 1992. Tools and techniques for data remodelling Cobol applications. In Proc. 5th Int. Conf. on Software Engineering and Applications, Toulouse, 7-11 December, pp. 517-529, EC2 Publish.
Selfridge, P.G., Waters, R.C., and Chikofsky, E.J. 1993. Challenges to the field of reverse engineering. In Proc. of the 1st WC on Reverse Engineering, pp. 144-150, IEEE Computer Society Press.
Shoval, P. and Shreiber, N. 1993. Database reverse engineering: From relational to the binary relationship model. Data and Knowledge Engineering, 10(10).
Signore, O., Loffredo, M., Gregori, M., and Cima, M. 1994. Reconstruction of E-R schema from database applications: A cognitive approach. In Proc. of the 13th Int. Conf. on ER Approach, Manchester: Springer-Verlag.
Springsteel, F.N. and Kou, C. 1990. Reverse data engineering of E-R designed relational schemas. In Proc. of Databases, Parallel Architectures and their Applications.
Teorey, T.J. 1994. Database Modeling and Design: The Fundamental Principles. Morgan Kaufmann.
Vermeer, M. and Apers, P. 1995. Reverse engineering of relational databases. In Proc. of the 14th Int. Conf. on ER/OO Modelling (ERA).
Weiser, M. 1984. Program slicing. IEEE TSE, 10:352-357.
Wills, L., Newcomb, P., and Chikofsky, E. (Eds.) 1995. Proc. of the 2nd IEEE Working Conf. on Reverse Engineering. Toronto: IEEE Computer Society Press.
Winans, J. and Davis, K.H. 1990. Software reverse engineering from a currently existing IMS database to an entity-relationship model. In Proc. of ERA: the Core of Conceptual Modelling, pp. 345-360, North-Holland.
Automated Software Engineering, 3, 47-76 (1996) © 1996 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands.
Understanding Interleaved Code SPENCER RUGABER, KURT STIREWALT College of Computing, Georgia Institute of Technology, Atlanta, GA
{spencer,kurt}@cc.gatech.edu
LINDA M. WILLS
[email protected] School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA
Abstract. Complex programs often contain multiple, interwoven strands of computation, each responsible for accomplishing a distinct goal. The individual strands responsible for each goal are typically delocalized and overlap rather than being composed in a simple linear sequence. We refer to these code fragments as being interleaved. Interleaving may be intentional (for example, in optimizing a program, a programmer might use some intermediate result for several purposes), or it may creep into a program unintentionally, due to patches, quick fixes, or other hasty maintenance practices. To understand this phenomenon, we have looked at a variety of instances of interleaving in actual programs and have distilled characteristic features. This paper presents our characterization of interleaving and the implications it has for tools that detect certain classes of interleaving and extract the individual strands of computation. Our exploration of interleaving has been done in the context of a case study of a corpus of production mathematical software, written in Fortran, from the Jet Propulsion Laboratory. This paper also describes our experiences in developing tools to detect specific classes of interleaving in this software, driven by the need to enhance a formal description of this software library's components. The description, in turn, aids in the automated component-based synthesis of software using the library. With every leaf a miracle. — Walt Whitman.
Keywords: software understanding, interleaving, domain models, specification extraction, analysis tools.
1. Introduction
Imagine being handed a software system you have never seen before. Perhaps you need to track down a bug, rewrite the software in another language or extend it in some way. We know that software maintenance tasks such as these consume the majority of software costs (Boehm, 1981), and we know that reading and understanding the code requires more effort than actually making the changes (Fjeldstad and Hamlen, 1979). But we do not know what makes understanding the code itself so difficult. Letovsky has observed that programmers engaged in software understanding activities typically ask "how" questions and "why" questions (Letovsky, 1988). The former require an in-depth knowledge of the programming language and the ways in which programmers express their software designs. This includes knowledge of common algorithms and data structures and even concerns style issues, such as indentation and use of comments. Nevertheless, the answers to "how" questions can be derived from the program text. "Why" questions are more troublesome. Answering them requires not only comprehending the program text but relating it to the program's purpose: solving some sort of problem. And
the problem being solved may not be explicitly stated in the program text; nor is the rationale the programmer had for choosing the particular solution usually visible. This paper is concerned with a specific difficulty that arises when trying to answer "why" questions about computer programs. In particular, it is concerned with the phenomenon of interleaving, in which one section of a program accomplishes several purposes, and disentangling the code responsible for each purpose is difficult. Unraveling interleaved code involves discovering the purpose of each strand of computation, as well as understanding why the programmer decided to interleave the strands. To demonstrate this problem, we examine an example program in a step-by-step fashion, trying to answer the questions "why is this program the way it is?" and "what makes it difficult to understand?"

1.1. NPEDLN
The Fortran program, called NPEDLN, is part of the SPICELIB library obtained from the Jet Propulsion Laboratory and intended to help space scientists analyze data returned from space missions. The acronym NPEDLN stands for Nearest Point on Ellipsoid to Line. The ellipsoid is specified by the lengths of its three semi-axes (A, B, and C), which are oriented with the x, y, and z coordinate axes. The line is specified by a point (LINEPT) and a direction vector (LINEDR). The nearest point is contained in a variable called PNEAR. The full program consists of 565 lines; an abridged version can be found in the Appendix with a brief description of subroutines it calls and variables it uses. The executable statements, with comments and declarations removed, are shown in Figure 1. The lines of code in NPEDLN that actually compute the nearest point are somewhat hard to locate. One reason for this has to do with error checking. It turns out that SPICELIB includes an elaborate mechanism for reporting and recovering from errors, and roughly half of the code in NPEDLN is used for this purpose. We have indicated those lines by shading in Figure 2. The important point to note is that although it is natural to program in a way that intersperses error checks with computational code, it is not necessary to do so. In principle, an entirely separate routine could be constructed to make the checks and NPEDLN called only when all the checks are passed. Although this approach would require redundant computation and potentially more total lines of code, the resultant computations in NPEDLN would be shorter and easier to follow. In some sense, the error handling code and the rest of the routine realize independent plans. We use the term plan to denote a description or representation of a computational structure that the designers have proposed as a way of achieving some purpose or goal in a program. This definition is distilled from definitions in (Letovsky and Soloway, 1986, Rich and Waters, 1990, Selfridge et al., 1993). Note that a plan is not necessarily stereotypical or used repeatedly; it may be novel or idiosyncratic. Following (Rich and Waters, 1990, Selfridge et al., 1993), we reserve the term cliche for a plan that represents a standard, stereotypical form, which can be detected by recognition techniques, such as (Hartman, 1991, Letovsky, 1988, Kozaczynski and Ning, 1994, Quilici, 1994, Rich and Wills, 1990, Wills, 1992). Plans can occur at any level of abstraction from architectural overviews to code. By extracting the error checking plan from NPEDLN, we get the much smaller and, presumably, more understandable program shown in Figure 3.
Figure 1. NPEDLN minus comments and declarations.
Figure 2. Code with error handling highlighted.
The structure of an understanding process begins to emerge: detect a plan, such as error checking, in the code and extract it, leaving a smaller and more coherent residue for further analysis; document the extracted plan independently; and note the ways in which it interacts with the rest of the code. We can apply this approach further to NPEDLN's residual code in Figure 3. NPEDLN has a primary goal of computing the nearest point on an ellipsoid to a specified line. It also has a related goal of ensuring that the computations involved have stable numerical behavior; that is, that the computations are accurate in the presence of a wide range of numerical inputs. A standard trick in numerical programming for achieving stability is to scale the data involved in a computation, perform the computation, and then unscale the results.
Figure 3. The residual code without the error handling plan.
The code responsible for doing this in NPEDLN is scattered throughout the program's text. It is highlighted in the excerpt shown in Figure 4. The delocalized nature of this "scale-unscale" plan makes it difficult to gather together all the pieces involved for consistent maintenance. It also gets in the way of understanding the rest of the code, since it provides distractions that must be filtered out. Letovsky and Soloway's cognitive study (Letovsky and Soloway, 1986) shows the deleterious effects of delocalization on comprehension and maintenance. When we extract the scale-unscale code from NPEDLN, we are left with the smaller code segment shown in Figure 5 that more directly expresses the program's purpose: computing the nearest point. There is one further complication, however. It turns out that NPEDLN not only computes the nearest point from a line to an ellipsoid, it also computes the shortest distance between the line and the ellipsoid. This additional output (DIST) is convenient to construct because it can make use of intermediate results obtained while computing the primary output (PNEAR). This is illustrated in Figure 6. (The computation of DIST using VDIST is actually the last computation performed by the subroutine NPELPT, which NPEDLN calls; we have pulled this computation out of NPELPT for clarity of presentation.) Note that an alternative way to structure SPICELIB would be to have separate routines for computing the nearest point and the distance. The two routines would each be more coherent, but the common intermediate computations would have to be repeated, both in the code and at runtime. The "pure" nearest point computation is shown in Figure 7. It is now much easier to see the primary computational purpose of this code.
3. ... intersect with all clusters $C_{j,k}$ in one of the non-used metric axes $M_j$, $j \in \{1..5\}$. The clusters in the resulting set contain potential code clone fragments under the criteria $M_{curr}$ and $M_j$, and form a composite metric axis $M_{curr \circ j}$. Mark $M_j$ as used and set the current axis $M_{curr} = M_{curr \circ j}$.

4. If all metric axes have been considered, then stop; else go to Step 3.

The pattern matching engine uses either the computed Euclidean distance or clustering in one or more metric dimensions combined as a similarity measure between program constructs. As a refinement, the user may restrict the search to code fragments having minimum size or complexity. The metric-based clone detection analysis has been applied to several medium-sized production C programs. In tcsh, a 45 kLOC Unix shell program, our analysis has discovered 39 clusters, or groups of similar functions, of average size 3 functions per cluster, resulting in a total of 17.7 percent of potential system duplication at the function level. In bash, a 40 kLOC Unix shell program, the analysis has discovered 25 clusters, of average size 5.84 functions per cluster, resulting in a total of 23 percent of potential code duplication at the function level. In CLIPS, a 34 kLOC expert system shell, we detected 35 clusters of similar functions of average size 4.28 functions per cluster, resulting in a total of 20 percent of potential system duplication at the function level. Manual inspection of the above results, combined with more detailed Dynamic Programming re-calculation of distances, gave some statistical data regarding false positives. These results are given in Table 1. Different programs give different distributions of false alarms, but generally the closer the distance is to 0.0, the more accurate the result is. The following section discusses in detail the other code-to-code matching technique we developed, which is based on Dynamic Programming.
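To make the direct metric comparison concrete, here is a minimal C sketch that treats each function as a five-metric tuple, computes pairwise Euclidean distances, and reports pairs below a threshold as clone candidates. It is an illustration only, not the authors' implementation; the struct layout, metric values, and threshold are hypothetical.

    #include <math.h>
    #include <stdio.h>

    #define NUM_METRICS 5   /* e.g., fan-out, D-complexity, McCabe, Albrecht, Kafura */

    /* Per-function metric vector, assumed to be computed elsewhere. */
    typedef struct {
        const char *name;
        double      m[NUM_METRICS];
    } FunctionMetrics;

    /* Euclidean distance between two metric vectors. */
    static double metric_distance(const FunctionMetrics *a, const FunctionMetrics *b)
    {
        double sum = 0.0;
        for (int k = 0; k < NUM_METRICS; k++) {
            double d = a->m[k] - b->m[k];
            sum += d * d;
        }
        return sqrt(sum);
    }

    int main(void)
    {
        /* Illustrative data only. */
        FunctionMetrics fns[] = {
            { "parse_cmd",  { 4, 7, 3, 12, 9 } },
            { "parse_cmd2", { 4, 7, 3, 12, 9 } },
            { "print_help", { 1, 2, 1,  3, 2 } },
        };
        const int n = sizeof fns / sizeof fns[0];
        const double threshold = 0.5;  /* pairs closer than this are clone candidates */

        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++) {
                double d = metric_distance(&fns[i], &fns[j]);
                if (d <= threshold)
                    printf("candidate clone pair: %s / %s (distance %.2f)\n",
                           fns[i].name, fns[j].name, d);
            }
        return 0;
    }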
2.3. Dynamic Programming Based Similarity Analysis
The Dynamic Programming pattern matcher is used (Konto, 1994), (Kontogiannis, 1995) to find the best alignment between two code fragments. The distance between the two code fragments is given as a summation of comparison values, as well as of insertion and deletion costs corresponding to insertions and deletions that have to be applied in order to achieve the best alignment between these two code fragments. A program feature vector is used for the comparison of two statements. The features are stored as attribute values in a frame-based structure representing expressions and statements in the AST. The cumulative similarity measure $D$ between two code fragments $\mathcal{P}$, $\mathcal{M}$ is calculated using the function
$D : FeatureVector \times FeatureVector \rightarrow Real$

where:

$$D(\mathcal{E}(1,p,\mathcal{P}), \mathcal{E}(1,j,\mathcal{M})) = \min \begin{cases} \Delta(p, j-1, \mathcal{P}, \mathcal{M}) + D(\mathcal{E}(1,p,\mathcal{P}), \mathcal{E}(1,j-1,\mathcal{M})) \\ I(p-1, j, \mathcal{P}, \mathcal{M}) + D(\mathcal{E}(1,p-1,\mathcal{P}), \mathcal{E}(1,j,\mathcal{M})) \\ C(p-1, j-1, \mathcal{P}, \mathcal{M}) + D(\mathcal{E}(1,p-1,\mathcal{P}), \mathcal{E}(1,j-1,\mathcal{M})) \end{cases} \quad (1)$$

and,

• $\mathcal{M}$ is the model code fragment
• $\mathcal{P}$ is the input code fragment to be compared with the model $\mathcal{M}$
• $\mathcal{E}(i, j, \mathcal{Q})$ is a program feature vector from position $i$ to position $j$ in code fragment $\mathcal{Q}$
• $D(V_x, V_y)$ is the distance between two feature vectors $V_x$, $V_y$
• $\Delta(i, j, \mathcal{P}, \mathcal{M})$ is the cost of deleting the $j$th statement of $\mathcal{M}$, at position $i$ of the fragment $\mathcal{P}$
• $I(i, j, \mathcal{P}, \mathcal{M})$ is the cost of inserting the $i$th statement of $\mathcal{P}$ at position $j$ of the model $\mathcal{M}$, and
• $C(i, j, \mathcal{P}, \mathcal{M})$ is the cost of comparing the $i$th statement of the code fragment $\mathcal{P}$ with the $j$th statement of the model $\mathcal{M}$. The comparison cost is calculated by comparing the corresponding feature vectors. Currently, we compare ratios of variables set, used per statement, data types used or set, and comparisons based on metric values.
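As an illustration of recurrence (1), the following C sketch aligns two statement sequences using constant insertion and deletion costs and a metric-based comparison cost (the metric variant described later in this section). It is a hedged example rather than the Ariadne matcher: statement feature vectors are reduced to five numeric metrics, and the cost constants and sample data are invented for the demonstration.

    #include <math.h>
    #include <stdio.h>

    #define NUM_METRICS 5
    #define MAXLEN 64

    static const double INSERT_COST = 1.0;   /* constant costs, as in the paper */
    static const double DELETE_COST = 1.0;

    /* One statement is summarized here by its five metric values. */
    typedef struct { double m[NUM_METRICS]; } StmtFeatures;

    /* Comparison cost between one input and one model statement (metric variant). */
    static double compare_cost(const StmtFeatures *p, const StmtFeatures *q)
    {
        double sum = 0.0;
        for (int k = 0; k < NUM_METRICS; k++) {
            double d = p->m[k] - q->m[k];
            sum += d * d;
        }
        return sqrt(sum);
    }

    static double min3(double a, double b, double c)
    {
        double m = a < b ? a : b;
        return m < c ? m : c;
    }

    /* Cumulative distance D between input P[0..np-1] and model M[0..nm-1]. */
    static double dp_distance(const StmtFeatures *P, int np,
                              const StmtFeatures *M, int nm)
    {
        static double D[MAXLEN + 1][MAXLEN + 1];

        D[0][0] = 0.0;
        for (int i = 1; i <= np; i++) D[i][0] = D[i - 1][0] + INSERT_COST;
        for (int j = 1; j <= nm; j++) D[0][j] = D[0][j - 1] + DELETE_COST;

        for (int i = 1; i <= np; i++)
            for (int j = 1; j <= nm; j++)
                D[i][j] = min3(D[i][j - 1] + DELETE_COST,     /* delete model statement j */
                               D[i - 1][j] + INSERT_COST,     /* insert input statement i */
                               D[i - 1][j - 1] + compare_cost(&P[i - 1], &M[j - 1]));
        return D[np][nm];
    }

    int main(void)
    {
        /* Illustrative feature values only. */
        StmtFeatures model[] = { {{1,0,1,2,1}}, {{2,1,1,3,2}}, {{1,1,0,1,1}} };
        StmtFeatures input[] = { {{1,0,1,2,1}}, {{0,0,0,1,0}}, {{2,1,1,3,2}}, {{1,1,0,1,1}} };

        printf("distance = %.2f\n", dp_distance(input, 4, model, 3));
        return 0;
    }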
Note that insertion and deletion costs are used by the Dynamic Programming algorithm to calculate the best fit between two code fragments. An intuitive interpretation of the best fit using insertions and deletions is "if we insert statement i of the input at position j of the model, then the model and the input have the smallest feature vector difference." The quality and the accuracy of the comparison cost is based on the program features selected and the formula used to compare these features. For simplicity in the implementation we have attached constant real values as insertion and deletion costs. Table 1 summarizes statistical data regarding false alarms when Dynamic Programming comparison was applied to functions that under direct metric comparison have given distance 0.0. The column labeled Distance Range gives the value range of distances between functions using the Dynamic Programming approach. The column labeled False Alarms contains the percentage of functions that are not clones but have been identified as such. The column labeled Partial Clones contains the percentage of functions which correspond
Table 1. False alarms for the Clips program

Distance Range    False Alarms    Partial Clones    Positive Clones
0.0                 0.0%           10.0%             90.0%
0.01 - 0.99         6.0%           16.0%             78.0%
1.0 - 1.49          8.0%            3.0%             89.0%
1.5 - 1.99         30.0%           37.0%             33.0%
2.0 - 2.99         36.0%           32.0%             32.0%
3.0 - 3.99         56.0%           13.0%             31.0%
4.0 - 5.99         82.0%           10.0%              8.0%
6.0 - 15.0        100.0%            0.0%              0.0%
only in parts to cut-and-paste operations. Finally, the column labeled Positive Clones contains the percentage of functions clearly identified as cut-and-paste operations. The matching process between two code fragments $\mathcal{M}$ and $\mathcal{P}$ is discussed with an example later in this section and is illustrated in Fig. 3. The comparison cost function $C(i, j, \mathcal{M}, \mathcal{P})$ is the key factor in producing the final distance result when DP-based matching is used. There are many program features that can be considered to characterize a code fragment (indentation, keywords, metrics, uses and definitions of variables). Within the experimentation of this approach we used the following three different categories of features:

1. Definitions and uses of variables, as well as literal values within a statement:
   (A) $Feature_1 : Statement \rightarrow String$ denotes the set of variables used within a statement,
   (B) $Feature_2 : Statement \rightarrow String$ denotes the set of variables defined within a statement,
   (C) $Feature_3 : Statement \rightarrow String$ denotes the set of literal values (i.e., numbers, strings) within a statement (e.g., in a printf statement).

2. Definitions and uses of data types:
   (A) $Feature_1 : Statement \rightarrow String$ denotes the set of data type names used within a statement,
   (B) $Feature_2 : Statement \rightarrow String$ denotes the set of data type names defined within a statement.

The comparison cost of the $i$th statement in the input $\mathcal{P}$ and the $j$th statement of the model $\mathcal{M}$ for the first two categories is calculated as:

$$C(\mathcal{P}_i, \mathcal{M}_j) = \frac{1}{v} \sum_{m=1}^{v} \frac{card(InputFeature_m(\mathcal{P}_i) \cap ModelFeature_m(\mathcal{M}_j))}{card(InputFeature_m(\mathcal{P}_i) \cup ModelFeature_m(\mathcal{M}_j))} \quad (2)$$

where $v$ is the size of the feature vector, or in other words how many features are used.

3. Five metric values which are calculated compositionally from the statement level to function and file level. The comparison cost of the $i$th statement in the input $\mathcal{P}$ and the $j$th statement of the model $\mathcal{M}$ when the five metrics are used is calculated as:

$$C(\mathcal{P}_i, \mathcal{M}_j) = \sqrt{\sum_{k=1}^{5} \left( M_k(\mathcal{P}_i) - M_k(\mathcal{M}_j) \right)^2} \quad (3)$$
The insertion and deletion costs reflect the tolerance of the user towards partial matching (i.e. how much noise in terms of insertions and deletions is allowed before the matcher fails). Higher insertion and deletion costs indicate smaller tolerance, especially if cutoff thresholds are used (i.e. terminate matching if a certain threshold is exceeded), while smaller values indicate higher tolerance.
•
The values for insertion and deletion should be higher than the threshold value by which two statements can be considered "similar", otherwise an insertion or a deletion could be chosen instead of a match.
•
A lower insertion cost than the corresponding deletion cost indicates the preference of the user to accept a code fragment V that is written by inserting new statements to the model M. The opposite holds when the deletion cost is lower than the corresponding insertion cost. A lower deletion cost indicates the preference of the Ubcr to accept a code fragment V that is written by deleting statements from the model M. Insertion and deletion costs are constant values throughout the comparison process and can be set empirically.
When different comparison criteria are used different distances are obtained. In Fig.2 (Clips) distances calculated using Dynamic Programming are shown for 138 pairs of functions (X - axis) that have been already identified as clones (i.e. zero distance) using the direct per function metric comparison. The dashed line shows distance results when definitions and uses of variables are used as features in the dynamic programming approach, while the solid line shows the distance results obtained when the five metrics are used as features. Note that in the Dynamic Programming based approach the metrics are used at
PATTERN MATCHING FOR CLONE AND CONCEPT DETECTION
Distances between Function pairs (Clips)
91
Distances between Function Pairs (Bash)
- Distances on definitions and uses of variables - Distances on definitions and uses of variables _ Distances on data and controlflowmeasurements. _ Distances on data and control flow measurements
3h
1
C 40
60 80 Function Pairs
100
120
140
0
Figure 2. Distances between function pairs of possible function clones using DP-based matching.
the statement level, instead of the begin-end block level when metrics direct comparison is performed. As an example consider the following statements M and V:
ptr = head; while(ptr != NULL && !found) { i f ( p t r - > i t e i t i == s e a r c h l t e m ) found = 1 else ptr = ptr->next;
while(ptr != NULL && !found) { if(ptr->item == searchltem)
92
KONTOGIANNIS ET AL.
^
•Ise-part
1 y.
M
ptr I-..
A
^-l-T-
then-part ptr->lten •>
i£().
£ounJk> 1
t^—1—
ptx->it«m H . .
\ than-purt
^-^ alfls part
prlntfO..
i2i found • 1
Figure 3. The matching process between two code fragments. Insertions are represented as horizontal hnes, deletions as vertical lines and, matches as diagonal hnes.
{ printf("ELEMENT FOUND found = 1;
%s\n", searchltem);
} else ptr = ptr->next;
The Dynamic Programming matching based on definitions and uses of variables is illustrated in Fig. 3. In the first grid the two code fragments are initially considered. At position (0, 0) of the first grid a deletion is considered as it gives the best cumulative distance to this point (assuming there will be a match at position (0, 1). The comparison of the two composite while statements in the first grid at position (0, 1), initiates a nested match (second grid). In the second grid the comparison of the composite i f - t h e n - e l s e statements at position (1,1) initiates a new nested match. In the third grid, the comparison of the composite t h e p a r t of the i f - t h e n - e l s e statements initiates the final fourth nested match. Finally, in the fourth grid at position (0, 0), an insertion has been detected, as it gives the best cumulative distance to this point (assuming a potential match in (1,0).
PATTERN MATCHING FOR CLONE AND CONCEPT DETECTION
93
When a nested match process finishes it passes its result back to the position from which it was originally invoked and the matching continues from this point on. 3.
Concept To Code Matching
The concept assignment (Biggerstaff, 1994) problem consists of assigning concepts described in a concept language to program fragments. Concept assignment can also be seen as a matching problem. In our approach, concepts are represented as abstract-descriptions using a concept language called ACL. The intuitive idea is that a concept description may match with a number of different implementations. The probability that such a description matches with a code fragment is used to calculate a similarity measure between the description and the implementation. An abstract-description is parsed and a corresponding AST Ta is created. Similarly, source code is represented as an annotated AST Tc. Both Ta and Tc are transformed into a sequence of abstract and source code statements respectively using transformation rules. We use REFINE to build and transform both ASTs. The reason for this transformation is to reduce the complexity of the matching algorithm as Ta and Tc may have a very complex and different to each other structure. In this approach feature vectors of statements are matched instead of Abstract Syntax Trees. Moreover, the implementation of the Dynamic Programming algorithm is cleaner and faster once structural details of the ASTs have been abstracted and represented as sequences of entities. The associated problems with matching concepts to code include : •
The choice of the conceptual language,
•
The measure of similarity,
•
The selection of a fragment in the code to be compared with the conceptual representation. These problems are addressed in the following sections.
3,1,
Language for Abstract Representation
A number of research teams have investigated and addressed the problem of code and plan localization. Current successful approaches include the use of graph granmiars (Wills, 1992), (Rich, 1990), query pattern languages (Paul, 1994), (Muller, 1992), (Church, 1993), (Biggerstaff, 1994), sets of constraints between components to be retrieved (Ning, 1994), and summary relations between modules and data (Canfora, 1992). In our approach a stochastic pattern matcher that allows for partial and approximate matching is used. A concept language specifies in an abstract way sequences of design concepts. The concept language contains:
94
KONTOGIANNIS ET AL.
•
Abstract expressions £ that correspond to source code expression. The correspondence between an abstract expression and the source code expression that it may generate is given at Table 3
• Abstract expressions E that correspond to source code expressions. The correspondence between an abstract expression and the source code expressions that it may generate is given in Table 3.

• Abstract feature descriptions F that contain the feature vector data used for matching purposes (a small data-structure sketch is given after this list). Currently the features that characterize an abstract statement and an abstract expression are:

  1. Uses of variables: variables that are used in a statement or expression
  2. Definitions of variables: variables that are defined in a statement or expression
  3. Keywords: strings, numbers, characters that may be used in the text of a code statement
  4. Metrics: a vector of five different complexity, data and control flow metrics.

• Typed variables X. Typed variables are used as placeholders for feature vector values when no actual values for the feature vector can be provided. An example is when we are looking for a Traversal of a list plan but we do not know the name of the pointer variable that exists in the code. A typed variable can generate (match) any actual variable in the source code provided that they belong to the same data type category. For example, a List type abstract variable can be matched with an Array or a Linked List node source code pointer variable. Currently the following abstract types are used:

  1. Numeral: representing int and float types
  2. Character: representing char types
  3. List: representing array types
  4. Structure: representing struct types
  5. Named: matching the actual data type name in the source code

• Operators O. Operators are used to compose abstract statements in sequences. Currently the following operators have been defined in the language, but only sequencing is implemented for the matching process:

  1. Sequencing (;): to indicate that one statement follows another
  2. Choice (⊕): to indicate choice (one or the other abstract statement will be used in the matching process)
  3. Interleaving (||): to indicate that two statements can be interleaved during the matching process
Table 2. Generation (Allowable Matching) of source code statements from ACL statements
ACL Statement
Generated Code Statement
Abstract Iterative Statement
While Statement For Statement Do Statement
Abstract While Statement
While Statement
Abstract For Statement
For Statement
Abstract Do Statement
Do Statement
Abstract Conditional Statement
If Statement Switch Statement
Abstract If Statement
If Statement
Abstract Switch Statement
Switch Statement
Abstract Return Statement
Return Statement
Abstract GoTo Statement
GoTo Statement
Abstract Continue Statement
Continue Statement
Abstract Break Statement
Break Statement
Abstract Labeled Statement
Labeled Statement
Abstract
Zero or more sequential source code statements
Statement*
AhstractStatement^
One or more sequential source code statements
96
KONTOGIANNIS ET AL.
Table 3. Generation (Allowable Matching) of source code expressions from ACL expressions
ACL Expression
Generated Code Expression
Abstract Function Call
Function Call
Abstract Equality
Equality (==)
Abstract Inequality
Inequality (\ =)
Abstract Logical And
Logical And (Sz&z)
Abstract Logical Or
Logical Or (\\)
Abstract Logical Not
Logical Not (!)
•
Macros M Macros are used to facilitate hierarchical plan recognition (Hartman, 1992), (Chikofsky, 19890). Macros are entities that refer to plans that are included at parse time. For example if a plan has been identified and is stored in the plan base, then special preprocessor statements can be used to include this plan to compose more complex patterns. Included plans are incorporated in the current pattern's AST at parse time. In this way they are similar to inline functions in C++. Special macro definition statements in the Abstract Language are used to include the necessary macros. Currently there are two types of macro related statements 1.
include definitions: These are special statements in ACL that specify the name of the plan to be included and the file it is defined. As an example consider the statement i n c l u d e planl.acl traversal-linked-list that imports the plan traversal-linked-list defined in file planl.acl.
2.
inline uses : These are statements that direct the parser to inline the particular plan and include its AST in the original pattern's AST. As an example consider the inlining p l a n : traversal-linked-list that is used to include an instance of the traversal-linked-list plan at a particular point of the pattern. In a pattern more than one occurrence of an included plan may appear.
A typical example of a design concept in our concept language is given below. This pattern expresses an iterative statement (e.g. while ,for, do loop that has in its condition an inequality expression that uses variable ?x that is a pointer to the abstract type l i s t (e.g. array, linked list) and the conditional expression contains the keyword "NULL". The body of I t e r a t i v e - s t a t e m e n t contains a sequence of one or more stateme nts (+-statement)
PATTERN MATCHING FOR CLONE AND CONCEPT DETECTION
97
that uses at least variable ?y (which matches to the variable obj) in the code below and contains the keyword meniber, and an Assignment-Statement that uses at least variable ?x, defines variable ?x which in this example matches to variable f i e l d , and contains the keyword next. {
Iterative-statement(Inequality-Expression abstract-description uses : [ ?x : *list], keywords : [ "NULL" ]) { -(--Statement abstract-description uses : [?y : string, ..] keywords : [ "member" ]; Assignment-Statement abstract-description uses : [?x, . . ] , defines : [?x], keywords : [ "next" ]
A code fragment that matches the pattern is: {
while (field != NULL) { if (!strcmp(obj,origObj) || (!strcmp(field->AvalueType,"member") && notlnOrig ) ) if (strcmp(field->Avalue,"method") != 0) INSERT_THE_FACT(o->ATTLIST[num].Aname,origObj, field->Avalue); field = field->nextValue; } }
3.2,
Concept-tO'Code Distance Calculation
In this section we discuss the mechanism that is used to match an abstract pattern given in ACL with source code.
98
KONTOGIANNIS ET AL.
In general the matching process contains the following steps : 1. Source code (^i; ...5^) is parsed and an AST Tc is created. 2. The ACL pattern {Ai; ...A^) is parsed and an AST Ta is created. 3. A transformation program generates from Ta a Markov Model called Abstract Pattern Model (APM). 4. A Static Model called SCM provides the legal entities of the source language. The underlying finite-state automaton for the mapping between a APM state and an SCM state basically implements the Tables 2, 3. 5. Candidate source code sequences are selected. 6. A Viterbi (Viterbi, 1967) algorithm is used to find the best fit between the Dynamic Model and a code sequence selected from the candidate list. A Markov model is a source of symbols characterized by states and transitions, A model can be in a state with certain probability. From a state, a transition to another state can be taken with a given probability. A transition is associated with the generation (recognition) of a symbol with a specific probability. The intuitive idea of using Markov models to drive the matching process is that an abstract pattern given in ACL may have many possible alternative ways to generate (match) a code fragment. A Markov model provides an appropriate mechanism to represent these alternative options and label the transitions with corresponding generation probabilities. Moreover, the Vitrebi algorithm provides an efficient way to find the path that maximizes the overall generation (matching) probability among all the possible alternatives. The selection of a code fragment to be matched with an abstract description is based on the following criteria : a) the first source code statement Si matches with the first pattern statement Ai and, b) S2]S^]..Sk belong to the innermost block containing Si The process starts by selecting all program blocks that match the criteria above. Once a candidate list of code fragments has been chosen the actual pattern matching takes place between the chosen statement and the outgoing transitions from the current active APM's state. If the type of the abstract statement the transition points to and the source code statement are compatible (compatibility is computed by examining the Static Model) then feature comparison takes place. This feature comparison is based on Dynamic Programming as described in section 2.3. A similarity measure is established by this comparison between the features of the abstract statement and the features of the source code statement. If composite statements are to be compared, an expansion function "flattens" the structure by decomposing the statement into a sequence of its components. For example an i f statement will be decomposed as a sequence of an e x p r e s s i o n (for its condition), its then part and its e l s e part. Composite statements generate nested matching sessions as in the DP-based code-to-code matching.
PATTERN MATCHING FOR CLONE AND CONCEPT DETECTION
99
3,3, ACL Markov Model Generation Let Tc be the AST of the code fragment and Ta be the AST of the abstract representation. A measure of similarity between Tc and Ta is the following probability
where, (rci,...rc,,...rcj
(5)
is the sequence of the grammar rules used for generating Tc and {ra^,...ran"'raL) (6) is the sequence of rules used for generating Ta. The probability in (1) cannot be computed in practice, because of complexity issues related to possible variations in Ta generating Tc. An approximation of (4) is thus introduced. Let iSi, ..5fc be a sequence of program statements During the parsing that generates Ta, a sequence of abstract descriptions is produced. Each of these descriptions is considered as a Markov source whose transitions are labeled by symbols Aj which in turn generate (match) source code. The sequence of abstract descriptions Aj forms a pattern A in Abstract Code Language (ACL) and is used to build dynamically a Markov model called Abstract Pattern Model (APM). An example of which is given in Fig.4. The Abstract Pattern Model is generated an ACL pattern is parsed. Nodes in the APM represent Abstract ACL Statements and arcs represent transitions that determine what is expected to be matched from the source code via a link to a static, permanently available Markov model called a Source Code Model (SCM). The Source Code Model is an alternative way to represent the syntax of a language entity and the correspondence of Abstract Statements in ACL with source code statements. For example a transition in APM labeled as (pointing to) an A b s t r a c t while S t a t e ment is linked with the while node of the static model. In its turn a while node in the SCM describes in terms of states and transitions the syntax of a legal while statement in c. The best alignment between a sequence of statements S = 5i; ^2; 5fc and a pattern A = Ai;A2]....Aj is computed by the Viterbi (Viterbi, 1967) dynamic programming algorithm using the SCM and a feature vector comparison function for evaluating the following type of probabilities: P,(5i,52,...5,|%,))
(7)
where/(/) indicates which abstract description is allowed to be considered at step /. This is determined by examining the reachable APM transitions at the ith step. For the matching to succeed the constraint P^(*S'i|Ai) = 1.0 must be satisfied and ^/(fc) corresponds to a final APM state. This corresponds to approximating (4) as follows (Brown, 1992): Pr{Tc\Ta) c^ P.(5i; ..Sk\Ai- ..An) =
100
KONTOGIANNIS ET AL.
^maa:(P^(5l;52..5,_l|Al;^2;..%^-l))•Pr(5^|%i)))
(8)
i=l
This is similar to the code-to-code matching. The difference is that instead of matching source code features, we allow matching abstract description features with source code features. The dynamic model (APM) guarantees that only the allowable sequences of comparisons are considered at every step. The way to calculate similarities between individual abstract statements and code fragments is given in terms of probabilities of the form Pr{Si\Aj) as the probability of abstract statement Aj generating statement Si. The probability p = Pr{Si\Aj) = Pscm{Si\Aj) * Pcomp{Si\Aj) is interpreted as "The probability that code statement Si can be generated by abstract statement Aj". The magnitude of the logarithm of the probability p is then taken to be the distance between Si and Aj.
The value ofp is computed by multiplying the probability associated with the corresponding state for Aj in SCM with the result of comparing the feature vectors of Si and Aj. The feature vector comparison function is discussed in the following subsection. As an example consider the APM of Fig. 4 generated by the pattern ^ i ; ^2 5 ^3» where Aj is one of the legal statements in ACL. Then the following probabilities are computed for a selected candidate code fragment 5i, 52, S'a:
Figure 4. A dynamic model for the pattern Al\ A2*; A3*
Pr{Si\Ai)
= 1.0
[delineation
criterion)
(9)
Pr{Su S2\A2) = PriSllAi)
• Pr{S2\A2)
(10)
PriSuS2\As)
' Pr{S2\As)
(11)
= PriSMl)
Pr{SuS2\A2)'Pr{Ss\A3)
Pr{SuS2,S3\As)=Max
(12) PriSi,S2\As)'Pr{Ss\As)
PATTERN MATCHING FOR CLONE AND CONCEPT DETECTION
Pr{SuS2,Ss\A2)
= Pr{Si,S2\A2)
' Pr{Ss\A2)
101
(13)
Note that when the first two program statements 81^82 have already been matched, (equations 12 and 13) two transitions have been consumed and the reachable active states currently are A2 or A3. Moreover at every step the probabilities of the previous steps are stored and there is no need to be reevaluated. For example Pr{Si^S2\A2) is computed in terms of Pr{Si\Ai) which is available from the previous step. With each transition we can associate a list of probabilities based on the type of expression likely to be found in the code for the plan that we consider. For example, in the T r a v e r s a l of a l i n k e d l i s t plan the while loop condition, which is an expression, most probably generates an i n e q u a l i t y of the form (list-node-ptr 1= NULL) which contains an identifier reference and the keyword NULL. An example of a static model for the p a t t e r n - e x p r e s s i o n is given in Fig. 5. Here we assume for simplicity that only four C expressions can be generated by a P a t t e r n Expression. The initial probabilities in the static model are provided by the user who either may give a uniform distribution in all outgoing transitions from a given state or provide some subjectively estimated values. These values may come from the knowledge that a given plan is implemented in a specific way. In the above mentioned example of the T r a v e r s a l of a l i n k e d l i s t plan the I t e r a t i v e - S t a t e m e n t pattern usually is implemented with a while loop. In such a scenario the i t e r a t i v e abstract statement can be considered to generate a while statement with higher probability than a for statement. Similarly, the expression in the while loop is more likely to be an inequality (Fig. 5). The preferred probabilities can be specified by the user while he or she is formulating the query using the ACL primitives. Once the system is used and results are evaluated these probabilities can be adjusted to improve the performance. Probabilities can be dynamically adapted to a specific software system using a cache memory method originally proposed (for a different application) in (Kuhn, 1990). A cache is used to maintain the counts for most frequently recurring statement patterns in the code being examined. Static probabilities can be weighted with dynamically estimated ones as follows : Pscm{Si\Aj)
= X . Pcache{Si\Aj)
+ ( 1 - A) • Pstatic{Si\Aj)
(14)
In this formula Pcache{Si\Aj) represents the frequency that Aj generates Si in the code examined at run time while PstaticiSi\Aj) represents the a-priori probability of Aj generating Si given in the static model. A is a weighting factor. The choice of the weighting factor A indicates user's preference on what weight he or she wants to give to the feature vector comparison. Higher A values indicate a stronger preference to depend on feature vector comparison. Lower A values indicate preference to match on the type of statement and not on the feature vector. The value of A can be computed by deleted-interpolation as suggested in (Kuhn, 1990). It can also be empirically set to be proportional to the amount of data stored in the cache.
102
KONTOGIANNIS ET AL.
1.0
/ Pattern \ ^,-^-'*''^^ y Inequality J
0.25
/
1.0
Arg2
Argl
expression
expression
^^-.^^^
1 is-an-inequality / /
/ Pattern 0.25^^^.^*\ Equality
\ /
1.0
1.0
Arg2
Argl expression
expression
r Pattern \ is~an-equality lExpression 7 1.0 I is-a-id-ref
7 Pattern
\
\
V Id-Ref
/
id-ref
0.25 \ \ is-a-function-call 1.0 ^'''^>..^/ Pattern \ V Fcn~Call /
id-ref
-Args
Fen-Name 0.5
expression
expression Figure 5. The static model for the expression-pattern. Different transition probability values may be set by the user for different plans. For example the traversal of linked-list plan may have higher probability attached to the is-an-inequality transition as the programmer expects a pattern of the form (field f= NULL)
As proposed in (Kuhn, 1990), different cache memories can be introduced, one for each Aj. Specific values of A can also be used for each cache. 3,4,
Feature Vector Comparison
In this section we discuss the mechanism used for calculating the similarity between two feature vectors. Note that Si's and ^^'s feature vectors are represented as annotations in the corresponding ASTs. The feature vector comparison of Si, Aj returns a value p = Pr{Si\Aj). The features used for comparing two entities (source and abstract) are: 1. Variables defined V : Source-Entity —> {String} 2. Variables usedU : Source-Entity —> {String}
PATTERN MATCHING FOR CLONE AND CONCEPT DETECTION
103
3. Keywords /C : Source-Entity —> {String} 4. Metrics •
Fan out All : Source-Entity —> Number
•
D-Complexity M2 - Source-Entity -^ Number
•
McCabe Ms : Source-Entity -^ Number
•
Albrecht M4 : Source-Entity —^ Number
•
Kafura M5 : Source-Entity —> Number
These features are AST annotations and are implemented as mappings from an AST node to a set of AST nodes, set of Strings or set of Numbers. Let Si be a source code statement or expression in program C and Aj an abstract statement or expression in pattern A. Let the feature vector associated with Si be Vi and the feature vector associated with Aj be Vj. Within this framework we experimented with the following similarity considered in the computation as a probability: /CM \ comp % 3
-'• ^r^ car d{ Abstract Feature j^n^CodeF eaturci^n) ^ £^ card{AbstractFeaturej^n ^ CodeFeaturCi^n)
where v is the size of the feature vector, or in other words how many features are used, CodeFeaturei^n is the nth feature of source statement Si and, AbstractFeaturCj^n is the nth feature of the ACL statement Aj. As in the code to code dynamic programming matching, lexicographical distances between variable names (i.e. next, next value) and numerical distances between metrics are used when no exact matching is the objective. Within this context two strings are considered similar if their lexicographical distance is less than a selected threshold, and the comparison of an abstract entity with a code entity is valid if their corresponding metric values are less than a given threshold. These themes show that ACL is viewed more as a vehicle where new features and new requirements can be added and be considered for the matching process. For example a new feature may be a link or invocation to another pattern matcher (i.e. SCRUPLE) so that the abstract pattern in ACL succeeds to match a source code entity if the additional pattern matcher succeeds and the rest of the feature vectors match. 4.
System Architecture
The concept-to-code pattern matcher of the Ariadne system is composed of four modules. Thefirstmodule consists of an abstract code language (ACL) and its corresponding parser. Such a parser builds at run time, an AST for the ACL pattern provided by the user. The ACL AST is built using Refine and its corresponding domain model maps to entities of the C language domain model. For example, an Abstract-Iterative-Statement corresponds to an Iterative-Statement in the C domain model.
104
KONTOGIANNIS ET AL.
A Static explicit mapping between the ACL's domain model and C's domain model is given by the SCM (Source Code Model), Ariadne's second module. SCM consists of states and transitions. States represent Abstract Statements and are nodes of the ACL's AST. Incoming transitions represent the nodes of the C language AST that can be matched by this Abstract Statement. Transitions have initially attached probability values which follow a uniform distribution. A subpart of the SCM is illustrated in Fig. 5 where it is assumed for simplicity that an Abstract Pattern Expression can be matched by a C i n e q u a l i t y , e q u a l i t y , i d e n t i f i e r r e f e r e n c e , and a function c a l l . The third module builds the Abstract Pattern Model at run time for every pattern provided by the user. APM consists of states and transitions. States represent nodes of the ACL's AST. Transitions model the structure of the pattern given, and provide the pattern statements to be considered for the next matching step. This model directly reflects the structure of the pattern provided by the user. Formally APM is an automaton where •
Q, is the set of states, taken from the domain of ACL's AST nodes
•
S, is the input alphabet which consists of nodes of the C language AST
• Q is the set of states, taken from the domain of ACL's AST nodes
• Σ is the input alphabet, which consists of nodes of the C language AST
Such a pattern matcher can be used (a) for retrieving plans and other algorithmic structures from a variety of large software systems (aiding software maintenance and program understanding), (b) for querying digital databases that may contain partial descriptions of data, and (c) for recognizing concepts and other formalisms in plain or structured text (e.g., HTML).
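To make the two-stage clone search described earlier in this section concrete, the sketch below is illustrative only — it is not the Ariadne/REFINE implementation, and the metric stand-ins, thresholds, and function names are assumptions. It prunes the quadratic space of function pairs with a cheap metric-vector comparison and then applies a dynamic-programming (edit-distance) comparison to the survivors.

    # Illustrative sketch only -- not the Ariadne/REFINE clone detector described
    # in this paper.  Stage 1: prune function pairs by a cheap metric-vector
    # distance.  Stage 2: confirm survivors with a dynamic-programming
    # (edit-distance) comparison over token sequences.
    from itertools import combinations

    def metric_vector(tokens):
        # stand-ins for per-function metrics (size, vocabulary, branching)
        return (len(tokens), len(set(tokens)), sum(t == "if" for t in tokens))

    def metric_distance(m1, m2):
        return sum(abs(a - b) for a, b in zip(m1, m2))

    def dp_distance(a, b):
        # classic edit distance over token sequences
        prev = list(range(len(b) + 1))
        for i, ta in enumerate(a, 1):
            cur = [i]
            for j, tb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                               prev[j - 1] + (ta != tb)))
            prev = cur
        return prev[-1]

    def clone_candidates(functions, metric_cut=2, dp_cut=3):
        # functions: dict name -> token list.  Returns likely clone pairs.
        pairs = []
        for (n1, t1), (n2, t2) in combinations(functions.items(), 2):
            if metric_distance(metric_vector(t1), metric_vector(t2)) <= metric_cut:
                if dp_distance(t1, t2) <= dp_cut:
                    pairs.append((n1, n2))
        return pairs

Pairs that survive both cuts are only candidates; as reported above, the remaining false alarms still have to be filtered with additional metrics or inspected.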
Another area of research is the use of metrics for measuring the changes introduced from one version to another in an evolving software system. Moreover, we are investigating the use of the clone detection technique to identify similar operations on specific data types so that generic classes and corresponding member functions can be created when migrating a procedural system to an object-oriented system.
Notes
1. In this paper, "reverse engineering" and related terms refer to legitimate maintenance activities based on source-language programs. The terms do not refer to illegal or unethical activities such as the reverse compilation of object code to produce a competing product.
2. "The Software Refinery" and REFINE are trademarks of Reasoning Systems, Inc.
3. We are using a commercial tool called REFINE (a trademark of Reasoning Systems Corp.).
4. The Spearman-Pearson rank correlation test was used.
References
Adamov, R., "Literature review on software metrics", Zurich: Institut für Informatik der Universität Zürich, 1987.
Baker, B.S., "On Finding Duplication and Near-Duplication in Large Software Systems", In Proceedings of the Working Conference on Reverse Engineering 1995, Toronto, ON, July 1995.
Biggerstaff, T., Mitbander, B., Webster, D., "Program Understanding and the Concept Assignment Problem", Communications of the ACM, May 1994, Vol. 37, No. 5, pp. 73-83.
Brown, P., et al., "Class-Based n-gram Models of Natural Language", Journal of Computational Linguistics, Vol. 18, No. 4, December 1992, pp. 467-479.
Buss, E., et al., "Investigating Reverse Engineering Technologies for the CAS Program Understanding Project", IBM Systems Journal, Vol. 33, No. 3, 1994, pp. 477-500.
Canfora, G., Cimitile, A., Carlini, U., "A Logic-Based Approach to Reverse Engineering Tools Production", Transactions of Software Engineering, Vol. 18, No. 12, December 1992, pp. 1053-1063.
Chikofsky, E.J. and Cross, J.H. II, "Reverse Engineering and Design Recovery: A Taxonomy", IEEE Software, Jan. 1990, pp. 13-17.
Church, K., Helfman, J., "Dotplot: a program for exploring self-similarity in millions of lines of text and code", J. Computational and Graphical Statistics, 2, 2, June 1993, pp. 153-174.
C-Language Integrated Production System User's Manual, NASA Software Technology Division, Johnson Space Center, Houston, TX.
Fenton, N., "Software Metrics: A Rigorous Approach", Chapman and Hall, 1991.
Halstead, M.H., "Elements of Software Science", New York: Elsevier North-Holland, 1977.
Hartman, J., "Technical Introduction to the First Workshop on Artificial Intelligence and Automated Program Understanding", First Workshop on AI and Automated Program Understanding, AAAI-92, San Jose, CA.
Horwitz, S., "Identifying the semantic and textual differences between two versions of a program", In Proc. ACM SIGPLAN Conference on Programming Language Design and Implementation, June 1990, pp. 234-245.
Jankowitz, H.T., "Detecting plagiarism in student PASCAL programs", Computer Journal, 31.1, 1988, pp. 1-8.
Johnson, H., "Identifying Redundancy in Source Code Using Fingerprints", In Proceedings of CASCON '93, IBM Centre for Advanced Studies, October 24-28, Toronto, Vol. 1, pp. 171-183.
Kuhn, R., DeMori, R., "A Cache-Based Natural Language Model for Speech Recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 12, No. 6, June 1990, pp. 570-583.
Kontogiannis, K., DeMori, R., Bernstein, M., Merlo, E., "Localization of Design Concepts in Legacy Systems", In Proceedings of the International Conference on Software Maintenance 1994, September 1994, Victoria, BC, Canada, pp. 414-423.
Kontogiannis, K., DeMori, R., Bernstein, M., Galler, M., Merlo, E., "Pattern Matching for Design Concept Localization", In Proceedings of the Second Working Conference on Reverse Engineering, July 1995, Toronto, ON, Canada, pp. 96-103.
McCabe, T.J., "Reverse engineering, reusability, redundancy: the connection", American Programmer, 3, 10, October 1990, pp. 8-13.
Möller, K., "Software Metrics: A Practitioner's Guide to Improved Product Development".
Müller, H., Corrie, B., Tilley, S., "Spatial and Visual Representations of Software Structures", Tech. Rep. TR-74.086, IBM Canada Ltd., April 1992.
Mylopoulos, J., "Telos: A Language for Representing Knowledge About Information Systems", University of Toronto, Dept. of Computer Science Technical Report KRR-TR-89-1, August 1990, Toronto.
Ning, J., Engberts, A., Kozaczynski, W., "Automated Support for Legacy Code Understanding", Communications of the ACM, May 1994, Vol. 37, No. 5, pp. 50-57.
Paul, S., Prakash, A., "A Framework for Source Code Search Using Program Patterns", IEEE Transactions on Software Engineering, June 1994, Vol. 20, No. 6, pp. 463-475.
Rich, C. and Wills, L.M., "Recognizing a Program's Design: A Graph-Parsing Approach", IEEE Software, Jan. 1990, pp. 82-89.
Tilley, S., Müller, H., Whitney, M., Wong, K., "Domain-Retargetable Reverse Engineering II: Personalized User Interfaces", In CSM '94: Proceedings of the 1994 Conference on Software Maintenance, September 1994, pp. 336-342.
Viterbi, A.J., "Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm", IEEE Trans. Information Theory, 13(2), 1967.
Wills, L.M., "Automated Program Recognition by Graph Parsing", MIT Technical Report, AI Lab No. 1358, 1992.
Automated Software Engineering, 3, 109-138 (1996) © 1996 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands.
Extracting Architectural Features from Source Code* DAVID R. HARRIS, ALEXANDER S. YEH The MITRE Corporation, 202 Burlington Road, Bedford, MA 01730, USA HOWARD B. REUBENSTEIN * Mitretek Systems, 25 Burlington Mall Road, Burlington, MA 01803, USA
[email protected] [email protected]
Abstract. Recovery of higher level design information and the ability to create dynamic software documentation is crucial to supporting a number of program understanding activities. Software maintainers look for standard software architectural structures (e.g., interfaces, interprocess communication, layers, objects) that the code developers had employed. Our goals center on supporting software maintenance/evolution activities through architectural recovery tools that are based on reverse engineering technology. Our tools start with existing source code and extract architecture-level descriptions linked to the source code fragments that implement architectural features. Recognizers (individual source code query modules used to analyze the target program) are used to locate architectural features in the source code. We also report on representation and organization issues for the set of recognizers that are central to our approach.
Keywords: Reverse engineering, software architecture, software documentation
1. Introduction
We have implemented an architecture recovery framework on top of a source code examination mechanism. The framework provides for the recognition of architectural features in program source code by use of a library of recognizers. Architectural features are the constituent parts of architectural styles (Perry and Wolf, 1992), (Shaw, 1991), which in turn define organizational principles that guide a programmer in developing source code. Examples of architectural styles include pipe and filter data processing, layering, abstract data type, and blackboard control processing. Recognizers are queries that analysts or applications can run against source code to identify portions of the code with certain static properties. Moreover, recognizer authors and software analysts can associate recognition results with architectural features so that the code identified by a recognizer corresponds to an instance of the associated architectural feature.
* This is a revised and extended version based on two previous papers: 1. "Reverse Engineering to the Architectural Level" by Harris, Reubenstein and Yeh, which appeared in the Proceedings of the 17th International Conference on Software Engineering, April 1995, © 1995 ACM. 2. "Recognizers for Extracting Architectural Features from Source Code" by Harris, Reubenstein and Yeh, which appeared in the Proceedings of the 2nd Working Conference on Reverse Engineering, July 1995, © 1995 IEEE. The work reported in this paper was sponsored by the MITRE Corporation's internal research program and was performed while all the authors were at the MITRE Corp. This paper was written while H. Reubenstein was at GTE Laboratories. H. Reubenstein's current address is listed above.
Within our implementation, we have developed an extensive set of recognizers targeted for architecture recovery applications. The implementation provides for analyst control over parameterization and retrieval of recognizers from a library. Using these recognizers, we have recovered constituent features of architectural styles in our laboratory experiments (Harris, Reubenstein, Yeh: ICSE, 1995). In addition, we have used the recognizers in a stand-alone mode as part of a number of source code quality assessment exercises. These technology transfer exercises have been extremely useful for identifying meaningful architectural features. Our motivation for building our recovery framework stems from our efforts to understand legacy software systems. While it is clear that every piece of software conforms to some design, it is often the case that existing documentation provides little clue to that design. Recovery of higher level design information and the ability to create as-built software documentation is crucial to supporting a number of program understanding activities. By stressing as-built, we emphasize how a program is actually structured versus the structure that designers sketch out in idealized documentation. The problem with conventional paper documentation is that it quickly becomes out of date and it often is not adequate for supporting the wide range of tasks that a software maintainer or developer might wish to perform, e.g., general maintenance, operating system port, language port, feature addition, program upgrade, or program consolidation. For example, while a system block diagram portrays an idealized software architecture description, it typically does not even hint at the source level building blocks required to construct the system. As a starting point, commercially available reverse engineering tools (Olsem and Sittenauer, 1993) provide a set of limited views of the source under analysis. While these views are an improvement over detailed paper designs in that they provide accurate information derived directly from the source code, they still only present static abstractions that focus on code level constructs rather than architectural features. We argue that it is practical and effective to automatically (sometimes semi-automatically) recognize architectural features embedded in legacy systems. Our framework goes beyond basic tools by integrating reverse engineering technology and architectural style representations. Using the framework, analysts can recover multiple as-built views - descriptions of the architectural structures that actually exist in the code. Concretely, the representation of architectural styles provides knowledge of software design beyond that defined by the syntax of a particular language and enables us to respond to questions such as the following:
• When are specific architectural features actually present?
• What percent of the code is used to achieve an architectural feature?
• Where does any particular code fragment fall in an overall architecture?
The paper describes our overall architecture recovery framework including a description of our recognition library. We begin in Section 2 by describing the overall framework. Next, in Section 3, we address the gap between idealized architectural descriptions and source code and how we bridge this gap with architectural feature recognizers. In Section 4, we describe the underlying analysis tools of the framework. In Section 5, we describe
the aspects of the recognition library that support analyst access and recognizer authoring. In Section 6, we describe our experience in using our recovery techniques on a moderately sized (30,000 lines of code) system. In addition, we provide a very preliminary notion of code coverage metrics that researchers can use for quantifying recovery results. Related work and conclusions appear in Sections 7 and 8, respectively.
2. Architecture Recovery - Framework and Process
Our recovery framework (see Figure 1) spans three levels of software representation:
• a program parsing capability (implemented using Software Refinery (Reasoning Systems, 1990)) with accompanying code level organization views, i.e., abstract syntax trees and a "bird's eye" file overview
• an architectural representation that supports both idealized and as-built architectural representations with a supporting library of architectural styles and constituent architectural features
• a source code recognition engine and a supporting library of recognizers
Figure 1 shows how these three levels interact. The idealized architecture contains the initial intentions of the system designers. Developers encode these intentions in the source code. Within our framework, the legacy source code is parsed into an internal abstract syntax tree representation. We run recognizers over this representation to discover architectural features - the components/connectors associated with architectural styles (selecting a particular style selects a set of constituent features to search for). The set of architectural features discovered in a program form its as-built architecture containing views with respect to many architectural styles. Finally, note that the as-built architecture we have recovered is both less than and more than the original idealized architecture. The as-built is less than the idealized because it may miss some of the designer's original intentions and because it may not be complete. The as-built is also more than the idealized because it is up-to-date and because we now have on-line linkage between architecture features and their implementation in the code. We do not have a definition of a complete architecture for a system. The notions of code coverage described later in the paper provides a simple metric to use in determining when a full understanding of the system has been obtained. The framework supports architectural recovery in both a bottom-up and top-down fashion. In bottom-up recovery, analysts use the bird's eye view to display the overall file structure and file components of the system. The features we display (see Figure 2) include file type (diamond shapes for source files with entry point functions; rectangles for other source files), name, pathname of directory, number of top level forms, and file size (indicated by the size of the diamond or rectangle). Since file structure is a very weak form of architectural organization, only shallow analysis is possible; however, the bird's eye view is a place where our implementation can register results of progress toward recognition of various styles. In top-down recovery, analysts use architectural styles to guide a mixed-initiative recovery process. From our point of view, an architectural style places an expectation on what
Figure 1. Architectural recovery framework. (The diagram relates the idealized architecture, views of the as-built architecture, architectural features, the program, and its abstract syntax tree: the program parses into an abstract syntax tree, which provides clues for recognizing architectural features; architectural features are implemented by the program and combine, using architectural styles, to form views of the as-built architecture.)
recovery tools will find in the software system. That is, the style establishes a set of architectural feature types which define component/connector types to be found in the software. Recognizers are used to find the component/connector features. Once the features are discovered, the set of mappings from feature types to their realization in the source code forms the as-built architecture of the system.
2.1. Architectural Styles
The research community has provided detailed examples (Garlan and Shaw, 1993, Shaw, 1989, Shaw, 1991, Perry and Wolf, 1992, Hofmeister, Nord, Soni, 1995) of architectural styles, and we have codified many of these in an architecture modeling language. Our architecture modeling language uses entity/relation taxonomies to capture the component/connector style aspects that are prevalent in the literature (Abowd, Allen, Garlan, 1993, Perry and Wolf, 1992, Tracz, 1994). Entities include clusters, layers, processing elements, repositories, objects, and tasks. Some recognizers discover source code instances of entities where developers have implemented major components - "large" segments of source code (e.g., a layer may be implemented as a set of procedures). Relations such as contains, initiates, spawns, and is-connected-to each describe how entities are linked. Component participation in a relation follows from the existence of a connector - a specific code fragment (e.g., special operating system invocation) or the infrastructure that processes these fragments. This infrastructure may or may not be part of the body of software under analysis. For example, it may be found in a shared library or it may be part of the implementation language itself. As an illustration. Figure 3 details the task entity and the spawns relation associated with a task spawning style. In a task spawning architectural style, tasks (i.e., executable processing
Figure 2. Bird's Eye Overview
elements) are linked when one task initiates a second task. Task spawning is a style that is recognized by the presence of its connectors (i.e., the task invocations). Its components are tasks, repositories, and task-functions. Its connectors are spawns (invocations from tasks to tasks), spawned-by (the inverse of spawns), uses (relating tasks to any tasks with direct interprocess communications and to any repositories used for interprocess communications), and conducts (relating tasks to functional descriptions of the work performed). Tasks are a kind of processing element that programmers might implement by files (more generally, by call trees). A default recognizer named executables will extract a collection of tasks. Spawns relates tasks to tasks (i.e., parent and child tasks respectively). Spawns might be implemented by objects of type system-call (e.g., in Unix/C, programmers can use a system, execl, execv, execlp, or execvp call to start a new process via a shell command). Analysts can use the default recognizer, find-executable-links, to retrieve instances of task spawning.

defentity TASK
    :specialization-of processing-element
    :possible-implementation file
    :recognized-by executables

defrel SPAWNS
    :specialization-of initiates
    :possible-implementation system-call
    :recognized-by find-executable-links
    :domain task
    :range task
Figure 3. Elements in an architecture modeling language
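As a purely illustrative companion to these declarations — not the authors' RRL/REFINE recognizer, and with an assumed regular expression, directory layout, and function names — a minimal find-executable-links-style scan over C sources might look as follows:

    # Hypothetical sketch, not the RRL find-executable-links recognizer:
    # scan C sources for process-spawning calls and report connector triples
    # (call fragment, spawning file, spawned target) in the spirit of SPAWNS.
    import re
    from pathlib import Path

    SPAWN_CALLS = re.compile(r'\b(system|execl|execv|execlp|execvp)\s*\(\s*"([^"]*)"')

    def find_executable_links(source_dir):
        links = []
        for path in Path(source_dir).rglob("*.c"):
            text = path.read_text(errors="ignore")
            for match in SPAWN_CALLS.finditer(text):
                call, target = match.group(1), match.group(2)
                cmd = target.split()[0] if target.split() else "?"
                links.append((call + "(...)", path.name, cmd))
        return links

A real recognizer works over the abstract syntax tree rather than raw text, so it can resolve call targets that are not string literals and attribute each call to its enclosing task.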
Many of the styles we work with have been elaborated by others (e.g., pipe and filter, object-oriented, abstract data type, implicit invocation, layered, repository). In addition we have worked with a few styles that have special descriptive power for the type of programs we have studied. These include application programming interface (API) use, the task spawning associated with real time systems, and a service invocation style. Space limitations do not permit a full description of all styles here. However, we offer two more examples to help the reader understand the scope of our activities. Layered: In a layered architecture the components (layers) form a partitioning of a subset, possibly the entire system, of the program's procedures and data structures. As mentioned in (Garlan and Shaw, 1993), layering is a hierarchical style: the connectors are the specific references that occur in components in an upper layer and reference components that are defined in a lower layer. One way to think of a layering is that each layer provides a service to the layer(s) above it. A layering can either be opaque: components in one layer cannot reference components more than one layer away, or transparent: components in one layer can reference components more than one layer away.
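The opaque/transparent distinction can be phrased as a simple check over a reference graph. The sketch below is one possible interpretation rather than a tool from the paper; the component-to-layer assignment is assumed to be supplied by the analyst:

    # Minimal sketch: given (caller, callee) reference pairs and an
    # analyst-supplied layer number per component (higher = upper layer),
    # report references that violate an opaque or transparent layering.
    def layering_violations(references, layer_of, opaque=True):
        bad = []
        for caller, callee in references:
            delta = layer_of[caller] - layer_of[callee]
            if delta < 0:                  # upward reference breaks the layering
                bad.append((caller, callee))
            elif opaque and delta > 1:     # opaque layering forbids skipping a layer
                bad.append((caller, callee))
        return bad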
Data Abstractions and Objects: Two related ways to partially organize a system are to identify its abstract data types and its groups of interacting objects (Abelson and Sussman, 1984, Garlan and Shaw, 1993). A data abstraction is one or more related data representations whose internal structure is hidden to all but a small group of procedures, i.e., the procedures that implement that data abstraction. An object is an entity which has some persistent state (only directly accessible to that entity) and a behavior that is governed by that state and by the inputs the object receives. These two organization methods are often used together. Often, the instances of an abstract data type are objects, or conversely, objects are instances of classes that are described as types of abstract data.
3. Recognizers
Recognizers map parts of a program to features found in architectural styles. The recognizers traverse some or all of a parsed program representation (abstract syntax tree, or AST) to extract code fragments (pieces of concrete syntax) that implement some architectural feature. Examples of these code fragments include a string that names a data file or a call to a function with special effects. The fragments found by recognizers are components and connectors that implement architectural style features. A component recognizer returns a set of code fragments in which each code fragment is a component. A connector recognizer returns a set of ordered triples - code fragment, enclosing structure, and some meaningful influence such as a referenced file, executable object, or service. In each triple, the code fragment is a connector, and the other two elements are the two components being connected by that connector.
3.1. A Sample Recognizer
The appendix contains a partial listing of the recognizers we use. Here, we examine parts of one of these in detail. Table 1 shows the results computed by a task spawning recognizer (named Find-Executable-Links) applied to a network management program. For each task-to-task connector, the ordered triple contains the special function call that is the connector, the task which makes the spawn (one end of the connector), and the task that is spawned (invoked) by the call (the other end). This recognizer has a static view of a task: a task is the call tree subset of the source code that might be run when the program enters the task. The action part of a recognizer is written in our RRL (REFINE-based recognition language). The main difference between RRL and REFINE is the presence in RRL of iteration operators that make it easy for RRL authors to express iterations over pieces of a code fragment. The RRL code itself may call functions written either in RRL or REFINE. Figure 4 shows the action part of the previously mentioned task spawning recognizer. This recognizer examines an AST that analysts generate using a REFINE language workbench, such as REFINE/C (Reasoning Systems, 1992). The recognizer calls the function invocations-of-type, which finds and returns a set of all the calls in the program to functions that may spawn a task. For each such call, the recognizer calls process-invoked
Table 1. The results of task spawning recognition

Function Call    Spawning Task    Spawned Task
system(...       RUN_SNOOPY       SNOOPY
system(...       SNOOPY           EXNFS
system(...       SNOOPY           EX69
system(...       SNOOPY           EX25
system(...       SNOOPY           EX21
system(...       SNOOPY           SCANP
execlp(...       MAIN             RUN_SNOOPY
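The connector triples of Table 1 are directly machine-usable. Purely as an illustration (not part of the authors' toolset), they can be loaded into a spawn graph to answer questions such as which tasks are transitively started from MAIN:

    # Illustrative only: build a task-spawn graph from the Table 1 triples and
    # list every task reachable from MAIN.
    from collections import defaultdict

    TRIPLES = [  # (function call, spawning task, spawned task)
        ("system(...", "RUN_SNOOPY", "SNOOPY"),
        ("system(...", "SNOOPY", "EXNFS"),
        ("system(...", "SNOOPY", "EX69"),
        ("system(...", "SNOOPY", "EX25"),
        ("system(...", "SNOOPY", "EX21"),
        ("system(...", "SNOOPY", "SCANP"),
        ("execlp(...", "MAIN", "RUN_SNOOPY"),
    ]

    spawns = defaultdict(set)
    for _call, parent, child in TRIPLES:
        spawns[parent].add(child)

    def reachable(task, graph):
        seen, stack = set(), [task]
        while stack:
            for child in graph[stack.pop()]:
                if child not in seen:
                    seen.add(child)
                    stack.append(child)
        return seen

    print(sorted(reachable("MAIN", spawns)))
    # ['EX21', 'EX25', 'EX69', 'EXNFS', 'RUN_SNOOPY', 'SCANP', 'SNOOPY']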
let (results = {})
    (for-every call in invocations-of-type('system-calls) do
        let (target = process-invoked(call))
            if ~(target = undefined) then
                let (root = go-to-top-from-root(call))
                    results

... such that the loop is executed as long as any guard B_i is true. A simplified form of repetition is given by "do B → S od". In the context of iteration, a bound function determines the upper bound on the number of iterations still to be performed on the loop. An invariant is a predicate that is true before and after each iteration of a loop. The problem of constructing formal specifications of iteration statements is difficult because the bound functions and the invariants must be determined. However, for a partial correctness model of execution, concerns of boundedness and termination fall outside of the interpretation, and thus can be relaxed. Using the abbreviated form of repetition "do B → S od", the semantics for iteration in terms of the weakest liberal precondition predicate transformer wlp is given by the following (Dijkstra and Scholten, 1990):

wlp(DO, R) = (∀i : 0 ≤ i : wlp(IF^i, B ∨ R)),    (5)

where IF^i denotes i applications of the alternation statement IF, i ≥ 0. The strongest postcondition semantics for repetition has a similar but notably distinct formulation (Dijkstra and Scholten, 1990):

sp(DO, Q) = ¬B ∧ (∃i : 0 ≤ i : sp(IF^i, Q)).    (6)
Expression (6) states that the strongest condition that holds after executing an iterative statement, given that condition Q holds, is equivalent to the condition where the loop guard is false (¬B), conjoined with a disjunctive expression describing the effects of iterating the loop i times, where i ≥ 0. Although the semantics for repetition in terms of strongest postcondition and weakest liberal precondition are less complex than that of the weakest precondition (Dijkstra and
Scholten, 1990), the recurrent nature of the closed forms makes the application of such semantics difficult. For instance, consider the counter program "do i < n → i := i + 1 od". The application of the sp semantics for repetition leads to the following specification:

sp(do i < n → i := i + 1 od, Q) = (i ≥ n) ∧ (∃j : 0 ≤ j : sp(IF^j, Q)).
The closed form for iteration suggests that the loop be unrolled j times. If j is set to n − start, where start is the initial value of variable i, then the unrolled version of the loop would have the following form:

i := start;
if i < n → i := i + 1; fi
if i < n → i := i + 1; fi
    ...
if i < n → i := i + 1; fi
Application of the rule for alternation (Expression (2)) yields the sequence of annotated code shown in Figure 4, where the goal is to derive

sp(do i < n → i := i + 1 od, (start < n) ∧ (i = start)).
In the construction of specifications of iteration statements, knowledge must be introduced by a human specifier. For instance, in line 19 of Figure 4 the inductive assertion that "i = start + (n − start − 1)" is made. This assertion is based on a specifier providing the information that (n − start − 1) additions have been performed if the loop were unrolled at least (n − start − 1) times. As such, by using loop unrolling and induction, the derived specification for the code sequence is ((n − 1 < n) ∧ (i = n)). For this simple example, we find that the solution is non-trivial when applying the formal definition of sp(DO, Q). As such, the specification process must rely on a user-guided strategy for constructing a specification. A strategy for obtaining a specification of a repetition statement is given in Figure 5.
{ (i = I) ∧ (start < n) }
i := start;
{ (i = start) ∧ (start < n) }
if i < n → i := i + 1 fi
{ sp(i := i + 1, (i < n) ∧ (i = start) ∧ (start < n)) ∨ ((i ≥ n) ∧ (i = start) ∧ (start < n))
  = ((i = start + 1) ∧ (start < n)) }
if i < n → i := i + 1 fi
{ sp(i := i + 1, (i < n) ∧ (i = start + 1) ∧ (start < n)) ∨ ((i ≥ n) ∧ (i = start + 1) ∧ (start < n))
  = ((i = start + 2) ∧ (start + 1 < n)) ∨ ((i ≥ n) ∧ (i = start + 1) ∧ (start < n)) }
    ...
{ ((i = start + (n − start − 1)) ∧ (start + (n − start − 1) − 1 < n))
  ∨ ((i ≥ n) ∧ (i = start + (n − start − 2)) ∧ (start + (n − start − 2) − 1 < n))
  = ((i = n − 1) ∧ (n − 2 < n)) }
if i < n → i := i + 1 fi
{ sp(i := i + 1, (i < n) ∧ (i = n − 1) ∧ (n − 2 < n)) ∨ ((i ≥ n) ∧ (i = n − 1) ∧ (n − 2 < n))
  = (i = n) }
Figure 4. Annotated Source Code for Unrolled Loop
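As a quick, concrete sanity check of the derivation in Figure 4 — offered only as an illustration, not as part of any tool described in this paper — the unrolled guarded commands can be executed for sample values of start and n and tested against the derived postcondition (i = n):

    # Illustrative check of the Figure 4 result: execute the unrolled loop
    # "do i < n -> i := i + 1 od" for concrete values and confirm the derived
    # postcondition i = n under the precondition (start < n) & (i = start).
    def run_unrolled_counter(start, n):
        i = start
        for _ in range(n - start):      # unroll n - start times
            if i < n:                   # guarded command: if i < n -> i := i + 1 fi
                i = i + 1
        return i

    for start, n in [(0, 5), (3, 10), (7, 8)]:
        assert start < n                            # precondition
        assert run_unrolled_counter(start, n) == n  # derived postcondition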
1. The following criteria are the main characteristics to be identified during the specification of the repetition statement:
• invariant (P): an expression describing the conditions prior to entry and upon exit of the iterative structure.
• guards (B): Boolean expressions that restrict the entry into the loop. Execution of each guarded command Bi → Si terminates with P true, so that P is an invariant of the loop: {P ∧ Bi} Si {P}, for i
program MaxMin ( input, output );
var a, b, c, Largest, Smallest : real;

procedure FindMaxMin( NumOne, NumTwo:real; var Max, Min:real );
begin
  if NumOne > NumTwo then
    begin
      Max := NumOne;
      Min := NumTwo;
    end
  else
    begin
      Max := NumTwo;
      Min := NumOne;
    end
end;

procedure swapa( var X:integer; var Y:integer );
begin
  Y := Y + X;
  X := Y - X;
  Y := Y - X
end;

procedure swapb( var X:integer; var Y:integer );
var temp : integer;
begin
  temp := X;
  X := Y;
  Y := temp
end;

procedure funnyswap( X:integer; Y:integer );
var temp : integer;
begin
  temp := X;
  X := Y;
  Y := temp
end;

begin
  a := 5;
  b := 10;
  swapa(a,b);
  swapb(a,b);
  funnyswap(a,b);
  FindMaxMin(a,b,Largest,Smallest);
  c := Largest;
end.
Figure 7. Example Pascal program
Figures 8, 9, and 10 depict the output of AUTOSPEC when applied to the program code given in Figure 7, where the notation id{scope}instance is used to indicate a variable id with scope defined by the referencing environment for scope. The instance identifier
program MaxMin ( input, output );
var a, b, c, Largest, Smallest : real;
procedure FindMaxMin( NumOne, NumTwo:real; var Max, Min:real );
begin
  if (NumOne > NumTwo) then
    begin
      Max := NumOne;   (* Max{2}1 = NumOne0 & U *)
      Min := NumTwo;   (* Min{2}1 = NumTwo0 & U *)
    end
    I: (* (Max{2}1 = NumOne0 & Min{2}1 = NumTwo0) & U *)
  else
    begin
      Max := NumTwo;   (* Max{2}1 = NumTwo0 & U *)
      Min := NumOne;   (* Min{2}1 = NumOne0 & U *)
    end
    J: (* (Max{2}1 = NumTwo0 & Min{2}1 = NumOne0) & U *)
  K: (* (((NumOne0 > NumTwo0) & (Max{0}1 = NumOne0 & Min{0}1 = NumTwo0)) |
         (not (NumOne0 > NumTwo0) & (Max{0}1 = NumTwo0 & Min{0}1 = NumOne0))) & U *)
end
L: (* (((NumOne0 > NumTwo0) & (Max{0}1 = NumOne0 & Min{0}1 = NumTwo0)) |
       (not (NumOne0 > NumTwo0) & (Max{0}1 = NumTwo0 & Min{0}1 = NumOne0))) & U *)
Figure 8. Output created by applying AUTOSPEC to example
is used to provide an ordering of the assignments to a variable. The scope identifier has two purposes. When scope is an integer, it indicates the level of nesting within the current program or procedure. When scope is an identifier, it provides information about variables specified in a different context. For instance, if a call to some arbitrary procedure called foo is invoked, then specifications for variables local to foo are labeled with an integer scope. Upon return, the specification of the calling procedure will have references to variables local to foo. Although the variables being referenced are outside the scope of the calling procedure, a specification of the input and output parameters for foo can provide valuable information, such as the logic used to obtain the specification for the output variables to foo. As such, in the specification for the variables local to foo but outside the scope of the calling procedure, we use the scope label foo. Therefore, if we have a variable q local to foo, it might appear in a specification outside its local context as q{foo}4, where "4" indicates the fourth instance of variable q in the context of foo.
In addition to the notations for variables, we use the notation '|' to denote a logical-or, '&' to denote a logical-and, and the symbols '(* *)' to delimit comments (i.e., specifications). In Figure 8, the code for the procedure FindMaxMin contains an alternation statement, where lines I, J, K, and L specify the guarded commands of the alternation statement (I and J), the effect of the alternation statement (K), and the effect of the entire procedure (L), respectively. Of particular interest are the specifications for the swap procedures given in Figure 9, named swapa and swapb. The variables X and Y are specified using the notation described above. As such, the first assignment to Y is written using Y{0}1, where Y is the variable, '{0}' describes the level of nesting (here, it is zero), and '1' is the historical subscript, the '1' indicating the first instance of Y after the initial value. The final comment for swapa (line M), which gives the specification for the entire procedure, reads as:
(* (Y{0}2 = X0 & X{0}1 = Y0 & Y{0}1 = Y0 + X0) & U *)
where Y{0}2 = X0 is the specification of the final value of Y, and X{0}1 = Y0 is the specification of the final value of X. In this case, the intermediate value of Y, denoted Y{0}1, with value Y0 + X0, is not considered in the final value of Y. Procedure swapb uses a temporary variable algorithm for swap. Line N is the specification after the execution of the last line and reads as: (* (Y{0}1 = X0 & X{0}1 = Y0 & temp{0}1 = X0) & U *) where Y{0}1 = X0 is the specification of the final value of Y, and X{0}1 = Y0 is the specification of the final value of X. Although each implementation of the swap operation is different, the code in each procedure effectively produces the same results, a property appropriately captured by the respective specifications for swapa and swapb with respect to the final values of the variables X and Y. In addition, Figure 10 shows the formal specification of the funnyswap procedure. The semantics for the funnyswap procedure are similar to that of swapb. However, the parameter passing scheme used in this procedure is pass by value. The specification of the main begin-end block of the program MaxMin is given in Figure 10. There are eight lines of interest, labeled I, J, K, L, M, N, O, and P, respectively. Lines I and J specify the effects of assignment statements. The specification at line K demonstrates the use of identifier scope labels, where in this case, we see the specification of variables X and Y from the context of swapa. Line L is another example of the same idea, where the specifications of variables from the context of swapb (X and Y) are given. In the main program, no variables local to the scope of the call to funnyswap are affected by funnyswap due to the pass-by-value nature of funnyswap, and thus the specification shows no change in variable values, which is shown by line M of Figure 10. The effects of the call to procedure FindMaxMin provide another example of the specification of a procedure call (line N). Finally, line P is the specification of the entire program, with every precondition propagated to the final postcondition as described in Section 3.1. Here, of interest are the final values of the variables that are local to the program MaxMin (i.e., a, b, and c). Thus, according to the rules for historical subscripts, the a{0}3, b{0}3, and c{0}1
procedure swapa( var X:integer; var Y:integer );
begin
  Y := (Y + X);   (* (Y{0}1 = (Y0 + X0)) & U *)
  X := (Y - X);   (* (X{0}1 = ((Y0 + X0) - X0)) & U *)
  Y := (Y - X);   (* (Y{0}2 = ((Y0 + X0) - ((Y0 + X0) - X0))) & U *)
end   (* (Y{0}2 = X0 & X{0}1 = Y0 & Y{0}1 = Y0 + X0) & U *)

procedure swapb( var X:integer; var Y:integer );
var temp : integer;
begin
  temp := X;   (* (temp{0}1 = X0) & U *)
  X := Y;      (* (X{0}1 = Y0) & U *)
  Y := temp;   (* (Y{0}1 = X0) & U *)
end   (* (Y{0}1 = X0 & X{0}1 = Y0 & temp{0}1 = X0) & U *)

procedure funnyswap( X:integer; Y:integer );
var temp : integer;
begin
  temp := X;   (* (temp{0}1 = X0) & U *)
  X := Y;      (* (X{0}1 = Y0) & U *)
  Y := temp;   (* (Y{0}1 = X0) & U *)
end   (* (Y{0}1 = X0 & X{0}1 = Y0 & temp{0}1 = X0) & U *)
Figure 9. Output created by applying AUTOSPEC to example (cont.)
are of interest. In addition, by propagating the preconditions for each statement, the logic that was used to obtain the values for the variables of interest can be analyzed.
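The historical-subscript bookkeeping illustrated in Figures 8-10 can be mimicked for straight-line assignments in a few lines. The sketch below is illustrative only — it is not the AUTOSPEC implementation — and writes instances as flat names such as y1 instead of the paper's Y{0}1 notation:

    # Illustrative sketch of sp for assignment with historical subscripts:
    # substitute the current instance of each variable into the right-hand
    # side, then record a new instance of the assigned variable.
    import re

    def sp_assign(state, counts, var, expr):
        cur = lambda m: f"{m.group(0)}{counts.get(m.group(0), 0)}"
        expr_now = re.sub(r"[A-Za-z_]\w*", cur, expr)
        counts[var] = counts.get(var, 0) + 1
        return state + [f"{var}{counts[var]} = {expr_now}"], counts

    # the arithmetic swap of procedure swapa, starting from symbolic x0, y0
    state, counts = [], {"x": 0, "y": 0}
    for var, expr in [("y", "y + x"), ("x", "y - x"), ("y", "y - x")]:
        state, counts = sp_assign(state, counts, var, expr)
    print(" & ".join(state))
    # y1 = y0 + x0 & x1 = y1 - x0 & y2 = y1 - x1   (y2 simplifies to x0)

The final conjunct for y reduces to the initial value of x, matching the flavor of the line-M specification of swapa shown in Figure 9.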
6. Related Work
Previously, formal approaches to reverse engineering have used the semantics of the weakest precondition predicate transformer wp as the underlying formalism of their technique. The Maintainer's Assistant uses a knowledge-based transformational approach to construct formal specifications from program code via the use of a Wide-Spectrum Language (WSL)(Ward et al., 1989). A WSL is a language that uses both specification and imperative language constructs. A knowledge-base manages the correctness preserving transforma-
(* Main Program for MaxMin *)
begin
  a := 5;
  (* a{0}1 = 5 & U *)
  b := 10;
  (* b{0}1 = 10 & U *)
  swapa(a,b);
  (* (b{0}2 = 5 & (a{0}2 = 10 & (Y{swapa}2 = 5 & (X{swapa}1 = 10 & Y{swapa}1 = 15)))) & U *)
  swapb(a,b);
  (* (b{0}3 = 10 & (a{0}3 = 5 & (Y{swapb}1 = 10 & (X{swapb}1 = 5 & temp{swapb}1 = 10)))) & U *)
  funnyswap(a,b);
  (* (Y{funnyswap}1 = 5 & X{funnyswap}1 = 10 & temp{funnyswap}1 = 5) & U *)
  FindMaxMin(a,b,Largest,Smallest);
  (* (Smallest{0}1 = Min{FindMaxMin}1 & Largest{0}1 = Max{FindMaxMin}1 &
      (((5 > 10) & (Max{FindMaxMin}1 = 5 & Min{FindMaxMin}1 = 10)) |
       (not (5 > 10) & (Max{FindMaxMin}1 = 10 & Min{FindMaxMin}1 = 5)))) & U *)
  c := Largest;
  (* c{0}1 = Max{FindMaxMin}1 & U *)
end.

(* ((c{0}1 = Max{FindMaxMin}1) &
    (Smallest{0}1 = Min{FindMaxMin}1 & Largest{0}1 = Max{FindMaxMin}1 &
     (((5 > 10) & (Max{FindMaxMin}1 = 5 & Min{FindMaxMin}1 = 10)) |
      (not (5 > 10) & (Max{FindMaxMin}1 = 10 & Min{FindMaxMin}1 = 5))))) &
   (Y{funnyswap}1 = 5 & X{funnyswap}1 = 10 & temp{funnyswap}1 = 5) &
   (b{0}3 = 10 & a{0}3 = 5 & (Y{swapb}1 = 10 & X{swapb}1 = 5 & temp{swapb}1 = 10)) &
   (b{0}2 = 5 & a{0}2 = 10 & (Y{swapa}2 = 5 & X{swapa}1 = 10 & Y{swapa}1 = 15)) &
   (b{0}1 = 10 & a{0}1 = 5) & U *)
Figure 10. Output created by applying AUTOSPEC to example (cont.)
tions of concrete, implementation constructs in a WSL to abstract specification constructs in the same WSL.
REDO (Lano and Breuer, 1989) (Restructuring, Maintenance, Validation and Documentation of Software Systems) is an ESPRIT II project whose objective is to improve applications by making them more maintainable through the use of reverse engineering techniques. The approach used to reverse engineer COBOL involves the development of general guidelines for the process of deriving objects and specifications from program code as well as providing a framework for formally reasoning about objects (Haughton and Lano, 1991). In each of these approaches, the applied formalisms are based on the semantics of the weakest precondition predicate transformer wp. Some differences in applying wp and sp are that wp is a backward rule for program semantics and assumes a total correctness model of execution. However, the total correctness interpretation has no forward rule (i.e., no strongest total postcondition stp (Dijkstra and Scholten, 1990)). By using a partial correctness model of execution, both a forward rule (sp) and a backward rule (wlp) can be used to verify and refine formal specifications generated by program understanding and reverse engineering tasks. The main difference between the two approaches is the ability to directly apply the strongest postcondition predicate transformer to code to construct formal specifications versus using the weakest precondition predicate transformer as a guideline for constructing formal specifications.
7. Conclusions and Future Investigations
Formal methods provide many benefits in the development of software. Automating the process of abstracting formal specifications from program code is sought but, unfortunately, not completely realizable as of yet. However, by providing the tools that support the reverse engineering of software, much can be learned about the functionality of a system. The level of abstraction of specifications constructed using the techniques described in this paper are at the "as-built" level, that is, the specifications contain implementationspecific information. For straight-line programs (programs without iteration or recursion) the techniques described herein can be applied in order to obtain a formal specification from program code. As such, automated techniques for verifying the correctness of straight-line programs can be facilitated. Since our technique to reverse engineering is based on the use of strongest postcondition for deriving formal specifications from program code, the application of the technique to other programming languages can be achieved by defining the formal semantics of a programming language using strongest postcondition, and then applying those semantics to the programming constructs of a program. Our current investigations into the use of strongest postcondition for reverse engineering focus on three areas. First, we are extending our method to encompasses all major facets of imperative programming constructs, including iteration and recursion. To this end, we are in the process of defining the formal semantics of the ANSI C programming language using strongest postcondition and are applying our techniques to a NASA mission control application for unmanned spacecraft. Second, methods for constructing higher level abstractions from lower level abstractions are being investigated. Finally, a rigorous technique for re-engineering specifications from the imperative programming paradigm to the object-oriented programming paradigm is being developed (Gannod and Cheng, 1993). Directly related to this work is the potential for
applying the results to facilitate software reuse, where automated reasoning is applied to the specifications of existing components to determine reusability (Jeng and Cheng, 1992).
Acknowledgments
The authors greatly appreciate the comments and suggestions from the anonymous referees. Also, the authors wish to thank Linda Wills for her efforts in organizing this special issue. Finally, the authors would like to thank the participants of the IEEE 1995 Working Conference on Reverse Engineering for the feedback and comments on an earlier version of this paper. This is a revised and extended version of "Strongest Postcondition Semantics as the Formal Basis for Reverse Engineering" by G.C. Gannod and B.H.C. Cheng, which first appeared in the Proceedings of the Second Working Conference on Reverse Engineering, IEEE Computer Society Press, pp. 188-197, July 1995.
Appendix A: Motivations for Notation and Removal of Quantification
Section 3.1 states a conjecture that the removal of the quantification for the initial values of a variable is valid if the precondition Q has a conjunct that specifies the textual substitution. This Appendix discusses this conjecture. Recall that

sp(x := e, Q) = (∃v :: Q[x := v] ∧ x = e[x := v]),    (A.1)

where Q[x := v] denotes the textual substitution of v for the free occurrences of x in Q.
There are two goals that must be satisfied in order to use the definition of strongest postcondition for assignment. They are:
1. Elimination of the existential quantifier.
2. Development and use of a traceable notation.
Eliminating the Quantifier. First, we address the elimination of the existential quantifier. Consider the RHS of definition (A.1). Let y be a variable such that

(Q[x := y] ∧ x = e[x := y]) ⇒ (∃v :: Q[x := v] ∧ x = e[x := v]).    (A.2)
Define spρ(x := e, Q) (pronounced "s-p-rho") as the strongest postcondition for assignment with the quantifier removed. That is,

spρ(x := e, Q) = (Q[x := y] ∧ x = e[x := y]) for some y.    (A.3)
Given the definition of spρ, it follows that

spρ(x := e, Q) ⇒ sp(x := e, Q).    (A.4)
As such, the specification of the assignment statement can be made simpler if y from equation (A.3) can either be identified explicitly or named implicitly. The choice of y must be made carefully. For instance, consider the following. Let Q := P ∧ (x = z) such that P contains no free occurrences of x. Choosing an arbitrary a for y in (A.3) leads to the following derivation:

spρ(x := e, Q)
   = ⟨Q := P ∧ (x = z)⟩
     (P ∧ (x = z))[x := a] ∧ (x = e[x := a])
   = ⟨textual substitution⟩
     P[x := a] ∧ (x = z)[x := a] ∧ (x = e[x := a])
   = ⟨P has no free occurrences of x; textual substitution⟩
     P ∧ (a = z) ∧ (x = e[x := a]).

At first glance, this choice of y would seem to satisfy the first goal, namely removal of the quantification. However, this is not the case. Suppose P were replaced with P′ ∧ (a ≠ z). The derivation would lead to

spρ(x := e, Q) = P′ ∧ (a ≠ z) ∧ (a = z) ∧ (x = e[x := a]).
This is unacceptable because it leads to a contradiction, meaning that the specification of a program describes impossible behaviour. Ideally, it is desired that the specification of the assignment statement satisfy two requirements. It must:
1. Describe the behaviour of the assignment of the variable x, and
2. Adjust the precondition Q so that the free occurrences of x are replaced with the value of x before the assignment is encountered.
It can be proven that, through successive assignments to a variable x, the specification spρ will have only one conjunct of the form (x = β), where β is an expression. Informally, we note that each successive application of spρ uses a textual substitution that eliminates free references to x in the precondition and introduces a conjunct of the form (x = β). The convention used by the approach described in this paper is to choose for y the expression β. If no β can be identified, use a placeholder γ such that the precondition Q has no occurrence of γ. As an example, let y in equation (A.3) be z, and Q := P ∧ (x = z). Then

spρ(x := e, Q) = P ∧ (z = z) ∧ (x = e[x := z]).
Notice that the last conjunct in each of the derivations is of the form (x = e[x := y]) and that, since P contains no free occurrences of x, P is an invariant.
Notation. Define spρι (pronounced "s-p-rho-iota") as the strongest postcondition for assignment with the quantifier removed and indices. Formally, spρι has the form

spρι(x := e, Q) = (Q[x := y] ∧ x_k = e[x := y]) for some y.    (A.5)
Again, an appropriate y must be chosen. Let Q := P ∧ (x_i = y), where P has no occurrence of x other than i subscripted x's of the form (x_j = e_j), 0 ≤ j < i. Based on the previous discussion, choose y to be the RHS of the relation (x_i = y). As such, the definition of spρι can be modified to appear as

spρι(x := e, Q) = ((P ∧ (x_i = y))[x := y] ∧ x_{i+1} = e[x := y]) for some y.    (A.6)
Consider the following example where subscripts are used to show the effects of two consecutive assignments to the variable x. Let Q := P ∧ (x_i = a), and let the assignment statement be x := e. Application of spρι yields

spρι(x := e, Q)
   = (P ∧ (x_i = a))[x := a] ∧ (x_{i+1} = e[x := a])
   = ⟨textual substitution⟩
     P[x := a] ∧ (x_i = a)[x := a] ∧ (x_{i+1} = e[x := a])
   = ⟨textual substitution⟩
     P ∧ (x_i = a) ∧ (x_{i+1} = e[x := a]).
A subsequent application of spρι on the statement x := f, subject to Q′ := Q ∧ (x_{i+1} = e[x := a]), has the following derivation:

spρι(x := f, Q′)
   = (P ∧ (x_i = a) ∧ (x_{i+1} = e[x := a]))[x := e[x := a]] ∧ (x_{i+2} = f[x := e[x := a]])
   = ⟨textual substitution⟩
     P[x := e[x := a]] ∧ (x_i = a)[x := e[x := a]] ∧ (x_{i+1} = e[x := a])[x := e[x := a]] ∧ (x_{i+2} = f[x := e[x := a]])
   = ⟨P has no free x; textual substitution⟩
     P ∧ (x_i = a) ∧ (x_{i+1} = e[x := a]) ∧ (x_{i+2} = f[x := e[x := a]])
   = ⟨definition of Q⟩
     Q ∧ (x_{i+1} = e[x := a]) ∧ (x_{i+2} = f[x := e[x := a]])
   = ⟨definition of Q′⟩
     Q′ ∧ (x_{i+2} = f[x := e[x := a]]).
Therefore, it is observed that by using historical subscripts, the construction of the specification of the assignment statements involves the propagation of the precondition Q as an invariant conjuncted with the specification of the effects of setting a variable to a dependent value. This convention makes the evaluation of a specification annotation traceable by avoiding the elimination of descriptions of variables and their values at certain steps in the program. This is especially helpful in the case where choice statements (alternation and iteration) create alternative values for specific variable instances.
References
Byrne, Eric J. A Conceptual Foundation for Software Re-engineering. In Proceedings of the Conference on Software Maintenance, pages 226-235. IEEE, 1992.
Byrne, Eric J. and Gustafson, David A. A Software Re-engineering Process Model. In COMPSAC. IEEE, 1992.
Cheng, Betty H.C. Applying formal methods in automated software development. Journal of Computer and Software Engineering, 2(2):137-164, 1994.
Cheng, Betty H.C. and Gannod, Gerald C. Abstraction of Formal Specifications from Program Code. In Proceedings of the IEEE 3rd International Conference on Tools for Artificial Intelligence, pages 125-128. IEEE, 1991.
Chikofsky, Elliot J. and Cross, James H. Reverse Engineering and Design Recovery: A Taxonomy. IEEE Software, 7(1):13-17, January 1990.
Dijkstra, Edsger W. A Discipline of Programming. Prentice Hall, 1976.
Dijkstra, Edsger W. and Scholten, Carel S. Predicate Calculus and Program Semantics. Springer-Verlag, 1990.
Flor, Victoria Slind. Ruling's Dicta Causes Uproar. The National Law Journal, July 1991.
Gannod, Gerald C. and Cheng, Betty H.C. A Two Phase Approach to Reverse Engineering Using Formal Methods. Lecture Notes in Computer Science: Formal Methods in Programming and Their Applications, 735:335-348, July 1993.
Gannod, Gerald C. and Cheng, Betty H.C. Facilitating the Maintenance of Safety-Critical Systems Using Formal Methods. The International Journal of Software Engineering and Knowledge Engineering, 4(2):183-204, 1994.
Gries, David. The Science of Programming. Springer-Verlag, 1981.
Haughton, H.P. and Lano, Kevin. Objects Revisited. In Proceedings of the Conference on Software Maintenance, pages 152-161. IEEE, 1991.
Hoare, C.A.R. An axiomatic basis for computer programming. Communications of the ACM, 12(10):576-580, October 1969.
Jeng, Jun-jang and Cheng, Betty H.C. Using Automated Reasoning to Determine Software Reuse. International Journal of Software Engineering and Knowledge Engineering, 2(4):523-546, December 1992.
Katz, Shmuel and Manna, Zohar. Logical Analysis of Programs. Communications of the ACM, 19(4):188-206, April 1976.
Lano, Kevin and Breuer, Peter T. From Programs to Z Specifications. In John E. Nicholls, editor, Z User Workshop, pages 46-70. Springer-Verlag, 1989.
Leveson, Nancy G. and Turner, Clark S. An Investigation of the Therac-25 Accidents. IEEE Computer, pages 18-41, July 1993.
Osborne, Wilma M. and Chikofsky, Elliot J. Fitting pieces to the maintenance puzzle. IEEE Software, 7(1):11-12, January 1990.
Ward, M., Calliss, F.W., and Munro, M. The Maintainer's Assistant. In Proceedings of the Conference on Software Maintenance. IEEE, 1989.
Wing, Jeannette M. A Specifier's Introduction to Formal Methods. IEEE Computer, 23(9):8-24, September 1990.
Yourdon, E. and Constantine, L. Structured Analysis and Design: Fundamentals Discipline of Computer Programs and System Design. Yourdon Press, 1978.
Automated Software Engineering, 3, 165-172 (1996) © 1996 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands.
Recent Trends and Open Issues in Reverse Engineering LINDA M. WILLS School of Electrical and Computer Engineering Georgia Institute of Technology, Atlanta, GA 30332-0250 JAMES H. CROSS II Auburn University, Computer Science and Engineering lOJDunstan Hall, Auburn University, AL 36849
[email protected] [email protected]
Abstract. This paper discusses recent trends in the field of reverse engineering, particularly those highlighted at the Second Working Conference on Reverse Engineering, held in July 1995. The trends observed include increased orientation toward tasks, grounding in complex real-world applications, guidance from empirical study, analysis of non-code sources, and increased formalization. The paper also summarizes open research issues and provides pointers to future events and sources of information in this area.
1. Introduction
Researchers in reverse engineering use a variety of metaphors to describe the role their work plays in software development and evolution. They are detectives, piecing together clues incrementally discovered about a system's design and what "crimes" were committed in its evolution. They are rescuers, salvaging huge software investments, left stranded by shifting hardware platforms and operating systems. Some practice radiology, finding ways of viewing internal structures, obscured by and entangled with other parts of the software "organism": objects in procedural programs, logical data models in relational databases, and data and control flow "circulatory and nervous systems." Others are software archeologists (Chikofsky, 1995), reconstructing models of structures buried in the accumulated deposits of software patches and fixes; inspectors, measuring compliance with design, coding, and documentation standards; foreign language interpreters, translating software in one language to another; and treasure hunters and miners, searching for gems to extract, polish, and save in a reuse library. Although working from diverse points of view, reverse engineering researchers have a common goal of recovering information from existing software systems. Conceptual complexity is the software engineer's worst enemy. It directly affects costs and ultimately the reliability of the delivered system. Comprehension of existing systems is the underlying goal of reverse engineering technology. By examining and analyzing the system, the reverse engineering process generates multiple views of the system that highlight its salient features and delineate its components and the relationships between them (Chikofsky and Cross, 1990). Recovering this information makes possible a wide array of critical software engineering activities, including those mentioned above. The prospect of being able to provide tools and methodologies to assist and automate portions of the reverse engineering process is
an appealing one. Reverse engineering is an area of tremendous economic importance to the software industry not only in saving valuable existing assets, but also in facilitating the development of new software. From the many different metaphors used to describe the diverse roles that reverse engineering plays, it is apparent that supporting and semi-automating the process is a complex, multifarious problem. There are many different types of information to extract and many different task situations, with varying availability and accuracy of information about the software. A variety of approaches and skills is required to attack this problem. To help achieve coherence and facilitate communication in this rapidly growing field, researchers and practitioners have been meeting at the Working Conference on Reverse Engineering, the first of which was held in May 1993 (Waters and Chikofsky, 1993). The Working Conference provides a forum for researchers to discuss as a group current research directions and challenges to the field. The adjective "working" in the title emphasizes the conference's format of interspersing significant periods of discussion with paper presentations. The Second Working Conference on Reverse Engineering (Wills et al., 1995) was held in July, 1995, organized by general chair Elliot Chikofsky of Northeastern University and the DMR Group, and by program co-chairs Philip Newcomb of the Software Revolution and Linda Wills of Georgia Institute of Technology. This article uses highlights and observations from the Second Working Conference on Reverse Engineering to present a recent snapshot of where we are with respect to our overall goals, what new trends are apparent in the field, and where we are heading. It also points out areas where hopefully more research attention will be drawn in the future. Finally, it provides pointers to future conferences and workshops in this area and places to find additional information.
2. Increased Task-Orientation
The diverse set of metaphors listed above indicates the variety of tasks in which reverse engineering plays a significant role. Different tasks place different demands on the reverse engineering process. The issue in reverse engineering is not only how to extract information from an existing system, but which information should be extracted and in whatform should it be made accessible? Researchers are recognizing the need to tailor reverse engineering tools toward recovering information relevant to the task at hand. Mechanisms for focused, goal-driven inquiries about a software system are actively being developed. Dynamic Documentation. A topic of considerable interest is automatically generating accessible, dynamic documentation from legacy systems. Lewis Johnson coined the phrase "explanation on demand" for this type of documentation technology (Johnson, 1995). The strategy is to concentrate on generating only documentation that addresses specific tasks, rather than generating all possible documentation whether it is needed or not. Two important open issues are: what formalisms are appropriate for documentation, and how well do existing formalisms match the particular tasks maintainers have to perform? These issues are relevant to documentation at all levels of abstraction. For example, a similar issue arises in program understanding: what kinds of formal design representations
should be used as a target for program understanding systems? How can multiple models of design abstractions be extracted, viewed, and integrated?

Varying the Depth of Analysis. Depending on the task, different levels of analysis power are required. For example, recent advances have been made in using analysis techniques to detect duplicate fragments of code in large software systems (Baker, 1995; Kontogiannis et al., 1995). This is useful in identifying candidates for reuse and in preventing inconsistent maintenance of conceptually related code. If a user were interested only in detecting instances of "cut-and-paste" reuse, it would be sufficient to find similarities based on matching syntactic features (e.g., constant and function names, variable usage, and keywords), without actually understanding the redundant pieces (a minimal sketch of this kind of purely syntactic matching appears at the end of this section). The depth of analysis must be increased, however, if more complex, semantic similarities are to be detected, for example, for the task of identifying families of reusable components that all embody the same mathematical equations or business rules.

Interactive Tools. Related to the issue of providing flexibility in task-oriented tools is the degree of automation and interaction the tools have with people (programmers, maintainers, and domain experts (Quilici and Chin, 1995)). How is the focusing done? Who is controlling the depth of analysis and level of effort? The reverse engineering process is characterized by a search for knowledge about a design artifact with limited sources of information available. The person and the tool each bring different types of interpretive skills and information sources to the discovery process. A person can often see global patterns in data or subtle connections to informal domain concepts that would be difficult for tools based on current technology to uncover. Successful collaboration will depend on finding ways to leverage the respective abilities of the collaborators. The division of labor will be influenced by the task and environmental situation.
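To make the contrast between shallow syntactic matching and deeper semantic analysis concrete, the following sketch flags likely "cut-and-paste" clones by normalizing identifiers according to their order of first occurrence, so consistently renamed variables still line up, while checking nothing about what the code means. It is only an illustrative toy in Python, not the algorithm of any tool cited above; the token pattern, keyword list, and example fragments are invented for the illustration.

```python
import re
from difflib import SequenceMatcher

# Toy token pattern and keyword list; both are invented for this sketch.
TOKEN = re.compile(r"[A-Za-z_]\w*|\d+|[^\s\w]")
KEYWORDS = {"for", "while", "if", "else", "return", "int", "float"}

def normalized_tokens(fragment):
    """Tokenize a fragment, replacing each identifier with an index based on
    its first occurrence, so consistently renamed variables still match."""
    names = {}
    tokens = []
    for tok in TOKEN.findall(fragment):
        if tok[0].isalpha() or tok[0] == "_":
            if tok in KEYWORDS:
                tokens.append(tok)
            else:
                tokens.append(f"ID{names.setdefault(tok, len(names))}")
        else:
            tokens.append(tok)
    return tokens

def syntactic_similarity(frag_a, frag_b):
    """Fraction of matching normalized tokens, between 0.0 and 1.0."""
    return SequenceMatcher(None,
                           normalized_tokens(frag_a),
                           normalized_tokens(frag_b)).ratio()

original = "for (i = 0; i < n; i++) total = total + price[i] * qty[i];"
pasted   = "for (j = 0; j < n; j++) sum   = sum   + price[j] * qty[j];"
print(syntactic_similarity(original, pasted))  # 1.0: flagged as a likely clone
```

A tool at this level would report the two fragments as identical even though every variable has been renamed, which is precisely the depth of analysis described above as sufficient for spotting cut-and-paste reuse but not for recognizing semantically equivalent components.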
3. Attacking Industrial-Strength Problems
The types of problems that are driving reverse engineering research come from real-world systems and applications. Early work tended to focus on simplified versions of reverse engineering problems, often using data that did not always scale up to more realistic problems (Selfridge et al., 1993). This helped in initial explorations of techniques that have since matured. At the Working Conference, several researchers reported on the application of reverse engineering techniques to practical industrial problems, with results of significant economic importance. The software and legacy systems to which their techniques are being applied are quite complex, large, and diverse. Examples include a public key encryption program, industrial invoicing systems, the X window system, and software for analyzing data sent back from space missions. Consequently, the types of information being extracted from existing software span a wide range, including specifications, business rules, objects, and more recently, architectural features.

A good example of a large-scale application was provided by Philip Newcomb (Newcomb, 1995), who presented a tool called the Legacy System Cataloging Facility. This tool supports modeling, analyzing, and transforming legacy systems on an enterprise scale by providing a mechanism for efficiently storing and managing huge models of information systems at Boeing Computer Services.
Current applications are pushing the limits of existing techniques in terms of scalability and feasibility. Exploring these issues and developing new techniques in the context of real-world systems and problems is critical.
4. More Empirical Studies
One of the prerequisites in addressing real-world, economically significant reverse engineering problems is understanding what the problems are and establishing requirements on what it would take to solve them. Researchers are recognizing the necessity of conducting studies that examine what practitioners are doing currently, what is needed to support them, and how well (or poorly) the existing technology is meeting their needs. The results of one such full-scale case study were presented at the Working Conference by Piernicola Fiore (Fiore et al., 1995). The study focused on a reverse engineering project at a software factory (Basica S.p.A. in Italy) to reverse engineer banking software. Based on an analysis of productivity, the study identified the need for adaptable automated tools. Results indicated that cost is not necessarily related to the number of lines of code, and that the data and the programs each require distinct econometric models.

In addition to this formal, empirical investigation, some informal studies were reported at the Working Conference. During a panel discussion, Lewis Johnson described his work on dynamic, accessible documentation, which was driven by studies of inquiry episodes gathered from newsgroups. This helped to determine what types of questions software users and maintainers typically ask. Blaha and Premerlani (1995) reported on idiosyncrasies they observed in relational database designs, many of which are in commercial software products!

Empirical data is useful not only in driving and guiding reverse engineering technology development, but also in estimating the effort involved in reverse engineering a given system. This can influence a software engineer's decisions about whether to reengineer a system or opt for continued maintenance or a complete redesign (Newcomb, 1995). While the value of case studies is widely recognized, relatively few have been conducted thus far. Closely related to this problem is the critical need for publicly available data sets that embody representative reverse engineering problems (e.g., a legacy database system including all its associated documentation (Selfridge et al., 1993)). Adopting these as standard test data sets would enable researchers to quantitatively compare results and set clear milestones for measuring progress in the field. Unfortunately, it is difficult to find data sets that can be agreed upon as being representative of those found in common reverse engineering situations. They must not be proprietary and they must be made easily accessible. Papers describing case studies and available data sets would significantly contribute to advancing the research in this field (Selfridge et al., 1993) and are actively sought by the Working Conference.
5. Looking Beyond Code for Sources of Information
In trying to understand aspects of a software system, a reverse engineer uses all the sources of information available. In the past, most reverse engineering research focused on supporting
the recovery of information solely from the source code. Recently, the value of non-code system documents as rich sources of information has been recognized. Documents associated with the source code often contain information that is difficult to capture in the source code itself, such as design rationale, connections to "human-oriented" concepts (Biggerstaff et al., 1994), or the history of evolutionary steps that went into creating the software. For example, at the Working Conference, analysis techniques were presented that automatically derived test cases from reference manuals and structured requirements (Lutsky, 1995), business rules and a domain lexicon from structured analysis specifications (Leite and Cerqueira, 1995), and formal semantics from dataflow diagrams (Butler et al., 1995).

A crucial open issue in this area of exploration is what happens when one source of information is inaccurate or inconsistent with another source of information, particularly the code. Who is the final arbiter? Often it is valuable simply to detect such inconsistencies, as is the case in generating test cases.
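As a deliberately naive illustration of mining non-code documents, the sketch below scans prose requirements for "shall"/"must" sentences and turns each into a candidate test obligation. It is a hypothetical Python example, not the approach of the documentation-analysis work cited above; the sentence pattern and the sample manual text are invented.

```python
import re

# Invented pattern: one candidate obligation per "shall"/"must" sentence.
REQUIREMENT = re.compile(
    r"(?P<subject>[A-Z][\w ]*?)\s+(?:shall|must)\s+(?P<behaviour>[^.]+)\."
)

def candidate_test_obligations(document_text):
    """Return (subject, required behaviour) pairs found in prose documentation."""
    return [
        (m.group("subject").strip(), m.group("behaviour").strip())
        for m in REQUIREMENT.finditer(document_text)
    ]

# A made-up fragment of a reference manual, for illustration only.
manual = (
    "The account number field shall contain exactly ten digits. "
    "The system must reject transactions dated before 1 January 1990."
)

for subject, behaviour in candidate_test_obligations(manual):
    print(f"TEST: verify that '{subject}' does indeed {behaviour}")
```

Even something this crude hints at why inconsistency matters: every extracted obligation the code fails to satisfy is either a defect in the system or an error in the document, and detecting the mismatch is valuable in itself.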
6. Increased Formalization
When a field is just beginning to form, it is common for researchers to try many different informal techniques and experimental methodologies to get a handle on the complex problems they face. As the field matures, researchers start to formalize their methods and the underlying theory. The field of reverse engineering is starting to see this type of growth. A fruitful interplay is emerging between prototyping and experimenting with new techniques, which are sketched out informally, and the process of formalization, which tries to provide an underlying theoretical basis for these informal techniques. This helps make the methods more precise and less prone to ambiguous results. Formal methods contribute to the validation of reverse engineering technology and to a clearer understanding of fundamental reverse engineering problems.

While formal methods, with their well-defined notations, also have tremendous potential for facilitating automation, the current state of the art focuses on small programs. This raises issues of practicality, feasibility, and scalability. A promising strategy is to explore how formal methods can be used in conjunction with other approaches, for example, coupling pattern matching with symbolic execution (a toy illustration of symbolic execution appears at the end of this section). Although the formal notations lend themselves to machine manipulation, they tend to introduce a communication barrier between the reverse engineer who is not familiar with formal methods and the machine. Making reverse engineering tools based on formal methods accessible to practicing engineers will require the support of interfaces to the formal notations, including graphical notations and domain-oriented representations, such as those being explored in applying formal methods to component-based reuse (Lowry et al., 1994).
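The following sketch illustrates, in miniature, the kind of machine manipulation that formal representations make possible: a sequence of straight-line assignments is executed symbolically, so each variable ends up bound to an expression over the program's initial inputs, which could then be matched against patterns such as known equations or business rules. This is a hypothetical toy, not the formalism of any particular approach discussed here; the tiny assignment language and the example program are invented.

```python
# Toy symbolic executor for straight-line assignments (requires Python 3.9+
# for ast.unparse). Illustrative only; real languages need far more machinery.
import ast

def symbolic_execute(statements, state=None):
    """Return {variable: symbolic expression string} after the assignments."""
    state = dict(state or {})
    for stmt in statements:
        target, expr_src = (s.strip() for s in stmt.split("=", 1))
        tree = ast.parse(expr_src, mode="eval")

        class Substitute(ast.NodeTransformer):
            def visit_Name(self, node):
                # Replace a variable by its current symbolic value, if any.
                if node.id in state:
                    return ast.parse(state[node.id], mode="eval").body
                return node

        new_tree = ast.fix_missing_locations(Substitute().visit(tree))
        state[target] = ast.unparse(new_tree)
    return state

program = ["t = x + y", "x = t * 2", "y = x - t"]
print(symbolic_execute(program))
# {'t': 'x + y', 'x': '(x + y) * 2', 'y': '(x + y) * 2 - (x + y)'}
```

Real programs, of course, have loops, branches, aliasing, and side effects, which is exactly where the scalability concerns raised above begin to bite.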
7. Challenges for the Future
Other issues not specifically addressed by papers presented at the Working Conference include:
• How do we validate and test reverse engineering technology?
• How do we measure its potential impact? How can we support the critical task of assessment that should precede any reverse engineering activity? This includes determining how amenable an artifact is to reverse engineering, what outcome is expected, the estimated cost of the reverse engineering project, and the anticipated cost of not reverse engineering. Most reverse engineering research assumes that reverse engineering will be performed and thus overlooks this critical assessment task, which needs tools and methodologies to support it.
• What can we do now to prevent the software systems we are currently creating from becoming the incomprehensible legacy systems of tomorrow? For example, what new problems does object-oriented code present? What types of programming language features, documentation, or design techniques are helpful for later comprehension and evolution of the software?
• A goal of reverse engineering research is to raise the conceptual level at which software tools interact and communicate with software engineers, domain experts, and end users. This raises issues concerning how to most effectively acquire, refine, and use knowledge of the application domain. How can it be used to organize and present information extracted in terms the tool user can readily comprehend? What new presentation and visualization techniques are useful? How can domain knowledge be captured from non-code sources? What new techniques are needed to reverse engineer programs written in non-traditional, domain-oriented "languages," such as spreadsheets, database queries, grammar-based specifications, and hardware description languages?
• A clearer articulation of the reverse engineering process is needed. What is the life-cycle of a reverse engineering activity and how does it relate to the forward engineering life-cycle? Can one codify the best practices of reverse engineers, and thereby improve the effectiveness of reverse engineering generally?
• What is management's role in the success of reverse engineering technology? From the perspective of management, reverse engineering is often seen as a temporary set of activities, focused on short-term transition. As such, management is reluctant to invest heavily in reverse engineering research, education, and application. In reality, reverse engineering can be used in forward engineering as well as maintenance to better control conceptual complexity across the life-cycle of evolving software.
8. Conclusion and Future Events
This article has highlighted the key trends in the field of reverse engineering that we observed at the Second Working Conference. More details about the WCRE presentations and discussions are given in (Cross et al., 1995). The 1993 and 1995 WCRE proceedings are available from IEEE Computer Society Press. Even more important than the trends and ideas discussed are the energy and enthusiasm shared by the research community. Even though the problems being attacked are complex,
they are intensely interesting and highly relevant to many software-related activities. One of the hallmarks of the Working Conference is that Elliot Chikofsky manages to come up with amusing reverse engineering puzzles that allow attendees to revel in the reverse engineering process. For example, at the First Working Conference, he challenged attendees to reverse engineer jokes given only their punch-lines. This year, he created a "reverse taxonomy" of tongue-in-cheek definitions that needed to be reverse engineered into computing-related words.¹

The next Working Conference is planned for November 8-10, 1996 in Monterey, CA. It will be held in conjunction with the 1996 International Conference on Software Maintenance (ICSM). Further information on the upcoming Working Conference can be found at http://www.ee.gatech.edu/conferences/WCRE or by sending mail to [email protected]. Other future events related to reverse engineering include:
• the Workshop on Program Comprehension, which was held in conjunction with the International Conference on Software Engineering in March 1996 in Berlin, Germany;
• the International Workshop on Computer-Aided Software Engineering (CASE), which is being planned for London, England, in the summer of 1997; and
• the Reengineering Forum, a commercially-oriented meeting, which complements the Working Conference and is being held June 27-28, 1996 in St. Louis, MO.
Acknowledgments

This article is based, in part, on notes taken by rapporteurs at the Second Working Conference on Reverse Engineering: Gerardo Canfora, David Eichmann, Jean-Luc Hainaut, Lewis Johnson, Julio Cesar Leite, Ettore Merlo, Michael Olsem, Alex Quilici, Howard Reubenstein, Spencer Rugaber, and Mark Wilson. We also appreciate comments from Lewis Johnson which contributed to our list of challenges.

Notes

1. Some examples of Elliot's reverse taxonomy: (A) a suggestion made to a computer; (B) the answer when asked "what is that bag the Blue Jays batter runs to after hitting the ball?"; (C) an instrument used for entering errors into a system. Answers: (A) command; (B) database; (C) keyboard.
References

Baker, B. On finding duplication and near-duplication in large software systems. In (Wills et al., 1995), pages 86-95.
Biggerstaff, T., B. Mitbander, and D. Webster. Program understanding and the concept assignment problem. Communications of the ACM, 37(5):72-83, May 1994.
Blaha, M. and W. Premerlani. Observed idiosyncracies of relational database designs. In (Wills et al., 1995), pages 116-125.
Butler, G., P. Grogono, R. Shinghal, and I. Tjandra. Retrieving information from data flow diagrams. In (Wills et al., 1995), pages 22-29.
Chikofsky, E. Message from the general chair. In (Wills et al., 1995) (contains a particularly vivid analogy to archeology), page ix.
Chikofsky, E. and J. Cross. Reverse engineering and design recovery: A taxonomy. IEEE Software, pages 13-17, January 1990.
Cross, J., A. Quilici, L. Wills, P. Newcomb, and E. Chikofsky. Second working conference on reverse engineering summary report. ACM SIGSOFT Software Engineering Notes, 20(5):23-26, December 1995.
Fiore, P., F. Lanubile, and G. Visaggio. Analyzing empirical data from a reverse engineering project. In (Wills et al., 1995), pages 106-114.
Johnson, W. L. Interactive explanation of software systems. In Proc. 10th Knowledge-Based Software Engineering Conference, pages 155-164, Boston, MA, 1995. IEEE Computer Society Press.
Kontogiannis, K., R. De Mori, M. Bernstein, M. Galler, and E. Merlo. Pattern matching for design concept localization. In (Wills et al., 1995), pages 96-103.
Leite, J. and P. Cerqueira. Recovering business rules from structured analysis specifications. In (Wills et al., 1995), pages 13-21.
Lowry, M., A. Philpot, T. Pressburger, and I. Underwood. A formal approach to domain-oriented software design environments. In Proc. 9th Knowledge-Based Software Engineering Conference, pages 48-57, Monterey, CA, 1994.
Lutsky, P. Automating testing by reverse engineering of software documentation. In (Wills et al., 1995), pages 8-12.
Newcomb, P. Legacy system cataloging facility. In (Wills et al., 1995), pages 52-60, July 1995.
Quilici, A. and D. Chin. Decode: A cooperative environment for reverse-engineering legacy software. In (Wills et al., 1995), pages 156-165.
Selfridge, P., R. Waters, and E. Chikofsky. Challenges to the field of reverse engineering - A position paper. In Proc. of the First Working Conference on Reverse Engineering, pages 144-150, Baltimore, MD, May 1993. IEEE Computer Society Press.
Waters, R. and E. Chikofsky, editors. Proc. of the First Working Conference on Reverse Engineering, Baltimore, MD, May 1993. IEEE Computer Society Press.
Wills, L., P. Newcomb, and E. Chikofsky, editors. Proc. of the Second Working Conference on Reverse Engineering, Toronto, Ontario, July 1995. IEEE Computer Society Press.
Automated Software Engineering 3, 173-178 (1996)
© 1996 Kluwer Academic Publishers. Manufactured in The Netherlands.
Desert Island Column JOHN DOBSON
[email protected]
Centre for Software Reliability, Bedson Building, University of Newcastle, Newcastle NE1 7RU, U.K.
When I started preparing for this article, I looked along my bookshelves to see what books I had on software engineering. There were none. It is not that software engineering has not been part of my life, but that I have not read anything on it as a subject that I wished to keep in order to read again. There were books on software, and books on engineering, and books on many a subject of interest to software engineers such as architecture and language. In fact these categories provided more than enough for me to wish to take to the desert island, so making the selection provided an enjoyable evening. I also chose to limit my quota to six (or maybe the editor did, I forget).

Since there was an element of choice involved, I made for myself some criteria: it had to be a book that I had read and enjoyed reading, it had to have (or have had) some significance for me in my career, either in terms of telling me how to do something or increasing my understanding, it had to be relevant to the kind of intellectual exercise we engage in when we are engineering software, and it had to be well-written. Of these, the last was the most important. There is a pleasure to be gained from reading a well-written book simply because it is written well. That doesn't necessarily mean easy to read; it means that there is a just and appropriate balance between what the writer has brought to the book and what the reader needs to bring in order to get the most out of it. All of my chosen books are well-written. All are worth reading for the illumination they shed on software engineering from another source, and I hope you will read them for that reason.

First, a book on engineering: To Engineer is Human, by Petroski (1985). Actually it is not so much about engineering (understood as meaning civil engineering) as about the history of civil engineering. Perhaps that is why I have no books on software engineering: the discipline is not yet old enough to have a decent history, and so there is not much of interest to say about it. What is interesting about Petroski's book, though, is the way it can be used as a base text for a future book on the history of software engineering, for it shows how the civil engineering discipline (particularly the building of bridges) has developed through disaster. The major bridge disasters of civil engineering history—Tay Bridge, Tacoma Narrows—have their analogues in our famous disasters—Therac, the London Ambulance Service. The importance of disasters lies, of course, in what is learnt from them; and this means that they have to be well documented. The examples of software disasters that I gave have been documented, the London Ambulance Service particularly, but these are in the minority. There must be many undocumented disasters in software engineering, from which as a result nothing has been learnt. This is yet another example of the main trouble with software being its invisibility, which is why engineering it is so hard. It is probably not possible, at least in the western world, to have a major disaster in civil engineering which can be completely concealed; Petroski's book shows just how this has helped the development of the discipline.
What makes Petroski's book so pleasant to read is the stress he places on engineering as a human activity and on the forces that drive engineers. Engineering is something that is born in irritation with something that is not as good as it could have been, a matter of making bad design better. But of course this is only part of the story. There is the issue of what the artifact is trying to achieve to consider. Engineering design lies in the details, the "minutely organized Particulars" as Blake calls them¹. But what about the general principles, the grand scheme of things in which the particulars have a place, the "generalizing Demonstrations of the Rational Power"? In a word, the architecture—and of course the architect (the "Scoundrel, Hypocrite and Flatterer" who appeals to the "General Good"?).

It seems that the software engineer's favourite architect is Christopher Alexander. A number of colleagues have been influenced by that remarkable book A Pattern Language (Alexander et al., 1977), which is the architects' version of a library of reusable object classes. But for all its influence over software architects (its influence over real architects is, I think, much less noticeable), it is not the one I have chosen to take with me. Alexander's vision of the architectural language has come out of his vision of the architectural process, which he describes in an earlier book, The Timeless Way of Building (Alexander, 1979). He sees the creation of pattern languages as being an expression of the actions of ordinary people who shape buildings for themselves instead of having the architect do it for them. The role of the architect is that of a facilitator, helping people to decide for themselves what it is they want. This is a process which Alexander believes has to be rediscovered, since the languages have broken down, are no longer shared, because the architects and planners have taken them for themselves.

There is much talk these days of empowerment. I am not sure what it means, though I am sure that a lot of people who use it do not know what it means either. When it is not being used merely as a fashionable management slogan, empowerment seems to be a recognition of the embodiment in an artifact of the Tao, the quality without a name. As applied to architecture, this quality has nothing to do with the architecture of the building or with the processes it supports and which stem from it. The architecture and architectural process should serve to release a more basic understanding which is native to us. We find that we already know how to make the building live, but that the power has been frozen in us. Architectural empowerment is the unfreezing of this ability. The Timeless Way of Building is an exploration of this Zen-like way of doing architecture. Indeed the book could have been called Zen and the Art of Architecture, but fortunately it was not. A cynical friend of mine commented, after he had read the book, "It is good to have thought like that"—the implication being that people who have been through that stage are more mature in their thinking than those who have not or who are still in it. I can see what he means but I think he is being unfair. I do not think we have really given this way of building systems a fair try. Christopher Alexander has, of course, and the results are described in two of his other books, The Production of Houses (Alexander et al., 1985) and The Oregon Experiment (Alexander et al., 1975).
Reading between the lines of these two books does seem to indicate that the process was perhaps not as successful as it might have been, and I think there is probably scope for an architectural process engineer to see what could be done to improve the process design. Some experiments in designing computer systems that way have been performed. One good example is described in Pelle Ehn's book Work-Oriented
Design of Computer Artifacts (Ehn, 1988), which has clearly been influenced by Alexander's view of the architectural process. It also shares Alexander's irritating tendency to give the uneasy impression that the project was not quite as successful as claimed. But nevertheless I think these books of Alexander's should be required reading, particularly for those who like to acknowledge the influence of A Pattern Language. Perhaps The Timeless Way of Building and The Production of Houses will come to have the same influence on the new breed of requirements engineers as A Pattern Language has had on software engineers. That would be a good next stage of development for requirements engineering to go through.

If there is something about the architectural process that somehow embodies the human spirit, then there is something about the architectural product that embodies the human intellect. It sometimes seems as if computers have taken over almost every aspect of human intellectual endeavour, from flying aeroplanes to painting pictures. Where is it all going to end—indeed will it ever end? Is there anything that they can't do? Well of course there is, and their limitations are provocatively explored in Hubert Dreyfus' famous book What Computers Can't Do (Dreyfus, 1979), which is my third selection.

For those who have yet to read this book, it is an enquiry into the basic philosophical presuppositions of the artificial intelligence domain. It raises some searching questions about the nature and use of intelligence in our society. It is also a reaction against some of the more exaggerated claims of proponents of artificial intelligence, claims which, however they may deserve respect for their usefulness and authority, have not been found agreeable to experience (as Gibbon remarked about the early Christian belief in the nearness of the end of the world). Now it is too easy, and perhaps a bit unfair, to tease the AI community with some of the sillier sayings of its founders. Part of the promotion of any new discipline must involve a certain amount of overselling (look at that great engineer Brunel, for example). I do not wish to engage in that debate again here, but it is worth remembering that some famous names in software engineering have, on occasion, said things which perhaps they now wish they had not said. It would be very easy to write a book which does for software engineering what What Computers Can't Do did for artificial intelligence: raise a few deep issues, upset a lot of people, remind us all that when we cease to think about something we start to say stupid things and make unwarranted claims. It might be harder to do it with Dreyfus' panache, rhetoric, and philosophic understanding. I do find with What Computers Can't Do, though, that the rhetoric gets in the way a bit. A bit more dialectic would not come amiss. But the book is splendid reading.

Looking again at the first three books I have chosen, I note that all of them deal with the human and not the technical side of software capabilities, design and architecture. One of the great developments in software engineering came when it was realised and accepted that the creation of software was a branch of mathematics, with mathematical notions of logic and proof. The notion of proof is a particularly interesting one when it is applied to software, since it is remarkable how shallow and uninteresting the theorems and proofs about the behaviour of programs usually are.
Where are the new concepts that make for great advances in mathematical proofs? The best book I know that explores the nature of proof is Imre Lakatos' Proofs and Refutations (Lakatos, 1976) (subtitled The Logic of Mathematical Discovery—making the
point that proofs and refutations lead to discoveries, all very Hegelian). This surely is a deathless work which so cleverly explores the nature of proof, the role of counterexamples in producing new proofs by redefining concepts, and the role of formalism in convincing a mathematician. In a way, it describes the history of mathematical proof in the way that To Engineer is Human describes the history of engineering (build it; oh dear, it's fallen down; build it again, but better this time). What makes Proofs and Refutations so memorable is its cleverness, its intellectual fun, its wit. But the theorem discussed is just an ordinary invariant theorem (Euler's formula relating vertices, edges and faces of a polyhedron: V - E + F = 2), and its proof is hardly a deep one, either. But Lakatos makes all sorts of deep discussion come out of this simple example: the role of formalism in the advancement of understanding, the relationship between the certainty of a formal proof and the meaning of the denotational terms in the proof, the process of concept formation. To the extent that software engineering is a branch of mathematics, the discussion of the nature of mathematics (and there is no better discussion anywhere) is of relevance to software engineers.

Mathematics is not, of course, the only discipline of relevance to software engineering. Since computer systems have to take their place in the world of people, they have to respect that social world. I have lots of books on that topic on my bookshelf, and the one that currently I like the best is Computers in Context by Dahlbom and Mathiassen (1993), but it is not the one that I would choose to take to my desert island. Instead, I would prefer to be accompanied by Women, Fire and Dangerous Things by Lakoff (1987). The subtitle of this book is What Categories Reveal about the Mind. The title comes from the fact that in the Dyirbal language of Australia, the words for women, fire, and dangerous things are all placed in one category, but not because women are considered fiery or dangerous. Any object-oriented software engineer should, of course, be intensely interested in how people do categorise things and what the attributes are that are common to each category (since this will form the basis of the object model and schema). I find very little in my books on object-oriented requirements and design that tells me how to do this, except that many books tell me it is not easy and requires a lot of understanding of the subject domain, something which I know already but which lacks the concreteness of practical guidance.

What Lakoff's book does is to tell you what the basis of linguistic categorisation actually is. (But I'm not going to tell you; my aim is to get you to read this book as well.) With George Lakoff telling you about the linguistic basis for object classification and Christopher Alexander telling you about how to go about finding out what a person or organisation's object classification is, you are beginning to get enough knowledge to design a computer system for them. However, you should be aware that the Lakoff book contains fundamental criticisms of the objectivist stance, which believes that meaning is a matter of truth and reference (i.e., that it concerns the relationship between symbols and things in the world) and that there is a single correct way of understanding what is and what is not true.
There is some debate about the objectivist stance and its relation to software (see the recent book Information Systems Development and Data Modelling by Hirschheim, Klein and Lyytinen (1995) for a fair discussion), but most software engineers seem reluctant to countenance any alternative view. Perhaps this is because the task of empowering people to construct their own reality,
which is what all my chosen books so far are about, is seen as a task not fit, too subversive, for any decently engineered software to engage in. (Or maybe it is just too hard.)

My final choice goes against my self-denying ordinance not to make fun of the artificial intelligentsia. It is the funniest novel about computers ever written, and one of the great classics of comedy literature: The Tin Men by Frayn (1965). For those who appreciate such things, it also contains (in its last chapter) the best and most humorous use of self-reference ever published, though you have to read the whole book to get the most enjoyment out of it. For a book which was written more than thirty years ago, it still seems very pointed, hardly dated at all. I know of some institutions that claim as a matter of pride to have been the original for the fictitious William Morris Institute of Automation Research (a stroke of inspiration there!). They still could be; the technology may have been updated but the same individual types are still there, and the same meretricious research perhaps—constructing machines to invent the news in the newspapers, to write bonkbusters, to do good and say their prayers, to play all the world's sport and watch it—while the management gets on with more stimulating and demanding tasks, such as organising the official visit which the Queen is making to the Institute to open the new wing.

So there it is. I have tried to select a representative picture of engineering design, of the architecture of software artifacts, of the limitations and powers of mathematical formalisation of software, of the language software embodies and of the institutions in which software research is carried out. Together they say something about my view, not so much of the technical detail of software engineering, but of the historical, architectural, intellectual and linguistic context in which it takes place. So although none of these books is about software engineering, all are relevant since they show that what is true of our discipline is true of other disciplines also, and therefore we can learn from them and use their paradigms as our own. There are many other books from other disciplines of relevance to computing that I am particularly sorry to leave behind, Wassily Kandinsky's book Point and Line to Plane (Kandinsky, 1979) (which attempts to codify the rules of artistic composition) perhaps the most. Now for my next trip to a desert island, I would like to take, in addition to the Kandinsky, [that's enough books, Ed.].

Note

1. Jerusalem, Part III, plate 55.
References

Alexander, C. 1979. The Timeless Way of Building. New York: Oxford University Press.
Alexander, C., Ishikawa, S., and Silverstein, M. 1977. A Pattern Language. New York: Oxford University Press.
Alexander, C., Martinez, J., and Corner, D. 1985. The Production of Houses. New York: Oxford University Press.
Alexander, C., Silverstein, M., Angel, S., Ishikawa, S., and Abrams, D. 1975. The Oregon Experiment. New York: Oxford University Press.
Dahlbom, B. and Mathiassen, L. 1993. Computers in Context. Cambridge, MA and Oxford, UK: NCC Blackwell.
Dreyfus, H.L. 1979. What Computers Can't Do (revised edition). New York: Harper & Row.
Ehn, P. 1988. Work-Oriented Design of Computer Artifacts. Stockholm: Arbetslivscentrum (ISBN 91-86158-45-7).
Frayn, M. 1965. The Tin Men. London: Collins (republished by Penguin Books, 1995).
Hirschheim, R., Klein, H.K., and Lyytinen, K. 1995. Information Systems Development and Data Modelling. Cambridge University Press.
Kandinsky, W. 1979. Point and Line to Plane. Trans. H. Dearstyne and H. Rebay (Eds.). New York: Dover (originally published 1926, in German).
Lakatos, I. 1976. Proofs and Refutations. J. Worrall and E. Zahar (Eds.), Cambridge University Press.
Lakoff, G. 1987. Women, Fire, and Dangerous Things: What Categories Reveal about the Mind. University of Chicago Press.
Petroski, H. 1985. To Engineer is Human. New York: St. Martin's Press.
Automated Software Engineering
An International Journal

Instructions for Authors

Authors are encouraged to submit high quality, original work that has neither appeared in, nor is under consideration by, other journals.

PROCESS FOR SUBMISSION
1. Authors should submit five hard copies of their final manuscript to: Mrs. Judith A. Kemp, AUTOMATED SOFTWARE ENGINEERING Editorial Office, Kluwer Academic Publishers, 101 Philip Drive, Norwell, MA 02061. Tel.: 617-871-6300; FAX: 617-871-6528; E-mail: [email protected]
2. Authors are strongly encouraged to use Kluwer's LaTeX journal style file. Please see the ELECTRONIC SUBMISSION section below.
3. Enclose with each manuscript, on a separate page, from three to five key words.
4. Enclose originals for the illustrations, in the style described below, for one copy of the manuscript. Photocopies of the figures may accompany the remaining copies of the manuscript. Alternatively, original illustrations may be submitted after the paper has been accepted.
5. Enclose a separate page giving the preferred address of the contact author for correspondence and return of proofs. Please include a telephone number, fax number and email address, if available.
6. If possible, send an electronic mail message to <[email protected]> at the time your manuscript is submitted, including the title, the names of the authors, and an abstract. This will help the journal expedite the refereeing of the manuscript.
7. The refereeing is done by anonymous reviewers.
STYLE FOR MANUSCRIPT
1. Typeset, double or 1½ space; use one side of sheet only (laser printed, typewritten, and good quality duplication acceptable).
2. Use an informative title for the paper and include an abstract of 100 to 250 words at the head of the manuscript. The abstract should be a carefully worded description of the problem addressed, the key ideas introduced, and the results. Abstracts will be printed with the article.
3. Provide a separate double-spaced sheet listing all footnotes, beginning with "Affiliation of author" and continuing with numbered references. Acknowledgment of financial support may be given if appropriate.
References should appear in a separate bibliography at the end of the paper, in alphabetical order, with items referred to in the text by author and date of publication in parentheses, e.g., (Marr, 1982). References should be complete, in the following style:
Style for papers: Authors, last names followed by first initials, year of publication, title, volume, inclusive page numbers.
Style for books: Authors, year of publication, title, publisher and location, chapter and page numbers (if desired).
Examples as follows:
(Book) Marr, D. 1982. Vision, a Computational Investigation into the Human Representation & Processing of Visual Information. San Francisco: Freeman.
(Journal) Rosenfeld, A. and Thurston, M. 1971. Edge and curve detection for visual scene analysis. IEEE Trans. Comput., C-20:562-569.
(Conference Proceedings) Witkin, A. 1983. Scale space filtering. Proc. Int. Joint Conf. Artif. Intell., Karlsruhe, West Germany, pp. 1019-1021.
(Lab. memo) Yuille, A.L. and Poggio, T. 1983. Scaling theorems for zero crossings. M.I.T. Artif. Intell. Lab., Massachusetts Inst. Technol., Cambridge, MA, A.I. Memo 722.

Type or mark mathematical copy exactly as it should appear in print. Journal style for letter symbols is as follows: variables, italic type (indicated by underline); constants, roman text type; matrices and vectors, boldface type (indicated by wavy underline). In word-processor manuscripts, use the appropriate typeface. It will be assumed that letters in displayed equations are to be set in italic type unless you mark them otherwise. All letter symbols in text discussion must be marked if they should be italic or boldface. Indicate best breaks for equations in case they will not fit on one line.

ELECTRONIC SUBMISSION PROCEDURE
Upon acceptance of publication, the preferred format of submission is the Kluwer LaTeX journal style file. The style file may be accessed through a gopher site by means of the following commands:
Internet: gopher gopher.wkap.nl or (IP number 192.87.90.1)
WWW URL: gopher://gopher.wkap.nl
Submitting and Author Instructions
Submitting to a Journal
Choose Journal Discipline
Choose Journal Listing
Submitting Camera Ready
Authors are encouraged to read the "About this menu" file. If you do not have access to gopher or have questions, please send e-mail to: [email protected]

The Kluwer LaTeX journal style file is the preferred format, and we urge all authors to use this style for existing and future papers; however, we will accept other common formats (e.g., WordPerfect or Microsoft Word) as well as ASCII (text only) files. Also, we accept FrameMaker documents as "text only" files. Note, it is also helpful to supply both the source and ASCII files of a paper. Please submit PostScript files for figures as well as separate, original figures in camera-ready form. A PostScript figure file should be named after its figure number, e.g., fig1.eps or circle1.eps.

ELECTRONIC DELIVERY
IMPORTANT - Hard copy of the ACCEPTED paper (along with separate, original figures in camera-ready form) should still be mailed to the appropriate Kluwer department. The hard copy must match the electronic version, and any changes made to the hard copy must be incorporated into the electronic version.

Via electronic mail
1. Please e-mail the ACCEPTED, FINAL paper to [email protected]
2. Recommended formats for sending files via e-mail:
   a. Binary files - uuencode or binhex
   b. Compressing files - compress, pkzip, gunzip
   c. Collecting files - tar
3. The e-mail message should include the author's last name, the name of the journal to which the paper has been accepted, and the type of file (e.g., LaTeX or ASCII).

Via disk
1. Label a 3.5 inch floppy disk with the operating system and word processing program (e.g., DOS/WordPerfect 5.0) along with the authors' names, manuscript title, and name of the journal to which the paper has been accepted.
2. Mail the disk to: Kluwer Academic Publishers, Desktop Department, 101 Philip Drive, Assinippi Park, Norwell, MA 02061

Any questions about the above procedures please send e-mail to: [email protected]
STYLE FOR ILLUSTRATIONS
1. Originals for illustrations should be sharp, noise-free, and of good contrast. We regret that we cannot provide drafting or art service.
2. Line drawings should be in laser printer output or in India ink on paper, or board. Use 8½ by 11-inch (22 x 29 cm) size sheets if possible, to simplify handling of the manuscript.
3. Each figure should be mentioned in the text and numbered consecutively using Arabic numerals. In one of your copies, which you should clearly distinguish, specify the desired location of each figure in the text but place the original figure itself on a separate page. In the remainder of the copies, which will be read by the reviewers, include the illustration itself at the relevant place in the text.
4. Number each table consecutively using Arabic numerals. Please label any material that can be typeset as a table, reserving the term "figure" for material that has been drawn. Specify the desired location of each table in the text, but place the table itself on a separate page following the text. Type a brief title above each table.
5. All lettering should be large enough to permit legible reduction.
6. Photographs should be glossy prints, of good contrast and gradation, and any reasonable size.
7. Number each original on the back.
8. Provide a separate sheet listing all figure captions, in proper style for the typesetter, e.g., "Fig. 3. Examples of the fault coverage of random vectors in (a) combinational and (b) sequential circuits."

PROOFING
Page proofs for articles to be included in a journal issue will be sent to the contact author for proofing, unless otherwise informed. The proofread copy should be received back by the Publisher within 72 hours.

COPYRIGHT
It is the policy of Kluwer Academic Publishers to own the copyright of all contributions it publishes. To comply with the U.S. Copyright Law, authors are required to sign a copyright transfer form before publication. This form returns to authors and their employers full rights to reuse their material for their own purposes. Authors must submit a signed copy of this form with their manuscript.

REPRINTS
Each group of authors will be entitled to 50 free reprints of their paper.
REVERSE ENGINEERING brings together in one place important contributions and up-to-date research results in this rapidly growing field. It serves as an excellent reference, providing insight into some of the most significant research issues in the area.
ISBN 0-7923-9756-8