Advances in COMPUTERS
VOLUME 43
Advances in COMPUTERS

EDITED BY MARVIN V. ZELKOWITZ
Department of Computer Science and Institute for Advanced Computer Studies
University of Maryland, College Park, Maryland

VOLUME 43

ACADEMIC PRESS
San Diego  New York  Boston  London  Sydney  Tokyo  Toronto

Find Us on the Web! http://www.apnet.com
This book is printed on acid-free paper.

Copyright © 1996 by ACADEMIC PRESS, INC.

All Rights Reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher.

Academic Press, Inc.
A Division of Harcourt Brace & Company
525 B Street, Suite 1900, San Diego, California 92101-4495

United Kingdom Edition published by Academic Press Limited
24-28 Oval Road, London NW1 7DX

International Standard Serial Number: 0065-2458
International Standard Book Number: 0-12-012143-3

PRINTED IN THE UNITED STATES OF AMERICA
96 97 98 99 00 01  BC  9 8 7 6 5 4 3 2 1
Contents

Contributors
Preface

Program Slicing
David W. Binkley and Keith Brian Gallagher
1. Introduction
2. Computing Slices
3. Applications of Program Slicing
References

Language Features for the Interconnection of Software Components
Renate Motschnig-Pitrik and Roland T. Mittermeir
1. Introduction
2. Different Perspectives and Fundamental Issues
3. A Model for Component Interconnection Specification
4. Survey of Language Paradigms for the Specification of Component Interconnection
5. Comparison of Features for Expressing Component Interconnections
6. Discussion
7. Summary
Appendix
References

Using Model Checking to Analyze Requirements and Designs
Joanne Atlee, Marsha Chechik, and John Gannon
1. Introduction
2. SCR Requirements
3. Model Checking
4. Model Checking Requirements
5. Model Checking Designs
6. Case Study
7. Conclusion
References

Information Technology and Productivity: A Review of the Literature
Erik Brynjolfsson and Shinkyu Yang
1. The "Productivity Paradox": A Clash of Expectations and Statistics
2. Research on Economy-wide Productivity and Information Worker Productivity
3. Industry-Level Studies of Information Technology Productivity
4. Firm-Level Studies of Information Technology Productivity
5. Contribution to Consumer Surplus and Economic Growth
6. Conclusion: Where Do We Go from Here?
References

The Complexity of Problems
William Gasarch
1. Introduction
2. Decision Trees
3. P versus NP
4. Decidable, Undecidable, and Beyond
5. Summary
References

3-D Computer Vision Using Structured Light: Design, Calibration, and Implementation Issues
Fred W. DePiero and Mohan M. Trivedi
1. Introduction
2. Introduction to Structured Light Ranging
3. Literature Review and Highlight of Critical Design Issues
4. Structured Light Acquisition
5. Calibration of Structured Light Sensors
6. Sensitivity Analysis of Structured Light Sensors
7. Performance Benchmarks and Ranging Experiments
8. Related Research and Concluding Remarks
References

Author Index
Subject Index
Contents of Volumes in This Series
Contributors

Joanne Atlee
Currently an assistant professor of computer science at the University of Waterloo in Waterloo, Canada, Dr. Atlee received a B.S. in computer science and physics from the College of William and Mary, Williamsburg, Virginia, and an M.S. and a Ph.D. from the University of Maryland at College Park. Her research interests include requirements specification, automated verification of requirements and design, CASE tools, and feature interactions.

David W. Binkley
An assistant professor of computer science at Loyola College in Baltimore, Maryland, Dr. Binkley has been a member of the faculty since 1991. In the spring of 1992 he joined the National Institute of Standards and Technology (NIST) in Gaithersburg, Maryland, as a visiting faculty researcher. His research interests include a compiler feedback project, currently supported by the National Science Foundation, software maintenance cost-reduction tools, and software issues in high integrity software system assurance (at NIST).

Erik Brynjolfsson
As the Douglas Drane Career Development Associate Professor of Information Technology at the MIT Sloan School of Management, Dr. Brynjolfsson's research analyzes how the structures of markets and firms are being transformed by advances in information technology and assesses the productivity of information technology investments. He has written numerous articles in academic journals; is on the editorial boards of the Communications of the ACM, Information Systems Research, the Journal of Electronic Commerce, and Information Technology and People; and has served as the editor of special issues of Management Science and the Journal of Organizational Computing. Professor Brynjolfsson holds both a B.S. and an M.S. in applied mathematics and decision science from Harvard and a Ph.D. in managerial economics from MIT. Before joining the MIT faculty, he directed a software consulting and development firm and created two of the first courses on artificial intelligence at Harvard University.

Marsha Chechik
Ms. Chechik graduated summa cum laude from the University of Maryland Baltimore County in 1991, majoring in computer science and mathematics. She received an M.S. in computer science from the University of Maryland at College Park in 1994. She is the recipient of the Washington, DC, Association for Computing Machinery Master's grant and is currently pursuing a Ph.D. in computer science at the University of Maryland. Her research interests include formal methods, program analysis and verification, and software safety. She is a student member of the IEEE Computer Society and the Association for Computing Machinery.
Fred W. DePiero
Dr. DePiero received his B.S. and M.S. in electrical engineering from Michigan State University in 1985 and 1987, respectively. He worked as a development associate at Oak Ridge National Laboratory until 1993. There he was involved in a variety of real-time image processing projects, including a high-compression video transmission system for remote driving and various structured light measurement systems. His research goals focused on real-time performance and calibration techniques. He is currently a research associate and Ph.D. candidate at the University of Tennessee. His structured light work continues on a high-performance measurement system for railway maintenance and on a multipurpose ranging system for industrial applications. His research goals have broadened to include graph-based object recognition and real-time range image segmentation.

Keith Brian Gallagher
Dr. Gallagher received a B.A. in mathematics from Bucknell University, an M.S. in both mathematics and computer science from the University of Michigan, and a Ph.D. in computer science from the University of Maryland Graduate School at Baltimore. He is now an associate professor of computer science at Loyola College in Baltimore, Maryland. His interests include software maintenance process modeling, empirical studies of maintainers, and software transformations. Dr. Gallagher has received the Research Initiation Award from the National Science Foundation, is a faculty research associate at the National Institute of Standards and Technology in Gaithersburg, Maryland, and dances in local ballet productions.

John Gannon
A professor and Chairman of the Department of Computer Science at the University of Maryland, Dr. Gannon has been a faculty member there since 1975. He conducts research on automated applications of formal methods to software development activities, e.g., proving that requirements enforce safety properties and deriving test oracles from formal specifications. Professor Gannon has also worked for the National Science Foundation directing the Software Engineering Program. He is a member of the editorial boards of IEEE Transactions on Software Engineering and ACM Computing Surveys and is currently serving as Vice Chair of ACM SIGSOFT.

William Gasarch
An associate professor in the Department of Computer Science and Institute for Advanced Computer Studies at the University of Maryland, College Park, Maryland, Dr. Gasarch received a Ph.D. from Harvard University before joining the faculty at the University of Maryland. His research interests include recursion theory and complexity theory.

Roland Mittermeir
Dr. Mittermeir is currently Professor in Informatics at the Universität Klagenfurt in Austria. As an Austrian, he received a Mag. soc. oec. from the Wirtschaftsuniversität Wien in Vienna, a Dipl.-Ing. in informatics from the Technische Universität in Vienna, and a Ph.D. from the Wirtschaftsuniversität Wien. His academic career started at the Technische Universität Wien. After positions at the University of Texas at Austin and the University of Maryland at College Park, he joined the Universität Klagenfurt in 1984, where he was founding chairman of the Department of Informatics. His research interests are in various areas of software engineering, notably in software reuse, software re-engineering, software process modeling, and design.

Renate Motschnig-Pitrik
An assistant professor of computer science at the Department of Applied Computer Science and Information Systems at the University of Vienna, Dr. Motschnig-Pitrik received a Dipl.-Ing. in computer science in 1982 and a Ph.D. in 1988 from the University of Technology in Vienna, where she has been on the faculty of the Department of Applied Computer Science and Systems Analysis. She has repeatedly been a visiting researcher at the Department of Computer Science at the University of Toronto. In the winter term of 1995/1996 she joined the faculty of the Rheinisch-Westfälische Technische Hochschule Aachen, working as visiting professor at the Department of Information Systems. Her primary research interests include various aspects of software engineering and knowledge representation, in particular the specification of abstraction concepts, abstraction-based software development, and cognitive issues in the development of information systems. She currently works on viewpoint abstraction, the application of context mechanisms, and meta-modeling.

Mohan M. Trivedi
A professor in the Electrical and Computer Engineering Department of the University of Tennessee, Knoxville, Dr. Trivedi has established the Computer Vision and Robotics Research Laboratory there. He and his team are engaged in a broad range of sponsored research studies in sensor-based mobile manipulators, active perception and machine vision, interactive graphical interfaces for human-machine systems, and remote sensing. He received the B.E. (Honors) in electronics from the Birla Institute of Technology and Science, Pilani, India, in 1974 and the M.E. and Ph.D. in electrical engineering from Utah State University in 1976 and 1979, respectively. Dr. Trivedi serves as the Editor-in-Chief of Machine Vision and Applications, the official journal of the International Association of Pattern Recognition. He serves or has served as an associate editor of IEEE Transactions on Systems, Man, and Cybernetics, Pattern Recognition, the International Journal of Applied Intelligence, the International Journal of Approximate Reasoning, and Optical Engineering Reports. He served as Chairman of the Robotics Technical Committee of the Computer Society of the IEEE (1987-1989). Currently, he chairs the Pattern Recognition, Image Processing and Computer Vision Committee of the IEEE Systems, Man, and Cybernetics Society.
Shinkyu Yang
A graduate student at the MIT Sloan School of Management working toward a Ph.D., Mr. Yang received a B.S. in physics at Seoul National University and an M.S. in management at the MIT Sloan School. Before joining MIT, he worked as a software engineer and information systems planning manager. He is studying information technology's impact on various measures of economic performance.
Preface

Advances in Computers, published since 1960, is an anthology of some of the most important developments within the information technology field over the past year. Each volume contains from five to seven chapters that describe in depth important facets of this increasingly significant component of our technological universe. This series of books fills an important place in the technical publication field. Authors are given the space to fully expound upon their expertise without the usual constraints of a journal article. This provides for a detailed analysis of the topic under discussion.

The subject matter published here can range among all aspects of computer science and engineering. While my personal interest centers on software engineering, chapters on systems, hardware, languages, vision, artificial intelligence, business applications, and theory also have appeared. I am increasingly looking for papers on networking and distributed computing, given the emergence of the Internet in the mid-1990s as the major technological advance of the decade. This volume contains six chapters that cover the spectrum of business practices, software engineering, theory, and vision. I hope you find these topics of interest.

In the first chapter, David W. Binkley and Keith Brian Gallagher discuss "Program Slicing." In program slicing, a technique first described in the early 1980s, you isolate only those statements of a program that affect a given variable of interest. That limits the amount of information you need to look at from the entire source program to a manageable subset of those statements of interest. This allows you to find errors or other problems with a program more easily. Dr. Binkley and Dr. Gallagher give a short historical perspective on slicing and then give applications of slicing in the software development, software testing, and maintenance domains.

In the second chapter, "Language Features for the Interconnection of Software Components," by Renate Motschnig-Pitrik and Roland T. Mittermeir, the authors discuss important problems in linking components in large, complex systems. While we have technologies, such as structured programming, for the design of small modules, we still do not have easy ways that limit the complexity of the interaction among the many modules of large systems. In particular, this chapter discusses language constructs, so-called Component Interconnection Languages, and describes the various ways language design can be used to aid in this communication problem.
In the third chapter, the third software engineering chapter in this volume, "Using Model Checking to Analyze Requirements and Designs," Joanne Atlee, Marsha Chechik, and John Gannon discuss the technique of model checking as an aid to software design. Model checking is a formal specification technique by which the requirements for a process are specified by state transitions. These transitions are then subject to execution by various testing tools in order to show that the specifications are consistent.

Erik Brynjolfsson and Shinkyu Yang in "Information Technology and Productivity: A Review of the Literature" discuss the relationship between information technology and productivity. Although organizations rushed to computerize their operations in the 1970s and 1980s, the promised increases in productivity did not seem to occur at the same rate. This was seen as a damaging assessment of the "computer revolution." However, a more careful analysis of the data shows that computers have had a positive effect. This chapter examines three dimensions of the productivity puzzle and reviews four possible explanations for it: mismeasurement, lags, redistribution, and mismanagement.

One of the more intriguing questions to ask in computer science is "How long will that program run?" While we would all like to program more efficiently and have our programs run faster, it is important to know that there are limits to how short the computation time for a solution to a problem may be. In "The Complexity of Problems," Professor William Gasarch discusses a hierarchy of problems that take an increasingly longer time to solve, no matter how clever a programmer you are or how fast a machine you purchase. Understanding these limits is an important criterion to meet before tackling any new program design.

In the final chapter of this volume, "3-D Computer Vision Using Structured Light: Design, Calibration, and Implementation Issues," Fred W. DePiero and Mohan M. Trivedi discuss an important aspect of computer vision. In robotic applications it is often necessary that the equipment be able to "see" in three dimensions, yet cameras provide a two-dimensional view of the world. This chapter discusses several computational methods by which three-dimensional objects can be developed from such two-dimensional viewing.

I thank the authors for their contributions to this book. Each chapter requires a significant effort, and I hope that you find the topics informative. Please let me know if you would like to see a topic covered in a future volume of this series. My Internet address is [email protected].

MARVIN V. ZELKOWITZ
Program Slicing

DAVID W. BINKLEY AND KEITH BRIAN GALLAGHER
Loyola College in Maryland
Baltimore, Maryland
A hundred times every day, I remind myself that my inner and outer life depends on the labors of other men, living and dead, and that I must exert myself in order to give in the measure as I have received and am still receiving. A. Einstein
Abstract Program slicing is a technique for reducing the amount of information that needs to be absorbed by a programmer. Given a point of “interest” in a program, described by a variable and a statement, a program slice gives all the statements that contributed to the value of the variable at the point, and elides unnecessary statements. This chapter surveys techniques for computing program slices and the applications of program slicing to development, testing, maintenance, and metrics.
1. Introduction
   1.1 A Brief History
2. Computing Slices
   2.1 Slicing Control-Flow Graphs
   2.2 Slicing as a Graph-Reachability Problem
   2.3 Dynamic Slicing as a Data-Flow Problem
   2.4 Dynamic Slicing as a Graph-Reachability Problem
   2.5 Alternative Approaches to Computing Program Slices
3. Applications of Program Slicing
   3.1 Differencing
   3.2 Program Integration
   3.3 Testing
   3.4 Debugging
   3.5 Software Quality Assurance
   3.6 Software Maintenance
   3.7 Reverse Engineering
   3.8 Functional Cohesion
References
1. Introduction

At the fifteenth International Conference on Software Engineering (ICSE) in 1993, Mark Weiser received recognition for the best paper at the fifth ICSE. Weiser reported that "Program Slicing" [90], the conference paper, and its journal incarnation of the same name [93], were cited about 10 times a year from 1984 to 1989 and about 20 times a year in 1990 and 1991. Our search of the "Science Citation Index" found 9 citations in 1992, 9 citations in 1993, and 15 citations in 1994, and included universities in 10 countries and 8 industrial laboratories. This is a lower bound, as our bibliography has only 7 items that do not cite [93]. Since so many researchers and practitioners have been citing this seminal work, herein we attempt to collate, summarize, and organize the results that have followed.

Program slicing is a decomposition technique that extracts from a program statements relevant to a particular computation. Informally, a slice provides the answer to the question "What program statements potentially affect the computation of variable v at statement s?" An observer viewing the execution of a program and its slice cannot determine which is which when attention is focused on statement s. It is as if the observer has a window through which only part of the program state can be seen, as illustrated by the "clear window" in Fig. 1.

The utility and power of program slicing comes from the ability to assist in tedious and error-prone tasks such as program debugging [1, 46, 50, 60-62, 84, 91], testing [3, 11, 15, 18, 27, 47, 50, 53, 83], parallelization [21, 92], integration [78, 39, 76, 77, 79], software safety [29], understanding [22, 52, 65, 71], software maintenance [28, 54, 66, 70, 73, 74], and software metrics [58, 59, 68, 67]. Slicing does this by extracting an algorithm, whose computation may be scattered throughout a program, from the intervening irrelevant statements. Consequently, it should be easier for a programmer interested in a subset of the program's behavior to understand the slice.
FIG. 1. The programmer's view of a slice.
1.1 A Brief History

This section provides a brief history of program slicing and introduces the principal terminology. A slice is taken with respect to a slicing criterion <s, v>, which specifies a location (statement s) and a variable (v). This can be easily extended to slicing with respect to a collection of locations and a collection of variables at each location by taking the union of the individual slices. However, to simplify the exposition, we present definitions for the single statement and single variable case.

Program slices, as originally introduced by Weiser [89, 90, 93], are now called executable backward static slices. Executable because the slice is required to be an executable program. Backward because of the direction edges are traversed when the slice is computed using a dependence graph. Finally, static because they are computed as the solution to a static analysis problem (i.e., without considering the program's input).

Weiser's requirement that a slice be executable provided an empirical validation of the concept. Although this seems reasonable, many applications of program slicing (e.g., debugging) do not require executable slices. This issue is considered in Section 2.

Weiser originally used a control-flow graph as an intermediate representation for his slicing algorithm. Ottenstein et al. [24, 69] later noted that backward slices could be efficiently computed using the program dependence graph as an intermediate representation by traversing the dependence edges backwards (from target to source). This observation was pursued vigorously and rigorously by Horwitz et al. [36-40, 42-44, 78, 79, 82], who introduced the notion of forward slicing in [44]. Informally, a forward slice answers the question "What statements are affected by the value of variable v at statement s?"

Finally, Korel and Laski introduced the notion of dynamic slicing [49, 51]: a slice computed for a particular fixed input. The availability of run-time information makes dynamic slices smaller than static slices, but limits their applicability to that particular input. As with Weiser's algorithm, Korel and Laski's algorithm uses a control-flow graph as an intermediate representation. Agrawal and Horgan later presented a dynamic slicing algorithm that used the program dependence graph as an intermediate representation [5, 6].

This chapter is organized into three major sections. Section 2 discusses static slicing as a data-flow problem and as a graph-reachability problem; it then discusses dynamic slicing as a data-flow problem and as a graph-reachability problem; and it closes with a collection of alternative methods for computing slices. The third section looks at applications of the idea of program slicing, without regard to any particular method of computing the slice.
2. Computing Slices

The following terminology is used in this section to discuss the computation of slices.
Definition 1: Graph. A directed graph G is a set of nodes N and a set of edges E ⊆ N × N. For edge (n, m) ∈ E, m is an immediate successor of n and n is an immediate predecessor of m. G contains two special nodes: n_initial, which has no predecessors, and n_final, which has no successors. Furthermore, there is a path from n_initial to every node in G and a path from n_final to every node in G⁻¹, the inverse graph of G.

Definition 2: Control-Flow Graph. A control-flow graph for program P is a graph in which each node is associated with a statement from P and the edges represent the flow of control in P. Let V be the set of variables in P. With each node n (i.e., each statement in the program and node in the graph) associate two sets: REF(n), the set of variables whose values are referenced at n, and DEF(n), the set of variables whose values are defined at n.

Definition 3: Program Slice. For statement s and variable v, the slice of program P with respect to the slicing criterion <s, v> includes only those statements of P needed to capture the behavior of v at s.

Exactly what is meant by "capture" varies. The most intuitive definition is presented in the definition of an executable slice. Other definitions are examined later in this section.
Definition 4: Executable Program Slice. For statement s and variable v, the slice S of program P with respect to the slicing criterion <s, v> is any executable program with the following properties:

1. S can be obtained by deleting zero or more statements from P.
2. If P halts on input I, then the value of v at statement n each time n is executed in P is the same in P and S. If P fails to terminate normally,¹ n may execute more times in S than in P, but P and S compute the same values each time n is executed by P.

¹ A program fails to terminate normally if it diverges or terminates abnormally, for example, with a division by zero error.
Note that every program has itself as a slice on any criterion.
Example. The program of Fig. 2 never executes Statement 5, while the slice on <5, c> includes only Statement 5, which is executed once.

2.1 Slicing Control-Flow Graphs
This section describes computing a slice as the solution to a data-flow problem using a control-flow graph as an intermediate representation. It considers a progression of harder slicing problems, beginning with slicing straight-line programs, then considering structured control flow, unstructured control flow, data structures, and, finally, procedures.

Computing a slice from a control-flow graph is a two-step process: first, requisite data-flow information is computed, and then this information is used to extract the slice. The data-flow information is the set of relevant variables at each node n. For the slice with respect to <s, v>, the relevant set for each node contains the variables whose values (transitively) affect the computation of v at s. The second step identifies the statements of the slice. These include all nodes (statements) n that assign to a variable relevant at n, and the slice taken with respect to any predicate node that directly controls n's execution.
2.1.1 Slicing Flow Graphs of Straight-Line Programs

Straight-line code contains only assignment statements executed one after the other. For such code the additional slices with respect to predicate nodes are not required. We begin by assuming that expression evaluation does not alter the values of its operands. Relevant sets for the slice taken with respect to <n, v> are computed as follows [60]:
1. Initialize all relevant sets to the empty set.
2. Insert v into relevant(n).
3. For m, n's immediate predecessor, assign relevant(m) the value (relevant(n) - DEF(m)) ∪ (REF(m) if relevant(n) ∩ DEF(m) ≠ ∅).
4. Working backwards, repeat step 3 for m's immediate predecessors until n_initial is reached.

n   statement
1   c = 0
2   while ( TRUE )
3       ...
4   endwhile
5   c = 2

FIG. 2. "Sliceable" divergent program and the convergent slice taken with respect to <5, c>.
Example. Figure 3 shows the relevant sets for a slice taken with respect to <8, a>. The relevant sets are computed from line 8 to line 1. For example, relevant(7) = (relevant(8) - DEF(7)) ∪ (REF(7) if relevant(8) ∩ DEF(7) ≠ ∅) = ({a} - {a}) ∪ ({b, c} if {a} ∩ {a} ≠ ∅) = {b, c}, and relevant(2) = ({b, c} - {c}) ∪ (∅ if {b, c} ∩ {c} ≠ ∅) = {b}. If we were interested in some variable other than a, the computation of the relevant sets would be different; thus, different relevant sets must be computed for different slices.

The relevant sets may be viewed as a flow of sets of variables; the slice is the set of statements that disturb this flow. If no relevant variables are defined at a statement, then the relevant set flows through unperturbed. On the other hand, if a relevant variable is defined, then the statement is added to the slice. In Fig. 3, for example, the slice with respect to <8, a> includes lines 7, 6, 2, and 1 (line 6 changes the b that is in the relevant set). Note that, in practice, the relevant sets and the statements in the slice are computed in a single pass.
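To make the preceding steps concrete, the following is a minimal sketch, in Python, of the relevant-set computation for straight-line code. The program representation (a map from statement numbers to (DEF, REF) pairs) and all names are our own illustrative choices, not notation from [60]; applied to the program of Fig. 3, it reproduces the slice computed above.

    def slice_straight_line(stmts, s, v):
        # stmts maps statement number n (1-based) to the pair (DEF(n), REF(n)).
        relevant = {n: set() for n in stmts}   # step 1
        relevant[s] = {v}                      # step 2
        in_slice = set()
        for m in range(s - 1, 0, -1):          # steps 3 and 4, working backwards
            defs, refs = stmts[m]
            succ = relevant[m + 1]
            if defs & succ:                    # m defines a relevant variable
                in_slice.add(m)
                relevant[m] = (succ - defs) | refs
            else:
                relevant[m] = set(succ)
        return in_slice, relevant

    # The program of Fig. 3, sliced with respect to <8, a>:
    stmts = {
        1: ({"b"}, set()),       # b = 1
        2: ({"c"}, set()),       # c = 2
        3: ({"d"}, set()),       # d = 3
        4: ({"a"}, {"d"}),       # a = d
        5: ({"d"}, {"b", "d"}),  # d = b + d
        6: ({"b"}, {"b"}),       # b = b + 1
        7: ({"a"}, {"b", "c"}),  # a = b + c
        8: (set(), {"a"}),       # print a
    }
    in_slice, _ = slice_straight_line(stmts, 8, "a")
    print(sorted(in_slice))      # prints [1, 2, 6, 7]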
2.1.2 Slicing Structured Programs

In straight-line programs each statement has a single unique predecessor. In the presence of structured control flow, a statement may have multiple control predecessors. The preceding algorithm requires three modifications to handle this: first, the inclusion of control sets; second, a rule for combining relevant sets at control join points; and, finally, iteration of the relevant set computation.
n   statement     relevant(n)
1   b = 1         ∅
2   c = 2         {b}
3   d = 3         {b, c}
4   a = d         {b, c}
5   d = b + d     {b, c}
6   b = b + 1     {b, c}
7   a = b + c     {b, c}
8   print a       {a}

FIG. 3. Relevant sets for <8, a>.
n    statement          control(n)    REF(n)
1    b = 1
2    c = 2
3    d = 3
4    a = d                            d
5    if ( a ) then                    a
6        d = b + d      5             b, d
7        c = b + d      5             b, d
8    else               5
9        b = b + 1      8             b
10       d = b + 1      8             b
11   endif
12   a = b + c                        b, c
13   print a                          a

FIG. 4. Control sets.
The control set control(n) associates with node (statement) n the set of predicate statements that directly controls n's execution [60]. The use of a set here facilitates the transition to unstructured control flow in Section 2.1.3. For a structured program, control(n) contains a single entry, the loop or conditional statement that controls the execution of n, or is empty, when n is a top-level statement. For example, in Fig. 4 control(6) includes statement 5, the if ( a ) then; the control set for statement 9 includes statement 8, the else, whose control set in turn includes statement 5. Whenever a statement is added to the slice, the members of its control set, k, are added to the slice along with the statements in the slice taken with respect to <k, REF(k)>.

At join points (where two nodes have the same predecessor), the relevant set is the union of the relevant sets of the two nodes. For example, in Fig. 5, relevant(5) is the union of relevant(6) and relevant(9). This assumes that the expression in the conditional at the join point has no side effects. If it does, then after the union, step 3 of the relevant set computation must be performed to update the set to account for the side effects of the expression.

[Figure 5, a table giving control(n) and the relevant sets for the program of Fig. 4, is not reproducible from the source; its caption reads:]
FIG. 5. Relevant sets on a slice of a at 13.
Example. Figure 5 shows the data necessary to calculate the slice with respect to <13, a> of the program shown in Fig. 4. The slice is normally computed in conjunction with the relevant sets: Working backward from line 13, since DEF(12) ∩ relevant(13) ≠ ∅, line 12 is included in the slice and its relevant set is assigned {b, c}. No change occurs at line 11. Line 10 is included in the slice because DEF(10) ∩ relevant(11) ≠ ∅; relevant(10) is assigned {c, d}. Next, lines 5 and 8 are included in the slice because control(10) includes 8 and control(8) includes line 5. Along with these lines, the lines in the slices with respect to <5, REF(5)> = <5, {a}> and <8, REF(8)> = <8, ∅> are also included. These add to the slice lines 4 and 3. Finally, line 6 completes the slice.

The third change is required to handle loops. The absence of loops allows a slice to be computed in a single pass over the control-flow graph; the presence of loops requires iteration over parts of the graph, in particular, iteration over each loop until the relevant sets and slice stabilize. Hausler [35] has shown that the maximum number of iterations is the same as the number of assignment statements in the loop. Figure 6 shows how the upper bound is reached.

[Figure 6 shows a while loop whose body contains a chain of assignments (..., x3 = x2, x2 = x1); only the assignment x2 = x1 is recoverable from the source.]
FIG. 6. A while statement that needs n passes to compute a slice when x_n is in the criteria.
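Continuing the hypothetical Python sketch above, the extension below adds the three modifications for structured control flow: control sets, union at join points, and iteration to a fixed point. Again, the representation (preds gives control-flow predecessors, control gives control sets) and all names are ours, not notation from [60]; side-effect-free predicates are assumed.

    def slice_structured(stmts, preds, control, s, v):
        relevant = {n: set() for n in stmts}
        relevant[s] = {v}
        in_slice = set()
        changed = True
        while changed:                          # iterate until sets stabilize
            changed = False
            for n in stmts:
                for m in preds.get(n, ()):
                    defs, refs = stmts[m]
                    if defs & relevant[n]:      # m defines a relevant variable
                        in_slice.add(m)
                        new = relevant[m] | (relevant[n] - defs) | refs
                    else:
                        # join rule: union of the successors' relevant sets
                        new = relevant[m] | relevant[n]
                    if new != relevant[m]:
                        relevant[m] = new
                        changed = True
            # a sliced statement pulls in its controlling predicates,
            # each sliced with respect to <p, REF(p)>
            for m in list(in_slice):
                for p in control.get(m, ()):
                    if p not in in_slice:
                        in_slice.add(p)
                        relevant[p] |= stmts[p][1]
                        changed = True
        return in_slice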
2.1.3 Slicing Unstructured Programs

The addition of goto statements, and its restricted forms such as return, exit, break, and continue, complicates the construction of the control sets. One solution is to restructure the program to be goto-less [56] and then slice. The drawback to this method is that the structured version may be significantly textually dissimilar from the original.

Lyle [60] proposed a simple and conservative solution to slicing over unstructured control flow: if a goto statement has a nonempty relevant set, include it in the slice. The targets of the goto are included, and so on until the slice stabilizes.
An alternative approach [26] is to note that goto statements are associated with labels. Rather than look for goto statements to include in the slice, look for labeled statements that are included in the slice. Then include only the goto statements that branch to these labels. This algorithm was devised for a language in which the only labels are targets of goto's, a labeled statement does no computation, and there are no explicit compound statements. That is, statements of the form
if ( b ) then {
    /* compute X */
} else {
    /* compute Y */
}

are replaced by the semantically equivalent:

if ( b )
    goto L1
/* compute Y */
goto L2
L1 :
/* compute X */
L2 :

Each statement of the form

label : statement ;

is transformed to the semantically equivalent

label : null ;
statement ;

Definition 5: Labeled Block. A labeled block is a basic block² that begins with a labeled statement and ends when one of the following occurs:
1. The basic block ends.
2. Another labeled block begins; that is, another labeled statement occurs.

² A basic block [48] is a sequence of consecutive instructions that are always executed from start to finish.
Definition 6: Pseudo-Label. Let L be the label of the statement that begins a labeled block B. The remaining statements in B are said to have pseudo-label L.

On the pass after a pseudo-labeled statement is added to the slice, as each goto statement is examined it is placed in the slice according to whether or not its target (pseudo-)label has already been added to the slice. The noncomputing labeled statements are added if the actual label matches a pseudo-label in the slice.
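As a small illustration of Lyle's conservative rule quoted at the start of this section, the following is our own deliberately simple Python rendering; the label-based refinement of [26] would instead admit only those gotos whose target (pseudo-)label is already in the slice.

    def include_gotos(goto_target, relevant, in_slice):
        # goto_target maps a goto's statement number to its target's number.
        changed = True
        while changed:
            changed = False
            for g, t in goto_target.items():
                if g not in in_slice and relevant.get(g):
                    in_slice.add(g)     # a goto with a nonempty relevant set
                    in_slice.add(t)     # ... and its target
                    changed = True
        # A full implementation would also recompute the relevant sets
        # after each addition, repeating until the slice stabilizes.
        return in_slice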
2.1.4 Arrays, Records, and Pointers

The handling of composite structures and pointers requires a change to the DEF(n) and REF(n) sets. A simple approach for arrays is to treat each array assignment as both an assignment and a use of the array. For n: a[i] := x; one may naively assume that DEF(n) = {a} and REF(n) = {i, x}. But the new value of a also depends on the old value of a, so a must also be included. The correct value for REF(n) is {a, i, x}. However, this approach leads to correct, but unnecessarily large slices.

To more precisely determine if there is a dependence between a statement that contains an assignment to a[f(i)] and a statement that contains a use of a[g(j)], we must answer the question "Can f(i) = g(j)?" In general, this question is unanswerable, although it can be solved for some common index expression types. These solutions are often one sided: Algorithms exist to determine if the answer to the question is "no." Otherwise, no information is obtained. To illustrate this, consider the greatest common divisor (GCD) test applied to the following loop:
i = 0
while (i < N)
{
s1:   X[a1 * i + a0] = ...
s2:   ... = X[b1 * i + b0]
      i = i + 1
}
PROGRAM SLICING
11
(bo - a,,), the solution may lay outside the range 0. . . N. Other tests [64, 721 are similarly one sided. If none of these tests can prove the absence of a flow dependence, then one is assumed to exist. This provides a more precise, yet safe, approximation to the correct data relationships. Once this has been done, the slicing algorithm proceeds as in the absence of arrays. Records are simple, if not tedious, to treat in a slicing algorithm. Unlike arrays, each field of a record is identified by a constant (the field’s name). This allows occurrences of record.field to be treated as occurrences of the simple variable record field. The assignment of one record to another is modeled as a sequenceof field assignments. The multiple levels of indirection for both assignment and reference of pointers create difficult problems. One must obtain every possible location to which a pointer could point. If a variable is defined or referenced through a chain of pointers (e.g., *****a), then all intermediate locations in the accessing of the variable must be considered as refed locations. Lyle et al. [63] construct and then prune a pointer state graph (PSS) for expression *ka. The PSSk(a) is a directed graph with single source a and is defined recursively as
PSSk(a) =
(* {v {v
I * a = v}, I v E PSSl(u) A
ifk=O if k = 1 & edge(a, v) E PSSk(a) otherwise & edge(u, v) E PSSk(a).
u E PSS,-l(a)},
PSSk(a)gives the possible references of *ka.It is then pruned: R P ~ ~ k(w) ( a ,= {v E PSSk(a) A dist(v, w)Ik}, where dist (w,v) is the distance, measured in edges, between v and w.RPsSk(a,(w)is used to compute DEF(n) by eliminating the indirect definitions of w that cannot be reached via k levels of indirection from a. See [63] for a detailed discussion.
2.1.5 Interprocedural Slicing Slicing across procedures3 complicates the situation due to the necessity of translating and passing the criteria into and out of calling and called procedures. When procedure P calls procedure Q at statement i, the active criteria must first be translated into the context of Q and then recovered once Q has been sliced. To translate a set of criteria, C, into a called procedure, for each v E relevant(succ(Q)) n actual pararneter(Q) map v -+ w , the corresponding formal parameter of Q. Then generate new criteria . If v E relevant(succ(Q)) n local-definition(Q), that is, a local redefinition of v This section follows closely that of Weiser [93].
12
DAVID W. BINKLEY
A N D KEiTH B. GALLAGHER
occurs, change the line number of criteria involving v to i, the call-site of Q Q. When nZiriaris reached, unmap w + v, and replace ninirial with i. Weiser called this new set of criteria DOWNo(C). This is essentially an in-line replacement of each procedure occurrence, with appropriate substitutions. Globals pass into the called procedure undisturbed if their visibility is not blocked by a variable in Q; thus references to them inside the procedures are captured correctly. This method was introduced early in the development of the idea of slicing and does not address hard questions about pointers, aliasing, function parameters, etc. When P is called from statement j of Q, criteria must be generated to slice up into Q. The new criteria are generated in similar fashion to the calling context. Criteria involving local variables are discarded, so that no undefined references are passed. Criteria involving formal parameters are mapped into the corresponding actual of Q with new line number j. Weiser called this set UPo(C). The sets Downo(C) and UPo(C)are then mapped to functions from and ) UP(CC) = criterion to criterion: Down(CC) = U C E c ~ D O W N o ( C UC-CCUP,(C). Union and transitive closure are defined in the usual way for these relations. Thus (DOWNUUP)*(C)will give a complete set of criteria to obtain the interprocedural slice for any criteria. This conservative, but correct, approximation was improved by [44].
2.2
Slicing as a Graph-Reachability Problem
Ottenstein and Ottenstein [69] observed that the program dependence graph (PDG), used in vectorizing and parallelizing compilers and program development environments, would be an ideal representation for constructing program slices: “Explicit data and control dependence make the PDG ideal for constructing program slices’’ [69]. This section first discusses a variation of the PDG used to compute intraprocedural slices of structured programs and the extensions necessary to slice programs that contain procedures and procedure calls [39]. Both of these algorithms have three steps: 1. Construct a dependence graph from the program. 2. Slice the dependence graph. 3. Obtain a sliced program from the sliced graph.
The advantage of the dependence graph approach is that steps 2 and 3 are efficient. For example, step 2 is linear in the size of the graph. However, step 1, the construction of the PDG is, O(n’) in the number of statements in the program. To focus on the slicing algorithms, this section does not
PROGRAM SLICING
13
discuss how step 1, dependence graph construction, is accomplished (see [24, 34, 39, 44,57, 691 for details).
2.2.1 lntraprocedure Slicing The PDG used for intraprocedural slicing by Horwitz et al. [39] is a modified version of the dependence graph considered by Ottenstein and Ottenstein [69]. The PDG for program P, denoted by Gp, is a directed graph whose vertices are connected by several kinds of edges. The vertices of GPrepresent the assignment statements and control predicates that occur in program P. In addition, G P includes a distinquished vertex called the entry vertex. GP is a multigraph. Not only can it have more than one edge between two vertices, it can have more than one edge of a given kind between two vertices. Edges in GPrepresent dependences among program components. An edge represents either a control dependence or a flow dependence. Control dependence edges are labeled true or false, and the source of a control dependence edge is always the entry vertex or a predicate vertex. A control dependence edge from vertex u to vertex v, denoted by u -+ v, means that during execution, whenever the predicate represented by u is evaluated and its value matches the label on the edge to v, then the program component represented by v will eventually be executed if the program terminates. For structured languages, control dependences reflect the program’s nesting structure. Consequently, PDG GPcontains a control dependence edge from vertex u to vertex v of GP iff one of the following holds: 1. u is the entry vertex, and v represents a component of P that is not nested within any loop or conditional; these edges are labeled true. 2. u represents a control predicate, and v represents a component of P immediately nested within the loop or conditional whose predicate is represented by u. If u is the predicate of a while loop, the edge u -+ v is labeled true; if u is the predicate of a conditional statement, the edge u + v is labeled true or false according to whether v occurs in the then branch or the else branch, respectively. A flow dependence edge from vertex u to vertex v means that the program’s computation might be changed if the relative order of the components represented by u and v were reversed. The flow dependence edges of a PDG are computed using data-flow analysis. A PDG contains a flow
14
DAVID W. BINKLEY AND KEITH B. GALLAGHER
dependence edge from vertex u to vertex v, denoted by u the following hold:
-+
fv, iff all of
1. u is a vertex that defines variable x. 2. v is a vertex that uses x. 3. Control can reach v after u via an execution path along which there is no intervening definition of x. Flow dependences can be further classified as loop carried or loop independent. A flow dependence u -+ f v is carried by loop L, denoted by u -+ IccLiv,if in addition to items 1, 2, and 3 given, the following also hold:
4. There is an execution path that both satisfies the conditions of item 3 above and includes a back edge to the predicate of loop L. 5. Both u and v are enclosed in loop L. A flow dependence u -+ v is loop independent, denoted by u + [;v,if in addition to items 1,2, and 3 there is an execution path that satisfies item 3 above and includes no back edge to the predicate of a loop that encloses both u and v. It is possible to have both u Ic(Li v, and u -+ liv. When there is more than one loop-carried flow dependence edge between two vertices, each is labeled by a different loop that carries the dependence. Figure 7 shows an example program and its PDG. program Main SUM := 0 ;:= 1 whlle i c 1 1 do SUM := SUM+i i:=i+l
Edge Key control dependence d loop-independent Row dependence -I-= loop-carried flow dependence
-.
od prlnt (SUM) end
FIG. 7. An example program, which sums the integers from 1 to 10 and leaves the result in the variable sum, and its program dependence graph. The vertices of the slice of this PDG taken with respect to the final use of i are shown in bold.
PROGRAM SLICING
15
For vertex s of PDG G, the slice of G with respect to s, denoted by Slice(G, s), is a graph containing all vertices on which s has a transitive flow or control dependence (i.e., all vertices that can reach s via flow and/ or control edges): V(Slice(G, s))
=
{VE V ( G ) I v -+LT,,fs}.
The edges of Slice(G, s) are those in the subgraph of G induced by V (Slice(G,s)): E(Slice(G, S))
= {V+~U
U{v+,
E E(G) I v, u E V(Slice(G, S ) ) } u E E(G) I v, u E V(Slice(G, S))}
The vertices of the slice of the PDG shown in Fig. 7 taken with respect to the output vertex for i are highlighted in Fig. 7. Slicing programs with composite data structures involves changing the computation of the flow dependence edge only. Two methods for slicing in the presence of arbitrary control flow (programs containing goto’s) require modification of the control dependence subgraph of the PDG, but not the slicing algorithm. Choi and Ferrante [20] augment the control-flow graph that is used in the construction of the PDG with a set of fall-through edges, that is, the lexcial successor of the goto in the source text. The fall-through edges capture the requisite control flow when a goto statement is deleted from the slice. Ball and Horwitz [7] describe a similar technique in which jump statements are represented by pseudo-predicate vertices, which always evaluate to true. The outgoing control edge labeled true is connected to the target of the jump, while a false successor is connected to the jump statement’s continuation; that is, the statement that would be executed if the jump were a no-op. Harrold et al. [34] describe an efficient construction of PDG’s that captures exact control dependence (i.e., goto’s) but uses neither a control-flow graph or a postdominator tree as an auxiliary structure. This construction technique improves the methods of [7, 201 for construction of the PDG. During the parse, a partial control dependence subgraph, which incorporates exact control, is constructed. The partial control dependence subgraph manages control flow by ordering the nodes implicitly during construction or explicitly by the creation of control edges. The presence of goto’s does require a minor change to step 3 of the slicing algorithm (the projection of a program from the sliced PDG). This change ensures that labels of statements not in the slice are included in the resulting program if a goto to that label is in the slice. Another method for computing slices in the presence of arbitrary control flow avoids changing the PDG at the expense of modifying the slicing
16
DAVID W. BINKLEY AND KEITH 6. GALLAGHER
algorithm use in step 2 [2]. This method maintains two relations postdominator and lexical-successor. The algorithm computes a slice from the graph used by HPR algorithm using step 2 and then looks €or a jump statement not in the slice whose nearest postdominator in the slice is different from the nearest lexical successor in the slice. Such statements are then added to the slice. As with the algorithms in [7, 20, 341, step 3 must be modified to include any necessary labels.
2.2.2 Interprocedural Slicing of Dependence Graphs Interprocedural slicing as a graph-reachability problem requires extending of the PDG and, unlike the addition of data types or unstructured control flow, it also requires modification of the slicing algorithm. The PDG modifications represent call statements, procedure entry, parameters, and parameter passing. The algorithm change is necessary to account correctly for procedure calling context. This section describes the interprocedural slicing algorithm presented in [44], which is based on an extension of the PDG called the system dependence graph (SDG).4 Horwitz et al. [44] introduced the term system dependence graph for the dependence graphs that represents multiprocedure programs. The term system will be used to emphasize a program with multiple procedures. The SDG models a language with the following properties: 1. A complete system consists of a single main procedure and a collection of auxiliary procedures. 2. Parameters are passed by value-result. Techniques for handling parameters passed by reference and for dealing with aliasing are discussed at the end of this section. Horwitz et al. [44] make the further assumption that there are no call sites of the form P(x, x ) or P ( g ) , where g is a global variable. The former restriction sidesteps potential copy-back conflicts. The latter restriction permits global variables to be treated as additional parameters to each procedure; thus, they are not discussed explicitly. An SDG is made up of a collection of procedure dependence graphs connected by interprocedural control- and flow-dependence edges. Procedure dependence graphs are similar to program dependence graphs except that they include vertices and edges representing call statements, parameter The term “SDG” is used because the term “PDG” is associated with graphs that represent single procedure programs.
PROGRAM SLICING
17
passing, and transitive flow dependences due to calls (we will use the abbreviation “PDG” for both procedure dependence graphs and program dependence graphs). A call statement is represented using a call vertex; parameter passing is represented using four kinds of parameter vertices: On the calling side, parameter passing is represented by actual-in and actualout vertices, which are control dependent on the call vertex; in the called procedure, parameter passing is represented by formal-in and formal-out vertices, which are control dependent on the procedure’s entry vertex. Actual-in and formal-in vertices are included for every global variable that may be used or modified as a result of the call and for every parameter; formal-out and actual-out vertices are included only for global variables and parameters that may be modified as a result of the call. Interprocedural data-flow analysis is used to determine the parameter vertices included for each procedure [8, 101. Transitive dependence edges, called summary edges, are added from actual-in vertices to actual-out vertices to represent transitive flow dependences due to called procedures. These edges were originally computed using a variation on the technique used to compute the subordinate characteristic graphs of an attribute grammar’s nonterminals [44].Recently, Reps et al. [80] described a faster algorithm for computing summary edges. A summary edge is added if a path of control, flow, and summary edges exists in the called procedure from the corresponding formal-in vertex to the corresponding formal-out vertex. Note that the addition of a summary edge in procedure Q may complete a path from a formal-in vertex to a formalout vertex in Q’s PDG, which in turn may enable the addition of further summary edges in procedures that call Q. Procedure dependence graphs are connected to form an SDG using three new kinds of edges: 1. A call edge is added from each call-site vertex to the corresponding procedure-entry vertex. 2. A parameter-in edge is added from each actual-in vertex at a call site to the corresponding formal-in vertex in the called procedure. 3. A parameter-out edge is added from each formal-out vertex in the called procedure to the corresponding actual-out vertex at the call site.
Figure 8 shows an example system and the corresponding SDG. (In Fig. 8, as well as in the remaining figures in this article, the edges representing control dependences are not labeled; all such edges would be labeled true.) Interprocedural slicing can be defined as a reachability problem using the SDG, just as intraprocedural slicing is defined as a reachability problem
18
DAVID W. BINKLEY AND KEITH 6. GALLAGHER
procedure Main sum
=a
=I while i < I 1 do call A (sum,I ) I
procedure A ( x . y ) calIAdd(x. y ) call IncremenrQ)
procedureAdd(n, b ) a =a+b return
procedureIncremenr(r) cdIAdd(z, 1 )
return
return
od
print(sum)
intraprocedural flow
-
(loop-independent)
summary
using the PDG. The slices obtained using this approach are the same as those obtained using Weiser’s interprocedural slicing method 2931. However, his approach does not produce slices that are as precise as possible, because it considers paths in the graph that are not possible execution paths. For example, there is a path in the graph shown in Fig. 8 from the vertex of procedure Main labeled “xin : = sum” to the vertex of Main labeled ‘‘i : = Your.”However, this path corresponds to procedure Add being called by procedure A, but returning to procedure Increment, which is not possible.
19
PROGRAM SLICING
The value of i after the call to procedure A is independent of the value of sum before the call, and so the vertex labeled “xjn : = sum” should not be Figure included in the slice with respect to the vertex labeled ‘7: = You(’’ 9 shows this slice. To achieve more precise interprocedural slices, an interprocedure slice with respect to vertex s is computed using two passes over the graph.
‘ I
procedure Main i := I whllei < 11 do d l A (i) od end0
procedure A Cy) d l Increment Cy) return
procedure Add(a. b ) a:=a+b return
procedure /ncremenr(z) cdlAdd(z. 1 ) return
Edge Key
inIraprocedural flow
--k
- ->
(loop-independent)
inuaprwedural flow (Iwp-carried) summary
arheter-in
ammeter-odt
Q/..,Y :=Ym
d l Inc
Zi”
:=y
;
YO”,
=Y
Y”&,
_ _ - - -_ - _ - - -_ _ - -
FIG.9. The SDG from Fig. 8 sliced with respect to the formal-out vertex for parameter z in procedure Increment, together with the system to which it corresponds. Note that this slice correctly excludes the vertex labeled ‘‘x~,,:= sum” in Fig. 8.
20
DAVID W. BINKLEY AND KEITH B. GALLAGHER
Summary edges permit movement across a call site without having to descend into the called procedure; thus, there is no need to keep track of calling context explicitly to ensure that only legal execution paths are traversed. Both passes operate on the SDG, traversing edges to find the set of vertices that can reach a given set of vertices along certain kinds of edges. Informally, if s is in procedure P, then pass 1 identifies vertices that reach s and are either in P itself or in procedures that (transitively) call P. The traversal in pass 1 does not descend into procedures called by P or its callers. Pass 2 identifies vertices in called procedures that induce the summary edges used to move across call sites in pass 1.

The traversal in pass 1 starts from s and goes backward (from target to source) along flow edges, control edges, call edges, summary edges, and parameter-in edges, but not along parameter-out edges. The traversal in pass 2 starts from all vertices reached in pass 1 and goes backward along flow edges, control edges, summary edges, and parameter-out edges, but not along call or parameter-in edges. A worklist algorithm for finding the vertices of an interprocedural slice is stated in [44]. The (full backward) interprocedural slice of graph G with respect to vertex set S, denoted by Slice(G, S), consists of the sets of vertices identified by pass 1 and pass 2, together with the set of edges induced by this vertex set. (A sketch of the two passes appears below.)

Slice(G, S) is a subgraph of G. However, unlike an intraprocedural slice, it may not be feasible (i.e., it may not be the SDG of any system). The problem arises when Slice(G, S) includes mismatched parameters: different call sites on a procedure include different parameters. There are two causes of mismatches: missing actual-in vertices and missing actual-out vertices. Making such systems syntactically legal by simply adding the missing parameters leaves semantically unsatisfactory systems [17].

To include the program components necessary to compute a safe value for the parameter represented at a missing actual-in vertex v, the vertices in the pass 2 slice of G taken with respect to v must be added to the original slice. A pass 2 slice includes the minimal number of components necessary to produce a semantically correct system. The addition of pass 2 slices is repeated until no further actual-in vertex mismatches exist. The second cause of parameter mismatches, missing actual-out vertices, represents dead code, so no additional slicing is necessary: actual-out mismatches are removed by simply adding the missing actual-out vertices to the slice. A system can now be obtained by projecting the statements of the original system that are in the original slice or added by the preceding two steps.
These statements appear in the same order and at the same nesting level as in the original system. The details of this algorithm are given in [17].
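Using the SDG sketch from earlier, the two passes reduce to backward reachability along pass-specific edge kinds. This is our transcription of the description above, not the worklist algorithm of [44].

def backward_reach(g, start, kinds):
    # All vertices that reach some vertex in `start` along edges of the
    # given kinds, traversing each edge from target to source.
    seen, work = set(start), list(start)
    while work:
        v = work.pop()
        for k in kinds:
            for (s, d) in g.edges[k]:
                if d == v and s not in seen:
                    seen.add(s)
                    work.append(s)
    return seen

def interprocedural_slice(g, s):
    # Pass 1 stays within the procedure containing the criterion and its
    # (transitive) callers; pass 2 descends into called procedures.
    pass1 = backward_reach(g, s,
                           ('flow', 'control', 'call', 'summary', 'param-in'))
    pass2 = backward_reach(g, pass1,
                           ('flow', 'control', 'summary', 'param-out'))
    return pass1 | pass2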
2.2.3 Interprocedural Slicing in the Presence of Call-by-Reference Parameter Passing and Aliasing

The definitions of the system dependence graph and interprocedural slicing assume that parameters are passed using value-result parameter passing. The same definitions hold for call-by-reference parameter passing in the absence of aliasing; however, in the presence of aliasing, some modifications are required. Two extreme methods for handling aliasing are transforming the system into an aliasing-free system and generalizing the definition of flow dependence. Translation is done by creating a copy of a procedure for each possible aliasing configuration under which it may be called. Because the number of aliasing configurations is potentially exponential, the cost of this transformation may, in the worst case, be exponential in the maximum number of formal parameters associated with any one procedure.

Generalizing the definition of flow dependence makes the pessimistic assumption that any aliases that exist during a particular call to a procedure may exist during all calls to the procedure. Such aliases are referred to as may aliases. This requires generalized definitions for flow dependence. For example, a flow dependence edge connects vertex v to vertex u iff all of the following hold:

1. v is a vertex that defines variable x.
2. u is a vertex that uses variable y.
3. x and y are potential aliases.
4. Control can reach u after v via a path in the control-flow graph along which there is no intervening definition of x or y.
Note that clause (4) does not exclude there being definitions of other variables that are potential aliases of x or y along the path from v to u. An assignment to a variable z along the path from v to u overwrites the contents of the memory location written by v only if x and z refer to the same memory location. If z is a potential alias of x, then there is only a possibility that x and z refer to the same memory location; thus, an assignment to z does not necessarily overwrite the memory location written by v, and it may still be possible for u to read a value written by v.

The first solution produces more precise (smaller) slices than the second at the expense of transforming the system. It is possible to consider intermediate solutions to the problem of slicing in the presence of aliasing. Binkley [16] presents an algorithm that is parameterized by a set of aliasing information: the more precise this information, the more precise the slice. Given exact information, the algorithm produces the same slice as the transformation approach without replicating procedures. Given a maximal approximation (imprecise information), it produces the same slice as the generalized-dependence approach. In between lies a range of possible slices differing only in their precision.
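As a rough illustration of the generalized definition, the following sketch tests clauses (1)-(4) over a control-flow graph. The helpers succ, defs, uses, and may_alias are hypothetical and would be supplied by the surrounding analysis.

def generalized_flow_dep(succ, defs, uses, may_alias, v, u, x, y):
    # Clauses (1)-(3): v defines x, u uses y, and x and y are potential
    # aliases.
    if x not in defs(v) or y not in uses(u) or not may_alias(x, y):
        return False
    # Clause (4): some CFG path from v to u has no intervening definition
    # of x or y.  Definitions of *other* potential aliases of x or y along
    # the path do not kill the dependence.
    seen, work = set(), list(succ(v))   # the search starts after v itself
    while work:
        n = work.pop()
        if n == u:
            return True
        if n in seen or x in defs(n) or y in defs(n):
            continue                    # already visited, or path killed
        seen.add(n)
        work.extend(succ(n))
    return False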
2.3 Dynamic Slicing as a Data-Flow Problem
Korel and Laski [49] introduced the idea of dynamic slicing. Their solution, patterned after Weiser’s static slicing algorithm, solves the problem using data-flow equations. A dynamic slice differs from a static slice in that it makes use of information about a particular execution of a program. Thus, a dynamic slice contains “all statements that actually affect the value of a variable at a program point for a particular execution of the program” rather than “all statements that may affect the value of a variable at a program point for any arbitrary execution of the program” [49]. Most dynamic slices are computed with respect to an execution history (called a trajectory in [49]). This history records the execution of statements as the program executes. The execution of a statement produces an occurrence of the statement in the execution history. Thus, the execution history is a list of statement occurrences.
Example. Two example execution histories are shown in Fig. 10. Superscripts are used to differentiate between the occurrences of a statement. For example, statement 2 executes twice in the first execution history, producing occurrences 2¹ and 2².

FIG. 10. Two execution histories. The program:

1  read(N)
2  for i := 1 to N do
3    a := 2
4    if c1 then
5      if c2 then
6        a := 4
       else
7        a := 6
8    z := a
   od
9  print(z)

Execution History 1 (input N = 1, c1 and c2 both true):
<1¹, 2¹, 3¹, 4¹, 5¹, 6¹, 8¹, 2², 9¹>

Execution History 2 (input N = 2, c1 and c2 false on the first iteration and true on the second):
<1¹, 2¹, 3¹, 4¹, 8¹, 2², 3², 4², 5¹, 6¹, 8², 2³, 9¹>

FIG. 11. A dynamic slice of the program shown in Fig. 10 and its execution history. The slice:

read(N)
for i := 1 to N do
  a := 2
od

Execution history of slice: <1¹, 2¹, 3¹, 2², 3²>

Korel and Laski define a dynamic slice, taken with respect to a set of
variables V, an input I, and a point p in the execution history (obtained by running the program on I), as a reduced executable program. The execution history for the execution of this program on input I must satisfy the slice sublist property. Intuitively, this property is satisfied if the execution history of the slice is equivalent to the execution history of the original program after removing occurrences of statements not in the slice. Figure 11 shows the Korel and Laski slice of the program shown in Fig. 10 taken with respect to ({a}, 2, 3²). Notice that this slice imprecisely includes 3¹, which does not affect the computation at 3². Why this occurrence is needed in the Korel and Laski framework and how Agrawal and Horgan remove it are discussed at the end of this section.

To formalize the notion of a dynamic slice, we consider the relationship between the execution histories of the program and its slice (on the same input). The execution history of the slice should be obtained by removing occurrences from the execution history of the original program. To do this we must remove occurrences of statements not in the slice and remove all occurrences after the occurrence with respect to which the slice was taken. Formally, we make use of three operators:

1. collect(list, predicate) = [l ∈ list | predicate(l) = true]
2. front(list, l^i) = the prefix of list up to and including the occurrence l^i
3. occurrence-of(l, P') = true iff l is an occurrence of a statement of P'

Example. For the slice shown in Fig. 11 taken with respect to ({a}, 2, 3¹),

front(<1¹, 2¹, 3¹, 2², 3²>, 3¹) = <1¹, 2¹, 3¹>

and

collect(front(<1¹, 2¹, 3¹, 4¹, 8¹, 2², 3², 4², 5¹, 6¹, 8², 2³, 9¹>, 3¹), λl.occurrence-of(l, P'))
  = collect(<1¹, 2¹, 3¹>, λl.occurrence-of(l, P'))
  = <1¹, 2¹, 3¹>,

while for the slice with respect to ({a}, 2, 3²),

front(<1¹, 2¹, 3¹, 2², 3²>, 3²) = <1¹, 2¹, 3¹, 2², 3²>

and

collect(front(<1¹, 2¹, 3¹, 4¹, 8¹, 2², 3², 4², 5¹, 6¹, 8², 2³, 9¹>, 3²), λl.occurrence-of(l, P'))
  = collect(<1¹, 2¹, 3¹, 4¹, 8¹, 2², 3²>, λl.occurrence-of(l, P'))
  = <1¹, 2¹, 3¹, 2², 3²>.
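The operators have a direct transcription; the assertion below replays the second computation, encoding each occurrence l^i as a (statement, occurrence-number) pair. The encoding is ours.

def collect(history, predicate):
    # Keep, in order, the occurrences satisfying the predicate.
    return [occ for occ in history if predicate(occ)]

def front(history, occ):
    # The prefix of the history up to and including occurrence `occ`.
    return history[: history.index(occ) + 1]

# Execution history 2 of Fig. 10 and the slice {1, 2, 3} of Fig. 11.
E = [(1, 1), (2, 1), (3, 1), (4, 1), (8, 1), (2, 2), (3, 2), (4, 2),
     (5, 1), (6, 1), (8, 2), (2, 3), (9, 1)]
in_slice = lambda occ: occ[0] in {1, 2, 3}      # occurrence-of(l, P')
assert collect(front(E, (3, 2)), in_slice) == \
       [(1, 1), (2, 1), (3, 1), (2, 2), (3, 2)]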
Dynamic slices are computed from three data-flow relations computed from the program. The first two capture flow and control dependences. Unlike static slicing, these definitions refer to the execution history; thus, they capture dependences that actually happened in the particular execution of the program rather than dependences that may occur in some execution of the program, as is the case with static slicing. The final relation is explained later. The first definition captures flow dependences that arise when one occurrence represents the assignment of a variable and another a use of that variable. For example, in Fig. 10, when c1 and c2 are both false, there is a flow dependence from s3 to s8.
Definition 8: Definition-Use (DU). v^i DU u^j ⟺ v^i appears before u^j in the execution history and there is a variable x defined at v^i, used at u^j, and not defined by any occurrence between v^i and u^j.
The second definition captures control dependence. The only difference between this definition and the static control dependence definition is that multiple occurrences of predicates exist. Consequently, care must be taken to avoid a statement appearing dependent on a predicate from an earlier iteration of a loop.
Definition 9: Test Control (TC). v^i TC u^j ⟺ u is control dependent on v (in the static sense) and, for all occurrences w^k between v^i and u^j, w is control dependent on v.

The definition of a dynamic slice requires that front(E', p^i) = collect(front(E, p^i), λl.occurrence-of(l, P')). The preceding two relations are insufficient to ensure this. For example, the slice obtained from the program in Fig. 10 taken with respect to ({a}, 2, 3²) using only the preceding two relations is <1¹, 2¹, 2², 3²>, which omits 3¹. In this example the omission would be benign, but in other examples the slice would not have the correct execution history. Korel and Laski solve this problem by including the following relation. Note that this produces a conservative solution to the problem; that is, some dynamic slices are bigger than they need to be.
Definition 10: Identity Relation (IR). v^i IR u^j ⟺ v = u.
The relations DU, TC, and IR are used to formulate the algorithm for dynamic slicing by computing an ever-growing approximation to the set of occurrences in the slice. Let E be the execution history produced for program P when run on input I. In the following two equations, only the front of E is of interest to the dynamic slice, since only these occurrences may affect the point where the slice originates. The first approximation to the slice taken with respect to (V, I, l^i) contains the direct control and data influences:

S^0 = {v^j ∈ front(E, l^i) | v^j TC l^i} ∪ {v^j ∈ front(E, l^i) | x ∈ V and x is defined at v^j but not defined by any occurrence between v^j and l^i}.

The slice is found by iterating the following until a fixed point is reached:

S^{k+1} = S^k ∪ {v^j ∈ front(E, l^i) | ∃ u^m ∈ S^k such that v^j (DU ∪ TC ∪ IR) u^m}.

The iteration must terminate since E is finite and bounds each S^k. To complete the slice, it is necessary to include the occurrence with respect to which the slice was taken. Finally, the sliced program P' is obtained by projecting out of P those statements that have occurrences in the fixed-point solution.
Example. The dynamic slice of execution history 2 from Fig. 10 taken with respect to ({a}, 2, 3²) is computed as follows:

DU = {(1¹, 2¹), (3¹, 8¹), (6¹, 8²), (8², 9¹)}
TC = {(2¹, 3¹), (2¹, 4¹), (2¹, 8¹), (2¹, 3²), (2¹, 4²), (2¹, 8²), (2², 3²), (2², 4²), (2², 8²), (4², 5¹), (5¹, 6¹)}
IR = {(2¹, 2²), (2¹, 2³), (2², 2³), (3¹, 3²), (4¹, 4²), (8¹, 8²)}
S⁰ = {2¹, 2²}
S¹ = S⁰ ∪ {1¹} = {1¹, 2¹, 2²}
S² = S¹; thus the iteration ends.
s = {3²} ∪ {1¹, 2¹, 2²} = {1¹, 2¹, 2², 3²}, which projects onto the program containing statements 1, 2, and 3.
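The fixed point also has a direct transcription. The S^0 computed here takes only the direct TC and DU influences on the slice point, which reproduces the example but simplifies the data clause of the definition; front is as defined earlier and occurrences are (statement, occurrence) pairs.

def front(history, occ):
    return history[: history.index(occ) + 1]

def kl_dynamic_slice(history, DU, TC, IR, point):
    # Relations are sets of (earlier, later) occurrence pairs.
    fr = set(front(history, point))
    rel = DU | TC | IR
    s = {v for (v, u) in (DU | TC) if u == point and v in fr}   # S^0
    while True:                         # iterate to the fixed point
        new = {v for (v, u) in rel if u in s and v in fr} - s
        if not new:
            break
        s |= new
    return s | {point}                  # include the slice point itself

# Replaying the example above.
E = [(1, 1), (2, 1), (3, 1), (4, 1), (8, 1), (2, 2), (3, 2), (4, 2),
     (5, 1), (6, 1), (8, 2), (2, 3), (9, 1)]
DU = {((1, 1), (2, 1)), ((3, 1), (8, 1)), ((6, 1), (8, 2)), ((8, 2), (9, 1))}
TC = {((2, 1), (3, 1)), ((2, 1), (4, 1)), ((2, 1), (8, 1)),
      ((2, 1), (3, 2)), ((2, 1), (4, 2)), ((2, 1), (8, 2)),
      ((2, 2), (3, 2)), ((2, 2), (4, 2)), ((2, 2), (8, 2)),
      ((4, 2), (5, 1)), ((5, 1), (6, 1))}
IR = {((2, 1), (2, 2)), ((2, 1), (2, 3)), ((2, 2), (2, 3)),
      ((3, 1), (3, 2)), ((4, 1), (4, 2)), ((8, 1), (8, 2))}
assert kl_dynamic_slice(E, DU, TC, IR, (3, 2)) == \
       {(1, 1), (2, 1), (2, 2), (3, 2)}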
Similar to static slicing, handling unstructured control flow requires modifying the definition of control dependence as used in the definition of TC. Unlike static slicing, the handling of different data types can be done more precisely in a dynamic slice. First consider arrays. Because the subscript value is known at run time, the ambiguity of array accesses in static slicing is gone. The only complication arrays raise is that different occurrences of a statement may assign different variables (for simplicity, we consider each array element to be a different variable). Records are similarly handled by treating each component as a separate variable. Finally, consider dynamically allocated memory. This can also be handled by giving each newly allocated block of memory a pseudo name. At the end of the next section
we describe how Agrawal and Horgan replace the notion of (pseudo) variable name to improve dynamic slicing in debugging.
2.4 Dynamic Slicing as a Graph-Reachability Problem
Agrawal and Horgan presented the first algorithms for finding dynamic slices using dependence graphs [5]. We first consider two simple dynamic slicing algorithms that are imprecise. We then consider two exact algorithms that differ only in their space complexity. The initial description of all four algorithms assumes scalar variables. The extensions necessary to handle complex data structures including, for example, (unconstrained) pointers, arrays, records, and unions are considered at the end of this section.

The first two algorithms both mark some portion of a PDG as "executed." The first marks executed vertices and the second marks executed edges. Both algorithms suffer because they summarize in one place (a vertex or an edge) all the times a statement or a dependence edge between statements was executed.

The first algorithm initially labels all vertices in the PDG as "unexecuted." As the program runs, whenever a statement is executed, its corresponding vertex is marked as "executed." After execution the slice is computed by applying the static slicing algorithm restricted to only marked vertices (and the edges that connect them). Thus, unexecuted vertices will not be in the dynamic slice. As illustrated in Fig. 12 by slice 1, this approach produces imprecise slices because it is possible for a vertex to be marked executed even though it does not actually affect a given computation. For slice 1, s3 is executed, but the assignment at s6 overwrites the value written to a. Therefore, s3 need not be in the dynamic slice.

The second algorithm marks executed edges. This produces more precise answers by leaving out statements that were executed but had no influence on the variables in the slice (the corresponding edges were not "executed"). Its imprecision comes in loops where different iterations execute different edges.

FIG. 12. Two dynamic slices of the program of Fig. 10 that show the limitations of the first two dynamic slicing algorithms.

Slice 1: slice on z at s9 for input N = 1, where c1 and c2 are both true:
  static slice contains s1-s9
  dynamic slice (marking nodes) contains s1-s6, s8, s9
  dynamic slice (marking edges) contains s1, s2, s4-s6, s8, s9

Slice 2: slice on z at s9 for input N = 2, where c1 and c2 are false on the first iteration and true on the second:
  static slice contains s1-s9
  dynamic slice (marking nodes) contains s1-s6, s8, s9
  dynamic slice (marking edges) contains s1-s6, s8, s9

In slice 1, shown in Fig. 12, the marking of edges allows
the algorithm to leave out s3, because the flow dependence edge from s3 to s8 is never executed. However, for slice 2, this edge is executed during the first iteration of the loop. Since the second iteration overwrites the value of a, s3 need not be in the dynamic slice. The problem is that different iterations require the marking of different edge occurrences. Summarizing all occurrences on a single edge introduces imprecision.
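Both marking algorithms reduce to static backward reachability over the marked part of the PDG; the vertex-marking variant is sketched below (the edge-marking variant would filter on marked edges instead). The function name and edge encoding are ours.

def marked_slice(edges, executed_vertices, criterion):
    # `edges` is a set of (source, target) PDG dependence edges; only
    # sources marked executed can enter the slice.
    seen, work = {criterion}, [criterion]
    while work:
        v = work.pop()
        for (s, d) in edges:
            if d == v and s in executed_vertices and s not in seen:
                seen.add(s)
                work.append(s)
    return seen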
Example. Figure 12 compares static slicing with these two dynamic slicing algorithms for two different inputs. (Figure 13 shows the PDG for this program.) The static slice taken at (the vertex representing) statement s9 with respect to z contains the entire program because all three assignments to a may reach the use at statement s8. The node-marking dynamic slicing algorithm produces a smaller slice for both inputs because s7 does not execute in either case. These slices, however, unwantedly include s3, which does execute, but which does not affect the computation of z. The edge-marking algorithm correctly handles this in slice 1, because the flow dependence edge from s3 to s8 is not executed. However, the weakness of the edge-marking algorithm is shown by slice 2: although the assignment at s3 does not affect the output value of z, its flow edge is executed by the first iteration of the loop. Because the edge is marked executed, s3 is included in the slice.

FIG. 13. The PDG for the program shown in Fig. 10.

To omit statements in loops like s3, it is necessary to have multiple occurrences of the vertices and edges in the loop. This is captured by the execution history of a program for a given input. For example, slice 2 of the program in Fig. 12 produces the execution history <1¹, 2¹, 3¹, 4¹, 8¹, 2², 3², 4², 5¹, 6¹, 8², 2³, 9¹>, where the superscripts are used to denote different executions of the same statement.

A dynamic slice can now be defined in terms of a variable, an execution history, and an occurrence of a statement in the execution history. The slice contains only those statements whose execution had some effect on the value of the variable at the occurrence of the statement in the execution history. To obtain this slice a dynamic dependence graph (DDG) is produced from the execution history. A dynamic dependence graph has a vertex for each occurrence of a statement in the execution history and contains only executed edges. Because a DDG separates the occurrences of the executed vertices and edges, a vertex in a loop may have multiple occurrences, which can have different incoming dependence edges. This removes the imprecision of the two simple approaches. Control dependence edges of the DDG are copies of the control dependence edges from the static dependence graph. There is a single dynamic flow dependence for each variable v referenced at a node n. The source of this edge is computed from the dynamic reaching definitions (DRDs), which are in turn computed from the execution history as follows:

DRD(var, empty) = ∅
DRD(var, <previous-history | last-node>) =
    {last-node}                         if var ∈ def(last-node)
    DRD(var, previous-history)          otherwise
Example. Figure 14 shows the DDG for slice 2 of the program shown in Fig. 12. This graph contains two occurrences of s8. The first has an incoming flow dependence edge from s3, whereas the second has an incoming flow dependence edge from s6. Since the only occurrence of s9 has an incoming edge from the second occurrence of s8, s6 and not s3 is included in the slice.

FIG. 14. The DDG for slice 2 in the example shown in Fig. 12.

The disadvantage of this approach is its space requirement. Since a loop may execute an unbounded number of times, there may be an unbounded number of vertex occurrences. This is clearly unacceptable and, it turns out, unnecessary. A dynamic slice is a subset of the original program; consequently, there can be only a finite number of slices (subsets). The final dynamic slicing algorithm, which computes the same slices as the third algorithm, exploits this fact to limit the size of the dynamic dependence graph.

The fourth algorithm considers each new vertex occurrence v as the DDG is built. If a suitable vertex occurrence u exists in the DDG, then v is merged into u. Vertex occurrence u is suitable if its slice is the same as the slice of the new vertex occurrence; that is, the statements whose occurrences affect u are the same as the statements that affect v. Note that since the slice is a projection of the DDG back onto the original program, it is possible to combine vertices that are not created from the same statement. The algorithm maintains three maps: DefinitionNode, which maps a variable's name to the last vertex occurrence that defines it; PredicateNode, which maps a control statement to its most recent occurrence in the DDG; and ReachingStatements, which, for each node n, contains all the statements of the original program with an occurrence that reaches n. The slice with respect to an occurrence of statement s is simply ReachingStatements(s) at the moment the occurrence of s is created.
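A sketch of the fourth algorithm's bookkeeping follows. Occurrences whose reaching-statement sets coincide share a representative, bounding the graph by the number of distinct slices; the helper names and the exact merging policy are our guesses at one reasonable realization, not the data structures of [5].

def reduced_ddg_slices(history, defs, uses, control_parent):
    definition_node = {}   # variable -> occurrence that last defined it
    predicate_node = {}    # predicate statement -> most recent occurrence
    reaching = {}          # occurrence -> statements reaching it
    merged = {}            # frozen reaching set -> representative occurrence
    for i, stmt in enumerate(history):
        occ = (stmt, i)
        deps = {definition_node[v] for v in uses(stmt) if v in definition_node}
        p = control_parent(stmt)        # statically controlling predicate
        if p is not None and p in predicate_node:
            deps.add(predicate_node[p])
        stmts = {stmt}.union(*(reaching[d] for d in deps))
        occ = merged.setdefault(frozenset(stmts), occ)   # merge if possible
        reaching[occ] = stmts
        for v in defs(stmt):
            definition_node[v] = occ
        predicate_node[stmt] = occ      # harmless for non-predicates
    return reaching                     # the slice of occ is reaching[occ]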
2.4.1 Dynamic Slicing of Programs with Unconstrained Pointers

Agrawal and DeMillo [4] extended the notion of dynamic slicing to programs that contain unconstrained (C-like) pointers. The technique applies also to arrays, constrained (Pascal-like) pointers, records, and other data structures. What makes unconstrained pointers difficult is the loss of the one-to-one mapping between the variables assigned and used and the memory locations assigned and used. Although aggregates such as records can be handled by treating each field as a separate variable, and even array elements can be treated as separate variables, since dynamic slicing has access to the run-time index value, pointers present a particularly difficult problem. If we ignore "pointer laundering" through variant records, constrained pointers, such as those found in Pascal, always point to a nameable object of a particular type. Special names are given to objects allocated from the heap. As with records and arrays, runtime information allows us to identify the particular object pointed to by a given pointer.

In contrast, unconstrained pointers, such as those found in C, may use or modify arbitrary (unnamed) memory locations. This forces us first to generalize the notion of a variable (var in the preceding definition of a DRD) to memory cells. Each cell is composed of a starting address and a length. A flow dependence exists between a node that modifies cell1 and a node that references cell2 if cell1 and cell2 overlap. This occurs when there is a nonempty intersection of the sets of addresses cell1.start, . . . , cell1.start + cell1.length and cell2.start, . . . , cell2.start + cell2.length. For example, if cell1 represents an assignment to a structure and cell2 represents a use of a field in this structure, then the two cells overlap. Notice that an assignment to a field of the structure may modify only part of the cell (the same is true of assignments to an element of an array). In the following definition, the function PreCell(cell1, cell2) denotes the part of cell1 before cell2. Likewise, PostCell(cell1, cell2) denotes the part of cell1 after cell2.
Definition 11: DRD in the Presence of Unconstrained Pointers. Let cell' = def(last-node).

DRD(cell, empty) = ∅
DRD((address, 0), history) = ∅
DRD(cell, <previous-history | last-node>) =
    DRD(cell, previous-history)                                  if cell ∩ cell' = ∅
    {last-node} ∪ DRD(PreCell(cell, cell'), previous-history)
                ∪ DRD(PostCell(cell, cell'), previous-history)   otherwise
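The cell manipulations are simple interval arithmetic. The following transcription makes Definition 11 executable; cells are (start, length) pairs and defs is a hypothetical helper returning the cell written by an occurrence.

def overlaps(c1, c2):
    # Two cells overlap when their address ranges intersect.
    return c1[0] < c2[0] + c2[1] and c2[0] < c1[0] + c1[1]

def pre_cell(c1, c2):
    # The part of c1 before c2 (possibly of length zero).
    return (c1[0], max(0, min(c1[1], c2[0] - c1[0])))

def post_cell(c1, c2):
    # The part of c1 after c2 (possibly of length zero).
    end1, end2 = c1[0] + c1[1], c2[0] + c2[1]
    return (end2, max(0, end1 - end2))

def drd_cells(cell, history, defs):
    # Walk the history backward, splitting the queried cell around each
    # overlapping definition, exactly as in Definition 11.
    if cell[1] == 0 or not history:
        return set()
    *previous, last = history
    d = defs(last)                      # the cell written by last-node
    if not overlaps(cell, d):
        return drd_cells(cell, previous, defs)
    return ({last}
            | drd_cells(pre_cell(cell, d), previous, defs)
            | drd_cells(post_cell(cell, d), previous, defs))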
This technique works well when dealing with the kind of error shown in the following code (note that different compilers may lay memory out differently):

int i, j, a[4];
j = 2;
for (i = 0; i ...

[...]

tmax := x1
for i := 2 to n do
    if xi > tmax then tmax := xi
od
output(tmax)

The bulk of the algorithm's time is spent making comparisons. We will measure algorithms for MAX_n (and other problems) in terms of the number of comparisons made by the algorithm.
Definition 2.2. Let P be any problem whose input is (x1, . . . , xn) ∈ S^n. A decision tree for P is a full binary tree labeled and interpreted as follows. (See Fig. 1 for an example of a decision tree that finds the maximum of three elements.)

1. Every nonleaf is labeled with a comparison of the form "x_i < x_j?"
2. Every leaf is labeled with a potential output.
3. Let (x1, . . . , xn) ∈ S^n. We interpret the tree as an algorithm acting on (x1, . . . , xn) by thinking of YES as GO LEFT ON THE TREE and NO as GO RIGHT ON THE TREE. This leads to a leaf of the tree that is labeled with the answer P(x1, . . . , xn).
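A decision tree is easy to encode and run. In this sketch (our encoding, not the chapter's), a nonleaf is ((i, j), yes-subtree, no-subtree) for the comparison "x_i < x_j?" and a leaf is a potential output, here the index of the maximum.

def eval_tree(node, xs):
    # YES goes left, NO goes right, until a leaf (the output) is reached.
    while isinstance(node, tuple) and len(node) == 3:
        (i, j), yes, no = node
        node = yes if xs[i] < xs[j] else no
    return node

# A decision tree finding the maximum of three elements with two
# comparisons on every branch, as in Fig. 1.
max3 = ((0, 1),
        ((1, 2), 2, 1),     # x1 < x2: compare x2 with x3
        ((0, 2), 2, 0))     # otherwise: compare x1 with x3
assert eval_tree(max3, [5, 9, 4]) == 1      # x2 is the maximum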
Definition 2.3. A problem P can be solved with f(n) comparisons if there exists a decision tree for P where no branch has more than f(n) edges.

The algorithm for MAX_n can be phrased as a decision tree where each branch has n − 1 edges. Since we have a model of computation that captures the known algorithm, we can pose the question of lower bounds intelligently. This is given as:

Theorem 2.4 [22]. Let n ∈ N.

1. MAX_n can be solved with n − 1 comparisons.
2. MAX_n cannot be solved with fewer than n − 1 comparisons. In fact, any decision tree that solves MAX_n has at least n − 1 comparisons on every branch. Hence the best case and the average case require n − 1 comparisons.

FIG. 1. Decision tree (root comparison: x1 < x2?).

To summarize, initially an algorithm for MAX_n was formulated that took n − 1 comparisons. A model was developed to capture this algorithm, and in that model the algorithm is optimal.
2.2 Finding the ith Largest Element
The problem MAX_n asked for the largest element of a list. We now look at the more general problem of finding the ith largest element of a list. Consider the problem of finding the second largest element of (x1, . . . , xn). The problem can be done in 2n − 3 comparisons: first find x = MAX_n(x1, . . . , xn) (using n − 1 comparisons); then find the maximum of {x1, . . . , xn} − {x} (using n − 2 comparisons). Is this the best possible? No! We present an algorithm that uses at most n + log n comparisons (a sketch appears after the theorem below):

1. Input (x1, . . . , xn).
2. Form a tournament of pairs of comparisons: compare (x1, x2), (x3, x4), etc. Then compare the winners in pairs. Then compare the winners of that competition in pairs. Keep doing this until y1 = MAX_n(x1, . . . , xn) is found. This will take n − 1 comparisons. Note that this is not the algorithm we used in Section 2.1. While both algorithms find MAX_n in n − 1 comparisons, this one leaves behind a residue of information that will be helpful in finding SEL_n^2(x1, . . . , xn).
3. Replace the element x_m that has the value y1 in the tournament by −∞. Redo the part of the tournament that this changes. This takes log n steps and yields the second largest element y2.

Note that both algorithms given for finding the second largest element can be expressed as decision trees. Hence, the question of whether you can do better with "the same kind of algorithm" can be expressed intelligently. The next theorem (the i = 2 case) states that the n + log n algorithm is essentially optimal.

Theorem 2.5 [8, 17]. Let i be a constant. Then the following happens:

1. SEL_n^i can be solved with n + (i − 1) log n + O(1) comparisons. (This is similar to the algorithm we gave for finding the second largest element.)
2. There exists a (small) constant c such that, for all i, SEL_n^i cannot be solved with fewer than n + (i − 1) log n + c comparisons.
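The tournament algorithm can be sketched as follows. Instead of literally replacing the winner by −∞ and replaying part of the tournament, this equivalent formulation takes the maximum of the roughly log n elements that lost directly to the winner; the code is our illustration.

def second_largest(xs):
    # Knockout tournament: n - 1 comparisons to find the maximum.
    rounds = [list(range(len(xs)))]
    while len(rounds[-1]) > 1:
        prev, nxt = rounds[-1], []
        for k in range(0, len(prev) - 1, 2):
            nxt.append(prev[k] if xs[prev[k]] >= xs[prev[k + 1]] else prev[k + 1])
        if len(prev) % 2:               # the odd player gets a bye
            nxt.append(prev[-1])
        rounds.append(nxt)
    winner = rounds[-1][0]
    losers = []                         # opponents beaten by the winner
    for prev in rounds[:-1]:
        i = prev.index(winner)
        j = i + 1 if i % 2 == 0 else i - 1
        if j < len(prev):
            losers.append(prev[j])
    return max(xs[j] for j in losers)   # about log n more comparisons

print(second_largest([3, 1, 4, 1, 5, 9, 2, 6]))     # prints 6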
Both the algorithm and the lower bound can be modified to handle the following similar problem: given a list (x1, . . . , xn), find the 1st, 2nd, . . . , and ith largest elements. The upper and lower bounds for this problem are the same as they are for SEL_n^i except that the additive constant changes. To summarize: we first obtained an algorithm for finding the second largest element of a list that took 2n − 3 comparisons, and we then improved this to n + log n. Both algorithms can be expressed as decision trees. We then stated a theorem that gave matching upper and lower bounds for the general SEL_n^i problem, which implies that our algorithm for second largest is essentially optimal.
2.3 MED_n

In the last section we looked at finding the ith largest element of a list. We thought of i as fixed and n as the measure of size. We now consider the problem of finding the ⌈n/2⌉th largest element of a list (the middle or median element). This problem has a different character since i varies with n. If the algorithm in Theorem 2.5 is used to find the median of (x1, . . . , xn), then it will take n + (⌈n/2⌉ − 1) log n = O(n log n) comparisons. Can we do better? The lower bound in Theorem 2.5 does not apply here since i is not constant. We exhibit an algorithm that finds MED_n in linear time. Actually we present an algorithm that will, given ((x1, . . . , xn), i), find the ith largest element of (x1, . . . , xn). We plan on having the program call itself recursively. The algorithm is from [2]:

1. Input ((x1, . . . , xn), i). Let A be the array (x1, . . . , xn).
2. Group the elements into groups of five. There are at most four elements left over; we will ignore them for purposes of exposition. (The choice of five is not important; any odd number larger than three would work.)
3. Find the median of every group of five. (This can be done by the algorithm for SEL_n^i.) Put the set of medians into an array B of length roughly n/5. This step takes cn comparisons for some constant c.
4. Find the median x of the elements in B. This is done by calling this algorithm recursively.
5. Compare all elements to x. Put the elements that are less than x into an array C_<x, and all those that are greater than x into an array C_>x. Note that at this point we know j such that x is the jth largest element. If j = i, then output(x) and stop. (This could be done in a more space-efficient manner.)
6. If j < i, then we know the ith largest element of A is the (i − j)th largest element of C_<x. Call the algorithm recursively on (C_<x, i − j).
7. If j > i, then we know the ith largest element of A is the ith largest element of C_>x. Call the algorithm recursively on (C_>x, i).
One can show that C_<x and C_>x each contain at most roughly 7n/10 elements, so the number of comparisons satisfies T(n) ≤ T(n/5) + T(7n/10) + cn, which solves to T(n) = O(n).
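The algorithm is short enough to transcribe. This sketch (ours) keeps the leftover elements and handles duplicates, so it runs on any input, and it sorts the five-element groups where the exposition invokes the SEL algorithm.

def select(A, i):
    # Return the ith largest element of A (i is 1-indexed).
    if len(A) <= 10:                                    # small base case
        return sorted(A, reverse=True)[i - 1]
    groups = [A[k:k + 5] for k in range(0, len(A), 5)]  # step 2
    medians = [sorted(g)[len(g) // 2] for g in groups]  # step 3
    x = select(medians, (len(medians) + 1) // 2)        # step 4: recurse
    greater = [a for a in A if a > x]                   # step 5
    less = [a for a in A if a < x]
    j = len(greater) + 1                # x is the jth largest element
    dups = len(A) - len(greater) - len(less)
    if j <= i < j + dups:
        return x
    if i < j:
        return select(greater, i)                       # step 7
    return select(less, i - j - dups + 1)               # step 6

print(select([9, 1, 8, 2, 7, 3, 6, 4, 5], 5))           # the median: 5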