Lecture Notes in Computer Science 1543
Edited by G. Goos, J. Hartmanis and J. van Leeuwen

Springer: Berlin Heidelberg New York Barcelona Hong Kong London Milan Paris Singapore Tokyo
Serge Demeyer, Jan Bosch (Eds.)

Object-Oriented Technology: ECOOP '98 Workshop Reader
ECOOP '98 Workshops, Demos, and Posters
Brussels, Belgium, July 20-24, 1998
Proceedings
Series Editors:
Gerhard Goos, Karlsruhe University, Germany
Juris Hartmanis, Cornell University, NY, USA
Jan van Leeuwen, Utrecht University, The Netherlands
Volume Editors:
Serge Demeyer, University of Berne, Neubruckstr. 10, CH-3012 Berne, Switzerland
E-mail: [email protected]
Jan Bosch, University of Karlskrona/Ronneby, Softcenter, S-372 25 Ronneby, Sweden
E-mail: [email protected]

Cataloging-in-Publication data applied for

Die Deutsche Bibliothek - CIP-Einheitsaufnahme
Object-oriented technology: workshop reader, workshops, demos, and posters / ECOOP '98, Brussels, Belgium, July 20-24, 1998 / Serge Demeyer; Jan Bosch (eds.). - Berlin; Heidelberg; New York; Barcelona; Hong Kong; London; Milan; Paris; Singapore; Tokyo: Springer, 1998
(Lecture notes in computer science; Vol. 1543)
ISBN 3-540-65460-7
CR Subject Classification (1998): D.1-3, H.2, E.3, C.2, K.4.3, K.6
ISSN 0302-9743
ISBN 3-540-65460-7 Springer-Verlag Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law.

© Springer-Verlag Berlin Heidelberg 1998
Printed in Germany
Typesetting: Camera-ready by author
SPIN 10693041 06/3142 - 5 4 3 2 1 0
Printed on acid-free paper
Preface

At the time of writing (mid-October 1998) we can look back at what has been a very successful ECOOP '98. Despite the time of year, in the middle of what is traditionally regarded as a holiday period, ECOOP '98 was a record breaker in terms of the number of participants: over 700 people found their way to the campus of the Brussels Free University to take part in a wide range of activities. This third ECOOP workshop reader reports on many of these activities. It contains a careful selection of the input to, and a cautious summary of the outcome of, the numerous discussions that took place during the workshops, demonstrations, and poster sessions. As such, this book serves as an excellent snapshot of the state of the art in the field of object-oriented programming.
About the Diversity of the Submissions

A workshop reader is, by its very nature, quite diverse in the topics covered as well as in the form of its contributions. This reader is no exception: as editors we have given the respective organizers considerable freedom in their choice of presentation, because we feel that form follows content. This explains the diversity in the types of reports as well as in their layout.
Acknowledgments

An incredible number of people have been involved in creating this book, in particular all the authors and the individual editors of each chapter. As editors of the workshop reader itself, we merely combined their contributions, and we hereby express our gratitude to everyone involved. It was hard work to get everything printed in the same calendar year as the ECOOP conference itself, but thanks to everybody's willing efforts we met our deadlines. Enjoy reading!

October 1998

Serge Demeyer, University of Berne
Jan Bosch, University of Karlskrona/Ronneby
Table of Contents

I. The 8th Workshop for PhD Students in Object-Oriented Systems
Erik Ernst, Frank Gerhardt, Luigi Benedicenti ... 1
Framework Design and Documentation
Ákos Frohner ... 5
Reengineering with the CORBA Meta Object Facility
Frank Gerhardt ... 6
Enforcing Effective Hard Real-Time Constraints in Object-Oriented Control Systems
Patrik Persson ... 7
Online-Monitoring in Distributed Object-Oriented Client/Server Environments
Günther Rackl ... 8
A Test Bench for Software
Moritz Schnizler ... 9
Intermodular Slicing of Object-Oriented Programs
Christoph Steindl ... 10
Validation of Real-Time Object Oriented Applications
Sebastien Gerard ... 14
Parallel Programs Implementing Abstract Data Type Operations - A Case Study
Tamás Kozsik ... 15
A Dynamic Logic Model for the Formal Foundation of Object-Oriented Analysis and Design
Claudia Pons ... 16
A Refinement Approach to Object-Oriented Component Reuse
Winnie Qiu ... 17
A Compositional Approach to Concurrent Object Systems
Xiaogang Zhang ... 18
Component-Based Architectures to Generate Software Components from OO Conceptual Models
Jaime Gomez ... 21
Oberon-D - Adding Database Functionality to an Object-Oriented Development Environment
Markus Knasmüller ... 22
Run-time Reusability in Object-Oriented Schematic Capture
David Parsons ... 23
SADES - a Semi-Autonomous Database Evolution System
Awais Rashid ... 24
Framework Design for Optimization (as Applied to Object-Oriented Middleware)
Ashish Singhai ... 25
Object-Oriented Control Systems on Standard Hardware
Andreas Speck ... 26
Design of an Object-Oriented Scientific Simulation and Visualization System
Alexandru Telea ... 26
Testing Components Using Protocols
Il-Hyung Cho ... 29
Virtual Types, Propagating and Dynamic Inheritance, and Coarse Grained Structural Equivalence
Erik Ernst ... 30
On Polymorphic Type Systems for Imperative Programming Languages: An Approach using Sets of Types and Subprograms
Bernd Holzmüller ... 31
Formal Methods for Component-Based Systems
Rosziati Ibrahim ... 32
Compilation of Source Code into Object-Oriented Patterns
David H. Lorenz ... 32
Integration of Object-Based Knowledge Representation in a Reflexive Object-Oriented Language
Gabriel Pavillet ... 33
Implementing Layered Object-Oriented Designs
Yannis Smaragdakis ... 34
An Evaluation of the Benefits of Object Oriented Methods in Software Development Processes
Pentti Virtanen ... 35
Process Measuring, Modeling, and Understanding
Luigi Benedicenti ... 37
The Contextual Objects Modeling for a Reactive Information System
Birol Berkem ... 38
Experiences in Designing a Spatio-temporal Information System for Marine Coastal Environments Using Object Technology
Anita Jacob ... 39
Facilitating Design Reuse in Object-Oriented Systems Using Design Patterns
Hyoseob Kim ... 39
A Reverse Engineering Methodology for Object-Oriented Systems
Theodoros Lantzos ... 40
The Reliability of Object-Oriented Software Systems
Jan Sabak ... 41
Extending Object-Oriented Development Methodologies to Support Distributed Object Computing
Umit Uzun ... 42
II. Techniques, Tools and Formalisms for Capturing and Assessing the Architectural Quality in Object-Oriented Software
Isabelle Borne, Fernando Brito e Abreu, Wolfgang De Meuter, Galal Hassan Galal ... 44

A Note on Object-Oriented Software Architecting
Galal Hassan Galal ... 46
COMPARE: A Comprehensive Framework for Architecture Evaluation
Lionel C. Briand, S. Jeromy Carrière, Rick Kazman, Jürgen Wüst ... 48
Experience with the Architecture Quality Assessment of a Rule-Based Object-Oriented System
Jeff L. Burgett, Anthony Lange ... 50
Evaluating the Modularity of Model-Driven Object-Oriented Software Architectures
Geert Poels ... 52
Assessing the Evolvability of Software Architectures
Tom Mens, Kim Mens ... 54
The Influence of Domain-Specific Abstraction on Evolvability of Software Architectures for Information Systems
Jan Verelst ... 56
Object-Oriented Frameworks: Architecture Adaptability
Paolo Predonzani, Giancarlo Succi, Andrea Valerio, Tullio Vernazza ... 58
A Transformational Approach to Structural Design Assessment and Change
Paulo S.C. Alencar, Donald D. Cowan, Jing Dong, Carlos J.P. Lucena ... 60
Reengineering the Modularity of OO Systems
Fernando Brito e Abreu, Gonçalo Pereira, Pedro Sousa ... 62
A Contextual Help System Based on Intelligent Diagnosis Processes Aiming to Design and Maintain Object-Oriented Software Packages
Annya Romanczuk-Réquilé, Cabral Lima, Celso Kaestner, Edson Scalabrin ... 64
Analysis of Overridden Methods to Infer Hot Spots
Serge Demeyer ... 66
Purpose: Between Types and Code
Natalia Romero, María José Presso, Verónica Argañaraz, Gabriel Baum, Máximo Prieto ... 68
Ensuring Object Survival in a Desert
Xavier Alvarez, Gaston Dombiak, Felipe Zak, Máximo Prieto ... 70
III. Experiences in Object-Oriented Re-Engineering
Stéphane Ducasse, Joachim Weisbrod ... 72

Exploiting Design Heuristics for Automatic Problem Detection
Holger Bär, Oliver Ciupke ... 73
Design Metrics in the Reengineering of Object-Oriented Systems
R. Harrison, S. Counsell, R. Nithi ... 74
Visual Detection of Duplicated Code
Matthias Rieger, Stéphane Ducasse ... 75
Dynamic Type Inference to Support Object-Oriented Reengineering in Smalltalk
Pascal Rapicault, Mireille Blay-Fornarino, Stéphane Ducasse, Anne-Marie Dery ... 76
Understanding Object-Oriented Programs through Declarative Event Analysis
Tamar Richner, Stéphane Ducasse, Roel Wuyts ... 78
Program Restructuring to Introduce Design Patterns
Mel Ó Cinnéide, Paddy Nixon ... 79
Design Patterns as Operators Implemented with Refactorings
Benedikt Schulz, Thomas Genssler ... 80
"Good Enough" Analysis for Refactoring
Don Roberts, John Brant ... 81
An Exchange Model for Reengineering Tools
Sander Tichelaar, Serge Demeyer ... 82
Capturing the Existing OO Design with the ROMEO Method
Theodoros Lantzos, Anthony Bryant, Helen M. Edwards ... 84
Systems Reengineering Patterns
Perdita Stevens, Rob Pooley ... 85
Using Object-Orientation to Improve the Software of the German Shoe Industry
Werner Vieth ... 86
Report of Working Group on Reengineering Patterns
Perdita Stevens ... 89
Report of Working Group on Reengineering Operations
Mel Ó Cinnéide ... 93
Report of Working Group on Dynamic Analysis
Tamar Richner ... 95
Report of Working Group on Metrics/Tools
Steve Counsell ... 96
IV. Object-Oriented Software Architectures
Jan Bosch, Helene Bachatene, Görel Hedin, Kai Koskimies ... 99
Pattern-Oriented Framework Engineering Using FRED
Markku Hakala, Juha Hautamäki, Jyrki Tuomi, Antti Viljamaa, Jukka Viljamaa ... 105
Exploiting Architecture in Experimental System Development
Klaus Marius Hansen ... 110
Object-Orientation and Software Architecture
Philippe Lalanda, Sophie Cherki ... 115
Semantic Structure: A Basis for Software Architecture
Robb D. Nebbe ... 120
A Java Architecture for Dynamic Object and Framework Customizations
Linda M. Seiter ... 125
V. Third International Workshop on Component-Oriented Programming (WCOP'98)
Jan Bosch, Clemens Szyperski, Wolfgang Weck ... 130
Type-Safe Delegation for Dynamic Component Adaptation
Günter Kniesel ... 136
Consistent Extension of Components in Presence of Explicit Invariants
Anna Mikhajlova ... 138
Component Composition with Sharing
Geoff Outhred, John Potter ... 141
Late Component Adaptation
Ralph Keller, Urs Hölzle ... 143
Adaptation of Connectors in Software Architectures
Ian Welch, Robert Stroud ... 145
Connecting Incompatible Black-Box Components Using Customizable Adapters
Bülent Küçük, M. Nedim Alpdemir, Richard N. Zobel ... 147
Dynamic Configuration of Distributed Software Components
Eila Niemelä, Juha Marjeta ... 149
Components for Non-Functional Requirements
Bert Robben, Wouter Joosen, Frank Matthijs, Bart Vanhaute, Pierre Verbaeten ... 151
The Operational Aspects of Component Architecture
Mark Lycett, Ray J. Paul ... 153
Architectures for Interoperation between Component Frameworks
Günter Graw, Arnulf Mester ... 155
A Model for Gluing Together
P.S.C. Alencar, D.D. Cowan, C.J.P. Lucena, L.C.M. Nova ... 157
Component Testing: An Extended Abstract
Mark Grossman ... 159
Applying a Domain Specific Language Approach to Component Oriented Programming
James Ingham, Malcolm Munro ... 161
The Impact of Large-Scale Component and Framework Application Development on Business
David Helton ... 163
Maintaining a COTS Component-Based Solution Using Traditional Static Analysis Techniques
R. Cherinka, C. Overstreet, J. Ricci, M. Schrank ... 165
VI. Second ECOOP Workshop on Precise Behavioral Semantics (with an Emphasis on OO Business Specifications)
Bernhard Rumpe, Haim Kilov ... 167

VII. Tools and Environments for Business Rules
Kim Mens, Roel Wuyts, Dirk Bontridder, Alain Grijseels ... 189
Enriching Constraints and Business Rules in Object Oriented Analysis Models with Trigger Specifications
Stefan Van Baelen ... 197
Business Rules vs. Database Rules - A Position Statement
Brian Spencer ... 200
Elements Advisor by Neuron Data
Bruno Jouhier, Carlos Serrano-Morale, Eric Kintzer ... 202
Business Rules Layers Between Process and Workflow Modeling: An Object-Oriented Perspective
Gerhard F. Knolmayer ... 205
Business-Object Semantics Communication Model in Distributed Environment
Hei-Chia Wang, V. Karakostas ... 208
How Business Rules Should be Modeled and Implemented in OO
Leo Hermans, Wim van Stokkum ... 211
A Reflective Environment for Configurable Business Rules and Tools
Michel Tilman ... 214
VIII. Object-Oriented Business Process Modelling
Elizabeth A. Kendall (Ed.) ... 217

Business Process Modeling - Motivation, Requirements, Implementation
Ilia Bider, Maxim Khomyakov ... 217
An Integrated Approach to Object Oriented Modeling of Business Processes
Markus Podolsky ... 219
Enterprise Modelling
Monique Snoeck, Rakesh Agarwal, Chiranjit Basu ... 222
Requirements Capture Using Goals
Ian F. Alexander ... 228
'Contextual Objects' or Goal Orientation for Business Process Modeling
Birol Berkem ... 232
Mapping Business Processes to Software Design Artifacts
Pavel Hruby ... 234
Mapping Business Processes to Objects, Components and Frameworks: A Moving Target!
Eric Callebaut ... 237
Partitioning Goals with Roles
Elizabeth A. Kendall ... 240
IX. Object-Oriented Product Metrics for Software Quality Assessment
Houari A. Sahraoui ... 242

Do Metrics Support Framework Development?
Serge Demeyer, Stéphane Ducasse ... 247
Assessment of Large Object Oriented Software Systems: A Metrics Based Process
Gerd Köhler, Heinrich Rust, Frank Simon ... 250
Using Object-Oriented Metrics for Automatic Design Flaws Detection in Large Scale Systems
Radu Marinescu ... 252
An OO Framework for Software Measurement and Evaluation
Reiner R. Dumke ... 253
A Product Metrics Tool Integrated into a Software Development Environment
Claus Lewerentz, Frank Simon ... 255
Collecting and Analyzing the MOOD2 Metrics
Fernando Brito e Abreu, Jean Sebastien Cuche ... 259
An Analytical Evaluation of Static Coupling Measures for Domain Object Classes
Geert Poels ... 261
Impact of Complexity Metrics on Reusability in OO Systems
Yida Mao, Houari A. Sahraoui, Hakim Lounis ... 264
A Formal Analysis of Modularisation and Its Application to Object-Oriented Methods
Adam Batenin ... 267
Software Products Evaluation
Teade Punter ... 269
Is Extension Complexity a Fundamental Software Metric?
E. Kantorowitz ... 270
X. ECOOP Workshop on Distributed Object Security
Christian D. Jensen, George Coulouris, Daniel Hagimont ... 273
Merging Capabilities with the Object Model of an Object-Oriented Abstract Machine
María Ángeles Díaz Fondón, Darío Álvarez Gutiérrez, Armando García-Mendoza Sánchez, Fernando Álvarez García, Lourdes Tajes Martínez, Juan Manuel Cueva Lovelle ... 277
Mutual Suspicion in a Generic Object-Support System
Christian D. Jensen, Daniel Hagimont ... 278
Towards an Access Control Policy Language for CORBA
Gerald Brose ... 279
Security for Network Places
Tim Kindberg ... 280
Reflective Authorization Systems
Massimo Ancona, Walter Cazzola, Eduardo B. Fernandez ... 281
Dynamic Adaptation of the Security Properties of Applications and Components
Ian Welch, Robert Stroud ... 282
Interoperating between Security Domains
Charles Schmidt, Vipin Swarup ... 283
Delegation-Based Access Control for Intelligent Network Services
Tuomas Aura, Petteri Koponen, Juhana Räsänen ... 284
Secure Communication in Non-uniform Trust Environments
George Coulouris, Jean Dollimore, Marcus Roberts ... 285
Dynamic Access Control for Shared Objects in Groupware Applications
Andrew Rowley ... 286
A Fault-Tolerant Secure CORBA Store using Fragmentation-Redundancy-Scattering
Cristina Silva, Luís Rodrigues ... 287
XI. 4th ECOOP Workshop on Mobility: Secure Internet Mobile Computations
Leila Ismail, Ciarán Bryce, Jan Vitek ... 288
Protection in Programming-Language Translations: Mobile Object Systems
Martín Abadi ... 291
D'Agents: Future Security Directions
Robert S. Gray ... 292
A Multi-Level Interface Structure for the Selective Publication of Services in an Open Environment
Jarle Hulaas, Alex Villazón, Jürgen Harms ... 293
A Practical Demonstration of the Effect of Malicious Mobile Agents on CPU Load Balancing
Adam P. Greenaway, Gerard T. McKee ... 294
Role-Based Protection and Delegation for Mobile Object Environments
Nataraj Nagaratnam, Doug Lea ... 295
Coarse-grained Java Security Policies
T. Jensen, D. Le Métayer, T. Thorn ... 296
Secure Recording of Itineraries through Cooperating Agents
Volker Roth ... 297
A Model of Attacks of Malicious Hosts Against Mobile Agents
Fritz Hohl ... 299
Agent Trustworthiness
Lora L. Kassab, Jeffrey Voas ... 300
Protecting the Itinerary of Mobile Agents
Uwe G. Wilhelm, Sebastian Staamann, Levente Buttyán ... 301
Position Paper: Security in Tacoma
Nils P. Sudmann ... 302
Type-Safe Execution of Mobile Agents in Anonymous Networks
Matthew Hennessy, James Riely ... 304
Mobile Computations and Trust
Vipin Swarup ... 305
Case Studies in Security and Resource Management for Mobile Objects
Dejan Milojicic, Gul Agha, Philippe Bernadat, Deepika Chauhan, Shai Guday, Nadeem Jamali, Dan Lambright ... 306
XII. 3rd Workshop on Mobility and Replication
Birger Andersen, Carlos Baquero, Niels C. Juul ... 307
UbiData: An Adaptable Framework for Information Dissemination to Mobile Users
Ana Paula Afonso, Francisco S. Regateiro, Mário J. Silva ... 309
Twin-Transactions - Delayed Transaction Synchronisation Model
A. Rasheed, A. Zaslavsky ... 311
Partitioning and Assignment of Distributed Object Applications Incorporating Object Replication and Caching
Doug Kimelman, V.T. Rajan, Tova Roth, Mark Wegman ... 313
Open Implementation of a Mobile Communication System
Eddy Truyen, Bert Robben, Peter Kenens, Frank Matthijs, Sam Michiels, Wouter Joosen, Pierre Verbaeten ... 315
Towards a Grand Unified Framework for Mobile Objects
Francisco J. Ballesteros, Fabio Kon, Sergio Arévalo, Roy H. Campbell ... 317
Measuring the Quality of Service of Optimistic Replication
Geoffrey H. Kuenning, Rajive Bagrodia, Richard G. Guy, Gerald J. Popek, Peter Reiher, An-I Wang ... 319
Evaluation Overview of the Replication Methods for High Availability Databases
Lars Frank ... 321
Reflection Based Mobile Replication
Luis Alonso ... 323
Support for Mobility and Replication in the AspectIX Architecture
Martin Geier, Martin Steckermeier, Ulrich Becker, Franz J. Hauck, Erich Meier, Uwe Rastofer ... 325
How to Combine Strong Availability with Weak Replication of Objects?
Alice Bonhomme, Laurent Lefèvre ... 327
Tradeoffs of Distributed Object Models
Franz J. Hauck, Francisco J. Ballesteros ... 329
XIII. Learning and Teaching Objects Successfully
Jürgen Börstler ... 333
Teaching Concepts in the Object-Oriented Field
Erzsébet Angster ... 335
A Newcomer's Thoughts about Responsibility Distribution
Beáta Kelemen ... 340
An Effective Approach to Learning Object-Oriented Technology
Alejandro Fernández, Gustavo Rossi ... 344
Teaching Objects: The Case for Modelling
Ana Maria D. Moreira ... 350
Involving Learners in Object-Oriented Technology Teaching Process: Five Web-Based Steps for Success
Ahmed Seffah ... 355
How to Teach Object-Oriented Programming to Well-Trained Cobol Programmers
Markus Knasmüller ... 359
Table of Contents
XVII
XIV. ECOOP'98 Workshop on Reflective Object-Oriented Programming and Systems Robert Stroud, Stuart P. Mitchell 363 MOPping up Exceptions Stuart P. Mitchell, A. Burns, A. J. Wellings.............................................................. 365 A Metaobject Protocol for Correlate Bert Robben, Wouter Joosen, Frank Matthijs, Bart Vanhaute, Pierre Verbaeten ... 367 Adaptive Active Object José L. Contreras, Jean-Louis Sourrouille............................................................... 369 Yet Another java.lang.Class Shigeru Chiba, Michiaki Tatsubori .......................................................................... 372 A Reflective Java Class Loader Ian Welch, Robert Stroud ......................................................................................... 374 Sanity Checking OS Configuration via Reflective Computation Lutz Wohlrab ............................................................................................................ 376 A Reflective Component Model for Open Systems José M. Troya, Antonio Vallecillo ............................................................................ 378 CoffeeStrainer - Statically Checking Structural Constraints on Java Programs Boris Bokowski ......................................................................................................... 380 A Computational Model for a Distributed Object-Oriented Operating System Based on a Reflective Abstract Machine Lourdes Tajes Martínez, Fernando Álvarez-García, Marián Díaz-Fondón, Darío Álvarez Gutiérrez, Juan Manuel Cueva Lovelle....................................................... 382 A Reflective Implementation of a Distributed Programming Model R. Pawlak, L. Duchien, L. Seinturier, P. Champagnoux, D. Enselme, G. Florin..... 384 Evaluation of Object-Oriented Reflective Models Walter Cazzola ......................................................................................................... 
386 2K: A Reflective Component-Based Operating System for Rapidly Changing Environments Fabio Kon, Ashish Singhai, Roy H. Campbell, Dulcineia Carvalho, Robert Moore, Francisco J. Ballesteros ........................................................................................... 388 Experiments with Reflective Middleware Fábio M. Costa, Gordon S. Blair, Geoff Coulson .................................................... 390 Three Practical Experiences of Using Reflection Charlotte Pii Lunau .................................................................................................. 392
XV. Aspect Oriented Programming Cristina Videira Lopes (Ed.)
394
Towards a Generic Framework for AOP Pascal Fradet, Mario Südholt .................................................................................. 394 Recent Developments in AspectJ Cristina Videira Lopes, Gregor Kiczales ................................................................. 398 Coordination and Composition: The Two Paradigms Underlying AOP ? Robb D. Nebbe.......................................................................................................... 402 Operation-Level Composition: A Case in (Join) Point Harold L. Ossher, Peri L. Tarr................................................................................. 406 Deriving Design Aspects from Conceptual Models Bedir Tekinerdogan, Mehmet Aksit .......................................................................... 410 Aspect-Oriented Logic Meta Programming Kris De Volder.......................................................................................................... 414 Roles, Subjects and Aspects: How Do They Relate? Daniel Bardou .......................................................................................................... 418 HAL: A Design-Based Aspect Language for Distribution Control Ulrich Becker, Franz J. Hauck, J. Kleinöder ........................................................... 420 Interactions between Objects: An Aspect of Object-Oriented Languages L. Berger, A.M. Dery, M. Fornarino ........................................................................ 422 Replication as an aspect: The Naming Problem Johan Fabry.............................................................................................................. 424 AspectIX: A Middleware for Aspect-Oriented Programming Franz J. Hauck, Ulrich Becker, Martin Geier, Erich Meier, Uwe Rastofer, Martin Steckermeier ............................................................................................................. 
426 An AOP Case with Static and Dynamic Aspects Peter Kenens, Sam Michiels, Frank Matthijs, Bert Robben, Eddy Truyen, Bart Vanhaute, Wouter Joosen, Pierre Verbaeten ........................................................... 428 Visitor Beans: An Aspect-Oriented Pattern David H. Lorenz ....................................................................................................... 431 Assessing Aspect-Oriented Programming: Preliminary Results Robert J. Walker, Elisa L.A. Baniassad, Gail C. Murphy ........................................ 433 Aspect-Oriented Programming using Composition Filters Mehmet Aksit, Bedir Tekinerdogan .......................................................................... 435 The impact of Aspect-Oriented Programming on Formal Methods Lynne Blair, Gordon S. Blair.................................................................................... 436
Aspects of Enterprise Java Beans Gregory Blank, Gene Vayngrib ................................................................................ 437 Aspect-Oriented Programming in the Coyote Project Vinny Cahill, Jim Dowling, Tilman Schäfer, Barry Redmond.................................. 438 Towards Reusable Synchronisation for Object-Oriented Languages David Holmes, James Noble, John Potter ................................................................ 439 Agent Roles and Aspects Elizabeth A. Kendall ................................................................................................. 440 The Distribution Aspect - A Meeting Ground between Tool and Programmer Doug Kimelman........................................................................................................ 441 Is Composition of Metaobjects = Aspect-Oriented Programming Charlotte Pii Lunau .................................................................................................. 442 Run-time Adaptability of Synchronization Policies in Concurrent Object-Oriented Languages Fernando Sánchez, Juan Hernández, Juan Manuel Murillo, Enrique Pedraza....... 443
XVI. Parallel Object-Oriented Scientific Computing Kei Davis
444
OVERTURE: Object-Oriented Parallel Adaptive Mesh Refinement for Serial and Parallel Environments David L. Brown, Kei Davis, William D. Henshaw, Daniel J. Quinlan, Kristi Brislawn ........ 446 Applying OO Concepts to Create an Environment for Intensive Multi-user Computations in Electromagnetism Delphine Caron ........................................................................................................ 448 Rethinking a MD code using Object Oriented Technology Stefano Cozzini ......................................................................................................... 450 ROSE: An Optimizing Transformation System for C++ Array-Class Libraries Kei Davis, Daniel Quinlan ....................................................................................... 452 The Parallel Asynchronous Data Routing Environment PADRE Kei Davis, Daniel Quinlan ....................................................................................... 453 Object Oriented Programming and Finite Element Analysis: Achieving Control Over the Calculation Process R. I. Mackie, R. R. Gajewski..................................................................................... 456 Tecolote: An Object-Oriented Framework for Physics Development J. C. Marshall, L. A. Ankeny, S. P. Clancy, J. H. Hall, J. H. Heiken, K. S. Holian, S. R. Lee, G. R. McNamara, J. W. Painter, M. E. Zander, J. C. Cummings, S. W. Haney, S. R. Karmesin, W. F. Humphrey, J. V. Reynders, T. W. Williams, R. L. Graham ... 458 Is Java Suitable for Portable High-Performance Computing ? Satoshi Matsuoka, Shigeo Itou ................................................................................. 460
Applying Fortran 90 and Object-Oriented Techniques to Scientific Applications Charles D. Norton, Viktor Decyk, Joan Slottow...................................................... 462 Development and Utilization of Parallel Generic Algorithms for Scientific Computations A. Radenski, A. Vann, B. Norris ............................................................................... 464 The Matrix Template Library: A Unifying Framework for Numerical Linear Algebra Jeremy G. Siek, Andrew Lumsdaine ......................................................................... 466 A Rational Approach to Portable High Performance: The Basic Linear Algebra Instruction Set (BLAIS) and the Fixed Algorithm Size Template (FAST) Library Jeremy G. Siek, Andrew Lumsdaine ......................................................................... 468 Object-Oriented Programming in High Performance Fortran E. de Sturler.............................................................................................................. 470 Towards Real World Scientific Web Computing Matthias Weidmann, Philipp Drum, Norman Thomson, Peter Luksch .................... 472
XVII. Automating the Object-Oriented Development Process Mehmet Aksit, Bedir Tekinerdogan
474
The Case for Cooperative Requirement Writing Vincenzo Ambriola, Vincenzo Gervasi ..................................................................... 477 Systematic Construction of UML Associations and Aggregations Using cOlOr framework Franck Barbier ......................................................................................................... 480 Software Quality in the Objectory Process Klaas van den Berg .................................................................................................. 483 Evaluating OO-CASE Tools: OO Research Meets Practice Danny Greefhorst, Mark van Elswijk, Matthijs Maat, Rob Maijers......................... 486 Conceptual Predesign as a Stopover for Mapping Natural Language Requirements Sentences to State Chart Patterns Christion Kop, Heinrich C. Mayr ............................................................................. 489 Using the MétaGen Modeling and Development Environment in the FIBOF Esprit Project B. Lesueur, N. Revault, G. Sunyé, M. Ziane ............................................................. 492 Formalizing Artifacts of Object-Oriented Analysis & Design Methods Motoshi Saeki ........................................................................................................... 493 Providing Automatic Support for Heuristic Rules of Methods Bedir Tekinerdogan, Mehmet Aksit .......................................................................... 496 From Visual Specifications to Executable Code Enn Tyugu................................................................................................................. 499
XVIII. Object-Oriented Technology and Real-Time Systems Eugene Durr, Leonor Barroca, François Terrier
502
Dynamic Scheduling of Object Invocations in Distributed Object-Oriented Real-Time Systems Bo N. Jørgensen, Wouter Joosen.............................................................................. 503 A Code Generator with Application-Oriented Size Optimization for Object-Oriented Embedded Control Software Fumio Narisawa, Hidemitsu Naya, Takanori Yokoyama ......................................... 507 UML/PNO: A Way to Merge UML and Petri Net Objects for the Analysis of Real-Time Systems Jérôme Delatour, Mario Paludetto .......................................................................... 511 Modular Development of Control and Computational Modules Using Reactive Objects Frédéric Boulanger, Guy Vidal-Naquet ................................................................... 515 TDE: A Time Driven Engine for Predictable Execution of Real-Time Systems Flavio De Paoli, F. Tisato, C. Bellettini................................................................... 519 Virtual World Objects for Real-Time Cooperative Design Christian Toinard, Nicolas Chevassus ..................................................................... 525 Providing Real-Time Object-Oriented Industrial Messaging Services R. Boissier, M. Epivent, E. Gressier-Soudan, F. Horn, A. Laurent, D. Razafindramary ........................................................................................................ 529 A Train Control Modeling with the Real-Time Object Paradigm Sébastien Gérard, Agnès Lanusse, François Terrier ............................................... 533
XIX. Demonstrations Jan Dockx
539
Reflections on a demonstration chair Jan Dockx ................................................................................................................. 539 Visualizing Object-Oriented Programs with Jinsight Wim De Pauw, John Vlissides .................................................................................. 541 SoftDB - A Simple Software Database Markus Knasmüller .................................................................................................. 543 OO-in-the-Large: Software Development with Subject-Oriented Programming Harold Ossher, Peri Tarr ......................................................................................... 545 Dynamic Application Partitioning in VisualAge Generator Version 3.0 Doug Kimelman, V. T. Rajan, Tova Roth, Mark Wegman, Beth Lindsey, Hayden Lindsey, Sandy Thomas ............................................................................................ 547 The Refactoring Browser John Brant, Don Roberts .......................................................................................... 549
Business Objects with History and Planning Ilia Bider, Maxim Khomyakov.................................................................................. 550 Poor Man's Genericity for Java Boris Bokowski, Markus Dahm ................................................................................ 552 An Object DBMS for Multimedia Presentations Including Video Data Rafael Lozano, Michel Adiba, Herve Martin, Francoise Mocellin .......................... 553 OPCAT - Object-Process Case Tool: An Integrated System Engineering Environment (ISEE) Dov Dori, Arnon Sturm ............................................................................................ 555
XX. Posters Patrick Steyaert (Ed.)
557
The AspectIX ORB Architecture Franz J. Hauck, Ulrich Becker, Martin Geier, Erich Meier, Uwe Rastofer, Martin Steckermeier ............................................................................................................. 557 Formalization of Component Object Model (COM) - The COMEL Language Rosziati Ibrahim, Clemens Szyperski........................................................................ 558 Oberon-D = Object-Oriented System + Object-Oriented Database Markus Knasmüller .................................................................................................. 559 OctoGuide - a Graphical Aid for Navigating among Octopus/UML Artifacts Domiczi Endre .......................................................................................................... 560 Run Time Reusability in Object-Oriented Schematic Capture David Parsons, Tom Kazmierski .............................................................................. 561 Replication as an Aspect Johan Fabry.............................................................................................................. 563
Author Index
564
The 8th Workshop for PhD Students in Object-Oriented Systems
Erik Ernst (1), Frank Gerhardt (2), and Luigi Benedicenti (3)
(1) DEVISE, Comp. Sci. Dept., University of Aarhus, Denmark, [email protected]
(2) Daimler-Benz AG, Dept. IO/TM, D-70322 Stuttgart, Germany, [email protected]
(3) DIST - Università di Genova, I-16145 Genova, Italy,
[email protected]

Each year since 1991 there has been a workshop for PhD students at the ECOOP conference. It is conducted every year by the network of PhD Students in Object-Oriented Systems (PhDOOS), hence it is an event for PhD students by PhD students. The purpose of the PhDOOS network is to help leverage the collective resources of young researchers in the object community by improving the communication and collaboration between them. During a year, the PhDOOS workshop is the main event where we meet face-to-face. Between workshops we stay in touch through our mailing list. More information on the PhDOOS network can be found at http://www.phdoos.org/. This workshop is a little special compared to the other workshops at ECOOP. Where other workshops typically focus on a well-defined topic chosen at the outset, the technical topics of this workshop were derived from the research interests of the participants. Since the workshop had 33 participants, we partitioned the main group into several subgroups, each having a more focused research area as topic. The work in these subgroups had been prepared extensively by the participants. It is our experience that well-prepared participants are a very important precondition for good interaction at the workshop. About half of the participants had submitted a position paper. Everybody had prepared a presentation of his or her research work - a longer presentation for those participants with a position paper, and a shorter one for those who just provided a short abstract of their research work. The position papers have been published at Aarhus University; information about how to obtain this report is given at the end of this introduction. The technical sessions in subgroups were an important part of the workshop, but there were also other activities.
S. Demeyer and J. Bosch (Eds.): ECOOP'98 Workshop Reader, LNCS 1543, pp. 1-43, 1998. © Springer-Verlag Berlin Heidelberg 1998

We had two plenary sessions with invited speakers, and we discussed various issues related to the PhDOOS network itself, collaboration between us and others, the conditions of being a doctoral student in various countries, and more. So many OO PhD students collected in one room is a great opportunity to generate discussion! Our invited speakers were Prof. Eric Jul from the University of Copenhagen and Prof. Dave Thomas of OTI. Eric Jul gave a brilliant talk about the process of getting a PhD, how to write the thesis, how to obtain the right balance between its topics, how to use the time reasonably during those years, and many other things. Dave Thomas gave a talk, no less interesting, about all the subtle (or sometimes less subtle) differences between the academic world and the industry, hence helping us to understand some trade-offs between different potential career paths. We were indeed happy to have these two outstanding personalities as speakers at our workshop. There was quite a lot of discussion dealing with the network, and with collaboration between us in the future. We felt that the network is too inactive during the year, and that communication needs to be improved. On the other hand, probably everybody felt that the PhDOOS workshop at ECOOP is a good tradition, and it will be continued. However, to make a lot of other things happen too, the activities in the network should be a natural and valuable resource for each of us in our daily work, not just a beautiful idea that we can play with after having finished our real work. An obvious idea which has been discussed before is to create a framework for reviewing each other's papers or other written work. The idea is that the large number of PhD students in the PhDOOS network - and their local friends out there - is too good a source of information and inspiration to leave unused! For example, send out a section about "Related Work" from an article you are working on, and have people tell you about the things you overlooked. In this process, we have to make sure that the authors feel assured their work is not "stolen" by anybody in the process. Since cooperation is a basic tool in research today, keeping the work secret is not an option. On the contrary, as soon as many people know that a particular idea or approach originally came from one group of persons, it will in fact be better protected against "theft" than without this community awareness. The network is a great resource of knowledge and inspiration, we just have to push the idea a little bit. Another idea was to use the Internet more intensively to get in touch, possibly on a more regular basis.
This has already been the case for the organizers of this workshop for years - usually the organizers come from different countries, and there are many things to discuss during the year. Real meetings are great, but difficult and expensive to arrange, and well-known technologies like IRC can already do much. However, whether in real life or via network cables, meeting other people and getting to know them is a necessary precondition for good, lively cooperation, and an event like this workshop is an excellent way to meet new people - and, next year, also well-known ones... The remainder of this chapter is devoted to the presentation of the technical discussions which took place in the workshop subgroups. Each group presents itself in one section. Each section starts with a group summary. The summary should work as a reader's guide for you, to quickly discover what was discussed in the group and to zoom in on the persons whose work is most interesting for you. The work of the members of the group is presented in subsections following the summary. In each subsection, one participant presents his or her own research, as it was presented and discussed at the workshop. The presentation is approximately one page long; it is not supposed to be an ultra-compact research article, but rather an appetizer which would give you an opportunity to look closer at the presented
work using the provided URLs and similar references. The contact information for the participants is listed separately at the end of this chapter. As mentioned above, a number of position papers were submitted to the workshop. The accepted position papers are available as the publication DAIMI PB535 from the Department of Computer Science, University of Aarhus. It will also be made available in an electronic format. Please contact the library of the department at
[email protected] about this.
1 Code Analysis and Tools

The projects within the Code Analysis and Tools subgroup are scattered over the lifetime of program development. The tools overlap in some phases, and in these cases one might integrate or connect them to allow more sophisticated methods in the development. The possible connections between the tools are the following:
Slicing & Documentation: (semi-)automatic integration of the control and data flow information into the documentation might ease the understanding of a piece of software.
Slicing & Migration: one could discover strongly coupled classes, which should be migrated together to the new environment to avoid big performance problems.
Test Bench & Migration: re-engineering is likely to introduce errors, thus it is important to test afterwards. If the test bench is aware of both platforms and the test cases are migrated with the software, such mistakes might be avoided.
Migration & Monitoring: migration might allow one to reorganize the deployment of the subsystems. An on-line monitoring tool can help this process right from the beginning.
Migration & Documentation: the documentation tool needs to operate on the same meta information as the migration tool in order to stay synchronized with the current source code.
Test Bench & Slicing: white-box testing requires control and data flow information, which can be obtained from the slicing tool. This also makes it easier to measure the line coverage of the tests.
RT Systems & Slicing: in a real-time system, determining the worst-case execution time requires determining the control flow, where the slicing tool is of great help.
Such integration might be achieved if the tools can exchange their meta information about the developed software. There are already standards for meta information structures (UML), formats (CDIF) and protocols (CORBA MOF) with which the static and the dynamic parts for analysis can be fully described. Unfortunately, they lack support for describing the dynamic information of later phases of the development (e.g. data and control flow), thus we need to focus on this area for improved interoperability.
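The kind of meta-information exchange sketched above can be illustrated as a shared repository that several tools read and write. This is only a hedged sketch of the idea: the class names `MetaRepository`, the string keys, and the example facts are hypothetical illustrations and are not part of UML, CDIF, or the CORBA MOF.

```java
import java.util.HashMap;
import java.util.Map;

// A minimal shared meta-information repository: each tool stores and
// retrieves facts about program elements under agreed-upon keys.
class MetaRepository {
    private final Map<String, Map<String, Object>> facts = new HashMap<>();

    // A slicing tool might record control/data-flow facts here ...
    void put(String element, String key, Object value) {
        facts.computeIfAbsent(element, e -> new HashMap<>()).put(key, value);
    }

    // ... which a migration, documentation, or test-bench tool can query.
    Object get(String element, String key) {
        Map<String, Object> m = facts.get(element);
        return m == null ? null : m.get(key);
    }
}

public class MetaExchangeDemo {
    public static void main(String[] args) {
        MetaRepository repo = new MetaRepository();
        // Hypothetical example: the slicing tool publishes a coupling fact,
        // which the migration tool consults to migrate both classes together.
        repo.put("AccountManager", "coupledWith", "TransactionLog");
        System.out.println(repo.get("AccountManager", "coupledWith"));
    }
}
```

A real exchange would of course serialize such facts in a standard format (CDIF) over a standard protocol (the MOF interfaces) rather than sharing an in-process map.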
1.1 Ákos Frohner: Framework Design and Documentation
Current Research: In 1997, in the group of Prof. Hanspeter Mössenböck (Linz),
I have been involved in the FWF research project "Framework Design and Documentation". This project aims at the development of techniques and tools for designing and documenting frameworks. A central paradigm in the research is the active document. To describe the structure and behaviour of frameworks we also need to store this information not only in textual format, but in a way which can be used to help navigation. The most straightforward way is to store the meta information of the framework itself, and we found that the Unified Modelling Language suits our needs. During my research at Linz I have introduced an implementation of a UML-based metamodel and a framework around this model to support our design and documentation applications, with the following requirements: (1) extensibility, (2) flexible model manipulation, (3) clear separation of the data and its viewers and editors, and finally (4) support for team work. The original UML metamodel describes how the information should be stored and provides mechanisms to extend the model at run time. My implementation added another way to extend the model and gives support for the remaining three goals.
Future Directions: The current state of this tool is not complete, thus I would like to continue my research in the following areas:
1. Active Pictures in Framework Documentation: Technical artefacts are often described by graphical plans, but when presented with the full picture, it is often difficult to find out in which order to read it. Therefore, our idea is to store a picture not as a whole but as a sequence of drawing steps that can be played forwards and backwards like a film.
2. Extensions of the UML Notation for Specifying the Hot Spots of Frameworks: Information that could be provided is, among others: What classes have to be extended in order to get some desired effect? What methods have to be redefined for that? What are the pre- and postconditions of these methods?
3. Active Cookbooks for Extending Frameworks: A cookbook is a recipe that explains how to perform a certain task. An active one is not just a textual recipe but also contains interactive elements that can provide information on demand or help to perform certain subtasks in a (semi-)automatic way.
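The "active pictures" idea of item 1 - storing a diagram not as a whole but as a replayable sequence of drawing steps - can be sketched in a few lines. The class and step names below are invented for illustration and do not come from the project itself.

```java
import java.util.ArrayList;
import java.util.List;

// An "active picture": a diagram stored as an ordered list of drawing
// steps that can be played forwards and backwards like a film.
class ActivePicture {
    private final List<String> steps = new ArrayList<>();
    private int position = 0; // number of steps currently shown

    void record(String step) { steps.add(step); }

    // Advance one step; returns the step now shown, or null at the end.
    String forward() {
        return position < steps.size() ? steps.get(position++) : null;
    }

    // Go back one step; returns the step just undone, or null at the start.
    String backward() {
        return position > 0 ? steps.get(--position) : null;
    }
}
```

A documentation viewer built on such a model can let the reader step through the construction of a class diagram in the order the designer intended, instead of confronting the full picture at once.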
Sidework: Due to my previous research work in the area of parallel computing
and my interest in security, I would like to evaluate some visualisation techniques for the mapping of the objects to real-world resources and the security constraints in the system, using the above-mentioned tool.
1.2 Frank Gerhardt: Reengineering with the CORBA Meta Object Facility
I'm working on reengineering of object-oriented systems. Specifically, I'm looking at the problem of migrating applications from one development platform to another. (The platforms I consider in detail are Java and Smalltalk.) One way to get tool support for such a migration effort is to extend a conventional reengineering tool suite with support for object-oriented concepts. This has some practical limitations and is technologically unsatisfactory, because advanced concepts would only be handled in the code being worked on. They would not be employed in the architecture of the reengineering tool suite itself. I claim that reengineering tools should have an object-oriented architecture and that it can be based on the CORBA Meta Object Facility (MOF). Let's look at a scenario in which we would use a conventional reengineering tool suite to migrate some code from a development platform A to another platform B. If we use an IDE on platform A with version control, team support etc., then the first step is to export our project (or a part of it) from the IDE to obtain flat files with our source code. Then we import this code into the repository of a reengineering tool suite and start working on the code using the tools we find there. After performing some reengineering tasks we expect our code to work on the target platform B. So, we export everything from the repository and import it into the IDE on platform B, which we use for testing and debugging. As we find errors we can either fix them right there or move everything back into the reengineering suite to use its powerful tools. Again exporting, importing, exporting - ad libitum in an iterative process. My approach is to use OOT to wire the tools mentioned above together. Instead of inventing my own infrastructure I use CORBA and (my prototype of the standardized but not yet commercially available implementation of) the Meta Object Facility as the "missing glue".
This way I also achieve a Smalltalk-like - though distributed - integration of the development and runtime environments of the involved platforms. The setup is as follows. The runtime environments of the source and the target platform run in parallel. In the case of Java and Smalltalk these are Virtual Machines (VMs). Both images contain all base classes of the corresponding platforms, an Object Request Broker (ORB) and some adapter code which links their meta-level architectures to CORBA/MOF. The image on the source platform also contains the code of the application which is to be migrated. The user interacts with this setup through a browser/inspector window which uses CORBA/MOF to access each platform. Let's look at how this setup would be used. First we would select some (e.g. a use case) or all classes which we would like to migrate to the target platform in
the first run. With some tool help we syntactically convert them into classes of the target platform. In a semiautomatic way the system helps us insert proxies for those classes and associated objects into the source environment. The selected classes/objects will be instantiated in the target image. All communication to their original counterparts is redirected via CORBA to the classes/objects on the new platform. At this stage we (manually) make the code work on the target platform by eliminating the problems caused by the architectural mismatch between the two platforms. Some straightforward refactorings could have been done automatically already. When we are done with the first run, we continue with the next set of classes until we have completed the migration. Then all classes/objects will be running in the target environment and CORBA can be disconnected from both VMs. This setup enables the use of advanced techniques for reengineering which are currently only available in forward engineering environments, e.g. refactoring and dynamic observation/analysis. Tasks like slicing and typing of untyped code can - compared to static solutions - benefit from the dynamic and interactive nature of this setup. To achieve all this, many ideas have been drawn from the area of forward engineering tools: reflection, meta-level architecture, the idea of single-source CASE tools, dynamic compilers. The contribution of my work will be a new architecture for reengineering tools and an assessment of the adequacy of the Meta Object Facility to implement such an architecture.
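The proxy mechanism described above - leaving a stand-in on the source platform that forwards every invocation to the already-migrated object - can be illustrated without the CORBA machinery. In this hedged sketch the ORB is replaced by a plain Java interface, and all class and method names (`Account`, `MigratedAccount`, `AccountProxy`) are invented for the example.

```java
// The behaviour both platforms agree on.
interface Account {
    int balance();
}

// The class as reimplemented on the target platform.
class MigratedAccount implements Account {
    private final int amount;
    MigratedAccount(int amount) { this.amount = amount; }
    public int balance() { return amount; }
}

// The proxy left behind on the source platform: it keeps the old
// interface but forwards every call to the migrated object (in the
// real setup via CORBA/MOF instead of a direct Java reference).
class AccountProxy implements Account {
    private final Account remote;
    AccountProxy(Account remote) { this.remote = remote; }
    public int balance() { return remote.balance(); }
}
```

Because clients on the source platform only see the `Account` interface, they keep working unchanged while the implementation moves, one set of classes at a time, to the target image.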
1.3 Patrik Persson: Enforcing Effective Hard Real-Time Constraints in Object-Oriented Control Systems
Many control applications can be characterized as hard real-time systems, where a control task must compute its result within its deadline to guarantee correct behavior and stability of the system. Existing hard real-time scheduling theory requires the WCET (Worst-Case Execution Time) of the control algorithms to be known. In practice, it is very difficult to obtain a tight bound on the WCET due to the various techniques employed by modern processors (such as multi-level caches, pipelines, and speculative execution) to enhance average-case performance. The worst-case performance of these processors is not only considerably harder to predict, it also diverges from the average case. Existing timing analysis tools often require that the user deliberately slow down the processor in order to make the execution times deterministic, e.g., by turning off caches. With respect to programming languages, object-oriented languages are currently not used for these systems, mainly due to constructs which make WCET analysis difficult, in particular dynamic binding, dynamic data structures, and garbage collection. Recursion is another technique which is desirable in object-oriented programs but hard to analyze in general. The goal of this research is to develop WCET analysis techniques which can handle object-oriented languages and make use of modern processors. To achieve this, our idea is to build an interactive program editing tool which
8
E. Ernst, F. Gerhardt, and L. Benedicenti
incrementally computes the WCET based on a combination of the program source and assertions made by the programmer, allowing tighter WCET bounds to be attributed to the program code than by basing the analysis on the code alone. Assertions associated with functions, loops, or blocks may be checked at run time, and exception handling code may be generated in order to, e.g., fall back on simpler algorithms with tighter WCET bounds. Such exception handling can be used to handle unexpected delays and, since the average-case and worst-case execution times may differ substantially, potentially enhance processor utilization. The tool will be built on the APPLAB platform, a system for interactive development of language-based editors [Bja]. This system uses Door Attribute Grammars [Hed], an attribute grammar formalism extended with object-oriented constructs which allows simple specification and efficient incremental evaluation of programs written in object-oriented languages. This incremental approach can be used to interactively provide WCET bounds to the programmer while developing the real-time program. It can also be combined with and compared to dynamic measurements (i.e., code instrumentation). The run-time system will support recently developed algorithms for hard real-time garbage collection [Hen], and the WCET analysis tool will be used to parameterize these algorithms [Nil]. Note that these techniques are in principle independent of the programming language, but for several reasons we see Java as the most promising choice. To validate our techniques, we intend to apply them to robot control applications.
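As an illustration of how programmer assertions can tighten a WCET bound, the following sketch combines per-block cycle costs with an asserted loop bound, and falls back on a simpler algorithm when the assertion is violated at run time. The cycle costs and names are invented for the example; a real tool would derive them from a processor timing model and the APPLAB infrastructure.

```python
# Hypothetical per-block worst-case cycle costs; in practice these
# would come from a processor timing model, not a hand-written table.
BLOCK_COST = {"setup": 40, "loop_body": 25, "cleanup": 10}

def wcet(loop_bound_assertion):
    """WCET of a loop whose iteration count the programmer asserts.

    Without the assertion, the loop bound is unknown and no finite
    WCET can be computed from the code alone.
    """
    return (BLOCK_COST["setup"]
            + loop_bound_assertion * BLOCK_COST["loop_body"]
            + BLOCK_COST["cleanup"])

def run_with_deadline(iterations, asserted_bound, fallback):
    # The assertion is checked at run time; on violation we fall back
    # on a simpler algorithm with a tighter WCET bound.
    if iterations > asserted_bound:
        return fallback()
    return sum(range(iterations))  # stand-in for the control algorithm

print(wcet(8))                                 # 40 + 8*25 + 10 = 250
print(run_with_deadline(100, 64, lambda: -1))  # bound violated -> fallback
```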
References
1. E. Bjarnason. Interactive Tool Support for Domain-Specific Languages. Licentiate Thesis, LU-CS-TR:97-192, Dept. of Computer Science, Lund University, December 1997.
2. Görel Hedin. An Overview of Door Attribute Grammars. International Conference on Compiler Construction (CC'94), pp. 31-51, LNCS 786, Springer-Verlag, 1994.
3. Roger Henriksson. Ph.D. Thesis (in preparation), Dept. of Computer Science, Lund University, September 1998.
4. Klas Nilsson. Industrial Robot Programming. Ph.D. Thesis, Dept. of Automatic Control, Lund University, May 1996.
1.4 Gunther Rackl: Online-Monitoring in Distributed Object-Oriented Client/Server Environments
Monitoring parallel and distributed applications is an important issue both during the development phase and during the subsequent usage of such applications. During application development, tasks like visualization, debugging, and performance tuning are useful for the developer, while during the usage phase, supervising
running applications using interactive management tools, generating alarms in case of failures, or dynamic load management are important issues. Currently, there is no systematic approach for retrieving the information required by all these types of tools from distributed environments. Most tools are proprietary solutions built for specific programming environments and can hardly be adapted to other environments. An approach to solving these problems is the OMIS project [1], which specifies a common on-line monitoring interface for parallel programs. The specification defines a general interface between tools and the monitoring system, such that tools can easily be adapted to other platforms simply by implementing the monitoring system for a new programming environment. But for future information systems, the trend in distributed computing goes towards distributed object-oriented applications that interact in the context of very large and heterogeneous environments, as is the case, e.g., within the OMG's CORBA environment. Therefore, my aim is to develop a monitoring system for distributed object-oriented client/server systems. In order to handle the complexity of large distributed object systems, I propose a multi-layer monitoring system which is able to work on several abstraction levels reflecting the structure of the distributed environment under consideration. With this approach, tools residing on different abstraction levels can be built. For example, a load management tool might only be interested in the distribution of objects within the system, whereas a debugger might work on the process level. Moreover, tools can work on the distributed environment in a hierarchical way; e.g. a visualization tool can allow a user to explore the distributed application starting from the highest abstraction level and moving into the specific aspects he is interested in.
Due to the high complexity of distributed object computing systems, this multi-layered monitoring approach implies a different design and usage methodology for tools. Finally, the monitoring system can be seen as a step towards integrated tool environments which allow building tools both for the development and for the deployment of distributed applications. The advantage of this approach is an enhanced software lifecycle which connects the development and deployment phases of distributed applications, resulting in a more efficient software construction process.
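The multi-layer idea can be illustrated with a toy monitor that projects one event stream onto different abstraction levels. All names, levels, and events here are illustrative assumptions, not part of the proposed system or of OMIS.

```python
# A toy multi-layer monitor: the same event stream is exposed at several
# abstraction levels, so a load-management tool and a debugger can each
# query the level they need.
EVENTS = [
    {"node": "host1", "process": 4711, "object": "Account#3", "op": "deposit"},
    {"node": "host2", "process": 4712, "object": "Account#7", "op": "withdraw"},
    {"node": "host1", "process": 4711, "object": "Account#3", "op": "balance"},
]

def view(level):
    """Project the monitored events onto one abstraction level."""
    return [e[level] for e in EVENTS]

# A load-management tool only cares about object placement per node ...
print(view("node"))      # ['host1', 'host2', 'host1']
# ... while a debugger works at the process level.
print(view("process"))   # [4711, 4712, 4711]
```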
References
1. Thomas Ludwig, Roland Wismüller, Vaidy Sunderam, and Arndt Bode. OMIS - On-Line Monitoring Interface Specification (Version 2.0), volume 9 of Research Report Series, Lehrstuhl für Rechnertechnik und Rechnerorganisation (LRR-TUM), Technische Universität München. Shaker, Aachen, 1997.
1.5 Moritz Schnizler: A Test Bench for Software
Today, software is encountered throughout everyday life. While applications often differ only slightly in their purpose, they are needed in various situations. Thus, from a successful program and its essential architecture,
many variations on different platforms are usually developed. Object-oriented technology supports this evolution of programs into so-called program families through framework technology, which allows for reuse of parts of the functionality and architecture of already developed software. But creating programs for various operating system platforms and diverse application domains also poses new problems. A major problem that comes with program families is testing. For example, frameworks are seldom available on every necessary platform, or if they are, they do not all have the required quality. For these reasons, developers have to choose or develop alternatives, and so a program family soon becomes a very heterogeneous software system. In consequence, a unified testing approach for program families is generally impossible, despite the fact that they are in special need of testing, with all their parts operating in different contexts. Existing testing approaches in the literature do not offer much help in this situation. There are some commercial tools that promise to support the testing of software, but in our experience these tools are very sensitive to change and usually not adequate for testing such heterogeneous program families. Even worse, they normally test only some aspect of a program, for example the user interface and interaction, while providing no way to test other important areas of the program. We believe the basic reason for these problems is the lack of a stringent requirement for testability in most software projects. It is still very common to look at testing as an activity at the end of the implementation phase, receiving only little attention in earlier development phases, especially the design phase.
If you compare this situation to any development project in the classical engineering disciplines, you will recognize that there an important part of the development work is spent on creating the test environment for the developed product. For example, when engineers develop a new car engine, they will devote a large part of their work to the creation of the test bench for this particular engine. This practice allows them to adapt the test bench optimally to every detail they want to examine afterwards. My goal is to transfer this approach to the area of software development. The principal idea is to make "testability" an explicit requirement for the whole development process. In particular, design and implementation should be guided by this requirement, resulting in a product that can be tested more easily, effectively, and efficiently. I am especially interested in the constructive measures that will be mandatory in the design of the original application program to prepare it for this test bench. Another important issue is the development of the "test bench" itself, which can probably be realized as a dedicated framework comprising as much reusable functionality as possible.
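One constructive measure of the kind discussed above can be sketched as follows: an application class designed for testability exposes explicit probe points that a test bench attaches to. The class, its probe mechanism, and the numbers are hypothetical illustrations, not part of the proposed framework.

```python
class Engine:
    """Application class designed for testability: it exposes explicit
    probe points that a test bench can attach to (names are illustrative)."""

    def __init__(self):
        self._probes = []

    def attach_probe(self, probe):
        self._probes.append(probe)

    def step(self, throttle):
        rpm = 800 + 90 * throttle          # stand-in for the real behaviour
        for probe in self._probes:
            probe("rpm", rpm)              # the test bench observes here
        return rpm

# The "test bench" is just an observer that records internal state.
recorded = []
engine = Engine()
engine.attach_probe(lambda name, value: recorded.append((name, value)))
engine.step(10)
print(recorded)   # [('rpm', 1700)]
```

The point of the sketch is that the observation hook is part of the design, not retrofitted after implementation.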
1.6 Christoph Steindl: Intermodular Slicing of Object-Oriented Programs
We describe a program slicing tool for object-oriented programs. Program slicing [Wei84] uses control flow and data flow information to visualise dependences
and assist the programmer in debugging and in program understanding. Object-oriented programs exploit features like dynamic binding which complicate interprocedural alias analysis. Two distinctive features of our Slicer are the support for intermodular slicing and the use of user feedback during the computation of data flow information. To cope with the problem of alias analysis in the presence of function pointers (which is NP-hard [ZhR94]), we decided to first use a conservative approach leading to less precise data flow information, but then use the user's expertise to restrict the effects of dynamic binding at polymorphic call sites to get more precise solutions which should still be safe. Overview: We implemented a program slicing tool for static forward slicing of object-oriented programs written in the programming language Oberon-2 [MWi91] (for a technical description see [Ste98a, Ste98b]). We did not restrict the language in any way, which means that we had to cope with structured types (records and arrays), global variables of any type, objects on the heap, side-effects of function calls, nested procedures, recursion, dynamic binding due to type-bound procedures (methods) and procedure variables (function pointers), and modules. Weiser [Wei84] originally defined a slice with respect to a program point p and a subset of the program variables V to consist of all statements in the program that may affect the values of the variables in V at point p. He presented algorithms which use data flow analysis on control flow graphs to compute intraprocedural and interprocedural slices. The underlying data structures of our Slicer are the abstract syntax tree (AST) and the symbol table constructed by the front end of the Oberon compiler [Cre90]. Additional information (such as control and data dependences) is added to the nodes of this syntax tree during the computation. We define a slice with respect to a node of the AST (the starting node).
The nodes of the AST represent the program at a fine granularity (see Fig. 1), i.e. one statement can consist of many nodes (function calls, operators, variable usages, variable definitions, etc.). The targets and origins of control and data dependences are nodes of the AST, not whole statements. This allows for fine-grained slicing (cf. [Ern94]); therefore we call our slicing method expression-oriented, in contrast to statement-oriented or even procedure-oriented slicing. Our slicing algorithm is based on the two-pass slicing algorithm of Horwitz et al. [HRB90], where slicing is seen as a graph-reachability problem (this algorithm uses summary information at call sites to account for the calling context of procedures), and on the algorithm of Livadas et al. [LivC94, LivJ95] for the computation of transitive dependences of parameters of procedures. In order to slice the program with respect to the starting node, the graph representation of the program is traversed backwards from the starting node along control and data dependence edges. All nodes that can be reached belong to the slice because they potentially affect the starting node. We extended the notion of interprocedural slicing to intermodular slicing. Information that has been computed once is reused when slicing other modules that import previously sliced modules. Furthermore, we support object-oriented features such as inheritance, type extension, polymorphism, and dynamic binding. Since the construction of summary information at call sites is the most costly computation, it is worthwhile to cache this information in a repository and reuse as much information as possible from previous computations. Zhang and Ryder showed that alias analysis in the presence of function pointers is NP-hard in most cases [ZhR94]. This justifies the use of safe approximations, since exact algorithms would be prohibitive for an interactive slicing tool where the maximum response time must be on the order of seconds. Our approach to reaching satisfying results is to use feedback from the user during the computation of data flow information. The user can, for example, restrict the dynamic type of polymorphic variables and thereby disable specific destinations at polymorphic call sites.
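The graph-reachability view of slicing can be sketched as a backward traversal over dependence edges. The dependence graph below is a made-up example with integer node ids, not output of the actual Slicer, and control and data dependences are merged into one edge set for brevity.

```python
from collections import deque

# Dependence edges point from a node to the nodes it depends on;
# the numbers stand for AST nodes of a small example program.
DEPENDS_ON = {
    5: [3, 4],   # node 5 uses values defined at nodes 3 and 4
    4: [2],
    3: [1],
    2: [],
    1: [],
    6: [2],      # node 6 is irrelevant to a slice from node 5
}

def backward_slice(start):
    """Slicing as graph reachability: traverse dependence edges
    backwards from the starting node; every node reached may affect it."""
    in_slice, worklist = {start}, deque([start])
    while worklist:
        node = worklist.popleft()
        for dep in DEPENDS_ON.get(node, []):
            if dep not in in_slice:
                in_slice.add(dep)
                worklist.append(dep)
    return sorted(in_slice)

print(backward_slice(5))   # [1, 2, 3, 4, 5] -- node 6 is not reached
```

In the real tool, interprocedural edges are summarized at call sites, which is what makes caching that summary information across modules worthwhile.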
References
1. Régis Crelier. OP2: A Portable Oberon Compiler. Technical report 125, ETH Zürich, February 1990.
2. Michael D. Ernst. Practical fine-grained static slicing of optimized code. Technical report MSR-TR-94-14, Microsoft Research.
3. Susan Horwitz, Thomas Reps, David Binkley. Interprocedural Slicing Using Dependence Graphs. ACM TOPLAS, vol. 12, no. 1, January 1990.
4. Panos E. Livadas, Theodore Johnson. An Optimal Algorithm for the Construction of the System Dependence Graph. Technical report, Computer and Information Sciences Department, University of Florida, 1995, ftp://ftp.cis.ufl.edu/cis/tech-reports/tr95/tr95-011.ps.Z
5. Hanspeter Mössenböck, Niklaus Wirth. The Programming Language Oberon-2. Structured Programming, vol. 12, no. 4, 1991.
6. Christoph Steindl. Program Slicing (1) - Data Structures and Computation of Control Flow Information. Technical report 11, Institut für Praktische Informatik, JKU Linz, 1998.
7. Christoph Steindl. Program Slicing (2) - Computation of Data Flow Information. Technical report 12, Institut für Praktische Informatik, JKU Linz, 1997.
8. Mark Weiser. Program Slicing. IEEE Trans. Software Engineering, vol. SE-10, no. 4, July 1984.
9. Sean Zhang, Barbara G. Ryder. Complexity of Single Level Function Pointer Aliasing Analysis.
2 Concurrency, Logic, Model Checking
With the advent of object-oriented languages, traditional development approaches do not cope well with the requirements of object-oriented systems. In terms of reuse, composition, and evolution, old methods offer little support. We propose, first, to establish a rigorous semantic foundation for defining and studying the features of object orientation. With such a foundation we are able, for instance, to specify the notion of objects, to integrate the computational and compositional aspects, and to reason about the correctness of reuse. Eventually, we could develop sound and useful methods and tools for increasing the quality of object-oriented development. What makes introducing formal methods for object-oriented systems more difficult is that the object-oriented paradigm emphasizes interaction. Concurrent and distributed objects might not be expressible as sequential algorithms, and interactive behaviours might not be completely describable by traditional mathematical formalisms like first-order logic. To what extent can we extend and combine current formal models or methods to express and verify object interactions? The group members provide some perspectives and insights. Superposition seems to be particularly well suited to the development of object-oriented systems because it allows constructing a larger system by successive property-preserving extensions of existing programs. Applying superposition as a method for refinement, Winnie introduces a weaker refinement notion. This notion, supporting both context-dependent refinement and interface extension, extends the traditional refinement calculus to object-oriented systems for reasoning about component reuse and composition. Using a superposition refinement calculus, Tamas investigates how abstract data types can be added to a relational model of parallelism so as to formally define concurrent objects.
Both Xiaogang and Tamas advocate the need to distinguish between the specification of functionality and of synchronization in a concurrent object system. Xiaogang is studying the behaviour composition of concurrent objects with a process algebra (a variation of the pi-calculus), and Tamas introduces a relational model of concurrent objects. It has been proposed to combine the advantages of formal and informal methods. Claudia is interested in defining a conceptual formal model based on dynamic logic and a set of rules to implement a semi-automatic transformation from UML descriptions to formal specifications. Sebastien aims to show that integrating problem prototyping with symbolic model checking makes it possible to prove some real-time properties. The use of formal approaches in industry is still uncommon, mainly because of the complexity of their mathematical formalisms. We all realize it is quite necessary to have supporting tools for building models, proving system properties, and performing automatic transformations. Among other results, an interactive tool turns out to be interesting. This kind of tool could help in a stepwise system construction by proving the correctness of parts of a system based on known properties of components and their composition.
2.1 Sebastien Gerard: Validation of Real-Time Object-Oriented Applications
Classical real-time development of software systems is reaching its limits in a world where target hardware cannot be known in advance, version evolution becomes increasingly fast, and time to market must be shortened drastically in order to meet economic requirements. Reusability and evolvability become even more important in this particular domain than in other software fields. In such a context, real-time systems development cannot be achieved efficiently without strong methodological support and accompanying tools. In parallel, a consensus has been reached that object-oriented techniques can provide the required flexibility. Up to now, however, the real-time community has long been reluctant to cross the Rubicon, mainly for two reasons:
- the object orientation offering was not mature enough to provide stability in its solutions (methods, tools, ...);
- real-time specificity was generally not well covered by the methods.
In the past years, some solutions have been investigated, resulting in a number of methods and tools such as: HRT-HOOD (Stood [Burns 95]), ObjectGeode [Arthaud 95, Leblanc 96] or SDT, ROOM (ObjecTime [Sellic 94]), Rhapsody [Douglas 97], OCTOPUS (ParadigmePlus [Awad 96]), and ACCORD [Terrier 97b]. All of these methods propose a homogeneous solution for real-time application development. They supply concepts and rules sustained by a method which covers the main steps of any software lifecycle, that is to say, requirements specification, analysis, design, and implementation. However, once these steps have been achieved, the developer's work is not finished: validation of the application still has to be performed. This last point is the one least taken into account by most methods and tools today. Some tools like ObjectGeode, SDT, or Rhapsody allow the developer, not without effort, to simulate these models.
My work aims to show that real-time object techniques can respond to the prototyping problem of real-time applications, from the two following successive points of view. In a first part, the prototype is realized without reference to any implementation technique, allowing developers to concentrate on their trade; that is to say, the main part of their work focuses on the analysis of the system to be developed. Automatic rules map the user model onto executable code from libraries that sustain the ACCORD real-time concepts and that link the application with the real-time operating system used. In a second part, I am interested in the behavioral validation of such applications, at two levels. On the model itself, I would like to apply symbolic model checking methods, which seem able to prove properties such as absence of deadlock and starvation and even, under some conditions, to validate deadlines (reachability techniques). On the prototype, I intend to express rules that make explicit the task model
implicitly specified in the real-time object model, and to apply academic techniques such as RMA to validate the application.
References
1. R. Arthaud. OMT-RT: Extensions of OMT for Better Describing Dynamic Behavior. In proc. TOOLS Europe '95, Versailles, France, February 1995.
2. M. Awad, J. Kuusela, J. Ziegler. Object-Oriented Technology for Real-Time Systems: A Practical Approach Using OMT and Fusion. Prentice Hall, 1996.
3. A. Burns, A. Wellings. HRT-HOOD: A Structured Design Method for Hard Real-Time Ada Systems. Real-Time Safety Critical Systems, vol. 3, Elsevier.
4. B. P. Douglass. Real-Time UML. Object Technology Series, Addison-Wesley, 1998.
5. P. Leblanc, V. Encontre. ObjectGeode: Method Guidelines. VERILOG SA, 1996.
6. B. Sellic et al. Real-Time Object-Oriented Modeling. John Wiley, 1994.
7. F. Terrier et al. Développement multitâche par objet : la solution ACCORD. In proc. Génie Logiciel '97, Paris, December 1997.
2.2 Tamas Kozsik: Parallel Programs Implementing Abstract Data Type Operations - A Case Study
The research detailed below aims at establishing a connection between parallel programming and object-oriented programming. We are interested in defining a calculus in which the different steps of program design can be performed, with special emphasis on the development of parallel programs. We investigate how abstract data types can be added to this framework. A relational model of parallelism is introduced as an extension of a model of programming. The classical model formalizes the notions of state space, problem, sequential program, solution, weakest precondition, specification, programming theorem, type, program transformation, etc. Formulating the main concepts of UNITY in an alternative way, the extended model is capable of presenting parallel programs and describing their behavior. We emphasize the importance of the following three terms: problem, (abstract) program, and solution. A program is given as a relation specified by a set of nondeterministic conditional assignments. The behavior relation of a parallel program can be computed using the notion of weakest precondition. Our approach is functional: problems are also given their own semantic meaning. The relation that constitutes a problem has a structure similar to the behavior relation of a program. We say that a program is a solution to a problem if certain conditions on the behavior relation of the program and the relation corresponding to the problem hold. The program creation process involves the formulation of the problem, its step-by-step refinement (using e.g. problem decomposition / program composition theorems), and the semi-automatic generation of a program. An implicit proof of the correctness of the resulting program is also provided. Similarly to the above-mentioned technique, the definition of a data type can fundamentally be divided into two parts. First the specification of the data type
is given: the abstract description of the objects of the type and the applicable operations (methods). The latter are defined as a set of problems. We refer to this first part as "type specification". In the second part of the definition, an appropriate representation for the set of values of the specified data type is provided, together with the implementation of the operations: these are abstract programs that operate on the representation of the data type. We call the representation and the implementation a "type". We can define whether a type is adequate to a type specification by setting up requirements for the representation and using a definition analogous to the definition of solution. According to the generalized theorem of specification of data types, a simpler sufficient condition can be used to ease the comparison of the specification and implementation of the operations. The tools we use in our model to define data types provide two interesting features of the operations. First, operations on an object can be run in parallel, so internal concurrency is allowed. Second, it is possible to define operations that can run forever. In this case study we demonstrate the methodology outlined above with an example: the data type queue with concurrently executable insert and remove operations. More details can be found at: http://www.elte.hu/~kto/papers/.
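A queue with concurrently executable insert and remove operations can be sketched as follows. This is a Python illustration of the case study's data type, not the formal relational model itself; the blocking remove illustrates an operation that may wait indefinitely.

```python
import threading
from collections import deque

class ConcurrentQueue:
    """A queue whose insert and remove operations may run concurrently;
    a condition variable lets remove() block until an item arrives,
    illustrating an operation that can in principle run forever."""

    def __init__(self):
        self._items = deque()
        self._nonempty = threading.Condition()

    def insert(self, item):
        with self._nonempty:
            self._items.append(item)
            self._nonempty.notify()

    def remove(self):
        with self._nonempty:
            while not self._items:
                self._nonempty.wait()
            return self._items.popleft()

q = ConcurrentQueue()
results = []
consumer = threading.Thread(target=lambda: results.append(q.remove()))
consumer.start()          # may block until an element is inserted
q.insert("job-1")
consumer.join()
print(results)            # ['job-1']
```

In the relational model, insert and remove would each be specified as problems and implemented as abstract programs over the representation; the lock here plays the role that the concurrency constraints play there.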
2.3 Claudia Pons: A Dynamic Logic Model for the Formal Foundation of Object-Oriented Analysis and Design
A theoretical foundation for object-oriented software development must support engineering activities for analysis and design and must include a conceptual model for the information acquired during these activities. The more complete, consistent, and formal the conceptual model is, the more precise and unambiguous engineers can be in their description of analysis and design information. Since errors at this stage have a high and costly impact on the subsequent stages of the software development process, formal verification of analysis and design information is important. Although formal approaches provide a high degree of semantic accuracy, their use in large-scale industrial systems development is still quite uncommon, mainly because their mathematical formalisms are difficult to understand and to communicate to the customer. As a consequence, it has been proposed to combine the advantages of intuitive graphical notations on the one hand and mathematically precise formalisms on the other hand in development tools. This approach has advantages over a purely graphical specification development as well as over a purely mathematical development, because it introduces precision of specification into software development practice while still ensuring acceptance and usability by current developers. Our research objective consists in the definition of a conceptual object-oriented model based on order-sorted dynamic logic with equality, following the ideas presented by Wieringa and Broersen. This conceptual model (a detailed description is presented in [1]) formally represents the information acquired during object-oriented analysis and design. The principal difference between our
model and other object-oriented formal models is that the former allows the representation of interconnections between model and meta-model entities. This is particularly useful for:
- Description of system evolution: the problem of specifying consistency between a set of objects and a set of class definitions that can change
- Formal description of contracts, reuse contracts, and reuse operators
- Description and recognition of design patterns
- Quality assessment mechanisms
References
1. C. Pons, G. Baum, M. Felder. A dynamic logic framework for the formal foundation of object-oriented analysis and design. Technical report, available from http://www.lifia-info.unlp.edu.ar/~cpons.
2.4 Winnie Qiu: A Refinement Approach to Object-Oriented Component Reuse
The paradigm of object-oriented software systems has shifted from algorithms to interaction. Components are designed to be reusable, and computations can be flexibly composed from interactive components. The correctness of component reuse has thus become an important issue in formal software development methods. Refinement techniques provide a method for the correct construction of sequential programs: applying a series of property-preserving transformations to translate abstract specifications into concrete programs. The refinement calculus is a well-known formalization of these stepwise approaches. Based on weakest-precondition semantics, the refinement calculus operates on the relation between the input and output states of a component. For object-oriented systems, we feel this refinement notion has two limitations. First, the refinement specifications describe the functionality of a component but hide its behaviours, so the refinement of a component is internal and independent of the context of the component. Second, the refinement must keep component interface operations unchanged. Thus, if component C′ is a refinement of component C, then C′ is better than C, but it cannot do more than C. This research introduces a formal method for deriving object behavioural composition in a refinement manner. To support reuse in the refinement, we reduce these limitations and introduce context-dependent refinement and interface refinement. We base our refinement notion on the concept of conservative extension, which supports the view that a software development process preserves the meaning of a specification while adding additional requirements.
In our method, a behavioural composition specification consists of three major parts: (a) the dependency, determining the static structure of a composition; (b) the interactions, specifying the overall behaviour of a composition; and (c) the invariant properties, defining the condition of behavioural composition consistency.
A refinement of this specification is allowed to extend both the components and their interactions, subject to the preservation of behavioural composition consistency. As a composition is refined, new interface behaviours are superposed on its components. Therefore, the refinements of individual components are not conducted independently but in the context of their composition. The provable refinement steps will ensure that a composition correctly reuses its components. We are investigating how the refinement calculus could be extended to provide a formal framework for reasoning about reuse. What makes our research both more interesting and more difficult than traditional refinement is that the refinement here involves the global relationships between views of behaviours.
References
1. Winnie Weiqun Qiu: The Refinement of Component-based Systems, position paper, July 1998, http://www.cse.unsw.edu.au/~weiqun
2.5 Xiaogang Zhang: A Compositional Approach to Concurrent Object Systems The concurrency aspects of a concurrent object system can be separated from its functionality and composed as needed. This concept enables us to isolate and solve the problems and difficulties involved with concurrency, increasing the quality and productivity of the development of such systems. In this research I propose a model which takes the following view of concurrent object systems: a concurrent object can be considered as the composition of the functionality of an unconstrained (unsynchronised) object, which allows maximum concurrency, and the concurrency constraints on it, which reduce the freedom of concurrency and avoid exceptional states. Based on such a model, we are studying the theory of behaviour composition for concurrent objects using process algebra (a variation of the pi-calculus), and it will cover
- the semantics of concurrent behaviour composition;
- when and how concurrent behaviours can be composed with and separated from functional behaviours or other concurrent behaviours;
- identifying relevant patterns and properties of concurrent behaviours;
- the possibility of and methods for reasoning about concurrency separately from functionality;
- the methods and underlying principles for avoidance of the inheritance anomaly.
A class-based concurrent object model has been established mathematically with the pi-calculus, in which concurrency constraints are excluded and the process representing the functional behaviour allows maximum concurrency; that is, any method body can be executed in parallel with other methods. Some key issues of
OO technology, such as dynamic creation, (multiple) inheritance and dynamic binding, are included in the pi-calculus encoding. Our research has shown that we can separately define a control process to constrain the parallel execution of methods by imposing constraints on method invocations, and in most cases the concurrency behaviour of the composed object is predictable from the structure of the control process. More precisely, the behaviour of the composed object can be described by a process which has the same structural pattern as the control process. Constraint composition uses the control process as an intermediate layer between message arrival and method body execution, and can be explained from three different points of view: action refinement, message manipulation and name substitution. Some properties of the composition of concurrency constraints, such as associativity and right identity, have been identified, and some basic concurrency behaviours have been modelled. This work will apply the theory and model in the development of a concurrent and distributed data structure library, where the research will also include:
- reasoning about concurrent data structures;
- dynamic configuration of concurrent data structures;
- specification methods (languages) for concurrency behaviours;
- analysis and development of concurrent algorithms.
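The separation of concurrency constraints from functional behaviour can be illustrated outside the pi-calculus. In the Python sketch below (our illustration, not the author's encoding), an unconstrained object allows any interleaving of its method bodies, and a separately defined control layer, sitting between message arrival and method-body execution, composes a mutual-exclusion constraint onto it:

```python
import threading

class UnconstrainedCounter:
    """Functional behaviour only: no synchronisation, maximum concurrency.
    The read-modify-write in increment() is deliberately unprotected."""
    def __init__(self):
        self.value = 0

    def increment(self):
        v = self.value
        self.value = v + 1

class MutexConstraint:
    """A concurrency constraint composed onto an object: the control layer
    intercepts each incoming message and serialises method-body execution."""
    def __init__(self, obj):
        self._obj = obj
        self._lock = threading.Lock()

    def __getattr__(self, name):
        method = getattr(self._obj, name)
        def controlled(*args, **kwargs):
            with self._lock:  # the constraint: at most one body runs at a time
                return method(*args, **kwargs)
        return controlled

# Composition: the same functional object, now with constrained concurrency.
counter = MutexConstraint(UnconstrainedCounter())
```

Other constraints (readers/writers, bounded buffers) would be expressed as different control layers composed onto the same unconstrained functionality, mirroring the claim that the composed object's behaviour follows the structure of the control process.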
3 Frameworks and Applications Object-oriented frameworks are a helpful technology to support the reuse of proven software architectures and implementations. The use of frameworks reduces costs and improves software quality [1]. Frameworks are semi-complete software systems for a certain application domain which can be adapted to a specific application. Frameworks consist of already coded pieces of software which are reused, the so-called frozen spots, and the flexible elements, the hot spots, which allow the user to adjust the framework to the needs of the concrete application [2]. Flexibility. Frameworks and applications mainly aim at modelling domain-specific knowledge. Very often there arises a need to make dynamic modifications to the meta-data in the system, such as adding types or modifying existing types at run-time. Besides, persistence is required for both the data and the meta-data. The management of changing types in the system further requires taking into consideration temporal aspects such as maintaining consistency between the types and the objects. All these requirements bring to the fore the need for flexibility as a key characteristic of the system. Flexibility is largely determined by the extensibility of the various components employed by the application. Compiled components tend to be fixed and inflexible compared to interpreted components, which are extensible and hence more flexible. Compiled systems are, therefore, more application-specific than interpreted systems, which are generic in contrast. Hence, the degree of
flexibility depends upon the extent to which compiled or interpreted components are present in the system. This, in turn, implies that there exists a trade-off, as extensible systems tend not to be application-specific and hence do not properly model the particular domain knowledge. Inheritance and Aggregation. In order to model domain knowledge correctly and attain flexibility at the same time, two fundamental characteristics of object-oriented systems can be exploited. These are:
- Inheritance
- Aggregation
The application of inheritance and aggregation is an orthogonal concern to other aspects of framework architecture. Both compiled and interpreted elements may apply both of these techniques. However, the underlying implementation of a framework can dictate what is possible in those aspects that are open to change. For example, if a framework employs a code interpreter then flexibility and extensibility can be provided by modifying inheritance hierarchies and/or introducing new classes. In contrast, where a framework's source code is fully compiled, we must rely on aggregation, where new objects rely not on new definitions of type but on new combinations of components. Where the essential aspect is flexibility, such as the run-time configuration of components, then
aggregation is likely to be the favoured approach. Where extensibility is more important, such as in database schema evolution, we may find that inheritance is more commonly used. Viljamaa states that "Frameworks refer to collections of concrete classes working together to accomplish a given parameterisable task" [3]. However, the extent to which the framework classes are indeed concrete dictates the means by which the system may be parameterised. A framework based on the extension of abstract classes will be parameterised using different techniques from one that builds aggregations of purely concrete components. Example Frameworks. During the workgroup session various frameworks were introduced whose structural commonalities led us to the formulation of the more general observations presented above. Ashish Singhai presented a framework for dynamically configurable "Middleware Components". Markus Knasmueller's approach aimed at providing "Schema Evolution and Garbage Collection" in the Oberon-D system. Awais Rashid's application framework focussed on "Semi-Autonomous Object Database Evolution" through learning at run-time. David Parsons' "Extensible Schematic Capture" approach was directed towards run-time addition of new components in a mixed-mode electronic simulation environment. Andreas Speck's "Industrial Control Systems" framework addressed issues involved in driving such real-time systems, while Alexandru Telea's "Object-Oriented Computational Steering System" enhanced object-oriented design with dataflow semantics.
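The frozen-spot/hot-spot distinction and the inheritance-versus-aggregation trade-off can be sketched in a few lines. In this illustration (all class and method names are invented), the same variable step is opened once through inheritance, as an abstract method to override, and once through aggregation, as an injected component:

```python
from abc import ABC, abstractmethod

class ReportFramework(ABC):
    """Frozen spot: the fixed processing pipeline, reused as-is."""
    def run(self, data):
        cleaned = self._clean(data)   # frozen spot
        return self.render(cleaned)   # hot spot, supplied by the adapter

    def _clean(self, data):
        return [d for d in data if d is not None]

    @abstractmethod
    def render(self, data):
        """Hot spot opened through inheritance."""

class CsvReport(ReportFramework):
    """Adaptation by subclassing: extend an abstract class."""
    def render(self, data):
        return ",".join(str(d) for d in data)

class PluggableReport:
    """The same hot spot opened through aggregation: the variable part is an
    injected component rather than an overridden method, so new behaviour
    needs a new combination of parts, not a new type definition."""
    def __init__(self, renderer):
        self._renderer = renderer

    def run(self, data):
        cleaned = [d for d in data if d is not None]
        return self._renderer(cleaned)
```

As the discussion above suggests, the inheritance variant suits schema-evolution-style extensibility, while the aggregation variant suits run-time configuration.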
References
1. Fayad, M.E., Schmidt, D.C.: Object-Oriented Application Frameworks. Communications of the ACM, 40(10):32-38, October 1997
2. Pree, W.: Design Patterns for Object-Oriented Software Development. Addison-Wesley, Reading, MA, 1996
3. Viljamaa, P.: The Patterns Business: Impressions from PLoP-94. ACM Software Engineering Notes, 20(1):74-78, January 1995
3.1 Jaime Gomez: Component-based Architectures to Generate Software Components from OO Conceptual Models A basic problem of software development is how to derive executable software components from requirements, and how this process could be systematized. Current object-oriented CASE tools support various graphical notations for modeling an application from different perspectives. However, the level of built-in automation is relatively low as far as producing a final software product is concerned. Nowadays OO methodologies like OMT, OOSE, or Booch are widely used in industrial software production environments. Industry attempts to provide unified notations such as the UML proposal, which was developed to standardize the set of notations used by the best-known existing methods. Even if the attempt is commendable, this approach has the implicit danger of providing
users with an excessive set of models that have overlapping semantics without a methodological approach. Following this approach we have CASE tools such as Rational Rose or Paradigm Plus, which include code generation from the analysis models. However, if we examine this proposed code generation feature in depth, we find that it is not at all clear how to produce a final software product which is functionally equivalent to the system description collected in the conceptual model. This is a common weak point of these approaches. Far from what is required, what we have after completing the conceptual model is nothing more than a template for the declaration of classes where no method is implemented and where no related architectural issues are taken into account. In order to provide an operational solution to this problem, the idea of clearly separating the conceptual model level, centered on what the system is, from the execution model, intended to give an implementation in terms of how the system is to be implemented, constitutes a good starting point. I'm working on a component-based architecture based on a formal object-oriented model which gives the pattern for obtaining software components from the conceptual modeling step. These software components set up the basis for a software prototype that is functionally equivalent to the conceptual model, obtained in an automated and reusable way. The starting point is the OO-Method proposal. OO-Method is an OO methodology that allows us to collect the relevant system properties. The main feature of OO-Method is that developers' efforts are focused on the conceptual modeling step, where analysts capture system requirements. Once we have an appropriate system description, a formal OO specification is automatically obtained. This specification provides a well-structured framework that enables the building of an automatic code generation tool from a component-based perspective.
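The weak point criticised above, code generation that yields only "a template for the declaration of classes where no method is implemented", can be made concrete. The sketch below (the `entity` dictionary format and all names are hypothetical, not OO-Method's) emits exactly such an empty skeleton from a conceptual-model entry:

```python
def generate_class(entity):
    """Naively emit a class skeleton from a conceptual-model entity:
    attribute declarations are derivable, but method bodies are not."""
    lines = [f"class {entity['name']}:"]
    init_args = ", ".join(entity["attributes"])
    lines.append(f"    def __init__(self, {init_args}):")
    for attr in entity["attributes"]:
        lines.append(f"        self.{attr} = {attr}")
    for method in entity["methods"]:
        lines.append(f"    def {method}(self):")
        lines.append("        pass  # no behaviour derivable from the model")
    return "\n".join(lines)
```

Everything a real execution model would need, the method bodies and the architectural decisions, is missing from the output; this is the gap the proposed conceptual-model/execution-model separation is meant to close.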
3.2 Markus Knasmueller: Oberon-D: Adding Database Functionality to an Object-Oriented Development Environment
While object-orientation has become a standard technique in modern software engineering, most object-oriented systems lack persistence of objects. This is rather surprising because many objects (e.g. the objects in a graphical editor) have a persistent character. Nevertheless, most systems require the programmer to implement load and store operations for the objects. In this Ph.D. work we demonstrate the seamless integration of database functionality into an object-oriented development environment, in which the survival of objects comes for free. Persistence is obtained by a persistent heap on disk. Persistent objects are on this heap, while transient objects are in transient memory. Transient and persistent objects can access each other mutually. Accessing a persistent object leads to loading the object into the transient heap. If it is no longer accessed from transient objects, it will be written back to the persistent heap. A transient object becomes persistent as soon as it can be reached from a persistent root. Every object may become a persistent root if it is registered with the function Persistent.SetRoot(obj, key), where obj is (a pointer to) an object and
key is a user-defined unique alphanumerical key. If not defined otherwise, all objects directly or indirectly referenced by a persistent root are automatically persistent as well. Persistent objects which are not referenced by other persistent objects are reclaimed by a stop-and-copy garbage collector. This algorithm uses two heaps (files) and copies all accessible objects from the full heap fromHeap to the empty heap toHeap. The idea of this work is to offer the impression of an indefinitely large dynamic store on which all objects live. The programmer does not have to distinguish between 'internal' and 'external' objects. All objects can be referenced and sent messages as if they were in main memory. The underlying language does not have to be extended. Other database features, such as schema evolution and recovery, are embedded in this persistent environment. Schema evolution, for example, is done during the persistent garbage collection run. In this phase it is checked whether the type definition of any object has been modified since the last garbage collection run. If this is the case, the object is read from the fromHeap using the old type definition and written to the toHeap using the new type definition. Furthermore, an Oberon binding for ODL/OQL is implemented as part of this work. ODL is a specification language for defining interfaces to object types that conform to the Object Model of the Object Database Management Group. OQL is an object query language supporting this model. A proof-of-concept implementation, named Oberon-D, has been done in the Oberon system, which offers powerful mechanisms for extending software in an object-oriented way. However, any other object-oriented operating system which offers garbage collection and exception handling could be used instead of Oberon.
The work includes some novel aspects, e.g., the implementation of user-defined mappers, the integration of garbage collection and schema evolution, and the translation of OQL code into Oberon code.
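The stop-and-copy collection between fromHeap and toHeap can be sketched as follows. This is an illustration, not the Oberon-D implementation: heaps are dictionaries, object identifiers stand in for disk addresses, and a forwarding table ensures that shared and cyclic structures are copied exactly once.

```python
def stop_and_copy(roots, from_heap):
    """Copy every object reachable from a persistent root into a fresh
    heap; unreachable persistent objects are reclaimed simply by not
    being copied. Objects are dicts with "data" and "refs" fields."""
    to_heap = {}
    forwarding = {}  # old id -> new id, so each object is copied once

    def copy(oid):
        if oid in forwarding:
            return forwarding[oid]
        obj = from_heap[oid]
        new_id = len(to_heap)
        forwarding[oid] = new_id              # install before recursing:
        to_heap[new_id] = {"data": obj["data"], "refs": []}
        to_heap[new_id]["refs"] = [copy(r) for r in obj["refs"]]  # cycles are safe
        return new_id

    new_roots = {key: copy(oid) for key, oid in roots.items()}
    return new_roots, to_heap
```

Schema evolution would hook into the copy step: an object read from fromHeap under its old type definition is written to toHeap under the new one.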
3.3 David Parsons: Run-time Reusability in Object-Oriented Schematic Capture
This research is based on the development of a graphical schematic capture interface for VHDL-AMS, the hardware description language for mixed-mode (analogue and digital) circuits. The role of the system is to allow circuit schematics to be drawn interactively and to generate VHDL-AMS code from them. An essential feature of the system is that new types of electronic component can be added at run time using visual tools and behavioural descriptions. In addition, successful code generation for mixed-mode circuits involves the selection of the correct component models from a number of possibilities, depending on the context of analogue and digital signals in the circuit. The system provides for adding new types of component by applying a reflective architecture. By providing the system with a meta level that is used to configure the objects in the system, new types of component can be designed by adding data to this meta level. An essential aspect of this architecture is that
domain objects are all members of a single class, with behaviours supported by other objects encapsulated within the implementation. Thus the concept of a new type is separated from that of a new class, providing extensibility without having to rebuild code. The invocation of correct component models in mixed-mode code generation is managed by analysis of the types of connections that components have. A single visual representation of a digital component may have a number of models associated with it, each of which relates to a particular set of input and output types. By iterating through its connections and finding out what kind of object each one connects to, the component is able to identify and select the model that matches its current state. The system is able to generate validated VHDL-AMS code from mixed-mode circuits and to support the run-time creation of new component types. Components that have been interactively created in this way can be successfully integrated into these circuits, and appropriate models are automatically selected where digital components have analogue connections.
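The single-class, meta-configured architecture and the connection-driven model selection can be sketched as follows. All names and the signature-to-model mapping below are invented for illustration; the point is only that a new component *type* is a data operation at the meta level, not a new class:

```python
class ComponentType:
    """Meta-level description of a component kind: creating a new kind
    of component means creating one of these, not writing a new class."""
    def __init__(self, name, signal, models):
        self.name = name
        self.signal = signal    # "digital" or "analogue"
        self.models = models    # connection signature -> model name

class Component:
    """Every domain object is an instance of this single class; its
    behaviour is configured entirely by its meta-level type."""
    def __init__(self, ctype):
        self.ctype = ctype
        self.connections = []

    def connect(self, other):
        self.connections.append(other)

    def select_model(self):
        # Iterate over the connections, see what kind of object each one
        # attaches to, and pick the model matching that signature.
        signature = tuple(sorted(c.ctype.signal for c in self.connections))
        return self.ctype.models[signature]
```

A two-input gate connected to one analogue and one digital source would thus select its mixed-mode model automatically, mirroring the model selection described above.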
3.4 Awais Rashid: SADES - a Semi-Autonomous Database Evolution System Relational databases have been successful at supporting data-intensive record-processing applications. However, the level of complexity of such applications being relatively low, relational databases lack the necessary abstraction to act as a repository for integrated and advanced applications such as computer-aided design (CAD), computer-aided manufacturing (CAM), computer-aided software engineering (CASE) and office automation systems. Object databases are better suited to supporting such complex applications involving highly interrelated data, which cannot easily be supported by the built-in data types of relational databases. Like any other database application, object database applications often require modifications to their schema or meta-data. Several applications, however, require that any such change be dynamic. In some applications there also arises the need to keep track of a change in case it needs to be reverted. Several applications, especially those involving CAD, CAM and CASE, also require the creation of progressively enhanced versions of an object from its existing multiple versions. Therefore, in an object-oriented database management system, where there is a need for traditional database functionality such as persistence, transaction management, recovery and querying facilities, there also arises the requirement for advanced features such as the ability to evolve, through various versions, both the objects and the class definitions. SADES is a Semi-Autonomous Database Evolution System which aims at employing a composite active and passive knowledge-based approach to dynamically evolve the conceptual structure of an object-oriented database. For this purpose the use of the following three basic machine learning techniques has been suggested:
1. Learning from instruction
2. Learning from exception
3. Learning from observation
SADES aims at providing support for:
1. Class hierarchy evolution
2. Class versioning
3. Object versioning
4. Knowledge-base/rule-base evolution
The system is being built using a layered approach on top of one of the commercially available object-oriented database management systems. Currently, ObjectStore, O2, Versant, POET and Jasmine are being evaluated for this purpose.
3.5 Ashish Singhai: Framework Design for Optimization (as applied to object-oriented middleware)
This thesis explores the design space of composable (or component-based) systems. Specifically, we address the following issues:
- What is the structure of a composable system?
- Do independently developed components adversely affect performance?
- Do well-performing systems have to be monolithic?
Framework optimizations capitalize on object dependencies, while framework
flexibility and composability demand object independence. We show how to balance these conflicting needs using new design techniques. These techniques embody the observation that common optimizations can be realized by reifying and tuning object interactions. We describe the use of our techniques with examples from diverse application domains. We also develop a catalog of patterns that covers common optimizations. We have also designed an architecture, called Quarterware, for developing communications middleware. The Quarterware architecture uses the design patterns referred to above to implement flexible middleware that can be specialized to improve functionality and performance. The Quarterware architecture depends upon two observations about communications middleware: first, most middleware are similar; the differences are in their interfaces and optimizations. Second, neither a fixed set of abstractions nor a fixed implementation of a set of abstractions is likely to be sufficient and well-performing for all applications. Quarterware abstracts basic middleware functionality and admits application-specific specializations and extensions. Its flexibility is demonstrated by deriving implementations for core facilities of CORBA, RMI, and MPI from it. The performance results show that the derived implementations equal or exceed the performance of the corresponding native versions. The Quarterware architecture demonstrates design principles for dynamic extensibility, real-time method invocation, flexible concurrency models, design for debugging and visualization, performance optimization and interoperability.
3.6 Andreas Speck: Object-Oriented Control Systems on Standard Hardware
The subject of my work is the object-oriented design of a universal control system based on standard hardware such as workstations or PCs. This system supports the control of industrial devices such as robot arms, Cartesian systems and I/O units. The new architecture dissolves the boundaries between the traditional control components such as robot controls (RC), numeric controls (NC) and programmable logic controllers (PLC). This concept of a universal control system makes it possible to control an entire industrial production cell, consisting of different device types, with a single piece of hardware. Moreover, the proposed design can in general be used as a pattern for the development of object-oriented control systems. Following the introduced architecture, various control systems as well as an object-oriented framework have been developed on different platforms (SPARC workstations, Windows NT PCs and industrial PCs).
3.7 Alexandru Telea: Design of an Object-Oriented Scientific Simulation and Visualization System
Better insight into complex physical processes requires the integration of scientific visualization and numerical simulation in a single interactive framework. Interactivity, generally seen as the ability of the user to interrogate and modify the universe she observes, is an essential requirement of simulation and visualization tools. Another powerful mechanism used by visualization systems is the dataflow concept, which allows a simulation or visualization process to be described as a network of computational modules exchanging data to perform the specified task. On the other hand, object-oriented design is the favourite technique for building application class libraries, whether for visualization and direct manipulation or for the simulation phase, such as finite-element analysis libraries. Integration of such libraries in an object-oriented general-purpose simulation environment should greatly simplify the task of the application library writer, the simulation designer and the end-user, due to the inherent high reusability of object-oriented code. However, most of the existing simulation and visualization environments are not built on an object-oriented foundation, at least not up to the level where the integration of application-specific modules would be a simple, seamless task. In these cases, the application integrator often has to adapt object-oriented code to fit the model of a given simulation environment. End users will hence either not be able to benefit from the full flexibility of object-orientation, as they will often interact with the adapted, possibly non-OO versions, or will have to learn a 'second view' on the components offered by the environment, which will often differ noticeably from the original OO components' structures. We have addressed these problems by the design and implementation of an object-oriented computational steering system for scientific simulations. The proposed system offers a general-purpose environment for designing and steering
applications consisting of sets of cooperating objects, and in particular is being used for scientific visualization and (real-time) interaction with running simulations. The system is immediately extendable with application-specific OO libraries written in C++, requiring almost no changes to these libraries to fit the system. Moreover, the system combines the data/event flow modelling paradigm familiar to visualization and simulation scientists with object-oriented application code in an easy, intuitive manner. This powerful combination, which we call 'object-oriented dataflow', gives considerable freedom in the interactive design of applications from OO components coming from different libraries and/or application domains, thus promoting component reuse in a very simple and intuitive manner. Instances of our system are ultimately full-fledged simulation/visualization environments extending the visual programming concepts of similar tools like AVS/Express, IRIS Explorer or Performer with the object-oriented modelling concepts present in application software like Oorange or vtk. The proposed system treats the parameter input, computation and result visualization phases uniformly, allowing the end-user to change the parameters of a simulation, fine-tune the numerical engines involved in the solving phase, and visualize the results interactively via a comprehensive graphical user interface (GUI). The key to this flexibility is an object-oriented dataflow engine based on a C++ interpreter/compiler combination. While similar systems require the user to interface her application-specific code with a system API (e.g. AVS), wrap it in object-oriented structures (e.g. Oorange) or statically extend a class hierarchy by derivation/composition (e.g. vtk, Open Inventor), our system can directly and dynamically load application libraries written independently by the user in C++.
The programmer has the full set of object-oriented features offered by C++ to write her simulation classes, and there are practically no constraints imposed by the simulation environment on the structure of the application code. The system automatically provides a GUI for each C++ class supplied, and a graphics representation allowing its manipulation in a GUI-based dataflow network editor. The C++ interpreter/compiler combination therefore offers a single-language OO solution to development, run-time modelling, scripting, and interactivity.
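The 'object-oriented dataflow' idea, computational modules as objects wired into a network that pulls data from simulation to visualization, can be sketched in a few lines. This is a toy illustration in Python (the real system is built on a C++ interpreter/compiler combination, and all names here are invented):

```python
class Module:
    """A node in the dataflow network: a computation plus its upstream inputs."""
    def __init__(self, fn, name=""):
        self.fn = fn
        self.name = name
        self.inputs = []

    def connect(self, *upstream):
        """Wire upstream modules into this one; returns self for chaining."""
        self.inputs.extend(upstream)
        return self

    def evaluate(self):
        # Demand-driven: pull data through the network, then compute.
        return self.fn(*[m.evaluate() for m in self.inputs])

# A three-stage pipeline: simulation output -> filter -> visualization query.
source = Module(lambda: [1.0, 4.0, 9.0], "simulation output")
filt = Module(lambda d: [x ** 0.5 for x in d], "sqrt filter").connect(source)
probe = Module(lambda d: max(d), "peak probe").connect(filt)
```

Steering corresponds to replacing a module's `fn` or rewiring `inputs` while the network is live; the uniform treatment of parameter input, computation and visualization follows from every stage being the same kind of object.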
4 Languages and Types The presentations in this group gave rise to very lively and interesting conversation. The discussion revolved around two main axes: issues in developing richer type systems, and the development of component-oriented languages. There was consensus on the importance of these two general areas, and several independent aspects were identified. Richer Type Systems: The need for improved type genericity (e.g. type-safe template parameterization) was clear from several presentations. Yannis works with layered object-oriented designs, expressed using multi-class entities called components. An implementation in a C++ context uses templatized nested classes, and this illustrates problems with the (lack of) template type analysis. Bernd works with the typing of another genericity mechanism, generic functions and procedures, in the context of extensible sets of (sets of) types, which can be constructed by set union, intersection, and complement and related by the subset and element relations. Finally, Erik's work expands on the expressive power of virtual classes, yet another mechanism supporting genericity. This line of research has clear connections to the development of constraints for types. They appear both in the type specification mechanism of Bernd's work, and in Erik's work with relative type analysis. A different issue arises with alternative techniques for relating objects, such as David's work on type checking with respect to environmental acquisition, as opposed to inheritance. Gabriel's work with meta-object protocols (MOPs) actually describes the implementation of a type system enhancement (adding support for 1-bit boolean slots), thus exploring the expressive power and flexibility of different MOPs. Component-Oriented Languages: Research in generalizing the object concept seems particularly promising. Several interesting ideas were presented, e.g., Pentti's verb inheritance and the multiple-class components of Yannis and Erik.
Formalizing the underlying concepts of components is important, and Rosziati's work on the formalization of the Component Object Model (COM) works toward this goal, using the special-purpose language COMEL as a vehicle. In contrast to Rosziati's formal specification at the language level, Il-Hyung works on support for less formal specification at the level of components, possibly written in different specification languages. In a family of classes, i.e., in a component, each interface of a class plays a specific role, and the behavior of such systems can be specified in terms of roles and interactions (or protocols). Once the class family specification is developed and known to have certain (correctness) properties, individual components can be changed, and only the relation between the changed component and its role needs re-testing, not the entire class family. The Connections: The two main areas have several strong connections between them. Clearly, component specification and development techniques are in need of type system support. Types for families of classes was identified as an interesting issue in this direction. The very concept of families of classes blurs the distinction between classes and components.
4.1 Il-Hyung Cho: Testing Components Using Protocols Interoperability is the ability of two or more software modules to communicate and cooperate with each other. The interoperability problem arises when software developers want to reuse legacy software systems, or when software systems are componentized and these components need to be connected in order to work together. The problem occurs in both heterogeneous (multi-lingual) and homogeneous environments. Software modules can be functions, objects, or components that consist of multiple functions or objects. We focus on the specification of software components in object-oriented systems. Traditionally, interoperability checking is performed by signature matching between an invoking function and the function being invoked. Function-level signature matching techniques are not sufficient for software components in object-oriented systems, since an object encapsulates a set of data and functions, and a component may contain more than one object. In this work we describe the interoperability problems of software modules in the object-oriented paradigm and propose an interoperable component model that enhances software reusability and maintainability. Software modules (or components) serve as the building blocks of a software system. Component-level software development has received much attention in recent years due to its promise of plug-and-playable software. The current interest in architectural software design has a great effect on componentizing software modules. A component plays the role of a reusable software unit, and can interoperate with other software modules if their interfaces (and protocols) match. We use the terms software module and component as synonyms for some cohesive subset of a software system. A component may operate across language boundaries, operating system boundaries or network boundaries. Each component defines an interface that provides access to its services from outside the component.
The interface is comprised of methods and describes the responsibilities and behavior of the component. However, the typical interface of a component does not provide sufficient information to judge whether two components can successfully work together. A protocol is the sequence of messages involved in the interactions of two components (or objects). If the protocols are not compatible, the components cannot interoperate. (We will use interoperation, interaction and collaboration as synonyms.) The use of protocols to describe object communication fosters structured, safer and potentially verifiable information exchange between objects. The protocol plays an important role as a partial interface specification (a component may participate in multiple protocols). An object's interface alone involves only one object, but a protocol defines the interaction of two objects. In this work, we propose a technique for determining the compatibility of two software components, where compatibility is defined as the ability to interoperate.
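The difference between signature matching and protocol matching can be shown with a minimal sketch. Representing a protocol as an ordered list of (direction, message) pairs is our simplification, not the author's model; the point is only that compatibility is a property of message *sequences*, which an interface signature alone cannot express:

```python
def compatible(proto_a, proto_b):
    """Two protocols interoperate if they have the same length and, at each
    step, one side sends exactly the message the other expects to receive."""
    if len(proto_a) != len(proto_b):
        return False
    return all(
        msg_a == msg_b and {dir_a, dir_b} == {"send", "recv"}
        for (dir_a, msg_a), (dir_b, msg_b) in zip(proto_a, proto_b)
    )

client = [("send", "open"), ("recv", "ack"), ("send", "data")]
server = [("recv", "open"), ("send", "ack"), ("recv", "data")]
```

Here `client` and `server` would pass a per-message signature check either way, but only the mirrored sequence above actually interoperates; a server that never acknowledges fails the protocol check even though every individual message still "type-checks".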
E. Ernst, F. Gerhardt, and L. Benedicenti
4.2 Erik Ernst: Virtual Types, Propagating and Dynamic Inheritance, and Coarse Grained Structural Equivalence

This PhD work is about the design, implementation, and formalization of a modern, advanced OOPL called gbeta. Rooted in the Scandinavian approach to object orientation, where program executions are viewed as concept-based models of a perspective on the real world, and starting out from BETA, a block-structured, strongly and statically typed object-oriented language with very powerful abstraction mechanisms including virtual types, gbeta aims to add expressiveness and flexibility without compromising static type safety. To obtain this end, the fundamentals of BETA were reconsidered and generalized, both at the level of basic concepts and at the level of implementation. As a result, gbeta generalizes the single inheritance of BETA into a propagating class combination mechanism, a kind of multiple inheritance which supports combination of classes, merging of methods, and propagation of class combination to dependent classes. Moreover, classes can be constructed at run-time, and existing objects can be specialized, enriching their structure to become instances of more derived classes.

The type system is also generalized. BETA and gbeta use name equivalence at the level of individual declarations; this helps avoid accidental confusion of different declarations with the same name. But at the level of blocks of attribute declarations, gbeta uses structural equivalence (e.g., in C++, a block would be "{...}", as in "class C: public B {...};"). This means that many classes that would otherwise be unrelated are indeed related by inferred inheritance. Staying with C++, an example would be:

  class A {...};
  class B {...};
  class AB: public A, public B {...};
  class concreteA: public A {...};
  class concreteB: public B {...};
  class concreteAB: public concreteA, public concreteB {...};
Here, concreteAB is not a subclass of AB even though "it has all the stuff." In the corresponding gbeta example, this subclass relation does hold. These inferred inheritance relationships enhance the expressive power and flexibility of the language in important ways, even though the difference may at first seem subtle.

gbeta has been implemented, and the implementation is available for exploration or further development (you're welcome!) at http://www.daimi.aau.dk/~eernst/gbeta. The implementation emphasizes correct type analysis and run-time semantics, and largely disregards performance. The semantics of full gbeta probably does imply worse performance than standard BETA, but how much worse it has to be is still a topic of future research. Another ongoing project is the description of the formal semantics of the language, using the formalism Action Semantics (see http://www.daimi.aau.dk/BRICS/FormalMethods/AS/index.html), and proving various soundness properties of the static analysis.
The 8th Workshop for PhD Students in Object-Oriented Systems
4.3 Bernd Holzmuller: On Polymorphic Type Systems for Imperative Programming Languages: An Approach using Sets of Types and Subprograms

This thesis is concerned with the incorporation of polymorphic type systems into imperative programming languages. It first provides a comprehensive classification of polymorphism that is based on the way polymorphic sets are defined and ranks each of the possible approaches. Based on that classification, a polymorphic type system is proposed that is based entirely on extensionally defined polymorphic sets. Such sets may be defined locally and thus are of a locally known, fixed size; or they may be declared globally and then defined by explicit contributions given in different parts of the system. The latter form thus supports the open-world assumption. Polymorphic sets may be built from other sets using the usual set operators union, intersection, and complement. These operators give a powerful means to define abstractions from other abstractions, making it possible to express type classifications with exceptional elements. The empty set is the least element in the resulting lattice and its complement the greatest. Legality of assignments and subprogram calls is based on the subset relation between the type sets associated with the corresponding expressions. This legality relation between expressions is called "conformance". Sets of subprograms provide for subprogram polymorphism, covering both the concepts of overloading and dynamic dispatching. When a subprogram set is applied to a number of arguments, the subprogram most specifically applicable to those arguments is selected. Specificity between subprograms is defined by pointwise conformance of parameter and result types. In case the argument expressions are of a polymorphic type, the selection in general has to take place at run-time. Because all arguments are considered for selection, this is a form of multi-dispatching in the spirit of CLOS.
Static checks are performed, however, to guarantee the absence of non-determinism (more than one most specifically applicable subprogram) and type errors (no applicable subprogram). The context of the application is also considered to constrain possible matching subprograms, comparable to overloading resolution in Ada. The thesis investigates the interactions of polymorphic sets with second-order type concepts, known as 'genericity' or 'type parametricity'. It is argued that an implicit style of type matching for calls to generic subprograms yields more freedom than an explicit style, and the necessary rules for this implicit binding of type parameters are provided, based on the union and intersection of participating type sets. These concepts are then used to define an experimental language called 'Hoopla'. Considerations for the implementation of Hoopla are given, providing some sufficient conditions under which global checks for subprogram sets can be avoided while keeping separate compilation.
4.4 Rosziati Ibrahim: Formal Methods for Component-Based Systems

With the availability of Microsoft's OLE (Object Linking and Embedding) and its support for document-centric computing, and the more recent arrival of Microsoft's ActiveX and Sun's JavaBeans, the understanding of component-based systems has become critically important. This thesis therefore focuses on component-based systems and uses a formal approach to specifying key notions of such systems. The study of formal methods for object-oriented languages is important and very popular. The thesis therefore concentrates on two main facets: for one, it looks at the development of a formal model for component-based systems; for another, it looks at the development of a formal type system specifically geared towards one of the industry's main component models.

The first stream looks at the development of a formal model for a component-based system based on comparative studies of three important component object models, namely Microsoft's Component Object Model (COM), Sun's JavaBeans, and Oberon's BlackBox. A formal model for a component-based system is developed based on the generalization of the concepts of object, class, component, and interface found in the three component object models. The second stream of the thesis looks in detail into one of the component object models: Microsoft's Component Object Model (COM). A formal model for COM is developed by introducing a type system and operational semantics for a model language used for component-based specification. COM has been chosen because it is an approach that is already in very wide practical use but is still lacking a formally precise understanding. Since COM itself is language independent, a special language, COMEL (Component Object Model Extended Language, pronounced cho-mell), is used as an exemplary language to achieve this goal.
The COMEL language demonstrates the role of component types, object composition, object instantiation, interface lookup, and method call in COM. The COMEL language has a formally defined type system and operational semantics with established type soundness. Further details can be found at http://www.fit.qut.edu.au/~ibrahim.
4.5 David H. Lorenz: Compilation of Source Code into Object-Oriented Patterns
Stated in terms of the vocabulary of design patterns, the Interpreter and Visitor patterns are adapted for source code handling. The Interpreter captures the grammatical structure of programs. The Visitor processes grammatically correct programs. In practice, however, these patterns are not intended for full-scale programming languages, because of the complexity of the grammar of most languages and the lack of support in the programming environment for propagating inherited grammar-attributes. In this work we present new object-oriented techniques for compiling the source code of a program into a strongly-typed representation, for the use of
software engineering tool builders. We introduce the novel concept of pattern tiling, which describes the process of assembling patterns from other patterns. The manner in which the patterns are combined may result in different pattern tessellations. Tiling lays the ground for building adaptive client tools. Such tools adapt more easily to changes in the structure of the compiled data they need to manipulate. We propose a new inheritance-like abstraction mechanism named environmental acquisition, by which an object can acquire, via a static type mechanism, features from the classes of objects in its environment. By examining the declarations of classes, it is possible to determine which kinds of classes may contain a component and which components must be contained in a given kind of composite. These relationships are the basis for supporting environmentally acquired grammar-attributes, such as symbol tables. As an application example, we implemented an object-oriented parser generator and applied it to the Eiffel programming language. We show how the reflective architecture resulting from tiling is used to generate adaptive tools.
4.6 Gabriel Pavillet: Integration of Object-based Knowledge Representation in a Reflexive Object-oriented Language
The object approach is divided into several partitioned currents; among them, Object-Oriented Programming Languages and Knowledge Representation by Objects stand out. The first introduced powerful concepts like reflexivity [Coi87b] and various MetaObject Protocols (MOPs) [KdRB91], [GC87], [Pae93], [KG89]. Meanwhile, the second, which inherits from frame-based languages [Duc91], [Rat93] and also from Description Logics (or Terminological Logics [Kar93]), privileged language expressivity and/or formal semantics.

The aim of my thesis is to extend the functionality of an existing object-oriented language in order to improve its expressivity and its effectiveness. To do that, we plan to add a MetaObject Protocol (MOP) to a language, and then to design and realize an object-based Knowledge Representation System providing some functionality of classification and filtering based on its own semantics. A MetaObject Protocol (MOP) in an object-oriented programming language is an object-design protocol (or instance-design protocol) that is easily modifiable and extensible. A MOP makes it easier to extend a language by adding new functionality, thanks to the management of the several MetaObjects. MetaObjects are classes whose instances are fundamental objects which define the language (e.g., a metaclass is a metaobject whose instances are themselves classes). These classes (MetaObjects) determine what properties their instances can have, and consequently define the various behaviors of the language [KdRB91]. Unfortunately, there are few object-oriented languages owning a MOP: in order for a language to provide the powerful tools of evolution and extensibility,
which a MOP provides, it needs to be very flexible (a quality found only in dynamic languages such as LISP [Ste90], [KG89] and Smalltalk), and it also needs to be reflexive (modifications to the MOP can induce new functionality in the language, and the MOP requires a reflexive language to be efficient). The interest of a MOP within my framework is the ease it provides in extending the characteristics of an existing object-oriented language in order to improve its expressivity [KdRB91], and also because this extension tool can facilitate the design and realization of an object-based knowledge representation system with some mechanisms of classification and filtering (as in a Description Logic).
References
[Coi87b] P. Cointe. Towards the Design of a CLOS Metaobject Kernel: ObjVlisp as a First Layer. In Proceedings of the First International Workshop on Lisp Evolution and Standardization (IWOLES), Paris, 1987.
[Duc91] R. Ducournau. Y3. YAFOOL, le langage objets. Sema Group, 1991.
[GC87] N. Graubé and P. Cointe. Une Introduction à CLOS et ses Métaobjets. In Proceedings of the Greco/Groplan Workshop on Language and Algorithms, Rouen, 1987.
[Kar93] P.D. Karp. The Design Space of Frame Knowledge Representation Systems. Technical Note 520, SRI International, Menlo Park, 1993.
[KdRB91] G. Kiczales, J. des Rivières, and D.G. Bobrow. The Art of the Metaobject Protocol. MIT Press, 1991.
[KG89] S.E. Keene and D. Gerson. Object-Oriented Programming in Common LISP: A Programmer's Guide to CLOS. Addison-Wesley, Reading, MA, 1989.
[Pae93] A. Paepcke. User-Level Language Crafting: Introducing the CLOS Metaobject Protocol. In A. Paepcke, editor, Object-Oriented Programming: The CLOS Perspective, pages 65-99. MIT Press, 1993.
[Rat93] C. Rathke. Object-Oriented Programming and Frame-Based Knowledge Representation. In Proceedings of the 5th IEEE International Conference on Tools with Artificial Intelligence, Boston, pages 95-98, 1993.
[Ste90] G.L. Steele. Common Lisp: The Language. Digital Press, second edition, 1990.
4.7 Yannis Smaragdakis: Implementing Layered Object-Oriented Designs

My research concerns software components that encapsulate functionality for multiple classes. Such components can be composed together to yield classes in a layered fashion. The complexity of software has driven both researchers and practitioners toward design methodologies that decompose design problems into intellectually manageable pieces and that assemble partial products into complete software artifacts. The principle of separating logically distinct (and largely independent) facets of an application is behind many good software design practices. A key objective in designing reusable software modules is to encapsulate within each module a single (and largely orthogonal) aspect of application design. Many design methods in the object-oriented world build on this principle of design
modularity (e.g., design patterns and collaboration-based designs). The central issue is to provide implementation (i.e., programming language) support for expressing modular designs concisely. My work addresses this problem in the context of collaboration-based (or role-based) designs. Such designs decompose an object-oriented application into a set of classes and a set of collaborations. Each application class encapsulates several roles, where each role embodies a separate aspect of the class's behavior. A cooperating suite of roles is called a collaboration. Collaborations express distinct (and largely independent) aspects of an application. This property makes collaborations an interesting way to express software designs in a modular way.

While collaboration-based designs cleanly capture different aspects of application behavior, their implementations often do not preserve this modularity. Application frameworks are a standard implementation technique. As shown by VanHilst and Notkin, frameworks not only fail to preserve the design structure but also may result in inefficient implementations, requiring excessive use of dynamic binding. VanHilst and Notkin proposed an alternative technique using mixin classes in C++. Their approach mapped design-level entities (roles) directly into implementation components (mixin classes). It suffered, however, from highly complex parameterizations in the presence of multiple classes, and from an inability to contain intra-collaboration design changes. This caused them to question its scalability and to seek a way to explicitly capture collaborations as distinct implementation entities. I explore an alternative way to implement multiple-class components. My work showed how to remove the difficulties of the VanHilst and Notkin method by scaling the concept of a mixin to multiple classes. These scaled entities are called mixin layers.
A mixin layer can be viewed as a mixin class encapsulating other mixins, with the restriction that the parameter (superclass) of an outer mixin must determine all parameters of inner mixins. Employing mixin layers yields significantly simpler code and shorter compositions than in the VanHilst and Notkin model. The primary means of expressing mixin layers is C++ templatized nested classes, but the same ideas are applicable to CLOS or Java with mixins. In general, I study mixin layers from a programming language standpoint. Some of the issues involved have to do with interface (constraint language) support for multiple-class components, verifying the consistency of a composition of layers, and handling the propagation of type information from a subclass to a superclass. For more information, see: Yannis Smaragdakis and Don Batory, "Implementing Layered Designs with Mixin Layers", ECOOP '98.
4.8 Pentti Virtanen: An Evaluation of the Benefits of Object Oriented Methods in Software Development Processes

The study is about measuring the software development process. The process is supposed to use object-oriented methods. Object orientation is a way to produce components which are easy to reuse. Reusability is one of the main concerns.
A new method to estimate software development effort is introduced. It is named Object Component Process Metrics (OCPM). The method is useful when software is constructed using object components. The foundations of software metrics are studied to understand what the measuring should be about, and the foundations are reorganised. Components are easy to reuse when they are easy to adapt to new applications. New ways to enhance object orientation are introduced to create components which are more suitable for reuse. The foundations and metrics of reusability must be studied to derive these methods; new ways to enhance reuse are then deduced using the metrics. Components which are easier to understand are those which the user can comprehend using his knowledge of the problem domain. The resemblance to natural languages is an important part of good component design. The correspondence of natural and computer languages is studied to learn how to construct components which are easier to understand. The number of reuses is the most important factor of reusability. There can be a large number of reuses for organisational reasons: the component is easy to find, it is well documented, and so on. The focus of this study is on how to construct components which are easy to adapt to new uses.

Object orientation is studied thoroughly, with a focus on the mechanisms of reuse. Object-oriented methods introduced inheritance as the basic mechanism of reuse. Nowadays several other mechanisms are included in these languages: one is the template, another the pattern. This reveals that inheritance is not powerful enough as the sole reuse mechanism for future programs. A new mechanism of reuse is introduced as a synthesis of the previous ones. It is called verb inheritance. It is a way to inherit procedures from other procedures and couple them with classic objects. Programming languages cannot be discussed without talking about typing.
A win-win solution to the debate about the need for safe typing will be introduced. The study as a whole contains several threads under a common theme. These threads do not form a single coherent whole, so they will be published as separate papers, each with its own title. OCPM has been published in IRIS21. The overall title is an evaluation of the benefits of object-oriented methods in the software development process.
5 Methodology

The Methodology Group collected many different aspects of object-oriented programming. Methodologies, in fact, span many different areas and therefore present little homogeneity. This fact led to some initial difficulties in focusing on the problem examined and the solution proposed. The topics varied widely, including real-time fully configurable systems development analysis for marine applications (Anita), industrial testing analysis (Jan), and measurement systems (Luigi).

The first effort was towards the development of a simple taxonomy that could help classify the works of the group easily, effectively, and completely. We found that each work belonged to one of two classes: empirical, concrete works, and theoretical, abstract works. The two classes have merits of their own. The theoretical class clusters methodologies based on new scientific developments. The empirical class clusters methodologies based on current situations in the field. Both classes are vital: while theory provides insight and paves the way, empirical research brings knowledge into the everyday world. All methods more or less center on the need to find new levels of abstraction to represent complex objects (which may be tasks, testing procedures, or network elements). There is a definite bias towards patterns, which may represent good architectural building blocks and be the element of reference for the various phases of the development cycle.

Empirical works: The empirical side of the table included the works of Anita, dealing with pattern mining in complex multisensor real-time marine information systems; Luigi, presenting an empirical measurement system based on a two-stage abstraction process; and Theodoros, presenting a reverse engineering technique to organize program comprehension and extract patterns relevant to it.
Theoretical works: The works presented by the theory side of the table were directed to the future of computing, addressing flaws in current methodologies and trying to build the new generation of abstractions. Birol presented the Contextual Objects Model, a meta-architecture for reactive information systems. Hyoseob's work deals with new techniques of program design; he seeks to integrate patterns in the design as a standard building block for the program's architecture. Jan analyzed a software development environment with a mathematical model in order to predict the location in time of the trade-off point after which testing is no longer cost-effective. Umit presented a distributed computing model that blends traditional development with networking paradigms.
5.1 Luigi Benedicenti: Process Measuring, Modeling, and Understanding
Software Engineering is a relatively new discipline, and it changes at a very fast rate. For this reason, it is often believed that the metrics and measures devised for one particular framework cannot be effective when applied to other
frameworks. My doctoral thesis presents a more general empirical approach to software measurements. The approach allows for new measures, but retains the repeatability and rigour of the scientific method. The approach is top-down. It is based on two methods that form a complete methodology: measurement theory and process modeling.

Measurement Theory. Measurement theory is used to devise, collect, and represent suitable software measures. Suitability of software measures is determined on a case basis, depending on the investigation being performed. For example, when trying to assess the impact of reuse on software quality, at least two measures must be collected: a measure of the amount of reuse in a program, and the level of quality of the program. Measurement theory allows for consistent and clear data collection, providing support for sophisticated analysis techniques (for example, statistical analysis and non-linear neural network analysis).

Process Modeling. The measures obtained in the previous phase are then coupled with an object-oriented high-level description of the software process employed in developing the product examined. Object orientation provides customisable levels of abstraction in the description, allowing one to concentrate on the most important parts of the process being analyzed while minimizing the effort spent on others. The model produced can be validated by means of activity-based costing, a method to track the time spent by each person in the process to the activities of the process itself.

Conclusions. The methodology resulting from the union of measurement theory and process modeling presents many advantages. Completeness: the methodology is general enough to cope with initially vague process descriptions, but can also be used to keep track of small details such as the number of lines of code developed per day by a single developer.
Flexibility: the methodology is rigorous where it is needed (measures), but allows for less rigorous descriptions for high-level, broad tasks that need not be further detailed. The level of abstraction is decided entirely by the methodology user. Clarity: the methodology allows for full experimental design, thus making it possible to compare different experiments and discover the mediating factors that caused the change. This makes it possible to apply sophisticated analysis techniques to the results, such as parametric statistical ANOVA analysis.
5.2 Birol Berkem: The Contextual Objects Modeling for a Reactive Information System For a presentation of Birol Berkem's work please see workshop 8.
5.3 Anita Jacob: Experiences in Designing a Spatio-temporal Information System for Marine Coastal Environments Using Object Technology

This work examines and illustrates the use of Object-Oriented Technology (OOT), Design Patterns (DP), and the Unified Modelling Language (UML) in the analysis and design of an information system for monitoring the marine coastal environment. The object-oriented approach allows us to document the characteristics of the problem domain in a terminology close to that of the users, making it easier to involve them in the development process. The design of this system is inherently complex because it must capture a variety of physical, biological, and chemical processes that interact in the coastal zone; the studied phenomena occur at different temporal and spatial scales, and the data come from a number of different sources. Object technology was selected to develop this system because it could model the complexity of the real-world environment and keep up with rapid changes in available tools and computer equipment. Other potential benefits of OOAD include reuse of design and code, leading to a smaller system and consequently reduced maintenance costs.

As a first step, the various components of a MARine COASTal Information System (MARCOAST) are identified, as are the marine processes and phenomena to be studied. A 3-tier client-server architecture was selected to implement MARCOAST. The IS was designed keeping in mind the nature of good OO systems, namely small and simple objects, loosely coupled objects, and a preference for object composition over inheritance. Further, lessons learned from incorporating design patterns in an application problem domain are presented. The work undertaken here is an attempt to take the object concepts out of the textbooks and actually use them in designing and developing application software, and thus harness the vast potential of object technology and apply it to the real world.
The work presented here was sponsored by the Norwegian Research Council.
5.4 Hyoseob Kim: Facilitating Design Reuse in Object-Oriented Systems Using Design Patterns

Software design activities are among the most time-consuming tasks in the software life cycle, mainly because they require a high degree of human intelligence. Also, faults originating from the early stages of the software life cycle need more effort to fix than those introduced in the coding stage. Reuse in general provides a basis for intellectual progress in most human endeavours. While code reuse can save time and effort, it must be noted that the savings will obviously not exceed the coding time, which is approximately 13% of the whole investment during the software life cycle. Much bigger savings can be expected from reuse during design, testing, and maintenance [1]. Thus, reusing software designs is considered a good way of improving programmers' productivity and software quality.
The thesis tries to find a way of improving current design practices, especially in the object-oriented software engineering community. Object-oriented (OO) methods are becoming popular these days thanks to their capability of mimicking the real world; inheritance and encapsulation are two of the most useful features found in them. However, they are failing to describe the overall system structure and have not brought the same degree of extensive design reuse experienced in other engineering disciplines. Learning from other mature engineering disciplines such as building architecture and chemical engineering, researchers are trying to raise the level of the current capability of object technologies with new concepts such as "software architecture", "object-oriented software frameworks", and "software design patterns".

Software design patterns are a way of facilitating design reuse in object-oriented systems by capturing recurring design practices. Many design patterns have been identified and, further, various usages of patterns are known, e.g., documenting frameworks and re-engineering software. To fully benefit from the new concept, we need to develop more systematic methods of finding design patterns. In our research, we propose a new method to recover patterns using object-oriented metrics. The activity of discovering patterns itself is not meaningful unless those recovered patterns are fully utilised for our software maintenance and development tasks. When we maintain existing software, the need to understand the software arises in the first instance. However, most existing documentation fails to supply enough information to users, thus causing a heavy burden for them. Design patterns are a way of delivering valuable design information for future reuse. We suggest a software redocumentation method using design patterns and pattern languages. Further, our work is extended to restructuring programs for easier software evolution.
References
1. J. van Katwijk and E. M. Dusink. Reusable software and software components. In R. J. Gautier and P. J. L. Wallis, editors, Software Reuse with Ada, pages 15-22. Peter Peregrinus Ltd., 1990.
5.5 Theodoros Lantzos: A Reverse Engineering Methodology for Object Oriented Systems
Benefits offered by OO technology have made it one of the leading technologies employed in the software community and a prime candidate for transforming old legacy systems. Although OO technology offers benefits such as reuse, maintainability, and understandability, it has its limitations. OO characteristics such as inheritance, polymorphism, and dynamic binding bring many complications at the time of system maintenance and reverse engineering. The use of new technologies in OO (design patterns, frameworks, and agents) introduced the first generation of OO legacy systems. Researchers working in the
The 8th Workshop for PhD Students in Object-Oriented Systems
41
area of OO re-engineering have already found problems associated with first-generation OO systems. Much attention must be given to complications from OO characteristics and to problems associated with the first OO legacy systems during the software maintenance process. A proven approach to assisting software maintenance and decreasing its associated cost is methodological design extraction. The aim of this project is to create a reverse engineering methodology for extracting system design from an existing OO system. The rationale for this project can be stated as follows: (i) the need for design extraction in OO systems, (ii) the need for extracting system design manually, (iii) the need for a dual approach to extracting system design (CASE-manual) and, (iv) the need for filling the gap between comprehension models and design extraction. Against this background, this project creates a design extraction method that accepts OO source code as its input; based on a set of transformation rules, the maintainer is able to extract the system design. The method provides a set of intermediate forms for monitoring the design extraction process. A major concern of the method is that it can be applied manually. A maintainer, by employing problem domain knowledge, programming language and design method details, and by following the transformation steps, can extract the system design. Some of the benefits provided by this approach are a representation of the system in a higher form, accurate documentation, knowledge of what the system does and how, and the ability to apply re-engineering technologies. The method has already been developed and is called ROMEO. For testing ROMEO, the C++ language was chosen for experiments and the Object Modelling Technique (OMT) was chosen for representing the extracted system design. ROMEO has been applied to a case study, and validation of the extracted system design was done using the Rational Rose CASE tool.
Further experiments to be done include the application of the method to another case study and its application by other practitioners. Whether maintaining OO systems as working systems or moving a legacy OO system into a new form, reverse engineering is the most important process that takes place and returns revisable results. Methodological design extraction and its manual implementation is an area that promises substantial help in the process of maintaining OO systems. The ROMEO method is the first of its kind.
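As a rough illustration of rule-based design extraction (this is not ROMEO itself; the transformation rule and the input source are invented), a single rule might map C++ inheritance declarations to edges of an OMT-style class model:

```python
import re

# One illustrative transformation rule: "class X : public Y" in the source
# becomes an inheritance edge (X, Y) in the extracted design model.
SOURCE = """
class Shape { };
class Circle : public Shape { };
class Sprite : public Circle { };
"""

def extract_inheritance(src):
    """Apply the rule to every matching class declaration in the source text."""
    edges = []
    for m in re.finditer(r"class\s+(\w+)\s*:\s*public\s+(\w+)", src):
        edges.append((m.group(1), m.group(2)))
    return edges

print(extract_inheritance(SOURCE))
# [('Circle', 'Shape'), ('Sprite', 'Circle')]
```

A full method needs many such rules (associations, aggregations, operations) plus the intermediate forms mentioned above, but each rule has this source-pattern-to-model-element shape, which is also what makes manual application feasible.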
5.6 Jan Sabak: The Reliability of Object-Oriented Software Systems In my PhD research I would like to develop a reliability model suitable for object-oriented programming with fault correction, and upon this model develop a test stopping criterion. I have applied the hyperexponential model to a commercial object-oriented database program.
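For illustration only (the mixture weights and failure rates below are invented numbers, not data from the study), the hyperexponential model expresses reliability as a probability-weighted mixture of exponentials, R(t) = Σᵢ pᵢ·exp(−λᵢ·t):

```python
import math

def hyperexp_reliability(t, mix):
    """R(t) = sum of p_i * exp(-lambda_i * t) over the mixture components."""
    return sum(p * math.exp(-lam * t) for p, lam in mix)

# Two hypothetical fault classes: 70% of faults with rate 0.01/h, 30% with 0.2/h.
mix = [(0.7, 0.01), (0.3, 0.2)]
print(round(hyperexp_reliability(0.0, mix), 3))   # 1.0 (no failures at t=0)
print(round(hyperexp_reliability(10.0, mix), 3))  # 0.674
```

A test stopping criterion built on such a model would, e.g., stop testing once the estimated R(t) at the mission time exceeds a required threshold.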
42
E. Ernst, F. Gerhardt, and L. Benedicenti
5.7 Umit Uzun: Extending Object-Oriented Development Methodologies to Support Distributed Object Computing My interest in Object-Oriented (OO) technology first started when I learned Object Pascal as a programming language and saw the difference in the object-oriented way of thinking. The object-oriented way of modeling the world in software systems is more natural than structural ways; moreover, it provides new capabilities to the software engineering area, such as improved reusability through inheritance and better management of complex systems through encapsulation and modularity [Booch94]. Computer networks (especially the Internet, intranets, and LANs) are now becoming a part of our daily life. In the very near future, it is going to be very difficult to find a computer which is not connected to a network. Software technology, on the other hand, is heading towards making the most use of these networks. Object-oriented technologies accelerated the shift from the client-server computation model to peer-to-peer and multi-tier models and introduced distributed object computing. Although implementing distributed object systems is becoming easier using current techniques and tools, I believe that current design methodologies are insufficient in addressing problems related to distribution in object-oriented software development. After carefully looking into current OO design methods, I found UML (Unified Modeling Language) [Booch97] to be the most robust one, and it is becoming a de facto standard; however, even in UML, problems related to the distribution of objects are not addressed in detail. Consequently, I picked UML as an example to extend, adding distribution-related issues to object-oriented design methods. We may define a distributed object-oriented application as an object-oriented application whose objects are located on different hosts. Although this difference might not seem great, it requires many changes to distribute an OO application.
I have developed a sample information retrieval application as a case study. The implementation of the system as a distributed application was not very difficult using RMI (Remote Method Invocation) [SunSoft97] and CORBA (Common Object Request Broker Architecture) [OMG97]. However, the design characteristics of such a distributed object system are not easily expressed in any of the current software development methodologies. Moreover, it is very difficult to predict the performance or error-prone parts of the system without having a good design prior to the implementation. Based on my experience from the case study, I have identified the problems in designing distributed object systems as: (1) What to distribute? (Granularity), (2) Where to put it? (Allocation), (3) Physical/virtual network schema, (4) How to cluster objects? (Clustering), (5) Exploiting and specifying parallelism (Concurrency), (6) How to evaluate the quality of the design in terms of distribution? (Assessment), (7) Refinement of the design according to distribution issues (Refinement). In my Ph.D. work I am trying to tackle these issues. I believe that being able to clearly express these characteristics in object-oriented design methodologies will help develop better distributed object designs and will make it possible to
have an idea of the reliability and performance of the system prior to the implementation. As I mentioned earlier, I picked UML to extend it to support the design of distributed object systems. I am hoping to use UML's extension mechanisms for this purpose. As an example, the issue of network schema is discussed in [Uzun98]. I am currently working on specifying other extensions and on a tool that will accept a design model with the suggested extensions and evaluate the distribution properties of the system. Using this tool and a case study, I am planning to show that with the proposed extensions one can analyse and design distributed object systems within a more clarified development methodology.
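Issue (6) above, assessment, can be illustrated with a toy metric such a tool might compute: given an allocation of objects to hosts, sum the interaction volume that crosses host boundaries. Everything below (objects, hosts, call counts) is an invented example, not part of the author's tool:

```python
# Toy assessment metric for a distribution design: remote calls are expensive,
# so count how much object interaction crosses host boundaries under a given
# allocation. Objects, hosts and call volumes are invented for the example.

calls = [("Client", "Index", 100), ("Index", "Store", 40), ("Client", "Cache", 200)]

def remote_call_volume(allocation, calls):
    """Sum the call volume between objects allocated to different hosts."""
    return sum(n for a, b, n in calls if allocation[a] != allocation[b])

alloc1 = {"Client": "hostA", "Cache": "hostA", "Index": "hostB", "Store": "hostB"}
alloc2 = {"Client": "hostA", "Cache": "hostB", "Index": "hostB", "Store": "hostB"}
print(remote_call_volume(alloc1, calls))  # 100: only Client->Index is remote
print(remote_call_volume(alloc2, calls))  # 300: Client->Index and Client->Cache
```

Comparing candidate allocations on such metrics is one way a design-evaluation tool could rank distribution choices before implementation.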
References 1. Grady Booch. Object-Oriented Analysis and Design with Applications. The Benjamin/Cummings Publishing Company, Inc., 390 Bridge Parkway, Redwood City, California 94065, USA, second edition, 1994. ISBN 0-8053-5340-2. 2. G. Booch, J. Rumbaugh, and I. Jacobson. The Unified Modelling Language for Object Oriented Development Documentation Set, Version 1.1. Rational Software Corporation, September 1997. http://www.rational.com/uml/documentation.html
3. Object Management Group. Common Object Request Broker Architecture. OMG Technical Documentation Archive, 1997. http://www.omg.org/corba/ 4. SunSoft. Remote Method Invocation Specification. Sun Microsystems Inc., 2550 Garcia Avenue, Mountain View, CA 94043, USA, 1997. http://www.javasoft.com/products/jdk/1.1/docs/guide/rmi/
5. Umit Uzun. The role of network architectures in design methodologies for distributed object computing. In BAS'98, The Third Symposium on Computer Networks, Dokuz Eylul University, Izmir, Turkey, June 1998.
Techniques, Tools, and Formalisms for Capturing and Assessing the Architectural Quality in Object-Oriented Software
S. Demeyer and J. Bosch (Eds.): ECOOP’98 Workshop Reader, LNCS 1543, pp. 44-45, 1998. Springer-Verlag Berlin Heidelberg 1998
completed’. Similarly, in a set of Parallels, each goal has the default Precondition ’<Parallels parent> started’. Each goal has a Priority, which may be primary, secondary, tertiary, or none. Prioritisation focuses attention on key areas, while allowing the modeler to describe goals which are currently perceived as less important or out of scope. Analysts can choose to show only those goals that are in scope. This can be achieved visually with a tool by filtering on the level of Priority. The goal hierarchy approach thus makes clear the effects of scoping decisions, and allows trade-offs to be evaluated. Each goal is associated with a list of Actors, which may be classes, such as quality assurance or engineering, or systems. External systems which can cause contingencies can also be listed as Actors. For example, ’Mains Power Supply’ can be treated as an Actor in some Exception scenarios involving computer systems (Precondition ’Power fails’).
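The priority-based scoping described above can be sketched directly (a hypothetical miniature goal model, not Scenario Plus itself; the goal names are invented):

```python
# Minimal goal-hierarchy sketch: each goal carries a Priority, and scoping
# means filtering the hierarchy on a priority level, as an analyst's tool
# might do visually. The goals themselves are invented examples.

PRIORITY = {"primary": 1, "secondary": 2, "tertiary": 3, "none": 4}

goals = [
    ("Launch product", "primary"),
    ("Prepare marketing pack", "secondary"),
    ("Translate brochures", "tertiary"),
    ("Redesign logo", "none"),
]

def in_scope(goals, level):
    """Keep only goals at least as important as the given priority level."""
    return [name for name, pr in goals if PRIORITY[pr] <= PRIORITY[level]]

print(in_scope(goals, "secondary"))
# ['Launch product', 'Prepare marketing pack']
```

Filtering at "secondary" hides the tertiary and out-of-scope goals while keeping them in the model, which is what makes the effect of a scoping decision easy to see and to revisit.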
Tool Support A tool can be helpful in making a goal hierarchy and the scenarios derived from it accessible to users. Tools can also provide abilities not available with manual methods: for example, animating, filtering, etc. A tool with these capabilities, Scenario Plus [1], based on the DOORS requirements engine, is freely available. Figure 2 shows a fragment of a model of a Marketing process during animation. The user has decided to Go ahead (goal 60) with a product launch, and the tool is about to explore the pre-launch sequence (goal 12). Animation proceeds
Requirements Capture Using Goals
231
interactively; the user chooses the desired path when an Alternatives goal is encountered. Animation involves users in the model. Each animation generates a Scenario, a fully-determined path, and these form the basis for acceptance test scripts.
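Each animation run resolves every Alternatives goal to one chosen child, so the set of possible Scenarios is the product of those choices. A minimal sketch of that enumeration (the tree structure and the example process are invented; this is not Scenario Plus code):

```python
from itertools import product

# Sketch: "alt" nodes offer a choice of one child; "seq" nodes run all
# children in order. Enumerating scenarios means taking every combination
# of choices, one per "alt" node.

tree = ("seq", ["prepare",
                ("alt", ["launch by mail", "launch by phone"]),
                ("alt", ["review internally", "review with customer"])])

def scenarios(node):
    if isinstance(node, str):
        return [[node]]                    # a leaf goal is a one-step path
    kind, children = node
    if kind == "alt":                      # one scenario per chosen child
        return [s for c in children for s in scenarios(c)]
    paths = [scenarios(c) for c in children]        # "seq": all children
    return [sum(combo, []) for combo in product(*paths)]

print(len(scenarios(tree)))  # 2 * 2 = 4 possible scenarios
```

An interactive animation walks exactly one of these paths per run; enumerating them all is what turns the goal hierarchy into a set of candidate acceptance test scripts.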
Figure 2: Animating a Marketing Goal Hierarchy
Conclusions This article has summarized a simple approach to describing business processes as a hierarchy of goals. Goal hierarchies are readily understood by both users and engineers, forming a bridge between their views. There is a close relationship between business goals, actors, and scenarios and object-oriented system development concepts, such as object collaboration, roles, and use cases. However, the goal hierarchy is valid and useful for business process description regardless of whether the subsequent development approach is functional or object-oriented.
References
1. Alexander, Ian, Scenario Plus User Guide, http://www.scenarioplus.com, 1997
2. Alexander, Ian, "A Co-Operative Task Modelling Approach to Business Process Understanding", http://www.ibissoft.se/oocontr/alexander.htm, ECOOP 1998
3. Cockburn, Alistair, "Structuring Use Cases with Goals", http://members.aol.com/acockburn/papers/usecases.htm, 1997
4. Graham, Ian, "Task scripts, use cases and scenarios in object oriented analysis", Object Oriented Systems 3, pp 123-142, 1996
5. Kendall, E., "Goals and Roles: The Essentials of Object Oriented Business Process Modeling", http://www.ibissoft.se/oocontr/kendall.htm, ECOOP 1998a
6. Kendall, E., S. Kalikivayi, "Capturing and Structuring Goals: Analysis Patterns", European Pattern Languages of Programming, Germany, July 1998b
7. Schank, R.C. and Abelson, R.P., "Scripts, Plans, Goals and Understanding", Lawrence Erlbaum Associates, Boston, USA, 1977
‘Contextual Objects’ or Goal Orientation for Business Process Modeling Birol Berkem Independent Consultant / CNAM 36, Av. du Hazay 95800 Paris-Cergy / FRANCE tel : +33.1.34.32.10.84 e-mail :
[email protected] Abstract. In this paper, we propose extensions to object-oriented notions to represent business processes. The idea avoids considering object types independently; instead it takes into account their collaboration based on the context emerging from a goal requirement. To this end, we introduce ‘Contextual Objects’, which incite objects to collaborate in order to realize a business goal.
Traditional object orientation doesn’t meet business process modeling requirements since objects do not incorporate the goals for which they collaborate. Indeed, business processes like Order Management, Sales, etc. and their steps (activities) must be driven by a goal in order to take dynamic decisions at any step of the process and to evaluate the progression of activities. Thus a business process may be viewed as a goal-oriented graph with each step representing a goal-oriented node. Listing attributes and operations within classes does not help to model business process steps, since the goal and the emerging context of objects are not expressed in today’s object-oriented diagrams (even in UML’s activity, sequence, state-transition or collaboration diagrams). ‘Contextual Objects’ represent goals and contexts by an object’s contextual behaviors, which implicitly express the attributes, relationships and methods that depend on a goal and on a context. Modeling goals requires that each activity inside a process step be conducted by a driver object [1]. Contextual objects collaborate to realize the goal whenever a driver object enters one of its lifecycle stages. For example, an ‘order’ incites other objects, such as product, customer, delivery, invoice, etc., to react, depending on the context. UML 1.1 has a work unit [2] whose state is reflected by the state of its dominant entity (driver object) [3]. But nothing is said about the internal dynamics of a work unit, whose detailed description becomes necessary to define the responsibilities of its participating objects. In this way, contextual objects should contribute to modelling activities inside business processes (see Figure 1). Secondly, we propose the Behavioral State Transition Diagram to model the goal and responsibilities of objects inside each step of a business process. This diagram (Figure 2) represents an activity's internal behavior as a reusable component.
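The driver-object idea can be sketched as a mapping from the driver's lifecycle stage to the contextual collaborators it incites, mirroring the Order example (a schematic sketch, not the paper's notation):

```python
# Sketch of a driver object whose lifecycle stage determines which objects
# are incited to collaborate. Stages and collaborators mirror the Order
# example (recording, delivering, billing) in schematic form.

CONTEXT = {
    "recording":  ["Product (to record)", "Order (to deliver)"],
    "delivering": ["Product (to verify)", "Delivery (to execute)"],
    "billing":    ["Delivery (to verify)", "Bill (to prepare)"],
}

class Order:
    def __init__(self):
        self.stage = None

    def enter(self, stage):
        """Entering a lifecycle stage incites the contextual collaborators."""
        self.stage = stage
        return CONTEXT[stage]

order = Order()
print(order.enter("delivering"))
# ['Product (to verify)', 'Delivery (to execute)']
```

The point of the sketch is that the collaboration set is a property of the driver's stage, not of the collaborating classes themselves, which is what a plain class diagram cannot express.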
As a bridge toward use cases, a Work Unit Class should be considered as a Use Case Class, and Nested Work Units represent target (destination) use case classes or packages within ‘uses/includes’ relationships. That way, we can obtain business-process-driven use cases and their relationships. In summary, ‘contextual objects’ provide the following: • a robust implementation of executable specifications, S. Demeyer and J. Bosch (Eds.): ECOOP’98 Workshop Reader, LNCS 1543, pp. 232-233, 1998. Springer-Verlag Berlin Heidelberg 1998
‘Contextual Objects’ or Goal Orientation for Business Process Modeling
233
• a formal definition of the behavioral state transition diagram, which represents the internal behavior of the ‘action states’ and their transitions,
• a formal way to find process-oriented use cases and their relationships.
Figure 1. Business Process Steps modeled using behavioral work units. Object collaboration is a behavioral stereotype. Classes within ‘executing’ are the inputs or suppliers of the interfaces. Output objects, origins of flows (dashed lines), play the role of output or client parts.
Figure 2. A Behavioral State Transition Diagram for the delivery step of the Order process. Rounded rectangles indicate the boundary of each individual or nested activity. Dashed rounded rectangles depict the boundary of object collaboration inside each action state.
References
1. Berkem, B., BPR and Business Objects using the Contextual Objects Modeling, 8th International Software Engineering & Its Applications Conference, Univ. Leonard de Vinci, Paris, November 1995
2. UML Summary version 1.0, Business Modeling Stereotypes, January 1997
3. UML 1.1, Extensions for Business Modeling, Stereotypes and Notation, www.rational.com
Mapping Business Processes to Software Design Artifacts Pavel Hruby Navision Software a/s Frydenlunds Allé 6 2950 Vedbæk, Denmark Tel.: +45 45 65 50 00 Fax: +45 45 65 50 01 E-mail:
[email protected] Web site: www.navision.com (click services) Abstract. This paper explains the structure of a project repository, which enables you to trace business processes and business rules to the architecture and design of the software system. The structure identifies types and instances of business processes, which are mapped to software design artifacts by means of refinements, realizations and collaborations at different levels of abstraction.
Even when using a visual modeling language such as UML, a useful specification of a business system is based on precisely defined design artifacts, rather than on diagrams. The design artifact determines the information about the business system, and the diagram is a representation of the design artifact. Some design artifacts are represented graphically in UML, some are represented by text or tables, and some can be represented in a number of different ways. For example, the class lifecycle can be represented by a statechart diagram, an activity diagram, a state transition table or Backus-Naur form. The object interactions can be represented by sequence diagrams or by collaboration diagrams. The class responsibility is represented by text. Business processes, shown in UML as use cases, are considered collaborations between organizations, business objects, actors, workers or other instances in a business system. A business process (use case) is a type of collaboration, specifying the collaboration responsibility, goal, precondition, postcondition and operations involved in the collaboration. A business process instance (use case instance) is an instance of collaboration, specifying concrete sequences of actions and events. Fig. 1 shows relationships between design artifacts specifying business processes and the logical design of the software system. Artifacts are structured according to the level of abstraction: the organizational level, the system level and the architectural level. At each level of abstraction and in each view, the system can be described by four artifacts. They are the classifier model (specifying static relationships between classifiers), the classifier interaction model (specifying dynamic interactions between classifiers), the classifier (specifying classifier responsibilities, roles and static properties of classifier interfaces) and the classifier lifecycle (specifying dynamic properties of classifier interfaces).
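The four-artifact structure per level can be sketched as a small data structure (a schematic illustration of the repository idea, not Navision's actual repository format):

```python
# Schematic repository entry: at each abstraction level, a classifier kind
# is described by the same four artifacts. Field names follow the paper's
# terms; the descriptions are paraphrases for illustration.

def artifacts_for(classifier):
    return {
        "classifier model": f"static relationships between {classifier}s",
        "classifier interaction model": f"dynamic interactions between {classifier}s",
        "classifier": f"{classifier} responsibilities, roles, interface statics",
        "classifier lifecycle": f"dynamic properties of the {classifier} interface",
    }

repository = {level: artifacts_for(c) for level, c in
              [("organizational", "organization"),
               ("system", "system"),
               ("architectural", "subsystem")]}

print(sorted(repository["system"]))  # the four artifact kinds at one level
```

The regularity is the point: because every level repeats the same four slots, refinements and realizations can relate artifacts of the same kind one level apart.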
The classifier model is represented by a static structure diagram (if classifiers are objects, classes or interfaces), a use case diagram (if classifiers are use cases and actors), a deployment diagram (if classifiers are nodes) and a component diagram in its type form (if classifiers are components). The classifier interaction model is S. Demeyer and J. Bosch (Eds.): ECOOP’98 Workshop Reader, LNCS 1543, pp. 234-236, 1998. Springer-Verlag Berlin Heidelberg 1998
Mapping Business Processes to Software Design Artifacts
235
represented by a sequence or collaboration diagram. The classifier is represented by text. The classifier lifecycle is represented by a statechart, an activity diagram, a state transition table and in Backus-Naur form.
Fig. 1 arranges the artifacts in a grid: rows are the organizational, system and architectural levels; columns are the Business Objects View (Logical View) and the Business Process View (Use Case View in UML). Each row pairs the classifier artifacts (Organization Model, Organization Interaction Model, Organization, Organization Lifecycle; likewise System Model, System Interaction Model, System/Actor, System/Actor Lifecycle, and Subsystem Model, Subsystem Interaction Model, Subsystem, Subsystem Lifecycle) with the use case artifacts (Organization Use Case Model, Organization Use Case Interaction Model, Organization Use Case, Organization Use Case Lifecycle; likewise at the system and subsystem levels). Within a level the two views are connected by «instance» and «collaborations» dependencies; adjacent levels are connected by «refine» and «realize» dependencies.
Fig. 1. Mapping business processes to objects at organizational, system and architectural levels.
The organizational level of abstraction specifies the responsibility of an organization (such as a company) and the business context of the organization. The artifact organization specifies responsibility and relevant static properties of the organization. The artifact organization model specifies relationships of the organization to other organizations. The artifact organization use case specifies the business process with the organizational scope in terms of the process goal, precondition, postcondition, business rules that the process must meet and other relevant static properties of the process. This business process is a collaboration of the organization with other organizations. All collaborations of the organization with other organizations are described in the artifact organization use case model, see the dependency «collaborations» in Fig. 1. The instances of organization business processes are specified in the artifact organization interaction model in terms of the interactions of the organization with other organizations. The organization business processes can be refined into more concrete system business processes, see the dependency «refine» in Fig. 1. Allowable order of the system business processes is
236
P. Hruby
specified in the artifact organization use case life cycle. The organization use case interaction model specifies typical sequences of business process instances, see the dependency «instance» in Fig. 1. This artifact can be represented in UML by a sequence or collaboration diagram in which classifier roles are use case roles. An example of such a diagram is in reference [2]. The realization of the organizational business process is specified by the interactions between the software system and its users (team roles); see the dependency «realize» in Fig. 1. The system level specifies the context of the software system and its relationships to its actors. The artifact system specifies the system interface: the system operations with responsibilities, preconditions, postconditions, parameters and return values. The artifact actor specifies the actor responsibilities and interfaces, if they are relevant. The system lifecycle specifies the allowable order of system operations and events. The system model specifies relationships between the software system and actors (other systems or users), and the system interaction model specifies interactions between the software system and actors. These interactions are instances of system business processes, see the dependency «instance» in Fig. 1. The artifact system use case specifies the static properties of the business process with the system scope. This business process is a collaboration of the system with other systems and users. All collaborations of the system with its actors are described in the artifact system use case model, see the dependency «collaborations» in Fig. 1. The dynamic properties of the business process interface, such as the allowable order of system operations in the scope of the business process, are specified in the system use case life cycle. The system use case interaction model specifies typical sequences of business process instances.
The system business processes can be refined into subsystem business processes, see the dependency «refine» in Fig. 1. The realization of the system business process is specified by the subsystems at the architectural level, their responsibilities and interactions, see the dependency «realize» in Fig. 1. Artifacts at the architectural level are structured in the very same way. The architectural level specifies the software system in terms of subsystems and components, their responsibilities, relationships, interactions and lifecycles. The same structure can also specify the software system at the class level and the procedural level of abstraction. Please see reference [1] for examples of UML diagrams representing the artifacts discussed in this paper.
References
1. Hruby, P.: "Structuring Design Deliverables with UML", UML'98, Mulhouse, France, 1998. http://www.navision.com/default.asp?url=services/methodology/default.asp
2. Hruby, P.: "Structuring Specification of Business Systems with UML", OOPSLA'98 Workshop on Behavioral Semantics of OO Business and System Specifications, Vancouver, Canada, 1998. http://www.navision.com/default.asp?url=services/methodology/default.asp
Mapping Business Processes to Objects, Components and Frameworks: A Moving Target!
Eric Callebaut
[email protected], METHOD Consulting Kraainesstraat 109, 9420 Erpe BELGIUM
Abstract. Recently some material has been published on how to map or even integrate business processes with objects. However, object technology is moving fast and so is our target: the OO community is gradually moving towards components (including business components) and frameworks (including business frameworks). This document outlines an approach for: i) mapping business processes to business components, and ii) refining business processes for the development of business components and frameworks. Variations of this approach have been implemented by the author in leading OO projects, such as IBM's San Francisco Project and JP Morgan/Euroclear's Next Project.
1. Mapping Business Processes to Business Components

The OO paradigm is gradually shifting towards component-based development (CBD). It is not really a replacement of OO but rather a natural course of events; the main objective is to enable higher levels of reuse. Business process modeling focuses on the chain of activities triggered by business events. Business component modeling focuses on identifying the main building blocks and their collaborations. A business component provides a well-defined set of business services ('functionality') required to fulfil these business processes. A business component is typically built from a set of objects, or other components, which are invisible to users of the component [1]. So a component is a more 'large-grained' concept than an object. In order to hide the internal complexity of the business component, it is assigned an 'interface class' that acts as a representative, facade, or mediator for the business component. This interface class holds all the services that are made available to users of the business component. The terms and conditions under which these services can be used should be well defined as 'service contracts'.
The key question is: how can business process models be mapped onto business component models? This can be done by re-using a technique that is quite popular in the OO community: interaction diagrams (sequence diagrams or collaboration diagrams in UML). Let us illustrate this with an example. The sequence diagram in Fig. 1 maps the elementary business processes (or business activities) to the components and their services. In the example below, Business Activity 1 is
S. Demeyer and J. Bosch (Eds.): ECOOP'98 Workshop Reader, LNCS 1543, pp. 237-239, 1998. Springer-Verlag Berlin Heidelberg 1998
implemented by Business Service A1, which is provided by ComponentA; to execute Business Activity 2, ComponentA requests the service Bn provided by ComponentB.
Fig. 1. Global sequence diagram: Business Process/Components
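The facade-style interface class described above can be sketched in a few lines of code. This is only an illustration: the component and service names (`ComponentA`, `service_a1`, `service_bn`) mirror the hypothetical example of Fig. 1 and are not part of any real framework.

```python
class ComponentB:
    """Business component B; its internals stay hidden behind its interface."""
    def service_bn(self):
        # Hypothetical service Bn, requested by ComponentA for Business Activity 2.
        return "activity-2-done"

class ComponentA:
    """Interface ('facade') class of business component A.

    It holds all services made available to users of the component and
    delegates to internal objects or to collaborating components.
    """
    def __init__(self, component_b):
        self._component_b = component_b  # collaboration shown in Fig. 1

    def service_a1(self):
        # Business Activity 1 is implemented by Business Service A1.
        return "activity-1-done"

    def execute_activity_2(self):
        # For Business Activity 2, ComponentA requests service Bn of ComponentB.
        return self._component_b.service_bn()

a = ComponentA(ComponentB())
print(a.service_a1())          # activity-1-done
print(a.execute_activity_2())  # activity-2-done
```

Users of the component see only the two services on the interface class; the delegation to `ComponentB` stays invisible, which is exactly the black-box property the sequence diagram documents.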
This approach has some important advantages:
• Separation of concerns: both models (business process model and business component model) have their own purposes and audience, and are created separately.
• Validation: mapping both models allows for validation of the completeness and accuracy of business processes and business component services. In practice, this is often done via walkthrough sessions between the business experts/users and the business architect. As the business expert runs through the business process description, the business architect explains which business components/services are involved.
• Traceability: tracing between business processes and business components/services is provided.
• Black-box approach: the internal details of business activities and components can be delayed to a later stage (each component service can be detailed via a child sequence diagram).
2. Refining Business Processes for Components and Frameworks

As business components and business frameworks are receiving more interest, a key question is how to refine the business process models so that they provide proper input for the definition of business components and business frameworks. For business components and frameworks to be reusable, they need to be sufficiently generic (applicable in multiple business domains and business contexts) and adaptable to changes in the business environment. These business variations and changes may occur for several reasons, e.g.:
• Internationalisation: business areas which require variations, on the basis of country, that may be legal, cultural, etc.
• Legal: business areas which reflect legal requirements that are prone to change
• Business policies: business areas which have different implementations and can change. These variations may result from changing market conditions, differences in company size and complexity, business transformations, etc.

Given these variations and changes, we can refine our business processes/business activities based on the following typology [2]. By applying this typology, business processes can be refined and provide better input to the definition of reusable business components.
• Primary business activities: these correspond to the main, concrete business tasks (initiate trade deal, register purchase invoice, confirm sales order execution, etc.).
• Common business activities: these correspond mainly to an operation or set of operations that is common to all or most of the activities within different processes. Common activities affect several business processes and need to be handled in the same or a similar way. Two examples are calendar handling and currency handling. Common business activities will be major candidates for common services provided by a common component layer.
• Abstract business activities: these activities correspond mainly to similarities, generalisations or patterns within and across different business processes. Some examples are similarities between orders (sales, purchase, etc.), similarities between business partners (supplier, customer, etc.), and similarities between finance ledgers (GL, A/R, etc.). Abstract business activities provide the main input to the definition of business patterns during component modelling.
• Extension business activities: these activities correspond to volatile business activities due to variations or changes in legal rules (e.g. VAT calculations), business policies (pricing policy, inventory policy), or internationalisation. Extension activities will provide the main input to the definition of variation points in the services provided by business components.
These variation points are places that are likely to be changed by different users of the business component.
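A variation point of this kind can be sketched as a pluggable policy object. This is a minimal sketch only: the invoicing component and the flat VAT rule below are hypothetical illustrations, not part of the approach's deliverables.

```python
# The volatile VAT rule (an extension business activity) is factored out of the
# stable service, so different users of the component can plug in their own policy.
class FlatVAT:
    """One concrete policy for the VAT variation point."""
    def __init__(self, rate):
        self.rate = rate
    def vat(self, net):
        return net * self.rate

class InvoicingComponent:
    """The stable business service; VAT calculation is its variation point."""
    def __init__(self, vat_policy):
        self.vat_policy = vat_policy
    def total(self, net):
        return net + self.vat_policy.vat(net)

belgian = InvoicingComponent(FlatVAT(0.21))  # one legal context
exempt = InvoicingComponent(FlatVAT(0.0))    # another context, same component
print(round(belgian.total(100.0), 2))  # 121.0
print(round(exempt.total(100.0), 2))   # 100.0
```

The component itself never changes when a legal rule does; only the policy object plugged into the variation point is replaced.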
References
1. Allen, P., Frost, S., Component-Based Development for Enterprise Systems: Applying the SELECT Perspective, Cambridge University Press-SIGS Books, 1998.
2. Callebaut, E., IBM San Francisco Framework Requirements, IBM San Francisco Project, IBM Böblingen, Germany, 1996.
Partitioning Goals with Roles Elizabeth A. Kendall
[email protected],
[email protected] Intelligent Business Systems Research, BT Laboratories MLB1/ PP12, Martlesham Heath, Ipswich, IP5 3RE ENGLAND
Once goals have been captured and structured, they should be partitioned and assigned to roles that appear in role models. As roles can be played by objects, systems, or people, this results in a unified approach to object oriented business process modeling.
Roles and role models [1, 3, 5, 6, 7] are relatively new abstractions in object oriented software engineering. Roles have also been widely used in business process modeling [4]. Work at BT aims to clarify, integrate, and extend the role and role model concepts and notation from object oriented software engineering and business process modeling. Role model patterns are also being identified. Role models respond to the following needs:
• Role models emphasize how entities interact. Classes stipulate object capabilities, while a role focuses on the position and responsibilities of an object within an overall structure or system.
• Role models can be abstracted, specialized, instantiated, and aggregated into compound models [1, 5]. They promote activities and interactions into first-class objects.
• Role models provide a specification without any implication about implementation. They are unified models that can encompass people, processes, objects, or systems.
• Role models can be dynamic [3]. This may involve sequencing, evolution, and role transfer (where a role is passed from one entity to another).

There are a few approaches to role modeling; the summary presented here is closely based on [1] and [6]. A role model is very similar to a collaboration diagram in UML, which effectively captures the interactions between objects involved in a scenario or use case. However, a collaboration diagram is based on instances in a particular application; its potential for reuse and abstraction is limited. Further, it is just one perspective of an overall UML model; usually it is subordinate to the class diagrams. Class diagrams adequately address information modeling, but not interaction modeling [1]. This is because classes decompose objects based on their structural and behavioral similarities, not on the basis of their shared or collaborative activities and interactions. This is where role models come in.
A role model describes a system or subsystem in terms of the patterns of interactions between its roles, and a role may be played by one or more objects or some other entity. Once you have a base role model, you can build on it to form new models. One role model may be an aggregate of others. Also, a new role model may be derived from one or more base models; in this case, the derived role must be able to play the
S. Demeyer and J. Bosch (Eds.): ECOOP'98 Workshop Reader, LNCS 1543, pp. 240-241, 1998. Springer-Verlag Berlin Heidelberg 1998
base roles. Combined roles must be addressed in a system specification. Synergy can make a combined role more than just the sum of the parts. An example of this is the Bureaucracy pattern [7]. This pattern features a long chain of responsibility, a multilevel hierarchical organization, and centralized control. It can be constructed by bringing together the Composite, Mediator, Observer, and Chain of Responsibility patterns [2], which involve eighteen roles in total. However, there are only six roles in the Bureaucracy pattern, because the resulting compound pattern is more than just the sum of the individual patterns.

Chain of Responsibility, Mediator, Observer, and Bureaucracy are all role models that are relevant to business process modeling. They can be used to model business systems composed of people, organizations, systems, and objects. Other relevant role models can be found in [1, 3, 7]. Patterns are needed for identifying roles and role models that are relevant to business process modeling, and a role model catalog is under development at BT as a first step. Whereas static role models have been presented here, it is anticipated that role dynamics will be a fruitful area for modeling changing business processes. Here, roles can be transferred from one entity to another, or follow a certain sequence. This may be valuable for modeling mobility.
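The notions of a role played by an entity and of role transfer can be sketched as follows. This is a minimal illustration only: the Approver role and the entity names are hypothetical, not taken from the BT catalog.

```python
# A role is a position with responsibilities; its player may be an object,
# a system, or a person. Role transfer passes the role between entities.
class Role:
    def __init__(self, name, responsibilities):
        self.name = name
        self.responsibilities = responsibilities  # what the position entails
        self.player = None                        # entity currently playing the role

class Entity:
    def __init__(self, name):
        self.name = name
        self.roles = set()

def transfer(role, new_player):
    """Role transfer: the role is passed from one entity to another."""
    if role.player is not None:
        role.player.roles.discard(role)
    role.player = new_player
    new_player.roles.add(role)

approver = Role("Approver", ["approve purchase orders"])
alice, billing_system = Entity("Alice"), Entity("BillingSystem")
transfer(approver, alice)
transfer(approver, billing_system)  # the role moves from a person to a system
print(approver.player.name)         # BillingSystem
print(len(alice.roles))             # 0
```

Because the role, not the class of its player, carries the responsibilities, the same specification covers people, systems, and objects, which is the unified-modeling property emphasized above.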
References
1. Andersen, E. (Egil), Conceptual Modeling of Objects: A Role Modeling Approach, PhD Thesis, University of Oslo, 1997.
2. Gamma, E., R. Helm, R. Johnson, and J. Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley, 1994.
3. Kristensen, B. B., Osterbye, K., "Object-Oriented Modeling with Roles", OOIS'95, Proceedings of the 2nd International Conference on Object-Oriented Information Systems, Dublin, Ireland, 1995.
4. Ould, M., Business Processes: Modelling and Analysis for Reengineering and Improvement, John Wiley & Sons, West Sussex, England, 1995.
5. Reenskaug, T., Wold, P., Lehne, O. A., Working with Objects: The OOram Software Engineering Method, Manning Publications Co., Greenwich, 1996.
6. Reenskaug, T., "The Four Faces of UML", www.ifi.uio.no/~trygve/documents/, May 18, 1998.
7. Riehle, D., "Composite Design Patterns", OOPSLA '97, Proceedings of the 1997 Conference on Object-Oriented Programming Systems, Languages and Applications, ACM Press, pp. 218-228, 1997.
Object Oriented Product Metrics for Quality Assessment (Workshop 9)
Report by Houari A. Sahraoui
CRIM, 550 Sherbrooke ouest, 1er ét., Montréal (QC), Canada H3A 1B9
[email protected]

1. Introduction

Software measures have been extensively used to help software managers, customers, and users assess the quality of a software product based on its internal attributes, such as complexity and size. Many large software companies have intensively adopted software measures to better understand the relationships between software quality and software product internal attributes and, thus, improve their software development processes. For instance, software product measures have successfully been used to assess software maintainability and error-proneness. Large software organizations, such as NASA and HP, have been able to predict costs and delivery time via software product measures. Many characterization baselines have been built based on technically sound software measures. In this workshop, we were mainly concerned with investigating software product measures for object-oriented software systems that can be used to assess the quality of large OO software systems. The OO paradigm provides powerful design mechanisms which have not been fully or adequately quantified by the existing software product measures.
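As an illustration of how internal attributes are computed directly from the product, the sketch below computes two well-known OO internal attributes, depth of inheritance tree (DIT) and number of children (NOC), over a small hypothetical class hierarchy (the classes A through D are invented for illustration).

```python
# A tiny hypothetical hierarchy to measure.
class A: pass
class B(A): pass
class C(A): pass
class D(B): pass

def dit(cls):
    # Depth of inheritance tree: distance of cls from the root of the
    # hierarchy ('object' in Python, at depth 0).
    return max((dit(base) for base in cls.__bases__), default=-1) + 1

def noc(cls):
    # Number of children: count of immediate subclasses of cls.
    return len(cls.__subclasses__())

print(dit(D))  # 3  (D -> B -> A -> object)
print(noc(A))  # 2  (B and C)
```

Such values only become quality indicators once thresholds or target values are attached to them, which is precisely the validation question the workshop addressed.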
2. Suggested paper topics

Papers that investigate analytically or empirically the relationship between OO design mechanisms and different aspects of software quality were especially welcome. In particular, the suggested topics for papers included:
1. Metrics versus quality attributes (reliability, portability, maintainability, etc.);
2. Automatic collection (collection tools, collection OO CASE tools);
3. Validation of OO metrics (empirical and formal);
4. Relationships between OO product and process metrics;
5. Standards for the collection, comparison and validation of metrics.
S. Demeyer and J. Bosch (Eds.): ECOOP’98 Workshop Reader, LNCS 1543, pp. 242-272, 1998. Springer-Verlag Berlin Heidelberg 1998
3. Organisation
3.1 Workshop organizers
Walcelio L. Melo, Oracle Brazil and Univ. Catolica de Brasilia, Brazil
Sandro Morasca, Politecnico di Milano, Italy
Houari A. Sahraoui, CRIM, Canada

3.2 Participants
Adam Batenin, University of Bath, United Kingdom
Fernando Brito e Abreu, INESC, Portugal
Bernard Coulange, VERILOG, France
Christian Daems, Free University of Brussels, Belgium
Serge Demeyer, University of Berne, Switzerland
Reiner Dumke, University of Magdeburg, Germany
Eliezer Kantorowitz, Technion Haifa, Israel
Hakim Lounis, CRIM, Canada
Carine Lucas, Free University of Brussels, Belgium
Geert Poels, Katholieke Universiteit Leuven, Belgium
Teade Punter, Eindhoven University of Technology, Netherlands
Radu Marinescu, FZI, University of Karlsruhe, Germany
Houari A. Sahraoui, CRIM, Canada
Frank Simon, Technical University of Cottbus, Germany

3.3 Program
Eight papers were accepted to the workshop. The workshop was organized in three separate sessions. Each session consisted of a group review of two or three papers followed by a concluding discussion. The papers discussed during the workshop are listed below.1

Session 1 (OO metrics versus quality attributes)
• Metrics, Do They Really Help? (by Serge Demeyer and Stéphane Ducasse)
• An Assessment of Large Object Oriented Software Systems (by Gerd Köhler, Heinrich Rust and Frank Simon)
• Using Object-Oriented Metrics for Automatic Design Flaws Detection in Large Scale Systems (by Radu Marinescu)
1 Extended summaries of these papers are presented at the end of this chapter.
Session 2 (collection tools, collection OO CASE tools)
• An OO Framework for Software Measurement and Evaluation (by Reiner Dumke)
• A Product Metrics Tool Integrated into a Software Development Environment (by Claus Lewerentz and Frank Simon)
• Collecting and Analyzing the MOOD2 Metrics (by Fernando Brito e Abreu and Jean Sebastian Cuche)

Session 3 (Validation of OO metrics)
• An Analytical Evaluation of Static Coupling Measures for Domain Object Classes (by Geert Poels)
• Impact of complexity metrics on reusability (by Yida Mao, Houari A. Sahraoui and Hakim Lounis)
4. Conclusions

The wide variety of papers presented and the high level of expertise at the workshop led to the following conclusions (with more or less general agreement):
1. Much work shows that metrics can be used to successfully measure the quality of a system. This can be done through the detection of problematic constructs in the code (and the design).
2. Empirical validation studies on OO metrics should be conducted intensively in order to derive conclusions on their application as quality indicators/estimators.
3. There seems to be precious little investigation into the generalization of case study results for their systematic application in the future.
4. There is widening recognition of the need for industrial case studies to validate metrics.
5. It is far from clear that industry perceives that metrics can be useful in the improvement of their OO product quality. We have to find ways to overcome the skepticism of our industrial partners.

To be complete, we present in the following paragraphs the conclusions of the participants.

Adam Batenin: The workshop was a success in a number of ways. Firstly, it demonstrated the diversity of research interests in the field. This is surely a sign that we are exploring many new avenues and progressing product measurement. Secondly, it highlighted the shortcomings in our work, pointing out the areas that will require greater focus in the future, such as getting businesses interested in the systematic application of metrics and in formal analysis of measures. Measurement science can benefit from giving more attention to its foundation. Intuitive
understanding of concepts such as modularity needs to be backed up by a formal analysis of them. Modularity concepts are not specific to designs or code but also apply to specifications, program documentation, etc. The principles underlying good modularity are shared by all these products. We can benefit from this by accepting that there are common elements and by separating these from the context-specific information associated with the problem.

F. Brito e Abreu: The editor of the French Nantes-based magazine "L'Objet" challenged the participants to submit papers to an edition dedicated to software metrics on OO systems, to appear soon. All participants felt that this workshop, due to its success, deserved to be continued at the next ECOOP, to take place in Lisbon, Portugal. There was general agreement that empirical validation studies on OO metrics should be conducted intensively in order to derive conclusions on their application as quality indicators/estimators. Repetition of those experiments by different researchers and comparison of results is one important issue. Published work in this area is very scarce.

B. Coulange: This workshop was very interesting because several very different points of view were presented. About metrics, one asked: "Do they really help?", while two other speakers presented helpful results from using metrics on large projects. What is the meaning of each metric? Which metrics should be used? Which values should they take? These subjects are still open; this workshop proposed some answers. Other presentations about CASE tools gave a good idea of what developers expect when using metrics and what the role of a tool can be.

S. Demeyer: The first session provided some interesting discussion concerning the question whether metrics can be used to measure the quality of a system. On the one hand, participants like Radu Marinescu and Frank Simon reported on the usage of metrics to detect problematic constructs in source code.
On the other hand, Serge Demeyer claimed that problematic constructs detected by metrics do not really hamper the evolution of a system. This debate boils down to the old question whether internal attributes can be used to assess external quality factors.

Hakim Lounis: The workshop was organized in three sessions. The first session was about OO metrics versus quality attributes; speakers presented their experiments, mainly on large OO software systems. A discussion took place on the real usefulness of OO internal attributes for assessing quality features of the systems under study. Speakers in the second session presented their work on measurement and evaluation frameworks and tools; in this session, a great amount of discussion turned around metric collection tools. Finally, the third session was concerned with the validation of OO metrics; two approaches were presented: an analytical evaluation and a machine learning approach that has the advantage of producing explicit rules capturing the correlation between internal attributes and quality factors. A very important aspect
was pointed out by some of the participants: it concerns the generalization of case study results for their systematic application in the future. I consider that researchers' meetings similar to the one held at the ECOOP'98 workshop are of great benefit for the promotion and affirmation of such results.

Radu Marinescu: Three major points became very clear to me as a result of this workshop. First of all, the fact that it is very hard to "port" experimental results from one experimental context to another. We have to find ways to make the conclusions of experimental studies more trustworthy and usable for other cases. A second important issue is that, on the one hand, it is vital for metrics research to validate metrics on more industrial case studies, but on the other hand, it is very hard to obtain these case studies. We have to find ways to overcome the fear of our industrial partners. Last, but not least, we observed that size metrics are in most cases irrelevant, and therefore we should seriously think about defining new metrics that might reflect deeper characteristics of object-oriented systems.

G. Poels: I especially liked the focus on theoretical and applied research. All workshop participants had an academic background, and much emphasis was laid on experimental design, validation of research results, and measurement-theoretical concerns regarding measure definitions. Quite innovative research was presented on the use of software measurement for software re-engineering purposes (the FAMOOS project). Also, a number of comprehensive measurement frameworks were presented to the participants. Further, the empirical research described by some presenters sheds some light on the relationship between OO software measures and external quality attributes like reusability.
Generally, I very much appreciated the rigorous, technically detailed, and scientific approach of the research presented in the position papers, which is certainly a factor that differentiates this workshop from similar events. The proceedings contain a richness of information, and I hope that future workshops will continue emphasizing research results.

T. Punter: Metrics are important to assess software product quality. The purpose of researchers of OO metrics seems to be to find OO representatives (like Number of Children and Depth of Inheritance Tree) which can replace the conventional metrics for measuring the size, structure or complexity of the product. Attendees of the OOPM workshop agreed on the statement that software product quality depends on the application area of the product. The target values of the metrics depend on the situation in which the product should operate. Therefore, system behavior (which is measured by external metrics) should be part of the subject of evaluation. So, despite the focus on internal metrics, external metrics are recognized as important product metrics. Besides system behavior, internal metrics and their target values are influenced by the application area of the product too. In our case experiences (as a third-party evaluator of C/C++ code in applications for retail petroleum systems), we found that
internal metrics and their target values differ with their application area too. We think that no universal set of metrics for different products exists. For each assessment, an appropriate set should be selected and criteria should be set. This influences the way product metrics are validated. Normally (and also during the workshop) validation focuses on the relationships between the metrics and the properties they should predict. However, taking the application dependency of software products seriously means that the results cannot be generalized. For each assessment, a dedicated set of metrics and its associated target values should be selected and justified. Concepts and ideas for conducting this are being developed at Eindhoven University of Technology.

F. Simon: The use of object-oriented product metrics to improve software quality is still very difficult: although there exist some great tools for measuring software, and although there exist several dozen different metrics, the validation of the usefulness of measurement is still at the beginning. In my opinion the workshop showed some weak points:
- a theory for establishing a framework to validate product metrics is still missing,
- new metrics are needed to represent interesting software properties with a strong correlation to external software attributes,
- there is only poor knowledge about the subjectivity of quality models and their dependent variables,
- and there are only a few case studies on how to successfully introduce a measurement program within the software lifecycle.
But in my opinion, the measurement community is on the right way toward these goals.
5. Abstracts of presented papers
Workshop 9, paper 1: Do Metrics Support Framework Development?
Serge Demeyer, Stéphane Ducasse
Software Composition Group, University of Berne
{demeyer,ducasse}@iam.unibe.ch
http://www.iam.unibe.ch/~scg/
Introduction

It is commonly accepted that iteration is necessary to achieve a truly reusable framework design [4], [6], [8]. However, project managers are often reluctant to apply this common knowledge because of the difficulty of controlling an iterative development process. Metrics are often cited as instruments for controlling software development projects [5], [7]. In the context of iterative framework design, one would like a metric programme that (a) helps in assigning priorities to the parts of the system that need to be redesigned first, and (b) tells whether the system design is
stabilising; in short, a metric programme should detect problems and measure progress. This paper discusses whether metrics can be used to detect problems and to measure progress. We summarise the results of a case study performed within the context of the FAMOOS project (see http://www.iam.unibe.ch/~famoos/), a project whose goal is to come up with a set of reengineering techniques and tools to support the development of object-oriented frameworks. Since we must deal with large-scale systems (over 1 million lines of code) implemented in a variety of languages (C++, Ada, Smalltalk and Java), metrics seem especially appealing.
Case Study

To report on our experiments with metrics, we selected a case study outside the FAMOOS project: the VisualWorks/Smalltalk user-interface framework. Besides being an industrial framework that provides full access to different releases of its source code, the framework offers some extra features which make it an excellent case for studying iterative framework development. First, it is available to anyone who is willing to purchase VisualWorks/Smalltalk, which ensures that the results in this paper are reproducible. Second, the changes between the releases are documented, which makes it possible to validate experimental findings. Finally, the first three releases (1.0, 2.0 & 2.5) of the VisualWorks framework depict a fairly typical example of a framework life-cycle [3], meaning it is representative.

The metrics evaluated during the case study were selected from two sources, namely [7] and [1]. They measure method size (in terms of message sends, statements and lines of code); class size (in terms of methods, message sends, statements, lines of code, instance variables and class variables); and inheritance layout (in terms of hierarchy nesting level, immediate children of a class, methods overridden, methods extended, and methods inherited). We are aware that other metrics have been proposed in the literature and are not included in the evaluation. Especially the lack of coupling and cohesion metrics might seem quite surprising. This incompleteness is due to several reasons: first, because some coupling and cohesion metrics lack precise definitions; second, because coupling and cohesion metrics are subject to controversy in the literature; and third, because most such metrics cannot accurately be computed because of the lack of typing in Smalltalk.
Experiment and Results

Note that within the scope of this paper, we can only provide a summary of the experiment and the results. We refer the interested reader to [2] for a full report including the actual data. In the near future, we will run the same experiment on different case studies to verify these results. We will also evaluate other metrics, especially coupling and cohesion metrics.

Problem Detection. To evaluate all the above metrics for problem detection, we applied each metric on one release and examined whether the parts that are rated
'too complex' improved their measurement in the subsequent release. We ran every test with several threshold values to cancel out the effects of the threshold values. We observed that between one half and two thirds of the framework parts that were rated 'too complex' did not improve their measurement in the subsequent release. On the contrary, quite a lot of them even worsened their measurement. Improving these parts was definitely not necessary for the natural evolution of the framework. Consequently, we conclude that the evaluated metrics (i.e., size and inheritance metrics) are unreliable for detecting problems during an iterative framework design.

Progress Measurement. To evaluate all the above metrics for progress measurement, we measured the differences between two subsequent releases. For those parts of the framework that changed their measurement, we performed a qualitative analysis in order to verify whether we had identified relevant changes. The qualitative analysis was based on manual inspection of documentation and source code. Afterwards, we examined the change in measurements to see whether the design was indeed stabilising. We observed that the metrics are very accurate in analysing the differences between two releases and as such can be used to measure progress. In particular, interpretation of changes in inheritance values reveals a lot about the stability of the inheritance tree.
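The problem-detection procedure just described can be sketched as follows. The class names and metric values below are invented for illustration and are not taken from the VisualWorks case study; the sketch only shows the mechanics of flagging 'too complex' parts in one release and checking for improvement in the next.

```python
# Hypothetical 'methods per class' measurements for two subsequent releases.
methods_per_class_r1 = {"Window": 42, "Button": 7, "Controller": 55}
methods_per_class_r2 = {"Window": 48, "Button": 6, "Controller": 30}

def too_complex(metrics, threshold):
    """Parts whose measurement exceeds the chosen threshold."""
    return {name for name, value in metrics.items() if value > threshold}

def improved(flagged, before, after):
    """Flagged parts whose measurement got better in the next release."""
    return {name for name in flagged if name in after and after[name] < before[name]}

flagged = too_complex(methods_per_class_r1, threshold=40)
better = improved(flagged, methods_per_class_r1, methods_per_class_r2)
print(sorted(flagged))  # ['Controller', 'Window']
print(sorted(better))   # ['Controller'] (Window even worsened its measurement)
```

Running the same check over several threshold values, as the study did, cancels out the sensitivity of the result to any single threshold choice.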
Acknowledgements This work has been funded by the Swiss Government under Project no. NFS-200046947.96 and BBW-96.0015 as well as by the European Union under the ESPRIT programme Project no. 21975.
References
1. Shyam R. Chidamber and Chris F. Kemerer, A Metrics Suite for Object Oriented Design, IEEE Transactions on Software Engineering, vol. 20, no. 6, June 1994, pp. 476-493.
2. Serge Demeyer and Stéphane Ducasse, Metrics, Do They Really Help?, Technical Report. See http://www.iam.unibe.ch/~scg/.
3. Erich Gamma, Richard Helm, Ralph Johnson and John Vlissides, Design Patterns, Addison-Wesley, Reading, MA, 1995.
4. Adele Goldberg and Kenneth S. Rubin, Succeeding With Objects: Decision Frameworks for Project Management, Addison-Wesley, Reading, MA, 1995.
5. Brian Henderson-Sellers, Object-Oriented Metrics: Measures of Complexity, Prentice-Hall, 1996.
6. Ivar Jacobson, Martin Griss and Patrik Jonsson, Software Reuse, Addison-Wesley/ACM Press, 1997.
7. Mark Lorenz and Jeff Kidd, Object-Oriented Software Metrics: A Practical Approach, Prentice-Hall, 1994.
8. Trygve Reenskaug, Working with Objects: The OOram Software Engineering Method, Manning Publications, 1996.
H.A. Sahraoui
Workshop 9, paper 2: Assessment of Large Object Oriented Software Systems - a metrics-based process -
Gerd Köhler, Heinrich Rust and Frank Simon
Computer Science Department, Technical University of Cottbus
P.O. Box 101344, D-03013 Cottbus, Germany
(hgk, rust, simon)@informatik.tu-cottbus.de
Motivation
This extended abstract presents an assessment process for large software systems. Our goal is to define a self-improving process for the assessment of large object oriented software systems. Our motivation for developing such a process is our perception of the following problem in software development practice: large software projects tend to outgrow the developers' capacity to keep an overview. After some time it becomes necessary to rework parts of these "legacy systems". But where should the capacity for software reengineering be applied? Finite resources have to be used efficiently. Our process should help project teams to identify the most critical aspects regarding the internal quality of large software systems. The need for such a process increases when software has a long life cycle, because then risks such as application-area shift, quick additions caused by customer complaints, development-team splits, documentation/implementation synchronisation problems etc. may occur. Programs with problems like these have rising costs for error corrections and extensions. Many aspects of a program have to be considered before recommendations for reengineering can be given. Our work focuses on one part of the total assessment process which has to be performed by an external review organisation, namely the source code assessment.
A process for audits of large software systems
The purpose of our process is to help teams identify the spots where reengineering capacity is best applied to improve a system. The following ideas shaped our process:
• Human insight is necessary to check whether a method, class or subsystem has to be reworked.
• In very large systems, not all modules can be inspected manually.
• Automatic software measurements can help to preselect suspicious modules for manual inspection.
We assume a good correlation between the manual identification of anomalous structures and the identification of anomalies based on automatic measurements. For our purpose we developed an adjustable metrics tool called Crocodile, which is fully integrated into an existing CASE tool.
For a full version and references see our paper in [W. Melo, S. Morasca, H.A. Sahraoui: "Proceedings of OO Product Metrics Workshop", CRIM, Montréal, 1998], pp. 16-22.
Thus, in very large systems, the efficient application of human resources should be guided by an automated analysis of the system.
The process is divided into the subprocesses assessment preparation (considering all special requirements like schedules, maximal effort, specialised quality goals, and measurement data calculation), assessment execution (the source code review of the most critical software parts, as identified by Crocodile) and assessment reflection (considering process improvements for further assessments, including the validation of the automatic raw data collection and its usage for module selection).
Experiences
This assessment process was used for three object oriented programs provided by an industrial project partner (Java and C++ projects with several hundred classes). Every assessment process was planned with an effort of about 40 person-hours, distributed over the phases assessment preparation and assessment execution. The initial quality model used for the assessment was built from our own analysis and programming experience. A report about the results was sent to the customer. The results of the largest project in detail: the module selection was done by analysing frequency charts for every metric used, outlier charts and a cumulated diagram. The automatic software measurements were used to preselect suspicious modules for the inspection, in this case three classes with altogether 95 methods. Our checklist contains 42 questions to identify anomalies for 9 indicators. The items identifying method anomalies were applied to every method. 206 items failed and showed anomalies for all indicators for all three classes. Independently of the customer feedback we spent further work on the reflection phase: we reviewed three further, randomly chosen classes. Doing so we detected only 11 failed items. They indicated anomalies for two indicators of one class and for only one indicator of the other two classes.
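The preselection of suspicious modules from automatic measurements can be sketched as follows. The simple mean-plus-standard-deviation outlier rule and the data are our own illustration; the assessments described here used frequency charts, outlier charts and a cumulated diagram instead.

```python
# Sketch: preselect suspicious modules (outliers on one metric) for manual inspection.

def mean(xs):
    return sum(xs) / len(xs)

def stddev(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def preselect_outliers(metric_values, k=1.5):
    """Return modules whose value lies more than k standard deviations above the mean."""
    values = list(metric_values.values())
    cutoff = mean(values) + k * stddev(values)
    return sorted(m for m, v in metric_values.items() if v > cutoff)

# Invented 'number of methods' values per class.
methods_per_class = {"Parser": 95, "Token": 8, "Node": 11, "Util": 9, "Visitor": 12}
print(preselect_outliers(methods_per_class))  # ['Parser']
```

Only the preselected classes would then go through the manual checklist-based review.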
Thus, the process for this project seemed to be quite effective, and because of the few resources we spent on it, it also seemed to be quite efficient.
[Figure: A process model for quality assessment of large software systems. Three subprocesses are connected by feedback loops: (1) assessment preparation, comprising resource planning, quality-model definition (1.2), measurement environment (1.3) and automatic raw data collection (1.4); (2) assessment execution, comprising reverse engineering (2.1), module selection (2.2), source code review (2.3) and anomaly identification (2.4); (3) assessment reflection, comprising a randomized measurement test (3.1), an effectiveness check (3.2), an efficiency check (3.3) and process improvements (3.4). Product data, measurement parameters, reports and customer feedback flow between the subprocesses; the legend distinguishes processes, subprocesses, precedence arrows and datastores (paper or electronic).]
Workshop 9, paper 3: Using Object-Oriented Metrics for Automatic Design Flaws Detection in Large Scale Systems
Dipl.Ing. Radu Marinescu
"Politehnica" University in Timisoara
Faculty of Automation and Computer Engineering
[email protected] Introduction In the last decade the object-oriented paradigm has decisively influenced the world of software engineering. The general tendency manifested in the last time is to redesign these older industrial object-oriented systems, so that they may take full advantage of the today’s knowledge in object-orientation, and thus improve the quality of their design. In the first stage of a redesign process it is needful to detect what are the design-flaws contained in the application and where are this flaws located. The detection of design problems for large or very large systems is impossible to be fulfilled manually and must therefore be accomplished by automated methods. This paper presents the possibility of using object-oriented software metrics for the automatic detection of a set of design problems. We illustrated the efficiency of this approach by discussing the conclusions of an experimental study that uses a set of three metrics for problem detection and applies them to three projects. These three metrics are touching three main aspects of object-oriented design, aspects that have an important impact on the quality of the systems – i.e. maintenance effort, class hierarchy layout and cohesion. The importance of this paper is increased by the fact that it contributes to the study of object-oriented metrics in a point where this is specially needed: the addition of new practical experience and the practical knowledge that derives from it.
WMC – Weighted Method Count [Chid94]
The metric proved to be a good indicator of the maintenance effort by correctly indicating the classes that are more error-prone. In terms of problem detection, we may assert that classes with very high WMC values are critical with respect to maintenance effort. The WMC value for a class may be reduced in two ways: by splitting the class or by splitting one or more of its very complex methods. A second conclusion is based on the observation that in all case studies the classes with the highest WMC values were the central classes of the project. This relation can be exploited at the beginning of a redesign operation on an unfamiliar project in order to detect the central classes.
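WMC sums a complexity weight over the methods of a class; Chidamber and Kemerer deliberately leave the weight open (with all weights equal to 1 it degenerates to a method count). A toy sketch, assuming cyclomatic complexity as the weight and with invented class data:

```python
# Sketch: WMC (Weighted Method Count) and its use for problem detection.

def wmc(method_complexities):
    """WMC of a class = sum of the complexity weights of its methods."""
    return sum(method_complexities)

# Invented per-method cyclomatic complexities.
classes = {
    "OrderManager": [12, 9, 15, 7],   # a central, complex class
    "Money":        [1, 1, 2],        # a small value class
}
wmc_values = {name: wmc(m) for name, m in classes.items()}
print(wmc_values)  # {'OrderManager': 43, 'Money': 4}

# Classes with very high WMC are flagged as critical for maintenance effort;
# the threshold of 30 is an arbitrary illustration.
critical = [c for c, v in wmc_values.items() if v > 30]
print(critical)  # ['OrderManager']
```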
NOC – Number of Children [Chid94]
This metric may be used to detect misuses of subclassing; in many cases this means that the class hierarchy has to be restructured at that point during the redesign operation. We have detected two particular situations of high NOC values that could be redesigned:
• Insufficient exploitation of common characteristics
• Root of the class hierarchy
TCC – Tight Class Cohesion [Biem95]
Classes that have a TCC value lower than 0.5 (sometimes 0.3) are candidates for a redesign process. The redesign consists of a possible splitting of the class into two or more smaller and more cohesive classes. From another perspective, classes with a low TCC may encapsulate more than one functionality. In other words, the design flaw that can be detected using TCC is lack of cohesion, and one concrete way to reduce or eliminate it is the splitting of the class.
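Roughly, Bieman and Kang define TCC as the relative number of directly connected method pairs, where two methods are connected if they use a common instance variable. A minimal sketch (the class and its attribute sets are invented for illustration):

```python
from itertools import combinations

def tcc(method_vars):
    """Tight Class Cohesion: fraction of method pairs that share at least one
    instance variable (directly connected pairs / all pairs)."""
    pairs = list(combinations(method_vars, 2))
    if not pairs:
        return 1.0
    connected = sum(1 for a, b in pairs if method_vars[a] & method_vars[b])
    return connected / len(pairs)

# A hypothetical class mixing two unrelated functionalities: the 'report'
# methods and the 'network' methods touch disjoint attribute sets.
mixed = {
    "format_report": {"title", "rows"},
    "print_report":  {"rows"},
    "open_socket":   {"host", "port"},
    "send":          {"port"},
}
print(round(tcc(mixed), 2))  # 0.33 -> below 0.5, a candidate for splitting
```

Splitting the class along the two disjoint attribute sets would raise the TCC of each resulting class.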
Conclusions
This study has indicated that metrics can be efficiently used for the automatic detection of design flaws and, more generally, that they can be useful tools in the early stages of re-engineering operations on large systems. On the other hand, we should not forget that metrics will never be able to offer 100% precise results; metrics will always be useful hints, but never firm certainties. A human mind will always be necessary to take the final decision in re-engineering matters. The conclusions of this study should also be validated by further experimental studies on other large-scale systems.
References
[Biem95] J.M. Bieman, B.K. Kang, Cohesion and Reuse in an Object-Oriented System, Proc. ACM Symposium on Software Reusability, April 1995.
[Chid94] S.R. Chidamber, C.F. Kemerer, A Metrics Suite for Object Oriented Design, IEEE Transactions on Software Engineering, Vol. 20, No. 6, June 1994.
Workshop 9, paper 4: An OO Framework for Software Measurement and Evaluation R. R. Dumke University of Magdeburg, Faculty of Informatics Postfach 4120, D-39016 Magdeburg, Germany Tel: +49-391-67-18664, Fax: +49-391-67-12810 email:
[email protected] Software Measurement includes the phases of the modeling of the problem domain, the measurement, the presentation and analysis of the measurement values, and the evaluation of the modelled software components (process components, product components and resources) with their relations. This enables the improvement and/or controlling of the measured software components. Software measurement
has existed for more than twenty years. Still, many problems in this area remain unsolved. Some of these problems are
• the incompleteness of the chosen models,
• the restriction of the thresholds to special evaluations in a special software environment,
• the weaknesses of measurement automation with metrics tools,
• the lack of metrics/measures validation,
• last but not least: the missing set of world-wide accepted measures and metrics including their units.
The problems of software measurement are also the main obstacles to the installation of metrics programs in an industrial environment. Hence, a measurement plan/framework is necessary which is based on the general experience of software measurement investigations. The application of a software measurement approach must be embedded in a business strategy such as the CAME strategy, based on a CAME measurement framework using CAME tools. The CAME strategy stands for
• community: the necessity of a group or team that is motivated and qualified to initiate the application of software metrics,
• acceptance: the agreement of the (top) management to install a metrics program in the (IT) business area,
• motivation: the production of measurement and evaluation results in a first metrics application which demonstrates the convincing benefits of the metrics,
• engagement: the spending of much effort to implement software measurement as a persistent metrics system.
We define our (CAME) software measurement framework with the following four phases:
• measurement views: the choice of the kind of measurement and the related metrics/measures,
• the adjustment of the metrics for the application field,
• the migration of the metrics along the whole life cycle and along the system structure (the behaviour of the metrics),
• the efficiency: the construction of a tool-based measurement.
The measurement choice step includes the choice of the software metrics and measures from the general metrics hierarchy, which can be transformed into a class hierarchy. The choice of metrics includes the definition of an object-oriented software metric as a class/object with attributes (the metric's value characteristics) and services (the metric's application algorithms). The steps of the measurement adjustment are
• the determination of the scale type and (if possible) the unit,
• the determination of the favourable values (thresholds) for the evaluation of the measured component, including their calibration,
• the tuning of the thresholds during software development or maintenance,
• the calibration of the scale depending on the improvement of knowledge in the problem domain.
The migration step addresses the definition of the behaviour of a metric class, such as the metric's tracing along the life cycle and the metric's refinement along the
software application. These aspects keep the dynamic characteristics that are necessary for the persistent installation of metrics applications and require a metrics data base or other kinds of metrics-value background. The measurement efficiency step includes the instrumentation or automation of the measurement process by tools. The tools supporting our framework are the CAME (Computer Assisted software Measurement and Evaluation) tools. The application of the CAME tools with the background of a metrics data base is the first phase of measurement efficiency, and the metrics class library is the final OO framework installation. Some first applications are described in detail in [1], [2] and [3], applying CAME tools. In this short paper it was only possible to indicate the principles and aspects of the framework phases: measurement choice, measurement adjustment, measurement migration and measurement efficiency. However, this approach clarifies the next steps after the initiatives of ISO 9000 certification and CMM evaluation on the one hand, and the definition and analysis of special metrics for small aspects on the other. Further research effort is directed at applying the OO measurement framework to a UML-based development method using a Java-based metrics class library.
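The framework's notion of a software metric as a class/object, with attributes for the value characteristics and services for the application algorithm, might be sketched like this. All names and the example metric are illustrative assumptions, not part of the framework's actual class library.

```python
# Sketch: an object-oriented software metric as a class with value
# characteristics (attributes) and application algorithms (services).

class Metric:
    def __init__(self, name, scale_type, unit=None, threshold=None):
        self.name = name
        self.scale_type = scale_type   # e.g. 'ratio', 'ordinal' (measurement adjustment)
        self.unit = unit
        self.threshold = threshold     # favourable-value boundary, may be recalibrated

    def apply(self, component):
        """Application algorithm: measure one software component."""
        raise NotImplementedError

    def evaluate(self, component):
        """Measure a component and judge it against the (calibrated) threshold."""
        value = self.apply(component)
        ok = self.threshold is None or value <= self.threshold
        return value, ok

class NumberOfMethods(Metric):
    """A hypothetical concrete metric class."""
    def __init__(self, threshold):
        super().__init__("NOM", scale_type="ratio", unit="methods",
                         threshold=threshold)

    def apply(self, component):
        return len(component["methods"])

nom = NumberOfMethods(threshold=20)
print(nom.evaluate({"methods": ["open", "close", "read"]}))  # (3, True)
```

A metrics class library in this style would collect many such `Metric` subclasses, one per metric in the hierarchy.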
References
[1] R.R. Dumke and E. Foltin, Metrics-Based Evaluation of Object-Oriented Software Development Methods, Proc. of the European CSMR, Florence, Italy, March 8-11, 1998, pp. 193-196.
[2] H. Grigoleit, Evaluation-based Visualization of Large Scale C++ Software Products (German), Diploma Thesis, University of Magdeburg, 1998.
[3] SMLAB: The Virtual Software Measurement Laboratory at the University of Magdeburg, Germany, http://ivs.cs.uni-magdeburg.de/sw-eng/us/
Workshop 9, paper 5: A Product Metrics Tool Integrated into a Software Development Environment - extended abstract -
Claus Lewerentz and Frank Simon
Software and Systems Engineering Group
Computer Science Department, Technical University of Cottbus
P.O. Box 101344, D-03013 Cottbus, Germany
(lewerentz, simon)@informatik.tu-cottbus.de
For a full version and references see our paper in [W. Melo, S. Morasca, H.A. Sahraoui: "Proceedings of OO Product Metrics Workshop", CRIM, Montréal, 1998], pp. 36-40.
Introduction
The goal of the Crocodile project is to provide concepts and tools for an effective usage of quantitative product measurement to support and facilitate design and code reviews. Our application field is the realm of object oriented programs and, particularly, reusable frameworks. The main concepts are
• measurement tool integration into existing software development environments, using existing tool sets and integration mechanisms,
• mechanisms to define flexible product quality models based on a factor-criteria-metrics approach,
• the use of meaningful measurement contexts isolating or combining product components to be reviewed,
• effective filtering and presentation of measurement data.
Our current implementation platform for a tool providing these concepts is TakeFive's SNiFF+, an industrial-strength integrated C++/Java programming environment.
Measurement environment
The Crocodile measurement tool is designed to be fully integrated into existing SDEs that provide object-oriented design and coding tools (e.g. structure editors, parsers, source code browsers) and version management. The main components of the Crocodile tool are abstract interfaces to the SDE services. These services are used to display critical program parts through the SDE user interface and to extract the data necessary for measurement. In line with the goal of using measurement-based analysis as early as possible in the development process, we concentrate on structural data available at the architecture level. Because Crocodile does not parse the source code itself but extracts the data from the SDE's internal database, it is language independent. Our current implementation platform supports, among other languages, C++ and Java.
Flexible and adaptable quality models
Crocodile uses the factor-criteria-metrics approach to connect the measures with high-level goals. The measures are defined using an interpretative metrics and query definition language on top of an SQL database system whose table structure implements the architectural data of the OO system. Besides simple selection and joining of basic data, arithmetic operators are used to scale and to combine simple measures into more complex ones. To be as flexible as possible, Crocodile does not come with fixed built-in quality models, so the full model has to be defined: starting from the root, which could be a general quality goal like reusability, descriptions of directed paths from this goal down to the concrete measures have to be entered. It is possible to connect one measure to different design principles and criteria. The quality model itself is used by Crocodile to provide an interpretation of the measurement results.
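A factor-criteria-metrics model of the kind described above can be sketched as a directed tree from a quality goal down to concrete measures. The structure and all names below are our own illustration, not Crocodile's actual definition language.

```python
# Sketch: a factor-criteria-metrics (FCM) quality model as nested mappings.
# factor (quality goal) -> criteria -> concrete measures.

quality_model = {
    "reusability": {
        "low coupling":      ["CBO", "fan-out"],
        "understandability": ["WMC", "LOC"],
    },
}

def measures_for(model, factor):
    """Collect all concrete measures reachable from a quality factor."""
    found = []
    for criterion, measures in model.get(factor, {}).items():
        found.extend(measures)
    return found

print(measures_for(quality_model, "reusability"))
# ['CBO', 'fan-out', 'WMC', 'LOC']
```

Note that one measure may appear under several criteria, mirroring Crocodile's ability to connect one measure to different design principles.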
[Figure: Entity-relationship model of the architectural data extracted by Crocodile. Packages consist of Packages, and Classes are implemented in Packages; Classes (name, abstract?) have Attributes (name, visibility) and Methods (name, visibility, length, abstract?); Classes inherit from Classes; Methods use Attributes and other Methods. Cardinalities are 0,n / 0,m throughout.]
Measurement contexts and filtering
To be able to measure large object oriented systems, Crocodile provides some helpful settings of measurement contexts:
• Class focus: contains all classes to be measured. To measure particular parts of a composed program, only those classes are included in the focus.
• Inheritance context: the functionality of classes from the inheriting context - their methods and attributes - is copied into subclasses of the focus.
• Use context: when measuring a focus, only references to attributes and methods of classes within the focus are considered. Selecting an additional use context makes it possible to selectively include use relations to classes outside the class focus.
To allow an interpretation of the measurement results as either good or critical with respect to a particular (sub-)goal, we provide the definition of thresholds for every measure. These quality levels provide a means to filter the huge amount of measurement values down to those indicating critical situations. Crocodile supports the following threshold definitions: values are critical if they
• lie inside an absolute interval / outside an absolute interval,
• belong to the group with the x highest or lowest values,
• belong to the group with the y percent highest or lowest values.
Many different kinds of diagrams, like frequency charts or outlier charts, help to visualize the measurement results and get an easy overview of the software.
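The three threshold definitions listed above can be sketched as follows (illustrative code, not Crocodile's implementation; only the "highest values" direction is shown).

```python
# Sketch: the three kinds of threshold definitions for filtering critical values.

def critical_by_interval(values, low, high):
    """Critical if the value lies outside the absolute interval [low, high]."""
    return {k: v for k, v in values.items() if v < low or v > high}

def critical_top_x(values, x):
    """Critical if the value belongs to the x highest values."""
    ranked = sorted(values, key=values.get, reverse=True)
    return {k: values[k] for k in ranked[:x]}

def critical_top_percent(values, y):
    """Critical if the value belongs to the y percent highest values."""
    x = max(1, round(len(values) * y / 100))
    return critical_top_x(values, x)

# Invented lines-of-code values per class.
loc = {"A": 900, "B": 120, "C": 80, "D": 2500}
print(critical_by_interval(loc, 50, 1000))  # {'D': 2500}
print(critical_top_x(loc, 2))               # {'D': 2500, 'A': 900}
```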
Experiences Crocodile provides quite simple but powerful means to create a specialized measurement process. The quality models can be easily adapted to the user’s specific goals and can be used to support different activities in engineering and reengineering of object oriented applications. Due to Crocodile’s integration into a software development environment like SNiFF+ the measurement activities are smoothly integrated into existing software development processes.
Workshop 9, paper 6: Collecting and Analyzing the MOOD2 Metrics
Fernando Brito e Abreu 1, Jean Sebastien Cuche 2
1 ISEG/UTL, 2 École des Mines de Nantes
INESC, R. Alves Redol, nº9, Lisboa, Portugal
{fba, sebastien.cuche}@inesc.pt
Abstract. An experiment of systematic metrics collection is briefly described. The MOOD2 metrics set, an extension of the MOOD set, was used for the first time. To collect the metrics, a new generation of the MOODKIT tool, with a WWW interface and built upon the GOODLY design language, was built.
Introduction
The MOOD metrics set (Metrics for Object Oriented Design) was first introduced in [Abreu94] and its use and validation were presented in [Abreu95, Abreu96a, Abreu96b]. During the corresponding experiments it became evident that some important aspects of OO design were not being measured by the initial set. The need also arose to express aspects of OO design at different levels of granularity. Those shortcomings were addressed in the MOOD2 metrics set [Abreu98a], which can be collected with MOODKIT G2, a tool built upon the GOODLY language. The MOOD2 metrics set includes several granularity levels: attribute, operation, class, module, and inter- and intra-specification. A detailed definition of each of the metrics can be found in [Abreu98a]. Metrics at the lower granularity levels are used to compute the ones at the higher levels. In this experiment we focused our attention on specification-level metrics to assess overall OO design quality.
GOODLY and the MOODKIT G2 tool
GOODLY (a Generic Object Oriented Design Language? Yes!) allows one to express design aspects such as modularization, class state and behavior, feature visibility, inheritance relations, message exchanges and information hiding [Abreu97]. The highest-level organization unit in GOODLY is called a specification (a set of modules). Instead of the single-tiered architecture of MOODKIT G1, G2 has a two-tiered one. The first tier consists of formalism converters that generate GOODLY specifications either by direct engineering from OOA&D specifications contained in a CASE tool repository (such as OMT or UML models produced with Paradigm Plus) or by reverse engineering of source code written in OO languages such as C++, Eiffel, Smalltalk, OOPascal or Java. The second tier consists of an analyzer of GOODLY code, a repository and a web-based tool interface. The analyzer does lexical-syntactic and referential integrity verification (traceability analysis), generates HT-GOODLY, extracts the MOOD2 metrics and also produces other coupling information to allow modularity analysis [Abreu98b]. HT-GOODLY is a hypertext
version of GOODLY that allows improved understandability by context swapping through navigation.
The experimental results
The analyzed sample consists of around fifty GOODLY specifications with varied profiles, generated by reverse engineering of systems written in Smalltalk, Eiffel and C++. To avoid sample noise, we applied outlier removal techniques before the analysis. Each specification was classified according to criteria such as application domain, origin, original language, version and production date. A set of hypotheses was tested against the sample, from which the following conclusions arose. The metrics correlate very weakly with each other, and thus represent different design aspects. The metrics also correlate very weakly with system size, whether we use LOC, number of classes or other size measures; therefore they are size-independent. When comparing different versions of the same systems we could find signs of design quality evolution over time; therefore the metrics are sensitive enough to assess incremental quality changes. While we could not find evidence of an impact of the application domain on the resulting design quality, the same does not apply to software origin. The metrics allow one to assess the amount of reuse in great detail; we were able to observe a large variance in effective reuse. The detailed results of this experiment will soon be published in a journal and meanwhile can be obtained by contacting the authors.
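The size-independence check mentioned above amounts to correlating each metric with a size measure and verifying that the correlation is low. A sketch with a hand-rolled Pearson coefficient and invented data (the actual study's data and statistical procedure are not reproduced here):

```python
# Sketch: checking size-independence of a metric via Pearson correlation.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

num_classes = [12, 45, 80, 150, 300]           # system sizes (invented)
metric_vals = [0.31, 0.28, 0.35, 0.30, 0.33]   # a hypothetical MOOD2 metric

r = pearson(num_classes, metric_vals)
print(abs(r) < 0.5)  # a low |r| suggests the metric is size-independent
```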
References [Abreu94] Abreu, F.B. & Carapuça, R., «Object-Oriented Software Engineering: Measuring and Controlling the Development Process», Proc. 4th Int. Conference on Software Quality, ASQC, McLean, VA, USA, October 1994. [Abreu95] Abreu, F.B. & Goulão, M. & Esteves, R., «Toward the Design Quality Evaluation of Object-Oriented Software Systems», Proc. 5th Int. Conference on Software Quality, ASQC, Austin, TX, USA, October 1995. [Abreu96a] Abreu, F.B. & Melo, W., «Evaluating the Impact of Object-Oriented Design on Software Quality», Proc. 3rd Int. Software Metrics Symposium, IEEE, Berlin, March 1996. [Abreu96b] Abreu, F.B. & Esteves, R. & Goulão, M, «The Design of Eiffel Programs: Quantitative Evaluation Using the MOOD Metrics», Proc. TOOLS USA’96, Santa Barbara, California, USA, August 1996. [Abreu97]* Abreu, F.B. & Ochoa, L. & Goulão, M., «The GOODLY Design Language for MOOD Metrics Collection», INESC internal report, March 1997. [Abreu98a]* Abreu, F.B., «The MOOD2 Metrics Set», INESC internal report, April 1998. [Abreu98b] Abreu, F.B. & Pereira, G. & Sousa, P., «Reengineering the Modularity of Object Oriented Systems», ECOOP’98 Workshop 2, Brussels, Belgium, July 1998. * - available at http://albertina.inesc.pt/ftp/pub/esw/mood
Workshop 9, paper 7: An Analytical Evaluation of Static Coupling Measures for Domain Object Classes (Extended Abstract) Geert Poels Department of Applied Economic Sciences, Katholieke Universiteit Leuven Naamsestraat 69 B-3000 Leuven, Belgium
[email protected] Why Measuring Coupling ? In the context of object-oriented software systems coupling refers to the degree of interdependence between object classes. According to Chidamber and Kemerer coupling between object classes must be controlled [3, p. 486]: 1. Excessive coupling between object classes is detrimental to modular design and prevents reuse. The more independent a class is, the easier it is to reuse it in another application. 2. In order to improve modularity and encapsulation, inter-object class couples should be kept to a minimum. The larger the number of couples, the higher the sensitivity to changes in other parts of the design, and therefore maintenance is more difficult. 3. A measure of coupling is useful to determine how complex the testing of various parts of a design are likely to be. The higher the inter-object class coupling, the more rigorous the testing needs to be.
The Context
The method we use to specify an OO domain model is MERODE [8]. This method combines formal specification techniques with the OO paradigm in order to specify and analyse the functional and technical requirements for an information system. A characteristic of MERODE is that it is model-driven. A clear distinction is made between functional requirements related to the application domain (i.e., business functionality), functional requirements related to user-requested information system functionality, and technical requirements. These three types of requirements are respectively specified in the domain model, the function model and the implementation model. The method is model-driven in the sense that, starting from the domain model, the other types of models are generated by incrementally specifying the other types of requirements. Specific to MERODE is that no message passing is allowed between the instances of the domain object classes. However, as MERODE supports the modelling of generalisation/specialisation and existence dependency relationships between
(Footnote: the existence dependency relation is a valuable alternative to the aggregation relation, as its semantics is very precise and its use clear-cut, in contrast with the concept of aggregation [8].)
domain object classes, there exist two types of static coupling between domain object classes. The first type is inheritance coupling. As a rule, a child object class inherits the features of its parent object class [7]. A change of the features of the parent object class is propagated into the child object class, as the newly defined features of the child may reference the inherited features. The second type of coupling is abstraction coupling. The relationship between an object class X and an object class Y, where X is existence dependent on Y, is established by declaring in X an attribute v of type Y.
The Static Coupling Measures
The following measures quantify the extent of static class coupling. Let X be a domain object class defined within the domain model DM.
Inbound inheritance coupling: IIC(X) = 1 if X inherits from an object class in DM, 0 otherwise
Outbound inheritance coupling: OIC(X) = the count of object classes in DM that inherit from X
Inbound abstraction coupling: IAC(X) = the count of attributes of X that have a class in DM as type (if there are n existence dependency relationships between X and Y, then X will have n attributes of type Y)
Outbound abstraction coupling: OAC(X) = the count of attributes of type X that have been declared in classes of DM
The next two measures quantify the extent of static coupling in a domain model DM.
Inheritance coupling: IC(DM) = Σ_{X ∈ DM} IIC(X) = Σ_{X ∈ DM} OIC(X) = the count of inheritance relationships in DM
Abstraction coupling: AC(DM) = Σ_{X ∈ DM} IAC(X) = Σ_{X ∈ DM} OAC(X) = the count of abstraction relationships in DM
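The measures above can be illustrated on a toy MERODE-style domain model. Each class records its (single) parent and its class-typed attributes; all class names and the model itself are invented for illustration.

```python
# Sketch: the static coupling measures on a toy domain model.
# parent: inheritance (single inheritance); attr_types: abstraction coupling
# via attributes whose type is another domain class.

domain_model = {
    "Person":    {"parent": None,     "attr_types": []},
    "Customer":  {"parent": "Person", "attr_types": []},
    "Order":     {"parent": None,     "attr_types": ["Customer"]},
    "OrderLine": {"parent": None,     "attr_types": ["Order", "Product"]},
    "Product":   {"parent": None,     "attr_types": []},
}

def iic(dm, x):   # inbound inheritance coupling (0 or 1: single inheritance)
    return 1 if dm[x]["parent"] in dm else 0

def oic(dm, x):   # outbound: classes in DM that inherit from x
    return sum(1 for c in dm.values() if c["parent"] == x)

def iac(dm, x):   # inbound abstraction: attributes of x typed with a DM class
    return sum(1 for t in dm[x]["attr_types"] if t in dm)

def oac(dm, x):   # outbound abstraction: attributes of type x declared in DM
    return sum(1 for c in dm.values() for t in c["attr_types"] if t == x)

def ic(dm):       # count of inheritance relationships in DM
    return sum(iic(dm, x) for x in dm)

def ac(dm):       # count of abstraction relationships in DM
    return sum(iac(dm, x) for x in dm)

print(ic(domain_model), ac(domain_model))  # 1 3
print(oac(domain_model, "Order"))          # 1 (OrderLine depends on Order)
```

Note how summing the inbound measures over all classes equals summing the outbound ones, as stated by the IC and AC equations.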
The above measures take into account the strength of the coupling between domain object classes. The measure IIC reflects for instance the MERODE constraint of single inheritance. The abstraction coupling measures count all occurrences of coupling relationships between two classes. However, the measures do not count indirect coupling relationships. Moreover, they are coarse-grained measures that are derived in an early stage from high-level conceptual models.
Analytical Evaluation

We checked whether the above static coupling measures for domain object classes satisfy the coupling properties published by Briand et al. [1]. Measure properties must be regarded as desirable properties for software measures. They formalise the concept being measured as it is intuitively understood by the developers of the property set. However, most sets of measure properties consist of necessary, but not sufficient, properties [4]. As a consequence, measure properties are useful to invalidate proposed software measures, but they cannot be used to formally validate them. In other words, we wished to check whether our static coupling measures do not contradict a minimal set of subjective, but experienced, viewpoints on the concept of coupling. However, showing the invariance of the coupling properties for our measures is not the same as formally proving that they are valid. The coupling properties of Briand et al. were proposed in [1] and further refined in [2], [5], [6]. We refer to these papers for the formal definition of the coupling properties. Our static coupling measures are invariant to the coupling properties under certain assumptions regarding the representation of a domain model as a generic software system.
Conclusions

In this paper a set of coarse-grained static coupling measures was proposed for domain object classes. The measures are constructed such that they measure the strength of the couplings between object classes, and not merely the number of classes that are coupled to the class of interest (as is done by CBO [3]). The measures were analytically evaluated using a well-known set of coupling properties. Conformance to these measure properties does not formally validate our proposed measures, but shows that they do not contradict popular and substantiated beliefs regarding the concept of coupling.
References
[1] L.C. Briand, S. Morasca and V.R. Basili, ‘Property-Based Software Engineering Measurement’, IEEE Transactions on Software Engineering, Vol. 22, No. 1, January 1996, pp. 68-86.
[2] L.C. Briand, S. Morasca and V.R. Basili, ‘Response to: Comments on «Property-Based Software Engineering Measurement»: Refining the Additivity Properties’, IEEE Transactions on Software Engineering, Vol. 23, No. 3, March 1997, pp. 196-197.
Object-Oriented Product Metrics for Quality Assessment
263
[3] S.R. Chidamber and C.F. Kemerer, ‘A Metrics Suite for Object Oriented Design’, IEEE Transactions on Software Engineering, Vol. 20, No. 6, June 1994, pp. 476-493.
[4] B.A. Kitchenham and J.G. Stell, ‘The danger of using axioms in software metrics’, IEE Proceedings on Software Engineering, Vol. 144, No. 5-6, October-December 1997, pp. 279-285.
[5] S. Morasca and L.C. Briand, ‘Towards a Theoretical Framework for Measuring Software Attributes’, Proceedings of the IEEE 4th International Software Metrics Symposium (METRICS’97), Albuquerque, NM, USA, November 1997.
[6] G. Poels and G. Dedene, ‘Comments on «Property-Based Software Engineering Measurement»: Refining the Additivity Properties’, IEEE Transactions on Software Engineering, Vol. 23, No. 3, March 1997, pp. 190-195.
[7] M. Snoeck and G. Dedene, ‘Generalisation/Specialisation and Role in Object Oriented Conceptual Modeling’, Data and Knowledge Engineering, Vol. 19, No. 2, June 1996, pp. 171-195.
[8] M. Snoeck and G. Dedene, ‘Existence Dependency: The key to semantic integrity between structural and behavioural aspects of object types’, IEEE Transactions on Software Engineering, Vol. 24, No. 4, April 1998, pp. 233-251.
Workshop 9, paper 8: Impact of Complexity on Reusability in OO Systems
Yida Mao, Houari A. Sahraoui and Hakim Lounis
CRIM, 550 Sherbrooke Street West, #100, Montréal, Canada H3A 1B9
{ymao, hsahraou, hlounis}@crim.ca
Introduction

It is widely recognized today that reuse reduces the costs of software development [1]. This reduction stems from two factors: (1) developing new components is expensive, so reusing existing ones avoids that cost, and (2) reusable components are supposed to have been tested and are thus not expensive to maintain. Our concern is to automate the detection of potentially reusable components in existing systems. Our position is that some internal attributes, like complexity, can be good indicators of the reuse potential of a component. We present an experiment for verifying a hypothesis on the relationship between volume and complexity on the one hand and the reusability of existing components on the other. We derived a set of related metrics to measure components' volume and complexity. This verification is done through a machine-learning approach (the C4.5 algorithm, with windowing and cross-validation). Two kinds of results are produced: (1) a predictive model is built using a set of volume and complexity metrics, and (2) for this predictive model, we measure its completeness, correctness, and global accuracy.
A Reusability Hypothesis and its Derived Metrics

Different aspects can be considered to measure empirically the reusability of a component, depending on the adopted point of view. One aspect is the amount of work needed to reuse a component from one version of a system to another version of the same system. Another aspect is the amount of work needed to reuse a component from one system to another system of the same domain. This latter aspect was adopted as the empirical reusability measure for our experiment. To define the possible values for this measure, we worked with a team in CRIM specializing in developing intelligent multiagent systems⁶. The resulting value classes are:
1. Totally reusable: the component is generic to a certain domain (in our case "intelligent multiagent systems").
2. Reusable with minimum rework: less than 25% of the code needs to be altered to reuse the component in a new system of the same domain.
3. Reusable with high amount of rework: more than 25% of the code needs to be changed before reusing the component in a new system of the same domain.
4. Not reusable at all: the component is too specific to the system to be reused.
Measures of complexity and volume have been shown to be capable of predicting the maintenance effort and the cost of rework in reusing software components [3], [2]. We define our hypothesis as follows:
Hypothesis: A component's volume and complexity somehow affect its reusability.
The following are some of the thirteen metrics that were selected to measure the volume and complexity of a candidate class according to the hypothesis: WMC (Weighted Methods Per Class), RFC (Response For Class), NAD (Number of Abstract Data Types).
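The four-level scale above can be restated as a simple classification function. This is a minimal sketch of the scheme only: the function name and signature, and the flags for the "totally reusable" and "not reusable" cases, are our assumptions; only the 25% threshold comes from the text.

```python
def reusability_class(rework_fraction, domain_generic=False, system_specific=False):
    """Map an empirically measured rework fraction (changed SLOC / total SLOC
    needed to reuse the component in another system of the same domain) onto
    the four-level reusability scale used in the experiment."""
    if domain_generic:          # generic to the whole domain
        return "totally reusable"
    if system_specific:         # too tied to the original system
        return "not reusable at all"
    if rework_fraction < 0.25:  # the paper's 25% threshold
        return "reusable with minimum rework"
    return "reusable with high amount of rework"
```

For example, a component needing 10% of its code changed would fall in class 2, while one needing 40% would fall in class 3.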
Hypothesis Verification

We used data from an open multiagent system development environment called LALO. This system has been developed and maintained since 1993 at CRIM. It contains 87 C++ modules/classes and approximately 47K source lines of code (SLOC). The actual data for the suite of measures we proposed in our hypothesis were collected directly from the source code. We exploited an OO metrics tool, QMOOD++ [5], which was used to extract Chidamber and Kemerer's and Bansiya's OO design metrics. To verify the hypothesis stated in Section 2, we built characterization models that can be used to easily assess class reusability based on their type and their level of complexity. The model-building technique that we used is a machine learning algorithm called C4.5 [4]. C4.5 induces classification models, also called decision trees, from data. C4.5 derives a rule set from a decision tree by writing a rule for each path in the decision tree from the root to a leaf. In that rule the left-hand side is
⁶ Details on the work of this team can be found at http://www.crim.ca/sbc/english/lalo/
easily built from the labels of the nodes and the labels of the edges. The resulting rule set can be simplified. To evaluate the class reusability characterization model based on our measures, we need criteria for evaluating the overall model accuracy. Evaluating model accuracy tells us how good the model is expected to be as a predictor. If the characterization model based on our suite of measures provides good accuracy, it means that our measures are useful in identifying reusable classes. Three criteria for evaluating the accuracy of predictions are the measures of correctness, completeness, and global accuracy. As we can see in table 1, C4.5 presents good results in our experiment. The results are quite high for the verification of the complexity hypothesis. In addition, C4.5 induced a rule-based predictive model for the hypothesis; the size of the predictive model is 10 rules. An example of these generated rules is given by the following rule: Rule 5: NOT > 2 NAD

From Jacobson's method, we find use cases particularly relevant. We base our courses mainly on the three methods we just mentioned. Other colleagues in the area are basing their teaching on UML (Unified Modeling Language) [8]. We have serious reservations about using UML as the basis for university-level teaching, as, in our opinion, it is important to use a coherent and consistent method to teach object-oriented concepts. UML is a large notation, still in the development stage and with many inconsistencies to be resolved; it is not yet a fully-fledged language. (In this regard, we have been working on formalizing a subset of UML with the aim of producing a coherent tool for modelling.) UML can be very important for tool builders, but needs to be integrated with a process, or a method, to be useful in a course like ours. Therefore, we use some notation and terminology taken from UML, but UML is not the basic instrument for teaching.
3.2 Object and Dynamic Models
Our final goal is to specify the problem as a set of concurrent objects, whose classes are represented in a class diagram (which constitutes the object model). We start with the informal requirements, where we identify "obvious" objects (usually entity and interface objects) and add them to the class diagram. For problems with a strong dynamic behaviour, other objects may be difficult to find. In this case we focus on the behaviour, identify use cases and, for each one, construct an MSC. The first message in an MSC corresponds to the initial motivation for the use case. (Our MSCs extend the event trace diagrams proposed by OMT by incorporating choices and cycles so that we can minimize the number of MSCs to be drawn.) As we construct the MSCs, we identify missing objects (especially control objects), and the message passing may lead to the identification of new services. Ultimately, we can see the system modelled in three concentric layers: an inner layer composed of entity objects, i.e., the objects which maintain the data; a middle layer, when it exists, composed of control objects, which lead the application; and an outer layer composed of interface objects.
3.3 The Process
Our process is composed of a set of tasks and subtasks. A simplified version of this process is:
Teaching Objects: The Case for Modelling
353
1. Identify obvious interface and entity classes and start building a class diagram.
   - Add services and attributes to the classes; relate them using messages and associations.
2. Identify a list of different use cases.
   - Describe them and relate them using the relationships uses, extends and inherits.
3. For each use case, build an MSC, guaranteeing that:
   - the reason which led us to identify it is the starting message in the corresponding MSC;
   - the object receiving the above message is always an interface object; this message has a corresponding offered service in that interface object;
   - auxiliary objects should be added to help accomplish the required service;
   - interface objects exist only to get or to present information to the environment; if the interface object is doing more, that responsibility should be given to a control object;
   - for each new type of object identified, add a new class to the class diagram and complete it with services and attributes;
   - messages requiring a service from an object have a corresponding offered service on that object and may originate a new message in the class diagram.
4. Relate classes by using inheritance and objects by using messages, associations, and aggregations.
5. For complex classes, construct a state diagram to integrate, in a single diagram, the behaviour already identified in the MSCs and to identify missing transitions.
6. Repeat tasks 3, 4 and 5 for each set of requirements left aside (not yet dealt with) or for each object treated as a black box.

The approach to solving the problems given during the course is based on the notion of incremental development. Therefore, we treat complex entity objects as black boxes which offer a set of services and for which we do not fully specify the state. This is important, as we can always model only parts of the problem and later, as we understand the requirements better, specify both the requirements which were left aside and the objects which were left as black boxes.
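The division of responsibilities among interface, control and entity objects described in the process above can be sketched in code. Python is used purely for brevity, and the library-loan scenario and all class names are invented for illustration; they are not taken from the course material.

```python
class Book:                        # entity object: maintains the data
    def __init__(self, title):
        self.title = title
        self.on_loan = False

class LoanControl:                 # control object: leads the application logic
    def checkout(self, book):
        if book.on_loan:
            return False
        book.on_loan = True
        return True

class LoanTerminal:                # interface object: only gets/presents information
    def __init__(self, control):
        self.control = control
    def on_checkout_request(self, book):   # the first message of the use case
        ok = self.control.checkout(book)   # delegate the decision to the control layer
        return f"'{book.title}': {'checked out' if ok else 'unavailable'}"

book = Book("OMT")
terminal = LoanTerminal(LoanControl())
print(terminal.on_checkout_request(book))   # first request succeeds
print(terminal.on_checkout_request(book))   # second request is refused
```

The interface object receives the use case's initial message and merely presents the result; the decision logic lives in the control object, and the state lives in the entity object, matching the three concentric layers.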
3.4 Tool Support

Other important factors in the success of a method are tool support and traceability. Ideally, tools should allow prototyping during the analysis phase. This is difficult to get, as most CASE tools available at reasonable prices mainly help in drawing diagrams. In some cases, we have been introducing our students to other techniques for specifying objects. We are referring to formal description languages, like SDL [3] and LOTOS [1].
354
A.M.D. Moreira
These techniques have simulator tools which allow us to prototype the specifications at an analysis level [6]. An important advantage of using an executable specification language is that students can "see" the objects being executed. However, due to lack of time, we usually only mention those possibilities. During our lectures we use object-oriented CASE tools, such as OMTool [5] and Rose [7].
3.5 Traceability
Traceability is also important in a development method. Our method supports traceability by closely integrating the use cases, the class diagram, the sequence diagrams and the state diagrams. Each is used as an input to the creation of subsequent diagrams. A major problem with informal requirements is that they contain inconsistencies. An important feature of a development method is that it helps identify inconsistent requirements. A method must therefore support the management of change, and this involves the need for backward as well as forward traceability. Here tool support is essential to enable us to trace back to earlier diagrams to identify the cause of an inconsistency and also to ensure that the diagrams are all returned to a consistent state after changes have been made. The usual low-price CASE tools available unfortunately do not support this feature.

4 Conclusions
Existing methods suffer from some limitations, which led us to propose a new process for object-oriented analysis. This process incorporates characteristics of several methods, in particular Rumbaugh's, Coad and Yourdon's, and Jacobson's methods. We propose two models to be built (object and dynamic) and use use cases, class diagrams (with entity, interface and control objects), MSCs and state diagrams.

References
1. Brinksma, E. (ed.): ‘Information Processing Systems – Open Systems Interconnection – LOTOS – A Formal Description Technique Based on the Temporal Ordering of Observational Behaviour’. ISO 8807, 1988.
2. Coad, P. and Yourdon, E.: Object Oriented Analysis, Second Edition, Yourdon Press, Prentice-Hall, 1991.
3. Ellsberger, J., Hogrefe, D. and Sarma, A.: SDL, Prentice Hall, 1997.
4. Jacobson, I.: Object-Oriented Software Engineering, Addison-Wesley, 1992.
5. Martin Marietta: The Object Modeling Tool (OMTool), Martin Marietta, 1993.
6. Moreira, A.M.D. and Clark, R.G.: Adding Rigour to Object-Oriented Analysis. Software Engineering Journal 11(5), 270-280, 1996.
7. Rational: http://www.rational.com/products/rose, 1998.
8. Rational: Unified Modeling Language, http://www.rational.com, 1997, 1998.
9. Rumbaugh, J., Blaha, M., Premerlani, W., Eddy, F. and Lorensen, W.: Object-Oriented Modelling and Design, Prentice-Hall, 1991.
Involving Learners in Object-Oriented Technology Teaching Process: Five Web-Based Steps for Success
Ahmed Seffah (1,2)
(1) Computer Research Institute of Montreal, 1801 McGill College, Suite #800, Montreal, Quebec, Canada H3A 2N4
Telephone: (514) 840-1234 — Fax: (514) 840-1244
[email protected] - http://www.crim.ca/~aseffah
(2) Department of Computer Science, Université du Québec à Montréal, PO Box 8888, Montreal, Quebec, Canada
Abstract. In this paper, our discussion will focus on three major issues: (1) ways in which New Information and Communication Technologies (NICT), especially the Internet, can influence the teaching/learning process; (2) the application of NICT in an academic course; (3) how students and teachers can cooperate to strengthen the use of NICT and promote new skills and knowledge.
1 Introduction
A revolution is taking place in academic and continuing education, one that deals with the philosophy of how we teach and learn, the relationship between educators and learners, the way in which the classroom is structured, and the nature of curriculum. This new approach, termed learner-centered education [2, 6], is focussed on the needs, skills and interests of the learner rather than on the organization of the curriculum content. Moreover, according to several studies and research experiments [1], the emergence of New Information and Communication Technologies (NICT) can provide a less costly and more efficient alternative to classical and continuing education approaches. In addition to reducing costs, NICT, and especially the Internet, is also expected to shorten the time required for students and/or newly hired employees to become accomplished practitioners. In this paper, we describe our own experience regarding how Internet technology can be an efficient environment for centering education around the learner and how students can be involved in the teaching process through such an environment. The following are the fundamental steps of our approach, which combines the power of the emerging education paradigm with Internet technologies and tools:
1. Presenting students with a set of relevant resources and information
2. Encouraging students to add their own resources and share their ideas
3. Inviting students to collaboratively share and critique their projects
S. Demeyer and J. Bosch (Eds.): ECOOP’98 Workshop Reader, LNCS 1543, pp. 355-358, 1998. Springer-Verlag Berlin Heidelberg 1998
356
A. Seffah
4. Bridging the gap between academic education and industry needs
5. Improving the personal learning process by integrating self-assessment and mentoring strategies

2 The Five Steps Objectives
Step 1: Presenting Students with an Initial Set of Relevant Resources and Information

Table 1 describes a typical environment that a professor can establish as a starting point towards an Internet-based training environment. From the learner’s perspective, this environment offers different types of resources that can be used to either achieve a greater understanding of a concept or obtain further information about it.

Table 1: An example of an environment for distributing resources
- HTTP server (HTML documents): overheads, course outline, objectives, assigned readings and references, assignment list, FAQ, bibliography, tutorials

Step 2: Encouraging Students to Add their Own Resources and Share their Ideas

The infrastructure also supports communication between students and/or instructors through newsgroups and electronic mail tools.

Table 2: An example of an environment for discussion and sharing resources
- Discussion group: questions/answers, professor comments, student ideas
- Electronic mail: personalized questions/answers, training follow-up, corrected exams, experts to consult, personal information
Involving Learners in Object-Oriented Technology
357
- Anonymous FTP server: students’ personal resources, class projects and accomplishments
Step 3: Inviting Students to Collaboratively Share and Critique their Projects

Another important point we experimented with was publishing the students’ projects, thereby making them available to all other students in the class. Students were concerned about their work being made so visible, but publishing did seem to encourage them to polish their projects more than before. The following is the scenario that was tested in an object-oriented modeling and design of user interfaces class:
- Several weeks before the final due date, each team uploaded their first OO design model to the FTP server using an anonymous name that the other students in the class and the professor did not know.
- Students were invited to comment on and critique the other students’ design models through a newsgroup established specifically for this purpose. The professor acted as the newsgroup’s moderator.
- Each team was invited to submit a compilation of all comments made by other students about their project by e-mail. The professor graded this compilation and allowed the students time to make the appropriate changes and/or extend their model.
Step 4: Bridging the Gap Between Academic Education and Industry Needs

The learning resources listed in Step 1 (see Figure 1) can introduce concepts and present simple case studies in an attempt to help with understanding these concepts. Sharing projects and achievements is also limited in its scope to reflect true-life experiences. In this step, learning resources that can assist students in developing the skills they will use in the real world must be added, as realistically as possible, by the professor to the environment obtained in Step 3 [2]. Relevant examples of such learning resources for OO technology education and training are examples [3], problems and pedagogical design patterns [5]. These resources are the essence of learner-centered education in that they are chosen to fit the learner’s interests and needs. For instance, scaffolding consists of sample problems of realistic size whose complexity is gradually revealed in steps that leverage and reinforce the intrinsic structure of the problem-solving process. Scaffolding enables learners to build their understanding through a process of successive elaboration and integration.
Step 5: Improving the Personal Learning Process by Integrating Self-Assessment and Mentoring Strategies

From the learner’s perspective, the environment obtained in Step 4 has the potential to offer a flexible structure allowing self-directed, self-paced instruction on any topic. However, we believe that in order to take advantage of this potential, existing object-oriented education approaches must be concurrently adapted to new methods of apprenticeship capable of empowering and sustaining the act of self-learning. A priori, we need to anticipate and identify the end-users’ unique and self-paced exploration of the given materials, then situate their need for insight, alternatives and new directions by providing embedded questions and correct, summative self-evaluation instruments.
3 Conclusion
This paper has analyzed the added value of Internet technology and learner-centered education for object-oriented education. The issues presented here are the results of what we have learned during the development of several IBT environments, as well as their use in two courses. Our work is still in progress, and we hope that this workshop will give us the opportunity to: (a) describe the experiments that were briefly mentioned in this paper, (b) discuss related technical and pedagogical development issues [7, 8], and (c) establish a digital library for sharing object-oriented learning across frontiers.
References
1. Capell, P.: Distance Learning Technologies. CMU/SEI-95-TR-004. Software Engineering Institute, Carnegie Mellon University (1995)
2. Denning, P.J.: Educating the New Engineers. Communications of the ACM 35 (1992)
3. Hermann, H., Metz, I.: Teaching OO Software Engineering by Examples. ECOOP'96 Educator's Symposium (1996)
4. Lato, K., Drechsler, A.: Effective Training in OOT: Learn by Doing. Journal of Object-Oriented Programming 9(6) (1996)
5. Manns, M.L.: Pedagogical Patterns: Successes in Teaching Object Technology. ECOOP'96 Educator's Symposium (1996)
6. Norman, D.A., Spohrer, J.C.: Learner-Centered Education. Communications of the ACM 39(4) (1996)
7. Seffah, A., Ramzan, K.: An Architectural Framework for Developing Internet-Based Software. World Conference of WWW, Internet and Intranet. Toronto, Canada (1997)
8. Seffah, A., Bouchard, R.: A Discussion of Pedagogical Strategies and Resources for Software Engineering Training Over the Internet. World Conference on Educational Multimedia and Hypermedia. Germany (1998)
How to Teach Object-Oriented Programming to Well-Trained Cobol Programmers
Markus Knasmüller¹
BMD Systemhaus Ges.m.b.H. Steyr, Sierninger Str. 190, 4400 Steyr, Austria
[email protected]
Abstract. Introducing object-oriented programming to old-style programmers is a rather hard task. This paper shows how this job was done at BMD Steyr, Austria's leading producer of accountancy software. A special course for former Cobol programmers was offered. This course is based on the principle that one should first learn data abstraction before starting with object-oriented programming. Its structure is rather similar to a lecture given by the author together with Hanspeter Mössenböck at the University of Linz. The main differences are the removal of academic terms and the focus on Delphi.
1 Introduction
This paper describes how object-oriented programming was introduced at BMD Steyr, Austria's leading producer of accountancy software. This company has a software department with more than 40 developers; most of them are currently maintaining a character-based Cobol product. However, a newer object-oriented version of this product with a Windows user interface is currently under construction. In the first year of this project there were two main tasks: first, the necessary Windows tools had to be implemented; second, the Cobol programmers had to be turned into object-oriented programmers. For programming the Windows tools, new programmers with an academic education were employed. But for the accountancy software the knowledge of the old staff was necessary, and so they were trained in object-oriented programming. The following sections of this paper describe the knowledge of the old staff, the rationale for the choice of the development environment, and how object-oriented programming was taught.
2 Basic Knowledge of the Staff
The programmers at BMD Steyr have rather good Cobol knowledge, otherwise the current product would not be successful. They have good domain knowledge as well, which is very important for producing accountancy software. The author believes that
¹ Markus Knasmüller is on leave from Johannes Kepler University Linz, Department of Practical Computer Science (Systemsoftware).
360
M. Knasmüller
this domain knowledge and the experience in implementing such a software product are more important than the programming knowledge. Since we think it would be hopeless to look for programmers who are able to implement a new accountancy software product without the necessary domain knowledge, it is easier to teach the current staff object-oriented programming. Before the object-oriented programming course at BMD Steyr started, the programmers knew Cobol and all its concepts, such as records, arrays, and procedures. They were aware of structured programming and avoided goto statements. However, their Cobol compiler did not support some important programming features, such as call-by-reference parameters, local variables, and pointers. Because of these shortcomings, algorithms for binary trees, hash tables or heaps were unknown or at least rarely used by most members of the staff.
3 The Choice of the Development Environment
Since the Cobol compiler that was used for the current product did not support object-oriented programming or Windows programming, we had to choose a new development environment. We looked at different tools based on different languages (e.g. Java, C++, Object-Cobol, Object-Pascal), but the wider the choice, the greater the trouble. Finally, we chose Delphi [1] for two reasons. First, its class library is very powerful and supports database accesses, which are numerous in our accounting software. Second, Delphi is based on Object-Pascal with its well-proven type system and module concept. Pascal is a very readable language with block structure, separate compilation and strong type checking. Furthermore, we think that the language is easy to learn for former Cobol programmers.
4 Teaching Object-Oriented Programming to Cobol Programmers
We decided to divide the necessary knowledge about object-oriented programming into four parts:
• Knowledge about good programming style
• Knowledge about data abstraction and pointers
• Knowledge about classes and methods
• Knowledge about the class library
Of course, one could say that only parts three and four are important for object-oriented programming, but we think that good programming style and a firm knowledge of data abstraction are prerequisites for object-oriented programming. Based on these four parts we offered four courses with weekly lectures and exercises. The exercises were corrected not to give grades but to track progress and to detect misunderstandings on the side of the programmers. We also had the following agreement with the participants of the programming course: the hours they spent attending the lectures and doing the exercises were paid if
they could do it in an average time (about three hours per week). If they needed more time, they had to do it in their spare time.

The lectures were rather similar to those in which the author was involved at the Johannes Kepler University of Linz. But some things were changed. First, we removed some theoretical material and all academic terms, like ontology. Furthermore, we used Object-Pascal as the programming language instead of Oberon [4], which was used at the university. At the university it is more important to teach concepts than a particular programming language, because languages change but the concepts remain. Therefore, Oberon with its efficient object-oriented development environment and its powerful concepts, such as garbage collection, dynamic module loading, run-time types, and commands, is a very good choice. However, in our company we had already decided to take Object-Pascal, and therefore we showed the concepts and all examples in this language. Since Oberon and Pascal are very similar languages, changing the examples was rather easy.

In the first two parts of our object-oriented programming course we covered all the things that make a programmer excellent. We spoke about structured types, type safety, procedures, local variables, pointers and, most important, data abstraction. Of course, all this was not really new for the participants, but their Cobol environment did not really support it.

In the third part of the course, we started to introduce classes. Classes are rather similar to abstract data types. Therefore, in a first lesson we only explained the syntactical differences between an abstract data type and a class. Using this process we overcame the fear of the term "object-oriented programming". We showed a new syntax for the already known abstract data type, and only afterwards did the participants find out that they were now working with objects. Later we introduced inheritance and dynamic binding.
Inheritance was simply explained as type extension. A subclass is an extension of a base type, i.e., it inherits the fields and methods of the base type and may declare additional fields and methods of its own. The explanation of dynamic binding was hard work, but based on the explanation of inheritance we could show that the compatibility between a subclass and its base class makes it possible for a variable at run time to contain objects of various types that react differently to a message.

In the last part of the course we explained the different classes of the Delphi class library. Based on the already known facts about object-oriented programming, this was rather simple and could mostly be done by the participants themselves by reading the manual.
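The course's Delphi examples are not reproduced in the paper; the following sketch (in Python rather than Object-Pascal, with invented class names and interest rates) illustrates the two ideas as they were taught: a subclass as a type extension, and dynamic binding letting one variable react differently to the same message.

```python
class Account:                       # the "base type" (an abstract data type turned class)
    def __init__(self, balance):
        self.balance = balance
    def interest(self):              # a message every Account understands
        return self.balance * 0.02

class SavingsAccount(Account):       # type extension: inherits fields and methods...
    def interest(self):              # ...and may override behaviour of its own
        return self.balance * 0.05

# Dynamic binding: one list (or variable) holds objects of various compatible
# types at run time, and each reacts differently to the same message.
accounts = [Account(1000), SavingsAccount(1000)]
rates = [a.interest() for a in accounts]   # -> [20.0, 50.0]
```

The same call, `a.interest()`, is dispatched to a different method body depending on the run-time type of the object, which is exactly the compatibility argument made in the course.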
5 Recommendations for Successful Teaching
To round off this paper, some recommendations for successful teaching of object-oriented programming follow. First teach programming, then teach object-oriented programming: for a teacher this means that data abstraction and writing well-structured programs should be the content of the first lessons. This is because it is easier to explain object-oriented programming on the basis of data abstraction. During the lessons the teacher should avoid academic terms (like ontology), because they can lead to misunderstandings. Furthermore, as many examples as possible should be shown,
because often a single example says more than a hundred facts. If object-oriented programming is introduced in a company, the first products of novices should not be used, because they mostly contain many errors, and maintaining them is therefore a hard job.
6 Future Work
Future work will concentrate on implementing the new Windows product. Some continuation courses will be necessary, e.g. for databases and Internet programming. Furthermore we want to introduce patterns in two ways: Offering a special course on design patterns, and analysing our teaching strategy in a pedagogical pattern [3]. For more information about the project see http://www.bmd.at.
Acknowledgements
I wish to thank Prof. Hanspeter Mössenböck for providing me with his lecture notes [2]. Further thanks go to my boss Ferdinand Wieser and to the whole staff of BMD Steyr for supporting my efforts in introducing object-oriented programming. This work is supported by the Austrian "Innovations- und Technologiefonds (ITF)" under project number 7/693.
References
1. Cornell, G., Strain, T.: Delphi Nuts & Bolts. McGraw-Hill, Berkeley (1995)
2. Mössenböck, H.: Object-Oriented Programming in Oberon-2, 2nd edition. Springer (1995)
3. Manns, M.L., Sharp, H., Prieto, M., McLaughlin, P.: Capturing Successful Practices in OT Education and Training. JOOP 11 (1) (1998)
4. Wirth, N., Gutknecht, J.: The Oberon System. Software—Practice and Experience 19 (9) (1989)
ECOOP'98 Workshop on Reflective Object-Oriented Programming and Systems
Robert Stroud¹ and Stuart Mitchell²
¹ Department of Computer Science, University of Newcastle upon Tyne, UK
[email protected]
² Department of Computer Science, University of York, UK
[email protected]
http://www.cs.york.ac.uk/~rts/ecoop98/papers.html
1 Theme of Workshop
In recent years the principles of reflective object-oriented programming have seen increasing acceptance and application to a wide range of fields – for example, security, fault tolerance and real time. Reflection is seen as a promising way of managing system complexity by separating functional and non-functional concerns, and we believe that many areas of system design and validation can benefit from this research. This workshop was aimed at providing a venue at which researchers working on many disparate topics related to reflection could meet, disseminate their ideas to a broad spectrum of the reflection community, and explore the advantages that the disciplined separation of concerns offered by a reflective system can provide within a computing system. Fourteen papers were accepted for presentation at the workshop, and two position statements were also presented briefly. The topics covered included the design of reflective systems, applications of reflection, and experiences using reflection. There were several common themes running through the papers – reflection and adaptation, reflection and Java, reflection and operating systems, reflection and distributed systems. Each paper resulted in lively discussion and provoked many questions, so that members of the audience contributed to the workshop just as much as the speakers. The organisers would like to thank everyone who took part in the workshop for their contribution towards making the workshop successful.
2 Overview of Papers
This chapter of the ECOOP'98 Workshop Reader contains summaries of the 14 papers that were presented at the workshop. Unfortunately, because of space limitations, it is not possible to present full versions of all the papers here. Instead, each author has been asked to provide a brief summary of his or her paper. Hopefully this will be enough to give the interested reader a flavour of the workshop and
encourage them to visit the Web site, which contains full versions of all the papers submitted to the workshop, including the position statements. A brief summary of each presentation now follows:
• Stuart Mitchell presented a reflective model of exception handling that could capture different linguistic mechanisms for dealing with exceptions.
• Bert Robben presented Correlate, a flexible MOP that can be used to embed different reflective models of autonomous objects in existing object-oriented programming languages.
• José Contreras presented a model of adaptive active objects that used reflection to support various forms of adaptation.
• Shigeru Chiba proposed an alternative to the standard java.lang.Class, which adds compile-time reflection to Java.
• Robert Stroud described a reflective class loader for Java that could be used to generate wrappers for adapting third-party components to their run-time environment.
• Lutz Wohlrab described how reflection could be used as a mechanism for sanity checking OS configuration.
• Antonio Vallecillo described how reflection could be used to build flexible and reusable controllers for constructing open distributed systems from components.
• Boris Bokowski presented CoffeeStrainer, a system for statically checking structural constraints on Java programs that is based on the use of compile-time reflection.
• Darío Álvarez-Gutiérrez described how a reflective operating system layer could be used to customise the execution environment provided by an object-oriented abstract machine.
• Laurence Duchien described how reflection could be used to extend a basic object programming model in order to facilitate distributed programming.
• Walter Cazzola presented a comparative evaluation of three different reflective models.
• Ashish Singhai described 2K, a highly reconfigurable component-based operating system based on reflective principles.
• Gordon Blair presented a reflective model of distributed objects and middleware.
• Charlotte Lunau described her experience using three different kinds of reflection to build different systems.
In addition, Hidehiko Masuhara and Alesandro Fernandez presented brief position statements that can be found at the Web site, together with full versions of all the papers summarised in this chapter.
MOPping up Exceptions
S. E. Mitchell, A. Burns and A. J. Wellings
Department of Computer Science, University of York, UK
{stuart,burns,andy}@cs.york.ac.uk
Abstract. This paper describes the development of a reflective treatment of exception handling. It introduces a metaexception object responsible for controlling the semantics of an exception and enabling run-time change of those semantics – for example, from the termination to the resumption model.
1 Introduction
Exception handling mechanisms separate error handling from a program's normal operation [1], which is in line with the use of reflection to provide a separation of concerns. Despite this desire for separation, in many cases the handler code is still mixed with application code, albeit moved to the end of a block. This paper explores using reflection to complete the separation of concerns.
2 Exceptions and Reflection
This section describes our model of reflective exceptions, in which exception handling and modification prior to propagation is handled using a metaobject. This architecture produces a disciplined separation without adding any new entities to the computational model. In this model, an exception is a reified object, and a jump to meta-level processing occurs at the creation (raising) of an exception object. The metaobject associated with each exception object is the metaexception object. The metaexception object is created as a result of an exception raise and controls both the structure and behaviour of the exception at the base level, being responsible for exception resolution/propagation. Note that the exception handler can access and modify the base-level object's state. Thus, we consider that, from the view of the base level's computation, an exception is a program assertion. After a failed assertion, the reification process causes the meta-level to take action to make the assertion true. This maintains transparency – if the base level is run without meta-level support then it continues while the assertions are true and simply does not benefit from corrective actions. To illustrate this process, consider an object A invoking a method in B that is intercepted by B's metaobject. The metaobject invokes the requisite method script, followed by an assertion failure that creates an exception object. The executing method blocks and control passes to the new metaexception object, which controls the semantics of exception handling. The metaexception object then invokes a handler in the metaobject, which has access to the state of the base-level object through the reflective self-representation and so can
rectify the cause of the exception. After successful handling, control is returned to the object that created the exception object. The alteration of the object's state means that the error will not immediately be repeated. Conversely, if the exception cannot be processed, the handler returns a failure message to the metaexception object along with a reference to the invoking object/metaobject pair. Subsequently, the blocked method call is terminated by the metaobject, and finally the metaexception object repeats the attempt to handle the exception with the handler(s) at the metaobject of the invoking object A. If the exception remains unprocessed after all possible propagation attempts have occurred, then the metaexception object is responsible for handling it and may terminate the program. This approach means the exception controls its own future. For example, by default exceptions may be bound to a termination-model metaexception object. However, the exception may be handled at run time, and so the meta-level handler can then change to resumption semantics and continue execution.
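As a rough illustration of how a metaexception object can switch between the two models at run time, consider the following Java sketch (Semantics, MetaException and Runner are invented names for this sketch; the paper itself gives no code):

```java
import java.util.function.IntUnaryOperator;

// Semantics the metaexception object can switch between at run time.
enum Semantics { TERMINATION, RESUMPTION }

// Hypothetical metaexception object: it owns the handler and the
// current semantics for one kind of base-level exception.
class MetaException {
    Semantics semantics = Semantics.TERMINATION;
    // The handler may repair base-level state; returns true on success.
    boolean handle(int[] state) { state[0] = Math.abs(state[0]); return true; }
}

class Runner {
    // Apply op to state[0]; an IllegalArgumentException plays the role
    // of the "failed assertion" that transfers control to the metaexception.
    static int run(IntUnaryOperator op, int[] state, MetaException meta) {
        try {
            return op.applyAsInt(state[0]);
        } catch (IllegalArgumentException e) {
            if (meta.semantics == Semantics.RESUMPTION && meta.handle(state))
                return op.applyAsInt(state[0]);   // resume after repair
            throw e;                              // termination model
        }
    }
}
```

Under the termination model the exception propagates; flipping the same metaexception object to RESUMPTION lets the handler repair the state and continue execution, without touching the base-level code.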
3 Summary
Reifying exceptions provides the separation of concerns desired and is also very flexible – exception semantics can be changed on a fine-grained basis, so that different exceptions raised by the same object can have different semantics. The separation arises because the exceptional code and the functional code are orthogonal – the exception code at the meta-level manipulates the self-representation of the functional code at the base level. This approach introduces no new language or conceptual entities – exceptions are objects with semantics handled by meta-level entities. The handlers are meta-level object methods manipulated by the meta-meta-level to alter their semantics (e.g. termination or resumption) or the actions necessary to recover from an exception. Reflective systems can also be used to deal with exception evolution, since introspection/modification allow new handlers to be added and the semantics of existing ones to be altered or split into several different handlers. In [2], three categories of exception evolution are identified: exceptional (new exceptions derived from existing ones), functional (entirely new exceptions) and mechanism evolution (exception overloading due to implementation change). Our approach can deal with all of these categories of evolution.
Acknowledgements
We would like to acknowledge the contributions made by other members of the ESPRIT long-term research DeVa project towards this work, especially their comments on the work during its various stages of preparation.
References
1. Burns, A. and A. Wellings (1996). Real-Time Systems and Programming Languages, Addison-Wesley.
2. Miller, R. and A. Tripathi (1997). Issues with Exception Handling in Object-Oriented Systems. Proceedings of ECOOP'97. M. Aksit and S. Matsuoka (Eds.). Jyväskylä, Finland, Springer-Verlag. LNCS-1241: 85-103.
A Metaobject Protocol for Correlate
Bert Robben*, Wouter Joosen, Frank Matthijs, Bart Vanhaute, and Pierre Verbaeten
K.U.Leuven, Dept. of Computer Science, Celestijnenlaan 200A, B3001 Leuven - Belgium
[email protected]
Abstract. Distributed applications are complex software systems that need support for non-functional requirements such as reliability and security. Often these non-functional requirements are mixed with the application semantics, resulting in an overly complex system. A promising solution that cleanly separates the application from the non-functional requirements is the use of a language with a metalevel architecture. In this extended abstract, we briefly present the metalevel architecture of Correlate, a concurrent language extension to Java.
1 Introduction
In our approach, we offer the programmer the high-level programming language Correlate that enables a high-level programming model of autonomous active objects. Instead of creating a new language from scratch, the approach we take is to enhance an existing object-oriented language with a set of constructs to handle concurrency. Currently, we have prototypes running on top of C++ and on top of Java. A complete description and formal semantics of the model can be found in [1]. To tackle non-functional requirements, we have defined a strictly implicit metaobject protocol (MOP) for Correlate.
2 The Metaobject Protocol
The Correlate MOP offers the metalevel programmer a set of basic building blocks that represent reified information of the baselevel. These blocks can then be composed to form metalevel subsystems that control the baselevel application. The MOP defines two sets of metaobjects: one that reifies baselevel interaction and one that reifies baselevel objects. The first set consists of metaobjects that represent the different kinds of object interaction present in Correlate: method invocation, object construction and object destruction. The second set represents the state, behaviour and identity of application objects. A complete description of the MOP can be found in [2]. Using these building blocks, the metalevel programmer can easily construct a metalevel that realizes the default Correlate semantics. A possible approach
* Research assistant of the Fund for Scientific Research - Vlaanderen (F.W.O.)
S. Demeyer and J. Bosch (Eds.): ECOOP’98 Workshop Reader, LNCS 1543, pp. 367-368, 1998. Springer-Verlag Berlin Heidelberg 1998
is to create a meta program that associates a dedicated metalevel object with each baselevel object. Such a structure is fairly common in the literature on metaobject protocols [3]. However, our metaobject protocol also supports alternative architectures. Instead of focusing on a baselevel object, it is equally possible to build an architecture in which the interaction between baselevel objects is the primary abstraction. In this approach each interaction is controlled at run time by its own active interaction object. This alternative architecture promises to be a good candidate for an architecture that is mainly concerned with scheduling problems. As each interaction is modelled as a separate active object, it becomes possible to define interaction-specific behaviour, for instance adding time-outs or time-periodic behaviour for certain invocations.
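The interaction-centred idea can be sketched as follows; the names Invocation and InteractionMeta are illustrative only and are not part of the Correlate MOP:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Hypothetical reification of one base-level interaction: the call is
// wrapped as an object, so interaction-specific policy can be attached.
class Invocation<T> {
    final String method;
    final Supplier<T> body;          // the actual base-level call
    Invocation(String method, Supplier<T> body) {
        this.method = method;
        this.body = body;
    }
}

// A metalevel object whose primary abstraction is the interaction, not
// the receiver: here it traces each call, but it could equally attach
// time-outs, queueing or replication before forwarding.
class InteractionMeta {
    final List<String> trace = new ArrayList<>();
    <T> T perform(Invocation<T> inv) {
        trace.add(inv.method);       // interaction-specific behaviour
        return inv.body.get();       // forward to the base level
    }
}
```

Because every invocation passes through its own reified interaction object, per-invocation policies can be varied without changing the base-level objects on either side of the call.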
3 Non-Functional Requirements
The Correlate MOP has been used to tackle distributed execution and fault tolerance. For fault tolerance, both a checkpointing and an active replication algorithm have been implemented. The former algorithm takes persistent snapshots of the state of the objects during execution of the application. After a failure, the snapshot can be used to restart the application from an earlier state. The latter algorithm replicates application objects and implements an appropriate consistency protocol to ensure that all non-faulty replicas remain consistent. Distributed execution has also been realized at the metalevel. Metalevel objects intercept the invocations between objects and transport them, if necessary, over the network to the host of the receiver. Because object construction is reified as well, special object allocation policies can be used to distribute the objects evenly over the network. An object migration mechanism has also been implemented. When several non-functional requirements must be met, for instance to get a distributed fault-tolerant system, a single metalevel solution is not ideal. Indeed, such an approach would lead to highly complex metalevels that mix different functionalities. At the moment, we are experimenting with a multi-level approach where each non-functional requirement is implemented in a separate metalevel. An interesting point of future work is how to deal with non-orthogonal requirements in this approach.
References
1. Bert Robben, Frank Piessens, Wouter Joosen. "Formalizing Correlate: from Practice to Pi". In Proc. 2nd BCS-FACS Northern Formal Methods Workshop, July 1997, Ilkley, UK. Published electronically at http://ewic.springer.co.uk.
2. Bert Robben, Wouter Joosen, Frank Matthijs, Bart Vanhaute, Pierre Verbaeten. Building a Meta-level Architecture for Distributed Applications. Technical Report CW265, Department of Computer Science, K.U.Leuven, Belgium.
3. Shigeru Chiba and Takashi Masuda. "Designing an Extensible Distributed Language with a Metalevel Architecture". In Proc. ECOOP '93, pages 483-502, Kaiserslautern, July 1993. Springer-Verlag.
Adaptive Active Object
José L. Contreras¹ and Jean-Louis Sourrouille²
¹ Univ. Tecnica Federico Santa Maria - Chile
² L3i – Laboratoire d'Ingénierie de l'Informatique Industrielle, INSA de Lyon
{jcontrer,sou}@if.insa-lyon.fr
Abstract. An object with dynamic adaptation capabilities (AAO) for run-time context variations is presented. A reflective meta-level architecture is defined for the AAO, in which adaptability and context-related aspects are treated at the meta level. By applying their adaptability mechanisms, AAOs can match dynamic context variations and consequently increase dependability in context-sensitive applications.
1 Introduction
The main objective of our work is to develop an adaptive active object (AAO) capable of adapting at run time to reduce malfunctioning due to context variations. Additionally, in order to respect such important principles as modularity, separation of concerns, reusability, etc., all added mechanisms must be separated from, and should not alter, the user application. Behaviour is characterised by the actions the AAOs perform, so adaptation is achieved by selecting and performing the most appropriate actions in particular situations. Two basic behavioural principles are important for AAOs: the best effort they make to accomplish a requested service, and the least suffering they produce when a service cannot be realised [1]. The AAO's adaptation mechanisms will normally include method selection and operation mode change. Decisions are based on context information and decision criteria that AAOs keep locally. Most of the information is gathered whilst interacting with other AAOs, but information used by all the AAOs is kept in common zones to reduce redundancy and costly diffusions. Decision criteria can be specified as parametric rules with facts as entries and decisions about behaviour as outputs. A reflective meta-level architecture [2] is convenient for AAO purposes, since AAOs must control their interactions and method executions in order to adapt to events like service requests, server unavailability, overload situations, etc.
S. Demeyer and J. Bosch (Eds.): ECOOP’98 Workshop Reader, LNCS 1543, pp. 369-371, 1998. Springer-Verlag Berlin Heidelberg 1998
2 The Adaptive Active Object
2.1 Architecture
Objects and metaobjects. The AAO general architecture follows the active object model proposed in RTGOL [3]. Each object at the base level is linked to a corresponding metaobject that is created when the object is created. The metaobject controls and makes decisions about all aspects of the object's activity, including method executions and message communications.
Messages. In the AAO architecture, messages are reified and controlled by metaobjects that make best use of their capabilities to get the service realised as specified by the client.
Common zone. Public information is posted in a common zone by all the AAOs belonging to the same machine. Information, such as offered services, current operation mode and load level, can be read by any AAO whenever required to make a decision.
2.2 Message Processing
Message sending. Messages are reified and sent to the client's metaobject, which takes control of the sending operation. The metaobject considers context and message information to decide what to do with the message.
Message receiving. Arriving messages are placed in a message queue of the receiving AAO, which starts an analysis to decide what to do with them. The metaobject decides whether the message should be: executed immediately, left in the queue for future execution, delegated to another object, or rejected.
Message analysis. A list of all possible executable methods is built during message analysis. At the end of the analysis, and if a thread is available, a method from the list is selected and its execution started.
2.3 Adaptation Mechanisms
Server selection. When sending a message the AAO may select the best server according to available information and specified decision criteria.
Operation mode selection. The AAO has different operation modes to cope with different context situations. Changes of operation mode are based on decision criteria considering its current status and the context situation.
Method selection.
The object may provide more than one method that satisfies a requested service. Each method has particular characteristics, such as execution time, response quality and type of method. The AAO selects the best method according to the particular characteristics of the request, the CPU load, etc.
Message delegation. When the AAO cannot satisfy a request, it sends it to other AAOs, which may receive and process the message. Messages are also delegated if they
have stayed in the message queue for too long without being executed, thereby increasing their chances of being executed eventually.
2.4 AAO Knowledge
The information maintained by AAOs is composed of context-related and self-related information.
Self information. This information is about the AAO itself, including base- and meta-level data. Decision criteria, operation modes, message/method tables, statistical data about its activities, object method data, etc. are examples of the kinds of self information that AAOs keep.
Context information. Context-related information, such as data about resources, communication time, CPU availability, and other AAOs, is collected during execution and used by AAOs when making decisions. Some of this information is obtained from the AAO's interactions.
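A method-selection decision criterion of the kind described above might be sketched like this (Candidate and MethodSelector are invented names, and the single rule shown is an assumption; the actual AAO criteria are parametric rules and richer than this):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// One executable method's characteristics, as kept in the AAO's
// message/method tables (field names are illustrative).
class Candidate {
    final String name;
    final int execTimeMs;   // estimated execution time
    final int quality;      // response quality
    Candidate(String name, int execTimeMs, int quality) {
        this.name = name;
        this.execTimeMs = execTimeMs;
        this.quality = quality;
    }
}

class MethodSelector {
    // One possible decision rule: among the methods that fit the
    // request's deadline, pick the one with the best response quality.
    static Optional<Candidate> select(List<Candidate> methods, int deadlineMs) {
        return methods.stream()
                .filter(c -> c.execTimeMs <= deadlineMs)
                .max(Comparator.comparingInt((Candidate c) -> c.quality));
    }
}
```

A tight deadline then selects a fast low-quality variant, a loose one the precise variant, and an empty result models the case where the request must be delegated to another AAO.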
3 Conclusion and Future Work
Unlike other work in this area [4,5], each object must have its own metaobject to satisfy the requirements; therefore there is generally a metaobject class for each object class. Although metaobject activity increases runtime overhead, our main concern has been to conceive the mechanisms to include in the AAO. A basic metaobject has been realised, as well as a metaobject implementing additional functionality. Work still to be done includes completing the definitions of the AAO mechanisms and providing precise specifications of behaviour, decision criteria (rule descriptions) and AAO relationships (contracts).
References
1. Takashio, K. and Tokoro, M.: "DROL: An Object-Oriented Programming Language for Distributed Real-Time Systems", OOPSLA'92 Proceedings, 1992, pp. 276-294.
2. Ramana Rao: "From the Broad Notion of Reflection to the Engineering Practice of Object-Oriented Metalevel Architectures". Position paper for the OOPSLA'91 Workshop on Reflection and Metalevel Architectures.
3. Babau, J-P., Sourrouille, J-L.: "Expressing Real-time Constraints in a Reflective Object Model", IFAC Control Engineering Practice, Vol. 6, 1998, pp. 421-430.
4. Chiba, S. and Masuda, T.: "Designing an Extensible Distributed Language with a Meta-Level Architecture", ECOOP'93 Proceedings, 1993, pp. 482-501.
5. McAffer, J.: "Meta-level Programming with CodA", ECOOP'95 Proceedings, LNCS 952, Springer-Verlag, 1995, pp. 190-214.
Yet Another java.lang.Class
Shigeru Chiba and Michiaki Tatsubori
Institute of Information Science and Electronics, University of Tsukuba
{chiba,mich}@is.tsukuba.ac.jp
Abstract. This paper proposes an extension of Java's reflection mechanism that uses compile-time reflection to enable comprehensive language customization without needing to modify the Java virtual machine.
1 Introduction
The Java language already has the ability for reflection [1]. java.lang.Class is a class for class metaobjects and provides the ability for introspection at runtime. However, the Java language has not provided the ability for language customization, or intercession, which is another kind of reflection. Java programmers cannot change the superclass of a class, add a new method, or rename a field. Furthermore, they cannot alter the behavior of method invocation, change a variable type, or extend the syntax. A few extensions to Java, such as MetaJava [2], have been proposed to support the ability for language customization, but their abilities are limited to customizations that do not imply severe performance impacts. We propose an extended version of java.lang.Class, which enables more comprehensive language customization than other similar systems. To avoid performance degradation, we employ a technique called compile-time reflection. Instead of modifying the Java virtual machine (JVM), we modify the Java compiler so that most customizations are statically applied to a Java program at compile time. Thus, the compiled code is executed by a regular JVM and can benefit from up-to-date execution techniques such as just-in-time (JIT) compilation.
2 OpenJava
We are currently implementing a class openjava.mop.OJClass for class metaobjects, together with associated classes such as OJMethod. These classes are reimplementations of java.lang.Class and Method, although their names begin with OJ for easy distinction, and they provide the ability for not only introspection but also language customization. The latter is implemented using compile-time reflection in conjunction with our Java compiler, the OpenJava compiler.
2.1 Changes of object behavior
OJClass supplies all the methods that Java's Class does. In addition, OJClass supplies several methods for customizing program behavior. Meta-level programmers can then define a subclass of OJClass and override one of these methods. For example:
class VerboseClass extends OJClass {
    Object readField(Object o, String fieldName) {
        System.out.println(fieldName + " is read.");
        return super.readField(o, fieldName);
    }
}
If a field is read from an object whose metaclass is VerboseClass, then the method readField() is called and a debug message is printed. To implement this ability for customization, our OpenJava compiler translates all occurrences of field-access expressions so that they are intercepted by the class metaobject. For example, suppose that a class Person is associated with the metaclass VerboseClass. If variable p denotes a Person object, then the expression p.name is translated by the OpenJava compiler into:

(String)(OJClass.getClass(p).readField(p, "name"))
At runtime, this expression calls readField() on the class metaobject for p. Other operations such as method calls are also processed using the same technique. Thus, all expressions involving operations on objects are modified at compile time to enable them to be intercepted at runtime by class metaobjects. This technique makes it possible to change the behavior of object operations without modifying the JVM.
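OpenJava performs this rewriting at compile time; purely as an analogy, the run-time effect of readField() can be imitated with Java's standard introspection API (VerboseReader is an invented name and is not part of OpenJava):

```java
import java.lang.reflect.Field;

// Base-level class whose fields are read reflectively.
class Person {
    String name;
    Person(String name) { this.name = name; }
}

class VerboseReader {
    // Analogue of readField(): print a trace, then fetch the field's
    // value by introspection instead of a direct p.name access.
    static Object readField(Object o, String fieldName) {
        System.out.println(fieldName + " is read.");
        try {
            Field f = o.getClass().getDeclaredField(fieldName);
            return f.get(o);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The difference is that OpenJava pays the interception cost once, at compile time, while a java.lang.reflect lookup like this one happens on every access.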
2.2 Changes of class structures
OJClass also enables programmers to customize class structures. For example, they can add a new method, rename a field, and change a super class. However, this customization is only available at compile time. Before producing byte code, the OpenJava compiler calls transformClass() on every class metaobject and then translates the resulting class definition. For example, in order to add a new field to a class, programmers could define transformClass() as follows:

void transformClass(Environment e) {
    OJClass type = OJClass.forName("double");
    OJField f = new OJField(type, "z");
    addField(f);    // add this field z to the base-level class
}
References
1. JavaSoft, Java Core Reflection API and Specification. Sun Microsystems, Inc., 1997.
2. Kleinöder, J. and M. Golm, "MetaJava: An Efficient Run-Time Meta Architecture for Java," in Proc. of the International Workshop on Object Orientation in Operating Systems (IWOOOS'96), IEEE, 1996.
A Reflective Java Class Loader
Ian Welch and Robert Stroud
University of Newcastle upon Tyne, Newcastle upon Tyne NE1 7RU, UK
{I.S.Welch,R.J.Stroud}@ncl.ac.uk
http://www.cs.ncl.ac.uk/people/{i.s.welch,r.j.stroud}
Abstract. We describe the use of a reflective class loader in Java to generate wrappers for third-party components dynamically, thereby adapting them to satisfy non-functional properties such as fault tolerance and security requirements.
1 Overview
Distributed languages such as Java enable applications to be built out of third-party components that are downloaded and integrated into running applications as required. An example of this is a thin client database browser where the client is downloaded and executed on a remote system dynamically. This has great advantages in terms of productivity, and eases the problems of software distribution. However, applications may require the client's components to be modified in order to ensure fault-tolerance or security properties. Without access to source code, changing the functionality of the components is problematic. Wrappers are a common way of dealing with component adaptation when source code may not be changed [1]. The wrapper intercepts invocations sent to the wrapped component and transforms the invocation and the result of the invocation in order to implement a desired non-functional behaviour. Such wrappers can be seen as implementations of metaobject protocols [2], in the sense that the wrapper may be thought of as a metaobject that redefines the default behaviour of invocations. However, wrappers are normally hardcoded for each new component and generated statically. This prevents easy reuse of wrappers with new components and cannot work where components have only a transient existence, such as with a thin client. Our approach is to define wrappers statically, independently of the component they are to be used with, and to combine them dynamically with components in the target execution environment. We have implemented a prototype in Java (JDK 1.1) called Dalang [3], which wraps components dynamically as they are loaded into the Java Virtual Machine (JVM). This is achieved by defining a ReflectiveClassLoader that is explicitly used to load classes into the JVM. This ReflectiveClassLoader makes use of Java introspection to build a wrapper on-the-fly, which is compiled dynamically and substituted for the requested class.
This means that subsequent references to the requested class are actually handled by the wrapper. A metaobject controls the wrapper's handling of the method calls, and a configuration file present on the target host determines which methods of which classes are reflected upon. Our design allows delegation of the actual loading of a class to an existing application-level class loader, for instance a class loader that loads classes over the network.

A Reflective Java Class Loader. S. Demeyer and J. Bosch (Eds.): ECOOP'98 Workshop Reader, LNCS 1543, pp. 374-375, 1998. Springer-Verlag Berlin Heidelberg 1998

2 Discussion
Customisations are defined independently of the component to be customised. Each customisation is defined statically but applied dynamically. The generated wrappers implementing the customisations are type safe, as they are compiled using a standard Java compiler. Dalang utilises Java introspection, application class loaders, dynamic class loading, and dynamic compilation. It allows a clear separation of concerns, which means that the programming tasks can be split between (i) the application programmer, who is only concerned with the base application, (ii) the metalevel programmer, who designs the customisations, and (iii) the system integrator, who determines which components to customise. Dalang is similar in some ways to BeanExtender [4], OpenJava [5] and MetaJava [6]. The main differences are that Dalang allows customisation at run time, doesn't need access to source code, and doesn't require a specialised JVM. Ideally our approach should be transparent, requiring no change to any code. This could only be achieved by changing the default class loader, which is impossible in current versions of Java unless you change the JVM. This makes it necessary for the application loading the client to explicitly use a reflective class loader. For a thin client this is a relatively minor change, as explicit loading via application-level class loaders already occurs. Also, we have efficiency problems, since in-memory dynamic compilation is not currently supported in Java and we must therefore spawn a separate process to perform compilation. However, this could be improved by caching and by replacing dynamic compilation with direct bytecode transformation. In our future work we hope to address both these problems. A copy of the full paper and our software is available from the Dalang home page http://www.cs.ncl.ac.uk/people/i.s.welch/home.formal/dalang.

References
1. Garfinkel, S. and Spafford, G.: Practical UNIX and Internet Security. O'Reilly & Associates, 1996.
2. Kiczales, G., des Rivieres, J., and Bobrow, D. G.: The Art of the Metaobject Protocol. The MIT Press, 1991.
3. Welch, I. S. and Stroud, R. J.: Using MetaObject Protocols to Adapt Third-Party Components. Work in Progress Paper at Middleware'98.
4. IBM: Bean Extender Documentation, version 2.0, 1997.
5. Wu, Z. and Schwiderski, S.: Reflective Java: The Design, Implementation and Applications. Presentation at APM Ltd, 1996.
6. Golm, M.: Design and Implementation of a Meta Architecture for Java. MSc Thesis, University of Erlangen, 1997.
Sanity Checking OS Configuration via Reflective Computation

Lutz Wohlrab
Chemnitz University of Technology, 09107 Chemnitz, Germany
[email protected]

1 Introduction

Increasing adaptability of today's operating systems implies higher demands for administration skills, and thus higher costs for system maintenance. Network Computer, NetPC, and the Zero Administration Initiative are attempts to lower the cost of ownership by reducing hardware and software variety and restricting the number of things users and administrators can configure or adapt. However, the scope of these concepts is limited to (thin) clients; it does not cover the maintenance of servers, of PCs or workstations for users doing work out of the mainstream and thus needing different (fat) software installations, or of NetPC or other client profiles. Our idea is that the operating system should be responsible for its own configuration state, reflect about it at the time of adaptation, detect erroneous alterations, and reject or even correct them itself. Because of the early error detection at adaptation time and the higher-quality diagnosis, systems would become easier to maintain and cheaper to own without restricting adaptability.
2 The Adaptation Manager Object

To be able to check whether a given adaptation seems to be OK (is sane, in our terminology), the operating system needs to include: (1) the administrator's non-algorithmic knowledge in the form of rules, (2) a mechanism for finding decisions based on this knowledge (an inference engine), and (3) triggers which activate this mechanism on adaptation events. The knowledge base and inference engine are encapsulated within an object, the adaptation manager. For the implementation of the former we decided to use Prolog, because an inference engine is an inherent property of a Prolog interpreter. Thus, we benefited from the wide availability of literature on the subject of building knowledge-based systems using this language. The adaptation manager offers a MOP to applications and kernel components which consists of the following operations: OpenTransaction, CommitTransaction, and RollbackTransaction establish an adaptation activity, a notion similar to a database transaction. An adaptation activity is a sequence of operations of this MOP representing a single adaptation. Assert and RetractAll request inclusion and removal of knowledge. All data passed to these operations
are subjected to a sanity check. Errors and warnings received during the sanity check are stored internally and can be retrieved by the GetExplanation operation. Save saves all predicates marked persistent to a file. This knowledge is then available automatically after a system restart. EvaluateGoal and EvaluateGoalAllSols ask the adaptation manager to evaluate a goal and return the first or all solutions. GetFirstVariableValue and GetNextVariableValue are used to browse through the returned variable values. GetExplanation returns explanations stored in temporary storage during an adaptation activity or a query. Explanations are error messages, warnings, and messages explaining the reasoning of a rule set. For the implementation of the adaptation manager within a legacy operating system like UNIX or NT, we also need a translation between the old configuration interface (i.e. file system calls accessing configuration files) and the above MOP, and vice versa.
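The transactional shape of this MOP might be rendered as the following toy, in-memory Java sketch; the real amgtd embeds a Prolog inference engine, so the single hard-coded rule and all names here are hypothetical stand-ins:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy adaptation manager: a transactional fact base with a sanity check.
class AdaptationManager {
    private final Set<String> committed = new HashSet<String>();
    private Set<String> pending;                      // non-null while a transaction is open
    private final List<String> explanations = new ArrayList<String>();

    public void openTransaction() { pending = new HashSet<String>(committed); }

    // Stand-in for the Prolog sanity check: a single hard-coded rule that
    // flags fstab-style mount facts lacking a file system type.
    public void assertFact(String fact) {
        if (fact.startsWith("mount(") && !fact.contains(","))
            explanations.add("warning: mount entry without fstype: " + fact);
        pending.add(fact);
    }

    public void retractAll(String prefix) { pending.removeIf(f -> f.startsWith(prefix)); }

    public void commitTransaction() { committed.clear(); committed.addAll(pending); pending = null; }
    public void rollbackTransaction() { pending = null; }

    public boolean evaluateGoal(String fact) { return committed.contains(fact); }
    public List<String> getExplanation() { return new ArrayList<String>(explanations); }
}
```

Rolling back discards the pending facts, which mirrors restoring the last known-good configuration file.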
3 Adaptation Manager Implementation for Linux

The Linux implementation of the adaptation manager consists of a daemon encapsulating the adaptation manager object (amgtd) and a kernel module facilitating the communication between the kernel, applications, and the adaptation manager daemon. In addition, the file system calls have been modified in the abstract "Virtual File System" layer common to all file system types, in order to distinguish between configuration and non-configuration files. On close, unlink, link, symlink, and rename operations with a configuration file (name) as (one of) the argument(s), an upcall is made to amgtd. An example: /etc/fstab is modified and close is called. The intercepted close system call performs an upcall to amgtd, which reads the modified file contents, translates it into Prolog clauses and operations of the above MOP, and consults the adaptation manager object. If the sanity check performed during the MOP operations succeeds, close returns successfully. On error, the modified fstab file is replaced by the previous contents (known to be OK), and error messages and warnings are written to syslogd, from where they are available to system management tools like HP OpenView.

Fig. 1. Linux amgtd: implementation of the adaptation manager object concept. (Diagram not reproduced: it shows an amgtd-unaware editor and an amgtd-aware application in user space, the modified VFS layer with its config files table and the amgt syscall module in kernel space, and amgtd's Prolog part with the adaptation manager MOP implementation, fstab rules, common rules, and error/warning handling.)
A Reflective Component Model for Open Systems

Jose M. Troya and Antonio Vallecillo
Dpto. de Lenguajes y Ciencias de la Computacion, Universidad de Malaga, Campus de Teatinos, 29071 Malaga, Spain.
[email protected]

Abstract. We present a component model for the modular design of Open Distributed Systems. Based on black-box components and reusable controllers, it addresses many of the special issues for such systems, including heterogeneity, component evolution, and dynamic change.
1 Introduction
The increasing use of Open and Distributed Systems (ODS) for the development of applications, together with the increasing needs of a global component marketplace, is changing the way software is developed nowadays. Reusability and late composition are two driving forces towards the separation of the computational and interoperational aspects of components, forcing ODS-specific requirements to be incorporated into user applications in a modular and independent manner. To address these concerns, new trends in application development rely on a meta-model in which components encapsulate computation, leaving the rest of the issues to other entities, called layers, controllers, meta-objects or connectors. This architectural approach has been followed by different authors [1-4] and considers components as black boxes that transparently modify their behavior through controllers, first-class entities that can be plugged into them. However, such models also present some limitations, since their controllers act as dedicated filters whose behavior and functionality are dictated by the components they wrap. In our approach, however, controllers are reusable, off-the-shelf entities, defined independently from the components they will later be attached to. Based on them we have defined a three-layered architecture: "systems-controllers-components". Systems are greatly simplified, offering just the infrastructure for the creation and communication of components; components encapsulate computation; and the add-on reusable controllers provide components with the required behavior, in a modular and independent manner. In this software market there is room not only for systems and components manufacturers, but also for developers of reusable controllers.

2 The Basic Model
In our model, components interoperate using mailboxes and asynchronous messages. Each component has a mailbox with a unique global identifier, through
which the component sends messages to other mailboxes and receives messages from them. Messages are information entities, identified by their selector, a field that determines the operation to be executed at the target component. On top of this, we have also added inspection, local broadcasting, and tubes. These facilities allow components to cope with both the static and dynamic requirements of information passing in ODS. Inspection is used to interrogate components for their implemented methods. Local broadcasting allows components to send messages to all components currently at a domain (a set of interconnected machines that defines the "environment" of a component). This has proved to be more versatile and flexible for ODS components than the publish-and-subscribe mechanism used by other component models. Finally, tubes are bi-directional channels used for efficient data transfer between components, once the initial contacts have been established. Components being black boxes, their behavior is defined by their interfaces. We define the interface of a component as the set of message selectors that it sends out (outputs) plus the set of message selectors that it can receive and respond to (inputs), supposing that received messages not understood by a component are discarded. From these sets we can also define the concepts of compatibility and replaceability of components, which can then be used for reasoning about component behavior using an Object-Z formal framework that we have defined to support the model. In addition to the communication mechanisms mentioned above, our model also offers a reflective facility that allows components to modify their behavior according to the application requirements, using the aforementioned controllers. Each controller has the same structure, and allows customization of particular user-defined configurations.
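A minimal sketch of the mailbox-and-controller idea, with hypothetical names (the paper's controllers also handle incoming messages and implement richer properties):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Messages are identified by a selector naming the operation at the target.
class Message {
    final String selector;
    final Object payload;
    Message(String selector, Object payload) { this.selector = selector; this.payload = payload; }
}

// A controller transparently transforms the messages flowing through a mailbox.
interface Controller {
    Message outgoing(Message m);
}

class Mailbox {
    final String id;                                  // unique global identifier
    private final List<Controller> chain = new ArrayList<Controller>();
    private final Deque<Message> queue = new ArrayDeque<Message>();

    Mailbox(String id) { this.id = id; }

    void attach(Controller c) { chain.add(c); }       // controllers are chained in order

    void send(Mailbox target, Message m) {
        // Outgoing messages from one controller become incoming to its successor.
        for (Controller c : chain) m = c.outgoing(m);
        target.queue.add(m);
    }

    Message receive() { return queue.poll(); }
}
```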
Multiple controllers can be attached to the mailbox of a component, getting chained in such a way that outgoing messages from a controller become incoming messages to its successor. Each controller implements a property that deals with an ODS-specific requirement, like heterogeneity, component evolution, environment-awareness, or dynamic configuration. We have identified and implemented an initial set of properties that we think are of particular interest for ODS and that can be achieved through the use of controllers. Additional reusable controllers for user-specific requirements can be easily defined and implemented using the mechanisms provided by our model.

References
1. G. Agha, W. Kim and R. Panwar. Actor Languages for Specification of Parallel Computations. In DIMACS, 1994.
2. M. Aksit et al. Abstracting Object-Interactions using Composition Filters. In Object-Based Distributed Processing, Springer-Verlag, 1993.
3. J. Bosch. Language Support for Component Communication in LayOM. Workshop Reader of ECOOP'96. Max Muehlhaeuser (ed.). Dpunkt Verlag, 1997.
4. R.K. Joshi, N. Vivekananda and D. Janaki Ram. Message Filters for Object-Oriented Systems. In Software-Practice and Experience, 27(6):677-699, 1997.
CoffeeStrainer - Statically Checking Structural Constraints on Java Programs

Boris Bokowski
Freie Universität Berlin, Institut für Informatik, Berlin, Germany
[email protected]

Abstract. Statically typed languages allow many errors to be detected at compile time. However, many errors that could be detected statically cannot be expressed using today's type systems. We describe a compile-time reflection framework for Java which allows for static checking of structural constraints.
1 Overview

In any reasonably sized software development project, there are rules constraining the structure of the application under development that must be obeyed by the programmers, ranging from simple coding conventions to design constraints caused by, e.g., the use of design patterns. Although most of these constraints could be enforced at compile time, few tools exist that support static checking of programmer-defined constraints in a way that is both expressive and usable for everyday programmers. We have implemented a system called CoffeeStrainer (available at http://www.inf.fu-berlin.de/~bokowski/CoffeeStrainer) that supports compile-time checking of stylistic constraints, implementation constraints, and design constraints for Java programs. Constraints may be specified by programmers, or re-used from a library of constraints. The system is useful for several reasons: for software development teams, it allows the specification of coding rules that can be enforced automatically; for framework developers, it allows the specification of rules for using the framework correctly; and for framework users, it can warn of incorrect uses or specializations of the framework. For checking a program, CoffeeStrainer builds an object-oriented representation of the program. This representation consists of the program's abstract syntax tree enriched by additional information obtained by name analysis (associating each use of a name with its declaration) and type analysis (associating each expression with its static type). It contains objects for each of the elements that make up a base-level program: class objects, method objects, statement objects, expression objects, and so on. Checking structural properties of the program then involves a traversal of this enriched abstract syntax tree, performing checks at various points during the traversal depending on the type of the object at hand. To determine the checks that are performed at each object reached during the traversal, the framework makes use of the Visitor design pattern [1]. Constraints are implemented as pieces of meta-level Java code embedded in the base-level program as special comments. Before building the program's representation, these code pieces are extracted and combined to form visitor implementations, compiled and dynamically loaded into CoffeeStrainer, and later invoked during the traversal. Constraints contained in a class or an interface apply not only to the class or interface itself, but also to all derived types. Additionally, constraints can be specified that apply to code that uses a type derived from this class or interface, e.g., in a variable declaration or a cast. For a complete description of CoffeeStrainer, including practical examples of its use, the reader is referred to [2].
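A stripped-down sketch of the traversal idea: a constraint is a visitor run over the enriched syntax tree. This toy AST and the naming rule are hypothetical illustrations, not CoffeeStrainer's actual classes:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal AST: every node accepts a visitor (Visitor design pattern).
interface Node { void accept(Visitor v); }

class MethodDecl implements Node {
    final String name;
    MethodDecl(String name) { this.name = name; }
    public void accept(Visitor v) { v.visitMethod(this); }
}

class ClassDecl implements Node {
    final String name;
    final List<MethodDecl> methods;
    ClassDecl(String name, List<MethodDecl> methods) { this.name = name; this.methods = methods; }
    public void accept(Visitor v) {
        v.visitClass(this);                       // check this node...
        for (MethodDecl m : methods) m.accept(v); // ...then traverse children
    }
}

interface Visitor { void visitClass(ClassDecl c); void visitMethod(MethodDecl m); }

// A stylistic constraint expressed as a visitor: classes start upper-case,
// methods start lower-case.
class NamingConstraint implements Visitor {
    final List<String> violations = new ArrayList<String>();
    public void visitClass(ClassDecl c) {
        if (!Character.isUpperCase(c.name.charAt(0)))
            violations.add("class " + c.name + " should start upper-case");
    }
    public void visitMethod(MethodDecl m) {
        if (!Character.isLowerCase(m.name.charAt(0)))
            violations.add("method " + m.name + " should start lower-case");
    }
}
```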
2 Related Work

CoffeeStrainer differs from previous work on specifying implementation or design constraints for object-oriented programs [3-5] in the following respects:
• Instead of defining a new special-purpose language, constraints can be specified in Java, a language the programmer already knows;
• The system is implemented as an open object-oriented framework that can be extended and modified by defining new object-oriented meta-level abstractions;
• The meta-level code and the base-level code share the same structure, making it easy to find the rules that apply to a given part of the program;
• The meta-level code is embedded in special comments, leaving the base-level syntax and semantics unchanged; thus, arbitrary compilers and other tools can operate on the source code;
• When defining a new rule, the programmer has access to a meta model that is a complete object-oriented representation of the program to be checked; the meta model is not restricted to classes, methods and method calls;
• Special support is provided for constraining the usage of classes and interfaces.
References
1. E. Gamma, R. Helm, R. Johnson, J. Vlissides: Design Patterns - Elements of Reusable Object-Oriented Software, Addison-Wesley, 1995
2. B. Bokowski: A System for Statically Checking Structural Constraints on Java Programs, Technical Report B-98-14, Freie Universität Berlin, Institut für Informatik, September 1998
3. C. K. Duby, S. Meyers, S. P. Reiss: CCEL: A Metalanguage for C++, Proceedings of USENIX C++ Conference, Portland, Oregon, August 1992
4. N. Klarlund, J. Koistinen, M. I. Schwartzbach: Formal Design Constraints, Proceedings of OOPSLA'96, ACM SIGPLAN Notices, Vol. 31, No. 10, October 1996
5. N. H. Minsky: Law-Governed Regularities in Object Systems; Part 1: An Abstract Model, Theory and Practice of Object Systems, Vol. II, No. 4, Wiley 1996
A Computational Model for a Distributed Object-Oriented Operating System Based on a Reflective Abstract Machine

Lourdes Tajes-Martínez, Fernando Álvarez-García, Marián Díaz-Fondón, Darío Álvarez-Gutiérrez, Juan Manuel Cueva-Lovelle
Department of Computer Science, University of Oviedo
{lourdes,falvarez,fondon,darioa,cueva}@pinon.ccu.uniovi.es
Abstract. The design of an object-oriented operating system (OOOS) involves the design of a model that governs the objects’ method execution. In this paper we show the design of an OOOS based on an OO abstract machine: specifically, the design of the computational model. We propose the adoption of an active object model and we think reflection is a helpful tool to achieve a flexible OO computational system.
1 Introduction

The aim of the OVIEDO3 [1] project is to develop an OO integral system where every layer is designed and developed using the OO paradigm. The two lowest layers are an abstract machine, named Carbayonia, and an Operating System (OS), named SO4. The computational system of SO4 defines an active object model and extends the default behavior of Carbayonia in some areas by means of reflection.
2 The Computational Model: Reflection

Carbayonia is the lowest support level. It provides objects with the most basic mechanisms for the execution of their methods:
1. Basic classes. Their execution is atomic, without interruption, and synchronization is not necessary. The Thread class is one of the basic classes.
2. Instructions for method invocation (call) and exception handling (handle, throw).
SO4 provides objects with the mechanisms needed to define the behavior of their environment and modify or extend it if needed. We think reflection is a fundamental means of achieving this. SO4 defines a set of OS classes and extends the Object class defined by Carbayonia, allowing user objects to construct some aspects of the
This work has been supported in part by the 2nd plan (FICYT) of research of the Principado de Asturias, Spain, under project PBP-TIC-97-01 “Sistema Integral Orientado a Objetos: OVIEDO3”.
execution environment. Other projects studying the use of reflection to design and construct OS, abstract machines, languages etc. include Apertos [2] and Merlin [3].
3 Object Environment

Reflection is used to extend the machine behavior and is achieved by attaching to an object a set of objects, which we name the object environment. The key idea in the design is to divide the object world into two levels: the base level (where base objects exist) and the meta-level (where meta-objects exist). Each one of the meta-objects describes some aspect of the base-level behavior of the object. Control is transferred to the meta-object when some specific event (e.g. method invocation, exception, etc.) happens. In the first stage of our system, the object environment is composed of the following meta-objects:
1. Concurrence: defines the synchronization policy specific to the object.
2. Scheduler: defines the scheduling policy the base object applies to its tasks.
3. Communication: takes charge of sending, receiving and managing the messages.
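The object-environment idea can be sketched as a dispatch table from base-level events to attached meta-objects. This is a hypothetical Java rendering for illustration only; SO4 itself is built on the Carbayonia abstract machine, not the JVM:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A meta-object describes one aspect of a base object's behaviour and
// receives control when the corresponding event occurs.
interface MetaObject {
    void handle(String event, Object detail);
}

// The object environment: the set of meta-objects attached to a base object.
class ObjectEnvironment {
    private final Map<String, List<MetaObject>> byEvent =
        new HashMap<String, List<MetaObject>>();

    void attach(String event, MetaObject m) {
        byEvent.computeIfAbsent(event, k -> new ArrayList<MetaObject>()).add(m);
    }

    // Called by the machine when an event (method invocation, exception, ...)
    // happens; control transfers to the meta-level.
    void fire(String event, Object detail) {
        for (MetaObject m : byEvent.getOrDefault(event, new ArrayList<MetaObject>()))
            m.handle(event, detail);
    }
}
```

A scheduler or synchronization policy would be one such meta-object attached to, say, the method-invocation event.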
4 Advantages of Using Reflection to Organize an OOOS

By offering OS mechanisms as extensions of an abstract machine, a number of advantages are obtained. Flexibility [4] is the most important benefit and is achieved because each object can define its own environment using specific meta-objects and thus adapt the behavior of Carbayonia. Uniformity around the object concept is another benefit, because meta-objects are objects themselves. Persistence and distribution of computation can also be provided as straightforward extensions of traditional mechanisms.
References
1. Álvarez-García, F., Álvarez-Gutiérrez, D., Tajes-Martínez, L., Díaz-Fondón, M.A., Izquierdo-Castanedo, R., Cueva-Lovelle, J.M.: An Object-Oriented Abstract Machine as the Substrate for an Object-Oriented Operating System. Workshop on Object-Orientation and Operating Systems, 11th European Conference on Object-Oriented Programming (ECOOP'97), Jyväskylä (Finland), June 1997.
2. Yokote, Y.: The Apertos Reflective Operating System: The Concept and Its Implementation. Proceedings of the 1992 Conference on Object-Oriented Programming Systems, Languages and Applications. ACM Special Interest Group on Programming Languages, ACM Press, October 1992.
3. The Merlin Project, http://www.lsi.usp.br/~jecel/merlin.html, November 1996.
4. Cahill, V.: Flexibility in Object-Oriented Operating Systems: A Review. 3rd CaberNet Radicals Workshop, Connemara (Ireland), May 1996.
A Reflective Implementation of a Distributed Programming Model

R. Pawlak, L. Duchien, L. Seinturier, P. Champagnoux, D. Enselme, and G. Florin
Laboratoire CEDRIC-CNAM, 292 rue St Martin, Fr 75141 Paris Cedex 03
[email protected]

Abstract. This paper presents a reflective implementation of a programming model for distributed applications design. This model extends elements of the object programming model, such as classes, instantiation, and algorithmic statements, in order to facilitate distributed programming. It is implemented with a framework based on a run-time metaobject protocol written with OpenC++ v2.
1 Introduction

Programming environments such as CORBA or DCOM, associated with an object-oriented language such as C++ or Java, provide some basic mechanisms that make it easier to write distributed client/server applications. Our project is to extend this model toward a distributed algorithmic model where the distributed application is a program processed by a group of objects. In this work, our goal is to express the basic level of the distributed application with a syntax similar to that of a centralized and sequential program, and to use reflection to implement distributed features in some meta levels. Our architecture combines recent Compile-Time (CT) MOP approaches [1] with traditional Run-Time (RT) MOP approaches [2].
2 Distributed Programming Model

2.1 Basic Programming Features and Semantic Extensions

In our model, each distributed programming feature can be related to a familiar feature of the object-oriented programming model. Thus, programmers can define behaviors for groups of objects using group classes, which are related to the object-oriented concept of a class. We also define a distributed instantiation notion, which is related to the centralized programming concept of instantiation. Thus, a group class defines a set of distributed services and can be instantiated by a group object. Group services can be called using a group method invocation programming feature. Because we also need some control structures to write group-level algorithms, we define some distributed statements like distributed condition and distributed iteration.
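Purely for illustration, here is a hypothetical Java rendering of a group method invocation and a distributed condition; the paper implements these with OpenC++ v2 metaobjects, not Java, and all names below are invented:

```java
import java.util.ArrayList;
import java.util.List;

// A group member handles ordinary (centralized-looking) method invocations.
interface Member {
    Object invoke(String method, Object arg);
}

// A group object: an instantiation of a group class, fanning calls out to members.
class GroupObject {
    private final List<Member> members = new ArrayList<Member>();

    void join(Member m) { members.add(m); }

    // Group method invocation: the same call is processed by every member.
    List<Object> groupInvoke(String method, Object arg) {
        List<Object> results = new ArrayList<Object>();
        for (Member m : members) results.add(m.invoke(method, arg));
        return results;
    }

    // A distributed condition: true iff the predicate holds at every member.
    boolean distributedAll(String predicate, Object arg) {
        for (Object r : groupInvoke(predicate, arg))
            if (!Boolean.TRUE.equals(r)) return false;
        return true;
    }
}
```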
This model should be implemented mainly by modifying the normal semantics of programming language features. Reflection, by clearly separating the programming features from their implementation, meets our needs. Moreover, the main advantage of reflection is the possibility of keeping several implementation aspects of the system open to the programmer. This last property is crucial for our problem. Indeed, because we deal with complex and not fixed distributed algorithms, the base-level program should be able to choose or modify the semantics of its own distributed programming features.
2.2 Architecture Overview
Basically, our distributed programming model (DPM) can be seen as an additional middleware layer between a distributed environment layer such as CORBA and a programming language layer such as C++. Because we adopt a reflective approach, each layer reifies some mechanisms of the inferior layer and provides some additional features. The originality of this architecture is to explicitly distinguish the two main types of reflection pointed out by Ferber [3]: structural and behavioral reflection. To allow structural and behavioral reflection without complicating the meta programs and without introducing unnecessary overhead, our architectural model reifies structural aspects in compile-time metaobjects and behavioral aspects in run-time metaobjects. We use the OpenC++ v2 [1] compiler, which provides a CLOS-like MOP [4] but at compile time. This open compiler allows us to link run-time metaobjects to base objects, to map C++ objects to distributed objects, and to add some language extensions.
3 Conclusion and Work Perspectives

Our reflective framework appears to be a good way to implement distributed object-oriented applications. It provides three abstraction levels that allow a flexible and clear separation of concerns. In future work, we will use this architecture to seamlessly implement some basic distributed algorithms.
References
1. S. Chiba. A Study of Compile-Time Metaobject Protocol. PhD thesis, Univ. of Tokyo, 1996.
2. S. Chiba. OpenC++ Release 1.2 Programmer's Guide. Technical Report 9303, Univ. of Tokyo, 1993.
3. J. Ferber. Computational Reflection in Class Based Object Oriented Languages. In Proc. of OOPSLA'89, 1989.
4. G. Kiczales, J. des Rivieres, and D.G. Bobrow. The Art of the Metaobject Protocol. MIT Press, 1991.
Evaluation of Object-Oriented Reflective Models*

Walter Cazzola
DSI-University of Milano, Via Dodecaneso 35, 16100 Genova, Italy
[email protected] http://www.disi.unige.it/person/CazzolaW
Abstract. Reflection is a suitable paradigm for developing open systems. Reflection improves software reusability and stability, reducing development costs. But with several different kinds of reflection to choose from, it is important to know which reflective model to use and when.
1 Evaluating Reflective Models

In the literature [1,2], several different models of reflection have been presented, each having features absent from the others. In order to evaluate the quality and applicability of the various models, and hence determine which task each model is best suited for, I considered the following features:
Structural Reflection is the ability of a language to provide a complete reification of both the program currently executed and its abstract data types.
Behavioral Reflection is the ability of the language to provide a complete reification of its own semantics as well as of the data it uses to execute the current program.
Reflexivity concerns three different aspects of reflection, the first two related to introspection and the last related to intercession: 1. how much time the computational flow spends in the meta-level, 2. when the computational flow shifts up or down among levels, and 3. which aspects are reified by the meta-entities.
Transparency measures the number of changes that must be made to the base-level code in order to integrate it with the meta-level (i.e. how transparently introspection and intercession can be performed).
Extensibility and Separation of Concerns deal with the extent to which each aspect of the computational system's functionality is the concern of a different level of the reflective tower.
Visibility concerns the scope of the meta-computation, i.e. which aspects of which base-entities can be involved in the meta-entity's meta-computation. A meta-computation with a global view can involve all the base-entities and other aspects of the computation that it reifies.

* An extended version of this work, with a full explanation of the model analysis, can be retrieved from http://www.disi.unige.it/person/CazzolaW/references.html
S. Demeyer and J. Bosch (Eds.): ECOOP’98 Workshop Reader, LNCS 1543, pp. 386-387, 1998. Springer-Verlag Berlin Heidelberg 1998
Evaluation of Object-Oriented Reflective Models
Criterion        MCM              MOM              MRM              CRM
Behavioral R.    Yes              Yes              Yes              Yes
Structural R.    Yes              Separated        No               Separated
Reflexivity      Always or Never  Depends          Always           On Request
                 on msg exchange  on msg exchange  on msg exchange  on msg exchange
                 Instances        Referent         Message          Communication
Transparency     Very Good        Good             Poor             Average
Extensibility    Good             Good             Poor             Good
Visibility       its instances    its referent     only action      global view
Concurrency      Very Tight       Tight            Very Loose       Loose
Granularity      class            object           method call      method call
Lifecycle        program          referent         method           shorter than referents
Proliferation    Low              Average          High             Average - High

Table 1. Model Evaluation Summary

Concurrency evaluates the interdependencies among base- and meta-entities
(loosely or tightly coupled) and hence how easy it is to distribute the system.

Reflection Granularity denotes the smallest aspect of the base-entities of a computational system that can be reified by different meta-entities. The most interesting granularity levels are: classes, objects, methods and method calls.

Meta-Entities Lifecycle describes the period of the system execution in which a specific meta-entity has to exist.

Meta-Entities Proliferation estimates the number of meta-entities involved in the system computation.
2 Results of Evaluation

Using these criteria, I have evaluated the meta-class (MCM), the meta-object (MOM), the message reification (MRM), and the channel reification (CRM) models. Table 1 summarizes the results of my analysis. In brief, the models belonging to the communication reification approach are more suitable than the other models for developing distributed reflective systems with fine-grained parallelism and loosely coupled entities. However, the models belonging to the meta-object approach are more suitable than the other models for handling structural reflection, and permit reflective systems to be extended dynamically by changing their structure. In conclusion, MOM and CRM are the winners of their respective categories and are adaptable to any requirement. In contrast, the MCM model is limited by language requirements and the MRM model by lack of information continuity.

References
1. M. Ancona, W. Cazzola, G. Dodero, and V. Gianuzzi. Channel Reification: A Reflective Model for Distributed Computation. In proc. of IPCCC'98, pp. 32-36, Feb 1998.
2. J. Ferber. Computational Reflection in Class Based Object Oriented Languages. In proc. of OOPSLA'89, vol. 24 of Sigplan Notices, pp. 317-326, Oct 1989.
2K: A Reflective, Component-Based Operating System for Rapidly Changing Environments
Fabio Kon*1, Ashish Singhai**1, Roy H. Campbell1, Dulcineia Carvalho1, Robert Moore1, and Francisco J. Ballesteros2
1 University of Illinois at Urbana-Champaign, 1304 W. Springfield Av., Urbana IL 61801, USA
{f-kon,singhai,rhc,dcarvalh,
[email protected]
2 Universidad Carlos III de Madrid, E-28911 Leganes (Madrid), Spain,
[email protected]

Abstract. Modern computing environments face both low-frequency infrastructural changes, such as software and hardware upgrades, and frequent changes, such as fluctuations in the network bandwidth and CPU load. However, existing operating systems are not designed to cope with rapidly changing environments. They provide no mechanism to permit the insertion of self-adapting components that can optimize system performance according to diversity, software and hardware changes, and variations in the environment. They are not designed to accommodate dynamic updates of software, or to deal with component interdependence. This paper describes the philosophy behind 2K, a reflective, component-based operating system, and shows how it can be used to manage dynamism in modern computer environments.
1 Introduction

This position paper reviews the design issues of communication middleware, modern distributed operating systems, and environments that are characterized by rapid changes. It proposes 2K, a component-based operating system that uses reflection to manage change. In 2K, adaptation is driven by architectural awareness: the system software includes models of its own structure, state, and behavior. To implement adaptation, 2K incorporates a reflective middleware layer that admits on-the-fly customization through dynamic loading of new components. Our research investigates the deployment, within this framework, of dynamic policies and mechanisms for security, mobility, load balancing, fault tolerance, and quality of service for multimedia and real-time applications.

* Fabio Kon is supported in part by a grant from CAPES, the Brazilian Research Agency, proc. # 1405/95-2.
** Ashish Singhai is supported in part by the grant NSF CDA 94-01124.
2 Supporting Adaptation

In 2K, we build an integrated architecture for adaptability where change is the fundamental premise, and adaptability is the fundamental goal. The design of 2K addresses three important questions in the design of adaptive systems.

1. What to adapt?

2K focuses on two kinds of adaptation strategies. It adapts to frequently varying parameters (such as network bandwidth, connectivity, memory availability, and usage patterns) using dynamic reconfiguration of existing objects that constitute the framework. It adapts to slowly varying parameters (such as software versions, communication protocols, and hardware components) using dynamic code management. In both cases, reflection provides the means for isolating these ever-changing software components from more stable system and application components.
2. When to adapt?

2K addresses this question using architectural awareness, meaning reification of inter-component dependence and a dynamic representation of system state. Components can access the system state to determine if they need to adapt. Alternatively, changes in the system state can trigger automatic adaptation.
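As a rough illustration (hypothetical code, not the actual 2K implementation), the state-triggered side of this mechanism can be sketched in Python; the property name and the adaptation callback are invented for the example:

```python
# Hypothetical sketch: a reified system state that components may poll or
# watch; a change in a watched property triggers automatic adaptation.
class SystemState:
    def __init__(self):
        self._values = {}      # e.g. {"bandwidth_kbps": 512}
        self._watchers = {}    # property name -> list of callbacks

    def update(self, prop, value):
        self._values[prop] = value
        for callback in self._watchers.get(prop, []):
            callback(value)    # state changes trigger adaptation

    def watch(self, prop, callback):
        self._watchers.setdefault(prop, []).append(callback)

    def get(self, prop):
        return self._values.get(prop)   # components may also poll

state = SystemState()
log = []
# A (fictional) media component adapts when bandwidth drops.
state.watch("bandwidth_kbps",
            lambda v: log.append("low-res codec" if v < 256 else "hi-res codec"))
state.update("bandwidth_kbps", 512)
state.update("bandwidth_kbps", 128)
```

A component may either poll get() or register a watcher, mirroring the two options described above.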
3. How to adapt?

A Reflective Object Request Broker (ORB) and mechanisms for code distribution support the adaptation process. In the reflective ORB, components encapsulate ORB mechanisms and policies for method invocation, marshaling, concurrency, and the like. Code update mechanisms allow the dynamic replacement of system and application components, providing access to new functionality. Mobile agents distribute and migrate code to where it is needed.
In contrast to existing systems, where a large number of non-utilized modules are carried along with the basic system installation, our philosophy is based upon a "what you need is what you get" (WYNIWYG) model. In other words, the system configures itself automatically and loads the minimum set of components required for executing the user applications in the most efficient way. Our ongoing research aims at further extending the capabilities of our CORBA reflective ORB and improving its interoperability with other object models such as DCOM. In addition to implementing 2K as middleware running on existing commercial operating systems, we also intend to run 2K on Off++, our new architecturally-aware microkernel. In that way, we expect to provide support for high-performance distributed objects. Preliminary experiments with a prototyped reflective ORB and with a distributed multimedia application demonstrate that the 2K operating system will help to achieve substantial gains in flexibility, simplicity, and performance. For further information on the 2K project, including a full version of this paper, refer to http://choices.cs.uiuc.edu/2k.
Experiments with Reflective Middleware

Fabio M. Costa*, Gordon S. Blair, and Geoff Coulson
Department of Computing, Lancaster University, Bailrigg, Lancaster LA1 4YR, U.K.
{fmc,gordon,
[email protected]

Abstract. Middleware platforms have emerged as an effective answer to the requirements of open distributed processing. However, in our opinion, a new engineering approach based on configurability and openness of platform implementations is essential to meet the needs of application areas such as multimedia, groupware and mobile computing. This paper outlines our architecture for configurable and open middleware platforms, along with a first prototype. The architecture is based on the concept of reflection, the ability of a program to access, reason about and alter its own implementation in a principled way, according to a well-defined Meta-Object Protocol (MOP) [1].
1 Architecture

In common with most of the research on reflective languages and systems, we adopt an object-oriented model of computation and a procedural approach to reflection [2]. In addition, we propose the use of per-object (or per-interface) meta-spaces, allowing a fine level of control over the support provided by the middleware platform and a limited scope for the reflective computations. The meta-space is structured as a number of closely related but distinct meta-space models, in a way similar to the multi-model reflection framework of AL-1/D [3]. This simplifies the interface offered by the meta-space by maintaining a separation of concerns between different system aspects. The three aspects currently employed are described below.

The compositional meta-model represents the object in terms of its constituent components, as an object graph in which the nodes are components and the arcs are the local bindings between them. The encapsulation meta-model represents a particular interface in terms of its set of methods and associated attributes, together with key properties of the interface including its inheritance structure. Finally, the environment meta-model represents the execution environment for each interface as traditionally provided by the middleware platform. In a distributed environment, this corresponds to functions such as message arrival, enqueueing, selection, dispatching, unmarshalling, thread creation and scheduling (plus the equivalent on the sending side). *
* Sponsored by CNPq, Brazil.
Objects/interfaces at the meta-level are also open to reflection and may have an associated meta-meta-space, which is instantiated on demand. This potentially allows for an infinite tower of reflection. A complete description of our architecture can be found in [4].
2 Implementation

The prototype was implemented in Python 1.5 and consists of a base-level platform that enables the programmer to establish point-to-point distributed open bindings between application interfaces. Reflective meta-objects then provide a principled way to inspect and manipulate the three meta-models of such binding objects, in order to adapt the service provided by the binding. Access to these meta-objects can be obtained by calling the operations composition(), encapsulation() and environment() respectively, giving the interface name as a parameter. If the meta-object does not exist, it is created. The MOPs provided by each of the meta-levels consist of the following operations: compositional MOP: list composition, add/remove a component, and get information about local bindings; encapsulation MOP: inspect the features of the interface, add/remove methods, attributes, and also pre- and post-methods; environment MOP: insert/remove functions used for before and after processing, which allow the application programmer to modify the way in which method invocations are handled.
3 Concluding Remarks

This paper has presented an approach for the design and implementation of next-generation middleware platforms, exploiting the concept of reflection to provide the desired level of configurability and openness in a principled way. The prototype implementation has allowed us to identify several issues related to the implementation of the meta-models, such as consistency management and the feasibility of the use of orthogonal meta-spaces in the context of reflective open bindings. (A full version of this paper is available as Tech. Rep. MPG-98-11, Computing Department, Lancaster University.)
References
1. G. Kiczales, J. des Rivieres, and D.G. Bobrow. The Art of the Metaobject Protocol. MIT Press, 1991.
2. P. Maes. Concepts and experiments in computational reflection. In Proceedings of OOPSLA'87, pages 147-155. ACM, October 1987.
3. H. Okamura, Y. Ishikawa, and M. Tokoro. AL-1/D: A distributed programming system with multi-model reflection framework. In Proceedings of Workshop on New Models for Software Architecture, November 1992. (Also available from the Department of Computer Science, Keio University, Japan.)
4. G.S. Blair, G. Coulson, P. Robin, and M. Papathomas. An architecture for next generation middleware. In Proceedings of IFIP International Conference on Distributed Systems Platforms and Open Distributed Processing (Middleware'98), 1998.
Three Practical Experiences of Using Reflection

Charlotte Pii Lunau
Department of Computer Science, Aalborg University, Denmark
[email protected]

Abstract. Reflection exists in different forms with different purposes and advantages. This paper summarises our practical experience using three different forms of reflection in the development of three different applications.
1 Introduction

This note summarises the paper Reflection: Three Practical Experiences [1] and presents our experience developing three applications, each using a different kind of reflection. The first application, an object-oriented database, uses structural reflection and is implemented using metaclasses in Clos. The second application, a process control application, uses computational reflection and is implemented in an extension to Objective C. The third application is a fault tolerant automation system implemented in Java that reconfigures itself automatically when a fault is detected.

1.1 An Object-Oriented Database Using Metaclasses

The database is a simple database that allows objects to be stored on a file and loaded into a CommonLisp image from the file. In the design of the database system we used ordinary classes to represent the database and its contents, and we used metaclasses to implement a control function on the database. Data objects to be inserted in the database must inherit from a special data object class that has a redefined metaclass. Our experience is that metaclasses are a convenient and structured way to change the behaviour of language constructs. However, we discovered the following problem. A client who defines objects to be inserted in the database does this by specialising the data object class. Because class data object has a redefined metaclass, every specialisation of the class has to have the same metaclass. This means that our modification of the language construct is visible to all users of our data base system.

1.2 A Process Control Application Using Computational Reflection

A process control application monitors and controls a physical system and it must be able to react to changes and faults in the physical system. Process control software
should be able to change its structure and behaviour while it is running. To ease the implementation and maintenance of such software we have defined a computational reflective architecture [2] that is implemented in an extended version of Objective C. In order to use a computational reflective architecture for process control applications we had to extend the architecture and redefine the causal connection. Because several independent aspects of behaviour need to be monitored simultaneously, we extended the architecture to allow a base object to have more than one metaobject. The metaobjects attached to the same base object are composed so that each one is invoked in turn. The causal connection is redefined so that each metaobject affects the entire physical process and not just its base object.

1.3 A Fault Tolerant Automation System Using Java

Fault tolerant control is concerned with controlling the operation of an automation system in the presence of faults. The main purpose is to continue process operation, perhaps with reduced performance. Fault tolerant control is based on redundancy in the operations of the system and the possibility of reconfiguring the system. In [3], we propose an object-oriented software architecture that separates an automation system into three layers: the function layer, the fault detection layer, and the fault tolerant control layer. Our architecture is based on two enabling techniques: encapsulation of algorithms in objects and the Java reflection package. Encapsulation of algorithms in objects ensures that the algorithms have identical interfaces and can be exchanged at run time. The Java reflection package allows the contents of variables in other objects to be read and altered. The fault tolerant control layer uses this feature to install reconfiguration algorithms in the function layer.
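A sketch of these two enabling techniques, written in Python rather than Java for brevity (setattr plays the role of java.lang.reflect.Field.set); all class and field names are invented:

```python
# Illustrative sketch: algorithms encapsulated in objects with an identical
# interface, exchanged at run time by a control layer that alters another
# object's fields reflectively.
class NominalControl:
    def step(self, reading):
        return reading * 1.0

class DegradedControl:          # reduced-performance fallback
    def step(self, reading):
        return reading * 0.5

class FunctionLayer:
    def __init__(self):
        self.algorithm = NominalControl()
    def run(self, reading):
        return self.algorithm.step(reading)

def reconfigure(component, field_name, new_algorithm):
    # Analogous to reflective field assignment in Java: the fault tolerant
    # control layer installs a new algorithm without the layer's knowledge.
    setattr(component, field_name, new_algorithm)

layer = FunctionLayer()
assert layer.run(10) == 10.0
reconfigure(layer, "algorithm", DegradedControl())   # fault detected
assert layer.run(10) == 5.0
```

Because both algorithm objects expose the same step interface, the function layer is unaware that it has been reconfigured.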
Although Java’s reflection package provides a limited form of reflection, when used in conjunction with the encapsulation of algorithms, it is sufficient to implement our architecture for an automation system. The architecture separates the function layer and the fault tolerant control layer, and allows the fault tolerant control layer to reconfigure the function layer without its knowledge.
References
1. Lunau, C.P.: Reflection: Three Practical Experiences. Presented at ECOOP'98 Workshop on Reflection, Brussels, July 1998.
2. Lunau, C.P.: A Reflective Architecture for Process Control Applications. Proc. of ECOOP'97, Lecture Notes in Computer Science, vol. 1241, Springer-Verlag, 1997.
3. Lunau, C.P.: A Software Architecture for Fault Tolerant Control of a Ship Propulsion System. Proc. of IFAC CAMS Workshop, Fukuoka, Japan, October 1998.
Towards a Generic Framework for AOP

Pascal Fradet1 and Mario Südholt2
1 IRISA/INRIA, Campus de Beaulieu, 35042 Rennes cedex
[email protected]
2 École des Mines de Nantes, 4 rue A. Kastler, 44307 Nantes cedex 3,
[email protected]

1 Introduction

During the 1st workshop on AOP [AOP97] several fundamental questions were raised: What exactly are aspects? How to weave? What are the join points used to anchor aspects into the component program? Is there a general purpose aspect language? In this position paper, we address these questions for a particular class of aspects: aspects expressible as static, source-to-source program transformations. An aspect is defined as a collection of program transformations acting on the abstract syntax tree of the component program. We discuss the design of a generic framework to express these transformations as well as a generic weaver. The coupling of component and aspect definitions can be defined formally using operators matching subtrees of the component program. The aspect weaver is simply a fixpoint operator taking as parameters the component program and a set of program transformations. In many cases, program transformations based solely on syntactic criteria are not satisfactory and one would like to be able to use semantic criteria in aspect definitions. We show how this can be done using properties expressed on the semantics of the component program and implemented using static analysis techniques. One of our main concerns is to keep weaving predictable. This raises several questions about the semantics (termination, convergence) of weaving.
2 Aspects and aspect definitions

Component language and program transformations. We advocate using a single
powerful and flexible transformation language for the definition of aspects. First, our framework should be generic with respect to the component language. To this aim, the abstract syntax of the component language is described by a tree data type. The component program is seen and manipulated as a tree. Then, defining aspects for a specific component language can be done on the basis of the abstract syntax definition. A transformation is just a function which maps the tree representing the component program to a new tree. Any programming language could be used; there exist, however, powerful and executable specialized languages which permit expressing such transformations concisely. These languages are based on patterns and tree matching operators. TrafoLa-H [HS93] is such a language, where transformations are of the form pat ==> TreeExpr.
Applied to a source program, it transforms a subtree matching pat into the result of the evaluation of TreeExpr. The variables occurring in pat are bound to subtrees and TreeExpr is a functional expression which is evaluated with these bindings.

Aspects. The features of TrafoLa-H make it easy to specify join points, both
generic ones or join points which are specific to a particular program. For example, assuming an imperative component language, patterns matching "each program point", "each assignment containing a division", or "all calls to the function f" can be described succinctly. In this setting, an aspect is simply a set of transformations specifying how code should be transformed at join points. The order of declaration of transformations is not relevant and transformations can be applied in any order. A fundamental question is whether the transformations are semantics-preserving or not. We believe that restricting ourselves to semantics-preserving transformations would be too strong a limitation. The class of expressible aspects would boil down to optimization aspects. On the other hand, transformations which are not semantics-preserving may be much too general, because it is absolutely crucial to keep control over the semantics of woven programs. Each aspect language has to include appropriate restrictions on the transformations.

Generic weaving. The generic aspect weaver is defined in this setting using repeated applications of program transformations to the component program until a fixpoint is reached. The weaver is therefore parameterized with a component program P and a set of transformations T:

Weaver(P, T) = if ∃τ ∈ T : τ(P) ≠ P then Weaver(τ(P), T) else P   (1)

This definition raises several interesting issues. First, in general, this definition does not describe a terminating algorithm because of the fixpoint computation. So, one has to make sure that the rewriting system specified by the program transformations is terminating. While this problem is undecidable in general, it is often trivial to solve for practically relevant transformations. A second problem arises from transformations which are not semantics-preserving. Since no application order is specified, two different weavings of the same component program and aspects may lead to programs whose semantics differ. In some cases, this might be acceptable. Otherwise, one would also have to make sure that the rewriting system is confluent.
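Definition (1) can be transcribed almost literally, assuming transformations are functions on program trees and that the rewriting system terminates; the toy tree representation below is an invention for the example:

```python
# Sketch of the generic weaver as a fixpoint over a set of tree-to-tree
# transformations (assumes the rewriting system terminates).
def weave(program, transformations):
    changed = True
    while changed:                      # repeat until a fixpoint is reached
        changed = False
        for t in transformations:
            new_program = t(program)
            if new_program != program:  # some transformation still applies
                program = new_program
                changed = True
    return program

# Toy component "program": nested tuples; the single transformation
# rewrites every ("div", ...) node into a guarded division.
def guard_div(tree):
    if isinstance(tree, tuple):
        tree = tuple(guard_div(child) for child in tree)
        if tree and tree[0] == "div":
            return ("checked_div",) + tree[1:]
    return tree

woven = weave(("assign", "v", ("div", "a", "b")), [guard_div])
assert woven == ("assign", "v", ("checked_div", "a", "b"))
```

Since guard_div never reintroduces a "div" node, the rewriting terminates; a second weave leaves the program unchanged.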
3 Integrating program analyses

Property-based aspects. In many cases, purely syntactic criteria are not completely satisfactory to define aspects. As an illustration, let us consider a specific aspect dealing with program robustness. Intuitively, such an aspect specifies invariants which must be verified by a program. After weaving, the program either respects the invariants or invokes an exception. For instance, if the invariant to check is V ≤ 5, a naive solution would be to insert the statement if V > 5 then error after all assignments to V. But there is no point in generating such a test after the assignment V := V - 1. We would like to check the invariant only when it may be violated. This means we need a way to define and use semantic criteria in aspects. This is achieved by extending the syntax of aspect-defining transformations as follows:

pat ==> if Prop then T1 else T2   (2)

where Prop is a property of the component program defined using its standard semantics. Intuitively, this can be read "for each part of the component program matching pat, if Prop can be proven then produce the tree T1, else produce T2". Assuming an axiomatic semantics, an example of a transformation (which can be implemented using a local analysis) is
pat ==> if {V ≤ 5} V := E {V ≤ 5} then V := E else V := E; if V > 5 then error;

which avoids inserting tests when the invariant holds after the assignment, assuming that it holds before. Note that we could achieve even better results with a global analysis. In this case, inserted tests augment the precision of the analysis because it proceeds on the transformed programs. Since we consider only static and automatic weaving, the properties occurring in aspects are meant to be inferred by a static analyzer. Thus, we can only expect safe approximations of these properties. Furthermore, one is not supposed to have any knowledge about the precision of the analyses. In order to have control over the semantics of the produced programs, it is important to enforce that each transformation of the form (2) satisfies the following semantic equality:

Prop ==> [[T1]] = [[T2]]
In the case of semantics-preserving transformations this condition trivially holds. Otherwise, the condition ensures that the precision of the analyzer cannot have any impact on the meaning of the woven program. Indeed, if Prop does not hold then the analyzer will not be able to infer it (it infers only safe approximations) and T2 will be produced; otherwise T1 and T2 are semantically equivalent and the result of the analyzer does not semantically matter. Thus, the properties are best seen as filters to optimize weaving. In our previous example, it is clear that

{V ≤ 5} V := E {V ≤ 5} ==> [[V := E]] = [[V := E; if V > 5 then error;]]
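The example transformation can be sketched operationally; the tuple encoding of statements and the deliberately crude local "analysis" below are inventions for illustration:

```python
# Sketch of the property-guided transformation: insert an invariant check
# after an assignment to V unless a (very conservative) local analysis can
# prove that {V <= 5} V := expr {V <= 5} holds.
def preserves_invariant(expr):
    # Safe approximation: succeeds only for V := V - c with c >= 0, which
    # obviously preserves V <= 5; it may answer False even when the Hoare
    # triple holds, but never True when it does not.
    return (isinstance(expr, tuple) and len(expr) == 3
            and expr[0] == "-" and expr[1] == "V"
            and isinstance(expr[2], int) and expr[2] >= 0)

def transform(stmt):
    # stmt is ("assign", var, expr); returns the woven statement list.
    _, var, expr = stmt
    if var == "V" and not preserves_invariant(expr):
        return [stmt, ("if", (">", "V", 5), ("error",))]
    return [stmt]

assert transform(("assign", "V", ("-", "V", 1))) == \
    [("assign", "V", ("-", "V", 1))]                      # no test needed
assert transform(("assign", "V", ("+", "V", 1))) == \
    [("assign", "V", ("+", "V", 1)), ("if", (">", "V", 5), ("error",))]
```

Because the analysis only ever errs on the side of inserting the test, the woven program satisfies the invariant regardless of the analyzer's precision.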
Generic weaving of property-based aspects. Since we are interested in a generic
description of aspects and the aspect weaver, we need a framework allowing the definition of the component language semantics, the description of properties and the derivation of static analyzers. The automatic derivation of an analyzer from a semantics and a property is still an open research issue. At the moment, we are only working on a common formulation for different analyzers. The weaver remains essentially the same as defined in (1), but each application
of a transformation may require a program analysis to be performed. In general, properties or transformations can be global, so the component program must be re-analyzed after each transformation. In the common case of local properties and transformations, a one-pass analyzer can be integrated into the weaver.

Hypotheses. The usability of the approach as described hitherto may depend too much on the analyses. For example, the aspect of robustness described above would not be realistic without program analysis. This does not quite fit the spirit of AOP (i.e. "no smart compilers"). We address this problem by extending the language of aspects with so-called hypotheses. A hypothesis is of the form pat ==>! Prop. It is not checked by the analyzer but integrated as a new piece of information. Through hypotheses, the user can help and control the analyzer. Of course, false hypotheses may lead to unexpected results, but they are at least documented and the user has explicitly acknowledged her or his responsibility.
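A minimal sketch of how hypotheses could feed an analyzer (the representation of patterns and properties as strings is invented for illustration):

```python
# Sketch: hypotheses are user-supplied (pattern, property) facts that the
# analyzer trusts without verification, i.e. pat ==>! Prop.
def make_analyzer(hypotheses):
    def proves(pattern, prop):
        if (pattern, prop) in hypotheses:
            return True   # trusted as-is: the user takes responsibility
        return False      # a real analyzer would try to infer it here
    return proves

proves = make_analyzer({("V := input()", "V <= 5")})
assert proves("V := input()", "V <= 5")      # follows from the hypothesis
assert not proves("V := V + 1", "V <= 5")    # no hypothesis, not inferred
```

A false hypothesis would make proves() answer True for a property that does not hold, which is exactly the risk the paper accepts in exchange for documentation of the assumption.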
4 Conclusion

Until now, aspects have always been described and implemented in a rather ad hoc way. Here, we have sketched a generic framework based on program transformation and analysis which accommodates a large class of aspects. It is generic with respect to the component programming language: different languages can be incorporated by changing the abstract syntax. Once the syntax is described, the framework provides a pattern-based language to describe aspects and a generic weaver. Aspects can refer to semantic properties of the component program. In order to implement property-based aspects, the framework provides a common format to express static analyzers. At the moment, the main weakness of the framework is semantic. When transformations which are not semantics-preserving are to be taken into account, the framework does not provide much help to reason about the semantics of weaving. For this reason, the framework does not come close to a theoretical foundation at the moment. However, it does provide useful tools as well as simple answers to the questions asked in the introduction. In the near future, we intend to complete the description and formalization of the framework. We see robustness and exceptions as a paradigmatic example of an aspect. They are largely independent from the component program, but their introduction crosscuts large parts of it. We are designing a comprehensive aspect of robustness and plan to implement it for a small imperative language.
References
[AOP97] K. Mens, C. Lopes, B. Tekinerdogan, G. Kiczales. "Aspect-Oriented Programming Workshop Report", 1st Int. Workshop on AOP, ECOOP, 1997.
[HS93] R. Heckmann, G. Sander: "TrafoLa-H Reference Manual", LNCS 680, ch. 8, 1993.
[Kic+97b] G. Kiczales et al.: "Aspect-Oriented Programming", collection of technical reports no. SPL-97-007 to 010, Xerox Palo Alto Research Center, 1997.
Recent Developments in AspectJ

Cristina Videira Lopes and Gregor Kiczales
Xerox Palo Alto Research Center
3333 Coyote Hill Rd., Palo Alto CA 94304, USA
{lopes, kiczales}@parc.xerox.com
Abstract. This paper summarizes the latest developments in AspectJ, a general-purpose aspect-oriented programming (AOP) extension to Java. Some examples of aspects are shown. Based on our experience in designing language extensions for AOP, we also present a design space for AOP languages that may be of interest to the AOP community.
1 Introduction
Traditionally, programs involving shared resources, multi-object protocols, error handling, complex performance optimizations and other systemic, or cross-cutting, concerns have tended to have poor modularity. The implementation of these concerns typically ends up being tangled throughout the code, resulting in systems that are difficult to develop, understand and maintain. Aspect-oriented programming is a technique that has been proposed specifically to address this problem [3]. In the last couple of years we have been designing aspect-oriented languages. That work led us to AspectJ, a general-purpose aspect-oriented extension to Java. In AspectJ, aspects are programming constructs that work by cross-cutting the modularity of classes in carefully designed and principled ways. So, for example, a single aspect can affect the implementation of a number of methods in a number of classes. This enables aspects to capture the cross-modular structure of these kinds of concerns in a clean way.
2 Most Recent Features of AspectJ
In this position paper we illustrate only the basic features of AspectJ. A more comprehensive description of the system can be found in [4]. The basic features are presented with a couple of examples. Consider two classes, Point and Line, with set and get methods (the implementations are not shown), and an aspect that is intended to show the kind of accesses (i.e. read/write/create) that are performed on points and lines:
class Point {
  Point(int x, int y)
  void set(int x, int y)
  void setX(int x)
  void setY(int y)
  int getX()
  int getY()
}

class Line {
  Line(int x1, int y1, int x2, int y2)
  void set(int x1, int y1, int x2, int y2)
  // also set y1, x2, y2
  int getX1()
  // also get y1, x2, y2
}

aspect ShowAccesses {
  static before void Point.set(*), void Line.set(*),
                void Point.setX(*), void Point.setY(*),
                void Line.setX1(*), void Line.setY1(*),
                void Line.setX2(*), void Line.setY2() {
    System.out.println("Write");
  }
  static before int Point.getX(), int Point.getY(),
                int Line.getX1(), int Line.getY1(),
                int Line.getX2(), int Line.getY2() {
    System.out.println("Read");
  }
  static before Point(*), Line(*) {
    System.out.println("Create");
  }
}

This aspect contains three weave declarations (weaves for short), all of them starting with static before. The effect of this aspect is that every time an instance of either of those two classes is invoked or created, a message is printed out on the screen: if the method invoked is one of the set methods, then the string "Write" is printed out; if the method invoked is one of the get methods, then the string "Read" is printed out; and if a constructor is executed, then the string "Create" is printed out. As the example shows, the weaves apply to elements of classes, such as methods and constructors. Designators name those elements. A method designator has the generic form Type Type.MethodName(Formals) and a constructor has the generic form Type(Formals), where Type is a class or interface name. The character * can also be used to indicate any return type and any list of formal parameters. Fields and class initializers can also be used in weaves (see next example). The keyword before means that the body of those weaves is to be executed before the body of the element (method or constructor) is executed. AspectJ also supports after, meaning that the body of the weave is to be executed after the
C.V. Lopes and G. Kiczales
body of the element is executed, and catch and finally, both of these being similar to Java's catch and finally constructs. All of these (i.e. before, after, catch and finally) are called advise weaves, in that they annotate the classes' elements with code wrappers. AspectJ supports another kind of weave, called the new weave, which extends the classes with new elements. For example,

aspect Color {
    static new Color Point.color, Line.color;
    static new void Point.setColor(Color c), Line.setColor(Color c) {
        color = c;
    }
}

The above aspect extends the classes Point and Line with new fields of type Color called color and new methods called setColor. When this aspect is woven, it is exactly as if color and setColor were members of the classes Point and Line.¹ The keyword static means that no aspect instances are involved. Static weaves are always executed for all instances of the designated classes. The alternative is to have non-static weaves, which require the instantiation of the aspect.²
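To make the weaving semantics concrete, the following plain-Java sketch shows roughly how an instance of Point behaves after ShowAccesses and Color are woven in. The class WovenPoint and the placeholder Color class are invented for illustration; the paper does not show the actual weaver output, so this is only a behavioral approximation, not AspectJ's real translation scheme.

```java
// Hypothetical hand-woven result: a behavioral model only.
class Color {}

class WovenPoint {
    private int x, y;
    Color color;                        // added by the Color "new" weave

    WovenPoint(int x, int y) {
        System.out.println("Create");   // before-weave on the constructor
        this.x = x;
        this.y = y;
    }

    void setX(int x) {
        System.out.println("Write");    // before-weave on the set methods
        this.x = x;
    }

    int getX() {
        System.out.println("Read");     // before-weave on the get methods
        return x;
    }

    void setColor(Color c) {            // added by the Color "new" weave
        color = c;
    }
}
```

The key point is that the advice body runs before the original method body, while the new weave simply adds members as if they had always been declared in the class.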
3 Properties of Aspects
The basic features explained above are sufficient for explaining three important properties of AspectJ. First, aspects can capture cross-cutting design issues. In the ShowAccesses example, each weave applies to several methods of two classes. Without AspectJ, the lines of code that print the messages would be repeated over and over again, and the abstraction behind this aspect would be lost. Second, AspectJ is more general-purpose than the other AOP languages we have designed. Because this language is fairly generic, aspects can capture a diverse set of cross-cutting design concerns. Aspects can be used in distributed and non-distributed applications for debugging, concurrency control, inter-class protocols, optimizations, and for programming many non-functional issues. Third, aspects can easily be plugged into and out of applications. For example, the ShowAccesses aspect is plugged in simply by invoking the weaver:

    % ajweaver Point.ajava Line.ajava ShowAccesses.ajava

With AspectJ, plugging the code of an aspect in and out involves no editing of the classes, and the code is truly inserted into or removed from the executable code (as opposed to its activation being conditional on some flag). Plugging the ShowAccesses aspect out is as easy as omitting its file name from the command line above.
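For contrast, the flag-based alternative that the paper rules out can be sketched as follows; the class and the TRACE flag are invented for illustration. The tracing code stays in the executable and every method repeats the same guard, which is exactly the duplication that weaving and un-weaving avoid.

```java
// Hypothetical flag-guarded tracing: toggling TRACE does not remove
// the code from the executable, and every method repeats the guard.
class FlaggedPoint {
    static boolean TRACE = false;
    private int x;

    void setX(int x) {
        if (TRACE) System.out.println("Write");
        this.x = x;
    }

    int getX() {
        if (TRACE) System.out.println("Read");
        return x;
    }
}
```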
¹ This example is intended to illustrate the meaning of the new weave. It is by no means intended to suggest that the color feature of points and lines should be coded as an aspect. The issue of when to use it falls outside the scope of this position paper.
² Non-static weaves and the use of aspect instances fall outside the scope of this position paper.
[Figure 1 plots the AOP languages AML, RG, RIDL, COOL, "talk", and AspectJ along two axes: low-level to high-level, and application-specific through domain-specific and concern-specific to general purpose.]

Fig. 1. Range of AOP language features in some of our work. RIDL is the remote data transfer language in [5]; COOL is the synchronization language in [5]; RG is the image processing system in [6]; AML is the sparse matrix language in [1]; "talk" refers to the language in the slides of the invited talk [2].
4 Relation to Previous Work
Having designed a number of different AOP extensions for different purposes, we summarize our work in Fig. 1. The figure presents a two-dimensional space for language design (not specific to AOP). The horizontal axis indicates to what extent the language abstracts away from low-level implementation issues. The vertical axis indicates how specific the language constructs are to a particular domain or problem. The current version of AspectJ, summarized in this position paper, is more general-purpose and more low-level than our previous work.
References

1. Irwin J., Loingtier J.-M., Gilbert J. et al. Aspect-Oriented Programming of Sparse Matrix Code. In Proc. International Scientific Computing in Object-Oriented Parallel Environments (ISCOPE), Marina del Rey, USA, 1997.
2. Kiczales G. AOP: Going beyond objects for better separation of concerns in design and implementation. Xerox PARC, slides of invited talk, http://www.parc.xerox.com/aop/invited-talk/
3. Kiczales G., Lamping J., Mendhekar A. et al. Aspect-Oriented Programming. In Proc. European Conference on Object-Oriented Programming, Finland, 1997.
4. Lopes C. and Kiczales G. Aspect-Oriented Programming with AspectJ. Xerox PARC, tutorial, http://www.parc.xerox.com/aop/aspectj/tutorial
5. Lopes C. V. D: A Language Framework for Distributed Programming. PhD thesis, College of Computer Science, Northeastern University, Boston, 1997.
6. Mendhekar A., Kiczales G. and Lamping J. RG: A Case-Study for Aspect-Oriented Programming. Xerox PARC, Palo Alto, CA. Technical report SPL97-009 P9710044, February 1997.
Coordination and Composition: The Two Paradigms Underlying AOP?

Robb D. Nebbe
Software Composition Group, Institut für Informatik und angewandte Mathematik, Universität Bern, Neubrückstrasse 10, CH 3012 Bern, Switzerland
[email protected]

Introduction

Our experience recovering architectures from object-oriented systems in the context of the FAMOOS Esprit project corroborates the existence of aspects that cross-cut the functionality of a software system. Furthermore, it is important for programmers to recognize these aspects, even in the absence of language support for aspects, because the tangling that results is important to understanding what policies have been adopted for issues such as concurrency, distribution, persistency, etc. Our hypothesis on the causes of this tangling is based on our observation that software systems can be structured as a set of independent semantic domains consisting of a core problem domain and a set of coordinated supporting domains. These supporting domains cover distinct problem domains such as concurrency, communication, and resource allocation. Within any particular domain the object-oriented paradigm, which is essentially compositional in nature, works extremely well. However, between domains composition is not enough. Important design decisions about how concurrency, distribution and persistency are handled cannot be abstracted out because of the way they cross-cut the core problem domain. Fundamentally similar solutions are adapted to each specific context in which they are applied, resulting in design decisions being duplicated throughout the source code. Accordingly, the software is harder to understand, because key decisions are not clearly defined, and harder to evolve, because a change in policy potentially requires numerous changes to the source code. What aspect-oriented programming provides, and programming languages typically lack, is the ability to cleanly coordinate supporting domains with the core domain while maintaining a clear separation of concerns.
Furthermore, it accomplishes the task in a more principled way than approaches such as reflection, while retaining far greater flexibility than achieved by building support into the programming language. We start by outlining our observations about the architecture of software systems and the problems we encountered. We then try to relate these difficulties to the support provided by the programming language and examine the traditional approach of extending the programming language to deal with these problems. Finally, we offer a hypothesis about the nature of components and aspects as embodying composition and coordination.

S. Demeyer and J. Bosch (Eds.): ECOOP'98 Workshop Reader, LNCS 1543, pp. 402-405, 1998. Springer-Verlag Berlin Heidelberg 1998
1 Software Architecture
In the context of the FAMOOS Esprit project we have observed that our case studies have architectures that exhibit certain commonalities. Each has a single core domain, such as mail sorting or pipeline management. Moreover, they also have supporting domains. In the case of pipeline management this is persistency, and in the case of mail sorting this is concurrency and communication. Each of the supporting domains is relatively independent of the core domain, in the sense that they do not directly affect the semantics of the core domain; in other words, they appear to affect only how things are done rather than what is done. This suggested the following hypothesis: a software system can be structured as a set of independent semantic domains consisting of a core problem domain and a set of coordinated supporting domains. The model of the core problem domain (or core domain model) captures the basic functionality of the software system. The supporting domain models help by providing services such as synchronization, threads, persistency, and RMI that are needed to implement this functionality. Typically the relationship with these supporting domain models is hidden in the implementation of the core domain model, where it is coded into the various classes. Two situations were identified as problematic:
• The code relating the core problem domain to the supporting problem domains is distributed across the classes of the core problem domain. Furthermore, this code is often fundamentally similar and is in effect duplicating a single policy in different contexts. For example, the synchronization policy is often the same but adapted specifically to each class; changing the policy requires changing each class.
• Distinctions are lifted into the core domain model, where they are totally irrelevant, from supporting domain models in order to facilitate coordinating the two models.
For example, classes are split into synchronized and non-synchronized variants, or persistent and non-persistent variants, thus increasing the chance that the underlying similarity will go unnoticed. Both situations complicate understanding the software system. The duplication in the first situation greatly increases the chances of human error in adapting and implementing a policy in a particular class. The second situation creates the risk that, if the model of the problem domain is adapted, not all of the variants will be recognized as capturing a single concept from the problem domain, and they will thus become inconsistent. Both problems are aggravated by the fact that classes provide the only means of organizing abstractions. One means of getting around this limitation is to extend the programming language to support what is, in our terminology, a supporting domain model directly. In the next section we look at two such cases.
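The first problem situation can be made concrete with a hedged Java sketch; the class names and the single-exclusive-lock policy below are invented for illustration. The same synchronization policy is hand-duplicated, slightly adapted, in each core-domain class, so changing the policy means editing every class:

```java
// One locking policy, duplicated by hand in two unrelated core classes.
class MailBin {
    private final Object lock = new Object();
    private int count;

    void add() { synchronized (lock) { count++; } }           // policy, copy 1
    int size() { synchronized (lock) { return count; } }
}

class Pipeline {
    private final Object lock = new Object();
    private double pressure;

    void setPressure(double p) { synchronized (lock) { pressure = p; } }  // policy, copy 2
    double getPressure() { synchronized (lock) { return pressure; } }
}
```

Nothing in the code marks the two `synchronized` blocks as instances of one policy, which is precisely why the underlying similarity goes unnoticed.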
2 The Traditional Approach: Language Extensions
In their work on persistence, Moss and Hosking [5] suggest that an extension to Java supporting persistence should be both orthogonal and transparent, principles attributed to [1]. By orthogonal they mean independent from the type system: persistency can be applied to a single object, or perhaps a set of objects, independent of their respective types. Transparency relates to how many decisions a programmer must make in the source code relating to a particular extension. Moss and Hosking point out that this relates to the amount of control the programmer has over the extension. If an extension is completely transparent, then its use is entirely a consequence of the semantics of the core domain model. In AOP, transparency does not translate into a lack of control, only a reliance on the semantics of the core domain model to provide the appropriate join points. If we look at the two situations identified as problems, we see that the second relates to a lack of orthogonality; the first, however, is more complex. The distribution throughout the source code is related to transparency, but the fact that fundamentally similar code is duplicated relates to the fact that, in order to formulate a general policy capturing the underlying similarity, one must have reflective facilities to add new data members and insert method invocations. Another example of extending a language to encompass supporting domains is the Illinois Concert C++ system (ICC++). Even though its authors do not use the principles of orthogonality and transparency to explain their work, they are quite appropriate. Using a simplified version of C++ as a base, ICC++ handles locality, communication, thread creation and synchronization transparently, orthogonally and efficiently [2]. The underlying argument in ICC++ (coming from my understanding of [6]) is that the transparency is critical to obtaining a respectable level of performance.
If the approach were not transparent then the programmer would be required to program too many assumptions about the context in which an application was to execute thus crippling the attempts of the compiler to produce efficient code. Orthogonality is also a critical aspect since many optimizations involve locking sets of objects with different types as a single collection.
3 Composition and Coordination
I suggest that a composition language is good at defining individual semantic domains, including both the core domain model and any supporting domain models. What a composition language is not good at is coordinating these different domain models. This is because, in order to support the separation of concerns, the coordination should be both orthogonal and transparent. Early work on aspects leaves the distinction between aspects and objects somewhat unclear [4]. I feel this is because the notion encompasses a particular supporting domain model as well as the coordination of this model with the core domain model. For example, in section 3.3 of her thesis [4], "Design Decisions and Alternatives", Lopes relates design decisions and alternatives, essentially covering the domain models behind COOL and RIDL rather than issues related purely to aspects.
Our experience suggests that coordination does not semantically link domains. The supporting domains know nothing of the core domain, and the code that coordinates the two is in the classes of the core domain. Furthermore, this code does not appear to affect what is done but only how it is done. We note that in COOL [4] the aspects are not allowed to change the state of an object. This is rationalized as preserving a clear separation of responsibilities. Our case studies suggest that this is a reasonable limitation and is at the heart of what separates aspects from components.
Conclusion

To paraphrase [3]: object-oriented technology provides a good fit for defining the semantics of individual problem domains. However, many important non-functional issues cross-cut any such domain. This situation arises because the relationship between a core domain model and its supporting domain models is not one of composition, upon which the object-oriented paradigm is based, but of coordination. The principal advantage of aspect-oriented programming is that, like language extensions, it provides a means of maintaining a clear separation of concerns, but without sacrificing flexibility by building a particular approach into the language. Furthermore, it is a more principled approach than reflection, which facilitates understanding and evolution.
Acknowledgments

This work has been funded by the Swiss Government under Project NFS-200046947.96 and BBW-96.0015, as well as by the European Union under the ESPRIT program, Project 21975.
References

1. M. P. Atkinson and R. Morrison, "Orthogonally Persistent Object Systems", Int. J. Very Large Data Bases 4, 3, 319-401, 1995.
2. A. Chien, J. Dolby, B. Ganguly, V. Karamcheti, X. Zhang, High Level Parallel Programming: The Illinois Concert System, submitted for publication, 1997.
3. G. Kiczales, J. Lamping, A. Mendhekar, C. Maeda, C. Lopes, J. Loingtier and J. Irwin, Aspect-Oriented Programming, Xerox Palo Alto Research Center, 1997.
4. C. I. V. Lopes, D: A Language Framework for Distributed Programming, Ph.D. thesis, Northeastern University, Nov. 1997.
5. J. E. B. Moss and A. L. Hosking, "Approaches to Adding Persistence to Java", in First International Workshop on Persistence and Java, Technical Report 96-58, Sun Microsystems Laboratories, Nov. 1996.
6. J. B. Plevyak, Optimization of Object-Oriented and Concurrent Programs, Ph.D. thesis, University of Illinois at Urbana-Champaign, 1996.
Operation-Level Composition: A Case in (Join) Point

Harold L. Ossher and Peri L. Tarr
IBM T.J. Watson Research Center, P.O. Box 704, Yorktown Heights, NY 10598 USA
{ossher, tarr}@watson.ibm.com
Abstract. The identification and integration of join points—locations where different components describe overlapping concerns—is at the core of research in AOP. The selection of potential join points—the types of locations in code, such as statements or declarations, that may be joined—affects, either positively or negatively, many properties of both aspect weavers and “woven” systems. This paper explores some issues in selecting potential join points.
1 Introduction
Our work on subject-oriented programming [1, 2] (SOP) focuses on two key issues we believe are at the core of research in the domain of aspect-oriented programming:
• Facilitating the identification and description of cross-cutting concerns in software—i.e., aspects that affect more than one unit of functionality in the system, given some definition of "units of functionality" (objects, modules, functions, etc.).
• Enabling the identification and integration of join points, which are locations in systems affected by one or more cross-cutting concerns.
The process of integrating join points involves describing how a cross-cutting concern affects code at one or more join points. The integration process is called composition or weaving. The set of possible join points includes all locations in all system components, which we call statement-level join points to indicate that they can occur anywhere. SOP, however, is predicated on a belief that a significant majority of join points of concern in software development are those represented by operations, and that the majority of cross-cutting issues of concern involve capabilities that affect multiple operations. Additionally, a focus on operation-level joining is especially appropriate in an OO context, since it adds the power of composition naturally within the OO paradigm. Operation join points need not imply specification of all aspects as functions. We believe many kinds of capabilities for which AOP might be used, which might appear non-functional, actually involve functional aspects and operation join points. Undoubtedly, cases exist where non-functional specification of aspects, in different notations, is appropriate. Even then, operation join points are often appropriate. This paper explores the broad utility of functional aspects and operation join points and discusses issues affecting the feasibility of supporting general statement-level join points.
It is written as a comparison of operation and statement-level join points. The issues raised are the essence, however, not the comparison itself: they provide an initial guide to considerations involved in evaluating potential kinds of join points.

S. Demeyer and J. Bosch (Eds.): ECOOP'98 Workshop Reader, LNCS 1543, pp. 406-409, 1998. Springer-Verlag Berlin Heidelberg 1998
2 The Prevalence of Operation-Level Join Points
Many kinds of cross-cutting concerns affect the definition of collections of operations that span multiple units of functionality. For example, many objects might support a print capability. The particular way in which this capability works in a given system might depend on one or more system requirements, such as "all print operations will send mail to the system administrator if they cannot complete successfully." Cross-cutting concerns that affect the behavior of groups of operations typically require the use of operation join points—e.g., to add a check for success after the invocation of each print operation, and to send mail as appropriate. In our experience, a wide range of common cross-cutting concerns are, in fact, correctly described using operation join points, even when, at first glance, this is not obviously the case. This phenomenon arises because cross-cutting concerns often are specified in terms of how they affect existing object behaviors (when they define new behaviors, they involve operation join points trivially), which makes them amenable to implementation via operation join points. Some examples follow.

Persistence: Persistence is often a pervasive property of data; thus, it is desirable to develop a "persistence aspect" that implements persistence independently of any particular objects, and compose this capability into appropriate objects. In deciding how to compose the persistence aspect with the objects to which it applies, we note that retrieval of persistent objects from a database occurs upon object access, and update of persistent objects occurs upon object creation or modification. Thus, the needed join points are operations: the "update" part of the persistence aspect affects constructor and set methods, while the "retrieve" part affects get methods.
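The claim that "the needed join points are operations" can be illustrated with a hedged sketch using plain Java dynamic proxies rather than any AOP tool. The interface, the PersistenceAspect class, and the dbTraffic log below are all invented; a real persistence aspect would talk to a database instead of recording strings.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

interface Account {
    void setBalance(int b);
    int getBalance();
}

class AccountImpl implements Account {
    private int balance;
    public void setBalance(int b) { balance = b; }
    public int getBalance() { return balance; }
}

// Hypothetical "persistence aspect": operations are the join points.
// get* methods are treated as retrieve points, set* methods as update points.
class PersistenceAspect implements InvocationHandler {
    final Object target;
    final List<String> dbTraffic = new ArrayList<>();  // stands in for real I/O

    PersistenceAspect(Object target) { this.target = target; }

    public Object invoke(Object proxy, Method m, Object[] args) throws Exception {
        if (m.getName().startsWith("get"))
            dbTraffic.add("retrieve before " + m.getName());
        Object result = m.invoke(target, args);
        if (m.getName().startsWith("set"))
            dbTraffic.add("update after " + m.getName());
        return result;
    }
}
```

Because the join points are whole operations, the aspect needs nothing but the methods' names and signatures; the core class AccountImpl is untouched.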
Error detection and handling, and fault tolerance: Some forms of error detection are intrinsic to the definition of a type of object; e.g., it is always wrong to attempt to pop an item from an empty stack. Such "well-formedness" definitions are usually built into a type. Other kinds of errors, however, are context-specific—their presence depends on the requirements of the particular application in which the types are used. For example, a set of generic, reusable components (e.g., lists, stacks, sets) used in a compiler have considerably looser error handling and fault tolerance requirements than the same components used in a safety-critical system. In such cases, it is desirable to describe the error detection and handling behaviors as one or more separate aspects. Some of the most common kinds of non-intrinsic error handling mechanisms we have seen are those represented as pre- and post-conditions on modify methods. The join points in such cases are operations: pre- and post-condition checks, error-catching methods, and error-handling methods can all be joined to the methods that can cause or encounter the error conditions. Other cases exist where one really must add additional error checks within existing code; in such cases operation join points are not sufficient, unless one is willing to duplicate code.

Logging, tracing, and metrics-gathering: Where and when logging, tracing, or metrics-gathering activities occur is frequently dependent on application-wide decisions that are determined, for example, by development phase or local policies. Ideally, code to perform these activities would be modeled as an aspect and composed selectively into the relevant parts of an application. All of these activities are usually associated with operations (e.g., to log entry into, and exit from, an operation) and could be composed using operation join points.
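One way to read "pre- and post-condition checks ... can all be joined to the methods" is the following hedged Java sketch. The stack classes and the capacity precondition are invented; the context-specific check is joined to the modify method by plain subclassing, leaving the generic component untouched, much as an operation-level weaver would.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A generic, reusable component with only its intrinsic error handling
// (popping an empty ArrayDeque already fails, as it should for any stack).
class SimpleStack<T> {
    private final Deque<T> items = new ArrayDeque<>();
    void push(T t) { items.push(t); }
    T pop() { return items.pop(); }
    int size() { return items.size(); }
}

// Invented context-specific "error handling aspect", joined at the
// operation level: a precondition wrapped around the modify method,
// as a safety-critical context might require.
class BoundedStack<T> extends SimpleStack<T> {
    private final int capacity;
    BoundedStack(int capacity) { this.capacity = capacity; }

    @Override
    void push(T t) {
        if (size() >= capacity)            // precondition at the join point
            throw new IllegalStateException("capacity exceeded");
        super.push(t);
    }
}
```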
Caching behavior: It is often desirable to consider the caching of intermediate results as an aspect separate from an algorithm. In particular, an algorithm written without caching might need to be modified to include caching upon observation of inadequate performance. In the important case where the computation traverses a network of objects, processing each node to compute some value(s), operation join points are natural. The "process" operation in the algorithmic aspect just performs the computation. The caching aspect of this same operation maintains a cache of the computed value(s), probably in the node itself. It either returns the cached value or invokes the algorithmic aspect, depending on the currency of the cache. Clearly, the composition in this case must give the caching aspect control. Other common cross-cutting issues include serializability, atomicity, replication, security and visualization. We believe that support for these and many other ubiquitous features ends up being well represented by operation join points. Clearly, functional aspects and operation join points are required and useful for describing and integrating a wide range of important cross-cutting concerns in software systems. For this reason, the SOP paradigm supports "AOP" based on functional aspects and operation join points. Part of our ongoing research includes the exploration and validation of the variety of functional and non-functional aspects to which operation-level joining applies.
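The caching composition can be sketched in plain Java; the Node type and the placeholder computation are invented, and a simple "absent = stale" cache stands in for a real currency check. The caching aspect gets control first and only falls through to the algorithmic aspect when the cache has no current value, matching the control ordering described above.

```java
import java.util.HashMap;
import java.util.Map;

class Node {
    final int weight;
    Node(int weight) { this.weight = weight; }
}

class Algorithm {
    int invocations = 0;                  // lets us observe cache hits
    int process(Node n) {                 // the algorithmic aspect
        invocations++;
        return n.weight * 2;              // placeholder computation
    }
}

// The caching aspect is given control: it returns the cached value when
// present, and only otherwise invokes the algorithmic aspect.
class CachingAlgorithm extends Algorithm {
    private final Map<Node, Integer> cache = new HashMap<>();

    @Override
    int process(Node n) {
        return cache.computeIfAbsent(n, super::process);
    }
}
```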
3 On General, Statement-Level Join Points
A key element of SOP is flexible, domain-independent, generic composition rules, which identify the points at which composition is to occur and specify the details of the composition desired. It is important that the rules continue to work even as the inputs evolve, within reason. We excluded statement-level join points for several reasons:
• We have not yet found convincing evidence that the additional power resulting from such join points is of general use, particularly in light of the concomitant increase in complexity. This is particularly true in light of the broad spectrum of circumstances under which operation join points appear to be applicable.
• The tractability of the problem of defining general-purpose statement-level weavers is questionable. Even stable references to join points in rules present serious problems in the light of evolving inputs.
• We are extremely concerned about the degree of unpredictability that results from statement-level weaving. Changing a statement in a piece of code changes both the data- and control-flow properties of that code; any guarantees that might have been made about the code are negated. In fact, much work in the area of software analysis and testing has attempted to identify the impact of changes to code, in an attempt to identify new errors and to help select test cases whose results have been invalidated by the changes. Unfortunately, data- and control-flow analyses are inherently exponential, further suggesting the difficulty involved in understanding the effects of statement-level composition.
• A significant contribution of OO development was to make changes additive rather than invasive [3], which is important because of the well-documented, adverse
effects of invasive changes. The notion of general statement-level joins represents invasive change, violating the additive-changes principle. In light of these concerns, we do not believe the additional power provided by arbitrary statement-level joining is of sufficiently broad and practical use to justify its potential disadvantages. Indeed, the work on AOP we are aware of has concentrated on weaving in specific domains, and in each case the kinds of join points, though not always operations, are carefully circumscribed. Further research is required to characterize the circumstances under which statement-level joining may be practical and justified, despite its drawbacks.
4 Conclusion
We believe that the ability to describe capabilities associated with concerns that cut across multiple parts of a system, and to specify how those capabilities affect the system, without having to physically intersperse the code that realizes such capabilities, is an extremely important part of support for programming-in-the-large. Used correctly, this capability, which defines AOP, can reduce the complexity and improve the maintainability of a system considerably. Identifying points at which a cross-cutting concern affects a system is an important part of AOP. While the set of potential "join points" is large and could, potentially, include every statement and expression within a system, our research suggests that focusing on operation-level joining can successfully address many common composition needs, for both functional and non-functional aspects. Further, we believe that general statement-level joining raises numerous technical and methodological concerns that may render it intractable, infeasible, or undesirable in the general case. Further work is needed to test and extend this conclusion. SOP is an approach to AOP that is based on operation-level joins. SOP thus avoids problems inherent in general statement-level joining while enabling description and composition of many kinds of cross-cutting concerns in OO programs. It also preserves and extends many of the desirable features of the object-oriented paradigm. The choice of join points has made it possible to develop a general-purpose, domain-independent compositor, which is central to our tool support [4].
5 Bibliography
1. William Harrison and Harold Ossher. Subject-oriented programming (a critique of pure objects). In Proceedings of the Conference on Object-Oriented Programming: Systems, Languages, and Applications, September 1993.
2. Harold Ossher, William Harrison, Frank Budinsky, and Ian Simmonds. Subject-oriented programming: Supporting decentralized development of objects. In Proceedings of the 7th IBM Conference on Object-Oriented Technology, July 1994.
3. John Vlissides. Subject-Oriented Design. In C++ Report, February 1998.
4. http://www.research.ibm.com/sop
Deriving Design Aspects from Conceptual Models

Bedir Tekinerdogan & Mehmet Aksit
TRESE project, Department of Computer Science, University of Twente, P.O. Box 217, 7500 AE Enschede, The Netherlands. email: {bedir | aksit}@cs.utwente.nl, www server: http://wwwtrese.cs.utwente.nl

Abstract. Two fundamental issues in aspect orientation are the identification and the composition of aspects. We argue that aspects must be identified at the requirement and the domain analysis phases. We also propose a mechanism for gradually composing aspects throughout the software development process. We illustrate our ideas for the design of a transaction framework.
1. Introduction

Software components can be defined as programming language abstractions. Examples of software components are procedures, data structures and objects. Programming languages are vehicles to express abstract executable mechanisms. Components can be identified by the use of heuristic rules in conventional methods. For example, in OMT [Rumbaugh 91] tentative classes are identified by looking for nouns in a problem statement. The identified components are composed by means of the object-oriented composition mechanisms, such as inheritance, aggregation and association. Object-oriented methods define a number of heuristic rules to identify relations among components. Similar to object-oriented design, in aspect-oriented design two issues appear to be important: identification of the abstraction models, that is, aspects, and the composition of these aspects, or aspect weaving [Kiczales et. al 97]. We will focus on these two issues in this paper. Regarding aspect identification, we argue that, like objects, aspects should be identified during the requirement and the domain analysis phases. We will also discuss an approach in which aspects are gradually composed along the software development process. The outline of this paper is as follows: Section 2 will elaborate on our approach for aspect identification and aspect composition. Section 3 will give our conclusions.
2. Aspect identification

2.1 Where to look for?

Software development can be seen as a problem solving process in which the requirements represent the problem for which a programming solution is required. A software development process involves a number of steps, which produce various kinds of software artifacts. These steps can be considered as transitions between artifacts. No doubt, the early phases of the software development process include

S. Demeyer and J. Bosch (Eds.): ECOOP'98 Workshop Reader, LNCS 1543, pp. 410-413, 1998. Springer-Verlag Berlin Heidelberg 1998
concerns, which have a major impact on the final structure and quality of the software [Aksit 97]. We therefore believe that aspects appear beyond the programming level and, as such, the identification of aspects should start at the requirements and domain analysis phases.

2.2 How to identify?

Here the fundamental question is: given a problem domain, like for example the transaction domain, how should we abstract and identify the aspects? To address this issue, we propose the method described in Figure 1, which defines the basic steps needed to identify aspects in the earlier phases of the software development process. This method will be described in more detail in subsequent sections.
PROBLEM DOMAIN
1. Requirements Analysis
2. Domain Analysis
ASPECT-ORIENTED DESIGN
6. Identify Rules for Aspect composition
3a. Conceptual Modeling
5. Aspect Composition Specification
3b. Aspect abstraction
4. Define design space
Fig. 1. The aspect modeling method
2.2.1 Requirements Analysis

The first step in software development is the requirements analysis phase. The goal of this phase is to understand and capture the exact needs of the clients of a software system [Wieringa 96]. Requirements analysis deals with eliciting, analyzing and capturing the requirements of the client for whom the software system is developed.

2.2.2 Domain Analysis

Domain analysis aims at systematically identifying, formalizing and classifying the knowledge in a problem domain in a reusable way [Arrango 94]. The basic steps of domain analysis are the identification of the knowledge sources, the data collection from these sources, the analysis of the extracted knowledge, and knowledge modeling. Domain models are mainly derived by considering commonalities and variations among the retrieved data. The basic difference between requirements analysis and domain analysis is that requirements analysis focuses on the requirements of one application, whereas domain analysis attempts to model the knowledge of a wide range of related applications. The deliverables of domain analysis are a set of domain models, relations and rules that are common to the corresponding applications in a problem domain. Domain models can be represented in many ways, e.g. ER diagrams, object-oriented class diagrams, or just ordered text. In [Aksit et al. 98] we applied domain analysis techniques to support the development of stable frameworks.

B. Tekinerdogan and M. Aksit

2.2.3 Conceptual Modeling

Requirements analysis extracts the potential aspects. Domain analysis collects knowledge about these aspects. The conceptual modeling process elaborates on the domain model to define canonical models. Canonical models are similar to concepts of the classical view [Smith & Medin 81]. Concepts are not chosen arbitrarily but are formed by abstracting the knowledge about instances. An identified concept is useful if it has meaningful differences with the existing concepts. This meaningfulness, in its turn, is defined by the context.

An example: aspect modeling for adaptable transaction systems. We applied the above ideas for aspect identification and aspect modeling in a pilot project which aims at designing an object-oriented atomic transaction framework to be used in a distributed car dealer management system [Tekinerdogan 96]. After the requirements analysis and domain analysis phases we could extract four basic groups of aspects: aspects related to the transaction models [Elmagarmid 92], aspects related to quality factors such as adaptability [Adaptability 96] and performance, and aspects related to the object model. We developed conceptual models for all these aspects.

2.2.4 Define Design Space

Even though we may not know about individual designs, it is convenient to talk about design spaces. We define a design space as a set of descriptions of possible designs. The identified concept models represent the dimensions of such a design space. Our concept of design space is similar to the concept of information space described in [Jacobson 92]. In [Aksit & Tekinerdogan 98] we elaborate on the concept of design space and, more specifically, on the concept of design algebra.
For the adaptable transaction domain the design space is as follows:

Design Space = Transaction × ObjectModel × ObjectCoupling × Adaptability × Performance

Each element in this design space represents a design solution for the given problem domain. Since each basic concept is composed of sub-concepts, the design space is very large.

2.2.5 Aspect Composition Specification

Basically, there are two ways to compose aspects: composition at once, or gradual composition. In the composition-at-once approach, one aspect composer composes all the identified aspects into the final realization model. The problem with this approach is that the aspect composer needs to deal with all the aspects at once, which may be a difficult, error-prone and time-consuming process. In addition, not all the combinations in the design space may be possible or useful. It is therefore not necessary to elaborate on all the elements of the design space. Accordingly, we need some mechanisms to restrict this large design space and exploit only the useful combinations.

In order to meet this requirement, dedicated aspect composers are used in our approach. These aspect composers gradually explore the useful combinations in the design space. For example, in the transaction system application we adopt an AdaptabilityComposer, which composes a domain model aspect with the adaptability aspect. The basic issue in the use of multiple dedicated aspect composers is the ordering of the composition of aspects. If we have 6 aspects, we can apply the gradual aspect composition in 6! = 720 ways. This is a difficult task; we therefore apply some general rules to manage this situation. The most intuitive ordering is to start with the domain models and end with the component models. From the resulting space a new model is selected which includes the useful and desired combinations. This process is iterated until we have included all the aspects and domain models. The final result of this process is a realization model. The realization model includes all the elements which define the final implementation for the design problem.

2.2.6 Define Aspect Composition Rules

After we have determined the ordering of the aspect composers, we must define the rules which will be applied in each specific aspect composer. In [Aksit & Tekinerdogan 98] we describe this process in more detail.
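The two ideas above — a design space formed as a cartesian product of concept dimensions, and dedicated composers that gradually prune it to the useful combinations — can be sketched as follows. This is an illustrative sketch under our own assumptions: the dimension values and the composer rules (e.g. that runtime adaptability requires an active object model) are hypothetical and are not taken from the paper.

```python
from itertools import product

# Hypothetical values for three of the paper's design-space dimensions.
dimensions = {
    "Transaction": ["flat", "nested", "open-nested"],
    "ObjectModel": ["passive", "active"],
    "Adaptability": ["fixed", "runtime-adaptable"],
}

# The full design space is the cartesian product of the dimensions.
design_space = [dict(zip(dimensions, combo))
                for combo in product(*dimensions.values())]
assert len(design_space) == 3 * 2 * 2  # 12 candidate designs

# A dedicated aspect composer is modeled as a filter that keeps only
# the useful combinations for the aspect it is responsible for.
def adaptability_composer(space):
    # Hypothetical rule: runtime adaptability requires an active object model.
    return [d for d in space
            if not (d["Adaptability"] == "runtime-adaptable"
                    and d["ObjectModel"] == "passive")]

def transaction_composer(space):
    # Hypothetical rule: open-nested transactions are ruled out here.
    return [d for d in space if d["Transaction"] != "open-nested"]

# Gradual composition: composers are applied in a chosen order, each one
# restricting the design space before the next composer is considered.
realization_candidates = transaction_composer(adaptability_composer(design_space))
print(len(design_space), "->", len(realization_candidates))  # 12 -> 6
```

Applying the composers one at a time is what keeps the process manageable: each composer only has to reason about its own aspect, and later composers never see combinations an earlier composer has already rejected.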
3. Conclusion

In this paper we have proposed an approach for identifying proper aspects during the requirements analysis and domain analysis phases. The aspect composition can be done centrally, by one composer, or gradually, by multiple composers along the software development process. We illustrated the practical applicability of gradually composing aspects.
References

[Adaptability 96] M. Aksit, B. Tekinerdogan, L. Bergmans, K. Lieberherr, P. Steyaert, C. Lucas & K. Mens. ECOOP '96 Adaptability in Object-Oriented Software Development Workshop, url: http://wwwtrese.cs.utwente.nl/ecoop96adws/, 1996.
[Aksit 97] M. Aksit. Issues in Aspect-Oriented Programming. Position paper, AOP workshop, ECOOP '97.
[Aksit & Tekinerdogan 98] M. Aksit & B. Tekinerdogan. Models for Composing Design Aspects. University of Twente, Department of Computer Science, 1998.
[Aksit et al. 98] M. Aksit, B. Tekinerdogan, F. Marcelloni & L. Bergmans. Deriving Object-Oriented Frameworks from Domain Knowledge. To be published as a chapter in M. Fayad, D. Schmidt, R. Johnson (eds.), Object-Oriented Application Frameworks, Wiley, 1998.
[Arrango 94] G. Arrango. Domain Analysis Methods. In Schäfer, R. Prieto-Díaz and M. Matsumoto (eds.), Software Reusability, Ellis Horwood, New York, 1994, pp. 17-49.
[Elmagarmid 92] A. Elmagarmid (ed.). Database Transaction Models for Advanced Applications. Morgan Kaufmann, San Mateo, CA, 1992.
[Jacobson 92] I. Jacobson, M. Christerson, P. Jonsson & G. Overgaard. Object-Oriented Software Engineering - A Use Case Driven Approach. Addison-Wesley/ACM Press, 1992.
[Kiczales et al. 97] G. Kiczales, J. Lamping, A. Mendhekar, C. Maeda, C. Lopes, J.-M. Loingtier and J. Irwin. Aspect-Oriented Programming. ECOOP '97 Conference Proceedings, LNCS 1241, June 1997, pp. 220-242.
[Smith & Medin 81] E.E. Smith & D.L. Medin. Categories and Concepts. Harvard University Press, 1981.
[Tekinerdogan 96] B. Tekinerdogan. Requirements Analysis of Transaction Processing in a Distributed Car Dealer System. Technical report, University of Twente, 1996.
[Wieringa 96] R.J. Wieringa. Requirements Engineering: Frameworks for Understanding. Wiley, 1996.
Aspect-Oriented Logic Meta Programming

Kris De Volder ([email protected])
Programming Technology Lab, Vrije Universiteit Brussel
Abstract. It is our opinion that declaring aspects by means of a full-fledged logic language has a fundamental advantage over using a restricted special-purpose aspect language. As an illustration we present a simplified implementation of the Cool aspect weaver. Cool declarations are represented as logic facts in a Prolog-like logic meta-language for Java. A fundamental advantage of this approach is that it enables aspect-oriented logic meta programming.
We will briefly introduce the TyRuBa system. TyRuBa [DV98] was designed as an experimental system to explore the use of logic meta programming for code generation. As an illustration of the potential of the approach, we used TyRuBa to implement a simplified subset of the aspect language Cool as proposed by Lopes [LK97]. Because the Cool aspect declarations are represented as logic facts, they can be accessed and declared by logic rules. The fundamental advantage this offers is that it allows defining new kinds of aspect declarations in terms of other, related or more low-level, aspect declarations. We call this technique aspect-oriented logic meta programming because it depends on logic meta programs which reason about aspect declarations.
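The idea of deriving new aspect declarations from existing ones can be sketched outside TyRuBa as well. The following is a minimal Python sketch, not TyRuBa's actual syntax or machinery: the `readerWriter` declaration and the rule expanding it into `selfex` and `mutex` declarations are our own illustrative assumptions, only loosely modeled on Cool's synchronization declarations.

```python
# Aspect declarations represented as logic-style facts:
# (predicate, class, methods). The first fact is a hypothetical
# high-level declaration; the second is a low-level one.
facts = [
    ("readerWriter", "Buffer", ("read", "write")),
    ("selfex", "Logger", ("log",)),
]

def derive(facts):
    """Apply rules that expand high-level declarations into low-level ones."""
    derived = list(facts)
    for pred, cls, methods in facts:
        if pred == "readerWriter":
            reader, writer = methods
            # Hypothetical rule: a reader-writer declaration expands into
            # self-exclusion on the writer and mutual exclusion between
            # the reader and the writer.
            derived.append(("selfex", cls, (writer,)))
            derived.append(("mutex", cls, (reader, writer)))
    return derived

all_decls = derive(facts)
print(("mutex", "Buffer", ("read", "write")) in all_decls)  # True
```

The point mirrors the paper's claim: because declarations are ordinary facts, a rule (here, the `readerWriter` expansion) can define a new kind of aspect declaration entirely in terms of existing lower-level ones, and the weaver only ever needs to understand the low-level vocabulary.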
1 TyRuBa

The TyRuBa system is basically a simplified Prolog variant with a few special features to facilitate Java code generation. We assume familiarity with Prolog and only briefly discuss the most important differences.

TyRuBa's lexical conventions differ from Prolog's. Variables are identified by a leading "?" instead of starting with a capital. This avoids confusion between Java identifiers and Prolog variables. Some examples of TyRuBa variables are: ?x, ?Abc12, etc. Some example constants are: x, 1, Abc123, etc. Because TyRuBa offers a quoting mechanism which allows intermixing Java code and logic terms, the syntax of terms is slightly different from Prolog's, to avoid confusion with function or procedure calls in Java: TyRuBa compound terms are written with