Business Process Implementation for IT Professionals by Robert B. Walford
ISBN: 0890064806
Artech House © 1999 (599 pages)
Table of Contents

Business Process Implementation for IT Professionals and Managers
Foreword
Preface
Chapter 1 - Introduction

Part I - Automation asset management
Chapter 2 - Automation asset system
Chapter 3 - Life cycle management
Chapter 4 - Repository utilization
Chapter 5 - Business rules
Chapter 6 - Financial management
Chapter 7 - Planning and strategy

Part II - Automation assets
Chapter 8 - Process modeling
Chapter 9 - Scenario modeling
Chapter 10 - Role modeling
Chapter 11 - Information modeling
Chapter 12 - Client/server modeling
Chapter 13 - Dialog and action modeling
Chapter 14 - Software component modeling
Chapter 15 - Workflow modeling

Part III - Automation methodology
Chapter 16 - Overview of process implementation methodology
Chapter 17 - Spirals
Chapter 18 - Step 1: Define/refine process map
Chapter 19 - Step 2: Identify dialogs
Chapter 20 - Step 3: Specify actions
Chapter 21 - Step 4: Map actions
Chapter 22 - Step 4(a): Provision software components
Chapter 23 - Step 5: Design human interface
Chapter 24 - Step 6: Determine workflow
Chapter 25 - Step 7: Assemble and test
Chapter 26 - Step 8: Deploy and operate
Chapter 27 - Retrospective

Glossary
List of Acronyms
Index
List of Figures
List of Tables
Business Process Implementation for IT Professionals and Managers
Robert B. Walford

Library of Congress Cataloging-in-Publication Data
Walford, Robert B.
Business process implementation for IT professionals and managers / Robert B. Walford.
p. cm. — (Artech House software engineering library)
Includes bibliographical references and index.
ISBN 0-89006-480-6 (alk. paper)
1. Management information systems. I. Title. II. Series.
T58.6.W324 1999
658.4’038—DC21 99-18034 CIP

British Library Cataloguing in Publication Data
Walford, Robert B.
Business process implementation for IT professionals and managers. — (Artech House software engineering library)
1. Management information systems 2. Data transmission systems 3. Business — Communication systems
I. Title
658.05’46
ISBN 0-89006-480-6

Cover design by Lynda Fishbourne

© 1999 ARTECH HOUSE, INC.
685 Canton Street
Norwood, MA 02062

All rights reserved. Printed and bound in the United States of America. No part of this book may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without permission in writing from the publisher.

All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized. Artech House cannot attest to the accuracy of this information. Use of a term in this book should not be regarded as affecting the validity of any trademark or service mark.

International Standard Book Number: 0-89006-480-6
Library of Congress Catalog Card Number: 99-18034

10 9 8 7 6 5 4 3 2 1
This book is dedicated to my colleagues Truman Mila and Mark Feblowitz, who breathed life into the PRIME methodology through their innovative application of systems and software engineering.
About the Author Robert B. Walford received the B.S. degree in Electrical Engineering from the Illinois Institute of Technology and the M.S. and Ph.D. degrees in Electrical Engineering from
the University of Southern California. He has over 30 years of diverse experience in engineering, telecommunications, and information processing, including hardware design, software design, and technical and general management. He has held engineering and management positions with several commercial organizations, including the Hughes Aircraft Company, Bell Laboratories, EMI Medical, Florists’ Transworld Delivery (FTD), GTE Data Services, and GTE Telephone Operations. He also performed as an independent consultant in factory automation for General Motors and was the owner of Tri-Ware Digital, Inc., an original equipment manufacturer (OEM) that developed and marketed a comprehensive minicomputer-based accounting and management system for small businesses.

Dr. Walford’s current position is Manager of Advanced Information Management Technology for GTE Telephone Operations. Areas of responsibility include computing infrastructure specification, application architecture, technology planning, and project management. Specific technologies of interest are component architectures, knowledge management, business rules, and decision support.

His academic experience includes the teaching of circuit design and mathematics courses as an assistant professor of electrical engineering at the University of Southern California. He was also an adjunct professor in the Computer Science and Engineering Department of the University of South Florida, teaching graduate and undergraduate courses in software engineering and data communications. He also served as an engineering accreditation visitor for the Accreditation Board for Engineering and Technology (ABET) and was responsible for examining and evaluating the computer engineering curriculum in a number of universities as part of their periodic accreditation process.

As a participant in the initial international standards efforts for intelligent networks, Dr. 
Walford originated the four-layer reference model, which is at the core of current intelligent network standards. For that work, he received a Warner Award, the highest recognition that GTE Corporation gives for technical achievement.

In addition to Business Process Implementation for IT Professionals and Managers, he is the author of three books on information networks and systems published by Addison-Wesley in 1990: Information Systems and Business Dynamics, Network System Architecture, and Information Networks: A Design and Implementation Methodology. He also has authored and presented numerous talks and articles on management, telecommunications, and software engineering topics.

Dr. Walford is a registered professional engineer in Florida, Illinois, and California and a certified public accountant. He is a senior member of the Institute of Electrical and Electronics Engineers and a member of the National Society of Professional Engineers, the Florida Engineering Society, and the American Institute of Certified Public Accountants.
Foreword I think this book must have been written by a collusion of half a dozen different Robert Walfords. One Robert Walford is a subject matter expert (SME) in several areas of expertise, especially telecommunications-related domains, his broad knowledge being the result of a long tenure in the industry. As an SME, Bob has been a resource to numerous business process definition and reengineering projects, providing reliable content to their documents, models, and decision-making processes. Another Robert Walford, Bob the methodology practitioner, has been a participant as well as a guide and coach to teams following various prescribed organizational methodologies.
The lead author of this book is, undoubtedly, Bob the methodologist, whose experience as a user and a facilitator of several methodologies has been brought to bear on the design of new methodologies, striving for and encapsulating best practices. This Bob has reflected on the reasons, both technical and organizational, for successes and failures of process modeling and implementation projects. He has a compulsion and a passion for imparting clear concepts and their coherent integration, for separating the generic from the idiosyncratic, and for culling the essential from the incidental. Bob the methodologist has worked closely with Dr. Bob, as his colleagues sometimes call him, whose doctorate in computer science equips him to avoid superficiality and ensure firm theoretical foundations. Another author, Professor Bob, teacher of information technology and related subjects at the university, has made the book’s presentation eminently lucid and digestible. The approach taken in the book is heavily influenced by Bob the professional engineer, whose perspective complements those of Dr. Bob and Professor Bob. Engineer Bob is concerned with the principles and techniques that engineers use to build things that work. For example, Bob the professional engineer designed and built a front porch onto his house, which is located in a hurricane-vulnerable area. Not only did he follow current practices, standards, and codes, but he added his own engineering calculations to ensure the structural integrity of the product. I am confident that porch will be left standing after any hurricane, even if the rest of the neighborhood is leveled. Engineer Bob wants the implemented business processes to get the job done, despite the constraints and hazards of the enterprise environment. A systems engineering approach is therefore advocated. 
A systems engineering approach defines customer needs and required functionality early in the development cycle and follows a structured development process in which technology components are combined to end up with working systems that meet the requirements. A systems engineering approach is eclectic in that it integrates several relevant disciplines and specialty groups into a team effort. Besides the expertise embodied in the components, the structured development process must include provisions to handle, from concept through production and operation to ultimate replacement or disposal: cost and schedule; training and support; quality assurance, testing, and performance; and system integration and deployment.

If asked, Dr. Walford might call himself Bob the technologist. He has been an ardent observer of information technology trends, tracking, assessing, and, when appropriate, championing adoption of new technologies in the corporation. He writes about some of those topics in this book. As you will see, Bob the technologist does not believe in silver bullets, panaceas, or overnight cures. But we can depend on him for insight, prudence, and commonsense guidance. It is hoped that understanding where the author is coming from will help readers know where they are going.

But enough about Bob(s)! A concise statement of the theme of the book is the assertion (in Chapter 1) that “management by process itself requires a process.” A process for managing processes, that is, a methodology, has as one of its dimensions a set of coordinated modeling activities and guidelines (presented in Part III). Another dimension of the methodology is a conceptual framework for capturing specifications of information systems (Part II). 
The products of the methodologists’ labors, primarily the populated models, are viewed as assets having a life cycle, with the life cycle supported by repository technology, financial management, and business rules, among other things (Part I). Preceding the three parts of the book is an introduction that articulates prerequisites, principles, and perspectives. In particular, there is cogent reflection on the significant business and technology factors that motivate management by process, rationale for the utilization of a conceptual modeling approach, and justification for the emphasis on an asset management paradigm. The preceding paragraph, which essentially summarizes the book in reverse, discloses that the book goes well beyond the explication of a single methodology. There is ample
discussion of the basic ideas that underlie the methodology—concepts, theories, rationale, and pragmatics—which apply not only to this particular methodology but also provide insights into the subject of process-oriented methodology in general. A number of pertinent topics are woven into various parts of the book and expounded in a lucid and instructive manner. Take business rules, for example, a subject fairly new but highly relevant to business process automation. Business rules are characterized as statements that define or constrain the operation of an enterprise. Business rules are always present where business processes live, and business rules induce some of the key requirements on enterprise systems. However, adequate techniques have yet to be developed for eliciting, acquiring, analyzing, implementing, and monitoring business rules. Some business rule deployment tools have appeared in the marketplace, but they are targeted at a narrow set of business rule types. The author clarifies the business rule concepts, explains the need for a broader definition, and relates business rules to process-driven methodologies. Business rules are characterized as an asset that must be managed from process definition to implementation. Through the use of business rules and business rules technology, the objective is the capability to rapidly deploy or redeploy the corresponding business logic. Another example of a well-chosen infusion of significant material is workflow. The methodology proposes that the implementation of a business process can best be achieved by mapping processes to workflow systems, based on the details of the specified tasks, actions, interfaces, and so on. A workflow system is a platform that performs the work of a business process by providing mechanisms for specifying, scheduling, coordinating, and monitoring the workflow instances that are created in response to business events. 
Making a workflow system part of the architecture for target implementations reduces the need to reprogram those generic capabilities for every developed system. Although the state of workflow technology, as for business rules, is not yet mature, an activity for determining workflow is included in the methodology as an indication of the trend toward using workflow technology to implement business processes. In contrast, the systems we have inherited as our legacy are not built on workflow platforms; it is proposed here that adopting workflow technologies will reduce some of the problems of creating new systems (our new legacies) and will facilitate integration and evolution.

Besides business rules and workflow, it is clear that many more concepts and technologies impinge on process implementation methodology: knowledge management, document management, and integration architectures, just to mention a few. At several points in the book, the author appears poised to cover those topics, but there are limits to what one book can cover. We could not expect more from a single volume.

I believe there is a substantial body of professionals, both technical and nontechnical, as well as teachers and students, who should read this book to reap the benefits of the assembled knowledge and views. I see the book as a key resource, inhabiting an environment where process implementation methodology is a defined area of expertise and where a group of appropriately trained individuals is a center of excellence to provide the necessary technical and organizational skills. Thus, the book contents alone are not enough. There must also be an explicit intention to act and to back the associated activities with the necessary resources. There must be a commitment to the belief that the process implementation methodology is one of the most important business processes in the organization. 
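The foreword characterizes a workflow system as a platform that specifies, schedules, coordinates, and monitors workflow instances created in response to business events, with business rules constraining the operation of the process. A minimal sketch of that idea follows; all class and function names are illustrative assumptions, not APIs from the book or from any workflow product:

```python
# Minimal sketch of a workflow engine: it holds process specifications,
# creates a workflow instance for each business event, coordinates the
# steps in order, and retains every instance for monitoring.
# All names here are illustrative, not from the book or any product.

class Workflow:
    """A process specification: a name plus an ordered list of steps."""
    def __init__(self, name, steps):
        self.name = name
        self.steps = steps  # list of (step_name, callable)

class WorkflowEngine:
    def __init__(self):
        self.specs = {}      # event type -> Workflow specification
        self.instances = []  # monitoring: every instance ever created

    def register(self, workflow):
        self.specs[workflow.name] = workflow

    def on_event(self, event_type, payload):
        """A business event creates and runs a new workflow instance."""
        spec = self.specs[event_type]
        instance = {"workflow": spec.name, "payload": payload,
                    "completed": [], "state": "running"}
        self.instances.append(instance)
        for step_name, action in spec.steps:  # coordinate steps in order
            instance["payload"] = action(instance["payload"])
            instance["completed"].append(step_name)
        instance["state"] = "done"
        return instance

# Usage: an order-handling process triggered by an "order" event.
# The "validate" step applies a simple business rule (quantity > 0).
engine = WorkflowEngine()
engine.register(Workflow("order", [
    ("validate", lambda o: {**o, "valid": o["qty"] > 0}),
    ("fulfill",  lambda o: {**o, "shipped": o["valid"]}),
]))
result = engine.on_event("order", {"item": "widget", "qty": 2})
```

The "validate" step illustrates the foreword's point about business rules: a declarative constraint on enterprise operation attached to a process step, rather than logic buried in application code.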
In fact, I would venture to say that the ability of an enterprise to perform the methodological aspects of its operations will, in the coming years, become more important than a majority of its other business processes. As technology domains mature and technology components (e.g., software applications and services provided over networks) become more specialized and standardized, a primary differentiator between enterprises may be the quality of the process implementation methodology rather than of the business processes specific to the domain. That could be considered a somewhat radical view, namely, that the critical success factor of a telecommunications company,
for example, could depend at least as much on process implementation methodology as on telecommunications expertise. To give credence to that contention, we have to ask: Where will the professionals come from to carry out the methodologies? Where will they get the training they need? What knowledge and expertise are needed for success? Some progress is evident in recent advances in university curricula, in courses on systems analysis, requirements engineering, and the like. Other skills such as facilitation and project management also are important and available. Dr. Walford folds such material into this book, demonstrating a predilection for a multidisciplinary approach, which is precisely what is needed.

In addition to needing a new generation of professionals to carry out the new order of things, mechanisms for organizational change will be needed. The fact that methodologies are higher-order processes, that is, processes about processes, makes them both powerful and paradoxical. From one perspective, a methodology is carried out by agents who take on an external, apparently dispassionate view of the enterprise, and they build models from an apparently objective viewpoint; the operations they perform are on other business processes. On the other hand, the methodology is just another process (caution is always advised when the word just is used). The process happens to be performed by agents who often are stakeholders internal to the current or future configuration of the enterprise. So a participant in a methodology wears two hats: that of the designer of the new enterprise and that of a potential occupant of the new organization. Obviously, there are some natural conflicts between self-interest and organizational altruism. The fact that a methodology is a higher-order process is therefore a challenge to overcome. Consider, as well, that the introduction of a new methodology is an implementation of an even higher-order business process. 
That infinite regress is, in principle, troublesome, but in practice we have not reached the point where there are any significant consequences. One of the main reasons for developing a methodology is to bridge the gap between business development and information technology. Historically, the relationship between the two has been ill-defined, not atypical of the nature of relationships between customers and application vendors. Business involves business judgment and decision making, for which the most natural forms of communication have been informal media such as conversations, text, and real-world scenes. Implementors use more formal, rigorous representations as they consider system designs and get closer to procedures that execute on computers. Business is learning to be more rigorous in its process descriptions, and IT is learning to reciprocate by utilizing methods that tolerate some ambiguity and incompleteness (rather than, for example, forcing specifications into molds for the sake of direct implementation). The progression from the more freeform, unstructured languages to the more formal, structured ones is one of the obstacles the methodology is supposed to overcome by gradually transitioning from one to the other. The methodology in this book addresses that issue (often called the requirements gap) to a significant degree. It is a prescription for how the two camps can interact on a constructive basis. In conclusion, it is an admirable task indeed for the many Robert Walfords to have assembled and so plainly presented the material in this book, which is both deep in concept and broad in scope. Sol Greenspan
Preface The King is dead; long live the King! That famous cry sums up most aspects of modern business practice. The previously existing competitive environment, scope, internal structures, and automation support needs of an enterprise have disappeared and been replaced by other sets of conditions and requirements. In time, those needs, too, will disappear and be replaced by yet another set and much more quickly than before. The concept of “Internet years” applies to most aspects of modern life. To stay viable, an
enterprise must learn to live with the new king and begin to prepare itself for the next one, who inevitably will arrive when least expected. From an information technology (IT) perspective, we recently have converted from a centralized mainframe environment to one with a distributed client/server structure. Even before the latter environment began to stabilize, the rapid emergence of the Internet has created the need for yet another version with its own needs and constraints. This swift succession is likely to continue into the foreseeable future as technology advances and customers demand the services and products enabled by the new capabilities. Adding value in this environment requires that a “stretch view” along with innovative approaches to enterprise automation be used. That will result in some possibly controversial directions, but there is no hope of keeping up with the rapid pace without utilizing paths other than the existing ones. The basic unit of the enterprise, from an automation perspective, is considered to be a process rather than a particular function of the enterprise. This approach is taken not only because a process orientation is being used for much of the current work in organizational dynamics (e.g., process reengineering) but because it is a more natural construct for determining and defining automation functionality that meets the needs of the enterprise. When we consider a process approach, it rapidly becomes evident that the specification of a process is just the beginning. A considerable amount of effort must be applied to the transformation of processes from the business environment in which they are defined to the technical environment in which they must be implemented and deployed. Different process models are needed to make the transformation effective. 
This book specifically addresses an issue usually missed in the discussion of process engineering and management: the need for eventual implementation and deployment of the business processes. There are many publications concerned with the need for an enterprise to be process oriented that provide approaches to the specification of so-called optimum processes through some type of business reengineering. However, little consideration has been given to how to actually implement those processes using manual or automated functions and to successfully deploy them in the enterprise. Thus, many enterprise process engineering activities have failed.

The central purpose of this book is to fill that void through the specification of a process implementation methodology. The process orientation of the methodology gives rise to its name: process implementation methodology (PRIME). Unfortunately, the mere specification of a series of steps is not enough to provide a workable methodology. If such were the case, many such methodologies probably would be available. An effective methodology must fit into the current and projected enterprise business and technical environments. It also must provide a means to solve the four generic business problems: decrease costs, reduce time to market, increase quality, and provide greater value for the customer. PRIME provides a way to meet all those requirements while staying focused on process implementation.

The development of PRIME rests on three supporting concepts: systems engineering, automation assets, and modeling. A systems engineering approach permits the many needed technology and business concepts to be considered as an integrated whole rather than as isolated entities. An automation asset view ensures that the entities utilized in process specification and implementation are correctly managed and their inherent value to the enterprise understood. 
Extensive modeling is used throughout the presentation to define and structure the discussions and permit a relatively rigorous examination of the principles, concepts, and entities involved. In many current instances, the specification of enterprise automation emphasizes technology and products with only a passing mention of an associated methodology. Technology and products, no matter how well conceived and designed, cannot be effectively employed without methodology. Even when methodologies are utilized, they
rely on previous methodologies that were developed for centralized computing architectures and that are no longer appropriate. Many current methodologies are based on either data or control specifications and were first defined in the mid-1970s, when the software development and deployment environment was considerably different from what it is now. Although the existing methodologies have been adapted over the years to some extent, their basic approach has not changed, and they still are unable to accommodate the needs of a process orientation efficiently. In addition, these existing methodologies have a number of other disadvantages when they are applied to the emerging environment: For example, they do not take advantage of reuse; they are hard to adapt to a distributed deployment environment (e.g., client/server configurations); they do not adequately consider the human interfaces; they do not involve the stakeholders as an integral part of the development process; they inherently require a long time-to-market cycle for new products; and they produce products that are difficult to maintain.

The PRIME methodology addresses all those issues and includes their resolution as an integral part of the methodology activities rather than as an add-on or afterthought. For that reason, PRIME provides the first fundamental advance in automation software specification and development methodologies in several years. The term development as used in connection with the PRIME methodology is intended to cover custom implementation as well as the selection of already existing software such as commercial off-the-shelf (COTS) products, legacy systems, or components specifically designed to be reusable. PRIME can accommodate centralized, client/server, and Internet-based architectures, either alone or in combination.

The comprehensive scope of this book provides a significant value independent of the specification of an effective process implementation methodology. 
It permits consideration of many needed topics in their appropriate context. That also allows a presentation based on principles that are relatively independent of specific technologies and products, thus increasing the effective life of the information in the book. Many of the presentations touch on issues currently being aggressively debated within the industry. The models of the components involved constitute a suitable framework through which the divergent views can be examined and understood. The information in this book is presented in three parts, with an introductory chapter on the current business and technical environments and associated drivers with which an enterprise must deal. Each part constitutes a major aspect of the methodology specification. This organization is necessary to manage adequately the complexity of the information and provide a presentation that will serve the needs of many different types of readers. Chapter 1, the introductory chapter, is concerned with defining the business and technical environments in which the automation software must exist and be utilized. It introduces the major pressures that are forcing the enterprise to change how it conducts business and, as a result, how support software is obtained and utilized. Without a minimal understanding of those topics, it is not possible to follow and understand the reasons for—and the details of—the design and construction of the methodology. The presentation is useful in and of itself as a guide to the confusing set of forces causing the current upheaval in the business environment. However, the main purpose of Chapter 1 is to motivate the remainder of the discussion. Part I is concerned with the concept of automation assets and their management. The concept of automation assets provides the framework for the definition and analysis of the entities needed in the specification of the methodology. 
It also allows their interactions to be defined and considered in a structured and natural way. The asset management system is modeled using five interacting components: life cycle management, financial management, business rules, repositories, and the automation assets themselves. The common characteristics developed in Part I are applied to all automation assets considered in Part II. Part II is concerned with the modeling of the automation assets needed for the definition of PRIME, consistent with the direction and requirements of asset management. A key presentation is concerned with the reuse of software components. Reuse of the various
elements involved in the specification and development of software has been a constant focus since the early days of computing. Until now, reuse has never been successfully accomplished except for a few isolated and rather specialized instances. An approach to a feasible method of achieving reuse success is presented in this discussion and incorporated as an integral part of the PRIME methodology.

Part III contains the design and specification of PRIME using the information developed in Parts I and II. PRIME is based on an adaptation of the spiral approach to design and implementation. In that type of approach, the basic steps of a methodology are reinvoked over and over with an increase in detail and structure after each iteration or spiral. While it is possible to design a methodology with only one spiral that includes all the methodology steps, several problems are associated with that approach: The complexity of its application to a development of significant size is difficult to manage; parallel activities are not possible; and iteration over a subset of activities is not easily defined. For those reasons, PRIME utilizes multiple overlapping spiral types within the overall spiral definition. That allows iteration over all the steps or a subset, as desired. Seven explicit spirals are defined in the methodology. Implicit spirals also can be defined among any set of steps as needed during a specific development.

It is important to note that this book is not intended to be a cookbook. Although it does contain a comprehensive and relatively complete discussion of the principles involved in process implementation and could be used as described, it commonly will be used as just a starting point. Every enterprise is different and will need different aspects and details of the material presented. Certain aspects will be emphasized more than others. Any of the definitions, models, and procedures contained in this book can be altered to meet specific requirements. 
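The spiral structure described in the preface, in which the same ordered steps are reinvoked with added detail on each pass and implicit spirals may cover only a subset of steps, can be sketched schematically. The step names follow the chapter titles of Part III; the functions and data structures are illustrative assumptions, not the PRIME definition itself:

```python
# Schematic sketch of a spiral methodology: every step is reinvoked on
# each spiral, with the level of detail increasing after each pass.
# Step names follow the book's Part III chapter titles; the functions
# are illustrative assumptions, not the PRIME specification.

PRIME_STEPS = [
    "Define/refine process map",
    "Identify dialogs",
    "Specify actions",
    "Map actions",
    "Provision software components",
    "Design human interface",
    "Determine workflow",
    "Assemble and test",
    "Deploy and operate",
]

def run_spiral(steps, spirals=3):
    """Reinvoke the full step sequence once per spiral."""
    history = []
    for detail in range(1, spirals + 1):  # detail grows each iteration
        for step in steps:
            history.append((detail, step))
    return history

def run_subset_spiral(steps, first, last, spirals=2):
    """An implicit spiral iterates over only a subset of the steps."""
    return run_spiral(steps[first:last + 1], spirals)

history = run_spiral(PRIME_STEPS)
```

A subset spiral over, say, steps 2 through 4 revisits only those activities, mirroring the point that implicit spirals can be defined among any set of steps as needed during a specific development.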
However, the systems engineering and automation asset perspectives must be maintained so the result preserves the necessary consistency and focus. The information presented in this book has been designed to assist software engineering professionals involved in the implementation of processes. There are more than sufficient details and explanations to enable readers to (1) understand the underlying reasons for the design of PRIME and (2) adapt the methodology to their organization without encountering a large number of unexpected problems. The information presented will enable individuals involved in any aspect of business process utilization to understand the consequences of their results and provide for a smooth implementation path.

Although there are no study questions at the end of the chapters, this book is also appropriate for classroom use in advanced courses in software engineering or management information systems. A condensed course could be taught in one semester, but exploring the technical requirements in depth probably would take a two-semester sequence. Either way, student assignments would consist of example designs and investigation of possible alternatives to the suggested activities and steps of the methodology. As in actual practice, there are few simple answers to most of the questions that can be formulated. An instructor must, therefore, look to the validity of the approach and conclusions reached to determine the degree of absorption of the material.

I would like to express great appreciation to Blayne Maring, former GTE assistant vice president—architecture, and Charlie Troxel, director of enterprise computing strategies at GTE, for providing the environment and encouragement that allowed the development and refinement of this innovative methodology. 
Girish Pathak, vice president and director of the operations system laboratory at GTE Laboratories, and Mary Reiner, director of enterprise systems, are also due a considerable amount of appreciation for their efforts on my behalf. Thanks and recognition are also richly deserved by the many associates who worked on the development and validation of the methodology during some aspect of its development. They include Truman Mila and Mark Feblowitz, to whom this book is dedicated, as well as Carl Pulvermacher, Nancy Amburgey, Ken Dilbeck, and David Wang, who participated in the pilot application of the methodology. Thanks are also due to Mark Lapham and Jeff Gilliam of Anderson Consulting, who contributed to the early development sessions.
I also would like to thank the many colleagues who participated in the early trials and initial production use of the methodology. Their patience, humor, and helpful suggestions for improvement were of invaluable help in making the methodology a success. Robert B. Walford April 1999
Chapter 1: Introduction Overview Our world has become almost totally dependent on software for its proper functioning. From the control of airplanes to the control of large enterprises, the need to rapidly develop and utilize reliable and cost-efficient software is of utmost importance. The development of software to control devices seems to be progressing at a relatively steady pace. The new Boeing 777 aircraft, for example, is almost entirely controlled by a fly-by-wire structure that is enabled through the use of complex distributed software. In addition, most of the testing of the aircraft was done entirely through computer simulation techniques that provided significant savings while enabling on-time delivery of a more thoroughly tested product. That same level of progress does not seem to apply to the use of software that supports the operation of our large enterprises. There are still many project failures and expensive overruns. The cost and the time required for development and testing keep increasing, as does the backlog of new software projects. Viable future directions that could remedy the problem seem uncertain and distant. This chapter examines the current conditions under which large (and not so large) enterprises must operate. It presents the basis on which a partial resolution to the software development difficulties that pervade modern business can be found. As such, it provides the high-level justification and context for the use of a process-oriented approach to business automation. The discussion is at a relatively high level but contains sufficient detail to show the interrelationships among a large number of complex business, social, and technical issues. It is those interrelationships that provide the clue to the software difficulties of the enterprise as well as the direction for their solution.
1.1 Environment “It was the best of times; it was the worst of times.” That, of course, is the opening from A Tale of Two Cities, the classic novel by Charles Dickens that is set in the late eighteenth century. Why is the quote appropriate for a technical book set in the late twentieth century? Dickens lived in a time of great social upheaval and wanted to explore the effects of that condition on the citizens and institutions of his country. His stage was the novel, a story of fiction. We also live in a time of great social change, compounded by the modern addition of business and technical change. We also need to explore the effects of that change on ourselves and on the enterprises in which we work. The stage is a nonfiction technical presentation—this book. Regardless of the vehicle, the need to explore and understand our environment and come to a reasonable accommodation with it remains the same. Throughout this presentation, the underlying theme is change and how best to react to it. If we react well and take advantage of the opportunities, it will indeed be “the best of times.” If we react poorly, it will be “the worst of times.” Unfortunately, like the characters in Dickens’s novel, we are trying to live and survive in a world where we have only imperfect knowledge of the dynamics, and it is difficult to know how to identify and take
advantage of the opportunities that occur. An additional complication is that the current rate of change is far greater than in Dickens's time. The concept of "Internet years" is very real. The future cannot be ascertained with any certainty because it is a function of the unknown dynamics. The temptation is to look for shortcuts, to follow the latest fast-talking pitchman who promises an easy answer to our problems, and to be satisfied with a fast, small reward rather than working toward large future gains. The author hopes that this book will be a factor in avoiding those temptations and, by addressing at least one important area of concern, will aid in coping with the unceasing change that pervades our profession. The specific subject of interest is the revolution in the need for enterprise automation and the most effective means for providing flexible, workable solutions that will not rapidly become obsolete.
1.2 Discussion organization The formation of a business automation methodology is approached here through a systems engineering approach that specifies the structural elements and their interrelationships. The overall structure of this book is shown in Figure 1.1.
Figure 1.1: Automation methodology determination structure. First, the major business drivers and associated requirements are identified and examined. Second, the major technology drivers that affect the enterprise are defined and discussed. With those business and technical drivers as a base, a set of automation requirements and principles is specified. Those requirements are then converted into a set of automation assets and an associated asset management system. The asset management system ensures that the proper assets are available when needed. The automation assets are utilized by the methodology to create specific elements of the enterprise automation environment. The enterprise automation environment architecture is based on a workflow model. On the basis of that information, Part I defines the asset management system, Part II identifies and models the automation assets, and Part III provides the overall specification of an automation methodology that transforms the assets into elements of the enterprise automation environment. The methodology design is directly dependent on the automation requirements and principles, the automation assets, and the enterprise automation environment architecture.
1.3 Business requirements and drivers In essence, only four high-level business requirements apply to the development of any product—software, hardware, text, graphics, or combinations thereof—in the current environment: § Decreased time to market; § Decreased resources expended (financial, personnel, equipment); § Increased quality; § Increased value to the customer (functionality, ease of use). Every other need is in support of those four. Of course, in providing that support, a fair number of details must be considered and appropriate decisions made. It is the details that make these simple requirements so hard to achieve in practice. Examples of the types of decisions that must be made are: § Do the requirements apply to each product individually, to an average across all offerings, or to some combination? § How are specific conflicts between the requirements resolved? Those and other questions and their resolution, which generally requires some form of compromise, do not invalidate the requirements. They only serve to illustrate the types of considerations that must be addressed in translating them into realistic procedures. In addition to the four general product requirements, some requirements resulting from general enterprise philosophy usually must also be considered (e.g., premature obsolescence of previously implemented software products should be avoided) in the determination of an automation methodology. These are usually presented in the form of business rules, discussed in Chapter 5. Although this discussion focuses on the business requirements, a number of business drivers also greatly affect the operation of the enterprise. They include changes in regulatory and legal requirements, changes in the competitive landscape (e.g., mergers, bankruptcies, startups), and changes in executives and other key personnel. As with the requirements already discussed, the drivers also greatly affect the way in which the enterprise must operate.
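The question of how conflicts between the four requirements are resolved has no single answer, but one common compromise mechanism is a weighted scoring of candidate decisions against the requirements. The sketch below is illustrative only; the weights, candidate names, and scores are invented assumptions, not part of the book's method:

```python
# Hedged sketch: resolving conflicts between the four high-level business
# requirements by weighted scoring. All weights, candidates, and scores
# below are hypothetical; an enterprise would supply its own.

REQUIREMENTS = ["time_to_market", "resources", "quality", "customer_value"]

def weighted_score(scores, weights):
    """Combine per-requirement scores (0-10 scale) into a single number."""
    return sum(scores[r] * weights[r] for r in REQUIREMENTS)

# Relative importance of each requirement (must be decided per enterprise).
weights = {"time_to_market": 0.30, "resources": 0.20,
           "quality": 0.25, "customer_value": 0.25}

# Two hypothetical, conflicting ways to provide a product.
candidates = {
    "buy_cots":     {"time_to_market": 9, "resources": 7,
                     "quality": 6, "customer_value": 6},
    "build_custom": {"time_to_market": 3, "resources": 3,
                     "quality": 8, "customer_value": 9},
}

best = max(candidates, key=lambda c: weighted_score(candidates[c], weights))
print(best)  # the candidate with the highest weighted score
```

The point is not the arithmetic itself but that the compromise is made explicit and repeatable rather than settled ad hoc for each product.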
1.4 Business structures In the classical business structure of the recent past, the organization is hierarchical and the information processing is function based. The hierarchical organization model was based on the centuries-old military structure that emphasized command and control at each level of the organization. That type of rigid structure evolved because it was the only model then known that was suitable for a large organization. Because organizations tended to grow larger with the advent of industrialization, it was only natural that this type of structure would dominate. The evolution of functionality-based information processing also occurred for similar historical reasons, although, of course, the evolution occurred much later. When computers were first applied to the hierarchical organization to reduce the amount of manual information processing, it was only natural that the automated information processing would mirror the specific functions of its manual predecessor. Thus, the concept of the information system that performed some specific set of functionality, such as payroll, accounts payable, order entry, and inventory, came about. Each of those systems was independent of the others and was considered to be owned by the organization that historically performed its function. Because of the way they are sometimes pictured, those systems are sometimes called vertical silos of automation (Figure 1.2). In the figure, the line that winds through the silos represents the path followed to fulfill a customer's request. The systems generally are utilized on an ad hoc basis as each organization becomes involved and provides its function.
Figure 1.2: Vertical silos of automation. The hierarchical organization and its silo-based, centralized automation support began to change because of numerous technical and social pressures. § Relatively inexpensive desktop computers, which could store large quantities of information and perform tracking and scheduling tasks with ease, became widely available. Desktop computers made it possible for one person to manage a much larger number of subordinates and functions than could be managed using manual techniques. The need for middle layers (management) in the hierarchical structure was greatly reduced. § The onset of true global competition, with subsequent pressure on the cost and quality per unit of produced goods and services, made it necessary for the enterprise to become much more efficient in terms of its cost of producing a unit of output. § The cost of a full-time employee was rapidly escalating, not because of large increases in remuneration but because of the cost of the benefit programs (mostly health care), which were rising much faster than the rate of inflation. § Due to new laws and other legal considerations, it was becoming increasingly difficult and expensive to eliminate full-time workers either permanently or temporarily during cycles of decreased activity. § Companies were becoming global, with locations throughout the world. That required an effective method of interaction between many geographically diverse locations with differing customs and business practices. All those factors made it imperative that large enterprises reinvent themselves to become competitive on a global basis. It was either that or simply go out of business. The reinvention of large (and many medium-size) businesses is still proceeding and probably will not reach some sort of new equilibrium until well into the twenty-first century.
Enough change has occurred, however, that we can discuss the major trends that are occurring in three broad areas: the organizational (management) structure of the enterprise, the operational structure of the enterprise, and the application of technology (automation) to the operation of the business. While the last trend is of most importance to the thrust of this presentation, it is necessary to place it in the context of the other two to fully understand the consequences and opportunities that are beginning to arise. The organization of the enterprise is changing in three basic ways. The first change, as mentioned previously, is that the number of layers in the organization is being greatly reduced. This "flat" organization requires methods of both informal and formal communications, as well as automated support, that are different from those of organizations with a hierarchical structure. The second change, which follows directly from the first, is that the number of employees is also being greatly reduced: more, in fact, than the workload would ordinarily allow.
That reduction is being mitigated in part by the formation of self-directed and other types of work teams that perform a large number of the management and administrative tasks formerly performed by the displaced middle managers, in addition to the work performed for the benefit of the customer. The availability of automation to assist individual team members, as well as the team as a whole, has contributed significantly to the ability of the team to provide the required productivity. Because of the closeness of the team to the customer and the work performed, there probably is some improvement in overall efficiency, although the final verdict on this type of structure is still a long way off. The second and by far the most important mitigation for a reduced work force is the use of consultants and outsourcers to perform work previously accomplished by employees. Although their cost may initially be greater than before, exposure to increasing benefits cost is avoided, along with social and regulatory restrictions on reducing the work force when necessary. Those changes in organizational structure and staffing require a corresponding change in the way the enterprise must operate. An organization with a reduced number of hierarchical levels and employees can no longer support functional partitioning, with its large number of interfaces that must be managed and maintained. Reducing the number of interfaces requires that cross-functional views of the organization be taken. That leads to the third fundamental enterprise change: The operational emphasis of the enterprise becomes one of process rather than organization. The emphasis on organization produced processes that were implicitly defined and functionality that mirrored the organization partitions. An emphasis on process requires that the functionality be defined to support the process.
In fact, the current emphasis on process reengineering does not result from a desire to take the current processes and make them better, as would be expected from the name. Process reengineering is, in reality, a way to make a transition from an organization- or function-based view of the enterprise to a process-based view. It is more enterprise reengineering than process reengineering. In a process approach, the satisfaction of the customer request is obtained through the use of a process specifically defined to handle that type of request (and perhaps others of the same general type). The process approach to handling customer requests is illustrated in Figure 1.3. Notice that the process is a continuous end-to-end definition of the activities needed to handle the request. That is significantly different from the functional organization, which has a discontinuity at every function boundary. The effectiveness of the process view is reinforced by making the managed entity the process rather than the individual organization function.
Figure 1.3: Process approach to request satisfaction.
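The contrast with the functional silos of Figure 1.2 can be made concrete by modeling a process as an explicit, ordered, end-to-end sequence of activities that carries a request from receipt to completion. The sketch below is illustrative only; the activity names and request structure are hypothetical, not taken from the book:

```python
# Hedged sketch: a process as an explicit end-to-end activity sequence.
# Activity names and the Request structure are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Request:
    customer: str
    history: list = field(default_factory=list)  # audit trail of activities

def take_order(req):   req.history.append("order taken")
def check_credit(req): req.history.append("credit checked")
def schedule(req):     req.history.append("work scheduled")
def fulfill(req):      req.history.append("request fulfilled")

# The managed entity is the process itself: one named, continuous
# definition, rather than a path improvised across functional silos.
FULFILLMENT_PROCESS = [take_order, check_credit, schedule, fulfill]

def run(process, req):
    for activity in process:  # continuous, end-to-end execution
        activity(req)
    return req

req = run(FULFILLMENT_PROCESS, Request(customer="Acme"))
print(req.history)
```

Because the process is a single managed definition, there is no discontinuity at function boundaries: every hand-off is part of the process specification itself.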
1.5 Management by process The impact on the enterprise of the need for a process approach is a transition to a management-by-process philosophy instead of the classical hierarchical command-and-control structure. The transition is the fundamental business reason that a new approach to automation is required. The reasons for the transition and some of the major consequences are discussed in Section 1.6. The resultant impact on automation needs is presented in Section 1.7. Although management by process is considered by many to be synonymous with process reengineering, it actually is considerably greater in scope. Management by process encompasses a basic philosophy on how to manage the enterprise. Process reengineering is merely the action of trying to determine a more efficient process for performing some aspect of the enterprise operation. Although process reengineering seemingly focuses on process, in many cases it focuses on a single organization or
function within an enterprise (e.g., accounts payable) and, except at a very rudimentary level, does not require much in the way of a process orientation. In this discussion, the emphasis is on the process management philosophy. The determination of suitable processes, while of considerable importance, is relegated to the automation methodology. That ensures that the selected processes can be efficiently incorporated into the enterprise automation system. 1.5.1 Major implications The first part of this discussion has addressed the major forces on the enterprise and some of the actions, including the adoption of a process-oriented management paradigm, that have been taken to respond to those pressures. No significant change is ever undertaken without an associated set of implications, some probably advantageous and some not so advantageous. To determine how the automation needs of the enterprise are affected by a process-oriented approach, it is necessary to examine some of the organization, financial, and software implications of the process paradigm. Although the organizational and financial implications may not immediately seem pertinent, in reality, they have an enormous impact on how the automation needs are defined and obtained. That impact will become clearer as the discussion proceeds.
1.5.1.1 Organization In addition to the changes in organization structure occurring as a result of the enterprise pressures, additional organizational implications occur as a direct result of the utilization of a process-based management approach. In one aspect, processes can be defined independently of the location of the process performers. That allows individual staff members to be geographically distributed. It is no longer necessary for managers to be collocated with their staff, since control of individuals is not the function being optimized in the new approach. Workflow techniques that form an important part of process implementation and facilitate this type of organization are discussed in Chapters 15 and 24.
1.5.1.2 Financial Although the discussion thus far has referred to some underlying financial pressures, there is a need to address additional financial considerations related to the changes themselves. The financial aspects are divided into two parts: those dealing with the effects that result from any change and those dealing with the effects of a specific shift to a process paradigm. Additional financial implications of the process approach are examined in Chapter 6. The major financial result of any radical change is the premature obsolescence of those assets that supported the old paradigm and the corresponding need to obtain assets that support the new way of operating. The enterprise must be prepared to write off a significant amount of assets and commit the resources necessary to obtain new assets. The assets can exist in many parts of the organization and include such diverse items as buildings, equipment, office supplies, forms, intellectual property, and support software. Although all costs of change must be considered, it is the last item, software, that is of particular significance in this discussion. As discussed previously, software that does not fit the new enterprise directions usually is referred to as a legacy system, and its disposition, as well as the acquisition of replacement software, can be quite costly to the enterprise. The financial aspects of changing specifically to a process orientation are associated mostly with the determination of the total cost of ownership of an asset used in the implementation of a process. That requires that the cost of the asset over its entire life cycle, including acquisition, operations, and disposal, be estimated a priori. The financial and accounting structures of the enterprise must accommodate that approach, which can be complicated by the projected use of the asset in multiple processes and involve both capital and expense components. In addition, because processes themselves can be
considered assets, the cost of a process over its entire life cycle needs to be ascertained and a determination made as to the propriety of utilizing that process. The accounting function in many enterprises, because of government reporting regulations and organization culture, can prevent many of the financial aspects of the process approach from being effectively implemented. That can happen implicitly in a number of ways: § The need for considerable upfront investment can be frustrated. An emphasis on cost rather than investment can stop the procurement of assets needed to define and implement the processes. § Internal controls can be established that prevent a process from being efficiently implemented. Although many functions of the enterprise can impede the transition to a process paradigm, the accounting function, because of its history and orientation, must be especially considered. This discussion is not meant to disparage the accounting function, which is absolutely necessary for the continued viability of the enterprise. The intent is to point out that all of the enterprise must change for the process orientation to succeed. 1.5.2 The process of process In this type of discussion, it is easy to forget that management by process itself requires a process. The management activities necessary to ensure that the enterprise is functioning correctly and at a high degree of efficiency should also be addressed by an appropriate process. Because the management process is an enterprise process, it is subject to all the characteristics of any process. Although this type of recursion can be conceptually difficult, in practice it offers few problems as long as there is a reasonable separation of functions within the enterprise. The management process must be considered as just another process to be managed, and the same measurements and corrective actions that are defined for any process can be applied.
These processes can also make effective use of automation in their implementation.
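The total-cost-of-ownership estimate discussed in Section 1.5.1.2 (acquisition, operations, and disposal over the whole life cycle, possibly shared across several processes) reduces to simple arithmetic. The figures, category names, and allocation shares below are invented for illustration:

```python
# Illustrative life-cycle cost (TCO) arithmetic for an automation asset.
# All dollar figures and process names are hypothetical assumptions; real
# estimates would come from the enterprise's financial function.

def total_cost_of_ownership(acquisition, annual_operations, years, disposal):
    """Undiscounted life-cycle cost of one asset, estimated a priori."""
    return acquisition + annual_operations * years + disposal

tco = total_cost_of_ownership(acquisition=120_000,
                              annual_operations=30_000,
                              years=5,
                              disposal=10_000)

# If the asset serves several processes, each process can carry a share
# of the cost (shares are an assumed allocation and must sum to 1.0).
shares = {"order_entry": 0.5, "billing": 0.3, "inventory": 0.2}
per_process = {p: tco * s for p, s in shares.items()}
print(tco, per_process)
```

A fuller treatment would discount future operations costs and separate capital from expense components, as the text notes; this sketch shows only the a priori life-cycle framing.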
1.6 Technology requirements and drivers The state of the art is changing so rapidly that any technology presentation will be obsolete almost as soon as it is completed. In the current environment—to put it bluntly—nothing is stable, everything is flexible, the choices are enormous, and few products from different vendors interoperate. In addition, the applications are more complex, and the time-to-market need is more critical. Those conditions are not likely to change in the foreseeable future. Any approach to enterprise automation must directly consider this environment in addition to the classical functionality requirements. The only way to accommodate all those additional pressures and still produce a product that always meets the needs of the customer when it is deployed is to define and utilize an appropriate automation methodology specifically oriented toward process implementation. The required methodology must provide structures and activities that explicitly consider the conditions of the current environment and take advantage of the opportunities they offer while mitigating the difficulties. As would be expected, a considerable number of technologies affect the automation requirements of the enterprise. A comprehensive treatment is far beyond the scope of the current presentation. Many of those technologies are addressed in later chapters, where they are utilized in the formation of an automation methodology. From an overall enterprise perspective, however, it is necessary first to consider the major technology-oriented pressures and constraints under which the enterprise must function: § The Internet (and associated technologies); § Digital convergence; § Commercial off-the-shelf (COTS) products;
§ Legacy systems. This section considers each of those pressures in enough detail to determine its effect on the automation needs of the enterprise. 1.6.1 The Internet There is a temptation to devote a considerable amount of discussion to the Internet because of its large and growing impact on the enterprise from both a business and a technical perspective. That would be a mistake for two reasons. First, the technology is changing so rapidly that little could be said that would remain current for any significant length of time. Second, an abundance of literature is available that addresses almost every aspect of the Internet in far greater detail than could be accommodated in this book. The reader is referred to those sources for further information. This discussion of the Internet is limited to the realization that it represents a source of great impact on the enterprise, and any automation functionality must consider the implications of the technology. In spite of those difficulties, the presentations in this book are relatively independent of whether or not Internet technology is utilized, either generally or in specific situations. The technologies presented and the automation methodology that they support are necessary under any condition. From a technical perspective, most of the effect of the Internet is contained in the computing infrastructure. The infrastructure is a complex entity, a thorough discussion of which is beyond the scope of this book. However, appropriate aspects will be addressed as needed for completeness during discussions of specific topics. From a business perspective, the Internet can greatly influence the conduct of the enterprise or even provide the reason that the enterprise exists.
Although that will determine the number and the type of processes and associated automation needed, it does not change the need for a process orientation and an associated automation methodology. 1.6.2 Digital convergence Much current and most future product technology is based on a digital format. The common digital representation structure allows computer, television, radio, telephone, and other major technologies to be integrated and viewed in the same uniform way. A bit is a bit is a bit. It can be processed, transmitted, and presented in a consistent manner regardless of the original source or intended use. Although the ubiquity of a bit is the basis for convergence, there can be different requirements for the temporal relationship between bits. For example, real-time transmission may require that the delay for each bit in the transmission be approximately the same, while that may not be necessary for the transmission of non-real-time information. Those requirements usually fall under the heading of quality-of-service (QoS) characteristics. Depending on transmission needs, the QoS specification may vary. The possible need to specify a QoS characteristic does not alter the meaning of an individual bit, so the core aspect of convergence is preserved. The extent to which the digital architecture can blur the separation between products is illustrated by the following examples. A personal computer is fitted with a microphone, speaker, and modem and used to communicate via voice. What is the difference between that computing device and what is usually known as a telephone? If a television is coupled to a set-top box connected to a video cable and used to connect to the Internet, what is the difference between that equipment and a personal computer connected to a telephone line? The examples could continue, but those should provide the basic idea. As long as a product has digital underpinnings, it is defined more by what it does than what it looks
like! Digital convergence can also lead to other types of convergence, including that of organizations and even industries. An example would be the possible convergence of the television and personal computer industries. That could occur only in the context of digital convergence. Convergence of organizations leads to the need to integrate their individual automation systems. Anticipation of the continuing convergence of organizations through mergers and consolidations forms another requirement for the design of automation support that can facilitate this type of activity. In addition, the automation impact of digital convergence is the need to allow different areas of an enterprise to interact more closely with each other and to cooperate in the development of new products and services that can use common components and processes. 1.6.3 COTS products The high costs of developing new software and the increasingly large backlog have begun to significantly affect how an enterprise obtains new software. Software procurement directions are becoming increasingly oriented toward the use of software that is available on the general market as a prepackaged system or set of functions instead of the development of custom software. This type of prepackaged software generally is referred to as COTS products. COTS refers to any entity that is purchased as offered, including processes, user interfaces, and textual instructions, but it most often applies to software packages. The software packages can be of any size, and there is conceptually no restriction as to the packaging method. For example, a COTS product does not have to be shrink wrapped, and it can come with support personnel from the vendor.
For the most part, this discussion focuses on software products, although the context is expanded, as needed for generality, to include other types of COTS entities. Legacy systems and other existing entities (e.g., reusable components) can be considered a type of COTS product because they are existing, packaged items. Most of the discussion provided for COTS can also be applied to the incorporation of current legacy systems. To strengthen the analogy, many enterprises, to increase their revenue from previous development activities, are selling their internal legacy systems in the open market. The legacy systems of the enterprise then become COTS products to their prospective customers. The use of COTS products to meet an enterprise need has long been the rule rather than the exception for hardware-oriented products such as those from the electronics and equipment industries. For example, COTS products in the electronics industry range from small components, such as individual resistors and capacitors, to large systems, such as computers and radio transmitters. Few users of such components build their own; they almost always purchase what they need. Integrating existing components to perform the desired function has always been the focus of these types of industries. The required infrastructure and product architecture were developed with that specific activity in mind. The engineering procedures and techniques to accomplish the integration are relatively well known but almost always require expert knowledge to produce satisfactory results. Utilizing this approach in other industries, including automation software, has always been a goal, albeit an elusive one. The lack of standards, along with a philosophy that encouraged a construction approach rather than an integration approach, worked against the use of COTS products.
That view is rapidly changing, however, because the current economic and competitive pressures are reaching an intensity that almost forces the consideration of a COTS approach before any custom development is undertaken.
The automation impact of using COTS products is that they must be appropriate for the part of the business in which they will be used. That must be enforced by both asset management and the automation methodology. 1.6.4 Legacy systems Although legacy systems are not really a force that impinges on the enterprise, they certainly are a direct result of the enterprise reaction to such forces. In that sense, this topic is appropriate for consideration as a technical driver. In addition, the reaction of different enterprises to the appearance, in many cases sudden, of legacy systems has been varied and usually occurs without sufficient analysis and planning. The purpose of this section is to examine the reasons that the so-called legacy systems come into existence, the general properties of such systems, and some positive approaches to their utilization instead of the almost universal negative connotation that any discussion of such systems engenders. Use of the term legacy represents an attempt to moderate the effect of some other widely used terms for this type of software, such as old, obsolete, and outdated. A legacy system may indeed be described by one of those terms, but it is also possible that a legacy system is relatively new, of good design, and useful in the operation of the enterprise. Although it probably was not consciously realized, the use of the term legacy is appropriate for the change process that is taking place. Because of the importance of legacy systems in the enterprise and the general lack of analysis of such systems in the literature, it is useful to develop this topic in some detail. Incorporating legacy systems (at least temporarily) in any automation system is necessary to facilitate an effective migration.
1.6.4.1 The concept of legacy Dictionary definitions of the term legacy include “property left by will; a bequest; something that has been handed down from an ancestor or predecessor.” Going to the dictionary definition for the term is interesting because, in general, the familiar context of a legacy is usually considered something good. Being the recipient of a $1,000,000 bequest from a long-lost relative is the dream of a fair number of people. However, the term also can be used in a negative sense, as in “His behavior left a poor legacy for his family.” In either case, a legacy follows the third definition. It is something that one entity inherits from another. The entity doing the inheriting usually has no control over either the timing or the contents of the inheritance. Indeed, the bequestor also may have little control over the timing, although there usually is some control over the contents. In the context here, that of computer systems and software, the uncertainty properties of a legacy, as well as its usual characteristics, provide the prevailing atmosphere for the attitude held by most workers in the area. The reasons for the almost unanimous negative view are examined here in some detail. As part of the analysis process, a model of legacy software and the associated operational environment is developed. The model is then employed to provide some directions that will enable legacy systems to be used to facilitate the transformation of the enterprise, rather than being considered a major impediment to change. As a quick aside, this discussion is presented from an engineering point of view, not a legal one, although many of the concepts and terms utilized in the discussion originated in the legal sense. In addition, many analogies between the two points of view are drawn to help convey the required information. It is possible that some of the definitions given, statements made, or conclusions drawn by the author are not correct in the legal sense. 
The potential conflict between engineering and legal concepts is not new and should not present a problem unless the reader is both a lawyer and an engineer.
1.6.4.2 The legacy environment To have a legacy, three things are required: a predecessor, a successor, and something (the legacy) that is going to pass between them. The transfer is considered to be one way, from predecessor to successor. As already stated, the successor usually has no
control over the timing and the contents of the legacy, although, strictly speaking, that is not a required condition to have a legacy. To develop a useful model of the legacy environment, as applied to computer software and systems, it is necessary to consider each of the three components of a legacy as presented in Figure 1.4. The rest of this section examines the definitions and the characteristics of the individual components as well as the overall structure of the model.
Figure 1.4: Legacy model. In the familiar sense, the predecessor (or bequestor) and the successor are persons. For computer software and systems, the person is replaced by a rather complex concept, that of a development environment. The development environment addresses the birth, life, and death of automation software (and associated hardware) supporting the operation of the enterprise. Everything needed to perform those activities is included in the environment. The predecessor Software developed or otherwise acquired under the current development (and maintenance) environment can get old and obsolete and be replaced. As long as the replacement occurs under the same development environment, however, the old software generally is not considered a legacy even when it has been marked for elimination or replacement. As long as the enterprise is satisfied that the current development environment can support the needs of the enterprise, there is no need to consider another development environment approach. If the pressures impinging on the enterprise force operational changes that cannot be accommodated by the current development environment, then changes to the development environment must be made and the current environment becomes the predecessor of a legacy environment. An interesting example of this is the so-called year 2000, or Y2K, problem. The problem arises from the fact that most software prior to the early 1990s was coded such that the year designation of dates consisted of two numbers representing the last two digits of a year. Thus, 1993 was coded as 93, with the 19 assumed. That works fine until the year 2000 and beyond. The year 2000 coded by the two-digit method would be 00, which most software would interpret as 1900. Affected software must be updated to eliminate the problem, but the need to make the changes does not automatically transfer existing software to legacy status. 
If the existing development environment is still considered adequate, the changes should be handled as any other maintenance change. However, many companies have taken advantage of the need to make Y2K changes to retire the current development environment and implement a new one. The new environment would make the current programs into legacy systems to be replaced by Y2K-compliant software developed under the new development environment. That approach both modernizes the software and solves the Y2K problem at the same time. Although the impetus for changing the software development environment is the Y2K problem, a Y2K problem does not by itself produce legacy status.
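The two-digit ambiguity and one common remediation, known as windowing, can be sketched in a few lines of Python. The function name and the pivot value of 50 are illustrative choices, not taken from the text:

```python
def expand_year(two_digit_year, pivot=50):
    """Windowing fix: interpret a two-digit year against a pivot.

    Values below the pivot are taken as 20xx, the rest as 19xx.
    The pivot of 50 here is an arbitrary illustrative choice.
    """
    if not 0 <= two_digit_year <= 99:
        raise ValueError("expected a value from 0 to 99")
    century = 2000 if two_digit_year < pivot else 1900
    return century + two_digit_year

# The naive "19 assumed" rule reads 00 as 1900; windowing recovers 2000:
assert expand_year(93) == 1993
assert expand_year(0) == 2000
```

Note that windowing is itself only a deferral: a two-digit field can never distinguish 1950 from 2050, which is why many Y2K projects expanded the stored field to four digits instead.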
There have been many development environments in the history of computers, and many of them continue in some fashion even today. In fact, one of the most popular development environments is the null environment: no standards are defined, and everything is done ad hoc. For purposes of this discussion, it is not necessary to consider all the development environments and transitions that have been utilized in the past. Only the latest set is utilized to illustrate the concepts involved. The most prevalent development environment used, until very recently, was based on the custom development of closed software systems using a centralized mainframe processor. This type of development environment produced software systems and computing platforms with specific characteristics. § Self-contained: Each software system was an entity unto itself. Identifying and obtaining access to the individual functions and components was difficult. § Local service access: The services needed by the software system were available locally, including security, timing, transaction monitors, and data access. There was no need to build in remote-access capability. § Operating system dependent: Although all applications are dependent on the operating system to some extent, most software was completely intertwined with the operating system used, usually IBM's MVS. § Batch oriented: Because of the design philosophy of the operating systems used, application and development architectures were fundamentally batch oriented, even though humans were sitting at terminals trying to direct the operation and at least thinking that they were in charge. § Systems network architecture (SNA) communications protocol: Because of the widespread availability and the use of IBM-compatible platforms and MVS operating systems, this protocol was almost universally used for large systems. § IBM and compatible mainframe computers with 3270-format terminals. 
As would be expected, there are a large number of other characteristics, but those listed above will suffice for now. Because of the rigid characteristics of this development environment, it could not evolve to meet the new needs of the enterprise. An entirely new development environment was needed. The old development environment died and became the predecessor component of a new legacy environment. The successor component in the new legacy environment is the new development environment structure. Suddenly, the existing systems and their development and operational environments became a legacy. Because the predecessor development environment was now considered old and obsolete, this characterization was instantly transferred, rightly or wrongly, to the legacy software. The practical result was that almost all software developed under the predecessor development environment was looked on with scorn. The successor The successor development environment, as would be expected, consists of the same basic types of activities as those of the predecessor environment, the difference being in their definitions and structural characteristics. The development environment that replaced the predecessor one is based on the utilization of commercial and reused software components in a distributed environment. This type of development environment produces software systems and computing platforms with characteristics very different from those of the predecessor. § It is built from individual functions. The focus is on the implementation of processes built from individual, identifiable commercial or reused components from previous enterprise systems. The concept of a system disappears.
§ It has remote-service access. The services needed by the software system can be distributed anywhere on the network. The capability must exist to obtain and utilize those services wherever they exist.
§ It is operating system independent. The goal is to define applications that can run on different operating systems, such as UNIX and Microsoft Windows.
§ It is online oriented. Many applications are expected to remain operational 24 hours a day, 7 days a week. That requires major differences in the way software is designed, implemented, operated, and maintained.
§ It has a TCP/IP communications protocol. This protocol is designed for a distributed, transaction-oriented, online application environment.
§ It is a client/server operational environment. Both clients and servers have significant amounts of computing power.
As should be evident from a comparison of the characteristics of a successor development environment with those of a predecessor development environment, they are in many cases exact opposites. That is why a new development environment had to be defined rather than an evolution of the older one utilized. The legacy Remember that the legacy comes from the predecessor and goes to the successor. What is being transferred from the old development environment to the new? The simple answer is the operational software (systems) that existed at the time the new development environment was deployed, hence the term legacy systems. Unfortunately, there is much more to the legacy than the systems themselves, and that is where things begin to get complex. Note from Figure 1.4 that the legacy consists not only of the operational systems but also of all the predecessor elements needed to keep them operational until new software resulting from the successor development environment can be made available to replace them. The entire set of items the legacy comprises is called the automation legacy to indicate that it consists of much more than the software systems. The use of the term legacy is qualified in the remainder of the discussion to refer to only a portion of the whole legacy. Sometimes it is thought that the term legacy means that the included software will be around forever. Although it sometimes may seem like forever, there is no set time for the legacy to remain. It could be very short or agonizingly long. The schedule for the development of any replacement functionality using the successor development environment, of course, depends on the business case that can be made for any specific functionality replacement. Note that in the definition of a legacy, there is no inherent assumption as to the quality of the systems it contains or even of the development environment itself. 
The only known fact is that the enterprise has determined that the previous development environment has become inappropriate. Consider the following example of the decoupling between the legacy status of a product and its quality or age. Many enterprises continue to successfully market software and associated support that have been internally labeled with legacy status. Obviously, those organizations purchasing the products do not consider them to be poor quality, obsolete, legacy products. They were purchased to fill a need for a reasonably current solution. Legacy status is enterprise specific and even then remains a situational concept. The complex automation legacy can be considered either good or bad, depending on how it is expected to be used by the successor development environment. As a part of the good view, it can be seen as a valuable help in determining the requirements for successor development environment products and in keeping the business running after the transition. Although people always talk in glowing terms about having a clean sheet to work with, meaning, of course, no constraints, there is also a definite downside to such freedom.
For example, there is the condition known as the blank-page syndrome, which occurs when a writer is starting a new book chapter or an article. (This author is very familiar with the blank-page syndrome!) It is difficult to create something out of nothing. It is much easier when there is something to work with, even if all the existing items eventually will be replaced. However, the lure of the clean-sheet approach is such that many enterprises choose that course of action even though a considerable amount of resources could be saved by using the legacy to advantage. On the other hand, the automation legacy also can be viewed as a hindrance to the deployment of new applications and something that utilizes resources without much return. It usually is much more enjoyable to create something new than to maintain existing items, especially when it is assumed that the existing items are eventually scheduled for replacement. It is also perceived that management is not allocating enough resources for the new development environment and is too interested in maintaining the legacy. That may be the perception even though the existence of the legacy environment is almost always a direct result of positive management action! In any event, it is the latter view of legacy systems as a hindrance that seems to prevail in any discussion of legacy systems. We are members of an impatient industry. Out with the old, in with the new—and the quicker the better! Anything perceived to impede the changeover must be bad by definition.
1.6.4.3 Using the legacy to advantage Once established, the legacy environment is likely to exist for a considerable amount of time. It disappears only when the last of the legacy ceases to exist. Until that time, it is useful to examine ways in which we can use the legacy to our advantage, instead of squandering it or wishing it away. There are several techniques through which the legacy can help facilitate the change to and operation of the new development environment. The first is to make good use of all the development environment components. Part of the perception that legacy systems impede the change to a new development environment is that the legacy is seen only in terms of the operational systems and then only in terms of replacing them as soon as possible. The automation legacy is much more robust than that and, if viewed from another perspective, can actually aid in the development environment transition rather than hinder the change. Consider, for example, the user practice part of the legacy. User practice is how end users operate the legacy software and shape their implicit processes to accommodate it. Understanding those processes and their good points and bad points will help developers produce software under the successor development environment that improves the processes and their automation support. Without using that type of information, which is part of the legacy, the effective development of new software is made much more difficult. The lack of consideration of prior operations is the real impediment to the transition to the successor development environment. The second way to use the legacy to advantage is to let it guide the replacement strategy as software is developed or otherwise procured under the new development environment. 
As with any software-oriented entity that has been in existence for a significant length of time, the original orderly (it is hoped) structure becomes twisted and bent with unforeseen add-ons, ad hoc extensions, quick fixes, and other unplanned changes. One approach to using the legacy in this way is to develop a map of the interactions of the existing legacy systems. That can be a very intimidating and time-consuming exercise, because for a large enterprise it usually is discovered that many more systems exist and are in use than originally thought. Links between systems are undocumented, system descriptions and documentation are missing, ad hoc modules have become institutionalized, and so on. The resultant map probably looks like the one depicted in Figure 1.5.
Figure 1.5: Legacy system operations map. The value of this aspect of the legacy is that it provides a means for the software to be acquired under the new development environment to be well structured and conceptually simpler than that of the legacy. A rule for replacement software acquired under the new development environment should be that the overall result obtained is easier to understand and maintain than the older software alone. That should be true for mixed legacy and replacement software as well as replacement software alone. From this discussion, it should be evident that the legacy has much to offer the new development environment. Even though the legacy software systems are not structured in a way that can support the changing needs of the business and must be replaced, the legacy as a whole provides necessary continuity. It also can provide the enterprise with a firm foundation on which to perform some of the analysis necessary to ensure that the software produced under the new development environment will be as effective and as efficient as possible. Should another change take place in the enterprise automation environment, forcing another legacy situation, the lessons learned from dealing with the first one should facilitate the handling of the second. In fact, that is just what is happening with the Internet (see Section 1.6.1). There will be another, although yet unknown, change after the Internet environment becomes the standard. The need to accommodate change is endless.
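The kind of interaction map just described can be captured, at its simplest, as a directed graph. The following Python sketch is illustrative only; the system names and links are invented, and a real inventory would come from interface documentation, job schedules, and network traces:

```python
from collections import defaultdict

# Hypothetical inventory of observed feeds and calls between legacy
# systems; the names and links are invented for illustration.
observed_links = [
    ("orders", "billing"),
    ("crm", "billing"),
    ("orders", "inventory"),
    ("inventory", "orders"),   # an undocumented back-channel
    ("billing", "ledger"),
]

def build_map(links):
    """Collect the observed links into a directed adjacency map."""
    graph = defaultdict(set)
    for src, dst in links:
        graph[src].add(dst)
    return graph

def inbound_counts(graph):
    """Count inbound links; heavily used systems are risky to replace."""
    counts = defaultdict(int)
    for targets in graph.values():
        for dst in targets:
            counts[dst] += 1
    return counts

graph = build_map(observed_links)
counts = inbound_counts(graph)     # billing has two inbound links
```

Even this toy version shows the value of the exercise: the back-channel between orders and inventory, and the concentration of inbound links on billing, are exactly the facts a replacement strategy needs to know.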
1.7 Automation requirements and principles The enterprise and its automation system must accommodate the business and technical requirements and drivers. Two major principles result from consideration of those drivers as well as other technical and business requirements. The first principle is that the automation structure must support the change to the process management philosophy of the enterprise. A process orientation has significant implications. The second principle is that the methodology must be based on an asset and modeling approach. That is necessary to provide adequate definition of the methodology and its design elements.
To fully understand the major automation implications behind the shift to a process-based enterprise, the concept of a process must be considered in some detail. Although such an examination could be accomplished in this section, it is easier and more effective to provide the needed discussion in the context of process modeling, presented in Chapter 9. Postponing the detailed discussion allows for more effective integration of the concept of process with the other entities that are closely associated with it, such as scenarios, roles, and dialogs. In addition, the context of asset management, which includes business rules and financial management, is important in the specification of process-based automation and must be included in any detailed discussion. The fundamentals of the asset management approach are presented in Part I. 1.7.1 Process orientation Matching automation software development to the management-by-process paradigm is not just a matter of adopting the correct architecture, infrastructure, and development methodology. In many advertisements and articles in the popular literature, however, that seems to be the message. Even if perfect architectures, infrastructures, methodologies, and supporting tools were available (and we are quite far from that condition), the effort would not succeed without a different attitude toward software development. Supporting processes with automation software requires that the enterprise adopt an integration philosophy instead of a closed-system model. In addition, the manual activities and the automated activities must work in concert to implement the process. From a software perspective, that requires that software be implemented or otherwise obtained in the form of discrete components that are then connected through an appropriate control mechanism with the necessary degree of human input. They may be large COTS products or small individual components. 
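The component-and-control idea can be illustrated with a minimal sketch. Everything here is invented for illustration: the component names, the dictionary-based work item, and the trivial sequential controller standing in for a real workflow engine:

```python
def check_credit(order):
    """Hypothetical component; could wrap a COTS credit-check package."""
    order["credit_ok"] = order["amount"] < 5000
    return order

def reserve_stock(order):
    """Hypothetical component; could wrap a reused legacy function."""
    order["reserved"] = order.get("credit_ok", False)
    return order

def run_process(steps, order):
    """Control mechanism: route the work item through each component."""
    for step in steps:
        order = step(order)
    return order

result = run_process([check_credit, reserve_stock], {"amount": 1200})
# result: {'amount': 1200, 'credit_ok': True, 'reserved': True}
```

The point of the sketch is the separation of concerns: each component knows nothing about the others, and the process definition lives entirely in the list handed to the controller, so components can be swapped or resequenced without rewriting them.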
Another important aspect is the use of an infrastructure through which all the components can interoperate with each other as well as with the humans utilizing them. The defining item for the change in software direction is not the size or the construction of the components or the infrastructure. It is the philosophy that the enterprise follows in its software-oriented activities, and that philosophy is supported by the financial and organizational structures of the enterprise. If management still views enterprise automation as individual pieces of standalone software with no need to facilitate the communications between them, management-by-process will never reach the degree of effectiveness of which it is capable. That easily can result in the enterprise being placed at a significant competitive disadvantage. In addition to the need for a different automation approach and an associated software philosophy, there is also a need to consider how to migrate effectively from the old philosophy to the new one without hurting the day-to-day operations of the enterprise. As might be expected, the migration can be difficult. With the proper planning and models, however, it certainly is not impossible. The need to change how enterprise support software is viewed and the means to effect that change constitute a large part of the discussion in this book. Although it usually is not explicitly stated in this context, that aspect of management by process is pervasive. To put it bluntly, most of what has been taught and learned about automation and software development for the support of the enterprise must be forgotten. In its place, a new approach to development must be adopted and utilized if processes are to achieve their expected potential as enterprise management units. 1.7.2 Modeling Obtaining a good understanding of the structure and the operation of any enterprise, except for the very small organization, depends on the use of many types of modeling techniques. 
Unfortunately, the use of models in most enterprises is relatively infrequent. The models that are used tend to be somewhat informal and depend on the inherent knowledge of each individual involved for interpretation and utilization. Some recent
attempts to introduce formal models such as the Unified Modeling Language (UML), which is used to model object classes, have tended to be very low level and specific to given topic areas. The lack of general formalisms was tolerable in highly structured, top-down organizations but presents problems when applied to the flat, more loosely structured organizations that are currently evolving. The management and automation needs of enterprises adopting these new organizational structures require that the business operations be defined and analyzed in greater detail than before. Through the proper definition and application of models, the complexities and interactions of these new organizations can be better understood and managed. It is, therefore, necessary to define and utilize models in a structured and relatively rigorous sense. Without modeling, solutions to the complex problems inherent in the specification of enterprise automation can only be guessed at. To effectively use modeling techniques in the enterprise, it is necessary to understand their major types, properties, advantages, and disadvantages, at least at a high level. This discussion is designed to provide that overall understanding. In addition, this presentation provides the detail needed for the specifications of the many models that will be developed in subsequent chapters. Without a basic understanding of the necessary modeling techniques, the information in the remainder of this book would be somewhat more difficult to assimilate and utilize effectively. Specifically, this discussion provides an introduction to modeling. It is not meant to be an exhaustive treatment of the subject but only to provide the understanding and motivation required to place the discussions of the following chapters in context.
1.7.2.1 General principles This subsection contains a short discussion of general modeling principles. The information is designed to provide the overall context and understanding needed to apply modeling techniques to specific situations. Sections 1.7.2.2 and 1.7.2.3 utilize those principles to define the approaches used to develop individual models. The discussion should provide readers with some ability to extend the information presented to meet the modeling needs encountered in their individual situations. Unfortunately, the term model has many meanings. In the context here, a model is meant to be a representation of some entity for the purpose of analysis. That contrasts with its other meaning as a toy (model train, model plane, model automobile) that is generally a smaller version of the real item. Some confusion can arise when the same physical "thing" can be used to perform both functions. A model of an airplane can be used in a wind tunnel to determine its aerodynamic characteristics (analysis representation). It also can be used by a child as a plaything (toy). This is another example of how terminology can complicate understanding. Model positioning In any analytical or synthesis activity, it usually is necessary to translate the real world (physical or intellectual) into a form that can be conveniently examined and manipulated. Once the desired results have been obtained, a reverse translation back to the real world is made. This form (or representation, as it is sometimes known) is usually called a model. Models are particularly useful in the examination of physical or intellectual subjects because of two sources of difficulty: § In the physical world, mechanisms usually are quite complex and involve many, possibly nonlinear, stochastic interacting forces. § Intellectual concepts may not have a uniform understanding among individuals trying to understand or otherwise make use of the concept. 
The uncertainties present in each of those areas can be reduced by the selection of appropriate models. Model usage A model of some aspect of the physical or intellectual world can be defined by a structure that is suitable for portraying the desired activity. Among many possibilities, the structure could be based on mathematical principles, defined by graphical techniques, embodied in a procedure or an algorithm, or, of course, built from any combination of formats. Different models of the same real-world
entity can be developed. Each model could address a different area or emphasize a specific feature. Physical models are used for many purposes. The reaction of physical models to stimuli is reproducible. The same may not be true in the real world. Models can be frozen in time or made to work faster or slower than real time. Models can be used to examine the effect of a change in the physical world. That is especially useful when an undesirable physical world reaction could be catastrophic or result in the waste of a large amount of resources (including money). Models can be made noise free. Random perturbations that occur in the physical world and result in fuzzy observations can be eliminated and only the essential operations preserved. The major disadvantage of a physical model is that it does not behave exactly as the physical world entity it represents. The difference might be negligibly small or confined to unimportant areas. However, the difference can be significant and, in the worst case, cause the model user to draw the wrong conclusions concerning the physical entity. It is important that the model designer have an excellent understanding of the physical entity being modeled and the use to which the model will be put. In most cases, the model should be tested over the range of intended use by comparing its reaction to the reaction of the physical entity for the same stimulus. As an example, consider a model for the congestion in a network. The model is designed to address only one aspect of a possibly complex entity. The network manager uses the model to make a determination as to the best way of reducing congestion during some set of operating conditions. If the model represents the network accurately in this limited area, the network will respond according to the predictions of the model and the right decisions can be made. If the model is not an accurate reflection of the network, relying on its predictions may result in the wrong actions being taken. 
These types of models usually are fairly accurate, but conditions can exist for which the models are not reliable. The builder or user of the model must be aware of the limitations and act accordingly. Models that represent intellectual concepts are used slightly differently. They are used mainly to convey definitions and context in a way that promotes understanding of the concept. In that regard, models must contain enough structure and detail to make the concept unambiguous and understandable. Such models must also allow the possible interactions with other concept or entity models to be understood. The major disadvantage to this type of model is that it might unnecessarily constrain the concept and limit its potential usefulness. Because of the complexity of the concepts involved in software development and the large number of interactions, the only way in which the development methodology can be examined and optimized is through the extensive use of mainly intellectual models. Although it is possible to describe a methodology and concurrently define the necessary models, it tends to confuse the flow of the presentation and results in less than optimum knowledge transfer. For that reason, the necessary models will be developed separately from the presentation of the methodology design. Methodology-specific models are developed in Part II, while methodology design is presented in Part III. Although this organization has a disadvantage in that the model structures must be remembered during the methodology discussions, it is the author's opinion that separating the two presentations results in a better understanding of and appreciation for the techniques involved.
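The network-congestion example discussed above can be made concrete with a deliberately oversimplified analytical model: the classical single-server (M/M/1) queueing formula for the mean number of items in the system. A real network model would be far richer; this sketch only shows the kind of prediction a model supplies and how it fails outside its range of validity:

```python
def mm1_mean_number_in_system(arrival_rate, service_rate):
    """Mean number of items in an M/M/1 queue (a model, not reality).

    Both rates must be in the same units (e.g., packets per second).
    The formula is valid only while utilization is below 1; beyond
    that, the model itself tells us the queue grows without bound.
    """
    if arrival_rate >= service_rate:
        raise ValueError("model valid only for utilization < 1")
    rho = arrival_rate / service_rate      # utilization
    return rho / (1 - rho)

light = mm1_mean_number_in_system(2, 10)   # utilization 0.2 -> 0.25
heavy = mm1_mean_number_in_system(9, 10)   # utilization 0.9 -> about 9
```

The nonlinear blowup as utilization approaches 1 is exactly the kind of insight the network manager wants from the model, and the guard clause marks the boundary beyond which relying on the model would lead to wrong actions.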
1.7.2.2 Modeling of concepts and terms
A large number of terms and concepts have been applied to various aspects of the automation environment, and they must be addressed by a software implementation methodology. Many of those terms and concepts have no generally accepted meaning; to understand the interpretation intended, the exact context must be known, which is sometimes an impossible task.
There are several reasons for the lack of standard industry usage of well-known and frequently utilized terms and concepts:
§ They were invented and promulgated without sufficient analysis and structure (e.g., business rule).
§ They are being utilized for different purposes than the one for which they were originally conceived (e.g., scenario).
§ They have acquired different meanings when applied in the context of technologies other than that in which they were originally utilized (e.g., process).
§ Their specialized use and definition is confused with that of the nontechnical general-usage term (e.g., dialog).
Of course, more than one reason can apply to any given term or concept.

In spite of those problems, in most cases terms and concepts were given specialized meanings for valid reasons and could have significant value in managing the complexity of business process implementation, operation, and change. Using them to convey and classify the many forms of information that must be analyzed and integrated can greatly simplify the design and utilization of the methodology. To serve that purpose adequately, each term and concept must be given a suitable definition and structure. The design must be sufficiently detailed to convey its purpose unambiguously and provide enough understanding that it can be utilized in a consistent manner. In addition, all the terms and concepts employed in process asset management must be self-consistent and consistent in definition and structure with each other and with the overall process implementation methodology.

The modeling approach is designed to motivate and provide a detailed definition and structure (model) for the terms and concepts of interest. The models are specifically designed to allow the methodology specification to proceed in an orderly fashion. In most cases, the difference between a model as defined herein and many of its common uses is simply a matter of additional detail.
In some cases, however, it is necessary to depart from one or more aspects of common usage and take a different direction to keep all the models consistent. When that is necessary, the reasons for the departure will be carefully explained. In addition to those terms and concepts that have received wide usage, a small number of additional concepts are of use in the methodology specification but are not in general usage. They will be modeled in much the same way as those that are better known. Treating all the needed terms and concepts in a similar fashion improves the consistency of the presentation and facilitates the detailed specification of the methodology.
1.7.2.3 Model types
In the various discussions presented in this book, models are used for different purposes and for different types of objects. Physical and logical entities, concepts, terms, process components, and enterprise forces all need some form of model for them to be adequately defined. The model allows them to be examined individually or as an interoperating set. The model formats and characteristics used for each object type vary considerably and are defined in the specific discussion of that object. However, some models are similar for a wide variety of objects, and those models are defined as a separate activity in this chapter. These common models usually are associated with the physical and logical entities described in Part II. Each addressed entity has two types of models: an entity model and an entity class model. The entity model considers the definition and structure associated with an individual entity. The entity class model considers all entities of the same type; it provides a structure that organizes the entities in a manner that facilitates their creation, management, and use. All the entities modeled in Part II have both entity models and entity class models defined. More detail on the use and structure of class models is provided in Chapter 2.
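The entity/class distinction can be illustrated with a minimal Python sketch. All names here are illustrative assumptions; the actual model structures are developed in Part II. One object describes an individual entity, and a second organizes all entities of the same type.

```python
from dataclasses import dataclass, field

@dataclass
class EntityModel:
    """Definition and structure of one individual entity (e.g., one scenario)."""
    name: str
    definition: str
    structure: dict

@dataclass
class EntityClassModel:
    """Organizes all entities of one type to ease creation, management, and use."""
    class_name: str
    members: list = field(default_factory=list)

    def add(self, entity):
        self.members.append(entity)

    def find(self, name):
        # Locate a member entity by name; None if the class has no such member.
        return next((e for e in self.members if e.name == name), None)

# One class model per asset type; the entity models are its members.
scenarios = EntityClassModel("scenario")
scenarios.add(EntityModel("order-entry", "customer places an order", {"steps": 5}))
```

The class model is what supports reasoning about the whole set (gaps, duplicates), while each entity model carries only the structure of one member.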
1.8 Summary
Effectively meeting changes in the business and technological environment is causing the enterprise to change greatly the way it is managed and operated. Those changes, in turn, are forcing the automation structure of the enterprise to change to provide the required degree of support. The resultant automation structure requires new ways of specifying and developing software, based on a system engineering approach with a process and asset orientation that utilizes extensive modeling support.

Selected Bibliography
Andriole, S. J., Systems Requirements and Process Reengineering: A Modeling and Prototyping Guide, New York: McGraw-Hill, 1995.
Baum, D., “Legacy Systems Live On,” Information Week, 1996, Issue 573, 10A–14A. Blumenthal, M. S., “Unpredictable Uncertainty: The Internet and the Information Infrastructure,” Computer, Vol. 30, No. 1, 1997, pp. 50–56. Bollig, S., and D. Xiao, “Throwing Off the Shackles of a Legacy System,” Computer, Vol. 31, No. 6, 1998, pp. 104–109. Bork, A., and D. R. Britton, Jr., “The Web Is Not Yet Suitable for Learning,” Computer, Vol. 31, No. 6, 1998, pp. 115–116. Born, G., Process Management to Quality Improvement: The Way to Design, Document and Reengineer Business, New York: John Wiley & Sons, 1994. Casti, J. L., Alternate Realities: Mathematical Models of Nature and Man, New York: John Wiley & Sons, 1989. Chris, W. J., “Building a Process-Based Organization,” Proc. 51st Annual Quality Congress, Orlando, FL, May 5–7, 1997, pp. 95–102. Coburn, S., and H. Grove, “Process Based Management at U S West,” Trans. 49th Annual Quality Congress, Cincinnati, OH, May 1995, pp. 667–674. Cole, B., “Holding Code to a Higher Standard,” EETimes, Issue 985, Dec. 15, 1997, pp. 77, 90, 98. Covell, A., “Digital Convergence: The Water’s Fine,” Network Computing, June 15, 1998, p. 32. Davenport, T. H., Process Innovation: Reengineering Work Through Information Technology, Cambridge: Harvard Business School Press, 1992. Dollar, D., and E. N. Wolff, Competitiveness, Convergence, and International Specialization, Cambridge: MIT Press, 1993. Feblowitz, M. D., and S. J. Greenspan, “A Scenario-Based Technique for COTS Impact Analysis,” GTE Laboratories Tech. Report, TR-0351-12-96-163, 1997. Flynn, D. J., and O. F. Diaz, Information Modelling: An International Perspective, Englewood Cliffs, NJ: Prentice Hall, 1996.
Hammer, M., Beyond Reengineering: How the Process-Centered Organization Is Changing Our Work and Our Lives, New York: Harper Business Press, 1997. Lindqvist, U., and E. Jonsson, “A Map of Security Risks Associated With Using COTS,” Computer, Vol. 31, No. 6, 1998, pp. 60–66. Ljung, L., and T. Glad, Modeling of Dynamic Systems, Englewood Cliffs, NJ: Prentice Hall, 1994. Maiden, N. A., and C. Ncube, “Acquiring COTS Software Selection Requirements,” IEEE Software, Vol. 15, No. 2, 1998, pp. 46–56. Marshall, K. T., and R. M. Oliver, Decision Making and Forecasting: With Emphasis on Model Building and Policy Analysis, New York: McGraw-Hill, 1995. McNair, C. J., “Implementing Process Management: A Framework for Action,” Hamilton, Ontario, Canada: The Society of Management Accountants of Canada, 1998. Melan, E. M., and E. H. Melan, Process Management: Methods for Improving Products and Service, New York: McGraw-Hill, 1992. Mende, M. W., L. Brecht, and H. Österle, “Evaluating Existing Information Systems From a Business Process Perspective,” Proc. 1994 Computer Personnel Research Conf. on Reinventing IS: Managing Information Technology in Changing Organizations, 1994, pp. 289–296. Mesterton-Gibbons, M., A Concrete Approach to Mathematical Modelling, New York: John Wiley & Sons, 1995. Moore, G. E., “Cramming More Components Onto Integrated Circuits,” Electronics Mag., Vol. 38, No. 8, 1965, pp. 114–117. Sarna, D. E., and G. J. Febish, “Don’t Squander Your Legacy,” Datamation, Vol. 42, No. 7, 1996, pp. 28–29. Scacchi, W., and J. Noll, “Process-Driven Intranets—Life Cycle Support for Process Reengineering,” IEEE Internet Computing, Vol. 1, No. 5, 1997, pp. 42–51. Schaller, R. R., “Moore’s Law: Past, Present, and Future,” IEEE Spectrum, Vol. 34, No. 6, 1997, pp. 53–59. Talbert, N., “The Cost of COTS,” Computer, Vol. 31, No. 6, 1998, pp. 46–52. Thompson, J. R., Empirical Model Building, New York: John Wiley & Sons, 1989. Voas, J. 
M., “The Challenges of Using COTS Software in Component-Based Development,” Computer, Vol. 31, No. 6, 1998, pp. 44–45. Walford, R. B., Information Systems and Business Dynamics, Reading, MA: Addison-Wesley, 1990. Yoffie, D. B., ed., Competing in the Age of Digital Convergence, Cambridge: Harvard Business School Press, 1997. Young, L. H., “The Process is the Thing,” Electronic Business, Vol. 24, No. 2, 1998, pp. 75–78.
Part I: Automation asset management Chapter 1 provided an overview of the business and technical environment in which the enterprise must operate and outlined some associated requirements for enterprise automation. The task for the remainder of this book is to present an approach to developing an effective enterprise automation environment consistent with those requirements as well as other technical and operational considerations. There are many ways to structure the specification and development of an enterprise automation environment. The framework that seems to provide the most effective structure is based on the concept of automation assets. An enterprise functions by utilizing assets to create products or services for sale. Assets provide the foundation for enterprise operations and are thus a familiar concept to most business and technical personnel. Although assets are basic concepts to the enterprise, many of the assets of the automation environment are intangible and have characteristics considerably different from the physical assets of the enterprise (e.g., manufacturing equipment). Of course, the automation environment contains some physical assets, such as computers and communications facilities. Those assets usually are treated the same as any other physical asset and do not pose a significant problem. For convenience, the intangible assets utilized in the specification, development, and deployment of the enterprise automation environment are called automation assets. Intangible automation assets need management characteristics different from those of the physical assets of the enterprise. For example, a physical asset requires some assigned factory or office space in which to function, while an automation asset requires a repository, which, except for the computer in which it resides, has no physical embodiment. 
Unfortunately, the automation assets used in the enterprise usually are not considered to be “real” assets and are therefore not treated with the same degree of care that the physical assets enjoy. That can—and often does—result in incorrect decisions concerning those assets. Part of the problem may be that the proper method of handling the assets is not understood or that the consequences of handling them incorrectly are not fully appreciated.

This presentation is organized into three parts. Part I outlines the overall asset structure and discusses the management needs of the automation assets so they can be considered real enterprise assets. Part II discusses the definitions and models for the individual automation assets. Part III defines the automation methodology that converts the automation assets into the enterprise automation environment.

For our immediate purposes, the automation environment can be considered to be all the hardware, software, policies, and procedures needed to provide automation support to the enterprise. A more formal definition is provided later. Note that human operators are not considered to be part of the environment. Although that is somewhat arbitrary, it results in a somewhat cleaner definition and presentation than does the opposite assumption.
Chapter List
Chapter 2: Automation asset system
Chapter 3: Life cycle management
Chapter 4: Repository utilization
Chapter 5: Business rules
Chapter 6: Financial management
Chapter 7: Planning and strategy
Chapter 2: Automation asset system
Overview
The usual model of the manufacturing function of an enterprise is simple. Raw materials are acquired and converted into finished goods through the use of some type of labor and equipment utilized in a manufacturing process. The finished goods are then offered for sale or consumed in the enterprise. The raw materials, equipment, and finished goods are considered to be assets that must be managed in some fashion to ensure that the enterprise obtains maximum value from their use. The creation of an enterprise automation environment that provides automation support for the business can follow much the same model, although it is rarely thought of in that manner. The automation environment can be considered a finished good, one that is built from a set of automation assets, some of which are used only in the conversion phase (equipment) while others become part of the operational environment (raw material). The assets are converted through the use of a development methodology (manufacturing process). This manufacturing model applied to enterprise automation is depicted in Figure 2.1 and, for identification purposes, is called the automation asset system.
Figure 2.1: Automation asset system.
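The conversion idea in Figure 2.1 can be sketched in a few lines of Python. All the names below are illustrative assumptions, not constructs defined by the book: raw-material assets become part of the finished good, equipment assets are only used during conversion, and the methodology plays the role of the manufacturing process.

```python
def convert(raw_materials, equipment, methodology):
    """The methodology (manufacturing process) turns raw-material assets into
    a finished good; equipment assets are used by each step but not incorporated."""
    finished_good = list(raw_materials)      # raw materials become part of the product
    for step in methodology:
        finished_good = step(finished_good, equipment)
    return finished_good

# Illustrative only: two software components assembled into one workflow.
components = ["billing-component", "order-component"]
tools = ["repository", "workflow-engine"]    # equipment: used, not consumed
methodology = [lambda parts, eq: ["workflow(" + " + ".join(parts) + ")"]]

finished = convert(components, tools, methodology)
# finished == ["workflow(billing-component + order-component)"]
```

The point of the sketch is the asymmetry: `components` end up inside the result, while `tools` influence the process but leave no trace in the finished good.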
2.1 Asset management
Each part of this book addresses a different aspect of the automation asset system model. Part I considers the management needs of the automation assets. While much of the discussion will be familiar to anyone involved with physical asset management, critical areas are considerably different. Understanding those differences and their implications for the enterprise makes the difference between a successful enterprise automation program and one that either fails outright or falls far short of its potential. Toward that goal, automation asset management is defined and modeled as an overall system. Each component of the system is discussed in sufficient detail to provide an appreciation of the significance and utilization of the asset management system and its application to the automation assets. It should be noted that the components of the asset management system are themselves automation assets because they exist as explicit models. Although that type of recursion may seem to complicate the discussion, as a practical matter it does not. The components of the automation asset management system are shown in Figure 2.2.
Figure 2.2: Automation asset management model.
2.2 Managed assets
Central to the system is the automation asset being managed. Such assets exist only as data organized according to the specified model or as software. Because they are intangible, the assets have properties somewhat different from those of physical assets. For example, a physical raw material asset can become part of only one finished good asset. An automation raw material asset (e.g., a software component), however, can exist as part of many automation finished goods (e.g., workflows). That ability to reproduce automation assets easily is one of the characteristics that produces the need to manage them somewhat differently from physical assets.

Every asset has a life cycle: it is acquired, it is used and possibly reused, and it is eventually replaced when it wears out or becomes obsolete. The life cycle must be explicitly maintained and customized for each asset type. For automation assets, the life cycle must be considered somewhat differently than for physical assets because the number of individual assets is usually quite large, the rate of change of the assets is high, and there are many interrelationships among different types of assets. Information about all the assets must be maintained to ensure that they are being applied in the most appropriate manner.

Every asset affects enterprise financial resources throughout its life cycle. Funds must be used to provide the asset. Funds must be used to maintain or utilize the asset. When the asset is removed from service, it may be sold, creating revenue, or scrapped, requiring additional funds. Automation assets generally are not addressed by the financial system the same way as physical assets because of the limited nature of generally accepted accounting principles. 
To realize the benefit of the automation assets, the management accounting system, as opposed to the financial accounting system, needs to consider automation assets as “real” enterprise assets from a financial perspective. Every asset needs a location where it can be utilized. Physical assets need a physical location. Automation assets also need a place to exist and to be used: some type of information storage and access mechanism. Automation assets generally require a great deal of information to be available about each asset. That usually is much more than a physical asset requires. The asset information is usually called metadata and is resident in a logical construction called a repository. The automation asset itself may also be stored in a repository or in a separate database. Repositories also can store information about physical assets, although that is not their primary purpose. Finally, every enterprise needs rules that govern asset management. The rules determine how and when assets are procured, maintained, used, and disposed of. They determine who can make use of the assets and the manner in which the assets will be used. They usually are called business rules and should be explicitly stated. Business
rules have a much broader application in the enterprise than just addressing the automation assets. Because of that, the discussion of business rules considers enterprise areas other than the automation assets. Figure 2.2 also shows some of the relationships between the asset management system components. The relationships are only broadly indicated and not specifically identified as they would be in an entity-relationship type of diagram. The relationships are complex and difficult to depict in a two-dimensional diagram without obscuring the intent of the components. In addition, each managed asset has its own unique relationships with the management system. For example, in a general sense, business rules constrain how other management system components perform their functions. However, the business rules that govern how the implementation of a process asset is financed differ considerably from those concerned with financing of the software component assets used in the process implementation. The relationships also change depending on the specific stage of the asset life cycle. Again considering the finance example, the financial aspects of configuration management during the creation of an entity are different from those present during the use or reuse of the entity. Explicit relationships that need to be addressed probably are best discussed either in the presentations of individual management system components or during the individual asset presentations, depending on the specifics of the relationship. The remaining chapters of Part I each address one of the automation asset management model components. The specific assets that are managed are determined by the needs of the automation methodology and the architecture of the enterprise automation environment. Those automation assets are defined and discussed in Part II.
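The repository idea described above can be made concrete with a minimal sketch. The class and field names are illustrative assumptions, not a real repository product's API: metadata about each asset is held in the repository, while the asset body itself may reside elsewhere.

```python
class Repository:
    """Holds metadata about automation assets; the asset body itself may be
    stored here or in a separate database (a sketch, not a product API)."""

    def __init__(self):
        self._metadata = {}

    def register(self, asset_id, owner, created, scope, location):
        # 'location' records where the asset body actually resides.
        self._metadata[asset_id] = {
            "owner": owner,
            "created": created,
            "scope": scope,
            "location": location,
        }

    def describe(self, asset_id):
        # Return the metadata record, or None for an unknown asset.
        return self._metadata.get(asset_id)

repo = Repository()
repo.register("wf-billing", owner="finance-IT", created="1999-01-15",
              scope="enterprise", location="workflow-db")
```

Even this toy version shows the separation the text draws: `describe` answers questions about an asset without ever touching the asset itself.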
2.3 Asset model
The asset model component of the asset system is considered an intrinsic part of each of the assets being discussed, and there is no need to address it in general terms. That component, therefore, is not discussed separately from the development of the asset models themselves.

All the automation asset management components are developed further in the following chapters. Most of the components have been individually discussed at considerable length in the popular literature as well as in scholarly publications. Unfortunately, those discussions have not always led to a clarification of the topic, and considerable confusion and misunderstanding remain. Although there is no intent to produce a definitive discussion of those topics, there is a need to define and model the components in a self-consistent manner with an emphasis on their interrelationships. The asset management system presented in this chapter provides an appropriate framework for that examination.

Selected bibliography
Klinger, C. D., and J. Solderitsch, “DAGAR: A Process for Domain Architecture Definition and Asset Implementation,” Proc. Conf. Disciplined Software Development With Ada, 1996, pp. 231–245.
Perna, J., “Leveraging the Information Asset,” Proc. 1995 ACM SIGMOD Internatl. Conf. Management of Data, 1995, pp. 451–452. Steyaert, P., et al., “Reuse Contracts: Managing the Evolution of Reusable Assets,” Proc. 11th Ann. Conf. Object-Oriented Programming Systems, Languages, and Applications, San Jose, CA, Oct. 6–10, 1996, pp. 268–285.
Chapter 3: Life cycle management
Overview
Life cycle management is responsible for all aspects of an asset, from conception to disposal. Figure 3.1, using two levels of abstraction, depicts the essence of the life cycle process. At the highest level, the life cycle seems deceptively simple: an asset is created; it is used and possibly reused; and finally, after its useful life is over, it is retired. The complexity comes about when we examine the next level of detail, as shown by the ovals within the boxes.
Figure 3.1: Basic life cycle process stages.
Before we examine each life cycle stage, it is necessary to identify briefly the type of assets being considered. As discussed in Chapter 2, the automation assets of interest are intangible and exist only as intellectual constructions (models or software). Although the number of different classes of assets is relatively small, the number of individual assets in each class may be quite large. Physical assets generally have weak relationships between assets of the same class and those of different classes. Automation assets, on the other hand, usually have relatively strong relationships within and between classes. That requires a modeling technique that recognizes those relationships and can accommodate them adequately. In fact, it usually is necessary to model the asset classes as well as the individual assets in each class.
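The three top-level stages of Figure 3.1 behave like a small state machine. The sketch below is an illustrative assumption about the transition rules (the book details each stage in the following sections): an asset is created, enters use, may re-enter use (reuse), and is finally retired.

```python
class ManagedAsset:
    """Top-level life cycle of Figure 3.1: created -> in use (reused) -> retired.
    The transition table is an illustrative assumption, not the book's rules."""

    _next = {"created": {"in use"},
             "in use": {"in use", "retired"},   # re-entering "in use" is reuse
             "retired": set()}                  # terminal stage

    def __init__(self, name):
        self.name = name
        self.stage = "created"
        self.uses = 0

    def advance(self, stage):
        # Reject transitions the life cycle does not allow.
        if stage not in self._next[self.stage]:
            raise ValueError(f"cannot go from {self.stage} to {stage}")
        self.stage = stage
        if stage == "in use":
            self.uses += 1

asset = ManagedAsset("order-scenario")
asset.advance("in use")       # first use
asset.advance("in use")       # reuse
asset.advance("retired")      # end of useful life
```

Making the transitions explicit is one way life cycle management can be enforced rather than merely documented.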
3.1 Creation stage management
As shown in Figure 3.1, asset creation is the process of recognizing a need for an automation asset, finding a way to meet that need, and obtaining the asset and making it available to all users who can make effective use of it. The activities in each step are discussed in the following sections. A significant amount of information is needed about each asset, including ownership, date created, and scope of use. Those types of metadata are examined in the discussions of the repository (Chapter 4) and business rules (Chapter 5).
3.1.1 Need recognition
There are two ways to determine the need to create an asset with specific functionality or characteristics. The first way is to develop a model of the enterprise that addresses one or more asset types. From that model, appropriate assets are specified. The model is called the class model because it is used to provide the specification for all assets of the same type (or class). The second way to determine asset need is to recognize that some operational aspect of the enterprise cannot be completed, or is less efficient than possible, because an asset does not exist. The structure of a given asset is called the unit model of the asset. Requirements for missing assets utilize the unit model because such requirements generally are discovered on a singular basis, although one requirement may result in the specification of multiple assets.

In general, both models must be used to provide the enterprise with a robust set of assets for any class. However, depending on the class, one model is used more than the other. For example, in determining the set of scenarios needed (Chapter 9), the class model is the major source because the scenarios are designed to reflect the desired business operation. A small number of additional scenarios are defined by applying the unit model as needs are determined from actual operational experience. In contrast, human interface assets (Chapter 23) are, in general, determined on a situation-by-situation basis, making the unit model the major source. However, enterprisewide guidelines usually are used in the development, bringing in the class model as a structuring concept.

Whichever model, if any, is dominant, the following must hold for a given asset class:
§ A harmonious relationship exists between the class model and the unit model.
§ The final set of assets must be consistent, regardless of the originating method.
Maintaining a consistent relationship, starting with the original use of these types of models in data definition, has always been a problem. The interaction between the two models is discussed, and some possible harmonizing solutions are presented after the general characteristics and use of each of the individual models have been examined.
3.1.1.1 Class models
Viewing the enterprise as a whole has always been difficult to do with any degree of accuracy. In addition, using this type of view to determine the characteristics of a class of assets adds further inaccuracies, and the result may not be useful. Because of those perceived problems, the development of class models largely fell out of favor. However, with the tendency toward management by process and the intensifying focus on software reuse, the specification of enterprise-level models is experiencing a resurgence in popularity. The effective use of either of those approaches (management by process and software reuse) requires a certain amount of enterprise-level modeling. Figure 3.2 shows the position of the class model in the enterprise.
Figure 3.2: Position of the class model.
The use of object-oriented technology for software development and the implementation of enterprisewide standards are also focusing attention on class models, which are useful in achieving maximum return from the use of those techniques. The widespread use of the Unified Modeling Language (UML), standardized by the Object Management Group (OMG), for modeling object classes is an indicator of the growing popularity of class-level modeling.

Ideal classes
There are two basic reasons for defining and using class models. The first is to structure the entire asset class in some way so that the need for additions or changes in the set of assets can be evaluated using some reasoning or analysis paradigm. The ability to examine the defined class of assets based on the model structure is of considerable importance. In that regard, the purposes of both reasoning and analysis activities are basically the same, although they are achieved in somewhat different ways. Reasoning refers to the use of human intelligence to arrive at a conclusion based on the available information, while analysis utilizes a predefined procedure or approach that could be implemented via software. The commonality of purpose is the major reason that the two terms usually are considered together. In addition, they can be used synergistically to achieve their common results. The purposes of analysis and reasoning are provided in the following list:
§ To increase insight and understanding about the asset class;
§ To perceive new information about the contents of the class;
§ To determine the usefulness of individual assets and groups of assets in the class;
§ To identify deficiencies in the class;
§ To make decisions about changes to the class.
Those purposes are not presented in any particular order. All the purposes are important, and any ordering would depend on the circumstances of a specific situation.
The ultimate purpose of examining a class model is to achieve an optimum set of automation asset classes. Because the term optimum implies, in some sense, a lack of defects, the approach is oriented toward the identification and elimination of deficiencies that can cause a less than optimum condition. An ideal asset class is defined to be one that has no defects. Although it cannot be proved in the usual sense, the following characteristics of asset classes are assumed to be optimum for the enterprise. The term ideal has the same meaning as that defined earlier.
§ Every asset class is ideal.
§ The class of all asset classes is ideal.
§ The class of relationships between asset classes is ideal.
Those characteristics are easy to state but difficult to determine and accomplish. Deficiencies in nonideal asset classes are the cause of many difficulties in their utilization in enterprise activities. That is a major incentive for spending a significant amount of resources on reasoning and analysis activities dedicated to the automation assets. By spending the resources up front, the need to expend more resources later in fixing problems can be avoided. To be able to refer to all the characteristics as a single item, each of the class types presented in the previous list is called a reasoning class. That term is used during the course of the discussion to indicate that any listed class can be included as the object of the discussion.

Deficiencies
The statement that the characteristics are considered to be optimum for the enterprise needs a brief explanation. First, the three tenets of an ideal reasoning class can be stated in more familiar terms by stating the deficiencies that prevent the ideal from being achieved. An ideal class has none of the following:
§ Gaps, which indicate that the understanding of the needs and use of the class is not sufficient. Gaps may require the procurement of additional elements under conditions not conducive to proper consideration.
§ Overlaps, which indicate poor asset requirements, specifications, or design. They may be benign but are the usual source of conflicts, inconsistencies, and superfluous elements.
§ Conflicts, which indicate that element requirements were missing or not correctly articulated. Conflicts are a major source of bugs and faulty operation.
§ Inconsistencies, which indicate that the provisioning environment was not adequately characterized. They are also a major source of bugs and faulty operation.
§ Superfluous elements, which indicate incomplete or inaccurate information about existing items. They waste resources and can confuse the provisioning process by offering unnecessary choices.
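Some of these deficiency checks lend themselves to simple automation. The sketch below is a hypothetical structure, not part of the book's repository design: it flags three of the five deficiency types for one asset class, given the functions the class must cover and the functions each asset actually provides.

```python
def deficiencies(required, provided):
    """Flag gaps, overlaps, and superfluous elements for one asset class.
    `required` is the set of needed functions; `provided` maps each asset
    to the set of functions it supplies. (Illustrative sketch only.)"""
    covered = {}
    for asset, funcs in provided.items():
        for f in funcs:
            covered.setdefault(f, []).append(asset)
    gaps = sorted(f for f in required if f not in covered)          # nothing covers f
    overlaps = sorted(f for f, a in covered.items() if len(a) > 1)  # duplicated coverage
    superfluous = sorted(a for a, funcs in provided.items()
                         if not any(f in required for f in funcs))  # asset never needed
    return {"gaps": gaps, "overlaps": overlaps, "superfluous": superfluous}

report = deficiencies(
    required={"billing", "ordering", "shipping"},
    provided={"comp-A": {"billing"},      # overlaps with comp-B on billing
              "comp-B": {"billing"},
              "comp-C": {"archiving"}})   # superfluous: not required at all
# report == {"gaps": ["ordering", "shipping"],
#            "overlaps": ["billing"], "superfluous": ["comp-C"]}
```

Conflicts and inconsistencies are harder to mechanize because they depend on the semantics of the requirements, which is why the text pairs automated analysis with human reasoning.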
Each reasoning class defined in the repository could be examined individually to show that any deviation from ideal will result in a nonoptimum condition. Nonoptimum in this context means that the enterprise will expend more resources than it would if an ideal reasoning class were available. The resources can be monetary or nonmonetary in nature. This discussion uses a general reasoning class to avoid unnecessary repetition. The concepts can be easily applied to any specific class. An important point to remember is that, eventually, each reasoning class must be examined, because any one of them can contain deficiencies. For example, a missing asset, a missing asset class, or a missing asset class relationship can cause problems as it is utilized. A given reasoning class can contain one or more of those problems and, unfortunately, usually does. The total elimination of deficiencies is almost certainly not cost effective. However, the considered utilization of reasoning and analysis techniques can provide a means of achieving a set of life cycle asset classes that can provide a significant improvement over those that can be attained without the use of these techniques. The second reason to consider class models is to develop an initial set of asset specifications and possibly some associated embodiments. The term initial applies not only to the first time the enterprise explicitly uses the defined assets but also to cases when the enterprise evolves, requiring changes to the class model. This proactive specification activity eliminates the development time that otherwise would be required once the need for an asset has been determined from operational considerations. From a time-to-market perspective, that can be crucial.
3.1.1.2 Unit models
Because a unit model is utilized for determining needs based on enterprise operations, it would be expected that the unit model would be simple compared to the class model. An examination of the diagram in Figure 3.3, which shows the fundamental positioning of a
generic unit model in the enterprise, indicates that is not true. The major complicating factor for this type of model is the need to ensure that the desired functionality necessitates creation of a new asset and that it cannot be satisfied by one (or possibly more) currently available.
Figure 3.3: Generic unit model.
Determination of asset need can be made by any operational enterprise organization if suitable facilities are available. It also can be made by the life cycle management organization. In any event, the unit model must be employed before any consideration of a new asset is allowed. As an integral part of utilizing the unit model, it is necessary to ensure that possible candidate assets have the proper characteristics for effective use in the expected operational environment, as discussed in Section 3.2.2.3. Although many, if not most, of the available assets will have been made available through the definition of a class model, the use of the unit model does not require that the class model exist. In this case, all assets are created on a one-by-one basis as the need for them arises. In general, relying on the unit model alone for asset creation is less efficient and results in a longer time to market than using a combination of class and unit models. As pointed out, the degree to which the use of one model or another is effective depends to a large extent on which asset is being examined.
3.1.1.3 Model integration
Because unforeseen difficulties arise as a result of enterprise operations, every asset class needs some type of unit model (formal or informal) to provide a common model structure for the individual assets that are members of the class. If a class model for an asset does not exist, the unit model produces a set of assets that have no organized structure among them. For asset sets with a small number of elements in them, that may or may not result in major inefficiencies. For asset sets with large numbers of elements, that lack of organization almost certainly will result in significant problems, since identification of existing assets that could be used to provide the needed function becomes extremely difficult. Assuming for the remainder of this discussion that both models are utilized to produce asset specifications, the relationship between the two specification models becomes of interest. Sometimes both models are used for an asset, but there is no direct connection between the two. In that case, it is possible to get assets that do not conform to the class model. Ultimately, the usefulness of the class model is compromised, and it probably will fall into disuse, resulting, by default, in an unintended unit-model-only situation. To provide the maximum benefits of both models, the relationship should be that indicated by the diagram in Figure 3.4. Each candidate asset specification produced by the unit model needs to be checked for conformity to the class model and placed in its
proper position in the model. In many cases, there is more than one possible location for an asset in the class model, and the most useful one needs to be explicitly determined through the use of appropriate business rules and experience of the involved staff.
Figure 3.4: Integration of class and unit models.
For example, consider the need for a new software component asset. Assume that the class model for software components consists of a building block structure in which each building block contains a cohesive set of individual components. The functionality could be provided by a new component:
§ In an existing building block;
§ In a new building block as part of an existing building block structure (class);
§ In a new building block as part of a new building block class.
Which position in the model is selected for the new software component depends on a large number of factors, including how different the new component is from all existing ones, the probability that a large number of similar components will have to be accommodated (i.e., the enterprise just went into a new line of business), the business rules governing the addition of new building block classes and hierarchies, and the knowledge of the decision maker as to the usage characteristics of the new component. The important consideration about making the decision explicitly is that, regardless of the final placement of the asset in the model, the asset will be in conformity with the class model. Except from a historical perspective, there is no difference between assets defined from the unit model and those defined from the class model.
3.1.2 Requirements determination
Once the need for an asset is recognized, the requirements for that asset must be determined. The complexity of this determination depends on such factors as the business need and the unit model of the asset involved. If, for example, the new asset needed is a process, a unit process model will have a great impact on the requirements for any new process.
3.1.3 Procurement
Automation asset procurement is much more than the usual consideration of “make versus buy” that rests heavily on financial considerations and the availability of internal
development resources. Because of the characteristics of automation assets, especially the close relationships between assets and asset classes, many aspects must be considered before the best course of action can be determined. The most cost-effective procurement priority for these assets is generally as follows:
1. Reuse assets the enterprise already has.
2. Modify assets that the enterprise already has.
3. Buy a COTS product.
4. Modify a COTS product.
5. Develop a custom asset in-house.
The weight given to each course of action greatly depends on the asset. For the specification of an enterprise scenario asset, the only procurement methods usually considered would be 1, 2, and 5 because scenarios are enterprise specific. For a software component asset, all the procurement methods would be considered. The analysis necessary to determine the best way to meet the identified needs requires a relatively complex set of activities. Although a comprehensive treatment of this subject is beyond the scope of this presentation, the approach is briefly outlined as follows. Again, it must be noted that not all the considerations addressed are applicable to all assets. They must be determined on a case-by-case basis.
1. Document the requirements and specifications in a standard form. One of the major reasons for modeling an asset is to establish a standard structure or form for the asset. Because each asset class has a known form, comparison between assets is facilitated. Without a standard form, performing the remainder of the activities needed for procurement analysis would be almost impossible.
2. Determine which set of existing assets, if any, can meet the functionality need. That is accomplished by answering the following questions in order:
§ Are there any existing assets that match exactly the needed asset functionality?
§ Are there any existing assets that have a model that exactly contains the needed asset functionality?
§ Are there any existing assets that have functionality close to the functionality of the needed asset?
§ Is there a set of assets the union of which exactly contains the functionality of the needed asset?
§ Is there a set of assets the union of which contains functionality close to the functionality of the needed asset?
3. Determine which, if any, of the assets can most effectively meet the need. That is accomplished by considering the effect of the asset in four areas using a questioning technique:
§ Business: What amount of the required functionality does the candidate asset have? Is any inherent product process compatible with the process in which it will be used? What are the risks and constraints in using the asset?
§ Technical: Does the asset interoperate with other needed assets? Is the asset compatible with the infrastructure? Where in its life cycle is the technology used for the asset?
§ Financial: What is the cost of the asset to procure, change if necessary, and operate? Can it also avoid future costs or help generate revenue?
§ Operational: What is the quality of the asset? How long will it take to put into service? How efficient is it in the use of corporate resources? Are any staff members experienced in its use? How much training will be required?
4. Determine how the selected asset(s) can most effectively be utilized. The usage modes are given in the following list, and some of the consequences are discussed in Section 3.2.1.
§ Use as a private element with no intent to reuse. This mode should not be used very often, but occasionally it is necessary.
§ Use in a shared mode. There is one physical copy of the asset, and all users share the embodiment. That is the optimum mode, because changes need to be made only once and they are instantly effective. However, a proposed change to benefit one user must be examined to determine its effect on other users.
§ Use in a copy mode. There are multiple identical physical copies of the asset. They must be carefully coordinated to ensure that all copies are kept in agreement.
§ Use in a modify mode. There are multiple physical copies of the asset, each of which may differ somewhat. There may or may not be a master copy of the asset that was the original source for the multiple copies.
§ Use in a specifications mode. There is one specification for the asset, but the multiple physical embodiments may differ. Embodiments may also be produced from changes to the basic specification.
5. Develop a new asset if no existing one is suitable. For large assets, this is the least desirable option; for small assets, it may be the most effective. The determination greatly depends on the given situation.
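The ordered questions of step 2 can be read as a search that stops at the first question yielding candidates. The sketch below assumes asset functionality can be represented as sets of function identifiers and uses an arbitrary two-thirds threshold for "close" functionality; both are illustrative assumptions, and the union questions are omitted for brevity.

```python
# Hypothetical sketch of the step-2 matching questions.
# Asset functionality is modeled as a set of function identifiers.

def find_candidates(needed, assets):
    """Answer the step-2 questions in order; return the first
    question that yields candidates, with the candidate list."""
    exact = [a for a, f in assets.items() if f == needed]
    if exact:
        return "exact match", exact
    containing = [a for a, f in assets.items() if needed <= f]
    if containing:
        return "model contains needed functionality", containing
    # "close" functionality: assumed here to mean at least two-thirds
    # of the needed functions are present (an illustrative threshold)
    close = [a for a, f in assets.items()
             if len(needed & f) / len(needed) >= 2 / 3]
    if close:
        return "close match", close
    return "no single-asset match", []

assets = {
    "billing_v1": {"invoice", "credit"},
    "suite": {"invoice", "credit", "dunning"},
}
print(find_candidates({"invoice", "credit", "dunning"}, assets))
```

The standard form called for in step 1 is what makes a comparison of this kind possible at all: without a common representation of functionality, the questions could not be answered mechanically.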
The purpose of this discussion is to indicate the differences in addressing the procurement of intangible automation assets from that of physical assets. It only touches on the many types of analysis and reasoning that must accompany any need to obtain a specific asset. Each area could be the subject of a lengthy presentation.
3.1.4 Deployment
Once the asset is obtained in a suitable form, it must be made available to potential users, and information concerning its characteristics must be made a part of the repository. Depending on the asset, it may also reside in a repository or in some other type of storage suited to its characteristics. Deployment is the procedure that performs that activity. Once an asset is deployed, its creation stage is complete, and the asset is ready to be advanced to the next life cycle stage. Many areas must be examined to determine the best provisioning approach. Should availability be gradual or immediate? Available to all users or a select few? Should some type of user training be performed prior to availability? What type of backup plan is needed if unforeseen problems arise? Some of those issues are addressed in Section 3.2; others must be left to other sources for the required information.
3.2 Use stage management
The use stage requires two distinct types of management activities. The first, called configuration management, manages change and update activity and referential integrity for the automation assets and their users. The second, operations management, manages the availability and accessibility of the existing assets. Both types of management are necessary to provide users with asset functionality when it is needed.
3.2.1 Configuration management
To facilitate the remainder of the discussion, configuration management is partitioned into four areas: versioning, updates, interoperability, and multiple users. The activities and capabilities of each aspect are presented in sufficient detail for the reader to understand their purpose in the asset life cycle.
3.2.1.1 Versioning
Version control consists of the information and functions needed to maintain positive control over the state of each asset that is part of the environment of interest. The purpose of version control is to ensure that all the communicating assets are compatible with each other in those areas necessary for correct operation. It is, of course, possible to define and utilize separate pools of compatible assets, with each pool used for different purposes. Versioning information can be utilized in a number of areas, but its main purposes are the evaluation of proposed changes to the product and version mix and helping to determine the cause of any interoperability problems that arise. Versioning also implies the maintenance of a history of changes so that an audit trail is available if problems arise.
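The two versioning purposes mentioned here — evaluating a proposed product and version mix against pools of compatible assets, and keeping an audit trail of changes — can be sketched as follows. The pool names, assets, and data layout are hypothetical; the book specifies no particular mechanism.

```python
# Minimal versioning sketch (hypothetical data): each (asset, version)
# pair is assigned to a compatibility pool, and every change is logged.

history = []   # audit trail of (asset, old_version, new_version) tuples

pools = {      # compatibility pool for each (asset, version) pair
    ("billing", "2.1"): "pool_a",
    ("ordering", "5.0"): "pool_a",
    ("shipping", "1.3"): "pool_b",
}

def record_change(asset, old, new, pool):
    """Log the change and register the new version's pool."""
    history.append((asset, old, new))
    pools[(asset, new)] = pool

def mix_is_compatible(mix):
    """A proposed product/version mix is acceptable when every
    member sits in the same compatibility pool."""
    return len({pools[av] for av in mix}) == 1

print(mix_is_compatible([("billing", "2.1"), ("ordering", "5.0")]))  # True
print(mix_is_compatible([("billing", "2.1"), ("shipping", "1.3")]))  # False
```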
3.2.1.2 Updates
While the versioning activity tracks the different product versions, the update activity determines:
§ What products and versions will be deployed;
§ What schedule will be used for the deployment;
§ What type of user turnover will be utilized.
In short, the update (or release planning) activity is concerned with providing a continuing effective set of products that will meet the automation needs of the enterprise. The products can be either COTS products or proposed or actual custom implementations. It is not unheard of for an organization to develop a custom product at great cost and then, for valid configuration management reasons (e.g., a needed product from an outside supplier does not meet the needed interface requirements), decide not to deploy the product or version or delay it for a significant amount of time. This type of problem usually results from lack of communication among the various enterprise functions, although rapidly shifting enterprise needs can also be a culprit. Determining, in general, what products are to be utilized in the enterprise environment is a complex undertaking well beyond the scope of this discussion. Determining if a new version or replacement of an existing product is to be deployed is somewhat more manageable and is the focus of this presentation. Also, the update decision can have more of an impact on the users than the decision to utilize a new product for which the functionality does not yet exist in the environment. Many questions must be asked and answered. Some of the major questions are:
§ Does the new version interoperate with the current versions of other deployed products?
§ Are the new or improved functions really needed by the users?
§ Are any existing functions eliminated in the new version?
§ Do any hardware platforms require an upgrade?
§ What is the total cost for the upgrade (including acquisition, operations, and upgrades)?
§ When is the next upgrade due and what changes and additions will it have (can one or more versions be leapfrogged)?
§ What operational considerations need to be addressed (are additional support personnel needed)?
As those questions indicate, both technical and financial aspects must be considered heavily in the decision to deploy or not to deploy a new version of a product. It is tempting to consider the product as utilized in the previous discussion only as a software product. However, it is useful at this point in the discussion to note that configuration management is applied to every asset with a life cycle. Thus, versions and updates of such items as business rules, processes, scenarios, and so on, are also subject to this type of analysis. Although they are not addressed in that particular context in this book, so are enterprise organization charts and personnel assignments. The fact that configuration management applies, in one form or another, to almost all enterprise assets needs to be kept in mind throughout the discussion. When to deploy a new product version can also be a difficult decision requiring considerable analysis. The following factors may be of importance in this consideration, depending on the asset in question:
§ Volume of use versus time (including day, week, month, holidays, and so on);
§ Need for new or upgraded features and functions;
§ Number of other products that have to be deployed simultaneously;
§ Associated requirements (power shutdown, equipment reconfiguration);
§ Skill level needed for deployment (clerk, engineer, team of specialists);
§ Complexity of deployment (testing required, remote or local location, previous experience).
When a new product or version is used, there is always an increased possibility of problems. Even though the change may be welcomed and wanted by all concerned, traveling in unfamiliar territory almost always causes difficulties. It is usually wise to assume the worst in that regard and plan the deployment accordingly, regardless of specific answers to the questions.
3.2.1.3 Interoperability
In Section 3.2.1.2, the links of interest were those between different versions of the same asset or replacement assets. The link that defines the interoperability between different assets has not yet been considered. That link is the operational link between assets that must exist to meet some enterprise need. The information is of considerable importance in determining the effects of changes to one or more of the assets. Unfortunately, this type of link can be complex. The two kinds of operational links of interest are interoperability and aggregation/decomposition. The interoperability link is one that indicates that two assets must operate together in some fashion to provide the needed results. The concept of interoperability varies depending on the asset involved. For example, interoperability between two processes means that two different processes must be utilized to respond to a business event. Interoperability between two data structures means that the definition and the relative position of each element in the structure must be the same. Interoperability between two roles means that the roles must be defined such that they can cooperate in the handling of a request. The fact that two assets are capable of interoperating, or have interoperated in the past, does not mean that they will do so in the current operational environment. For example, assume that an enterprise has a machine repair process and an advertising process. Both processes are implemented using the same unit model and facilities. Should there be a need to interoperate, the two processes probably could. In fact, the configuration management information could show a link between the two processes with the characteristic that they were both compatible in some way. Usually, however, the two processes are different enough that there are no explicit interconnections between them. They do not interoperate in support of the business.
Why is that distinction important? Consider an augmentation of the previous example by the addition of a third process, one that is concerned with accounting. Assume also that the accounting process is also implemented with the same unit process model as the machine repair and advertising processes. They all can interoperate. However, the accounting process currently has explicit interactions with the other two. If the advertising process should be updated, its effect on the accounting process would have to be determined. The machine repair process is not affected and would not have to be examined unless the accounting process changed. A later change may affect the machine repair process, and that would have to be determined. Because assets can be used in multiple ways in different areas of an enterprise, any analysis concerning the effects of a product or version change must be based on the current asset interactions across all enterprise processes or needs. That is another reason for the asset modeling approach presented earlier in this chapter. The second operational relationship of interest is that of aggregation/decomposition. In this relationship, the assets that form parts of other assets must be known. One of the most important of those assets is the tasks that are used to provide the functionality for a process. Because a specific task can be contained in more than one process, determining the effect of a change to that task would require examining the effect on every process of which it is a part. That can be carried one level further. Assume that a task consists of multiple software components. Then the same component could be a part of multiple tasks, each of which is used in multiple processes. A proposed change to a software component could have widespread implications, each of which would have to be examined and the effects determined. 
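The aggregation/decomposition relationship makes change impact a matter of following links: a component change must be traced through every task that contains the component and then through every process that uses those tasks. A sketch, using hypothetical task and component names alongside the processes from the example above:

```python
# Hypothetical change-impact sketch following aggregation links:
# software component -> containing tasks -> processes using those tasks.

task_components = {
    "schedule_repair": {"calendar", "notify"},
    "bill_customer": {"notify", "invoice"},
}
process_tasks = {
    "machine_repair": {"schedule_repair"},
    "accounting": {"bill_customer"},
}

def affected_processes(component):
    """Every process that must be examined when the component changes."""
    tasks = {t for t, comps in task_components.items() if component in comps}
    return {p for p, ts in process_tasks.items() if ts & tasks}

# a shared component ripples into every process that contains it
print(affected_processes("notify"))    # {'machine_repair', 'accounting'}
print(affected_processes("invoice"))   # {'accounting'}
```

The sketch shows exactly the widespread-implication problem described in the text: a single shared component flags multiple processes for examination, while a component used in only one task confines the analysis to one process.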
Depicting the operational links in a fashion that can facilitate the understanding of the effects of proposed changes is an area of considerable interest. A graphical indication of the existing interactions between assets is a useful representation of that information for some types of assets. Such assets would include processes, software programs, and manual tasks. Figure 3.5 illustrates this type of map for software programs. The map in Figure 3.5 is a type of entity-relationship (E-R) diagram in that it shows specific relationships between connected assets. For clarity, only some sample relationships are provided. Note that the software programs come in different forms. Included are standalone systems, utilities, and tasks that are meant to be used in conjunction with other tasks.
Figure 3.5: Interaction map.
Also included are workflows that refer to aggregates of software programs that together implement some enterprise process. For large numbers of programs, this type of map can become convoluted, but the pattern recognition it affords can make it quite useful under certain circumstances. There is no time associated with this type of diagram because the control aspect of the asset relationship is not included. Control generally is not considered under the scope of configuration management because it deals with the dynamic operational relationships among some specific types of assets. Configuration management considerations usually are restricted to static information and relationships. The same type of reasoning also would exclude process operations statistics and other transient data.
For some other assets, such as business rules, a graphical map would not be appropriate because of the large number and small size of the assets involved. For those types of assets, an algorithm that can provide the needed analysis and reasoning can be utilized instead. The responsibilities of configuration management in deciding the appropriateness and timing of changes are considerable but too often are ignored in favor of a concentration on version control alone. That lack of information can lead to inadequate and inappropriate decisions regarding proposed changes to an asset. Admittedly, the amount of information is large and the functions complex. However, the effort must be made to help ensure suitable control of the asset life cycle.
3.2.1.4 Multiple users
An asset can have multiple users. The users can be other assets, humans, or both. Multiple usage of an asset can be manifested in many ways. Multiple users can:
§ Share the same embodiment;
§ Use a copy of the embodiment;
§ Use a derivative of the embodiment;
§ Use a different embodiment of the same specification;
§ Use a derivative of the specification;
§ Use any combination of the above.
Each method of multiple usage imposes different requirements on the change analysis function. If, for example, two users share the same embodiment of a function and one of the users proposes a change, the other users must be able to accommodate the change for the change to be made and the same relationship maintained. If the change is absolutely necessary for one user but cannot be tolerated by one or more of the sharing users, the relationship must change for those users. In that case, the relationship probably would change to that of a derivative of the embodiment. There is always a question as to why user information must be maintained for other than shared relationships. The answer is that even in derivative situations, it is good engineering practice to keep all the asset embodiments as close as possible to minimize their proliferation. That in turn reduces maintenance and facilitates the determination as to what changes should be made in any given asset and embodiment. For example, assume that user A uses one embodiment of an asset, while user B uses a derivative of that embodiment because not all the functions of the asset can be accommodated by user B. Further assume that user A finds a problem with the asset and determines what is needed to correct the problem. It is entirely possible that the same problem exists in the asset embodiment used by user B. If the information concerning the derivative use relationship were not kept, there would be no way of determining which asset embodiments should be examined for the effects of proposed changes.
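The user A / user B example can be restated as bookkeeping: record which embodiment each user holds and how embodiments derive from one another, so that a problem found in one embodiment identifies every derivative that should also be examined. The identifiers below are hypothetical.

```python
# Hypothetical sketch of multiple-user bookkeeping for embodiments.

user_embodiment = {"user_a": "emb_1", "user_b": "emb_1_deriv"}
derived_from = {"emb_1_deriv": "emb_1"}   # derivative -> source embodiment

def embodiments_to_examine(embodiment):
    """The embodiment itself plus everything derived from it,
    directly or through a chain of derivatives."""
    out = {embodiment}
    changed = True
    while changed:   # iterate until no new derivatives are found
        changed = False
        for deriv, src in derived_from.items():
            if src in out and deriv not in out:
                out.add(deriv)
                changed = True
    return out

# user A finds a problem in emb_1; user B's derivative must be checked too
print(embodiments_to_examine("emb_1"))   # {'emb_1', 'emb_1_deriv'}
```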
Keeping all the user information is a function of configuration management. That represents a considerable administrative burden, but the results usually more than compensate for the resources expended in the maintenance of the necessary information.
3.2.2 Operations management
The purpose of operations management is to maintain an environment that can effectively support the assets utilized by the enterprise. The environments vary greatly from industry to industry and from enterprise to enterprise within an industry. An environment is defined as that portion of an enterprise concerned with utilizing and maintaining a set of related assets for the economic well-being of the enterprise. Operations management activities usually are partitioned into two basic types: online activities and offline activities. Online activities are those that must take place simultaneously with the operations aspects of the environment and usually involve the
same general time frame. For example, monitoring a communication network for traffic congestion would be an online activity because it is taking place at the same time the network is being used to provide a service. The major online activities are monitoring and problem detection and correction.
3.2.2.1 Online management
Monitoring is an activity that is not an end in itself. The result of any monitoring function is to provide data for use in performing some other aspect of operations management or for other automation asset system management functions such as configuration management. Monitoring must be accomplished in such a way that it does not interfere with operations activities. The output of a monitoring function can be stored to be used by an offline activity or sent in real time to another management activity. Problem detection determines if the asset is performing in the way that was intended. That generally is accomplished through the real-time examination of data produced from the monitoring activity. A problem can appear in many forms. Using the network example, a problem could be a broken connection, congestion, or capacity traffic. Once a problem is found, it must be corrected from a service perspective. That does not mean the problem is eliminated (i.e., repaired). It only means that the service expected by a user is restored. In the case of a broken network link, that could involve dynamic rerouting of traffic around the break. In some cases, as will be discussed later, a temporary fix is not possible, and the only way to correct the problem is to effect a repair.
3.2.2.2 Offline management
Offline management refers to activities that do not have to occur while the operational activities are taking place or that require a different (usually longer) time period in which to accomplish the function. An example of this type of activity would be preventative maintenance. In fact, preventative maintenance usually is scheduled so it can take place while operational use is absent or at a minimum. The major offline activities are:
§ Help;
§ Preventative maintenance;
§ Repair;
§ Capacity planning;
§ Facility expansion.
3.2.2.3 Environments
An environment is defined as a set of enterprise assets that must be managed as a unit. The concept of an environment is needed as a framework in which to integrate a set of assets and other components of a business that need to be considered together. Assets can be hardware, software, data, or, in some cases, a combination. The enterprise automation environment (further defined in later chapters) is an environment as specified by the definition. The assets of the environment include the operational software, hardware, and communications networks. The specification of the “best” set of environments depends on the type of business concerned and the pragmatics of the specific enterprise involved. However, because the environment is the basic unit of operations management, some attention needs to be given to the definition of a suitable set of environments. There can be many reasons for the definition of a specific environment:
§ It utilizes a special or hard-to-obtain type of expertise.
§ It must be managed as an integrated system.
§ It accounts for a significant amount of the enterprise assets.
§ It is crucial to the efficient functioning of the enterprise.
§ It requires tight security or other form of close control.
§ It is merely a convenient unit of specialization for the management of complexity.
Except for the last item, those reasons share a common motivation: the environments they define require explicit attention. Without that attention, the effectiveness of the environment eventually would be compromised. Although the concept of an enterprise environment would be advantageous as defined, some additional characteristics are useful in ensuring that no significant area is omitted from inclusion in an environment:
§ Environments are mutually exclusive. An asset can be a member of only one environment (copies of assets are considered separate assets).
§ Environments are complete. Every asset must be a member of an environment.
Operations management is a management activity directed toward ensuring that the assets used in the operations activities of the environment are in a state that provides the maximum effectiveness. A more formal definition of operations management is a set of activities whose purpose is to maintain the assets of an environment such that a set of predetermined characteristics of the assets is preserved. The characteristics in the definition refer to the qualities that the asset must have to be considered in a state appropriate to its use in the operations of the enterprise. In addition to the enterprise automation environment, the asset management system is also an environment. The set of related assets of the automation asset management system consists of the repository and those assets addressed by the repository. In the case of physical assets and some logical assets such as software products, the repository may contain only information (including metainformation) about the assets. In that case, the repository information is defined as separate, but associated, automation assets. Some assets, such as scenarios and business rules, may exist only in the context of the repository. All other assets of the enterprise are partitioned into other environments as needed.
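The two characteristics — mutual exclusion and completeness — together say that the environments form a partition of the enterprise assets, and that property is mechanically checkable. A sketch with hypothetical asset and environment names:

```python
# Sketch of the two environment characteristics: mutually exclusive
# (no asset in two environments) and complete (every asset placed).

all_assets = {"repository", "scenario_x", "billing_sw", "router"}
environments = {
    "asset_mgmt_system": {"repository", "scenario_x"},
    "enterprise_automation": {"billing_sw", "router"},
}

def check_partition(assets, envs):
    members = [a for env in envs.values() for a in env]
    exclusive = len(members) == len(set(members))   # no duplicates
    complete = set(members) == assets               # nothing left out
    return exclusive, complete

print(check_partition(all_assets, environments))    # (True, True)
```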
The definition of the automation asset management system as an operational environment can be motivated by examining the reasons that an environment is defined.
§ It utilizes a special or hard-to-obtain type of expertise. The expertise needed to supply an effective repository function is specialized. Ensuring that effective automation assets are available when needed requires planning expertise, technical skill, and negotiating prowess.
§ It must be managed as an integrated system. The automation asset management system and the assets that are managed by it must be considered as a tightly coupled aggregate.
§ It accounts for a significant amount of the enterprise assets. Depending on the specific business involved, that can indeed be the situation. In the case of an information technology–oriented enterprise, that almost certainly would be true.
§ It is crucial to the efficient functioning of the enterprise. Businesses can and do operate without an emphasis on the management of the automation assets. However, the efficiency of providing a service can be increased considerably by utilization of the concepts of automation asset management as presented herein.
§ It requires tight security or other form of close control. Automation asset management requires a significant amount of security to ensure the integrity of the information involved. Because of the high degree of interaction between the assets, any corruption of the information can have far-reaching consequences.
§ It is merely a convenient unit of specialization for the management of complexity. Given the complexity of the automation assets and their management, this is also true, but it is probably not the major reason to consider the asset system as an environment.
3.3 Retirement stage management
The activities of life cycle management in the retirement part of the life cycle rely heavily on the use of business rules to make the necessary decisions concerning the disposition of an asset. Although business rules certainly could be used in any stage of the life cycle, they are particularly useful in the retirement stage. There are two aspects to the retirement of an asset. The first is the decision to move the asset from the use stage to the retirement stage. That usually occurs when the asset has reached legacy status or a replacement using newer technology has become available. The second aspect is the actual retirement of the asset. That is usually gradual and may consist of several steps. As an example, the first step may limit the use of the asset in new development to the relatively few situations in which it is still attractive, such as when a new function is being developed that has to tie closely to an existing system that uses the asset. The next step may be to bar the asset from any new development but still maintain it where it is in active use. Finally, the asset is eliminated from all active use and eventually completely removed from the repository. Gradual retirement is usually necessary to maintain continuity among the assets and preserve the integrity of the many relationships involved.
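Since the retirement decisions are described as business-rule driven and stepwise, a small sketch may help. The step names, the `retirement_step` function, and the asset fields (`legacy`, `replacement_available`, `active_deployments`) are all hypothetical illustrations, not constructs defined by the methodology:

```python
from enum import Enum, auto

class RetirementStep(Enum):
    """Hypothetical gradual-retirement steps, per the text's example."""
    RESTRICTED_NEW_USE = auto()  # new use only where it must tie to an existing system
    MAINTENANCE_ONLY = auto()    # no new use; maintained where already deployed
    WITHDRAWN = auto()           # eliminated from active use, pending repository removal

def retirement_step(asset):
    """Business-rule sketch deciding an asset's retirement step.

    `asset` is assumed to be a dict with `legacy`, `replacement_available`,
    and `active_deployments` keys; these are illustrative fields only.
    """
    if not (asset["legacy"] or asset["replacement_available"]):
        return None  # still in the use stage; retirement not yet triggered
    if asset["active_deployments"] > 0 and asset["replacement_available"]:
        return RetirementStep.MAINTENANCE_ONLY
    if asset["active_deployments"] > 0:
        return RetirementStep.RESTRICTED_NEW_USE
    return RetirementStep.WITHDRAWN
```

Encoding the steps as explicit rules, rather than leaving them to chance, is exactly the point the chapter makes about the retirement stage.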
3.4 Implications
Some major implications of life cycle management follow directly from the discussion in this chapter.
§ The process implementation methodology must explicitly consider the needs of life cycle management.
§ Assets must be closely monitored and placed in the proper life cycle stage. That prevents the use of inappropriate assets and keeps the number of active assets to a minimum.
§ Considerable attention must be given to the definition of appropriate asset and asset class models. Those models will, to a great extent, determine the efficiency of defining and utilizing the appropriate assets.
§ Configuration and operations management are crucial in the use of an asset-based approach. Without an effective program in each of those areas, the suitability and availability of needed assets will be unreliable.
§ The retirement of an asset requires that a series of explicit decisions be made. Leaving the retirement stage to chance will result in the use of outdated assets.
§ Effective life cycle management is needed to control the resources needed for the definition and use of automation assets.
Chapter 4: Repository utilization The organization of a discussion of the repository must be carefully crafted because of the complexities involved and the need to ensure that the topic is well understood. Because of its critical importance in automation asset management, the structure, components, and capabilities of the repository must be carefully motivated. To that end, the main discussion is organized around a conceptual model of a repository. This chapter introduces the basic model and examines the individual components in some detail. Then a discussion of repository access follows, including a look at the different types of production users and their needs as well as the administration and operational users of the repository.
4.1 Metamodel Before beginning the discussion of the conceptual model and its layers and components, the concept of a metamodel needs to be well understood. In general usage, the prefix meta has several meanings. As usually applied to situations encountered in enterprise automation, the dictionary definitions of meta of interest are: (1) a more highly organized or specialized form of; (2) more comprehensive, transcending. That translates to the following general definition, which is in a form somewhat more suitable for the current discussion.
A metamodel contains additional information about the asset that is not directly associated with its model or structural definition (e.g., where used, interactions with other assets, statistics, age). The diagram in Figure 4.1 illustrates the relationship of an asset, its model, and the metamodel. The asset is approximated by a model that represents the item. Because the metamodel contains information about the asset that is not a part of the asset model itself, it is shown as orthogonal to both the actual asset and its model.
Figure 4.1: Relationships of asset, model, and metamodel. Although the representation in Figure 4.1 indicates the proper relationships, it is awkward to use in practice, especially when we want to build more complex model structures. For that reason, the depiction of the relationship between the asset model and the associated metamodel is changed to that shown in Figure 4.2. In that representation, the asset model specification is included as an element of the metamodel specification. That makes it possible to reference all the information about an asset simply by referring to the metamodel. This representation will be utilized throughout the remainder of the discussions.
Figure 4.2: Simplified model and metamodel relationship.
A metamodel is specified in the repository definition because far more information about an asset than just the structure of its model must be available for effective use of the repository. The information can be, and is, used by both repository and user functions. As an example, the repository must keep information concerning the allowable access for any given asset and user. That is metamodel information. The users need information about how and where the assets are employed so changes can be tracked. That also is metamodel information. After the introduction of the repository conceptual model, the relationships of the specified metamodels and the underlying models are illustrated in graphical form as a means of motivating and understanding the use of the structure. In addition, specific examples of various metamodels and their utilization are provided as the discussion proceeds.
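The Figure 4.2 arrangement, in which the asset model is embedded as one element of the metamodel record, can be sketched as follows. All class and field names here (`AssetModel`, `where_used`, `access`, `age_days`) are invented for illustration; the text does not prescribe a concrete schema:

```python
from dataclasses import dataclass, field

@dataclass
class AssetModel:
    """Structural definition of the asset itself (illustrative)."""
    name: str
    attributes: dict

@dataclass
class AssetMetamodel:
    """Metamodel record in the style of Figure 4.2: the asset model is
    one element, alongside information *about* the asset."""
    model: AssetModel
    where_used: list = field(default_factory=list)  # tracked so changes can be propagated
    access: dict = field(default_factory=dict)      # allowable access per user
    age_days: int = 0                               # example statistic

rule = AssetMetamodel(
    model=AssetModel("credit_check", {"input": "order", "output": "approve/deny"}),
    access={"order_entry_tool": "read"},
)
# Everything about the asset is reachable through the one metamodel reference:
assert rule.model.name == "credit_check"
```

The single reference to `rule` is the convenience the simplified representation buys: model and metadata travel together.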
4.2 Conceptual model It would be nice to be able to provide a simple text definition of repository. Unfortunately, that is not possible. Any such limited definition would, of necessity, omit important aspects of the repository function. For that reason, the definition of a repository is in the form of a layered conceptual model. Layered models are frequently used to define communication protocols, but they also are useful in the definition of other complex structures. Figure 4.3 is one possible conceptual model of a repository. Although others could be defined, this is the model that is used in this chapter to describe the services, capabilities, and usage requirements of a robust repository.
Figure 4.3: Conceptual model of a repository. As indicated in Figure 4.3, the conceptual model of a repository contains six layers. Layers usually are numbered starting at the bottom, as indicated in the following list. This is also the order in which they are discussed. § Layer 1, the physical storage layer; § Layer 2, the database management system (DBMS) layer; § Layer 3, the asset metamodel layer; § Layer 4, the asset class metamodel layer; § Layer 5, the asset/class relationship metamodel layer; § Layer 6, the user view (metamodel) layer. In addition to those layers, the model defines three ways of accessing the repository according to the type of user involved. Thus, operational users, administrators (configuration management), and service managers (operations management) have specific functions they must perform and the access model specifically recognizes those types. While all users could have been lumped into one general class, it is more convenient for the purposes of discussion to separate them. Access methods and their specific functions are discussed after the individual layers have been considered. Each layer has a specific purpose in the operation of the repository. As is the case with most layered models, it is necessary to traverse the layers from top to bottom when utilizing the repository. Depending on the specific request of the repository, each layer provides a portion of the processing needed to satisfy the request. That can range from one or more layers providing almost a pass-through operation to a condition in which most of the processing is performed by a single layer. In most cases, however, each layer provides some significant value added in handling the request. To perform its intended purpose, each layer must contain both information appropriate to the layer and functions that manipulate and interpret that information. The functions also must enable the interaction of their layer with the adjacent layers. 
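The top-to-bottom traversal described above can be sketched as a chain of handlers, each adding its own value (or passing the request through) before delegating to the layer below. The `Layer` class and the request fields are illustrative assumptions only:

```python
class Layer:
    """Minimal sketch of top-to-bottom traversal through repository layers.
    Each layer may transform the request (add value) or pass it through."""
    def __init__(self, name, below=None, transform=None):
        self.name, self.below = name, below
        self.transform = transform or (lambda req: req)  # pass-through by default

    def handle(self, request):
        # Record the traversal, apply this layer's processing, then delegate.
        request = dict(request, trace=request.get("trace", []) + [self.name])
        request = self.transform(request)
        return self.below.handle(request) if self.below else request

# Wire the six layers so a request entering the user-view layer reaches storage:
storage = Layer("physical storage")
dbms = Layer("DBMS", storage)
asset = Layer("asset metamodel", dbms)
asset_class = Layer("asset class metamodel", asset)
interaction = Layer("asset/class interaction", asset_class)
user_view = Layer("user view", interaction,
                  transform=lambda r: dict(r, shaped_for=r["user"]))

result = user_view.handle({"user": "quality_tool", "op": "read"})
```

Here only the user-view layer does substantive work and the rest are nearly pass-through, matching the text's observation that the processing split varies by request.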
Some of the functions that directly interact with the layer information and that are needed to explicitly service users of the repository are described here.
In general, this discussion does not include functions that realize the internal operation of the repository, either specific to a given layer or used across layers, because they are dependent on the particular design and implementation of the repository. It generally is not possible to communicate with or access layer-specific information in any understandable form without going through the layers above it. It is the upper layers that understand the communication formats required by the lower layer. This characteristic is not a disadvantage of the repository even though it sometimes is portrayed as such, especially with respect to the DBMS and physical storage layers. That portrayal generally results from a common data storage characteristic of legacy systems, which, in general, store data in such a way that the data cannot be accessed directly but must be obtained through action of the legacy system. That, of course, makes it difficult to migrate away from the legacy system. The difference between legacy systems and the repository in this area is that the repository is designed to be the common data access mechanism for any application-level service that needs the information. It does not make application data specific to and integrated with the application, as is the situation with a number of legacy applications. Migrating from one application development philosophy to another should be facilitated by the ability to tailor the access usage models of the repository while keeping the underlying information intact. In addition, the layer interfaces of any selected repository product should be well defined and standard. That allows the capabilities of any layer to be extended by adding appropriate functions and data structures without disturbing those that already exist. With that necessary topic having been addressed, it should be remembered that, at this time, we are not concerned with the actual implementation of any of the layers.
At this point in the discussion, the only concern is with the definition of the capabilities and functionality that must be provided in some manner. In addition, if history is any indication, vendors will invent innovative ways to provide the needed capabilities of the repository. It would be counterproductive to constrain possible implementations to any significant extent. However, some global issues need to be considered in a philosophical sense.
4.2.1 Physical storage layer
The bottom layer of most layered models refers to a physical realization or representation of the logical structures of the higher layers. In the repository model, this layer contains the physical storage of all repository information as provided by some persistent media (e.g., tapes, compact discs, cartridges). For current purposes, it is not necessary to decide if the medium is centralized or distributed (or even what those terms mean). It also is not necessary to define what type of medium is available for this function.
4.2.2 DBMS layer
The DBMS layer provides the ability to organize the storage of the model and metamodel information defined at the higher layers and to ensure the basic integrity of the stored information. That usually is accomplished through the inherent capabilities of a COTS DBMS product. Most available DBMS products include the ability to back up data, to perform multiphase commits, and to detect some types of errors along with appropriate housekeeping functions. For the purposes of layer 2, the DBMS can use any defined data organization method (or combination of methods), including hierarchical, network, relational, and object formats. While one or more DBMS organization formats may be better suited to the basic purpose of the repository than others, that is a separate topic subject to a great deal of emotion and disagreement (e.g., object versus relational formats).
The organization method will not be directly visible to most operational users (administrators probably will need to understand the DBMS formats). The fundamental requirement is simply that the DBMS
layer communicates properly with the asset metamodel layer and other higher layers as necessary. Assuming that has been properly defined and implemented, it is not necessary to go into more detail on DBMS organization. Sometimes the functions of layers 1 and 2 are themselves labeled a repository because of their ability to create, store, retrieve, and delete data, the so-called creation/retrieval/update/deletion (CRUD) functions. In that sense, they do form a primitive type of repository. The format of the DBMS organization determines the structure of the models and metamodels of the upper layers. For example, the structure of a relational DBMS consists of tables and the ability to manipulate their rows and columns in some predefined ways. The specific table structure that is defined provides implicit upper-level models, and the user view must also be consistent with the table format and access mechanism (e.g., Structured Query Language, or SQL). That can be restrictive and generally requires a great deal of detailed knowledge on the part of the user concerning the specific products used to implement layers 1 and 2. Maintenance of the model information in the face of rapidly changing needs also can be a significant problem if only layers 1 and 2 exist, because the models do not exist in explicit form. The major purpose of the explicit models and metamodels in the upper layers (layers 3, 4, 5, and 6) of the conceptual model is to facilitate the use of the repository by:
§ Providing a higher-level modeling capability suited to the needs of the assets;
§ Hiding the details of the specific mechanism used to store the data.
From those considerations, it is evident that layers 1 and 2, while certainly necessary for the repository to function correctly, do not by themselves provide the capabilities needed for the integration and management of the assets. That result is reinforced as the discussion proceeds.
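The restriction can be made concrete with a small sketch. With only layers 1 and 2, a user must know the table layout and SQL; an explicit upper-layer model hides that detail behind an asset-oriented call. The table name, sample data, and `get_asset` function are hypothetical, and SQLite merely stands in for whatever DBMS product implements layer 2:

```python
import sqlite3

db = sqlite3.connect(":memory:")
# The table *is* the implicit model when only layers 1 and 2 exist:
db.execute("CREATE TABLE business_rule (rule_id TEXT, body TEXT)")
db.execute("INSERT INTO business_rule VALUES ('BR-1', 'orders over 10k need approval')")

# Layers 1 and 2 alone: the user must know both the table layout and SQL.
row = db.execute("SELECT body FROM business_rule WHERE rule_id = 'BR-1'").fetchone()

def get_asset(asset_type, asset_id):
    """Sketch of an upper-layer function: callers name the asset, not the
    storage format. (Hypothetical interface, shown for contrast only.)"""
    return db.execute(f"SELECT body FROM {asset_type} WHERE rule_id = ?",
                      (asset_id,)).fetchone()[0]

assert get_asset("business_rule", "BR-1") == row[0]
```

If the table structure later changes, only `get_asset` must change; callers of the upper layer are insulated, which is the maintenance benefit the text describes.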
4.2.3 Asset metamodel layer
The purpose of layer 3, the asset metamodel layer, is to store, update, and retrieve information about a given asset in the context of the model or metamodel. The asset exists in the repository only in the form of the model. Asset metamodel information about a business rule, for example, is discussed in Chapter 5. Adding a new asset to the repository would require the administrator to develop a template for the model/metamodel and define it to the repository. In a robust repository, the definition would take place at the model layer, and the internal repository functions would ensure that the DBMS layer would be updated with the proper structures to allow the new asset information to be accommodated. Changes to the model/metamodel structure for an existing asset would also be performed at the model layer. Administration of the repository is discussed in additional detail in Section 4.3.3.
4.2.4 Asset class metamodel layer
The asset class metamodel layer contains information about the entire class of each asset represented. This fourth layer has a number of purposes, including the following uses for the model information:
§ It partitions the class of assets into segments designed for the management of complexity.
§ It allows a set of assets to be selected and manipulated as a group according to some specified criteria.
§ It provides the mechanism for the definition and utilization of enterprise (class) models.
§ It provides a basis for reasoning and analysis about the suitability and effects of individual assets in the class.
The purpose of the metamodel for asset classes is similar to that of asset metamodels, and the type of information contained in each would be similar, at least in purpose.
Because the asset metamodel information already has been illustrated, it is not necessary to include here an additional depiction for the class metamodel. Although other aspects of this layer could be defined, the ones presented in this discussion should provide an adequate indication as to the purpose and type of services performed by this layer.
4.2.5 Asset/class interaction metamodel layer
In many cases, models are defined that require multiple assets to work together to provide the required structure. The definitions of the required interactions are contained in layer 5, the asset/class interaction layer. This layer can provide interaction definitions of entire asset classes or individual assets, as needed. In a sense, this layer provides for the development of E-R diagrams. However, the classical type of E-R diagram is limited in the types of relationships it can easily illustrate. The types of relationships possible in the interaction layer can be complex and exist in more than two dimensions. This layer also enables the important concept of referential integrity. If a change is proposed for an individual asset or an entire asset class, it is necessary to determine what other assets or classes could be affected by the change. The relationships that would enable that determination would be defined by models in this layer. Whether a given change is acceptable would depend on an analysis that examines the proposed change and the specific assets or classes affected. The questions raised in Chapter 2 concerning the complex interactions of the asset management components would be answered through the structures and functions of this layer. For example, the relationship of business rules with the asset finance component would be defined in this layer. Another example of layer use would be the definition of the interaction of scenarios with processes to produce process map traces.
In the latter case, the relationship between scenarios and processes depends on the specific representation of the process employed. In addition, the different scenario-process relationships have relationships and interactions among themselves! Without the repository, such complex interactions would be difficult to define and utilize. The repository, however, makes it possible to use model complexities to greatly simplify the activities of the user. That ability is the real strength of the repository concept and the main reason that the repository is placed as the central focus of asset management. That possibly counterintuitive effect of having complex models provide for easier usage is further explored in Section 4.2.6.
4.2.6 Usage metamodel layer
The usage metamodel contains information that determines how a given user, for a specific use, will perceive and access the data defined and contained in other layers of the repository. For any particular combination of user and use, an associated model will be defined in this layer. This model will define the type, form, and sequence of data access for the purposes of creation, retrieval, updating, and deletion. While these so-called CRUD functions are a part of any data storage mechanism, the ability of the repository, and specifically the usage metamodel layer, to tailor the interaction to the needs of the user and use elevates these functions from a low-level, data-oriented view to one at the same level and orientation as the user. In performing this service, the usage model is free to incorporate as much or as little of the structures and relationships defined in the models of the other layers and substitute new relationships as desirable. As an example that will illustrate the value that can be added by the upper four layers, assume that a tool has been designed to determine some quality measure of a project.
As a part of that measure, the tool needs to identify the manager of the project and the assigned staff members. Their performance, using some predetermined metrics as measured on previous projects, is then incorporated into the quality model. Further assume that any related projects also have to be taken into account.
Figure 4.4 illustrates the situation from a repository perspective. For clarity, the metamodel information has been omitted, but in an actual situation, that information, as previously defined, would be incorporated as part of the definition for the relevant models in each layer. Two types of assets are involved, employees and projects. The class model for employees is the organization chart, and the class model for projects is the set of related projects. The individual models for each of the assets are not shown but can be considered to contain any information needed to determine the quality contribution.
Figure 4.4: Example of a quality tool usage model.
The tool usage model is as follows. For any project of interest as selected by the tool, the manager of the project and assigned staff members are provided by the repository as well as the set of associated projects. The only information that needs to be supplied by the tool is the project of interest. The repository tool view determines what information is to be returned. This model is incorporated into layer 6 of the repository. The usage model incorporates layer information in the following manner:
§ The employee model: quality-related characteristics;
§ The employee class (organization chart) model: type of employee (manager, staff);
§ The project model: quality-related characteristics;
§ The project class model: related projects;
§ The employee/project interaction model: employees assigned to a given project.
Note that the DBMS layer and the physical storage layer are not part of the view because they are considered to provide only internal services to the repository and are not visible to the users. Again, the exception to this statement would be for administrative users, who may need to view the models of the two lowest layers. A more complex but probably more realistic example is illustrated next. Because of the complexity involved, only the major points are examined. In this example, the user again is a tool. The tool in this case is designed to select, probably with some human intervention, the most appropriate means of implementing a task for a given process. The required repository models for layers 3 through 5 are illustrated in schematic form in Figure 4.5.
Figure 4.5: Example of task selection.
Six types of assets are involved in this example:
§ Process tasks;
§ Business rules;
§ Scenarios;
§ COTS products;
§ Legacy systems;
§ Reusable components.
The class models for each of those assets also are illustrated but not defined in detail. They can range from very formal and structured, as would be the case for business rules, to very loose and ad hoc, as would ordinarily be the case for legacy systems. The relationships between these assets and their classes can be complex. For example, COTS products and legacy systems must be compatible with the reusable component class model structure, but they also can have class models of their own. As in the quality tool example, the usage model considers only those aspects of the lower layer models specifically needed to provide the needed function. This model, again shown in schematic form, is illustrated in Figure 4.6.
Figure 4.6: Example of task selection usage model.
Note that only certain parts of the asset and class models are used and that their relationships are somewhat redefined to meet the needs of the user. Different parts of the models could be used by other user views. Also, although it is difficult to determine by the illustration, the interactions between the models as defined in the asset/class interaction layer are used as part of the determination as to which assets are presented to the tool for consideration. The ordering of the presentation generally is determined by the usage model.
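Returning to the simpler quality tool example, the layer 6 usage model can be sketched as a view function that accepts only the project of interest and decides which lower-layer information comes back. The dictionaries standing in for the asset, class, and interaction models are invented sample data:

```python
# Hypothetical lower-layer contents (asset, class, and interaction models):
employees = {"ann": {"role": "manager", "quality": 0.9},
             "bob": {"role": "staff", "quality": 0.8}}
projects = {"p1": {"quality": 0.7}, "p2": {"quality": 0.6}}
related_projects = {"p1": ["p2"]}     # project class model: related projects
assignments = {"p1": ["ann", "bob"]}  # employee/project interaction model

def quality_tool_view(project_id):
    """Layer 6 usage model sketch: the tool supplies only the project of
    interest; the view decides which lower-layer information is returned."""
    staff = assignments.get(project_id, [])
    return {
        "manager": [e for e in staff if employees[e]["role"] == "manager"],
        "staff": [e for e in staff if employees[e]["role"] == "staff"],
        "project": projects[project_id],
        "related": related_projects.get(project_id, []),
    }
```

Note how the view draws on all four upper-layer models yet exposes none of their structure to the tool, which is the simplification the text attributes to the usage metamodel layer.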
4.3 Access methods
Now that all the layers of the conceptual model have been discussed, it is necessary to determine how the information stored in the repository can be accessed. Three access methods are defined for the repository: one production or operational method and two management methods. Although the access methods are not an inherent part of the repository conceptual model, they are important to an understanding of how the repository is used and managed. To that end, each access method is described briefly in the following sections.
4.3.1 Production method
The main access method is the production or operational method. This method is the one employed by production users of the repository, which can be humans, intelligent agents, operational programs, automated tools, or other similar assets that need to utilize repository information in performing their defined functions. The main requirement of the production users is to obtain and utilize the information contained in the repository in the optimal form for the specific function intended. In addition to classifying production users according to their implementation types, these users also can be classified according to their functions within the enterprise. Some specific production user functions are as follows:
§ Support of an externally focused enterprise process. This type of process is oriented toward serving a customer need such as order entry or customer service.
§ Direct support of an internally focused enterprise process. This type of process is oriented toward providing an enabling service such as preventative maintenance or purchasing.
§ Support of enterprise functions not usually considered to be process driven. These functions include research and development or the establishment of a corporate vision and strategy.
§ Support of the other components of the asset management system. Repository requirements are specified in the individual discussions of these components.
In any of those cases, access is obtained through a usage metamodel defined for the specific production user and one or more of its associated information requirements. Of course, if appropriate, more than one production user can employ the same usage metamodel. In that case, the metamodel information could indicate the priority or other resolution to simultaneously requested updates of the same information. That is always a problem that requires some amount of attention to ensure data integrity. Usually one global solution is implemented for that difficulty, which sometimes can present a problem when there are a large number of types of different users and functions. For the repository model discussed in this chapter, the usage metamodel structure accommodates different update resolution strategies and allows any conflict resolution to reflect the individual needs of the users and their functions. It should also be noted that production users do not necessarily have to be members of the enterprise that owns and maintains the repository and its contained information. They could be employees of other enterprises that need access to repository data to
provide a service to or receive a service from the enterprise. That type of use certainly would require appropriate security data. However, this type of use could be of significant advantage to both the enterprise and its vendors and customers.
4.3.2 Life cycle management
All the considerations presented in the life cycle management discussion in Chapter 3 apply to the automation assets, including the repository itself. Two components of the life cycle, configuration management (administration) and operations management, are discussed briefly in the specific context of the repository.
4.3.3 Configuration management (administration)
The content management access method is used by the administrators of the repository to create, update, and delete models but not the individual information about the assets that conform to those models. The creation, retrieval, updating, and deletion of specific asset information is performed by the production access method, discussed in Section 4.3.1. Can new models be created by production users? That is always a key question asked to indicate the flexibility of the repository. The answer to the question must be yes, but with an important caveat. When the production user is performing a model-changing function, it must be through the content management access method. That consistency is required to help ensure that changes are always performed in the same way, using the same repository features regardless of the user class performing them. Although production users should have the ability to perform content management functions, the use of that ability should be carefully controlled. If that is not done, the repository easily can become cluttered with outdated information and require more resources than otherwise would be necessary. In addition, administration of the repository is a skill that needs experience and training to obtain an effective and efficient repository function.
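The caveat above (model-changing operations always go through the content management access method, whatever the user class) can be sketched as follows. The `Repository` class and its method names are hypothetical:

```python
class Repository:
    """Sketch: model changes are performed only through the content
    management access method, regardless of the user class."""
    def __init__(self):
        self.models = {}
        self.admin_log = []  # every model change is recorded the same way

    def content_management_define(self, actor, model_name, definition):
        # The single path for model changes, so they always happen the
        # same way, using the same repository features.
        self.admin_log.append((actor, model_name))
        self.models[model_name] = definition

    def production_define_model(self, actor, model_name, definition):
        # A production user *may* create a model, but only by delegating
        # to the content management method, never by writing directly.
        self.content_management_define(actor, model_name, definition)

repo = Repository()
repo.production_define_model("order_entry_tool", "scenario",
                             {"fields": ["actor", "steps"]})
```

Routing both user classes through one method is what makes the controlled, auditable behavior the text calls for possible.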
The key to successful repository administration is the ability to define the models in terms of the layer that contains the model. For example, if a new asset model is being added to the repository, it should be possible to define the model directly by its structure and attributes. The repository, through an internal function, should then be able to convert that definition into the structures needed by the underlying DBMS storage mechanism. This type of automated functionality allows the administrator to concentrate on the proper definition of the model rather than spending time and energy converting the model to possibly hostile storage constructs. From a practical point of view, however, this access method may also need access to the DBMS layer to allow the administrator to tune response time or storage parameters. The goal is to minimize such low-level intervention.

4.3.4 Operations management

Because the repository provides a service designed to be employed by multiple users, its operational status is of critical importance. Adequate response time, throughput, error detection and recovery, and so on must be provided. That requires that the status of the repository be continuously monitored and corrective action taken as required. In addition, security in the form of access restrictions, user verification, and information integrity must be provided. All those activities are performed by means of the operations management access method, which utilizes and interacts with an appropriate set of internal functions. In many cases, the major user of the operations management access method is a management application of some sort. That is especially true when the entire suite of applications and support functions, the servers on which they are implemented, and the network that interconnects them are being managed as a unified asset.
When the management program detects a problem that could be related to the repository, it may attempt to implement a prepackaged solution or notify a human that a condition exists
that needs investigation. In the latter case, the human then becomes the user of the access method. The information obtained or utilized by the operations management activity should also be modeled and stored in the repository so it can be treated the same as any other repository information. Operations information is not only useful in performing real-time operations activities; it is also needed for a variety of other purposes, such as capacity planning and quality determinations. Because operations management is an important component of asset management, it applies to many assets other than the repository itself. However, as is the case with all the components, the fundamental principles are the same regardless of the asset or assets to which they are applied.
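As an illustration of the automated model-to-storage conversion described in Section 4.3.3, the sketch below shows how an administrator-supplied model definition (structure and attributes only) might be translated by an internal repository function into DDL for the underlying DBMS. This is a minimal, hypothetical sketch; the model contents, attribute names, and target SQL dialect are all assumptions, not taken from the book.

```python
# Hypothetical sketch: the administrator defines an asset model by its
# structure and attributes; an internal repository function converts that
# definition into the DDL needed by the underlying DBMS. All names below
# are illustrative.

MODEL = {
    "name": "software_component",
    "attributes": {
        "component_id": "INTEGER",
        "name": "VARCHAR(80)",
        "version": "VARCHAR(20)",
        "owner": "VARCHAR(80)",
    },
    "key": "component_id",
}

def model_to_ddl(model):
    """Translate a model definition into a CREATE TABLE statement."""
    cols = []
    for attr, sql_type in model["attributes"].items():
        suffix = " PRIMARY KEY" if attr == model["key"] else ""
        cols.append(f"    {attr} {sql_type}{suffix}")
    return f"CREATE TABLE {model['name']} (\n" + ",\n".join(cols) + "\n);"

print(model_to_ddl(MODEL))
```

The point of the sketch is the division of labor: the administrator works only with the model layer (the `MODEL` dictionary), and the conversion to storage constructs is mechanical.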
4.4 Implications

The major implications of a repository are the need to keep the information current and available and to ensure that its use is not being circumvented. If either of those needs is not met, the effectiveness of the entire corporation is compromised because of the central place of the repository in the management of corporate automation assets. Adequate resources must be provided, and appropriate processes defined and implemented, to assure users that the repository contains the latest versions of all needed information. Any attempt to circumvent the use of the repository on efficiency or other grounds should be stopped quickly. If use of the repository is considered an impediment, the reasons need to be investigated and any indicated changes made rapidly. If the information in the repository is robust and current, circumvention should not be a significant problem. At startup, some pressure may have to be exerted on the responsible organizations to provide the required set of information. There is always a tendency to produce local optimizations rather than the global optimization needed by the enterprise as a whole. Only the repository can eventually facilitate and support that global optimization.
Chapter 5: Business rules

Overview

The term business rule is yet another example of a term that is in common usage but for which no agreed-on context or definition exists. The term is applied in a number of different ways in a wide variety of situations, which makes it almost impossible to understand how the term is being used without a detailed knowledge of the context of the discussion. While this chapter cannot solve that problem, it can provide an overall self-consistent framework through which an examination of the concept can be structured. It will also facilitate specification of the interface and interaction of the business rule concept with other concepts needed in the development of a methodology for process implementation. References to the term business rules in later chapters are assumed to conform to the model defined in this chapter. Before delving into the details of business rules and the development of an appropriate model, it is necessary to determine the general context in which this discussion will occur. The easiest way to accomplish that is to ask and answer four questions:
§ Why are enterprises interested in the concept of business rules?
§ What is the general context for the existence of business rules?
§ What is meant by the term business rule and what does one look like?
§ How are business rules utilized in the enterprise?
Once those questions are answered (at least for the purpose of this presentation), the remainder of the discussion centers on the structure and utilization of business rules. The structure emphasis will be on the asset class model because, as will be shown, the asset model must be very general to accommodate the many possible variations of business rule forms.
5.1 Enterprise interest

The interest in business rules results from some promises that, rightly or wrongly, have been associated with such rules. Those promises have been that business rules will:
§ Enable nontechnical personnel to define the operating principles (e.g., control, algorithms, policies, procedures) utilized in business automation functionality (applications);
§ Enable the operating principles to be changed quickly without the need for programming or reprogramming applications;
§ Enable the analysis of the operating principles from a global perspective to determine if they are complete, consistent, and nonconflicting;
§ Ensure that different applications utilize the same operating principles as needed for the functionality involved;
§ Ensure that the operating principles are in conformity with the desired way for the enterprise to operate.
It has always been difficult to translate business needs into software that can aid in accomplishing the intended function. Historically, a solution to that difficulty has been approached in two entirely different ways: one based on job function and the other based on language. The first approach was to define an organization position specifically intended to provide the translation from business needs to software requirements and specifications: the traditional system analyst position. The individual filling that role was supposed to be proficient in both business functions and software development technology. Unfortunately, most people filling the role were more technically proficient than business oriented, and the translation problem continued to be difficult. Instead of using one person, the system analyst, to provide the translation, most current
requirement generation efforts employ a team approach. That tack works better, but the problems remain. The second approach was the specification of languages, such as Fortran and Cobol, that were intended to be used directly by the individuals needing the software, without the need for technical professionals to produce the requirements and eventual implementation. This effort continues to the present and includes fourth-generation languages and graphical or visual languages, such as Visual Basic. Although this approach is attractive in concept, the realization that "programming is programming" eventually sets in. Although languages may change, the act of programming always requires specialized knowledge if the problem to be solved requires a moderately complex solution. Eventually, software engineers take over the development function. The latest effort to eliminate the business-to-technical translation problem is the concept of business rules. The idea is to have the business people specify in some manner the basic principles by which they want to operate the business. The principles, or rules, would then be automatically translated into software or operational parameters. The development of business rules would have other advantages for the enterprise. As the business changed, the underlying rules could be changed, in turn changing the supporting software. In addition, the rules would expedite reasoning and analysis about the business and its intended operation, making explicit many policies and procedures that were known only implicitly. The business rule approach is a variation on the language-oriented approach and ultimately will have the same problems as all previous attempts using that approach. In addition, unfortunately, the concept, while clear in purpose, suffers from many additional problems. The remainder of this discussion is an overview of business rules.
It includes the identification of inherent problems and presents approaches to realizing some of the promises of the concept.
5.2 General context

To provide a framework for this discussion, an entity called a statement is defined. A statement does not have to be in English (or any other natural language) text. It could be a mathematical expression, a picture, a chart, a diagram, or any other recognizable form of expression. However, for simplicity of presentation, the examples used here are in the form of text sentences. Statements, of whatever form, are used as the vehicle for determining the context and the scope of business rules in the enterprise. A statement can be considered to be an articulated view of some aspect of a business from the enterprise perspective. Among other possibilities:
§ The view can provide a concise representation of a large complex area of the business. (Example: The manufacturing operation has had no time-off injuries in the last year.)
§ The view can explicitly articulate an otherwise ill-defined concept. (Example: This company will be the sales leader in its industry in 5 years.)
§ The view can aggregate a large number of individual items. (Example: Most employees own stock in the corporation.)
§ The view can examine one component of a larger item. (Example: Joe Smith and Jane Jones have received the yearly achievement award.)
§ The view can quantify items that are otherwise expressed only on a qualitative basis. (Example: The good health of our employees is responsible for a $100,000 increase in profits last year.)
§ The view can indicate desired business behavior. (Example: Selling new services should be performed by all employees as the opportunity presents itself.)
Enterprises make statements (indicate views) about themselves for many purposes: visions, strategies, goals, milestones, policies, procedures, and so on. The statements are intended to meet a variety of purposes in the enterprise. Statements can be used for planning, for internal and external communications, as status indicators, and to specify
and promulgate guiding principles. Although they are only a small fraction of all the information available about an enterprise, statements are, nevertheless, important in understanding how the enterprise views itself and intends to behave. It also should be evident from this discussion that enterprises are complex organisms and do not just consist of bundles of statements, as some of the literature would lead you to believe.
5.3 Definition

Beyond the definition of a statement as an articulated "internal" view of some aspect of the enterprise, statements have few, if any, restrictions. They can be written or oral, formal or informal, widely known or restricted, possible or impossible, true or not true, passive or active, or any condition in between. There is one class of statements, however, that can be further defined and is of considerable interest: the class of statements that will be loosely defined as rule statements. Rule statements, or business rules as they are commonly called, are defined as follows: "Business rules are statements that articulate an operating principle by defining or constraining some aspect of the business." If a statement can be found to have that property, it is a business rule (or rule, for short). Rules do not merely reflect what has already occurred or predict a status at some future time; they indicate some operational aspect of the enterprise. Current popular uses of rules are oriented toward the specification of low-level constraints on software and data, such as trigger conditions and E-R cardinality. The pressing need is to develop a comprehensive approach to rules, including the delineation of policy and procedures. Business rules can be used to define the operation of the enterprise at all levels. The simple definition of a rule as a constraining statement does not provide enough structure for determining what constitutes an effective rule and how it should be utilized. Further analysis is required to provide sufficient information for the practical exploitation of rules. Making rule statements without some expectation that they will be effective in some aspect of running the enterprise is not a particularly useful exercise. From a terminology viewpoint, any statement intended to be a rule (e.g., a statement intended to impose a constraint) is considered to be a rule.
However, a distinction must be made between a well-formed rule that can provide a benefit to the enterprise and a rule that is not well formed and cannot serve a useful function. To provide a means of identifying a well-formed rule, a number of characteristics for such a rule have been determined to be required. If a statement conforms to those characteristics, it is well formed. If it fails to meet any one of them, it is not considered to be well formed. The major characteristics of a well-formed rule are listed here, and each is then examined, along with examples that motivate the need for the characteristic. A well-formed rule must have the following characteristics:
§ It must be able to be followed.
§ It must be consistent.
§ It must be able to be articulated.
§ It must be able to have its compliance measured.
§ It must have constraints.
Note that the concept of well formed is not based on agreement with a specified syntax. It is based only on the potential to serve a purpose in the enterprise. The issue of agreeing or disagreeing with a specified syntax is an implementation problem and is not of immediate concern in this discussion.
§ A well-formed rule must be able to be followed. If a rule statement can never be realized, it cannot be considered a well-formed rule even though at first glance it appears to be one. If a proposed rule states that "passengers on flights between St. Louis and Chicago will not be served meals" and the airline has no flights between St. Louis and Chicago, the rule is not well formed. It has no potential to be of use in the enterprise. Those types of rule problems usually result from rules not being kept current, a subject discussed in more detail later. Another example is a proposed rule that "any new product developed will not become obsolete for 4 years." Unless it is a monopoly, an enterprise usually cannot control the length of time a product remains viable; that is determined by the competition and customers. The enterprise cannot apply the statement; therefore, it is not a well-formed rule.
§ A well-formed rule must be consistent in its application. Each time a rule is applied under the same circumstances, the same result should occur. Violations of this characteristic usually occur when a statement is not specific enough and other factors cause changes in the outcome of applying the statement. For example, consider the proposed requirement rule that "employees will be given their birthday off in addition to normal time not worked." If an employee's birthday does not fall on a normal workday, that is not possible. Some other day (unspecified) must be taken off, or perhaps the employee will not get an additional day off, also violating the rule. Applying the proposed rule to different employees and in different years will produce different results. The statement is not a well-formed rule but could be changed into one by incorporating additional constraints or changing its strength class.
§ A well-formed rule must be able to be articulated. A well-formed rule must contain enough explicit detail so it can be understood and applied. It may be written or unwritten, but a general "feeling" or "understanding" is not enough. For example, the proposed guideline rule that "all employees should be happy in their work" does not contain enough detail for one to be able to understand what "being happy" means and, therefore, how to apply it. Although variations of that statement are issued periodically by many organizations with good intentions, they are devoid of useful content from the perspective of a rule and cannot even be considered general guidelines!
§ A well-formed rule must be able to have its compliance measured. If a proposed school rule is that "no student will watch over 2 hours of TV a day" and there is no device to monitor TV watching (including willing parents), this is not a rule. Its application cannot be measured. It could be an excellent statement of intention and one worthy of making. It just cannot be considered a well-formed rule.
§ A well-formed rule must have constraints. The proposed rule that "this enterprise will do anything it takes to become a leader in its industry" is not a well-formed rule. There are no constraints (directions) on the actions the enterprise is willing to take. There may be nothing wrong with the statement from a motivational point of view, but it is not a well-formed rule.
If a proposed rule fails one of the defining characteristic checks, in many cases it can be rewritten so that it retains the necessary characteristics. That usually involves one or more of the following: (1) changing the strength class; (2) adding additional constraints; (3) rewriting the rule to obtain agreement with the characteristics. For simplicity in the following discussion, it is assumed that the term rule means a well-formed rule and that rules that are not well formed have been eliminated or changed to become well formed. When a statement is considered to be a rule, the strength of the rule must be determined. The strength of a rule is the rigor with which the rule needs to be followed. In that regard, the following additional questions are of interest. Is a rule something that must always be followed? Are exceptions to rules allowed? Under what conditions? Is it a rule or only a suggestion? Depending on the answers to those questions, rules can be classified into three strength classes:
§ Requirements. A requirement specifies a condition that must be followed with no exceptions.
§ Standards. A standard specifies a condition that should be followed, but a few exceptions are allowed under controlled circumstances. The conditions that caused the need for the exceptions must be explicitly identified.
§ Guidelines. A guideline specifies a suggested course that generally should be followed, but other approaches may be used as necessary or desirable. The reasons for not following the guideline should be identified.
No matter what strength class a rule is in, to be well formed it must conform to the characteristics outlined above. If it does not, it is not a rule that can provide the enterprise with an associated benefit. Before concluding the definition of a rule, an additional aspect must be considered. Although most of the discussion so far has been on the definition of rules within an enterprise, that usage should be made explicit to distinguish it from the use of rules in other venues. Therefore, the following definition is made: A business rule is a rule applied to some aspect of a business enterprise. That is enough to distinguish business rules from personal rules, religious rules, and societal rules (all of which can have the same general rule characteristics defined for business rules). It should be evident that the significant difficulty in defining business rules lies not with the word business but with the term rule.
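The five well-formedness characteristics and the three strength classes can be sketched as a simple data structure. Note the hedge: whether each characteristic actually holds for a given statement is a human judgment, so the sketch merely records those judgments as booleans; the class and field names are inventions for illustration, not a notation from the book.

```python
# Hedged sketch: the five well-formedness characteristics from Section 5.3
# recorded as booleans, plus the three strength classes. Judging whether a
# characteristic holds remains a human task; the code only aggregates.
from dataclasses import dataclass
from enum import Enum

class Strength(Enum):
    REQUIREMENT = "must be followed, no exceptions"
    STANDARD = "should be followed; controlled exceptions allowed"
    GUIDELINE = "suggested course; deviations should be identified"

@dataclass
class RuleStatement:
    text: str
    strength: Strength
    can_be_followed: bool
    is_consistent: bool
    is_articulated: bool
    compliance_measurable: bool
    has_constraints: bool

    def is_well_formed(self):
        """Failing any one characteristic makes the rule not well formed."""
        return all([self.can_be_followed, self.is_consistent,
                    self.is_articulated, self.compliance_measurable,
                    self.has_constraints])

# The airline meal example: the rule cannot be followed (no such flights),
# so it fails the well-formedness check regardless of its other qualities.
meal_rule = RuleStatement(
    "Passengers on flights between St. Louis and Chicago "
    "will not be served meals",
    Strength.REQUIREMENT,
    can_be_followed=False,
    is_consistent=True, is_articulated=True,
    compliance_measurable=True, has_constraints=True)

print(meal_rule.is_well_formed())
```

A rule that fails a check could then be repaired as the text suggests: changing its strength class or adding constraints, and re-evaluating the characteristics.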
5.4 Utilization The main interest in business rules and the original purpose for defining the concept is the hope that these types of statements can be defined by individuals experienced in the needs of the business and then directly used in the operation of the enterprise without having to be interpreted by some information management organization. It is thought that such direct specification would reduce the possibility of error and misinterpretation while providing faster implementations and changes. However, it must be indicated that rule specification is difficult. § Individual rules need to be potentially useful (i.e., well formed). § Rule sets should be complete, consistent, and nonconflicting (i.e., form an ideal class). § Rules need to be in conformity with the way in which the various stakeholders want the business to function. § Rule capture from legacy systems may yield rules that are not well formed, that are not consistent, or that are not in agreement with the enterprise philosophy because the entire context of the system is not available. § Proposed new rules need to be motivated and understood in the context of the rules already in place. The definition of business rules in Section 5.3 does not constrain their application to any specific aspect of an enterprise or level of detail. As long as the defining characteristics have been met, the statement is a rule. To utilize business rules as an integral part of business operation, some attention needs to be given to those other considerations. Rules that deal with different aspects of the enterprise or are specified at different detail levels may need to be handled in distinct ways. In addition, enough information concerning each rule must be available to allow it to be individually identified and its function and utilization understood. Although the following discussion provides one possible method of specifying, categorizing, and utilizing business rules, undoubtedly there are many others. 
As long as the basic definitions are followed and the defined structure proves useful to the enterprise for which it is developed, many analysis and synthesis methods can be successfully employed.
5.5 Metamodel structure

The business rule metamodel attributes that are utilized as the basis for further discussion are:
§ Creation date;
§ Name;
§ Description;
§ Purpose;
§ Source;
§ Owner;
§ Category;
§ Scope;
§ Detail level;
§ Strength;
§ Priority;
§ Persistence;
§ Format;
§ Implementation method;
§ Compliance monitor;
§ Domain specific.
Other metamodels certainly are possible, but the one utilized here will effectively serve as a vehicle for discovering the opportunities and problems associated with the use of business rules as an integral part of business operation. That structure, along with some refinements discussed later, can be considered the core of a comprehensive business rule metamodel.

5.5.1 Rule identification

The first four attributes of the metamodel structure, creation date, name, description, and purpose, provide a unique identifier for the rule. The meaning and values of each attribute are generally self-explanatory. A separate ID attribute is not specified in this structure but, depending on need, could certainly be added. Using only the values of the four identifier attributes, it should be possible to define a rule with sufficient detail to distinguish it from all other rules. Although additional information is provided by the values of the other attributes, they should not be needed for the identification of a unique rule. The remaining attributes and values can, of course, be used to search for rules with specific characteristics.

5.5.2 Rule advocacy

The attributes of source and owner provide an indication as to the original and continuing advocates for keeping the rule. All rules must have a business reason for their creation and continued existence. As part of the life cycle process, that reason must be examined periodically and verified as to its continued validity.
As a result of their direct involvement with the rule, the creator and the owner are the primary advocates and must be consulted if any alteration in the rule status is contemplated. The source of the rule and the ownership of the rule must be from a business perspective. The source and the current owner can be the same or they can differ, and the owner can change periodically for many business reasons. The rule administrator is not necessarily the owner, and the administration function is usually considered from a technical perspective. If the source or the owner for a rule is not specified and cannot be easily identified, that rule should be considered a candidate for elimination. All these conditions should be specified as part of the rule life cycle activities.
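The sixteen metamodel attributes, the four-attribute identifier, and the ownership check described above can be sketched as a record type. This is an illustrative sketch only; the field types and the `is_orphaned` helper are assumptions layered on the text, not structures defined by the book.

```python
# Illustrative sketch of the business rule metamodel of Section 5.5.
# Field names follow the sixteen listed attributes; the first four form
# the unique identifier. Types and defaults are assumptions.
from dataclasses import dataclass, field

@dataclass
class BusinessRuleRecord:
    creation_date: str
    name: str
    description: str
    purpose: str
    source: str = ""
    owner: str = ""
    category: str = ""
    scope: str = ""
    detail_level: str = ""
    strength: str = ""
    priority: str = ""
    persistence: str = ""
    format: str = ""
    implementation_method: str = ""
    compliance_monitor: str = ""
    domain_specific: dict = field(default_factory=dict)

    def identifier(self):
        # Only the first four attributes are needed to identify a rule.
        return (self.creation_date, self.name, self.description, self.purpose)

    def is_orphaned(self):
        # A rule whose source and owner cannot be identified is a
        # candidate for elimination (Section 5.5.2).
        return not (self.source or self.owner)
```

The `is_orphaned` check mirrors the life cycle guidance: if neither a source nor a current owner can be identified, the rule should be reviewed for elimination.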
5.5.3 Rule classification taxonomy

In a very real sense, any classification scheme is somewhat arbitrary, even though some means of classification is usually considered necessary to reflect the utilization and implementation of the rule within the enterprise. There is also another major reason for the classification of rules. In any reasonably sized enterprise, there likely are thousands of business rules. The intricacies of effectively creating, maintaining, using, reusing, and discarding the rules require some classification scheme regardless of the configuration management approach used. The purpose of the classification scheme is to reduce the number of rules that need to be considered at any one time. Many investigators have advanced and defended specific rule taxonomies, resulting in considerable debate and conflict. Most of the taxonomies are specific to the area and emphasis being advanced by the investigator and are usually defined to efficiently accommodate the needs of the proposed approach. Because the set of rules for an enterprise exists independently of any taxonomy, it is possible to define and utilize multiple taxonomies. Each taxonomy could be optimized to serve a specific purpose, such as configuration management or conflict analysis. One taxonomy could be business oriented, while another could be implementation oriented. Specification of a rule taxonomy as a separate degree of freedom, governed by the needs of the business and independent of the other defining rule characteristics, would accomplish the following:
§ Eliminate the unnecessary conflict over the definition of the ultimate rule taxonomy;
§ Allow different user groups to optimize their access to and use of the rules.
The price that must be paid for that flexibility is, of course, the additional time and effort to place each rule (either automatically or manually) into its proper place in every taxonomy utilized.
In addition, as has been previously stated, a robust repository is also necessary for the simultaneous definition of multiple taxonomies. For illustration purposes in this section, some specific taxonomy is necessary. A relatively robust taxonomy that interacts efficiently with the process implementation methodology discussed later is defined and utilized. It is defined in business-oriented terms and is matrix oriented. Although it effectively provides a comprehensive example of the information needed in a taxonomy, it is not intended that this classification scheme be considered as the only appropriate one. Many schemes are possible and may simultaneously coexist with each other and the one presented here. Three structure attributes, category, scope, and detail level, are used in this taxonomy. They provide the basis on which to classify the rule into a suitable category. Figure 5.1 indicates one method of specifying values for the category and scope attributes using a matrix structure. The rows of the matrix represent the specific aspect of the enterprise that the rule addresses. Although the rows may not provide a complete taxonomy, the author has yet to find a case where a proposed rule could not be reasonably placed into one of the rows.
Figure 5.1: Example of business rule taxonomy. The columns of the matrix represent the scope or sphere of influence within the enterprise. Each of the columns and rows can be further decomposed into smaller units as necessary for the understanding of a particular rule. For example, the organization unit row can be divided into the individual organizations of the enterprise. The degree of decomposition determines the detail level of the rule. Although not indicated in the figure, each matrix cell should be decomposed in both category and scope for as many levels as rules are reasonably expected to be defined. Because the amount of detail that can be produced in that manner is quite large, this process will not be explicitly performed in this presentation. 5.5.4 Rule operation The last seven attributes of the rule structure provide the operational conditions for the realization of the rule. Those conditions are specified by the strength, priority, persistence, format, implementation method, compliance monitor, and domain-specific attributes. A brief discussion of each of those attributes follows. The strength attribute was discussed in some detail in Section 5.3, so it is not considered further at this time. The interactions of that attribute with other operational attributes are examined later as necessary. The priority attribute is utilized when there is a rule conflict. Although in theory there should never be a rule conflict with rules covering the same area, in practice that is difficult to control because there usually are a large number of rules with many different originators and owners. The priority attribute indicates the desired outcome in a conflict situation. The simplest priority scheme is probably the values “required,” “where possible,” and “guideline.” Although better than nothing, that priority scheme generally is ineffective in resolving conflicts since the chance of conflicting rules having the same priority is relatively large. 
That is where reasoning and analysis over a given rule set become necessary. As with any attribute of this type, care must be taken to ensure that rules are not simply given the highest value possible by their originators. That requires good administration and configuration management functions.

The persistence attribute indicates the period during which the rule will be effective. There are many possible values for the persistence attribute:

§ From creation until explicitly deleted. Example: "From now on, all employees will enter the building through the south entrance only."

§ From creation until a specific occurrence, date, or time. Example: "Until the renovations are complete, all employees will enter the building through the south entrance only."

§ For a certain period. Example: "For a 2-week period starting this Friday, all employees will enter the building through the south entrance only."
§ During periods indicated by the state of a defined entity. Example: "When the red light is on, all employees will enter the building through the south entrance only."
In the examples, for better understanding, the period during which the rule is effective is included in the rule statement. That, of course, does not have to be the case. A rule can be stated without any indication as to its period of applicability. In that case, a specific indication of the effectiveness period must be included as the value of the attribute. If the period is unknown, that condition also should be stated.

The format attribute provides the style of the business rule. Styles could include:

§ A rule language;
§ Structured English (or other natural language) text;
§ A mathematical expression;
§ Freeform text;
§ A table or matrix (including the case of one cell);
§ Procedural code;
§ A combination of any of the above.

All those styles are useful in expressing business rules. Which form is most appropriate in a given situation depends on the level and purpose of the rule and the manner in which it will be used. A single style is not sufficient to represent all the different types of rules that are needed in the enterprise. Because of the desirability of comparing and reasoning about rules of differing styles, some means of translating to a common formalism is desirable. That need is discussed further in Section 5.8, which considers the use of a repository.

Each of the styles can provide for parameters whose values can be changed without changing the rule statement itself. In many cases, that parameterization can facilitate rule creation and management.

Currently, no standards are available for business rules, and that lack, unfortunately, extends to the definitions of style. Almost anything can be used, as determined by the creator of the rule. The enterprise needs—and should set—some internal standards in this area to be in a better position to effectively manage business rule usage. In many cases, the creator of the rule uses a style that must be converted into one or more other styles before it can be used.
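As a concrete illustration of the parameterization just described, the following sketch stores a rule in a structured, parameterized style and renders it in two of the listed styles. All names and the rule text here are illustrative assumptions, not taken from the book.

```python
# Hypothetical sketch: one business rule held in a structured, parameterized
# style; the parameter value can change without changing the rule statement.

RULE = {
    "name": "Entrance Rule",
    "template": "All employees will enter the building through the {entrance} entrance only.",
    "parameters": {"entrance": "south"},
}

def render_text(rule):
    """Render the rule in a structured-English style."""
    return rule["template"].format(**rule["parameters"])

def render_table(rule):
    """Render the same rule in a one-cell table/matrix style."""
    return [(k, v) for k, v in sorted(rule["parameters"].items())]

print(render_text(RULE))   # structured-English style
print(render_table(RULE))  # table style

# Changing a parameter value changes the rendered rule, not the statement.
RULE["parameters"]["entrance"] = "north"
print(render_text(RULE))
```

The point of the sketch is that the translation burden (here, `render_text` and `render_table`) sits outside the rule itself, which is what makes multiple target styles manageable.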
Products that can accommodate business rule inputs generally require their own specific style (remember the lack of standards). The use of many different products, each with individual style requirements, can create a difficult problem. Errors can easily be made in translating rules from one style into another. The need for many target styles complicates the testing and identification of error conditions in the application as a whole. With that said, it still is probably better to set a single standard for the creation of business rules and then translate them into the various formats needed rather than create them in different formats. In that way, all the rules have at least the same standard style(s), which facilitates their comparison and reuse. Rule styles other than those listed could also be defined, depending on the particular needs of an enterprise. The more structured the format and the more it is parameterized, the easier it is to automatically utilize the rule in the operation of the enterprise.

The implementation method attribute indicates the method(s) by which the rule will be accommodated. This attribute is key to the successful utilization of business rules because it provides an explicit relationship between the rule statement and the means for realizing it. By examining the proposed linkage, it can be determined whether the proposal provides the most effective method of rule realization and, if not, what the method should become. In many contemporary instances, rules are stated without any indication as to how they will be incorporated into the business. Without that explicit linkage, the use of business rules as an integral part of enterprise operation will fail. The major utilization methods include the following:
1. Specifically including or considering a rule in the development of manual procedures and practices;
2. Considering a rule as the requirements for a software development are being developed;
3. Specifically including a rule in the requirements for a software development;
4. Manually considering or incorporating a rule as part of an application design activity;
5. Entering a rule into a product and interpreting it at compile time;
6. Accessing a rule from a library and interpreting it at compile time;
7. Entering a rule into a product and interpreting it at run time;
8. Accessing a rule from a library and interpreting it at run time.

Each implementation method is illustrated as part of an overall rules system, as defined in Section 5.8. In general, the methods become more automatic and less error prone as listed from top to bottom. Depending on the strength of a rule, one implementation method may be more appropriate than the others. For a required rule, the desired method would be the eighth one in the list, directly accessing the rule and utilizing it at run time. If changes are needed, they could be made and the altered rule bound at the desired time. For a guideline rule, methods 1, 2, and 4 are probably most appropriate; the rule does not have to be followed exactly, although some type of fuzzy logic could be used to determine which rule would be utilized.

The value of the compliance monitor attribute, which is closely associated with the rule implementation method, explicitly defines how the rule realization is examined to determine if it produces its defined constraints. A good way to start is to associate one or more compliance monitor techniques with each implementation method. Others can be defined if needed. The numbers in the following list correspond with the numbers in the implementation methods list.

1. Use a design review of draft materials to determine if and to what extent applicable business rules are reflected in the document text.
2,3,4. Use a design review of requirements to determine if and to what extent applicable business rules are contained in the requirements specification. Perform a compliance audit of the finished software to determine if it accurately reflects the requirements, including those requirements associated with business rules. Test the finished system to determine if applicable business rules are being accurately reflected.
5,6. Test the product with suitable environment conditions designed to determine if it reacts properly to incorporated business rules. Test the product integrated with the entire application to determine if business rules are being interpreted properly.
7,8. Test the product with suitable environment conditions designed to determine if it reacts properly to incorporated business rules. Test the product integrated with the entire application to determine if business rules are being interpreted properly. Change a business rule parameter value or structure and retest to determine if the change is being implemented correctly.
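As a minimal sketch of implementation method 8 (accessing a rule from a library and interpreting it at run time), consider the following. The library design, rule name, and dispatch task are assumptions for illustration, not constructs from the book.

```python
# Sketch of implementation method 8: the task consults a run-time rule
# library instead of hard-coding the rule, so the rule can be rebound
# (changed) without recompiling the task.

class RuleLibrary:
    """A run-time business rule library (hypothetical minimal interface)."""
    def __init__(self):
        self._rules = {}
    def bind(self, name, fn):
        self._rules[name] = fn
    def apply(self, name, *args):
        return self._rules[name](*args)

library = RuleLibrary()
# Emergency response rule: at least two employees dispatched.
library.bind("min_crew", lambda crew: crew >= 2)

def dispatch(crew_size):
    """A dispatch task that checks the business rule at run time."""
    if not library.apply("min_crew", crew_size):
        raise ValueError("business rule violated: at least two employees required")
    return f"dispatching {crew_size} employees"

print(dispatch(2))

# The altered rule can be rebound at the desired time, without touching
# the task code at all.
library.bind("min_crew", lambda crew: crew >= 3)
```

The rebinding at the end is the property that makes method 8 attractive for required rules that may change: the task never needs recompilation.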
The domain-specific attribute, the final one in the metamodel, consists of any domain requirements that may arise from regulatory bodies, required standards, or similar sources. Requirements must be considered when the applicable domain is involved. For
example, assume this rule: "Insert the maximum number of ads into the bill envelope without increasing the amount of postage due." The domain-specific value for that rule could be a reference to the regulation that the post office uses to calculate the amount of postage to be paid.

5.5.5 Examples

Two examples of business rules with differing characteristics are shown in Tables 5.1 and 5.2. The rules are designed to illustrate the wide variety of rules that can be accommodated using the metamodel defined at the beginning of this section. Readers are invited to take any business rule with which they are familiar and (1) ensure that it is a well-formed business rule by determining if it has the required characteristics and (2) determine the specific characteristics of the rule by determining values for all the metamodel structure attributes.

Table 5.1: Emergency Response Rule Metamodel

Creation date: 1/1/95
Name: Emergency Response Rule
Description: In the event of an emergency that requires a field response by employees, at least two employees will be dispatched to the trouble location.
Purpose: The purpose of this rule is to help provide safe working conditions under emergency conditions.
Source: Safety Department
Owner: Vice President of Administration
Category: Operations: function
Scope: Process
Detail level: Dispatch function/trouble resolution process
Strength: Requirement
Priority: High
Persistence: Immediately, until rescinded
Format: Text only
Implementation method: Included in all software requirements dealing with dispatch functionality
Compliance monitor: Design review
Domain specific: None
Table 5.2: Sales Tax Calculation Rule Metamodel

Creation date: 6/1/95
Name: Sales Tax Calculation Rule
Description: This rule requires that the sales tax be calculated using this formula: ST = (Price * Tax rate), rounded up to the next cent if not already at a whole cent.
Purpose: The purpose of this rule is to provide the rule for calculating the sales tax on products.
Source: Tax Department
Owner: Chief Accounting Officer
Category: Operations: function
Scope: Transaction
Detail level: Tax calculation function, tax calculation transaction
Strength: Requirement
Priority: High
Persistence: Immediately, until rescinded
Format: Mathematical equation
Implementation method: Business rule library that is accessed by software at run time
Compliance monitor: Module test and application integration test
Domain specific: Tax code reference: Sect. 10.556.9 State Code
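The sales tax rule of Table 5.2 states its format (a mathematical equation) and its implementation method (a rule library accessed at run time). A minimal sketch of the library-resident formula might look like the following; the function name is an assumption, and decimal arithmetic is used so that the round-up-to-the-next-cent condition behaves exactly.

```python
# Sketch of the Table 5.2 rule as it might live in a run-time rule library:
# ST = (Price * Tax rate), rounded up to the next cent if not already at a
# whole cent. Decimal avoids binary floating-point rounding surprises.

from decimal import Decimal, ROUND_UP

def sales_tax(price, tax_rate):
    """Compute ST per the Sales Tax Calculation Rule."""
    st = Decimal(str(price)) * Decimal(str(tax_rate))
    # ROUND_UP only changes the value when ST is not already a whole cent.
    return st.quantize(Decimal("0.01"), rounding=ROUND_UP)

print(sales_tax("10.00", "0.065"))  # 0.65: already a whole cent
print(sales_tax("10.10", "0.065"))  # 0.66: 0.6565 rounds up
```

Because the formula lives in the library rather than in calling code, a tax-code change (the domain-specific attribute) requires rebinding only this one function.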
5.6 Rules versus requirements

Although the modeling, implementation, and use of process business rules are not covered until later in this book, one area of potential confusion needs to be addressed early: the relationship between process business rules and the requirements for software products that support a process. In general, the software requirements must be consistent with applicable process business rules (and other types of business rules, for that matter), but they do not necessarily have to contain the rule itself, nor do they have to be rules themselves. In fact, they should, in general, not be business rules. Business rules are oriented toward understanding and defining the operation of the business, while requirements are oriented toward defining the operation of a software product.

If a process business rule states that "all preventative maintenance will be performed between midnight and 7:00 A.M. except for weekends, when it can be done anytime," then a software scheduler that supports the maintenance process could have a requirement that "a means must exist through a GUI interface to specify the time period, based on days of the week, during which work can be scheduled." That example is a product requirement, or rule, that supports the business rule. It clearly is not a business rule in the sense in which the term has been defined and used here. The author refuses to be drawn into a philosophical discussion concerning the definition of a product requirement as a special type of business rule. It is not necessary, or probably even feasible, to provide a definitive answer to that question.

In the development of software product requirements, additional business rules may be identified. If a prospective requirement further defines the process that the software will support, it can also be considered a business rule and may or may not be kept as a requirement. As an example, assume a casual customer billing business rule has been specified.
A software product that will be used to monitor customer account status could be given a requirement that “if a customer is delinquent in any payment for the last year, that customer will be billed every month instead of every other month.” In that case, the requirement is clearly process related even though it initially was defined as a software product requirement. With that understanding, the requirement should also be considered as a business rule and the needed product requirement reconsidered. In the example, the product requirement could (and probably should) be restated to the following: “The product will be capable of generating a bill at intervals determined by customer status.”
Whether a candidate software requirement is process related must be determined from the individual circumstances. If it is determined to be such, a recast of the requirement and an addition to the business rule set probably is appropriate.
5.7 Rule engine

As an aid in the configuration management of business rules and to make the incorporation of business rules into the enterprise more effective, the concept of a rule engine has been advanced. Although not well defined, the implicit purpose of the rule engine is to provide an efficient mechanism for rule administration and use. The functionality of a rule engine can range anywhere between the following two extremes:

§ The functionality is that of a passive library that is used only to contain rule definitions. The library usually has functions that allow rules to be created, deleted, copied, and manually browsed, but there is little additional functionality. Although better than nothing, this type of rule engine is minimally useful. It is not possible to use this type of engine with implementation methods 5, 6, 7, and 8, as defined in the list in Section 5.5.4. That severely limits the effective use of rules in the operation of the business.

§ The functionality is that of a full-function repository utilizing an explicit, robust rule metamodel. This engine would allow the full range of implementation methods as well as provide for the efficient reuse of rules. Configuration management functions would be enhanced through automatic discovery of rule conflicts, overlaps, and possibly gaps.

To achieve the second type of rule engine, a large number of problems must be addressed and solved. Further discussion of what is needed to accomplish the realization of this engine is beyond the scope of this chapter. The main purpose of including the concept of a rule engine is to alert readers that they should be cautious in assuming a specific meaning when they encounter the term. Some rule engines being offered are modified inference engines, database constraint checkers, and workflow engines. Those are all legitimate implementations, but each has a relatively narrow focus. Examples of uses of these types of engines are given in Section 5.8.
5.8 Rule system An illustration of the architecture for a complete business rule system as utilized in the implementation of a process is presented in Figure 5.2; the discussion given in this section is based on the diagram in the figure. In addition to the end user, five different human-oriented roles are shown: administrator, owner or agent, analyst, software developer, and data modeler. Those roles could be performed by five different staff members, or they could be combined as warranted. In addition, some parts of the roles may be automated through the use of expert systems or similar approaches. For the purpose of this discussion, however, it is assumed that the roles are performed by individual staff members. Because Figure 5.2 is designed to show a systems approach to the operational environment for business rules, it must incorporate many interrelated functions and entities necessary to the implementation of a process. If the reader is unfamiliar with some of them, the terms and functions presented in the figure are discussed in other areas of the book.
Figure 5.2: Rule system architecture.

The administrator assigns the proper characteristics to a new rule, such as a specific implementation type, and places it in the repository. If new functionality is needed to accommodate the implementation type, it must be acquired and provisioned by the enterprise organizations responsible for that type of activity before creation is considered complete. Before being made generally available, the new business rule (or set of business rules) should be simulated to determine possible effects on the enterprise. Some activities to determine potential significant overlaps, gaps, inconsistencies, and conflicts with existing rules should be performed through a reasoning and analysis process. That may require simulation and other forms of testing using the experience and knowledge of staff members along with an automated tool assist.

As an example of the reasoning and analysis required, consider the following two rules concerning business travel:

1. "All travelers on company business will fly at the lowest available airfare."
2. "No employee traveling on company business will be compelled to stay over a weekend unless the business purpose spans the weekend."

In many cases, the two rules will conflict because the lowest airline fare will require a weekend stay. It would be useful to analyze rule sets to determine where conflicts of this type exist. The rules could then be adjusted to remove the problems. As an example, the following rule contains the essence of the two travel rules while removing the conflict: "All travelers on company business will fly at the lowest available airfare that does not require a weekend stay unless the business purpose also requires a weekend stay."

This rule system is based on a single repository for all rules but allows the utilization of multiple rule engines for different rule implementation types. The repository itself can, of course, also be considered a type of rule engine and used directly as indicated.
That direct use may or may not be practical, but it needs to be considered for completeness. The use of the single repository permits an effective simulation function and the ability to reason about the effect and purpose of each individual rule or multiple rules in combination. This type of analysis is critical in ensuring that each rule or set of rules correctly fulfills its intended purpose, regardless of how they are implemented. In addition, the single repository facilitates the life cycle management of the rule base. Because of its central position in the analysis and management of the rule base, the format of the rules in the repository is generally suited to these purposes. Many rule formats will be required for the effective use of the rules under different circumstances and environments. It will be necessary to convert the repository rule form to multiple representations, depending on the desired characteristics of the rule under operational conditions.
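The kind of reasoning and analysis illustrated by the two travel rules can be sketched, under strong simplifying assumptions, by reducing each rule to a predicate over candidate itineraries and flagging a conflict when no itinerary can satisfy every rule at once. The itinerary fields, rule names, and fare figures below are all hypothetical.

```python
# Hypothetical conflict analysis over a rule set: each business rule is
# reduced to a predicate; the rules conflict if no single candidate
# itinerary satisfies all of them simultaneously.

itineraries = [
    {"fare": 400, "weekend_stay": True},   # cheapest, but forces a weekend
    {"fare": 700, "weekend_stay": False},
]
lowest_fare = min(i["fare"] for i in itineraries)

rules = {
    "lowest_airfare": lambda i: i["fare"] == lowest_fare,
    "no_forced_weekend": lambda i: not i["weekend_stay"],
}

def conflicting(rules, itineraries):
    """True if no itinerary satisfies every rule in the set."""
    return not any(all(r(i) for r in rules.values()) for i in itineraries)

print(conflicting(rules, itineraries))  # True: the two rules conflict here

# The merged rule from the text: lowest fare among itineraries that do not
# force a weekend stay. It is satisfiable where the original pair was not.
ok = [i for i in itineraries if not i["weekend_stay"]]
merged = lambda i: not i["weekend_stay"] and i["fare"] == min(x["fare"] for x in ok)
print(any(merged(i) for i in itineraries))  # True
```

Real repository-level analysis would of course need a common rule formalism rather than hand-written predicates, which is exactly why the single-repository form matters.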
The design of the repository business rule metamodel, including the necessary classification taxonomies, is also of critical importance. This class model serves as the main mechanism for ensuring the integrity of the rule base and preventing undue proliferation of rules. Although it generally will not be used as part of the real-time operational environment, it is central to its proper functioning.

Figure 5.2 depicts many of the implementation types discussed in previous sections. Although not theoretically necessary, each type usually involves a format change. The diagram illustrates how the same task can make use of rules with different implementation types. That is the usual situation in software development and shows the importance of determining the effect and compatibility of the different rules as a separate step in the development process.

For example, Task 1 uses two business rules in two different ways. First, it uses rules that have been compiled from the repository into a run-time library. The rules are not incorporated into the task but, in effect, constitute a type of database and are accessed in much the same way. The compile function places the rules in a format that facilitates their access by the task. The complexity of the rule format is borne by the compiler, not by the task. An autocompile function is shown as the compile mechanism in the diagram. That means that whenever an appropriate rule is changed in the repository, it is automatically compiled and placed in the run-time library. A manual compile also could have been used, but if there is a significant amount of change, the autocompile probably is better. A manual compile function is shown later to depict the concept separately. The characteristics of rules with this implementation type are that they change frequently and need to be accessed at a moderate rate.
In addition to the use of a run-time library, Task 1 uses some business rules directly from the repository. The characteristics of these rules are that they are changed frequently but are accessed only rarely. The format complexity must be borne by the task, because there is no conversion from the representation used in the repository to one better suited to the operational task. It is possible, however, to provide a real-time translation function or interpreter to change the format. That would be a reasonable approach if several tasks could use the same interpreter. If the interpreter would be used by only a single task, there would be no reason to keep it as a separate entity; it could simply be made a component of the task. In general, unless the change rate of the rules is much greater than their access rate, this type of implementation should not be used. If at all possible, the repository should not be an integral part of the real-time software operations environment. It is much better suited to a non-real-time role. In addition, the characteristics that would make this type of implementation desirable do not seem to represent a practical situation.

In the diagram, a workflow manager is used as the control mechanism for the tasks. The manager depends on business rules to determine the execution conditions for the tasks and to monitor their operational performance. The rules are present in the repository but must be translated to the form needed by the workflow manager. After translation, the rules become resident in the workflow manager as a replicated copy of the original rules. The characteristics of these rules are that they are changed relatively infrequently and the input format required by the manager is fixed and not under the control of the enterprise. Although not made a part of the diagram (to manage its complexity), many other software components can require business rules in a manner similar to that of the workflow manager.
They need business rules to be entered in a fixed prespecified format (probably different for each package). These components could exist in either the infrastructure (e.g., security packages) or application (e.g., accounting packages) domain.
Task 2 uses rules that must be interpreted by the software designer and then implemented as an integral part of the final software product. This is the situation found in most previous and current software implementations. The business rules are utilized by incorporation as part of the source code and in many instances have difficult-to-understand representations. In addition, past and current software practice does not use an explicit form of the incorporated rule, as is indicated by the repository. The rules exist only in the software code form. There are legitimate reasons for using this type of rule implementation as long as the rule is also replicated in the repository. Efficiency considerations in the form of response time or throughput requirements may mandate this implementation approach. However, because this approach is prone to significant error in the translation, comprehensive testing and simulation probably would be necessary to ensure that the rule implementation conforms to the original. Characteristics of business rules with this implementation type are very infrequent changes and high access rates.

Task 3 uses two different rule implementation types. The first is a variation of the type used by Task 2. It also involves a manual translation of the business rule, as depicted in the repository, into actual software code. The difference is that the code that represents the business rules is isolated into a separate part of the task. The method is about as efficient as the Task 2 method and is less prone to errors because of the separation from the other components of the task. However, because changes will force a recompilation, the time to make a change will be significant. Rule characteristics for this implementation are the same as those for Task 2: very infrequent changes and high access rates.
The second type of rule implementation used by Task 3 is an autocompile from the repository, which is similar to the first implementation type used in Task 1. The difference is that the compiled rule is part of the task and not part of a separate run-time library. That may also indicate a format difference between the two, because the results of the compilations do not have to be the same. The characteristics of rules with this implementation type are that they change frequently and need to be accessed at a moderate rate.

In addition to directing software functionality, business rules can also be used to format or change the data used by the software programs. That is illustrated in Figure 5.2 by a direct translation from the repository to the database. It could be done on a manual or automated basis, depending on the anticipated frequency of change.

Finally, business rules can be used to produce printed materials that direct the manual tasks performed by the end users. In general, any process implementation requires both manual and automated tasks. Those tasks must be coordinated and interoperate with each other. Using the same set of business rules as the source for both types of tasks helps ensure the needed interaction.
5.9 Summary

Business rules are used throughout the enterprise, usually on an informal and unstructured basis. That prevents achievement of most of the advantages that the rules can provide. By considering business rules as a fundamental asset of the enterprise and performing the needed modeling, rules can be utilized to considerable advantage.

Selected bibliography

Herbst, H., "A Meta-Model for Business Rules in Systems Analysis," Proc. 7th Conf. Advanced Information Systems Engineering, Berlin: Springer-Verlag, 1995, pp. 186–199.

Herbst, H., Business Rule-Oriented Conceptual Modeling (Contributions to Management Science), New York: Springer-Verlag, 1997.

Katsouli, E., and P. Loucopoulos, "Business Rules in Information System Development," Proc. 2nd Workshop Next Generation of CASE Tools, University of Jyvaskyla, Finland, 1991, pp. 481–503.

Lang, P., W. Obermair, and M. Schrefl, "Modeling Business Rules With Situation/Activation Diagrams," Proc. 13th Internatl. Conf. Data Engineering, Birmingham, UK, Apr. 7–11, 1997, pp. 455–464.

Rosca, D., et al., "A Decision Making Methodology in Support of the Business Rule Lifecycle," Proc. 3rd IEEE Internatl. Symp. Requirements Engineering, Annapolis, MD, Jan. 6–10, 1997, pp. 236–246.

Ross, R. G., The Business Rule Book: Classifying, Defining and Modeling Rules, Version 4.0, Boston: Database Research Group, 1997.

Soper, P., Managing Business Rules, Englewood Cliffs, NJ: Prentice Hall, 1997.
Chapter 6: Financial management Economics—more specifically, financial considerations—are at the core of any enterprise. Bluntly stated, any profit-making enterprise exists so that the amount of money it takes in from customers is greater than the amount that goes out in expenses; the more the better. While that is a relatively easy statement to make, in actual practice, there are enough nuances to keep armies of people busy trying to define exactly what the statement means and how best to achieve that desirable condition. A strong indication of that problem becomes evident as this chapter unfolds.
6.1 The basics

Almost every decision and every action taken or not taken in an enterprise generally is motivated or justified on the basis of one or more financial metrics whose values are measured, calculated, or estimated. Unfortunately, depending on which organization or individuals are involved and the specific procedures and metrics utilized, many of those references are based on incomplete information and do not always adequately reflect the true values.

The goal of this chapter is to motivate and define an enterprise financial model that can be used to provide a financial analysis of any proposed enterprise activity. The availability of this model is especially important in analyzing decisions concerning the automation assets. Many of the advantages of those assets, as well as the overall asset-based approach, are susceptible to challenge on traditional accounting grounds. To determine their actual contribution to the enterprise, the financial model and associated analysis mechanisms must be able to effectively consider all the factors that affect the automation assets.

The financial model derived here is not perfect. There is still plenty of room for controversy and differences of opinion. The value of the model is that it forces explicit consideration of all issues that have a possible impact on the decision as to whether or not a proposed action (or group of related actions) concerning the automation assets should be undertaken.

The emphasis of the discussion is on financial philosophy rather than detailed accounting procedures. However, to provide an embodiment of the philosophy and give enough detail for suitable examples, some reliance on accounting terms and principles is necessary. Those terms and principles are explained as necessary, so a detailed knowledge of accounting is not required.
The points raised here are not meant to always apply to each and every aspect of the enterprise. It is necessary, as in the case of any proposed analysis or activity, to evaluate the advantages and disadvantages before deciding that a proposed procedure or technique is appropriate and necessary. A decision also must be made as to whether to utilize informal or formal approaches to meet the needs of a particular situation. Formal approaches using explicit models and procedures usually provide better results, but at a cost that may or may not be worth the improvement.

In addition to presenting the financial needs of the automation assets and the reasons that a conventional financial and accounting approach generally will not provide the desired results, the discussions in this chapter are designed to provide a better understanding of some of the more pervasive financial and accounting criteria and the inherent limitations of their use.

6.1.1 Basic equations

The financial or accounting aspects of the enterprise are summarized by two equations:

assets − liabilities = equity (evaluated at a specific point in time)

equity + (revenue − expenses) = revised equity (evaluated over a specified time period)

where assets are items that can be used for the production of revenue; liabilities are items that reduce the future capability to create revenue; equity is a measure of the worth of the enterprise; revenues are items that increase equity; and expenses are items that decrease equity.

The equations also incorporate some inherent assumptions that are addressed in the discussions that follow. For ease in use, the five variables in the equations hereafter are known as the financial categories of the enterprise. While the equations must always hold for a given enterprise, they cannot indicate the many underlying aspects that are necessary for the successful operation and continued existence of an enterprise.
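The two basic equations can be transcribed directly as a worked example; the figures used below are illustrative assumptions, not drawn from the book.

```python
# Direct transcription of the two basic accounting equations.

def equity(assets, liabilities):
    """assets - liabilities = equity (evaluated at a point in time)."""
    return assets - liabilities

def revised_equity(equity, revenue, expenses):
    """equity + (revenue - expenses) = revised equity (over a period)."""
    return equity + (revenue - expenses)

# Illustrative figures only.
e = equity(assets=1_000_000, liabilities=400_000)
print(e)  # 600000

# Over the period, revenue exceeds expenses, so equity grows.
print(revised_equity(e, revenue=250_000, expenses=180_000))  # 670000
```

Note that the second equation simply restates the first over time: the net of revenue and expenses for the period is the change in equity.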
The equations can only represent static conditions, while an enterprise is fundamentally a dynamic entity. Thus, the dynamic (process) aspects also must be considered to provide a complete characterization of the financial aspects of the enterprise. Those aspects are addressed shortly. The terms expense and cost are used here somewhat interchangeably, as common usage dictates. Although they are intended to signify different meanings, it is not useful for current purposes to be rigorous in that respect.

6.1.2 Asset transformations

This section is concerned with the dynamic aspects of the enterprise. Enterprise dynamics can—and usually do—consist of a large number of complex interactions.
However, there are some simple ways in which the basic enterprise dynamics can be defined and described in the same terms used for the basic financial equations. The resultant structures and processes can then be utilized in the construction of the desired financial model. The fundamental dynamic of the enterprise is to transform assets into revenue, as illustrated in Figure 6.1. Figure 6.1(a) depicts the basic asset cycle. Depending on the business of the enterprise and the assets involved, that can be accomplished in a variety of ways. Physical assets can be sold, leased, or used to provide a service (e.g., house washing with a pressure washer). Intangible assets (e.g., skills) can be used to provide a service (e.g., building design). Sometimes assets need to be converted before they can realize revenue (e.g., beads and string converted into necklaces, which are then sold). Assets that support the enterprise but that are not in the direct conversion chain also are considered to contribute to the generation of revenue (e.g., equipment, lights, personnel skills, automation assets).
Figure 6.1: Basic asset transformation process. The dynamic model also contains a feedback path, such that the revenue generated can be used to obtain more assets, which then can be used to generate additional revenue. That classic positive feedback loop would allow uncontrolled growth and prosperity for the enterprise as long as other activities that tend to inhibit the main process do not exist. The major inhibiting activity is that associated with incurred expense. Expense is the utilization of assets for non-revenue-producing functions. Expense is incurred to pay employees, provide office space and utilities, and so on. Adding this process to the main process results in the diagram of Figure 6.1(b). As long as the values of expenses and revenue are such that revenue does not suffer a continuous decrease, the enterprise will remain viable. In Figure 6.1 and the other figures in this chapter, expense is shown as the diversion of revenue. Such diversion reduces the amount of revenue available to obtain assets. By considering revenue as a monetary equivalent asset, it is reasonable for the schematic format to show expense as a reduction in that asset. In reality, however, expense can reduce any monetary equivalent asset. So far, liabilities and equity have not been accounted for in the dynamics. Liabilities are merely a mechanism for delaying the actual payment of an expense to keep current revenue-producing assets at a maximum. For example, if a building is purchased and a mortgage is obtained for the purchase, the mortgage represents the liability. However, the whole building asset can be immediately used by the enterprise. Liabilities can be considered by incorporating the concept in the dynamic process model shown in Figure 6.1(c). Assuming that equity remains constant, incurring a liability
increases assets by the same amount, while reducing a liability requires an equivalent amount of asset value. The first basic financial equation presented in Section 6.1.1 contains only two independent variables; the third variable is dependent on the values of the other two. Assets and liabilities usually are assumed to be the independent variables, so equity is merely the difference between them. If there are no liabilities, then equity is equivalent to the assets. For that reason, there is no need to include equity in the dynamic model. It can be calculated at any time a value is desired. The next addition to the dynamic model is the inclusion of internal transformations. Internal transformations are those utilized to produce the financial items needed in the operation of the enterprise. The most frequent occurrence of such transformations is the conversion of one or more assets into another asset that is needed by the enterprise. Exchanging cash for buildings, equipment, or raw materials is a frequent type of asset transformation. Consequently, converting raw materials and equipment use into a product suitable for sale is another type of transformation. The transformation chain can contain as many steps as needed. An example of an asset conversion chain is shown in Figure 6.2.
Figure 6.2: Asset conversion chain. Although not as frequent, liabilities also can be converted from one form to another (e.g., unsecured loan to secured mortgage). Although revenues and expenses also can assume different forms, their value is the overriding aspect of interest so that the different forms usually are not considered explicitly except when the associated risk is materially different. For simplicity, the remainder of this discussion considers only asset transformations. The dynamic model developed so far does not include the concept of intangibles. Intangibles can easily be added to the model, as shown in Figure 6.3. The major difference between the effect of the intangible assets and liabilities and the tangible categories is that the intangibles have a direct effect on the ability of the enterprise to convert assets into revenue. As a part of the asset conversion chain, an intangible asset enhances the conversion, while an intangible liability inhibits it because the assets are used to service the liability instead of being used to produce revenue.
Figure 6.3: Transformation process and intangibles.
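The feedback dynamic of Figure 6.1(c) can be approximated by a toy simulation. The conversion and expense rates below are invented purely for illustration; they are not part of the model as defined in the text:

```python
# Toy simulation of the asset-revenue feedback loop of Figure 6.1(c).
# Each period: assets are transformed into revenue, expenses divert part
# of that revenue, and the remainder is fed back into new assets.
# The conversion and expense rates are invented for illustration only.

def simulate(assets, liabilities, conversion_rate, expense_rate, periods):
    for _ in range(periods):
        revenue = assets * conversion_rate   # assets transformed into revenue
        expenses = revenue * expense_rate    # diversion of revenue
        assets += revenue - expenses         # feedback: net revenue becomes assets
    equity = assets - liabilities            # equity is the dependent variable
    return assets, equity

# A viable enterprise: expenses consume only half of revenue, so assets grow.
final_assets, final_equity = simulate(assets=100.0, liabilities=40.0,
                                      conversion_rate=0.2, expense_rate=0.5,
                                      periods=3)
# final_assets grows 10% per period: 100 -> 110 -> 121 -> 133.1
```

If the expense rate were raised so that expenses exceeded revenue period after period, assets (and hence equity) would decay instead of grow, which is the viability condition stated above in dynamic form.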
6.2 Initial financial model

The financial operations of the enterprise have now been defined through a static model (the basic equations) and a dynamic model, as defined in Figure 6.1(c). The combination of the two models forms the initial financial model of the enterprise as illustrated by Figure 6.4. This initial model lacks some constructs that are needed to fully understand the financial implications. The major missing aspects of the initial model are the drivers that cause each of the models to change state and explicit consideration of the automation assets. Those concepts are introduced as the discussion continues and are used to complete the model.
Figure 6.4: Initial enterprise financial model.
6.3 Financial events

Any driver that causes the values of the categories of the financial model to be changed is called a financial event. A financial event is any action or activity on behalf of the enterprise that causes one or more financial categories (assets, liabilities, equity, revenue, expense) to change value and thereby change the financial state of the enterprise. Financial events can result from internal or external occurrences. From a financial perspective, the enterprise consists of the totality of the effect of all financial events. This discussion provides more detail concerning the definition and characteristics of financial events and continues the process of developing an appropriate financial model. Initially, we examine financial events that result from internal enterprise activity. Financial events that originate outside the enterprise are then considered, and the combination of the two is used to create the completed model. Financial events can be of any size and may be an aggregate of other financial events. A simple financial event cannot be subdivided into simpler events, while a compound financial event consists of an aggregate of simpler events. Because the effect of a financial event will, in general, be nonlinear, the financial consequences of a compound financial event usually will not be a simple linear addition of the results of all the component financial events. It may be a relatively involved (and possibly unknown) function of the component financial events. That effect is one of the major difficulties that limits the application of synthesis techniques to the characterization of the enterprise financial environment and necessitates the use of measurement and analysis techniques instead. A financial event can have long-term or short-term effects. Buying paper for the copy machine usually results in a one-time expense per purchase (financial event).
However, purchasing some form of investment security is a financial event that may realize expense or revenue over an extended period of time. Each event has unique characteristics and needs to be considered separately to accommodate those differences.
6.4 Financial event model

An enterprise financial event (simple or compound) is modeled by defining all the effects that can occur as a result of the event. From the definition of a financial event, those effects consist of changes in one or more financial categories. Each category can increase or decrease depending on the specifics of the financial event. It is useful to further decompose some of the categories to provide enough detail to develop the different perspectives utilized later in the discussion. In addition, the possibility of intangible categories is allowed. That is necessary to partially accommodate financial implications not captured by the conventional approach to financial analysis.
The resultant model is illustrated by Figure 6.5. For that model, 11 categories have been defined that need to be examined individually for each financial event. Although most of the categories are well known, some are not usually stated or examined explicitly. That lack of consideration is one of the main reasons that a financial analysis using a partial model may give misleading or even wrong results. The major characteristics of all the categories are discussed briefly, and, as a part of this discussion, their applications to the automation assets are considered.
Figure 6.5: Financial event model. Although each model category is discussed separately, they are closely interrelated. For example, performing a purchase financial event can result in a direct cost, an allocated cost, an opportunity cost, and an asset transfer. All of them happen simultaneously, and both their individual values and the aggregate result are important for a variety of analysis purposes. In addition to being part of the financial event model, the categories also must be used to augment the static and dynamic categories of the initial model. To avoid undue complexity, that is not explicitly performed during the course of this discussion. The items are made a part of the completed model produced after the discussion of all the financial event model categories.

6.4.1 Cost categories

As will be evident from the following sections dealing with costs associated with the automation assets, costs result from two sources: those associated with the creation and continuing existence of the assets and those associated with the use of the assets in the operation of the business. With few exceptions, the costs associated with the use of the assets are far greater than the costs needed to define and implement them. That is a unique aspect of the automation assets. In most normal costing environments, the cost to provide an asset is the dominant one. Because of that shift in emphasis for the automation assets, intangible costs become much more important to consider, and suitable means to estimate them must be defined and utilized. Continuing measurements to determine the accuracy of those estimates also must be made. Following the normal procedure results in the costs assigned to the automation assets being understated by a considerable amount.
6.4.1.1 Allocated cost

An allocated (indirect) cost is a cost that is incurred in support of a large number of enterprise financial events; as a consequence, the contribution associated with any given financial event cannot be known exactly. The amount to be allocated is fixed, and an algorithm determines the amount assigned. Although the allocated cost often is not directly controllable, it usually is included in the measurements of accountability. An
allocated cost can be tangible or intangible but should be reasonably matched in time with the service it represents. Consider a business rule automation asset. Are there costs that should be allocated to a business rule? If so, what does this allocation represent? Certainly the costs of operating the repository, as well as those incurred by performing the activities of the other automation asset management components, should be allocated. Those costs, however, represent a minor effect. The major effect comes from the intangible costs resulting from the enterprise using the rule to constrain the operation of the business. Assume the following business rules: (1) “No sales will be made to any customer 60 days or more behind in payments for previous orders” and (2) “No sales will be made to any customer owing more than $10,000.” Customers denied goods or services because they fall under the effect of both rules may become angry and give all future orders to a competitor. Part of the cost of the ill will and lost orders should be allocated to each of those rules. The allocation may be uniform or based on the percentage of customers that come under only one of the rules. Portions of the estimated intangible cost also may be allocated to other rules or assets that are found to contribute to the cost. The total intangible cost could be estimated by considering the number of customers that fall under those rules every year, their average yearly orders for past years, and the percentage that do not place another order within a 1-year period. The fact that there is an operational cost allocated to the rules does not mean they are bad rules. The amount saved in bad debts because of either rule may be much larger than the cost they cause. (The cost avoidance effect is discussed in Section 6.4.2.3.)
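The estimation procedure just described can be sketched as follows. All of the figures are hypothetical and serve only to make the arithmetic concrete:

```python
# Rough estimate of the intangible cost allocated to credit-restriction
# business rules, following the procedure described in the text.
# All figures are hypothetical.

def estimated_intangible_cost(customers_denied_per_year,
                              avg_yearly_order_value,
                              pct_never_return):
    """Lost-order cost: denied customers who never place another order."""
    return customers_denied_per_year * avg_yearly_order_value * pct_never_return

# Say 40 customers per year are denied under the rules, averaging $5,000
# in yearly orders, and 25% of denied customers never order again.
total_cost = estimated_intangible_cost(40, 5_000, 0.25)   # 50_000.0

# A uniform allocation across the two rules in the example.
cost_per_rule = total_cost / 2                            # 25_000.0
```

As the text notes, an allocated cost of this size says nothing by itself about whether the rules are good or bad; it must be weighed against the bad debts the rules avoid.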
6.4.1.2 Direct cost

Direct costs are costs that can be reasonably associated with a specific enterprise financial event. As with allocated or indirect costs, direct costs can be tangible or intangible. Using the previous example of business rule assets, direct costs that could be associated with the rules are those costs that are unique to the rule. If customers are denied goods or services because they fall under the effect of one of the rules, then that cost is a direct cost of the rule because it can be reasonably associated with the use of the rule. Direct costs are sometimes referred to as variable costs; they exist only because of the financial event to which they are attached. If the financial event had not occurred, the cost would not exist. If one of the rules cited in Section 6.4.1.1 did not exist, then the cost currently attributed to it also would no longer exist.
6.4.1.3 Opportunity cost

An opportunity cost is a cost that is incurred because the occurrence of a specific financial event prevented another financial event from being performed, and the absence of that financial event resulted in some cost (or lost revenue) to the enterprise. The most common example usually is stated in terms of lost interest on money that is spent to purchase equipment for a specific purpose. The revenue that is lost because the money was not available for investment is an opportunity cost. The rule of thumb is that the revenue realized from a financial event should at least be greater than the associated opportunity cost. Of course, the money also could have been used to purchase other equipment to produce a different product. The potential revenue from that alternative use also would be considered an opportunity cost. The opportunity cost associated with the creation phase of an automation asset is the revenue that could have been generated from the funds used in developing and placing the asset in the repository. While that could be substantial for assets such as processes, the opportunity cost associated with the use of the asset is usually far greater. The cost of utilizing a given process, in labor and computer costs, is large; therefore, so are the opportunity costs.
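The lost-interest example can be sketched as follows. The rate, period, and amounts are hypothetical:

```python
# Sketch of the lost-interest form of opportunity cost described above.
# The rate, period, and dollar amounts are hypothetical.

def opportunity_cost(amount, annual_rate, years):
    """Compound interest forgone because the money was spent, not invested."""
    return amount * ((1 + annual_rate) ** years - 1)

# Spending $100,000 on equipment instead of investing it at 5% for 2 years
# forgoes the compound interest on that money.
lost_interest = opportunity_cost(100_000, 0.05, 2)   # approx. 10_250

# Rule of thumb from the text: the financial event should return more
# than its associated opportunity cost.
def worth_doing(expected_revenue, opp_cost):
    return expected_revenue > opp_cost
```

Under these assumed numbers, an equipment purchase expected to return $12,000 clears the opportunity-cost hurdle, while one returning $9,000 does not.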
6.4.2 Revenue categories

Unlike costs, revenue usually is associated only with the use of an asset and not with its mere existence. The use of the asset may be to convert other assets, to transfer ownership to someone else (e.g., a sale), or to retain ownership and charge for its continuing usage (e.g., a lease, a license, or a service). That is in keeping with the more traditional approach to revenue accounting. However, other aspects unique to the revenue side of the asset management system must be considered. One of those aspects is the concept of allocated revenue. Why that concept is useful in the asset management system, and the major considerations involved, are discussed next.
6.4.2.1 Allocated revenue

In most revenue generation situations, the cause of the revenue can be readily determined. In the case of the automation assets, that is not the usual condition because of the number of assets that may be involved and the implicit, rather than explicit, contribution to the revenue. Consider, for example, a scenario that is used to develop and test a process representation. The process is used as the basis for a workflow, and the workflow is used to provide a service that generates revenue. The scenario certainly contributed to the realization of revenue. How much of this revenue should be allocated to the scenario, and why? This discussion provides a partial answer. The automation assets need to have revenue allocated to them to indicate their value to the enterprise. In far too many cases, the concept of the cost center is utilized to characterize one type of enterprise component. Cost centers, as their name implies, are considered to be pure cost and, by implication, a bad thing to have. Management is always trying to reduce the cost associated with a cost center. Cost centers are always contrasted with profit centers, which, again by implication, are considered good things to have. After all, who does not want profit? In reality, there is no such thing as a cost center. It is a fiction of accounting, defined for convenience. From the perspective of the dynamic model, all costs contribute to the generation of revenue, and all revenue is the ultimate result of incurring cost. If there are costs that do not contribute to some aspect of the generation of revenue, they easily can be eliminated, and the profit of the enterprise will increase by the same amount. By allocating revenue to the financial events that resulted in costs of some type (whether they occurred in a cost center or a profit center), the true contribution of the financial events to the effective functioning of the enterprise can be determined.
It easily could be true that increasing the cost incurred by a cost center would provide an even greater increase in revenue to the enterprise. Activity-based costing, which is designed to determine the relative contribution to cost of the activities in a process, provides only half the needed information. The other half is activity-based revenue, which provides an allocation of revenue to each of the same process activities. If the allocated cost is greater than the allocated revenue, that indicates that the process activity is badly designed and needs to be changed or that the allocation algorithm requires adjusting. Now that the “why” part of the revenue allocation has been considered, the remaining need is to discuss how the allocation should be made, specifically in the case of the automation assets. Because of the number of assets involved, the allocation probably should be made on the basis of an entire asset class or a major subclass. If desired, another allocation based on some type of averaging could be made to the individual assets. The allocation algorithm probably will be structured around the use of an “elimination” technique. That is accomplished by asking the following questions: What would be the effect on revenue if the asset class did not exist? Would it be reduced by 0.1%, 5%, or 75%? Would the revenue be increased by some amount? While the last question may
seem strange, consider the business rule example used previously. If it was decided to eliminate the restriction placed on customer orders imposed by the rules, the total revenue of the enterprise conceivably could increase. However, so would the risk factor associated with that revenue, resulting in a cost associated with unrealized revenue or simply bad debts. In fact, it would be hoped that the cost avoidance allocation (see the discussion in Section 6.4.2.3) would be greater than the negative revenue (cost) allocation. The effect on revenue can be difficult to determine, but if the enterprise is to understand the causes and effects of its individual and compound financial events, an approach to providing that type of information needs to be developed and actively utilized in the decision-making process. Returning to the example of the scenario asset, the prior discussion indicates that the allocation of revenue could be accomplished as follows. Assume that the scenarios did not exist, resulting in processes that had to be tested and verified another way, possibly on “live” customers. If that approach resulted in a longer time to reach operational status and, based on historical data, resulted in 5% fewer customers, then the lost revenue due to late process availability and the reduction in customers would be partially allocated to the scenario assets involved in the process. Remember that the revenue reduction assumption is only an estimation procedure used by the analysis process to determine the contribution of the scenarios to the overall revenue and thus indicate their contribution to the enterprise revenue.
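The elimination-style estimate for the scenario assets can be sketched as follows. The revenue figures and percentages are hypothetical, standing in for the historical data the text says the enterprise should use:

```python
# Elimination-style revenue allocation for an asset class, following the
# scenario example: estimate the revenue that would be lost if the class
# did not exist. All percentages and amounts are hypothetical.

def allocated_revenue(total_revenue, pct_lost_without_assets):
    """Revenue attributed to an asset class via the elimination question:
    'What would the effect on revenue be if this asset class did not exist?'"""
    return total_revenue * pct_lost_without_assets

# Without scenario assets, assume processes reach operational status later
# and, per historical data, 5% fewer customers result.
scenario_allocation = allocated_revenue(2_000_000, 0.05)   # approx. 100_000

# Activity-based comparison: allocated revenue versus allocated cost.
scenario_cost = 60_000
needs_review = scenario_cost > scenario_allocation   # cost exceeds revenue?
```

Here the scenarios' allocated revenue exceeds their allocated cost, so the activity passes the activity-based test described above; if the comparison went the other way, either the activity or the allocation algorithm would need attention.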
6.4.2.2 Direct revenue

As opposed to allocated revenue, revenue that can be identified as a direct result of a given asset should be entirely associated with that asset. Unlike costs, there are few instances in which revenue can be considered as being associated with only a single asset or other enterprise financial event. The generation of revenue is almost always the result of the interaction and contribution of many enterprise activities. One exception might be a sole practitioner providing medical or legal advice for a fee. Even then, there probably are other associated financial events that would have part of the resultant revenue allocated to them. In spite of the view expressed in the preceding paragraph, most revenue realized by an enterprise is assumed to be the result of a single financial event. That is because the allocation activity is not performed and there is a need to associate revenue with some enterprise financial event. The revenue resulting from a sale is attached to the asset or service sold. The contribution of other enterprise financial events to the realization of the revenue is not considered. To restate the point, this method of associating revenue is what makes it so hard to justify individual components of the enterprise that are important but for which only a cost category is identified.
6.4.2.3 Cost avoidance

Cost avoidance is the result of an enterprise financial event that negates the need to incur a cost. Cost avoidance is a type of revenue because it increases the ultimate amount of funds available to an enterprise over that which would be available had the financial event not been performed. However, that aspect must be viewed with extreme caution, as the following discussion indicates. Two types of cost avoidance, each with somewhat different characteristics, are defined. For convenience, they are labeled weak avoidance and strong avoidance. The first type, weak avoidance, is similar to that realized by buying an item on sale. If the usual price is $10.00, but the item is purchased on sale for $8.00, then a cost of $2.00 has been avoided by meeting the terms of the sale. The reason that financial event is termed a weak avoidance is that, had the item not been purchased, there would have been no obligation to pay the $2.00. The existence of the cost that is avoided is created by the very financial event that avoids it. It easily can be argued that this is not a cost
avoidance financial event at all but merely presents a different decision cost for an item that may or may not be purchased. The strong type of avoidance is much more powerful in its effects. In this case, the cost still exists even if the financial event that could avoid it does not occur. An example is the receipt of an invoice that is due in 30 days. The terms of the invoice state that if it is paid within 10 days, only 98% of the total amount needs to be paid. That is the usual commercial practice of “2/10, net 30” payment terms. By paying the invoice within the 10-day period, a cost of 2% of the invoice is avoided. If the payment is not made in the 10-day period, the entire amount must be paid. A real savings can result from strong avoidance. Another example of strong cost avoidance results from the potential reuse of an automation asset (e.g., a software component). If the asset is reused, the need to incur cost to provide a new asset is avoided. If it is not reused, the cost for the asset still exists. Cost avoidance is one of the major contributions of the automation assets.

6.4.3 Asset and liability categories

Not all financial implications occur from revenue and expense considerations. Changes in asset values and liability values also are important aspects of enterprise financial events. The major difference between the model categories defined in this section and those used in the usual accounting sense is again associated with intangibles, specifically the concept of intangible liabilities. As will be pointed out, neglecting the explicit effect of intangibles can cause the enterprise to fail. It is interesting to note that the intangible aspects, while not considered part of the conventional accounting treatment of an enterprise, probably are one of the most important considerations in the stock price of a public company.
Companies do react to that by engaging in public relations campaigns and so-called institutional advertising, which are designed to make people feel good about the company and its brands. In a real sense, these costs are not expenses but an asset conversion from a monetary asset to the intangible asset of positive consumer perception. However, by not reflecting the financial effects of those intangibles deep within the enterprise, the best opportunities to reinforce the good actions and correct the bad actions are lost.
6.4.3.1 Tangible assets

Many financial events result in an exchange of one asset type for another. The sale of an inventory item realizes revenue and results in a reduction in the value of the inventory asset and an increase in the cash asset. The purchase of office supplies results in an expense, an increase in the office supplies asset, and a decrease in the cash asset. In addition, a financial event can cause the value of an asset to change without resulting in a transformation, the realization of revenue, or the incurring of a liability. Those changes may or may not be associated with a revenue or expense component. An example is the reduction in price of certain inventory items to sell them more quickly. Outside financial events also can result in asset value changes. For example, a competitor bringing out a new line of merchandise can cause the inventory value to decrease considerably. The valuation of enterprise assets is open to some interpretation. If a tangible asset is purchased for some amount of money, it usually (but not always) can be assumed that the value of the asset, at the time of purchase, is the amount paid. After that time, the asset value can increase, decrease, or remain the same. Although value usually is expressed in dollars, it is difficult to decide against what measure it should be determined. As a simple example, assume that the intrinsic, or absolute, worth of some asset remains constant throughout its life. If it was purchased for $10.00, then, depending on inflation, it will be worth more dollars after some amount of time. The dollar value increases, but the absolute value does not. Because of that difficulty in measuring the intrinsic worth of an asset, standard accounting practices do not consider those types of changes unless
they are realized by an explicit financial event, such as the sale of the asset. If the asset in this example were sold for $13.00, the gain in the dollar value of the asset would be recognized. At any given time, using only standard accounting information, it is difficult to determine the actual value of the tangible assets. The intangible assets are even harder to value, as discussed in Section 6.4.3.2. From an asset management system perspective, the expenditure of resources is the financial consequence of creating an automation asset. The difficulty lies in the determination of the value of the automation asset. Is it the cost of creating the asset, the projected cost avoidance obtained from the reuse of the asset, the amount of future revenue that can be realized from the asset, or some combination thereof? Although a great deal of attention is not given to the valuation of the automation assets, it is important to obtain some degree of comprehension as to the value of those assets. That value is measured in dollars because it is the only real comparison available to the enterprise.
6.4.3.2 Intangible assets

Consider the development of a software component asset. How should the value of the component be determined so that any project using the component could be charged appropriately? In this case, the value probably should be determined by utilizing the development costs as a base. That is in contrast to most attempts at evaluating this type of asset, which use as an estimate the number of times the asset is expected to be used. That type of estimate is relatively unreliable because usage is difficult to predict. Even if the asset is used in a project, its worth would stay the same if the asset is still considered to be viable. The value obtained through use could be thought of as a cost avoidance revenue. Methods of valuing software components as one specific case of great interest are discussed in Chapter 14. As difficult as it is to estimate the worth of a software component, it is more difficult to estimate the worth of other automation assets, such as a business rule. The cost to develop a specific business rule is usually quite low, while the effects of the rule can be quite large, as indicated in previous examples. Because of the difficulty, the temptation would be to forego a valuation. However, some value must be placed on the asset if it is to be treated with the respect accorded the other enterprise assets and show that it is part of the asset conversion chain that ultimately results in revenue. Simply expensing the cost of creating the business rules is an informal indication that they have no continuing value and are, in a sense, throwaway items. The easiest approach to the valuation problem in this instance is probably the same as utilized for tangible assets. Assign the class of business rules a value consistent with the cost of creating them. While that probably understates their real value to a considerable extent, the major goal is to treat them as assets, not necessarily to provide as accurate a valuation as possible.
An absolute valuation of asset worth probably is not a reasonable goal to pursue anyway. The importance of assigning a value to the automation assets is to treat them in the same way as the other assets of the enterprise. At a minimum, the costs associated with their creation should be capitalized and not treated as an immediate expense. When that is not done, there is an increased tendency to view the assets as having little intrinsic value because no other effective way exists to estimate their continuing contribution to the operation of the enterprise. Even if financial accounting rules do not allow capitalization of those assets, this should be done from a management accounting perspective.
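The contrast between expensing and capitalizing an asset's creation cost can be made concrete with a small sketch. The straight-line treatment, useful life, and dollar figures below are hypothetical illustrations; actual financial and management accounting rules vary:

```python
# Contrast between expensing an automation asset's creation cost immediately
# and capitalizing it. Straight-line treatment for illustration only; the
# useful life and amounts are hypothetical, and real accounting rules vary.

def expense_immediately(cost):
    """All cost hits the current period; the asset carries no book value."""
    schedule = [cost]          # one-period expense schedule
    book_value = 0.0           # nothing remains on the books
    return schedule, book_value

def capitalize(cost, useful_life_years):
    """Cost is spread over the asset's life; book value declines each year."""
    per_year = cost / useful_life_years
    schedule = [per_year] * useful_life_years
    book_value_after_year_1 = cost - per_year
    return schedule, book_value_after_year_1

# A $90,000 business-rule development effort with an assumed 3-year life.
exp_schedule, exp_book = expense_immediately(90_000)   # [90000], 0.0
cap_schedule, cap_book = capitalize(90_000, 3)         # [30000.0]*3, 60000.0
```

The point of the comparison is the one made in the text: immediate expensing leaves the asset with zero book value, which informally signals that it has no continuing worth, while capitalization keeps its contribution visible on the books over its useful life.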
6.4.3.3 Tangible liabilities

As with assets, an enterprise can change its liabilities by performing an appropriate enterprise financial event. The major difference is that the valuation of tangible liabilities usually is not difficult: the liability almost always is the amount owed in dollars. The connection with the automation assets usually is indirect because there is no concept of liability, per se, in the asset management system. Assume that a decision is made to
procure a certain software component and make it an automation asset. If the purchase price is borrowed, then a liability is created that is associated with the asset. The association is relatively unimportant, however, since the key is the availability of the asset itself, not the means by which it was financed.
6.4.3.4 Intangible liabilities

In many cases, the intangible categories are more important than the tangible financial categories. The same is true for the liability aspects of the asset management system. One of the most important intangible liabilities is negative customer perception of the enterprise or its products. If that perception grows large enough, it can easily overwhelm the assets of the enterprise and have the same effect as if the tangible liabilities were much greater than the assets: it can throw the enterprise into bankruptcy. The relatively recent near-bankruptcy experiences of Ford Motor Company and Chrysler Corporation attest to that fact.

What does the asset management system have to do with intangible liabilities? The answer is that, properly considered and utilized, it can go a long way toward preventing the occurrence of this type of liability. One of the main purposes of the asset management system and associated assets is to allow the enterprise to standardize and measure the effectiveness of its operations and thereby improve the service to its customers. The asset management system, or lack thereof, directly affects the existence of any intangible liability.

6.4.4 Equity categories

Because equity is merely assets less liabilities, there is no need to treat equity as a separate topic, especially as it relates to the asset management system. Although an enterprise spends a significant amount of time raising equity through stock sales or other activities, the goal is not really to increase equity; it is to increase the assets available to the enterprise without increasing liability. By focusing on the increase in assets, the equity is carried along. Because intangible assets and liabilities have been defined as possible categories of both the static and dynamic enterprise models, the concept of intangible equity also must be defined.
In the same manner as that defined for the tangible categories, this type of equity is merely the difference between the intangible assets and the intangible liabilities.
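The equity relationships just described reduce to two parallel equations, one for each value type. A minimal sketch follows; the class and field names are illustrative assumptions, and the book's full model contains more categories than the four shown here.

```python
# Sketch of the equity relationships described above: equity is simply
# assets less liabilities, computed separately for the tangible and
# intangible sides of the model. All figures are illustrative.

from dataclasses import dataclass

@dataclass
class FinancialPosition:
    tangible_assets: float
    tangible_liabilities: float
    intangible_assets: float
    intangible_liabilities: float

    @property
    def tangible_equity(self) -> float:
        return self.tangible_assets - self.tangible_liabilities

    @property
    def intangible_equity(self) -> float:
        # Defined in the same manner as the tangible case.
        return self.intangible_assets - self.intangible_liabilities

pos = FinancialPosition(
    tangible_assets=500.0, tangible_liabilities=300.0,
    intangible_assets=200.0, intangible_liabilities=250.0,
)
# Note that the intangible side can be negative even when the tangible
# side is healthy -- the "negative customer perception" case in the text.
```

By computing both sides separately, the sketch makes the chapter's point concrete: an enterprise can be solvent on paper while carrying an intangible deficit that the tangible balance sheet never shows.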
6.5 External financial events

External financial events are occurrences that:
§ Originate from outside the enterprise;
§ Are not controllable by the enterprise;
§ Change the value of at least one of the 11 financial categories of the financial event model.

External financial events represent the randomness of the enterprise environment and the need for the enterprise to operate in such a manner as to reduce the effect of those events to a minimum. From a financial perspective, that means defining and utilizing internal financial events in such a way that the enterprise is least vulnerable to outside forces. This is a type of risk management that should be explicitly employed in any financial and operational planning for the enterprise. Adding a representation for external financial events to the enterprise dynamic model completes that model. The dynamic model, along with the augmented static model developed previously, can now be used to structure the evaluation of any proposed financial event.
Although in theory external financial events can change any of the categories of the model, they most frequently induce changes in the intangible assets and liabilities. However, tangible assets and liabilities also can be affected by some classes of external financial events. For example, when they reflect changes in the competitive environment, external events can affect the valuation of inventory or other assets. The value of a liability also can vary if it depends on foreign currency values and an external event causes those values to change.
6.6 Complete model

Figure 6.6 shows the completed financial model. The model contains three parts: the augmented equations of the static model, the augmented dynamic model, and the financial event model. The purpose of the model is to make it possible to consider the static and dynamic effects of a proposed or completed enterprise financial event.
Figure 6.6: Complete financial model.

The model does not provide information as to how the value of each of the financial categories will change as a result of a specific financial event. Nor does it specify what procedures are to be followed to obtain that type of valuation information. Definitions of those procedures are highly situational and depend on the specific structure and business of the enterprise. Although it is a difficult and probably resource-intensive exercise, performing the activities necessary to estimate all the categories for proposed or completed financial events will provide a great deal of the information needed for the effective and cost-efficient operation of the enterprise.
6.7 Implications

The implications of the discussions in this chapter on the asset management system are many. Overall, they advise caution in utilizing a financial argument to justify a proposed enterprise activity. From the perspective of the asset management system, which is the major focus of this chapter, they reinforce the need to consider the asset management system assets as true contributors to the health of the enterprise and the need to expend the resources necessary to operate the system effectively.

In effect, the automation assets are the raw material needed to construct parts of the enterprise automation environment asset. That asset, in its support of enterprise operations, is used to realize revenue. If the raw materials are lacking or faulty in some respect for the use that will be made of them, the assets built from them may require more resources and time than otherwise would be necessary. In addition, the resultant assets may not provide the customers with the expected results. The resulting effect on the tangible and intangible financial categories of the enterprise can be quite negative
and far greater than would be expected from considering only the value of the assets themselves.

The major points of this chapter can be summarized as follows:
§ The financial model employed in evaluating courses of action is of crucial importance in obtaining suitable results.
§ The dynamic model dimension of enterprise financial condition is at least as important as the static model component in determining the effects of an activity.
§ The intangible financial categories are important in evaluating the effects of any proposed or completed enterprise activity, and, in fact, their effect may be much greater than that of the tangible categories.
§ Financial events cannot be considered in isolation but must be examined as part of an overall environment. If that is not done, the combined effect of the individual events will not produce as effective a result as one that accommodates more global considerations.
§ Allocation of revenue to financial events is as important as the allocation of costs. That type of allocation reduces the tendency to think only in terms of expense and produces a more balanced outlook as to the necessity of the event being considered.
§ Using financial arguments to support the desirability of taking some enterprise action should require an analysis of the possible effect on all the financial categories of the model, including the intangible ones.
§ Automation assets are the raw material of the enterprise and, as such, influence the effectiveness of the revenue-producing assets and the financial health of the enterprise to a degree usually far greater than their assigned value.
Chapter 7: Planning and strategy

7.1 Explicit asset recognition

The explicit recognition of the intangible automation assets as “real” enterprise assets is fundamental to providing a flexible automation environment that can accommodate continuous change in the enterprise. Without that recognition, a significant amount of the accumulated wealth of the enterprise cannot be utilized, and additional resources must be spent to provide the needed automation support.

Unfortunately, the operation and behavior of the enterprise must change for automation assets to be utilized in a manner similar to that of the physical assets. This type of change is difficult, and that difficulty is one of the major reasons an asset-based approach has not been widely adopted for automation development. An explicit management system that addresses the major requirements of these assets can help the enterprise determine how to utilize those assets and what changes must be put in place to make their use effective. Such a management system has been specified, and the components of the system have been discussed in enough detail to provide a basic understanding of the principles involved.

The management system may seem too complex to be utilized effectively, may require technology that does not yet exist, or may absorb too many scarce resources to sustain its operation. These criticisms can certainly be valid if a rigid, dogmatic approach is adopted. That was not the intent of the presentations. Not every aspect of the management system must be present, and those aspects that are utilized do not necessarily need to be perfect. With an understanding of the purpose and utilization of the entire management structure, especially the interactions between the various components, the environment can be tailored to the needs of a specific enterprise. When problems occur, the required changes can be made based on knowledge of the dynamics of the environment.
A popular saying states, “The devil is in the details.” That means high-level generalized approaches can always be made to sound good, but the details necessary for implementation can easily cause them to fail. Like all good sayings, there is a measure of truth in it. Another popular saying holds that “if you don’t know which way you want to go, any road will get you there.” That means, of course, that before beginning anything and worrying about the details, it is useful to have a general understanding of what is to be done. Obviously there is truth in the second statement also.

Both directions can be summarized by a statement attributed to General George Patton: “Planning is invaluable; a plan is useless.” The interpretation is that knowledge of what can be done, and at what cost, is required to understand the environment. Selection of a specific plan thought to be suitable for the environment, while necessary, is always subject to attack and subsequent revision. Knowledge of the details of the environment makes meaningful revision possible. That is the reconciliation of the first two slogans. Understanding the overall environment is necessary, and the selection of a general direction based on that knowledge will start the actions required to reach the intended goal. As the details unfold, the willingness to admit problems, reevaluate directions, and change accordingly is absolutely vital if forward progress is to continue.

Why go through this discussion? Because automation asset management, to produce the results expected, requires exactly this type of approach. Planning is necessary to understand and effectively utilize the entire management system. The details are subject to change as the implementation and utilization proceed. Combine operations management and configuration management? If the circumstances dictate, why not! Consider a less than perfect repository? Some capabilities are better than none!
Identify a repository that accommodates the most needed functions. Create the wrong type of asset or asset class? Replace it! There is no way to know in advance all the variables and conditions that will be encountered, even if the personnel involved are experienced in the activity. If the automation assets are considered to be the raw materials of an enterprise asset chain, as discussed in Chapter 6, there will always be “spoilage and scrap,” to use a manufacturing analogy. That is to be expected and, as long as it does not exceed a predetermined level, is a normal part of the automation asset system. The cost of being perfect usually is much too high, and perfection usually cannot be achieved even if attempted. That merely underscores the fact that keeping asset management running properly requires constant attention. These types of evolutionary changes do not indicate a faulty approach. On the contrary, they indicate a healthy willingness to discover and adapt to new information. The automation methodology, discussed in Part III, is based on this type of adaptive change, and it applies the approach on a more structured basis than is possible in the current discussion.
7.2 Wasted efforts

Most attempts to use a managed automation asset approach start off with a great deal of enthusiasm and energy. Whether the enterprise initiates an asset system because of a desire to reuse software components, move to a management-by-process structure, or utilize object-oriented approaches, some expenditure of resources and changes in operation are usually considered necessary. As time passes, the constraints required by automation asset management begin to chafe, the resources utilized in its operation are coveted by other operational groups, and the initial resolve can weaken greatly. Unless the enterprise has a good understanding of why the management system was started and the advantages that can be obtained from it, the initial enthusiasm and energy will disappear.
The worst thing that can happen to the management system and its associated assets is neglect. Although the effects of neglect can vary, the ultimate result is that the automation assets do not contribute to the good of the enterprise, no matter how well the assets are specified and designed. Frustration and a feeling of helplessness then become the dominant conditions. Neglect begins when there is a general feeling in the enterprise that managing the assets is an unnecessary expense and that what is being accomplished is not really necessary or could be performed more cost effectively if distributed to other enterprise areas. Without a financial model and management accounting system that indicate the real value provided, this type of attitude is hard to refute. A robust management system that is adequately supported and utilized as the foundation for all automation and software activities contributes greatly to the success of the enterprise. However, if that is not the case, and the management activities are merely going through the motions, there is no reason to continue to waste the resources used in a meaningless exercise. As competitive pressures increase, a meaningful asset management system almost certainly will return. In the meantime, the saved resources can be used to stave off bankruptcy for a little while.
7.3 Technology evolution

As indicated in Chapter 1, all the technologies involved in enterprise automation are changing at a rapid pace. The ones in use at any given time ultimately will be replaced by others with different characteristics and utilization needs. Unless there is an excellent understanding of the underlying engineering principles and a structure or model on which possible changes can be evaluated, it will not be possible to evolve and change the approaches to automation in a timely and effective manner. In many cases, unless it is corrected, that lack of technology responsiveness will cause the enterprise to begin a slow and possibly painful decline. While business needs should always drive the utilization of technology, if there is no technology to be driven, the result will be equally unsatisfactory.

Automation asset management can form an important component of the structure necessary to evaluate the introduction and utilization of technology. As the management system is used in enterprise operation, much will also be learned about what works, what does not work, and what is missing in the specific environment involved, from both a technology and a business perspective. That learning is a continuing process and must extend throughout all aspects of the management system and its associated assets. What did not work in the past may now be enabled through technology. What worked previously may be superseded. Entirely new approaches may become feasible. A knowledge of the underlying principles is again the main enabler for the analysis of the information obtained and for the definition and implementation of the necessary changes.
Part II: Automation assets

The automation assets presented in Part II are those that are utilized in implementing the enterprise automation environment as part of the design, part of the operational environment, or both. As discussed in Chapter 1, the needs of the enterprise in the current business and technical climate indicate that a process-based approach be adopted. That means the automation methodology must be able to efficiently convert a business process specification into a workflow structure that is utilized by the enterprise automation environment. The implemented business process should be compatible with the computing infrastructure and other business process implementations that are part of the automation environment. The definition and modeling of the needed assets could be accomplished concurrently with the specification of the methodology design. Having had experience with both types
of presentation methods, the author feels that it is better to present the automation assets separately from the methodology, for two reasons:
§ First, each automation asset is of interest in areas outside its specific use in the methodology. Independent consideration can include interesting aspects of those other uses.
§ Second, the flow of the methodology discussion would be too fragmented by the frequent stops that would be necessary to develop the needed asset models.

The major disadvantage of separating the models from the methodology discussion is that it can be difficult to describe exactly why a specific concept or approach was selected. In addition, it probably will be necessary to refer to the presentations in Part II as the methodology construction proceeds.

Three types of assets are considered: those used as part of the business process representations, those that provide the implemented process representation, and those that have aspects of both representation types. The business process representation assets are processes, scenarios, and roles. The client/server structure, software components, and workflow assets are part of the implemented process representation. The data, dialog, and action assets are considered to be part of both types of representations. The order of discussion of the assets is approximately that in which they will be needed in the methodology discussion. Although there are significant interrelationships between the assets, they are considered only as needed to fully explain the model or utilization of an asset. That usually occurs when one asset is used as a component of another. Many of the other relationships are examined as a consequence of their use in the process implementation methodology.
Chapter List

Chapter 8: Process modeling
Chapter 9: Scenario modeling
Chapter 10: Role modeling
Chapter 11: Information modeling
Chapter 12: Client/server modeling
Chapter 13: Dialog and action modeling
Chapter 14: Software component modeling
Chapter 15: Workflow modeling
Chapter 8: Process modeling

The need for a transition to a process management approach was well documented in Chapter 1, and a process approach to enterprise automation is assumed. This chapter concentrates on developing unit and class asset models for business processes in sufficient detail to serve as the starting point for the process implementation methodology in Part III. Although there is only one class model, there are multiple unit models because of the different purposes for which they must be used.

Before proceeding, it is necessary to discuss the use of the term process and the terminology in general. The term process is commonly used to refer to (1) a process component at any level of the class model, (2) the parts of its eventual representation in a graphical form, and (3) a deployed implementation of a process. That causes considerable confusion and difficulty in determining the proper context of any given use of the word. To partially alleviate that problem, when the context is not clear from the discussion herein, the word process is qualified to indicate the particular context in which it is being used. Those qualifications are introduced as required. In addition, process, for the remainder of the discussion, refers to a business process (i.e., one used to support the operation of the enterprise).
8.1 Process implementation

To be useful in the day-to-day functioning of the enterprise, the process model must be implemented in some form. One failing of many process reengineering and management-by-process efforts is that the entire focus is on process definition; very little attention is paid to the process implementation needs and the associated life cycle management requirements. Process implementation requires design, fabrication (build), and operate activities as well as an oversight function to determine the effectiveness of the implementation. Those activities are the same as the ones necessary in the production and utilization of any asset. Although used informally here, those activities are defined in additional detail in this and other chapters.

Although life cycle management is important for any automation asset, it is particularly important for processes, and considerable attention is given to that aspect. One approach to defining and utilizing a complete process life cycle is discussed in Section 8.2. It is concerned not only with process definition but also with implementation and the changes that occur in the normal course of business.

Because of the importance of the process as the unit of enterprise automation, a concerted effort is made to differentiate a process implementation from a traditional functional system implementation. Even though it sometimes leads to awkward sentence structure, the term process implementation is used to refer to an implemented process along with all the necessary support software. The term system is not used. Where the context is reasonably clear, the word workflow occasionally is used to indicate a process implementation.
8.2 Process life cycle model

To be useful, the process life cycle model must address all the aspects of the life cycle discussed in Chapter 3. In addition, it must serve as a unifying structure on which all the diverse terminology and concepts that are applied to business processes can be placed. Such a model is presented in Figure 8.1, which is used as the basis for the remainder of the discussion of management by process. It is a form of E-R diagram, although there is no attempt to show all the possible relationships. That would make an already complex chart impossible to understand and would certainly defeat its purpose.
Figure 8.1: Process life cycle model.

Although the model itself has some process-like characteristics, it is not discussed in that context. The process qualities are not complete and would only add confusion at this time. The discussion of the implementation methodology in Part III that enables that view of the life cycle focuses on the process view.

The notion of a process life cycle is separate and distinct from the notion of a software life cycle, although the two interact closely. The two life cycles are sometimes confused because software frequently is the mechanism used to support process implementations. To illustrate that important point, remember that a process could be implemented completely by human effort. Such a process certainly would have a life cycle, depending
on the needs of the business, but it would not require any software support. In a process that requires software support because of advancing technology, several generations of software, each with its own life cycle, could be employed without changing the process. Alternatively, for competitive reasons, a process could undergo considerable change during its life with corresponding implications on the software that supports it. In fact, one of the goals of the new view of process support software is to allow a process to undergo considerable change without the need to greatly change the underlying support software, which is designed to be process independent. Techniques for realizing that goal are examined in the discussion of software components in Chapter 14 as well as in Part III. The model defines four main subcycles of the overall process life cycle: plan, design, operate, and manage. The plan cycle is further divided into two additional subcycles: the business events plan and the business structure plan. Cycles are called cycles because they may be visited over and over during the lifetime of the process. They are not considered phases in the sense that, once a phase has been completed, it is never again invoked for the same development instance. In the methodology discussion, the term spiral is used to indicate a repeatable set of activities. That is in keeping with generally used methodology terminology. Using two different terms also helps differentiate between the life cycle model and the methodology that enables it. Each subcycle is discussed briefly to ensure that the definitions of all the assets in the subcycle, their major relationships, and their place in the overall enterprise process context are well understood. Details of most of the assets of interest are contained in other chapters. The purpose in this discussion is to understand the life cycle and its components as a whole. 
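The subcycle structure just described can be summarized in a small sketch: the four main subcycles, the two subdivisions of the plan cycle, and the defining property that a cycle, unlike a phase, may be revisited over and over. The representation below is an illustrative assumption for discussion, not a structure defined by the book or by Figure 8.1.

```python
# Illustrative sketch of the subcycle structure described above.
# Cycles (unlike phases) may be revisited repeatedly over the life
# of a process, so each one carries a visit count rather than a
# completed/not-completed flag.

PROCESS_LIFE_CYCLE = {
    "plan": ["business events plan", "business structure plan"],
    "design": [],
    "operate": [],
    "manage": [],
}

class Cycle:
    """A life cycle subcycle that can be entered any number of times."""
    def __init__(self, name: str):
        self.name = name
        self.visits = 0

    def enter(self) -> None:
        self.visits += 1   # revisiting is normal, not an error

cycles = {name: Cycle(name) for name in PROCESS_LIFE_CYCLE}
cycles["plan"].enter()
cycles["design"].enter()
cycles["plan"].enter()   # the plan cycle is visited again -- allowed
```

The visit counter is the whole point of the sketch: modeling a subcycle as something that accumulates visits, rather than something that completes once, captures the cycle/phase distinction the text draws.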
8.2.1 Plan cycle

The plan cycle is where the enterprise defines the business it is in and how it intends to support that business. In many, if not most, businesses, the definition is informal and has grown and changed throughout the life of the enterprise with little or no attention. Even businesses that devote a considerable amount of their resources to a planning function of some type do not have a model for describing and quantifying their expected interactions with customers or the structure by which they intend to meet the requests of those customers.

Most businesses have been quite successful over a long period of time without a defined approach. That was possible because of the inherent constraints imposed by the hierarchical organization and function-based automation support. Those conditions, to a great extent, mitigated the lack of formalized models. With the change to a relatively flat, process-based organization and client/server distributed automation support, the lack of a planning model puts any enterprise at a considerable disadvantage. That disadvantage is external with respect to competitors who are able to model and understand their business. The disadvantage is also internal, in that the efficiency of operation will be far less than it could or should be with a well-thought-out model.

For the remainder of this discussion, it is assumed that the enterprise has a formal planning structure of the type described here. Although other approaches to the plan cycle can be defined and successfully utilized, a discussion of their structure and characteristics and how they would fit into the other subcycles is beyond the scope of this presentation. The plan cycle is used to determine the definitions of the relevant business events and the enterprise operations processes.
It contains two basic types of structures: (1) those devoted to understanding what the customer intends to request of the enterprise and expect in return and (2) those devoted to how the enterprise intends to respond to those requests. It would be expected that the two types of structures would be closely interwoven and their specifications performed in concert. In practice, that is not the normal situation. Both types of plan structures usually are defined and specified independently. The customer interaction definitions are provided by front-line or first-line supervisory personnel, who are intimately involved with the
customers and understand their needs and expectations. The business structures, including high-level process definitions, generally are provided by executive personnel, who are far removed from “real” customers; those structures result from history, internal politics, and, in many cases, a lack of understanding of how the needs of the customers are being met.

Considerable friction results from those two parts of the planning function being independent degrees of freedom. What should be a smooth response to a customer request or other business event becomes difficult when it is handled by a business structure not designed to accommodate it. Process reengineering was supposed to fix that problem, but the existing organization structure proved to be extremely resistant to change. The organization can remove layers, it can downsize, it can change technology, it can emphasize process over function, but it usually retains the old organizational structure boundaries. The engineering organization remains the engineering organization; the sales organization remains the sales organization; and the manufacturing organization remains the manufacturing organization. Whether one organization or another is the best one to meet the needs of the customer is relegated to secondary importance. There is turf to protect, careers to consider, and a general fear of the unknown. When everything around you is changing, something usually is singled out to remain stable. That “thing” in current enterprises is the existing organizational structure.

Because the disconnection between defining customer needs and structuring the enterprise to provide for those needs is unlikely to change, the best approach is to develop frameworks and procedures designed to accommodate the two different views and mitigate any resultant friction and conflict. That is the approach taken in this presentation and in the methodology presentation in Part III.
Experience has shown that such harmonization has worked well when put into practice. Sections 8.2.1.1 and 8.2.1.2 discuss the plan cycle areas and their major interrelationships.
8.2.1.1 Business events

Business events are occurrences that elicit some type of response from the enterprise. The purpose of predefining expected events is to determine how to produce an effective response. An effective response is one that satisfies the requester (if the event is a request), is cost effective for the enterprise, and is consistent with the enterprise view of its purpose. Business events can be external or internal. They can be caused by customers, suppliers, employees, or other individuals associated in some way with the enterprise (including those with criminal intent). Business events also can arise from nonhuman sources, such as storms and earthquakes.

To illustrate the usefulness of explicitly defining business events, consider the following example. An enterprise is in the retail shoe business, and a customer requests the custom manufacture of a shoe. This event could be handled by the enterprise in a variety of ways:

§ The request could be rejected outright if the enterprise does not want to be in the custom shoe business and has no knowledge of organizations that perform that type of work.
§ As a service to the customer, the requester could be referred to another enterprise that manufactures custom shoes, with no further involvement on the part of the retail establishment.
§ The order could be taken by the retail shoe business, sent to another enterprise for manufacture, and returned to the retail business, which would then deliver it to the customer.
§ The retail shoe business could manufacture the shoe itself using its own facilities and resources.

The possible responses to the same business event are listed in order of the business’s increasing involvement in the event. There is no right or wrong response to this event. Each response is possible depending on what business the enterprise believes it is in, what its resources are, and how it presumes its customers should be treated. However, if there is no attempt to predefine expected business events, the business will not have the opportunity to make this type of decision explicitly. That could result in, among other problems, lost revenue opportunities, inefficient use of resources, and unhappy customers.

The major automation asset used in the plan cycle is the scenario. Scenarios represent detailed business events that the enterprise expects to occur in the course of business and that it wants to be able to address effectively and efficiently. Scenarios provide a framework on which to model and examine the definitions and consequences of business events that the enterprise is prepared to handle. Scenarios are described in detail in Chapter 9.

In addition to providing a structure on which to model business events, scenarios are also useful to distinguish planned business events from those that actually occur during the operation of the business. As would be expected, planned business events are referred to as scenarios, while actual ones are referred to as business events. Although that may seem a small point, considerable confusion occurs when the same term is used in the planning sense and in the operational sense.

Scenarios are grouped by “story,” and many scenarios can have the same story. Individual scenarios are differentiated by the context, or specific set of circumstances, associated with the scenario. The context for each scenario in a story group must be different. As an example, several scenarios can have the story “a customer places an order.” Individual scenarios with that story have different contexts. For example, the context could be formed by attributes that indicate the customer’s size, credit rating, frequency of ordering, and size of order. Each unique set of values for those attributes would be a different context and hence a different scenario.
Depending on the specific need, scenario story groups or individual scenarios could be used in the individual subcycles of the process life cycle.
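As an illustration only (the book does not prescribe a data structure for scenarios), the story/context distinction and the grouping of scenarios into story groups might be sketched in a few lines of Python; all class, field, and attribute names here are hypothetical:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Scenario:
    """A planned business event: a shared story plus a distinguishing context."""
    story: str       # the story shared by every scenario in the group
    context: tuple   # unique attribute values, e.g. (size, rating, frequency, order size)


def group_by_story(scenarios):
    """Collect scenarios into story groups; contexts within a group must differ."""
    groups = {}
    for s in scenarios:
        groups.setdefault(s.story, []).append(s)
    return groups


# Several scenarios share the story "a customer places an order" but are
# differentiated by context attributes (all values below are invented).
scenarios = [
    Scenario("a customer places an order", ("large", "good", "frequent", "small")),
    Scenario("a customer places an order", ("small", "poor", "rare", "large")),
    Scenario("new equipment arrives", ("leased",)),
]
groups = group_by_story(scenarios)
```

Making `Scenario` frozen (immutable) means identical story/context pairs compare equal, which matches the rule that each scenario in a story group must have a distinct context.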
8.2.1.2 Business structure

This cycle contains assets related to the structure of the enterprise and includes process definitions as well as the specification of the organizational units and their responsibilities. Ideally, these assets should be contained in a comprehensive business model along with the enterprise view of other models of critical importance, such as a data model and a network model, as appropriate for the business. This business model could be used in a number of different ways, including comparison of the business to competitors, determination of the validity of software product requirements, and measurement of the internal efficiency of the business. Unfortunately, in most cases, a formal business model does not exist, or it is at such a high level that no real use can be derived from it. In that case, the necessary models still can be defined, but they may not be as tightly coupled as would be desirable.

Process class model

If a business model does not exist or does not contain a process class model, one must be created. That is accomplished by defining the overall business operations process and then decomposing it into subprocesses. The decomposition continues until processes are obtained that can be implemented through manual and automated operations and deployed so that individuals can utilize them in performing their job assignments. Such a procedure is illustrated in Figure 8.2. The root process is decomposed into branch processes, which are then further decomposed as necessary until the leaf processes are reached. The leaf processes eventually are implemented and deployed. The root process and the branch processes are utilized only to arrive at the implementable leaf processes. The resultant structure defines part of the class model. The remaining part of the class model is determined by the relationships among the leaf processes.
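The root-branch-leaf decomposition just described can be sketched as a simple tree walk; this is a hypothetical illustration (the process names and the `Process`/`leaf_processes` identifiers are invented, not part of the methodology):

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Process:
    """A node in the decomposition tree; leaves have no subprocesses."""
    name: str
    subprocesses: List["Process"] = field(default_factory=list)


def leaf_processes(p: Process) -> List[Process]:
    """Walk the tree; only the leaf processes are implemented and deployed."""
    if not p.subprocesses:
        return [p]
    leaves: List[Process] = []
    for child in p.subprocesses:
        leaves.extend(leaf_processes(child))
    return leaves


# Root -> branches -> leaves, in the spirit of Figure 8.2 (names invented).
root = Process("business operations", [
    Process("order handling", [Process("order entry"), Process("billing")]),
    Process("manufacturing", [Process("production"), Process("maintenance")]),
])
names = [p.name for p in leaf_processes(root)]
```

The root and branch nodes exist only to arrive at the leaves, which is why the walk returns leaves alone.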
Figure 8.2: Decomposition of the business process.

The definition of the leaf processes that will best support the business is one of the two most important activities in a management-by-process approach. The other activity is implementation of those leaf processes. When all is said and done, specification and implementation of the leaf processes are the heart and soul of process engineering.

With any decomposition procedure, there is always a question as to how many levels are needed and what the termination criteria should be. In the case of business processes, it is strictly an issue of management of complexity and resources. Process decomposition should continue until the leaf processes are of such a number and complexity that the enterprise feels comfortable about its ability to define each of the processes in detail.

Another important question in process decomposition is the information that should be included in the initial definition of each process. At a minimum, the name; description; relevant business events, products, and services; business rules; and resources utilized should be included in this high-level model. For example, consider a leaf process defined to satisfy some maintenance business events. The process initially could be documented as shown in Table 8.1. Eventually, as detail is added in Chapters 13 and 15 and Part III, other models of the leaf process are specified and introduced during the appropriate discussions.

Table 8.1: Example of Leaf Process Documentation

Name:
Maintenance process

Description:
The purpose of the maintenance process is to keep the equipment in good operating order and to minimize the time a given piece of equipment is unavailable. The maintenance process must also provide for the timely repair and testing of inoperable equipment.

Resources:
The maintenance process utilizes dedicated resources (people and facilities). It has a fixed yearly budget for all aspects.

Business rules:
The maintenance process is not responsible for repair in the aftermath of a disaster, such as a fire or flood. The maintenance process is responsible for initial setup and certification of new equipment. Except in emergencies, it cannot completely shut down another process or result in a significant increase in manufacturing costs.

Business events:
Report of inoperable equipment
Arrival of new equipment
Time for equipment maintenance

Products and services delivered:
Equipment repair
New equipment certification
Scheduled equipment maintenance
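For illustration, the minimum documentation fields of Table 8.1 might be captured as a small record type; the `LeafProcessDoc` class and its field names are hypothetical, not prescribed by the methodology:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class LeafProcessDoc:
    """Minimum initial documentation for a leaf process (cf. Table 8.1)."""
    name: str
    description: str
    resources: str
    business_rules: List[str]
    business_events: List[str]
    products_and_services: List[str]


# The maintenance process of Table 8.1, abbreviated, as a record.
maintenance = LeafProcessDoc(
    name="Maintenance process",
    description="Keep equipment in good operating order and minimize downtime.",
    resources="Dedicated people and facilities; fixed yearly budget.",
    business_rules=[
        "Not responsible for repair in the aftermath of a disaster.",
        "Responsible for initial setup and certification of new equipment.",
        "Except in emergencies, may not shut down another process.",
    ],
    business_events=[
        "Report of inoperable equipment",
        "Arrival of new equipment",
        "Time for equipment maintenance",
    ],
    products_and_services=[
        "Equipment repair",
        "New equipment certification",
        "Scheduled equipment maintenance",
    ],
)
```

A structured record of this kind makes the high-level model queryable, so that, for example, every leaf process responding to a given business event can be located.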
Organization charts

The most often created model of an enterprise is not the process model but the organization chart. Almost every business with more than a few employees has one. An organization chart defines the partitions of the business (usually called organizations), the responsibilities (usually called functions) of the organizations, and the reporting relationships of the organizations. The organization chart is a static model that is suited to the hierarchical structures of the past. It cannot show how the enterprise organizations would respond to a business event. Responses to business events generally were documented informally or through written practices or procedures. Usually, the process by which an event was handled was “just known.” For example, “First the order goes to the order department, then it goes to the engineering department, then it goes to the manufacturing department, and finally it goes to the shipping department.” Other departments may be involved for billing, purchasing, and so on, and their roles in responding to the order usually are defined in the same informal way.

Those major partitions of an enterprise probably are not going to change significantly. Almost always, too much history and politics are involved. How, then, is the enterprise going to formally model its response to expected business events without eliminating the organization chart? The current and most obvious answer is to define each major partition as a process. The billing organization gives rise to the billing process, the shipping organization defines a shipping process, and so on. This type of partitioning occurs quite often, even if the enterprise pretends that it is entirely process based and defines and decomposes high-level processes in the manner presented in Figure 8.2. Decomposing those high-level processes seems to result in a one-to-one relationship between a leaf process and an organizational unit.
The major problem with that one-to-one relationship is that an organization-based process may not be the most appropriate one to handle a set of expected business events. That is especially true for those events that need more than one organization to be involved in the response (the usual case). It is also possible, even with the best of intentions, to arrive at leaf processes that are inappropriate for the set of business events that must be serviced. That can result from inexperienced staff performing the decomposition or from lack of knowledge concerning the vision and the goals of the enterprise. What happens in that case is that the vertical automation silos (discussed in Chapter 1) resulting from the traditional organization- or function-based approach are replaced with horizontal silos based on an organization approach to process definition. The horizontal silos are, in essence, the vertical silos rotated 90 degrees. Figure 8.3 illustrates this rotation for the vertical silos of Figure 1.2. In either case, because of the lack of integration, the silos inhibit the development of the needed end-to-end processes. Fortunately, there is a relatively easy method of mitigating problems with leaf process definitions, regardless of how they occurred. That method is discussed in Part III.
Figure 8.3: Horizontal silos of automation.

Business rules

The general definition and structure for business rules were described in Chapter 5. Business rules as applied to processes are of particular interest in the context of the process life cycle. In that context, business rules are defined to be constraints on the enterprise processes or their relationships. Business rules can be defined that apply to all enterprise business processes, a set of processes, or only one process. Business rules can be defined at any level of process decomposition or specification, including implementation. Business rules can assume different forms and contain different information, depending on the level at which they are defined and the particular process(es) to which they are applied. Business rules also can be applied to different aspects of a process and its implementation, including the flow control of the process, the functionality required by the process, the data used in the process, and the assignment of human performers to the process.

Process maps

Once the leaf processes have been defined, considerable detail must be added to their specification to allow for their eventual implementation. A model that can accommodate that detail in a structured way and serve as a framework for the necessary analysis and testing must be defined. Because of the difficulty in working with the strictly text-based representation model utilized in the decomposition procedure, the new leaf process model is usually a combination of graphical, text, and data formats in a well-defined configuration. For simplicity, at the risk of introducing some confusion, this discussion of process representation and analysis utilizes the word process to mean a leaf process. Because only leaf processes are implemented, that should not cause undue confusion.
Detail is added to the process specification in several ways throughout the implementation period, which starts with the availability of the leaf process definitions and ends with product deployment and operation. Because the detail-adding activities are under control of the implementation methodology, most of this discussion is given as a part of the methodology presentation. Only enough of the procedures are covered to provide an understanding of and motivation for the overall life cycle and its individual components.

The detail that is added to the process specification during the business structure cycle is designed to accomplish two basic results. The first is to determine and specify, at a relatively high level, the functions, information flows, and skill definitions necessary to implement the process. The second is to determine if any of the automation assets have been previously defined so they can be reused. The structure and detail level of the process model must be such that reuse of any previously defined functions, information flows, and skill definitions is facilitated. There are guidelines for determining the proper size and complexity of those assets so that reuse is maximized; they are covered during the methodology discussion. The ability to reuse assets specified at the process level can result in significant savings in implementation and deployment costs.

One representation model that seems to allow processes to be detailed and examined effectively from the business perspective is the process map. The format of the graphical portion of a process map is shown in Figure 8.4. It consists of process steps arranged according to their assigned roles and precedence order. If desired, organization information also can be included, although it is not strictly required.
Figure 8.4: Example of a process map.

Process steps can be either functions or decisions. Process steps can occur in parallel, although the map usually tends to be highly sequential for two reasons:

§ Business function subject matter experts (SMEs), using explicit or implicit scenarios, tend to think in a sequential manner when determining the functions and decisions that can be utilized to realize a specific process.
§ The sequential format simplifies the use of the map in determining if the map assets can support the handling of associated business events as defined in appropriate scenario groups and individual scenarios.

Although the process map realization has certain characteristics that are not an inherent property of the process it represents (e.g., very sequential functionality), those artifacts do not matter as long as the map is successful in showing that the scenarios are capable of being satisfied through the defined functionality. The representation artifacts can be removed later during the design/build activities.

Roles are shown as horizontal segments on the process map. The purpose of roles is to define the skill sets necessary for performing a given part of an implemented process. The definition and utilization of roles are examined in Chapter 10. Optionally, organization units that contain those roles also may be incorporated in the same format as the roles. Incorporating organizations does provide an additional comfort level to many SMEs and their management, but it is not necessary and, indeed, may prove to be an unwanted distraction.
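The essential elements of a process map (steps that are either functions or decisions, roles as horizontal segments, and precedence order) can be sketched as data; this is a hypothetical illustration, and every class, role, and step name below is invented:

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Tuple


class StepKind(Enum):
    FUNCTION = "function"
    DECISION = "decision"


@dataclass
class ProcessStep:
    name: str
    kind: StepKind
    role: str  # the horizontal segment (skill set) the step is assigned to


@dataclass
class ProcessMap:
    steps: List[ProcessStep]
    precedence: List[Tuple[str, str]]  # (earlier step, later step) pairs

    def steps_for_role(self, role: str) -> List[str]:
        """Names of the steps lying in one horizontal role segment."""
        return [s.name for s in self.steps if s.role == role]


# A three-step map in the spirit of Figure 8.4 (contents invented).
order_map = ProcessMap(
    steps=[
        ProcessStep("take order", StepKind.FUNCTION, "order clerk"),
        ProcessStep("credit ok?", StepKind.DECISION, "credit analyst"),
        ProcessStep("ship order", StepKind.FUNCTION, "shipping clerk"),
    ],
    precedence=[("take order", "credit ok?"), ("credit ok?", "ship order")],
)
```

Keeping precedence as explicit pairs rather than a fixed sequence leaves room for the parallel steps the text notes are possible, even though maps in practice tend to be highly sequential.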
The process map framework requires that additional information be specified for each process step, including the following:

§ A comprehensive description of the process step (function or decision) in addition to a descriptive name of some sort;
§ Specification of any particular approach or algorithm that must be used in an implementation;
§ The information needed by the process step;
§ The information produced by the process step;
§ Operational data as known (e.g., throughput, number of performers, physical locations).

The additional specifications generally are documented in a separate text format so that the graphical map format does not become too cluttered. How the two formats are related and utilized is a tool implementation issue and is not discussed further at this time. The difference between information and data as specified in the process map is discussed in Chapter 11.

Connections between process maps

During the development of a given process map, the relationships that need to be established between its process steps and the process steps of other maps must be determined. That usually is performed at two levels, depending on the amount of information available concerning other maps. Level 1 simply specifies that there is a relationship between a process step (input or output) and another process. The existence of a relationship should be communicated to the developers of the other map involved (if they are a different set of individuals), but detailed coordination is not necessary at this level. The communication also should be made to any stakeholders (concerned individuals) of both maps. Level 2 requires identification of the specific process step in the related process map and the exact nature of the relation (e.g., precedence order of the two steps). This level requires agreement from the developers of the related map, who must reflect the same relationship in their map. Eventually, all process step relationships must be defined on a process step–to–process step basis, whether the two process steps are in the same map or in different maps.

During the initial development of the process map, the scenarios that form the scenario group specific to that map (e.g., billing scenarios for a billing process) are used as an aid to ensure that the map can accommodate those scenarios effectively. Without the availability of the scenarios, it would be much more difficult to determine if the map assets were capable of providing the response to the business events that the process was defined to handle. After the basic map has been developed, scenarios that require multiple processes for resolution can be used as an aid in determining the needs for specific process relationships. Process relationships are further refined as multiple processes are tested together.

Process understanding

While the development of the process map is oriented toward arriving at a representation of the process that can serve as the basis for an implementation, there is another important consequence of the map specification activities. That consequence is the fostering of a much improved understanding (from what is usually available when a process is first defined) of the function and the purpose of the process as well as its projected use in the enterprise.
Although it is an implicit byproduct of the map specification, this improved understanding probably has as much (if not more) impact on the final implementation of the process as the map structure itself. During the design/build cycle, it is relatively easy to find and correct any structural problems with the map (e.g., steps that should be decomposed or missing information flow). It generally is much harder to find and correct fundamental misunderstandings concerning the function and the use of the process in the enterprise (e.g., because of some natural affinity, parts of the process should be placed in other processes). Detection of the latter type of problem is best accomplished during the development of the process map representation, hence the emphasis on process understanding during development of the map.

Process prototypes

Construction of the process map is extremely useful in producing a greater understanding of the use and the context of the process and determining the process requirements for function, information, and skills. However, to be useful as an implementation vehicle, the process map must be tested as rigorously as resources permit to determine if it is constructed in accordance with the definition and the purpose of the original process. The testing is accomplished by utilizing all the scenarios that in any fashion require the functionality of the process map under test. If possible, for scenarios requiring multiple processes, it is also desirable to test the process in the context of the other processes so that the connections between them also can be tested. That does not have to be done at the time of initial process map testing, but it does have to be performed at some time before the completion of the design/build cycle.

Before testing starts, the process map is converted into a process prototype. If tool support is not available, that simply means that a wall-size copy of the process map is printed and declared to be the prototype. The process prototype engine in that case is human. (The author has been an engine of this sort many times and frankly enjoys the experience.) If tool support is available, the process map definition is made available to the tool that contains the process prototype engine. The specific mechanism used depends on the tool and the environment being used.

The process prototype is one of a number of prototypes that are defined and used throughout the life cycle of the process. Each prototype is used to test a different aspect or level of a process representation. It is only through the prototypes that the detail
required for process implementation can be tested for compliance with the definition of the original process and its associated business events.

Testing is initiated by selecting a scenario that is of critical importance. Using either a human or an automated prototype engine, the sequence of process steps that results from applying the conditions of the scenario to the process map is identified. There can be several possible outcomes from that activity. Some of those outcomes are outlined in the following list, along with an indication of the action that needs to be taken. In actual practice, there are many possible results and even more actions that can be taken, depending on the specifics of the problem and the process.

§ Result: A sequence of process steps occurs, ending with a normal termination point, that provides the desired outcome. Action: None required. The scenario test has been successful.
§ Result: A sequence of process steps occurs, ending with a normal termination point, that does not provide the desired outcome. Action: Change the process map so the desired outcome occurs. That may involve changing the definition of a process step, adding process steps, or changing the relationships between process steps.
§ Result: A sequence of process steps occurs, ending with an internal process step for which there is no reasonable successor. Action: Change the process map so the desired outcome occurs. That may involve changing the definition of the terminating process step or previous process steps, adding process steps, or changing the relationships between process steps.
§ Result: The scenario is not detailed enough to be able to identify the path that should be taken for at least one decision point. Action: Change the scenario so that the decision point is explicitly defined. This will also usually involve the creation of another scenario that will indicate that the opposite decision point should be taken.
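The mechanics of a prototype run (apply the scenario's conditions at each decision point, collect the resulting step sequence as the trace, and flag a scenario that is not detailed enough) can be sketched as follows. This is only an illustrative model of an automated prototype engine; the data structures and names are invented:

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Step:
    name: str
    # outcome label -> next step name; "" key = unconditional (function step);
    # empty dict = normal termination point
    next_steps: Dict[str, str]


def run_scenario(steps: Dict[str, Step], start: str,
                 decisions: Dict[str, str]) -> List[str]:
    """Walk the map from `start`, choosing decision branches from the
    scenario's context. The returned step sequence is the scenario trace.
    Raises KeyError when the scenario is not detailed enough to resolve
    a decision point (the last outcome in the list above)."""
    trace: List[str] = []
    current = start
    while current:
        trace.append(current)
        step = steps[current]
        if not step.next_steps:          # normal termination point
            break
        if "" in step.next_steps:        # function step: single successor
            current = step.next_steps[""]
        else:                            # decision step: scenario must decide
            current = step.next_steps[decisions[step.name]]
    return trace


# A small map (contents invented) and one scenario's decision context.
steps = {
    "take order": Step("take order", {"": "credit ok?"}),
    "credit ok?": Step("credit ok?", {"yes": "ship order", "no": "reject order"}),
    "ship order": Step("ship order", {}),
    "reject order": Step("reject order", {}),
}
trace = run_scenario(steps, "take order", {"credit ok?": "yes"})
```

A scenario whose context omits the "credit ok?" decision raises an error, mirroring the rule that such a scenario must be refined and a companion scenario created for the opposite branch.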
Once the scenario run is successful, the next most critical scenario is selected and the procedure repeated. That continues until several of the most important scenarios run successfully. The actual number of possible scenarios is extremely large, and only a fraction can be utilized in this procedure. The main purpose of the process prototype testing is not to test the process exhaustively (not a tractable activity) but only to obtain confidence that the process representation is reasonable. Achieving such confidence does not mean the process map will not change in the future (it undoubtedly will) due to many other possible factors. Successful runs of a number of critical scenarios at this stage of definition mean only that the process has been successfully represented in a form that can be communicated to all enterprise personnel who need a working knowledge of how the implemented process will function. That can be demonstrated through the use of the map and scenarios of interest.

The scenarios used in the testing of the process map are saved and will be used again when other representations of the process are defined during the design/build and operate cycles of the process life cycle. By using the same scenarios, the reactions of the process representations can be compared. That greatly assists in the determination of the correctness of a representation. Additional scenarios can, of course, be added to the test suite at any time during the implementation.

Regardless of how it is accomplished, the result of a scenario run against the process map is a process step sequence called a process scenario trace. Examples of scenario traces for the process map in Figure 8.4 are shown in Figures 8.5 and 8.6. Exactly how the sequence is identified, displayed, and documented is a function of the tool or human procedures utilized. The exact formats are not critical but need to be defined, with an efficient transfer of information as the main goal.
Figure 8.5: First example of process trace.
Figure 8.6: Second example of process trace.

When it is successful, the process trace produced by the scenario becomes part of the definition of the scenario. (That is explained further in Chapter 9.) Process scenario traces can be used in a number of different ways in addition to the determination of the success or failure of the application of the scenario, including initial cost estimates, initial performance estimates, and initial resource estimates for handling the business event represented by the scenario. Statistics for a scenario group also can be obtained. The information may show a need to modify the process map if the statistics indicate an unfavorable result (e.g., too expensive for the expected return).

8.2.2 Design/build cycle

The purpose of the design/build cycle is to translate the process map and associated information, as described in the plan cycle, into a product that can be deployed and used to handle business events. That involves changing the process representation format from that of the process map, which emphasizes the business needs of the process, to other formats that are more technically (implementation) oriented. The latter representations are intended to accommodate the detailed technical information necessary for eventual implementation in the enterprise environment.

As initially specified in the process map, there is no guarantee that the process is implementable. Environment, technology, and financial constraints may not allow the process to be effectively implemented without changes to the process map or the original process specification. Implementation-oriented process representations, which are defined during the design/build cycle, can more effectively indicate when the process cannot be effectively implemented and determine how the process map or specification should be altered.
Because the process map format looks similar to a process implementation (build) representation—a workflow diagram—there can be a great temptation to truncate or eliminate the design portion of the design/build cycle and implement the process map directly as a workflow. For the purpose of this immediate discussion, a workflow can be viewed as an implementation of a process that utilizes a set of tasks that are sequenced and scheduled by a manager programmed with the appropriate commands. (A more comprehensive treatment is presented in Chapters 15 and 24.) A workflow is typically depicted by a diagram showing tasks interconnected with arrows. Who wouldn’t want to skip the cost and time consumed in the design activity? Encouragement to perform the direct conversion from process map to implemented workflow comes not only from cost-conscious management but also from tool vendors who claim that their products are capable of making the conversion effortless.
As tempting as it might appear, the direct conversion almost always fails for a number of reasons related to the fact that the process map is not a workflow diagram, although the two graphs appear superficially similar.

§ The process map format lacks information essential to designing a workflow (e.g., detailed functionality, data, and workforce descriptions).
§ The purpose of the process map format is to gain a business-oriented understanding of the process and the functions, information, and skills involved. It is not intended to be directly implemented.
  o The process steps are significantly more sequential than necessary. Implementing them this way will reduce efficiency considerably.
  o The process steps generally are at the wrong level for implementation as workflow services or tasks.
  o The process steps generally are at the wrong level to determine if human or automated implementation is appropriate.
  o The process steps generally are at the wrong level to achieve significant function (software) reuse.

The activities defined for the design/build cycle are always necessary to achieve an effective process implementation. If they are not accomplished explicitly and efficiently as part of the cycle, they will be performed inefficiently as part of some other activity or cycle. To paraphrase a familiar saying, “The work that needs to be done is the work that needs to be done.”
8.2.2.1 Logical design

The process map serves as a vehicle to capture, explore, analyze, and specify the needs of a process from a business point of view. To translate that information into a form more suited as the basis for eventual implementation, a more technically oriented representation of the process must be created and utilized. The initial representation is called the logical design. The logical design comprises different parts, each emphasizing a different aspect of the information needed by the implementors. It is not necessary for the purposes of this discussion to define the logical design in detail; that is left to the methodology design chapters in Part III.

In general, there is no deterministic way to convert from a process map representation of the process to a logical design representation. The logical design results from a true design effort. Design requires a knowledge of the relevant design principles, a detailed understanding of the methodology involved, an awareness of the specific requirements of the process, and experience developing logical designs. Although the development of the logical design requires knowledgeable human intelligence, a number of guidelines and rules of thumb can facilitate the process.

The logical design representation uses a different model structure and is more detailed than the process map representation. Therefore, in many instances it is difficult to show an explicit relationship between process map elements and logical design elements. That can cause some concern to staff involved in the development who are familiar only with decomposition techniques for adding detail. With decomposition, there is always an explicit relationship between the elements at one level and the elements at a more detailed level. The lack of an explicit relationship also causes concern to management personnel who want to ensure that if a change is necessary to a logical design element, it is also reflected in the process map.
If that consistency is not maintained, the business-oriented staff and the technically oriented staff will not have the same view of the underlying process. That causes great difficulties for process-based enterprise management and must be avoided at all costs. However, the fact that there may be no explicit relationship between elements of the process map and those of the logical design does not mean that consistency between the two cannot be maintained. It does mean that some attention must be paid to maintaining referential integrity between elements of the two representations. Maintaining referential integrity, in general, requires the use of a repository and
associated tools. A repository is necessary for a number of other reasons, as explained in Chapter 4, so that is not an unreasonable requirement. In addition, management controls must be in place to require, if necessary, a reexamination and update of the relevant portions of a process map when the logical design with which it is associated changes. Of course, the same is true for a process map change: the logical design associated with the area of change also must be examined and modified as appropriate.

It is possible for the process map to change without affecting the logical design and for the logical design to change without affecting the process map. Although such an occurrence does not appear likely, it does happen in practice because of the rather loose coupling between the two representations. It is that loose coupling that provides a number of significant advantages over the usual decomposition techniques.

The logical design has five major purposes:
1. To incorporate information that is not available as part of the process map, for example, a user interface (UI) design;
2. To add detail to the information contained in the map, including further specification of the control, data, and functionality required;
3. To represent the process in a format that will provide a more efficient implementation than the process map format will permit;
4. To allow previously defined and implemented functionality to be reused in realizing the process, greatly increasing implementation productivity;
5. To factor technical and financial influences into the implementation, which will affect the functions defined to be automated and those that will be accomplished by humans.

As a process representation, the logical design must be tested through the use of the scenarios, using a procedure similar to that described for testing process maps.
Because the development of the logical design is not deterministic, that is the only way to ensure that the logical design is a valid representation of the process.
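Because the mapping between the two representations is many-to-many rather than a strict decomposition, the referential integrity discussed above is typically maintained as explicit cross-reference links held in the repository. A minimal sketch of such a link store follows; all class, method, and element names are hypothetical, invented for illustration only:

```python
# Illustrative sketch: cross-reference links between process map elements
# and logical design elements, so that a change to either side can flag
# the related elements on the other side for reexamination.
from collections import defaultdict

class TraceabilityStore:
    def __init__(self):
        # many-to-many links between the two representations
        self.map_to_design = defaultdict(set)
        self.design_to_map = defaultdict(set)

    def link(self, map_elem, design_elem):
        self.map_to_design[map_elem].add(design_elem)
        self.design_to_map[design_elem].add(map_elem)

    def impacted_design_elems(self, map_elem):
        # design elements to reexamine when a process map element changes
        return sorted(self.map_to_design[map_elem])

    def impacted_map_elems(self, design_elem):
        # process map elements to reexamine when a design element changes
        return sorted(self.design_to_map[design_elem])

store = TraceabilityStore()
store.link("PM:AcceptOrder", "LD:OrderDialog")
store.link("PM:AcceptOrder", "LD:CreditCheckAction")
store.link("PM:ShipOrder", "LD:ShippingDialog")
```

Querying `store.impacted_design_elems("PM:AcceptOrder")` then lists the logical design elements that a change to that process map step would call into question, which is the managerial control described above expressed as data.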
8.2.2.2 Physical design

Once the logical design has been developed and tested, an implementable form of the design must be developed. That form is generally referred to as the physical design. The physical design must accommodate the enterprise technical and business environment and the specific computing architecture. In addition, the physical design must be able to be directly transformed into an efficient and effective product resident in the enterprise computing environment. To provide for those two needs, the structure of the physical design representation of the process must be somewhat different from the structure of the logical design representation. That should not be surprising, given the discussion of the differences between the process map and the logical design representations.

To maximize flexibility and reuse of existing functionality, the physical design architecture considered here is based on workflow technology. The architecture is considered in two parts: one that is process independent and one that is process dependent. The process-dependent part is known as a workflow, while the process-independent part consists of software components.

A workflow implementation utilizes functionality units called tasks. A task consists of one or more reusable software components and provides a part of the functionality needed for completion of the application. An example for an invoicing application would be a function to determine the tax amount. The individual tasks necessary to respond to a given business event are sequenced, scheduled, and then routed (assigned) to a specific task performer by a workflow manager or, alternatively, a workflow engine. The workflow engine uses data defined during the logical and physical design process to determine how to reach the desired response to an individual business event. Much of that information is contained in a workflow map, which is a representation of the business process. By changing the map
and other input data to the workflow engine, the response to a specific type of business event can be altered without necessarily changing the operation of any task. The workflow engine programming is designed to emulate the process. As in the case of the process map and the logical design, the physical design also is a representation of the process. As such, it must be tested by using the scenarios in a fashion similar to that described for process maps and the logical design.

8.2.3 Operate cycle

The operational process implementation is the representation of the process that is used to respond to real-life business events. As a process representation, it also should be tested through the use of scenarios before being placed in operational status, although a separate prototype obviously does not have to be defined on which to perform the tests. The implementation representation structure is based on the physical design representation discussed in Section 8.2.2.2.

The implementation format must allow a great degree of flexibility. With that flexibility, process updates required by the changing business climate can be made quickly and accurately. In addition, of course, the implementation must be an accurate reflection of the intent of the process. The implementation consists of five major parts: tasks, workflow manager or engine, workflow definition, workflow instance, and operational statistics.

When a business event occurs, a workflow instance is created. The purpose of the instance is to contain the status and instance-specific data associated with the handling of that business event. Initially, the workflow instance contains little information; as the solution to the business event progresses, additional information is added. A workflow instance exists until the final response to the defining event is provided.
At that time, the characterization of the workflow is complete and can be used for statistical purposes in the management of the process. The tasks necessary for a specific workflow instance are sequenced, scheduled, routed, and monitored by a workflow engine. That engine is a software product capable of interpreting workflow business rules that specify how the available tasks are to be utilized in responding to a particular business event. When a workflow instance requires a service or task to perform a function, an instance of that service or task is formed and associated with that workflow instance. In that way, all the work necessary to respond to a given business event can be connected with that particular business event, even if the same task functionality is needed for many different business events or multiple occurrences of the same business event. As part of its function, the workflow engine collects statistics on the defined metrics of the workflow instances, including the associated service or task instances. The statistics are used in the manage cycle to determine how effectively the process implementation is functioning and what, if any, changes should be considered.

8.2.4 Manage cycle

Once the workflow (process implementation) has been deployed and becomes operational, it must be continuously examined and adjusted to ensure that the current workflow representation continues to represent the intent of the process properly. Statistical data produced by the workflow engine are analyzed along with information produced by other means, including customer comments or complaints, employee observations, operational expenses, and equipment costs. The analysis usually is performed by the same SMEs (or others with similar knowledge if the original individuals are not available) who helped formulate the various process representations during the first development cycle.
If the analysis indicates that changes should be made, the proper subcycle is invoked and a determination is made as to how the representation(s) of that subcycle should be altered to meet the new needs. The change process then proceeds in the same way as the original design effort. The management of an operational process implementation as part of a comprehensive implementation methodology is presented in Chapter 26, which discusses the analysis and other activities that ensure that the process remains effective.
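The operate-cycle elements just described (tasks, a workflow definition, per-event workflow instances, and engine-collected statistics) can be sketched in miniature. The sketch below is purely illustrative and is not the architecture of any particular workflow product; all names, the task table, and the tax rate are invented:

```python
# Toy sketch of a workflow engine: for each business event it creates a
# workflow instance, runs the tasks named in the workflow map in order,
# records each task result in the instance, and collects statistics.
class WorkflowEngine:
    def __init__(self, workflow_map, tasks):
        self.workflow_map = workflow_map  # event type -> ordered task names
        self.tasks = tasks                # task name -> callable
        self.statistics = []              # metrics from completed instances

    def handle_event(self, event_type, event_data):
        instance = {"event": event_type, "data": dict(event_data), "steps": []}
        for task_name in self.workflow_map[event_type]:
            result = self.tasks[task_name](instance["data"])
            instance["steps"].append((task_name, result))
        # characterization of the completed instance feeds the manage cycle
        self.statistics.append({"event": event_type,
                                "tasks_run": len(instance["steps"])})
        return instance

# Hypothetical tasks for the invoicing example (7% is an invented rate).
tasks = {
    "enter_order":   lambda d: d.setdefault("order_no", 1001),
    "compute_tax":   lambda d: round(d["amount"] * 0.07, 2),
    "bill_customer": lambda d: "invoice sent",
}
engine = WorkflowEngine(
    {"telephone_order": ["enter_order", "compute_tax", "bill_customer"]},
    tasks)
inst = engine.handle_event("telephone_order", {"amount": 250.0})
```

Note that altering the response to a telephone order means editing the list in `workflow_map`, not the tasks themselves, which mirrors the separation of the process-dependent workflow from the process-independent components.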
8.3 Quality considerations

Current industry-accepted approaches to ensuring that a quality product or service is delivered to the customer all revolve around a process-based paradigm. Because of past problems in trying to define quality directly (a more or less impossible task), the currently accepted approach is to examine the process that produces the product or service and determine whether it is defined, consistent, and effective. Each of the three major approaches to making that determination utilizes somewhat different criteria and emphasizes different areas, but they probably have more similarities than differences.

Although an indication of the role of process in current approaches to quality is appropriate to the scope of this discussion, a detailed examination of individual quality approaches would not add a great deal to the presentation. Comprehensive examinations of each of those approaches can be obtained through other publications (see the bibliography at the end of this chapter). Only those characteristics and definitions of the major process-oriented quality methods that are necessary to indicate how process is considered are listed. Table 8.2 contains brief descriptions of the three major approaches: ISO9000, Malcolm Baldrige, and the Software Maturity Model. The first two are intended to examine general enterprise processes, while the third is specifically intended for the software development process.

Table 8.2: Major Process-Based Quality Approaches

Approach: ISO9000
Purpose: Certification
Developer: International Standards Organization (ISO)
Overall structure: Relies on a series of detailed audits to determine whether the processes utilized in a number of specified enterprise areas, including the production of products and services, are documented; perform in the way the documentation indicates; are effectively monitored so that problems are detected; and have a mechanism for change.

Approach: Malcolm Baldrige
Purpose: Award
Developer: U.S. Department of Commerce
Overall structure: Similar to ISO9000 except that more emphasis is placed on the comparison of different organizations rather than compliance with a standardized checklist. The comparison emphasis is necessary to select the winners.

Approach: Software Maturity Model
Purpose: Self-assessment
Developer: Software Engineering Institute
Overall structure: Utilizes a five-layer model that indicates how sophisticated the software development process of an enterprise is. The layers are in order of increasing maturity: layer 1, ad hoc; layer 2, repeatable; layer 3, defined; layer 4, managed; layer 5, continuous improvement.

Will adherence to any of the approaches in Table 8.2 guarantee commercial success? Unfortunately, no. Can a successful quality product be produced without adhering to any of them? Fortunately, yes. The approaches assist the generation of a quality product; following them does not automatically generate a commercial success or even a quality product. Many other aspects of running a successful business and developing a product are required. However, the process-based quality approaches do provide a structure and evaluation framework that is helpful in examining the processes of the enterprise. That is probably their most useful purpose.
8.4 Management of expectations

Changing from management by organization to management by process is not a panacea. By itself, such a change will produce little in the way of positive results and may instead be harmful to the enterprise. Unless there is total commitment to the change and a willingness to alter the basic fabric of the enterprise, a change of that magnitude should not be attempted. If the business remains organizationally bound and attempts to use a management-by-process paradigm, the result will be confusion and possibly chaos. Saying the right words will not work; they must be followed by the right actions.

Even with the required commitment, change will be slow and sometimes painful. Staff will be in the wrong positions. The role and development of software must change dramatically. Organizational boundaries will begin to dissolve. Management's focus will change from subordinate control to activity coordination. And all of that must happen without a significant disruption in providing service to the customers, quite a feat for any enterprise.

The reason for all those cautions is to counteract popular notions of what can be expected from a stated shift to management by process. Like any other major enterprise activity, there is no magic, only a lot of hard work. Having said all that, is the shift worthwhile? The answer is a resounding yes. Without a real shift to the management-by-process paradigm, an enterprise will find itself at an increasing competitive disadvantage. The change must be made, but it must be made with an understanding of the difficulties that will be encountered and a willingness to solve the difficult problems as they occur.

Selected bibliography

Bickeboller, M., D. F. Kocaoglu, and T. R. Anderson, "Business Concepts Analysis Process," Proc. Portland Internatl. Conf. Management and Technology, Portland, OR, July 27–31, 1997, pp. 803–806.
Bohrer, K., et al., "Business Process Components for Distributed Object Applications," Communications of the ACM, Vol. 41, No. 6, 1998, pp. 43–48.

Briccarello, P., G. Bruno, and E. Ronco, "REBUS: An Object-Oriented Simulator for Business Processes," Proc. 28th Ann. Simulation Symp., Phoenix, Apr. 9–13, 1995, pp. 269–277.

Chung, L., and B. A. Nixon, "Dealing With Non-Functional Requirements: Three Experimental Studies of a Process Oriented Approach," Proc. 17th Internatl. Conf. Software Engineering, Seattle, Apr. 23–30, 1995, pp. 25–37.

Cook, J. E., and A. L. Wolf, "Automating Process Discovery Through Event-Data Analysis," Proc. 17th Internatl. Conf. Software Engineering, Seattle, Apr. 23–30, 1995, pp. 73–82.

Dion, R., "Process Improvement and the Corporate Balance Sheet," IEEE Software, Vol. 10, No. 4, July 1993, p. 28.

Ferscha, A., "Optimistic Distributed Execution of Business Process Models," Proc. 31st Hawaii Internatl. Conf. System Sciences, Wailea, HI, Jan. 6–9, 1998, pp. 723–732.

Hedberg, S. R., "AI Tools for Business-Process Modeling," IEEE Expert, Vol. 11, No. 4, pp. 13–15.

Huyink, D. S., and C. Westover, ISO 9000: Motivating the People, Mastering the Process, Achieving Registration, Burr Ridge, IL: Irwin, 1994.

Kirchmer, M., Business Process Oriented Implementation of Standard Software: How to Achieve Competitive Advantage Quickly and Efficiently, New York: Springer-Verlag, 1998.

Mayer, J. H., "Avoiding a Fool's Mission," Software Mag., Feb. 1998, pp. 43–48.

Pennell, J., J. Stepper, and D. Petrozzo, "Concurrent Engineering of a Service Order Process," Proc. 46th Ann. Quality Cong., Nashville, May 1992, pp. 634–641.

Profozich, D., Managing Change With Business Process Simulation, Englewood Cliffs, NJ: Prentice-Hall, 1997.

Scheer, A.-W., Business Process Engineering: Reference Models for Industrial Enterprises, 2nd ed., Berlin: Springer-Verlag, 1994.

Sundstrom, G. A., "Business Process Modeling and Process Management: Concepts and Examples," Proc.
IEEE Internatl. Conf. Systems, Man, and Cybernetics, Oct. 12–15, 1997, pp. 227–231.

Yu, E. S. K., and J. Mylopoulos, "Using Goals, Rules, and Methods to Support Reasoning in Business Process Reengineering," Proc. 27th Hawaii Internatl. Conf. System Sciences, Wailea, HI, Jan. 4–7, 1994, pp. 234–243.
Chapter 9: Scenario modeling

Overview

Chapter 8 introduced the concept and major characteristics of a business event as a fundamental input to the enterprise. The utilization of scenarios to model an individual or related group of business events also was examined. The details of scenarios as
automation assets were not presented because the emphasis of Chapter 8 was on the concept of process life cycle. It is necessary to comprehend the genesis, definition, and application of the concept of scenarios as an important component in the overall understanding of the operation of the enterprise and its determination of automation support. As an integral part of this discussion, a detailed model of a scenario must be developed. The definition of and motivation for such a model are the focus of this chapter. Two types of scenarios are discussed. Although their models are similar, they differ in some important ways and must be discussed individually. In many cases, scenarios can be used informally without directly utilizing the structure provided by the models. In fact, such use is included in several chapters in this book when a more formal approach is not warranted. However, when scenarios are used in process implementation, experience has shown that the number of scenarios and the multiple times they are used require that the models be closely followed.
9.1 Business events

Business events can be used for many purposes in operating and managing the enterprise, including the following:
§ To define the orientation and the scope of the enterprise;
§ To determine the amount of resources that would be needed to handle a new set of events;
§ To compare the enterprise with others in the industry;
§ To define the processes needed in the enterprise.

For business events to be utilized effectively, they must be structured and modeled. Such models are called scenarios.
9.2 Scenarios

Because a business event can exist independently of the structure of an enterprise as well as of the process used to accommodate it, the same is true of a scenario. Scenarios can be used in a number of different areas of the enterprise, and it is important to define them as independent entities. Three aspects of scenarios need to be defined and discussed in some detail:
§ The structures of the two basic types of scenarios;
§ The relationships between scenarios;
§ The unique identification of a scenario.

Those aspects are defined and discussed in the order listed. In addition, a brief discussion of the types of analysis that can be profitably performed on the defined scenarios is provided.

9.2.1 Structure

The structure of a scenario must enable all the relevant information that can be associated with a business event to be captured and utilized as needed. In addition, the structure must allow the scenarios to be managed as automation assets. Scenarios are classified as either unit or compound. A unit scenario is atomic in the sense that it cannot be decomposed into component scenarios. A compound scenario is a concatenation of other scenarios, unit or compound. Compound scenarios facilitate the construction of complex scenarios from simpler scenarios that cover a smaller scope. Examples of each type are provided later in the discussion.
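The unit/compound distinction can be expressed as a small composition structure: a compound scenario concatenates other scenarios (unit or compound), and flattening it yields the unit scenarios it ultimately comprises. The sketch below is illustrative only; the class names and scenario names are invented for this example:

```python
# Illustrative sketch: unit scenarios are atomic; compound scenarios
# concatenate other scenarios and can be flattened to their units.
class UnitScenario:
    def __init__(self, name):
        self.name = name

    def units(self):
        return [self]  # atomic: no further decomposition

class CompoundScenario:
    def __init__(self, name, components):
        self.name = name
        self.components = components  # unit or compound scenarios

    def units(self):
        flat = []
        for component in self.components:
            flat.extend(component.units())
        return flat

enter = UnitScenario("Customer Places Telephone Order")
ship = UnitScenario("Ship Order")
bill = UnitScenario("Bill Order")
resolution = CompoundScenario(
    "Resolution of a Customer Telephone Order",
    [enter, CompoundScenario("Fulfill Order", [ship, bill])])
```

Here `resolution.units()` returns the three unit scenarios in order, regardless of how deeply the compound scenarios are nested.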
9.2.1.1 Unit scenario

The overall structure of a unit scenario consists of six major sections, as shown in Figure 9.1. Most of the sections are divided into subsections that contain additional detail about the business event being modeled. The scenario identification method presented later depends on the scenario structure and relationships. Thus, the scenario structure is presented first, and the method of uniquely identifying each scenario is discussed later. As an aid to understanding, a continuing example is used throughout the discussion to illustrate the concepts involved.
Figure 9.1: Construction of a unit scenario.

Story section

In common usage, a story usually is thought of as an explanation or a representation of the circumstances involved in some event(s) of interest. The purpose may be to entertain, educate, or arouse some emotion. In any event, the story is intended to give life and meaning to the event(s) it represents. The purpose of the story section is exactly the same as that of the classical story: it provides an explanation of the specific circumstances surrounding the business event so that the event can be considered and utilized by the enterprise.

The story section consists of three subsections, which are always the same for a given story: name, narrative, and attribute. The name briefly identifies the purpose of the scenario. As will become clearer during the discussion, many scenarios may utilize the same story; therefore, the contents of the subsections of the story are not unique. Unique identification of a scenario is accomplished by the inclusion of an explicit ID section in the structure. The structure of the ID is considered later.

The continuing example used here as an illustration is that of a customer placing an order by telephone. The name for a scenario with that purpose could then be defined as "Customer Places Telephone Order." The narrative subsection is an elaboration of the scenario's purpose and contains more detailed information than can be conveyed by the name alone. All scenarios with the same name contain the same narrative. A narrative for a telephone order scenario might be as follows:

A customer places a telephone order by calling a specified number. The call is answered by a service representative, who asks a series of questions designed to obtain the information needed to satisfy the request in a timely and efficient manner. If the order is accepted, the customer is given an order number and the order is placed in an active status.
The attribute subsection describes the relevant variables in the story. In some market testing, a movie may have multiple endings. The version that is finally released depends on how the test audience reacted to each ending. Whether that is a good technique is irrelevant to this discussion. The important aspect is that the same story can be given a completely different ending (or beginning or middle) by changing some of the variables of the story: The car crashes or it doesn’t crash. An individual’s disease is cured or it isn’t.
The daughter inherits nothing, or $10,000, or $10,000,000. The variables of the story are subject to manipulation by the filmmaker. In a movie or a book, only one set of possibilities is usually desired.

That need for uniqueness is not carried over to the use of scenarios. Because a given scenario story will be utilized many times, the circumstances surrounding each occurrence may be different, and it is necessary to understand what variability is possible so that proper responses can be determined. The attribute specification subsection contains information as to how the circumstances can vary in a given story. It does that by explicitly identifying attributes associated with the story and the values that those attributes can assume. Using the telephone order example, the attribute subsection might be as shown in Table 9.1.

Table 9.1: Example of Attribute Subsection

Attribute                            Values
Customer credit rating               High, medium, low
Usual order size of customer         Small, medium, large
Language spoken by customer          English, French, other
Geographical location of customer    U.S., Canada, Asia, Europe, other
Celebrity status of customer         Is one, is not one
An infinite number of attributes probably could be defined for any story. There will be some attributes for which different values are not important, while the values of others are of considerable interest. For example, the credit rating of a potential customer usually is of great interest to an enterprise. Different actions probably would be taken in response to a telephone order event, depending on the value of the credit attribute: The order would be accepted; the order would be denied; or the order would be kept below a predefined dollar limit. In contrast, most companies do not care about the celebrity status of a customer placing an order, so that attribute need not be explicitly identified since it is not relevant to this particular story. If celebrity status is of interest for some reason (e.g., the company is a Hollywood clothing store that caters to movie and TV stars), then it must be explicitly considered. The exercise of determining the attributes of interest is of considerable value in understanding the scenario story and how the enterprise intends to respond to it. The definition of a process to handle the scenario story is considerably easier if the identification of relevant attributes is accomplished with some care. The same attribute could be used in scenarios with different stories. To maintain the necessary consistency across the different stories, it is necessary to define and maintain a global attribute data model. Using that model along with the set of attributes and values used in the different stories, conflicts, overlaps, and gaps in the attributes can be identified. For example, if the attribute credit rating in one story is given the values “high, medium, and low” while in another story it is given the values “good and bad,” there
could be some difficulties if the two stories are required to work together in some fashion. The attribute data model helps avoid that situation a priori, but if, for some reason, it does occur, it can be detected and corrected.

Maintaining and using a global sequence of all attributes as part of the data model would help with the definition of compound scenarios as well as scenario analysis because it would allow the same relative attribute sequence to be used in all stories. Because of the large number of possible attributes and their potential volatility, that activity probably is not practical. However, it might be possible for a given analysis to recast the attributes of the stories of interest into a common sequence to facilitate the examination.

Context section

Scenarios with the same story are differentiated by information contained in the context section. The context section identifies the unique circumstances that can influence the way a request is satisfied. In effect, different contexts result in different versions of the same basic story. Scenarios with the same story that have different contexts are different scenarios, although they are closely related. The context section defines a specific value for each attribute in the attribute subsection. Different attribute values result in different scenarios. Using the telephone order example, one context might be as shown in Table 9.2.

Table 9.2: First Example of Context Section

Attribute                            Values
Customer credit rating               Low
Usual order size of customer         Large
Language spoken by customer          French
Geographical location of customer    Not of interest
Celebrity status of customer         Not of interest

Creating a different context involves giving a different value to at least one of the attributes. For example, if the geographical location of the customer becomes of interest in a different scenario, its context would be that shown in Table 9.3.

Table 9.3: Second Example of Context Section

Attribute                            Values
Customer credit rating               Low
Usual order size of customer         Large
Language spoken by customer          French
Geographical location of customer    Asia
Celebrity status of customer         Not of interest

A different scenario could have all the attribute values changed, as illustrated in Table 9.4.

Table 9.4: Third Example of Context Section

Attribute                            Values
Customer credit rating               High
Usual order size of customer         Medium
Language spoken by customer          English
Geographical location of customer    U.S.
Celebrity status of customer         Movie star
Although the name, narrative, and attribute set are the same, each context results in a different story version because the circumstances differ. It should be evident that many individual scenarios can be defined for each scenario story. The level of detail that needs to be present depends on the attributes the enterprise considers important and wants to consider when formulating a response to the business event. Changes to the attribute set for a given story can always be made if experience dictates. The number of scenarios also depends on the values of attributes that preclude the need to consider other attributes.

Business rule section

The business rule section contains (usually by reference) the business rules that have some effect on how the business event is addressed. Some business rules for the telephone order example might be as follows:
§ The minimum order is $1,000.
§ No customer can owe more than $100,000 at any given time.
§ Certain products cannot be shipped to foreign countries.
§ Large customers have a specifically assigned service representative.
§ Orders are accepted in only English, French, and German.

Those business rules may affect other scenario types, such as those concerned with customer service, shipping, and accounting.

View section

Each business event must be addressed by some set of activities of the enterprise. The view of the event as seen by the enterprise structures provided to accommodate it is depicted in the view section. The enterprise structures can be at different levels of abstraction, and there is no specific limit to the number that can be considered. The three enterprise structures usually needed are the process utilized in addressing the event, the logical specification of the automation system functionality used in the implementation of the process, and the workflow that controls the overall implementation of the process.
Other view subsections, such as the human interface, can be added to provide additional detail. Another example could be an organizational view that indicates the departments involved in resolving the event. From a strict definition perspective, the view section is not needed to fully model a business event; the information depends on the interpretation provided by a given enterprise. However, it is useful in giving an understanding of how the event will be addressed by the enterprise.

Characteristics section

The characteristics section contains information about the business event that is not process or organization oriented. It provides detail as to the relative frequency of occurrence of the business event, the importance or priority of the event, and any information as to the timeliness of resolving the event. Because many events compete for the same enterprise resources, the information in the characteristics section is useful in determining how the resources should be deployed if some type of rationing must be utilized. Three of the useful pieces of information are shown in the model here (others can be added as desired). For the example, this section might contain the following:
§ Each day, 10,000 instances of these scenarios occur.
§ The importance of handling the scenarios correctly is very high.
§ The representatives involved with the scenarios must be highly trained.

Comments section

Unstructured comments that provide information relative to the scenario are placed in the comments section. An example of the information that might be useful is some type of history or change record of the scenario, for example:
This scenario was added in 1995 because it was decided to classify customers by the dollar amount of orders placed each year. This was a new classification and an associated attribute was needed to consider it. The values of that attribute had to be reflected in a set of scenarios (this being one) to ensure that the different classifications were being handled according to company policy.
9.2.1.2 Compound scenario

A compound scenario models a business event that requires a wide scope with multiple types of activities to provide a suitable resolution. One important class of this type of scenario is a customer request and the subsequent satisfaction of that request. Unit scenarios generally address only scopes with a single type of activity. In the continuing example, the business event of entering an order was modeled by a unit scenario. Assume instead that the business event being modeled was defined to be the resolution of a customer telephone order. The resolution would include not only entering the order (the continuing example) but also filling the order from inventory, shipping the order, billing the order, and receiving payment for the order. It also could include a survey of customer satisfaction. As defined earlier, each of those separate activities could be considered the result of an internal business event, and the overall effect of all of them, acting at an appropriate time, would be the same as the single “order resolution” business event. The overall structure of a compound scenario is shown in Figure 9.2. In addition to a unique identifier, the compound scenario also has six major sections, all but one of which contain the same type of information as those found in the unit scenario structure. In a compound scenario, a component section replaces the view section of the unit scenario. The reasons for that difference, as well as some differences in the orientation of the other compound scenario sections, are discussed as each section is examined in detail.
Figure 9.2: Construction of a compound scenario.

Story section The story section in the compound scenario has the same purpose as the story section in the unit scenario, contains the same three subsections, and describes the business event that is to be modeled. The major difference is that in the compound scenario the story section describes a business event whose state change spans one or more state changes from simpler business event definitions. The narrative may be somewhat more complex than that found in a unit scenario. The attributes utilized are not independently specified but are derived from the scenarios that make up the compound scenario. The compound scenario example addressing the resolution of a customer telephone order is used as an illustration. The name for this scenario could then be defined as “Resolution of a Customer Telephone Order.” The narrative subsection might then be as follows:
A customer places a telephone order by calling a specified number. The call is answered by a service representative, who asks a series of questions designed to obtain the information needed to satisfy the request in a timely and efficient manner. If the order is accepted, the customer is given an order number and the order is placed in an active status. When resources are available, the order is assembled from inventory and shipped, and at the next billing cycle a bill is sent to the customer. Payment for the order is credited to the customer’s account, and satisfaction with the order is determined. If the order is not satisfactory, any necessary corrections are made.

The attribute subsection contains the union of all attributes from each component scenario. The order of the attributes in the compound scenario is arbitrary and can differ from that of one or all of the component scenarios. However, if a global order has been defined for all attributes, that is the order that would be used for the attribute set of the compound scenario. An advantage of the global order is that it also agrees with the attribute order of all the components, which makes analysis somewhat easier.

Context section As in the case of unit scenarios, all compound scenarios with a common story have the same name, narrative, and attribute specifications. Scenarios with the same story are differentiated by information contained in the context section in the same way unit scenarios with the same story are differentiated. The compound scenario context must be compatible with the contexts of each component scenario in the sense that any attribute value in a component context is the same as the attribute value of the compound scenario. One of the checks that must be performed in the definition of a compound scenario is that an attribute used in more than one component scenario must have the same value in all those component scenarios.
The “not used” value can be considered to have any other defined value (it is equivalent to a “don’t care” value) for the sake of this discussion. If the attribute values differ, the compound scenario is impossible and cannot be defined. For example, if the credit rating of a customer in one of the component scenarios is “high” and in another scenario it is “low,” there is a fundamental conflict. One of the scenarios would have to be replaced with another that contains the same value of the attribute in question (e.g., both with a “high” value or both with a “low” value). If the credit rating in one component scenario is “high” and in the other scenario it is “not used,” the “not used” value could be considered to be “high” and the two scenarios would be compatible.

Business rule section The business rule section, as in the case of the unit scenario, contains (usually by reference) the business rules that have some effect on how the business event is addressed. In the case of the compound scenario, however, the business rules usually have an effect over the entire scope of the scenario. In the example, some possible business rules could be as follows:
§ The resolution of an order should be completed in 2 months.
§ If the resolution for an order takes over 6 months, the vice president of marketing shall perform an investigation.
§ The customer shall be notified every 2 weeks during the resolution period as to the status of the order.

The business rules in this section are in addition to any business rules that have been specified as part of the individual component scenarios. In addition, the business rules for the component scenarios and the compound scenario should be checked for consistency in the same way that the attribute values were checked. Conflicts and other problems need to be corrected, or substitute component scenarios used, before the compound scenario is usable.
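The attribute-compatibility check described above is mechanical enough to automate. The following sketch (hypothetical helper name, Python chosen purely for illustration) merges component contexts into a compound context, treating the “not used” value (represented here as None) as a “don’t care” that is compatible with any defined value, and flags any conflict that would make the compound scenario impossible:

```python
def merge_contexts(component_contexts):
    """Merge component scenario contexts into one compound context.

    Each context is a dict mapping attribute name -> value, where None
    stands for the "not used" value. A ValueError signals a conflict,
    meaning the compound scenario cannot be defined.
    """
    compound = {}
    for context in component_contexts:
        for attribute, value in context.items():
            existing = compound.get(attribute)
            if existing is None or value is None or existing == value:
                # "not used" defers to any defined value.
                if existing is None:
                    compound[attribute] = value
            else:
                raise ValueError(
                    f"attribute {attribute!r}: {existing!r} conflicts with {value!r}")
    return compound
```

For the credit-rating example in the text, merging a “high” context with a “not used” context yields “high,” while merging “high” with “low” raises the conflict.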
Component section The component section consists of the less complex scenarios that form the compound scenario. These less complex scenarios, in the sense that their scope encompasses fewer activities than the scenario of interest, may be either unit scenarios or other compound scenarios. From a practical standpoint, it is usually wise to limit the definition of a compound scenario to one that contains only unit scenarios. That helps limit the complexity of the overall scenario structure. Unfortunately, any enterprise
of reasonable size will have a relatively complex scenario structure because of the large number of scenarios needed and the diverse set of information that each contains. There is no need to have a view section because the compound scenario view is formed from the individual views of its components. For example, assume that the two unit scenarios “Enter Telephone Order” and “Fill Order” are used to form the compound scenario “Fill Order.” Further, let us assume that the view sections of the unit scenarios contain a process view. Then, as illustrated in Figure 9.3, the process views of the two unit component scenarios form the compound scenario process view by concatenating the processes. The order of concatenation usually is taken to be the order of the component scenarios as contained in the component section of the compound scenario.
Figure 9.3: Construction of compound scenario view.

In general, the component views can be concatenated only in a linear construction. If a more complex relationship is required, such as looping between processes or interaction between internal steps of the processes, the compound scenario is not the proper vehicle with which to define the relationship.

Characteristics section As in a unit scenario, the characteristics section contains information about the business event that is not process or organization oriented. The parts of this section are essentially the same as those for a unit scenario. The characteristics section is needed because the compound scenario might have characteristics that are different from any of its components. For example, the frequency of the scenario could be low, while the frequency of its components could be high because they are utilized as parts of multiple compound scenarios.

Comments section Again, as in the definition of the unit scenario, unstructured comments that provide information relative to the scenario can be placed in this section.
9.2.2 Relationships

As is evident from the previous discussion, scenarios can be thought of as being related in one of two ways: (1) by a common story or (2) as components of the same compound scenario. Of course, one scenario can be a component of multiple compound scenarios, resulting in some rather complex relationships. The second type of relationship also includes the relationship of a component with the compound scenario to which it belongs. Those relationships are exploited throughout the remainder of the book. Although scenarios with different stories can have some of the same attributes and values, if they are not used together in a compound scenario, they are not related in the sense of this discussion. The existence of common attributes is considered too weak for an explicit relationship to be defined. Some types of analysis might need to determine all the stories that contain a specific attribute. That is certainly a reasonable activity, but the result, except for the purposes of the specific analysis undertaken, is not considered a relationship. That seemingly esoteric distinction needs to be made because scenarios are uniquely identified and classified using the scenario relationships that have been defined. When a large number of scenarios have been defined and are in use, the classification method becomes important in analyzing the scenarios and determining whether gaps, overlaps, inconsistencies, or conflicts exist or will be created by adding, removing, or changing scenarios.

9.2.3 Identifiers

Although a completely arbitrary scenario identification scheme could be utilized, one that is derived in some fashion from the information contained in the scenario usually is easier to use and understand. Undoubtedly many such schemes could be defined.
The one offered in this section seems to provide some useful characteristics (e.g., explicitly indicating the relationships defined for the scenarios) while minimizing the complexity of determining and utilizing the resultant ID. In many situations, the scenario ID serves as shorthand for the scenario itself. Because of this type of usage, the justification and definition of an identification scheme for scenarios are addressed in the following sections.
9.2.3.1 Unit scenarios

Assume that a story contains the “Telephone Order” attribute subsection shown in Table 9.1. The ID for any scenario with this story is determined by a double-ordering technique. The order number of the value for each attribute is placed in the same order as the attributes and preceded by the story ID. The syntax of the story ID is not important and is left to the enterprise to define based on its individual needs; any reasonable definition for the story ID will work. The “not used” attribute value is always given order number 0. The result of this construct is the ID for the scenario. IDs formed by using this technique are illustrated by using the contexts in Tables 9.2 through 9.4 and are presented in Tables 9.5 through 9.7. It should be noted that the IDs for all scenarios with the same story have the same size. Scenarios with different stories may have different ID sizes.

Table 9.5: First Example of Context Section

  Attribute                           Values
  Customer credit rating              Low
  Usual order size of customer        Large
  Language spoken by customer         French
  Geographical location of customer   Not of interest
  Celebrity status of customer        Not of interest
  ID                                  Story ID,3,3,2,0,0

Table 9.6: Second Example of Context Section

  Attribute                           Values
  Customer credit rating              Low
  Usual order size of customer        Large
  Language spoken by customer         French
  Geographical location of customer   Asia
  Celebrity status of customer        Not of interest
  ID                                  Story ID,3,3,2,3,0

Table 9.7: Third Example of Context Section

  Attribute                           Values
  Customer credit rating              High
  Usual order size of customer        Medium
  Language spoken by customer         English
  Geographical location of customer   U.S.
  Celebrity status of customer        Movie star
  ID                                  Story ID,1,2,1,1,1

Although the IDs of different scenarios with the same story can be analyzed through differential techniques to determine some of the characteristics of the set of scenarios, the story attribute subsection must be used to fully determine the meaning of an individual ID. From the scenario ID itself, it is impossible to know which attribute and value a specific number in the ID represents. Although other ID schemes utilizing the given scenario structure certainly could be defined, the utilization of this method is based on three major considerations:
§ It is efficient in representation size.
§ It is oriented toward comparing and analyzing scenarios with the same story.
§ Patterns can be observed by either human or machine intelligence.
Experience has shown that most interest and activity occur for scenarios containing a given story. This is because the set of scenarios with the same story indicates how the process will respond to different contexts, which is important in process design and analysis. That observation is true for both unit and compound scenarios. With some knowledge of the underlying story, humans can effectively differentiate and use different scenarios simply by examining the ID. The ID structure also lends itself to machine analysis. Those considerations carry over to the IDs defined for compound scenarios.
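As an illustration only, the double-ordering construction lends itself to a few lines of Python. The attribute names follow the telephone-order example; the value orders (e.g., High = 1, Medium = 2, Low = 3) are inferred from the example IDs and, like the function name, are assumptions of this sketch rather than part of the book's specification:

```python
# Assumed attribute order and value orders for the "Telephone Order" story.
ATTRIBUTES = [
    ("Customer credit rating", ["High", "Medium", "Low"]),
    ("Usual order size of customer", ["Small", "Medium", "Large"]),
    ("Language spoken by customer", ["English", "French", "German"]),
    ("Geographical location of customer", ["U.S.", "Europe", "Asia"]),
    ("Celebrity status of customer", ["Movie star", "Politician"]),
]

def scenario_id(story_id, context):
    """Build a unit scenario ID from a story ID and a context dict.

    Attributes absent from the context, or marked "Not of interest",
    are treated as "not used" and receive order number 0.
    """
    parts = [story_id]
    for name, values in ATTRIBUTES:
        value = context.get(name)
        if value is None or value == "Not of interest":
            parts.append("0")
        else:
            parts.append(str(values.index(value) + 1))  # order numbers start at 1
    return ",".join(parts)
```

With the context of the first example (credit rating Low, order size Large, language French, remaining attributes not of interest), `scenario_id("Story ID", ...)` yields "Story ID,3,3,2,0,0".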
9.2.3.2 Compound scenarios

Most of the story and context sections of a compound scenario are derived from the corresponding sections of the scenarios incorporated in the components section. Likewise, the ID of a compound scenario is based on the IDs of the component scenarios. Although some of the constructs needed to arrive at those results may seem complex, in reality they are only a simple extension of the concepts discussed for unit scenarios. The discussion here proceeds step by step through the procedure. It assumes that a global attribute order has not been developed but that a correct and comprehensive attribute data model has been defined.

The discussion is based on a common example. The example is relatively abstract so the various nuances of the procedure can be easily illustrated. Table 9.8 contains a representation of the attribute subsection for three different scenario stories that are used as components of a compound scenario. The order of the scenarios is the order given in the table. The values for each attribute are represented by the name of the attribute and a number that indicates its position in the attribute definition. That would be one of the relationships specified by the attribute data model. For purposes of illustration, some of the stories contain different numbers of attributes and attribute values. For example, story S1 contains four attributes, while story S2 contains five attributes.

Table 9.8: Component Scenario Attribute Subsections

  Story   Attribute   Values
  S1      A           A0, A1, A2, A3
          B           B0, B1, B2, B3
          C           C0, C1, C2
          D           D0, D1, D2, D3, D4
  S2      E           E0, E1, E2
          F           F0, F1, F2, F3
          C           C0, C1, C2
          D           D0, D1, D2, D3, D4
          G           G0, G1, G2, G3
  S3      E           E0, E1, E2
          H           H0, H1, H2, H3
          J           J0, J1, J2, J3
          D           D0, D1, D2, D3, D4

In several instances, the same attribute is contained in multiple stories. In that case, the value set of the attribute must be the same, although that does not imply that all the values are utilized in any of the scenarios of a given story. Examples of such multiple use are as follows: attribute C is contained in stories S1 and S2, and attribute D is contained in all three stories. As required, all attributes have at least three values, including the 0 value, which indicates a “not used” or “not needed” condition.

The attribute set of the compound scenario is formed by the union of the attributes of the component stories, as illustrated in Table 9.9. The order of the compound attribute set can differ from that of any of the components and is somewhat arbitrary, although the general considerations of unit scenario attribute ordering, discussed previously, do apply. In the specific case shown in Table 9.9, the order of the attributes of story S2 is utilized.

Table 9.9: Compound Scenario Attribute Subsection

  Attribute   Values
  E           E0, E1, E2
  F           F0, F1, F2, F3
  C           C0, C1, C2
  D           D0, D1, D2, D3, D4
  G           G0, G1, G2, G3
  A           A0, A1, A2, A3
  B           B0, B1, B2, B3
  H           H0, H1, H2, H3
  J           J0, J1, J2, J3

The next step is to utilize the context for each component scenario to form the context for the compound scenario. That is illustrated in Table 9.10. Note that the values of the attributes used in multiple scenarios are the same for each scenario. Each specific attribute value is contained in the context section of the compound scenario in the same order as previously defined for the compound scenario attributes. The ID for each component scenario is also shown in Table 9.10. The IDs are derived using the same approach as that defined previously. The attribute portion of the ID of the compound scenario is determined exactly the same way as was specified for the unit scenarios. The compound scenario story ID is formed by concatenating the story IDs for each component. The resultant ID for the compound scenario is also presented in Table 9.10.

Table 9.10: Compound Scenario ID Derived From Component IDs

  Story      Attributes                  Values                              ID
  S1         A, B, C, D                  A1, B0, C2, D3                      S1ID,1,0,2,3
  S2         E, F, C, D, G               E2, F1, C2, D3, G1                  S2ID,2,1,2,3,1
  S3         E, H, J, D                  E2, H3, J0, D0                      S3ID,2,3,0,0
  Compound   E, F, C, D, G, A, B, H, J   E2, F1, C2, D3, G1, A1, B0, H3, J0  S1ID,S2ID,S3ID,2,1,2,3,1,1,0,3,0
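The derivation in Table 9.10 can be sketched end to end (illustrative Python; the function name and data layout are this sketch's assumptions, not the book's notation). Contexts hold order numbers, with 0 as the “not used” don't-care value; the story-ID part of the result is the concatenation of the component story IDs, and the attribute part is the merged context laid out in the compound attribute order:

```python
def build_compound_id(components, attribute_order):
    """components: ordered (story_id, context) pairs, where a context maps
    attribute name -> order number (0 means "not used").
    attribute_order: the chosen order for the union of all attributes."""
    # Story-ID part: concatenation of the component story IDs, in order.
    id_parts = [story_id for story_id, _ in components]
    # Attribute part: merge contexts; 0 defers to any defined order number,
    # and differing nonzero values are a conflict (no compound possible).
    merged = {}
    for _, context in components:
        for attribute, order in context.items():
            if merged.get(attribute, 0) == 0:
                merged[attribute] = order
            elif order not in (0, merged[attribute]):
                raise ValueError(f"conflict on attribute {attribute!r}")
    id_parts += [str(merged.get(a, 0)) for a in attribute_order]
    return ",".join(id_parts)
```

Applied to the three component contexts of the example (A1, B0, C2, D3; E2, F1, C2, D3, G1; E2, H3, J0, D0) with the attribute order of Table 9.9, the function reproduces the compound ID, with H and J carrying the values fixed by the S3 context.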
9.3 Analysis The effective development and utilization of scenarios in support of a business event or group of business events depend on the enterprise’s ability to analyze the set of scenarios as to their completeness and correctness. It must be possible to determine gaps, overlaps, conflicts, and information accuracy of the scenarios so that needed corrections can be identified and made. Needed changes to the scenarios also are identified as they are employed in various uses throughout the enterprise. This source of information is complementary to the type of analysis discussed in this section. Both sources are needed for a useful set of scenarios to be initially specified and maintained. While the number of different analyses that can be defined and performed is large, the following examples illustrate the motivation for and the type of information that can be obtained. Assume that a set of unit scenarios is to be developed with the same story. Once an initial set of scenarios has been developed based on business needs as determined by the individuals involved in handling the associated business events, the result needs to be examined for completeness and accuracy. The set of all attributes may not be complete when considered as a whole. Needed values for individual attributes may be
missing. The set of scenario contexts may lack one or more contexts that are significant in some sense. All the previous analyses are concerned with the examination of the entire set of scenarios and their components. Only by analyzing the set of scenarios as a whole can judgments be made concerning the suitability of the results.

Assume that a set of compound scenarios with the same story is to be developed from the scenarios available for the component stories. Before beginning the construction process, it would be useful to examine the component stories for the following:
§ For each value of each attribute, determination of the component scenarios that contain it as part of the context;
§ Determination as to which attributes are not used in all stories;
§ Determination of all scenario sequences that could be used to create a valid scenario for the compound story;
§ Identification of the context for each possible compound scenario.

Once that information is available, it can be determined whether the set of possible compound scenarios is sufficient for the intended purpose or whether changes and additions must be made to the set of scenarios for one or more of the component stories. It is assumed that analyses of this type will utilize a repository of the type discussed in Chapter 4. Because scenarios are defined to be automation assets, all the asset management functions defined also apply to scenarios.

Selected bibliography

Anton, A. I., W. M. McCracken, and C. Potts, “Goal Decomposition and Scenario Analysis in Business Process Reengineering,” Proc. 6th Conf. Advanced Information Systems Engineering, Utrecht, The Netherlands, 1994, pp. 94–104.
Carroll, J. M. (ed.), Scenario-Based Design: Envisioning Work and Technology in System Development, New York: John Wiley & Sons, 1995. Chin, G., Jr., M. B. Rosson, and J. M. Carroll, “Participatory Analysis: Shared Development of Requirements From Scenarios,” Proc. Conf. Human Factors in Computing Systems, Atlanta, Mar. 22–27, 1997, pp. 162–169. Desharnais, J., et al., “Integration of Sequential Scenarios,” IEEE Trans. Software Engineering, Vol. 24, No. 9, 1998, pp. 695–708. Hsia, P., et al., “Formal Approach to Scenario Analysis,” IEEE Software, Vol. 11, No. 2, 1994, pp. 33–41. Jacobson, I., et al., Object-Oriented Software Engineering: A Use Case Driven Approach, Reading, MA: Addison-Wesley, 1992. Kaindl, H., “An Integration of Scenarios With Their Purposes in Task Modeling,” Proc. Conf. Designing Interactive Systems: Processes, Practices, Methods, & Techniques, 1995, pp. 227–235. Lam, W., “Scenario Reuse: A Technique for Complementing Scenario-Based Requirements Engineering Approaches,” Proc. Internatl. Computer Science Conf., Dec. 2–5, 1997, pp. 332– 341. Potts, C., “Using Schematic Scenarios to Understand User Needs,” Proc. Conf. Designing Interactive Systems: Processes, Practices, Methods, & Techniques, 1995, pp. 247–256.
Weidenhaupt, K., et al., “Scenarios in System Development: Current Practice,” IEEE Software, Vol. 15, No. 2, 1998, pp. 34–45.
Chapter 10: Role modeling

Overview

The concept of roles is utilized, either explicitly or implicitly, in various areas related to the specification of enterprise automation needs. Unfortunately, there is no general agreement as to the definition or characteristics of a role. That ambiguity results in considerable confusion as to how to specify and use roles effectively, especially in the development of process maps. The purpose of this chapter is to present a comprehensive discussion of roles and their use in the enterprise. Because of the close identification of the term role with stage plays and movies, the tendency is to transfer that association to the understanding of roles in the enterprise. While the analogy is close, there are some subtle differences that, if not recognized and accounted for, can greatly confuse the concept and hinder the acceptance of roles as a useful entity. For that reason, all the terms used here are carefully motivated and defined. That allows deviation, as needed, from the popular notion of the term. In addition, questions that arise from the current confusion in using roles (e.g., is “customer” a role?) are addressed and answered. Roles are automation assets, and as such all the concepts developed in Part I apply. This chapter explicitly addresses many of those considerations. However, even if a specific aspect is not explicitly addressed in this discussion, it still is important in a design and development situation to explicitly consider all the management system functions.
10.1 Basic definitions

Because a role must exist in an environment that utilizes other ambiguous terms, it is necessary to define a number of those terms for an understanding of the overall environment involved. Because there is a close analogy to the stage and movies, examples are used that exploit that association as long as the analogy does not become unduly strained. To successfully make that connection, it is necessary to define some stage and movie terms in addition to those needed for enterprise automation. The relationships between the terms are described after the definitions.

10.1.1 Role

Appropriately, the first definition is that of role: A role is an unordered set of cohesive activities. By that definition, a role is passive. It is defined by the collection of activities assigned to it. How those activities are defined and assigned becomes the crucial element in role definition. Most of the confusion in the specification and use of roles occurs because the set of activities involved is not well understood. That problem, as well as a partial solution, will become clearer as the discussion proceeds. The word cohesive is intended to indicate that the activities defining a role must have some discernible relationship. The relationship can take many forms, but it must be possible to identify it explicitly. If activities are assigned to a role for which the cohesive property does not apply, those activities should be considered part of another role. The activities that form a role are not in any predefined order. Order implies process, and roles are independent of process. The same activity can be incorporated into multiple roles. That, unfortunately, permits a greater degree of freedom in the specification of roles than would be desirable. However, making the roles mutually exclusive in terms of their defined activities would result in an artificial model for a real enterprise.
Careful specification of the set of enterprise roles can minimize the inherent confusion of overlapping activities. The specification of a set of roles for the enterprise is investigated in more detail later in the discussion.
As entities, roles can have attributes, and the values of those attributes determine the type of the role and its characteristics. Several types of roles are considered and discussed in detail.

10.1.2 Activity

For the purposes of this discussion, the term activity generally is considered in its commonly accepted usage. However, the formal definition also must consider the purpose of the activity in relation to the enterprise: An activity is any work the primary purpose of which is to change the state of the enterprise. Note that there is no indication as to the characteristics of the work or how it is performed. That allows the definition of many types of roles. Work that is not primarily intended to change the state of the enterprise in some manner is considered to be outside the scope of the enterprise, and the associated activity cannot be used as part of a defined role. As one interesting result of that definition, we can answer the question of whether customer is an enterprise role. Unfortunately, as might be expected, the answer is maybe! If the customer’s work efforts are not primarily intended to change the state of the enterprise, even though eventually the result will be used in some fashion by the enterprise, the customer is not performing an activity from the perspective of the enterprise. Because the customer is not performing an activity, there is no customer role. If, however, the customer is interacting with enterprise resources in a way that is intended to directly change the state of the enterprise, then the customer is performing an enterprise activity and, hence, filling an enterprise role. The role may be that of customer, or it may be another defined enterprise role with a performer of “customer,” as discussed in Section 10.1.3. The discussion concerning the role of customer extends to any other prospective role that is to be filled by individuals who are not employees of the enterprise.
That would include suppliers, consultants, government regulators, and service providers of various types. The determination depends on the degree of planned interaction with enterprise personnel in the fulfillment of an enterprise process.

10.1.3 Performer

Because role is defined to be passive, there must be some mechanism for the animation of a role. In this case, animation means causing the activities defining a role to occur. The mechanism for animation is through the use of a role performer. A role performer (or performer for short) is some entity that animates a role by causing one or more of the activities that define the role to be accomplished. When a role is animated by a performer, an instance of the role is said to have been created. A role is passive, but a role instance is dynamic. Depending on the number of performers capable of performing the role, there may be many instances of the role in existence at any given time. Although a performer animates a role by performing some role activities, common usage is to say that the performer performs (or “acts out,” in stage terminology) the role. Since it engenders no confusion, that terminology is used because continued reference to “animating a role” would be somewhat awkward. A performer can be human, or it can be a machine. The main concept is that the performer is separate from the role. A performer may perform multiple roles, although usually not at the same time. However, a role may be performed by many performers simultaneously. Multiple types of roles can be defined. The type of role determines the needed characteristics of the role performer. The reverse is not true: the performer does not change the role (i.e., the activities to be performed). However, specific performers may, if allowed by the control mechanism, perform the role in different ways (e.g., the sequence of activities).
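The definitions above suggest a simple data model, shown here as a hedged sketch (the class and field names are this sketch's own, not the book's): a Role is a passive, unordered set of cohesive activities; a Performer, human or machine, animates a role, creating a dynamic RoleInstance; many instances of one role can exist at once, and one performer can perform several roles.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Role:
    name: str
    activities: frozenset  # unordered: order implies process, not role

@dataclass(frozen=True)
class Performer:
    name: str
    kind: str  # "human" or "machine"

@dataclass
class RoleInstance:
    """Created when a performer animates a role; tracks work done so far."""
    role: Role
    performer: Performer
    completed: set = field(default_factory=set)

    def perform(self, activity):
        # A performer can only accomplish activities that define the role.
        if activity not in self.role.activities:
            raise ValueError(
                f"{activity!r} is not an activity of role {self.role.name!r}")
        self.completed.add(activity)
```

For example, a human performer animating an "Order entry" role produces one instance; a second representative animating the same role at the same time simply produces a second, independent instance.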
10.1.4 Control

Control of which role activities a performer is to perform is achieved through one of two entities. In a play or a movie, the mechanism is a script. The script defines which roles are needed and tells the performer of each role what lines to say when, in what manner to say them, and how to behave as they are being said. In the enterprise environment, the control mechanism is a process specification. The process defines which roles are needed and guides the performer of each role as to how, when, and in what manner to perform the defined activities of the role. In that sense, a process can be considered to be an activity script. There is a difference between a script and a process that needs to be understood so there is no confusion in the utilization of the analogy. A script contains the instructions that enable performers to elucidate a story by animating the roles. Only one context (set of story attributes) is utilized. A process contains the instructions that enable performers to elucidate a story using many different contexts. The specific context utilized for any given instance of the story is provided by the scenario. In a play, there usually are multiple roles. The focus (control) of the play is passed from one role to another as orchestrated by the script. Likewise, in the enterprise there are multiple roles. Control is passed from one role to another as directed by the process. The role that is in control gives the performer of the role the ability to accomplish the specified activities of the role. Some processes allow control to be shared by multiple roles. That enables the roles to simultaneously have their activities accomplished by the assigned performers. The concept of passing control from one role to another is important in differentiating a role from a resource. It is possible for a given performer to perform more than one role. That in no way compromises the independence of the roles or diminishes their individuality.
The choice of performers for each role is a completely separate occurrence from the definition of the individual roles.

10.1.5 Resource

Except for the most elementary set of activities, for a performer to perform a role, some set of resources is needed. The resources can be in many forms. In a play, they are the props, the set, or the costumes the performers wear. Those resources are not usually considered to play a role. For example, if a performer, as part of the role being performed, uses a knife in some fashion, one does not normally say that the knife is performing a knife role. The knife is referred to as a prop. In general, in a play, humans perform roles and inanimate objects are resources of some sort. Likewise, if a role in an enterprise process requires that a performer utilize a resource as part of the performance of the role, the resource is not performing a role. Unfortunately, in this situation, the differentiation can be somewhat less clear than it is in the case of a play. That results in considerable confusion as to what is a resource and what is a performer performing a role. As a quick aside, it must be admitted that at some level all entities involved in elucidating a story can be considered resources, resulting in three potential types of resources: entities not performing a role, entities performing a role, and data. As an aid to understanding, it is useful to distinguish between those different types of entities from a resource perspective. Toward that end, role performers are not referred to as resources. Non-role-performer entities are considered role resources, and data also are considered a separate type of resource. With this interpretation, the discussion can continue. If a performer of a role uses a calculator to total some numbers in performing an activity, it probably is evident that the calculator is a resource and that it is not performing a (calculator) role.
Now assume that, instead of adding the numbers as part of the role activity, the performer of the role gives the list of numbers to another individual who totals the numbers and gives the result back to the performer of the original role. The performer then uses the total as needed in a subsequent activity. Is the individual who
totals the numbers a resource in the same way as the calculator, or is that person a performer of the "number adder" role? Now change the example slightly once more. Assume that a performer is performing the same role activity, but instead of either using a calculator or having another person perform the number addition, an automated function or application is employed. The computer application can be viewed either as a resource used to help the performer perform the activity or as a role activity being performed by an automated performer. Some means of determining a uniform response to those situations from a "role" perspective is necessary to provide a firm foundation for the specification of enterprise roles. The following three principles allow a deterministic answer to the role-versus-resource questions.

§ Principle 1: All inanimate objects (no intelligence) provide only a resource and do not perform a role. In that sense, data are always a resource because they have no inherent intelligence of their own.
§ Principle 2: All humans perform a role rather than merely provide a resource. It is dehumanizing for a person to have only the status of a resource.
§ Principle 3: Entities that contain machine intelligence (computers) and provide an automated function perform an automated role when they function independently of the role that initiates their operation. The concept of independence is defined next.

If an automated function has at least one of the following characteristics, it is considered independent of the initiating role and is defined to be an automated role. If it has none of these characteristics, it is not independent of the initiating role and is considered a resource of the initiating role.

§ The automated function provides some information to a role other than the one that initiated it. Storing data in a shared database is not considered to be providing information to another role unless database interaction is the primary purpose of the function.
§ The automated function is able to complete its operation after the initiating role terminates its normal activity. The initiating role does not expect a direct response from the automated function.
§ The automated function is capable of independently deciding which of multiple activities it should perform in response to a request.

Given those principles, the answers to the role-versus-resource questions would be determined as follows:

§ The calculator is a resource. It has no inherent intelligence.
§ The number adder person is a role performer. He or she is human.
§ The number adder computer application can be either a resource or a role, depending on which of the characteristics in the preceding list are involved. With the specific circumstances of the question as originally stated, the automated application is a resource because it has none of the defining characteristics of an automated role.

Although not explicitly defined, an automated role can utilize other automated functions that may themselves be resources or other automated roles. The application of these principles provides that determination in the same way as for human role performers. It should be admitted at this point that the principles utilized to differentiate between a role and a resource are somewhat pragmatic. They seem to follow the practice of many organizations, but exceptions almost always seem to exist. However, the author has not found any situations that cannot be resolved by applying the defined principles in a reasonable fashion. For the remainder of this chapter and the rest of this book, the determination of the existence of an automated role or the explicit definition of one is consistent with the defined principles.
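The three principles, together with the independence characteristics, form a deterministic decision procedure, so they can be sketched directly in code. The following Python sketch is illustrative only; the `Entity` attributes and function names are assumptions introduced here, not definitions from the text:

```python
from dataclasses import dataclass

@dataclass
class Entity:
    """An entity that participates in elucidating a process/story (hypothetical model)."""
    name: str
    is_human: bool = False
    has_machine_intelligence: bool = False
    # Independence characteristics for automated functions:
    informs_other_roles: bool = False      # informs a role other than its initiator
    outlives_initiating_role: bool = False # completes after the initiating role ends
    chooses_own_activities: bool = False   # decides which activity to perform

def classify(entity: Entity) -> str:
    """Apply Principles 1-3 to decide role versus resource."""
    if entity.is_human:
        return "role performer"            # Principle 2: humans always perform roles
    if not entity.has_machine_intelligence:
        return "resource"                  # Principle 1: inanimate objects (and data)
    # Principle 3: an automated function is a role only if it is independent
    independent = (entity.informs_other_roles
                   or entity.outlives_initiating_role
                   or entity.chooses_own_activities)
    return "automated role" if independent else "resource"

# The chapter's three examples:
print(classify(Entity("calculator")))                                  # resource
print(classify(Entity("number adder", is_human=True)))                 # role performer
print(classify(Entity("adder application",
                      has_machine_intelligence=True)))                 # resource
```

As the chapter concludes, the adder application classifies as a resource because none of the independence characteristics hold; setting any one of the three flags would make it an automated role.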
10.2 Role relationships

Figure 10.1 utilizes a modified E-R diagram to structure the relationships of the various terms needed to place roles in their proper context. Such a representation inherently presents a passive view of the structure. A dynamic aspect, associated with the development and usage of the relationship structure, also needs to be presented. Providing such a dynamic representation in a diagram format is difficult. However, by stepping through the structure in the sequence that a process implementation and operation would require, the use of the structure can be illustrated, although somewhat imperfectly at this point. Later chapters provide a great deal more insight into the dynamics of the structure.
Figure 10.1: Structure of role relationships.

Assume that a set of enterprise roles has been defined by specifying the activities each contains. The dynamic properties of the relationship structure are then illustrated by the following:

§ A script/process elucidates a story/scenario by specifying the particular roles involved and then controlling the performers as to how they perform their respective roles.
§ The control mechanism guides the performers as to when and in what manner they perform the specific role activities required by the given story/scenario.
§ The performance of an activity can, as necessary, employ resources or the direct CRUD (create, read, update, delete) of data. A resource can also directly access data and utilize the CRUD functions, if appropriate to the resource.
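This dynamic stepping-through of the relationship structure can also be sketched in code. The class and method names below are hypothetical, chosen only to mirror the structure of Figure 10.1 (a process controls roles, performers animate them, and activities may apply CRUD operations to data):

```python
class DataStore:
    """Minimal data entity supporting the four CRUD primitives."""
    def __init__(self):
        self.records = {}

    def create(self, key, value):
        self.records[key] = value

    def read(self, key):
        return self.records.get(key)

    def update(self, key, value):
        self.records[key] = value

    def delete(self, key):
        self.records.pop(key, None)

class Role:
    """Passive definition: a named set of activities."""
    def __init__(self, name, activities):
        self.name = name
        self.activities = activities

class Performer:
    """Entity (human or automated) that animates a role."""
    def __init__(self, name):
        self.name = name

    def perform(self, role, activity, store):
        # Performing an activity may employ resources or CRUD data as needed.
        store.create((role.name, activity), "done")
        return f"{self.name}: {activity} ({role.name})"

class Process:
    """Control mechanism: passes control from role to role, as a script does."""
    def __init__(self, roles):
        self.roles = roles

    def run(self, casting, store):
        # 'casting' maps role name -> performer; the choice of performers
        # is a separate occurrence from the definition of the roles.
        return [casting[role.name].perform(role, activity, store)
                for role in self.roles
                for activity in role.activities]

order_taker = Role("order taker", ["record order"])
bill_producer = Role("bill producer", ["produce bill"])
store = DataStore()
log = Process([order_taker, bill_producer]).run(
    {"order taker": Performer("Pat"),
     "bill producer": Performer("billing app")}, store)
print(log)
```

Note that the same `Role` object could be animated by a different performer on the next run without changing the role definition, which is the separation the chapter emphasizes.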
10.3 Enterprise role specification

The determination of an effective set of enterprise roles is a difficult undertaking and one for which few guidelines exist. Usually many different sets of roles could serve the enterprise reasonably well, but the identification of even one of those sets is nebulous enough that it almost never is attempted in practice. Roles generally are determined on a process-specific basis. The problem with that approach is the usual one that occurs when independent efforts define different elements of the same set:

§ The same role has different names.
§ Different roles have the same name.
§ Roles that should be mutually exclusive are not.
§ Roles that are needed for required functions do not exist, and that lack is not detected.
§ Many more roles than really needed are defined because possible reuse is not obtained.
It is possible to develop guidelines for the process-independent specification of enterprise roles so that the majority of those problems can be avoided. In addition, guidelines address the need to ensure that the set of roles conforms to enterprise management and operational philosophy. This section motivates and articulates those guidelines. Although developing roles from an enterprise perspective is useful, it is not possible to determine all the roles that the enterprise will need in that manner. There will always be a need to define new roles from a process perspective. Because of the large number of roles usually found in an enterprise and the inability of humans to deal effectively with that complexity, the need for additional roles will arise as processes are defined in enough detail to be implemented. Such exception-based specification is preferable to trying to determine all the enterprise roles from only a process-by-process examination.

10.3.1 Role orientation

The first step in the determination of a set of roles for the enterprise is to determine possible role orientations. Although orientation is only one of the role attributes that need to be discussed, it is introduced at this time because of its importance in determining an effective role structure. The other role attributes are considered in Section 10.4. Actual roles can consist of a single orientation, or they can combine multiple orientations.
Possible role orientations, along with some examples of each, are defined in the following list:

§ Organization unit: Warehouse worker, marketing department member, finance department member, manufacturing department member;
§ Position title/description: Member of staff, strategic planner, janitor, secretary, bookkeeper;
§ Management level: Foreman, supervisor, manager, department head, director, officer;
§ Salary level: Nonexempt, exempt band 1, exempt band 2;
§ Degree/professional designation: Engineer, accountant, physician, attorney;
§ Enumerated activities: "Makes copies, clears machine when it jams, calls service when service light comes on"; "Bolts on steering wheel, mounts dashboard, attaches radio antenna";
§ Service provider: Travel agent, workstation repairer, help desk representative;
§ Process step(s) performer: Bill producer, accounts payable check writer, order taker, customer complaint handler.

Examples of roles that combine orientations are as follows:

§ Engineer; manager; exempt band 2;
§ Member of staff, finance department member, strategic planner;
§ Assembly line worker, nonexempt, "bolts on steering wheel, mounts dashboard, attaches radio antenna."
The exact titles of the roles are not really important, nor are the orientations used to define them. Roles with various orientations can be mixed and matched as desired, as later examples illustrate. The most important criterion is that every enterprise activity needed for its operation is defined as part of at least one role. Although not quite as important, the number of activities placed in multiple roles should be kept to a minimum, to maintain consistency in role utilization. There is one important exception to that criterion. It may be necessary for some activities to be part of many roles (e.g., filling out a time card). That activity could be required by human resources for all employees and would implicitly be a part of all roles. An efficient way of handling such a case from the human resources perspective is through a general role class. The activities that belong to a specific role usually are not explicitly defined except for roles that have a job description that enumerates them. (Hence this often-related story:
When asked to do something a little different, the employee says, "I didn't know that was part of my job description.") Admittedly, having roles with implied activities can and does lead to some confusion. However, it provides the necessary flexibility to handle new activities and situations without the need to continuously create new roles or update the definitions of current ones. For example, the role of "design engineer" may not have an explicit set of activities associated with it. However, there generally is enough innate understanding of this role that it can be reasonably determined whether a given activity is contained in the role. That common association of activities may not be true of an "administrative assistant" role. Common sense must prevail in determining the degree of definition necessary for a role. Although it has been stated that the names and orientations of roles are not really important, it is also recognized that, from a practical perspective, the naming and definition of roles is of considerable interest to the individuals who are expected to perform them. Also, the enterprise usually is very interested in the defined set of roles to maintain some agreement with the desired operational characteristics of the business. To meet those needs, some general principles can be used in the definition of a set of enterprise roles. These guidelines are to be viewed as general because in any given enterprise the principles will, at times, conflict with each other. The importance and priority of each must be evaluated as role definition proceeds.

§ Roles should reflect the management style of the enterprise. If the enterprise is managed by organization entity, the role definitions should maintain an organization orientation. If the enterprise is managed by process, the roles should have a process orientation.
§ Roles should reflect any enterprise philosophy on the scope of individual job assignments. If it is expected that an individual must be able to perform a number of different activities, that should be reflected in role definitions that contain several activities. If the enterprise believes individuals should be limited to only a few activities, the role definitions should reflect that philosophy.
§ Roles that are more clerical in nature should have their activities defined in more detail than the activities of knowledge workers. Likewise, the activities of knowledge worker roles should be defined in more detail than those of executive roles.
§ Roles should reflect the characteristics of the enterprise. An enterprise in a stable, slow-moving industry (e.g., paper towel manufacturing) needs roles with activities defined in greater detail than the roles in an unstable, fast-moving industry (e.g., Internet software development).
§ Roles should reflect those areas that the enterprise wants to emphasize or deemphasize. If an enterprise needs to emphasize a certain area of its operation, roles should be defined that specifically address that area. If product A is the most important area of the enterprise, roles should be defined to specifically address product A (e.g., product A program manager, product A marketing manager). Conversely, if product A is to be deemphasized, roles directly involving products would be defined generically and not refer to product A specifically.

It is not necessary to keep roles static once they are defined. There are many reasons that any current set of roles will have to be changed in some fashion. Because of the use of role definitions in process specifications and other enterprise uses, referential integrity is important. When the role set changes, any uses of the roles that change must be identified and examined for any possible consequence. That requires that good configuration management be maintained, which in turn requires the use of repository techniques as part of the management process.
Any enterprise of significant size will have a large number of roles. A structured approach to the specification of those roles that meets the needs articulated earlier is required to manage the inherent complexity involved. Such an approach is outlined in
this discussion. It is presented only as a representation of the type of analysis needed to obtain a comprehensive enterprise role set. For convenience, object-oriented notation and concepts are used as a basis for representation of the ideas involved.

10.3.2 Role class structure

For the purpose of facilitating the introduction of the role structure, only the roles with an association value of employee are initially addressed. In addition, consideration of general role classes is deferred to Section 10.4.8. Assume that an enterprise has a traditional philosophy of management and the high-level organization chart depicted in Figure 10.2. In this case, most of the roles are going to be organizationally oriented, and a portion of an associated role class structure might take the form presented in Figure 10.3. Because of the complexity that would result if complete structures were shown, for the purposes of this and the following sections, only that portion of each structure necessary to illustrate the salient points being discussed is provided. It is relatively easy to extend the diagrams to any degree of detail; if readers are so inclined, it may be useful to model their own enterprise structures using the described principles.
Figure 10.2: An enterprise organization chart.
Figure 10.3: An organization-oriented role class structure. Note that the first level of the class hierarchy contains broadly defined roles similar to the major organizations of the enterprise. Subsequent partitions of the roles can represent smaller organizational units; finally, position descriptions generally are required (e.g., engineer, technician). It also is necessary to add other role orientations such as management level (e.g., foreman, manager) and enumerated activities that further define a role (e.g., television assembly worker, radio assembly worker). Now assume that an enterprise has a philosophy of management by process and utilizes the set of leaf processes depicted in Figure 10.4. In this case, most of the roles are going to be process oriented and a portion of a role class hierarchy might take the form presented in Figure 10.5.
Figure 10.4: Enterprise process definitions.
Figure 10.5: A process-oriented role class structure.

There is a considerable difference in the orientation of these role specifications from those defined from an organizational perspective. As might be expected, each type of orientation has advantages and disadvantages. The organization orientation makes it relatively easy to specify a wide range of activities (usually implicitly) in a given role and keep the number of roles somewhat low. The disadvantage is that the activities included in the role are not easily specified, and a noncohesive activity set can easily result. The process orientation results in an intuitive understanding of the activities included in a role. The disadvantage is that the roles easily can become fragmented, and a large number of roles with few activities can result. Although not explicitly shown in the figures, the two role orientations can easily be mixed, depending on the needs of the enterprise. For example, instead of the manager role (which probably is somewhat nebulous from a process perspective) included in the process orientation role structure in Figure 10.5, it might be more appropriate to substitute the executive staff or the administration staff role from the organization-oriented role class structure in Figure 10.3. As long as all the enterprise activities are included in at least one role, the roles can be defined in any way that makes sense for the business.
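Since object-oriented notation is used to represent the role class structures, the inheritance involved can be sketched directly. The class names below are loosely drawn from the chapter's examples, but the code itself is an illustrative assumption, not the book's own notation:

```python
class Role:
    """A node (branch or leaf) in a role class hierarchy."""
    activities: frozenset = frozenset()

    @classmethod
    def all_activities(cls) -> frozenset:
        # A leaf role inherits activities common to its branch (ancestor) classes,
        # so common activities need be stated only once, high in the hierarchy.
        acts = frozenset()
        for klass in cls.__mro__:
            acts |= getattr(klass, "activities", frozenset())
        return acts

# General role class (Section 10.4.8): activities common to every employee.
class Employee(Role):
    activities = frozenset({"fill out time card"})

# Organization-oriented branch role (no activities of its own here).
class ManufacturingMember(Employee):
    pass

# Leaf role mixing orientations: organization unit + enumerated activities.
class AssemblyLineWorker(ManufacturingMember):
    activities = frozenset({
        "bolt on steering wheel",
        "mount dashboard",
        "attach radio antenna",
    })

print(sorted(AssemblyLineWorker.all_activities()))
# The leaf role carries its enumerated activities plus the inherited
# "fill out time card" from the general employee class.
```

A process-oriented hierarchy would look the same structurally; only the branch classes (e.g., order-handling roles instead of organization units) would differ, which is why the two orientations mix so easily.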
10.4 Role attributes

In addition to role orientation, a number of other role attributes are needed to adequately define the characteristics of individual roles. A useful set of role attributes and associated values is provided in Table 10.1.

Table 10.1: Role Attributes and Values

Attribute            Value Set
Orientation          Organization unit, position title/description, management level, salary level, degree/professional designation, enumerated activities, process step(s) performer
Association          Internal, external
Performer type       Human, automated, institution
Performer standing   Internal, external
Cardinality          Singular, multiple
Totality             Whole, part
Persistence          Permanent, temporary
Node                 Branch, leaf
Dependence           Primary, shadow, tandem
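Table 10.1 maps naturally onto enumerated types and a record structure, which is how such an attribute set might be held in a repository. The following sketch is a hypothetical rendering; the names and defaults are assumptions for illustration, not part of the book's specification:

```python
from dataclasses import dataclass
from enum import Enum

class Association(Enum):      # also reused for performer standing
    INTERNAL = "internal"
    EXTERNAL = "external"

class PerformerType(Enum):
    HUMAN = "human"
    AUTOMATED = "automated"
    INSTITUTION = "institution"

class Persistence(Enum):
    PERMANENT = "permanent"
    TEMPORARY = "temporary"

class Node(Enum):
    BRANCH = "branch"
    LEAF = "leaf"

class Dependence(Enum):
    PRIMARY = "primary"
    SHADOW = "shadow"
    TANDEM = "tandem"

@dataclass
class RoleSpec:
    """One row of a role repository, following Table 10.1."""
    name: str
    orientation: str                 # e.g., "organization unit", "process step performer"
    association: Association = Association.INTERNAL
    performer_type: PerformerType = PerformerType.HUMAN
    performer_standing: Association = Association.INTERNAL
    cardinality: int = 1             # singular (1) vs. multiple (>1 cooperating performers)
    whole: bool = True               # totality: whole vs. part
    persistence: Persistence = Persistence.PERMANENT
    node: Node = Node.LEAF
    dependence: Dependence = Dependence.PRIMARY

# Two roles from the chapter's billing example (Section 10.4.9):
bill_checker = RoleSpec("customer bill checker", orientation="process step performer")
billing_manager = RoleSpec("customer billing manager",
                           orientation="management level",
                           dependence=Dependence.SHADOW)
```

Representing the attributes this way makes the referential-integrity checks discussed earlier straightforward: a repository can validate every role record against the closed value sets before it is used in a process specification.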
10.4.1 Orientation

The orientation characteristics were discussed in Section 10.3.1; the reader is referred to that discussion for examination of each of the possible values.

10.4.2 Association

Enterprise roles usually are considered to be internal to the enterprise. However, as has been discussed, roles can be defined that are external to the enterprise (e.g., customer). External roles must be used with extreme caution because the degree of control over the performers is considerably less than that usually assumed for internal roles.

10.4.3 Performer type

Human and automated performers were discussed in Section 10.1.3. There can also be institution performers, which always have a standing (see Section 10.4.4) external to the enterprise. Those performers usually are known as service providers. For example, assume that a needed activity in an order-handling process is to determine a prospective customer's credit rating. That activity could be assigned to a role identified as some type of institution such as a credit bureau. All that would be known about the activity is that it will be performed by the assigned institution, hence the designation.

10.4.4 Performer standing

A performer can be located internal or external to the enterprise. An external role always implies an external performer. Internal roles, however, may be performed by external performers. There are two aspects to the use of external performers for internal roles. The first is the use of consultants and contractors, who frequently are utilized to perform internal roles. However, from an economic point of view, they are the same as employees, since their services must be compensated. Another, more desirable, aspect occurs when it is possible to get customers and suppliers to perform internal roles. In the discussion in Section 10.1.2 of the definition of the term activity, it was noted that an internal role can be performed by a customer under the appropriate circumstances.
From the enterprise view, these types of external performers essentially are providing “free” work. As part of the ongoing analysis that should be performed as part of role life cycle management, there should be an examination of the possibility of customers and suppliers performing internal roles.
10.4.5 Cardinality

The cardinality of a role indicates the number of human performers that must closely cooperate to perform the role activities effectively. That number usually is one, meaning that a single individual performs the activities of a given role instance. There is a growing tendency, however, to use a team approach in the performance of certain activities. For example, assume that an activity is to "plan next year's capital budget." That probably would be performed by multiple individuals acting together, and the role containing the activity could be known as the "capital budget team." That would be a role with a cardinality greater than one.

10.4.6 Totality

When the cardinality of a role is greater than one, another aspect of the role must be considered. For example, in the capital budget team role, each team member (role performer) also has an individual role associated with the team role. That role could be as simple as "capital budget team member" or the somewhat more complex "capital budget team trend analysis member." In any event, the collective (cardinality greater than one) role would have a totality value of "whole" because it consists of all of the activities assigned to the role. The individual roles would have a totality of "part" because they would reflect only part of the activities of the entire role. The totality attribute should not be confused with the node attribute (discussed in Section 10.4.8). Totality is defined only in conjunction with a role of cardinality greater than one. The node attribute is defined for all roles.

10.4.7 Persistence

The persistence of a role indicates whether the role is temporary or permanent. A permanent role is created with no identified time for elimination. It is to exist as long as it is useful for the business. A temporary role is created for a specific purpose and time period. It will then disappear. Most roles are permanent, but in certain circumstances temporary roles are quite useful.
The activities in a temporary role may themselves be transient, or they may be permanent and only temporarily assigned to the role. For example, assume an enterprise purchases another company and it is necessary to merge the operations of the two organizations. Temporary roles could be created to accommodate the temporary activities needed. Both are temporary because after the organizations merge, neither the roles nor their activities will be needed. In another example, assume that a permanent activity to "analyze competitor products" is assigned to a role with a cardinality of one. Now assume that a significant change occurs in the industry and that to "rebase" future analyses, a team of experts in different areas is needed to perform the activity for a given period of time. After the rebasing takes place, the team is no longer needed, and the activity can revert to its original role. In this case, the activity did not change. However, the role necessary for performing the activity did change on a temporary basis.

10.4.8 Node

In the role hierarchies illustrated in Figures 10.3 and 10.5, the roles are either branch nodes or leaf nodes. The branch nodes have child nodes, while the leaf nodes do not. In general, leaf nodes are the only ones used as operational roles by the enterprise. That includes the use of roles as part of the process definition. The main purpose of the branch nodes is to serve as a vehicle for the eventual definition of the leaf nodes. However, they are also useful, in some situations, as a means to indicate a general role class. An example of the use of a general role class is the definition of a role called "employee." This role can stand for all employees when necessary. A process that puts new employees on the payroll could use a general role such as this to avoid having to enumerate all the different employee roles involved. Roles of this type could be included
in the role hierarchy shown in Figure 10.6. The employee role would then become the root branch role for the hierarchies shown in Figures 10.3 and 10.5.
Figure 10.6: Addition of general roles to the class hierarchy.

It can be argued, with some justification, that the branch nodes do not need to be characterized and that, therefore, this attribute is not needed. However, there is some use for branch nodes in the general characterization of certain role classes and for containing activities common to large numbers of roles. Also, characterization of the leaf nodes can be made easier through standard class inheritance techniques. This attribute will be retained and the branch nodes characterized as needed.

10.4.9 Dependence

Roles can depend on other roles in addition to the dependency defined implicitly through their use in a process. Roles that have no dependencies except as obtained through use in the same process are called primary roles. Shadow roles are roles that depend on one or more primary roles and are used to perform activities that closely interact with those of the primary roles. The usual example is that of a management role. For example, assume a role of "customer bill checker." Assume further that a management oversight role of "customer billing manager" also has been defined. If an error is found by the bill checker role performer, the relevant information is sent to the manager role. Other primary roles in the billing process also could utilize the same customer billing manager role for errors, approvals, and other management functions. The customer billing manager role is a shadow role because it does not exist independently of the primary roles that elucidate the customer billing process. Almost all management roles are shadow roles because they cannot exist independently of the roles they manage. The role being managed can, however, exist without the management role. Another type of shadow role would be a support role that provides additional information to the primary role being supported.
A popular example would be a sales support role that contains research activities designed to help the sales role provide needed information to prospective customers. The sales support role cannot exist independently of the sales role, but the reverse is not true. Definition as a shadow role does not reduce the importance of the role to the enterprise in any way. It is simply a way of characterizing role dependencies. Tandem roles are roles that require each other and must be defined together. Although the term tandem implies two, any number of roles could be defined as part of a group being utilized simultaneously. An example of tandem roles would be a trainee (student) role and a trainer (teacher) role. It would be difficult to separate the two roles. Although each could be used in some situations without the other, neither makes sense unless the other role is associated with it.
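The shadow and tandem constraints lend themselves to a mechanical check: a shadow role must depend on at least one primary role, and tandem roles must require each other. A minimal sketch follows, using hypothetical role names drawn from the chapter's examples; the function and data shapes are assumptions for illustration:

```python
def validate_dependencies(roles, depends_on):
    """Check shadow/tandem constraints (per Section 10.4.9).

    roles: dict of role name -> dependence kind ("primary" | "shadow" | "tandem")
    depends_on: dict of role name -> set of role names it requires
    """
    errors = []
    for name, kind in roles.items():
        requires = depends_on.get(name, set())
        if kind == "shadow" and not requires:
            # A shadow role cannot exist independently of a primary role.
            errors.append(f"shadow role '{name}' has no primary role")
        if kind == "tandem":
            # Tandem roles must be defined together: dependency must be mutual.
            for other in requires:
                if name not in depends_on.get(other, set()):
                    errors.append(
                        f"tandem roles '{name}' and '{other}' must require each other")
    return errors

roles = {"customer bill checker": "primary",
         "customer billing manager": "shadow",
         "trainer": "tandem",
         "trainee": "tandem"}
depends_on = {"customer billing manager": {"customer bill checker"},
              "trainer": {"trainee"},
              "trainee": {"trainer"}}
print(validate_dependencies(roles, depends_on))  # []
```

A check of this kind fits naturally into the repository-based configuration management discussed in Section 10.3.1, where role changes must be examined for their consequences.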
10.5 Role utilization

The main reason to define roles in the enterprise is to provide a useful mechanism for ensuring that the enterprise is operating efficiently and effectively. That is accomplished using both construction and analysis techniques. Without the use of role definition, there is no easy means to determine how well employees and their automated tools (automated functionality) are being used in performing the activities of the enterprise.
Although automated roles have been defined for convenience, the main focus of roles is toward the humans in the enterprise. That focus is evident in the following discussion. In current organizations, especially those involved in high-technology industries, human capital can be significantly more important than financial capital in ensuring future success, although both certainly are necessary for survival. Human capital is really a shorthand way of referring to the collective experience and abilities of the employees of an enterprise. Roles represent an important tool for determining whether (1) the human capital is utilized wisely and (2) individuals are comfortable and reasonably happy with their assigned activities. The discipline of explicit role definitions also provides a mechanism for determining how the enterprise should respond to changing business and technology pressures from the human capital perspective. The explicit use of role specifications allows an enterprise to determine, for a specific point in time, the characteristics and numbers of employees that are needed to effectively and efficiently operate the enterprise or, conversely, how to best apply the talents and experience of individuals who are already available. Job descriptions, which are a form of role definition, historically have been used in the enterprise to help define individual positions and the desired characteristics of the employees who fill the positions. That, however, is a narrow utilization of the role concept and does not easily permit any analysis as to the overall effectiveness of the positions as defined. In addition, not all (or even a majority of) positions in an organization may have job descriptions. In that case, because the definition of roles is not pervasive in the enterprise, their usefulness as an analysis tool is further diminished.

Selected bibliography

Lupu, E., and M. Sloman, "A Policy-Based Role Object Model," Proc. 1st Internatl.
Enterprise Distributed Object Computing Workshop, Gold Coast, Australia, Oct. 24–26, 1997, pp. 36–47.
Plaisant, C., and B. Shneiderman, “Organization Overviews and Role Manage- ment: Inspiration for Future Desktop Environments,” Proc. 4th Workshop Enabling Technologies: Infrastructure for Collaborative Enterprises, Morgantown, WV, Apr. 20–22, 1995, pp. 14–22.
Chapter 11: Information modeling

Effective management and utilization of information are required for an enterprise to obtain and maintain a competitive advantage. Executive decisions, marketing strategies, process improvement, and personnel motivation, along with many other crucial activities, all depend on having the right information at the right time in the right format. With the glut of information currently available, there is a growing need to identify and make effective use of the specific sets of information that will enhance the operation of the enterprise.

An emerging technology known as knowledge management is attempting to deal with the general problem of information overload. Although a general discussion of that technology is well beyond the scope of this presentation, there are some similarities with the discussion in this chapter, including a need to utilize a higher level of abstraction than data, extensive modeling techniques, and the use of repository technology.

Modeling allows a consistent approach to the identification, control, and utilization of information. Although information modeling (or data modeling, as it historically has been called) has been in use since the beginning of computer applications, there is a growing need for more extensive models than the relatively restrictive ones used in the past. The purpose of this chapter is to motivate and develop a comprehensive information model suitable for the fast-changing automation environment of the enterprise. To keep with historical precedent, the term data modeling is used throughout this discussion, except when it is necessary to differentiate between the concepts of data and information.
11.1 Background

Historically, the modeling of data has used several different formats. From the earliest days of general computer usage and data processing, data have been modeled using the constructs of programming languages and still are for local data that will not be shared between software programs. Fortran and Cobol, along with most other languages, let programmers determine the layout, names, and characteristics of the data (data model) needed by a program. Initially, the data were unique to the program in which they were defined. Shared data came later and required that the model (e.g., Fortran Common) be copied into all programs that needed it.

The purpose of those primitive data models was simply to allow the programmer to easily couple the program logic with the data it required. Those models could be considered logical models, because the programmer usually was not concerned with how the data were actually stored in memory (the physical model). Some programmers, however, did understand how the compiler directed the data to be stored and were able to circumvent the model by directly accessing physical storage. (It should be noted that the problem of circumventing models of various types continues to this day, usually with unanticipated and unfortunate results.)

The need to define data models as an independent (from software development) activity resulted from the development of mass storage devices. That need motivated some additional modeling concepts. File structures had to be defined with some thought toward the most efficient way to access and utilize the sequential data format. From that need came the concepts of the master file and transaction files. That operational data model is still in use today, although some of the forms have changed to keep pace with storage technology. In addition to the modeling needed for efficient file design and utilization, another modeling concept became popular.
It resulted from the need for a procedure to determine:
§ What specific data were needed for the individual automated functions;
§ The detailed characteristics of the data so they could be specified to the software applications.

The result is the E-R model, which is usually depicted in diagrammatic form. That model, or E-R diagram, as it was usually known, was quickly adopted because the number of functions being automated was increasing rapidly, as was the amount of data that needed to be accommodated. An example of an E-R diagram is shown in Figure 11.1.
Figure 11.1: An example of an E-R diagram.
In concept, an E-R diagram is relatively simple. The entities are represented by the nodes (circles) of the diagram, and the relationships are represented by the lines. Both entities and relationships can have attributes, illustrated by annotations in the figure, the values of which determine their characteristics. Some additional, more complex constructs are needed for practical modeling situations. For the purposes of this discussion, however, it is not necessary to go further into the details of E-R model notation.

Because of the power of the E-R model to identify problems with a data specification, it was applied to the modeling of the entire enterprise. It is possible, although somewhat difficult, as these efforts indicated, to develop an E-R diagram for an entire enterprise. Whether or not that was a useful activity depended heavily on the intended use of the model. If the purpose was to identify and understand enterprise data, then it could have been a useful exercise. If the purpose was to use the model as the basis for software development, then the inherent structural complexity could, and usually did, greatly hinder the use of the model. In many cases, that resulted in the development ignoring the data model and using no model at all. E-R modeling fell on hard times.

The development and use of DBMSs gave rise to the new specialty of database administrator (DBA), who was responsible for:
§ Mapping the enterprise data into the structure of the DBMS being used (logical model);
§ Ensuring that the DBMS was efficiently used (physical model).

Software that used DBMS-resident data had to understand the logical model of the data to navigate through the DBMS structures to locate the desired data. Providing for efficient response times usually was accomplished by tuning the parameters that determined how the DBMS stored the data on the bulk storage media being used. With the advent of relational DBMS structures, data modeling took a somewhat different path.
Considerable emphasis, from a modeling perspective, was placed on data normalization. The original purpose of normalization was to minimize the need to store a given data item multiple times and thereby improve storage efficiency and integrity. That is accomplished by defining data units that contain a cohesive set of data elements and, with the exception of links between units, allowing a data element to reside in only one unit. In a relational model, those units are structured as tables.

While normalization is an excellent concept for a physical data model, there was an unforeseen effect. The logical data models used for coupling with the software were also being subjected to normalization. That type of design made it difficult to find the data contained in an intermediate data structure (e.g., purchase order). Link after link would have to be followed, sometimes through three and four levels, before individual data elements of interest could be found. Normalized data models, much like the E-R models, were not well suited to the needs of software development.

The latest evolution of DBMS formats is based on objects, a format that has considerable appeal for use with software based on object-oriented designs and implementations. It also is considered useful for storing data that contain information such as video, audio, text, and graphics. Data models for object DBMSs result from the specification of the attributes for the object classes of interest. This area is of great general interest, but the state of the art has not advanced to the point where it would affect the discussions in this chapter.
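The link-following problem described above can be sketched in a few lines. The table layout, names, and values below are hypothetical, invented purely for illustration; they show how an intermediate structure such as a purchase order must be reassembled from several normalized units:

```python
# Hypothetical normalized units: each fact is stored exactly once,
# and related units are connected only by key links.
customers = {101: {"name": "Acme Corp", "city": "Norwood"}}
addresses = {7: {"customer_id": 101, "street": "685 Canton St"}}
orders = {5001: {"customer_id": 101, "address_id": 7, "item": "router"}}

def purchase_order_view(order_id):
    """Reassemble the intermediate 'purchase order' structure by
    following links across the normalized units."""
    order = orders[order_id]
    customer = customers[order["customer_id"]]
    address = addresses[order["address_id"]]
    return {
        "item": order["item"],
        "customer_name": customer["name"],
        "street": address["street"],
        "city": customer["city"],
    }

print(purchase_order_view(5001)["customer_name"])  # Acme Corp
```

Even in this toy case, two link traversals are needed to recover one logical structure; with three or four levels of links, the software's view of the data becomes far removed from the purchase order it actually works with.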
Although several aspects of data models (physical, logical, operational) appeared at various times in the evolution of data modeling, the focus of the models almost always was on the following three concerns:
§ The specification and relationships of individual data elements;
§ The physical storage format of the data at the data element level;
§ An efficient means of handling the traffic load on the data storage mechanism.

The use of data models to facilitate the specification and utilization of automation functionality generally was not considered. One of the few exceptions was the development of the master and transaction file data
model, which was designed to ease the software design and operation problems inherent in the use of sequential files.

While the data-centric concerns are still quite valid, maintaining the data modeling focus on them exclusively is no longer sufficient. Data models must also support the efficient specification and utilization of automation functionality in the enterprise. Inclusion of that view requires consideration of the information aspects of the enterprise as well as a more effective way of incorporating that level of abstraction into the specification and implementation of automation functionality. To some extent, incorporation of both views requires shifting the overall focus of data modeling to the information-level abstraction rather than the data level, although both must be included in the resultant model. Section 11.2 contains models designed to provide the broader modeling scope and the shift of emphasis needed to address both the information and data levels of abstraction.

For the purposes of the discussions in this book, information is considered to be data with a usage context. For example, "customer name" is data because there is no indication as to how it will be used. "Customer name" in a pending-order structure is information, because "pending order" indicates how the customer name is to be used. In general, information utilizes a higher abstraction level than the data it contains. That differentiation will become clearer as the discussion proceeds.
11.2 Data modeling system

The purpose of the data modeling system as defined in this section is to provide a comprehensive, unified structure that facilitates the efficient specification and utilization of information and data in the automated operations of the enterprise. The system is designed to meet the following needs (both old and new):
§ The determination of the data elements needed in the enterprise;
§ The use of the data elements to provide an effective physical implementation of the data;
§ The ability to specify the data needs of the automation functionality at the information level;
§ The ability to specify, access, and use the physical data elements at the information level;
§ The ability to ensure that changes in any one area are reflected in all other affected areas of the modeling system.

Those needs all must be met without incurring significant overhead costs to use the data or requiring an excessive amount of administration and management effort. That necessitates an approach that incorporates a method for self-definition of the data and that uses a repository as an integral part of the model structure.

11.2.1 Modeling system dynamics

Figure 11.2 illustrates a data modeling system. Four component models must interact to provide the overall system model. Each model has a specific purpose in the management of the data specification. In addition, the DBMS may have its own internal model. However, discussion of DBMS internal construction is well beyond the scope of this presentation. The global aspects of the data modeling system are considered first to determine the purpose and usage of each of the individual models. Once those requirements have been established, the structure of each of the individual models can be defined and discussed in additional detail.
Figure 11.2: Structure of a data modeling system.

In addition to the model components, a repository component is also included in the system diagram. The repository indicates the requirement for saving all of the model information as well as the ability to examine the modeling system for specification problems or improvement opportunities. Maintenance of the referential integrity of the model also requires repository services. As the discussion continues, it should be kept in mind that many of the requirements placed on the models and their relationships are dependent on use of a repository.

The modeling system is dynamic, although the diagram is not able to depict that aspect very well. To convey an understanding of the model dynamics, a description of a typical sequence of events is utilized, along with a description of the major characteristics of each component model as it is invoked in response to the data system operation.

The first activity is the development of an enterprise logical data model. The purpose of that model is to determine the data needs of the enterprise as a whole. As discussed previously, any attempt to develop an enterprisewide definition of any type is fraught with difficulty. That certainly does not mean it is not a useful effort, however. It can serve as an excellent starting point to which additional items can be added as normal operations necessitate.

Once the logical model has been defined, it is used for two entirely different purposes. The first, and probably the most important, purpose is to provide the structural basis for defining the physical model. The physical model is used to partition the data such that they can be efficiently accessed by the operational software. The characteristics of the DBMS product being utilized as well as the platform on which it resides are also major inputs to the physical data model.
The physical data model must meet the intent of the logical model, but it is not required to mirror the logical model in every aspect. In fact, in most cases it will not be able to do so and maintain adequate access characteristics.

Because the logical model defines the set of enterprise data elements and their associated characteristics, it also forms a natural repository of data elements for other models that need that information. Although the structure of the data elements used in the logical model usually is not appropriate for incorporation into the other models, the ability to provide a common set of data element names and associated characteristics is important in maintaining consistency throughout the entire modeling system.
The information model provides the mechanism for defining and utilizing data as part of the process specification. It is defined independently of the logical data model, and its constructs are specifically designed to complement and integrate effectively with those used to define the automation functionality. Because the components of the information model eventually must be defined down to the data element level, the use of the available element definitions from the logical model simplifies that aspect of model development. In addition, given that the information model is designed to elicit all the data element needs for a specific process implementation, it readily can be determined if there is a gap in the definitions of the logical model for the process of interest.

The purpose of the operational model is to efficiently couple the data accessed through the DBMS with the automation software. As such, it is not unlike the purpose of the original data models specified through the use of programming language constructs or the master/transaction file model.

This completes the dynamic description of the data modeling system. It should be noted that each component model is defined from needs imposed from outside the data system as well as needs that result from adjacent system components. The operation of the system is continuous because it must respond rapidly to any changes in the needs of the enterprise. The decoupling of the component models and their close integration with the development phase with which they are associated make that kind of response possible. As the components targeted to the specification and execution of automation software, the information model and the operational data model specifically are used as integral parts of the process implementation methodology and are covered in detail in the following sections.

11.2.2 Information model

The information model structure and specification procedure are illustrated in Figure 11.3.
This discussion proceeds in the context of a business process specification, which is the preferable approach to the development of automation functionality.
Figure 11.3: Information model.
11.2.2.1 Information flows

The first activity is the definition of information flows for each step of the business process. The initial definition of an information flow consists of a name and a business-oriented description of its purpose and contents. The flows depict the logical types of information needed for the successful execution of the process step. Information as it is used in this discussion is merely a context specification at a relatively high level of abstraction presented in terms understandable by business-oriented personnel. The specification format would also be of interest to any technical personnel who are not familiar with the specifics of traditional data modeling. Examples of
information flows would be: "customer profile," "repair ticket," "items purchased," and "credit rating."

The information flows should be defined independently of any current source of data (e.g., database, legacy system). The goal is to identify the inherent information needs of the process and not to determine how to get at the physical data the information represents. The link between the physical data and the automation functionality is specified by other components of the data modeling system. It should be noted that it may be difficult for the participants in this information identification effort to divorce themselves from their knowledge of the current locations of the information. However, that must be accomplished if the information flow specifications are to accomplish their purpose. The intention of this type of specification is to separate the identification of information need from the details of how the data elements are specified in the logical data model. As has been stated, the logical data model is not in a form that can be easily adapted to procedures for information determination.

Although the term flow is included because of its historical use, a dataflow diagram in the classical sense is not required. A listing of the information flows for each process step and their major characteristics (e.g., input, output, description, timeliness) is sufficient. The specification of the information flows at this time serves two basic purposes:
§ It provides an indication as to the complexity of the process step.
§ It allows the business SMEs to provide advice and counsel on the information requirements of the process in addition to the functionality aspects.

The specification of the logical sources and sinks of the flows is not immediately necessary and probably premature. However, that identification must be accomplished before the information flow specification is completed.
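Such a listing can be kept as simple structured records. The sketch below is illustrative only; the flow names, directions, and timeliness values are assumptions, not taken from any particular process:

```python
# A minimal information-flow listing for one process step.
information_flows = [
    {"name": "customer profile", "direction": "input",
     "description": "Basic identifying information about the customer",
     "timeliness": "current"},
    {"name": "credit rating", "direction": "input",
     "description": "Externally obtained measure of creditworthiness",
     "timeliness": "within 30 days"},
    {"name": "items purchased", "direction": "output",
     "description": "Items the customer has agreed to buy",
     "timeliness": "current"},
]

def flows_for(direction):
    """List the flow names a process step consumes or produces."""
    return [f["name"] for f in information_flows if f["direction"] == direction]

print(flows_for("input"))   # ['customer profile', 'credit rating']
```

Note that nothing in the listing says where the data live; the records capture only the business-level need, which is exactly the separation the text calls for.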
11.2.2.2 Information structures

After the information flows are specified, the information structures derived from the information flows must be identified. As indicated in Figure 11.3, an information flow can consist of an entire information structure, or there can be several information flows defined within a structure. The information structure serves as the authoritative source for the information in the flow. Because a basic philosophy of the modeling approach presented here is that the same information can be replicated in many places, it is necessary to identify a means of obtaining the "official" copy of the information contents. By convention, the specific information structure defined for an information flow contains the official copy of the information in that flow.

As an example, assume that a needed information flow is "customer contact information." The official source of the information could be a set of data called "customer profile" that would contain the information for customer contact as well as a significant amount of other potential information about the customer (e.g., credit history). "Customer profile" would then be identified as the information structure that contains the customer contact information. Note that all of "customer profile" will be a single information flow under some circumstances, such as initial creation of a profile or a general update of the information contained in the structure. Generally, every information structure will also be defined as an information flow.

Because an information structure is also an information flow, it does not have to have a different degree of definition than its component flows to be identified as an information structure. In addition, the following discussion of selecting data elements for each flow also applies to information structures. If possible, the data elements for the information structure should be selected first.
That makes it easier to select those for the other information flows it contains. In many cases, however, the data element population proceeds in an opposite sequence. The data elements for the component flows are selected first, and the union of those are used to populate the information structure. Either way works, as does a combination approach. In any case, the data elements of
each information flow must be consistent with the data elements contained in its defining structure.
11.2.2.3 Data element specification

After the information structures are identified, it is necessary to determine which specific data elements will constitute each information flow. The basic approach is to use the description of the flow to examine data elements:
§ As defined in any of the structures of the logical data model;
§ As contained in the defining information structure of the flow.

A determination then can be made as to which elements would be appropriate for inclusion in the definition of the flow. The elements selected are examined to determine what gaps, overlaps, conflicts, or inconsistencies may exist. Those problems are resolved by going back to the logical data model for new or replacement elements and eliminating currently specified ones. This is an iterative procedure and must be repeated several times before an acceptable information flow specification at the element level can be achieved. The need for data elements that have not been specified in the logical data model is also quite likely. In that case, they must be added to the logical model as well as the information flow.

When a set of data elements has been identified for an information flow, the elements must also be examined in light of the functionality for the process steps in which they are used. That analysis may show the need for additional data elements or the inclusion of inappropriate ones. The process always remains the base specification against which all modeling constructs must be validated.

The same data element may appear in multiple flows. Elements are not normalized across flows. Remember that the purpose of the information flow is to facilitate the specification of the data needed for a process step. Molding it into an artificial construct that will obscure the purpose of the information must be avoided. A convenient method of managing such replication through naming conventions is discussed in Section 11.2.2.5.
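The gap check described above amounts to simple set arithmetic between a flow's candidate elements and the elements already defined in the logical data model. The element names below are hypothetical:

```python
# Elements currently defined in (a fragment of) the logical data model.
logical_model_elements = {"name", "street address", "city", "state",
                          "zip code", "main telephone number"}

# Elements a proposed information flow needs.
flow_elements = {"name", "main telephone number", "pager number"}

# Elements the flow needs that the logical model does not yet define;
# per the text, these must be added to the logical model as well as
# to the information flow.
gaps = flow_elements - logical_model_elements
print(gaps)  # {'pager number'}
```

Each iteration of the procedure shrinks this gap set until the flow is fully covered by the logical model.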
Continuing with the "customer profile" example, assume that in addition to a "customer contact" information flow, a "customer type" flow is defined for use in selling new products. The latter flow indicates the level of services needed by the customer. Some of the elements included in the "customer contact" flow might be:
§ Name;
§ Street address;
§ City;
§ County;
§ State;
§ Zip code;
§ Main telephone number;
§ E-mail address;
§ Fax number.

Some of the elements included in the "customer type" flow might be:
§ Name;
§ Main telephone number;
§ Customer size;
§ Mobile phone number;
§ Other phone numbers;
§ Pager number;
§ E-mail address;
§ Fax number.
As stated, the purpose of those elements is to indicate the level of services utilized by the customer. Note that some of the same elements are used in each flow since the desire is to have each flow as a complete entity. Even though an element appears in multiple flows, it would appear only once in the information structure itself. For the above assumptions, the elements in the "customer profile" structure would be (in no particular order):
§ Name;
§ Street address;
§ City;
§ County;
§ State;
§ Zip code;
§ Main telephone number;
§ E-mail address;
§ Fax number;
§ Customer size;
§ Mobile phone number;
§ Other phone numbers;
§ Pager number.

Additional information flows defined for this structure could add more elements, or they could use those elements already defined for the structure.
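The union behavior described above can be checked mechanically. Using the element lists from the example, the structure ends up with 13 distinct elements even though four of them appear in both flows. A minimal sketch:

```python
# Element sets for the two flows, following the lists in the text.
customer_contact = {"name", "street address", "city", "county", "state",
                    "zip code", "main telephone number", "e-mail address",
                    "fax number"}
customer_type = {"name", "main telephone number", "customer size",
                 "mobile phone number", "other phone numbers",
                 "pager number", "e-mail address", "fax number"}

# Shared elements appear in both flows but only once in the structure:
# the structure is simply the union of its flows' elements.
customer_profile = customer_contact | customer_type
print(len(customer_profile))  # 13
```

Adding a new flow to the structure is then just another union, which is why existing software that uses the earlier flows is unaffected.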
11.2.2.4 Datastores

Once the data elements of an information flow have been determined, a source or a sink of the flow must be identified. Such a source or sink of an information flow is called a datastore. The base location of an information structure is also in a datastore. An individual datastore may be a source of information, a sink for information, or both.

Datastores may have a considerable amount of implied intelligence, although they can always be reformulated (at a cost) as containing no intelligence. That aspect is illustrated in Figure 11.4. Requesting information from a datastore is accomplished through a key mechanism. The request key can be complex or simple; an example that uses a complex key is given later. The datastore structure is compatible with the models developed in the following chapters.
Figure 11.4: Datastore configurations.

As an example, assume that a datastore is defined that serves as a source of equipment costs. The input (or key) to obtain cost information from the datastore is a specific equipment configuration. The output is the cost of the configuration. The datastore could be organized as a table that contains entries for each possible type of configuration and the cost associated with each configuration. With a large number of possible combinations that may be undergoing considerable change, this approach easily could be impractical. Another approach is to create a program that calculates the cost based on the configuration presented and a set of business rules that contain the costing policy.

From a user perspective, there is no difference between these two implementations (except possibly the response time). The datastore is specified only on the basis of the information desired (configuration cost) and not on how it is obtained (e.g., table or program logic). If a datastore consists of more than static data, it sometimes is referred
to as an intelligent datastore. The same discussion can be applied to the other CRUD operations. The user view remains the same, but the implementation may vary considerably. More information on the use of datastores is given in Chapter 13, which discusses the structure and use of functionality elements called actions. Actions interact with datastores to access the data needed to perform the indicated function. Actions assume that all functionality is provided by intelligent datastores regardless of actual implementation.

If existing datastore definitions are available from the previous implementation of other processes, it would be useful to examine their definitions and data elements to determine if they would be suitable for the current process. That is the data equivalent of reusing process activities and associated functionality. The determination as to the suitability of a previously defined datastore can be accomplished only through a detailed knowledge of the process being implemented and the definition of the existing datastore. Because an existing datastore contains a relatively rich set of data as needed by the previous process implementation, its use would be advantageous later in the design. However, the use of an existing datastore should not be forced, because it may compromise the design if a good fit does not exist. Seasoned engineering judgment is needed in the effort, as well as excellent documentation.

Assuming that an information flow will use one and only one datastore is not necessary and is probably counterproductive. Continuing with the example, again consider a "customer contact" information flow as defined earlier. A source for this information could be a datastore called "customer information," which contains basic information on all customers.
There could be multiple sinks for the customer contact information, including a “customer contact” datastore, which contains a record of all customer contacts; a “pending order” datastore, which contains a record on all pending service orders; and others depending on the process being implemented. Those datastores could, in turn, serve as a source for the information as needed in subsequent process steps. Because each datastore has a different logical purpose, it is conceptually easier to keep all the information needed for each logical purpose together even if some of the information is duplicated (e.g., customer contact). Initially thinking in terms of the purpose of the information rather than its content results in more efficient automation design and development.
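The equipment-cost example lends itself to a short sketch. The two classes below are hypothetical implementations of the same datastore contract, one table-driven and one rule-driven; the configurations, costs, and business rules are invented, and the point is only that the caller cannot tell the implementations apart:

```python
class TableCostStore:
    """Static datastore: every configuration's cost is stored as data."""
    def __init__(self):
        self._costs = {("router", "rack"): 1200, ("router", "desktop"): 800}

    def get(self, key):
        return self._costs[key]

class ComputedCostStore:
    """Intelligent datastore: cost is derived from business rules."""
    BASE = {"router": 700}
    MOUNT_SURCHARGE = {"rack": 500, "desktop": 100}

    def get(self, key):
        equipment, mounting = key
        return self.BASE[equipment] + self.MOUNT_SURCHARGE[mounting]

# Identical user view: same key, same result, different implementation.
for store in (TableCostStore(), ComputedCostStore()):
    assert store.get(("router", "rack")) == 1200
```

Swapping one implementation for the other (say, when the table grows impractical) requires no change to any caller, which is precisely the property the text attributes to datastores.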
11.2.2.5 Naming convention

A naming convention is an effective means to keep the replication of data elements under control. In addition, it provides for the self-definition of the data when used under operational conditions. Although many formats could be utilized, the naming convention that will be utilized for the data elements is as follows:

    datastore name.information structure name.information flow name.data element name (value)

or DS.IS.IF.DE (value) for short. Minus the value, this format is referred to as the fully qualified name for a data element; alternatively, it is called a data item. The data element name portion of a data item can also have components of its own, depending on how the data elements are specified in the logical data model.

Each unique fully qualified name can have a different value for the data element in the same invocation instance. Although that will be relatively infrequent, there are conditions under which it is a useful feature. One of the most compelling conditions is utilization of legacy or COTS systems that have their own inherent element or value structures that must be accommodated. The operational aspects of this naming convention are considered in detail in Section 11.2.3.
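A minimal sketch of the DS.IS.IF.DE convention follows. The helper functions are hypothetical, not part of the book's toolset, and they assume for simplicity that no name part itself contains a period (the text notes that data element names may have further components, which this sketch ignores):

```python
def qualify(datastore, structure, flow, element):
    """Build the fully qualified name for a data element."""
    return ".".join([datastore, structure, flow, element])

def parse(qualified_name):
    """Recover the four name parts; this is the metadata that makes
    a data item self-defining at run time."""
    datastore, structure, flow, element = qualified_name.split(".")
    return {"datastore": datastore, "information structure": structure,
            "information flow": flow, "data element": element}

item = qualify("customer information", "customer profile",
               "customer contact", "name")
print(item)  # customer information.customer profile.customer contact.name
```

Because the datastore name is part of the key, the same element can legitimately carry different values in one invocation instance (e.g., "name" sourced from a legacy system versus from the official profile) simply by qualifying it differently.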
11.2.3 Operational model

The purpose of the operational model is twofold. The first is to provide a structure through which the physical data elements can be accessed by the automation software at the information level. That allows the software developer to work with data at a level higher than data elements when appropriate but still have access to the individual elements as needed for the functionality to be implemented. While the specification of data at the information level probably is not that unusual an idea, the access and control of data at the information level certainly are. However, this aspect is needed to facilitate the specification and development of automation functionality.

The second aspect of the operational data model is to allow for effective use of the self-defining aspects of the data. The metadata inherent in the fully qualified name provides the self-defining feature. Self-defining data provide a significant number of advantages during the operational phase of automation software. Some of the major advantages are the following:
§ The operational model need not be statically defined. Data needed during any given invocation of the software may vary depending on the path taken through the process being followed.
§ The presence or absence of a given data item, regardless of value, can easily be detected.
§ The same data item can be given multiple values by changing its fully qualified name (e.g., utilizing a different datastore name).
§ The elements in an information structure or information flow can be changed to accommodate new applications without the need to change existing software.
§ New datastores, information flows, and information structures can be defined without the need to change existing software.
§ The needs of transient software components, such as Java Beans, can be easily accommodated.

As an example of direct utilization of the information level, consider a customer verification activity from an order entry process.
Assume that there are two information flows of interest: “customer profile” and “customer-provided information.” To provide verification as to the identity of the individual placing the order, the information provided by the customer must match the information in the customer profile. That certainly can be accomplished at an element level, but it is far more efficient and meaningful to do it at the information level. In high-level pseudo-code:

Get customer profile
Get customer provided information
If customer provided information [elements given] = customer profile [associated elements]
    continue
else
    go to terminate order activity

What that indicates is that all the elements obtained from the customer must match the same elements in the customer profile for the order to be processed. It should not be necessary to determine in advance which elements are to be compared or even what elements are in the customer profile information flow. Although there will have to be a fragment of infrastructure code that actually compares the data items, their self-defining feature makes the development of that type of generic functionality very efficient. The operational model provides the concepts, structures, and overall framework to accommodate that type of information use. An illustration of one design for an operational model is shown in Figure 11.5. The DBMS structure, although not a direct part of the model, is included in the diagram for completeness of the illustration. The concept of a cluster store, discussed in Section 11.2.3.1, is central to the model structure. The reason for this particular terminology will be evident in the presentation of the process implementation methodology.
Figure 11.5: Operational model.

Note that both logical and physical data paths are shown. The logical path indicates the relationship or mapping between the datastore and the physical storage for a specific information flow. The physical path is more representative of how the data transfer would be controlled and transported in an actual implementation. For that reason, the scenario examples, which are utilized to indicate the dynamics of the model, will use the physical path as an explicit part of the specification.
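To suggest how the generic information-level comparison in the pseudo-code above might be realized, the following sketch keeps data items in a flat dictionary keyed by fully qualified name. The flow names and values here are illustrative assumptions only:

```python
# Hedged sketch of a generic information-level comparison. Data items
# live in a dict keyed by fully qualified name (DS.IS.IF.DE); the
# specific datastore/flow names are assumptions for illustration.

def elements_of(cluster_store, flow_prefix):
    """Return {element name: value} for every item under the given flow."""
    return {name.rsplit(".", 1)[1]: value
            for name, value in cluster_store.items()
            if name.startswith(flow_prefix + ".")}

def matches_profile(cluster_store):
    """True if every customer-provided element equals the associated
    element in the customer profile, without knowing in advance which
    elements were supplied or what the profile flow contains."""
    provided = elements_of(
        cluster_store,
        "order entry.customer provided information.customer identity")
    profile = elements_of(
        cluster_store,
        "customer information.customer profile.customer contact")
    return all(profile.get(elem) == val for elem, val in provided.items())

store = {
    "customer information.customer profile.customer contact.name": "A. Smith",
    "customer information.customer profile.customer contact.zip code": "02062",
    "order entry.customer provided information.customer identity.name": "A. Smith",
    "order entry.customer provided information.customer identity.zip code": "02062",
}
print(matches_profile(store))   # True: all provided elements match
```

The comparison code never names individual elements; the self-defining names drive the matching, which is what makes the infrastructure fragment generic.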
11.2.3.1 Cluster store

The cluster store contains all the data items needed by the software functions for a specific task. At that point, the cluster store can be thought of as local data, although, as is explained in Chapter 13, the concept is much more powerful than that. The concept and definition of a task are contained in the workflow discussion in Chapter 15. For the immediate purpose of this discussion, a task may be considered to be all activities of a single role that are performed by a given role performer in a continuous time interval. Thus, all the data considered during that time interval are available to any activity during the same interval. Required data from persistent storage must be explicitly obtained. The great power of this construct will become evident during the implementation methodology discussion. All the data in the cluster store are formatted according to the DS.IS.IF.DE (value) format presented in Section 11.2.2.5. Data can be accessed and utilized at any level in the name (e.g., by information structure name). The types of operations that can be performed on the cluster store data are defined and discussed in detail in Chapter 13. Although this postponement probably is not very satisfying to the reader, because of the uniqueness of the approach, it is necessary to develop a large number of basic concepts and models before they can be brought together into an integrated structure that achieves the desired results.
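A minimal sketch of a cluster store that supports access at any level of the name might look as follows. The class and method names are assumptions for illustration; a real implementation would add the operations discussed in Chapter 13:

```python
# Minimal sketch of a cluster store: a flat mapping of fully qualified
# names (DS.IS.IF.DE) to values, queryable at any level of the name.

class ClusterStore:
    def __init__(self):
        self._items = {}   # fully qualified name -> value

    def put(self, qualified_name, value):
        self._items[qualified_name] = value

    def get(self, prefix):
        """Return all items whose name matches at the given level, e.g.
        a datastore name, 'DS.IS', 'DS.IS.IF', or a full 'DS.IS.IF.DE'."""
        return {name: value for name, value in self._items.items()
                if name == prefix or name.startswith(prefix + ".")}

store = ClusterStore()
store.put("customer information.customer profile.customer contact.city", "Norwood")
store.put("customer information.customer profile.customer contact.state", "MA")
store.put("repair ticket.customer profile.customer contact.city", "Norwood")

# Access by information flow (DS.IS.IF) rather than element by element:
flow = store.get("customer information.customer profile.customer contact")
print(len(flow))   # 2
```

A single prefix query retrieves a whole information flow, structure, or datastore's worth of items, which is the behavior the operational model requires.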
11.2.3.2 Dynamics

The overall dynamics of the operational data model are considered through the use of high-level scenario stories. Only a few are considered, but that should be enough to provide the necessary understanding for later discussions that incorporate this concept.
Scenario 1: Retrieval of information from a datastore
1. Function 1 requests the “customer contact” information flow from the “customer information” datastore.
2. The request is given to the cluster store control by way of the function execution environment. (This environment provides a standard means of communication between the individual functions and the cluster store and cluster store control.)
3. The cluster store control queries the location map to determine which DBMS contains the “customer contact” information flow of the “customer information” datastore. (Logically, this map would be in the repository.)
4. The request is then sent to the physical data control, which locates and accesses the desired data from the identified database.
5. The physical data control sends the data through the network connection to the cluster store under control of the cluster store control.
6. When the “customer contact” information flow is available in the cluster store, the cluster store control notifies the function execution environment, which then notifies function 1.
7. A subset of the data in the cluster store is now as follows:
§ customer information.customer profile.customer contact.name
§ customer information.customer profile.customer contact.street address
§ customer information.customer profile.customer contact.city
§ customer information.customer profile.customer contact.county
§ customer information.customer profile.customer contact.state
§ customer information.customer profile.customer contact.zip code
§ customer information.customer profile.customer contact.main telephone number
§ customer information.customer profile.customer contact.e-mail address
§ customer information.customer profile.customer contact.fax number
Scenario 2: Copy of an information flow
Function 2 requests that the “customer contact” information flow from the “customer information” datastore be copied as the “customer contact” information flow for the “repair ticket” datastore.
1. The request is given to the cluster store control by way of the function execution environment.
2. The cluster store control locates all data items of the form (xx means “don’t care”): customer information.customer profile.customer contact.xx.
3. For each item located, it forms a new data item of the form: repair ticket.customer profile.customer contact.xx.
4. The “don’t care” levels are copied without change.
5. A subset of the data in the cluster store is now as follows:
§ customer information.customer profile.customer contact.name
§ customer information.customer profile.customer contact.street address
§ customer information.customer profile.customer contact.city
§ customer information.customer profile.customer contact.county
§ customer information.customer profile.customer contact.state
§ customer information.customer profile.customer contact.zip code
§ customer information.customer profile.customer contact.main telephone number
§ customer information.customer profile.customer contact.e-mail address
§ customer information.customer profile.customer contact.fax number
§ repair ticket.customer profile.customer contact.name
§ repair ticket.customer profile.customer contact.street address
§ repair ticket.customer profile.customer contact.city
§ repair ticket.customer profile.customer contact.county
§ repair ticket.customer profile.customer contact.state
§ repair ticket.customer profile.customer contact.zip code
§ repair ticket.customer profile.customer contact.main telephone number
§ repair ticket.customer profile.customer contact.e-mail address
§ repair ticket.customer profile.customer contact.fax number
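The copy operation in Scenario 2 can be sketched as a wildcard match over fully qualified names. The function below treats the cluster store as a plain dictionary; its name and structure are assumptions for illustration:

```python
# Sketch of the Scenario 2 copy: every item matching
# 'customer information.customer profile.customer contact.xx'
# (xx = don't care) gains a twin under the 'repair ticket' datastore name.

def copy_flow(cluster_store, source_prefix, target_prefix):
    """Copy data items from one fully qualified prefix to another,
    leaving the 'don't care' tail of each name unchanged."""
    copied = {}
    for name, value in cluster_store.items():
        if name.startswith(source_prefix + "."):
            tail = name[len(source_prefix):]          # the '.xx' part
            copied[target_prefix + tail] = value
    cluster_store.update(copied)
    return cluster_store

store = {
    "customer information.customer profile.customer contact.name": "A. Smith",
    "customer information.customer profile.customer contact.city": "Norwood",
}
copy_flow(store,
          "customer information.customer profile.customer contact",
          "repair ticket.customer profile.customer contact")
print(sorted(store))   # both the original and the 'repair ticket' items
```

Because the match is by name prefix, no advance knowledge of which elements exist in the flow is needed, mirroring the "don't care" levels in the scenario.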
Scenario 3: Update of information in a datastore
1. Function 3 requests that the “customer contact” information flow from the “repair ticket” datastore be used to update the datastore information.
2. The request is given to the cluster store control by way of the function execution environment.
3. The cluster store control queries the location map to determine which DBMS contains the “repair ticket” datastore.
4. The request is then sent to the physical data control, which determines where the data will be stored in the identified database.
5. The cluster store control sends the data through the network connection to the physical data control under control of the cluster store control.
6. The physical data control causes the data to be stored in the proper location.
Using the operational datastore model greatly simplifies the handling of data as well as the automation software that operates on them. For explanation purposes, the discussion has been greatly simplified from what is required in an actual system.
11.3 Summary

The data modeling perspective presented in this chapter is considerably different from that utilized under current practice. While conventional data modeling, which generally consists only of the logical and physical data models, is included in the perspectives presented in this chapter, it forms only a part of the overall data model. Those models are still needed to define the enterprise need for data and to ensure that physical storage and access to the data are efficient and cost effective. The nontraditional model components, the information and operational models, serve as the bridge between the physical data elements and their utilization by the automation software at the information level. Those models are concerned with increasing the effectiveness and efficiency of software specification, procurement, and deployment. Although the logical and physical models are an integral part of the overall data modeling structure, the emphasis in this chapter and future chapters is on the information and
operational models because it is those models that greatly facilitate the development and deployment of process implementations.
Chapter 12: Client/server modeling

Overview

The concept of client/server (C/S) structures has been in place since the earliest computers were developed. Although it was not recognized as such at the time, many of the characteristics associated with contemporary C/S architectures were embodied by those early systems. That early lack of recognition carries over into the current meanings and usage of the terminology: inconsistencies and confusion as to what constitutes a C/S approach remain common among both vendors and their customers. To use the rather powerful concepts of the C/S paradigm effectively, it is necessary to provide some structure and associated definitions. A comprehensive discussion of the C/S approach would take multiple volumes and is clearly beyond the scope of this chapter. The presentations are limited to providing only the information that is needed to (1) ensure a consistent approach in using the C/S paradigm and (2) provide a sufficient base for specifying a process-oriented software development methodology.
The discussion starts with a simple C/S model and progresses to more complicated structures. Unfortunately, no organization of the material could separate all the elements into a nice linear progression. That means some material has to be referenced before it can be discussed in detail. When that occurs, an explicit indication of the situation is given.
12.1 Simple C/S systems

C/S systems, by their very name, must involve at least two distinct parts, the client and the server. The simplest structure, using one client and one server, is depicted by the model in Figure 12.1(a). In addition to showing the client and server entities, it also defines the elementary relationship between them. The client requests services from the server, and the server provides services to the client. A slightly more complex system is shown in Figure 12.1(b). It is still a relatively simple system but has enough structure to allow an examination of the many diverse characteristics of C/S systems.
Figure 12.1: Simple C/S specification.

Figure 12.1 is useful in establishing the basic relationships between the components of a C/S system and in determining an initial set of issues that need to be addressed. However, from a modeling perspective, the structure is not adequate. Therefore, most of this C/S discussion uses a timing diagram format to illustrate the specific configuration involved. This type of diagram can show graphically the dynamics of client and server operation with respect to the client service requests and the server responses. The basic format of the diagram is shown in Figure 12.2 for a single client and server. Configuration diagrams for several multiple-server configurations are considered in Section 12.2.1.
Figure 12.2: C/S configuration diagram.

Given the depiction of a simple C/S system, a number of issues immediately come to mind. Some of those issues may be more applicable to C/S systems that have additional components, but they are listed here for completeness.
1. Do the services that the client requests have to be delivered to the client, or can they be delivered elsewhere (i.e., another client or server)?
2. Can services be delivered to the client without being asked for by the client?
3. Can the client have multiple simultaneous requests outstanding?
4. Can the client become a server and the server become a client?
5. Can the server request from other servers the services needed to fulfill a request?
6. Can the client or the server be logical, physical, human, automated, or a combination?
7. Do the client and the server maintain persistent state information across requests?
8. What is the mechanism to request and deliver services?
Depending on the specific set of answers to those questions, C/S systems with distinct characteristics result. Although they are all C/S systems according to the simple system definition, they differ considerably in structure, capabilities, complexity, and, in many cases, the nomenclature used to describe them. The latter problem causes most of the difficulty in describing and understanding the different incarnations of C/S systems. To provide some organization to the rather complex set of possibilities posed by the questions, the approach here is to partition the problem into four parts:
§ The equipment configuration;
§ The functional (software) configuration;
§ The human interface configuration;
§ The communications configuration.

12.1.1 Equipment configuration

The first serious problem in nomenclature results from equipment designations by the vendors of computing equipment. They sell either servers or some form of personal computing devices (personal computers, workstations, or, more recently, network computers).
No vendor (at least to the author’s knowledge) sells a “client” computer. The equipment terminology is not so much attuned to the C/S model as it is to whether it is oriented toward direct human interaction with the computing device. The fact that a
computer not designed for direct human interaction is called a “server” causes extensive trouble when we consider a C/S model. To alleviate that problem, the following assumptions are made:
§ The C/S model is not based on hardware configurations.
§ All computers on a network have connectivity with each other and can send information to each other as necessary.
Thus, whether or not a specific computer is designated as a server is immaterial to the rest of the discussion. (Perhaps it should be called a “hands-off computer”!) Because equipment is not part of the C/S definition, there is no need to address any question in the list except number 6 (Can the client or the server be logical, physical, human, automated, or a combination?). One component of the answer is that the client and/or the server of a C/S model cannot be defined as a piece of physical equipment (hardware).

12.1.2 Functional configuration

The C/S nature of a system is determined only by the configuration of its component software functions and their interaction. From a design perspective, that configuration is also called the logical configuration. The latter term is used here to refer to the construction of a C/S system as a whole. In reference to component functions, it is assumed that in a C/S system the functions are separate and not integrated. In any computing system, all the functions required to perform the desired operations must be present in some form. The C/S designation requires that the individual functions be separated into individual identifiable units. In a non-C/S system, that may not be true.
12.1.2.1 Client characteristics

A client is defined to be that functionality of a C/S computing system designed to interface with a human being or an agent (software) of a human being. Although that definition is close to conventional usage, it does not permit a server to have client characteristics when it is requesting services from another server. The latter usage is permitted by common practice. As will be indicated, the term requester is substituted for client when a server requests services from another server. The questions can now be answered with respect to the client. Questions that are not client related are addressed later in the discussion.
Question 1: Do the services that the client requests have to be delivered to the client, or can they be delivered elsewhere?
The client can request services on the client’s behalf or on behalf of another client or server. If the client is restricted to asking only for services that will be provided back to the client, that is called a paired client. The client is “paired” with the server for the purpose of service delivery. That does not imply that a client can request services from only one server; when services are requested from any server, they must be delivered on behalf of the requesting client. A client for an e-mail system usually is not considered to be paired because the service (e.g., delivery of an e-mail message) is performed on behalf of a client other than the requesting one. A client for an inventory analysis system probably would be considered as paired because the results would be returned to the requesting client. In practice, most clients can make both types of service requests.
Question 2: Can services be delivered to the client without being asked for by the client?
If the answer is yes, the system has push capabilities (also known as publish/subscribe or event registration).
Those capabilities allow the server to send information to the client without having a specific request for each response. If the answer is no, it is a pull-only system. In a pull system, each server response must be preceded by a specific request. Question 3: Can the client have multiple simultaneous requests outstanding?
If the answer is no, it is almost always a synchronous system. That means the client functionality must be synchronized with the delivered service. Client processing is suspended until the server notifies the client that the request has been satisfied. This type of system is usually implemented via remote procedure calls (RPCs). If the answer is yes, it is an asynchronous system. In an asynchronous system, the requested service is not synchronized with the client processing. The client can continue processing after a service request has been issued because the provided service does not have to be synchronized with the client functionality. That, in turn, permits the client to have multiple requests outstanding; furthermore, the responses do not have to be in the same order as the requests. Some means must be defined to match responses with requests. From the comparison, it would be expected that asynchronous systems are somewhat more complicated to design and implement. However, the additional flexibility and throughput possible with these types of systems usually compensate for the increased complexity. Figure 12.3 is a configuration diagram for an asynchronous system using one client and one server.
Figure 12.3: Asynchronous system configuration diagram.

Question 4: Can the client become a server and the server become a client?
Given the previous discussion, the client, by definition, cannot assume server capabilities. Likewise, by definition, a server cannot assume client capabilities.
Question 5: Can the server request from other servers the services needed to fulfill a request?
This question is discussed in Section 12.1.2.2.
Question 6: Can the client or the server be logical, physical, human, automated, or a combination?
A client is not defined as a piece of equipment and therefore cannot be a physical entity. Given the definition of a client as part of a C/S system, the question becomes one of whether or not a C/S system includes a human user of the system. That question has been debated endlessly from psychological, technical, and other viewpoints. Although a case could be made for either assumption, the usual condition for the type of business automation under discussion is that the human is not a part of the system. If the automation inherent in an airplane control system were being considered, this assumption might be different. Given that a human user is outside the C/S system, the client cannot be human by definition. That leaves the client as an automated entity with human interface functionality defined on a logical basis. The server characteristics in this area are discussed in Section 12.1.2.2.
Question 7: Do the client and the server maintain persistent state information across requests?
The client maintains state information from request to request because it is the entity responsible for using the services of the server to perform some function needed by the human for whom it serves as the interface. In other words, the client is in control. The server answer is provided in Section 12.1.2.2.
Question 8: What is the mechanism to request and deliver services?
This question is answered in Section 12.3, which discusses the infrastructure.
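The asynchronous behavior described under Question 3 can be sketched with a correlation-id scheme: because responses may arrive out of order, each request carries an identifier that the client uses to pair responses with outstanding requests. The class and service names below are illustrative assumptions:

```python
# Sketch of asynchronous request/response matching in a C/S client.
# Correlation ids pair out-of-order responses with outstanding requests.

import itertools

class AsyncClient:
    def __init__(self):
        self._next_id = itertools.count(1)
        self._outstanding = {}   # correlation id -> requested service

    def send_request(self, service):
        """Issue a request; processing can continue immediately."""
        corr_id = next(self._next_id)
        self._outstanding[corr_id] = service
        # ...a message carrying corr_id would go to the transport here...
        return corr_id

    def on_response(self, corr_id, payload):
        """Match an arriving response to its outstanding request."""
        service = self._outstanding.pop(corr_id, None)
        if service is None:
            raise KeyError("response does not match any outstanding request")
        return service, payload

client = AsyncClient()
a = client.send_request("credit check")
b = client.send_request("inventory lookup")
# Responses may return in either order:
print(client.on_response(b, "in stock"))   # ('inventory lookup', 'in stock')
print(client.on_response(a, "approved"))   # ('credit check', 'approved')
```

A synchronous (RPC-style) client needs none of this bookkeeping, which is the design trade-off noted in the text: asynchrony buys flexibility and throughput at the cost of added complexity.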
12.1.2.2 Server characteristics

A server is defined as that functionality of a C/S computing system that provides services to clients or other servers. Given that definition, those questions relevant to the server are considered in this subsection.
Question 5: Can the server request from other servers the services needed to fulfill a request?
As stated in Section 12.1.2.1, a server cannot assume client characteristics. However, servers must be able to request services from other servers in more complex C/S systems with multiple servers. In those cases, there must be a mechanism that allows a server to request services in the same manner as a client. That function is called a requester function. A requester performs the same functions on behalf of a server as a client does on behalf of a human. It can have different characteristics, however, because the target for the services may have different characteristics.
Question 6: Can the client or the server be logical, physical, human, automated, or a combination?
Again, because a server in a C/S system cannot be defined as a piece of equipment, a server is not physical. Remember that the physical equipment called a server is not a server in the same sense as a C/S server. Also, a human is not considered as part of a C/S system; therefore, a server cannot be a human for the same reasons that a client cannot. That results in a server being defined in a logical sense as the provider of some set of services.
Question 7: Do the client and the server maintain persistent state information across requests?
The server may or may not maintain state information between server requests depending on the characteristics of the service provided. For example, most services provided by servers responding to client HTML queries over the Internet do not keep state information between requests.
However, most servers utilized in the enterprise automation environment have services that do keep state information between requests. Whether state information is kept is not usually a server function but is determined on a service-by-service basis. As is evident from this discussion, servers can assume many different characteristics and still retain their identity as servers in a C/S system. As with clients, it is important to understand the context in which the C/S system is defined and operated.

12.1.3 Human interface configuration

The client provides the interface between the C/S system and the human user. An important part of the client functionality is the mechanism for providing information to a human user and obtaining information from the user. That could be through a number of devices, including display screens, keyboards, cursor positioning devices (mouse, track ball, joystick), or audio speakers and microphones. More recent devices for virtual reality interfaces include hand and body position locators, tactile pressure transducers, and three-dimensional visual displays. Fingerprint readers and retina pattern readers also fall into the human interface category. The important human interface characteristic for a C/S system is not the specific devices used but the restriction that they are intended for use by the client.
12.1.4 Logical connectivity

There are two basic means of obtaining logical connectivity between client and server. One is direct program-to-program procedure (or subroutine) calls as defined through the programming language being used. That requires that both client and server be defined on the same computing platform and linked in some manner. This mechanism was first used to provide program-to-program communication, but it cannot be used when client and server are on different physical platforms. The second type of communication is the message, which must be used between a client and server on different platforms and which can also be used if they are on the same platform. Messages are simply strings of formatted data that identify the source and the destination address of the information being communicated, the information itself, and some check or control characters that indicate whether the message arrived intact. Because most C/S systems are defined such that they can run on different physical platforms, the use of messages to provide communications is assumed for the remainder of this discussion.
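The message structure just described can be sketched as follows. The field layout, delimiter, and use of a CRC-32 check value are assumptions for illustration; real messaging systems define their own formats:

```python
# Sketch of a message: source and destination addresses, the information
# itself, and a check value so the receiver can detect corruption.

import zlib

def build_message(source, destination, payload):
    """Assemble a delimited message body and append a CRC-32 check value."""
    body = "|".join([source, destination, payload])
    check = zlib.crc32(body.encode("utf-8"))
    return f"{body}|{check}"

def verify_message(message):
    """Recompute the check value and compare it with the one received."""
    body, _, check = message.rpartition("|")
    return zlib.crc32(body.encode("utf-8")) == int(check)

msg = build_message("order entry client", "inventory server", "reserve part 42")
print(verify_message(msg))                             # True for an intact message
print(verify_message(msg.replace("part 42", "part 43")))  # False: corruption detected
```

The check characters let the receiving platform confirm that the message arrived intact before acting on it, which is exactly the role the text assigns them.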
12.2 Compound C/S systems

There are three basic differences between simple C/S systems and the more complex compound C/S systems. In general:
§ Compound C/S systems utilize an infrastructure to provide common services to the process-specific C/S clients and servers. The infrastructure is considered in Section 12.3.
§ Compound C/S systems utilize a large number of intercommunicating servers, both those that are process specific and those that provide infrastructure services.
§ Compound C/S systems utilize multiple clients that transfer information between them.
As with simple systems, each server in a compound system can provide multiple services. It is also possible for different infrastructure servers to incorporate and offer the same service. In that case, the process implementer needs to specify the server that is going to provide a specific service. The server utilized also might change under different sets of circumstances.

12.2.1 Multiple servers

Several different multiple-server C/S configurations are discussed in this section: single-server types, multiple-server types, synchronous and asynchronous communications, and push and pull interactions. For systems with a single-server type, an illustration of the structure is shown in the configuration diagrams in Figure 12.4 for synchronous messaging and Figure 12.5 for asynchronous messaging.
Figure 12.4: Multiple-server, synchronous communication configuration.
Figure 12.5: Multiple-server, asynchronous communication configuration. The next extension permits servers to request services from other servers to complete the initial client-requested service. The extension is illustrated by the configuration diagram in Figure 12.6. The illustration assumes synchronous communication, but the concept is easily extended to the asynchronous communication format. When a server requests services from another server, the mechanism is that of a server requester, not a client.
Figure 12.6: Multiple-server, requester function configuration. Figures 12.4–12.6 have all been for services performed as the result of a pull or direct request on the part of the client. In these cases, there is a pairing of the service request and response. For every request, there is an associated response. As mentioned in Section 12.1.2.1, push services also can be defined. Push services are services delivered to the client from a server but not always as a response to a direct request from the client. Usually there are several responses to one request, and those responses are asynchronous with respect to the client processing. That possibility is illustrated in Figure 12.7. In general, there is an initiating request, such as client request a to server C in the figure. There are multiple responses back to the client from server C in response to the request, also labeled a in the figure. For example, say the request is to return a specialized version of a newspaper from a newspaper server (server C) every day. Then each day the requested information would be returned (pushed) to the client from server C with no further request by the client.
Figure 12.7: Multiple-server, push services configuration.
Other requests and responses also could be occurring asynchronously from the original request. Extend the example by allowing server C to send, via a request, the address of the client and the type of information the client is interested in to another server (server A) that contains advertising for a given manufacturer’s products. If there is a match, then server A sends an unsolicited advertisement for one or more products (response a from server A to the client). That push can be considered to be another response to the original request. Note that there is no direct response to the request from server C to server A. That also is allowed with this type of C/S system. However, there is no guarantee that the request was ever processed by server A. The client also could have requested the same product information directly from server A, as indicated by the request and the initial response labeled b. Additional responses b could be the result of the server also sending information on pending sales or special deals involving the product of interest. These systems can get arbitrarily complex and involve a large number of servers over a period of time. That is both the strength and the weakness of this type of C/S system.

12.2.2 Multiple clients

The tendency is to provide a client for any significant type of automation functionality. Some of the more widely used standard clients, in addition to clients defined for specific enterprise automation functionality, are as follows:
§ E-mail clients;
§ Workflow clients;
§ Web browsers;
§ Office services clients.
Multiple clients, in the context of this discussion, means that they are being utilized as the interface to the same person in the performance of some enterprise activity. Each person utilizing the C/S system would have his or her own instance of the clients. The main requirement when multiple clients are involved in an overall C/S system is the ability to transfer information from client to client in a relatively easy way.
That usually is accomplished through the resident infrastructure services of the platform on which the clients reside (e.g., drag and drop from one client window to another). Another method is to design the clients so they can interact directly with each other. That requires that the appropriate standards be available. It usually is assumed that all the clients will reside on the same platform because they all must be accessible by the person with whom they are interfacing. If a client is moved to a platform where it is accessed by means of another client on a different platform, for example, a Web browser accessing a workflow client, then the workflow client loses its client identity and logically becomes a server. The functionality may remain approximately the same, but because it no longer is the direct interface to a human, it cannot remain a client. The Web browser becomes the new client. This terminology has not been universally adopted, and in many situations the original client still is referred to as the client and the Web browser retains its identity as an interface device. Readers should be aware of this confusing situation when it occurs. While some of these points may seem minor, they are the source of considerable confusion in discussions of C/S systems. It is hoped that by explicitly addressing the issues there will be an increased understanding as to how to classify and discuss the structure and components of a specific C/S system. It should be noted that a given C/S system can behave in multiple ways. For some functions, it can be a single-server, synchronous system, while for other functions it can be a multiple-server (including infrastructure), asynchronous system with push features.
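The asynchronous behavior described in this section (several responses tied to one request, plus unsolicited push messages) can be sketched as follows. This is a minimal illustration under invented names, not a design from the book; in a real system the correlation identifier would travel inside the message protocol.

```python
import itertools

class AsyncClient:
    """Illustrative client that issues asynchronous requests and matches
    incoming messages back to them by a correlation identifier."""

    def __init__(self):
        self._ids = itertools.count(1)
        self._pending = {}   # correlation id -> (server, query)
        self.received = []   # (kind, payload) pairs, in arrival order

    def send_request(self, server, query):
        # Tag the outgoing request so any number of later responses,
        # from any server, can be tied back to it.
        corr_id = next(self._ids)
        self._pending[corr_id] = (server, query)
        return corr_id

    def on_message(self, corr_id, payload):
        # A message carrying a known id is a response (possibly one of
        # several to the same request); anything else is an unsolicited
        # push, such as the advertisement from server A in the example.
        if corr_id in self._pending:
            self.received.append(("response", payload))
        else:
            self.received.append(("push", payload))

client = AsyncClient()
rid = client.send_request("server C", "product info")
client.on_message(rid, "catalog page")          # direct response
client.on_message(rid, "updated availability")  # later response, same request
client.on_message(None, "advertisement")        # unsolicited push
```

Because every response carries the request's identifier, matching responses to requests remains easy even when many servers reply over a long period of time.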
12.3 Infrastructure Implementing an automation system generally involves two types of functions and entities: those that contribute directly to the problem being addressed and those that perform a support role. In general, support functions and entities are common to a large number of process-specific functions. Support components usually are grouped under the heading of infrastructure. The infrastructure provides the environment that facilitates the use of the C/S system in providing the automation functions needed for the enterprise. A simple diagram of a C/S system that explicitly incorporates the infrastructure is of use in understanding the basic relationships. Such a diagram is shown in Figure 12.8.
Figure 12.8: Infrastructure relationships. The infrastructure in a C/S system consists of servers and associated services that can be used in support of process-specific clients and servers. A more familiar depiction of a C/S system using an explicit infrastructure is shown by the configuration diagram in Figure 12.9. In most cases, as shown in the figure, the client communicates directly with the infrastructure servers and only indirectly with the application servers. That occurs because of the fundamental nature of the infrastructure services as conduits and qualifiers for the application servers.
Figure 12.9: Infrastructure-oriented configuration diagram. The decoupling of an automation client from the automation server through the use of infrastructure servers has some desirable side effects. The service functionality resident in the application servers can be structured in such a way that many different automation clients, which may implement different enterprise processes, can utilize the same service components without change. Means to define and implement components with functions that can facilitate component reuse are discussed in Chapter 13. Although reuse can be achieved using other approaches, the C/S structure is a definite help. The process implementation methodology also takes advantage of the C/S approach to facilitate reuse in addition to providing other advantages over conventional techniques of automation specification and use.
Decoupling also can facilitate the use of legacy systems in new automation functionality. Legacy systems can be considered to be large, complex application servers. By isolating this functionality, it can be utilized in any fashion needed by a specific client. If some specialized access conditions are needed, they can be placed into an infrastructure server and possibly utilized for multiple legacy systems. That is not to minimize the considerable problems in the use of legacy system functionality. However, if the advantages of utilizing legacy system functionality outweigh the disadvantages of obtaining replacement functionality, then the C/S approach using a robust infrastructure can be of considerable value. The architecture and specification of the infrastructure and its services are a significant and separate subject. Although some later chapters briefly address specific aspects of the infrastructure, a more detailed consideration of the infrastructure is well beyond the scope of this book.
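As a sketch of the idea of placing specialized access conditions in front of a legacy system, the following puts a hypothetical permission check in an infrastructure server that fronts a stand-in legacy application server. All class and method names are invented for illustration.

```python
class LegacyOrderSystem:
    """Stand-in for a large, complex legacy application server."""
    def lookup(self, order_id):
        return {"order": order_id, "status": "shipped"}

class LegacyAccessServer:
    """Infrastructure server that isolates legacy access conditions so
    multiple clients (and potentially multiple legacy systems) share
    them instead of each client coding its own."""
    def __init__(self, legacy, allowed_clients):
        self.legacy = legacy
        self.allowed = set(allowed_clients)

    def request(self, client_id, order_id):
        # Specialized access condition kept out of both the client
        # and the legacy code.
        if client_id not in self.allowed:
            raise PermissionError(f"client {client_id} may not reach legacy system")
        return self.legacy.lookup(order_id)

gateway = LegacyAccessServer(LegacyOrderSystem(), allowed_clients={"ordering"})
result = gateway.request("ordering", "A-17")
```

The client never addresses the legacy system directly, so the access policy can change, or a second legacy system can be added behind the same server, without touching any client.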
12.4 Summary Enterprise automation environments based on a C/S approach, along with a comprehensive infrastructure, can provide a flexible and powerful means of implementing business processes. With the increasing utilization of the Internet and its variations (intranet, extranet) as a communications and delivery vehicle for automation functionality, the C/S approach is a natural fit. However, the complexity of such systems demands that a systems engineering approach to their development and specification be utilized. Enterprise mission-critical C/S systems cannot be structured from the bottom up. Selected bibliography Adler, R. M., “Distributed Coordination Models for Client/Server Computing,” Computer, Vol. 28, No. 4, 1995, pp. 14–22.
Baker, J., et al., “The Role of Client/Server Computing Technology in the Management of Global Enterprises,” Proc. Portland Internatl. Conf. Management and Technology, Portland, OR, July 27–31, 1997, p. 739.
Drakopoulos, E., “Design Issues in Client/Server Systems,” Proc. IEEE 14th Ann. Internatl. Phoenix Conf. Computers and Communications, Phoenix, Mar. 28–31, 1995, pp. 487–493.
Graham, W. C., and S. Majumdar, “The Performance of Multithreading and Scheduling on Client-Server Systems,” IEEE Internatl. Performance, Computing, and Communications Conf. 1997, Phoenix/Tempe, Feb. 5–7, 1997, pp. 87–93.
Kanitkar, V., and A. Delis, “Real-Time Client-Server Push Strategies: Specification and Evaluation,” Proc. IEEE 4th Real-Time Technology and Applications Symp., June 3–5, 1998, pp. 179–188.
Savino, S. P., and S. M. Queroli, “Impact of Client/Server Computing on the Telecommunications Industry,” Proc. Internatl. Conf. Engineering and Technology Management, Vancouver, BC, Aug. 18–20, 1996, pp. 605–610.
Zhang, T., and J. R. Hayes, “Client/Server Architectures Over Wide Area Networks,” 1997 IEEE Internatl. Conf. Communications, Montreal, June 8–12, 1997, Vol. 2, pp. 580–584.
Chapter 13: Dialog and action modeling
Overview Sufficient modeling has now been accomplished to allow the development of two closely related automation assets: dialogs and actions. The models of those two assets provide the mechanism through which process definitions are changed into functional specifications. Because actions are used as part of the definition of a dialog, dialogs are discussed first. That order allows the most natural progression from process definition to implementation. In addition, it is not necessary to understand the detailed action model to utilize the concept as part of the dialog model specification. The construction of the dialog and action models is designed to take into account the requirements of automation development in the emerging environment: § Direct process implementation; § Rapid implementation and change; § Reusable components at several levels; § Isolation of change propagation; § Flexibility; § Explicit utilization of business rules; § Utilization of a C/S approach; § Accommodation of an information model; § Compatibility with push and pull technology; § Compatibility with Internet standards; § Compatibility with component-based architectures. The structure of the dialog and action model assumes a robust infrastructure exists that can provide the common services as discussed in this and previous chapters.
13.1 Dialog context A dialog serves as a bridge between the business requirements as embodied in the process perspective, the technical specifications needed for design, and the deployed software of the enterprise automation environment. As such, a dialog must be in a form that can be used as the basis for custom product development, COTS product selection, or the incorporation of a legacy system functionality. Dialogs provide the continuity needed to allow a smooth transition from business requirements to implementation, as illustrated in Figure 13.1.
Figure 13.1: Bridging aspects of dialogs. This chapter specifically addresses the dialog model utilized by the technical specification anchor of the “bridge.” Other chapters are devoted to the other bridge anchors. However, to provide continuity in this presentation, a brief definition of a dialog in the business requirements anchor and the automation environment anchor also is
presented. Although the three definitions involved will seem quite different, they each describe a different view of the same entity. From the business requirements perspective, a dialog is a portion of a process (process fragment) with activities that can be performed by a single role performer without transferring control to another role performer. Using that definition, a dialog has no specific size and can be as small as a single process step or as large as an entire process. A dialog gets its name from the fact that it also can be considered a conversation (or dialog) between the user and the process automation functionality. From the automation environment perspective, one or more related dialogs define a workflow task that is implemented using reusable components and a suitable control mechanism. From the technical specification perspective, a dialog consists of a set of self-contained atomic actions that provide the functionality specification for the dialog and a cluster store that defines and contains the dynamic data utilized by the actions. The relationship between the automation assets that make up the three perspectives of a dialog is shown in Figure 13.2.
Figure 13.2: The dialog and action environment. In the following discussion, to ensure that the proper view is being addressed, the term dialog is used to imply the technical specification (design) perspective; process fragment and task imply the business and operational perspectives, respectively.
13.2 Dialog model Dialogs are the fundamental unit of process implementation. Dialogs can be implemented individually or in related groups without compromising the implementation. Multiple dialogs usually are required to implement a given process fragment. Some are derived from the business process; others result from technical and implementation-specific requirements. In addition, business processes may need associated management, administrative, and support processes. Dialogs resulting from all those sources may need to be executed as a part of the same task, as illustrated in Figure 13.3.
Figure 13.3: Multiple-dialog utilization. The same dialog also can be utilized in the design of process fragments from different processes. That latter ability provides for reuse at the process level. For example, a dialog that implements a process fragment that determines a customer’s credit rating could be part of a marketing process, a late payment treatment process, and an ordering process. The assumption, as should be the case, is that all the processes use the same set of steps to perform the procedure. 13.2.1 Clusters Dialogs that are simultaneously executing on a given workstation are called a cluster, and they usually are loosely coupled through the use of a common cluster store. A specific instance of a dialog must be able to be initiated, terminated, and suspended, and to interact with a human or automaton user. That requires the definition of a suitable set of states that the dialog can assume. Those states are defined as follows: § An active dialog is a dialog that has been initiated and not explicitly suspended or terminated. § An open dialog is an active dialog that has the current capability to output information to and receive information from a human interface. § A closed dialog is an active dialog that does not have the current capability to output information to or receive information from a human interface. It may, however, initiate and complete actions based on information in the cluster store that does not involve the human interface. § A suspended dialog is a dialog that currently is not performing any processing and that cannot by itself initiate processing. § A terminated dialog is a dialog that has been removed from the cluster. Any outstanding transactions are lost. The states are controlled by the cluster control interacting with the human or automaton role performer instance. The cluster control is an infrastructure function that determines the states of those dialogs that are part of the operational cluster.
Many dialogs can be active, but only one can be open at any given time. A cluster is an operational entity because it is defined only at run time. As discussed in Chapter 11, the information in a cluster store is intended to be self-defining and is available to all dialogs in the cluster on an equal basis. The set of tasks that implement an entire process is controlled through workflow techniques, as discussed in Chapter 15. Examples of dialogs that could be part of a cluster in the design of one or more business processes would be as follows. § Business process fragment dialog 1 advances the process through the business process fragment it implements (e.g., order entry). § Business process fragment dialog 2 advances the process through a different business process fragment (e.g., previous order status). All information obtained as a result of either process is available to both processes because of the common self-defining cluster store.
§ Management process dialog allows a supervisor or other manager to take over an interaction with a customer and have all current information available. § Support process dialog provides information to the operator about any of the available functions. § Administration dialog terminates the process for any reason (e.g., customer hangs up). All dialogs have the same framework, as illustrated in Figure 13.4. The framework consists of a set of nine interrelated action categories and a cluster store that provides the mechanism for dynamic data storage. The action categories include those needed to provide the administrative functions of the dialog as well as the business-related functions required by the specification of the business process fragment. It is expected that the majority of the actions having an administrative category are common to almost all dialogs and, therefore, are highly reusable.
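The dialog states defined above, together with the rule that only one dialog in a cluster can be open at a time, can be sketched as a small cluster control. The state names follow the definitions in the text; the class, its methods, and the dialog names are illustrative assumptions.

```python
from enum import Enum

class State(Enum):
    ACTIVE_OPEN = "open"      # active and holding the human interface
    ACTIVE_CLOSED = "closed"  # active, no access to the human interface
    SUSPENDED = "suspended"
    TERMINATED = "terminated"

class ClusterControl:
    """Infrastructure function that determines the states of the dialogs
    in a cluster, enforcing the one-open-dialog rule."""
    def __init__(self):
        self.dialogs = {}
        self.cluster_store = {}  # shared, self-defining dynamic data

    def initiate(self, name):
        self.dialogs[name] = State.ACTIVE_CLOSED

    def open(self, name):
        # Close whichever dialog currently holds the human interface.
        for other, state in self.dialogs.items():
            if state is State.ACTIVE_OPEN:
                self.dialogs[other] = State.ACTIVE_CLOSED
        self.dialogs[name] = State.ACTIVE_OPEN

    def suspend(self, name):
        self.dialogs[name] = State.SUSPENDED

    def terminate(self, name):
        # Removing a dialog does not remove the cluster store, which
        # lives as long as the role performer instance does.
        self.dialogs[name] = State.TERMINATED

cc = ClusterControl()
cc.initiate("order entry")
cc.initiate("order status")
cc.open("order entry")
cc.open("order status")   # order entry is automatically closed
```

Closed dialogs can still complete actions against the shared cluster store; only the interface to the role performer moves when a different dialog is opened.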
Figure 13.4: Dialog model framework. 13.2.2 Cluster store Actions of all dialogs in a cluster utilize a common cluster store for dynamic data. Although a cluster store is shown as part of a dialog framework and the dialog actions use the cluster store, it is not defined on a dialog basis. The cluster store has a separate definition that is tied to the existence of a continuing role performer instance. A role performer instance starts when the initial business process dialog is initiated by the role performer. It continues until the role performer terminates the original dialog and all associated dialogs that were initiated and required the same dynamic data as the first dialog. As long as the role performer instance is not terminated, no matter how many dialogs are initiated and terminated during the instance period, the cluster store stays in existence. That allows the utilization of instance information by any active dialog during the time the role performer instance exists. This aspect of the cluster store is illustrated in Figure 13.5. The up arrows indicate when a dialog is initiated and the down arrows when it is terminated.
Figure 13.5: Cluster store instance. For example, assume a customer calls to place an order. A role performer instance is created to perform the activities of the task dialogs that implement, in part, the order process (dialogs 1, 4, and 5 in Figure 13.5). While the order is being taken, the customer
asks the status of a previous order. That is a task (dialogs 3 and 6) in another process, but because of the circumstances, it is performed by the same role performer instance as used in the order dialog because it uses the same set of dynamic data. As shown, the two tasks are processed in parallel. Although the process and associated dialog have changed, the role performer instance has not; because of that, the cluster store remains intact. That allows the customer information obtained because of the order dialog to be immediately used for the status inquiry. Keeping the cluster store intact throughout a continuing role performer instance also makes the specification of the user interface easier. Because the same role performer instance also would necessitate some reasonable continuity in the user interface, the user interface can be defined on the basis of the cluster store rather than a dialog. Thus, no matter how many dialogs are inserted into and removed from the cluster, the user interface remains continuous as long as the cluster store is not terminated. That aspect is also illustrated in Figure 13.5. This type of operation requires the role performer to have the ability to initiate any workflows (processes) that may be needed as an adjunct to the handling of other processes. The use of scenarios is crucial to the design and specification effort. 13.2.3 Functional categories and phases An action is a specification for a specific type of operation to be performed during the invocation of a dialog. Each action in a dialog belongs to one of the following functional categories: § Initialization; § Instance data verification and validation; § Business process; § Instance data assembly; § Cleanup; § Termination; § Exception handling; § Control; § Help/tutorial. Regardless of which category an action is in, it will have the model framework as presented in the following section.
Although the definitions of the action categories and some general usage examples are presented in the following discussion, it is not possible to provide an appreciation of the power of the categories until the methodology discussion in Part III. As will be evident from the following discussion, most of the actions in categories other than business process and help/tutorial are common to all dialogs. That is one powerful example of the reuse capabilities of this type of structure. Actions in the initialization category are utilized to place the dialog in a state suitable for execution. Such actions could include the retrieval and setting of appropriate parameters; the initialization of data in cluster store; and the updating of statistics, logging, and other administrative data. The actions do not necessarily result from the business requirements of the process steps but may be developed from the technical requirements of the dialog. Usually, a set of initialization actions common to all dialogs are developed and documented, in addition to the initialization actions specific to a given dialog. Actions in the instance data verification and validation category are utilized to ensure that data presented to the dialog via a workflow system are available and correct. Actions could include the examination of data for proper identification and to verify the availability of the needed data in the cluster store. These actions do not include any validation and verification that is associated with data retrieved from datastores. Actions in the business process category are the implementations of specific business functions. Such actions are directly dependent on the function of each dialog and therefore must be specified on a dialog-by-dialog basis. However, the number of possible actions in this category is limited to a relatively small number to facilitate their standardization and reuse in multiple dialogs.
Actions in the instance data assembly category are utilized in a manner similar to the instance data verification and validation actions. They are used to assemble the “folder” data necessary for the utilization of a workflow system. Actions in the cleanup category are utilized to place the dialog in a consistent state before terminating. That usually involves the update of persistent storage as necessary and the performance of any necessary administrative operations (e.g., providing statistical information). A set of cleanup actions common to all dialogs usually is developed and documented. The common actions are in addition to the ones specific to a given dialog. Actions in the termination category are utilized to determine the type of termination the dialog will have. Some termination types are normal termination, forced termination, and error termination. The next step in the implemented process may depend on the type of termination. Actions in the exception-handling category are utilized to provide a consistent approach to error processing. These actions do not result from the functional requirements of the process steps but are developed from the technical requirements of the dialog. A set of exception-handling actions common to all dialogs usually is developed and documented. The common actions are in addition to the ones specific to a given dialog. Actions in the control category are utilized to interact with the infrastructure to provide the proper environment for the execution of the dialog. These actions do not necessarily result from the business requirements of the process steps but are developed from the technical requirements of the dialog. Control actions could include the obtaining of needed network resources (e.g., bandwidth) and the setting of appropriate timers. Actions in the help/tutorial category are utilized to provide a user-friendly environment for the human role instances (operators) that interface and interact with a dialog.
These actions provide additional information concerning screen entities and generally are utilized for training or abnormal conditions. These actions can also be defined and utilized for other human interface types such as voice and video. 13.2.4 Dialog utilization The definition and the structure of dialogs provide several advantages: § They allow an effective and efficient transition from the business environment to the technical environment without the necessity of making sometimes arbitrary decisions. § They allow the implementation process to proceed in parallel, using well-sized units. § They provide for the maximum amount of common functionality that has to be specified, designed, and implemented only once. § They enable a similar approach to the validation and verification of the design of individual dialogs, or combinations thereof, against the intent of the originating business processes. Clusters encapsulate the control and local storage functionality needed by a human or automated user. Such encapsulation provides the framework for ensuring that the comprehensive specifications needed for this type of functionality are explicitly considered and stated. In addition, the framework enables changes and additions to be made in a cost-effective manner because the resultant effects are localized. Once the dialog actions have been defined, they can be used in two different ways. If a COTS or legacy system implementation for the dialog is being considered, the action specification provides a means of determining its suitability. If a custom development is indicated, the actions can be further defined and used as the specification for reusable components that would form the functionality needed. Such an evaluation requires a significant amount of analysis and must be considered beyond the scope of this presentation. The following discussion, therefore, assumes a custom development using reusable components.
The business process actions are always preceded by the initialization and the instance data verification and validation actions and succeeded by the instance data assembly, cleanup, and termination actions. Because of the well-defined progression of those action categories, the dialog is partitioned into phases, with each phase containing only one action category. Action usage in each phase must be completed before the actions in the next phase can be launched. The phase progression is the same as the category sequencing. Partitioning helps in the control of the dialog and also makes the search for actions that can be launched more efficient. Actions in the exception-handling, control, and help/tutorial categories can be invoked during any of the dialog phases, as needed. In addition to the designated actions, a dialog is also associated with a cluster store. There is one cluster store serving all tasks in the cluster. It is used to contain any data produced as a consequence of the actions executed in any dialog of the cluster. In addition, the cluster store contains a set of common information, such as the date, time, and operator profile. A cluster store is a special case of a datastore or information structure, as defined in Chapter 11. It can serve as the source or sink of information as well as an authoritative source for the information. Although it is defined on a temporary basis, it is useful for consistency to define a cluster store in the same way as a persistent datastore or information structure.
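The phase gating described above can be sketched as follows: the six sequenced categories form the phases, and the three categories usable in any phase bypass the gate. The class and method names are illustrative assumptions.

```python
PHASES = [
    "initialization",
    "instance data verification and validation",
    "business process",
    "instance data assembly",
    "cleanup",
    "termination",
]
ANY_PHASE = {"exception handling", "control", "help/tutorial"}

class DialogPhases:
    """Gate action launches by phase: actions of the current phase must
    complete before actions of the next phase may launch."""
    def __init__(self):
        self.phase = 0

    def launchable(self, category):
        # Exception-handling, control, and help/tutorial actions may be
        # invoked during any of the dialog phases.
        if category in ANY_PHASE:
            return True
        return category == PHASES[self.phase]

    def complete_phase(self):
        if self.phase < len(PHASES) - 1:
            self.phase += 1

d = DialogPhases()
d.complete_phase()   # initialization actions completed
d.complete_phase()   # instance data verification and validation completed
```

Restricting the launchable set to one category at a time also makes the search for launchable actions more efficient, since only one category's conditions need evaluation in each phase.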
13.3 Action model Chapter 12 discussed the concept of C/S systems. In most cases, the communications required by such systems consist of request/response pairs. In the case in which a unit of work is involved, a request/response pair is called a transaction. For convenience, in this chapter, a C/S communication that consists of a request/response pair is called a transaction even when there is no state change in the server (e.g., retrieval of data from a datastore). This use of the terminology makes the subsequent discussion somewhat easier to structure and follow. From the client perspective, each transaction must involve four supporting functions or components: § The determination of when to initiate a transaction (launch condition); § The formulation of the request; § The validation and verification of the response; § The analysis and dissemination of the result information. Those components, along with the transaction itself, constitute an action. The action model framework is shown in Figure 13.6. Actions consist of the transaction functionality along with all the support needed for the client to effectively utilize the transaction. Support functions have access to the cluster store and can manipulate any of its data as determined by the specific specification of the support function.
Figure 13.6: Action model framework. The relationship between actions and the cluster store is strong. By definition, action request and launch data must come from the cluster store, and action response data can be entered only into the cluster store. That restriction provides a mechanism for close
control over action specification. Of course, any persistent data that the server needs or produces in processing a client request can utilize any available datastore. An action is closed or self-contained in the sense that it encapsulates all the support functions with the transaction itself. In fact, a group of actions, with the proper platform infrastructure, constitute a reasonable segment of automation functionality without the need for any additional software. By their very nature, actions implement an asynchronous messaging structure, because the launch of any action does not prevent the launch of any other action. That can be circumvented, but it is difficult. Matching responses to requests also is rather easy, because responses can carry a common identification based on the identification of the action involved. One significant advantage of utilizing a closed structure is that any of the components of a given action can be changed without directly affecting any of the other actions. The change could result from many directions and include a need to change the launch condition, the destination of the transaction, or the determination of what constitutes an acceptable response. Isolation of changes is a distinct advantage to this type of structure. There can, however, be indirect consequences of changes to individual actions because of alterations to the data available in cluster store. For example, if an action is changed such that the data needed to launch another action are delayed or not obtained at all, the second action would never be launched. That might be a correct result of the change or an artifact indicating that further changes are necessary. Although action changes are localized, there can be consequences that must be evaluated and corrections made as needed to ensure the integrity of the dialog functionality. In practice, actions closely resemble the production rules in a classic knowledge-based system. 
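The resemblance to production rules can be illustrated with a minimal engine sketch: each action's launch condition is re-evaluated whenever the cluster store changes, a launched action stays outstanding until its response arrives, and response data placed in the cluster store can launch further actions. All names and data are invented for illustration.

```python
class Action:
    """One action: a launch condition over the cluster store plus a
    request-formulation function (response handling omitted for brevity)."""
    def __init__(self, name, condition, request):
        self.name = name
        self.condition = condition
        self.request = request
        self.launched = False

class ActionEngine:
    """Sketch of the supporting infrastructure: launch conditions are
    re-evaluated on every cluster store change, and the dialog is
    quiescent only when nothing is launchable and nothing is outstanding."""
    def __init__(self, actions):
        self.actions = actions
        self.store = {}        # cluster store
        self.outstanding = {}  # action name -> request awaiting a response

    def put(self, key, value):
        self.store[key] = value
        self._evaluate()

    def _evaluate(self):
        for a in self.actions:
            # An action launches at most once unless explicitly reset.
            if not a.launched and a.condition(self.store):
                a.launched = True
                self.outstanding[a.name] = a.request(self.store)

    def respond(self, name, data):
        del self.outstanding[name]
        for k, v in data.items():
            self.put(k, v)     # responses enter the cluster store and may chain

    def quiescent(self):
        launchable = any(not a.launched and a.condition(self.store)
                         for a in self.actions)
        return not self.outstanding and not launchable

lookup = Action("lookup_account",
                condition=lambda s: "customer_id" in s,
                request=lambda s: {"get": "account", "id": s["customer_id"]})
credit = Action("check_credit",
                condition=lambda s: "account" in s,  # chains off lookup's response
                request=lambda s: {"get": "credit", "account": s["account"]})
engine = ActionEngine([lookup, credit])
engine.put("customer_id", 42)                         # launches lookup_account only
engine.respond("lookup_account", {"account": "A-7"})  # response launches check_credit
```

Note the forward-chaining flavor: the second action fires purely because the first action's response changed the cluster store, with no explicit control flow between them.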
The infrastructure required to utilize a dialog- and action-based structure operates in a manner similar to that of an inference engine with forward chaining. Although the resemblance is useful for comparison, there are some significant differences. Actions are production rules only in the sense of the launch criteria. Once launched, actions are active until a response is received. That can be a considerable length of time and requires that the infrastructure engine also consider the condition in which actions are still outstanding before assuming that the rule set (actions) has become quiescent. An entire action can be reused, as can any component or set of components of an action. The usual condition is the one in which all the components of an action are reused except the launch condition. The launch condition varies according to the specific circumstances of use. Because actions and their individual components are considered to be automation assets, the administration and management of action reuse can be handled through a repository function. It will be shown in the methodology presentation given later in this book that the structure of the action framework model is specifically designed to support and encourage software component reuse. The information contained in any of these components can be considered to be in the form of business rules. The business rules are explicit statements of how the actions, and hence the dialog, will operate. It is possible to put all the rules in a repository using an appropriate syntax and taxonomy. An examination of the rule set as a whole can then be undertaken to determine such aspects as consistency. 13.3.1 Launch condition In procedural software, the launch condition, that is, the decision to send a transaction, is determined by reaching that point in the code where the transaction sending procedure resides.
There is no separate determination of whether or not to send a transaction other than that programmed into the software.
The launch condition support function includes all the work necessary to determine when the conditions for launching the action transaction have been met. Actions have a launch condition that is directly attached to the action. Those conditions can be of three kinds: data, state, and time. An example of a data condition is the availability or unavailability of specified data in the cluster store. An example of a state condition is the previous execution or nonexecution of another transaction. An example of a time condition is the attainment of 2:00 P.M. Because all the data are placed in cluster store in some form, the launch condition essentially is determined by the state of the data in cluster store and is defined by the “true” or “false” evaluation of a predicate expression that contains the data elements of interest. For example, assume that an action function is to retrieve customer contact information from the customer profile datastore and put it into the cluster store. Further, assume that the action is not needed unless the customer is calling for a legitimate purpose as determined by the value of the call_purpose data element. That element has cluster store as its information structure (authoritative source). Cluster store is also the primary—but not the only—source and sink for this data element. The launch condition of the retrieve customer contact information action might be expressed as follows: call_purpose = order, call_purpose = inquiry, or call_purpose = complaint. Anytime the call_purpose data element exists in cluster store and has a value of order, inquiry, or complaint, and the task in which this action resides is active, the action will be launched. Exactly when the launch occurs is a function of the infrastructure that supports this component. Once launched, the other components of the action become operational. 
The transaction message is formulated and sent, and the response is validated and disseminated as determined by the action definition. If the purpose is not one of those values, the action is never launched. Note that no explicit decision point is required to make that determination. The launch condition for every action in the dialog is evaluated whenever cluster store changes. A number of other implicit conditions are attached to launch conditions, such as “an action can be launched only once in a task unless its launch state is explicitly reset.”

13.3.2 Request formulation

Transactions are implemented via a request/response message pair. It is, therefore, necessary to compose the message and set its operational characteristics as determined by the action definition. Some of the characteristics that may need to be considered explicitly are as follows:
§ Logical destination address;
§ Message format identification;
§ Action identification;
§ Request data required (must be in cluster store);
§ Cluster store data element value to be set on error condition;
§ Timeout parameters;
§ Infrastructure information: security parameters, data type of expected response (e.g., streaming audio or video, text, image, multimedia), special handling, accounting information.
There are several advantages to attaching the message formulation function to the action. Different actions having the same transaction could utilize different characteristics depending on the context in which they are used; those changes could be made without having to consider the effect on other actions. Because all the data for the request message must reside in cluster store, it is relatively easy to determine (1) whether an error condition exists (data not present) and (2) the proper recovery procedure.
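As a sketch only, the request formulation support function can be imagined as building a message from a per-action template that carries the characteristics listed above. The template fields, the `formulate_request` name, and the example values are illustrative assumptions, not notation from the text:

```python
def formulate_request(template, store):
    """Compose a transaction request message from a per-action template.
    All request data must already reside in cluster store; if anything is
    missing, the designated error element is set and no message is built."""
    missing = [e for e in template["request_data"] if e not in store]
    if missing:
        store[template["error_element"]] = missing  # error recovery hook
        return None
    return {
        "destination": template["destination"],  # logical destination address
        "format_id": template["format_id"],      # message format identification
        "action_id": template["action_id"],      # action identification
        "timeout_ms": template["timeout_ms"],    # timeout parameters
        "data": {e: store[e] for e in template["request_data"]},
    }

# Hypothetical template for a "retrieve customer contact information" action.
template = {
    "destination": "CUSTOMER_PROFILE", "format_id": "F1", "action_id": "A7",
    "timeout_ms": 5000, "request_data": ["customer_id"],
    "error_element": "request_error",
}
store = {}
first = formulate_request(template, store)  # None: customer_id not present
store["customer_id"] = "C42"
msg = formulate_request(template, store)    # message built from cluster store
```

Because each action carries its own template, the same transaction can be sent with different characteristics in different contexts without affecting other actions, which is the advantage the text notes.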
13.3.3 Response validation and verification

The infrastructure supporting the action model identifies each incoming message as a response to a request, a push message, or an unidentifiable message. A response message also could be generated by the infrastructure timing mechanism in response to a timeout condition specified by the request formulation support function. If the message is identified as a response, it is associated with the correct action, and the attached response validation and verification support function is invoked. That support function contains the information needed to determine whether the action response message indicates successful completion of the transaction or that some problem has occurred (e.g., definitions of the expected condition codes and their meanings). If a problem has occurred, the determination of what corrective procedures should be taken also is contained in the specification of this support function. This support function is not intended to determine whether any of the data returned are valid. It determines only if and how the transaction completed. The validity of the operational data must be determined by other actions. The infrastructure identifies push messages by using information in a nonshared datastore placed there by previous actions designed to provide that information. Push messages also put their information into cluster store, but there is no equivalent of the response validation and verification support function for those messages.

13.3.4 Analysis and dissemination of results

Assuming that the response validation and verification function has determined that the response is valid, the analysis and dissemination of results support function examines the operational data returned to decide how the data should be processed. In most cases, the data simply are placed in cluster store.
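A minimal sketch of the response validation and verification support function, assuming the action's validation information is a small table of condition codes and corrective procedures (all names and codes here are invented for illustration):

```python
def validate_response(message, spec):
    """Decide whether a transaction completed and, if not, which corrective
    procedure applies. Validity of the returned operational data is
    deliberately not checked here; other actions are responsible for that."""
    code = message.get("condition_code")
    if code in spec["success_codes"]:
        return ("completed", None)
    # A TIMEOUT code may be synthesized by the infrastructure timing
    # mechanism when no real response arrives in time.
    return ("failed", spec["corrective"].get(code, spec["default"]))

# Illustrative validation information for one action (all values assumed).
spec = {
    "success_codes": {"OK"},
    "corrective": {"TIMEOUT": "retry", "E_AUTH": "abort task"},
    "default": "abort",
}
```

Attaching a `spec` of this kind to each action lets different actions interpret the same transaction's outcomes differently, consistent with the localization theme of the chapter.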
However, there may be conditions under which some or all of the returned data will not be placed in cluster store. Another possibility is to rename the data elements before they are placed in cluster store. One situation in which that could occur is when the same data element is already in cluster store. The default generally would be not to replace that element with the current one. However, the information given to this support function could indicate a variety of conditions and responses. If the data element already appears in cluster store:
§ Discard the data element;
§ Replace the element with the current value;
§ Rename the response data element and place it in cluster store;
§ Rename the cluster store data element and place the response element in cluster store;
§ Add the response element value to the cluster store data element through the use of a multivalued construct.
If elements are renamed, other actions must be defined that are aware of the potential name change and can accommodate the newly named element. By associating the analysis and dissemination support function with each action, a different determination can be made for each action without directly affecting any other action. This again serves to localize changes.

13.3.5 Action types

In addition to sharing the same framework, actions are closely constrained in the definition of their functionality to promote their use in multiple processes and dialogs. The constraints take two forms. The first limits actions to a small number of types. The second requires all actions to obtain data from or place data in cluster store, depending on type. An action in any dialog category can be of any of the defined types. Action constraints are illustrated in Figure 13.7.
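The five collision responses above can be sketched as a small policy enumeration. The rename suffixes and function names are arbitrary choices for illustration, not conventions from the text:

```python
from enum import Enum

class CollisionPolicy(Enum):
    """What to do when a response element already exists in cluster store."""
    DISCARD = 1           # discard the response element
    REPLACE = 2           # replace the cluster store value
    RENAME_RESPONSE = 3   # rename the response element before storing it
    RENAME_EXISTING = 4   # rename the cluster store element first
    MULTIVALUE = 5        # accumulate values in a multivalued construct

def disseminate(element, value, policy, store):
    if element not in store:
        store[element] = value                 # the common case: just place it
    elif policy is CollisionPolicy.REPLACE:
        store[element] = value
    elif policy is CollisionPolicy.RENAME_RESPONSE:
        store[element + "_response"] = value   # suffix is an arbitrary choice
    elif policy is CollisionPolicy.RENAME_EXISTING:
        store[element + "_prior"] = store[element]
        store[element] = value
    elif policy is CollisionPolicy.MULTIVALUE:
        prior = store[element]
        store[element] = (prior if isinstance(prior, list) else [prior]) + [value]
    # CollisionPolicy.DISCARD: do nothing

store = {"V8": "H"}
disseminate("V8", "X", CollisionPolicy.DISCARD, store)     # V8 stays "H"
disseminate("V8", "X", CollisionPolicy.MULTIVALUE, store)  # V8 becomes ["H", "X"]
disseminate("V2", "B", CollisionPolicy.DISCARD, store)     # V2 absent, so placed
```

Each action would carry its own policy per element, so one action's choice never affects another, which is the localization property the text emphasizes.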
Figure 13.7: Action data transfer dynamics.

The definition of each action type is based on the specifics of the work performed by the contained transaction:
§ Shared (by multiple clients) data CRUD;
§ Nonshared data CRUD;
§ Human interface data CRUD;
§ Resource status request (retrieval or update);
§ Data transformation;
§ Enable push transfer.
Although these definitions may make it appear that action types are limited to database access only, that is not the case. They can accommodate any functionality required by the process being implemented, using the concept of intelligent datastores defined in Chapter 11. The types are defined as they are to keep the number needed to a minimum. As an example, consider the following: The defined purpose of an action is to obtain the cost of a specific equipment configuration and update a statistics database that accumulates the configurations utilized in all costing inquiries. The transaction request contains a specific equipment configuration. The transaction response to the action is the cost of the configuration. The server functional component that calculates the cost based on the equipment configuration also updates the configuration statistics. As far as the action is concerned, the transaction request and response are in the same form as a standard database retrieve. The key is the configuration, and the returned data are the cost; hence, this action is typed as a shared data retrieve. It is shared because of the assumption that dialogs other than the one utilizing the action can request the same functionality. The actual effect expected must be part of the specification of the transaction (and the action by association). The specification can be as complex as necessary, but because access to the functionality of the server is always through a request/response pair, the action, regardless of definition, can always be characterized as a data CRUD type of activity.
Although the possible complexity is arbitrary, in actual practice action complexity usually is quite small. That occurs because it generally is easier to define several actions, each with limited functionality, than one action with a large number of operations. Some rules of thumb that can be used to indicate the proper amount of functionality defined for an action are discussed in the methodology presentation in Part III.
This limitation to a small number of action types is necessary to maximize the potential reuse of previously implemented actions. Defining actions with a relatively small amount of functionality also helps in that regard. Any available actions from prior dialog decompositions also are limited to these types. This restriction greatly enhances the probability of obtaining a match between actions resulting from the current dialog decomposition and those defined during previous or future executions of the methodology. Detailed definitions of each of the six action types follow. The shared data (CRUD) action type is used to move data between shared (multiple clients) logical datastores (persistent storage) and cluster store. Shared datastores are utilized by multiple dialogs. The datastores can have any degree of intelligence associated with them. Examples of these types of actions would include the retrieval and update of customer contact information, the update of a switch routing table, the determination as to whether a specific telephone was active, and the retrieval of a telephone number for a new customer. The last retrieval action would require a datastore with some intelligence since a considerable amount of processing might be needed to determine the retrieved information and update the necessary inventory information. The nonshared data (CRUD) action type is used to move data between nonshared persistent logical datastores and cluster store. Nonshared datastores are not utilized by other clients and are therefore client specific. Nonshared datastores are used to keep information concerning the client that must be used by all tasks executed by the client. Nonshared datastores can be used to keep security, processing characteristics, allowable push messages, and other client-based data. The human interface data (CRUD) action type is used to move data between the human interface devices, both input and output, and cluster store. 
The human interface can be considered to be a specialized type of datastore where the information can be created (e.g., new window displayed), retrieved (e.g., information entered by an individual using a keyboard), updated (e.g., the information displayed in a given window is changed), and deleted (e.g., a window is removed from the display). The human provides the intelligence for this type of datastore. The human interface functionality and the action implementation can be on the same or different platforms. That allows dialogs containing these actions to reside on the client or a server while the human interface functional components accessed by these actions reside on the client. The state retrieve and update action type is used to move state information between logical entities represented by state machines and cluster store. As with datastores, the state machines can have any degree of intelligence associated with them. Examples of these transactions would include the retrieval of the operational state of a switch and the availability of an operator. It is not possible to create or delete states using this action type because that would alter the state machine itself. The data transformation action type is used to transform data in the cluster datastore. Cluster datastore is used for both the input and the output of this action type. An example of a data transformation transaction is an algebraic manipulation of cluster datastore to produce the data needed for a retrieval request. The enable push transfer action type is used to allow messages to be accepted and their data placed in cluster store without an associated request message. By utilizing an action to determine when and which messages will be accepted, the occurrence of unsolicited messages can be managed. The data from this action are placed in a nonshared datastore, where they will be used by the infrastructure.
Because it interacts with a nonshared datastore, this action could be included in the definition of the nonshared data CRUD action. However, it is of such importance that it was given its own type. Although action typing does not change the format or operation of the action, it does provide three important functions:
§ The facilitation of the identification of similar actions that already have been defined and implemented. These candidate actions can then be used in whole or in part in the implementation of the current action.
§ A target set of actions for the specification of process functionality. This aids in performing the constrained decomposition procedure.
§ A means to analyze the operation of the process fragment using a standard approach. This allows the identification of gaps, overlaps, conflicts, and inconsistencies in the process specification that cannot be easily identified at the process level. Experience has shown that this type of analysis frequently indicates a need for changes at the process level.
These action types have resulted from experience with using the action type concept for the technical specification of process fragments. Further analysis and research may indicate the need for a different number of types or a different definition for the ones presented in this section. Because of the way that actions are defined, such changes easily could be accommodated. However, to this point, there has been no indication that they are needed.
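The six action types, and the constraint that state actions cannot create or delete states, can be captured in a small taxonomy. This is a sketch of one possible encoding, not a structure defined by the book:

```python
from enum import Enum, auto

class ActionType(Enum):
    SHARED_DATA_CRUD = auto()       # shared logical datastores <-> cluster store
    NONSHARED_DATA_CRUD = auto()    # client-specific datastores <-> cluster store
    HUMAN_INTERFACE_CRUD = auto()   # interface devices <-> cluster store
    STATE_RETRIEVE_UPDATE = auto()  # state machines <-> cluster store
    DATA_TRANSFORMATION = auto()    # cluster store -> cluster store
    ENABLE_PUSH_TRANSFER = auto()   # permit unsolicited (push) messages

def allowed_operations(action_type):
    """Operations permitted per type; states cannot be created or deleted
    because that would alter the state machine itself."""
    if action_type is ActionType.STATE_RETRIEVE_UPDATE:
        return {"retrieve", "update"}
    if action_type is ActionType.DATA_TRANSFORMATION:
        return {"transform"}
    if action_type is ActionType.ENABLE_PUSH_TRANSFER:
        return {"enable"}
    return {"create", "retrieve", "update", "delete"}
```

Tagging every action with a type in this way is what makes the repository matching described above mechanical: candidate actions for reuse can be filtered by type before their transactions are compared.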
13.4 Action dynamics

Without using an automated simulation tool, it is difficult to provide an adequate feeling for and understanding of action dynamics. However, to complete this discussion, it is necessary to provide at least a rudimentary indication of how an action system operates. Figure 13.8 depicts a simple dialog that has a system of five actions and a cluster store. The launch criteria and the response variables and values are shown for each action. For the current purpose, it is not necessary to define the detailed operation of the actions. In an actual implementation, all the action data would have to be specified. It also is assumed for this simple system that each action completes successfully.
Figure 13.8: Action dynamics case 1. At the start of the dialog, it is assumed that cluster store contains two variables and associated values, V1 = A and V8 = H. The launch criterion for each action is examined at the start of the dialog and every time cluster store changes in some way. The time periods represent the time between cluster store changes. The elapsed time represented by a time period is not fixed. It depends on when cluster store changes. Because the actions are asynchronous, that period can vary considerably, depending on how long server processing takes for a given action. Thus, the elapsed real times for the time periods may be different. At every vertical line between time periods, the cluster store changes in some way, and the launch criteria for all actions that have not yet been launched are examined. Remember that only data in cluster store are considered when launch criteria are evaluated. If the launch criterion evaluates to “true,” the action is launched. Multiple actions can be launched at the start of any time period. If no action is launched and there are no outstanding transactions, the dialog is completed and terminates. If there are
outstanding transactions, they must be completed before the dialog terminates, since their response variables may cause additional actions to launch. At the start time line, the only action with a launch criterion that is true is action 1, and it is launched at the beginning of time period 1. It completes at the end of the time period, and the response data are added to cluster store. The action launch criterion is then evaluated and becomes true for both action 2 and action 3, and they both are launched at the beginning of time period 2. Action 3 completes at the end of time period 2, and its response data are placed in cluster store except for the value of variable V8. Note that variable V8 is already in cluster store with a value of H. In this case, the response analysis and dissemination support function for action 3 has instructions not to replace the values of response variables already in cluster store. As discussed earlier, other instructions also could have been provided. The transaction of action 2 is still outstanding at the end of time period 2 and does not contribute to any changes in cluster store. The launch criterion again is examined, and action 4 launches. Action 2 completes at the end of time period 3, and its response data are added to the cluster store. No new actions are launched at the beginning of time period 4, but the transaction of action 4 is still outstanding, so the dialog continues. Action 4 completes at the end of time period 4. No new actions are launched and no transactions are outstanding, so the dialog completes. Note that for this task, action 5 did not launch because its launch criterion was never true. It is not necessary for all actions in a dialog to launch. In fact, in most cases, the majority of the actions in a dialog will not launch for any given task. For different tasks, the actions that launch and those that do not will be different. 
Everything depends on the specific conditions present for the dialog, which in turn depend on the scenario being followed. As an example, changing the initial cluster store variables and values alters the specific operational conditions of the dialog (e.g., the scenario being followed is different). The definitions of the dialog and actions are the same as in the previous case. In this situation, different actions will launch, and those that do may launch at a different time. That is illustrated in Figure 13.9. Note that actions 1 and 4 do not launch, while action 5 does. Actions 2 and 3 launch as they did in the previous example, but they now launch at the beginning of time period 1 rather than time period 2.
Figure 13.9: Action dynamics case 2.

To begin the dialog, some designers use initialization category actions with launch criteria that always evaluate to “true” regardless of cluster store data. That is easily accomplished by using a constant launch criterion of “1.” Other designers prefer to have initial variables and values placed into cluster store by action of the infrastructure prior to
dialog start if the infrastructure allows that procedure. Both methods work well, and it is simply a matter of preference. Somewhat more complex examples of dialog and action designs are given in the methodology presentation. Although it may seem more complex than procedural programming, there is actually more control over the operation of the dialog using the nonprocedural approach. The artificial sequencing of operations required by conventional coding does not get in the way of explicitly specifying the conditions needed to perform each action. By using suitable simulation techniques along with the defined scenarios, the operation of the actions in the dialog can be examined and changes made as appropriate.
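The evaluation loop narrated for case 1 (Figure 13.8) can be sketched in a few lines. The launch criteria, response data, and transaction durations below are hypothetical stand-ins chosen only to reproduce the narrated behavior; the loop itself follows the rules in the text: criteria are re-evaluated whenever cluster store changes, a keep-existing policy protects V8, and the dialog terminates only when nothing launches and no transactions remain outstanding.

```python
class Action:
    """One action of a dialog: a launch criterion over cluster store,
    plus the response data its transaction eventually returns."""
    def __init__(self, name, criterion, response, periods=1):
        self.name = name
        self.criterion = criterion  # predicate over the cluster store
        self.response = response    # data carried by the response message
        self.periods = periods      # time periods until the response arrives
        self.launched = False

def run_dialog(actions, store):
    """Evaluate every unlaunched action's criterion each time cluster store
    changes; terminate when nothing launches and nothing is outstanding."""
    outstanding, launch_log, changed = [], [], True
    while changed or outstanding:
        if changed:
            for a in actions:
                if not a.launched and a.criterion(store):
                    a.launched = True
                    outstanding.append([a.periods, a])
                    launch_log.append(a.name)
        changed = False
        for entry in outstanding:   # advance one time period
            entry[0] -= 1
        completed = [e for e in outstanding if e[0] <= 0]
        outstanding = [e for e in outstanding if e[0] > 0]
        for _, a in completed:      # disseminate response data
            for element, value in a.response.items():
                if element in store:    # keep-existing policy (e.g., V8 = H)
                    continue
                store[element] = value
                changed = True
    return launch_log

# Hypothetical criteria and responses consistent with the case 1 narrative.
store = {"V1": "A", "V8": "H"}
actions = [
    Action("action 1", lambda s: s.get("V1") == "A", {"V2": "B"}),
    Action("action 2", lambda s: "V2" in s, {"V5": "E"}, periods=2),
    Action("action 3", lambda s: "V2" in s, {"V3": "C", "V8": "X"}),
    Action("action 4", lambda s: s.get("V3") == "C", {"V6": "F"}, periods=2),
    Action("action 5", lambda s: s.get("V9") == "Z", {"V7": "G"}),
]
order = run_dialog(actions, store)
```

Running the sketch launches actions 1 through 4 in the narrated sequence; action 5 never launches because its criterion never becomes true, and V8 retains its original value of H because action 3's response is not allowed to replace it.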
13.5 Consequences

Direct process implementation and the use of nonprocedural programming techniques require a somewhat different orientation, both for the personnel responsible for requirements determination and for the software designers who will design and implement the functionality and data models needed. The individuals involved will require a significant amount of training and technology transfer, and the enterprise must allocate the time and resources needed. The structure of the dialog and action models requires that a number of difficult issues and decisions be considered for every dialog and for every action in a dialog. That makes it difficult to “forget” to specify or implement a necessary piece of functionality, with its attendant negative impact on quality, resources, and customer satisfaction. That information is useful whether a COTS product or a legacy system is being considered or a custom implementation using the structure specified in this chapter is indicated. The dialog and action frameworks also enable the reuse of a significant amount of functionality, either through the utilization of the common functions needed for dialog implementation or through the use of available business-oriented software components. Although a large amount of information is generated in the action specification process, its regular structure makes it possible (though difficult) to handle, even without automated assistance. Because of the amount of initial information needed, this specification activity is definitely a front-loaded effort. That is somewhat in conflict with current approaches of building something quickly and then modifying it until it works or seems to fit the requirements of the users. While such an evolutionary approach certainly provides the instant gratification that contemporary activity seems to demand, it cannot provide the product quality level and future change flexibility that also are needed.
The middle ground is to use the front-loaded analysis for quality but to do it in a timely fashion. Because dialogs can be designed and implemented in parallel, multiple teams can be used. As a dialog is designed, it can be implemented either through the reuse of previously developed functions or by the development or purchase of commercially available ones. The ability to overlap the development of individual dialogs and to reuse previously developed components can greatly improve the speed at which a process implementation can take place without sacrificing the quality of the resultant product. Use of Internet techniques, standards, and products as the underlying mechanism for implementing business processes does not reduce or eliminate the need for careful analysis of the functionality required, as outlined in this chapter. For example, by considering actions as Java applets and their infrastructure support, including the cluster store, as part of the Java machine definition, the dialog-and-action structure meshes quite well with an Internet-based solution. The key is that an Internet-based approach may facilitate implementation, but it cannot replace the need for careful analysis.

Selected bibliography
Davidson, D., “The Logical Form of Action Sequences,” Essays on Actions and Events, Oxford: Clarendon Press, 1980, pp. 105–148.
Krishnamurthy, B., and D. S. Rosenblum, “Yeast: A General Purpose Event-Action System,” IEEE Trans. Software Engineering, Vol. 21, No. 10, 1995, pp. 845–857.

de Lemos, R., and A. Romanovsky, “Coordinated Atomic Actions in Modelling Object Cooperation,” Proc. 1st Internatl. Symp. Object-Oriented Real-Time Distributed Computing, Kyoto, Apr. 20–22, 1998, pp. 152–161.

Nett, E., and B. Weiler, “Nested Dynamic Actions: How to Solve the Fault Containment Problem in a Cooperative Action Model,” Proc. 13th Symp. Reliable Distributed Systems, St. Augustin, Germany, Oct. 25–27, 1994, pp. 106–115.

Santos, F. A. A., et al., “Action Concepts for Describing Organised Interaction,” Proc. 30th Hawaii Internatl. Conf. System Sciences, Wailea, HI, Jan. 7–10, 1997, pp. 373–382.
Chapter 14: Software component modeling

Overview

The purpose of defining and modeling software components as automation assets is to provide an effective mechanism for obtaining software reuse. This chapter defines a component reuse model and framework that can substantially increase the amount of reuse obtained from process implementation activities. The reuse of software has been a goal of the computer software industry since the time of the earliest commercial machines. The author personally has been involved in various efforts since the late 1960s. Even with that rather lengthy time period, considerable confusion remains as to what exactly is meant by the term software reuse and how it can best be applied in practice. The confusion extends to the definition of the metrics needed to indicate the extent to which reuse is—or is not—being achieved. Some discussion concerning the implications of initiating an extensive reuse program is also provided. The migration to a reuse-oriented approach to software development requires a well-thought-out plan that considers all the many aspects involved. Those aspects include management, social, and financial/accounting considerations, in addition to the purely technical aspects. Also required are the means and the will to carry the plan forward. If any of those is lacking, the attempt to reuse software as a basic business doctrine will be a failure, no matter how well conceived and defined the underlying framework and strategy might be. Unless management fully backs the reuse concept and is prepared to make whatever changes are necessary to the basic functioning of the enterprise, it probably is better that a reuse program not be started.
14.1 Duplication of effort

In any enterprise, there are many instances in which internal duplication of effort occurs. Unless it is caused by basic business philosophy (e.g., General Motors’ different divisions each trying to sell a car to the same customer, or a development organization promoting two or more different approaches to the same problem in order to select the best one), it usually is caused by structural problems between two or more enterprise organizations. Poor communications, distrust, and arbitrary restrictions are among the many possible causes. The reduction of unnecessary duplication of effort is an interesting topic in its general form. However, for the purposes of this discussion, only the potential duplication of effort existing in process implementation and associated software development is of
concern. In the discussion that follows, the terms duplication of effort and duplicate effort are used in that context. Reuse is one type of duplication-of-effort reduction that usually is associated with the software implementation process. Because the focus of this discussion is specifically directed toward implementation activities, it is limited to the reuse aspects. Chapter 13 defined multiple types of reusable software “parts.” However, to keep this discussion relatively generic, the term component is used for any type of software part that is intended to be reused.

14.1.1 Reuse concept

The term reuse refers specifically to the intended use of the same software component in multiple products. Such a component can then be considered to be reused. The concept comes from the manufacturing industry, where individual components are reused to create customized products. If, for example, an order is placed for a water pump with specified characteristics, the manufacturer could assemble a suitable pump from an inventory of existing parts. That example also illustrates another major requirement for reuse. The purpose of reusable components is to allow the development of a product by assembly of the components (usually from inventory), rather than to design and implement the product as a custom unit, developing everything from the ground up using only the stated requirements. Thus, while the use of the same functionality in multiple projects is certainly the essence of reuse, it must be done in the context of an assembly methodology. If that definition is not rigorously enforced, reuse retains the hit-or-miss characteristic that it has historically possessed. There is another aspect of reuse that needs to be addressed. What is the status of software that is copied and then altered? Does that constitute reuse?
Historically, such cases have been considered a form of reuse, although there is considerable vagueness concerning how much of the original software would have to be utilized to maintain the definition. In fact, it probably is safe to say that, historically, partial reuse was the dominant form of reuse. To avoid too much conflict with the historical definitions, for the purposes of this discussion, the author also, rather reluctantly, considers partial usage to be a reuse condition (and not just a reduction in possible duplication of effort).

14.1.2 The importance of reuse

The importance of reuse lies not so much in the elimination of duplicate coding but in the ancillary functions, such as testing and integration. Coding is a relatively minor part of software development. If all reuse did was eliminate that activity, it probably would not be worth much effort and certainly not the amount of attention that it historically has enjoyed. The major benefit that reuse should provide is a large reduction in testing, both unit and integration. Because the amount of integration testing needed increases exponentially with the number of elements being integrated, that is a significant factor. A reduction in the amount of testing necessary also significantly increases the quality of the resultant product. Although quality improvement is a soft benefit, in that it is difficult to quantify, it certainly is an important one in the competitive situation that currently exists. The decision as to whether an altered component is being reused rests, in essence, on the amount of testing that is eliminated. That determination, as with the other requirements, is based on the judgment of the individual making it. The problem is eased somewhat, as shown in Section 14.2.2, where the conditions that forced the extensive utilization of partial reuse are examined in the context of the current technical environment.
It is shown that the current and projected technical environment will make partial reuse much less important in the future. The three reuse requirements can be restated in a more succinct form as follows:
§ The reused component is intended to be an integral part of two or more different products.
§ The reused component is incorporated through the use of an assembly methodology.
§ The reused component is essentially the same in all implementations.
14.2 A new approach to reuse

To be successful, an entirely new approach to software reuse must be defined. In fact, the philosophical basis for reuse must be completely reversed. Up to now, reuse has been treated as an exception. That is evident from the following list of example characteristics.
§ Reuse components are specifically identified and treated differently from those components designed to be used only in the project for which they are defined.
§ The activities needed to identify and utilize existing components are not integrated into the normal flow of the development, and they require considerable attention and resources.
§ The upfront financing of reusable components is usually an exception to the normal funding methods of software development projects.
The philosophical change that must be made is to stop treating reuse as an exception and begin treating it as the normal course of events. That view engenders a completely different characteristic list.
§ Components will provide all the functionality for a given software development.
§ Each component must be viewed as reusable unless there are (assumed to be very few) project-specific reasons to prevent sharing of the component.
§ All existing components must be examined for possible reuse before the determination that a new component must be implemented. The new components will then be available for all other projects to use.
§ All components are developed using the same financial model.
Restated, the main thrust behind this shift is to eliminate the idea that reuse is an exception that is difficult to utilize and instead view reuse as the usual way process implementations are developed. Why has this shift not been proposed before? It probably has been, many times, but the enablers necessary for successful utilization were not yet available or palatable in the environment.

14.2.1 Enablers

For convenience, reuse enablers are grouped here into business, social, and technical categories.
It should be recognized that some enablers can overcome problems in multiple categories. In addition, the enablers are interrelated, and with few exceptions they are all necessary to allow for an acceptable level of reuse. If one or more enablers cannot be provided in a specific enterprise, reuse in that enterprise probably will fail regardless of the availability or the strength of the others.
14.2.1.1 Business enablers

The major financial enabler is to fund component development from funds specifically allocated for that purpose. Component acquisition should not be paid for directly by a project. An allocation system, such as that used for overhead charges, is one mechanism that could provide the needed development resources, although other management accounting structures could serve as well. The result is that each project pays for the use of an existing component based on the average reuse statistics for the enterprise, not on the projected number of reuse occurrences for a specific component. The crystal ball can be thrown away! The accounting structures necessary for this type of charging mechanism are not difficult to define and install. An example of one such structure is provided later in the discussion.
The main management enabler is a development structure that separates the process implementation function from the component implementation function, with each activity assigned to a different organization. This open organization would replace the closed model in which all implementation is performed by a single organization. The split has an analogy in the division between process implementers and system software developers (e.g., of operating systems and communication software). The two types of software historically have been assigned to different groups with somewhat different skills. Successful integration depends on communication and documentation between the two organizations and usually is not a difficult problem. With experience, the same should be true of a split between process implementers and component developers.
14.2.1.2 Social enablers

The secret of the social enablers is to allow project personnel the degree of creativity that motivates them while providing mechanisms that direct that creativity toward reuse rather than against it. The separation of process implementation from the implementation of functional components provides exactly such a mechanism. Because process implementers are not allowed to develop components (code), their creativity is expressed in the successful identification and integration of existing components under the direction of an appropriate methodology. If new components are necessary, they are developed (or adapted as necessary) by a separate organization that does not develop process implementations. Individuals in that organization express their creativity by providing components that meet the needs of many projects and that are problem free even when used under radically different circumstances. Because the thrusts of the two organizations' activities differ, the skill sets (and associated creativity needs) of the individuals also can differ and be targeted toward the specific needs of each area. Each group also can take pride in the successful accomplishment of its respective activities. That type of separation goes a long way toward eliminating the reuse dilemma created by having the same organization produce both the process implementation and its functional components.
14.2.1.3 Technology enablers

A major technology enabler is the availability of C/S distributed architectures based on a uniform messaging structure. Such messaging structures are part of what is currently termed middleware or infrastructure. This type of architecture can significantly reduce the need to port a component from one environment (platform, operating system, source language) to another: the component is designed with a messaging interface and used where it sits. Because every component used in a product no longer has to reside on the same (usually centralized) computer, it is not necessary to port a given component to a specific environment. That eliminates the need to redevelop and retest a component almost every time it is reused.

This philosophy does require that a component be developed so it can be utilized in that manner. As long as every component is so designed, there should be no insurmountable problem, because every component is reusable. For the most part, objections to doing so are rooted in arguments that are no longer valid. The availability of low-cost computing power removes the tight operational requirements that were imposed on components when efficiency (processing and memory) was the main focal point. The result is faster development and the ability to deploy multiple copies of a component when throughput requires. In addition, the cost per component decreases because of the same loosening of constraints. Component design and implementation can (and should) become a specialized field with well-defined skills and experience. Expert attention also reduces the objection to developing components according to a somewhat more demanding paradigm. Because component development currently is not defined independently of process implementation, a lot of amateurs are plying the trade, and they tend to utilize simplistic solutions and constructs.

The C/S architectural structure also helps enforce the separation of process implementation from component implementation by providing a well-defined and enforceable interface between the two entities. It is not necessary at this point to delve into the details of a suitable messaging structure.

Another needed technology enabler is a methodology that defines software components with a high probability of being reused. Such a methodology must utilize some type of constrained decomposition so that all the resulting components have the same general structure and contain business functionality in proper-sized quanta. The lack of such methodologies has been a major stumbling block to significant reuse of software components. The process implementation methodology described in Part III provides structures and approaches by which components with the necessary characteristics can be specified using the concepts of Chapter 13.

14.2.2 Framework design

The previous discussions provide a suitable direction for the definition of an overall reuse framework. Such a framework must accommodate the enablers as well as provide a means for determining an implementation strategy. One possible framework is shown in schematic form in Figure 14.1. It is based on the separation of the process implementation logic from the components that provide the necessary functionality. Communication between the two is provided by a common messaging system. The components are monitored and administered by a configuration management system that provides such functions as change control and versioning.
Figure 14.1: Reuse framework.

In that framework, the process implementation consists of (1) the logic needed to properly sequence the execution of the component functionality and (2) any software needed to interface with the messaging structure. A process implementation generally would run on a computing platform designated as a client, although that need not be the case. The user interface is assumed to be performed by one or more components whose functionality provides the necessary input/output control. By defining the user interface in this manner, it is possible to share all or portions of it among multiple process implementations, thus maintaining a common look and feel.

This type of process implementation structure has as its focus the specification and sequencing of functionality. With that approach, changes can be made to the components, under many circumstances, without necessitating changes in the process implementation. Likewise, changes can be made to the process implementation without affecting the contents of the components. Although that requires a relatively sophisticated configuration management approach, it eases the maintenance problem considerably. Of course, in many cases, responding to business needs requires changing both the process implementation and one or more of the components it utilizes.

The infrastructure, which consists of the messaging system and associated services such as security and location directories, is key to the success of this framework. Regardless of functionality and location, all the process implementations and components must use the same type of messaging structure, or the messaging system must have a means of converting between differing ones. The lack of a uniform approach to messaging has been one of the major historical obstacles to specifying this type of process implementation structure. That difficulty is being partially alleviated by current standards efforts, which are beginning to produce useful specifications.

The components have no knowledge of the process implementations that use them. They must respond to any process implementation (subject to security considerations) that correctly sends a message to them. That requires that each component have a means of handling multiple requests without the requests interfering with each other. Several means can accomplish that separation, including input queues, multiple threads, and multiple copies of the component. Different components probably will use different methods depending on their size, anticipated load and throughput, and response time requirements. The means of handling multiple requests can be changed for any component without altering the process implementations that use it, since the messaging structure remains constant.

Components are resident on servers that provide the facilities needed for their operation. The language(s) in which they are coded, the operating system utilized by the server, and the hardware platform do not matter to the process implementation as long as the messaging structure is consistent.
That eliminates most of the need to port components between different combinations of those elements, because components are used where they sit. The same is true for the process implementations: their implementation conditions can be the same as or different from those utilized by the components. It simply does not matter.

If an enterprise develops products for different customers, each with its own independent C/S system, a server-based philosophy usually can be applied only within a given customer's facilities, unless the enterprise also operates as a type of utility or service bureau. However, most of the advantages of the framework still can be obtained as long as the characteristics of the elements used are not changed between customers. In general, the more change required between customers, the less effective reuse can be.

There are two types of servers: shared and private. All components on shared servers can be reused by any process implementation, wherever located, that needs their functionality. Components on private servers can be used only by the process implementation(s) having access to that server. Private servers may be necessary for security reasons, operational conditions, different customers, or similar reasons. In that case, if it is desired to use an existing component on a private server, it may be necessary to port that component to the private server if the server implementation elements differ in some way. Although porting reduces the efficiency of the resultant reuse, it is greatly preferable to redesigning and reimplementing the component functionality.

Figure 14.1 shows a repository for the reusable components separate from the server(s) on which they are deployed. The repository serves two purposes. The first is to eliminate objections to interacting with operational components for nonoperational purposes (e.g., copying to a private server). The second is to provide a place where information about a component can be stored and retrieved during a search for available functionality. The purpose of the repository is not to provide copies of components to process implementers so they can be altered at will; that is the major purpose of the classic reuse catalog, which has been proven not to work.
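The messaging discipline of this framework can be sketched in a few lines of Python. Everything here (the Message shape, the queue-per-component dispatch, the sales tax function) is an illustrative assumption rather than a prescribed design; the point is that a component answers any correctly formed message, serializes concurrent requests through an input queue (one of the techniques mentioned above), and knows nothing about its callers.

```python
import queue
import threading

class Message:
    """A minimal uniform message: an operation name, a payload, and a reply channel."""
    def __init__(self, operation, payload, reply_to):
        self.operation = operation
        self.payload = payload
        self.reply_to = reply_to          # a queue supplied by the caller

class Component:
    """A component that serves any caller through a single input queue."""
    def __init__(self, handlers):
        self.handlers = handlers          # operation name -> function
        self.inbox = queue.Queue()
        threading.Thread(target=self._serve, daemon=True).start()

    def _serve(self):
        while True:
            msg = self.inbox.get()
            result = self.handlers[msg.operation](msg.payload)
            msg.reply_to.put(result)      # the component knows nothing else about the caller

# Hypothetical business function deployed as a component.
tax = Component({"sales_tax": lambda p: round(p["amount"] * p["rate"], 2)})

# Any process implementation interacts only through messages.
replies = queue.Queue()
tax.inbox.put(Message("sales_tax", {"amount": 100.0, "rate": 0.07}, replies))
print(replies.get())   # 7.0
```

Swapping the input queue for multiple threads or multiple copies of the component, as the text suggests, would change only the `Component` internals; callers see the same messaging structure.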
With all its advantages, why has this type of framework not been utilized before? The answer lies in the lack of enablers. They are only now becoming available, and a lot of work still needs to be accomplished. However, the advantages of this structure are great, and its use should increase over time as experience and successful results are obtained.

14.2.3 A workable strategy

How can the reuse framework be effectively implemented and utilized? One way is illustrated in Figure 14.2, which shows the several roles involved in producing a process implementation. The multiple-role strategy ensures that proper emphasis is placed on the different aspects of reuse, including the specification of the needed process implementation functionality, the development of common components, and the mapping between them. With that type of separation, there is little to be gained by any individual or organization from failing to pursue the maximum amount of reuse. In fact, by structuring the metrics properly, each organization can be held accountable for ensuring the success of the reuse program.
Figure 14.2: Reuse strategy.

The separation is maintained by assigning the roles to different organizations. Six separate activity areas (roles) are defined. However, it will be shown that some of them can be combined to reduce the number of separate organizational units involved without adversely affecting the purpose of the separation.

The use of multiple roles is similar to what occurs naturally in hardware development using integrated circuits. The engineers who assemble components into working equipment almost always are not the same engineers who design and develop the components themselves. Historically, that division arose because of the different skill sets of the individuals involved as well as the different tools and equipment needed in the development process. That splitting of roles is one of the factors that has made component reuse in hardware development successful.
14.2.3.1 Process implementers

Process implementers are responsible for the specification of the process implementation logic and the functional requirements for each component. The component specifications must be constrained and provided in a common form so that the probability an existing component can satisfy the need is high. Defining the structures and activities for producing those constrained component specifications is the responsibility of the methodology. The specification of the process implementation logic also must be provided in a standard format so that each process implementation can be implemented using common facilities. Although that is not a defined reuse activity, it is nevertheless required to reduce duplication of effort in the form of support facilities.
Because the process implementers are not allowed to implement components, they have no particular reason to bias their designs toward new development over the reuse of existing components. The focus of their role is to ensure that the process implementation meets the needs of the business events that must be accommodated. To do that effectively, they must be well versed in the technology they are using as well as in the business aspects of the process(es) being implemented. That combination is a specialty in its own right. Although process implementers will not be developing software in the classical sense of the term, they still perform a type of programming to produce a viable design for the specification and sequencing of components (workflow). That programming-like activity allows the individuals performing the design role to exercise a considerable amount of creativity while conforming to the reuse framework.
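What the process implementer's product might look like in a standard format can be sketched in code. The field names and the fluent `require()` style are assumptions for illustration, not the book's notation; the point is that the deliverable is sequencing logic plus constrained component requirements, with no component code at all.

```python
from dataclasses import dataclass, field

@dataclass
class ComponentSpec:
    """A constrained, uniform component specification (fields are illustrative)."""
    name: str
    inputs: list
    outputs: list
    description: str = ""

@dataclass
class ProcessImplementation:
    """Sequencing logic plus component requirements, in a common form."""
    process: str
    steps: list = field(default_factory=list)

    def require(self, spec):
        self.steps.append(spec)
        return self

# A hypothetical order-entry process expressed purely as specifications.
order_entry = (
    ProcessImplementation("order entry")
    .require(ComponentSpec("validate_customer", ["customer_id"], ["status"]))
    .require(ComponentSpec("check_credit", ["customer_id", "amount"], ["approved"]))
    .require(ComponentSpec("create_order", ["customer_id", "items"], ["order_id"]))
)

print([s.name for s in order_entry.steps])
# ['validate_customer', 'check_credit', 'create_order']
```

Because every specification has the same constrained shape, the component utilization specialists described later can match each `ComponentSpec` against the existing catalog mechanically.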
14.2.3.2 Component implementers

The role of the component implementers is to provide an extensive set of deployed components that have the same form and constraints as the specifications produced by the process implementers. That consistency is the only way to facilitate a significant amount of component reuse.

A major debate during the startup of this type of strategy centers on the source of the components to be implemented. One group argues that the source should be the specifications developed by the process implementers: over time, as process implementations are developed, an extensive set of implemented components becomes available. Another group argues that this is too slow and that an enterprise view of functionality needs should be developed to provide an initial set of deployed components. In the author's view, both sources should be considered, using the philosophy of the 80/20 rule. Because any business has a large number of easily definable functions it performs, regardless of the specific process(es) that use them, it should be possible to quickly define and implement a component set containing that functionality, as long as there is no attempt to obtain 100% coverage; 80% would be fine! Addressing the remaining 20% upfront would take too long and is not really necessary.

In deciding to undertake development based on an enterprise view, however, some cautions must be considered. The major problem with the enterprise view of component implementation is the need to develop the defined functionality in a form consistent with the component specifications developed by the process implementers. Unless enterprise-view component specifications use the same approach as that mandated of the process implementers, the resultant implementations not only will be useless but actually will be counterproductive to reuse, because the component forms and functionality constraints will not match.
The same problem occurs in trying to buy a COTS set of components that cover common business functions. Even if the messaging structure is the same as that specified by the enterprise and the functionality is correct, the form of the components undoubtedly will be wrong. By using the component framework defined in the process implementation methodology, the problem essentially can be eliminated.

Because the components are independent of any process implementation and their specifications result from activities not associated with the role of component implementer, the implementers are free to concentrate on producing quality components that meet all their specifications and are free from bugs. Another aspect that is enhanced by allowing only component developers to produce code is security. Although security usually is considered an infrastructure service, restricting access to basic machine functions to the component implementers can help prevent unauthorized entry through a process implementation. The component implementer role is the closest to traditional software development, although making it independent of process implementation imposes a somewhat different perspective on the activity.
14.2.3.3 Asset managers As an automation asset, a software component is subject to all the requirements of asset management described in Part I. Although asset management is important in any design situation, it becomes much more critical with a comprehensive reuse program because of the additional relationships that are formed. The appropriate organization and staff members must be provided and trained in methods that allow the maximum amount of reuse to occur.
14.2.3.4 Component utilization specialists

Mapping the component specifications produced by the process implementers to existing components to find an appropriate match is a specialty that must be provided by the enterprise. Individuals who perform that function must be independent of both the process implementers and the component implementers, because they may require changes to the products of both groups. Their function is to obtain the maximum amount of reuse from existing components. Although mechanized help in the form of searching and matching tools is necessary, human intelligence is critical to understanding the many ways in which the process implementation specifications can be satisfied by existing components. As an example, consider the ways the component needs for a given process implementation could be satisfied:

§ The component specification matches an existing component exactly.

§ The component specification matches an existing component exactly except for operational characteristics (e.g., response time, throughput).

§ Functionality matches exactly, but additional data are necessary.

§ Functionality matches exactly, but more data than needed are utilized.

§ The component specification requires only part of the functionality (or data) of an existing component.

§ The component specification can be satisfied through the use of multiple existing components.

§ The component specification can be satisfied by an existing component if some noncritical changes can be made to the specification.

§ The component specification requires a completely new component.

This type of analysis can be performed only by someone experienced in the mapping activity, even with extensive tool help. Detailed knowledge of process implementation and component development also is necessary for this role.
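Tool support for the mapping activity might classify a specification against the catalog along the lines of the list above. The sketch below is a deliberately simplified assumption (dict-based specs, set-valued inputs and outputs, only a few of the enumerated cases); as the text notes, the final judgment still requires an experienced human.

```python
def classify_match(spec, component):
    """Classify how an existing component satisfies a specification.
    Both arguments are dicts with 'function', 'inputs' (set), and
    'outputs' (set) keys, an illustrative schema only."""
    if spec["function"] != component["function"]:
        return "no match"
    if spec["inputs"] == component["inputs"] and spec["outputs"] == component["outputs"]:
        return "exact match"
    if spec["inputs"] < component["inputs"]:   # component requires extra data
        return "functional match, additional data necessary"
    if spec["inputs"] > component["inputs"]:   # component uses less data than offered
        return "functional match, surplus data unused"
    return "partial match, human review needed"

# A one-entry catalog and a specification to match against it.
catalog = [
    {"function": "credit_check",
     "inputs": {"customer_id", "amount"},
     "outputs": {"approved"}},
]
spec = {"function": "credit_check",
        "inputs": {"customer_id", "amount", "channel"},
        "outputs": {"approved"}}

print(classify_match(spec, catalog[0]))   # functional match, surplus data unused
```

The remaining cases in the list (operational mismatches, multi-component compositions, noncritical specification changes) are precisely the ones that resist mechanical scoring and justify the specialist role.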
14.2.3.5 Infrastructure developers Infrastructure development is a specialty independent of process implementation. It is closely associated with the development of system software such as operating systems and communications protocols. It follows design rules that generally are different from those used for process implementation. It is not the purpose of this presentation to define the infrastructure and its development function in detail or to discuss the characteristics of the individuals involved in that function. However, the role of infrastructure developer is included in the reuse discussion for completeness and to emphasize the critical role that a common messaging structure plays in the effectiveness of any reuse program. Using a common infrastructure for all process implementations greatly reduces duplicate development in addition to being a critical success factor for reuse, but it is not, by itself, an instance of reuse in the sense of this discussion.
One other aspect of the infrastructure is of importance to the reuse strategy. Both the process implementers and the component implementers must understand the design of the infrastructure and its basic operational characteristics, because both process implementation and component execution occur through invocation of infrastructure services. Without such an understanding, process implementation and component design may be compromised, and the final product will not perform as effectively as possible.
14.2.3.6 Integration specialists

Integration is probably the single most important role in this reuse strategy. The process implementation logic, the infrastructure, and the functional components, along with the support hardware and software needed by the computing platforms and network, must be combined to produce a product that accurately reflects the intent of the initial process definition. In addition, the operational characteristics of the product must be met, including error recovery, online help, and response time. The enterprise requirements for product quality also must be met. While the standards and structures defined previously greatly help in that activity, a significant amount of effort still must be expended to produce a successful product. In addition to those computation-related items, appropriate deployment plans, training materials, and help-desk functions also must be provided. While the integration specialists do not produce those items, they must ensure that they are available and in agreement with the computation elements.

The most difficult task of the integration specialists is to "program" the workflow engine(s) that instantiate the process implementation logic and to test the results. Depending on the complexity of the logic and the sophistication of the engine interface, that may involve significant effort. However, those engines are at the heart of the product implementation, so considerable care must be taken in this activity. Additional information concerning the use of the engines, their properties, and the information needed to successfully program them is discussed in Chapter 15.

Product testing is another area that deserves considerable attention. Although individual components will have undergone considerable testing to ensure that they operate in accordance with their design specifications, the entire product must be tested as a unit because there can be unexpected interactions between components.
In addition, load (volume) testing of the product should always be performed to ensure that the overall operation meets design guidelines.
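As a toy illustration of "programming" sequencing logic into an engine and then testing the assembled product as a unit, consider the sketch below. The engine, the step list, and the three components are all invented for this example; a real workflow engine (see Chapter 15) is far more capable, but the shape of the integration task is the same: the sequencing logic is data, and the components are opaque callables.

```python
class WorkflowEngine:
    """A toy engine: executes named steps in order, threading a shared
    work-item context through each component invocation."""
    def __init__(self, components):
        self.components = components      # name -> callable(context) -> dict

    def run(self, steps, context):
        for step in steps:
            context.update(self.components[step](context))
        return context

# Hypothetical components, each already unit-tested in isolation.
components = {
    "validate": lambda c: {"valid": bool(c.get("customer_id"))},
    "price":    lambda c: {"total": sum(c["items"].values())},
    "approve":  lambda c: {"approved": c["valid"] and c["total"] < 1000},
}

# "Programming" the engine is supplying the sequencing logic as data,
# then exercising the assembled product end to end.
engine = WorkflowEngine(components)
result = engine.run(["validate", "price", "approve"],
                    {"customer_id": 42, "items": {"widget": 300, "gizmo": 150}})
print(result["approved"])   # True
```

Note that the end-to-end run can fail even when every component passes its own tests (e.g., if "approve" runs before "price"), which is exactly why the text insists the product be tested as a unit.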
14.3 Implications

Unfortunately, instituting a reuse strategy and eventually reaping the benefits it brings does not come without a significant price. The willingness to pay that price is a major factor in the success of any such strategy, whether or not it matches the one presented here. The price includes a disruption in the development activities of the enterprise, a decrease in the morale of the staff, an increase in training costs, and a possible earnings dip. While those effects are temporary and will eventually disappear once the strategy becomes entrenched, they make for a difficult transition.

Successfully paying the price requires that all employees of the enterprise know and understand the needed changes and their purpose. It also requires a personal and real commitment from the top executives of the company that the changes are necessary and will be instituted. Unfortunately, for many enterprises, that is simply too high a price, and the required changes are not instituted. The result is that, while the right words may be said, the actions of management do not match them, and the reuse attempt fails. The next three subsections provide some motivation and expectations for the types of changes that must be made.
14.3.1 Management changes

The major management change is in how a product development is viewed. It no longer can be considered a closed entity with control over all aspects of the development procedures. A development must be viewed as a cooperative effort between different organizations, each of which has a specific purpose in advancing the development. Management must institute controls to monitor and correct any deficiencies in that interaction. The development process itself must be well defined and subject to the same measurements and change policy as any other business process.

This type of cooperative interaction between organizations must be based on trust and respect for the contributions of each role. Unfortunately, the current closed development view is based on distrust and suspicion. Turning that culture around while maintaining the required degree of productivity is a management challenge of the first order.

14.3.2 Accounting changes

The reuse paradigm requires a different view of the manner in which a product incurs development cost. That is important for two reasons: projecting the cost of a project based on initial requirements, and determining whether a project is within its budget as development progresses. Using project costs to determine the final price of a product may require more analysis than is developed in this discussion; cost-based pricing is not within its scope.

When a development is a closed entity, the cost, as well as its timing, is relatively easy to determine. The product cost consists of all the development costs plus some fixed percentage for overhead and is incurred as the project progresses. The amount can be tracked against the budgeted amount, and everyone is happy: the developers, the accountants, and management.
The developers are in control of all aspects of the development, the accountants have relatively simple internal and cost controls, and management has measurements they can use to manage.

With reuse, product costs become much harder to determine, because the cost of a component must be allocated among all the projects that use it. The usual method of addressing that problem is to project the number of times a reusable component will be reused and charge a project that fraction of the development costs for the component. There are three problems with that method under the strategy defined previously:

§ It assumes the exception role for reuse, which is not applicable in the new reuse strategy.

§ Forecasting the reuse of a given component is almost sheer guesswork.

§ If the projected reuse frequency for a component is not met, there is no project category in which to put the expense. In some sense, a designated reusable component that is not reused belongs in the same category as scrap in a manufacturing organization.

One possible method for addressing that problem (other solutions can be defined based on standard cost accounting methods) is to allocate costs based on the average amount of component reuse. In that approach, all the costs for component development are divided by the number of components available to arrive at an average cost per component. An average reuse factor is calculated by dividing the total number of component usages across all process implementations by the number of components available. The amount allocated to a given project for the use of any component is the average component cost divided by the average reuse factor. A factor less than 1, which could occur during the startup phase of the strategy, is not allowed and is rounded up to 1. As the number of projects using the strategy increases, the average values should arrive at a relatively stable steady-state level.
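The allocation arithmetic can be checked with a short calculation. The figures are invented for illustration: 50 available components that cost 500,000 in total to develop and that appear 200 times across all process implementations, giving an average reuse factor of 4.

```python
def reuse_charge(total_component_cost, components_available, total_component_usages):
    """Charge per component use = average component cost / average reuse factor,
    with the reuse factor floored at 1 (as during the startup phase)."""
    avg_cost = total_component_cost / components_available
    avg_reuse = max(1.0, total_component_usages / components_available)
    return avg_cost / avg_reuse

# Steady state: 10,000 average cost / 4.0 average reuse = 2,500 per use.
print(reuse_charge(500_000, 50, 200))   # 2500.0

# Startup: only 30 usages for 50 components; the factor is floored at 1.
print(reuse_charge(500_000, 50, 30))    # 10000.0
```

No per-component reuse forecast appears anywhere in the calculation, which is the point: the crystal ball really can be thrown away.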
In addition to their use in cost allocation, the average reuse factor and the average component cost can serve as very good metrics for determining the success of the reuse strategy. They are of greater utility than the current metrics, which relate to the number of components in the reuse catalog and the number used in a given project; those metrics can provide misleading indications.

The other major accounting problem is the timing of the costs. The reuse strategy requires that a significant amount of development be funded prior to the inception of a specific project. One solution is to accumulate those costs and allocate them as projects begin development. This type of accounting is similar to that required for overhead costs and is not a particularly difficult problem in an accounting sense. It does require some adjustment from a management perspective, however, because there will be (eventual) project-related costs that cannot be immediately and directly related to projects and customers.

14.3.3 Personnel changes

Because of the separation of roles in the defined strategy, the experience, education, and other characteristics of the individuals filling those roles must be reconsidered. Because most of those roles do not involve the development of software, using software development expertise as the yardstick for staffing the development function may not be appropriate. As an example, a background in systems engineering would probably be best for an individual filling the integration specialist role. Although personnel issues are far beyond the intent of the reuse discussion, the topic is mentioned to indicate the extent of the changes required by the adoption of a comprehensive reuse strategy.
14.4 Summary

Significant software reuse using the concepts of software components, C/S structures, and an assembly methodology works quite well. There are a number of problems to overcome and an initial price to be paid, but the results will be well worth the effort. It should be noted that the discussion in this chapter is valid whether or not an Internet-based approach is utilized. The Internet does provide a great amount of infrastructure, and the form of the components may change, but the need for analysis and development of software components from a business perspective does not.

Several vendors have announced component architectures that purport to accomplish the goals established in this chapter. In some cases, those architectures are specific implementations of some of the logical models developed here. However, in all cases, they only partially consider the requirements for a substantial reuse program as described in this chapter. Although they certainly can facilitate reuse and more rapid software development, they will fail to accomplish those results if the other aspects, such as the financial and cultural needs, are not also addressed. The lack of a comprehensive approach to those needs is what has caused most of the failures in the past.

Selected bibliography

Baer, T., “The Culture of Components,” Application Development Trends, Sept. 1998, pp. 28–32.
Basili, V. R., et al., “A Reference Architecture for the Component Factory,” ACM Trans. Software Engineering and Methodology, Vol. 1, No. 1, 1992, pp. 53–80.

Garlan, D., R. Allen, and J. Ockerbloom, “Architectural Mismatch or Why It’s Hard to Build Systems out of Existing Parts,” Proc. 17th Internatl. Conf. Software Engineering, Seattle, Apr. 23–30, 1995, pp. 179–185.
Glass, R. L., “Reuse: What’s Wrong With This Picture,” IEEE Software, Vol. 15, No. 2, 1998, pp. 57–59.

Gustavsson, A., “Software Component Management and Reuse Component Repositories,” Proc. 4th Internatl. Workshop Software Configuration Management, Baltimore, May 1993, pp. 123–126.

Henninger, S., K. Lappala, and A. Raghavendran, “An Organizational Learning Approach to Domain Analysis,” Proc. 17th Internatl. Conf. Software Engineering, Seattle, Apr. 23–30, 1995, pp. 95–104.

Jacobson, I., M. Griss, and P. Jonsson, Software Reuse: Architecture, Process and Organization for Business Success, Reading, MA: Addison-Wesley, 1997.

Kotula, J., “Using Patterns to Create Component Documentation,” IEEE Software, Vol. 15, No. 2, 1998, pp. 84–92.

McClure, C., Software Reuse Techniques: Adding Reuse to the System Development Process, Englewood Cliffs, NJ: Prentice-Hall, 1997.

Rada, R., and J. Moore, “Sharing Standards: Standardizing Reuse,” Communications of the ACM, Vol. 40, No. 3, 1997, pp. 19–23.

Radding, D., “Benefits of Reuse,” Information Week, Mar. 31, 1997, pp. 1A–6A.

Rosenbaum, S., and B. du Castel, “Managing Software Reuse—An Experience Report,” Proc. 17th Internatl. Conf. Software Engineering, Seattle, Apr. 23–30, 1995, pp. 105–111.

Sen, A., “The Role of Opportunism in the Software Design Reuse Process,” IEEE Trans. Software Engineering, Vol. 23, No. 7, 1997, pp. 418–436.

Voas, J. M., “Certifying Off-the-Shelf Software Components,” Computer, Vol. 31, No. 6, 1998, pp. 53–59.
Chapter 15: Workflow modeling

Workflows are the last automation assets that need to be examined as background for the specification of a process implementation methodology. Essentially, a workflow is another representation of a business process. The workflow representation, or model, differs from the business process models discussed in Chapter 8 because workflows must incorporate the technical and workforce information needed for implementation and deployment. To distinguish the two representations, the original representation is referred to here as the business process, while the workflow representation is called a workflow.
15.1 Evolution Although workflow-like techniques have been in use for many years, the designation of workflow as a distinct technology is relatively recent. As such, it is useful to (1) investigate how and why the technology started; (2) develop an understanding of the current state of the art (including the availability of products that incorporate the technology); and (3) determine the specification of any standards that have general industry acceptance.
15.1.1 Genesis

Workflow technology had its start in the image processing and document management technologies. Many business procedures involve interaction with paper-based information, which can be captured as image data and used as part of an automation process. Once paper-based information has been captured electronically as image data, it often must be passed among a number of different participants, thereby creating a requirement for workflow functionality.

The emphasis on business process reengineering (BPR) in the early 1990s created a need for general workflow techniques and contributed to their development as a separate technology. Because the initial emphasis of BPR was on process definition rather than implementation, the pace of development for workflow technology has been relatively slow. That situation is beginning to change as the emphasis shifts from process development to process implementation.

Defining a process is not very useful unless some means exists to monitor and track the operation of the process and determine how well it is working. That is true for mostly manual processes as well as highly automated ones. Although manual monitoring and tracking can be utilized, they are not very efficient, and the tendency is always to eliminate that function when time gets short. Utilizing the automated means for monitoring and tracking that workflow provides can greatly assist in evolving to an efficient and effective process.

15.1.2 Standards

The Workflow Management Coalition (WfMC) was formed in 1993 by a number of vendors who produced products addressing various aspects of workflow technology. The purpose was to promote interoperability among heterogeneous workflow management systems. A workflow management system consists of workflow products configured in a manner that enables implementation of one or more specified business processes.
Currently, the WfMC remains the only standards body involved with workflow products and services. It produces standards that address the following areas:

§ Application program interfaces (APIs) for consistent access to workflow management system services and functions;
§ Specifications for formats and protocols between workflow management systems themselves and between workflow management systems and applications;
§ Workflow definition interchange specifications to allow the interchange of workflow specifications among multiple workflow management systems.

The standards activities are still at an early stage, with only a small portion of the needed standards addressed in a manner that can provide needed guidance to product vendors and workflow implementers. Nevertheless, the existence of a standards body is an important indication of the viability and strength of the technology. Because the WfMC standards generally are concerned with the partitioning of workflow functionality and the interfaces between different functions, further discussion of that aspect of the standards is deferred until Section 15.5, where configuration models are discussed.
15.2 Model views To facilitate the discussion and present the diversity of information required to understand the use of workflow technology in the implementation of business processes, this chapter utilizes several different models. Each model is oriented toward a specific aspect of workflow design and operation. The following basic models are considered: § The dynamic model illustrates the basic operation of a workflow in performing a business process.
§ The design model represents a business process suitable for implementation and deployment. The parts of the design model are the workflow map, data access, and control. § The configuration model defines the interaction of the different components of a workflow management system. Each component eventually is realized by one or more products. The parts of the configuration model are the reference model, the logical model, and the physical model. Although the three models are discussed separately to avoid unnecessary complexity, they are closely related and the definitions of all the models must be compatible for the resultant implementation to function properly. The interrelationships are not discussed explicitly because of the complexity involved. However, it should be evident from the discussion how the models interact. As will be seen, some of the models use concepts and models from previous chapters as an integral part of their definition. That also illustrates the close relationships among all the models that have been presented.
15.3 Dynamic model The basic dynamics of a workflow implementation are shown in Figure 15.1. The four major components of this model are:
Figure 15.1: Workflow dynamics.

§ A workflow instance that contains the data relevant to a given invocation of the workflow;
§ Tasks, each of which performs some specific aspect of process functionality;
§ Business rules that determine when and where the next task is performed;
§ A monitor that determines if the workflow is progressing according to the specified parameters.

When a business event occurs, a workflow instance is created. The purpose of the instance is to contain the status and instance-specific data associated with the handling of the business event. Initially, the workflow instance contains very little information; as the solution to the business event progresses, additional information is added. A workflow instance exists until the final response to the defining business event is provided. At that time, the characterization of the workflow is complete and can be used for statistical purposes in the management of the process.

The workflow instance is known by a number of different names, including folder, container, courier, and token. In addition, the implementations may vary considerably, ranging from a single structure to a complex set of structures. To avoid confusion, the generic term workflow instance is used throughout this discussion. When reviewing the characteristics of different products, it is necessary to understand that a diversity of naming conventions and implementation methods exists.
The information in the workflow instance is examined by the business rules defined for the workflow. Depending on the instructions contained in the rules and the data elements and values contained in the workflow instance, a specific task is scheduled and routed to a role performer assigned to that task. After the task is performed, the workflow instance information is updated, and the procedure continues until the business event is satisfied. When a workflow instance requests a specific task to perform a function, an instance of that task is formed and associated with the workflow instance. In that way, all the work necessary to respond to a given business event can be connected with that particular business event, even if the same task is needed for many different workflows or multiple instances of the same workflow.

The operation of the workflow is continuously monitored to ensure that it is performing satisfactorily. If a problem is encountered, the instance data are updated appropriately. For example, if the monitor discovers that an assigned task has not been completed within the time period indicated by the business rules, the instance information is updated to reflect that fact. When the business rules next examine the instance information, they may cause a task to be executed that notifies a supervisor of the problem. If allowed by the business rules, it is possible to have multiple tasks executing simultaneously in the same workflow.

As part of its function, the monitor also collects statistics on the defined metrics of all the instances of the given workflow process, including the associated task instances. Those statistics are used to determine how effectively the process implementation (workflow) is functioning and what, if any, changes should be considered. The statistical function operates across all instances of the workflow, in contrast with the monitoring function, which operates within a single instance.
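The cycle just described — create an instance for a business event, let the rules select the next task from the instance data, and let each task update the instance until the event is satisfied — can be sketched in miniature. The state names, rules, and tasks below are entirely hypothetical; a real workflow engine would add routing, monitoring, and persistence.

```python
# Minimal sketch of the dynamic model: a workflow instance carries
# instance data, business rules select the next task from that data,
# and each task updates the instance until the business event is done.
# All names (states, tasks) are hypothetical.

def receive_order(instance):
    instance["state"] = "received"

def fulfill_order(instance):
    instance["state"] = "done"

# Business rules: map the current instance state to the next task.
# None means the business event has been satisfied.
RULES = {
    "new": receive_order,
    "received": fulfill_order,
    "done": None,
}

def run_workflow(event_data):
    # A business event occurs: create the workflow instance.
    instance = {"state": "new", **event_data}
    # Rules examine the instance and schedule tasks until satisfied.
    while (task := RULES[instance["state"]]) is not None:
        task(instance)  # the task updates the instance data
    return instance

print(run_workflow({"order_id": 42}))
```

The dictionary stands in for the workflow instance (folder, container, token); the `RULES` table plays the role of the business rules that drive task selection.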
The dynamics of the workflow are incorporated in a workflow engine that is part of the overall workflow management system. The engine provides the means for creating the workflow instance, interpreting the business rules, executing the tasks, and monitoring the overall operation of the workflow. Because of the differences in commercial products, the specifics of individual workflow engines are not discussed here. Instead, the discussion of workflow engines emphasizes the modeling efforts needed to utilize any workflow engine.
15.4 Design model

The workflow design model is an operational representation of the business process being implemented. It consists of three parts. The workflow map shows the relative sequences of the tasks that will be utilized by the workflow. Other information is considered part of the map and must be keyed to the diagram, including:

§ The rules that determine which tasks will be selected for execution;
§ The transactions called by each task and the location of the functionality needed by each transaction;
§ The workforce units that will perform the roles used to perform the tasks. The characteristics of the workforce units are not considered part of the map information.

The data access model part of the design model indicates what data are contained in the workflow instance and what data are contained in databases external to the workflow system. Both types of data are accessible to the tasks. Finally, the control model determines how the workflow progresses.

15.4.1 Workflow map

As a part of the workflow process model, a workflow map is produced. An example of such a map is shown in Figure 15.2. Although the business process map and the resultant workflow map may look similar, there are important differences. The workflow map is the result of the design phase and is not an exact duplicate of the business process map. To emphasize that point, different terminology is employed: instead of a sequence of process steps, a sequence of tasks is utilized.
Figure 15.2: Workflow map structure.

As defined in Chapter 13, a task usually consists of one or more dialogs assigned to the same role. A task can be implemented via custom development, a purchased product, a legacy system, or a combination of those methods. Instead of the information flows used in the business process representation, the need to interact with databases and processing functionality is indicated by the use of transactions. The location of the functionality accessed by each transaction must also be indicated; without that information, the workflow cannot be implemented. Transactions result from the specification of the actions for each of the dialogs in a task.

15.4.2 Data access model

The data access model defines how data are stored and accessed. It is illustrated in Figure 15.3, which shows how tasks can utilize data from either the workflow instance or databases external to the workflow management system. As will be discussed, not all the data in a workflow instance can be accessed by the tasks. Some data are available only to the workflow control mechanism. In general, that is not a problem, because the tasks performing process functionality do not usually require data of that type.
Figure 15.3: Data access structure. As shown in Figure 15.3, a task can obtain data from either the workflow instance or an external database. Data also can be transferred between tasks using either construct. In general, it is better to keep the information in a workflow instance relatively small. That avoids the need for routing the same information to a number of potential role performers. Depending on the number of potential role performers, a large amount of data in a workflow instance can require considerable network and processing resources.
The use of pointers in the workflow instance to locate required data in an external database usually is the best method of transferring large amounts of data between tasks. The first task stores the data on a database outside the workflow management system but stores key pointers to those data in the workflow instance. The second task uses the key pointers passed to it in the workflow instance to retrieve the application data it needs from the external application database. As will be shown, any data needed by the workflow management system business rules to make decisions must be placed in the workflow instance. If desired, task functionality can be defined that transfers data between external databases and the workflow instance. To help in the determination as to where to place needed data, the classification of workflow instance data is presented along with a brief explanation of the data involved.
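The key-pointer pattern described above can be sketched as follows. The dictionary standing in for the external database and all the names are hypothetical; the point is only that the bulky data stay outside the workflow management system while a small key travels in the instance.

```python
# Sketch of the key-pointer pattern: large application data live in an
# external database, and the workflow instance carries only the keys
# needed to retrieve them. All names are hypothetical.

external_db = {}  # stands in for a database outside the workflow system

def first_task(instance, large_payload):
    """Store bulky data externally; keep only a key pointer in the
    workflow instance, which is routed to the next role performer."""
    key = f"order-{instance['id']}"
    external_db[key] = large_payload
    instance["payload_key"] = key  # small pointer travels with the instance

def second_task(instance):
    """Retrieve the application data using the key pointer passed in
    the workflow instance."""
    return external_db[instance["payload_key"]]

instance = {"id": 7}
first_task(instance, {"line_items": list(range(10_000))})
data = second_task(instance)
print(len(data["line_items"]))  # → 10000
```

The workflow instance routed between performers contains only `id` and `payload_key`, not the ten thousand line items, which is exactly the network and processing saving the text describes.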
15.4.2.1 Internal control data The internal control data identify the state of individual workflow processes or task instances and may support other internal status information. The data may not be accessible to the tasks, but some of the information content may be provided in response to specific commands (e.g., query process status, give performance metrics). Multiple workflow engines used to implement a given process also can exchange this type of information between them.
15.4.2.2 Workflow-relevant data Workflow-relevant data are used by a workflow management system to determine particular transition conditions and may affect the choice of the next task to be executed. Such data potentially are accessible to workflow tasks for operations on the data and thus may need to be transferred between tasks. Multiple workflow engines used to implement a given process may also exchange this type of information between them. The transfer may (potentially) require name mapping or data conversion.
15.4.2.3 Workflow application data

Workflow application data are not used by the workflow management system and are relevant only to the tasks executed during the workflow. They may convey data utilized in task processing, instructions to the task as to how to proceed, or instance priority. As with workflow-relevant data, they may need to be transferred (or transformed) between workflow engines in a multiple-engine environment.

15.4.3 Control model

The workflow control model consists of the business rules responsible for determining the sequencing, scheduling, routing, queuing, and monitoring of the tasks in the workflow process. Each area is briefly described, and examples of rules for the continuing example are provided at the end of this section.
15.4.3.1 Sequencing Sequencing is the determination of the next task or tasks that must be performed to respond to the business event. Multiple tasks can be scheduled concurrently if determined by the appropriate rules. Although much of the sequencing information is shown on the workflow map, additional information may be needed to determine when the conditions for task execution have been met. For example, assume two tasks are executing in parallel and the map shows that both tasks require a common task as the next in sequence. The rules must determine if execution of the common task must wait until both parallel tasks are completed or if it can be started when one or the other predecessor task completes. Such rules can get somewhat complex for large maps.
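The join condition in the example above — must the common successor wait for both parallel predecessors, or may it start when either completes — can be expressed as a small rule. Task names here are hypothetical.

```python
# Sketch of a join rule for the parallel-task situation described
# above: the common successor starts only when *all* predecessor tasks
# have completed (an AND-join); require_all=False yields the variant
# that starts on the first completion. Task names are hypothetical.

def ready_to_start(successor, predecessors, completed, require_all=True):
    """Decide whether 'successor' may be scheduled, given the set of
    completed tasks and its list of predecessor tasks."""
    done = [p in completed for p in predecessors]
    return all(done) if require_all else any(done)

completed = {"credit_check"}
print(ready_to_start("ship_order", ["credit_check", "pick_stock"], completed))
# → False: pick_stock has not finished yet
completed.add("pick_stock")
print(ready_to_start("ship_order", ["credit_check", "pick_stock"], completed))
# → True: both predecessors are complete
```

For a large map, a rule like this would be evaluated for every candidate task each time the instance data change, which is why the text warns that sequencing rules can become complex.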
One of the more interesting differences between the business process map and the associated workflow process map is the amount of parallelism that can be obtained. The business process map is very sequential, while the workflow map can have many tasks that are able to execute in parallel. Whether it is desirable to take advantage of that situation is up to the workflow designer and depends on a number of factors, including the size and sophistication of the workforce.
15.4.3.2 Scheduling

Scheduling is the determination of when the next task in the sequence should be executed. That can vary from immediately to weeks or even months in the future. The schedule can be static or dynamically determined from the workflow instance data or an external event calendar. Although subsequent tasks can usually be executed immediately, there are a number of situations in which execution must wait until a predetermined time. Such situations include the availability of equipment or other resources, a time negotiated between the service provider and the customer, and the desirability of performing certain tasks at favorable times, such as switching electric lines when electric usage is light (e.g., at 2 A.M.).
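A scheduling rule of this kind might be sketched as follows. The task names, the calendar entry, and the instance field are all hypothetical; the point is that the execution time can come from a rule, from the instance data, or default to "now".

```python
# Sketch of the scheduling decision described above: a task may run
# immediately, at a time taken from the instance data, or at a time
# drawn from an external event calendar. All names are hypothetical.

from datetime import datetime

# External event calendar: a low-usage window for line switching.
event_calendar = {"line_switch_window": datetime(2025, 1, 2, 2, 0)}  # 2 A.M.

def schedule_task(task, instance, now):
    """Return the time at which 'task' should be executed."""
    if task == "switch_electric_lines":
        return event_calendar["line_switch_window"]  # wait for the window
    if "negotiated_time" in instance:
        return instance["negotiated_time"]           # agreed with customer
    return now                                       # default: immediately

now = datetime(2025, 1, 1, 9, 0)
print(schedule_task("switch_electric_lines", {}, now))
```
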
15.4.3.3 Routing

Once a task has been selected and scheduled, the next need is to determine the workstations or the task performers that are to be used to process the task for a given business event (workflow instance). There are many methods that can be used to determine the routing, and the routing business rules specify which ones are to be used in a specific situation. When a task is routed to more than one individual, the first individual to select it is assigned the task, and it becomes unavailable to the others in the identified group. The most popular routing methods are as follows:

§ Route to the same individual who performed the previous task in the given workflow instance;
§ Route to a specific named individual;
§ Route to a list of named individuals;
§ Route to any individual in a given geographical location;
§ Route to any individual who can perform the specified role;
§ Route to an individual who can perform the specified role and who has the shortest queue or who is next in line for a new task (round-robin);
§ Any combination of those.

Although most products allow an individual who is assigned a task to reroute it to another permissible performer, the workflow designer may restrict that ability to prevent misuse.
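The round-robin method in the list above can be sketched in a few lines. The role and performer names are hypothetical; a real workflow management system would also consult queue lengths, calendars, and workforce data.

```python
# Sketch of round-robin routing: assign the task to the performer next
# in line for the role, then move that performer to the back of the
# line. Role and performer names are hypothetical.

from collections import deque

role_performers = {
    "help_desk": deque(["alice", "bob", "carol"]),
}

def route_round_robin(role):
    """Return the performer to whom the next task for 'role' is routed."""
    queue = role_performers[role]
    performer = queue.popleft()   # next in line
    queue.append(performer)       # goes to the back for future tasks
    return performer

print(route_round_robin("help_desk"))  # → alice
print(route_round_robin("help_desk"))  # → bob
```

The shortest-queue variant would instead pick `min(...)` over the performers' current queue lengths; combinations of the listed methods simply compose such selection functions.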
15.4.3.4 Queuing

The queue of tasks available to a given individual can have several characteristics, depending on the capabilities of the products utilized and the rules established by the workflow designer. A queue may be defined to be first in/first out (FIFO) or last in/first out (LIFO), with tasks arranged by priority within those orderings. In this type of queue, the task performer is not able to select the next task; the task is presented automatically, according to the defined rules, when the performer becomes available. This type of queue is known as a push queue, because it pushes work to the performer.

Alternatively, the queue might be viewable by a prospective performer, who could select any task deemed appropriate. For knowledge workers, this is probably the most used method. This type of queue is a pull queue, because the performer pulls work from the queue as needed.

Either form of queue could be used with an automated task performer, although the push queue probably is utilized most often, because it eliminates the need for the task logic to determine which task is to be selected.
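The push/pull distinction can be made concrete with two small classes. These are illustrative sketches only (FIFO push, free-choice pull); real products layer priorities, LIFO variants, and viewing rules on top.

```python
# Sketch of the push/pull queue distinction described above. A push
# queue hands the performer the next task by rule (FIFO here); a pull
# queue lets the performer view and choose. Task names are hypothetical.

from collections import deque

class PushQueue:
    """Work is presented in FIFO order; the performer cannot choose."""
    def __init__(self):
        self.tasks = deque()
    def add(self, task):
        self.tasks.append(task)
    def next_task(self):
        # The rules, not the performer, decide what comes next.
        return self.tasks.popleft()

class PullQueue:
    """The performer views the queue and selects any task deemed
    appropriate -- typical for knowledge workers."""
    def __init__(self):
        self.tasks = []
    def add(self, task):
        self.tasks.append(task)
    def view(self):
        return list(self.tasks)
    def select(self, task):
        self.tasks.remove(task)
        return task

push = PushQueue()
push.add("t1"); push.add("t2")
print(push.next_task())  # → t1 (FIFO: no choice)

pull = PullQueue()
pull.add("t1"); pull.add("t2")
print(pull.select("t2"))  # → t2 (performer's choice)
```
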
15.4.3.5 Calendaring Calendaring is the definition of events based on calendar units (days, weeks, months, years). Examples of calendar events are employee work schedules, server availability, due dates, deadlines, holidays, and special sale periods. Sequencing, scheduling, and routing operations are able to use calendar events in making the necessary decisions. A monitor function, discussed in Section 15.4.3.6, also can utilize those events to determine if problems exist with the operation of the workflow instance.
15.4.3.6 Monitoring

Using the specified rules, the monitoring function examines the values of the metrics defined for the workflow. Depending on the conditions established in the rules, tasks that indicate that some condition has occurred may be initiated, or a database may be updated with appropriate information that can be utilized later for reports or queries.

Metrics Some common metrics are as follows:

§ Total time expended for the instance to present;
§ Elapsed real time total for the instance;
§ Elapsed real time in each task in the instance;
§ Elapsed real time in each queue;
§ Elapsed active time in each task;
§ Elapsed active time in each queue;
§ Task throughput per unit time;
§ Resource utilization per unit time;
§ Queue length for each work station;
§ Number of each priority item in each queue.

Active time is time that counts against the workflow instance. For example, some workflows do not count traveling time against the workflow instance. This type of metric is important when contractual obligations (service-level agreements) differentiate between real and assessed time.

Alerts and alarms Based on the values of the metrics and the thresholds defined for them, the monitoring function determines whether an alert task (no action required), an alarm task (action required), or a recovery task should be scheduled and routed to locations specified by the routing rules.

Statistical information Statistics are developed for each metric, including those for each instance, all workflows of a given type, all tasks, and all task performers.

Queries and reports Queries and reports request and receive information related to any of the measured values given previously for an individual workflow instance, including:

§ Total time expended to present;
§ Time until defined calendar or schedule event;
§ Total cost expended to present;
§ Any of the available statistics.

The time period over which the information is needed must be specified.
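The alert/alarm mechanism can be sketched as a threshold check over the instance metrics. The metric names and threshold values are hypothetical; a real monitor would also schedule and route the resulting tasks per the routing rules.

```python
# Sketch of the alert/alarm mechanism: each metric is compared against
# its defined thresholds, and a crossing schedules an alert task (no
# action required) or an alarm task (action required). Metric names
# and threshold values are hypothetical.

def check_thresholds(metrics, thresholds):
    """Return the list of alert/alarm tasks to schedule for an instance.
    'thresholds' maps metric name -> (alert_at, alarm_at)."""
    tasks = []
    for name, value in metrics.items():
        alert_at, alarm_at = thresholds.get(name, (None, None))
        if alarm_at is not None and value >= alarm_at:
            tasks.append(("alarm_task", name))   # action required
        elif alert_at is not None and value >= alert_at:
            tasks.append(("alert_task", name))   # no action required
    return tasks

metrics = {"elapsed_hours_in_queue": 30, "queue_length": 4}
thresholds = {"elapsed_hours_in_queue": (24, 48), "queue_length": (10, 20)}
print(check_thresholds(metrics, thresholds))
# → [('alert_task', 'elapsed_hours_in_queue')]
```

An alert here might route a notification task to a supervisor, as in the dynamic-model discussion earlier in the chapter; crossing the higher threshold would instead produce an alarm task.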
15.5 Configuration model The configuration of a workflow management system refers to the way in which the components of the system are defined and interconnected. The three parts of this model are: § The reference model, which defines the system components and their interfaces; § The logical model, which indicates how the components will be utilized to accommodate a given workflow specification; § The physical model, which indicates how the components given by the logical model will be implemented using selected products in the environment in which they must function.
The structure of each model is defined in later sections. Additional information as to how the models are developed in response to a given workflow process specification is provided in Chapter 24, which deals with the workflow aspects of the process implementation methodology. 15.5.1 Reference model As a part of its standards activities, the WfMC has developed a workflow reference model that defines the elements of a workflow management system and the interfaces between them. Figure 15.4 is a pictorial description of the WfMC workflow reference model, which consists of five components: § Process definition; § Workflow enactment service; § Administration and monitoring; § Workflow client applications; § Invoked applications.
Figure 15.4: Workflow reference model. (Source: WfMC.)
15.5.1.1 Process definition The process definition component of the workflow reference model allows workflow definition information (as defined by the process models) to be entered into the workflow system and some amount of testing of the resultant workflow definition to be performed. Simulation and other test procedures also are defined as a part of this component. The process definition tool translates the process model information into a standard format for input to the workflow enactment service.
15.5.1.2 Workflow enactment service The workflow enactment service provides the run-time environment in which one or more workflow processes are executed. It consists of one or more workflow engines that cooperate in the implementation of a given process. The model assumes that the engines of an enactment service are all from the same vendor and are allowed to communicate using proprietary protocols. If engines (and hence enactment services) from multiple vendors are used to implement a process, the reference model requires them to communicate via a standard protocol. The characteristics of a workflow engine are described in Section 15.5.3. The enactment service is distinct from the application and end-user tools used to process items of work. As such, it must provide interfaces to those components to provide a complete workflow management system. As can be seen from Figure 15.4, the workflow enactment service is the main component of the model. All the defined interfaces are between it and the other components. There are no direct interfaces between components of the model other than the ones shown.
15.5.1.3 Administration and monitoring The administration and monitoring component contains the logic for administering and monitoring the status and operation of the workflow. Queries and online status information (dashboard display) would be a part of this component. Process and workforce changes would not be functions of this component but would be contained in the process modeling component.
15.5.1.4 Workflow client application The workflow client application is the component that presents end users with their assigned tasks according to the business rules (push, pull, and other custom rules). It may automatically invoke functions that present the tasks to the user along with related data. It allows the user to take appropriate actions before passing the case back to the workflow enactment service and indicating its current state (e.g., completed, error condition, reassigned to another performer). The workflow client application may be supplied as part of a workflow management system, or it may be a custom product written specially for a given application.
15.5.1.5 Invoked applications

There is a requirement for workflow systems to deal with a range of invoked applications. It may be necessary, for example, to invoke an e-mail or other communications service, image and document management services, a scheduler, a calendar, or process-specific legacy applications. Those applications would be executed directly by the workflow engine without having to utilize a workflow client.

15.5.2 Logical model

There are many ways that a business process can be implemented and deployed using workflow techniques and products. The purpose of the logical model is to depict the specific components that are used to implement the process of interest. The issues and procedures involved in producing a design for a specific logical model are not considered in this section, only the need for the model and its structure. To develop a suitable workflow design for a given business process, it is necessary to understand and incorporate the characteristics of the process as well as the capabilities and characteristics of the workflow management system products being utilized. If, for example, there is a need to utilize multiple workflow engines, any product considered for the function must be able to provide that ability.

Figure 15.5 illustrates the example of a help-desk process. One enactment service consists of four cooperating workflow engines (all assumed to be from the same vendor, so multiple enactment services are not needed). The engines are used to support the user roles as follows:

§ One engine is assigned to the east help desk;
§ One engine is assigned to the west help desk;
§ One engine is assigned to the clerks, regardless of location;
§ One engine is assigned to all the other users (service technicians, marketing representatives, and experts).
Figure 15.5: Logical configuration model.

Each help desk engine needs to communicate with the miscellaneous engine and the clerk engine, but the help desk engines do not have to communicate with each other; once assigned, users cannot migrate between engines. The scheduler needs to be accessed by three of the four engines; the clerk process fragment does not require any scheduling. The clients of the same three engines need to access the test and diagnostic legacy applications. If those applications are accessed by a task defined outside the client, they also should be shown as part of the logical diagram, because they would have to be made available to the workstations running the clients and associated tasks. Supervisors and managers are considered part of the group to which they are assigned.

15.5.3 Physical model

Once the logical architecture has been determined, the physical architecture can be designed. The development of the physical architecture is similar for most types of system development; although some aspects are unique to workflow, the procedure and information contained do not differ materially. Essentially, the physical architecture starts with the logical architecture components along with the automated activities, human interface modules, custom worklist management modules, and any external interfaces to existing components (applications) needed. External interfaces could include those utilized for calendaring, task scheduling, and workforce management. Also included in the physical architecture specification is the network topology that supports the distributed environment, including protocols, server configuration and characteristics, and data location and conversion. Specific products also are assigned at this step but are omitted in this discussion to avoid being tied to any vendor's products. Figure 15.6 illustrates a possible physical configuration for the logical model developed in Section 15.5.2.
The physical configuration should be detailed enough to serve as an implementation and deployment guide.
Figure 15.6: Physical configuration model. Because the details of the physical configuration depend on the infrastructure components and configuration used by a given enterprise, it is not possible to consider the physical configuration in any more detail. The most important aspect of the physical configuration is that it is developed from the characteristics of the logical configuration model.
15.6 Summary Workflows are the natural method of implementation for business processes. They maintain the same sense of process as that originally developed from a business perspective while allowing the use of automation assets concepts such as reusable software components. Workflows are also a natural mechanism to coordinate the manual and automated activities needed to perform the identified functions of the original process. However, business processes cannot be converted directly to workflows. A considerable amount of design is necessary to produce the different models described in this chapter. That aspect requires a robust implementation methodology.
Notes on Part II All the required automation asset models have now been specified in a form suitable for use by the process implementation methodology. The human interface is not included as an asset model because it is more of a design technique than a model and is better addressed as an integral part of the methodology. Although some of the relationships between models have been discussed as part of the structure definitions, most of the relationships are driven by the needs of the methodology and will be examined as they occur. In some cases, not all of an asset model is directly incorporated into the methodology. That occurs because (1) to form a coherent asset model, the defining structure needs to be broader than is strictly needed by the methodology (e.g., roles, scenarios), and (2) some of the assets have a significant implication for the enterprise beyond that of the methodology (e.g., C/S structure). The reader should not be surprised or confused by this condition. Other automation assets used by the enterprise are only indirectly utilized by the methodology. Those assets are concerned mainly with the infrastructure and include such areas as security, communications, and network operations. To keep this book
focused on the topic of process implementation, it was not possible to address those assets in detail. However, the development of a robust infrastructure is critical to the automation environment, and those assets need the same degree of concentration given to the assets needed by the process implementation methodology.

Although not always explicitly specified, all the automation assets used in the methodology must be considered to be "real" enterprise assets and managed using the functions described in Part II. Without that degree of attention, the availability of the assets is unreliable and the ability of the automation methodology to produce effective process implementations is compromised.

The asset models developed in Chapters 8 through 15 have been demonstrated to work effectively with the methodology. However, it certainly is possible to define alternative models to address unique or unusual circumstances. Space constraints do not permit that possibility to be explored here, but enough information has been presented concerning the motivation and reasons behind the model structure to facilitate that type of activity.

Finally, the author again would like to state his conviction that the automation environment is one of the most critical elements in the ability of the enterprise to compete in the future. Not only must it improve the efficiency of enterprise operation, it must provide the ability to obtain competitive advantage. Without a clear understanding of the assets utilized in this environment and a means to ensure that they are used to the best advantage of the enterprise, the automation environment will not be able to provide the required support.

Selected bibliography

Cichocki, A., et al., Workflow and Process Automation: Concepts and Technology, Boston: Kluwer Academic Publishers, 1998.
Engelhardt, A., and C. Wargitsch, "Scaling Workflow Applications With Component and Internet Technology: Organizational and Architectural Concepts," Proc. 31st Hawaii Internatl. Conf. System Sciences, Wailea, HI, Jan. 6–9, 1998, pp. 374–383.

Jablonski, S., and C. Bussler, Workflow Management: Modeling Concepts, Architecture and Implementation, Boston: International Thomson Computer Press, 1996.

Jackson, M., and G. Twaddle, Business Process Implementation: Building Workflow Systems, Reading, MA: Addison-Wesley, 1997.

Koulopoulos, T. M., The Workflow Imperative: Building Real World Business Solutions, New York: John Wiley & Sons, 1997.

Ngu, A. H. H., et al., "Modeling Workflow Using Tasks and Transactions," Proc. 7th Internatl. Conf. Database and Expert Systems Applications, Zurich, Sept. 9–10, 1996, pp. 451–456.

Workflow Management Coalition, Workflow Handbook 1997, ed. by P. Lawrence, New York: John Wiley & Sons, 1997.
Part III: Automation methodology

With the conclusion of Parts I and II of this book, the fundamental concepts and models needed to permit the specification of a comprehensive methodology for the implementation of business processes are now in place. Part III develops and details the specification of a process implementation methodology (PRIME). In a sense, Part III is an anticlimax, since most of the difficult work has been done during the investigation of the automation assets utilized by the methodology. What remains is the integration of those components into a unified design that can be implemented in a timely and cost-effective manner.
It probably is evident from the discussions of some of the individual assets that PRIME is not a conventional methodology. It contains a number of unique approaches and constructs designed to address the issues inherent in the current business and technical environment. From the preceding chapters, it should be clear that the current difficulties of software development and deployment will not be solved by conventional approaches. That has been amply demonstrated over the years by the large number of projects that have failed to deliver what they promised. They overran projected costs and completion times, or did not provide customers with what was needed. Requirements usually were vague and kept changing. Development staff turned over, and valuable information that was not documented was lost. General management and project management that did not understand the intricacies and special needs of software development in general and business software in particular tried to apply techniques that simply were not adequate for the task.

The basic problem is not the particular architecture, technology, tools, equipment, or even management style utilized. Aside from a lack of process orientation, the problem is the implementation methodology employed. Currently available methodologies contain structural problems and are deficient in one or more (usually all) of the following areas:
§ They do not adequately ensure that the requirements have been incorporated into the completed product.
§ They leave too much to the discretion of the developers.
§ They have inadequate checks and balances.
§ They do not adequately address geographical distribution of equipment and function (with or without Internet techniques).
§ They are not adequately integrated with the infrastructure components and services.
§ They do not provide for the continuous involvement of the customer to ensure usefulness of the completed implementation.
§ They do not address the integration of manual and automated tasks, which must cooperate in the satisfaction of a business event.
§ They are not adequately coupled to the structure of the resultant automation functionality.
§ They do not incorporate a reuse approach.

Assuming that management realizes that a process orientation is necessary to address the fundamental requirement problems, it is still necessary to overcome the difficulties of the available methodologies. That can be accomplished only by the design of an entirely new methodology that has as its specific purpose the effective implementation of business processes in a manner that addresses the listed deficiencies. The PRIME approach addresses all the difficulties of the current methodologies but admittedly introduces constructs that require a technology and culture stretch. Because of the rapid advances in technology occurring on a large number of fronts, the technology aspect is not an insurmountable problem for even the near future. Also, "workarounds" can be utilized until the needed technology becomes available.

The major problem is cultural. All areas of the enterprise, from executive management to the accountants to the software developers, have to learn a new way of working. The commitment to that change must be obtained from all levels of the organization, from senior management to the individual software engineers. Senior management must provide the leadership needed, including overseeing the changes in the strategic planning process that are required to accommodate the difference in approach. The inherent problems in accomplishing such change and obtaining the needed commitment should not be underestimated.
If those difficulties can be overcome (and they can), the result of utilizing the PRIME methodology will be implemented processes and associated automation software that meet the needs of the users, that utilize state-of-the-art technology appropriately, that provide results in a timely and cost-effective fashion, and that can be easily changed to accommodate business and technological improvements.
Unfortunately, the discussion in Chapters 16 through 27 is essentially a circle. No matter where the start is positioned, the entire circle must be traversed before all the concepts and their interactions become clear. The author has tried to keep the discussion in as logical a sequence as possible. However, the reader should be aware of the inherent circle properties and not become frustrated when a concept is utilized before it can be sufficiently motivated or detailed. The problem is self-correcting as the discussion progresses.
Chapter List

Chapter 16: Overview of process implementation methodology
Chapter 17: Spirals
Chapter 18: Step 1: Define/refine process map
Chapter 19: Step 2: Identify dialogs
Chapter 20: Step 3: Specify actions
Chapter 21: Step 4: Map actions
Chapter 22: Step 4(a): Provision software components
Chapter 23: Step 5: Design human interface
Chapter 24: Step 6: Determine workflow
Chapter 25: Step 7: Assemble and test
Chapter 26: Step 8: Deploy and operate
Chapter 27: Retrospective
Chapter 16: Overview of process implementation methodology

Overview

A process implementation has several elements that must be developed in a coordinated fashion and that must all interoperate to accurately reflect the requirements contained in the original business process. Depending on the process being implemented, those elements usually include the following:
§ Automated functions;
§ Manual procedures;
§ User interfaces;
§ Task management;
§ Workflow management.

A PRIME must be capable of converting a given business process into a set of those elements, all of which are compatible with and utilize the available infrastructure for support and common services. The infrastructure is specified and designed independently of the process implementation. Its architecture and design are determined by the services provided, the technology utilized, and the resources available.

An explicit differentiation must be made between an implementation methodology and a project management methodology. Unfortunately, the two types of methodologies quite often are confused. That results in a significant amount of uncertainty as to what is included in an implementation methodology and what are the required skill sets of the personnel involved. An implementation methodology specifies how the conversion of requirements to an implementation should be accomplished. It defines the procedures, models, structures, and information flows utilized in the conversions as well as their relationships and when and how they are to be utilized. A project management methodology produces a project plan that includes estimates for resources and elapsed time. It also may identify the individuals needed to implement the plan. After the conversion has started, a project methodology determines if the conversion is proceeding according to the initial estimates
and indicates the types of corrections to make if there is significant deviation from the plan. A project management methodology without an attendant implementation methodology is devoid of meaning because there literally is nothing to manage. In a similar way, an implementation methodology needs an associated project management methodology to ensure that the conversion is utilizing enterprise resources in an effective way. The definition and utilization of the two types of methodologies should be separate and distinct because they focus on different problems and the personnel involved require different skill sets.

The methodology developed in this presentation is an implementation methodology. It focuses on the conversion from requirements to implementation and does not explicitly consider management aspects, such as resource estimation. However, where appropriate, some management topics are discussed when they add to the understanding of the procedures involved. Management topics are explicitly identified as such, in keeping with the desire to keep the two types of methodology separate.

It is assumed that both methodologies utilize the same repository. That ensures that the project management methodology is able to track the progress and estimate resource utilization of a given implementation and can determine at any step if changes should be made in the development. Such information would be placed in the repository and cause the implementation parameters to change accordingly. The repository also would be used to contain the results of "what-if" alternatives (e.g., manual function versus automated function implementation) requested through the project management methodology. Each what-if alternative would use the implementation methodology as needed but would be identified as a separate result associated with the original development.

The specification and design of PRIME are a combination of art and engineering.
The art addresses the decisions as to the specification of the underlying concepts and models. The engineering addresses the detailing of the models and the means to ensure that all the components interoperate as needed. Both the art and engineering aspects must work in harmony to provide a methodology that meets the requirements and produces the required results. Although it is possible from an intellectual perspective to separate those two aspects, in practice it is difficult because they are closely intertwined. Thus, no attempt is made during the development of the methodology to identify when one or the other is being applied. The readers are invited to judge for themselves if the differentiation is of interest.
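The shared-repository treatment of what-if alternatives described above can be sketched in a few lines. This is a hypothetical illustration only: the class and method names are invented, and a real repository product would provide far richer versioning and metadata.

```python
# Illustrative sketch: a shared repository in which "what-if" alternatives
# are stored as separate results linked to the original development.
# All names here are invented for illustration.

class Repository:
    def __init__(self):
        # (development_id, alternative) -> result data
        self._results = {}

    def record(self, development_id, result, alternative="baseline"):
        # Each what-if run is kept distinct from the baseline development,
        # so the project management methodology can compare alternatives.
        self._results[(development_id, alternative)] = result

    def alternatives(self, development_id):
        return sorted(alt for (dev, alt) in self._results if dev == development_id)

repo = Repository()
repo.record("claims-process", {"effort_days": 120})
repo.record("claims-process", {"effort_days": 95}, alternative="automated-intake")
```

The point of the sketch is only that a what-if result never overwrites the original development; it is identified as a separate result associated with it.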
16.1 Automation system In previous chapters, the term enterprise automation environment was used to represent the totality of the deployed and operational computing environment, including the automation software (applications and infrastructure) and the computing and network equipment employed in the enterprise. Because PRIME is one mechanism by which the automation environment software is structured, it is necessary to define further the environment and the associated elements used in its creation, which, for convenience, are referred to as the automation system. The automation system is defined by the model shown in Figure 16.1. The primary input to the model is the business requirements as represented by the business process. Secondary inputs are (1) enterprise standards that will affect some aspect of the development or deployed process and (2) stakeholder needs, which represent the interests of the different classes of individuals interacting with the deployed process. The output is the implemented and deployed business process the users employ in performing their work. The deployed processes consist of the business functionality and associated workflow management that use the enterprise computing infrastructure for common support services (e.g., security, communications). The set of all such deployed processes is the enterprise automation environment.
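The automation system model just described has one primary input, two secondary inputs, and one output. A minimal typed restatement may help fix the terms; the field names below are paraphrases of the book's terms, not an API.

```python
# Minimal restatement of the automation system model: inputs and output
# as named in the text. Field names are illustrative paraphrases.

from dataclasses import dataclass, field

@dataclass
class AutomationSystemInputs:
    business_process: str                                      # primary input: the business requirements
    enterprise_standards: list = field(default_factory=list)   # secondary input
    stakeholder_needs: list = field(default_factory=list)      # secondary input

@dataclass
class DeployedProcess:
    """The output: business functionality plus workflow management,
    relying on the common infrastructure for support services."""
    functionality: str
    workflow_management: str
    infrastructure_services: list  # e.g., security, communications
```

The set of all such DeployedProcess instances would correspond to the enterprise automation environment.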
Figure 16.1: Automation system model.

16.1.1 Enterprise automation environment

The enterprise automation environment can utilize many different forms and structures. Although a process implementation architecture can be specified separately from the implementation methodology and vary with different functionality, that approach is inefficient and may not allow the implementation methodology to meet the specified requirements. For those reasons, only one process implementation architecture is utilized in the automation environment. It is designed to be an integral part of the PRIME approach and thus support the fundamental requirements of the methodology. That results in the following advantages for the automation environment:
§ Processes can easily interact with each other as required.
§ Reuse of the same components in multiple processes is facilitated.
§ The infrastructure needed to support the architecture need be developed only once. Multiple support structures are not needed.
§ Asset management procedures are simplified.
§ The implementation methodology can be optimized to a single implementation architecture instead of requiring multiple structures.

The one disadvantage to the utilization of a single architecture for all process implementations is that operational efficiency may not be optimized in all cases. Because the cost of hardware for the performance obtained continues to decrease rapidly, the performance penalty usually is small in comparison to the advantages.

16.1.2 Process implementation architecture

The process implementation architecture is the aggregation of several of the concepts and models presented in previous chapters. It is illustrated in schematic form in Figure 16.2. The architecture is based on a C/S structure with four explicit types of servers: automation control, workflow, infrastructure, and business functionality/data. There is also a client that provides the user interface.
Figure 16.2: Process implementation architecture schematic.

In conventional C/S structures, the user client and the automation control server are coresident in a workstation, and the combination of the two is called the client. This arrangement is called the fat client; the functionality of the automation control server (which may be significant) is grouped and colocated with the user client. In an Internet-based C/S structure, the user client consists of only a browser and supported I/O devices, and the automation control server is located on a different platform in the thin-client arrangement. An associated thin-hardware platform is sometimes called a network computer (NC). The process implementation architecture specification can support either type of client arrangement. In fact, for a given process, either or both arrangements can be utilized, depending on the needs of the users and the policies and procedures of the enterprise.

The purposes of the automation control server are to (1) determine the functionality needed for a particular instance of the workflow process fragment assigned to the server and (2) cause it to be executed. Depending on the specific design and implementation of the workflow, the determination as to the appropriate functions can be made by the user, the task manager, the workflow manager, or a combination of them. As indicated in Figure 16.2, the automation control server has four types of components: human interface, task manager, workflow manager access, and cluster store. Each of the three methods of determining the needed functionality is represented by one of the component types. The cluster store contains the dynamic data used by those components.

The basic operation of the automation control server is as follows:
1. The workflow manager routes workflow instances to the server, where they can be addressed using the functionality available to the server.
2. The workflow manager access component first determines how a workflow instance will be selected from all the instances available to the server. It may automatically select one of them according to specified business rules or present a selected set to the user through the human interface for manual selection. When a workflow instance has been selected, the workflow manager access component indicates that selection to the workflow manager.
3. Once the instance has been selected, the appropriate task manager is invoked. Controlled by the information contained in cluster store, the task actions cause functionality in the various servers to be executed (including user functions accessed through the human interface). Changes in cluster store data are made continuously as server and user functionality finish. That continues until the task manager has caused all the needed functionality to be executed and the task completes.
4. Depending on the specific circumstances of the instance, additional tasks needed to address the instance may be invoked either in parallel or in sequence. Each of the tasks operates as in step 3. Each task invocation is communicated to the workflow manager by the workflow manager access component.
5. The workflow manager access component indicates to the workflow manager that a task has completed.
6. After an instance has been satisfied, the entire procedure starts over again.
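The control flow in the steps above can be sketched as a simple loop. This is a sketch only, under the assumption of invented component interfaces (the book specifies no API): a workflow manager access object that selects instances and reports events, tasks that read and update the cluster store, and sequential rather than parallel task invocation.

```python
# Hypothetical sketch of the automation control server's operating loop.
# Component and method names are invented; cluster_store is modeled as a
# plain dict holding the dynamic data shared by the components.

def run_automation_control_server(workflow_access, cluster_store):
    while True:
        # Steps 1-2: pick a workflow instance, by rule or by the user,
        # and report the selection to the workflow manager.
        instance = workflow_access.select_instance()
        if instance is None:
            break  # nothing left to process in this sketch
        workflow_access.notify_selected(instance)

        # Steps 3-4: run each task the instance requires; task actions
        # read and update the dynamic data in the cluster store.
        for task in instance.tasks:
            workflow_access.notify_task_started(task)
            task.execute(cluster_store)
            # Step 5: report task completion back to the workflow manager.
            workflow_access.notify_task_completed(task)
        # Step 6: the instance is satisfied; loop for the next one.
```

A real implementation would also have to handle parallel task invocation and user interaction through the human interface component, which are omitted here.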
16.1.3 PRIME Now that the scope, role, and context of the implementation methodology have been defined, the remainder of this book focuses on the design of PRIME. The start of the design process is an examination of the various types of implementation methodologies and their respective advantages and disadvantages. That allows selection of an appropriate type as the foundation of PRIME.
16.1.3.1 Methodology types The major types of implementation methodologies considered here are the waterfall, evolutionary, build-and-test, and spiral methodologies. Although the ad hoc methodology is probably the most prevalent one in current use, it is not considered because it cannot be characterized readily. It varies with each individual and project and is not a true methodology in the sense used in this section. The advantages and disadvantages of each type of methodology are discussed briefly. Waterfall methodology The waterfall methodology has been used since the early days of software development. It partitions the development into a series of fixed, concatenated stages, as shown in Figure 16.3. Each stage must be substantially completed before the next one is started. Although there can be feedback paths from one stage to an earlier one, they usually are exception paths and are not used as a normal part of the methodology definition. The methodology gets its name from the sequential nature of the stages. The flow of the development is in one direction, much like the flow of a waterfall.
Figure 16.3: Waterfall methodology.

The waterfall type of methodology has several advantages:
§ A large number of individuals are experienced in its use.
§ Organizations have used this methodology for a long time and are comfortable with it.
§ It lends itself to very large projects.

The disadvantages are:
§ The customer does not see the implementation until it is finished, which can be a significant number of years for a large project.
§ It is assumed that all the requirements can be determined at the start of the project. That usually is not the case, and the system implementation may not reflect hidden requirements.
§ It is not oriented toward software reuse, process-based requirements, and integration with an established infrastructure.
§ Requirements that change during the development can cause considerable disruption to the development, requiring even more time and resources to complete.
The waterfall methodology generally is not suited for the type of development needed to provide process-based enterprise automation. Evolutionary methodology Whereas the waterfall methodology has been used for large projects, the evolutionary methodology has been used for small projects or parts of larger projects. It starts with the development of some initial functionality that is then validated by the customer. The next set of functions is then determined and implemented. That continues until the entire product has been implemented. This type of development is illustrated in Figure 16.4.
Figure 16.4: Evolutionary methodology.

The development starts by implementing functionality 1 as a core and then proceeds to add functionality increment 2, and so on. The resultant structure changes with each new evolution and is impossible to predict in advance. If some part of the functionality is rejected by the customer, it must be redone until approval is obtained. The methodology gets its name because the functionality available evolves over time rather than appearing as a completed implementation.

There are some advantages of the evolutionary type of methodology:
§ The customer sees some of the functionality quickly and can determine if the evolution is what is needed.
§ Requirements can be added and changed with little disruption to the overall project.
§ It tends to keep projects small and more manageable.

The disadvantages are:
§ There is a strong tendency to skip the requirements phase and leap right into the coding. That can cause a considerable amount of redevelopment when the customer determines that much of the initial development is not appropriate and needs to be redone.
§ The ability to add functionality decreases as the product evolves. That occurs because no overall architecture is defined. The product becomes more and more complex and difficult to understand.
§ It is not oriented toward software reuse, process-based requirements, and integration with an established infrastructure.
§ The development tends to be closed in all its aspects. The implementation tends to become the documentation, resulting in insufficient documentation of the system construction and the reasons that the evolution proceeded as it did. That can hamper continued evolution as well as usage of the product.
The evolutionary methodology is also not suited for the type of development needed to provide process-based enterprise automation. Build-and-test methodology The build-and-test methodology and the evolutionary methodology have many similarities. The main difference is that the build-and-test methodology utilizes a well-defined structure designed to accommodate all the functionality needed. The functionality itself is developed in small increments, as it is in the evolutionary methodology, but it is designed to fit into the overall structure. This is illustrated in Figure 16.5.
Figure 16.5: Build-and-test methodology.

The overall structure, as indicated by the large rectangular box in Figure 16.5, is designed in accordance with the initial set of requirements. The initial set of functionality, depicted by box 1, is implemented so it fits into the defined structure. Then the next functionality increment (box 2) is added, and so on. The overall structure remains constant and does not change as the product evolves. Because an initial set of requirements has been determined, it is less likely that the customer will reject the functionality increments. To some extent, the build-and-test methodology is a compromise between the waterfall methodology and the evolutionary methodology. It keeps the initial requirements phase and structured approach but permits the customer to get a feel for the product and its functionality much more quickly than allowed by the waterfall approach.

The advantages of the build-and-test type of methodology are:
§ The customer sees some of the functionality quickly and can determine if the evolution is what is needed.
§ Requirements can be added and changed with little disruption to the overall project.
§ It tends to keep projects small and more manageable.
§ An initial set of requirements is determined up-front and helps guide the overall implementation.
§ It is oriented toward software reuse, process-based requirements, and integration with an established infrastructure.
§ Because an overall architecture is defined, the ability to add functionality does not decrease as the product evolves.

The disadvantages are:
§ A relatively comprehensive structure that can accommodate the functionality fragments needs to be developed and utilized up-front.
§ If the requirements should change materially, the structure needs to be changed. That may be mitigated by certain structure designs that are more flexible than others.

The build-and-test methodology is suited for the type of development needed to provide enterprise automation in the current environment.

Spiral methodology The spiral methodology is a type of build-and-test methodology in that it formalizes that approach by defining a standard set of activities that are performed in a repetitive fashion to construct both the initial and additional functionality. The use of those standard activities is shown graphically by arranging them as quadrants of a circle. Development proceeds by logically traversing the circumference of the circle and performing each set of activities as they are encountered. As the process is repeated, functionality is added (or improved with respect to requirements) with each complete cycle. The ever-widening radius of the circle being traversed—the outward spiral—indicates the increase in knowledge of the development over time.

The spiral process is illustrated in Figure 16.6, in which the set of standard activities is arranged by quadrant and consists of analysis, design, implementation, and evaluation types. For each traversal of the spiral, all those activities are sequentially invoked. Analysis is performed to determine what needs to be accomplished next, usually in the form of some type of new or changed implementation. The necessary design work for the implementation is performed and the implementation produced. The results are then evaluated and the cycle starts over with the analysis for the next cycle. Each activity depends on the ones before it as the development continues.
Figure 16.6: Spiral approach to development. In theory, a spiral methodology can be thought of as a single procedure that guides the development through one complete revolution and is reinvoked for each new cycle or traversal. The only difference between successive revolutions is the additional knowledge and detail available for the development. The spiral activities remain the same. Although the spiral methodology is an attractive notion, in actual practice the procedures necessary to perform a given cycle vary considerably. The specific methodology design for a spiral cycle must reflect not only the current detail level but also the type of
information being developed. That can be accomplished within the high-level context of the spiral, but the details certainly will vary. As an example, early in the development, the cycle of interest is concerned with defining the initial set of requirements and the associated overall structure of the product. In a subsequent cycle, the emphasis may be on the detailed design of the user interface functionality. Obviously, the needs of those two cycles are somewhat different, even though they go through the same basic quadrants of analysis, design, implementation, and evaluation activities. One approach to accommodating the differences is to explicitly define different types of spirals. The major categories of the spiral will be present in each type, but the differences can be addressed in a more effective way.
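The quadrant-by-quadrant traversal described above can be sketched as a small driver loop. This is an illustrative sketch only: the activity callables are placeholders standing in for whatever analysis, design, implementation, and evaluation work a given spiral type actually requires.

```python
# Minimal sketch of spiral traversal: the four quadrant activities are
# applied in order on each revolution, and each revolution carries forward
# the knowledge accumulated so far. Activity functions are placeholders.

QUADRANTS = ("analysis", "design", "implementation", "evaluation")

def traverse_spiral(activities, knowledge, cycles):
    """Run `cycles` revolutions; `activities` maps quadrant -> callable.

    Each callable receives the accumulated knowledge and the cycle number
    and returns the (possibly refined) knowledge for the next activity.
    """
    for cycle in range(1, cycles + 1):
        for quadrant in QUADRANTS:
            knowledge = activities[quadrant](knowledge, cycle)
    return knowledge
```

Note how the same four activities recur on every cycle while the knowledge parameter grows, mirroring the outward spiral; tailoring the callables per cycle corresponds to the "different types of spirals" the text proposes.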
16.2 Selected approach for PRIME
A spiral methodology, which is a type of build-and-test methodology, is suited for the sort of development needed to provide enterprise automation in the current environment and is the approach selected for PRIME. Some modifications to the basic methodology are made to accommodate the needs of practical software development and adapt it to a process implementation focus. Specifically, the use of multiple tailored spiral types is specified to ensure that all aspects of the development are covered adequately.
16.2.1 Requirements
Because a spiral technique indicates only an overall methodology approach, the actual design of the methodology must account for additional requirements. The PRIME methodology has several additional needs that must be considered:
§ Methodology driven by process definitions;
§ Reuse of business functionality;
§ Incorporation of legacy systems;
§ Use of a common infrastructure;
§ Use of a distributed C/S architecture;
§ Integration with enterprise long-term planning;
§ Utilization of prototypes to convey the evolving design to stakeholders;
§ Equal consideration of manual and automated functions;
§ Incorporation and integration of specifications for error handling and recovery, online training and help, and performance monitoring;
§ Coverage of the operational phase of the life cycle as well as the development phase.
Although PRIME is completely compatible with an object-oriented approach, it is not necessary to utilize an object structure to make effective use of the methodology. In fact, software developed according to object-oriented techniques can be mixed with software developed in a more traditional fashion. That is required when legacy systems and COTS products are combined in a given process implementation. The ability to combine the different software architectures is one of the strengths of PRIME. Those requirements, along with the selected spiral approach, form the basis of the PRIME design.
16.2.2 Methodology design
PRIME is a nontraditional, process-driven, and workflow-realized approach to the implementation of business processes. As necessary, individual tasks are developed using a nonprocedural programming approach. As will be seen, PRIME utilizes a common infrastructure with shared components and business functionality built on reusable software components and incorporates prototyping and stakeholder interaction in each spiral. The PRIME methodology has a total of seven explicit spirals, each of which has a specific function in the methodology. Figure 16.7 shows the spirals in a manner similar to
that used to illustrate the spiral methodology in Figure 16.6. However, the representation in Figure 16.7 does not adequately convey the interactions or show the individual activities that the methodology operation comprises. A more robust depiction, although not as obviously a spiral approach as that utilized in Figure 16.7, is shown in Figure 16.8.
Figure 16.7: PRIME spirals.
In Figure 16.8, the seven spirals are indicated through the use of reverse (feedback) arrows. The activities that define the operation of the spirals are contained in eight named steps. Although a methodology can be thought of as a continuum, it usually is partitioned into discrete steps for ease of definition and management. Although the steps are somewhat arbitrary, they do need to contain a set of closely related activities and produce specific deliverables.
Figure 16.8: Methodology structure of PRIME.
In most cases, a step belongs to more than one spiral. Having a step participate in multiple spirals allows a smooth transition from one spiral to another (both forward and backward). It should be remembered that traversing any spiral increases (or improves) the quality of the development through improved knowledge of the requirements, design, or implementation. In that sense, a transition to a previous spiral type still increases the quality of the development. It does not usually mean that there is a problem. The spiral always gets larger in radius, never smaller.
16.2.3 Methodology dynamics
There are three entry points into the methodology. The initial one, shown on the left side of Figure 16.8, represents the case in which it is desired to utilize the methodology to implement a given business process. The second entry is at the top middle of the diagram, the case in which an explicit link between the project management methodology and PRIME is needed. That link provides project management with the description of functionality that is needed but has not yet been implemented. The information is crucial to the determination of project resource and schedule needs and is shown explicitly to reflect that importance. The third entry point, shown near the upper right side of the diagram, represents the situation in which the need for a new or changed software component has been identified independently from the requirements of a specific process implementation. Although the third entry point does not represent an actual invocation of the methodology, it is necessary to indicate that an initial structured set of software components should be developed utilizing a top-down approach. For similar reasons, step 4(a) and spiral 3(a) are not considered an integral part of PRIME. They are shown in the diagram and discussed in the context of the methodology to emphasize the point that software components must be designed and developed to provide the functionality needed by the action specifications. The implementation of the software components is independent of the PRIME process implementation, as discussed in Chapter 14. Given that exception, the initial general flow of PRIME is from step 1 through step 8, and from spiral 1 through spiral 7. Because PRIME follows a spiral approach, it is possible to repeat a spiral or a step multiple times and traverse them in any reasonable order. In fact, some steps (e.g., 4, 5, and 6) and spirals can be performed in parallel.
However, if any of the steps causes a change that affects another step that is common to the spirals containing the parallel steps (e.g., “Identify actions”), then all the affected spirals need to be revisited. That can result in changes to information in any of the spiral steps. That type of behavior also can occur at any point in a development. For example, assume that step 5 of spiral 4, “Design human interface,” is in process and the implementation, for some reason, requires that the action definitions be revised, so step 3, “Specify actions,” is revisited. As a step in spiral 2, changes to the actions may require that step 2, “Identify dialogs,” be revisited and the dialog definitions changed. That, then, affects all of the spirals that contain that step. Those types of changes occur with any implementation methodology, but they can be confusing and difficult to track and manage. Things can easily fall through the cracks. With the PRIME structure, there is a well-defined method to manage the changes and ensure that all the effects are considered. For a given change, there may be several traversals of multiple spirals necessary before the situation is fully resolved. However, the needed activities and their status easily can be checked at any time during the development.
16.2.4 Methodology prototypes
As another aspect of the spiral approach, PRIME also specifies the development of a series of prototypes. Prototypes are associated with a spiral rather than attached to a specific methodology step. The prototypes are used to contain the implementation aspect of the spiral. As a spiral goes through successive iterations, additional information and structure may be added to the prototype. Some spirals use essentially the same prototype, each populating it with information specific to the spiral involved.
For organization purposes, the activities necessary for the initial embodiment of a prototype are assigned to one step (not necessarily the first) of the related spiral. Once that has occurred, subsequent traversals of the spiral steps assume the existence of the associated prototype.
16.2.5 Implicit spirals
In addition to the named spirals, there can be implicit spirals, depending on the specific circumstances involved. Implicit spirals can be defined as necessary during an implementation by allowing a transition from any methodology step to any other step as the situation warrants. Situation-dependent spirals can utilize the prototypes defined for the explicit spirals and may include any arbitrary set of steps as long as the step preconditions and postconditions are met.
16.2.6 Tool support
Some tools are available from commercial vendors that can greatly aid in the utilization of PRIME. The tools and their capabilities are constantly changing and evolving. For that reason, specific tools and vendors are not identified here. The usefulness and power of the methodology are independent of the tools used to assist in its implementation. Where a generic tool capability is needed, such as that provided by a workflow engine, it is described only by its inherent capabilities and not by identification of products from specific vendors.
16.2.7 Presentation organization
The presentation of the methodology is as follows: Chapter 17 presents the description of all the spirals, including the prototype defined for each. Next, the activities of the eight steps and the one associated step are described, one to a chapter. Chapter 27 provides a retrospective of the methodology and presents some conclusions concerning its design and deployment. The discussion of the methodology assumes familiarity with the terms, models, and concepts of the automation assets as presented in previous chapters. They are utilized as the basic components of the methodology without further definition. However, interactions between those models are considered as their specific application to the PRIME methodology requires. As the methodology design progresses, the reader is encouraged to revisit the chapters where the automation assets being utilized were initially examined.
16.2.8 Beginning the methodology
All that is needed to invoke PRIME from the normal entry point is a brief description of a business process to be implemented. The definition does not have to be available in any great detail, because the purpose of the first spiral and step of PRIME is to develop the process information needed for utilization of the remainder of the methodology. By including the majority of the requirements gathering as an integral part of the methodology, changing the requirements can be considered a normal part of the operation of the methodology (by revisiting the spirals and steps as required) and does not have to be handled on an exception basis. Once invoked, it is assumed that PRIME continues to govern the operation of the process implementation so that changes just utilize additional cycles of the spirals needed to address the changes. That makes the maintenance aspects of the implementation a continuation of the development and effectively eliminates the usual disconnection between development and maintenance. Individuals familiar with the methodology can work on initial developments or maintenance activities. The basic procedures are the same. The increase in efficiency gained by not having to define and train staff in a separate maintenance methodology can save considerable amounts of scarce resources over the process life cycle.
Selected bibliography
Ahrens, J. D., and N. S. Prywes, “Transition to a Legacy- and Reuse-Based Software Life Cycle,” IEEE Computer, Vol. 28, No. 10, 1995, pp. 27–36.
Avrilionis, D., N. Belkhatir, and P.-Y. Cunin, “A Unified Framework for Software Process Enactment and Improvement,” Proc. 4th Internatl. Conf. Software Process, Brighton, U.K., Dec. 2–6, 1996, pp. 102–111.
Cattaneo, F., A. Fuggetta, and L. Lavazza, “An Experience in Process Assessment,” Proc. 17th Internatl. Conf. Software Engineering, Seattle, Apr. 23–30, 1995, pp. 115–121.
Cave, W. C., and A. B. Salisbury, “Controlling the Software Life Cycle—The Project Management Task,” IEEE Trans. Software Engineering, Vol. 4, No. 4, 1978, pp. 326–334.
Crinnion, J., Evolutionary Systems Development: A Practical Guide to the Use of Prototyping Within a Structured Systems Methodology, New York: Plenum Publishing, 1992.
Frakes, W. B., and C. J. Fox, “Modeling Reuse Across the Software Life Cycle,” J. Systems and Software, Vol. 30, No. 3, 1995, pp. 295–301.
Kwan, M. M., and P. R. Balasubramanian, “Adding Workflow Analysis Techniques to the IS Development Toolkit,” Proc. 31st Hawaii Internatl. Conf. System Sciences, Wailea, HI, Jan. 6–9, 1998, pp. 312–321.
Liu, L., and E. Horowitz, “A Formal Model for Software Project Management,” IEEE Trans. Software Engineering, Vol. 15, No. 10, 1989, pp. 1280–1293.
Madhavji, N. H., et al., “Prism = Methodology + Process-Oriented Environment,” Proc. 12th Internatl. Conf. Software Engineering, Nice, France, Mar. 1990, pp. 277–288.
May, E. L., and B. A. Zimmer, “The Evolutionary Development Model for Software,” Hewlett-Packard J., Vol. 47, No. 4, 1996, pp. 39–45.
Scheer, A.-W., Architecture of Integrated Information Systems: Foundations of Enterprise Modeling, Berlin: Springer-Verlag, 1992.
Walford, R. B., Information Networks: A Design and Implementation Methodology, Reading, MA: Addison-Wesley, 1990.
Wasserman, A. I., “Toward a Discipline of Software Engineering,” IEEE Software, Vol. 13, No. 6, 1996, pp. 23–31.
Chapter 17: Spirals
Overview
This chapter briefly discusses the spirals in their approximate order of initial invocation for the implementation of a business process. Only the overall characteristics for each spiral are presented. Details as to how the spiral produces its intended results are left to the individual step discussions that follow in Chapters 18 through 26. This approach is designed to provide an overall understanding of the operation of the methodology before the details of the individual steps are developed. Although there is some duplication of the information in this chapter and in the chapters that follow, it makes possible an orderly presentation of the material. First the forest, then the trees! The reader is referred to Figure 16.8 for a graphical representation of the spirals and their included steps. That figure should be referenced frequently as the methodology discussion proceeds because it shows the relationships among all the spirals and steps of PRIME. In addition, it will be necessary to refer to the discussions of the automation asset models that are incorporated in this and later chapters. The assumption is made that the reader is familiar with their definitions and construction.
The specification of the named spirals is based on the expected need to cycle through the steps of the spiral multiple times before a satisfactory result is obtained. Some spirals contain other spirals within their scope. For example, spiral 2 contains spiral 1 because it includes all the steps of spiral 1. That inclusion reflects the expectation that a cycle of spiral 2 requires multiple cycles of spiral 1. An explicit transition between spirals can be made in any step that is a member of both spirals. For example, a transition from spiral 4 to spiral 2 can be made in the “Specify actions” step, which is a member of both spirals. In practice, that means the transition to another spiral is made in the common step. If the human interface design requires a change to the action definition and the needed change requires a change to the dialog or process map specification, the transition between spirals must be made to effect the changes and uses the steps common to the spirals. Tracking change chains (which may be long and complicated and require many iterations) can be accomplished by using the explicit transitions between spirals. That makes it easy to determine what must be done for a given change and in what order the activities need to be performed. The real value of the spiral approach is in that explicit mechanism for keeping track of the progress of the development and in determining how changes will be accommodated.
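Because each spiral is defined by its member steps, the common-step transition rule can be illustrated with a small sketch. The step groupings below are a simplified reading of Figure 16.8, and the function name is an invention for illustration; this is not PRIME tooling.

```python
# Simplified reading of Figure 16.8: each spiral is a set of named
# steps, and a change made in a step must be propagated to every
# spiral that contains that step. Names and groupings are illustrative.

SPIRALS = {
    1: {"Define/refine process map"},
    2: {"Define/refine process map", "Identify dialogs", "Specify actions"},
    4: {"Specify actions", "Design human interface"},
}

def affected_spirals(changed_step):
    """All spirals sharing the changed step need to be revisited."""
    return sorted(s for s, steps in SPIRALS.items() if changed_step in steps)

# Revising the actions while designing the human interface (spiral 4)
# pulls in every spiral that also contains the "Specify actions" step:
print(affected_spirals("Specify actions"))            # [2, 4]
print(affected_spirals("Define/refine process map"))  # [1, 2]
```

Chasing a change chain then amounts to repeating this lookup for each step the change touches, which is exactly the bookkeeping the explicit spiral transitions are meant to make tractable.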
17.1 Spiral 1: Process (requirements)
The purpose of spiral 1, as reflected by its name, is to determine the requirements for a business process. The format of the requirements must be such that they are meaningful to the business experts involved and yet robust enough to serve as the basis for the implementation. Spiral 1 usually is the first one encountered during a development. Because it usually is impossible to determine all the requirements prior to the start of the design activities, spiral 1 will be invoked many times during the development as new requirements are discovered or the determination is made that current ones need to be changed. Remember that a revisitation is not a regression. It only signifies the need for this spiral type at a given point in the development.
17.1.1 Description
The business requirements are in the form of process maps. In addition to process functionality and sequencing definitions, the maps include information flows, roles and organization assignments, environmental characteristics (e.g., throughput, timing, operational schedules), and any other associated information that may be available. In addition, a set of scenarios that detail the business events that the process is designed to address must be created. The scenarios will be useful in developing the process map as well as for testing the process prototype, the prototypes of the other spirals, and the final implementation before it is turned over to the users. The initial requirements are considered to be suitable when the SMEs agree on the process map and its associated information, including the scenario definitions. In addition to determining an initial set of requirements, spiral 1 also makes a first determination as to the implementability of the requirements. Although the business SMEs are not directly concerned with the development of the technical specifications, the ability to effect that conversion is of immediate interest to them.
If the requirements cannot be implemented, appropriate changes must be made to the requirements to remove the impediment. That requires the direct involvement of the SMEs, as would any other proposed changes to the requirements. As a part of the determination of the implementability of the requirements, specifications of individual units of implementation are made. Those units are called dialogs, because they spring from the unbroken conversation between the role performer and the automation system. Each dialog can be specified, designed, and implemented independently of the others. If one of the dialogs previously has been developed during
the implementation of another process, it can be reused in the current process. This partitioning has the potential of greatly shortening the time to market through parallel development efforts. If a given dialog cannot be implemented, the process map must be changed to remove the difficulty. There are many reasons a dialog is not able to be implemented, including the lack of necessary data, long processing times (months, years), the inability to perform an included activity, and the inability to obtain a role performer (automated or human) with the necessary characteristics.
17.1.2 Prototype
The prototype (called the process prototype, after the spiral name) defined for this spiral is simply the process map along with the associated scenarios. It is designed to allow the user to obtain a feel for the operational characteristics of the implemented process map without a large amount of development effort. That is accomplished by the use of both animation and simulation techniques. Animation utilizes scenarios to walk through the process map steps and ensure that the step definitions, information flows, and sequencing are appropriate for the scenario being utilized. Animation utilizes the process prototype as follows:
§ The execution of the process can be shown by lighting up appropriate icons in the proper sequence.
§ Execution problems (e.g., input information flow not yet created) can be highlighted.
§ Alternative paths through a process for a given input can be identified.
§ Various measures (e.g., cost to a given point, elapsed time to a given point) can be obtained at selected locations along the path determined by a given scenario.
Any changes made to the process map as a result of the animation exercise can be quickly reflected in the prototype, since the base data must be updated as an integral part of updating the requirements.
In addition to the animation of the map for a specific scenario, simulation tools can be employed to better understand the characteristics of the process. By specifying the statistics of each decision point and running a large number of inputs through the map, bottlenecks and other problem areas can be identified. Doing that does require a significant amount of analysis to develop a reasonable set of statistics. At this stage of the development, a large amount of simulation may or may not be worthwhile, depending on the complexity of the process and the need to better understand the dynamics of the process as a part of map development. The process prototype can be a paper representation or a utilization of automated tools. Animation and simulation usually require the use of some set of automated tools to provide an effective result, although manual walkthroughs are certainly useful.
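The animation idea, walking a scenario through the map and highlighting any step whose input flow has not yet been produced, can be sketched as follows. The process map, flow names, and function are invented for illustration; the book does not prescribe any particular tool or data structure.

```python
# Hypothetical sketch of scenario animation over a process map: "light
# up" each step in scenario order and flag missing input flows, one of
# the execution problems the process prototype is meant to surface.

def animate(process_map, scenario):
    """Return (step, missing_inputs) pairs for the scenario's path."""
    produced = set(scenario.get("initial_flows", []))
    report = []
    for step in scenario["path"]:
        needs = process_map[step]["inputs"]
        missing = [f for f in needs if f not in produced]
        report.append((step, missing))     # highlight execution problems
        produced.update(process_map[step]["outputs"])
    return report

process_map = {
    "receive order": {"inputs": ["order form"],   "outputs": ["order record"]},
    "check credit":  {"inputs": ["order record"], "outputs": ["credit ok"]},
    "ship goods":    {"inputs": ["credit ok", "stock list"], "outputs": []},
}
scenario = {"initial_flows": ["order form"],
            "path": ["receive order", "check credit", "ship goods"]}

for step, missing in animate(process_map, scenario):
    print(step, "MISSING:" if missing else "ok", missing)
```

Here the walk reveals that "ship goods" expects a "stock list" flow no earlier step produces, exactly the kind of sequencing defect the SMEs would then resolve in the process map.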
17.2 Spiral 2: Logical design
The purpose of spiral 2 is to:
§ Identify any dialogs from other processes (e.g., management) that are needed in addition to the dialogs directly associated with the business process;
§ Identify and define the actions needed for each dialog;
§ Determine which actions are to be implemented via automated means and which are to be performed by a human;
§ Identify the initial human interface instances (HIIs) and the associated data that must be transferred between the human role performer and the cluster store.
This spiral is called the logical design spiral because the major design components are developed. The functionality required to implement the actions is not specified in this
spiral; functionality specification is the subject of the physical design spiral. Although the logical design is thus independent of the physical design, it contains all the elements necessary to define the implementation structure and operation for both human and automated functions. As such, it is an accurate, if limited, representation of the final production implementation. The specification of the actions usually requires that the dialogs or process map be changed. That may require an invocation of the process spiral before another cycle of the logical design spiral occurs.
17.2.1 Description
The logical design is achieved over several traversals of the spiral. The initial cycle is used to provide enough information for a high-level prototype, as defined in Section 17.2.2. The prototype is intended to quickly provide a rough indication of the suitability of the initial action set in meeting the requirements of the process map. Depending on the results of that evaluation, the process spiral can be reinvoked, or the logical design spiral can continue to be traversed to increase the detail level of the action specification. Additional detail can come in many forms, including the identification of new required dialogs and actions, expanded action definition, refinement of the human/automated action decisions, more precise data element specification, and expanded human interface data and control definitions. Each iteration of the logical design is developed within a predefined framework that enhances the opportunity for reuse of previously developed constructs, including dialogs, software components, and human interfaces. In addition, the framework automatically provides for the routine but important needs of each dialog, such as initialization, cleanup, exception handling, and control. Common actions in those and other categories are added automatically to each dialog, and the design must consider only development-specific additions or changes to the standard set.
17.2.2 Prototype
The prototype defined for spiral 2 is designed to give users a feel for the actual operation of the implemented dialog for a given scenario. That is accomplished by the use of a scenario to guide the simulation of the actual sequence of the actions in a dialog interspersed with the human interfaces where they would occur during the execution of the dialog. The human interfaces contain a listing of the data and actions that are defined for each occurrence. If desired, either the action sequencing or the human interface presentation could be suppressed. That allows users to isolate and concentrate on a single aspect of the design. The effect of the multiple dialogs in a process also can be simulated by executing the selected dialogs as controlled by a suitable scenario. It is assumed that the action prototype will be implemented using an automated tool. Once the prototype is produced, changes to the prototype should be relatively easy to make. That allows the prototype to accurately reflect the design throughout the iterations of the logical design spiral. The action prototyping tool provides for graphic representation and simulation of action execution. Scenarios are used to provide details about how the simulation should progress. The tool must include the following capabilities:
§ Graphic animation of simulated execution to verify proper behavior specification, data usage, and placement of HIIs;
§ Use of estimated or actual volume, cycle timing, delay, and other related information to simulate throughput and identify potential bottlenecks in the dialog;
§ Graphic animation of simulated human interfaces at the point of corresponding HIIs within the dialog, including description of the data to be presented at that point.
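A minimal sketch of that prototype behavior: replay a dialog's action sequence, interspersing the human interface instances (HIIs) with the data defined for each, and allow either aspect to be suppressed so a single facet of the design can be studied. All names below are illustrative assumptions, not part of an actual prototyping tool.

```python
# Hedged sketch of the logical-design prototype idea: simulate a
# dialog as an ordered mix of actions and HIIs, with the option to
# suppress either view. Dialog contents are invented for illustration.

def simulate_dialog(steps, show_actions=True, show_hii=True):
    """Return the trace a user would see for one scenario-driven run."""
    trace = []
    for kind, name, data in steps:
        if kind == "action" and show_actions:
            trace.append(f"run {name}")
        elif kind == "hii" and show_hii:
            # At an HII, the prototype lists the data defined there.
            trace.append(f"HII {name}: {', '.join(data)}")
    return trace

dialog = [
    ("action", "validate order", []),
    ("hii",    "confirm screen", ["customer name", "total"]),
    ("action", "post order", []),
]
print(simulate_dialog(dialog))                   # full interleaved view
print(simulate_dialog(dialog, show_hii=False))   # isolate the action view
```

Suppressing one view, as in the second call, mirrors the book's point that users can concentrate on the action sequencing or the interface presentation in isolation.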
17.3 Spiral 3: Physical design
The purpose of spiral 3 is to develop the physical design of the process implementation by (1) mapping the automated actions to available automation functionality (software components) and (2) mapping the manual actions to some form of instruction. That enables a determination as to what functionality is not available and needs to be developed. The software components can have many types of structures, ranging from groups of small functional components to large COTS products and legacy systems. In any case, the action definitions provide a design through which the software components can be evaluated. The human instructional material could be in the form of an operations manual, policy and procedures guidelines, or similar document(s). The documents could be in paper format or utilize automated tools for access. Once the actions are mapped, the characteristics of the mapped functionality can be used to update the initial specifications of the actions. That provides a better indication of the operational characteristics of the implemented process. To save time, the action specification step is sometimes skipped, and a COTS product or legacy system is mapped directly to one or more dialogs. When that is done, it is difficult, if not impossible, to determine the suitability of the product for the process implementation. It may be that only parts of the product should be used or that the products should not be used at all because they do not adequately provide for process needs. Specification of the actions is the only way to determine the suitability of existing functionality, and the time to perform the logical design spiral must be allocated even when the use of a COTS product or legacy system is anticipated. Project management also utilizes spiral 3 to effectively estimate the time, costs, and other resources required to complete the implementation as well as those necessary to acquire and provision any unavailable functionality.
As part of that planning, spiral 3 is also designed to permit the examination of the consequences of different human-automation action tradeoffs and allow the development of associated what-if scenarios. Spiral 3 provides an explicit link between PRIME and the project management methodology being used.
17.3.1 Description
Spiral 3 performs the following functions:
§ Identification of the proposed human and automated actions along with the defining technologies and other supporting material;
§ Mapping of the logical automated transactions to available software components;
§ Provision of the augmented logical design prototype that includes the physical characteristics of the existing physical components;
§ Identification and preliminary specifications for any new automated software components required;
§ Mapping of the logical human transactions to instructional documents;
§ Identification and preliminary specifications for any new documentation required;
§ Initial human interface definitions and associated input, output, and control data;
§ Estimates of resources required for a deployable implementation of the service and the development of any new functionality.
The information is used as an input to the specific project management methodology being used by the organization for the management of process development, physical software component development, and human action documentation. It is also used as a framework on which to assess proposed or actual changes in the logical design of the process as determined by management requests.
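The core mapping activity can be pictured as matching automated actions against a catalog of existing software components; whatever fails to match becomes a preliminary specification for new development and an input to project management. The catalog contents, action names, and function are invented for illustration.

```python
# Illustrative sketch of the spiral 3 mapping: reuse an existing
# component where one fits the action, and collect the unmatched
# actions as candidates for new provisioning. All names are invented.

def map_actions(actions, catalog):
    """Split actions into (mapped-to-component, unprovisioned) groups."""
    mapped, unprovisioned = {}, []
    for action in actions:
        if action in catalog:
            mapped[action] = catalog[action]   # reuse existing functionality
        else:
            unprovisioned.append(action)       # feeds project management
    return mapped, unprovisioned

catalog = {"compute tax": "TaxEngine v2", "format invoice": "DocGen"}
mapped, todo = map_actions(["compute tax", "verify address"], catalog)
print(mapped)   # {'compute tax': 'TaxEngine v2'}
print(todo)     # ['verify address']
```

A real mapping would of course compare interfaces and characteristics rather than names, but the split into reused and to-be-provisioned functionality is the essential output of this spiral.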
17.3.2 Prototype
No unique prototype is defined for the physical design spiral. The prototype utilized in spiral 3 is the logical design prototype defined for the logical design spiral. The one alteration made for this spiral is that the characteristics of the mapped external software components are used in place of the characteristics of the original action transactions. Making that change is not difficult. Reuse of the same prototype makes it relatively easy to transfer between the spirals, and such transfers occur frequently.
17.4 Spiral 3(a): Component
The purpose of the component spiral is to ensure that a robust set of software components will provide the functionality needed to implement the actions defined for the process. The components utilized by action transactions are called external components and are accessed by a message pair specified as part of the component. Components used by the action support functionality are called internal components and are accessed by whatever method is specified by the action support infrastructure. The spiral is concerned with the specification, implementation, and provisioning of all required software components. Spiral 3(a) is separate and distinct from the business process implementation that is the focus of the PRIME methodology. However, it is included in the methodology description because proper functioning of the methodology rests on the availability of a large number of existing software components.
17.4.1 Description
Spiral 3(a) is concerned with the acquisition and provisioning of external and internal software components. The emphasis is on external components because they provide the needed business functionality. The software components can be specified in one of two ways: a top-down approach or a bottom-up approach. As discussed in Chapter 14, a complete specification for any of the entities requires a combination of the two specification methods. After the components have been designed and specified, they must be provisioned in the computing network so they can be utilized by any action that requires their functionality. That requires physical implementation and deployment on suitable network facilities. The sequencing and timing of the process depends on resource availability and allocation and is essentially a project management decision. The specification and implementation of components must be refined through iterations of spiral 3(a) until they provide the optimum functionality for current and prospective needs.
17.4.2 Prototype

The prototypes considered in spiral 3(a) are somewhat different from those defined for other spirals. They are used to determine if the software components are implemented as specified and to serve as stubs in the implementation spiral prototype if the associated component is not immediately available.
17.5 Spiral 4: Human interface

The purpose of the human interface spiral is to develop and implement the inputs and outputs that allow a human to interact with the automated functionality of the process being utilized. The inputs and outputs can utilize any reasonable devices but usually consist of a keyboard and a mouse for input and a cathode ray tube (CRT) screen for output. Voice transducers are also commonly utilized, especially when multimedia data are used.
Graphical (iconic) interfaces with multiple windows usually are defined in conjunction with drag-and-drop movement of data from one window to another. The use of video also is growing more common. The design of specific interfaces is the province of human factors experts who specialize in the identification of effective means of human-machine interaction. Although the display of those data is crucial to the success of the final implementation, the most that can be accomplished from an overall methodology standpoint is to ensure that a specific spiral and associated steps are defined for that activity. A thorough discussion of the approaches used in interface design is well beyond the scope of this presentation. The emphasis instead is on the assurance that the data will be available to the human role performer when they are needed.

17.5.1 Description

The human interface is the mechanism that permits the exchange of data between the human role performer and cluster store. As such, it is associated with the cluster and not with an individual dialog. If a dialog terminates but the cluster remains active with other dialogs, the data in the human interface could remain even though they were utilized in the terminated dialog. That continuity facilitates the transfer of data between dialogs from a human perspective as well as from an automated one. The human role performer could control the availability of data, or the data could be controlled through an automated means by suitable definition of the actions that read and write data to the interface. Unfortunately, the cluster orientation somewhat complicates the design of the interface because it requires that the designer accommodate any data that could be presented using any reasonable compound scenario. Management of a large amount of data becomes a significant design factor in addition to the traditional emphasis on the aesthetics of look and feel.
Another complication is the use of Internet browser technology, with its inherent limitations, to provide the interface instead of a proprietary implementation using products such as Motif for a UNIX-based solution or Microsoft Windows for a PC-based solution.

17.5.2 Prototype

The human interface spiral prototypes are used to show the SMEs and end users how the interface operates and to determine how effective it will be in a production environment. Many different types of prototypes can be utilized for that activity; the specific ones are determined by the designers and users involved. Usability testing using an associated prototype is a common technique to ensure that the proposed interface can be used without generating a large number of mistakes or confusion on the part of the user. The human interface prototypes usually are not directly transferable to the final implementation. The design principles can be transferred, but the implementations usually are different.
17.6 Spiral 5: Workflow

The purpose of the workflow spiral is to develop the workflow implementation for the business process being addressed. The workflow implementation provides the connection between the process dialogs and any other dialogs needed in their support from a technical or business perspective. It determines when, where, and by which role performers the dialogs will be addressed. Although workflows are defined on a dialog abstraction level, actions used to exchange data between the dialog and the workflow manager need to be defined. That could result in several spirals being revisited. In general, however, the workflow design is based on the dialog definitions.

17.6.1 Description

The workflow specification utilizes the definition of the process and support dialogs to determine the tasks that are the unit of functionality for the workflow. Tasks consist of
one or more dialogs, depending on the control and reporting requirements needed for the workflow. Frequently, workflow design activities result in changes to the process map and its defined dialogs. That occurs for the same reasons that the specification of the actions also results in the same types of changes. The need to define the intent of the process using technical constructs frequently identifies areas where the process map does not provide sufficient guidance as to how the implementation should proceed. In addition, alternative ways of providing the same result as specified by the process map occur with regularity. Those alternatives can result in more efficient and effective implementations, but they need to be reflected back to the process map so the business SMEs can be assured that nothing has been lost by the changes. To provide a complete workflow specification, information as to the available workforce, task scheduling, routing methods, and monitoring requirements also must be specified. Examination of those needs also may indicate changes to the process map and dialogs. Once the workflow information has been programmed into the process specification tool of the workflow engine being utilized, simulation of the workflow is utilized to determine the operational characteristics of the workflow. The results of that simulation almost always are different from those of the simulation performed on the business process. This is because the definition of the workflow process contains a great deal more information than is usually made a part of the process map. For example, the simulation may reveal bottlenecks caused by staffing characteristics and not by the inherent characteristics of the process flow. The workflow implementation also needs to be tested using the scenarios defined for the process. That ensures that no inherent process functions have been compromised by the workflow representation and implementation. 
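The staffing effect described above — a bottleneck that appears only when workforce limits are modeled — can be illustrated with a toy simulation. This sketch is not part of the PRIME methodology; the arrival pattern, service time, and staffing levels are invented for illustration:

```python
import heapq

def simulate(arrivals, service_time, workers):
    """Return the average wait when `workers` staff serve tasks FIFO.
    `arrivals` is a sorted list of task arrival times."""
    free_at = [0.0] * workers      # when each worker next becomes free
    heapq.heapify(free_at)
    total_wait = 0.0
    for t in arrivals:
        ready = heapq.heappop(free_at)
        start = max(t, ready)      # task waits if no worker is free
        total_wait += start - t
        heapq.heappush(free_at, start + service_time)
    return total_wait / len(arrivals)

# Ten tasks arrive one minute apart; each takes three minutes of work.
arrivals = [float(i) for i in range(10)]
# With ample staff, the process flow itself produces no waiting...
no_bottleneck = simulate(arrivals, 3.0, workers=10)
# ...but with two workers a queue builds up -- a staffing bottleneck
# that a simulation of the bare process map would not reveal.
with_bottleneck = simulate(arrivals, 3.0, workers=2)
```

Running the two cases shows zero average wait in the first and a growing queue in the second, even though the process flow is identical — the kind of result the workflow simulation surfaces that the business-process simulation does not.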
If testing shows that problems do exist, they may be able to be corrected by changes in the workflow, or, just as likely, they may require changes to the process map.

17.6.2 Prototype

A workflow prototype is used as part of the development of the workflow specification and to ensure that all the needed information is available. The prototype uses stubs for the tasks if they are not available. All other information needed by the workflow is programmed into the workflow engine.
17.7 Spiral 6: Assembly

The assembly spiral is concerned with the integration of all the components necessary to implement a process, as well as the testing and evaluation of the resultant implementation. Although some of the external software components that consist of all or part of a COTS product or a legacy system may be stubbed because of access difficulties, the remainder of the implementation is consistent with that of the final product. The elements that are integrated during spiral 6 are:
§ The dialog infrastructure, including common support actions;
§ The cluster store;
§ The final human interface designs;
§ Implementation-specific actions that have been defined for the dialog;
§ The workflow engine and associated program;
§ The client platform;
§ The software components required by the actions;
§ The server platform(s);
§ The system software;
§ The network that connects the client and server platforms;
§ Other support software.
Prior to the integration of those elements, the specified software components must be implemented if they are not already available. It is anticipated that most of the components will be available and that only a small number of them will need to be developed for a given project. For the initial projects developed using this methodology, that assumption may not be true, and a significant number of software components may need to be implemented.

17.7.1 Description

Spiral 6 is used to assemble, demonstrate, and evaluate the operational characteristics of the process implementation. If problems are encountered, the malfunctioning components are identified, and, depending on the cause, one or more spiral types are invoked and utilized to determine and implement the proper changes. The initial traversal of spiral 6 is utilized to create an assembly prototype that behaves in an operational sense exactly like the final system. Some components may be stubbed because they have not yet been implemented or the use of an operational system is deemed to be critical in this phase. However, the characteristics of the implemented software components are maintained by the stubs (e.g., delays, data types). Subsequent iterations of the spiral are used to fine tune the system characteristics and determine if changes are warranted. Fine tuning in this context includes the following:
§ Launch criteria changes;
§ Error handling and recovery procedure changes;
§ Minor modifications of the screen or other aspects of the human interface design;
§ Rehoming of specific action transactions;
§ Addition of statistical or management actions;
§ Changes in the specific dialogs that are coresident on the workstation.
Any of those tuning changes requires the appropriate spiral(s) to be invoked to maintain the integrity of the development and ensure that the appropriate information is maintained for later use in the operations-oriented improvement spiral.
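A stub that preserves the interface characteristics of the component it replaces (delay, data types) could look something like this minimal Python sketch. The component name, latency value, and response fields are assumptions for illustration, not anything the book prescribes:

```python
import time

def make_stub(name, delay_seconds, canned_response):
    """Build a stand-in for an unavailable software component.

    The stub honors the real component's interface characteristics:
    it accepts the same request shape, imposes a comparable delay,
    and returns a response with the same fields and data types.
    """
    def stub(request: dict) -> dict:
        time.sleep(delay_seconds)          # mimic the component's latency
        response = dict(canned_response)   # same fields/types as the real one
        response["component"] = name
        response["stubbed"] = True         # flag so testers can tell it apart
        return response
    return stub

# Hypothetical component: an inventory check that normally takes ~50 ms.
inventory_check = make_stub("inventory_check", 0.05,
                            {"in_stock": True, "quantity": 12})
reply = inventory_check({"sku": "A-100"})
```

Because the stub preserves delays and data types, the assembly prototype behaves operationally like the final system even before every component is implemented.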
17.7.2 Prototype

The assembly prototype defined for spiral 6 has a structure and functionality that are as close to the deployed process as can be obtained without producing the final deployable implementation. This prototype is the last chance for the stakeholders to correct any problems before deployment. While it is relatively expensive to correct a problem found in this spiral compared with fixing problems detected in earlier spirals, it is much less expensive than performing corrections in the field. Because the assembly prototype and the final deployable product are relatively close in structure and function, enumerating the differences and similarities between them helps place each in perspective. The assembly prototype has the following characteristics relative to the deployable product:
§ It runs on the same hardware and software platforms as the deployable product.
§ Distribution of functionality among servers is the same as the deployable product.
§ For all specified software components currently available, the actual components are used.
§ For all other components, stubs with the same interface characteristics are used.
§ The network protocols are the same as those that will be used in the deployable product.
§ The network topography may be different from that of the deployed product.
§ Some network management, recovery, and error handling needed in the deployed product may not be implemented.
§ All human interfaces are completely implemented.
§ The workflow implementation is fully operational.
§ Complete documentation may not be available.
The assembly prototype also can be the vehicle for initial acceptance testing, training of users, and capacity testing. The use of the previously defined scenario set is the basis for acceptance testing. Once the assembly prototype has been evaluated and accepted by the users, there is no opportunity to make additional changes until actual deployment. Thus, it is important to ensure that the assembly prototype is evaluated and tested in as many ways as possible.
17.8 Spiral 7: Improvement

The purpose of spiral 7 is to:
§ Deploy the implementation by provisioning all the necessary components on those platforms that have been identified as participating in the implementation;
§ Finalize and implement the support functions such as documentation, network or service management, and security;
§ Determine the procedures by which the implementation will be made available to the affected users (mostly the responsibility of the project management methodology);
§ Monitor the operation of the implemented process by using the capabilities of the workflow engine and, depending on the results, identify and implement changes that can improve the operation of the implementation.
The improvement spiral is also used as the initial mechanism to address changes to the process that arise because of business needs that occur independently of the process operation. That would include such items as regulatory changes, new product offerings, competitive responses, and changes to the organizational structure of the enterprise because of acquisitions or divestitures.

17.8.1 Description

The deployment functions of spiral 7 require significant cooperation with the project management methodology because it is that methodology that is responsible for the scheduling of the deployment and the notification of availability to selected users. It is also responsible for the dissemination of any documentation to the end users, operations personnel, and the help desk, if one is to be utilized. The monitoring and evaluation function generally is accomplished through the information obtained by the workflow engine during its normal operation. The data obtained include timing of all the workflow tasks and components as well as any specific data selected by the workflow designers. The operational data can be used in conjunction with the simulation results obtained during the associated development spirals to determine what improvements are possible or feasible.
Improvements may be identified that will not be implemented because they are too costly, either in absolute terms or relative to the improvements that would be obtained. When process changes are required because of business needs not directly related to the operation of the process, the invocation of the spirals needed to effect the change is considered to be through the operation of the improvement spiral. That is done because the needed changes may not require the alterations to the process map or dialogs that would be assumed if the initial development entry point were used. That approach also provides the desirable continuity between the development and maintenance phases of the implementation. It is useful to assume that both phases utilize the same fundamental methodology.
17.8.2 Prototype

The prototype is the operational process implementation. As such, it is not a true prototype, but to the extent that it needs to be monitored and changed, it can be viewed in somewhat the same light as the prototypes associated with the other spirals.
17.9 Summary

Each spiral covers a major aspect of a process implementation. The spirals are connected in such a way that the effect of a change, regardless of the spiral in which it occurs, can easily be determined for all spirals. That prevents a change from causing an undetected (until deployment) problem in another area of the implementation. Each spiral also incorporates the same general approach, which helps to integrate the different spirals into a unified methodology. Every spiral will be traversed many times during an implementation. Although an initial order is defined for discussion purposes, the actual order will differ considerably over the course of the development. In fact, there is no specific order to the spirals after the initial startup of the implementation. Revisiting any spiral does not mean that a problem has occurred. It only means that additional information needs to be incorporated in the implementation. For staff experienced with other types of methodologies, that can be a difficult transition to make. Each step in the methodology is examined in detail in the next nine chapters. To some extent, the overall operation of the methodology as defined by the spirals is hidden during those presentations. To maintain as clear an understanding as possible, the reader should remain cognizant of the placement of each step in the methodology as its functions and requirements are defined.

Selected bibliography

Boehm, B. W., et al., “Software Requirements Negotiation and Renegotiation Aids: A Theory-Based Spiral Approach,” Proc. 17th Internatl. Conf. Software Engineering, Seattle, Apr. 23–30, 1995, pp. 243–253.
Boehm, B. W., “A Spiral Model of Software Development and Enhancement,” IEEE Computer, Vol. 21, No. 5, 1988, pp. 61–72.
Viravan, C., “Lessons Learned From Applying the Spiral Model in the Software Requirements Analysis Phase,” Proc. 3rd IEEE Internatl. Symp. Requirements Engineering, Annapolis, MD, Jan. 6–10, 1997, p. 40.
Chapter 18: Step 1: Define/refine process map

18.1 Purpose

Step 1 contains the requirements determination portion of the PRIME methodology. It is designed to provide an initial set of requirements prior to starting the remainder of the implementation. The initial requirements will change and be refined as the activities of the methodology are invoked. The basic unit of consideration in this step is the process. If multiple processes are to be considered simultaneously because they are closely related, each process is still handled independently. The connection between the
processes is the identification of common identical (or nearly identical) sets of process steps and interconnection points. Determination of the requirements is accomplished from a business perspective. The focus is on the process, which is represented by the process map and its associated information, including information flows, scenarios, business rules, and operational characteristics. The operational characteristics include the following items:
§ Performance;
§ Security;
§ Throughput;
§ Recovery and exception handling;
§ Statistics;
§ Tracking and logging;
§ Costs;
§ Role characteristics.
For simplicity, references to the process map are assumed to also include that associated information. The process map is used as the main communication vehicle with the business stakeholders to ensure that the process accomplishes the intended business goals and objectives. Although some amount of work in that area may have been performed before step 1 is invoked, the rigorous analysis process and creation of an affiliated prototype as part of the step activities help to derive considerable additional information about the process and its component steps. The technical perspective focuses on the ability of the process map to be translated into a structure that is implementable using available technology. The analysis is started in step 1 and continues throughout the methodology. If, during any phase of this analysis, it is determined that the implementation is impractical or impossible, step 1 must be revisited and the process map changed appropriately. If the business process already exists and is in operation in the enterprise, it is usually suggested that the current operation of the process be documented in an “as-is” process map before starting the process reengineering or development of the “to-be” process map. The to-be map documents the process as the enterprise wants it to operate.
If several as-is processes exist because relatively independent organizations of the enterprise have developed local versions of the process, it usually is not worth the effort to document all those variations. In that situation and the case in which the process is new, the implementation starts with the to-be map. The discussion that follows assumes that the to-be map is being developed, although many of the concepts could be applied to the as-is map if it is useful to perform that exercise.
18.2 Preconditions

The information and resources that must be available prior to invoking step 1 depend on the entry path utilized:
§ Initial entry point: Process definition (high level); organizational commitment; resource allocation (SME time);
§ From step 2, “Identify dialogs”: Detailed process map; set of scenarios; initial dialog determination; process prototype; questions or problems concerning process;
§ From step 3, “Specify actions”: Detailed process map; set of scenarios; dialog determination; process prototype; initial action specification; action prototype; questions or problems concerning process;
§ From step 6, “Determine workflow”: Detailed process map; set of scenarios; dialog determination; process prototype; initial action specification; action prototype; initial workflow specification; workflow prototype; questions or problems concerning process;
§ From step 8, “Deployment and operation”: Operational process implementation; process measurements and statistics; all implementation information.
18.3 Description

This description of step 1 assumes that the step was entered from the initial entry point because that requires the most effort and associated activities. If that is not the case, the process map already exists, usually along with a considerable amount of other information, and is expected to be altered in some fashion. When that type of entry occurs, the existing process map information remains valid, but the activities must focus more on the specific difficulties that caused the step to be revisited. That may require fewer or different personnel at the sessions than indicated in Section 18.3.1. In addition, the activities should accommodate the additional information available. That may lift some restrictions given for the initial development of the process map (e.g., the restriction against discussion of existing system functionality).

18.3.1 Facilitated sessions

With those caveats in place, the step description continues. The initial development of the process map is usually performed in a series of facilitated sessions. The sessions should include the following participants:
§ A set of business SMEs who collectively can address all aspects of the process;
§ A facilitator familiar with the process approach to requirements gathering;
§ A methodologist knowledgeable in the entire PRIME methodology who can determine if the sessions are producing acceptable results;
§ Technical SMEs who will implement the defined process. Because not all of the information discussed will be documented, it is necessary to have a set of development personnel who can get a feel for the process and what it is intended to accomplish.
Several rules should be followed during the sessions to obtain a version of the map that truly represents the desired process. There are others, but only the major principles that need to be followed are listed here.
1. Specify only the activities that are desired, not the means of current implementations.
Statements such as “then system X is used to obtain data Q” are not allowed. It easily could be that system X is the problem, and its functioning should not be allowed to influence the process specification. Connection of the process to available functionality occurs later in the implementation. 2. For the same reasons, do not make references to CRT screen layouts or data formats. They tend to unduly influence the process specification process; as in the case of legacy systems, they may be a problem, not the solution. 3. Develop high-level scenarios prior to the process map so they can be used in the development of the map. Although scenarios developed after the map is formulated still have significant value, more value is obtained if they are available prior to the start of map development. Scenarios help determine the scope and general characteristics of the process, which can greatly facilitate the process specification activities. 4. Eliminate protracted discussions about inconsequential items. There is a tendency to argue over names and wording. The problem should be noted and the session continued. Because they are easier, these types of discussions often substitute for ones with real content. 5. The same SMEs should participate throughout the process development. Substitutions cause a lack of continuity and can be the source of inefficiency and unnecessary controversy. This rule is sometimes difficult to enforce, but it needs to be forcefully addressed.
6. The map must be available in readable form as its development progresses. That can be accomplished via paper or projection. Pattern recognition is an important part of the map development, and the current state of the map is necessary to provide the means of that type of informal analysis.
7. Develop the map as a flat, one-level diagram. Do not utilize a decomposition procedure, such that there is a series of maps, each with increasing detail. Although the flat-map principle is controversial, the author feels that the advantages far outweigh the disadvantages (e.g., map size):
§ Specification of a high-level map with supersteps that represent other steps in a lower level map can hide a significant amount of the detail necessary to determine if the process map adequately represents the process.
§ A flat map facilitates use of the human ability for pattern recognition.
§ Coordinating all the maps needed for a given process can require a significant amount of configuration management time.
§ Deciding the appropriate number of levels can evoke considerable discussion and is an unnecessary distraction.
§ Maintaining a consistent level of abstraction throughout the map is facilitated.
§ It is easier to apply the analysis rules outlined in this and later chapters.
Depending on complexity, development of the process map can take several weeks. It is not desirable to force the sessions to an early conclusion, because the result represents the basic requirements of the resultant implementation (both manual and automated functions). Once the process map has been specified to the satisfaction of the business SMEs, the map (requirements) is considered to be sufficiently complete and stable that the design and implementation effort may continue. It is, of course, almost certain that changes to the requirements will become evident at subsequent spirals and steps in the methodology. When that occurs, step 1 will be revisited so that the changes can be accommodated and agreed to by all stakeholders.

18.3.2 Process map structure

To facilitate the following discussion, a continuing example is used based on the process map shown in Figure 18.1. The map provides an adequate degree of complexity while not overburdening the reader with detail.
Figure 18.1: Example of a process map. The process map represents the process by defining the functionality or activities of a set of discrete steps. The steps are placed in precedence order by connecting lines that indicate the allowable paths through the process. The steps are placed on the map in rows; each row represents a role that has been assigned to perform the activities of that step. A step cannot be in two rows (roles) simultaneously. An organization may also be assigned to the rows as shown, but that is an optional designation and is not needed for the proper functioning of the step.
In many cases, the order of the steps is somewhat artificial and could be changed without affecting the operation of the process. From an implementation perspective, such changes sometimes are necessary for efficiency or because of the availability of information. As a part of the map structure, each step must have the associated information flows defined. The flows represent the information needed to perform the defined activities of the step and should conform to the models and principles presented in Chapter 11. The direction of each flow must be indicated, along with the datastore that is the source or sink for that information. The datastore could also be cluster store when the data are not persistent. Incorporation of information flows into the map is illustrated in Figure 18.2.
Figure 18.2: Step information flows.

In Figure 18.2, the information flows are shown by the thick lines at a 45-degree angle to the step, and the arrows indicate the direction of the flow. The numbers indicate the identification of the flow. The datastore names have been omitted for clarity. Even with that omission, the diagram is getting quite crowded and difficult to read, which is why it is suggested that the information flows be documented separately from the map. If too many information flows are defined for a step, the step should be partitioned to reduce the complexity. As a rule of thumb, if there are more than four different data flows, the step should be split. Such a split will not change the implementation but should make it easier to understand the process from the business perspective. In the example map, there are no places where that needs to be addressed. Once the information flows have been determined, it is possible to analyze the complete structure to determine its consistency.

18.3.3 Consistency checks

Two checks are part of the initial consistency examination. The first is to determine if the outputs of each step are used later in the process and, if not, where and why they are needed. The second check is to determine if all the information needed in a step is available when that step is reached. The first check is relatively easy to perform on a static basis using a dataflow (information flow) diagram. The second check requires the use of scenarios and animation techniques and must be performed on a dynamic basis (addressed in Section 18.4). The output information analysis is accomplished by reordering the steps according to the availability of the information in the information flows. If information is available from sources other than previous process steps, it will not alter the map structure for the purposes of this analysis.
The resultant step reordering for the example is shown in Figure 18.3 and is a form of the classical dataflow diagram (precedence is based on data availability).
Figure 18.3: Diagram of step information flow.
Figure 18.3 illustrates some problems that might become evident during this analysis. Process steps 2 and 12 do not appear to have successor steps from an information perspective, and there is no indication that they are end steps. Other potential problems, which are not illustrated in the figure, could be steps in an order different from that shown on the original process map or steps that are precedence connected but should not be. There may not be a problem associated with any of those constructs, but they do indicate areas that need to be investigated. On the basis of a reexamination of the specified information flows and discussions with the SMEs as to the intent of the steps in question, any questionable areas should be considered and changes made as necessary. In this case, it is discovered that some information flows are indeed missing and that process step 2 should precede step 6, and step 4 (not 12, as might be assumed at first glance) should precede step 11 from an information perspective. The information flows that were added may have been inadvertently left out, or an activity necessary for the step may have been omitted along with its needed information flow(s). Note that step 12 still does not have a successor step. That may be because IF6 is used in another process and is not needed in this process. The step information precedence diagram is then updated by adding the three missing information flows. That results in the diagram in Figure 18.4, which would then be used for any additional analysis. The added flows also would be reflected in the original process map, which is the basis for the ongoing implementation. That is shown in Figure 18.5, where asterisks indicate added flows.
Figure 18.4: Updated step information flow diagram.
Figure 18.5: Revised process map. In addition to the information flow changes, some steps in the original process map also may be reordered to agree with the information flow reordering, if the SMEs deem that doing so is appropriate (e.g., step 3 performed before step 2, as permitted by the information flows). For the purpose of the example, it is assumed that the original order of the steps remains. This type of analysis may need to be repeated several times during the map specification. Any change may give rise to other changes, and long cascades of changes are not uncommon during step 1 (or other steps of the methodology).
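The reordering in Section 18.3.3 is, in effect, a topological sort of the steps by information availability. A minimal sketch of that analysis follows; the function name, step identifiers, and flow identifiers are illustrative, not from the book.

```python
# Sketch: order process steps by information availability (a topological
# sort). Flows supplied from outside the process are treated as available
# from the start, as the text describes, so they do not alter the ordering.

def reorder_by_information(consumes, produces, external):
    """Return step levels: each level's steps need only flows already produced."""
    available = set(external)
    remaining = set(consumes)
    levels = []
    while remaining:
        ready = [s for s in remaining if consumes[s] <= available]
        if not ready:
            # No step can proceed: some information flow is missing,
            # which is exactly the situation to review with the SMEs.
            raise ValueError(f"unsatisfiable steps: {sorted(remaining)}")
        levels.append(sorted(ready))
        for s in ready:
            available |= produces.get(s, set())
        remaining -= set(ready)
    return levels

# Hypothetical three-step fragment: PS1 produces IF1, PS2 consumes IF1
# and produces IF2, PS3 needs both flows.
consumes = {"PS1": set(), "PS2": {"IF1"}, "PS3": {"IF1", "IF2"}}
produces = {"PS1": {"IF1"}, "PS2": {"IF2"}}

print(reorder_by_information(consumes, produces, external=set()))
# [['PS1'], ['PS2'], ['PS3']]
```

A `ValueError` here corresponds to the missing-flow situations discussed above and would trigger a reexamination of the specified information flows.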
18.4 Prototype A process prototype is utilized as an aid to verify the business requirements represented by the process map. The prototype is an embodiment of the map along with the
information flows and operational characteristics. Although the scenarios will be used in conjunction with the prototype, they are not considered a direct part of it. The embodiment can be simply a map drawn on paper and paper copies of the other information. The use of a design tool for the embodiment facilitates use of the prototype, as will be evident as the discussion continues. The prototype is used for three different activities during map development. The first is to animate the process through utilization of the scenarios. For each scenario of interest, the process is initiated and the path through the process is determined by the scenario. The path through the scenario is simply the step sequence required to address the scenario. Each step in the scenario is checked to ensure that the sequence is logically correct (e.g., no missing steps or premature termination) and that each step has the necessary information to perform its intended function. The time and cost of the path also can be calculated if those values are available for the steps. As an example of the information flow analysis, assume that a scenario produces a path for the example process shown in Figure 18.6. All the steps have the needed information flows available when they are reached. Now assume that another scenario produces the path shown in Figure 18.7. Checking for information flow availability indicates that IF8 used by step 10 is not available. That occurs for the scenario being employed because step 6 was not reached previous to step 10 and it is that step that produces IF8. The resultant examination of the reasons for the inconsistency can reach several conclusions, depending on the specific conditions involved. IF8 may be used by step 10 only if it is available, and the lack of that information flow does not indicate a problem.
Figure 18.6: Consistent information animation result.
Figure 18.7: Inconsistent information animation result. The analysis also might show that IF8 is required and that the path from step 12 to step 7 is wrong. That requires the process map to be altered in some fashion to fix the problem. For the purposes of this discussion, it is not necessary to speculate how the map may need to be changed; for the continuing example, it is assumed that IF8 is optional. Any problems are corrected through changes to the process map, and the animation is performed again. Changes to the process map should be made in accordance with the flow of activities described in Section 18.5. That helps ensure that the changes do not have any unintended consequences. Also, as shown in the description of the activities, animation is an integral part of process map development. The second use of the process prototype is simulation. Simulation differs considerably from animation in its focus and the results obtained. While animation considers scenarios one by one, simulation considers an aggregate of all scenarios on a statistical basis. That usually is accomplished through the use of a discrete event simulation tool. Probabilities are assigned to each decision, and the process is initiated according to the statistics of the assumed load. The result is aggregate step utilizations, queue lengths,
average path execution times, and costs (if the information is available). The purpose of animation is to determine if the process is meeting the needs of the business events that cause it to be invoked. The purpose of simulation is to determine the operational characteristics of the process according to the assumed load. Simulations also should be a part of process map development. The third use of the prototype is to convey to the business stakeholders (other than the SMEs involved in the process map development) the intended operation and characteristics of the process. Although this activity is important in showing progress toward the availability of an acceptable implementation, it also is important at this time to get the initial backing of the stakeholders. It is the stakeholders who eventually determine the success or failure of the process implementation. In addition, they can offer valuable insights throughout the implementation. The business stakeholders need to be involved with every prototype developed during the methodology spirals, even if the involvement is only for a brief demonstration.
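The per-scenario availability check described above can be sketched as a simple walk along the scenario path. All names are illustrative; the `optional` set models flows such as IF8 that a step uses only if available.

```python
# Sketch: animate one scenario path and report information-flow problems.
# A flow is a problem when a step needs it before any earlier step on the
# path has produced it; optional flows are only noted, not treated as errors.

def animate(path, requires, produces, optional=frozenset()):
    available, problems = set(), []
    for step in path:
        for flow in requires.get(step, set()):
            if flow not in available:
                if flow in optional:
                    problems.append((step, flow, "optional, unavailable"))
                else:
                    problems.append((step, flow, "required, missing"))
        available |= produces.get(step, set())
    return problems

# From the example in the text: step 6 produces IF8, which step 10 uses.
requires = {"PS10": {"IF8"}}
produces = {"PS6": {"IF8"}}

# A path that bypasses PS6 (as in Figure 18.7): IF8 is unavailable at PS10.
print(animate(["PS12", "PS7", "PS10"], requires, produces,
              optional={"IF8"}))
# [('PS10', 'IF8', 'optional, unavailable')]
```

Running every defined scenario through such a check is the static analog of the animation procedure; an automated design tool would make this the routine case.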
18.5 Activities There are 13 individual activities in this step, arranged according to the sequence diagram in Figure 18.8. The activities in step 1 can begin as soon as a process has been identified. Any process that is currently being processed using this step cannot simultaneously be processed by any other step of the methodology. That is necessary to keep a reasonable degree of stability in the implementation procedures.
Figure 18.8: Activity sequence diagram. Any problems discovered during any of the activities must be resolved by altering the process map or scenarios involved. After that is accomplished, the activities must again be performed in the sequence indicated. This is a form of regression testing that ensures that a change to fix one problem does not inadvertently cause a different problem. In most cases, the additional work involved is relatively small, but the increase in quality is quite large. The following are brief descriptions of the individual activities needed to produce the results expected from step 1 in the methodology. 1. Hold facilitated sessions to obtain an as-is process map if needed. If only one current version of the process is involved, then this could be an excellent source for identifying problems and resolving them through a reengineered process. If there are multiple current versions of the process because each individual organization or geographical location using the process performs it in a different way, this may not be a feasible activity and should be skipped. 2. Hold facilitated sessions to obtain an initial or revised to-be process map. This activity is the core of step 1, and the quality of the final result depends on how well it is accomplished. The facilitator should be an expert in process design and the PRIME methodology in addition to being an experienced facilitator. The abstraction level of the map should be such that the process can easily be explained to a business-oriented
individual unfamiliar with the process. The map should not contain any implementation details, such as applications or systems that will provide some of the needed automation functionality. The emphasis of these sessions should be on specification of the process steps and their relative sequencing. Following activities apply some rules and guidelines to the map steps and begin the analysis that ensures the process design is feasible and implementable. 3. Identify or revise scenarios to be handled by the process. The scenarios help determine the design of the process and are used throughout the implementation to ensure that the implementation can adequately address the needs or all the scenarios. Scenarios are the major way of testing the results of each methodology step as well as the final implementation. The development of the scenarios should be specified in parallel with the development of the process map. 4. Identify roles and role characteristics as necessary using the principles outlined in Chapter 10. For each role utilized in the process, identify the characteristics required to perform that role and the characteristics of the user population targeted to perform that role. It is possible that there will be multiple user populations identified for a single role. In that case, the characteristics of each population must be carried forward. The characteristics will be utilized in the design of any necessary human interfaces for the dialog. Different interfaces may be defined and implemented for each user population (e.g., expert and casual performers of a role). 5. Decompose steps assigned to more than one role. If multiple roles (as opposed to the same role with different performer characteristics) are identified as performers of a single process step, decompose that step into smaller steps until only one role is identified for each step. Update the process map as required. 6. Decompose steps with more than four information flows. 
A process step with more than four individual information flows (in and out) usually is not consistent with the abstraction level at which the map should be developed. The step should be partitioned until each partition has four or fewer information flows, unless there is a valid reason that the step requires a larger number of flows. In that case, the reason should be documented as a part of the step ancillary information. 7. Reorder steps according to the availability of information. The reordering diagram exposes potential inconsistencies with the step ordering and information flow specifications. Take into consideration any documented temporal or state constraints on the sequential execution of specific process steps (e.g., a particular process step cannot start until after 5:00 P.M. on any given workday, even though its information requirements have been satisfied, because it is performed by the night clerk role). Update the process map as necessary. 8. Construct or update the process prototype. The prototype can be a designated printed process map that is executed manually, or it can be implemented through the use of a suitable automation tool. The same holds true for the scenarios. They can exist in paper form or through the use of an automated tool. It is desirable that a single automated tool be available for scenarios and maps to facilitate the animation procedures. It also would facilitate the comparison of different animation runs. 9. Test prototype using animation and scenarios. Using each identified scenario, animate the process map. Ensure that the scenarios are sufficient to produce a visit to each of the steps in the map. Also ensure that the process produces the desired effect for each scenario. Determine any changes to the map or scenarios that need to be made. 10. Use prototype for simulation if considered useful.
If the process requires simulation to determine its suitability, develop the simulation parameters from the information available and program the selected discrete event simulator. Simulation may be indicated if the process is extremely time
sensitive and the statistics of the business events involved are known or can be reasonably estimated. Even if the process is simulated at the process map level, it can be used only as an initial indicator because the implementation of the process can significantly alter the results, either positively or negatively. 11. Demonstrate the prototype to stakeholders. Arrange a session with the stakeholders to observe the action of the initial or revised prototype and determine the conformity of the prototype operation to the needs of the business and the individual stakeholders. 12. Obtain required approvals. If approvals are needed to continue beyond step 1, they need to be obtained before continuing. The process prototype, the opinions of the stakeholders, and the hard deliverables from this step (process map, scenarios, animation results, and simulation results if performed) should be sufficient to demonstrate the suitability of the process definition and the ability to proceed. 13. Enter process map specifications into repository. All information obtained as a result of step 1 should be entered into a corporate repository where it is available for future needs. Because maintenance is considered an integral part of the methodology, this information may be needed for a considerable length of time, and it may be useful to individuals other than those involved in the initial implementation.
18.6 Linkages to other steps Step 1 is the first step invoked when the PRIME methodology is utilized in the development of a process implementation. It also must be reinvoked whenever there is a projected change in the process map. In addition, the process prototype needs to be updated and the changes in the affected processes verified with the users. The change can occur as the result of information obtained in another PRIME step or because of a change in the business environment. The transition from this step almost always will be to step 2. After step 1 has been completed and the process prototype has been specified to the satisfaction of all involved stakeholders, it is necessary to consider the dialog definitions, either for the first time or on an updated basis. Step 1 also may serve as the transition step from any spirals and possibly all other steps using the concept of implicit spirals. Any problems uncovered during the invocation of subsequent steps may be reflected as changes to process definitions. It is also possible that process changes may have to be recommended as implementation constraints are uncovered. Process changes generally result in an updated process prototype and need stakeholder validation of the changes.
18.7 Postconditions In general, steps are terminated in one of two ways. The first way is when a condition is found that requires another step to be invoked before the current step can continue. Such precompletion exit is not an abnormal termination, as might be suggested. It actually is quite normal and merely a recognition that not all the necessary information is available. With this type of termination, the current step is reinvoked when the condition causing the termination has been removed. Reasons for a precompletion termination are specified in the appropriate activities of the step. Because defining/refining the process map is the first step of the methodology, it cannot terminate before completion except to stop the entire implementation. Although that is always a possibility for any step, it does not have to be explicitly considered as part of the discussion. If a problem is found during the activities of the step that prevents
continuing, a transfer can be made to the beginning of the step but not to another step. Step 1 is the only step that cannot have a precompletion termination. The second type of termination occurs when the step completes and produces the desired output. That type of exit is not a guarantee that the step will never be reinvoked. It is only an indication that for a specific invocation, all the information necessary to produce the defined step output is available. Depending on the results of steps that follow, the current step may be revisited (possibly many times). The postconditions presented in this section are those that are necessary for the step to complete and terminate. This definition of postconditions is true for all steps in the methodology but is not explicitly indicated in the discussions in the remaining chapters. The postconditions for step 1 are presented in the following list. § All step activities have been considered at least once. For an update, only the affected steps and activities need to be addressed. § The process prototype is available. § A robust set of scenarios is available. § Appropriate animation (and simulation, if indicated) of the process prototype has been performed using the scenarios. § The business stakeholders have been involved as needed. § All relevant information has been entered into the appropriate repository and updates verified. § Any necessary approvals have been obtained. At the conclusion of step 1, all affected stakeholders must agree that the process, as it is currently defined and represented, is the best that can be accomplished prior to utilizing the other methodology spirals to provide additional details and analysis. The information may indicate that further refinement of the process and its representation is necessary. Selected bibliography Batson, R. G., and T. K. Williams, “Process Simulation in Quality and BPR Teams,” Proc. 52nd Annual Quality Congress, Philadelphia, May 4–6, 1998, pp. 368–374.
Damelio, R., The Basics of Process Mapping, Quality Resources, 1997. Galloway, D., Mapping Work Processes, Milwaukee: ASQC Quality Press, 1994. Hunt, V. D., Process Mapping: How to Reengineer Your Business Processes, New York: John Wiley & Sons, 1996.
Chapter 19: Step 2: Identify dialogs 19.1 Purpose Step 2 performs a series of partitions of the process map. The partitions are designed to identify the largest sets of process steps that can be integrated and performed in a continuous time period. That is important in determining independent implementation units, workflow tasks, and needed reusable components. The results of each partitioning step should be discussed with the SMEs in an extension of the facilitated sessions used to develop the process map. That ensures that the partitioning results are consistent with the intent of the process map developers and that any problems that result can be quickly identified and corrected.
19.2 Preconditions The following preconditions are required before work on the activities of step 2 can be initiated. It also is assumed that any result from a previously executed step or spiral can be used to provide guidance or background information in addition to those items explicitly listed in this section. Although this step is labeled as step 2 and always follows step 1, it is possible (and likely) for this step to be reinvoked after other steps have been executed. Information developed in prior steps may be of use in the reinvocation of this step. § Process prototype; § Scenario definitions; § Process step operational information; § Other applicable information gathered during process definition. All that information should be available from a repository used to contain all the information identified during the development of the process and any preceding process specifications.
19.3 Description The purpose of partitioning is to identify groups of process steps that can be independently implemented and utilized. That results in smaller and more efficient software implementations and forms the foundation for a significant amount of reuse. These independent units are called dialogs. A partition consists of a set of steps from the process of interest. There are five types of partitions that are sequentially produced. The final set of partitions is dialogs. A connection between partitions is said to be a transition. Each partition is produced according to a set of rules. Some rules are applicable to all partitions, while others are specific to individual partition types. The following list contains those rules that must be followed by all partitions. § A partition contains one or more process steps. In rare instances it can contain all of the steps in a process if all of the other rules are followed. § A partition cannot contain noncontiguous steps. Only one step in a partition is allowed to have an input that does not come directly from an output of another step in the partition. Multiple such steps are allowed if they have the same external input. § A partition must contain the maximum number of steps that does not violate any of the other applicable rules. § Partitions cannot overlap. A process step can be part of one and only one partition of a given type. § All steps of the map must be included in some partition of each type. § An output from a partition can occur at the output of any step in the partition. § A partition may have multiple outputs. § All partition transitions must be specified and documented (e.g., parallel execution, and, exclusive or). 19.3.1 Organization partitions All the partitioning examples in this chapter are based on the process map depicted in Figure 18.5. The organization partitioning is optional. It identifies groups of contiguous process steps in a process map that are performed by a single organization. 
Each of those groups is called an organization partition, and the procedure is illustrated in Figure 19.1. Organization partitions conform to the rules for all partitions and a specific rule that states that organizational partitions cannot cross organization boundaries. The inputs and outputs from each partition are transitions to other organizations. Each organization partition is identified by the letter O and a unique arbitrary number. As further partitionings are performed, the identifier is kept to indicate the derivation of the final partitions.
Figure 19.1: Example of organization partitioning. The purpose of organization partitioning is to understand the interaction of the various organizations concerned with the process and to serve as a preliminary step for further partitioning. Although the organization partitioning is not strictly necessary and can be eliminated, the knowledge obtained is useful if any changes must be made to the process. It also serves to assure the individuals in each organization that they have a place in the operation of the process when it is implemented. 19.3.2 Role partitions Once the organization partitions have been identified or determined not to be useful, the process steps are partitioned by role. These partitions conform to the rules for all partitions and a rule that states that role partitions cannot cross role boundaries. A role partition is the fundamental partition; all other partitions are refinements of it. The result of role partitioning on the example is illustrated in Figure 19.2. To increase the readability of the diagram, the organization partitions are not explicitly drawn. Each resultant role partition is identified by the identifier of the organization from which it was derived concatenated with the letter R and another arbitrary unique number. The purpose of partitioning is to determine the maximum number of steps that can be performed by a single role without transitioning to another role.
Figure 19.2: Example of role partitioning. 19.3.3 Input transition partitions Each role partition is then partitioned to agree with the following input transition rule: A transition into a partition can occur to only one given step that is defined to be the initial step of the partition. An exception to this rule occurs if multiple steps in the partition receive the same external input. Whether or not a split is indicated in the latter case depends on the functionality of the steps receiving the input. Allowing only one point of entry into a partition greatly simplifies the resultant implementation. That aspect of partitioning is illustrated in Figure 19.3. The role partition O2R1 consisting of PS5, PS6, and PS7 is split into two partitions: one consists of PS5 and PS6 (O2R1I1), while the other consists of PS7 (O2R1I2). The notational convention follows that previously defined. The split is necessary because of the transition from PS12 entering PS7.
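As a sketch of the input transition rule, the entry steps of a partition can be computed mechanically. The transition list below, including the PS4-to-PS5 edge, is illustrative, not taken from the example map.

```python
# Sketch: find the entry steps of a partition -- steps that receive a
# transition from a step outside the partition. More than one entry step
# (with different external inputs) means the partition must be split.

def entry_steps(partition, transitions):
    return {dst for src, dst in transitions
            if dst in partition and src not in partition}

# Hypothetical transitions around role partition O2R1 = {PS5, PS6, PS7};
# the PS12 -> PS7 edge is the one cited in the text as forcing the split.
transitions = [("PS4", "PS5"), ("PS5", "PS6"), ("PS6", "PS7"),
               ("PS12", "PS7")]
partition = {"PS5", "PS6", "PS7"}

print(sorted(entry_steps(partition, transitions)))
# ['PS5', 'PS7'] -> two entry steps, so O2R1 splits into O2R1I1 and O2R1I2
```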
Figure 19.3: Example of input transition partitioning. 19.3.4 Timebreak partitions The resultant partitions can be further decomposed into sets of steps that are performed in an unbroken time period. That is performed by invoking the following rule on each of the input transition partitions. A partition cannot contain a connection between two process steps that may require a large amount of time to complete, relative to the execution time of the steps, or that may result in suspension of the execution of the process. This type of transition is called a timebreak, and every step input and output needs to be examined for this possibility. The procedure is illustrated in Figure 19.4. Role partition O2R2 in Figure 19.3 is partitioned into two role instances, O2R2T1 and O2R2T2, because there is a timebreak between them. That means it is not necessary to perform PS9 immediately after PS8. In fact, different role performers could perform each step, depending on the specifics of the process parameters, the circumstances of the business event initiating the instance, and the eventual workflow design.
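The timebreak split just described can be sketched as follows, using the O2R2 example. The representation of steps as a sequence and timebreaks as step pairs is an assumption made for illustration.

```python
# Sketch: split a sequential partition wherever a timebreak occurs between
# two adjacent steps. Timebreaks are given as (from_step, to_step) pairs.

def split_at_timebreaks(steps, timebreaks):
    parts, current = [], [steps[0]]
    for prev, step in zip(steps, steps[1:]):
        if (prev, step) in timebreaks:
            parts.append(current)   # execution may be suspended here
            current = [step]
        else:
            current.append(step)
    parts.append(current)
    return parts

# Role partition O2R2 from the text: the timebreak between PS8 and PS9
# yields the two partitions labeled O2R2T1 and O2R2T2.
print(split_at_timebreaks(["PS8", "PS9"], {("PS8", "PS9")}))
# [['PS8'], ['PS9']]
```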
Figure 19.4: Example of timebreak partitioning. 19.3.5 Convenience partitions Partitioning is performed for convenience using the following criterion. A partition may be split into smaller partitions when there is a clear advantage to doing so. Some of the reasons for this type of partitioning are as follows: § A subset of a partition has already been implemented and can be reused. § Two (or more) distinct functionality types are involved and should be implemented independently. For the example, it is assumed that there are no partitions of this type. The reasons for any convenience partitions must be carefully documented. 19.3.6 Dialog partitions Each partition that remains after all the previous partitions have been performed is called a dialog. This terminology is utilized because a dialog represents the maximum set of business functions that can be considered as a development and operational unit. Once that set of individual dialog instances is identified, each dialog is compared to existing dialogs as they are documented in a repository, to determine if they contain substantially the same functionality as previously developed dialogs. If an existing dialog is found that substantially performs the same functions as the one under investigation, that fact is noted and placed in the repository. Further consideration of dialogs so
identified is not necessary because their logical design already will have been completed. For those dialogs that do not have a corresponding existing dialog, the defining set of specifications is entered into the repository so that the logical design can proceed in the following spiral. On completion of this step, all dialogs for a given process map should be identified and documented through one of the given methods. For a given process, dialogs usually are considered to be asynchronous with respect to each other and are coupled through a workflow system as described in Chapter 15.
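The repository comparison can be approximated in many ways. The following sketch uses a simple Jaccard similarity over hypothetical function sets; the dialog names, attribute sets, and threshold are all illustrative and are not the book's pattern-matching technique.

```python
# Sketch: flag repository dialogs whose required functions substantially
# overlap a newly identified dialog, as reuse candidates. The 0.8 default
# threshold is an arbitrary illustrative choice.

def reuse_candidates(new_dialog, repository, threshold=0.8):
    matches = []
    for name, funcs in repository.items():
        overlap = len(new_dialog & funcs) / len(new_dialog | funcs)
        if overlap >= threshold:
            matches.append((name, round(overlap, 2)))
    return sorted(matches, key=lambda m: -m[1])

# Hypothetical repository and candidate dialog.
repository = {
    "ValidateOrder": {"check_credit", "check_stock", "price_quote"},
    "ShipOrder": {"pick", "pack", "label"},
}
new_dialog = {"check_credit", "check_stock", "price_quote", "log_audit"}

print(reuse_candidates(new_dialog, repository, threshold=0.7))
# [('ValidateOrder', 0.75)]
```

A near match such as this one would be documented along with the functional discrepancy (here, the extra `log_audit` function), as the activities in Section 19.6 require.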
19.4 Dialog map The last major activity of step 2 is verification that the dialog structure meets the business requirements of the defining process. That is accomplished by preparing a dialog map, which is the same as the process map except that dialogs are utilized instead of process steps. As such, it is a higher level abstraction of the process map. An example of a dialog map is presented in Figure 19.5. The purpose of this diagram is to ensure that the dialog relationships are reasonable and to allow easy identification of dialogs that are used multiple times in the process. It also is useful in determining if a realignment of the process would be of value in eliminating some of the multiple appearances of the dialogs. The dialog map also provides the foundation for the definition of the process workflow.
Figure 19.5: Example of a dialog map.
19.5 Prototype The dialog map is converted into a dialog prototype the same way the process map is converted into a process prototype. As such, the dialog prototype is considered a higher level form of the process prototype. Instead of animating the progression of the process steps, the process steps in each dialog are replaced by that dialog and the animation takes place on a dialog-by-dialog basis. The set of defined scenarios also is used as input to the procedure. The dialog prototype is animated via the scenarios to determine if the following are true: § The dialog partition is appropriate. § Dialog parallelism is optimized (each dialog is performed as early as possible, based on data, time, and state constraints). § Each dialog is able to have its input requirements satisfied. § The output of each dialog is used by at least one other dialog. This type of animation also provides a good indication—on a highly quantized basis—as to how the actual operation of the implemented process will occur using workflow techniques. There are other design steps necessary for a complete workflow implementation, but an initial feeling for the result can be obtained. Simulation techniques also can be used at this point, if the situation warrants. As with the process prototype, the dialog prototype can be performed manually or through the use of an automated tool. If the characteristics of the workforce that will be performing each dialog are known and a reasonable estimate of the business event statistics can be made, it may be useful to use the dialog prototype to simulate the process from a workflow perspective. That may uncover the need to alter the process map or dialog definitions (possibly through use of convenience partitions) earlier than during the actual definition of the workflow.
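Two of the four animation conditions listed above, input satisfaction and output usage, can be sketched at the dialog level. The dialog and flow names are illustrative.

```python
# Sketch: check that every dialog in an animated sequence has its inputs
# satisfied when reached, and that every dialog's output is consumed by
# some dialog in the sequence (terminal dialogs are exempted).

def check_dialog_sequence(sequence, needs, yields, terminal=frozenset()):
    available = set()
    for dialog in sequence:
        missing = needs.get(dialog, set()) - available
        assert not missing, f"{dialog} lacks inputs {sorted(missing)}"
        available |= yields.get(dialog, set())
    consumed = set().union(*(needs.get(d, set()) for d in sequence))
    for dialog in sequence:
        if dialog in terminal:
            continue
        unused = yields.get(dialog, set()) - consumed
        assert not unused, f"output of {dialog} is never used: {sorted(unused)}"

# Hypothetical three-dialog chain: D1 feeds D2, D2 feeds D3.
needs = {"D2": {"IF1"}, "D3": {"IF2"}}
yields = {"D1": {"IF1"}, "D2": {"IF2"}, "D3": {"IF9"}}

check_dialog_sequence(["D1", "D2", "D3"], needs, yields, terminal={"D3"})
print("dialog animation checks passed")
```

The remaining two conditions (appropriateness of the partition and optimal parallelism) require judgment and scheduling analysis and are not reducible to a check this simple.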
19.6 Activities There are 14 activities in this step arranged according to the sequence diagram in Figure 19.6. All the activities in this step are oriented toward identification of a set of dialogs that can be carried forward in the design process. This step is revisited whenever it is determined in a succeeding step that some change needs to be made in either the process map or the dialog definitions.
Figure 19.6: Activity sequence diagram. The following are brief descriptions of the individual activities needed to produce the results expected from this step in the methodology. All the activities must be considered whether this step is being invoked for the first time or as a result of an indicated change in another step. If it is being revisited, many of the activities will be simple and fast. They cannot, however, be skipped; there is always the chance that they will be needed to respond to the change. 1. Identify all organization partitions in the process map. Mark each partition with a unique identifier. Organization partitions must conform to the appropriate rules. Update the repository as required. 2. Identify all role partitions in the process map. Mark each partition with a unique identifier that includes the organization partition information. Role partitions must conform to the appropriate rules. Update the repository as required. 3. Identify all input transition partitions in the process map. Mark each partition with a unique identifier that includes the previous partition information. Input transition partitions must conform to the appropriate rules. Update the repository as required. 4. Identify all timebreak partitions in the process map. This activity requires significant knowledge about the intent of the process. Mark each partition with a unique identifier that includes the previous partition information. Timebreak partitions must conform to the appropriate rules. Update the repository as required. 5. Identify all convenience partitions in the process map. Mark each partition with a unique identifier that includes the previous partition information. Convenience partitions must be documented as to the reasons for their definition. Update the repository as required. 6. Specify final partition information. Investigate each resultant lowest level partition to ensure that it meets the rules specified for the final partitions.
A lowest level partition is any partition that has not been split into additional partitions. 7. Define each lowest level partition to be a dialog. This task is merely an administrative one, but it does signify the transition from a purely business-based view of the process into an initial technical view. An identifier should be attached to each dialog. The identifier system does not have to agree with that used for the process map partitions. 8. Prepare a dialog map and associated prototype. Identify any common dialogs and mark them as such. Only one instance of a common dialog set needs to be carried forward. The dialog prototype can be
implemented via paper or through an automated tool. Enter this information into the repository.
9. Test the dialog prototype using the scenario set. All the scenarios should be used to animate the dialog prototype to ensure that the sequencing is correct. Corrections may require changes to the process map and dialog definitions.
10. Demonstrate the prototype to stakeholders. Arrange a session with the stakeholders to observe the action of the initial or revised prototype and to determine conformity of the prototype operation to the needs of the business and the individual stakeholders.
11. For each dialog, identify overall attribute information. Document all the functional, user, technical, environmental, and operational specifications that are known for each dialog, so they can be used in subsequent steps. The repository should be used for this function.
12. Query the repository for dialogs with the same characteristics for a possible reuse opportunity. Using pattern-matching techniques, determine the agreement between the requirements of all current dialogs and previously defined dialogs documented in the repository. Determine if a suitable match between a current dialog and a previously defined dialog exists. If so, document the match and its characteristics. For example, it may be determined that a current dialog is a subset of a repository dialog. If the match is considered close enough, then that fact, along with the discrepancy in functionality, should be documented.
13. Obtain necessary approvals. If approvals are needed to continue beyond this step, obtain them before continuing. The dialog (process) prototype, the opinions of the stakeholders, and the hard deliverables from this step (dialog map, prototype animation results, and simulation results if appropriate) should be sufficient to demonstrate the suitability of the dialog definitions and the ability to proceed.
14. Enter new or updated dialog specifications into the repository.
All information obtained as a result of step 2 should be entered into a repository where it is available for future needs. Because maintenance is considered an integral part of the methodology, the information may be needed for a considerable length of time and may be useful to individuals other than those involved in the initial development.
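The repository query of activity 12 can be sketched mechanically. The sketch below is an illustrative assumption rather than part of the methodology itself: it reduces a dialog's characteristics to a flat set of tags and scores the overlap with stored dialogs, flagging the subset relationship mentioned above. A real repository would hold structured attribute data and a richer matching procedure.

```python
from dataclasses import dataclass

@dataclass
class Dialog:
    dialog_id: str
    # Characteristics simplified to a flat set of tags for illustration.
    characteristics: frozenset

def find_reuse_candidates(current, repository, threshold=0.6):
    """Return (dialog, score, relation) tuples for repository dialogs whose
    characteristics overlap the current dialog's closely enough to document."""
    matches = []
    for stored in repository:
        common = current.characteristics & stored.characteristics
        union = current.characteristics | stored.characteristics
        score = len(common) / len(union) if union else 0.0
        if current.characteristics <= stored.characteristics:
            relation = "subset"    # current dialog is contained in the stored one
        elif current.characteristics >= stored.characteristics:
            relation = "superset"
        else:
            relation = "partial"
        if score >= threshold or relation == "subset":
            matches.append((stored, score, relation))
    # Best matches first, so the closest reuse candidate is examined first.
    return sorted(matches, key=lambda m: m[1], reverse=True)
```

A subset match is reported even below the score threshold, since the text singles out "a current dialog is a subset of a repository dialog" as a reuse case worth documenting along with the functionality discrepancy.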
19.7 Linkages to other steps
Step 2 begins the transition from a business-oriented view of the process to a technically oriented one in preparation for the design and implementation steps. As such, discrepancies and opportunities to improve the current process map inevitably surface as the activities of step 2 proceed. Such changes require a transition to step 1, where those updates can be made. When that step completes and the process spiral has therefore produced an acceptable map, a transition is made to step 3 (logical design spiral), where the actions are specified, and to step 6 (workflow spiral), where the workflow is designed and specified. Steps 3 and 6 can be performed in parallel, although changes in either may require step 2 to be reinvoked with subsequent reinvocations of steps 3 and 6.
19.8 Postconditions
Step 2 is completed and can be terminated for a given development when the following information or conditions are present as a result of the current step invocation:
§ All step activities have been considered at least once.
§ A set of dialogs has been identified.
§ The dialog prototype is available.
§ Appropriate animation (and simulation if indicated) of the dialog prototype has been performed using the scenarios.
§ The business and technical stakeholders have been involved as needed and agree with the dialog definitions.
§ All relevant dialog information has been entered into the appropriate repository and any updates have been verified.
§ Any necessary approvals have been obtained.
At the conclusion of this step, all affected stakeholders must agree that the process, as it is currently defined and represented, is the best that can be accomplished prior to utilizing the other methodology spirals to provide additional details and analysis. The information may indicate that further refinement of the process and its representation is necessary.
Chapter 20: Step 3: Specify actions
20.1 Purpose
Step 3 activities decompose the process steps in each dialog into atomic units of business functionality. These atomic units are the business actions (defined in Chapter 13) required to perform the specified functionality. This step also determines any other actions that must be added to the business-oriented ones, including those needed for operational, administrative, and control purposes. Most of the actions are common to all the dialogs, although there always will be some that are unique to a given dialog. In addition, for each specified action, a determination is made as to whether the action will be (1) implemented using some type of automation or (2) performed by a human.
The specifications of the resulting actions are matched against existing actions in the repository to identify any previously specified actions that can be reused. The detailed functionality for the transaction and the support tasks that each action comprises are then specified for all nonmatched actions or parts of an action. All resultant actions are then examined for common characteristics and combined into composite actions where possible. Combinations of actions are of particular importance when a COTS or legacy solution is indicated, because functions in those types of products usually are bundled in some way and cannot easily be partitioned.
When the set of actions in a dialog has been completely specified using proposed new or reused functionality, the actions along with their included transactions and support tasks are verified to ensure they correctly reflect the dialog requirements. When all the verification/validation activities have been performed, the actions and their components are entered into the working repository. On completion of this step, all actions, including their transactions and support tasks, are identified, specified, and documented.
Actions are considered a part of a dialog as a whole and not part of an individual step from which they initially may have been derived. That consideration is necessary to utilize the dialog as the entity that is common to all the design and implementation activities. The identified actions constitute a logical specification that may then be implemented using reusable software components.
20.2 Preconditions
The following items must be available before work on the activities in step 3 can be initiated. It is also assumed that any result from an earlier step or spiral can be used to provide guidance or background information in addition to those items explicitly listed in this section.
§ Process map with marked dialogs;
§ Dialog map;
§ Scenarios;
§ Data elements from logical data model;
§ For each dialog:
o Unique identifier;
o Functional description;
o Initiation criteria;
o Initial operational specifications as available: performance, security, throughput, number of performers and assignment method, location (logical), recovery and exception handling, statistics, tracking and logging needs, measurements for operational management, external (to the enterprise) interactions;
§ List of all implicit decisions that were necessary to complete step 2 activities;
§ List of technology enablers;
§ Company technology standards, practices, and policies.
20.3 Description Step 3 concentrates on the development of actions that result directly from the process functionality requirements. It is assumed that the common actions of the dialog needed for support activities already have been defined. The step activities do not explicitly consider the development of common actions. They are implicit in the specification of the support/administrative actions needed by the dialog implementation. Process-specific actions in other than the business category (administrative) are, however, specified explicitly in this step. Their specification does not result directly from the functionality required by the business process but from the characteristics of the dialog or as an artifact of dialog design. Decomposition, as it is used in this discussion, refers only to the process of identifying the actions resulting from the business process functionality needs. It is also probable that the activities of subsequent steps will determine a need for additional actions. If that occurs, step 3 must be reinvoked to ensure that any new actions are defined and documented correctly. Also, it will be necessary to use the updated action prototype to demonstrate that the new actions interact correctly with the existing actions. Although such verification usually can be done quickly, the activity cannot be skipped without possibly compromising the quality of the finished product. Because the dialog is the basic unit of development, the decomposition is performed on a dialog-by-dialog basis. The entire set of dialogs needed to implement an entire process or any part of a process can be considered together, or the decomposition can proceed using only a single dialog. In either case, the basic approach is the same. The only difference is that, for multiple dialogs considered together, it is necessary to identify any actions that are common to more than one dialog. 
That helps prevent the unnecessary work inherent in analyzing multiple copies of the same action in this and subsequent steps.
20.3.1 Constrained decomposition
The decomposition method is critical if the methodology is to maximize the reuse of existing actions. In most methodologies, the decomposition process utilizes a method called functional decomposition, which is inherently a top-down technique. Many approaches to this type of decomposition have been developed and are utilized in popular methodologies. With functional decomposition, a function is decomposed into segments by following some criterion such as the number of decomposition levels, the number of resulting segments, the relationship between the segments, the size of the segments, or the data needs of the segments. The resulting segments are then decomposed into smaller segments using the same criterion. The decomposition process continues until the individual segments no longer can be decomposed within the allowable criterion. The major problem with this type of unconstrained decomposition is that, after a number of decomposition levels, the functionality inherent in a segment cannot be predicted a priori. That means reuse of the segment functionality resulting
from one decomposition to satisfy the requirements of the segments from another decomposition almost always is impossible. To eliminate that problem, a method known as constrained decomposition is utilized. Constrained decomposition is illustrated in Figure 20.1. This type of decomposition utilizes a combination of top-down and bottom-up approaches. In the procedure, a target set of elements is defined. The specification of the target set results from an analysis of the fundamental needs that must be satisfied, the bottom-up part of the method. The target set must have a manageable number of elements, and each element must be structured, specified, and documented in a standard way. The ideal case would be for the target set to cover all required business functions. In practice, of course, that cannot be achieved. However, as the number of developments that use the target set of components increases, additions and changes to the target set usually allow the set to come closer to the ideal.
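The constrained decomposition procedure can be sketched as a small recursive routine. The representation below — segments as plain strings and the top-down decomposition knowledge as a rule table — is an illustrative assumption; it shows the decomposition bottoming out only on target-set elements, with gaps in the target set handled by postulating new elements, as the text describes.

```python
def constrained_decompose(segment, rules, target_set):
    """Recursively decompose `segment` until every leaf is an element of
    `target_set`.  A leaf with no decomposition rule that is still outside
    the target set represents a gap, so it is postulated as a new element
    (returned separately for review)."""
    leaves, postulated = [], []

    def walk(seg):
        if seg in target_set:
            leaves.append(seg)        # bottom-up: matched an existing element
        elif seg in rules:
            for child in rules[seg]:  # top-down: split along a known rule
                walk(child)
        else:
            postulated.append(seg)    # gap in the target set: propose new element
            leaves.append(seg)

    walk(segment)
    return leaves, postulated
```

In practice the rule table is the analyst's (or a tool's) knowledge of how a function splits, and a nonempty `postulated` list signals that the target set must be extended with nonoverlapping elements before the decomposition can be accepted.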
Figure 20.1: Constrained decomposition procedure.
The top-down part of constrained decomposition is a decomposition of the required functionality performed in such a way as to restrict the functionality of the segments resulting from the decomposition to that contained by individual elements of the target set. If that cannot be accomplished because the target set functionality is not complete, a new element of the target set is postulated that (1) has the same structure as those of the other elements and (2) does not contain any overlapping functionality with existing elements. By successive decompositions of this form, each functionality segment eventually consists of one and only one existing element of the target set. In that way, potential reuse of the elements that will implement the target set is greatly enhanced.
20.3.2 Revised decomposition method
Because of the potentially large number of individual actions that could be part of the target set, the classical constrained decomposition method is not practical and a variation of the procedure is used in the PRIME methodology. The variation results in a small target set but requires two separate steps to identify an existing action. The revised decomposition method is illustrated in Figure 20.2.
Figure 20.2: Revised constrained decomposition.
The revised target set consists of the individual action types instead of the actions themselves. There are just 16 unique action types, which makes the constrained decomposition procedure quite manageable. Each functional segment that results from the decomposition must have the form and specification of one of the 16 action types, but the details of the action specifications can vary. The identification of an implemented action of the identified type that has the same specifications as a given segment is accomplished by a matching procedure and is the second step of the identification procedure. (The matching procedure is discussed in Section 20.3.3.)
It is relatively easy to know when a segment has the form of a single existing action type. It is more difficult in the middle levels of decomposition to determine if the segments consist of a set of action types. That can result in an iterative process, the number of iterations depending on the experience and knowledge of the person performing the decomposition and the capabilities of any automated tool support. If the result of a particular branch of the decomposition does not yield segments that consist of target set elements, it is necessary to retrace the decomposition to determine at which level the decomposition failed and to make the appropriate adjustments. Although that may seem difficult, in practice it is easy to learn and follow if the target set elements are structured properly. An assist from a knowledge-based (artificial intelligence) tool would be of considerable help in performing this type of decomposition. A detailed example of constrained decomposition as applied to the identification of actions is provided in Section 20.3.4.
20.3.3 Action identification
The decomposition process starts by individually decomposing all the process steps in a dialog using the action structure for each component until a decomposition no longer can be effected.
Each remaining segment is in the form of one of the action types and therefore can be labeled as an action. For each such action, the constituent parts (transaction and support functions) are specified to further characterize the action. The action must then be designated as an existing or a proposed action. If the action cannot be designated that way (remember that a proposed action cannot overlap any existing actions in functionality), the decomposition branch must be reconsidered.
This description assumes that the set of action types is complete. While no exceptions have been found to date, it certainly is possible that gaps do exist. To accommodate that possibility, it is necessary to determine if a decomposition can be effected using the existing action types. If that is shown not to be possible, then an update to the action types must be considered to properly decompose the functional segment of interest.
Actions that have certain common characteristics (e.g., access to the same logical datastore) sometimes can be combined to form other actions. The recombination is, in effect, the result of a combined decomposition of multiple process steps. Because a top-down decomposition of combined steps would be difficult to perform efficiently, this recombination activity is one way in which some advantages of a combined decomposition can be obtained without inordinate effort.
For actions that match with existing actions, because they are already available, no further specification is needed after that fact is documented. For proposed actions, the logical design continues with the detailed documentation and specification of each constituent part of the action. The information must be entered into the proper repository for additional utilization as needed. When that is accomplished, the set of actions is validated and verified against the needs of the dialog and the step can terminate.
As can be partially inferred from the discussion, once the business activities inherent in the process steps have been determined, they can be manipulated and utilized independently of the specific process steps from which they initially were derived. That confirms the previous assertion that actions are much more closely associated with the dialog than with the process steps. Although it is always possible to associate a given action with the process step(s) from which it was originally derived, a fact that can be used in the validation and verification of the action decomposition and specification, this ability generally is not needed in any subsequent steps of the methodology.
20.3.4 An example of decomposition
This section contains an example of action identification using constrained decomposition. It is included to illustrate the types of considerations that must be utilized in determining a suitable action set for a dialog. Because of its limited scope, it is not possible to include all the possible nuances and reasoning that must go into a real process development. It does, however, provide a reasonable feeling for the procedure.
Consider the following process step description: For each item ordered, update the inventory and backorder any unavailable units if desired by the customer. That step can be decomposed as follows. The action types included in the decomposition component are shown in parentheses. Note that the format of the decomposition components is the same as that defined for an action.
Decomposition level 1. For each item ordered (they all can be done in parallel using appropriate launch criteria):
1. Query the inventory to determine if the number of units ordered are available (shared data retrieve, data transformation).
2. If the units are available, remove them from inventory and assign them to the current order (two shared data updates).
3. If the units are not available (all or in part), remove the number available and assign them to the order (two shared data updates).
4. Backorder those units not available if desired by the customer (shared data create, human interface data retrieve).
Decomposition level 2. For each item ordered:
1(a). Query the inventory to determine the number of units available (shared data retrieve).
1(b). Subtract the number of units ordered from the number of units available to form the results number (data transformation).
2. If the results number is not negative:
a. Assign the number of units ordered to the order (shared data update).
b. Set the number of units available to the results number (shared data update).
3. If the results number is negative:
a. Assign the value of the number of units available to the order (shared data update).
b. Set the number of units available to 0 (shared data update).
4. If the results number is negative:
a. Determine if the customer wants to backorder the unavailable units by setting backorder = y or n (human interface data retrieve).
b. If backorder = y, backorder the absolute value of the results number (shared data create).
Eight segments, each with the form of a single action type, were formed from the original step functionality description. Each individual action description would then be further detailed by determining the specifications of the transactions and support tasks, including the needed data elements. The detailed specifications of the required actions are compared with the specifications of actions that have been previously implemented.
In the example case, two levels of decomposition were required to reach the goal of a single action type. For any given decomposition, the number of levels can vary but rarely exceeds three. If a large number of levels are required, the process step probably is overly complex and should be partitioned into multiple steps. That would require a reinvocation of step 1.
Note that several of the action transactions and support tasks can be reused. For example, actions 2(a) and 3(a) both use the same transaction to assign an amount of inventory to the order. The difference is in the formulation of the request that uses different variables to supply the assigned number of units. The transaction of each action would utilize the same software component. Likewise, many of the actions have the same launch criteria. If the response validation and verification and result dissemination support tasks also were included in the example, the same type of reuse would occur for those components.
This example used a single process step as a starting point. In practice, actions would be developed for all the steps in a dialog. The result would be many more actions and an even more pronounced amount of reuse.
The individual actions could then be examined for opportunities to combine them, change their functional definitions and data, and make other alterations to improve the operation of the dialog or further improve reuse. In addition, the decomposition of another process, say, one concerned with material receiving, also can reuse many of the given action transactions and support functions (e.g., the “set the number of units available” transaction). That ability to reuse actions and parts of actions from dialog to dialog and process to process is the strength of the PRIME constrained decomposition approach. As an additional motivational source, it is useful to compare briefly the constrained decomposition approach to an unconstrained one. By way of illustration of what an unconstrained decomposition can produce, consider the following results obtained by using an ad hoc method on the above example. 1. Determine inventory status; 2. Determine backorder need; 3. Reduce inventory; 4. Backorder units; 5. Progress order. Each decomposition component can be reasonably obtained from the initial functional description. There is, however, very little that can be said from a reuse or implementation perspective. The decomposition of a part of another process, such as the material receiving example, might produce a decomposition such as: 1. Increase inventory; 2. Satisfy backorders. Compared with the constrained decomposition, it is not clear how this decomposition could utilize software produced by the previous process implementation. That is the fundamental problem with any method of unconstrained decomposition.
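The level-2 decomposition above can be mirrored directly in code, with each statement standing in for one of the eight actions and a comment naming its action type. The function and data-structure names are illustrative assumptions made for this sketch; in the methodology, each commented line would be an independently specified (and potentially reusable) action, not inline code.

```python
def query_available(inventory, item):
    # 1(a) shared data retrieve: number of units available
    return inventory[item]

def compute_results_number(available, ordered):
    # 1(b) data transformation: form the results number
    return available - ordered

def process_order_item(inventory, order, item, ordered, backorders, wants_backorder):
    """Run the eight example actions for one ordered item."""
    available = query_available(inventory, item)
    results = compute_results_number(available, ordered)
    if results >= 0:
        order[item] = ordered        # 2(a) shared data update: assign units to order
        inventory[item] = results    # 2(b) shared data update: reduce availability
    else:
        order[item] = available      # 3(a) shared data update: assign what is left
        inventory[item] = 0          # 3(b) shared data update: availability to 0
        # 4(a) human interface data retrieve is modeled by the wants_backorder flag
        if wants_backorder:
            backorders[item] = abs(results)  # 4(b) shared data create: backorder
```

Note how 2(a)/3(a) and 2(b)/3(b) perform the same update with different request values, which is exactly the transaction-level reuse the text points out.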
20.3.5 Action class
It should be noted that the preceding discussion is independent of the eventual implementation of an action through human or automated effort. That determination is made after the actions have been completely specified. It is important that the human-versus-automated action implementation decision be made independently of the decomposition and specification process. Each action in the dialog must be specified and its relationships with the other actions determined before the information needed in this type of decision can be analyzed effectively and a determination made. The postponement and level (action versus process step) of the human-versus-automated decision are among the major areas in which the PRIME methodology differs from many other common methodologies.
The definitions of the values of the action class are as follows:
§ Automated action class is an action performed entirely by nonhuman resources and activity.
§ Human action class is an action performed entirely by human activity using nonautomated resources.
Each action must be classified as either human or automated using those definitions. Actions that cannot be automated due to management, environmental, or technological constraints are identified as human. Actions closely coupled with those specified human actions also are reviewed, and, based on their relationship with the required human actions, a determination is made if they also must be performed by a human. The remaining actions are marked as automated, and technology enablers are identified for each automated action. Multiple passes may be made through this step as additional constraints become known. The goal is to automate as many actions as possible. For convenience, actions with a human class are commonly called manual actions, and actions with an automated class are called automated actions. Prior to the class decision, actions are defined independently of their human/automated characteristics.
Only the required functionality and associated data are utilized in the determination of the needed actions. Such separation facilitates the changing of action status to reflect new business or technology conditions. That can occur at any time during the development or even after the process has been deployed for some time. Changes to the specification of the human/automated characteristics of an action can arise from many sources. The most obvious one is the determination by management that an action must be human, at least for the initial implementation of the process, because of the development resources required to produce a suitable automated function. Even if a process has been deployed using a manual action, the standard action framework makes it relatively easy to convert the manual action to an automated one when the development resources or enabling technology becomes available. Going from an automated action to a manual one is also relatively easy, although that direction is thought to be a somewhat rarer event than the other direction.
An action must be performed by a human or automated in its entirety. If that cannot be accomplished for a given action, the action must be subdivided into new actions until that requirement is met. If partitioning is necessary, the beginning of the step must be reinvoked to ensure that the new actions meet the requirements inherent in this step. Mixed human and automated actions are not allowed because of the definition (based on reuse considerations) of an action as an atomic or undivided function of the methodology. Because a human interface generally is required at each transition between a human action and an automated action (explained in Chapter 23), a split action would result. Thus, allowing mixed actions would inherently result in compound functionality and violate the atomic action requirement.
For that same reason, the support task and transaction parts of an action must have the same human/automated characteristics. The imposition of a human interface between
the support tasks and the transaction would change the basic structure of an action and result in a complex and difficult-to-manage framework. As a result of the class specification, it may be necessary to partition existing dialogs into two or more dialogs if the interaction between the automated and manual actions or between the elements of a sequence of manual actions is such that the imposition of a timebreak becomes necessary. Those derived timebreaks must be handled in the same way as are those defined as part of the process spiral. In fact, the process spiral (and subsequently the logical design spiral) must be reinvoked to include the derived timebreaks into the dialog specifications. As an example of a derived timebreak, consider a manual action with the following description: Dispatch a truck and let the supervisor know when it arrives. That should be partitioned into three manual actions:
1. Dispatch a truck.
2. Receive a radio call when the truck arrives.
3. Notify the supervisor when the arrival radio call has been received.
Because there may be a considerable length of time between the dispatch of the truck and when it arrives on site, this is an implicit timebreak. There probably will be very little time between getting the radio call and notifying the supervisor, so that would not qualify as a timebreak.
There may be some dialogs for which there are no human actions. Such automated dialogs can be assigned to an automated role as defined and discussed in Chapter 10. The requirement that at least one of the three possible characteristics for an automated role be satisfied will always be met by a role that is used to perform an automated dialog. In fact, with few exceptions, all three characteristics will be satisfied.
20.3.6 Action template
Figure 20.3 is an example of the type of template that is useful in documenting the information needed to characterize an action. In essence, it structures the information needed by each of the parts of the action. All the information contained may not be available from this step and will be added in a subsequent step (e.g., the software component that provides the specified functionality). If this is a reinvocation of the step, the information may have been identified and put into the template. In that case, the continued validity of the information can be questioned if changes are required.
Figure 20.3: Example of action template.
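Since Figure 20.3 is not reproduced here, the sketch below illustrates how such a template might be recorded, together with the class-uniformity rule of Section 20.3.5 (an action's transaction and support tasks must share one class). The field names and the check are illustrative assumptions, not a transcription of the figure.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ActionTemplate:
    action_id: str
    action_type: str                    # one of the 16 action types
    category: str                       # e.g., "business" or "support"
    functional_description: str = ""
    launch_criteria: str = ""
    data_elements: list = field(default_factory=list)  # from the logical data model
    transaction_class: Optional[str] = None   # "human"/"automated"; decided later
    support_task_classes: list = field(default_factory=list)
    software_component: Optional[str] = None  # filled in by a subsequent step

    def is_mixed(self):
        """True if the transaction and support tasks do not all share one
        class; such an action must be subdivided before proceeding."""
        classes = {c for c in [self.transaction_class, *self.support_task_classes]
                   if c is not None}
        return len(classes) > 1
```

Leaving the class and software-component fields unset at creation mirrors the text's point that those decisions are deliberately postponed until after the action is fully specified.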
20.4 Prototype
The action prototype is developed or updated as a part of this step. The purpose of the prototype is to present the sequence of actions in response to a scenario. In the initial invocation of step 3, the prototype is first constructed; in subsequent iterations, the prototype information is updated as necessary. The actions are highlighted and their characteristics displayed as desired in the order that their launch criteria allow. The general format of the prototype output was illustrated in Chapter 13 and is not repeated here. As stated in that chapter, the particular characteristics of the prototype depend on the tool selected to implement it. Because of this wide diversity, an actual design of the prototype is not developed in this document but is left as an implementation issue.
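Although the prototype's design is left to the implementer, its core behavior — firing actions in the order their launch criteria allow as a scenario unfolds — can be sketched minimally. Representing a launch criterion as a (triggering event, prerequisite actions) pair is an assumption made purely for illustration.

```python
def animate(actions, scenario_events):
    """Step through a scenario, firing each action whose launch criteria
    (a triggering event plus a set of prerequisite action IDs) are met.
    Returns the firing order so sequencing can be checked against the scenario."""
    fired, order = set(), []
    for event in scenario_events:
        progressed = True
        while progressed:           # keep firing until no launch criteria are met
            progressed = False
            for action_id, (trigger, prereqs) in actions.items():
                if action_id in fired:
                    continue
                if trigger == event and prereqs <= fired:
                    fired.add(action_id)
                    order.append(action_id)
                    progressed = True
    return order
```

Animating every scenario against such a model and comparing the returned order with the expected sequencing is the kind of check activity 9 of step 2 (and the step 3 prototype) calls for.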
20.5 Activities
The 12 activities in this step are arranged according to the diagram in Figure 20.4. The activities can be performed either manually or with an appropriate automated tool, if one is available. However, it is strongly recommended that automated tool support be used, because it is difficult and tedious to perform these activities using manual means.
Figure 20.4: Diagram of activity sequence.
The activities in step 3 are contingent on completion of the activities of the process spiral for any dialogs under consideration in this step. Because the unit of development is a dialog, this step can be initiated as soon as a dialog has completed that spiral. Step 3 must be reinitiated if another step changes or adds an action to a dialog. The purpose of the step activities is to specify a set of actions that can provide the functionality required by each dialog and to determine if any of those actions or their constituent parts has been previously defined. That continues the reuse of previously defined dialogs at the action and action component levels.
The following are brief descriptions of each of the individual activities needed to produce the results expected from step 3 in the methodology. All these activities must be considered whether this step is being invoked for the first time or as a result of an indicated change in another step. If it is being revisited, many of the activities will be simple and fast. They cannot, however, be skipped because there is always the chance that they will be needed to respond to the change.
1. Perform a constrained decomposition starting with the dialog steps. For each business process step in a dialog, decompose its functionality, using the defined action structure, until every remaining component consists of only one possible action. Analyze the results for opportunities to improve efficiency. The resulting actions then become part of the business category. For each identified action, assign an action ID and document the functional description. If a decomposition cannot be found for a given component, updates to the action structure may be necessary (i.e., additional action categories or action types may need to be added). After that is accomplished, the decomposition should proceed as previously defined. (This event should be rare.)
2. Identify support/administrative actions.
Using the initial technical specifications of the dialog, identify all dialog-specific support action categories required for the dialog. Again using the technical specification information, decompose the implied functionality by category, using the defined action structure, until every remaining component consists of only one possible action. For each support action, assign an action ID and document the functional description. 3. Determine the class of each action. For each action, determine if it is to use automated or human means of operation. First, assume that all actions are automated. Then examine each action and identify and document as human those actions that cannot be automated for one of the following reasons: technological constraints, environmental constraints, business requirements, system requirements (e.g., performance, throughput), or a consideration of the inherent capabilities of humans and machines. A dialog with no human actions must be assigned to an automated role. That may mean adding a new role to the process and dialog maps, requiring a revisitation of the process spiral and steps 1 and 2.
For each action (human or automatic), verify that all support tasks in the action framework are of the same class as the included transaction. If a support task has both human and automated components, decompose the action into separate actions such that each action is entirely human or automated. Return to the beginning of the step if this occurs. Specify the technology enablers for each automated action. For most actions, this will be short and simple. However, for those actions that require projected or emerging technology, this specification may require significant discussion. 4. For each human action class, identify implicit timebreaks. For each human action (or contiguous set of human actions), determine whether the human action requires the addition of a timebreak or a role shift, thereby causing the dialog to be split into two or more dialogs. Return to step 1 if that occurs. 5. Create/update action templates. For every defined action (again excluding common actions), using the action characteristics template, characterize the action by entering all the available information into the action template. The data elements specified in the template should conform to the logical data model if it exists. 6. Combine actions where possible. Actions can be combined if they have the same launch criteria and transaction software component target. That usually is not known until at least one invocation of the physical design spiral has been accomplished, unless the action has been previously defined and is being reused. Combinations of this type are an infrequent occurrence, but they can and do happen and should be accommodated for efficiency considerations. 7. Query the repository for actions with similar characteristics (reuse). Using a pattern-matching algorithm, determine the agreement between the characteristics of each action in the dialog and the characteristics of existing actions in the repository.
Depending on the matching criteria used, an action may have zero, one, or multiple possible matches. Matches of the individual action components also should be considered. In many cases, actions will match except for the launch criteria. This is an excellent result and should be used when available. On the basis of the results of the matching process, determine if a suitable full or partial match between each action and an existing action exists. If so, document the selected match and its characteristics. For example, it may be determined that a functionality match exists but that the match between the transaction component technical specifications (e.g., response time) is not exact. The action could then be used with some sacrifice in resultant capability. If the match is considered close enough, then that fact, along with the discrepancy in specifications, should be documented. Update the action characteristics template as necessary to reflect the accepted match. For any action components that do not have a suitable map to an existing action component, mark them as needing to be implemented. 8. Program the action prototype tool using the action template information. Using the information contained in the action templates, program the prototype tool. For the first invocation of this step, that will require the entry of information for all the actions in the dialog. For subsequent invocations, only the changes need be entered. That implies that the prototype remains in existence throughout the life cycle of the process. Use of the repository for this function is recommended, as described in activity 12. 9. Animate the action prototype using the scenarios. Verify that the defined set of actions (including transactions and support tasks) can provide the functionality needed for each dialog. Such validation/verification of the action decomposition and specification is performed by using the logical design prototype.
This activity is at the heart of the logical design and must be done carefully and completely using all the available scenarios.
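The verification performed during animation can be pictured as a coverage check: every step of every scenario must be satisfiable by some action in the dialog's action set. The following is a minimal sketch of that idea, reducing actions and scenario steps to simple capability tags; all names and structures here are illustrative, not part of the methodology itself.

```python
# Minimal sketch of activity 9's verification: confirm that every step of
# every scenario can be provided by at least one action in the dialog.
# Actions and scenario steps are reduced to capability tags for illustration.

def animate(dialog_actions, scenarios):
    """Return a list of (scenario, step) pairs that no action can satisfy."""
    provided = set()
    for action in dialog_actions:
        provided.update(action["capabilities"])
    gaps = []
    for name, steps in scenarios.items():
        for step in steps:
            if step not in provided:
                gaps.append((name, step))
    return gaps

# Hypothetical action set and scenario suite for one dialog.
actions = [
    {"id": "A-101", "capabilities": {"validate-order", "price-order"}},
    {"id": "A-102", "capabilities": {"reserve-stock"}},
]
scenarios = {
    "normal-order": ["validate-order", "price-order", "reserve-stock"],
    "rush-order":   ["validate-order", "expedite-shipping"],
}

gaps = animate(actions, scenarios)
# Any gap means the action set cannot yet provide the dialog's functionality
# and the decomposition must be revisited.
```

In this toy run, the rush-order scenario exposes a missing capability, which is exactly the kind of finding that would send the developer back to the decomposition activities.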
10. Demonstrate action prototype to stakeholders. Arrange a session with the stakeholders to observe the operation of the initial or revised prototype and determine the conformity of the prototype operation to the needs of the business and the individual stakeholders. At the end of this step, the stakeholders must determine if the action definitions, functionality, and relative sequences meet the needs of the business process being implemented. That involves comparison of the animation of the process steps with the animation of the actions. Because the functionality and other specifications of both the process steps and actions are available, this comparison certainly is possible, although somewhat subjective at this point. If the two are not compatible, the actions or process steps must be reexamined to determine the cause. Assuming that the comparison produces an agreement between the two, the larger question, which is the same for all prototype uses, must be addressed: Given a feeling for the process implementation that the action prototype provides, do the associated process steps really meet the underlying business need? As the prototypes become more sophisticated and closer to the final product, that question becomes easier to answer. However, it is desirable to get an answer at the earliest possible time to reduce rework and save development time. Even at this early stage, some feelings for the implementation can be obtained and the question at least asked, if not answered. If and when the answer to the question is “No,” the process spiral must be reinitiated and the difficulties resolved before continuing. 11. Obtain necessary approvals. If approvals are needed to continue beyond step 3, they need to be obtained before proceeding. 
The action prototype, the opinions of the stakeholders, and the hard deliverables from this step (action templates, prototype animation results) should be sufficient to demonstrate the suitability of the action definitions and the ability to proceed. 12. Enter new or updated action specifications into the repository. All information obtained as a result of step 3 should be entered into a repository where it is available for future needs. Because maintenance is considered an integral part of the methodology, the information may be needed for a considerable length of time and may be useful to individuals other than those involved in the initial development.
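The repository query in activity 7 amounts to scoring each candidate action against the characteristics of stored actions. The sketch below assumes characteristics are flat attribute dictionaries and uses a simple overlap ratio as the matching criterion; a production tool would use a richer, knowledge-based match, and every name here is hypothetical.

```python
# Illustrative sketch of activity 7: score a candidate action against
# repository actions by the fraction of characteristics that agree.
# A match on everything but the launch criteria scores high, which the
# text notes is an excellent reuse result.

def match_score(candidate, existing):
    """Fraction of attribute keys on which the two actions agree."""
    keys = set(candidate) | set(existing)
    agree = sum(1 for k in keys if candidate.get(k) == existing.get(k))
    return agree / len(keys)

def find_matches(candidate, repository, threshold=0.75):
    """Return (action_id, score) pairs at or above the threshold, best first."""
    scored = [(aid, match_score(candidate, chars))
              for aid, chars in repository.items()]
    return sorted([m for m in scored if m[1] >= threshold],
                  key=lambda m: m[1], reverse=True)

# Hypothetical action characteristics, matching except for launch criteria.
new_action = {"function": "retrieve-customer", "data": "customer-record",
              "launch": "on-request", "response-time": "2s"}
repository = {
    "A-007": {"function": "retrieve-customer", "data": "customer-record",
              "launch": "on-schedule", "response-time": "2s"},
    "A-019": {"function": "update-customer", "data": "customer-record",
              "launch": "on-request", "response-time": "5s"},
}

matches = find_matches(new_action, repository)
```

Here A-007 would surface as a partial match whose only discrepancy (the launch criteria) should be documented in the action template, as the activity description requires.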
20.6 Linkages to other steps Step 3 must be invoked whenever a change is needed in the set of actions or their specifications. Changes in the action set could result from activities in any step of any spiral and consequently cause a direct transition to this step or to other steps that may be affected (e.g., step 1) before this step is reinvoked. The exact transition sequence depends on the circumstances. If the changes are relatively small, the time required for a reinvocation of step 3 should also be relatively short, although all the activities would need to be utilized at least once during invocation of this step. The usual transition to step 3 is directly from step 2, where the dialogs are identified and specified. The initial specification or changes to the specification of the dialogs always require an analysis of the needed actions. There are three possible transitions from step 3 upon its completion. One is to step 4, where mapping of the actions to available software components is considered. The second is to step 5, where the human interfaces are designed and developed. The third transition is to step 6, which occurs when the action specifications affect the workflow design. All those subsequent steps can be performed in parallel, although changes in any step may require step 3 to be reinvoked with subsequent reinvocations of steps 4, 5, and 6.
If step 3 raises questions concerning the process map or dialog definitions, a noncompletion transition to step 1 must be made and the process spiral reiterated until those questions are resolved. A number of reinvocations of step 1 almost certainly will be made from step 3 because the increased level of detail made available through the step activities will uncover process and dialog problems.
20.7 Postconditions Step 3 is complete and may be terminated for a given development when the following information and conditions are present as a result of the current step invocation. § All step activities have been considered at least once. § A set of actions, as complete as available knowledge permits, has been defined for each dialog. § A determination of which existing actions can be reused has been made. § An action prototype is available. § A completed action template is available for each action. The software component information is not required. § Appropriate animation of the action prototype has been performed and the results verified. § The business and technical stakeholders have been involved as needed and agree with the results of the action animation. It is not necessary that the business-oriented stakeholders review the action definitions (they are considered part of the detailed design). § All relevant action information has been entered into the appropriate repository and updates verified. § Any necessary approvals have been obtained. At the conclusion of step 3, all affected stakeholders must agree that the action animation, as it is currently defined, accurately depicts the intended operation of the process. The results of this step may indicate that further refinement of the process is necessary.
Chapter 21: Step 4: Map actions 21.1 Purpose The purpose of step 4 is to map the action specifications of a dialog to the physical entities that will implement them. Actions with an automated class are mapped to existing software components, and those with a human class are mapped to a policy, procedure, or other instructional publication, which may be paper-based or in a machine-readable form. The goal is to map all the actions to existing entities to reuse as much of the available software and instructional material as possible. If no existing entities can be utilized to implement one or more parts of an action, then specifications for them must be developed so they can be implemented and used for the current process development as well as future ones. The activities in step 4 map the parts of the automated actions to existing or proposed physical components that have the needed functionality. Transactions are mapped to sharable (external) software components, while support functions are mapped to software components (internal) that are not shared but that can be reproduced and changed to fit the specific needs of the action. External software components are accessed with request messages and reply with a corresponding response message. The message pairs, as well as the software component functionality, must conform to the requirements of the action. Unless specifically stated, the message pair specification is considered part of the software
component it accesses. Internal software components are accessed through any mechanism that is supported by the action execution environment (e.g., subroutine calls). Existing components are analyzed to determine how much of the functionality needed by the action specifications can be met through their use. The definition of external software components includes available COTS products and legacy systems as well as functionality developed specifically for the process implementation. New or augmented software components are then proposed to provide required transaction functionality not covered by existing ones. A similar procedure is utilized to obtain the functionality required by the support functions. The provisioning of any proposed internal or external software component is subject to management approval, and its availability depends on the resources allocated. In a similar fashion, other activities in this step map the human action(s) to existing instructional publications (policies, practices, and procedures). In essence, they are a form of the business rules utilized by the enterprise. Existing publications are analyzed to determine how much of the functionality of new human actions can be met by existing policies, practices, and procedures. New or augmented publications are then proposed for functionality not covered by existing policies, practices, and procedures.
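The distinction between external and internal access can be sketched in code: an external component is reached only through a request/response message pair whose specification travels with the component, while an internal component is an ordinary local call. The message fields and component names below are illustrative assumptions, not definitions from the methodology.

```python
# Hypothetical sketch of the access conventions described above.
# External components: request/response message pairs.
# Internal components: any local mechanism, e.g., a subroutine call.
from dataclasses import dataclass, field

@dataclass
class Request:
    component: str            # target external software component
    operation: str            # functionality required by the action
    payload: dict = field(default_factory=dict)

@dataclass
class Response:
    status: str               # e.g., "ok" or an error indication
    payload: dict = field(default_factory=dict)

def send(request):
    """Stand-in for the messaging infrastructure; echoes a reply."""
    return Response(status="ok", payload={"echo": request.operation})

# External access: the message pair specification is considered part of
# the component being accessed.
reply = send(Request(component="customer-store",
                     operation="retrieve-customer",
                     payload={"customer-id": "C-42"}))

# Internal access: a plain subroutine call within the action's
# execution environment; the component can be copied and adapted.
def format_name(first, last):
    return f"{last}, {first}"
```

The point of the sketch is the asymmetry: the external component's contract (the message pair) is shared and fixed, while the internal component is private to the action and free to change.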
21.2 Preconditions The following conditions must exist before work on the activities of this step can be initiated. It also is assumed that, in addition to the items explicitly listed in this section, any result from an earlier step or spiral can be used to provide guidance or background information. § Filled-in action specification form for each action. Because the software component information is added in this step, this information does not have to be included in the form, although it may be available when this step is reinvoked. § Existing external software component documentation from the repository. § Existing internal software component documentation from the repository. § Existing business rule (policy, practice, and procedure) documentation from the repository. § Existing enterprise standards documentation from the repository. § Infrastructure architecture specification and as-built configuration from the repository. § The list of all actions that have been previously considered during the examination of other dialogs and that are already mapped to software components. These do not have to be remapped but can be utilized as defined. This information is in the repository. § Any information that would significantly affect the implementation or use of proposed components. This type of information could include possible relationships, restrictions, and operational needs. This information is obtained, but not utilized, from previous steps and is documented in the repository.
21.3 Description Step 4 has three fundamental aspects related to the mapping function. The first is to provide the initial mapping between the actions that form the logical design of the dialog and the physical components (automated or manual) that will be used for its implementation. The mapping may be to existing components or to proposed components when there is no suitable existing component. The second aspect is to analyze the proposed set of action mappings over all the available process dialogs. This analysis is designed to ensure that a consistent set of
mappings is obtained for the process. Because it is possible that different development personnel will be performing this step for different dialogs, this analysis is necessary to ensure that the overall results are compatible. As this step is revisited for each dialog in the process, the number of available dialogs increases until the entire process can be considered. The third aspect of step 4 is provision of information to the associated project management methodology by identifying (1) any functionality that is needed but that does not yet exist and (2) any opportunities to provide a more cost-effective implementation by changing the definition or provisioning of existing external software components. This type of proposal needs careful attention to the configuration management aspects of the components. The latter result is one mechanism through which component provisioning can change without altering the design or implementation of the process. The mapping of the automated actions is discussed first, followed by a discussion of the human actions. Although there are some similarities between the two, each needs to be discussed in its own context. 21.3.1 Initial mapping Mapping of the automated actions is performed on an action-by-action basis using the specifications for each action that were developed in step 3 (Specify actions). For every part of an action, existing or proposed software components with the same functionality are identified and then used in implementation of the process. It should be noted that the use of the term physical component implies only that such functionality has been developed and provisioned or is proposed to be developed and provisioned. It does not necessarily imply a specific deployment configuration. For example, the provisioned functionality for a given action transaction could actually be implemented on multiple platforms. 
The actual platform employed in a specific use of the transaction might depend on a large number of operational characteristics, such as load, time of day, and platform operational status. Such a deployment structure does not affect the mapping process. To provide an effective mapping between the components of the actions and the available physical components, it may be necessary to manipulate the actions in two different ways. § Decompose an action into multiple actions so that a mapping can be made to existing components that have only a part of the functionality or data of the original action. § Combine actions to utilize a legacy system or COTS product external component. Although that somewhat compromises the underlying concept of an action, it does permit the use of large functional components when necessary. In the extreme, all the actions in a dialog could be combined, and the legacy system or COTS product could become the implementation of the entire dialog. If either or both of those changes are utilized, it will be necessary to reiterate the logical design spiral to ensure that the changes have not affected another part of the process functionality and that the new actions have been properly defined and documented. The use of physical components that are proposed to be created or changed in some way to provide the required functionality must be referred to the project management methodology. The purpose of such referral is to determine the allocation of enterprise resources, both for deployment and to provide new or changed physical components. One result of that allocation is that some actions with an automated class may be changed to a human class on an initial or permanent basis. Changes of that sort require that step 3 and the logical design spiral be reinvoked. A possible result could be to postpone the implementation of the process until the needed physical components are provisioned. 
That would then suspend the use of the PRIME methodology for the development of that process until such time as the required physical components have
been provisioned and are available. This is another function of the project management methodology. Recommendations to reprovision existing components can result from a number of conditions: § A mismatch in the logical specifications for a component, usually an action transaction, and the properties of the physical component to which it is mapped. Although the functionality of the physical component may be appropriate, such characteristics as throughput, response time, and error detection may not be in sufficient agreement with the logical transaction needs. § The result of the project management review of the mappings as specified in this step. Criteria such as component maintenance costs, error rates, and new recommended functionality would enter into this decision, as would the needs of other processes or dialogs that were also mapped to this component. § Replacement of a legacy system with a set of reusable components that would then be available for future projects. For the human actions, mapping again is performed on an action-by-action basis. The major difference lies in the physical components used for the mapping. In the case of automated actions, the components are software. In the case of human actions, the components are textual instructions (either on paper or in machine-readable form). Again, it should be noted that the use of the term physical component implies only that such functionality has been developed or is proposed to be developed. It does not necessarily imply a specific deployment configuration. For example, a provisioned transaction could actually be implemented by multiple human performers. The actual configuration employed in a specific use of the transaction might depend on a large number of operational characteristics, such as load, time of day, and location operational status. That deployment structure does not affect the mapping process.
Human actions also require an interface to the automation system to exchange information with the automated actions. The development of that interface is discussed in Chapter 23. The design of such an interface does not replace the need to map the human actions to appropriate instructional material. The material could be made a part of the human interface, if desired, or it could be implemented separately. That is the province of the interface designer. As with the automated actions, to obtain an effective mapping, it may be desirable to decompose an action into multiple actions so a mapping can be made to existing components. If that is done, it is necessary to reiterate over the logical design spiral to ensure that the changes have not affected another part of the process functionality and that the new actions have been properly defined and documented in the repository so they can be utilized by other processes and dialogs. 21.3.2 Mapping analysis Two vehicles are utilized as a framework for analyzing the action mappings. The first is a modification of the logical design prototype originally defined for the logical design spiral. Each prototype of interest is supplemented by the additional information resulting from the mapping activities of this step. That includes information such as timing and other operational characteristics of the physical components. Although the prototype is still referred to as the logical design prototype after having been augmented with the information needed in this step, it contains considerably more information than that required by the logical design spiral. One advantage to utilizing the same prototype form for both the logical design spiral and the physical design spiral is the ease of traversing between the spirals as project management decisions are incorporated in the logical design or multiple what-if studies are performed. The second analysis vehicle is, conceptually, a three-dimensional matrix called the analysis matrix. 
The columns are dialogs, the rows are actions, and the layers are the attributes of the action parts and mapped physical components. Figure 21.1 is an
example of the matrix. Any face or other related group of cells can be examined as a unit for the purposes of pattern matching. Although use of the matrix can be accomplished manually for relatively small processes, an automated tool assist usually is necessary for practical business processes.
Figure 21.1: Analysis matrix structure. The analysis matrix has two basic purposes. Its first purpose is to analyze, for each dialog or set of dialogs in a process, the specific external software components utilized by the individual transactions. An examination of that information can determine if the mappings of all the actions are consistent with each other. For example, assume one action utilizes an external component that retrieves customer information from database A. Assume that a second action utilizes an external component (most likely in another dialog but possibly in the same dialog) that retrieves the same customer information from database B. There may be a good reason why the process requires the same information from two different physical locations, but that could be a problem if the information in the two components is not concurrent. A remapping may be necessary if it is desired that identical information be obtained from the same source. If multiple sources for the same information are needed, it also may indicate that an explicit process is necessary to keep the two sources concurrent to some specified degree (it will be necessary to investigate that possibility whether or not a remapping is performed). That new process could be defined independently of the current process or incorporated into it. The specific disposition depends on the individual circumstances involved and the characteristics of the relevant data. The internal software components used to implement the action support functions are analyzed in the same way to ensure the required degree of consistency for the process. 21.3.3 Project management methodology information The second use of the analysis matrix is to enable the estimation of the resources required for the development of the defined process as well as for an initial estimate of the development costs of the software components and other functionality that currently does not exist. 
Although this estimation procedure is performed by the project management methodology, the source data required are developed in this step.
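The three-dimensional analysis matrix described in Section 21.3.2 (columns of dialogs, rows of actions, layers of attributes) can be held as a sparse structure keyed by its three coordinates; fixing one coordinate yields a face that can be examined as a unit. The representation below is a hedged sketch under that assumption, with invented dialog, action, and attribute names.

```python
# Illustrative sketch of the analysis matrix: a sparse three-dimensional
# structure keyed by (dialog, action, attribute). Fixing the attribute
# coordinate yields a "face" that can be examined as a unit for
# pattern matching, as the text describes.

matrix = {}

def put(dialog, action, attribute, value):
    """Record one cell of the analysis matrix."""
    matrix[(dialog, action, attribute)] = value

def face(attribute):
    """All (dialog, action) cells for one attribute layer."""
    return {(d, a): v for (d, a, attr), v in matrix.items()
            if attr == attribute}

# Hypothetical entries for two dialogs whose actions retrieve the same
# data but are mapped to different physical components.
put("D1", "A-007", "component", "customer-store-A")
put("D2", "A-031", "component", "customer-store-B")
put("D1", "A-007", "data", "customer-record")
put("D2", "A-031", "data", "customer-record")

components = face("component")
```

For small processes a spreadsheet of such faces suffices; as the text notes, practical business processes usually need an automated tool assist over the same structure.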
21.4 Prototype The logical design prototype continues to be utilized in this step. It is updated with the operational information obtained from the mapped functionality so the characteristics of the actions as implemented in the prototype can be updated accordingly. After the updates have been made, the entire suite of scenarios should again be used to animate the prototype to ensure that the mappings are appropriate to the correct functioning of the dialog as perceived by the business and technical SMEs as well as other stakeholders of interest.
21.5 Activities The 14 activities in this step are arranged according to the diagram in Figure 21.2. The activities can be performed either manually or with an automated tool as available. If automated, the matching processes utilized in this step usually require the use of a knowledge-based approach.
Figure 21.2: Diagram of activity sequence. The activities for any specific action can be performed in parallel with those for any other action, depending on the resources available (development personnel and tool support). 1. Augment action specifications with infrastructure information as needed. Augment action specifications (transaction and support functions) with applicable infrastructure information. This information is utilized during the matching process to determine the acceptability of the available physical components in areas other than process functionality. An example would be the database utilized by an external component. 2. Determine if an existing software component provides the needed transaction functionality. Using a suitable matching process, determine if any level of agreement exists between the requirements of the transaction part of the action and an existing software component. (Note: Actions that were identified in methodology step 3 as essentially identical to existing actions do not have to be considered in this activity; they were mapped previously and the results placed in the repository. The previous match does need to be documented in the repository, however. If no match at any level of agreement exists, go to activity 4.) If a match exists at some level, document the match and its characteristics. For example, it may be determined that an exact functionality match exists but that the operational characteristics are not exact. The software component could then be used with some sacrifice in resultant capability. If the match is considered close enough, then that fact, along with the discrepancy in operational needs, should be documented. 3. Decompose remaining actions into less complex actions if possible. For an action transaction that does not have a suitable match to an existing external software component, attempt to decompose it into two or more less complex actions to help facilitate a possible mapping.
Repeat activities 1 through 3 for the decomposed actions. When this activity is no longer feasible, terminate it and continue step processing. A successful decomposition is not likely, but it does occur often enough to make the attempt worthwhile. 4. Determine if the updated action set can utilize existing functionality. Depending on the results of the resource allocation, determine if changes to the action specifications can result in additional utilization of existing functionality (e.g., legacy system or COTS product). That may involve changing the operational requirements of the actions or splitting or
combining actions. Changes to the process map or dialog definitions also could be a means to achieve additional mappings. If there is a possibility that such changes could be effective, then the necessary spirals and steps need to be reinvoked. 5. Propose new or augmented external software components as needed. For any action transactions that were neither mapped during activity 2 nor decomposed during activity 3, propose new or augmented software components. A specification of the functionality needed along with the necessary access (message pair) specification must be developed. Document the mapping that results between the transactions and the proposed new external software component. 6. Determine the suitability of existing internal software components. Using a matching process, determine if any level of agreement exists between the requirements of each support function and the physical characteristics of any internal software components that exist in the repository. (Note: Actions that were identified in methodology step 3 as essentially identical to existing actions do not have to be considered in this activity; they were mapped previously and the results placed in the repository.)
If a match exists at some level between an existing internal software component and the action support function, document the match and its characteristics. For example, it may be determined that there is a close but not exact functionality match. The component could then be used with some changes. Any level of agreement, along with the needed changes, should be documented. For any support functions that do not have a suitable map to an existing software component, attempt to identify fragments of existing components. When this activity is no longer feasible, terminate it and continue step processing. 7. Propose new or augmented internal software components as needed. For any support functions that were not mapped at some level to existing components, propose new or augmented components that can be used to implement the functionality. Document the mapping that results between the support functions and the proposed internal software components as well as the specifications of the proposed components. 8. Determine if an existing instruction document provides needed information. Examine available instructional material to determine if such material can be utilized as is or adapted for the needs of the action. 9. Propose new or augmented instructional material as needed. From the action specification, determine the characteristics of the needed instructional material and propose the format and contents of such material. 10. Analyze the action set over all available dialogs. Populate the analysis matrix with the data from each available process dialog and its included actions. If it is desired to address multiple processes concurrently during this step, the matrix should consist of all the dialogs belonging to the entire set of processes under consideration.
Examine the analysis matrix to determine if any of the following conditions exists:
§ Multiple transactions that retrieve the same information from different physical components using the same or similar request and response data;
§ Multiple transactions that perform the same operation but have different request or response data;
§ Transactions that perform the same operation using the same physical component but differ in the amount of information specified in the request or response.
If any of those conditions surfaces, it must be examined and reconciled if necessary. The exact procedure for accomplishing that depends on the nature of the problem and cannot be generalized. Usually, however, the
resolution requires that a remapping be performed by returning to the mapping activities of the step. It is also possible that it will require a reinvocation of the logical design spiral and possibly the process spiral.
11. Update logical design prototype and animate using scenarios. Present the augmented logical design prototype to the stakeholders in a facilitated session to determine if the action functionality, characteristics, and sequences still meet the requirements of the business processes. Depending on the results, return to the mapping activities of the step and, if necessary, revisit the process and/or logical design spiral to resolve any remaining difficulties.
12. Convey new or changed functionality needs to the project management methodology. The need for new or changed software or instructional material that results from step 4 must be presented to the management methodology so available resources can be utilized as efficiently and effectively as possible. That may result in the postponement or elimination of some needed functionality, which in turn may cause any aspect of the process to change. Such changes must be processed through the appropriate spirals and steps of the methodology. The analysis matrix information can be used by the project management methodology to provide an initial estimate of the costs involved for any proposed new or changed functionality.
13. Obtain necessary approvals. If approvals are needed to continue beyond step 4, they need to be obtained before proceeding. The action prototype, the opinions of the stakeholders, and the hard deliverables from this step (updated action templates, new functionality specifications, instructional needs specifications) should be sufficient to demonstrate the suitability of the action definitions and the ability to proceed. Approvals also may depend on the availability of sufficient resources to provide the requested development.
Conditional or partial approval may require alterations in the development, including the changing of some action classes from automated to human. Other what-if changes also could be proposed that would require utilizing previous steps to analyze the effect of such proposed changes. Eventually, if the project is to proceed, approval has to be given using some set of conditions, either the original set or one containing some alterations based on available resources.
14. Enter the updated action specifications (including mapping information) into the repository. Identify those actions that were already in the repository upon entering step 4. Every action in a dialog should now have its defined functionality (1) mapped to existing or proposed physical components or (2) marked as previously existing and mapped during the consideration of actions in previous process implementations. The information on the action templates should now be complete.
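The duplicate-transaction checks called for when analyzing the matrix amount to grouping transactions by operation and comparing their target components and request/response data. A minimal sketch in Python (the record layout and all names are illustrative assumptions, not part of the PRIME methodology):

```python
from collections import defaultdict

# Hypothetical analysis-matrix rows: one record per action transaction.
transactions = [
    {"id": "T1", "operation": "get_customer_location", "component": "cust_db_a",
     "request": {"customer_id"}, "response": {"address"}},
    {"id": "T2", "operation": "get_customer_location", "component": "cust_db_b",
     "request": {"customer_id"}, "response": {"address"}},
    {"id": "T3", "operation": "get_customer_location", "component": "cust_db_a",
     "request": {"customer_id"}, "response": {"address", "email"}},
]

def find_overlaps(txns):
    """Flag transaction groups that may need reconciliation or remapping."""
    by_operation = defaultdict(list)
    for t in txns:
        by_operation[t["operation"]].append(t)
    findings = []
    for op, group in by_operation.items():
        if len(group) < 2:
            continue
        # Condition: same operation served by different physical components.
        if len({t["component"] for t in group}) > 1:
            findings.append((op, "same operation on different components"))
        # Condition: same operation with differing request/response data.
        payloads = {(frozenset(t["request"]), frozenset(t["response"])) for t in group}
        if len(payloads) > 1:
            findings.append((op, "same operation with differing request/response data"))
    return findings

for op, reason in find_overlaps(transactions):
    print(op, "-", reason)
```

A real analysis would work from the repository's action templates rather than an in-memory list, but the grouping logic is the same.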
21.6 Linkages to other steps
Step 4 must be invoked whenever there is a change in an action other than a deletion. Because a deletion does not require a mapping, it is not necessary for this step to consider that event. Some changes in needed action characteristics (e.g., response time) can result from the activities in any spiral and consequently cause a direct transition to this step in addition to other steps that may be affected. A change in the automated/human characteristic of an action probably would not cause a direct transition to this step but would require step 3 to be invoked first. The usual transition to this step is directly from step 3, where the automated actions are identified. Other sequences depend on the specific circumstances involved. The transition from this step to other steps depends on the results obtained. For those dialogs that have all their actions successfully mapped with no changes to the actions, the transition from this step is directly to step 7, where the integration of all the
implementation components is performed. Step 7 is not started, however, until all the necessary components are available (steps 5 and 6 completed normally in addition to step 4). In addition, although it is not a direct part of the PRIME methodology, a transition to step 4(a) also is made. Step 4(a) is concerned with the design and implementation of the software components that were determined to be needed in step 4. Step 4(a) also includes the creation or update of new instructional material. If any actions are changed as a result of the activities in step 4, a transition to step 1 or 3 must be made so the changes can be put into their proper context. The specific step to which the transition is made depends on the particular change or set of changes that must be performed.
21.7 Postconditions
Step 4 is complete and may be terminated for a given development when the following information and conditions are present as a result of the current step invocation:
§ All step activities have been considered at least once.
§ The mapping of all transactions used in an automated action to existing or proposed external software components has been completed.
§ A brief description of each proposed external software component is available.
§ The mapping of the support functions required by an action to existing or proposed internal software components has been completed.
§ A brief description of the changes required for the use of an existing component is available.
§ A brief description of each proposed new or augmented internal software component is available.
§ The mapping of all human transactions used in an action to existing or proposed instructional material has been completed.
§ The list of all actions that have been considered previously during the examination of other dialogs and that are already mapped to existing components is available. Of course, they do not have to be remapped but can be utilized as defined.
§ Any information that would significantly affect the implementation and use of proposed software components is documented. That could include possible relationships, restrictions, and operational needs.
§ A completed action specification form is available for each action. The software component information is now required.
§ Using the scenarios, animation of the action prototype with the updated operational information has been performed and the results verified.
§ The business and technical stakeholders have been involved as needed and agree with the action mappings as demonstrated through the augmented logical design prototype.
§ All relevant action information has been entered into the appropriate repository and updates verified.
§ Any necessary approvals have been obtained.
At the conclusion of step 4, all affected stakeholders must agree that the action animation, as it is defined using existing components, accurately depicts the intended operation of the process. The results of step 4 may indicate that further refinement of the process is necessary. It is not necessary that the business-oriented stakeholders review the action mappings because they are considered part of the detailed design.
Selected bibliography
Isakowitz, T., and R. J. Kauffman, “Supporting Search for Reusable Software Objects,” IEEE Trans. Software Engineering, Vol. 22, No. 6, 1996, pp. 407–423.
Chapter 22: Step 4(a): Provision software components
22.1 Purpose
Step 4(a) represents the interface between PRIME and the software component methodology. The step itself is a part of the software component methodology because it contains activities needed to obtain and provision software components needed by PRIME. However, when a component needed in a process implementation is not available, the information must be communicated to the software component methodology so that a component with the required characteristics can be provided. Because of that tight coupling, step 4(a) is considered in the context of the PRIME methodology. The timely availability of implemented and provisioned software components is critical to the proper functioning of PRIME, so a discussion of some of the major aspects of specifying and obtaining those components is of considerable interest. Because this step is not in the direct path of PRIME, it is identified as an adjunct to step 4, which is concerned with the mapping of available functionality to the actions. This discussion assumes a C/S architecture (see Chapter 12) and the utilization of reusable software components (see Chapter 14). In those chapters, the class structure of software components was not discussed because it would have detracted somewhat from the central purpose of presenting the structures needed for effective software reuse. For the purpose of this discussion, it is useful to be able to refer to a component class structure as well as to the individual components. Without a class structure of some type, only an unmanageable sea of components would be available. That would partially negate the advantages of employing reusable components for process implementation. Although a component class also could be considered to be a reusable software component, the use of the term component is restricted to a given instance of individual functionality.
To avoid having to rely on a particular architecture for the component class, a somewhat generic terminology is used.
22.2 Component class structure
The external software components are grouped into building blocks. Access to the components is through the use of a messaging structure based on request/response pairs, or simply message pairs. The messaging structure for a given request/response can be simple or complex, mirroring the C/S structure required to accommodate it. The relationships among the building blocks, their included components, and the message pairs used for access form the class structure of the components. The organization is designed to provide a significant amount of flexibility while providing a means to quickly identify or obtain components that can be used to implement the transaction part of an action. Obtaining the internal software components that implement the action support tasks also is performed in step 4(a). Because the internal components generally have much greater commonality, are generally less complex, and are not as numerous as the external components, they are not considered explicitly in this initial discussion, but they are considered in the description of the individual step activities. For simplicity, unless otherwise indicated in the discussion, the term building block when used alone also implies its included components. Building blocks can be specified through multiple approaches and do not have to have homogeneous characteristics. All message pairs, however, should conform to a standard format regardless of the building block or component with which they are associated. While it is technically possible to relax that constraint, the resulting implementation would
be far more complex than necessary. This step outlines some of the considerations of the design, specification, and provisioning of building blocks and messages. Figure 22.1 depicts a logical model of the class structure that indicates the relationship between the entities of interest. For simplicity, except for the control aspects of building blocks, the infrastructure elements needed for an actual implementation are not explicitly shown. Building blocks contain a class of functionality. An example would be that of an inventory building block that contains all functionality specific to inventory control. Components provide a cohesive set of functionality related to some aspect of the building block. In the case of the inventory building block, examples of components could be “inventory item reorder analysis” and “items received.” Message pairs provide the means for utilizing the components.
Figure 22.1: External component class structure.
In an object-oriented system, the building blocks would be object classes, the components would be objects, and the message pairs would provide the means to execute the methods or specific functionality of the objects and return any appropriate data. Such objects are sometimes referred to as business objects, although use of that term is not universal. Object-oriented terminology generally is not used in the methodology discussion so that software components not inherently object oriented can be utilized. That makes it easier to incorporate a variety of structures, including legacy systems and COTS products, in the implementation of a process. Several different building block structures are briefly considered in Section 22.4.1.
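As a rough illustration of the class structure, the following sketch models a building block whose components are reached only through request/response message pairs. The API shown is an assumption made for illustration, not a structure prescribed by the text:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Component:
    """One indivisible piece of functionality within a building block."""
    name: str
    handler: Callable  # invoked once per request message

@dataclass
class BuildingBlock:
    """A class of functionality (e.g., inventory) grouping related components."""
    name: str
    components: dict = field(default_factory=dict)

    def add(self, component: Component):
        self.components[component.name] = component

    def dispatch(self, component_name: str, request: dict) -> dict:
        """Complete one message pair: route the request message to the
        named component and return its response message."""
        return self.components[component_name].handler(request)

# An inventory building block with one component, echoing the text's example.
inventory = BuildingBlock("inventory")
inventory.add(Component("items_received",
                        lambda req: {"status": "recorded", "item": req["item"]}))
print(inventory.dispatch("items_received", {"item": "widget-7"}))
```

In an object-oriented reading, `BuildingBlock` plays the role of an object class, each `Component` an object, and `dispatch` the message pair that executes a method and returns data.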
22.3 Preconditions
The following items are required to be available before work on the activities of step 4(a) can be initiated from step 4 or from the interface to other software component methodology activities. It is also assumed that any result from an earlier step or spiral can be used to provide guidance or background information.
§ Message format specification;
§ Existing building block relationship structure;
§ Existing building block definitions and included components;
§ Existing component specifications (including message pairs);
§ Descriptions of proposed components.
22.4 Description
For the purposes of this discussion, it is assumed that the software component methodology includes the architectural model(s) for messages, components, and building blocks as well as the means for obtaining those components. Components can be obtained through a custom development, a COTS product, a legacy system, or a combination thereof. The interchange of information between PRIME and the software component methodology provides for the specification of components from a process-specific point of view. Information received from step 4 is used to provide the specific functionality requirements of a process that cannot be met by existing components. Existing components are those that have been specified from an enterprise view as a part of the overall class definition or that have been implemented as needed for previous process implementations. Building blocks, components, and message pairs are closely coupled, but their design and specification are somewhat different. That requires that they be considered independently before being treated as an integrated set. The specification of building blocks is examined first, followed by a brief discussion of the message structure. Finally, the entire component class structure is presented.
22.4.1 Building block specifications
A building block consists of a bundle of one or more related pieces of functionality (components and other building blocks) that can be accessed through a messaging structure. The term component signifies that the function is indivisible and is intended to provide only part of the entire set of capabilities needed for a specific process implementation. Building blocks may maintain an internal state relative to a particular series of requests. For example, a request to an inventory building block to reserve an inventory item results in the state of the building block changing.
The new state is such that the only requests relative to this item that can be processed are to (1) assign the item to a customer and (2) remove it from the reserve list and place it back in general inventory. That will either advance the building block to another new state or return it to its previous state. Building blocks are not designed to be utilized as stand-alone solutions to any business need. They are not software systems in the conventional use of the term. Building blocks are intended to be utilized in conjunction with a control mechanism and operations infrastructure. The control mechanism invokes the appropriate components of the building block as needed to provide the defined business functionality. The operations infrastructure provides the support framework within which the building blocks and control mechanism are managed. Building blocks can be nested. A building block can consist of other building blocks along with the necessary control and infrastructure components. That allows the specification of building blocks at a number of different levels and the reuse of them at any level. Once they have been specified, building blocks can be procured in several ways. That aspect is addressed later in the chapter. Several types of building blocks can be defined. As long as they meet the definition, a large number of common software and hardware entities can be utilized as building blocks. That aspect of the component structure is important because it indicates how components with different architectural paradigms can be utilized as part of an overall solution. It is not necessary—in fact, it probably is counterproductive—to try to maintain a single approach to the specification of the individual building blocks. Building block types include the following:
§ General component class;
§ Object class;
§ Module;
§ Specialized library;
§ Legacy system;
§ COTS product;
§ External information provider (EIP) service;
§ Infrastructure element;
§ Embedded system.
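Returning to the stateful inventory example earlier in this section, a building block's state behavior can be sketched as a small state machine. The state and request names are invented for illustration:

```python
# Valid (state, request) -> next-state transitions for one inventory item:
# once reserved, the only valid requests are to assign the item or release it.
TRANSITIONS = {
    ("in_stock", "reserve"): "reserved",
    ("reserved", "assign_to_customer"): "assigned",
    ("reserved", "release"): "in_stock",
}

class InventoryItem:
    """Tracks the building block's internal state for a single item."""
    def __init__(self):
        self.state = "in_stock"

    def handle(self, request):
        """Process a request; reject any request invalid in the current state."""
        next_state = TRANSITIONS.get((self.state, request))
        if next_state is None:
            return {"ok": False, "error": f"'{request}' not valid in state '{self.state}'"}
        self.state = next_state
        return {"ok": True, "state": self.state}

item = InventoryItem()
print(item.handle("reserve"))             # {'ok': True, 'state': 'reserved'}
print(item.handle("reserve"))             # rejected: item already reserved
print(item.handle("assign_to_customer"))  # {'ok': True, 'state': 'assigned'}
```

The transition table makes explicit which requests the building block will accept in each state, which is the behavior the reservation example describes.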
Each type of building block has a unique set of characteristics and associated advantages and disadvantages. Although use of a single homogeneous type would simplify the operations, administration, management, and provisioning of the building blocks in the infrastructure, that probably is an unrealistic goal, at least in the short term. Although a short discussion of each type of building block is presented here, it is not intended as a tutorial on the underlying technologies. If that is desired or necessary, the reader should consult the appropriate literature.
22.4.1.1 General component class
A general component class is a building block specifically designed to structure the definition of and access to reusable components. Such components have many of the same characteristics as objects, but they are not bound by all the constraints of objects, such as inheritance. In addition, they may be defined to have nonobject properties such as self-definition of internal functionality and interfaces. Currently, several different definitions of these types of component classes have been introduced by vendors. These component classes are flexible and will continue to be defined for various purposes.
22.4.1.2 Object class
An object-oriented structure usually contains multiple layers of building blocks. A class library contains multiple object classes that contain multiple objects (components). Specification of a class library usually is performed by a top-down procedure and is based on the business functions of one or more aspects of the enterprise. It usually is not developed as an aggregate of individual component needs. However, the extension of existing class libraries usually occurs in the latter bottom-up fashion. Unfortunately, there is no generally accepted methodology for defining object classes on an enterprisewide basis. That lack requires a cross-functional effort by experts in object technology as well as the various individual aspects of the business. Because of the advantages of object technology, as many components as possible should be specified as objects even though that requires more upfront analysis than some other approaches.
22.4.1.3 Module
A module is a building block that has only one component and is the most general type of building block specification. The only requirement is that its outside interactions occur via a messaging structure. Because of the generality of the module construct, care must be taken to ensure that a maintenance problem is not created by defining too many unrelated modules with only a casual relationship between the building blocks. The generality, however, does allow for the rapid creation of components that have a localized sphere of influence and do not have strong relationships with other components. This type of building block also can be utilized as an interim solution while a more comprehensive structure is being defined and implemented. Modules can interact with other modules through facilities other than messages as long as the module-to-module interactions form a closed set. The set of modules effectively becomes the building block. Again, in specifying these types of structures, care must be taken to avoid a maintenance problem. They should be used only when the specific
circumstances of the problem make them an effective approach. One such instance might be when most or all of the module set already exists and the complexity of the implementation is such that it is economically reasonable to define the set as a building block.
22.4.1.4 Specialized function library
Specialized libraries have been around as long as computing and contain subroutines (e.g., math functions) that perform a number of specialized calculations. The functions in the libraries usually were designed to be incorporated in a program using a subroutine call mechanism, not utilized as separate operational entities. To utilize the specialized collection as a building block, it would have to be reimplemented using message-based communication. This type of building block usually results from a bottom-up approach. Its specification is the result of a general knowledge of the types of functions that any business-oriented processing might require. Examples of specialized function libraries that might be useful in a business context would be a math library, a currency conversion library, a bill-formatting library, and a pattern recognition library. Admittedly, those libraries are all very low level, but they are useful for many types of processes.
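Reimplementing a subroutine-style library behind a message pair, as described above, might look like the following sketch; the message shape and function names are assumptions, not a prescribed interface:

```python
import math

# A subroutine library re-exposed as a building block: callers no longer
# link against the functions but send request messages instead.
FUNCTIONS = {
    "sqrt": math.sqrt,
    "round_currency": lambda amount: round(amount, 2),
}

def handle(message):
    """Message pair: request {'function', 'argument'} -> response {'result'}."""
    fn = FUNCTIONS.get(message["function"])
    if fn is None:
        return {"error": f"unknown function {message['function']!r}"}
    return {"result": fn(message["argument"])}

print(handle({"function": "sqrt", "argument": 9.0}))  # {'result': 3.0}
print(handle({"function": "round_currency", "argument": 3.14159}))
```

The library itself is unchanged; only the access path moves from a subroutine call to a request/response message, which is what qualifies it as a building block.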
22.4.1.5 Legacy system
Legacy systems generally have been designed for a specific functional area, to operate on a stand-alone basis, and to combine function, data, and control. Turning a legacy system into a building block requires some amount of work and is basically a three-step process. First, components must be identified, which requires a thorough knowledge of the individual functions and basic design of the system. Use of the action specification for the dialog(s) that will use the system is an excellent starting point for that determination. After the components have been identified, an approach to executing them on an individual basis must be determined. That could be accomplished through screen emulation, a special-purpose front end to the system (wrapper), or modification of the code in the system. Each approach has economic as well as operational consequences, and the particular method(s) chosen would depend on the circumstances involved. Finally, a message-based interface to the wrapped or modified system would have to be developed. The resultant entity then would have the necessary characteristics of a building block. Because the construction and characteristics of each legacy system are unique, the effort necessary to obtain a form of the system that is consistent with the definition of a building block varies considerably. However, because of the immense investment in these systems, it is necessary to utilize them (at least initially) to the maximum extent possible if the benefit of the service approach to realizing business processes is to become economically viable. Requiring that all needed functionality be redeveloped is not possible except in the case of startups or other enterprises that have little embedded base. Once a component has been made available as part of a legacy system, the functionality then can be migrated to another building block, depending on the economic and technical circumstances.
The ability to migrate individual components is one of the major strengths of the building block approach.
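The third conversion step, placing a message-based interface in front of a wrapped legacy system, might look like the following sketch; the legacy API, component names, and message shape are all invented for illustration:

```python
class LegacyBillingSystem:
    """Stand-in for an existing system with a subroutine-style interface."""
    def lookup_balance(self, account):
        return 42.50  # fixed value for the sketch

class LegacyWrapper:
    """Exposes selected legacy functions as named components, each
    reachable only through a request/response message pair."""
    def __init__(self, legacy):
        self._components = {
            "get_balance": lambda req: {"balance": legacy.lookup_balance(req["account"])},
        }

    def handle(self, message):
        component = self._components.get(message["component"])
        if component is None:
            return {"error": "unknown component"}
        return component(message["payload"])

wrapper = LegacyWrapper(LegacyBillingSystem())
print(wrapper.handle({"component": "get_balance", "payload": {"account": "A-100"}}))
```

Because callers see only the message pairs, an individual component can later be migrated out of the legacy system without changing the processes that use it, which is the migration strength noted above.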
22.4.1.6 COTS product
The considerations necessary for the use of a COTS product as a building block are similar to those for a legacy system. Because a COTS product usually is sold to a significant number of customers, the leverage of any given customer to effect changes is
relatively small. COTS software must be selected carefully to minimize the potential problems.
22.4.1.7 External information provider service
The characteristics of an EIP service are similar to those of a legacy application. The difference is that it usually is more difficult to change the interface to resemble that required of a building block. One way to approach that problem is to define a module building block that contains an EIP agent. The EIP agent would have a building block–compliant interface to the control and infrastructure entities but would have an application-specific interface to the EIP service. The EIP service itself would be defined to be a module to be consistent with the approach presented for module building blocks. If desired, this approach could also be applied to legacy applications, although more inefficiencies probably would be introduced. The use of Internet standards and technologies by all parties to the interaction mitigates that problem considerably, but the details still must be examined to ensure that the required interoperability does exist. There can be many companies involved in a given supply chain, from raw material supply to manufacturing to end customer, so the specification of the characteristics of this type of building block can be extremely important.
22.4.1.8 Infrastructure element
In many cases, to satisfy a business need, one or more functions usually associated with a computing infrastructure are needed. Security is an example of this type of function. For the purpose of providing a building block type of access, an appropriate interface must be made available. Interfaces of one infrastructure element with other infrastructure elements do not necessarily have to follow the building block structure (although it certainly would simplify the understanding and specification of this functionality) but may include other types of structures consistent with network standards and implementation.
22.4.1.9 Embedded system
An embedded system is software integrated with a piece of hardware. The result is a set of functionality that utilizes hardware as well as software in its execution. Examples are telephone switches and software-driven appliances such as stoves and refrigerators. These systems must have building block characteristics to fully participate as process functionality. If a process requires that a telephone switch state or data be altered, then it must be accessed as a component in a building block. In the architectural definitions of the intelligent network, building blocks are called service independent building blocks (SIBs), and the components are called functional entity actions (FEAs). As more equipment becomes software controlled, these types of building blocks will proliferate and be available for innovative solutions to business problems.
22.4.1.10 Selection criteria
It should be noted that some of the characteristics of the individual building block types overlap. It also should be evident that a large number of building block types are possible, each of which addresses a specific need. Deciding which type to use in a particular set of circumstances depends on those characteristics deemed most important and the number and type of building blocks already defined.
22.4.2 Message pair specifications
A request message invokes components in one or more building block(s) and causes the return of an appropriate response message. The response must be returned from the same building block to which the original request was sent. Depending on whether a simple or complex C/S structure is utilized, multiple building blocks and associated message pairs can be used to satisfy an action transaction. However, the only message
pair of direct interest is the one initiated by the client. The specification of the pair determines the match to the transaction specified in the action. The client can initiate multiple message pairs that together form a unit of work. In that case, all the component state changes must be made either as a group or not at all. The determination as to the start and the end of the unit of work usually is made by the client and requires infrastructure functionality known as a transaction processing (TP) monitor. In current terminology, a unit of work is also known as a transaction. To avoid as much confusion as possible, the term message pair is used instead of transaction when dealing with building blocks, although the use of that term is somewhat awkward. As with building blocks, message pairs can be specified in either a top-down or bottom-up fashion, usually depending on how the associated building blocks were determined. Once the components are defined, it is necessary to define the associated message pair. Multiple message pairs can be specified for the same component if appropriate for the function and the component is able to accommodate more than one. A building block can use the same message pair format for all the included components, or different formats can be defined as necessary. Information that must be specified in the message pair definition includes the input data and the possible responses. In some cases, especially those involving legacy system building blocks, the inherent structure of the legacy system may dictate the input and output data once the component has been selected. In most cases, however, considerable latitude is possible in the determination of the message specifications. By making the specifications as broad as possible, it should be possible to limit the need for multiple similar message definitions.
For example, assume that a component in a customer information building block has been defined to extract customer location information from the customer database. The input data required to enable the component to perform could be either narrowly defined to be the customer ID or broadly defined to be any part of the location information such as ID, (partial) name, (partial) address, and so on. The returned data could include not only the standard post office type of data but also information such as telephone central office, Federal Express route number, and e-mail address. Any process that had available any of the permissible input data and that needed all or any part of the output data could use the same message pair and component. Except under extreme circumstances, the inefficiencies inherent in using generalized functionality should not materially affect the operation of the process. Unless the proper conditions exist for specifying a complex C/S implementation, the output data should not be broadened to the point that a component in another building block must be used. In general, keeping a simple message structure is preferable to increasing the amount of output data, even if the data is of the same general type. There may be other attributes that must be specified and given a value for each defined message pair. They usually are concerned with the operational characteristics of the message pair. An example is the response time distribution (may be dependent on offered load). Even though a building block or message pair is specified at a physical level, that does not mean the entity is available for use. The physical label indicates only that it is specified in terms of its physical characteristics (e.g., product names). To be made available for use, it must be provisioned. Provisioning implies that an implementation of the entity actually exists in the network and that it can be accessed through a predefined protocol. 
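The broadly specified customer-location message pair discussed above can be sketched as follows. The data set and field names are invented, and a real component would query the customer database rather than an in-memory list:

```python
# Hypothetical customer records returned by the broad message pair; the
# response carries the full location record so many processes can share it.
CUSTOMERS = [
    {"customer_id": "C1", "name": "Acme Corp", "address": "1 Main St",
     "central_office": "CO-7", "email": "ops@acme.example"},
    {"customer_id": "C2", "name": "Bolt Ltd", "address": "9 Elm Ave",
     "central_office": "CO-3", "email": "info@bolt.example"},
]

def get_customer_location(request):
    """Broadly defined request: match on whichever identifying fields the
    caller supplies (exact ID, partial name, partial address, ...)."""
    def matches(record):
        for key, value in request.items():
            if value and value.lower() not in str(record.get(key, "")).lower():
                return False
        return True
    return [r for r in CUSTOMERS if matches(r)]

# One message pair serves both a narrow caller (exact ID)...
print(get_customer_location({"customer_id": "C1"}))
# ...and a broad caller (partial name); each uses only the response fields it needs.
print(get_customer_location({"name": "bolt"}))
```

Each caller ignores the response fields it does not need, trading a little generality-induced inefficiency for far fewer message definitions, as the text argues.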
The implementation and provisioning process is discussed in Chapter 26. 22.4.3 Integrated specifications To obtain the maximum benefit from the building block concept, an overall architecture or framework must be developed. The framework should incorporate the enterprise business rules as to the desired structural characteristics as well as the migration strategies needed to reach that goal.
One important requirement for a building block architecture is that it accommodate a high rate of change. Because the messages and building blocks are intended to provide most of the business functionality, any changes to the business or its processes are reflected in the need for additional message pairs or building blocks and their components. That must be accommodated efficiently without affecting those processes that remain stable. Issues to be addressed as part of the framework definition include the permissible types of building blocks, both on a temporary and a permanent basis; the relationships among the building blocks; access methods for legacy data; messaging format; deployment infrastructure interactions; and framework maintenance approaches. As with any comprehensive architectural specification, many other aspects of developing and maintaining a messaging/building block framework must be addressed. Except for identifying the need for such a framework, the identification and specification of an overall architecture are the responsibility of the software component methodology and are beyond the scope of the current presentation.
22.5 Prototype No user-oriented prototypes are associated with the component spiral. However, for the purposes of this step, as well as step 8, prototypes of components, building blocks, and message pairs can be developed as necessary to validate the specifications. The prototypes are designed to be used only by developers of the software component and process implementers and are not intended for user evaluation. The use of prototypes depends on the complexity of the component and the level and degree of confidence in the specification. Any prototypes that are developed should not, in general, be used in any steps other than those in the component or assembly spirals.
22.6 Activities The 11 activities in step 4(a) are arranged according to the diagram in Figure 22.2. Unlike those of the other steps, most of the activities in step 4(a) are defined at a relatively high level. This is appropriate because of the need to utilize other activities of the software component methodology, such as COTS selection and custom software development. A discussion of all the activities of the software component methodology is well beyond the scope of this presentation.
Figure 22.2: Activity sequence diagram. There are two separate entries to step 4(a). The first comes from other steps of the software component methodology and usually occurs during the initial definition of the building block, component, and messaging architecture. It is driven by an analysis of the basic needs of the business without reference to a specific process. The input is also used to change the component structure or functionality when the business needs change (e.g., the addition of a new line of business).
The second entry comes from PRIME step 4 and occurs when a needed component for a process implementation is not available. The requirements for the required functionality and access method are passed between the two methodologies. Those two input sources represent the links to the enterprise and process-based approaches of specifying software components. It is assumed throughout this section that a repository containing the specification and state of all provisioned and proposed messages and building blocks is available and can be effectively and efficiently accessed by either PRIME or the software component methodology. The repository becomes the major communication structure between the software component methodology and PRIME. It also provides the major communication vehicle between the individual activities of step 4(a). As discussed in Chapter 14, it is expected that experts from both the PRIME and the software component methodology areas will coordinate their activities. The activity descriptions assume that this coordination has been accomplished and the two methodologies are in harmony. 1. Determine required changes in the class structure architecture and associated elements. Based on the business needs as determined by other activities of the software component methodology and using architectural principles specified for the component class structure, develop or update the definitions for the set of building blocks, components, and access message pairs. The building blocks may be of any permissible type and can include wrappers for existing legacy systems. From a global perspective, analyze the building block, component, and message pair specifications based on process usage and operational efficiency data. Identify any changes that would result in improved development or operational effectiveness or lower life cycle costs. 2. Develop requirements for new or changed software components.
That includes both action transactions (external) and support functions (internal). Based on the need for new functionality as defined at a high level by step 4 (mapping actions), determine a comprehensive set of requirements for such functionality. 3. Determine specific changes to building block/message set. This activity does not include any architectural changes to the component class structure. It is based solely on the need to locate new software components in the existing class structure. Changes to the overall class architecture should come from activity 1. 4. Develop detailed specifications for new components. Based on the requirements for the new components, develop the specifications utilizing the enterprise infrastructure architecture and products. It is possible that no specifications can be identified that will provide the required functionality in an acceptable manner. In that case, the negative result must be returned to step 4 of the PRIME methodology, if it is the invoking entity, to permit changes to be made that will remove the problem. The changes can be made at any previous level, including changes to the process map. 5. Develop procurement recommendation for new components. Determine the means for obtaining the defined component functionality. That can be provided by a COTS product, custom development, legacy system, or a combination. To provide this type of analysis, a large amount of information must be available from outside the PRIME methodology. It is assumed that the information is available for the analysis without the necessity of defining a formal interface. The make-versus-buy analysis is placed in step 4(a) because of the necessity of maintaining the architectural integrity of the entire set of building blocks. A simple economic analysis usually is not appropriate.
As in activity 4, if a negative result is obtained and the desired functionality cannot be obtained in a reasonable and cost-effective manner, changes must be made that will solve the problem.
6. Perform COTS selection. If a COTS procurement is indicated, determine appropriate candidate products and perform the required analysis. The approach outlined in Chapter 1 is one that can be utilized for this activity. 7. Perform custom development. Utilizing activities of the software component methodology, develop custom software that meets the requirements and specifications of the needed functionality. This development can be performed using in-house resources or those provided by outside vendors. Prototype building blocks, components, and access messages as necessary to help determine the validity of their specifications. The prototypes are considered only a development aid and are not meant to be propagated beyond step 4(a), except in relatively rare instances in which they can be used in step 7 as an aid to assembly and testing until the actual provisioned components are available. 8. Integrate legacy system. Utilize the functionality inherent in an available legacy system by providing some means to utilize the messaging structure with the legacy functionality. That usually requires that some form of wrapper software be developed to perform the required translations. Some of the aspects of legacy systems that should be considered during this activity are addressed in Chapter 1. 9. Perform provisioning procedures. Utilizing appropriate activities of the software component methodology, provision external components so they are accessible via the defined message pair. Provision internal components so they can be obtained and incorporated as action support functions. This activity is independent of the assembly function of step 7, which assumes that the needed components are available and tested. 10. Obtain necessary approvals. If approvals are needed to continue, they need to be obtained before proceeding.
Documentation of component functionality specifications, building block structure alterations, message pair specifications, and provisioning of the tested component should be sufficient to demonstrate the ability to proceed. 11. Document results, update repositories, and update component location directories. All information obtained as a result of step 4(a) should be entered into a repository where it is available for future needs. Because maintenance is considered to be an integral part of the methodology, this information may be required for a considerable length of time and may be useful to individuals other than those involved in the initial development.
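Activities 9 and 11 above imply a shared repository keyed by component, recording each entry's message-pair format, resident building block, and provisioning state. The sketch below is a hypothetical illustration of such a store; the book does not prescribe a schema, and all names here are invented.

```python
# Hypothetical sketch of the shared specification repository consulted
# by both PRIME and the software component methodology. Each entry
# holds a component's building block, message-pair format, and
# life-cycle state ("proposed" until activity 9 provisions it).

class SpecRepository:
    def __init__(self):
        self._entries = {}

    def register(self, component, building_block, message_pair):
        self._entries[component] = {
            "building_block": building_block,
            "message_pair": message_pair,
            "state": "proposed",
        }

    def mark_provisioned(self, component):
        # Activity 9: the component is now reachable via its message pair.
        self._entries[component]["state"] = "provisioned"

    def find_provisioned(self, building_block):
        # PRIME step 4 would consult this before requesting a new component.
        return sorted(name for name, e in self._entries.items()
                      if e["building_block"] == building_block
                      and e["state"] == "provisioned")

repo = SpecRepository()
repo.register("get_customer_location", "customer_info",
              "LocationRequest/LocationResponse")
repo.register("update_billing_plan", "billing", "BillingRequest/Ack")
repo.mark_provisioned("get_customer_location")
```

Because maintenance is part of the methodology, entries persist after provisioning; a proposed-but-unprovisioned component remains visible to both methodologies without being offered for use.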
22.7 Linkages to other steps Transitions into step 4(a) generally are from step 4, after the development parameters for the components and access message pairs have been developed. Because this step is considered an adjunct to step 4, the usual transition from this step is to step 4. Although the provisioned components are used during the assembly and test activities of step 7, that is not considered a direct link to that step because no direct invocation is involved.
22.8 Postconditions Step 4(a) is completed and may be terminated for a given development when the following information and conditions are present as a result of this step’s invocation: § All step activities appropriate to the method of invocation have been considered at least once. § Each request for a new component has been identified as to message pair format, component functional specification, and resident building block.
§ A provisioned component is made available for each requested new component, or it has been determined that the component cannot be provided as requested and changes must be made by revisiting previous spirals. § The technical stakeholders have been involved as needed and agree with the building block, component, and messaging definitions. § All relevant component, message, and building block information has been entered into the appropriate repository and updates have been verified. § The component location directories have been updated. § All necessary approvals have been obtained. At the conclusion of step 4(a), all affected stakeholders must agree that the specification of proposed software components will provide the needed functionality. It is not necessary for the business-oriented stakeholders to review the software component specifications because they are considered part of the detailed design.
Chapter 23: Step 5: Design human interface 23.1 Purpose From the specification of the human and automated actions, Step 5 produces the human interface for a dialog. As indicated in Chapter 17, the human interface is associated with cluster store and not an individual dialog. The design of the interface must, therefore, accommodate and be compatible with the interface design for any other dialog that may become a part of a cluster with the dialog of interest. That need must be reflected in whatever standards (e.g., style guides) are used by the enterprise for the design of human interfaces. The development of human interfaces is a specialty that requires a great deal of experience and education. This chapter is not an attempt to provide the comprehensive set of information necessary to produce SMEs in this area. Rather, it presents the type of information that is utilized in the design of human interfaces and illustrates how that design is integrated into PRIME.
23.2 Preconditions The following items are required to be available before work on the activities in step 5 can be initiated. § Role performer characteristics for the dialog, including those that affect usability requirements; § The list of existing human and automated actions and associated information; § Action prototype for the dialog.
23.3 Description The place of the human interface in a process implementation is illustrated in Figure 23.1. The interface couples the human role performer to the automation system to provide for a complete process operation. It is one of five elements that must cooperate to provide an effective implementation.
Figure 23.1: Human-to-automation coupling. 23.3.1 Quality considerations An important optimization criterion is that all five elements—human activity, interface, data, software, and hardware—are designed and utilized such that they do not introduce errors that could negatively affect the operation of the process. From the human element perspective, design and utilization means that the policies and procedures (business rules), both the personnel-oriented ones and those used to directly guide the performance of the assigned work, must be constituted effectively. The relative ranking of each element according to its potential for introducing errors is shown in Figure 23.2. Although that ranking may change depending on the specific process and implementation strategy involved, it is clear that the human interface is one of the more error-prone areas of system operation and, from a quality perspective, one that requires a significant amount of attention.
Figure 23.2: Process implementation elements and probability of error. The elements, although closely related and exerting great influence on one another, can be structured and designed relatively independently using general principles specific to each element. This discussion focuses on some of those principles for the human interface. Although the suitability of their use in any given situation must be evaluated by an SME, it is useful to understand some of the principles that can contribute to the reduction of operational errors and result in a high-quality interface design. Although the human interface is a significant source of error, another aspect must be examined before design begins: the effect of an error on the operation of the process. Not every error will produce unusable results. The consequences of an error can be described as follows: nonexistent, annoying, significant, or catastrophic. For business processes, most interface errors fall in the middle two categories: An error usually has a consequence but one that is rarely enterprise threatening.
From a human interface perspective, that means careful attention must be paid to the design of the interface. However, it should not be afforded more resources than any of the other four elements of the process implementation unless special circumstances dictate a different priority. 23.3.2 Design approaches Although it is common to equate the design of a human interface with aspects such as the specification and placement of graphical elements on a CRT screen (e.g., windows, icons), that is not what is being considered in this discussion. Although that aspect is important in making the human user comfortable and producing a result that can be utilized efficiently, it probably is not the most important design issue. Even before this aspect of the design can be started, it is necessary to determine the basic interaction method(s) between the human and the automation system. Three of the major methods are discussed in this chapter and should provide a good understanding of the basic principles involved in the construction of a quality-oriented human interface. It also should be noted that the design principles discussed here are true whether the interface is provided in classical fashion via a character- or graphics-oriented presentation method or is obtained through the use of a Web browser either with or without Java support. The use of a browser may indeed help with the visual imagery and reduce the time it takes to provide a working interface, but it cannot substitute for careful consideration of what the interface should accomplish from an interaction perspective. This chapter considers three basic design approaches to the human interface: function assignment, feedback, and impedance matching. The first approach, function assignment, is based on the differing characteristics of humans and computers; the other two designs are based on engineering and mathematical principles. 
The examples presented in this chapter are greatly simplified to facilitate presentation of the concepts. In actual practice, many more details must be addressed and resolved. In addition, more complex situations do occur, such as dual interacting systems, which involve more than one human user. That type of situation requires additional analysis and design concepts and is beyond the scope of the current presentation.
23.3.2.1 Function assignment Function assignment examines all the functions that must be performed to achieve a suitable result. Based on the performance characteristics of humans and those of computers, each function can be assigned to either a human performer or an automated performer. The availability of suitable technology and infrastructure for the support of any automated functions also must be available. The unit of functionality must be appropriate for this approach to be successful. The tendency in many situations is to use functionality units that either are too complex or have too high a level of abstraction. In most of those cases, the approach fails because it covers more functionality than is appropriate. In the PRIME methodology, function assignment is the approach that was utilized to determine the human or automated class value for each identified action. Because each action is atomic, that is an appropriate level. If the assignment were made at the process step level, the approach would fail because it would be overly broad. Most process steps are neither all manual nor all automated. Therefore, some aspects of the step would fit the assignment while others would not, and the result could be very inefficient or even totally unusable.
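As a toy illustration of assignment at the atomic-action level, each action can be classified against performer characteristics. The single rule and the attribute names below are illustrative assumptions, not criteria taken from the book.

```python
# Hypothetical sketch of function assignment for atomic actions.
# Judgment-intensive work goes to the human performer; repetitive,
# rule-based work goes to the automation system.

def assign(action):
    """Assign one atomic action to a human or an automated performer."""
    if action["requires_judgment"]:
        return "human"
    return "automated"

actions = [
    {"name": "verify customer",     "requires_judgment": True},
    {"name": "compute order total", "requires_judgment": False},
    {"name": "negotiate due date",  "requires_judgment": True},
]
assignments = {a["name"]: assign(a) for a in actions}
```

Because each action is atomic, the assignment is unambiguous; attempting the same classification at the process-step level would fail exactly as the text describes, since a step usually mixes both kinds of work.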
23.3.2.2 Feedback A feedback approach to interaction design is based on the principles utilized in control theory. It usually is depicted as shown in Figure 23.3. The purpose of feedback is to smoothly drive a system to stability. The input (action) signal is compared with some function of the output (result) signal, and the consequence of the comparison (activating signal) is used to create a new output.
Figure 23.3: General feedback configuration. If the input and feedback signals are the same, there is no activating signal indicating a stable situation and no change in the output. If the input and feedback signals are different, an activating signal is generated and the output changes, causing a change in the feedback signal. If the system is designed properly, the difference between the input signals and the feedback signals is smaller than before. Of course, if the input changes during this feedback process, everything starts over. That principle applied to the human interface is illustrated in Figure 23.4. In the figure, the input and the feedback consist of information shown on the screen or other output device, the comparator is the human, and the activating signal is any activity undertaken by the human in response to the input and feedback information. When no activity occurs, the system is stable and it is assumed that the desired output has occurred. Until that situation is reached, the output is not used because it is not the result of a stable condition.
Figure 23.4: Feedback applied to the human interface. As an example, consider the following inventory situation. The input information is the quantity of an item ordered. At this time, there is no feedback information, so the activating signal is the amount of the order for a given item that the human enters into the system. The system compares that value with the number available from inventory and determines the order condition with respect to that item (forward transfer function). The output consists of three parts: (1) order quantity N exists; (2) quantity is short by S amount; and (3) backorder B amount. That information is fed back to the human (because the information is not changed, the feedback transfer function in this case is a pass-through) and appears on the screen as the feedback signal. If there is sufficient quantity, the feedback amount and the input amount are the same, and no further action is required; the system, insofar as the input is concerned, is stable. For this case, part 1 is Y, part 2 is 0, and part 3 is 0. If there is not sufficient quantity, the human must make some response depending on the instructions from the customer. That could be to reduce the amount ordered to the quantity available that changes the situation to the one given previously. Another response could be to backorder the number missing so that the output would be as follows: part 1 is N, part 2 is S, and part 3 is B. This information would then be fed back, and the total available plus amount backordered is equal to the quantity ordered; the system is then stable, and no further activity is required. The feedback mechanism must be relatively rapid. Otherwise, either the order taker or the customer could decide not to wait, and the effect would be the same as having no feedback. In that case, or if a feedback mechanism was not utilized, the order taker would not know the status of the order (called open loop) and could easily give the customer wrong information.
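The inventory example above can be reduced to a few lines of code. The function below plays the role of the forward transfer function; the three-part tuple it returns is the feedback shown on the order taker's screen. The quantities are invented for illustration.

```python
# Hypothetical sketch of the inventory feedback loop. The forward
# transfer function compares the quantity ordered with the quantity
# on hand; the three-part status it returns is fed back to the human.

def forward(ordered, on_hand, backordered=0):
    """Return the three-part output: (filled?, shortfall, backordered)."""
    shortfall = max(ordered - on_hand - backordered, 0)
    return (shortfall == 0, shortfall, backordered)

def stable(status):
    """The loop is stable when the feedback shows no shortfall: every
    ordered unit is either filled from stock or backordered."""
    _, shortfall, _ = status
    return shortfall == 0

# First pass: order 10, only 7 in stock -> short by 3, loop not stable.
status = forward(ordered=10, on_hand=7)                  # (False, 3, 0)
assert not stable(status)

# Human response: backorder the missing 3 -> feedback matches the input.
status = forward(ordered=10, on_hand=7, backordered=3)   # (True, 0, 3)
assert stable(status)
```

The asserts mirror the text: activity continues until the feedback signal (available plus backordered) equals the input (quantity ordered), at which point the system is stable and the output can be used.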
23.3.2.3 Impedance matching Impedance matching is concerned with making two interacting entities essentially the same as far as the ability to accept and produce signals. It is a concept from electrical engineering but can be adapted to a wide variety of situations, including human interfaces. The concept of impedance matching is illustrated in Figure 23.5. If an impedance match exists between element a and element b, then the maximum signal
(e.g., r) can flow. If there is a mismatch between element a and element c, then a signal of less than the maximum will pass.
Figure 23.5: Concept of impedance matching. Applying impedance matching to the human interface means that all the interface elements must have about the same level of abstraction and difficulty. That maximizes the ability of all the elements to transmit and receive information. It is necessary to determine (1) what form of display and response information is the most effective for the human to utilize in performing the particular task involved and (2) what parameters the automation system needs to incorporate to accommodate the human user. In other words, how can the interaction be defined so there is a smooth coupling between the human and the automation system? Impedance matching can be used with or without the feedback mechanism. The following example assumes that feedback is also being employed. Consider the system illustrated in Figure 23.6. The purpose of the system is to perform a network test in response to a problem that has been indicated. The problem may have been articulated through either human or automation means. In either case, the problem must be presented to the human in a form that can be easily understood. The form may depend on the skill level of the human, and different presentations may be necessary to accommodate a variety of potential users. Some type of training or online help facility may improve the transfer and use of information, but training has definite limits as to what can be accomplished in this area.
Figure 23.6: Impedance matching in the human interface. The response of the human to the problem manifestation also can be facilitated by the form of the interface design, which again may depend on the skill level of the user. That theme continues with the format of the processed test results intended to provide a definitive diagnosis of the problem’s cause. If any of the displays is ambiguous, unclear, or difficult to interpret, the human may make inappropriate or wrong conclusions and take actions that exacerbate the original condition. Note that the transfer feedback function utilized in this example is somewhat more complex than that utilized in the example of Section 23.3.2.2. The complexity of both of these functions will vary considerably according to the specific characteristics of the development. 23.3.3 Human interface instances In some implementation methodologies, the definition of the human interface is the starting point, and the functionality and data follow. The problem with that approach is that the interface is not the primary purpose of the implementation. The functionality and the data are paramount, and the interface really should be a derived entity that serves as an aid to achieving the desired functionality. An interface item should be defined only if it is needed to provide a coupling between human actions and automation system actions.
Because the interface provides the coupling between the human and the automation system, the need for human interaction occurs whenever there is a transition between a human action and an automated action. Because the order of actions is not specified a priori and can change for different scenarios, it is possible that a transition can occur before and after each human action. That is illustrated in Figure 23.7. Some analysis is required to determine what specific HIIs are needed for a given dialog.
Figure 23.7: HII determination. An HII consists of: § Action data either taken from cluster store and presented to the human (request data) or entered by the human and placed into cluster store (response data); § Artifact (nonaction) data used to facilitate (impedance match) the interface, including forms and instructions. The purpose of the HII is to identify where a human interface is needed and to determine the initial data requirements. The HII specification is merely a list of data, as shown in Figure 23.8. The data consist of both action data and artifact data. In the example, the human action is “verify customer.” The description might be as follows:
Figure 23.8: Example of an HII request. The stored customer information is compared with the information obtained from the caller. If it agrees in all respects, the customer is considered verified. If there is any disagreement, the customer is considered not to be verified. As part of this customer information, a specific verification ID shall be used. The HII request shown in Figure 23.8 that occurs prior to the performance of the human action displays all the information needed for verification. The data are obtained from cluster store and are considered to be a part of the request message to the interface. The form in which the data are displayed is artifact data, because it is not an integral part of the data required for verification. After the human user performs the verification action, the results (response data) are entered into the automation system through the use of radio buttons, as indicated in Figure 23.9. The data are then placed in cluster store. The icons are artifact data because, again, they are not an integral part of the response data; however, they may improve the impedance match of the interface.
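The "verify customer" HII just described can be written down as the simple list of data the text says it is. The key and field names below are hypothetical; only the request/response split and the action/artifact distinction come from the book.

```python
# Hypothetical sketch of an HII specification for the "verify
# customer" human action: request data drawn from cluster store,
# response data written back to it, plus the artifact data that
# merely facilitates (impedance-matches) the interface.

hii_verify_customer = {
    "action": "verify customer",
    "request": {
        "action_data": ["customer_name", "customer_address",
                        "verification_id"],      # taken from cluster store
        "artifact_data": ["display_form", "instructions"],
    },
    "response": {
        "action_data": ["customer_verified"],    # placed into cluster store
        "artifact_data": ["yes_no_radio_buttons"],
    },
}

def cluster_store_writes(hii):
    """Only the action data in the response reaches cluster store;
    artifact data never does."""
    return hii["response"]["action_data"]
```

Because the interface is attached to cluster store rather than to a single dialog, the `customer_verified` value could later serve as request data for an action in another dialog of the same cluster.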
Figure 23.9: Example of an HII response. The HIIs are used in conjunction with the logical design prototype to indicate the data presented to and entered by a human user. By examining the HII data displayed in the context of the logical design, it should be possible for the SMEs to determine if the data displayed are adequate and appropriate for satisfying the human action. Only the data needs and interface media (e.g., graphical user interface, voice) are determined initially. There is no intent in the definition of HIIs to limit or constrain the actual interface design. For example, a specific HII could expand to a multiple-screen sequence or consist of both voice and image types. In addition, because the human interface is considered to be attached to the cluster store and not a specific dialog, the interface display could transcend the termination of a dialog in which it was initially defined. Thus, the data could be used as response data for an action in another dialog. In that sense, it is very close to being a nonshared database while the cluster is active. 23.3.4 Interface design The HIIs form the basis for the interface design, but the final interface can be designed in any way necessary to meet the needs of the users. Most effort is in the design of CRT screens, but alternative output devices such as those using audio instructions or menus also need careful design. Usability testing is the main method for determining the effectiveness of any given design or set of designs. The subjects of such testing are representative of the user community and should also include stakeholders who are not anticipated to be direct users of the process. That provides the stakeholders with a look at the parameters of the design as well as some of the problem areas that need to be addressed. 
After such an experience, the stakeholders will have a much better appreciation for the intricacies of the human interface design as well as an understanding of the reasoning behind major design decisions. Depending on the complexity of the design, there will be several iterations of design and usability testing before an acceptable design is reached. It may be necessary to revisit previous steps and spirals many times because of deficiencies discovered during the interface design activities. As needs to be repeated at this point, returning to previous spirals is not an indication of failure; rather, it is an indication of success because deficiencies are discovered relatively early in the development. The deficiencies probably could not have been found before release of the implementation without the dynamics provided by a spiral methodology.
23.4 Prototype Two kinds of prototypes are utilized in step 5. The first consists of prototypes intended for usability testing of the interface design. Usability tests determine how a human user will react to the design under different usage conditions. These prototypes generally are
specific to the area being tested, and a significant number of them may be generated, depending on the amount of human activity required by the process. The second kind of prototype is intended to convey to the user an indication of the operation of the overall system. This would be performed for all the design stages, from the initial HII specifications through the completed interface design. In conventional developments, this prototype usually consists of the interface designs along with throwaway navigation logic used to enable the transition from screen display to screen display using real or dummy data. Of course, output devices other than a CRT screen can be specified. In PRIME, the necessary navigation logic already exists or is developed concurrently with the interface design. The logical design prototype augmented with the interface design provides for navigation within a dialog. If it is desired to extend the testing over the entire process, the workflow prototype can be used to provide the necessary dialog-to-dialog transitions. The development of specialized throwaway navigation logic, therefore, is not needed or is greatly reduced with the use of PRIME.
23.5 Activities The 14 activities in step 5 are arranged according to the sequence diagram in Figure 23.10. All the activities in this step are oriented toward development of a human interface that can be carried forward in the design process. This step will be revisited whenever it is determined in a succeeding step that some change needs to be made in the interface.
Figure 23.10: Diagram of activity sequence. The following are brief descriptions of each of the individual activities needed to produce the results expected from this methodology step. All the activities must be considered whether the step is being invoked for the first time or as a result of a change in another step. If it is being revisited, many of the activities will be very simple and fast. They cannot, however, be skipped, because there is always the chance that they will be needed to respond to the change. 1. Develop usability objectives for the dialog. Usability objectives provide a way of defining a successful dialog for a particular role. Develop usability objectives with the goals and needs of that role in mind. By establishing specific usability objectives, one can later test whether the resulting dialog meets those goals. These objectives should drive the definition and engineering of key dialog features and performance. Usability objectives can be defined to meet user/role goals for learnability, ease of use, throughput, user satisfaction, and so on. Examples of usability objectives are: § Learnability (e.g., users can learn to produce XYZ report in 30 minutes; after some initial training period, help functions are accessed only for rarely used features); § Ease of use (e.g., 98% of users can complete a service order without any errors; voice menu selections can be completed in less than 2 minutes 80% of the time); § User satisfaction (e.g., users rate new process as “much improved” compared to previous process; favorable
comments are five times greater than negative comments). 2. Determine usability approaches and tests for the dialog. Develop an action plan for determining whether the usability criteria are met and, if they are not, how to proceed. This activity could include the use of equipment to facilitate the analysis, including one-way glass to monitor the actions of the test subject, video cameras to record the activities, and computers to capture the keystrokes. 3. For each human action in a dialog, specify the required HIIs. One may be required for the request to a human action and one for the response from it. An HII is essentially a reflection of the data contained in or to be entered into the cluster store. 4. Identify the nature of each HII. These should be based on the requirements of the job and the user characteristics. Examples include interactive/noninteractive, graphical user interface, character-based user interface, auditory messages, hard-copy report, touch screen, and FAX. 5. Specify the data requirements (request, response) for each HII. This specification should include both action data and artifact data. In addition, the format of the data appropriate to the interface device indicated also should be given, as known. For a screen device, for example, that would include whether the data are numeric or alphanumeric, value ranges, and similar information. 6. Enhance the logical design prototype with the HII information. Because the logical design prototype is animated using the scenarios, the HII information should be displayed at the appropriate places. That provides an initial indication as to how the system will appear to the human user, from both a data and a sequencing perspective. 7. Perform initial usability tests. Using the scenarios as a base, use the logical design prototype enhanced with the HII information to determine the reactions of a representative group of human users, which may require changing the HIIs.
When this activity is completed, the information gathered should be useful in the actual design of the interface. 8. Based on dialog and HII characteristics and usability tests, design initial interfaces. Design the initial interfaces utilizing any applicable enterprise standards such as style guides and desktop equipment configurations. The design also depends on whether the interface is to be implemented using a Web browser or a more classical technique. 9. Using initial interface design, logical design prototype, and scenarios, perform usability tests. Based on the test plan developed earlier in the step, perform the indicated tests. Changes to the plan usually have to be made during this activity to accommodate unexpected results. That is left to the discretion of the person or persons performing the tests. 10. Analyze usability tests and make appropriate changes to the interface. Based on the previously developed objectives, determine what changes need to be made to the current interface design to eliminate any discrepancies. 11. Using updated interface design and logical design prototype, perform usability tests. After the indicated changes have been made to the interface design based on the latest round of usability tests, update the design and retest. 12. Make sure the interface design meets the objectives. If the interface meets its usability objectives, the step is essentially over, with just the approval and repository update activities left. If the interface does not meet the objectives, either another round of interface design is required (this spiral may be traversed many times for a relatively complex interface) or some other aspect of the implementation needs to be changed, resulting in a transition to a previous step and spiral. A common problem that surfaces during this step is a need to change the definition of some of the actions, resulting in a transition to step 3.
13. Obtain necessary approvals. If approvals are needed to continue beyond this step, they must be obtained before continuing. The human interface prototype, the opinions of the stakeholders, and the hard deliverables from this step (interface design and usability results) should be sufficient to demonstrate the suitability of the dialog definitions and the ability to proceed. 14. Enter new or updated human interface specifications into a repository. All information obtained as a result of this step should be entered into a repository where it is available for future needs. Because maintenance is considered an integral part of the methodology, the information may be needed for a considerable length of time and may be useful to individuals other than those involved in the initial development.
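Activities 1 and 12 lend themselves to a simple mechanical check: each usability objective is recorded with a measurable target, and the usability test results are compared against those targets. The sketch below is illustrative only; the objective records, field names, and measured values are assumptions, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class UsabilityObjective:
    name: str          # e.g., "learnability", "ease of use"
    description: str   # the measurable statement of the objective
    target: float      # target value for the metric
    higher_is_better: bool = True

    def met(self, measured: float) -> bool:
        """Compare a measured test result against the target."""
        return measured >= self.target if self.higher_is_better else measured <= self.target

# Illustrative objectives drawn from the examples in activity 1.
objectives = [
    UsabilityObjective("learnability", "minutes to learn XYZ report", 30.0,
                       higher_is_better=False),
    UsabilityObjective("ease of use", "% of users completing a service order without errors",
                       98.0),
]

# Hypothetical usability-test measurements keyed by objective name.
results = {"learnability": 25.0, "ease of use": 96.5}

unmet = [o.description for o in objectives if not o.met(results[o.name])]
print("interface meets objectives" if not unmet else f"redesign needed: {unmet}")
```

An unmet objective corresponds to another traversal of the interface design spiral (activity 12) or a transition to a previous step.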
23.6 Linkages to other steps The normal termination link from step 5 is to step 7 (assemble and test). Depending on the results of the usability tests and design parameters, it frequently is necessary to redefine some of the actions. This occurrence requires a transition to step 3. It also may be necessary to reconsider the definition of the process map, dialogs, or workflow as a result of the information obtained during the interface design. The appropriate step and spiral would then be reinvoked.
23.7 Postconditions Step 5 is completed and may be terminated for a given development when the following information and conditions are present as a result of the current step invocation: § All step activities have been considered at least once. § For each action requiring HIIs, the list of HIIs and their requirements are documented. Required information includes data in (including artifact data), state conditions, actions, and data out (including artifact data). § The human interface design is complete and documented for the dialog or group of dialogs of interest. § The logical design prototype augmented with the completed interface design is available for demonstration. § Usability objectives for the interface are documented. § The business and technical stakeholders have been involved as needed. § All relevant interface information has been entered into the appropriate repository and updates verified. § All necessary approvals have been obtained. At the normal conclusion of this step, all affected business and technical stakeholders must agree that the interface, as it is currently defined and represented, is the best that can be accomplished prior to actual use. As necessary, changes may be made later through the implementation and improvement spirals, which are invoked as additional information is obtained. If there is more than one set of usability objectives, step 5 may have to produce specific interfaces for each set of objectives. The postconditions must then be available for each set of objectives before the step can complete. Selected bibliography Clarke, L., “The Use of Scenarios by User Interface Designers,” Proc. HCI ’91 Conf. People and Computers, Stuttgart, Germany, 1991, pp. 103–115.
Cuomo, D. L., and C. D. Bowen, “Stages of User Activity Model as a Basis for User-System Interface Evaluations,” Proc. Human Factors Soc. 36th Ann. Meeting, Vol. 2, Atlanta, Oct. 12–16, 1992, pp. 1254–1258.
Dearden, A. M., and M. D. Harrison, “Impact and the Design of the Human-Machine Interface,” IEEE Aerospace and Electronics Systems, Vol. 12, No. 2, 1997, pp. 19–25.
Fitts, P. M., “The Information Capacity of the Human Motor System in Controlling the Amplitude of Movement,” J. Experimental Psychology, No. 47, 1954, pp. 381–391.
Hartson, H. R., and E. C. Smith, “Rapid Prototyping in Human-Computer Interface Development,” Tech. Report TR-89-42, Virginia Polytechnic Inst. and State University, 1989.
Hefley, W. E., et al., “Integrating Human Factors With Software Engineering Practices,” Proc. Human Factors and Ergonomics, Vol. 1, 1994, pp. 315–319.
Landay, J. A., and B. A. Myers, “Interactive Sketching for the Early Stages of User Interface Design,” Proc. ACM CHI’95 Conf. Human Factors in Computing Systems, Vol. 1, Denver, May 7–11, 1995, pp. 43–50.
MacKenzie, I. S., “Fitts’ Law as a Research and Design Tool in Human-Computer Interaction,” Human-Computer Interaction, Vol. 7, No. 7, 1992, pp. 91–139.
Mantei, M. M., and T. J. Teorey, “Cost/Benefit Analysis for Incorporating Human Factors in the Software Lifecycle,” Communications of the ACM, Vol. 31, No. 4, 1988, pp. 428–439.
Mayhew, D. J., “Managing the Design of the User Interface,” Proc. ACM CHI’94 Conference on Human Factors in Computing Systems, Vol. 2, Boston, Apr. 24–28, 1994, pp. 401–402.
Sutcliffe, A., “Task Analysis, Systems Analysis and Design: Symbiosis or Synthesis?” Interacting With Computers, Vol. 1, No. 1, 1989, pp. 6–12.
Sweeney, M., et al., “Evaluating User-Computer Interaction: A Framework,” Internatl. J. Man-Machine Studies, Vol. 28, No. 4, 1993, pp. 689–711.
Wilson, S., et al., “Beyond Hacking: A Model Based Approach to User Interface Design,” Proc. HCI’93 Conf. People and Computers, Orlando, FL, 1993, pp. 217–231.
Chapter 24: Step 6: Determine workflow 24.1 Purpose Step 6 produces a workflow representation of the process being implemented. The workflow determines the sequence of the individual dialogs and determines the set of role performers that can be assigned to each dialog on the basis of the business event being addressed. Each business event intended to be satisfied by the process causes a workflow instance to be created. The workflow instance contains the specifics of the initiating event as well as the general properties of the process. There can be as many simultaneous workflow instances as there are associated business events that have not yet been satisfied. There are many ways a business process can be implemented and deployed using workflow techniques and products. To develop a suitable workflow implementation for a given process, several aspects must be considered, including: § Operational characteristics of the dialog set; § Computing infrastructure of the enterprise; § Characteristics of the workforce that provides the role performers for the dialogs; § Characteristics of the products used to implement the workflow; § Engineering and design guidelines and techniques.
This step provides a pragmatic approach to incorporating each of those elements into a workflow implementation that conforms to the intent of the business process. The approach is independent of the workflow product set selected, but it does utilize the general characteristics and functionality provided by most product vendors. Although workflow techniques and products are sometimes used simply as a means to integrate legacy and other existing systems, this discussion does not directly consider that type of utilization. Although it may have merit in specific situations, the primary purpose and benefit of a workflow approach is the efficient implementation of business processes. Use of a workflow approach without an underlying process specification is not considered an effective use of the technology. Workflow is presented in this chapter as an integral part of the PRIME methodology. Although many of the principles of workflow development are provided in this chapter, it is not the purpose of this presentation to describe the operation and programming of workflow engines and associated products. If such information is desired, the reader is referred to the product information provided by various vendors. It is assumed that the information in Chapter 15, which describes the capabilities and use of workflow technology in general terms, is understood. That information is implicitly incorporated into the following discussion.
24.2 Preconditions The following items are required to be available before work on the activities of step 6 can be initiated. It also is assumed that any results from an earlier step or spiral can be used to provide guidance or background information. § Process map with dialog designations. § Dialog map and prototype. § Scenarios. § For each dialog: o Unique identifier; o Functional description; o Initiation criteria; o Initial operational specifications as available: data (internal and external); performance; security; throughput; number of performers and assignment method; location (logical); recovery and exception handling; statistics; tracking and logging needs; measurements for operational management; and external (to the enterprise) interactions. § List of all implicit decisions that were necessary to complete step 2 activities, including any business rules captured during the facilitated sessions. § Company technology standards, practices, and policies. § Company standard products (including workflow products). It is assumed that the products that will be used have been selected prior to the invocation of step 6. That would be accomplished using a COTS product selection process, as considered in Chapter 1. § The implementation of the computing infrastructure that will be used in support of the workflow deployment, including the logical and physical architecture of the infrastructure as well as its deployed structure. § The characteristics of the workforce that will be used to perform the roles required by the process, including such items as the number of individual performers, the physical location of individual performers, types of work groups, and normal working hours.
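The per-dialog preconditions above amount to a structured record for each dialog entering step 6. A sketch of how such a record might be captured for repository entry follows; the field names and schema are assumptions for illustration, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class DialogSpec:
    """Precondition record for one dialog entering step 6 (illustrative schema)."""
    dialog_id: str                            # unique identifier
    description: str                          # functional description
    initiation_criteria: str
    data: dict = field(default_factory=dict)  # internal and external data needs
    performers: int = 1                       # number of role performers
    location: str = ""                        # logical location
    tracking: bool = False                    # tracking/logging needed?

# Hypothetical example entries.
specs = [
    DialogSpec("D1", "take telephone order", "incoming customer call"),
    DialogSpec("D2", "determine credit status", "completion of D1 or external request"),
]
print(len(specs), "dialog records ready for step 6")
```

Having the operational specifications in one place per dialog makes the later task-definition and load-analysis activities largely mechanical.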
24.3 Description The workflow implementation is based on the development of a logical model and a physical model that incorporate the characteristics of the business process, workflow enactment products, the computing infrastructure, and the workforce that will perform the tasks. The design of the models forms most of the activities in step 6. Once the models have been completed, the remainder of the step is concerned with programming the workflow engine and testing the configuration using the scenarios. 24.3.1 Task definition The first step in the development of the workflow models is the determination of the tasks that provide the fundamental unit of functionality for the workflow. The workflow is concerned only with the movement of information and control between tasks. It has knowledge of only the intertask environment and is not involved in the workings of the tasks themselves. For completeness, it should be noted that it is possible for the tasks also to utilize a workflow approach for their internal implementation and for the two workflow services (external and internal to the tasks) to communicate with each other. That complexity does not change the basic approach and is considered to be beyond the scope of this discussion. Because the workflow is not concerned with the internals of the tasks, initially each dialog can be considered to be a task. The issue then is to determine which tasks can be combined for the purpose of the workflow implementation. In making that determination, it is useful to note two differences between tasks and dialogs: § Tasks can have more than one entry point. The particular entry point is selected by the workflow instance information available at task start time. § Tasks can have more than one role performer. That means the transition between roles does not necessarily imply a timebreak and a given member of the workforce can perform multiple roles as required. 
Other characteristics generally are the same for both tasks and dialogs. If dialogs are to be combined into a larger task, there must be no need to track the transition from one dialog in the task to another. If such a need exists, the dialogs cannot be combined. An example of a task being defined from multiple dialogs is illustrated in Figure 24.1, which shows a portion of a dialog map. In that example, D2 is defined as a separate dialog from D1 because it has an input from other than D1. D3 is an independent dialog because it is in a role different from that of D1. D4 is an independent dialog because it already exists, having been defined during another process implementation.
Figure 24.1: Task definition. It is possible to combine all four dialogs into one workflow task. That assumes that the following conditions have been met: § No inherent timebreak exists between D1 and D2 (e.g., D1 is used to take a telephone order, and D2 is used to determine the credit status and associated discount of the customer). § The human user is capable of performing both roles. In addition, there must be no need for the workflow service to track the ending of D1 and the start of D2.
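The combination rules above can be stated as a predicate over a pair of adjacent dialogs. The following sketch encodes them directly; the attribute names (role, tracked) and the two boolean parameters are assumptions made for illustration:

```python
from dataclasses import dataclass

@dataclass
class Dialog:
    name: str
    role: str
    tracked: bool = False  # must the workflow service track this dialog's start/end?

def can_combine(d1: Dialog, d2: Dialog, performer_has_both_roles: bool,
                timebreak_between: bool) -> bool:
    """True if two adjacent dialogs may be merged into one workflow task."""
    if timebreak_between:                # an inherent timebreak forces separate tasks
        return False
    if d1.role != d2.role and not performer_has_both_roles:
        return False                     # one performer must be able to play both roles
    if d1.tracked or d2.tracked:         # any tracking need keeps the dialogs separate
        return False
    return True

take_order = Dialog("D1", "order taker")
credit_check = Dialog("D3", "credit analyst")
print(can_combine(take_order, credit_check,
                  performer_has_both_roles=True, timebreak_between=False))  # True
```

Applying such a predicate pairwise across the dialog map yields the candidate task boundaries for the workflow map.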
If tasks consisting of multiple dialogs are to be used, there must be some mechanism for transferring information and control between dialogs independent of the workflow system. That is accomplished by defining and implementing an infrastructure service on the task computing platform that can communicate with the termination actions of the dialog instead of using those actions to communicate with the workflow service. If the platform service is designed properly and can be programmed with the identification of the dialogs the task comprises, none of the actions of a dialog has to change, regardless of whether it is communicating with the task platform service or the workflow service. For reuse considerations, if multiple dialogs are combined into a task, they should not be implemented as a single dialog. That may be tempting because it could eliminate the task platform service, but in the long run it is not an efficient practice. Once the tasks have been defined, a workflow map can be constructed. This map is essentially the same as the dialog map except for the following: § Instead of dialog interconnections, task interconnections are shown. § Instead of roles, appropriate workforce units are shown. These units still can be characterized as roles if desired, but because of the possible combination of process roles, these task roles must be considered to be a different type of entity. An example of a workflow map is shown in Figure 24.2. A workflow map is used in the development of the logical and physical models. Although similar in form to the process and dialog maps, the workflow map has some important differences.
Figure 24.2: A workflow map. § The roles (organizational work groups) are not presented in bands but are provided by annotations to each task. That format is needed because multiple work groups may be used to perform a given task, such as is the case with T0. In addition, the physical location of each work group should be included, and the same work group may be in multiple locations, such as Organization B, which performs T2. That organization and location information is necessary for the development of the logical model. § The emphasis is on the task transitions, not on the performers. Even though tasks T0, T6, T7, and T8 are all performed by work group C in Chicago, no attempt is made to show that by the position of the tasks in the map, as is the case for the process and dialog maps. § The task map also should contain the data elements and values used to determine the transitions between tasks when there is a choice. The transition from T1 is to T2 if element W has a value greater than or equal to 5, and it is to T5 if W has a value less than 5. That also holds true for the transition from T4 using element F. § The task map may contain artifact tasks that are not reflected in either the process maps or the dialog maps. For example, what occurs if element F has a value other than 0 or 1 when T4 completes? That is an obvious error condition and should be accommodated in the task map. For a value other than 0 or 1, the next task is an exception-handling one on a supervisor’s desk. That requires an update of the task map but should not require a change in the process or dialog maps, because the operating assumption is that the implementation is error free. § Other information should be included in the task map as it becomes available. That would include such items as the task load (number of invocations
per unit time) and the QoS agreements between organizations responsible for different tasks. The workflow map is used in the development of an example logical model in Section 24.3.4.3. 24.3.2 Task environment Figure 24.3 illustrates the position of a task in a workflow environment. The interfaces between the task (dialogs) and the role performer were discussed in Chapter 23, which dealt with the human interface, and in Chapter 21, which discussed action mapping. The other interfaces are specific to the workflow environment and need further explanation.
Figure 24.3: Task workflow environment. In general, all information exchanged between the workflow service and the task goes through the workflow client. The client represents the workflow service on the platform where the task is resident. For improved clarity of discussion and to avoid an unduly complex figure, direct interfaces are shown for status information and instance data exchanges. Status information is exchanged between the task and the workflow engine that is responsible for tracking the state of each workflow instance. The information consists of at least the following: § The ID of the user assuming responsibility for the task; § Time responsibility is assumed; § Time of completion; § Completion status (normal or error). The workflow client selects an available workflow instance task from the task list either automatically according to some predetermined criteria or under guidance of the user as indicated by the client-user interface. In the latter case, the client must contain a human interface that facilitates the manual task selection. The software needed to execute the task is launched automatically by the client through the use of the facilities available from the platform infrastructure (e.g., operating system calls). The initiation is indicated by the client-task interface and is specified during the development of the physical model (described in Section 24.3.4.4). In many cases, the client must also be aware of the termination of the task. For example, it may need to add the task back to the user task list for later processing when additional information becomes known to the user (probably through a paper mechanism). This type of task termination (suspension) is quite common. It also might be necessary to assign the task to another performer if, for some reason, the original performer is unable to complete it. 
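The status information listed above is a small, fixed record that each task reports back to the workflow engine for its workflow instance. A sketch follows; the field names and methods are assumptions for illustration, since real workflow products define their own status schemas:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class TaskStatus:
    """Status a task reports to the workflow engine for one workflow instance."""
    instance_id: str
    user_id: Optional[str] = None          # who assumed responsibility for the task
    assumed_at: Optional[datetime] = None  # when responsibility was assumed
    completed_at: Optional[datetime] = None
    completion: Optional[str] = None       # "normal" or "error" per the list above;
                                           # "suspended" is added here to model the
                                           # common termination discussed in the text

    def assume(self, user_id: str) -> None:
        self.user_id, self.assumed_at = user_id, datetime.now()

    def complete(self, outcome: str = "normal") -> None:
        self.completed_at, self.completion = datetime.now(), outcome

status = TaskStatus("wf-0042")
status.assume("jsmith")
status.complete("normal")  # a "suspended" outcome would return the task to the list
print(status.completion)
```

A suspended completion would prompt the workflow client to place the task back on the user's task list for later processing.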
When a task is selected from the task list (tasks that are available to and addressable by the user), the instance data automatically become available for use during the execution of the selected task, again through action of the platform infrastructure. The task can utilize the information in any way that is needed. The task can store the information in persistent store and/or update the instance information from persistent store, changing
names as needed. The updated instance data are then available to the remaining tasks. Defining this type of data mapping is also part of the development of the physical model. 24.3.3 Engineering guidelines The initial guidelines for the design and development of a workflow implementation are presented in this and the following sections. The specific recommendations are based on current practice and projections as to the probable evolution of the technology. Their real value, however, is an indication of the types of considerations and analysis that must be performed to realize an effective workflow implementation. § Use a single workflow system, if possible. § Do not assign more than 100 users to a workflow engine. § Do not assign more than 10 distinct processes to a workflow engine. § Avoid partitioning a process across workflow engines, if possible. § If the load requires more than one workflow engine, run the entire process in each engine and split the users between them. § Some code may have to be written to utilize product APIs for incorporating legacy systems, worklist management, and desired human interfaces. Do not be intimidated by this need. In many ways, it can be a strength, not a weakness. § Before beginning the workflow design, make sure the process to be implemented has been defined in sufficient detail, including activities, data, roles, organizations, and measurement metrics. If not, a return to step 1 and the process spiral is indicated. It may not always be possible to conform to all those guidelines in developing a specific design. If that is the case, careful attention must be paid to the testing of the workflow to ensure that it will operate as designed. 24.3.4 Design considerations There are five major considerations in the specification and implementation of a workflow service: load analysis, topology analysis, the logical model, the physical model, and the deployment procedure. 
Except for the deployment procedure, each aspect is examined in detail in the following sections, along with examples that illustrate the concepts. The deployment aspect is addressed in Chapter 26, which considers the deployment of the entire process implementation, including the workflow portion.
24.3.4.1 Load factors The purpose of the load analysis is to determine the major load factors for use in the logical model design step. Three load factors must be estimated for each complete workflow. If different portions of the workflow have significantly different values in any of the factors, they should be documented independently. The reason for this will become clearer during the topology analysis. Table 24.1 lists some examples of load rules, with each factor rated as low, medium, or high.
Table 24.1: Examples of Load Rules
§ Number of simultaneous workflow instances: 100
§ Number of tasks in the workflow: 40
§ Percentage of tasks needing human involvement: 70%
The load factors eventually help determine the number of individual workflow engines required and their interconnection topology. Some commercial workflow management systems have capacity planning tools that estimate the number of workflow engines based on the factors shown.
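In the absence of a vendor capacity planning tool, a rough engine-count estimate can be scripted from the three load factors. The sketch below is purely illustrative: the thresholds and the padding rule are invented for the example, and treating the Table 24.1 values as load-level boundaries is itself an assumption; a real estimate would use the selected product's documented capacity figures.

```python
# Rough workflow-engine count estimate from the three load factors.
# All thresholds and rules here are hypothetical illustrations, not vendor figures.

def engines_needed(simultaneous_instances: int, tasks: int,
                   pct_human_tasks: float,
                   instances_per_engine: int = 100) -> int:
    """Estimate engines from peak simultaneous instances, padded for heavy
    human involvement (long-lived instances hold engine resources longer)."""
    base = -(-simultaneous_instances // instances_per_engine)  # ceiling division
    if pct_human_tasks > 70.0 or tasks > 40:  # heavy-load indicators (illustrative)
        base += 1                             # add headroom (illustrative rule)
    return max(base, 1)

print(engines_needed(250, 35, 60.0))  # -> 3
print(engines_needed(250, 35, 80.0))  # -> 4
```

Estimates of this kind feed directly into the choice among the logical model configurations discussed below.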
24.3.4.2 Topology analysis Activities of the workflow are partitioned according to geographical location or organizational unit. The sequencing of activities between them is also determined in this analysis. For purposes of this discussion, the following assumptions are made: § Different organizational units and geographical locations are relatively independent of one another for the purposes of workflow design and implementation. § Individual workflow activities are always performed in one organizational unit. Independence in this context implies that each organizational unit or location requires its own workflow engine because of administration and other needs that must be met on an individualized basis. If such independence is not necessary in a particular circumstance, the locations or organizational units should be combined for the purpose of the following analysis. Figure 24.4 illustrates three different types of topology constructs. The circles represent all tasks assigned to a location or business unit for a given workflow. The entire workflow map should be analyzed for each occurrence of those constructs. In most cases, the map is easily partitioned into a small set of the topological constructs.
Figure 24.4: Types of topology constructs. If all the tasks for a workflow map can be assigned to a single location, organizational unit, or task role, the map is said to be a simple linear construct, as is the case for the first diagram in the figure. Other linear constructs can be formed from either a sequence or a parallel flow, as illustrated in the other diagrams of the linear flow section of Figure 24.4. If there is a feedback path somewhere in the workflow map, the construct is called a feedback flow. The flow constructs can be combined to define a workflow that is sometimes referred to as a combination flow.
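Distinguishing a linear flow from a feedback flow amounts to checking the partitioned workflow map for a cycle. A sketch using depth-first search follows; the node names are illustrative, each node standing for one location or organizational-unit partition:

```python
# Classify a partitioned workflow map as linear or feedback via a cycle check.

def has_cycle(graph: dict) -> bool:
    """Depth-first search with node coloring; True if any feedback path exists."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node: str) -> bool:
        color[node] = GRAY
        for succ in graph.get(node, []):
            if color.get(succ, WHITE) == GRAY:   # back edge: a feedback path
                return True
            if color.get(succ, WHITE) == WHITE and visit(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in list(graph))

linear = {"A": ["B"], "B": ["C"], "C": []}
feedback = {"A": ["B"], "B": ["C"], "C": ["A"]}  # C loops back to A
print("linear" if not has_cycle(linear) else "feedback")    # -> linear
print("linear" if not has_cycle(feedback) else "feedback")  # -> feedback
```

Combination flows are then just graphs containing both acyclic stretches and one or more such feedback paths.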
24.3.4.3 Logical model The results of the load analysis and topological analysis can now be used to determine the configuration of the logical model. That is accomplished by first defining a series of simple configurations, as shown in Figures 24.5 through 24.8. Each configuration has an associated series of rules whose applicability can be determined from the results of the two analysis steps described in Sections 24.3.4.1 and 24.3.4.2. The rules determine under what circumstances that configuration is applicable. As necessitated by the workflow map, the simple configuration diagrams can be combined to form a more complex one.
Figure 24.5: Single-engine configuration.
Figure 24.6: Two or more load-sharing engines.
Figure 24.7: Two or more chained engines.
Figure 24.8: Hierarchical engine configuration. After those rules have been applied to a workflow map, the basic topological constructs of the workflow are obtained. The results of the topological analysis for the workflow map in Figure 24.2 are shown in Figure 24.9. The only concern of this analysis is the transition from organizational unit to organizational unit or from geographical location to geographical location. Note that each transition in the figure is to a different organization
or location. Along with the load data, that provides the information needed to define the logical model.
Figure 24.9: Topology analysis example. The logical model obtained from the workflow map in Figure 24.2 and the topological analysis shown in Figure 24.9 is presented in Figure 24.10. For the purpose of this figure, because only a small number of tasks are shown, it is assumed that the load can justify the number of individual engines indicated. The major feature of this design is the use of a workflow engine that serves as a manager of several other engines. That choice was indicated by the complexity of the interactions shown in the topology analysis. Other designs also could have been utilized, depending on the exact load data, the capabilities of the products that will be used in the physical model, and the experience and knowledge of the designer.
Figure 24.10: Example of a logical model. As indicated in Section 24.3.3, the simplest logical model configuration possible should be used. However, for large, complex processes with heavy utilization and participation by many organizational units, that is not always possible. The approach presented here is applicable whether a simple result or a complex one is needed. For simplicity, the load-sharing engines can be combined into a single box and labeled a workflow enactment service. That makes the diagrams easier to follow when a large number of instances are required. Thus, in all logical model diagrams, boxes can be labeled as workflow enactment services which consist of one or more load-sharing managers, depending on the number of simultaneous instances required.
24.3.4.4 Physical model The physical model is produced by programming the selected workflow engine with the process, dialog, and workforce information in accordance with the configuration of the logical model. Some of the information needed for the physical model, such as the names and locations of the dialog functionality, is not known until step 7, when all the parts of the development are assembled. During the initial invocation of step 6, unknown functionality can be “stubbed” until it has been specified in later steps. Subsequent invocations of this step receive most, if not all, of the functionality information. Because the physical model can be implemented in a test environment prior to actual deployment, it is not necessary at this time to integrate the workflow products with the production computing infrastructure. That will be accomplished during the deployment phase in step 8. However, the physical design should always be performed with the eventual deployment environment in mind.
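The “stubbing” of unknown functionality can be pictured with a small Python sketch. The registry API shown here is an invented stand-in for whatever programming interface the selected workflow engine actually provides:

```python
# Illustrative task registry: real functionality is registered when it
# becomes known (step 7); anything else falls back to a stub.
class TaskRegistry:
    def __init__(self):
        self._impls = {}

    def register(self, task_name, impl):
        self._impls[task_name] = impl

    def invoke(self, task_name, work_item):
        # Use the real implementation if present, otherwise a stub.
        impl = self._impls.get(task_name) or self._stub(task_name)
        return impl(work_item)

    @staticmethod
    def _stub(task_name):
        def stub(work_item):
            # A stub records that it was called but performs no real work.
            return {"task": task_name, "status": "stubbed", "input": work_item}
        return stub

registry = TaskRegistry()
registry.register("validate-order",
                  lambda item: {"task": "validate-order", "status": "done"})
print(registry.invoke("validate-order", {"id": 1})["status"])  # done
print(registry.invoke("ship-order", {"id": 1})["status"])      # stubbed
```

On subsequent invocations of step 6, replacing a stub is a matter of registering the now-known functionality; the rest of the physical model is untouched.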
24.4 Prototype The prototype in step 6 is the workflow prototype, which is the implementation of the workflow physical model into the selected workflow product set with the task functionality stubbed in an appropriate fashion. The purpose of the prototype is to animate the
workflow using the scenarios to determine if the workflow definition meets the intent of the original business process. The operation of the prototype would be of interest to the business and technical stakeholders as an overall indication of how the business process implementation will operate.
24.5 Activities The 11 activities in step 6 are listed according to the sequence diagram in Figure 24.11. All the activities in this step are oriented toward the implementation of a workflow that will meet the intent of the original business process. This step will be revisited whenever it is determined in a succeeding step that some change needs to be made in some aspect of the workflow definitions.
Figure 24.11: Activity sequence diagram. The following are brief descriptions of each of the individual activities needed to produce the results expected from this methodology step. All these activities must be considered whether step 6 is being invoked for the first time or as a result of a change in another step. If it is being revisited, many of the activities will be very simple and fast. They cannot, however, be skipped because there is always the chance that they will be needed to respond to the change. 1. Define tasks using the dialog map. Use the dialog map as a base to determine which, if any, dialogs can be combined into workflow tasks. If dialogs from different process roles are combined, new workflow roles have to be defined to accommodate the merged responsibilities. 2. Create a workflow map. After the tasks have been defined, a workflow map can be defined. The workflow map serves as the vehicle for defining the operational characteristics of the tasks and the workforce that will perform them. It is also used to program the workflow engine with the sequence information. 3. Perform a load analysis. Determine the amount of activity that each task must support. The load analysis is used to help determine the logical model of the workflow implementation. 4. Perform a topology analysis. Examine the workflow map for topological structures that will help determine the logical model of the workflow implementation. 5. Define a logical model. The logical model contains the structure of the workflow engines used in implementing the process. The logical model also indicates the tasks supported by each workflow engine, their geographical location, and the organizations supported. 6. Define task list processing for each workflow role. There are many ways that the next task to be addressed can be selected from those available for a given role. The most common way is to allow the role performer to select the task from a list displayed by the workflow client. 
Another possibility is to automatically select the next task using the task characteristics and predefined business rules that contain the selection criteria. Some workflow clients allow a great deal of flexibility in selection, while others require a significant amount of custom programming.
7. Develop a physical model. Program the selected workflow products using the logical model and the workflow map as the main inputs. If multiple workflow engines are required, they need to be connected using the same protocols that would be used in the deployed configuration. The same network configuration does not have to be used, however. As part of the engine programming, access to the required task and external functionality needs to be defined. The functionality does not need to be integrated if it is unavailable or if immediate access is cost prohibitive. However, a configuration as close as possible to the one that will be deployed is desirable. 8. Animate the workflow prototype using the scenarios. Once the physical model has been implemented, it becomes the workflow prototype. The scenarios are used to animate the prototype and ensure that the workflow reflects the intent of the original business process. This type of testing can also find problems with either the logical or the physical model so they can be corrected before the integration phase. 9. Demonstrate the workflow prototype to stakeholders. Arrange a facilitated session with the stakeholders to observe the operation of the initial or revised prototype and determine the conformity of the prototype operation to the needs of the business and the individual stakeholders. At the end of step 6, the stakeholders must determine if the workflow definitions, functionality, and relative sequences meet the needs of the business process being implemented. 10. Obtain necessary approvals. If approvals are needed to continue beyond step 6, they need to be obtained before proceeding. The workflow prototype, the opinions of the stakeholders, and the hard deliverables from this step (task specifications, workflow map, logical model, physical model) should be sufficient to demonstrate the suitability of the action definitions and the ability to proceed. 11. Enter new or updated workflow specifications into repository.
All information obtained as a result of step 6 should be entered into a repository where it is available for future needs. Because maintenance is considered an integral part of the methodology, this information may be needed for a considerable length of time and may be useful to individuals other than those involved in the initial development.
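Activity 6 (task list processing) can be illustrated with a minimal Python sketch of rule-based selection from a work list. The rule representation and the work-item fields are assumptions made for the example, not features of any particular workflow client:

```python
# Sketch of automatic next-task selection using predefined business rules.
# Rules are predicates ordered by precedence; ties go to the oldest item.
def select_next_task(work_list, rules):
    for rule in rules:
        matches = [t for t in work_list if rule(t)]
        if matches:
            return min(matches, key=lambda t: t["arrived"])
    # No rule matched: fall back to oldest-first selection.
    return min(work_list, key=lambda t: t["arrived"]) if work_list else None

work_list = [
    {"id": "A", "priority": "normal", "arrived": 3},
    {"id": "B", "priority": "urgent", "arrived": 5},
    {"id": "C", "priority": "normal", "arrived": 1},
]
rules = [lambda t: t["priority"] == "urgent"]   # urgent items first
print(select_next_task(work_list, rules)["id"])  # B
```

Replacing the automatic selection with a display of `work_list` to the role performer corresponds to the other, more common option the text describes; products differ widely in how much of this logic can be configured rather than custom programmed.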
24.6 Linkages to other steps Step 6 must be invoked whenever a change is needed to any aspect of the workflow specification. Changes could result from the activities in any step of any spiral and consequently cause a direct transition to step 6 or to other steps that may be affected (e.g., step 1) before this step is reentered. The exact transition sequence depends on the circumstances. If the changes are relatively small, the reinvocation of this step or previous ones should be quick. The usual transition to step 6 is directly from step 2, where the dialogs are identified and specified. The initial specification or changes to the specification of the dialogs always require an analysis of the workflow definition. The definition of the actions also can have a significant effect on the design of the workflow. Therefore, step 6 should be reinvoked after step 3 has completed, to consider any possible effect on workflow definition. Changes in actions concerned with error recovery, for example, could affect the design of the workflow even if the definitions of the process dialogs remain unchanged. If the results of step 6 do not require the business processes or dialog definitions to be reexamined, the normal transition from this step is to step 7, where all the individual parts of the development are integrated and assembled into a working implementation.
If step 6 raises questions concerning the process map or dialog definitions, then a transition to step 1 must be made and the process spiral reiterated until those questions are resolved.
24.7 Postconditions Step 6 is completed and may be terminated for a given development when the following information and conditions are present as a result of the current step invocation: § All step activities have been considered at least once. § The logical and physical models (prototype) of the workflow have been implemented and are available for demonstration. § Appropriate animation of the workflow prototype using the scenarios has been performed and the results verified. § The business and technical stakeholders have been involved as needed. § All relevant workflow information has been entered into the appropriate repository and updates verified. § All necessary approvals have been obtained. At the normal conclusion of step 6, all affected business and technical stakeholders must agree that the workflow, as it is currently defined and represented, is the best that can be accomplished prior to actual use. As necessary, changes may be made after the implementation and improvement spirals have been invoked as additional information is obtained.
Selected bibliography
Basu, A., “Metagraph Transformations and Workflow Analysis,” Proc. 30th Hawaii Internatl. Conf. System Sciences, Wailea, HI, Jan. 7–10, 1997, Vol. 4, pp. 359–366.
Gokkoca, E., et al., “Design and Implementation of a Distributed Workflow Enactment Service,” Proc. 2nd IFCIS Internatl. Conf. Cooperative Information Systems, Kiawah Island, SC, June 24–27, 1997, pp. 89–98.
Kamath, M., and K. Ramamritham, “Failure Handling and Coordinated Execution of Concurrent Workflows,” Proc. 14th Internatl. Conf. Data Engineering, Orlando, FL, Feb. 23–27, 1998, pp. 334–341.
Kwan, M. M., “Dynamic Workflow Management: A Framework for Modeling Workflows,” Proc. 30th Hawaii Internatl. Conf. System Sciences, Wailea, HI, Jan. 7–10, 1997, Vol. 4, pp. 367–376.
Orwig, R., D. Dean, and L. Mikulich, “A Method for Evaluating Information Systems from Workflow Models: Results from a Case Study,” Proc. 31st Hawaii Internatl. Conf. System Sciences, Wailea, HI, Jan. 6–9, 1998, Vol. 4, pp. 322–331.
Schuster, H., S. Jablonski, and C. Bussler, “Client/Server Qualities: A Basis for Reliable Distributed Workflow Management Systems,” Proc. 17th Internatl. Conf. Distributed Computing Systems, Baltimore, MD, May 27–30, 1997, pp. 186–193.
van der Aalst, W. M. P., “Modeling and Analyzing Interorganizational Workflows,” Proc. 1998 Internatl. Conf. Application of Concurrency to System Design, Fukushima, Japan, Mar. 23–26, 1998, pp. 262–272.
Chapter 25: Step 7: Assemble and test
25.1 Purpose PRIME is an assembly methodology. That requires an assumption that most of the functionality required to implement a process should already be available from previous developments. The major need then is to assemble the process-independent functionality with elements that provide the process-specific needs. Steps 1 through 6 have shown how the functionality in the form of actions, dialogs, tasks, and reusable software components can be defined and designed so that they can be used in any process as needed. The process-specific elements with which that functionality is combined comprise the workflow, the human interface, and the infrastructure. The dynamics of the assembly approach is illustrated in Figure 25.1. All the elements, both process independent and process specific, must be integrated with a computing environment that provides the resources necessary for their operation.
Figure 25.1: Assembly dynamics. The purpose of step 7 is to take all the elements that have been defined and developed in steps 1 through 6 and assemble them into a complete structure that: § Provides an effective implementation of the original business process; § Can be managed so it continues to meet the needs of the business as changes are encountered; § Has a form that can be efficiently deployed into the enterprise computing environment. An effective assembly procedure should resemble the construction of a jigsaw puzzle. All the pieces should have a form that fits with the form of the connecting pieces. There should be no need to force-fit any piece, nor should there be any need to reshape a piece. The picture that results when the puzzle is completed is independent of the shape of the puzzle pieces. If the parts being assembled are structured (formed) properly, they should go together easily, regardless of the process (picture) being implemented. Steps 1 through 6 have produced parts with the proper structure. Step 7 puts them all together and produces an implemented process.
25.2 Preconditions The following items are required to be available before work on the activities in step 7 can be initiated. It is assumed that any result from an earlier step or spiral can be used to provide guidance or background information. § Human interface design; § Workflow design and prototype; § Task specifications; § Action specifications; § Dialog specifications;
§ Mapped external software components; § Databases utilized; § Internal software components needed by the actions; § Computing test environment; § Scenarios.
25.3 Description The assembly activities are similar to the functions performed in traditional system integration when all the software modules are brought together and integrated with the platform hardware and systems software, databases, and the communications network. The major differences are in the types of entities utilized, the order in which integration occurs, and the action that is taken when a problem is encountered. 25.3.1 Integration elements In step 7, the elements being integrated are the actions, dialogs, tasks, workflow, and infrastructure services; integration occurs in approximately that order. As necessary, each element must be assembled from its component parts before being integrated with the other entities into the automation environment. The automation environment forms the framework on which integration occurs. If a problem is encountered in any of the assemblies or integration into the automation environment, the reason must be ascertained and the appropriate methodology step invoked to provide a solution. That could be any previous step, since a problem with the business process map or any other specification can easily be encountered at this point. The use of previous steps to formulate the changes necessary to fix problems encountered during integration automatically ensures referential integrity between all the development entities, from requirements to implemented software. In addition, it provides a structured way to ensure that all the ripples caused by a change are addressed. The effect of even a seemingly tiny change can be considerable and produce a complex progression of other changes needed to compensate for all the interwoven parts. Maintaining the integrity of the development components as changes are made in the later stages is a major problem in current methodologies, because that aspect generally is not considered part of the methodology.
That leaves regression testing as the major way to determine that the changes do not have unexpected side effects, which is inefficient and considerably less effective than using PRIME. PRIME provides component integrity as an integral part of the methodology. It is not the intent of this presentation to prescribe exactly how these assembly activities are to be defined or performed. That depends on the personnel and tools utilized. The main objective is to illustrate that the components can be integrated in a reasonably efficient manner and that an assembly methodology provides capabilities that do not exist in other types of methodologies. 25.3.2 Automation environment The automation environment defined for the process implementation consists of the computing infrastructure and several execution environments. The relationships between those elements are illustrated in Figure 25.2.
Figure 25.2: Automation environment elements. § The enterprise computing infrastructure provides the interconnectivity between all the platforms that must communicate with each other and the common services such as transaction management, security, and database management systems, directory, and systems management resident on infrastructure servers. § The client platform hardware and system software provide the environment for software that is resident on the client platform and includes the computing hardware, operating system, file structure, and human interface devices and drivers. § The action execution environment provides the environment for executing the actions in accordance with the rules defined in Chapter 13. Each dialog has an associated action execution environment. § The dialog execution environment provides the environment for executing and controlling the cluster of dialogs and includes the cluster store and control, a means for activating appropriate sets of actions as defined in Chapter 20, and the means for controlling the human interface. § The task execution environment provides the means for integrating multiple dialogs into tasks and includes the means for transitioning between dialogs without invoking the workflow engine. § The workflow environment provides the means for transitioning between tasks using the rules and task map programmed into a workflow engine. It also includes the workflow client, monitor, and alert/alarm mechanism. § The application server platform hardware and system software provide the environment for software that is resident on the application server platform and includes the computing hardware, operating system, and file structure. § The external software components provide the means for accessing reusable functionality using predefined message pairs as specified by the actions. The characteristics needed for this type of software were outlined in Chapters 14 and 22. 
All the computing environment elements are process independent and can be used to implement any process. All of them must be designed from a system engineering perspective so they interoperate properly and provide an effective means for process implementation. There generally are two different implementations of the computing environment. One is for initial integration and testing purposes and is referred to here as the test environment. The other is utilized for deployment and production operation and is referred to as the operational environment. Step 7 uses the test environment. Step 8 (Deploy and operate), described in Chapter 26, uses the operational environment. Both environments, however, should consist of the same types of elements, although the number of individual elements and their configuration may vary, depending on circumstances.
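The relationship between the two environments can be expressed as a simple configuration check: both must consist of the same element types, while the counts and configuration may differ. The element names and counts below are invented for illustration:

```python
# Illustrative element inventories for the two computing environments.
# Same element types, different scale (Section 25.3.2).
test_env = {
    "workflow_engine": 1,
    "application_server": 1,
    "client_platform": 2,
    "infrastructure_services": 1,
}
operational_env = {
    "workflow_engine": 3,
    "application_server": 4,
    "client_platform": 200,
    "infrastructure_services": 2,
}

# Deployment sanity check: structural equivalence of the environments.
assert set(test_env) == set(operational_env)
print("environments contain the same element types")
```

A check of this kind is what makes the step 7 prototype a faithful predictor of the deployed system: nothing present in the operational environment is structurally absent from the test environment.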
25.4 Prototype In step 7, the prototype is the implemented process produced by the assembled components integrated into the test environment. It is a prototype in the sense that it exists in a test environment and not in the operational environment. However, the test environment mimics the characteristics of the operational environment. That facilitates the deployment of the system and ensures that the initial assembly is compatible with the deployed implementation.
25.5 Activities The nine activities in step 7 are arranged according to the diagram in Figure 25.3. The activities can be performed either manually or automated with an appropriate tool, if available. However, it is strongly recommended that automated tool support be used because it greatly increases the effectiveness and efficiency of the activities.
Figure 25.3: Activity sequence diagram. The following are brief descriptions of each of the individual activities needed to produce the results expected from this methodology step. All these activities must be considered whether step 7 is being invoked for the first time or as a result of a change in another step. If it is being revisited, many of the activities will be very simple and fast. They cannot, however, be skipped because there is always the chance that they will be needed to respond to the change. 1. Assemble all actions into the test environment. That is accomplished by linking the dialog units defined for each action into the action execution environment of the associated dialog. This environment also provides a means for individually executing an action so its operation can be verified. That requires that access is available to the building blocks and capability units specified by the action. If that cannot be accomplished immediately, stubs can be utilized until the actual functionality is available. 2. Assemble all dialogs into the test environment. That is accomplished by linking each dialog, which consists of the means for activating its set of actions, to the cluster store and control forming the dialog execution environment. They also must be linked to the human interface implementation through the client platform environment functionality. If an automated dialog is involved, it can be implemented on a server platform instead of a client platform. In that case, except for the human interface, all the elements shown for the client platform are still needed. 3. Assemble all tasks into the test environment. That is accomplished by linking the dialogs defined for a task into the task execution environment, which provides the means for transitioning between dialogs without invoking the workflow engine. 4. Assemble the workflow into the test environment. 
The workflow engine is programmed with the task map information, workforce characteristics, and associated rules. It is linked to the tasks through the workflow client. 5. Assemble the infrastructure services into the test environment. These include the services indicated in Section 25.3.2. The assembly is accomplished by specifying actions that interact with those services. If the use of the services is not anticipated when the actions are specified initially, they must be added now. That requires a return to step 3 (Specify actions) to properly include the actions in the dialog specification. 6. Test the implementation using the scenarios. Animate the test environment process implementation using the scenarios. Verify that the implementation conforms to the intent of the original (or as modified during development) process description. If the agreement is not adequate, either the process description or the implementation must be altered in some way to achieve conformity. Depending on the analysis of what needs to be changed, the appropriate step and spiral are invoked. 7. Demonstrate the implementation to stakeholders. Arrange a session with the stakeholders to observe the operation of the implementation and determine if their needs are being adequately met. At the end of step 7, the stakeholders must agree that the implementation is suitable and that deployment can take place. Although this may not be considered a formal acceptance test based on the deployed implementation, all the same conditions apply. Unless unusual circumstances arise, if this activity completes with no changes indicated, the formal acceptance test should pass without difficulty. 8. Obtain the necessary approvals. If approvals other than stakeholder agreement are needed to continue beyond step 7, they need to be obtained before proceeding. The process implementation's response to the scenarios and the opinions of the stakeholders should be sufficient to demonstrate the ability and desirability to proceed. 9. Enter results into repository. All information obtained as a result of step 7 should be entered into a repository where it is available for future needs. Because maintenance is considered an integral part of the methodology, this information may be needed for a considerable length of time and may be useful to individuals other than those involved in the initial development.
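Animating the assembled implementation with the scenarios might look like the following Python sketch. The routing table, task names, and outcome labels are illustrative only; a real workflow engine would supply its own task map and event mechanism:

```python
# Illustrative routing table: (task, outcome) -> next task.
ROUTES = {
    ("receive-claim", "complete"): "assess-claim",
    ("assess-claim", "approved"): "pay-claim",
    ("assess-claim", "rejected"): "notify-customer",
}

def animate(scenario):
    """Run one scenario, a list of (expected_task, outcome) events,
    and return the trace of tasks actually executed."""
    trace = []
    task = "receive-claim"  # the process entry task in this example
    for expected_task, outcome in scenario:
        # The scenario states which task it expects to be active;
        # a mismatch means the workflow does not reflect process intent.
        assert task == expected_task, f"off the map at {task}"
        trace.append(task)
        task = ROUTES.get((task, outcome))
        if task is None:
            break
    return trace

trace = animate([("receive-claim", "complete"), ("assess-claim", "approved")])
print(trace)  # ['receive-claim', 'assess-claim']
```

When a scenario's expected sequence diverges from the trace, the analysis of which representation is wrong (scenario, process map, or workflow) determines which earlier step must be reinvoked.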
25.6 Linkages to other steps Step 7 must be invoked whenever a change has been made to the specification of any action, dialog, task, workflow, or utilized computing environment element. The usual transition to step 7 is directly from step 6, where the workflow is identified and specified. Transitions also can be made from step 4, where the actions are mapped to software components, and step 5, where the human interface is developed, if identified changes require that only those steps be reinvoked. However, step 7 cannot proceed until all the components that must be integrated are available in their updated form. The step completion transition from this step is to step 8, where the implementation is deployed and placed into production operation. If the step requires changes of any type, a previous step and spiral, depending on the nature of the change, must be invoked. Consideration of changes may require many iterations of previous spirals. Some of those may proceed in parallel if they address somewhat different areas. There may be many changes of different types needed before the results of step 7 can be considered satisfactory and the step completed. Eventually, all the changes must come together in this step, where their compatibility can be determined.
25.7 Postconditions Step 7 is completed and may be terminated for a given development when the following information and conditions are present as a result of the current step invocation: § All step activities have been considered at least once. § A complete implementation of the process in the test environment is available.
§ Appropriate animation of the implementation using the scenarios has been performed and the results verified. § The business and technical stakeholders have been involved in the testing process as required. § All relevant implementation information has been entered into the appropriate repository. § All necessary approvals have been obtained. At the normal conclusion of step 7, all affected business and technical stakeholders must agree that the operation of the process implementation meets the needs of the business and complies with the intent of the original process specification (as amended) and is the best that can be accomplished with the current degree of knowledge. As necessary, changes may be made after the improvement spiral has been invoked as additional information is obtained.
Chapter 26: Step 8: Deploy and operate 26.1 Purpose The main purpose of step 8 is to extend the methodology over the entire life cycle of the process. That enables the same approach to the maintenance of the process as was used in its initial development. By combining the development and maintenance aspects, the inevitable changes to the process are facilitated. Changes to the process use the same activities and approach as the initial development. Step 8 also contains the activities necessary to deploy the process to achieve operational status. After that has been achieved, other activities determine when a change to a process is necessary. The approach utilized is discussed here in some detail, using a series of examples.
26.2 Preconditions The following conditions are required to be available before work on the activities in step 8 can be initiated. It is also assumed that any result from an earlier step or spiral can be used to provide guidance or background information. § Process implementation that is successfully running in a test environment and that has been approved for deployment; § Operational profile of the implementation that includes: o Expected availability of the system; o Duration of background processing; o Percentage of online versus background processing; o Average transaction load expected; o Peak transaction loads expected; o Expected size of databases; o Backup needs; § Deployment schedule; § Documentation and training needs for users and operational personnel; § Designated infrastructure (hardware and software) in place and available.
26.3 Description There are two major phases to step 8: deployment and operation. They are combined into one step because they represent the transition between development and maintenance. One of the major premises of the methodology is that there is no
difference between initial development and maintenance. Combining the end of development and the beginning of maintenance into one step emphasizes that unity. While it does complicate the discussion somewhat, conveying the concept of a single methodology that is active throughout the life cycle of the project is considered important enough to warrant the inconvenience. 26.3.1 Deployment Deployment includes the provisioning of the software into the enterprise automation environment as well as those activities necessary to achieve operational status. Deployment activities fall into four areas: § Development and dissemination of documentation and training materials to the classes of users involved with operation of the process implementation. Some formal training sessions usually are scheduled to ensure that the operation is well understood. § Integration of the process-specific software with the reusable components and the infrastructure so that all the software interoperates correctly. That means personnel from each activity must be involved. § Demonstration to the stakeholders that the resultant implementation operates in a manner that conforms to the intent of the defined business process (original or as currently exists). This demonstration usually concludes with the acceptance test, which is a formal mechanism for obtaining the necessary approvals. § Determination of the method for involving the users with the new process implementation in performing their assigned work. This usually involves some type of schedule that controls when and how many users will be connected to the system. The schedules are determined by the resources available and the risk involved if an unexpected problem develops. Even though software is not being developed, problems that require the reinvocation of a previous step can occur during deployment.
It may be discovered that some aspect of the process has been omitted despite the best efforts of all concerned, or the integration into the operational environment may not produce the desired results. While the methodology has been designed to make such last-minute problems rare, there is no effective way to eliminate them. The methodology does, however, provide an efficient means for resolving them should the situation arise. 26.3.2 Operations Once the process implementation has been deployed and becomes operational, it must be continuously examined and adjusted to ensure that the current workflow representation continues to meet the needs of the process. Statistical data produced by the workflow engine are analyzed along with data produced by other means, including customer comments and complaints, employee observations, business and technical environment changes, operational expenses, and equipment costs. This type of analysis and associated process changes usually are referred to as continuous process improvement (CPI). The activities necessary for CPI can be partitioned into two distinct orientations, as depicted in Figure 26.1.
Figure 26.1: Continuous process improvement.
26.3.2.1 Continuous process improvement Unfortunately, there is considerable ambiguity about what CPI means and how it should be applied. As in the case of other ambiguous terms, to provide a consistent basis on which to have the discussion, it is necessary to develop a working definition. The definition of CPI that will be used in this discussion is the following: Continuous process improvement consists of regularly examining the process implementation for possible improvements using analyses based on both operational experience and environmental conditions. The first type of analysis is concerned with the operational aspects of the process and is based on metrics that indicate how well the process is functioning compared with that originally intended or that has been achieved in the past (trend analysis). The second type of analysis, environmental analysis, is concerned with how well the process design meets changes in the business or technology environments and is relatively independent of the operational aspects of the process. Both types of analysis need to involve the same SMEs (or others with similar knowledge if the original individuals are not available) that helped formulate the various process representations during the original development cycle. If an analysis indicates that changes should be made, the proper spiral and step are invoked and a determination made as to how the representation(s) of that spiral should be altered to meet the new needs. The change process then proceeds in the same way as the original development effort. Although a facilitated session could be used, it may not be necessary in the consideration of CPI changes.
26.3.2.2 Operational analysis As an operationally focused example, assume that an analysis of the statistics indicates that too much time is elapsing between task A and task B when a certain class of business event occurs. Additionally, the reason for the delay is that there is no automatic scheduling of task B when task A completes. Task B is invoked only through manual oversight. Upon further examination, it is discovered that the class of business events that causes this situation was never modeled by a scenario group, and the process map representation did not indicate that these two tasks would ever directly follow each other! In this case, the process map would have to be altered appropriately and the associated scenarios defined and used to test the updated process map. After that has been satisfactorily accomplished, the remainder of the methodology is invoked and eventually an updated workflow design and implementation produced. Representation integrity Why not just change the workflow parameters? It certainly would be easier and take less time. While that observation is true, that type of change would exact a heavy price later in the life cycle of the process. Once the different representations become “out of sync,” all the history behind the design of the workflow becomes obsolete. If enough changes are made this way, it will be impossible for the business-oriented people to determine if the workflow still adequately represents the process needs. The technical people will not understand how to effectively change the workflow or its tasks to better represent the process needs as they evolve over time. Management by process will have effectively broken down! By maintaining the integrity of all the process representations, the business and technical views of the process remain compatible, and the effect of changes can be seen and understood by all concerned. That is the fundamental need for an enterprise that is managing by process.
It is well worth the additional resources required when compared with the resources that would be required to start a new project to redevelop the workflow representation.
How is the workflow operation analyzed to determine needed changes? Are any general approaches more effective than others? Those questions are frequently asked and must be considered to make the operations spiral an effective one for the enterprise. It might seem reasonable to decide that, because of the number of possibilities and the complexities of the possible outcomes, this type of analysis must be based solely on the experience and expertise of the individuals involved. To some extent, that is true. However, some general approaches and techniques can be defined that will make the assessment activity somewhat more quantitative than a completely unstructured approach. As usual, the use of these (or any) structured methods does require an investment in resources and time. The author feels the investment is well worth the quality of the decisions produced. In analyzing a workflow, the following metrics should always be examined for possible changes: § Average cost and/or response time of a workflow instance per unit time; § Average cost and/or response time of a task instance per unit time; § Average number of workflow instances per unit time; § Average number of task instances per unit time; § The technology that resulted in the specification of human workflow actions; § Task performer profiles (needed characteristics); § The vision and goals of the enterprise. Historical information Unless they are dramatic, variations in any of those metrics in a single analysis period probably are not going to signal the need to make process changes. What is important is the history of those items over several analysis periods. That will even out any calendar variations as well as any statistical anomalies that may occur. Significant trends in any of the items eventually will trigger a need to consider changes to one or more of the process representations or even to the process itself.
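The multiperiod trend analysis described above can be sketched in code. This is a minimal illustration only; the metric names, the example values, and the 10% review threshold are assumptions for the sketch, not values prescribed by the methodology:

```python
from statistics import mean

# Hypothetical per-period metric snapshots produced by a workflow engine.
history = {
    "avg_workflow_cost": [410.0, 415.0, 430.0, 455.0, 480.0],
    "avg_task_response_hours": [2.1, 2.0, 2.1, 2.0, 1.9],
}

def trend(values, window=3):
    """Compare the mean of the most recent analysis periods with the mean
    of the earlier periods; using several periods evens out calendar
    variations and statistical anomalies."""
    recent, earlier = values[-window:], values[:-window]
    if not earlier:
        return 0.0  # not enough history yet to establish a trend
    base = mean(earlier)
    return (mean(recent) - base) / base

for metric, values in history.items():
    change = trend(values)
    if abs(change) > 0.10:  # flag sustained changes larger than 10%
        # for cost and response-time metrics, an increase is unfavorable
        direction = "unfavorable" if change > 0 else "favorable"
        print(f"{metric}: {change:+.1%} ({direction} trend, review process)")
```

Comparing a window of recent periods against earlier ones, rather than reacting to single-period values, reflects the point that only significant sustained trends should trigger a change review.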
If no historical data are available for other workflows in the enterprise, the trend analysis will depend on the individuals involved. If historical data are available, they can be very useful in providing some structure to the change analysis along with associated guidance to the workflow examiners. The historical information that is needed for a comprehensive analysis is (1) trend information for other workflows based on the metrics listed previously, (2) changes that were initiated based on those trends, and (3) the operational results of the updated workflow. These historical data can be used in two ways. An analysis of the data should indicate what kinds of changes are normal for the enterprise and its industry. They should be considered carefully when a workflow examination is performed. The data should also provide an indication as to what types of metric value changes occurred, what process updates were made in response to those changes, and which updates were successful and which were not. If the database is very good, the reasons for any failures also would be very useful information. Armed with that type of information, any workflow examination should include the following considerations. Although the examples are long and somewhat complex, the detail is necessary to adequately illustrate the point. Normal industry changes For changes considered normal for the enterprise or its industry, have any occurred with enough magnitude to begin to affect the workflow? As an example, consider an enterprise that builds houses. The way in which houses are constructed depends on the local and national building codes, which contain rules that must be followed. Because changes in the building codes are normal for the industry, they are examined in the light of the process under review. Further assume that the process of interest is that utilized for constructing concrete foundations.
During a review of the building codes, it is noted that a change that will be effective in two months requires certain connectors to be embedded in the wet concrete rather than bolted on after the concrete cures. That certainly will trigger a set of changes to most or
all of the foundation construction process representations. Unless the building code examination had been made, the workflow would have to be changed on an emergency basis in the future. The results almost certainly would not be as effective as, and the costs would be higher than, ones developed under considerably less pressure. Unfavorable trends For any metrics that have an unfavorable trend, examine the historical data to determine what other workflows have experienced this type of variation. The results of any changes made and the results obtained would provide a strong indication as to what, if any, changes should be recommended for the process under review. Continuing with the house-building example, assume that costs associated with the foundation process have increased by 20% over a 2-year period and that increase is considered intolerable. The historical data indicate that two other workflows have experienced that type of increase, the framing and landscaping workflows. In the case of the framing process, the update that was made enabled the use of lower-grade materials as well as a cheaper design. Both were within the allowable building code range. The process was changed to reflect those decisions. Documentation showed that the result was unsatisfactory and caused even higher costs because of the additional rework needed. The process was changed back to the original until, eventually, new material technology allowed an effective solution. In the case of landscaping, the decision was made to use younger plants of the same grade. The revised selection parts of the process had the desired effect of lowering the process costs. What do historical data mean to those deciding on changes to the foundation process? Using the framing experience, it probably would not be a good idea to use cheaper materials and designs, even if allowed. A search should be made for a different foundation construction technology that could be more cost effective.
Use of Styrofoam forms instead of wood could be considered one example of a technology shift. (The landscaping example was not felt to be applicable in this instance.) Favorable trends For any metrics that have a favorable trend, also examine the historical data to determine what other workflows have experienced this type of variation. While favorable trends may seem like a gift with no further analysis required, it is just as necessary to understand the reasons underlying the occurrence as it is for the unfavorable trends. It may be that an unfavorable trend or a recent change in another workflow is responsible. An unintended change in the task performer profile also could be responsible (hiring more experienced individuals than were previously available). In any event, it is necessary to investigate further. As in the situation where an unfavorable trend is being investigated, the use of historical data is again warranted in the analysis of favorable trends. That is especially true when the cross-coupling between workflows is responsible. Favorable trends also may trigger process updates. As this discussion indicates, without the availability of historical data, it would be more difficult to understand what changes, if any, are indicated for the workflow being examined. The historical data place the observed operational results of the workflow in a broader enterprise context that allows for a more structured analysis. That requires an investment in time and resources, but for an enterprise that is “managing by process” the investment is necessary.
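The use of historical change records in both the unfavorable and favorable cases can be illustrated with a small sketch. The record fields, the tolerance value, and the framing and landscaping entries are assumptions drawn from the running example:

```python
# Illustrative historical change records for other workflows in the
# enterprise: the metric variation observed, the update made in
# response, and whether the update succeeded.
change_history = [
    {"workflow": "framing", "metric": "cost", "trend": +0.20,
     "update": "lower-grade materials and cheaper design",
     "successful": False},
    {"workflow": "landscaping", "metric": "cost", "trend": +0.20,
     "update": "younger plants of the same grade",
     "successful": True},
]

def similar_experiences(metric, observed_trend, tolerance=0.05):
    """Return prior changes made in response to a comparable trend, so
    examiners can see which updates worked and which did not."""
    return [rec for rec in change_history
            if rec["metric"] == metric
            and abs(rec["trend"] - observed_trend) <= tolerance]

# Foundation costs are up 20%; what did comparable workflows do?
for rec in similar_experiences("cost", 0.20):
    outcome = "worked" if rec["successful"] else "failed"
    print(f"{rec['workflow']}: {rec['update']} -> {outcome}")
```

A lookup of this kind does not make the decision; it places the observed trend in the broader enterprise context so the examiners' judgment is informed by what has already been tried.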
26.3.2.3 Environmental analysis Section 26.3.2.2 discussed the response to possible workflow problems. While it certainly is true that that type of examination needs to be made periodically, it is equally as important to examine the process in its environmental aspects to determine how the process can be made better in meeting business and technology changes. That type of examination can and should be made using all the process representations, not just the workflow representation that is the usual source of information about operational problems.
The example of the house-building enterprise is used now to illustrate the how and why of the environmental focus of CPI. Assume that on a scheduled basis the individuals in the enterprise with a stake in the operation of the foundation-building process meet. Included in the meetings are expert consultants and, at times, vendor representatives. The purpose of the meetings is to examine the process as it currently exists and to propose and analyze changes that would make it more cost effective. The analysis includes the process map, design, and workflow representations. Further assume that at one of the meetings, an industry consultant relates that another house builder performs the house foundation piling design in a different way and saves construction time. On examination of the process map, it is determined that the advantages of the other piling design technique can be obtained by eliminating one process step, adding two others, and changing the definition of a fourth step slightly. The remainder of the PRIME methodology is invoked and a new workflow implementation is defined. The updated workflow requires one new task, one changed task, and two routing changes. Those changes may require changes to one or more software components. After placing the revised implementation into practice, it is observed that the changes produce a 20% savings. This aspect of CPI results from an intrinsic view that any process, no matter how well it seems to be functioning from the workflow implementation perspective, can and should be improved. Frequently asking what-if questions about each aspect of the process and observing how other companies approach the same or similar process (sometimes called benchmarking) are the methods by which this aspect of CPI can be achieved. No aspect of a process should be immune to such examination, including whether the fundamental process decomposition and associated definitions are correct.
26.4 Prototype The prototype is the operational process. Operational software usually is not considered a prototype unless it is made available before it has been “hardened” into a product (e.g., no documentation is available and significant bugs still exist). However, for at least the second phase, it could be considered a prototype in the classic sense that it is being examined for improvement. Previous prototypes also are utilized when changes are incorporated into the process, because the methodology for change implementation is the same as for the initial development. Given this procedure, a number of prototypes will result from step 8. However, they will all be variations of existing specialized prototypes.
26.5 Activities The 12 activities in step 8 are arranged according to the diagram in Figure 26.2. Two distinct phases, each with its own set of activities, are specified in this step. The first phase deals with deploying the process implementation and bringing it to operational status. At the conclusion of the first phase, the process implementation is used in the day-to-day operations of the enterprise.
Figure 26.2: Activity sequence diagram.
The second phase deals with the identification of needed changes to the implemented process. Until a change is needed, step 8 does not exit. When the need for a change does occur, the step is exited and transition is made to the appropriate spiral and step so that the methodology can handle the implementation of the change. When the implementation is concluded, step 8 is again entered and awaits the identification of the next change. In that way, the methodology is active throughout the life cycle of the process. The following are brief descriptions of the individual activities needed to produce the results expected from this methodology step. All these activities must be considered whether step 8 is being invoked for the first time or as a result of an indicated change in another step. If it is being revisited, many of the activities will be very simple and fast. They cannot, however, be skipped because there is always the chance that they will be needed to respond to the change. 26.5.1 Phase 1: Deployment 1. Complete documentation and training materials. One of the major differences between a test implementation and an operational implementation is the availability of documentation for the end users, operational personnel, and others directly involved with the utilization of the process implementation. The production of those materials should not be left as an afterthought but started as soon as the operational characteristics of the implementation are known. That usually will be during step 7 (Assemble and test). Some parts of the material probably could be started during the steps dealing with the human interface and workflow as the final designs and implementations are developed. 2. Deploy software onto operational platforms. In many cases, the software components that provide the functionality needed by the workflow tasks will already be available and used by other process implementations. 
The infrastructure needed by the implementation should already have been put in place for other process implementations. The major focus of this activity is to integrate the process-specific software into this infrastructure using the designated platforms. The determination as to which specific platforms and networks will be used in the deployment needs to be made using information concerning the operational characteristics of these entities. In most cases, the assignment will be made by experts in those areas and not by the process implementation staff. As with the implementation of the software components, that determination is not an integral part of the PRIME methodology but utilizes a methodology and approach specific to those technologies. As long as the assignment is consistent with the operational characteristics needed by the process implementation, that should not pose a problem. 3. Test in the operational environment using scenarios. Once all the necessary software is in place in the enterprise automation environment, the integrity of the implementation is tested using the scenarios. That usually is done after specific tests are performed to ensure that all the components have been successfully provisioned and are communicating properly. The scenario tests should be the final indication that nothing has been inadvertently omitted or changed during the transition between the test and operational environments and that any previously stubbed functionality operates as assumed. The latter condition is especially important when legacy or COTS products are used as building blocks or capability units and are being accessed “live” for the first time. If any problems occur, a transition to the appropriate spiral and step that will provide the solution must be made. That could be any previous step in the methodology. 4. Hold user training classes. The opinions of the users as to the usefulness of the implementation in performing their duties usually
determine whether the project is a success or a failure. One of the main causes of unfavorable opinions is the lack of understanding as to how to effectively operate and interact with the implementation. While training cannot make up for a poorly designed and implemented process, it can mean the difference between success and failure for a well-constructed implementation. The training should indicate why the implementation works as it does in addition to presenting the usual how-to information. Comments from the users should be noted and considered as a basis for changing the process during the first operations analysis activity. At this point in the development, it probably is unwise to change the implementation. The instability introduced into the development is not worth the effort unless some pathological condition has been discovered. The activities of the methodology should have prevented that from occurring, but, unfortunately, anything is possible. If that should happen, a transition to the appropriate methodology spiral and step must be made to resolve the problem.
5. Train operational personnel. The individuals who will keep the software running, perform backups, answer questions from users, and project future computing and network requirements need to understand the purpose and characteristics of the implementation. Any special conditions or possible weak areas also should be provided, and a determination of the best way to proceed if problems occur should be discussed. Comments from the training sessions should be noted for consideration in changing the process during the first operations analysis activity. It probably is unwise to change the implementation, because the instability introduced into the development is not worth the effort unless some pathological condition has been discovered. If that should happen, a transition to the appropriate methodology spiral and step must be made to resolve the problem.
6. Obtain necessary approvals for operational status. This approval is usually called the acceptance test and is the last hurdle before operational status is achieved. The acceptance should utilize the scenarios in showing that the implementation will provide the results expected. A negative result is painful to all parties concerned. For that reason, step 8 and the previous steps have been structured to ensure that there are no surprises during this activity and approval should be all but assured.
7. Turn over to users. At this point, the process implementation is operational. That does not mean that all the potential users have to have access at the same time. Some gradual procedure for getting the users online is probably the most effective way of proceeding. In some cases, however, the nature of the process and implementation is such that all users must become operational at the same time. This “flash cut” must be handled carefully, because the potential for problems increases greatly under this circumstance.
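The scenario testing of activity 3 amounts to replaying each scenario against the deployed implementation and comparing the outcomes with those recorded in the test environment. A minimal sketch, in which the scenario names, event sequences, and the run_scenario placeholder are all illustrative assumptions:

```python
# Hypothetical scenarios: each maps a name to the business-event
# sequence to replay and the outcome observed in the test environment.
scenarios = {
    "normal order": (["receive", "schedule", "build"], "completed"),
    "rush order": (["receive", "expedite", "build"], "completed"),
}

def run_scenario(events):
    """Placeholder for driving the deployed workflow engine with a
    scenario's events; here it simply reports success so that the
    checking logic can be shown."""
    return "completed"

# Any mismatch points to something omitted or changed in the transition
# from the test environment to the operational environment.
failures = [name for name, (events, expected) in scenarios.items()
            if run_scenario(events) != expected]
assert not failures, f"transition problems in: {failures}"
print("all scenarios passed in the operational environment")
```

A failed comparison here is the trigger for transitioning back to whichever earlier spiral and step can begin the solution.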
26.5.2 Phase 2: Operation 8. Periodically analyze environmental and operational information. On some schedule determined by the nature of the process and the possible effects of a suboptimum process, the environmental and operational information associated with the process is analyzed. The schedules for the two types of analysis may be different. The results of the analyses are used in activity 9, which judges the need to change the process implementation. 9. Determine if any action is required. Although almost every analysis will indicate some way in which the operation of the process could be improved, the magnitude and costs of any change effort will determine if changing an operational process is worth the effort. Changes should
be made only if the improvement is deemed to warrant the resources and time expended. That determination must include intangible as well as tangible considerations. A change that will confuse and irritate the users should be approached with great caution even if the improvement is determined to be useful. Eventually, the results of the analysis may indicate one of two possibilities: § No change is required. In this case, the methodology returns to the action determination activity and awaits the next scheduled review. If some event mandates action prior to the scheduled review, the review takes place immediately. If the process has been well conceived and implemented, the need for change should not occur at every examination time. § There is a need for some change in the process implementation. In this case, the areas that need to be addressed are defined, and the step activities continue with activity 10. Multiple areas could be identified, and each one would need to be accommodated during the reinvocation of the spirals and steps of the methodology. 10. Determine transition step. Depending on the results of the analysis, the need for a change could range from a change to the process map to a change in a workflow task and everywhere in between. After the changes have been identified and verified, the most effective step to begin the change procedure must be determined. After the transition is made, there is no difference between this maintenance effort and the original development. That is a significant strength of the methodology. 11. Obtain the necessary approvals. If approvals are needed to continue beyond step 8, they need to be obtained before proceeding. Process changes have the potential to affect a significant number of individuals and organizations. Other processes also could be affected, and the approval process needs to reflect all those possibilities in addition to agreeing that there is a need to change the process. 12. 
Enter change information into repository. All information obtained as a result of step 8 should be entered into a repository where it is available for future needs. Because maintenance is considered an integral part of the methodology, the information may be needed for a considerable length of time and may be useful as further changes are made to the process.
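The phase 2 activities form a review loop that exits step 8 only when a change has been identified and approved. A minimal sketch of that control flow, in which the analyze and approval functions are placeholders standing in for the real operational/environmental analyses and the enterprise approval process:

```python
def analyze():
    """Activity 8: scheduled analysis returning the areas, possibly
    none, that need attention (placeholder result)."""
    return ["process map change"]

def approvals_obtained(areas):
    """Activity 11: placeholder for the enterprise approval process."""
    return True

def operations_loop(max_reviews=1):
    for _ in range(max_reviews):        # one pass per scheduled review
        areas = analyze()               # activity 8
        if not areas:                   # activity 9: no change required
            continue                    # await the next scheduled review
        # Activity 10: choose the step where the change procedure begins;
        # a process map change restarts earliest, otherwise later steps.
        step = "step 1" if "process map change" in areas else "step 6"
        if approvals_obtained(areas):   # activity 11
            # Activity 12: record the change information for the repository.
            return {"areas": areas, "transition": step}
    return None                         # step 8 remains active

print(operations_loop())
```

The key property the sketch shows is that step 8 never really terminates: either the review finds nothing and the loop waits for the next scheduled analysis, or a change exits to an earlier step and step 8 is re-entered when the change is deployed.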
26.6 Linkages to other steps Step 8 is invoked whenever the components of a new or changed process implementation need to be deployed and all the elements necessary (workflow, human interface, mapped software components) are available and have been tested. The initial transition to step 8 is always directly from step 7, where the entire implementation is assembled and tested. If problems are encountered during phase 1 of step 8, a transition to any previous step that can begin a solution must be made. The completion of step 8 will be to a step that has been identified as the proper one to handle identified changes designed to improve the process or make it conform better to changing business conditions.
26.7 Postconditions Step 8 is completed and may be terminated for a given development when the following information and conditions are present as a result of the current step invocation:
§ The process implementation has been deployed and has obtained operational status. § All step activities have been considered at least once. § A need for a process change has been identified as a result of continuous process improvement operational or environmental analyses. § All relevant change information has been entered into the appropriate repository. § All necessary approvals have been obtained. At the normal conclusion of step 8, all affected business and technical stakeholders must agree that the proposed change will substantially improve the operation of the process or is necessary to meet some revised enterprise need. Selected bibliography Barghouti, N. S., and B. Krishnamurthy, “Using Event Contexts and Matching Constraints to Monitor Software Processes,” Proc. 17th Internatl. Conf. Software Engineering, Seattle, Apr. 23–30, 1995, pp. 83–92.
Chang, R. Y., Continuous Process Improvement, Irvine, CA: Richard Chang Associates, 1994. Harrington, H. J., E. K. C. Esseling, and H. Van Nimwegen, Business Process Improvement Workbook: Documentation, Analysis, Design, and Management of Business Process Improvement, New York: McGraw-Hill, 1997.
Chapter 27: Retrospective Overview This chapter is somewhat different from the preceding ones. As the last chapter, it is intended to serve as a graceful end to a long and sometimes complex series of discussions concerning automation in the enterprise. A large number of interrelated topics needed to be addressed, including: § The determination of automation needs; § The requirements for automation support; § The technologies that are useful in specifying, implementing, and using automation functionality; § The structure of automation functionality; § The process (methodology) of obtaining and provisioning the automation functionality; § The management of the automation functionality so that it remains responsive to enterprise needs. To put all this information in perspective and structure it so it can be tailored to the specific needs of an enterprise, a comprehensive and relatively complete treatment of all aspects of the subject is required. That, unfortunately, means that the presentation will be long and sometimes complex with large numbers of interconnection points between discussions of the various aspects involved. The author hopes that the reader has had the patience and stamina to at least go through the entire book once in a skimming mode and absorb the totality of the presentation. A feeling for the whole is necessary before the significance and the details of the parts can be fully appreciated and understood. After a first pass through the material, the individual topics can be examined and utilized individually with confidence because their place in the overall structure has been established. In that regard, the book not only presents a new approach to enterprise
automation functionality, it is designed to function as a contextual reference source for the many topics involved.
27.1 Lessons learned This chapter is also intended to indicate some of the major premises of the book and discuss some of the lessons that should be inferred from the information presented. They are presented in the following list in the approximate order of importance, although the author certainly would not strenuously object to others perceiving a somewhat different ordering or the addition of other items to the list. 1. Systems engineering: The most important message of the book is that a systems engineering approach to enterprise automation is the key to success. Without the discipline and unifying framework obtained with this approach, the maximum effectiveness of automation cannot come close to being achieved. That is especially true under the current conditions of rapid technology and business environment change. A significant amount of literature describes the concepts and principles of systems engineering that need to be understood so they can be applied to the enterprise automation environment. 2. Technology integration: Closely aligned with the systems engineering approach is the realization that no technology can be considered independently of the other technologies used in the automation system. The use of Internet technology, for example, has implications for security, communications, human interface, and workflow requirements and implementation. In turn, each of those technologies has implications for the use of the Internet. The days of dealing with a technology and its application in isolation have long passed. All changes must be considered global changes and managed accordingly. 3. Modeling: Another corollary of the systems engineering approach is the extensive use of modeling techniques. Modeling allows the functions and relationships between the many system components and their multiple levels of abstraction to be defined and structured such that their context and specific purpose in the overall system can be well understood.
The models also allow a focus on the essential aspects of the components for the intended purposes. Unnecessary detail is eliminated along with the associated potential to introduce confusion and uncertainty. 4. Management by process: The determination of business automation requirements should be based on process rather than the traditional functional or “systems” approach, whereby the needs of a specific organizational area are addressed in a monolithic fashion. The systems approach results in islands of automation that are hard to integrate, utilize, and change. The process approach provides the necessary flexibility to obtain quick response to changes, reuse of available software, and the integration of enterprise organizations. 5. Automation asset view: Without a view that emphasizes the use of automation assets, it is impossible for the enterprise to assign the proper value to software development activities. That in turn inhibits the use of automation as a strategic element in the operation of the enterprise. To obtain an asset-based approach to automation, the proper financial and managerial procedures must be put in place. That can require a relatively large culture shift in the organization. The results are worth it, but the road is long and may be too difficult for many organizations. 6. Software reuse: Software reuse is an adjunct of an automation asset–based approach. The reuse of business automation software is possible and can result in large productivity gains when it is successfully addressed. To achieve a high degree of reuse, whether from COTS products, legacy systems, or individual components written specifically for reuse, the assembly methodology employed and the architecture of the resultant products must be carefully considered and designed.
Exceptions to the enterprise policies and procedures that ordain reuse must be few and far between. That imposes a discipline on the organization that may be difficult to accept but that is necessary for continued viability.
7. Enterprise planning: There is sometimes a tendency to substitute the use of (usually new) technology for business or technology planning. That results from assuming that more capabilities exist in the technology than actually are present. For example, use of the Internet does not negate the need to determine the product set to be offered to customers, nor does it eliminate the requirement for configuration management. Some enterprises act as if merely being on the Internet will solve those problems for them. The way in which planning is accomplished or the results reached may change with a new technological approach, but the necessity for planning is not eliminated. In fact, under current competitive conditions, the need for careful planning is greatly increased.
8. Accounting and finance: There is a great deal of concern about legal principles and laws lagging behind technology. There should be an equal amount of concern about the accounting and financial structures that were built for a manufacturing industry but not for the service and software industries of the information age. Intangibles play a much larger role than their initial use in solving some relatively narrow structural problems in accounting. Intangibles of several different types must enter the mainstream of (managerial) accounting and finance so that the true character of a service or software enterprise can be obtained.
27.2 Into the future

Although this road may seem to have been a long and difficult one to walk, it is easy compared to what will be required in the future. The length of an “Internet year” will continue to shrink. The number of options opened by technology will increase, as will the demands of customers. The information presented in this book is merely a start in developing a unified approach to handling this ever increasing complexity. It is the most fervent hope of the author that readers will obtain some insights and ideas for making their enterprises more effective and efficient through the use of the information and techniques presented in this book.
Glossary: List of Acronyms

ABET Accreditation Board for Engineering and Technology
API application program interface
BPR business process reengineering
C/S client/server
COTS commercial off-the-shelf
CPI continuous process improvement
CRT cathode ray tube
CRUD creation/retrieval/update/deletion
DBA database administrator
DBMS database management system
E-R entity-relationship (diagram)
EIP external information provider
FEA functional entity action
FIFO first in/first out
HII human interface instance
IT information technology
LIFO last in/first out
NC network computer
OEM original equipment manufacturer
OMG Object Management Group
PRIME process implementation methodology
QoS quality of service
RPC remote procedure call
SIBB service independent building block
SME subject matter expert
SNA systems network architecture
SQL Structured Query Language
TP transaction processing
UI user interface
UML Unified Modeling Language
WfMC Workflow Management Coalition
Y2K year 2000
List of Figures

Chapter 1: Introduction
Figure 1.1: Automation methodology determination structure.
Figure 1.2: Vertical silos of automation.
Figure 1.3: Process approach to request satisfaction.
Figure 1.4: Legacy model.
Figure 1.5: Legacy system operations map.
Chapter 2: Automation asset system
Figure 2.1: Automation asset system.
Figure 2.2: Automation asset management model.
Chapter 3: Life cycle management
Figure 3.1: Basic life cycle process stages.
Figure 3.2: Position of the class model.
Figure 3.3: Generic unit model.
Figure 3.4: Integration of class and unit models.
Figure 3.5: Interaction map.
Chapter 4: Repository utilization
Figure 4.1: Relationships of asset, model, and metamodel.
Figure 4.2: Simplified model and metamodel relationship.
Figure 4.3: Conceptual model of a repository.
Figure 4.4: Example of a quality tool usage model.
Figure 4.5: Example of task selection.
Figure 4.6: Example of task selection usage model.
Chapter 5: Business rules
Figure 5.1: Example of business rule taxonomy.
Figure 5.2: Rule system architecture.
Chapter 6: Financial management
Figure 6.1: Basic asset transformation process.
Figure 6.2: Asset conversion chain.
Figure 6.3: Transformation process and intangibles.
Figure 6.4: Initial enterprise financial model.
Figure 6.5: Financial event model.
Figure 6.6: Complete financial model.
Chapter 8: Process modeling
Figure 8.1: Process life cycle model.
Figure 8.2: Decomposition of the business process.
Figure 8.3: Horizontal silos of automation.
Figure 8.4: Example of a process map.
Figure 8.5: First example of process trace.
Figure 8.6: Second example of process trace.
Chapter 9: Scenario modeling
Figure 9.1: Construction of a unit scenario.
Figure 9.2: Construction of a compound scenario.
Figure 9.3: Construction of compound scenario view.
Chapter 10: Role modeling
Figure 10.1: Structure of role relationships.
Figure 10.2: An enterprise organization chart.
Figure 10.3: An organization-oriented role class structure.
Figure 10.4: Enterprise process definitions.
Figure 10.5: A process-oriented role class structure.
Figure 10.6: Addition of general roles to the class hierarchy.
Chapter 11: Information modeling
Figure 11.1: An example of an E-R diagram.
Figure 11.2: Structure of a data modeling system.
Figure 11.3: Information model.
Figure 11.4: Datastore configurations.
Figure 11.5: Operational model.
Chapter 12: Client/server modeling
Figure 12.1: Simple C/S specification.
Figure 12.2: C/S configuration diagram.
Figure 12.3: Asynchronous system configuration diagram.
Figure 12.4: Multiple-server, synchronous communication configuration.
Figure 12.5: Multiple-server, asynchronous communication configuration.
Figure 12.6: Multiple-server, requester function configuration.
Figure 12.7: Multiple-server, push services configuration.
Figure 12.8: Infrastructure relationships.
Figure 12.9: Infrastructure-oriented configuration diagram.
Chapter 13: Dialog and action modeling
Figure 13.1: Bridging aspects of dialogs.
Figure 13.2: The dialog and action environment.
Figure 13.3: Multiple-dialog utilization.
Figure 13.4: Dialog model framework.
Figure 13.5: Cluster store instance.
Figure 13.6: Action model framework.
Figure 13.7: Action data transfer dynamics.
Figure 13.8: Action dynamics case 1.
Figure 13.9: Action dynamics case 2.
Chapter 14: Software component modeling
Figure 14.1: Reuse framework.
Figure 14.2: Reuse strategy.
Chapter 15: Workflow modeling
Figure 15.1: Workflow dynamics.
Figure 15.2: Workflow map structure.
Figure 15.3: Data access structure.
Figure 15.4: Workflow reference model. (Source: WfMC.)
Figure 15.5: Logical configuration model.
Figure 15.6: Physical configuration model.
Chapter 16: Overview of process implementation methodology
Figure 16.1: Automation system model.
Figure 16.2: Process implementation architecture schematic.
Figure 16.3: Waterfall methodology.
Figure 16.4: Evolutionary methodology.
Figure 16.5: Build-and-test methodology.
Figure 16.6: Spiral approach to development.
Figure 16.7: PRIME spirals.
Figure 16.8: Methodology structure of PRIME.
Chapter 18: Step 1: Define/refine process map
Figure 18.1: Example of a process map.
Figure 18.2: Step information flows.
Figure 18.3: Diagram of step information flow.
Figure 18.4: Updated step information flow diagram.
Figure 18.5: Revised process map.
Figure 18.6: Consistent information animation result.
Figure 18.7: Inconsistent information animation result.
Figure 18.8: Activity sequence diagram.
Chapter 19: Step 2: Identify dialogs
Figure 19.1: Example of organization partitioning.
Figure 19.2: Example of role partitioning.
Figure 19.3: Example of input transition partitioning.
Figure 19.4: Example of timebreak partitioning.
Figure 19.5: Example of a dialog map.
Figure 19.6: Activity sequence diagram.
Chapter 20: Step 3: Specify actions
Figure 20.1: Constrained decomposition procedure.
Figure 20.2: Revised constrained decomposition.
Figure 20.3: Example of action template.
Figure 20.4: Diagram of activity sequence.
Chapter 21: Step 4: Map actions
Figure 21.1: Analysis matrix structure.
Figure 21.2: Diagram of activity sequence.
Chapter 22: Step 4(a): Provision software components
Figure 22.1: External component class structure.
Figure 22.2: Activity sequence diagram.
Chapter 23: Step 5: Design human interface
Figure 23.1: Human-to-automation coupling.
Figure 23.2: Process implementation elements and probability of error.
Figure 23.3: General feedback configuration.
Figure 23.4: Feedback applied to the human interface.
Figure 23.5: Concept of impedance matching.
Figure 23.6: Impedance matching in the human interface.
Figure 23.7: HII determination.
Figure 23.8: Example of an HII request.
Figure 23.9: Example of an HII response.
Figure 23.10: Diagram of activity sequence.
Chapter 24: Step 6: Determine workflow
Figure 24.1: Task definition.
Figure 24.2: A workflow map.
Figure 24.3: Task workflow environment.
Figure 24.4: Types of topology constructs.
Figure 24.5: Single-engine configuration.
Figure 24.6: Two or more load-sharing engines.
Figure 24.7: Two or more chained engines.
Figure 24.8: Hierarchical engine configuration.
Figure 24.9: Topology analysis example.
Figure 24.10: Example of a logical model.
Figure 24.11: Activity sequence diagram.
Chapter 25: Step 7: Assemble and test
Figure 25.1: Assembly dynamics.
Figure 25.2: Automation environment elements.
Figure 25.3: Activity sequence diagram.
Chapter 26: Step 8: Deploy and operate
Figure 26.1: Continuous process improvement.
Figure 26.2: Activity sequence diagram.
List of Tables

Chapter 5: Business rules
Table 5.1: Emergency Response Rule Metamodel
Table 5.2: Sales Tax Calculation Rule Metamodel
Chapter 8: Process modeling
Table 8.1: Example of Leaf Process Documentation
Table 8.2: Major Process-Based Quality Approaches
Chapter 9: Scenario modeling
Table 9.1: Example of Attribute Subsection
Table 9.2: First Example of Context Section
Table 9.3: Second Example of Context Section
Table 9.4: Third Example of Context Section
Table 9.5: First Example of Context Section
Table 9.6: Second Example of Context Section
Table 9.7: Third Example of Context Section
Table 9.8: Component Scenario Attribute Subsections
Table 9.9: Compound Scenario Attribute Subsection
Table 9.10: Compound Scenario ID Derived From Component IDs
Chapter 10: Role modeling
Table 10.1: Role Attributes and Values
Chapter 24: Step 6: Determine workflow
Table 24.1: Examples of Load Rules