HANDBOOK ON BUSINESS INFORMATION SYSTEMS

edited by

Angappa Gunasekaran
University of Massachusetts, USA

Maqsood Sandhu
University of Oulu, Finland

World Scientific
New Jersey • London • Singapore • Beijing • Shanghai • Hong Kong • Taipei • Chennai
Published by
World Scientific Publishing Co. Pte. Ltd.
5 Toh Tuck Link, Singapore 596224
USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601
UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE
British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library.
HANDBOOK ON BUSINESS INFORMATION SYSTEMS Copyright © 2010 by World Scientific Publishing Co. Pte. Ltd. All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher.
For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.
ISBN-13 978-981-283-605-2 ISBN-10 981-283-605-2
Typeset by Stallion Press. Email: [email protected]
Printed in Singapore.
Preface

The contemporary state of rapid information technology (IT) development has led to the formation of a global village that has swiftly transformed business, trade, and industry. It has reshaped business management processes in health care delivery and data mining management in industry, and has turned business information systems into strategic tools for managing businesses. Information technology has led to the development of sophisticated supply chain models and the evolution of comprehensive business information systems. Overall, business models have shifted from traditional hierarchical control to more customer-oriented e-business models supported by networks of information systems. This handbook explores the need for business information management in different contexts and sectors of business. The competitive environment dictates intense rivalry among the participants in the market. Similar value propositions and products, coupled with the broad range of choices available to customers in terms of both delivery and product differentiation, have made business information systems and their management one of the key drivers of business success and sustainable competitive advantage.

The Handbook on Business Information Systems is divided into six major parts dealing with different aspects of information management systems in various business scenarios. A total of 37 chapters cover the vast field of business information systems, focusing particularly on developing information systems that capture and integrate information technology together with people and their businesses. A brief introduction to each part is given below.

Part I of the book, Health Care Information Systems, consists of four chapters that focus on providing global leadership for the optimal use of health care IT. It provides knowledge about the best use of information systems for the betterment of health care services. These chapters deal with healthcare supply chain information systems, the role of the CIO in the development of healthcare information systems, information systems for handling patients' complaints, and the development of quality management systems in hospitals.

Part II, Business Process Information Systems, composed of nine chapters, extends previous theory in the area of process development by recognizing that improvements in intra-organizational information systems need to be complemented by corresponding improvements in inter-organizational processes. The chapters cover topics such as modeling and analysis of business processes, reengineering business processes, implications of culture for logistics and information systems, performance measures in information systems, social management systems, knowledge management systems, and risk management in ERP.
With eight chapters, Part III deals with Industrial Data and Management Systems and captures the main challenges faced by industry, such as the changes in the operations paradigm of manufacturing and service organizations. These chapters cover significant topics in business information systems, including information systems for sustainability, strategies for enhancing innovation culture, decision support systems for line balancing, innovation in logistics using RFID, implications of RFID, implications of information systems for operational efficiency, interactive technology for strategic planning, and key performance indicators in information systems evaluation.

Next, Part IV, Strategic Business Information Systems, with five chapters, discusses the use of information technology in small industries and the analysis of digital business ecosystems. The important topics considered by these chapters are applications of IT/IS in small companies; the relationships between information systems, business, marketing, and CRM; the transfer of business and information management systems; digital business ecosystem analysis; and the information content of accounting numbers.

Part V, with five chapters on Information Systems in Supply Chain Management, deals with different challenges and opportunities in the field and discusses supply chain performance along with applications of various technologies. These chapters provide an excellent overview of the implications of supply chain enabling technologies, supply chain management, managing supply chain performance in SMEs, information sharing in service supply chains, and RFID applications in the supply chain.

Finally, Part VI, Evaluation of Business Information Systems, discusses the adoption of systems development methodologies and the security pattern of business systems, along with useful mathematical models. This part has six chapters dealing with tools for decision-making in IT/IS, the application of quantitative models in information management, measurement challenges in IT/IS, object-oriented metacomputing, B2B architecture using web services technology, and the role of computer simulation in supply chain management.

An edited book of this nature can provide useful conceptual frameworks, managerial challenges including strategies and tactics, technologies, and practical knowledge and applications to a variety of audiences, including academics, research students, and practitioners interested in the management of business information systems.

The editors are most grateful to the authors of the chapters in this handbook, who have gone through several cycles of revisions, for their continued cooperation in finalizing the chapters. We are thankful for the excellent reviews of the more than 100 reviewers who read the chapters and helped to improve their quality. We are also deeply indebted to the editorial team of the publisher for their highly constructive comments, which have greatly enhanced the quality of this work. Without their timely support and insightful editorial changes, this handbook would not have been completed. We are especially grateful to Ms. Shalini, the in-house editor, for her prompt responses and coordination throughout the compilation of this manuscript.
Also, we are thankful to the publisher, who has been a great source of inspiration in completing this book project in a timely manner. Finally, our heartfelt thanks go to our families for their patience and support over the last two years.

Angappa Gunasekaran
University of Massachusetts Dartmouth, USA

Maqsood Sandhu
University of Oulu, Finland and UAE University, UAE
Editor Biographies
Angappa Gunasekaran is a Professor of Operations Management and the Chairperson of the Department of Decision and Information Sciences in the Charlton College of Business at the University of Massachusetts (North Dartmouth, USA). He has previously held academic positions in Canada, India, Finland, Australia, and Great Britain. He holds a BE and an ME from the University of Madras and a PhD from the Indian Institute of Technology. He teaches and conducts research in operations management and information systems. He serves on the editorial boards of 20 journals and edits several journals. He has published over 200 articles in journals and 60 articles in conference proceedings, and has edited three books. In addition, he has organized several conferences in the emerging areas of operations management and information systems. He has extensive editorial experience, including serving as guest editor of many high-profile journals. He has received outstanding paper and excellence in teaching awards. His current areas of research include supply chain management, enterprise resource planning, e-commerce, and benchmarking. He is also the Director of the Business Innovation Research Center at the University of Massachusetts Dartmouth.

Dr. Maqsood Sandhu is an Associate Professor at Oulu Business School, University of Oulu, Finland. Currently, he is working at the Department of Management, College of Business and Economics, United Arab Emirates University, Al Ain. He earned a PhD in Management from the Swedish School of Economics and Business Administration. Dr. Sandhu has worked for over five years in project-based industry. He has about 15 international journal articles and book chapters to his name, has presented over 50 papers, and has published approximately 40 articles in international conferences. Currently, his research interests lie in the areas of project management, knowledge management, and entrepreneurship. He is also the Head of Innovation Laboratories at the Emirates Centre for Innovation and Entrepreneurship.
Contents
Preface . . . . . v

Editor Biographies . . . . . ix

Part I: Health Care Information Systems . . . . . 1

Chapter 1    Healthcare Supply Chain Information Systems via Service-Oriented Architecture . . . . . 3
             Sultan N. Turhan and Özalp Vayvay

Chapter 2    The Role of the CIO in the Development of Interoperable Information Systems in Healthcare Organizations . . . . . 25
             António Grilo, Luís Velez Lapão, Ricardo Jardim-Goncalves and Virgilio Cruz-Machado

Chapter 3    Information Systems for Handling Patients' Complaints in Health Organizations . . . . . 47
             Zvi Stern, Elie Mersel and Nahum Gedalia

Chapter 4    How to Develop Quality Management System in a Hospital . . . . . 69
             Ville Tuomi

Part II: Business Process Information Systems . . . . . 91

Chapter 5    Modeling and Managing Business Processes . . . . . 93
             Mohammad El-Mekawy, Khurram Shahzad and Nabeel Ahmed

Chapter 6    Business Process Reengineering and Measuring of Company Operations Efficiency . . . . . 117
             Nataša Vujica Herzog

Chapter 7    Value Chain Re-Engineering by the Application of Advanced Planning and Scheduling . . . . . 147
             Yohanes Kristianto, Petri Helo and Ajmal Mian

Chapter 8    Cultural Auditing in the Age of Business: Multicultural Logistics Management, and Information Systems . . . . . 189
             Alberto G Canen and Ana Canen

Chapter 9    Efficiency as Criterion for Typification of the Dairy Industry in Minas Gerais State . . . . . 199
             Luiz Antonio Abrantes, Adriano Provezano Gomes, Marco Aurélio Marques Ferreira, Antônio Carlos Brunozi Júnior and Maisa Pereira Silva

Chapter 10   A Neurocybernetic Theory of Social Management Systems . . . . . 221
             Masudul Alam Choudhury

Chapter 11   Systematization Approach for Exploring Business Information Systems: Management Dimensions . . . . . 245
             Albena Antonova

Chapter 12   A Structure for Knowledge Management Systems Assessment and Audit . . . . . 269
             Joao Pedro Albino, Nicolau Reinhard and Silvina Santana

Chapter 13   Risk Management in Enterprise Resource Planning Systems Introduction . . . . . 297
             Davide Aloini, Riccardo Dulmin and Valeria Mininno

Part III: Industrial Data and Management Systems . . . . . 321

Chapter 14   Asset Integrity Management: Operationalizing Sustainability Concerns . . . . . 323
             R. M. Chandima Ratnayake

Chapter 15   How to Boost Innovation Culture and Innovators? . . . . . 359
             Andrea Bikfalvi, Jari Jussila, Anu Suominen, Jussi Kantola and Hannu Vanharanta

Chapter 16   A Decision Support System for Assembly and Production Line Balancing . . . . . 383
             A. S. Simaria, A. R. Xambre, N. A. Filipe and P. M. Vilarinho

Chapter 17   An Innovation Applied to the Simulation of RFID Environments as Used in the Logistics . . . . . 415
             Marcelo Cunha De Azambuja, Carlos Fernando Jung, Carla Schwengber Ten Caten and Fabiano Passuelo Hessel

Chapter 18   Customers' Acceptance of New Service Technologies: The Case of RFID . . . . . 431
             Alessandra Vecchi, Louis Brennan and Aristeidis Theotokis

Chapter 19   Operational Efficiency Management Tool Placing Resources in Intangible Assets . . . . . 457
             Claudelino Martins Dias Junior, Osmar Possamai and Ricardo Gonçalves

Chapter 20   Interactive Technology Maps for Strategic Planning and Research Directions Based on Textual and Citation Analysis of Patents . . . . . 487
             Elisabetta Sani, Emanuele Ruffaldi and Massimo Bergamasco

Chapter 21   Determining Key Performance Indicators: An Analytical Network Approach . . . . . 515
             Daniela Carlucci and Giovanni Schiuma

Part IV: Strategic Business Information Systems . . . . . 537

Chapter 22   The Use of Information Technology in Small Industrial Companies in Latin America — The Case of the Interior of São Paulo, Brazil . . . . . 539
             Otávio José De Oliveira and Guilherme Fontana

Chapter 23   Technology: Information, Business, Marketing, and CRM Management . . . . . 565
             Fernando M. Serson

Chapter 24   Transfer of Business and Information Management Systems: Issues and Challenges . . . . . 585
             R. Nat Natarajan

Chapter 25   Toward Digital Business Ecosystem Analysis . . . . . 607
             Aurelian Mihai Stanescu, Lucian Miti Ionescu, Vasile Georgescu, Liviu Badea, Mihnea Alexandru Moisescu and Ioan Stefan Sacala

Chapter 26   The Dynamics of the Informational Contents of Accounting Numbers . . . . . 639
             Akinloye Akindayomi

Part V: Information Systems in Supply Chain Management . . . . . 653

Chapter 27   Supply Chain Enabling Technologies: Management Challenges and Opportunities . . . . . 655
             Damien Power

Chapter 28   Supply Chain Management . . . . . 675
             Avninder Gill and M. Ishaq Bhatti

Chapter 29   Measuring Supply Chain Performance in SMEs . . . . . 699
             Maria Argyropoulou, Milind Kumar Sharma, Rajat Bhagwat, Themistokles Lazarides, Dimitrios N. Koufopoulos and George Ioannou

Chapter 30   Information Sharing in Service Supply Chain . . . . . 717
             Sari Uusipaavalniemi, Jari Juga and Maqsood Sandhu

Chapter 31   RFID Applications in the Supply Chain: An Evaluation Framework . . . . . 737
             Valerio Elia, Maria Grazia Gnoni and Alessandra Rollo

Part VI: Tools for the Evaluation of Business Information Systems . . . . . 763

Chapter 32   Tools for the Decision-making Process in the Management Information System of the Organization . . . . . 765
             Carmen De Pablos Heredero and Mónica De Pablos Heredero

Chapter 33   Preliminaries of Mathematics in Business and Information Management . . . . . 791
             Mohammed Salem Elmusrati

Chapter 34   Herding Does Not Exist or Just a Measurement Problem? A Meta-Analysis . . . . . 817
             Nizar Hachicha, Amina Amirat and Abdelfettah Bouri

Chapter 35   Object-Oriented Metacomputing with Exertions . . . . . 853
             Michael Sobolewski

Chapter 36   A New B2B Architecture Using Ontology and Web Services Technology . . . . . 889
             Youcef Aklouf

Chapter 37   The Roles of Computer Simulation in Supply Chain Management . . . . . 911
             Jia Hongyu and Zuo Peng

Index . . . . . 945
Part I Health Care Information Systems
Chapter 1
Healthcare Supply Chain Information Systems via Service-Oriented Architecture

SULTAN N. TURHAN
Department of Computer Engineering, Galatasaray University
Çırağan cad. No: 36 34357, Ortaköy, Istanbul, Türkiye
[email protected]

ÖZALP VAYVAY
Department of Industrial Engineering, Marmara University
Göztepe Campus, 34722 Kadıköy, Istanbul, Türkiye
[email protected]

Healthcare supply chain management differs from other applications in terms of its key elements. Misalignment, high costs for healthcare providers, and heavy dependence on third parties, distributors, and manufacturers are the main troublesome issues for the healthcare supply chain. At the same time, some supply chain components in the health sector occupy a different position compared with the materials handled in other supply chains. In particular, the specific consumables used in surgical operations are significant in terms of both usage and cost. In some cases, doctors may not have a firm opinion on the exact quantity of consumables they will use before starting an operation. On the other hand, it is not always possible to keep all these materials in stock because of their high cost. Moreover, due to inefficiencies in the social security system in Turkey, the social security institutions do not always agree to pay for the materials used. Worse still, the related information generally reaches the hospital management with a significant delay.

Keywords: Healthcare; supply chain management (SCM); service-oriented architecture (SOA); vendor-managed inventory (VMI).
1. Introduction

Ferihan Laçin Hospital is a medium-scale 53-bed hospital. Four different groups of materials are used during a working day:

1. Ordinary Supplies: These are housekeeping materials such as paper towels, bedclothes, soap, cleansing agents, and disinfectants. This kind of material is not included in the scope of this research.
2. Drugs: This group consists of the typical drugs used in a hospital, including anesthetic drugs.
3. Medical Materials: This group contains medical disposable items, surgical dressings, and medical papers used in medical devices such as electrocardiographs.
4. Special Surgical Materials and Equipment: This group contains special surgical materials such as stents (thin tubes inserted into a tubular structure, e.g., a blood vessel, to hold it open or remove a blockage) and prosthetic and orthopedic products.

This last group differs from the others. For the other groups, the hospital's employees, such as doctors, practitioners, nurses, and technicians, can decide the quantity of materials that they are going to consume while working. The decision on the use of materials in this group can only be made by specialists such as surgeons; however, even they often have no exact idea of how many they will consume during the operation and must make this decision during the operation itself. Because of this constraint, the management of the supply chain for these materials is completely different from the others.

Currently, the hospital has no defined rules either for inventory management or for a complete SCM. Only a pharmacist and one staff member work in the hospital's pharmacy, and they handle all purchasing for all materials. In fact, understaffing is very common in small- and medium-sized hospitals. The pharmacist's mission is to control and audit all the drugs used throughout the hospital, especially the anesthetic drugs. The pharmacist is also responsible for ensuring the availability of medical materials and for updating supplier information, net procurement costs, batch sizes, and so forth. The pharmacist's performance measure, on the other hand, is to provide the correct materials at the required time. In healthcare, time is a major constraint, because even a delay of a second may endanger a person's life. The staff member working with the pharmacist has no training in pharmaceutics and is responsible for managing orders and payments, controlling the bills, entering the bill information into the hospital's information system, checking the boxes, and obtaining proposals from suppliers. Each proposal must contain information about the medical supplies' unit prices and the conditions of payment. All proposals are examined by the pharmacist and the purchasing director, who decide together on the supplier from which the materials will be purchased. A simple order process carried out by the staff consists of the following steps:

1. Take proposals from different suppliers.
2. Present the candidate suppliers' proposals to the pharmacist and the purchasing director, who select the supplier.
3. Contact the selected supplier by phone, fax, or e-mail.
4. Place the order.
This system has several weaknesses. First, there are no prescribed parameters defining the quantity of such orders; the pharmacist decides the quantities intuitively. The process is also time consuming, because the pharmacist must check the materials one by one every day and the staff must spend hours communicating with the suppliers to get proposals and place orders. The system also has a negative effect on the suppliers' side: they are always forced to provide the necessary items at short notice, sometimes within a day. They must always be ready to answer the hospital's demands quickly, and this causes intense competition in the market. The last weakness is caused by the payment schedule of the Turkish government. The major part of healthcare costs in Turkey is still covered by the government, and the market share of private health insurance companies in this sector is not significant; today, only 1 out of 162 people has private health insurance. However, the Turkish government disburses payments to private hospitals only after three months. Under these circumstances, the hospitals in turn propose to pay their suppliers with a three-month delay. Because most medical materials are very expensive, both suppliers and hospitals face difficulties in managing their financial situation. All these problems become worse in the case of special surgical materials and equipment. At the same time, competition among small- and medium-scale hospitals is growing intensely; therefore, all hospitals have to improve the quality of their healthcare services while reducing their operational costs.

The research started with an analysis of the business processes and system requirements of this specific application in a hospital.a Then, a new idea was developed to control the purchase and consumption of special medical and surgical devices, especially during operations. A telemedicine application is implemented between the hospital information system and the government system to provide real-time online observation of surgical operations. In addition, all processes have been designed according to service-oriented architecture (SOA), because SOA provides a much more agile environment for process orchestration, for integration across applications, and for collaboration between users. Each process has been defined as a Web Service (WS). With this architecture, another problem arises: the structure of the information exchanged. Allowing cooperation among distributed and heterogeneous applications is a major need for the current system.

In this research, we try to model an efficient pharmaceutical SCM to eliminate the problems cited above. The new system is developed to optimize inventory control, reduce material handling costs, and manage the balance of payments between the government and the suppliers. SCM is a strategy for optimizing the overall supply chain by sharing information among material suppliers, manufacturers, distributors, and retailers (Dona et al., 2001). Our supply chain consists of suppliers, hospitals, and the government.

a Ferihan Laçin Hospital, Istanbul, Turkey. www.ferihanlacin.com
The key element of SCM is information sharing (Dona et al., 2001). Information sharing improves collaboration across the supply chain to manage material flow efficiently and reduces inventory cost (EHCR Committee, 2001). Therefore, we decided to adopt a vendor-managed inventory (VMI) model to optimize inventory control and then to reengineer the processes according to a new architecture, SOA. In addition, we propose a different management style for the usage of special surgical materials and equipment and their SCM. In Sec. 2, the suggested process remodeling is presented item by item while defining VMI. Section 3 illustrates the technologies used. Finally, the benefits of the developed system are discussed.

2. Process Remodeling

2.1. Constraints

It is very difficult to design an efficient pharmaceutical system that improves the Quality of Service (QoS) given to patients when no rules have been determined by the hospital management. The hospital's management takes TQM principles seriously, but the processes have never been examined or documented. Moreover, the nature of the supply chain is very complex. The first objective of this supply chain is not only to lower procurement cost and improve cash flow, but also to assure the appropriate drug, medical material, or special surgical material at the right time, in the right place. Another important issue is the preservation of the drugs. Each drug has an expiry date, and some of them, for example anesthetic drugs, need to be preserved more securely. Innovation and change in the drug sector are frequent, and it is very common to substitute an unavailable drug with an equivalent one. An information system is therefore needed, but information sharing (IS) adoption in healthcare affects and is affected by human and organizational actors (Vasiliki et al., 2007). Thus, it is not only the information systems that need to be put in place; an effective process solution for how to transfer demand is also needed (Riikka et al., 2002).

As there was no efficient and effective inventory control in the hospital, we decided to model our new SCM system according to SOA principles with VMI techniques. Before implementing our system, the information shared between supplier and hospital, between hospital and government, and between supplier and government was insufficient. This resulted in a high rate of emergency order calls, high stock levels, a poor balance of payments, and, of course, patient dissatisfaction. To solve this problem, we first started to implement a VMI system in the hospital's warehouses. We show that by effectively harnessing the information now available, one can design and operate the supply chain much more efficiently and effectively than ever before.
2.2. What is VMI?

The VMI approach can improve supply chain performance by decreasing inventory-related costs and increasing customer service. Unlike a traditional supply chain, wherein each member manages its own inventories and makes individual stocking decisions, VMI is a collaborative initiative in which a downstream customer (a hospital in our case) shifts the ownership of inventories to its immediate upstream supplier and allows the supplier to access its demand information in return. In particular, a VMI process involves the following two steps: (1) the downstream customer provides demand information to its immediate upstream supplier and leaves the stocking decisions to that supplier; and (2) the upstream supplier retains ownership of the inventories until they are shipped to the customer and bears the risk of demand uncertainty. It is not difficult to see that the VMI structure promotes collaboration between suppliers and customers through information sharing and business process reengineering.

VMI is an alternative to traditional order-based replenishment practices. VMI changes the approach to solving the problem of supply chain coordination. Instead of just putting more pressure on suppliers' performance by requiring ever faster and more accurate deliveries, VMI gives the supplier both the responsibility and the authority to manage the entire replenishment process. The customer company (a hospital in our case) provides the supplier access to inventory and demand information and sets the targets for availability. Thereafter, the supplier decides when and how much to deliver. The measure of the supplier's performance is no longer delivery time and precision; it is availability and inventory turnover. This is a fundamental change that affects the operational mode of both the customer and the supplier company. Therefore, the advantages to both parties must be evident for the shift to VMI to happen (Lee and Whang, 2000).

We cannot deny the advantages of VMI in our case. Before implementing VMI, it was the pharmacist's mission to manage the inventory, which resulted in inefficiencies. The adoption of VMI started with contracts between the suppliers and the hospital. These contracts are realized not only on paper, but also via WSs (described in the next section). In the contracts, the role of controlling the inventory level of each drug or medical supply is given to an appropriate supplier, with the unit price and payment schedule defined. With this system, the suppliers' experts control the stock levels instead of the pharmacist. The new system allows the pharmacists to do their own job and also creates time for the supplier to plan deliveries. Obviously, the more time the supplier has for planning, the better it is able to serve the hospital and optimize operations.

The other problem faced in hospital inventory management is the lack of a proper classification schema. There are many different drugs and medical supplies. Each of them is produced by a different manufacturer and may be used as a substitute for another.
It is very inefficient to manage all these products without a proper classification, because it is not possible to make a contract for each of them. We already mentioned that stock control will be done by the suppliers' experts. The main question, then, is who decides the order quantity. We did not leave this decision either to the supplier's expert or to the pharmacist. Order quantities are calculated by the information system based on demand forecasting and safety stock levels. The hospital has its own information system to manage the stock level of each product, and our system uses the information produced by that system to obtain order quantities.

2.3. Information and Document Sharing

Information sharing (IS) is a collaborative program in which the downstream firm (the hospital herein) agrees to provide demand and inventory status in real time to the upstream firm (a supplier herein) (Lee and Whang, 2000). VMI provides closer collaboration between the supplier and the hospital in our case. That is why the hospital must be able to reengineer its processes through real-time information sharing, enabled by electronic data interchange (EDI). With this system, we propose to provide integrated information sharing between hospital and suppliers, and between hospital and government. By sharing information about product usage, it is much easier to keep the inventory at a proper level. In addition, the system must be able to keep logs of products, insurance codes, and information about new drugs. We designed the system to be accessible in real time and to be integrated via WSs with any service provider, including the government.

With the new architecture, all the processes are remodeled according to SOA principles. While remodeling the processes, we took into consideration the WS policy defined by the Turkish government; the standards and protocols produced by Health Level Seven, one of several American National Standards Institute (ANSI)-accredited Standards Developing Organizations (SDOs) operating in the healthcare arena for particular healthcare domains such as pharmacy, medical devices, imaging, or insurance (claims processing) transactions; the codes defined by the Anatomical Therapeutic Chemical Classification System with Defined Daily Doses (ATC/DDD) index published by the WHO; and the National Information Bank (UBB) codes published by the Ministry of Health of Turkey. On the other hand, the hospital's traditional method of exchanging and processing orders and order documents via phone or fax results in time inefficiency and a high rate of errors. The process depends totally on staff performance, which is not acceptable in healthcare. The new system allows suppliers to obtain the order requirements, to control the inventory level of the hospital's central warehouse, and to exchange documents in XML format via WSs. With this system model, the order processing of supply chain participants can be enhanced significantly.
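As an illustration of the document exchange just described, the sketch below shows what a minimal order document might look like as a JAXB-annotated Java class. The class name, fields, and codes are assumptions made for illustration; an actual schema would follow the HL7 and UBB/ATC code lists mentioned above.

```java
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

// Hypothetical XML order document exchanged between the hospital and a supplier
// via a web service. Field names and codes are illustrative only.
@XmlRootElement(name = "order")
@XmlAccessorType(XmlAccessType.FIELD)
public class Order {
    @XmlElement private String ubbCode;    // national product code (UBB)
    @XmlElement private String atcCode;    // ATC/DDD classification code
    @XmlElement private int quantity;      // quantity requested
    @XmlElement private String contractId; // VMI contract governing price and payment terms

    public Order() { }                     // JAXB requires a no-argument constructor

    public Order(String ubbCode, String atcCode, int quantity, String contractId) {
        this.ubbCode = ubbCode;
        this.atcCode = atcCode;
        this.quantity = quantity;
        this.contractId = contractId;
    }
}
```

Marshalled to XML, such an object becomes the payload of the SOAP message that the order and payment services exchange (see Sec. 4).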
2.4. What is Service-Oriented Architecture (SOA)?

With the growth of real-time computing and communication technologies like the Internet, batch interfaces posed a challenge. When the latest information about a given business entity was not updated in all dependent systems, the result was lost business opportunities, decreased customer satisfaction, and increasing problems. SOA may be seen as the new face of enterprise application integration (EAI). We can also define SOA as a business-driven information technology (IT) architectural approach that helps businesses innovate by ensuring that IT systems can adapt quickly, easily, and economically to support rapidly changing business needs. SOA is not a technology; it is an architectural approach built around existing technologies. SOA advocates a set of practices, disciplines, designs, and guidelines that can be applied using one or more technologies and, being an architectural concept, it is flexible enough to lend itself to multiple definitions. SOA offers a unique perspective into the business that was previously unavailable: it offers a real-time view of what is happening in terms of transactions, usage, and so forth. In anticipation of the discovery of new business opportunities or threats, the SOA architectural style aims to provide enterprise business solutions that can extend or change on demand. SOA solutions are composed of reusable services with well-defined, published, and standards-compliant interfaces. SOA provides a mechanism for integrating existing legacy applications regardless of their platform or language.

The key element of SOA is the service. A service can be described as "a component capable of performing a task" (David and Lawrence, 2004). Although a service can be seen as a task or an activity, it is more complicated than these concepts, because every service has a contract, an interface, and an implementation routine. Josuttis (2007) states that a service has the following attributes:

• Self-contained: Self-contained means independent and autonomous. Although there can be exceptions, a service should be self-contained, and for services to be self-contained their interdependencies should be kept to a minimum.
• Coarse-grained: This indicates the level of implementation detail exposed to consumers. Implementation details are hidden from a service consumer because the consumer does not care about such details.
• Visible/Discoverable: A service should be visible and easily reachable. This is also important for reusability, which means that a service can be used multiple times in multiple systems.
• Stateless: Services should ideally, but not always, be stateless. This means that one service request does not affect another, because service calls do not hold invocation parameters and execution attributes in a stateless service.
• Idempotent: Idempotent means the ability to redo or roll back. In some cases, while a service is executing, a bad response can be returned to the service consumer. In such a case, service consumers can roll back or redo the service execution.
• Composable: A composable service can contain several subservices, which can be separated from the main service. A composable service can call another composable service.
• QoS and Service Level Agreement (SLA)-Capable: A service should provide some non-functional requirements such as runtime performance, reliability, availability, and security. These requirements represent QoS and SLA.
• Pre- and Post-conditions: Pre- and post-conditions specify the constraints and benefits of the service execution. The pre-condition represents the state before the service execution; the post-condition represents the state after it.
• Vendor Diverse: SOA is neither a technology nor a product. It is also platform (or vendor) independent, which means that it can be implemented by different products. When calling a service, one does not need to be familiar with the technology used for the service.
• Interoperable: Services should be highly interoperable; they can be called from any other system. Interoperability provides the ability of different systems and organizations to work together. In other words, services can be called from any other system regardless of the type of environment.

The second important issue is to explicitly define the two key roles in an SOA: the service provider and the service consumer. The service provider publishes a service description and provides the implementation of the service, whereas the service consumer can either use the uniform resource identifier of the service description directly or find the service description in a service registry and then bind and invoke the service. Figure 1 illustrates the relationship between a service provider and a service consumer. As mentioned above, a service is a software resource with an externalized service description. This service description is available for searching, binding, and invocation by a service consumer. The service provider realizes the service description implementation and also delivers the QoS requirements to the service consumer. Services should ideally be governed by declarative policies and thus support a dynamically re-configurable architectural style.

The services can be used across internal business units or across the value chains among business partners in a fractal realization pattern. Fractal realization refers to the ability of an architectural style to apply its patterns, and the roles associated with the participants in its interaction model, in a composite manner; it can be applied to one tier in an architecture and to multiple tiers across the enterprise architecture. That is why defining the services according to SOA concepts is the most crucial step when modeling a system.
Figure 1. SP&SC relationship (© IBM).
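To make the provider and consumer roles in Fig. 1 concrete, the following hedged sketch uses JAX-WS, the technology adopted later in this chapter. The SupplierCatalogService class, its operation, and the endpoint URL are illustrative assumptions rather than part of the deployed system.

```java
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// Hypothetical provider: the @WebService annotation lets the runtime generate and
// publish a WSDL service description that any consumer can find, bind to, and invoke.
@WebService
public class SupplierCatalogService {

    // A coarse-grained, stateless operation: one call returns what the consumer
    // needs, and no conversational state is kept between requests.
    public String getSupplierByContract(String contractId) {
        // ... look up the supplier bound to this VMI contract ...
        return "SUP-001";
    }

    public static void main(String[] args) {
        // Publishing exposes the service description at .../supplier?wsdl,
        // which is the artifact the service consumer binds against.
        Endpoint.publish("http://localhost:8080/supplier",
                         new SupplierCatalogService());
    }
}
```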
Conceptually, there are three major levels of abstraction within SOA:

• Operations: Transactions that represent single logical units of work (LUWs). Execution of an operation will typically cause one or more persistent data records to be read, written, or modified. SOA operations are directly comparable to object-oriented (OO) methods: they have a specific, structured interface and return structured responses. Just as for methods, the execution of a specific operation might involve the invocation of additional operations.
• Services: Logical groupings of operations.
• Business Processes: A long-running set of actions or activities performed with specific business goals in mind. Business processes typically encompass multiple service invocations.

According to Ali Arsanjani, PhD, Chief Architect, SOA and WSs Center of Excellence, IBM, the process of service-oriented modeling and architecture consists of three general steps: identification, specification, and realization of services, components, and flows (typically, choreography of services).

• Service identification: This process consists of a combination of top-down, bottom-up, and middle-out techniques of domain decomposition, existing asset analysis, and goal-service modeling. In the top-down view, a blueprint of business use cases provides the specification for business services. This top-down process is often referred to as domain decomposition, which consists of the decomposition of the business domain into its functional areas and subsystems, including its flow or process decomposition into processes, subprocesses, and high-level business use cases. These use cases are often very good candidates for business services exposed at the edge of the enterprise, or for those used within the boundaries of the enterprise across lines of business.
In the bottom-up portion of the process, or existing system analysis, existing systems are analyzed and selected as viable candidates for providing lower-cost solutions for the implementation of underlying service functionality that supports the business process. In this process, you analyze and leverage APIs, transactions, and modules from legacy and packaged applications. In some cases, componentization of the legacy systems is needed to re-modularize the existing assets for supporting service functionality. The middle-out view consists of goal-service modeling to validate and unearth other services not captured by either top-down or bottom-up service identification approaches. It ties services to goals and subgoals, key performance indicators, and metrics.
• Service classification or categorization: This activity starts once services have been identified. It is important to classify services into a service hierarchy, reflecting the composite or fractal nature of services: services can and should be composed of finer-grained components and services. Classification helps determine composition and layering, as well as coordinating the building of interdependent services based on the hierarchy. It also helps alleviate the service proliferation syndrome, in which an increasing number of small-grained services get defined, designed, and deployed with very little governance, resulting in major performance, scalability, and management issues. More importantly, service proliferation fails to provide services that are useful to the business and that allow economies of scale to be achieved.
• Subsystem analysis: This activity takes the subsystems found during domain decomposition and specifies the interdependencies and flow between them. It also puts the use cases identified during domain decomposition as exposed services on the subsystem interface. The analysis of the subsystem consists of creating object models to represent the internal workings and designs of the containing subsystems that will expose the services and realize them. The design construct of "subsystem" will then be realized as an implementation construct of a large-grained component realizing the services in the following activity.
• Component specification: In the next major activity, the details of the component that implements the services are specified:
  • Data
  • Rules
  • Services
  • Configurable profile
  • Variations
Messaging and events specifications and management definition occur at this step.
• Service allocation: Service allocation consists of assigning services to the subsystems that have been identified so far. These subsystems have enterprise components that realize their published functionality. Often you make the simplifying assumption that the subsystem has a one-to-one correspondence with the enterprise components.
Structuring of components occurs when you use patterns to construct enterprise components with a combination of:
  • Mediators
  • Façades
  • Rule objects
  • Configurable profiles
  • Factories
Service allocation also consists of assigning the services, and the components that realize them, to the layers in the SOA. Allocation of components and services to layers in the SOA is a key task that requires the documentation and resolution of key architectural decisions that relate not only to the application architecture but also to the technical operational architecture designed and used to support the SOA realization at runtime.
• Service realization: This step recognizes that the software that realizes a given service must be selected or custom-built. Other available options include integration, transformation, subscription, and outsourcing of parts of the functionality using WSs. In this step, it is decided which legacy system module will be used to realize a given service and which services will be built from the ground up. Realization decisions for services beyond business functionality include the security, management, and monitoring of services.

In reality, projects tend to capitalize on any amount of parallel effort to meet closing windows of opportunity. Top-down domain decomposition (process modeling and decomposition, variation-oriented analysis, policy and business rules analysis, and domain-specific behavior modeling using grammars and diagrams) is conducted in parallel with a bottom-up analysis of existing legacy assets that are candidates for componentization (modularization) and service exposure. To capture the business intent behind the project and to align services with this business intent, goal-service modeling is conducted.

In SOA terms, a business process consists of a series of operations that are executed in an ordered sequence according to a set of business rules. The sequencing, selection, and execution of operations are termed service or process choreography. Typically, choreographed services are invoked in response to business events. Therefore, we have to model our business processes according to service concepts.

SOA analysis and design differ from other forms of analysis and modeling. Service-oriented modeling requires additional activities and artifacts that are not found in traditional OO analysis and design. Experience from early SOA implementation projects suggests that existing development processes and notations such as Object-Oriented Analysis and Design (OOAD), Enterprise Architecture (EA), and business process management (BPM) only cover part of the requirements needed to support the SOA paradigm.
While the SOA approach reinforces well-established, general software architecture principles such as information hiding, modularization, and separation of concerns, it also adds additional themes such as service choreography, service repositories, and the service bus middleware pattern, which require explicit attention during modeling (Olaf et al., 2004).

There is one more important point to mention here. When one starts an SOA project, the first thing that comes to mind is to define WSs. Yet the SOA research road map defines several roles. The service requester (or client) and the provider must both agree on the service description (the Web Service Definition Language, WSDL, definition) and the semantics that will govern their interaction for WSs to interact properly in composite applications. A complete solution must address semantics not only at the terminology level, but also at the levels at which WSs are used and applied in the context of business scenarios: the business process and protocol levels. Thus, a client and provider must agree on the implied processing, context, and sequencing of messages exchanged between interacting services that are part of a business process. In addition to the classical roles of service client and provider, the road map also defines the roles of service aggregator and operator. Service modeling and service-oriented engineering (service-oriented analysis, design, and development techniques and methodologies) are crucial elements for creating meaningful services and business process specifications. These are an important requirement for SOA applications that leverage WSs and apply equally well to all three service planes. SOA should abstract away the logic at the application or business level, such as order processing, from non-business-related aspects at the system level, such as the implementation of transactions, security, and reliability policies. This abstraction should enable the composition of distributed business processes and transactions. The software industry now widely implements a thin Simple Object Access Protocol (SOAP)/WSDL/Universal Description, Discovery and Integration (UDDI) veneer atop existing applications or components that implement the WSs, but this is insufficient for commercial-strength enterprise applications. Unless the component's nature makes it suitable for use as a WS (and most are not), properly delivering a component's functionality through a WS takes serious redesign effort (Papazoglou et al., 2007).

On the other hand, our job would not be complete by only defining these services according to SOA. While migrating to SOA, some other points should be taken into consideration. These include:

• Adoption and Maturity Models: Every level of adoption has its own unique needs; therefore, the enterprise's maturity level in the adoption of SOA and WSs must be determined at the beginning.
• Assessments: During the migration, controls and assessments must be performed after each step.
• Strategy and Planning Activities: The steps, tools, methods, technologies, standards, and training to be taken into account must be declared at the beginning; therefore, a roadmap must be presented.
• Governance: SOA has the ability to expose a legacy application's API as a service. Every API must be examined to decide which ones are eligible. Every service should be created with the intent to bring value to the business in some way.

3. Case Study

Our main goal in this study is to apply both new working areas to transform the supply chain into a single, integrated model that improves patient care and customer service while decreasing procurement costs. The first is to reengineer the business processes with SOA. The second is SOA modeling itself. As stated in the paper by Zimmerman et al. (2004), SOA modeling is a very new area and there are no strictly defined rules on this subject. Therefore, we began by modeling the VMI process described above according to SOA.

All the departments request necessary items, and the items are delivered from the hospital's warehouse to the requesting departments. As required by the definition of VMI, the supplier needs to manage the hospital's overall inventory control system and order processing system and then make the order delivery schedule according to the contract signed by the supplier and the hospital. In a traditional VMI system, the supplier takes both the responsibility and the authority to manage the entire replenishment process; the customer company provides the supplier access to the inventory and demand information and sets the targets for availability (Riikka et al., 2002). Here, instead of allowing the supplier to intervene directly in the legacy system used by the hospital, we believe it is more appropriate to produce the information needed by the supplier through a service architecture and orchestration on top of that system. Although one may think that such business process modeling could be realized with other modeling approaches such as OOD or EA, it is certain that Service-Oriented Architecture Design (SOAD) is more efficient in defining the human-based tasks that we eventually need in this model. SOAD must be predominantly process-driven rather than use-case driven. The method is no longer use-case oriented, but driven by business events and processes; use case modeling comes in as a second step at a lower level. In the SOA paradigm, business process choreography, maintained externally to the services, determines the sequence and timing of execution of the service invocations. SOAD provides an excellent solution to these issues: as it groups services on the basis of related behavior, rather than encapsulation (behavior plus data), the set of services will be subtly different from a business object model.

An order is created when the stock amount falls below the Stock Keeping Unit (SKU) level calculated by the hospital's legacy system. For each pharmaceutical, a separate order item is created, containing details of order quantities and the rules defined in the contract.
As the supplier manages the hospital's stock, it is ready to provide the necessary amount of each pharmaceutical. The main problem is the delivery lead time: a suitable shipping method needs to be scheduled for each pharmaceutical. Each dispatch may contain one or several pharmaceuticals, and it must be determined which pharmaceutical is urgent and which may be dispatched with the others. When the items arrive at the hospital, the pharmacist and the staff member must verify the boxes and approve the task waiting in the system to declare that the order is correct. If it is not correct, they must specify the details, and a new job is started to handle the discrepancies. When they approve that the order is correct, the supplier's legacy system produces the invoice and sends it via WSs to the hospital for payment.

The second part of the supply chain is the receipt of payment for the consumed products, either from the insurance companies or from the government. Here, the invoices for products used for the benefit of patients are sent to the Ministry of Health, and payment is made against these invoices. The payment part includes the payments made to the suppliers by the hospital, and to the hospital by either the government or the insurance companies, upon the control and approval of the invoices. These transactions are again structured on WSs and the orchestration among them.

The main point here, as mentioned before, is to provide the supply of expensive products whose use has been decided for an operation while their quantity and payment conditions are still unknown. The established system can send the necessary information to the information systems of both the suppliers and the Ministry of Health (or the related insurance company) once the date and venue of the operation, and the type and estimated amount of the products to be used during the operation, have been decided. The telemedicine support explained earlier steps in here. To enable the doctors to communicate easily and to access the system from anywhere, independently of their everyday computers, a telemedicine module has been designed using Adobe Flash technology, which is very common nowadays. With the help of this module, the end users may communicate with each other either interactively or via one-way video conference. So that all sessions can be replayed, the .flv file format has been selected. These files are stored in folders named using system variables that identify their owners, and a separate database stores all the data related to these files. This method was selected to facilitate management and to reduce the load on the database. The open-source Red5 Flash Server, which uses the Real Time Messaging Protocol (RTMP) to provide the simultaneous exchange of information, was selected as the server; the Flash Media Server can also be used as an alternative. In this way, the operation may be watched either in real time or later, in the desired period.
the opportunity of making a decision on the necessity of the products used during the operation not only from the epicrisis, but also from watching the actual operation. This is certainly an important step toward deciding impartially, without any external influence. The processes were modeled according to SOA, and we obtained the services listed below and shown in Fig. 2:
1. Supplier service
   a. Look up the supplier by contract
   b. Create a new supplier
   c. Get the supplier information
2. Inventory service (legacy system)
   a. Determine the quantity on hand of an item
   b. Compare it with the SKU level
   c. Determine the order quantity
   d. Determine the expected arrival date
   e. Inventory management
      i. Physical review
      ii. Closing
3. Order service
   a. Create the order
   b. Schedule the order date and time
   c. Get the offerings
   d. Deliver the order
   e. Receive the delivery
4. Scheduling service
   a. Take the delivery schedule
   b. Schedule the delivery date and time
5. Payment service
   a. Hospital-Supplier
   b. Government/IC-Hospital
6. Telemedicine service
   a. Approval of medical supply usage and its quantity
   b. Rejection of medical supply usage and its quantity
7. Utilization service
   a. Create a new utilization by department
   b. Track the status of the delivery request
   c. Decrease the stock quantity
   d. Increase the usage amount
   e. Mark up the patient's file
Services are modeled in Fig. 2:
Figure 2. Services.
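To make the granularity of these services more concrete, the sketch below shows how one of them, the Inventory service of Fig. 2, could be written down as a Java service endpoint interface. The operation names, namespace, and types are hypothetical illustrations and not the definitions used in the project.

```java
import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebService;

// Hypothetical contract for the "Inventory service" listed above.
// Names and types are illustrative only.
@WebService(name = "InventoryService",
            targetNamespace = "http://example.hospital.org/inventory")
public interface InventoryService {

    // 2a. Determine the quantity on hand of an item.
    @WebMethod
    int getQuantityOnHand(@WebParam(name = "itemCode") String itemCode);

    // 2b/2c. Compare the current stock with the SKU level and
    // return the quantity that should be ordered (0 if none).
    @WebMethod
    int determineOrderQuantity(@WebParam(name = "itemCode") String itemCode);

    // 2d. Expected arrival date of the pending replenishment, as ISO-8601 text.
    @WebMethod
    String getExpectedArrivalDate(@WebParam(name = "itemCode") String itemCode);
}
```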
4. System Implementation
While modeling and implementing the system, we used IBM DB2 Express-C, IBM WebSphere Application Server 6.1, IBM WebSphere Business Modeler 6.1, IBM Rational Software Architect 6.1, IBM WebSphere Integration Developer 6.1, IBM WebSphere Process Server 6.1, and IBM WebSphere Monitor 6.1. We installed the WebSphere 6.1 Feature Pack for WSs on our application server, WebSphere Application Server 6.1. We selected this software because it allows us to communicate with other vendors in a more reliable, asynchronous, secure, and interoperable way. It also supports the Java API for XML Web Services (JAX-WS) 2.0 programming model and SOAP 1.2, which removes most of the ambiguities that existed in previous versions of SOAP. JAX-WS 2.0 is a Java programming model for creating WSs. Its most important feature is that it provides an asynchronous client model, which makes WSs easier to develop and deploy. Another important feature of JAX-WS is its support for WS-I Basic Profile 1.1: WSs developed with JAX-WS can be consumed by any client written in any programming language that supports this basic profile.
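As a rough illustration of the asynchronous client model mentioned above, the following sketch uses the dynamic JAX-WS 2.0 Dispatch API to call a hypothetical supplier stock operation without blocking the caller. The service name, endpoint URL, and payload are invented for the example and are not the project's actual artifacts.

```java
import java.io.StringReader;
import java.util.concurrent.Future;
import javax.xml.namespace.QName;
import javax.xml.transform.Source;
import javax.xml.transform.stream.StreamSource;
import javax.xml.ws.AsyncHandler;
import javax.xml.ws.Dispatch;
import javax.xml.ws.Response;
import javax.xml.ws.Service;
import javax.xml.ws.soap.SOAPBinding;

public class AsyncStockClient {
    // Namespace, service/port names, endpoint address, and payload are hypothetical.
    private static final String NS = "http://example.hospital.org/inventory";

    public static void main(String[] args) throws Exception {
        QName serviceName = new QName(NS, "InventoryService");
        QName portName = new QName(NS, "InventoryPort");

        // Build a dynamic client without generated stubs.
        Service service = Service.create(serviceName);
        service.addPort(portName, SOAPBinding.SOAP11HTTP_BINDING,
                "http://supplier.example.org/inventory");

        Dispatch<Source> dispatch =
                service.createDispatch(portName, Source.class, Service.Mode.PAYLOAD);

        Source request = new StreamSource(new StringReader(
                "<getQuantityOnHand xmlns=\"" + NS + "\">"
                + "<itemCode>PHARM-001</itemCode></getQuantityOnHand>"));

        // Asynchronous invocation: the caller is not blocked while the
        // supplier's system processes the request.
        Future<?> pending = dispatch.invokeAsync(request, new AsyncHandler<Source>() {
            public void handleResponse(Response<Source> response) {
                try {
                    System.out.println("Reply received: " + response.get());
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });

        System.out.println("Request sent, doing other work while waiting...");
        while (!pending.isDone()) {
            Thread.sleep(100); // placeholder for real work
        }
    }
}
```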
The Java Architecture for XML Binding (JAXB) enables data mapping between XML Schema and Java. The XML payload is carried inside the SOAP message, and JAXB defines this binding for us, so we do not need to parse the SOAP and XML messages ourselves. SOAP with Attachments API for Java (SAAJ), in turn, handles XML attachments in SOAP messages. Figure 3 illustrates which product is used in which step. Before coding the WSs, we first modeled the business process with WebSphere Business Modeler 6.1; the process flow is shown in Appendix A. We then defined the services one by one, as illustrated in Fig. 2. For service orchestration, SOA needs a middleware layer, generally an Enterprise Service Bus (ESB). In our project, we preferred to use the Web Services Business Process Execution Language (WS-BPEL) as the service orchestration tool. WS-BPEL is an orchestration language: an orchestration describes an executable process that interacts with other systems dynamically, and the control of this process is handled by an orchestration engine such as WebSphere Process Server. This language combines two notions:
1. BPM, and
2. Web Services (WSs).
Figure 3. Software used.
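The role JAXB plays can be pictured with a small, self-contained sketch: a hypothetical order-item XML fragment is bound to a plain Java class, so no hand-written SOAP/XML parsing is needed. The element names are illustrative only.

```java
import java.io.StringReader;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

public class JaxbBindingDemo {

    // Hypothetical order-item document; element names are illustrative only.
    @XmlRootElement(name = "orderItem")
    public static class OrderItem {
        @XmlElement public String itemCode;
        @XmlElement public int quantity;
        @XmlElement public String contractRule;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<orderItem><itemCode>PHARM-001</itemCode>"
                   + "<quantity>40</quantity>"
                   + "<contractRule>max-2-days-lead-time</contractRule></orderItem>";

        // JAXB turns the XML payload into a plain Java object;
        // no manual SOAP/XML parsing is required.
        JAXBContext ctx = JAXBContext.newInstance(OrderItem.class);
        OrderItem item = (OrderItem) ctx.createUnmarshaller()
                                        .unmarshal(new StringReader(xml));
        System.out.println(item.itemCode + " -> " + item.quantity);
    }
}
```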
The most crucial step for organizations in today's business world is to manage and improve their business processes while keeping workflow components loosely coupled and interoperable. This is where Web Services come into play. Web Services are self-contained, modular business building blocks that achieve interoperability between applications; they use Web standards such as WSDL, UDDI, and SOAP. In real-life scenarios, we need to join different WSs and use them as a single entity. This is where we need a new language that encapsulates all the required WSs and exposes the business process as a single WS: WS-BPEL. In WS-BPEL, the business process is defined as follows: "A business process specifies the potential execution order of operations from a collection of WSs, the data shared between these WSs, which partners are involved and how they are involved in the business process, joint exception handling for collections of WSs, and other issues involving how multiple services and organizations participate." (Weerawarana and Curbera, 2002). WS-BPEL extends the WSs interaction model and enables it to support business transactions. WS-BPEL uses WSDL to specify the interface between the business process and the outside world, describing the actions in a business process and the WSs a business process provides. The business process itself and its partners (the services with which the business process interacts) are modeled as WSDL services, so WS-BPEL plays the role of composing existing services. The business process is described as a collection of WSDL portTypes, just like any other WS, as illustrated in Fig. 4. WS-BPEL is the top layer; it builds on WSDL, which in turn uses SOAP, a protocol for exchanging structured information, while UDDI acts as a registry of all published WSs. We prefer to model the orchestration of WSs with WS-BPEL because it can handle complex cases. For example, it supports loops and scopes in the business process logic, which an ESB does not, and it supports long-running business processes in which the state information of the process must be maintained, which an ESB also does not.
Figure 4. WS defined by WS-BPEL.
With WS-BPEL, we are able to orchestrate a business process, using WebSphere Process Server (WPS) as the business process choreographer. One of the major features of WPS is that it supports human tasks, which enable human activities to be invoked as services, and human activities have an important role in the healthcare industry. Our requirements are process-centric, so WS-BPEL is the better choice for this project. In our scenario, each pharmaceutical is provided by a specific medical supplier in a specific quantity, and all this information is kept in database tables. The SUPPLIER table has a field named "Endpoint Address" that holds the medical supplier's end-point address. When the quantity of a pharmaceutical in stock is not sufficient, a trigger is fired and a WS request is sent to the WS provider, the medical supplier; this is where we act as a consumer. There are also WSs that we offer to our partner medical suppliers: they can check our stock to learn the quantities of pharmaceuticals we have and keep their own stock ready; this is where we act as a provider. In the big picture, we are both a consumer and a provider, and this is where SOA begins.
5. Conclusion
When analyzed, the system that we are trying to establish has been observed to have many benefits. First, thanks to the VMI application, the workload of the pharmacists and the other employees has been decreased. In Turkey, hospitals with fewer than 100 beds are not obliged to employ a pharmacist. Taking this into account, this system, by unifying the different telemedicine applications, may also help a pharmacist to support more than one hospital; this situation will be the subject of another study. On the other hand, thanks to this application, the pharmacist is no longer expected to be knowledgeable about stock management or cost reduction; on the contrary, these operations will be carried out by the real experts. Moreover, there are many benefits in stock management as well. Drugs are difficult materials to store and preserve: each drug has an expiry date, there is an enormous number of different drugs, and most of them can be used interchangeably. Besides, as mentioned before, in the healthcare sector the non-availability of a drug may produce far more serious results, since human life is concerned. Therefore, effective stock management structured in this way, together with efficient material handling, will certainly improve the quality of the service offered to the patients in the hospital. Second, the information integration and a successful supply chain will eventually result in strong integration among the partners of the system, i.e., the government, hospitals, and wholesalers. Thanks to the services produced, all the required information may be used by the other institutions within the limits of the permission given by the service provider institution. The only constraint here is that each provider and consumer must work with the same semantics to understand the WSs. Furthermore, the error-prone manual practices, such as papers lost among the files, products that are forgotten to be ordered, and numerous phone calls, will be removed entirely. When the specific importance of the sector studied is taken
into account, it is extremely important to minimize the deadlocks arising from human mistakes. The system established also provides serious gains both in placing orders and in stock management. We do not yet have feedback on the results of the integration between the government/insurance companies and the hospitals, recommended especially for consumables, as that part of the system had not yet been implemented when this chapter was written; however, it is clear that the approach is very promising. A pharmaceutical SCM system, covering special-purpose pharmaceuticals as well, has been modeled and developed in this research to optimize the supply chain. Although the whole supply chain is composed of raw material suppliers, pharmaceutical companies, wholesalers, hospitals, and patients, we focused especially on implementing a new, non-traditional VMI model in the hospital warehouse by sharing electronic data via WSs between hospitals and wholesalers. We cannot deny that a lot of effort is still required to improve the efficiency of the total supply chain with regard to manufacturers, the government, and insurance companies. Also, hospitals must be willing to adopt this system, have total trust in their wholesalers, and share their inventory information with them. To extend the benefits presented in this chapter, standards for exchanging information electronically must be established and adopted. This is where the semantics of WSs comes in and occupies a large place in exchanging the data.
References
Arbietmann, D, E Lirov, R Lirov and Y Lirov (2001). E-commerce for healthcare supply procurement. Journal of Healthcare Information Management, 15(1), 61–72.
Arsanjani, A (2004). Service-oriented modeling and architecture: How to identify, specify, and realize services for your SOA. IBM Whitepaper, November 2004, http://www.ibm.com/developerworks/library/ws-soa-design1
Cingil, I and A Dogac (2001). An architecture for supply chain integration and automation on the internet. Distributed and Parallel Databases, 10, 59–102.
EHCR Committee (2001). Improving Supply Chain Management for Better Healthcare. http://www.eccc.org/ehcr/ehcr/
Erl, T (2005). Service-Oriented Architecture: A Field Guide to Integrating XML and Web Services. Prentice Hall, NJ, USA.
Josuttis, NM (2007). SOA in Practice: The Art of Distributed System Design. e-book, O'Reilly Media Inc., USA.
Jung, S, T-W Chang, E Sim and J Park (2005). Vendor managed inventory and its effect in the supply chain. AsiaSim 2004, LNAI 3398, 545–552.
Kaipia, R, J Holmström and K Tanskanen (2002). VMI: What are you losing if you let your customer place orders? Journal of Production Planning & Control, 13(1), 17–25.
Lee, HL and S Whang (2000). Information sharing in a supply chain. International Journal of Manufacturing Technology and Management, 1(1), 79–93.
Leymann, F, D Roller and M-T Schmidt (2002). Web services and business process management. IBM Systems Journal, 41(2), 198–211.
Mantzana, V, M Themistocleous, Z Irani and V Morabito (2007). Identifying healthcare actors involved in the adoption of information systems. European Journal of Information Systems, 16, 91–102.
Omar, WM and A Taleb-Bendiab (2006). Service-oriented architecture and computing. IT Professional, 35–41.
Papazoglou, MP and B Kratz (2007). Web services technology in support of business transactions. Service Oriented Computing and Applications, 1(1), 51–63.
Papazoglou, MP, P Traverso, S Dustdar and F Leymann (2007). Service-oriented computing: State of the art and research challenges. Computer, IEEE Computer Society, June, 38–45.
Polatoglu, VN (2006). Nazar foods company: Business process redesign under supply chain management context. Journal of Cases on Information Technology, 2–14.
Siau, K (2003). Health care informatics. IEEE Transactions on Information Technology in Biomedicine, 7(1), 1–7.
Simchi-Levi, D, P Kaminsky and E Simchi-Levi (2008). Designing and Managing the Supply Chain: Concepts, Strategies, and Case Studies, 3rd Edn. McGraw-Hill Irwin.
Sprott, D and L Wilkes (2004). Understanding service-oriented architecture, CBDI Forum. The Architectural Journal, 1, 2.
Weerawarana, S and F Curbera (2002). Business process with BPEL4WS: Understanding BPEL4WS, Part 1, concepts in business processes. IBM Whitepaper, August 2002. http://www.ibm.com/developerworks/webservices/library/ws-bpelcol1
Yao, Y and M Dresner (2008). The inventory value of information sharing, continuous replenishment, and vendor-managed inventory. Transportation Research Part E, 44, 361–378.
Zimmerman, O, P Krogdahl and C Ghee (2004). Elements of service-oriented analysis and design, an interdisciplinary modeling approach for SOA projects. IBM Whitepaper, June 2004. http://www.ibm.com/developerworks/webservices/library/ws-soad1
Biographical Notes
Sultan N. Turhan received her ASc degree in computer programming in 1992 from Boğaziçi University, her BA degree in business administration in 1998 from Marmara University, and her MSc degree in computational science and engineering in 2003 from Istanbul Technical University. She is currently a PhD student in Engineering Management at Marmara University, and her research subject is "industrial applications of data warehouses." Between 1992 and 1998, she worked as a database administrator, IT project coordinator, and IT manager in different institutions. Since 1998, she has been working as a senior lecturer in the Computer Engineering Department of Galatasaray University. Since 2002, she has also been working for Intelitek — Element A.S. as an academic consultant for distance learning and e-learning platforms. Her professional interests are synchronized distance learning, e-learning, databases, data warehouses, data mining, and knowledge management.
Özalp Vayvay, PhD, works in the Industrial Engineering Department at Marmara University and is currently the Chairman of the Engineering Management Department there. His current research interests include new product design, technology management, business process reengineering, total quality management, operations management, and supply chain management. Dr. Vayvay has been involved in R&D projects and education programs over the past 10 years.
Appendix A. Workflow diagram
Chapter 2
The Role of the CIO in the Development of Interoperable Information Systems in Healthcare Organizations
ANTÓNIO GRILO∗,$, LUÍS VELEZ LAPÃO†, RICARDO JARDIM-GONÇALVES‡ and VIRGÍLIO CRUZ-MACHADO∗,¶
∗ UNIDEMI, Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa, Portugal
† INOV — INESC Inovação, Lisboa and CINTESIS, Faculdade de Medicina da Universidade do Porto, Porto, Portugal
‡ UNINOVA, Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa, Portugal $
[email protected]; †
[email protected] ‡
[email protected]; ¶
[email protected]
A major challenge for business information systems (BIS) design and management within healthcare units is the need to accommodate the complexity and changeability in terms of their clinical protocols, technology, business and administrative processes. Interoperability is the response to these demands, but there are many ways to achieve an "interoperable" information system. In this chapter we address the requirements of a healthcare interoperability framework to enhance enterprise architecture interoperability of healthcare organizations, while maintaining the organization's technical and operational environments, and installed technology. The healthcare interoperability framework is grounded on the combination of model driven architecture (MDA) and service-oriented architecture (SOA) technologies. The main argument in this chapter is to advocate the role of the chief information officer (CIO) in dealing with the challenges posed by the need to achieve BIS grounded on interoperability at the different layers, and in developing and executing an interoperability strategy, which must be aligned with the health organization's business, administrative, and clinical processes. A case study of the Hospital de São Sebastião is described, demonstrating the critical role of the CIO in the development of an interoperable platform.
Keywords: Interoperability; healthcare; chief information officer; complexity; model driven architecture (MDA); service-oriented architecture (SOA); healthcare interoperability framework.
1. Overview
The healthcare sector is characterized by complexity and rapid developments in terms of clinical protocols, technology, business and administrative processes. This poses a major challenge for BIS design and management within healthcare units, as the ICT infrastructure needs to accommodate the changes while simultaneously responding to the pressure for integrated information flows. Interoperability is the response to these demands, but there are many ways to achieve an "interoperable" information system. In Sec. 2 of this chapter we describe the existing general approaches for developing information systems that are able to cope with interoperability requirements, and review the main concepts of the model-driven architecture (MDA) and service-oriented architecture (SOA) approaches. In Sec. 3, we point out the importance of information systems (IS) in the healthcare sector, the IS functions in healthcare units, the case for the need for interoperability and, finally, a generic healthcare interoperability framework that combines MDA and SOA. The main argument of this chapter is laid out in Sec. 4, where we advocate the critical role of the CIO in developing and implementing adequate but complex information systems strategies, supported by a flexible and robust healthcare interoperability framework that leads to integration/interoperable platforms. The argument advanced is grounded on the increasing business, cross-functional, and leadership skills that are required to convince decision makers to produce the investment (resources, time, "patience") needed for the deployment of interoperable ICT infrastructures. Section 5 describes the case study of Hospital São Sebastião (HSS), an innovative and vanguard Portuguese hospital that has implemented an interoperable platform for internal purposes, as its information systems strategy was grounded on having "best-of-breed" applications for each functional business, administrative and clinical area. The case illustrates how the CIO of the HSS developed and implemented the interoperable platform grounded on the healthcare interoperability framework (HIF), and the importance of his different skills in achieving success in the endeavor. Finally, Sec. 6 concludes with a look at the increasing relevance that the CIO will have in healthcare units as the dynamics of healthcare continue to accelerate and evolve. Moreover, it addresses, as a future challenge, the need to understand and model the role of the CIO using complex systems theory.
2. Business Information Management Systems and Interoperability
Today, many proposals are available to represent data models and services for the main business and manufacturing activities, and the same holds true for the health sector. Some are released as International Standards (e.g., ISO, UN), others are developed at the regional or national level (e.g., CEN, DIN), and others are developed by independent project teams and groups (e.g., OMG, W3C, IAI, ebXML).
Most of the available healthcare standard-based models have been developed in close contact with the health service industry, including the requirements of public and private organizations, following an established methodology. They use optimized software architectures, conferring configurable mechanisms focused on the concepts of extensibility and easy reuse. However, in the foreseen and desired BIS scenario of collaboration and flexibility, the heterogeneity and inadequacy to support interoperability of the selected and needed objects is delaying and even preventing its full integration, even when the objects are standard-based. Studies have shown that interoperability within and across public and private health organizations is far from being a reality, particularly in health public sector organizations (Kuhn et al., 2007). For the adoption of standardized models, processes, and services to be considered appropriate, applications must be prepared with suitable mechanisms and standardized interfaces easily adaptable for fast and reliable plug-and-play. Hence, the use of effective and de facto standards to represent data, knowledge, and services has shown to be fundamental in helping interoperability between systems. Thus, today this poses a major challenge to those responsible for designing and implementing BIS for health organizations. 2.1. Model-Driven Architecture The object management group (OMG) has been proposing the model-driven architecture (MDA) as a reference for achieving wide interoperability of enterprise models and software applications. MDA provides specifications for an open architecture appropriate for the integration of systems at different levels of abstraction and through the entire information systems life cycle (Mellor, 2004; Miller and Mukerji, 2001). Thus, this architecture is designed to incite interoperability of the information models independently of the framework in use (i.e., operating system, modeling and programming language, data servers, and repositories). The MDA comprises three main layers (Mellor, 2004; MDA, 2006). The CIM (computation-independent model) is the top layer and represents the most abstract model of the system, describing its domain. A CIM is a stakeholders-oriented representation of a system from the computation-independent viewpoint. A CIM focuses on the business and manufacturing environment in which a system will be used, abstracting from the technical details of the structure of the implementation system. The middle layer is the PIM (platform-independent model), and defines the conceptual model based on visual diagrams, use-case diagrams, and metadata. To do so it uses the standards UML (unified modeling language), OCL (object constraint language), XMI (XML metadata interchange), MOF (meta-object facility), and CWM (common warehouse metamodel). Thus, the PIM defines an application protocol in its full scope of functionality, without platform dependencies and constraints. For an unambiguous and complete definition, the formal description of the
PIM should use the correct business vocabulary, and choose the proper use-cases and interface specifications. The PSM (platform-specific model) is the bottom layer of the MDA. It differs from the PIM in that it targets a specific implementation platform. The implementation method of the MDA, also known as model-driven development (MDD), is therefore achieved through a transformation that converts the PIM to the PSM. This procedure can be followed through automatic code-generation for most of the system's backbone platforms, considering middleware-specific constraints, e.g., CORBA, .NET, J2EE, Web Services. The research community is also developing and validating other proposals, including those known as executable UML. With it, the abstract models described in UML are implemented and tested at a conceptual level, i.e., PIM, before being transformed for implementation in the targeted platform (Mellor, 2004).
2.2. The Service-Oriented Architecture
The World Wide Web Consortium (W3C) refers to the service-oriented architecture (SOA) as "a set of components which can be invoked, and whose interface descriptions can be published and discovered" (W3C, 2006). Also, according to Microsoft, the goal for SOA is a world-wide mesh of collaborating services that are published and available for invocation on a service bus (SOA, 2006). SOA does not consider the service architecture from a technological perspective alone, but also proposes a normalized service-oriented environment (SOE) offering services' description, registration, publication, and search functionalities (Figure 1). Placing its emphasis on interoperability, SOA combines the capacity to invoke remote objects and functions, i.e., the services, with standardized mechanisms for active and universal service discovery and execution. The service-oriented architecture offers mechanisms of flexibility and interoperability that allow different technologies to be integrated with great effectiveness, independent of the system platform in use. This architecture promotes reusability, and it has reduced the time needed to make available and gain access to a new system's functionalities, allowing enterprises to dynamically publish, discover, and aggregate a range of web services through the Internet.
Figure 1. Service-oriented environment based on SOA: the service provider publishes/registers its service description with a service broker; a service consumer searches the broker to find the service and then binds to and interacts with it.
Thus, SOA encourages organizations to focus on their business and services, free of the constraints of the applications and platforms. This is an essential feature for organizations to achieve information technology independence, business flexibility, agile partnership, and seamless integration into collaborative working environments and digital ecosystems. Well-known earlier technologies in this space include Microsoft's DCOM, IBM's DSOM protocol, and the OMG's Object Request Brokers (ORBs) based on the CORBA specification. Today, the use of W3C's web services is expanding rapidly as the need for application-to-application communication and interoperability grows. These can implement a business process integrating services developed internally and externally to the company, providing a standard means of communication among different software applications running on a variety of heterogeneous platforms through the Internet. Web services are implemented in XML (eXtensible Markup Language). The network services are described using WSDL (Web Services Description Language), and SOAP (Simple Object Access Protocol) is the communication protocol adopted. Services are registered in the UDDI registry (Universal Description, Discovery and Integration). Although it provides a significant contribution, SOA alone is not yet the answer to achieving seamless interoperability between applications. For example, despite the efforts undertaken to ensure compatibility between all the SOAP implementations, there is still no unique standard. The Web Services Interoperability Organization, WS-I, is a good example of an organization supporting Web services interoperability across platforms, operating systems, and programming languages. WS-I has been developing mechanisms for the convergence and support of generic protocols in the interoperable exchange of messages between web services (WS-I, 2006).
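As a minimal, hedged illustration of the publish side of this environment, the sketch below uses the standard JAX-WS Endpoint API to expose a hypothetical service over HTTP; with most JAX-WS runtimes, the WSDL description that a broker or consumer would discover is then served alongside the endpoint. The service name, operation, and address are assumptions made for the example, not artifacts from the chapter.

```java
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// Hypothetical service used only to illustrate publish/discover/bind:
// a provider exposes an operation and makes its contract available.
@WebService(targetNamespace = "http://example.hospital.org/demo")
public class EligibilityService {

    @WebMethod
    public boolean isCovered(String patientId, String procedureCode) {
        // Placeholder business rule for the sake of the example.
        return patientId != null && procedureCode != null;
    }

    public static void main(String[] args) {
        // Publishing the implementor starts an embedded HTTP endpoint;
        // with most JAX-WS runtimes the generated WSDL (the description
        // a broker or consumer needs) is served at the address + "?wsdl".
        Endpoint.publish("http://localhost:8080/eligibility",
                         new EligibilityService());
        System.out.println("Service published; press Ctrl+C to stop.");
    }
}
```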
3. Interoperability in the Healthcare Context
Healthcare services have been pushed toward change mostly due to the awareness of the increasing costs and large inefficiencies in the system. It is now accepted that healthcare is one of the most complex businesses, with a large diversity of types of interactions (Plsek and Wilson, 2001; Lapão, 2007a). Using IT to support service delivery may overcome part of the complexity. New IT solutions for sharing and interacting indeed represent an opportunity to further enhance citizens' participation in the healthcare process, which could improve the services' outcomes. Smith (1997) and IOM (2001) have proposed that only IT can bridge this chasm. IT represents a hope to help process change occur and to create new sorts of services focused on satisfying citizens' real needs. It is therefore necessary to develop communication strategies toward citizens that are properly aligned with the healthcare network services delivery strategy.
Interoperability of healthcare systems can play a critical role in this process of communication. Several e-Health initiatives and projects have been developed in recent years, some more successful than others, but in general the Information and Communication Technologies (ICT) used were not integrated into a broader e-government strategic plan. There is the awareness that these projects also represent an opportunity to learn and improve the way we are using ICT in healthcare. Today's healthcare technologies allow easy and fast detection of tumors, the probing of a tiny catheter into the heart to clean arteries, and the destruction of kidney calcifications without touching the skin. Nevertheless, simple things, such as distributing the right medicines at the right place or making sure the doctor's appointment takes place at the right time, can still represent enormous challenges (Lapão, 2007b; Mango and Shapiro, 2001). Progress has been slow. Healy (2000) considers that healthcare ICT systems are simply following a logistic curve, evolving like other industries, only lagging behind. Some examples of how ICT can be combined to offer high-value services to end-users, either domain experts or patients, are as follows:
Electronic Health Records Tele-monitoring of real-time data Alert Systems Pattern recognition in historical data Signal and image processing Inferences on patients’ data using the knowledge base Knowledge base editing Support to knowledge discovery analysis.
The report “To Err is Human” (IOM, 1999) launched the debate about the importance of using ICT in healthcare to avoid many human errors whereas interoperability rules’ utilization can provide additional pressure to help the proper use of technology in that regard. One should not forget, however, that the introduction of ICT in a healthcare environment requires carefully addressing the information data and management models and the integration of the organization information infrastructure (Lenz and Kuhn, 2002). Two years after “To Err is Human,” the report “Crossing the Quality Chasm: A New Health System for the 21st Century” (IOM, 2001) identified weaknesses in the design and safety of healthcare ICT which are mostly related with the lack of pressure to solve those issues, due to a lack of perception and the respective claim from the citizen. Introducing an interoperability framework, will bring up all the good things and create pressure (and a framework) toward dealing with the mistakes and errors. There is evidence that the citizen is more and more aware of the impact of these “mistakes and errors.” This growing awareness will potentially affect the healthcare business and this will become so evident that healthcare organizations will need to invest in order to avoid them (Lap˜ao et al., 2007).
March 15, 2010
14:44
WSPC/Trim Size: 9.75in x 6.5in
SPI-b778
b778-ch02
The Role of the CIO in the Development of Interoperable Information Systems 31
The development of integrated and interoperable information systems in healthcare is an essential requirement in the modernization of health (and welfare) systems. Today, there is some evidence that most hospitals and health centers, which for historical reasons have the support of a set of IS “islands,” show clear signs of inefficiency, lack of interoperability between existing systems, and weak IS integration with processes. As in any other economic sector, within healthcare, there are two main types of interoperability that can be identified, the technical and the semantics. Both require wide organizational agreement on standards. The first should take into consideration mostly the industry interoperability standards, and according to our interoperability approach, are dealt with by the PIM and PSM; the second should focus on proper healthcare business data models and processes, i.e., the CIM. Both are huge tasks to be accomplished. Both need people in the organization to deal with the tasks. There is an increasing number of activities seeking to address and measure interoperability. Organizations such as HITSP (Healthcare Information Technology Standards Panel) (in the USA) and CEN (in Europe) are defining standards that will be the support structure for interoperability. Specialized groups such as IHE are pushing the debate and developing interoperability profiles to tighten areas of ambiguity en route to stronger interoperability. The HL7’s Electronic Health Record (EHR) group has produced many reports and other materials to guide technology managers through the myriad of infrastructures in the transformation toward interoperability. There are a multitude of perspectives that must be considered regarding interoperability in the context of healthcare. 3.1. Semantics Semantics is a truly fundamental issue. Healthcare is a set of multidisciplinary fields that deals with the health and diseases of the human body. Because of the multiplicity of fields and different perspectives of the human organism, professionals need to share a common semantic framework in order to be able to work together. Only at this point we are able to meaningfully exchange and share business-pertinent healthcare information. The challenge is to maintain sufficient information richness and sufficient context for the information to be meaningful and useful to the consumer. This means that the information systems must be able to cope with refinement and evolution. 3.2. Computational Mechanism Even if we have the richest healthcare business semantics and are exchanging information on paper, we have not yet achieved the potential of EHRs and interoperability. While a purist view requires only information exchange, the format matters. Anyone who has traveled and forgotten a power adapter appreciates the
March 15, 2010
32
14:44
WSPC/Trim Size: 9.75in x 6.5in
SPI-b778
b778-ch02
A. Grilo et al.
difference. How things connect is important, and the better the infrastructure we put into place to allow systems to interoperate, the more flexible our organizations will be in adapting to changing needs and technologies. 3.3. Healthcare Business and Context Maintaining contextual relevance is important to interoperability. Receiving information items such as “systolic and diastolic values” does not convey enough information for that data to be useful. The same is true in the business context. “Patient self-entered” information may be less reliable than that entered by a healthcare professional, or perhaps not. For example, what if the issue is a healthcare family history, fundamental to the physician’s work and performance. Understanding metadata and contextual information has relevance as we seek to build interoperability bridges across organizations and enterprises. Conformance is required to guarantee that systems properly address the business process issues. The Healthcare Services Specification Project (HSSP) already includes the notion of conformance profiles, which comprise many of the abovementioned points. By building up a conformance verification that address the semantics and functions (e.g., the computational mechanism), we are taking a more comprehensive view of interoperability (HSSP, 2007). If we consider the idea of an implementation context, many of these issues come into focus. Including this concept into our perspective of conformance propels this notion forward. We can elect to have an implementation context bringing together the business perspective, policies, and relevant environmental context in play within an organization. We can do the same for a network (e.g., RHIO), national or international context (Kuhn et al., 2007). Finally, not everything needs to be standardized and interoperable. As long as we can precisely describe what is and what is not interoperable, we have the freedom to extend specifications to include what we need and still be useful. 3.4. The Healthcare Interoperability Framework (HIF) Most of the standards contain a framework including a language for data model description, a set of application reference models, libraries of resources, mechanisms for the data access and representation in neutral format. However, its architecture is typically complex. Especially due to its extent, to understand and dominate a standard completely is a long and arduous task (Bohms, 2001; Dataform 1997). This fact has been observed as one of the main obstacles for the adoption of standard models by the software developers. Even when they are aware of a standard that fits the scope of what they are looking for, quite often they prefer not to adopt it, and instead, create a new framework of their own (aecXML, 2006; CEN/ISSS, 2006; Berre, 2002).
March 15, 2010
14:44
WSPC/Trim Size: 9.75in x 6.5in
SPI-b778
b778-ch02
The Role of the CIO in the Development of Interoperable Information Systems 33
Generally, the standard data access interfaces are described at a very low level. Moreover, they are made available with all the complexity of the standard’s architecture to be managed and controlled by the user. This circumstance requires a significant effort from the implementers to use it, and is a source of systematic errors of implementation, for instance when there are functionalities for data access very similar, but with slight differences in attributes, names, or data types (ENV 13550, 2006; Pugh, 1997; Vlosky, 1998). To avoid the explosion in the number of required translators to cover all the existing standard data models, an extension of this methodology proposes the use of standard meta-model descriptions, i.e., the meta-model, using a standard meta-language, and letting the generators work with this meta-model information (Jardim-Goncalves and Steiger-Garc˜ao, 2002; Umar, 1999). With this methodology, changing one of the adopted standards for data exchange does not imply an update of the interface with the application using it, where only the low level library linked with the generated code needs to be substituted. If the platform stores a repository with several implementations of standard data access interfaces, the implementer can choose the one desired for the specific case, e.g., through a decision support multiplexing mechanism. In this case, the change for the new interface will be exercised automatically and the access to the new standard will be immediate. A proposal to address this situation considers the integration of SOA and MDA to provide a platform-independent model (PIM) describing the business requirements and representing the functionality of their services. These independent service models can then be used as the source for the generation of platform-specific models (PSM), depending on the web services executing platform. Within this scenario, the specifications of the execution platform will be an input for the development of the transformation between the MDA’s PIM and the targeted web services platform. With tools providing the automatic transformation between the independent description of the web services and the specific targeted platform, the solution for this problem could be made automatic and global. The appearance of HL7 completely changed the way interoperability was seen in healthcare. HL7 v2 was and remains wildly successful at allowing organizations to interchange information and interoperate, although some problems have emerged with the ambiguity in its “Z” segments, leading to some criticism. We have an opportunity now to benefit from the lessons learned from the past (successes and mistakes), taking the utility and flexibility offered by HL7 so as to give confidence that things will interoperate within an interoperability profile. However, HL7 v2 provides only a solution to the whole healthcare interoperability challenge. As depicted in Fig. 2, HL7 is a base for the PIM of healthcare. In order to have a fully interoperable environment, we will need to develop a CIM — i.e., the Healthcare Sector Model; configure a PIM — i.e., the healthcare clinical processes and procedures and business models, that are delivered by Services, grounded on HL7; and finally, by using many of the existing standards and technology, set up PSM.
March 15, 2010
34
14:44
WSPC/Trim Size: 9.75in x 6.5in
SPI-b778
b778-ch02
A. Grilo et al.
Healthcare Business Sector Model
CIM
PIM
Electronic Health Records
HL7 SDAI
PSM
Web Services
TeleMonitoring of Real Data
…
Patient Management
Web Services HL7 SDAI
…
Figure 2.
CORBA
Web Services
CORBA
XSL
Healthcare interoperability framework.
The deployment of the Healthcare Interoperability Framework requires the development of an Integration Platform (IP), which is characterized by the set of methods and mechanisms capable of supporting and assisting in the tasks for integration of applications. When the data models and toolkits working for this IP are standard-based, they would be called Standard-based Integration Platforms (Boissier, 1995; Nagi, 1997). The architecture of an IP can be described through several layers, and proposes using an onion layer model (Jardim-Goncalves et al., 2006). Each layer is devoted to a specific task, and intends to bring the interface with the IP from a low to a high level of abstraction and functionality. The main goal of this architecture is to facilitate the integration task, providing different levels of access to the platform and consequently to the data, covering several of the identified requirements necessary for integration of the applications (Jardim-Goncalves and Steiger-Garc˜ao, 2002). Another challenge is to build up management and ICT teams that would address the interoperability issues in accordance with the healthcare business perspective. The organizational side of the interoperability has been mostly forgotten (Ash et al., 2003; Lorenzi et al., 1997). Technology is more embellished and organizational issues are often rather obscure. Organizations tend to hide many important aspects, except those that show the organization in a good light. This kind of approach does not help when an organization is looking to improve and address difficult issues such as helping everyone work together in a proper way.
March 15, 2010
14:44
WSPC/Trim Size: 9.75in x 6.5in
SPI-b778
b778-ch02
The Role of the CIO in the Development of Interoperable Information Systems 35
We must also consider that considerable sums of public money have been invested in the development of electronic healthcare systems. The lack of overall coordination of these initiatives presents a major risk in achieving the goal of integrated healthcare records, which in turn will restrain the modernization program of healthcare services.
4. CIO Leadership Driving Interoperability in a Healthcare Environment For many years healthcare has been considered to be a different industry, away from cost control and dominated by the physician. Management and engineering were also not taken seriously (Lap˜ao, 2007a; Mango and Shapiro, 2001). At the beginning of the past century, the Mayo Clinic showed the way by introducing management and engineering best practices. At that time they had the leadership to open the way. Surprisingly, only in the last years are we seeing its diffusion. For many years healthcare managers simply had to obey and follow the physician commands and did not feel the pressure to optimize healthcare processes. Healthcare was supposed to serve the patients’ needs, whatever they were even if it seemed too expensive. At the same time, there were no existence of management information systems. Today there is evidence that information systems (IS) can be an important driver for healthcare efficiency and effectiveness. But in order to take advantage of IS, leadership is necessary for promoting the alignment of business with IS. One common barrier is the inadequacy of management tools and models to address healthcare intrinsic complexity (Lap˜ao, 2007a). When the theory of complexity is applied to strategies for implementing BIS, some interesting answers will be reached, as complex organizations and managers need to cope with complexity accordingly (Kauffman, 1998; Plsek and Wilson, 2001). Complexity in an organization can be understood as the ability of a group of interacting agents to auto-organize themselves, while obeying only a set of simple rules. For instance, as far as the healthcare government policies are concerned, the existence of an adequate regulatory body is required, following the international standards, the “simple rules” of action. Furthermore, at the operational level, it means that highly qualified professionals are needed to act as “agents.” In this environment the role of the CIO is critical to ensure good focus on the organization specificities and in the implementation of the IS (Lap˜ao, 2007b). In order to reach interoperability, more sophisticated IS teams are needed for dealing with the challenges and for breaking out of the “vicious-circle” (Fig. 3). Most CIOs today are very much focused on procuring, building, and maintaining the IT systems on which the healthcare business operates. Unfortunately, most CIOs do not have a seat at the decision making table. This situation is incomprehensible since technology assets are strategic in healthcare today. In order to
March 15, 2010
14:44
36
WSPC/Trim Size: 9.75in x 6.5in
SPI-b778
b778-ch02
A. Grilo et al. Lack of Skills in health care Information systems
Lack of Competition
Strategic Instability
Lack of Regulation
(Frequent changes in Hospital Boards)
Figure 3. Vicious-circle disturbing HIS development.
deliver value, CIOs must clearly understand the dynamics of the healthcare environment and be in a position to provide valuable input to the CEO and the board as a part of the management and planning process. CIOs who manage to leverage the uncertain market and technology opportunity will be able to provide more help to improve physicians’ and nurses’ productivity. This translates into value, which is most evident in three areas. 4.1. Decision Support Organizational strategic decisions are always very complex and time dependent. They involve many unknowns and many players acting at the same time. The CEO’s ability to set a course of action (vision) largely depends on the availability and interpretation of information, analysis, intuition, emotion, political awareness, and many other factors. The CIO who should have the healthcare system knowledge, including the competitive landscape, must be able to optimize the effective delivery of systems that provide much of the input the CEO and other senior management team members ultimately use in their daily functions. 4.2. Organizational (Value Chain) Visibility A proactive CIO is in a unique position in that her/his role is perhaps the only one within the organization that has significant visibility across most, if not all, functions within the organization. Finance and accounting, clinical services, maintenance and logistics, patient management, training and R&D, administration and other functions all have input (planning) and output (execution) relationship to strategy. The CIO should leverage the level of visibility into these functions, and in particular the regular interface with suppliers and patients (e.g., through the healthcare organization website) can help provide better-fit insights into opportunities to build competitive advantage for an organization.
March 15, 2010
14:44
WSPC/Trim Size: 9.75in x 6.5in
SPI-b778
b778-ch02
The Role of the CIO in the Development of Interoperable Information Systems 37
4.3. Corporate Governance In these complex times, organizational governance must be flexible and open to change. The CIO is the key person to play an ever-growing role in ensuring corporate compliancy and managing risk. Failure to comply with governmental regulations such as HIPAA could put a healthcare organization at significant financial risk. For instance, failure to comply with Sarbanes-Oxley mandates could land a CEO in jail. It is a strategic imperative that organizations understand the regulatory landscape and be able to ensure compliancy, otherwise it risks facing issues that could decidedly put it at a competitive disadvantage. For healthcare organizations that see the CIO’s role as merely technologist (an implementer or a maintainer of IT systems), the strategic importance of the role certainly is diminishing. However, when hospitals provide their CIOs with the opportunity to get involved and stay involved with the healthcare processes, and then leverage that knowledge with their technology know-how, the strategic importance of the CIO will increase. As the pace of change in healthcare quickens, organizations must not only be agile and adaptive, they must have quality too. The CIO as strategist can help organizations meet these demands. A Gartner Group study of more than 1,000 CIOs revealed that two-thirds of CIOs felt their jobs might be at risk because they did not deliver expected value on IT investments. This shows that IT is already recognized as strategic for the organizations (Gartner, 2005). There is some recognition that buying “best-of-breed” solutions is not an easy task in healthcare. Many Healthcare IS departments have been spending tremendous amounts of time, money, and energy trying to integrate disparate legacy (and new) solutions. As IS departments struggle to meet new business demands in such an ad hoc fashion, they become trapped by technical limitations (many even without business relevance) and are less and less able to respond effectively. Yet, surprisingly, healthcare organizations continue to invest in these “stand-alone” solutions. Cost containment and revenue growth are top priorities today, as is the ability to be more responsive and to adapt as necessary. Stand-alone systems do not help to meet these priorities. Clearly, CIOs must find a way to achieve better visibility in the organization, from operational execution (for instance, helping physicians and nurses improve their productivity while making fewer mistakes) to overall performance (new services deployment and patient satisfaction). A proactive CIO is expected, mostly in healthcare. This CIO should promote the development of effective knowledge processes and systems that are able to collect data from any source and deliver timely and accurate information to the right healthcare professional. The challenge for many CIOs, then, is to lead their hospital’s efforts to get real benefits and value from the use of IS investments. To meet current requirements with more agility facing the complexity of the healthcare industry, the CIO needs to learn to collaborate with his fellow managers, avoiding isolated approaches of the past and accepting that the risk of innovation in healthcare
March 15, 2010
38
14:44
WSPC/Trim Size: 9.75in x 6.5in
SPI-b778
b778-ch02
A. Grilo et al.
is the only way to provide long-term quality improvements. This also means that proper implementation assessment is required to sieze the information that allows the IS team to evolve and be able to innovate in a routine manner. CIOs should move to more systemic approaches that can collect all business processes on an integrated platform (one that uses the same data for analysis and reporting) in order to have a 360◦ view of the organization business. Investment in integrated platforms is critical so that data can be used to respond to industry obligations and regulations faster and more efficiently, and ensure consistent, high-quality business performance management. Regulation in Europe is somewhat behind that of the USA. The USA launched the Health Insurance Portability Accountability Act (HIPAA) almost seven years ago. With the implementation of HIPAA, all healthcare organizations in the USA that handle health information must conform to national standards for the protection of individually identifiable health information. Despite this positive perspective of the CIO’s role, the actual reality is not as good, but there are some windows of hope. Lap˜ao (2007b) found that the best performing hospital information systems (HIS) departments were linked with department heads having characteristics that matched those attributed to a CIO (Broadbent and Kitzis, 2005): • The best performing HIS directors have a university degree and post-graduate training (not necessarily in HIS). • They are clearly open to others’ suggestions and have an excellent relationship with other healthcare professionals (they also show a rather dense social network within the organization (Lap˜ao, 2007a). • They show leadership skills, which help them organize their department to better answer the challenges. • They have meaningful negotiation skills which they use regularly in their relationships with the vendors, showing openness to bolder projects with new technologies. • They plan their work and at least they implement it on a “draft” of a HIS strategic roadmap. • They demonstrate clear awareness of the barriers, difficulties, and of complexity of the tasks. • They also look for opportunities to improve their hospital with partnerships (Universities, Public Administration, other Hospitals, Vendors, etc.), i.e., they build up a good external social network. This means that CIOs should be highly qualified persons in order to be able to deal with the challenges of both technology and business issues (Figure 4). CIOs are special people (Ash et al., 2003) who cope with the endeavor of pushing the organization further through an innovative use of technology. They know that pushing for interoperability will allow the organization to be more productive and efficient.
March 15, 2010
14:44
WSPC/Trim Size: 9.75in x 6.5in
SPI-b778
b778-ch02
The Role of the CIO in the Development of Interoperable Information Systems 39 CIO’s Role
Healthcare System Strategic Leadership Technology Standards Awareness
-- -
Sponsor Healthcare Professionals network Clinical Services
Support Project and Change Management Develop Team Building
Promoting
Promote Technology and Management
Interoperability
-- -
CITIZENS
Skills internally Enhance communications with customers Technology Suppliers Management
IT
=
Management Systems
Sponsor Negotiations Foster Cooperation with Experts
ICT SUPPLIERS
Figure 4.
CIO’s role in promoting interoperability within the healthcare system.
5. The Role of the CIO: The Case of Hospital São Sebastião

There are already a few examples of CIOs who give us not only hope that things are going to change but also evidence of the effective role of CIOs in healthcare. Below we present the case of the Hospital São Sebastião (HSS) at Santa Maria da Feira (in the north of Portugal), describe how its CIO has been evolving in recent years, and provide an estimate of the value created by his performance.

5.1. The HSS Information System

The HSS is located about 30 km south of Oporto (Portugal) and provides services to a total population of about 383,000 people. HSS is a 317-bed acute care and trauma facility built in 1999. From the very beginning, hospital planners wanted an organization that would be a significant cut above other public health hospitals in Portugal. Since there was an opportunity to build a new hospital from scratch, there was also an opportunity to make it as good as possible in terms of architecture and technology. As a result, HSS has been a well-equipped hospital from the very beginning. Rather than just buying an expensive (and at the time, insufficiently sophisticated) commercial HIS, the CIO, Mr. Rui Gomes, and his dedicated staff of 11 full-time employees embarked on an endeavor to build an interoperable platform grounded on the IT infrastructure, business, and clinical applications that would best serve the hospital and the people who worked there. The CIO established the conditions (a small set of rules) that allowed project development to proceed iteratively. Judging by the results, this appears to be serving the hospital
workers and patients extremely well. Mr. Gomes is dedicated, communicative, and eager to help his staff become better professionals and to accept new ideas coming either from the other managers or from the physicians, which makes him a "special person" (Ash et al., 2003). As any CIO needs some help from the medical side, Mr. Gomes fortunately received important, indeed indispensable, support. Among the team of physicians, Dr. Carlos Carvalho has been a visionary for almost a decade. His contribution enabled Mr. Gomes to join the clinical perspective and the business perspective together.

HSS is a hospital where one can see physicians using information technology seamlessly. They feel like true owners of the systems, since they have been part of the design process from the very early stages of the HIS implementation. This is all the more remarkable when one considers that they built, by themselves, the interoperable platform interlinking solutions, using a "best-of-breed" approach. More impressive is that they did so using commodity software (made available by the Portuguese Healthcare Systems Agency) that costs just pennies on the dollar compared to equivalent solutions used in US and European hospitals. One might find physicians roaming the halls with Fujitsu Tablet PCs wirelessly connected to the hospital's network. One feels that the hospital is a living innovation center. Among other functionalities, the physicians have complete access to all patient data, including imaging, lab results, etc., and they are able to perform all their charting, from admission to discharge, electronically. Nurses and other caregivers can also access the HIS. Additionally, in the emergency room, an electronic triage system (using a Manchester Protocol-based algorithm) not only helps to prioritize treatment but also times and tracks exactly how that treatment is delivered, sending gentle reminders to staff whenever patients are left waiting longer than necessary.

One might discover that the HSS system does not have all the frills that might be found in large vendor solutions used in many American hospitals (e.g., Eclipsys, McKesson, GE/IDX, Cerner, etc.). The CPOE is still a work in progress, although physicians are already using an electronic prescribing solution extensively. Today this electronic prescribing solution has many practical features and is used all over the country. It was broadly deployed as a strategy to reduce and control medicine costs. This is precisely the point. In a complex environment, the system needs to be designed to do exactly what the staff really needs most and to be flexible enough to cope with future evolutions. It has an interface and tools that make it intuitive, fast, and highly functional. Perhaps that is why the almost "home-built" HIS solution in use at HSS is so popular with physicians and other caregivers at the hospital.

5.2. The Deployment of the HSS Interoperable Platform: The Role of the CIO

The HSS HIS architecture was conceived with a healthcare interoperability framework in mind right from the beginning. It is a web-based system that is built on the
interconnection of various clinical and management systems in a "best-of-breed" style. Currently it does not fully use HL7 to make solutions interoperable, but a new version is in preparation that complies with the interoperability standards, particularly through the deployment of SOA architectures, thus realizing a complete healthcare interoperability framework combining MDA and SOA.

The CIO was the engine of the HIS development. His strong links with the CEO and the Board assisted the decision-making process and the alignment between Board strategy and HIS investment allocation. His capability to look to the future, envisaging the opportunity to develop an HIS with the software and technology at hand while coping with the lack of financial resources, is proof that a true CIO can provide value to his organization. Mr. Gomes started by envisioning a hospital-wide healthcare interoperability platform, allowing data to flow across the disparate applications, as a key milestone for the deployment of the information systems. However, his perspective went beyond the purely technological, i.e., simply connecting the applications through APIs. Indeed, he developed the business context for the healthcare interoperability platform jointly with Dr. Carvalho, a physician enthusiastic about new ways of working enabled by technology. In reality, they began by creating a CIM — although they did not call it that — in which all the business, clinical, and administrative processes were modeled "as-is" and "as-should-be". At this stage they were completely removed from any technology and relied heavily on the existing literature and on-site visits to see how other leading-edge hospitals were working.

Inspired by Mr. Gomes' ideas, Dr. Carvalho had a critical role in the CIM development. He was the driver for all the required modeling activities, motivating and encouraging his colleagues to participate in the effort. Although the project was led by the CIO, Dr. Carvalho fully accepted his role in the whole project: facilitator and leader of the non-technological activities. The pace was set by Mr. Gomes, but Dr. Carvalho could easily accommodate the changes and different swings the project experienced. Without question, Dr. Carvalho was instrumental in helping to motivate his fellow physicians and the remaining administrative and management staff in a team project, developed in conjunction with Mr. Gomes, to model the processes and envision how they should exist in an interoperable future.

As the CIM was being completed, the CIO's role changed. The CIO became more technical and a leader of his 11 technological team members. Mr. Gomes understood the importance of developing an interoperability platform that was not focused on existing applications. His perspective was clear: the healthcare sector is highly dynamic and innovation prone, and the ICT infrastructure must be able to accommodate changes in technology, applications, processes, etc. In this sense, his strategy was to develop the HSS interoperability platform decoupled as much as possible from the existing technology. In reality, an ICT architecture was designed that allowed independence from the existing applications, i.e., a PIM. At the time, his major concern was not the interoperability with applications from
outside the HSS, but the possibility of easily making the information systems fully interoperable as new applications were deployed and replaced or complemented existing ones, and thus of accommodating the dynamics of any healthcare ICT infrastructure. In this phase, the role of Mr. Gomes in convincing the Board to provide the necessary resources, time, and some "patience" regarding the deployment of the whole information system was important. His ability to explain to the decision-makers the innovativeness and importance of the approach being taken was critical in sustaining support through some of the inevitable delays that occurred. Also, once again, Dr. Carvalho was a major supporter of the healthcare interoperability framework being developed for the platform. Although Dr. Carvalho was not an ICT expert, Mr. Gomes' arguments were explained to him, and he too knew how often the deployment of new applications could bring several problems because of the difficulty of integration with the existing ICT infrastructure.

As the PIM was completed, Mr. Gomes' team developed the necessary code to implement the PSM, i.e., the interoperability platform. Interoperability is achieved via a set of database mechanisms (using Microsoft BizTalk) that follow a sophisticated data model, closely linked to the clinical processes. The HIS datacenter at HSS is based on a Microsoft architecture that includes Active Directory, SQL Server 2005, SharePoint Services, SQL Reporting Services, Balanced Score Card Manager, ISA Server, BizTalk, Exchange, .Net Framework, and Visual Studio 2005. All these licenses are made available free of charge through a government partnership with Microsoft. Mr. Gomes managed to build the HIS out of these packages, with the important participation of his colleagues at the hospital. This makes him "special" and provides a reason to believe that HIS development is more dependent on CIO leadership than on technology. Moreover, there is no need to be frightened by the uncertainty of the environment or the complexity of healthcare systems. One needs to look ahead and bring the best people (physicians, nurses, managers, technicians, suppliers, students, professors, etc.) into a large team that will provide the best knowledge and the energy to design, implement, and correct the HIS according to the clinical needs of healthcare professionals.

6. Conclusions and Challenges Ahead

The healthcare sector around the world, and particularly in Portugal, is undergoing great transformation. With the emergence of intensive diagnosis, clinical, and business applications, information systems development will require more careful planning. Moreover, for most healthcare units, it is necessary to integrate existing legacy systems with the new software being deployed. This task can be achieved through the careful planning of a healthcare interoperability framework (HIF) that distinguishes between the business, clinical, and administrative processes (CIM layer), the interoperable mechanism independent of the technology (PIM layer) and, finally, the coding of the APIs themselves (PSM layer). This is a
highly sophisticated technical approach that requires highly qualified and expert ICT professionals. Yet the role of the "personnel from informatics" is changing considerably, shifting away from a pure technical profile to a top-level management function. Indeed, possibly the greatest threat to the successful implementation of healthcare interoperable systems is the lack of understanding of the importance of the issue, leading in turn to an overall lack of coordination and the absence of a consistent framework for the implementation of integrated personal-care records. Given the amount of public and private financial funds currently being invested and the tight timescales for delivery of objectives, the absence of overall coordination of these programs presents a major risk not only to the strategy to develop integrated personal-care records but also to the motivation of management. The CIOs can therefore play an important role, giving leadership and support to help follow the regulators' rules and to manage the implementation processes according to the organizational culture.

The CIO's role is becoming increasingly important as information systems development moves away from being essentially a technical/technological problem and acquires a stronger grounding in the business context. The MDA and SOA approach to interoperability requires a more holistic methodology for IS development. Thus, rather than having fragmented specialized applications installed for very localized clinical or administrative functions, interoperability and service architectures require horizontal knowledge of the organization. This can only be achieved by a professional who is able to bridge the complex relationships that evolve in any healthcare unit. The HSS case is a reason to believe that HIS success depends largely on the CIO's leadership and interoperability vision. The success does not necessarily rely on buying expensive and sophisticated commercial applications, but rather on having a coherent ICT vision, and being able to involve and commit all stakeholders to the deployment and use of the healthcare information systems.

Although the role of the CIO is becoming clearer and more widely accepted across healthcare units, and interoperability frameworks are being recognized as the grand technical challenge in the coming years, other challenges are yet to be fully understood. The way organizations deal with complexity is one such challenge. Theoretical and empirical work has demonstrated the importance of modeling organizations using complexity theory. Yet, as far as information systems are concerned, and specifically with regard to the conception of interoperable systems, it is important to understand how complexity theory can improve the design of interoperable information systems, and how the CIO can further improve the efficacy of ICT by addressing some of the features of complexity. This means that the traditional top-down approach to designing and managing ICT may be questioned, and the need to achieve self-developed and self-configured interoperable information systems that respond with effectiveness to the dynamics of complex systems like a healthcare unit may move to the forefront.
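To make the CIM/PIM/PSM layering described above more concrete, the following minimal sketch is offered. It is written in Python purely for illustration; all class, method, and field names are hypothetical and do not correspond to the actual HSS platform, which, as described in Sec. 5.2, is built on Microsoft BizTalk and .NET. The sketch shows how a platform-independent service contract (the PIM) can be kept separate from platform-specific adapters (the PSM), so that a departmental application can be replaced without touching the clinical code that consumes its data.

# Illustrative sketch only: a PIM expressed as an abstract service contract,
# and two hypothetical PSM adapters. Names and client APIs are invented.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List

@dataclass
class LabResult:
    patient_id: str   # hospital-wide patient identifier
    test_code: str    # e.g., a local or standard test code
    value: str
    units: str

class LabResultService(ABC):
    """PIM layer: what the HIS needs, stated independently of any vendor system."""
    @abstractmethod
    def results_for_patient(self, patient_id: str) -> List[LabResult]:
        ...

class LegacyLabAdapter(LabResultService):
    """PSM layer: wraps an existing departmental system (hypothetical client API)."""
    def __init__(self, legacy_client):
        self.legacy_client = legacy_client  # injected connector to the old system
    def results_for_patient(self, patient_id: str) -> List[LabResult]:
        rows = self.legacy_client.fetch(patient_id)  # proprietary call, assumed
        return [LabResult(patient_id, r["code"], r["val"], r["units"]) for r in rows]

class WebServiceLabAdapter(LabResultService):
    """PSM layer: wraps a newly deployed application exposing a web service."""
    def __init__(self, ws_client):
        self.ws_client = ws_client  # injected SOA/web-service client, assumed
    def results_for_patient(self, patient_id: str) -> List[LabResult]:
        payload = self.ws_client.get_results(patient_id)  # hypothetical call
        return [LabResult(patient_id, p["code"], p["value"], p["units"]) for p in payload]

def discharge_summary(service: LabResultService, patient_id: str) -> str:
    """Clinical code depends only on the PIM contract, never on a vendor system."""
    lines = ["%s: %s %s" % (r.test_code, r.value, r.units)
             for r in service.results_for_patient(patient_id)]
    return "\n".join(lines)

Under this kind of separation, accommodating a new vendor application amounts to writing one more adapter against the unchanged contract, which is the essence of the decoupling strategy attributed to the HSS CIO above.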
References

aecXML (2006). Retrieved March 22, 2006, from http://www.iai-na.org/aecxml/mission.php.
Ash, JS, PZ Stavri, R Dykstra and L Fournier (2003). Implementing computerized physician order entry: The importance of special people. International Journal of Medical Informatics, 69, 235–250.
Berre, A (2002). Overview of International Standards on Enterprise Architecture (SINTEF).
Bohms, M (2001). Building Construction Extensible Markup Language (bcXML) Description: eConstruct bcXML. A Contribution to the CEN/ISSS eBES Workshop. Annex A. ISSS/WS-eBES/01/001.
Boissier, R (1995). Architecture solutions for integrating CAD, CAM and machining in small companies. IEEE/ECLA/IFIP International Conference on Architectures and Design Methods for Balanced Automation Systems (Chapman & Hall, London), 407–416.
Broadbent, M and ES Kitzis (2005). The New CIO Leader: Setting the Agenda and Delivering Results. Harvard Business School Press.
CEN/ISSS (2006). European Committee for Standardisation — Information Society Standardization System, retrieved March 22, 2006, from http://www.cenorm.be/isss.
DATAFORM EDIData (1997). UN/EDIFACT Release 93A.
ENV 13 550 (1995). Enterprise Model Execution and Integration Services (EMEIS), CEN, Brussels.
Gartner (2005). Gartner Survey of 1,300 CIOs Shows IT Budgets to Increase by 2.5 Percent in 2005. Gartner Inc., January 14.
Healthcare Services Specification Project (2007). HSSP Healthcare Standards Report 2007, retrieved September 23, 2007, from http://hssp.wikispaces.com/.
Healy, JC (2000). EU-Canada e-Health Initiative. EU-Canada Meeting, Montreal, Quebec, Canada.
IOM Report (1999). To Err is Human. Institute of Medicine.
IOM Report (2001). Crossing the Quality Chasm: A New Health System for the 21st Century. Institute of Medicine.
Jardim-Goncalves, R and A Steiger-Garção (2002). Implicit multi-level modeling to support integration and interoperability in flexible business environments. Communications of the ACM, Special Issue on Enterprise Components, Services and Business Rules, 53–57.
Jardim-Goncalves, R, A Grilo and A Steiger-Garção (2006). Developing interoperability in mass customisation information systems. In Mass Customisation Information Systems in Business, Blecker, T and Friedrich (eds.), Idea Group Publishing, Information Science Publishing, IRM Press.
Kauffman, S (1998). At Home in the Universe: The Search for the Laws of Self-Organization and Complexity. OUP.
Kuhn, KA, DA Giuse, LV Lapão and SHR Wurst (2007). Expanding the scope of health information systems: From hospitals to regional networks, to national infrastructures, and beyond. Methods of Information in Medicine, 47.
Lapão, LV (2007a). Smart Healthcare: The CIO and the Hospital Management Model in the Context of Complexity Theory. Doctoral Dissertation.
Lapão, LV (2007b). Survey on the status of Portuguese healthcare information systems. Methods of Information in Medicine, HIS Special Issue.
Lapão, LV, RS Santos and M Góis (2007). Healthcare internet marketing: Developing a communication strategy for a broad healthcare network. Proceedings of the ICEGOV 2007, Lisbon.
Lenz, R and KA Kuhn (2002). Integration of Heterogeneous and Autonomous Systems in Hospitals. Data Management & Storage Technology.
Lorenzi, NM et al. (1997). Antecedents of the people and organizational aspects of medical informatics: Review of the literature. Journal of the American Medical Informatics Association, 4, 79–93.
Mango, P and L Shapiro (2001). Hospitals get serious about operations. The McKinsey Quarterly, Number 2.
MDA (2006). Model Driven Architecture, MDA Guide Version 1.0.1, June 2003, retrieved March 23, 2006, from http://www.omg.org/mda.
Mellor, S (2004). Introduction to Model Driven Architecture. ISBN: 0-201-78891-8, Addison-Wesley.
Miller, J and J Mukerji (2001). Model Driven Architecture White Paper, retrieved March 23, 2006, from http://www.omg.org/cgi-bin/doc?ormsc/2001-07-01.
Nagi, L (1997). Design and implementation of a virtual information system for agile manufacturing. IIE Transactions on Design and Manufacturing, special issue on Agile Manufacturing, 29(10), 839–857.
Plsek, P and T Wilson (2001). Complexity sciences: Complexity, leadership, and management in healthcare organisations. BMJ, 323, 746–749.
Pugh, S (1997). Total Design: Integrated Methods for Successful Product Engineering. Addison-Wesley, Wokingham.
Smith (1997). Internet Marketing: Building Advantage in a Networked Economy. McGraw-Hill.
SOA (2006). The Service Oriented Architecture, retrieved March 23, 2006, from http://msdn.microsoft.com/architecture/soa/default.aspx.
Umar, A (1999). A framework for analyzing virtual enterprise infrastructure. In Proceedings of the 9th International Workshop on Research Issues in Data Engineering — IT for Virtual Enterprises, RIDE-VE'99, 4–11, IEEE Computer Society.
Vlosky, RP (1998). Partnerships versus typical relationships between wood products distributors and their manufacturer suppliers. Forest Products Journal, 48(3), 27–35.
W3C (2009). World Wide Web Consortium, retrieved June 2009, from http://www.w3c.org.
WS-I (2009). Web Services Interoperability Organisation, WS-I, retrieved June 2009, from http://www.ws-i.org.
Biographical Notes

António Grilo holds a PhD degree in e-commerce from the University of Salford, UK. He is an Assistant Professor of Operations Management and Information Systems at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa, teaching at doctoral, master's and undergraduate levels. He is also a member of the board of directors of the research center UNIDEMI. He has over 30 papers published in international conferences and scientific journals, and he is an expert for the European Commission
DG-INFSO. Besides academia, he has been working for the last 10 years as a management information systems consultant, particularly in e-business, e-commerce and project management information systems. Currently he is a Partner at Neobiz Consulting, a Portuguese management and information systems company.

Luís Velez Lapão has a degree in Engineering Physics from the Lisbon Institute of Technology, an MSc in Physics from the Technical University of Lisbon (TUL), an MBA in Industrial Management and a PhD in Healthcare Systems Engineering from TUL. He has a post-graduate degree in Public Management from the John F. Kennedy School of Government, Harvard University. He is the Head of Health IT Governance Systems at INOV-INESC Inovação and is now the National Coordinator for Primary Care Management training. He is a Professor and Researcher at the Center for Healthcare Technology and Information Systems at the Oporto Medical School, and Vice-President of AGO, the Garcia-de-Orta Association for Development and Cooperation. He has also been the Portuguese representative at the International Medical Informatics Association since 2005 and is an FP7 expert. He is a visiting Professor of Healthcare Management at Dubai University, United Arab Emirates.

Ricardo Jardim-Goncalves holds a PhD degree in Industrial Information Systems from the New University of Lisbon. He is an Assistant Professor at the New University of Lisbon, Faculty of Sciences and Technology, and a Senior Researcher at the UNINOVA institute. He graduated in Computer Science, with an MSc in Operational Research and Systems Engineering. His research activities include standard-based intelligent integration frameworks for interoperability, covering architectures, methodologies and toolkits to support improved development, harmonisation and implementation of standards for data exchange in industry, from design to e-business. He has been a technical international project leader for more than 10 years, with more than 50 papers published in conferences, journals and books. He is a project leader in ISO TC184/SC4.

V. Cruz Machado is an Associate Professor with Habilitation in Industrial Engineering and a researcher in the Department of Mechanical and Industrial Engineering in the Faculty of Science and Technology of the New University of Lisbon, Portugal. He is the director of UNIDEMI (Mechanical and Industrial Engineering Research Centre), director of the Industrial Engineering and Management degree programme, and president of the Portuguese Chapter of the Institute of Industrial Engineers. He holds an MSc and a PhD in Computer Integrated Manufacturing from Cranfield University, UK.
Chapter 3
Information Systems for Handling Patients' Complaints in Health Organizations

ZVI STERN∗, ELIE MERSEL† and NAHUM GEDALIA‡
Hadassah Hebrew University Medical Center, P.O.B. 12000, Jerusalem 91120, Israel
∗[email protected], †[email protected], ‡[email protected]

An essential and inherent part of any managerial process is the monitoring of, and feedback on, all the organization's activities. Every organization needs to know whether it is acting in an effective way and whether its activities are received by their recipients in the way they are intended. The handling of complaints is designed to prevent the recurrence of similar incidents in the future and to improve the performance of the organization. Furthermore, the handling of complaints from the public is important in tempering the bitter feelings and sense of helplessness of the citizen vis-à-vis bureaucratic systems. One of the various tools available to managers in the organization for obtaining this much needed feedback on the organization's activities is complaints from the public. The mechanism for handling complaints from the public and responding to them is generally headed by an ombudsman. Managing information received from complaints and transforming it into knowledge in an effective way requires the database to be complete, up-to-date, versatile and — most importantly — available, accessible, and practical. For this purpose, a computerized system that is both user-friendly and interfaces with the demographic and other computerized databases already existing in the hospital is essential. This type of information system should also assist in the ongoing administrative management of complaint handling by the ombudsman. In this chapter, we will examine the importance of the ombudsman in public and business organizations in general and in health organizations in particular. The findings presented in this chapter are based on a survey of the literature, on a study we conducted among the ombudsmen and directors of all 26 general hospitals in Israel, and on the authors' cumulative experience in management, in complaint handling and in auditing health systems, as described in case studies. These findings will illustrate how it is possible to exploit a computerized database of public complaints to improve various organizational activities, including upgrading the quality of service provided to patients in hospitals.

Keywords: Ombudsman; patient representative; patients' rights; patients' complaints; hospital; healthcare system; quality assurance.
1. Introduction

An inherent part of any managerial process is the monitoring of and feedback on the organization's activities. One of the various tools available to managers in the organization for obtaining feedback on the organization's activities is complaints from the public. The mechanism for handling complaints from the public is generally headed by an ombudsman. "Ombudsman" is derived from a Swedish concept meaning "representative of the king." The ombudsman's role is to serve as a voice for consumers vis-à-vis the organization, to provide feedback on the organization's activities and to constitute one of the catalysts for organizational change and improvement. For the organization, the ombudsman comprises part of the mechanism of internal oversight, serving as a channel for receiving feedback on the activities of various teams, individuals, and functions in the organization, and the way in which these activities are perceived by the consumers.

The more powerful the bureaucratic mechanism, the more important the ombudsman's role becomes for the individual. Thus, an effective ombudsman is especially important in the health system because patients are particularly dependent on the system. Furthermore, with an aim toward enhancing the safety and efficiency of treatment, and in light of skyrocketing insurance premiums for medical malpractice, health systems have come to appreciate the importance of relying on complaints as one of the instruments for improving quality assurance and patient safety — and it is even recommended to encourage patients to complain. In addition, complaints from the public serve as one of the sources of information for the organization's system of risk management.

In this chapter, we will examine the importance of the ombudsman in public and business organizations in general and in the health system in particular. We will note the importance of handling complaints as an instrument for promoting quality assurance and enhancing medical treatment, and we will see how it is possible to use information systems to help improve the effectiveness of the ombudsman's work, with specific reference to health systems.

2. Research Methodology

The findings presented in this chapter are based on a survey of the literature and on a study conducted among the ombudsmen and directors of all 26 general hospitals in Israel. For this purpose, personal interviews were conducted using a very detailed and lengthy questionnaire. The data from all these interviews were accumulated and analyzed by statistical methods using SPSS. These findings were then interpreted in light of the authors' cumulative experience in management, in complaint handling, and in auditing health systems. The findings are also based on a series of case studies that illustrate how it is possible to exploit a computerized database of public complaints
to improve various processes, including an upgrade in the quality of service provided to patients in hospitals.

3. Theoretical Framework

3.1. The Institution of the Ombudsman

The institution for handling complaints — the ombudsman — developed in states whose social and political systems place a high value on the individual's rights. Two decades after the French Revolution, which brought into sharper focus the concepts pertaining to the individual's place in society, the first ombudsman institution was established in Sweden in 1809. Since then, and particularly during the second half of the 20th century, this institution has become an integral and built-in part of the state's institutional fabric. The need for the existence of the ombudsman was highlighted as public administration became involved in nearly every facet of the individual's life and in fulfilling the individual's most basic needs: health, education, and social security. The need grew as the administrative involvement led to the concentration of enormous power in the hands of the government bureaucracy vis-à-vis the individual.

During recent decades, we have witnessed an increase in the number of states that have established the institution of an ombudsman. Much of this increase comes from the post-communist states, which are facing complex difficulties that stem from the gap between their political culture and tradition, on one hand, and the constitutional frameworks that define the status and authorities of their new institutions, on the other hand (Dimitris and Nikiforos, 2004). The ombudsman institutions in different countries vary in the way they are appointed, in their subordination and in the scope of their authority. For example, in the Scandinavian model, the ombudsman is authorized to examine topics on his own initiative, and not only those which reach him following complaints (Hans, 2002); in the Israeli model, the role of the ombudsman is integrated with the role of the state comptroller; according to the British model, a direct complaint to the ombudsman is only possible via a member of Parliament; while in the French model the ombudsman, or le Médiateur de la République, is appointed by and is subordinate to the head of the executive authority.

As in the case of state ombudsmen, various public and business organizations have defined different forms of subordination, authorities, and reporting arrangements for the ombudsmen operating within their organizations. It is possible to define three main types of ombudsmen (Ben Haim et al., 2003):

1. The Classical Ombudsman: Usually appointed by the parliament and reports to the body that appointed him. Handles, by law, complaints pertaining to an action or omission of the executive authority vis-à-vis the citizen.
2. The Specialty Ombudsman: Usually appointed by a government regulator and investigates complaints in a defined field, generally pertaining to several organizations.
3. The Organizational Ombudsman: Usually appointed by the management of a specific organization and operates within its framework. An organizational ombudsman has less independence than a classical ombudsman or a specialty ombudsman.

Nonetheless, the common denominator among all of these types of ombudsmen is their work in dealing with complaints from the public and helping to solve problems arising from the relationship between the individual and the bureaucratic mechanism of the organization. Thus, the main characteristic of the institution of the ombudsman is that he is complaint-driven. This means that smart management of complaint handling is the key to the ombudsman's effectiveness (Marten, 2002).

The handling of complaints is designed to prevent the recurrence of similar incidents in the future and to improve the performance of the organization. Furthermore, the handling of complaints from the public is important in tempering the bitter feelings and sense of helplessness of the citizen vis-à-vis bureaucratic systems (Nebantzel, 1997). Part of the citizen's calmness and sense of well-being derives from his knowledge that the society in which he lives acts fairly towards its members. The fact that the organization is public-oriented — that there is a specific office (the ombudsman's office) headed by a senior official and designed to stand up for the individual and protect him from the organization itself — can provide the individual with a sense of security. This sense of security does not have to be exercised to be justified. That is, even if the individual, for his own reasons, refrains from complaining, he may feel a sense of relief from the very fact that he could have complained if he wanted to do so. The importance of this feeling for the individual increases in direct relation to the growing power of the organization he faces and the extent of its influence on his basic needs.

3.2. The Ombudsman in the Health System as a Representative of the Patients

Healthcare services differ from other services available in the marketplace because of the existing potential for ever-increasing demand. The demand is a function of a number of factors: growing public awareness and knowledge of health issues, new developments in medical knowledge and technology, changes in morbidity patterns, and the effects of a prolonged life expectancy (that is, an aging population). Another contributing factor is the rise in the standard of living, which has brought about a rise in consumer awareness and action, accompanied by expectations for higher standards of service. The growing demand on the part of the public requires managers of healthcare delivery systems to economize in order to meet the demand. At the same time, growing consumer awareness requires more in-depth scrutiny of the quality of the services provided. In view of these developments, it is evident that quality improvement should incorporate consumer feedback as an integral part of quality assurance.
At the same time, particularly in the health system, the individual is very much dependent on the organization that provides the service. This dependence stems from gaps in knowledge between the caregivers and the patients, as well as the physical and emotional situation of those in need of medical assistance. This dependence is particularly strong in the contact between the patient and the hospital, where some of the patients are unable to tend to their affairs by themselves and have difficulty in finding their way through the web of administrative and medical procedures and rules. The hospitalization period is generally short, only a few days. However, these are difficult and complex days, and sometimes they are the most complicated days of the entire period of illness. This underlines the importance of the ombudsman's role as a representative of the hospitalized patients.

3.3. The Ombudsman in the Health System as a Source of Feedback on the Implementation of Health Policy

Health was perceived in the distant past as a personal matter for which the individual was exclusively responsible. The concept of the right to health and the obligation of the state to maintain and promote this right began to develop during the 19th century and became formally established in the 20th century in the framework of international conventions and legislation in many countries. This includes the International Covenant on Economic, Social and Cultural Rights (1966), which recognizes the right of every person to enjoy the highest possible standard of physical and mental health. To exercise this right, states are expected to create conditions that ensure medical services for all (Carmi, 2003). Various arguments served to justify recognition of the right to health, such as the right to life, which implies the state's duty to work for the health of its citizens, or the state's obligation to enable its citizens to live in dignity.

In addition, we also note the Universal Declaration of Human Rights (1948), which recognizes the right of all persons to an adequate standard of living, including medical care; and the European Social Charter (1965), which stipulates that those who do not have adequate financial resources will also be entitled to medical assistance. The charter states that, to ensure the exercise of the general right to protect health, various measures should be taken, such as removing the causes of ill-health, providing advice on promoting health, and preventing contagious diseases. The African Charter on Human and Peoples' Rights (1986) states that every person has the right to enjoy the highest possible level of physical and mental health. This charter obligates the states to provide medical care to the ill and protect the population's health. The Cairo Declaration (1990) recognizes the right of every person to medical care, within the constraints of existing resources. The International Convention on the Protection of All Migrant Workers and Members of Their Families (1990) states that such workers and their family members are entitled to receive medical care that is urgently needed to save their lives or prevent irreparable damage to their health, and that this care should be provided on the basis of equality with the nationals of that state (Carmi, 2003).
In parallel, various states have passed legislation concerning a citizen's right to receive medical treatment and the state's obligation to provide suitable health services for all citizens, including those who have difficulty in financing them (van der Vyver, 1989). In Israel, the National Health Insurance Law, enacted in 1994, states that every resident is entitled to health services. This law mandates automatic coverage of health services that are defined in a "health basket" for all residents of the state, regardless of their ability to pay.

Another global development involving the expansion of the obligations of the state and health organizations toward patients has occurred in the area of patient rights (Gecik, 1993). In 1793, France issued a decree stating that every patient hospitalized in a medical institution is entitled to a bed. (Until then, two to eight patients had shared a single bed; Reich, 1978.) Some regard this decree as the first document in the area of patient rights. During the second half of the 20th century, the issue of protecting the patient's various rights — including the subject of patient consent, the right of the patient to privacy, the confidentiality of medical information, etc. — became incorporated in international conventions including (Carmi, 2003): the Council of Europe's order of 1950; the Lisbon Declaration of 1981 issued by the World Medical Association (amended in 1995) (Convention for the Protection of Human Rights and Dignity of the Human Being with Regard to the Application of Biology and Medicine, 1997); the UN General Assembly Resolution 48/140 of 1993 (Resolution 48/140 on Human Rights and Scientific and Technological Progress); and the Council of Europe's 1997 Convention for the Protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine. Subsequently, various states enacted laws designed to establish the status and rights of the patient. The first to do so was Finland in 1993 (Pahlman et al., 1996), thus preceding the rest of the countries of Europe (Carmi, 2003). In Israel, the Patient's Rights Law was enacted in 1996. Among its other provisions, the law stipulates that a person be appointed in every medical institution to be responsible for handling the complaints of patients — the ombudsman.

The monitoring and regulation of the rights of citizens and patients, and the state's obligations toward them, were assigned to bureaucratic mechanisms. However, the pace of development of expensive medical technologies, the ageing of populations, and the limited economic abilities of the various states to cope with these changes have made it necessary to ration the provision of health services (Nord, 1995). The need to conduct this rationing further empowers the bureaucratic mechanisms and enables them to influence the quality of life of millions of patients. The conflict between the limited resources available to the state, on the one hand, and the state's obligation to provide health services and the right of citizens to receive these services, on the other, poses very difficult ethical dilemmas in defining priorities when budgeting health services, including the subsidization of medication, in all welfare states. Western states are wrestling with the question of the correct model for rationing health services and have also defined various rules for this purpose.
In light of these trends, the ombudsman's role has become even more important as a source of feedback on the implementation of the defined policy and its effectiveness.

3.4. The Ombudsman in the Health System as an Instrument for Enhancing the Quality of Medical Care

With an aim toward enhancing the safety and efficiency of treatment, and in light of skyrocketing insurance premiums for medical malpractice, health systems in the Western world have come to appreciate the importance of relying on complaints as one of the major instruments for improving quality assurance and safety, as well as a source of important information for the system of risk management (Hickson et al., 2002). In accordance with this perception, it is even recommended to encourage patients to complain (Sage, 2002). Leape et al. (1993) showed that a million avoidable medical errors resulted in 120,000 annual deaths. Most medical mishaps do not result in a complaint, but it is possible to learn from the complaints that are submitted about significant errors that could have been prevented. Dealing with complaints provides a "window" for identifying risks to the patient's well-being (Bismark et al., 2006).

An ombudsman can be an effective tool for assistance and education in the health system. He can provide precise and real-time information to the consumer, to the health services provider, to health policymakers, and to relevant legislators — with the aim of improving the entire health system (William and Bprenstein, 2000). For example, handling complaints from patients pertaining to the medical record led to an improvement in the completeness and quality of the medical record and, consequently, to fewer potential errors (Brigit, 2005). Another example is the activities of the ombudsman of the Ministry of Health in Israel, which led to including new medications and medical technologies in the public "medications basket" (Israel Ministry of Health, 2006).

Donabedian (1992) assigns consumers of health services three major roles: contributors to quality assurance, targets of quality assurance, and reformers of health care. As contributors to quality assurance, consumers define their view on what quality is, evaluate quality, and provide information which permits others to evaluate it. However, while patients can certainly contribute by expressing their views on subjects such as information, communication, courtesy, and environment, they usually cannot evaluate the clinical competence of the physician and his treatment — an essential component of quality which must be monitored and evaluated by the physician's peers. Consumers can become targets of quality assurance both as co-producers of care and as vehicles of control. In the role of reformers, consumers can act by direct participation, through administrative support and political action (Javets and Stern, 1996). An important tool for quality assessment, which is based on consumers' views and perceptions of the care and services they received, is consumers' complaints.
Not much is found in the literature concerning the use of complaints as a tool for quality promotion. In a national survey of five self-regulating health professions in Canada, it was found that two types of activities, a complaints program and a routine audit program, were used to identify poor performers (Fooks et al., 1990). The authors expressed concern as to the public's willingness, ability, and self-confidence to submit complaints when poor performance is encountered. Thus, just as a high level of patient satisfaction expressed in surveys cannot be taken as the only valid indicator that medical services are of high quality, a lack or paucity of complaints does not necessarily prove either high quality of care or complete satisfaction.

Nevertheless, there is wide agreement that patients' feedback should be heeded. The process of quality improvement of health care can benefit from patients' participation in the process of its evaluation (Vuori, 1991), if only through patients' expressions of dissatisfaction (Steven and Daglas, 1986). However, the long-term viability of any public complaint handling system rests on confidence in its fair operation. That is, "the large majority of cases investigated should provide people with assurance that they have been fairly and properly treated or that a disputed decision has been correctly made under the relevant rules" (National Audit Office, UK, 2005).

Patients provide important feedback to healthcare policy makers and providers by voicing their requests and complaints. This mechanism is direct by nature, pointing to problematic areas as perceived by the care receivers. As such, if indeed applied, it can serve as a monitor of the quality of care and service provided, and as a tool for risk management. A complaint-handling function such as an ombudsman in a healthcare organization is a liaison service geared to serve both the complainants and the institution. As educated and conscious consumerism increases, so does the number of complaints received. The growing volume of complaints has become a significant impetus that drives organizations to develop more effective self-corrective means as they sharpen their capability to react to complaints.

In the organization, the handling of complaints is performed on a number of levels: the case level, the unit level, the subject level, and the institutional level. All inquiries conducted in relation to complaints can be claimed to have a quality promotion value. Even on the case level, the discussion of the details of an event with the person who complained and with the relevant service provider has an amending effect and a potentially preventive value. On the unit level, whether medical or administrative, work is directed toward drawing conclusions and taking active corrective steps if recommended by the ombudsman and accepted by the head of service; the same applies to the subject level. On the general institutional level, management should be the quality-improvement and risk-management agent, as expressed by policy-related decisions and the initiation of change.

The handling of complaints is twofold: it serves as a redress function vis-à-vis the complaining customer and as a source of systemic changes for all consumers (Paterson, 2002). Both functions require a proper investigation of the incident or problem in question in order to establish the validity of the complaint. It should be emphasized
that the investigation at the point of occurrence is, by itself, a quality control operation. It is noteworthy that customers of health services are often apprehensive lest their complaints adversely affect the care they require, or may require in the future. As a result, most complaints are submitted post factum rather than at the time the problematic service is extended. Another consequence of such apprehension is the reluctance of many to complain altogether. Therefore, when a complaint is submitted in the health system, particularly in regard to the conduct or care provided by a physician or a nurse, it should be handled with extra attentiveness. The system must assume that if a patient has decided to file a complaint, it might indicate a severe problem. Moreover, the complaint may sometimes indicate that other patients have encountered a similar problem but declined the option to complain formally. As mentioned above, recurrent complaints about certain staff members, units, procedures or service processes may be an indication of a bigger problem and therefore require a more comprehensive, in-depth approach to the problem. It should be noted that even recurrent complaints about the same subject, found to be invalid, call for corrective actions. They may point to problems in communication patterns between staff and patients, or between the organization and its consumers in general.

Continuous Quality Improvement (CQI) may use complaints for overall examinations of the effectiveness of the process itself and, at the same time, as a tool in individual cases for higher-level improvement activities. Periodic reports of the ombudsman's office, presenting analyses of the aggregate complaint data, provide a viable tool for overall review of service problems as perceived by the consumers. The analyses provide the relative weighting of problematic areas in terms of the type and volume of the service provided in the hospital. Such comparative analysis over time can identify new problems, recurring problems and, of course, encourage improvement; a minimal illustration of this kind of aggregation is sketched below, after the list of the patient rights representative's statutory roles.

3.5. The Ombudsman in General Hospitals in Israel

The Patient's Rights Law of 1996 (Israel Patient's Rights Law, 1996) was enacted as a follow-up to the National Health Insurance Law (1994), which states that health services in Israel are to be based on the principles of justice, equality, and mutual assistance. As part of this fundamental belief, the Patient's Rights Law requires every director of a medical institution to appoint an employee to be responsible for patient rights. The law defines three roles for the ombudsman — the patient rights representative:

1. To provide advice and assistance to the patient in regard to exercising his rights under the Patient's Rights Law.
2. To receive complaints from patients, investigate, and deal with the complaints.
3. To instruct and guide members of the medical institution's staff and management in all matters related to fulfilling the directives of the Patient's Rights Law.
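To illustrate the kind of periodic aggregate analysis referred to at the end of Sec. 3.4, the short sketch below is written in Python purely for illustration; the field names, category values, and activity figures are invented and do not come from the survey or from any hospital's actual complaint system. It shows how even a simple complaint log can be turned into quarterly counts by unit and subject, and into a complaint rate weighted by the unit's volume of activity.

# Illustrative sketch only: aggregating a hypothetical complaint log for a
# periodic ombudsman report. All data and category names are invented.
from collections import Counter, defaultdict
from dataclasses import dataclass
from datetime import date

@dataclass
class Complaint:
    received: date
    unit: str       # e.g., "Emergency Room" (hypothetical category values)
    subject: str    # e.g., "Attitude", "Quality of treatment"
    valid: bool     # whether the complaint was found justified

def quarterly_report(complaints):
    """Counts complaints per (year, quarter, unit, subject), so that recurring
    subjects and trends over time stand out in a periodic report."""
    counts = Counter()
    for c in complaints:
        quarter = (c.received.month - 1) // 3 + 1
        counts[(c.received.year, quarter, c.unit, c.subject)] += 1
    return counts

def relative_weighting(complaints, activity_volume):
    """Complaints per 1,000 treatments for each unit, given that unit's
    activity volume (a dict such as {"Emergency Room": 42000})."""
    per_unit = defaultdict(int)
    for c in complaints:
        per_unit[c.unit] += 1
    return {unit: 1000.0 * n / activity_volume[unit]
            for unit, n in per_unit.items() if unit in activity_volume}

# Example use with invented data:
log = [
    Complaint(date(2008, 2, 3), "Emergency Room", "Attitude", True),
    Complaint(date(2008, 2, 17), "Emergency Room", "Attitude", False),
    Complaint(date(2008, 5, 9), "Internal Medicine", "Quality of treatment", True),
]
print(quarterly_report(log))
print(relative_weighting(log, {"Emergency Room": 42000, "Internal Medicine": 30000}))

Even a basic report of this kind makes recurring subjects and unusually high complaint rates visible, which is precisely the feedback that the ombudsman's periodic reports are meant to provide to management.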
The patient rights representative thus serves as an ombudsman for the patients in medical institutions. Though the organizational ombudsman's role may sometimes be regarded as only that of a facilitator of individual problem solving, in fact the ombudsman is ideally situated within the organization to make recommendations for systemic change, based on patterns of complaints brought to his office. Indeed, the ombudsman is obligated to take steps to prevent the future recurrence of a problem, as well as to resolve the problem at hand. Furthermore, because of the ombudsman's broad understanding of the organizational culture and of the needs of its management and other stakeholders, the ombudsman's office — in addition to being a vital component of the organization's conflict management system — may also participate in designing, evaluating, and improving the entire dispute resolution system for the organization (Wagner, 2000).

The authors recently published a study (Stern et al., 2008) that examined, among other things, whether ombudsmen had indeed been appointed in all of the general hospitals in Israel as required by law. The study also examined their background and daily activities. For this purpose, personal interviews were conducted with directors and ombudsmen at all 26 general hospitals in Israel. Our findings indicated that all the hospitals had appointed an ombudsman. According to the assessments of the interviewees, an average of 695 complaints per hospital is submitted each year. The interviewees estimated that the complaints that reach the ombudsman constitute 81%–100% of all complaints submitted to the hospital. The most common complaint pertains to treatment, including attitude, quality of service, and quality of treatment. There are only a few cases of complaints regarding the availability of documents, lack of information, and limited protection of patient rights. Most of the ombudsmen do not treat anonymous complaints. Some 62% of the ombudsmen noted that there is a defined procedure at their hospital for handling patient complaints. Usually, the maximum period stipulated for treating a complaint is 14–21 working days. Most of the ombudsmen said that they have full and free access to information and to the hospital's computerized databases. However, 12.5% of the ombudsmen said that the employees of the hospital are not obligated to respond to their inquiries and questions. Most ombudsmen keep some records of the complaints received and the way they were handled. It was found that some hospitals have a high level of computerization and follow-up, conducted with dedicated software for complaint management. On the other hand, at most hospitals the level of computerization is relatively low and is usually based on basic Excel tables.

Managing the information received from complaints has two levels of importance:

1. It allows for an ongoing dialog with the complainants, who sometimes return with the same problem and/or with similar or additional problems over time. Available information from the handling of an earlier complaint in the same area is likely to shorten the process of addressing the new complaint and improves the service provided to the patients.
2. It enables the creation of a "central information database" that is regularly updated in real time for use by the organization and its various levels of management, and is based on feedback received from the recipients of the service.

Managing all of this information and transforming it into knowledge in an effective way requires that the database be complete, up-to-date, versatile and — most importantly — available, accessible, and practical. For this purpose, a computerized system that is both user-friendly and interfaces with the existing computerized demographic and other databases in the hospital is essential. This type of information system should also assist in the ongoing administrative management of complaint handling by the ombudsman.

We will describe the operational requirements for an information system for managing patient complaints, based on the experience accumulated at Hadassah's hospitals and also on the survey conducted in all the general hospitals in Israel. Hadassah is a worldwide voluntary organization established and based in the United States. The organization owns two general hospitals in Jerusalem — a total of about 1,100 hospitalization beds. The hospitals are also research and university teaching centers in collaboration with the Hebrew University of Jerusalem. Five academic schools operate in this framework: a medical school, a nursing school, a dental school, a school of public health, and a school of occupational therapy. The process of handling complaints at Hadassah is similar to the process practiced at most of the general hospitals in Israel (a total of about 14,600 general hospitalization beds). Figure 1 below describes the work process of complaint handling. Each of the stages includes substages whose connections are important for properly engineering the information system needed for managing the complaints and the extra information derived from them.

The system was developed by Hadassah's Information Systems Division with a Visual Basic development tool, using Word and Excel to produce reports. The system operates on an NT (Terminal Server) computer network under the Windows operating system, interfacing with an Oracle database. There is a plan to integrate this tailor-made system of complaint handling into Hadassah's newly introduced ERP/SAP environment. As with other systems that contain confidential medical, personal, and private information, and because of the risk that information leaked from the system could be used to harm the hospital and its employees, there is a need to incorporate particularly tight data security in this sensitive system. Hadassah has implemented such data security, including a personal identification number (PIN) generator and computerized mechanisms for PIN validation, PIN storage, PIN change or replacement, and PIN termination; input controls (check digits); field checks (for missing data, reasonableness, etc.); communication controls (including channel access controls); and database controls.

For the purpose of developing the system, a process analysis was conducted. This analysis indicated that the handling of a complaint can be broken down into
Figure 1. Schematic presentation of the work stages as a flow chart (actors: patient, caregiver, ombudsman; stages: medical treatment, receiving the complaint, handling the complaint, remedial and preventive action, and deriving managerial information).

For the purpose of developing the system, a process analysis was conducted. This analysis indicated that the handling of a complaint can be broken down into three subprocesses: the stage of receiving the complaint, the stage of treating the complaint, and the stage of deriving managerial information from it. Below is a review of each of these substages and the contribution of computerization to boosting the effectiveness of complaint handling.

3.5.1. The initial stage of handling a new complaint

A complaint may be received orally or in writing. An oral complaint is usually rejected and the person is told that he must submit the complaint in writing if a formal answer is expected. A written complaint may be submitted by hand or via mail, fax, or e-mail. A notice of confirmation is given for every complaint received. Computerizing this stage, as portrayed in Fig. 2, improves the efficiency of the work processes and enhances the service provided, and can also contribute to the effectiveness of the complaint handling — both in response time and in the handling of the complaint itself.

Figure 2. Screen for receiving a complaint.

One of the goals of computerizing the handling of complaints is to manage the entire system as a "paperless office." Therefore, it is proposed to integrate the systems of scanning the written complaint or attaching the e-mail to a "computerized folder" in which all of the information about the complaint will be managed. The index field for managing the complaints system will be the patient's identity number (comparable to the Social Security number in the United States).
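As a hedged sketch of how the identity number could serve as the index field of such a computerized folder, the fragment below stores complaints in an SQLite table keyed by the patient's identity number and returns any earlier complaints by the same patient. The table and column names are illustrative assumptions; the production system described in this chapter was built in Visual Basic over an Oracle database.

```python
import sqlite3
from datetime import date

# Minimal sketch of the "computerized folder" keyed by the patient's identity
# number. Table and column names are illustrative; they are not taken from
# the Hadassah system.
SCHEMA = """
CREATE TABLE IF NOT EXISTS complaints (
    complaint_id   INTEGER PRIMARY KEY AUTOINCREMENT,
    patient_id     TEXT NOT NULL,          -- identity number, the index field
    received_on    TEXT NOT NULL,          -- ISO date the complaint arrived
    channel        TEXT NOT NULL,          -- mail / fax / e-mail / by hand
    subject        TEXT NOT NULL,          -- mandatory, from a predefined table
    object_of      TEXT NOT NULL,          -- mandatory, from a predefined table
    status         TEXT NOT NULL DEFAULT 'open'
);
CREATE INDEX IF NOT EXISTS idx_complaints_patient ON complaints (patient_id);
"""

def register_complaint(conn, patient_id, channel, subject, object_of):
    """Record a new complaint and return any earlier complaints by the same patient."""
    earlier = conn.execute(
        "SELECT complaint_id, received_on, subject, status "
        "FROM complaints WHERE patient_id = ? ORDER BY received_on",
        (patient_id,),
    ).fetchall()
    conn.execute(
        "INSERT INTO complaints (patient_id, received_on, channel, subject, object_of) "
        "VALUES (?, ?, ?, ?, ?)",
        (patient_id, date.today().isoformat(), channel, subject, object_of),
    )
    conn.commit()
    return earlier          # a non-empty result alerts the user to past complaints

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.executescript(SCHEMA)
    register_complaint(conn, "123456782", "e-mail", "waiting time", "outpatient clinic")
    prior = register_complaint(conn, "123456782", "mail", "attitude", "orthopedics")
    print("Earlier complaints by this patient:", prior)
```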
This field is the same index field as in the hospital's demographic and billing systems, and it can serve as the interface between the computerized system for complaint handling and the hospital's ATD (Admission, Transfer, Discharge) systems. The interface will enable access to the patient's demographic information, save redundant data entry, and improve the reliability of the database. The system immediately checks whether the patient has already complained in the past and alerts the user to this. Early detection of past complaints is also important for accessing a previous response given to this patient about the same or a different complaint, and for preventing contradictory responses pertaining to the same case or person. It is also important to identify "serial complainants" who make a practice of frequent complaints.

The subject and the object of the complaint must be mandatory fields in the information system. It is recommended to manage these mandatory fields within tables that are defined in advance, and it is best to minimize the entry of free text at this point to enable the sorting of information based on a common denominator. An example of a complaint categorization screen is shown in Fig. 3.

Figure 3. Categorizing the subject of the complaint.

On the other hand, there is room to enter free text about the impression of the complainant — whether the complainant is hurt, angry, or calm — to help identify the motivation for submitting the complaint: whether the patient is seeking to punish someone or to contribute to improving the treatment or the system in the future, whether he fears the complaint will adversely affect him, and so on. Another important point of information during the initial stage of receiving the complaint is whether the patient intends to file a lawsuit or has already done so.

3.5.2. The stage of dealing with the complaint

The management of complaints constitutes the ombudsman's back-office work and includes the following stages: a decision about whom to approach; referring the matter to the person responsible and sometimes to his superiors; receiving a formal response to the complaint; deciding whether the complaint is justified or not; if possible, making suggestions for remedial action within the organization; providing a response to the complainant and notifying the person against whom the complaint was filed; and, when necessary, transferring the complaint to the legal counsel and/or to the risk management unit. At this stage too, we suggest that the system enable scanned files, or files sent via e-mail as a response to a specific complaint, to be added to the computerized folder.

The contribution of computerization to this process is mainly in managing automatic reminders for monitoring the receipt of responses, documenting the answers sent to the complainant, documenting the solutions assigned to the problem, and following up on their implementation, as in the sample screen appearing in Fig. 4. An additional contribution of the computerization of complaint handling is that the system collects the answers received from the respondents and actually combines them all in a document that comprises a basis for the printed response that is given to the complainant, as portrayed in Fig. 5. This saves the need for redundant data entry.
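The following is a minimal sketch of the reminder logic described above, assuming a 21-day limit counted in calendar rather than working days and illustrative field names; the chapter itself only states that hospitals usually stipulate 14–21 working days and that the system manages automatic reminders.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative sketch: flag referrals to staff members whose formal responses
# have not arrived within the stipulated period. The 21-day limit and all
# field names are assumptions made for this example.

@dataclass
class Referral:
    complaint_id: int
    referred_to: str          # person responsible (or a superior)
    sent_on: date
    response_received: bool = False

def overdue_referrals(referrals, deadline_days=21, today=None):
    """Return referrals still awaiting a response after the deadline."""
    today = today or date.today()
    limit = timedelta(days=deadline_days)
    return [r for r in referrals
            if not r.response_received and today - r.sent_on > limit]

if __name__ == "__main__":
    sample = [
        Referral(101, "head nurse, orthopedics", date.today() - timedelta(days=30)),
        Referral(102, "admissions office", date.today() - timedelta(days=5)),
    ]
    for r in overdue_referrals(sample):
        print(f"Reminder: complaint {r.complaint_id} awaits a response from {r.referred_to}")
```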
Figure 4. Managing the reminders.
Figure 5. Collecting the information from the complaint handling stages as a basis for the letter of response to the complainant.
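As a rough illustration of the idea behind Fig. 5, collecting the respondents' answers into one document that serves as the basis of the printed response, the sketch below concatenates the formal answers into a draft letter. The template wording and the data structure are invented for illustration and are not the hospital's actual text.

```python
# Sketch of how the answers collected from respondents could be combined into
# a draft letter of response. Template wording and inputs are illustrative.

def draft_response_letter(complainant_name, complaint_subject, answers):
    """answers: list of (respondent role, text of the formal response)."""
    body_paragraphs = [
        f"Regarding your complaint about {complaint_subject}, "
        "we have received the following clarifications:"
    ]
    for role, text in answers:
        body_paragraphs.append(f"- From the {role}: {text}")
    body_paragraphs.append("We thank you for bringing the matter to our attention.")
    return (f"Dear {complainant_name},\n\n"
            + "\n\n".join(body_paragraphs)
            + "\n\nSincerely,\nThe Patient Rights Representative")

if __name__ == "__main__":
    letter = draft_response_letter(
        "Ms. Cohen",
        "waiting time in the outpatient clinic",
        [("clinic head nurse", "An additional reception station was opened."),
         ("department head", "Appointment scheduling has been revised.")],
    )
    print(letter)
```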
3.5.3. The stage of deriving managerial information

Computerization is very helpful in facilitating the dialog between the complainants and the organization, which can prevent the recurrence of such problems and turn the problem's solution into a remedial action at the organizational level. This capability derives from the categorizing that is executed during the preliminary stage of handling complaints and from the computer's processing of this information after the handling of the complaint has been completed. The more flexible the system is in producing new and changing reports in accordance with new and changing managerial needs, the more effective the system will be as a working tool for management and as a system that provides data on which decisions can be made.

The system is used to periodically produce different reports for the various levels of management within the organization, from the director-general down to the field executives. The reports are produced according to various categories — for example, by subject of complaint, by level of justification, by physician or other staff member (by name), by department or unit (frequency), by status of the complaints (open/closed/in follow-up), and as various comparative reports by year. Figure 6 shows an example of the report generator screen.

Figure 6. Screen for operating the report generator.

These reports assist the ombudsman and the management in identifying the trends and areas that have been the subject of frequent complaints. Based on this information, they can formulate intervention programs aimed at reducing or preventing the recurrence of such incidents.
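A hedged sketch of the kind of cross-tabulation the report generator produces (counts of complaints by subject, department, status, or year) is shown below; the record fields are illustrative assumptions rather than the actual report definitions of the Hadassah system.

```python
from collections import Counter

# Sketch of a managerial report: complaint counts broken down by a chosen
# category (subject, department, status, year, and so on). Record fields
# are illustrative assumptions.

def complaint_report(records, by):
    """Count complaints grouped by the field named in `by`."""
    return Counter(record[by] for record in records)

if __name__ == "__main__":
    records = [
        {"subject": "attitude", "department": "orthopedics", "status": "closed", "year": 2007},
        {"subject": "waiting time", "department": "outpatient clinic", "status": "open", "year": 2007},
        {"subject": "attitude", "department": "outpatient clinic", "status": "closed", "year": 2008},
    ]
    for field in ("subject", "department", "status", "year"):
        print(f"By {field}:", dict(complaint_report(records, by=field)))
```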
Thus, for example, the most frequent complaints focused on the following topics: the attitude of the physician/nurse/other toward the complainant and/or his companion; payment — problems pertaining to payments and debts; hotel services; availability of medical documentation; waiting time in line during the appointment day; waiting time for an appointment; and the quality of treatment — medical/nursing/other. Sometimes, actions are — or should be — taken as a result of even a single complaint on a specific topic, because the complaint might indicate a failure in an area that entails substantial clinical, legal, or economic exposure for the hospital.

4. Case Studies

4.1. Complaint Handling that Led to Identifying a Need for a Change in General Policy

An accumulation of many complaints about faulty service in the emergency rooms, out-patient clinics, and day hospitalization prompted the management of Hadassah to declare 2007 as the year for improving the quality of all ambulatory services. Formal committees were appointed and chaired by senior members of management to define the root causes behind the complaints. The committees formulated operative proposals to resolve them and submitted them to the director-general. In 2008, various teams were assigned to work on implementing these recommendations.

4.2. Decisions or Actions Taken as a Result of Complaint Handling in Hadassah's Hospitals that Led to Changes in Work Methods and/or in Specific Areas

Below are examples of corrective actions that evolved from the complaint handling process at Hadassah's hospitals. These examples represent the use of the information system for managing and dealing with patient complaints: locating the areas that require corrective intervention, formulating an intervention program, and implementing it by Hadassah's management.

a) For a number of years, one of the orthopedics departments drew more complaints than any other medical ward in the hospital concerning hospitalization and surgical processes. In the immediate term, each problem was solved on an ad hoc basis. On the short-term level, the ombudsman actively participated in staff meetings aimed at drawing wider conclusions. On the long-term level, a CQI program was introduced that devised and launched a new and dedicated pre-hospitalization clinic. All orthopedic patients requiring elective surgery were referred to this clinic, where they receive complete information about the entire process, including written information. Pre-hospitalization tests were coordinated for them in advance. The date of surgery was set and all arrangements were made for the different possibilities of post-surgical rehabilitative care. The purpose of this clinic was to enhance the patient's and insurer's satisfaction through improved preparation of the patient and his family for the procedure ahead, to improve the efficiency of the processing of
pre- and post-hospitalization care, and to shorten the hospitalization period in the department (Javets and Stern, 1996).

b) People complained that in one of the busiest clinics, patients were entering without an appointment or ahead of others who had earlier appointments. As a result, the management of Hadassah, together with the Information Systems Division, worked out a solution that included the installation of computerized screens in the waiting area that display the list of patients waiting for each physician. To protect the patient's privacy, it was decided not to display the patient's name, but only an identifying number. The complaints stopped and the satisfaction of the patients increased. At the same time, and without defining this as a goal of the project, the satisfaction of the caregivers also rose, because they were less frequently interrupted by patients who were angry about the waiting time and wanted to know when their turn to see the physician would come.

c) A special parking area was designated for patients receiving chemotherapy and radiation treatments, and for the disabled in wheelchairs. Numerous complaints about the availability of these parking spaces led to the discovery of improper conduct: employees arriving for early shifts were parking in the spaces designated for the seriously ill patients. This was in addition to the growing shortage of parking spaces for the disabled. Preventive action was taken against these employees and, at the same time, additional parking spaces for the disabled were opened and a shuttle service with a special vehicle was introduced for people who have difficulty walking from the parking areas to the hospital entrance.

d) Recurrent complaints about the improper transfer of blood samples to distant and external laboratories led the hospital's management to change the procedures for transferring blood samples. The change involved instituting new written working procedures in all of the laboratories and allocating suitable means of transportation and handling of the samples.

e) Patient claims regarding lost dentures, small accidents, damage to personal possessions, and the like used to be forwarded by the hospital administration to the hospital's insurance company agent. The process was very complicated and lengthy, causing much aggravation to the complainants. At the end of this process, in most cases, the damage was paid by the hospital itself when the claim was found to be justified. Today, in the short term, minor claims are no longer referred to the insurance company agent and are instead handled by the hospital. In the long term, a committee for handling minor claims was established by the ombudsman. The committee convenes on a flexible schedule to ensure a reply to the complainant within a short period. The committee has developed working procedures and regulations to ensure equity in its decisions about whether to compensate the complainant or to deny his claim as unjustified. Only larger claims, which comprise a small minority of the claims, are still forwarded to the insurance agent and are followed up by the ombudsman to ensure the prompt processing of the claim.
f) Taking care of complaints has also led to identifying areas where instruction is required for employees or patients. Numerous complaints about discourteous behavior by admission clerks led Hadassah's management to take action in the area of customer service training. For several months, workshops were conducted for employees on the subject of providing service, and the employees will be required to attend periodic training on this subject.

g) Patients complained about not receiving proper explanations about the correct preparations for virtual colonoscopy. Due to the lack of proper preparation, some tests were rejected. The solution was to compose one page of very clear and detailed instructions, prepared by a physician and a clinical dietician who were responsible for this subject.

The above description of changes introduced in the hospital as part of the CQI process illustrates the comprehensive cooperation required to ensure the success of the devised solution. While the ombudsman fills the role of problem identifier (based on the complaints he receives and analyzes with the help of dedicated information systems), the implementation of the CQI process requires the responsiveness of management, the willingness of the staff in the relevant unit to participate in the effort, and the dedication of all to promote the innovation into a continuous program of action. Moreover, the new procedures must be reevaluated and their effectiveness measured periodically, with the results of this evaluation used for further adaptation to assure the best outcomes.

5. Summary and Conclusions

One of the tools available to managers for receiving feedback on the organization's activities is a mechanism for handling complaints from the public — the ombudsman. Complaint handling comprises a primary and important instrument for receiving feedback on the activities of various mechanisms, departments, teams, and individual employees in the organization, as well as feedback on the way these activities are received and perceived by the consumers they are intended to serve. In health systems, the ombudsman has a special and very important role in moderating the feelings of bitterness and helplessness of the individual patient, who is in a position of inferiority because he is ill and requires help from very professional mechanisms in large bureaucracies. In addition, the ombudsman also fills an important role in serving as a source of feedback information on the effectiveness of the implementation of health policies, and as a source of information for initiating proactive efforts to improve the quality of medical care.

The computerization of public complaints and their handling enables the creation of an organizational "knowledge base" that can be used to generate numerous reports, according to various parameters. This "knowledge base" serves management as an effective tool for enhancing quality, promoting proper administrative
processes, boosting efficiency, and empowering the consumer. Transforming information into knowledge and managing information in an effective way require that the database be complete, up-to-date, versatile and — most importantly — available and user friendly. Indeed, it is also possible to handle complaints without computerization, but the drawing of systemic conclusions would then be only intuitive. On the other hand, a computerized database enables the rapid execution of calculations and correlations, and a systematic and effective identification of the specific areas in which remedial actions are required to enhance the service and assure quality.

The handling of complaints entails three subprocesses: receiving the complaint, investigating and rectifying it, and deriving the correct lessons from it. We have shown how the use of this type of information system at Hadassah assists in the everyday management of complaint handling in the ombudsman's office. At the same time, we demonstrated how computerized handling of each of the substages has focused attention on problematic areas that required action by management — action that is aimed at increasing the satisfaction of patients and preventing failures in future medical and other treatments. Our research and conclusions are applicable to public general hospitals, as part of the health system in Israel. It is our suggestion to check whether the same conclusions and managerial implications are applicable to other public organizations that are not part of the medical system.

References

African (Banjul) Charter on Human and Peoples' Rights (October 21, 1986).
Ben Haim, A, S Schwartz, S Glick and Y Kaufman (2003). The development of the ombudsman institution in the health services system in Israel. Bitahon Sociali [Social Security], 64, 23, 67–82.
Bismark, MM et al. (2006). Relationship between complaints and quality of care in New Zealand: A descriptive analysis of complainants and non-complainants following adverse events. Quality and Safety in Health Care (QSHC), 15, 17–22.
Brigit, D (2005). Exploring common deficiencies that occur in recordkeeping. British Journal of Nursing, 14(10); ProQuest Nursing & Allied Health Source, p. 568.
Carmi, A (2003). Health Law. Sarigm-leon: nero, 795–800.
Council of Europe, ETS No. 164: Convention for the Protection of Human Rights and Dignity of the Human Being with Regard to the Application of Biology and Medicine: Convention on Human Rights and Biomedicine, Oviedo, 4.IV.1997. http://conventions.cve.int/treaty/EN/Treaties/Html/164.htm.
Dimitris, C and D Nikiforos (2004). Traditional human rights protection mechanisms and the rising role of mediation in southeastern Europe. Iyunim—The Periodical of the Office of the State Comptroller and Ombudsman, 60, 21–33.
Donabedian, A (1992). Quality assurance in health care: Consumers' role. Quality in Health Care, 1, 247–251.
European Social Charter (February 26, 1965).
Fooks, C, M Reclis and C Kushner (1990). Concepts of quality of care: National survey of five self-regulating health professions in Canada. Quality Assurance in Health Care, 2(1), 89–109.
Gecik, K (1993). The need to protect patients' rights. Medical Law, 12(1/2), 109.
Hans, G-H (2002). The ombudsman: Reactive or proactive? Iyunim—The Periodical of the Office of the State Comptroller and Ombudsman, 59, 63–66.
Hickson, GB et al. (2002). Patient complaints and malpractice risk. JAMA, 287(22), 2951–2957.
International Covenant on Economic, Social and Cultural Rights (1966).
Israel Ministry of Health (2006). The decade summary report of the patients representative. Report No. 7, 3–6.
Israel Patient Rights Law (December 5, 1996).
Javets, R and Z Stern (1996). Patients' complaints as a management tool for continuous quality improvement. Journal of Management in Medicine, 10(3), 39–48.
Leape, LL et al. (1993). Preventing medical injury. Quality Review Bulletin, 19, 144–149.
Marten, O (2002). Protecting the integrity and independence of the ombudsman institution: The global perspective. Iyunim—The Periodical of the Office of the State Comptroller and Ombudsman, 59, 67–84.
National Audit Office (NAO), Government of the United Kingdom (2005). Citizen Redress: What Citizens Can Do if Things Go Wrong with Public Service. London: The Stationery Office.
National Health Insurance Law (1994). Book of Laws 1469 (June 26, 1994).
Nebantzel, YA (1997). Perspectives on the impact of the ombudsman. Studies in State Auditing, 56, 28.
Nord, E (1995). The use of cost-value analysis to judge patients' right to treatment. Medical Law, 14(7/8), 553.
Pahlman, I, T Hermanson, A Hannuniemi, J Koivisto, P Hannikainen and P Liveskivi (1996). Three years in force: Has the Finnish act on the status and rights of patients materialized? Medical Law, 15(4), 591.
Paterson, R (2002). The patient's complaints system in New Zealand. Health Affairs, 21(3), 70–79.
Reich, W (ed.) (1978). Encyclopedia of Bioethics, Vol. 3, p. 1201. New York.
Resolution 48/140 on Human Rights and Scientific and Technological Progress (December 20, 1993).
Sage, WM (2002). Putting the patient in patient safety, linking patient complaints and malpractice risk. JAMA, 287(22), 3003–3005.
Stern, Z, E Mersel and N Gedalia (2008). Are you being served? The inter-organizational status and job perception of those responsible for patient rights in general hospitals in Israel. Harefuah (in press).
Steven, ID and RM Daglas (1986). A self-contained method of evaluating patient dissatisfaction in general practice. Family Practice, 3, 14–19.
The Cairo Declaration on Human Rights in Islam (August 5, 1990).
Universal Declaration of Human Rights. UN Resolution 217(A) (December 10, 1948).
van der Vyver, J (1989). The right to medical care. Medical Law, 7(6), 579.
Vuori, H (1991). Patient satisfaction — Does it matter? Quality Assurance in Health Care, 3(3), 183–189.
Wagner, ML (2000). The organizational ombudsman as change agent. Negotiation Journal, 16(1), 99–114 (Springer Netherlands).
William, S and PE Borenstein (2000). Baltimore's consumer ombudsman and assistance program: An emerging public health service in medical managed care. Maternal and Child Health Journal, 4(4), 261–269.
Biographical Notes

Zvi Stern, M.D., is the Director of Hadassah Mount Scopus Hebrew University Hospital in Jerusalem and an Associate Professor of healthcare administration at the Hebrew University Hadassah Medical School, both in Jerusalem, Israel, where he received his MD. His research interests include quality improvement in healthcare: concepts, methodology and assessment, and errors and patient safety — the human factor.

Elie Mersel, M.A., M.H.A., is the Chief Audit Executive of "MEKOROT" — the Israel National Water Company, and a former Internal Auditor in the Hadassah Medical Organization. He is a Council Member of the Institute of Internal Auditors (IIA) in Israel and a lecturer at Tel-Aviv University. His research interests include internal auditing, control environment, risk management, and quality assurance.

Nahum Gedalia holds an M.A. from Hebrew University, Jerusalem, and an M.P.A. from Harvard University, Boston. He worked as a Deputy and Chief Administrator of both Hadassah University Hospitals, at Ein-Kerem and Mount Scopus, alternately for 30 years. For the last 3 years, he has served as the Patient Representative (Ombudsman) for both hospitals and all other Hadassah schools and institutes.
Chapter 4
How to Develop Quality Management System in a Hospital

VILLE TUOMI

Department of Production, University of Vaasa, P.O. Box 700 (Yliopistonranta 10), 65101 Vaasa, Finland
[email protected]

The objective of this study was to consider how to develop a quality system in a hospital. This is done by answering two questions: what are the situational factors that should be taken into consideration while establishing a quality system, and what should be taken care of during the development process? This study focused mainly on public hospitals. The study was a qualitative constructive study, in which we try to develop a model for the development of a quality management system for a public hospital. This is done from the contingency theory approach and by using content analysis to analyze the study material. As a result of the study, a model for developing a quality system in a hospital was constructed. The results can be generalized especially to other hospitals. As managerial implications, the model constructed in this study could be applied to other hospitals and professional service organizations, but there is no universal way to develop a QMS, so the system must always be customized to the organization. By improving the fit between the QMS and its contingencies, that is, issues related to customers, an organization will probably improve its outputs and outcomes.

Keywords: Quality management system; hospital.
1. Introduction

Quality management is traditionally seen as a universalistic management system, which means that it is assumed that there is some kind of one best way to implement quality management. When we think about hospitals, this may cause problems, because quality management was developed in industrial organizations and hospitals are professional service organizations. In Finland and in many other countries, hospitals are also public non-profit organizations, which may cause problems when implementing a quality management system. Therefore, there should be some kind of quality management model for hospitals which takes into consideration the situation in which the hospitals are operating. This leads us to think about the quality management system from the viewpoint of the contingency approach.
2. Quality Management Systems

A quality management system (QMS) can be defined in many ways:

• A QMS is a formalized system that documents the structure, responsibilities, and procedures required to achieve effective quality management (Nelsen and Daniels, 2007).
• A quality management system is made to direct and control an organization with regard to quality. A system consists of interrelated or interacting elements. A management system is a system made to establish policy and objectives and to achieve those objectives. The management system of an organization can include different management systems, such as a quality management system or a financial management system (ISO 9000:2000, pp. 25–27).
• "Quality system is the agreed on, company-wide and plant-wide operating work structure, documented in effective, integrated technical and managerial procedures, for guiding the coordinated actions of the people, the machines, and the information of the company and plant in the best and most practical ways to assure customer quality satisfaction and economical costs of quality" (Feigenbaum, 1991, p. 14).
• A quality system is an assembly of components, such as the organizational structure, responsibilities, procedures, processes, and resources for implementing total quality management. The components interact and are affected by being in the system. The interactions between the components are as important as the components themselves. To understand the system, you have to look at the totality, not just one component (Oakland, 1999, p. 98).

Organizational structure may be considered as "the established pattern of relationships among the components or parts of the organization." The structure is relatively stable or changes only slowly. The formal structure of an organization is defined as (a) the pattern of formal relationships and duties (the organization chart plus job descriptions) and (b) formal rules, operating policies, work procedures, control procedures, compensation arrangements, and similar devices adopted by management to guide employee behavior in certain ways within the structure of formal relationships. There is also the informal organization, which refers to those aspects of the system that are not formally planned but arise spontaneously out of the activities and interactions of participants (Kast and Rosenzweig, 1970, pp. 170–173).

As we mentioned before, a quality system consists of different components and the interactions between them. Structure may also be considered as part of the quality system (Nelsen and Daniels, 2007; Oakland, 1999, p. 98). In the ISO QMSs, organizational structure is defined as the arrangement of responsibilities, authorities, and relationships between people. A formal expression of the organizational structure is often provided in a quality manual. An organizational structure can include relevant interfaces to external organizations (ISO 9000:2000, p. 27).
The advantages of quality systems are obvious in manufacturing, but they are also applicable in service industries and in the public sector. When implementing a QMS, you have to use a language that is suitable for the organization where it is applied (Oakland, 1999, p. 113). So in a hospital, you should use the language of the healthcare industry, integrate the QMS with the other management systems, and see the system as a totality consisting of interacting components that is coordinated and organization-wide.

Sometimes, researchers have evaluated the maturity of quality systems by assessing the use of quality tools. Another model of maturity evaluation is based on performance maturity levels on a 1–5 scale. At the lowest level, there is no formal approach; at the second level, there is a reactive approach; the third level is a stable formal system approach; at the fourth level, continual improvement is emphasized; and at the highest level, there is best-in-class performance, which means a strongly integrated improvement process and demonstrated best-in-class benchmarked results (Sower et al., 2007, p. 124). So, there should be some sort of QMS in a hospital on average. The most common QMSs are often implemented with the help of the EFQM model (EFQM, 11.9.2008) and the ISO 9001 quality management standard or, in the case of Finland, with the help of the SHQS model. The simplification of the logic of the EFQM and ISO is seen in Fig. 1.

3. Implementation of the Quality System in Former Studies

What are we doing if we are implementing a quality system? Some researchers claim that there is a difference between total quality management (TQM) and ISO 9000, in that TQM^a is a more effective and practical way to improve the operations of an organization (Yang, 2003, p. 94), but at the same time there is very much in common between the quality management principles of ISO 9000 and TQM: customer focus, leadership, involvement of people, process approach, system approach to management, continual improvement, factual approach to decision making, and mutually beneficial relationships (SFS-EN ISO 9000; Magd and Curry, 2003, pp. 252–253). TQM could be seen as a broader approach than ISO 9000, but an organization gets the best results by implementing both approaches at the same time, because they complement each other (Magd and Curry, 2003, pp. 252–253). Two different kinds of processes, the implementation of the TQM system and the implementation of ISO 9000, are listed in Table 1.

^a According to a study concerning university hospitals in Iran, TQM requires a quality-oriented organizational culture supported by senior management commitment and involvement, organizational learning and entrepreneurship, team working and collaboration, risk-taking, open communication, continuous improvement, customer focus (both internal and external), partnership with suppliers, and monitoring and evaluation of quality (Rad, 2006).
Figure 1. The EFQM excellence model and ISO 9001. (The EFQM model: enablers (leadership, people, policy and strategy, partnerships and resources, and processes) lead to results (people results, customer results, society results, and key performance results), linked by innovation and learning. ISO 9001:2000: customer requirements enter as input to management responsibility, resource management, service realization, and measurement, analysis and improvement, under continual improvement of the QMS, yielding the product and satisfied customers. A QMS consists of certain interrelated elements; the aim of the system is to direct and control quality.)
It is easy to see that the steps in the processes differ from each other and that the first step is especially different: ISO starts from the customers and Yang's TQM model starts from the management.

The first quality systems in the healthcare industry in Finland were established during the late 1990s. In those days, quality systems were quite rare even internationally. In Finnish health care, as in most European countries, there is no real competition. Therefore, a certificate as a document is not very valuable. The main benefit results from the external assessment, which ensures a systematic approach and correct implementation of the quality system. The quality system can never be complete. It shall dynamically search for better ways to carry out the duties of the organization. In the evolution of the quality system, there may be different phases. When the system is well adopted, the importance of the formal documentation is not
Table 1. Comparison of the Implementation Models: TQM System and ISO 9000.

Implementation model of TQM in healthcare (Yang, 2003, pp. 96–97):
1. Building commitment for management
2. Setting the management principles and quality policies
3. Installing the corrective concepts of quality to employees
4. Conducting TQM educational training
5. Understanding and fulfilling customers' requirements
6. Proceeding continuous improvement
7. Standardizing and managing the processes
8. Promoting daily management and empowerment
9. Adjusting the style of leadership
10. Constructing the teamwork
11. Performing customer satisfaction survey and quality audit
12. Changing the organizational culture

ISO 9000 approach to develop and implement a quality management system (SFS-EN ISO 9000, p. 13):
(1) Determining the needs and expectations of customers and other interested parties
(2) Establishing the quality policy and quality objectives of the organization
(3) Determining the processes and responsibilities necessary to attain the quality objectives
(4) Determining and providing the resources necessary to attain the quality objectives
(5) Establishing methods to measure the effectiveness and efficiency of each process
(6) Applying these measures to determine the effectiveness and efficiency of each process
(7) Determining means of preventing nonconformities and eliminating their causes
(8) Establishing and applying a process for continual improvement of the QMS.
as crucial as in the beginning, and it is possible to give more space for innovative planning and implementation of the quality issues (Rissanen, 2000).

According to the experiences of the Kuopio University Hospital (KUH), a quality system may be regarded as laborious and restrictive if the guidelines of the standard are taken too seriously and punctiliously. The standard specifies key issues and factors which are probably important for the efficiency and the success of work in an organization, but the organization itself should find solutions for implementing the issues in a feasible and useful way. As a whole, the KUH experience shows that it is feasible to establish and maintain a comprehensive quality system in a big hospital. Without a structured guideline (for instance, ISO 9001 or a Quality Award), it may be difficult (Rissanen, 2000).

There are successful implementations of the ISO 9000 quality standards in hospitals in the Netherlands (Van den Heuvel et al., 2005, p. 367). In a longitudinal case study made in a Swedish hospital, they succeeded in implementing a quality
system on a surface level, so that incident reports were written on a daily basis. However, the implementation nevertheless failed in the sense that there was no learning organization with reflective thinking in place. The study brought attention to ambiguity in the organization. As a consequence of ambiguity, the staff have to conduct their work in a way that is not compatible with their understanding of their role and the best way to accomplish their work goals. This is a work situation that will probably cause an increase in sick leave. The study showed the urgent need for more successful management of work situations that are characterized by ambiguity. A quality system based on the process of sense making might serve as a panoptic system that can unite the disparate meanings and reach a collective meaning status in order to make effective decisions and a successful adaptation to change, and as a result, remove the ambiguity (Lindberg and Rosenqvist, 2005).

If an organization is using quality awards or ISO 9000 standards for managing its quality, it is possible that "the tail starts to wag the dog" in the sense that the quality manual and the self-assessment report of the award become "image" documents. This means that continuous improvement is forgotten and, for example, the self-assessment is not improvement-oriented but award-driven (Conti, 2007, pp. 121–125). On the other hand, there are also successful implementations of the ISO 9000 quality standards (Van den Heuvel et al., 2005, p. 367) and of quality awards in Europe (the EFQM model) and in the United States (the Malcolm Baldrige framework) (Sánchez et al., 2006, p. 64).

Implementation of QMSs in hospital departments instead of an organization-wide implementation strategy has been successful (Francois et al., 2003, p. 47; Kunkel and Westerling, 2006, p. 131). In Spain and in other countries, the most important issues impacting the success of the implementation of the EFQM were training and experience with the use of the EFQM model. Other important factors were governments' promotion of the model and the development of guidelines for the practical application of the model (Sánchez et al., 2006, p. 64). According to a study concerning the implementation status of QCI in Korean hospitals, the use of scientific QCI techniques and quality information systems are the most critical elements that help the implementation, although structural support and an organizational culture that is compatible with the CQI philosophy also play an important role (Lee et al., 2002, p. 9).

According to a study concerning organizational change in a large public hospital, in transforming from the traditional professional hierarchy to an organization based on clinical teams, the involvement of the new clinical teams in the change process and the supporting of both old and new identities were emphasized. This is a cultural change in the sense that professional departments were displaced in favor of clinical teams as the organization's core operational units. In this kind of situation, the change is likely to be resisted by employees, particularly those in low-status groups. The members of low-status groups should be involved in the change process somehow and there should be concurrent enhancement of both the old and new identities of the employees (Callan et al., 2007, pp. 448, 457, 464–467).
To conclude, we can now present the important issues to take into consideration during the implementation of a QMS in a hospital. The following issues are important:

• An organization gets the best results by implementing both the TQM and the ISO 9000 approaches at the same time, because they complement each other.
• It is reasonable to utilize existing models for quality management (EFQM, ISO 9001, etc.), but they should not be taken too punctiliously and they should be applied in different ways in different hospitals.
• Get training and experience concerning the model you are utilizing.
• Develop guidelines for the practical application of the model.
• The government should promote the model.
• Use proper techniques and quality information systems to help the implementation.
• Involve employees, especially the members of the organization's low-status groups, in the change process somehow and enhance both the old and new identities of the employees.
• During the implementation of the quality system, an organization should pursue a learning organization with reflective thinking in place to decrease the amount of ambiguity in the organization. A quality system based on the process of sense making might help in reaching a collective meaning status in order to make effective decisions and a successful adaptation to change, and as a result, remove the ambiguity.
• Think carefully about what the steps in the implementation of the QMS in a certain hospital are and in what order they should be taken: starting from the customer requirements or from management commitment, and starting from the hospital departments instead of an organization-wide implementation strategy, have both been successful.

4. Contingency Approach in QMS

It is claimed that two concepts will influence the field of quality management in the next several years: organizational context and contingency theory. Organizational context refers to variables that influence the adoption of quality approaches, tools, and philosophies. Contingency theory emphasizes the fact that business contexts are unique and that differences in management approaches should exist to respond to varying business needs. The business context consists of the following factors: people, processes, finance and information systems, culture, infrastructure, organizational learning and knowledge, and closeness to customers (data gathering, interaction, and analysis relative to customers). An organization tries to achieve the best possible outputs and outcomes in its business context, but there is normally some sort of gap between the existing and desired state of affairs. In making quality-related strategic choices, we should take into consideration both the aforementioned organizational
context (inside an organization), the business context (outside an organization), and the body of knowledge concerning quality management (Foster, 2006).

As we mentioned before, what kind of quality tools an organization can use depends upon its quality maturity level. When we consider the quality techniques used in hospitals, the EFQM model can be more suitable for top management's use and ISO 9001 is more applicable at the tactical or operational level (Van den Heuvel et al., 2005, p. 367). These models are worth considering in this article, because they are common ways to implement quality management (systems).

This is a constructive study, in which we try to develop a model for a quality management system of a public hospital. This is done from the contingency theory approach. The essence of the contingency theory paradigm is that organizational effectiveness results from fitting the characteristics of the organization to contingencies that reflect the situation of the organization. Contingencies include the environment, organizational size, and organizational strategy. Core commonalities among the different contingency theories are the following assumptions: (1) there is an association between contingency and organizational structure, (2) contingency change causes organizational structural change, and (3) fit affects performance (Donaldson, 2001, pp. 1–11; see also Sitkin et al., 1994 or Conti, 2006).

When we talk about quality management and the contingency approach, there are two key issues. First, quality is contingent upon the customers, not upon the organization or its products or services. Second, the quality target is continually shifting and therefore organizations must pursue rightness and appropriateness in their products or services. The key to an organization's success rests on communication within the organization and between the organization and its environment (Beckford, 1998, p. 160). In many cases, a more situational approach would be suitable for quality management. When we consider a QMS from that viewpoint, the activities (main tasks) of the system are the following:

1. Strategic policy-making process:
— Based on information on (changes in) the environment, a (quality) policy has to be developed, elaborated in the purposes/intentions for the service which is required and the way these purposes/intentions can be realized.

2. Design and development of control, monitoring, and improvement actions:
— Constructing the way in which the controlling, monitoring, and improving take place.
— Constructing the way in which the tasks are divided among individuals and groups in the organization.
— The most important coordination mechanisms (control and monitoring) in a professional service organization are the standardization of knowledge and skills and mutual adjustment, and much of the control is self-control.
3. Control, monitoring, and improvement:
— The measure of detail in which control, monitoring, and steering of improvement take place, and their frequency.
— Control, monitoring, and improvement are mainly done by the professionals themselves.
— An important issue is which activities should be done by the customers and how these activities can be controlled (van der Bij et al., 1998).

The strategies of the Finnish social and healthcare services aim at improving quality and effectiveness by the year 2015. The aim is, for example, to improve service quality and to increase the utilization of the evaluation and feedback made by customers and patients (Sosiaali- ja terveyspolitiikan strategiat 2015, 2006, pp. 4–5, 18). Public hospitals in Finland have the following changes going on in their operating environment:

1. Strong pressures to change are connected to the Finnish health services and to their supply, demand, and usage, and the present system cannot be expected to meet these future challenges.
2. The development of Finland's health service is determined by the EU's specifications of the pan-European welfare policy, and globalization also brings challenges.
3. The coming changes in the needs of the aging population are well anticipated and can cause unexpected pressures on services; this means that customers know their rights, will not automatically trust healthcare personnel, and are more demanding.
4. There has to be some kind of prioritization (every customer cannot get every service).
5. There will be more e-services.
6. There will be recruitment problems because of the large age groups' transition to retirement.
7. New technology makes it possible to improve the quality, productivity, efficiency, and effectiveness of the services and increases the expectations of the customers.
8. The number of multiproblem patients will increase, which means the need for multiprofessional cooperation across organizational boundaries.
9. Every citizen's own role in and responsibility for his or her own health will increase.
10. The financing of the healthcare services comes from several different sources and the financial system must be clarified (Ryynänen et al., 2004, pp. 2–9, 39, 91–92).

To conclude, in this study we try to look at the contingencies which influence the way we should implement a QMS in a hospital. To do this, the following issues are important to analyze: (1) the QMS, which means the strategic policy-making process and measurement and improvement, (2) contingencies, especially external customers (patients), and (3) possible outputs and outcomes, which can be seen by
Figure 2. The fit between the QMS and the contingencies in the environment (1. characteristics of the QMS: strategy, policy, measurement, and improvement; 2. contingencies in the environment: especially external customers, but also personnel; 3. outputs and outcome and functioning of the QMS; the three elements are connected by fit).
evaluating how well the QMS is serving its purposes and what the outputs and outcomes of the system are. Based on the above, we could build a preliminary model for analyzing the contingencies of quality management (Fig. 2).

5. Methodology and Analysis

In this chapter, we use the contingency approach, but the study is qualitative. This is because contingency theory offers a good way to consider what kind of situational factors should be taken into consideration while implementing a QMS, but at the same time the theory should be developed so that it takes into consideration the level of human actors and not only the organization level (see Donaldson, 2001, pp. 56–76). This is attempted here by using one hospital and one hospital unit as an example in implementing a QMS. This is done by constructing a model for a QMS in the hospital. The study is qualitative also because of the study subject, quality management, which is a vague subject given that the term quality has many dimensions (see, for example, Garvin, 1988) and a hospital is a complex organization with multiple goals (see, for example, Kast and Rosenzweig, 1970, or more recent work). A qualitative approach allows the researcher to deal with complexity, context and persona and their multitude of factors, and with fuzzy phenomena. For example, holistic case studies are applicable in these kinds of situations (Gummesson, 2006, p. 167). Qualitative methods are also very suitable for studies concerning organizational change, because they allow the detailed analysis of the change, and by using qualitative methods we can assess how (what processes are involved) and why (in terms of circumstances and stakeholders) the change has occurred (Cassell and Symon, 1994, p. 5).

In this study, the research question is how to make a QMS, and the answer is found by analyzing the fit between the important stakeholders of the hospital (personnel and customers) and the characteristics of the QMS. The research process consists of the following steps, applied from the study of Lukka (2003):

1. Find a practically relevant problem which also has potential for theoretical contribution.
— Our topic is to find out how to build a quality system in a hospital, which is an acute problem in, for example, the Finnish healthcare system and, from the
theoretical viewpoint, it is a question of how to take into consideration all the situational factors and special characteristics of a non-profit healthcare organization when applying quality management, which is said to be universalistic.

2. Examine the potential for long-term research cooperation with the target organization(s).
— I have a research agreement with the case organization and we have also made one study together.

3. Obtain a deep understanding of the topic area both practically and theoretically.
— In the before-mentioned study, we tried to develop the organization in practice by building a process measurement system into the heart unit of the hospital.^b

4. Innovate a solution idea and develop a problem-solving construction, which also has potential for theoretical contribution.
— Because of the universalistic tradition of quality management, we tried to construct a situational quality management model which is made in cooperation between practitioners and researcher.^c

5. Implement a solution and test how it works.
— A weak market test is made by showing the results to the quality manager of the target organization.

6. Ponder the scope of applicability of the solution.
— The model is constructed in such a way that it could be used in building quality systems in the case organization and in other hospitals and their units.

7. Identify and analyze the theoretical construction.
— The nature of quality management is considered on the continuum of a universalistic theory vs. a situational approach to quality management.
— The major types of potential theoretical contributions are the novel construction itself and the applying and developing of the existing theoretical knowledge about the quality management features emerging in the case.

The study material consists of semi-structured interviews of the informants, an audit report (SHQuality, 2007) and strategy (Vaasa Hospital District, 2003), and the quality policy and strategy of the district (Vaasa Hospital District, 2007). All the material is analyzed with the help of content analysis (for similar studies, see for example Keashly and Neuman, 2008 or Kunkel and Westerling, 2006). The study material is analyzed with the help of content analysis, which can be defined as "any methodological measurement applied to text (or other symbolic

^b According to Edgar H. Schein, if you want to understand an organization, try to change it.
^c We have cooperated in former studies, and in this case study I have made an interview with a quality manager.
materials) for social science purposes" (Duriau et al., 2007). Content analyses are most successful when they focus on facts that are constituted in language, in the uses of the particular texts that the content analysts are analyzing. Such linguistically constituted facts can be put into four classes: attributions, social relationships, public behaviors, and institutional realities. Attributions are concepts, attitudes, beliefs, intentions, emotions, mental states, and cognitive processes that ultimately manifest themselves in the verbal attributes of behavior. They are not observable as such. Institutional realities, like government, are constructions that rely heavily on language. Content analysis of what is said and written within an organization provides the key to understanding that organization's reality (Krippendorff, 2004, pp. 75–77).

Central to the value of content analysis as a research methodology is the recognition of the importance of language in human cognition. The key assumption is that the analysis of texts lets the researcher understand other people's cognitive schemas. At its most basic, word frequency has been considered to be an indicator of cognitive centrality or importance. Scholars have also assumed that a change in the use of words reflects at least a change in attention, if not in cognitive schema. In addition, content analysis assumes that groups of words reveal underlying themes and that, for instance, co-occurrences of keywords can be interpreted as reflecting associations between the underlying concepts (Duriau et al., 2007). See the exact description of the content analysis of the study in the Appendix.

6. Case Study in the Central Hospital of the Vaasa Hospital District

The Vaasa Hospital District consists of three hospitals which all operate under the administration of the Vaasa Central Hospital. The District is owned by 23 municipalities and it is a bilingual organization (both Swedish- and Finnish-speaking personnel, customers, and owners). The number of personnel in the year 2006 was 1,997, consisting of nursing staff (1,060), physicians (183), research staff (240), administrative staff (405), and maintenance staff (109). Services are offered to 166,000 inhabitants in the area of the municipalities (SHQuality, 2007). The Hospital District is one of the 20 Finnish hospital districts.

7. Analysis and Results of the Study

The analysis was made mainly by using content analysis and by drawing the conclusions into the tables below. The key themes^d are presented in Table 2. The same kinds of tables are used for thematic content analysis (see Miles and Huberman, 1994, p. 132).

^d The audit report (SHQuality, 2007) and transcripts of the semi-structured interviews were analyzed by
content analysis using former studies concerning contingency theory and quality management to form categories appropriate for this study. Then, I read texts few times and after that (with help of word processing program) the texts were categorized and edited so that the key themes were found. See the Appendix for more details.
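As a concrete illustration of the frequency and co-occurrence assumptions described above, the following minimal Python sketch counts word frequencies and within-sentence keyword co-occurrences; the keyword list and the sample text are hypothetical illustrations, not the actual study material.

```python
# A minimal sketch of word-frequency and keyword co-occurrence counting for
# content analysis. The keyword list and the sample text are hypothetical.
import re
from collections import Counter
from itertools import combinations

KEYWORDS = {"quality", "customer", "process", "measurement", "improvement"}

def sentences(text):
    return [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]

def tokenize(sentence):
    return re.findall(r"[a-zåäö]+", sentence.lower())

def analyze(texts):
    word_freq = Counter()      # overall word frequencies (cognitive centrality)
    cooccurrence = Counter()   # keyword pairs appearing in the same sentence
    for text in texts:
        for sent in sentences(text):
            tokens = tokenize(sent)
            word_freq.update(tokens)
            present = sorted(KEYWORDS.intersection(tokens))
            cooccurrence.update(combinations(present, 2))
    return word_freq, cooccurrence

sample = ("There are many ways to measure customer satisfaction. "
          "Process measurement and quality improvement need more attention.")
freq, pairs = analyze([sample])
print(freq.most_common(5))
print(pairs.most_common(5))
```

In practice, such counts would be computed over the full transcripts and documents and then interpreted qualitatively, as described in the Appendix.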
Table 2. QMS and customers: key characteristics of the quality system and the contingencies.

Key characteristics of the quality system:
— Quality policy and strategy (Vaasa Hospital District, 2007): Good quality in the Vaasa Hospital District is defined as service processes which are, from the customer's or patient's point of view, high level, available, efficient, and economic, and during which the well-being of the personnel and the expectations of the stakeholders are taken into consideration. Quality work is based on the values stated in the strategy: respect for human dignity, responsibility, and equity. In the quality strategy, the quality work is described on the basis of these values, and the emphasis is on patients, laws and other regulation, management and strategies, risk management, cross-organizational relationships and cooperation, and the quality system.
— Measurement: The measurement system must be developed at every organizational level as a whole: all objectives must be in such a form that their realization can be measured, processes and personnel issues must be evaluated, and internal audits and management reviews must be in use (SHQuality, 2007). There are many ways to measure customer satisfaction, but process measurement needs to be improved (semi-structured interviews).
— Improvement: There is customer focus in the operation, benchmarking is utilized in development, there is a positive attitude toward education in the organization, and nursing work is developed together with higher education institutions. Issues concerning, for example, well-being at work, personnel, processes, and quality management could be developed (SHQuality, 2007). See also the Customers item below.

Contingencies:
— Customers: There is customer focus in the operation, but there are still some development needs; for example, service processes should be developed across the organizational boundaries, and the availability of services should be developed (SHQuality, 2007). Operation is based on customer needs, and the aim is to offer customers as high-quality services as possible in a cost-effective manner. The big question is how to shift the focus from the service and diagnosis of a single patient to the development of the service processes across organizational boundaries (semi-structured interviews). In Finland there is a lack of personnel (recruitment problems), but the number of multi-problem patients is increasing (Ryynänen et al., 2004).
According to the semi-structured interviews and the audit report (SHQuality, 2007), there is no TQM at the organizational level of the Vaasa Hospital District or the Central Hospital, but an organization-wide quality project is going on in the whole district. Some hospital units have their own quality manuals and QMSs because of the characteristics of those units, for example the heart unit, cleaning services, and the laboratories. The hospital is using the Social and Health Care Quality Service, SHQS, which is described in Table 3 below. It starts from self-evaluations in the units of the hospital; secondly, quality manuals are constructed in the units; and thirdly, the QMS of the whole hospital district is constructed in electronic form. From the contingency theory approach, the fit between the contingencies and the key characteristics of the QMS always has some positive outcomes and outputs. In this study, it is easy to see that the most important issue to improve in the hospital's operations is the processes between all the organizations offering services to the same patient in the field of social and health care. When comparing the earlier Table 1 and Table 3, it can be seen that the first phase is different in every list of steps for implementing a QMS. Perhaps the methods (SHQS, ISO, or others) are not so important in themselves, but by choosing one of the methods an organization can save time and make the implementation easier.

The development of the quality system is carried out partially at three levels at the same time:
1. Some units already have their own quality manuals and QMSs, and these systems will be coordinated with the SHQS, which is probably not very problematic because of the similarity of the common quality techniques (ISO 9000, EFQM, SHQS, etc.). The majority of the hospital units are developing their quality systems with the help of the SHQS.
2. At the hospital district and hospital level, the coordination of quality management takes place at the level of the hospital district.
3. Quality management between the organizations in the field of social and healthcare services needs to be improved. This means that cooperation between special health care, primary health care, social services, firms, and the non-profit organizations operating in the field is being developed. This kind of cooperation is mentioned in the quality strategy of the hospital district and is implemented in practice, for example, by re-organizing the emergency duty.

8. Conclusions
A QMS consists of different interrelating elements which aim at directing and controlling quality. The objective of this study was to consider how to develop a quality system in a hospital. This is done by answering the questions: what are the situational factors that should be taken into consideration when establishing a quality system, and what should be taken care of during the development process. This study focuses on a public healthcare organization, but the results of the study
Table 3. A Model for Developing a Quality System in a Hospital.

Implementation of the Social and Health Care Quality Service, SHQS (SHQuality, 2007):
1. Starting: Both management and all employees get to know the content of the SHQS evaluation criteria and the self-evaluation method.
2. Self-evaluation: Management and all employees compare their operations to the evaluation criteria defined beforehand.
3. Development: Choosing the most important areas for development at all levels of the organization on the basis of the self-evaluation, and systematic development of the functionality of the service system.
4. Preparing for the audit: Agreeing upon the material to be sent beforehand to the auditees, choosing the auditees, agreeing upon the timetables, and informing the whole organization of the practical issues concerning the audit.
5. External audit and reporting: The evaluation is based on the SHQS evaluation criteria.
6. Quality assurance: The separate quality council gives certification, according to the audit group's recommendation, for a certain time period if the hospital is operating according to the international quality criteria.
7. Maintaining the quality label: The organization continues development according to the principles of continual improvement. Maintenance is assured, for example, with the help of regular self-evaluation and internal and external auditing.

What is important in developing a quality system (according to the study material, see the Appendix):
— Developing and fostering quality management know-how while doing quality work, for example during process modeling.
— Concentrating more on the quality of service (availability of the service, etc.) instead of the care and diagnosis of a single customer/patient, and putting more emphasis on the patients' welfare services as a whole, in which the Vaasa Hospital District is only a single service producer. This would improve the quality and effectiveness of the services, the borders between professions would become lower, and there would be less suboptimization in the hospital.
— Managing the totality of operations.
— Customizing the QMS to suit the organization itself, for example, the language used in quality management.
— Motivation and commitment at every organizational level and between organizations.
— Doing quality work in a systematic way, on a daily basis, and thinking long term.
could be at least partly generalized also to private organizations, because the differences between these organizations may not be as big as they appear to be at first glance (see, for example, Rainey and Bozeman, 2000). Practically all public healthcare organizations cooperate so much with private firms that it may sometimes even be difficult to define whether a healthcare service is public or private. The
study concentrates especially on the Finnish, and thereby European, tradition of quality management by using mainly European examples.

This is a constructive study, in which we try to develop a model for the QMS of a public hospital. This is done from the contingency theory approach. The essence of the contingency theory paradigm is that organizational effectiveness results from fitting the characteristics of the organization to contingencies that reflect the situation of the organization. Contingencies include the environment, organizational size, and organizational strategy. The core commonalities among the different contingency theories are the following assumptions: (1) there is an association between contingency and organizational structure, (2) contingency change causes organizational structural change, and (3) fit affects performance (Donaldson, 2001, pp. 1–11; see also Sitkin et al., 1994 or Conti, 2006).

The study gives guidelines for quality management by constructing a model for the development of a quality system in a hospital. The results can be generalized to many countries because of the common roots behind the different quality management models used in practice. On the other hand, the results of this study can be generalized only partially, for example because of the sample of the study. As Lee and Baskerville (2003, p. 241) argue, there is only one scientifically acceptable way to establish a theory's generalizability in a new setting: the theory must survive an empirical test in that setting. Therefore, further studies concerning quality management from the contingency approach are encouraged. The studies could be both qualitative and quantitative.

This study has clear managerial implications. First, the model constructed in this study (see Table 3) could be applied to other hospitals and professional service organizations. In particular, remembering the list of important issues while developing the QMS in a hospital could be useful for every quality management project. Second, there is no universal way to develop a QMS; the system must always be customized to an organization by using one method (SHQS, ISO, EFQM, or something else) and maybe implementing TQM at the same time. By improving the fit between the QMS and the contingencies, that is, issues related to customers, an organization will probably improve its outputs and outcomes.

References
Beckford, J (1998). Quality: A Critical Introduction. London, New York: Routledge.
van der Bij, JD, T Vollmar and MCDP Weggeman (1998). Quality systems in health care: A situational approach. International Journal of Health Care Quality Assurance, 11(2), 65–70.
Callan, VJ, C Gallois, MG Mayhew, TA Grice, M Tluchowska and R Boyce (2007). Restructuring the multi-professional organization: Professional identity and adjustment to change in a public hospital. Journal of Health and Human Service Administration (Harrisburg), 29(4), 448–477.
Cassell, C and G Symon (1994). Qualitative research in work context. In Qualitative Methods in Organizational Research: A Practical Guide, C Cassell and G Symon (eds.), 1–13. London, Thousand Oaks, New Delhi: SAGE Publications.
Conti, T (2006). Quality thinking and systems thinking. The TQM Magazine, 18(3), 297–308. Conti, T (2007). A history and review of the European Quality Award Model. The TQM Magazine, 19(2), 112–128. Conway, M (2006). The subjective precision of computers: A methodological comparison with human coding in content analysis. Journalism and Mass Communication Quarterly, 83(1), 186–200. Duriau, VI, RK Reger and MD Pfarrer (2007). A content analysis of the literature in organization studies: Research themes, data sources, and methodological refinements. Organizational Research Methods, 10(1), 5–34. Donaldson, L (2001). The Contingency Theory. Thousand Oaks, London, New Delhi: Sage Publications. EFQM (2008). EFQM Introducing excellence. From the internet 11.9.2008: http://www. efqm.com/uploads/introducing english.pdf Feigenbaum, A (1983). Total Quality Control. Fortieth Anniversary Edition. New York, St. Louis, San Francisco, Auckland, Bogota, Caracas, Hamburg, Lisbon, London, Madrid, Mexico, Milan, Montreal, New Delhi, Paris, San Juan Sao Paulo, Singapore, Sydney, Tokyo, Toronto: McGraw-Hill. Foster, ST (2006). One size does not fit all. Quality Progress, 39(7), 54–61. Francois, P, J-C Peyrin, M Touboul, J Labarere, T Reverdy and D Vinck (2003). Evaluating implementation of quality management systems in a teaching hospital’s clinical departments. International Journal for Quality in Health Care, 15(1), 47–55. Garvin, DA (1988). Managing Quality. New York: Free Press. van den Heuvel, J, L Koning, AJJC Bogers, M Berg and MEM van Dijen (2005). An ISO 9001 quality management system in a hospital. Bureaucracy or just benefits? International Journal of Health Care Quality Assurance, 18(4/5), 361–369. Gummesson, E (2006). Qualitative research in management: Addressing complexity, context and persona. Management Decision, 44(2), 167–176. ISO 9000 (2000). Quality management systems. Fundamentals and vocabulary. Finnish Standards Association. Kast, FE and JE Rosenzweig (1970). Organization and Management: A systems approach. New York: McGraw-Hill. Keashly, L and JH Neuman (2008). Aggression at the service delivery interface: Do you see what I see? Journal of Management and Organization, 14(2), 180–192. Krippendorff, K. (2004). Content analysis: An introduction to its methodology. Thousand Oales: Sage. Kunkel, ST and R Westerling (2006). Different types and aspects of quality systems and their implications: A thematic comparison of seven quality systems at a university hospital. Health Policy, 76, 125–133. Lee, AS and RL Baskerville (2003). Generalizing generalizability in information systems research. Information Systems Research, 14(3), 221–243. Lee, S, K-S Choi, H-Y Kang, W. Cho and YM Chae (2002). Assessing the factors influencing continuous improvement implementation: Experience in Korean hospitals. International Journal for Quality in Health Care, 14(5), 383–391. Lindberg, E and U Rosenqvist (2005). Implementing TQM in the health care service: A four-year following-up of production, organizational climate and staff well-being. International Journal of Health Care Quality Assurance (Bradford), 18(4/5), 370–384.
Lukka, K (2003). The constructive research approach. In Case Study Research in Logistics, L Ojala and O-P Hilmola (eds.), pp. 83–101. Turku: Publications of the Turku School of Economics and Business Administration. Series B 1/2003. Magd, H and A Curry (2003). ISO 9000 and TQM: Are they complementary or contradictory to each other. The TQM Magazine, 15(4), 244–256. Miles, MB and AB Muberman (1994). Qualitative data anlysis: An expended sourcebook. Thousand Oaks: Sage. Nelsen, D and SE Daniels (2007). Quality glossary. Quality Progress (Milwaukee), 40(6) 39–59. Oakland, JS (1999). Total Quality Management. Text with Cases. Oxford, Auckland, Boston, Johannesburg, Melbourne, New Delhi: Butterworth-Heinemann. Rad, AMM (2006). The impact of organizational culture on the successful implementation of total quality management. The TQM Magazine (Bedford), 18(6), 606–625. Rainey, HC and B Bozeman (2000). Comparing public and private organizations: Empirical research and the power of the a priori. Journal of Public Administration Research and Theory (Lawrence), 10(2), 447–473. Rissanen, V (2000). Quality system based on the standard SFS-EN ISO 9002 in Kuopio University Hospital. International Journal of Health Care Quality Assurance, 13(6), 266–279. Ryyn¨anen, O-P, J Kinnunen, M Myllykangas, J Lammintakanen and O Kuusi (2004). Suomen terveydenhuollon tulevaisuudet. Skenaariot ja strategiat palveluj¨arjestelm¨an turvaamiseksi. Esiselvitys. Eduskunnan kanslian julkaisuja 8/2004. Tulevaisuusvaliokunta. Helsinki. S´anchez, E, J Letona, R Gonzalez, M Garcia, J Darp´on and JI Garay (2006). A descriptive study of the implementation of the EFQM excellence model and underlying tools in the Basque Health Service. International Journal for Quality in Health Care, 18(1), 58–65. Sitkin, SB, KM Sutcliffe and RG Schroeder (1994). Distinguishing control from learning in total quality management: A contingency perspective. Academy of Management. The Academy of Management Review, 19(3), 537–564. SFS-EN ISO 9000 (2001). Quality management systems. Fundamentals and vocabulary. Finnish Standards Association. SHQuality: Audit report of the Vaasa Hospital District. 17 September 2007. Vaasa (in Finnish). Sosiaali- ja terveyspolitiikan strategiat 2015 — kohti sosiaalisesti kest¨av¨aa¨ ja taloudellisesti elinvoimaista yhteiskuntaa (2006). Sosiaali- ja terveysministeri¨on julkaisuja 2006: 14. Sosiaali- ja terveysministeri¨o. Helsinki. Sower, WE, R Quarles and E Broussard (2007). Cost of quality usage and its relationhip to quality system maturity. International Journal of Quality & Reliability Management, 24(2), 121–140. Strategy of the Vaasa Hospital District 2003–2010. Vaasa: Vaasa Hospital District (2003) (in Finnish). Vaasa Hospital District (2007). Quality Policy and Quality Strategies of the Vaasa Hospital District. Vaasa: Vaasa Hospital District. (in Finnish). Yang, C-C (2003). The establishment of a TQM system for the health care industry. The TQM Magazine, 15(2), 93–98.
Appendix: The Conduct of the Content Analysis

The general procedure follows Thietart (2001, pp. 358–360); under each step, "In this study" describes how the step was applied here.

1. Collecting the data.
In this study: The data consist of semi-structured interviews and organizational documents.

2. Coding the data: As for any coding process, the text is broken down into units of analysis and then classified into the categories defined according to the purpose of the research.
In this study: The text is broken down into the following categories: setting up of the quality system; strategies, quality policy, measurement, and improvement; external customers; internal customers (personnel).

2.1. Defining the units of analysis: There are basically two types of content analysis, defined according to the units of analysis chosen: (1) lexical analysis analyzes the frequency with which words appear, and (2) thematic analysis adopts sentences, portions, or groups of sentences as its unit of analysis. The latter type is more common in organizational studies.
In this study: Thematic analysis is done by using sentences and groups of sentences as the units of analysis.

2.2. Defining the categories: Depending on the coding unit selected, the categories are usually described (a) either in the form of a concept that includes words with related meanings (for example, the category "power" could include words like strength, force, or power), or (b) in the form of broader themes (for example, competitive strategies), which include words, groups of words, or even whole sentences or paragraphs (depending on the unit of analysis defined by the researcher); (c) in certain cases, the categories may be assimilated to a single word; (d) finally, the categories can be characteristics of types of discourse. The main difficulty lies in defining the breadth of the selected categories, which must be related both to the researcher's objectives (narrow categories make comparative analysis more difficult) and to the materials used.
In this study: The categories are described in the form of themes concerning (a) sentences which handle the setting up of a quality system in a hospital, (b) certain characteristics of quality management, such as quality policy, strategies, measurement, and improvement, and (c) the key contingencies in the environment, namely the external customers.

Qualitative analysis and interpretation.
In this study: Interpretation of the fit between the contingencies and the QMS.
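As an illustration of the coding step above, the following minimal sketch assigns sentence-level units to the four categories by simple keyword matching; the keyword lists are illustrative assumptions, not the actual coding scheme used in the study.

```python
# Minimal sketch of sentence-level thematic coding into the four study
# categories. The keyword lists are illustrative assumptions only.
import re

CATEGORIES = {
    "setting up of the quality system": ["quality system", "shqs", "audit", "self-evaluation"],
    "strategies, policy, measurement, improvement": ["quality policy", "strategy", "measure", "improve"],
    "external customers": ["patient", "customer", "availability", "service process"],
    "internal customers (personnel)": ["personnel", "staff", "well-being", "education"],
}

def code_sentence(sentence):
    """Return the categories whose keywords occur in the sentence."""
    lowered = sentence.lower()
    return [name for name, cues in CATEGORIES.items()
            if any(cue in lowered for cue in cues)]

def code_text(text):
    coded = {}
    for sentence in (s.strip() for s in re.split(r"[.!?]+", text) if s.strip()):
        hits = code_sentence(sentence)
        if hits:
            coded[sentence] = hits
    return coded

sample = ("Process measurement needs to be improved. "
          "Service processes should be developed across organizational boundaries.")
for sentence, labels in code_text(sample).items():
    print(labels, "<-", sentence)
```

In the study itself, the categorization was done manually with a word processing program; a sketch like this only shows the mechanics of assigning text units to predefined categories.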
The focus of the analysis in this study has been on both latent and manifest meanings. There are numerous studies in which both latent and manifest dimensions are content analyzed, for example studies focusing on mission statements. The most common content analysis techniques are frequency counts, advanced features, and the qualitative approach, and the research design can be inductive, deductive, or both (Conway, 2006). A quality policy is the same kind of short document as a mission statement. In this study, we concentrate on the qualitative approach and use a mainly deductive research design.

Data used in this research:
• Semi-structured interviews during the spring of 2008 with:
◦ Quality manager, the Vaasa Hospital District
◦ Medical director, the Vaasa Hospital District
◦ Head of the Heart Unit, the Vaasa Hospital District
◦ Manager of the Vaasa Hospital District
◦ Chairman of the government of the Vaasa Hospital District

Themes in the semi-structured interview:
1. How would you define quality in your organization?
2. Why do we need quality?
3. You have implemented quality management in your organization especially with the help of SHQS/KKA. What kinds of issues belong to quality management generally/"broadly speaking"? How do you define quality management?
4. What is the most important thing in quality management?
5. What kinds of quality tools or techniques do you use?
6. What are the objectives for quality improvement in your organization? — The relationship between the quality objectives and the other objectives of the organization.
7. Could you name one practical example of good quality work in your organization?
8. What is the role of management in achieving the quality defined before?
9. What is the role of the personnel in achieving the quality defined before?
10. Is quality work/quality improvement more like everybody's normal work or separate development work?
11. What kind of evaluation is there in your organization as a part of quality management/quality control?
12. How do you evaluate customer satisfaction and utilize the information gathered?
13. How should we measure processes?
14. What is the meaning of the processes in producing high-quality services?
15. How should we improve the processes?
16. If you think about your organization from the quality management point of view, what are the strengths and weaknesses that come from your organization, and what are the opportunities and threats that come from the operational environment?
17. How well can we apply quality management developed in industrial organizations to public sector organizations?
a. Extremely badly — quite badly — quite well — extremely well
b. Why (explanation)?
18. Did I ask all the essential questions from the point of view of quality work, or is there something that I did not understand to ask and that is essential from your point of view?

Comments: The interviews were recorded and the duration of the interview varied between. The transcripts of the tapes were afterward categorized and a content analysis was conducted.

Biographical Note
Ville Tuomi is a researcher at the University of Vaasa in the Department of Production. He has also worked as a teacher, trainer, and consultant in the field of management, especially quality management.
Part II Business Process Information Systems
Chapter 5
Modeling and Managing Business Processes

MOHAMMAD EL-MEKAWY∗, KHURRAM SHAHZAD† and NABEEL AHMED‡
Information Systems Laboratory (SYSLAB), Department of Computer and Systems Sciences (DSV), Royal Institute of Technology (KTH)/Stockholm University (SU), Isafjordsgatan 39, Forum 100, 164 40 Kista, Sweden
∗[email protected]
†[email protected]
‡[email protected]

The purpose of this chapter is to present tools and techniques for modeling and managing business processes. For this, business process modeling is defined and classified according to two levels of detail. These categories are chained together with the help of a transformation technique, which is explained with the help of an example. As soon as the number of processes increases, they cannot be managed manually. This motivates the need for a software system called a business process management system (BPMS). The properties of a BPMS are explained, and the components of a BPMS, which support the necessary requirements of managing processes, are also presented, together with their advantages. Also, the major principles of business process management (BPM) are presented in this chapter.

Keywords: Business process management; process modeling; business process management system; managing process models; business models.

1. Introduction
The use of a business process has become an important way of representing the business and activities of an enterprise in recent years (Redman, 1998). "A business process is a set of coordinated tasks and activities, conducted by both people and equipment, that will lead to accomplishing a specific organizational goal" (BPM, 2008). The management of processes is a burning issue in research these days; a search for it in Google Scholar returns about 2,330,000 results for the last decade. This chapter aims at presenting a business process management system (BPMS) as a state-of-the-art technology for managing the processes of a business. In particular, the BPMS will be addressed from an operational perspective. Also, we describe business

† Corresponding author.
process modeling as one of the important dimensions of looking at a business, and we discuss how a BPMS can be used to manage the business of an enterprise. The rest of the chapter is organized as follows: Section 2 contains the goals of business process modeling, the classification of business process modeling, and the conversion of a business model to a process model. Section 3 contains the properties of a BPMS, its components, and the uses of a BPMS. Section 4 contains the principles that can be used to integrate information systems (IS) and business process management.

2. Business Process Modeling
A process is a collection of activities with clearly identified inputs, outputs, and a specific ordering of activities, whereas business process modeling is the act of representing the current state or proposed state (i.e., "as-is" or "to-be") of the functional activities of an enterprise (Frye and Gulledge, 2007). Modeling business processes enhances the analyzing and planning capabilities of an enterprise, and identifying relationships between the processes of an enterprise increases the understandability of the enterprise architecture and of the relationships between the elements of a business, independent of departmental boundaries. The main goals of business process modeling are (Curtis et al., 1992; Bider and Komyakov, 1998; Endl and Meyer, 1999):
• To support business process re-engineering to deal with immense market competition.
• To represent a business in order to understand the key mechanisms for analysis and improvement.
• To provide a base for collecting business and user requirements and information system support.
• To facilitate suitable strategies for the implementation of software packages.
• To facilitate the alignment of the business and the information technology (IT) infrastructure.

2.1. Business Process Modeling: Classification
Business process modeling involves numerous techniques and methods (Curtis et al., 1992; Dean et al., 1994; Plexousakis, 1995) to analyze deeply and further scrutinize business processes (Luo and Tung, 1999). Luo and Tung define a business process as "a set of related tasks performed to achieve a defined business outcome" (Luo and Tung, 1998) and classify business processes into three basic elements: entities, objects, and activities. On the other hand, Denna et al. (1995) stated three different basic types of business process in an organization: acquisition/payment, conversion, and sales/collection, where the conversion process refers to converting goods or services from one form to another.
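To make the definition of a process given above concrete, the following minimal sketch captures a process as coordinated activities with inputs, outputs, and an explicit ordering; the class and field names are illustrative assumptions, not a notation prescribed by the chapter.

```python
# Minimal sketch of a business process as coordinated activities with inputs,
# outputs, and an explicit ordering. Field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    inputs: list[str] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)
    performed_by: str = "unspecified"   # people and/or equipment

@dataclass
class BusinessProcess:
    name: str
    goal: str
    activities: list[Activity] = field(default_factory=list)
    ordering: list[tuple[str, str]] = field(default_factory=list)  # (before, after)

    def successors(self, activity_name: str) -> list[str]:
        return [after for before, after in self.ordering if before == activity_name]

# A toy sales/collection process, one of the three basic types named above.
order_to_cash = BusinessProcess(
    name="sales/collection",
    goal="deliver the ordered service and collect payment",
    activities=[
        Activity("receive order", inputs=["order form"], outputs=["confirmed order"]),
        Activity("deliver service", inputs=["confirmed order"], outputs=["delivery note"]),
        Activity("collect payment", inputs=["invoice"], outputs=["payment"]),
    ],
    ordering=[("receive order", "deliver service"), ("deliver service", "collect payment")],
)
print(order_to_cash.successors("receive order"))
```

Such a structure is deliberately simplified; full modeling notations such as UML activity diagrams, discussed later in the chapter, carry far more detail.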
Business users and technical users (IT users) have a different understanding of a business due to their different abstractions of views, levels of detail, and concerns (Andersson et al., 2008). Consequently, business and IT users do not understand the same model. Therefore, business models are developed for business users and process models are developed for IT users. From this, it can be concluded that, for an enterprise, modeling is an integration of business modeling and process modeling (Bergholtz et al., 2007).

2.1.1. Business model
A business model represents the exchange of a value between business partners. The values can be resources or services, and the business partners are the actors participating in the value exchange. A business model is also known as an economic model that depicts a value exchange between partners to represent the "what" of the enterprise (Andersson et al., 2006). A business model gives an abstract view (business view) of activities by identifying the values, activities, and partners, represented by resources, exchanges, and agents in the business model. Typically, a business process consists of three components (Lin et al., 2002): customers, ongoing activities, and values that span across departmental boundaries.

e3-value model. e3 value is a formalization of a business model to represent an abstract view of a business. It has value objects and value exchanges, as shown in Fig. 1. An enterprise governs some resources and the rights on those resources in a business scenario. Firstly, the value assigned by an actor or a group of actors to a given resource is highlighted. Secondly, the use of a resource by an actor would emphasize the rights on that particular resource.

Figure 1. A generic e3-value diagram: two actors exchanging resources/services for value (Kimbrough and Wu, 2004).

2.1.2. Process model
A process model represents the detailed activities of a business and the relationships between them. Activities are the detailed operational procedures taking place in a business. A process model is a model that depicts the operational and procedural aspects to represent the "how" of the enterprise (Andersson et al., 2006). A process model gives a detailed view (low-level view) of activities by identifying the starting point, the relationships between activities, the prerequisites of the activities, and the finishing point. We have some process modeling perspectives subject to
software engineering models. Researchers have deduced the four most common perspectives (Curtis et al., 1992):
Functional view → What activities are being performed? What data are necessary to link these activities?
Behavioral view → When will these activities be performed? How will they be performed?
Informational view → Representation of the process.
Organizational view → Where will these activities be performed? Who will perform them?

2.2. Chaining the Business and Process Models
Business and process models are different views of the same business, so they must be related and derivable from each other. Also, the construction of a business model from a process model assists (a) in capturing the essential concepts of a business process (Lin et al., 2002) and (b) in representing activities and related elements in a structured way (Bider and Khomyakov, 1998). To construct a process model from a business model, a chaining methodology has been proposed by Andersson et al. (2006). The chaining methodology is a four-phase approach that takes the e3-value model as input. Here, we explain the chaining methodology (Andersson et al., 2006) with the help of a short example. The following are the main phases of deriving a process model from a business model.

2.2.1. Input: e3-value model
Phase 1: Explicitly model the value exchange components and use arrows to represent each transfer between actors.
Phase 2: Explicitly model the evidence document component and use arrows to represent each transfer between actors.
Phase 3: Map the e3 model (value transactions and arrows) to open-EDI (ISO/IEC, 2007) phases and add the relevant process.
Phase 4: Select the appropriate pattern and apply it (to the processes) to identify the internal structure of the process (Fig. 2).

2.2.2. Input
To construct a process model starting from a business model, let us consider a small example of an Internet service provider (ISP). On payment, the ISP provides an Internet service to its customer. We start by laying down a simple e3-value model: an actor (the ISP) provides custody (of the Internet service) to another actor (the customer) and gains some value (money) in response (Fig. 3).
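To make the ISP example concrete, the following minimal sketch encodes the e3-value exchange of Fig. 3 as data; the representation is an illustrative assumption and not the e3-value tool's own format.

```python
# Minimal sketch encoding the ISP e3-value exchange (Fig. 3) as data.
# The representation is an illustrative assumption, not a standard e3 format.
from dataclasses import dataclass

@dataclass(frozen=True)
class ValueTransfer:
    provider: str
    receiver: str
    value_object: str   # a resource, a service, or money

isp_model = [
    ValueTransfer(provider="ISP", receiver="Customer", value_object="Internet service"),
    ValueTransfer(provider="Customer", receiver="ISP", value_object="Money"),
]

def reciprocal(transfers):
    """Check that every actor both gives and receives something of value."""
    givers = {t.provider for t in transfers}
    receivers = {t.receiver for t in transfers}
    return givers == receivers

print(reciprocal(isp_model))  # True: both actors give and receive a value object
```

The reciprocity check mirrors the basic e3-value idea that every value transfer is part of an economic exchange between actors.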
Figure 2. Constructing a process model from a business model: the phases of chaining the business and process models are model value exchange, model evidence document, mapping e3 and open-EDI phases, and select and apply pattern.
Figure 3. e3-value transfer for the ISP: the ISP provides the Internet service and the customer provides money.
Figure 4. Highlighting the custody factor: custody of the Internet service is added to the e3-value exchange of Internet service and money.
Phase 1: Explicitly model the value exchange components and use arrows to represent each transfer between actors. In the first step, analyze the exchange of values and determine the custody factor against value exchanges. Explicitly model the custody factor of the Internet service by using arrows and lines. The number of arrows shown in Fig. 4 represents the transfer of custody from one actor to another. In this way, the transformation of the custody element can be clearly seen. The flow of custody can also be represented as a dashed line. Phase 2: Explicitly model the evidence document component and use arrows to represent each transfer between actors.
Figure 5. Highlighting evidence along with the custody factor: the payment certificate is added to the e3-value model next to the custody of the Internet service.
When the Internet service is up and running and the money has been paid, the evidence document (payment invoice) can be transferred to the payer. In our case, the ISP can provide a payment invoice/certificate to the customer to confirm that the payment has been received. Adding the payment invoice and redrawing the e3-value model are therefore shown in Fig. 5. Similarly, several actors may be involved in transferring the evidence document from one actor to another.

Phase 3: Map the e3 model (value transactions and arrows) to open-Electronic Data Interchange (EDI) phases and add the relevant process.
Before mapping the e3-value model to the open-EDI phases, prior knowledge of the open-EDI phases of a business transaction is essential. Popular and standard definitions of the phases are (Gregoire and Schmitt, 2006):
• The "planning" phase is about decisions on the activities to be performed. This phase cannot be mapped to the example e3 model.
• The "identification" phase is about selecting and linking the partners of the transaction. This phase cannot be mapped to the example e3 model.
• The "negotiation" phase is about creating a common agreement between the transaction participants. This phase is mapped to the value exchange (Internet service and money) of the e3 model.
• The "actualization" phase is about realizing the actual transaction. This phase is mapped to the custody and the payment certificate of the example e3 model.
Once the e3 model has been mapped to the open-EDI phases, add a process to each mapping, i.e., a negotiation process for the Internet service, a negotiation process for the money, an actualization process for the custody of the service, and another actualization process for the payment certificate.
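The Phase 3 mapping can be illustrated with a small sketch that assigns each element of the extended e3-value model to an open-EDI phase and derives one process per mapping; the dictionary-based representation is an assumption made for illustration only.

```python
# Minimal sketch of Phase 3 for the ISP example: map each element of the
# extended e3-value model to an open-EDI phase and derive one process per
# mapping. The representation is an illustrative assumption.
OPEN_EDI_PHASES = ["planning", "identification", "negotiation", "actualization"]

extended_model = [
    {"element": "Internet service (value exchange)", "phase": "negotiation"},
    {"element": "Money (value exchange)",            "phase": "negotiation"},
    {"element": "Custody of Internet service",       "phase": "actualization"},
    {"element": "Payment certificate (evidence)",    "phase": "actualization"},
]

def derive_processes(model):
    """Create one named process per mapped element, grouped by open-EDI phase."""
    processes = {phase: [] for phase in OPEN_EDI_PHASES}
    for item in model:
        processes[item["phase"]].append(f"{item['phase']} process for {item['element']}")
    return processes

for phase, procs in derive_processes(extended_model).items():
    print(phase, "->", procs or "not mapped in this example")
```

As in the text, the planning and identification phases remain empty for this example, while the negotiation and actualization phases each yield two processes.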
Figure 6. Properties of a BPMS: define process; store PM and manage; goals definition; process manipulation; relationship b/w BM and PM; process and goal alignment; process simulation; process execution; monitor and performance evaluation; analysis and management.
Phase 4: Select the appropriate pattern and apply it (to the processes) to identify the internal structure of the process.
Patterns play a central role in the development of a process model; patterns, presented as Unified Modeling Language (UML) activity diagrams, are stored and preserved for further use. A UML activity diagram contains the details of the internal structure of each process. For each process in the extended e3-value model, select the appropriate pattern and apply it to find the low-level details and the internal structure of each process (Gregoire and Schmitt, 2006). In our example, the payment pattern (see Fig. 6) can be used from the pattern pool to identify the internal structure of the money payment process.

3. Business Process Management (BPM)
The scope of a business process can be restricted to a department, but it may also extend across more than one department. Departmental information systems therefore cannot manage the processes that span more than one department. BPM technology comes as a solution: it provides the tools, technologies, and infrastructure for developing, manipulating, executing, managing, and simulating business processes. According to the Association of Business Process Management Professionals (ABPMP, 2009), Business Process Management (BPM) is a disciplined approach to identify, design, execute, document, monitor, control, and measure both automated and nonautomated business processes to achieve consistent, targeted results consistent
with an organization’s strategic goals. BPM involves the deliberate, collaborative and increasingly technology-aided definition, improvement, innovation, and management of end-to-end business processes that drive business results, create value, and enable an organization to meet its business objectives with more agility (Lusk et al., 2005).
It is not practical to build, manage, and control the business processes of an enterprise manually because an enterprise may have a huge number of processes, which can be large in size and complex in nature (Aalst et al., 2003 and Kotelnikov, 2009). In addition to that, the manual use of business processes hinders the optimization of business processes and their alignment with the enterprise goals. Therefore, we need a software system that has the capability to store, share, and execute business processes. Formally, a software system with these capabilities is called a BPMS, and is defined as: A business process management system is a software system that is used to develop, store, simulate, manage, optimize, execute, and monitor the business processes of an enterprise.
As stated in the definition, by using a BPMS an enterprise can develop its process models and store them for further use. Once the development of the process models has been completed, the BPMS can execute them. Moreover, a BPMS has the ability to monitor and analyze business processes. Furthermore, it handles all the users' requests related to business processes.

3.1. Properties of a BPMS
A number of necessary capabilities that a BPMS should have for implementing, storing, managing, and executing business processes are elaborated in this section. These properties span the complete life cycle of BPM, from modeling and simulation to analyzing and optimizing (Muehlen and Ho, 2006). The choice of properties is based on an extensive survey of book chapters and of journal, conference, workshop, and white papers published or presented in reputable, high-impact forums. From the survey, a set of all possible properties of a BPMS was prepared. Finally, the obtained properties were filtered by means of a prominent BPM life cycle (Muehlen and Ho, 2006; Netjes et al., 2006). The main properties of a BPMS are given below (Fig. 6).

Process Definition: This is the potential of a BPMS to model and develop business processes. The purpose of this property is to equip users with the ability to add and simulate both process and business models.

Storage and Management: This is the potential of a BPMS to preserve process models and business models and to administer them. The purpose of this property is to provide administrative control over models.
Process Manipulation: This is the ability of a BPMS to facilitate the insertion, updating, deletion, and retrieval of process models. The purpose of this property is to add the ability to implement business processes.

Model Relationship: The role of a BPMS is to facilitate the development of relationships between business and process models. The purpose of this property is to interoperate and relate process models, so that business logic can be developed.

Goal Definition: The role of a BPMS is to define and store the business objectives of an enterprise. The purpose of this property is to make the system more objective-oriented.

Process and Goal Alignment: The role of a BPMS is to align process models with the goals of an enterprise. The purpose of this property is to elicit clearly the purpose of each process and to reflect explicitly which process contributes to the achievement of which goal.

Process Simulation: The role of a BPMS is to simulate process models. The purpose of this property is to prepare for real-life changes and risk management.

Process Execution: The role of a BPMS is to execute the processes of an enterprise. The purpose of this property is to make the process-based system functional.

Monitoring and Performance Evaluation: The role of a BPMS is to monitor the processes that are being executed and to evaluate their performance against a predefined set of parameters. The purpose of this property is to give control over process execution and the optimal execution of processes.

Analysis and Management of Processes: The role of a BPMS is to analyze and manage the processes. The purpose of this property is to keep the decision maker informed about the capabilities of process-based systems and their potential.

Figure 6 shows the dependency relationships between the properties of a BPMS, i.e., the initiation of a property depends upon the completion of its prerequisites. Graphically, the property at the tail of an arrow is the prerequisite of the property at the head of the arrow. For example, the prerequisites of the property "process and goal alignment" are defining the process, storing and managing the process model, and goal definition; these prerequisite relations are sketched in code after this paragraph.
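The prerequisite relations just described can be read as a small directed graph. In the following sketch, only the prerequisites of process and goal alignment are taken from the text; the remaining edges are assumptions added for illustration, and a topological ordering gives one admissible sequence in which the capabilities can be put in place.

```python
# Sketch: the prerequisite relations among BPMS properties as a directed graph.
# Only the prerequisites of "process and goal alignment" are stated explicitly
# in the text; the edges marked "assumed" are illustrative assumptions.
from graphlib import TopologicalSorter

# node -> set of prerequisite properties
prerequisites = {
    "store and manage models":    {"process definition"},                 # assumed
    "goal definition":            set(),
    "process and goal alignment": {"process definition",
                                   "store and manage models",
                                   "goal definition"},                    # from the text
    "process manipulation":       {"store and manage models"},            # assumed
    "process simulation":         {"store and manage models"},            # assumed
    "process execution":          {"store and manage models"},            # assumed
    "monitoring and performance": {"process execution"},                  # assumed
    "analysis and management":    {"monitoring and performance"},         # assumed
}

order = list(TopologicalSorter(prerequisites).static_order())
print(order)  # one admissible order for introducing the capabilities
```

A check like this can be useful when planning a BPMS rollout, since it makes explicit which capabilities must already be in place before another can be initiated.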
Figure 7. Architectural components to meet the requirements of a BPMS: modeling interface, dashboard and management tools, metadata engine and metadata repository, process model and business model repositories, rules repository, rule engine, data processor, buffer manager, transaction manager, execution engine, activity monitor, process log, and model simulation, serving process developers and process users.
3.2. Components of a BPMS
To obtain the properties mentioned in the preceding section, an architecture for a BPMS is proposed in Fig. 7 (with major modifications to Vojevodina, 2005). In the current section, we present the necessary functionalities of the components of the proposed BPMS architecture. We also illustrate the relationship between the properties (discussed in the preceding section) and the proposed components.

Modeling Interface: This component is used to develop a process model. It is a graphical tool that represents processes and their activities with the help of graphical notions. The sequence of activities and the information flow between these activities is represented either by sequential activities, parallel activities, or loops between activities. Moreover, similar to computer-aided software engineering (CASE) tools, this component provides a drag-and-drop facility to develop a process model and the relationships between processes by using predefined constructs. This component delivers the first property (process definition) of a BPMS.

Repositories: These are the storage spaces used to store metadata, process models, business models, business rules, and process execution logs. They are a vital component of a BPMS that takes care of all the storage-related issues and saves the mass of process models produced by the modeling interface. Repositories are dynamic in the sense that their contents can be manipulated by editing process models and business models. As soon as the models are modified, their corresponding metadata are stored in the metadata repository, the rules are manipulated if required, and the transaction is stored in the process log. This component partially delivers the second property (storage and management) of a BPMS.
Simulation Components: As explained by its name, this component is used to simulate business processes (process and business models). It is used to simulate the real life changes in business processes to identify bottlenecks that can be faced during implementation (Schiefer et al., 2007). By the simulation of business issues, many other things like performance and flexibility can be predicted. While using simulation tools, one can execute a process to examine its effects on other processes. The simulation results can greatly contribute to correcting real world changes, reducing the risk of implementation and increasing the ability to deploy new processes in production quickly. Activity Monitor: This component is responsible for ensuring the integrity of process models and it is used to monitor activities when a process is under execution. With the help of this component, supervisors and administrators can monitor the performance of processes by the parameters for executing, manipulating, and retrieving processes. Examples of these parameters are correct execution, consistent manipulation, and reliable retrieval, respectively. Consequently, corrections and modifications can be decided. This component partially delivers the second last property, “monitoring and performance evaluation,” of a BPMS. Data Processor: On receiving a data manipulation request, this component directly interacts with the repositories (process and business model repositories) and performs the necessary actions. It comes into action when a model is manipulated (inserted/updated/deleted), retrieved, executed, simulated, or monitored. To summarize, for all actions on process models, the data processor comes into play to ensure interaction with repositories and their consistency. Rules Engine: This component interacts with the business and process rules’ repository. It also ensures the implementation of all rules, e.g., during execution, if a process violates a business rule, the rule engine intercedes to stop the process execution. Another example could be, if a rule is violated by the data processor (during manipulation), the rule engine imposes no-manipulation of the process/business model. It ensures the correct information flow and consistency of the process models with their corresponding rules. Buffer Manager: This component handles the process models that are available in the memory during execution and it also takes care of the operations of the process log. It is then the responsibility of the buffer manager to ensure the consistency of the log and to handle blocks of pages by making them available in the cache if they are not available. An advantage of its use is that it minimizes the number of disc-accesses by finding the answer to the request from information available in the memory. Transaction Manager: This component, collectively with the buffer manager and process log, is responsible for the consistency of the process model repository. It ensures the concurrent execution of processes and the atomicity of processes during execution and during manipulation. In the case of system failure, incomplete
processes are rolled back, whereas completed (but uncommitted processes) are committed. By this component, a BPMS is made a fault-tolerant system. Metadata Engine: During execution, this component interacts with the metadata repository and provides the needed metadata to the data processor. During the process development phase of processes, this component acquires its desired metadata and stores them in a metadata repository. On process retrieval, the metadata engine uses the metadata to make the retrieval faster. Similarly, in the case of an update, metadata related to the process are collected, created and stored, and whenever required update it. Execution Engine: This component is responsible for the processes’ execution and the execution of process-manipulation requests from users. It also provides an environment for the execution of a process and activities. It provides a control mechanism for executing a business process from start to end and it manages activities from inside. During the process execution, it also manages the states of a process and the state of each process instance. It determines the process flow, keeps a record of the process output and gives it as an input for other processes. Management Tools: This component is used by the administrator to manage and control all the functionalities of a BPMS. It allows grant and revoke access rights, functionalities of users, and monitoring of different components of the BPMS. Using these components, administrators can view tasks and their associated properties. BPM Dashboard: This component provides an interface to interact with a BPMS. All the functionalities to be performed by a user are available on the dashboard. It can be customized to define the access rights of each user (Fig. 8). Figure 8 presents a BPMS as a different arrangement of components. This arrangement presents the way in which a manipulation request to a BPMS is processed. As shown above, a user interacts with the system by using a dashboard; by using a dashboard, a user can manipulate, simulate, or execute a process. The request (R) is parsed and optimized by the query engine and forwarded to the activity monitor. The activity monitor registers the request R and forwards it to the execution engine. It is the responsibility of the activity monitor to keep on monitoring the progress of R and inform the transaction manager to ensure consistent completion of the registered request. With the help of the rule engine, metadata manager, data engine, and buffer manager, the execution engine ensures the correct execution of a request. 3.3. Top 10 Advantages/Benefits of a BPMS Employing a BPMS has a number of advantages. It not only improves organizational efficiency, but it also increases control over process models by providing an integrated view of data, transparency of the execution of processes, and the addition of
Figure 8. Layered approach to BPMS functionalities: the BPM dashboard, query parser and optimizer, activity monitor, and execution engine sit above the metadata manager, data engine, rule engine, and buffer manager, which access the metadata, rules, business model, and process model repositories and the process log.
agility to the business, and process refinement. The use of a BPMS has qualitative as well as quantitative benefits. It has been said that "BPM has enabled organizations to report 10% to 15% return rates through increased efficiencies and staff time reductions, among other benefits" (http://www.cstoneindy.com/resources/articles/usingbpm/). The following are the major advantages of using a BPMS.
1. The correct development, implementation, and management of a BPMS increase forecast accuracy through analysis and process mining (Alves de Medeiros and Günther, 2005). Through process mining, the information available in event logs can be extracted and used to forecast accurately the behavior of a process to be executed; a small sketch of this idea follows the list.
2. A BPMS ensures process standardization. "Standardization is a set of methods and conditions that makes possible repeated high performance" (http://www.argi.com.my/whatispage/processStandard.htm).
3. A BPMS ensures optimum resource utilization and improved productivity. BPM allows tremendous efficiency gains by streamlining each process end to end (Frye and Gulledge, 2007). "BPM helps optimize and improve business performance by streamlining each process end-to-end" (http://www.cstoneindy.com/resources/articles/using-bpm/).
4. The use of a BPMS improves process control, because activities can be monitored and processes can be continuously improved by using a BPMS.
5. Process simulation is made possible, and it has its own advantages. Simulation is a very important stage in the optimization of processes. By using simulation, the quality of process design can be improved, production capacity can be increased, the cost of experimenting in the real world can be decreased, and real-world changes can be tested and experimented with for improvement (Serrano and Hengst, 2005).
6. A BPMS provides an enterprise process view (integrated view). It is also called centralization of data. Data about each and every transaction are logged and can be retrieved when required. Therefore, it is possible to analyze accurately what happened.
7. One of the main advantages of a BPMS-based business process solution is that it brings agility to a business (Silver, 2008).
8. One of the key advantages of BPM lies in its ability to align business processes better with enterprise goals. In this way, the influence of each business process can be measured.
9. A BPMS makes a business process absolutely transparent and greatly improves visibility and efficiency, i.e., bottlenecks and delays, as well as problem areas at each stage, can be seen and removed (http://www.enjbiz.com/BPM/benefits.html).
10. The initial configuration and design exercise, coupled with the data that emerge after running processes for some time, can allow the refinement of processes.
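As a minimal illustration of the process-mining idea mentioned in advantage 1, the following sketch derives activity frequencies and direct-succession counts from an event log; the log format and its contents are hypothetical.

```python
# Minimal sketch of process mining over an event log: count how often each
# activity occurs and which activity directly follows which. The event log
# (case id, activity) pairs are hypothetical and assumed ordered by time.
from collections import Counter

event_log = [
    ("case-1", "receive order"), ("case-1", "check credit"), ("case-1", "ship"),
    ("case-2", "receive order"), ("case-2", "ship"),
    ("case-3", "receive order"), ("case-3", "check credit"), ("case-3", "reject"),
]

activity_counts = Counter(activity for _, activity in event_log)

# Group events per case, then count direct successions within each trace.
cases = {}
for case, activity in event_log:
    cases.setdefault(case, []).append(activity)

transitions = Counter()
for trace in cases.values():
    transitions.update(zip(trace, trace[1:]))

print(activity_counts.most_common())
print(transitions.most_common())  # frequent hand-overs hint at the real process flow
```

Real process-mining tools go much further (discovering whole process models, checking conformance, and predicting behavior), but even these simple counts show how the centralized logging of a BPMS supports analysis and forecasting.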
4. Integration of Business Processes with IS In the dynamic business environment of the 21st century, fast-changing business strategies and continuously evolving technology are the norms. Having an IT strategy that is out of tune with the business processes is even more harmful to the company than not having one at all. Top management must always take the necessary steps to keep the business and IT processes and strategies of their companies running side by side or aligned, instead of having them conflicting or not meeting at all. Without this alignment, the company will not function in a competitive environment. It will be overtaken by more flexible competitors, which can jump at opportunities stemming from the continuous introduction of new technology (Luftman, 2004). Additionally, without a clear image of the “as-is” environment of different processes in the company and marketplace, the inbound and outbound components of the organization are most likely to be affected. The result can be ineffective planning, weak governance, and wasted IT resources (Camponovo and Pigneur, 2004). It is then no surprise that 54.2% of the Information Systems Managers in the Critical Issues of Information Systems Management (CIISM) report claim the importance of information systems to business processes. In the same report, the alignment takes the second place of the important factors that most contribute to the success of an organization (Pereisa and Sousa, 2004). 4.1. What is Required to Integrate? Several researchers (Beeson et al., 2002; Paul and Serrano, 2003 and Versteeg and Bouwmann, 2006) promote the value of integrating the business processes with the
information systems of an organization. Information systems play an important role not only in helping an organization achieve its business goals; they also help in creating innovative applications and competitive intelligence. To integrate business processes with IS, both the IT and business teams should have a common understanding and view of their aim and the available resources. Here, we identify the most important aspects highlighted by different researchers (Versteeg and Bouwman, 2006; Trienekens et al., 2004; Laudon and Laudon, 2007) in the preparatory phase of integrating business processes with IS. It has been claimed that integration success can be achieved when attention is given to: (a) understanding the "current system," (b) "people" as actors rather than resources, and (c) "process relationships" in the new system through enterprise architecture.
4.1.1. Quality of information as a core for understanding the current system
As organizations are heavily dependent on information today, information is stored in many different ways. It is therefore necessary to evaluate its quality to determine its value for a business activity. Because information is stored in different systems that together form the information systems, it is important to determine the quality of these information systems before designing business processes. Information greatly contributes to understanding the current system by creating data classes over different parts of an organization. This helps in collecting, fixing, studying, and analyzing every part separately and results in a guideline for identifying the process.
4.1.2. People are actors rather than resources
In the integration between business processes and information systems, people should be seen as actors, not as resources. Employees in the same department do not simply play the role explicitly assigned to their department or process. They should act as a communication link between processes, with affiliation to, but not bias toward, their department. Only in some highly managerial or structured processes, with their formal delegations, can people be considered as resources. Otherwise, room should be left for informal communication processes in which workers interact.
4.1.3. Enterprise architecture
In today's world, characterized by dynamic and rapid changes, managers have sought new ways to understand the turbulent environment of different products, customers, competitors, and technologies. Enterprise architecture is one important way of arranging an organization with its complex components. Enterprise architecture, as a term, refers to a comprehensive arrangement that links the involved departments and provides a complete view of an organization. As organizations develop different solutions, they form their systems' infrastructure through years of work. As a result, organizations seek more control and
order in their environment. Enterprise architecture fulfills the need for such control by representing a picture of the organization in which all elements are related and can be adjusted. Enterprise architecture also organizes the work and responsibilities between different actors and departments in the organization, as well as identifying suitable areas where information systems can effectively support and cooperate with business needs (Open Group, 2003).
4.2. Principles of Managing Business Processes
In this part, we provide a list of principles that are important to consider when managing business processes. The list is provided as a framework, which we developed based on different literature (Anupindi et al., 2004; Malinverno and Hill, 2007; Armistead, 1996).
4.2.1. Obtaining a process champion and forming the process team
A process champion can be recognized as the business owner or manager. Obtaining a champion for a process is key to the successful implementation of its plan. At the initial planning phase of any project run in an organization, the key role of the process champion is to carry the main responsibility for the whole process, guiding a process team that can operate largely on its own. Whether the process champion is a small team or a single individual, the responsibility has to be established from beginning to end to avoid the reappearance of boundaries in processes. The process champion should have a compelling vision of the "to-be" state of his process and all related processes. He should have the credibility and reputation to exert influence across the various areas that are impacted by the activities of the process, and therefore the ability to communicate his vision to all organizational levels. This principle is even more important in companies that manage supply chains, where integrating the whole process from procurement to delivery adds to success. The process owner or champion is responsible for the current business value and integrity of the process design across the functional and organizational boundaries the process crosses. In addition to his own responsibilities, the champion is responsible for forming his team from current employees, by switching people among departments, or by hiring new ones. The team should include a system architect who can design different alternatives for the process, its effects, and its relation with other processes. The champion of a process is responsible for establishing a joint process understanding for his team and, moreover, for ensuring their commitment toward the process's objectives and goals.
Figure 9. Business process mapping (hierarchical decomposition of a business plan into core processes such as customer order, process components such as operations and demand management, activities, and tasks).
4.2.2. Understanding the "as-is" situation of the business and the new process
To understand the activities and processes in the workflow of an organization, the responsible team has to understand the current "as-is" situation of their own and the company's processes. They also have to monitor the development of the business process by mapping high-level processes and considering them as core processes. Every core process can be broken down into subprocesses containing detailed information related to different operations at several levels below the core process. There, the project deals with information about input and output variables such as time, cost, and customer value: "who does what and why?" For process mapping, there are different hierarchical methods for displaying processes that help identify performance measures and opportunities for improving the business process. Figure 9 shows an example of a hierarchical representation of a business process for electronic goods. This process mapping and hierarchy help the team to analyze the different components and levels of the process, which consequently helps in allocating costs and other associated resources to the activities at the process's levels (a small illustrative sketch follows).
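As a minimal, purely illustrative sketch of such a hierarchy, the following Python fragment represents a process tree loosely modeled on the labels of Figure 9 and rolls task-level costs up to the component and core-process levels. All names and cost figures are hypothetical.

# A hypothetical process tree loosely modeled on the labels of Figure 9:
# a core process decomposed into process components, activities, and tasks,
# with costs recorded at the lowest level.
PROCESS = {
    "name": "Customer order",  # core (high-level) process
    "children": [
        {"name": "Operations", "children": [
            {"name": "Preparation", "children": [
                {"name": "Pick items", "cost": 120.0},
                {"name": "Pack items", "cost": 80.0},
            ]},
            {"name": "Transport", "cost": 450.0},
        ]},
        {"name": "Demand management", "children": [
            {"name": "Customer service", "cost": 200.0},
            {"name": "Contract finalization", "cost": 150.0},
        ]},
    ],
}

def rolled_up_cost(node):
    # A leaf returns its own cost; an inner node sums the costs of its children.
    if "children" not in node:
        return node.get("cost", 0.0)
    return sum(rolled_up_cost(child) for child in node["children"])

def print_costs(node, level=0):
    # Print every level of the hierarchy with the cost allocated to it.
    print("  " * level + node["name"] + ": " + format(rolled_up_cost(node), ".2f"))
    for child in node.get("children", []):
        print_costs(child, level + 1)

print_costs(PROCESS)  # the core process totals 1000.00

The same structure can carry other resources (time, headcount) so that each level of the map reports the measures attached to the levels below it.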
4.2.3. Linking related processes
When the team members understand a process and its components, they should start identifying other processes related to their own. To achieve the process's goals, the team members should relate their process to other processes within their organization or, in the organization's supply chain, to customer- or supplier-related processes. This relation is important in defining the flow of information, physical and non-physical resources, and people between processes. Additionally, it may be necessary to add different values concerning the organization, customers, or suppliers. The question is: what hindrances and problems arise between processes through the unnecessary intervention of people, information, or materials? Business processes should not include unnecessary activities, and such activities need to be identified when things go wrong. Linking is about connectivity and understanding the relationships between the factors that produce good results. Some may doubt the importance of this step, but we claim that, without knowing the relationships between processes, the work within a company can fairly be perceived as chaotic and unpredictable. It also appears out of control, which can eventually degrade process efficiency, affect product quality, lead to poor customer service, and finally wipe out profits.
4.2.4. Tuning the management style (oppositional management)
Different styles of management can be adopted by a manager or the champion of a process. These styles oppose each other, but in some cases they need to be adopted together, or at least balanced. Such functional opposition can be considered a kind of trade-off relationship within a process or among different processes. It is helpful and constructive when it is formed from a cross-functional background, because it contributes to making decisions with a clearer view of the best balance. Organizations do not necessarily dismantle their functions when they move toward BPM, either because they are afraid of taking such a huge step in one shot or because they try to limit the loss of the functions' characteristics inside a business process-based structure. That is why addressing oppositional management styles is sometimes necessary when organizations move to a process-based business. Here, we mention some management practices which, although they oppose each other, may be needed together.
• Leadership empowerment versus management-ordered control
Empowerment and devolvement to the process team is suggested in process management so that the team understands and achieves the process's goals. This may be perceived as a threat to managers' control of performance. However, a clear need for empowerment appears when team members feel unable to accept the bigger responsibilities in their process; this situation is best handled by empowerment.
• Developing process knowledge versus obtaining experts
As champions are responsible for forming their process's team, they are also responsible for developing the team's knowledge about the process. By organizing a champions' committee, the champions in an organization can exchange knowledgeable persons between their processes. This rearrangement of people in the business
process contributes not only to a better understanding of the process, but also to a better understanding of what customers want. It may, however, hinder the continual development of expertise: in the advanced phases of building the process's knowledge base, relying on experts may appear to be an extra cost or an overlapping effort, as several senior (or beginner) experts are then available for smaller tasks.
• Soft-boundaries matrix versus clear structure
In most organizations, a clear structure for the different processes is important for employees and team members to understand the functions and tasks. It is also needed for understanding and distributing responsibilities. However, people's motivation and performance are affected by a clear structure when they are worried about their careers; they tend to be less innovative and afraid of making mistakes. In these cases, individuals find a soft-boundaries system more comfortable, with the possibility of being involved in more than one process as well.
4.2.5. Training and teaching others for the process
Training means communicating new knowledge and skills and changing attitudes and roles. It focuses on enabling learning and development for people as individuals, which extends the range of knowledge development and creates more exciting and motivating opportunities for customers and employees. The heavy pressure organizations face when moving to a process-based structure requires a change in the culture of the organization, and this should be taken into consideration. A different form of leadership is required, because the role of the team leader also changes from supervisor to trainer or facilitator. It is practical to identify the skills needed for the process team and to assign roles to team members according to their relevant skill or capability. A training need is the gap between what somebody already knows and what they need to know to do their job or fulfill their role effectively. To succeed with the new process and profit from it, rewards and acknowledgments need to be clearly tied to the targets and goals of the process's operational tasks. In business process systems, several people in the organization lack knowledge about different processes or lack the ability to change at the right time. Therefore, a champion or senior manager may expect to find people under their supervision who need help to supply inputs to, or receive outputs from, a process. People cease working as individuals; instead, they rely on each other as team members. As the aim is to move toward management by business processes, process owners and teams should play a teaching role to spread their learning. The learning process itself can be seen as part of the communication protocols in an organization. It can further be used as a shared forum, where all departments and management levels can meet and exchange their experiences, knowledge, and even documentation (Fig. 10).
Figure 10. Relationship between principles (obtaining a champion, forming the team, understanding the "as-is" situation, linking related processes, tuning the managing style, training and teaching, measuring the process, and periodic review, arranged around the core process, its activities, and tasks).
4.2.6. Measuring the process
Business processes should, first of all, be measurable in the sense that they can be followed, controlled, improved, and benchmarked. As businesses are mostly concerned about profits, business process measurement should use both financial and non-financial measures. Measurement should be applied between processes at the same managerial level as well as between processes and their subprocesses. Some organizations adopt another way of measuring by applying a bottom-up approach that aggregates results up to the top management and business level (a small illustrative sketch appears at the end of this section). At every level, the key measures should be those used by managers at the next level to judge the results of the current level, and they should be directly related to customer satisfaction at that stage. Additionally, measurement can help balance the distribution of resources and control the flow of the process. Moreover, it can help prevent the optimization of subprocesses at the expense of the overall process by giving process owners an early indication of such behavior. A clear example comes from the business processes in supply chain management: if one stage in the chain rushes delivery in order to get different items together for the next stage, it may deliver more at once or achieve a shorter delivery time, but the cost of damaged packages will be greater and will consume more resources at the next stage of the process.
4.2.7. Periodic review for improving the process
Development is a continuous process that keeps the link between the "as-is" and "to-be" situations; our world never stands still. Therefore, a periodic review should be applied to the process to ensure that the initial assumptions are still correct. It is also important for ensuring that action plans and process modifications are on schedule. Additionally, through this review, all champions, managers, and responsible persons in the different processes should be informed, by reports and captured snapshots, of the changes made during the working phases.
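As a minimal, purely illustrative sketch of the bottom-up aggregation described in Sec. 4.2.6, the following Python fragment rolls subprocess measures up to the process level, combining a financial measure (cost) with a non-financial one (on-time delivery rate). The subprocess names, the figures, and the aggregation rules (summing costs, reporting the weakest on-time rate) are hypothetical choices, not a prescription from the literature cited above.

# Hypothetical measures reported by three subprocesses: a financial measure
# (cost) and a non-financial one (on-time delivery rate).
SUBPROCESS_MEASURES = {
    "Procurement":   {"cost": 40000.0,  "on_time_rate": 0.96},
    "Manufacturing": {"cost": 120000.0, "on_time_rate": 0.91},
    "Distribution":  {"cost": 35000.0,  "on_time_rate": 0.88},
}

def aggregate(measures):
    # One possible aggregation rule: costs are summed, while the on-time rate
    # is reported as the weakest link, since one late stage delays the chain.
    total_cost = sum(m["cost"] for m in measures.values())
    worst_on_time = min(m["on_time_rate"] for m in measures.values())
    return {"cost": total_cost, "on_time_rate": worst_on_time}

print(aggregate(SUBPROCESS_MEASURES))  # {'cost': 195000.0, 'on_time_rate': 0.88}

In practice, each level would choose the aggregation rule its managers actually use to judge the level below, as argued in Sec. 4.2.6.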
5. Conclusion
This chapter has focused on the development and management of business processes and on the use of tools and technologies for this purpose. Modeling helps in re-engineering a business, analyzing business activities, and building strategies for a software package to deal with intense market competition. However, different users (business and IT users) have different understandings of the business. Therefore, two types of models (business models and process models) are developed. As both models are different views of the same business, they must be related and derivable from each other. For the derivation of a process model from a business model, a chaining method was presented with an example. The presence of large numbers of processes hinders the manual management of a process model. This motivates the need for a software system called a BPMS, whose properties, advantages, and components were presented. Finally, for the qualitative integration of information, stakeholders, and enterprise architectures, principles of BPM were presented.
Acknowledgment
We would like to acknowledge the support of Paul Johannesson, Birger Andersson, Maria Bergholtz, and other members of our team.
References
Aalst, MP, HM Hofstede and M Weske (2003). Business process management: A survey. Springer Lecture Notes in Computer Science, 2678, 109–121.
Alves De Medeiros, K and CW Günther (2005). Process mining: Using CPN tools to create test logs for mining algorithms. In Proceedings of the Sixth Workshop and Tutorial on Practical Use of Colored Petri Nets and the CPN Tools, 177–190, Aarhus, Denmark.
Andersson, B, M Bergholtz, A Edirisuriya, J Zdravkovic, T Ilayperuma, P Jayaweera and P Johannesson (2008). Aligning goal models and business models. In Proceedings of CAiSE Forum, CEUR Proceedings, Vol. 344, 13–16, Montpellier, France.
Andersson, B, M Bergholtz, B Grégoire, P Johannesson, M Schmitt and J Zdravkovic (2006). From Business to Process Models — A Chaining Methodology. In Proceedings of the CAiSE Workshop on Business/IT Alignment and Interoperability, CEUR Proceedings, Vol. 237, 216–218, Luxembourg.
Anupindi, R, S Chopra, S Deshmukh, JA Mieghem and E Zemel (2004). Managing Business Process Flows: Principles of Operations Management. Prentice Hall Publishers.
Association of Business Process Management Professionals. http://www.abpmp.org [8 Dec. 2009].
Armistead, CG (1996). Principles of business process management. Managing Service Quality, 6(6), 48–52.
Beeson, I, S Green, J Sa and A Sully (2002). Linking business processes and information systems provision in a dynamic environment. Information Systems Frontiers, 4(3), 317–329.
Bergholtz, M, P Jayaweera, P Johannesson and P Wohed (2007). Bringing Speech Acts into UMM. In Proceedings of the 1st International REA Technology Workshop, Copenhagen, Denmark.
Bider, I and M Khomyakov (1998). Business Process Modeling — Motivation, requirements, implementation, ECOOP. Springer Lecture Notes in Computer Science (LNCS), 1543, 217–218.
BPM, http://searchcio.techtarget.com/sDefinition/0,,sid182_gci1088467,00.html. [25 May 2008].
Camponovo, G and Y Pigneur (2004). Information Systems Alignment in Uncertain Environments. In Proceedings of the IFIP International Conference on Decision Support Systems, Prato, Italy.
Curtis, B, MI Kellner and J Over (1992). Process Modeling. Communications of the ACM, 35(9), 75–90.
Dean, DL, JD Lee, RE Orwig and DR Vogel (1994). Technological support for group process modeling. Journal of Management Information Systems, 11(3), 43–63.
Denna, EL, LT Perry and J Jasperson (1995). Reengineering and REAL Business Process Modeling. In Business Process Change: Concepts, Methods and Technologies, V Grover and W Kettinger (Eds.), 350–375. IDEA Group Publishing, London.
Endl, R and M Meyer (1999). Potential of Business Process Modeling with Regard to Available Workflow Management Systems, SWORDIES Report No. 20, Berlin, http://www.cinei.uji.es/d2/cetile/documentos/fuentes/Model_Proces_Scholz_99.pdf. [11 Dec. 1998].
Frye, DW and TR Gulledge (2007). End-to-end business process scenarios. Industrial Management & Data Systems, 107(6), 749–761.
Gregoire, B and M Schmitt (2006). Business service network design: From business model to an integrated multi-partner business transaction. In Proceedings of the 8th IEEE International Conference on Enterprise Computing, E-Commerce, and E-Services, 84–94, Washington DC, USA.
ISO/IEC 15944-4 (2007). Information technology — Business operational view — Business transaction scenarios — Accounting and economic ontology. http://www.iso.org/iso/iso_catalogue/catalogue_tc. [1 Dec 2008].
Kimbrough, SO and DJ Wu (2004). Formal Modeling in Electronic Commerce. 1st Edn. Springer Publisher, The Netherlands.
Kotelnikov, V. Business process management system (BPMS). In Meeting the Growing Demand for End-to-end Business Processes, http://www.1000ventures.com/business_guide/bpms.html [8 Dec. 2009].
Lam, W (1997). Process reuse using a template approach: A case-study from avionics. ACM SIGSOFT Software Engineering Notes, 22(2), 35–38.
Laudon, JP and KC Laudon (2007). Essentials of Business Information Systems. Prentice Hall Publications.
Lin, F, M Yang and Y Pai (2002). A generic structure for business process modeling. Business Process Management Journal, 8(1), 19–41.
Luftman, J (2004). Managing the Information Technology Resource: Leadership in the Information Age. Prentice Hall, Inc., USA.
Luo, W and YA Tung (1999). A framework for selecting business process modeling methods. Industrial Management & Data Systems, 99(7), 312–319.
Lusk, S, S Paley and A Spanyi (2005). The evolution of business process management as a professional discipline. In Evolution of BPM as a Professional Discipline, BPTrends.
Major benefits of BPM, http://www.enjbiz.com/BPMbenefits.html. [11 Dec 2008].
Malinverno, P and JB Hill (2007). SOA and BPM Are Better Together. Gartner RAS Core Research Note. http://searchsoa.bitpipe.com/detail/RES/1138808050_532.html. [8 Dec. 2008].
Muehlen, M and DT Ho (2006). Risk management in the BPM lifecycle, BPM workshops. Springer Lecture Notes in Computer Science, 3812, 454–466.
Netjes, M, HA Reijers and MP Aalst (2006). FileNet's BPM Life-cycle Support, BPM Center Report BPM-06-07.
Open Group (2003). http://www.opengroup.com.
Paul, RJ and A Serrano (2003). Simulation for business processes and information systems design. In Proceedings of the 2003 WSC, 2(7–10), 1787–1796.
Pereira, CM and P Sousa (2004). Business and Information Systems Alignment: Understanding the Key Issues. In Proceedings of the European Conference on Information Technology Evaluation (ECITE), Amsterdam, The Netherlands.
Plexousakis, D (1995). Simulation and Analysis of Business Processes using GOLOG. In Proceedings of the ACM Conference on Organizational Computing Systems (COOCS), 311–322, Milpitas, California, USA.
Redman, TC (1998). The impact of poor data quality on the typical enterprise. Communications of the ACM, 41(2), 79–82.
Schiefer, J, H Roth, M Suntinger and A Schatten (2007). Simulating business process scenarios for event-based systems. In Proceedings of the 15th European Conference on Information Systems.
Serrano, A and M Hengst (2005). Modeling the integration of BP and IT using business process simulation. Journal of Enterprise Information Management, 18(6), 740–759.
Silver, B. BPMS watch: Agility and BPMS architecture, independent BPMS industry analyst, BPM Institute, http://www.bpminstitute.org/articles/article/article/bpms-watch-agility-and-bpms-architecture.html. [11 Dec 2008].
Trienekens, JM, RJ Kusters, B Rendering and K Stokla (2004). Business objectives as drivers for process improvement: Practices and experiences at Thales Naval, The Netherlands, BPM. Springer Lecture Notes in Computer Science, Vol. 3080, 33–48.
Using BPM to your advantage, http://www.cstoneindy.com/resources/articles/using-bpm/. [8 Dec. 2009].
Versteeg, G and H Bouwman (2006). Business architecture: A new paradigm to relate business strategy to ICT. Information Systems Frontiers, 8(2), 91–102.
Vojevodina, D, G Kulvietis and P Bindokas (2005). The method for e-business exception handling. In Proceedings of the 5th IEEE International Conference on Intelligent Systems Design and Applications (ISDA '05), 203–208, Wroclaw, Poland.
What is Process Standardization? http://www.argi.com.my/whatispage/processStandard.htm. [11 Dec 2008].
Biographical Notes
Mohammad El-Mekawy is a teacher in the Department of Computer and Systems Sciences (DSV), Royal Institute of Technology (KTH), Stockholm, Sweden. He has two M.Sc. degrees from KTH, Sweden, and an IT diploma from the Information Technology Institute (ITI) in Cairo, Egypt. He has participated in several European projects and has years of industrial experience in both Egypt and Sweden. He has about 10 publications, presented in international forums. He is an active researcher with research interests in global and strategic IT management, process modeling, crisis management, and data integration.
Khurram Shahzad is a PhD candidate at the Department of Computer and Systems Sciences (DSV), Royal Institute of Technology (KTH), Stockholm, Sweden. He is on study leave from the COMSATS Institute of Information Technology (CIIT), Lahore, where he works as an Assistant Professor in the Department of Computer Science. Before joining CIIT he was a lecturer at Punjab University College of Information Technology (PUCIT), University of the Punjab, Lahore, Pakistan. Khurram received his Master of Science degree from DSV, KTH, and his M.Sc. in Computer Science from PUCIT. He has over a dozen publications, presented in national and international forums.
Nabeel Ahmed is preparing to join the Department of Computer and Systems Sciences (DSV), Royal Institute of Technology (KTH)/Stockholm University (SU), Stockholm, Sweden, as a PhD candidate. Nabeel received his Master of Science degree in Engineering and Management of Information Systems from DSV, KTH, and his Bachelor of Science degree in Computer Science from the University of Management and Technology, Lahore, Pakistan.
Chapter 6
Business Process Reengineering and Measuring of Company Operations Efficiency
NATAŠA VUJICA HERZOG
Faculty of Mechanical Engineering, University of Maribor
Laboratory for Production and Operations Management
Smetanova ulica 17, SI-2000 Maribor, Slovenia
[email protected] The main purpose of the presented research is to contribute to a better understanding of business process reengineering (BPR), supported with performance measurement (PM) indicators with the purpose to improve company operations efficiency. Existing literature on the subject warns about deficiencies in the concept of BPR, which can be extremely efficient with its radical workings. The concept of BPR should be studied in connection with the logical supplementary areas: manufacturing strategy and, on the other hand, performance indicators, meant for selected manufacturing strategy and BPR performance verification. BPR and PM literature is based primarily on case studies and there is a lack of rigorous wide-ranging empirical research covering all its aspects. This chapter presents the results of a survey research carried out in 73 medium- and large-sized Slovenian manufacturing companies. Seven crucial areas were identified based on a synthesis of PM literature, which must be practiced to achieve effective operations: cost, quality, time, flexibility, reliability, customer satisfaction, and human resources. Variables have been constructed within these areas, using Likert scales, and statistical validity, and reliability analyses. Keywords: Business process reengineering; process management; performance measurement; survey research.
1. Introduction
Over the last 15 years, modes of operation in both manufacturing and service companies have changed considerably. We could even say that today the most important quality for a company that wants to remain successful and competitive is the ability to adapt to constant changes in the global environment. Business Process Reengineering (BPR) is classified by some theoreticians, and even practitioners, as a manufacturing paradigm stemming from the competitive environment, alongside concepts such as lean manufacturing, world-class manufacturing, and agile manufacturing, and methods such as just-in-time manufacturing, total quality management (TQM), continuous process improvement, and concurrent engineering. However, BPR is much more than just one of the modern manufacturing paradigms.
To understand the true meaning of restructuring, we must examine facts which reach far back into the past. In 1776, Adam Smith, a philosopher, economist, and radical thinker of his time, explained in his book The Wealth of Nations a principle he called the division and specialization of labor, which resulted in the productivity of a pin factory increasing a hundredfold. Smith's principles were improved on in the field of manufacturing, especially by Henry Ford in the automotive industry, and by Alfred Sloan of General Motors in the field of management. Many years later, when companies and especially their management instruments had become oversized and therefore almost impossible to manage, Hammer and Champy (1990, 1993) promoted their idea about the need for radical rethinking. They pointed out that the way of thinking caused by Smith's central idea, the division and specialization of labor, and the fragmentation it produces, will not be enough to reach competitive advantage and efficiency in the future. The authors also examined in detail the weak points which stem from the division of labor and, at the same time, gave clear guidelines on how to operate in the future. The most important idea of their work is that processes which have been divided for 200 years must be united again and restructured, which will make them considerably different from tradition. By focusing on processes and their restructuring, they turned upside down the industrial model, which is based on the principle that workers have little knowledge and little time or ability for additional education, and which therefore required their tasks to be as simple as possible. On the other hand, simple tasks demanded complex linking processes. To satisfy present demands for quality, flexibility, low cost, reliable delivery, and customer satisfaction, the processes have to be as simple as possible. The consequences of these requirements are apparent in the design of processes and the form of organization. The field of BPR will be presented in connection with its logically complementary fields: the choice of manufacturing strategy on the one hand, and, on the other, the indicators intended for verifying the efficiency of the chosen strategies. We discovered that BPR is dynamic, designed for change and, as such, difficult to transfer into different environments; one could say that it depends on the conditions and the environment in which we wish to realize it. Measuring business performance has been one of the key topics of the last 10 years. Traditional criteria, which were based especially on cost, have become inadequate, especially because of changes in the nature of work, the rise of modern manufacturing concepts, changes in the roles in companies, new demands of the business environment, and the development of information technology. The renewal and modernization of performance measuring systems refers, on the one hand, to innovations in accountancy systems, especially regarding the treatment of expenses based on activities (Johnston and Kaplan, 1987), and, on the other hand, to expansion in the field of the so-called non-cost measurements, which are not economic or financial in nature but come from customer needs.
The existing literature on the subject warns about deficiencies in the concept of BPR, which can be extremely efficient because of its radical workings. Connecting BPR to the level of manufacturing strategy solves the problem of integrating BPR and enables us to define the starting point and set clear goals. The chosen strategic goals of the company, represented by competitive criteria, become the goals that can be attained by BPR. Defining clear goals, in turn, makes it possible to measure the efficiency or failure of the implemented reengineering process, which indicates whether the chosen or planned strategy has been successfully realized; with this approach, the reengineering process is rounded into a whole.
2. Business Process Reengineering
Several authors have provided their own interpretation of the concept of BPR. For example, Davenport and Short (1990) described BPR as the analysis and design of work flows and processes within, and between, organizations. Hammer and Champy (1993) promoted "the fundamental rethinking and radical redesign of business processes to achieve dramatic improvements in critical, contemporary measures of performance, such as cost, quality, service, and speed." Short and Venkatraman (1992) exposed the customer's point of view when defining BP redesign as the company's action to restructure internal operations by improving product distribution and delivery performance to the customer. For Johansson et al. (1993), BPR is the means by which an organization can achieve radical change in performance as measured by cost, cycle time, service, and quality, through the application of a variety of tools and techniques that focus on the business as a set of related, customer-oriented core business processes rather than a set of organizational functions. Even if the main BPR characteristic remains the radical nature of change, some authors, such as Yung and Chan (2003), have proposed a slightly less radical approach, named "flexible BPR." Other authors such as Vantrappen (1992) or Talwar (1993) focused on the rethinking, restructuring, and streamlining of business structure, processes, work methods, management systems, and external relationships, through which value is created and delivered. Petrozzo and Stepper (1994), on the other hand, believed that BPR involves the concurrent redesign of processes, organizations, and their supporting information systems, to achieve radical improvement in time, cost, quality, and customers' regard for the company's products and services. Loewenthal (1994) described it as the fundamental rethinking and redesign of operating processes and organizational structure, focused on the organization's core competence, to achieve dramatic improvements in organizational performance. Zairi (1997) discussed BPR, including continuous improvement and benchmarking, within Business Process Management, a structured approach to analyzing and continually improving
fundamental activities such as manufacturing, marketing, communications, and other major elements of a company's operation. BPR also has some similarities with TQM: first, the process orientation; then the customer-driven inspiration and the wide transversal nature (Schniederjans and Kim, 2003). They differ in their approach: evolutionary (continuous, incremental improvement) process change in the case of TQM, and revolutionary (radical, step-change improvement) process change in the case of BPR (Venkatraman, 1994; Slack et al., 2001). In spite of the apparent differences in the definitions given by many authors, we can extract a few common, more important aspects or key words of process reengineering, which were precisely captured by Hammer and Champy (1993) in the following definition: Reengineering of business processes is a basic new consideration of the business process and its fundamental remodelling, to achieve great improvements in critical and contemporary measurements of performance, such as cost, quality, service, and speed.
3. Correlation Between Business Strategy and BPR
According to the literature review, the concept of BPR should be studied in connection with its logical supplementary areas: on the one hand the manufacturing strategy, and on the other hand the performance indicators. The need for a strategically driven BPR approach has been perceived by numerous authors (Zairi and Sinclair, 1995; Sarkis et al., 1997). Tinnilä (1995) ascertained that BPR should start from strategies: the desired strategic position, rather than improvement of existing operations, should be the starting point for redesign. Edwards and Peppard (1994, 1998) proposed business reengineering as a natural linkage with strategy; they suggested that business reengineering can help bridge the gap between strategy formulation and implementation. In this context, BPR is seen as an approach which defines the business architecture, thus enabling the organization to focus more clearly on customers' requirements. Since our survey considered manufacturing companies, we focused specifically on manufacturing strategy as derived from corporate strategy; however, several items about the overall strategy were also treated.
4. Performance Measurement
4.1. Definition of PM
Company PM is a topic which is frequently mentioned but rarely precisely defined. It is actually a process of measuring effects, where measuring is a process of defining value and the effect is represented by the performance achieved (Neely et al., 1995). According to the market viewpoint, a company reaches its set goals
when it satisfies its customers' demands more efficiently and more effectively than its competitors do. The terms "efficiency" and "effectiveness" are used precisely in this context. Efficiency is a measure of how economically the organization's resources are utilized when providing a given level of customer satisfaction, while effectiveness refers to the extent to which customer requirements are met. This is a very important aspect, which not only defines two basic dimensions of performance, but also stresses the existence of external and internal influences on the motives for operation. The definitions can be written as
• PM is a process for assessing the efficiency and effectiveness of a company's operations.
• PM is a criterion for assessing the efficiency and effectiveness of a company's operations.
• PM can be a series of criteria for assessing the efficiency and effectiveness of a company's operations.
In spite of the apparent simplicity of the above definitions, what does a PM system actually represent? On the one hand, it is true that PM is a series of measures for assessing the efficiency and effectiveness of processes and procedures already performed. But such a definition neglects the fact that the PM system also encompasses other supporting infrastructure. The data must be acquired, examined, classified, analyzed, explained, and reported. If we leave out or overlook any of these activities, the measuring is incomplete and, as a consequence, the adopted decisions and actions may be unsuitable. Therefore, the complete definition would be: PM enables the adoption of substantiated decisions and actions, for it assesses the efficiency and effectiveness of operations through the process of acquiring, examining, analyzing, explaining, and reporting appropriate data.
4.2. Reasons for Change in the Field of PM
There are several reasons why this area is receiving so much attention today, and why traditional financial measures are perceived as insufficient. An overview of the available literature (Eccles, 1991; Neely, 1999) provides the following content groups:
• Changes in the nature of work
• Market competitiveness
• Emergence of advanced manufacturing concepts
• Changes of roles in companies
• Demands of the business environment
• Information technology development
• International quality awards
4.2.1. Changes in the nature of work
Traditional accounting systems particularly stress direct material and labor costs. The latter, especially in the 1950s and 1960s, exceeded 50% of all costs. Because of large investments in advanced manufacturing technology, the share of direct labor costs decreased and, with it, the suitability of traditional accounting systems. As direct labor cost no longer represents the most important cost share, lowering it, and consequently increasing productivity, does not decisively influence the overall operations of a company. A narrow focus on cost lowering can cause:
• Short-term effects on investment decisions.
• Local optimization without influence on the entire operation.
• Focus on standard solutions and prevention of constant development.
• Lack of strategic focus, as data on quality, responsiveness, speed, and flexibility are neglected.
• Neglect of information on market requirements.
4.2.2. Market competitiveness
Economic dynamics, where only change is constant, the development of science, and high competitiveness in a market shaken by the process of globalization importantly influence the way efficiency is measured. Financial indicators measure predominantly the consequences of past decisions and are limited in predicting the efficiency of operations in the future. Increased competitiveness demands that companies search for an original strategic position, based on the special resources and abilities significant for the company. Companies do not compete only on prices and costs as the consequent competitive criteria, but also try to differentiate themselves on the basis of quality, flexibility, adaptability to customer demand, innovativeness, and quick response.
4.2.3. Emergence of advanced manufacturing concepts
The study of Japanese economic growth in the 1980s and at the beginning of the 1990s overwhelmed Western researchers with the realization that Japanese companies usually define manufacturing differently. The world was introduced to lean manufacturing, which covers a stack of approaches and techniques for production management. In the 1990s, the concept of lean manufacturing overstepped the bounds of manufacturing and was transferred to the concept of lean operations. Besides lean manufacturing, other concepts emerged in the 1990s: TQM, BPR, benchmarking, mass customization, and concurrent engineering. The implementation of advanced business concepts helped companies advance simultaneously on different competitive criteria. The efficiency of operations was no longer measured one-dimensionally through financial criteria. Increasing
the effectiveness and efficiency of business processes demanded multidimensional monitoring with the help of various indicators.
4.2.4. Changes of roles in companies
The majority of criticism about the inadequacy of indicators for monitoring operations came in the 1980s and 1990s from academic experts who dealt with accounting (Baiman, 2008). These academic experts from the field of accounting, together with various professional associations, increased interest in implementing non-financial indicators in systems for measuring business efficiency. Those responsible for human resource (HR) development represent another group who took a more active role in shaping indicators and their use (Chen and Cheng, 2007). These indicators were integrated into the entire management of HRs, which is composed of setting goals, measuring performance, providing feedback, and rewarding. The correlation between performance measurement and rewarding is, of course, the essence of HR management.
4.2.5. Demands of business environment
The business environment cannot be limited only to competitiveness among companies. Other elements of the business environment also influence the importance of different indicators. The trend toward deregulating the economy instigated the privatization of former public companies and the establishment of different agencies for monitoring the operations of the newly established companies. Companies also face an increasing amount of pressure from the final users of products and services, united in various associations. Consumers want more information about the product or service, and also about the way the product was produced.
4.2.6. Information technology development
Information technology development has heavily influenced the possibility of using reengineered systems for measuring business efficiency (Marchand and Raymond, 2008). The development of hardware, software, and databases enables data to be gathered, analyzed, and presented effectively from different sources, by more people, and in a cheaper and faster manner. The available information, which constantly monitors business operations, thus enables better business decisions in companies, which finally become evident in improved business results.
4.2.7. International quality awards
The establishment of quality movements and the recognition of the importance of improving the effectiveness and efficiency of business processes instigated different awards for quality. The first one, the Deming Quality Award, appeared in 1950 in Japan. In the United States, the Baldrige Award is highly valued. The
European Foundation for Quality Management (EFQM) gives awards for business excellence. Companies which compete for such awards must undergo an extensive evaluation and give detailed information about their organizational strategies, resources, information flow, relationship to social issues, quality policy, and also financial results.
4.3. Review of Individual Measurements and Performance Indicators
Based on the literature review, we can summarize the most important individual measurements of performance as follows:
• Quality
• Flexibility
• Time
• Costs
• Customer satisfaction
• Employee satisfaction or HR management
One of the fundamental problems we face when implementing a useful PM system is achieving a balance between a smaller number of key criteria (clear and simple, but possibly not reflecting all organizational goals) on the one hand, and a greater number of detailed criteria or performance indicators (complex and less appropriate for management, but able to show many different facets of performance) on the other. In general, a compromise can be achieved by ensuring a clear connection between the chosen strategy, the key parameters of performance, which reflect the main performance goals, and a series of performance indicators for the individual key parameters (Slack et al., 2001). When dealing with individual performance measures, the most important fact is that they must follow from the strategy (Neely, 1998). Based on a review of the manufacturing strategy literature, Leong et al. (1990) conclude that the generally accepted and useful key dimensions of performance are quality, speed, delivery reliability, price, and flexibility. In spite of this, there is still some vagueness about what different authors actually mean by these terms. Wheelwright (1984), for example, uses flexibility in the context of flexible production volume. Other authors such as Garvin, Schonberger, Stalk, Gerwin, and Slack mention different ways of measuring the key dimensions of performance. Therefore, it is almost impossible to review all performance indicators. One of the problems of the PM literature is its diversity: different authors focus on different viewpoints when shaping PM systems. Business strategists and managers treat measurements on a higher and different level than the managers who are responsible for PMs in production. De Toni and Tonchia (2001) state that traditional measuring systems, which focused predominantly on production costs and productivity, have, on the basis of changes coming from the competitive environment, been reshaped into two types of measurements (Fig. 1):
Figure 1. Performance measures, divided into "cost" measures (production costs and productivity) and "non-cost" measures (time, flexibility, and quality). (Adopted from De Toni and Tonchia, 2001.)
Cost PMs, including production costs and productivity. These costs display clear correlations, which can be treated in mathematical form, leading to the final results of the company, namely its net income and profitability.
Non-cost PMs, without a direct cost connection, which are gaining importance. Non-cost performances are usually measured in non-monetary units, which do not enable a direct link to the economic and financial statements (net income and profitability) in the exact manner that is characteristic of cost-related performance. For example, a delivery time shorter than 3 days, or higher-quality products (for which we use 5% less), undoubtedly have a positive influence on economic and financial performance, but this cannot be expressed in an incremental manner through net income and/or profitability.
The main goal of the presented review is to develop a system of indicators for evaluating the reengineering of business processes, as the tool for implementing radical changes in a company, with the aim of meeting the guidelines provided by the strategy. Using extensive survey research in Slovene companies, we tried to develop a system of indicators able to assess the success of the implemented reengineering. For this purpose, we had to study the measurement systems according to individual performance indicators and then try to determine the connections and accuracy of the proposed theoretical model. The
following sections review the performance indicators, from the propositions of different authors and the contributions of theory to the final form and selection presented to the companies in the questionnaire used in the survey research.
4.3.1. PMs, cost-based
The development of management accounting is, among others, very well documented by Johnson (1975, 1983). His work reveals that the majority of accounting systems used today are based on assumptions made 60 years ago. Indeed, Garner's (1954) review of the accounting literature indicates that the majority of the so-called sophisticated cost accounting theories and practices, for example return on investment (ROI), were developed around 1925. Johnston and Kaplan (1987) stress that, due to the dramatic changes in business environments that have occurred over the last 60 years, accounting systems are based on premises that are no longer valid. One of the most widely criticized practices is the allocation of indirect labor and overheads according to direct labor cost. In 1900, direct labor cost represented the majority of product cost, so it was prudent to allocate overhead cost to the product in accordance with its labor content. With the increasing use of advanced manufacturing technologies, direct labor costs today are regarded as 10%–20% of product costs, while overhead costs represent 30%–40% (Murphy and Braund, 1990). This means that a high burden of overhead costs is allocated on the basis of a relatively small direct labor share of product cost, so small changes in direct labor content greatly influence the cost structure. Moreover, distributing overhead costs in accordance with direct labor hours stimulates managers to focus on minimizing the number of direct labor hours prescribed in their cost centre, and with this they neglect the overhead costs themselves. Johnston and Kaplan (1987) argue that these problems will only increase in the future, as product life cycles become shorter and the share of total product costs devoted to overheads such as research and development continues to grow. As a result of the criticisms of traditional management accounting, Cooper (1988) developed an approach known as "activity-based costing" (ABC). ABC overcomes many of management accounting's traditional problems, such as the distortion of management accounting by the needs of financial reporting; in particular, costing systems driven by the need to value stock rather than to provide meaningful product costs. In the majority of manufacturing companies, the share of direct labor as a percentage of total cost has decreased, yet it is still by far the most common basis for loading overheads onto products. Overhead costs are not only a burden that must be minimized. Overhead functions such as product design, quality control, customer service, production planning, and sales order processing are as important to the customer as the physical processes
on the shop floor, and they are increasing in complexity. Production processes are more complex, product ranges have expanded, product life cycles are shorter, and quality is higher. The marketplace is increasingly competitive, and in the majority of sectors global competition has become a reality. Every business should be able to assess the true profitability of the sectors it trades in, understand its product costs, and know what drives overhead. Cost management systems should support process improvements, and the PMs should be connected to strategic and commercial objectives. In 1985, Miller and Vollman pointed out that, although many managers focus on visible costs (e.g., direct labor and direct material), the majority of overheads are caused by invisible transaction costs. Cooper (1988) warned about this in one of his earliest works; one of his major findings in support of ABC was that costs are caused not so much by the product itself as by the activities required for the production and delivery of the product. Later, it became clear that the major benefit of ABC is process analysis. This is in accordance with the concept of business process reengineering, which offers a view of information along transverse (horizontal) rather than vertical flows in a company.
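To make the contrast concrete, the following minimal Python sketch compares a traditional direct-labor-hour allocation of an overhead pool with an activity-based allocation driven by set-ups and order processing. The products, the cost pool, and the driver quantities are hypothetical; the sketch only illustrates Cooper's point that activity drivers, not labor hours, cause most overhead.

# A 100,000 overhead pool, split into two activity cost pools, is allocated
# to two hypothetical products in two ways.
OVERHEAD = {"set_ups": 60000.0, "order_processing": 40000.0}

PRODUCTS = {
    "standard": {"labor_hours": 900.0, "set_ups": 10, "orders": 50},
    "custom":   {"labor_hours": 100.0, "set_ups": 40, "orders": 150},
}

def labor_based(products, overhead_total):
    # Traditional allocation: overhead follows direct labor hours.
    total_hours = sum(p["labor_hours"] for p in products.values())
    return {name: overhead_total * p["labor_hours"] / total_hours
            for name, p in products.items()}

def activity_based(products, overhead):
    # ABC allocation: each cost pool follows its own activity driver.
    total_setups = sum(p["set_ups"] for p in products.values())
    total_orders = sum(p["orders"] for p in products.values())
    return {name: overhead["set_ups"] * p["set_ups"] / total_setups
                  + overhead["order_processing"] * p["orders"] / total_orders
            for name, p in products.items()}

print(labor_based(PRODUCTS, sum(OVERHEAD.values())))  # {'standard': 90000.0, 'custom': 10000.0}
print(activity_based(PRODUCTS, OVERHEAD))             # {'standard': 22000.0, 'custom': 78000.0}

With these invented figures, the labor-hour basis loads 90% of the overhead onto the standard product, whereas the activity basis shifts most of it to the custom product, which consumes far more set-ups and orders.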
The other cost-based PM that is very extensively researched in the literature is productivity. Traditionally, it is defined as the relationship between total output and total input (Burgess, 1990); a small illustrative sketch follows the list below. Productivity therefore measures how well resources are combined and used to accomplish specific, desirable results (Bain, 1982). Ruch (1982) cites several different methods through which higher productivity can be achieved:
• Increasing output faster than input (growth management)
• Producing higher output with the same level of input (rationalizing the work process)
• Producing higher output with lower input (the ideal)
• Maintaining the level of output while lowering input (higher efficiency)
• Lowering the level of output with an even greater lowering of input (decrease management)
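As a small, purely illustrative sketch of the output-to-input ratios just defined (the distinction between total and partial measures is taken up below), the following Python fragment computes a total productivity ratio and one partial measure, labor productivity, for a single period; all figures are invented.

# Hypothetical figures for one period.
PERIOD = {
    "output_value":  500000.0,  # value of goods produced
    "labor_cost":    150000.0,
    "material_cost": 200000.0,
    "capital_cost":   50000.0,
}

def total_productivity(p):
    # Total output related to the sum of all inputs.
    return p["output_value"] / (p["labor_cost"] + p["material_cost"] + p["capital_cost"])

def labor_productivity(p):
    # A partial measure: output related to a single input.
    return p["output_value"] / p["labor_cost"]

print(round(total_productivity(PERIOD), 2))  # 1.25
print(round(labor_productivity(PERIOD), 2))  # 3.33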
Different problems arise when measuring productivity, not only when defining inputs and outputs, but also when estimating their amounts (Burgess, 1990). Craig and Harris (1973) propose that companies focus more on total rather than partial productivity measurements. To define the most typical partial cost measurements, called measurement indicators, cost-based, we will take a look at propositions from different authors. Hudson et al. (2001) define the following as the critical dimensions of costbased performance: • • • • • •
• Cash flow
• Market share
• Cost reduction
• Inventory performance
• Cost control
• Sales
• Profitability
• Efficiency
• Product cost reduction
Cost-based performance indicators are divided by De Toni and Tonchia (2001) into cost indicators (material and labor cost, machine operation cost, production cost) and productivity indicators:
• Material cost
• Labor cost
• Machinery energy costs
• Machinery material consumption
• Inventory and WIP level
• Machinery saturation
• Total productivity
• Direct labor productivity
• Indirect productivity
• Fixed capital productivity
• Working capital productivity
• Value-added productivity
• Value-added productivity per employee
Some typical cost-based PMs, according to Slack and Lewis (2002):
• Minimum delivery time/average delivery time
• Variance against budget
• Utilization of resources
• Labor productivity
• Added value
• Efficiency
• Cost per operation hour
Neely et al. (1995) propose the following categories as the critical dimensions of cost measurements:
• Manufacturing cost
• Added value
• Selling price
• Running cost
• Service cost
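To illustrate why the choice of allocation basis discussed earlier in this subsection matters, the sketch below contrasts a traditional direct-labor overhead allocation with a simple activity-based allocation for two hypothetical products. All figures, activity names, and cost drivers are invented for illustration; they are not the ABC procedure of any cited author.

# Two hypothetical products sharing 100,000 EUR of overhead.
products = {
    "A": {"direct_labor_hours": 800, "setups": 2,  "orders": 10},
    "B": {"direct_labor_hours": 200, "setups": 18, "orders": 90},
}
overhead = {"setup_cost": 40_000.0, "order_handling": 60_000.0}

# Traditional costing: load all overhead in proportion to direct labor hours.
total_dlh = sum(p["direct_labor_hours"] for p in products.values())
traditional = {name: sum(overhead.values()) * p["direct_labor_hours"] / total_dlh
               for name, p in products.items()}

# Activity-based costing: trace each overhead pool through its own cost driver.
total_setups = sum(p["setups"] for p in products.values())
total_orders = sum(p["orders"] for p in products.values())
abc = {name: overhead["setup_cost"] * p["setups"] / total_setups
             + overhead["order_handling"] * p["orders"] / total_orders
       for name, p in products.items()}

for name in products:
    print(f"product {name}: traditional {traditional[name]:,.0f} EUR, ABC {abc[name]:,.0f} EUR")

The low-volume but transaction-intensive product B absorbs far more overhead under the activity-based view, which is exactly the distortion that activity analysis is meant to expose.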
4.3.2. PMs, time-based
Time has been described as both a source of competitive advantage and a fundamental measure of performance. According to the JIT manufacturing philosophy, producing or delivering goods just too early or just too late is seen as waste. Similarly, one of the objectives of optimal production is the minimization of throughput times (Goldratt and Cox, 1986).
Galloway and Waldron (1988, 1989) developed a time-based cost system, also known as throughput accounting, which rests on the following premises:
1. A manufacturing unit is an integrated whole whose operating costs are largely fixed in the short term. It is therefore more convenient and much simpler to treat all costs other than material as fixed and to call them "total factory costs".
2. For all companies, profit is a function of the time required to respond to the needs of the market. This means that profitability is inversely proportional to the level of inventory, since response time is itself a function of inventory.
3. The relative profitability of a product is determined by the rate at which it contributes money; comparing this rate with the rate at which the company spends money determines absolute profitability.
Galloway and Waldron (1988, 1989) argue that these contributions should be measured as the rate at which money is received, not as an absolute value. They therefore define the return per factory hour, separated from the cost per factory hour, as:

Return per factory hour = (sale price − material costs) / (time on the key resource)    (1)

Cost per factory hour = (total factory cost) / (total time available on the key resource)    (2)
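A small numeric sketch of Eqs. (1) and (2), using invented figures, shows how the two rates can be combined. The closing ratio (return per factory hour divided by cost per factory hour) is the comparison commonly made in throughput accounting; the product data below are illustrative assumptions only.

# Throughput accounting on a hypothetical bottleneck (key) resource.
sale_price        = 250.0     # EUR per unit
material_cost     = 100.0     # EUR per unit
key_resource_time = 0.5       # hours of bottleneck time per unit

total_factory_cost = 180_000.0  # all non-material costs per period (EUR)
available_hours    = 1_500.0    # bottleneck hours available per period

return_per_factory_hour = (sale_price - material_cost) / key_resource_time   # Eq. (1) -> 300 EUR/h
cost_per_factory_hour   = total_factory_cost / available_hours               # Eq. (2) -> 120 EUR/h

ta_ratio = return_per_factory_hour / cost_per_factory_hour   # above 1: the product earns money faster than the factory spends it
print(f"return/h = {return_per_factory_hour:.0f}, cost/h = {cost_per_factory_hour:.0f}, ratio = {ta_ratio:.2f}")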
Moreover, House and Price (1991) recommend the use of the Hewlett-Packard return map for monitoring the effectiveness of a new-product development process. Fooks (1992) reports that Westinghouse used similar cost–time profiles for more than a decade. The basic idea is that any set of business activities or processes can be described as a collection of costs over time. An interesting approach to designing time-based PMs is proposed by Azzone et al. (1991): according to their findings, companies that wish to use time as a source of competitive advantage should use a series of measurements. Let us review the partial time-based PMs proposed by different authors. The various types of time-based performance are fundamentally divided according to where they are realized (De Toni and Tonchia, 2001):
1. Inside the company:
• Work time and preparation of work (travel, preparation, and finishing times)
• Waiting and transport times
2. Outside the company:
• System time (time for delivery, production, and distribution)
• Speed and reliability of delivery (customers and suppliers)
• Time to market (time required for new product development)
Furthermore, we can enumerate the indicators of external and internal time performance:
• External times: time to market, lead times distribution, delivery reliability, supplying lead times, supplier delivery reliability
• Internal times: manufacturing lead times, standard run times, actual flow times, wait times, set-up times, move times
• Externally-internal times: inventory turnover, order carrying-out times
Slack and Lewis (2002) propose the following typical partial time measurements:
• Customer query time
• Order lead time
• Frequency of delivery
• Actual versus theoretical throughput time
• Cycle time
Critical dimensions of time performance according to Hudson et al. (2001):
• Lead time
• Delivery reliability
• Process throughput time
• Process time
• Productivity
• Cycle time
• Delivery speed
• Labor efficiency
• Resource utilization
Neely et al. (1995) propose the following typical partial time-based measurements:
• Manufacturing lead time
• Rate of introducing production
• Delivery lead time
• Due-date performance
• Frequency of delivery
4.3.3. PMs, flexibility-based
Although the area of flexibility has generated a great deal of literature over the last 15 years, some vagueness remains, and this vagueness concerning the concept of flexibility represents a critical obstacle to managing performance effectively for competitive advantage (Upton, 1994). Definitions of flexibility found in the literature fall mainly into two groups:
• Definitions which are directly linked to the company
• Definitions which arise from general definitions of flexibility found in other scientific fields
Although measuring flexibility is of great importance both in academic circles and among managers, such measurements are still under development — particularly because flexibility is a multidimensional concept and because there are usually no indicators that can be obtained by direct measurement (Cox, 1989). The proposed measurements are somewhat naive and general; despite the need, there are no generally or widely accepted measuring methods, and the robustness of the proposed measurements has hardly been researched (Chen and Chung, 1996). Direct, objective flexibility measurements are very hard to put into practice; examples include estimating the possibilities available at a given moment (a decision viewpoint) and analyzing certain output characteristics. Direct subjective measurements, based on Likert scales, also exist: for different aspects of flexibility, respondents express their degree of agreement or disagreement with given statements. Because of the problems that arise with the direct definition of flexibility performance, different authors propose the use of indirect indicators which take into account:
1. Characteristics of the manufacturing system which enable flexible production, which can be:
• Technological (for example, availability of excess production capacity, existence of preparation time, etc.)
• Organizational and managerial (for example, improvement of work organization and team work, etc.)
2. Performance which is in some way connected to flexibility, which can be:
• Economic (cost and value)
• Non-cost based (time for product development, delivery time, quality, and services)
Because flexibility has several dimensions, partial measurements are especially appropriate for measuring the flexibility of manufacturing systems. In this case, we must be familiar with unification procedures which combine all the important individual indicators covering the different kinds of flexibility (Tonchia, 2000).
To define the most typical partial flexibility measurements, also called "flexibility indicators", we review the different authors' propositions. Slack and Lewis (2002) propose the following typical partial flexibility measurements:
• Time required for developing new products/services
• Range of products/services
• Machine change-over time
• Batch size
• Time to increase activity rate
• Average capacity/maximum capacity
• Time to change schedules
Hudson et al. (2001) propose the following dimensions as critical flexibility measurements:
• Manufacturing effectiveness
• Resource utilization
• Volume flexibility
• New product introduction
• Computer systems (IT)
• Future growth
• Product innovation
De Toni and Tonchia (2001) propose the division, and thus also the measurement, of the following types of flexibility:
• Volume flexibility
• Mix flexibility
• Product modification flexibility
• Process modification flexibility
• Expansion flexibility
Neely et al. (1995) propose the following as typical partial flexibility measurements:
• Material quality
• Output quality
• New product development
• Modify product
• Deliverability
• Volume
• Mix flexibility
• Resource mix
4.3.4. PMs, quality-based
Traditionally, quality has been defined in terms of conformance to specification; quality-based measurements of performance therefore generally focus on measures such as the number of defects produced and the cost of quality.
Feigenbaum (1961) was the first to propose that the true cost of quality is a function of prevention, appraisal, and failure costs. Campanella and Corcoran (1983) defined the three types of cost as follows:
• Prevention costs are incurred to prevent nonconformities; they include quality planning, supplier quality surveys, and training costs.
• Appraisal costs are incurred to assess product quality and to detect nonconformities; they include monitoring, testing, and calibration or dimensional control.
• Failure costs are incurred to correct nonconformities and are usually divided into internal failure costs, which arise before delivery to the customer (such as the cost of repairs, waste, and material examination), and external failure costs, which arise after delivery of the goods to the customer (such as the costs of processing customer complaints, customer refunds, maintenance, and warranties).
A short numerical illustration of these cost categories appears at the end of this subsection. Crosby's (1972) claim that "quality is free" is based on the assumption that any increase in prevention costs is more than offset by a decrease in failure costs. Quality costs are the specific costs that arise in a company when quality is not achieved the first time; they usually represent around 20% of the net sales price. Crosby warns that the majority of companies fail when integrating the quality-cost model into the management process: even if managers estimate the quality costs, they lack appropriate activities for lowering them. With the emergence of TQM, the emphasis has shifted away from "conformance to specification" and toward customer satisfaction. As a consequence, a larger number of customer satisfaction surveys and market research studies have emerged. This is reflected in the emergence of the Malcolm Baldrige National Quality Award in the United States and the European Quality Award in Europe. Other common measures of quality include statistical process control (SPC) (Deming, 1982; Price, 1984) and the Motorola six-sigma concept. Motorola is one of the world's leading manufacturers and suppliers of semiconductors; in 1992, the company set itself the quality goal of six-sigma capability (3.4 defects per million parts). The last two measurements are especially important for the design of PM systems, because they focus on the process and not on the output. De Toni and Tonchia (2001) defined the following indicators of performance quality:
• SPC measures — achieved quality
• Machinery reliability
• Reworks — quality costs
• Quality system costs
• In-bound quality
• Vendor quality rating
• Customer satisfaction — quality perception
• Technical assistance
• Returned goods
To sum up, these cover production quality, internal quality, quality costs, quality perception (understanding market demands), external quality, and delivery quality.
Some of the typical partial quality measurements, as proposed by Slack and Lewis (2002):
• Number of defects per unit
• Level of customer complaints
• Scrap level
• Warranty claims
• Mean time between failures
• Customer satisfaction score
Hudson et al. (2001) propose the following as the critical performance quality dimensions:
• Product performance
• Delivery reliability
• Waste
• Dependability
• Innovation
Neely et al. (1995) propose the following indicators as typical PMs pertaining to quality:
• Performance
• Features
• Reliability
• Conformance
• Technical durability
• Serviceability
• Aesthetics
• Perceived quality
• Humanity
• Value
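As a concrete illustration of the prevention–appraisal–failure view of quality costs described above, the short sketch below totals the four cost categories for a hypothetical period. The figures and cost-element names are illustrative assumptions, not data from any cited study.

# Hypothetical cost-of-quality summary for one period (EUR).
categories = {
    "prevention":       {"quality planning": 12_000, "supplier surveys": 5_000, "training": 8_000},
    "appraisal":        {"inspection": 15_000, "testing": 10_000, "calibration": 4_000},
    "internal failure": {"rework": 22_000, "waste": 9_000},
    "external failure": {"complaint handling": 7_000, "warranty": 14_000},
}
net_sales = 600_000.0

total_coq = sum(sum(c.values()) for c in categories.values())
for name, c in categories.items():
    print(f"{name:17s}: {sum(c.values()):7,d} EUR")
print(f"total cost of quality: {total_coq:,.0f} EUR "
      f"({100 * total_coq / net_sales:.1f}% of net sales)")

Tracking the split over time shows whether extra prevention spending is indeed being offset by falling failure costs, which is the managerial action Crosby argues is usually missing.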
4.3.5. Dependability
Some typical partial dependability measurements, as proposed by Slack et al. (2002):
• Percentage of orders delivered late
• Average lateness of orders
• Proportion of products in stock
• Mean deviation from promised arrival
• Schedule adherence
4.3.6. Measuring customer satisfaction
To measure customer satisfaction, Hudson et al. (2001) propose the following critical dimensions:
• Market share
• Service
• Image
• Integration with customers
• Competitiveness
• Innovation
• Delivery reliability
4.3.7. Measuring employee satisfaction
Human capacities, or human resources (HR), are surely the most important resource of every company. Often, two equally large companies involved in similar activities and working in the same environment achieve substantially different business results. The reasons can be numerous, but the difference is usually a consequence of the different work abilities of their employees, that is, of a different quality of HR. Awareness of the value of HR is not new: even the pre-classical economists treated people as an integral part and a source of national wealth. These insights have matured over time, yet human capacities still only rarely find their place in accounting statements. Some of the critical dimensions for measuring employee satisfaction, as proposed by Hudson et al. (2001):
• Employee relationships
• Employee involvement
• Workforce
• Learning
• Labor efficiency
• Quality of work-life
• Resource utilization
• Productivity
5. Research Methodology
The consequences of the change termed BPR can be perceived in companies all over the world, including Slovenian companies (Herzog et al., 2006, 2007; Tennant, 2005). An exploratory survey research methodology was adopted to address the presented problem; the research was the first large-scale study on this theme carried out in Slovenia. It was divided into three phases:
(i) A wide-ranging analysis of the existing literature was conducted, aimed at determining the major dimensions of BPR.
(ii) A questionnaire was designed to investigate actual BPR practice. It was pre-tested on experts and pilot firms (as suggested by Dillman, 1978) and later sent by post to the general and plant/production managers responsible for, or participating in, the BPR project. The questionnaire contained 56 items designed on Likert scales.
(iii) The resulting data were subjected to reliability and validity analyses, and then analyzed using uni- and multivariate statistical techniques.
5.1. Data Collection and Measurement Analysis
The research was carried out in 179 Slovenian companies in the mechanical industry and 90 Slovenian companies in the electromechanical and electronic industries. The criterion for the choice of sample was company size: we limited it to medium- and large-sized companies, because the complexity of BPR activities is more pronounced in these companies. According to the Slovenian Companies Act (Ur. L. RS nr. 30/1993), companies are divided into small, medium, and large according to the number of employees (fewer than 50, from 50 to 249, and 250 or more, respectively) and according to revenue (less than EUR 0.83 million, from EUR 0.83 to 3.34 million, and above EUR 3.34 million, respectively). The response rate was very good for a postal survey (27.14%) and showed that firms were interested in the subject. The subsequent statistical analysis was therefore carried out on the results of the 73 companies which returned correctly completed questionnaires. Of the 73 companies analyzed, 53 belong to the mechanical and 20 to the electromechanical industries. To indicate the degree or extent of each item as practiced by their business unit, a five-point Likert scale (Rossi and Wright, 1983) was used, ranging from "strongly disagree" to "strongly agree". In determining the measurement properties of the constructs used in the statistical analysis, reliability and validity were assessed (Dick and Hagerty, 1971), using Cronbach's alpha and principal components analysis (PCA), respectively.
5.1.1. Reliability
Reliability has two components (Flynn et al., 1990): stability (in time) and equivalence (in terms of the means and variances of different measurements of the same construct). The main instruments for reliability assessment are the test–retest method (for stability) and Cronbach's alpha (for equivalence) (Cronbach, 1951). We concentrated on the second aspect, because these variables were being developed for the first time. All of the multi-item variables have a Cronbach's alpha of at least 0.6383 (0.6030 for single variables), and most have an alpha greater than 0.7 or even 0.8, well exceeding the guidelines set for the development of new variables (Nunnally and Bernstein, 1994).
5.1.2. Validity
The validity of a measure refers to the extent to which it measures what it was intended to measure. Three types of validity are generally considered: content validity, criterion-related validity, and construct validity. Content validity cannot be determined statistically, but only by experts and by reference to the literature. Criterion validity concerns the predictive power of the research instrument with respect to the objective outcome. Construct validity measures the extent to which the items in a scale all measure the same construct. We derived content validity from two extended reviews of the recent BPR literature. O'Neil and Sohal (1999) identified six main dimensions of BPR on the basis of a review of over 100 references covering the period from the late 1980s to 1998. They concluded that empirical research in BPR has been lagging and that this presents the academic community with a considerable opportunity: rigorous, empirically based research can help demystify the confusion that still surrounds BPR and, at the same time, enable a better understanding of how manufacturing companies function. Another source was an extended literature study based on 133 references (selected from an initial 900) performed by Motwani et al. (1998). These authors identified four main research streams in the BPR area and determined deficiencies and directions for further research. To establish criterion validity, each item of the questionnaire was critically reviewed by five academics in operations management at the University of Maribor (Slovenia) and the University of Udine (Italy), and also by three general managers from different manufacturing companies. Following the pre-tests, 142 items remained appropriate for conducting the research. Of the different properties that can be assessed from measurements, construct validity is the most complex and yet the most critical to substantive theory testing. A measurement has construct validity if it measures the theoretical construct or trait that it was designed to measure. Construct validity can also be established through the use of PCA.
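For readers unfamiliar with the equivalence check used here, the sketch below computes Cronbach's alpha for a small, invented multi-item variable. The formula is the standard one, alpha = k/(k − 1) · (1 − Σ item variances / variance of the total score); the response matrix is purely illustrative and is not the survey data.

# Cronbach's alpha for one multi-item variable (rows = respondents, columns = items).
# Responses on a five-point Likert scale; figures are invented for illustration.
import statistics

responses = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
]

k = len(responses[0])                                           # number of items
items = list(zip(*responses))                                   # one tuple per item
item_variances = [statistics.variance(col) for col in items]    # sample variance per item
total_scores = [sum(row) for row in responses]
total_variance = statistics.variance(total_scores)

alpha = k / (k - 1) * (1 - sum(item_variances) / total_variance)
print(f"Cronbach's alpha = {alpha:.3f}")   # values above 0.7 are conventionally considered acceptable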
At this point, PCA was carried out to uncover the underlying dimensions, eliminate problems of multicollinearity (Belsley et al., 1980) and, ultimately, reduce the number of variables to a limited set of orthogonal factors. First, each multi-item variable was factor-analyzed separately: where items loaded on more than one factor, the items responsible for the factors beyond the first were eliminated (or assigned to another variable) and Cronbach's alpha was recalculated. The variables presented here are all in their final version. A similar procedure was then adopted to group several variables into a more manageable set without surrendering too much information. Rotation was applied to aid interpretation. In interpreting the factor-loading matrix, only loadings above 0.5 were considered (except in a few cases where a variable is transverse to several factors): imposing such a limit retains only those variables which contribute to a high degree to the formation of a given factor, which is then named after the variables with the highest factor loadings.
6. Performance Indicators for BPR Evaluation
The survey and comparison of the theoretical PM models in the existing literature gave us an insight into the entire extent of the field. One of the basic problems faced when implementing a useful PM system is achieving a balance between a small number of key PMs (clear and simple, but possibly not reflecting all organizational goals) on the one hand, and a greater number of detailed measurements or performance indicators (complex and less convenient for management, but able to capture many different facets of performance) on the other. Generally, a compromise is reached by ensuring a clear connection between the chosen strategy, the key performance parameters which reflect the main performance goals, and a series of performance indicators for the individual key parameters. When dealing with individual PMs, the most important point is that they must derive from the strategy. Measuring can be a quantification process, but its main aim is to stimulate positive action and, as Mintzberg pointed out, strategy can be realized only through consistency between operations and performance. As the most important contribution of the survey, we can highlight the development of a system of indicators for BPR evaluation. Figure 2 shows the PM system for BPR, which we developed on the basis of real information, collected by questionnaire, from companies which have gone through this process. To form the new variables, we used methods which are not widespread in this field and which come from psychometrics. When forming the variables, we used a measurement instrument which was thoroughly examined for reliability and validity; we can therefore say with confidence that the newly developed variables are empirically based, reliable, and valid. The first subfield was designated for forming new variables for cost assessment in reengineering. After verifying reliability and validity, we designed new combined variables, which were then ranked according to importance on the basis of the coefficient of variation.
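The following is a minimal numerical sketch of how factor loadings above the 0.5 threshold can be read off a principal components analysis. The response matrix is invented, NumPy is assumed to be available, and the procedure is simplified (loadings from an unrotated PCA of the correlation matrix), so it only illustrates the idea rather than reproducing the study's exact analysis.

import numpy as np

# Invented Likert responses: rows = respondents, columns = questionnaire items.
X = np.array([
    [4, 5, 4, 2, 1],
    [3, 3, 4, 4, 4],
    [5, 5, 5, 1, 2],
    [2, 3, 2, 5, 5],
    [4, 4, 5, 2, 2],
    [3, 4, 3, 4, 5],
], dtype=float)

R = np.corrcoef(X, rowvar=False)               # item correlation matrix
eigenvalues, eigenvectors = np.linalg.eigh(R)  # PCA on the correlation matrix
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# Loadings = eigenvector * sqrt(eigenvalue); report only loadings above 0.5.
loadings = eigenvectors * np.sqrt(np.clip(eigenvalues, 0, None))
for item, row in enumerate(loadings[:, :2], start=1):   # first two components
    kept = [f"PC{j + 1}: {l:+.2f}" for j, l in enumerate(row) if abs(l) > 0.5]
    print(f"item {item}: {', '.join(kept) if kept else 'no loading above 0.5'}")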
Figure 2. Performance indicators for BPR evaluation — the system of indicators for estimating reengineering:
• Costs: material costs; work and maintenance costs; inventory costs; total productivity; money flow; costs of new product development
• Quality: internal quality; external quality
• Time: internal company time; external time
• Flexibility: product flexibility; process flexibility; general flexibility
• Reliability: delays; product inventory; employee reliability
• Customer satisfaction: general indicators of customer satisfaction; direct cooperation with the customer
• Human resources: absence from work; workforce qualification; promotion and character development of employees; working experience
Among the different types of costs in a company, the respondents attributed the greatest importance to the group of total productivity measures. Judging by the coefficient of variation, defined as the ratio between the standard deviation and the mean value of the survey results, opinions in the companies about total productivity were very uniform. Total productivity is followed by material costs, labor costs and services, and cash flow. If we connect the results of the study with the findings of numerous other authors, we find that productivity, as a cost-related PM, is the most widely treated in the literature. This raises the question: is the great importance that respondents attribute to productivity measurement perhaps a consequence of the wide coverage and promotion of productivity in the literature? We must not forget that productivity, traditionally defined as the relationship between total output and total input, still creates problems, not only in defining outputs and inputs but also in estimating their amounts (Burgess, 1990).
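The coefficient of variation used to rank the indicators is simply the standard deviation divided by the mean of the Likert responses; the sketch below, with invented responses, shows how a low value signals uniform opinions.

import statistics

# Invented five-point Likert responses for two indicators.
total_productivity   = [5, 5, 4, 5, 4, 5, 4, 5]
employee_reliability = [5, 2, 4, 1, 5, 3, 2, 4]

def coefficient_of_variation(scores):
    """Standard deviation divided by the mean (lower = more uniform opinions)."""
    return statistics.stdev(scores) / statistics.mean(scores)

for name, scores in [("total productivity", total_productivity),
                     ("employee reliability", employee_reliability)]:
    print(f"{name:21s} CV = {coefficient_of_variation(scores):.2f}")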
Craig and Harris (1973) propose that companies should focus on total rather than partial productivity measurements. This idea was also adopted by Hayes et al. (1988), who examined how companies could measure total productivity. From De Toni and Tonchia's (2001) research, we can deduce that traditional measurement systems, focusing especially on production costs and productivity, have been reengineered in response to changes in the competitive environment, particularly in the direction of measurements they call non-cost PMs, that is, PMs without a direct cost correlation. These measurements are becoming increasingly important. Non-cost performance is usually measured in non-monetary units; a direct, exact correlation with economic and financial statements is therefore impossible, which is what distinguishes it from cost-based performance. In the following text, we present the findings gathered from the questionnaire, in the sequence used in the questionnaire.
Quality was examined as the first dimension that is not in direct correlation with cost. Traditionally, quality has been defined in terms of conformance to specification, and quality-based measurements of performance therefore generally focus on measures such as the number of defects produced and the cost of quality. Quality costs are the specific costs that arise in a company when quality is not achieved the first time, and they commonly represent around 20% of the net sales price. When defining these costs, a question arises: does an optimal quality level actually exist? In the field of PM, the most appropriate viewpoint is proposed by Crosby, who warns that the majority of companies fail when integrating a quality-cost model into the management process: even if managers estimate the quality costs, they lack appropriate activities for lowering them. With the emergence of TQM, the emphasis moved from conformance to specification toward customer satisfaction. As a consequence, a larger number of customer satisfaction surveys and market research studies have emerged. This is reflected in the emergence of the Malcolm Baldrige National Quality Award in the USA and the European Quality Award in Europe. Other common measures of quality include SPC (Deming, 1982; Price, 1984) and Motorola's six-sigma concept. The two latter measurements are especially important for the design of PM systems, for they focus on the process and not on the output. On the sublevel of quality, we designed, on the basis of the survey results, two new combined variables — internal and external quality. The respondents attribute greater importance to external quality, which includes in-bound quality, customer satisfaction, quality perception, and delivery reliability. Somewhat lesser importance is attributed to internal quality, which includes the level of rework, warranty claims, costs of rework, and costs of the quality system.
Time is described as a source of competitive advantage and also as a basic PM. According to the JIT production philosophy, both early and late production or delivery represent a loss. Similarly, one of the goals of optimal production is minimizing flow times (Goldratt and Cox, 1986).
Galloway and Waldron (1988, 1989) developed a cost system based on time, also known as throughput accounting. The participants believe that times within the company, such as machine set-up times, waiting times, transport times, and inventory turnover, are very important for company operations. Although measuring flexibility is of great importance in academic circles and among managers, these types of measurements are still being developed, especially because flexibility is a multidimensional concept and because there are usually no indicators that can be obtained by direct measurement (Cox, 1989). Direct, objective flexibility measurements are very difficult to implement in practice. Because of the problems that arise with the direct definition of flexibility performance, different authors propose the use of indirect indicators. In this case, we must be familiar with unification procedures which include all the important individual indicators covering the different kinds of flexibility. The synthesis of the measurement must include clear rules on combining individual (elementary) and combined measurements, and the elementary data must be complete, homogeneous (related), and in the appropriate form to be combined optimally. The results of the descriptive statistics in the subfield of flexibility show that companies attribute greater importance to general flexibility, including the possibility of changing times, reaction to customer demands, and product innovation. As the most important measurement in the subfield of reliability, the participating companies pointed out delays in particular — the share of orders completed too late and the average order delay. The low value of the coefficient of variation shows the uniformity of opinions on this issue. Somewhat lesser importance is given to the reliability of employees, but opinions on this vary considerably. In the subfield of customer satisfaction, participants agreed on the great importance of direct cooperation with customers; they also attributed great importance to the other general customer satisfaction indicators. The descriptive statistics in the subfield of HR pointed to employees' education as the most valuable characteristic influencing the efficiency of the company. The opinion of participants about the importance of employee education is very uniform across mid-sized and large companies. HR (employees) are surely the most important resource of every company (Milost, 2001). Often, two equally large companies which are involved in similar activities and work in the same environment achieve substantially different business results. The reasons can be numerous, but the difference is usually a consequence of the different work abilities of employees, that is, of a different quality of HR. The problem that arises can be attributed to the fact that the work abilities of employees are not shown in classical balance sheets. Accounting assigns a value to individual events in company operations and thus shows only those assets and obligations which can be expressed in value terms. The result of this approach is that HR, the highest-quality and most important asset of a company, does not appear in balance statements.
This does not necessarily mean that the quality of HR in a company is treated as unimportant. The positive contribution of employees is usually mentioned when business results are presented, but a few dry sentences cannot express their real contribution to successful business operations.
At the end of this chapter, guidelines for follow-up research should be mentioned, especially on the basis of findings that surfaced during the research. Owing to the described problems with measuring employee abilities, a very interesting field of research is opening up in which it would be worthwhile to study and develop a series of subjective measurements of employee abilities and of employee integration in companies. The option of further studying the correlations between the individual newly developed variables also remains open.
References
Azzone G, C Masella and U Bertelè (1991). Design of performance measures for time-based companies. International Journal of Operations & Production Management, 11(3), 77–85.
Baiman S (2008). Special double issue on the use of accounting data for firm valuation and performance measurement. Review of Accounting Studies, 13(2–3), 167–167.
Bain D (1982). The Productivity Prescription — The Manager's Guide to Improving Productivity and Profits. New York: McGraw-Hill.
Belsley DA, E Kuh and RE Welsch (1980). Regression Diagnostics: Identifying Influential Data and Sources of Collinearity. New York: John Wiley & Sons.
Burgess TF (1990). A review of productivity. Work Study, January/February, 6–9.
Campanella J and FJ Corcoran (1983). Principles of quality costs. Quality Progress, April, 16–22.
Chen IJ and CH Chung (1996). An examination of flexibility measurements and performance of flexible manufacturing systems. International Journal of Production Research, 34(2), 379–394.
Chen CC and WY Cheng (2007). Customer-focused and product-line-based manufacturing performance measurement. International Journal of Advanced Manufacturing Technology, 32(11–12), 1236–1245.
Cooper R (1988). The rise of activity-based cost systems: Part II — When do I need an activity-based cost system? Journal of Cost Management, 41–48.
Cox T (1989). Towards the measurement of manufacturing flexibility. Production & Inventory Management Journal, 68–89.
Craig CE and CR Harris (1973). Total productivity measurement at the firm level. Sloan Management Review, 14(3), 13–29.
Cronbach LJ (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297–334.
Crosby PB (1972). Quality is Free. New York: McGraw-Hill.
Davenport TH and JE Short (1990). The new industrial engineering: Information technology and business process redesign. Sloan Management Review, 31(4), 11–27.
De Toni A and S Tonchia (2001). Performance measurement systems, models, characteristics and measures. International Journal of Operations & Production Management, 21(1/2), 46–70.
Deming WE (1982). Quality, Productivity and Competitive Position. Cambridge: MIT.
Dick W and N Hagerty (1971). Topics in Measurement: Reliability and Validity. New York: McGraw-Hill.
Dillman DA (1978). Mail and Telephone Surveys: The Total Design Method. New York: John Wiley & Sons.
Eccles RG (1991). The performance measurement manifesto. Harvard Business Review, 69(1), 131–137.
Edwards C and J Peppard (1994). Forging a link between business strategy and business reengineering. European Management Journal, 12(4), 407–416.
Edwards C and J Peppard (1998). Strategic Development: Methods and Models. New York: Jossey-Bass.
Feigenbaum AV (1961). Total Quality Control. New York: McGraw-Hill.
Flynn BB, S Sakakibara, RG Schroeder, KA Bates and EJ Flynn (1990). Empirical research methods in operations management. Journal of Operations Management, 9(2), 250–285.
Fooks JH (1992). Profiles for Performance: Total Quality Methods for Reducing Cycle Time. Reading, MA: Addison-Wesley.
Galloway D and D Waldron (1988). Throughput accounting part 1 — The need for a new language for manufacturing. Management Accounting, November, 34–35.
Galloway D and D Waldron (1988). Throughput accounting part 2 — Ranking products profitably. Management Accounting, December, 34–35.
Galloway D and D Waldron (1989). Throughput accounting part 3 — A better way to control labour costs. Management Accounting, January, 32–33.
Galloway D and D Waldron (1989). Throughput accounting part 4 — Moving on to complex products. Management Accounting, February, 40–41.
Goldratt EM and J Cox (1986). The Goal: Beating the Competition. Hounslow: Creative Output Books.
Hammer M (1990). Reengineering work: Don't automate, obliterate. Harvard Business Review, 68(4), 104–112.
Hammer M and J Champy (1993). Reengineering the Corporation: A Manifesto for Business Revolution. New York: Harper Business.
Hayes RH, SC Wheelwright and KB Clark (1988). Dynamic Manufacturing: Creating the Learning Organisation. New York: Free Press.
Herzog NV, A Polajnar and P Pizmoht (2006). Performance measurement in business process re-engineering. Journal of Mechanical Engineering, 52(4), 210–224.
Herzog NV, A Polajnar and S Tonchia (2007). Development and validation of business process reengineering (BPR) variables: A survey research in Slovenian companies. International Journal of Production Research, 45(24), 5811–5834.
House CH and RL Price (1991). The return map: Tracking product teams. Harvard Business Review, January–February, 92–100.
Hudson M, A Smart and M Bourne (2001). Theory and practice in SME performance measurement systems. International Journal of Operations & Production Management, 21(8), 1096.
Johansson HJ, P McHugh, AJ Pendlebury and WA Wheeler (1993). Business Process Reengineering: Breakpoint Strategies for Market Dominance. New York: John Wiley and Sons.
Johnson HT (1975). The role of history in the study of modern business enterprise. The Accounting Review, July, 444–450.
Johnson HT (1983). The search for gain in markets and firms: A review of the historical emergence of management accounting systems. Accounting, Organizations and Society, 2(3), 139–146.
Johnson HT and RS Kaplan (1987). Relevance Lost — The Rise and Fall of Management Accounting. Boston, MA: Harvard Business School Press.
Leong GK, DL Snyder and PT Ward (1990). Research in the process and content of manufacturing strategy. OMEGA International Journal of Management Science, 18(2), 109–122.
Loewenthal JN (1994). Reengineering the organization: A step-by-step approach to corporate revitalization. Quality Progress, 27(2).
Marchand M and L Raymond (2008). Researching performance measurement systems — An information system perspective. International Journal of Operations & Production Management, 28(7–8), 663–686.
Milost F (2001). Računovodstvo človeških zmožnosti (in Slovenian). ISBN 961-6268-59-7.
Motwani J, A Kumar, J Jiang and M Youssef (1998). Business process reengineering: A theoretical framework and an integrated model. International Journal of Operations & Production Management, 18(9/10), 964–977.
Murphy JC and SL Braund (1990). Management accounting and new manufacturing technology. Management Accounting, February, 38–40.
Neely A (1998). Measuring Business Performance. London: The Economist in Association with Profile Books Ltd.
Neely A (1999). The performance measurement revolution: Why now and what next? International Journal of Operations & Production Management, 19(2), 205–228.
Neely A, M Gregory and K Platts (1995). Performance measurement system design: A literature review and research agenda. International Journal of Operations & Production Management, 15(4), 80–116.
Nunnally JC and IH Bernstein (1994). Psychometric Theory, 3rd Edn. New York: McGraw-Hill.
O'Neil P and AS Sohal (1999). Business process reengineering: A review of recent literature. Technovation, 19, 571–581.
Petrozzo DP and JC Stepper (1994). Successful Reengineering. New York: Van Nostrand Reinhold.
Price F (1984). Right First Time. Aldershot: Gower.
Rossi PH, JD Wright and AB Anderson (1983). Handbook of Survey Research. New York: Academic Press.
Ruch WA (1982). The measurement of white-collar productivity. National Productivity Review, Autumn, 3, 22–28.
Sarkis J, A Presley and D Liles (1997). The strategic evaluation of candidate business process reengineering projects. International Journal of Production Economics, 50, 261–274.
Schniederjans MJ and GC Kim (2003). Implementing enterprise resource planning systems with total quality control and business process reengineering. International Journal of Operations & Production Management, 23(4), 418–429.
Short JE and N Venkatraman (1992). Beyond business process redesign: Redefining Baxter's business network. Sloan Management Review, 34(1), 7–21.
Slack N and M Lewis (2002). Operations Strategy. Pearson Education Limited.
Slack N, S Chambers and R Johnston (2001). Operations Management, 3rd Edn. London: Pearson Education Limited.
Talwar RR (1993). Business re-engineering — A strategy-driven approach. Long Range Planning, 26(6), 22–40.
Tennant C (2005). The application of business process reengineering in the UK. The TQM Magazine, 17(6), 537–545.
Tinnilä M (1995). Strategic perspective to business process redesign. Business Process Management Journal, 1(1), 44–59.
Tonchia S (2000). Linking performance measurement system to strategic and organizational choices. International Journal of Business Performance Measurement, 2(1/2/3).
Upton DM (1994). The management of manufacturing flexibility. California Management Review, 36(2), 72–89.
Vantrappen H (1993). Creating customer value by streamlining business processes. Long Range Planning, 25(1), 53–62.
Venkatraman N (1994). IT-enabled business transformation: From automation to business scope redefinition. Sloan Management Review, Winter, 73–87.
Wheelwright SC (1984). Manufacturing strategy — Defining the missing link. Strategic Management Journal, 5, 77–91.
Yung WK-C and DT-H Chan (2003). Application of value delivery system (VDS) and performance benchmarking in flexible business process reengineering. International Journal of Operations & Production Management, 23(3), 300–315.
Zairi M (1997). Business process management: A boundaryless approach to modern competitiveness. Business Process Management Journal, 3(1), 64–80.
Zairi M and D Sinclair (1995). Empirically assessing the impact of BPR on manufacturing firms. International Journal of Operations and Production Management, 16(8), 5–28.
Biographical Note
Dr. Natasa Vujica Herzog is an Assistant Professor in the Laboratory for Production and Operations Management at the Faculty of Mechanical Engineering in Maribor (Slovenia). She received her M.Sc. and Dr.Sc. degrees in Mechanical Engineering at the Faculty of Mechanical Engineering, Maribor, in 2000 and 2004, respectively. She is the author of more than 70 refereed publications, many of them in international journals, scientific books, and monographs. Her research area is operations and production management, in particular business process reengineering (BPR), performance measurement (PM), lean manufacturing (LM), and Six Sigma. She acquired further knowledge and research experience at two other European universities: the University of Udine, Italy, granted her a three-month scholarship for research work with Prof. Stefano Tonchia in the Department of Business & Innovation Management, and she spent several months at the University of Technology, Graz, Austria, working with Prof. Wohinz at the Institute for Industrial Management and Innovation Research. She is a member of the Performance Measurement Association (PMA) and the European Operations Management Association (EUROMA).
Chapter 7
Value Chain Re-Engineering by the Application of Advanced Planning and Scheduling YOHANES KRISTIANTO∗ , PETRI HELO† and AJMAL MIAN‡ University of Vaasa, Department of Production, P.O. Box 700, 65101 Vaasa, Finland ∗
[email protected] †
[email protected] ‡
[email protected]
The general purpose of the chapter is to present a novel approach to value chain re-engineering by utilizing the new concept of Advanced Planning and Scheduling (APS). The methodology applies collaboration among suppliers, buyers, and customers to fulfill orders. The models show that it is possible to re-engineer the value chain by incorporating the supply side (suppliers) and the demand side (customers) within the new APS concept. A problem example is given to show how to implement this concept, emphasizing important aspects of the supplier and customer relationship. The concept, however, does not take into account the importance of the service and customer interface or of transport optimization; hence the effect of customer requirements cannot be measured. In terms of managerial implications, this chapter maintains that the value chain should incorporate procurement and product development into the main value chain activities, since both activities communicate more actively with customers. The innovation of this chapter lies in including product commonality and response analysis in the simulation model.
Keywords: Value chain; advanced planning; supply chain management; scheduling; managerial flexibility; market share.
1. Introduction
Meeting customer requirements by customizing the manufacturing strategy is one of the strategic goals that has challenged supply chain managers over time. The need for customization has been reshaping manufacturing since the 1990s: mass production has been shifting toward mass customization, with a competitive landscape featuring, for instance, process re-engineering and differentiation, which force the manufacturer to be more flexible and to respond more quickly (Pine, 1993). This trend has, however, been adopted only slowly: up to 60% of the research articles on the topic were published just after 2001–2003 (Du et al., 2003), with about 60,000 hits during this period. Furthermore, the current trend of mass customization is reflected in the
emergence of the personalization concept instead of customization (Kumar, 2008; Vesanen, 2007). These recent authors note that nowadays the firm needs to differentiate itself not only in manufacturing but also in marketing, by satisfying the cumulative requirements of price, quality, flexibility, and agility at an affordable price through the application of information and operational technologies. This trend, however, forces the firm to re-engineer its value chain in order to meet these requirements. Pine (1993) proposed four types of value chain re-engineering based on the differentiation of customization stages. In general, differentiation is categorized according to product and service standardization or customization; a higher degree of customization in the value chain processes leads to quick-response manufacturing. The idea, however, followed Porter's value chain concept without making a breakthrough toward the new phenomenon of mass customization. Starting from this idea, this chapter applies advanced planning and scheduling (APS) to customize the value chain from the back end (supply) to the front end (demand).
1.1. Value Chains and APS
The value chain, as a chain of activities, gives products more added value than the sum of the added values of all individual activities (see Fig. 1) (Porter, 1985). It is important to maximize value creation by incorporating support activities, for instance technology development and procurement. Added value is created by exploiting the upstream and downstream information flowing along the value chain, and firms may try to route this information to an automated decision maker to create improvements in their value system. With regard to value chain re-engineering, this chapter develops a new value chain model by referring to the hierarchical planning tasks of APS. The reason behind this decision is that both the Michael Porter value chain and the strategic network planning of the APS model share the same vision of creating added value across order fulfillment processes.
Figure 1. Michael Porter value chain model (support activities: firm infrastructure, human resources, technology development, procurement; primary activities: inbound logistics, operations, outbound logistics, marketing and sales, services).
Figure 2. Michael Porter value chain model and APS decision flow (primary activities mapped onto the APS flow of procurement, production, distribution, and sales).
Figure 3. Proposed value chain model (collaboration linking marketing and sales, product development, operations, service, and purchasing).
The relationship is depicted in Fig. 2. Building on it, this chapter proposes a new value chain model, as follows. Figure 3 depicts the new concept of the value chain, running from marketing and sales to product development and procurement. New product development receives information from marketing while, at the same time, back-end operations (the purchasing department) coordinate operations and suppliers simultaneously to fulfill customer demand by optimizing capacity. The model spreads customer information directly to two different sides: the external relation (the suppliers) and the internal relation (the manufacturer). It applies collaboration to improve customer value through dynamic material planning. Unlike the traditional approach, the model applies collaboration in every order fulfillment process to synchronize supply and production capability on a real-time basis, granting equal benefit to the manufacturer and the supplier. The value chain then continues to distribution and transport planning, which optimize the entire supply chain by choosing the best distribution channels and transportation. With regard to APS, Fleischman (2002) describes the hierarchical planning tasks (see Fig. 4), which, at a glance, set out the application of value chains from the strategic to the short-term level. The details are represented in Fig. 5 by incorporating the support and primary value chain activities, as follows. Figure 4 describes task deployment from strategy (long-term planning) to operations (short-term), which is further detailed by developing the structure of the hierarchical planning tasks from the Supply Chain Planning Matrix (Stadtler, 2005).
Figure 4. Hierarchy of planning tasks, from long-term (aggregate, comprehensive) through mid-term to short-term (detailed) (from Fleischman et al., 2002).
The authors propose two collaboration interfaces, toward customers and toward suppliers, as depicted in Fig. 5. With regard to the mass customization issue, this arrangement helps supply chains become more flexible by assessing each function's core competence within the supply chain and exploring the possibility of strategic sourcing instead of in-house manufacturing. In this chapter, we propose an APS methodology that links internal and external operational planning within supply chains and thereby enables collaboration between APS systems (Fig. 5). Unfortunately, this opportunity is poorly supported by existing APS functionality, which can be characterized as follows:
1. In practice, APS usually concentrates on managing production planning and scheduling using sophisticated algorithms. Figure 5, however, ignores the collaboration between the supplier's available-to-promise (ATP) and the buyer's Material Requirements Planning (MRP); it assumes that the supplier has infinite production capacity and fixed lead times, and it ignores the production schedule and sequence (Chen and Ji, 2007).
Figure 5. Collaboration between APS (from Meyr, 2002): the supply chain planning matrix spans procurement, production, distribution, and sales, covering strategic network planning, master planning, demand planning, demand fulfillment and ATP, purchasing and material requirements planning (MRP), production planning, scheduling, distribution planning, and transport planning, with collaboration interfaces toward suppliers and customers.
2. In addition to the lack of MRP and scheduling synchronization, APS does not allow for activity outsourcing or manufacturing strategy customization. Instead, this chapter proposes an optimized push–pull manufacturing strategy as well as sourcing strategy optimization. The advantages of this approach are that the manufacturer can reduce production traffic by outsourcing some activities, and can promise delivery promptness by using promised lead times in the ATP module and by carrying out collaborative material planning in which the supplier's and buyer's production schedules are synchronized according to production capacity (a small numeric ATP illustration follows Fig. 6).
3. Integration with the Agile Supply Demand Network (ASDN) adds benefit to this APS model through its ability to reconfigure the supply chain network and to measure the value of an order by financial analysis.
Figure 6 represents the APS scheme and shows the difference between the new and the existing APS. This new APS model is developed to represent value chain re-engineering. Concurrent engineering is reflected in customer and supplier involvement in the process: R&D is included in purchasing, and customer involvement is included in order to describe supplier responsibility for product design. At the same time, MRP is excluded from the model to represent dynamic material planning; as a replacement, we use collaborative material planning in order to emphasize supply synchronization.
Figure 6. Proposed APS model: across procurement, production, and distribution it links strategic network planning, master planning, demand planning, collaborative material planning, production planning, scheduling, and demand fulfillment and ATP, with information, decision, and physical flows connecting the modules.
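As a small illustration of the demand fulfillment and ATP module referred to above, the sketch below computes the quantity that can still be promised per weekly bucket from planned receipts and already committed orders. The data and the simple uncommitted-supply logic are illustrative assumptions, not the authors' algorithm.

# Simple available-to-promise (ATP) sketch over weekly buckets.
on_hand = 40
planned_receipts = [0, 60, 0, 80, 0, 50]     # master-schedule receipts per week
committed_orders = [25, 30, 20, 35, 15, 10]  # customer orders already booked per week

atp = []
cumulative = on_hand
for receipt, committed in zip(planned_receipts, committed_orders):
    cumulative += receipt - committed        # supply not yet consumed by bookings
    atp.append(max(cumulative, 0))           # quantity that can still be promised

for week, qty in enumerate(atp, start=1):
    print(f"week {week}: can promise up to {qty} units")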
Despite this new approach, the chapter is organized according to the logic of a common APS. First there is a brief introductory discussion of APS (Sec. 1.2). On the internal coordination side, demand planning is discussed in Sec. 2.1; it informs master planning (Sec. 2.2) to enable ATP (Sec. 2.3) by fulfilling the promised lead time (Sec. 4.3.1) and inventory level (Sec. 4.3.2) and by optimizing the production sequence and schedule (Sec. 4.4). On the external coordination side, material planning (Sec. 4.5) and network planning (Sec. 4.6) are also optimized. Moreover, the APS is able to optimize the supply strategy (Sec. 4.2.2) as well as the product development process (Sec. 4.2.4). The key feature of this APS is profit optimization for the entire supply chain through simulation in the ASDN software (Sec. 4.6).
1.2. APS
Advanced Planning and Scheduling (APS) can be defined as a system and methodology in which decision making, such as planning and scheduling for industries, is federated and synchronized between different divisions within or between enterprises in order to achieve total and autonomous optimization. Unlike other available systems, APS simultaneously plans and schedules production based on available resources and capability, which usually provides a more realistic production plan (Chen and Ji, 2007). APS is generally applied where one or more of the following conditions are satisfied:
• Make-to-order manufacturing instead of make-to-stock
• Products requiring a large number of components or manufacturing tasks
• A capital-intensive manufacturing process where capacity is limited
• Products competing with each other for the same resources
• Unstable resource-scheduling situations that cannot be planned beforehand
• The need for a flexible manufacturing approach
Advanced Planning and Scheduling (APS) improves the integration of materials and capacity planning by using constraint-based planning and optimization (Chen, 2007; van Eck, 2003). There are possibilities to include suppliers and customers in the planning procedure and thereby optimize a whole supply chain on a real-time basis. APS utilizes planning and scheduling techniques that consider a wide range of constraints to produce an optimized plan (van Eck, 2003), for example:

• Material availability
• Machine and labor capacity
• Customer service level requirements (due dates)
• Inventory safety stock levels
• Cost
• Distribution requirements
• Sequencing for set-up efficiency

Furthermore, in the area of supply chain planning there has been a trend to embed sophisticated optimization logic into APS to help improve the decisions of supply chain planners. Used successfully, it not only supports the supply chain strategy but also improves the competitiveness of the firm significantly. Some areas of possible improvement are listed below (Stadtler, 2002):

• Improved competitiveness
• A more transparent process
• Improved supply chain flexibility
• Revealed system constraints
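To make the idea of constraint-based planning concrete, the following is a minimal, hypothetical sketch (it is not the chapter's model, and all data and names in it are invented for illustration): orders are promised greedily against finite per-period capacity and material availability, and late orders are flagged.

```python
# Toy illustration of constraint-based planning: promise orders against finite
# capacity and material availability (hypothetical data, not the chapter's model).
from dataclasses import dataclass

@dataclass
class Order:
    name: str
    qty: int
    due_period: int

CAPACITY_PER_PERIOD = 100    # units a single resource can build per period (assumed)
MATERIAL_PER_PERIOD = 120    # units of material that arrive each period (assumed)

def plan(orders):
    """Greedily schedule orders in due-date order; return (order, finish period, on time?)."""
    orders = sorted(orders, key=lambda o: o.due_period)
    period, capacity_left, material_left = 1, CAPACITY_PER_PERIOD, MATERIAL_PER_PERIOD
    schedule = []
    for o in orders:
        remaining = o.qty
        while remaining > 0:
            build = min(remaining, capacity_left, material_left)
            if build == 0:                      # resource or material exhausted: move to next period
                period += 1
                capacity_left, material_left = CAPACITY_PER_PERIOD, MATERIAL_PER_PERIOD
                continue
            remaining -= build
            capacity_left -= build
            material_left -= build
        schedule.append((o.name, period, period <= o.due_period))
    return schedule

if __name__ == "__main__":
    demo = [Order("A", 80, 1), Order("B", 90, 2), Order("C", 60, 2)]
    for name, finish, on_time in plan(demo):
        print(f"{name}: finishes period {finish}, on time: {on_time}")
```

The sketch only illustrates the principle that a promise date follows from resource and material constraints rather than from fixed lead times; commercial APS engines use far richer constraint models and optimization solvers.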
Furthermore, Fleischmann et al. (2002) mention three main characteristics of APS:

1. Integral and comprehensive planning of the entire supply chain, from supplier to end customer.
2. True optimization, by properly defining alternatives, objectives, and constraints.
3. A hierarchical planning system from top to bottom, requiring cooperation among the various tasks in the entire supply chain.

2. Architecture of Proposed APS

With regard to the need for personalization across the whole value chain, this chapter tries to fill the gap between this requirement and the existing APS, looking for the following benefits:

1. Within value chain building, the most important thing is to maximize value for customers. This chapter supports this requirement by proposing a reconfigurable push-pull manufacturing strategy. The strategy can adapt to Bill-of-Materials (BOM) changes by reconfiguring the push-pull manufacturing strategy (front side); to support it, this APS also optimizes product commonality to minimize the inventory level as well as production lead times (back side).
2. Within e-customization, the customer deals directly with the manufacturer. The issue that appears is how to minimize customer losses (time and options) and, at the same time, manufacturer losses (overhead costs, for instance extra administration and order costs). This APS model can minimize both burdens by offering an optimum design platform to the customer and the suppliers and a reasonable inventory allocation using the push-pull manufacturing strategy (Fig. 7).
Figure 7. APS model connection to ERP and SC Execution Planning (SCEP). (ERP (Sales) supplies the bill of material, order lead times, and order locations; the APS covers demand planning, master planning, and distribution and transport planning; SC execution planning reports total inventory value, total profit, and total lead times.)
With regard to the integration issue, the APS modules are composed and elaborated as follows.

2.1. Demand Planning

Before going ahead with any production planning process, it is important to calculate the level of demand within the company. Wagner (2002) describes the three main parts of demand planning: forecasting, what-if analysis, and safety stock calculation. The purpose of forecasting is to produce a prediction of future demand. What-if analysis is used as a risk management tool to determine the safety stock level; it ensures proper utilization of space, minimizes costly inventory, and brings integrity to the company's supply chain and logistics network. Demand planning therefore requires forecasting, and what-if analysis is conducted to calculate the optimal required inventory and safety stock level. This chapter, however, presents an order-based APS in which forecasting is only conducted within the push manufacturing strategy.

2.2. Master Planning

Master planning is used to balance supply and demand by synchronizing the flow of materials within the supply chain (Meyr et al., 2002). The capacity decision from demand planning is used to set the product and material price and the manufacturing strategy, considering the lead times and inventory availability from ATP and the suppliers' capability from collaborative material planning. Master planning is also supported by production schedule information received from the production planning and scheduling module (see Fig. 8).

2.3. ATP

ATP is used to guarantee that customer orders are fulfilled on time and, in certain cases, even faster; the logic is shown in Fig. 9. Figure 9 shows three customers with different requirements who are situated at different locations. ATP optimizes the assignment of resources such as materials, semi-finished goods (sub-assemblies), and production capacity to guarantee that all
Figure 8. Decision and information sequence within APS. (Strategic network planning, covering agile supply demand networks, transportation optimization, and distribution centre optimization, exchanges production capacity information with demand planning; master planning covers the push-pull manufacturing strategy, supply strategy, product and material price, and design strategy; collaborative material planning covers dynamic material orders, warehouse stocks, and inventory requirements; Available to Promise (ATP) returns promised lead times; production planning and scheduling sets the production sequence and schedule. The arrows distinguish decision flows from information flows.)
Figure 9. Available-to-Promise. (Three customers with different requirements and locations draw on shared resources: materials, sub-assemblies, and production capacity.)
orders are fulfilled on time. The model is further constrained by the inventory level, order batch size, supplier capability, and set-up cost; these search dimensions are applied one by one in order to fulfill the customer's request. It is easy to see that the model emphasizes an iterative approach to solving the ATP problem. The ATP problem, however, goes beyond this idea: the promise must be fulfilled by the supplier, the manufacturer, and the
distributors. This idea supports ASDN by moving the APS paradigm from an enterprise APS to a supply chain APS (see Fig. 6). Related to this idea, this chapter shifts some ATP tasks to master planning by customizing the push-pull manufacturing strategy for each product type and assessing the supply strategy according to the sourcing options; the ATP module's functions are thus limited to inventory level and lead-time optimization. The impact of this step can be explained in two ways. First, the global decision within the supply chain better reflects responsibility on all sides (distributors, manufacturers, and suppliers), so that resource assignment can also be developed across supply chains. Second, it is easier to expand supply network planning in the future by gradually adding new members to the supply chain; this is reasonable since, for example, if demand keeps increasing and a component comes to need more than two suppliers, the APS can collaborate with them.

2.4. Production Planning and Scheduling

This module is intended for short-term planning within APS: it sequences the production activities in order to minimize production time. Stadtler (2002b) describes a model for building a production schedule, shown in Fig. 10. The procedure extracts daily operational information from the ERP, such as locations, parts, bills-of-material (BOM), production routing, supplier information, set-up matrices, and timetables
Figure 10. Production planning and scheduling procedure (from Fleischmann, 2002): (1) model building; (2) extracting the required data from the ERP system and master planning; (3) generating a set of assumptions (a scenario); (4) generating an initial production schedule; (5) analysis of the production schedule and interactive modification; (6) checking whether the scenario is acceptable; (7) executing and updating the production schedule via the ERP system until an "event" requires re-optimization.
(Stadtler, 2002b). This chapter applies a similarly optimized scheduling to all products by using a Traveling Salesperson Problem (TSP) algorithm.

2.5. Collaborative Material Planning

In the traditional operations management approach, material requirements follow a top-down hierarchy: planning starts with the Master Production Schedule (MPS), which is then detailed into Material Requirement Planning (MRP) while ignoring capacity constraints and assuming fixed lead times. This chapter, however, replaces the MPS and MRP functions with collaborative material planning (see Fig. 6), consisting of supplier and buyer integration through a system dynamics approach (see Fig. 8), following a supply synchronization model and replacing MRP with collaborative material planning (Holweg et al., 2005). It is interesting that the model incorporates purchasing and product development, which provides master planning not only with the internal capability (ATP and production planning) but also with the supplier capability, that is, how long at most and how many components the supplier can deliver.

2.6. Distribution and Transport Planning

Distribution planning is closely tied to the transport agreements for shipping goods from manufacturers to customers. Shipments can go directly from the factory or from distribution centers to customers, depending on order types and distances. This distribution channel enhances supply chain integration among manufacturers, distributors, and customers, who need to plan ahead of time. Furthermore, integrated transport planning decreases cost substantially: relatively small shipments incur higher costs than larger ones. Distribution and transportation costs also depend on the locations of factories, suppliers, DCs (distribution centers), and TPs (transshipment points). The relationship between the distribution and transport planning module and the other APS modules, as described by Fleischmann (2002), is summarized in Fig. 11.

In this chapter, ASDN is used to investigate the profitability of supply chain networks by considering transportation as well as distribution centers. Using information from demand and master planning, ASDN lets us find the supply chain profit, inventory value, and total lead times, even though this software omits iterative procedures for network optimization. The model can be represented as strategic network planning, as below.

2.6.1. Strategic networks planning

In strategic network planning, firms generally focus on long-term strategic planning and the design of their supply chain (see Fig. 6). It is therefore related to long-term
Figure 11. Distribution and transport interfaces. (Strategic network planning supplies the locations of factories, suppliers, DCs, and TPs, the transport modes and paths, and the supplier and customer allocation; demand planning supplies delivered customer orders, DC demand forecasts, and DC safety stocks; master planning supplies the aggregate quantities to be shipped on every transport link and the seasonal stock dynamics at warehouses and DCs; production scheduling supplies net requirements, timed at the planned departure of shipments from the factory, and planned and released production orders. All of these feed distribution and transport planning.)
decisions, such as plant location and the physical distribution structure (Meyr et al., 2002). During the process, certain compulsory information, for instance the product family structure and market share, potential suppliers, and manufacturing capability, is used to decide whether the planning concerns expansion or collaboration. For example, a car company may wish to expand into a new market area; it may choose to develop its own business by locating facilities there (factories, distribution centers, and warehouses) or by consolidating with an existing company. It is also possible to re-evaluate a previous strategic plan, for instance when the manufacturer intends to relocate its factories to a country with cheaper labor; this brings advantages such as a cheap labor market, low raw material costs, and opportunities in new local markets. Because of its impact on long-term profitability and competitiveness, the planning depends on aggregate demand forecasting and economic trends in the market. It is therefore a challenging task, since the planning horizon ranges from 3 to 10 years, over which all the decision parameters may change, for instance customer demand behavior, market power, and supplier capability. The task becomes complicated if companies execute their strategic planning infrequently and do not update it periodically. The main objective of this type of planning in relation to value chain re-engineering is to reconfigure the manufacturing process, which is embodied in developing ASDN (Fig. 12). The model therefore collects information from medium- and short-term planning, for instance vendors and distribution facilities among suppliers, distributors, and manufacturers, to be optimized against the product configuration. The interfaces among them are depicted as follows.
Figure 12. Strategic network planning and customer needs alignment. (Sales planning and the ATP/material planning modules feed demand and master planning, which in turn feeds strategic network planning (ASDN).)

Figure 13. ASDN approach for networks design. (Demand parameters, such as the importance of delivery time, available to promise, and On Time Delivery (OTD), and supply parameters, such as capacity, time delays, OTD, and quality, enter the enterprise strategy and the ASDN network model. The model covers demand pattern and distribution variation, the supply demand network strategy, lot sizing decisions and ordering policies (lot-for-lot, periodic, ABC analysis), the architecture of the network and the order decoupling point location policy (MTS, ATO, MTO, ETO), sales and operations planning, inventory execution, and cycle stock/safety stock.)
Figure 12 depicts the planning connection to the product database, which is used to reconfigure demand and master planning, which in turn reconfigure the strategic network planning. The details of the ASDN operations are represented in Fig. 13. Beforehand, it is useful to study existing APS software in order to find a path for improvement. This chapter takes two APS software examples, SAP APO and ASPROVA APS, which are described in more detail in the next section.
3. Contribution to APS Software Development

APS has increasingly been used instead of Enterprise Resource Planning (ERP) and is implemented in several commercial software packages, for example SAP APO and ASPROVA. This chapter looks beyond a comparison to the possible further development of such software in view of the architecture above.

3.1. SAP Advanced Planner and Optimizer (APO)

SAP Advanced Planner and Optimizer (APO) is a well-known example of an APS software package. SAP APO is designed to support the planning and optimization of a supply chain and works both linked to ERP packages and on its own. Many other software packages follow the same structure (Buxmann and König, 2000, p. 100):

1. The planning modules consist of procedures for "Demand Planning," "Supply Network Planning," "Production Planning and Detailed Scheduling," and "Available to Promise."
2. The user interface (UI), "The Supply Chain Cockpit," gives the possibility of visualizing and controlling the structure of logistics chains. The UI provides a graphical representation of networks of suppliers, production sites, facilities, distribution centers, customers, and transshipment locations. Additionally, the Alert Monitor engine makes it possible to track supply chain processes and identify event-initiating problems and bottlenecks.
3. The solver is an optimization engine that employs various algorithms and solution procedures for solving supply chain problems. Forecast modeling techniques such as exponential smoothing and regression analysis are built in for demand planning, and branch-and-bound procedures and genetic algorithms are available for production and distribution planning.
4. Simulation of changes is enabled by an architecture for computing- and data-intensive applications that allows simulation, planning, and optimization activities to run in real time.

In this software, optimization is bounded by the optimization range and the resource allocation. The optimization range differs according to whether the optimization horizon or the resources are transferred: the optimization horizon optimizes each activity in the optimization range, subject, however, to the interrelation between activities in these two regions. The fixed activities determine their action according to their flexibility. The relationship table for scheduling optimization is given in Table 1.

Another SAP APO facility is network design. Network design creates an analysis of entire networks with regard to locations, transportation networks, facility location, and even the analysis of current territorial divisions. In practice, these designs
Table 1. Relationship Table in Scheduling Problem by SAP APO.

1st activity   2nd activity   Relationship       Definition
Fixed          Non fixed      Maximum interval   Latest start or finish date
Fixed          Non fixed      Minimum interval   Earliest start or finish date
Non fixed      Fixed          Maximum interval   Earliest start or finish date
Non fixed      Fixed          Minimum interval   Latest start or finish date
comprise inbound and outbound logistics planning, such as sourcing decisions, transportation mode determination, and warehouse location evaluation, according to different demand and supply patterns, varying costs, and capacity constraints.

The discussion of SAP APO leads to the following conclusions:

1. The user interface in SAP APO helps the APS planner to investigate the profit performance of the entire supply chain; this chapter uses ASDN for the same objective.
2. The solver optimizer is used in SAP APO to optimize scheduling problems and demand forecasting. This chapter, instead, applies an optimization tool to optimize the supply and manufacturing strategy, extending the function of the optimizer from the operational to the tactical and strategic levels.
3. SAP APO excludes supply-side optimization in long-term planning (Stadtler, 2005), which is important to support ATP. The new model puts this planning higher in the hierarchy by positioning material planning collaboration to comprise the product development, procurement, and production functions.
4. Alongside these advantages, the model has a limitation related to distribution and transport planning, where the optimizer still needs to be developed.

3.2. ASPROVA APS

ASPROVA APS is built according to the following logic. Figure 14, taken from the ASPROVA APS main menu, exhibits the production scheduling process: order and shop floor data are received to build a production schedule, and the scheduling operator receives master data (production capability) in order to issue work instructions and purchase orders to the suppliers. ASPROVA APS, however, is concerned with scheduling operations rather than with the whole set of APS components, for instance demand planning, master planning, and transportation and distribution scheduling. Some limitations of this software are:

1. ASPROVA APS does not apply demand planning, for instance capacity or manufacturing strategy planning;
Figure 14. ASPROVA APS operation image. (Order data, master data, and shop floor data enter the scheduling engine, whose results are issued as work instructions and purchase orders.)
2. ASPROVA APS does not visualize supply chain network optimization; and
3. as a consequence of these two limitations, ASPROVA APS cannot link itself to a supply chain execution program or to ERP, and remains a stand-alone tool.

4. Problem Example

Below is an example of the APS application in the truck industry, represented in Fig. 15. The varieties of the product structure are illustrated in Tables 2 and 3. Using this example, this section explains the modeling step by step, as follows.

4.1. Demand Planning

The demand planning process originates from forecasting, followed by capacity planning, promised lead times, the push-pull manufacturing strategy, and the material and inventory requirements. The planning is shown in detail using the following example.

4.1.1. Forecasting

Forecasting is required for long-term capacity planning rather than weekly demand, because this APS is intended to customize orders. This chapter does not discuss forecasting techniques in depth, because any available technique can be used depending on the demand pattern; in general, time series analysis can be applied, assuming that demand increases as markets and customers expand continuously.

4.1.2. Capacity decision

The capacity decision is established first to inform the firm's supply and manufacturing strategy. This chapter applies the newsboy problem to
Figure 15. Bill-of-Material (BOM). (The truck comprises a body, engine, power train with gear box, and chassis; the chassis carries the frame, suspension with front and rear axles, and the front and rear wheels, each built from a tire, rim, and wheel sheet; the body carries the audio package, consisting of radio, CD player, and speaker, together with the office package, interior decoration, resting package, and cabinet. All usage quantities shown in the figure are 1.)
minimize over- and under-stocking, as follows:

E(C) = h\,E(Q-D)^{+} + p\,E(D-Q)^{+}   (7.1)

Integrating Eq. (7.1) gives:

E(C) = h\int_{D/Q}^{1}(Q\,x-D)^{+}\,dx + p\int_{0}^{D/Q}(D-Q\,x)^{+}\,dx = \frac{h\,(Q-D)^{2}+p\,D^{2}}{2Q}   (7.2)

Optimizing Eq. (7.2) with respect to Q, the optimal production quantity is:

Q_{1,2} = \pm\sqrt{p+h}\;D   (7.3)
Equation (7.3) gives the result of capacity decision.
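As a minimal numerical sketch of the capacity decision, the snippet below evaluates Eq. (7.3) in the form reconstructed above, Q = sqrt(p + h) * D, using the penalty cost, holding cost, and annual demand of Table 3; the function name is introduced here only for illustration.

```python
import math

def optimal_capacity(penalty_cost: float, holding_cost: float, demand: float) -> float:
    """Optimal production quantity from Eq. (7.3) as reconstructed: Q = sqrt(p + h) * D."""
    return math.sqrt(penalty_cost + holding_cost) * demand

# Parameter values taken from Table 3 (truck models FH1 and FH2).
for model, p, h, d in [("FH1", 1, 1, 50), ("FH2", 15, 4, 50)]:
    print(f"{model}: Q* = {optimal_capacity(p, h, d):.1f}")
```

Under these figures the sketch yields roughly 71 units for FH1 and 218 units for FH2, illustrating how a higher penalty cost pushes the capacity decision upwards.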
Table 2. Truck Parts List.

Parts                 Model FH1            Model FH2
Body                  FHDA                 FHDA
Office package        Opl00                Opll0
Interior decoration   FHDA1                FHDA2
Resting package       RP001                RP002
Radio                 FH001, FH002         FH003, FH004, FH005
CD Player             6 disc               6 disc
Speaker               Doors                Doors + rear wall
Engine                D13A-360HP           D13A-400HP
Gear box              Powertronic 5sp      Powertronic 5sp
Frame                 4                    6
Front axle            2 FSH 1370           2 FSH 1370
Rear axle             Hub reduction 1370   Hub reduction 2180
Tire (front)          385/65-22,5          385/65-22,5
Rim (front)           FR22,5               FR22,5
Tire (rear)           315/70-22,5          315/70-22,5
Rim (rear)            FR24,5               FR24,5

Table 3. Required Parameters for Product Manufacturing.

Parameter                   Model FH1   Model FH2
Penalty cost                1           15
Holding cost                1           4
Annual demand               50          50
Order cost                  1           1
Production cost             10          10
Setup cost                  4           4
Material cost               1           1
Production rate per month   200         200
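The BOM of Fig. 15 can be held as a simple parent-to-components mapping, from which gross requirements for a truck order are exploded. The sketch below is only a hypothetical, simplified slice of that structure (quantities per parent are taken as 1, as in the figure); it is not the chapter's data model.

```python
# Simplified, illustrative slice of the Fig. 15 BOM (all per-parent quantities assumed to be 1).
BOM = {
    "Truck":         ["Body", "Engine", "Power train", "Chassis"],
    "Power train":   ["Gear box"],
    "Chassis":       ["Frame", "Suspension", "Front wheel", "Rear wheel"],
    "Front wheel":   ["Tire", "Rim"],
    "Rear wheel":    ["Tire", "Rim"],
    "Body":          ["Audio package", "Office package", "Interior decoration"],
    "Audio package": ["Radio", "CD player", "Speaker"],
}

def explode(item, qty, requirements):
    """Accumulate gross requirements for every component below `item`."""
    for child in BOM.get(item, []):
        requirements[child] = requirements.get(child, 0) + qty
        explode(child, qty, requirements)
    return requirements

print(explode("Truck", 10, {}))   # gross requirements for an order of 10 trucks
```

In the chapter's collaborative setting this explosion is not run as a fixed-lead-time MRP pass; it only supplies the quantities that the later push-pull and sourcing decisions work on.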
4.2. Master Planning

4.2.1. Push-pull manufacturing strategy

The Customer Order Decoupling Point (CODP) is assigned to the components or parts that are fabricated internally. In this chapter we categorize the CODP as make-to-stock (MTS), assemble-to-order (ATO), or make-to-order (MTO); the objective is to give the least waiting time and operations cost (holding, penalty, and production cost). We define the processing time at one node as consisting of the supplier delivery time, production, and delivery time to the customer. Let us assume that demand has
inter-arrival variance (σ_A) and the assembly process has process-time variance (σ_B). According to the GI/G/1 queue system, we have:

L = \frac{\lambda^{2}(\sigma_A^{2}+\sigma_B^{2})}{2(1-\rho)} + \rho   (7.4)

where L is the number of orders in the system, λ is the demand rate, and ρ is the utilization factor. This equation tells us whether there is a queue in the production line. To determine the optimum decision, we insert it into the cost function E(C) = C_P\,\mu + C_W\,L, where C_P is the order processing cost and C_W the waiting cost (Table 4). The cost function can be generalized as:

E(C) = C_P\,\mu + C_W\left(\frac{\lambda^{2}\,\mu\,(\sigma_A^{2}+\sigma_B^{2})}{2(\mu-\lambda)} + \rho\right)   (7.5)

Optimizing Eq. (7.5) with respect to µ gives:

C_P + \frac{(\sigma_A^{2}+\sigma_B^{2})\,C_W}{2(\mu-\lambda)} - \frac{(\sigma_A^{2}+\sigma_B^{2})\,C_W\,\mu}{2(\mu-\lambda)^{2}} = 0   (7.6)

2\lambda - \frac{\sqrt{2(\sigma_A^{2}+\sigma_B^{2})\,\lambda\,C_W\,C_P}}{2C_P} \;\le\; \mu^{*} \;\le\; 2\lambda + \frac{\sqrt{2(\sigma_A^{2}+\sigma_B^{2})\,\lambda\,C_W\,C_P}}{2C_P}   (7.7)

Equation (7.7) can be rearranged with λ as the dependent and µ as the independent variable, giving:

\frac{\mu^{*} + \sqrt{\frac{(\sigma_A^{2}+\sigma_B^{2})\,\lambda\,C_W}{2C_P}}}{2} \;\le\; \lambda   (7.8)
Table 4. Push-pull Manufacturing Decision for Each Component.

Product       σA   σB   λ     Cw   Cpr   µ upper   µ lower   µ actual   MTO/MTS/ATO
Radio         10   10   100   1    5     245       155       20         MTS
CD player     10   10   100   1    5     245       155       200        ATO
Speaker       10   10   100   1    5     245       155       100        MTS
Front tire    20   20   200   1    5     526       274       200        MTS
Front rim     20   20   200   1    5     526       274       200        MTS
Rear tire     40   40   400   1    5     1158      442       200        MTS
Rear rim      40   40   400   1    5     1158      442       200        MTS
Truck FH1     40   40   100   1    5     379       21        200        ATO
Truck FH2     40   40   100   1    5     379       21        200        ATO
Power train   10   10   100   1    5     200       155       200        ATO
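The µ bounds in Table 4 follow directly from Eq. (7.7) as reconstructed above. The sketch below recomputes them for a few components; the classification rule at the end (inside the window implies ATO, outside it MTS) is only an illustrative reading of the table, not a rule stated explicitly in the chapter.

```python
import math

def mu_bounds(sigma_a, sigma_b, lam, c_w, c_p):
    """Feasible production-rate window from Eq. (7.7) as reconstructed."""
    half_width = math.sqrt(2 * (sigma_a**2 + sigma_b**2) * lam * c_w * c_p) / (2 * c_p)
    return 2 * lam - half_width, 2 * lam + half_width

def strategy(mu_actual, mu_lower, mu_upper):
    """Illustrative classification in the spirit of Table 4 (assumed, not stated in the chapter)."""
    return "ATO" if mu_lower <= mu_actual <= mu_upper else "MTS"

# A few rows of Table 4: (sigma_A, sigma_B, lambda, C_w, C_pr, mu_actual).
rows = {"Radio": (10, 10, 100, 1, 5, 20),
        "CD player": (10, 10, 100, 1, 5, 200),
        "Front tire": (20, 20, 200, 1, 5, 200)}
for name, (sa, sb, lam, cw, cp, mu_act) in rows.items():
    lo, hi = mu_bounds(sa, sb, lam, cw, cp)
    print(f"{name}: mu in [{lo:.0f}, {hi:.0f}] -> {strategy(mu_act, lo, hi)}")
```

Running this reproduces the [155, 245] and [274, 526] windows of Table 4 and classifies the radio as MTS and the CD player as ATO, in line with the table.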
Equation (7.8) is a prerequisite to form postponement: if λ exceeds that limit, form postponement should be changed to time postponement, and vice versa. This strategy enables the supply chain to determine the right time for switching from assemble-to-order to make-to-order and vice versa. The repositioning strategy can also be used for an over-production rate.

4.2.2. Supply strategy

Supply strategy is defined as deciding which parts should be ordered from suppliers and which should be produced in-house. The discussion is separated into two models, the make-or-buy decision and the single- or dual-sourcing strategy, detailed as follows.

In the outsourcing case, suppose the supplier and the firm have established a long-term contract by choosing the incentive I and the penalty cost p for the suppliers. The firm pays the incentive whenever the suppliers meet the firm's customer demand D within the predetermined range D ± ε*_t. If production accuracy (ε_t − ε*_t) is to be a common objective among the suppliers, then for each supplier ε*_t must maximize the expected profit net of penalty and holding costs; that is, ε*_t must solve:

\max_{\varepsilon_t\ge 0}\; I\,\Pr\{q(\varepsilon_t)=q(\varepsilon_t^{*})\} - p\,\Pr\{q(\varepsilon_t)<q(\varepsilon_t^{*})\} - h\,\Pr\{q(\varepsilon_t)>q(\varepsilon_t^{*})\}
 = (p-h)\Pr\{q(\varepsilon_t)>q(\varepsilon_t^{*})\} - p + (I+p)\Pr\{q(\varepsilon_t)=q(\varepsilon_t^{*})\}   (7.9)

The first-order condition for Eq. (7.9) is:

(p-h)\,\frac{\partial\Pr\{q(\varepsilon_t)>q(\varepsilon_t^{*})\}}{\partial\varepsilon_t} = (I+p)\Pr\{q(\varepsilon_t)=q(\varepsilon_t^{*})\}   (7.10)

That is, the firm and the suppliers choose the incentive I, penalty p, and holding cost h such that the probability of over- or under-estimating the variance, Pr(ε_t − ε*_t) > 0, is minimized. From Bayes' rule,

\Pr\{q(\varepsilon_t)>q(\varepsilon_t^{*})\} = \Pr\{\varepsilon_t > q_t^{*}+\varepsilon_t^{*}-q_t\}
\Pr\{q(\varepsilon_t)>q(\varepsilon_t^{*})\} = \int_{\varepsilon_t}\Pr\{\varepsilon_t > q_t^{*}+\varepsilon_t-q_t \mid \varepsilon_t\}\,f(\varepsilon_t^{*})\,d\varepsilon_t^{*}
\Pr\{q(\varepsilon_t)>q(\varepsilon_t^{*})\} = \int_{\varepsilon_t}\bigl(1-F(q_t^{*}+\varepsilon_t-q_t)\bigr)\,f(\varepsilon_t^{*})\,d\varepsilon_t^{*}   (7.11)

So the first-order condition becomes:

(p-h)\int_{\varepsilon_t} f(q_t^{*}+\varepsilon_t-q_t)\,f(\varepsilon_t^{*})\,d\varepsilon_t^{*} = (I+p)\Pr\{q(\varepsilon_t)=q(\varepsilon_t^{*})\}

In a steady state (i.e., q_t^{*} = q_t), we have:

(p-h)\int_{\varepsilon_t} f(\varepsilon_t)^{2}\,d\varepsilon_t = (I+p)\Pr\{q(\varepsilon_t)=q(\varepsilon_t^{*})\}   (7.12)
If ε is normally distributed with variance σ², for example, then

\int_{\varepsilon_t} f(\varepsilon_t)^{2}\,d\varepsilon_t = \frac{1}{2\sigma\sqrt{\pi}}   (7.13)

\frac{p-h}{2\sigma\sqrt{\pi}\,(I+p)} = \Pr\{q(\varepsilon_t)=q(\varepsilon_t^{*})\}   (7.14)

If σ is assumed to be continuously distributed (N → ∞), then σ = \sqrt{\int(\varepsilon_t-\varepsilon_t^{*})^{2}\,p(\varepsilon_t)\,d\varepsilon_t}, and Eq. (7.13) becomes:

(\varepsilon_t-\varepsilon_t^{*}) = \frac{(p-h)^{2}}{2\sqrt{\pi}\,(I+p)^{2}}   (7.15)

By defining the total cost to the firm as c = h(\varepsilon_t-\varepsilon_t^{*})^{+} + p(\varepsilon_t^{*}-\varepsilon_t)^{+} + \frac{\lambda}{D}C_O, replacing (\varepsilon_t-\varepsilon_t^{*})^{+} with \frac{(p-h)^{2}}{2\sqrt{\pi}(I+p)^{2}}, and carrying out the integration, we have:

c_{outsource} = \frac{\frac{h}{2}\left(\frac{(p-h)^{2}}{2\sqrt{\pi}(I+p)^{2}}\right)^{2} + \frac{p\,(\varepsilon_t^{*})^{2}}{2}}{\frac{(p-h)^{2}}{2\sqrt{\pi}(I+p)^{2}} + \varepsilon_t^{*}} + \frac{\lambda}{D}C_O   (7.16)
While with in-sourcing we have the following cost function:

E(TC)_{Insource} = h\,E(Q-D)^{+} + p\,E(D-Q)^{+} + C_D\,Z + \frac{\lambda}{D}C_O + C_P\,t_S + \frac{D}{\mu} + C_{Pur}\,q   (7.17)

where D is the order quantity and Q the production capacity. For simplification, we represent the part inventory as (Q − D)^{+} and the part backorder as (D − Q)^{+}. Equation (7.17) can be solved by integrating the first two terms:

E(TC)_{Insource} = h\int_{D/Q}^{1}(Q\,x-D)^{+}\,dx + p\int_{0}^{D/Q}(D-Q\,x)^{+}\,dx + \frac{\lambda}{D}C_O + C_P\,t_S + \frac{D}{\mu} + C_{Pur}\,q

and we get:

E(TC)_{Insource} = \frac{h\,(Q-D)^{2}+p\,D^{2}}{2Q} + \frac{\lambda}{D}C_O + C_P\,t_S + \frac{D}{\mu} + C_{Pur}\,q   (7.18)

where λ is the demand rate, C_O the order cost, C_P the production cost, t_S the set-up cost, µ the production rate, C_Pur the material cost, and q the material quantity.
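A minimal sketch of the make-or-buy rule stated next (choose outsourcing when E(TC)_Insource exceeds c_outsource) is given below. Only Eq. (7.18), as reconstructed above, is computed; the outsourcing cost and all parameter values are assumed figures in the rough range of Table 5, not the chapter's results.

```python
def insourcing_cost(h, p, Q, D, lam, c_o, c_p, t_s, mu, c_pur, q):
    """Expected total in-sourcing cost, Eq. (7.18) as reconstructed."""
    return (h * (Q - D) ** 2 + p * D ** 2) / (2 * Q) \
        + (lam / D) * c_o + c_p * t_s + D / mu + c_pur * q

def make_or_buy(insource_cost, outsource_cost):
    """Decision rule from the text: outsource when in-sourcing is more expensive."""
    return "outsource" if insource_cost > outsource_cost else "insource"

# Illustrative figures only (hypothetical component), not the chapter's data.
tc_in = insourcing_cost(h=1, p=1, Q=141.4, D=100, lam=100, c_o=1, c_p=5, t_s=1,
                        mu=200, c_pur=5, q=100)
print(round(tc_in, 1), make_or_buy(tc_in, outsource_cost=600.0))
```

The point of the sketch is simply that the two cost expressions are evaluated on the same component data and compared item by item, which is how the Decision column of Table 5 is populated.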
Our decision rule is as follows: if E(TC)_{Insource} > c_{outsource}, outsourcing is chosen; otherwise, insourcing is the option. In addition to the sourcing decision, the following procedure is suggested for choosing between single and dual sourcing.

4.2.3. Single or dual sourcing strategy (buy decision)

Suppose outsourcing is the best option and the manager now faces a choice between single and dual sourcing. We consider a Bertrand duopoly model (see Gibbons, 1992) with the retailers' price function given by:

q = b - p_1 + \gamma\,p_2 + \varepsilon_t^{*}   (7.19)

where p_1 and p_2 are the prices of supplier 1 and supplier 2 and γ is the supplier process commonality. Differently from Elmaghraby (2000), where the buying decision is approached according to price uncertainty, this chapter takes quantity uncertainty into account in order to represent demand variety. It also accommodates Forker and Stannack's (2000) argument of applying competition between suppliers; supplier cooperation is considered as well, through the product compatibility degree γ. In the Cournot game, suppliers choose their own price to maximize profit, taking the opponent's price as given. We propose a similar methodology, except that the quantity at infinite time is taken into account in order to optimize the postponed product-compatibility decision resulting from a long-term price contract. To illustrate, suppose two suppliers take part in an auction and the firm makes an opening bid; afterwards the suppliers cooperate with one another on the chosen price and product compatibility. Restricting attention to the sub-game perfect equilibrium of this two-stage game, if the firm chooses a bid price, the predetermined price is used by the suppliers to optimize the auction price, which is finally used by the suppliers to optimize their production quantity. The firm gains nothing by deviating from its bid price, while the supplier has no reason to threaten the retailers. From this point on, the game starts from stage 1, where both retailers decide their capacity.

Stage 1: the firm and suppliers optimize the agreed product price for maximum profit

\max_{p_1}\;(b - p_1 + \gamma\,p_2 + \varepsilon_t^{*})(p_1 - c_{outsource})   (7.20)

The first-order condition is:

-2p_1 + \gamma\,p_2 + b + c_{outsource} + \varepsilon_t^{*} = 0   (7.21)

Similarly, the FOC for the second product variant is:

-2p_2 + \gamma\,p_1 + b + c_{outsource} + \varepsilon_t^{*} = 0   (7.22)
Solving these two equations simultaneously, one obtains:

p_2 = p_1 = p_s = \frac{c_{outsource} + b + \varepsilon_t^{*}}{2-\gamma}   (7.23)

Stage 1 gives the price equilibrium between the two suppliers. The equal price shows that the suppliers work under flexible capacity in all states, or produce to order and accumulate commitments for all future deliveries. There is always an equilibrium in which all suppliers set p_1 = p_2 in all periods. The suppliers expect profit to be zero whether they cooperate at time t or not; accordingly, the game at time t is essentially a one-shot game in which the unique equilibrium has all suppliers setting p_1 = p_2. Furthermore, both buyer and supplier are exposed to this problem, because whenever a supplier raises its selling price, the buyer's product price also rises. In the same way, the firm bargains the supplier price p_f in order to maximize its profit, taking the maximum margin between the product price to the end customer p_b and the outsourcing price p_f:

\max_{p_f}\;(b - p_f + \varepsilon_t^{*})(p_b - p_f)   (7.24)

The first-order condition is:

2p_f - b - p_b - \varepsilon_t^{*} = 0   (7.25)

Solving for p_f, one obtains:

p_b = 2p_f - b - \varepsilon_t^{*}   (7.26)

If we assume that at the final bargaining period p_f = p_s, then we have:

p_b = 2\,\frac{c_{outsource} + b + \varepsilon_t^{*}}{2-\gamma} - b - \varepsilon_t^{*}   (7.27)

\max_{p_s}\pi_{tot} = \pi_S + \pi_f
\max_{p_s}\pi_{tot} = (b - p_f + \varepsilon_t^{*})\left(2\,\frac{c_{outsource} + b + \varepsilon_t^{*}}{2-\gamma} - b - \varepsilon_t^{*} - p_f\right) + (b - (1-\gamma)p_f + \varepsilon_t^{*})(p_f - c_{outsource})   (7.28)

subject to (p_f - c_{outsource}) ≥ 0. The first-order condition is:

2\gamma\,p_f - b - 2\left(\frac{c_{outsource} + b + \varepsilon_t^{*}}{2-\gamma} - b - \varepsilon_t^{*}\right) - b + (1-\gamma)\,c_{outsource} = 0   (7.29)

Solving for p_f, one obtains:

p_f = \frac{2\left(\frac{c_{outsource} + b + \varepsilon_t^{*}}{2-\gamma} - \varepsilon_t^{*}\right) + b - (1-\gamma)\,c_{outsource}}{2\gamma}   (7.30)
Equation (7.30) describes the compromise price between the firm and the suppliers. It is also developed in response to Anton and Yao's (1989) argument about supplier collusion.

Stage 2: the firm and suppliers optimize the suppliers' material price

On the suppliers' side, achieving optimal profit also requires optimizing the material price. In the first stage we can write:

\max_{p_f}\;(b - (1-\gamma)p_f)(p_f - c_m)   (7.31)

Optimizing Eq. (7.31) with respect to p_f, the supplier material cost c_m can be found from:

b - 2(1-\gamma)\,p_f + c_m(1-\gamma) = 0   (7.32)

c_m = \frac{2(1-\gamma)\,p_f - b}{1-\gamma}   (7.33)

Stage 2 shows that increasing product substitutability (γ) increases the suppliers' total costs. With regard to this result, a process commonality and pricing-quantity decision is produced below by considering long-term relationships between the firm and the suppliers (Patterson et al., 1999).

4.2.4. Component commonality decision between two suppliers

In the last stage, product design is a collaboration between the firm and the suppliers, intended to maximize both the firm's and the suppliers' profit. Using the suppliers' profit function, Eq. (7.31), γ can be defined as:

\gamma = 2 - \frac{c_{outsource} + b + \varepsilon_t^{*}}{p_f}   (7.34)

Equation (7.34) shows that increasing c_{outsource}, as well as the supplier selling price p_f, increases product substitutability (γ). With regard to this result, the supplier's selling-price strategy and production-quantity optimization for maximizing supplier profit are given below.

4.2.5. Selling price strategy

In this model, we define profitability for the buyer as in the Bertrand game:

\max_{p_i}\;(b - p_i + \gamma\,p_j)(p_i - c)   (7.35)

where p_i and p_j are the selling prices of suppliers i and j, respectively, and b is the maximum quantity available to the buyer. The first-order condition is:

b - 2p_i + \gamma\,p_j + c = 0   (7.36)
Similarly, the FOC for the other supplier is:

b - 2p_j + \gamma\,p_i + c = 0   (7.37)

Solving these two equations simultaneously, one obtains:

P = p_i = p_j = \frac{(\gamma+2)(b+c)}{4-\gamma^{2}}   (7.38)

Equation (7.38) shows that a higher γ has a positive impact on the product price to the end customer. From this point on, the suppliers' product price p_f is used to find the suppliers' optimum production quantity, as follows.

Stage 2: Quantity decision

This chapter applies a principle similar to that of Singh and Vives (1984), except that both price and quantity at infinite time are taken into account in order to optimize the supply chain profitability resulting from a long-term price and production-quantity contract. This stage is developed by finding the best price response against the price decision resulting from the Bertrand pricing game:

\dot{p}_s(t) = s\,(p_s - p_s(t)); \quad s > 0; \quad p_s(0) = p_{s(0)}; \quad p_s = p_f   (7.39)

In Eq. (7.39), s is the speed at which the quantity approaches its optimal value. This speed represents how much time both firms need to negotiate their price contract; the notation becomes insignificant when the negotiation is completed at an infinite due date, where both firms are assumed to have enough time to analyze their decision. To solve Eq. (7.39), we set up a current-value Hamiltonian:

H = q(p_s - c) + \lambda_1\,s\,\dot{q}   (7.40)

subject to Eq. (7.39) and q(t) ≥ 0, where λ_1 is the per-unit change of the objective function (max π(q)) for a small change in q(t). In the following derivation, s and δ denote the compound factor and the discount rate.

\frac{\partial H}{\partial p_s} = p_s - \lambda_1\,s\,q(t) = 0   (7.41)

\frac{\partial H}{\partial q} = \delta\,\lambda_1 - \dot{\lambda}_1 = \lambda_1(\delta + s) - q = 0   (7.42)

The steady-state quantity follows from Eq. (7.42) as:

\lim_{s\to\infty} q = \sqrt{p_s}   (7.43)

We can see that the equilibrium quantity is a concave function of price. In conclusion, quantity postponement has a significant impact on the supplier-buyer supply chain whenever both parties agree to improve their product commonality.
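The pricing relations of this sub-section can be evaluated directly once b, c_outsource, ε*, and γ are known. The sketch below implements Eqs. (7.23), (7.27), and (7.38) as reconstructed above; the parameter values are illustrative only (roughly the order of magnitude of Tables 5 and 6) and the function names are introduced here for convenience, so the printed figures are not the chapter's results.

```python
def supplier_price(b, c_outsource, eps, gamma):
    """Equilibrium supplier price, Eq. (7.23): both suppliers quote the same price."""
    return (c_outsource + b + eps) / (2 - gamma)

def end_customer_price(b, c_outsource, eps, gamma):
    """End-customer price when the firm's bid equals the supplier price, Eq. (7.27)."""
    return 2 * supplier_price(b, c_outsource, eps, gamma) - b - eps

def bertrand_price(b, c, gamma):
    """Symmetric Bertrand equilibrium price, Eq. (7.38)."""
    return (gamma + 2) * (b + c) / (4 - gamma ** 2)

# Illustrative parameters only, not the chapter's data.
b, c_out, eps, gamma = 141.0, 5.0, 0.1, 0.6
print(round(supplier_price(b, c_out, eps, gamma), 1),
      round(end_customer_price(b, c_out, eps, gamma), 1),
      round(bertrand_price(b, c_out, gamma), 1))
```

Raising γ shrinks the denominator (2 − γ) in Eq. (7.23), which is the mechanism behind the observation that greater component commonality pushes prices, and hence supplier revenue, upwards.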
From Eq. (7.43), the total quantities produced by both suppliers can be summarized as:

q_1 = q_2 = q^{*} = \frac{2\,(c + b + \varepsilon_t^{*})}{2-\gamma}   (7.44)

Equation (7.44) gives the suppliers a solution for the optimum capacity, which is used to fulfill orders according to the firm's purchase price and quantity. Furthermore, ε in Eq. (7.44) denotes the observable demand variance from the firm to the suppliers. This variance has a significant impact on the suppliers' willingness to cooperate in product design and, at the same time, pushes the firm to reduce the inaccuracy of the demand information it passes to the suppliers (Tables 5 and 6).

4.3. ATP

ATP consists of promised lead times and the inventory requirement, as follows.
Table 5. Sourcing Decision for Each Component.

Product               b     p    h   I   D     C0   Cpur   Error   ts   µ     Cprod   Decision
Body                  245   4    2   1   100   1    5      0,1     2    500   5       Dual-sourcing
Office package        141   1    1   1   100   1    20     0,1     1    300   5       Dual-sourcing
Interior decoration   141   1    1   1   100   1    15     0,1     1    200   5       Dual-sourcing
Radio                 141   1    1   1   100   1    5      0,1     1    20    5       Insourcing
CD player             141   1    1   1   100   1    5      0,1     1    200   5       Insourcing
Speaker               141   1    1   1   100   1    5      0,1     1    100   5       Insourcing
Engine                346   10   2   1   100   1    100    0,1     2    20    5       Dual-sourcing
Gear box              346   10   2   1   100   1    70     0,1     2    20    5       Dual-sourcing
Frame                 346   10   2   1   100   1    70     0,1     2    200   5       Dual-sourcing
Front axle            346   10   2   1   100   1    20     0,1     1    50    5       Dual-sourcing
Rear axle             346   10   2   1   100   1    20     0,1     1    50    5       Dual-sourcing
Front tire            346   1    1   1   200   1    5      0,1     1    200   5       Insourcing
Front rim             346   1    1   1   200   1    3      0,1     1    200   5       Insourcing
Rear tire             566   1    1   1   400   1    5      0,1     1    200   5       Insourcing
Rear rim              566   1    1   1   400   1    3      0,1     1    200   5       Insourcing
Power train           71    1    1   1   100   1    3      0,1     1    200   5       Insourcing
Suspension            218   1    1   1   100   1    3      0,1     1    200   5       Dual-sourcing
Rear wheel            141   1    1   1   200   1    3      0,1     1    200   5       Dual-sourcing
Front wheel           141   1    1   1   200   1    3      0,1     1    200   5       Dual-sourcing
Audio                 283   1    1   1   100   1    3      0,1     1    200   5       Dual-sourcing
Cabinet               283   1    1   1   100   1    3      0,1     1    200   5       Dual-sourcing
Chassis               141   1    1   1   100   1    3      0,1     1    200   5       Dual-sourcing
Table 6. Price and Product Platform Decision for Each Component.

Product               γ     pf      c      a       pb       Total profit
Body                  0,5   261,1   32,3   150,0   372,2    7717
Office package        0,5   179,1   75,4   150,0   208,2    12692
Interior decoration   0,6   169,6   15,5   150,0   189,1    21109
Radio                 0,6   170,7   5,6    150,0   191,3    23104
CD player             0,5   140,4   −2,1   150,0   130,7    3159
Speaker               0,6   170,7   5,5    150,0   191,3    23132
Engine                0,5   411,0   129,2  150,0   671,9    53241
Gear box              0,5   410,3   73,3   150,0   670,6    63193
Frame                 0,5   410,5   72,4   150,0   670,8    63408
Front axle            0,6   416,3   22,4   150,0   682,6    73574
Rear axle             0,6   416,3   22,4   150,0   682,6    73574
Front tire            0,6   418,2   5,5    150,0   686,2    127903
Front rim             0,6   418,4   3,5    150,0   686,7    128689
Rear tire             0,6   682,9   5,4    150,0   1215,6   382296
Rear rim              0,6   683,1   3,4    150,0   1216     383876
Power train           0,6   85,5    3,9    150,0   20,9     4891
Suspension            0,6   263,3   3,6    150,0   376,5    43579
Rear wheel            0,6   170,9   3,8    150,0   191,7    21021
Front wheel           0,6   170,9   3,8    150,0   191,7    21021
Audio                 0,6   341,6   3,8    150,0   533,1    60553
Cabinet               0,6   341,6   3,8    150,0   533,1    60553
Chassis               0,6   170,9   3,5    150,0   191,8    23534
4.3.1. Promised lead times

Promised lead times are divided into two different models, namely Make-To-Stock (MTS) and Make-To-Order (MTO) lead times, which are used by production scheduling to set up the sequence. They are obtained by applying the newsboy problem:

E(C_{LT}) = p\,E(LT - LT^{*})^{+} + h\,E(LT^{*} - LT)^{+}, \qquad LT^{*} = \frac{LT}{\sqrt{p+h}}   (7.45)

where LT^{*}_{MTO/ATO} = Q/\mu and LT^{*}_{MTS} = d/s, with d the distance from the factory to the customers and s the vehicle speed. The data required for the promised lead times of truck FH1 and its components are summarized in Table 7.
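The LT_{1,2} column of Table 7 follows from the second relation in Eq. (7.45) as reconstructed above. The snippet below recomputes a few of its rows (LT = 14 time units for every item, as in the table); the function name is only for illustration.

```python
import math

def promised_lead_time(lt, p, h):
    """Promised lead time from Eq. (7.45) as reconstructed: LT* = LT / sqrt(p + h)."""
    return lt / math.sqrt(p + h)

# Rows taken from Table 7 (LT = 14 time units for every item).
for item, p, h in [("Body", 4, 2), ("Radio", 1, 1), ("Engine", 10, 2), ("Truck FH2", 15, 4)]:
    print(f"{item}: {promised_lead_time(14, p, h):.1f}")
```

These values (about 5.7, 9.9, 4.0, and 3.2 time units) match the corresponding LT_{1,2} entries of Table 7, which is how the reconstruction of Eq. (7.45) was checked.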
4.4. Collaborative Material Planning

In the traditional operations management approach, material requirements follow a top-down hierarchy: planning starts with the Master Production Schedule (MPS), which is then detailed into Material Requirement Planning (MRP) while ignoring capacity constraints and assuming fixed lead times.
Table 7. Promised Lead Times.

Product/parts         Q     µ     p    h   LT   LT1,2
Body                  100   500   4    2   14   5,7
Office package        100   300   1    1   14   9,9
Interior decoration   100   200   1    1   14   9,9
Radio                 100   20    1    1   14   9,9
CD player             100   200   1    1   14   9,9
Speaker               100   100   1    1   14   9,9
Engine                100   20    10   2   14   4,0
Gear box              100   20    10   2   14   4,0
Frame                 100   200   10   2   14   4,0
Front axle            100   50    10   2   14   4,0
Rear axle             100   50    10   2   14   4,0
Front tire            200   200   1    1   14   9,9
Front rim             200   200   1    1   14   9,9
Rear tire              400   200   1    1   14   9,9
Rear rim              400   200   1    1   14   9,9
Truck FH1             50    200   1    1   14   9,9
Truck FH2             50    200   15   4   14   3,2
Power train           100   200   15   4   14   3,2
Suspension            100   200   15   4   14   3,2
Rear wheel            200   200   15   4   14   3,2
Front wheel           200   200   15   4   14   3,2
Audio                 100   200   15   4   14   3,2
Cabinet               100   200   15   4   14   3,2
Chassis               100   200   15   4   14   3,2
This chapter, however, replaces the MPS and MRP functions with collaborative material planning (see Fig. 6), consisting of supplier and buyer integration through a system dynamics approach. A feedback control mechanism is used to maintain the optimal condition, represented as two interacting tanks, as follows.

Figure 16 depicts the interaction between buyer and supplier. The model modifies the synchronized-supply model of Holweg et al. (2005) by replacing the inventory level with the product substitutability degree (γ), thereby considering product commonality. It is interesting that the model incorporates the component residence times in the supplier's (A_1) and the manufacturer's (A_R) warehouses, which tell the warehouse manager the maximum time inventory may be kept. The production rate of tank R (the buyer) depends on the production rate of tank 2 (the supplier), and vice versa, as a result of the interconnection of both production rates with
Figure 16. Feedback control application and built-to-order supply chains. (Two interacting tanks represent the supplier, with area A1, level γ1, resistance R1, inflow q2, and outflow q1, and the manufacturer, with area AR, level γR, resistance RR, and output Q, driven by customer demand λ.)
the production quantity q_1. The analogy is taken from fluid dynamics, which states that a longer fluid transfer time is caused by a higher transportation hindrance (R) and by the production rate difference (µ_1 − µ_R). If we assume that the total stock is the tanks' volume and the product substitutability γ is their level, then A_R and A_1 can be found by dividing the total stock (TS = SS + CS) by the product commonality (γ):

TS_R = SS_R + CS_R = z\,\sigma_R\sqrt{\frac{1}{Q-D}} + \frac{Q}{2(Q-D)}   (7.46)

TS_1 = SS_1 + CS_1 = z\,\sigma_1\sqrt{\frac{1}{q-Q}} + \frac{q}{2(q-Q)}   (7.47)

A_R = \frac{z\,\sigma_R\sqrt{\frac{1}{Q-D}} + \frac{Q}{2(Q-D)}}{\gamma_R}   (7.48)

A_1 = \frac{z\,\sigma_1\sqrt{\frac{1}{q-Q}} + \frac{q}{2(q-Q)}}{\gamma_1}   (7.49)
where z is the end-customer service level, σ_1 the supplier delivery standard deviation, and σ_R the manufacturer delivery standard deviation. The promised lead times of the manufacturer and the supplier are formulated as:

R_R = L_R = \frac{1}{Q-D}   (7.50)

R_1 = L_1 = \frac{1}{q-Q}   (7.51)
where LR and L1 represent the manufacturer and the supplier lead times for in-house production, while in the case where the manufacturer outsources the manufacturing process, then Q and q represent the manufacturer assembly capacity and the supplier production capacity. First, an open loop interacting system is discussed before a
further discussion of the closed-loop built-to-order supply chain.

\frac{\mu_R(s)}{\mu_1(s)} = \frac{\frac{R_R}{R_1+R_R}}{\frac{R_1 R_R A_R}{R_1+R_R}\,s + 1}   (7.52)

\frac{D(s)}{Q(s)} = \frac{1}{K_R}, \qquad \frac{Q(s)}{q(s)} = \frac{R_R}{\tau^{2}s^{2} + 2\zeta\tau s + 1}   (7.53)

K_R in Eq. (7.53) denotes the manufacturer's response to customer demand: the higher its value, the higher the manufacturer's responsiveness. The time constant (τ) represents the supplier's responsiveness to customer orders. ζ in Eq. (7.53) is the decoupling-point signal, which indicates the customer order penetration point, that is, assemble-to-order (ATO) or MTS. The value of ζ helps us detect lead-time variability: lead times tend to be shorter when ζ < 1, ζ > 1 yields a sluggish response, and the fastest response without overshoot is obtained in the critically damped case (ζ = 1). In general, ζ < 1 indicates that the manufacturer is operating under MTS, while ζ > 1 signals ATO. Hereafter, according to the control theory of interacting systems, 2ζτ and τ² can be formulated as:

2\zeta\tau = R_R A_R + R_1 A_1 + R_R A_1   (7.54)

\tau^{2} = R_1 R_R A_1 A_R   (7.55)

Equation (7.55) represents an open loop without information feedback, so that the supplier only has access to the buyer's inventory without considering customer demand. The open-loop control is drawn in Fig. 17. From this point on, a closed-loop system can be formulated by joining Eqs. (7.50)-(7.55):

\frac{q(s)}{q_2(s)} = \frac{R_R}{K_R(\tau^{2}s^{2} + 2\zeta\tau s + 1)}   (7.56)
Figure 17. Open loop: two interacting processes. (The same two-tank representation as in Fig. 16, without the demand-information feedback path.)
Figure 18. Closed feedback control transfer function. (The plant R_R/[K_R(τ²s² + 2ζτs + 1)] is placed in a negative feedback loop: the output Q is compared with the set point Q_set and the error is fed back through the gain K_C.)
\frac{Q(s)}{q(s)} = K_R = G_R   (7.57)

with the closed-loop feedback control shown in Fig. 18. K_C in Fig. 18 represents the information visibility between the manufacturer and the supplier: the larger the gain, the more the supplier delivery quantity changes for a given change in demand information. For example, if the gain is 1, a demand information change of 10 percent changes the supplier delivery quantity by 10 percent. The K_C decision is important to the interacting system because it simultaneously affects the supply chain inventory (buyer and supplier) and the order lead times. K_C depicts process visibility from manufacturer to supplier, so a higher value means higher visibility. The information visibility (K_C) needs to be adjusted according to the product commonality requirement (see Sec. 4.2.4) in order to fulfill the lead-time requirement. Finally, Fig. 18 can be used to construct the time-domain dynamics of synchronized supply by finding its closed-loop transfer function:

\frac{Q(s)}{Q(s)_{set}} = \frac{K_C\,\frac{R_R}{K_R(\tau^{2}s^{2}+2\zeta\tau s+1)}}{1 + K_C\,\frac{R_R}{K_R(\tau^{2}s^{2}+2\zeta\tau s+1)}} = \frac{K_C R_R}{K_C R_R + K_R(\tau^{2}s^{2} + 2\zeta\tau s + 1)}   (7.58)

so that the roots of the denominator are:

s_{1,2} = \frac{-\frac{2\zeta\tau K_R}{K_R\tau^{2}} \pm \sqrt{\left(\frac{2\zeta\tau K_R}{K_R\tau^{2}}\right)^{2} - 4\,\frac{K_R + K_C R_R}{K_R\tau^{2}}}}{2}

Laplace-domain dynamics under a step disturbance is applied in order to represent a sudden demand change, which can be inserted directly into Eq. (7.58)
and inverted to obtain the inverse Laplace transform, as follows:

\frac{Q(s)}{Q(s)_{set}} = \frac{K_C R_R/(K_R\tau^{2})}{s\left(s + \frac{\frac{2\zeta\tau K_R}{K_R\tau^{2}} + \sqrt{\left(\frac{2\zeta\tau K_R}{K_R\tau^{2}}\right)^{2} - 4\frac{K_R+K_C R_R}{K_R\tau^{2}}}}{2}\right)\left(s + \frac{\frac{2\zeta\tau K_R}{K_R\tau^{2}} - \sqrt{\left(\frac{2\zeta\tau K_R}{K_R\tau^{2}}\right)^{2} - 4\frac{K_R+K_C R_R}{K_R\tau^{2}}}}{2}\right)}   (7.59)

Simplifying Eq. (7.59), we define:

a = \frac{\frac{2\zeta\tau K_R}{K_R\tau^{2}} + \sqrt{\left(\frac{2\zeta\tau K_R}{K_R\tau^{2}}\right)^{2} - 4\frac{K_R+K_C R_R}{K_R\tau^{2}}}}{2}, \qquad
b = \frac{\frac{2\zeta\tau K_R}{K_R\tau^{2}} - \sqrt{\left(\frac{2\zeta\tau K_R}{K_R\tau^{2}}\right)^{2} - 4\frac{K_R+K_C R_R}{K_R\tau^{2}}}}{2}

Finally,

\frac{Q(s)}{Q(s)_{set}} = \frac{1}{s(s+a)(s+b)} \;\;\Rightarrow\;\; \frac{Q(t)}{Q(t)_{set}} = 1 - \frac{e^{-at} - e^{-bt}}{b-a}   (7.60)
Equation (7.60) presents the process model as a closed-loop feedback control. It describes the role of IT in demand management by representing the information exchange between the manufacturer and the supplier.

4.4.1. Optimum K_C value

The optimum K_C value can be found numerically, as follows:

LT_{transient} = \frac{Q_{transient}}{D} = \frac{Q^{*} - \sum_{t=1}^{\infty}\left(1 - \frac{e^{-at}-e^{-bt}}{b-a}\right)Q(t)}{D}   (7.61)

where Q_{transient} represents the production capacity during the ramp-up period. The lead time at the normal capacity level can then be calculated as:

LT_{normal} = \frac{Q^{*} - \left[Q^{*} - \sum_{t=1}^{\infty}\left(1 - \frac{e^{-at}-e^{-bt}}{b-a}\right)Q(t)\right]}{D}   (7.62)
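A rough numerical sketch of this tuning step is given below, under the reconstructed forms of Eqs. (7.58) and (7.60). The parameter values are assumed (chosen so that the roots are real), and the "shortfall" sum is only a stand-in for the ramp-up quantity that feeds Eq. (7.61); it is not the chapter's simulation.

```python
import math

def roots(k_c, k_r, r_r, tau, zeta):
    """Real roots a, b of the closed-loop denominator (reconstructed from Eq. (7.58));
    parameters are assumed to give an over-damped, real-rooted response."""
    A = 2 * zeta / tau                       # equals 2*zeta*tau*K_R / (K_R*tau**2)
    B = (k_r + k_c * r_r) / (k_r * tau ** 2)
    disc = math.sqrt(A ** 2 - 4 * B)
    return (A + disc) / 2, (A - disc) / 2

def step_response(a, b, t):
    """Q(t)/Q_set from Eq. (7.60)."""
    return 1 - (math.exp(-a * t) - math.exp(-b * t)) / (b - a)

def transient_shortfall(a, b, horizon=50):
    """Accumulated unmet fraction during ramp-up, a rough stand-in for Eq. (7.61)."""
    return sum(1 - step_response(a, b, t) for t in range(1, horizon + 1))

# Hypothetical parameters in the spirit of the power-train example (K_C = K_R = 0.1).
a, b = roots(k_c=0.1, k_r=0.1, r_r=1.0, tau=2.0, zeta=1.5)
print(f"a={a:.2f}, b={b:.2f}, shortfall={transient_shortfall(a, b):.2f} period-equivalents")
```

In practice K_C would be varied (as the text describes next for the power train) until the transient and normal lead times add up to the promised lead time LT*.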
The K_C value can be adjusted so that LT_{transient} + LT_{normal} = LT^{*}. From this result, the suppliers can decide how much they must supply to the manufacturer, according to the K_C value.

Below is one example, the power train, which is manufactured under an ATO strategy. In the previous section's example, the data state that the power train lead time LT^{*} is 14 time units and LT_{normal} is 3,21 time units; hence LT_{transient} is 10,79 time units. The only new information required for the simulation is the supplier capacity and the manufacturer (K_R) and supplier (K_C) responsiveness, which are trialled in order to meet the transient lead-time requirement. The simulation result is depicted in Fig. 19. From the simulation, the K_C, K_R, and γ values are 0,1; 0,1; and 0,4 (independent variables), and the optimum supplier capacity is found to be 200 units (see Table 8). The results for the other components are also given in Table 8. The inventory requirement can be established from Eq. (7.46), and the results are exhibited in Table 9; the inventory requirement is less than the normal requirement whenever an (s, Q) or (s, S) policy is applied.

4.5. Production Planning and Scheduling

In this stage, production planning and scheduling extracts information from demand and master planning, such as the BOM, order and component lead times, and the inventory level of each component, in order to produce a detailed operational schedule. This approach has been applied in other APS software, for instance SAP APO. The difference is that the application of production reconfiguration onto operational
Figure 19. Power train order fulfillment dynamics. (Production level plotted against time over roughly 16 time units, rising from about 98,6 towards the target level of 100.)
Table 8. Collaboration Between Supplier and Manufacturer.

Product               D     Q       q     Kc    γ
Body                  100   141,4   190   1     0,4
Office package        100   141,4   190   1     0,4
Interior decoration   100   141,4   190   1     0,4
Radio                 100   141,4   190   1     0,4
CD player             100   141,4   190   1     0,4
Speaker               100   141,4   190   1     0,4
Engine                100   141,4   160   1     0,4
Gear box              100   141,4   160   1     0,4
Frame                 100   141,4   160   1     0,4
Front axle            100   141,4   160   1     0,4
Rear axle             100   141,4   160   1     0,4
Front tire            200   141,4   205   0,2   0,4
Front rim             200   141,4   205   0,2   0,4
Rear tire             400   141,4   405   1     0,4
Rear rim              400   141,4   405   1     0,4
Power train           100   141,4   200   0,1   0,4
Suspension            100   141,4   200   0,1   0,4
Rear wheel            200   141,4   205   1     0,4
Front wheel           200   141,4   205   1     0,4
Audio                 100   141,4   200   0,1   0,4
Cabinet               100   141,4   200   0,1   0,4
Chassis               100   141,4   200   0,1   0,4
scheduling is supported by the ASDN software, which measures lead times, inventory value, and profit. The procedure is explored further in Sec. 4.5.1.

4.5.1. Production scheduling

In order to sequence the tasks of a job shop problem (JSP) on a number of machines, subject to the technological machine order of the jobs, a traveling salesman formulation is proposed, since it cannot produce illegal sets of operation sequences (infeasible symbolic solutions). The problem can be formulated as:

\min\; t_n   (7.64)

subject to

t_j - t_i \ge d_i \quad (i, j) \in O   (7.65)

t_j - t_i \ge d_i \quad (i, j) \in M   (7.66)
Table 9. Safety and Cycle Stock Requirement.

Product               Z      σ1   Q       q     SS1   CS1
Body                  1,69   10   141,4   190   2     2
Office package        1,69   10   141,4   190   2     2
Interior decoration   1,69   10   141,4   190   2     2
Radio                 1,69   10   141,4   190   2     2
CD player             1,69   10   141,4   190   2     2
Speaker               1,69   10   141,4   190   2     2
Engine                1,69   10   141,4   160   4     4
Gear box              1,69   10   141,4   160   4     4
Frame                 1,69   10   141,4   160   4     4
Front axle            1,69   10   141,4   160   4     4
Rear axle             1,69   10   141,4   160   4     4
Front tire            1,69   20   141,4   205   4     2
Front rim             1,69   20   141,4   205   4     2
Rear tire             1,69   40   141,4   405   4     1
Rear rim              1,69   40   141,4   405   4     1
Power train           1,69   10   141,4   200   2     2
Suspension            1,69   10   141,4   200   2     2
Rear wheel            1,69   20   141,4   205   4     2
Front wheel           1,69   20   141,4   205   4     2
Audio                 1,69   10   141,4   200   2     2
Cabinet               1,69   10   141,4   200   2     2
Chassis               1,69   10   141,4   200   2     2
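The SS1 and CS1 columns of Table 9 follow from Eq. (7.47) as reconstructed earlier, using the Table 8 quantities. The sketch below recomputes a few rows; the split into SS = z·σ·sqrt(1/(q − Q)) and CS = q/(2(q − Q)) reflects that reconstruction, so treat it as an assumption rather than the chapter's exact formula.

```python
import math

def stocks(z, sigma, q_supplier, q_manufacturer):
    """Supplier safety and cycle stock per Eq. (7.47) as reconstructed:
    SS = z*sigma*sqrt(1/(q - Q)), CS = q / (2*(q - Q))."""
    gap = q_supplier - q_manufacturer
    return z * sigma * math.sqrt(1 / gap), q_supplier / (2 * gap)

# A few rows of Tables 8 and 9 (z = 1.69 and Q = 141.4 for every component).
for item, sigma, q in [("Body", 10, 190), ("Engine", 10, 160), ("Rear tire", 40, 405)]:
    ss, cs = stocks(1.69, sigma, q, 141.4)
    print(f"{item}: SS ~ {ss:.1f}, CS ~ {cs:.1f}")
```

The printed values round to the corresponding Table 9 entries (about 2 and 2 for the body, 4 and 4 for the engine, 4 and 1 for the rear tire), which was used to sanity-check the reconstruction.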
where t_n is the total makespan of the operations across the machines for the three components, and t_j and t_i are the start times of operations j and i. Constraint (7.65) states that the end and start times of precedent operations i and j cannot overlap, and constraint (7.66) that the start time of operation j cannot overlap the start time of operation i on the same machine M. This problem is solved by applying the MS Excel add-in facility for optimal sequencing, as follows. Suppose we intend to schedule an audio assembly where five activities are distributed among the radio, speaker, and CD player (the total lead time is 14 time units; see Table 7). The CD player is produced following ATO (steps 1 to 5) and the radio and speaker following an MTS manufacturing strategy (steps 4 to 5) (see Table 4). The detailed manufacturing times and sequence are shown in Table 10, and the Excel representation of the schedule optimization is depicted in Table 11. Table 11 shows the MS Excel add-in snapshot of the job-shop scheduling applied to optimize the audio manufacturing schedule. There are five steps in the manufacturing process, where J1, J2, and J3 represent steps for the CD player because it must be produced as ATO. J4 and J5 denote assembly processes for
Table 10. Detailed Audio Manufacturing Machining Time.

                 Operations
Components       1   2   3   4   5
Radio            -   -   -   2   4
CD player        4   5   1   2   1
Speaker          -   -   -   3   2
Table 11. Audio Scheduling Data (MS Excel add-in snapshot).

Solver settings: Name = Seq_1; Search method = Random; Problem = TSP algorithm; Objective = Min; Objective value = 13; Feasible = TRUE.

Job   Job name   Next job   Sequence   Obj. terms
1     Start      6          1          0
2     J1         3          6          0
3     J2         7          4          0
4     J3         5          5          0
5     J4         2          2          0
6     J5         4          3          0
7     End        1          7          0

Job data (process times)   CD Player   Speaker   Radio   Release time
Start                      0           0         0       0
J1                         4           0         0       0
J2                         5           0         0       0
J3                         1           0         0       0
J4                         2           3         2       0
J5                         1           2         4       0
End                        0           0         0       0
the audio package. The speaker and radio do not follow J1-J3 because they are managed as MTS. The results are summarized in Fig. 20, which exhibits the outcome of the job-shop scheduling obtained by applying the Travelling Salesman Problem (TSP) formulation. The total makespan is reduced from 16 time units (the longest processing time from J1 to J5) to 12 time units. This result implies that the supply chain, judged on total order lead time, now has an allowance of at least 4 time units (16 − 12) and therefore a chance to be more flexible. From a value chain perspective, the result allows the supply chain to be more competitive by reducing the likelihood of late delivery, since it leaves some slack for uncertain events such as machine downtime and changeovers. This scheduling optimization also enables the next planning stage (distribution and transport planning) to optimize the supply chain structure by reducing the order lead times. Finally, by applying the same procedure, detailed schedules can be built for the other components. In any case, assembly and fabrication scheduling focuses on internal factory optimization, which then needs to be carried into distribution and transport planning in order to optimize the total lead times, as below.
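For readers who prefer code to the Excel snapshot, the following is a toy permutation search over job orders using the Table 10/11 processing times. It is a simplified flow-shop stand-in for the chapter's TSP formulation of Eqs. (7.64)-(7.66) and the MS Excel add-in, so its makespan figures are illustrative only.

```python
from itertools import permutations

# Per-job processing times on the three lines, taken from the Table 11 job data
# (CD player, Speaker, Radio); zeros mean the job does not touch that line.
TIMES = {"J1": (4, 0, 0), "J2": (5, 0, 0), "J3": (1, 0, 0),
         "J4": (2, 3, 2), "J5": (1, 2, 4)}

def makespan(sequence):
    """Flow-shop makespan for a given job sequence (simplified stand-in for the
    chapter's TSP formulation; precedence constraints are ignored)."""
    finish = [0, 0, 0]                    # completion time of the last job on each line
    for job in sequence:
        done_prev_line = 0                # this job's completion on the previous line
        for m, t in enumerate(TIMES[job]):
            done_prev_line = max(finish[m], done_prev_line) + t
            finish[m] = done_prev_line
    return max(finish)

best = min(permutations(TIMES), key=makespan)
print(best, makespan(best))
```

Enumerating all 120 orders is trivial here; for realistic job shops the chapter's TSP-style formulation (or any modern scheduling solver) replaces the brute-force search, but the makespan bookkeeping is the same idea.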
Figure 20. Scheduling Gantt chart (jobs J1–J5 for the CD player, speaker, and radio plotted on a 0–13 time axis).

Figure 21. Distributions and transportation planning (panel labels: APS data; ASDN structure).
4.6. Distributions and Transport Planning

Distribution and transportation planning is used to optimize order delivery activities from suppliers to factories and from factories to end users. This section utilizes demand, master, and production planning and scheduling to develop the ASDN by optimizing distribution centers and transportation planning, as exhibited in Figs. 21 and 22.
Figure 22. Financial analysis.
Based on the ASDN simulation, Fig. 21 depicts the distribution and transport planning of a truck manufacturer situated in the United Kingdom that outsources components and activities across the globe. Furthermore, inventory turns, total lead times, and holding costs (cycle stock and safety stock) are also explored by presenting them in a financial analysis (Fig. 22).

5. Practical Implications

The concept of value chain re-engineering is shown by giving emphasis to information availability across the supply chain and the company, adding value at each step of order processing. It ensures that customers have accurate information about the available product configurations and allows them to configure not only the product but also the lead times. This mechanism can be applied within this APS because the product structure database and the ASDN are linked by the proposed APS (see Fig. 12). The APS in this module gives options for the push-pull manufacturing strategy (Sec. 4.2.1), so that it enables the promising of order lead times (Fig. 22) as well as optimizing the aggregate inventory level (Table 8). The ASDN in this case measures the value added by the APS steps (demand planning, master planning, and production planning and scheduling) through financial analysis (Fig. 22). The implication of applying the ASDN is that the supply chains can reconfigure the supply chain networks or reschedule production within the manufacturer's plants until the required performance target is achieved. Related to value chain re-engineering, this APS model changes the one-directional value chain to a two-way concept (see Fig. 6) by producing collaboration with both customer and supplier involvement. This collaboration is shown by incorporating suppliers into product platform design (Sec. 4.2.4) and demand forecasting (Sec. 4.2.3). The APS supports the integration process effectively. The ATP module is also embedded into master planning, where it also receives information from the product configuration database (customer side), which can be used to select
sourcing strategy; thus it also reduces delivery uncertainty because of supplier commitment. Last but not least, embedding production reconfiguration into distribution and transport planning is advantageous in two ways. The first advantage is that the customer side can reconfigure the product structure by considering lead time; this step is possible since the ASDN measures the total lead time in the final simulation. The second advantage is that manufacturers and suppliers can reconfigure their production processes by optimizing the manufacturing schedule and reconfiguring the push-pull manufacturing strategy. This is the main advantage of value chain re-engineering.
6. Conclusion and Future Research

This chapter has discussed value chain re-engineering, which is represented by a new APS model. We may summarize the results derived from the model as follows.

1. Supply chain collaboration needs to be addressed in the value chain discussion. The value chain cannot be managed solely based on optimization in one direction; in fact, both the supply and demand sides must be considered equally.
2. Technological support and procurement activity need to be involved in the main activities of the value chain. Procurement should have a strategic position in the business activities. Furthermore, in mass-customized products, a short product life cycle forces the supply chain to be agile and reconfigurable.
3. The first limitation of this APS is that the model does not incorporate a customer and service department interface, because of the assumption that the sales department is replaced by e-marketing. On the other hand, this situation has the advantage of offering a new future research direction with regard to the possibility of shrinking the organization by diminishing the sales department (see Fig. 6) and changing the firm's sales to mass personalization.
4. The second limitation is that there is no solution to support the sales function. It is necessary to conduct future research on the personalization of the sales function by employing information technology to give added value to the APS.
Biographical Notes

Yohanes Kristianto obtained an undergraduate degree in Chemical Engineering and a master's degree in Industrial Engineering from Sepuluh Nopember Institute of Technology, Surabaya, Indonesia. Prior to his academic career, he worked in the quality function of a multinational company. He is now a doctoral researcher at the Logistics Systems Research Group, Department of Production, University of Vaasa, Finland. His research interests are in the areas of supply chain strategy/management and production/operations management. His papers have been published in several international journals.

Petri Helo is a research professor and the head of the Logistics Systems Research Group, Department of Production, University of Vaasa, Finland. His research addresses the management of logistics processes in supply demand networks in the electronics, machine building, and food industries. He has published many papers in several international journals.

Mian M. Ajmal is a doctoral researcher at the Logistics Systems Research Group, Department of Production, University of Vaasa, Finland. He holds an MBA. He has been involved in several research projects in the last few years. His research interests pertain to project management, supply chain management, and knowledge management. He has previously published articles in international journals and conference proceedings in these areas.
Chapter 8
Cultural Auditing in the Age of Business: Multicultural Logistics Management, and Information Systems ALBERTO G CANEN∗ and ANA CANEN† ∗ Department of Production Engineering, COPPE/Federal University of Rio de Janeiro, Caixa Postal 68507, 21941-972 Rio de Janeiro RJ, Brazil
[email protected] † Department of Educational Studies, Federal University of Rio de Janeiro, Av. Pasteur 250 (Fundos), 22290-240 Rio de Janeiro RJ, Brazil
[email protected] The present chapter seeks to understand in what ways cultural auditing could represent an evaluation process that helps improve multiculturalism in organizations, logistics management, and information systems. It suggests that information and business systems could benefit from considering cultural diversity and its impact on organizational success. In fact, cultural auditing has been pointed out as a way to neutralize cultural conflicts. In order to develop the argument, this chapter first discusses the concept of cultural auditing; it analyzes the extent to which the literature concerning cultural auditing takes multicultural concerns on board; it then gauges if and how cultural auditing has been perceived, through oral history, in an auditing organization in Brazil. It concludes by pinpointing possible ideas and ways forward for cultural auditing in a multicultural context. Keywords: Cultural auditing; logistics management; information systems; multicultural organizations; multiculturalism; oral history.
1. Introduction

Organizations as multicultural entities (Canen and Canen, 2005) should respond to cultural diversity in a contemporary and increasingly plural world. Authors like Cox Jr. (2001) argue that managing diversity means to understand “its effects and implementing behaviors, working practices and policies that respond to them in effective way” (p. 4). In fact, understanding the cultural points of view of customers and partners may represent the difference between success and failure. On the other
hand, considering cultural diversity also at the level of the organization should help build a climate open to cultural plurality, transparency and trust, with a result in the flourishing of the organization itself. Canen and Canen (2005) stress the fact that a multicultural organization can be considered as one that values cultural diversity and fosters the collective construction of a strong organizational cultural identity. At the same time, Canen and Canen (2008a) also argue that leadership is crucial to ensure a multicultural dimension in organizational structures and practices. They point out the dangers of a monocultural leadership in disrupting organizational performance and eventually being even conducive to cases of bullying at the workplace. Taking into account the interconnectedness of business and information systems, authors like Gillam and Oppenheim (2006) contend that virtual teams — understood as groups of people who work across time, space, and often organizational boundaries using interactive technology — need effective communication strategies in order to be successful and tap on their potential for generating and sharing management knowledge. The same authors contend that “intercultural teams are bound to have definite effects on managerial and leadership styles” (p. 167), stressing the need for cross-cultural management associated with information and business systems. They claim that even though information technology and the advent of the web have revolutionized the business model, there is a strong need to analyze “managerial and cultural issues (all too often ignored) that arise from their use” (p. 161). In the same vein, Weideman and Kritzinger (2003, p. 76) stretch the argument further, by stating that “cultural diversity and technology are interrelated in today’s workplace . . . cultural dividing factors . . . are claimed to be the reason for the lacking of skills required to use the latest technology.” The referred authors illustrate their argument by developing a study in which preferences for different patterns of technological interfaces were largely associated with plural identity markers on the lines of gender and race, among others. These views seem to support the argument that IT and business management should take cultural diversity into account in order to represent real assets to the organization. They seem to strongly suggest that it is not enough to merely emphasize technological progress and capabilities if the culturally plural human agents are not taken on board. In line with that, mechanisms of cultural auditing that could evaluate the weight of cultural variables in the organizational strategies and performances should be central. Building on these ideas, the present chapter — which is an extended version of Canen and Canen (2008b) — focuses on cultural auditing, taking into account the contemporary era of business and IT. It suggests that a lot has been done related to multicultural discussion, but much less in discussing the meaning of cultural auditing in order to evaluate those issues in the organizations. In line with that, a mechanism of cultural auditing should be in place in order to ensure logistics management and organizational evaluation and planning in a multicultural perspective. In a globalized and technological era, cultural auditing should give objectivity and the right weight to this dimension.
In order to develop the argument, cultural auditing is discussed from a theoretical perspective and through oral history in an auditing organization in Brazil. We also pinpoint a possible framework for cultural auditing in a multicultural context. It should be borne in mind that the present chapter is part of the authors' research agenda. Therefore, we do not claim that the ideas presented here are to be generalized; to the contrary, they are open to discussion.

2. Multicultural Organizations, Evaluation, and Cultural Auditing: Transforming Ideals into Practice

Multiculturalism can be considered as a set of answers to cultural diversity so as to build on it for organizational success and for the challenging of prejudices. Preparation of managers in a multicultural perspective is pointed out as crucial (Canen and Canen, 2001), the role of training and education being vital in the process. In the context of schools, Brown and Conrard (2007) posit that multicultural leadership has to do with the development of cosmopolitan personalities. At the same time, the dangers of monocultural leaders condoning professional harassment and bullying in the workplace have been pointed out (Canen and Canen, 2008a), with serious consequences for organizational performance. However, even though many organizational leaders may truly believe their organizations respond adequately to cultural diversity, there seems to be a powerful need to evaluate the extent to which that belief is true. Cultural auditing could represent a possible evaluation tool, even more important than financial auditing, particularly in the challenging processes of mergers, as highlighted by Canen and Canen (2002). As claimed by Radler (1999), 80% of mergers and acquisitions fail due to cultural incompatibilities. Carleton (1999) suggests that the objective of cultural auditing is to elaborate a plan that manages organizational cultural differences, mainly in merger processes. As pointed out by Castellano and Lightle (2005), “a cultural audit would provide a means for assessing the tone at the top and the attitude toward internal controls and ethical decision-making” (p. 10). Fletcher and Jones (1992) make the point that cultural auditing should not evaluate organizational culture according to any preconception. Rather, it should be supported by the norms and rituals that have a decisive influence on the global ability of the organization to deal with its changes. In the same vein, Bardoel and Sohal (1999) point out the role of leadership, stressing that the input of senior management and their involvement are crucial for the success of cultural change. We suggest that cultural auditing in a multicultural context should comprise the factors pointed out in Fig. 1. As can be noted in Fig. 1, the core of cultural auditing is multicultural leadership (inner circle), which, in its turn, has a strong impact on the monitoring of logistics management and organizational life in a way that shows to what extent solutions are given to cultural conflicts. This leads to the monitoring of respect for cultural diversity and, finally, to understanding the extent to which a shared vision
Figure 1. Cultural auditing in a multicultural perspective (nested factors: multicultural leadership; organizational life; solutions to cultural conflicts; respect for cultural diversity; shared vision).
of organizational cultural identity based on mutual respect, motivation, cohesion, and the understanding of cultural differences is in place. Wright (1986), citing Campbell, considers organizational culture as the set of “attitudes, values, beliefs, and expectations of those that work in an organization” (p. 28). Fletcher and Jones (1992) classify organizations according to their culture, stating there is “no ideal culture as different cultures are appropriate in different contexts” (p. 31). In that trend, organizational cultural identities are to be categorized in order to be better understood. Asma Abdullah, interviewed by Schermerhorn (1994), contends that typical organizational problems center on “different perceptions of how work should be done. Very often a new expatriate is not quite sure how to get the most from his national subordinates” (p. 49), which reinforces the need to understand culturally plural views. Therefore, cultural auditing is a process that should go beyond cultural organizational changes and incorporate an ongoing perspective that highlights the organization's cultural elements. In line with all that, it should be emphasized that logistics management in a multicultural perspective should be part of cultural auditing. In fact, logistics costs represent a high percentage of a country's GDP. Logistics and cultural diversity should go hand in hand for organizational success (Canen and Canen, 1999), and should therefore be incorporated into cultural auditing in a multicultural perspective.

3. Cultural Auditing in a Real-Life Auditing Company

Based on the above, the present study sought to understand how cultural auditing has been perceived by organizations. In order to do that, a case study has been developed
in one of the most prestigious auditing companies, with the site in Brazil. The focus, of the present chapter, is on the interview carried out with a top executive of that company. The interview sought to understand to what extent cultural auditing has been perceived as relevant. Qualitative methodology has been chosen as the research methodology because it provides opportunities for gauging cultural perceptions that inform the everyday life of an institution. Therefore, by focusing on oral history as gleaned from the interview with a top executive of the company, the study should provide a glimpse of perceptions and cultural views that underlie the organizational identity. The interviewee explained that auditing companies have existed since the 18th century. The Dutch were the first ones, their country being small but with a strong international presence including in Brazil. The interviewee posited that the profession of auditing has always been more international and now it is becoming more formalized, “since everything today should be regulated.” The interviewee also makes a distinction between “necessarily regulated activities,” among which he cites banks and ensuring companies; and the others, which are companies that do not develop “obligatory regulated activities,” but wish to be audited, without any imposition to do so. He cited the example of Company N, that undertakes its auditing process without any regulating imposition from outside. The last ones are audited because “they want to, which means that they realize they need it, something must have changed within them” (from the interview carried out in October 2007). From this set of answers, even though the expression “organizational culture” has not been mentioned, it seems more likely that it is taken into account in the cases of voluntary auditing, particularly when the interviewee expresses that “something must have changed within them”. This indirectly seems to raise the possibility of cultural auditing in the sense defended by Bardoel and Sohal (1999), which emphasize its role in assessing the impact of cultural change in the organization. It is interesting to note that, according to the interviewee, there are four large auditing firms in the world, and they are referred to as “the big four”. The fact that Company E failed seems to have been a big blow. Even though the interviewee did not explicitly mention Company E’s culture, that factor seems to be clear when he asserted that: It is shocking to see the arrogance that prevailed in Company E by then. Their motto was “ask why.” However, nobody asked why during the last period, because things had really got rotten! What finished with them was their behavior. They started to burn all files, papers in tons, and that really finished with them. Their behavior made them implode! They even managed to get a favorable sentence in court at a certain point, but that was not enough. They really were finished by then. Therefore, there were five big companies, now there are only four . . . We are all very sensitive indeed to these aspects, because if anything of that kind happens, four may become three . . . (from the interview, October 2007).
It is important to note from the above answer the suggestion that the lack of credibility and ethical behavior in leadership management that was the biggest factor in the breaking down of the mentioned organization. Values defended by authors like Canen and Canen (2005, 2008b), Brown and Conrard (2007) related to multicultural leadership, ethical behavior, communication, credibility, transparency, and others seem to be perceived by the interviewee, albeit in an indirect way, to have been crucial in the downfall of Company E, more than any financial or judicial factors. It is noteworthy that when explicitly asked whether cultural auditing is employed in the auditing company taken as the case study, the answer of the interviewee was negative. However, when talking about Company E, as above, as well as about the auditing process itself, cultural aspects are bound to appear. For example, even though he posited that the culture of auditing is dictated by the market legislation, he recognized that: When talking about auditing we are talking about a large range of companies, ranging from open ones to little ones. Therefore, the rules and standards to be applied by companies are bound to be different. However, I do not think it is cultural, but rather a way of applying risk norms, that could hinder wrong procedures. That was very much upgraded after the Company E case (from the interview, October 2007).
It therefore seems to be clear that the concept of cultural auditing and of organizational culture, as such, is not applied in the auditing process. However, implicitly rather than explicitly, the cultural diversity of organizations due to their characteristics, size, and mission certainly seems to have a strong weight on the way they perform, having to be adapted to that diversity. The cultural aspects and the need to involve personnel in the organizational culture were again implicit in the discussion of about the possible clashes of cultures in processes of merger: The problem with mergers is the behavior of people that work in the companies. The way one company may conduct its business may be highly bureaucratic, whilst the other is in a free atmosphere . . . When we ourselves made our merger the profile of people were very different. Therefore the secret of our type of business — which has to deal with delivery of services, client services — is to let people live with each other. However, sometimes it may be a disaster indeed, because when there are mergers, it is not important if one is in a top position, when the merger is carried out, that position may be inverted . . . But the question of respect is paramount. However, people of that sort are fighting for their lives, some of them will fight one way or the other. There are those that are real fighters, others that are na¨ıve, others that like to look good but indeed are not good at all. But the level of sensitivity to these things is higher nowadays . . . In my view, the best way to tackle this is by having sensitivity. However, some companies do not have that sensitivity at all . . . (from the interview, October 2007).
As can be noted, aspects related to organizational culture similar to what Cox Jr. (2001) suggests are mentioned when the interviewee talked about a bureaucratic and a more liberal atmosphere. In that case, a relativistic approach seems to
be present, in that the culture of the organization is not put into question, and the multicultural aspects are not focused on beforehand, in the way Fletcher and Jones (1992) understand. Nevertheless, the fact that behaviors and sensibility to them should be fostered, as claimed by the interviewee, seems to implicitly confirm the need to see beyond economic factors and probe into cultural ones in order for the organizations to succeed. Finally, when asked about the main difficulties and the main successful aspects involved in auditing, the interviewee noted that: The main challenge is to answer to the enormous amount of demands upon us, in terms of documentation and procedures. We and the rest of the world are working towards a “limitation liability” in our work. England and the United States already have what they call the “liability gap.” Society, the public, the clients and others accumulate a lot of demands upon us. From the practical point of view, difficulties relate to resources in order to face our growth. We have almost doubled our initial size. We admit trainees, young people that still are in university, and we subside their studies apart from an initial salary. We are very open, there is a positive climate here, we hold parties and other events, and there is no restriction in terms of access to directors. Our success is therefore our contribution to the society, we also have social projects, and we have been receiving prizes for those. (from the interview, October 2007).
From the above, it seems clear that even though the emphasis of the discourse is on market and bureaucratic challenges, the cultural aspect emerges when the interviewee talks about the organizational climate present in the auditing company — described as open, happy, and transparent, with easy accessibility to all the echelons of the organizational hierarchy. The challenge seems to be for organizations themselves to take on board cultural auditing so that the inner cultural variables — related to aspects such as multicultural leadership, logistics management, and organizational climate — will be efficiently tackled, in order to provide the basis for an increasingly successful organizational performance.

4. Conclusions

The present chapter has discussed ways in which cultural auditing could represent a process whereby cultural conflicts could be pinpointed and indeed addressed in organizations. It highlighted the interconnectedness of cultural diversity and technology in the workplace, stressing the importance of effectively dealing with it for the success of business and information management systems. It argued that such cultural auditing could represent an evaluation process focused on indicators related to multicultural leadership and a positive organizational climate and their effect on logistics management and organizational success, and it presented theoretical perspectives concerning the issue. Concerning the field work, cultural auditing does not seem to be the focus of the auditing carried out in the company taken as a case study. In fact, multiculturalism is not a part of the dialog of that firm. However,
implicitly, the cultural aspects emerge within the discourse of the top executive interviewed, imposing themselves in the analysis of organizational successes and challenges. It is important to note that those aspects emerged throughout the interview, as for example when talking about respect, an open climate, and other aspects mentioned by the interviewee. Based on that, the present study is part of the authors' research agenda and should help open up discussions concerning the importance of cultural auditing for logistics management and organizational performance. It should also contribute to thinking about IT by taking multicultural aspects into account. Such success is certainly more likely when the organizational environment is a nurturing, respectful one in which all — regardless of race, ethnicity, social class, gender, and other identity markers — feel valued.
References

Bardoel, EA and AS Sohal (1999). The role of the cultural audit in implementing quality improvement programs. International Journal of Quality and Reliability Management, 16(3), 263–276.
Brown, L and DA Conrard (2007). School leadership in Trinidad and Tobago: The challenges of context. Comparative Education Review, 51(2), 181–201.
Canen, AG and A Canen (1999). Logistics and cultural diversity: Hand in hand for organisational success. Cross Cultural Management: An International Journal, 6(1), 3–8.
Canen, AG and A Canen (2001). Looking at multiculturalism in international logistics: An experiment in a higher education institution. The International Journal of Educational Management, 15(3), 145–152.
Canen, AG and A Canen (2002). Innovation management education for multicultural organisations: Challenges and a role for logistics. European Journal of Innovation Management, 5(2), 73–85.
Canen, AG and A Canen (2005). Organizações Multiculturais: logística na corporação globalizada. Rio de Janeiro: Editora Ciência Moderna.
Canen, AG and A Canen (2008a). Multicultural leadership: The costs of its absence in organizational conflict management. International Journal of Conflict Management, 19(1), 4–19.
Canen, AG and A Canen (2008b). Cultural auditing: Some ways ahead for multicultural organisations and logistics management. In: Menipaz, E, I Ben-Gal and Y Bukchin (Eds.), Book of Proceedings, International Conference on Industrial Logistics, Tel-Aviv, Israel.
Carleton, R (1999). Choque de Culturas. HSM Management, (14), May/June.
Castellano, JF and SS Lightle (2005). Using cultural audits to assess tone at the top. The CPA Journal, 75(2), 6–11.
Cox Jr, T (2001). Creating the Multicultural Organization. San Francisco: Jossey-Bass, A Wiley Company.
Fletcher, B and FF Jones (1992). Measuring organizational culture: The cultural audit. Managerial Auditing Journal, 7(6), 30–36.
Gillam, C and C Oppenheim (2006). Review article: Reviewing the impact of virtual teams in the information age. Journal of Information Science, 32(2), 160–175.
Radler, J (1999). Incompatibilidade cultural inviabiliza fusão entre Empresas. Gazeta Mercantil, 14 December.
Schermerhorn, JR (1994). Intercultural management training: An interview with Asma Abdullah. Journal of Management Development, 13(3), 47–64.
Weideman, M and W Kritzinger (2003). Concept Mapping vs. Web Page Hyperlinks as an Information Retrieval Interface: preferences of postgraduate culturally diverse learners. Proceedings of SAICSIT, pp. 69–82.
Wright, P (1986). A Cultural Audit: first step in a needs analysis? JEIT, 10(1), 28–31.
Biographical Notes

Alberto G Canen is a Professor in the Department of Production Engineering at COPPE/Federal University of Rio de Janeiro. He is a Researcher for the Brazilian Research Council (CNPq). He was formerly a Visiting Professor at the University of Glasgow. He is a former President of the Brazilian Operations Research Society (SOBRAPO). He has a wide experience working in industrial organizations, as well as being a consultant.

Ana Canen is a Professor in the Department of Educational Studies at the Federal University of Rio de Janeiro. She is a Researcher for the Brazilian Research Council (CNPq). She has also actively participated in long distance education programs. Her main research interests have focused on comparative and multicultural education and institutional evaluation.
Chapter 9
Efficiency as Criterion for Typification of the Dairy Industry in Minas Gerais State LUIZ ANTONIO ABRANTES∗, ADRIANO PROVEZANO GOMES∗∗, MARCO AURÉLIO MARQUES FERREIRA† and ANTÔNIO CARLOS BRUNOZI JÚNIOR‡ Department of Administration, Federal University of Viçosa, CEP: 36.570-000, Viçosa, Minas Gerais, Brazil ∗
[email protected] ∗∗
[email protected] †
[email protected] ‡
[email protected] MAISA PEREIRA SILVA Student in Administration, Federal University of Viçosa, CEP: 36.570-000, Viçosa, Minas Gerais, Brazil
[email protected] Milk production is considered a strategic activity in the national economy, as it is an important generator of foreign exchange and employment. The increased domestic competition associated with the globalization of markets has required higher competitiveness and better performance from organizations in the management of their activities. To avoid market loss or even to guarantee survival, these organizations have constantly been looking for ways to improve their performance. This study was carried out to typify the dairy industries in Minas Gerais state in relation to their technical performance, focusing on socioeconomic, financial, and administrative aspects. A total of 142 dairy industries were analyzed, and data envelopment analysis was used to measure their performance. It was verified that only 10 industries reached maximum technical efficiency. In the cases of those showing inefficiency, it was observed that the main problem was not an incorrect production scale, but inefficiency in the use of inputs. Keywords: Efficiency; agrobusiness; data envelopment analysis.
1. Introduction

Milk production is considered a strategic industry in the national economy, as it is an important generator of foreign exchange and employment. Minas Gerais is the biggest producer in Brazil, as it achieved 27.92% of the national production in 2007, according to the Instituto Brasileiro de Geografia e Estatística (IBGE).
From the 1990s, the changes occurring in this sector due to government intervention were decisive for its current state. Imposing a new profile on the agroindustrial milk complex, these changes were marked by external factors, such as the intensification of globalization and the process of formation and consolidation of economic blocs, as well as internal factors, such as the deregulation of the sector starting in 1991 and reduced government intervention on imported products, which occurred through the reduction of both quotas and non-tariff barriers. In addition, increased domestic competition began to require higher competitiveness and better performance in the management of the organizations' activities. It became necessary to have a thorough understanding of the market structure in which the organization competes, as well as its correct positioning, in such a way as to ensure a sustainable competitive advantage. To avoid market loss or even to guarantee survival, those organizations have been constantly looking for means to improve their performance. The permanent improvement of tactics to strengthen the relationship with suppliers and consumers, to optimize resources, to increase productivity, and to reduce costs is an essential practice, especially considering that the organizations operate within a macroenvironment that can be affected by trends in political, legal, economic, technological, and social systems. Thus, it is well known that a satisfactory performance depends not only on the internal effort of the company, but also on its capacity to innovate, to modernize, to position itself, and to adapt to the pressures and challenges of the competition with regard to environmental, social, cultural, technological, economic, and financial aspects. Indeed, the company is not an isolated link within this context, where the competitiveness of its product can be significantly affected by both the productivity and the efficiency of the several economic agents who directly or indirectly participate in the production chain. In this respect, the Brazilian agroindustrial milk complex involves a long chain that extends from the input industry to the national and international retail levels. The dairy industry, comprising the transformation segment, is responsible for the industrialization of milk and its derivatives, supplying society with a wide variety of products for final consumption. With an ample part of its activity directed at internal consumption, this segment is strongly affected by the performance of the national economy, employment levels, interest rates, and mainly by the price of raw materials. Thus, knowledge of the current reality at the macro- and microenvironmental levels is an indispensable factor in the construction of sectorial analyses, as understanding of the industry's importance is only reached when it becomes possible to contextualize the amplitude of the environment in which it competes. In this scenario, knowledge of the strategic posture of the segment, the portfolio of its products, and the competitive forces guiding the sector is essentially important to face the competition and to ensure its capacity to survive and expand in the long
term. Furthermore, short-term policies, such as the payment capacity related to the administration of working capital and to the policies of receipts and payments adopted by the company, are essential in determining the liquidity and continuity of the business activity. In this respect, customers, suppliers, and stocks are important components of the operational and financial cycle of the company: the customers participate in this cycle via their payment capacity and the credit policies adopted by the company, the suppliers via their financing capacity, and the stocks, which will be converted into results and depend heavily on turnover. All these factors interfere in short-term cycles that have repercussions on the final results, taking into account their direct relationship with the final operational results and the formation of other expenses that will affect the net result. So, it is observed that the performance of any organization does not depend only on the company's internal effort, but also on its capacity to innovate, modernize, position itself, and adapt in order to answer the pressures and challenges of the competition in the face of environmental, social, cultural, technological, economic, and financial aspects. The careful improvement of tactics to improve the relationship with suppliers and consumers, to optimize resources, to increase productivity, and to reduce costs is an essential practice for the achievement of competitiveness and for reaching the desired scale of production, as well as efficiency in using the production factors. In this respect, one question arises: what is the technical efficiency level of the dairy industry in Minas Gerais State?

Thus, the central objective of this work is to typify the dairy industries in Minas Gerais State in relation to their technical performance, focusing on the socioeconomic, financial, and management aspects. More specifically, the intentions are the following:

(a) To measure the performance of the dairy industries, based on technical efficiency and scale measures.
(b) To identify and quantify the influence of the variables related to the socioeconomic, financial, and management aspects on the dairy industries' technical efficiency.

To answer this question, the present research was proposed, taking into account the capital societies and cooperatives with annual gross revenue above R$1,200,000.00 in Minas Gerais State, Brazil.

2. Theoretical Reference

2.1. The Importance of the Milk Agrobusiness

Brazil is distinguished as one of the major producers of milk in the world, and it was the sixth largest producer in 2006. With a total market share of
4.6% of worldwide production, the country is behind only the United States, India, China, Russia, and Germany (Embrapa Gado de Leite, 2007). As one of the largest global milk producers, the dairy sector and the agroindustrial complex of milk represent great socioeconomic importance to the country. In Brazil, milk production is distinguished as one of the main agriculture and livestock activities because of its ability to generate employment and income, as well as its connection with other agroindustry sectors. Its socioeconomic importance is highlighted by the position it occupies in Brazilian agrobusiness, as it is among the main products in terms of generating national income and tax revenue. In 2007, it occupied the sixth place in the ranking of gross value of the national agricultural production, behind cattle meat, soybean, sugarcane, chicken, and corn. The sector is also distinguished by its interconnectedness, as it maintains strong relationships with other sectors of the economy, therefore being a key sector in the national economic development process. In Minas Gerais' economy, the data of the Confederação Nacional da Agricultura (CNA) show the prominent participation of milk in the gross value of agriculture and livestock, as it took the second place among the main products in 2006. In that year, the gross revenue of coffee and milk products in Minas Gerais totaled R$5.6 and R$3.5 billion, respectively, contributing 26.64% and 16.89% of the total gross revenue (Table 1).

Table 1. Gross Revenue of the Main Agricultural and Livestock Products in Minas Gerais State in 2006.

Products        R$ Millions    % Participation
Green coffee       5.627           26.64
Milk               3.567           16.89
Cattle meat        3.478           16.46
Corn               1.425            6.75
Soybean            1.050            4.97

Source: Adapted from FAEMG (2007).

Milk production is characterized by its presence in all states of the federation, although half of the national production is concentrated in only three states, with Minas Gerais as the largest national producer at 27.924% of the national production, followed by Rio Grande do Sul with 14.087% and São Paulo with 12.481%. Among the industrial parks of national production and processing, the southeast region is distinguished by a concentrated production that reaches 7.8 billion L/year, totaling 43.78% of the national total (Table 2).

Table 2. Raw or Cold Milk Acquired in the Year 2007 in the Country and in the Federation Units.

Country and federation units     In 1000 L     % Participation
Brasil                          17,836,363        100.000
Rondônia                           691,756          3.878
Acre                                11,786          0.066
Amazonas                               814          0.005
Roraima                                205          0.001
Pará                               283,723          1.591
Tocantins                          112,216          0.629
Maranhão                            62,466          0.35
Piauí                               19,741          0.111
Ceará                              152,770          0.857
Rio Grande do Norte                 79,415          0.445
Paraíba                             46,969          0.263
Pernambuco                         201,857          1.132
Alagoas                            117,209          0.657
Sergipe                             72,152          0.405
Bahia                              286,097          1.604
Minas Gerais                     4,980,602         27.924
Espírito Santo                     210,061          1.178
Rio de Janeiro                     392,833          2.202
São Paulo                        2,226,172         12.481
Paraná                           1,473,891          8.263
Santa Catarina                   1,084,314          6.079
Rio Grande do Sul                2,512,687         14.087
Mato Grosso do Sul                 225,169          1.262
Mato Grosso                        414,704          2.325
Goiás                            2,159,971         12.11
Distrito Federal                    16,786          0.094

Source: IBGE, Quarterly Milk Survey (2007).

In Minas Gerais, it is relevant to note the presence of milk production in 89.6% of the counties, which therefore occupy a prominent position in the composition of the milk areas of the country, in the location of most dairy product industries, and in the largest consumption center. The nine mesoregions located in this state represent 23.6% of the national production and 87.8% of the state production. With regard to revenue, among the 12 large companies found in Brazil, five are located in Minas Gerais (Embrapa Gado de Leite, 2007). According to the Instituto de Desenvolvimento Integrado de Minas Gerais (INDI) (2006), the supremacy of the state is mainly due to factors such as excellent climate and soil conditions; the strategic geographical location of the consumption centers; tradition; experience in livestock farming; and governmental support to the entrepreneurs of the segment. Because of its leading position in national production, Minas Gerais has a very heterogeneous industrial park with very different realities.
At one extreme are the largest and most modern companies in the country, such as Nestlé, Danone, Itambé, Cotochés, Barbosa & Marques, and Vigor. At the other extreme are the small companies with reduced productive capacities; they lack the basic conditions of industrialization and competitiveness and sell products of doubtful quality to the market. The modern companies use advanced technology in all stages of the productive chain: they have production scale, human resources, and high-quality products with competitive prices. They operate in high value-added segments such as milks (fermented, sterilized, condensed, powdered, evaporated), lacteous desserts, ice creams, and fine cheeses. On the other hand, the companies with small production scale are those operating in less sophisticated sectors (traditional cheeses, C-type pasteurized milk, milk sweets, and butter). Besides using outdated techniques, they lack qualified human resources and diversification, and face difficulties in selling their products in the market. According to the Secretaria de Estado de Agricultura, Pecuária e Abastecimento (SEAPA-MG) (2007), 70% are small producers with a daily production below 100 L. This institution calls attention to the great social representativeness of this milk segment. In Minas Gerais, the milk segment receives some 583.33 million liters a month, on average, and industrializes it into pasteurized milk, creams, cheeses, yogurts, condensed products, desserts, and others. It is responsible for 1.2 million jobs, taking into account the producers, employees, and their relatives, and generates a revenue of R$ 6 billion/year that is distributed among approximately 900 dairy product companies (Governo de Minas Gerais, 2007).

3. Methodology

3.1. Obtainment of the Efficient Frontier — The DEA Approach

Data envelopment analysis (DEA) is a non-parametric technique based on mathematical programming, specifically linear programming, used to analyze the relative efficiency of producing units. In the literature concerning DEA models, a producing unit is called a decision-making unit (DMU), since the models provide a measure to evaluate the relative efficiency of decision-making units. A producing unit is any productive system that transforms inputs into products. According to Charnes et al. (1978), to estimate and analyze the DMUs' relative efficiency, DEA uses the Pareto-optimum definition, according to which no product can have its production increased without increasing the inputs or reducing other products, or, alternatively, no input can be reduced without reducing the production of some product. Efficiency is analyzed relatively among the units. Charnes et al. (1978) generalized the work of Farrell (1957) to incorporate the multiproduct and multi-input nature of production, proposing the DEA technique for the analysis of the relative efficiency of different units.
Taking into account that there are k inputs and m products for each of the n DMUs, two matrices are constructed: the X matrix of inputs, with dimensions (k × n), and the Y matrix of products, with dimensions (m × n), representing the data of all n DMUs. In the X matrix, each line represents an input and each column represents a DMU. In the Y matrix, each line represents a product and each column a DMU. For the X matrix, the coefficients must be non-negative and each line and each column must contain at least one positive coefficient, that is, each DMU consumes at least one input and each input is consumed by at least one DMU. The same reasoning applies to the Y matrix. So, for the ith DMU, the vectors xi and yi represent its inputs and products, respectively. For each DMU, an efficiency measure can be obtained. This measure is the ratio of all products to all inputs. For the ith DMU,

$$\text{efficiency of DMU } i = \frac{u \cdot y_i}{v \cdot x_i} = \frac{u_1 y_{1i} + u_2 y_{2i} + \cdots + u_m y_{mi}}{v_1 x_{1i} + v_2 x_{2i} + \cdots + v_k x_{ki}}, \qquad (1)$$
where u is a vector (m × 1) of weights on the products and v is a vector (k × 1) of weights on the inputs. Notice that the efficiency measure will be a scalar, due to the orders of the vectors composing it. The initial presupposition of this efficiency measure is that it requires a common group of weights to be applied to all DMUs. However, there is some difficulty in obtaining a common group of weights to determine the relative efficiency of each DMU. This occurs because DMUs may value inputs and products differently, and hence adopt different weights. Thus, it is necessary to formulate a problem that allows each DMU to adopt the group of weights that is most favorable in comparison with the other units. To select the optimum weights for each DMU, a mathematical programming problem is specified. The DEA model with input orientation and the presupposition of constant returns to scale searches for the optimum weights that minimize the proportional reduction in the input levels while maintaining the quantity of products fixed. According to Charnes et al. (1978), this model can be algebraically represented by

$$
\begin{aligned}
\min_{\theta,\,\lambda,\,S^{+},\,S^{-}}\ & \theta,\\
\text{subject to: } & -y_i + Y\lambda - S^{+} = 0,\\
& \theta x_i - X\lambda - S^{-} = 0,\\
& \lambda \ge 0,\quad S^{+} \ge 0,\quad S^{-} \ge 0,
\end{aligned}
\qquad (2)
$$
where yi is a vector (m × 1) of product quantities of the ith DMU; xi is a vector (k × 1) of input quantities of the ith DMU; Y is a matrix (m × n) of products of the n DMUs; X is a matrix (k × n) of inputs of the n DMUs; λ is a vector
(n × 1) of weights; S+ is a vector of output slacks; S− is a vector of input slacks; and θ is a scalar that takes values equal to or lower than 1. The value obtained for θ indicates the efficiency score of the DMU: a value equal to 1 indicates that the DMU is technically efficient relative to the others, whereas a value lower than 1 evidences the presence of relative technical inefficiency. The linear programming problem shown in Eq. (2) is solved n times, once for each DMU, and as a result it yields the values of θ and λ. As mentioned, θ is the efficiency score of the DMU under analysis and, in the case that the DMU is inefficient, the values of λ identify the peers of this unit, that is, the efficient DMUs that served as reference (or benchmark) for the inefficient DMU. To incorporate the possibility of variable returns to scale, Banker et al. (1984) proposed a DEA model with the presupposition of variable returns to scale, by introducing a convexity restriction into the CCR model presented in Eq. (2). The DEA model with input orientation and the presupposition of variable returns to scale, presented in Eq. (3), allows the decomposition of technical efficiency into scale efficiency and pure technical efficiency. To analyze scale efficiency, it is necessary to estimate the DMU's efficiency using both the DEA model presented in Eq. (2) and the one presented in Eq. (3); scale inefficiency is evidenced when the scores of those two models differ. The DEA model with input orientation, which presupposes variable returns to scale, can be represented by the following algebraic notation:

$$
\begin{aligned}
\min_{\theta,\,\lambda,\,S^{+},\,S^{-}}\ & \theta,\\
\text{subject to: } & -y_i + Y\lambda - S^{+} = 0,\\
& \theta x_i - X\lambda - S^{-} = 0,\\
& N1\,\lambda = 1,\\
& \lambda \ge 0,\quad S^{+} \ge 0,\quad S^{-} \ge 0,
\end{aligned}
\qquad (3)
$$
where N1 is a vector (n × 1) of ones. The other variables were previously described. This approach forms a convex surface of intersecting planes, which envelops the data more compactly than the surface formed by the model with constant returns. Hence, the technical efficiency scores obtained under variable returns are higher than or equal to those obtained under constant returns. This occurs because the technical efficiency measure obtained in the constant-returns model is the product of the technical efficiency measure obtained in the variable-returns model and the scale efficiency measure. The results supplied by the DEA models are complex and rich in details and, when used correctly, constitute an important auxiliary tool in decision making by the agents involved in the productive process.
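The decomposition described above can be written compactly as follows; the relation is standard in the DEA literature rather than quoted from the chapter:

$$\theta_i^{\mathrm{CRS}} = \theta_i^{\mathrm{VRS}} \times SE_i, \qquad SE_i = \frac{\theta_i^{\mathrm{CRS}}}{\theta_i^{\mathrm{VRS}}} \le 1,$$

so a DMU with $SE_i = 1$ operates at its most productive scale size, while $SE_i < 1$ signals scale inefficiency.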
Due to this complexity, more detailed descriptions of the methodology can be found in textbooks such as Charnes et al. (1978); Coelli et al. (2005); Cooper et al. (2004, 2007); and Ray (2004).

3.2. Source and Treatment of the Data

In this study, the reference population comprises the capital societies, companies, and cooperatives with annual gross revenue above R$ 1,200,000.00 that are installed in the State of Minas Gerais and act in the dairy sector. The data used in this research were obtained from primary sources, using a structured questionnaire applied by post, by telephone, or in person to an intentional sample drawn from the group of organizations acting in the Minas Gerais dairy industry. In total, 70 cooperatives and 72 companies were contacted. To calculate the technical efficiency measures for the sample of dairy industries, four variables were used, three of them representing inputs and the last one the product, as described below:

• Inputs
◦ Payroll — the cost of labor, computed as the annual cost of the payroll. Considering that the direct labor cost used in the productive process is aggregated to the other production factors and included under Stocks or Cost of Goods Sold, the total of Administrative Expenses with Personnel was used as a proxy for this factor.
◦ Fixed assets — the composition of the permanent structure of the units composing the sample.
◦ Milk acquired — the daily average of liters of milk acquired by the units composing the sample.

• Product
◦ Revenue — the annual average earnings (in reais — R$) from the sale of the products made by the units composing the sample.

4. Results and Discussion

4.1. Analyzing the Efficiency of the Dairy Industry

The DEA model was initially used under the presupposition of constant returns to scale, to obtain the technical efficiency measure for each dairy of the sample without considering scale variations. Subsequently, the presupposition of constant returns to scale was removed by adding a convexity restriction, which made it possible to obtain the efficiency measures under the paradigm of variable returns. With these two measures, it became possible to calculate the scale efficiency.
Table 3. Distribution of the Dairies According to Intervals of Technical Efficiency and Scale Efficiency Measures (E) Obtained with the DEA Models.

Specification          | Technical efficiency, constant returns (No. of dairies) | Technical efficiency, variable returns (No. of dairies) | Scale efficiency (No. of dairies)
E < 0.1                | 0      | 0      | 0
0.1 ≤ E < 0.2          | 10     | 6      | 0
0.2 ≤ E < 0.3          | 25     | 23     | 0
0.3 ≤ E < 0.4          | 29     | 23     | 1
0.4 ≤ E < 0.5          | 24     | 28     | 0
0.5 ≤ E < 0.6          | 9      | 10     | 2
0.6 ≤ E < 0.7          | 13     | 7      | 3
0.7 ≤ E < 0.8          | 11     | 13     | 11
0.8 ≤ E < 0.9          | 4      | 8      | 25
0.9 ≤ E < 1.0          | 7      | 4      | 88
E = 1.0                | 10     | 20     | 12
Total                  | 142    | 142    | 142
Average                | 0.4951 | 0.5457 | 0.9169
Standard deviation     | 0.2513 | 0.2714 | 0.1091
Variation coefficient  | 50.75% | 49.73% | 11.90%

Source: Results of the research.
Table 3 displays the results, separating the dairies according to the efficiency measures reached. Under the presupposition of constant returns to scale, only 10 of the 142 dairies in the sample obtained maximum technical efficiency. The average level of technical inefficiency is high, around 0.5049 (1 − 0.4951). It is important to highlight that, within this relative approach, the DEA model with constant returns to scale is more conservative, as it usually yields a smaller number of efficient DMUs than the variable-returns model. As a model with product orientation and only one output (revenue) was used for interpretation, the inefficiency of a company measures the amount by which its product could be expanded without the need for more inputs. In this case, the inefficient dairies could, on average, expand their revenue by some 50.49% without the need for larger amounts of inputs. It is important to emphasize that the dairies that reached maximum technical efficiency cannot expand their revenue without the introduction of more inputs: they are already on the efficient frontier. The other dairies, however, can still expand their revenue until they reach a technical efficiency equal to one, taking the efficient units as reference.
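The intervals of Table 3, and the scale-efficiency column in particular, can be assembled mechanically from the two sets of scores. The sketch below is a hedged, self-contained illustration on synthetic scores (not the study's data): it computes scale efficiency as the ratio of the constant-returns to the variable-returns score and bins all three sets of scores into the same intervals.

```python
# Illustrative only: bin DEA scores into the intervals of Table 3.
# The scores here are synthetic; in practice they would come from solve_dea above.
import numpy as np

def interval_counts(scores, tol=1e-9):
    s = np.asarray(scores)
    eff = s >= 1.0 - tol                                   # E = 1.0
    counts = [int(np.sum((s >= lo) & (s < lo + 0.1) & ~eff))
              for lo in np.arange(0.0, 1.0, 0.1)]          # [0,0.1), ..., [0.9,1.0)
    counts.append(int(eff.sum()))
    return counts

rng = np.random.default_rng(2)
vrs_ = rng.uniform(0.3, 1.0, size=142)                     # pure technical efficiency
crs = vrs_ * rng.uniform(0.7, 1.0, size=142)               # CRS score <= VRS score
scale = crs / vrs_                                         # scale efficiency per DMU

for name, s in [("CRS", crs), ("VRS", vrs_), ("Scale", scale)]:
    print(name, interval_counts(s), "avg =", round(float(np.mean(s)), 4))
```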
As the presupposition of constant returns was adopted, the sources of inefficiency may include those arising from an incorrect scale of production. In other words, the total technical efficiency (constant returns) is composed of both the pure technical efficiency (variable returns) and the scale efficiency. The technical inefficiency under variable returns effectively measures the excessive use of inputs; that is, it gives an idea of how much the company could produce if it were using its inputs correctly. The scale efficiency projects how much the company could gain if it were operating at the optimum scale, in this case, under constant returns. The averages of pure technical efficiency and scale efficiency are 0.5457 and 0.9169, respectively. This means that the inefficient dairies could, on average, increase their revenue by 45.43% by using their inputs correctly (without excess), and that, if they were operating at the correct scale, they could increase their revenue by 8.31% without the need for more inputs. As can be observed, the main problem of the inefficient dairies is not an incorrect scale of production but inefficiency in the use of inputs; that is, there is proportionally more waste of inputs than there are scale problems. Only 24 dairies present pure technical inefficiency lower than 10%, whereas 100 dairies show a scale inefficiency of 10% or less. Turning to the incorrect use of inputs, the data shown in Table 4 describe the current average situation of the companies and project the revenue that the inefficient dairies would obtain if they corrected their problems concerning the inadequate use of inputs. As can be observed, although the efficient dairies have more employees and take in more raw material, their average revenue is far higher than that of the inefficient ones. In this case, it can be said that factor productivity is higher in the efficient dairies; that is, although they use more production factors, they produce proportionally much more. The exception is fixed assets: the efficient dairies have a smaller volume of capital immobilized in the productive system than the inefficient ones, which may signal an incorrect scale of production in the inefficient dairies.
Table 4. Product and Inputs of the Dairies Used in the Sample.

Specification      | Unit                 | Efficient | Inefficient | Total
Revenue            | Thousand R$/year     | 39,463    | 11,496      | 15,435
Employees          | Person               | 152.1     | 61.3        | 74.1
Milk reception     | Thousand liters/day  | 128.5     | 43.2        | 55.2
Fixed assets       | Thousand R$          | 8,452     | 9,104       | 9,012
Projected revenue  | Thousand R$/year     | 39,463    | 21,820      | 24,305
Source: Results of the research.
As there is no wastage of inputs in the efficient dairies, they cannot increase their revenue with the current amounts of inputs. This fact is reflected in the last line of Table 4, in which revenue is projected in the absence of pure technical inefficiency: there are no gains in the projected revenue of the efficient dairies. On the other hand, if the inefficient dairies correct their problems concerning the incorrect use of inputs, they can increase the company's revenue by 90% on average. The increase in revenue is significant; in some dairies in the sample the gains could reach 300%. It is very important that managers are aware of their companies' situation relative to the other, more efficient dairies. The incorrect way in which many dairies use their inputs will hinder their performance in the market, because cost is one of the most important variables in competitive markets, where the organizations are usually price-takers. It is well known that many companies have problems concerning an incorrect scale of production. To take this analysis further, it is necessary to calculate each company's scale efficiency. The scale efficiency measure is obtained as the ratio between the technical efficiency measures from the constant-returns and the variable-returns models. If this ratio equals 1, the dairy is operating at the optimum scale; if it is lower than 1, the dairy presents scale inefficiency because it is not operating at the optimum scale. The dairies operating with constant returns to scale are at the optimum scale of production, whereas those operating outside the range of constant returns are not. In Table 3, it is observed that only 12 dairies do not present any scale problem. Ten of these 12 dairies are on the frontier of constant returns; the other two, although operating in the range of constant returns, are not located on the efficient frontier, that is, they have problems of pure technical efficiency. Scale inefficiency can occur when the dairy is operating below the optimum scale (increasing returns) or above the optimum scale (decreasing returns). If the dairy is below the optimum scale, it can increase production at decreasing average costs, that is, economies of scale will occur. On the other hand, if it is above the optimum scale, increased production will occur at increasing average costs, that is, diseconomies of scale will occur. To detect whether the scale inefficiencies occur because the dairies operate in the range of increasing or of decreasing returns, another linear programming problem was formulated, with the restriction of non-increasing returns to scale. It was thus possible to distribute the dairies of the sample according to the return type and the degree of pure technical efficiency, as shown in Table 5. In relation to the type of return, most dairies (57%) present increasing returns. Only 10% are within the range of constant returns, that is, at the optimum scale. When analyzing only the efficient ones, it is observed that 50% of
Table 5. Distribution of the Dairies According to the Return Type and the Degree of Pure Technical Efficiency.

Return type | Efficient (%) | Inefficient (%) | Total (%)
Increasing  | 5 (3.52)      | 76 (53.52)      | 81 (57.04)
Constant    | 10 (7.04)     | 4 (2.82)        | 14 (9.86)
Decreasing  | 5 (3.52)      | 42 (29.58)      | 47 (33.10)
Total       | 20 (14.08)    | 122 (85.92)     | 142 (100.00)
Source: Results of the research.
Table 6. Product and Inputs of the Dairies According to the Type of Return to Scale.

Specification   | Unit                | Increasing | Constant | Decreasing
Revenue         | Thousand R$/year    | 3,708      | 23,482   | 33,248
Employees       | Person              | 25.6       | 79.9     | 155.9
Milk reception  | Thousand liters/day | 15.4       | 52.6     | 124.5
Fixed assets    | Thousand R$         | 6,405      | 5,097    | 14,671
Source: Results of the research.
them have no scale problems. On the other hand, only 3.3% of the inefficient ones are at the optimum scale. Among the inefficient dairies, most (76) are within the increasing-returns range, whereas 42 are operating with decreasing returns. To give an idea of the "size" of the companies in relation to the scale of production, the data shown in Table 6 refer to the average revenue and the inputs used according to the type of return. The average revenue of the companies operating at the optimum scale of production is R$ 23.5 million/year. The companies below the optimum scale have lower revenue, whereas those above the optimum scale show higher revenue, which is to be expected. The difference appears in the decision to increase current revenue. For example, to obtain some 10% increase in revenue, the companies at the optimum scale would need to increase their inputs in the same proportion, so the average cost of the product would not be altered. The companies with increasing returns would need a less than 10% increase in inputs, so the average cost of the product would fall; the companies with decreasing returns, on the other hand, would need a more than 10% increase in inputs, so the average cost would rise. To conclude, the sample of 142 dairies in the State of Minas Gerais can be distributed as follows: 7.04% show no problem; 7.04% show only problems concerning the incorrect scale of production; 2.82% show only problems concerning the excessive use of inputs; and 83.10% show problems concerning both the excessive use of inputs and the scale of production. In that sense, it should be emphasized
that the simple quantification of a company's inefficiency is not enough to guide it toward improving its efficiency. It is necessary to identify how much of the inefficiency is due to an incorrect scale of production and how much could be recovered if the excessive use of inputs were eliminated.
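The classification into increasing, constant, and decreasing returns behind Table 5 can also be illustrated in code. The sketch below is a hedged illustration of the standard non-increasing-returns (NIRS) test described in Coelli et al. (2005), not the authors' own program; it reuses the illustrative solve_dea function from the earlier sketch and adds the restriction sum(lambda) ≤ 1.

```python
# Hedged sketch of the returns-to-scale classification behind Table 5, using the
# standard non-increasing-returns (NIRS) test.  Names are illustrative only.
import numpy as np
from scipy.optimize import linprog

def solve_dea_nirs(X, Y, i):
    """Input-oriented score of DMU i under non-increasing returns to scale."""
    n, k = X.shape
    m = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]
    A_out = np.hstack([np.zeros((m, 1)), -Y.T])          # Y'lambda >= y_i
    A_in = np.hstack([-X[i].reshape(k, 1), X.T])         # X'lambda <= theta * x_i
    A_sum = np.r_[0.0, np.ones(n)].reshape(1, -1)        # sum(lambda) <= 1
    A_ub = np.vstack([A_out, A_in, A_sum])
    b_ub = np.r_[-Y[i], np.zeros(k), 1.0]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

def return_type(theta_crs, theta_vrs, theta_nirs, tol=1e-6):
    if abs(theta_crs - theta_vrs) < tol:
        return "constant"      # at the optimum scale
    if abs(theta_nirs - theta_crs) < tol:
        return "increasing"    # below the optimum scale
    return "decreasing"        # above the optimum scale
```

For each DMU, theta_crs, theta_vrs, and theta_nirs would come from solve_dea(..., vrs=False), solve_dea(..., vrs=True), and solve_dea_nirs, respectively; comparing the three scores assigns the unit to one of the three rows of Table 5.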
4.2. Economic-Financial Profile of the Dairy Industries and Their Leaders, According to Efficiency

Most entrepreneurs acting directly in the management process are below 50 years of age. In general, these managers have more than 10 years' experience in the dairy industry, and most of them have an educational background at or above the secondary (high-school) level. Among the industries composing the sample, most are located in urban areas. With respect to the societal model, the cooperatives stand out when the efficiency classes are compared, and the same occurs with the industry existence time, as 74.47% of the efficient units have been acting in the market for more than 20 years (Table 7). When examining the performance of sales, production cost, and profit over the last five years, 41% of the inefficient companies reported an increasing profit performance, together with an increasing cost of production that, in some cases, was offset by the increasing performance of sales. This generated increasing or constant profits for 67% of those companies, whereas 33% reported decreasing profits (Table 8). In most cases, the profits are used to finance the company's activities; this reflects successful strategies and provides a base for generating funds for investments whose objective is to change the competitive environment in the medium and long terms. The relationship with suppliers in the purchase of raw material and with customers in the sale of the final product constitutes the operational cycle of the industry. It is thus an average period over which resources are invested in operations without the corresponding cash inflows. Part of this working capital is financed by the suppliers of the productive process, so the financial cycle of the industry is the difference between the operational cycle and the period of payment for the production factors. Its direct implications are associated with the capacity to generate and allocate the resources and productive factors that sustain the company's activity in the short term while influencing its competitive capacity in the long term. It was verified that the average stockholding period in the industry is mainly below 30 days, with no great difference for the companies considered efficient, a fact explained by the inventory turnover in this sector and the perishability of the product (Table 9). The average period granted to customers is shorter in the companies considered efficient, but longer in the case of the suppliers, to whom 31.91% of the
Table 7. Descriptive Statistics of Variables Related to the Leader's Profile and the Company's Profile.

Variable                   | Category              | Efficient (%) | Inefficient (%)
Manager's age              | 21–30 years old       | 4.30          | 12.60
                           | 31–40 years old       | 21.30         | 25.30
                           | 41–50 years old       | 31.90         | 35.80
                           | 51–60 years old       | 23.40         | 14.70
                           | More than 60 years    | 19.10         | 11.60
Educational background     | Postgraduate          | 2.13          | 3.16
                           | Undergraduate         | 42.55         | 51.58
                           | High school           | 44.68         | 32.63
                           | Middle school         | 6.38          | 11.58
                           | Elementary school     | 4.26          | 1.05
Experience in the activity | Lower than 1 year     | 2.13          | 2.11
                           | From 1 to 5 years     | 12.77         | 22.10
                           | From 6 to 10 years    | 14.89         | 16.84
                           | Above 10 years        | 70.21         | 58.95
Societal model             | Company               | 23.41         | 64.22
                           | Cooperative           | 76.59         | 35.78
Industry existence time    | Up to 5 years         | 4.26          | 17.02
                           | From 5.1 to 10 years  | 6.38          | 21.28
                           | From 10.1 to 20 years | 14.89         | 28.72
                           | Above 20 years        | 74.47         | 32.98
Localization               | Rural area            | 14.90         | 36.84
                           | Urban area            | 85.10         | 63.16
Source: Results of the research.
industries paid off their debts within 30 days. Usually, the supplier finances a considerable part of the industry's operational cycle. Although this allows the companies to enjoy a comfortable financial situation, an increase in the operational cycle without the suppliers' financial support can generate liquidity problems, forcing the company to look for resources outside the operational cycle, which results in higher costs. Competitiveness among companies is a dynamic process that requires an immediate reaction in elaborating individual strategies in the short term. In the case of the dairy industries, 12.77% of the companies considered efficient adopted the competition's price, whereas 63.83% set the price from the cost of the goods plus taxes and profit margin; for 23.40% of the industries, prices were imposed by the market (Table 10). In all these cases, the margin obtained in the commercialization of production will always depend on the existing cost structure. Thus, the technological process,
Table 8. Percent Performance of the Cost of Production, Sales and Profit of the Companies Researched in the Last Five Years. Profit performance Condition
Cost of production
Sale performance
Decreasing (%)
Inefficient
Decreasing
Decreasing Constant Increasing Decreasing Constant Increasing
2
Decreasing Constant Increasing
Constant
Increasing
Total Efficient
Decreasing Constant Increasing
Increasing Decreasing Increasing Decreasing Constant Increasing
Total
Constant (%)
Increasing (%)
Total (%)
2 0 1
1 1 0 1 4
9 0 0 4
2 1 10 2 1 10
6 4 17
0 2 16
2 0 26
9 6 59
33
26
41
100
5
5
9 5 2 9
19 5
7
33
9 2 28 9 2 49
19
21
60
100
2
Source: Results of the research.
Table 9. Percent Relationship of the Average Periods of Stocks, Customers and Suppliers in the Dairy Industry. Average period of stocks Period in days
Inefficient (%)
In cash Lower than 30 30–60 61–90 91–120 Above 120 No response
55.79 18.95 2.11 3.16 1.05 18.95
Source: Results of the research.
Average period of customers
Average period of suppliers
Efficient (%)
Inefficient (%)
Efficient (%)
Inefficient (%)
Efficient (%)
2.13 55.32 21.28 6.38 2.13
2.11 38.95 57.89
8.51 36.17 51.06 4.26
1.05 41.05 57.89
6.38 25.53 68.09
12.77
1.05
Table 10. Percentage of Industries According to the Form of Adoption of the Sale Price.

Adoption form of the sale price | Inefficient (%) | Efficient (%)
Competition price               | 22.11           | 12.77
Product cost                    | 53.68           | 63.83
Price imposed by the market     | 23.16           | 23.40
Source: Results of the research.
the commercial relationships, the taxation, and the administrative and managerial capacity applied in running the enterprise in search of better productive and economic efficiency are important factors in the analysis of its competitive pattern. The performance of business management depends on the internal decision processes. These processes take place at several hierarchical levels of the administrative structure and, depending on the implications of the decision and the targeted results, may involve the operational and strategic levels, with or without the participation of external people. Understanding the competitive environment and knowing the direction the sector is following are fundamental for making the right decisions. This requires accurate knowledge of the internal processes, and it is essential that the leaders have a good understanding of both internal practices and the external directions of the business environment. Knowledge concentrated, or even individualized, in the figure of the owner or manager therefore hampers long-run decisions, which depend on a more accurate analysis of the data or even on the construction of scenarios that are fundamental for projecting future actions. Table 11 shows that the proprietor alone decides the direction of the business in 48% of the companies considered inefficient, a common reality mainly in the small industries, where no specialized administrative structure exists to support decision-making. Although the concentration of decision-making in the entrepreneur's hands can be necessary and effective, in many cases it becomes inefficient in the face of competition and impairs the company's competitiveness. A smaller share of the companies considered efficient have the proprietor alone taking the business decisions; in this category of efficient companies, a considerable group gathers with the main executives, uses simulation tools, and consults employees and even specialists in the area in the decision process. The use of these tools reinforces the use of historical data series for the sector, making future planning safer and allowing the companies to plan further ahead and react more rapidly to changes in the sector.
Table 11. Different Forms of Carrying Out the Decision-Making Process in the Industry.

Variables                                                 | Inefficient (%) | Efficient (%)
1. Proprietor himself takes the decision                  | 48              | 33
2. Proprietor gathers with the main executives            | 16              | 30
3. Decisions are taken after meetings with the employees  | 25              | 32
4. Use of simulation tools                                | 4               | 16
5. Consultation with experts                              | —               | 4
Source: Results of the research.
Table 12. Factors Hindering the Management Process in the Industry.

Variables                                         | Inefficient (%) | Efficient (%)
Shortage of raw material                          | 13              | 4
Seasonality of raw material                       | 33              | 23
Pressure of the markets                           | 23              | 18
Unfair competition from other states' industries  | 14              | 15
Consumers' income                                 | 5               | 15
Sector informality                                | 4               | 6
Poorly structured legislation                     | 1               | 0
Pecuniary trouble                                 | 2               | 2
High interest rates                               | 2               | 0
High tax burden                                   | 4               | 4
High cost and taxes of labor                      | 1               | 2
Source: Results of the research.
According to Table 12, many factors impose difficulties on the dairy industries. Some of these factors are internal and controllable by the companies, whereas others are external and depend on the environment in which the companies operate. The shortage and seasonal availability of raw material, pressure from the supermarkets, unfair competition from industries located in other states, consumers' income, and informality in the sector were pointed out by the companies as the main problems affecting performance in the dairy sector. Despite being external to the company, all these factors carry heavier weight in the industries considered inefficient.

5. Conclusion

Regarding the organizations' survival in the face of increasing global competition, besides being conditioned by the influence of macroeconomic factors, the
management of internal decision processes is fundamentally important for obtaining positive results. Indeed, organizations' technical performance is conditioned by socioeconomic, financial, and managerial aspects. Taking this into account, 142 dairy industries in Minas Gerais State were analyzed with respect to their technical performance, using data envelopment analysis. Under the presupposition of constant returns to scale, only 10 of those dairies obtained maximum technical efficiency; these units cannot expand their revenue without the introduction of more inputs, as they are on the efficient frontier. The other dairies, however, can still expand their revenue, taking those with a technical efficiency equal to 1 as reference. Although the efficient dairies have more employees and use more raw material, their average revenue is much higher than that of the inefficient ones. It was therefore concluded that factor productivity is higher in the efficient dairies: although more production factors are used, they produce proportionally much more. In the case of fixed assets, a lower volume of capital immobilized in the productive system was observed, compared with the inefficient ones. For the inefficient dairies, the bigger problem is not an incorrect scale of production but inefficiency in the use of inputs, which means that there is proportionally more waste of inputs than there are scale problems. Only 24 dairies show pure technical inefficiency lower than 10%, whereas 100 dairies show a scale inefficiency of 10% or less. In summary, the dairy industries were distributed as follows: 7.04% — no problems; 7.04% — only problems of inadequate scale of production; 2.82% — only problems of excessive use of inputs; and 83.10% — problems of both excessive use of inputs and inadequate scale. When analyzing the economic-financial profile of these companies, it was verified that the industry existence time and the societal model adopted were important in distinguishing the companies considered efficient, whereas no such difference was found for the operational cycle. Among the efficient companies, a considerable group supports the decision-making process with simulation tools and consultations with specialists in the area, which allows more reliable planning and enables them to stay ahead and respond more quickly to the requirements of the sector. An important limitation of the study is that DEA is a relative approach, so its conclusions apply only to the companies analyzed. We therefore recommend applying this kind of study to other sectors and other countries.
Acknowledgement

We thank the Fundação de Amparo à Pesquisa de Minas Gerais (FAPEMIG) for its financial support of this research.
References

Banker, RD, A Charnes and WW Cooper (1984). Some models for estimating technical and scale inefficiencies in data envelopment analysis. Management Science, 30(9), 1078–1092.
Charnes, A, WW Cooper and E Rhodes (1978). Measuring the efficiency of decision-making units. European Journal of Operational Research, 2, 429–444.
Coelli, TJ, P Rao and GE Battese (2005). An Introduction to Efficiency and Productivity Analysis, 2nd Edn., p. 349. New York: Springer.
Confederação Nacional da Agricultura (Brasília, DF) (2003). Valor bruto da produção agropecuária brasileira: 2003. Indicadores Rurais, Brasília, 7(50), 6.
Cooper, WW, LM Seiford and J Zhu (2004). Handbook on Data Envelopment Analysis, p. 592. Norwell, MA: Kluwer Academic Publishers.
Cooper, WW, LM Seiford and K Tone (2007). Data Envelopment Analysis: A Comprehensive Text with Models, Applications, References and DEA-Solver Software, 2nd Edn., p. 490. New York: Springer.
Embrapa Gado de Leite (2007). Banco de dados econômicos. In: http://www.cnpgl.embrapa.br.
Federação da Agricultura e Pecuária do Estado de Minas Gerais (FAEMG) (2007). Indicadores do agronegócio. In: http://www.faemg.org.br.
Farrell, MJ (1957). The measurement of productive efficiency. Journal of the Royal Statistical Society, 120, 253–290.
Governo do Estado de Minas Gerais (2007). Maio de 2007. In: www.mg.gov.br.
Instituto de Desenvolvimento Integrado de Minas Gerais (INDI) (2007). A indústria de laticínios brasileira e mineira em números. In: www.indi.mg.gov.br.
Instituto Brasileiro de Geografia e Estatística (IBGE) (2005). Pesquisa Trimestral do Leite. In: http://www.ibge.gov.br.
Ray, SC (2004). Data Envelopment Analysis: Theory and Techniques for Economics and Operations Research, p. 353. Cambridge: Cambridge University Press.
Secretaria de Estado de Agricultura, Pecuária e Abastecimento (SEAPA MG) (2007). Maio de 2007. In: www.agricultura.mg.gov.br.
Biographical Notes

Luiz Antônio Abrantes, Doctor of Administration, is a Professor in the Department of Administration at the Federal University of Viçosa (UFV). His research interests include management and public policies, corporate finance, accounting and controlling, and tax management of production chains.

Adriano Provezano Gomes, Doctor in Applied Economics, is a Professor in the Department of Economics at the Federal University of Viçosa (UFV). His research interests include quantitative methods in economics, efficiency analysis models, public policies, consumer economics, and agricultural economics.

Marco Aurélio Marques Ferreira, Doctor in Applied Economics, is a Professor in the Department of Administration at the Federal University of Viçosa (UFV). His
research interests include public administration and social management, finance, efficiency and performance, and quantitative methods.

Antônio Carlos Brunozi Junior has a Master's in Administration from the Federal University of Viçosa (UFV). His academic work has concentrated on accounting and finance, public administration and accounting, and public policies related to tax management in agro-industrial chains.

Maisa Pereira Silva is a student of Administration at the Federal University of Viçosa (UFV). Her academic work has concentrated on public administration, finance, and public policies.
Chapter 10
A Neurocybernetic Theory of Social Management Systems

MASUDUL ALAM CHOUDHURY
Professor of Economics and Finance, College of Commerce and Economics, Sultan Qaboos University, Muscat, Sultanate of Oman & International Chair, Postgraduate Program in Islamic Economics and Finance, Trisakti University, Jakarta, Indonesia
[email protected]

Neurocybernetics in management theory is a new concept of learning decision systems based on the episteme of unity of knowledge. Such an episteme must be unique and universal so as to be appealing to the global community. Neo-liberalism, which is the core of present perspectives in management theory, cannot offer such a new epistemic future, because of the inherently competitive nature of the methodological individualism that grounds received management and decision-making theory. On the contrary, the episteme of unity of knowledge, on which a new and universal perspective of management and decision-making theory can be established, remains foreign to the liberal paradigm. The neurocybernetic theory of management is thus a theory of learning and unifying types of decision-making systems. It is studied here with reference to the cases of community-business unitary relations and the family. The social neurocybernetic implications of these two cases are examined in the light of neo-liberalism and Islam, according to their contrasting perspectives on the nature of the world of self and other. Out of these specific studies, the chapter derives a generalized neurocybernetic theory of social management encompassing the wider field of endogenous morality, ethics, and values within the unified process-oriented methodology of a new episteme of science and society.

Keywords: Social cybernetics; system theory; management decision making; Islam and neo-liberalism.
1. Introduction

The principal objective of this chapter is to introduce a new idea of a system and cybernetic theory of decision making. Because the epistemology of such a management theory is premised on unity of knowledge, learning and systemic unification are inherent in the theory. We will therefore refer to such a learning and unifying theory of management decision-making systems as the neurocybernetic theory of management systems. "Neurocybernetic" is meant in this chapter to convey the idea
of how the mind and learning construct a decision-making system. Such decision making is governed by preferences that are organized under a management system. Hence, a neurocybernetic theory of management is an epistemological way of understanding decision making in organizational behavior. The chapter focuses on the example of the Islamic perspective of decision making in management systems, with special attention to Islamic finance and economics.

2. Background

A perspective of a system theory of organizational behavior arises from management theory. Management theory deals with the method and art of organizational governance. It need not be driven by a commitment to abide by a given epistemology of the background organization theory. Also, in diverse social systems different management theories or methods of organizational governance can abide. One can think of some extreme cases.

2.1. Max Weber on Management Theory and the Problem of Liberalism

Weber's criticism of modern developments in organization theory qua management methods was to herald the coming age of individualism in which capitalism, bureaucracy, and rationalization of the governance method would prevail. This would kill the values of the individual through which he self-actualizes with the collective. Mommsen (1989, p. 111) writes on Weber's concern with the future development of bureaucracy and rationalization in organization theory enforced by the power of management methods; Weber feared that this growing hegemony would petrify the liberal idea: ". . . Weber was all too aware of the fact that bureaucratization and rationalization were about to undermine the liberal society of his own age. They were working towards the destruction of the very social premises on which individualist conduct was dependent. They heralded a new, bureaucratized, and collectivist society in which the individual was reduced to utter powerlessness." Weber was thus caught between the pure individualism of the liberal making and a collectivism led by self-seeking individualism formed into governance (Minogue, 1963). Weber feared that this would destroy the fabric of individualism on which liberalism was erected.
policies and conditionalities, together with the World Bank's structural adjustment, as development management practices have brought about failed futures for many countries (Singer and Ansari, 1988). The management perspectives prevailing in international development organizations are a prototype of the preferences of self-interest and methodological individualism transported to organizational behavior and enforced by global governance as an extreme form of global management. One can refer to the public-choice theoretic nature of such organizational preferences and management behavior explained by Ansari (1986). Many examples of this kind of international control and governance within global capitalism are prevalent in institutions such as the WTO, Basle II, and the regional development organizations. The latter are forced to pursue the same directions as the international development-financing institutions by design and interest. The transnational corporations too become engines for managing capitalist globalization in this order (Sklair, 2002).

3. Management Theory in the Literature

Jackson (1993) sees management systems as a designing of social reality as perceived by the principal-agent game within an organization, rather than being led by any kind of epistemological premise. Consequently, social reality is constructed in management systems theory as a perception of the agents. This allows for the contest of individual wills served by those who mold these preferences in organizations. The institutional mediums either propose or enforce the preferences of methodological individualism in society at large. In the neurocybernetic concept of organizational management theory that Jackson proposes, it can be inferred that such a systems perspective of governance serves only to deepen the methodological individualism and the competition and contest of wills that ensue from management practices. Management systems theory is thereby not necessarily premised on a learning behavior with unity of knowledge for attaining a common goal of mutually perceived social reality. Yet, a learning practice in unitary management systems remains a possibility. In the global political economy, the North has established an effective arsenal of cooperative development-financing institutions, but to the detriment of the well-being of the poor South. This is most pronounced in matters of collective military pacts, belligerence, and the institutional and technological monopoly of the North over the South. Likewise, an integrative management of a governance system can be enforced by the dominant force. The abuse of the United Nations' authority by the United States, Britain, and their alliance in matters of war and peace proved to be true in the case of the invasion of Iraq in the second Gulf War. A coercive system is one that is purely of the individualistic type. Many transnational corporation management practices in capitalist globalization, and the political management of war by force, can be categorized as coercive systems. Coercive management systems are the principal ones in today's global governance. It is also this
kind of efficient governance by force that was presented as a model of dominance and national control by Machiavelli (1966). Cummings (2006) gives an incisive coverage of this kind of hegemonic management of global governance in many areas of capitalist globalization in present times. Other forms of management systems pointed out by Jackson are the pluralistic and unitary types. Pluralistic management is a principal-agent game in which the interest of stakeholders is attained by consensus, despite the existence of diverse and opposing views on the issue under discourse, guided by the willingness of participants to coordinate and cooperate. An example of this case is industrial democracy, where management and workers can arrive at consensus on management issues despite their opposing views on particular issues. Even in the pluralistic model of management for governance within the Bretton Woods Institutions, self-interest and power-centric approaches are entrenched in the hands of the industrialized nations over the developing ones. The unitary model of management systems is based on a pre-existing agreement among participants on assigned goals and rules in institutional discourse over issues. The abidance by liberalism as the foundation of Western democracy's cultural make-up, and of its social reality, is an example that prevails over the entire mindset, guidance, and enforcement on issues under discourse in the Western institutional domain. Yet, the same unitary management system is not necessarily epistemologically sensitive to other cultural domains and social realities. The biggest conflict today is the divide between the understanding of neoliberalism as the Western belief and the Islamic Law among the Islamicists. The much-needed bridging and dialogue between the divided worlds will continue as the most significant socio-scientific issue for all peoples for all times (Sardar, 1988).
4. An Example of Management of Complementary Community-Business Relations

Figure 1 explains the interconnected dynamics between business and community, along with their social and commercial extensions. All these are understood in the framework of pervasively complementary networking according to the neurocybernetic model of social management. Participation in the productive social transformation of community and business, and thus the unification by learning between them, can be measured by the choice of cooperative development-financing instruments. In the Islamic framework of reference, such development-financing instruments are interest-free ones, such as profit-and-loss sharing (Mudarabah), equity participation (Musharakah), cost-push pricing in project valuation (Murabaha), trade financing, rental and deferred payments (Bay Muajjal), loans without interest charge (Qard Hassanah), joint ventures, co-financing, etc. The respective shares of total investment resources mobilized by these instruments give their quantitative measures.
[Figure 1 (schematic). The diagram traces the Interactive, Integrative, Evolutionary (IIE) learning process: from the fundamental epistemology of unity of knowledge (Tawhid) and its ontological formation through the Sunnah and the Islamic discursive medium, to knowledge-induced relations, cognition, and entities in diverse world-systems; evaluation of social well-being through circular causation relations measuring the degree of unity of knowledge attained by pervasive complementarities (Qur'anic pairing); Shari'ah rules developed by epistemological reference and extended ontological-ontic formalized discourse (Qur'anic Shura); community-business extended participation evaluated by complementarities; and continuity, with new learning processes co-evolving in perpetuity. This process type is repeated in continuity and across continuums by recalling the fundamental episteme at every emergent event along the IIE path.]

∗ The interconnecting variables in community-business embedded system relations are: income, employment, resources, participation, poverty alleviation and sustainability, output, profitability, share capital, number of shareholders and stakeholders (participation), financial resources, productive factors (capital, labor, and technology) and participatory instruments. More variables are added with the advance of the IIE processes on specific problems under examination.

Figure 1. A system model of circular causation interrelationships: The Islamic social management organization.
The surest and logical way of avoiding interest in all forms of productive activities compliant with the Islamic Law (Shari’ah) is to turn the economy into a participatory form, both by means of financial instruments of economic and financial cooperation and by understanding and formalizing the principle of pervasive complementarities in the socio-scientific world between the variables, instruments and agencies in action.
In this context, the complementarity between the real economy, which represents the productive transformation of the Shari'ah-compliant enterprises, and the financial economy is the medium for fully mobilizing money into the real economy through cooperative forms of financing instruments. The return on such a money-real economy interrelationship in the good things of life takes the form of the rate of return and the profit-sharing rates. Consequently, the holding of money in speculative and savings outlets that accrue interest as a reward for holding money is replaced by the returns on productive investment in the real economy. Such productive transformations and the underlying cooperative development-financing instruments are realized by the fullest mobilization of financial resources acting as currency in circulation. At the community level, the money-real economy circular linkages generate development sustainability. In the special case of the agricultural sector, which is the life-blood of community enterprise and sustainability, sustainability is further represented by the maintenance of resources in agricultural lands with their due linkages with agro-based industries and service outlets, and also with the monetary and financial sectors; industrialization of the agricultural sector must be avoided. Thus, linkages between agriculture, agro-based industries, service outlets, and money and finance establish a dynamic basic-needs regime of development. The same kinds of "participatory instruments" as mentioned for community development will exist for businesses (see the footnote to Fig. 1). Profit shares serving both business and community shareholders and stakeholders represent the profitability of projects and investments. Share capital denotes community resources. The number of shareholders and stakeholders represents the community members who participate in community-business ventures. Participation, in the sense of shareholding and stakeholding in community-business cooperative ventures, is therefore a socio-economic variable that is common to both community and business. Figure 2 brings out these interactions in the extended sense.

5. Social Well-Being Criterion in Community-Business Interrelationships

The circular causation between community and business socio-economic variables, policy variables, and development-financing instruments, with dynamic preference transformation, is realized by organic learning in IIE processes. Thereby, the social well-being function serves as the objective for evaluating the degree of unification of knowledge in community-business interrelationships and is estimated by simulation in reference to the degree of unity of knowledge attained between the two systems in terms of their selected variables.
[Figure 2 (schematic). The diagram links four interacting domains — C: Community; B: Business; D: Islamic banks and Islamic insurance; E: the economy-wide level, including expanded communities, businesses, markets, and government — through shareholding and stakeholding, consumer satisfaction and product preference, returns and profitability of ventures, economic and financial stability, real-sector and financial-sector complementarities, resource mobilization, product and risk diversification, and human resource development. Interactive, integrative, and evolutionary (IIE) learning by the Shari'ah episteme of unity of knowledge between C, B, D, and E feeds back continuously, and social well-being is evaluated by simulating the degree of complementarities gained between the selected variables through heightened ethical consciousness.]

Figure 2. The interactive, integrative and evolutionary process (IIE) of unity of knowledge between community, business, Islamic financial institutions and the economy.
The simulation processes occur across continuously evolving IIE processes. Consequently, the selected socio-economic variables establish circular causation interrelations signifying the presence or absence of complementarities between them. The circular causation relations, measured by the coefficients of the variables in the causal relations, are then corrected to attain better levels of complementarities. Such simulated corrections explain the dynamic process of community-business interrelationships.
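As a purely illustrative sketch of how such a circular causation system could be operationalized, the code below estimates each selected variable as a log-linear function of the others and reads the signs of the estimated coefficients as indicators of complementarity. This is our own minimal interpretation, assuming log-linear relations and ordinary least squares; it is not the author's estimation procedure, and the variable names and weights are placeholders.

```python
# Hedged sketch: circular causation as a set of log-linear regressions among the
# community-business variables, with positive coefficients read as complementarity.
# Toy data and names (income, participation, output, well-being weights) are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 60
data = {
    "income": rng.lognormal(3.0, 0.3, n),
    "participation": rng.lognormal(1.0, 0.2, n),
    "output": rng.lognormal(2.0, 0.25, n),
}
names = list(data)
logX = np.log(np.column_stack([data[v] for v in names]))

def circular_causation(logX, names):
    """Regress each (log) variable on all the others; return coefficient estimates."""
    n_obs, n_var = logX.shape
    coefs = {}
    for j, target in enumerate(names):
        others = [i for i in range(n_var) if i != j]
        A = np.column_stack([np.ones(n_obs), logX[:, others]])   # intercept + regressors
        beta, *_ = np.linalg.lstsq(A, logX[:, j], rcond=None)
        coefs[target] = dict(zip([names[i] for i in others], beta[1:]))
    return coefs

coefs = circular_causation(logX, names)
for target, row in coefs.items():
    signs = {k: ("complementary" if v > 0 else "substitutive") for k, v in row.items()}
    print(target, signs)

# A simple well-being index over the (simulated) variables, e.g. a weighted log sum:
weights = np.array([0.4, 0.3, 0.3])
wellbeing = float(np.mean(logX @ weights))
print("well-being index:", round(wellbeing, 3))
```

In a simulation step, coefficients signalling substitution would be nudged toward complementarity and the well-being index re-evaluated, mirroring the "simulated corrections" described above.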
Figure 2 provides a schema of how interrelatedness and risk and product diversification can be realized by the participation of all parties concerned and by complementarities between their activities. This schema presents the nature of the neurocybernetic system model of social management.

5.1. Another Example of Social Management: Integrated Decision Making in the Extended Human Family

A family is a collection of individuals bound together by blood relations, values, and fealty. They thus pursue some common well-being objectives through patterns of decision making that interconnect individual members with the head of the family and with extended families in the intergenerational sense. The relationship is circularly causal and thus strongly interactive. The values inculcated within the family are interdependent with the social structure through multiple interrelations. Decision making within the family on various issues connected with socio-economic matters involves the allocation of time according to the distribution of tasks among the members. Through the circular causation relations between the complementary parts, or their breakdown in contrary familial systems, a sense of management decision making within the family can be construed. The underlying dynamics are similar to those of the community-business relationship. Consequently, a family exhibits organizational management behavior similar to that of the community-business decision-making entities. Such organizational behavior has extensive implications for the community, markets, and the ethics of social behavior. Thus, a neurocybernetic theory of social management decision making can also be extended through the family, as an organizing social miniscule, to higher echelons of social decision making. A neurocybernetic system is thus generated.

6. The Neo-Classical Economic Theory of the Household and Its Social Impact: A Critique

In the light of the above definition, neo-classical economic theory treats the individual in relation to the family in terms of utilitarian motives. Three cases can be examined here to make a general observation on the nature of familial relationships, preference formation, and the well-being criterion in neo-classical economic perspectives.

1. Each individual in the family is seen as an individual with rights, freedoms, and privileges of his or her own. This case can be seen in children and parents who each seek their own individual well-being out of secured rights within and outside the home. Children exercise their rights to decide individually to remain independent of parents after the dependency age. The same attitude can be found in the common-law family, by virtue of an absence of legal rights binding either side to a mutual sharing of economic benefits. Such a picture of individualist attitudes
and values that transcends individual behavior to the social structure is referred to as methodological individualism (Brennan and Buchanan, 2000). We formalize the above characteristics for the individual and the family in the neo-classical economic context. Let the ith individual preference map used to preorder a set of rational choices be denoted by ≥i, i = 1, 2, …, n. Consider the three choices, A (decision not to bear children followed by increased labor force participation), B (decision in favor of both childbearing and work participation), and C (childbearing and homemaking). Individuals in a family governed by methodological individualism will likely preorder preferences as A ≥i B ≥i C. The collective preferences of the family governed by methodological individualism are spread first over the socio-economic states and second over the number of individuals i:

∪i ∪states {≥i [A, B, C]} = ∪i [≥i(A) + ≥i(B) + ≥i(C)],   i = 1, 2, …, n.        (1)
If for a larger number of individuals i, state A dominates in the preordering as shown, then ≥i(B) and ≥i(C) become decreasingly relevant preferences (irrelevant preferences in the limit; Arrow, 1951), and ≥i(A) dominates. Consequently, the social preference (≥) arising from the household is reflected in ∪i [≥i(A)] = ≥(A), say, now independent of i due to the dominance of this preference. Next, apply the ith individual's utility index in the hth household, Uih, to ≥ to yield the above form of aggregation, leading to the household utility function, Uh:

Uh = Σi Uih(≥(A)),   with Uih(≥(A)) > Uih(≥(B)) > Uih(≥(C)),

over the three states A, B, C. Hence, the utility maximization objective of the neo-classical household utility function rests simply on Uih(≥(A)). From this level, the social welfare function, in which the family is a social miniscule, is given by U(A):

U(A) = Σh Uh = Σh Σi Uih(≥(A)).        (2)
Corresponding to rational choice, A causes continuous substitution of the variables characterizing A over those characterizing B and C for individuals, households, and society, since preferences are now replicated in additive fashion.

2. In the second case, we consider the possibility of distributed choices between A, B, and C. Now, the household members' behavior as described above results in a social utility function of the type shown in Eq. (3). The formal steps towards establishing it have been skipped, but the implications are important to note:

U(A) = Σh Uh = Σh Σi Uih(≥(A), ≥(B), ≥(C)).        (3)
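As a purely illustrative aid — our own toy construction, not part of the chapter — the short sketch below carries out the additive aggregation of Eqs. (1)–(3) numerically: individual utility indexes over the states A, B, and C are summed within households and then across households, so that when most members preorder A first, the aggregate choice collapses onto A and the remaining states become effectively irrelevant.

```python
# Toy illustration (assumed numbers) of the additive aggregation in Eqs. (1)-(3).
# Each member's utility index over states A, B, C reflects the preorder A >= B >= C.
households = {
    "h1": {"member1": {"A": 3.0, "B": 2.0, "C": 1.0},
           "member2": {"A": 2.5, "B": 2.0, "C": 1.5}},
    "h2": {"member1": {"A": 4.0, "B": 1.0, "C": 0.5},
           "member2": {"A": 3.5, "B": 3.0, "C": 1.0}},
}
states = ["A", "B", "C"]

# Household utility: U_h(state) = sum over members i of U_ih(state)
household_utility = {
    h: {s: sum(member[s] for member in members.values()) for s in states}
    for h, members in households.items()
}

# Social welfare: U(state) = sum over households of U_h(state), as in Eqs. (2)-(3)
social_utility = {s: sum(household_utility[h][s] for h in households) for s in states}

chosen = max(social_utility, key=social_utility.get)
print(social_utility)             # {'A': 13.0, 'B': 8.0, 'C': 4.0}
print("dominant state:", chosen)  # A dominates, so B and C become irrelevant in the limit
```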
Each of the states A, B, and C is determined by its own bundle of goods and services serving individual needs within the household. For instance, A can be characterized by work participation, B by daycare, and C by home-cared goods. These goods exist
as substitutes of each other either taken individually or in groups. For instance, A can combine with B in the form of cost-effective daycare. The bundles of goods work in participation with daycare in the choice (A, B), and thereby substitute C. Socially, this choice is made to reflect the needs of A and B and to formulate both market goods and institutional policies that promote A over B over C, or (A, B) over C, as the case may be. 3. Resource allocation over the alternatives A, B, and C requires time and income. The allocation of income and time over such activities forms the budget constraint for utility maximization in the above two cases. We formalize such resource allocation as follows: Let total household time be allocated to leisure (childbearing, c) and works (productive activity, w). The cost for acquiring c is Cc ; the cost for acquiring w is Cw . T is given by the following expression: T = tc + tw Income constraint is, I = tc · Cc + tw · Cw The household utility maximization problem is now stated as: Max Uh (tc , tw ) = i Uih (A, B, C)
(4)
Subject to,
I = tc · Cc + tw · Cw = Σi (tci · Cci + twi · Cwi),
T = tc + tw = Σi (tci + twi).
We have now two versions of the above household maximization problem. They together have important underlying implications. First, we note that household and social preferences are social replicas of individual preferences, values, and attitudes toward the family. The individual utility indexes, and thereby the household utility function and the social welfare function, are each based on competing attitudes towards goods distributed among the substitutes A, B, and C in the sense mentioned earlier. Second, preferences are uniformly competing, preordered, and individualistic in type. 4. In Eq. (4), the household utility function and the social welfare function convey all the utilitarian constructs given by Becker (1981), as follows. (a) The household utility function is based on marginal substitution between children as leisure and market goods. The utility function is of the form given in Eq. (1). (b) The number of children and the quality of children experience a tradeoff in the utility function with quality included. (c) Children's utility and the consumption of parents are substitutes, as expressed by the utility forms in Eq. (1) or (3). (d) In the utility function of the head of the family with multiple children's goods, the head of the family needs more income to augment a gift to children and wife in such a way that there is compensation between other members so as to
keep a sense of fairness in the income distribution between members and also spending on himself. The utility function is of the form given in Eq. (4) with the addition of the cost of the gift. Time allocated to generating income for gifts is usually added to, or treated similarly to, tw. The assumption of marginal rates of substitution between goods for children is that cheating by children increases the cost to the head of the family by the amount of additional income required for gifts. (e) The utility function in Eq. (4) can be taken up separately for husband and wife to explain Becker's theory of marriage and divorce (Becker, 1974, 1989). If the gift (dowry) given by the wife to the husband is deducted from the wife's income in marriage, and the net income of the wife while married exceeds the family income if divorced, the decision of the wife is to remain in marriage. The same argument is extended to the husband's side. In all cases, we find that the specific nature of methodological individualism, preordered preferences, competition, and the marginal substitution property of every utility function causes a hedonistic household and a society of individuals as cold calculators. The laterally and independently aggregated preferences of household members are continued intergenerationally to form an extension of the above formalization to this latter case. The socio-economic character premised on the intergenerational family preferences acts as a catalyst for its continuity. The postulate of preordering of preferences as datum in decision-making leaves the system in a dissociated form of collective individualism. This is a social organism contrary to management decision-making. Besides, the linearity of the system breaks down the richly complex system of social decision making into dissociated parts. A linear mind of this kind cannot address the richly complex nature of problems encountered in the social organism with the family as a neurocybernetic system. 7. Preference Behavior of the Islamic Household and Its Socio-Economic Impact In contrast, the Islamic way of life, attitude, motivation, and thus preferences are centrally guided by the principle of unity of knowledge. That is, in this system, knowledge is derived from the divine text that forms, guides, and sustains behavior. The guidance takes the form of mobilizing certain instruments as recommended by the Islamic Law (Shari'ah), which establish unity of knowledge as a participatory and cooperative conduct of decision making at all levels. The instruments used assist in such participatory and cooperative decision making while they phase out the instruments of self-interest, individualism, competition, and methodological independence between the partners. The family as a social unit now becomes a strong source for the realization of the participatory decision-making emanating from knowledge-induced preference formation. As in every other area of human involvement, the family forms its preferences by interaction leading to consensus involving family members. The result of interaction is consensus (integration) based on discourse and participation within
and across members and the socio-economic order. Such an interactively developed integration is the idea of the systemic meaning of unity of knowledge. It conveys the idea of neurocybernetic decision making in a social management organization. Third, the enhancement of knowledge by interaction and integration is followed by evolutionary knowledge. The three phases of IIE continue over processes of knowledge formation. The socio-economic order caused and sustained by such systemic IIE dynamics responds in a similar way. We refer to such a discursive process as being premised on unity of knowledge. Its epistemology and application by appropriate instruments are premised on the law of oneness of God (or unity of divine knowledge). An example of the Islamic familial attitude is respect between young and old, and between husband and wife (wives), intergenerationally speaking. In this system, the Qur'an says that men and women are co-operators with each other and children form a social bond ordained by God (Qur'an 7:189–90). Within this familial relationship there exists the spirit of discourse and understanding enabling effective decision making to continue. The participatory experience is realized in such a case through the Islamic medium of participation and consultation called the Shura. The Shuratic process of decision making is methodologically identical with the IIE process. It is of the nature of the management dynamics dealt with in the case of community-business circular causation interrelationships. The IIE process being a nexus of co-evolutionary movements in unity of knowledge, it spans over space (the socio-economic order) and time (intergenerational). In the socio-economic extension, the IIE process applies to matters of individual preferences, freedom of choice, the participatory production environment, the appropriateness of work participation, distribution of wealth, caring for orphans, trusts, inheritance, contractual obligations, marriage, divorce, and the social consequences of goodness and unethical conduct. The socio-economic variables are thus activated by the induction of the moral and ethical values premised on unity of knowledge as the relational epistemology and realized by appropriate participatory instruments that enable social co-determination, voluntary conduct, attitudes, contracts, and obligations. Both the episteme and the instruments of application of unity of knowledge emanate from the law of divine oneness, now understood in the systems sense of complementarities and participation caused by circular causation interrelations. 7.1. Preference Formation in the Islamic Family Let {≥j,k,h}i = {≥j,h ∩ ≥k,h}i denote the interactive preferences of the jth (kth) individual in the hth household, j, k = 1, 2, . . . , n; h = 1, 2, . . . , m; i denotes the number of inter-member interactions on given issues. Let the jth (kth) preference be that of a specific (hth) head-of-the-family. The household preference,
≥h = limi[{∪j,k ≥j,k,h}i] = limi[{∪j,k {≥j,h ∩ ≥k,h}}i],
is the mathematical union of the above individual preference maps over (j, k) for i interactions.
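As a schematic illustration of this interaction-integration step (a construction of this exposition only, with hypothetical members and states), the household relation can be sketched as the union over member pairs of their pairwise agreements:

from itertools import combinations

# Each member's preference relation >=(j,h) over the states A, B, C is a set of
# ordered pairs (x, y) meaning "x is weakly preferred to y"; values are hypothetical.
members = {
    "father": {("A", "B"), ("B", "C"), ("A", "C")},
    "mother": {("A", "B"), ("A", "C"), ("C", "B")},
    "child":  {("B", "A"), ("A", "C"), ("B", "C")},
}

def interact_once(prefs):
    # One round i: pairwise integration (intersection of >=(j,h) and >=(k,h))
    # followed by interaction (union over all pairs j, k).
    agreed = set()
    for j, k in combinations(prefs, 2):
        agreed |= prefs[j] & prefs[k]
    return agreed

# Repeating such rounds, with members revising toward the agreed pairs,
# imitates the limit over i that defines the household preference.
print(sorted(interact_once(members)))   # e.g. ('A', 'C') is agreed by every pair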
The social preference ≥ equals the aggregation of h-household preferences:
≥ = ∪h ≥h = ∪h limi[{∪j,k ≥j,k,h}i] = ∪h limi[{∪j,k {≥j,h ∩ ≥k,h}}i].
This expression shows that social preferences are formed by interaction (shown by ∪j,k) and integration (shown by limi{∩j,k(·)}) in given rounds of family discourse (i) on issues of common interest. Because interaction leading to integration causes the formation of knowledge in the Islamic family, we will denote such knowledge formation by θh^i, for the hth household and i the number of interactions. The index i takes up increasing sequential numbers as interaction and integration proceed into evolutionary phases of discourse. The limiting value of knowledge-flows over a given process of IIE may be denoted by θh^i∗. The limiting social value of knowledge-flows in terms of interaction over many goods and services that are shared in the market and ethically determined by IIE-type preference formation across households is denoted by θ^i∗. Sztompka (1991) refers to such an evolutionary social experience as social becoming. As i increases (numbered processes), a case typically encountered when more members of the extended family are involved in a household decision making, an evolutionary phenomenon is experienced. This completes the IIE pattern over many processes. This is the intergenerational implication of extension of the IIE processes over space (socio-economics) and time (intergenerational). 8. A Social Management Model of the Family as a Social Miniscule The family as a social unit is now defined by the collection of all households deciding in the IIE process over given socio-economic issues. Let such socio-economic issues for the hth household with 1, 2, . . . members and a given head of the family be denoted by xh^i = {x1h, x2h, . . . , xjh, . . .}^i. Let ch^i = {c1h, c2h, . . . , cjh, . . .}^i denote the unit cost of acquiring xh^i. The influence of interaction is denoted by the presence of "i". Thereby, the household spending on acquiring its bundle of goods is given by
ch^i · xh^i = Σk (xkh · ckh)^i = Sph^i,
where Sph^i denotes the total spending of the k-members of the hth household over a given series of interactions i. Spending on the good things of life is highly encouraged in the Qur'an as opposed to saving and hoarding as withdrawal from the social economy. The family members' (j, k) interactive attainment of wellbeing in the h-household is given by:
Wjk^i(θh^i∗, Spkh^i(θh^i∗))[limi[{∪j,k {≥j,h ∩ ≥k,h}}^i]]
The bracketed term [·] throughout the chapter means the implied induction of this constituent term on all the variables, relations, and functions. The management simulation problem for the interacting (j, k)-individual over i-interaction for a given h-household is given by:
Simulate{θh^i∗} Wjk^i(θh^i∗, Spkh^i(θh^i∗))[limi[{∪j,k {≥j,h ∩ ≥k,h}}^i]]
(5)
Subject to,
θh^i∗ = f1(θh^i∗−, Spkh^i; Wjk^i)[limi[{∪j,k {≥j,h ∩ ≥k,h}}^i]],
"−" denoting a one-process lag in the IIE processes, given a simulated value attained in any ith process.
Spkh^i = f2(θh^i∗; Wjk^i)[limi[{∪j,k {≥j,h ∩ ≥k,h}}^i]] is the spending of the kth individual in the hth household. After taking the union of all relations concerning k-individuals in h-household management, simulation of the total h-household members' wellbeing function is given by:
Simulate{θh^i∗} Wh^i(θh^i∗, Sph^i(θh^i∗))[≥h]
(6)
Subject to,
θh^i∗ = f1(θh^i∗, Sph^i; Wh^i)[≥h],
Sph^i = f2(θh^i∗; Wh^i)[≥h].
Clearly, Eq. (6) is derived by union of every part of Eq. (5) over all h-household individuals. By a further mathematical union of every part of Eq. (6) over all households, we obtain the simulation problem of the social well-being function in this collective social organism,
Simulate{θ^i∗} W^i(θ^i∗, Sp^i(θ^i∗))[≥]
(7)
Subject to,
θ^i∗ = f1(θ^i∗, Sp^i; W^i)[≥],
Sp^i = f2(θ^i∗; W^i)[≥].
Since θ values are central to the simulation problem, learning is extended over space and time to embrace such intergenerational knowledge-flows and the corresponding knowledge-induced variables. The ethical and moral preferences of intergenerational members of the family thus remain intact in order to sustain the effectiveness of the IIE process in concert with the intergenerational family and the socio-economic order. The richly complex nature of flows across consensual decision making in the family nexus reflects the neurocybernetics of family management decision-making behavior in the sense of the IIE process. 9. Refinements in the Household Social Wellbeing Function In participatory decision making, the role of the revered and learned principal is central. The head is expected to be a person endowed with knowledge and integrity in the Islamic Law. The guidance of the head in decision making is respected. It is instrumental in guiding discourse and decision making among members. The Islamic family members are required to respect, but not to follow, the injunctions of
the head of the family in case such a head-of-the-family decision is contrary to the Islamic Law. Given the head (H) of the family's well-being function, WH^Ni∗, as a reference for household decision making, the new simulation problem derived in the manner of Eq. (7) takes the form:
Simulate{θh^i∗} SW(·) = Wh^i(θh^i∗, Sph^i(θh^i∗))[≥h] + λ(θh^i∗) · {WH^Ni∗ − Wh^i(θh^i∗, Sph^i(θh^i∗))[≥h]}
= (1 − λ(θh^i∗)) · Wh^i(θh^i∗, Sph^i(θh^i∗))[≥h] + λ(θh^i∗) · WH^Ni∗
(8)
Subject to,
θh^i∗ = f1(θh^i∗, Sph^i; Wh^i)[≥h],
Sph^i = f2(θh^i∗; Wh^i)[≥h].
WH^Ni∗ is an assigned level of the head's perception of the wellbeing function for the family. It assumes a form, explicit or implicit, through the household IIE process after Ni rounds of discourse. Thus, {WH^Ni∗ − Wh^i(θh^i∗, Sph^i(θh^i∗))[≥h]} is an adaptive constraint. λ(θh^i∗), with 0 < λ(θh^i∗) < 1, explains simulative knowledge-induced shifts in the wellbeing index as an attribute of knowledge-induction over IIE processes. Evaluation of the resultant social wellbeing function determines the ethical transformation of the socio-economic order caused by the Islamic choices of goods and services at the household level. The result can be generalized across generations. An extended neurocybernetic perspective has thus been conveyed to a managerial kind of guided decision making in the extended family in terms of its widening organic relations. Simple manipulation of Eq. (8) yields dSW/dθh^i∗ > 0, with all the terms resulting from the differentiation being positive. The magnitude of the positive sign will be determined by the sign of [WH^Ni∗(..) − Wh^i∗(..)]. If this term is positive, the positive value of dSW/dθh^i∗ will be higher than the positive value of the same if the term is negative. This means that the effective guidance, governance, and caring attitude of the head of the intergenerational family over the members are pre-conditions for the well-being of the family. In turn, such attitudes of all households determine the increased level of social well-being. We note that an increase in WH^Ni∗(..) due to a gain of knowledge derived from organic complementarities within and across family decision making in concert with the socio-economic order must remain higher than the similar gain in Wh^i∗(..). This marks the continuity of the patriarchal family and the caring function of the principal. The conviction regarding the positive role of spending on the good things of life in social wellbeing is a basis of the principal's motivation concerning family members' well-being.
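A toy numerical sketch of this simulation, with every functional form and parameter chosen purely for illustration rather than taken from the model above, shows how the blended wellbeing of Eq. (8) can be traced over successive IIE processes as knowledge-flows and spending co-evolve:

def W_h(theta, spending):
    return theta * spending            # household wellbeing, assumed form

def W_H(theta):
    return 1.2 * theta                 # head's reference level, assumed form

def lam(theta):
    return theta / (1.0 + theta)       # knowledge-induced weight, 0 < lam < 1

theta, spending = 0.5, 1.0
for i in range(5):                     # five IIE processes
    sw = (1 - lam(theta)) * W_h(theta, spending) + lam(theta) * W_H(theta)
    # circular causation (assumed forms): spending and theta respond to wellbeing
    spending = 0.9 * spending + 0.2 * sw
    theta += 0.1 * sw                  # knowledge-flows rise with attained wellbeing
    print(f"process {i}: theta={theta:.3f}, SW={sw:.3f}")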
Sustainability of such a family-socio-economic response needs to be continued intergenerationally. 9.1. Important Properties of the Simulation Models of Individual-Family-Market Relations: Widening Social Management Neurocybernetic We note a few important properties of the above simulation systems. First, continuous sequencing of IIE-phases explains the dynamic creation of knowledge. Second, the creative evolution of knowledge-flows is determined by behavioral aspects of the model as explained by IIE-type preference formation. Third, the IIE-nature of individual preferences transmits the same characteristics to the socioeconomic variables through household preferences. Hence, the household is seen to be a richly endowed complex social unit. Fourth, the aggregation of the social well-being index from the individual and household levels to the social level is nonlinear, conveying the neurocybernetic feature. Likewise, the simulation constraints are nonlinear. That is because of the continuous knowledge-induction caused by complementarities between diverse variables. Besides, the functional coefficients are knowledge-induced causing shifts in the wellbeing function and the constraints over learning across IIE processes. 9.2. Inferences from the Contrasting Paradigms The neo-classical and Islamic socio-economics of the family give contrasting paradigms and behavioral results. Neo-classical economics is built on hedonic preferences. Methodological individualism starts from the behavioral premise that is essentially formed in the household. The variety of households in neo-classical socio-economics manifests intensifying individualistic views by the very epistemological basis of economic and social reasoning. Consequently, the family as a social miniscule also transmits the same nature of preferences and individualism to the socio-economic order. The meaning of interaction is mentioned without a substantive content in preference formation. A substantive methodology of participatory process in decision making is absent in neo-classical economics. Thereby, individualism, linearity, and failure in organic learning in the family system make the neo-classical family devoid of the richly complex management nature of social neurocybernetics. 9.2.1. The neo-classical case The institutional and policy implications of the above behavioral consequences of the neo-classical family on society are many. Individual rights are principally protected over the rights of the family as an organism. An example is the right of the 18 year-old to date partners over the right of the family to stop him/her from
doing so. Legal tenets are drawn up to protect the individual rights in this case. In the market venue, the dating clubs and databanks flourish to induce the activity of teen dating. Likewise, such markets that support effective dating activities polish the individual’s sensual preferences. These kinds of goods result in segmentation between ethical goods desired by conservatism and individual preferences. Thus, individual preferences are extended socially. 9.2.2. The Islamic case The IIE-process nature of decision making in the Islamic family relegates individual rights based on self-interest to family guidance against unethical issues. In all ethical issues, the collective will of the members guides and molds the preferences of the individual members according to the Shari’ah rules. Such rules are inspired within the family discourse by the principal. Individual preferences on dating are replaced by early marriage, which is recommended in Islam. Marriage becomes a moral and social relationship on legal, economic and political grounds, and thus is a unifying social force. Consequently, goods and services as common benefits replace competing markets. The legal tenets of the Shari’ah prohibit unethical and immoral goods to be consumed, produced, exchanged, and traded. Ethical consequentialism of the market place is good for all (Sen, 1985). Hence, such goods mobilize the spending power of the household in the economy through individuals who are established in the unified family environment for realizing the greatest degree of economic growth, productivity, stability, and prosperity. Unethical markets are costly because of their price-discriminating behavior in differentiated markets. Market segmentation is thus deepened. 10. The Head of the Family and Islamic Intergenerational (Grandfathers and Grandchildren) Preference Effects on Household Well-being and Its Socio-Economic Effects Intergenerational generalization of familial decision making along the IIE-process model is tied to the intergenerational extension of {θ, x(θ)} values. Note that time in the intertemporal framework now enters the analysis merely as datum to record the nature of co-evolution of {θ, x(θ)} values. The substantive effect on unitary decision making is caused by knowledge-flows toward attaining simulated values of W(θ, x(θ)). In the intergenerational nexus of the IIE-process methodology, W(θ, x(θ)) acts as a measure to evaluate the attained levels of unity of knowledge spatially (family and socio-economics) and intergenerationally as well. In other words, in the intergenerational familial decision-making model according to the IIE process the important point to observe is the generation-to-generation (i.e., process-to-process) continuity of the responsible and integrated behavior in the IIE model. A long haul of intertemporal simulation is thus replaced by sequential simulation on a learning-by-doing basis across IIE processes.
With regard to the intergenerational continuity of the Islamic family (grandchildren relations), the Qur'an declares (52:21): "And those who believe and whose families follow them in Faith, – to them shall We join their families: nor shall We deprive them (of the fruits) of aught of their works: (Yet) is each individual in pledge for his deeds." The exegesis of this verse is that ethical bonds enhance intergenerational family ties as the essence of unity of IIE-type preferences guided by the divine law. Furthermore, in such learning processes, the individual's moral capacity interacts with the familial and socio-economic structures. Contrarily, the Qur'anic edict also points out the consequences of the breakdown of familial ties. The Qur'an establishes this rule in reference to the wife of Prophet Lot (11:81–82) and the wife and son of Prophet Noah (11:45–46; 66:10). They were lewd persons and therefore barred from Islamic family communion. Contrarily, even though Pharaoh was the arch enemy of God, yet Pharaoh's wife was of the truthful. Thereby, she was enjoined with the family and community of believing generations. The same is true of the blessed Mary (Qur'an, 66:11–12). In a formal sense, we now drop the suffixes in Eq. (8) and generalize it for both intra- and inter-generational cases. Equation (8) can easily be symbolized for individuals, households, and heads for j-generations by a further extension by the j-subscript. The method of derivation is similar to Eq. (8). Consider now the following differentiation in respect of the space-time extension of θ values:
dSW/dθ > 0 ⇒ (1 − λ)(dWh/dθ) + λ(dWH/dθ) + (WH − Wh)(dλ/dθ) > 0. (9)
Since (1 − λ)(dWh/dθ) > 0 and λ(dWH/dθ) > 0 due to the monotonic θ effect, the degree of positive value of Eq. (9) is determined by the sign of (WH − Wh), with (dλ/dθ) > 0 as a shift effect in social wellbeing. If (WH − Wh) > 0, the higher will be the positive value of dSW/dθ. Furthermore, from (WH − Wh) we obtain:
(dWH/dθ − dWh/dθ) = (∂WH/∂θ − ∂Wh/∂θ) + [(∂WH/∂Sp) − (∂Wh/∂Sp)] · (dSp/dθ).
(10)
The sign of Eq. (10) can be positive or negative. In the case of a positive sign, we infer that the perception of the heads of the intergenerational families on wellbeing increases more than the members' wellbeing function as knowledge increases in the intergenerational family nexus. Thus, the cumulative result of knowledge, communion, and co-evolution is repeated intergenerationally, that is, from grandfathers to grandchildren. Likewise, socio-economic consequences are similarly co-evolved. Such an ethical function allows the principal to continue on as the acclaimed head, Amir. Furthermore, since Sp(θ) is a positive function of θ in view of the ethics of the Qur'an that encourage spending on the good things of life, but in moderation, it follows that [(∂WH/∂Sp) − (∂Wh/∂Sp)] > 0 due to the effect of increases in θ values on the heads' higher perception of family wellbeing intergenerationally.
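The differentiation step behind Eq. (9) can also be checked symbolically. The short SymPy verification below is an aid of this exposition only; it assumes nothing beyond SW = (1 − λ(θ)) · Wh(θ) + λ(θ) · WH(θ), that is, Eq. (8) with the spending argument suppressed.

import sympy as sp

theta = sp.symbols('theta')
W_h, W_H, lam = sp.Function('W_h'), sp.Function('W_H'), sp.Function('lam')

SW = (1 - lam(theta)) * W_h(theta) + lam(theta) * W_H(theta)
lhs = sp.diff(SW, theta)
rhs = ((1 - lam(theta)) * sp.diff(W_h(theta), theta)
       + lam(theta) * sp.diff(W_H(theta), theta)
       + (W_H(theta) - W_h(theta)) * sp.diff(lam(theta), theta))

print(sp.simplify(lhs - rhs))   # prints 0: the decomposition in Eq. (9) holds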
Consequently, (dWH/dθ − dWh/dθ) > 0. But also (WH − Wh) > 0 on the basis of the intergenerational wellbeing role of the family heads. From these two relations we obtain the expression,
WH = a · Wh^b,
a, b > 1.
(11)
a, b are functions of θ and λ(θ), and thereby cause shifts in Wh and WH as the intergenerational IIE processes deepen in family-socio-economic circular causation interrelations. 11. Up Winding: From the Specific to the General Neurocybernetic Model of Social Management The examples of community-business-economy extensive relationships and the intergenerational richly complex relations in the extended family along with its social and economy-wide effects are examples of social systems that learn, but only under the episteme of unity of knowledge. There is no other way how these systems can learn. That is, neither the disequilibrium perturbations along evolutionary epistemology, which are of the nature of social Darwinism, nor the methodological individualism of neo-liberalism can establish learning behavior. Such disequilibrium learning behaviors are of conflicting and non-cooperative types. Social meaning cannot be derived from such continuous perturbations. Though optimality is never a feature of the neurocybernetic learning model, yet learning equilibriums do explain purposeful social actions and responses. This is the feature of learning under unity of knowledge. While it is here exemplified by business-community-economy interrelationships and the evolutionary learning dynamics of the coherent family across generations, the inherent model of social management in this neurocybernetic sense of rich complexity but with social cohesion and order is applicable to the widest range of social and scientific problems. Neurocybernetic learning of social management model thus renders a new vision of orderly process of intercivilization and global discourse. When taken up at the core scientific level, the same methodology establishes a new way of understanding the scientific phenomenon. It points out the inexorable centricity of morality, ethics, purpose, and values existing as simulated endogenous elements in the scientific constructs of ideas. A neurocybernetic theory of social management thus conveys a reconceptualization of the socio-scientific domain in our age of post-modernism in which science is increasingly becoming a study of process and social becoming (Prigogine, 1980). It is also a thoroughly empirical and positive exercise towards social reconstruction. The combination of substantive reconceptualization and empiricism involving morality, ethics, and values endogenously in neurocybernetic models of social management together convey the episteme of the new scientific method (Choudhury and Hossain, 2007). The empirical project within the scientific research program of neurocybernetic theory of social management is not a crass number-crunching exercise. Rather, it is
one that combines deep analytical reason and selection of appropriate models that simulate knowledge of unity between all good things of life. In that perspective, the selected models and methods of emergent empirical analysis are subjected to the appropriateness of the background episteme of unity of knowledge between everything that make moral and ethical sense in human choices. The negation of this in the realm of individualism, as pointed out in this chapter also share unity between them. But this phenomenon can be shown to form bundles of independently evolving entities in their own linearly dissociated spaces. That is, eventually the long-run Darwinian tree of genesis of rationalistic life breaks up into atomistic competing and annihilating point-wise organisms. They create infinitely more replicas of such competing and annihilating organic entities. Social biological atomism is the ultimate destiny (Dawkins, 2006). In the empirical domain, selected methods that can be used to explain the evolutionary unified dynamics of the neurocybernetic theory of social management can be comprised within the broad and extended field of computational complexity (Gregersen, www.metanexus.net/tarp, 1998). Of particular methods that can be used to combine the concept and empiricism of the learning field idea of neurocybernetic theory of social management are Complex Adaptive Systems (CAS) and Autopoietic System (APS) (Gregersen, 1998; Rasch and Wolfe, 2000). In both of these methods, learning between systemic entities is essential. But the difference between them is that while CAS is sensitized by the external environmental synergy, APS extends the sensitized effects to influence organic activity in the embedded systemic entities. Organism interactions are thus broadened up. In the neurocybernetic theory of social management, the focus is not on chaos and disorder between interactions. These are considered as social disequilibrium that happen because of failure of entities to pair, cooperate, and complement continuously and across continuums of space, time, and knowledge. When such disequilibrium disorder or their endogenous bundle of separable movements, as in the dissociated nature of methodological individualism and statistical independence at points of optimality happen, then extended social disequilibrium happens as well. In the extended sense, such social disequilibrium between the organic interactions causes the same kind of social impetus in the socio-scientific universe around and within local systems. The only exception is to abide in its own self-contained character of isolation caused by methodological individualism. The episteme of unity of knowledge is abandoned. Such disequilibrium models are then simulated to attain semblances of evolutionary equilibrium by the learning systemic dynamics. This transformation process is conveyed by the simulation system of circular causation relations between the entities, variables, relations, and sub-systems. Thus, the neurocybernetic theory of social management transcends sheer computation into social reconstructions according to the balances of pairing between cooperating entities by means of pervasive complementarities between them (Luhmann, N translated by Bednarz and Baecker, 1995). These are issues of transformation and choices in the domain of institutional structures, policy, and the
“wider field of social valuation”. The mathematical complementation of the unifying experience is methodological individualism, which is triggered by rationalistic behavior. In this chapter, the two cases studied are embodied with complexity within them and in the extensive sense of learning by widening complementary pairing under the guidance of laws and social behavior. Correction of experiences other than the unifying one, as in the case of neo-liberalism, is implemented to attain desired reconstructed social realities in accordance with the episteme of unity of knowledge. Community-business relations experience IIE processes within and across them to expand into economy and the global order with the necessary attenuating transformations and social reconstructions. The extended family relations extend the internal learning dynamics of the Islamic family preferences over space, time, and knowledge domains. The socially widening consequences are thereby extensive. They are felt and organized to perpetuate the episteme of unity of knowledge and the emergent world-system in the light of the Islamic epistemological practices. 12. Conclusion The absence of a unique and universal epistemological premise in social management theory, one that can guide and be beneficial and acceptable to most of humanity, remains the basis of global disorder in guidance and governance. The urgency in closing up this gap with mutual understanding would be the proper direction for developing a neurocybernetic social management model for the common wellbeing of all (Choudhury, 1996). The quest for this universal and unique epistemological worldview must be both serious and reasoned. The formalism of this chapter explains that IIE-type preferences formed in the midst of community-business relations and the family with its extended socioeconomic linkages have important circular causal meaning. Such relations in systemic unity of knowledge play a significant role in the establishment of appropriate markets rather than leaving market forces to self-interest and consumer sovereignty. The triangular relationship among individuals, the household, and the socio-economic order is continuously renewed and reproduced, giving evolutionary momentum to each of the agencies in this kind of circular causation in both intraand inter-generations. It is the same for the community-business dynamics. Hence, a universal neurocybernetic model of social management is configured to explain richly complex social phenomena taking the episteme of unity of knowledge as a phenomenon of continuously learning systems. In such complex organisms, morality, ethics, and values remain embedded. They are not numinous entities of systems. They are as much real and measurable as is any cognitive socio-scientific variable. In the neurocybernetic theory of social management systems with the universal epistemological model of unity of knowledge, the systemic endogeneity of morality, ethics, and values can be a distinct
way of formalizing, measuring, and implementing the interactively integrated and dynamic roles of these human imponderables in socio-scientific decision making (Choudhury, 1995). We surmise that on this kind of intellectual thought and its scientific viability rests the future of human wellbeing and socio-scientific global sustainability.
Acknowledgements
This paper was written during the author's research leave to Trisakti University, Jakarta between May and June, 2008. The author thanks Sultan Qaboos University Postgraduate Studies and Research Department and the College of Commerce and Economics for providing this opportunity.
References
Ansari, J (1986). The nature of international economic organizations. In Political Economy of International Economic Organizations, Ansari, J (ed.), 3–32. Boulder, CO: Rienner.
Arrow, KJ (1951). Social Choice and Individual Values. New York, NY: John Wiley & Sons.
Becker, GS (1974). A theory of marriage. Journal of Political Economy II, 82(2), S11–26.
Becker, GS (1981). Treatise on the Family. Cambridge, Massachusetts: Harvard University Press.
Becker, GS (1989). Family. In The New Palgrave: Social Economics, J Eatwell, M Milgate and P Newman (eds.), 64–76. New York, NY: W.W. Norton.
Brennan, G and J Buchanan (2000). Modeling the individual for constitutional analysis. In The Reason of Rules, Constitutional Political Economy, Brennan, G and J Buchanan (eds.), 53–74. Indianapolis, IN: Liberty Fund.
Choudhury, MA (1995). A mathematical formalization of the principle of ethical endogeneity. Kybernetes: International Journal of Systems and Cybernetics, 24(5), 11–30.
Choudhury, MA (1996). A theory of social systems: Family and ecology as examples. Kybernetes: International Journal of Systems and Cybernetics, 25(5), 21–38.
Choudhury, MA and MS Hossain (2007). Computing Reality. Tokyo, Japan: Blue Ocean Press for Aoishima Research Institute.
Cummings, JF (2006). How to Rule the World, Lessons in Conquest for the Modern Prince. Tokyo, Japan: Blue Ocean Press, Aoishima Research Institute.
Dawkins, R (2006). The God Delusion. London, England: Transworld Publishers.
Gregersen, NH (1998). Competitive dynamics and cultural evolution of religion and God concepts. www.metanexus.net/tarp
Gregersen, NH (1998). The idea of creation and the theory of autopoietic processes. Zygon: Journal of Religion & Science, 33(3), 333–367.
International Monetary Fund (1995). Our Global Neighbourhood. New York, NY: Oxford University Press.
Jackson, MC (1993). Systems Methodology for the Management Systems. New York, NY: Plenum Press.
Luhmann, N (translated by J Bednarz Jr and D Baecker) (1995). Social Systems. Stanford, CA: Stanford University Press.
Machiavelli, N (translated by D Donno) (1966). The Prince. New York, NY: Bantam Books.
Minogue, K (1963). The Liberal Mind. Indianapolis, IN: Liberty Fund.
Mommsen, WJ (1989). Max Weber on bureaucracy and bureaucratization: Threat to liberty and instrument of creative action. In The Political and Social Theory of Max Weber, Mommsen, WJ (ed.), 109–120. Chicago, Illinois: The University of Chicago Press.
Prigogine, I (1980). From Being to Becoming. San Francisco, California: W.H. Freeman.
Rasch, W and C Wolfe (2000). Observing Complexity: Systems Theory and Postmodernity. Minneapolis, Minnesota: University of Minnesota Press.
Sardar, Z (1988). Islamic Futures, the Shape of Things to Come. Kuala Lumpur, Malaysia: Pelanduk Publications.
Sen, A (1985). The moral standing of the market. In Ethics & Economics, EF Paul, FD Miller Jr and J Paul (eds.). Oxford, England: Basil Blackwell.
Singer, H and JA Ansari (1988). The international financial system and the developing countries. In Rich and Poor Countries, Singer, H and JA Ansari (eds.), 269–285. London, England: Unwin Hyman.
Sklair, L (2002). Transnational corporations and capitalist globalization. In Globalization, Capitalism and Its Alternatives, Sklair, L (ed.), 59–83. Oxford, England: Oxford University Press.
Sztompka, P (1991). The model of social becoming. In Society in Action, the Theory of Social Becoming, Sztompka, P (ed.), 87–199. Chicago, Illinois: University of Chicago Press.
Biographical Notes
Prof. Masudul Alam Choudhury's areas of scholarly interest, which focus on the epistemological treatment of mathematical models in Islamic political economy and the world-system, are diverse. They span from hard-core economic and finance areas to philosophical issues. He derives the foundational reasoning from the Tawhidi (divine unity of knowledge in the Qur'an) worldview in terms of the relationship of this precept with diverse issues and problems of the world-system. The approach is system- and cybernetics-oriented, addressing general systems of complex and paired circular causation relations of explanatory and parametric variables. Professor Choudhury's publications are many and diverse. The most recent ones (2006–2008) are five volumes on Science and Epistemology in the Qur'an (The Edwin Mellen Press), each volume differently titled. The Universal Paradigm and the Islamic World-System was published by World Scientific Publishing in 2007. In 2008, he published Computing Reality with coauthor M. Shahadat Hossain through Aoishima Research Institute, Japan. There are many more. He has written many international refereed journal articles. Professor Choudhury is the International Chair of the Postgraduate Program in Islamic Economics and Finance at Trisakti University, Jakarta, Indonesia.
Chapter 11
Systematization Approach for Exploring Business Information Systems: Management Dimensions ALBENA ANTONOVA Faculty of Mathematics and Informatics, 125, Tzarigradsko chausse CIST, bl.2 fl.3 P.O. Box 140, 1113 Sofia University, Bulgaria a
[email protected] Today, business information systems (BIS) has become an umbrella term that indicates far more than just a main business infrastructure. Information systems have to enhance the capacity of knowledge workers to enable business organizations to operate successfully in complex and highly competitive environments. Despite the rapid advancements in technology and IT solutions, the success rate of BIS implementation is still low, according to practitioners and academics. The effects of IT system failures and delays can be disastrous for many companies, possibly leading to bankruptcy, lost clients and market share, and diminished competitive advantage and company brand, among other things. The study of system science gained impetus after World War II, suggesting a new way of studying complex organisms and their behavior. Investigating parts of the whole is not enough if one is to understand the complex functions and relationships of a system. Business organizations are often examined through a number of their elements and sub-systems: leadership and government, marketing and sales systems, operating systems, IT systems, financial and accounting systems, and many other sub-systems. However, behind every sub-system stand human beings — the employees who personalize every business process in order to express their unique approach to delivering value. This intrinsic element of the business organization — its human capital — is often underestimated when "hard" issues like information systems are introduced. Systematization proposes an approach to the study of BIS within its complex environment, considering it as an integral element for organizational survival. Planning BIS is a substantial part of a company's strategy to succeed further while capturing, analyzing and reacting to information acquired from the environment, combining it with knowledge of internal processes and exploiting it to give customers better value. Keywords: System theory; business information systems; business organisations.
1. Introduction Nowadays, business information systems (BIS) have transformed into powerful and sophisticated technology solutions, vastly different from the standardized on-the-shelf software products designed to fulfill some operational business functions.
In this ever-changing and complex global environment, information technologies have become increasingly important for organizational survival, exceeding simple program applications. The way organizations produce, sell, innovate, and interact with a global and complex environment has changed. The business paradigm has shifted to more complex organizational global structures and interlinked systems. The internet and information technologies have linked all businesses — there are no longer small or big businesses, only connected and disconnected businesses. Companies are divided into businesses that are in the global economy and businesses that still survive outside it. Today, the world is more connected and complex than before. While value creation and competitive advantage are still the center of any business strategy, global competition is becoming increasingly severe. Technology has contributed to the intensification of global commerce and global trade. It has allowed for the downsizing and flattening of organizations through the outsourcing of production departments and back offices to low-wage countries. BIS has made it possible for diversified teams to collaborate and work remotely, bringing together experts from all around the world. BIS allows for the design of complex inter- and intra-organizational networks and business systems as various e-business suites and portals. As Laudon and Laudon (2006) point out, the emergent combination of information technology innovations and a changing domestic and global business environment makes IT in business even more important for managers than just a few years ago. Laudon and Laudon (2006) further enumerate the following factors, considering the growing impact of information technologies on the business organizations of today: Internet growth and technology convergence, transformation of the business enterprise and the emergence of the digital firm, growth of globally-connected, knowledge-, and information-intensive economies.
1.1. Internet Growth and Technology Convergence When considering Internet growth and technology convergence, we should think about how Web 2.0 technologies and the increasing dimension of social networks are changing the role and place of information technologies in our organizations and society as a whole. According to Alexa Internet global traffic rankings (Labrogere, 2008), 6 out of 10 of the most visited websites in 2007 were not in the top 10 in 2005, and all of them are Web 2.0 services (YouTube, Windows Live, Facebook, Orkut, Wikipedia, hi5). This gives an idea of the sound trend Web 2.0 represents. The author further points out the emerging Com 2.0 concept that applies Web 2.0 paradigms to the communication sphere and communication services, allowing users to move from a fixed to a mobile environment. This comes as an example of the dynamics of the IT sector, which is still transforming our lives, becoming smarter and more invisible, and integrating itself into the
various products and applications around us. Information technologies develop fundamentally, constantly improving and enhancing their capacities, and this should be taken into account when designing and implementing the next generation of BIS. 1.2. Transformation of the Business Enterprise and Emergence of the Digital Firm The transformation of business organizations and the emergence of the digital firm suggest that BIS should aim not only to deliver separate solutions or workflow automation. As described in Lufa and Boroacă (2008), the modern enterprise offers a significant variety of services; it is adaptable to internal and external factors, and the microeconomic decision depends on the alternative possibilities of the market and the uncertain aspects of demand that are more and more difficult to predict. BIS have to be designed to deliver real intrinsic value to changing business organizations and to limit the risks. Some of the main goals of a "smart" management system, according to Lufa and Boroacă (2008), are to reduce risk, to stimulate creativity, and to make people more responsible in the decision-making process. Decisions in modern organizations can be considered as a process of risk reduction, because they are based on information, experience, and ideas that come from many different sources and that can be accessed and used through BIS in an efficient, anticipatory way (Lufa and Boroacă, 2008). There exists a large number of successful business models manifesting how BIS can be transformed into a company profit unit (selling services to other departments and clients), or outsourced (in whole or in part) to other companies (like server farms), or become a strategic instrument for sustainable development (like e-commerce and e-business). Many examples of digital business models depict how IT influences innovations and company profitability. Nowadays, BIS have a bigger impact based on the way the company formulates its sustainable strategy. 1.3. Growth of a Globally-connected, Knowledge- and Information-intensive Economy The recent banking and financial crisis (September–October 2008) has highlighted the degree to which world economies are more integrated and interrelated than ever thought. Within hours, the effects spread across stock markets all around the world and stock prices fell globally, leading to a sharp decrease in all stock exchange indexes. The world economy is becoming smaller, more complex, and very dynamic. This imposes a new paradigm for BIS — to guide businesses in dangerous and hostile environments and to map the road for further development and success. On the other hand, in recent years the global economy has shifted towards a service- and knowledge-oriented economy. The GDP in most developed countries is generated increasingly by the knowledge-intensive service sector. There are
assumptions that service sectors account for more than 80% of the national incomes in most developed countries (Uden and Naaranoja, 2008). Services and their specific features are at the center of our society today. Services are intangible goods that are much more complicated to deliver, to sustain, and to develop than mass products. Services are usually knowledge-intensive and demand better knowledge and information management and its integration into organizational business processes. 1.4. Emergent Characteristics of BIS Following the observed trends discussed above, the following conclusions have emerged: the constant evolution of Internet and IT technologies, global and linked complex business organizations, and an increasingly knowledge- and service-oriented global economy. Summarizing these ideas, we should consider BIS from the level of services they should provide to the organization and business. All BIS users, whatever their roles, are not interested in specific technical tools — users want to get information and knowledge from the system, or better still, to adapt and personalize the system according to their specific and momentary needs. Users want to perform better while working, communicating, and entertaining. BIS should enable companies to respond to this challenge and provide customized, personalized, and customer-oriented services. The main focus of BIS design is not on some specific functions and features of BIS technologies, but on the "integrated services and intrinsic value" that they could provide their users. In most cases, in order to perform a task well, IS should deliver not only information but rather meaningful knowledge, transforming bytes and data into appropriate answers to complex and detailed business problems. BIS have to change from reliable and open networks to large data warehouses and content management solutions. As described in a number of sources, Web 2.0, via various Internet applications, has evolved to link and provide a communication platform for user-friendly services, for internal and external sources of knowledge. Web 2.0 and social networks provide a unique platform and technological tools to allow everybody to freely express his or her individuality. As Buckley (2008) states, Web 2.0 sets users free from a closed set of navigational and functional options and thus of "normal" tasks and ways of interacting. Users exercise great freedom to label, group and tag things, and therefore to carve up and contemplate the world their way. Plenty of meaningful and important messages are hidden in personal blogs, image galleries, video-sharing programs, e-newspaper forums, and comments sections. But is it possible for all these rough and out-of-the-box ideas, messages and information to be processed adequately and in a timely manner so as to produce meaningful knowledge, indicating the trends and directions for development? The BIS have to respond to this challenge, enabling and facilitating the company's access to meaningful knowledge inside and outside the organization — past and current databases of best practices, business models,
business processes, and projects. BIS should enhance knowledge workers' and knowledge-intensive companies' ability to survive while capturing and adapting to still-invisible signals from the environment to deliver better services and products to their clients. 2. Research Methodology of Complex Systems Exploration To gain a better understanding of the challenges facing BIS, a thorough analysis will be made of the application of system thinking to BIS. A large number of manuscripts and authors explore various aspects of complex systems, but our research methodology will focus on the review of the social dimensions of system theory, referring to the main system components, elements, and characteristics. The aim of the research is to identify the ground theory aspects of system theory and to present the main challenges for BIS development and its changing role within dynamic business organizations. The first few sections will provide a short review of both research methods — the analytical and systematization approaches. Basic definitions will then be presented, further deepening our understanding of complexity. Section 3 compares Gharajedaghi's (1999) views of organizational systems with the evolution of BIS systems. General system theory and the main characteristics of systematic thinking according to some of the most popular authors are further discussed and presented. A table with the main system characteristics is developed, and a proposal for a systematization process is presented at the end of the chapter. 2.1. Analytical Method A traditional analytical approach describes the cognitive process as consisting of analysis and synthesis. The analytical viewpoint is employed when the emphasis is put on the constituting elements or components (Schoderbek et al., 1990). The analytical approach is defined as a process of segmenting the whole into smaller parts to better understand the functioning of the whole. Due to the limited capacities of the finite human mind, this is an appropriate scientific method to exhaust any subject by breaking it down into smaller parts. By examining all these parts thoroughly, man is believed to be able to attain a better understanding of the individual aspects of the subject. The process is completed by a final summary or synthesis putting together all the parts of the whole. Although this technique has been applied in the study of mathematics and logic since before Aristotle, analysis as a formal concept was formulated only relatively recently. This method is thoroughly described in "Discours de la Méthode" by René Descartes (1596–1650), and has since been identified with the scientific method. There are many areas of knowledge where this approach yields good results and observations. Indeed, many of the laws of nature have been discovered with this method (Schoderbek et al., 1990).
One of the main characteristics of the analytical approach is that elements are independent. Discovering a simple mechanical system is possible and appropriate with this methodology. This approach suggests that the whole can be divided into elements or parts, which are then thoroughly studied and described and then again summarized. The environment is passive and the created system is merely closed, rigid and stable. Any mechanical system (transaction-processing systems (TPS), for example) can be thoroughly researched via the analytical method, as such systems are closed and limited, with little interdependence between their constituting elements. However, the new technologies have evolved and become smarter and harder to cover by a simple summary of business applications, as stated in Laudon and Laudon (2006). The further evolution of IS and the inability to predict and adapt to the next technology shift have made us very careful when employing this approach to study BIS. 2.2. Systematization Approach The system approach contrasts with the analytical method. It aims to overcome the limitations of the analytical approach, identifying many complex social, economic, and living organisms that cannot be explored and understood via the analytical framework. The notion of "system" gives us a holistic approach to study and research the object as a whole, because the mutual interactions of its parts and elements and its interdependencies create new important and distinctive properties possibly absent in any of its parts or elements (Schoderbek et al., 1990). Today our world represents an organized complexity, defined by its elements, their attributes, and interactions among the elements and the degree of organization inherent in the system (Schoderbek et al., 1990). Whatever economic, natural, political, or social organization we envisage, we always talk about systems. As Dixon (2006) further points out, the universe is itself a system made up of many subsystems — a hierarchy of nested systems. . . . In any context, it is infeasible to attempt to reach a comprehensive understanding of all things. Even the global system contains too many variables to achieve good understanding of all the possible actors and interactions. It is necessary, therefore, to focus the study of a system and its context to that which is feasible: to frame the system in such a way that an understanding is possible. Once a system is framed, it is possible to explore the interactions within that portion of the greater system, keeping in mind that the system frame is an artificial construct. . . . (Dixon, 2006).
Companies and business organizations need to gain a timely and deep understanding of what is happening outside and inside the organization. The volume of information produced daily keeps increasing and accelerating, as the global environment transmits far more information than before. Nowadays, technologies give billions of people the opportunity to become content creators by writing blogs, taking part in professional forums, or sharing images and video with their friends. The information load has been increasing tremendously in recent years and this is expected to
continue. Companies often miss opportunities or cannot identify trends in time due to the increased complexity of the messages coming from the environment. Considering BIS as service-oriented technologies, we should focus on the BIS functions and capacities needed to move beyond a mechanistic system view and to transform BIS into a simultaneously evolving, adapting, and expanding flexible organizational "nervous system." This means that BIS should become intelligent networks of references and alerts, allowing users to get fast access to needed information as well as to further reading and details, links to experts, and other actors within and outside the organization.

2.3. Some Definitions

The generally accepted definition of "system" states that: "The system is defined as a set of objects, together with their relationships between objects and between their attributes, and to their environment so as to form a whole" (Schoderbek et al., 1990). Laudon and Laudon (2006) provide a technical definition of the "information system" as ". . . a set of interrelated components that collects (or retrieves), processes, stores, and distributes information to support decision-making and control in an organization. . . ." The definition continues that IS support decision-making, coordination, and control, enabling managers and workers to analyze problems, visualize complex subjects, and create new products. A "service system," as defined by IBM (2007), is the ". . . dynamic, value co-creating configuration of resources, including people, technologies, organizations and shared information (language, laws, measures, and methods), all connected internally and externally by value propositions with the aim to consistently and profitably meet the customer's needs better than competing alternatives. . . ." Another definition, proposed by Coakes et al. (2004), states that "a system is more than a simple collection of components, as properties 'emerge' when the components of systems are comprised and combined." The authors explain that systems are delimited by their boundaries and display properties such as emergence and holism, interdependent hierarchical structures, transformation and communication, and control. When considering the nature and properties of any system, care should be taken when looking at the components of the system in isolation. These parts, or subsystems, interact, or are "interdependent," and so need to be considered as a whole or "holistically." In addition, there is likely to be a discernible structure in the way subsystems are arranged — in a hierarchy. Finally, communication and control need to be established within the system, and it has to perform some transformation process (Coakes et al., 2004). BIS should be designed and integrated as whole organizational systems, not just considered as a sum of processing functions, databases, and networks. The system approach lets us think about BIS as an ever-evolving system, representing one of the major and most important (together with human resource
systems) subsystems of the organization. Information systems should not be considered a subsidiary, cost-based function of the organization that merely records and accounts for past data. BIS technologies can provide the company with the tools, understanding, and vision to improve the service and value that it delivers to its customers. Systems theory focuses on complexity and interdependence. By doing so, it tries to capture and explain phenomena and principles of different complex systems by using consistent definitions and tools. It has a strong philosophical dimension, because it yields unusual perspectives when applied to the human mind and society. It is important to mention that systems theory aims at illustrating and explaining interrelations and connections between different aspects of reality, not at the realization of systems.

2.4. Complexity

Complexity is all around us. A system is said to be complex if the variables within the system are interdependent (Dixon, 2006). The links (relationships) between variables ensure that a change (an action or event) in one part of the system has an effect on another part of that system. This effect, in turn, may be the causal event or action that affects some other (or multiple) variables within the system. Durlauf (2005) defines complex systems as "comprised of a set of heterogeneous agents whose behavior is interdependent and may be described as a stochastic process." Complexity involves an increasing number of interacting elements affecting and influencing the system. In a system where new entities enter daily and other entities disappear, gaining a clear understanding of the structure of the system is extraordinarily difficult (Dixon, 2006). Complexity, however, does not simply concern the system's structure. The focus is not on the quantity of actors and components within the system but on the quality of its relationships (Dixon, 2006). Systems possessing a great number of agents with different components and attributes are complicated, but not necessarily complex. Today we face a complex environment full of interdependent variables, constantly changing subsystems, and elements and attributes that appear and disappear. Understanding and managing systems has emerged as a key challenge for organizations because of the increased interaction between the various agents within the system and with the complex environment. BIS should provide us with sufficient capacity and the instruments to understand, obtain knowledge from, and cope successfully with increasing uncertainty and risks in an ever-changing global environment. Metcalfe (2005) points out that from complex environments emerge changes that by definition are impossible to fully appreciate. An organizational structure is only reasonable if it is appropriate to its environment. If the environment changes, then the organization needs to change. Re-organization can be seen as reconnecting people, resources, and technologies in line with the new environment. If the
changes are rapid or the organizational structure is complex, a central administration often has neither the communications capacity nor the technical expertise to provide hands-on coordination of all these reconnections. Attempts to do so are likely to cause bottlenecks in information flows and prevent an appropriate response. Therefore, Metcalfe (2005) proposes a vision of knowledge-sharing networks that anticipate environmental change. BIS should provide information and communication that fully identify those changes, and then assist with the process of reorganization.

3. Evolution of Social Systems and Organizations

To better understand the challenges and the complexity of evolving business organizations, we will briefly review the evolution of business organizational systems as presented by Gharajedaghi (1999). In his book Systems Thinking: Managing Chaos and Complexity, the author describes three system models of an organization. He develops these models according to the idea that man should think about something "similar, simpler, and familiar" in order to understand complex systems. While exploring organizations as evolving systems, we will use this analogy to better comprehend BIS.

3.1. Mechanistic View — Organization as Machine

The first model describes the idea of the mindless system: the "mechanistic view" that became widely accepted after the Renaissance in France. The popular vision that the universe is a machine working according to its structure and the causal laws of nature gave birth to the Industrial Revolution. The mechanistic view was transposed to the birth of organizations — structures of people organized around the principle that everyone performs only a simple task, like a mechanism in a complex machine. The machine model supposes that the organization is a simple system: it is a tool whose function is defined by the user and which has no purpose of its own. Mainly, the organization is regarded as an instrument for the owner to make a profit. The only important attribute of this system is its "reliability," evoking other characteristics such as control, tidiness, efficiency, and predictability. The structure of the mechanistic system is designed to be stable and unchanging. This type of organization can operate effectively in a stable environment or if it has little interaction with the environment (Gharajedaghi, 1999). The mechanistic view is fully applicable to current IT applications and BIS. IT systems are still defined as tools that perform pre-determined functions, designed to operate in a closed, stable, and finite environment. BIS have to be designed as reliable, controllable, efficient, and predictable information systems. But nowadays, changes in business organizations and their environments have accelerated, and new technologies have evolved rapidly. One of the main challenges within organizations nowadays is the successful integration and interoperability of
different existing information systems. But is it possible to predefine various mechanistic systems so that they merge and provide sophisticated solutions to complex queries and questions? Current practice shows that businesses need integrated BIS that span the whole organization and communicate with a complex environment while performing non-linear functions.

3.2. Biological View — Organization as Living System

Gharajedaghi's second model is the uniminded system, or a biological view of the organization. The uniminded system considers the organization as a living system that has the clear purpose of survival. In the biological analogy, living systems can survive if they grow and become stronger, exploiting their environment to achieve a positive metabolism. In this type of system, growth is a measure of success and profit is only a means of achieving it. It is important to stress that, in contrast to the mechanistic organization, the biological organization views profit only as a means towards success. This is the model of multinational conglomerate organizations that tend to expand and explore new market opportunities while achieving economies of scale. Although the uniminded system has choice and can react freely, its parts do not — they operate on cybernetic principles as a homeostatic system, reacting to information like thermostats. The parts of the whole react in a predefined manner and do not have a choice. In living organisms, the parts of the body work in coherence, without consciousness or conflict. The operation of the uniminded system is completely controlled by a single brain, and execution malfunctions only when there are problems with communication or information channels. The main idea is that the elements of the system do not have a choice, as long as no conflicts appear in the system. As long as "paternalism" was the dominant culture and imperatives like "father knows best" were appropriate ways to resolve conflicts, uniminded organizations functioned successfully (Gharajedaghi, 1999). Can we imagine a similarity between BIS and uniminded living systems? In principle, BIS tend not to survive on their own, as they are static and do not expand into the environment. Most importantly, they do not possess an autonomous purpose. People are still horrified by independent robots in science fiction — humanoid machines processing and acting like living organisms. Employing a common metaphor, we may compare BIS to the nervous system within a living organism. The nervous system is a very complex and important subsystem, allowing the fast gathering and understanding of signals and messages, observing the environment, transmitting information around the various subsystems, and coordinating actions in different parts of the organism. The nervous system controls the activities of the brain and the whole body. In the same way, BIS enable the flow of information within the organization and support information and knowledge management and subsystem coordination. However, uniminded systems tend not to evolve, as they are limited to their predefined purpose.
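The nervous-system metaphor can be made slightly more concrete with a small sketch. The following Python fragment is illustrative only (it is not part of the chapter, and all class and topic names are hypothetical): environmental signals are routed to whichever organizational subsystems have registered an interest, much as a BIS would route references and alerts to users and subsystems.

```python
from collections import defaultdict
from typing import Callable, Dict, List

class SignalRouter:
    """Minimal 'organizational nervous system': routes signals to subscribed subsystems."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        # A subsystem (e.g., sales or planning) registers interest in a topic.
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, signal: dict) -> None:
        # A signal from the environment is forwarded to every interested subsystem.
        for handler in self._subscribers[topic]:
            handler(signal)

# Hypothetical usage: one market signal reaches two subsystems at once.
router = SignalRouter()
router.subscribe("market", lambda s: print("Sales alerted:", s["summary"]))
router.subscribe("market", lambda s: print("Planning alerted:", s["summary"]))
router.publish("market", {"summary": "competitor price change detected"})
```

The point of the sketch is only the pattern: no central unit interprets every message; each subsystem reacts to the signals relevant to it.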
3.3. Multiminded View — Organization as Sociocultural System

An example of the third-level, multiminded system is the social organization. The sociocultural view regards the organization as a voluntary collective of purposeful members, who may choose their ends and means. The behavior of this system is much more complicated and unpredictable than in the previous two models. The purposeful organization is a much deeper concept than that of the goal-oriented organization. In a social system, the elements are information-bonded. The elements are connected via a common culture, and compromise is one of the main methods for managing it. As defined further by Gharajedaghi, business organizations are complex sociocultural systems, representing voluntary associations of purposeful members who have come together to serve themselves by serving a need in the environment. Sociocultural systems are held together by information. Communication flows maintain the bonds among individuals and between the organization and its members. Nowadays, organizations and systems are becoming increasingly interdependent. At the same time, their elements and parts tend to be more autonomous, displaying choice and behaving independently, and are less predictable and programmable. BIS should support these processes, connecting and facilitating information- and knowledge-sharing inside and outside the organization and its systems. On the other hand, BIS have to enable a coherent culture and purposefulness that will bring together all the independent and self-directed elements. The recent evolution of Internet facilities and technologies enabling social communication, such as wikis, personal blogs, social networks, groupware, and forums, has still not been thoroughly researched in an organizational context. One important thing that has happened is the transformation of passive information users into active content providers. By expressing themselves, people become members of specific Internet communities, which in turn are self-directed and self-organized sociocultural systems. Many organizations have adopted and encouraged the emergence of virtual communities of practice (CoPs), representing virtual meetings of experts working on the same problems within different parts of the organization. These CoPs can include employees and managers from different parts of the organization, as well as customers, suppliers, or external experts, to achieve better results on an identified problem. Social networking tools within organizations can become a powerful instrument for forming a common culture and an understanding among self-directed, purposeful individuals.

3.4. Summary

In his book, Gharajedaghi (1999) presents three views of organizational systems. Tracing the evolution of corporate organizations, the author provides us with considerations that we have applied to BIS. After reviewing the mechanistic approach, we may conclude that BIS still belong to the class of mechanistic or machine systems.
They are simply designed to act as tools (information tools) and can be characterized as controllable, stable, reliable, and efficient organizational instruments. Another aspect of BIS is that they can be compared to an organizational nervous system, providing signals and information and coordinating all the organizational processes around it. If an organization is comparable to a living organism, then BIS should enable its vision and hearing, its movements and reactions. Lastly, organizations tend to be sociocultural systems, composed of independent, purposeful members. BIS should evolve further along these dimensions, enabling companies to respond to the increasing interest of users and employees in connecting and interacting virtually.

4. Models for System Exploration

4.1. General System Theory

General system theory appeared as an attempt to explain the common principles of all systems in all fields of science. The term derives from von Bertalanffy's book titled General System Theory (GST). His intent was to use the word "system" to describe the principles that are common to all systems. He writes: ". . . there exist models, principles, and laws that apply to generalized systems or their subclasses, irrespective of their particular kind, the nature of their component elements, and the relationships or 'forces' between them. It seems legitimate to ask for a theory, not of systems of a more or less special kind, but of universal principles applying to systems in general. . . ."
The GST formulates the "systems thinking" approach incorporating 10 tenets (von Bertalanffy, 1974) (Fig. 1):

1. Interrelationship and interdependence of objects and their attributes: independent elements can never constitute a system.
2. Holism — the system is studied as a whole, not dividing it or analyzing it further.
3. Goal seeking — systemic interaction must result in some goal or final stable state.
4. Inputs and outputs — in a closed system inputs are determined once and constant; in an open system additional inputs are admitted from the environment.
5. Transformation of inputs into outputs — this is the process by which the goals are obtained.
6. Entropy — the amount of disorder or randomness present in any system.
7. Regulation — a feedback mechanism is necessary for the system to operate predictably.
8. Hierarchy — complex wholes are made up of smaller subsystems.
9. Differentiation — specialized units perform specialized functions.
10. Equifinality — alternative ways of attaining the same objectives (convergence).
Figure 1. Characteristics of GST and BIS: the ten GST tenets (interdependence, holism, goal seeking, inputs and outputs, transformation process, regulation, entropy, hierarchy, differentiation, and equifinality) arranged around BIS.
4.2. Characteristics of Systematic Thinking

To gain a better understanding of the system approach, we will present and compare some of the main characteristics of systematic thinking. The integrative systems approach is the basis for much research on BIS and management support systems (Clark et al., 2007). Yourdon (1989) discussed the application of the following four general systems theory principles to the field of information systems:

• Principle 1: The more specialized or complex a system, the less adaptable it is to changing environments.
• Principle 2: The larger the system, the larger the amount of resources required to support that system, with the increase being nonlinear.
• Principle 3: Systems often contain other systems, and are in themselves components of larger systems.
• Principle 4: Systems grow, with obvious implications for Principle 2.

The following subsections present in detail the five basic considerations concerning systematic thinking proposed by Churchman (1968); an illustrative sketch of how they can be recorded for a BIS review follows the list:

1. Objectives of the whole system
2. The system's environment
3. The resources of the system
4. The components of the system
5. The management of the system
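The five considerations can be read as a simple review template. The following Python sketch is purely illustrative (it does not come from the chapter, and all field values are hypothetical); it records Churchman's five considerations for one BIS so that several systems can later be compared on the same dimensions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SystemReview:
    """Churchman's five basic considerations, captured for one BIS under review."""
    objectives: List[str] = field(default_factory=list)   # ends the system tends toward
    environment: List[str] = field(default_factory=list)  # everything outside the boundary
    resources: List[str] = field(default_factory=list)    # means available to the system
    components: List[str] = field(default_factory=list)   # activities realizing the objectives
    management: List[str] = field(default_factory=list)   # planning and control of the system

# Hypothetical example: a customer-order BIS described along the five dimensions.
order_bis = SystemReview(
    objectives=["deliver order status to customers within seconds"],
    environment=["customers", "suppliers", "web channels"],
    resources=["order database", "integration team", "hosting budget"],
    components=["capture orders", "track shipments", "notify customers"],
    management=["quarterly service review", "change control board"],
)
print(len(order_bis.components), "components recorded")
```

Such a structure is only a note-taking aid; the analytical weight still lies in how each dimension is interpreted, as the following subsections discuss.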
4.2.1. Objectives

The objectives represent the ultimate goals or ends toward which the system tends. Objectives should be measurable and operationalized, as they have to be defined by measures of identifiable and repeatable operations. Objectives should determine the system's performance and effectiveness. For mechanical systems, objectives can be determined easily, while for human systems this is not always true. All systems have some main objectives, and these can be described as the main direction for further development. BIS are strategic tools, as new technologies change the way people think, work, and live. Technologies are now expected to be everywhere, embedded in various applications, invisible but reliable and stable, and providing a complex and integrated service. Regardless of the type of organization, information technologies are one of the main strategic tools for any further organizational development. Information systems assist companies in all internal and external business processes, facilitating the main business functions and providing additional value to the business model. There are a number of ways and business models by which an organization sells, buys, produces, or delivers a service, assisted or facilitated by information technologies. BIS are designed to provide a service: to store information, to perform calculations, to model a simulation, and to deliver a message. Even more, BIS can sell your product, place an automatic order with your supplier, deliver timely information about the production phase in a remote office, deliver a payment, and find a specific record from the past using a single keyword. BIS are mobile, integrated, complex, and evolving, securing access to specific information and resources. In order to properly design and consider BIS, one should decide what their main objectives are or, more specifically, what services BIS should provide to their users, to the organization, and to the environment.

4.2.2. Environment

The environment represents everything outside the boundaries of the system. Churchman (1968) further identifies two main features of the environment — control and determination. Control examines how much influence the system has on its environment. In BIS language, this can be interpreted as how much information and valuable knowledge is produced and transmitted to the environment. The second factor, determination, evaluates how the environment affects the system's performance (Fig. 2). The environment is becoming the main source of instability and of competition for value creation. Nowadays, the environment is a rich source of information and knowledge as well as of threats and competition coming from an increasing number of active agents. Active systems have to attain an appropriate level of understanding of the various factors affecting future business development and complexity.
Figure 2. Environment and Internet emerging as a stand-alone and increasing external factor.
BIS should enable companies to identify, receive, and process messages coming from the environment. With new technologies, the access to and availability of information have changed, and the complexity of information sources is tremendous. All information users are at the same time information providers. Blogs, wikis, social networks, Web 2.0 technologies: all media encourage personal involvement, comments, links, and active feedback. An increasing amount of text, video, sound, and images is created every day and published on the Internet. Is our organization missing something important? How do we cope with and process information from one very distinct, complex, special, and ever-growing environment — the Internet? We live in an information-rich but knowledge-poor environment. Organizations usually lack the time, the methodology, and the systematic approach to cope with information coming from the environment and to transform it into valuable knowledge, embedding it afterwards in innovations, products, and services. BIS should enable organizations to systematize and to better understand the complexity of the environment.

4.2.3. Resources

Resources are the instruments and means available to the system to execute its goal or objective. Resources can be either inside the system (employees) or outside it (external collaborators), and they represent all the complex material and intangible assets (know-how, strategic alliances, etc.) that the system can process further. One important feature that has to be examined within the system is non-utilized resources, or lost opportunities, unrecognized and unexploited challenges, and the lack of appropriate management of resources. It should be emphasized that in the knowledge-driven economy, people become the main organizational resource.
Nowadays, people have become self-directed agents, recognized for their unique combination of theoretical background and acquired experience. Empowering people to perform better (and not wasting their effort, time, and knowledge) can enable the company to perform better and to improve the value of the services offered to clients. BIS should enhance people's performance in creating, processing, storing, searching, and transmitting information and knowledge. Nowadays, information technologies have to be built in ways that improve human performance. As stated in Kikuchi (2008), in the future we have to expect the age of "prosumers" (playing the roles of producer and consumer at the same time). This holds for both B2B and B2C businesses. As companies work to shape business processes and workflows around this new concept, BIS will have a major influence.

4.2.4. Components

Components are all those activities that contribute towards the realization of the system's objectives. By components, Churchman means tasks that the system must perform to realize its objectives. When exploring BIS, the components of the system (or system activities) should be clearly defined in terms of services (what service the system should provide) and knowledge-processing activities (how knowledge is created, stored, processed, and distributed within the system); a small illustrative sketch of such a decomposition follows Sec. 4.2.5. BIS should deliver meaningful and value-adding services, adding much more context, personalization, and knowledge to better serve their users. The service approach has several important implications. Service values are determined by capricious and unstable customers and markets, and they change constantly. It is more crucial than ever that companies' activities and processes enable a wide variety of outside business partners and collaborators, as well as markets and customers, to actively capture market and customer changes (Uden, 2008).

4.2.5. Management

By system management, Churchman (1968) means two basic functions: planning and control. In systems architecture, the system design phase is one of the most important for the system's overall functioning and success (Dixon, 2006). Management consists of systematized activities to design the system and to plan and control the determined constraints. Planning the system involves all the system's aspects — the determination of its goals, environment, resources, and components. Controlling the system includes both the examination and review of plans and the making of any necessary changes. Centralization usually means standardized, hierarchical knowledge-sharing procedures, which slow down, average out, and often over-simplify information to the extent that it can be misleading.
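To make the idea from Sec. 4.2.4 of components defined in terms of services and knowledge-processing activities more tangible, the short Python sketch below is offered as an illustration only (the component and service names are hypothetical and not taken from the chapter); it pairs each BIS component with the service it provides and the knowledge activities it performs.

```python
from dataclasses import dataclass
from typing import Tuple

# Knowledge-processing activities mentioned in the text.
KNOWLEDGE_ACTIVITIES = ("create", "store", "process", "distribute")

@dataclass(frozen=True)
class Component:
    name: str
    service: str                 # what service the component provides to users
    activities: Tuple[str, ...]  # which knowledge-processing activities it performs

# Hypothetical decomposition of a BIS into service-oriented components.
components = [
    Component("order tracking", "answer 'where is my order?'", ("store", "distribute")),
    Component("expert locator", "link a question to an internal expert", ("process", "distribute")),
    Component("lessons-learned log", "capture project experience", ("create", "store")),
]

for c in components:
    assert all(a in KNOWLEDGE_ACTIVITIES for a in c.activities)
    print(f"{c.name}: provides '{c.service}' via {', '.join(c.activities)}")
```

Framing components this way keeps the design conversation on services delivered rather than on databases and networks, in line with the service view taken throughout the chapter.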
The complexity within a system obliges system designers/planners to take into account many features simultaneously, making it difficult or impossible to commit to a single action or to expect a single outcome. BIS can help model this complexity in order to reduce risks for specific elements of the system and to mitigate their influence. Realizing such an organization requires management to induce the creation of new ideas and intellectual collaborations beyond organizational boundaries. This leads to the argument that open and flat virtual corporations will be required to create new service power (Uden, 2008).

4.3. Five System Principles (Gharajedaghi, 1999)

As proposed by Gharajedaghi (1999), five important system principles are presented below. They affect systems thinking about BIS, as they extend the vision of the organization as a unique complex system. The following characteristics give us further insight into the systematic approach.

4.3.1. Openness

Openness refers to the fact that living systems constantly interact with their environment. The behavior of living systems can only be understood in terms of their context or containing environment. In terms of energy, a system requires interaction outside its structure to maintain itself. Energy is continually used to maintain the relationships of the parts and keep them from collapsing into decay. This is a dynamic state, not a dead and inert one. Only through an understanding of the system's interaction (i.e., the transmission of energy) with the environment can the behavior of the system be understood. The environment grows more and more difficult to predict, despite efforts to do so. Despite the difficulties of controlling the environment, organizations can influence it, creating a transactional environment. Leadership is about influencing what cannot be controlled and appreciating what cannot be influenced. Culture is the default value in any social system that allows it to reproduce the same order over and over again.

4.3.2. Purposefulness

Choice, which depends on rationality, emotion, and culture, is the primary idea behind this principle. Attempts to understand purpose are attempts to understand why systems behave the way they do. Purposeful systems can change their ends — they can prefer one future over another regardless of whether their environment changes or not. (This idea foreshadows a basic working principle of design: when we design, we assume that the old system is destroyed but the environment remains unchanged, and that the new system represents the explicit desires or choices of the designers.) Human systems also have purpose. To truly understand a system, it is necessary to understand why it behaves the way it does. People typically do
not act without reason, and regardless of how irrational a behavior appears to the observer, the rationale of the actor must be explored and understood if the observer is to comprehend the system.

4.3.3. Multidimensionality

The principle of multidimensionality suggests that variables in a system have multiple characteristics, some of which may appear to be contradictory. Multidimensionality is the "ability to see complementary relations in opposing tendencies and to create feasible wholes with unfeasible parts." "Multidimensionality maintains that opposing tendencies not only coexist and interact, but also form a complementary relationship."

4.3.4. Emergent property

Emergent properties are formed by relationships within the system. As these properties are not a sum of parts, but a product of interactions, they do not lend themselves to analysis, do not yield to causal explanation, and are often unpredictable. This is the characteristic that gives systems the ability to be greater than the sum of their parts. When interacting parts are compatible, the interactivity between them is reinforcing, and the resulting energy produced is significantly greater than either part could produce on its own. Conversely, when interacting parts are incompatible, the product produced by their interaction is less than either part could produce independently.

4.3.5. Counterintuitiveness

"Counterintuitiveness means that actions intended to produce a desired outcome may, in fact, generate opposite results." This characteristic is present in nearly every human system, but seems to manifest itself in greater magnitude during crisis situations. Delays and multiple effects are two primary reasons counterintuitiveness is so prevalent. Delays occur when time and space separate cause and effect. An action taken at a given time and place may have a non-immediate impact, and the delay gives the actor or observer the impression that either nothing happened, or that the effects were singular. In complex systems, effects are rarely singular, and second- or higher-order effects may initially go unnoticed if they occur at a different time or place. We can all think of a time when a plan or strategy backfired. Classical concepts of cause and effect come into question because of delayed effects, circular dependencies, multiple effects of a single event, and the durability or resistance of effects to change. Predicting outcomes means modeling the complex and dynamic delays, dependencies, multiple effects, and durability of the set of interacting factors. Prediction is, at best, an uncertain science (Gharajedaghi, 1999).
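The role of delays can be shown numerically. The following small Python simulation is illustrative only (it is not from the chapter, and the parameter values are arbitrary assumptions): a manager keeps ordering stock toward a target, deliveries arrive with a delay, and the well-intended corrections overshoot the target substantially, a counterintuitive outcome produced purely by the delay between cause and effect.

```python
# Toy stock-adjustment loop with a delivery delay (all parameter values are arbitrary).
TARGET, DEMAND, DELAY, ADJUST = 100.0, 10.0, 3, 0.8

stock = 50.0
pipeline = [0.0] * DELAY            # orders placed but not yet delivered

for week in range(16):
    stock += pipeline.pop(0)        # delayed delivery arrives
    stock -= DEMAND                 # customers consume stock
    gap = TARGET - stock
    order = max(ADJUST * gap, 0.0)  # manager reacts only to today's stock level
    pipeline.append(order)
    print(f"week {week:2d}: stock = {stock:6.1f}, order placed = {order:5.1f}")
```

Because three weeks of orders are already in transit before the first delivery arrives, the stock climbs far above the target even though every individual decision looked reasonable at the time it was made.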
4.4. Principles of Systems Thinking According to Bennet and Bennet (2004)

Bennet and Bennet (2004) define the following overall general system principles, or system rules, which contribute to an understanding of systems thinking.

4.4.1. Structure is a key to system behavior

Useful insights into and understanding of how an organization (system) behaves can be derived from the system's structure. Systems thinking suggests that understanding the system's structure allows us to understand and predict the behavior of individual elements and their relationships. BIS structures should complement and enhance the sociocultural structures in the company, while facilitating cooperation and knowledge flows.

4.4.2. Systems that survive tend to be more complex

This principle derives from the environment becoming more complex. The system with the most options, variety, and flexibility is the one most likely to dominate and survive. This principle is important for BIS conceptualization.

4.4.3. Boundaries can become barriers

It usually takes more time and energy to send information or communicate through a boundary than within a system. This principle is increasingly important when considering the planning of BIS and any organizational divisions. Another important aspect is boundary protection, which should be carefully managed as a natural phenomenon of the system.

4.4.4. Systems can have many structures

Systems often exist within other systems, and all levels usually have different purposes or objectives. Recognizing complex structures is vital for management, information supply, and control activities within BIS.

4.4.5. Intervene in systems very carefully

Systems are complex structures, and any changes have to be carefully planned and undertaken. Sometimes small changes create big results, but more often than not, big changes have very little impact. Usually, the work is done through the informal network, giving it a vital role in the organization's performance. Informal systems should always be considered when making changes in organizations. This aspect is increasingly important in the implementation of integrated organizational information systems.
5. System Framework for BIS and Summary of the Systematization Approach

The theoretical study of systems enables us to look beyond the limits of the technologies and to envision the next generation of BIS. To summarize our understanding of systems in general, an overview of the main system characteristics is presented in Table 1. It is interesting to note that almost all authors outline the same aspects of system properties. Although some of the terms in the table differ slightly, we can state that an overall summary of the systematization approach is provided. Information systems cannot be examined via the analytical method alone, as it is too limited to capture their complex nature. Hence we propose a framework for the BIS systematization process (Fig. 3), which emphasizes the system services that BIS are expected to provide for the organization. The overview of system characteristics provides a sound understanding of the real processes one should take into account when conceptualizing a new BIS. The management role is to determine the strategic level of complexity and the context of BIS related to specific business organizational needs. Considering the management approach, BIS have to be examined from the point of view of evolving value-adding services designed for users within and outside the organization (Fig. 3). Systems theory enables decision makers to think about BIS beyond their mechanical aspects and to raise them to the level of sociocultural systems.

6. Conclusions and Next Steps

As Senge (2006) states, systems thinking is a "discipline for seeing wholes." He continues that it is a framework for seeing interrelationships rather than things, for seeing patterns of change rather than static "snapshots." Systems thinking becomes increasingly important as the world becomes more and more complex, organizations are being overwhelmed with information, and interdependency is far more complicated than anyone can manage. Nowadays, complex business organizations require even more sophisticated and complex BIS. Technologies enter our offices and homes very quickly, irreversibly changing our habits, our behavior, and our culture. However, BIS is not about technologies. BIS concerns the future of our organizations. Systematization and systems thinking give us directions for considering the design and development of "whole" encompassing patterns of future business models and new business paradigms, providing new knowledge-intensive channels. BIS will expand further, preparing organizations for the Web 2.0 era and even anticipating the Com 2.0 era (Labrogere, 2008). The Web 2.0 philosophy is the "Internet of Services," where all people, machines, and goods will have access to Web 2.0 by leveraging better network infrastructure. BIS should equip business organizations with new instruments and tools to better conceptualize information and knowledge perceived from the environment, to process it faster and more efficiently,
Table 1. Summary of Systems Characteristics.

Churchman (1968): objectives of the whole system; resources of the system; components of the system; management of the system; the system's environment.
von Bertalanffy (1974): interrelationship and interdependence of objects and their attributes; holism; goal-seeking; inputs and outputs; hierarchy; differentiation; equifinality; entropy.
Gharajedaghi (1999): purposefulness; emergent property; multidimensionality; openness; counterintuitiveness.
Coakes et al. (2004): holism; interdependence and emergence; transformation; feedback and regulation; environment; boundary; communication and control.
Bennet and Bennet (2004): purpose; system structure; system boundary.
Figure 3. Systematization process.
and react to it in a timely way. The term Com 2.0 takes the Web 2.0 principles further, putting the accent on mobility. BIS have to enable the further transformation and adaptation of the services demanded from IT, exploring the next level of delivering new training and encouraging knowledge sharing. The image of the systematization process (Fig. 3) depicts the complex role of BIS and its place as a knowledge gatherer, processor, distributor, and important strategic mediator within the new business organization. The present research focuses on the general system-theoretical background, making an attempt to summarize the conceptual visions for system development. The limitations of this approach concern the specific aspects of information systems, which combine both technological and social-organizational characteristics. Nor does the chapter present systematic thinking from an epistemological point of view; rather, it focuses on a more pragmatic, organizational and IT-centered approach, looking to gain a better understanding of how to cope with evolving and complex information systems. Information systems are unique technological assets which, along with organizational knowledge and human resources, can strategically influence a company's development. Further research should focus on system theory and the Internet (can the Internet be explored from the point of view of the systems approach?). The Internet and Web 2.0
will soon offer services for all areas of life and business, entertainment, and social contacts. Those services will require a complex service infrastructure, including service delivery platforms that bring together demand and supply and require new business models and approaches to systematic and community-based innovation. BIS should enable companies to prepare for this coming major shift to Web 2.0 business. Matching these technological and managerial perspectives coherently in an ever-growing complex environment is the next challenge for systems theory.

References

Bennet, A and D Bennet (2004). Organizational Survival in the New World: The Intelligent Complex Adaptive Systems. Burlington: Elsevier.
von Bertalanffy, L (1974). Perspectives on General System Theory. New York: George Braziller.
Buckley, N (2008). Web 2.0 and the "naming of parts." International Journal of Market Research, 50 (5: Web 2.0 Special Issue).
Churchman, CW (1968). The System Approach. New York: Delacorte Press.
Clark, T, M Jones and C Armstrong (2007). The dynamic structure of management support systems: Theory development, research focus, and direction. MIS Quarterly, 31(3), 579–615.
Coakes, E, B Lehaney, S Clarke and G Jack (2004). Beyond Knowledge Management. IGP.
Dixon, R (2006). Systems Thinking for Integrated Operations: Introducing a Systemic Approach to Operational Art for Disaster Relief. Fort Leavenworth, KS: US Army Command and General Staff College.
Durlauf, S (2005). Complexity and empirical economics. The Economic Journal, 115(504), 225–243.
Gharajedaghi, J (1999). Systems Thinking: Managing Chaos and Complexity: A Platform for Designing Business Architecture. Elsevier.
IBM (2007). Succeeding through service innovation: Developing a service perspective on economic growth and prosperity. Cambridge Service Science, Management and Engineering Symposium, 14–15 July 2007, Moller Centre, Churchill College, Cambridge, UK.
Labrogere, P (2008). Com 2.0: A path towards web communicating applications. Bell Labs Technical Journal, 13(2), 19–24.
Laudon, K and J Laudon (2006). Management Information Systems, 9th edn. New Jersey: Pearson.
Lufa, M and L Boroacă (2008). The balance problem for a deterministic model. International Journal of Computers, Communications & Control, III (Suppl. issue), Proceedings of ICCCC 2008, 381–386.
Metcalfe, M (2005). Knowledge sharing, complex environments and small-worlds. Human Systems Management, 24, 185–195.
Schoderbek, P, C Schoderbek and A Kefalas (1990). Management Systems: Conceptual Considerations, 4th edn. Boston: BPI.
Senge, P (2006). The Fifth Discipline: The Art and Practice of the Learning Organization. New York: Doubleday.
Uden, L and M Naaranoja (2008). Service innovation by SME. In Proceedings of KMO Conference 2008, Vaasa.
Yourdon, E (1989). Modern Structured Analysis. Englewood Cliffs: Prentice Hall.
Biographical Note

Mrs. Albena Antonova is a PhD student and junior researcher at the Center of IST, Faculty of Mathematics and Informatics, Sofia University, Bulgaria. The main topic of her PhD thesis is knowledge management systems. She received her master's degree in Business Administration from the University of Nantes, France. Currently, she is involved in several international RTD projects concerning knowledge management, e-business, e-learning, and other topics. Mrs. Antonova is an assistant lecturer for courses on knowledge management, MIS, and project management at Sofia University.
Chapter 12
A Structure for Knowledge Management Systems Assessment and Audit

JOAO PEDRO ALBINO∗, NICOLAU REINHARD† and SILVINA SANTANA‡

∗Department of Computer Science, School of Science, Sao Paulo State University-UNESP, Av. Luiz Edmundo Coube, 14-01-17033-360 – Bauru – SP, Brazil
[email protected]
†Department of Management, University of Sao Paulo-USP, Av. Luciano Gualberto, 908, Room G-12-05508-900 – São Paulo – SP, Brazil
[email protected]
‡Department of Economy, Management and Industrial Engineering, University of Aveiro, Campo Universitario de Santiago-3810-193 – Aveiro, Portugal
[email protected]

Knowledge Management Systems (KMS) seek to offer a framework to stimulate the sharing of the intellectual capital of an organization so that the resources invested in time and technology can be effectively utilized. Recent research has shown that some businesses invest thousands of dollars to establish knowledge management (KM) processes in their organizations. Others are still in the initial phase of introduction, and many of them would like to embark on such projects. It can be observed, however, that the great majority of such initiatives have not delivered the returns hoped for, since the greatest emphasis is given to questions of technology and to the methodologies of KM projects. In this study, we call attention to an emerging problem which recent studies of the phenomenon of knowledge sharing have not sufficiently addressed: the difficulties and efforts of organizations in identifying their centers of knowledge, in developing and implementing KM projects, and in utilizing them effectively. Thus, the objective of this chapter is to propose a framework to evaluate the present state of an organization's processes and activities and identify which information and communication technologies (ICT) are supporting these initiatives, with the intention of diagnosing its real need for KM. Another objective of this instrument is to create a base of knowledge, with all the evaluations undertaken in organizations in different sectors and areas of specialization available to all participants in the process, as a way of sharing knowledge for continual improvement
and dissemination of best practices. About 30 companies took part in the first phase of the investigation in 2008, and the knowledge base is under construction.

Keywords: Knowledge management audit; knowledge management benchmark; collaborative benchmarking; knowledge management audit tool.
1. Introduction

Much research and discussion has taken place regarding the important role of the knowledge held by organizations. Confronting a very complex setting in the corporate world and in society in general, we see that economic and social phenomena of worldwide reach are responsible for the restructuring of the business environment. The globalization of the economy and, above all, the dynamics afforded by information and communication technologies (ICT) present a reality which modern organizations cannot fight. It is in this context that Knowledge Management (KM) is transformed into a valuable strategic resource. The creation and implementation of processes which generate, store, manage, and disseminate knowledge represent the newest challenge to be faced by companies. Knowledge Management Systems (KMS) seek to offer a framework to stimulate the sharing of the intellectual capital of an organization so that the resources invested in time and technology can be effectively utilized. A survey undertaken with a sample of 200 Brazilian executives from large organizations revealed that there have been advances in this area, since the companies possess reasonable perceptions of the importance of KM; however, there remain gaps to be overcome (E-Consulting Corps, 2004). Other recent studies show that certain organizations remain in the initial stage of developing and implementing KM projects, and that many would like to get started with such projects (Serrano Filho and Fialho, 2006). Many companies have dedicated efforts and invested considerable financial resources and time to implement KM and to motivate the educational evolution of their personnel, without, however, obtaining the results hoped for, or even obtaining adequate returns on the resources invested (Albino and Reinhard, 2005). With all of the current technological apparatus, we live in an organizational environment where the transfer of knowledge and the exchange of information are efficient and where numerous organizations already work with processes to collect and transfer best practices. However, there is a great difference between what companies know and what they effectively put into practice (action). There exist gaps between what companies know and what they do, and the causes of this gap are still not totally understood (O'Dell and Grayson, 1998). According to Keyes (2006), it becomes necessary to use tools which can guide the centers of knowledge to the areas which effectively demand greater attention,
and also to identify which management practices are already in use by the organization, so that this knowledge can be stored, nurtured, and disseminated in an equitable manner. Due to their importance, initiatives in KM must be continually verified in order to evaluate whether they are effectively moving toward attaining their objectives for success. Measurement procedures must include not only how the organization quantifies its knowledge capital, but also how its resources are allocated in order to nourish its growth. However, knowledge is very difficult to measure, due to its intangibility, according to Chua and Goh (2007). There is recognition of the need to understand and measure the activity of KM so that organizations and organizational systems can achieve what they do best, and so that governments can develop policies to promote the benefits obtained from such practices, according to the OECD (2003), the Organisation for Economic Co-operation and Development. Among the various categories of investment related to knowledge (education, training, software, research and development, among others), the management of knowledge is the least known, from the qualitative point of view as much as from the quantitative, as well as in terms of financial costs and returns (OECD, 2003). With all the questions raised in the above paragraphs, this chapter seeks to call attention to an emerging problem which recent studies of the phenomenon of knowledge sharing have not sufficiently addressed: the difficulty and the efforts of organizations in identifying their knowledge centers, in developing and implementing KM projects, and in utilizing them effectively. Thus, the objective of this chapter is to propose a framework to audit the present state of an organization's processes and activities and to identify which ICTs are supporting these initiatives, with the intention of diagnosing its real need for KM.

2. Defining Knowledge

Knowledge, according to Aurum et al. (2008), can be defined as a "justified personal belief" that increases an individual's capacity to take effective action. Knowledge, according to the authors, is defined as awareness, or familiarity acquired through study, investigation, observation, or experience over time. Knowledge, in contrast to information, is related to action. It is always knowledge to some end. Knowledge, like information, relates to meaning. It is specific to the context and it is relational, according to Albino and Reinhard (2005). Davenport et al. (2003) define KM as a collection of processes which govern the creation, dissemination, and utilization of knowledge in order to fully meet the objectives of an organization.
According to these authors, data are a collection of distinct and objective facts (attributes or symbols) relative to events. In a business context, data can be described as structured records of transactions. Information is data endowed with meaning within a context. In the business context, information can be described as something that permits decision making and the execution of an action, owing to the meaning it has for that company. Knowledge is derived from information, in the same way that information is derived from data. Simply stated, Davenport et al. (2003) consider that an individual generates knowledge based on the interaction between a body of information obtained externally and the knowledge and information already in his/her mind. The construction of knowledge is a multifaceted effort, assert Albino and Reinhard (2005). Simply stated, it requires a combination of social and technological actions. A model of KM can be seen in Fig. 1. For a company to build capability for strategic knowledge, it is proposed that at least four components must be employed: knowledge systems, computer networks, knowledge workers, and organizations which learn. Nonaka and Takeuchi (1997) identified two basic types of knowledge, as shown in Fig. 2 and summarized as:

• Tacit or implicit knowledge. Personal knowledge, incorporated into the actions and experiences of individuals, and specific to the context. Since it involves intangible values, ideas, assumptions, beliefs, and expectations, as well as a
Figure 1. A KM model: knowledge systems (capture systems, data banks, decision tools), computer networks (local, corporate, external), knowledge workers (key persons, skills, meritocracies), and learning organizations (collaboration, training, ethos).
Figure 2. KM cycle (Source: Nonaka and Takeuchi, 1997): explicit knowledge (paper or software) and implicit knowledge (the human brain) are linked through representing, understanding, acting, hypothesizing, and validating/capturing, turning current knowledge into new knowledge and innovation.
particular form of execution of activities, it is a type of knowledge difficult to formulate and communicate. • Explicit knowledge. Knowledge articulated in formal language, built into products, processes, services and tools — or recorded in books and documents, systematized and easily transmitted — including by grammatical statements, mathematical expressions, specifications, manuals, periodicals, and so on. As defined by Stewart (2003): “Tacit knowledge is not found in manuals, books, data banks or archives. It is manifested preferably in oral form.”
Thus, tacit knowledge, also according to Stewart (2003), is disseminated when people meet and tell stories, or when a systematic effort is made to uncover it and make it explicit.

2.1. KM: Principal Factors

Knowledge Management promotes an integrated approach to identifying, capturing, retrieving, sharing, and evaluating an enterprise's information assets, assert Akhavan et al. (2005). These information assets may include databases, documents, policies, and procedures, as well as the uncaptured tacit expertise and experience stored in individuals' heads. According to Keyes (2006), KM, implemented by and at the organizational level and supporting empowerment and responsibility at the individual level, focuses on understanding the knowledge needs of an organization and the sharing and creation of knowledge. The main purpose of KM, says Keyes (2006), is to connect people.
Figure 3. Juxtaposition of KM factors: people (work force), organizational processes, and technology (IT infrastructure). Source: Awad and Ghaziri (2004).
The basic tripod of KM is composed, according to Awad and Ghaziri (2004), of the juxtaposition of three basic factors: people, information technology (IT), and organizational processes (Fig. 3). One of the three legs of KM initiatives, IT, has brought great benefits to organizations. New technologies for high-bandwidth communication, cooperative and remote work, objects, and multimedia have enlarged the informational environment, and today there are innumerable tools to facilitate or support current KM projects, according to Albino and Reinhard (2005). Technologies such as corporate knowledge portals (CKP), knowledge bases and maps, discussion and electronic chat software, mapping of tacit and explicit knowledge, data mining, and document management, among others, are already available and are offered by various vendors. A basic taxonomy for KM tools can be seen in Fig. 4.

Figure 4. Taxonomy of KM tools: the knowledge organization cycle (create/acquire, organize/store, distribute/share, apply) surrounded by the facilitators leadership, culture, technology, and metrics, and by processes. Source: Adapted from Nogeste and Walker (2006), p. 9.

According to Serrano Filho and Fialho (2006), the conceptual structure in Fig. 4 includes the following phases of the KM life cycle: creation, collection or capture, organization, refinement, and diffusion of knowledge. The outermost layer of Fig. 4 represents the organizational environment: technology, culture, consumer and customer intelligence, metrics, competition, and leadership. In this form, the structure greatly influences how the organization develops and implements its KM life cycle, which, Awad and Ghaziri (2004) assert, can also be defined as a KM process. Also according to these authors, the final step is the maintenance phase, which guarantees that the knowledge disseminated will be accurate, reliable, and based on the standards of the company defined a priori. In summary, the principal topics of this structure are:

• Acquire: the act of prospecting, visualizing, evaluating, qualifying, triaging, selecting, filtering, collecting, and identifying.
• Organize/Store: the act of making explicit, analyzing, customizing, contextualizing, and documenting.
• Distribute/Share: the acts of disseminating, sharing, and distributing.
• Apply: the act of producing and using.
• Create: the act of evolving and innovating.

2.2. KMS Life Cycle

The construction of KM can be seen, according to Serrano Filho and Fialho (2006), as a life cycle that begins with a master plan and a justification and ends with a structured system for attaining the KM requirements of the whole organization. A knowledge team representing the ideas of the company and a knowledge developer with experience in the capture, projection, and implementation of knowledge help guarantee a successful system. However, before constructing a KMS it is necessary, according to Tiwana (2002), to define the principal sources from which the knowledge that forms the system will flow. Thus, three basic stages are involved in the process of knowledge and learning. In summary, these three stages comprise:

• Acquisition of knowledge. The process of developing and creating ideas, skills, and relationships.
• Sharing of knowledge. This stage comprises dissemination and makes available that which is already known. This focus on collaboration and on collaborative support is the principal factor that differentiates a KMS from other information systems.
• Utilization of knowledge. The utilization of knowledge gains prominence when learning is integrated into an organization. Any knowledge which is available and systematized in the organization can be generalized and applied, at least in part, to a new situation, and any available computational infrastructure which supports these functions can be utilized.

Using this technological approach, these three stages and their IT functionalities are represented in Fig. 5; the figure is a simplification of Fig. 4. According to Tiwana (2002), these three stages need not occur in sequence; in some situations, they can occur in parallel.

Figure 5. Stages of utilization of knowledge (acquisition, sharing, utilization) and their IT functionalities, including databases and capture tools, browsers and web pages, document distribution systems, sharing and collaboration tools, communication links, networks, and intranets. Source: Tiwana (2002), p. 72.
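To make the taxonomy concrete, the short sketch below (Python, purely illustrative) records the three stages together with a few example supporting tools. The stage names follow the text above, while the particular tools listed under each stage and the helper function are assumptions for illustration, not a mapping taken from Tiwana (2002).

```python
# Illustrative sketch only: the three knowledge stages described above,
# each with a few example IT tools. The tool lists and the helper name
# are assumptions for illustration, not a mapping taken from Tiwana (2002).
KNOWLEDGE_STAGES: dict[str, list[str]] = {
    "acquisition": ["databases", "capture tools"],
    "sharing": ["collaboration tools", "communication links", "intranets"],
    "utilization": ["browsers", "web pages", "document distribution systems"],
}

def unsupported_stages(available_tools: set[str]) -> list[str]:
    """Return the stages for which none of the example tools is reported."""
    return [
        stage
        for stage, tools in KNOWLEDGE_STAGES.items()
        if not any(tool in available_tools for tool in tools)
    ]

if __name__ == "__main__":
    reported = {"databases", "intranets"}   # tools an audit respondent reports
    print(unsupported_stages(reported))     # ['utilization']
```

A checklist of this kind simply flags stages that currently lack any reported IT support, which is the sort of observation the audit described later in the chapter is intended to produce.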
3. Auditing KM

Knowledge Management, according to Keyes (2006), offers a methodology for the creation and modification of processes to promote the creation and sharing of knowledge.
However, an understanding of what constitutes organizational or corporate knowledge is still emerging slowly, even in large Brazilian organizations (E-Consulting Corps, 2004). According to Akhavan et al. (2005), even among leading companies in the market which adopted and implemented KM, a large proportion either failed in the process or did not reach the expected success. According to Delgado et al. (2007), many KM initiatives fail and most projects are abandoned because an inappropriate methodology was used. This led, according to Akhavan et al. (2005), to the perception that KM initiatives represent a high-risk venture.

Taking these questions into consideration, the KM proponent or professional should always attempt to evaluate the current state of the organization before initiating a KM program. Proceeding in this manner, the strategy of KM projects will be based on solid evidence of the current state of KM activities and processes and, from that point on, the best manner of efficiently implementing KM can be defined. Thus, there will be, according to Delgado et al. (2007), a solid basis for determining exactly why, how, and where beneficial results can be obtained.

According to Handzic et al. (2008), before practitioners embark on the development of a KM initiative, they need to understand the key elements associated with KM and their inter-relationships. They also need to analyze the ways in which knowledge is managed by the organization and the degree to which current practices address the goals of the organization. The deficiencies or gaps detected by such an audit can then lead to the development of a KM initiative aimed at supporting work more effectively, thus ensuring that the goals are well met.

Thus, an audit should be the first phase or stage of a KM initiative and should be utilized to provide a complete investigation of information and knowledge policies and their structure. A complete evaluation should analyze the organization's knowledge environment, its manner of sharing, and its use. This process, assert Chua and Goh (2007), also permits a diagnosis of the behavioral and social culture of the people in the organization through an investigation of their perceptions regarding the effectiveness of KM. Therefore, any instrument for the evaluation and auditing of KM should include the following areas of investigation (Handzic et al., 2008):

• Evaluation of intellectual assets;
• Knowledge as a strategic asset;
• The collaborative environment;
• Culture of internal learning;
• Culture of information sharing;
• Importance of the process;
• Structure of communication;
• Motivation and rewards initiatives.
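As an illustration of how these investigation areas might be operationalized, the sketch below groups hypothetical Likert-scale statements under each area and averages the answers per area. The statement texts and function names are invented for the example; they are not items from KMAuditBr or from Handzic et al. (2008).

```python
# Hypothetical sketch of an audit instrument organized by investigation area.
# Area names follow the list above; the statement texts are invented examples.
from statistics import mean

AUDIT_AREAS: dict[str, list[str]] = {
    "Evaluation of intellectual assets": ["Intellectual assets are inventoried and reviewed."],
    "Knowledge as a strategic asset": ["Knowledge goals appear in the business strategy."],
    "The collaborative environment": ["Teams routinely work across departmental boundaries."],
    "Culture of internal learning": ["Lessons learned are recorded after each project."],
    "Culture of information sharing": ["Sharing information is rewarded, not penalized."],
    "Importance of the process": ["Knowledge flows are mapped to business processes."],
    "Structure of communication": ["Formal channels exist to disseminate new knowledge."],
    "Motivation and rewards initiatives": ["Incentives recognize those who share knowledge."],
}

def area_means(responses: dict[str, list[int]]) -> dict[str, float]:
    """Average the 1-5 Likert answers collected for each investigation area."""
    return {area: mean(answers) for area, answers in responses.items() if answers}

if __name__ == "__main__":
    # Two respondents answering every area with 3 and 4 on the Likert scale.
    example = {area: [3, 4] for area in AUDIT_AREAS}
    print(area_means(example))   # every area averages 3.5
```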
In conclusion, Keyes (2006) asserts that the most important characteristic to consider when defining an audit and evaluation instrument is whether the measuring
process shows whether knowledge is being shared and utilized. To this end, the evaluation must be linked to the maturity of the KM initiative, which has a life cycle that progresses through a series of phases.

4. Framework for Assessment and Audit

The KMAuditBr instrument is a tool designed to help organizations perform an initial high-level evaluation of the state of the KM process within the organization. The objective of this instrument is to provide, like the KMAT tool, a qualitative approach for the evaluation of KM activities and processes internal to the organization (Chua and Goh, 2007; Jin et al., 2007; Nogeste and Walker, 2006). Upon completing all the items in the instrument, the organization will have a panoramic view of the areas or topics which require greater attention, and will be able to identify the KM practices in which the company already shows excellence in execution.

4.1. Structure of the Instrument

The structural model of KMAuditBr is based on the KM model shown in Fig. 4, which proposes four facilitators (leadership, culture, IT, and metrics) that can be utilized to nourish the development of organizational knowledge throughout the KM life cycle. This model places the principal activities and the KM facilitators within a single dynamic system, according to Nogeste and Walker (2006). Each part of the instrument represents a grouping of questions which permits not only evaluation of the state of KM practices in relation to the model, but also collection of data for evaluation of performance, and thereby the establishment of benchmarking with other organizations. The basic architecture of the auditing and evaluation process can be seen in Fig. 6.

Based on the concepts discussed by Keyes (2006) and by Nogeste and Walker (2006), and on the concepts presented in the KMAT, in summary this instrument has the following objectives:

• Permit an initial high-level evaluation of KM in organizations;
• Evaluate the state of the KM process within the organization;
• Provide a qualitative approach to internal KM activities and processes through proportional measurement;
• Obtain a panoramic view of the areas or topics requiring greater attention;
• Identify the KM practices in which the company already presents excellence in execution;
• Permit external benchmarking with companies in the same line of business.
Figure 6. Model of KM architecture: KMAuditBr components (opportunities and strengths; current activities and processes; knowledge management components; organizational strategy; communication and collaboration infrastructure; other questions) feeding internal and external evaluation, a knowledge base, and benchmarking with the sector.

Figure 7. State of KM processes and activities: a scale from 24 to 120 points divided into the Stagnant, Initial, Prioritize and select, and Refine and continue states.
At the end of the process, the instrument permits positioning of the true state of the organization in relation to the KM processes and activities in operation (Fig. 7). Four states were initially defined:

(i) Stagnant (an insignificant or basic number of processes, or none);
(ii) Initial (where KM activities and processes are few);
(iii) Prioritize and select (various procedures are in operation, including some with IT support, but without coordination or a coherent plan); and, last,
(iv) Refine and continue (an excellent relationship between KM processes, project coordination, and the use of IT, with quality control initiated for continuous refinement).

Finally, after carrying out evaluations in organizations from various sectors and areas of activity, the intention is to create a knowledge base of the evaluations performed and make it freely available over the network to all participants in the process. In this manner, with the initial information utilized as a knowledge base, collaborative benchmarking can be carried out between participating organizations, creating a structure in which a group of companies shares knowledge about a certain area of activity with the intention of improving themselves through mutual learning.

4.1.1. Initial data

In the initial part of the instrument, information is collected regarding the organization (Organizational Characteristics). Also collected is information on the respondent to the questionnaire (Individual Characteristics), questions relative to his or her perception of KM (the necessity of KM), as well as whether there exist within the organization, to the respondent's knowledge, investments in KM (specific sectors or areas, level of investment, etc.). This initial information, along with other information collected throughout the questionnaire, is utilized to generate a database permitting the development of collaborative benchmarking among organizations. In this type of evaluation and comparison, what is sought is the creation of a structure in which a group of companies shares knowledge about a certain activity, with the objective of improving themselves through mutual learning.

4.1.2. Evaluation of strengths and opportunities

The second stage of the questionnaire concentrates on the concepts presented by Nogeste and Walker (2006) and permits a deeper analysis of KM activities in organizations. This stage is based on five sections, each comprising a wide spectrum of KM activities. In summary, this stage of the questionnaire permits examination of the following items:

• Process. The KM process includes the steps of action that an organization utilizes to identify the information it needs and the manner in which it collects, adapts, and transfers this information throughout the organization (verification of the flow of information).
• Leadership. Leadership practices include questions of strategy and the way the organization defines its business and utilizes its knowledge assets to reinforce its principal competencies. Knowledge management should be directly related to the manner in which the organization is managed.
• Culture. Culture reflects how an organization facilitates learning and innovation, including the way it encourages its workers to construct a base of organizational knowledge so as to add value for the customer. In some organizations, knowledge is not shared because the rewards, recognition, and promotions go to those who possess knowledge and not to those who share it. In this situation, workers do not have the habit of sharing, to the extent that they do not understand that what they have learned could be of value to others; thus, they do not know how or with whom to share knowledge.
• Technology. Practices with respect to technology concentrate on how an organization equips its members to facilitate communication among them, such as whether systems exist to collect, store, and disseminate information. The great danger lies in overestimating or underestimating investment in technology (Jin et al., 2007).
• Measurement. Measurement practices include not only how an organization quantifies its knowledge capital, but also how resources are allocated to foster its growth within the organization. Since it is intangible, organizational knowledge is very difficult to measure. Traditional accounting principles do not recognize knowledge as an asset; one of the problems is that organizations see knowledge as one of their most important assets, yet on the balance sheet it is still carried as an expense and not as capital (Keyes, 2006).

As presented, the objective of this second stage is to assist organizations in evaluating themselves and in verifying where their strengths, weaknesses, and opportunities are located within the KMS model of Fig. 4.

4.1.3. Evaluation of activities and processes in operation

The third and final stage of the questionnaire permits an evaluation of the activities and processes in operation in the organization, which contemplates the stages of utilization of knowledge shown in Fig. 5 (acquisition, sharing, distribution, and utilization). The items analyzed in this stage of the questionnaire permit observation of which types of IT functionality are supporting, or should support, KM efforts. Besides this, it also permits an evaluation of which IT structures are in place. According to Deng and Hu (2007), KM should be supported by a complex of technologies for electronic publishing, indexing, classification, storage, contextualization, and information recovery, as well as for collaboration and the application of knowledge. To account for the different needs of KM applications, this study took as its basis the KMS architecture delineated by Lawton (2001) and detailed in Fig. 8.
Figure 8. Model of KM architecture. Tiers, from bottom to top: information and knowledge sources (word processor, database, electronic document management, electronic mail, web, people); low-level IT infrastructure (e-mail, file servers, and internet/intranet services); document and content management with the knowledge repository; corporate taxonomy (experts network, knowledge map); KM services (discovery of data and knowledge, collaboration services); and the interface and application tier (knowledge portal, competitive intelligence, best-practice systems, product development, CRM). Source: Adapted from Lawton (2001), p. 13.
According to Lawton (2001, p. 13), knowledge management "is not simply a single technology, but rather a complex composed of tools for indexing and classification, and mechanisms for information recovery, in conjunction with methodologies designed to obtain the desired results for the user".
The principal available technologies, according to Deng and Hu (2007), allow knowledge workers to work with content and workflow management and to categorize knowledge and direct it to the people who can benefit from it. They also offer numerous options for improving customer relationship management (CRM), search and discovery of knowledge, and streamlining of business processes, as well as tools for collaboration and group work, among other objectives. Thus, an architecture such as that in Fig. 8, according to Deng and Hu (2007), synthesizes the IT needs for each of the stages of KM. In summary, the components of this KMS architecture are the following:

• Sources of explicit knowledge. The bottom tier of the architecture contains the sources of explicit knowledge. Explicit knowledge is located in repositories such as documents or other types of knowledge items (for example, e-mail messages or records in databases). Standard editing tools (such as text editors) and database management systems (DBMS) support this first tier.
• Low-level IT infrastructure. File servers and e-mail programs, along with intranet and internet services, support the low-level IT infrastructure tier.
• Document and content management. Document and content management systems are the applications which maintain the knowledge repository tier.
• Corporate taxonomy. Knowledge needs to be organized in accordance with the context of each organization, based on an organizational taxonomy which creates a knowledge map supported by classification and indexing tools.
• KMS. In this tier, the supporting tools are the knowledge discovery systems and those which offer collaboration services.
• Interface and application tier. Distribution of knowledge in the organization can be effected through portals for different users and through applications such as distance learning (e-learning), competency management, intellectual property management, and CRM.
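One way to use such a tier model during an audit is as a simple coverage checklist. The sketch below is an assumption-laden illustration: the tier names follow the list above, but the component granularity, the coverage metric, and the function name are hypothetical rather than part of Lawton's architecture or of KMAuditBr.

```python
# Illustrative coverage checklist over the KMS tiers summarized above.
# Tier names follow the text; the component granularity, the coverage
# metric, and the function name are assumptions, not part of Lawton (2001).
KMS_TIERS: dict[str, set[str]] = {
    "sources of explicit knowledge": {"text editors", "DBMS"},
    "low-level IT infrastructure": {"file servers", "e-mail", "intranet/internet services"},
    "document and content management": {"document management", "content management"},
    "corporate taxonomy": {"classification tools", "indexing tools", "knowledge map"},
    "KM services": {"knowledge discovery", "collaboration services"},
    "interface and application tier": {"knowledge portal", "e-learning", "CRM"},
}

def tier_coverage(installed: set[str]) -> dict[str, float]:
    """Fraction (0.0-1.0) of the example components present in each tier."""
    return {tier: len(parts & installed) / len(parts) for tier, parts in KMS_TIERS.items()}

if __name__ == "__main__":
    audit_findings = {"file servers", "e-mail", "DBMS", "knowledge portal"}
    for tier, score in tier_coverage(audit_findings).items():
        print(f"{tier}: {score:.0%}")
```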
In conclusion, the objective of this third stage of the questionnaire is to assist organizations in evaluating themselves and in verifying which KM activities and processes are in operation and how IT is supporting them, within the KMS model shown in Fig. 5 and using the KMS architecture of Fig. 8 as a benchmark.

5. Research Methodology

Exploratory research was undertaken with the purpose of verifying the feasibility of the instrument developed, thus generating a set of indicators with regard to KM and indicating the positioning of the organizations in a quadrant with four possible situations: initial, stagnant, select and prioritize, or refine and continue. An initial version of the instrument was submitted to a pilot test between December 2006 and March 2007 and applied to some organizations from Portugal, Spain, and Brazil. This first application generated important information for the development of new versions, leading to the current structure of the instrument. The data survey presented here refers to the third version of the instrument and was carried out between September 2007 and May 2008. The questionnaire was answered mostly by MBA students and by a group of invited companies established in the interior of the state of São Paulo, operating in diverse sectors.

5.1. Results and Analysis

Given the complexity of the instrument used, as well as the difficulty of convincing organizations to participate in the research, a sample of 81 participants was obtained, selected from 120 applied questionnaires. These data were compiled in order to generate a knowledge base on the use of KM by Brazilian organizations. After a period of one year, the data will be made available to all participating
companies, enabling collaborative benchmarking; this represents the second stage of this research project. In the following sections, a summary of the results obtained by applying the research instrument is presented.

5.1.1. Characterization of the organizations and the individuals

In this part of the questionnaire, information was collected regarding the organizations and the respondents, as well as the needs pointed out by the respondents concerning KM. The majority of the respondents (69%) belong to the industrial sector, and most of their companies operate internationally (52%), followed by companies operating domestically (33%). With regard to the number of employees, 44% of the companies surveyed have more than 500 employees. Concerning revenue, 45% of the sample reported figures over R$50 million, which, according to the BNDES (2008), classifies them as large companies. Middle-sized companies represented 39% of the sample (revenue of up to R$50 million, about US$29 million), and small companies, with revenue of up to R$2 million (about US$1.1 million), accounted for 16%.

Most of the respondents were male (65 participants) and held specialization courses or MBAs (50%). Seventeen percent of the respondents hold a position of manager or administrator, 16% are general managers of their organization, and 14% work in leadership or supervision. Several other job titles (such as analyst, engineer, and editor) account for the large share of "others" (49%). With regard to time in the job, 45% of the respondents have been working in their function for 1 to 5 years, 28% for between 5 and 10 years, and 16% for more than 10 years; 11% have been in their function for less than 1 year. Of all the participants, 54% work at the operational level, 27% at the managerial or tactical level, and 19% at the strategic level.

5.1.2. The need for KM

The results presented in this section show the landscape, the use, and the reality of KM among the research participants. The respondents were first asked for their opinion about KM. They stated that KM is vital to their business (43%) and that KM can help the company better organize its information (29%). For 18% of the sample, KM is an element that modifies the manner in which the organization conducts its business.
It is important to highlight that, despite the KM boom in Brazil from the year 2000 onward, 7% of the respondents are still unaware of what KM is, and 2% have never heard of it.

Another interesting aspect of this stage was the collection of data regarding the effective use of KM in the organizations and whether they are investing in initiatives of this kind. It was observed that 38% of the sample companies already have KM initiatives, and in 21% of them certain initiatives are already planned. Comparing this result with a large survey of 200 companies carried out by HSM Management magazine in 2004, an improvement in awareness among Brazilian organizations of the role of KM can be seen, which is reflected in the number of companies that have already adopted some of its practices (E-Consulting Corps, 2004). Of those which have not yet adopted KM, a significant number intend to do so. Nevertheless, when the respondents were asked whether their companies have a specific sector for managing KM initiatives, 53% answered that they did not know. In 43% of the cases studied here, there is already a specific KM area, reinforcing the observation that KM initiatives have been increasing in Brazil.

Figure 9 shows the amounts invested in KM. In 59% of the cases, the respondents did not know or were not willing to disclose the amount. However, in 18% of the cases the volume reaches US$120,000, and in 14% the investment is more than US$600,000. According to the other respondents, investments in KM can reach US$300,000 (5%) or lie between US$300,000 and US$600,000 (4%).

Some questions were designed to obtain more information about the KM implementation process in the companies surveyed. The respondents were asked to describe the main difficulties in applying KM initiatives in their organizations.
Figure 9. KM budget.
Figure 10. Difficulties in having KM.

Figure 11. Meaning of KM.
The answers, shown in Fig. 10, indicated that the main factors are: first (47%), an organizational culture that does not stimulate knowledge sharing; second (25%), people who are largely unaware of the subject; and third (13%), a lack of adequate technology.

Regarding the meaning of KM, the research demonstrated, as shown in Fig. 11, that 49% of the respondents see KM as the modeling of corporate processes from the knowledge generated; that is, KM is perceived as the structuring of organizational activities, composing, in fact, a corporate management system. The respondents see KM as an information management philosophy in 17% of the cases, or only as a technology that enables KM in 12% of the cases.
That is, in most of the answers, KM is still perceived only as either a technology or a corporate management system. A reasonable percentage nevertheless indicates KM as a strategy or a means by which companies can obtain competitive power (16%), and 6% of the answers effectively understand that KM initiatives need an organizational policy composed of systems, a cultural policy of corporate proportions, and other initiatives that enable KM. In comparison with the research performed in 2004 by HSM Management magazine, the global view of KM inside Brazilian companies has been improving in the last few years, despite uneven perceptions.

When asked about the impact of KM on organizations, 61% of the respondents indicated that it will bring more consistent and optimized development of the collaborators. Other positive impacts refer to the fact that adopting KM practice will dictate the companies' survival capacity, that is, longevity (19% of the answers), and that such companies will be the winners (17%).

For the question in which the respondents could indicate the possible benefits obtained from the implementation of KM initiatives, 26% of them pointed out that the most significant would be the better use of knowledge, as shown in Fig. 12. Following that, 21% of the respondents indicated that the benefit would be connected with a better time-to-market and, as a consequence, with an improved capacity to make decisions efficiently. The respondents also stated that the optimization of processes (20%) is an evident benefit, as well as cost reduction (12%). As shown in Fig. 12, aspects such as differentiation from other companies (10%) and revenue increase (8%) were considered less important.

The instrument also sought to identify the critical success factors of KM initiatives. As shown in Fig. 13, the first is clear and objective communication, with a significant 35% of the answers.
Figure 12. Benefits in adopting KM.
Figure 13. Actions to facilitate KM implementation.
This demonstrates that structuring an organizational communication plan focused on transmitting information about the KM initiative to the collaborators is an essential process. It can be concluded that clear, objective communication and adequate publicizing of KM initiatives are important in determining a successful project. Following that, the respondents indicated another noteworthy aspect: training and cultural awareness of collaborators, with 24% of the answers (see Fig. 13). This may indicate that resistance to adopting KM procedures in organizations results from an organizational culture that is inadequate, or not suited to a competitive environment that is collaborative and competitive at the same time. It can then be inferred that education in KM, or cultural awareness, must come before training, in order to establish the absorption process for all the KM steps: creation, collection or capture, organization, refinement, and diffusion of knowledge.

The third aspect concerns senior management support, that is, KM initiatives accomplished with the involvement and commitment of senior management. The answers show that it is essential to present KM activities from the point of view of the added value and the probable results for the entire organization; this is quite a motivating aspect, provided senior management offers its support.

5.1.3. Use of ICT

This section of the questionnaire verified which ICT instruments are most employed on a daily basis by the research participants in order to disseminate knowledge. E-mail (26%) was highlighted as the most employed, as shown in Fig. 14, probably because of its ease and simplicity.
Figure 14. Most employed ICT instruments on a daily basis.

Figure 15. KM functionality.
Other noteworthy instruments are the internet and intranets (25%), corporate portals (14%), and brainstorming (10% of the answers). Electronic Document Management (EDM) is also important in the dissemination of explicit and documented knowledge (9%). To conclude the descriptive part, the respondents were asked to point out the main functionality of ICT instruments. The answers showed that ICT is used, first, to improve the decision-making process (32%); second, to stimulate innovation (25%); and third, to increase individual productivity (23%) (see Fig. 15).
6. The Audit and Diagnosis Processes

In order to evaluate and diagnose the KM activities and processes in progress, some data and elements were intentionally chosen; grouped together, they refer to 14 organizations. This intentional choice was made because the researchers had easy access to a limited number of organizations; because these organizations readily allowed the diagnosis, and subsequent discussions of the results could be held with their managers; and because in loco confirmation could be obtained of whether KM initiatives were being developed by the companies. In order to maintain confidentiality and prevent the companies from being identified, they are referred to hereafter as "Organization 1" to "Organization 14".

The KMAT instrument, developed by APQC and Andersen Consulting, was partly utilized to carry out the section relative to the diagnosis (Nogeste and Walker, 2006). This evaluation and diagnosis instrument was constructed on the basis of APQC's organizational knowledge management model, whose structure is composed of four facilitators that favor the KM process. In this model, the questionnaire is divided into five parts. For each part, a subset of items and information graded on a five-point Likert scale is defined. A general sum of the scores is generated, with a value ranging from 24 to 120 possible points. Finally, the answers are classified, and a relation between the total score of the answers and the maximum number of possible points is established. The respondents themselves can evaluate their scores in the KMAT and make "comments for future actions", suggesting alternatives or outlining the steps which could improve the score obtained.

In this chapter's extended version of the instrument, named KMAuditBr, the use of ICT as an auxiliary tool for KM dissemination activities is analyzed, based on the discussion by Deng and Hu (2007). In addition, KMAuditBr performs a high-level audit of the processes and activities in progress at the company, providing guidance for future organizational decisions concerning KM projects and initiatives. The audit is also carried out by means of questions for each item (processes and activities), using a five-point Likert scale, again from a minimum of 24 to a maximum of 120 possible points. The Likert scale ranges from totally disagree to totally agree, and the respondents mark the option that is closest to their organizational reality regarding the activities and processes currently in progress or in progress over the last five years.

6.1. Results Obtained from the Diagnosis and the Audit

The results obtained from the data of the companies selected in the intentional sample are shown in Table 1. The variable KMATS represents the mean score sum for each of the organizations, and the variable KMATC its corresponding classification obtained by applying the KMAT principles.
Table 1. Score and Classification Calculated.

Organization      KMATS   KMATC   AUDS   AUDC
Organization 1      65     0.54     74   0.61
Organization 2      64     0.54     85   0.70
Organization 3      51     0.42     63   0.53
Organization 4      45     0.37     56   0.47
Organization 5      98     0.82     98   0.81
Organization 6      67     0.56     90   0.75
Organization 7      85     0.71    100   0.84
Organization 8      56     0.47     64   0.54
Organization 9      61     0.51     92   0.77
Organization 10     45     0.38     87   0.73
Organization 11     66     0.55    103   0.85
Organization 12     45     0.37     60   0.50
Organization 13     39     0.33     58   0.49
Organization 14     50     0.42     68   0.57
The variable AUDS corresponds to the audit part of the instrument, and its values represent the mean of each organization with regard to the processes and activities in progress; the variable AUDC is its corresponding classification.

At the end of the process, the instrument enabled the positioning of the actual organizational status with respect to KM processes and activities (see Fig. 7). The following score bands were taken into consideration for this purpose: (a) stagnant, from 24 to 47 points; (b) initial, from 48 to 71 points; (c) select and prioritize, from 72 to 95 points; and (d) refine and continue, from 96 to 120 points (see the sketch after the list below). Using the four possible statuses, the following diagnoses were obtained for the sample companies:

• Organizations 4, 10, 12, and 13 are in the Stagnant status.
• Organizations 1, 2, 3, 6, 8, 9, 11, and 14 are in the Initial status.
• Organization 7 is in the Select and Prioritize status.
• Organization 5 is in the Refine and Continue status.
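A minimal sketch of the scoring and classification logic just described, under the stated assumptions of a 24-item questionnaire answered on a five-point Likert scale: the total therefore ranges from 24 to 120, the classification is the ratio of the total to the 120-point maximum, and the status follows the score bands above. The function name is hypothetical; the example reproduces Organization 5 in Table 1 (a total of 98 yields a classification of about 0.82 and the Refine and continue status).

```python
# Sketch of the KMAT-style scoring described above (assumptions: 24 items,
# each answered on a 1-5 Likert scale, so totals range from 24 to 120).
def classify(likert_answers: list[int]) -> tuple[int, float, str]:
    """Return (total score, classification ratio, diagnosed status)."""
    if len(likert_answers) != 24 or not all(1 <= a <= 5 for a in likert_answers):
        raise ValueError("expected 24 answers on a 1-5 scale")
    total = sum(likert_answers)          # 24..120
    ratio = round(total / 120, 2)        # e.g., 98 / 120 -> 0.82
    if total <= 47:
        status = "Stagnant"
    elif total <= 71:
        status = "Initial"
    elif total <= 95:
        status = "Prioritize and select"
    else:
        status = "Refine and continue"
    return total, ratio, status

if __name__ == "__main__":
    # Organization 5 in Table 1 scored 98 points (classification 0.82).
    example = [5] * 2 + [4] * 22         # 2*5 + 22*4 = 98
    print(classify(example))             # (98, 0.82, 'Refine and continue')
```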
It can be inferred, in general, that most of the sample companies (57%) are in the initial status, in which KM activities and processes are scarce, and 28% of the companies are in the stagnant status. Regarding Organization 7, it can be inferred that several procedures are in progress and that IT practices are helping in their accomplishment, but with little coordination and without a unified plan; this organization fits the select and prioritize status. Organization 5, in turn, was the only one to fit the Refine and Continue status, in which there is an optimal relation among KM processes, coordinated projects, and the use of IT.
At this status, the company has initiated, or is establishing, quality control standards and is pursuing continuous improvement.

6.2. Analysis of the Diagnosis and Audit Results

The results obtained from the audit confirmed the diagnoses of Organizations 5 and 7, since both scored high in AUDS (98 and 100, respectively). The in loco audit showed that both have a specific area for the implementation of KM initiatives, in addition to a specific budget for this purpose. Conversely, in the four companies diagnosed with the stagnant status, there is neither investment in KM nor a specific area for its development. As shown in Table 1, the scores achieved by these companies in the audit are also low, indicating few or no initiatives to stimulate better ways of performing tasks, methods to improve knowledge of the customer base, the use of instruments, or initiatives to discover and register the collaborators' skills and competences, among other things.

Some of the eight companies classified in the initial status have begun, or are beginning, performance and motivation processes for the collaborators and the creation of incentives to diffuse better ways of carrying out tasks, in addition to applying methods to gather and retain practice and know-how, which are important aspects of KM initiatives. By assessing the processes and activities in progress at the companies classified in the initial status, it was verified that some of them (such as Organization 11), despite a low KMAT score, obtained high scores in the audit. Through in loco evaluation it was observed that this company, a multinational, currently has a quality control area and KM projects underway; this is a demand from the company headquarters in Germany, and there is a small department in charge with a defined budget. Other organizations classified as initial, despite allegedly having KM initiatives and projects, actually present initiatives for information management, area integration, or other processes along these lines. Such companies are still developing a KM culture and have a long way to go.

7. Concluding Remarks

According to Akhavan et al. (2005), organizations which succeed in KM share common characteristics, ranging from the technology framework to a strong belief in knowledge sharing and collaboration. Recent research has demonstrated that organizations seeking KM initiatives must evaluate their current status, as well as establish a set of indicators to determine which types of KM efforts, originating from strategic planning, are likely to succeed. The KM audit results can therefore be utilized to plan the inevitable changes in technology, processes, and organizational culture that follow investments in KM.
According to Keyes (2006), organizations which do not assess their own KM status may implement technologies and concepts in unfruitful or untargeted areas. The KMAuditBr instrument presented in this study was created to help organizations perform an initial high-level self-evaluation of how knowledge is being managed. Its purpose is to point organizations to the areas demanding greater attention, as well as to support them in identifying the KM activities and processes underway, and thereby to stimulate the development of organizational knowledge. The support and use of ICT functionalities are also evaluated by the instrument.

7.1. Research Limitations

The study presented in this chapter has some limitations. The first is that most of the respondents belong to companies from the same sector, industry, with few participants from other areas such as commerce, services, and banking. Another limitation is that the sample did not include public sector companies or organizations. Yet another clear limitation is that, of all the participating companies, we were able to conduct an evaluation that provided data for the diagnosis and audit in only 14 of them. For such an evaluation, only a minimum number of questionnaires were applied in each organization and selected for obtaining the initial data.

7.2. Future Research Directions

We ascertained that a more in-depth analysis of the organizations and sectors will require applying a larger number of questionnaires and the participation of a larger number of companies from the most diverse areas of operation. However, since the issue of KM and its initiatives is still considered strategic, this will be a difficulty to overcome, since we detected a certain restriction in access to information.

According to the structure shown in Fig. 6, the model projects the creation of a knowledge base aimed at storing all audits and diagnoses conducted and making them available for consultation over the Internet to all organizations participating in the process. This base and its access structure are still in the development and construction phase. An infrastructure is being built in the Knowledge Management Technology Laboratory (LTGC), in the Computer Science Department at UNESP, Bauru Campus, to support this environment. The collaborative benchmarking process will thus be the next phase of the research: the intent is to create an infrastructure where organizations can share their best practices and knowledge with companies from the same sector.
KMAuditBr is not intended to be flawless. Nevertheless, similar to the KMAT tool, from which it originated, it is an initial step toward deeper and measurable studies on KM. This chapter is intended to demonstrate the potential of this instrument as a continuing evaluation tool, thus enabling its constant improvement.
References

Akhavan, P, M Jafari and M Fathian (2005). Exploring failure-factors of implementing knowledge management systems in organizations. Journal of Knowledge Management Practice, 6(1). The Leadership Alliance Inc. Available at http://www.tlainc.com/articl85.html.
Albino, JP and N Reinhard (2005). A Questão da Assimetria Entre o Custo e o Benefício em Projetos de Gestão de Conhecimento. In XI Seminario de Gestión Tecnologica, ALTEC Conference Proceedings, Salvador, Brazil.
Albino, JP and N Reinhard (2005). Avaliação de Sistemas de Gestão do Conhecimento: Uma Metodologia Sugerida. In XIII Simpósio de Engenharia de Produção (XIII SIMPEP), São Paulo, Brazil. Available at http://www.simpep.feb.unesp.br/upload/463.pdf.
Aurum, A, F Daneshgar and J Ward (2008). Information and Software Technology, 50, 511–533.
Awad, EM and HM Ghaziri (2004). Knowledge Management. USA: Prentice Hall.
BNDES (2008). Banco de Desenvolvimento Econômico e Social, Porte de Empresa. Available at http://www.bndes.gov.br/clientes/porte/porte.asp [Accessed on 09/05/08].
Chua, AYK and D Goh (2007). Measuring knowledge management projects: Fitting the mosaic pieces together. In Proc. of the 40th Hawaii International Conference on System Sciences (3–6 January 2007). HICSS, IEEE Computer Society, Washington, DC, 1926.
Davenport, TH, L Prusak and HJ Wilson (2003). What's the Big Idea? Creating and Capitalizing on the Best New Management Thinking. USA: Harvard Business School Press.
Delgado, RA, LS Bárcena and AML Palma (2007). The knowledge management helps to implement an information system. In Proc. of ECKM 2007: 8th European Conference on Knowledge Management, 28–34.
Deng, Z and X Hu (2007). Discussion on models and services of knowledge management system. In Proc. of ISITAE '07, First International Symposium on Information Technologies and Applications in Education (23–25 November 2007). IEEE Computer Society, Kunming, China, 114–118.
E-Consulting Corps (2004). HSM Management, 42(1), 53–600.
Handzic, M, A Lagumdzija and A Celjo (2008). Auditing knowledge management practices: Model and application. Knowledge Management Research & Practice, 6, 90–99.
Jin, F, P Liu and X Zhang (2007). The evaluation study of knowledge management performance based on grey-AHP method. In Proc. of the 8th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing, 444–449.
Keyes, J (2006). Knowledge Management, Business Intelligence, and Content Management: The IT Practitioner's Guide. New York: Auerbach Publications, Taylor & Francis Group.
Lawton, G (2001). Knowledge management: Ready for prime time? IEEE Computer, 34(2), 12–14.
Nogeste, K and DHT Walker (2006). Using knowledge management to revise software-testing process. Journal of Workplace Learning, 18(1), 6–27.
Nonaka, I and H Takeuchi (1997). The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. USA: Oxford University Press.
O'Dell, C and CJ Grayson (1998). If Only We Knew What We Know: The Transfer of Internal Knowledge and Best Practice. New York: Free Press.
OECD (2003). Measuring Knowledge Management in the Business Sector: First Steps. Paris: OECD Publishing.
Serrano Filho, A and C Fialho (2006). Gestão de Conhecimento: O Novo Paradigma das Organizações. Portugal: FCA.
Stewart, TA (2003). The Wealth of Knowledge: Intellectual Capital and the Twenty-First Century. UK: Nicholas Brealey Publishing.
Tiwana, A (2002). The Knowledge Management Toolkit. USA: Prentice Hall.
Biographical Notes

João Pedro Albino is a Technologist in Data Processing and has a Bachelor's degree in Computer Science. He also has a Master's in Computer Science and a PhD in Management, and he completed post-doctoral work in Innovation and Technological Management and in Knowledge Management at the Department of Industrial Engineering and Management of the University of Aveiro, Portugal. He is a Professor of Information Systems at the College of Sciences of UNESP. His research interests and publications include Computer Science with an emphasis on Information Technology and Knowledge Management.

Nicolau Reinhard is a Professor of Management at the School of Economics, Administration and Accounting of the University of São Paulo (USP), Brazil. His research interests and publications include management of the IT function, the use of IT in Public Administration, and Information Systems implementation and impacts. Prof. Reinhard has a degree in Engineering and a PhD in Management, and besides his academic career, he has held executive and consulting positions in IT management in private and public organizations.

Silvina Santana is an Assistant Professor at the Department of Economics, Management and Industrial Engineering of the University of Aveiro, Portugal, and a Guest Professor at Carnegie Mellon University. She is a researcher at IEETA, the Institute of Electronics Engineering and Telematics of Aveiro. She holds a PhD in Knowledge and Information Management and degrees in Electronics and Telecommunications Engineering and in Industrial Engineering and Management. She is currently involved
in three European projects (EU FP7), one of them as project leader and coordinator in Portugal. Her main research interests centre on business integration, business models and business processes, organizational learning, knowledge management, eHealth and telemedicine, integrated care, public health, intersectoral partnerships, healthcare organizations management and entrepreneurship.
Chapter 13
Risk Management in Enterprise Resource Planning Systems Introduction

DAVIDE ALOINI, RICCARDO DULMIN and VALERIA MININNO
Department of Electric Systems and Automation, Pisa University, P.O. Box 56100, Via Diotisalvi, 2 Pisa, Italy
[email protected], [email protected], [email protected]

Enterprise Resource Planning (ERP) systems are extremely complex information systems, whose implementation is often unsuccessful. We suggest a risk management (RM) methodology supporting the formulation of risk treatment strategies and actions during ERP introduction projects to improve the success rate. In this chapter, first the research context is presented, then the framework and the methodology are illustrated and the main phases of the proposed RM approach are introduced; finally, the results are discussed.

Keywords: Enterprise resource planning; risk management; framework; methodology; information systems.
1. Introduction

In recent years, Enterprise Resource Planning (ERP) systems have received great attention. Nevertheless, ERP projects have often been found to be risky to implement in business enterprises, mainly because they are complex and affected by uncertainty. A PMP study (2001) found that the average implementation time of an ERP project is between 6 months and 2 years and that the average cost is about 1 million dollars. According to estimates by the Standish Group International, 90% of SAP R/3 projects run late, 34% are late or over budget, 31% are abandoned, scaled down, or modified, and only 24% are completed on time and within budget. A possible explanation for such a high ERP project failure rate is that managers do not take appropriate measures to assess and manage the risks involved in these projects (Wei et al., 2005).

However, dealing with risk management (RM) in ERP introduction projects is an ambitious task. ERP projects are highly interdisciplinary, as they affect interdependencies between business processes, software, and their reengineering (Xu et al., 2002). Critical factors include technological and
management aspects, both psychological and sociological; moreover, they are often deeply interconnected and have indirect effects on the project. This makes the RM process, and in particular the risk assessment phases, very difficult and uncertain. The main purpose of this chapter is to provide an RM methodology to support managers in making decisions during the life cycle of an ERP introduction project, in order to improve the success rate.

2. Background

Risk is present in every aspect of our life; thus RM is considered a very important task, even if it is often treated in an unstructured way, based only on relevant knowledge, experience, and instinct. All projects involve risk, because unexpected events and deviations from the project plan often occur. IT projects, like other kinds of complex projects, may be considered an area suitable for action and for the potential development of RM practice.

2.1. Project Risk in the IT Field

Many factors affect IT implementation projects, and they can be grouped into different classes; for example (DeSanctis, 1984; Leonard-Barton, 1988; Lucas, 1975; Schultz et al., 1984):

(i) Individual factors: such as needs, cognitive style, personality, decision style, and expectancy contributions.
(ii) Organizational factors: such as differentiation/integration, extent of centralization, autonomy of units, culture, group norms, reward systems, and power distributions.
(iii) Situational factors: such as user involvement, nature of analyst-user communication, organizational validity, and existence of a critical mass.
(iv) Technological factors: which include the types and characteristics of the technology, such as transferability, implementation complexity, divisibility, and cultural content.

Other researchers have identified similar factors impacting the successful implementation of IT, grouping them into different classes, but all have described issues of organizational fit, skill mix, management structure and strategy, software system design, user involvement and training, technology planning, project management, and social commitment.

At the project level, IT projects have long been recognized as high-risk ventures prone to failure; some software project risks are easy to identify and manage, while others are less obvious or more difficult to handle. Given the potential costs and losses from failed software projects, researchers and practitioners must continue learning from each other to reduce project failures and develop practices that consistently generate better project outcomes.
Enterprise-wide/ERP projects are among the most critical IT projects and pose new opportunities and significant challenges in the field of RM. ERP systems, as company-wide information systems, impact a firm's business processes, organizational structure, and existing (legacy) IT systems. Consequently, RM approaches for ERP systems must embrace the software, the business processes, and the project management dimensions. Nevertheless, the adoption of an ERP system can bring many potential benefits:

• Business benefits: operational efficiency through automation and the reduction of paperwork and manual data entry; inventory level reduction; global integration through the handling of currency exchange rates, languages, and so on.
• Organizational benefits: improvement of internal IT skills and the attitude toward change; process standardization; and reduction of staff positions.
• IT benefits: consistent data in a shared database; an open architecture; integration of people and data; and reduction of update and repair needs for separate computer systems.

As ERP packages touch on many aspects of a company's internal and external operations, the related investments include not only the software, but also related services such as consulting, training, and system integration. Consequently, successful deployment and use of ERP systems are critical to organizational performance and survival (Chen, 2001; Markas et al., 2000b).

2.2. Relevance from Literature

The importance of an RM approach in ERP introduction projects is recognized both in theory and in business practice. In an extended review of the literature in the ERP field, Aloini et al. (2007) analyzed and classified a number of key contributions on ERP introduction projects in order to define the main issues and research approaches taken, and to identify which areas needed ERP RM deployment and which risk factors were most relevant. The authors underline that, despite the great importance the literature gives to factors tied to project management, including RM (Anderson et al., 1995; Cleland and Ireland, 2000) and change management, only a few articles deal explicitly with these topics.

There are, in fact, only a few academic contributions on RM in ERP projects. Furthermore, the existing ones appear to be mainly concerned with the organizational or business impact of ERP systems (Beard and Sumner, 2004) and rarely with RM strategies and techniques or assessment models. In the latter case, they mainly focus on the identification and analysis of risk, and only a few of them suggest operative models to support the quantification of risk in terms of risk analysis and evaluation, or the selection of appropriate treatment strategies; almost never is a computer-assisted tool supporting the different stages of the process provided.
Evidence from the literature shows that almost all the analyzed contributions present merely qualitative and non-integrated approaches to RM. Authors frequently suggest general frameworks or global approaches to risk management in the ERP field, often derived from the IT and RM literature, but rarely propose methodologies and tools specific to the ERP case that could support decision makers in managing risk during the different stages of the project life cycle. RM phases are usually approached as stand-alone activities without considering their relevant interconnections; moreover, transversal phases such as context analysis, risk monitoring, and communication are scarcely supported, and contributions often defer to more general RM approaches. Finally, they do not deal with the problem of risk factor interdependence or with the relationships between risk factors and effects.

As for practitioner initiatives, SAP and Baan, along with other ERP vendors, have developed proprietary methodologies and applications for the needs of their own ERP systems. These RM applications are not generic and thus cannot be used for the implementation of an arbitrary ERP system. Moreover, they adopt a more technical perspective than an organizational one. These approaches consider some risk factors to support resource planning, allocation, and time forecasting, especially for the ERP configuration activities. However, they fail to address other important dimensions of project success related to risk mitigation, such as the organizational impact or the process reengineering needs. For all the reasons mentioned above, the aim of this chapter is to propose an effective RM methodology for ERP introduction projects.

3. ERP RM Framework

In what follows, we first of all propose our definitions of risk, risk assessment, and RM, which clarify the approach we suggest for dealing with risk during the introduction of ERP systems.

Definition 1: Risk is an uncertain event or condition that, if it occurs, has a positive or negative effect on a project's objectives (PMI, 2001).

Definition 2: Risk assessment is the process of quantitatively or qualitatively assessing risks. It involves an estimation of both the uncertainty of the risk and of its impact.

Definition 3: RM is the practice of using risk assessment to devise management strategies and deal with risk in an effective and efficient way.

According to this perspective, a general risk management framework can be drawn for ERP projects. It consists of several activities, as Fig. 1 shows.

Figure 1. A schematic illustration of a general RM process.

(i) Context Analysis: This aims to define the boundaries of the RM process: the processes to be analyzed, the desired outputs, performance, etc., to support the definition of the correct risk model approach.
(ii) Risk Assessment: This is a core step of the RM process and includes:
    (a) Risk identification — which allows the organization to determine early the potential threats (internal and external risk factors) and their impact (effects) on the project's success.
    (b) Risk quantification — which aims to prioritize risk factors according to their risk levels and consists of two principal phases:
        • Risk analysis (or estimation) — provides the inputs to the risk evaluation phase for the final quantification. The typical inputs are the occurrence probability of a risk factor, its links (weights) with potential effects, the severity of these effects, and possibly the detection difficulty.
        • Risk evaluation — which defines risk classes. It selects an appropriate and effective risk aggregation algorithm and synthesizes the risk level for each identified risk factor.
(iii) Risk Treatment: This targets the selection of an effective strategy to manage the risks related to the different risk classes identified. RM strategies consist of four classic approaches: the first aims to reduce risky circumstances, the second deals with risk treatment after a risk factor appears, while the third and fourth deal with risk externalization or acceptance.
(iv) Risk Control: The final aim of RM is to manage project risk so as to exercise better control over the project and increase its probability of success. The principal issues of the risk control phase are:
    (a) Monitoring and Review — each step of the RM process is a convenient milestone for reporting, reviewing, and action taking.
    (b) Communication and Consulting — aims to effectively communicate hazards to the project managers and the people involved in the project, to support the managerial actions.

Details about these phases in an ERP project are explained in the following sections.

3.1. Research Methodology

The work we present originated from an extended research effort including the conceptual development of the methodology and a preliminary empirical test of its validity and suitability in a real context. The research aim was to develop an innovative RM methodology suitable for ERP introduction projects. According to the general framework presented above, the research was divided into three parts: state-of-the-art analysis, methodology deployment, and validation.

State-of-the-art analysis assesses the current state of the art on ERP RM and supports the definition of the risk model approach. In particular, a literature review investigating existing RM approaches and techniques was performed, and the main contributions were analyzed, discussing differences, advantages, and disadvantages. Attention then moved to the ERP literature, reviewing the most relevant contributions on this topic (130 peer-reviewed articles were collected and 75 selected for the analysis) and identifying the areas which need further development. After that, we explicitly focused on contributions dealing with RM in ERP projects, investigating existing RM approaches and techniques from both the academic and the practitioner world.

Methodology deployment aims to develop a suitable model for ERP RM. A specific RM methodology for ERP introduction projects was developed, suggesting innovative methods and techniques for the different RM phases or adapting existing ones to the new ERP context. In particular, an extended literature review responded to the need for risk identification, focusing on the classification and taxonomy of the principal risk factors. Then, an overall framework enumerating risk factors, effects, and macro-effects was drawn. As for risk quantification, several techniques for the risk analysis and evaluation stages were analyzed. After that, an interpretive structural modelling (ISM)-based technique was proposed to model dependencies and interconnections among risk factors and between risk factors and effects, in order to draw a risk event tree (Sage, 1977). A probabilistic network approach was suggested for risk evaluation. Finally, for the risk treatment and control phases, potential risk treatment strategies for each risk factor were identified and analyzed using literature analysis, interviews with practitioners, and case studies, in relation to the project life cycle, the risk factor profile, and the impacts on the project. A general roadmap was finally drawn.
Validation aims to preliminarily test and discuss the conceptual validity and applicability of the proposed RM framework in a real context through a number of case studies. Evidence is drawn from five in-depth case studies of multinational firms in different sectors, which were recently involved in an ERP introduction project. The analysis was based on in-depth interviews, ex post evaluations of project performance, and an ex post simulation of the methodology. In the following sections we illustrate in detail the main phases and techniques of the investigated methodology.
3.2. Context Analysis

Context analysis is an important activity that should be carried out at the start of the RM process. Establishing the context means setting the boundaries within which risks must be managed and defining the scope, as well as the budget, for the subsequent RM activities. The context analysis is essential for the next steps and is functional both to the assessment and to the treatment phase, as it enables a more complete identification, a better assessment, and a more careful selection of a suitable response strategy. In our opinion, the relevant attributes for the analysis are the following (a simple way of recording them is sketched below):

• Detectability of Risk Factors: easy or difficult detection of the occurrence of a risk factor.
• Responsibility of Actions: internal or external players.
• Project Life Cycle Phases: the phase of the project in which the risk factor can occur.
• Controllability: the possibility of influencing the probability of occurrence of a risk factor.
• Dependence: the degree to which a risk factor depends on the others.

Project modeling tools and techniques are generally useful for this activity, such as project network diagrams, the precedence diagramming method (PDM), IDEF3 process modeling, and IDEF0 functional modeling (Ahmed et al., 2007).
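As a purely illustrative aid (not part of the chapter's methodology), the sketch below shows how a risk factor profile based on these context-analysis attributes might be recorded; the identifier, field names, and values are assumptions chosen for demonstration only.

```python
from dataclasses import dataclass

# Minimal sketch of a risk factor profile built from the context-analysis
# attributes listed above. All concrete values are hypothetical placeholders.

@dataclass
class RiskFactorProfile:
    factor_id: str          # e.g. "R2" (poor project team skills)
    detectability: str      # "easy" or "difficult"
    responsibility: str     # "internal" or "external"
    lifecycle_phase: str    # phase in which the factor can occur
    controllability: str    # "high", "medium" or "low"
    depends_on: tuple       # other risk factors it depends on

r2 = RiskFactorProfile(
    factor_id="R2",
    detectability="easy",
    responsibility="internal",
    lifecycle_phase="Project Preparation",
    controllability="high",
    depends_on=("R3",),     # e.g. driven by low top-management commitment
)
print(r2)
```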
3.3. Risk Assessment

Risk assessment is the central phase of the RM process. This phase helps in understanding the nature of risk in terms of which factors could impact project success, their interactions, their probability of occurrence, their detection difficulty, and their potential impact on the project, in order to quantify their riskiness and prioritize them. It consists of two principal activities (Fig. 2): risk identification and risk quantification, which we discuss separately.
Figure 2. Risk assessment phase.
3.3.1. Risk identification

Common RM approaches emphasize the need to identify risks early in the process. Chapman and Ward (2003) assert that a key deliverable of effective RM is a clear, common understanding of the sources of uncertainty facing the project, and of what can be done about them. The real sources of risk are the unidentified ones, so the identification phase can be considered an initial risk response action. The identification of sources, effects, and responses can be carried out in a variety of ways, individually or involving other people, including interviewing individuals, interviewing groups, or group processes such as brainstorming and decision conferencing, to stimulate imaginative thinking and draw on the experiences of different individuals (Chapman and Ward, 2003).

As for factor identification, checklist approaches, as well as influence diagrams, cause-effect diagrams, and event or fault trees, can be very effective in focusing the attention of managers. The construction of such checklists and trees can be managed both top-down (from macro-project risk classes to single risk factors) and bottom-up (from the effects on the project to the related causes, i.e., risk factors). The first approach can be assisted by guidelines which categorize risks along different project dimensions, such as the project life cycle (plan and timetable), project players (both internal and external parties), project objectives and motives, resources, and changes in processes and organization structure, which can stimulate and guide managers during the process. The second approach instead needs to start from an overall definition of what project success means in complex projects like an ERP introduction. In this respect, the current state of the art in the ERP field is discussed by Aloini et al. (2007),
presenting an extended literature review responding to the need for risk identification and focusing on the classification and taxonomy of the principal risk factors and effects. Adopting Lyytinen and Hirschheim's (1987) definition of "failure" and "success" of IT projects, the authors suggest a first classification of IT failures:

(i) Process failure, when an IT project is not completed within time and budget.
(ii) Expectation failure, where IT systems do not match user expectations.
(iii) Interaction failure, when user attitudes toward IT are negative.
(iv) Correspondence failure, where there is no match between IT systems and the specific planned objectives.
This classification can be useful to identify the project effects and the causes of failure (risk factors). In the cited articles, the authors suggest 10 risk effects and 19 risk factors that typically occur in ERP projects, as shown in Fig. 3. The effects mainly concern budget overruns, schedule overruns, project stoppage, poor business performance, inadequate system reliability and stability, poor fit with organizational processes, low user friendliness, low degree of integration and flexibility, poor fit with strategic goals, and poor financial/economic performance.

Figure 3. Risk factors, effects and effects macro-classes. (From Aloini et al., 2007.)

As for the risk factors, Table 1 lists the potential elements according to the interest they have received in the literature.

Table 1. Risk Factors and Their Frequency Rate in Literature.

ID    Risk factor                                                Frequency
R1    Inadequate selection of the ERP package                    √√√
R2    Poor project team skills                                   √√
R3    Low commitment of the top management                       √√
R4    Ineffective communication system                           √√
R5    Low involvement of the key users (KU)                      √√
R6    Inadequate training                                        √√
R7    Complex ERP system architecture                            √
R8    Inadequate business process reengineering                  √√
R9    Bad managerial conduction                                  √√
R10   Ineffective project management techniques                  √√
R11   Inadequate change management                               √√
R12   Inadequate legacy system management                        √
R13   Ineffective consulting services                            √
R14   Poor leadership                                            √
R15   Inadequate performance of the IT system                    √√
R16   Inadequate IT system maintainability                       √
R17   Inadequate stability and performances of the ERP vendor    √
R18   Ineffective strategic thinking and planning                √√√
R19   Inadequate financial management                            √

Source: Aloini et al. (2007). Note: The frequency rate is related to the scientific literature.

3.3.2. Risk quantification

Risk quantification aims to evaluate the risk level of the identified factors in order to synthesize a ranking that can drive and prioritize the selection of treatment strategies. In this approach, risk quantification entails two essential components: uncertainty (i.e., the probability of occurrence of a risk factor, U) and exposure (i.e., the impact or effect of the occurrence of a risk factor on the project, E):

Ri = Ui · Ei    (1)
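To make Eq. (1) concrete, the minimal sketch below ranks a few of the risk factors from Table 1 by their risk index; the probability (U) and exposure (E) scores are hypothetical placeholders used only for illustration and are not taken from the chapter.

```python
# Minimal sketch of the risk quantification of Eq. (1): R_i = U_i * E_i.
# The factor names come from Table 1; the U and E scores are hypothetical.

risk_factors = {
    "R2: Poor project team skills":                   {"U": 0.6, "E": 8},
    "R3: Low commitment of the top management":       {"U": 0.4, "E": 9},
    "R8: Inadequate business process reengineering":  {"U": 0.5, "E": 7},
    "R12: Inadequate legacy system management":       {"U": 0.3, "E": 5},
}

def risk_index(factor):
    """Compute R_i = U_i * E_i for a single risk factor."""
    return factor["U"] * factor["E"]

# Rank factors by decreasing risk level to prioritize treatment strategies.
ranking = sorted(risk_factors.items(), key=lambda kv: risk_index(kv[1]), reverse=True)

for name, scores in ranking:
    print(f"{name}: R = {risk_index(scores):.1f}")
```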
The Australian Standard (AS/NZS 4360, 1999) distinguishes the following approaches to risk assessment:

(a) Qualitative, which uses words to describe the magnitude of potential consequences and the likelihood that those consequences will occur.
(b) Semi-quantitative, where qualitative scales are given numerical values. The objective is to produce a more expanded ranking scale than is usually achieved in qualitative analysis, but not to suggest realistic values for risk such as is attempted in quantitative analysis.
(c) Quantitative, which uses numerical values (rather than the descriptive scales used in qualitative and semi-quantitative analysis) for both consequences and likelihood, using data from a variety of sources.

As mentioned before, the risk assessment process consists of two subphases: risk analysis and risk evaluation. In the risk analysis stage, risk factors are analyzed and classified according to the decisional attributes defined in the context analysis phase, such as controllability, detectability, project life cycle, responsibility, and dependence. The output is functional both to the evaluation and to the treatment phase, as it provides a pre-analysis of risk factor profiles and enables a more accurate selection of suitable response actions. Dependence among the risk factors, in particular, is critical in risk assessment, as snowball effects can occur. The Interpretive Structural Modeling technique (Sage, 1977), as well as other Analytic Network Process (ANP) approaches, can be useful for modelling dependencies and connections between risk factors and effects, and for drawing a risk event tree. An example of these dependencies is reported in Fig. 4. Risk factors there are arranged according to their degree of dependence (how many factors they depend on) and driving power (how many factors they lead to); from left to right the dependence degree increases. Three macro-classes of factors are visible: factors linked to project governance, factors related to project and change management, and BPR-related factors. Two isolated classes are also present: legacy system management and financial management (Aloini et al., 2008).

In the risk evaluation phase, a ranking has to be elaborated to assess the priority and severity of each risk factor. Consequences and likelihood, and hence the level of risk, should be estimated. The levels of risk should be compared against the pre-established criteria, and potential benefits should be balanced against adverse outcomes. This process enables decisions about the extent and nature of the treatments required and about priorities. A wide dispute still exists in the RM literature between those who emphasize a formal quantitative assessment of the probable consequences of the recommended actions, compared with the probable consequences of alternatives, and those who emphasize the perceived urgency (qualitative expert judgments) or severity of the situation motivating the recommended interventions.
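The dependence and driving-power analysis mentioned above can be sketched in code. The following is an illustration of the basic ISM bookkeeping only, not the chapter's actual model: the reachability matrix and the factor subset are hypothetical.

```python
import numpy as np

# ISM-style sketch: given a (hypothetical) reachability matrix in which entry
# [i][j] = 1 means "risk factor i drives (leads to) risk factor j", compute
# each factor's driving power (row sum) and dependence (column sum), the two
# coordinates used to position factors in a map such as Fig. 4.

factors = ["R3 Top mgmt commitment", "R9 Managerial conduction",
           "R11 Change management", "R8 BPR"]

reachability = np.array([
    [1, 1, 1, 1],   # illustrative values only: R3 drives R9, R11, R8
    [0, 1, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
])

driving_power = reachability.sum(axis=1)   # how many factors each one leads to
dependence    = reachability.sum(axis=0)   # how many factors each one depends on

for name, dp, dep in zip(factors, driving_power, dependence):
    print(f"{name}: driving power = {dp}, dependence = {dep}")
```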
Figure 4. Risk factors, effects, and macroclasses of effects.
Figure 5. Risk matrix.
A variety of techniques exist to support the evaluation phase, such as statistical sums, simulations, decision trees, expert judgments, multicriteria decision methods, portfolio approaches, probabilistic networks, etc. The risk matrix (Fig. 5), for example, is one of the most common tools used in the assessment phase: once the likelihood and impact of a risk factor are qualitatively or quantitatively estimated, it classifies risk factors according to their likelihood and impact levels. Probabilistic networks are more sophisticated approaches to the risk evaluation phase. They can be used when the purpose is to take into account simultaneously the risk factor dependencies, the probabilities of occurrence, and the impact on the project effects. Given estimates of the unconditioned probabilities of occurrence of each risk factor, a matrix that models dependencies among the risk factors and between risk factors and effects, and estimates of the importance (weight) of their potential effect on the project's success, the probabilistic network can be used to compute a global risk index for those risk factors. This kind of approach is more complex and expensive than the risk matrix in terms of parameter estimation, modeling of dependencies, and evaluation of effects. Without a doubt, risk assessment may be undertaken to varying degrees of detail depending upon the risk, the purpose of the analysis, and the information, data, and resources available, but the costs and benefits of the available techniques should be carefully analyzed before wide application to the project.
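A minimal sketch of the qualitative risk-matrix idea follows. The thresholds, level labels, and class names are assumptions made for this illustration; they are not the classification used in Fig. 5.

```python
# Sketch of a qualitative risk matrix: each risk factor is classified by its
# likelihood and impact level into a risk class that can drive the choice of
# treatment strategy. Thresholds and labels are illustrative assumptions.

LEVELS = ["low", "medium", "high"]

def risk_class(likelihood: str, impact: str) -> str:
    """Map qualitative likelihood/impact levels to a risk class."""
    score = LEVELS.index(likelihood) + LEVELS.index(impact)
    if score >= 3:
        return "critical"      # e.g. avoid or mitigate aggressively
    if score == 2:
        return "significant"   # e.g. mitigate or transfer
    return "tolerable"         # e.g. accept and monitor

print(risk_class("high", "medium"))   # -> critical
print(risk_class("low", "medium"))    # -> tolerable
```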
Figure 6. Risk treatment and control.
3.4. Risk Treatment

After context analysis and risk assessment, in the risk treatment and control phases (Fig. 6) an effective strategy has to be adopted and implemented to manage risks. As introduced in Section 3, the goal of this phase is to plan a coherent set of consistent and feasible actions, together with the related organizational responsibilities, to avoid the risk, reduce its likelihood, reduce its impact, transfer it, or retain it (AS/NZS 4360, 1999). The tangible output is a formal risk management plan (RMP), an additional project management tool to execute both feed-forward control (support to project design, actions based on the chance of a risk event) and feedback control (risk mitigation actions following risk manifestation). Such a plan is not a static document, because of unexpected problems and the realization of planned actions, and its design must reflect the trade-off between planned outcomes and their cost of execution, constraints, and available skills. In ERP projects, risk treatment involves several interrelated and context-specific aspects, such as the technological, managerial, and organizational risk factors, the phase of the project's life cycle in which they occur and have to be managed, the most appropriate strategies, and the related specific responses.

3.4.1. ERP project life cycle: Activities, risk factors, and key actors

To design an effective RMP, it is useful to describe the organization's experience with an enterprise system as moving through several phases, characterized by key players, typical activities, characteristic problems, appropriate performance metrics, and a range of possible outcomes. For better control, we must consider both these factors by breaking up the typical IT investment phases
(system development, implementation, ongoing operation) into a more detailed framework and considering that (Markus and Tanis, 2000a; Soh and Markus, 1995):

• The necessary conditions for a successful outcome (i.e., high-quality information technology "assets") are not always sufficient for success. An IT investment on track for success can be derailed by an external event or changing external conditions.
• The outcomes of one phase are the starting conditions for the next. Decisions and actions in a phase can increase or decrease the potential for success subsequently.
• Outcome variables cover both the success of the implementation project (no time or cost overruns) and, chiefly, the business results (did the company succeed in realizing its business goals for the ERP project?).

In this section, to model the project life cycle, we use the well-known five-phase implementation roadmap of SAP ERP (Monk and Wagner, 2008), as shown in Fig. 7.

Figure 7. Project life cycle (SAP implementation roadmap).

In the Project Preparation phase, the main decisions regard project approval and funding. The goals, objectives, and scope (what the ERP is to accomplish) of the project must be carefully defined. Typical tasks include organizing the project team, selecting the package and the hardware and database vendors, identifying and prioritizing the business processes to support, communicating the objectives and impacts of the new system to the personnel, evaluating the investment financially, and fixing the budget. Common risk factors include low top-management commitment, unrealistic objectives, inadequate budget, poor skills of the project team in ERP introduction, inadequate choice of (and contracting with) vendors, consultants, or system integrators, lack of comprehension of opportunities to improve the process, and underestimating the difficulty of change management. Key actors are IT specialists, line-of-business managers (cross-functional competences are required), the ERP system vendor and integrators, and consultants.

The main objective of the Business Blueprint phase is to develop detailed documentation of how the business processes have to be managed and supported by the enterprise system. This documentation, sometimes called the "Business Transformation Master Plan", defines the specifications to configure and eventually customize the system in the next phase. Typical tasks include the development of a detailed project plan, the definition of KUs, KU/project team education and acquisition of support skills,
detailed process mapping (AS IS) and the definition of process reengineering needs according to the procedures incorporated in the system (gap analysis and an action plan to resolve variances), and the identification of a legacy system treatment (elimination, integration, method of data retrieval, clean-up, and transfer to the ERP database). Main risk factors are a lack of cross-functional representation and skills, poor-quality software documentation and training material, and absent or insufficient attention paid to gap analysis. Key actors are organizational cross-functional members, the project team, and vendor/consultant/system integrator business analysts.

The Realization phase covers the core activities needed to get the system up and running, through its configuration and hardware/network connection, the actual reengineering of processes, and the execution of a change management plan (if any). Typical tasks, besides configuration, are system integration, data clean-up and conversion, education of KUs, IT staff, and executives, and the development of a standard prototype without detailed interfaces and reports. Main risk factors are inadequate knowledge of consultants and vendor personnel, too many customizations, poor attention to data clean-up, difficulty in acquiring knowledge of the software configuration, and the rescheduling (shortening) of this and the following phases because of the "scope creep effect" (Monk and Wagner, 2006). Scope creep refers to the unplanned growth of project goals and objectives, which leads to project time and cost overruns and is discovered only in this phase. Management often chooses to omit or shorten the Final Preparation phase, thereby reducing or eliminating end users' training and software testing; the cost savings gained are then overshadowed by productivity losses, consulting fees, and the prolonging of the period from "Go Live" until "normal operation." The key actors are the same as in the previous phase, plus the end users, who begin their training.

The Final Preparation phase encompasses critical tasks such as testing the ERP in critical processes, prototype completion with reports and interfaces, user enablement and final training, help desk implementation, bug-fixing, final tuning/optimization of data and parameters, completing the data migration from legacy systems, and setting the go-live date. The main risk factor which can occur is the above-cited scope creep effect. Another typical error to prevent is assuming that end-user training should be funded from the operations budget. Often, too many and too complex customizations do not work and lead to long rework activities. Key actors are the same as in the previous phase.

The final phase of Go Live & Support starts from Go Live and ends when "normal operations," with processes fully supported and no external support, have been achieved. Most end users' problems arise during the first few weeks, so monitoring of the system is critical in order to quickly arrange changes if performance is not satisfactory. Typical tasks are supporting the Help Desk with project team reworking skills, final bug-fixing and reworking, monitoring the operative performance of the new system, and adding capacity. In this phase, we may observe the effects and problems related to bad management of the cited risk factors:
underuse or no use of the system, data input errors, excessive dependence on KUs and external parties, retraining, difficulty in diagnosing and solving software problems, and the over-extension of this phase. Key actors are the IT specialists and members of the project team who staff the Help Desk, operations managers, and external technical support personnel.

3.4.2. Treatment strategies and risk factor profiles

The literature describes generic options for responding to project risks (DeMarco and Lister, 2003; Kerzner, 2003; Schwalbe, 2008). Within these high-level options, specific responses can be formulated according to the circumstances of the project, the threat, the cost of the response, and the resources required. Here, we report four common risk response strategies.

Avoidance strategies aim to prevent a negative effect from occurring or impacting a project. This can involve, for example, changing the project design so that the circumstances under which a particular risk event might occur cannot arise, or so that the event will have little impact on the project if it does. For example, planned functionality might be "de-scoped" by moving a highly uncertain feature to a separate phase or project in which more agile development methods might be applied to determine the requirement (Boehm and Turner, 2003).

The transference strategy involves shifting the responsibility for a risk to a third party. This action does not eliminate the threat to the project; it just passes the responsibility for its management to someone else. Theoretically, this implies a principal-agent relationship wherein the agent is better able to manage the risk, resulting in a better overall outcome for the project. This can be a high-risk strategy because the threat to the project, which the principal must ultimately bear, remains, but direct control is surrendered to the agent. Common risk transfer strategies include insurance, contracts, warranties, and outsourcing. In most cases, a risk premium of some kind is paid to the agent for taking ownership of the risk, and sometimes penalties are included in the contracts. The agent must then develop its own response strategy for the risk.

The risk mitigation strategy consists of one or more reinforcing actions designed to reduce a threat to a project by reducing its likelihood and/or potential impact before the risk is realized. Ultimately, the aim is to manage the project in such a manner that the risk event does not occur or, if it does, the impact can be contained at a low level (i.e., to "manage the threat to zero"). For example, using independent testers and test scripts to verify and validate software progressively throughout the development and integration stages of a project may reduce the likelihood of defects being found post-delivery and minimize project delays due to software quality problems.

The risk acceptance strategy can include a range of passive and active responses. One is to passively accept that the risk exists but choose to do nothing about it other than, perhaps, to monitor its status.
This can be an appropriate response when the threat is low and the source of the risk is external to the project's control (Schmidt et al., 2001). Alternatively, the threat may be real but there may be little that can be done about it until it materializes. In this case, contingencies can be established to handle the condition when and if it occurs (as a planned reaction). The contingency can take the form of a provision of extra funds or other reserves, or it can be a detailed action plan (contingency plan) that can be quickly enacted when the problem arises.

Considering such strategies is useful for better understanding the risk factors and designing the RMP. Furthermore, we suggest using a context diagram (Fig. 8), as proposed in the SAFE methodology (Meli, 1998), to add information about the risk factor profiles in terms of control, consideration, and influence. Here we allocate factors and subjects, respectively passive elements with no decision-making power (events, normative constraints, specifications, etc.) and entities able to decide and influence the project's success (IT manager, consultants, top management, project management tools, etc.). Factors are allocated in classes (the "cloves" of the diagram), such as technology, normative, management, organization, strategy, and so on. The circle's crowns relate to the capacity of the project (project manager and team) to influence, consider, or control the element along a continuum:

• Control: with total control, we may reduce to zero the probability of manifestation of a risk factor.
• Consideration: we cannot modify the element and must adopt slack resources and adaptive strategies of the avoidance type.
• Influence: through the project, we may try to reduce the probability of occurrence and/or the impact, with an uncertain outcome.

Figure 8. Context diagram (elements: factors and subjects; classes; capacity: control, consideration, influence; project life cycle).

The frame in Fig. 9 shows how potential actions can be planned during the different ERP project life-cycle stages, according to suitable RM strategies and to the risk factor profiles.
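As a hedged illustration of how the risk factor profile attributes and the four response strategies might be combined when drafting an RMP, consider the sketch below. The decision rules, attribute values, and example factor are assumptions made for demonstration only; they are not the rules proposed by the chapter or by the SAFE methodology.

```python
# Illustrative-only mapping from a risk factor profile to a candidate response
# strategy for the RMP. The decision rules below are assumptions made for this
# sketch, not the chapter's prescriptions.

def candidate_strategy(controllability: str, responsibility: str,
                       likelihood: str, impact: str) -> str:
    if controllability == "high" and responsibility == "internal":
        return "avoidance"       # change the project so the event cannot arise
    if responsibility == "external":
        return "transference"    # e.g. contracts, warranties, outsourcing
    if likelihood == "low" and impact == "low":
        return "acceptance"      # monitor, possibly with a contingency plan
    return "mitigation"          # reduce likelihood and/or impact beforehand

# Hypothetical profile for R2 (poor project team skills), Project Preparation phase
print(candidate_strategy("high", "internal", "medium", "high"))   # -> avoidance
```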
Figure 9. Information to design the RMP: for each risk factor (R1–R19), the phase, the risk factor profile (detectability, controllability, responsibility, dependence, likelihood, effects), and the RM strategy (mitigation, avoidance, transference) with the related actions.
This can be a good reference for the design of the RMP. Obviously, we must relate this information to internal/external responsibility. For example, consider the risk factor "Poor project team skills." This factor causes negative effects all along the life cycle and has to be managed by the end of the Project Preparation phase, with the right selection of, and roles assigned to, project team members. It is a typical element/subject belonging to a class like "competences," to be dealt with through an avoidance strategy. Top management and the Steering Committee can eventually fully control the necessary skills by recruiting, by outsourcing, and through the vocational training of internal resources.

3.5. Risk Control

The risk control phase aims to increase the effectiveness of the RMP over time. It consists of:

• Communication and consulting, the process of exchanging/sharing information about risk between the decision maker and the other stakeholders inside and outside an organization (e.g., departments and outsourcers, respectively). Information can relate to the existence, nature, form, probability, severity, acceptability, and treatment of risk (ISO/IEC Guide, 2002).
• Monitor and review, the process of measuring the efficiency and effectiveness of the organization's RMPs and of establishing an ongoing monitoring and review process. This process makes sure that the specified management action plans remain relevant and up to date. It also implements the control activities, including the re-evaluation of the scope and compliance with decisions (ENISA Study, 2007).

Obviously, risk communication is a necessary condition for enacting the RMP, while monitoring and review encompass the typical feedback control depicted in Fig. 10. The outcome of the effectiveness analysis can be formalized in a Risk Management Report (RMR) containing information about the events that occurred, the strategies and actions executed, and their success. The feedback can lead to changes in the RMP and even to a redefinition of the risk factors. Risk treatment itself sometimes introduces new risks that need to be identified, assessed, treated, and monitored. If a residual risk still remains after treatment, a decision should be taken on whether to retain this risk or repeat the risk treatment process.

The effectiveness analysis step theoretically requires metrics related to a valuation of risk before the RMP implementation (the unconditioned risk, if we suppose the risk factors free to occur) and after the RMP actions, along the life cycle. Defining such indicators, and controlling and evaluating the expected risk reduction, are complex and challenging tasks. Just a few contributions exist in the literature and, in our opinion, the technique of the RMP Effectiveness Index, as reported in the SAFE methodology (Meli, 1998), is the most interesting.
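One simple way to operationalize such an indicator is sketched below. The SAFE Effectiveness Index itself is not detailed in the chapter, so this sketch merely compares total unconditioned risk with total residual risk after the RMP actions, using Eq. (1); the numeric values are hypothetical placeholders.

```python
# Hedged sketch of an RMP effectiveness indicator: relative reduction of total
# project risk, computed from Eq. (1) values before and after the RMP actions.
# All figures are hypothetical; this is not the SAFE formulation.

unconditioned = {"R2": 0.6 * 8, "R3": 0.4 * 9, "R8": 0.5 * 7}   # before RMP
residual      = {"R2": 0.2 * 8, "R3": 0.3 * 6, "R8": 0.4 * 5}   # after RMP actions

def effectiveness(before: dict, after: dict) -> float:
    """Relative reduction of total project risk achieved by the RMP."""
    total_before = sum(before.values())
    total_after = sum(after.values())
    return (total_before - total_after) / total_before

print(f"RMP effectiveness: {effectiveness(unconditioned, residual):.0%}")
```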
Figure 10. Control cycle.

In our context, we may state that an indirect evaluation based on a set of balanced success metrics (technical, financial, organizational) at different points in time (Markus and Tanis, 2000a) is more useful:

• Project metrics. Typical project management performance measures related to the planned schedule, budget, and functional scope, typically based on Earned Value Analysis.
• Early operational metrics. Metrics related to the Go Live & Support phase. Although this is a transitional phase, the period from Go Live until normal operation is critical: the organization can lose sales or need additional investments, and exceedingly poor performance can lead to pressure to uninstall the system. Typical metrics (for a manufacturing firm) include short-term changes in labor costs, time to fill an order, inventory levels, reliability of due dates based on the ERP "Available To Promise" capability, and orders shipped with errors, but also system downtime and response time, employee job quality/stress levels, and so on. Such metrics support the monitoring of the system in this phase so that problems can quickly be identified and solved.
• Long-term business results. Some typical relevant metrics (others will be context-specific and related to goals and objectives) include business process performance, end users' skills, ease of migration/upgrading, competence of IT specialists, cost savings, and competence availability for IT investments subsequent to ERP (such as a data warehouse or Business Intelligence solution, which take advantage of the ERP database, data clean-up, and so on).
According to Markus and Tanis (2000a), disastrous Project Implementation and Go Live & Support metrics are sometimes coupled with high levels of subsequent business benefits from ERP. Conversely, projects with acceptable Implementation and Go Live & Support metrics sometimes do not obtain business benefits from installing the system in the long term.

4. Conclusions and Managerial Implications

In this chapter, we focused on the importance of RM throughout the ERP implementation life cycle and suggested an RM approach to manage the project. The main result is a guideline for managers that supports the RM process in each phase of the project life cycle; with this aim, we discussed techniques and tools which can support managers both during the early project assessment stages and during the implementation stages. The information provided gives managers and researchers advice and suggestions along all the phases of the RM process, customized to a typical ERP project. In particular, the chapter provides: (a) a classification of the risk factors and effects in ERP projects; (b) suggestions for a systematic analysis of the risk factor interdependencies and their causal links to potential effects, useful in the later stages of assessment; (c) considerations on tools and techniques which can support managers in risk evaluation; and finally (d) a methodological guide for the selection of suitable risk response strategies and control procedures.

Regarding managerial implications, here we give suggestions and evidence from the research described above for a sound approach to RM in an ERP introduction project (Schwalbe, 2008). In the context analysis phase, top management's commitment is essential in order to define the objectives and constraints of the project and to decide if and how to approach and plan the RM activities. This implies verifying whether the organizational skills, experience, and competencies, as well as the strength of the commitment, are consistent with those objectives and constraints. In this regard, the project team should review project documents and understand the organization's and the sponsor's approach to risk. In the risk assessment phase, the project team can use several risk identification/evaluation tools and techniques, such as brainstorming, the Delphi technique, interviewing, SWOT analysis, the risk matrix, and probabilistic networks. What should be kept in mind is that an ERP project incorporates technological, organizational, and financial risks. The imperative in this field is: focus on business needs first, not on technology! Any list of risk factors should include problems related to underestimating the importance of process analysis, requirements definition, and business process reengineering, as well as proper education and training both for employees and managers; moreover, attention should be paid to risk factor interdependencies. The suggested risk response planning approach (strategies of avoidance, acceptance, mitigation, and transference) is widely accepted and used, and the project team should involve external and internal IT, financial, and project management skills to
choose, for each strategy, the right actions with regard to technical, cost/financial, and schedule risks. In the risk control phase, risks should be monitored, and decisions about mitigation strategies should be made based on pre-established milestones and performance indicators. Finally, a further risk is related to the change management activities; active top-management support is important for the successful acceptance and implementation of companywide changes.

References

Ahmed, A, B Kayis and S Amornsawadwatana (2007). A review of techniques for risk management in projects. Benchmarking: An International Journal, 14(1), 22–36.
Aloini, D, R Dulmin and V Mininno (2007). Risk management in ERP project introduction: Review of the literature. Information & Management, 44, 547–567.
Aloini, D, R Dulmin and V Mininno (2008). Risk assessment in ERP introduction projects: Dealing with risk factors interdependence. In 9th Global Information Technology Management Association (GITMA) World Conference, Downtown Atlanta, Georgia, USA, June 22nd–24th.
Anderson, ES, KV Grude and T Haug (1995). Goal-directed Project Management: Effective Techniques and Strategies, 2nd edn., Bournemouth: PricewaterhouseCoopers.
AS/NZS 4360 (1999). Risk Management. Strathfield: Standards Association of Australia. Available at www.standards.com.au.
Beard, JW and M Sumner (2004). Seeking strategic advantage in the post-net era: Viewing ERP systems from the resource-based perspective. The Journal of Strategic Information Systems, 13(2), 129–150.
Boehm, B and R Turner (2003). Balancing Agility and Discipline: A Guide for the Perplexed. Boston, MA: Addison-Wesley Longman Publishing Co., Inc.
Chapman, C and S Ward (2003). Project Risk Management: Processes, Techniques and Insights. John Wiley.
Chen, IJ (2001). Planning for ERP systems: Analysis and future trend. Business Process Management Journal, 7(5), 374–386.
Cleland, DI and LR Ireland (2000). The Project Manager's Portable Handbook. New York, NY: McGraw-Hill Professional.
DeMarco, T and T Lister (2003). Risk management during requirements. IEEE Software, 20(5), 99–101.
DeSanctis, G (1984). A micro perspective of implementation. Management Science Implementation, Applications of Management Science (Suppl. 1), 1–27.
ENISA Study (2007). Emerging-Risks-Related Information Collection and Dissemination, February, www.enisa.europa.eu/rmra.
ISO/IEC Guide 73:2002 (2002). Risk Management. Vocabulary. Guidelines for use in standards. ISBN: 0 580 40178 2, 1–28.
Kerzner, H (2003). Project Management: A Systems Approach to Planning, Scheduling, and Controlling, 9th Edn., CHIPS.
Leonard-Barton, D (1988). Implementation characteristics of organizational innovations. Communications Research, October, 603–631.
Lucas, HC (1975). Why Information Systems Fail. New York: Columbia University Press.
Lyytinen, K and R Hirschheim (1987). Information systems failures: A survey and classification of the empirical literature. Oxford Surveys in Information Technology, 4, 257–309.
Markus, ML and C Tanis (2000a). The enterprise systems experience — From adoption to success. In Framing the Domains of IT Research: Glimpsing the Future Through the Past, Zmud, RW (ed.), Cincinnati, OH: Pinnaflex Educational Resources, Inc.
Markus, ML, S Axline, D Petrie and C Tanis (2000b). Learning from adopters' experiences with ERP: Problems encountered and success achieved. Journal of Information Technology, 15, 245–265.
Meli, R (1998). SAFE: A Method to Understand, Reduce, and Accept Project Risk. ESCOM-ENCRESS 98, Project Control for 2000 and Beyond, Rome, Italy, May 27–29.
Monk, E and B Wagner (2006). Concepts in Enterprise Resource Planning, Mac Mendelsohn (ed.), 2nd Edn., Canada: Thomson Course Technology.
PMI (2001). A Guide to the Project Management Body of Knowledge (PMBOK Guide), 2000 Edn., Project Management Institute Publications.
PMP research (2001). Industry Reports: Infrastructure Management Software. www.conferencepage.com/pmpebooks/pmp/index.asp.
Sage, AP (1977). Interpretative Structural Modelling: Methodology for Large Scale System, pp. 91–164. New York: McGraw Hill.
Schmidt, R, K Lyytinen, M Keil and P Cule (2001). Identifying software project risks: An international Delphi study. Journal of Management Information Systems, 17(4), 5–36.
Schultz, RL, MJ Ginzberg and HC Lucas (1984). A structural model of implementation, in management science implementation. Applications of Management Science (Suppl. 1), 55–87.
Schwalbe, K (2008). Information Technology Project Management, 5th Edn., Cengage Learning, ISBN-13: 9781423901457.
Soh, C and ML Markus (1995). How IT creates business value: A process theory synthesis. In Proceedings of the 16th International Conference on Information Systems, Amsterdam, December.
Wei, CC, CF Chien and MJ Wang (2005). An AHP-based approach to ERP system selection. International Journal of Production Economics, 96, 47–62.
Xu, H, JH Nord, N Brown and GD Nord (2002). Data quality issues in implementing an ERP. Industrial Management & Data Systems, 102(1), 47–60.
Biographical Notes

Davide Aloini was born in Catania (CT). He has been a PhD student in Economic-Management Engineering at Rome's Tor Vergata University since 2005. He received his BS and MS degrees in Management Engineering from the University of Pisa and his Master in Enterprise Engineering from the University of Rome "Tor Vergata." He is also working in the Department of Electrical Systems and Automation of Pisa University. His research interests include supply chain information management, ERP, risk management, e-procurement systems, business intelligence, and decision support systems.
Riccardo Dulmin was born in Piombino (LI). He graduated in Electronic Engineering from the University of Pisa and has been working with the Economics and Logistics Section of the Department of Electrical Systems and Automation since 1996; from 1999 to 2006, he was a researcher and assistant professor in Economics and Managerial Engineering; since 2006 he has been an associate professor of Information Technologies for Enterprise Management. His research interests include supply chain management, the development and application of decision analysis and artificial intelligence tools and techniques in operations management, and information systems.

Valeria Mininno was born in La Spezia. She graduated in Mechanical Engineering at the University of Pisa in 1993. From 1994 to 2001, she was a researcher and assistant professor in Economics and Managerial Engineering; since then she has been an associate professor of Business Economics and Supply Chain Management. Her main research interests are in supply chain management and in the development and application of decision analysis and artificial intelligence tools and techniques and their use in operations management.
Part III Industrial Data and Management Systems
Chapter 14
Asset Integrity Management: Operationalizing Sustainability Concerns
R. M. CHANDIMA RATNAYAKE
Center for Industrial Asset Management (CIAM), Faculty of Science & Technology, University of Stavanger-UiS, N-4036, Stavanger, Norway
[email protected]

The complexity of integrating the concept of sustainable development with the reality of asset integrity management (AIM) practices has been debated. It is important to establish and complete an AIM system with practical application value across the whole integrity management system. Identifying and prioritizing asset performance through identified risks and through the detection and assessment of data, resulting in cost savings in the areas of design, operation, and technology application, is addressed through a sustainability lens. The research study arose from a project initiated to develop governing documents for a major operator company for assessing asset integrity (AI), focusing particularly on design, operational, and technical integrity. The introduction of a conceptual framework for AIM knowledge, along with coupled tools and methodologies, is vital as it relates to sustainable development, regardless of whether the particular industry belongs to the public or private sector. The resulting conceptual framework for sustainable asset performance reveals how sustainability aspects may be measured effectively as part of AIM practices. Emerging AIM practices that relate to sustainable development emphasize design, technology, and operational integrity issues in order to split the problem into manageable segments and, in addition, to measure organizational alignment for sustainable performance. The model uses the analytic hierarchy process (AHP), a multicriteria analysis technique that provides an appropriate tool to accommodate the conflicting views of various stakeholder groups. The AHP allows users to assess the relative importance of multiple criteria (or of multiple alternatives against a given criterion) in an intuitive manner. This holistic approach to managing AI provides improvement initiatives rather than seemingly ad hoc decision making. The information in this chapter will benefit plant personnel interested in implementing an integrated AIM program or in advancing their current AIM program to the next level.

Keywords: Asset integrity management; sustainability; performance measurement; analytic hierarchy process.
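Since the chapter's model relies on the AHP, the sketch below illustrates how relative priorities might be derived from a pairwise comparison matrix using the common geometric-mean approximation of the principal eigenvector. This is a generic illustration, not the chapter's actual model; the criteria names and pairwise judgments are hypothetical.

```python
import numpy as np

# Illustrative AHP priority calculation. The criteria and pairwise judgments
# are hypothetical, chosen only to show the mechanics.

criteria = ["Economic", "Environmental", "Societal"]

# pairwise[i][j] = how much more important criterion i is than criterion j
pairwise = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Geometric-mean approximation of the principal eigenvector, then normalize.
geo_means = pairwise.prod(axis=1) ** (1.0 / len(criteria))
priorities = geo_means / geo_means.sum()

# Simple consistency check via the principal eigenvalue (CR < 0.1 is usual).
lambda_max = max(np.linalg.eigvals(pairwise).real)
ci = (lambda_max - len(criteria)) / (len(criteria) - 1)
cr = ci / 0.58          # 0.58 = Saaty's random index for a 3x3 matrix

for name, p in zip(criteria, priorities):
    print(f"{name}: {p:.3f}")
print(f"Consistency ratio: {cr:.3f}")
```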
1. Introduction

The decade of the 1970s was a watershed for international environmentalism. Notably, the first US Earth Day was held in 1970, the same year that the US Environmental Protection Agency (EPA) was created. The first United Nations (UN) Conference on the Human Environment was held in Stockholm in 1972, which led to the formation of the United Nations Environment Programme (UNEP). The UN then set up the World Commission on Environment and Development, also called the Brundtland Commission, which defined sustainable development in its 1987 report, "Our Common Future" (see Ratnayake and Liyanage, 2009), as development that "meets the needs of the present generation without compromising the ability of future generations to meet their own needs." Since then the influence of the concept has increased, and it features increasingly as a core element in policy documents of governments and international agencies (Mebratu, 1998). For instance, in the same decade, governments reacted to public concern about the environment by enacting a raft of legislation. For example, the US Congress enacted the seminal legislation for clean water, clean air, and the management of waste. The hard work of activists and writers such as Rachel Carson, with her 1962 book Silent Spring (Carson, 1962), had started to pay off.

The response by industry to the call for regulation and to public concern was to design and implement management systems for health, safety, and environment (HSE) to assure management, shareholders, customers, communities, and governments that industrial operations were in compliance with the letter and the spirit of the new laws and regulations. Corporate environmental concerns were comprehensively incorporated into corporate policies and procedures during the late 1980s and early 1990s. As these management systems grew during the 1990s, there was growing recognition of the interrelationship between economic prosperity, environmental quality, and social justice. The phrase "sustainable development" became the catchword in government and corporate circles to include these three pillars of human development. A more recent definition of the concept of sustainability was presented by John Elkington in his book Cannibals with Forks. Elkington describes the triple bottom line ("TBL", "3BL", or "People, Planet, Profit") concept, which balances an expanded spectrum of values and criteria for measuring organizational success through economic, environmental, and societal conditions (Elkington, 1997; Ratnayake and Liyanage, 2007, 2008). For example, the development of innovative technologies has played an important role in increasing the global competitive advantage of high-tech companies (Ma and Wang, 2006). On the other hand, "[w]ho can resist the argument that all assets of business should contribute to preserving the quality of the societal and ecological environment for future generations"? The need to incorporate the concept of sustainable development into decision making, combined with the World Bank's three-pillar approach to sustainable development, resulted in the popular business term "triple-bottom-line decision making" (World Bank, 2008).
The World Summit on Sustainable Development (WSSD) in 2002 highlighted the growing recognition of the concept by governments as well as businesses at a global level (Labuschagne and Brent, 2005) and demonstrated very clearly that it is not practical to consider environmental issues separately from socioeconomic issues such as health and safety, poverty, etc. The term "sustainable development" is therefore used in the sense of sustaining human existence, including the natural world, in the midst of constant change. It would be a mistake to use the term to mean no change or to assume that we can freeze the status quo of the natural world. However, the rate of change is an important consideration. Perhaps the challenge to the modern business world should be more properly named "management of change" instead of "sustainable development."

2. Sustainability in Industrial Asset Performance

The term "sustainable development" in the context of asset integrity management (AIM) is not used to mean sustaining the exploitation of an asset indefinitely. Rather, it means meeting the needs of global society by producing a product at a reasonable cost, safely, and with minimal impact on the environment. The traditional industrial model focused on labor productivity as the road block toward local and global industrial sustainability, while assuming nature would allow the available resources to be exploited indefinitely. Moreover, most industrial minds are reluctant to change their mindset to obtain the benefits of resource productivity. Consequently, many companies have not paid enough attention to quantifying the link between sustainability actions, sustainability performance, and financial gain, or to making the "business case" for corporate social responsibility. Instead, they act in socially responsible ways because they believe it is "the right thing to do." The identification and measurement of societal and environmental strategies is particularly difficult, as they are usually linked to long time horizons, a high level of uncertainty, and impacts that are often difficult to quantify. This is well captured by the first EPA administrator, William Ruckelshaus: "Sustainability is as foreign a concept to managers in capitalist societies as profits are to managers in the former Soviet Union" (Hart and Milstein, 2003). That is, for some managers sustainability is a moral mandate and for others a legal requirement. Yet others view sustainability as a cost of doing business — a necessary evil to maintain legitimacy and the right to operate. A few firms such as HP, Toyota, etc., have begun to frame sustainability as a business opportunity, offering avenues for lowering cost and risk, or even for growing revenues and market share through innovation (Holliday, 2001). Assessing enterprise sustainability remains difficult for most firms due to maturing assets, misalignments within the organization, the lack of a mechanism to recognize the present alignment of sustainability concerns and so identify gaps, etc.; addressing these issues may in turn be reconciled with the objective of increasing value for the firm itself as well as for its stakeholders. On these grounds, AIM was initially conceived to focus on industries with hazardous operations such as oil and gas, nuclear power, etc.
For example, the offshore oil and gas industry on the UK Continental Shelf (UKCS) is a mature production area, and much of the offshore infrastructure is at, or has exceeded, its intended design life. An apparent general decline in the condition of the installations led to Key Program 3 (KP3), which focused on AI during the period 2004–2007. Figure 1 illustrates a holistic view of how the AIM backbone supports sustainability approaches. As Lord Kelvin put it, "When you can measure what you are talking about and express it in numbers you know something about it, but when you cannot measure it, when you cannot express it in numbers, your knowledge is a meager and unsatisfactory kind: it may be the beginning of knowledge, but you have scarcely in your thoughts, advanced to the stage of science" (Ratnayake and Liyanage, 2009). For managing industrial assets, there must be a way to measure the assets' performance. The late Peter Drucker influenced generations of managers with his admonition: "If you can't measure it, you can't manage it." That is, to manage an organization, be it a nuclear installation, an O&G plant, or a child-welfare agency, managers have to be able to measure what they are doing (Behn, 2005). To achieve so-called sustainability, a commercial organization has to design and then adapt its asset management structure, policies, and procedures to guide and regulate its internal practices. Asset upkeep is seen as a cost center according to classical economic theories. Nevertheless, in some leading companies such as Toyota, HP, Shell, etc., managers have begun to realize the importance of intangibles and to reexamine industrial operations through a value-added lens.
Figure 1. Industrial asset performance from a Triple Bottom Line (TBL) sustainability and AIM point of view: a stakeholder needs-based conceptualization of sustainable asset performance (economy, environment, society) takes financial, human, information, and physical assets as inputs and delivers TBL impact assessment and trends for sustainability as outputs, with DI, OI, and TI supported by the AIM backbone across short-term and long-term foci.
Hence, asset upkeep is now seen not only as a cost, but also as a process with significant potential to add value for long-term survival in a competitive business world. More recent publications that have brought this issue into open discussion include Liyanage (2003), Liyanage and Kumar (2003), Jawahir and Wanigaratne (2004), Liyanage (2007), and Ratnayake and Liyanage (2007). One of the critical elements of sustainability in the industrial world lies in understanding the role that industrial assets play in this process. Because industrial assets largely drive the way in which resources are consumed, waste is created, and society is structured, their role is not insignificant. In fact, some neoclassical economists believe that many pessimistic views of resource scarcity are driven by a misunderstanding of the powerful substitutability between industrial (technology-related) assets and natural resources (Stiglitz, 1979). It is recognized that the right priorities are a critical ingredient in operationalizing sustainability concerns within AIM. This chapter devises an "operational knowledge tool" that can help determine the present priorities of an entire industrial organization for managing assets. The results can then be used to align (manage) the entire organization to best suit the needs of a sustainable industry.
3. What Is Asset Management (AM)?
According to the Xerox Corporation, "Asset management is the process of reusing an asset (machine, subassembly, piece part and packing material) either by remanufacturing to its original state, converting to a different state or dismantling to retrieve the original components" (Boateng et al., 1993). The new British Standard, PAS 55, endorses the need for primary, performance-accountable asset (or business) units, with secondary "horizontal" coordination and efficiency aids through asset-type specializations, common service providers, standards, etc. However, not many managers involved with AM can really claim to have such a structure in place yet. PAS 55 provides a holistic definition of AM: "Systematic & coordinated activities and practices through which an organization optimally manages its physical assets and their associated performance, risks and expenditures over their lifecycles for the purpose of achieving its organizational strategic plan." Hence, AM can be considered as the optimum way of managing assets to achieve a desired and sustainable outcome (PAS 55-1, 2004). Consequently, it can also be concluded that "AM is the art and science of making the right decisions and optimizing the related processes." The management of "physical assets" (for instance, their design, selection, maintenance, inspection, renewal, etc.) plays a key role in determining the operational performance and profitability of industries that operate assets as part of their core business. For AM to live up to these key roles, it has to meet a number of challenges. Some of these challenges are (Wenzler, 2005):
1. Alignment of strategy and operations with stakeholder values and objectives
2. Balancing of reliability, safety, and financial considerations
3. Benefiting from performance-based rates
4. Living with the output-based penalty regime, etc.
The fundamental asset management tasks range from technical issues such as maintenance planning and the definition of operational fundamentals, through more economic themes such as investment planning and budgeting, to strategic planning issues. The general configuration of an asset-centered organization can be visualized as shown in Fig. 2. Investments depend on the availability of money, which is directly influenced by internal and external subcontractors, while capital expenditure (CAPEX) and operational expenditure (OPEX) determine the net performance for which the asset unit is accountable. Thus each company should not only compete efficiently, but also manage knowledge, strategic costs, and strategic advantages by tracking value creation through each asset (Sheble, 2005). The value added by each asset is based on its value to the supply chain, and it should be properly introduced and managed to provide ultimate stakeholder satisfaction.
Figure 2. Asset-centered organization: an asset manager and a multidisciplined team, accountable for net performance and responsible for CAPEX and OPEX, operate within discrete asset system boundaries with measurable performance, and interface with internal and external subcontractors through clear contracts, service-level agreements, alliances, etc.
But one can ask oneself the following questions:
• How does a company get there?
• How do they know, and demonstrate, what is "optimal"?
• How do they coordinate component activities towards this goal?
• How can the responsibility towards integrated, sustainable performance be instilled?
• How do we develop the skills, tools, and processes to establish and sustain such an environment in the first place?
To handle the whole picture, the relationships between human, financial, information, infrastructure, and intangible assets and physical "assets" must be well understood. Figure 3 illustrates how physical assets coexist with financial, intangible, information, and human assets, while the area within "PQRS" can be considered as pure infrastructure assets such as buildings, machines, inventories, etc. (PAS 55-1 & 2, 2004). Table 1 illustrates how physical assets are interconnected with financial, intangible, information, and human assets through AI lenses, surrounded by the industrial world. The concept of asset management is difficult to accept as a philosophy and to implement in practice because asset management means different things to people who work in dissimilar disciplines. For example, some disciplines in an organization may feel they already have an asset management system in place when they have only implemented an inventory-control system. Each stakeholder in a company may target changes in assets for different goals. Those in maintenance might view assets as machines that need to keep working.
Figure 3. Physical assets through asset integrity lenses. (Adapted from PAS 55-2, 2004.)
Table 1. Interconnecting Relationships of Physical Assets Through AI Lenses.
Intersection: example cases for interconnecting relationships with physical assets.
Financial and physical: life-cycle costs, capital investment criteria, operating costs, depreciation, taxes, etc.
Intangible and physical: reputation, image, morale, health and safety constraints, social and environmental impacts, etc.
Information and physical: condition monitoring, performance and maintenance activities, overheads and opportunities, etc.
Human and physical: training, motivation, communication, roles and responsibilities, knowledge, experience, skills, competence, leadership, teamwork, etc.
To those in finance, the assets represent bundles of capital and cash flow, which they may be tempted to covet without regard for their true purpose. Distribution looks at assets as a means of transporting goods more effectively. Field managers see them as ways of getting products into warehouses or transporting them. Manufacturing may see them as enablers of quality. IT may view them as enablers of information management. Many organizations say "people are our most important asset," yet, in relation to physical assets, understand little about what it means to nurture and develop them. The CEO sees all of them as competitive differentiators (Woodhouse, 2001). Because of these obvious differences among organizations, and within the same organization, the asset management implementation plan should have a series of overarching principles established at a high level. Hence, the asset management implementation plan should be (PAS 55-1 & 2, 2004):
• Holistic: looking at the big picture, i.e., integrating the management of all aspects of the assets (physical, human, financial, information, and intangible) rather than taking a compartmentalized approach.
• Systematic: a methodical approach, promoting consistent, repeatable decisions and actions, and providing a clear and justifiable audit trail for decisions and actions.
• Systemic: considering the assets as a system and optimizing the system rather than optimizing individual assets in isolation; as Goldratt said, "A system of local optimums is not an optimum system," and such local optimization can result in islands of productivity within a factory that, overall, is a mess.
• Risk-based: focusing resources and expenditure, and setting priorities, appropriate to the identified risks and the associated costs/benefits.
• Optimal: establishing an optimum compromise between competing factors associated with the assets over their life cycles, such as performance, cost, and risk.
• Sustainable: considering the potential long-term adverse impact on the organization of short-term decisions aimed at quick wins. This requires achieving an optimum compromise between performance, costs, and risks over the assets' life cycle or a defined long term.
Figure 4. Role of the asset manager in industry.
Achieving this would be difficult with separate capital and operating expenditures and annual accounting cycles. Performance accountability and investment/expenditure responsibility should therefore be more closely linked and should lie within the asset management context. Figure 4 illustrates the role of the asset manager in an industrial organization. The understanding of what is worth doing, why, when, and how, including the linkages between the asset management strategy and the overall objectives and plans for the entire organization, is of prime importance. As suggested by PAS 55, assets are not all the same: there is diversity in asset type, condition, performance, and business criticality. Mapping what is worth doing, to which assets, and when, is complex, dynamic, and uncertain, and involves a mix of outputs, constraints, and competing objectives. Dividing the whole problem into manageable components and understanding the asset management system boundaries is vital. Figure 5 illustrates how these complexities are interlinked.
4. The Origins of "Integrated, Optimized Asset Management"
The best asset leaders pride themselves on being able to deal with tough asset situations and solve problems with their tools in a proactive manner, while most managers at present spend their time reacting to breakdowns and emergencies rather than planning ahead on how to optimize their assets for business performance. There is certainly a big contrast between merely "managing the assets" (which many companies would feel they have been doing for decades) and the integrated, optimized whole-life management of physical, human, intellectual, reputation, financial, and other assets.
Figure 5. Relationship of "top to bottom and bottom to top" with respect to the asset management system (adapted from PAS 55-1 and 2, 2004): legal and stakeholder requirements and expectations (customers, shareholders, suppliers, regulators, employees, society, etc.), business plans, and the asset management policy cascade top-down into an optimized AM strategy (objectives, plans, performance targets) and into individual asset systems or business units with their own performance, condition, criticalities, and needs (design, operational, and technical integrity), while performance and condition monitoring feed priorities for continuous improvement bottom-up.
The acquisition, use, maintenance, modification, and disposal of critical assets and properties are vital to most businesses' performance and success. Globalization, shifting labor costs, maturing assets, and sustainability concerns all create pressure to further improve current asset performance. The word sustainability has become a priority, especially in natural resource-dependent industries (for instance, Shell, BP, Toyota, etc.). As the changing nature of legal requirements and stakeholder pressure upsets the "profitability equation," businesses have started searching for areas where they can recalibrate quickly to stay competitive. On these grounds, the performance of physical assets and AM became key to altering the profitability equation. Figure 6 illustrates the evolution of AM and corporate thinking. Over the decades, AM has transformed from a "necessary evil" to what it is today, where companies look at entire asset lifecycles and align AM with strategic and sustainable goals. In the near future, we can expect to see more technology integrated into the assets themselves to address sustainability issues. Technologies such as self-diagnostics, radio-frequency identification (RFID) chips, etc., along with AI techniques, will enable the communication of status, breakdowns, and performance metrics directly to management systems in real time.
Figure 6. The evolution of AM and corporate thinking: value/impact rises from operational to strategic and sustainable as AM moves from a "necessary evil" (circa 1970: paper systems, corrective maintenance, frequency-based PMs) through early automation (circa 1980: new technologies, PdM, software systems, changed PM systems), systemized management (circa 1990: maturing organizations, looking at root causes, mature software systems, software that adjusts to the business), and lifecycle awareness (circa 2000: searching for lifecycles, upstream improvements, wireless technologies, product design for manufacturing) to distributed intelligence (circa 2010: self-diagnostics, communicating assets, product and process improvements for sustainability).
Over the last quarter century, the surge in corporate and regulatory interest in better, optimized, integrated asset management, that is, management that is financially viable and environmentally friendly and that does not compromise health and safety within and around an industrial organization, has gathered considerable momentum (Ciaraldi, 2005). For example, the oil and gas sector in the European North Sea has had the longest to prove the necessity for integrated, optimized AM, starting with the wake-up calls of the late 1980s and the 1990s: the Piper Alpha disaster, the Brent Spar incident, the oil price crash, Lord Cullen's recommendations on risk/safety management, market globalization, etc. These examples carry several implications for the smart asset manager. First, the asset manager must understand that their function is constantly changing. This means that they will need to understand and apply new practices and new technologies to address new challenges. From a reactive perspective, these managers will have to adapt and evolve to stay competitive, keep up with customers, be compatible with supply and distribution partners, and stay on top of the constant top-down mandates to perform better and reduce costs. The second implication, a more offensive (proactive) one, is the opportunity for excellence, where the leaders in AM do not just follow the evolution but help create it. The best and boldest asset managers will see the changes in practices, design, and technology as a new opportunity to serve their business better, drive competitive differentiation, and show leadership to customers (Alguindigue and Quist, 1997). Better use of the asset portfolio to meet the many demands of stakeholders requires attention to financial performance, design and production processes, operational effectiveness, and so on.
Those who see the evolution as an opportunity may find themselves in unique positions to add proactive value to the organization and stand to contribute to the business both operationally and strategically. The evolution implies a need to sort and examine an increasingly complex, intertwined array of options. The asset manager is supposed to come up with ideas that are "best practices"; the question is where best practices are to be found. As each business has its unique needs and limited funds to invest, the challenge for the asset manager is to sort through the mess and glitz of the fashionable "best practices" buildup and find the practices that suit them. In other words, the best practices are the ones that align organizational strategy with sustainability concerns and drive more value to the "triple bottom line." In this context, it is important to measure the organization's present priorities and recognize the gaps with respect to what the priorities should be. These pressures forced a fundamental reappraisal of business models, and the recognition that big companies, while holding a number of strategic advantages and economies of scale, were losing the "joined-up thinking" and operational efficiency that smaller organizations naturally enjoy (or need, in order to survive). Hence, asset-centered organization units emerged, with the term "asset" taking on various differing definitions. For instance, some used the oil/gas reservoir as their starting point, along with all the associated infrastructure to extract it, while others chose the physical infrastructure (platforms) itself as the unit of business or profit center.
5. Asset Integrity (AI): Definition
AI for a sustainable industrial organization requires good knowledge of the business, the asset condition, the operating environment, and the link between application data and decision-making quality, together with the integration of distinctive management modules (e.g., resource, safety, risk, environmental, project, financial, and operations and maintenance management) for the delivery of results (Liyanage, 2003b). The crucial task in AM is to understand clearly what is important to the business and how to deliver it from the assets, to make the underlying objectives of an asset clear to everyone, and to ensure that they stay within the frame of the business objectives, minimizing inherent clashes (Hammond and Jones, 2000). The following definitions exemplify this explanation and help in understanding the essence of AI in a broad sense:
• "Sum of all those activities that result in appropriate infrastructure for the cost-efficient delivery of service, since it intends to match infrastructure resource planning and investment with delivery objectives." (Rondeau, 1999).
• "AI is a continuous process of knowledge and experience applied throughout the lifecycle to manage the risk of failures and events in design, construction, and during operation of facilities to ensure optimal production without compromising safety, health and environmental requirements." (Pirie and Ostny, 2007).
• "The maintenance of fitness for purpose of offshore structures, process plant, connected pipelines and risers, wells and wellheads, drilling and well intervention facilities, and safety systems." (International Regulators Forum; Richardson, 2007).
Primarily, AI is about making sure the assets function effectively and efficiently while safeguarding life and the environment. Assuring AI means that risks to personnel are controlled and minimized, which at the same time ensures stable industrial operations. As General Colin Powell put it, "If you are going to achieve excellence in big things, you develop the habit in little matters. Excellence is not an exception, it's a prevailing attitude." (Notes, 2007). To understand the concept of AI broadly, it can be divided into three segments: design integrity (DI), operational integrity (OI), and technical integrity (TI). For the sake of clarity, these terms can be defined as follows:
• DI: "('Assure design for safe operations'): Assurance that facilities are designed in accordance with governing standards and meet specified operating requirements." (Pirie and Ostny, 2007).
• OI: "('Keep it running'): Appropriate knowledge, experience, manning, competence and decision-making data to operate the plant as intended throughout its lifecycle." (Pirie and Ostny, 2007).
• TI: "('Keep it in'): Appropriate work processes for maintenance and inspection systems and data management to keep the operations available." (Pirie and Ostny, 2007).
Table 2 provides general issues related to each.
6. Case: Flaring System within a Petroleum Asset Through the Lens of AI
In the 1980s, the Norwegian government decided to introduce a CO2 tax, focusing attention sharply on excessive flaring within the Norwegian sector of the North Sea (ARGO, 2004; PUBS, 2005). Within the UK offshore operational area, for example, flaring accounts for some 20% of CO2 emissions from the offshore oil production industry (with 71% coming from power generation), and it leads to environmental, safety, and maintenance challenges in relation to the flare ignition/combustion system (UKOOA, 2000). However, even with the taxes, the cost of retrofitting existing offshore assets often calls into question the benefit of full zero-flare solutions. As a result, many operators have been influenced to pursue flare gas recovery or zero flaring as a requirement for new assets. For existing assets, on the other hand, it is important to re-examine the total benefits of installing complete zero-flare systems, or to consider partial installation based on the benefits (ARGO, 2004), as seen through TBL lenses.
Table 2. General Issues Pertaining to DI, TI, and OI in Relation to Industrial Applications.
DI:
• Long tradition in the industry of designing safety barriers according to regulations and recognized international standards, followed by in-depth verification programs in design and construction.
• Struggle with the transfer of data and knowledge from construction to operation.
• Struggle with change management control.
OI:
• Overall work process for maintenance/inspection planning and execution well established.
• Inadequate integration of maintenance and safety work processes.
• Work processes for the analysis of experience data and continuous improvement not in place.
• Traditionally, operators and maintenance disciplines are technically competent, but lack the analytical skills required for the application of more systematic and advanced decision models.
• Struggle with knowledge management.
TI:
• Maintenance management systems (MMS) are in place.
• Varying quality of planning and prioritization; expert judgment rather than risk models and in-house experience data.
• Reporting of failure information generally poor for optimization purposes.
However, the experience of managing assets in the Norwegian petroleum industry clearly demonstrates that the financial reward and the technology developed are today available for the entire industry to use. Charlie Moore, Director of Engineering, Stargate Inc., stated: "we moved to tools that are light years ahead of what we were using. Our previous tools were slow and not rules-driven. We had many errors escaping into the field which hurt our reputation. Now with these new tools, we are delivering more robust products at less cost." (Aberdeen Group, 2007). Through AI lenses, achieving a zero-flaring system relates most directly to DI. One of the main areas of real benefit for operators in the petroleum industry using zero-flaring systems is the major reduction in maintenance requirements of the flare system. Failure of components within the flare system will at best cause a safety concern and at worst force an unexpected facility shutdown. If uncorrected, damage to the flare system can impact the OI of the process facility. Apart from that, component failure on the flare system also leads to TI issues: for example, during low flow-rate flaring the flame wafts about in the breeze, impinging directly on the flare tip itself, the pilots, and any other equipment on the flare deck. Any flare system, and particularly one located on vertical towers, is susceptible to dropped objects. There is usually a relatively small top flare deck, and if any part of the flare system were to fail, there would be a risk of dropped objects.
Typical dropped objects may include areas of the outer wind fence on the flare itself, damaged by heat load and battered by wind (ARGO, 2004). Other items susceptible to breaking away from the flare include elements of the pilot and ignition assembly. There is plenty of experience within the industry as a whole of pilot nozzles failing or parts of the ignition rods breaking free. In more extreme cases, parts of the flare can fall, causing health and safety issues. Finally, the flare deck will require periodic repair, especially if upset liquid was flared during the process, resulting in very high heat loads on the flare deck. In addition, the maintenance of AI is not just about the integrity of equipment but also about developing and maintaining integrated systems and work processes, and ensuring the competence of individuals and teams, in order to create and sustain a world-class operating culture supported by a few clear and well-understood values and behaviors (BP, 2004). AI is the sum of these features and offers a way of assessing the net merit of any new activity (which improves some features at the expense of others) while assuring that the asset functions effectively and efficiently and that life and the environment are safeguarded. Figure 7 illustrates how AI declines with time. Table 3 illustrates varying scenarios of AI in different periods of an industrial asset (Ratnayake and Liyanage, 2009a). Figure 8 illustrates some general example elements for assessing how AI can be improved.
7. Roles of Integrity Management (IM)
Integrity is defined as ". . . unimpaired condition; soundness; strict adherence to a code . . . , state of being complete or undivided," and management as "judicious use of means to accomplish an end" (Webster, 2008). The definition of IM is field-specific. For instance, in oil and gas production operations it is defined as a "continuous assessment process applied throughout design, maintenance, and operations to proactively assure facilities are managed safely." (Ratnayake and Liyanage, 2009; Ciaraldi, 2005).
Figure 7. The behavior of AI under reactive, proactive, and continuous-improvement conditions during the life-cycle of an asset (Ratnayake and Liyanage, 2009a): asset integrity is plotted against time, with the points A–F marking the stretches discussed in Table 3.
Table 3. Varying scenarios of AI in different time periods.
Stretch AB (inherent decline with "living" assets): "Living" assets are subjected to variations which in turn affect the condition of equipment, leading to technical and operational problems.
Stretch BC (improvement measures): Design, technical, and operational improvements.
Stretch BF (problems in AI management): Unexpected problems can occur due to poorly defined AI practice, for instance, relaxation of procedures, changes in operating conditions, lack of competence and training, etc.
Stretch CD (AI management with continuous improvement): Change maintenance practices proactively through monitoring; continuously revise maintenance procedures; redirect capital expenditure more efficiently, based on planned replacement of equipment rather than replacement following a failure; use better data management processes to capture and institutionalize expert system knowledge with more experienced personnel, elevating the level of all technical people.
Stretch CE (AI management following "absolute minimum" solutions): Strict conditions on budget and resource allocations; poorly managed/changed processes.
IM requires industrial asset operators to assess, evaluate, repair, and validate, through comprehensive analysis, the safety, reliability, and security of their facilities in high-consequence areas to better protect society and the environment (FERC, 2005). BP defines it as "the application of qualified standards, by competent people, using appropriate processes and procedures throughout the plant's life cycle, from design through to decommissioning." (Ratnayake and Liyanage, 2008; BP, 2004). Corrosion, metallurgy, inspections, and non-destructive evaluation (NDE) readily come to mind as fields where IM plays an important role. But health, safety, environmental, and quality (HSE&Q) issues, in parallel with financial and stakeholder expectations (that is, people, competency assurance, procedures, emergency response, incident management, etc.), also carry significant IM roles.
Figure 8. Asset integrity enhancements. E1, implementation of technical (hardware) recommendations; E2, training and competence assessment of integrity of personnel; E3, implementation of operation, engineering, maintenance procedures and inspection programs; E4, development and implementation of integrity operating windows; E5, effective implementation of risk management and management of change. (Adapted from Shell, 2008.)
The catastrophe history from Grangemouth (1987), Piper Alpha (1988), Longford (1998), P-36 (2001), Skikda (2004), the Texas refinery incident (2005), and others has led many IM practitioners to conclude that a more holistic approach is needed for managing industrial assets. Effective IM combines many activities, skills, and processes in a systematic way, from the very top of the organization to the bottom and vice versa. Hence the managing of these activities, skill sets, and processes can be called "IM" (Ciaraldi, 2005), which has been part of mainly hazardous industries (for instance, nuclear installations, oil and gas facilities, chemical processing plants, etc.) for at least the latter half of the 20th century and perhaps even before. This is because each aspect of IM can be considered as an obstacle to the consequences of a major incident, the prevention and mitigation of which are its main objectives (Saunders and O'Sullivan, 2007). Figure 9 shows how imperfect filters (imperfect decision bases) can result in major accidents. Historically, the barriers and escalation control layers were defined by standards, regulations, engineering codes (or practices), etc. Though effective IM uses all of these, with rocketing stakeholder pressures they must be refined from time to time, and the correct priorities must be selected based on ideas drawn from the whole cross-section of an organization. The following are some of the IM buzzwords appearing in the petroleum industry: Well Integrity Management (WIM), Structural Integrity Management (SIM), Pipeline Integrity Management (PIM), Pressure Equipment Integrity Management (PEI), Rotating Equipment Integrity Management (REIM), Lifting Equipment Integrity Management (LEIM), Civil Work Integrity Management (CWIM), etc.
Figure 9. Visualization of major accident event in relation to operating assets and the prioritization process.
However, the industry has realized the need to organize all the fragmented IM systems into a single entity, namely, the AIM system (Alawai et al., 2006).
8. Asset Integrity Management
AIM is a complete and fully integrated company strategy directed toward optimizing efficiency, thereby maximizing the profit and sustainable return from operating assets (Montgomery and Serratella, 2002). This is one of many definitions used to describe AIM, and the definition varies somewhat depending on the industry. It nonetheless constitutes the basis for the significant performance improvement opportunities available to almost every company in every industrial sector. If we broaden the scope to describe not just infrastructure assets but "any" core owned elements of significant value to the company (such as good reputation, licenses, workforce capabilities, experience and knowledge, data, intellectual property, etc.), then AIM represents the sustained best mix of:
• Asset "care" (i.e., maintenance and risk management), and
• Asset "exploitation" (i.e., "use" of the asset to meet some corporate objective and/or achieve some performance benefit).
Figure 10. The concept of AIM (from Ratnayake and Liyanage, 2009): IM of physical assets comprises DIM (design for production/manufacturing and design for maintenance) together with OIM and TIM (plant operations, inspections, maintenance, etc.), which jointly constitute AIM.
For the care and exploitation of physical assets, the whole problem is divided into manageable subsections. That is, managing AI consists of Design Integrity Management (DIM), Operational Integrity Management (OIM), and Technical Integrity Management (TIM), as suggested by Fig. 10. The financial services sector, for instance, uses the term to describe finding the right combination of asset "value retention" (capital value) and "exploitation" (yield) over the required time horizon. Physical assets can likewise be protected and well cared-for, with high capital security (condition) but lower immediate returns (profit), quite similar to different bank accounts or investment options. On the other hand, they can be "sweated" for better short-term gains, sacrificing long-term gains and putting their future usefulness/value at risk. AIM for sustainability involves trying to juggle these conflicting objectives: milking the cow today but also caring for it so that it can be milked and/or sold well in the future, by definition sustaining the "license-to-operate." In this context, AIM for "sustainability" is the phrase for the resolution of trade-offs and compromise requirements, but few really understand what it means in practice. A sustainability focus involves "equality of impact, pressure or achievement" on the one hand and, on the other, trying to find the most attractive "combination" (sum) of conflicting elements. This may involve several options, such as a lot of cost at very little risk, or vice versa, or any other combination, just so long as the net total impact is the best that can be achieved. Figure 11 illustrates how sustainability and capital value vary with respect to yield (based on the level of asset exploitation).
Figure 11. The concept of "sustainability" in the AIM context (adapted from Bower et al., 2001): business impact (net present value) is plotted against the level of asset exploitation; the yield curve and the sustainability-and-capital-value curve balance to give the resultant business impact and the optimum focus for AIM.
Managers in the context of AM should not organize themselves into groups of functional specialization, as the whole picture will then not surface. This is because uncertainties about asset behavior, future requirements, performance values, costs, and risks all contribute to making the boundaries "fuzzy" in nature. For instance, departments are set up to design/build the assets ("engineering"), to exploit them ("operations" or "production"), or to care for them ("maintenance"). Only the top level of an industrial organization has the responsibility for optimizing the combination, unless "asset-based management" has been adopted properly along a cross-section of the organization. Organizing themselves by "activity type" may be administratively convenient for the managers, but it loses sight of the larger sustainability perspective. The slogan "Every one is optimizing. Don't be left out!" (TEADPS, 2009) is visualized in relation to the AIM-for-sustainability formula, AIM = f(economic, environmental, and societal issues), as shown in Fig. 12. AIM comes into the equation mainly because of the many improvement methodologies and techniques available to improve plant reliability and availability, such as Reliability-Centered Maintenance (RCM), Planned Maintenance Optimization (PMO), Risk-Based Inspection (RBI), Total Productive Maintenance (TPM), Total Quality Management (TQM), Six Sigma, etc. How can they be used in an optimum way to satisfy environmental, health, and safety (societal) concerns while making a profit? Further, all of these techniques have significant limitations when dealing with high-consequence, low-probability events, that is, the events that have a potentially catastrophic impact on industrial AI and, consequently, on the sustainability of an organization.
Figure 12. Optimization of AIM concerns through sustainability lenses: RCM, PMO, RBI, TPM, TQM, Six Sigma, etc., are applied at the intersection of economy, environment, and society to achieve optimum AIM with sustainability concerns.
9. Present Grounds for Pushing Industry Toward Sustainability Focusing and Searching for AIM
"Brent Spar," one of Shell's offshore installations in the UK sector of the North Sea, provides lessons on how, as time progresses, the constraints surrounding an industrial asset may increase and, as they do, the room for action shrinks. Shell was compelled to abandon its plans to dispose of Brent Spar at sea (while continuing to stand by its claim that this was the safest option) due to public and political opposition in northern Europe (including some physical attacks and an arson attack on a service station in Germany), from both an environmental and an industrial health and safety perspective. The incident came about as a result of Greenpeace organizing a worldwide, high-profile media campaign against Shell's plan for disposal in deep Atlantic waters at North Fenni Ridge (approximately 250 km from the west coast of Scotland, at a depth of around 2.5 km). Thousands of people stopped buying their petrol at Shell outlets, although Greenpeace never called for a boycott of Shell service stations (Anderson, 2005). The final cost of the Brent Spar operation to Shell was between £60 M and £100 M when the loss of sales ("Shell's retail sales in Germany and other European countries had fallen by 30%") was considered (Melchett, 1995). The incident further exemplifies how decisions made for short-term benefit have economic, environmental, and societal repercussions. Balancing sustainability concerns with business performance can put some owners and operators in a precarious situation. The dilemma places them in a "gray zone" of juggling risks and reward. But there has been no better time in the industry to feel confident about managing AI than today. Figure 13 illustrates how AIM comes to the surface through sustainability concerns while the degree of uncertainty rockets due to the increasing number of constraints and the shrinking room for action for managing an industrial asset.
Figure 13. The reasons for pushing an industrial organization toward sustainability concerns and the role of AIM (adapted from Ratnayake and Liyanage, 2009): as economic, environmental, and societal constraints on asset performance grow over time, the room for action shrinks and the degree of uncertainty rises, shifting the emphasis from reactions to actions and bringing AIM into focus.
The "Piper Alpha" incident provides further evidence of how poor AIM affects the economic, health, safety, and environmental performance of an organization. Piper Alpha was a North Sea oil production platform operated by Occidental Petroleum (Caledonia) Ltd. The platform began production in 1976, first as an oil platform and later converted to gas production. An explosion and the resulting fire destroyed it on July 6, 1988, killing 167 men. The total insured loss was about £1.7 billion (US$3.4 billion). To date, it is the world's worst offshore oil disaster in terms of both lives lost and impact on the industry. The enquiry was critical of Piper Alpha's operator, Occidental, which was found guilty of having inadequate "maintenance" and "safety" procedures. After the Texas City Refinery incident (which killed 15 and injured over 170 people), the Baker panel report was released in January 2007. The principal finding was that BP management had not distinguished between "occupational safety" (i.e., slips-trips-and-falls, driving safety, etc.) and "process safety" (i.e., design for safety, hazard analysis, material verification, equipment maintenance, process upset reporting, etc.). The metrics, incentives, and management systems at BP focused on measuring and managing occupational safety while ignoring process safety. This incident highlights how the number of measures is rocketing, and it underlines the significance of AIM through sustainability lenses. The Baker panel concluded that BP confused improving trends in occupational safety statistics with a general improvement in all types of safety. Social justice values, concerning for instance child labor, exposure to dangerous chemicals, sexual harassment, verbal abuse, etc., also directly influence industrial asset performance. The case of United Students Against Sweatshops is illustrative. To quote Anderson:
"Nike's failure to manage its supplier subcontractors lead to a boycott based on social justice values" (Anderson, 2005); the boycott caused Nike's stock price and revenues to drop. As noted by Eric Brakken, organizer for United Students Against Sweatshops: "What Nike did is important, it blows open the whole notion that other companies are putting forward that they can't make such disclosures. Disclosure is important because it allows us to talk to people in these overseas communities, religious leaders, human rights leaders, who are able to go and examine and verify working conditions" (as seen in Anderson, 2005). This and the former examples illustrate well how the room for action is shrinking while the number of measures surfacing to an industrial organization keeps growing. All of these cases emphasize the need to push industry toward a sustainability focus and a search for AIM.
10. AIM: Who Needs It?
It is easy to see the impact of poor IM. Senior executives are very much aware of the effects on health, safety, and the environment that result from major safety and environmental incidents, and of the consequential damage to corporate reputation and value. Past efforts at improving occupational health, safety, and environmental performance have measurably reduced injuries and losses at the personal safety level. However, the Baker Report after the Texas City accident states: "Leadership not setting the process safety "tone at the top," nor providing effective leadership or cascading expectations or core values to make effective process safety happen" (Baker, 2007). The industry is now turning its attention to reducing major incident risk through a properly implemented AIM system which enhances process safety. The oil majors have each developed powerful operational excellence and AIM philosophies that, if well implemented, will reduce the risk significantly. Here, the role is to implement and deliver AI throughout the lifecycle of the asset and to measure the benefits gained.
10.1. The Need
Industry's historical perception of AIM is not appropriate for today's needs. Driven by the global hunger for energy, the industry is moving into ever more extreme environments, utilizing new technology, and extending the life of aging plant. Moreover, the effects of integrity failure and the publicity surrounding catastrophic events have engaged shareholders, senior managers, regulators, and the public in the debate over managing the integrity of industrial assets. The primary reason for improving the way we manage AI is therefore risk assessment and risk reduction in all asset-related activities, from design through operations and maintenance. An AIM process with a sustainability focus is a proven life-cycle management system that delivers measurable risk reduction and cost benefits. Its fundamental principle is "to ensure that critical elements remain fit-for-purpose throughout the lifecycle of the asset and at an optimum cost."
The AIM process is a simple, logical, and holistic approach that brings innovative methodology to both new-build and existing assets. It defines and ranks the components that are safety/environment/production/business critical using a multiple-criteria decision-making (MCDM) approach. The asset's performance is compared with the available standard in relation to each critical component, along with the requirements needed to assure and verify performance throughout the lifecycle of the asset. As previously mentioned, performance assurance and verification focus effort on managing the parts of the asset that matter, leading to informed decisions, a reduced risk/cost profile, and documented evidence of good asset management. This process helps AI by optimizing operability, maintainability, inspectability, and constructability as "built-in" features of the design. Similarly, it provides support for, for instance, risk-based inspection and reliability-centered maintenance techniques by maximizing the effectiveness of maintenance and inspection on critical equipment during the operational life of the asset. The AIM process facilitates compliance with corporate and regulatory standards by demonstrating that critical items are identified and that their performance is being managed in a documented manner.
11. Industrial Challenge
On the present grounds, sustainability in industrial assets is an up-and-coming requirement. The question is how core AIM processes should be verified to ensure the quality and compliance of performance in relation to sustainability concerns, and how business goals should be prioritized within sustainability requirements. Figure 14 illustrates the problem environment in the current AIM jargon.
Figure 14. Verification of AIM performance with TSP toward sustainability (Ratnayake and Liyanage, 2008): the operator company's organizational requirements and processes for AIM (DI, TI, OI) are weighed, through priority and significance levels, against economic, environmental, and societal sustainability performance.
The above problem can be resolved by evaluating organizational processes simultaneously with the information from DI, TI, and OI, using a decision support system (DSS) with the involvement of stakeholder representatives. This chapter devises an "operational knowledge tool" that can help determine how AIM objectives can be prioritized in relation to sustainability concerns. The purpose of this chapter is to present and explore this knowledge tool to assist researchers and technology policymakers in structuring and making decisions in the light of sustainable performance goals.
12. Methodology for Addressing the Challenge
The data needed have been obtained from a confidential report of a joint industry project focusing on AIM solutions for a major industrial facility. The joint industry project started in 2006 to ensure the AI of gas processing installations, focusing on establishing, maintaining, and continually improving AIM. The study addressed a comprehensive verification process centered on how a company's AIM process should be governed. First, a set of study data was drawn principally from this project, and also from thorough exploratory studies into different incidents that have occurred in the past. The required experimental data were gathered through interviews, discussions, and informal conversations with experts, along with various forms of AIM decision-related business documentation. To synthesize the whole set of data for a comprehensive verification process, and to assess how the company has prioritized different aspects of AIM with a focus on TBL sustainability, a model is proposed using concepts extracted from the analytic hierarchy process (AHP), developed by Thomas Saaty of the Wharton School of Business (Saaty, 2005). Ernest Forman recommends AHP as a useful method for synthesizing data, experience, insight, and intuition in a logical and thorough way (Forman and Selly, 2001). AHP provides an excellent backbone for gathering the expert knowledge circulating within the organization under a single umbrella; by giving a broader range of voices a say, it leads to a more reliable decision structure that offers a way out of most lagging AIM issues.
13. Framework Developed
Synthesizing the expertise coming from different layers with respect to a goal objective can be done with the help of AHP along the whole cross-section of an organization. With this analysis, it is expected to measure the present organizational awareness, or the alignment with respect to the desired output. Figure 15 illustrates how the core AIM process (DI, TI, and OI) can be further broken down into strategic, tactical, and ultimately operative targets, and the way AHP can be incorporated. However, it is not enough to have a written system; senior executives must be confident that their management teams are implementing the system effectively.
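Because the judgments feeding the AHP come from experts at different organizational layers, a common way to pool them is the element-wise geometric mean of the individual pairwise comparison matrices. The minimal sketch below (in Python) illustrates this aggregation step under the assumption, not stated in the case report, that each expert fills in the same matrix; the expert judgments shown are entirely hypothetical.

```python
import numpy as np

def aggregate_judgments(matrices):
    """Combine several experts' pairwise comparison matrices into one group
    matrix via the element-wise geometric mean, a standard aggregation rule
    in AHP that preserves the reciprocal property of the matrices."""
    stack = np.array(matrices, dtype=float)
    return np.exp(np.log(stack).mean(axis=0))

# Hypothetical judgments from three experts comparing the three TBL
# criteria (economic, environmental, societal) on Saaty's 1-9 scale.
expert_1 = [[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]]
expert_2 = [[1, 2, 4], [1/2, 1, 3], [1/4, 1/3, 1]]
expert_3 = [[1, 4, 4], [1/4, 1, 1], [1/4, 1, 1]]

group_matrix = aggregate_judgments([expert_1, expert_2, expert_3])
print(group_matrix.round(2))  # still reciprocal, ready for priority derivation
```

The aggregated matrix can then be treated exactly like a single decision maker's matrix in the priority-derivation step sketched in Sec. 14.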
Figure 15. AIM with AHP (from Ratnayake and Liyanage, 2008): the core AIM process (design, technical, and operational integrity) cascades into a strategic AIM target (sustainability in industrial asset operations, i.e., the goal), a tactic target (maintaining the agreed AI level, i.e., the criteria), and an operative target (maintaining TBL requirements, i.e., the alternatives), with an AHP analysis supporting each level.
Figure 16. The cycle for measuring asset performance with AHP for assessment and prioritizing the decisions: data collection, AHP-based measurement of asset performance, gap analysis, a prioritized list of improvements for managing the asset, and validation through a preliminary review by the relevant authorities within the business units.
The method suggested here combines observations made at the asset/facility, input from staff at different levels, and equipment documentation. Basically, the analysis relies on expert knowledge received from personnel involved at different levels. This expert knowledge is derived from data, experiences, intuitions, and intentions, and the process of synthesizing them in a logical and thorough way is done by AHP analysis (Saaty, 2005). Figure 16 shows the basic cycle for measuring asset performance when managing an industrial asset. For existing assets, the AIM system includes an authoritative review process that assesses and reports on the current condition of critical elements and compares the current AI system with corporate and regulatory requirements as well as industry best practice. This can be achieved with the AHP process by recognizing the priorities along the whole cross-section of an organization while splitting the whole picture into the three subsets within AIM explained before: DI, TI, and OI. This generates a gap analysis, an initial work-scope, and a basis for budgeting future work.
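As a rough illustration of the gap-analysis step, assume, purely hypothetically, that the AHP exercise yields one priority vector for the priorities the organization has actually given and another for the priorities that should be given for TBL sustainability; the gap is then the element-wise difference, and the largest shortfalls head the prioritized list of improvements.

```python
# Hypothetical AHP-derived priority vectors over the same criteria:
# "have given" = present organizational priorities, "to be given" =
# desired priorities for TBL sustainability. Values are illustrative only.
criteria = ["Economic", "Environmental", "Societal"]
have_given = [0.70, 0.18, 0.12]
to_be_given = [0.45, 0.30, 0.25]

# Gap analysis: a positive gap marks an area that is currently under-prioritized.
gaps = {c: round(t - h, 3) for c, h, t in zip(criteria, have_given, to_be_given)}

# Prioritized list of improvements, largest shortfall first.
for criterion, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{criterion:13s} gap = {gap:+.3f}")
```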
In addition, the AIM system facilitates the necessary communication among operations, maintenance, production, and engineering teams across the hierarchical structure explained in the AHP method. The information exchanged at regular meetings/interviews ensures that the integrity effects resulting from operational changes to the asset are properly assessed and fed back to management. This echoes Ohno's philosophy of "making a factory operate for the company just like the human body operates for an individual," where the autonomic nervous system responds even when one is asleep (Ohno, 1988). With the AHP process, data, experiences, insights, and intuitions are synthesized in a thorough and logical way, leading to a feedback system with an improvement cycle. This continually updates AI status and trending, leading to improved operational excellence. The level of expensive unplanned maintenance is a good indicator of TI effectiveness. For instance, with the above process, one can expect measurable reductions in unplanned maintenance, better maintenance strategy selection (e.g., corrective, preventive, opportunistic, condition-based, predictive, etc.), and so on.
14. Measuring Desired OI
Figure 17 demonstrates how to operationalize the AIM framework shown in Fig. 15. This is based on a case study done with a gas operator company to assess OI, and hence sustainability concerns, through prioritization of the priorities that "have been given" and those "to be given."
Figure 17. Illustrative example of a hierarchical structure for obtaining weights for OI (Ratnayake and Liyanage, 2009). Level 1 is the main goal (desired OI); Level 2 holds criteria such as product and process deviations and management of change, simultaneous operations and project execution, and product quality and volume control; Level 3 holds alternatives such as procedures and routines, communication and reporting, change requirement specifications, safe job analysis (SJA), the work permit system, total work load, overall and local risk level, preparations for manual operations, calibration of instruments, systems for quality control, composition and volume control, and CO2 content; Level 4 contrasts the financial, environmental, and societal scenarios; and Level 5 forms the composite scenario.
Table 4. The Pairwise Combination Scale.

Intensity     Definition                        Explanation
1             Equal importance                  Two activities contribute equally to the objective
3             Moderate importance               Judgment slightly favors one activity over another
5             Essential or strong importance    Judgment strongly favors one activity over another
7             Demonstrated importance           Dominance of one activity is demonstrated in practice
9             Extreme importance                Evidence favoring one activity over another is of the highest possible order of affirmation
2, 4, 6, 8    Intermediate values               Used when a compromise between adjacent judgments is needed
The scale of relative importance for pairwise comparison developed by Saaty is shown in Table 4 (Saaty, 2005). The judgment of the decision maker is then used to assign values from the pairwise combination scale (see Table 4) to each main criterion for a "level II" analysis. A pairwise comparison matrix (as shown in Fig. 18 below), together with the AHP method or a software program, is then used for the analysis. The most important task for the asset manager is to build the correct hierarchical diagram, suited to the particular application. The second most important task is to carry out the pairwise comparisons with the right expert personnel. The whole analysis process is explained in Fig. 19; once the pairwise matrix has been built for each layer, the rest of the analysis can be done with a software program. William (2007) provides a comprehensive overview of how integrated AHP and its applications have evolved. Chatzimouratidis and Pilavachi (2007), Wedding and Brown (2007), and Sirikrai and Tang (2006) are other examples of how different aspects within the AIM context can be addressed with AHP. Through the AHP analysis, lagging areas are surfaced for management attention.
[Figure content, condensed: a pairwise comparison matrix over the criteria financial, environmental, and societal, with 1 on the diagonal, a judgment X (X: 1-9) above the diagonal, and its reciprocal 1/X below.]
Step 1. Development of the pairwise comparison matrix. Step 2. Assigning a score based on "how much more strongly does this element (or activity) possess or contribute to, dominate, influence, satisfy, or benefit the property than does the element with which it is being compared?" (Saaty, 2005). Step 3. A reciprocal relationship exists for all comparisons. Step 4. When comparing a factor to itself, the relationship will always be one.
Figure 18. Illustrative example of comparison matrix.
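To make Steps 1-4 concrete, the sketch below (not taken from the chapter) builds a 3x3 pairwise comparison matrix for the financial, environmental, and societal criteria and derives priority weights with the principal-eigenvector method commonly used in AHP. The judgment values, and the use of Python with NumPy, are illustrative assumptions; a real analysis would rely on expert judgments and a dedicated AHP tool.

import numpy as np

# Hypothetical pairwise judgments on Saaty's 1-9 scale (Table 4):
# financial vs. environmental = 3, financial vs. societal = 5,
# environmental vs. societal = 2. Reciprocals fill the lower triangle.
criteria = ["financial", "environmental", "societal"]
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# The principal eigenvector gives the relative priority weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio (CR) checks whether the judgments are coherent;
# Saaty suggests CR < 0.10. RI = 0.58 is the random index for n = 3.
n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)
cr = ci / 0.58

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
print(f"consistency ratio: {cr:.3f}")

With these hypothetical judgments the dominant weight falls on the criterion favored in the comparisons; in practice, inconsistent judgments (high CR) would be sent back to the expert panel for revision before the weights are used.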
Figure 19. Nine phases of the AHP approach: starting from an AIM-related issue, (1) list alternatives, (2) define threshold levels, (3) determine acceptable alternatives, (4) define criteria, (5) build the decision hierarchy, (6) compare alternatives pairwise (giving relative priorities of alternatives), (7) compare criteria pairwise (giving the importance of criteria), (8) calculate overall priorities of alternatives, and (9) perform sensitivity analysis, leading to advice on AI compliance for TBL sustainability considerations.
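Phase 8 of the figure amounts to a weighted aggregation of the pairwise results: each alternative's local priority under a criterion is multiplied by that criterion's weight and summed over the criteria. A minimal sketch with hypothetical weights and local priorities (all numbers invented for illustration) follows.

# Hypothetical criteria weights (phase 7) and local priorities of three
# alternatives under each criterion (phase 6). All values are illustrative.
criteria_weights = {"financial": 0.65, "environmental": 0.23, "societal": 0.12}
local_priorities = {
    "financial":     {"alt_1": 0.5, "alt_2": 0.3, "alt_3": 0.2},
    "environmental": {"alt_1": 0.2, "alt_2": 0.5, "alt_3": 0.3},
    "societal":      {"alt_1": 0.3, "alt_2": 0.3, "alt_3": 0.4},
}

# Phase 8: overall priority = sum over criteria of (criterion weight x local priority).
overall = {}
for criterion, weight in criteria_weights.items():
    for alt, p in local_priorities[criterion].items():
        overall[alt] = overall.get(alt, 0.0) + weight * p

# Phase 9 (a crude sensitivity check): rank the alternatives and inspect how
# close the top two scores are before acting on the advice.
for alt, score in sorted(overall.items(), key=lambda kv: -kv[1]):
    print(f"{alt}: {score:.3f}")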
15. Handling TI-Related Issues for AIM: Selecting an Optimum Maintenance Schedule

The same approach can be used for TI and DI management-related issues. For instance, optimum maintenance strategy selection is one of the major issues related to TI, as manufacturing firms face great pressure to reduce their production costs continuously while addressing sustainability issues. One of the main expenditure items for these firms is maintenance cost, which can reach 15%–70% of production costs, varying according to the type of industry (Bevilacqua and Braglia, 2000). The amount of money spent on maintenance in a selected group of companies was estimated at about 600 billion dollars in 1989 (Wireman, 1990, cited by Chan et al., 2005). On the other hand, maintenance plays an important role in maintaining availability and reliability levels, product quality, and safety requirements. As indicated by Mobley (2002), one-third of all maintenance costs are wasted as a result of unnecessary or improper maintenance activities. Moreover, the role of maintenance is changing from a "necessary evil" to a "profit contributor" and toward a "partner" of companies in achieving world-class competitiveness (Waeyenbergh and Pintelon, 2002).
Figure 20. Hierarchy structure of the fuzzy analytic hierarchy process.
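The chapter does not spell out the fuzzy-AHP computation behind Fig. 20. One common variant, Buckley's fuzzy geometric-mean method, is assumed here purely for illustration: pairwise judgments are expressed as triangular fuzzy numbers and the resulting fuzzy weights are defuzzified to crisp scores. The strategies, judgments, and single criterion below are hypothetical.

import numpy as np

# Triangular fuzzy numbers (l, m, u) for pairwise judgments between three
# hypothetical maintenance strategies under one criterion (e.g., cost).
# "about 3" = (2, 3, 4); the reciprocal of (l, m, u) is (1/u, 1/m, 1/l).
def reciprocal(t):
    l, m, u = t
    return (1.0 / u, 1.0 / m, 1.0 / l)

strategies = ["corrective", "preventive", "condition-based"]
one = (1.0, 1.0, 1.0)
a12, a13, a23 = (2.0, 3.0, 4.0), (4.0, 5.0, 6.0), (1.0, 2.0, 3.0)
M = [
    [one,             a12,             a13],
    [reciprocal(a12), one,             a23],
    [reciprocal(a13), reciprocal(a23), one],
]

# Buckley's method: fuzzy geometric mean of each row, then normalize.
def geo_mean(row):
    ls, ms, us = zip(*row)
    n = len(row)
    return (np.prod(ls) ** (1 / n), np.prod(ms) ** (1 / n), np.prod(us) ** (1 / n))

r = [geo_mean(row) for row in M]
total = tuple(sum(x[i] for x in r) for i in range(3))   # (sum_l, sum_m, sum_u)
inv_total = (1 / total[2], 1 / total[1], 1 / total[0])  # fuzzy inverse of the sum
fuzzy_w = [(ri[0] * inv_total[0], ri[1] * inv_total[1], ri[2] * inv_total[2]) for ri in r]

# Defuzzify by the centroid (l + m + u) / 3 and renormalize to crisp weights.
crisp = np.array([(l + m + u) / 3 for l, m, u in fuzzy_w])
crisp /= crisp.sum()
for s, w in zip(strategies, crisp):
    print(f"{s}: {w:.3f}")

In a full fuzzy-AHP hierarchy such as Fig. 20, the same computation would be repeated per criterion and the criterion weights combined as in the crisp case.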
Managers are often not satisfied with the effect of maintenance activities that rely mainly on corrective maintenance and time-based preventive maintenance, and they want to improve their maintenance program without too great an increase in investment (Schneider et al., 2005). It is therefore preferable for them to choose the best mix of maintenance strategies, rather than applying the most advanced maintenance strategy to all production facilities, in order to improve the return on investment while addressing health, safety, and environmental concerns. In this sense, AHP with the proposed prioritization method is suitable for the selection of maintenance strategies. This is done by interviewing the maintenance staff and managers. The AHP hierarchy scheme shown in Fig. 20 was constructed for one of the redevelopment projects related to oil and gas operations, to evaluate the optimum maintenance schedule of an oil rig located in the North Sea. Then, the selection of the optimum maintenance strategy for the industrial asset was done following the AHP process proposed in Fig. 15.

16. Handling DI-Related Issues for AIM: Selecting an Optimum Design Project Mix

With fast-track projects now the norm, project teams need to make sure that their designs are fully compliant with all applicable regulatory and class requirements; not doing so is a guarantee of later problems. Changes made during the design phase translate into cost savings compared with modifications required after steel is cut. Industrial personnel know how to design and operate a great deal of equipment with fast-evolving technology.
[Figure content, condensed: goal, select optimum design project; criteria, financial, environmental, societal; subcriteria, e.g., strategic importance, risk, return on investment, image; alternatives, Project 1, Project 2, ..., Project n.]
Figure 21. Selecting optimum project mix to implement for DI management.
For instance, the gas operator company with which the authors are working at present has around 300 projects to implement before 2011; requirements imposed by stakeholders, for instance the government and joint-venture companies, necessitate finding the best mix of projects. Figure 21 illustrates the structure developed for the gas operator company's AIM solution, focusing on DI management. Then, the selection of the optimum design project mix based on industrial asset performance requirements was done using the AHP process proposed in Fig. 15. The following are some implications of the previously mentioned approach. As indicated by the team of senior managers attached to the gas operator company responsible for OI: ". . . we do not have any mechanism for synthesizing ideas along a cross section of the organization and we do make decisions on ad hoc basis . . . believe this process would enhance our decision-making process particularly focusing on AI issues . . ."; and as per a senior manager responsible for DI issues and services for managing risks: ". . . many O&G service providing companies do not have a method to evaluate how extent they are aligned with main objectives of the company . . . in turn sustainability concerns . . . to find out gaps in between how extent the company aligned with desired AI and current AI awareness with respect to sustainability concerns. . . ." (Ratnayake and Liyanage, 2009).

17. Conclusion

This chapter has developed a methodology for the assessment of AIM for sustainability and a quantified evaluation method for the industry in general. The information and approach illustrate how AIM-based decision making can be used to improve and optimize AI using AHP. AHP primarily assists the analysis and decision-making processes: it helps organize data, intuitions, intentions, and experiences in terms of the formulation of the goal, criteria, and subcriteria (alternatives). The proposed AHP decision model also provides an effective means of determining the effectiveness of AIM adoption over the whole cross-section of an organization. It is important for today's operator companies, plant operators, owners, and professionals to
realize that different methods for managing asset integrity are needed to make the improvements that corporations require while still complying with regulatory requirements. Many industries are exploring opportunities to integrate the concept of sustainable development into their business operations to achieve economic growth with the assurance of environmental protection along with improved health and safety. The proposed approach focuses on achieving a better quality of life for present and future generations. Sustaining this ability is not a one-off exercise but a continuous process. Therefore, future research should study how frequently these assessments should be carried out, particularly for OI and TI, as today's business world keeps changing owing to public demands, technology, and global competition.
References

Aberdeen Group (2007). Printed Circuit Board Design Integrity — The Key to Successful PCB Development. Retrieved February 2, 2008 from http://www.plmv5.com/aberdeenPLMWP/.
ARGO (2004). Environmental and maintenance challenges in flare ignition and combustion. Onshore and Offshore Business Briefing: Exploration & Production: The Oil and Gas (O&G) Review. Retrieved February 18, 2008 from http://www.touchbriefings.com/pdf/951/argo 2 tech.pdf.
Alawai, SM, AK Azad and FM Al-Marri (2006). Synergy Between Integrity and HSE, Abu Dhabi Marine Operating Co., Society of Petroleum Engineers, SPE-98898.
Alguindigue, IE and NL Quist (1997). Asset management: Integrating information from intelligent devices to enhance plant performance strategies. In Textile Industry Division Symposium: Vol. 2, June 24–25, 1997.
Anderson, DR (2005). Corporate Survival: The Critical Importance of Sustainability Risk Management. USA: iUniverse, Inc.
Baker III, JA (2007). The report of the BP U.S. refineries independent safety review panel. Retrieved February 18, 2008, http://www.bp.com/liveassets/bp internet/globalbp/globalbp uk english/SP/STAGING/local assets/assets/pdfs/Baker panel report.pdf.
Behn, B (2005). On the philosophical and practical: Resistance to measurement. Public Management Report, Vol. 3, No. 3, November 2005.
Bevilacqua, M and M Braglia (2000). The analytic hierarchy process applied to maintenance strategy selection. Reliability Engineering and System Safety, 70, 71–83.
Boateng, BV, J Azar, E De Jong and GA Yander (1993). Asset recycle management — A total approach to product design for the environment, IEEE, 0-7803-0829-8/93.
Bower, AJ, GO Scott, Hensman and PA Jones (2001). Protection asset maintenance — What business value. Development in Power System Protection, IEE 2001, No. 479.
BP (2004). Integrity Management Optimization Program — 2004. Retrieved February 2, 2008, http://www.oilandgas.org.uk/issues/health/docs/bp.pdf.
Carson, R (1962). Silent Spring. New York City: Houghton Mifflin.
Chan, FTS, HCW Lau, RWL Ip, HK Chan and S Kong (2005). Implementation of total productive maintenance: A case study. International Journal of Production Economics, 95, 71–94.
Chatzimouratidis, AI and PA Pilavachi (2007). Objective and subjective evaluation of power plants and their non-radioactive emissions using the analytic hierarchy process. Energy Policy, 35, 4027–4038.
Ciaraldi, SW (2005). The essence of effective integrity management — People, process and plant. Society of Petroleum Engineers, SPE-95281.
Elkington, J (1997). Cannibals with Forks. Oxford: Capstone Publishing.
FERC (2005). Federal Energy Regulatory Commission (FERC). Retrieved February 2, 2008, http://www.ferc.gov/news/news-releases/2005/2005-2/06-30-05-ai05-1000.pdf.
Forman, EH and MA Selly (2001). Decision by Objectives: How to Convince Others that You are Right. Singapore: World Scientific Publishing Co.
Hammond, M and P Jones (2000). Effective AM: the road to effective asset management is paved with a look at how we got here and how we might move on. Maintenance & Asset Management, 15(4), 3–8.
Hart, SL and MB Milstein (2003). Creating sustainable value. Academy of Management Executive, 56–69.
Holliday, C (2001). Sustainable growth, the DuPont way. Harvard Business Review, 79(8), 129–132.
Jawahir, IS and PC Wanigaratne (2004). New challenges in developing science-based sustainability principles for next generation product design and manufacture. Proceedings of the 8th International Research/Expert Conference on Trends in the Development of Machinery and Associated Technology, 1–10.
Labuschagne, C and AC Brent (2005). Sustainable project life cycle management: The need to integrate life cycles in the manufacturing sector. International Journal of Project Management, 23(2), 159–168.
Liyanage, JP and U Kumar (2003). Towards a value based view on operations and maintenance performance management. Journal of Quality in Maintenance Engineering, 9(4), 333–350.
Liyanage, JP (2003). Operations and Maintenance Performance in Oil and Gas Production Assets: Theoretical Architecture and Capital Value Theory in Perspective. PhD Thesis, Norwegian University of Science and Technology (NTNU), Norway.
Liyanage, JP (2007). Operations and maintenance performance in production and manufacturing assets: The sustainability perspective. Journal of Manufacturing Technology Management, 18, 304–314.
Ma, N and L Wang (2006). An integrated study of global competitiveness at firm level: Based on the data of China. Proceedings from PICMET 2006: Technology Management for the Global Future.
Mebratu, D (1998). Sustainability and sustainable development: Historical and conceptual review. Environmental Impact Assessment Review, 18, 493–520.
Melchett, P (1995). Green for danger. New Scientist, 148, 50–51.
Mobley, RK (2002). An Introduction to Predictive Maintenance (2nd Edn.). New York: Elsevier Science.
Montgomery, RL and C Serratella (2002). Risk-based maintenance: A new vision for asset integrity management. PVP-Vol. 444, Selected Topics on Aging Management, Reliability, Safety, and License Renewal, ASME 2002, PVP2002–1386.
Notes (2007). Reflect, connect, expect. Eastman weekend 2006. http://esm.rochester.edu/pdf/notes/NotesJan2007.pdf, p. 13.
Ohno, T (1988). Toyota Production System — Beyond Large-scale Production, pp. xi. Tokyo, Japan: Diamond, Inc.
PAS 55-1 (2004). Asset management Part 1: Specification for the optimized management of physical infrastructure assets. BSI, 30th April 2004.
PAS 55-2 (2004). Asset management Part 2: Guidelines for the application of PAS 55-1. BSI, 30th April 2004.
Pirie, GAE and E Ostby (2007). A Global Overview of Offshore Oil & Gas Asset Integrity Issues. Retrieved February 2, 2008 from http://www.mms.gov/international/IRF/PDFIRF/Day1-8—-PIRIE.pdf.
PUBS (2005). When the government is the landlord. Regional details: Norway. Retrieved February 2, 2008 from http://pubs.pembina.org/reports/Regional%20Details Norway.pdf.
Ratnayake, RMC and JP Liyanage (2007). Corporate dynamics vs. industrial asset performance: The sustainability challenge. In The International Forum on Engineering Asset Management and Condition Monitoring — Combining — Second World Congress of Engineering Asset Management and Fourth International Conference on Condition Monitoring, UK.
Ratnayake, RMC and JP Liyanage (2008). Analytic hierarchy process for multi-criteria performance evaluation: An application to sustainability in oil & gas assets. In 15th International Annual EurOMA Conference. Netherlands: University of Groningen.
Ratnayake, RMC and JP Liyanage (2009). Asset integrity management: sustainability in action. International Journal of Sustainable Strategic Management, 1(2), 175–203.
Richardson, A (2007). Asset integrity. Retrieved February 2, 2008 from http://www.mms.gov/international/IRF/PDF-IRF/Day1-9—-RICHARDSON.pdf.
Rondeau, E (1999). Integrated asset management for the cost effective delivery of service. Proceedings of Futures in Property and Facility Management International Conference. London: University College London.
Saunders, C and TO Sullivan (2007). Integrity management and life extension of flexible pipe, SPE 108982.
Saaty, TL (2005). Theory and Applications of the Analytic Network Process: Decision Making with Benefits, Opportunities, Costs, and Risks. RWS Publications.
Schneider, J, A Gaul, C Neumann, J Hografer, W Wellbow, M Schwan and A Schnettler (2005). Asset management techniques. In 15th PSCC, Liege, 22–26 August 2005, Session 41, Paper 1.
Sheble, GB (2005). Asset management integrating risk management: Head I win, tails I win. In IEEE Power Engineering Society General Meeting, 2005.
Shell (2008). Assessing technical integrity and sustainability. Retrieved September 19, 2008 from http://www.shell.com/static//globalsolutionsen/downloads/industries/gas and lng/brochures/fair brochure.pdf.
Sirikrai, SB and CSJ Tang (2006). Industrial performance analysis: A multi-criteria model method. In PICMET 2006 Proceedings, 9–13 July.
Stiglitz, JE (1979). A neoclassical analysis of the economies of natural resources. In Scarcity and Growth Reconsidered, VK Smith (ed.), pp. 36–66. Washington, DC: Resources for the Future.
TEADPS (2009). Technical electives and design project selection (TEADPS). http://www.chemeng.mcmaster.ca/undergraduate/TechElectivesAndDesignProject.pdf [16 September 2008].
UKOOA (2000). Environmental Report 2000. United Kingdom Offshore Operators Association.
Waeyenbergh, G and L Pintelon (2002). A framework for maintenance concept development. International Journal of Production Economics, 77, 299–313.
Webster (2008). Merriam-Webster's online dictionary. http://www.merriam-webster.com/dictionary/integrity [4 September 2008].
Wedding, GC and DC Brown (2007). Measuring site-level success in brownfield redevelopments: a focus on sustainability and green building. Journal of Environmental Management, 85, 483–495.
Wenzler, I (2005). Development of an asset management strategy for a network utility company: Lessons from a dynamic business simulation approach. Simulation and Gaming, 36(1), 75–90.
William, H (2007). Integrated analytic hierarchy process and its applications: A literature review. European Journal of Operational Research, 186, 211–228.
Wireman, T (1990). World Class Maintenance Management. New York: Industrial Press.
World Bank (2008). What is sustainable development? Retrieved February 14, 2008 from http://www.worldbank.org/depweb/english/sd.html.
Woodhouse, J (2001). Evolution of asset management. Retrieved February 18, 2008 from http://www.plant-maintenance.com/articles/AMbasicintro.pdf.
Biographical Note R. M. Chandima Ratnayake was born in Sri Lanka. He received his B.Sc. degree in Production Engineering and M.Sc. in Manufacturing Engineering from the University of Peradeniya, Sri Lanka. He is presently a Doctoral Research Fellow attached to the Center for Industrial Asset Management (CIAM) and Assistant Professor attached to the Institutt av Konstruksjon og Material Teknologi (IKM), University of Stavanger, Norway. His research interests include performance measurement and management of industrial assets, development and applications of decision analysis tools and techniques in operations management, and change management in oil and gas operations.
Chapter 15
How to Boost Innovation Culture and Innovators?

ANDREA BIKFALVI
Department of Business Administration and Product Design (OGEDP), University of Girona, Campus Montilivi, Edifici PI, Av. Lluís Santaló s/n, 17071 Girona, Spain
[email protected]

JARI JUSSILA, ANU SUOMINEN and HANNU VANHARANTA
Industrial Management and Engineering, Tampere University of Technology at Pori, PL 300, 28101 Pori, Finland
[email protected], [email protected], [email protected]

JUSSI KANTOLA
Department of Knowledge Service Engineering, KAIST, 335 Gwahangno (373-1 Guseong-dong), Yuseong-gu, Daejeon 305-701, Korea
This chapter examines the abstract concepts of innovators' competences and innovation culture. For people to be innovative, both concepts need to be considered. Ontologies provide a way to specify these abstract concepts in a format on which practical applications can be built in organizations. Self-evaluation of innovation competence and innovation culture in organizations can be conducted by utilizing a fuzzy logic application platform called Evolute. The approach described in this chapter has management implications: the abstract concepts of innovation culture and innovation competence become manageable, which suggests that organizations should be able to achieve better innovation results.

Keywords: Innovation; innovators; culture; ontology.
1. Introduction

For people to be innovative, a special mindset and environment seem to be required; additionally, according to Ulijn and Brown (2004), not all innovative people are entrepreneurial. Although some work exists on how to create an organization-wide culture of innovation and of intra- and entrepreneurship, culture and especially the
link between culture and innovation has generally not been studied. A variety of reasons might explain this: the broad concept of what we understand as culture; the multitude of links to other sciences such as sociology, anthropology, and psychology; or the depth of the concept when referring to national culture, corporate culture, or professional culture. From an organization's management point of view there is another difficulty. Management theories are scattered over a wide area of different management disciplines, and therefore it is difficult to get a holistic view of these different and specific management areas and their detailed content, i.e., constructs, concepts, variables, and indicators. For management research, there is a new area that may help to attain this holistic view: object ontology research. Additionally, systems science is trying to help solve this dilemma with many different technologies that support holistic perception and understanding in management. This research, based on previous research on management object ontologies (Kantola, 2005), aims to build up constructs for management purposes in the dynamic organizational environment of innovation management from two points of view: individual and organizational. The focus of this chapter is on specific interrelated factors of innovation competences and organizational innovation enablers and barriers. The main targets of this research form the two main objectives of the study. The first objective is to examine the nature of personal innovation competences through the concept of creative tension (Senge, 1994). The second objective is to investigate the essence of organizational innovation enablers through the concept of proactive vision (Aramo-Immonen et al., 2005; Kantola et al., 2005; Paajanen et al., 2004). Through the first objective, we expect to identify the competences needed by individuals in order to be innovative. This would provide a better understanding of the potential impact (if any) of personal innovation competences on the level of employees' innovation. Through the second objective, we expect to identify innovation enablers and barriers in terms of the whole organization as a more responsive innovation environment. This will provide a better understanding of the management of innovation within organizations. To make all this possible we have created two conceptual models, i.e., ontologies, to help analyze the above. We have then created computer-supported management questionnaires for self-evaluation of those two ontologies. By making the questionnaires dynamic via the Internet, we have performed test runs with a group of test subjects, and we have shown the first signs of the ontology building process. In our future research we will further develop these ontologies and expand the testing to different industrial organizations.

2. Conceptual Framework

2.1. National Systems of Innovation

The concept of a national system of innovation provides a good starting point for analyzing both innovation and culture. The standard schematic for a country's innovation system, depicted in Fig. 1, can be revisited.
[Figure content, condensed: at the core, knowledge generation, diffusion and use by firms' capabilities and networks, the science system, other research bodies and supporting institutions, linked through global innovation networks, regional innovation systems, clusters of industries, and communication infrastructures; surrounded by the macroeconomic and regulatory context, the education and training system, and product and factor market conditions; together forming the national innovation system and national innovation capacity, which drive country performance (growth, jobs, competitiveness).]
Figure 1. Main actors and linkages in the innovation system (adapted from OECD).
Although schematic and summarized, it provides a holistic picture of the actors and linkages established in a given innovation system. The core of the system is illustrated in the center of Fig. 1, where firms, the research and science base, and the supporting institutions represent Etzkowitz and Leydesdorff's (1998) triple helix model. The interaction of university–industry–government is at the basis of innovation, which becomes an endogenous process of "taking the role of the other," encouraging hybridization among the institutional spheres (Etzkowitz, 2003). More concretely, as universities face their third mission, higher education can no longer avoid experimenting in entrepreneurial areas. Scientists should be able to complement their basic research activities with research having immediate commercial value, contributing in this way to local and regional economic development and growth. Competition and a turbulent environment also make public research institutions resemble the business world ever more closely. Furthermore, government acts as a public entrepreneur and venture capitalist in addition to its traditional role of creating an environment conducive to innovation. Recent public schemes go far beyond increasing R&D investment, designing and promoting complementary "soft" tools for innovation promotion.
However, firms remain the backbone of the systems. Their productive, employment, competitive, innovative, and growth capabilities put them in the "spotlight." The environment in which they operate, often characterized by high complexity and low predictability, turns them into special actors. They are the ultimate innovators, operating in and facing markets and final customers, whose verdict labels their products and/or services with either success or failure. As firms raise their technological level, they move closer to an academic model, engaging in higher levels of training and in sharing of knowledge. All this shows, up to a point, overlapping and common areas rather than isolation between these three pillars. Knowledge generation, diffusion, and use, later becoming innovation, are common to all of them. During the last two decades, innovation and innovation management were among the top priorities on the research agenda of academics, practitioners, and policy makers. Different trends succeeded one another in its study, reflected by the richness of the definitions collected, for example, by Cumming (1998) in his overview of innovation and future challenges. Focus topics passed from technical, industrial, and commercial aspects, through creativity and culture, large-firm versus small-firm innovation, sources, patterns, standardization, and measurement and monitoring, to human aspects and organizational concepts, among others. As studying all of innovation's areas of richness would be too ambitious, focusing on a few concepts seems a more appropriate option. Therefore, in the following sections we give special attention to organizational innovation in general, and to its enablers and barriers in particular.

2.2. Co-evolution of Systems

The nature of conscious experience at work is a puzzle for the modern knowledge society. Companies, enterprises, groups, and teams now place more and more emphasis on the unique value of individuality in a context of organizational excellence and teamwork. These entities also attempt to learn about the individual's knowledge in terms of his/her own professional competences, as well as the individual's aspirations and desires to change and improve those competences. Furthermore, many enterprises would like to guide and support employees' personal growth, development, and personal vision in order to improve their core competences according to the competitive pressures of the business world. Based on an employee's self-evaluation, the gap between personal vision and current reality forms the individual's creative tension (Senge, 1994). This creative tension is the energy which can move an individual from the place of current reality towards the reality of his/her own vision. This is one real achievement driver of the enterprise. When the objects are management processes, it is possible to analyze the current management processes from bottom-up perspectives in order to understand how these processes can be improved. Similarly to creative tension, the other real
achievement driver in organizations is proactive vision (Aramo-Immonen et al., 2005; Kantola et al., 2005; Paajanen et al., 2004). With proactive vision, employees get the opportunity to understand the current state of organizational variables, look to the future, and give their opinion, i.e., a proactive vision of how these processes should be changed. By combining these two important drivers, i.e., creative tension and proactive vision, it is possible to get more information about the organization in order to develop it towards higher performance levels. In the following pages this combination is explained and illustrated in more detail.
In our previous research, strong emphasis has been put on a newly proposed theory and methodology, i.e., co-evolution (Vanharanta et al., 2005). In the theory formulation, the attempt has been to see the current (perceived) reality from different points of view. Additionally, it is emphasized that it is important to understand the time dimension and the change processes inside the systems and subsystems. By increasing the number of different views it is possible to increase the information variety in human brains and in that way decrease the errors in perceiving the current reality. From the human point of view, it is therefore important to understand both our internal world and the external environment in which we live. Co-evolution applied to an internal view (introspection of one's own properties or characteristics) extends our ability to evaluate and develop different personal characteristics simultaneously. Co-evolution focused on the external world and different external processes provides a possibility to frame, categorize, conceptualize, understand, and perceive the current reality in a diversified way. From the organizational point of view, the co-evolutionary process viewpoint helps us to identify the need for change, both in people and in management processes.

2.2.1. Co-evolution in human performance

In order to illustrate an application of the co-evolutionary management paradigm in the human resource management area, Beer (1994) has provided the following relationships, defining first the levels of human achievement:
• Actuality: what people manage to do now, with existing resources, under existing constraints.
• Capability: what people could achieve now, if they really worked at it, with existing resources and under existing constraints.
• Potentiality: what people might be doing by developing their resources and removing constraints, although still operating within the bounds of what is already known to be feasible.
Furthermore, the important indices are as follows (Beer, 1994):
• Latency: the ratio of potentiality and capability
• Productivity: the ratio of capability and actuality
[Figure content, condensed: latency = potentiality ÷ capability; productivity = capability ÷ actuality; performance = latency × productivity.]
Figure 2. Measures of human performance (cf. Beer, 1994; Jackson, 2003).
• Performance: the ratio of potentiality and actuality, and also the product of latency and productivity.
In the above framework, an application of the co-evolutionary paradigm would lead to the most desired outcomes. First, it can be observed that the above equations indicate the importance of keeping the ratios up (high values) in order to increase overall human performance. If this potentiality (Fig. 2) is carried even further, the personal level of the future state a person is targeting should be found out, i.e., the individual's creative tension. On the current level (actuality), it is important to know what a person manages to do now, i.e., how she/he performs at present and what the constraints of such performance are. The capability to do something is the best ability or the best qualities that the person could exhibit now. Human competence, in turn, is the ability to do something well and effectively in the immediate future (expanding on the potentiality), i.e., capability for active use. This also explains the importance of time in the overall equation (see Fig. 2).

2.2.2. Co-evolution in business performance

Another example of the application of the co-evolutionary management paradigm (Vanharanta et al., 2005) can be illustrated by using the concept of productivity in company performance calculations (see Fig. 3). The all-important operational attribute influencing overall company performance is productivity (cf. Kay, 1993; Kidd, 1985). Capital productivity indicates how much capital is invested in relation to added-value operations in the company, and market productivity indicates how much profit is yielded in relation to all added-value activities. Capital productivity is added-value divided by total capital (total assets), and market productivity is operating profit divided by added-value. Added-value is the market price of products and
[Figure content, condensed: market productivity = profit ÷ added-value; capital productivity = added-value ÷ total assets; return on total assets = market productivity × capital productivity.]
Figure 3. Measures of capital performance (cf. Kay, 1993; Kidd, 1985).
services sold less the market costs of purchased materials (or services) contained in them (cf. Kay, 1993; Kidd, 1985). The same kind of performance pattern can be seen in Fig. 3 as in the human performance illustration in Fig. 2. Similarly, in this example it is important to keep the ratios high to assure good results, i.e., the overall profitability performance. However, before the ratios can be changed, the company's present position (situation) has to be found out. After that, by understanding the present position of the company, it is possible to provide new guidance on the means to increase the overall return on total assets through important constructs, concepts, and variables. In real life, the notions illustrated in these two figures (Figs. 2 and 3) have to be understood and utilized simultaneously and concurrently, so that capital profitability as well as human performance at present and in the immediate future can be comprehended. The equations are similar, giving asymptotic curves. By combining the information in these equations, a possible new space can be determined where both of these concepts can be handled simultaneously. What can be seen are the relationships that are important in order to change those ratios. It is the co-evolutionary way of thinking related to the two equations (which cannot be directly combined) that leads to the overall performance of financial and intellectual capital, i.e., the market value (cf. Edvinsson and Malone, 1997). From the financial point of view, it has to be considered how financial assets are harnessed to create added value and how customers are willing to pay for that created value. From the human point of view, in turn, it has to be considered which human characteristics (properties) human performance can best be related to. Within the concept of actuality, the current state with existing resources can be managed. By developing these resources and by removing relevant constraints, the potentiality can be increased. This raises the question: "What might then be the best possible way to develop those resources?"
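Restating the ratios behind Figs. 2 and 3 (nothing here goes beyond what the text above already says), both sets telescope in the same way, which is why the chapter treats the two "equations" as parallel:

\[
\text{Latency} = \frac{\text{Potentiality}}{\text{Capability}}, \qquad
\text{Productivity} = \frac{\text{Capability}}{\text{Actuality}}, \qquad
\text{Performance} = \text{Latency} \times \text{Productivity} = \frac{\text{Potentiality}}{\text{Actuality}},
\]
\[
\text{Market productivity} = \frac{\text{Profit}}{\text{Added value}}, \qquad
\text{Capital productivity} = \frac{\text{Added value}}{\text{Total assets}}, \qquad
\text{Return on total assets} = \frac{\text{Profit}}{\text{Total assets}}.
\]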
According to Senge (2004), a learning organization has several issues to consider, i.e., systems thinking, team learning, shared vision, mental models, and personal mastery. In the context of our co-evolutionary paradigm, all of these concepts are important, and each of them should be developed simultaneously with the others, in the co-evolutionary way. In the personal mastery concept, the driving force behind co-evolution is creative tension. In business processes, on the other hand, the people who face the real world each day in their work are the ones who understand the current and future state of the business process, i.e., the proactive vision, best. By gathering all their individual opinions, the collective view of the organization regarding the proactive vision can be attained for improving performance through people's understanding and motives.

2.3. Organizational Innovation Enablers and Barriers

Many authors have identified enablers and barriers of organizational innovation (e.g., Amabile et al., 1996; Amabile, 1997, 1998; Ekvall, 1996; Martins and Terblanche, 2003; Trott, 2005). Suominen et al. (2008a) have identified 22 such enablers and/or barriers from the literature and illustrated them with a vivid metaphor of a hydro power plant (Fig. 4).
[Figure content, condensed: the hydro power plant metaphor with four parts (freedom of flow, direction of flow, transformation of the flow, and maintaining of the flow), covering freedom; openness and trust; communication; requisite variety; understanding strategy; organizational flexibility; stress management; changeability; challenge; empowerment; constructive feedback; risk tolerance; organization support for development; organization support for learning; idea generation; idea documentation; idea screening and evaluation; teamwork and collaboration; seeking information; absorptive capacity; networking; situational constraints.]
Figure 4. Innovation culture ontology.
The four-part metaphor illustrates first the organizational climate variables, then the organizational management and structure variables, thirdly the innovation process variables, and fourthly the supportive organizational variables. Together, these four parts construct a metaphor called innovation culture. However, most of these enablers and barriers are more or less visible or concrete parts of an organization, whereas culture is often seen more as an invisible, yet perceptible, part of an organization to its members and even to a third party (McLean, 2005; Schein, 2004).

2.4. Innovator's Competences

Innovator's competences are like items on a menu: we can identify those that represent our strengths, those that represent our weaknesses, and those on which we want to focus (cf. Miller, 1987). The innovator's competence ontology describes those competences that the literature (Jussila et al., 2008) emphasizes as important characteristics of creative and innovative people. The major components of individual creativity necessary in any domain are expertise, creative-thinking skills, and intrinsic task motivation (Amabile, 1997). However, an individual is rarely able to rely solely on his or her own motivation and technical skills to get the job done; most of us work in environments in which we must constantly deal with other people (Merrill and Reid, 1999). The same is true for innovations; innovations are hardly ever the result of only one individual. Therefore, more than creativity is needed to make innovations happen. The major components supporting creativity in the ontology are self-awareness, self-regulation, empathy, and relationship management (Goleman, 1998). The innovator's competence ontology consists of two parts (personal competences and social competences) and seven major components (self-awareness, self-regulation, motivation, expertise, creative thinking, empathy, and relationship management) divided into a total of 27 competences (Fig. 5). The clustering of the innovator's competence ontology is theoretical and based on earlier theoretical models (Amabile, 1997; Goleman, 1998). More recently, Goleman (2006) has published a model of social intelligence that parallels emotional intelligence. However, as he has pointed out: "The model of social intelligence . . . is merely suggestive, not definitive, of what that expanded concept might look like . . . More robust and valid models of social intelligence will emerge gradually from cumulative research" (Goleman, 2006, p. 330).

3. Methodology

3.1. Self-Evaluation of Humans and Systems

In self-evaluation, a person evaluates him- or herself, or a system that this individual evaluator is part of. The results from self-evaluation can be used for different purposes, such as motivation, identification of development needs, evaluation
[Figure content, condensed: personal competences (self-awareness, self-regulation, motivation, expertise, creative thinking) and social competences (empathy, relationship management), divided into competences including accurate self-assessment, self-confidence, flexibility, independence, responsibility, self-control, stress tolerance, trustworthiness, absorptive capacity, professional and technical expertise, analytical thinking, conceptual thinking, divergent thinking, intuitive thinking, leveraging diversity, understanding others, communication, conflict management, relationship building, and teamwork and cooperation.]
Figure 5. Innovator's competence ontology.
of potential, evaluation of performance, career development purposes, etc. (cf. Nurminen, 2003). Self-evaluation is an efficient method of developing oneself, managing personal growth, clarifying roles, and committing to project-related goals (e.g., Nurminen, 2003). On the other hand, self-evaluation has limitations too. The results of a self-evaluation are less reliable in the evaluation of work performance (Stone, 1998). People have a tendency to evaluate their own performance more favorably than others do (Dessler, 2001). People are also limited in their ability to observe themselves and others accurately (Beardwell and Holden, 1995). Still, there is no question that people are able to evaluate themselves if they are motivated to do so. We have observed that the way self-evaluation projects are presented to the target group is very important. The effectiveness of self-evaluation also depends on the content of the evaluation, the application method, and the culture of the organization (Torrington and Hall, 1991). The results of self-evaluations conducted by an individual vary to some extent. In the short term, the results change because individuals' powers of observation, intentions, and motives change (Cronbach, 1990). In the long term, the results also change because of mental growth, learning, and changes in personality and health. Self-evaluation is more effective for evaluating the relation between different items, such as competencies, than for comparing an individual's performance to others' performance (cf. Torrington and Hall, 1991). In our approach, competences and
systems are evaluated indirectly through statements related to individuals' everyday work; therefore, individuals are not evaluating their performance. In this context, we mean self-evaluation of the innovation culture (the system) and the innovator (the human in the system). When we want to include the concept of creative tension (Senge, 1994) in an evaluation, we must use self-evaluation, because no one can tell the future intentions and aspirations of another person. It should be noted that the data generated through self-evaluation have a certain nature: for instance, every single individual has his or her own personal scale of degree. Therefore, traditional statistical methods are not applicable to such data. Thus, as the analysis method, we have used the Friedman test, which is suitable for the non-parametric data produced by self-evaluation. The Friedman test is a scientifically valid non-parametric statistical method (Conover, 1999), named after its inventor, Nobel Laureate economist Milton Friedman. The Friedman test sums the ranked values of each respondent. Consequently, the ranked values can be clustered into groups (Suominen et al., 2008b).

3.2. The Evolute Application Environment

Evolute is the name of a generic web-based technology that supports fuzzy logic (Zadeh, 1965) applications on the Internet (Kantola, 2005). Evolute allows special-purpose fuzzy logic applications to be developed and run globally. Each application is based on a specified ontology of the target domain (Kantola, 2005). Therefore, each application on Evolute has a unique content and structure specified by the experts of the target domain. Applications can be added and fine-tuned without additional programming. Evolute supports co-evolutionary applications, which are intended to help in the simultaneous development of business enterprises or systems (Vanharanta et al., 2005) that include humans and organizations.

3.3. Self-Evaluation of Innovation Competence and Organizational Innovation Enablers and Barriers with the Evolute System

Individuals' creativity and organizational innovation have previously been linked with each other in the literature (e.g., Amabile, 1997, 1998; Amabile et al., 1996; Martins and Terblanche, 2003; McLean, 2005). Creativity is a characteristic of an individual, whereas innovation is a process, often within an organization (McLean, 2005). Therefore, creating a Management Object Ontology (MOO) (Kantola, 2005) regarding organizational innovation also requires that the perception of an individual's capability, i.e., competence to innovate, be included. In this chapter, the first three of the five phases of ontology development (Sure et al., 2003), namely feasibility study, kickoff, and refinement (Fig. 6), of two MOOs with the co-evolutionary method are presented.
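Returning to the Friedman test mentioned in Sec. 3.1, a minimal sketch of the ranking step is given below. Python with SciPy, and all the data, are assumptions made only for illustration; they are not part of the chapter's Evolute implementation.

import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Hypothetical creative-tension scores (future state minus current state)
# for four competences, one row per respondent. Real data would come from
# the self-evaluation questionnaires.
competences = ["absorptive capacity", "intuitive thinking",
               "communication", "risk orientation"]
scores = np.array([
    [0.8, 0.7, 0.4, 0.1],
    [0.9, 0.6, 0.5, 0.2],
    [0.7, 0.8, 0.3, 0.2],
    [0.6, 0.5, 0.4, 0.3],
])

# Rank the competences within each respondent and sum the ranks per
# competence, as the Friedman procedure does.
rank_sums = rankdata(scores, axis=1).sum(axis=0)
for name, rs in zip(competences, rank_sums):
    print(f"{name}: rank sum = {rs:.1f}")

# Friedman test statistic and p-value across the competences.
stat, p = friedmanchisquare(*[scores[:, j] for j in range(scores.shape[1])])
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")

The rank sums are what the bar charts in the case test run (Figs. 7 and 8) display, and they are then clustered into the most significant, middle, and least significant groups.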
[Figure content, condensed: the phases feasibility study, kickoff, refinement, evaluation, and application and evolution, each followed by a decision (go/no go, sufficient requirements, meets requirements, roll-out, changes) and an outcome (CommonKADS worksheets, requirements specification and semiformal ontology description, target ontology, evaluated ontology, evolved ontology), iterating toward a knowledge management application; the numbered steps run from identifying problems and opportunities, the focus of the KM application, tools, and people, through capturing the requirements specification, creating and refining the semiformal ontology description, formalizing the target ontology and creating a prototype, to technology-, user-, and ontology-focused evaluation, applying the ontology, and managing evolution and maintenance.]
Figure 6. The knowledge meta process (adapted from Staab and Studer, 2003, p. 121).
In the first two overlapping phases, the feasibility study and kickoff for Knowledge Management Application (KMA) for innovation was studied. The preliminary study regarding organizational innovation, carried out mainly by literature review, brought forward the facts that innovation as an object of study is both timely, even fashionable and additionally that some parts of innovation, especially regarding organization, are rather poorly researched (McLean, 2005). This is due to the fact that organizational study regarding innovation has many problems and difficulties: on one hand organizations are different in their size, branch, personnel, and focus on e.g., innovation. On the other hand, many of the methods for studying, e.g., organization culture are normally time consuming and require heavy, even subjective analysis by the researchers. This also makes the results indefinite and constrained to a certain time slot, lacking the desired focus to the future. As a goal to create a KMA for leadership purposes, the focus of the application was determined as duplex: on one hand to organizational enablers and barriers for innovation, on the other hand to individuals’ innovation competences. Naturally, there is a wide range of other subjects of innovation that could have been the focus of the study, but these topics were seen most usable from the management’s point of view for leading the entire personnel of an organization by gathering the information from bottom-up and then using the collected collective data for determining the needed management procedures.
The sources for creating the semi-formal ontology description were scientific, mainly journal, literature. The literature covered individual competences and capabilities for innovation, innovation-inducing organizational culture and climate, and the innovation process, among others. The literature review resulted in 27 individual innovation competences (Jussila et al., 2008) and 22 organizational innovation enablers and barriers (Suominen et al., 2008a). In this first phase, the Evolute system was used as a platform for creating two self-evaluation questionnaires, collecting data, and performing the computation operations needed to accumulate the data into a collective result. The questionnaires, the innovation competence questionnaire with 103 statements and the organizational innovation questionnaire with 94 statements, have a sliding scale, a bar that allows the respondent to answer on an individual range. Respondents evaluate their individual capabilities subjectively and the environment, in this case their organization, objectively. The other Evolute characteristic is that an evaluation answer is given for both the current and the future state, thus illustrating the creative tension (Senge, 1994) of individuals or, likewise, the proactive vision for organizations. In addition, the Evolute system uses fuzzy logic for the computation operations to simulate human reasoning, which by nature is fuzzy. The two questionnaires were formulated according to the identified competences and organizational enablers and barriers, with each competence and each enabler or barrier including 3 to 8 statements for the respondent to answer. As a result of the refinement phase, the first version of the two semi-formal ontologies created in parallel was presented (Table 1). Most of the individual variables have a counterpart in the organizational ontology. The next phase in the ontology development process would be the evaluation phase, including interviews about the test runs with the web-based questionnaires in order to gather comments on the two ontologies from the test persons. This evaluation round should then be followed by a new iteration round of refinement.

4. A Case Test Run

After the completion of the semi-formal ontology, a test run was done at an educational and research unit of a university with a group of staff members. First, the results of the individual innovation competences are illustrated in a bar chart (Fig. 7). Here n = 10 signifies the 10 staff-member respondents and α = 0.05 the significance level used. The sums have been divided into three groups: the most significant (black bars), the middle group (white bars), and the least significant (grey bars). The gap between the current and future states of an individual's innovation competences portrays the creative tension, whereas the gap between the current and future states of the organization's innovation enablers and barriers portrays the proactive vision. Thus, both the creative tension and the proactive vision point out those competences or enablers and barriers that need the most attention: where the current state is relatively low compared to the desired future state. This way the organization's
Table 1. Individual Innovation Competences and Organizational Innovation Enablers and Barriers (adapted from Suominen et al., 2008b).

Individual innovation competences with a connection to innovation enablers and barriers: (1) absorptive capacity; (2) accurate self-assessment; (3) achievement orientation; (4) change orientation; (5) communication; (6) flexibility; (7) independence; (8) initiative; (9) stress tolerance; (10) leveraging diversity; (11) professional and technical expertise; (12) relationship building; (13) risk orientation; (14) seeking information; (15) self-development; (16) teamwork and cooperation; (17) trustworthiness; (18) analytical thinking; (19) conceptual thinking; (20) divergent thinking; (21) imagination; (22) intuitive thinking.

Organizational innovation enablers and barriers with a connection to innovation competences: (1) absorptive capacity; (2) constructive feedback; (3) challenge; (4) changeability; (5) communication; (6) flexibility; (7) freedom; (8) empowerment; (9) stress management; (10) requisite variety; (11) organization support for learning; (12) networking; (13) risk tolerance; (14) seeking information; (15) organization support for development; (16) teamwork and collaboration; (17) trust and openness; (18) idea generation.

Individual competences with no connection to innovation enablers and barriers: (1) conflict management; (2) responsibility; (3) self-control; (4) self-confidence; (5) understanding others.

Organizational enablers and barriers with no connection to innovation competences: (1) idea documentation; (2) idea screening and evaluation; (3) understanding strategy; (4) situational constraints.
management can direct its attention to the matters requiring the most urgent development. For creative tension, the rankings fell into groups in which the first 13 were the most significant and the last 8 the least significant. For proactive vision, the rankings were divided into three groups, in which the first 5 were the most significant
[Figure content, condensed: Friedman test rank sums of creative tension for the 27 innovation competences (n = 10, α = 0.05), listed in rank order: absorptive capacity, intuitive thinking, professional and technical expertise, self-confidence, understanding others, communication, analytical thinking, accurate self-assessment, flexibility, self-development, self-control, stress tolerance, conceptual thinking, relationship building, initiative, leveraging diversity, conflict management, seeking information, change orientation, imagination, responsibility, teamwork and cooperation, achievement orientation, trustworthiness, divergent thinking, independence, risk orientation.]
Figure 7. Results of individual innovation competence self-evaluations.
and the last 16 the least significant. Unlike the creative tension results, the middle group of proactive vision is a group that remains undecided to which group it better belongs to, the most significant or the least significant. Then the results of the organization’s innovation self-evaluations are presented (Fig. 8). In order to give suggestions of the development needs according to the results presented above, both the results of the individual’s innovation competences and organization’s support to those competences have to be compared parallelly. This is where the co-evolution of the two ontologies becomes handy. As both of these ontologies have been constructed together, most of the features and competences
Figure 8. Results of the organization's innovation self-evaluations (results of the Friedman test; creative tension, n = 10, α = 0.05). The bar chart lists the 22 organizational innovation enablers and barriers in rank order, from Networking to Idea screening and evaluation, on a 0-20 scale.
Therefore, further analysis is made by comparing the status of the competence-enabler pairs within the formed groups and identifying their different combinations. The following combinations of pairs were found: high–high, low–high, high–low, and low–low. Their interpretations are explained below.

1. In the high–high combination, the creative tension is high for both the individual innovation competence and the organization's innovation enabler. Table 2 can be interpreted as listing those individual innovation competences and organizational enablers that need the most attention.

2. In the low–high combination, the creative tension of the individual innovation competence is low, indicating satisfaction with the current state of the competence, whereas the creative tension of the organization's innovation enabler is high, indicating a desire to develop this organizational support feature. Table 3 can be interpreted as showing that the organizational enabler "Teamwork and collaboration" needs attention when development measures are considered. However, people feel that their own competence in "Teamwork and cooperation" does not need much development in the future.
Table 2. Results of the test run with the high–high combination.
Left: individual competences ("People want to develop within themselves"), with value and ranking.
Right: corresponding organizational enablers ("People want to be developed within the organization"), with value and ranking.

  Absorptive capacity, 21.7, rank 1                    |  Absorptive capacity, 13.75, rank 5
  Accurate self-assessment, 17.8, rank 8               |  Constructive feedback, 14.9, rank 4
  Stress tolerance, 16.05, rank 12                     |  Stress management, 15.7, rank 2
  Professional and technical expertise, 21.1, rank 3   |  Organization support learning, 12.75, rank 6

Table 3. Results of the test run with the low–high combination.
Left: individual competence ("People do not see a need for development within this competence").
Right: organizational enabler ("People see a need for development within this organizational feature").

  Teamwork and cooperation, 9, rank 22                 |  Teamwork and collaboration, 15, rank 3
3. In the high–low combination, the creative tension of the individual innovation competence is high, indicating a desire for development, whereas the creative tension of the organization's innovation enabler is low, indicating satisfaction with the current state of the organization's support. Table 4 can be interpreted as showing individual competences that people want to develop, while the organizational enablers supporting them are already at a good level, so this development can take place.

4. In the low–low combination, the creative tension of the individual innovation competence is low, indicating a good level, and likewise the creative tension of the organization's innovation enabler is low, indicating satisfaction with the current state of the organization's support. Table 5 can be interpreted as showing individual competences, and the organizational enablers supporting them, that are at a good level. These competences and organizational innovation enablers form the bedrock of this organization's innovativeness.
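A minimal sketch of the pair classification described above: given a creative-tension value for an individual competence and for its organizational counterpart, each pair is placed into one of the four combinations. The example values are taken from Tables 2-5, but the fixed high/low threshold is an illustrative assumption; the chapter derives "high" and "low" from the significance groups of the Friedman-test rankings rather than from a cut-off value.

    # Creative tension (individual competences) and proactive vision (organizational
    # enablers) for four counterpart pairs; values taken from Tables 2-5.
    pairs = {
        # (individual competence, organizational enabler): (individual, organizational)
        ("Absorptive capacity", "Absorptive capacity"): (21.7, 13.75),
        ("Teamwork and cooperation", "Teamwork and collaboration"): (9.0, 15.0),
        ("Intuitive thinking", "Idea generation"): (21.2, 9.5),
        ("Risk orientation", "Risk tolerance"): (4.5, 9.8),
    }

    THRESHOLD = 12.0  # assumed cut-off between "high" and "low" tension

    def combination(individual_ct, organizational_pv, threshold=THRESHOLD):
        ind = "high" if individual_ct >= threshold else "low"
        org = "high" if organizational_pv >= threshold else "low"
        return f"{ind}-{org}"

    for (competence, enabler), (ct, pv) in pairs.items():
        print(f"{competence} / {enabler}: {combination(ct, pv)}")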
Table 4. Results of the test run with the high–low combination.
Left: individual competences ("People want to develop within themselves"), with value and ranking.
Right: corresponding organizational enablers ("People do not see too much need for development within the organization"), with value and ranking.

  Intuitive thinking, 21.2, rank 2          |  Idea generation, 9.5, rank 18
  Analytical thinking, 18.1, rank 7         |  Idea generation, 9.5, rank 18
  Conceptual thinking, 16, rank 13          |  Idea generation, 9.5, rank 18
  Communication, 18.3, rank 6               |  Communication, 11.05, rank 11
  Flexibility, 16.8, rank 9                 |  Organizational flexibility, 11.2, rank 10
  Self-development, 16.7, rank 10           |  Organization support development, 12.15, rank 7

Table 5. Results of the test run with the low–low combination.
Left: individual competences ("People do not see too much need for development within themselves"), with value and ranking.
Right: corresponding organizational enablers ("People do not see too much need for development within the organization"), with value and ranking.

  Imagination, 10.7, rank 20                |  Idea generation, 9.5, rank 18
  Divergent thinking, 7.9, rank 25          |  Idea generation, 9.5, rank 18
  Achievement orientation, 8.3, rank 23     |  Challenge, 10.3, rank 13
  Trustworthiness, 8.15, rank 24            |  Openness and trust, 9.7, rank 16
  Independence, 7.05, rank 26               |  Freedom, 9.4, rank 19
  Risk orientation, 4.5, rank 27            |  Risk tolerance, 9.8, rank 15
However, when considering the development measures of this or any other organization, the entire palette should be viewed holistically. Partial optimization, or overweighting single organizational enablers, may cause more damage than good. Therefore, applying common sense and the organization's own experience when planning development measures is recommended.
Naturally, the current and desired future status of each competence or organizational enabler also has to be considered, as the creative tension alone does not portray the entire situation. Additionally, it should be stressed that this representation of competences and organizational innovation enablers is not the truth, nor is it intended to be. It is merely a glimpse of one vision of one organization's reality.

5. Conclusions

This chapter has sought to explore the linkage between individual innovation competences and organizational innovation through the concepts of creative tension and proactive vision. The theoretical background of this study is the co-evolutionary creation of MOOs (management object ontologies). These ontologies are the basis for building dynamic, computer-supported questionnaires for data collection. The collected individual data can then be aggregated to portray the organization's collective view of the current state together with the desired future state, that is, the creative tension and the proactive vision.

The study suggests that building MOOs of individual innovation competences and organizational innovation enablers, and gathering the corresponding information via questionnaires, is a first step toward collecting interesting and comparable data on the two sides of innovation: the individual and the organizational. This new way of collecting innovation data may in the future yield interesting information about innovation in organizations. When this data collection is expanded from the individual and organizational levels to the national and even international level, the true nature of innovation may be revealed, at least from one very essential point of view: that of the people working in the organizations.

In summary, we suggest that MOOs, combined with data collection, are an effective way to approach the complex concept of innovation within organizations. Answering the initial question of how to boost innovation culture and innovators remains a task for the future, but this approach provides a tool to gather more valuable information on the essence of innovation competence and organizational innovation. The approach described in this chapter also has managerial implications: the abstract concepts of innovation culture and innovation competence become manageable, which suggests that organizations should be able to achieve better "innovation results."

Acknowledgements

The Finnish Funding Agency for Technology (TEKES) has been the main financing body for this research. We refer to the Flexi project E!3674 ITEA2 Flexi — Added Value Ontology, Tekes decision 40176/07.
References

Amabile, TM (1997). Motivating creativity in organizations: On doing what you love and loving what you do. California Management Review, 40(1), 39–58.
Amabile, TM (1998). How to kill creativity. Harvard Business Review, September–October, 77–87.
Amabile, TM, R Conti, H Coon, J Lazenby and M Herron (1996). Assessing the work environment for creativity. Academy of Management Journal, 39(5), 1154–1184.
Aramo-Immonen, H, J Kantola, H Vanharanta and W Karwowski (2005). The web based trident application for analyzing qualitative factors in mega project management. Proceedings of the 16th Annual International Conference of IRMA2005, Information Resources Management Association International Conference, San Diego, California, May 15–18, 2005.
Beardwell, I and L Holden (1995). Human Resource Management — A Contemporary Perspective. Pitman Publishing.
Beer, S (1994). Brain of the Firm, 2nd Edn. Chichester: Wiley.
Conover, WJ (1999). Practical Nonparametric Statistics, 3rd Edn. New York: John Wiley & Sons.
Cronbach, LJ (1990). Essentials of Psychological Testing. New York: Harper Collins.
Cumming, BS (1998). Innovation overview and future challenges. European Journal of Innovation Management, 1(1), 21–29.
Dessler, G (2001). A Framework for Human Resource Management, 2nd Edn. New Jersey: Prentice Hall.
Edvinsson, L and SM Malone (1997). Intellectual Capital. New York: HarperCollins Publishers.
Ekvall, G (1996). Organizational climate for creativity and innovation. European Journal of Work and Organizational Psychology, 5(1), 105–123.
Etzkowitz, H and L Leydesdorff (1998). The endless transition: A "triple helix" of university-industry-government relations. Introduction to a theme issue. Minerva, 36, 203–208.
Etzkowitz, H (2003). Innovation in innovation: The triple helix of university–industry–government relations. Social Science Information, 42(3), 293–337.
Goleman, D (1998). Working with Emotional Intelligence. London: Bloomsbury.
Goleman, D (2006). Social Intelligence: The New Science of Human Relationships. London: Hutchinson.
Jackson, CM (2004). Systems Thinking: Creative Holism for Managers. West Sussex, England: John Wiley & Sons.
Jussila, J, A Suominen and H Vanharanta (2008). Competence to innovate? In Karwowski, W and Salvendy, G (eds.), 2008 AHFE International Conference, 14–17 July 2008, Caesars Palace, Las Vegas, Nevada, USA, Conference Proceedings, 10 p.
Kantola, J (2005). Ingenious management. Doctoral thesis, Tampere University of Technology at Pori, Finland.
Kantola, J, H Vanharanta and W Karwowski (2005). The Evolute system: A co-evolutionary human resource development methodology. In International Encyclopedia of Human Factors and Ergonomics, Vol. 3, W Karwowski (ed.), pp. 2902–2908.
Kay, J (1993). Foundations of Corporate Success. New York: Oxford University Press.
Kidd, D (1985). Productivity analysis for strategic management. In Guth, W (ed.), Handbook of Business Strategy (17/1–17/25). Massachusetts: Gorham & Lamont.
Martins, EC and F Terblanche (2003). Building organisational culture that stimulates creativity and innovation. European Journal of Innovation Management, 6(1), 64–74.
McLean, LD (2005). Organizational culture's influence on creativity and innovation: A review of the literature and implications for human resource development. Advances in Developing Human Resources, 7(2), 226–246.
Merrill, DW and RH Reid (1999). Personal Styles & Effective Performance. New York: CRC Press.
Miller, WC (1987). The Creative Edge. Cambridge: Perseus Publishing.
Nurminen, K (2003). Deltoid — The competences of nuclear power plant operators. Master of Science Thesis, Tampere University of Technology at Pori, Finland.
OECD (1999). Managing National Innovation Systems. Paris: OECD Publications Service.
Paajanen, P, J Kantola and H Vanharanta (2004). Evaluating the organization's environment for learning and knowledge creation. 9th International Haamaha Conference: Human & Organizational Issues in the Digital Enterprise, Galway, Ireland, 25–27 August 2004.
Schein, EH (2004). Organizational Culture and Leadership, 3rd Edn. San Francisco, CA: Jossey-Bass, 438 p.
Senge, PM (2004). Presence: Human Purpose and the Field of the Future. Cambridge, MA: Society for Organizational Learning.
Senge, PM (1994). The Fifth Discipline: The Art and Practice of the Learning Organization. New York: Currency Doubleday.
Stone, R (1998). Human Resource Management, 3rd Edn. Brisbane: John Wiley and Sons, 854 p.
Suominen, A, J Jussila and H Vanharanta (2008a). Hydro power plant — metaphor for innovation culture. In Karwowski, W and Salvendy, G (eds.), 2008 AHFE International Conference, 14–17 July 2008, Caesars Palace, Las Vegas, Nevada, USA, Conference Proceedings, 10 p.
Suominen, A, J Jussila, P Porkka and H Vanharanta (2008b). Interrelations of development needs between innovation competence and innovation culture? In Proceedings of the 6th International Conference on Manufacturing Research (ICMR08), Brunel University, UK, 9–11 September 2008.
Sure, Y, S Staab and R Studer (2003). On-to-knowledge methodology (OTKM). In Handbook on Ontologies, Staab, S and Studer, R (eds.), Berlin: Springer, 117–132.
Torrington, D and L Hall (1991). Personnel Management — A New Personnel Approach, 2nd Edn. London: Prentice Hall, 661 p.
Trott, P (2005). Innovation Management and New Product Development, 3rd Edn. Essex: Pearson Education Limited.
Ulijn, J and T Brown (2004). Innovation, entrepreneurship and culture, a matter of interaction between technology, progress and economic growth? An introduction. In Innovation, Entrepreneurship and Culture, Brown, T and J Ulijn (eds.), Cheltenham, UK: Edward Elgar.
Vanharanta, H, J Kantola and W Karwowski (2005). A paradigm of co-evolutionary management: Creative tension and brain-based company development systems. Las Vegas, Nevada, USA: HCI International.
Zadeh, LA (1965). Fuzzy sets. Information and Control, 8, 338–353.
Biographical Notes

Andrea Bikfalvi has a PhD in Business Administration and conducts teaching and research activities in the Department of Business Administration and Product Design at the University of Girona, Spain. Her research interests focus on technological and organizational innovation, new technologies in business contexts, academia-business relationships and new venture creation, as well as large-scale surveys. Since joining the department, she has undertaken teaching activities and participated in various research projects. Some of these projects concern international surveys and university spin-offs; others concern innovation in education and teaching, networks of innovation, and research in teaching.

Jari Jussila is a PhD candidate at TUT, Finland. He holds an MSc (Industrial Management and Engineering), and his research interest is focused on knowledge management. His main experience is derived from information systems projects. Since 2007 he has been working as a managing consultant at Yoso Services Oy.

Anu Suominen is a PhD candidate at Tampere University of Technology (TUT) in Finland. She holds an MSc (Industrial Management and Engineering), and her research interests are in leadership and management, from the strategy, knowledge, and innovation points of view. Her prior working experience is in logistics, particularly operational exports in the metal and information network industries. Since 2007 she has been working as a researcher at TUT.

Jussi Kantola works as an associate professor in the world's first knowledge service engineering department at KAIST (Korea Advanced Institute of Science and Technology) in Korea. He is an adjunct professor at Tampere University of Technology in the Department of Industrial Management and Engineering in Pori, Finland. His research and teaching interests currently include the application of ontologies, e-learning, and soft computing. He received his first PhD degree from the University of Louisville, Department of Industrial Engineering, in the USA in 1998, and his second PhD degree from Tampere University of Technology, Department of Industrial Management and Engineering, in Finland in 2006. Earlier he worked as an IT consultant in the USA and as a business and process consultant for ABB in Finland.

Professor Hannu Vanharanta, born 1949, began his professional career in 1973 as a technical assistant at the Turku office of the Finnish Ministry of Trade and Industry. In 1975-1992, he worked for Finnish international engineering companies, i.e., Jaakko Pöyry, Rintekno, and Ekono, as process engineer, section manager, and leading consultant. His doctoral thesis was approved in 1995. In 1995-1996, he was a professor of Business Economics at the University of Joensuu. In 1996-1998, he served as a professor of Purchasing and Supply Management at Lappeenranta University of Technology.
Since 1998, he has been a professor of Industrial Management and Engineering at Tampere University of Technology at Pori. His research interests include human resource management, knowledge management, strategic management, financial analysis, e-business, and decision support systems.
Chapter 16
A Decision Support System for Assembly and Production Line Balancing

A. S. SIMARIA
The Advanced Centre for Biochemical Engineering, Department of Biochemical Engineering, University College London, Torrington Place, London WC1E 7JE, United Kingdom
[email protected]

A. R. XAMBRE*, N. A. FILIPE† and P. M. VILARINHO‡
Department of Economics, Management and Industrial Engineering, University of Aveiro, Campus Universitário de Santiago, 3810-193 Aveiro, Portugal
*[email protected]
†[email protected]
‡[email protected]

In this chapter, a system to support the design of assembly and production lines used in the make-to-order production phase is presented. The relevance of the system is supported by the fact that current market dynamics often lead to frequent modifications in the allocation of manufacturing resources; as a result, decisions related to manufacturing systems design that used to belong to the strategic level are now taken at the tactical level and thus require new tools to support them. The decision support system addresses two different categories of problems: (i) assembly line balancing and (ii) production line balancing. Due to the high complexity of these problems, it uses heuristic methods based on evolutionary computation approaches to tackle them. The system aggregates several modules to address the different problems, a database to handle both input and output data, and an interface that enables user-friendly interaction with the decision maker.

Keywords: Assembly lines; production lines; manufacturing system design; evolutionary computation.
1. Introduction

The current global marketplace environment, characterized by intense competition, together with the increased pace of technological change, has led to the shortening of product life cycles and an increase in product variety. Industrial companies must
be able to provide a high degree of product customization to fulfill the needs of increasingly sophisticated customer demand. Moreover, responsiveness in terms of short and reliable delivery lead times is requested by a market where time is seen as a key driver. Mass customization is a response to this phenomenon. It refers to the design, production, marketing, and delivery of customized products on a mass basis. This means that customers can choose, order, and receive especially configured products, often selecting from a variety of product options, to meet their individual needs. On the other hand, customers are not willing to pay high premiums for these customized products compared to competing standard products in the market. They want both flexibility and productivity from their suppliers.

To respond to this changing environment, industrial companies need to maximize the usage rate of their production resources, namely assembly lines. Historically, assembly lines were used to produce a low variety of products in high volumes, as they allowed for low production costs, reduced cycle times, and accurate quality levels. These are important advantages from which companies can derive benefits if they want to remain competitive. However, single-model assembly lines (used over the past decades), designed for a single homogeneous product, are the production systems least suited to high-variety demand scenarios. As manufacturing shifts from high-volume/low-mix production to high-mix/low-volume production, mixed-model assembly lines, in which a set of similar models of a product can be assembled simultaneously, are better suited to respond to new market demands.

Cellular manufacturing systems are another form of production system suited to coping with high product variety and short lead times. In this type of system, functionally different equipment is grouped into manufacturing cells to produce a set of product families, and each cell can be seen as a production line. The use of an appropriate type of assembly line (namely, mixed-model, U-shaped, two-sided, etc.) or production line (namely, manufacturing cells), suited to the new manufacturing demand paradigm, is therefore a crucial factor for the success of a company in delivering customized products at low cost.

Having developed a set of models to help decision makers design mixed-model assembly lines with parallel workstations of different types (straight lines, U-lines, and two-sided lines), as well as a model for balancing a manufacturing cell (or production line), the next step was to incorporate them into a decision support system (DSS) that helps the decision maker not only to interact with the models but also to generate and compare different solutions to the problems under analysis. The main purpose is to provide a user-friendly interface, allowing the line designer to be assisted, in a simple but powerful way, in his final decision.

The following section of this chapter presents the structure of the main elements of the proposed DSS, namely the data and model management bases and the interface. Section 3 explains in more detail the models available in the system, and finally Section 4 presents some conclusions.
Figure 1. DSS components.
2. Structure of the Decision Support System

The proposed system includes the three typical subsystems of a DSS: data management, model management, and interface (Turban and Aronson, 1998). These three elements are connected and interact with the user as shown in Fig. 1. A brief explanation of these elements is given in the following paragraphs.

2.1. Model Management

The model management subsystem includes the algorithms, previously developed by the authors, that address the different types of assembly and production line balancing problems. The assembly line balancing problem (ALBP) arises when designing (or redesigning) an assembly line and consists of finding a feasible assignment of tasks to workstations in such a way that the assembly costs are minimized, the demand is met, and the constraints of the assembly process are satisfied. The type I ALBP aims at minimizing the number of workstations for a given cycle time, while type II aims at minimizing the cycle time for a given number of workstations (a minimal illustrative sketch of the type I assignment idea is given at the end of this subsection). This problem has been extensively researched, and comprehensive literature reviews addressing it include the works of Ghosh and Gagnon (1989), Scholl (1999) and, more recently, Becker and Scholl (2006) and Scholl and Becker (2006).

The algorithms included in the DSS are meta-heuristic-based procedures, used to balance assembly lines with characteristics that better reflect industrial reality, namely:

• Mixed-model: Mixed-model assembly lines allow the simultaneous assembly of a set of similar models of a product, which may be launched on the assembly line in any sequence. As the trend for current markets is to have a wider product range
and variability, mixed-model assembly lines are preferred over traditional single-model assembly lines.

• Parallel workstations: The use of parallel workstations, in which two or more replicas of a workstation perform the same set of tasks on different products, allows for cycle times shorter than the longest task time. This increases the line production rate and also provides greater flexibility in the design of the assembly line.

• Two-sided lines: Two-sided assembly lines are a special type of assembly line in which workers perform assembly tasks on both sides of the line. This type of line is of great importance, especially in the assembly of large-sized products, like automobiles, buses, or trucks, in which some tasks must be performed on a specific side of the product.

• Flexible U-lines: When the demand for the products, and consequently the production volume of the line, is highly variable, the lines have to be re-balanced frequently. This represents a cost for companies that could be reduced if the lines were easily adaptable to changes in production volumes and product mix. Flexible U-lines address these issues because, whenever the production volume changes, the line layout remains the same but the number of operators working on the line and the tasks they perform are adjusted in order to meet the demand.

The increasing demand for personalized products has led to the need to alter the characteristics of traditional assembly lines and has also led to the development of other types of production systems that are able to maintain high flexibility while keeping the main advantages of an assembly line. Cellular manufacturing has emerged as an alternative that combines the advantages of both product- and process-oriented systems for a high-variety, medium-volume product mix (Burbidge, 1992). In a cellular manufacturing system, functionally diverse machines are grouped into dedicated cells used to exploit flow shop efficiency in processing the different parts.

Cellular manufacturing systems are hybrid systems that exhibit the characteristics of process-based systems at the plant level and the characteristics of product-based systems at the cell level. In many cellular manufacturing systems the intracellular flow pattern behaves like an unpaced production line (where most of the workstations are machines) in which the parts do not necessarily follow the same unidirectional flow. In an unpaced line there is no fixed time for a resource to complete its workload. The design of the cell, and the corresponding production line balancing problem, becomes quite complex since it involves both workers and equipment, as well as tasks that require different types of resources.

All these different types of problems were addressed using meta-heuristic-based procedures, and the resulting computer programs were included in the DSS. Section 3 describes in more detail the main features of each of the models developed to address the different assembly and production line balancing problems.
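For orientation only, the sketch below illustrates the basic type I assignment idea mentioned earlier in this subsection: open workstations one at a time and greedily fill each with precedence-feasible tasks until the cycle time would be exceeded. This is not one of the meta-heuristics developed by the authors (which also handle mixed models, parallel stations, two-sided lines, and zoning constraints); the task times, precedence data, and the longest-task-time priority rule are illustrative assumptions.

    # Greedy type I line balancing: minimize workstations for a given cycle time.
    # Illustrative data only; real instances come from the DSS database.
    task_time = {"A": 4, "B": 3, "C": 5, "D": 2, "E": 6, "F": 4}
    predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B"], "E": ["B", "C"], "F": ["D", "E"]}
    CYCLE_TIME = 10

    def greedy_balance(task_time, predecessors, cycle_time):
        unassigned, stations = set(task_time), []
        while unassigned:
            load, station = 0, []
            while True:
                candidates = [t for t in unassigned
                              if all(p not in unassigned for p in predecessors[t])
                              and load + task_time[t] <= cycle_time]
                if not candidates:
                    if not station:
                        raise ValueError("a task time exceeds the cycle time")
                    break
                # Longest-task-time rule: a common priority rule in ALBP heuristics.
                chosen = max(candidates, key=task_time.get)
                station.append(chosen)
                load += task_time[chosen]
                unassigned.remove(chosen)
            stations.append((station, load))
        return stations

    for i, (tasks, load) in enumerate(greedy_balance(task_time, predecessors, CYCLE_TIME), 1):
        print(f"Workstation {i}: {tasks} (load {load}/{CYCLE_TIME})")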
Figure 2. Class diagram.
2.2. Data Management

Figure 2 shows the class diagram, in UML (Unified Modeling Language) notation (Larman, 2004), for the database of the proposed DSS. The diagram supports both input and output data objects, and it was built so that it can be used with every algorithm and type of problem considered in the system. Therefore, although there are some object classes common to every line balancing problem (e.g., Problem, Task, Solution), some were purposely created to fit the requirements of the addressed problems.

The input data are associated with the characteristics of a specific line balancing problem. It is necessary to specify the set of tasks belonging to an assembly/production process, their precedence relationships, zoning constraints, and processing times for the different models, as well as the demand values for each model (and demand scenario in the case of flexible U-lines) and the production period.
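The class names Problem, Task, and Solution come from the description of Fig. 2 above; the attributes in the sketch below are assumptions chosen to match the input and output data just listed (precedences, zoning constraints, per-model processing times, demands, task assignments), not the authors' actual schema.

    from __future__ import annotations
    from dataclasses import dataclass, field

    @dataclass
    class Task:
        task_id: int
        processing_times: dict[str, float]            # per model; a task may be absent for some models
        predecessors: list[int] = field(default_factory=list)
        incompatible_with: list[int] = field(default_factory=list)       # negative zoning
        must_share_station_with: list[int] = field(default_factory=list) # positive zoning

    @dataclass
    class Problem:
        name: str
        line_type: str                                 # e.g. "straight", "two_sided", "flexible_u", "production"
        tasks: list[Task]
        demand_per_model: dict[str, int]
        production_period: float

    @dataclass
    class Solution:
        problem_name: str
        assignment: dict[int, tuple[int, int]]         # task_id -> (workstation, operator)
        performance: dict[str, float]                  # e.g. line efficiency, workload balance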
This data will be the input for an adequate algorithm, selected from the set of procedures available in the system, which will then produce a balancing solution. The output data are the assignment of tasks to workstations, operators, and/or machines, depending on the nature of the problem addressed. For example, in straight assembly lines one workstation corresponds to one operator, while in flexible U-lines one workstation may have more than one operator working on it and one operator may perform tasks in more than one workstation. Either way, a task can only be assigned to one workstation and one operator. If the problem is related to production lines, a task can also be assigned to one machine. Each balancing solution provided by the system is characterized by a set of performance measures (e.g., line efficiency, workload balancing between and within stations).
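The chapter does not define these performance measures explicitly, so the sketch below uses common textbook definitions (line efficiency as total work content over installed capacity, and a smoothness index for the workload balance between stations) as a plausible illustration of the kind of values attached to a solution.

    import math

    def line_efficiency(station_loads, cycle_time):
        """Textbook definition: total task time / (number of stations * cycle time)."""
        return sum(station_loads) / (len(station_loads) * cycle_time)

    def smoothness_index(station_loads):
        """Root of squared deviations from the most loaded station (lower is smoother)."""
        peak = max(station_loads)
        return math.sqrt(sum((peak - load) ** 2 for load in station_loads))

    # Example: the three-station balance produced by the earlier greedy sketch, cycle time 10.
    loads = [9, 9, 6]
    print(f"Line efficiency: {line_efficiency(loads, 10):.2%}")
    print(f"Smoothness index: {smoothness_index(loads):.2f}")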
2.3. Interface

Figure 3 illustrates, using a simple flowchart, how the user can interact with the system. Essentially, the decision maker must first distinguish between an assembly line and a production line. This decision is assisted by information the user can find in the help menu, which is easily accessible in every part of the system.

Figure 3. Interaction with the DSS (flowchart: choice of line type, assembly line configuration (straight line, two-sided line, flexible U-line), introducing/uploading/saving data, running the algorithm(s), viewing and comparing solutions, saving a solution, and exiting the system).
If the goal is to balance a production line, the user is directed to an interface where he can choose to upload a set of existing data, introduce a new set of data, or save data already introduced and/or modified. It is important to note that the input data are validated according to the requirements of the specific algorithm. Then the production line balancing algorithm can be activated and a solution is presented.

If, however, the user chooses to balance an assembly line, he has to decide which type of assembly line to study: straight line, two-sided line, or flexible U-line. The options are basically the same as those referred to previously (upload a set of existing data, introduce a new set of data, or save data), except in the case of a straight line. For this situation, the DSS includes three algorithms, so it is possible to generate and compare three different solutions. The system presents the best one, but the user can opt to inspect the other solutions that were generated. After obtaining a solution to his/her problem, the decision maker can save the solution or test another set of data. Figure 4 shows the interface used to introduce the data in a flexible U-line situation, and Fig. 5 illustrates the output for a simple instance of that problem.

There was an effort to build user-friendly interfaces and, although the system is directed at specialist users with a good understanding of the problems and algorithms, it also provides help to the decision maker in selecting the most suitable type of production system configuration.

3. Assembly and Production Line Balancing Algorithms Included in the DSS

3.1. Mixed-model with Parallel Workstations

In a mixed-model assembly line, a set of similar models of a product are assembled in any sequence. The mixed-model nature of the problem at hand requires the cycle time to be defined taking into account the different models' demand over the planning horizon. So, if a line is required to assemble M models, each with a demand of D_m units over the planning horizon P, the cycle time of the line is computed from the following equation:

    C = \frac{P}{\sum_{m=1}^{M} D_m}    (1)

The overall proportion of the number of units of model m being assembled (or the production share of model m) is given by the following equation:

    q_m = \frac{D_m}{\sum_{p=1}^{M} D_p}    (2)
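A direct transcription of Eqs. (1) and (2); the planning horizon and demand figures are invented for illustration.

    # Planning horizon P (e.g., available time units) and per-model demands D_m.
    P = 4800.0
    demand = {"model_1": 300, "model_2": 150, "model_3": 50}

    total_demand = sum(demand.values())
    cycle_time = P / total_demand                                         # Eq. (1)
    production_share = {m: d / total_demand for m, d in demand.items()}   # Eq. (2)

    print(f"Cycle time C = {cycle_time:.2f} time units per unit produced")
    for model, share in production_share.items():
        print(f"q_{model} = {share:.2f}")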
Figure 4. Input data interface.
The main features of the addressed assembly line balancing problems are the following: (i) the line is used to assemble a set of similar models of a product; (ii) the workstations along the line can be replicated to create parallel workstations, when the demand is such that some tasks have processing times higher than the cycle time; (iii) the assignment of tasks to a specific workstation can be forced or forbidden through the definition of zoning constraints. Taking into account these features, three types of constraints are defined for the assembly process: precedence constraints, zoning constraints, and capacity
Figure 5. Output data interface.
constraints. The particular issues of these types of constraints are discussed in the following sections.

3.1.1. Precedence constraints

The precedence constraints determine the sequence according to which the tasks can be assembled. Precedence constraints are usually depicted in a precedence diagram such as the one presented in Fig. 6, where each node represents a task and each arc represents a precedence relationship between a pair of tasks (e.g., in the diagram shown in Fig. 6, tasks 8 and 9 are predecessors of task 12). A task can only be assigned to a workstation if it has no predecessors or if all of its predecessors have already been assigned to that workstation or to preceding workstations (e.g., in the diagram shown in Fig. 6, task 12 can only be performed after tasks 8 and 9 are concluded); a minimal sketch of this assignability check is given after Fig. 6. In mixed-model assembly lines, the assembly processes of the models must be sufficiently similar to allow the combination of the precedence diagrams of each model into a combined precedence diagram, from which the precedence constraints for the mixed-model assembly line balancing problem are derived. One should note, however, that tasks can have different processing times for the different models and may not even be needed for some of them.
Figure 6. Example of a precedence diagram (25 tasks, numbered 1-25, connected by precedence arcs).
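As announced in Sec. 3.1.1, here is a minimal sketch of the assignability rule: a task can enter the current workstation only if all of its predecessors are already assigned to this or a preceding workstation. The only arcs taken from the text are that tasks 8 and 9 precede task 12; the rest is illustrative.

    # Combined precedence diagram fragment around task 12 (cf. Fig. 6).
    predecessors = {12: [8, 9]}

    def can_assign(task, already_assigned, predecessors):
        """A task is assignable once all of its predecessors are already assigned
        to the current or a preceding workstation."""
        return all(p in already_assigned for p in predecessors.get(task, []))

    print(can_assign(12, {8}, predecessors))      # False: task 9 not yet assigned
    print(can_assign(12, {8, 9}, predecessors))   # True
    print(can_assign(8, set(), predecessors))     # True here only because task 8's own
                                                  # predecessors are not listed in this fragment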
3.1.2. Zoning constraints

Zoning constraints can be positive or negative. Positive zoning constraints force the assignment of certain tasks to a specific workstation. In the proposed approach, the tasks that need to be allocated to the same workstation are merged and treated by the procedure as a single task. Negative zoning constraints forbid the assignment of certain tasks to the same workstation. In the proposed procedures, a task is not available to be assigned to a workstation if an incompatible task has already been assigned to that workstation.

3.1.3. Capacity constraints

The proposed approach is meant to deal with labor-intensive assembly lines. This type of line is usually staffed by low-skilled labor that can be easily trained, and so the number of tasks assigned to each worker (or workstation) needs to be kept to a minimum. This is an important issue that needs to be accounted for when parallel workstations are allowed, as is the case in the proposed approach. To picture this issue, we refer to the extreme case o